---
abstract: 'We find the following improved laboratory bounds on the coupling of light pseudoscalars to protons and neutrons: $g_p^2/4\pi < 1.7 \times 10^{-9}$ and $g_n^2/4\pi < 6.8 \times 10^{-8}$. The limit on $g_p$ arises since a nonzero $g_p$ would induce a coupling of the pseudoscalar to two photons, which is limited by experiments studying laser beam propagation in magnetic fields. Combining our bound on $g_p$ with a recent analysis of Fischbach and Krause on two-pseudoscalar exchange potentials and experiments testing the equivalence principle, we obtain our limit on $g_n$.'
address: |
Grup de Física Teòrica and Institut de Física d’Altes Energies\
Universitat Autònoma de Barcelona\
08193 Bellaterra, Barcelona, Spain
author:
- '[**E. Massó**]{} [^1]'
title: |
**Bounds on the Coupling\
of Light Pseudoscalars to Nucleons\
from Optical Laser Experiments[^2]**
---
The recent work of Fischbach and Krause in references [@FK1] and [@FK2] has reopened the issue of the laboratory constraints on the Yukawa couplings $g$ of a light pseudoscalar to fermions, defined through the Lagrangian density $${\cal L}_Y = i\,g\,\bar\psi(x)\,\gamma_5\,\psi(x)\,\phi(x)\;,$$ that couples the pseudoscalar field $\phi(x)$ to the fermion field $\psi(x)$.
It is well known that the exchange of a light $\phi$ leads to a spin-dependent long-range interaction among fermions. For fermions separated by a distance $r$, in the limit that the pseudoscalar mass $m \ll 1/r$, the potential is \[Vone\] $$V^{(2)}=\frac{g^2}{4\pi}\,\frac{S_{12}}{4M^2\,r^3}\;.$$ (We display the formula for the particular case of identical fermions of mass $M$.) The spin-dependent factor $S_{12}$ of the potential (\[Vone\]) reads $$S_{12} = 3\,(\vec\sigma_1\cdot\hat r)\,(\vec\sigma_2\cdot\hat r) - (\vec\sigma_1\cdot\vec\sigma_2)\;,$$ with $\vec \sigma_i/2$ ($i=1,2$) the spins of the two fermions. The laboratory experiments trying to constrain such a spin-dependent interaction lead to relatively poor bounds on the Yukawa couplings $g$ to fermions.
In [@FK1; @FK2] the authors have noticed that a significant improvement on the bounds on $g$ for nucleons can be obtained by considering the potential arising from two-pseudoscalar exchange, \[Vtwo\] $$V^{(4)}=-\,\frac{g^4}{64\pi^3}\,\frac{1}{M^2\,r^3}\;,$$ where again we took the $m \rightarrow 0$ limit and the particular case of two identical fermions.
When comparing the potentials (\[Vone\]) and (\[Vtwo\]) one may think that considering $V^{(4)}$ would lead to weaker bounds, since it has a $g^2/4\pi^2$ suppression relative to the potential $V^{(2)}$. However, $V^{(4)}$ is spin-independent and thus it is constrained by experimental searches for new macroscopic forces. Fischbach and Krause have shown that the second effect dominates over the first one when considering Yukawa couplings $g_p$ to protons and $g_n$ to neutrons. In [@FK1], these authors use data from experiments testing the equivalence principle [@EP]. In [@FK2], they use data from experiments testing the gravitational inverse square law [@IS]. From the combination of both types of limits they finally get [@FK2] $$\frac{g_p^2}{4\pi} < 1.6\times 10^{-7}\;,\qquad \frac{g_n^2}{4\pi} < 1.6\times 10^{-7}\;.$$
In the present short paper, we would like to show that there are further laboratory constraints on the Yukawa couplings $g_n$ and $g_p$. We consider the coupling of $\phi$ to two photons induced by the triangle diagram that we display in the figure. In the loop, the internal line is a proton, since it couples both to the pseudoscalar and to the photons. As expected, the evaluation of the triangle diagram leads to a gauge-invariant effective Lagrangian density of the form $${\cal L}_{\phi\gamma\gamma}= \frac{1}{8}\,f\,\epsilon_{\mu\nu\rho\sigma}\, F^{\mu\nu}(x)\, F^{\rho\sigma}(x)\, \phi(x)\;,$$ with $F^{\mu \nu}(x)$ the photon field strength. One gets [@anomaly] \[relationship\] $$f= \frac{\alpha}{\pi}\,\frac{g_p}{M_p}$$ ($M_p$ is the proton mass).
The $\phi\gamma\gamma$ coupling $f$ is suppressed by a factor $\alpha$ compared to the Yukawa coupling $g_p$ but, as shown below, the existing laboratory constraints on $f$ allow us to place a stringent bound on $g_p$.
Among all the laboratory limits on $f$ [@MT], the most restrictive ones come from the study of laser beam propagation through a transverse magnetic field. A light pseudoscalar coupled to two photons would induce effects such as optical rotation of the beam polarization, the appearance of ellipticity of the beam, and photon regeneration [@cameron]. The absence of these effects in the data of the experiment leads to the limit [@cameron] \[laser\] $$f < 3.6\times 10^{-7}\ {\rm GeV}^{-1}\;.$$ Using now the relationship in Eq. (\[relationship\]), the previous limit translates into a bound on the proton Yukawa coupling \[limit\_p\] $$\frac{g_p^2}{4\pi} < 1.7\times 10^{-9}\;.$$ The limit (\[laser\]) is valid for masses of the light pseudoscalar $m<10^{-3}$ eV. It follows that our bound (\[limit\_p\]) is also valid for this mass range. This corresponds to interaction ranges of the potentials (\[Vone\]) and (\[Vtwo\]) larger than about $0.02$ cm.
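As a numerical sanity check, the translation of the laser limit into the bound on $g_p$ can be reproduced in a few lines. The script below assumes the loop-induced relation to be $f=(\alpha/\pi)\,g_p/M_p$ (a reconstruction consistent with the quoted numbers; the values of $\alpha$ and $M_p$ are rounded):

```python
from math import pi

# Sketch: translate the laser bound on f into a bound on g_p^2/4pi.
# Assumption (reconstruction consistent with the quoted numbers):
# the triangle diagram gives f = (alpha/pi) * g_p / M_p.
alpha = 1 / 137.036   # fine-structure constant
M_p = 0.93827         # proton mass in GeV
f_max = 3.6e-7        # laser bound on f, in GeV^-1

g_p_max = f_max * pi * M_p / alpha
bound = g_p_max ** 2 / (4 * pi)
print(f"g_p^2/4pi < {bound:.2e}")  # about 1.7e-09
```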
We can now constrain the neutron Yukawa coupling by combining our bound (\[limit\_p\]) with the results from [@FK1]. As explained above, constraints on $g_n$ and $g_p$ can be placed by considering the $V^{(4)}$ potential. In reference [@FK1], the implications for the couplings $g_n$ and $g_p$ from the equivalence principle experiment [@EP] have been worked out in detail. The final result is a constraint on a combination of both Yukawa couplings [@FK1] \[gundlach\] $$(9.6\, g_p^2 + 15.3\, g_n^2)\,\left|0.05925\, g_p^2 - 0.05830\, g_n^2\right| < 6.4\times 10^{-13}\;.$$ Introducing (\[limit\_p\]) in (\[gundlach\]) we find the stringent bound \[limit\_n\] $$\frac{g_n^2}{4\pi} < 6.8\times 10^{-8}\;.$$
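The neutron bound follows by taking $g_p^2/4\pi$ at its largest allowed value $1.7\times10^{-9}$ and maximizing $g_n^2$ subject to the equivalence-principle constraint with the numerical coefficients quoted above; a short bisection sketch reproduces it:

```python
from math import pi

# Sketch: largest g_n^2 allowed by the constraint
#   (9.6 g_p^2 + 15.3 g_n^2) |0.05925 g_p^2 - 0.05830 g_n^2| < 6.4e-13
# when g_p^2/4pi is saturated at 1.7e-9.
gp2 = 4 * pi * 1.7e-9

def lhs(gn2):
    return (9.6 * gp2 + 15.3 * gn2) * abs(0.05925 * gp2 - 0.05830 * gn2)

lo, hi = 0.0, 1.0   # lhs(lo) is below 6.4e-13, lhs(hi) is above
for _ in range(200):
    mid = (lo + hi) / 2
    if lhs(mid) < 6.4e-13:
        lo = mid
    else:
        hi = mid
print(f"g_n^2/4pi < {lo / (4 * pi):.2e}")  # about 6.8e-08
```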
The ongoing experiment PVLAS [@PVLAS], which also studies laser beam propagation, is expected to improve the present limit (\[laser\]) on $f$. This will in turn improve our bounds (\[limit\_p\]) and (\[limit\_n\]).
In summary, we have shown first that the stringent bound in Eq.(\[limit\_p\]) on the coupling $g_p$ of the proton to a light pseudoscalar (with $m<10^{-3}$ eV) can be obtained by considering the induced coupling of the pseudoscalar to two photons which in turn is limited by laser propagation experiments. Second, we have combined our bound on $g_p$ with the results coming from data on equivalence principle experiments constraining the spin-independent potential due to two-pseudoscalar exchange. As a result, we are able to put the stringent bound in Eq. (\[limit\_n\]) on the neutron coupling $g_n$ to the light pseudoscalar.
Work partially supported by the CICYT Research Project AEN98-1116. We would like to thank Francesc Ferrer for helpful discussions.
[99]{}
E. Fischbach and D. E. Krause, Phys. Rev. Lett. [**82**]{} 4753 (1999).
E. Fischbach and D. E. Krause, hep-ph/9906240 (submitted to Physical Review Letters).
J. H. Gundlach, G. L. Smith, E. G. Adelberger, B. R. Heckel, and H. E. Swanson, Phys. Rev. Lett. [**78**]{} 4753 (1997).
R. Spero, J. K. Hoskins, R. Newman, J. Pellam, and J. Schultz, Phys. Rev. Lett. [**44**]{} 1645 (1980);\
J. K. Hoskins, R. Newman, R. Spero, and J. Schultz, Phys. Rev. [**D32**]{} 3084 (1985).
S. Adler, Phys. Rev. [**177**]{} 2426 (1969);\
J.S. Bell and R. Jackiw, Nuovo Cim. [**60A**]{} 47 (1969).
E. Massó and R. Toldrà, Phys. Rev. [**D52**]{} 1755 (1995).
R. Cameron [*et al.*]{}, Phys. Rev. [**D47**]{} 3707 (1993).
D. Bakalov [*et al.*]{}, Nucl. Phys. B (Proc. Suppl.) [**35**]{} 180 (1994).
[^1]: [email protected]
[^2]: To be published in Phys. Rev. D
---
author:
- Karim Belabas and Henri Cohen
title: |
Modular Forms in Pari/GP\
[*(Dedicated to Don Zagier for his 65th birthday.)*]{}
---
Introduction
============
Three packages exist which allow computations on classical modular forms: [Sage]{}, [Magma]{}, and [Pari/GP]{}, the latter being available since the spring of 2018. The first two packages are based on modular symbols, while the third is based on trace formulas. This difference is not so important (although the efficiency of certain computations can vary widely from one package to another), but at present the [Pari/GP]{} package is the only one which is routinely able to perform a number of computations on modular forms such as expansions at cusps, evaluation near the real axis, evaluation of $L$-functions of non-eigenforms, computation of general Petersson scalar products, etc.
The method used for these more advanced commands is based on the one hand on a theorem of Borisov–Gunnells [@Bor-Gun] [@Bor-Gun2], stating that with known exceptions (which can easily be circumvented) spaces of modular forms are generated by products of two Eisenstein series, and on the other hand on tedious computations on the expansions of these Eisenstein series. None of this is completely original, but it took us several months to obtain a satisfactory implementation. In addition, note that we do not need the Borisov–Gunnells *theorem* (in fact in the beginning we were not even aware of their work), since we can always check whether products of two Eisenstein series generate the desired spaces (which they sometimes do not in weight $2$, but this can be circumvented).
This paper is divided into three parts. In the first part (Sections \[sec:two\] to \[sec:six\]), we describe the theoretical tools used in the construction of modular form spaces. In the short second part (Section \[sec:seven\]), we give some implementation details. In the third somewhat lengthy part (Sections \[sec:eight\] and \[sec:nine\]) we give some sample commands and results obtained using the package, with emphasis on the advanced commands not available elsewhere.
[**Acknowledgments.**]{} We would like to thank B. Allombert, J. Bober, A. Booker, M. Lee, and B. Perrin-Riou for very helpful discussions and help in algorithms and programming, F. Brunault and M. Neururer for Theorem \[thmfga\], as well as K. Khuri-Makdisi, N. Mascot, N. Billerey and E. Royer.
Last but not least, we thank Don Zagier for his continuous input on this package and on [Pari/GP]{} in general. In addition, note that a much smaller program written 30 years ago by Don, N. Skoruppa, and the second author can be considered as an ancestor to the present package.
Construction of Spaces of Integral Weight $k\ge2$ {#sec:two}
=================================================
Introduction
------------
We decided from the start to restrict to spaces of classical modular forms, and more precisely to the usual spaces $M_k({\Gamma}_0(N),\chi)$ where $\chi$ is a Dirichlet character modulo $N$ of suitable parity, including $k=1$ and $k$ half-integral. In addition to this *full* modular form space, we also want to construct the space of cusp forms $S_k({\Gamma}_0(N),\chi)$, the space of Eisenstein series ${{\mathcal E}}_k({\Gamma}_0(N),\chi)$, (so that $M_k({\Gamma}_0(N),\chi)={{\mathcal E}}_k({\Gamma}_0(N),\chi)\oplus S_k({\Gamma}_0(N),\chi)$), the space of newforms $S_k^{{\mathrm{new}}}({\Gamma}_0(N),\chi)$, and the space of oldforms $S_k^{{\mathrm{old}}}({\Gamma}_0(N),\chi)$ (so that $S_k({\Gamma}_0(N),\chi)=S_k^{{\mathrm{old}}}({\Gamma}_0(N),\chi)\oplus S_k^{{\mathrm{new}}}({\Gamma}_0(N),\chi)$).
Other finite index subgroups of $\operatorname{SL}_2({\mathbb{Z}})$ could be considered, as well as other subspaces of $M_k({\Gamma}_0(N),\chi)$, such as Skoruppa–Zagier’s *certain space* [@Sko-Zag], but to limit the amount of work we have restricted ourselves to the above. In this section, we assume that $k \geq 2$ is an integer and defer half-integral weights and weight $1$ to later sections.
Construction of ${{\mathcal E}}_k({\Gamma}_0(N),\chi)$
------------------------------------------------------
The construction of the space of Eisenstein series is easy, and based on a theorem apparently first published by J. Weisinger in 1977 [@Weis]. Recall that if $\chi_1$ and $\chi_2$ are two primitive characters modulo $N_1$ and $N_2$ respectively, we define for $k>2$ the Eisenstein series $$G_k(\chi_1,\chi_2;\tau)={\sideset{}{'}\sum}_{N_1\mid c,\ d}\dfrac{{\overline}{\chi_1(d)}\chi_2(c/N_1)}{(c\tau+d)^k}\;,$$ and if $k=1$ or $k=2$ one defines $G_k$ by analytic continuation to $s=0$ of the corresponding series where $(c\tau+d)^k$ is multiplied by $|c\tau+d|^{2s}$ (“Hecke’s trick”). Then $G_k(\chi_1,\chi_2)$ belongs to $M_k({\Gamma}_0(N_1N_2),\chi_1\chi_2)$, except when $k=2$ and $\chi_1$ and $\chi_2$ are trivial characters, in which case there is a nonanalytic term in $1/\Im(\tau)$.
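Up to an overall normalization (and the constant term, which we omit), the expansion of $G_k(\chi_1,\chi_2)$ at infinity has generalized divisor-sum coefficients. A hypothetical sketch, in one common convention (references differ on normalization and on which character is conjugated); characters are passed as plain Python functions:

```python
# Sketch, assuming the normalization
#   a_n = sum over d | n of chi1(n/d) * chi2(d) * d^(k-1)
# for the nonconstant Fourier coefficients of G_k(chi1, chi2).
# The constant term and the overall scaling are not reproduced here.

def eisenstein_coeffs(chi1, chi2, k, nmax):
    a = [0] * (nmax + 1)
    for n in range(1, nmax + 1):
        a[n] = sum(chi1(n // d) * chi2(d) * d ** (k - 1)
                   for d in range(1, n + 1) if n % d == 0)
    return a

triv = lambda n: 1  # the trivial character modulo 1

# With chi1 = chi2 = 1 this reduces to the classical divisor sum
# sigma_{k-1}(n) appearing in E_k.
print(eisenstein_coeffs(triv, triv, 4, 6)[1:])  # [1, 9, 28, 73, 126, 252]
```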
We introduce the following useful notation: if $\chi$ is any Dirichlet character, we denote by $\chi_f$ the primitive character equivalent to $\chi$, and we denote by $1$ the trivial character modulo $1$. Weisinger’s theorem (slightly corrected for $k=2$) is as follows:
1. For $k\ge3$ or for $k=2$ and $\chi$ a nontrivial character, a basis of the space ${{\mathcal E}}_k({\Gamma}_0(N),\chi)$ of Eisenstein series is given by the $G_k(\chi_1,\chi_2;m\tau)$, where $(\chi_1,\chi_2)$ ranges over pairs of primitive characters as above such that $(\chi_1\chi_2)_f=\chi_f$ and $N_1N_2\mid N$, and $m$ ranges over all divisors of $N/(N_1N_2)$.
2. For $k=2$ and $\chi$ a trivial character, a basis of ${{\mathcal E}}_2({\Gamma}_0(N))$ is given by the same functions as in (1) except that if $(\chi_1,\chi_2)=(1,1)$ we replace $G_2(\chi_1,\chi_2;m\tau)$ by $G_2(\chi_1,\chi_2;m\tau)-G_2(\chi_1,\chi_2;\tau)/m$ and exclude $m=1$.
3. For $k=1$, a basis of ${{\mathcal E}}_1({\Gamma}_0(N),\chi)$ is given by the same functions as in (1), except that we restrict to $\chi_1$ being an even character.
(Note that since in weight $1$ (and only in weight $1$) the characters $\chi_1$ and $\chi_2$ play a symmetrical role, we could instead restrict to $\chi_2$ being an even character.)
Thanks to this theorem it is immediate to construct a basis of ${{\mathcal E}}_k({\Gamma}_0(N),\chi)$. However, this is not the whole story. Indeed, one can easily compute the Fourier expansion at infinity of $G_k(\chi_1,\chi_2;\tau)$, and (after suitable normalization) the coefficients belong to the large cyclotomic field ${\mathbb{Q}}({\zeta}_{o_1},{\zeta}_{o_2})$, where $o_i$ denotes the order of the character $\chi_i$ and ${\zeta}_n$ denotes a primitive $n$th root of unity. It is in fact possible to obtain a basis whose Fourier coefficients are in the smaller cyclotomic field ${\mathbb{Q}}({\zeta}_o)$, where $o$ is the order of $\chi$. For this, we introduce the following notation:
1. $\operatorname{Tr}_{1,2}$ will denote the *trace map* from ${\mathbb{Q}}({\zeta}_{o_1},{\zeta}_{o_2})$ to ${\mathbb{Q}}({\zeta}_o)$.
2. We will say that two primitive characters $\chi$ and $\chi'$ modulo $N$ are *equivalent* and write $\chi\sim\chi'$ if there exists $j$ coprime to the order of $\chi$ such that $\chi'=\chi^j$.
Let $d_{1,2}=[{\mathbb{Q}}({\zeta}_{o_1},{\zeta}_{o_2}):{\mathbb{Q}}({\zeta}_o)]$ be the degree of the field extension and let ${\alpha}_{1,2}$ be such that ${\mathbb{Q}}({\zeta}_{o_1},{\zeta}_{o_2})={\mathbb{Q}}({\zeta}_o)({\alpha}_{1,2})$. A basis for the space ${{\mathcal E}}_k({\Gamma}_0(N),\chi)$ is given by the $\operatorname{Tr}_{1,2}({\alpha}_{1,2}^jG_k(\chi_1,\chi_2;m\tau))$ where $0\le j<d_{1,2}$ and $(\chi_1,\chi_2,m)$ are as in the previous theorem (with the suitable modification when $k=2$) except that $\chi_1$ is only chosen up to equivalence.
Thus we indeed obtain a basis of the Eisenstein space whose Fourier expansions have coefficients in the smaller field ${\mathbb{Q}}({\zeta}_o)$, for instance in ${\mathbb{Q}}$ if $\chi$ is trivial or a quadratic character.
Construction of $S_k^{{\mathrm{new}}}({\Gamma}_0(N),\chi)$ and of $S_k({\Gamma}_0(N),\chi)$ when $k\geq 2$
----------------------------------------------------------------------------------------------------------
Here, the [Pari/GP]{} package differs from the others (note that we do not claim that this is a better choice). First recall the *Eichler–Selberg trace formula* on ${\Gamma}_0(N)$. For every $n$ including those not coprime to $N$ one defines a Hecke operator $T(n)$ on $M_k({\Gamma}_0(N),\chi)$ by the formula $$T(n)(f)(\tau)=\dfrac{1}{n}\sum_{\substack{ad=n\\\gcd(a,N)=1}}\chi(a)a^k\sum_{b\bmod d}f\left(\dfrac{a\tau+b}{d}\right)\;.$$ Because of the condition $\gcd(a,N)=1$ (which is irrelevant if $\gcd(n,N)=1$) it is important to note that when $\gcd(n,N)>1$ the operator $T(n)$ *depends on the level* $N$ of the underlying space, so should more properly be denoted $T_N(n)$. Equivalently, if we always consider $\chi$ as a Dirichlet character modulo $N$, so such that $\chi(a)=0$ when $\gcd(a,N)>1$, we can omit the condition $\gcd(a,N)=1$.
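Expanding the inner sum over $b$, the formula above gives the standard action on Fourier coefficients: the coefficient of $q^m$ in $T(n)f$ is $\sum_{a\mid\gcd(n,m),\,\gcd(a,N)=1}\chi(a)a^{k-1}a_f(nm/a^2)$. A self-contained sketch, checked on $\Delta$ in level $1$ and weight $12$, which is an eigenform with $T(2)$ eigenvalue $\tau(2)=-24$:

```python
from math import gcd

def delta_coeffs(nmax):
    # Delta = q * prod_{j>=1} (1 - q^j)^24, truncated at q^nmax
    P = [0] * (nmax + 1)
    P[0] = 1
    for j in range(1, nmax + 1):
        for _ in range(24):
            for i in range(nmax, j - 1, -1):   # multiply by (1 - q^j)
                P[i] -= P[i - j]
    return [0] + P[:nmax]                      # shift by one power of q

def hecke_coeffs(a, n, k, N, chi, mmax):
    # coefficient of q^m in T(n) f, from the truncated expansion a of f;
    # needs a[] known up to n * mmax
    c = [0] * (mmax + 1)
    for m in range(1, mmax + 1):
        for d in range(1, min(n, m) + 1):
            if gcd(m, n) % d == 0 and gcd(d, N) == 1:
                c[m] += chi(d) * d ** (k - 1) * a[n * m // (d * d)]
    return c

a = delta_coeffs(20)
c = hecke_coeffs(a, 2, 12, 1, lambda x: 1, 10)
print(all(c[m] == -24 * a[m] for m in range(1, 11)))  # True
```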
An important formula, due to Selberg and Eichler, gives the *trace* of $T(n)$ on $S_k({\Gamma}_0(N),\chi)$:
\[thmtrace\] Let $\chi$ be a Dirichlet character modulo $N$ and let $k\ge2$ be an integer such that $\chi(-1)=(-1)^k$. For all $n\ge1$, including those not coprime to $N$, we have $$\operatorname{Tr}_{S_k({\Gamma}_0(N),\chi)}(T(n))=A_1-A_2-A_3+A_4\;,$$ where the different contributions $A_i$ are as follows:
$$A_1=n^{k/2-1}\chi(\sqrt{n})\dfrac{k-1}{12}N\prod_{p\mid N}\left(1+\dfrac{1}{p}\right)\;,$$ where it is understood that $\chi(\sqrt{n})=0$ if $n$ is not a square (including when $\chi$ is a trivial character). $$A_2=\kern-5pt\sum_{\substack{t\in{\mathbb{Z}}\\t^2-4n<0}}\dfrac{\rho^{k-1}-{\overline}{\rho}^{k-1}}{\rho-{\overline}{\rho}}\sum_{f^2\mid (t^2-4n)}\dfrac{h((t^2-4n)/f^2)}{w((t^2-4n)/f^2)}\mu(t,\gcd(N,f),n)\;,$$ with $$\mu(t,g,n)=g\prod_{\substack{p\mid N\\p\nmid N/g}}\left(1+\dfrac{1}{p}\right)\sum_{\substack{x\bmod N\\x^2-tx+n\equiv0\pmod{Ng}}}\chi(x)\;,$$ where $\rho$ and ${\overline}{\rho}$ are the roots of the polynomial $X^2-tX+n$, in other words, $\rho+{\overline}{\rho}=t$ and $\rho{\overline}{\rho}=n$, and for $d<0$, $h(d)$ and $w(d)$ are the class number and number of roots of unity of the quadratic order of discriminant $d$.
$$A_3={\sideset{}{'}\sum}_{\substack{d\mid n\\d\le n^{1/2}}}d^{k-1}\kern-10pt\sum_{\substack{c\mid N\\\gcd(c,N/c)\mid\gcd(N/{{\mathfrak f}}(\chi),n/d-d)}}\phi(\gcd(c,N/c))\chi(x_1)\;,$$ where:
- ${\sideset{}{'}\sum}$ means that the term $d=n^{1/2}$, if present, must be counted with coefficient $1/2$,
- ${{\mathfrak f}}(\chi)$ is the conductor of $\chi$,
- $x_1$ is defined modulo $\operatorname{lcm}(c,N/c)=N/\gcd(c,N/c)$ by the Chinese remainder congruences $x_1\equiv d\pmod{c}$ and $x_1\equiv n/d\pmod{N/c}$,
- $\phi$ is Euler’s totient function.
$A_4=0$ if either $k>2$ or if $k=2$ and $\chi$ is not the trivial character, and otherwise, if $k=2$ and $\chi$ is trivial then $$A_4=\sum_{\substack{t\mid n\\\gcd(n/t,N)=1}}t\;.$$
We emphasize that in all the above formulas $\chi(x)=0$ if $\gcd(x,N)>1$, i.e., $\chi$ is always considered as a character modulo $N$.
From this, a nontrivial application of the Möbius inversion formula (explained to us by J. Bober, A. Booker, and M. Lee) allows us to compute the trace on the new space $S_k^{{\mathrm{new}}}({\Gamma}_0(N),\chi)$. To simplify notation, denote by $\operatorname{Tr}(N,n)$ (resp., $\operatorname{Tr}^{{\mathrm{new}}}(N,n)$) the trace of $T_N(n)$ on $S_k({\Gamma}_0(N),\chi)$ (resp., $S_k^{{\mathrm{new}}}({\Gamma}_0(N),\chi)$). We introduce the following definitions:
1. We define the multiplicative arithmetic function ${\beta}(n)$ on prime powers by ${\beta}(p)=-2$, ${\beta}(p^2)=1$, and ${\beta}(p^a)=0$ for $a\ge3$.
2. For $m\ge1$ we define the multiplicative arithmetic functions ${\beta}_m(n)$ on prime powers by ${\beta}_m(p^a)={\beta}(p^a)$ if $p\nmid m$ and ${\beta}_m(p^a)=\mu(p^a)$ if $p\mid m$, where $\mu$ is the usual M[ö]{}bius function.
3. An integer $N$ is said to be *squarefull* if for all primes $p\mid N$ we have $p^2\mid N$.
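The multiplicative functions ${\beta}$ and ${\beta}_m$ just defined are easy to sketch directly from their values on prime powers:

```python
# Sketch of the multiplicative functions beta(n) and beta_m(n) defined
# above: beta(p) = -2, beta(p^2) = 1, beta(p^a) = 0 for a >= 3, and
# beta_m agrees with the Moebius function mu at primes dividing m.

def factor(n):
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def beta_p(p, a):                 # beta on the prime power p^a
    return {1: -2, 2: 1}.get(a, 0)

def mu_p(p, a):                   # Moebius function on p^a
    return -1 if a == 1 else 0

def beta(n, m=None):
    # beta(n) when m is None, beta_m(n) otherwise (by multiplicativity)
    r = 1
    for p, a in factor(n).items():
        r *= mu_p(p, a) if (m is not None and m % p == 0) else beta_p(p, a)
    return r

print(beta(12))     # beta(4) * beta(3) = 1 * (-2) = -2
print(beta(12, 2))  # mu(4) * beta(3) = 0
```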
Let $\chi_N$ be a Dirichlet character modulo $N$ of conductor ${{\mathfrak f}}\mid N$, and $k\ge 2$ be an integer such that $\chi_N(-1)=(-1)^k$. Denote as above by $\chi_{{{\mathfrak f}}}$ the primitive character modulo ${{\mathfrak f}}$ equivalent to $\chi_N$. Finally, write $N=N_1N_2$ with $\gcd(N_1,N_2)=1$, $N_1$ squarefree and $N_2$ squarefull. We have $$\operatorname{Tr}^{{\mathrm{new}}}(N,n)=\sum_{{{\mathfrak f}}\mid M\mid N}\kern5pt\sum_{\substack{d\mid\gcd(M/{{\mathfrak f}},N_1)\\d^2\mid n}}\chi_{{{\mathfrak f}}}(d)d^{k-1}{\beta}_{n/d^2}(N/M)\operatorname{Tr}(M/d,n/d^2).$$
The point of this theorem is the following: set $${{\mathcal T}}^{{\mathrm{new}}}(N)=\sum_{n\ge1}\operatorname{Tr}^{{\mathrm{new}}}(N,n)q^n\;.$$ Then ${{\mathcal T}}^{{\mathrm{new}}}(N)$ is equal to the sum of the normalized eigenforms in $S_k^{{\mathrm{new}}}({\Gamma}_0(N),\chi)$, hence a simple argument shows that the $T(n){{\mathcal T}}^{{\mathrm{new}}}(N)$ generate $S_k^{{\mathrm{new}}}({\Gamma}_0(N),\chi)$, so we simply construct these forms until the dimension of the space they generate is equal to the dimension of the full new space (equal to $\operatorname{Tr}^{{\mathrm{new}}}(N,1)$).
Once we have obtained a basis for the space $S_k^{{\mathrm{new}}}({\Gamma}_0(N),\chi)$, it is immediate to obtain a basis of $S_k({\Gamma}_0(N),\chi)$ thanks to the relation $$S_k({\Gamma}_0(N),\chi)=\bigoplus_{{{\mathfrak f}}\mid M\mid N}\bigoplus_{d\mid N/M}B(d)S_k^{{\mathrm{new}}}({\Gamma}_0(M),\chi_{{{\mathfrak f}}})\;,$$ where $B(d)$ is the usual expanding operator $\tau\mapsto d\tau$.
The old space is given by the same formula but restricting to $M<N$: $$S_k^{{\mathrm{old}}}({\Gamma}_0(N),\chi)=\bigoplus_{\substack{{{\mathfrak f}}\mid M\mid N\\M<N}}\bigoplus_{d\mid N/M}B(d)S_k^{{\mathrm{new}}}({\Gamma}_0(M),\chi_{{{\mathfrak f}}})\;.$$
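On $q$-expansions the expanding operator $B(d)$ simply sends $\sum_n a_nq^n$ to $\sum_n a_nq^{dn}$; a one-function sketch on truncated coefficient lists:

```python
# Sketch of B(d) on a truncated q-expansion: coefficient a_n moves to
# position d*n (coefficients beyond the truncation bound are dropped).
def expand_B(a, d, nmax):
    b = [0] * (nmax + 1)
    for n, an in enumerate(a):
        if d * n <= nmax:
            b[d * n] = an
    return b

# applied to the first coefficients of Delta = q - 24q^2 + 252q^3 - ...
print(expand_B([0, 1, -24, 252], 2, 6))  # [0, 0, 1, 0, -24, 0, 252]
```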
Note that one could think of using directly the trace formula on the full cuspidal space (Theorem \[thmtrace\]), but experiment and complexity analysis both show that, in addition to being much less canonical, it would also be less efficient. (Since on the one hand the $A_i$ to be computed are eventually the same, and on the other hand linear algebra’s cost is superlinear in the dimension, it is more costly to work in a direct sum than in each subspace independently.)
Construction of Modular Forms of Half-Integral Weight {#sec:three}
=====================================================
Recall that modular form spaces of half-integral weight $M_k({\Gamma}_0(N),\chi)$ with $k\in1/2+{\mathbb{Z}}$ are defined only when $4\mid N$ and $\chi$ is an even character. In weight $1/2$ a beautiful theorem of Serre–Stark asserts that the space $M_{1/2}({\Gamma}_0(N),\chi)$ is spanned by unary theta series, and the theorem also specifies the cuspidal subspace $S_{1/2}({\Gamma}_0(N),\chi)$. Thus we consider the construction of the spaces $S_k({\Gamma}_0(N),\chi)$ and $M_k({\Gamma}_0(N),\chi)$ when $4\mid N$, $k\ge3/2$ is a half-integer, and $\chi$ is an even character.
Recall the standard theta series $${\theta}(\tau)=\sum_{n\in{\mathbb{Z}}}q^{n^2}=1+2\sum_{n\ge1}q^{n^2}\in M_{1/2}({\Gamma}_0(4))\;.$$ Because of the well-known product expansion $${\theta}(\tau)=\prod_{n\ge1}(1-q^{2n})(1+q^{2n-1})^2$$ it is clear that ${\theta}$ does not vanish on the upper half-plane ${\mathfrak H}$, and it does not vanish at the cusps $i\infty$ and $0$ of ${\Gamma}_0(4)$. On the other hand, since the cusp $1/2$ is *irregular*, ${\theta}$ necessarily vanishes at the cusp $1/2$. Applying elements of ${\Gamma}_0(4)$, we see that ${\theta}(\tau)=0$ if and only if $\tau$ is a cusp of the form $a/b$ with $\gcd(a,b)=1$ and $b\equiv2\pmod4$.
It follows that ${\theta}(2\tau)=0$ if and only if $\tau$ is a cusp of the form $a/b$ with $b\equiv4\pmod8$. In particular we see the essential fact that ${\theta}(\tau)$ and ${\theta}_2(\tau)={\theta}(2\tau)$ have *no common zeros* in the completed upper half-plane (we say that they are *coprime forms*).
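The product expansion of ${\theta}$ quoted above can be verified numerically to any order by comparing truncated power series:

```python
# Check, as truncated power series in q, that
#   sum_{n in Z} q^{n^2}  =  prod_{k>=1} (1 - q^{2k}) (1 + q^{2k-1})^2.
N = 30

# sum side: 1 + 2 * sum_{n>=1} q^{n^2}
S = [0] * (N + 1)
S[0] = 1
n = 1
while n * n <= N:
    S[n * n] = 2
    n += 1

def mul_by(P, e, c):
    # multiply P in place by (1 + c * q^e), truncating at q^N
    for i in range(N, e - 1, -1):
        P[i] += c * P[i - e]

# product side
P = [0] * (N + 1)
P[0] = 1
for k in range(1, N + 1):
    mul_by(P, 2 * k, -1)
    mul_by(P, 2 * k - 1, 1)
    mul_by(P, 2 * k - 1, 1)

print(S == P)  # True
```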
This allows us to construct the desired modular form spaces of half-integral weight as follows. Let $k\in{\mathbb{Z}}+1/2$ and say we want to construct $M_k({\Gamma}_0(N),\chi)$. If $f\in M_k({\Gamma}_0(N),\chi)$ then $f{\theta}\in M_{k+1/2}({\Gamma}_0(N),\chi')$, where $\chi'=\chi$ if $k+1/2\equiv0\pmod2$ otherwise $\chi'=\chi\chi_{-4}$, where $\chi_{-4}(n)={\mbox{$\left(\dfrac{-4}{n}\right)$}}$, and similarly $$f{\theta}_2\in M_{k+1/2}({\Gamma}_0(N'),\chi')\supset M_{k+1/2}({\Gamma}_0(N),\chi')\;,$$ where $N'=N$ if $8\mid N$, and $N'=2N$ otherwise. By the preceding section we know how to construct a basis $B$ of $M_{k+1/2}({\Gamma}_0(N'),\chi')$.
Now the forms $g_1=f{\theta}$ and $g_2=f{\theta}_2$ which are both in that space satisfy $g_1{\theta}_2=g_2{\theta}$. This equality can be solved by simple linear algebra on the basis $B$, and once a basis of $(g_1,g_2)$ is found one recovers $f$ as the quotient $g_1/{\theta}$ (or $g_2/{\theta}_2$). Since the level $N'$ is at most twice the initial level $N$, this gives an efficient method for computing $M_k({\Gamma}_0(N),\chi)$. To compute the cuspidal space $S_k({\Gamma}_0(N),\chi)$, simply replace all the $M_{k+1/2}$ by $S_{k+1/2}$.
Note that we have a check on the correctness of the result by using a theorem of Oesterlé and the second author [@Coh-Oes] which gives the dimensions of $M_k({\Gamma}_0(N),\chi)$ and $S_k({\Gamma}_0(N),\chi)$ when $k\in1/2+{\mathbb{Z}}$.
The reader has certainly noticed that we do not speak of the old/new space, nor of the Eisenstein space. The construction of the old/new space is better performed in the so-called *Kohnen $+$-space*, and is implemented in the package, but we do not explain the details here.
It should be possible to construct the Eisenstein space explicitly in a manner analogous to Weisinger’s theorem, but as far as the authors are aware this has been done only when $N/4$ is squarefree.
Construction of Modular Forms of Weight $1$ {#sec:four}
===========================================
Although in principle it is algorithmically just as simple to construct modular forms of weight $1$ as modular forms of half-integral weight, it is more difficult to do it efficiently.
A first method which comes to mind is again to use two coprime forms. In fact, we can again use ${\theta}$ and ${\theta}_2$ as in the previous section, but to stay in the realm of integral weight forms it is preferable to use the weight $1$ coprime forms ${\theta}^2$ and ${\theta}_2^2$. This works in exactly the same way as for half-integral weight, but the main efficiency loss is due to the level: since ${\theta}_2^2\in M_1({\Gamma}_0(8),\chi_{-4})$, the level of $f{\theta}_2^2$ will be $N' = \operatorname{lcm}(8,N)$. In the half-integral case we always had $4\mid N$, so $N'$ was at most $2N$, but here if for instance $N$ is odd, $N'=8N$, so we are required to work in a space of weight $2$ forms and level $8$ times larger, which is prohibitive since the complexity is at least proportional to the cost of linear algebra in dimension $N'$.
We can search for other coprime forms. For instance J. Bober (personal communication) suggests using two specific Eisenstein series of weight $1$ and levels $3$ and $4$ respectively. We would then need to work in level $\operatorname{lcm}(12,N)$, which can be lower than $\operatorname{lcm}(8,N)$, for instance when $3\mid N$.
However, to our knowledge the most efficient method to construct spaces of modular forms of weight $1$, and the one which is implemented in the [Pari/GP]{} package, is the use of Schaeffer’s *Hecke stability* theorem. This theorem essentially states the following: if $V$ is a *finite-dimensional* vector space of meromorphic modular functions over ${\Gamma}_0(N)$ with character $\chi$, and if $V$ is stable by any single Hecke operator $T(n)$ with $n$ coprime to $N$, then $V$ is in fact a space of holomorphic modular forms.
Since Eisenstein series of weight $1$ are just as explicit as in higher weight, it is sufficient to construct the cuspidal space $S_1({\Gamma}_0(N),\chi)$, and to do so we proceed as follows. Let ${{\mathcal E}}_1({\Gamma}_0(N),{\overline}{\chi})$ be the space of Eisenstein series of weight $1$ and conjugate character. Note that if $f\in S_1({\Gamma}_0(N),\chi)$ and $E\in{{\mathcal E}}_1({\Gamma}_0(N),{\overline}{\chi})$ then $fE\in S_2({\Gamma}_0(N))$. Consider the (finite-dimensional) space $$W=\bigcap_{E\in B_1({\Gamma}_0(N),{\overline}{\chi})}\dfrac{S_2({\Gamma}_0(N))}{E}\;,$$ where $B_1({\Gamma}_0(N),{\overline}{\chi})$ is a basis of ${{\mathcal E}}_1({\Gamma}_0(N),{\overline}{\chi})$. It is clear that $S_1({\Gamma}_0(N),\chi)\subset W$, and sometimes $W$ is $0$ (for instance if $N\le22$), in which case we are done. In general this is not the case, so we apply Schaeffer’s theorem. It is easy to show that the *maximal* stable subspace of $W$ under the action of $T(n)$ (for some fixed $n$ coprime to $N$) is exactly equal to the desired space $S_1({\Gamma}_0(N),\chi)$.
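Computing the maximal $T(n)$-stable subspace is plain linear algebra: iterate $V \leftarrow \{v\in V : Tv\in V\}$ until the dimension stabilizes. Below is a generic exact-arithmetic sketch over ${\mathbb{Q}}$ (purely illustrative; the package's actual implementation is not reproduced here):

```python
from fractions import Fraction as F

def rref(M, ncols):
    # reduced row echelon form over Q; returns (rows, pivot columns)
    M = [[F(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [y - M[i][c] * z for y, z in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M[:r], pivots

def nullspace(M, ncols):
    # basis of {v : M v = 0}, read off from the rref
    R, piv = rref(M, ncols)
    basis = []
    for fc in (c for c in range(ncols) if c not in piv):
        v = [F(0)] * ncols
        v[fc] = F(1)
        for i, c in enumerate(piv):
            v[c] = -R[i][fc]
        basis.append(v)
    return basis

def stable_subspace(T, W):
    # largest subspace V of span(W) with T(V) contained in V;
    # T is an n x n matrix, W a list of basis vectors (rows)
    n = len(T)
    while W:
        A = nullspace(W, n)   # linear forms cutting out span(W)
        TW = [[sum(T[i][j] * w[j] for j in range(n)) for i in range(n)]
              for w in W]
        C = [[sum(a[i] * tw[i] for i in range(n)) for tw in TW] for a in A]
        X = nullspace(C, len(W))
        newW = [[sum(x[j] * W[j][i] for j in range(len(W)))
                 for i in range(n)] for x in X]
        if len(newW) == len(W):
            return W
        W = newW
    return []

# toy check: T fixes e1, sends e2 to e3, kills e3; inside span(e1, e2)
# the maximal stable subspace is span(e1)
T = [[1, 0, 0], [0, 0, 0], [0, 1, 0]]
V = stable_subspace(T, [[1, 0, 0], [0, 1, 0]])
print(len(V))  # 1
```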
All the above operations (intersections of spaces and finding maximal stable subspaces) are elementary linear algebra, but can be extremely expensive in particular when the values of the character $\chi$ lie in a large cyclotomic field. Even when $\chi$ has small order, the computations suffer from intermediate coefficient explosion whereas we expect the final result to have tiny dimension. We thus use modular algorithms and perform the computations in various *finite fields* before lifting the final result.
Note that in the actual implementation we first look for *dihedral forms*, i.e., forms coming from Hecke Grössencharacters of quadratic fields. Once these forms are computed, the orders of the possible characters for the so-called *exotic* forms are much more limited.
Elementary Computations on Modular Forms {#sec:elem}
========================================
\[sec:five\]
Note that since the construction of our modular form spaces ultimately boils down to the computation of the *trace forms* ${{\mathcal T}}^{{\mathrm{new}}}$, modular forms are always implicitly given by their Fourier expansion at infinity, which can be unfortunate for some applications.
Nonetheless, a large number of standard operations can be done on modular forms represented in this way: first, elementary arithmetic operations such as products, quotients, linear combinations, derivatives, etc., and second, specifically modular operations such as the action of Hecke operators, of the expanding operator $B(d)$, Rankin–Cohen brackets, twisting, etc.
Several limitations immediately come to mind: the action of the Atkin–Lehner operators can be described explicitly only when the level is squarefree. One can *evaluate* numerically a modular form by summing its $q$-expansion, but only if $|q|$ is not too close to $1$, i.e., if $\Im(\tau)$ is not too small (in small levels one can use the action of ${\Gamma}_0(N)$ to increase $\Im(\tau)$). One can compute the Fourier expansion at other cusps than infinity, but only if there exists an Atkin–Lehner involution sending infinity to that cusp, which will not always be the case in nonsquarefree level. We will see below how these limitations can be lifted by the use of the Borisov–Gunnells theorem.
An important operation is *splitting*: once the new space $S_k^{{\mathrm{new}}}({\Gamma}_0(N),\chi)$ has been constructed, we want to compute the basis of normalized Hecke eigenforms. This is done by simple linear algebra after factoring the characteristic polynomials of a sufficient number of elements of the Hecke algebra. Note that it is in general sufficient to use the $T(p)$ themselves, but it may happen that one needs more complicated elements. For instance, to split the space $S_2^{{\mathrm{new}}}({\Gamma}_0(512))$, no amount of $T(n)$ will be sufficient (the characteristic polynomials will always have square factors), but one needs to use in addition for instance the operator $T(3)+T(5)$.
Among the other elementary computations, note that modular forms can arise naturally from several different sources: forms associated to elliptic curves defined over ${\mathbb{Q}}$, forms coming from theta functions of lattices (possibly with a spherical polynomial), eta quotients, or forms coming from natural $L$-functions whose gamma factor is ${\Gamma}_{{\mathbb{C}}}(s)=2(2\pi)^{-s}{\Gamma}(s)$.
Advanced Computations on Modular Forms {#sec:six}
======================================
We now come to the more advanced functions of the package, which are up to now not available elsewhere. In view of the limitations above, the basic stumbling block is the computation of the Fourier expansion of a modular form at cusps other than $i\infty$.
For $f\in M_k({\Gamma}_0(N),\chi)$, we want more generally to be able to compute the Fourier expansion of $f|_k{\gamma}$ for any ${\gamma}\in{\Gamma}$ (in fact it is trivial to generalize the construction to any ${\gamma}\in M_2^+({\mathbb{Q}})$, and this is done in the package but will not be explained here). We can assume that $k$ is an integer: indeed the expansion of $\theta|_k{\gamma}$ is known and we can compute the expansion of $f\theta$ when $f$ has half-integral weight. We recall that if ${\gamma}={\left(\begin{smallmatrix}{a}&{b}\\{c}&{d}\end{smallmatrix}\right)}\in{\Gamma}$ then $$f|_k{\gamma}(\tau)=(c\tau+d)^{-k}f\left(\dfrac{a\tau+b}{c\tau+d}\right)\;.$$ First, what precisely do we mean by “Fourier expansion”? It is easy to show that there exists an integer $w\le N$ and a rational number ${\alpha}\in[0,1[\cap{\mathbb{Q}}$ such that[^1] $$f|_k{\gamma}(\tau)=q^{{\alpha}}\sum_{n\ge0}a_{{\gamma}}(n)q^{n/w}\;,$$ where we always use the convention that $q^x=e^{2\pi ix\tau}$ for $x\in{\mathbb{Q}}$. More precisely one can always choose $w=N/\gcd(N,c^2)$, the *width* of the cusp $a/c={\gamma}(i\infty)$, and ${\alpha}$ is the unique number in $[0,1[$ such that $\chi(1+acw)=e^{2\pi i{\alpha}}$. Note in passing that the denominator of ${\alpha}$ divides $\gcd(N,c^2)/\gcd(N,c)$, and that, by definition, ${\alpha}$ is nonzero if and only if the cusp $a/c$ is so-called *irregular* for the space $M_k({\Gamma}_0(N),\chi)$.
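As a sanity check, the width formula $w=N/\gcd(N,c^2)$ is easy to tabulate. The following minimal Python sketch (the function name is ours, not part of the package) reproduces the widths $8$ and $9$ that appear in the [mfslashexpansion]{} examples of Section \[sec:nine\] for levels $32$ and $36$ with ${\gamma}={\left(\begin{smallmatrix}{1}&{0}\\{2}&{1}\end{smallmatrix}\right)}$.

```python
from math import gcd

def cusp_width(N, c):
    # Width w of the cusp a/c = gamma(oo) for Gamma_0(N),
    # using the formula w = N / gcd(N, c^2) quoted above.
    return N // gcd(N, c * c)

# gamma = [1,0;2,1] sends i*oo to the cusp 1/2:
print(cusp_width(32, 2), cusp_width(36, 2))  # 8 9
```

Note that $c=1$ gives $w=N$, the width of the cusp $0$ for ${\Gamma}_0(N)$.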
The basic idea is as follows. Consider the space generated by products of two Eisenstein series $E_1$ and $E_2$, chosen of course such that $E_1E_2\in M_k({\Gamma}_0(N),\chi)$, including the trivial Eisenstein series $1$ of weight $0$. Experiments show that usually this space is the whole of $M_k({\Gamma}_0(N),\chi)$, and in fact the only exceptions seem to be in weight $k=2$. In fact it is a theorem of Borisov–Gunnells [@Bor-Gun] [@Bor-Gun2] that this observation is indeed true, and that the exceptions occur only in weight $2$ when there exists an eigenform $f$ such that $L(f,1)=0$, for instance modular forms attached to elliptic curves of positive rank (so $N=37$ is the smallest level for which there is an exception).
Assume for the moment that we are in a space generated by products of two Eisenstein series. Since these are completely explicit, it is possible, by a tedious computation, to obtain the expansions of $E|{\gamma}$, hence of all our forms. In the special case (occurring only in weight $2$) where the space is not generated by products of two Eisenstein series, we simply multiply by some known Eisenstein series $E$ so as to be in larger weight, do the computation there, and finally divide by the expansion of $E|{\gamma}$. This is what we do in any case in weight $1$ and in half-integral weight.
This may sound straightforward, but as mentioned at the beginning, it required several months of work first to obtain the correct formulas, and second to write a reasonably efficient implementation. In fact, we had to make a choice. With our current choice of Eisenstein series (which may change in the future), the coefficients of the Eisenstein series $E|{\gamma}$ lie in a very large cyclotomic field, at worst ${\mathbb{Q}}({\zeta}_{\operatorname{lcm}(N,\phi(N))})$. It is possible that we can reduce this considerably, but for now we do not know how to do this at least in a systematic way. Handling such large algebraic objects is extremely costly, so we chose instead to work with approximate complex numbers with sufficient accuracy, and if desired to recognize the algebraic coefficients at the end using the LLL algorithm. This is reflected in the command that we will explain below to compute these expansions.
Note however the following theorem, communicated to us by F. Brunault and M. Neururer whom we heartily thank:
\[thmfga\] Let $f\in M_k({\Gamma}_0(N),\chi)$, denote by $M\mid N$ the conductor of $\chi$, and assume that the coefficients of the Fourier expansion of $f$ at infinity all lie in a number field $K$. Then if ${\gamma}={\left(\begin{smallmatrix}{A}&{B}\\{C}&{D}\end{smallmatrix}\right)}\in{\Gamma}$ the Fourier coefficients $a_{{\gamma}}(n)$ of $f|_k{\gamma}$ lie in the field $K({\zeta}_u)$, where $u=\operatorname{lcm}(N/\gcd(N,CD),M/\gcd(M,BC))$.
Note that the Fourier coefficients of $f|_k{\gamma}$ can live in a smaller number field, or even in a smaller cyclotomic extension than that predicted by the theorem.
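To illustrate the theorem, the following short Python fragment (our own helper, not a package function) computes the predicted cyclotomic conductor $u$. For $N=32$, conductor $M=8$, and ${\gamma}={\left(\begin{smallmatrix}{1}&{0}\\{2}&{1}\end{smallmatrix}\right)}$ it predicts $u=16$, consistent with the coefficients in ${\mathbb{Q}}({\zeta}_{16})$ obtained in the example of Section \[sec:nine\].

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def cyclotomic_conductor(N, M, A, B, C, D):
    # u = lcm(N/gcd(N, C*D), M/gcd(M, B*C)) from the theorem of
    # Brunault-Neururer; gcd(M, 0) = M handles the case B*C = 0.
    return lcm(N // gcd(N, C * D), M // gcd(M, B * C))

print(cyclotomic_conductor(32, 8, 1, 0, 2, 1))  # 16
```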
Once the problem of computing the expansions of $f|_k{\gamma}$ is solved, essentially all of the limitations mentioned in Section \[sec:elem\] disappear: we can evaluate a modular form even very near the real axis (or at cusps), we can compute the action of the Atkin–Lehner operators in nonsquarefree level, we can compute general period integrals involving modular forms and in particular *modular symbols* such as $$\int_a^b(X-\tau)^{k-2}f(\tau)\,d\tau\;$$ when $k \ge 2$ is integral, we can numerically evaluate $L$-functions of modular forms (not necessarily eigenforms) at an arbitrary $s\in{\mathbb{C}}$, and we can compute general *Petersson scalar products* thanks to the following theorem similar to Haberland’s:
\[thmpet\] Let $k\ge2$ be an integer, let $f$ and $g$ in $S_k({\Gamma}_0(N),\chi)$ be two cusp forms, let $${\Gamma}=\bigsqcup_{j=1}^r{\Gamma}_0(N){\gamma}_j$$ be a right coset decomposition, and define $f_j=f|_k{\gamma}_j$ and $g_j=g|_k{\gamma}_j$. Finally, for any function $h$ and any $a$ and $b$ in the completed upper half-plane, set $$I_n(a,b,h)=\int_a^b \tau^nh(\tau)\,d\tau\;,$$ the integral being taken along a geodesic arc from $a$ to $b$. Then we have $$6r(2i)^{k-1}{\langle{f,g}\rangle}_{{\Gamma}_0(N)}=\sum_{j=1}^r\sum_{n=0}^{k-2}
(-1)^n\binom{k-2}{n}I_{k-2-n}(0,i\infty,f_j){\overline}{I_n(-1,1,g_j)}\;.$$
More generally, when at each cusp at least one of the two forms vanishes, there exists a similar formula which we do not give here.
On the other hand this theorem cannot be applied in weight $k=1$ or when $k\in1/2+{\mathbb{Z}}$. However, recent work by D. Collins [@Col] using a formula of P. Nelson [@Nel] makes it possible to compute Petersson products in these cases as well, although in general less efficiently.
Implementation Issues {#sec:seven}
=====================
Modular form spaces can be represented by a basis, in echelon form or not, together with suitable linear algebra precomputations allowing fast recognition of elements of the space (recall the notion of *Sturm bound*, which tells us that if the Fourier coefficients at infinity of two modular forms belonging to the same space are equal up to some effective bound, the forms are identical).
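For concreteness, one common version of the Sturm bound for $M_k({\Gamma}_0(N))$ is $\lfloor k\mu/12\rfloor$, where $\mu=[{\Gamma}:{\Gamma}_0(N)]=N\prod_{p\mid N}(1+1/p)$. The sketch below is our own code, and conventions for the bound differ by small shifts between references.

```python
def index_gamma0(N):
    # Index [SL_2(Z) : Gamma_0(N)] = N * prod_{p | N} (1 + 1/p),
    # computed by trial division of N.
    idx, n, p = N, N, 2
    while p * p <= n:
        if n % p == 0:
            idx = idx // p * (p + 1)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        idx = idx // n * (n + 1)
    return idx

def sturm_bound(N, k):
    # If two forms in M_k(Gamma_0(N)) agree up to this q-power,
    # they are equal.
    return k * index_gamma0(N) // 12

print(sturm_bound(11, 2))  # 2
```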
The main problem is the representation of modular forms themselves. Because of our choice of using trace formulas, we must in some way represent the forms by their Fourier expansion at infinity. Since we do not want to specify in advance the number of coefficients that we want, a first approach would be to say that a modular form is a program which, given some $L$, outputs the Fourier coefficients up to $q^L$. This would be quite inefficient because of the action of Hecke operators: for instance, let $p$ be a prime not dividing the level $N$, and assume that $f(\tau)=\sum_{n\ge0}a(n)q^n$. Then $(T(p)f)(\tau)=\sum_{n\ge0}b(n)q^n$ with $b(n)=a(pn)+p^{k-1}\chi(p)a(n/p)$, where the last term occurs only if $p\mid n$. Thus if we want $L$ Fourier coefficients of $T(p)f$ we need on the one hand the coefficients $a(m)$ for $m\le L/p$, but also the coefficients $a(m)$ for $p\mid m$ and $m\le pL$. All the other Fourier coefficients $a(m)$ for $m\le pL$ with $p\nmid m$ are not needed, so it would be a waste to compute them if it can be avoided.
Thus we modify our first approach, and we say that a modular form is a program which, given some $L$ and step size $d$, outputs the Fourier coefficients of $q^m$ for $m\le dL$ and $d\mid m$ (hence all the coefficients if $d=1$). Such a program will define a modular form in our package.
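The Hecke recurrence quoted above is easy to experiment with. Below is a minimal Python sketch (trivial character, names ours); as a check, since $\Delta$ is an eigenform with $T(2)$-eigenvalue $\tau(2)=-24$, applying it to the first coefficients of $\Delta$ must return $-24$ times those coefficients.

```python
def hecke_tp(a, p, k, L, chi=lambda n: 1):
    # b(n) = a(p*n) + p^(k-1) * chi(p) * a(n/p), the second term
    # occurring only when p divides n; `a` is a dict of known
    # coefficients (we need a(m) for m <= L/p and for p | m, m <= p*L).
    b = {}
    for n in range(L + 1):
        b[n] = a[p * n]
        if n % p == 0:
            b[n] += p ** (k - 1) * chi(p) * a[n // p]
    return b

# First coefficients of Delta (weight k = 12, level 1):
tau = {0: 0, 1: 1, 2: -24, 3: 252, 4: -1472,
       5: 4830, 6: -6048, 7: -16744, 8: 84480}
b = hecke_tp(tau, 2, 12, 4)
# Delta is an eigenform: T(2)Delta = -24*Delta.
print(all(b[n] == -24 * tau[n] for n in range(5)))  # True
```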
Note that such a representation may look like magic: for instance in a [GP]{} session, type [D=mfDelta()]{}, which creates the Ramanujan $\Delta$ function. The output is only one and a half lines long and contains mostly trivial information. Nonetheless, this information is sufficient to compute the Fourier expansion to millions of terms if desired, using the fundamental function [mfcoefs(D,n)]{}: internally this small amount of information invokes a much more sophisticated program which computes the expansion (note that in this specific case it is faster to compute the expansion directly using the product formula for the delta function than to use the modular form package).
Additional implementation comments: since the trace formula involves computing the class numbers $h(D)$ for $D<0$, we use a *cache* method for those: we first precompute a reasonable number of such class numbers, then if it is not sufficient, we precompute again a larger number, and so on. We do similar caching for the factorizations and divisors of integers.
We also need to represent Dirichlet characters $\chi$. Since the most frequent are (trivial or) quadratic characters modulo $D$, such a character will simply be represented by the number $D$ (so $D=1$ or omitted completely when $\chi$ is the trivial character). More general characters can be represented in several [Pari/GP]{} compatible ways, but the preferred way is to use the *Conrey numbering* which we do not explain here, so that a general Dirichlet character modulo $N$, primitive or not, is represented by [Mod(a,N)]{}, where $\gcd(a,N)=1$.
Pari/GP Commands {#sec:eight}
================
Commands involving only Modular Forms
-------------------------------------
As already mentioned above, the first basic command is [mfcoefs(f,n)]{} which gives the vector of Fourier coefficients $[a(0),a(1),\dotsc,a(n)]$. We have chosen to give a vector and not a series first because it is more compact, and second because the series variable (in principle $q$) could conflict with other user variables. Of course nothing prevents the user from defining his own function
? mfser(f,n) = Ser(mfcoefs(f,n),'q);
if the variable $q$ is ok[^2]. For simplicity we will use this user-defined function in the examples.
? mfser(mfDelta(), 8)
% = q - 24*q^2 + 252*q^3 - 1472*q^4 + 4830*q^5
- 6048*q^6 - 16744*q^7 + 84480*q^8 + O(q^9)
There are a number of predefined modular forms: in addition to the Ramanujan $\Delta$ function, we have [mfEk(k)]{}, the normalized Eisenstein series $E_k$ for the full modular group, more generally [mfeisenstein(k,chi1,chi2)]{} for general Eisenstein series, [mfEH(k)]{} for Eisenstein series over ${\Gamma}_0(4)$ in half-integral weight, [mfTheta(chi)]{}, the unary theta function associated to the Dirichlet character [chi]{} (if [chi]{} is omitted, the standard theta series), as well as modular forms coming from preexisting mathematical objects such as [mffrometaquo]{}, eta quotients, [mffromell]{}, the modular cusp form of weight $2$ associated to an elliptic curve over ${\mathbb{Q}}$, [mffromqf]{}, the modular form associated to a quadratic form, with an optional spherical polynomial, and [mffromlfun]{}, the modular form associated to an $L$-function having factor at infinity equal to ${\Gamma}_{{\mathbb{C}}}(s)=2\cdot(2\pi)^{-s}{\Gamma}(s)$.
All the standard arithmetic operations are implemented. For instance, the modular form $F=E_4(\Delta^2+E_{24})$ is obtained by the following commands:
? E4 = mfEk(4); E24 = mfEk(24); D = mfDelta();
? F = mfmul(E4, mflinear([mfpow(D,2), E24],[1,1]));
Note that there are no [mfadd]{}, [mfsub]{}, or [mfscalmul]{} functions since they can be emulated by [mflinear]{}, which creates arbitrary linear combinations of forms:
? mfadd(F,G) = mflinear([F,G],[1,1]);
? mfsub(F,G) = mflinear([F,G],[1,-1]);
? mfscalmul(F,z) = mflinear([F],[z]);
Note also that the internal representation is an expression tree in *direct Polish* notation. In fact, since after a number of operations you may have forgotten what your modular form is, there is a function [mfdescribe]{} which essentially outputs this representation. For instance, applied to the example above:
? mfdescribe(F)
% = "MUL(E_4, LIN([POW(DELTA, 2), E_24], [1, 1]))"
? mfparams(F)
% = [1, 28, 1, y]
The [mfdescribe]{} command is not to be confused with the [mfparams]{} command which gives a short description of modularity and arithmetic properties of the form. The above command shows that $F\in M_{28}({\Gamma}_0(1),1)=M_{28}({\Gamma})$, and that the number field generated by the Fourier coefficients of $F$ is ${\mathbb{Q}}[y]/(y) = {\mathbb{Q}}$.
Other modular operations are available which work directly on forms, such as [mfbd]{} (expansion $\tau\mapsto d\tau$), [mfderivE2]{} (Serre derivative), [mfbracket]{} (Rankin–Cohen bracket), etc. But most operations need an underlying modular form *space*, even the Hecke operators, as we have seen above.
Commands on Modular Form Spaces
-------------------------------
The basic command which creates a basis of a modular form space is [mfinit]{}, analogous to the other [Pari/GP]{} commands such as [nfinit]{}, [bnfinit]{}, [ellinit]{}, etc. This command takes two parameters. The first is a vector [\[N,k,CHI\]]{} (or simply [\[N,k\]]{} if [CHI]{} is trivial), where $N$ is the level, $k$ the weight, and [CHI]{} the character in the format briefly described above. The second parameter is a code specifying which space we want ($0$ for the new space, $1$ for the cuspidal space, or omitted for the full space, for instance). A simpler command with exactly the same parameters is [mfdim]{}, which gives the dimension.
? mf = mfinit([26,2],0);
? L = mfbasis(mf); vector(#L,i,mfser(L[i],10))
The first command creates the space $S_2^{{\mathrm{new}}}({\Gamma}_0(26))$ (no character), and the second command gives the $q$-expansions of the basis elements:
% = [2*q - 2*q^3 + 2*q^4 - 4*q^5 -...,\
-2*q - 4*q^2 + 10*q^3 - 2*q^4 +...]
These are of course not the eigenforms. To obtain the latter:
? LE = mfeigenbasis(mf); vector(#LE,i,mfser(LE[i],8))
% = [q - q^2 + q^3 + q^4 - 3*q^5 -...,\
q + q^2 - 3*q^3 + q^4 - q^5 - ...]
Note that eigenforms can be defined over a relative extension of ${\mathbb{Q}}(\chi)$:
? mf = mfinit([23,2],0); LE = mfeigenbasis(mf);
? vector(#LE,i,mfser(LE[i],10))
% = [Mod(1, y^2+y-1)*q + Mod(y, y^2+y-1)*q^2 + ...]
There are two ways to better see them:
? mffields(mf)
% = [y^2 + y - 1]
? vector(#LE,i,lift(mfser(LE[i],10)))
% = [q + y*q^2 + (-2*y-1)*q^3 + (-y-1)*q^4 + ...]
? f = LE[1]; mfembed(f, mfcoefs(f,10))
% = [[0, 1, 0.618033988..., -2.236067977..., ...],\
[0, 1, -1.618033988..., 2.236067977..., ...]]
The first command gives the number fields over which the eigenforms are defined. Here there is only one eigenform and only one field ${\mathbb{Q}}[y]/(y^2+y-1)$. The second command “lifts” the coefficients to ${\mathbb{Q}}[y]$, so the result is much more legible. The third command “embeds” the eigenform in all possible ways in ${\mathbb{C}}$: indeed, even though (in the present example) there is only one *formal* eigenform, the space is of dimension two so there are *two* eigenforms, given numerically as the last result.
Unavoidably, when nonquadratic characters occur, we can obtain even more complicated output. We have chosen to represent formal values of a character (which are in a cyclotomic field) with the variable letter “[t]{}”, but it must be understood that contrary to eigenforms this corresponds to a single canonical embedding: for instance
Mod(t, t^4 + t^3 + t^2 + t + 1)
means in fact $e^{2\pi i/5}$, not some other fifth root of unity.
? mf = mfinit([15,3,Mod(2,5)], 0); mffields(mf)
% = [y^2 + Mod(-3*t, t^2 + 1)]
? F = mfser(mfeigenbasis(mf)[1], 10); f = liftall(F)
% = q + (y-t-1)*q^2 + t*y*q^3 + ((-2*t-2)*y+t)*q^4...
The first command shows that the eigenforms will have coefficients in a quadratic extension of a quadratic extension, hence in the quartic field ${\mathbb{Q}}[y,t]/(t^2+1, y^2-3t)$. The second command lifts the $q$-expansion to ${\mathbb{Q}}[y,t]$. To obtain the expansion over the quartic field, which is isomorphic to ${\mathbb{Q}}[y]/(y^4+9)$, we write
? [T,a] = rnfequation(t^2+1, y^2-3*t,1);
? T
% = y^4 + 9
? lift(subst(f,t,a))
% = q + (-1/3*y^2+y-1)*q^2 + 1/3*y^3*q^3 + ...
Note that the variable $y$ actually stands for the same algebraic number in this block and the previous one but [rnfequation]{} does not guarantee this in general.
Miscellaneous Commands
----------------------
? mf = mfinit([96,4],0); M = mfheckemat(mf,5)
% =
[0 0 64 0 0 -84]
[0 4 0 36 0 0]
[1 0 -24/5 0 0 294/5]
[0 2 0 -12 -20 0]
[0 -1/2 0 1 6 0]
[0 0 6/5 0 0 14/5]
? factor(charpoly(M))
% = [x - 10, 2; x - 2, 2; x + 14, 2]
? M = mfatkininit(mf,3)[2] \\ Atkin-Lehner W_3
% =
[ 0 -3 0 0 -24 0]
[-1/3 0 -4/3 0 0 -12]
[ 0 0 0 -9/5 -6/5 0]
[ 0 0 -2/3 0 0 -1]
[ 0 0 1/6 0 0 3/2]
[ 0 0 0 1/5 4/5 0]
? factor(charpoly(M))
% = [x - 1, 3; x + 1, 3]
(all outputs edited for clarity). The outputs are self-explanatory. Note that the basis we compute for our modular form spaces is essentially random and does not guarantee that the matrices attached to Hecke or Atkin–Lehner operators have integral coefficients. They will in general have coefficients in ${\mathbb{Q}}(\chi)$ (up to normalizing Gauss sums in the case of $W_Q$); of course, their characteristic polynomials have coefficients in ${\mathbb{Z}}[\chi]$.
? T = mfTheta();
? mf = mfinit([4,2]); mftobasis(mf, mfpow(T, 4))
% = [0, 8]~
? mf = mfinit([4,5,-4]); mftobasis(mf, mfpow(T, 10))
% = [64/5, 4/5, 32/5]~
Since in both cases the basis of [mf]{} can be given explicitly, this gives explicit formulas for the number of representations of an integer as a sum of $r$ squares for $r=4$ and $r=10$ respectively (this can be done for $1\le r\le 8$ and $r=10$).
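For $r=4$ the resulting identity is Jacobi's classical four-square theorem, $r_4(n)=8\sum_{d\mid n,\,4\nmid d}d$, which can be verified by brute force in a few lines of Python (helper names are ours):

```python
from itertools import product

def r4_count(n):
    # Number of (ordered, signed) ways to write n as a sum of 4 squares.
    B = int(n ** 0.5)
    return sum(1 for x in product(range(-B, B + 1), repeat=4)
               if sum(t * t for t in x) == n)

def r4_jacobi(n):
    # Jacobi: 8 * (sum of the divisors of n not divisible by 4).
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4)

print(all(r4_count(n) == r4_jacobi(n) for n in range(1, 13)))  # True
```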
? B = mfbasis([4,3,-4],3); \\ 3: Eisenstein space
? [mfser(E,10) | E <- B]
% = [q + 4*q^2 + 8*q^3 + 16*q^4 + 26*q^5 + ...\
-1/4 + q + q^2 - 8*q^3 + q^4 + 26*q^5 + ...]
Modular forms can be evaluated numerically (even when the imaginary part is very small, see below), as well as their $L$-functions:
? E4 = mfEk(4); mf = mfinit(E4); mfeval(mf, E4, I)
% = 1.455762892268709322462422003598869...
? 3*gamma(1/4)^8/(2*Pi)^6
% = 1.455762892268709322462422003598869...
This equality is a consequence of the theory of *complex multiplication*, and in particular of the *Lerch–Chowla–Selberg* formula.
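The identity can also be double-checked in pure Python: the $q$-expansion $E_4=1+240\sum_{n\ge1}{\sigma}_3(n)q^n$ converges extremely fast at $\tau=i$, since $q=e^{-2\pi}\approx 0.00187$ (a sketch with our own helper names):

```python
import math

def sigma3(n):
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

def E4_at_i(terms=30):
    # E_4(tau) = 1 + 240 * sum_{n>=1} sigma_3(n) q^n evaluated at tau = i.
    q = math.exp(-2 * math.pi)
    return 1 + 240 * sum(sigma3(n) * q ** n for n in range(1, terms + 1))

cm = 3 * math.gamma(0.25) ** 8 / (2 * math.pi) ** 6
print(abs(E4_at_i() - cm) < 1e-12)  # True
```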
? D = mfDelta(); mf = mfinit(D,1); L = lfunmf(mf,D);
? lfunmfspec(L)
% = [[1, 25/48, 5/12, 25/48, 1],\
[1620/691, 1, 9/14, 9/14, 1, 1620/691],\
0.0074154209298961305890064277459002287248,\
0.0050835121083932868604942901374387473226]
The command [lfunmf]{} creates the $L$-function attached to $\Delta$, and [lfunmfspec]{} gives the corresponding *special values* in the interval $[1,11]$, which are rational numbers times two periods ${\omega}^+$ (for the odd integers) and ${\omega}^-$ (for the even integers).
[L]{} is now an $L$-function in the sense of the $L$-function package of [Pari/GP]{}, and can be handled as such. For instance:
? LF=lfuninit(L,[50]); ploth(t=0,50,lfunhardy(LF,t));
outputs in 50 ms the plot of the Hardy function associated to $\Delta$ on the critical line $\Re(s)=6$ from height $0$ to $50$.
Similarly, we can compute zeros:
? lfunzeros(LF,20)
% = [ 9.2223793999211025222437671927434781355,\
13.907549861392134406446681328770219492,\
17.442776978234473313551525137127262719,\
19.656513141954961000127281756321302802]
Note the nontrivial fact that [lfunmf]{} is completely general and computes the $L$-function attached to *any* modular form, eigenform or not (although the computation is indeed more efficient when the function detects an eigenform): this makes use of the “advanced” features that we will explain below.
A very useful command is [mfeigensearch]{}, which searches for *rational* eigenforms (hence with trivial or quadratic character) in a given range.
? B = mfeigensearch([[1..60],2],[[2,-1],[3,-3]]);
? apply(mfparams,B)
% = [[53, 2, 1, y], [58, 2, 1, y]]
? apply(x->mfser(x,10),B)
% = [q - q^2 - 3*q^3 - q^4 + 3*q^6 +...\
q - q^2 - 3*q^3 + q^4 - 3*q^5 + 3*q^6 +...]
The first command asks for all rational eigenforms of level between $1$ and $60$ and weight $2$ such that $a(2)=-1$ and $a(3)=-3$. The [mfparams]{} command shows that there are two such forms, one in level $53$, the other in level $58$. The last command gives the beginning of their Fourier expansions, which of course agree up to the coefficient of $q^3$.
Note that the functions are returned as black boxes from which an arbitrary number of Fourier coefficients can be computed; this number need not be specified in advance. We could just as well have written [mfser(x,1000)]{} if we wanted $1000$ coefficients.
There exists also the more straightforward [mfsearch]{} command which simply searches for a rational modular form with given initial coefficients:
? B=mfsearch([[1..30],3],[0,1,2,3,4,5,6,7,8],1);
? apply(mfparams,B)
% = [[30, 3, -3, y], [30, 3, -15, y]]
? apply(x->mfser(x,10),B)
% = [q + 2*q^2 + 3*q^3 + 4*q^4 + 5*q^5 + 6*q^6\
+ 7*q^7 + 8*q^8 - 14*q^9 - 30*q^10 + O(q^11),\
q + 2*q^2 + 3*q^3 + 4*q^4 + 5*q^5 + 6*q^6\
+ 7*q^7 + 8*q^8 - 21*q^9 - 50*q^10 + O(q^11)]
This tells us that there exist exactly two forms of weight $3$ and level $N\le30$ in the cuspidal space (code $1$) whose Fourier expansion begins with $q+2q^2+\cdots+8q^8+O(q^9)$; the last command shows their Fourier expansions up to $q^{10}$.
Weight $1$ Examples
-------------------
Almost all of the commands given up to now (with the exception of [lfunmf]{} for non-eigenforms and [mfatkininit]{} for nonsquarefree levels) are direct (although sometimes complicated) applications of the trace formula and linear algebra over cyclotomic fields. We now come to more advanced aspects of the package.
As already mentioned, constructing modular forms of weight $1$ is more difficult than in higher weight, but they are fully implemented in the package.
? mfdim([148,1,0], 1)
% = [[4, Mod(105, 148), 1, 0],
[6, Mod(63, 148), 1, 1],
[18, Mod(127, 148), 1, 1]]
This command uses the *joker* character $0$ (which is available for all weights but especially useful in weight $1$): it asks for information about $S_1({\Gamma}_0(148),\chi)$ for all Galois equivalence classes of characters $\chi$, but only for nonzero spaces. Here it gives us the Conrey labels of three characters modulo $148$, such as [Mod(105, 148)]{}, of respective orders $4$, $6$ and $18$. The other two integers are of course also important and give the dimension of the space and the dimension of the subspace generated by the dihedral forms. Let us look at the first space, which contains an exotic (non-dihedral) form:
? mf = mfinit([148,1,Mod(105,148)], 0);
? f = mfeigenbasis(mf)[1];
? mfser(f,12)
% = Mod(1,t^2+1)*q + Mod(-t,t^2+1)*q^3
+ Mod(-1,t^2+1)*q^7 + Mod(t,t^2+1)*q^11 + O(q^13)
? mfgaloistype(mf)
% = [-24]
This tells us that the projective image of the Galois representation associated to the (unique) eigenform in [mf]{} is isomorphic to $S_4$, so is “exotic”. This is the lowest possible level for which this occurs (the smallest exotic $A_4$ is in level $124$, already found long ago by J. Tate, and the smallest exotic $A_5$ is in level $633$, found only a few years ago by K. Buzzard and A. Lauder):
? mfgaloistype([633, 1, Mod(107,633)])
% = [10, -60]
(The first eigenform is of dihedral type $D_5$, the second has exotic type $A_5$.) Note that this computation only requires 5 seconds.
? mfgaloistype([2083,1,-2083])
% = [14, -60]
This answers an old question of Serre who conjectured the existence of exotic $A_5$ forms of prime level $p\equiv3\pmod4$ with quadratic character ${\mbox{$\left(\frac{-p}{n}\right)$}}$: $p=2083$ is the smallest such prime.
Typical nonexotic examples:
? mfgaloistype([239,1,-239])
% = [6, 10, 30]
Three eigenforms with projective image isomorphic to $D_3$, $D_5$, and $D_{15}$.
Half-Integral Weight Examples
-----------------------------
? mf = mfinit([12,5/2]); B = mfbasis(mf);
? for(j=1,#B,print(mfser(B[j],8)))
1 + 12*q^5 + 30*q^8 + O(q^9)
q - 8*q^5 + 14*q^6 + 28*q^7 - 20*q^8 + O(q^9)
q^2 + 8*q^5 - q^6 - 10*q^7 + 18*q^8 + O(q^9)
q^3 - 4*q^5 + 4*q^6 + 10*q^7 - 10*q^8 + O(q^9)
q^4 + 2*q^5 - 2*q^6 - 4*q^7 + 5*q^8 + O(q^9)
? f = B[1]; [mf2,F] = mfshimura(mf,f,5); mfser(F,8)
% = -3/5 + 12*q + 108*q^2 + 132*q^3 + 876*q^4 ...
? mfparams(mf2)
% = [6, 4, 1, 4]
This returns the Shimura lift of $f$ of weight $2k-1 = 4$ in $M_4({\Gamma}_0(6),
{\mbox{$\left(\frac{4}{\cdot}\right)$}})$. The Kohnen $+$-space as well as the known bijections between it and spaces of integral weight are implemented, as well as the new space and the eigenforms in the Kohnen space. We refer the reader to the manual for details.
Advanced Examples {#sec:nine}
=================
Expansions of $F|_k{\gamma}$ and Applications
---------------------------------------------
As mentioned at the beginning, we now give a number of examples which we believe are not possible (at least in general) with other packages. The basic function which allows all the remaining advanced examples to work is the computation of the Fourier expansion of $F|_k{\gamma}$ for any ${\gamma}\in{\Gamma}$. We begin with a simple example:
? mf = mfinit([32,4],0); F = mfbasis(mf)[1];
? mfser(F,10)
% = 3*q + 2*q^5 + 47*q^9 + O(q^11)
? g = [1,0;2,1];
? Ser(mfslashexpansion(mf, F, g, 6, 1, &params), q)
% = Mod(-1/64*t, t^8 + 1)*q
+ Mod(-1/4*t^3, t^8 + 1)*q^3
+ Mod(-11/32*t^5, t^8 + 1)*q^5 + O(q^7)
? [alpha, w] = params
% = [0, 8]
This requires a few explanations: the [mfslashexpansion]{} command asks for $6$ terms of the Fourier expansion of $F|_k{\gamma}$ with ${\gamma}={\left(\begin{smallmatrix}{1}&{0}\\{2}&{1}\end{smallmatrix}\right)}$, and the flag $1$ which follows asks to give the result in algebraic form if possible (set the flag to $0$ for complex floating point approximations). Thus, as for values of characters, the variable “[t]{}” which is printed is *canonically* $e^{2\pi i/16}$, root of $\Phi_{16}(t)=t^8+1$. The [params]{} components are ${\alpha}$ and $w$, and mean that the “[q]{}” in the expansion should be understood as $e^{2\pi i\tau/w}$ (so here $q=e^{2\pi i\tau/8}$), and the expansion should be multiplied by $q^{{\alpha}}$ (here by $1$).
Here is a more complicated example which illustrates this:
? mf = mfinit([36,3,-4],0); F = mfbasis(mf)[1];
? Ser(mfslashexpansion(mf, F, g, 4, 1, &params), q)
% = Mod(-1/54*t^4 - 1/54*t, t^6 + t^3 + 1)
+ Mod(-1/3*t^5 - 1/3*t^2, t^6 + t^3 + 1)*q^2
+ Mod(-2/9*t^4, t^6 + t^3 + 1)*q^3 + O(q^5)
? [alpha, w] = params
% = [1/18, 9]
Thus $w=9$ so $q=e^{2\pi i\tau/9}$, and ${\alpha}=1/18$ so the expansion must be multiplied by $e^{2\pi i\tau/18}$. Note that the constant coefficient $-t^4/54 - t/54 \in {\mathbb{Q}}(t)/(t^6 + t^3 + 1)$ in the expansion is nonzero, so necessarily ${\alpha}>0$ otherwise $F$ would not be a cusp form.
Of course we can have “raw” expansions with approximate complex coefficients:
? Ser(mfslashexpansion(mf, F, g, 4, 0), q)
% = (0.00321570699... - 0.01823718061... I)
+ (0.25534814770... - 0.21426253656... I)q^2 +...
(Here we did not ask for [params]{} since we already know it from the previous computations.) Recall that all this is possible thanks to the expression of $F$ as a linear combination of products of two Eisenstein series. More generally for ${\gamma}$ in $M_2^+({\mathbb{Q}})$, the result would be expressed as $u^{k/2}f(\tau + v)$ and $f(q) = q^\alpha \sum_{m\ge0} a_m q^{m/w}$ as above for some rational numbers $u$ and $v$ chosen so as to minimize the field of definition of the $a_m$ (see the manual).
Atkin–Lehner operators are an important special case:
? mf = mfinit([32,4,8],0); Z = mfatkininit(mf,32);
? [mfB, M, C] = Z;
? M
% =
[ 1/8 -7/4]
[-1/16 -1/8]
? C
% = 0.35355339059327376220042218105242451964
(here $C=8^{-1/2}$). This is a difficult case for the Atkin–Lehner operators since first the level is not squarefree, and second the character, here $8$ which represents ${\mbox{$\left(\frac{8}{n}\right)$}}$, is not defined modulo $N/Q=32/32=1$.
The result involves a normalizing constant $C$ given above, essentially a Gauss sum, such that the expansion of $C\cdot F|_k W_Q$ has the same field of coefficients as $F$. A similar question for the more general $F|_k{\gamma}$ is answered by Theorem \[thmfga\], but the given value may not be optimal. In general, $C\cdot F|_k W_Q$ belongs to a different space than $F$ (the Nebentypus becomes $\overline{\chi_Q} \chi_{N/Q}$ in integral weight, and a similar formula holds in non-integral weight). The matrix $M$ expresses $C \cdot F_i |_k W_Q$ when $(F_i)$ is a basis of [mf]{} in terms of a basis of that other space [mfB]{}. The operator can also be applied to an individual form:
? F = mfbasis(mf)[1]; G = mfatkin(Z,F);
? mfser(G,10)
% = 1/4*q + 7/2*q^3 + 7*q^5 + 2*q^7 - 1/4*q^9 + ...
This returns $G = C\cdot F|_k W_{32}$.
Numerical Applications
----------------------
It is now easy to compute *period polynomials* $$P(F,X)=\int_0^{i\infty}(X-\tau)^{k-2}F(\tau)\,d\tau$$ for modular forms of integral weight $k\ge2$. Indeed, in addition to the Fourier expansion at infinity, this only requires the expansion of $F|_kW_N$ with $W_N={\left(\begin{smallmatrix}{0}&{-1}\\{N}&{0}\end{smallmatrix}\right)}$. In particular we can compute in complete generality *special values* and *periods*.
More generally, we can compute numerically *general* period polynomials and *modular symbols* $$\int_{s_1}^{s_2}(X-\tau)^{k-2}(F|_k{\gamma})(\tau)\,d\tau$$ for any cusps $s_1$ and $s_2$ and any ${\gamma}\in M_2^+({\mathbb{Q}})$, or even for any $s_1$ and $s_2$ in the completed upper-half plane ${\mathfrak H}$.
Thanks to these symbols, we can also compute general *Petersson products* of modular forms of integral weight $k\ge2$ by using Haberland-type formulas such as Theorem \[thmpet\]:
? mf = mfinit([11,2],0); F = mfbasis(mf)[1];
? FS = mfsymbol(mf,F); mfsymboleval(FS,[0,oo])
% = 0.040400186918863279214419198537327720301*I
? mfsymboleval(FS, [1/2,2*I])
% = -0.16160130270129178714056342927215612575*I
? mfpetersson(FS) \\ <F,F>
% = 0.0039083456561245989852473854813821138618
? mf = mfinit([23,2],1);
? BS = [mfsymbol(mf,f) | f <- mfbasis(mf)];
? [mfpetersson(f,g) | f<-BS; g<-BS]
% = [0.0095931508727672866790131897867245345540,
-0.0066920429957575620051313153184106231192,
-0.0066920429957575620051313153184106231192,
0.016285193868524848684144505105135157673]
We can also evaluate a form near the real axis: for forms over the full modular group ${\Gamma}$ or over groups of small index, we can use modular transformations *in the group* to significantly increase imaginary parts. In general this is not possible, but it now becomes easy since we can always use the whole of ${\Gamma}$: in fact, we can always reduce to $\Im(\tau)\ge 1/(2N)$, which is almost optimal:
? \p57
? mf = mfinit([12,4],1); F = mfbasis(mf)[1];
? ev(m) = mfeval(mf, F, 1/Pi+I/10^m);
? ev(6)
% = -89811.0493... -58409.9409...*I
? ev(7)
% = 4.821... E-52 + 6.788... E-52*I
? ev(8)
% = -1.79763... E-69 + 2.6450... E-69*I
? ev(9)
% = 357873461.23... - 264528426.36...*I
? ev(10)
% = 0.3966...E18 - 1.6429...E18*I
Note that $|ev(m)|$ seems to tend to infinity with $m$, but with a pronounced “dip” around $m=7$ and $m=8$. This is not specific to the modular form $F$, but is probably due to the Diophantine approximation properties of $1/\pi$, and in particular to its very close convergent $113/355$.
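The convergents of $1/\pi$ can be recomputed in a few lines (our own code; it uses floating-point arithmetic, so only the first few convergents are reliable):

```python
import math

def convergents(x, n):
    # First n continued-fraction convergents (p, q) of x >= 0,
    # via the standard recurrence p_k = a_k p_{k-1} + p_{k-2}.
    p0, q0 = 1, 0
    a = int(x)
    p1, q1 = a, 1
    out = [(p1, q1)]
    for _ in range(n - 1):
        x = 1 / (x - int(x))
        a = int(x)
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        out.append((p1, q1))
    return out

print(convergents(1 / math.pi, 5))
# [(0, 1), (1, 3), (7, 22), (106, 333), (113, 355)]
```

The convergent $113/355$ approximates $1/\pi$ to within about $3\cdot10^{-8}$, which is what produces the “dip” observed above.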
[99.]{}

L. Borisov and P. Gunnells, [*Toric modular forms and nonvanishing of $L$-functions*]{}, J. Reine Angew. Math. [**539**]{} (2001), pp. 149–165.

L. Borisov and P. Gunnells, [*Toric modular forms of higher weight*]{}, J. Reine Angew. Math. [**560**]{} (2003), pp. 43–64.

H. Cohen and J. Oesterlé, [*Dimensions des espaces de formes modulaires*]{}, Modular functions of one variable VI, Lecture Notes in Math. [**627**]{}, Springer (1977), pp. 69–78.

H. Cohen and F. Strömberg, [*Modular Forms, A Classical Approach*]{}, Graduate Studies in Math. [**179**]{}, American Math. Soc. (2017).

D. Collins, [*Numerical computation of Petersson inner products and $q$-expansions*]{}, arXiv:1802.09740.

P. Nelson, [*Evaluating modular forms on Shimura curves*]{}, Math. Comp. [**84**]{} (2015), pp. 2471–2503.

The PARI Group, PARI/GP version [2.11.0]{}, Univ. Bordeaux, 2018, <http://pari.math.u-bordeaux.fr/>.

N. Skoruppa and D. Zagier, [*Jacobi forms and a certain space of modular forms*]{}, Invent. Math. [**94**]{} (1988), pp. 113–146.

J. Weisinger, [*Some results on classical Eisenstein series and modular forms over function fields*]{}, PhD Thesis, Harvard Univ. (1977).
[^1]: $[0,1[$ is a much more sensible notation than $[0,1)$, and $]0,1[$ than $(0,1)$ which can mean so many things.
[^2]: A technicality which explains why representing forms as series with this additional variable is awkward: the variable $q$ must have higher priority than $t$, otherwise some of the examples below will fail. A definition which would work in all cases is [mfser(f,n) = Ser(mfcoefs(f,n), varhigher("q", 't));]{}
---
abstract: |
It is pointed out that the equations $$\begin{aligned}
\sum_{i=1}^d{\big[X_i,[X_i,X_j]\big]}=0
\end{aligned}$$ (and their supersymmetrizations, which play a central role in M-theory matrix models) describe noncommutative minimal surfaces – and can be solved as such.
address:
- |
Dept. of Math.\
Linköping University\
581 83 Linköping\
Sweden
- 'Korea Institute for Advanced Study, Royal Institute of Technology, Sogang University'
author:
- Joakim Arnlind
- Jens Hoppe
title: The world as quantized minimal surfaces
---
During the past two decades several authors (see e.g. [@M-Algebras; @BFSS; @IKKT; @Cornalba]) have advocated the equations $$\begin{aligned}
\label{eq:1}
\sum_{i=1}^d{\big[X_i,[X_i,X_j]\big]}=0,\end{aligned}$$ resp. the objects (specifically: self-adjoint infinite-dimensional matrices) satisfying them, as potentially relevant to understanding space-time and the physical laws therein.
The analytical study of minimal surfaces, on the other hand, going back at least 250 years [@Lagrange; @Meusnier; @Euler] and being one of the most established classical areas of mathematics, provides a wealth of explicit examples and very detailed knowledge of their properties (see e.g. [@Nitsche; @DHKW]). In this note we would like to put forward a direct relation between these two lines of research.
Parametrized minimal surfaces in Euclidean space are solutions of $\Delta{\vec{x}}=0$, where $$\begin{aligned}
\label{eq:2}
\Delta := \frac{1}{\sqrt{g}}{\partial}_a\sqrt{g}g^{ab}{\partial}_b\end{aligned}$$ is the Laplace operator on the embedded surface, and $g=\det(g_{ab})$ with $$\begin{aligned}
\label{eq:3}
g_{ab}:=\sum_{i,j=1}^d
\frac{{\partial}x^i}{{\partial}{\varphi}^a}\frac{{\partial}x^j}{{\partial}\varphi^b}\eta_{ij}\end{aligned}$$ (here $\eta_{ij}=\delta_{ij}$ but one could equally well consider general embedding spaces). Defining Poisson-brackets (with $\rho=\rho({\varphi}^1,{\varphi}^2)$) $$\begin{aligned}
\label{eq:4}
{\left\{f,h\right\}} := \frac{1}{\rho}{\varepsilon}^{ab}{\big({\partial}_af\big)}{\big({\partial}_b h\big)}\end{aligned}$$ the minimal surface equations can be written as (cp. [@a:phdthesis; @ahh:nambudiscrete; @ah:dmsa]) $$\begin{aligned}
\label{eq:5}
\sum_{i=1}^d{\left\{x_i,{\left\{x_i,{\vec{x}}\right\}}\right\}}-
\frac{1}{2}\sum_{i=1}^d\frac{\rho^2}{g}{\left\{x_i,g/\rho^2\right\}}{\left\{x_i,{\vec{x}}\right\}}=0,\end{aligned}$$ hence as $$\begin{aligned}
\label{eq:6}
\sum_{i=1}^d{\left\{x_i,{\left\{x_i,{\vec{x}}\right\}}\right\}}=0\end{aligned}$$ when choosing $\rho=\pm\sqrt{g}$, i.e. $$\begin{aligned}
\label{eq:7}
\frac{g}{\rho^2}=\frac{1}{2}\sum_{i,j=1}^d{\left\{x_i,x_j\right\}}^2=1.\end{aligned}$$ While a general theory of non-commutative minimal surfaces, and methods to construct them, will be given in a separate paper [@ACH], let us here focus on a particular example, the Catenoid, $$\begin{aligned}
\label{eq:8}
{\vec{x}}=
\begin{pmatrix}
\cosh v\cos u\\
\cosh v\sin u\\
v
\end{pmatrix}=
\begin{pmatrix}
x \\ y \\ z
\end{pmatrix}.\end{aligned}$$ As ${\vec{x}}_u^2={\vec{x}}_v^2=\cosh^2 v=\sqrt{g}$ $$\begin{aligned}
\label{eq:9}
{\left\{x,y\right\}}=-\tanh z,\quad
{\left\{y,z\right\}}=\frac{x}{\cosh^2 z},\quad
{\left\{z,x\right\}}=\frac{y}{\cosh^2 z}.\end{aligned}$$ One can easily verify (\[eq:6\]), as well as (using $x^2+y^2=\cosh^2z$) (\[eq:7\]).
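These bracket relations are easy to confirm numerically. The following Python sketch (our own illustration, not part of the original computation) implements the Poisson bracket (\[eq:4\]) with $\rho=\sqrt{g}=\cosh^2 v$ by central differences and verifies (\[eq:9\]) at a sample point:

```python
from math import cos, sin, cosh, tanh

# Poisson bracket {f,h} = (d_u f * d_v h - d_v f * d_u h) / rho,
# with rho = sqrt(g) = cosh(v)^2 for the catenoid parametrization.
STEP = 1e-5  # step for central differences

def du(f, u, v):
    return (f(u + STEP, v) - f(u - STEP, v)) / (2 * STEP)

def dv(f, u, v):
    return (f(u, v + STEP) - f(u, v - STEP)) / (2 * STEP)

def pb(f, h, u, v):
    return (du(f, u, v) * dv(h, u, v) - dv(f, u, v) * du(h, u, v)) / cosh(v) ** 2

def x(u, v): return cosh(v) * cos(u)
def y(u, v): return cosh(v) * sin(u)
def z(u, v): return v

u0, v0 = 0.7, 0.4   # arbitrary sample point
assert abs(pb(x, y, u0, v0) + tanh(v0)) < 1e-8                   # {x,y} = -tanh z
assert abs(pb(y, z, u0, v0) - x(u0, v0) / cosh(v0) ** 2) < 1e-8  # {y,z} = x/cosh^2 z
assert abs(pb(z, x, u0, v0) - y(u0, v0) / cosh(v0) ** 2) < 1e-8  # {z,x} = y/cosh^2 z
```

The step size and sample point are chosen ad hoc; the $O(\mathrm{STEP}^2)$ discretization error is far below the tolerances used.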
Following [@abhhs1; @abhhs2] one could take e.g. $$\begin{aligned}
\label{eq:10}
\begin{split}
&[X,Y] = -i\hbar\tanh Z\\
&[Y,Z] = (\cosh Z)^{-1}X(\cosh Z)^{-1}\\
&[Z,X] = (\cosh Z)^{-1}Y(\cosh Z)^{-1}
\end{split}\end{aligned}$$ or (using power-series expansions for $(\cosh Z)^{-1}$) totally symmetrized variants of (\[eq:10\]), as defining a non-commutative Catenoid. While it is easy to see that (\[eq:10\]) does have solutions in terms of infinite-dimensional matrices $X,Y,Z$, it is difficult to see whether or not these will satisfy (\[eq:1\]). Let us therefore first simplify the classical equations by defining $$\begin{aligned}
\label{eq:11}
{\tilde{z}}(z):=\frac{z}{2}+\frac{1}{4}\sinh(2z),\end{aligned}$$ satisfying $$\begin{aligned}
\label{eq:12}
\frac{d{\tilde{z}}}{dz}=\cosh^2z>0\end{aligned}$$ (hence being invertible, defining $z({\tilde{z}})$) as well as $$\begin{aligned}
\label{eq:13}
{\left\{x,y\right\}}=-t({\tilde{z}}),\quad
{\left\{y,{\tilde{z}}\right\}}=x,\quad
{\left\{{\tilde{z}},x\right\}}=y,\end{aligned}$$ with $t({\tilde{z}}):=\tanh z({\tilde{z}})$. The non-commutative analogue of , $$\begin{aligned}
\label{eq:14}
[X,Y]=-i\hbar t({\tilde{Z}}),\quad
[Y,{\tilde{Z}}] =i\hbar X,\quad
[{\tilde{Z}},X] = i\hbar Y\end{aligned}$$ resp. (defining $W=X+iY$) $$\begin{aligned}
\label{eq:17}
[{\tilde{Z}},W]=\hbar W,\quad
[W,{W^\dagger}]=-2\hbar t({\tilde{Z}})\end{aligned}$$ clearly has solutions where ${\tilde{Z}}$ is diagonal, with $$\begin{aligned}
\label{eq:18}
{\tilde{z}}_j:={\tilde{Z}}_{jj} = {\tilde{z}}_0-j\hbar=-j\hbar\end{aligned}$$ and $$\begin{aligned}
\label{eq:19}
W_{jk}=w_j\delta_{k,j+1};\quad
|w_j|^2-|w_{j-1}|^2=-2\hbar t(-j\hbar).\end{aligned}$$ When investigating (\[eq:1\]), with $$\begin{aligned}
\label{eq:20}
X_3=h({\tilde{Z}})=:H,\quad
X_1+iX_2=W,\end{aligned}$$ (the function $h$ to be determined) one finds that the two resulting conditions (cp. (\[eq:1\])) $$\begin{aligned}
\label{eq:21}
{\big[W,[{W^\dagger},H]\big]}=0\end{aligned}$$ and $$\begin{aligned}
\label{eq:22}
\frac{1}{2}{\big[W,[{W^\dagger},W]\big]}+{\big[H,[H,W]\big]}=0\end{aligned}$$ may be solved when deforming $[W,{W^\dagger}]$ to $$\begin{aligned}
\label{eq:23}
\begin{split}
&[W,{W^\dagger}]=-2\hbar T\\
&T:=\tanh z({\tilde{Z}})+\hbar^2t_2({\tilde{Z}})+\sum_{n>2}^\infty\hbar^nt_n({\tilde{Z}}),
\end{split}\end{aligned}$$ as well as taking the relation between $H$ and ${\tilde{Z}}$ to be of the form $$\begin{aligned}
\label{eq:24}
H=z({\tilde{Z}})+\hbar^2h_2({\tilde{Z}})+\sum_{n>2}^\infty\hbar^nh_n({\tilde{Z}}).\end{aligned}$$ The advantage of keeping $[{\tilde{Z}},W]=\hbar W$ undeformed is that then ($W$ still being nonzero only on the first upper off-diagonal) $$\begin{aligned}
\label{eq:25}
\begin{split}
&f({\tilde{Z}})W = W f({\tilde{Z}}+\hbar{\mathds{1}})=:Wf_+\\
&f({\tilde{Z}}){W^\dagger}= {W^\dagger}f({\tilde{Z}}-\hbar{\mathds{1}})=:{W^\dagger}f_-
\end{split}\end{aligned}$$ so that (\[eq:21\])/(\[eq:22\]) can be seen to hold provided the following finite-difference equations are satisfied: $$\begin{aligned}
&\hbar(T_+-T) = (H_+-H)^2\label{eq:26}\\
&T{\big(2H_+-H_{++}-H\big)} = T_+{\big(2H-H_+-H_-\big)}\label{eq:27},\end{aligned}$$ where $(H_{++})_{jj}=h_{++}({\tilde{Z}})_{jj}=h({\tilde{z}}_j+2\hbar),\ldots$. Assuming $T$ and $H$ to be monotonically increasing functions of ${\tilde{Z}}$ (and $\hbar>0$), one may write (\[eq:26\]) as $$\begin{aligned}
\label{eq:15}
H_+-H=\sqrt{\hbar(T_+-T)},\end{aligned}$$ which gives the condition $$\begin{aligned}
\label{eq:16}
T{\left(\sqrt{\frac{T_+-T}{T_{++}-T_+}}-1\right)}
=T_+{\left(1-\sqrt{\frac{T_+-T}{T-T_-}}\right)}\end{aligned}$$ when inserting (\[eq:15\]) into (\[eq:27\]). Using the expansion for $T$ as given in (\[eq:23\]), and Taylor-expanding $$\begin{aligned}
\label{eq:28}
T_{\pm} = \tanh{\big(z({\tilde{Z}}\pm\hbar{\mathds{1}})\big)}+\hbar^2t_2({\tilde{Z}}\pm\hbar{\mathds{1}})+\cdots,\end{aligned}$$ as well as $T_{++}$, one finds trivial agreement in $O(\hbar)$ while the $\hbar^2$ resp. $\hbar^3$ terms demand $$\begin{aligned}
\label{eq:29}
tt'''=\frac{3}{2}\frac{t(t'')^2}{t'}+t''t',\end{aligned}$$ resp. $$\begin{aligned}
\label{eq:30}
2t(t')^2t''''+6t(t'')^3-8tt't''t'''-3(t')^2(t'')^2=0 ;\end{aligned}$$ using that for $t:=\tanh z({\tilde{z}})$ one has (with $c=c({\tilde{z}}):=\cosh(z({\tilde{z}}))$) $$\begin{aligned}
\label{eq:31}
t'=\frac{1}{c^4},\quad
t''=-\frac{4t}{c^6},\quad
t'''=\frac{24}{c^8}-\frac{28}{c^{10}},\quad
t''''=t{\left(\frac{280}{c^{12}}-\frac{192}{c^{10}}\right)}\end{aligned}$$ it is straightforward to see that (\[eq:29\]) and (\[eq:30\]) actually do hold (one should also note that in these orders $t_2$ does not yet enter). Instead of deriving the 4th order expressions (which give a third-order linear ODE for $t_2$), let us go back to (\[eq:26\]) resp. (\[eq:27\]), which is consistently solved up to $O(\hbar^3)$ by $H=z({\tilde{Z}})$ and $T=t({\tilde{Z}})$, using $$\begin{aligned}
\label{eq:32}
z'=\frac{1}{c^2},\quad
z''=-\frac{2t}{c^4},\quad
t' = (z')^2,\quad
t''= 2z'z'' ,\end{aligned}$$ while in order $\hbar^4$ one obtains the condition $$\begin{aligned}
\label{eq:33}
t_2'-2z'h_2'=\frac{(z'')^2}{4}+\frac{2}{6}z'z'''-\frac{t'''}{6}
=-\frac{1}{3}\frac{t^2}{c^8}\end{aligned}$$ (using $z'''=\frac{8}{c^6}-\frac{10}{c^8}$, and (\[eq:31\])). Both $t_2$ (from (\[eq:16\]), 4th order) and $h_2$ (from (\[eq:33\])) are indeed small corrections to $t$, resp. $z$ (note that due to $t'=1/c^4$, $c'=t/c$, any differential equation of the form $f'=\frac{\alpha}{c^{n}}$ or $\frac{\alpha t}{c^{n}}$ can easily be integrated), confirming the expectation that the power-series in (\[eq:23\]) and (\[eq:24\]) actually make sense (as formal power-series or asymptotic series, or even as series actually converging for small $\hbar$; note that due to the unboundedness of the eigenvalues of ${\tilde{Z}}$ it is necessary that $h_2({\tilde{z}}_j)$ and $t_2({\tilde{z}}_j)$ are small corrections to $z_j=z({\tilde{z}}_j)$ resp. $t({\tilde{z}}_j)$ for all $j$).
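Since $d{\tilde{z}}/dz=\cosh^2z>0$, the map (\[eq:11\]) can be inverted numerically by bisection, and the basic derivative identities $z'=1/c^2$ (inverse-function rule applied to (\[eq:12\])) and $t'=1/c^4$ (the first identity of (\[eq:31\])) can then be checked by finite differences. A minimal Python sketch (our own illustration):

```python
from math import sinh, cosh, tanh

def zt(z):
    """ztilde(z) = z/2 + sinh(2z)/4, eq. (11); d ztilde/dz = cosh^2 z > 0."""
    return z / 2 + sinh(2 * z) / 4

def z_of(target):
    """Invert ztilde by bisection (valid since zt is strictly increasing)."""
    lo, hi = -2 * abs(target) - 1, 2 * abs(target) + 1
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if zt(mid) < target else (lo, mid)
    return (lo + hi) / 2

STEP = 1e-5
zt0 = 0.3                      # arbitrary sample value of ztilde
z0 = z_of(zt0)
c = cosh(z0)

# z' = dz/dztilde = 1/cosh^2 z
zp = (z_of(zt0 + STEP) - z_of(zt0 - STEP)) / (2 * STEP)
assert abs(zp - 1 / c ** 2) < 1e-6

# t' = dt/dztilde = sech^2(z) * z' = 1/c^4
tp = (tanh(z_of(zt0 + STEP)) - tanh(z_of(zt0 - STEP))) / (2 * STEP)
assert abs(tp - 1 / c ** 4) < 1e-6
```

The bracketing interval uses ${\tilde{z}}(z)\ge z/2$ for $z\ge 0$ (and oddness of ${\tilde{z}}$), so the root always lies inside it.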
In accordance with the classical Casimir relation $$\begin{aligned}
\label{eq:34}
x^2+y^2-\cosh^2z({\tilde{z}})=x^2+y^2-c^2=0\end{aligned}$$ one may also look for $E=e({\tilde{Z}})$ such that $$\begin{aligned}
\label{eq:35}
\frac{1}{2}{\big(W{W^\dagger}+{W^\dagger}W\big)} = E =
c^2+\sum_{n\geq 2}\hbar^ne_n({\tilde{Z}}).\end{aligned}$$ The condition (take the commutator of (\[eq:35\]) with $W$, using (\[eq:23\]) and (\[eq:25\])) $$\begin{aligned}
\label{eq:36}
\begin{split}
0 &= \hbar{\big(WT+TW\big)}-[E,W]\\
&= W{\big(\hbar T+\hbar T_++E-E_+\big)}
\end{split}\end{aligned}$$ necessitates $$\begin{aligned}
\label{eq:37}
\hbar
e_0'+\hbar^2\frac{e_0''}{2}+\hbar^3{\bigg(\frac{e_0'''}{6}+e_2'\bigg)}
=\hbar 2t+\hbar^2t'+\hbar^3{\left(\frac{t''}{2}+2t_2\right)}\end{aligned}$$ i.e. (using $e_0=c^2$, $e_0'=2cc'=2t$, $e_0^{(n)}=2t^{(n-1)}$) $$\begin{aligned}
\label{eq:38}
e_2'=\frac{t''}{6}+2t_2=2t_2-\frac{2t}{3c^6}.\end{aligned}$$ As a consistency check consider again (\[eq:21\]), yielding $$\begin{aligned}
\label{eq:39} WW^{\dagger}&=& 2\hbar\frac{H_{+}-H}{2H-H_{+}-H_{-}}T\\
\label{eq:40} W^{\dagger}W&=&2\hbar\frac{H-H_{-}}{2H-H_{+}-H_{-}}T\,,\end{aligned}$$ but then using (\[eq:35\]), resulting in $$\label{eq:41}
\hbar(H_{+}-H_{-})T=E(2H-H_{+}-H_{-})\,,$$ which is consistently solved in $O(\hbar^{2})$ and $O(\hbar^{3})$ while requiring $$\label{eq:42}
c^{2}h''_{2}-\frac{2t}{c^{4}}e_{2}+\frac{2}{c^{2}}t_{2}+2th_{2}'
=\frac{t}{3}{\left(\frac{10}{c^{8}}-\frac{8}{c^{6}}\right)}
-\frac{c^{2}z''''}{12}
= \frac{t}{3}{\left(\frac{4}{c^{6}}-\frac{10}{c^{8}}\right)}$$ when comparing terms proportional to $\hbar^{4}$.
Using (\[eq:38\]) and (\[eq:33\]), as well as $z''''=\frac{-48t}{c^{8}}+\frac{80t}{c^{10}}$, then yields a 3rd-order ODE for $e_{2}$ (just as if inserting (\[eq:38\]) and (\[eq:33\]) into the third-order ODE for $t_{2}$ that results in 4th order from (\[eq:16\])), $$\label{eq:44}
\frac{c^{4}}{4}e_{2}'''+tc^{2}e_{2}''+\frac{e_{2}'}{c^{2}}
-\frac{2t}{c^{4}}e_{2}=2t{\left(-\frac{1}{c^{6}}+\frac{1}{c^{8}}\right)},$$ which is in fact slightly simpler than the one for $t_{2}$, $$\label{eq:45}
\frac{tc^{12}}{2}t_{2}'''+t_{2}''{\left(6c^{10}-\frac{13}{2}c^{8}\right)}+
tt_{2}'{\left(12c^{8}-10c^{6}\right)}-2c^{2}t_{2}+t{\left(\frac{16}{c^{4}}
-\frac{20}{c^{2}}+4\right)}=0$$ that follows from (\[eq:33\])/(\[eq:38\])/(\[eq:42\]) (and is identical to the $\hbar^{4}$-condition following from (\[eq:16\])). Taking $$\label{eq:46}
e_{2}=\frac{1}{18}{\left(4-\frac{2}{c^{2}}+\frac{1}{c^{4}}\right)}$$ as a solution of (\[eq:44\]), one finds that one can choose $$\label{eq:47}
t_{2}=\frac{t}{9}{\left(\frac{1}{c^{4}}+\frac{2}{c^{6}}\right)},
\quad h_{2}=\frac{t}{90}{\left(-4+\frac{8}{c^{2}}+\frac{11}{c^{4}}\right)}.$$ Note that $t_{2}$ and $h_{2}$ (both odd) and $e_{2}$ (even) are indeed small corrections to $t(\tilde{Z})=\tanh z(\tilde{Z})$ and $z(\tilde{Z})$ (resp. $c^{2}=\cosh^2 z(\tilde{Z})$) consistent with our claim that (\[eq:23\])/(\[eq:24\]) resp. (\[eq:18\])/(\[eq:19\])/(\[eq:20\]) (with $t$ replaced by $T$) define solutions of (\[eq:1\]), which for $\hbar \rightarrow 0$ converge to the classical commutative catenoid (described by Euler in 1744 [@Euler]). Let us comment that (cp. (\[eq:7\])) $$\label{eq:48}
G:=-\frac{1}{\hbar^{2}}\sum_{i<j}\lbrack X_{i}, X_{j}\rbrack^{2}$$ is indeed equal to ${\mathds{1}}$ to leading order (though not to all orders): $$\begin{aligned}
\label{eq:49}
\begin{split}
&\!\!\!\hbar^{2}G =\frac{1}{2}{\left(\lbrack H, W\rbrack
\lbrack W^{\dagger}, H\rbrack+\lbrack W^{\dagger}, H\rbrack
\lbrack H,W \rbrack\right)}-\lbrack X, Y\rbrack^{2}\\
&=\frac{1}{2}{\left((H-H_{-})WW^{\dagger}(H-H_{-})
+(H_{+}-H)W^{\dagger}W(H_{+}-H)\right)}-\lbrack X, Y\rbrack^{2}\\
&=\hbar T{\left(\frac{(H-H_{-})^{2}(H_{+}-H)}{2H-H_{+}-H_{-}}
+\frac{(H_{+}-H)^{2}(H-H_{-})}{2H-H_{+}-H_{-}}\right)}-
\lbrack X,Y\rbrack^{2}\\
&=\hbar T(H_{+}-H)(H-H_{-}){\left(\frac{H_{+}-H_{-}}{2H-H_{+}-H_{-}}
\right)}-\lbrack X, Y\rbrack^{2}\\
&=(H_{+}-H)(H-H_{-})E+\hbar^{2}T^{2}\\
&=\hbar^{2}\bigg({\left((z')^{2}+\hbar^{2}{\left(\frac{z'z'''}{3}+
2h_{2}z'-\frac{(z'')^{2}}{4}\right)}+\cdots\right)}(c^{2}+\hbar^{2}e_{2}+\cdots)\\
&\qquad+(t+\hbar^{2}t_{2}+\cdots)^{2}\bigg);
\end{split}\end{aligned}$$ while in leading order one thus gets $$\label{eq:50}
G_{0}=(z')^{2}c^{2}+t^{2}=\frac{1}{c^{2}}+t^{2}={\mathds{1}},$$ the terms proportional to $\hbar^{2}$, $$\label{eq:51}
(z')^{2}e_{2}+c^{2}{\left(\frac{z'z'''}{3}+2h_{2}z'
-\frac{(z'')^{2}}{4}\right)}+2tt_{2}=\frac{1}{18}{\left(\frac{40}{c^{6}}
-\frac{43}{c^{8}}\right)}$$ do not cancel, but are bounded $(\in\lbrack-\frac{1}{6},\frac{1}{4}))$ and because of $\hbar^{2}$ therefore small correction to ${\mathds{1}}$.
Note that due to the commutation relation (cp.(\[eq:23\])) $$\begin{aligned}
\label{eq:52}
\begin{split}
[X_{1}, X_{2}] &= -i\hbar T\\
[\tilde{Z}, X_{1}+iX_{2}]&=\hbar(X_{1}+iX_{2}),
\end{split}\end{aligned}$$ with $T\approx \tilde{Z}$ near the “middle” of the infinite dimensional matrix (where, due to $(\cosh z(\tilde{Z}))^{2}\approx
{\mathds{1}}+\tilde{Z}^{2}$, $X_{1}^{2}+X_{2}^{2}-X_{3}^{2}\approx
{\mathds{1}}$) one also could think of the non-commutative catenoid as a particular infinite dimensional ‘unitarizable’ representation of a non-linear deformation of $so(2,1)$.
Let us summarize: we have shown how to construct 3 infinite-dimensional matrices $X_{i}$ $(i=1,2,3)$, corresponding to the embedding functions of the classical catenoid in ${\mathbb{R}}^{3}$, satisfying $$\label{eq:53}
\sum_{i=1}^{3}{\big[X_i,[X_i,X_j]\big]}=0,$$ explicitly checked up to several orders in $\hbar$. Concretely, $$\begin{aligned}
\label{eq:54}
\begin{split}
&(X_{3})_{jk}=\delta_{jk}{\Big(z_{j}+\hbar^{2}\frac{t_{j}}{90}
{\Big(-4+\frac{8}{c_{j}^{2}}+\frac{11}{c_{j}^{4}}\Big)}+\cdots\Big)}\\
&(X_{1}+iX_{2})_{jk}=w_{j}\delta_{k,j+1}\\
&\vert w_{j}\vert^{2}-\vert w_{j-1} \vert^{2}=
-2\hbar t_{j}{\Big(1+\frac{\hbar^{2}}{9}{\Big(\frac{1}{c_{j}^{4}}
+\frac{2}{c_{j}^{6}}\Big)}+\cdots\Big)}
\end{split}\end{aligned}$$ where (cp. (\[eq:11\])) $\tilde{z}_{j}=-j\hbar$, $z_{j}=z(\tilde{z}_{j})$, $t_{j}=\tanh z(\tilde{z}_{j})$, $c_{j}=\cosh z(\tilde{z}_{j})$.
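To leading order in $\hbar$ this construction is straightforward to realize on a finite truncation. The Python sketch below (our own illustration; the truncation size and the normalization $|w_{0}|^{2}=\cosh^{2}z(0)=1$ are ad-hoc choices) builds $\tilde{Z}$ and $W$ as in (\[eq:18\])/(\[eq:19\]) and checks $[\tilde{Z},W]=\hbar W$ entrywise; $[W,W^{\dagger}]=-2\hbar\,t(\tilde{Z})$ then holds on the diagonal by construction of the recursion, up to truncation effects at the edges:

```python
from math import sinh, tanh

hbar, N = 0.1, 8                 # truncation: matrix indices j = -N .. N

def zt(z):                       # eq. (11): ztilde(z) = z/2 + sinh(2z)/4
    return z / 2 + sinh(2 * z) / 4

def z_of(target):                # invert ztilde by bisection (monotone)
    lo, hi = -2 * abs(target) - 1, 2 * abs(target) + 1
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if zt(mid) < target else (lo, mid)
    return (lo + hi) / 2

js = list(range(-N, N + 1))
t = {j: tanh(z_of(-j * hbar)) for j in js}    # t_j = tanh z(ztilde_j)

# |w_j|^2 from the recursion |w_j|^2 - |w_{j-1}|^2 = -2*hbar*t_j,
# normalized (leading order) by |w_0|^2 = cosh^2 z(0) = 1
w2 = {0: 1.0}
for j in range(1, N + 1):
    w2[j] = w2[j - 1] - 2 * hbar * t[j]
for j in range(0, -N, -1):
    w2[j - 1] = w2[j] + 2 * hbar * t[j]
assert min(w2.values()) > 0      # the w_j exist as real numbers

n = len(js)
Zt = [[-js[a] * hbar if a == b else 0.0 for b in range(n)] for a in range(n)]
W = [[w2[js[a]] ** 0.5 if b == a + 1 else 0.0 for b in range(n)] for a in range(n)]

def mul(A, B):
    return [[sum(A[a][k] * B[k][b] for k in range(n)) for b in range(n)]
            for a in range(n)]

ZW, WZ = mul(Zt, W), mul(W, Zt)
# [Zt, W] = hbar * W, exactly as in eq. (17)
assert all(abs(ZW[a][b] - WZ[a][b] - hbar * W[a][b]) < 1e-12
           for a in range(n) for b in range(n))
```

The higher-order corrections $t_{2}$, $h_{2}$ of (\[eq:47\]) could be added to $t_{j}$ and to the diagonal in the same way.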
Acknowledgment {#acknowledgment .unnumbered}
==============
We thank Jaigyoung Choe for collaboration (on a general theory of noncommutative minimal surfaces), and Ki-Myeong Lee for a discussion concerning the IKKT model.
[AA]{}
J. Hoppe. On M-Algebras, the Quantisation of Nambu-Mechanics, and Volume Preserving Diffeomorphisms, hep-th/9602020, [*Helv. Phys. Acta*]{} 70 (1997) 302-317.
T. Banks, W. Fischler, S. Shenker, L. Susskind. M Theory As A Matrix Model: A Conjecture, hep-th/9605168, [*Phys. Rev.*]{} D55:5112-5128, 1997.
N. Ishibashi, H. Kawai, Y. Kitazawa, A. Tsuchiya. A Large-N Reduced Model as Superstring, hep-th/9612115, [*Nucl. Phys.*]{} B498 (1997) 467-491.
L. Cornalba, W. Taylor. Holomorphic curves from matrices, hep-th/9807060, [*Nucl. Phys.*]{} 536:513-552, 1998.
J. L. Lagrange. Essai d'une nouvelle methode pour determiner les maxima et les minima des formules integrales indefinies. Miscellanea Taurinensia 2, 173-195 (1760-1762). Oeuvres, vol. I. Gauthier-Villars, Paris 1867, pp. 335-362.
J. B. Meusnier. Memoire sur la courbure des surfaces. Memoire des savants etrangers 10 (lu 1776), 477-510 (1785).
L. Euler. Methodus inveniendi lineas curvas maximi minimive proprietate gaudentes, 1744, in: Opera omnia I, 24.
J. C. C. Nitsche. Vorlesungen über Minimalflächen, Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen, Band 199, Springer-Verlag, Berlin Heidelberg, New York, 1975.
U. Dierkes, S. Hildebrandt, A. Küster, O. Wohlrab. Minimal surfaces I. Springer-Verlag, Berlin Heidelberg, New York, 1992.
J. Arnlind. PhD thesis, Royal Institute of Technology, 2008.
J. Arnlind, J. Hoppe. Discrete minimal surface algebras. , 6, 2010.
J. Arnlind, J. Hoppe, G. Huisken. Multi-linear formulation of differential geometry and matrix regularizations. , 91:1–39, 2012.
J. Arnlind, J. Choe, J. Hoppe. Noncommutative Minimal Surfaces *(in preparation)*.
J. Arnlind, M. Bordemann, L. Hofer, J. Hoppe, H. Shimada. Fuzzy Riemann Surfaces, , JHEP06(2009)047.
J. Arnlind, M. Bordemann, L. Hofer, J. Hoppe, H. Shimada. Noncommutative Riemann surfaces by embeddings in $\mathbb{R}^3$. , 288(2):403–429, 2009.
---
abstract: 'A novel flow state consisting of two oppositely travelling waves (TWs) with oscillating amplitudes has been found in the counterrotating Taylor-Couette system by full numerical simulations. This structure bifurcates out of axially standing waves that are nonlinear superpositions of left and right handed spiral vortex waves with equal time-independent amplitudes. Beyond a critical driving the two spiral TW modes start to oscillate in counterphase due to a Hopf bifurcation. The trigger for this bifurcation is provided by a nonlinearly excited mode of different symmetry than the spiral TWs. A three-mode coupled amplitude equation model is presented that captures this bifurcation scenario. The mode-coupling between two symmetry degenerate critical modes and a nonlinearly excited one that is contained in the model can be expected to occur in other structure forming systems as well.'
author:
- 'A. Pinter, M. Lücke, and Ch. Hoffmann'
title: ' Bifurcation of standing waves into a pair of oppositely travelling waves with oscillating amplitudes caused by three-mode interaction '
---
Many nonlinear structure forming systems that are driven out of equilibrium show a transition to travelling waves (TWs) as a result of an oscillatory instability [@CH93]. In the presence of spatial inversion symmetry in one or more directions, a standing wave (SW) solution also bifurcates, which is a nonlinear superposition of the two symmetry-degenerate, oppositely propagating TWs with equal amplitudes.
SWs and TWs have a common onset as a result of a primary Hopf bifurcation, but at onset only one of them is stable [@GS87; @DI84]. Furthermore, there are mixed patterns with [*non-equal*]{} amplitude combinations of the degenerate TWs that arise, e.g., via secondary bifurcations at larger driving. The variety with temporally constant, non-equal TW amplitudes can provide a stability transferring connection between TWs that, e.g., are stable at onset and SWs that become stable later on [@PLH06].
The variety in which the TW amplitudes oscillate in time is the subject of this paper. This solution bifurcates out of the SW via a Hopf bifurcation. To be concrete, we investigate wave structures consisting of spiral vortices in the annular gap between counter-rotating concentric cylinders of the Taylor-Couette system [@T94; @CI94]. To that end we have performed numerical simulations of the Navier-Stokes equations (NSE) to reveal the bifurcation properties as well as the spatiotemporal structure of the novel oscillating mixed wave states. In addition, we provide coupled three-mode amplitude equations that capture this bifurcation and explain the underlying mode-coupling mechanism. We are not aware of any previous report of these states in the Taylor-Couette literature. Furthermore, one can expect that the mode-coupling mechanism between two symmetry-degenerate critical modes and the nonlinearly excited one, which is described by our coupled amplitude equations and which drives the oscillatory instability, operates in other pattern forming systems as well.
The waves are realized by left handed spiral vortex (L-SPI) and right handed spiral vortex (R-SPI) structures that are mirror images of each other. The azimuthal advection by the basic circular Couette flow (CCF) rotates both like rigid objects into the same direction as the inner cylinder [@HLP04]. As a result of the enforced rotation the phases of L-SPI and R-SPI travel axially into opposite directions. This system offers an easy experimental and numerical access to forward bifurcating TWs and SWs that are called ribbons (RIBs) [@RIBs] in the Taylor-Couette literature. Being a nonlinear superposition of L-SPI and R-SPI the RIB structure also rotates azimuthally, however, such that its oscillations in axial direction form a SW.
Here we elucidate how such stable SWs lose stability to an oscillating state via a Hopf bifurcation. Therein, the interaction with another nonlinearly excited, non-travelling mode induces the TW constituents of the SW, i.e., the L-SPI and the R-SPI, to oscillate in counterphase around a common mean. These oscillating mixed wave states, which we call oscillating cross spirals (O-CR-SPI), are quite robust. Thus, they should easily be observable in experiments.
All these spiral structures are axially and azimuthally periodic. We have focussed our simulations on patterns with axial wavelength $\lambda=1.3$, measured in units of the gap width, and azimuthal wave number $M=2$. The numerical solutions of the NSE were obtained for a system with radius ratio $\eta=1/2$ by methods described in [@HLP04].
[*Control- and order parameters*]{} – The rotational velocities of the inner and outer cylinders are measured by the respective Reynolds numbers $R_1$ and $R_2$. We fix $R_1=240$ [@phasediagram] and we introduce the reduced distance $\mu=(R_2-R_2^0)/|R_2^0|$ from the common onset of SPI and RIB flow at $R_2^0=-605.5$ as control parameter. We characterize the spatiotemporal properties of the vortex waves using the Fourier decomposition $$\label{modenansatz}
f(r,\varphi,z,t) = \sum_{m,n} f_{m,n}(r,t)\,e^{i(m\varphi + nkz)}$$ in azimuthal and axial direction. Here one has $f_{-m,-n}=\overline{f_{m,n}}$ with the overbar denoting complex conjugation. Order parameters are the moduli $|A|,|B|,|C|$ and the time derivatives $\dot\theta_A,\dot\theta_B,\dot\theta_C$ of the phases of the dominant modes in the decomposition (\[modenansatz\]) of, say, the radial velocity $u$ at midgap $u_{2,1}=A=|A|e^{-i\theta_A}, u_{2,-1}=B=|B|e^{-i\theta_B}$, and $u_{0,2}=C=|C|e^{-i\theta_C}$. Here, $A$ and $B$ are the amplitudes of the marginal L- and R-SPI modes. When both are finite as, e.g., in the SW of the RIB state their nonlinear coupling generates the $m=0$ $C$-mode below its threshold for linear growth: pure $m=0$ stationary Taylor vortices bifurcate out of the CCF only later on. Although $|C|$ itself remains small compared to $|A|, |B|$ in the RIB state its feedback on $A, B$ triggers the Hopf bifurcation of the O-CR-SPI: the oscillations of, say, $|A|$ are driven by bilinear mode couplings of $BC$ as indicated in Fig. \[Antrieb-O-CR-SPI\].
We also use the combined order parameters $$\label{EQ:def-comb}
S=\frac{|A|^2+|B|^2}{2}, \, D=\frac{|A|^2-|B|^2}{2}, \, \Phi=\theta_C+\theta_B-\theta_A-\pi$$ that are better suited to describe the bifurcation of the O-CR-SPI with oscillating $D(t)$ and $\Phi(t)$ out of the RIB state, $D=0=\Phi$.
[*Bifurcation sequence*]{} – The pure TW shown by circles in Fig. \[Bifurkdiagr\] and the SW solution $(A=B,C \neq 0)$ marked by diamonds bifurcate at $\mu=0$ out of the unstructured CCF. Initially, the SPI is stable and the RIB is unstable. But then there appears a stable cross-spiral (CR-SPI) solution \[triangles in Fig. \[Bifurkdiagr\](b)\] which transfers stability from the SPI to the RIB. The moduli and phase velocities of these three structures are time-independent. At $\mu_H$ in Fig. \[Bifurkdiagr\] the RIB loses stability in a supercritical Hopf bifurcation to the novel modulated state of O-CR-SPI. Increasing $\mu$ further beyond the range shown in Fig. \[Bifurkdiagr\], the O-CR-SPI loses stability at $R_2 \approx -543$ to oscillating structures with azimuthal wave number $M=1$ that are not discussed here.
[*Dynamics of the modulated SW*]{} – Figure \[t-charakter.-Groessen\] shows the temporal variation of characteristic quantities of the O-CR-SPI over one modulation period $\tau$. Thick lines refer to $\mu$ immediately above onset $\mu_H$. Thin lines show the behavior at a larger value $\mu_>$ (arrow in Fig. \[Bifurkdiagr\]) that is close to the end of the existence interval of O-CR-SPI. The moduli $|A|,|B|$ in Fig. \[t-charakter.-Groessen\](a) and the phase velocities $\dot \theta_A, \dot \theta_B$ in Fig. \[t-charakter.-Groessen\](c) each oscillate in counterphase around a respective common mean. Also $\dot \theta_C$ oscillates. Furthermore, $|C|$ and the combined order parameter $S$ show small amplitude oscillations with twice the frequency of the other quantities. Close to onset all oscillations are harmonic, with $|C|$ and $S$ being practically constant. But at $\mu_>$ the oscillations of $|A|, |B|$ and $\dot \theta_A, \dot \theta_B, \Phi$ have become quite anharmonic, whereas those of $S$, $D$, $|C|$, and $\dot\theta_C$ are still harmonic. The Fourier spectra in Figs. \[fourier-charakter.-Groessen\](a)-(g) of the temporal profiles shown by thin lines in Figs. \[t-charakter.-Groessen\](a)-(g) reflect this behavior at $\mu_>$.
In the RIB state the phases are such that $\theta_C(t)+\theta_B(t)-\theta_A(t)=\pi$. But as a consequence of the Hopf bifurcation, $D$ as well as $\Phi$ oscillate in the O-CR-SPI. The squares of their oscillation amplitudes, $\widetilde D^2$ and $\widetilde \Phi^2$, increase at onset linearly with $\mu$, with a subsequent quadratic correction, cf. Figs. \[Fig-Aufspalten\](c)-(d). The monotonic decrease of the modulation period $\tau$ is shown in Fig. \[Fig-Aufspalten\](a). Note that the modulation amplitudes of $S$ in Fig. \[Fig-Aufspalten\](b) and also of $|C|$ remain very small compared to those of $D$ and $\Phi$.
[*Amplitude equations*]{} – The Hopf bifurcation behavior and the dynamics close to the transition from RIB to O-CR-SPI can be explained and described within a three-mode amplitude-equation approach. It reveals (i) how the rotationally symmetric $C$-mode is generated nonlinearly via the interaction of $A$ and $B$, i.e., of the M=2 SPI constituents in the RIB and (ii) how then $C$ – after it has reached a critical size beyond $\mu_H$ – induces amplitude oscillations in $A$ and $B$.
Invariance under axial translation and reflection of the Taylor-Couette system [@CI94] restricts the form of the three coupled amplitude equations to
\[EQ-gekoppelte-AG-A-B\] $$\begin{aligned}
\dot A&=&A\ G\left(|A|^2,|B|^2,|C|^2 \right)+i\kappa{BC},\label{EQ-gek-1}\\
\dot B&=&B\ \widehat G\left(|A|^2,|B|^2,|C|^2 \right)+i\kappa{A\overline{C}},\label{EQ-gek-2}\\
\dot C&=&C\ H\left(|A|^2,|B|^2,|C|^2 \right)+\kappa_0 {A\overline{B}}.\label{EQ-gek-3}
\label{gekoppelte-GLE-O-CR-SPI}\end{aligned}$$
With $\widehat G\left(|A|^2,|B|^2,|C|^2 \right)=G\left(|B|^2,|A|^2,|C|^2
\right)$ and $H(|A|^2,|B|^2,|C|^2)=\overline H(|B|^2,|A|^2,|C|^2)$ the equations are invariant under the operation $(A,B,C)
\leftrightarrow (B,A,\overline{C})$ which reflects the axial inversion symmetry $z \leftrightarrow -z$. The functions $G=G^{'}+iG^{''}$ and $H=H^{'}+iH^{''}$ are complex. The superscripts $^{'}$ and $^{''}$ identify the real and imaginary parts, respectively. The coupling constants $\kappa$ and $\kappa_0$ are real.
Since only invariance under translation and reflection along one spatial direction has been used in deriving Eqs. (\[EQ-gekoppelte-AG-A-B\]) our description of the phenomenon of a SW with oscillating TW components in terms of Eqs. (\[EQ-gekoppelte-AG-A-B\]) potentially applies to all bifurcating systems with O(2) symmetry in the center manifold, which is quite common.
In the following we discard the coupling term $\kappa_0 A {\overline B}$. It is small in our case and, more importantly, we checked that it is not relevant for driving the Hopf oscillations. They are generated by the coupling terms in (\[EQ-gek-1\]) and (\[EQ-gek-2\]) as we shall show in the next section.
The mechanism causing the Hopf bifurcation into the modulated SW can be better isolated by rewriting the amplitude equations (\[EQ-gekoppelte-AG-A-B\]) in terms of the combined order parameters (\[EQ:def-comb\])
\[EQ-gekoppelte-AG-S-D\] $$\begin{aligned}
\dot S &=& 2[D\ G_{-}^{'}+S\ G_{+}^{'}],\qquad \dot {|C|} =|C|\ H^{'},\label{EQ-gek-4}\label{EQ-gek-6} \qquad \\
\dot D &=& 2[S\ G_{-}^{'}+D\ G_{+}^{'}]-2\kappa|C| S^{\ast}\sin\Phi, \label{EQ-gek-5}\\
\dot \Phi &=& 2G_{-}^{''}+2\kappa\frac{|C|}{S^{\ast}} D \cos\Phi - H^{''} \label{EQ-gek-7},\end{aligned}$$
where $S^{\ast}=S\sqrt{1-(D/S)^2} \simeq S$. Here we defined $G_{\pm}=(G \pm\widehat G)/2$. Note that $G_{+}$ and $H^{'}$ ($G_{-}$ and $H^{''}$) are even (odd) in $D$ [@explanation] as a result of the inversion symmetry. Hence, eqs. (\[EQ-gek-4\]) are even in $D$. This in turn explains that $S$ and $|C|$ oscillate with twice the frequency of the other quantities in Fig. \[t-charakter.-Groessen\]. On the other hand, Eq. (\[EQ-gek-7\]) is odd in $D$ and causes the absence of a peak at $2/\tau$ in Fig. \[fourier-charakter.-Groessen\](g).
We have determined the specific functions $G$ and $H$ for our system via fits to the numerically obtained bifurcation branches of SPI, RIB, and CR-SPI in Fig. \[Bifurkdiagr\] and to the pure Taylor vortex solution (not shown here) with $A=0=B, C \ne 0$, and half the spiral wavelength. This reproduces the bifurcation behavior close to the Hopf threshold well. Note, however, that the Hopf bifurcation is a universal phenomenon of systems like (\[EQ-gekoppelte-AG-A-B\],\[EQ-gekoppelte-AG-S-D\]) that is not specific to the Taylor-Couette system. This is most easily understood with the help of the universal small-$D$ expansion of (\[EQ-gekoppelte-AG-S-D\]) that results from the symmetry properties.
[*Hopf bifurcation*]{} – For small $D$, i.e., close to the Hopf bifurcation threshold we can use the expansions
\[EQ:gekoppelte-AG-S-D-small-D\] $$\begin{aligned}
G_{+}=G_{+}^{(0)}+\mathcal{O}\left(D^2\right), \quad
G_{-}=G_{-}^{(1)} D+\mathcal{O}\left(D^3\right),\\
H^{'}=H^{'(0)}+\mathcal{O}\left(D^2\right), \quad
H^{''}=H^{''(1)}D+\mathcal{O}\left(D^3\right).\end{aligned}$$
Here the leading order terms $G_{+}^{(0)}, G_{-}^{(1)}, H^{'(0)}, H^{''(1)}$ still depend on $S$ and $|C|^2$. Inserting (\[EQ:gekoppelte-AG-S-D-small-D\]) into (\[EQ-gekoppelte-AG-S-D\]) and using the smallness of $\Phi$ yields a simplified model that is linear in $D$
\[EQ:gekoppelte-AG-S-D-close-hopf\] $$\begin{aligned}
\dot S &=& 2S\ G_{+}^{'(0)},\qquad \dot {|C|} =|C|\ H^{'(0)},\label{EQ:gekoppelte-AG-S-D-close-hopf-1}\label{EQ:gekoppelte-AG-S-D-close-hopf-3}\\
\dot D &=& 2\left[S\ G_{-}^{'(1)}+\ G_{+}^{'(0)}\right]D - 2\kappa|C|S\Phi,\label{EQ:gekoppelte-AG-S-D-close-hopf-2}\\
\dot \Phi &=& \left[2G_{-}^{''(1)}+2\kappa\frac{|C|}{S} - H^{''(1)}\right]D .
\label{EQ:gekoppelte-AG-S-D-close-hopf-4}\end{aligned}$$
It explains the Hopf bifurcation out of the RIB state and the O-CR-SPI properties close to onset. For example, $S$ and $|C|$ are virtually constant because they are decoupled from $D$ and $\Phi$ in the model eqs. (\[EQ:gekoppelte-AG-S-D-close-hopf\]). Furthermore, Eq. (\[EQ:gekoppelte-AG-S-D-close-hopf-4\]) shows that $\Phi$ is enslaved by $D$ and that the phase shift between them is $\tau/4$ as to be seen in Fig. \[t-charakter.-Groessen\] close to $\mu_H$. This justifies the solution ansatz $$\label{EQ-D-Phi}
D(t)=\widetilde D \cos(\omega_H t), \quad
\Phi(t)=\widetilde \Phi \sin(\omega_H t)$$ where $\omega_H$ is the Hopf frequency.
The latter is identified together with the bifurcation threshold $\mu_H$ by a linear stability analysis of the RIB fixed point $D=0=\Phi, S=S_{RIB}(\mu), C=C_{RIB}(\mu)$ for which $ G_{+}^{'(0)}=0$ according to Eq. (\[EQ:gekoppelte-AG-S-D-close-hopf-1\]). Thus, the linearized equations for the stability-relevant deviations from this fixed point read $\dot D =aD + b\Phi, \quad \dot \Phi = c D$, with coefficients $a=2SG_{-}^{'(1)}, b=-2\kappa|C|S,
c=2G_{-}^{''(1)}+2\kappa\frac{|C|}{S} - H^{''(1)}$ to be taken at the RIB fixed point. Consequently, the location of the zero in $a(\mu)$ determines $\mu_H$ and the imaginary part of the eigenvalue at $\mu_H$, i.e., the Hopf frequency is then given by $\omega_H^2=-bc\propto \kappa +h.o.t$, revealing that the coupling terms in (\[EQ-gek-1\]) and (\[EQ-gek-2\]) cause the Hopf bifurcation. Furthermore, $a(\mu)=\alpha
(\mu-\mu_H)$ with positive $\alpha$ to ensure decay of oscillations below $\mu_H$ and growth above it.
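The linear stability argument above reduces to the eigenvalues of the $2\times 2$ matrix $J=\left(\begin{smallmatrix}a & b\\ c & 0\end{smallmatrix}\right)$. A minimal numerical sketch of this step follows; the coefficient values for $\alpha$, $\kappa$, $|C|$, and $S$ are illustrative placeholders (not fitted values), and only the coupling contribution to $c$ is kept for simplicity.

```python
import numpy as np

# Linearization about the RIB fixed point: d/dt (D, Phi) = J (D, Phi),
# J = [[a, b], [c, 0]] with a = alpha*(mu - mu_H), b = -2*kappa*|C|*S.
# All coefficient values below are illustrative placeholders, not fitted ones.
alpha, mu_H, kappa, C_abs, S = 1.0, 0.0, 0.5, 0.3, 1.0
b = -2.0 * kappa * C_abs * S        # b < 0
c = 2.0 * kappa * C_abs / S         # only the coupling part of c, for illustration
omega_H = np.sqrt(-b * c)           # Hopf frequency, omega_H^2 = -b*c > 0

for mu in (-0.1, 0.0, 0.1):         # below, at, and above the Hopf threshold
    a = alpha * (mu - mu_H)
    ev = np.linalg.eigvals(np.array([[a, b], [c, 0.0]]))
    # Re(ev) < 0 below mu_H (decay), = 0 at mu_H, > 0 above (growth)
    print(f"mu = {mu:+.1f}: eigenvalues {ev}")
```

At $\mu=\mu_H$ the pair of eigenvalues is purely imaginary, $\pm i\omega_H$, which is exactly the crossing described in the text.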
[*Conclusion*]{} – The bifurcation of a novel spiral vortex structure with oscillating TW amplitudes out of an SW is shown to be triggered by the coupling to a nonlinearly excited mode when the latter exceeds a critical strength. Since this novel state, in which the TW amplitudes oscillate in counterphase around a common mean, occurs quite robustly in a relatively wide parameter range, it should be easily accessible to experiments.
Our results have been obtained by full numerical simulations and explained and confirmed by a coupled amplitude equation model that captures the mode coupling between two symmetry degenerate critical modes and a nonlinearly excited one. Our bifurcation scenario can occur in all systems with an O(2) symmetric center manifold, a quite general setting that arises, for example, in systems with translation and inversion symmetry. It therefore has the potential to occur also in other structure-forming systems, say, in hydrodynamics, chemical reactions, or biological systems, where any two symmetry degenerate basic modes $A$ and $B$ couple similarly to a third one, $C$, that is nonlinearly excited by them and that destroys the $A=B$ state once $C$ has reached a critical size.
This work was supported by the Deutsche Forschungsgemeinschaft.
[999]{} M. C. Cross and P. C. Hohenberg, Rev. Mod. Phys. [**65**]{}, 851 (1993). M. Golubitsky and I. Stewart, Arch. Rat. Mech. Anal. [**87**]{}, 107 (1985). Y. Demay and G. Iooss, J. Mec. Theor. Appl., Spec. Suppl., 193 (1984). A. Pinter, M. Lücke, and Ch. Hoffmann, Phys. Rev. Lett. [**96**]{}, 044506 (2006). P. Chossat and G. Iooss, [*The Couette-Taylor Problem*]{}, (Springer, Berlin, 1994). R. Tagg, Nonlinear Science Today [**4**]{}, 1 (1994). Ch. Hoffmann, M. Lücke, and A. Pinter, Phys. Rev. E [**69**]{}, 056309 (2004). Stable RIBs were found [@TESM89] in a long system of aspect ratio $\Gamma=36$ with radius ratio $\eta=0.727$ where theory [@DI84; @CI94] and numerical simulations [@TESM89] predicted a subcritical transition. RIBs in end-plate-dominated short systems ($\Gamma < 10, \eta=1/2$) were recently reported [@LPA03-04] to come with two different symmetries [@KP92]. R. Tagg, W. S. Edwards, H. L. Swinney, and P. S. Marcus, Phys. Rev. A [**39**]{}, 3734 (1989). J. Langenberg, G. Pfister, and J. Abshagen, Phys. Rev. E [**68**]{}, 056308 (2003); Phys. of Fluids [**16**]{}, 2757 (2004). E. Knobloch and R. Pierce, in [*Ordered and turbulent patterns in Taylor-Couette Flow*]{}, ed. C. D. Andereck and F. Hayot, (Plenum Press, NY, 1992), p. 83. A phase diagram of SPI, RIB, CR-SPI, and O-CR-SPI for $\lambda=1.3$ and an investigation of the wave number dependence of the bifurcation properties is provided in [@PLH08-2]. A. Pinter, M. Lücke, and Ch. Hoffmann, arXiv:0803.3898. To see this, write $f(|A|^2,|B|^2)=f(S+D,S-D)$.
![(Color online) (a) Dominant modes and their complex conjugates in the Fourier space of Eq. (\[modenansatz\]). (b) Bilinear coupling of modes $B$ and $C$ (dashed arrows) that drive oscillations of mode $A$ (solid arrow). \[Antrieb-O-CR-SPI\]](./fig1.eps){width="8.6cm"}
![(Color online) Bifurcation diagrams of SPI (red circles), RIB (blue diamonds), CR-SPI (purple triangles), and O-CR-SPI (magenta lines and crosses) obtained from numerical simulations of the NSE versus $\mu$ and $R_2$. SPI and CR-SPI are displayed only in (a) and (b); the latter shows the blow-up of the rectangle near the origin of (a). Shown are the squared mode amplitudes $|A|^2, |B|^2$ (a), (b) and phase velocities $\dot
\theta_A, \dot \theta_B$ (c) of the marginal modes and the same for the nonlinearly excited mode $C$ in (d) and (e). Filled (open) symbols denote stable (unstable) solutions with time-independent amplitudes. Crosses refer to temporal averages of the O-CR-SPI. Upper (lower) line shows the maximum (minimum) of the oscillation range indicated by vertical lines. The arrow at $\mu_H$ ($R_2=-587$) marks the Hopf bifurcation of the modulated SWs. The second arrow at $\mu=\mu_>$ ($R_2=-546$) is inserted for later reference. \[Bifurkdiagr\]](./fig2.eps){width="8.6cm"}
![(Color online) Time variation of O-CR-SPI over one modulation period $\tau$. The left column shows moduli and phase velocities of $A$, $B$, and $C$. In (a) and (c) solid lines refer to $A$ and dashed ones to $B$. The right column contains the order parameters $S, D$, and $\Phi$ (\[EQ:def-comb\]). Thick lines are modulation profiles close to the Hopf threshold $\mu_H$ and thin ones those at the larger $\mu_{>}$ identified by second arrow in Fig. \[Bifurkdiagr\]. \[t-charakter.-Groessen\]](./fig3.eps){width="8.6cm"}
![(Color online) Fourier spectra of the modulation profiles shown by thin lines in Fig. \[t-charakter.-Groessen\] for $\mu=\mu_>$ (cf. arrow in Fig. \[Bifurkdiagr\]). Note that $|C|$ and $S$ oscillate with twice the frequency of the other quantities and that the spectra of $|C|$, $S$, and $D$ practically do not contain higher harmonics. The spectra of $\Phi$, $D$, and $\dot \theta_C$ ($|C|$ and $S$) contain only peaks at $(2l+1)/\tau$ ($2l/\tau$) with $l=0,1,2\ldots$. \[fourier-charakter.-Groessen\]](./fig4.eps){width="8.6cm"}
![(Color online) Bifurcation properties of RIB (blue diamonds) and O-CR-SPI (magenta lines and squares) obtained from numerical solutions of the NSE as functions of $\mu$ and $R_2$: (a) oscillation period $\tau$ of the modulation, say, of the moduli $|A|$ and $|B|$ of the O-CR-SPI, (b) S (thin lines delimit the oscillation range indicated by vertical bars), (c) and (d) squared oscillation amplitudes $\widetilde D$ of $D$ and $\widetilde \Phi$ of $\Phi$, respectively. \[Fig-Aufspalten\]](./fig5.eps){width="8.6cm"}
---
abstract: |
Unitary space-time modulation using multiple antennas promises reliable communication at high transmission rates. The basic principles are well understood and certain criteria for designing good unitary constellations have been presented.
There exist two important design criteria for unitary space time codes. When the signal to noise ratio is large, it is well known that the [*diversity product*]{} (DP) of a constellation should be as large as possible. It is less well known that the [*diversity sum*]{} (DS) is a very important design criterion for codes operating in a low SNR environment. In some situations it is more practical and reasonable to consider a constellation optimized over a certain SNR interval; for this reason we introduce the [*diversity function*]{} as a general design criterion. So far, no general method exists to design good-performing constellations with large diversity for any number of transmit antennas and any transmission rate.
In this paper we propose constellations with suitable structure which allow one to construct codes with excellent diversity using geometrical symmetry and numerical methods. We also demonstrate how these structured constellations outperform currently existing constellations and explain why the proposed constellation structure admits a simple decoding algorithm: sphere decoding. The presented design methods work for constellations of any dimension and for any transmission rate. Moreover, codes based on the proposed structure are very flexible and can be optimized for any signal to noise ratio.
author:
- |
Guangyue Han, Joachim Rosenthal\
[Department of Mathematics]{}\
[University of Notre Dame]{}\
[Notre Dame, IN 46556.]{}\
[ [email protected], [email protected]]{}\
[ http://www.nd.edu/\~eecoding/]{}
title: |
Geometrical and Numerical Design of Structured\
Unitary Space Time Constellations [^1]
---
Introduction and Model
======================
One way to achieve reliable transmission at a high transmission rate on a wireless channel is to use multiple transmit or receive antennas. Either because of rapid changes in the channel parameters or because of limited system resources, it is reasonable to assume that neither the transmitter nor the receiver knows the channel state information (CSI), i.e., the channel is non-coherent.
In [@ho00a], Hochwald and Marzetta study unitary space-time modulation. Consider a wireless communication system with $M$ transmit antennas and $N$ receive antennas operating in a Rayleigh flat-fading channel. We assume time is discrete and at each time slot, signals are transmitted simultaneously from the $M$ transmitter antennas. We can further assume that the wireless channel is quasi-static over a time block of length $T$.
A signal constellation ${\mathcal{V}}:=\{ \Phi_1,\ldots, \Phi_L\}$ consists of $L$ matrices having size $T \times M$ and satisfying $T \ge M$ and $\Phi_k^* \Phi_k = I_M$. The last equation simply states that the columns of $\Phi_k$ form a “unitary frame”, i.e. the column vectors all have unit length in the complex vector space $\mathbb{C}^T$ and the vectors are pairwise orthogonal. The scaled matrices $\sqrt{T} \Phi_k$, $k=1,2,\cdots,L$, represent the code words used during the transmission. It is known that the transmission rate is determined by $L$ and $T$:
$$\mathtt{R}=\frac{\log_2(L)}{T}.$$
Let $\rho$ represent the expected signal-to-noise ratio (SNR) at each receive antenna. The basic equation between the received signal $R$ and the transmitted signal $\sqrt{T} \Phi$ is given through: $$R=\sqrt{\frac{\rho T}{M}}\Phi H+W,$$ where the $M \times N$ matrix $H$ accounts for the multiplicative complex Gaussian fading coefficients and the $T
\times N$ matrix $W$ accounts for the additive white Gaussian noise. The entries $h_{m,n}$ of the matrix $H$ as well as the entries $w_{t,n}$ of the matrix $W$ are assumed to have a statistically independent normal distribution $\mathcal{CN}(0,1)$. In particular it is assumed that the receiver does not know the exact values of either the entries of $H$ or $W$ (other than their statistical distribution).
The decoding task asks for the computation of the most likely sent code word $\Phi$ given the received signal $R$. Denote by $||\ \ ||_F$ the Frobenius norm of a matrix. If $A=(a_{i,j})$ then the Frobenius norm is defined through $|| A
||_F=\sqrt{\sum_{i,j} |a_{i,j}|^2}.$ Under the assumption of the above model the maximum likelihood (ML) decoder will have to compute: $$\Phi_{ML}=\displaystyle \arg \max_{\Phi_l \in
\{\Phi_1,\Phi_2,\cdots,\Phi_L\}} {\|R^*\Phi_l\|}_F$$ for each received signal $R$. (See [@ho00a]).
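The ML rule is straightforward to implement. The following sketch (Python/NumPy; the random constellation and all parameter values are illustrative, not a construction from the paper) simulates one transmission through the model $R=\sqrt{\rho T/M}\,\Phi H+W$ and decodes by maximizing $\|R^*\Phi_l\|_F$.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary_frame(T, M):
    """Random T x M matrix with orthonormal columns (QR of a complex Gaussian)."""
    Z = rng.standard_normal((T, M)) + 1j * rng.standard_normal((T, M))
    Q, _ = np.linalg.qr(Z)
    return Q

def ml_decode(R, constellation):
    """Return the index l maximizing ||R^* Phi_l||_F (the ML rule above)."""
    scores = [np.linalg.norm(R.conj().T @ Phi) for Phi in constellation]
    return int(np.argmax(scores))

T, M, N, L, rho = 4, 2, 2, 8, 10.0           # illustrative parameters
constellation = [random_unitary_frame(T, M) for _ in range(L)]

# transmit code word sqrt(T)*Phi_l through R = sqrt(rho*T/M) Phi H + W
l = 3
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
W = (rng.standard_normal((T, N)) + 1j * rng.standard_normal((T, N))) / np.sqrt(2)
R = np.sqrt(rho * T / M) * constellation[l] @ H + W
l_hat = ml_decode(R, constellation)          # ML estimate of the sent index
```

In the noiseless limit the rule always recovers the sent index, since $\Phi_l^*\Phi_l=I_M$ while the cross products have singular values strictly below one for a generic constellation.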
Let $\delta_m(\Phi_l^* \Phi_{l'})$ be the $m$-th singular value of $\Phi_l^* \Phi_{l'}$. It has been shown in [@ho00a] that the pairwise probability of mistaking $\Phi_l$ for $\Phi_{l'}$ using maximum likelihood decoding satisfies:
$$\begin{aligned}
P_{\Phi_l,\Phi_{l'}} &=& \mbox{Prob}\left(\mbox{ choose }\Phi_{l'}\mid
\Phi_{l}\mbox{ transmitted } \right)(\rho)\nonumber\\
&=& \mbox{Prob}\left(\mbox{ choose }\Phi_{l}\mid
\Phi_{l'}\mbox{ transmitted } \right)(\rho) \nonumber\\
&=& \frac{1}{4\pi} \int_{-\infty}^{\infty}\frac{4}{4w^2+1}\prod_{m=1}^M
\left[1+
\frac{(\rho T/M)^2 (1-\delta_m^2 (\Phi_l^*
\Phi_{l'})) }{4(1+\rho T/M)} (4w^2+1)\right]^{-N}\!\! dw \label{exactf}\\
&\le& \frac{1}{2} \prod_{m=1}^M
\left[1+
\frac{(\rho T/M)^2(1-\delta_m^2 (\Phi_l^*
\Phi_{l'}))}{4(1+\rho T/M)} \right]^{-N}. \label{mainf}\end{aligned}$$
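The Chernoff bound is cheap to evaluate from the singular values of $\Phi_l^*\Phi_{l'}$; a minimal sketch, checked on a pair of frames with perpendicular column spaces:

```python
import numpy as np

def chernoff_bound(Phi_l, Phi_lp, rho, T, M, N):
    """Right-hand side of the Chernoff bound on the pairwise error probability."""
    delta = np.linalg.svd(Phi_l.conj().T @ Phi_lp, compute_uv=False)
    factor = (rho * T / M) ** 2 / (4.0 * (1.0 + rho * T / M))
    return 0.5 * float(np.prod((1.0 + factor * (1.0 - delta ** 2)) ** (-N)))

# two frames with perpendicular column spaces: all singular values are zero
T, M, N, rho = 4, 2, 1, 1.0
I = np.eye(T, dtype=complex)
bound = chernoff_bound(I[:, :M], I[:, M:2 * M], rho, T, M, N)
# here factor = 4/12 = 1/3, so bound = 0.5 * (4/3)**(-2) = 0.28125
```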
It is a basic design objective to construct constellations ${\mathcal{V}}=\{
\Phi_1,\ldots, \Phi_L\}$ such that the pairwise probabilities $P_{\Phi_l,\Phi_{l'}}$ are as small as possible. Mathematically we are dealing with an optimization problem with unitary constraints:
Minimize $\displaystyle \max_{l \neq l'} P_{\Phi_l,\Phi_{l'}}$ with the constraints $\Phi_i^*\Phi_i=I$ where $i=1,2,\cdots,L$.
Formula[ ]{} is sometimes referred to as “Chernoff’s bound”. It is easy to work with; the exact formula[ ]{} in general is not, although it can also be useful in the numerical search for good constellations. Researchers have been searching for constructions where the maximal pairwise probability $P_{\Phi_l,\Phi_{l'}}$ is as small as possible. Of course the pairwise probabilities depend on the chosen signal to noise ratio $\rho$, and the construction of constellations therefore has to be optimized for particular values of the SNR.
The design objective is slightly simplified if one assumes that transmission operates at high SNR. In [@ho00], a design criterion for high SNR is presented and the problem is converted to the design of a finite set of unitary matrices whose diversity product is as large as possible. For this special situation several researchers [@al98; @ta00a; @sh01; @sh02] came up with algebraic constructions and we will say more about this in the next section.
The main purpose of this paper is to present structured constellations and to develop geometrical and numerical procedures which allow one to construct unitary constellations with excellent diversity for any set of parameters $M,N,T,L$ and for any signal to noise ratio $\rho$. The paper is structured as follows. In Section \[Sec-diversity\] we introduce the diversity function of a constellation. This function depends on the signal to noise ratio and gives, for each value $\rho$, an indication of how well the constellation ${\mathcal{V}}$ will perform. For large values of $\rho$ the diversity function is governed by the diversity product; for small values of $\rho$ it is governed by the diversity sum. These concepts are introduced in Section \[Sec-diversity\] as well and are illustrated on some well known constellations previously studied in the literature.
In Section \[Sect-Alg\] we first show that randomly constructed codes are fully diverse with probability one. Then we start the main task of this paper, namely to parameterize constellations which will be efficient for numerical search algorithms. For this purpose we introduce the concept of a [*weak group structure*]{} and we classify all weak group structures whose elements are normal and positive.
In Section \[Sec-geometrical\] we investigate an algebraic structure which led to some of the best constellations which we were able to derive. We also show that in the good-performing codes the distance spectrum profile for both the diversity sum and the diversity product are important.
Section \[Sec-numerical\] is one of the main sections of this paper. We first explain a general method on how one can efficiently design excellent constellations for any set of parameters $M,N,T,L$ and $\rho$. For this we review the properties of the complex Stiefel manifold and the Cayley transform. We conclude this section with an extensive table where we publish a large set of codes having some of the best diversity sums and diversity products in their parameter range. More extensive lists of codes with large diversity can be found on the website [@ha03u2].
Finally in Section \[sphere-decoding\] we explain how the algebraic structure which underlies most of the derived codes can be used to have a fast decoding algorithm. Our simulations indicate that in the design of codes more attention should be given to the diversity sum (more generally diversity function) which previously has not been fully studied.
The Diversity Function, the Diversity Product (DP) and the Diversity Sum (DS)
=============================================================================
\[Sec-diversity\]
In this paper we will be concerned with the construction of constellations where the right hand sides in[ ]{} and[ ]{}, maximized over all pairs $l,l'$, are as small as possible for fixed numbers $T,M,N,L$. As already mentioned, this task depends on the signal to noise ratio at which the system operates. For this purpose we define the [*exact diversity function*]{} dependent on the constellation ${\mathcal{V}}=\{
\Phi_1,\ldots, \Phi_L\}$ and a particular SNR $\rho$ through: $$\label{exact-div}
\mathcal{D}_e({\mathcal{V}},\rho):=
\max_{l \ne l'} \mbox{Prob}\left(\mbox{ choose }\Phi_{l'}\mid
\Phi_{l}\mbox{ transmitted } \right)(\rho)$$ For a particular constellation with a large number $L$ of elements, with many transmit and receive antennas the function $\mathcal{D}_e({\mathcal{V}},\rho)$ is very difficult to compute. Indeed for each pair $\Phi_{l'},\Phi_{l}$ it is required to compute the singular values of the $M\times M$ matrix $\Phi_l^* \Phi_{l'}$ and then one has to evaluate up to $L(L-1)/2$ integrals of the form[ ]{} and this has to be done for each value of $\rho$. Although this task is formidable it can be done in cases where $T,M,L$ are all in the single digits using e.g. Maple.
Using Chernoff’s bound[ ]{} we define a simplified function called the [*diversity function*]{} through: $$\label{div}
\mathcal{D}({\mathcal{V}},\rho):=
\max_{l \ne l'}
\frac{1}{2} \prod_{m=1}^M
\left[1+
\frac{(\rho T/M)^2}{4(1+\rho T/M)} (1-\delta_m^2 (\Phi_l^*
\Phi_{l'}))\right]^{-N}.$$ The computation of $\mathcal{D}({\mathcal{V}},\rho)$ does not require the evaluation of an integral and the computation requires essentially the computation of $ML(L-1)/2$ singular values. The singular values $\delta_m (\Phi_l^*\Phi_{l'})$ are by definition all real numbers in the interval $[0,1]$ as we assume that the columns of $\Phi_l,\Phi_{l'}$ form both orthonormal frames. The functions $\mathcal{D}_e({\mathcal{V}},\rho)$ and $\mathcal{D}({\mathcal{V}},\rho)$ are the smallest if the singular values $\delta_m (\Phi_l^*\Phi_{l'})$ are as small as possible. These numbers are all equal to zero if and only if the column spaces of $\Phi_l,\Phi_{l'}$ are pairwise perpendicular. We call such a constellation [*fully orthonormal*]{}. Since the columns of $\Phi_l$ generate an $M$-dimensional subspace this can only happen if $L\leq T/M$. On the other hand if $L\leq T/M$ it is easy to construct a constellation where the singular values of $(\Phi_l^*\Phi_{l'})$ are all zero. Just pick $LM$ different columns from a $T\times T$ unitary matrix. Figure \[fig-2\] depicts the functions $\mathcal{D}_e({\mathcal{V}},\rho)$ and $\mathcal{D}({\mathcal{V}},\rho)$ for a fully orthonormal constellation with $T=10$ and $M=N=2$.
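The construction just mentioned (for $L\le T/M$, pick $LM$ distinct columns of a $T\times T$ unitary matrix) is easy to realize numerically; a sketch with the same parameters $T=10$, $M=2$ as in the figure:

```python
import numpy as np

T, M, L = 10, 2, 5                 # L = T/M, the largest fully orthonormal size
rng = np.random.default_rng(1)
Z = rng.standard_normal((T, T)) + 1j * rng.standard_normal((T, T))
Q, _ = np.linalg.qr(Z)             # a T x T unitary matrix
constellation = [Q[:, k * M:(k + 1) * M] for k in range(L)]

# every pairwise product Phi_l^* Phi_l' vanishes: the column spaces are
# mutually perpendicular, so all singular values delta_m are zero
max_sv = max(
    np.linalg.svd(A.conj().T @ B, compute_uv=False).max()
    for i, A in enumerate(constellation)
    for B in constellation[i + 1:]
)
```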
In order to study the function $\mathcal{D}({\mathcal{V}},\rho)$ more carefully let $$\label{tilderho}
\tilde{\rho}:=\frac{(\rho T/M)^2}{4(1+\rho T/M)}.$$ In some small interval $[\rho_1,\rho_2]$ the maximum in[ ]{} is achieved for some fixed indices $l,l'$ and in terms of $\tilde{\rho}$ the function $\mathcal{D}({\mathcal{V}},\rho)$ is of the form: $$\mathcal{D}({\mathcal{V}},\tilde{\rho})=\frac{1}{2\left(
1+c_1\tilde{\rho}+\cdots +c_M\tilde{\rho}^M\right)^N},$$ where the coefficients $c_1,\ldots,c_M$ depend on the particular constellation and on the chosen interval $[\rho_1,\rho_2]$. For an interval close to zero the dominating term will be the coefficient $c_1$; up to some factor this term will define the [*diversity sum*]{} of the constellation. When $\tilde{\rho}\gg 0$ the dominating term will be the coefficient $c_M$, and up to some scaling this term will define the [*diversity product*]{} of the constellation. A constellation will have a small diversity function for small values of $\rho$ (and presumably performs well in this range) when it is chosen to have a large diversity sum. A constellation will have a small diversity function for large values of $\rho$ (and presumably performs well in this range) when it is chosen to have a large diversity product. In the next two subsections we will study the limiting behavior of $\mathcal{D}({\mathcal{V}},\rho)$ as $\rho$ goes to zero and to infinity.
Design criterion for high SNR
-----------------------------
When the SNR $\rho$ is very large then $\mathcal{D}({\mathcal{V}},\rho)$ can be approximated via: $$\mathcal{D}({\mathcal{V}},\rho)\simeq
\max_{l \ne l'}
\frac{1}{2}
\left(
\frac{(\rho T/M)^2}{4(1+\rho T/M)}
\right)^{-NM}
\prod_{m=1}^M
\frac{1}{(1-\delta_m^2 (\Phi_l^*
\Phi_{l'}))^{N}}.$$ It is the design objective to construct a constellation $\Phi_1,
\Phi_2,\cdots, \Phi_n$ such that $$\min_{l \ne l'} \prod_{m=1}^M(1-\delta_m^2(\Phi_l^*
\Phi_{l'}))$$ is as large as possible. This last expression defines in essence the diversity product. In order to compare different dimensional constellations it is customary to use the definition:
(See [@ho00]) \[div-prod\] The [*diversity product*]{} of a unitary constellation ${\mathcal{V}}$ is defined as $$\prod {\mathcal{V}}= \min_{l \ne l'} \left(\prod_{m=1}^M (1-
\delta_m(\Phi_l^* \Phi_{l'}) ^2)\right)^{\frac{1}{2M}}.$$
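Definition \[div-prod\] translates directly into code; a minimal sketch, checked on a fully orthonormal pair, which attains the maximal diversity product 1:

```python
import numpy as np

def diversity_product(constellation):
    """min over pairs of (prod_m (1 - delta_m^2))^(1/(2M)), as in the definition."""
    M = constellation[0].shape[1]
    best = np.inf
    for i, A in enumerate(constellation):
        for B in constellation[i + 1:]:
            d = np.linalg.svd(A.conj().T @ B, compute_uv=False)
            best = min(best, float(np.prod(1.0 - d ** 2)) ** (1.0 / (2 * M)))
    return best

# a fully orthonormal pair (perpendicular column spaces) gives the maximum, 1
I = np.eye(4, dtype=complex)
dp = diversity_product([I[:, :2], I[:, 2:4]])   # -> 1.0
```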
An important special case occurs when $T=2M$. In this situation it is customary to represent all unitary matrices $\Phi_k$ in the form: $$\label{specialform}
\Phi_k=\frac{\sqrt{2}}{2} \left(\begin{array}{c}
I\\
\Psi_k
\end{array}\right).$$ Note that by definition of $\Phi_k$ the matrix $\Psi_k$ is a $M
\times M$ unitary matrix. The diversity product as defined in Definition \[div-prod\] has then a nice form in terms of the unitary matrices. For this let $\lambda_m$ be the $m$th eigenvalue of a matrix, then $$1-\delta_m^2(\Phi_{l'}^* \Phi_l)=\frac{1}{4}
\lambda_m(2I_M-\Phi_l^* \Phi_{l'}-\Phi_{l'}^* \Phi_l)
=\frac{1}{4}\delta_m^2(I_M-\Psi_{l'}^*
\Psi_l)=\frac{1}{4}\delta_m^2(\Psi_{l'}-\Psi_l).$$ So we have $$\prod_{m=1}^M (1-\delta_m^2(\Phi_{l'}^*
\Phi_l))^{\frac{1}{2M}}=\frac{1}{2}\prod_{m=1}^M
\delta_m(\Psi_{l'}-\Psi_l)^{\frac{1}{M}}=\frac{1}{2}|
\det(\Psi_{l'}-\Psi_l)|^{\frac{1}{M}}.$$ When $T=2M$ and the constellation ${\mathcal{V}}$ is defined as above, then the formula of the diversity product assumes the simple form: $$\label{diversity}
\prod {\mathcal{V}}=\frac{1}{2} \min_{0 \leq l < l' \leq L}
|\det(\Psi_l-\Psi_{l'})|^{\frac{1}{M}}.$$
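For $T=2M$ the determinant form above can be checked against the singular-value form of Definition \[div-prod\] on a random pair; a sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unitary(M):
    Z = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    Q, _ = np.linalg.qr(Z)
    return Q

M = 2
Psi_l, Psi_lp = random_unitary(M), random_unitary(M)
Phi_l = np.vstack([np.eye(M), Psi_l]) / np.sqrt(2)       # the special form, T = 2M
Phi_lp = np.vstack([np.eye(M), Psi_lp]) / np.sqrt(2)

d = np.linalg.svd(Phi_l.conj().T @ Phi_lp, compute_uv=False)
dp_sv = float(np.prod(1.0 - d ** 2)) ** (1.0 / (2 * M))         # singular-value form
dp_det = 0.5 * abs(np.linalg.det(Psi_l - Psi_lp)) ** (1.0 / M)  # determinant form
```

The two values agree to machine precision, as the chain of identities in the text guarantees.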
We call a constellation ${\mathcal{V}}$ fully diverse if $\prod {\mathcal{V}}> 0$. Much effort has been devoted to constructing constellations with a large diversity product. (See e.g. [@ho00; @li02; @ha02p2; @ha02p; @sh01; @sh02; @ta00a]). For the particular situation $T=2M$ with special form[ ]{} the design asks for the construction of a discrete subset ${\mathcal{V}}=\{\Psi_1,\ldots,\Psi_L\}$ of the set of $M\times M$ unitary matrices $U(M)$. When this discrete subset has the structure of a discrete subgroup of $U(M)$, the condition that ${\mathcal{V}}$ is fully diverse is equivalent to the condition that the identity matrix is the only element of ${\mathcal{V}}$ having an eigenvalue of 1. In other words the constellation ${\mathcal{V}}$ is required to operate fixed point free on the vector space $\mathbb{C}^M$. Using a classical classification result of fixed point free unitary representations by Zassenhaus [@za36], Shokrollahi et al. [@sh01; @sh02] were able to study the complete list of fully diverse finite group constellations inside the unitary group $U(M)$. Some of these constellations have the best known diversity product for given fixed parameters $M,N,L$. Unfortunately, the set of configurations that can be derived in this way is somewhat limited. These constellations are also optimized for the diversity product, whereas, as we demonstrate in this paper, for unitary space time modulation more attention should perhaps be given to the diversity sum.
In most of the literature mentioned above researchers focus their attention to constellations having the special form[ ]{}. Unitary differential modulation [@ho00] is used to avoid sending the identity (upper part of every element in the constellation) redundantly. This increases the transmission rate by a factor of 2 to:
$$\mathtt{R}=\frac{\log_2(L)}{M}=2\frac{\log_2(L)}{T}.$$
For this reason we will also focus, in the later part of the paper, on the special form[ ]{}. Nonetheless it will become obvious that the numerical techniques also work in the general situation.
Design criterion for low SNR channel
------------------------------------
As we mentioned before a constellation with a large diversity sum will have a small diversity function at small values of the signal to noise ratio. This is particularly suitable when the system operates in a very noisy environment. When $\rho$ is small, using Formula[ ]{}, one has the following expansion: $$\begin{gathered}
\prod_{m=1}^M [1+ \frac{(\rho T/M)^2}{4(1+\rho T/M)} (1-\delta_m^2
(\Phi_l^* \Phi_{l'}))]= \prod_{m=1}^M [1+ \tilde{\rho}(1-
\delta_m^2(\Phi_l^* \Phi_{l'}))]\\
=1+\tilde{\rho} \sum_{m=1}^M (1-\delta_m^2(\Phi_l^*
\Phi_{l'}))+O(\tilde{\rho}^2).\end{gathered}$$
When $\rho \rightarrow 0$, i.e. $\tilde{\rho} \rightarrow 0$, we can omit the higher order terms $O(\tilde{\rho}^2)$; keeping the upper bound on $P_{\Phi_l,\Phi_{l'}}$ small then requires that $$\sum_m (1-\delta_m^2(\Phi_l^* \Phi_{l'}))=(M-{\|\Phi_l^* \Phi_{l'}
\|}_F^2)$$ be large. In order to lower the pairwise error probability, the objective is thus to make ${\|\Phi_l^* \Phi_{l'} \|}_F^2$ as small as possible for every pair $l, l'$. It follows that at high SNR the error probability primarily depends on $\prod_{m=1}^M
(1-\delta_m^2(\Phi_l^* \Phi_{l'}))$, whereas at low SNR it primarily depends on $\sum_{m=1}^M
(1-\delta_m^2(\Phi_l^* \Phi_{l'}))$. In order to be able to compare constellations of different dimensions, we define:
\[div-sum\] The [*diversity sum*]{} of a unitary constellation ${\mathcal{V}}$ is defined as $$\sum {\mathcal{V}}= \min_{l \ne l'} \sqrt{1-\frac{
{\|\Phi_l^*\Phi_{l'}\|}_F^2}{M}}.$$
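Definition \[div-sum\] is equally direct to compute; a minimal sketch, again checked on a fully orthonormal pair, which attains the maximal diversity sum 1:

```python
import numpy as np

def diversity_sum(constellation):
    """min over pairs of sqrt(1 - ||Phi_l^* Phi_l'||_F^2 / M), as in the definition."""
    M = constellation[0].shape[1]
    best = np.inf
    for i, A in enumerate(constellation):
        for B in constellation[i + 1:]:
            val = 1.0 - np.linalg.norm(A.conj().T @ B) ** 2 / M
            best = min(best, np.sqrt(max(val, 0.0)))  # clamp tiny negative round-off
    return best

# a fully orthonormal pair (perpendicular column spaces) gives the maximum, 1
I = np.eye(4, dtype=complex)
ds = diversity_sum([I[:, :2], I[:, 2:4]])   # -> 1.0
```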
Again one has the important special case where $T=2M$ and the matrices $\Phi_k$ take the special form[ ]{}. In this case one verifies that $$\begin{gathered}
{\|\Phi_{l}^*\Phi_{l'}\|}_F^2=\frac{1}{4}{\|I+\Psi_l^*\Psi_{l'}\|}_F^2
=\frac{1}{4}{{\rm tr}\,}((I+\Psi_{l'}^*\Psi_l)(I+\Psi_l^*\Psi_{l'}))\\
=\frac{1}{4}{{\rm tr}\,}(2I+\Psi_{l'}^*\Psi_l+\Psi_l^*\Psi_{l'})=
\frac{1}{4}(4M-(2M-{{\rm tr}\,}(\Psi_{l'}^*\Psi_l+\Psi_l^*\Psi_{l'})))\\
=\frac{1}{4}(4M-{{\rm tr}\,}((\Psi_l-\Psi_{l'})^*(\Psi_l-\Psi_{l'})))=
\frac{1}{4}(4M-{\|\Psi_l-\Psi_{l'}\|}_F^2)\end{gathered}$$
For the form[ ]{} the diversity sum assumes the following simple form: $$\label{T2M-div-sum}
\sum {\mathcal{V}}= \min_{l,l'}\frac{1}{2\sqrt{M}}{\|\Psi_l- \Psi_{l'} \|_F}.$$ Without mentioning the term the concept of diversity sum was used in [@ho00a1]. Liang and Xia [@li02 p. 2295] explicitly defined the diversity sum in the situation when $T=2M$ using equation[ ]{}. Definition \[div-sum\] naturally generalizes the definition to arbitrary constellations.
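The chain of Frobenius-norm identities above can be verified numerically for a random pair in the special form; a sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_unitary(M):
    Z = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    Q, _ = np.linalg.qr(Z)
    return Q

M = 2
Psi_l, Psi_lp = random_unitary(M), random_unitary(M)
Phi_l = np.vstack([np.eye(M), Psi_l]) / np.sqrt(2)    # special form, T = 2M
Phi_lp = np.vstack([np.eye(M), Psi_lp]) / np.sqrt(2)

# general definition vs. the simplified T = 2M formula
ds_general = np.sqrt(1.0 - np.linalg.norm(Phi_l.conj().T @ Phi_lp) ** 2 / M)
ds_special = np.linalg.norm(Psi_l - Psi_lp) / (2.0 * np.sqrt(M))
```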
We want to point out that the diversity sum is a design criterion only for unitary constellations. Hochwald and Marzetta [@ho00a] calculate the non-coherent space time channel capacity and indicate that unitary signal constellations are capacity achieving signal sets only in high SNR scenarios. In the low SNR case the transmitting power should be allocated asymmetrically, i.e., unitary constellations are not capacity achieving in the first place. However, unitary signal sets are easily manageable and one can take advantage of the differential modulation technique [@ho00] to speed up the transmission. Moreover, our simulation results indicate that codes with a near optimal diversity sum tend to perform significantly better than currently existing codes optimized for the diversity product in low and even moderate SNR scenarios. It is therefore quite reasonable, and closer to practical use, to construct unitary constellations with a good diversity sum.
As the formulas make clear, the diversity sum and the diversity product are in general very different. There is, however, an exception: $T=4$, $M=2$, with the constellation ${\mathcal{V}}$ in the special form[ ]{}. If in addition all the $2\times 2$ matrices $\{ \Psi_1,\ldots, \Psi_L\}$ are a subset of the special unitary group $$SU(2)=\{ A\in {\mathbb{C}}^{2\times 2}\mid A^*A=I\mbox{ and }\det A=1\}$$ then it turns out that the diversity product $\prod {\mathcal{V}}$ and the diversity sum $\sum {\mathcal{V}}$ of such a constellation coincide. For this note that elements $\Psi_l,\Psi_{l'}$ of $SU(2)$ have the special form: $$\Psi_l={\left( \begin{array}{ccc}
a &\;& b \\ -\bar{b} &\;& \bar{a} \end{array} \right)},\
\Psi_{l'}={\left( \begin{array}{ccc}
c &\;& d \\ -\bar{d} &\;& \bar{c} \end{array} \right)}.$$ Through a direct calculation one verifies that $\det(\Psi_l-\Psi_{l'})=|a-c|^2+|b-d|^2$ and ${\|\Psi_l-\Psi_{l'}\|}_F^2=2(|a-c|^2+|b-d|^2)$. But this means that $\prod {\mathcal{V}}= \sum {\mathcal{V}}$ for constellations inside $SU(2)$.
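The coincidence of the two quantities inside $SU(2)$ is easy to confirm numerically for a random pair; a sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

def random_su2():
    """Random element of SU(2): [[a, b], [-conj(b), conj(a)]] with |a|^2 + |b|^2 = 1."""
    v = rng.standard_normal(4)
    v /= np.linalg.norm(v)
    a, b = v[0] + 1j * v[1], v[2] + 1j * v[3]
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

Psi_l, Psi_lp = random_su2(), random_su2()
# pairwise diversity product and diversity sum for M = 2 in the special form T = 2M
dp_pair = 0.5 * abs(np.linalg.det(Psi_l - Psi_lp)) ** 0.5
ds_pair = np.linalg.norm(Psi_l - Psi_lp) / (2.0 * np.sqrt(2))
# both equal (1/2) * sqrt(|a-c|^2 + |b-d|^2), so they coincide
```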
Four illustrative examples {#SubSec-I}
--------------------------
The diversity sum and the diversity product govern the diversity function at low SNR and at high SNR, respectively. Codes optimized at these extreme values of the SNR-axis do not necessarily perform well on the “other side of the spectrum”. In this subsection we illustrate the introduced concepts on four examples. All examples have comparable parameters, namely $T=4$, $M=2$, and size $L$ equal to 121 or 120. The first two examples are well studied in the literature. We derived the third example by numerical methods and the fourth by geometrical design.
#### Orthogonal Design:
This constellation has been considered by several authors [@al98; @sh01]. For our purpose we simply define this code as a subset of $SU(2)$: $$\left\{\frac{\sqrt{2}}{2}\left(\begin{array}{cc}
e^{\frac{2m\pi i}{11}}&e^{\frac{2n\pi i}{11}}\\
-e^{-\frac{2n\pi i}{11}}&e^{-\frac{2m\pi i}{11}}
\end{array} \right)|m,n=0,1,\cdots,10\right\}.$$ The constellation has 121 elements and the diversity sum and the diversity product are both equal to $0.1992$.
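This value is easy to reproduce. The sketch below builds the 121 matrices and evaluates the common value via the $SU(2)$ determinant formula; the minimum works out to $\frac12\sqrt{1-\cos(2\pi/11)}\approx 0.1992$.

```python
import numpy as np

# the 121-element orthogonal design as a subset of SU(2)
V = [np.array([[np.exp(2j * np.pi * m / 11), np.exp(2j * np.pi * n / 11)],
               [-np.exp(-2j * np.pi * n / 11), np.exp(-2j * np.pi * m / 11)]]) / np.sqrt(2)
     for m in range(11) for n in range(11)]

# inside SU(2): DP = DS = min over pairs of (1/2) * sqrt(|det(Psi_l - Psi_l')|)
dp = min(0.5 * np.sqrt(abs(np.linalg.det(A - B)))
         for i, A in enumerate(V) for B in V[i + 1:])
# dp equals 0.5*sqrt(1 - cos(2*pi/11)), approximately 0.1992
```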
#### Unitary Representation of $SL_2(\mathbb{F}_5)$:
Shokrollahi et al. [@sh01] derived a constellation using the theory of fixed point free representations whose diversity product is near optimal. This constellation appears as a unitary representation of the finite group $SL_2(\mathbb{F}_5)$ and we will refer to this constellation as the $SL_2(\mathbb{F}_5)$-constellation. The finite group $SL_2(\mathbb{F}_5)$ has 120 elements and this is also the size of the constellation. In order to describe the constellation let $\eta = e^{\frac{2\pi i}{5}}$ and define $$P=\frac{1}{\sqrt{5}} \left(\begin{array}{cc}
\eta^2-\eta^3&\eta^1-\eta^4\\
\eta^1-\eta^4&\eta^3-\eta^2\\
\end{array} \right),\ \
Q=\frac{1}{\sqrt{5}} \left(\begin{array}{cc}
\eta^1-\eta^2&\eta^2-\eta^1\\
\eta^1-\eta^3&\eta^4-\eta^3\\
\end{array} \right).$$ Then the constellation is given by the set of matrices $(PQ)^jX$,where $j = 0,1,\cdots,9$, and $X$ runs over the set $$\begin{gathered}
\{ I_2, P, Q, QP, QPQ, QPQP, QPQ^2, QPQPQ, QPQPQ^2, \\
QPQPQ^2P, QPQPQ^2PQ, QPQPQ^2PQP \}.\end{gathered}$$ The constellation has rate $R = 3.45$ and $\prod
{SL_2(\mathbb{F}_5)}=\sum {SL_2(\mathbb{F}_5)} =
\frac{1}{2}\sqrt{\frac{(3-\sqrt{5})}{2}} \sim 0.3090$. The diversity product of this constellation is truly outstanding. For illustrative purposes we plotted in Figure \[fig-1\] the exact diversity functions and the diversity function of this constellation.
#### Numerically Derived Constellation:
Using a simulated annealing algorithm we found, after a short computation, a constellation with a very good diversity sum. The constellation is given through a set of 121 matrices $$\begin{gathered}
\left\{\Psi_{k,l}:=A^kB^l|A=\left(\begin{array}{cc}
-0.9049 + 0.3265*i& 0.1635 + 0.2188*i\\
0.0364 + 0.2707*i& -0.8748 + 0.4002*i
\end{array}\right),\right. \\
\left.
B=\left(\begin{array}{cc}
-0.1596 + 0.9767*i& -0.1038 + 0.0994*i\\
0.0833 - 0.1171*i& -0.9432 + 0.2995*i
\end{array}\right), k,l=0,1,\cdots,10 \right\}.\end{gathered}$$ As we explain in Section \[sphere-decoding\], maximum likelihood decoding of this constellation admits a simple algorithm: sphere decoding.
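As a sanity check, one can rebuild the constellation from the four-decimal generators printed above and recompute its diversity sum. This is a sketch: the generators are only printed to four decimals, so they are unitary and reproduce the tabulated value $0.3886$ only approximately; the normalization $\sum {\mathcal{V}}=\frac{1}{2\sqrt{M}}\min\|\cdot\|_F$ is assumed, consistently with the other values in this section.

```python
import numpy as np

A = np.array([[-0.9049 + 0.3265j,  0.1635 + 0.2188j],
              [ 0.0364 + 0.2707j, -0.8748 + 0.4002j]])
B = np.array([[-0.1596 + 0.9767j, -0.1038 + 0.0994j],
              [ 0.0833 - 0.1171j, -0.9432 + 0.2995j]])

# Four-decimal generators are unitary only up to ~1e-3.
assert np.allclose(A.conj().T @ A, np.eye(2), atol=1e-3)
assert np.allclose(B.conj().T @ B, np.eye(2), atol=1e-3)

V = [np.linalg.matrix_power(A, k) @ np.linalg.matrix_power(B, l)
     for k in range(11) for l in range(11)]
ds = min(np.linalg.norm(V[i] - V[j])
         for i in range(len(V)) for j in range(i + 1, len(V))) / (2 * np.sqrt(2))
print(ds)  # close to the tabulated diversity sum 0.3886
```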
#### Geometrically Designed Constellation:
Based on the algebraic structure we propose in this paper, we further incorporate geometrical symmetry into the design. A geometrically designed constellation can be described as follows: $$\begin{gathered}
\left\{\Psi_{k}:=A^kB^k|A=\left(\begin{array}{cc}
e^{17 \pi i/60}& 0\\
0& e^{13 \pi i/60}
\end{array}\right),\right. \\
\left. B=\left(\begin{array}{cc}
\cos(22\pi/60)& \sin(22\pi/60)\\
-\sin(22\pi/60)& \cos(22\pi/60)
\end{array}\right), k=0,1,\cdots,119 \right\}.\end{gathered}$$ This constellation has a superb diversity sum and a reasonably good diversity product. One can also use sphere decoding to implement maximum likelihood decoding of this constellation.
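A quick numerical check reproduces the diversity values of this constellation quoted in the table below. This is a sketch; the diagonal entries of $A$ are read as the unit-modulus phases $e^{17\pi i/60}$ and $e^{13\pi i/60}$, and the usual normalizations of the diversity sum and product are assumed:

```python
import numpy as np

A = np.diag([np.exp(17j * np.pi / 60), np.exp(13j * np.pi / 60)])
z = 22 * np.pi / 60
B = np.array([[np.cos(z), np.sin(z)], [-np.sin(z), np.cos(z)]])
V = [np.linalg.matrix_power(A, k) @ np.linalg.matrix_power(B, k)
     for k in range(120)]

pairs = [(i, j) for i in range(120) for j in range(i + 1, 120)]
# Diversity sum: (1/(2*sqrt(2))) * min Frobenius distance over all pairs.
ds = min(np.linalg.norm(V[i] - V[j]) for i, j in pairs) / (2 * np.sqrt(2))
# Diversity product: (1/2) * min |det(Vi - Vj)|^(1/2) over all pairs.
dp = 0.5 * min(abs(np.linalg.det(V[i] - V[j])) ** 0.5 for i, j in pairs)
print(round(ds, 4), round(dp, 4))  # 0.4156 0.1464
```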
The following table summarizes the parameters of the four constellations:
|                    | Orthogonal design | $SL_2(\mathbb{F}_5)$ | Numerically derived | Geometrically designed |
|--------------------|-------------------|----------------------|---------------------|------------------------|
| Number of elements | 121               | 120                  | 121                 | 120                    |
| Diversity sum      | 0.1992            | 0.309                | 0.3886              | 0.4156                 |
| Diversity product  | 0.1992            | 0.309                | 0.0278              | 0.1464                 |
Of course we were curious about the performance of these four codes. Figure \[fig-3\] provides simulation results for each of the four constellations. Note that the numerically derived code, which has a very bad diversity product, nevertheless performs very well due to its exceptional diversity sum. One can see that up to $12$ dB the numerically derived code outperforms the group code by about $1$ dB. In fact, our simulation results show that up to $35$ dB the numerical code still performs much better than the orthogonal one. At around $18$ dB, however, the group constellation surpasses the numerical one due to its exceptional diversity product. The geometrically designed constellation has a better diversity sum and diversity product than the numerical one, and therefore performs better (our results show that the two performance curves are quite close, although the geometrical one is slightly better). These simulation results indicate that the diversity sum is a very important parameter for a unitary constellation in the low-SNR regime.
Constellations With Algebraic Structure
=======================================
\[Sect-Alg\]
Before we venture into the realm of structured constellations, we would like to explore random unitary space time constellations first. We introduce the Haar distributed random matrix, which in some sense can be viewed as a high dimensional generalization of a complex random variable with circularly symmetric distribution $\mathcal{CN}(0,1)$.
The Haar measure on $U(M)$ is defined to be a probability measure $\mathcal{H}$ on $U(M)$ which is translation invariant: for any measurable set $S$ in $U(M)$ and any fixed element $U_0$ in $U(M)$ $$\mathcal{H}(S)=\mathcal{H}(U_0S).$$ A unitary random matrix $\textbf{U}$ is Haar distributed (h.d.) if for any measurable set $S$ we have $$Pr(\textbf{U} \in S)=\mathcal{H}(S).$$
Note that a h.d. matrix is also called an isotropically distributed matrix in [@ma99a]. We want to point out that the Haar measure can be defined more generally: in fact every compact Lie group admits a unique (up to scalar) translation invariant measure, the Haar measure [@bo86b].
A well known yet non-trivial fact is that for any measurable set $S \subset U(M)$, we have $$\mathcal{H}(S)=\mathcal{H}(S^*),$$ where $S^*$ consists of the conjugate transposes of all the elements of $S$. Thus for a h.d. matrix $\textbf{U}$, one can verify $$Pr(\textbf{U}^* \in S)=Pr(\textbf{U} \in
S^*)=\mathcal{H}(S^*)=\mathcal{H}(S).$$ We conclude immediately that $\textbf{U}^*$ is also a h.d. matrix. One can also verify that the product of two independent h.d. matrices is still h.d. Another very interesting property of a h.d. matrix concerns its spectrum. As derived in [@go98], the joint probability density for the eigenvalues of a h.d. random matrix $\textbf{U} \sim {{\rm diag}\,}(e^{i \mathbf{\theta_1}}, e^{i
\mathbf{\theta_2}}, \cdots, e^{i \mathbf{\theta_M}})$ in $U(M)$ is given by the Weyl denominator formula: $$f(\theta_1, \theta_2, \cdots, \theta_M)=\frac{1}{(2\pi)^M M!}
\prod_{j < k} {|e^{i\theta_j}-e^{i\theta_k}|}^2.$$ The properties of h.d. matrices lead to the following theorem about random unitary space time constellation:
For a random unitary space time constellation ${\mathcal{V}}$ consisting of $L$ independent h.d. random matrices $\mathbf{U_1},\mathbf{U_2}, \cdots, \mathbf{U_L}$, we have $$Pr(\prod {\mathcal{V}}=0)=0,$$ that is, the probability of ${\mathcal{V}}$ being fully diverse is $1$.
First we can rewrite $$Pr(\prod {\mathcal{V}}=0)=Pr(\bigcup_{j < k}
|\det(\mathbf{U_j}-\mathbf{U_k})|=0) \leq \sum_{j < k}
Pr(|\det(\mathbf{U_j}-\mathbf{U_k})|=0).$$ Next we are going to show that the probability of the event $|\det(\mathbf{U_j}-\mathbf{U_k})|=0$ happening is $0$. Now, $$Pr(|\det(\mathbf{U_j}-\mathbf{U_k})|=0)=
Pr(|\det(I-\mathbf{U_j}^*\mathbf{U_k})|=0).$$ Let $\mathbf{U}$ denote $\mathbf{U_j}^*\mathbf{U_k}$; then $\mathbf{U}$ is a h.d. matrix. Using the Weyl denominator formula, one computes $$Pr(|\det(I-\mathbf{U})|=0)=Pr(\bigcup_{l=1}^M
\mathbf{\theta_l}=0) \leq \frac{1}{(2\pi)^M M!} \sum_{l=1}^M
\int\!\!\!\int_{\theta_l=0} \prod_{j < k}
{|e^{i\theta_j}-e^{i\theta_k}|}^2 d\theta_1 d\theta_2 \cdots
d\theta_M.$$ Since $$\int\!\!\!\int_{\theta_j=0} \prod_{j < k}
{|e^{i\theta_j}-e^{i\theta_k}|}^2 d\theta_1 d\theta_2 \cdots
d\theta_M \leq 2^{M(M-1)} \int\!\!\!\int_{\theta_j=0} d\theta_1
d\theta_2 \cdots d\theta_M=0,$$ we conclude that $$Pr(|\det(\mathbf{U_j}-\mathbf{U_k})|=0)=0.$$ Consequently $$Pr(\prod {\mathcal{V}}=0)=0,$$ that is the probability of ${\mathcal{V}}$ being fully diverse is $1$.
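The Weyl density used in the proof above can be sanity-checked numerically: for $M=2$ the density $f(\theta_1,\theta_2)=\frac{1}{(2\pi)^2 2!}|e^{i\theta_1}-e^{i\theta_2}|^2$ should integrate to $1$ over $[0,2\pi)^2$ (a minimal sketch):

```python
import numpy as np

# Riemann sum of the M = 2 Weyl density over a uniform grid on [0, 2*pi)^2.
n = 400
ts = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
t1, t2 = np.meshgrid(ts, ts)
f = np.abs(np.exp(1j * t1) - np.exp(1j * t2)) ** 2 / ((2.0 * np.pi) ** 2 * 2)
integral = f.sum() * (2.0 * np.pi / n) ** 2
print(integral)  # 1.0 (up to floating point error)
```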
Note that if an $M \times M$ matrix $G$ with independent complex Gaussian entries is input to the $QR$ algorithm, the resulting unitary matrix $Q$ is Haar distributed [@ea83]. For simplicity we sketch the proof as follows: first one can write $Q=GR^{-1}$; then, for a fixed unitary matrix $U_0$, it can be checked that $U_0 G$ has the same distribution as $G$. Consequently $U_0 Q$ has the same distribution as $Q$, i.e., the distribution of $Q$ is translation invariant. The uniqueness of the translation invariant measure on a compact Lie group therefore guarantees that $Q$ is Haar distributed. As a consequence of the above theorem, an algorithm which produces a fully diverse unitary constellation with probability $1$ can be given as follows: take $L$ instances of complex Gaussian matrices and feed them through the $QR$ algorithm; the resulting $L$ unitary matrices constitute a fully diverse constellation with probability $1$.
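The procedure can be sketched in a few lines. One caveat: standard $QR$ routines do not enforce the diagonal convention on $R$ that makes the factorization unique, so the sketch below normalizes the phases of $R$'s diagonal explicitly (the well-known correction for Haar sampling; without it $Q$ is unitary but not exactly Haar distributed):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(M, rng):
    """Sample an M x M Haar-distributed unitary matrix via QR of a Ginibre matrix."""
    G = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
    Q, R = np.linalg.qr(G)
    d = np.diagonal(R)
    # Absorb the phases of R's diagonal into Q; this pins down the otherwise
    # non-unique QR factorization so that Q is exactly Haar distributed.
    return Q * (d / np.abs(d))

# L random matrices form a fully diverse constellation with probability 1:
L, M = 8, 2
V = [haar_unitary(M, rng) for _ in range(L)]
min_det = min(abs(np.linalg.det(V[i] - V[j]))
              for i in range(L) for j in range(i + 1, L))
print(min_det > 0)  # True
```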
From an algebraic geometry point of view one easily shows that the set of constellations with $\prod {\mathcal{V}}=0$ forms a lower dimensional proper algebraic sub-variety of $U(M)^L$. In particular the set of all fully diverse constellations is Zariski open [@ha77] in $U(M)^L$, i.e., fully diverse constellations are dense in $U(M)^L$. Haar distributed random constellations are, however, not practical for maximum likelihood decoding in high transmission-rate scenarios: a random constellation carries no algebraic structure, so the decoding process becomes too complex. In the sequel we investigate structured constellations and explain how one can restrict the parameter space to judiciously chosen subsets and how one can convert maximum likelihood decoding into lattice decoding.
Consider a general constellation of square unitary matrices, $${\mathcal{V}}=\{\Psi_1, \Psi_2, \cdots, \Psi_L\}.$$ In order to calculate the diversity product, one needs to perform $\frac{L(L-1)}{2}$ calculations of $|\det(\Psi_i-\Psi_j)|$, one for each distinct pair $i,j$. The same holds for the diversity sum; for simplicity we discuss only the diversity product in the sequel unless specified otherwise.
If one deals with a group constellation, then only $L-1$ such determinant calculations are needed, and this is one of the remarkable advantages of group constellations. This is a direct consequence of $$|\det(\Psi_i-\Psi_j)|=|\det(\Psi_i)\det(I-\Psi_i^*\Psi_j)|
=|\det(I-\Psi_i^*\Psi_j)|,$$ where $\Psi_i^*\Psi_j$ is still in the group.
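A quick numerical illustration of this reduction on a small cyclic group constellation (a sketch; the generator is an arbitrary choice):

```python
import numpy as np

# A cyclic group constellation {I, A, ..., A^{L-1}}:
L = 5
A = np.diag([np.exp(2j * np.pi / L), np.exp(4j * np.pi / L)])
V = [np.linalg.matrix_power(A, k) for k in range(L)]

# L(L-1)/2 pairwise determinants ...
full = min(abs(np.linalg.det(V[i] - V[j]))
           for i in range(L) for j in range(i + 1, L))
# ... agree with the L-1 determinants |det(I - A^k)| that suffice for a group:
reduced = min(abs(np.linalg.det(np.eye(2) - np.linalg.matrix_power(A, k)))
              for k in range(1, L))
print(np.isclose(full, reduced))  # True
```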
As we mentioned before, group constellations are however very restrictive as far as the algebraic structure is concerned. In the following we present some constellations which have a small number of generators and whose diversity can be efficiently computed. This also ensures that the total parameter space to be searched remains limited. We start with an example:
\[Exmp4\] Consider the constellation $${\mathcal{V}}= \{A^kB^l|A, B \in U(M), k=0, \cdots, p, l=0, \cdots, q
\}.$$ The parameter space for this constellation is $U(M)\times U(M)$, a manifold of dimension $2M^2$, and the number of elements in ${\mathcal{V}}$ is $(p+1)(q+1)$. If one had to compute $|\det(\Psi_i-\Psi_j)|$ for every distinct pair this would require $\left(\begin{array}{c} (p+1)(q+1) \\ 2
\end{array}\right)$ determinant calculations. We will show in the following that the same result can be obtained with $2pq+p+q$ determinant computations.
Let $\Psi_i$ and $\Psi_j$ be two distinct elements having the form $A^{k_1}B^{l_1}$ and $A^{k_2}B^{l_2}$ respectively. We have now several cases. When $k_1 =k_2$, then necessarily $l_1 \neq
l_2$ and the distance is computed as $$|\det(A^{k_1}B^{l_1}-A^{k_2}B^{l_2})|=|\det(I-B^{|l_2-l_1|})|,$$ where $|l_2-l_1|$ is an integer between $1$ and $q$. If $l_1
=l_2$, then we have $k_1 \neq k_2$ and the distance is computed as $$|\det(A^{k_1}B^{l_1}-A^{k_2}B^{l_2})|=|\det(I-A^{|k_2-k_1|})|,$$ where $|k_2-k_1|$ is an integer between $1$ and $p$. If $(k_1 <
k_2 \;\; \mbox{and} \;\; l_1 < l_2)$ or $(k_1 > k_2 \;\;
\mbox{and} \;\; l_1 > l_2)$, we have $$|\det(A^{k_1}B^{l_1}-A^{k_2}B^{l_2})|=|\det(I-A^{|k_2-k_1|}B^{|l_2-l_1|})|,$$ where $1\leq |k_2-k_1|\leq p$ and $1\leq |l_2-l_1|\leq q$. Similarly if $(k_1 < k_2 \;\; \mbox{and} \;\; l_1 > l_2)$ or $(k_1 > k_2 \;\; \mbox{and} \;\; l_1 < l_2)$ then $$|\det(A^{k_1}B^{l_1}-A^{k_2}B^{l_2})|=|\det(A^{|k_2-k_1|}-B^{|l_2-l_1|})|,$$ with $1\leq |k_2-k_1|\leq p$ and $1\leq |l_2-l_1|\leq q$. The total number of distances to be computed is thus equal to $2pq+p+q$.
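The case analysis above can be checked numerically: for randomly drawn unitary generators, the minimum over all pairwise distances agrees with the minimum over the $2pq+p+q$ reduced distances (a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_unitary(M):
    """A random unitary matrix (phase-corrected QR of a complex Gaussian matrix)."""
    G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    Q, R = np.linalg.qr(G)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

p, q, M = 3, 4, 2
A, B = rand_unitary(M), rand_unitary(M)
mp = np.linalg.matrix_power
V = [mp(A, k) @ mp(B, l) for k in range(p + 1) for l in range(q + 1)]

# All (p+1)(q+1)-choose-2 pairwise distances ...
full = min(abs(np.linalg.det(V[i] - V[j]))
           for i in range(len(V)) for j in range(i + 1, len(V)))

# ... reduce to 2pq + p + q determinant computations:
I2 = np.eye(M)
reduced = [abs(np.linalg.det(I2 - mp(B, l))) for l in range(1, q + 1)]
reduced += [abs(np.linalg.det(I2 - mp(A, k))) for k in range(1, p + 1)]
for k in range(1, p + 1):
    for l in range(1, q + 1):
        reduced.append(abs(np.linalg.det(I2 - mp(A, k) @ mp(B, l))))
        reduced.append(abs(np.linalg.det(mp(A, k) - mp(B, l))))

print(len(reduced) == 2 * p * q + p + q, np.isclose(full, min(reduced)))  # True True
```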
The number of distances to be computed indicates how complex the calculation of the diversity is. In fact, the smaller this number is, the larger, intuitively, the chance of finding a unitary constellation with good diversity. An immediate observation is that for two pairs of unitary matrices $(A, B), \;\; (C, D)$, if $(C, D)=(UAV, UBV)$ or $(C, D)=(UA^{-1}V, UB^{-1}V)$, then $|\det(C-D)|=|\det(A-B)|$. We will consider several constellations starting from this observation.
Consider the case that $G \subset U(M)$ is a subgroup with $L$ elements; then for any two distinct elements $A, B \in G$ we have $|\det(A-B)|=|\det(I-A^{-1}B)|$ with $A^{-1}B \in G$. Therefore at most $L-1$ distance calculations are needed to derive the diversity product. The product of two group constellations has a similar property. Consider $G_i \subset U(M)$ with order $l_i$, where $i=1,2$. Let $$G=\{AB|A \in G_1, B \in G_2\}.$$ Since $|\det(A_1B_1-A_2B_2)|=|\det(I-A_1^{-1}A_2B_2B_1^{-1})|$ with $A_1^{-1}A_2B_2B_1^{-1} \in G$, at most $L-1$ calculations are needed in this case, where $L=l_1l_2$.
Consider a constellation with the following form: $$\{A^iB^j|i=0,\cdots,l_1-1, j=0, \cdots, l_2-1 \;\;
\mbox{and} \;\; A, B\in U(M), A^{l_1}=I, B^{l_2}=I\}.$$ It can be checked that for the above constellation at most $L-1$ calculations are needed, where $L=l_1l_2$.
Group structures do have certain advantages for constructing unitary constellations: it is less complex to calculate the diversity product (or sum), and the chance of finding a large-diversity constellation may intuitively be increased. However, the constellations found by this approach [@sh01] are few and far between. One wonders whether the group structure is too restrictive to find good-performing constellations.
In the sequel we are going to loosen the constraints imposed by the group structures. As demonstrated in Example \[Exmp4\] it is desirable to have a small dimensional manifold (in Example \[Exmp4\] it was $U(M)\times U(M)$) which parameterizes a set of potentially interesting constellations. Having such a parameterization helps to avoid the problem of “dimension explosion”. The set of constellations parameterized by $U(M)\times U(M)$ in Example \[Exmp4\] is interesting as we are not required to compute all pairwise distances in order to compute the diversity product (sum).
Let $X$ be the set $\{x_1, x_2, \cdots, x_n\}$ and $F$ be the free group on the set $X$. A subset $G\subset U(M)$ is called [*freely generated*]{} if there are elements $\{g_1, g_2,
\cdots, g_n\}\subset G$ such that the homomorphism $ \phi: F
\longrightarrow G$ with $\phi(x_i)=g_i$ is an isomorphism.
An immediate consequence of this definition is that every element in $G$ can be uniquely written as a product of $g_i$’s and $g_i^{-1}$’s. The elements $g_i$ are called the generators of $G$. A freely generated subset $G$ is simply parameterized by the set: $$\left\{ a_1^{p_1}a_2^{p_2}\cdots a_k^{p_k}\mid a_i \;\; \mbox{is
one of} \;\; g_i's, \; p_i\in\mathbb{Z}\right\}.$$
Take an element $g \in G$ with representation $g=\prod_{i=1}^k a_i^{p_i}$; we say that the representation is [*reduced*]{} whenever $a_i \neq a_{i+1}$ for $i=1,\ldots,k-1$. Observe that taking the product of distinct matrices $\prod_{i=1}^n A_i$ is numerically expensive, whereas taking the power of one matrix $A^k$ is much easier (note that for $A=U\Sigma U^{-1}$ with $\Sigma$ diagonal, we have $A^k=U\Sigma^kU^{-1}$). Moreover, by considering powers of a single matrix, we are able to impose a lattice structure on the constellation, which makes sphere decoding of structured constellations possible (see Section \[sphere-decoding\]). Therefore we are interested in “normal” elements of $G$.
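The point about matrix powers can be illustrated directly: one eigendecomposition replaces repeated multiplications (a sketch for a random unitary $A$):

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
Q, R = np.linalg.qr(G)
A = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))  # a random unitary matrix

# A = U Sigma U^{-1} with Sigma diagonal, hence A^k = U Sigma^k U^{-1}:
# the matrix power reduces to scalar powers of the eigenvalues.
w, U = np.linalg.eig(A)
k = 37
Ak = U @ np.diag(w ** k) @ np.linalg.inv(U)
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # True
```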
We say that an element $g=\prod_{i=1}^k a_i^{p_i}$ in reduced form is a [*normal element*]{} whenever $a_i\neq a_j$ for $i\neq
j$. A subset ${\mathcal{V}}$ of the freely generated set $G$ is said to be a [*normal constellation*]{} if every non-identity element in ${\mathcal{V}}$ is normal.
Since finding the inverse of a matrix is numerically expensive, we also limit our searches to positive constellations:
An element $g$ in $G$ with the reduced form $g=\prod_{i=1}^k
{a_i}^{p_i}$ is said to be a [*positive element*]{} if $p_i >
0$ for $i=1, 2, \cdots, k$. A subset ${\mathcal{V}}$ of the freely generated set $G$ is said to be a [*positive constellation*]{} if every non-identity element in ${\mathcal{V}}$ is positive.
Positive normal constellations are desirable for numerical searches as they can be efficiently parameterized and searched. If one wants to compute the diversity product (or sum) of an arbitrary positive constellation with $L$ elements one still has to compare a total of $\binom{L}{2}$ pairs of matrices. In the sequel we will impose more structure on a constellation ${\mathcal{V}}\subset G$ which will guarantee that only $L-1$ pairs of elements have to be compared during the diversity product (sum) computation.
Two unitary matrices $A, B \in G$ are said to be [*equivalent*]{} (denoted $A \sim B$) if there is a unitary matrix $U\in G$ such that $A
= UBU^{-1}$ or $A=UB^{-1}U^{-1}$. $[A]$ will denote the set of all matrices equivalent to $A$. For a constellation ${\mathcal{V}}\subset G$, we say ${\mathcal{V}}=\{ \Psi_1, \Psi_2, \cdots, \Psi_L \}$ has a [*weak group structure*]{} if for any two distinct elements $\Psi_i, \Psi_j$ the product $\Psi_i^{-1} \Psi_j$ is equivalent to some $\Psi_k$.
The reader may verify that this is indeed an equivalence relation. Note also that ${\mathcal{V}}$ has a group structure as soon as $\Psi_i^{-1} \Psi_j$ is always another element of ${\mathcal{V}}$, which explains our wording.
Let ${\mathcal{V}}=\{\Psi_0=I, \Psi_1, \Psi_2, \cdots, \Psi_{L-1}\}$ be a constellation with a weak group structure. In order to compute the diversity product (sum) it is enough to do $L-1$ distance computations.
$$|\det(\Psi_i-\Psi_j)|=|\det(I-\Psi_i^{-1}\Psi_j)| =|\det(I-B)|,$$ where $B\in {\mathcal{V}}$ is equivalent to $\Psi_i^{-1}\Psi_j$. This shows the result for the diversity product. If one is concerned with the diversity sum then the same argument still holds if the absolute value of the determinant $|
\;\det(\cdot)\; |$ is replaced by the Frobenius norm ${\| \;.\;
\|}_F$.
Based on this lemma we are interested in finite constellations inside $G$ whose elements have a weak group structure and are all normal. The following theorem provides a complete characterization of all these constellations:
\[mainSec3\] Let ${\mathcal{V}}\subset G$ be a finite positive normal constellation (including identity element) with $L \geq
3$ elements. If ${\mathcal{V}}$ has a weak group structure then ${\mathcal{V}}$ takes one of the following forms:
- $ \{I, A, A^2, \cdots, A^{L-1}\} $
- $ \{I, AB, A^2B^2, \cdots, A^{L-1}B^{L-1}\} $
where $A=g_i^{p_i}$, $B=g_j^{p_j}$ for some $i \neq j$.
The proof of Theorem \[mainSec3\] is rather involved. In order to make it more understandable we will divide it into several definitions and lemmas.
For any element $\Psi \in G$, we define the length of $\Psi=\prod_{i=1}^k {a_i}^{p_i}$ to be $${{\rm length}\,}(\Psi)=\sum_{i=1}^k p_i.$$
It is routine to check that this is well-defined and doesn’t depend on the representation of the element. For the identity element one has ${{\rm length}\,}(I)=0$. One immediate consequence of this definition is that if $A \sim B$, then $|{{\rm length}\,}(A)|=|{{\rm length}\,}(B)|$. The following lemma states that any freely generated positive weak group constellation “approximately” takes a cyclic form.
Let ${\mathcal{V}}=\{\Psi_0=I, \Psi_1, \Psi_2, \cdots, \Psi_{L-1}\}\subset
G$ be a positive constellation of the freely generated set $G\subset U(M)$. Suppose ${{\rm length}\,}(\Psi_i) \leq {{\rm length}\,}(\Psi_j)$ for $i < j$. If ${\mathcal{V}}$ is a weak group constellation, then $$\Psi_i \in [\Psi_1]^i$$ where $[\Psi_1]^i=\{a_1a_2\cdots a_i|a_1, a_2, \cdots, a_i
\in [\Psi_1] \}$.
We first show that ${{\rm length}\,}(\Psi_i) < {{\rm length}\,}(\Psi_j)$ for $i <
j$: Indeed, if ${{\rm length}\,}(\Psi_i) = {{\rm length}\,}(\Psi_j)$, then ${{\rm length}\,}(\Psi_i^{-1}
\Psi_j)={{\rm length}\,}(\Psi_j)-{{\rm length}\,}(\Psi_i)=0$. That means $\Psi_i^{-1} \Psi_j \sim I$; equivalently $\Psi_i^{-1} \Psi_j =I$, i.e. $\Psi_i=\Psi_j$. This contradicts the fact that $\Psi_i$ and $\Psi_j$ are distinct.
Consider $\Psi_1^{-1} \Psi_2$. Since $0 <
{{\rm length}\,}(\Psi_1^{-1}\Psi_2)={{\rm length}\,}(\Psi_2)-{{\rm length}\,}(\Psi_1) <
{{\rm length}\,}(\Psi_2)$, we must have $\Psi_1^{-1} \Psi_2=\bar{\Psi}_1$ where $\bar{\Psi}_1 \sim \Psi_1$. So $\Psi_2=\Psi_1
\bar{\Psi}_1 \in [\Psi_1]^2$. Proceeding by induction, one can show $\Psi_k^{-1} \Psi_{k+1}= \bar{\Psi}_2$ where $\bar{\Psi}_2 \sim \Psi_1$, so $\Psi_{k+1}=\Psi_k \bar{\Psi}_2
\in [\Psi_1]^{k+1}$.
An immediate observation is that $${{\rm length}\,}(\Psi_i)=i \cdot {{\rm length}\,}(\Psi_1).$$
Take two positive normal elements in $G$ with their reduced forms: $$\Psi_1=a_1^{p_1} a_2^{p_2} \cdots a_m^{p_m} \qquad
\Psi_2=b_1^{q_1} b_2^{q_2} \cdots b_n^{q_n}.$$ We define the shift operator $S_k$ on the reduced form of a positive normal element $\Psi$ by induction: $S_1(\Psi)=S_1(a_1^{p_1}
a_2^{p_2} \cdots a_m^{p_m})=a_2^{p_2} \cdots a_m^{p_m} a_1^{p_1}$ and $S_{k+1}=S_k \circ S_1$. We set $S_0(\Psi)=\Psi$; clearly, for a fixed element $\Psi$ the shift operator is periodic. We have the following lemma.
\[shift-version\] $\Psi_1 \sim \Psi_2$ if and only if $\Psi_1=S_k(\Psi_2)$ for some $k$.
The sufficiency part of this lemma is straightforward. So we have to prove the necessity part. Since $\Psi_1 \sim \Psi_2$, according to the definition of equivalence there exists $c$ such that $c\Psi_1 c^{-1}=\Psi_2$ or $c\Psi_1
c^{-1}=\Psi_2^{-1}$. However, since ${{\rm length}\,}(c\Psi_1 c^{-1})={{\rm length}\,}(\Psi_1) > 0$ and ${{\rm length}\,}(\Psi_2^{-1}) < 0$, the second case cannot happen. The only possibility is $c\Psi_1
c^{-1}=\Psi_2$. We assume that $c$ is generated by only one generator and further assume $c=c_1^{l_1}$ with $l_1 > 0$, then we will have $$c_1^{l_1} a_1^{p_1} a_2^{p_2} \cdots a_m^{p_m} c_1^{-l_1}=
b_1^{q_1} b_2^{q_2} \cdots b_n^{q_n}.$$ So $c_1=a_m$ and $l_1 \leq p_m$ follow; otherwise the left hand side of the equation above would contain a negative power, while the right hand side has only positive powers, contradicting the uniqueness of the representation of the element. In fact $l_1=p_m$, since otherwise $\Psi_2=c_1^{l_1}
a_1^{p_1} a_2^{p_2} \cdots c_1^{p_m-l_1}$. This will contradict the fact that $\Psi_2$ is a normal element. So with $$a_m^{p_m} a_1^{p_1} \cdots a_{m-1}^{p_{m-1}}=b_1^{q_1}
b_2^{q_2} \cdots b_n^{q_n},$$ one can check $m=n$ and $\Psi_2=S_{m-1}(\Psi_1)$.
Proceeding by induction, suppose $c$ has the reduced form $c=c_1^{l_1} c_2^{l_2} \cdots c_{k+1}^{l_{k+1}}$; then $$c_1^{l_1} c_2^{l_2} \cdots c_{k+1}^{l_{k+1}} a_1^{p_1}
a_2^{p_2} \cdots a_m^{p_m} c_{k+1}^{-l_{k+1}} \cdots c_2^{-l_2}
c_1^{-l_1}= b_1^{q_1} b_2^{q_2} \cdots b_n^{q_n}.$$ Without loss of generality, we assume $l_{k+1} > 0$ and apply the same argument as in the one generator case. One proves $a_m=c_{k+1}$ and $l_{k+1}=p_m$. Therefore we reach the following equation: $$c_1^{l_1} c_2^{l_2} \cdots c_{k}^{l_{k}} S_{m-1}(\Psi_1)
c_{k}^{-l_{k}} \cdots c_2^{-l_2} c_1^{-l_1}= b_1^{q_1}
b_2^{q_2} \cdots b_n^{q_n}.$$ By induction, $\Psi_2=S_{k_1} \circ
S_{m-1}(\Psi_1)=S_{k_1+m-1}(\Psi_1)$ for some $k_1$.
Pick any two distinct elements $ \Psi_i, \Psi_j \in {\mathcal{V}}$ having ${{\rm length}\,}(\Psi_i) < {{\rm length}\,}(\Psi_j)$. We claim that if $\Psi_i=a_1 a_2
\cdots a_m$, then either there exists $1 \leq k \leq m-1$ such that $\Psi_j=a_1 a_2 \cdots a_k b_1 b_2 \cdots b_l a_{k+1}
\cdots a_m$, or $\Psi_j=b_1 b_2 \cdots b_l a_1 a_2 \cdots a_m$ or $\Psi_j=a_1 a_2 \cdots a_m b_1 b_2 \cdots b_l $ for some $l
> 0$.
Suppose that the claim is not true. Then for $\Psi_j=c_1 c_2
\cdots c_p$, there exist $k_1, k_2$ such that $0 \leq k_1 \leq
m$, $1 \leq k_2 \leq m+1$ and $k_1 < k_2-1$ and $\Psi_j$ will take the following form: $$\Psi_j=a_1 a_2 \cdots a_{k_1} b_1 b_2 \cdots b_l a_{k_2}
\cdots a_m,$$ where $b_1 \neq a_{k_1+1}$ and $b_l \neq a_{k_2-1}$. (For the special case $k_1=0$, we assume $c_1 \neq a_1$. For the special case $k_2=m+1$, we assume $c_p \neq a_m$.) Then $\Psi_i^{-1} \Psi_j$ would be equivalent to $a_{k_2-1}^{-1}
\cdots a_{k_1+1}^{-1} b_1 b_2 \cdots b_l$, which in any case won’t be equivalent to any positive element $\Psi_k=d_1 d_2
\cdots d_q$ or $I$. That contradicts the fact that ${\mathcal{V}}$ is equipped with a weak group structure.
As explained above we can further assume that $${{\rm length}\,}(I) < {{\rm length}\,}(\Psi_1) < \cdots < {{\rm length}\,}(\Psi_{L-1}).$$
Suppose first that $\Psi_1$ is generated by only one generator, i.e. $\Psi_1=g_i^{p_i}$ for some $i$. Since $\Psi_2$ is a normal element, according to the claim, either $\Psi_2=\Psi_1
\tilde{\Psi}_2$ or $\Psi_2=\tilde{\Psi}_2 \Psi_1$ for some $\tilde{\Psi}_2$. In either case $\tilde{\Psi}_2$ is equivalent to $\Psi_1$, and Lemma \[shift-version\] guarantees $\tilde{\Psi}_2=\Psi_1$. Therefore $\Psi_2=g_i^{2p_i}$. Proceeding by induction, it can be checked that $\Psi_l=g_i^{lp_i}$ for every $l$, so the constellation takes the first form in the theorem.
Suppose next that $\Psi_1$ is generated by two generators, i.e. $\Psi_1=g_i^{p_i} g_j^{p_j}$ for some $i, j$. According to the claim, we have $\Psi_2=\Psi_1 \tilde{\Psi}_2$ or $\Psi_2=\tilde{\Psi}_2 \Psi_1 $ or $\Psi_2=g_i^{p_i}
\tilde{\Psi}_2 g_j^{p_j}$. Because $\tilde{\Psi}_2$ is equivalent to $\Psi_1$, $\tilde{\Psi}_2$ is a shifted version of $\Psi_1$. Exhausting all the possibilities, the first two cases would make $\Psi_2$ a non-normal element, so the only possibility is the third case. Consider the two shifted versions of $\Psi_1$: $S_0(\Psi_1)=g_i^{p_i} g_j^{p_j} $ and $S_1(\Psi_1)=g_j^{p_j} g_i^{p_i}$. Only $S_0(\Psi_1)$ satisfies the condition that $\Psi_2$ be a normal element. So the analysis above shows that $$\Psi_2=g_i^{p_i} \Psi_1 g_j^{p_j}= g_i^{2 p_i} g_j^{2 p_j}.$$ By induction it can be shown that $$\Psi_{k+1}=g_i^{p_i} \Psi_k g_j^{p_j}= g_i^{(k+1) p_i}
g_j^{(k+1) p_j}.$$ So in this case, the constellation will take the second form in the theorem.
Finally, the constellation doesn’t exist if $\Psi_1$ is generated by three or more generators. Indeed, suppose $\Psi_1$ has the reduced form $\Psi_1=a_1^{p_1} a_2^{p_2} \cdots a_m^{p_m}$ with $m \geq 3$; then $\Psi_2$ must take one of the following forms: $\tilde{\Psi}_2 a_1^{p_1} a_2^{p_2} \cdots a_m^{p_m}$, $a_1^{p_1}
\tilde{\Psi}_2 a_2^{p_2} \cdots a_m^{p_m}$, $\cdots$, $a_1^{p_1}
a_2^{p_2} \cdots a_m^{p_m} \tilde{\Psi}_2$, with $\tilde{\Psi}_2$ a shifted version of $\Psi_1$. But $\Psi_2$ would not be a normal element in any of these forms, so no weak group constellation exists in this case.
A weak group constellation is very group-like, while it is not exactly a group. It keeps the main advantage of a group constellation: for example, for any weak group constellation ${\mathcal{V}}$ taking the second form in the theorem, only the $L-1$ computations $|\det(I-A^kB^k)|$ for $k=1,2,\cdots,L-1$ are needed to calculate the diversity product. It also overcomes the main disadvantage of group codes: one can choose the generators freely, while in a group structure the generators have to satisfy certain relations. Last but not least, it turns out that the restriction to code elements in normal form is very advantageous during sphere decoding. In the next section we will mainly use the second weak group structure described in Theorem \[mainSec3\]. Before we describe these search procedures we would like to illustrate some alternative methods.
It is possible to increase the number of generators to obtain new structures. For instance, ${\mathcal{V}}= \{A^kB^lC^m|A, B, C \in U(M), k=0,
\cdots, p, l=0, \cdots, q, m=0, \cdots, r \}$.
For a unitary constellation ${\mathcal{V}}=\{\Phi_i|i=1, \cdots, L\}$, we call ${\mathcal{V}}_s=\{U\Phi_iV|i=1,\cdots,L\}$ a shifted version of ${\mathcal{V}}$. It is straightforward to prove that ${\mathcal{V}}_s$ has the same complexity as ${\mathcal{V}}$ when one calculates the diversity. $\{A^kCB^k|A, B, C \in U(M), k=0, \cdots, L-1\}$ is a shifted copy of the second weak group structure in Theorem \[mainSec3\]. To see this, note that $A^kCB^k=A^kCB^kC^{-1}C=A^k(CBC^{-1})^kC$. It can also be checked that $A^kB^{L+1-k}=A^k{(B^{-1})}^kB^{L+1}$, therefore $\{A^kB^{L+1-k}|A, B \in U(M), k=1, \cdots, L\}$ is also a shifted version of the second weak group structure.
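Both identities are easy to confirm numerically for randomly drawn unitary matrices (a sketch):

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_unitary(M):
    G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    Q, R = np.linalg.qr(G)
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

A, B, C = (rand_unitary(2) for _ in range(3))
k, L = 5, 9
mp = np.linalg.matrix_power

# A^k C B^k = A^k (C B C^{-1})^k C
lhs = mp(A, k) @ C @ mp(B, k)
rhs = mp(A, k) @ mp(C @ B @ np.linalg.inv(C), k) @ C
print(np.allclose(lhs, rhs))  # True

# A^k B^{L+1-k} = A^k (B^{-1})^k B^{L+1}
lhs2 = mp(A, k) @ mp(B, L + 1 - k)
rhs2 = mp(A, k) @ mp(np.linalg.inv(B), k) @ mp(B, L + 1)
print(np.allclose(lhs2, rhs2))  # True
```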
Also we can consider the “combination” or the “product” of two structures. For example, $\{I, A, AB, ABA, ABAB, ABABA,
\cdots\}$ is the union of $\{(AB)^k|k=0,\cdots\}$ and its shifted version $\{(AB)^kA|k=0,\cdots\}$. Another example is the product case: let ${\mathcal{V}}_1=\{I, C, C^2, C^3, \cdots \}$ and ${\mathcal{V}}_2=\{I, A,
AB, ABA, \cdots\}$ and consider the Cartesian product constellation $${\mathcal{V}}= {\mathcal{V}}_1 \times {\mathcal{V}}_2=\{AB|A \in {\mathcal{V}}_1, B\in {\mathcal{V}}_2\}.$$
One may wonder how restrictive the proposed structures are. It is well known that a connected compact Lie group is generated by any open neighborhood of any of its elements. So with the above structure, even if one chooses the generators locally, the elements of the constellation can spread out over the whole manifold. This indicates that the proposed structure is not too restrictive.
Geometrical Design of Unitary Constellations with Good Diversity
================================================================
\[Sec-geometrical\]
For low dimensional constellations, one may further specify the generators in the proposed structure. Observe that for the second form of weak group constellation, one can always assume $A$ is diagonal. In the sequel, we further assume that $B$ is real orthogonal, i.e. based on the weak group structure we consider the following $2$-dimensional constellation: $$\label{Closed-Specification}
{\mathcal{V}}=\{ A^kB^k| A=\left( \begin{array}{cc}
e^{ix}&0\\
0&e^{iy}
\end{array} \right), B=\left( \begin{array}{cc}
\cos z& \sin z\\
-\sin z& \cos z
\end{array} \right), k=0,1,\cdots,L-1\}.$$
There are several ways to design constellations with good diversity from this specific structure. A natural idea is a brute-force search with a fine step size. Another approach is to design the constellation with the help of geometrical intuition. Note that a $2 \times 2$ complex matrix can be viewed as a vector in $\mathbb{C}^4$. In this context $A$ and $B$ can be viewed as “rotation” transforms (induced by regular matrix multiplication) acting on $\mathbb{C}^4$. A constellation of form (\[Closed-Specification\]) can be viewed as a set of rotated vectors under the transforms $A^kB^k$, $k=0, 1, \cdots, L-1$. Intuition suggests that good constellations can be found if the rotation angles are symmetrical. Based on this idea we take $x, y, z$ to be multiples of $2\pi/L$; many good codes resulted from this geometrical symmetry (see tables in Section \[Sec-numerical\]).
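A minimal sketch of this symmetric grid search for a small size ($L=8$ here; the grid is restricted to multiples of $2\pi/L$, and the diversity-sum normalization is the one used throughout this section):

```python
import numpy as np

def constellation(x, y, z, L):
    A = np.diag([np.exp(1j * x), np.exp(1j * y)])
    B = np.array([[np.cos(z), np.sin(z)], [-np.sin(z), np.cos(z)]])
    return [np.linalg.matrix_power(A, k) @ np.linalg.matrix_power(B, k)
            for k in range(L)]

def diversity_sum(V):
    L = len(V)
    return min(np.linalg.norm(V[i] - V[j])
               for i in range(L) for j in range(i + 1, L)) / (2 * np.sqrt(2))

# Restrict x, y, z to multiples of 2*pi/L and keep the best diversity sum.
L = 8
step = 2 * np.pi / L
best_ds, best = -1.0, None
for m in range(L):
    for n in range(L):
        for r in range(L):
            ds = diversity_sum(constellation(m * step, n * step, r * step, L))
            if ds > best_ds:
                best_ds, best = ds, (m, n, r)
print(best, best_ds)
```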
Two-dimensional constellation design has been studied in [@li02]. In that paper Liang proposed very interesting parametric codes, and many codes with excellent diversity were found. The codes shown in [@li02] can be achieved by our design as well; in fact, most of Liang’s codes belong to a special form of our parameterization (\[Closed-Specification\]). To the best of our knowledge, most of the codes shown on the web site [@ha03u2] are either the best found so far or entirely new.
A very interesting code with $120$ elements was found using this approach: $${\mathcal{V}}=\{ A^kB^k |A=\left( \begin{array}{cc}
e^{\pi/30 i}&0\\
0&e^{11\pi/30 i}
\end{array} \right), B=\left( \begin{array}{cc}
\cos \pi/4& \sin \pi/4\\
-\sin \pi/4& \cos \pi/4
\end{array} \right), k=0,1,\cdots,119\}.$$
It can be checked that $\prod {{\mathcal{V}}}=\sum {{\mathcal{V}}} =
\frac{1}{2}\sqrt{\frac{3-\sqrt{5}}{2}}$, i.e. the diversity product and the diversity sum are identical to those of the $SL_2(\mathbb{F}_5)$-constellation. We simulated the performance of this code and compared it with that of the $SL_2(\mathbb{F}_5)$-constellation. To our surprise, the new code performed considerably better. The constellation ${\mathcal{V}}$ with sphere decoding outperformed the $SL_2(\mathbb{F}_5)$-constellation by about $1$ dB up to about $20$ dB (see Figure \[Closed-Group\]). As the SNR grows, though, the two curves come closer.
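This claim is easy to confirm by brute force (a sketch, using the diversity normalizations consistent with the values quoted in this section):

```python
import numpy as np

A = np.diag([np.exp(1j * np.pi / 30), np.exp(11j * np.pi / 30)])
B = np.array([[np.cos(np.pi / 4), np.sin(np.pi / 4)],
              [-np.sin(np.pi / 4), np.cos(np.pi / 4)]])
V = [np.linalg.matrix_power(A, k) @ np.linalg.matrix_power(B, k)
     for k in range(120)]

pairs = [(i, j) for i in range(120) for j in range(i + 1, 120)]
dp = 0.5 * min(abs(np.linalg.det(V[i] - V[j])) ** 0.5 for i, j in pairs)
ds = min(np.linalg.norm(V[i] - V[j]) for i, j in pairs) / (2 * np.sqrt(2))
target = 0.5 * np.sqrt((3 - np.sqrt(5)) / 2)  # = 0.3090...
print(np.isclose(dp, target), np.isclose(ds, target))  # True True
```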
In order to understand the difference in performance of the two seemingly similar constellations we investigated the diversity product (DP) and diversity sum (DS) [*distance spectrum*]{} of each. As explained before, for a unitary constellation with $L$ elements, the $L(L-1)/2$ distance calculations may produce distances with multiplicities. For example, for ${\mathcal{V}}$ as above, $360$ out of $7140$ pairs of elements have distance $0.3090$ (see the DP distance spectrum in Table \[DP-DS\]). One can thus explain the difference in behavior of the two codes through their distance spectra. Table \[DP-DS\] shows the DP and DS distance spectra of our weak group constellation.
\[DP-DS\] $$\begin{array}{cc}
\begin{tabular}{c}
Weak group constellation \\
DP distance spectrum \\
\end{tabular} & \begin{tabular}{c}
Weak group constellation \\
DS distance spectrum \\
\end{tabular}\\
\begin{tabular}{|c|c|}
\hline
distance & distribution \\
\hline
0.3090 & 360 \\
\hline
0.3136 & 480 \\
\hline
0.3895 & 480 \\
\hline
0.3931 & 1440 \\
\hline
0.4402 & 240 \\
\hline
0.5000 & 120 \\
\hline
0.5878 & 120 \\
\hline
0.6360 & 1440 \\
\hline
0.6787 & 480 \\
\hline
0.7071 & 600\\
\hline
0.8090 & 360\\
\hline
0.8430 & 480 \\
\hline
0.8660 & 120\\
\hline
0.8979 & 240 \\
\hline
0.9511 & 120 \\
\hline
1 & 60 \\
\hline
\end{tabular}& \begin{tabular}{|c|c|}
\hline
distance & distribution\\
\hline
0.3090 & 120 \\
\hline
0.4402 & 240 \\
\hline
0.5000 & 120 \\
\hline
0.5023 & 480 \\
\hline
0.5457 & 240 \\
\hline
0.5878 & 120 \\
\hline
0.6367 & 480 \\
\hline
0.6502 & 240 \\
\hline
0.7071 & 3000 \\
\hline
0.7598 & 240 \\
\hline
0.7711 & 240 \\
\hline
0.8090 & 120 \\
\hline
0.8380 & 240 \\
\hline
0.8647 & 480 \\
\hline
0.8660 & 120 \\
\hline
0.8979 & 240 \\
\hline
0.9511 & 120 \\
\hline
1 & 60 \\
\hline
\end{tabular}
\end{array}$$
One can check that the DP distance spectrum of the $SL_2(\mathbb{F}_5)$-constellation is identical to its DS distance spectrum. Table \[DP-DP\] shows that the distance spectrum of the $SL_2(\mathbb{F}_5)$-constellation has a denser distribution of small distances than the DS spectrum of our constellation, which explains its considerably worse performance in our simulations.
\[DP-DP\]
$$\begin{array}{c}
\begin{tabular}{c}
$SL_2(\mathbb{F}_5)$-constellation \\
DP (DS) distance spectrum \\
\end{tabular}\\
\begin{tabular}{|c|c|}
\hline
distance & distribution \\
\hline
0.3090 & 720 \\
\hline
0.5000 & 1200 \\
\hline
0.5878 & 720 \\
\hline
0.7071 & 1800 \\
\hline
0.8090 & 720 \\
\hline
0.8660 & 1200 \\
\hline
0.9511 & 720 \\
\hline
1 & 60 \\
\hline
\end{tabular}
\end{array}$$
Although we have concentrated so far on the design of 2-dimensional constellations, our approach is not restricted to this case. The same “rotation” idea can be applied to other low dimensional constellation designs. For instance, we can make further specifications to $3$ dimensional weak group constellations: $${\mathcal{V}}=\{ A^kB^k |A=\left( \begin{array}{ccc}
\cos x & \sin x & 0\\
-\sin x& \cos x & 0\\
0 & 0 & e^{iy}
\end{array} \right), B=\left( \begin{array}{ccc}
e^{iz} & 0 & 0\\
0 & \cos w & \sin w \\
0 & -\sin w & \cos w
\end{array} \right), k=0,1,\cdots,L-1\},$$ where $x,y,z,w$ are assumed to be multiples of $2\pi/L$. Algebraic design based on geometrical symmetry can clearly be applied to other structures as well. For instance, consider the following specified structure: $${\mathcal{V}}=\{ A^kB^l| A=\left( \begin{array}{cc}
e^{ix}&0\\
0&e^{iy}
\end{array} \right), B=\left( \begin{array}{cc}
\cos z& \sin z\\
-\sin z& \cos z
\end{array} \right), k=0,1,\cdots,p-1, l=0,1,\cdots,q-1\},$$ where we can take $x,y$ to be multiples of $2\pi/p$ and $z$ to be a multiple of $2\pi/q$. We refer to [@ha03u2] for the low dimensional constellations designed with these approaches.
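As an illustration of the geometrical design, the following sketch (Python with NumPy) performs the exhaustive grid search for the $2$ dimensional weak group structure $A^kB^k$ with $L=5$ and $x,y,z$ restricted to multiples of $2\pi/L$; it recovers the optimal diversity product $\sqrt{5/8}$ listed in the tables later in the paper:

```python
from itertools import product

import numpy as np

def weak_group_code(x, y, z, L):
    """Constellation {A^k B^k}: A = diag(e^{ix}, e^{iy}), B = rotation by z."""
    A = np.diag([np.exp(1j*x), np.exp(1j*y)])
    B = np.array([[np.cos(z), np.sin(z)], [-np.sin(z), np.cos(z)]], dtype=complex)
    return [np.linalg.matrix_power(A, k) @ np.linalg.matrix_power(B, k)
            for k in range(L)]

def diversity_product(V):
    """(1/2) min over pairs of |det(Vk - Vl)|^(1/2) for 2 x 2 matrices."""
    return min(0.5*abs(np.linalg.det(V[k] - V[l]))**0.5
               for k in range(len(V)) for l in range(k + 1, len(V)))

L = 5
step = 2*np.pi/L
# Exhaustive search with x, y, z restricted to multiples of 2*pi/L.
best = max((diversity_product(weak_group_code(a*step, b*step, c*step, L)), a, b, c)
           for a, b, c in product(range(L), repeat=3))
print(best)  # optimum sqrt(5/8), attained e.g. at x=2pi/5, y=8pi/5, z=4pi/5
```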
Numerical Design of Unitary Constellation with Good Diversity
=============================================================
\[Sec-numerical\]
In order to numerically design constellations, it will be necessary to have a good parameterization for the set of unitary constellations having size $L$, operating with $M$ transmit antennas. In this section we show how one can use the theory of complex Stiefel manifolds and the classical Cayley transform to obtain such a parameterization.
The complex Stiefel manifold
----------------------------
The subset of $T\times M$ complex matrices $${\mathcal{S}}_{T,M}:=\left\{ \Phi\in{\mathbb{C}}^{T\times M}\mid \Phi^* \Phi =
I_M\right\}$$ is called the [*complex Stiefel manifold*]{}.
From an abstract point of view a constellation ${\mathcal{V}}:=\{
\Phi_1,\ldots, \Phi_L\}$ having size $L$, block length $T$ and operating with $M$ antennas can be viewed as a point in the complex manifold $$\mathcal{M}:=\left({\mathcal{S}}_{T,M}\right)^L=
\underbrace{{\mathcal{S}}_{T,M}\times\cdots \times{\mathcal{S}}_{T,M}}_{\mbox{$L$
copies}}.$$ The search for good constellations ${\mathcal{V}}$ hence requires the search for points in $\mathcal{M}$ whose diversity is excellent in some interval $[\rho_1,\rho_2]$.
Stiefel manifolds have been intensely studied in the mathematics literature since their introduction by Eduard Stiefel some 50 years ago. A classical paper on complex Stiefel manifolds is [@at60], a paper with a point of view toward numerical algorithms is [@ed99]. The major properties are summarized by the following theorem:
\[Stiefel\] ${\mathcal{S}}_{T,M}$ is a smooth, real and compact sub-manifold of ${\mathbb{C}}^{MT}={\mathbb{R}}^{2MT}$ of real dimension $2TM-M^2$.
Some of the stated properties will follow from our further development. The following two examples give some special cases.
$${\mathcal{S}}_{T,1}=\left\{ x\in{\mathbb{C}}^T\mid ||x||=\sqrt{\sum_{i=1}^T
x_i\bar{x}_i}=1 \right\}\subset {\mathbb{R}}^{2T}$$ is isomorphic to the $(2T-1)$-dimensional unit sphere $S^{2T-1}$.
When $T=M$ then ${\mathcal{S}}_{T,M}=U(M)$, the group of $M\times M$ unitary matrices. It is well known that the Lie algebra of $U(M)$, i.e. the tangent space at the identity element, consists of all $M\times M$ skew-Hermitian matrices. This linear vector space has real dimension $M^2$, in particular the dimension of $U(M)$ is $M^2$ as well.
A direct consequence of Theorem \[Stiefel\] is:
The manifold $\mathcal{M}$ which parameterizes the set of all constellations ${\mathcal{V}}$ having size $L$, block length $T$ and operating with $M$ antennas forms a real compact manifold of dimension $2LTM-LM^2$.
As this corollary makes clear, a full search over the total parameter space is possible only for very moderate sizes of $M,L,T$. A good parameterization of the complex Stiefel manifold ${\mathcal{S}}_{T,M}$ is also required, and we turn to this task next.
The unitary group is closely related to the complex Stiefel manifold and the problem of parameterization ultimately boils down to the parameterization of unitary matrices. For this assume that $\Phi$ is a $T\times M$ matrix representing an element of the complex Stiefel manifold ${\mathcal{S}}_{T,M}$. Using Gram-Schmidt one constructs a $T\times (T-M)$ matrix $V$ such that the $T\times T$ matrix $\left[ \Phi\mid V\right]$ is unitary. Define two $T\times
T$ unitary matrices $\left[ \Phi_1\mid V_1\right]$ and $\left[
\Phi_2\mid V_2\right]$ to be equivalent whenever $\Phi_1=\Phi_2$. A direct calculation shows that two matrices are equivalent if and only if there is a $(T-M)\times (T-M)$ matrix $Q$ such that: $$\label{Q-matrix}
\left[ \Phi_2\mid V_2\right]=\left[ \Phi_1\mid V_1\right]{\left( \begin{array}{ccc}
I &\;& 0 \\ 0 &\;& Q \end{array} \right)}.$$ Identifying the set of matrices $Q$ appearing in (\[Q-matrix\]) with the unitary group $U(T-M)$ we get the result:
\[Lem-par\] The complex Stiefel manifold ${\mathcal{S}}_{T,M}$ is isomorphic to the quotient group $$U(T)/U(T-M).$$
This lemma lets us verify the dimension formula for ${\mathcal{S}}_{T,M}$ stated in Theorem \[Stiefel\]: $$\dim {\mathcal{S}}_{T,M}=\dim U(T)-\dim U(T-M)=T^2-(T-M)^2=2TM-M^2.$$
This section makes clear that a good parameterization of the set of constellations ${\mathcal{V}}$ requires a good parameterization of the manifold $\mathcal{M}$, which in turn requires a good parameterization of the unitary group $U(M)$.
Once one has a nice parameterization of the unitary group $U(M)$, Lemma \[Lem-par\] provides a way to parameterize the Stiefel manifold ${\mathcal{S}}_{T,M}$ as well. Parameterizing $U(T)$ modulo $U(T-M)$ is, however, an ‘over parameterization’. Edelman, Arias and Smith [@ed99] explained how to describe a local neighborhood of a (real) Stiefel manifold ${\mathcal{S}}_{T,M}$; the method can equally well be applied in the complex case. We do not pursue this parameterization in this paper and leave it for future work.
In the remainder of this paper we will concentrate on constellations having the special form $A^kB^l$ introduced above. From a numerical point of view we require for this a good parameterization of the unitary group, and the next subsection provides an elegant way to do this.
Cayley transformation
---------------------
There are several ways to represent a unitary matrix in a very explicit way. One elegant way makes use of the classical Cayley transformation. To keep the paper self contained we provide a short summary; more details are given in [@pr94 Section 22] and [@ha02a].
For a complex $M \times M$ matrix $Y$ which has no eigenvalues at $-1$, the Cayley transform of $Y$ is defined to be $$Y^c= (I + Y)^{-1}(I-Y),$$ where $I$ is the $M \times M$ identity matrix.
Note that $(I+Y)$ is nonsingular whenever $Y$ has no eigenvalue at $-1$. One immediately verifies that $(Y^c)^c=Y$. This is in analogy to the fact that the linear fractional transformation $f(z)=\frac{1-z}{1+z}$ satisfies $f(f(z))=z$. Recall that a matrix $A$ is skew-Hermitian whenever $A^*=-A$. The set of $M\times M$ skew-Hermitian matrices forms a linear subspace of ${\mathbb{C}}^{M\times M}\cong {\mathbb{R}}^{2M^2}$ having real dimension $M^2$; it is the Lie algebra of the unitary group $U(M)$. The main property of the Cayley transformation is summarized in the following theorem (see e.g. [@ha02a; @pr94]).
When $A$ is a skew-Hermitian matrix then $(I+A)$ is nonsingular and the Cayley transform $V:=A^c$ is a unitary matrix. Vice versa when $V$ is a unitary matrix which has no eigenvalues at $-1$ then the Cayley transform $V^c$ is skew-Hermitian.
This theorem allows one to parameterize the open subset of $U(M)$ consisting of all unitary matrices without eigenvalue $-1$ through the linear vector space of skew-Hermitian matrices. The Cayley transformation is very important for the numerical design of constellations because it makes the local topology of $U(M)$ explicit: most optimization methods require us to explore the neighborhood of an element of $U(M)$.
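A minimal NumPy sketch of the Cayley transformation confirms its two key properties: a skew-Hermitian matrix is mapped to a unitary one, and the transformation is an involution.

```python
import numpy as np

def cayley(Y):
    """Cayley transform Y^c = (I + Y)^{-1} (I - Y)."""
    I = np.eye(Y.shape[0], dtype=complex)
    return np.linalg.solve(I + Y, I - Y)

rng = np.random.default_rng(0)
M = 3
X = rng.standard_normal((M, M)) + 1j*rng.standard_normal((M, M))
A = X - X.conj().T            # a random skew-Hermitian matrix

V = cayley(A)                 # its Cayley transform
print(np.allclose(V.conj().T @ V, np.eye(M)))  # True: V is unitary
print(np.allclose(cayley(V), A))               # True: (A^c)^c = A
```

Note that $(I+A)$ is always nonsingular here, since the eigenvalues of a skew-Hermitian matrix are purely imaginary.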
Simulated Annealing (SA) Algorithm
----------------------------------
In our numerical experiments we have considered several methods. Because there is a large number of target functions, the best known optimization algorithms, such as Newton’s method [@no99; @ed99] and the conjugate gradient method [@no99; @ed99], are difficult to implement. Surprisingly, the [*Simulated Annealing Algorithm*]{} turned out to be very practical for this problem.
Simulated Annealing (SA) mimics the annealing process of melted metal: the metal is first heated until it melts, then the temperature is lowered gradually. The metal reaches a state of minimal energy if the temperature decreases slowly enough. For more details about this algorithm we refer to [@aa89; @la87b; @ot89].
In fact, SA is a general method rather than a concrete algorithm. Generally speaking, for a given optimization problem one takes an initial solution in some way, then considers a second solution in the “neighborhood” of the first, and accepts or rejects it according to some predefined criterion, which might involve a probability threshold.
Combined with a good algebraic structure and the Cayley transform, which represents unitary matrices of any dimension, the numerical method can be applied to constellation design of any dimension and any size. Our implementation of the algorithm can be summarized as follows; a simple sample program can be found on our web site [@ha03u2].
1. Choose a proposed algebraic structure for the constellation.
2. Generate initial generators of the whole constellation. One can either take an existing constellation as the start point or just take the initial point randomly.
3. Generate randomly a new constellation using Cayley transform in the neighborhood of the old constellation where the selection is done using a Gaussian distribution with decreasing variances as the algorithm progresses.
4. Calculate the diversity function (product, sum) of the newly constructed constellation.
5. If the new constellation has a better diversity function (product, sum), accept it. Otherwise reject it and keep the old constellation (or accept it according to the Metropolis criterion [@me53]).
6. Check the stopping criterion; if it is satisfied, stop, otherwise go to $3$ and continue the iteration.
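The procedure above can be sketched in a much simplified form for the $2$ dimensional $A^kB^k$ structure. In the sketch below (Python with NumPy) the cooling schedule, the neighborhood scale and the step count are illustrative choices, not the ones used in our experiments; neighbors are generated by multiplying the current generators with Cayley transforms of small random skew-Hermitian matrices.

```python
import numpy as np

rng = np.random.default_rng(1)

def cayley(Y):
    I = np.eye(Y.shape[0], dtype=complex)
    return np.linalg.solve(I + Y, I - Y)

def random_skew(M, scale):
    X = scale*(rng.standard_normal((M, M)) + 1j*rng.standard_normal((M, M)))
    return X - X.conj().T

def diversity_product(A, B, L):
    V = [np.linalg.matrix_power(A, k) @ np.linalg.matrix_power(B, k) for k in range(L)]
    return min(0.5*abs(np.linalg.det(V[k] - V[l]))**0.5
               for k in range(L) for l in range(k + 1, L))

def anneal(L, M=2, steps=2000, T0=0.05):
    # Random initial generators, obtained via the Cayley transform.
    A, B = cayley(random_skew(M, 1.0)), cayley(random_skew(M, 1.0))
    cur = diversity_product(A, B, L)
    best = cur
    for n in range(steps):
        T = T0*(1 - n/steps) + 1e-9          # cooling schedule
        scale = 0.3*(1 - n/steps) + 0.01     # shrinking neighborhood
        A2, B2 = A @ cayley(random_skew(M, scale)), B @ cayley(random_skew(M, scale))
        val = diversity_product(A2, B2, L)
        # Accept improvements, or worse moves with Metropolis probability.
        if val > cur or rng.random() < np.exp((val - cur)/T):
            A, B, cur = A2, B2, val
        best = max(best, val)
    return best

best3 = anneal(L=3)
print(best3)  # bounded above by the optimal value sqrt(3)/2 for L = 3
```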
As mentioned before, one can either choose an existing constellation as the starting point for our numerical method or take the initial point randomly. In the sequel we use the group constellation $G_{21,4}$ from [@sh01]:
$${\mathcal{V}}_1=\{A^kB^l| A=\left(\begin{array}{ccc}
\eta&0&0\\
0&\eta^4&0\\
0&0&\eta^{16}\\
\end{array}\right), B=\left(\begin{array}{ccc}
0&1&0\\
0&0&1\\
\eta^7&0&0\\
\end{array}\right),
k=0,1,\cdots,20, l=0,1,2\}$$
One can verify that $$\prod {\mathcal{V}}_1=0.3851.$$ Although $G_{21,4}$ is already a very good constellation, so that our algorithm improves it only a little (see ${\mathcal{V}}_2$ below), in most cases the algorithm improves considerably on the original group constellation. $${\mathcal{V}}_2=\{A^kB^l|k=0,1,\cdots,20,l=0,1,2 \},$$ where $$A=\left(\begin{array}{ccc}
0.9415 + 0.3155i&0.0573 - 0.0222i&0.0496 + 0.0882i\\
0.0160 - 0.0555i&0.4005 + 0.9136i&0.0326 - 0.0212i\\
0.0579 + 0.0855i&-0.0312 - 0.0099i&0.1384 - 0.9844i\\
\end{array}\right),$$ $$B=\left(\begin{array}{ccc}
0.0175 + 0.0095i&0.9997 + 0.0111i&0.0079 + 0.0042i\\
0.0086 + 0.0100i&-0.0082 + 0.0040i&0.9999 + 0.0036i\\
-0.4836 + 0.8750i&0.0004 - 0.0198i&-0.0045 - 0.0126i\\
\end{array}\right).$$ One verifies that $$\prod {\mathcal{V}}_2=0.3874.$$
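The starting value $\prod {\mathcal{V}}_1=0.3851$ can be verified numerically. In the sketch below we assume $\eta=e^{2\pi i/21}$, a primitive $21$st root of unity, which is the standard choice for $G_{21,4}$:

```python
import numpy as np

eta = np.exp(2j*np.pi/21)   # assumption: eta is a primitive 21st root of unity
A = np.diag([eta, eta**4, eta**16])
B = np.array([[0, 1, 0],
              [0, 0, 1],
              [eta**7, 0, 0]], dtype=complex)

V1 = [np.linalg.matrix_power(A, k) @ np.linalg.matrix_power(B, l)
      for k in range(21) for l in range(3)]

# Diversity product (1/2) min |det(Vi - Vj)|^(1/3) for M = 3 antennas.
dp = min(0.5*abs(np.linalg.det(V1[i] - V1[j]))**(1/3)
         for i in range(63) for j in range(i + 1, 63))
print(round(dp, 4))  # compare with the value 0.3851 quoted above
```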
Different industrial applications require different levels of reliability of the communication channel. One may want to optimize the constellation at a certain Block Error Rate (BER) or Signal to Noise Ratio (SNR). It can also be shown theoretically that the numerical methods together with the proposed structure work in the same way if one wants to optimize the diversity function at a certain SNR. This is essentially the case because for a complex matrix $A$ and unitary matrices $U, V$ one has $$\label{sing}
\delta_m(UAV)=\delta_m(A),$$ for $m=1,2,\cdots,M$. With the constellation structures as above we are able to reduce the dimension of the parameter space and at the same time considerably reduce the number of targets to be checked. Intuitively, designing codes algebraically for this purpose seems to be impossible.
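Property (\[sing\]) — the invariance of singular values under unitary multiplication — is easy to confirm numerically:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unitary(M):
    # QR decomposition of a random complex Gaussian matrix yields a unitary Q.
    Q, _ = np.linalg.qr(rng.standard_normal((M, M)) + 1j*rng.standard_normal((M, M)))
    return Q

M = 4
A = rng.standard_normal((M, M)) + 1j*rng.standard_normal((M, M))
U, V = random_unitary(M), random_unitary(M)

# Singular values are invariant under multiplication by unitary matrices.
same = np.allclose(np.linalg.svd(U @ A @ V, compute_uv=False),
                   np.linalg.svd(A, compute_uv=False))
print(same)  # True
```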
The following graph compares three constellations of different dimensions, all with $2$ receive antennas. The first is a $2$ dimensional constellation with $3$ elements ($R=0.7925$) and optimal diversity product $0.8660$ and optimal diversity sum $0.8660$. The second is a $3$ dimensional constellation with $5$ elements ($R=0.7740$), diversity product $0.7183$ and diversity sum $0.7454$. The third is a $4$ dimensional constellation with $9$ elements ($R=0.7925$), diversity product $0.5904$ and diversity sum $0.6403$. The last two constellations were obtained by using Simulated Annealing, based on the structure $A^kB^k$, to optimize the diversity function at $6$ dB.
One can see that around $5$ dB the second constellation surpasses the first one and keeps improving as the SNR becomes larger. This is easily understood since the diversity function of the first constellation is approximately dominated by $1/{\rho^4}$ at high SNR, while the diversity function of the second constellation is dominated by $1/{\rho^6}$. The same explanation applies to the third constellation’s performance. One can foresee that higher dimensional constellations will perform even better and that their BER curves will be sharper than those of the lower dimensional ones; it is plausible that higher dimensional constellations achieve much more diversity gain than lower dimensional ones.
Surprisingly, SA works very well when applied to an algebraic structure with symmetry. As with all other numerical methods, one has to accept a loss of performance as the size and dimension grow, due to the increasing complexity and the limited computational resources. Nevertheless, the numerical approach is very flexible: it can be used for constellations of any dimension and any size, and it produces very good diversity. Many good-performing unitary constellations were found this way which were never found by any algebraic method. At the end of this section we show some $2$ dimensional constellations found using various methods based on the proposed structure. We skip, however, our numerical results on higher dimensional unitary constellation design, since they can be checked on the web site [@ha03u2].
One very interesting fact is that the numerical results for the diversity sum of $3$ dimensional structured constellations are even better than the corresponding upper bound for $2$ dimensional constellations. This is not too surprising if one notices that in passing from $U(2)$ to $U(3)$ we gain $5$ more dimensions to manoeuvre in.
In [@ha03u] packing problems on compact Lie groups are analyzed and upper bounds for the diversity sum and the diversity product are derived. In the following figure one can see the limiting behavior of $2$ dimensional structured constellations compared to the upper bound. See [@ha03u2] for the comparisons in other dimensions.
Constellations with extremely large diversity
---------------------------------------------
In this subsection we list the best $2$-dimensional constellations we found with the techniques described in Sections \[Sec-geometrical\] and \[Sec-numerical\]. The tabulated constellations have some of the best diversity sums and diversity products published so far. All constellations searched by simulated annealing (SA) were based on the $A^kB^k$ structure. Constellations with $L$ elements whose parameters $x, y, z$ are multiples of $2\pi/L$ were found by geometrical methods using the parameterization above; those whose parameters are decimals were found by brute force search with step size $0.1000$ based on the same parameterization.
Diversity product of $2$ dimensional constellation based on weak group structure:
[|c|c|c|]{}
Number of elements & Diversity Product & Codes and Comments\
2 & 1 & $x=\pi, y=\pi, z=0 \; (\mbox{optimal})$\
3 & $\sqrt{3}/2$ & $x=2\pi/3, y=2\pi/3,z=0 \; (\mbox{optimal})$\
4 & 0.7831 &$x=0.6000, y=6.0000, z=4.4000$\
5 & $\sqrt{5/8}$&$x=2\pi/5, y=8\pi/5, z=4\pi/5 \; (\mbox{optimal})$\
8 & 0.7071 &$x=2.3562,y=3.9270,z=4.7124$\
9 & 0.6524 & SA searched code\
10& 0.6124 & $x=2\pi/5,y=8\pi/5,z=\pi/5$\
16 & $\sqrt[4]{2}/2$ &$x=\pi/4,y=5\pi/4,z=13\pi/8$\
17 & 0.5255 & SA searched code\
18 & 0.5207 & SA searched code\
19 & 0.5128 & SA searched code\
20 & 0.5011 & $x=1.6500,y=3.7500,z=4.0500$\
24 & 0.5000 &$x=\pi/12,y=5\pi/12,z=\pi/2$\
37 & 0.4461 &$x=2\pi/37,y=6\pi/37,z=12\pi/37$\
39 & 0.3984 &$x=8\pi/39,y=34\pi/39,z=36\pi/39$\
40 & 0.3931 &$x=3\pi/10,y=11\pi/10,z=3\pi/4$\
55 & 0.3874 &$x=2\pi/55,y=68\pi/55,z=6\pi/11$\
57 & 0.3764 &$x=2\pi/57,y=40\pi/57,z=48\pi/57$\
75 & 0.3535 &$x=2\pi/75,y=98\pi/75,z=96\pi/75$\
85 & 0.3497 &$x=26\pi/85,y=94\pi/85,z=18\pi/17$\
91 & 0.3451 &$x=2\pi/91,y=128\pi/91,z=42\pi/91$\
96 & 0.3192 &$x=7\pi/16,y=29\pi/16,z=\pi/6$\
105 & 0.3116 &$x=2\pi/105,y=68\pi/105,z=84\pi/105$\
120 &0.3090 &$x=\pi/30,y=11\pi/30,z=\pi/4$\
135 & 0.2869 &$x=2\pi/135,y=28\pi/135,z=68\pi/135$\
145 & 0.2841 &$x=2\pi/145,y=64\pi/145,z=76\pi/145$\
165 & 0.2783 &$x=2\pi/33,y=20\pi/33,z=2\pi/5$\
203 & 0.2603 &$x=2\pi/203,y=290\pi/203,z=70\pi/203$\
217 & 0.2511 &$x=2\pi/217,y=250\pi/217,z=168\pi/217$\
225 & 0.2499 &$x=82\pi/225,y=118\pi/225,z=126\pi/225$\
240 & 0.2239 &$x=\pi/40,y=9\pi/40,z=\pi/6$\
273 & 0.2152 &$x=2\pi/273,y=208\pi/273,z=142\pi/273$\
295 & 0.2237 &$x=14\pi/295,y=104\pi/295,z=22\pi/59$\
297 & 0.1910 &$x=242\pi/297,y=548\pi/297,z=54\pi/297$\
299 & 0.1858 &$x=8\pi/299,y=220\pi/299,z=18\pi/299$\
300 & 0.1736 &$x=\pi/150,y=51\pi/150,z=5\pi/6$\
Diversity sum of $2$ dimensional constellation based on weak group structure:
[|c|c|c|]{}
Number of elements & Diversity Sum & Codes and Comments\
2 & 1 & $x=\pi, y=\pi, z=0 \; (\mbox{optimal})$\
3 & $\sqrt{3}/2$ &$x=2\pi/3, y=2\pi/3,z=0 \; (\mbox{optimal})$\
5 & $\sqrt{5/8}$ &$x=2\pi/5, y=8\pi/5, z=4\pi/5 \; (\mbox{optimal})$\
9 & 3/4 &$x=10\pi/9, y=4\pi/3, z=4\pi/9 \; (\mbox{optimal})$\
16 & $\sqrt{2}/2$ &$x=\pi/4, y=5\pi/4, z=13\pi/8 \; (\mbox{optimal})$\
18 & 0.6614 &$x=4\pi/9,y=2\pi/3,z=7\pi/9$\
19 & 0.6391 & SA searched code\
20 & 0.6338 & SA searched code\
21 & 0.6307 & SA searched code\
22 & 0.6154 & SA searched code\
24 & 0.6124 & $x=\pi/6,y=\pi/4,z=5\pi/12$\
28 & 0.5996 & $x=3\pi/8,y=\pi/2,z=2\pi/7$\
30 & 0.5934 & $x=4\pi/15,y=\pi/3,z=7\pi/15$\
31 & 0.5739 & SA searched code\
32 & 0.5734 & SA searched code\
39 & 0.5726 & $x=14\pi/39,y=40\pi/39,z=18\pi/39$\
40 & 0.5499 & $x=3\pi/20,y=7\pi/20,z=3\pi/10$\
42 & 0.5371 & $x=4\pi/7,y=13\pi/21,z=\pi/3$\
45 & 0.5342 & $x=2\pi/9,y=4\pi/9,z=14\pi/15$\
52 & 0.5332 & $x=\pi/13,y=2\pi/13,z=9\pi/26$\
57 & 0.5053 & $x=4\pi/57,y=8\pi/57,z=40\pi/57$\
60 & 0.5000 & $x=\pi/15,y=4\pi/15,z=3\pi/10$\
64 & 0.4852 &$x=3\pi/16, y=53\pi/32, z=55\pi/32$\
75 & 0.4850 &$x=32\pi/75,y=14\pi/75,z=2\pi/75$\
76 & 0.4672 &$x=3\pi/19,y=4\pi/19,z=11\pi/38$\
77 & 0.4595 &$x=52\pi/77,y=82\pi/77,z=60\pi/77$\
85 & 0.4540 &$x=2\pi/17,y=8\pi/17,z=14\pi/85$\
87 & 0.4460 &$x=52\pi/87,y=98\pi/87,z=82\pi/87$\
95 & 0.4418 &$x=6\pi/19,y=2\pi/95,z=36\pi/95$\
96 & 0.4390 &$x=39\pi/48,y=5\pi/12,z=11\pi/24$\
99 & 0.4297 &$x=62\pi/99,y=192\pi/99,z=142\pi/99$\
105 & 0.4295 &$x=2\pi/105,y=16\pi/105,z=28\pi/105$\
106 & 0.4161 &$x=2\pi/53,y=13\pi/53,z=12\pi/53$\
120 & 0.4156 &$x=\pi/10,y=\pi/6,z=5\pi/4$\
123 & 0.4077 &$x=188\pi/123,y=38\pi/123,z=182\pi/123$\
130 & 0.4071 &$x=26\pi/65,y=5\pi/13,z=2\pi/13$\
133 & 0.3971 &$x=2\pi/133,y=212\pi/133,z=206\pi/133$\
138 & 0.3963 &$x=16\pi/69,y=19\pi/69,z=4\pi/69$\
145 & 0.3949 &$x=138\pi/145,y=22\pi/145,z=40\pi/29$\
148 & 0.3840 &$x=5\pi/74,y=13\pi/37,z=2\pi/37$\
150 & 0.3758 &$x=\pi/15,y=8\pi/75,z=19\pi/75$\
155 & 0.3828 &$x=2\pi/5,y=26\pi/31,z=58\pi/31$\
156 & 0.3824 &$x=5\pi/39,y=8\pi/39,z=15\pi/78$\
158 & 0.3823 &$x=58\pi/79,y=81\pi/79,z=64\pi/79$\
159 & 0.3814 &$x=8\pi/159,y=64\pi/159,z=30\pi/159$\
160 & 0.3802 &$x=69\pi/80,y=59\pi/80,z=37\pi/20$\
162 & 0.3770 &$x=53\pi/21,y=10\pi/9,z=19\pi/81$\
165 & 0.3760 &$x=24\pi/165,y=26\pi/165,z=34\pi/165$\
166 & 0.3699 &$x=14\pi/83,y=21\pi/83,z=10\pi/83$\
169 & 0.3696 &$x=56\pi/169,y=76\pi/169,z=284\pi/169$\
171 & 0.3678 &$x=32\pi/171,y=294\pi/171,z=6\pi/171$\
178 & 0.3664 &$x=145\pi/89,y=26\pi/89,z=10\pi/89$\
180 & 0.3636 &$x=\pi/9,y=97\pi/90,z=127\pi/90$\
193 & 0.3598 &$x=90\pi/193,y=98\pi/193,z=26\pi/193$\
204 & 0.3566 &$x=13\pi/51,y=4\pi/51,z=5\pi/34$\
208 & 0.3501 &$x=\pi/13,y=8\pi/13,z=65\pi/104$\
214 & 0.3476 &$x=98\pi/107,y=67\pi/107,z=59\pi/107$\
220 & 0.3459 &$x=19\pi/11,y=163\pi/110,z=121\pi/110$\
222 & 0.3438 &$x=19\pi/111,y=22\pi/111,z=15\pi/111$\
225 & 0.3420 &$x=2\pi/225,y=52\pi/225,z=414\pi/225$\
234 & 0.3410 &$x=4\pi/117,y=24\pi/117,z=43\pi/117$\
240 & 0.3371 &$x=71\pi/120,y=11\pi/10,z=187\pi/120$\
244 & 0.3335 &$x=39\pi/122,y=14\pi/61,z=20\pi/61$\
245 & 0.3305 &$x=16\pi/245,y=186\pi/245,z=46\pi/245$\
248 & 0.3291 &$x=103\pi/124,y=39\pi/31,z=179\pi/124$\
259 & 0.3288 &$x=30\pi/259,y=44\pi/259,z=42\pi/259$\
262 & 0.3274 &$x=142\pi/131,y=215\pi/131,z=87\pi/131$\
264 & 0.3247 &$x=79\pi/66,y=129\pi/66,z=215\pi/132$\
276 & 0.3237 &$x=23\pi/138,y=15\pi/69,z=6\pi/69$\
287 & 0.3188 &$x=6\pi/287,y=76\pi/287,z=28\pi/287$\
292 & 0.3164 &$x=65\pi/146,y=14\pi/73,z=82\pi/73$\
295 & 0.3147 &$x=\pi/5,y=50\pi/59,z=22\pi/59$\
300 & 0.3126 &$x=\pi/75,y=17\pi/150,z=9\pi/25$\
General Form Constellation Numerical Design
-------------------------------------------
The connection between the complex Stiefel manifold and $U(M)$ (see the beginning of this section) makes clear that the techniques used above for square unitary constellations can be applied to design general form unitary constellations too. For simplicity we describe the idea under the assumption $T=2M$ and consider the following structure: $$\{A^kB|A \in U(T), B=\left(\begin{array}{c}
I_M\\
0\\
\end{array}\right), k=0,1, \cdots, L-1\}.$$ One can check that at most $2L-1$ distance calculations are needed to derive the diversity product (sum or function) with this algebraic structure.
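The reduction in the number of distance calculations comes from the identity $\Phi_i^*\Phi_j=B^*A^{j-i}B$ for $\Phi_k=A^kB$, so the pairwise distances depend only on $j-i$. A small NumPy check, with an arbitrary unitary $A$ chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
M, T, L = 2, 4, 8

# An arbitrary T x T unitary A (via QR); B stacks I_M on top of a zero block.
Q, _ = np.linalg.qr(rng.standard_normal((T, T)) + 1j*rng.standard_normal((T, T)))
A = Q
B = np.vstack([np.eye(M), np.zeros((T - M, M))]).astype(complex)

Phi = [np.linalg.matrix_power(A, k) @ B for k in range(L)]
sv = lambda i, j: np.linalg.svd(Phi[i].conj().T @ Phi[j], compute_uv=False)

# Phi_i^* Phi_j = B^* A^(j-i) B, so every pairwise distance depends only on j - i.
ok = all(np.allclose(sv(i, j), sv(0, j - i))
         for i in range(L) for j in range(i + 1, L))
print(ok)  # True
```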
The following tables show the constellations ($M=2$) found using SA. More results can be found in [@ha03u2].
size 3 4 5 6 7 8 9
------------------- -------- -------- -------- -------- -------- -------- --------
diversity sum 0.8654 0.7901 0.7889 0.7652 0.7514 0.7422 0.7369
diversity product 0.8582 0.7424 0.7330 0.6450 0.6361 0.6216 0.5822
Fast Decoding of the Structured Constellation
=============================================
\[sphere-decoding\]
The complexity of ML decoding for unitary space time constellations increases exponentially with the number of antennas and with the transmission rate. This precludes its practical use for high transmission rates or for a large number of antennas. Our structured constellations naturally convert ML decoding into lattice decoding, and consequently they admit fast decoding algorithms.
The principle of sphere decoding [@fi85] is as follows: instead of doing an exhaustive search over all lattice points, one limits the search to a sphere of given radius $\sqrt{C}$ centered at the received point. The complexity of this approach is analyzed in [@fi85] and in [@ha03u1].
We will use the $A^kB^l$ structure to describe how one can apply the sphere decoding algorithm for demodulation based on our constellations. Suppose $A$ has Schur decomposition $A=U{{\rm diag}\,}(e^{i\alpha_1}, e^{i\alpha_2},\cdots,e^{i\alpha_M})U^*$ and similarly $B=V{{\rm diag}\,}(e^{i\beta_1},
e^{i\beta_2},\cdots,e^{i\beta_M})V^*$. Consider unitary differential modulation [@ho00] and denote by $X_{\tau}$ the received signal at time block $\tau$. The ML demodulation algorithm involves the following minimization problem: $$(\hat{k},\hat{l})=\arg \min_{k,l} {\|X_{\tau}-A^kB^l
X_{\tau-1}\|}_F.$$ Algebraically one can check that $${\|X_{\tau}-A^kB^l X_{\tau-1}\|}_F={\|A^{-k}X_{\tau}-B^l
X_{\tau-1}\|}_F$$ $$={\|U{{\rm diag}\,}(e^{-i k\alpha_1},e^{-i k\alpha_2},
\cdots, e^{-i k\alpha_M})U^*X_{\tau}-V{{\rm diag}\,}(e^{i l\beta_1},e^{i
l\beta_2}, \cdots, e^{i l\beta_M})V^* X_{\tau-1}\|}_F.$$ So every entry of $X_{\tau}-A^kB^l X_{\tau-1}$ is a linear combination of trigonometric functions ($\cos$ or $\sin$) in the variables $k, l$, which can be viewed as lattice points. As demonstrated in [@ji03] and [@ha03u1], the whole demodulation task is thereby converted into a least-squares problem; consequently our structured constellations admit the sphere decoding algorithm. In [@ji03] a detailed study of the sphere decoding algorithm applied to constellations from $Sp(2)$ was undertaken.
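The first equality above uses only the unitary invariance of the Frobenius norm; a quick NumPy check with randomly chosen data:

```python
import numpy as np

rng = np.random.default_rng(4)

def random_unitary(M):
    Q, _ = np.linalg.qr(rng.standard_normal((M, M)) + 1j*rng.standard_normal((M, M)))
    return Q

M, k, l = 2, 5, 3
A, B = random_unitary(M), random_unitary(M)
X_prev = rng.standard_normal((M, M)) + 1j*rng.standard_normal((M, M))
X_cur = rng.standard_normal((M, M)) + 1j*rng.standard_normal((M, M))

Ak = np.linalg.matrix_power(A, k)
Bl = np.linalg.matrix_power(B, l)

# Multiplying by the unitary (A^k)^* = A^{-k} leaves the Frobenius norm unchanged.
lhs = np.linalg.norm(X_cur - Ak @ Bl @ X_prev)
rhs = np.linalg.norm(Ak.conj().T @ X_cur - Bl @ X_prev)
print(np.isclose(lhs, rhs))  # True
```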
The complexity (either the upper bound or the average complexity) of sphere decoding depends on the dimension of the lattice. This makes the weak group structure $A^kB^k$ all the more remarkable, because in that case the algorithm only requires finding the closest point in a one dimensional lattice, which is very simple.
In [@cl01] a very interesting fast demodulation approach is proposed for diagonal space time constellations. The authors use numerical approximation and the LLL basis reduction technique to reduce the decoding complexity. Note that a constellation with the weak group structure $A^k$ is essentially a diagonal constellation (a straightforward Schur decomposition shows this), therefore the same technique can be applied to this structure. Most importantly, other algebraic structures can employ these techniques too. For instance, consider the $A^kB^lC^m$ structure. If we let $l$ range over a large interval and keep $k, m$ within a small interval, the structure becomes “almost” diagonal. For efficient decoding one only has to do an exhaustive search over $k, m$ and apply the techniques for diagonal constellations to decode $l$. Although the decoding complexity increases a little, our experiments show that the performance is remarkably better than that of the diagonal constellation. Exactly the same “almost” diagonal idea can be applied to the other proposed structures.
Conclusions and Future Work
===========================
In this paper we studied the limiting behavior of the [*diversity function*]{} as the SNR goes to infinity or to zero; this limiting behavior leads, respectively, to the [*diversity product*]{} and the [*diversity sum*]{} of unitary constellations. We proposed algebraic structures which are suitable for constructing unitary space time constellations and which feature fast decoding algorithms. Based on the presented structure we constructed unitary constellations using geometrical symmetry and numerical methods. In $2$ dimensions most of our codes are better than or equal to the currently existing ones; in higher dimensions many codes with excellent diversity are found which were never found before. Combined with the proposed algebraic structure, the numerical methods can also be employed to optimize the diversity function at a certain SNR. Future work may involve analyzing geometric aspects (such as geodesics, gradients and Hessians of the functions) of $U(M)$ or the complex Stiefel manifold. Using optimization techniques on Riemannian manifolds to optimize the distance spectrum of a unitary constellation, in order to search for further good-performing constellations, is also under close investigation.
References
==========
E. H. L. Aarts and J. Korst. . Wiley-Interscience Series in Discrete Mathematics and Optimization. John Wiley & Sons Ltd., Chichester, 1989. A stochastic approach to combinatorial optimization and neural computing.
S. M. Alamouti. A simple transmitter diversity scheme for wireless communications. , pages 1451–1458, October 1998.
M. F. Atiyah and J. A. Todd. On complex [S]{}tiefel manifolds. , 56:342–353, 1960.
W. M. Boothby. , volume 120 of [*Pure and applied mathematics, a series of monographs and textbooks*]{}. Academic Press, Orlando, Fla, 1986.
K. L. Clarkson, W. Sweldens, and A. Zheng. Fast multiple-antenna differential decoding. , 49(2):253–261, February 2001.
M. Eaton. . Wiley, 1983.
A. Edelman, T. A. Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. , 20(2):303–353 (electronic), 1999.
U. Fincke and M. Pohst. Improved methods for calculating vectors of short length in a lattice, including a complexity analysis. , 44:463–471, April 1985.
R. Goodman and N. R. Wallach. . Cambridge University Press, Cambridge, U.K. ; New York, NY, USA, 1998.
G. Han and J. Rosenthal. A website of unitary space time constellations with large diversity. http://www.nd.edu/ eecoding/space-time/.
G. Han and J. Rosenthal. Unitary constellation design and its application to space-time coding. In D. Gilliam and J. Rosenthal, editors, [*Proceedings of the 15-th International Symposium on the Mathematical Theory of Networks and Systems*]{}, University of Notre Dame, August 2002.
G. Han and J. Rosenthal. Unitary constellations with large diversity sum and good diversity product. In [*Proc. of the 40-th Allerton Conference on Communication, Control, and Computing*]{}, pages 48–57, 2002.
G. Han and J. Rosenthal. Unitary space time constellation analysis: An upper bound for the diversity. Preprint, November 2003.
R. Hartshorne. . Springer Verlag, Berlin, 1977.
B. Hassibi and B. M. Hochwald. Cayley differential unitary space-time codes. , 48(6):1485–1503, 2002. Special issue on Shannon theory: perspective, trends, and applications.
B. Hassibi and H. Vikalo. On the expected complexity of integer least-squares problems. In [*Acoustics, Speech and Signal Processing, 2002 IEEE International conference*]{}, pages 1497–1500, April 2002.
B. Hochwald, T. Marzetta, T. Richardson, W. Sweldens, and R. Urbanke. Systematic design of unitary space-time constellations. , 46(6):1962–1973, 2000.
B. Hochwald and W. Sweldens. Differential unitary space-time modulation. , pages 2041–2052, December 2000.
B. M. Hochwald and T. L. Marzetta. Unitary space-time modulation for multiple-antenna communications in [R]{}ayleigh flat fading. , 46(2):543–564, 2000.
Y. Jing and B. Hassibi. Fully-diverse sp(2) code design. In [*Proceedings of the 2003 IEEE International Symposium on Information Theory*]{}, page 299, Yokohoma, Japan, 2003.
X.-B. Liang and X.-G. Xia. Unitary signal constellations for differential space-time modulation with two transmit antennas: Parametric codes, optimal designs and bounds. , 48(8):2291–2322, August 2002.
T. L. Marzetta and B. M. Howchwald. Capacity of a mobile multiple-antenna communication link in [R]{}ayleigh flat fading. , 45(1):139–157, 1999.
N. Metropolis, A.W. Rosenbluth, M.N.Rosenbluth, A.H. Teller, and E. Teller. Equation of state calculations by fast computing machines. , 21(6):1087–1092, 1953.
J. Nocedal and S. J. Wright. . Springer Series in Operations Research. Springer-Verlag, New York, 1999.
R. H. J. M. Otten and L. P. P. P. van Ginneken. . The Kluwer International Series in Engineering and Computer Science. [VLSI]{}, Computer Architecture and Digital Signal Processing. Kluwer Academic Publishers, Boston, MA, 1989.
V. V. Prasolov. , volume 134 of [ *Translations of Mathematical Monographs*]{}. American Mathematical Society, Providence, RI, 1994. Translated from the Russian manuscript by D. A. Leĭtes.
A. Shokrollahi. Computing the performance of unitary space-time group codes from their character table. , 48(6):1355–1371, 2002. Special issue on Shannon theory: perspective, trends, and applications.
A. Shokrollahi, B. Hassibi, B. M. Hochwald, and W. Sweldens. Representation theory for high-rate multiple-antenna code design. , 47(6):2335–2367, 2001.
V. Tarokh and H. Jafarkhani. A differential detection scheme for transmit diversity. , 18(7):1169–1174, 2000.
P. J. M. van Laarhoven and E. H. L. Aarts. , volume 37 of [ *Mathematics and its Applications*]{}. D. Reidel Publishing Co., Dordrecht, 1987.
H. Zassenhaus. über endliche [F]{}astkörper. , 11:187–220, 1936.
[^1]: Both authors were supported in part by NSF grants DMS-00-72383 and CCR-02-05310. The first author was also supported by a fellowship from the Center of Applied Mathematics at the University of Notre Dame. A preliminary version of this paper was presented at 40-th Allerton Conference on Communication, Control, and Computing, Monticello, Illinois, October 2002.
---
abstract: 'To engage in human-like dialogue, robots require the ability to describe the objects, locations, and people in their environment, a capability known as “Referring Expression Generation.” As speakers repeatedly refer to similar objects, they tend to re-use properties from previous descriptions, in part to help the listener, and in part due to cognitive availability of those properties in working memory (WM). Because different theories of working memory “forgetting” necessarily lead to differences in cognitive availability, we hypothesize that they will similarly result in generation of different referring expressions. To design effective intelligent agents, it is thus necessary to determine how different models of forgetting may be differentially effective at producing natural human-like referring expressions. In this work, we computationalize two candidate models of working memory forgetting within a robot cognitive architecture, and demonstrate how they lead to cognitive availability-based differences in generated referring expressions.'
address: 'MIRRORLab, Colorado School of Mines, Golden CO, USA'
author:
- Tom Williams
- Torin Johnson
- Will Culpepper
- Kellyn Larson
bibliography:
- 'references.bib'
date: April 2020
title: 'Toward Forgetting-Sensitive Referring Expression Generation for Integrated Robot Architectures'
---
[[email protected]]{}
[[email protected]]{}
[[email protected]]{}
[[email protected]]{}
Introduction
============
Effective human-robot interaction requires human-like natural language and dialogue capabilities that are sensitive to robots’ embodied nature and inherently situated context [@mavridis2015review; @tellex2020robots]. In this paper we explore the role that models of Working Memory can play in enabling such capabilities in integrated robot architectures. While Working Memory has long been understood to be a core feature of human cognition, and thus a central component of cognitive architectures, recent evidence from psychology suggests a conception of working memory that is subtly different from what is implemented in most cognitive architectures. Specifically, while most models of working memory in computational cognitive architectures maintain a single central working memory store, converging evidence from different communities suggests that humans have different resource limitations for different types of information. Moreover, recent psychological evidence suggests that Working Memory may be a limited resource pool, with resources consumed on the basis of the number and type of features retained. This suggests that *forgetting* in Working Memory should be modeled in cognitive architectures as the systematic removal (on the basis of decay or interference) of entity *features*, with sensitivity to the resource limitations imposed for the specific *type* of information represented by each feature. Of course, robot cognition need not directly mirror human cognition; indeed, robots have both unique knowledge representation needs and increased flexibility in how resource limitations are implemented. In this work, we present a robot architecture in which (1) independent resource pools are maintained for different robot-oriented types of information; (2) WM resources are maintained at the feature level rather than the entity level; and (3) both interference- and decay-based forgetting procedures may be used.
This architecture is flexibly configurable both in terms of what type of forgetting procedure is used, and how that model is parameterized. For robot designers, this choice of parameterization may be made in part on the basis of facilitation of interaction. In this paper we specifically consider how the use of different models of forgetting within this architecture lead to different information being retained in working memory, which in turn leads to different referring expressions being generated by the robot, which in turn can produce *interactive alignment* effects purely through Working Memory dynamics. While in future work it will be important to identify exactly which parameterizations lead to selection of referring expressions that are optimal for effective human-robot interaction and teaming, in this work we take the critical first step of demonstrating, as a proof-of-concept, that decay- and interference-based forgetting mechanisms can be flexibly used within this architecture, and that those policies do indeed produce different natural language generation behavior.
Referring
=========
Models of Referring Expression Generation
-----------------------------------------
“Referring” has been referred to as the “fruit fly” of language due to the amount of research it has attracted [@van2016computational; @gundel2019oxford]. In this work, we focus specifically on Referring Expression Generation (REG) [@reiter1997building] in which a speaker must choose words or phrases that will allow the listener to uniquely identify the speaker’s intended referent. REG includes two constituent sub-problems [@gatt2018survey]: referring form selection and referential content determination. While referring form selection (in which the speaker decides whether to use a definite, indefinite, or pronominal form [@poesio2004centering; @mccoy1999generating] [see also @pal2020cogsci] ) has attracted relatively little attention, referential content determination is one of the most well-explored sub-problems within Natural Language Generation, in part due to the logical nature of the problem that enables it to be studied in isolation, to the point where “REG” is typically used to refer to the referential content determination phase alone. In this section we will briefly define and describe the general strategies that have been taken in the computational modelling of referential content determination; for a more complete account we recommend the recent book by [@van2016computational], which provides a comprehensive account of work on this problem.
Referential content determination, typically employed when generating definite descriptions, is the process by which a speaker seeks to determine a set of constraints on known objects that if communicated will distinguish the target referent from other candidate referents in the speaker and listener’s shared environment. These constraints most often include attributes of the target referent, but can also include relationships that hold between the target and other salient entities that can serve as “anchors”, as well as attributes of those anchors themselves [@dale1991generating].
Three referential content determination models have been particularly influential: the *Full Brevity Algorithm* [@dale1989cooking], in which the speaker selects the description of minimum length, in order to straightforwardly satisfy Grice’s Maxim of Quantity [@grice1975logic]; the *Greedy Algorithm*, in which the speaker incrementally adds to their description whatever property rules out the largest number of distractors[@dale1992generating]; and the *Incremental Algorithm (IA)*, in which the speaker incrementally adds properties to their description in order of *preference* so long as they help to rule out distractors [@dale1995computational]. A key aspect of the IA is its ability to *overspecify* through its inclusion of properties that are not strictly needed from a logical perspective to single out the target referent, but are nevertheless included due to being highly preferred; a behavior also observed in human speakers [@engelhardt2006speakers].
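To make the contrast concrete, the Incremental Algorithm can be sketched as follows (a minimal Python sketch; the domain, property names, and preference ordering are invented for illustration):

```python
def incremental_algorithm(target, distractors, preference_order):
    """Illustrative sketch of the Incremental Algorithm (Dale & Reiter).

    target: dict mapping attributes to values for the intended referent.
    distractors: list of such dicts for the other candidate referents.
    preference_order: attributes ordered from most to least preferred.
    """
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        if attr not in target:
            continue
        # Distractors whose value for attr differs from the target's.
        ruled_out = [d for d in remaining if d.get(attr) != target[attr]]
        if ruled_out:  # the property helps, so commit to it
            description[attr] = target[attr]
            remaining = [d for d in remaining if d.get(attr) == target[attr]]
        if not remaining:
            break
    return description

# Hypothetical domain: the target is a large red ball among other objects.
target = {"type": "ball", "color": "red", "size": "large"}
distractors = [{"type": "ball", "color": "blue", "size": "large"},
               {"type": "cube", "color": "red", "size": "small"}]
print(incremental_algorithm(target, distractors, ["color", "type", "size"]))
# -> {'color': 'red', 'type': 'ball'}
```

Because properties are committed in preference order and never retracted, highly preferred properties that later turn out to be logically redundant survive into the final description, which is the source of the IA's characteristic overspecification.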
Because the IA’s behavior is highly sensitive to preference ordering [@gatt2007evaluating], there has been substantial research seeking to determine what properties are in general psycholinguistically preferred over others [@belke2002tracking], or to automatically learn optimal preference orderings [@koolen2012learning]. As highlighted by [@goudbeek2012alignment], however, this focus on a uniform concept of “preference” obscures a much more complex story that relates to fundamental debates over the extent to which speakers leverage listener knowledge during sentence production. A notion of “preference” as encoded in the IA could be egocentrically grounded, with speakers “preferring” concepts that are easy for themselves to assess or cognitively available to themselves [@keysar1998egocentric]; it could be allocentrically grounded, with speakers intentionally seeking to facilitate the listener’s ease of processing [@janarthanam2009learning]; or a hybrid model could be used, in which egocentric and allocentric processes compete [@bard2004referential], with egocentrism vs. allocentrism “winning out” on the basis of factors such as cognitive load [@fukumura2012producing]. These approaches, which treat accounting for listener knowledge as slow and intentional, stand in contrast to memory-oriented accounts of referential content determination in which such accounting can occur naturally as a result of priming.
Memory-Oriented Models of Referring Expression Generation
---------------------------------------------------------
@pickering2004toward’s *Interactive Alignment* model of dialogue suggests that dialogue is a highly negotiated process (see also [@clark1986referring]) in which priming mechanisms lead interlocutors to influence each others’ linguistic choices at the phonetic, lexical, syntactic and semantic levels, through mutual activation of phonetic, lexical, syntactic, and semantic structures and mental representations, as in the case of lexical entrainment [@brennan1996conceptual].
While there has been extensive evidence for lexical and syntactic priming, semantic or conceptual priming in dialogue has only relatively recently become a target of substantial investigation [@gatt2011attribute]. A theory of dialogue including semantic or conceptual priming would suggest that the properties or attributes that speakers choose to highlight in their referring expressions (e.g., when a speaker chooses to refer to an object as “the large red ball” rather than “the sphere”) should be due in part to these priming effects. And indeed, as demonstrated by @goudbeek2010preferences, speakers can be influenced through priming to use attributes in their referring expressions that would otherwise have been dispreferred.
These findings have motivated dual-route computational models of dialogue [@gatt2011attribute; @goudbeek2011referring] in which the properties used for referring expression selection are made on the basis of interaction between two parallel processes, each of which is periodically called upon to provide attributes of the target referent to be placed into a WM buffer that is consulted when properties are needed for RE generation (at which point selected properties are removed from that buffer). The first of these processes is a priming-based procedure in which incoming descriptions trigger changes in activation within a spreading activation model, and properties are selected if they are the highest-activation properties (above a certain threshold) for the target referent. The second of these processes is a preference-based procedure in which a set of properties is generated by a classic REG algorithm [cp. @gatt2018survey] such as the Incremental Algorithm, in which properties are incrementally selected according to a pre-established preference ordering designed or learned to reflect frequency of use, ease of cognitive or perceptual assessability, or some other informative metric [@dale1995computational].
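One way to picture this dual-route scheme is a buffer-refill routine that samples between a priming-based route and a preference-based route. The following is a toy sketch only, not @gatt2011attribute’s actual implementation; the function name, sampling probability, and capacity are our own invention:

```python
import random

def refill_wm_buffer(primed, preferred, capacity=2, p_priming=0.5):
    """Toy dual-route buffer refill (an illustration, not Gatt et al.'s model).

    primed:    activated properties from the spreading-activation route.
    preferred: properties in preference order from the deliberative route.
    Each needed slot is filled by sampling between the two routes.
    """
    buffer = []
    while len(buffer) < capacity and (primed or preferred):
        # Choose a route; fall back to whichever route still has properties.
        if primed and (not preferred or random.random() < p_priming):
            source = primed
        else:
            source = preferred
        prop = source.pop(0)
        if prop not in buffer:
            buffer.append(prop)
    return buffer

random.seed(42)
print(refill_wm_buffer(["red"], ["ball", "large"]))
```

Note that this sketch, like the model it illustrates, samples between the routes rather than running them in parallel on different time courses.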
One advantage of this type of dual-process model is that it accounts for audience design effects (in which speaker utterances at least appear to be intentionally crafted for ease of comprehension) within an egocentric framework, by demonstrating how priming influences on WM can themselves account for listener-friendly referring expressions. That is, if a concept is primed by an interlocutor’s utterance, a speaker will be more likely to use that concept in their own references simply because it is in WM, with the side effect that that property will then be easy to process by the interlocutor responsible for its inclusion in WM in the first place [@vogels2015cognitive]. Moreover, this phenomenon aligns well with evidence suggesting that despite the prevalence of lexical entrainment and alignment effects, people are actually slow to explicitly take others’ perspectives into account [@bard2000controlling; @fukumura2012producing; @gann2014speaking].
Another advantage of this type of dual-process model is its alignment with overarching dual-process theories of cognition [e.g., @kahneman2011thinking; @evans2013dual; @sloman1996empirical; @oberauer2009design]: the first, priming-driven process for populating WM, grounded in semantic activation, can be viewed as a reflexive System One process, whereas the second, preference-driven process leveraging the Incremental Algorithm can be viewed as a deliberative System Two process. Of course, in the model under discussion the two routes do not truly compete with each other or operate on different time courses, but are instead essentially sampled between; however, it is straightforward to imagine how the two processes used in this type of model could instead be deployed in parallel. One major disadvantage of this type of model, however, is that its focus with respect to WM is entirely on retrieval (i.e., how priming and preference-based considerations impact what information is retrieved from long-term memory into WM), and it fails to satisfactorily account for maintenance within WM. Within @gatt2011attribute’s model, as soon as a property stored in WM is used in a description, it is removed from WM so that that space is available for another property to be considered. This behavior seems counter-intuitive, as it ensures that representations are removed from WM at precisely the moment they are established to be important and useful, which should instead be a cue to retain those representations rather than dispose of them.
Moreover, this model is organized in a surprising way from the perspective of models of WM such as @cowan2001magical’s, in which WM comprises the set of all activated representations, of which a small subset (e.g., three or four) is maintained in the focus of attention. In @gatt2011attribute’s model, in contrast, activated representations are used as just one source populating WM, and decaying activation within the spreading activation network results in representations losing activated status without also being removed from WM. This suggests that the WM buffer within @gatt2011attribute’s model may in fact be better conceptualized as a model of the focus of attention (an interpretation also justified by the two-item size limitation of their WM buffer) than as a model of WM.
A final complication for this model is its speaker-blind handling of priming. Specifically, within @gatt2011attribute’s model a speaker’s utterances are only primed by their interlocutor’s utterances, when in fact the choices a speaker makes should also impact the choices they themselves make in the future [@shintel2007you], either due to Gricean notions of cooperativity [@grice1975logic] or, as we argue, because a speaker’s decision to refer to a referent using a particular attribute should make that attribute more cognitively available to themselves in the immediate future.
These concerns are addressed by our previously proposed model of robotic short-term memory [@williams2018icsr], in which speakers rely on the contents of WM for initial attribute selection and, if their selected referring expression is not fully discriminating, select additional properties using a variant of the IA. While this does not align with dual-process models of cognition, it does account for both encoding and maintenance of WM, and provides a potentially more cognitively plausible account of REG with respect to WM dynamics. One shortcoming shared by both models, however, is that neither the dual-process model of @gatt2011attribute nor our WM-focused model appropriately accounts for when and how information is removed from WM over time, or for how this impacts REG.
Different theories of WM “forgetting” necessarily lead to predicted differences in cognitive availability. Accordingly, these different models of forgetting should similarly predict cognitive availability-based differences in the properties selected during REG. To design effective intelligent agents, it is thus necessary to determine how different models of forgetting may be differentially effective at producing natural human-like referring expressions.
In this work, we first computationalize two candidate models of WM forgetting within a robot cognitive architecture. Next, we propose a model of REG that is sensitive to the WM dynamics of encoding, retrieval, maintenance, *and forgetting*, and discuss the particulars of deploying this type of model within an integrated robot architecture, where WM resources are divided by domain (i.e., people, locations, and objects) rather than by modality (i.e., visual vs. verbal). Finally, we provide a proof-of-concept demonstration of two parameterizations of our model within an integrated robot cognitive architecture, and demonstrate how these different parameterizations lead to cognitive availability-based differences in generated referring expressions.
Models of Forgetting in Working Memory
======================================
Models of forgetting in Working Memory are typically divided into two broad categories [@reitman1971mechanisms; @jonides2008mind; @ricker2016decay]: decay-based models, and interference-based models.
Decay-Based Models
------------------
Decay-based models of WM [@brown1958some; @atkinson1968human] posit that time plays a causal role in WM forgetting, with a representation’s inclusion in WM determined by a level of activation that gradually decays over time if the represented information is not used or rehearsed. Accordingly, in such models, a piece of information is “forgotten” with respect to WM if it falls below some threshold of activation due to disuse. This model of forgetting is intuitively appealing due to the clear evidence that memory performance decreases over time [@brown1958some; @kep; @ricker2016decay].\
**Computational Advantages and Limitations:** As with Gatt et al., spreading activation networks can be used to elegantly model how activation of representations impacts the rise and fall of activation of semantically related pieces of information. One disadvantage of this approach, however, is that activation levels need to be continuously re-computed for each knowledge representation in memory. While this may be an accurate representation of actual cognitive processing, artificial cognitive systems do not enjoy the massively parallelized architectures of biological cognitive systems, meaning that this approach may face severe scaling limitations in practice.\
**Computational Model:** To allow for straightforward comparison with other models of forgetting, we define a simple model of decay that operates on individual representations outside the context of a semantic activation network. We begin by representing WM as a set $WM = \{Q_0,\dots,Q_n\}$, where $Q_i$ is a mental representation of a single entity, represented as a queue of properties currently activated for that entity. Next, we define an encoding procedure that specifies how the representations in WM are created and manipulated on the basis of referring expressions generated either by the agent or its interlocutors. As shown in Alg. \[alg:encoding\], this procedure operates by considering each property included in the referring expression and updating the queue used to represent the entity being described, placing that property at the back of the queue, or moving the property to the back of the queue if it is already included in the representation. Note that this procedure can be used either after each utterance is heard (in which case the representation is updated based on all properties used to describe the entity) or incrementally (in which case the representation is updated after each property is heard). If used incrementally, then forgetting procedures may be interleaved with representation updates. Finally, we define a model of decay that operates on these representations. As shown in Alg. \[alg:decay\], this procedure operates by removing the property at the front of queue $Q$ at fixed intervals defined by decay parameter $\delta$.
\[Alg. \[alg:encoding\] (Encoding): for each property $p$ used to describe entity $R$, set $Q[R] = Q[R] \setminus p$, then append $p$ to the back of $Q[R]$.\]

\[Alg. \[alg:decay\] (Decay): given the per-entity property queue $Q$ and decay parameter $\delta$: every $\delta$ seconds, $pop(Q)$, until $Q=\emptyset$.\]
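A minimal Python sketch of the encoding and decay procedures just described (the class and method names are ours, and real time is simulated with explicit `tick` calls rather than a timer thread):

```python
from collections import deque

class DecayWM:
    """Illustrative sketch: per-entity property queues with time-based decay."""

    def __init__(self, delta):
        self.delta = delta      # decay parameter: seconds between removals
        self.entities = {}      # entity id -> deque of activated properties
        self.elapsed = 0.0

    def encode(self, entity, properties):
        """Encoding: (re-)append each mentioned property to the back of the queue."""
        q = self.entities.setdefault(entity, deque())
        for p in properties:
            if p in q:
                q.remove(p)     # move-to-back if already represented
            q.append(p)

    def tick(self, seconds):
        """Decay: every delta seconds, pop the least recently used property."""
        self.elapsed += seconds
        while self.elapsed >= self.delta:
            self.elapsed -= self.delta
            for q in self.entities.values():
                if q:
                    q.popleft()

buffer = DecayWM(delta=5.0)
buffer.encode("obj3", ["red", "large", "ball"])
buffer.tick(10.0)                     # two decay intervals elapse
print(list(buffer.entities["obj3"]))  # -> ['ball']
```

In an integrated architecture the `tick` logic would more likely be driven by a scheduled timer, but the queue dynamics are the same.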
Interference-Based Models
-------------------------
In contrast, interference-based models [@waugh1965primary] argue that WM is a fixed-size buffer in which a piece of information is “forgotten” with respect to WM if it is removed (due to, e.g., being the least recently used representation in WM) to make room for some new representation.
Interference-based models have been popular for nearly as long as decay-based models [@keppel1962proactive], due to observations that the evidence cited for “decay” can just as easily be taken as evidence for forgetting due to intra-representational interference, since longer periods of time directly correlate with higher frequencies of interfering events [@lewandowsky2015rehearsal; @oberauer2008forgetting], and because tasks with varying temporal lengths but consistent overall levels of interference have been shown to yield similar rates of forgetting, thus failing to support time-based decay [@oberauer2008forgetting]. Recent work has trended towards interference-based accounts of forgetting, with a number of further debates and competing models opening up within this broad theoretical ground.
First, there is debate as to whether interference *alone* is sufficient to explain forgetting, or whether time-based decay still plays some role in conjunction with interference. Recent work suggests that in fact these two models may be differentially employed for different types of representations, with phonological representations forgotten due to interference and non-phonological representations forgotten due to a combination of interference and time-based decay [@ricker2016decay].
Second, within interference-based models, there exist competing models based on reasons for displacement. In particular, while theories of pure displacement [@waugh1965primary] posit that incoming representations replace maintained representations on the basis of frequency or recency of use, or on the basis of random chance (similar to caching strategies from computing systems research [@press2014caching]), theories of retroactive interference instead posit that replacements are made on the basis of semantic similarity, with representations “forgotten” if they are too similar to incoming representations [@wickelgren1965acoustic; @lewis1996interference].
Third, within both varieties of interference-based models, there has been recent debate on the structure and organization of the capacity-limited medium of WM. [@ma2014changing] contrasts four such models: (1) slot models, in which a fixed number of slots are available for storing coherent representations [@miller1956magical; @cowan2001magical]; (2) resource models, in which a fixed amount of representational medium can be shared between an unbounded number of representations (with storage of additional features in one representation leaving less feature-representing medium available for other representations) [@wilken2004detection]; (3) discrete-representation models, in which a fixed number of feature “quanta” are available to distribute across representations [@zhang2008discrete]; and (4) variable-precision models, in which working memory precision is statistically distributed [@fougnie2012variability].\
**Computational Advantages and Limitations:** One advantage of interference-based models for artificial cognitive agents is decreased computational expense, as only a fixed number of entities or features must be maintained in WM, and WM need not be updated at continuous intervals if no new stimuli are processed. Rather, WM only needs to be updated when (1) new representations are encoded into WM, or (2) existing representations are manipulated. Another advantage of this approach is its conceptual alignment with the process of *caching* from computer science, which means that caching mechanisms from computing systems research, such as *least-recently-used* and *least-frequently-used* caching policies, can be straightforwardly leveraged, with prior work providing substantial information about their theoretical properties and guarantees. In fact, recent work has explored precisely how caching strategies from computer science can be used for this purpose [@press2014caching]. Within the interference-based family of models, slot-based and discrete-representation models are likely the easiest to computationalize, given the ephemeral and undiscretized nature of the “representational medium” posited by the alternatives.\
**Computational Model:** To model interference-based forgetting, we use the same WM representation and encoding procedure as used to model decay-based forgetting, and propose a new model designed specifically for robotic knowledge representation. This model can be characterized as a per-entity discrete-representation displacement model. As shown in Alg. \[alg:displacement\], this procedure operates by removing properties at the front of queue $Q$ whenever the size of $Q$ is greater than some size limitation imposed by parameter $\alpha$. This model is characterized as discrete-representation because resource limitations are imposed at the level of discrete features rather than holistic representations. It is characterized as a displacement model because features are replaced on the basis of a Least-Recently-Used (LRU) caching strategy [@knuth1997art] rather than on the basis of semantic similarity, due to the pragmatic difficulty of assessing the similarity of different categories of properties without mandating the use of architectural features such as well-specified conceptual ontologies [cp. @tenorth2009knowrob; @lemaignan2010oro], which may not be available in all robot architectures. This model is characterized as per-entity because resource limitations are imposed locally (i.e., for each entity) rather than globally (i.e., shared across all entities). While this is obviously not a cognitively plausible characteristic, it was selected, as a starting point, to reduce the need for coordination across architectural components and to facilitate improved performance (as entity representations need not compete with each other). However, if desired, it would be straightforward to extend this approach to allow for imposition of global resource constraints.
\[Alg. \[alg:displacement\] (Displacement): given the per-entity property queue $Q$ and maximum buffer size $\alpha$: while $|Q| > \alpha$, $pop(Q)$.\]
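The displacement model admits a similarly minimal sketch (again assuming the queue-based representation defined for the decay model; class and method names are ours):

```python
from collections import deque

class DisplacementWM:
    """Illustrative sketch: per-entity discrete-representation LRU displacement."""

    def __init__(self, alpha):
        self.alpha = alpha      # maximum properties retained per entity
        self.entities = {}      # entity id -> deque of activated properties

    def encode(self, entity, properties):
        q = self.entities.setdefault(entity, deque())
        for p in properties:
            if p in q:
                q.remove(p)     # move-to-back: mark as most recently used
            q.append(p)
            while len(q) > self.alpha:
                q.popleft()     # displace the least recently used property

wm = DisplacementWM(alpha=2)
wm.encode("obj3", ["red", "large"])
wm.encode("obj3", ["ball"])         # "red" is least recently used: displaced
print(list(wm.entities["obj3"]))    # -> ['large', 'ball']
```

Because resource limits here are per entity, the two queues for different entities never interact; a global variant would instead pop from whichever entity holds the globally least recently used property.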
Models of Working Memory in Integrated Robot Architectures
==========================================================
Memory modeling has long been a core component of cognitive architectures, due to its central role in cognition [@baxter2011memory]. The relative attention paid to WM has, however, varied widely between cognitive architectures. ARCADIA, for example, has placed far more emphasis on attention than on memory [@bridewell2016theory]. While ARCADIA does include a visual short term memory component, it is treated as a “potentially infinite storehouse,” with consideration of resource constraints left to future work [@bridewell2016theory].
ACT-R and Soar, in contrast, do place larger emphases on WM. ACT-R did not originally have explicit WM buffers, instead implicitly representing WM as the subset of LTM with activation above some particular level [@anderson1996working], with “forgetting” thus modeled through activation decay [cp. @cowan2001magical]. In more recent incarnations of ACT-R, a very small short-term buffer is maintained, with contents retrieved from LTM on the basis of both base-level activation (reflecting recency and frequency of use) and informative cues. Similarly, Soar [@laird2012soar] has long emphasized the role of WM, due to its central focus on problem solving through continuous manipulation of WM [@rosenbloom1991preliminary]. And while Soar did not initially represent WM resource limitations [@young1999soar], it has by now long included decay-based mechanisms operating on at least a subset of WM contents [@chong2003addition; @nuxoll2004comprehensive], as well as base-activation and cue-based retrieval methods such as those mentioned above [@jones2016efficient]. While these architectures may be intended *primarily* as models of human cognition, flaws and all, rather than as systems for enabling effective task performance through whatever means necessary regardless of cognitive plausibility, a significant body of work has well demonstrated their utility for cognitive robotics [@laird2012cognitive; @laird2017standard; @kurup2012can] and human-robot interaction [@trafton2013act].
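The base-level activation mentioned above is conventionally formalized in ACT-R by the base-level learning equation, $B_i = \ln \sum_j t_j^{-d}$, where the $t_j$ are the times since past uses of chunk $i$ and $d$ is a decay rate. A minimal sketch (the usage times and parameter values are illustrative):

```python
import math

def base_level_activation(use_times, now, d=0.5):
    """ACT-R-style base-level learning: B = ln( sum_j (now - t_j)^(-d) ).

    use_times: past times at which the chunk was used; d: decay rate
    (0.5 is the conventional default). Higher B -> easier retrieval.
    """
    return math.log(sum((now - t) ** -d for t in use_times))

# A recently used chunk has higher activation than one used long ago ...
assert base_level_activation([9.0], now=10.0) > base_level_activation([1.0], now=10.0)
# ... and frequent use raises activation further.
assert base_level_activation([1.0, 9.0], now=10.0) > base_level_activation([9.0], now=10.0)
```

This makes concrete how recency and frequency jointly determine what is retrieved into ACT-R's short-term buffer.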
There has also been significant work in robotics seeking to use insights from psychological research on WM to better inform the design of select components of larger robot cognitive architectures that do not necessarily aspire towards cognitive plausibility. For example, a diverse body of researchers has collaborated on the development and use of the WM Toolkit [@phillips2005biologically; @gordon2006system; @kawamura2008implementation; @persiani2018working]; a software toolkit that maintains pointers to a fixed number of chunks containing arbitrary information. At each timestep, this toolkit proposes a new set of chunks, and then uses neural networks to select a subset of these chunks to retain. This model has been primarily used for enabling *cognitive control*, in which the link between robot perception and action is modulated by a learned memory management policy.
Models of WM have also been leveraged within the field of Human-Robot Interaction. [@broz2012interaction], for example, specifically model the episodic buffer sub-component of WM [@baddeley2000episodic]. [@baxter2013cognitive] leverages models of WM to better enable non-linguistic social interaction through alignment, similar to our approach in this work. Researchers have also leveraged models of WM to facilitate communication. For example, [@hawes2007towards] leverage a model of WM within the CoSy architecture, with concessions made to accommodate the realistic needs of integrated robot architectures, in which specialized representations are stored and manipulated within distributed, parallelized, domain-specific components [see also @williams2016aaai]. Similarly, in our own work within the DIARC architecture [@scheutz2019overview], we have demonstrated (as we will further discuss in this paper) the use of distributed WM buffers associated with such architectural components [@williams2018icsr], as well as hierarchical models of common ground [@williams2019oxford] jointly inspired by models of WM [e.g. @cowan2001magical] and models of *givenness* from natural language pragmatics [e.g. @gundel1993cognitive]. In the next section we propose a new architecture that builds on this previous work of ours to allow for flexible selection of (and comparison between) different models of forgetting.
Proposed Architecture
=====================
Our forgetting models were integrated into the Distributed, Integrated, Affect, Reflection, Cognition (DIARC) Robot Cognitive architecture: a component-based architecture designed with a focus on robust spoken language understanding and generation [@scheutz2019overview]. DIARC’s Mnemonic and Linguistic components integrate via a *consultant framework* in which different architectural components (e.g., vision, mapping, social cognition) serve as heterogeneous knowledge sources that comprise a distributed model of Long Term Memory [@williams2017iib; @williams2016aaai].
These consultants are used by DIARC’s natural language pipeline for the purposes of reference resolution and REG. In recent work we have extended this framework to produce a new *Short-Term Memory Augmented* consultant framework in which consultants additionally maintain, for some subset of the entities for which they are responsible, a short term memory buffer of properties that have recently been used by interlocutors to refer to those entities. In this work, we build upon that STM (Short Term Memory)-Augmented Consultant Framework through the introduction of a new architectural component, the <span style="font-variant:small-caps;">WM Manager</span>, which is responsible for implementing the two forgetting strategies introduced in the previous section.
Our model aligns with two key psychological insights. First, converging evidence from different communities suggests that humans have different resource limitations for different types of information [@wickens2008multiple], due to either decreased interference between disparate representations [@oberauer2012modeling] or the use of independent domain-specific resource pools [@baddeley1992working; @logie1995visuo]. Our approach takes a robot-oriented perspective on this second hypothesis, with our use of the WM-Augmented consultant framework resulting in independent resource pools maintained for different types of entities (e.g., objects vs. locations vs. people) rather than for different modalities (e.g., visual vs. auditory) or different codes of processing (e.g., spatial vs. verbal).
Second, while early models of WM suggested that WM resource limitations are bounded to a limited number of chunks [@miller1956magical], more recent models instead suggest that the size of WM is affected by the complexity of those chunks [@mathy2012s], and that maintaining multiple features of a single entity may detract from the total number of maintainable entities, and accordingly, the number of features maintainable for other entities [@alvarez2004capacity; @oberauer2016limits; @taylor2017does]. Our approach, again, takes a robot-oriented perspective on these models, maintaining WM resources at the feature-level rather than entity-level, while enabling additional flexibility that may not be reflected in human cognition. Specifically, instead of enforcing global resource limits, we allow for flexible selection between decay-based and interference-based (i.e., resource-limited) memory management models, as well as for simultaneous employment of both models, in order to model joint impacts of interference and decay as discussed by [@ricker2016decay]. Moreover, while we currently focus on local (per-entity) feature-based resource limitations, our system is designed to allow for global resource limitations [cp. @just1992capacity; @ma2014changing] in future work, due to our use of a global <span style="font-variant:small-caps;">WM Manager</span> Component.
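To make the two memory-management models concrete, the following minimal Python sketch shows a per-consultant WM buffer supporting both decay-based and interference-based forgetting. The names (`WMBuffer`, `use`, `decay`, `interfere`) and the parameter values are illustrative assumptions, not the actual DIARC interfaces:

```python
class WMBuffer:
    """Per-consultant WM buffer of (property, last-used-timestep) pairs."""

    def __init__(self, delta=10, alpha=2):
        self.delta = delta    # decay window, in timesteps
        self.alpha = alpha    # interference capacity, in properties
        self.entries = {}     # property -> timestep it was last used

    def use(self, prop, step):
        # Cache (or refresh) a property used to refer to this entity.
        self.entries[prop] = step

    def decay(self, step):
        # Decay-based forgetting: drop properties unused for > delta steps.
        self.entries = {p: s for p, s in self.entries.items()
                        if step - s <= self.delta}

    def interfere(self):
        # Interference-based (resource-limited) forgetting:
        # keep only the alpha most recently used properties.
        keep = sorted(self.entries, key=self.entries.get,
                      reverse=True)[:self.alpha]
        self.entries = {p: self.entries[p] for p in keep}

    def contents(self):
        # Least-recently-used first, as in the case study below.
        return sorted(self.entries, key=self.entries.get)
```

Because both mechanisms are exposed on the same buffer, they can be applied individually or jointly, matching the flexible selection described above.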
Using this architecture, our different forgetting models can differentially affect REG without any direct interaction with the REG process. Rather, the <span style="font-variant:small-caps;">WM Manager</span> simply interfaces with the WM buffers maintained by each distributed consultant, which then implicitly impacts the properties available to the SD-PIA algorithm that we use for REG. This algorithm takes a lazy approach to REG in which the speaker initially attempts to rely only on the properties that are currently cached in WM, and only if this is insufficient to distinguish the target from distractors does the speaker retrieve additional properties from long-term memory.
Incorporating the <span style="font-variant:small-caps;">WM Manager</span> into DIARC yields the configuration shown in Fig. \[fig:architecture\]. When a teammate speaks to a robot running this configuration, text is recognized and then semantically parsed by the TLDL Parser [@scheutz2017spoken], after which intention inference is performed by the Pragmatic Understanding Component [@williams2015aaai], whose results are provided to the Reference Resolution module of the Referential Executive (REX), which leverages the GROWLER algorithm [@williams2018mrhrc:growler] (see also [@williams2016hri; @williams2019oxford]), which searches through Givenness Hierarchy (GH) [@gundel1993cognitive] informed data structures (Focus, Activated, Familiar) representing a hierarchically organized cache of relevant entity representations. This is important to highlight due to its relation to the WM buffers described in this work. While the WM buffers described in this work serve as a model of the robot’s own WM, the Referential Executive’s data structures can instead be viewed as either second-order theory-of-mind data structures or as a form of hierarchical common ground. When particular entities are mentioned by the robot’s interlocutor or by the robot itself, pointers to the full representations for these entities (located in the robot’s distributed long-term memory) are placed into the Referential Executive’s GH-informed data structures, and the properties used to describe them are placed into the robot’s distributed WM buffers. In addition, properties are placed into WM whenever the robot affirms that those properties hold through an LTM query. Critically, this can happen when considering other entities. For example, if property $P$ is used during reference resolution, then $P$ will be placed into WM for any distractors for which $P$ is affirmed to hold, before being ruled out for other reasons. 
Similarly, if $R$ is the target referent during REG, and if property $P$ holds for $R$ and is considered for inclusion in the generated RE, then for any distractors for which $P$ also holds, $P$ will be added to those distractors’ WM buffers at the point where it is affirmed that $P$ cannot be used to rule out those distractors because it holds for them as well. Once Reference Resolution is completed, if the robot decides to respond to the human utterance, it does so through a process that is largely the inverse of language understanding, including a dedicated REG module, which uses the properties cached in WM, along with properties retrievable from Long-Term Memory, to translate this intention into text [@williams2017inlg; @williams2018icsr].
Experimental Setup and Results
==============================
| Turn | Face | Description                | Face | Decay               | Interference        | No WM               |
|------|------|----------------------------|------|---------------------|---------------------|---------------------|
| 1    | 1    | $H_L$, $G_M$, $C_L$, $G_Y$ | 1    | $H_L$, $C_L$, $G_Y$ | $H_L$, $C_L$, $G_Y$ | $H_L$, $C_L$, $G_Y$ |
| 2    | 2    | $G_F$, $G_Y$               | 5    | $H_D$, $G_M$        | $H_D$, $G_M$        | $H_D$, $G_M$        |
| 3    | 3    | $G_F$, $C_H$, $G_N$        | 2    | $G_F$, $G_Y$        | $G_F$, $G_Y$        | $G_F$, $G_Y$        |
| 4    | 4    | $H_L$, $H_S$, $C_L$, $G_N$ | 1    | $H_S$, $C_L$, $G_Y$ | $H_L$, $C_L$, $G_Y$ | $H_L$, $C_L$, $G_Y$ |
| 5    | 5    | $H_D$, $H_S$, $C_T$, $G_M$ | 3    | $G_F$, $C_H$        | $G_F$, $C_H$        | $G_F$, $C_H$        |
| 6    | 6    | $H_S$, $C_H$, $G_Y$        | 1    | $C_L$, $G_Y$        | $H_S$, $C_L$, $G_Y$ | $H_L$, $C_L$, $G_Y$ |
: 6 Face Case Study. Column contents denote properties used to describe the target under each forgetting model, listed in the consistent order below for easy comparison rather than in order selected. Properties: $H_L$ = <span style="font-variant:small-caps;">light-hair(X)</span>, $H_D$ = <span style="font-variant:small-caps;">dark-hair(X)</span>, $H_S$ = <span style="font-variant:small-caps;">short-hair(X)</span>, $G_M$ = <span style="font-variant:small-caps;">male(X)</span>, $G_F$ = <span style="font-variant:small-caps;">female(X)</span>, $C_T$ = <span style="font-variant:small-caps;">t-shirt(X)</span>, $C_L$ = <span style="font-variant:small-caps;">lab-coat(X)</span>, $C_H$ = <span style="font-variant:small-caps;">hoodie(X)</span>, $G_Y$ = <span style="font-variant:small-caps;">glasses(X)</span>, $G_N$ = <span style="font-variant:small-caps;">no-glasses(X)</span>. []{data-label="tab:casestudies"}
We analyzed the proposed architecture by assessing two claims: (1) The proposed architecture demonstrates interactive alignment effects purely through WM dynamics; and (2) the proposed architecture demonstrates different referring behaviors when different models of forgetting are selected. These claims were assessed within the context of a “Guess Who”-style game in which partners take turns describing candidate referents (assigned from a set of 16 faces). On each player’s turn, they are assigned a referent, and must describe that referent using a referring expression they believe will allow their interlocutor to successfully identify it (a process of REG). The other player must then process their interlocutor’s referring expression and identify which candidate referent they believe to be their interlocutor’s target (a process of Reference Resolution).
Ideally, we would have assessed our claims in a setting in which a robot played this reference game with a naive human subject. This proved to be impossible due to the COVID-19 global pandemic. Instead, we present a case study in which a series of three six-round reference games is played between robot agents and a single human agent. All three games followed the same predetermined order of candidate referents and used the same pre-determined human utterances. The robot’s responses were generated autonomously, with the robot in each of the three games using a different model of forgetting. In the first game, the robot uses our decay-based model of forgetting with $\delta=10$; in the second game, the robot uses our interference-based model of forgetting with $\alpha=2$; in the third game, the robot did not retain any properties in short-term memory at all. The referring behavior under each model of forgetting is shown in Tab. \[tab:casestudies\]. As shown in this table, the three examined models perform similarly in initial dialogue turns, but increasingly diverge over time. To help explain the observed differences in robot behavior, we examine turn 6 specifically, in which the robot refers to Face 1 for the third time. This face could ostensibly be referred to using four properties: <span style="font-variant:small-caps;">light-hair(X)</span>, <span style="font-variant:small-caps;">short-hair(X)</span>, <span style="font-variant:small-caps;">lab-coat(X)</span>, and <span style="font-variant:small-caps;">glasses(X)</span>.
The architectural configuration that did not maintain representations in WM (No WM) operated according to the DIST-PIA algorithm [@williams2017inlg], which is a version of the Incremental Algorithm that is sensitive to uncertainty and that allows for distributed sources of knowledge. This algorithm first considers the highly preferred property <span style="font-variant:small-caps;">light-hair(X)</span>, which is selected because it applies to Face 1 while ruling out distractors. Next, it considers <span style="font-variant:small-caps;">short-hair(X)</span>, which it ignores because while it applies to Face 1, the faces with short hair are a subset of those with light hair, and thus <span style="font-variant:small-caps;">short-hair(X)</span> is not additionally discriminative. Next, the algorithm considers <span style="font-variant:small-caps;">lab-coat(X)</span>, which it selects because it applies to Face 1 and rules out further distractors. Finally, to complete disambiguation, the algorithm considers and selects <span style="font-variant:small-caps;">glasses(X)</span>.
In contrast, the configuration that used the decay model had the following properties in WM: {<span style="font-variant:small-caps;">lab-coat(X)</span>, <span style="font-variant:small-caps;">short-hair(X)</span>, <span style="font-variant:small-caps;">glasses(X)</span>} (ordered from least-recently used to most-recently used[^1]). The algorithm starts by considering the properties stored in WM, beginning with <span style="font-variant:small-caps;">lab-coat(X)</span>, which is selected because it applies to Face 1 while ruling out distractors. Next, it considers <span style="font-variant:small-caps;">short-hair(X)</span>, which is ignored because the set of entities with short hair is a subset of those wearing lab coats, and thus this is not additionally discriminative. Next, it considers <span style="font-variant:small-caps;">glasses(X)</span>, which it selects because it applies to Face 1 and rules out distractors. In fact, {<span style="font-variant:small-caps;">lab-coat(X)</span>, <span style="font-variant:small-caps;">glasses(X)</span>} is fully discriminating for Face 1, so no further properties are needed.
Finally, the configuration that used the interference model had the following properties in WM: {<span style="font-variant:small-caps;">short-hair(X)</span>, <span style="font-variant:small-caps;">glasses(X)</span>}. This is easy to see, as those properties were recently used in the Human’s description of Face 6, and thus would have been considered for Face 1 when ruling it out during reference resolution. The algorithm thus starts by considering both of these properties, which are both selected because they apply to Face 1 and rule out distractors. However, because these are not sufficient for full disambiguation, the algorithm must also retrieve another property from LTM, i.e., <span style="font-variant:small-caps;">lab-coat(X)</span>, which allows for completion of disambiguation.
The differences in behavior demonstrated in this simple example validate both our claims. First, the proposed architecture’s ability to demonstrate interactive alignment effects purely through WM dynamics is demonstrated by the systems’ tendency to re-use properties originating from its interlocutor. Second, this example clearly demonstrates that the proposed architecture demonstrates different referring behaviors when different models of forgetting are selected.
Conclusions and Future Work
===========================
We have presented a flexible set of forgetting mechanisms for integrated cognitive architectures, and conducted a preliminary, proof-of-concept demonstration of these mechanisms, showing that they lead to different referring expressions being generated due to differences in cognitive availability between different properties. The next step of our work will be to fully explore the implications of different parametrizations of each of our presented mechanisms, as well as the combined use of these mechanisms, on REG, and whether the referring expressions generated under different parametrizations are comparatively more or less natural, human-like, or effective, which would present obvious benefits for interactive, intelligent robots. In addition, the perspective taken in this paper may also yield insights and benefits for cognitive science more broadly. Specifically, we argue that our perspective may suggest alternative interpretations of the role of cognitive load on attribute choice. In their work building on @gatt2011attribute’s model, @goudbeek2011referring suggest that when speakers are under high cognitive load, they rely less on previously primed attributes, and are thus more likely to rely on their stable preference orderings. Their explanation for this finding is that a decrease in available WM capacity leads to an inability to retrieve dialogue context into WM. We suggest an alternative explanation: cognitive load leads to decreased priming not because priming-activated representations cannot be retrieved into WM, but because those representations are less likely to be in WM in the first place. An additional promising direction for future work will thus be to compare the ability of the model presented in this paper to those presented by @goudbeek2011referring with respect to modeling of REG under cognitive load.
[^1]: Future work should consider other algorithmic configurations, such as having properties within WM considered in the reverse order, or according to the preference ordering specified by the target referent’s consultant.
---
address: 'Justus-Liebig-Universität Giessen, II. Physikalisches Institut, Heinrich-Buff-Ring 16, 35392 Giessen'
author:
- |
Jens Sören Lange\
Email [email protected]
title: 'Charm and $\tau$ Decays: Review of BaBar and Belle Results'
---
The $B$ meson factories can in fact also be considered as charm and $\tau$ factories, as [*(a)*]{} $\simeq$99% of all $B$ mesons decay to charm final states, and [*(b)*]{} the cross sections of charm and $\tau$ production in the continuum, i.e. $e^+$$e^-$$\rightarrow$$q$$\overline{q}$ ($q$=$u$,$d$,$s$,$c$) and $e^+$$e^-$$\rightarrow$$\tau^+$$\tau^-$, are as high as the cross section for $B$ meson production ($\simeq$1 nb) in the $\Upsilon$(4S) decay. Details of the BaBar and Belle detectors can be found elsewhere [@babar_nim; @belle_nim]. At the time of this review, the integrated luminosities are 512.2 fb$^{-1}$ for BaBar and 773.7 fb$^{-1}$ for Belle.
Charm Decays
============
$D_{sJ}$ states
---------------
Earlier, two new $D_{sJ}$ states had been observed: the $D_{s0}$(2317) by BaBar [@babar_dsj2317] in the decay[^1] $D_{s0}^{+}$(2317)$\rightarrow$$D_s^+$$\pi^0$, and the $D_{s1}$(2460) by CLEO [@cleo_dsj2460] in the decay $D_{s1}^{+}$(2460)$\rightarrow$$D_s^{*+}$$\pi^0$. In both cases, due to the $D_s$ in the final state, the most probable assignment is a $($$c$$\overline{s}$$)$ state with $L$=1. However, the measured masses were found to be $\simeq$100 MeV lower than predicted by early quark models [@isgur_weise], which inspired further theoretical and experimental work. Recent analyses have investigated decays of higher-mass $D_{sJ}$ states not only into $D_s$$\pi$, but also into $D^{(*)}$$K$, i.e. corresponding to a Feynman diagram with an internally (rather than externally) created $u$$\overline{u}$ pair.
$D_{sJ}$ decays into $D^*$$K$
-----------------------------
This category of decays (assuming $L$=0) favors the production of vector or axial-vector $D_{sJ}$ states due to the $D^*$ vector meson in the final state. Recently BaBar reported the first observation of the $D_{s1}$(2536) in $B$ decays. A data set of 347 fb$^{-1}$ was used to investigate the decays $D_{s1}$(2536)$\rightarrow$$D^{*+}$$K_s^0$ and $D_{s1}$(2536)$\rightarrow$$D^{*0}$$K^+$. An analysis of the $D_{s1}$(2536) helicity was performed in order to confirm the historically assigned PDG [@PDG] quantum numbers of $J^P$=1$^+$, which is the natural expectation in the heavy charm-quark limit for the lower member of an $L$=1, $j_q$=3/2 doublet. $J^P$=1$^+$ in a pure S-wave as well as $J^P$=2$^+$ and $J^P$=2$^-$ were disfavoured, whereas fits for $J^P$=1$^-$ in a pure P-wave and $J^P$=1$^+$ with S/D-wave admixture both describe the observed angular distribution well. On the other hand, Belle reported an analysis of $D_{s1}$(2536)$\rightarrow$$D^{*+}$$K_s^0$, but using continuum production $e^+$$e^-$$\rightarrow$$D_{s1}$(2536)$X$. A partial wave analysis on a data set of 462 fb$^{-1}$ was performed in order to study the mixing of the ($j_q$=1/2) $D_{s1}$(2460) and the ($j_q$=3/2) $D_{s1}$(2536) states. Heavy Quark Effective Theory (HQET) predicts that ($j_q$=3/2)$\rightarrow$$D^*$$K$ should be a pure D-wave decay, and ($j_q$=1/2)$\rightarrow$$D^*$$K$ should be a pure S-wave decay. If HQET is not exact, mixing between the S and D-waves would be possible by LS interaction:\
$|$$D_{s1}$(2460)$>$=cos$\theta$$|$$j_q$=1/2$>$+sin$\theta$$|$$j_q$=3/2$>$ and $|$$D_{s1}$(2536)$>$=$-$sin$\theta$$|$$j_q$=1/2$>$+cos$\theta$$|$$j_q$=3/2$>$.\
The fit result yields an admixture of $D$/$S$= (0.63$\pm$0.07(stat.))$\cdot$ exp($\pm$$i$(0.77$\pm$0.03(stat.))). The result indicates that the S-wave dominates the decay $D_{s1}$(2536)$\rightarrow$$D^*$$K$ with a fraction of (72$\pm$3(stat.)$\pm$1(syst.))%, which contradicts HQET. The reason might be the fact that HQET assumes the $c$ quark to be infinitely heavy. However, it should be noted that the interpretation is not trivial, as the D-wave might also be suppressed by the centrifugal barrier. In any case, there is strong indication that the $D_{s1}$(2460) and the $D_{s1}$(2536) in fact are mixing, which might be considered surprising for two very narrow states ($\Gamma$$\leq$1 MeV) with a mass difference of $\Delta$$m$$\simeq$76 MeV. As a by-product of this analysis, Belle reported the first observation of a three-body decay mode of the $D_{s1}$(2536), in the decay $D_{s1}$(2536)$\rightarrow$$D^{*0}$$K^+$$\pi^-$.
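As a quick numerical consistency check: if the two partial waves are assumed to add incoherently in the rate, the quoted amplitude ratio $|D/S|$ directly reproduces the quoted S-wave fraction, since the S-wave intensity fraction is $1/(1+|D/S|^2)$. A one-line sketch:

```python
# Consistency check: |D/S| = 0.63 implies an S-wave intensity fraction
# of 1 / (1 + |D/S|^2), assuming incoherent addition of partial waves.
ratio_DS = 0.63
frac_S = 1.0 / (1.0 + ratio_DS ** 2)
print(round(100 * frac_S))  # 72, matching the quoted (72 +- 3)%
```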
$D_{sJ}$ decays into $D$$K$
---------------------------
This category of decays (assuming $L$=0) favors the production of scalar $D_{sJ}$ states due to the two pseudoscalar mesons in the final state. Recently Belle reported the observation of a new $D_{sJ}$ meson, tentatively called $D_{sJ}$(2700), in $B^+$$\rightarrow$$\overline{D}^0$$D^0$$K^+$ decays with a data set of 414 fb$^{-1}$. The $D_{sJ}$(2700) was found to be the dominant resonance in this $B$ decay. The mass was determined as $m$=2708$\pm$9(stat.)$^{+11}_{-10}$(syst.) MeV, the width as $\Gamma$=108$\pm$23(stat.)$^{+36}_{-31}$(syst.) MeV. The signal yield was reported as 182$\pm$30 events with a statistical significance of 8.4$\sigma$. In order to determine the quantum numbers, a helicity analysis was performed. A fit to the distribution of the helicity angle between the $D^0$ and the $D_{sJ}$ prefers an assignment of $J$=1. In this case, a $J$=1$\rightarrow$$0^-$$0^-$ decay would imply $L$=1, and thus a negative parity assignment. Two possible interpretations of a $J^P$=$1^-$ $D_{sJ}$(2700) state were proposed, i.e. [*(a)*]{} a radial $n$=2 excitation (2$^3S_1$), predicted [@theory_dsj2700_a] by potential models at $m$$\simeq$2720 MeV, or [*(b)*]{} a chiral doublet state $J^P$=$1^-$ as a partner to the $J^P$=$1^+$ $D_{s1}$(2536), predicted [@theory_dsj2700_b] from chiral symmetry considerations at $m$=2721$\pm$10 MeV. The new Belle result could be compared to a prior search [@babar_dsj2860] for higher $D_{sJ}$ states in the $e^+$$e^-$ continuum by BaBar with a data set of 240 fb$^{-1}$. Three structures were observed: [*(a)*]{} the known $D_{s2}$(2573) state, [*(b)*]{} a new $D_{sJ}$(2860) state, and [*(c)*]{} a broad structure peaking around 2.7 GeV, which could be identical to the $D_{sJ}$(2700). As the $D_{sJ}$(2860) is not seen in the Belle data, this might indicate that probably a higher $J$ should be assigned, and thus it is only produced in continuum, but not in $B$ decays.
$D_s^+$$\rightarrow$$\mu^+$$\nu_{\mu}$
--------------------------------------
The purely leptonic decay $D_s^+$$\rightarrow$$\mu^+$$\nu_{\mu}$ is theoretically rather clean, as in the standard model the decay is mediated by a single virtual W boson. The decay rate is given by
$$\Gamma ( D_s^+ \rightarrow l^+ \nu_l ) =
\frac{G_F^2}{8\pi} \,
f_{Ds}^2 \, m_l^2 \, m_{Ds} \left( 1 - \frac{m_l^2}{m_{Ds}^2} \right)^2 | V_{cs} |^2$$
using the Fermi coupling constant $G_F$, the masses of the lepton and of the $D_s$ meson, $m_l$ and $m_{Ds}$, respectively, and the CKM matrix element $V_{cs}$. All effects of the strong interaction are accounted for by the decay constant $f_{Ds}$. The measurement of the branching fraction allows the determination of $f_{Ds}$ and the comparison to theoretical or Lattice QCD calculations. Recently several measurements of $f_{Ds}$ were performed. An overview is given in Tab. 1. BaBar reported [@babar_dsmunu] a signal yield of 489$\pm$55(stat.) $D_s^+$$\rightarrow$$\mu^+$$\nu_{\mu}$ events in a data sample of 230 fb$^{-1}$. Belle reported [@belle_dsmunu] a signal yield of 169$\pm$16(stat.)$\pm$8(syst.) $D_s^+$$\rightarrow$$\mu^+$$\nu_{\mu}$ events in a data sample of 548 fb$^{-1}$. The reconstruction procedures show a few technical differences, i.e. the signal either peaks in [*(a)*]{} the mass difference of $m$($\mu$$\nu$$\gamma$)-$m$($\mu$$\nu$)=143.5 MeV, corresponding to the photon energy in the $D_s^*$$\rightarrow$$D_s$$\gamma$ transition, or peaks in [*(b)*]{} the recoil mass $m$($D$$K$$X$$\gamma$$\mu$)=0, where $X$=$n$$\pi$ and $\geq$1$\gamma$ with $n$=1,2,3. In addition, while Belle determines an absolute branching fraction, the BaBar experiment provides a measurement of the partial width ratio $\Gamma$($D_s^+$$\rightarrow$$\mu^+$$\nu_{\mu}$)/$\Gamma$($D_s^+$$\rightarrow$$\phi$$\pi^+$). The recent measurements of $f_{Ds}$ of Belle, BaBar and also CLEO-c [@cleo-c_dsmunu] are consistent within the error estimates. However, theoretical calculations seem to indicate a lower value of $f_{Ds}$ [@dsmunu_rosner_stone]. As an example, the result of a recent Lattice QCD calculation [@dsmunu_lattice] is $f_{Ds}$=241$\pm$3 MeV. The discrepancy between experiment and theory gave rise to speculation about a possible indication of new physics. Note that charm quark loops are not included in the Lattice QCD calculation.
|                         | Br($D_s$$\rightarrow$$\mu^+$$\nu_{\mu}$)          | $f_{Ds}$ (MeV)          |
|-------------------------|---------------------------------------------------|-------------------------|
| PDG06 [@PDG]            | (6.1$\pm$1.9)$\cdot$10$^{-3}$                     |                         |
| BaBar                   | (6.74$\pm$0.83$\pm$0.26$\pm$0.66)$\cdot$10$^{-3}$ | 283$\pm$17$\pm$7$\pm$14 |
| Belle                   | (6.44$\pm$0.76$\pm$0.57)$\cdot$10$^{-3}$          | 275$\pm$16$\pm$12       |
| CLEO-c [@cleo-c_dsmunu] | (5.94$\pm$0.66$\pm$0.31)$\cdot$10$^{-3}$          | 274$\pm$13$\pm$7        |
: Overview of recent measurements of the branching fraction for $D_s^+$$\rightarrow$$\mu^+$$\nu_{\mu}$ and $f_{Ds}$.
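As a rough numerical cross-check of the decay-rate formula above against Tab. 1, one can insert the Belle value $f_{Ds}$=275 MeV together with approximate external inputs (the masses, $|V_{cs}|$, and a $D_s$ lifetime of roughly 0.5 ps are assumed here from standard PDG-like values, not taken from this review) and recover a branching fraction close to the measured ones:

```python
import math

# Numerical sketch of the decay-rate formula for Ds+ -> mu+ nu.
# All inputs below are approximate assumed values, for illustration only.
G_F    = 1.1664e-5   # GeV^-2, Fermi coupling constant
m_mu   = 0.10566     # GeV, muon mass
m_Ds   = 1.9683      # GeV, Ds mass
V_cs   = 0.973       # CKM matrix element
f_Ds   = 0.275       # GeV, decay constant (Belle value from Tab. 1)
tau_Ds = 500e-15     # s, Ds lifetime (~0.5 ps)
hbar   = 6.582e-25   # GeV s

gamma = (G_F**2 / (8 * math.pi) * f_Ds**2 * m_mu**2 * m_Ds
         * (1 - m_mu**2 / m_Ds**2)**2 * V_cs**2)   # partial width in GeV
br = gamma * tau_Ds / hbar
print(f"Br(Ds -> mu nu) ~ {br:.2e}")  # roughly 6.4e-3, consistent with Tab. 1
```

The helicity-suppression factor $m_l^2$ is what makes the muon mode measurable while the electron mode is tiny.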
Decays of Charmed Baryons
-------------------------
Recently new studies of the $\Xi_c$ charmed baryon system[^2] were performed by both BaBar and Belle. Formerly, the $\Lambda_c$$K$$\pi$ final state has been used at the $B$ meson factories in searches for double charm baryon ground states. For these $($$c$$c$$q$$)$ states, this final state indicates [*weak decays*]{}. However, Belle formerly observed [@belle_xic_a] two new $\Xi_c^*$ states, i.e. the $\Xi_c(2980)^{+,0}$ and the $\Xi_c(3077)^{+,0}$ in $\Lambda_c^+$$K^-$$\pi^+$ and $\Lambda_c^+$$K_s^0$$\pi^-$. The doublet nature of these states clearly indicates the $\Xi_c^*$ interpretation. Contrary to double charm baryons, for $\Xi_c^*$ baryons these final states indicate [*strong decays*]{}. The discovery of these decays might be considered as a surprise, as [*(a)*]{} formerly only $\Xi_c^*$ decays to $\Xi_c$$\gamma$ and $\Xi_c$$\pi$ were observed, and [*(b)*]{} here the charm and the strange quarks are observed in [*different*]{} hadrons. Thus the decay reveals an interesting dynamics. Recently BaBar [@babar_xic] also published an updated $\Xi_c$ analysis based upon 384 fb$^{-1}$. Besides the confirmation of the $\Xi_c(2980)$ and the $\Xi_c(3077)$, two new states were observed. In the analysis, the final state of $\Lambda_c$$K$$\pi$ (same as above) was required, however using a cut on the $\Sigma_c$ invariant mass in the $\Lambda_c$$\pi$ subsystem. The new states and observed decays are: $\Xi_c(3055)^+$$\rightarrow$$\Sigma_c(2455)^{++}$($J^P$=1/2$^+$)$K^-$ with a statistical significance of 6.4$\sigma$, and $\Xi_c(3122)^+$$\rightarrow$$\Sigma_c(2520)^{++}$($J^P$=3/2$^+$)$K^-$ with a statistical significance of 3.6$\sigma$. The observation of these decays raises a new question for the understanding of the dynamics in the decay, namely why the $\Xi_c$ states decay into a $\Sigma_c$ with an isospin 1. Again, the charm and the strange quark are observed in [*different*]{} hadrons, i.e. ($u$$s$$c$)$\rightarrow$($u$$u$$c$)($\overline{u}$$s$).
Belle recently continued [@belle_xic_b] the investigation of the $\Xi_c$(2980) with a data set of 414 fb$^{-1}$ in order to try to determine the nature of this state. The $\Xi_c$(2980) could be the first positive parity excitation of the $\Xi_c$, or a higher ($n$$\geq$2) radial excitation. A new decay mode $\Xi_c$(2980)$\rightarrow$$\Xi_c$(2645)$\pi$ was observed. Single pion transitions of such kind can give important hints for the assignment of quantum numbers. For example, the transition $\Xi_c$(2815)($J^P$=3/2$^-$)$\rightarrow$$\Xi_c$(ground state)$\pi$ is forbidden, but the transition $\Xi_c$(2815)($J^P$=3/2$^-$)$\rightarrow$$\Xi_c$(2645)$\pi$ is allowed. On the other hand, the double pion transition $\Xi_c$(2815)($J^P$=3/2$^-$)$\rightarrow$$\Xi_c$(ground state)$\pi$$\pi$ is allowed in any case and thus less indicative for quantum number assignments. Assuming S-wave, the newly observed decay mode is predicted [@theory_xic] dominant for the $\Xi_{c1}$($J^P$=1/2$^+$) state, and thus might favour an assignment of a positive parity.
$\tau$ Decays
=============
In the simplest tree diagram, hadronic $\tau$ decays are given by a $\tau$$\rightarrow$$\nu$$W$ transition, in which the virtual $W$ boson forms a quark anti-quark pair $q$$\overline{q}'$ with $q$ and $q'$ carrying different flavor. As any additional gluon might be soft, $\alpha_S$$\simeq$0.35 is quite large, and therefore these decays are an interesting tool to study non-perturbative QCD.
$\tau$ decays with an $\eta$ in the final state
-----------------------------------------------
Belle recently improved [@belle_tau_eta] the known branching fractions for $\tau$ decays with an $\eta$ and two additional pseudoscalar mesons by a factor of 4-6 with a data set of 485 fb$^{-1}$. For the decays $\tau^-$$\rightarrow$$K^-$$\pi^0$$\eta$$\nu_{\tau}$ and $\tau^-$$\rightarrow$$\pi^-$$\pi^0$$\eta$$\nu_{\tau}$, branching fractions of (4.7$\pm$1.1(stat.)$\pm$0.4(syst.))$\cdot$10$^{-5}$ and (4.7$\pm$1.1(stat.)$\pm$0.4(syst.))$\cdot$10$^{-3}$ were measured, respectively. The measurement of these branching fractions is very important for the understanding of low-energy QCD. In the case of three pseudoscalar mesons the branching fraction could be increased by a factor $\geq$10 by the Wess-Zumino-Witten anomaly [@wess-zumino-witten].
$\tau$ decays with a $\phi$ in the final state
-----------------------------------------------
Recently BaBar published [@babar_tau_phi] an analysis of $\tau$ decays into a $\phi$ and an additional pseudoscalar meson with a data set of 342 fb$^{-1}$. The decay $\tau^-$$\rightarrow$$\phi$$K^-$$\nu_{\tau}$ is Cabibbo suppressed by the CKM matrix element $V_{us}$$\simeq$0.2. The measured branching fraction of 3.39$\pm$0.20(stat.)$\pm$0.28(syst.)$\cdot$10$^{-5}$ is consistent with an earlier Belle result [@belle_tau_phi], and thus could be used as a reference for the branching fractions of even more rare processes. On the one hand, excluding the resonant $\phi$ contribution, an upper limit for the non-resonant $\tau^-$$\rightarrow$$K^+$$K^-$$K^-$$\nu_{\tau}$ of $<$2.5$\cdot$10$^{-6}$ was obtained, indicating that the $\phi$ largely dominates the decay. On the other hand, a first measurement of the branching fraction of the OZI suppressed process $\tau^-$$\rightarrow$$\phi$$\pi^-$$\nu_{\tau}$ was performed with the result of 3.42$\pm$0.55(stat.)$\pm$0.25(syst.)$\cdot$10$^{-5}$.
References {#references .unnumbered}
==========
[99]{} BaBar Collaboration, B. Aubert et al.,
Belle Collaboration, A. Abashian et al.,
BaBar Collaboration, B. Aubert et al.,
CLEO Collaboration, D. Besson et al.,
S. Godfrey, N. Isgur,
BaBar Collaboration, B. Aubert et al., [*Phys. Rev.*]{} D [**77**]{}, 011102(R) (2008)
W.-M. Yao et al., [*Journal of Physics*]{} G [**33**]{}, 1 (2006)
Belle Collaboration, V. Balagura et al.,
Belle Collaboration, J. Brodzicka et al.,
F. E. Close, C. E. Thomas, O. Lakhina, E. S. Swanson,
M. A. Nowak, M. Rho, I. Zahed, [*Acta Phys. Polon.*]{} B [**35**]{}, 2377 (2004)
BaBar Collaboration, B. Aubert et al.,
BaBar Collaboration, B. Aubert et al.,
Belle Collaboration, L. Widhalm et al., arXiv:0709.1340$[$hep-ex$]$, subm. to [*Phys. Rev. Lett.*]{}
CLEO-c Collaboration, T. K. Pedlar et al.,
For a recent review see J. L. Rosner, S. Stone, arXiv:0802.1043$[$hep-ex$]$
E. Follana, C. T. H. Davies, G. P. Lepage, J. Shigemitsu\
(HPQCD and UKQCD Collaborations),
Belle Collaboration, R. Chistov et al.,
BaBar Collaboration, B. Aubert et al.,
Belle Collaboration, T. Lesiak et al., arXiv:0802.3968$[$hep-ex$]$, subm. to [*Phys. Lett. B*]{}
H.-Y. Cheng, C.-K. Chua,
Belle Collaboration, K. Abe et al., arXiv:0708.0733$[$hep-ex$]$
J. Wess, B. Zumino, ;\
E. Witten,
BaBar Collaboration, B. Aubert et al.,
K. Inami et al.,
[^1]: Throughout this paper charge conjugations are implied if not noted otherwise.
[^2]: The valence quark contents of the $\Xi_c^+$ and the $\Xi_c^0$ are given by $($$u$$s$$c$$)$ and $($$d$$s$$c$$)$, respectively.
---
abstract: 'Validation is one of the most important aspects of clustering, but most approaches have been batch methods. Recently, interest has grown in providing incremental alternatives. This paper extends the incremental cluster validity index (iCVI) family to include incremental versions of Calinski-Harabasz (iCH), I index and Pakhira-Bandyopadhyay-Maulik (iI and iPBM), Silhouette (iSIL), Negentropy Increment (iNI), Representative Cross Information Potential (irCIP) and Representative Cross Entropy (irH), and Conn\_Index (iConn\_Index). Additionally, the effect of under- and over-partitioning on the behavior of these six iCVIs, the Partition Separation (PS) index, as well as two other recently developed iCVIs (incremental Xie-Beni (iXB) and incremental Davies-Bouldin (iDB)) was examined through a comparative study. Experimental results using fuzzy adaptive resonance theory (ART)-based clustering methods showed that while evidence of most under-partitioning cases could be inferred from the behaviors of all these iCVIs, over-partitioning was found to be a more challenging scenario indicated only by the iConn\_Index. The expansion of incremental validity indices provides significant novel opportunities for assessing and interpreting the results of unsupervised learning.'
author:
- 'Leonardo Enzo Brito da Silva, Niklas M. Melton, Donald C. Wunsch II, [^1][^2][^3]'
bibliography:
- 'IEEEabrv.bib'
- 'bib/references.bib'
title: 'Incremental Cluster Validity Indices for Hard Partitions: Extensions and Comparative Study'
---
Clustering, Validation, Incremental Cluster Validity Index (iCVI), Fuzzy, Adaptive Resonance Theory (ART).
Introduction {#Sec:intro}
============
Cluster validation [@Gordon1998] is a critical topic in cluster analysis. It is crucial to assess the quality of the partitions detected by clustering algorithms when there is no class label information. Different clustering solutions may be found by distinct algorithms, or even by the same algorithm subjected to different hyper-parameters or a different input presentation order [@xu2012; @leonardo2018]. *Cluster validity indices* (CVIs) perform the role of evaluators of such solutions. CVIs typically exhibit a trade-off between measures of compactness (within-cluster scatter) and isolation (between-cluster separation) [@xu2012]. Numerous examples of such criteria have been presented in the literature; for comprehensive reviews and experimental studies the interested reader is referred to [@milligan1985; @Bezdek1997; @Halkidi2002a; @Halkidi2002b; @vendramin2010; @Arbelaitz2013; @xu2005; @xu2009].
Recently, *incremental cluster validity indices* (iCVIs) have been developed to track the effectiveness of online clustering methods over data streams [@Moshtaghi2018; @Moshtaghi2018b; @Ibrahim2018; @Keller2018]. To enable cluster validation in such applications, a recursive formulation of compactness was introduced in [@Moshtaghi2018; @Moshtaghi2018b]. This strategy has been used to develop incremental versions of four CVIs so far [@Keller2018]: viz., incremental Davies-Bouldin (iDB) [@Moshtaghi2018; @Moshtaghi2018b], incremental Xie-Beni (iXB) [@Moshtaghi2018; @Moshtaghi2018b] and modified Dunn’s indices [@Ibrahim2018b]. In particular, the behaviors of iXB and iDB are analyzed in both accurately and poorly partitioned data sets in [@Moshtaghi2018; @Moshtaghi2018b], whereas the studies in [@Ibrahim2018; @Keller2018] only investigate the iDB’s behavior in cases where online clustering algorithms accurately detect data structures, [*i*.*e*.,]{} when they yield high-performing experimental results.
Therefore, the contributions of this work are three-fold: (1) presenting incremental versions of six additional CVIs (thereby extending the family of iCVIs), (2) discussing the interpretation of these novel iCVIs in cases of accurate partitioning as well as under- and over-partitioning, and (3) performing a systematic comparative study among ten iCVIs. To explore such scenarios, fuzzy adaptive resonance theory (ART)-based clustering methods [@Carpenter1991; @Bartfai1994] were chosen for their simple parameterization of cluster granularity and other appealing properties [@wunsch2009].
The following, Section \[Sec:theory\], provides a brief review of CVIs, iCVIs and ART; Section \[Sec:method\] presents this work’s extensions of several other CVIs to the incremental family; Section \[Sec:setup\] details the set-up used in the numerical experiments; Section \[Sec:results\] describes and discusses the results; Section \[Sec:inc\_vs\_batch\] compares batch and incremental versions of CVIs; and Section \[Sec:conclusion\] summarizes this paper’s findings.
Background and related work {#Sec:theory}
===========================
This section briefly recaps the theory regarding the CVIs, iCVIs and ART-based clustering algorithms used in this study.
Cluster Validity Indices (CVIs) {#Sec:theory_CVI}
-------------------------------
Consider a data set $\bm{X}=\{\bm{x}_i\}_{i=1}^N$ and its hard partition into $k$ disjoint clusters $\omega_i$, such that $\bigcup_{i=1}^{k}\omega_i = \bm{X}$ and $\omega_i \cap \omega_j = \emptyset$ for $i \neq j$. In the following CVI overview, $\bm{v}$ is a cluster prototype (centroid), $k$ is the number of clusters, $d$ is the dimensionality of the data ($ \bm{x}_i \in {\rm I\!R}^d$), $\| \cdot \|$ is the Euclidean norm, and $N$ and $n_i$ are the cardinalities of a data set and cluster $\omega_i$, respectively.
### Calinski-Harabasz (CH) [@vrc]
the CH index is defined as: $$CH = \frac{BGSS/\left(k-1\right)}{WGSS/\left(N-k\right)},
\label{Eq:valind_ch1}$$ where the between group sum of squares (BGSS) and within group sum of squares (WGSS) are computed as: $$WGSS = \sum \limits_{i=1}^{k} \sum \limits_{
\substack{j=1 \\ \bm{x}_j\in \omega_i}}^{n_i} \| \bm{x}_j - \bm{v}_i \|^2,
\label{Eq:valind_ch2}$$ $$BGSS = \sum \limits_{i=1}^{k} n_i \| \bm{v}_i - \bm{\mu}_{data} \|^2,
\label{Eq:valind_ch3}$$ $$\bm{\mu}_{data} = \frac{1}{N} \sum\limits_{i=1}^N\bm{x}_i.
\label{Eq:valind_ch4}$$ This is an optimization-like criterion [@vendramin2010] such that larger values of CH indicate better clustering solutions (maximization).
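As an illustration, the batch CH computation above can be sketched in NumPy (the function name and toy data below are ours, not part of the cited works):

```python
import numpy as np

def calinski_harabasz(X, labels):
    """Batch CH index: BGSS/(k-1) divided by WGSS/(N-k); larger is better."""
    X = np.asarray(X, dtype=float)
    clusters = np.unique(labels)
    k, N = len(clusters), len(X)
    mu = X.mean(axis=0)                            # grand mean of the data
    wgss = bgss = 0.0
    for c in clusters:
        pts = X[labels == c]
        v = pts.mean(axis=0)                       # cluster prototype
        wgss += np.sum(np.linalg.norm(pts - v, axis=1) ** 2)
        bgss += len(pts) * np.linalg.norm(v - mu) ** 2
    return (bgss / (k - 1)) / (wgss / (N - k))

# Two well-separated blobs: the true 2-partition should outscore a random one.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
good = np.array([0] * 50 + [1] * 50)
bad = rng.integers(0, 2, size=100)
```

On this toy data, the correct partition yields a much larger CH value than the random one, consistent with CH being a maximization criterion.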
### Davies-Bouldin (DB) [@db]
the DB index averages the similarities $R$ of each cluster $i$ with respect to its maximally similar cluster $j \neq i$: $$DB = \frac{1}{k} \sum_{i=1}^k R_i,
\label{Eq:valind_db1}$$ where $$R_i = \max_{i \neq j}\left( \frac{S_i + S_j}{M_{i,j}} \right),
\label{Eq:valind_db2}$$ $$S_l = \left[ \frac{1}{n_l} \sum \limits_{
\substack{m=1 \\ \bm{x}_m\in \omega_l}}^{n_l} \norm{\bm{x}_m - \bm{v}_l}^q \right]^{\frac{1}{q}},~l=\{1,...,k\},
\label{Eq:valind_db3}$$ $$M_{i,j} = \left[\sum \limits_{t=1}^{d} \abs{v_{it} - v_{jt}}^p \right]^{\frac{1}{p}},~p \geq 1.
\label{Eq:valind_db4}$$ The variables ($p$, $q$) are user-defined parameters, and $S_l$ and $M_{i,j}$ (Minkowski metric) measure compactness and separation, respectively. Smaller values of DB indicate better clustering solutions (minimization).
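A batch sketch of the DB computation, with the default choice $p=q=2$ (function name and toy data are ours):

```python
import numpy as np

def davies_bouldin(X, labels, p=2, q=2):
    """Batch DB index with Minkowski separation (p) and moment q; smaller is better."""
    X = np.asarray(X, dtype=float)
    clusters = np.unique(labels)
    k = len(clusters)
    V = np.array([X[labels == c].mean(axis=0) for c in clusters])
    # S_l: q-th moment of within-cluster distances, as in the definition above.
    S = np.array([np.mean(np.linalg.norm(X[labels == c] - V[i], axis=1) ** q) ** (1 / q)
                  for i, c in enumerate(clusters)])
    R = []
    for i in range(k):
        ratios = [(S[i] + S[j]) / np.sum(np.abs(V[i] - V[j]) ** p) ** (1 / p)
                  for j in range(k) if j != i]
        R.append(max(ratios))                      # most similar cluster
    return float(np.mean(R))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
good = np.array([0] * 50 + [1] * 50)
bad = rng.integers(0, 2, size=100)
```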
### Xie-Beni (XB) [@Xie1991]
the XB index was originally designed to detect compact and separated clusters in fuzzy c-partitions. A hard partition version is given by the following ratio of compactness to separation [@Lamirel2015; @Lamirel2016]: $$XB = \frac{WGSS / N}{\min\limits_{i \neq j} \| \bm{v}_i - \bm{v}_j \|^2 }.
\label{Eq:valind_xb1}$$
Smaller values of XB indicate better clustering solutions (minimization).
### Pakhira-Bandyopadhyay-Maulik (PBM) [@Bandyopadhyay2001; @pbm]
consider the I index [@Bandyopadhyay2001] defined as: $$I = \left(\frac{1}{k} \times \frac{E_1}{E_k} \times D_k \right)^p,~p \geq 1,
\label{Eq:valind_pbm1}$$ where $$E_1 = \sum\limits_{i=1}^N \norm{\bm{x}_i - \bm{\mu}_{data}},
\label{Eq:valind_pbm2}$$ $$E_k = \sum\limits_{i=1}^k \sum \limits_{
\substack{j=1 \\ \bm{x}_j\in \omega_i}}^{n_i} \norm{\bm{x}_j - \bm{v}_i},
\label{Eq:valind_pbm3}$$ $$D_k = \max_{i \neq j}\left( \norm{\bm{v}_i - \bm{v}_j} \right),
\label{Eq:valind_pbm4}$$ The quantities $E_k$ and $D_k$ measure compactness and separation, respectively. This CVI comprises a trade-off among the three competing factors in Eq. (\[Eq:valind\_pbm1\]): $\frac{1}{k}$ decreases with $k$, whereas both $\frac{E_1}{E_k}$ and $D_k$ increase. By setting $p=2$ in Eq. (\[Eq:valind\_pbm1\]), the I index reduces to the PBM index [@pbm]. Larger values of PBM indicate better clustering solutions (maximization).
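The three competing factors can be seen directly in a batch sketch of the I/PBM index (our function name and toy data; $p=2$ recovers PBM):

```python
import numpy as np

def pbm(X, labels, p=2):
    """I index; p=2 gives the PBM index. Larger is better."""
    X = np.asarray(X, dtype=float)
    clusters = np.unique(labels)
    k = len(clusters)
    mu = X.mean(axis=0)
    E1 = np.sum(np.linalg.norm(X - mu, axis=1))    # scatter w.r.t. grand mean
    V = np.array([X[labels == c].mean(axis=0) for c in clusters])
    Ek = sum(np.sum(np.linalg.norm(X[labels == c] - V[i], axis=1))
             for i, c in enumerate(clusters))      # within-cluster scatter
    Dk = max(np.linalg.norm(V[i] - V[j])           # largest prototype separation
             for i in range(k) for j in range(i + 1, k))
    return ((E1 / Ek) * Dk / k) ** p

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
good = np.array([0] * 50 + [1] * 50)
bad = rng.integers(0, 2, size=100)
```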
### Silhouette (SIL) [@sil]
the SIL index is computed by averaging the silhouette coefficients $sc_i$ across all data samples $\bm{x}_i$: $$SIL = \frac{1}{N}\sum_{i=1}^N sc_i,
\label{Eq:valind_sil1}$$ where $$sc_i= \frac{b_i-a_i}{\max\left(a_i, b_i\right)},
\label{Eq:valind_sil2}$$ $$a_i = \frac{1}{n_i-1} \sum \limits_{\substack{j=1,j \neq i \\ \bm{x}_j \in \omega_i}}^{n_i} \| \bm{x}_j - \bm{x}_i \|,
\label{Eq:valind_sil3}$$ $$b_i = \min\limits_{l, l \neq i}\left(\frac{1}{n_l}\sum \limits_{\substack{j=1 \\ \bm{x}_j \in \omega_l}}^{n_l} \| \bm{x}_j - \bm{x}_i \|\right),
\label{Eq:valind_sil4}$$ the variables $a_i$ and $b_i$ measure compactness and separation, respectively. Larger values of SIL (close to 1) indicate better clustering solutions (maximization). To reduce computational complexity, some SIL variants, such as [@Hruschka2004; @Hruschka2006; @Rawashdeh2012; @Romera2016], use a centroid-based approach. The simplified SIL [@Hruschka2004; @Hruschka2006] has been successfully used in clustering data streams processed in chunks, in which the silhouette coefficients are also used to make decisions regarding the centroids’ incremental updates [@Silva2016].
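The original (sample-based) SIL can be sketched in batch form as follows; this is the quadratic-cost version that the centroid-based variants approximate (function name and toy data are ours):

```python
import numpy as np

def silhouette(X, labels):
    """Batch SIL: mean silhouette coefficient over all samples. Larger is better."""
    X = np.asarray(X, dtype=float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    clusters = np.unique(labels)
    sc = np.empty(len(X))
    for i in range(len(X)):
        same = labels == labels[i]
        a = D[i, same].sum() / (same.sum() - 1)    # mean distance to own cluster
        b = min(D[i, labels == c].mean()           # nearest neighboring cluster
                for c in clusters if c != labels[i])
        sc[i] = (b - a) / max(a, b)
    return float(sc.mean())

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
good = np.array([0] * 50 + [1] * 50)
bad = rng.integers(0, 2, size=100)
```

The $O(N^2)$ distance matrix is exactly the cost that motivates the centroid-based and incremental variants discussed in the text.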
### Partition Separation (PS) [@Yang2001]
the PS index was originally developed for fuzzy clustering; its hard clustering version is given by [@lughofer2008]: $$PS = \sum\limits_{i=1}^{k} PS_i,
\label{Eq:valind_ps1}$$ where $$PS_i = \frac{n_i}{\max\limits_{j}(n_j)} - exp\left[ - \frac{\min\limits_{i \neq j} \left( \| \bm{v}_i - \bm{v}_j \|^2\right)}{\beta_T} \right],
\label{Eq:valind_ps2}$$ $$\beta_T = \frac{1}{k} \sum\limits_{l=1}^{k} \| \bm{v}_l - \bar{\bm{v}} \|^2,
\label{Eq:valind_ps3}$$ $$\bar{\bm{v}} =\frac{1}{k} \sum\limits_{l=1}^{k} \bm{v}_l,
\label{Eq:valind_ps4}$$
The PS index only comprises a measure of separation between prototypes. Therefore, this CVI can be readily used to evaluate the partitions identified by unsupervised incremental learners that model clusters using centroids ([*e*.*g*.,]{} [@lughofer2008]). Larger values of PS indicate better clustering solutions (maximization).
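A batch sketch of the PS index (our function name and toy data). Note how over-partitioning is penalized: splitting one blob in two makes the split prototypes nearly coincide, so their exponential separation term approaches 1:

```python
import numpy as np

def partition_separation(X, labels):
    """PS index: separation between cluster prototypes only. Larger is better."""
    X = np.asarray(X, dtype=float)
    clusters = np.unique(labels)
    k = len(clusters)
    n = np.array([(labels == c).sum() for c in clusters])
    V = np.array([X[labels == c].mean(axis=0) for c in clusters])
    beta = np.mean(np.sum((V - V.mean(axis=0)) ** 2, axis=1))   # beta_T
    ps = 0.0
    for i in range(k):
        dmin = min(np.sum((V[i] - V[j]) ** 2) for j in range(k) if j != i)
        ps += n[i] / n.max() - np.exp(-dmin / beta)
    return ps

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
two = np.array([0] * 50 + [1] * 50)                # correct partition
three = np.array([0] * 25 + [1] * 25 + [2] * 50)   # one blob split in two
```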
### Negentropy Increment (NI) [@fernandez2010; @fernandez2009]
the NI index measures the average normality of the clusters of a given partition $\Omega$ via negentropy [@Comon1994] while avoiding the direct computation of the clusters’ differential entropies. Unlike the other CVIs discussed so far, the NI is not explicitly constructed using measures of compactness and separation [@fernandez2010; @Arbelaitz2013], thereby being defined as: $$NI = \frac{1}{2}\sum\limits_{i=1}^{k} p_i\ln | \bm{\Sigma}_i | - \frac{1}{2} \ln |\bm{\Sigma}_{data}| - \sum\limits_{i=1}^{k} p_i\ln p_i,
\label{Eq:valind_ni1}$$ where $| \cdot |$ denotes the determinant. The probabilities ($p$), means ($\bm{v}$) and covariance matrices ($\bm{\Sigma}$) are estimated as: $$p_i = \frac{n_i}{N},
\label{Eq:valind_ni2}$$ $$\bm{v}_i =\frac{1}{n_i} \sum\limits_{
\substack{j=1 \\ \bm{x}_j\in \omega_i}}^{n_i} \bm{x}_j,
\label{Eq:valind_ni3}$$ $$\bm{\Sigma}_i = \frac{1}{n_i-1} \sum\limits_{
\substack{j=1 \\ \bm{x}_j\in \omega_i}}^{n_i} (\bm{x}_j-\bm{v}_i)(\bm{x}_j-\bm{v}_i)^T,
\label{Eq:valind_ni4}$$ $$\bm{\Sigma}_{data} = \frac{1}{N-1}\left( \bm{X}^T\bm{X} - N\bm{\mu}_{data} \bm{\mu}_{data} ^T\right),$$ and $\bm{\mu}_{data}$ is estimated using Eq. (\[Eq:valind\_ch4\]). Smaller values of NI indicate better clustering solutions (minimization).
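A batch NumPy sketch of the NI computation (our function name and toy data; `np.cov` provides the $(n-1)$-normalized covariance estimates used in the definition):

```python
import numpy as np

def negentropy_increment(X, labels):
    """NI: rewards partitions whose clusters look Gaussian. Smaller is better."""
    X = np.asarray(X, dtype=float)
    N = len(X)
    ni = -0.5 * np.log(np.linalg.det(np.cov(X, rowvar=False)))
    for c in np.unique(labels):
        pts = X[labels == c]
        p = len(pts) / N
        ni += 0.5 * p * np.log(np.linalg.det(np.cov(pts, rowvar=False))) \
              - p * np.log(p)
    return ni

# A correct partition into two Gaussian blobs scores lower (better) than a
# random split, whose "clusters" are bimodal and far from Gaussian.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
good = np.array([0] * 50 + [1] * 50)
bad = rng.integers(0, 2, size=100)
```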
### Representative Cross Information Potential (rCIP) [@araujo20131; @araujo20132]
cluster evaluation functions (CEFs) based on cross information potential (CIP) [@gokcay2000; @gokcay2002] have been consistently used in the literature to evaluate partitions and drive optimization algorithms searching for data structure [@gokcay2000; @gokcay2002; @araujo20131; @araujo20132], and thus this work includes these CEFs under the CVI category. Precisely, representative approaches [@araujo20131; @araujo20132] replace the sample-by-sample estimation of Renyi’s quadratic entropy [@renyi1961] using the Parzen-window method [@duda2000] (original CIP [@gokcay2000; @gokcay2002]) via prototypes and the statistics of their associated Voronoi polyhedron. The rCIP was devised for prototype-based clustering ([*i*.*e*.,]{} two-step methods: vector quantization followed by clustering of the prototypes) [@Cottrell1997; @Karypis1999; @tyree1999; @Vesanto2000; @Ana2003]. The CEF used here is defined as [@araujo20132]: $$CEF = \sum\limits_{i=1}^{k-1} \sum\limits_{j=i+1}^k rCIP(\omega_i, \omega_j),
\label{Eq:valind_rCIP1}$$ where $$rCIP(\omega_i, \omega_j) = \frac{1}{M_i M_j} \sum\limits_{l=1}^{M_i}\sum\limits_{m=1}^{M_j} G(\bm{v}_l - \bm{v}_m, \bm{\Sigma}_{l,m}),
\label{Eq:valind_rCIP2}$$ $$G(\bm{v}_l - \bm{v}_m, \bm{\Sigma}_{l,m}) = \frac{e^{ -\frac{1}{2} \left( \bm{v}_l - \bm{v}_m \right)^T \bm{\Sigma}_{l,m}^{-1} \left( \bm{v}_l - \bm{v}_m \right)}}{\sqrt{\left( 2 \pi \right)^{d} | \bm{\Sigma}_{l,m}| }},
\label{Eq:valind_rCIP3}$$
where $\bm{\Sigma}_{l,m}=\bm{\Sigma}_l + \bm{\Sigma}_m$, $\{\bm{v}_l,\bm{\Sigma}_l\} \in \omega_i$, $\{\bm{v}_m,\bm{\Sigma}_m\} \in \omega_j$, and $M_i$ and $M_j$ are the number of prototypes used to represent clusters $\omega_i$ and $\omega_j$, respectively. The prototypes and covariance matrices are estimated using Eqs. (\[Eq:valind\_ni3\]) and (\[Eq:valind\_ni4\]), respectively. Smaller values of CEF indicate better clustering solutions (minimization). Recently, the information potential (IP) [@principe2010] measure has been used to define a system’s state when modeling and analyzing dynamic processes [@Oliveira2017; @Oliveira2018].
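The Gaussian cross-term and the resulting CEF can be sketched directly from the definitions (our function names; each cluster is given as a pair of prototype and covariance lists):

```python
import numpy as np

def rcip(protos_i, covs_i, protos_j, covs_j):
    """rCIP between two clusters represented by (prototype, covariance) pairs."""
    total = 0.0
    for v_l, S_l in zip(protos_i, covs_i):
        for v_m, S_m in zip(protos_j, covs_j):
            S = S_l + S_m                          # Sigma_{l,m}
            diff = v_l - v_m
            d = len(diff)
            total += (np.exp(-0.5 * diff @ np.linalg.solve(S, diff))
                      / np.sqrt((2 * np.pi) ** d * np.linalg.det(S)))
    return total / (len(protos_i) * len(protos_j))

def cef(clusters):
    """CEF: sum of rCIP over all cluster pairs; smaller is better."""
    k = len(clusters)
    return sum(rcip(*clusters[i], *clusters[j])
               for i in range(k) for j in range(i + 1, k))

# One prototype per cluster: well-separated prototypes yield a smaller CEF.
cov = [0.1 * np.eye(2)]
far = [([np.array([0.0, 0.0])], cov), ([np.array([5.0, 5.0])], cov)]
near = [([np.array([0.0, 0.0])], cov), ([np.array([0.5, 0.5])], cov)]
```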
### Conn\_Index [@tasdemir2007; @tasdemir2011]
the Conn\_Index was also developed for prototype-based clustering. It is formulated using the connectivity strength matrix (CONN), which is a symmetric square similarity matrix that represents local data densities between neighboring prototypes [@tasdemir2006; @tasdemir2009]. Its $(i,j)^{th}$ entry is formally given by: $$CONN(i,j) = CADJ(i,j) + CADJ(j,i),
\label{Eq:valind_conn1}$$ where the $(i,j)^{th}$ entry of the non-symmetric cumulative adjacency matrix (CADJ) corresponds to the number of samples for which $\bm{v}_i$ and $\bm{v}_j$ are, simultaneously, the first and second closest prototypes (according to some measure), respectively. The Conn\_Index is defined as: $$Conn\_Index = Intra\_Conn \times \left( 1 - Inter\_Conn \right),
\label{Eq:valind_conn2}$$ where the intra-cluster ($Intra\_Conn$) and inter-cluster ($Inter\_Conn$) connectivities are: $$Intra\_Conn = \frac{1}{k}\sum\limits_{l=1}^k Intra\_Conn(\omega_l),
\label{Eq:valind_conn3}$$ $$Intra\_Conn(\omega_l) = \frac{1}{n_l} \sum\limits_{\substack{i,j \\ \bm{v}_i,\bm{v}_j \in \omega_l}}^{P} CADJ(i,j),
\label{Eq:valind_conn4}$$ $$Inter\_Conn = \frac{1}{k} \sum\limits_{l=1}^k \max_{\substack{m \\ m \neq l}} \left[ Inter\_Conn(\omega_l,\omega_m) \right],
\label{Eq:valind_conn5}$$ $$Inter\_Conn(\omega_l,\omega_m) =
\frac{\sum\limits_{\substack{i,j \\ \bm{v}_i \in \omega_l, \bm{v}_j \in \omega_m}}^P CONN(i,j)}{\sum\limits_{\substack{i,j \\ \bm{v}_i \in V_{l,m}}}^P CONN(i,j)},
\label{Eq:valind_conn7}$$ $$V_{l,m} = \{ \bm{v}_i : \bm{v}_i \in \omega_l, \exists \bm{v}_j \in \omega_m : CADJ(i,j)>0 \},
\label{Eq:valind_conn8}$$ the variable $P$ is the total number of prototypes, and $Inter\_Conn(\omega_l,\omega_m) = 0$ if $V_{l,m} = \emptyset$. Naturally, the quantities $Intra\_Conn$ and $Inter\_Conn$ measure compactness and separation, respectively. Larger values of the Conn\_Index (close to 1) indicate better clustering solutions (maximization).
Incremental Cluster Validity Indices (iCVIs)
--------------------------------------------
The compactness and separation terms commonly found in CVIs are generally computed using data samples and prototypes, respectively [@Moshtaghi2018; @Ibrahim2018]. In order to handle the demands of online clustering applications ([*i*.*e*.,]{} data streams), an incremental CVI (iCVI) formulation that recursively estimates the compactness term was introduced in [@Moshtaghi2018; @Moshtaghi2018b] in the context of fuzzy clustering.
Specifically, consider the hard clustering version of cluster $i$’s compactness $CP$ ([*i*.*e*.,]{} by setting the fuzzy memberships in [@Moshtaghi2018; @Moshtaghi2018b] to binary indicator functions): $$CP_i = \sum \limits_{
\substack{j=1 \\ \bm{x}_j\in \omega_i}}^{n_i} \| \bm{x}_j - \bm{v}_i \|^2,
\label{Eq:iCVI_0}$$ in such a case, when a new sample $\bm{x}$ is presented and encoded by cluster $i$, then its new compactness becomes: $$CP_i^{new} = \sum \limits_{
\substack{j=1 \\ \bm{x}_j\in \omega_i}}^{n_i^{new}} \| \bm{x}_j - \bm{v}_i^{new} \|^2,
\label{Eq:iCVI_1}$$ where $$n_i^{new} = n_i^{old}+1,
\label{Eq:iCVI_1b}$$ $$\bm{v}_i^{new} = \bm{v}_i^{old} + (\bm{x} - \bm{v}_i^{old})/n_i^{new},
\label{Eq:iCVI_1c}$$ and $$N^{new} = N^{old} + 1.
\label{Eq:nSamples}$$
The compactness in Eq. (\[Eq:iCVI\_1\]) can be updated incrementally as [@Moshtaghi2018; @Moshtaghi2018b]: $$\begin{aligned}
CP_i^{new}&{}={}&CP_i^{old} + \| \bm{z}_i \|^2 + n_i^{old} \| \Delta \bm{v}_i \|^2 + 2\Delta \bm{v}_i^T\bm{g}_i^{old},
\label{Eq:iCVI_2}\end{aligned}$$ where $$\bm{g}_i^{new} = \bm{g}_i^{old} + \bm{z}_i + n_i^{old} \Delta \bm{v}_i,
\label{Eq:iCVI_3}$$ $$\bm{g}_i= \sum\limits_{j=1}^{n_i} \left( \bm{x}_j - \bm{v}_i \right),
\label{Eq:iCVI_4}$$ $$\bm{z}_i=\bm{x} - \bm{v}_i^{new},
\label{Eq:iCVI_5}$$ $$\Delta \bm{v}_i = \bm{v}_i^{old} - \bm{v}_i^{new}.
\label{Eq:iCVI_6}$$
The compactness $CP$ and vector $\bm{g}$ are initialized as $0$ and $\Vec{\bm{0}}$ (since $\bm{v} = \bm{x}$), respectively. Note that, at each iteration, the variable $\bm{g}$ is updated after $CP$. Using such an incremental formulation, the following iCVIs were derived in [@Moshtaghi2018; @Moshtaghi2018b] (their hard partition counterparts are shown here):
### incremental Xie-Beni (iXB)
$$XB^{new} = \frac{1}{N^{new}} \times \frac{\sum \limits_{i=1}^{k^{new}} CP_i^{new}}{\min\limits_{i \neq j} \left( \| \bm{v}_i^{new} - \bm{v}_j^{new} \|^2 \right) } ,
\label{Eq:valind_xb}$$
### incremental Davies-Bouldin (iDB - based on [@Araki1993])
$$DB^{new} = \frac{1}{k^{new}} \sum_{i=1}^{k^{new}} \max_{j, j \neq i}\left( \frac{\frac{CP_i^{new}}{n_i^{new}} + \frac{CP_j^{new}}{n_j^{new}}}{\| \bm{v}_i^{new} - \bm{v}_j^{new} \|^2} \right).
\label{Eq:valind_db}$$
If a new cluster emerges, then $k^{new}= k^{old}+1$; otherwise its previous value is maintained. Note that only one prototype $\bm{v}$ is updated after each input presentation.
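The compactness recursion can be checked numerically: streaming samples through the updates above must reproduce the batch compactness computed at the end. A minimal sketch for a single cluster whose prototype is the running mean (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
stream = rng.normal(0, 1, (200, 3))

# Cluster state, initialized on the first sample (CP = 0, g = 0, v = x).
n, v = 1, stream[0].copy()
CP, g = 0.0, np.zeros(3)

for x in stream[1:]:
    n_new = n + 1
    v_new = v + (x - v) / n_new                     # prototype (mean) update
    z = x - v_new
    dv = v - v_new
    CP = CP + z @ z + n * (dv @ dv) + 2 * dv @ g    # compactness recursion
    g = g + z + n * dv                              # g is updated after CP
    n, v = n_new, v_new

# Batch reference: sum of squared distances to the final mean.
CP_batch = np.sum(np.linalg.norm(stream - stream.mean(axis=0), axis=1) ** 2)
```

When the prototype is exactly the running mean, $\bm{g}$ remains numerically zero; the $\bm{g}$ term matters when the clustering algorithm's prototype is not the sample mean.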
Adaptive Resonance Theory (ART)
-------------------------------
For this study’s experiments, adaptive resonance theory (ART) [@Carpenter1987] has been implemented. It is a fast and stable online clustering method with automatic category recognition encompassing a rich history with many implementations well-suited to iCVI computation [@Carpenter1987; @Carpenter1987b; @Carpenter1988; @Carpenter1991; @Carpenter1990c; @xu2011; @leonardo2018b; @leonardo2018c; @Chen1999; @wunsch2009; @Bartfai1994; @williamson1996; @anagnostopoulos2000; @anagnostopoulos2001; @vigdor2007; @seiffertt2006; @huang2014; @Isawa2008; @leonardo2017]. The following ART models were used in these experiments.
### Fuzzy ART [@Carpenter1991]
This model implements fuzzy logic [@Zadeh1965] to bound data within hyper-boxes. For a normalized data set $\bm{X}=\{\bm{x}_i\}_{i=1}^N$ $( 0 \leq x_{i,j} \leq 1~,~ j=\{1,...,d\})$, the fuzzy ART algorithm, with parameters $(\alpha,\beta,\rho)$, is defined by: $$\bm{I} = (\bm{x}_i,1-\bm{x}_i),
\label{Eq:FA_CC}$$ $${T}_j = \frac{\|\min_{}(\bm{I},\bm{w}_j)\|_1}{\alpha + \|\bm{w}_j\|_1},
\label{Eq:FA_activation}$$ $$\|\min_{}(\bm{I},\bm{w}_j)\|_1\geq \rho\|\bm{I}\|_1,
\label{Eq:FA_res}$$ $$\bm{w}_j^{new} = \bm{w}_j^{old}(1-\beta)+\beta\min_{}(\bm{I},\bm{w}_j^{old}).
\label{Eq:FA_update}$$
Equation (\[Eq:FA\_CC\]) is the complement coding function, which concatenates sample $\bm{x}$ and its complement to form an input vector $\bm{I}$ with dimension $2d$. Equation (\[Eq:FA\_activation\]) is the activation function for each category $j$, where $\| \cdot \|_1$ is the $L_1$ norm, $min(\cdot)$ is performed component-wise, and $\alpha$ is a tie breaking constant. Each category is checked for validity against Eq. (\[Eq:FA\_res\])’s vigilance parameter $\rho$ in a descending order of activation. If no valid category is found during training, then a new category is initialized using $\bm{I}$ as the new weight vector $\bm{w}$. Otherwise, the winning category is updated according to Eq. (\[Eq:FA\_update\]) using learning rate $\beta$. In this study, when fuzzy ART is set to evaluation mode (learning is disabled), if no valid category is found during search, then the winning category defaults to the highest activated one.
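The fuzzy ART pass described above can be sketched as follows. This is a minimal illustration under fast-learning defaults, not the exact implementation used in this study's experiments (function name and toy data are ours):

```python
import numpy as np

def fuzzy_art(X, rho=0.75, alpha=1e-3, beta=1.0):
    """Minimal fuzzy ART pass over normalized data; returns labels and weights."""
    W, labels = [], []
    for x in X:
        I = np.concatenate([x, 1 - x])             # complement coding
        chosen = None
        if W:
            T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in W]
            for j in np.argsort(T)[::-1]:          # descending activation
                if np.minimum(I, W[j]).sum() >= rho * I.sum():   # vigilance
                    chosen = j
                    break
        if chosen is None:                         # no resonance: new category
            W.append(I.copy())
            chosen = len(W) - 1
        else:                                      # learn on the winner
            W[chosen] = (1 - beta) * W[chosen] + beta * np.minimum(I, W[chosen])
        labels.append(chosen)
    return np.array(labels), W

# Two tight boxes in the unit square: rho = 0.75 yields one category each.
rng = np.random.default_rng(0)
A = rng.uniform(0.05, 0.15, (30, 2))
B = rng.uniform(0.85, 0.95, (30, 2))
labels, W = fuzzy_art(np.vstack([A, B]), rho=0.75)
```

With complement coding, $\|\bm{I}\|_1 = d$ for every sample, so the vigilance test compares the candidate hyper-box size against a fixed budget set by $\rho$.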
### Fuzzy self-consistent modular ART (SMART) [@Bartfai1994]
This model is a hierarchical clustering technique based on the ARTMAP architecture [@Carpenter1991]. In an ARTMAP network, two ART modules, A- and B-side, are supplied with separate but dependent data streams. Both ART modules can cluster according to local topology and parameters while an inter-ART module enforces a surjective mapping of the A-side to the B-side, effectively learning the functional map of the A-side to the B-side categories.
To build a fuzzy SMART module, it is only necessary to stream the same sample to both the A- and B-sides of a fuzzy ARTMAP module, [*i*.*e*.,]{} use fuzzy ARTMAP in an auto-associative mode. If all else is equal in the A and B modules’ parameters, fuzzy SMART will begin to form a two-level self-consistent cluster hierarchy when $\rho_A > \rho_B$. This hierarchy will be required to extend the iCVI study to prototype-based CVIs such as the Conn\_Index. For such CVIs, the A-side categories act as cluster prototypes while the B-side provides the actual data partition.
Extensions of iCVIs {#Sec:method}
===================
To compute the CVIs mentioned in Section \[Sec:theory\_CVI\] incrementally, employing one of the following approaches is sufficient:
1. The recursive computation of compactness developed in [@Moshtaghi2018; @Moshtaghi2018b] (CVIs: CH, I/PBM, and SIL).
2. The incremental computation of probabilities, means and covariance matrices (CVIs: rCIP and NI). Naturally, if the clustering algorithm of choice already models the clusters using a priori probabilities, means and covariance matrices (such as Gaussian ART [@williamson1996] and Bayesian ART [@vigdor2007]), then, similarly to PS, these CVIs can be readily computed.
3. The incremental building of a multi-prototype representation of clusters in a self-consistent two-level hierarchy while tracking the density-based connections between neighboring prototypes (CVI: Conn\_index). Specifically, increment and/or expand the CADJ and CONN matrices as clusters grow and/or are dynamically created.
In the following iCVIs’ extensions (iCH, iI/iPBM, iSIL, irCIP, iNI, and iConn\_index), if a new cluster is formed after sample $\bm{x}$ is presented, then the number of clusters is $k^{new} = k^{old} + 1$, the number of samples encoded by this cluster is $n^{new}_{k^{new}} = 1$, the new cluster’s prototype is set to $\bm{v}^{new}_{k^{new}} = \bm{x}$, the initial compactness is $CP^{new}_{k^{new}} = 0$, and vector $\bm{g}^{new}_{k^{new}} = \Vec{\bm{0}}$ (unless otherwise noted). Naturally, clusters that do not encode the presented sample retain constant parameter values for the duration of that input presentation. Also note that, when necessary, the Euclidean norm is replaced with the squared Euclidean norm ([*i*.*e*.,]{} $\norm{\cdot}^2$) to allow for the computation of compactness $CP$ (as per [@Moshtaghi2018; @Moshtaghi2018b]). Finally, for iCVIs that require the computation of pairwise (dis)similarity between prototypes, the (dis)similarity matrix is kept in memory, and only the rows and columns corresponding to the prototype that is adapted are modified.
Incremental Calinski-Harabasz index (iCH) {#Sec:iCH}
-----------------------------------------
The iCH computation is defined as: $$CH^{new} = \frac{\sum \limits_{i=1}^{k^{new}} SEP_i^{new}}{\sum \limits_{i=1}^{k^{new}} CP_i^{new}} \times \frac{N^{new}-k^{new}}{k^{new}-1},
\label{Eq:iCH}$$ where $$SEP_i^{new} = n_i^{new} \| \bm{v}_i^{new} - \bm{\mu}_{data}^{new} \|^2.
\label{Eq:iCH2}$$ Note that the variables $\{n_1,...,n_k\}$, $\{\bm{v}_1,...,\bm{v}_k\}$, $\{CP_1,...,CP_k\}$, $\{\bm{g}_1,...,\bm{g}_k\}$, $\bm{\mu}_{data}$, $k$, $N$, and $\{SEP_1,...,SEP_k\}$ are all kept in memory. These are updated using Eqs. (\[Eq:iCVI\_1b\]) to (\[Eq:iCVI\_3\]), except for $SEP$, which is adapted using Eq. (\[Eq:iCH2\]). The data mean $\bm{\mu}_{data}$ is updated similarly to the prototypes $\bm{v}$ ([*i*.*e*.,]{} Eq. (\[Eq:iCVI\_1c\])).
Incremental I index (iI) {#Sec:iPBM}
------------------------
The iI computation is defined as: $$I^{new} = \left[ \frac{\max\limits_{i \neq j}\left( \| \bm{v}_i^{new} - \bm{v}_j^{new} \|^2 \right)}{\sum \limits_{i=1}^{k} CP_i^{new}} \times \frac{CP_0^{new}}{k^{new}} \right]^p,
\label{Eq:iPBM}$$ where $CP_0$ and $\sum \limits_{i=1}^{k} CP_i^{new}$ correspond to $E_1$ and $E_k$, respectively. These are updated according to Eqs. (\[Eq:iCVI\_1b\]) to (\[Eq:iCVI\_3\]) along with the remaining compactness variables. Only the pairwise distances with respect to the updated prototype at any given iteration need to be recomputed.
Incremental Silhouette index (iSIL) {#Sec:iSIL}
-----------------------------------
The SIL index is inherently batch (offline), since it requires the entire data set to be computed (the silhouette coefficients are averaged across all data samples in Eq. (\[Eq:valind\_sil1\])). To remove such a requirement and enable incremental updates, a hard version of the centroid-based SIL variant introduced in [@Rawashdeh2012] is employed here along with the squared Euclidean norm ([*i*.*e*.,]{} $\norm{\cdot}^2$): this is done in order to employ the recursive formulation of the compactness in Eq. (\[Eq:iCVI\_2\]). Consider the matrix $\bm{S}_{k \times k}$, where $k$ prototypes $\bm{v}_i$ are used to compute the centroid-based SIL (instead of the $N$ samples $\bm{x}_i$ - which, by definition, are discarded after each presentation in online mode). Define each entry $s_{i,j} = D(\bm{v}_i,\omega_j)$ (dissimilarity of $\bm{v}_i$ to cluster $\omega_j$) of $\bm{S}_{k \times k}$ as: $$s_{i,j} = \frac{1}{n_j} \sum\limits_{\substack{l=1 \\ \bm{x}_l \in \omega_j}}^{n_j} \| \bm{x}_l - \bm{v}_i \|^2 = \frac{1}{n_j}CP(\bm{v}_i, \omega_j),
\label{Eq:Smat_inc1}$$ where $i=\{1,...,k\}$ and $j=\{1,...,k\}$. The silhouette coefficients can be obtained from the entries of $\bm{S}_{k \times k}$ as: $$sc_i = \frac{ \min\limits_{l,l \neq J}(s_{i,l}) - s_{i,J}}{\max\left[s_{i,J} , \min\limits_{l,l \neq J}(s_{i,l}) \right]}, \bm{v}_i \in \omega_J.
\label{Eq:cSIL1}$$ where $a_i=s_{i,J}$ and $b_i=\min\limits_{l,l \neq J}(s_{i,l})$.
At first, when examining Eq. (\[Eq:Smat\_inc1\]), one might be tempted to store a $k \times k$ matrix of compactness entries along with their accompanying $k^2$ vectors $\bm{g}$ (one for each entry) to enable incremental updates of each element of the matrix $\bm{S}_{k \times k}$; this approach, however, may lead to unnecessarily large memory requirements. A more careful examination shows that it is sufficient to simply redefine $CP$ and $\bm{g}$ for each cluster $i$ ($i=\{1,...,k\}$) as: $$CP_i = \sum \limits_{
\substack{j=1 \\ \bm{x}_j\in \omega_i}}^{n_i} \| \bm{x}_j - \Vec{\bm{0}} \|^2 = \sum \limits_{
\substack{j=1 \\ \bm{x}_j\in \omega_i}}^{n_i} \| \bm{x}_j \|^2,
\label{Eq:Smat_inc2}$$ $$\bm{g}_i = \sum \limits_{
\substack{j=1 \\ \bm{x}_j\in \omega_i}}^{n_i} \left( \bm{x}_j - \Vec{\bm{0}} \right) = \sum \limits_{
\substack{j=1 \\ \bm{x}_j\in \omega_i}}^{n_i} \bm{x}_j,
\label{Eq:Smat_inc3}$$ which is equivalent to fixing $\bm{v}=\Vec{\bm{0}}$. Therefore, their incremental update equations become (as opposed to Eqs. (\[Eq:iCVI\_2\]) and (\[Eq:iCVI\_3\])): $$CP_i^{new} = CP_i^{old} + \| \bm{x} \|^2,
\label{Eq:Smat_inc4}$$ $$\bm{g}_i^{new} = \bm{g}_i^{old} + \bm{x}.
\label{Eq:Smat_inc5}$$
Using this trick, when a sample $\bm{x}$ is assigned to cluster $\omega_J$, then the update equations for each entry $s_{i,j}$ of $\bm{S}_{k \times k}$ are given by Eq. (\[Eq:Smat\_inc6\]). Note that the numerators of the expressions in Eq. (\[Eq:Smat\_inc6\]) update the compactness “as if” the prototype has changed from $\Vec{\bm{0}}$ to $\bm{v}^{new}$ at every iteration ($\Delta \bm{v}
= - \bm{v}^{new}$). The remaining variables such as $n$, $N$, and $\bm{v}$ are updated as previously described. This allows $\{CP_1,...,CP_k\}$ and $\{\bm{g}_1,...,\bm{g}_k\}$ to continue being stored similarly to the previous iCVIs, instead of a $k \times k$ matrix of compactness and the associated $k^2$ vectors $\bm{g}$. $$s_{i,j}^{new} = \begin{cases}
\frac{1}{n_j^{new}} \left( CP_j^{old} + \| \bm{z}_i \|^2 + n_j^{old} \| \bm{v}_i^{old} \| ^2 - 2\bm{v}_i^{old~^T}\bm{g}_j^{old} \right)&,~(i \neq J,j=J) \\
\frac{1}{n_j^{old}} \left( CP_j^{old} + n_j^{old} \| \bm{v}_i^{new} \| ^2 - 2\bm{v}_i^{new~^T}\bm{g}_j^{old} \right) &,~(i = J,j \neq J) \\
\frac{1}{n_j^{new}} \left( CP_j^{old} + \| \bm{z}_j \|^2 + n_j^{old} \| \bm{v}_j^{new} \| ^2 - 2\bm{v}_j^{new~^T}\bm{g}_j^{old} \right) &,~(i=J,j=J) \\
s_{i,j}^{old} &,~(i \neq J,j \neq J) \\
\end{cases}
\label{Eq:Smat_inc6}$$
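A direct transcription of Eq. (\[Eq:Smat\_inc6\]) is sketched below; `update_S` is an illustrative name, and $\bm{z}$ is taken as the residual of the new sample with respect to the prototype appearing in each case ($\bm{z}_i = \bm{x} - \bm{v}_i^{old}$, $\bm{z}_j = \bm{x} - \bm{v}_j^{new}$), an assumption consistent with the surrounding derivation:

```python
import numpy as np

def update_S(S, J, x, n_old, n_new, v_old, v_new, CP, g):
    """Case-wise update of S after sample x joins cluster J
    (Eq. Smat_inc6). CP[j] and g[j] hold the *old* per-cluster
    statistics; v_old/v_new are the prototypes before/after the update."""
    k = S.shape[0]
    S_new = S.copy()
    for i in range(k):
        for j in range(k):
            if i != J and j == J:      # cluster J gained x; foreign prototype v_i
                z = x - v_old[i]
                S_new[i, j] = (CP[j] + z @ z
                               + n_old[j] * (v_old[i] @ v_old[i])
                               - 2.0 * (v_old[i] @ g[j])) / n_new[j]
            elif i == J and j != J:    # prototype v_J moved; cluster j unchanged
                S_new[i, j] = (CP[j] + n_old[j] * (v_new[i] @ v_new[i])
                               - 2.0 * (v_new[i] @ g[j])) / n_old[j]
            elif i == J and j == J:    # both the cluster and its prototype changed
                z = x - v_new[j]
                S_new[i, j] = (CP[j] + z @ z
                               + n_old[j] * (v_new[j] @ v_new[j])
                               - 2.0 * (v_new[j] @ g[j])) / n_new[j]
            # (i != J, j != J): entry unchanged
    return S_new
```

With $s_{i,j}$ interpreted as the mean squared distance of cluster $j$'s samples to prototype $\bm{v}_i$, these four cases reproduce a brute-force recomputation exactly.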
In the case where a new cluster $\omega_{k+1}$ is created following the presentation of sample $\bm{x}$, then a new column and a new row are appended to the matrix $\bm{S}_{k \times k}$. Unlike the other iCVIs, the compactness $CP_{k+1}$ and vector $\bm{g}_{k+1}$ of this cluster are initialized as $\| \bm{x} \|^2$ and $\bm{x}$, respectively. Then, the entries of $\bm{S}_{k \times k}$ are updated using Eq. (\[Eq:Smat\_inc7\]). $$s_{i,j}^{new} = \begin{cases}
CP_{k+1} + \| \bm{v}_i^{old} \| ^2 - 2\bm{v}_i^{old~^T}\bm{g}_{k+1} &,~(i \neq k+1,j=k+1) \\
\frac{1}{n_j^{old}} \left( CP_j^{old} + n_j^{old} \| \bm{v}_i^{new} \| ^2 - 2\bm{v}_i^{new~^T}\bm{g}_j^{old} \right) &,~(i = k+1,j \neq k+1) \\
0 &,~(i=k+1,j=k+1) \\
s_{i,j}^{old} &,~(i \neq k+1,j \neq k+1) \\
\end{cases}
\label{Eq:Smat_inc7}$$
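The growth step of Eq. (\[Eq:Smat\_inc7\]) can be sketched analogously (illustrative names; the new cluster's prototype is taken to be $\bm{x}$ itself, since the cluster contains a single sample):

```python
import numpy as np

def append_cluster(S, x, n, v, CP, g):
    """Eq. (Smat_inc7): grow S by one row/column when sample x creates
    cluster k+1, initialized with CP_{k+1} = ||x||^2, g_{k+1} = x, and
    n_{k+1} = 1. n, v, CP, g hold the existing (unchanged) clusters."""
    k = S.shape[0]
    S_big = np.zeros((k + 1, k + 1))
    S_big[:k, :k] = S                  # (i != k+1, j != k+1): unchanged
    CP_new, g_new = float(x @ x), x
    for i in range(k):
        # column k+1: the single new sample measured from old prototype v_i
        S_big[i, k] = CP_new + v[i] @ v[i] - 2.0 * (v[i] @ g_new)
        # row k+1: old cluster i measured from the new prototype (= x)
        S_big[k, i] = (CP[i] + n[i] * (x @ x) - 2.0 * (x @ g[i])) / n[i]
    S_big[k, k] = 0.0
    return S_big
```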
Following the incremental updates of the entries of $\bm{S}_{k \times k}$ (Eq. (\[Eq:Smat\_inc6\]) or (\[Eq:Smat\_inc7\])), the silhouette coefficients ($sc_i$) are computed (Eq. (\[Eq:cSIL1\])), and the iSIL is updated as: $$SIL^{new} = \frac{1}{k^{new}}\sum_{i=1}^{k^{new}} sc_i^{new}.
\label{Eq:icSIL}$$
Incremental Negentropy Increment (iNI) {#Sec:iNI}
--------------------------------------
The iNI computation is defined as: $$NI^{new} = \sum\limits_{i=1}^{k} p_i^{new}\ln \left(
\frac{\sqrt{| \bm{\Sigma}_i^{new} |}}{p_i^{new}} \right)
- \frac{1}{2} \ln |\bm{\Sigma}_{data}|
\label{Eq:iNI}$$
where $p_i^{new}$ denotes the updated prior probability of cluster $\omega_i$, and $\bm{\Sigma}_i^{new}$ is computed using the following recursive formula [@duda2000]: $$\begin{aligned}
\bm{\Sigma}^{new}&{}={}&\frac{n^{new}-2}{n^{new}-1}\left(\bm{\Sigma}^{old} - \delta I \right) + \frac{1}{n^{new}}\left(\bm{x} - \bm{v}^{old} \right)\left(\bm{x} - \bm{v}^{old} \right)^T + \delta I
\label{Eq:iSigma}\end{aligned}$$
This work’s authors set $\delta = 10^{-\frac{\epsilon}{d}}$ to avoid numerical errors, where $\epsilon$ is a user-defined parameter. If a new cluster is created, then $\bm{\Sigma} = \delta I$ and $|\bm{\Sigma}|=10^{-\epsilon}$.
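A sketch of the recursion in Eq. (\[Eq:iSigma\]) follows (illustrative names; here $n^{new}$ counts the sample being absorbed and $\bm{v}^{old}$ is the mean *before* absorbing it, so the recursion tracks the unbiased sample covariance plus the $\delta I$ regularizer):

```python
import numpy as np

def update_sigma(Sigma_old, v_old, x, n_new, delta):
    """Eq. (iSigma): regularized recursive covariance update.
    Maintains Sigma = (unbiased sample covariance) + delta * I."""
    d = x.shape[0]
    r = (x - v_old).reshape(-1, 1)     # residual w.r.t. the old mean
    return ((n_new - 2.0) / (n_new - 1.0) * (Sigma_old - delta * np.eye(d))
            + (r @ r.T) / n_new + delta * np.eye(d))
```

A new cluster starts from $\bm{\Sigma} = \delta I$ with $\delta = 10^{-\epsilon/d}$, so that $|\bm{\Sigma}| = \delta^d = 10^{-\epsilon}$, as stated above.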
Incremental representative Cross Information Potential (irCIP) and cross-entropy (irH) {#Sec:irCIP}
----------------------------------------------------------------------------------------
Section \[Sec:results\] will show that using the representative cross-entropy rH for computing the CEF makes it easier to observe the behavior of the incremental clustering process (this corroborates a previous study in which rH was deemed more informative than rCIP for multivariate data visualization [@leonardo2018a]): $$rH(\omega_i, \omega_j) = - \ln \left[rCIP(\omega_i, \omega_j)\right],
\label{Eq:valind_rCIP4}$$ $$CEF = \sum\limits_{i=1}^{k-1} \sum\limits_{j=i+1}^k rH(\omega_i, \omega_j).
\label{Eq:valind_rH_CEF}$$
Note that, as opposed to the rCIP-based CEF, larger values of the rH-based CEF indicate better clustering solutions (maximization). Concretely, since the CEF only measures separation, it is only necessary, as with iNI, to update the means and the covariance matrices online in order to construct the incremental CEF (iCEF). This is also done using Eqs. (\[Eq:iCVI\_1c\]) and (\[Eq:iSigma\]), respectively. The iCEFs based on rCIP and rH are hereafter referred to as irCIP and irH, respectively.
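Under the single-Gaussian-per-cluster assumption used later in the experiments ($M_i = M_j = 1$), the rCIP between two clusters reduces to a Gaussian with covariance $\bm{\Sigma}_i + \bm{\Sigma}_j$ evaluated at the mean difference; the following sketch (an illustration under that assumption, not the paper's code) accumulates the rH-based CEF:

```python
import numpy as np

def rcip(v_i, S_i, v_j, S_j):
    """Representative CIP for single-Gaussian clusters: the Gaussian
    convolution value G(v_i - v_j; 0, S_i + S_j)."""
    d = v_i.shape[0]
    S = S_i + S_j
    diff = v_i - v_j
    norm = 1.0 / np.sqrt(((2.0 * np.pi) ** d) * np.linalg.det(S))
    return norm * np.exp(-0.5 * diff @ np.linalg.solve(S, diff))

def cef_rh(vs, Sigmas):
    """Eqs. (valind_rCIP4)-(valind_rH_CEF): CEF = sum over cluster pairs
    of rH = -ln(rCIP); larger values are better."""
    k = len(vs)
    return sum(-np.log(rcip(vs[i], Sigmas[i], vs[j], Sigmas[j]))
               for i in range(k) for j in range(i + 1, k))
```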
Incremental Conn\_Index (iConn\_Index) {#Sec:iConn}
--------------------------------------
The Conn\_Index is another inherently batch CVI, as each element $(i,j)$ of the CADJ matrix requires counting the samples in the data set whose first and second closest prototypes are $\bm{v}_i$ and $\bm{v}_j$, respectively. Naturally, when clustering data online, $\bm{v}_i$ and $\bm{v}_j$ may change for previously presented samples as prototypes are continuously modified or created. However, for the purpose of building and incrementing CADJ and CONN matrices online (with only one element changing per sample presentation), it is assumed that the trends exhibited over time by the iConn\_Index do not differ dramatically from those of its offline counterpart. Batch calculation can be eliminated entirely by keeping the values of Eqs. (\[Eq:valind\_conn4\]) and (\[Eq:valind\_conn7\]) in memory and updating only the entries corresponding to the winning prototype $\bm{v}_i$.
In this study, the self-consistent hierarchy and multi-prototype cluster representation required by the iConn\_Index were generated using fuzzy SMART, whose modules A and B are used for prototype and cluster definition, respectively. Fuzzy SMART’s module A was modified in such a way that it forcefully creates two prototypes from the first two samples of every emerging cluster in module B. By enforcing this dynamic, each cluster always possesses at least two prototypes for the computation of the iConn\_Index. This strategy addresses two problems: first, it allows CADJ to be created from the second sample onward; second, it prevents some cases in which well-separated clusters are strongly connected simply because one of them does not have another prototype to assume the role of the second winner. The second winning prototype $\bm{v}_j$ for a sample is the winning A-side category when the first winning prototype $\bm{v}_i$ is removed from the A-side category set.
The iConn\_Index demands certain boundary conditions. In the case of exactly one prototype and one category, such as the case for the very first sample presentation, the CADJ matrix cannot be incremented, and the iConn\_Index will default to 0 [@tasdemir2011]. This paper presents a remedy for this whereby a count of samples is kept separate from the CADJ matrix (instance counting [@Carpenter1998]). Upon creation of the second prototype $\bm{v}_2$ in fuzzy SMART’s module A, the CADJ matrix will be incremented for the first time at element $(2,1)$. At this point, the element $(1,2)$ will be set to the number of samples seen so far belonging to $\bm{v}_1$. This situation is encountered in the very first sample presentation to fuzzy SMART.
Note that, in the case of a single category, $Inter\_Conn$, given by Eq. (\[Eq:valind\_conn5\]), defaults to 1 [@tasdemir2011]. In the case of a category with a single prototype, the $Intra\_Conn$ for that category, given by Eq. (\[Eq:valind\_conn4\]), also defaults to a value of 1 [@tasdemir2011]. Finally, instead of the original constraint imposed by Eq. (\[Eq:valind\_conn8\]), this paper’s iConn\_Index implementation uses , as this makes its behavior smoother and more consistent in this application domain.
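The per-sample CADJ/CONN bookkeeping described in this section can be sketched as follows (data layout and names are assumptions; the paper's implementation is tied to fuzzy SMART's module A categories):

```python
import numpy as np

def present_sample(x, prototypes, cadj, counts):
    """Find the first (i) and second (j) winning prototypes of x and
    increment CADJ[i, j]; with a single prototype no pair exists, so
    only the winner's instance count is kept (the boundary case above)."""
    d2 = [float(((x - p) ** 2).sum()) for p in prototypes]
    order = np.argsort(d2)
    i = int(order[0])
    counts[i] += 1
    if len(prototypes) > 1:
        j = int(order[1])
        cadj[i, j] += 1
    return i

def conn(cadj):
    """CONN[i, j] = CADJ[i, j] + CADJ[j, i] (symmetric connectivity)."""
    return cadj + cadj.T
```

Only the row of the winning prototype changes per presentation, which is what makes the online update cheap.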
Numerical experiments setup {#Sec:setup}
===========================
The numerical experiments were carried out using the MATLAB software environment. The Cluster Validity Analysis Platform Toolbox [@cvap] was used to compute the Adjusted Rand Index ($ARI$) [@hubert1985] to evaluate the partitions detected by the fuzzy ART-based clustering algorithms. Two synthetic data sets were used: (1) *R15* [@veenman2002; @shape], consisting of 600 samples and 15 clusters in two dimensions, and (2) *D4*, which is an in-house artificially generated data set with 2000 samples and 4 clusters, also in two dimensions. For comparison purposes, hard clustering versions of the iDB, iXB and PS CVIs were used in the experiments. Finally, it should be noted that this study does not employ multi-prototype representations for the irCIP and irH ([*i*.*e*.,]{} $M_i=M_j=1, \forall i,j$ in Eq. (\[Eq:valind\_rCIP2\])) since each of the clusters from the data sets used in these experiments can be modeled using single Gaussian distributions.
All fuzzy ART and SMART dynamics were performed with normalized and complement coded input, whereas the CVI computations were performed using the normalized data. To emulate scenarios in which there is a natural order of presentation, the samples were presented to fuzzy ART/SMART in a cluster-by-cluster fashion where samples within a given cluster were randomized. Finally, in these experiments, $\epsilon=12$ in Eq. (\[Eq:iSigma\]) for the incremental computation of the covariance matrices used by irCIP, irH and iNI. The source code of the CVIs/iCVIs, fuzzy ART/SMART, and experiments is provided at the Applied Computational Intelligence Laboratory public GitLab repository.
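For reference, the ARI used to score the detected partitions can be computed directly from the contingency table; the sketch below follows Hubert and Arabie's adjusted-for-chance form [@hubert1985] and stands in for the MATLAB toolbox used in the experiments:

```python
import numpy as np
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand Index from the class-vs-cluster contingency table."""
    classes = sorted(set(labels_true))
    clusters = sorted(set(labels_pred))
    n = len(labels_true)
    table = np.zeros((len(classes), len(clusters)), dtype=int)
    for t, p in zip(labels_true, labels_pred):
        table[classes.index(t), clusters.index(p)] += 1
    sum_ij = sum(comb(int(v), 2) for v in table.ravel())
    sum_a = sum(comb(int(v), 2) for v in table.sum(axis=1))
    sum_b = sum(comb(int(v), 2) for v in table.sum(axis=0))
    expected = sum_a * sum_b / comb(n, 2)      # chance-adjustment term
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)
```

An ARI of 1 indicates a perfect match up to label permutation; values near 0 indicate chance-level agreement.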
A comparative study {#Sec:results}
===================
This section discusses the behavior of the iCVIs in three general cases when assessing the quality of the partitions detected by fuzzy ART-based systems in real-time: (1) high-quality partitions, (2) under-partitions, and (3) over-partitions. It should be emphasized that this analysis is not focused on evaluating the performance or capabilities of the chosen clustering algorithms, but instead the purpose of this study is to observe the behavior of the iCVIs in these different scenarios to gain insight on their applicability. Moreover, in each of these scenarios, the iCVIs’ dynamics are investigated in two sub-cases: (a) the creation of a new cluster and (b) the presentation of samples within a given cluster.
The following discussion is relative to the data sets used in the experiments and their respective order of cluster and sample presentation (Fig. \[Fig:all\_orders\]). This is not an exhaustive study of all possible permutations of clusters and samples, as each of them may trigger different global behaviors of the iCVIs. Nonetheless, it can be assumed that some behaviors are typical, which allows the inference of some particular problems that may arise during incremental unsupervised learning.
Similar to [@Moshtaghi2018; @Moshtaghi2018b; @Keller2018; @Ibrahim2018; @Ibrahim2018b], a natural ordering, [*i*.*e*.,]{} meaningful temporal information, is assumed. The *R15* data set was used to illustrate the behavior of the iCVIs in cases (1) and (2), which are depicted in Figs. \[Fig:R15HQ\] and \[Fig:R15UP\], respectively. Alternatively, the *D4* data set was used to illustrate the behavior of the iCVIs in cases (1) and (3), which are depicted in Figs. \[Fig:D4HQ\] and \[Fig:D4OP\], respectively. For both data sets, case (1) is used as a reference to which their respective cases (2) and (3) are compared. Moreover, Figs. \[Fig:R15HQ\] to \[Fig:D4OP\] depict the iCVIs immediately following the creation of the second cluster.
Correct estimation and underestimation of the number of clusters
----------------------------------------------------------------
Consider the high-quality partition of the *R15* data set shown in Fig. \[Fig:partition1\], which was obtained when presenting samples in the cluster-by-cluster ordering depicted in Fig. \[Fig:order1\]. This study shows, in general (and as expected from previous studies on iDB and iXB [@Moshtaghi2018; @Moshtaghi2018b; @Keller2018; @Ibrahim2018; @Ibrahim2018b]), that the drastic changes in most iCVI values follow the emergence of new clusters. The exceptions are the iXB and irCIP, which appear much less informative than the other iCVIs used in this particular experiment, as they show no clearly defined tendencies and seem insensitive to the well-separated clusters numbered 12 to 15 in Fig. \[Fig:partition1\].
During the presentation of samples within a given cluster, many different behaviors can be observed. Typically, iCH either improves or has small fluctuations; iSIL and iDB either worsen or have small fluctuations; iI/iPBM and iNI either worsen or improve; iConn\_Index and PS improve; and irH consistently undergoes small fluctuations. Again, irCIP and iXB do not appear to be particularly useful compared to the other iCVIs since no apparent trends were found over the iterations. If an iCVI displays more than one trend, these usually do not occur prominently and simultaneously ([*i*.*e*.,]{} during the presentation of samples from the same cluster). Note that these are important characteristics, since they will help in identifying the under-partition cases.
Now consider the case of underestimating the number of clusters, as shown in Fig. \[Fig:partition2\]. The latter was obtained when presenting samples in the cluster-by-cluster ordering depicted in Fig. \[Fig:order2\]. This research notes that most iCVIs consistently worsen while the algorithm incorrectly agglomerates samples from different clusters (clusters numbered 2 to 9 in Fig. \[Fig:order2\]) into a single cluster (cluster numbered 2 in Fig. \[Fig:partition2\]), except for the iConn\_Index (which actually improves due to the strong connectivity among prototypes) and irCIP (which remains constant). Moreover, when incorrectly merging clusters 10 and 11 in Fig. \[Fig:order2\] into a single cluster labeled 3 in Fig. \[Fig:partition2\], all iCVIs undergo a drastic change, typically toward worse values (except for PS, which only undergoes a slight slope change), while the number of clusters remains constant.
The behavior previously described can also be observed for clusters labeled 4 and 1 in Fig. \[Fig:partition2\]. Drastic (iSIL, irCIP, irH, iNI, iDB, iXB, and iConn\_Index) or more subtle (iCH, iI/iPBM) changes entailing worsening trends take place in the behavior of all CVIs in Fig. \[Fig:R15UP\] when these samples are assigned to the same cluster; again, with the exception of PS, which still improves, but with a different inclination. These changes clearly indicate that the clustering algorithm is mistakenly encoding the samples under the same cluster umbrella.
At this stage, it is important to be cautious because even when a high-quality partition is retrieved (Fig. \[Fig:R15HQ\]), some iCVIs (such as iSIL, iConn\_Index, and iDB) can both improve and worsen when fuzzy ART is allocating samples to the same cluster (although this happens less frequently and less drastically). Therefore, it is recommended to observe more than one iCVI to determine if under-partition is taking place.
Correct estimation and overestimation of the number of clusters
---------------------------------------------------------------
For the sake of clarity, over-partition is illustrated using the *D4* data set, which has a smaller number of clusters. First, the iCVI behaviors regarding the high-quality partition shown in Fig. \[Fig:partition3\] are observed as a reference; these were obtained using the cluster sequence depicted in Fig. \[Fig:order3\]. The same iCVI trends seem to hold following the emergence of new clusters as well as during the presentation of samples belonging to a given cluster (and again, iXB and irCIP provided the least visually descriptive behavior over time). A notable exception, however, is the iNI, which quickly improves immediately after the creation of a new cluster and then worsens as samples from the same cluster are presented. This supports the fact that the iCVI behaviors are not universal: naturally, they are data- and order-dependent.
Now consider the over-partition problem depicted in Fig. \[Fig:partition4\], which was also obtained using the cluster sequence depicted in Fig. \[Fig:order3\]. As expected, a steep descent (or ascent, depending on the iCVI) usually occurs when new clusters are created. However, since this trend appears to occur regardless of the partition quality (being inherent to all iCVIs), it is not sufficient to identify this issue. In this scenario, unless there was additional a priori information ([*e*.*g*.,]{} the cardinality of clusters) to detect a premature partition, these iCVIs were unable to clearly identify over-partition solely based on the transitions of their values versus the number of clusters.
Moreover, although there is a natural order for the presentation of clusters ([*i*.*e*.,]{} as a time series), the presentation of samples within each cluster is random. Specifically, when the cluster is over-partitioned, samples are not presented in a subcluster-by-subcluster manner, but instead they are randomly sampled from the different subclusters. This adds another layer of complexity and thus makes this problem even more challenging. Compared to the correct partition in Fig. \[Fig:partition3\], most iCVIs do not exhibit an overall behavior that deviates significantly from the one typically expected when accurately partitioning *D4* (Fig. \[Fig:partition3\]), although most of them yield worse cluster quality evaluation values. In reality, in a true unsupervised learning scenario, such reference behavior is unavailable; furthermore, the values of most iCVIs are not bounded, thus making this problem even more challenging to detect.
Except for the iConn\_Index, none of the iCVIs provided distinctive insights on the over-partition problem: there is a noticeable decrease of iConn\_Index values (due to a large increase of $Inter\_Conn$ and decrease of $Intra\_Conn$), especially considering that this iCVI’s value is bounded to the interval $[0,1]$. More importantly, following the over-partition, it does not exhibit the general behavior previously observed in Figs. \[Fig:iConn1\] and \[Fig:iConn3\], and it maintains its poor assessment of the clustering solution, thus indicating that there is an issue with the partition found by the clustering algorithm.
Incremental versus batch implementations {#Sec:inc_vs_batch}
========================================
When evaluated over time, most iCVIs discussed in this study yield the same values as their batch counterparts ([*e*.*g*.,]{} the recursive formulation of compactness is an exact computation, not an approximation [@Moshtaghi2018; @Moshtaghi2018b]). The only exception is the iConn\_Index, which is the subject of analysis of this section. Figs. \[Fig:iConn\_R15\_A\] to \[Fig:iConn\_D4\_B\] illustrate the evolution of both Conn\_Index and iConn\_Index for all four experiments described in Section \[Sec:results\]. These figures also show the error (difference) between the batch and incremental implementations of the Conn\_Index after the presentation of each sample. To obtain the batch Conn\_Index values, fuzzy SMART was set to evaluation mode and all first and second winning prototypes were recomputed after the presentation of each sample.
Notably, error spikes consistently occur on the appearance of new clusters. In general, the error gradually diminishes over time, as samples within a given cluster are continuously presented to the system. These trends are particularly clear when fuzzy SMART yields high-quality partitions (Figs. \[Fig:iConn\_R15\_A\] and \[Fig:iConn\_D4\_A\]). Regarding the cases of under- and over-partitioning (Figs. \[Fig:iConn\_R15\_B\] and \[Fig:iConn\_D4\_B\]), the errors are more pronounced. However, iConn\_Index still smoothly follows the overall trends of its batch counterpart (which has a more jagged behavior).
Finally, the effect of fuzzy SMART module A’s quantization level on the similarity of the batch and incremental implementations was investigated. This was done by varying its vigilance parameter $\rho_A$ in the closed interval $[\rho_B, 0.96]$ (larger values of $\rho_A$ produce finer granularity of cluster prototypes). The Pearson correlation coefficients [@Bain1992] and the mean squared error (MSE) depicted in Figs. \[Fig:rcoef1\], \[Fig:rcoef2\], \[Fig:rcoef3\], and \[Fig:rcoef4\] show that the behavior of iConn\_Index is consistent with Conn\_Index across wide ranges of fuzzy SMART module A’s vigilance. Interestingly, their dissimilarity tends to increase with very large vigilance values. These results support the original assumption, stated in Section \[Sec:iConn\], that both versions of the Conn\_Index would behave similarly. Therefore, iConn\_Index is suitable for monitoring the performance of online clustering methods.
Conclusion {#Sec:conclusion}
==========
This paper extended six cluster validity indices (CVIs) to incremental versions, namely, incremental Calinski-Harabasz (iCH), incremental I index and incremental Pakhira-Bandyopadhyay-Maulik (iI and iPBM), incremental Silhouette (iSIL), incremental Negentropy Increment (iNI), incremental Representative Cross Information Potential (irCIP) and Cross Entropy (irH), and incremental Conn\_Index (iConn\_Index). Furthermore, using fuzzy adaptive resonance theory (ART)-based clustering algorithms, three different scenarios were analyzed: detection of the correct number of clusters in high-quality partitions, under- and over-partitioning. In such scenarios, a comparative study was performed among the presented incremental cluster validity indices (iCVIs), the Partition Separation (PS) index, the incremental Xie-Beni (iXB), and the incremental Davies-Bouldin (iDB).
As expected from previous studies, most iCVIs undergo abrupt changes following the creation of a new cluster. When samples from the same cluster are presented, however, each iCVI exhibits a particular behavior, which was taken as a reference to compare the cases of under- and over-partitioning a data set. In these experiments, the least visually informative iCVIs ([*i*.*e*.,]{} those providing the fewest useful visual cues in their behavior) were irCIP and iXB. In particular, most iCVIs detected under-partitioning in at least one stage of the incremental clustering process, whereas only the iConn\_Index provided some insight to indicate over-partitioning problems. Nonetheless, the iConn\_Index failed to identify one of the under-partitioning cases. Therefore, the usual recommendation regarding batch CVIs also applies to iCVIs: this research highlights the importance of monitoring a number of iCVI dynamics at any given time, rather than relying on the assessment of only one. Finally, it was shown that, although not equal to its batch counterpart, the iConn\_Index follows the same general trends. It is expected that the observations from the study presented here will assist in incremental clustering applications such as data streams.
Acknowledgment {#acknowledgment .unnumbered}
==============
This research was sponsored by the Missouri University of Science and Technology Mary K. Finley Endowment and Intelligent Systems Center; the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES) - Finance code BEX 13494/13-9; the U.S. Dept. of Education Graduate Assistance in Areas of National Need program; and the Army Research Laboratory (ARL), and it was accomplished under Cooperative Agreement Number W911NF-18-2-0260. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. The authors would also like to thank Prof. James M. Keller and his coauthors for providing an early copy of reference [@Keller2018].
[^1]: L. E. Brito da Silva is with the Applied Computational Intelligence Laboratory, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO 65409 USA, and also with the CAPES Foundation, Ministry of Education of Brazil, Brasília, DF 70040-020, Brazil (e-mail: [email protected]).
[^2]: N. M. Melton is with the Applied Computational Intelligence Laboratory, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO 65409 USA (e-mail: [email protected]).
[^3]: D. C. Wunsch II is with the Applied Computational Intelligence Laboratory, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO 65409 USA (e-mail: [email protected]).
---
abstract: 'X-ray and neutron diffraction as well as muon spin relaxation and Mössbauer experiments performed on SrFe$_2$As$_2$ polycrystals confirm a sharp first order transition at $T_0 = 205$K corresponding to an orthorhombic phase distortion and to a columnar antiferromagnetic Fe ordering with a propagation vector (1,0,1), and a larger distortion and larger size of the ordered moment than reported for BaFe$_2$As$_2$. The structural and the magnetic order parameters present a remarkable similarity in their temperature dependence from $T_0$ down to low temperatures, showing that both phenomena are intimately connected. Accordingly, the size of the ordered Fe moments scales with the lattice distortion when going from SrFe$_2$As$_2$ to BaFe$_2$As$_2$. Full-potential band structure calculations confirm that the columnar magnetic order and the orthorhombic lattice distortion are intrinsically tied to each other.'
author:
- 'A.Jesche'
- 'N.Caroca-Canales'
- 'H.Rosner'
- 'H.Borrmann'
- 'A.Ormeci'
- 'D.Kasinathan'
- 'K.Kaneko'
- 'H.H.Klauss'
- 'H.Luetkens'
- 'R.Khasanov'
- 'A.Amato'
- 'A.Hoser'
- 'C.Krellner'
- 'C.Geibel'
title: 'Strong coupling between magnetic and structural order parameters in SrFe$_2$As$_2$'
---
Compounds with FeAs layers have recently attracted considerable interest, because they present an intriguing magnetic and structural transition, which gets suppressed upon doping resulting in the appearance of high temperature superconductivity. This behaviour was first observed in the RFeAsO series of compounds (R= La-Gd)[@Kamihara:2008; @ChenNature:2008; @delacruz:2008; @Klauss:2008; @Fratini:2008] and more recently in the AFe$_2$As$_2$ class of materials (A = Ba, Sr) [@RotterA:2008; @RotterB:2008; @KrellnerPRL:2008; @Chen122:2008; @Sasmal:2008]. The onset of superconductivity at the disappearance of a magnetic ordered state is reminiscent of the behavior in the cuprates and in the heavy fermion systems, and therefore suggests the SC state in these doped layered FeAs systems to be of unconventional nature, too. While this has to be confirmed by further studies, there seems to be a general belief that the intriguing properties of these compounds are connected with very peculiar properties of the FeAs layers.\
While the occurrence of magnetic order in LaFeAsO and in the AFe$_2$As$_2$ compounds was reported already more than 15 years ago [@Pfisterer:1983; @Raffius:1993], the observation of the lattice deformation is quite recent [@delacruz:2008; @Fratini:2008; @RotterA:2008; @Yan:2008]. The interaction between both phenomena is a very interesting problem on its own. A thorough understanding of these two phenomena, their mutual relation, and how they get suppressed under doping is likely a prerequisite to get a deeper insight into the origin and the nature of the superconducting state. In the RFeAsO compounds, the formation of the spin density wave (SDW) seems to occur in a second order transition at a slightly lower temperature $T_N \approx 140$K than the structural transition at $T_0=150$K [@delacruz:2008; @Klauss:2008]. For BaFe$_2$As$_2$, the first report by M.Rotter *et al.* [@RotterA:2008] suggested both ordering phenomena to occur simultaneously at a second order transition at $T_0=140$K. Shortly thereafter, Q. Huang *et al.* [@Huang:2008] claimed the structural distortion to be first order while the magnetic order sets in continuously once the structural distortion is completed. Thus the present picture for both the RFeAsO and BaFe$_2$As$_2$ systems suggests that the structural distortion has to be completed before the AF order can form, and that the two order parameters are not directly connected. For SrFe$_2$As$_2$, we recently showed that a high quality sample presents a very sharp first order transition at $T_0=205$K, without any evidence for a second transition [@KrellnerPRL:2008]. In the present paper, we report a precise study of the evolution of the magnetic and of the structural order parameter in this compound by combining temperature dependent muon spin relaxation and X-ray diffraction measurements with bulk susceptibility, resistivity, specific heat as well as preliminary Mössbauer and neutron scattering data.
Our results demonstrate that in SrFe$_2$As$_2$, the formation of the SDW and the lattice distortion are intimately coupled. Comparison with results reported for BaFe$_2$As$_2$ also supports a strong connection between both order parameters.\
The sample preparation and characterization have been described in detail in our previous paper [@KrellnerPRL:2008]. Susceptibility, specific heat and resistivity measurements were carried out using standard techniques in commercial Quantum Design PPMS and MPMS equipment. Temperature dependent X-ray powder patterns were obtained using an imaging plate Guinier Camera HUBER G670 (Co-K$_{\alpha}$ radiation) equipped with a closed cycle cryostat. Zero field muon spin relaxation (ZF-$\mu$SR) experiments were performed between 1.6 and 300K using the GPS spectrometer at the Paul Scherrer Institute. To gain deeper insight into the relation of magnetism and the orthorhombic distortion in AFe$_2$As$_2$ on a microscopic level, we performed density functional band structure calculations within the local (spin) density approximation (L(S)DA). Using the experimental structural parameters of the tetragonal cell [@Pfisterer:1983; @Raffius:1993; @Pfisterer:1980] as a starting point, we applied the full-potential local-orbital code FPLO [@koepernik:1999] (version 7.00-28) in both scalar-relativistic and fully relativistic versions, with the Perdew-Wang exchange correlation potential [@Perdew:1992]. A well-converged $k$-mesh of at least 18$^3$ points within the Brillouin zone of the larger orthorhombic cell has been used.\
In Fig. 1a we show the anomalies in the resistivity $\rho(T)$, susceptibility $\chi(T)$ and specific heat $C(T)$ which evidence a sharp, first order transition in our polycrystalline SrFe$_2$As$_2$ sample, as discussed in our previous paper [@KrellnerPRL:2008]. While $\rho(T)$ is only weakly decreasing with temperature between 300K and 205K, it presents a 5% drop at $T_0$ followed by a further strong decrease to low temperatures, resulting in a resistivity ratio $RR_{1.8 K}\approx 32$. The susceptibility, except for a Curie-like contribution likely due to paramagnetic impurities or a small amount of foreign phases, seems to be $T$ independent above and below $T_0$, but also presents a drop of $\Delta\chi\approx 1.1 \cdot 10^{-9}$m$^3$/mol at $T_0$. The specific heat measurement shows a sharp peak at $T_0$, which was interpreted as a first order transition with a latent heat $\Delta H \approx 200$J/mol. We shall first focus on the results of the X-ray measurements. At room temperature and down to 210K the powder diffraction pattern evidenced an undistorted tetragonal (TT) ThCr$_2$Si$_2$ structure type. In contrast, in all patterns taken at 205K or lower temperatures, some of the Bragg peaks are clearly split, while others are not, demonstrating the structural distortion (Fig.1b). The spectra at 205K and below can be well fitted with an orthorhombic (OT) unit cell (Fmmm) with $a_{OT}$ = $a_{TT}\cdot \sqrt{2} (1 + \delta)$ and $b_{OT}$ = $a_{TT} \cdot \sqrt{2}(1 - \delta)$ in analogy to the structure proposed for BaFe$_2$As$_2$ [@RotterA:2008] and in accordance with [@Yan:2008]. So $\delta$ corresponds to the order parameter of the structural phase transition. A lattice parameter fit at the lowest investigated temperature $T = 60$K gave $a = 5.5746(4)$ Å, $b = 5.5130(8)$ Å and $c = 12.286(4)$ Å, corresponding to a saturation value of the distortion $\delta_0 = 0.56(1) \cdot 10^{-2}$ at low $T$.
The evolution of $\delta$ with temperature was determined by analyzing precisely the splitting of the 400/040 Bragg peaks (Fig.1b and Fig.2a). Here we included data taken upon cooling and data taken upon heating the sample. We did not observe any differences between both sets of data. Between 210K and 205K, the 220 peak of the TT high temperature phase disappears abruptly, being replaced by the 400 and 040 peaks of the OT low temperature phase. At 210K, shoulders on both sides of the 220 peak indicate that a small amount of OT phase is coexisting with the TT phase, in accordance with a first order transition. The presence of this OT phase above $T_0$ might be due to strain or defects induced by the powdering process. The distortion $\delta$ increases step-like to 70% of $\delta_0$. This is further clear evidence for a first order transition. However, $\delta$ continues to increase with decreasing temperatures, indicating a further strengthening of the order parameter below the transition. A comparison with the data reported previously by J. Q. Yan *et al.* [@Yan:2008] gives strong evidence that this further increase of $\delta(T)$ below $T_0$ is an intrinsic property and not just a consequence of an imperfect sample. In general both sets of data are similar [@note1]. However, our results evidence a very abrupt transition from the TT to the OT phase, while the data of [@Yan:2008] show a large coexistence region ranging from 160K up to 198K. This broadening of the transition as well as the lower $T_0$ in the 122 single crystals of [@Yan:2008] are due to Sn incorporation. However, both the absolute value of the splitting at low $T$ and that at the transition are very similar to our results. Thus, while the transition temperature and the sharpness of the transition are quite sensitive to defects, the splitting at $T_0$ and at $T = 0$K as well as the increase of $\delta(T)$ below $T_0$ are not.\
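As a quick consistency check (not part of the original analysis), $\delta$ follows directly from the refined lattice parameters, since $a_{OT} - b_{OT} = 2\sqrt{2}\,a_{TT}\,\delta$ and $a_{OT} + b_{OT} = 2\sqrt{2}\,a_{TT}$:

```python
# delta = (a - b) / (a + b), from a_OT = a_TT*sqrt(2)(1 + delta) and
# b_OT = a_TT*sqrt(2)(1 - delta); values from the T = 60 K refinement.
a, b = 5.5746, 5.5130          # lattice parameters in Angstrom
delta = (a - b) / (a + b)
print(f"delta_0 = {delta:.2e}")  # prints delta_0 = 5.56e-03
```

This reproduces the quoted saturation value $\delta_0 = 0.56(1) \cdot 10^{-2}$.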
In order to address the magnetic order parameter, we performed preliminary Fe Mössbauer measurements and elastic neutron scattering experiments. The Mössbauer spectra evidenced a very well-defined hyperfine splitting at low $T$, corresponding to a hyperfine field of 8.5T [@Luetkens:2008], which is identical to the value reported for EuFe$_2$As$_2$ [@Raffius:1993]. In the neutron scattering spectra we observed sharp magnetic Bragg peaks below $T_0$, similar to those reported for BaFe$_2$As$_2$. The higher precision of our measurement allowed us to uniquely fix the magnetic structure as a columnar antiferromagnetic order with propagation vector (1,0,1) and an Fe moment of 1.0(1) $\mu_{B}$ oriented along the a-axis. However, we found that the most precise information on the evolution of the magnetic order parameter was obtained in our $\mu$SR experiments. Muon spin relaxation is a well established method for revealing and studying magnetic order: it probes the local field induced at the muon site(s) by slowly fluctuating or ordered nearby magnetic moments. For temperatures above 205K we observe only a slow decay of the muon polarization, as expected for a non-magnetic material. Below 205K, well defined and strong oscillations appear in the time dependence of the muon polarization, as shown in the inset of Fig.3, evidencing a precession of the muons in an internal field. A Fourier analysis of the signal reveals two distinct components with very well defined frequencies, one at $f_1 = 44$MHz corresponding to $\approx$70% of the signal and one at $f_2 = 13$MHz corresponding to $\approx$30% of the signal [@Luetkens:2008]. This indicates the presence of two distinct muon sites, one more strongly and one more weakly coupled to the Fe moments. 
This resembles the situation in LaFeAsO, where two components were also observed, one with a larger frequency $f_1 = 23$MHz corresponding to 70% of the muons and one with a lower frequency $f_2 = 3$MHz corresponding to 30% of the muons [@Klauss:2008]. The ratio between the respective $f_1$ frequencies in SrFe$_2$As$_2$ and LaFeAsO is similar to the ratio of the hyperfine fields measured in Mössbauer experiments, and thus to the ratio of the ordered Fe moments. This suggests that the muon site corresponding to $f_1$ is the same in both types of compounds and likely located within the FeAs layers, while the muon site corresponding to $f_2$ is probably in the region separating the FeAs layers, which differs between both types of compounds. The oscillations we observed in SrFe$_2$As$_2$ are much better defined than those reported for LaFeAsO, which is likely related to the much better crystallinity and higher homogeneity of the AFe$_2$As$_2$ compounds compared to the RFeAsO ones. On the other hand, it indicates that the internal field at each muon site in SrFe$_2$As$_2$ is sharply defined, implying a well defined long range commensurate magnetic order. In the main part of Fig.3, we show the temperature dependence of $f_1$ in SrFe$_2$As$_2$. $f_1$ is proportional to the size of the ordered moment and thus to the magnetic order parameter. In contrast to LaFeAsO, where $f_1$ increases continuously below a second-order transition at $T_N \approx 134$K, in SrFe$_2$As$_2$ we observe at 205K a sharp step-like increase of $f_1$ to 66% of its saturation value at low $T$. This is again an indication for a first-order transition. However, as already noticed for the $T$ dependence of the lattice distortion $\delta$, the magnetic order parameter also further increases below $T_0$ with decreasing $T$. We compare in Fig.2b the $T$ dependence of $\delta(T)$ and $f_1(T)$ normalized to their saturation values at low $T$. 
The $T$ dependencies are identical within the accuracy of the experiments, demonstrating that both order parameters are intimately coupled to each other. To elucidate the role of various possible magnetic orderings for the OT distortion of the crystal structure of SrFe$_2$As$_2$ and the related Ba compound, we performed band structure calculations for various spin configurations within the FeAs layers. Starting from different initial ordering patterns, we obtained self-consistent solutions for (i) non-magnetic, (ii) ferromagnetic, (iii) Néel ordered and (iv) columnar ordered FeAs layers. For both systems the lowest energy was found for the columnar ordered state. Starting from the experimental structural parameters for the TT unit cells, we varied the axis ratio $b/a$, keeping the other parameters and the cell volume constant. The resulting curves for the Néel ordered and columnar ordered FeAs layers are shown in Fig. \[lda\]. Except for the columnar magnetic order (iv), which yields a significant OT splitting of the TT axes, all other patterns (i-iii) resulted in an energy minimum for an undistorted TT structure. The inclusion of spin-orbit coupling did not change this result within the numerical error bars. In surprisingly good agreement with our neutron experiments, we obtain a shortening of the $b$ axis along the ferromagnetic columns compared to the $a$ axis along the antiferromagnetic propagation, resulting in a $b/a$ ratio of 0.984 for SrFe$_2$As$_2$ and 0.987 for the Ba system. These values are only slightly larger than the experimentally observed distortions (extrapolated to zero temperature) and in excellent agreement with respect to the relative changes between both compounds. Thus, obtaining an OT axes splitting for the columnar magnetic order only, together with its lowest energy, indicates that this magnetic order and the OT lattice distortion in both compounds are intrinsically tied to each other.
In summary, we report a detailed study of the structural distortion and of the magnetic ordering using X-ray diffraction and $\mu$SR experiments as well as preliminary neutron scattering and Mössbauer spectroscopy data. We confirm the low temperature phase to be analogous to that reported for BaFe$_2$As$_2$, with an OT structural distortion, space group Fmmm, and a columnar antiferromagnetic ordering of the Fe moments with a propagation vector (1,0,1). However, both the structural distortion and the size of the ordered Fe moment are larger in the Sr than in the Ba compound. The magnetic and the structural order parameters not only show a sharp first-order transition at $T_0$, as previously suggested, but also evidence the same $T$ dependence in the whole $T$ range from $T_0$ down to the lowest temperatures. At $T_0$ both the OT distortion $\delta$ and the muon precession frequency $f_1$ jump to only $\approx$68% of their low-$T$ saturation value. A comparison with X-ray data obtained on single crystals with a lower $T_0$ and a broader transition indicates that the further increase of $\delta(T)$ and $f_1(T)$ below $T_0$ is an intrinsic behavior and not due to defects. The identical $T$ dependence of $\delta(T)$ and $f_1(T)$ proves that the structural and the magnetic order parameters are intimately coupled. In this respect, our data unambiguously indicate that SrFe$_2$As$_2$ behaves very differently from the picture presently proposed for the RFeAsO compounds, where the SDW is suggested to form in a second-order transition $\approx10$K below the structural transition, the two order parameters being disconnected. However, at the present level of investigation, one cannot exclude that the double, second-order-type transitions in RFeAsO are one broadened first-order transition due to a poorer sample quality. The better quality of the SrFe$_2$As$_2$ sample is evidenced by the much higher residual resistivity ratio and the sharpness of the transition in all investigated properties. 
One of the reasons for this better quality is that the preparation of the RFeAsO compounds, and especially the control of their stoichiometry, is more difficult than that of the AFe$_2$As$_2$ compounds. The strong connection between the magnetic and the structural order parameter is not only present in SrFe$_2$As$_2$, but seems to be a more general property of the AFe$_2$As$_2$ systems. This is evidenced by a comparison of the magnitude of both order parameters between SrFe$_2$As$_2$ and BaFe$_2$As$_2$. From the data of Rotter *et al.* one can deduce $\delta_0 = 0.36 \cdot 10^{-2}$ for BaFe$_2$As$_2$ [@RotterA:2008], which is 37% smaller than $\delta_0 = 0.56 \cdot 10^{-2}$ in SrFe$_2$As$_2$. The value of the hyperfine field determined in Fe Mössbauer experiments, and thus the size of the ordered Fe moment, also decreases by 36%, from $B_{eff} = 8.5$T in SrFe$_2$As$_2$ to $B_{eff} = 5.4$T in BaFe$_2$As$_2$\[6\]. Thus, both the magnetic and the structural order parameters scale by about the same amount when going from SrFe$_2$As$_2$ to BaFe$_2$As$_2$. Fully-relativistic band structure calculations obtain an OT lattice distortion only for the columnar magnetic order, in very good agreement with the experimental data, including the correct orientation of the Fe moments along the $a$-axis. This yields strong support to the idea that the lattice distortion and the columnar magnetic order in these compounds are intrinsically tied to each other. While finalizing our paper, a study of the structural distortion in SrFe$_2$As$_2$ and EuFe$_2$As$_2$ appeared as a preprint, showing similar structural data but suggesting a second-order-type transition [@Tegel:2008].
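The parallel scaling of the two order parameters can be verified directly from the quoted numbers (an illustrative Python sketch; with these rounded inputs both reductions come out at about 36%, consistent with the 36-37% quoted above, the small difference presumably arising from the unrounded $\delta_0$ values):

```python
delta_Sr, delta_Ba = 0.56e-2, 0.36e-2   # structural distortion delta_0
B_Sr, B_Ba = 8.5, 5.4                   # Fe hyperfine field (T)

red_struct = 1 - delta_Ba / delta_Sr    # relative reduction Sr -> Ba
red_mag = 1 - B_Ba / B_Sr
print(f"structural: {red_struct:.0%}, magnetic: {red_mag:.0%}")
```

Both order parameters shrink by essentially the same fraction, which is the scaling argument made in the text.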
![\[rhochiCpeak\] Left panel: resistivity $\rho(T)$, susceptibility $\chi(T)$ and specific heat $C(T)$ in SrFe$_2$As$_2$ near the first order transition at $T_0 = 205$K. Right panel: splitting of the 220 tetragonal peak into the 400 and 040 peaks of the orthorhombic structure below $T_0$.](fig1.eps){width="8cm"}
![\[ordnungsparameter\] (Color online) Left panel: $T$ dependence of the positions of the 400 and 040 peaks. Right panel: $T$ dependence of the lattice distortion $\delta(T)$ and of the muon precession frequency $f_1$ normalized to their saturation value at low $T$. ](fig2.eps){width="8cm"}
![\[muSR\] Temperature dependence of the muon precession frequency $f_1$. Inset: Time dependence of the muon spin polarization at $T = 1.6$K.](fig3.eps){width="8cm"}
![\[lda\] (Color online) Calculated total energy versus axes ratio $b/a$ for the orthorhombic unit cell of SrFe$_2$As$_2$ (red) and BaFe$_2$As$_2$ (blue). The calculated data points are marked by the symbols, the lines are fourth order polynomial fits. The minima are marked by vertical lines. The minimum of SrFe$_2$As$_2$ is chosen as energy zero and the BaFe$_2$As$_2$ curves are shifted upwards by 5meV. The upper panel shows no distortion for a Néel order within the FeAs layers, whereas the lower panel demonstrates the orthorhombic distortion for columnar order in the FeAs layer.](fig4.eps){height="8.5cm"}
---
abstract: 'We study the effects of surface tension between normal and superfluid regions of a trapped Fermi gas at unitarity. We find that surface tension causes notable distortions in the shape of large aspect ratio clouds. Including these distortions in our theories resolves many of the apparent discrepancies among different experiments and between theory and experiments.'
author:
- 'Theja N. De Silva$^{a,b}$, Erich J. Mueller$^{a}$'
title: Surface Tension in Unitary Fermi Gases with Population Imbalance
---
Experimentalists are now using dilute gases to controllably study the properties of strongly interacting systems of superfluid fermionic atoms [@biglistE; @ketterle1; @ketterle2; @randy]. Recent experiments have examined the exotic circumstance where atoms with two different hyperfine spins \[denoted up and down\] are placed in a harmonic trap, but the number of spin-up atoms, $N_\uparrow$, is greater than the number of spin-down atoms, $N_\downarrow$ [@ketterle1; @ketterle2; @randy]. Spin relaxation is negligible in these experiments, so over the entire time of the experiment, the system is constrained to have a fixed polarization $P=(N_\uparrow-N_\downarrow)/(N_\uparrow+N_\downarrow)$. Understanding the structure of (s-wave) superfluidity in this polarized environment is an important endeavor with a long history [@fflo; @old1; @old2; @old3] and direct relevance to neutron stars, thin-film superconductors, and color superconductivity. In this paper we use the concept of surface tension to quantitatively explain controversial features seen in the density profiles of strongly interacting trapped polarized Fermi gases [@randy; @ketterle1; @ketterle2].
The simplest theories of trapped Fermi gases [@theja; @chevy; @new1; @new2; @ho] (most relying on local density approximations \[LDA\] and assuming zero temperature) predict that the atomic cloud phase separates into a central superfluid region, in which the density of both spin species are equal, surrounded by a polarized normal shell [@shells]. This basic structure was observed in two separate experiments [@randy; @ketterle1; @ketterle2], however some experimental details are at odds with these theoretical predictions. For $P>0.1$, the Rice experiments [@randy] find a double peaked axial density difference, $n_d^{(a)}(z)=\int dx\,dy\,[n_\uparrow({\bf
r})-n_\downarrow({\bf r})]$, where $n_{\uparrow/\downarrow}({\bf
r})$ is the density of up and down spin atoms. In a previous paper [@theja], we argued that this structure pointed to a breakdown of the local density approximation, despite the fact that dimensional arguments suggested that the LDA should work well. Conversely, the results of the MIT experiments [@ketterle1; @ketterle2] are fully consistent with a local density approximation, but show a polarization driven superfluid-normal phase transition at $P\sim0.70$. This phase transition was not seen in the Rice experiments and is not found in most theories at unitarity [@theja; @chevy; @new1; @new2]. Here we show that surface tension in the boundary between normal and superfluid regions distorts the cloud in exactly the right way to account for the unusual features seen at Rice. We also show that surface tension plays a much smaller role in the MIT experiments, where the atomic clouds are larger and more spherical, and we are thus able to account for the fact that the MIT experiment is consistent with the local density approximation. Finally, we show that for $P \gtrsim
0.7$, the Rice data shows a sudden drop in surface tension. Since such a drop would be expected if the system underwent a superfluid-normal phase transition, this observation may reconcile the apparent differences in the experiments. We currently lack a quantitative theory of the superfluid-normal phase transition at unitarity.
In this letter we consider the unitary regime, where the scattering length is infinite and the only lengthscale in the problem is the interparticle spacing. Taking a two-shell structure, with a superfluid core and a normal fluid shell, we model the free energy of a trapped gas as $$\label{energy}
F=\int_Sd^3 r f_s[\mu(r),h] +\int_Nd^3 r f_n[\mu(r),h]
+\int_\partial d^2 r \sigma[\mu(r),h],$$ where $\int_{S/N}$ represents the integral over the superfluid/normal region, $\int_\partial$ corresponds to an integral over the boundary, $f_{s/n}=-\int n^{(s)/(n)} d\mu$ represent the free energy density of the superfluid/normal gas and $\sigma$ represents the surface tension in the boundary. The energy densities are a function of the local chemical potentials $\mu(r)=[\mu_\uparrow(r)+\mu_\downarrow(r)]/2=\mu_0-V(r),$ and $h=[\mu_\uparrow(r)-\mu_\downarrow(r)]/2$, where $V(r)=b_{\perp}\rho^2+b_{z}z^2=\frac{1}{2}m\omega^2(\lambda^2\rho^2+z^2)$ is the trapping potential, with $\lambda\approx50$ for the Rice experiments and $\lambda\approx 5$ at MIT. The shape of the boundary, and the parameters $\mu_0$ and $h$, are determined by minimizing eq. (\[energy\]) with respect to the boundary with the constraint that $N_{\uparrow/\downarrow}=\int_S d^3r\
n^{(s)}_{\uparrow/\downarrow}+\int_N d^3r\
n^{(n)}_{\uparrow/\downarrow}$. This approach is a generalization of one used by Chevy [@chevy], where the boundary term was absent. Universality allows us to write the free energy density as $$f_{s,n}(r)=\biggr(-\frac{2}{15 \pi^2}\biggr)
\biggr(\frac{2m}{\hbar^2}\biggr)^{3/2}\zeta_{s,n}
\mu_{s,n}(r)^{5/2},$$ where $\zeta_{s}=1/(1+\beta)^{3/2}$, $\zeta_{n}=1/2$, and $\beta\approx-0.545$ is a universal many-body parameter [@parabeta]. The relevant chemical potentials are $\mu_{s}(r)=\mu(r)$ and $\mu_{n}(r)=\mu_\uparrow(r)\equiv\mu(r)+h$. The density of each spin component is $n_{\uparrow,\downarrow}=-\partial f/\partial\mu$. The fact that the particle spacing is the only length scale constrains the surface tension to have the form $\sigma=(\hbar^2/2m) n_s^{4/3}(r)g(\delta
P/P)$, where $g$ is a function of $\delta P$, the pressure discontinuity across the domain wall, and of $P$, the pressure on the superfluid side of the domain wall; $n_s$ is the density on the superfluid side. Introducing a universal numerical parameter $\eta$, we approximate $g$ by its value at zero pressure drop, $g(0)=\eta$, based on the estimate $\delta P/P< 1.8 \times 10^{-3}$ [@note1].
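The smallness of the relative pressure drop can be reproduced from Laplace's law, $\delta P = 2\sigma/R$, with the typical Rice-geometry values quoted in [@note1] (an illustrative Python sketch):

```python
# Typical values for the Rice geometry, in units with hbar^2/2m = 1 and
# lengths in micrometers, taken from the estimate quoted in the text.
sigma = 5.8e-14   # surface tension, (um)^-4
R = 236.0         # radius of curvature of the domain wall, um
P = 2.7e-13       # pressure on the superfluid side, (um)^-5

dP_over_P = 2 * sigma / (R * P)   # Laplace's law: delta P = 2*sigma/R
print(f"dP/P = {dP_over_P:.1e}")  # ~1.8e-3, matching the bound used above
```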
We determine $\eta$ in two ways. First, as detailed in the appendix, a mean-field theory gradient expansion yields $\eta\approx
0.9 \times 10^{-3}$. Second, we use a fitting scheme where we minimize Eq. (\[energy\]) for a series of candidate $\eta$’s. We find that $\eta=1.0 \times 10^{-3}$ matches the Rice group’s experimental data for the axial density difference at P=0.53. Given the uncontrolled nature of the mean field approximation, we believe that the similarity of the two results is purely coincidental. We use $\eta=1.0 \times 10^{-3}$ for all of our predictions.
To simplify the minimization of Eq. (\[energy\]) with respect to the boundary, we make the ansatz that the boundary is an ellipsoid with semi-axes $\overline{\rho}$ and $\overline{z}$. Within this ansatz we analytically calculate the free energy \[for brevity we omit the expressions\]. We minimize this expression with respect to the parameters $\overline{\rho}$ and $\overline{z}$.
![ TOP: Comparison of the minority and majority component radii with the Rice experiments [@randy]. Squares and crosses are the majority and minority components observed in [@randy], determined by looking for where the density of each component vanishes. The solid line is our theoretical prediction of the radii with zero surface tension, while the dashed line includes finite surface tension. The radii are normalized with a non-interacting Thomas-Fermi radius, $R_{TF}=\sqrt{\epsilon_{f}/b_z}$, where $\epsilon_{f}= \hbar\overline{\omega}(6N)^{1/3}$ with average trap frequency $\overline{\omega}=(\omega_{\perp}^{2}\omega_{z})^{1/3}$ and $N=(N_\uparrow+N_\downarrow)/2.$ Note, both the normalization and the fitting procedure differ from the one used in figure 3 of [@randy]. []{data-label="radii"}](radii3.eps){width="\columnwidth"}
![ LEFT: Axial density difference $n_d^{(a)}(z)=2\pi\,\int
d\rho\,\rho\, [n_{\uparrow}(z,\rho)-n_{\downarrow}(z,\rho)]$ of zero temperature harmonically trapped unitary Fermi gas in units of $[10^6 cm^{-1}]$. Figures (a), (b), and (c) represent polarization $P= 0.14$, 0.53, and 0.72 respectively. The grey points are the experimental data from reference [@randy]. The $P=0.14$ and $P=0.53$ data previously appeared in figure 2 of [@randy]. The $P=0.72$ data corresponds to one of the points in figure 3 of ref [@randy]. The dashed line is the zero surface tension density, while solid line is the finite surface tension density using $\eta=1.0 \times 10^{-3}$. RIGHT: Comparison of the axial density of minority component $n_\downarrow^{(a)}(z)=2\pi\,\int d\rho\,\rho\,
n_{\downarrow}(z,\rho)$.[]{data-label="difdw"}](difdw.eps){width="\columnwidth"}
To estimate the distortions, one expands Eq. (\[energy\]) for small distortions: $\overline{\rho}=\rho_0(1+\delta_{\rho})$, and $\overline{z}=z_0(1+\delta_{z})$, where $\rho_0$ and $z_0$ are the lengths of the axes in the absence of surface tension. We take $\delta_{\rho}$ and $\delta_z$ to be of order $\delta$. Dimensional analysis gives $\delta F/(\hbar^2/2m) \sim A\ \eta n^{4/3}z_0
\rho_0 \delta+B\ n^{5/3}z_0 \rho_0^2 \delta^2$, where $A$ and $B$ are constants. Assuming that $\rho_0$ scales with the radial Thomas-Fermi radius, the size of the distortion is then $\delta \sim
1/(\rho_0n^{1/3}) \sim (\lambda/N)^{1/3}$.
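A rough numerical illustration of this $(\lambda/N)^{1/3}$ scaling (the atom numbers below are placeholders chosen only for illustration, not values taken from the experiments; the aspect ratios are those quoted above):

```python
lam_rice, lam_mit = 50.0, 5.0   # trap aspect ratios lambda (from the text)
N_rice, N_mit = 1e5, 1e7        # ILLUSTRATIVE atom numbers, placeholders only

# Distortion scale delta ~ (lambda/N)^(1/3) for each geometry
delta_rice = (lam_rice / N_rice) ** (1 / 3)
delta_mit = (lam_mit / N_mit) ** (1 / 3)
print(delta_rice / delta_mit)   # ratio of the two distortion scales
```

With these placeholder numbers the Rice distortion scale is an order of magnitude larger than the MIT one, in line with the comparison made later in the text.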
FIG. \[radii\] shows the calculated axial radii as a function of polarization. We compare our predictions to radii that we extract from the data used in FIG. 3 of ref. [@randy]. We extract the radii by fitting the wings of the axial density distributions to a piecewise linear function of the form $n(z)=w\left(1-z/R\right)
\theta (R-z)$, where $\theta(x)=1$ for $x>0$ and $0$ otherwise, and $w$ and $R$ are fitting parameters. This fitting procedure lets us accurately determine the edge of the cloud, while the radii extracted in [@randy] correspond to an average radius, and are systematically larger than those extracted by our method. We see that for $0.1 \lesssim P \lesssim 0.7$ the experimental data is in excellent agreement with the finite surface tension theory. For $P \gtrsim
0.7$, the data used for FIG. \[radii\] appears to be inconsistent with $\eta=1.0 \times 10^{-3}$. We speculate that the deviation may be due to the superfluid-normal phase transition observed at MIT [@ketterle1; @ketterle2]. We caution, however, that there are large fluctuations in the radii seen in ref. [@randy] especially at large P. More work is needed before definitive statements can be made. The disagreement of the radii below $P_c \approx 0.1$ is probably attributable to finite temperature effects [@pcom].
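The edge-fitting procedure can be sketched as follows (illustrative Python on synthetic data, not the experimental analysis code; for fixed $R$ the optimal $w$ is a one-dimensional linear least-squares problem, so one may simply scan $R$ on a grid):

```python
import numpy as np

def edge(z, w, R):
    """Piecewise-linear wing profile: w*(1 - z/R) inside the cloud, 0 outside."""
    return w * (1 - z / R) * (z < R)

# Synthetic wing data for illustration (true parameters w = 2.0, R = 300)
rng = np.random.default_rng(0)
z = np.linspace(200, 400, 201)
n = edge(z, 2.0, 300.0) + 0.02 * rng.normal(size=z.size)

def fit_edge(z, n, R_grid):
    # For each candidate R, the best w follows from linear least squares;
    # keep the (w, R) pair with the smallest residual.
    best = (np.inf, None, None)
    for R in R_grid:
        basis = edge(z, 1.0, R)
        w = basis @ n / (basis @ basis)
        resid = np.sum((n - w * basis) ** 2)
        if resid < best[0]:
            best = (resid, w, R)
    return best[1], best[2]

w_fit, R_fit = fit_edge(z, n, np.linspace(250, 350, 501))
print(w_fit, R_fit)  # close to the true (2.0, 300)
```

The fitted $R$ locates the cloud edge even in the presence of noise, which is what makes this estimator less sensitive than an average-radius definition.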
In FIG. \[difdw\] and \[up\] we compare our predicted axial density profiles with representative data from ref. [@randy]. As demonstrated in the left panel of FIG. \[difdw\], the finite surface tension theory captures the observed double-peak structure in the axial density difference for $P<0.7$. The only free parameter in this calculation is $\eta$, which as previously described we set by fitting to the $P=0.53$ data. A close examination of FIG. \[difdw\](c) reveals that the $P=0.72$ data is not fit quantitatively by either the finite surface tension or zero surface tension theory. As previously discussed, we suspect that the central region may not be superfluid.
As illustrated in FIG. \[up\], surface tension has almost no effect on the axial density of the majority component. The smallness of the effect is to be expected because the discontinuity in $n_\uparrow$ at the domain wall is much smaller than the discontinuity in $n_\downarrow$. Alternate explanations of the double-peaked axial density difference, such as anharmonicities [@zwierlein], would cause distortions in $n_\uparrow$ instead of $n_{\downarrow}$ and are not completely consistent with the experimental data [@replyR].
We also calculated, but do not show here, density profiles for parameters corresponding to the MIT experiments. We find that surface tension has a negligible effect on the density profile, consistent with the fact that $(\lambda/N)^{1/3}$ is 10 times smaller than at Rice.
We wish to emphasize how surprising it is to see surface tension, a phenomenon generally associated with a liquid in a gas, in this setting. This observation opens the possibility of other surface-tension-related effects in cold atoms. In particular, surface tension could have a large effect on collective modes and expansion. We speculate that surface tension should also play a role in the physics of analogous systems, such as nuclear matter at high densities and quark-gluon plasmas.
![ Comparison of the axial majority component density $n_\uparrow^{(a)}(z)=2\pi\,\int d\rho\,\rho\, n_{\uparrow}(z,\rho)$ in units of $[10^6 cm^{-1}]$ with experimental data of reference [@randy]. Figures (a) and (b) represent polarizations $P=0.53$ and $P=0.72$, respectively. Symbols carry the same meanings as in Fig \[difdw\]. Notice that the solid line and the dashed line coincide, indicating that surface tension has no effect on the majority densities.[]{data-label="up"}](up.eps){width="\columnwidth"}
This work was supported by NSF grant PHY-0456261, and by the Alfred P. Sloan Foundation. We are grateful to R. Hulet and W. Li for very enlightening discussions, critical comments, and for providing us with their experimental data. We also thank M. Zwierlein for providing us with their latest results [@ketterle2], and for insightful critical comments. We thank T.-L. Ho for critical comments.
**APPENDIX**: In this appendix we calculate the domain wall energy at the superfluid-normal interface by applying a gradient expansion to mean field theory. The domain wall energy in this approximation is given by $E(l)=\int^{l/2}_{-l/2}dx[\gamma\mid\partial_{x}\Delta\mid^2+E_{bcs}(\Delta,h,\mu)-E_{n}(h,\mu)]$, where $l$, the size of the domain wall, will be determined variationally, along with $\Delta(x)$, the superfluid order parameter. The energies in the superfluid and normal phases are given by [@old2],
$$\begin{aligned}
E_{bcs}(\Delta,h,\mu)=\frac{1}{2\pi^2}\int^{k_{+}}_{k_{-}} k^2dk
(-h+E_k)\\ \nonumber+\frac{1}{2\pi^2}\int
k^2dk\left[\epsilon_k-E_k+\frac{\Delta^2m}{\hbar^2k^2}\right]
-\frac{\Delta^2m}{4\pi\hbar^2a_s}\end{aligned}$$
$$\begin{aligned}
\label{Nenergy}
E_n(h,\mu)=-\frac{1}{15\pi^2}\biggr(\frac{2m}{\hbar^2}\biggr)^{\frac{3}{2}}[(\mu+h)^{\frac{5}{2}}+(\mu-h)^{\frac{5}{2}}]\end{aligned}$$
where $k_{\pm}(\vec{r})=(\pm\sqrt{h^2-\Delta^2}+\mu)^{1/2}$, $E_k^2=\epsilon_k^2+\Delta^2$, $\epsilon_k=\hbar^2k^2/2m -\mu$ and, $a_s$ is the s-wave scattering length.
In order to calculate the coefficient of the gradient term $\gamma$, we begin with the action $S=S_0+S_{int}$, where the free fermions action is $S_0=\sum_{\sigma} \int_0^{\beta}d\tau\int d^3\vec{r}
\psi_{r\sigma}^{\dagger}[\partial_{\tau}-\mu_{\sigma}+\hbar^2\nabla^2/2m]\psi_{r\sigma}$, the interaction is $S_{int}=-U\int_0^{\beta}d\tau\int
d^3\vec{r}\psi_{r\uparrow}^{\dagger}\psi_{r\downarrow}^{\dagger}\psi_{r\downarrow}\psi_{r\uparrow}$, the atomic Fermi fields are $\psi_{r\sigma}$, imaginary time is $\tau$, the inverse temperature is $\beta=1/T$ and the attractive interaction between Fermi atoms is $-U$ with $U\geq 0$. After the usual Hubbard-Stratonovich decoupling of the interaction term [@book] and integrating out the Fermi fields, the partition function is written as $\emph{Z}=\int \emph{D}\Delta
\emph{D}\Delta^{\ast}\textit{exp}[-S_{eff}(\Delta,\Delta^{\ast})]$, where $S_{eff}(\Delta,\Delta^{\ast})=\sum_{q,n}[A(q,\omega_n)|\Delta(q)|^2+B(q,\omega)|\Delta(q)|^4+...]$ and $A(q,\omega_n)=(1/U-T\sum_n\int
d^3\vec{k}/(2\pi)^3G_{\uparrow}(k+q/2,\omega_m)G_{\downarrow}(-k+q/2,\omega_m+\omega_n))$ with $G_{\sigma}(k,\omega)^{-1}=\imath\omega-\hbar^2k^2/2m+\mu_{\sigma}$. We assume that the dominant momentum dependence comes from the term which is lowest order in the superfluid order parameter. We sum over Matsubara frequencies, defining $A(q)=\sum_n A(q,\omega_n)$ with $\omega_n=(2n+1)\pi T$. In order to suppress the ultraviolet divergences in the theory, we regularize [@car] the interaction with the s-wave scattering length by $1/U=m/(4\pi\hbar^2a_s)+\int d^3\vec{k}/(2\pi)^3\, m/(\hbar^2k^2)$. We then expand $A(q)$ to second order in $q$ and take the zero temperature limit, finding $A(q)=m/(4\pi \hbar^2a_s)-m\sqrt{\mu}/(4\pi
\hbar^2)+mq^2/(32\pi\hbar^2\sqrt{\mu})+{\cal O}(q^4)$, which means that $\gamma=m/(32\pi\hbar^2\sqrt{\mu})$.
![The value of $\eta$ as a function of $(k_fa_s)^{-1}$ in zero temperature BCS approximation. At unitarity $\eta=0.0009$ is independent of the density. The Fermi wave vector $k_f$ is defined as $k_f^3=3\pi^2n_{s}$.[]{data-label="eta"}](eta.eps){width="\columnwidth"}
Taking the ansatz $\Delta(x)=(\Delta_0/2)[\tanh(x/l)+1]$, where $\Delta_0$ is the value of $\Delta$ on the superfluid side of the domain wall, we numerically minimize the surface energy $E(l)$ with respect to $l$ to find the size of the domain wall $l_m$ and the domain wall energy. We find $l_m<k_f^{-1}$, supporting our treatment of the domain wall as very thin, but calling into question the validity of our gradient expansion. Note that we do not expand $E_{bcs}$ in powers of $\Delta$, but work with the exact expression. Since we are considering a domain wall between a region where $\Delta=0$ and one where $\Delta\approx{\cal O}(E_f)$, any expansion in $\Delta$ would require going to high order; to even capture the topology of the free energy surface, one must expand to sixth order. Thus, previous calculations of surface tension, such as Caldas’s recent work [@caldas], which are based on fourth order expansions, are not relevant to the physics described here. By repeating this calculation at different $a_s$, and solving the BCS number and gap equations [@theja], we find the quantity $\eta=2mE(l_{m})/(\hbar^2n_{s}^{4/3})$ as a function of $a_s$ and $n_{s}$, where $n_{s}$ is the density at the superfluid-normal interface. In the limit $a_s\rightarrow \infty$, we find $\eta=0.9 \times 10^{-3}$, independent of the density and polarization. However, as seen in FIG. \[eta\], $\eta$ has a density dependence away from unitarity. As $a_s\rightarrow 0^+$, $\eta$ grows larger, hence the effects of surface tension are stronger. Therefore, in the strong BEC regime, domain walls become energetically prohibitive and the phase separated atomic system is unstable against phase coexistence [@old2; @theja; @new1]. Recent theoretical work by Imambekov *et al.* [@adilet] studied the role of gradient terms in this deep BEC limit.
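The variational step can be illustrated with a toy profile energy in which the gradient term contributes $\sim A/l$ and the condensation-energy cost of the wall region $\sim Bl$ (the coefficients below are placeholders, not the mean-field values; the analytic minimum of $A/l + Bl$ sits at $l=\sqrt{A/B}$):

```python
# Toy variational energy for a domain wall of width l: the gradient term
# scales as A/l while the condensation-energy cost scales as B*l.
# A and B are ILLUSTRATIVE coefficients, not the mean-field values.
A, B = 1.0, 4.0

def E(l):
    return A / l + B * l

# Scan trial widths and keep the minimizer, mimicking the numerical
# minimization of E(l) described in the text.
widths = [0.05 + 0.0001 * k for k in range(30000)]
l_min = min(widths, key=E)
print(l_min)  # grid minimum lands at the analytic value sqrt(A/B) = 0.5
```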
The value of $\eta$ obtained from fitting to experimental data agrees well with our mean field calculation. We believe that this agreement is coincidental, as mean field theory is not expected to work well at unitarity. We also note that the experiment is performed slightly away from the resonance, where the mean field approximation predicts a weak density dependence of $\eta$ [@note2].
K. M. O’Hara et al., Science 298, 2179 (2002); M. Greiner, C. A. Regal, and D. S. Jin, Nature 426, 537 (2003); S. Jochim et al., Science 302, 2101 (2003); M. W. Zwierlein et al., Phys. Rev. Lett. 91, 250401 (2003); T. Bourdel et al., Phys. Rev. Lett. 93, 050401 (2004); C. Chin et al., Science 305, 1128 (2004); J. Kinast et al., Science 307, 1296 (2005); J. Kinast et al., Phys. Rev. Lett. 92, 150402 (2004); M. Bartenstein et al., Phys. Rev. Lett. 92, 203201 (2004); M. W. Zwierlein et al., Nature 435, 1047-1051 (2005); G. B. Partridge et al., Phys. Rev. Lett. 95, 020404 (2005).
Martin W. Zwierlein, André Schirotzek, Christian H. Schunck, Wolfgang Ketterle, Science, **311**, 492 (2006).
Martin W. Zwierlein, Christian H. Schunck, André Schirotzek, and Wolfgang Ketterle, unpublished.
Guthrie B. Partridge, Wenhui Li, Ramsey I. Kamar, Yean-an Liao, and Randall G. Hulet, Science, **311**, 503 (2006).
P. Fulde and R. A. Ferrell, Phys. Rev. **135**, A550 (1964) and A.I. Larkin and Yu.N. Ovchinnikov, Zh. Eksp. Teor. Fiz 47, 1136 (1964) \[Sov. Phys. JETP 20, 762 (1965)\]; G. Sarma, J. Phys. Chem. Solids **24**, 1029 (1963).
R. Combescot, Europhys. Lett. **55**, 150 (2001); H. Caldas, Phys. Rev. A **69**, 063602 (2004); A. Sedrakian, J. Mur-Petit, A. Polls, and H. Müther, Phys. Rev. A **72**, 013613 (2005); U. Lombardo, P. Nozières, P. Schuck, H.-J. Schulze, and A. Sedrakian, Phys. Rev. C **64**, 064314 (2001); D.T. Son and M.A. Stephanov, Phys. Rev. A **72**, 013614 (2006); L. He, M. Jin and P. Zhuang, Phys. Rev. B **73**, 214527 (2006); P. F. Bedaque, H. Caldas, and G. Rupak, Phys. Rev. Lett. **91**, 247002 (2003).
C.-H. Pao, Shin-Tza Wu, and S.-K. Yip, Phys. Rev. B **73**, 132506 (2006); D.E. Sheehy and L. Radzihovsky, Phys. Rev. Lett. **96**, 060401 (2006); W.V. Liu and F. Wilczek, Phys. Rev. Lett. **90**, 047002 (2003).
J. Carlson and S. Reddy, Phys. Rev. Lett. **95**, 060401 (2005); P. Castorina, M. Grasso, M. Oertel, M. Urban, and D. Zappalà, Phys. Rev. A **72**, 025601 (2005); T. Mizushima, K. Machida, and M. Ichioka, Phys. Rev. Lett. **94**, 060404 (2005).
Theja N. De Silva, Erich J. Mueller, Phys. Rev. A 73, 051602(R) (2006), preprint cond-mat/0601314.
F. Chevy, Phys. Rev. Lett. 96, 130401 (2006).
P. Pieri, and G.C. Strinati, Phys. Rev. Lett. **96**, 150404 (2006); W. Yi, and L. -M. Duan, Phys. Rev. A 73, 031604(R) (2006).
J. Kinnunen, L. M. Jensen, and P. Torma, Phys. Rev. Lett. 96, 110403 (2006); M. Haque and H. T. C. Stoof, preprint, cond-mat/0601321; Zheng-Cheng Gu, Geoff Warner and Fei Zhou, preprint cond-mat/0603091; M. Iskin and C. A. R. Sa de Melo, preprint cond-mat/0604184; T. Paananen, J.-P. Martikainen, P. Torma, Phys. Rev. A **73**, 053606 (2006); C.-H. Pao, S.-K. Yip, J. Phys. Condens. Matter **18**, 5567 (2006); W. Yi, L.-M. Duan, Phys. Rev. A **74**, 013610 (2006); M. Mannarelli, G. Nardulli, and M. Ruggieri, preprint, cond-mat/0604579.
Tin-Lun Ho and Hui Zhai, preprint cond-mat/0602568.
The superfluid and normal regions may be further subdivided, depending on interaction parameters. See [@theja].
J. Carlson, S.-Y. Chang, V. R. Pandharipande, and K. E. Schmidt, Phys. Rev. Lett. 91, 050401 (2003).
Laplace’s law gives $\delta P/P = 2\sigma/(RP)$, where $R$ is the radius of curvature of the domain wall. Typical values from [@randy] are $\sigma/(\hbar^2/2m) \sim 5.8 \times 10^{-14}$ ($\mu$m)$^{-4}$, $R \sim 236$ $\mu$m, and $P/(\hbar^2/2m) \sim 2.7 \times 10^{-13}$ ($\mu$m)$^{-5}$.
R. Hulet, Private communications.
M. Zwierlein and W. Ketterle, preprint cond-mat/0603489.
Guthrie B. Partridge, Wenhui Li, Ramsey I. Kamar, Yean-An Liao, Randall G. Hulet, preprint cond-mat/0605581.
For example see, Functional Integrals and Collective Excitations by V. N Popov, Cambridge University Press, 1987. C. A. R. Sa de Melo, M. Randeria, and J. R. Engelbrecht, Phys. Rev. Lett. 71, 3202 (1993).
H. Caldas, preprint cond-mat/0601148.
A. Imambekov, C. J. Bolech, M. Lukin, and E. Demler, preprint cond-mat/0604423.
The Rice experiment has a characteristic density corresponding to $(k_fa_s)^{-1}\sim 0.09$.
---
abstract: 'In a recent paper, Parikh and Boyd describe a method for solving a convex optimization problem, where each iteration involves evaluating a proximal operator and projection onto a subspace. In this paper we address the critical practical issues of how to select the proximal parameter in each iteration, and how to scale the original problem variables, so as to achieve reliable practical performance. The resulting method has been implemented as an open-source software package called POGS (Proximal Graph Solver), that targets multi-core and GPU-based systems, and has been tested on a wide variety of practical problems. Numerical results show that POGS can solve very large problems (with, say, more than a billion coefficients in the data), to modest accuracy in a few tens of seconds. As just one example, a radiation treatment planning problem with around 100 million coefficients in the data can be solved in a few seconds, as compared to around one hour with an interior-point method.'
author:
- Christopher Fougner
- Stephen Boyd
bibliography:
- 'pogs.bib'
title: |
Parameter Selection and Pre-Conditioning\
for a Graph Form Solver
---
Introduction
============
We consider the convex optimization problem $$\begin{aligned}
\begin{aligned}
&\mini
& & f(y) + g(x) \\
& \text{subject to}
& & y = A x,
\end{aligned} \label{eq:gf_pri}\end{aligned}$$ where $x\in \reals^n$ and $y \in \reals^{m}$ are the variables, and the (extended-real-valued) functions $f: \reals^{m} \to \reals \cup \{\infty\}$ and $g: \reals^{n} \to \reals \cup \{\infty\}$ are convex, closed and proper. The matrix $A \in \reals^{m \times n}$, and the functions $f$ and $g$ are the problem data. Infinite values of $f$ and $g$ allow us to encode convex constraints on $x$ and $y$, since any feasible point $(x,y)$ must satisfy $$x \in \{x \mid g(x) < \infty\}, \qquad
y \in \{y \mid f(y) < \infty\}.$$ We will be interested in the case when $f$ and $g$ have simple proximal operators, but for now we do not make this assumption. The problem form (\[eq:gf\_pri\]) is known as *graph form* [@parikh2013block], since the variable $(x, y)$ is constrained to lie in the graph $\mathcal G = \{(x,y) \in \reals^{n + m}~|~y = Ax\}$ of $A$. We denote $p^\star$ as the optimal value of (\[eq:gf\_pri\]), which we assume is finite.
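As a concrete toy illustration (our own example with hypothetical names, not from the paper), the lasso problem $\mathrm{minimize}~(1/2)\|Ax-b\|_2^2 + \lambda\|x\|_1$ fits the graph form with $f(y) = (1/2)\|y - b\|_2^2$ and $g(x) = \lambda\|x\|_1$; the sketch below simply evaluates the split objective at a feasible point:

```python
import numpy as np

# Hypothetical sketch: the lasso written in graph form, f(y) + g(x) with y = Ax.
def f(y, b):
    return 0.5 * np.sum((y - b) ** 2)   # f(y) = (1/2)||y - b||_2^2

def g(x, lam):
    return lam * np.sum(np.abs(x))      # g(x) = lambda * ||x||_1

def graph_form_objective(A, b, lam, x):
    y = A @ x                           # feasibility: (x, y) lies on the graph of A
    return f(y, b) + g(x, lam)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([1.0, 1.0])
print(graph_form_objective(A, b, lam=0.1, x=np.zeros(2)))  # objective at x = 0 is 1.0
```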
The graph form includes a large range of convex problems, including linear and quadratic programming, general conic programming [@BoV:04 §11.6], and many more specific applications such as logistic regression with various regularizers, support vector machine fitting [@hastie2009elements], portfolio optimization [@BoV:04 §4.4.1] [@Glowinski1975] [@boyd2013performance], and radiation treatment planning [@olafsson2006efficient], to name just a few.
In [@parikh2013block], Parikh and Boyd described an operator splitting method for solving the graph form problem (\[eq:gf\_pri\]), based on the alternating direction method of multipliers (ADMM) [@boyd2011distributed]. Each iteration of this method requires a projection (either exactly or approximately via an iterative method) onto the graph $\mathcal G$, and evaluation of the proximal operators of $f$ and $g$. Theoretical convergence was established in that paper, and basic implementations demonstrated. However, it has been observed that practical convergence of the algorithm depends very much on the choice of algorithm parameters (such as the proximal parameter $\rho$), and scaling of the variables (i.e., pre-conditioning).
The purpose of this paper is to explore these issues, and to add some critical variations on the algorithm that make it a relatively robust general purpose solver, at least for modest accuracy levels. The algorithm we propose, which is the same as the basic method described in [@parikh2013block], with modified parameter selection, diagonal pre-conditioning, and modified stopping criterion, has been implemented in an open-source software project called POGS (for **P**r**o**ximal **G**raph **S**olver), and tested on a wide variety of problems. Our CUDA implementation reliably solves (to modest accuracy) problems $1000 \times$ larger than those that can be handled by interior-point methods; and for those that can be handled by interior-point methods, $100\times$ faster. As a single example, a radiation treatment planning problem with more than 100 million coefficients in $A$ can be solved in a few seconds; the same problem takes around one hour to solve using an interior-point method.
Outline
-------
In §\[sec:rel\_work\] we describe related work. In §\[sec:opt\_dual\] we derive the graph form dual problem, and the primal-dual optimality conditions, which we use to motivate the stopping criterion and to interpret the iterates of the algorithm. In §\[sec:alg\] we describe the ADMM-based graph form algorithm, and analyze the properties of its iterates, giving some results that did not appear in [@parikh2013block]. In §\[sec:precond\_param\] we address the topic of pre-conditioning, and suggest novel pre-conditioning and parameter selection techniques. In §\[sec:impl\] we describe our implementation POGS, and in §\[sec:numres\] we report performance results on various problem families.
Related work {#sec:rel_work}
------------
Many generic methods can be used to solve the graph form problem (\[eq:gf\_pri\]), including projected gradient descent [@calamai1987projected], projected subgradient methods [@polyak1987introduction Chap. 5] [@shor1998nondifferentiable], operator splitting methods [@lions1979splitting] [@eckstein2008family], interior-point methods [@wrightnocedal1999numerical Chap. 19] [@ben2001lectures Chap. 6] and many more. (Of course many of these methods can only be used when additional assumptions are made on $f$ and $g$, e.g., differentiability or strong convexity.) For example, if $f$ and $g$ are separable and smooth (or have smooth barrier functions for their epigraphs), the problem (\[eq:gf\_pri\]) can be solved by an interior-point method, which in practice typically takes no more than a few tens of iterations, with each iteration involving the solution of a system of linear equations that requires $O(\max\{m,n\}\min\{m,n\}^2)$ flops when $A$ is dense [@BoV:04 Chap. 11][@wrightnocedal1999numerical Chap. 19].
We now turn to first-order methods for the graph form problem (\[eq:gf\_pri\]). In [@o2014primal] O’Connor and Vandenberghe propose a primal-dual method for the graph form problem where $A$ is the sum of two structured matrices. They contrast it with methods such as Spingarn’s method of partial inverses [@spingarn1985applications], Douglas-Rachford splitting [@douglas1956numerical], and the Chambolle-Pock method [@chambolle2011first].
Davis and Yin [@davis2014convergence] analyze convergence rates for different operator splitting methods, and in [@giselsson2015tight] Giselsson proves the tightness of linear convergence for the operator splitting problems considered [@giselsson2014metric]. Goldstein et al. [@goldstein2014fast] derive Nesterov-type acceleration, and show $O(1/k^2)$ convergence for problems where $f$ and $g$ are both strongly convex.
Nishihara et al. [@nishihara2015general] introduce a parameter selection framework for ADMM with over relaxation [@eckstein1992douglas]. The framework is based on solving a fixed-size semidefinite program (SDP). They also make the assumption that $f$ is strongly convex. Ghadimi et al. [@ghadimi2013optimal] derive optimal parameter choices for the case when $f$ and $g$ are both quadratic. In [@giselsson2014metric] Giselsson and Boyd show how to choose metrics to optimize the convergence bound, and in [@giselsson2014diagonal] Giselsson and Boyd suggest a diagonal pre-conditioning scheme for graph form problems based on semidefinite programming. This scheme is primarily relevant in small to medium scale problems, or situations where many different graph form problems, with the same matrix $A$, are to be solved.
It is clear from these papers (and it is indeed a general rule) that the practical convergence of first-order methods depends heavily on algorithm parameter choices. All of these papers make additional assumptions about the objective, which we do not.
GPUs are used extensively for training neural networks [@ngiam2011optimization; @ciresan2011flexible; @krizhevsky2012imagenet; @coates2013deep], and they are slowly gaining popularity in convex optimization as well [@pock2011diagonal; @chu2013primal; @wang2014bregman].
Optimality conditions and duality {#sec:opt_dual}
=================================
Dual graph form problem
-----------------------
The Lagrange dual function of (\[eq:gf\_pri\]) is given by $$\inf_{x,y} f(y) + g(x) + \nu^T(Ax - y) = - f^*(\nu) -g^*(-A^T\nu)$$ where $\nu\in \mathbf{R}^m$ is the dual variable associated with the equality constraint, and $f^*$ and $g^*$ are the conjugate functions of $f$ and $g$ respectively [@BoV:04 Chap. 4]. Introducing the variable $\mu = -A^T\nu$, we can write the dual problem as $$\begin{aligned}
\begin{aligned}
&\maxi
& & -f^*(\nu) - g^*(\mu) \\
& \text{subject to}
& & \mu = -A^T \nu.
\end{aligned} \label{eq:gf_dual}\end{aligned}$$ The dual problem can be written as a graph form problem if we negate the objective and minimize rather than maximize. The dual graph form problem (\[eq:gf\_dual\]) is related to the primal graph form problem (\[eq:gf\_pri\]) by switching the roles of the variables, replacing the objective function terms with their conjugates, and replacing $A$ with $-A^T$.
The primal and dual objectives are $p(x,y) = f(y) + g(x)$ and $d(\mu,\nu) = -f^*(\nu) - g^*(\mu)$ respectively, giving us the duality gap $$\begin{aligned}
\eta = p(x,y) - d(\mu,\nu) = f(y) + f^*(\nu) + g(x) + g^*(\mu). \label{eq:gap}\end{aligned}$$ We have $\eta\geq 0$, for any primal and dual feasible tuple $(x, y, \mu, \nu)$. The duality gap $\eta$ gives a bound on the suboptimality of $(x,y)$ (for the primal problem) and also $(\mu,\nu)$ for the dual problem: $$f(y)+g(x) \leq p^\star + \eta, \qquad
-f^*(\nu)-g^*(\mu) \geq p^\star - \eta.$$
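For completeness (a short verification we supply, consistent with the definitions above), the claim $\eta \geq 0$ follows from applying the Fenchel–Young inequality to each pair of terms, together with primal and dual feasibility:

```latex
% Fenchel--Young: f(y) + f^*(\nu) \ge \nu^T y and g(x) + g^*(\mu) \ge \mu^T x.
% Adding the two inequalities and using y = Ax and \mu = -A^T\nu:
\eta = f(y) + f^*(\nu) + g(x) + g^*(\mu)
     \;\ge\; \nu^T y + \mu^T x
     \;=\; \nu^T A x - \nu^T A x \;=\; 0.
```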
Optimality conditions
---------------------
The optimality conditions for (\[eq:gf\_pri\]) are readily derived from the dual problem. The tuple $(x, y, \mu, \nu)$ satisfies the following three conditions if and only if it is optimal.
*Primal feasibility:* $$\begin{aligned}
y = Ax. \label{eq:pri_feas}\end{aligned}$$ *Dual feasibility:* $$\begin{aligned}
\mu = -A^T\nu. \label{eq:dual_feas}\end{aligned}$$ *Zero gap:* $$\begin{aligned}
f(y) + f^*(\nu) + g(x) + g^*(\mu) = 0. \label{eq:gap_opt}\end{aligned}$$
If both (\[eq:pri\_feas\]) and (\[eq:dual\_feas\]) hold, then the zero gap condition (\[eq:gap\_opt\]) can be replaced by the Fenchel equalities $$\begin{aligned}
f(y) + f^*(\nu) = \nu^Ty, \quad g(x) + g^*(\mu) = \mu^Tx. \label{eq:fen_eq}\end{aligned}$$ We refer to a tuple $(x,y,\mu,\nu)$ that satisfies (\[eq:fen\_eq\]) as *Fenchel feasible*. To verify the statement, we add the two equations in (\[eq:fen\_eq\]), which yields $$\begin{aligned}
f(y) + f^*(\nu) + g(x) + g^*(\mu) = y^T\nu + x^T\mu = (Ax)^T\nu -x^T A^T\nu = 0. \end{aligned}$$ The Fenchel equalities (\[eq:fen\_eq\]) are also equivalent to $$\begin{aligned}
\nu \in \partial f(y), \quad \mu \in \partial g(x), \label{eq:fen_sg} %\quad x \in \partial g^*(\mu), \quad y \in \partial f^*(\nu),\end{aligned}$$ where $\partial$ denotes the subdifferential, which follows because $$\nu \in \partial f(y) \Leftrightarrow \sup_z\left(z^T\nu - f(z) \right) = \nu^Ty - f(y) \Leftrightarrow f(y) + f^*(\nu) = \nu^Ty.$$
In the sequel we will assume that strong duality holds, meaning that there exists a tuple $(x^\star, y^\star, \mu^\star, \nu^\star)$ which satisfies all three optimality conditions.
Algorithm {#sec:alg}
=========
Graph projection splitting
--------------------------
In [@parikh2013block] Parikh et al. apply ADMM [@boyd2011distributed §5] to the problem of minimizing $f(y)+g(x)$, subject to the constraint $(x,y)\in \mathcal G$. This yields the *graph projection splitting* algorithm \[alg:admm\_pri\].
Initialize $(x^0, y^0, \tilde x^0, \tilde y^0) =0, ~k=0$ $(x^{k+1/2}, ~y^{k+1/2}) := \big(\mathbf{prox}_{g}(x^k-\tilde x^{k}),~ \mathbf{prox}_{f}(y^k-\tilde y^{k})\big)$ $(x^{k+1}, y^{k+1}) := \Pi(x^{k+1/2}+\tilde x^{k}, ~y^{k+1/2}+\tilde y^{k})$ $(\tilde x^{k+1}, \tilde y^{k+1}) := (\tilde x^{k} + x^{k+1/2} - x^{k+1}, ~\tilde y^{k} + y^{k+1/2} - y^{k+1})$ $k := k +1$
The variable $k$ is the iteration counter, $x^{k+1}, x^{k+1/2} \in \mathbf{R}^{n}$ and $y^{k+1}, y^{k+1/2} \in \mathbf{R}^{m}$ are primal variables, $\tilde x^{k+1} \in \mathbf{R}^{n}$ and $\tilde y^{k+1} \in \mathbf{R}^{m}$ are scaled dual variables, $\Pi$ denotes the (Euclidean) projection onto the graph $\mathcal G$, $$\mathbf{prox}_{f}(v) = \argmin_{y}\Big(f(y) + (\rho/2) \vnorm{y - v}_2^2\Big)$$ is the proximal operator of $f$ (and similarly for $g$), and $\rho>0$ is the proximal parameter.
\Pi(c,d)=
K^{-1}\begin{bmatrix}
c + A^Td \\ 0
\end{bmatrix}, \qquad K = \begin{bmatrix}
I & A^T \\
A & -I
\end{bmatrix}.
\label{eq:proj}\end{aligned}$$
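To make the iteration concrete, the following is a minimal dense-matrix sketch (our own, with a fixed $\rho$ and a fixed iteration count rather than the extensions discussed below) of graph projection splitting for the lasso, with $f(y) = (1/2)\|y-b\|_2^2$ and $g(x) = \lambda\|x\|_1$; the projection uses the block-elimination form of (\[eq:proj\]):

```python
import numpy as np

# Sketch of graph projection splitting for the lasso (names and defaults ours).
def graph_projection_splitting(A, b, lam, rho=1.0, iters=500):
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    xt, yt = np.zeros(n), np.zeros(m)
    P = np.linalg.inv(A.T @ A + np.eye(n))      # cached for the projection step
    for _ in range(iters):
        # proximal steps: soft-thresholding for g, a scalar quadratic for f
        v = x - xt
        xh = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        w = y - yt
        yh = (rho * w + b) / (rho + 1.0)
        # projection onto the graph {(x, y) | y = Ax}
        c, d = xh + xt, yh + yt
        x = P @ (c + A.T @ d)
        y = A @ x
        # scaled dual updates
        xt = xt + xh - x
        yt = yt + yh - y
    return x
```

With $\lambda = 0$ the iterates converge to the least-squares solution, which gives a simple correctness check.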
Roughly speaking, in steps 3 and 5, the $x$ (and $\tilde x$) and $y$ (and $\tilde y$) variables do not mix; the computations can be carried out in parallel. The projection step 4 mixes the $x,\tilde x$ and $y,\tilde y$ variables.
General convergence theory for ADMM [@boyd2011distributed §3.2] guarantees that (with our assumption on the existence of a solution) $$\begin{aligned}
(x^{k+1},y^{k+1}) - (x^{k+1/2}, y^{k+1/2}) \to 0, \quad f(y^k) +g(x^k) \to p^\star, \quad (\tilde x^k,\tilde y^k) \to (\tilde x^\star, \tilde y^\star), \label{eq:conv_theo}\end{aligned}$$ as $k \to \infty$.
Extensions
----------
We discuss three common extensions that can be used to speed up convergence in practice: over-relaxation, approximate projection, and varying penalty.
#### Over-relaxation.
Replacing $x^{k+1/2}$ by $\alpha
x^{k+1/2}+(1-\alpha)x^{k}$ in the projection and dual update steps is known as over-relaxation if $\alpha > 1$ or under-relaxation if $\alpha < 1$. The algorithm is guaranteed to converge [@eckstein1992douglas] for any $\alpha \in (0,2)$; it is observed in practice [@o2013splitting] [@annergren2012admm] that using an over-relaxation parameter in the range \[1.5, 1.8\] can improve practical convergence.
#### Approximate projection.
Instead of computing the projection $\Pi$ exactly one can use an approximation $\tilde \Pi$, with the only restriction that $$\textstyle{\sum}_{k = 0}^\infty\|\Pi(x^{k+1/2}, y^{k+1/2}) - \tilde \Pi(x^{k+1/2}, y^{k+1/2})\|_2 < \infty.$$ This is known as approximate projection [@o2013splitting]. This extension is particularly useful if the approximate projection is computed using an indirect or iterative method.
#### Varying penalty.
Large values of $\rho$ tend to encourage primal feasibility, while small values tend to encourage dual feasibility [@boyd2011distributed §3.4.1]. A common approach is to adjust or vary $\rho$ in each iteration, so that the primal and dual residuals are (roughly) balanced in magnitude. When doing so, it is important to re-scale $(\tilde x^{k+1}, \tilde y^{k+1})$ by a factor $\rho^{k}/\rho^{k+1}$.
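A minimal sketch of such a residual-balancing update (the thresholds $\mu = 10$ and $\tau = 2$ follow a common choice in the ADMM literature; the function name and interface are ours):

```python
def update_rho(rho, r_pri, r_dual, xt, yt, mu=10.0, tau=2.0):
    # Increase rho when the primal residual dominates, decrease it when the
    # dual residual dominates; re-scale the scaled duals by rho_old/rho_new.
    if r_pri > mu * r_dual:
        rho_new = tau * rho
    elif r_dual > mu * r_pri:
        rho_new = rho / tau
    else:
        rho_new = rho
    scale = rho / rho_new
    return rho_new, [u * scale for u in xt], [u * scale for u in yt]
```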
Feasible iterates {#sec:feas_vars}
-----------------
In each iteration, algorithm \[alg:admm\_pri\] produces sets of points that are either primal, dual, or Fenchel feasible. Define $$\begin{aligned}
\mu^{k} = -\rho\tilde x^k, \quad \nu^{k} = -\rho\tilde y^k, \quad \mu^{k+1/2} = -\rho(x^{k+1/2} - x^k + \tilde x^k), \quad \nu^{k+1/2} = -\rho (y^{k+1/2} - y^k + \tilde y^k). \end{aligned}$$ The following statements hold.
1. The pair $(x^{k+1}, y^{k+1})$ is primal feasible, since it is the projection onto the graph $\mathcal G$.
2. \[enum:fen\] The pair $(\mu^{k+1}, \nu^{k+1})$ is dual feasible, as long as $(\mu^{0}, \nu^{0})$ is dual feasible and $(x^0, y^0)$ is primal feasible. Dual feasibility implies $\mu^{k+1} +A^T \nu^{k+1}=0$, which we show using the update equations in algorithm \[alg:admm\_pri\]: $$\begin{aligned}
\mu^{k+1} +A^T \nu^{k+1} &= -\rho(\tilde x^{k} + x^{k+1/2} - x^{k+1} +A^T ( \tilde y^{k} + y^{k+1/2} - y^{k+1} )) \\
&= -\rho(\tilde x^{k} + A^T\tilde y^k + x^{k+1/2} +A^Ty^{k+1/2} - (I+A^TA)x^{k+1}),\end{aligned}$$ where we substituted $y^{k+1} = Ax^{k+1}$. From the projection operator in (\[eq:proj\]) it follows that $(I+A^TA)x^{k+1} = x^{k+1/2} + A^Ty^{k+1/2}$, therefore $$\mu^{k+1} +A^T \nu^{k+1} = -\rho(\tilde x^{k} + A^T\tilde y^k) = \mu^{k} +A^T \nu^{k} = \mu^0 + A^T\nu^0,$$ where the last equality follows from an inductive argument. Since we made the assumption that $ (\mu^{0},\nu^{0})$ is dual feasible, we can conclude that $(\mu^{k+1}, \nu^{k+1})$ is also dual feasible.
3. The tuple $(x^{k+1/2}, y^{k+1/2}, \mu^{k+1/2}, \nu^{k+1/2})$ is Fenchel feasible. From the definition of the proximal operator, $$\begin{aligned}
x^{k+1/2} = \argmin_{x}\Big(g(x) + (\rho/2) \vnorm{x - x^k + \tilde x^k}_2^2\Big) &\Leftrightarrow 0 \in \partial g(x^{k+1/2}) + \rho(x^{k+1/2} - x^k + \tilde x^k) \\
&\Leftrightarrow \mu^{k+1/2} \in \partial g(x^{k+1/2}).\end{aligned}$$ By the same argument $ \nu^{k+1/2} \in \partial f(y^{k+1/2})$.
Applying the results in (\[eq:conv\_theo\]) to the dual variables, we find $\nu^{k+1/2} \to \nu^\star $ and $\mu^{k+1/2} \to \mu^\star$, from which we conclude that $(x^{k+1/2}, y^{k+1/2}, \mu^{k+1/2}, \nu^{k+1/2})$ is primal and dual feasible in the limit.
Stopping criteria
-----------------
In §\[sec:feas\_vars\] we noted that either (\[eq:pri\_feas\], \[eq:dual\_feas\], \[eq:gap\_opt\]) or (\[eq:pri\_feas\], \[eq:dual\_feas\], \[eq:fen\_eq\]) is sufficient for optimality. We present two different stopping criteria based on these conditions.
#### Residual based stopping. {#sec:res_stop}
The tuple $(x^{k+1/2}, y^{k+1/2}, \mu^{k+1/2}, \nu^{k+1/2})$ is Fenchel feasible in each iteration, but only primal and dual feasible in the limit. Accordingly, we propose the residual based stopping criterion $$\begin{aligned}
\|Ax^{k+1/2} - y^{k+1/2}\|_2 \leq \epsilon^{\text{pri}}, \quad \|A^T\nu^{k+1/2} + \mu^{k+1/2}\|_2 \leq \epsilon^{\text{dual}}, \label{eq:res_stop}\end{aligned}$$ where $\epsilon^\text{pri}$ and $\epsilon^\text{dual}$ are positive tolerances. These should be chosen as a mixture of absolute and relative tolerances, such as $$\epsilon^\text{pri} = \epsilon^{\text{abs}} + \epsilon^{\text{rel}} \|y^{k+1/2}\|_2, \quad \epsilon^\text{dual} = \epsilon^{\text{abs}} + \epsilon^{\text{rel}} \|\mu^{k+1/2}\|_2.$$ Reasonable values for $\epsilon^{\text{abs}}$ and $\epsilon^{\text{rel}}$ are in the range $[10^{-4}, 10^{-2}]$.
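A sketch of this test in code (the function name and call pattern are our own; the tolerance mix follows the formulas above):

```python
import numpy as np

def converged(A, xh, yh, muh, nuh, eps_abs=1e-4, eps_rel=1e-3):
    # residuals of primal feasibility (y = Ax) and dual feasibility (mu = -A^T nu)
    r_pri = np.linalg.norm(A @ xh - yh)
    r_dual = np.linalg.norm(A.T @ nuh + muh)
    eps_pri = eps_abs + eps_rel * np.linalg.norm(yh)
    eps_dual = eps_abs + eps_rel * np.linalg.norm(muh)
    return r_pri <= eps_pri and r_dual <= eps_dual
```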
#### Gap based stopping.
The tuple $(x^k, y^k, \mu^k, \nu^k)$ is primal and dual feasible, but only Fenchel feasible in the limit. We propose the gap based stopping criterion $$\eta^k =
f(y^k) + g(x^k) + f^*(\nu^k) + g^*(\mu^k) \leq \epsilon^\text{gap},$$ where $\epsilon^\text{gap}$ should be chosen relative to the current objective value, e.g., $$\epsilon^\text{gap} = \epsilon^{\text{abs}} + \epsilon^{\text{rel}}|f(y^k) + g(x^k)|.$$ Here too, reasonable values for $\epsilon^{\text{abs}}$ and $\epsilon^{\text{rel}}$ are in the range $[10^{-4}, 10^{-2}]$.
Although the gap based stopping criterion is very informative, since it directly bounds the suboptimality of the current iterate, it suffers from the drawback that $f, g, f^*$ and $g^*$ must all have full domain, since otherwise the gap $\eta^k$ can be infinite. Indeed, the gap $\eta^k$ is almost always infinite when $f$ or $g$ represents constraints.
Implementation {#sec:impl_cons}
--------------
#### Projection.
There are different ways to evaluate the projection operator $\Pi$, depending on the structure and size of $A$.
One simple method that can be used if $A$ is sparse and not too large is a direct sparse factorization. The matrix $K$ is quasi-definite, and therefore the $LDL^T$ decomposition is well defined [@vanderbei1995symmetric]. Since $K$ does not change from iteration to iteration, the factors $L$ and $D$ (and the permutation or elimination ordering) can be computed in the first iteration (e.g., using CHOLMOD [@chen2008algorithm]) and re-used in subsequent iterations. This is known as *factorization caching* [@boyd2011distributed §4.2.3] [@parikh2013block §A.1]. With factorization caching, we get a (potentially) large speedup in iterations, after the first one.
If $A$ is dense, and $\min(m,n)$ is not too large, then block elimination [@BoV:04 Appendix C] can be applied to $K$ [@parikh2013block Appendix A], yielding the reduced update $$\begin{aligned}
x^{k+1} &:= (A^TA + I)^{-1}(c + A^Td) \\
y^{k+1} &:= Ax^{k+1}\end{aligned}$$ if $m \geq n$, or $$\begin{aligned}
y^{k+1} &:= d + (AA^T + I)^{-1}(Ac -d) \\
x^{k+1} &:= c - A^T(d-y^{k+1})\end{aligned}$$ if $m < n$. Both formulations involve forming and solving a system of equations in $\mathbf{R}^{\text{min}(m,n) \times \text{min}(m,n)}$. Since the matrix is symmetric positive definite, we can use the Cholesky decomposition. Forming the coefficient matrix $A^TA+I$ or $AA^T+I$ dominates the computatation. Here too we can take advantage of factorization caching.
The regular structure of dense matrices allows us to analyze the computational complexity of each step. We define $q =\min(m,n)$ and $p = \max(m,n)$. The first iteration involves the factorization and the solve step; subsequent iterations only require the solve step. The computational cost of the factorization is the combined cost of computing $A^TA$ (or $AA^T$, whichever is smaller), at a cost of $pq^2$ flops, in addition to the Cholesky decomposition, at a cost of $(1/3)q^3$ flops. The solve step consists of two matrix-vector multiplications at a cost of $4pq$ flops and solving a triangular system of equations at a cost of $q^2$ flops. The total cost of the first iteration is $O(pq^2)$ flops, while each subsequent iteration only costs $O(pq)$ flops, showing that factorization caching yields a savings by a factor of $q$ after the first iteration.
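A sketch of factorization caching for the $m \geq n$ case (the class name is ours; for brevity the two triangular solves use a general solver rather than a dedicated triangular routine):

```python
import numpy as np

# Sketch: factor A^T A + I once, reuse the Cholesky factor in every projection.
class CachedProjection:
    def __init__(self, A):
        self.A = A
        n = A.shape[1]
        self.L = np.linalg.cholesky(A.T @ A + np.eye(n))  # O(p q^2), done once

    def __call__(self, c, d):
        # solve (A^T A + I) x = c + A^T d via two triangular solves
        rhs = c + self.A.T @ d
        z = np.linalg.solve(self.L, rhs)
        x = np.linalg.solve(self.L.T, z)
        return x, self.A @ x   # (x^{k+1}, y^{k+1} = A x^{k+1})
```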
For very large problems direct methods are no longer practical, at which point indirect (iterative) methods can be used. Fortunately, as the primal and dual variables converge, we are guaranteed that $(x^{k+1/2}, y^{k+1/2}) \to (x^{k+1}, y^{k+1})$, meaning that we will have a good initial guess with which to initialize the iterative method that (approximately) evaluates the projection. One can either apply CGLS (conjugate gradient least-squares) [@hestenes1952methods] or LSQR [@paige1982lsqr] to the reduced update, or apply MINRES (minimum residual) [@paige1975solution] to $K$ directly. It can be shown that the latter requires twice the number of iterations as compared to the former, and is therefore not recommended.
#### Proximal operators.
Since the $x,\tilde x$ and $y,\tilde y$ components are decoupled in the proximal step and dual variable update step, both of these can be done separately, and in parallel for $x$ and $y$. If either $f$ or $g$ is separable, then the proximal step can be parallelized further. The monograph [@parikh2013proximal] details how proximal operators can be computed efficiently for a wide range of functions. Typically the cost of computing the proximal operator will be negligible compared to the cost of the projection. In particular, if $f$ and $g$ are separable, then the cost will be $O(m + n)$, and completely parallelizable.
Pre-conditioning and parameter selection {#sec:precond_param}
========================================
The practical convergence of the algorithm (i.e., the number of iterations required before it terminates) can depend greatly on the choice of the proximal parameter $\rho$, and the scaling of the variables. In this section we analyze these choices, and suggest a method for choosing $\rho$ and for scaling the variables that (empirically) speeds up practical convergence.
Pre-conditioning {#sec:precond}
----------------
Consider scaling the variables $x$ and $y$ in (\[eq:gf\_pri\]), by $E^{-1}$ and $D$ respectively, where $D \in \mathbf{R}^{m\times m}$ and $E \in \mathbf{R}^{n \times n}$ are non-singular matrices. We define the scaled variables $$\hat y = Dy, \quad \hat x = E^{-1}x,$$ which transforms (\[eq:gf\_pri\]) into $$\begin{aligned}
\begin{aligned}
&\mini
& & f(D^{-1}\hat y) + g(E \hat x) \\
& \text{subject to}
& & \hat y= DAE \hat x. \label{eq:gf_trans}
\end{aligned}\end{aligned}$$ This is also a graph form problem, and for notational convenience, we define $$\quad \hat A = DAE, \quad \hat f(\hat y) = f(D^{-1}\hat y), \quad \hat g(\hat x) = g(E \hat x),$$ so that the problem can be written as $$\begin{aligned}
\begin{aligned}
&\mini
& & \hat f(\hat y) + \hat g(\hat x) \\
& \text{subject to}
& & \hat y = \hat A\hat x.
\end{aligned}\end{aligned}$$ We refer to this problem as the pre-conditioned version of (\[eq:gf\_pri\]). Our goal is to choose $D$ and $E$ so that (a) the algorithm applied to the pre-conditioned problem converges in fewer steps in practice, and (b) the additional computational cost due to the pre-conditioning is minimal.
Graph projection splitting applied to the pre-conditioned problem (\[eq:gf\_trans\]) can be interpreted in terms of the original iterates. The proximal step iterates are redefined as $$\begin{aligned}
x^{k+1/2} &= \argmin_{x} \left( g(x) + (\rho/2)\|x - x^k + \tilde x^k\|_{(EE^T)^{-1}}^2 \right) \\
y^{k+1/2} &= \argmin_{y} \left(f(y) + (\rho/2)\|y - y^k + \tilde y^k\|_{(D^TD)}^2 \right),\end{aligned}$$ and the projected iterates are the result of the weighted projection $$\begin{aligned}
\begin{aligned}
&\mini
& & (1/2)\|x - x^{k+1/2}\|_{(EE^T)^{-1}}^2 + (1/2)\|y - y^{k+1/2}\|_{(D^TD)}^2 \\
& \text{subject to}
& & y = A x,
\end{aligned}\end{aligned}$$ where $\|x\|_P = \sqrt{x^TPx}$ for a symmetric positive-definite matrix $P$. This projection can be expressed as $$\Pi(c,d)=
\hat K^{-1}\begin{bmatrix}
(EE^T)^{-1}c + A^TD^TDd \\ 0
\end{bmatrix}, \qquad \hat K = \begin{bmatrix}
(EE^T)^{-1} & A^TD^TD \\
D^TDA & -D^TD
\end{bmatrix}.$$
Notice that graph projection splitting is invariant to orthogonal transformations of the variables $x$ and $y$, since the pre-conditioners only appear in terms of $D^TD$ and $EE^T$. In particular, if we let $D = U^T$ and $E
= V$, where $A=U\Sigma V^T$, then the pre-conditioned constraint matrix $\hat A
= DAE = \Sigma$ is diagonal. We conclude that any graph form problem can be pre-conditioned to one with a diagonal non-negative constraint matrix $\Sigma$. For analysis purposes, we are therefore free to assume that $A$ is diagonal. We also note that for orthogonal pre-conditioners, there exists an analytical relationship between the original proximal operator and the pre-conditioned proximal operator. With $\phi(x) =
\varphi(Qx)$, where $Q$ is any orthogonal matrix ($Q^TQ = QQ^T= I$), we have $$\mathbf{prox}_{\phi}(v) = Q^T\mathbf{prox}_{\varphi}(Qv).$$ While the proximal operator of $\phi$ is readily computed, orthogonal pre-conditioners destroy separability of the objective. As a result, we cannot easily combine them with other pre-conditioners.
Multiplying $D$ by a scalar $\alpha$ and dividing $E$ by the same scalar has the effect of scaling $\rho$ by a factor of $\alpha^2$. It however has no effect on the projection step, showing that $\rho$ can be thought of as the relative scaling of $D$ and $E$.
In the case where $f$ and $g$ are separable and both $D$ and $E$ are diagonal, the proximal step takes the simplified form $$\begin{aligned}
x_j^{k+1/2} &= \argmin_{x_j} \left(g_j(x_j) + (\rho^E_{j}/2)(x_j - x_j^k + \tilde x_j^k)^2 \right) ~&& j = 1,\ldots,n\\
y_i^{k+1/2} &= \argmin_{y_i} \left(f_i(y_i) + (\rho^D_{i}/2)(y_i - y_i^k + \tilde y_i^k)^2 \right) ~&& i = 1,\ldots,m,\end{aligned}$$ where $\rho^E_{j} = \rho/E_{jj}^2$ and $\rho^D_{i} = \rho D_{ii}^2$. Since only $\rho$ is modified, any routine capable of computing $\mathbf{prox}_f$ and $\mathbf{prox}_g$ can also be used to compute the pre-conditioned proximal update.
### Effect of pre-conditioning on projection
For the purpose of analysis, we will assume that $A = \Sigma$, where $\Sigma$ is a non-negative diagonal matrix. The projection operator simplifies to $$\Pi(c,d) = \begin{bmatrix}
(I + \Sigma^T\Sigma)^{-1} & (I + \Sigma^T\Sigma)^{-1}\Sigma^T \\
(I + \Sigma\Sigma^T)^{-1}\Sigma &(I + \Sigma\Sigma^T)^{-1}\Sigma\Sigma^T
\end{bmatrix}\begin{bmatrix} c \\ d\end{bmatrix},$$ which means the projection step can be written explicitly as $$\begin{aligned}
x^{k+1}_i &= \frac{1}{1+\sigma_i^2}(x_i^{k+1/2} + \tilde x_i^{k} + \sigma_i(y_i^{k+1/2} + \tilde y_i^{k})) && 1 \leq i \leq \min(m,n) \\
x^{k+1}_i &= x_i^{k+1/2} + \tilde x_i^{k}&& \min(m,n) < i \leq n \\
y^{k+1}_i &= \frac{\sigma_i}{1+\sigma_i^2}(x_i^{k+1/2} + \tilde x_i^{k} + \sigma_i (y_i^{k+1/2} + \tilde y_i^{k})) && 1 \leq i \leq \min(m,n) \\
y^{k+1}_i &= 0 && \min(m,n) < i \leq m,\end{aligned}$$ where $\sigma_i$ is the $i$th diagonal entry of $\Sigma$ and subscripted indices of $x$ and $y$ denote the $i$th entry of the respective vector. Notice that the projected variables $x_i^{k+1}$ and $y_i^{k+1}$ are equally dependent on $(x_i^{k+1/2} + \tilde x_i^k)$ and $\sigma_i(y_i^{k+1/2} + \tilde y_i^{k})$. If $\sigma_i$ is either significantly smaller or larger than 1, then the terms $x_i^{k+1}$ and $y_i^{k+1}$ will be dominated by either $(x_i^{k+1/2} + \tilde x_i^k)$ or $(y_i^{k+1/2} + \tilde y_i^{k})$. However if $\sigma_i = 1$, then the projection step exactly averages the two quantities $$\begin{aligned}
x_i^{k+1} = y_i^{k+1} &= \frac{1}{2}(x_i^{k+1/2} + \tilde x_i^{k} + y_i^{k+1/2} + \tilde y_i^k) && 1 \leq i \leq \min(m,n).\end{aligned}$$ As we pointed out in §\[sec:alg\], the projection step mixes the variables $x$ and $y$. For this to approximately reduce to averaging, we need $\sigma_i \approx 1$.
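As a quick numeric sanity check (our own), the explicit formulas above can be compared against the general projection (\[eq:proj\]) for a square diagonal $A = \Sigma$:

```python
import numpy as np

# Verify that the explicit diagonal-case update agrees with
# Pi(c, d) = K^{-1} [c + A^T d; 0] when A = Sigma (square, positive diagonal).
def proj_general(A, c, d):
    m, n = A.shape
    K = np.block([[np.eye(n), A.T], [A, -np.eye(m)]])
    rhs = np.concatenate([c + A.T @ d, np.zeros(m)])
    x = np.linalg.solve(K, rhs)[:n]
    return x, A @ x

def proj_diagonal(sigma, c, d):
    x = (c + sigma * d) / (1.0 + sigma ** 2)  # elementwise in sigma_i
    return x, sigma * x

sigma = np.array([0.5, 1.0, 3.0])
c = np.array([1.0, -2.0, 0.5])
d = np.array([2.0, 1.0, -1.0])
xg, yg = proj_general(np.diag(sigma), c, d)
xd, yd = proj_diagonal(sigma, c, d)
print(np.allclose(xg, xd) and np.allclose(yg, yd))  # True
```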
### Choosing $D$ and $E$
The analysis suggests that the algorithm will be fast when the singular values of $DAE$ are all near one, i.e., $$\begin{aligned}
\mathbf{cond}\big(DAE\big) \approx 1, \quad \|DAE\|_2 \approx 1. \label{eq:precond_obj}\end{aligned}$$ (This claim is also supported in [@giselsson2014preconditioning], and is consistent with our computational experience.) Pre-conditioners that exactly satisfy these conditions can be found using the singular value decomposition of $A$. They will, however, be of little use, since such pre-conditioners generally destroy our ability to evaluate the proximal operators of $\hat f$ and $\hat g$ efficiently.
So we seek choices of $D$ and $E$ for which (\[eq:precond\_obj\]) holds (very) approximately, and for which the proximal operators of $\hat f$ and $\hat g$ can still be efficiently computed. We now specialize to the case in which $f$ and $g$ are separable. In this case, diagonal $D$ and $E$ are candidates for which the proximal operators are still easily computed. (The same ideas apply to block separable $f$ and $g$, where we impose the further constraint that the diagonal entries within a block are the same.) So we now limit ourselves to the case of diagonal pre-conditioners.
Diagonal matrices that minimize the condition number of $DAE$, and therefore approximately satisfy the first condition in (\[eq:precond\_obj\]), can be found exactly, using semidefinite programming [@boyd1994linear §3.1]. But this computation is quite involved, and may not be worth the computational effort since the conditions (\[eq:precond\_obj\]) are just a heuristic for faster convergence. (For control problems, where the problem is solved many times with the same matrix $A$, this approach makes sense; see [@giselsson2014diagonal].)
A heuristic that tends to minimize the condition number is to equilibrate the matrix, i.e., choose $D$ and $E$ so that the rows all have the same $p$-norm, and the columns all have the same $p$-norm. (Such a matrix is said to be equilibrated.) This corresponds to finding $D$ and $E$ so that $$|DAE|^p\mathbf{1} = \alpha \mathbf{1}, \qquad \mathbf{1}^T|DAE|^p = \beta
\mathbf{1}^T,$$ where $\alpha, \beta > 0$. Here the notation $|\cdot|^p$ should be understood in the elementwise sense. Various authors [@o2013splitting], [@chu2013primal], [@bradley2010algorithms] suggest that equilibration can decrease the number of iterations needed for operator splitting and other first order methods. One issue that we need to address is that not every matrix can be equilibrated. Given that equilibration is only a heuristic for achieving $\sigma_i(DAE)\approx 1$, which is in turn a heuristic for fast convergence of the algorithm, partial equilibration should serve the same purpose just as well.
Sinkhorn and Knopp [@sinkhorn1967concerning] give a method for matrix equilibration that applies when $p<\infty$ and $A$ is square and has full support. In the case $p = \infty$, the Ruiz algorithm [@ruiz2001scaling] can be used. Both of these methods fail (as they must) when the matrix $A$ cannot be equilibrated. We give below a simple variant of the Sinkhorn-Knopp algorithm, modified to handle the case when $A$ is non-square or cannot be equilibrated.
Choosing pre-conditioners that satisfy $\|DAE\|_2=1$ can be achieved by scaling $D$ and $E$ by $\sigma_{\max}(DAE)^{-q}$ and $\sigma_{\max}(DAE)^{q-1}$ respectively for $q \in \mathbf{R}$. The quantity $\sigma_{\max}(DAE)$ can be approximated using power iteration, but we have found it is unnecessary to exactly enforce $\|DAE\|_2=1$. A more computationally efficient alternative is to replace $\sigma_{\max}(DAE)$ by $\|DAE\|_F/\sqrt{\min(m,n)}$. This quantity coincides with $\sigma_{\max}(DAE)$ when $\mathbf{cond}(DAE) = 1$. If $DAE$ is equilibrated and $p=2$, this scaling corresponds to $(DAE)^T(DAE)$ (or $(DAE)(DAE)^T$ when $m < n$) having unit diagonal.
Regularized equilibration
-------------------------
In this section we present a self-contained derivation of our matrix-equilibration method. It is similar to the Sinkhorn-Knopp algorithm, but also works when the matrix is non-square or cannot be exactly equilibrated.
Consider the convex optimization problem with variables $u$ and $v$, $$\begin{aligned}
\begin{aligned}
&\mini
& & \sum_{i=1}^m\sum_{j=1}^n|A_{ij}|^pe^{u_i+v_j} - n \mathbf{1}^Tu - m \mathbf{1}^Tv +\gamma \left[(1/m)\sum_{i=1}^me^{u_i} + (1/n)\sum_{j=1}^ne^{v_j}\right], \label{eq:sk_obj}
\end{aligned}\end{aligned}$$ where $\gamma \geq 0$ is a regularization parameter. The objective is bounded below for any $\gamma > 0$. The optimality conditions are $$\begin{aligned}
\sum_{j=1}^n|A_{ij}|^pe^{u_i+v_j} - n + (1/m)\gamma e^{u_i}= 0,\quad i = 1,\ldots,m\\
\sum_{i=1}^m|A_{ij}|^pe^{u_i+v_j} - m + (1/n)\gamma e^{v_j} = 0,\quad j = 1,\ldots,n.\end{aligned}$$ By defining $D_{ii} = e^{u_i/p}$ and $E_{jj} = e^{v_j/p}$, these conditions are equivalent to $$|DAE|^p \mathbf{1} + (1/m)\gamma D^p\mathbf{1} =n \mathbf{1}, \quad \mathbf{1}^T |DAE|^p + (1/n)\gamma \mathbf{1}^TE^p=m \mathbf{1}^T.$$ When $\gamma = 0$, these are the conditions for a matrix to be equilibrated. The objective may not be bounded when $\gamma = 0$, which exactly corresponds to the case when the matrix cannot be equilibrated. As $\gamma \to \infty$, both $D$ and $E$ converge to the scaled identity matrix $(mn/\gamma)^{1/p}I$, showing that $\gamma$ can be thought of as a regularizer on the elements of $D$ and $E$. If $D$ and $E$ are optimal, then the two equalities $$\mathbf{1}^T|DAE|^p\mathbf{1} + (1/m)\gamma \mathbf{1}^TD^p\mathbf{1} = mn,
\qquad \mathbf{1}^T|DAE|^p\mathbf{1} + (1/n)\gamma \mathbf{1}^TE^p\mathbf{1} = mn$$ must hold. Subtracting one from the other and dividing by $\gamma$, we find the relationship $$(1/m) \mathbf{1}^TD^p\mathbf{1} = (1/n) \mathbf{1}^TE^p\mathbf{1},$$ implying that the average entry of $D^p$ and of $E^p$ is the same.
There are various ways to solve the optimization problem (\[eq:sk\_obj\]), one of which is to apply coordinate descent. Minimizing the objective in (\[eq:sk\_obj\]) with respect to $u_i$ yields $$\sum_{j = 1}^ne^{u_i^{k}+v^{k-1}_j}|A_{ij}|^p + (\gamma/m) e^{u_i^{k}}= n \Leftrightarrow e^{u_i^{k}} = \frac{n}{\sum_{j = 1}^ne^{v^{k-1}_j}|A_{ij}|^p + (\gamma/m)}$$ and equivalently for $v_j$ $$e^{v_j^{k}} = \frac{m}{\sum_{i = 1}^me^{u^{k-1}_i}|A_{ij}|^p + (\gamma/n)}.$$ Since the minimization with respect to $u_i^k$ is independent of the other entries of $u^k$, the update can be done in parallel for each element of $u$, and similarly for $v$. Repeated minimization over $u$ and $v$ will eventually yield values that satisfy the optimality conditions. Algorithm \[alg:sk1\] summarizes the equilibration routine.
Initialize $e^0 := \mathbf{1}, ~k := 0$ $k := k +1$ $d^{k} := n ~ \mathbf{diag}(|A|^pe^{k-1} + (\gamma/m)\mathbf{1})^{-1} \mathbf{1}$ $e^{k} := m ~ \mathbf{diag}(|A^T|^pd^{k} + (\gamma/n)\mathbf{1})^{-1} \mathbf{1}$ $D := \mathbf{diag}(d^{k})^{1/p}$, $E := \mathbf{diag}(e^{k})^{1/p}$
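As a concrete illustration, a minimal numpy sketch of this regularized Sinkhorn-Knopp iteration might look as follows (the function name, iteration count, and test matrix are our choices, and no convergence check is included):

```python
import numpy as np

def equilibrate(A, p=2, gamma=1e-4, iters=50):
    # Alternate the closed-form coordinate descent updates for
    # d = exp(u) and e = exp(v); returns the diagonals of D and E.
    m, n = A.shape
    B = np.abs(A) ** p
    e = np.ones(n)
    for _ in range(iters):
        d = n / (B @ e + gamma / m)    # minimize over all u_i in parallel
        e = m / (B.T @ d + gamma / n)  # minimize over all v_j in parallel
    return d ** (1.0 / p), e ** (1.0 / p)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 30))
dD, dE = equilibrate(A)
DAE = dD[:, None] * A * dE[None, :]
row_sums = (np.abs(DAE) ** 2).sum(axis=1)  # approximately n for every row
col_sums = (np.abs(DAE) ** 2).sum(axis=0)  # approximately m for every column
```

With small $\gamma$ and a dense Gaussian matrix, the row and column $p$-norms come out nearly uniform after a few dozen sweeps.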
Adaptive penalty update
-----------------------
The projection operator $\Pi$ does not depend on the choice of $\rho$, so we are free to update $\rho$ in each iteration, at no extra cost. While the convergence theory only holds for fixed $\rho$, it still applies if one assumes that $\rho$ becomes fixed after a finite number of iterations [@boyd2011distributed].
As a rule, increasing $\rho$ will decrease the primal residual, while decreasing $\rho$ will decrease the dual residual. The authors in [@he2000alternating],[@boyd2011distributed] suggest adapting $\rho$ to balance the primal and dual residuals. We have found that substantially better practical convergence can be obtained using a variation on this idea. Rather than balancing the primal and dual residuals, we allow either the primal or dual residual to approximately converge and only then start adjusting $\rho$. Based on this observation, we propose the following adaptive update scheme.
Initialize $l:= 0, \,u := 0$ Apply graph projection splitting $\rho^{k+1} := \delta \rho^k$ $u := k$ $\rho^{k+1} := (1/\delta)\rho^k$ $l := k$
Once either the primal or dual residual converges, the algorithm begins to steer $\rho$ in a direction so that the other residual also converges. By making small adjustments to $\rho$, we will tend to remain approximately primal (or dual) feasible once primal (dual) feasibility has been attained. Additionally by requiring a certain number of iterations between an increase in $\rho$ and a decrease (and vice versa), we enforce that changes to $\rho$ do not flip-flop between one direction and the other. The parameter $\tau$ determines the relative number of iterations between changes in direction.
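The scheme can be sketched as follows; the exact guard conditions used by POGS may differ, so this should be read as our interpretation of the idea rather than the implementation:

```python
def update_rho(rho, k, l, u, primal_ok, dual_ok, delta=1.05, tau=0.8):
    # Once one residual has (approximately) converged, nudge rho to help
    # the other; tau spaces out changes of direction. l and u record the
    # last iterations at which rho was decreased / increased.
    if primal_ok and not dual_ok and k >= l + tau * (k - u):
        return rho * delta, l, k      # increase rho, record u := k
    if dual_ok and not primal_ok and k >= u + tau * (k - l):
        return rho / delta, k, u      # decrease rho, record l := k
    return rho, l, u                  # leave rho unchanged
```

Because each change is a small multiplicative factor $\delta$, the iterates stay close to the feasibility already attained while the other residual is driven down.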
Implementation {#sec:impl}
==============
Proximal Graph Solver (POGS) is an open-source (BSD-3 license) implementation of graph projection splitting, written in C++. It supports both GPU and CPU platforms and includes wrappers for C, MATLAB, and R. POGS handles all combinations of sparse/dense matrices, single/double precision arithmetic, and direct/indirect solvers, with the exception (for now) of sparse direct solvers. The only dependency is a tuned BLAS library on the respective platform (e.g., cuBLAS or the Apple Accelerate Framework). The source code is available at
https://github.com/cvxgrp/pogs
In lieu of having the user specify the proximal operators of $f$ and $g$, POGS contains a library of proximal operators for a variety of different functions. It is currently assumed that the objective is separable, in the form $$f(y) + g(x) = \sum_{i = 1}^mf_i(y_i) + \sum_{j = 1}^ng_j(x_j),$$ where $f_i, g_j : \mathbf{R} \to \mathbf{R} \cup \{\infty\}$. The library contains a set of base functions, and by applying various transformations, the range of functions can be greatly extended. In particular we use the parametric representation $$f_i(y_i) = c_i h_i(a_iy_i - b_i) + d_iy_i + (1/2)e_iy_i^2,$$ where $a_i,b_i,d_i \in \mathbf{R}$, $c_i, e_i \in \mathbf{R}_+$, and $h_i : \mathbf{R} \to \mathbf{R} \cup \{\infty\}$. The same representation is also used for $g_j$. It is straightforward to express the proximal operators of $f_i$ in terms of the proximal operator of $h_i$ using the formula $$\begin{aligned}
{\prox}_{f}(v) = \frac{1}{a}\bigg({\prox}_{h, (e+\rho)/(c a^2)}\Big(a \left(v\rho -d\right)/(e+\rho) - b\Big) + b\bigg),\end{aligned}$$ where for notational simplicity we have dropped the index $i$ in the constants and functions. It is possible for a user to add their own proximal operator function, if it is not in the current library. We note that the separability assumption on $f$ and $g$ is a simplification, rather than a limitation of the algorithm. It allows us to apply the proximal operator in parallel using either CUDA or OpenMP (depending on the platform).
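To make the formula concrete, here is a small numpy check of this transformation using $h = |\cdot|$, whose proximal operator is soft-thresholding (the helper names are ours, and the parameter values are arbitrary):

```python
import numpy as np

def prox_abs(v, lam):
    # prox of h(y) = |y| with weight lam: argmin |y| + (lam/2)(y - v)^2
    return np.sign(v) * np.maximum(np.abs(v) - 1.0 / lam, 0.0)

def prox_f(v, rho, a, b, c, d, e, prox_h=prox_abs):
    # prox of f(y) = c*h(a*y - b) + d*y + (e/2)*y^2 via prox of h,
    # using the formula above (the index i is dropped).
    u = a * (v * rho - d) / (e + rho) - b
    return (prox_h(u, (e + rho) / (c * a ** 2)) + b) / a

y_star = prox_f(1.0, rho=1.2, a=2.0, b=0.5, c=1.5, d=0.3, e=0.7)
```

A brute-force grid minimization of $f(y) + (\rho/2)(y - v)^2$ lands on the same point, which is a cheap way to validate new entries in such a proximal-operator library.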
The constraint matrix is equilibrated using algorithm \[alg:sk1\], with a choice of $p=2$ and $\gamma = (m + n)\sqrt{\epsilon^{\text{cmp}}}$, where $\epsilon^{\text{cmp}}$ is machine epsilon. Both $D$ and $E$ are rescaled evenly, so that they satisfy $\|DAE\|_F/\sqrt{\min(m,n)} = 1$. The projection $\Pi$ is computed as outlined in §\[sec:impl\_cons\]. We work with the reduced update equations in all versions of POGS. In the indirect case, we chose to use CGLS. The parameter $\rho$ is updated according to algorithm \[alg:adaptive\_rho\]. Empirically, we found that $(\delta, \,\tau) = (1.05, \,0.8)$ works well. We also use over-relaxation with $\alpha = 1.7$.
POGS supports warm starting, whereby an initial guess for $x^0$ and/or $\nu^0$ may be supplied by the user. If only $x^0$ is provided, then $\nu^0$ will be estimated, and vice-versa. The warm-start feature allows any cached matrices to be used to solve additional problems with the same matrix $A$.
POGS returns the tuple $(x^{k+1/2}, y^{k+1/2}, \mu^{k+1/2}, \nu^{k+1/2})$, since these iterates have finite primal and dual objectives. The primal and dual residuals will be non-zero, with magnitudes determined by the specified tolerances.
Future plans for POGS include extension to block-separable $f$ and $g$ (including general cone solvers), additional wrappers for Julia and Python, support for a sparse direct solver, and a multi-GPU extension.
Numerical results {#sec:numres}
=================
To highlight the robustness and general purpose nature of POGS, we tested it on 9 different problem classes using random data, as well as a radiation treatment planning problem using real-world data.
All experiments were performed in single precision arithmetic on a machine equipped with an Intel Core i7-870, 16GB of RAM, and a Tesla K40 GPU. Timing results include the data copy from CPU to GPU. We compare POGS to SDPT3 [@toh1999sdpt3], an open-source solver that handles linear, second-order, and positive semidefinite cone programs. Since SDPT3 uses an interior-point algorithm, the solution returned will be of high precision, allowing us to verify the accuracy of the solution computed by POGS. Problems that took SDPT3 more than 150 seconds (of which there were many) were aborted.
Random problem classes {#sec:rand_probs}
----------------------
We considered the following 9 problem classes: Basis pursuit, Entropy maximization, Huber fitting, Lasso, Logistic regression, Linear programming, Non-negative least-squares, Portfolio optimization, and Support vector machine fitting. For each problem class, reasonable random instances were generated and solved; details about problem generation can be found in Appendix \[sec:prob\_gen\]. For each problem class the number of non-zeros in $A$ was varied on a logarithmic scale from 100 to 2 billion. The aspect ratio of $A$ was also varied from 1:10 to 10:1, with the orientation (wide or tall) chosen depending on what was reasonable for each problem. We report running time averaged over all aspect ratios.
The maximum number of iterations was set to $10^{4}$, but all problems converged in fewer iterations, with most problems taking a couple of hundred iterations. The relative tolerance was set to $10^{-3}$, and where solutions from SDPT3 were available, we verified that the solutions produced by both solvers matched to 3 decimal places. We omit SDPT3 running times for problems involving exponential cones, since SDPT3 does not support them.
Figure \[fig:pogs\_dense\] compares the running time of POGS versus SDPT3, for problems where the constraint matrix $A$ is dense. We can make several general observations.
- POGS solves problems that are 3 orders of magnitude larger than SDPT3 in the same amount of time.
- Problems that take 200 seconds in SDPT3 take 0.5 seconds in POGS.
- POGS can solve problems with 2 billion non-zeros in 10-50 seconds.
- The variation in solve time across different problem classes was similar for POGS and SDPT3, around one order of magnitude.
In summary, POGS is able to solve much larger problems, much faster (to moderate precision).
![POGS (GPU version) vs. SDPT3 for dense matrices (color represents problem class).[]{data-label="fig:pogs_dense"}](pogs_vs_sdpt3.eps)
Radiation treatment planning
----------------------------
Radiation treatment is used to radiate tumor cells in cancer patients. The goal of radiation treatment planning is to find a set of radiation beam intensities that will deliver a specified radiation dosage to tumor cells, while minimizing the impact on healthy cells. The problem can be stated directly in graph form, with $x$ corresponding to the $n$ beam intensities to be found, $y$ corresponding to the radiation dose received at the $m$ voxels, and the matrix $A$ (whose elements are non-negative) giving the mapping from the beams to the received dosages at the voxels. This matrix comes from geometry, including radiation scattering inside the patient [@ahnesjo2006imrt]. The objective $g$ is the indicator function of the non-negative orthant (which imposes the constraint that $x_j \geq 0$), and $f$ is a separable function of the form $$f_i(y_i) = \left\{ \begin{array}{ll}
w_i^+ y_i & \mbox{$i$ corresponds to a non-tumor voxel}\\
w_i^- \max (d_i-y_i,0)+
w_i^+ \max (y_i-d_i,0)
& \mbox{$i$ corresponds to a tumor voxel},
\end{array}\right.$$ where $w_i^+ >0$ is the (given) weight associated with overdosing voxel $i$, where $w_i^- >0$ is the (given) weight associated with underdosing voxel $i$, and $d_i>0$ is the target dose, given for each tumor voxel. We can also add the redundant constraint $y_i \geq 0$ by defining $f_i(y_i)=\infty$ for $y_i<0$.
We present results for one instance of this problem, with $m=360000$ voxels and $n=360$ beams. The matrix $A$ comes from a real patient, and the objective parameters are chosen to achieve a good clinical plan. The problem is small enough that it can be solved (to high accuracy) by an interior-point method, in around one hour. POGS took a few seconds to solve the problem, producing a solution that was extremely close to the one produced by the interior-point method. In warm start mode, POGS could solve problem instances (obtained by varying the objective parameters) in under one second, allowing for real-time tuning of the treatment plan (by adjusting the objective function weights) by a radiation oncologist.
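As an illustration, the separable objective $f$ above can be evaluated as follows (a vectorized sketch with our own argument names; $d_i$ is taken as 0 at non-tumor voxels, where it is unused):

```python
import numpy as np

def treatment_objective(y, is_tumor, w_plus, w_minus, d):
    # f_i = w+ * y_i at non-tumor voxels; at tumor voxels, a weighted
    # one-sided penalty on under- and overdosing around the target d_i.
    tumor = w_minus * np.maximum(d - y, 0.0) + w_plus * np.maximum(y - d, 0.0)
    return float(np.where(is_tumor, tumor, w_plus * y).sum())

# One non-tumor voxel (dose 2.0) and one tumor voxel (dose 1.0, target 1.5).
val = treatment_objective(np.array([2.0, 1.0]), np.array([False, True]),
                          np.array([1.0, 2.0]), np.array([0.0, 3.0]),
                          np.array([0.0, 1.5]))  # = 3.5
```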
Acknowledgments
===============
We would like to thank Baris Ungun for testing POGS and providing valuable feedback, as well as providing the radiation treatment data. We also thank Michael Saunders for numerous discussions about solving large sparse systems. This research was funded by DARPA XDATA and Adobe.
Problem generation details {#sec:prob_gen}
==========================
In this section we describe how the problems in §\[sec:rand\_probs\] were generated.
Basis pursuit
-------------
The basis pursuit problem [@chen1998atomic] seeks the smallest vector in the $\ell_1$-norm sense that satisfies a set of underdetermined linear equality constraints. The objective has the effect of finding a sparse solution. It can be stated as $$\begin{aligned}
\begin{aligned}
&\mini
& & \|x\|_1 \\
& \text{subject to}
& & b = A x,
\end{aligned}\end{aligned}$$ with equivalent graph form representation $$\begin{aligned}
\begin{aligned}
&\mini
& & I(y = b) + \|x\|_1 \\
& \text{subject to}
& & y = A x.
\end{aligned}\end{aligned}$$ The elements of $A$ were generated as $A_{ij} \sim \mathcal{N}(0, 1)$. To construct $b$ we first generated a vector $v \in \mathbf{R}^n$ as $$v_i \sim \left\{
\begin{tabular}{ll}
0 & with probability $p=1/2$ \\
$\mathcal{N}(0, 1/n)$ & otherwise,
\end{tabular}\right.$$ we then let $b = Av$. In each instance we chose $m < n$.
Entropy maximization
--------------------
The entropy maximization problem [@BoV:04] seeks a probability distribution with maximum entropy that satisfies a set of $m$ affine inequalities, which can be interpreted as bounds on the expectations of arbitrary functions. It can be stated as $$\begin{aligned}
\begin{aligned}
&\maxi
& & -\textstyle{\sum}_{i=1}^nx_i\log x_i \\
& \text{subject to}
& & \mathbf{1}^T x = 1, \quad A x \leq b,
\end{aligned}\end{aligned}$$ with equivalent graph form representation $$\begin{aligned}
\begin{aligned}
&\mini
& & I(y_{1:m} \leq b) + I(y_{m+1} = 1) + \textstyle{\sum}_{i=1}^nx_i\log x_i \\
& \text{subject to}
& & y = \begin{bmatrix}A \\ \mathbf{1}^T\end{bmatrix} x.
\end{aligned}\end{aligned}$$ The elements of $A$ were generated as $A_{ij} \sim \mathcal{N}(0,n)$. To construct $b$, we first generated a vector $v \in \mathbf{R}^n$ as $v_i \sim U[0, 1]$, then we set $b = Av/(\mathbf{1}^Tv)$. This ensures that there exists a feasible $x$. In each instance we chose $m < n$.
Huber fitting
-------------
Huber fitting or robust regression [@huber1964robust] performs linear regression under the assumption that there are outliers in the data. The problem can be stated as $$\begin{aligned}
\begin{aligned}
&\mini
& & \textstyle{\sum}_{i=1}^m\text{huber}(b_i - a_i^Tx),
\end{aligned}\end{aligned}$$ where the Huber loss function is defined as $$\text{huber}(x) = \left\{\begin{aligned} &(1/2)x^2 && |x| \leq 1 \\ &|x| - (1/2) && |x| > 1. \end{aligned} \right.$$ The graph form representation of this problem is $$\begin{aligned}
\begin{aligned}
&\mini &
\textstyle{\sum}_{i=1}^m\text{huber}(b_i-y_i) \\
& \text{subject to}
& y = Ax.
\end{aligned}\end{aligned}$$ The elements of $A$ were generated as $A_{ij} \sim \mathcal{N}(0,n)$. To construct $b$, we first generated a vector $v \in \mathbf{R}^n$ as $v_i \sim \mathcal{N}(0,1/n)$, and then generated a noise vector $\varepsilon$ with elements $$\varepsilon_i \sim \left\{
\begin{tabular}{ll}
$\mathcal{N}(0,1/4)$ & with probability $p=0.95$ \\
$U[0, 10]$ & otherwise.
\end{tabular}\right.$$ Lastly we constructed $b = Av + \varepsilon$. In each instance we chose $m > n$.
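The Huber loss defined above is straightforward to vectorize (a sketch; the function name is ours):

```python
import numpy as np

def huber(x):
    # Quadratic for |x| <= 1, linear with matched value and slope outside.
    ax = np.abs(x)
    return np.where(ax <= 1.0, 0.5 * x ** 2, ax - 0.5)

vals = huber(np.array([-3.0, 0.5, 2.0]))  # -> [2.5, 0.125, 1.5]
```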
Lasso {#sec:apx_lasso}
-----
The lasso problem [@tibshirani1996regression] seeks to perform linear regression under the assumption that the solution is sparse. An $\ell_1$ penalty is added to the objective to encourage sparsity. It can be stated as $$\begin{aligned}
\begin{aligned}
&\mini
& & \|Ax-b\|_2^2 + \lambda \|x\|_1,
\end{aligned}\end{aligned}$$ with graph form representation $$\begin{aligned}
\begin{aligned}
&\mini
& & \|y-b\|_2^2 + \lambda\|x\|_1\\
& \text{subject to}
& & y = Ax.
\end{aligned}\end{aligned}$$ The elements of $A$ were generated as $A_{ij} \sim \mathcal{N}(0, 1)$. To construct $b$ we first generated a vector $v \in \mathbf{R}^n$, with elements $$v_i \sim \left\{
\begin{tabular}{ll}
0 & with probability $p=1/2$ \\
$\mathcal{N}(0, 1/n)$ & otherwise.
\end{tabular}\right.$$ We then let $b = Av + \varepsilon$, where $\varepsilon$ represents the noise and was generated as $\varepsilon_i\sim\mathcal{N}(0,1/4)$. The value of $\lambda$ was set to $(1/5)\|A^Tb\|_\infty$. This is a reasonable choice since $\|A^Tb\|_\infty$ is the critical value of $\lambda$ above which the solution of the Lasso problem is $x=0$. In each instance we chose $m < n$.
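For instance, the Lasso instance generation just described can be sketched with numpy (the sizes and seed are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 100                                     # wide instance: m < n
A = rng.standard_normal((m, n))                    # A_ij ~ N(0, 1)
v = np.where(rng.random(n) < 0.5, 0.0,
             rng.standard_normal(n) / np.sqrt(n))  # sparse, N(0, 1/n) entries
b = A @ v + 0.5 * rng.standard_normal(m)           # noise ~ N(0, 1/4)
lam = 0.2 * np.max(np.abs(A.T @ b))                # (1/5) * ||A^T b||_inf
```

Any $\lambda$ below $\|A^Tb\|_\infty$ yields a nontrivial solution, which is why the factor $1/5$ is a reasonable default.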
Logistic regression
-------------------
Logistic regression [@hastie2009elements] fits a probability distribution to a binary class label. Similar to the Lasso problem (\[sec:apx\_lasso\]) a sparsifying $\ell_1$ penalty is often added to the coefficient vector. It can be stated as $$\begin{aligned}
\begin{aligned}
&\mini
& &\textstyle{\sum}_{i=1}^m \left(\log(1+\exp (x^Ta_i)) - b_ix^Ta_i\right) + \lambda \|x\|_1,
\end{aligned}\end{aligned}$$ where $b_i \in \{0,1\}$ is the class label of the $i$th sample, and $a_i^T$ is the $i$th row of $A$. The graph form representation of this problem is $$\begin{aligned}
\begin{aligned}
&\mini
& & \textstyle{\sum}_{i=1}^m \left(\log(1+\exp (y_i)) - b_iy_i\right) + \lambda \|x\|_1, \\
& \text{subject to}
& & y = Ax.
\end{aligned}\end{aligned}$$ The elements of $A$ were generated as $A_{ij} \sim \mathcal{N}(0, 1)$. To construct $b$ we first generated a vector $v \in \mathbf{R}^n$, with elements $$v_i \sim \left\{
\begin{tabular}{ll}
0 & with probability $p=1/2$ \\
$\mathcal{N}(0, 1/n)$ & otherwise.
\end{tabular}\right.$$ We then constructed the entries of $b$ as $$b_i \sim \left\{
\begin{tabular}{ll}
0 & with probability $p=1/(1+ \exp(-a_i^Tv))$ \\
1 & otherwise.
\end{tabular}\right.$$ The value of $\lambda$ was set to $(1/10)\|A^T((1/2)\mathbf{1} - b)\|_\infty$. ($\|A^T((1/2)\mathbf{1} - b)\|_\infty$ is the critical value of $\lambda$ above which the solution is $x=0$.) In each instance we chose $m > n$.
Linear program
--------------
Linear programs [@BoV:04] seek to minimize a linear function subject to linear inequality constraints. It can be stated as $$\begin{aligned}
&\mini
& & c^Tx \\
& \text{subject to}
& & Ax \leq b,
\end{aligned}$$ and has graph form representation $$\begin{aligned}
&\mini
& & c^Tx + I(y \leq b) \\
& \text{subject to}
& & y = Ax.
\end{aligned}$$ The elements of $A$ were generated as $A_{ij} \sim \mathcal{N}(0, 1)$. To construct $b$ we first generated a vector $v \in \mathbf{R}^n$, with elements $$v_i \sim \mathcal{N}(0, 1/n).$$ We then generated $b$ as $b = Av + \varepsilon$, where $\varepsilon_i \sim U[0, 1/10]$. The vector $c$ was constructed in a similar fashion. First we generated a vector $u \in \mathbf{R}^m$, with elements $$u_i \sim U[0,1],$$ then we constructed $c = -A^Tu$. This method guarantees that the problem is bounded. In each instance we chose $m > n$.
Non-negative least-squares
--------------------------
Non-negative least-squares [@chen2009nonnegativity] seeks a minimizer of a least-squares problem subject to the solution vector being non-negative. This comes up in applications where the solution represents real quantities. The problem can be stated as $$\begin{aligned}
&\mini
& & \|Ax-b\|_2^2 \\
& \text{subject to}
& & x \geq 0,
\end{aligned}$$ and has graph form representation $$\begin{aligned}
&\mini
& & \|y-b\|_2^2 + I(x\geq 0) \\
& \text{subject to}
& & y = Ax.
\end{aligned}$$ The elements of $A$ were generated as $A_{ij} \sim \mathcal{N}(0, 1)$. To construct $b$ we first generated a vector $v \in \mathbf{R}^n$, with elements $$v_i \sim \mathcal{N}(1/n, 1/n).$$ We then generated $b$ as $b = Av + \varepsilon$, where $\varepsilon_i \sim \mathcal{N}(0,1/4)$. In each instance we chose $m > n$.
Portfolio optimization
----------------------
Portfolio optimization or optimal asset allocation seeks to maximize the risk adjusted return of a portfolio. A common assumption is the $k$-factor risk model [@connor1993arbitrage], which states that the return covariance matrix is the sum of a diagonal plus a rank $k$ matrix. The problem can be stated as $$\begin{aligned}
&\maxi
& & \mu^Tx- \gamma x^T(FF^T + D)x \\
& \text{subject to}
& & x \geq 0, \quad \mathbf{1}^Tx = 1
\end{aligned}$$ where $F \in \mathbf{R}^{n \times k}$ and $D$ is diagonal. An equivalent graph form representation is given by $$\begin{aligned}
&\mini
& &-x^T\mu + \gamma x^TDx + I(x \geq 0)+ \gamma y_{1:k}^Ty_{1:k} + I(y_{k+1} = 1)\\
& \text{subject to}
& & y = \begin{bmatrix}
F^T \\ \mathbf{1}^T
\end{bmatrix}x.
\end{aligned}$$ The elements of $F$ were generated as $F_{ij} \sim \mathcal{N}(0,1)$. The diagonal of $D$ was generated as $D_{ii} \sim U[0,\sqrt{k}]$ and the mean return $\mu$ was generated as $\mu_i \sim \mathcal{N}(0, 1)$. The risk aversion factor $\gamma$ was set to 1. In each instance we chose $n > k$.
Support vector machine
----------------------
The support vector machine [@cortes1995support] problem seeks a separating hyperplane classifier for a problem with two classes. The problem can be stated as $$\begin{aligned}
&\mini
& & x^Tx + \lambda \textstyle{\sum}_{i = 1}^m\max(0, b_ia_i^Tx + 1),
\end{aligned}$$ where $b_i \in \{-1, +1\}$ is a class label and $a_i^T$ is the $i$th row of $A$. It has graph form representation $$\begin{aligned}
&\mini
& & \lambda \textstyle{\sum}_{i = 1}^m\max(0, y_i + 1) + x^Tx\\
& \text{subject to}
& & y = \mathbf{diag}(b)Ax.
\end{aligned}$$ The vector $b$ was chosen so that the first $m/2$ elements belong to one class and the second $m/2$ belong to the other class. Specifically $$b_i = \left\{\begin{tabular}{ll}
$+1$ & $i \leq m/2$ \\
$-1$ & otherwise.
\end{tabular}\right.$$ Similarly, the elements of $A$ were generated as $$A_{ij} \sim \left\{\begin{tabular}{ll}
$\mathcal{N}(+1/n, 1/n)$ & $i \leq m/2$ \\
$\mathcal{N}(-1/n, 1/n)$ & otherwise.
\end{tabular}\right.$$ This choice of $A$ causes the rows of $A$ to form two distinct clusters. In each instance we chose $m > n$.
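The class labels and clustered data matrix can be generated as follows (a numpy sketch with our own sizes and seed):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 20
b = np.where(np.arange(m) < m // 2, 1.0, -1.0)   # first half +1, rest -1
# Rows cluster around +1/n or -1/n with variance 1/n, as in the text.
A = b[:, None] / n + rng.standard_normal((m, n)) / np.sqrt(n)
```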
---
abstract: 'A single protein molecule is regarded as a contact network of amino-acid residues. Some studies have indicated that this network is a small world network (SWN), while other results have implied that this is a fractal network (FN). However, SWN and FN are essentially different in the dependence of the shortest path length on the number of nodes. In this paper, we investigate this dependence in the residue contact networks of proteins in native structures, and show that the networks are not SWN but FN. FN is generally characterized by several dimensions. Among them, we focus on three dimensions; the network topological dimension $D_c$, the fractal dimension $D_f$, and the spectral dimension $D_s$. We find that proteins universally yield $D_c \approx 1.9$, $D_f \approx 2.5$ and $D_s \approx 1.3$. These values are in surprisingly good agreement with those of the three dimensional critical percolation cluster. Hence the residue contact networks in the protein native structures belong to the universality class of the three dimensional critical percolation cluster. The criticality is relevant to the ambivalent nature of the protein native structures, i.e., the coexistence of stability and instability, both of which are necessary for a protein to function as a molecular machine or an allosteric enzyme.'
author:
- Hidetoshi Morita
- Mitsunori Takano
title: |
Residue network in protein native structure belongs to\
the universality class of three dimensional critical percolation cluster
---
#### Introduction
Proteins are one-dimensional chains of amino-acid residues embedded in three dimensional ($D=3$; 3D) Euclidean space. The residues neighboring in the Euclidean space are in contact with each other. Thus we can regard a protein molecule as a contact network of amino-acid residues [@Bahar_Atilgan_Erman1997; @Atilgan_etal2001]. This network viewpoint is complementary to the energy landscape picture [@Frauenfelder_Sligar_Wolynes1991] in understanding the general properties of proteins. We hereafter consider this network within single protein molecules in their native structures, in particular focusing on its universality among proteins.
Some recent studies [@Vendruscolo_etal2002; @Dokholyan_etal2002; @Greene_Higman2003; @Atilgan_Akan_Canan2004; @Bagler_Sinha2005] have applied the latest network theory to the residue network, by regarding the amino-acid residues and their contacts as nodes and edges, respectively. The important quantities to characterize the network are the clustering coefficient $C$ and the shortest path length $L$ [@Newman2003_rev]. Those studies have demonstrated that in the residue networks $C$ is larger than in random networks [@RN] while $L$ is smaller than in a normal lattice. This indicates that the residue network is a small world network (SWN) [@Watts_Strogatz1998].
On the other hand, the spatial profile of residues within single protein molecules has long been studied with the use of authentic methods of material science. Earlier spectroscopic studies [@spectr] have shown anomalous density of states. These results, accompanied with theoretical studies [@spectr_theor], have suggested that the protein structures possess the property of fractal lattice. The fractality within single proteins has also been supported numerically through the density of normal modes [@Wako1989; @benAvraham1993; @Yu_Leitner2003] and the spatial mass distribution [@frac_dim_prev]. This implies that the residue network that we are interested in is a fractal network (FN).
From the general viewpoint of the network theory, however, there lies a dichotomy between SWN and FN [@Csanyi_Szendroi2004]. The clustering coefficient $C$ cannot discriminate between SWN and FN, since in both networks $C$ has a larger value than in random networks. In contrast, the dependence of the shortest path length $L$ on the number of nodes $N$ is essentially different between SWN and FN; $L$ depends on $N$ logarithmically and algebraically, respectively. By exploiting the $N$-dependence of $L$, we can differentiate SWN and FN, in principle.
In proteins, nevertheless, it is practically difficult to clearly distinguish between these two $N$-dependences. This is because the sizes of proteins are not distributed widely enough to cover a sufficient number of decades. The same data sets can be read as a straight line both in a semi-log (SWN) and a log-log (FN) plot.
To overcome this difficulty, here we introduce a more sophisticated method. Instead of the $N$-$L$ plot among various sized proteins, we investigate an equivalent quantity [*within single protein molecules*]{}; we calculate the number of nodes $n_l$ that can be reached within $l$ path steps. Then, by overlaying the $n_l$-$l$ plots for various sized proteins, we obtain a universal curve, as well as the deviation from it due to finite size effects. Thus we can discuss the asymptotic behavior in the large $N$ limit. We thereby find that the network in protein native structures is FN, not SWN. This is the first result of this letter.
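Concretely, $n_l$ can be computed by a breadth-first search from every node (a sketch; the function name and the toy chain graph are ours):

```python
import numpy as np
from collections import deque

def avg_reachable(adj, lmax):
    # n_l: number of distinct nodes visited within l steps, averaged
    # over all starting nodes; adj is a boolean adjacency matrix.
    N = len(adj)
    counts = np.zeros(lmax + 1)
    for s in range(N):
        dist = np.full(N, -1)
        dist[s] = 0
        queue = deque([s])
        while queue:                       # standard BFS from node s
            i = queue.popleft()
            for j in np.flatnonzero(adj[i]):
                if dist[j] < 0:
                    dist[j] = dist[i] + 1
                    queue.append(j)
        for l in range(lmax + 1):
            counts[l] += np.count_nonzero((dist >= 0) & (dist <= l))
    return counts / N

# Example: a 5-node chain 0-1-2-3-4.
chain = np.zeros((5, 5), dtype=bool)
for i in range(4):
    chain[i, i + 1] = chain[i + 1, i] = True
nl = avg_reachable(chain, 2)
```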
We then obtain the three characteristic dimensions of the fractal residue network: the network topological dimension $D_c$, the fractal dimension $D_f$, and the spectral dimension $D_s$. Their values are universal among single-chain proteins. Furthermore, these three values surprisingly coincide with those of the 3D critical percolation cluster. Namely, proteins belong to the universality class of the 3D critical percolation cluster. This is the second and central result of this letter.
#### Small world network vs fractal network
First of all, we define the network in a protein native structure. We use the spatial information of the native structure in the Protein Data Bank (PDB) [@PDB]. We regard amino-acid residues as nodes; we represent them by [$\mathrm{C}_\mathrm{\alpha}$]{} atoms, which is standard in coarse-grained models [@Atilgan_etal2001] and was indeed employed in past network studies [@Vendruscolo_etal2002; @Dokholyan_etal2002; @Atilgan_Akan_Canan2004; @Bagler_Sinha2005]. A pair of nodes, $i$ and $j$, is considered to have an edge if their Euclidean distance, $d_{ij}$, is less than a cut-off distance, $d_c$. Then the network is characterized by the adjacency matrix: $$\begin{aligned}
\mathbf{A}=(A_{ij}),
\quad
A_{ij}=\Theta(d_c-d_{ij})\end{aligned}$$ where $\Theta(\cdot)$ is the Heaviside step function. Here we adopt $d_c=$ 7[Å]{}, which corresponds to the second coordination shell in the radial distribution function of [$\mathrm{C}_\mathrm{\alpha}$]{}; we have also confirmed that the result below is robust to the choice of $d_c$ from 6 to 10[Å]{} [@fullpaper].
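As a minimal sketch of this construction (the coordinates below are a synthetic straight chain with the typical 3.8 Å [$\mathrm{C}_\mathrm{\alpha}$]{} spacing, not real PDB data; the function name is illustrative only), the adjacency matrix can be built directly from the [$\mathrm{C}_\mathrm{\alpha}$]{} positions:

```python
import numpy as np

def residue_network(coords, d_c=7.0):
    """Adjacency matrix A_ij = Theta(d_c - d_ij) from C-alpha coordinates.

    coords: (N, 3) array of positions in Angstroms.  Returns a symmetric
    0/1 matrix with zero diagonal (no self-edges).
    """
    diff = coords[:, None, :] - coords[None, :, :]
    d = np.sqrt((diff**2).sum(axis=-1))        # Euclidean distance matrix
    A = (d < d_c).astype(int)
    np.fill_diagonal(A, 0)
    return A

# Synthetic example: a straight chain with 3.8 A C-alpha spacing; with
# d_c = 7 A only nearest neighbors (|i-j| = 1, d = 3.8 A) are linked,
# since next-nearest neighbors sit at 7.6 A.
coords = np.array([[3.8 * i, 0.0, 0.0] for i in range(10)])
A = residue_network(coords)
```

A real analysis would instead read the [$\mathrm{C}_\mathrm{\alpha}$]{} records from a PDB file before calling this function.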
Let $n_l^{(i)}$ be the number of nodes that a walker on the network starting from node $i$ can visit within $l$ steps. Since we are interested in the overall network property of a protein, we consider its average, $n_l=\sum_i n_l^{(i)}/N$. As $l$ becomes larger, $n_l$ monotonically increases and finally saturates at $N$. In a $D$-dimensional normal lattice, $n_l \sim l^D$. If the network is FN, similarly, the following scaling holds [@Csanyi_Szendroi2004]: $$\begin{aligned}
n_l \sim l^{D_c},
\label{eq:n_l_FN}\end{aligned}$$ where $D_c$ is referred to as the network topological dimension [@Stauffer_Aharony1994; @Nakayama_Yakubo_Orbach1994]. If the network is SWN, in contrast, the relationship is [@Csanyi_Szendroi2004], $$\begin{aligned}
n_l \sim \exp(l/l_0),
\label{eq:n_l_SWN}\end{aligned}$$ for a positive constant $l_0$. Note again that the relationships [(\[eq:n\_l\_FN\])]{} and [(\[eq:n\_l\_SWN\])]{} are essentially different, leading to the dichotomy between FN and SWN [@Csanyi_Szendroi2004].
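The dichotomy between [(\[eq:n\_l\_FN\])]{} and [(\[eq:n\_l\_SWN\])]{} can be probed numerically by breadth-first search. The sketch below is a sanity check of the scaling itself, not a protein computation: on a periodic square lattice the ball of graph radius $l$ contains exactly $2l^2+2l+1$ nodes, so $n_l \sim l^2$, i.e. $D_c = 2$.

```python
import numpy as np
from collections import deque

def n_l(adj, l_max):
    """Average number of nodes reachable within l steps, for l = 0..l_max.

    adj: dict mapping node -> list of neighbor nodes.
    """
    N = len(adj)
    counts = np.zeros(l_max + 1)
    for start in adj:
        dist = {start: 0}
        q = deque([start])
        while q:                               # breadth-first search
            u = q.popleft()
            if dist[u] == l_max:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for d in dist.values():
            counts[d:] += 1                    # node counted for every l >= d
    return counts / N

# Periodic square lattice: the ball of graph radius l holds exactly
# 2*l*l + 2*l + 1 nodes (for l < side/2), so n_l ~ l^2, i.e. D_c = 2.
side = 41
adj = {(i, j): [((i + di) % side, (j + dj) % side)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
       for i in range(side) for j in range(side)}
nl = n_l(adj, 8)
```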
![Averaged number of nodes $n_l$ that a walker on the network starting from a node can visit within $l$ steps; plotted in (a) log-log and (b) semi-log scales.[]{data-label="fig:n_vs_l"}](fig1.eps){width="48.00000%"}
[FIG. \[fig:n\_vs\_l\]]{} shows the relationship between $n_l$ and $l$; the same data sets are plotted in (a) log-log and (b) semi-log scales. We present the data for five representative proteins of different sizes: ribonuclease T1 (PDB ID=9RNT, 104 amino acids (a.a.)), cutinase (1CUS, 200 a.a.), green fluorescent protein (1EMA, 236 a.a.), actin (1J6Z, 375 a.a.), and subfragment 1 of myosin (1SR6, 1152 a.a.). Clearly the data obey the power-law scaling better than the exponential dependence. This is also supported by considering the finite-size effect as follows. In (a), the range where the data follow the power-law scaling tends to extend as the number of nodes $N$ increases. This suggests the existence of an asymptotic universal line in the limit $N\to\infty$. In (b), on the contrary, we cannot see such an asymptotic tendency. Thus we conclude that proteins universally obey the power-law scaling [(\[eq:n\_l\_FN\])]{} with $D_c\approx 1.9$. Hence the networks in protein native structures are FN, not SWN.
In much larger proteins, $D_c$ often takes a somewhat larger value than 1.9, or the scaling itself is even smeared out. This is because larger proteins are usually not single-domain or single-chain but multi-domain or multi-chain proteins. Even in such proteins, however, each single-domain or single-chain component still yields the same scaling law with the same dimension $D_c\approx 1.9$ [@fullpaper].
One plausible reason why the network is not SWN but FN is that the residues are spatially restricted in 3D Euclidean space. Indeed, it has been suggested that networks with spatial (geographical) restriction tend to be regular (including fractal) networks rather than SWN [@Csanyi_Szendroi2004].
#### Fractal dimension
In addition to the network topological dimension $D_c$, FN is in general characterized by two other dimensions: the fractal dimension $D_f$ and the spectral dimension $D_s$ [@Stauffer_Aharony1994]. While these three dimensions and the Euclidean dimension $D$ are identical in a normal lattice, they can differ in FN.
The fractal dimension is determined from the spatial distribution of nodes. Here we again employ the method within single proteins, in contrast to the previous studies [@frac_dim_prev], in order to discuss the asymptotic behavior in the limit $N\to\infty$. Let $n^{(i)}(d)$ be the number of nodes whose distance from node $i$ is less than $d$; $n^{(i)}(d)=\sum_j \Theta (d-d_{ij})$. Since we are interested in the overall network property of a protein, we consider its average, $n(d)=\sum_i n^{(i)}(d)/N$, that is, $$\begin{aligned}
n(d)=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N} \Theta (d-d_{ij}).\end{aligned}$$ Note that this is nothing but the correlation integral introduced by Grassberger and Procaccia [@Grassberger_Procaccia1983a], although here it is left unnormalized in order to treat the finite-size effect. As $d$ becomes larger, $n(d)$ monotonically increases and finally saturates at $N$. In a $D$-dimensional normal lattice, $n(d) \sim d^D$. Similarly, if the spatial distribution of nodes is fractal, $$\begin{aligned}
n(d) \sim d^{D_f},
\label{eq:n_vs_d_FN}\end{aligned}$$ where $D_f$ is referred to as the fractal dimension.
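The correlation integral is straightforward to evaluate numerically; the sketch below uses a synthetic one-dimensional chain of points (not protein data), for which the counts are known exactly and the log-log slope of $n(d)$ must approach $D_f = 1$:

```python
import numpy as np

def correlation_integral(coords, d_values):
    """Unnormalized Grassberger-Procaccia correlation integral
    n(d) = (1/N) * sum_{i,j} Theta(d - d_ij), including the i = j term."""
    diff = coords[:, None, :] - coords[None, :, :]
    dij = np.sqrt((diff**2).sum(axis=-1))
    return np.array([(dij < d).sum() / len(coords) for d in d_values])

# Deterministic check on a chain of 200 unit-spaced points: n(d) ~ d^1,
# so the log-log slope between d = 4 and d = 8 approaches D_f = 1.
coords = np.array([[float(i), 0.0, 0.0] for i in range(200)])
n4, n8 = correlation_integral(coords, [4.0, 8.0])
slope = np.log(n8 / n4) / np.log(2.0)
```

The residual deviation of the slope from 1 reflects the boundary of the finite chain, the same finite-size effect discussed for proteins in the text.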
![Averaged number of residues $n(d)$ whose distance from a residue is less than $d$, for the same proteins as in [FIG. \[fig:n\_vs\_l\]]{}.[]{data-label="fig:n_vs_d"}](fig2.eps){width="35.00000%"}
[FIG. \[fig:n\_vs\_d\]]{} shows $n(d)$ versus $d$ on a log-log scale, for the same proteins as in [FIG. \[fig:n\_vs\_l\]]{}. The relationship follows power-law scaling. As in the case of the network topological dimension, this is supported by considering the finite-size effect; the power-law scaling range tends to extend as the number of nodes increases, suggesting the existence of an asymptotic universal line in the limit $N\to\infty$. Thus we conclude that proteins universally follow the power-law scaling [(\[eq:n\_vs\_d\_FN\])]{} with the fractal dimension $D_f\approx 2.5$, which is consistent with the previous studies [@frac_dim_prev].
#### Spectral dimension
The spectral dimension is determined from the density of normal modes (DNM). According to the Debye theory, the DNM in a $D$-dimensional normal lattice is $\rho(\omega) \sim \omega^{D-1}$. Similarly, the DNM in FN obeys $$\begin{aligned}
\rho(\omega) \sim \omega^{D_s-1},
\label{eq:rho_vs_omega_FN}\end{aligned}$$ where $D_s$ is referred to as the spectral dimension [@Nakayama_Yakubo_Orbach1994].
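A minimal consistency check of this Debye-type relation (a textbook case, not a protein model): for a free one-dimensional harmonic chain with unit masses and springs, the dynamical matrix is the path-graph Laplacian, and $\rho(\omega)\sim\omega^{D_s-1}$ with $D_s=1$, i.e. a flat low-frequency DNM.

```python
import numpy as np

# Free 1D harmonic chain with unit masses and springs: the dynamical matrix
# is the Laplacian of a path graph, and omega = sqrt(eigenvalue).
N = 400
lap = np.zeros((N, N))
np.fill_diagonal(lap, 2.0)
lap[0, 0] = lap[-1, -1] = 1.0                  # free ends
idx = np.arange(N - 1)
lap[idx, idx + 1] = lap[idx + 1, idx] = -1.0

omega = np.sqrt(np.clip(np.linalg.eigvalsh(lap), 0.0, None))

# With D_s = 1, rho(omega) ~ omega^0: equal-width low-frequency bins
# should contain nearly equal numbers of modes.
c1 = int(np.sum((omega >= 0.0) & (omega < 0.25)))
c2 = int(np.sum((omega >= 0.25) & (omega < 0.5)))
```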
The DNM is, in general, obtained experimentally by spectroscopy and numerically by normal mode analysis (NMA). To be relevant to experiments, we conduct NMA in the all-atom model, not in a coarse-grained model. Then, by focusing on the frequency region corresponding to the residue-residue interactions, we consider the spectral dimension of the residue network. We do so because NMA requires the interaction strengths to be taken into account precisely. In the all-atom model, the interaction strengths are quite reliable, since they are basically obtained from quantum-chemical calculations. In a coarse-grained model, in contrast, the interaction strengths are introduced rather arbitrarily. It is true that coarse-grained models reproduce the overall fluctuation of the protein native structure well [@Atilgan_etal2001]. This is, however, largely because only a limited number of lowest-frequency normal modes (or largest-amplitude principal components) dominate the fluctuation. There is no guarantee that they also reproduce the DNM over several decades of frequency. Indeed, it has been reported that there is an essential difference in the DNM between the all-atom model and a coarse-grained model with identical interaction strengths [@Takano_etal2004]. Instead, here we coarse grain the DNM itself, by truncating the higher-frequency region. We perform NMA using the program NMODE implemented in the AMBER software [@Case_etal2005], with the AMBER force field (parm99) and an implicit-water (generalized Born) model. Before NMA, energy minimization is executed with the Newton-Raphson and conjugate-gradient methods, so that the norm of the force is less than the order of $10^{-12}\;\mathrm{kcal}\;\mathrm{mol}^{-1}\mbox{\AA}^{-1}$.
![Density of normal modes $\rho(\omega)$ of (a) ribonuclease T1 (PDB ID=9RNT) and (b) cutinase (1CUS). Various bin sizes $\Delta\omega$ are used so as to display the master curve more clearly.[]{data-label="fig:DNM"}](fig3.eps){width="48.00000%"}
We have obtained the DNM for several proteins, and [FIG. \[fig:DNM\]]{} shows typical results; these are essentially similar to those of one of the previous numerical studies [@Yu_Leitner2003]. There are two shoulders at around $10$ and $100~\mbox{cm}^{-1}$, denoted respectively by $\omega_{FS}$ and $\omega_{GL}$. The frequencies higher than $\omega_{GL}$ correspond to local motions, due to covalent-bond stretching and angle-bending motions. The frequencies lower than $\omega_{GL}$, in contrast, correspond to global motions due to residue-residue interactions, which we are interested in here. In the latter region, the DNM obeys the power-law scaling [(\[eq:rho\_vs\_omega\_FN\])]{} with $D_s\approx 1.3$. At around $\omega_{FS}$, the dimension changes from $1.3$ to $3.0$. This is due to the finite-size effect; through a long-wavelength probe, the protein is seen simply as a 3D object. Indeed, a similar change in slope due to the finite-size effect is observed in percolation clusters [@Nakayama_Yakubo_Orbach1994]. We expect that, in much larger proteins, $\omega_{FS}$ shifts toward lower frequencies, and accordingly the region of $D_s\approx 1.3$ becomes wider. Thus we conclude that the residue-residue interactions in proteins universally follow the power-law scaling [(\[eq:rho\_vs\_omega\_FN\])]{} with the spectral dimension $D_s\approx 1.3$.
We now discuss the reason why some of the previous studies [@Wako1989; @benAvraham1993] gave $D_s$ larger than 1.3. In these studies, $D_s$ was obtained not from the DNM, i.e., the probability density function $\rho(\omega)$, but from its cumulative distribution function $\Omega(\omega)=\int_0^\omega \mathrm{d}\omega'\rho(\omega')$. The $D_s$ obtained from $\rho(\omega)$ is identical with that from $\Omega(\omega)$ only if a single scaling holds over the whole range considered. In proteins, however, the scaling changes at around $\omega_{FS}$ due to the finite-size effect. This accordingly gives an illusory larger value of $D_s$. To illustrate this simply, we model the probability density function as a function that sharply changes its scaling at $\omega_{FS}$: $$\begin{aligned}
\rho(\omega)=
\begin{cases}
\displaystyle
\frac{C}{\omega_{FS}}\left(\frac{\omega}{\omega_{FS}}\right)^{D-1}
& (\omega\leq\omega_{FS})\\
\displaystyle
\frac{C}{\omega_{FS}}\left(\frac{\omega}{\omega_{FS}}\right)^{D_s-1}
& (\omega>\omega_{FS})
\end{cases}
\label{eq:prob_exp}\end{aligned}$$ with a dimensionless positive constant $C$. Its cumulative distribution function is, $$\begin{aligned}
\Omega(\omega) &= \int_0^\omega \mathrm{d}\omega'\rho(\omega') \nonumber \\
&=
\begin{cases}
\displaystyle
\frac{C}{D}\left(\frac{\omega}{\omega_{FS}}\right)^D & (\omega\leq\omega_{FS})\\
\displaystyle
\frac{C}{D_s}\left[\left(\frac{\omega}{\omega_{FS}}\right)^{D_s} - \left(1-\frac{D_s}{D}\right)\right]
& (\omega>\omega_{FS}).
\end{cases}
\label{eq:dist_exp}\end{aligned}$$ The gradient of $\log\Omega$ with respect to $\log\omega$ gives a larger value than the correct spectral dimension $D_s$ at around $\omega\gtrsim\omega_{FS}$. The gradient would yield $D_s$ only in the region $\omega/\omega_{FS} \gg (1-D_s/D)^{1/D_s}$. In proteins, $D=3$ and $D_s=1.3$, so this requires $\omega/\omega_{FS} \gg 0.56$. This region, however, corresponds to the local motions, not to the global residue-residue interactions in which we have found the universality.
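This bias is easy to reproduce numerically from the model distribution [(\[eq:dist\_exp\])]{}: just above $\omega_{FS}$ the local log-log slope of $\Omega$ is noticeably larger than $D_s$, and it relaxes to $D_s$ only far above $\omega_{FS}$. The sketch below uses $C=1$ and the parameter values quoted in the text.

```python
import numpy as np

D, Ds, C = 3.0, 1.3, 1.0        # parameter values quoted in the text

def Omega(x):
    """Cumulative distribution of the model DNM; x = omega/omega_FS."""
    x = np.asarray(x, dtype=float)
    low = (C / D) * x**D
    high = (C / Ds) * (x**Ds - (1.0 - Ds / D))
    return np.where(x <= 1.0, low, high)

def log_gradient(x, h=1.0e-5):
    """Local slope d log(Omega) / d log(omega) at omega = x * omega_FS."""
    return float((np.log(Omega(x * (1 + h))) - np.log(Omega(x * (1 - h))))
                 / np.log((1 + h) / (1 - h)))

g_near = log_gradient(1.5)      # just above omega_FS: slope is inflated
g_far = log_gradient(50.0)      # far above omega_FS: slope relaxes to D_s
```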
#### Conclusion: universality class of 3D critical percolation cluster
We have thus obtained the characteristic dimensions of the FN inherent in protein native structures, $(D, D_c, D_f, D_s)=(3, 1.9, 2.5, 1.3)$. Note that these dimensions agree surprisingly well with those of the 3D critical percolation cluster, $(D, D_c, D_f, D_s)=(3, 1.885, 2.53, 1.3)$ [@Stauffer_Aharony1994]. Hence we propose that protein native structures belong to the universality class of the 3D critical percolation cluster. This is the main statement of this letter.
Why, then, are proteins as residue-contact networks critically percolated? Although it is difficult to give a complete answer at the present stage of this study, we can still provide a plausible explanation by pointing out two important aspects of proteins: stability and instability. On the one hand, proteins fold into their own (almost) unique native structures. Even when they are forced to unfold, they spontaneously refold into the native structures (often with help from molecular chaperones). In this sense, proteins are stable. On the other hand, proteins flexibly change their structures. The structural change is sometimes accompanied even by (partial) unfolding. In this sense, proteins are unstable. The coexistence of these two conflicting aspects is essential for the functions of proteins, in particular for working as molecular machines or allosteric enzymes. Being in the critical state is sufficient for this coexistence. Furthermore, the criticality may even be necessary; proteins should then evolve towards the critical state [@Kauffman1993; @Bak1996]. This hypothesis should be verified through studies of molecular evolution, which is a challenging subject for the future.
This work was partially supported by Grants-in-Aids for Scientific Research in Priority Areas, the 21st Century COE Program (Physics of Self-Organization Systems), and “Academic Frontier” Project from MEXT.
---
abstract: 'We present a generic technique, automated by computer-algebra systems and available as open-source software [@TaylorDuffyCode], for efficient numerical evaluation of a large family of singular and nonsingular 4-dimensional integrals over triangle-product domains, such as those arising in the boundary-element method (BEM) of computational electromagnetism. Previously, practical implementation of BEM solvers often required the aggregation of multiple disparate integral-evaluation schemes [@Taylor2003; @Duffy1982; @Taskinen2003; @Jarvenpaa2006; @TongChew2007; @Klees1996; @Cai2002; @Khayat2005; @Ismatullah2008; @Graglia2008; @Polimeridis2010; @Polimeridis2013; @Andra1997; @SauterSchwab2010; @Erichsen1998] in order to treat all of the distinct types of integrals needed for a given BEM formulation; in contrast, our technique allows many different types of integrals to be handled by the *same* algorithm and the same code implementation. Our method is a significant generalization of the Taylor–Duffy approach [@Taylor2003; @Duffy1982], which was originally presented for just a single type of integrand; in addition to generalizing this technique to a broad class of integrands, we also achieve a significant improvement in its efficiency by showing how the *dimension* of the final numerical integral may often be reduced by one. In particular, if $n$ is the number of common vertices between the two triangles, in many cases we can reduce the dimension of the integral from $4-n$ to $3-n$, obtaining a closed-form analytical result for $n=3$ (the common-triangle case).'
author:
- 'M. T. Homer Reid, [^1] Jacob K. White, *Fellow, IEEE*, [^2] and Steven G. Johnson [^3]'
title: 'Generalized Taylor-Duffy Method for Efficient Evaluation of Galerkin Integrals in Boundary-Element Method Computations'
---
Introduction {#IntroductionSection}
============
The application of boundary-element methods {BEM [@Harrington93; @Chew2009], also known as the method of moments (MOM)} to surfaces discretized into triangular elements commonly requires evaluating four-dimensional integrals over triangle-product domains of the form [$$\mathcal{I}=
\int_{{\mathcal}T} \, d{\mathbf{x}} \, \int_{{\mathcal}T^\prime} \, d{\mathbf{x}}^\prime \,
P\big({\mathbf{x}}, {\mathbf{x}}^\prime\big) K\big(|{\mathbf{x}}-{\mathbf{x}}^\prime|\big)
\label{OriginalIntegral}$$]{} where $P$ is a polynomial, $K(r)$ is a kernel function which may be singular at $r=0$, and ${\mathcal}T, {\mathcal}T^\prime$ are flat triangles; we will here be concerned with the case in which ${\mathcal}T,{\mathcal}T^\prime$ have one or more common vertices. Methods for efficient and accurate evaluation of such integrals have been extensively researched; among the most popular strategies are singularity subtraction (SS) [@Taskinen2003; @Jarvenpaa2006; @TongChew2007], singularity cancellation (SC) [@Klees1996; @Cai2002; @Khayat2005; @Ismatullah2008; @Graglia2008], and fully-numerical schemes [@Polimeridis2010; @Polimeridis2013]. (Strategies have also been proposed to handle the *near-singular* case in which ${\mathcal}T, {\mathcal}T^\prime$ have vertices which are nearly but not precisely coincident [@Botha2013; @Vipiana2013]; we do not address that case here.) Particularly interesting among SC methods is the scheme proposed by Taylor [@Taylor2003] following earlier ideas of Duffy [@Duffy1982] (see also Refs. \[\]); we will refer to the method of [Ref. ]{} as the “Taylor-Duffy method” (TDM). This method considered the specific kernel $K{^{\hbox{\scriptsize{Helmholtz}}}}(r)=\frac{e^{ikr}}{4\pi r}$ and a specific linear polynomial $P{^{\hbox{\scriptsize{linear}}}}$ and reduced the singular 4-dimensional integral (\[OriginalIntegral\]) to a nonsingular $(4-n)$-dimensional integral (where $n\in \{1,2,3\}$ is the number of vertices common to ${\mathcal}T, {\mathcal}T^\prime$) with a complicated integrand obtained by performing various manipulations on $K{^{\hbox{\scriptsize{Helmholtz}}}}$ and $P{^{\hbox{\scriptsize{linear}}}}$. The reduced integral is then evaluated numerically by simple cubature methods.
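The essential mechanism of the Duffy transformation can be seen in a one-triangle toy problem (much simpler than the triangle-product integrals treated in this paper): substituting $y=xu$ over the triangle $0\le y\le x\le 1$ cancels the $1/r$ singularity, leaving a smooth integrand that ordinary Gaussian quadrature handles easily. The Python sketch below is illustrative only:

```python
import numpy as np

# Toy one-triangle version of Duffy's idea:
#   I = int_0^1 dx int_0^x dy / sqrt(x^2 + y^2)
# is singular at the origin, but substituting y = x*u (Jacobian x) gives
#   I = int_0^1 dx int_0^1 du / sqrt(1 + u^2),
# a smooth integrand; the x integral now just contributes a factor of 1.
nodes, weights = np.polynomial.legendre.leggauss(20)
u = 0.5 * (nodes + 1.0)                 # Gauss-Legendre rule mapped to [0, 1]
w = 0.5 * weights
I_duffy = np.sum(w / np.sqrt(1.0 + u**2))

exact = np.arcsinh(1.0)                 # analytic value, log(1 + sqrt(2))
```

A direct product rule applied to the original singular integrand would converge far more slowly; after the substitution, a 20-point rule already reaches near machine precision.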
Our first objective is to show that the TDM may be generalized to handle a significantly broader class of integrand functions. Whereas [Ref. ]{} addressed the *specific* case of the Helmholtz kernel combined with constant or linear factors, the master formulas we present \[equations (2) in Section \[GeneralizedTaylorDuffySection\]\] are nonsingular reduced-dimensional versions of (\[OriginalIntegral\]) that apply to a broad *family* of kernels $K$ combined with *arbitrary* polynomials $P$. Our master formulas (2) involve new functions ${\mathcal}K$ and ${\mathcal}P$ derived from $K$ and $P$ in (\[OriginalIntegral\]) by procedures, discussed in the main text and Appendices, that abstract and generalize the techniques of [Ref. ]{}.
We next extend the TDM by showing that, for some kernels—notably including the “$r-$power” kernel $K(r)=r^p$ for integer $p$—the reduction of dimensionality effected by the TDM may be carried one dimension further, so that the original 4-dimensional integral is converted into a $(3-n)$-dimensional integral \[equations (5) in Section \[TwiceIntegrableKernelSection\]\]. In particular, in the common-triangle case $(n=3)$, we obtain a *closed-form analytical solution* of the full 4-dimensional integral (\[OriginalIntegral\]). This result encompasses and generalizes existing results [@Caorsi1993; @Eibert1995] for closed-form evaluations of the four-dimensional integral for certain special $P$ and $K$ functions.
A characteristic feature of many published strategies for evaluating integrals of the form (\[OriginalIntegral\]) is that they depend on specific choices of the $P$ and $K$ functions, with (in particular) each new type of kernel understood to necessitate new computational strategies. In practical implementations this can lead to cluttered codes, requiring multiple distinct modules for evaluating the integrals needed for distinct BEM formulations. The technique we propose here alleviates this difficulty. Indeed, as we discuss in Section \[BEMFormulationsSection\], the flexibility of our generalized TDM allows the *same* basic code {$\sim$1,500 lines of C++ (not including general-purpose utility libraries), available for download as free open-source software [@TaylorDuffyCode]} to handle *all* singular integrals arising in several popular BEM formulations. Although separate techniques for computing these integrals have been published before, the novelty of our approach is to attack many different integrals with the *same* algorithm and the *same* code implementation.
Of course, the efficiency and generality of the TDM reduction do not come for free: the cost is that the reduction *process*—specifically, the procedure by which the original polynomial $P$ in (\[OriginalIntegral\]) is converted into new polynomials ${\mathcal}P$ that enter the master formulas (2) and (5)—is tedious and error-prone if carried out by hand. To alleviate this difficulty, we have developed a computer-algebra technique for automating this conversion (Section \[ComputerAlgebraSection\]); our procedure inputs the coefficients of $P$ and emits code for computing ${\mathcal}P$, which may be directly incorporated into routines for numerical evaluation of the integrands of the reduced integrals (2) or (5).

The TDM reduces the 4-dimensional integral (\[OriginalIntegral\]) to a lower-dimensional integral which is evaluated by numerical cubature. How smooth is the reduced integrand, and how rapidly does the cubature converge with the number of integrand samples? These questions are addressed in Section \[ExamplesSection\], where we plot integrands and convergence rates for the reduced integrals resulting from applying the generalized TDM to a number of practically relevant cases of (\[OriginalIntegral\]). We show that—notwithstanding the presence of singularities in the original integral or geometric irregularities in the panel pair—the reduced integrand is typically a smooth, well-behaved function which succumbs readily to straightforward numerical cubature.
Although the TDM is an SC scheme, it has useful application to SS schemes. In such methods one subtracts the first few terms from the small-$r$ expansion of the Helmholtz kernels; the non-singular integral involving the subtracted kernel is evaluated by simple numerical cubature, but the integrals involving the singular terms must be evaluated by other means. In Section \[CachingSection\] we note that these are just another type of singular integral of the form (\[OriginalIntegral\]), whereupon they may again be evaluated using the same generalized TDM code—and, moreover, because the kernel in these integrals is just the “$r$-power” kernel $K(r)=r^p$, the improved TDM reduction discussed in Section \[TwiceIntegrableKernelSection\] is available. We compare the efficiency of the unadorned TDM to a combined TDM/SS method and note that the latter is particularly effective for broadband studies of the same structure at many different frequencies.
Finally, in Section \[HighKSection\] we note a curious property of the Helmholtz kernel in the short-wavelength limit: as $k\to\infty$, this kernel becomes “twice-integrable” (a notion discussed below), and the accelerated TDM scheme of Section \[TwiceIntegrableKernelSection\] becomes available. In particular, in the common-triangle case, the full four-dimensional integral (\[OriginalIntegral\]) with $K(r)=\frac{e^{ikr}}{4\pi r}$ and arbitrary polynomial $P$ may be evaluated in closed analytical form in this limit.
Our conclusions are presented in Section \[ConclusionsSection\], and a number of technical details are relegated to the Appendices. A free, open-source software implementation of the method presented in this paper is available online [@TaylorDuffyCode].
Master Formulas for the Generalized Taylor-Duffy Method {#GeneralizedTaylorDuffySection}
=======================================================
In the integrals of equation (2),
- The $d$ index runs over subregions into which the original 4-dimensional integration domain is divided; there are $\{3, 6, 2\}$ subregions for the {CT, CE, CV} cases, respectively.
- For each subregion $d$, the $X_d$ functions are “reduced distance” functions for that subregion. $X_d(\{y_i\})$ is the square root of a second-degree polynomial in the $y_i$ variables, whose coefficients depend on the geometrical parameters of the two triangles. (Explicit expressions are given in Appendix \[SubRegionAppendix\].) Note that the division into subregions, and the $X_d$ functions, are independent of the specific $P$ and $K$ functions in the original integrand.
- For each subregion $d$ and each integer $n$, the functions $\mathcal{P}_{dn}(y_i)$ are polynomials derived from the original polynomial $P({\mathbf{x}}, {\mathbf{x}}^\prime)$ in (\[OriginalIntegral\]). For a given $P({\mathbf{x}}, {\mathbf{x}}^\prime),$ the derived polynomials $\mathcal{P}_{dn}$ are only nonzero for certain integers $n$; this defines the limits of the $n$ summations in (2). The procedure for obtaining $\mathcal{P}$ from $P$ is discussed in Section \[ComputerAlgebraSection\] and Appendix \[SubRegionAppendix\].
- For each integer $n$, the function $\mathcal{K}_n$ is obtained from the $K$ kernel as follows: [$$\mathcal{K}_n(X) \equiv \int_0^1 w^n K(wX) \, dw. \label{FirstIntegral}$$]{} For several kernels of interest, this integral may be evaluated explicitly to obtain a closed-form expression for $\mathcal{K}_n$. We will refer to such kernels as *once-integrable*. Appendix \[FirstSecondIntegralAppendix\] tabulates the $\mathcal{K}_n$ functions for several once-integrable kernel functions. (In the following section we will introduce the further notion of *twice-integrability.*)
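For instance, for the Helmholtz kernel $K(r)=e^{ikr}/(4\pi r)$ the $w$ integral with $n=1$ is elementary, $\mathcal{K}_1(X)=(e^{ikX}-1)/(4\pi i k X^2)$, a form one can check by direct quadrature since the integrand $w\,K(wX)=e^{ikwX}/(4\pi X)$ is smooth. The following sketch verifies this numerically; the sample values of $X$ and $k$ are arbitrary:

```python
import numpy as np

def K_helmholtz(r, k):
    """Helmholtz kernel K(r) = exp(i*k*r) / (4*pi*r)."""
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

def K1_closed(X, k):
    """Closed form of K_1(X) = int_0^1 w K(wX) dw for the Helmholtz kernel."""
    return (np.exp(1j * k * X) - 1.0) / (4.0 * np.pi * 1j * k * X**2)

# Gauss-Legendre quadrature on [0, 1]; the integrand w*K(wX) is smooth
# (the 1/r singularity cancels against the factor w), so convergence is fast.
nodes, weights = np.polynomial.legendre.leggauss(30)
wq = 0.5 * (nodes + 1.0)
wt = 0.5 * weights
X, k = 0.7, 2.0 + 0.5j          # arbitrary sample arguments; k may be complex
K1_num = np.sum(wt * wq * K_helmholtz(wq * X, k))
```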
The key advantage of the TDM is that equation (\[FirstIntegral\]) isolates the integrable singularities in (\[OriginalIntegral\]) into a one-dimensional integral which may be performed analytically. This not only reduces the dimension of the original integral (\[OriginalIntegral\]), but also neutralizes its singularities, leaving behind a smooth integrand. (In the CT and CE cases, the dimension of the integral may be reduced further.) The remaining integrals (2), though complicated, are amenable to efficient evaluation by numerical cubature.
Improved TDM Formulas for Twice-Integrable Kernels {#TwiceIntegrableKernelSection}
==================================================
The TDM reduces the original 4-dimensional integral (\[OriginalIntegral\]) to the $(4-n)$-dimensional integral (2) where $n\in\{1,2,3\}$ is the number of common vertices. In this section we show that, for certain kernel functions, it is possible to go further; when the kernel is *twice-integrable*, in a sense defined below, the original 4-dimensional integral is reduced to a $(3-n)$-dimensional integral. In particular, for the case $n=3$, the full 4-dimensional integral may be evaluated explicitly to yield a *closed-form expression* requiring no numerical integrations.
The master TDM formulas for twice-integrable kernels are equations (5) at the top of the following page, and their derivation is discussed below.
Twice-Integrable Kernels {#twice-integrable-kernels .unnumbered}
------------------------
Above we referred to a kernel function $K(r)$ as *once integrable* if it is possible to evaluate the integral (\[FirstIntegral\]) in closed form. For such kernels, we now introduce a further qualification: we refer to $K(r)$ as *twice-integrable* if it is possible to obtain closed-form expressions for the following two integrals involving the $\mathcal{K}$ function defined by (\[FirstIntegral\]):
$$\begin{aligned}
\mathcal{J}_n(\alpha,\beta,\gamma)
&\equiv
\int_0^1 \mathcal{K}_n\Big( \alpha\sqrt{ (y+\beta)^2 + \gamma^2} \Big) \, dy
\\
\mathcal{L}_n(\alpha,\beta,\gamma)
&\equiv
\int_0^1 y\, \mathcal{K}_n\Big( \alpha\sqrt{ (y+\beta)^2 + \gamma^2} \Big) \, dy.\end{aligned}$$
\[SecondIntegrals\]
In particular, the kernel $K(r)=r^p$ is twice-integrable for arbitrary integer powers $p$; moreover, in Section \[HighKSection\] we show that the Helmholtz kernels become twice-integrable in the limit $\text{Im } k\to \infty.$ (Expressions for $\mathcal{J}$ and $\mathcal{L}$ in all these cases are collected in Appendix \[FirstSecondIntegralAppendix\].)
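As a concrete check of twice-integrability for the $r$-power family: with $K(r)=1/r$ (the $p=-1$ case) one has $\mathcal{K}_n(X)=1/(nX)$ for $n\ge 1$, and the $y$ integral in $\mathcal{J}_n$ is an elementary $\mathrm{arcsinh}$ (for $\gamma>0$). The sketch below, with arbitrary sample parameters, verifies this closed form against quadrature; expressions for general $p$ are tabulated in Appendix \[FirstSecondIntegralAppendix\].

```python
import numpy as np

def J_rpow_p_minus1(n, alpha, beta, gamma):
    """Closed form of J_n(alpha,beta,gamma) for K(r) = 1/r (p = -1, n >= 1,
    gamma > 0): K_n(X) = 1/(n*X), so the y integral is an arcsinh."""
    return (np.arcsinh((1.0 + beta) / gamma)
            - np.arcsinh(beta / gamma)) / (n * alpha)

# Direct quadrature of J_n = int_0^1 K_n(alpha*sqrt((y+beta)^2+gamma^2)) dy.
nodes, weights = np.polynomial.legendre.leggauss(40)
y = 0.5 * (nodes + 1.0)
wt = 0.5 * weights
n, alpha, beta, gamma = 2, 1.3, 0.4, 0.9   # arbitrary sample parameters
X = alpha * np.sqrt((y + beta)**2 + gamma**2)
J_num = np.sum(wt * (1.0 / (n * X)))       # K_n(X) = 1/(n X) for p = -1
```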

The TDM For Twice-Integrable Kernels {#the-tdm-for-twice-integrable-kernels .unnumbered}
------------------------------------
For twice-integrable kernels, the formulas (2) may be further simplified by analytically evaluating the innermost integral in each case. Thus, for the {CT, CE, CV} cases, we analytically perform the $\{y_1, y_2, y_3\}$ integrations, respectively.
We will consider here the case in which the ${\mathcal}P$ polynomials are of degree not greater than 1 in the innermost integration variable. (This condition is satisfied, in particular, for all but one of the eight distinct forms of the $P$ polynomials considered in Section \[BEMFormulationsSection\].) More general cases could be handled by extending the methods of this section.
### The Common-Triangle Case {#the-common-triangle-case .unnumbered}
Given the above assumption on the degree of the ${\mathcal}P$ polynomials, we can write, in the common-triangle case, [$${\mathcal}P{^{\hbox{\tiny{CT}}}}_{dn}(y) =
{\mathcal}P{^{\hbox{\tiny{CT}}}}_{dn0} + y {\mathcal}P{^{\hbox{\tiny{CT}}}}_{dn1},
\label{PCTExpansion}$$]{} where ${\mathcal}P{^{\hbox{\tiny{CT}}}}_{dn0}$ and ${\mathcal}P{^{\hbox{\tiny{CT}}}}_{dn1}$ are just the constant and linear coefficients in the polynomial ${\mathcal}P{^{\hbox{\tiny{CT}}}}_{dn}(y)$. The Taylor-Duffy formula for the common-triangle case, equation (2a), then becomes $$\begin{aligned}
\mathcal{I}{^{\hbox{\tiny{CT}}}}
=
\sum_{d=1}^3 \sum_{n}
\bigg\{
&\mathcal{P}{^{\hbox{\tiny{CT}}}}_{dn0}
\int_0^1 dy \, \mathcal{K}_{n+1}\Big( X{^{\hbox{\tiny{CT}}}}_d(y) \Big)
{\nonumber \\}&\quad+\mathcal{P}{^{\hbox{\tiny{CT}}}}_{dn1} \,
\int_0^1 dy \, y\,\mathcal{K}_{n+1}\Big( X{^{\hbox{\tiny{CT}}}}_d(y) \Big)
\bigg\}.
\label{TDMFormulas20}\end{aligned}$$ If we now write the reduced-distance function $X{^{\hbox{\tiny{CT}}}}_d(y)$ in the form (Appendix \[SubRegionAppendix\]) [$$X{^{\hbox{\tiny{CT}}}}_d(y) \equiv
\alpha{^{\hbox{\tiny{CT}}}}_d\sqrt{(y+\beta{^{\hbox{\tiny{CT}}}}_d)^2 + (\gamma{^{\hbox{\tiny{CT}}}}_d)^2 }
\label{XCTReduced}$$]{} (where $\alpha_d, \beta_d$, and $\gamma_d$ are functions of the geometrical parameters such as triangle side lengths and areas) then we can immediately use equations (\[SecondIntegrals\]) to evaluate the $y$ integrals in (\[TDMFormulas20\]), obtaining an exact closed-form expression for the full 4-dimensional integral (\[OriginalIntegral\]) in the common-triangle case. This is equation (5a). We emphasize again that (5a) involves no further integrations, but is a *closed-form expression for the full 4-dimensional integral* in (\[OriginalIntegral\]). Closed-form expressions for certain special cases of 4-dimensional triangle-product integrals in BEM schemes have appeared in the literature before [@Caorsi1993; @Eibert1995], but we believe equation (5a) to be the most general result available to date.
### The Common-Edge and Common-Vertex Cases {#the-common-edge-and-common-vertex-cases .unnumbered}
We now proceed in exactly analogous fashion for the common-edge and common-vertex cases. We will show that, for twice-integrable kernels, the 2-dimensional and 3-dimensional integrals obtained via the usual TDM \[equations (2b, 2c)\] may be reduced to 1-dimensional and 2-dimensional integrals, respectively.
Because the ${\mathcal}P{^{\hbox{\tiny{CE}}}}$ and ${\mathcal}P{^{\hbox{\tiny{CV}}}}$ polynomials are (by assumption) not more than linear in the variables $y_2$ and $y_3$, respectively, we can write, in analogy to (\[PCTExpansion\]),
$$\begin{aligned}
{\mathcal}P{^{\hbox{\tiny{CE}}}}_{dn}(y_1, y_2)
&=
{\mathcal}P{^{\hbox{\tiny{CE}}}}_{dn0} + y_2 {\mathcal}P{^{\hbox{\tiny{CE}}}}_{dn1}
\\[5pt]
{\mathcal}P{^{\hbox{\tiny{CV}}}}_{dn}(y_1, y_2, y_3)
&=
{\mathcal}P{^{\hbox{\tiny{CV}}}}_{dn0} + y_3 {\mathcal}P{^{\hbox{\tiny{CV}}}}_{dn1}\end{aligned}$$
\[PCEPCV\]
where the ${{\mathcal}P}{^{\hbox{\tiny{CE}}}}_{dni}$ coefficients depend on $y_1$ in addition to the geometric parameters, while the ${{\mathcal}P}{^{\hbox{\tiny{CV}}}}_{dni}$ coefficients depend on $y_1$ and $y_2$ in addition to the geometric parameters.
Similarly, in analogy to (\[XCTReduced\]), we write
$$\begin{aligned}
X{^{\hbox{\tiny{CE}}}}_d(y_1, y_2)
\equiv
\alpha{^{\hbox{\tiny{CE}}}}_d \sqrt{(y_2+\beta{^{\hbox{\tiny{CE}}}}_d)^2 + (\gamma{^{\hbox{\tiny{CE}}}}_d)^2 }
\\
X{^{\hbox{\tiny{CV}}}}_d(y_1, y_2, y_3)
\equiv
\alpha{^{\hbox{\tiny{CV}}}}_d \sqrt{ (y_3+\beta{^{\hbox{\tiny{CV}}}}_d)^2 + (\gamma{^{\hbox{\tiny{CV}}}}_d)^2 }.\end{aligned}$$
\[XCEXCV\]
where $\{\alpha, \beta, \gamma\}{^{\hbox{\tiny{CE}}}}_{d}$ depend on $y_1$ in addition to the geometric parameters, while $\{\alpha, \beta, \gamma\}{^{\hbox{\tiny{CV}}}}_{d}$ depend on $y_1$ and $y_2$ in addition to the geometric parameters.
Inserting (\[PCEPCV\]) and (\[XCEXCV\]) into (2b) and (2c) and evaluating the $y_2$ and $y_3$ integrals using (\[SecondIntegrals\]), the original 4-dimensional integral (\[OriginalIntegral\]) is then reduced to a 1-dimensional integral \[equation (5b)\] or a 2-dimensional integral \[equation (5c)\].
Thus, for twice-integrable kernels, the dimension of the numerical cubature needed to evaluate the original integral (\[OriginalIntegral\]) is reduced by 1 compared to the case of once-integrable kernels.
Summary of Master TDM Formulas {#summary-of-master-tdm-formulas .unnumbered}
------------------------------
For once-integrable kernel functions, the generalized TDM reduces the original 4-dimensional integral (\[OriginalIntegral\]) to a $(4-n)$-dimensional integral, equation (2), where $n$ is the number of common vertices between the triangles.
For twice-integrable kernel functions, the generalized TDM reduces the original four-dimensional integral (\[OriginalIntegral\]) to a $(3-n)$-dimensional integral, equation (5). In particular, in the common-triangle case $n=3$ we obtain a *closed-form expression* requiring no numerical integrations.
In addition to reducing the dimension of the integral, the TDM also performs the service of neutralizing singularities that may be present in the original four-dimensional integral, ensuring that the resulting integrals (2) or (5) are amenable to efficient evaluation by numerical cubature.
From $P$ to $\mathcal{P}$: Computer Algebra Techniques {#ComputerAlgebraSection}
======================================================
The integrands of the Taylor-Duffy integrals (2) and (5) refer to polynomials $\mathcal{P}$ derived from the original polynomial $P$ appearing in the original integral (\[OriginalIntegral\]). The procedure for obtaining $\mathcal{P}$ from $P$, summarized in equations (\[CommonTriangleHToP\]), (\[CommonEdgeHToP\]), and (\[CommonVertexHToP\]), is straightforward but tedious and error-prone if carried out by hand. For example, to derive the polynomials $\mathcal{P}_{dn}{^{\hbox{\tiny{CT}}}}$ in the common-triangle formulas (2a) and (5a), we must **(a)** define, for each subregion $d=1,2,3$, a new function $H_d(u_1, u_2)$ by evaluating a certain definite integral involving the $P$ polynomial, **(b)** evaluate the function $H$ at certain $w$-dependent arguments to obtain a polynomial in $w$, and then **(c)** identify the coefficients of $w^n$ in this polynomial as the $\mathcal{P}_{dn}{^{\hbox{\tiny{CT}}}}$ functions we seek. Moreover, we must repeat this procedure for each of the three subregions that enter the common-triangle case, and for the common-edge case we have *six* subregions. Clearly the process of reducing (\[OriginalIntegral\]) to (2) or (5) is too complex a task to entrust to pencil-and-paper calculation.
However, the manipulations are ideally suited to evaluation by *computer-algebra* systems. For example, Figure 3 presents [mathematica]{} code that executes the procedure described above for deriving the $\mathcal{P}_{dn}{^{\hbox{\tiny{CT}}}}$ polynomials for one choice of $P({\mathbf{x}}, {\mathbf{x}}^\prime)$ function (specifically, the polynomial named $P{^{\hbox{\scriptsize{EFIE1}}}}$ in Section \[BEMFormulationsSection\]). Running this script emits machine-generated code for computing the ${\mathcal}P$ polynomials, which may be incorporated directly into a routine for computing the integrands of (2) or (5).
(*****************************************)
(* P polynomial for the case *)
(* P(x,xp) = (x-Q) \cdot (xp - QP) *)
(*****************************************)
P[Xi1_, Xi2_, Eta1_, Eta2_] := \
Xi1*Eta1*A*A + Xi1*Eta2*AdB + Xi1*AdDP \
+ Xi2*Eta1*AdB + Xi2*Eta2*B*B + Xi2*BdDP \
+ Eta1*AdD + Eta2*BdD + DdDP;
(*****************************************)
(* region-dependent integration limits *)
(* and u-functions, Eqs (21) and (23) *)
(*****************************************)
u1[d_, y_]:=Switch[d, 1, 1, 2, y, 3, y];
u2[d_, y_]:=Switch[d, 1, y, 2, y-1, 3, 1];
Xi1Lower[d_, u1_, u2_] \
:= Switch[ d, 1, 0, 2, -u2, 3, u2-u1];
Xi1Upper[d_, u1_, u2_] \
:= 1-u1;
Xi2Lower[d_, u1_, u2_, Xi1_] \
:= Switch[ d, 1, 0, 2, -u2, 3, 0];
Xi2Upper[d_, u1_, u2_, Xi1_] \
:= Switch[ d, 1, Xi1, 2, Xi1, \
3, Xi1-(u1-u2)];
(*****************************************)
(* big H function, equation (22) *********)
(*****************************************)
H[d_, U1_, U2_] := \
  Integrate[ \
    Integrate[ \
      P[Xi1, Xi2, U1+Xi1, U2+Xi2] \
      + P[U1+Xi1, U2+Xi2, Xi1, Xi2 ], \
      {Xi2, Xi2Lower[d,U1,U2,Xi1], \
            Xi2Upper[d,U1,U2,Xi1]}], \
    {Xi1, Xi1Lower[d,U1,U2], \
          Xi1Upper[d,U1,U2]}];
(*****************************************)
(* \mathcal{P}_{dn} functions, eq. (24) *)
(*****************************************)
wSeries[d_, y_] := \
  Series[ H[ d, w*u1[d,y], w*u2[d,y] ], {w,0,10} ];
calP[d_, n_, y_] := \
  SeriesCoefficient[ wSeries[d,y], n];
\[Listing1\]
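The $w$-collection in steps **(b)** and **(c)** is pure polynomial bookkeeping and can also be prototyped outside a computer-algebra system. The following Python sketch (a toy illustration with a hypothetical $H$; in the actual procedure $H_d$ comes from the integration in step **(a)**) represents a bivariate polynomial as a dictionary of monomials and reads off the coefficient of $w^n$ after the substitution $u_i \to w\,u_i$:

```python
from collections import defaultdict

# Hypothetical H(u1, u2) = 1 + 2*u1 + 3*u1*u2, stored as {(i, j): coeff}
# mapping the monomial u1^i * u2^j to its coefficient.
H = {(0, 0): 1.0, (1, 0): 2.0, (1, 1): 3.0}

def w_coefficients(H):
    """Substitute u_i -> w*u_i and collect powers of w: the monomial
    u1^i u2^j picks up a factor w^(i+j), so the coefficient of w^n is
    simply the sum of all monomials of total degree i+j == n."""
    calP = defaultdict(dict)
    for (i, j), c in H.items():
        calP[i + j][(i, j)] = c
    return dict(calP)

def evaluate(poly, u1, u2):
    """Evaluate a {(i,j): coeff} polynomial at numeric (u1, u2)."""
    return sum(c * u1**i * u2**j for (i, j), c in poly.items())

calP = w_coefficients(H)
```

Because the substitution multiplies each monomial by $w^{i+j}$, collecting powers of $w$ amounts to grouping monomials by total degree, which is exactly what the `Series`/`SeriesCoefficient` calls in the [mathematica]{} listing accomplish.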
Application to Specific BEM Formulations {#BEMFormulationsSection}
========================================
Electrostatics with triangle-pulse functions {#electrostatics-with-triangle-pulse-functions .unnumbered}
--------------------------------------------
For electrostatic BEM formulations using “triangle-pulse” basis functions representing constant charge densities on flat triangular panels, we require the average over triangle ${\mathcal}T$ of the potential and/or normal electric field due to a constant charge density on ${\mathcal}T^\prime$. These are
$$\begin{aligned}
\mathcal{I}{^{\hbox{\scriptsize{ES1}}}}
&=
\int_{{\mathcal}T} \, \int_{{\mathcal}T^\prime} \,
\frac{1}{4\pi|{\mathbf{x}} - {\mathbf{x}}^\prime|}\,
\, d{\mathbf{x}} \, d{\mathbf{x}}^\prime
\\[5pt]
\mathcal{I}{^{\hbox{\scriptsize{ES2}}}}
&=
\int_{{\mathcal}T} \, \int_{{\mathcal}T^\prime} \,
\frac{{{\mathbf{\hat n}}}\cdot({\mathbf{x}}-{\mathbf{x}}^\prime)}
{4\pi|{\mathbf{x}} - {\mathbf{x}}^\prime|^3}
\,d{\mathbf{x}} \, d{\mathbf{x}}^\prime.\end{aligned}$$
\[ESIntegrals\]
Equations (\[ESIntegrals\]a) and (\[ESIntegrals\]b) are of the form (\[OriginalIntegral\]) with $$\begin{array}{lclclcl}
\displaystyle{
P{^{\hbox{\scriptsize{ES1}}}}({\mathbf{x}}, {\mathbf{x}}^\prime)
}
&=&
\displaystyle{
1,
}
&\quad&
\displaystyle{
K{^{\hbox{\scriptsize{ES1}}}}(r)
}
&=&
\displaystyle{
\frac{1}{4\pi r},
}
\\[8pt]
\displaystyle{
P{^{\hbox{\scriptsize{ES2}}}}({\mathbf{x}}, {\mathbf{x}}^\prime)
}
&=&
\displaystyle{
{{\mathbf{\hat n}}} \cdot ({\mathbf{x}}-{\mathbf{x}}^\prime),
}
&\quad&
\displaystyle{
K{^{\hbox{\scriptsize{ES2}}}}(r)
}
&=&
\displaystyle{
\frac{1}{4\pi r^3}.
}
\end{array}$$
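As a quick sanity check on these definitions (a sketch only: the triangle parameterization, vertex coordinates, and cubature order are our own illustrative choices), for two *well-separated* triangles the ES1 integral should approach the point-charge value $AA^\prime/(4\pi d)$, with $d$ the centroid separation:

```python
import math

def area(V0, V1, V2):
    """Triangle area via the cross product of two edge vectors."""
    a = [V1[i] - V0[i] for i in range(3)]
    b = [V2[i] - V0[i] for i in range(3)]
    c = (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    return 0.5*math.sqrt(sum(ci*ci for ci in c))

def tri_map(V0, V1, V2, s, t):
    """Map (s,t) in [0,1]^2 onto the triangle; area element is 2*A*s ds dt."""
    return [V0[i] + s*(V1[i] - V0[i]) + s*t*(V2[i] - V1[i]) for i in range(3)]

def I_ES1(T, Tp, n=12):
    """Midpoint cubature for the ES1 double surface integral; adequate
    only for well-separated (non-singular) triangle pairs."""
    A, Ap = area(*T), area(*Tp)
    pts = [(i + 0.5)/n for i in range(n)]
    total = 0.0
    for s in pts:
        for t in pts:
            x = tri_map(*T, s, t)
            for sp in pts:
                for tp in pts:
                    xp = tri_map(*Tp, sp, tp)
                    total += (2*A*s)*(2*Ap*sp)/(4*math.pi*math.dist(x, xp))
    return total/n**4

# Hypothetical example pair: unit right triangles with centroids d = 50 apart
T  = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
Tp = ((50, 0, 0), (51, 0, 0), (50, 1, 0))
point_charge = area(*T)*area(*Tp)/(4*math.pi*50.0)
```

Of course, this brute-force cubature is exactly what fails when the triangles share vertices, which is the regime the Taylor-Duffy reductions are designed to handle.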
EFIE with RWG functions {#efie-with-rwg-functions .unnumbered}
-----------------------
For the EFIE formulation of full-wave electromagnetism with RWG source and test functions [@RWG1982], we require the electric field due to an RWG distribution on ${\mathcal}T^\prime$ averaged over an RWG distribution on ${\mathcal}T$. This involves the integrals
$$\begin{aligned}
I{^{\hbox{\scriptsize{EFIE1}}}}
&=
\frac{1}{4AA^\prime}
\int_{{\mathcal}T} \, \int_{{\mathcal}T^\prime} \,
({\mathbf{x}}-{\mathbf{Q}})\cdot ({\mathbf{x}}^\prime-{\mathbf{Q}}^\prime)
\frac{e^{ik|{\mathbf{x}}-{\mathbf{x}}^\prime|}}{4\pi|{\mathbf{x}}-{\mathbf{x}}^\prime|}
\, d{\mathbf{x}} \, d{\mathbf{x}}^\prime
\\[10pt]
I{^{\hbox{\scriptsize{EFIE2}}}}
&=
\frac{1}{4AA^\prime}
\int_{{\mathcal}T} \, \int_{{\mathcal}T^\prime} \,
\frac{e^{ik|{\mathbf{x}}-{\mathbf{x}}^\prime|}}{4\pi|{\mathbf{x}}-{\mathbf{x}}^\prime|}
\, d{\mathbf{x}} \, d{\mathbf{x}}^\prime\end{aligned}$$
\[EFIEIntegrals\]
where $A,A^\prime$ are the areas of ${\mathcal}T,{\mathcal}T^\prime$ and ${\mathbf{Q}},{\mathbf{Q}}^\prime$ (the source/sink vertices of the RWG basis functions) are vertices in ${\mathcal}T, {\mathcal}T^\prime$.
Equations (\[EFIEIntegrals\]) are of the form (\[OriginalIntegral\]) with $$\begin{array}{lcllcl}
\displaystyle{
P{^{\hbox{\scriptsize{EFIE1}}}}({\mathbf{x}}, {\mathbf{x}}^\prime)
}
&\!\!\!\!=\!\!\!\!&
\displaystyle{
\frac{({\mathbf{x}}\!\!-\!\!{\mathbf{Q}}) \cdot ({\mathbf{x}}^\prime \!\!-\!\! {\mathbf{Q}}^\prime)}
{4AA^\prime},
}
&
\displaystyle{
K{^{\hbox{\scriptsize{EFIE1}}}}(r)
}
&\!\!=\!\!&
\displaystyle{
\frac{e^{ikr}}{4\pi r}.
}
\\[8pt]
\displaystyle{
P{^{\hbox{\scriptsize{EFIE2}}}}({\mathbf{x}}, {\mathbf{x}}^\prime)
}
&\!\!\!\!=\!\!\!\!&
\displaystyle{
\frac{1}{AA^\prime},
}
&
\displaystyle{
K{^{\hbox{\scriptsize{EFIE2}}}}(r)
}
&\!\!=\!\!&
\displaystyle{
\frac{e^{ikr}}{4\pi r}
}
\end{array}$$ \[We will use the labels $K{^{\hbox{\scriptsize{EFIE}}}}$ and $K{^{\hbox{\scriptsize{Helmholtz}}}}$ interchangeably to denote the kernel $K(r)=\frac{e^{ikr}}{4\pi r}.$\]
MFIE / PMCHWT with RWG functions {#mfie-pmchwt-with-rwg-functions .unnumbered}
--------------------------------
For the MFIE formulation of full-wave electromagnetism with RWG source and test functions [@Rius2001], we require the magnetic field due to an RWG distribution on ${\mathcal}T^\prime$ averaged over an RWG distribution on ${\mathcal}T$. This involves the integral [$$I{^{\hbox{\scriptsize{MFIE}}}}
=\!
\frac{1}{4AA^\prime}
\int_{{\mathcal}T} \, \int_{{\mathcal}T^\prime} \,
({\mathbf{x}} \!-\! {\mathbf{Q}}) \cdot \nabla {\!\times\!}\left\{ ({\mathbf{x}}^\prime\!\!-\!\!{\mathbf{Q}}^\prime)
\frac{e^{ik|{\mathbf{x}} \!-\! {\mathbf{x}}^\prime|}}
{4\pi|{\mathbf{x}} {\!\!-\!\!}{\mathbf{x}}^\prime|}
\right\}
\, d{\mathbf{x}} \, d{\mathbf{x}}^\prime.
\label{MFIEIntegral}$$]{} With some rearrangement, equation (\[MFIEIntegral\]) may be written in the form (\[OriginalIntegral\]) with $$P{^{\hbox{\scriptsize{MFIE}}}}({\mathbf{x}}, {\mathbf{x}}^\prime)=
\frac{ ({\mathbf{x}}- {\mathbf{x}}^\prime) \cdot ({\mathbf{Q}}\times {\mathbf{Q}}^\prime)
+({\mathbf{x}} \times {\mathbf{x}}^\prime) \cdot ({\mathbf{Q}}-{\mathbf{Q}}^\prime)
}{4AA^\prime},$$ $$K{^{\hbox{\scriptsize{MFIE}}}}(r)
=(ikr-1)\frac{e^{ikr}}{4\pi r^3}.$$ With the EFIE and MFIE integrals, equations (\[EFIEIntegrals\]) and (\[MFIEIntegral\]), we also have everything needed to implement the PMCHWT formulation of full-wave electromagnetism with RWG source and test functions [@Medgyesi1994].
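The rearrangement leading to $P{^{\hbox{\scriptsize{MFIE}}}}$ is easy to get wrong by hand; its numerator is, under the conventions of (\[MFIEIntegral\]), the scalar triple product $({\mathbf{x}}-{\mathbf{Q}})\cdot\big[({\mathbf{x}}^\prime-{\mathbf{Q}}^\prime)\times({\mathbf{x}}-{\mathbf{x}}^\prime)\big]$. The following Python sketch (our own cross-check, not part of the reference implementation) verifies the algebraic identity numerically:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def numerator_rearranged(x, xp, Q, Qp):
    """(x - x').(Q cross Q') + (x cross x').(Q - Q'), as in P^MFIE."""
    return dot(sub(x, xp), cross(Q, Qp)) + dot(cross(x, xp), sub(Q, Qp))

def numerator_triple(x, xp, Q, Qp):
    """Equivalent scalar-triple-product form (x - Q).[(x' - Q') x (x - x')]."""
    return dot(sub(x, Q), cross(sub(xp, Qp), sub(x, xp)))
```

Checking a handful of arbitrary points suffices, since both sides are multilinear in the four vectors.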
N-Müller with ${{\mathbf{\hat n}}}\times$RWG/RWG functions {#n-müller-with-mathbfhat-ntimesrwgrwg-functions .unnumbered}
----------------------------------------------------------
For the N-Müller formulation with RWG basis functions and ${{\mathbf{\hat n}}}\times$RWG testing functions [@Taskinen2005], we require the electric and magnetic fields due to an RWG distribution on ${\mathcal}T^\prime$ averaged over an ${{\mathbf{\hat n}}}\times$RWG distribution on ${\mathcal}T$; here ${{\mathbf{\hat n}}}$ denotes the surface normal to ${\mathcal}T$. These quantities involve the following integrals. (We have here introduced the shorthand notation ${{\widetilde}{\mathbf{V}} } \equiv {{\mathbf{\hat n}}}\times {\mathbf{V}}.$)
$$\begin{aligned}
I{^{\hbox{\scriptsize{NM\"uller1}}}}
&
\\
&\hspace{-0.3in}=
\frac{1}{4AA^\prime}
\int_{{\mathcal}T} \, \int_{{\mathcal}T^\prime} \,
\big({\widetilde}{{\mathbf{x}}}-{\widetilde}{{\mathbf{Q}}}\big)\cdot
\big({\mathbf{x}}^\prime-{\mathbf{Q}}^\prime\big)
\frac{e^{ik|{\mathbf{x}}-{\mathbf{x}}^\prime|}}
{4\pi|{\mathbf{x}}-{\mathbf{x}}^\prime|}
d{\mathbf{x}} \, d{\mathbf{x}}^\prime
\nonumber\\[15pt]
I{^{\hbox{\scriptsize{NM\"uller2}}}}
&
\\
&\hspace{-0.4in}=
\frac{1}{4AA^\prime}
\int_{{\mathcal}T} \! \int_{{\mathcal}T^\prime} \!
({\widetilde}{{\mathbf{x}}} \!\!-\!\! {\widetilde}{{\mathbf{Q}}}) \! \cdot \! \nabla
\left\{ \big[\nabla^\prime \! \cdot \!
({\mathbf{x}}^\prime\!\!-\!\!{\mathbf{Q}}^\prime)\big]
\frac{e^{ik|{\mathbf{x}} \!-\! {\mathbf{x}}^\prime|}}
{4\pi|{\mathbf{x}} {\!\!-\!\!}{\mathbf{x}}^\prime|}
\right\}\!
d{\mathbf{x}} d{\mathbf{x}}^\prime
\nonumber\\[8pt]
&\hspace{-0.3in}
=\frac{2}{4AA^\prime}
\int_{{\mathcal}T} \! \int_{{\mathcal}T^\prime} \!
\Big[ \big({\widetilde}{{\mathbf{x}}}-{\widetilde}{{\mathbf{Q}}}\big) \cdot
({\mathbf{x}} - {\mathbf{x}}^\prime)
\Big]
(ikr{\!\!-\!\!}1)\frac{e^{ikr}}{4\pi r^3}
d{\mathbf{x}} d{\mathbf{x}}^\prime
\nonumber\\[15pt]
I{^{\hbox{\scriptsize{NM\"uller3}}}}
&
\\
&\hspace{-0.3in}=
\frac{1}{4AA^\prime}
\int_{{\mathcal}T} \, \int_{{\mathcal}T^\prime} \,
({\widetilde}{{\mathbf{x}}} {\!\!-\!\!}{\widetilde}{{\mathbf{Q}}}) {\!\cdot\!}\nabla {\!\times\!}\left\{ ({\mathbf{x}}^\prime {\!\!-\!\!}{\mathbf{Q}}^\prime)
\frac{e^{ik|{\mathbf{x}} \!-\! {\mathbf{x}}^\prime|}}
{4\pi|{\mathbf{x}} {\!\!-\!\!}{\mathbf{x}}^\prime|}
\right\}
\, d{\mathbf{x}} \, d{\mathbf{x}}^\prime
\nonumber\end{aligned}$$
\[NMullerIntegrals\]
Equations (\[NMullerIntegrals\]) are of the form (\[OriginalIntegral\]) with $$\begin{aligned}
P{^{\hbox{\scriptsize{NM\"uller1}}}}({\mathbf{x}}, {\mathbf{x}}^\prime)
&=\frac{ ({\widetilde}{{\mathbf{x}}} - {\widetilde}{{\mathbf{Q}}}) \cdot
({\mathbf{x}}^\prime - {\mathbf{Q}}^\prime)}
{4AA^\prime},
\\[3pt]
P{^{\hbox{\scriptsize{NM\"uller2}}}}({\mathbf{x}}, {\mathbf{x}}^\prime)
&=\frac{ ({\widetilde}{{\mathbf{x}}} - {\widetilde}{{\mathbf{Q}}}) \cdot
({\mathbf{x}} - {\mathbf{x}}^\prime)}{2AA^\prime},
\\[3pt]
P{^{\hbox{\scriptsize{NM\"uller3}}}}({\mathbf{x}}, {\mathbf{x}}^\prime)
&= \big({\widetilde}{{\mathbf{x}}} - {\widetilde}{{\mathbf{Q}}}\big) \cdot
\Big[ \big({\mathbf{x}} - {\mathbf{x}}^\prime\big)\times
\big({\mathbf{x}}^\prime - {\mathbf{Q}}^\prime\big)
\Big],
\\[3pt]
K{^{\hbox{\scriptsize{NM\"uller1}}}}(r)
&=\frac{e^{ikr}}{4\pi r},
\\
K{^{\hbox{\scriptsize{NM\"uller2}}}}(r)
&=
K{^{\hbox{\scriptsize{NM\"uller3}}}}(r)
=(ikr-1)\frac{e^{ikr}}{4\pi r^3}.\end{aligned}$$
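All of the derivative kernels above descend from the gradient of the Helmholtz kernel, $\nabla_{{\mathbf{x}}}\,\frac{e^{ikr}}{4\pi r} = (ikr-1)\frac{e^{ikr}}{4\pi r^3}\,({\mathbf{x}}-{\mathbf{x}}^\prime)$, which is precisely the factor appearing in $K{^{\hbox{\scriptsize{MFIE}}}}$ and $K{^{\hbox{\scriptsize{NM\"uller2,3}}}}$. A quick finite-difference check in Python (the sample wavenumber and evaluation points below are arbitrary choices of ours):

```python
import cmath, math

k = 0.7 + 0.3j  # assumed sample (complex) wavenumber

def helmholtz(x, xp):
    """The Helmholtz kernel e^{ikr}/(4 pi r), r = |x - x'|."""
    r = math.dist(x, xp)
    return cmath.exp(1j*k*r)/(4*math.pi*r)

def grad_analytic(x, xp):
    """(ikr - 1) e^{ikr}/(4 pi r^3) * (x - x'), the factor appearing
    in K^MFIE and K^NMuller{2,3}."""
    r = math.dist(x, xp)
    f = (1j*k*r - 1)*cmath.exp(1j*k*r)/(4*math.pi*r**3)
    return [f*(x[i] - xp[i]) for i in range(3)]

def grad_fd(x, xp, h=1e-6):
    """Central finite differences of the kernel with respect to x."""
    g = []
    for i in range(3):
        xplus  = list(x); xplus[i]  += h
        xminus = list(x); xminus[i] -= h
        g.append((helmholtz(xplus, xp) - helmholtz(xminus, xp))/(2*h))
    return g
```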
Computational Examples {#ExamplesSection}
======================
In this section we consider a number of simple examples to illustrate the practical efficacy of the generalized TDM. For generic instances of the common-triangle, common-edge, and common-vertex cases, we study the convergence vs. number of cubature points in the numerical evaluation of integrals (2) or (5), and we plot the 1D or 2D integrands in various cases to lend intuition for the function that is being integrated.
Common-triangle examples
------------------------
Figure \[CTExampleFigure\] plots the integrand of equation (2a) for the choice of polynomial $P{^{\hbox{\scriptsize{EFIE1}}}}({\mathbf{x}}, {\mathbf{x}}^\prime)
\propto ({\mathbf{x}}-{\mathbf{Q}})\cdot ({\mathbf{x}}^\prime-{\mathbf{Q}}^\prime)$ and kernel $K{^{\hbox{\scriptsize{EFIE}}}}(r)=\frac{e^{ikr}}{4\pi r}$, a combination which arises in the EFIE formulation with RWG functions (Section \[BEMFormulationsSection\]). The triangle (inset) lies in the $xy$ plane with vertices at the points $(x,y)=\{(0,0), (0.1,0), (0.03,0.1)\}$ with RWG source/sink vertices ${\mathbf{Q}}={\mathbf{Q}}^\prime=(0,0).$ The wavenumber parameter $k$ in the Helmholtz kernel is chosen such that $kR=0.1$ or $kR=1.0$, where $R$ is the radius of the triangle (the maximal distance from centroid to any vertex). Whereas the integrand of the original integral (\[OriginalIntegral\]) exhibits both singularities and sinusoidal oscillations over its 4-dimensional domain, the integrand of the TDM-reduced integral (2a) is nonsingular and slowly varying and will clearly succumb readily to numerical quadrature; indeed, for both values of $k$ a simple 17-point Clenshaw-Curtis quadrature scheme [@libSGJC] already suffices to evaluate the integrals to better than 11-digit accuracy. Note that, although the sinusoidal factor in the integrand of the original integral (\[OriginalIntegral\]) exhibits 10$\times$ more rapid variation for $kR=1.0$ than for $kR=0.1$, the TDM reduction to the 1D integrand smooths this behavior to such an extent that the two cases are nearly indistinguishable in Figure \[CTExampleFigure\].
How are these results modified for triangles of less-regular shapes? Figure \[CTByThetaExampleFigure\] plots the real part of the integral of equation (2a), again for the choices $\{P,K\}=\{P{^{\hbox{\scriptsize{EFIE1}}}},K{^{\hbox{\scriptsize{EFIE}}}}\},$ for a triangle in the $xy$ plane with vertices $(x,y)=(0,0), (L,0), (L\sin\theta,L\cos\theta)$ with $L=0.1$ and 8 distinct values of $\theta$.
The integrand exhibits slightly more rapid variation in the extreme cases $\theta=10^\circ,170^\circ,$ but remains sufficiently smooth to succumb readily to low-order quadrature. To quantify this, Figure \[CTConvergenceFigure\] plots, versus $N$, the relative error incurred by numerical integration of the integrands of Figure \[CTByThetaExampleFigure\] using $N$-point Clenshaw-Curtis quadrature. (The relative error is defined as $|\mathcal{I}^N - \mathcal{I}{^{\hbox{\scriptsize{exact}}}}|/|\mathcal{I}{^{\hbox{\scriptsize{exact}}}}|$, where ${\mathcal}I^N$ is the $N$-point Clenshaw-Curtis quadrature approximation to the integral and ${\mathcal}I{^{\hbox{\scriptsize{exact}}}}$ is the reference value obtained by high-order quadrature with $N>100$ integrand samples per dimension.) For almost all cases we obtain approximately $12$-digit accuracy with just 20 to 30 quadrature points, with only the most extreme-aspect-ratio triangles exhibiting slightly slower convergence.
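For readers who wish to reproduce these convergence experiments, the Clenshaw-Curtis rule is easy to construct from its standard cosine-series weight formula. The sketch below is stdlib Python; the smooth test integrand, chosen to mimic the $\cos(kX(y))/(4\pi X(y))$ behavior of the reduced common-triangle integrand, is our own illustrative choice.

```python
import math

def clenshaw_curtis(n):
    """Nodes and weights of the (n+1)-point Clenshaw-Curtis rule on [-1,1]
    (n even), via the standard cosine-series weight formula."""
    nodes = [math.cos(math.pi*j/n) for j in range(n + 1)]
    weights = []
    for j in range(n + 1):
        s = 1.0
        for m in range(1, n//2 + 1):
            b = 1.0 if m == n//2 else 2.0   # last term is halved
            s -= b*math.cos(2*m*math.pi*j/n)/(4*m*m - 1)
        c = 1.0 if j in (0, n) else 2.0     # endpoint weights are halved
        weights.append(c*s/n)
    return nodes, weights

def cc_integrate(f, a, b, n):
    """Integrate f over [a,b] with an (n+1)-point Clenshaw-Curtis rule."""
    nodes, weights = clenshaw_curtis(n)
    mid, half = 0.5*(a + b), 0.5*(b - a)
    return half*sum(w*f(mid + half*x) for x, w in zip(nodes, weights))
```

As expected for a smooth, non-singular integrand, increasing the number of nodes beyond a modest order leaves the result unchanged to near machine precision.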
It is instructive to compare Figures \[CTByThetaExampleFigure\] and \[CTConvergenceFigure\] to Figure 1 of [Ref. ]{}, which applied Duffy-transformation techniques to the *two-dimensional* integral of $1/r$ over a single triangle (in contrast to the four-dimensional integrals over triangle pairs considered in this work). With the triangle assuming various distorted shapes similar to those in the inset of Figure (\[CTByThetaExampleFigure\]), [Ref. ]{} observed a dramatic slowing of the convergence of numerical quadrature as the triangle aspect ratio worsened, presumably because the integrand exhibits increasingly rapid variations. In contrast, Figures \[CTByThetaExampleFigure\] and \[CTConvergenceFigure\] indicate that no such catastrophic degradation in integrand smoothness occurs in the four-dimensional case, perhaps because the analytical integrations effected by the TDM reduction from (\[OriginalIntegral\]) to (2a) smooth the bad integrand behavior that degrades convergence in the two-dimensional case. (Techniques for improving the convergence of two-dimensional integrals over triangles with extreme aspect ratios were discussed in [Ref. ]{}.)
Common-edge examples
--------------------
As an example of a common-edge case, Figure \[CEIntegrandFigure\] plots the two-dimensional integrand of equation (2b) for the choice of polynomial $P{^{\hbox{\scriptsize{MFIE}}}}
\propto
({\mathbf{x}}-{\mathbf{x}}^\prime)\cdot({\mathbf{Q}}\times{\mathbf{Q}}^\prime)
+({\mathbf{x}}\times{\mathbf{x}}^\prime)\cdot({\mathbf{Q}}-{\mathbf{Q}}^\prime)$ and kernel $K{^{\hbox{\scriptsize{MFIE}}}}(r)=(ikr-1)\frac{e^{ikr}}{4\pi r^3}$, a combination which arises in the MFIE formulation with RWG functions (Section \[BEMFormulationsSection\]). The triangle pair (inset) is the right-angle pair $ \mathcal{T}=\{(0,0,0), (L,0,0), (0,L,0)\}$ and $ \mathcal{T}^\prime=\{(0,0,0), (L,0,0), (L/2,0,-L)\}$ with $L=0.1.$ The RWG source/sink vertices are indicated by black dots in the inset. The $k$ parameter in the Helmholtz kernel is chosen such that $kR=0.628$ where $R$ is the maximum panel radius. The integrand is smooth and is amenable to straightforward two-dimensional cubature. To quantify this, Figure \[CEConvergenceFigure1\] plots the error vs. number of cubature points incurred by numerical integration of the integrand plotted in Figure \[CEIntegrandFigure\]. The cubature scheme is simply nested two-dimensional Clenshaw-Curtis cubature, with the same number of quadrature points per dimension. Although the added dimension of integration inevitably necessitates the use of more integration points than were needed in the 1D cases examined above, nonetheless we achieve 12-digit accuracy with roughly 500 cubature points.
The kernel $K{^{\hbox{\scriptsize{MFIE}}}}(r)$ approaches $-1/(4\pi r^3)$ for small $r$. (As noted above, pairing with $P{^{\hbox{\scriptsize{MFIE}}}}$ reduces the singularity of the overall integrand due to the vanishing of $P{^{\hbox{\scriptsize{MFIE}}}}$ at $r=0$.) If we consider the integral of just this most singular contribution—that is, if in (\[OriginalIntegral\]) we retain the polynomial $P=P{^{\hbox{\scriptsize{MFIE}}}}$ but now replace the kernel $K{^{\hbox{\scriptsize{MFIE}}}}(r)$ with $K(r)=1/(4\pi r^3)$—then we have a *twice-integrable* kernel and the Taylor-Duffy reduction yields a *one-dimensional* integral, equation (5b), instead of the two-dimensional integrand (2b) plotted in (\[CEIntegrandFigure\]). Figure \[CEVsThetaFigure\] plots the 1-dimensional integrand obtained in this way for several common-edge panel pairs obtained by taking the unshared vertex of ${\mathcal}T^\prime$ to be the point $(L/2,-L\cos\theta,-L\sin\theta)$ with $\theta$ ranging from $\theta=10^\circ$ to $170^\circ$. (The inset of Figure \[CEIntegrandFigure\] corresponds to $\theta=90^\circ.$) For all values of $\theta$ the integrand is smooth and readily amenable to numerical quadrature. Figure \[CEConvergenceFigure2\] plots the convergence vs. number of quadrature points for numerical integration by Clenshaw-Curtis quadrature of the integrands plotted in Figure \[CEVsThetaFigure\].
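The "nested" tensor-product cubature used throughout these examples is simple to implement. The sketch below (a Python illustration; for brevity we substitute a composite Simpson base rule where the experiments above use Clenshaw-Curtis, but the nesting mechanics are identical) applies the same 1D rule along each dimension:

```python
import math

def simpson_rule(n):
    """Nodes and weights of composite Simpson on [0,1] (n even intervals);
    a stand-in here for the Clenshaw-Curtis base rule used in the text."""
    h = 1.0/n
    w = [h/3*(4 if j % 2 else 2) for j in range(n + 1)]
    w[0] = w[n] = h/3
    return [j*h for j in range(n + 1)], w

def nested_cubature_2d(f, n):
    """Tensor-product ('nested') 2D cubature with the same number of
    points per dimension."""
    x, w = simpson_rule(n)
    return sum(wi*wj*f(xi, xj)
               for xi, wi in zip(x, w)
               for xj, wj in zip(x, w))
```

The extension to the 3D common-vertex case is the obvious triple loop; as the text notes, tensor-product nesting is simple but not necessarily the most point-efficient multidimensional scheme.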
Common-vertex example
---------------------
As an example of a common-vertex case, Figure \[CVConvergenceFigure\] plots the convergence of equation (2c) for $\{P,K\}$=$\{P{^{\hbox{\scriptsize{EFIE1}}}}, K{^{\hbox{\scriptsize{EFIE}}}}\}$, the same pair considered for the common-triangle example above. The triangle pair (inset) is $ \mathcal{T}=\{(0,0,0), (L,0,0), (L^\prime,L,0)\}$ and $ \mathcal{T}^\prime=\{(0,0,0), (-L,0,0), (-L^\prime\sin\theta,L^\prime\cos\theta,0)\}$ with $\{L,L^\prime\}=\{0.1,0.02\}$ and various values of $\theta$. The $k$ parameter in the Helmholtz kernel is chosen such that $kR=0.628$ where $R$ is the maximum panel radius. RWG source/sink vertices are indicated by dots in the inset. The figure plots the error vs. number of cubature points incurred in the numerical evaluation of (2c) for these configurations. The cubature scheme is simply nested three-dimensional Clenshaw-Curtis cubature, with the same number of quadrature points per dimension. This is probably not the most efficient cubature scheme for a three-dimensional integral, but the figure demonstrates that the error decreases steadily and rapidly with the number of cubature points. The convergence rate is essentially independent of $\theta$.
Application to Full-Wave BEM Solvers: Evaluation and Caching of Series-Expansion Terms {#CachingSection}
======================================================================================
We now take up the question of how the generalized TDM may be most effectively deployed in practical implementations of full-wave BEM solvers using RWG basis functions. To assemble the BEM matrix for, say, the PMCHWT formulation at a single frequency for a geometry discretized into $N$ triangular surface panels, we must in general compute $\approx\, 12N$ singular integrals of the form (\[OriginalIntegral\]) with roughly $\{2N,4N,6N\}$ instances of the common-{triangle, edge, vertex} cases. \[For each panel pair we must compute integrals involving two separate kernels ($K{^{\hbox{\scriptsize{EFIE}}}}$ and $K{^{\hbox{\scriptsize{MFIE}}}}$), and neither of these kernels is twice-integrable except in the short-wavelength limit.\]
A first possibility is simply to evaluate all singular integrals using the basic TDM scheme outlined in Section \[GeneralizedTaylorDuffySection\]—that is, for an integral of the form (\[OriginalIntegral\]) with polynomial $P({\mathbf{x}}, {\mathbf{x}}^\prime)$ and kernel $K(r)$ we write simply [$$\mathcal{I}
=
\underbrace{\iint P({\mathbf{x}}, {\mathbf{x}}^\prime) K(r) d{\mathbf{x}}^\prime d{\mathbf{x}}.}
_{\text{evaluate by TDM}}
\label{FullEvaluation}$$]{} This method already suffices to evaluate all singular integrals and would constitute an adequate solution for a medium-performance solver appropriate for small-to-midsized problems. However, although the unadorned TDM successfully neutralizes singularities to yield integrals amenable to simple numerical cubature, we must still *evaluate* those 1D, 2D, and 3D cubatures, and this task, even given the non-singular integrands furnished by the TDM, remains too time-consuming for the online stage of a high-performance BEM code for large-scale problems.

A better strategy is a hybrid of the TDM with singularity subtraction (SS): we split the kernel into a sum of $M$ singular terms of the form $C_m r^p$ plus a smooth remainder $K{^{\hbox{\scriptsize{NS}}}}(r)$, evaluate the integrals of the $r^p$ terms via the TDM, and evaluate the integral of the remainder by simple fixed-order cubature \[equation (\[MSubEval\])\].
Such a hybrid TDM-SS approach has several advantages. **(a)** The integrals involving the $r^p$ kernel are *independent of frequency and material properties*, even if $K(r)$ depends on these quantities through the wavenumber $k=\sqrt{\epsilon\mu}\cdot\omega$. \[The $k$ dependence of the first set of terms in (\[MSubEval\]) is contained entirely in the constants $\{C_m\}$, which enter only as multiplicative prefactors outside the integral sign.\] This means that we need only compute these integrals *once* for a given geometry, after which they may be stored and reused many times for computations at other frequencies or for scattering geometries involving the same shapes but different material properties. (The caching and reuse of frequency-independent contributions to BEM integrals has been proposed before [@Taskinen2003].) **(b)** The $r^p$ kernels in the first set of integrals are *twice-integrable*. This means that the improved Taylor-Duffy scheme discussed in Section \[TwiceIntegrableKernelSection\] is available, significantly accelerating the computation; for the common-triangle case these integrals may be evaluated in closed analytical form. **(c)** The $M$ integrals in the first set all involve the *same* $P$ polynomial. This means that the computational overhead required to evaluate TDM integrals involving this polynomial need only be incurred once and may then be reused for all $M$ integrals. Indeed, all of the integrals on the first line of (\[MSubEval\]) may be evaluated simultaneously as the integral of an $M$-dimensional vector-valued integrand; in practice this means that the cost of evaluating all $M$ integrals is nearly independent of $M$. **(d)** Because $K{^{\hbox{\scriptsize{NS}}}}$ has been relieved of its most rapidly varying contributions, it may be integrated with good accuracy by a simple low-order cubature scheme. (We evaluate the 4-dimensional integral in the last term of (\[MSubEval\]) using a 36-point cubature rule [@Cools2003].)
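The kernel splitting underlying (\[MSubEval\]) can be sketched concretely for the Helmholtz kernel (an illustration with an arbitrary sample wavenumber; the subtracted terms follow from the Taylor expansion of $e^{ikr}$ about $r=0$):

```python
import cmath, math

k = 2.0 + 0.5j  # assumed sample (complex) wavenumber

def K_helmholtz(r):
    """The Helmholtz kernel e^{ikr}/(4 pi r)."""
    return cmath.exp(1j*k*r)/(4*math.pi*r)

def K_singular(r, M):
    """First M terms of the small-r expansion of e^{ikr}/(4 pi r):
    sum_{m=0}^{M-1} (ik)^m r^{m-1} / (4 pi m!). These are r^p kernels;
    the k dependence lives entirely in the constant prefactors."""
    return sum((1j*k)**m * r**(m - 1)/(4*math.pi*math.factorial(m))
               for m in range(M))

def K_nonsingular(r, M):
    """Smooth remainder K^NS(r); it vanishes like r^(M-1) as r -> 0."""
    return K_helmholtz(r) - K_singular(r, M)
```

With $M$ terms subtracted, the remainder behaves like $r^{M-1}$ at the origin, which is why a fixed low-order cubature suffices for its contribution in point **(d)** above.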
Figure \[ErrorVsTimeFigure\] compares accuracy vs. computation time for the methods of equation (\[FullEvaluation\]) and equation (\[MSubEval\]) with $M=\{1,3,5\}$ subtracted terms. The {top, center, bottom} plots are for the {CT, CE, CV} cases using the triangle pairs of Figures {\[CTExampleFigure\], \[CEIntegrandFigure\], \[CVConvergenceFigure\]} (we choose the $\theta=30^\circ$ triangle pair for the CV case). The integral computed is (\[OriginalIntegral\]) with $\{P,K\}=\{P{^{\hbox{\scriptsize{EFIE1}}}},K{^{\hbox{\scriptsize{EFIE}}}}\}$. The wavenumber $k$ is chosen such that $kR=0.628$ where $R$ is the maximum panel radius, so that the linear size of the panels is approximately 1/10 the wavelength.
The curves marked “full” in each plot correspond to equation (\[FullEvaluation\]), i.e. full evaluation by numerical cubature of the (1,2,3)-dimensional integral of equation (2). The $M=\{1,3,5\}$ data correspond to equation (\[MSubEval\]). For each $M$ value, the data point furthest to the left is for the case in which we precompute the contributions of the singular integrals, so that the only computation time is the evaluation of the fixed-order cubature. The other data points for each $M$ value include the time incurred for numerical cubature of equation (5) for the subtracted (singular) terms at varying degrees of accuracy. Beyond a certain threshold computation time the integrals have converged to accurate values, whereupon further computation time does not improve the accuracy with which we compute the overall integral (because we use a fixed-order cubature for the nonsingular contribution). Of course, one could increase the cubature order for the fixed-order contribution at the expense of shifting all $M=\{1,3,5\}$ data points to the right.
In the common-triangle case, the integrals of the singular terms may be done in closed form \[equation (5a)\], so the data points marked “not precomputed” all correspond to roughly the same computation time. In the other cases, the “not precomputed” data points for various values of the computation time correspond to evaluation of the integrals (5b) or (5c) via numerical cubature with varying numbers of cubature points.
Absolute timing statistics are of course heavily hardware- and implementation-dependent (in this case they were obtained on a standard desktop workstation), but the picture of *relative* timing that emerges from Figure \[ErrorVsTimeFigure\] is essentially hardware-independent. For a given accuracy, the singularity-subtracted scheme (\[MSubEval\]) is typically an order of magnitude faster than the full scheme (\[FullEvaluation\]), and this is true even if we include the time required to compute the integrals of the frequency-independent subtracted terms. If we *precompute* those integrals, the singularity-subtraction scheme is several orders of magnitude faster than the full scheme. For example, to achieve 8-digit accuracy in the common-vertex case takes over 2 ms for the full scheme but just 30 $\mu$s for the precomputed $M=5$ singularity-subtraction scheme.
The speedup effected by singularity subtraction is less pronounced in the common-triangle case. This is because the full TDM integral (\[FullEvaluation\]) is only 1-dimensional in that case and thus already quite efficient to evaluate.
Helmholtz-Kernel Integrals in the Short-Wavelength Limit {#HighKSection}
========================================================
The kernels $K{^{\hbox{\scriptsize{EFIE}}}}(r)=\frac{e^{ikr}}{4\pi r}$ and $K{^{\hbox{\scriptsize{MFIE}}}}(r)=(ikr-1)\frac{e^{ikr}}{4\pi r^3}$ become twice-integrable in the limit $\text{Im }k\to \infty.$ More specifically, as shown in Appendix \[FirstSecondIntegralAppendix\], the first integral \[equation (\[FirstIntegral\])\] of these kernels takes the form $Q_1(r) + e^{ikX(r)}Q_2(r)$, where $Q_1,Q_2$ are Laurent polynomials in $r$ and $X$ is a nonvanishing quantity bounded below by the linear size of the triangles. When $\text{Im }k$ is large, the exponential factor makes the second term negligible, and we are left with just $Q_1(r)$, which, as a sum of integer powers of $r$, is twice-integrable. This means that the TDM-reduced version of integral (\[OriginalIntegral\]) involves one fewer dimension of integration than in the usual case, i.e. we have equations (5) instead of equations (2). In particular, for the common-triangle case the full integral may be evaluated in closed form \[equation (5a)\].
Figure \[HighKFigure\] plots, for the common-triangle case of Figure \[CTExampleFigure\] (upper plot) and the common-edge case of Figure \[CEIntegrandFigure\] (lower plot), values and computation times for the integral $\iint \frac{e^{ikr}}{4\pi r}\,d^4 r$—that is, equation (\[OriginalIntegral\]) for the choices $\{P,K\}=\{1,K{^{\hbox{\scriptsize{EFIE}}}}\}$—as evaluated using the “full” scheme of equation (2) (red curves) and using the “high-$k$ approximation” of equation (5) (green curves), with the exponentially-decaying term in the first integral of the kernel neglected to yield a twice-integrable kernel. The blue curves are the relative errors between the two calculations. The cyan and magenta curves respectively plot the (wall-clock) time required to compute the integrals using the full and high-$k$ methods. The high-$k$ calculation is approximately one order of magnitude faster than the full calculation and yields results in good agreement with the full calculation for values of $\text{Im }kR$ greater than 10 or so.
Conclusions {#ConclusionsSection}
===========
The generalized Taylor-Duffy method we have presented allows efficient evaluation of a broad family of singular and non-singular integrals over triangle-pair domains. The generality of the method allows a single implementation ($\sim$1,500 lines of C++ code) to handle *all* singular integrals arising in several different BEM formulations, including electrostatics with triangle-pulse basis functions and full-wave electromagnetism with RWG basis functions in the EFIE, MFIE, PMCHWT, and N-Müller formulations. In particular, for N-Müller integrals the method presented here offers an alternative to the line-integral scheme discussed in [Ref. ]{}. In addition to deriving the method and discussing practical implementation details, we presented several computational examples to illustrate its efficacy.
Although the examples we considered here involved low-order basis functions (constant or linear variation over triangles), it would be straightforward to extend the method to higher-order basis functions. Indeed, switching to basis functions of quadratic or higher order would amount simply to choosing different $P$ polynomials in (\[OriginalIntegral\]); the computer-algebra methods mentioned in Section \[ComputerAlgebraSection\] could then be used to identify the corresponding ${\mathcal}P$ polynomials in equations (2) and (5). A less straightforward but potentially fruitful challenge would be to extend the method to the case of *curved* triangles, in which case the integrand of (\[OriginalIntegral\]) may contain non-polynomial factors [@Graglia1997].
Although—as noted in the Introduction—the problem of evaluating singular BEM integrals has been studied for decades with dozens of algorithms published, the problem of choosing *which* of the myriad available schemes to use in a practical BEM solver must surely remain bewildering to implementors. A comprehensive comparative survey of available methods—including considerations such as numerical accuracy vs. computation time, the reuse of previous computations to accelerate calculations at new frequencies, the availability of open-source code implementations, and the complexity and length of codes versus their extendability and flexibility (i.e. the range of possible integrands they can handle)—would be an invaluable contribution to the literature.
A complete implementation of the method described in this paper is incorporated into [scuff-em]{}, a free, open-source software implementation of the boundary-element method [@TaylorDuffyCode].
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors are grateful to A. G. Polimeridis for many valuable suggestions and for a careful reading of the manuscript.
This work was supported in part by the Defense Advanced Research Projects Agency (DARPA) under grant N66001-09-1-2070-DOD, by the Army Research Office through the Institute for Soldier Nanotechnologies (ISN) under grant W911NF-07-D-0004, and by the AFOSR Multidisciplinary Research Program of the University Research Initiative (MURI) for Complex and Robust On-chip Nanophotonics under grant FA9550-09-1-0704.
Expressions for subregion-dependent functions {#SubRegionAppendix}
=============================================
The TDM formulas (2) refer to functions $X_d$ and $\mathcal{P}_{dn}$ indexed by the subregion $d$ of the original 4-dimensional integration domain. In this Appendix we give detailed expressions for these quantities. In what follows, the geometric parameters ${\mathbf{A}}, {\mathbf{B}}, {\mathbf{A}}^\prime, {\mathbf{B}}^\prime, {\mathbf{V}}_i$ refer to Figure \[VertexEdgeLabelFigure\], and the functions ${\mathbf{x}}(\xi_1,\xi_2)$ and ${\mathbf{x}}^\prime(\eta_1, \eta_2)$ map the standard triangle into ${\mathcal}T, {\mathcal}T^\prime$ according to [$${\mathbf{x}}(\xi_1, \xi_2)
= {\mathbf{V}}_1 + \xi_1 {\mathbf{A}} + \xi_2 {\mathbf{B}},
\quad
{\mathbf{x}}^\prime(\eta_1, \eta_2)
= {\mathbf{V}}_1 + \eta_1 {\mathbf{A}}^\prime + \eta_2 {\mathbf{B}}^\prime.
\label{VariableTransformation}$$]{} with ranges $0\le \xi_1,\eta_1\le 1$, $0\le \xi_2 \le \xi_1$, $0\le \eta_2\le \eta_1.$
Common Triangle {#common-triangle .unnumbered}
---------------
#### $u$ functions {#u-functions .unnumbered}
First define ancillary functions $u_1$ and $u_2$ depending on $y_1$: [$$\begin{array}{|c|c|c|} \hline
d & u_{1d}(y_1) & u_{2d}(y_1) \\ \hline
1 & 1 & y_1 \\ \hline
2 & y_1 & (y_1-1) \\ \hline
3 & y_1 & 1 \\ \hline
\end{array}
\label{CommonTriangleUTable}$$]{}
#### Reduced distance function {#reduced-distance-function .unnumbered}
The function $X{^{\hbox{\tiny{CT}}}}_d(y_1)$ that enters formulas (2a) and (5a) is $$\begin{aligned}
&\!\!X_d{^{\hbox{\tiny{CT}}}}(y_1)
\\
&=
\sqrt{ |{\mathbf{A}}|^2 u_{1d}^2 (y_1)
+|{\mathbf{B}}|^2 u_{2d}^2 (y_1)
+ 2{\mathbf{A}} \cdot {\mathbf{B}} u_{1d} (y_1) u_{2d}(y_1)}.\end{aligned}$$
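The table and formula above transcribe directly into code. The following Python sketch (function names and the test edge vectors are ours, chosen for illustration) evaluates the $u$ functions and $X_d{^{\hbox{\tiny{CT}}}}$; note that $X_d{^{\hbox{\tiny{CT}}}}$ is simply the Euclidean norm $|u_{1d}{\mathbf{A}} + u_{2d}{\mathbf{B}}|$, and hence manifestly nonnegative.

```python
import numpy as np

# Direct transcription of the common-triangle u-function table and the
# reduced-distance formula. A and B are the triangle edge vectors of
# Figure [VertexEdgeLabelFigure]; the values used below are arbitrary.

def u_CT(d, y1):
    """Ancillary functions (u_{1d}(y1), u_{2d}(y1)) for subregions d = 1, 2, 3."""
    return {1: (1.0, y1), 2: (y1, y1 - 1.0), 3: (y1, 1.0)}[d]

def X_CT(d, y1, A, B):
    """Reduced distance X_d^CT(y1) = |u1*A + u2*B|."""
    u1, u2 = u_CT(d, y1)
    return np.sqrt(A @ A * u1**2 + B @ B * u2**2 + 2.0 * (A @ B) * u1 * u2)
```
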
#### $\mathcal{P}$ polynomials {#mathcalp-polynomials .unnumbered}
First define polynomials obtained as definite integrals over the original $P$ polynomial in (\[OriginalIntegral\]): $$\begin{aligned}
&\hspace{-0.2in}
H_d(u_1, u_2) \equiv
4AA^\prime
\int_{\xi_{1d}{^{\hbox{\scriptsize{lower}}}}}^{\xi_{1d}{^{\hbox{\scriptsize{upper}}}}} d \xi_1
\int_{\xi_{2d}{^{\hbox{\scriptsize{lower}}}}}^{\xi_{2d}{^{\hbox{\scriptsize{upper}}}}} d \xi_2
\,\bigg\{
{\nonumber \\}&P\Big( {\mathbf{x}}(\xi_1, \xi_2),
{\mathbf{x}}^\prime(u_1 + \xi_1, u_2 + \xi_2)
\Big)
{\nonumber \\}&\,\,+P\Big( {\mathbf{x}}(u_1+ \xi_1, u_2 + \xi_2),
{\mathbf{x}}^\prime(\xi_1,\xi_2)
\Big)
\bigg\}
\label{CommonTriangleHIntegral}\end{aligned}$$ where ${\mathbf{x}}(\xi_1, \xi_2)$ and ${\mathbf{x}}^\prime(\eta_1, \eta_2)$ are as in (\[VariableTransformation\]) and where the limits of the $\xi_1, \xi_2$ integrals are as follows: [$$\begin{array}{|c|c|c|c|c|} \hline
d & \xi_{1d}{^{\hbox{\tiny{lower}}}} & \xi_{1d}{^{\hbox{\tiny{upper}}}} &
\xi_{2d}{^{\hbox{\tiny{lower}}}} & \xi_{2d}{^{\hbox{\tiny{upper}}}} \\\hline
1 & 0 & 1-u_1 & 0 & \xi_1 \\ \hline
2 & -u_2 & 1-u_1 & -u_2 & \xi_1 \\ \hline
3 & u_2-u_1 & 1-u_1 & 0 & \xi_1 - (u_2-u_1) \\ \hline
\end{array}
\label{CommonTriangleIntegrationLimits}$$]{} Now evaluate $H_d$ at $w, y_1$ dependent arguments and expand the result as a polynomial in $w$ to obtain the $\mathcal{P}_{dn}{^{\hbox{\tiny{CT}}}}$ functions: [$$H_d\Big( w u_{1d}(y_1), w u_{2d}(y_1) \Big)
\equiv \sum_{n} w^n \mathcal{P}_{dn}{^{\hbox{\tiny{CT}}}}(y_1).
\label{CommonTriangleHToP}$$]{}
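The final expand-in-$w$ step is mechanical and well suited to computer algebra, as mentioned in Section \[ComputerAlgebraSection\]. A minimal sympy sketch, using a hypothetical low-order stand-in for $H_d$ (a real $H_d$ would come from integrating the $P$ polynomial with the limits of (\[CommonTriangleIntegrationLimits\])):

```python
import sympy as sp

# Toy illustration of the step (CommonTriangleHToP): evaluate H_d at
# w-dependent arguments and collect coefficients of powers of w.
# H below is an illustrative stand-in, not derived from an actual P.

w, y1 = sp.symbols('w y1')
u1, u2 = 1, y1                       # subregion d = 1 row of the u table

def H(a, b):                          # hypothetical H_d(u1, u2)
    return 1 + a + a * b

poly = sp.Poly(sp.expand(H(w * u1, w * u2)), w)
P_dn = {n: c for (n,), c in poly.as_dict().items()}
# here H(w, w*y1) = 1 + w + w^2*y1, so P_d0 = 1, P_d1 = 1, P_d2 = y1
```
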
Common Edge {#common-edge .unnumbered}
-----------
#### $u,\xi$ functions {#uxi-functions .unnumbered}
First define[^4] ancillary functions $u_1, u_2, \xi_2$ depending on $y_1$ and $y_2$: [$$\begin{array}{|c|c|c|c|} \hline
d & u_{1d}(y_1,y_2) & u_{2d}(y_1,y_2) & \xi_{2d}(y_1, y_2) \\ \hline
1 & -y_1 & -y_1y_2 & 1-y_1+y_1 y_2 \\ \hline
2 & y_1 & y_1y_2 & 1-y_1 \\ \hline
3 & -y_1y_2 & y_1(1-y_2) & 1-y_1 \\ \hline
4 & y_1y_2 & -y_1(1-y_2) & 1-y_1y_2 \\ \hline
5 & -y_1y_2 & -y_1 & 1 \\ \hline
6 & y_1y_2 & y_1 & 1-y_1 \\ \hline
\end{array}
\label{CommonEdgeTable}$$]{}
#### Reduced distance function {#reduced-distance-function-1 .unnumbered}
The function $X{^{\hbox{\tiny{CE}}}}_d(y_1, y_2)$ that enters formulas (2b) and (5b) is $$\begin{aligned}
&X_d{^{\hbox{\tiny{CE}}}}(y_1, y_2)=
\bigg[ |{\mathbf{A}}|^2 u_{1d}^2
+|{\mathbf{B}}^\prime|^2 u_{2d}^2
+|{\mathbf{L}}|^2 \xi_{2d}^2
\\
&\,\,\,
+ 2{\mathbf{A}} \cdot {\mathbf{B}}^\prime u_{1d} u_{2d}
+ 2{\mathbf{A}} \cdot {\mathbf{L}} u_{1d} \xi_{2d}
+ 2{\mathbf{B}}^\prime \cdot {\mathbf{L}} u_{2d} \xi_{2d}
\bigg]^{1/2}\end{aligned}$$ where $u_{1d}, u_{2d}, \xi_{2d}$ are functions of $y_1$ and $y_2$ as in (\[CommonEdgeTable\]), and where ${\mathbf{L}}\equiv {\mathbf{B}}^\prime - {\mathbf{B}}$.
#### $\mathcal{P}$ polynomials {#mathcalp-polynomials-1 .unnumbered}
First define polynomials obtained as definite integrals over the original $P$ polynomial in (\[OriginalIntegral\]): $$\begin{aligned}
&\!\! H_d(u_1, u_2, \xi_2 )
\\
&\equiv 4AA^\prime
\int_{\xi_{1d}{^{\hbox{\scriptsize{lower}}}}}^{\xi_{1d}{^{\hbox{\scriptsize{upper}}}}} d \xi_1
\bigg\{ P\Big( {\mathbf{x}}(\xi_1, \xi_2),
{\mathbf{x}}^\prime(u_1 + \xi_1, u_2 + \xi_2)
\Big)
\bigg\}\end{aligned}$$ where ${\mathbf{x}}(\xi_1, \xi_2)$ and ${\mathbf{x}}^\prime(\eta_1, \eta_2)$ are as in (\[VariableTransformation\]) and where the limits of the $\xi_1$ integral are as follows: $$\begin{array}{|c|c|c|c|c|} \hline
d & \xi_{1d}{^{\hbox{\tiny{lower}}}} & \xi_{1d}{^{\hbox{\tiny{upper}}}} \\ \hline
1 & \xi_2 + (u_2-u_1) & 1 \\ \hline
2 & \xi_2 & 1-u_1 \\ \hline
3 & \xi_2 + (u_2-u_1) & 1 \\ \hline
4 & \xi_2 & 1-u_1 \\ \hline
5 & \xi_2 & 1 \\ \hline
6 & \xi_2 + (u_2-u_1) & 1-u_1 \\ \hline
\end{array}.$$ Now evaluate $H_d$ at $w, y_1, y_2$-dependent arguments and expand the result as a polynomial in $w$ to obtain the $\mathcal{P}_{dn}{^{\hbox{\tiny{CE}}}}$ functions: $$\begin{aligned}
&H_d\Big( w u_{1d}(y_1,y_2), w u_{2d}(y_1, y_2) ,
w \xi_2(y_1,y_2) \Big)
{\nonumber \\}&\quad
\equiv \sum_{n} w^n \mathcal{P}_{dn}{^{\hbox{\tiny{CE}}}}(y_1, y_2).
\label{CommonEdgeHToP}\end{aligned}$$
Common Vertex {#common-vertex .unnumbered}
-------------
#### $\xi,\eta$ functions {#xieta-functions .unnumbered}
First define ancillary functions $\xi_1, \xi_2, \eta_1, \eta_2$ depending on $y_1,y_2,y_3$:
[$$\begin{array}{|c|c|c|c|c|} \hline
d & \xi_{1d}({\mathbf{y}}) & \xi_{2d}({\mathbf{y}}) & \eta_{1d}({\mathbf{y}}) & \eta_{2d}({\mathbf{y}})
\\\hline
1 & 1 & y_1 & y_2 & y_2 y_3
\\\hline
2 & y_2 & y_2 y_3 & 1 & y_1
\\\hline
\end{array}
\label{CommonVertexTable}$$]{}
#### Reduced distance function {#reduced-distance-function-2 .unnumbered}
The function $X{^{\hbox{\tiny{CV}}}}_d(y_1, y_2, y_3)$ that enters formulas (2c) and (5c) is $$\begin{aligned}
&\hspace{-0.1in} X_d{^{\hbox{\tiny{CV}}}}(y_1, y_2, y_3)
\\
= \bigg[ & |{\mathbf{A}}|^2 \xi_{1d}^2
+|{\mathbf{B}}|^2 \xi_{2d}^2
+|{\mathbf{A}}^\prime|^2 \eta_{1d}^2
+|{\mathbf{B}}^\prime|^2 \eta_{2d}^2
\\
& + 2\big({\mathbf{A}} \cdot {\mathbf{B}}\big) \xi_{1d} \xi_{2d}
- 2\big({\mathbf{A}} \cdot {\mathbf{A}}^\prime\big) \xi_{1d} \eta_{1d}
\\[5pt]
&
- 2\big({\mathbf{A}} \cdot {\mathbf{B}}^\prime\big) \xi_{1d} \eta_{2d}
- 2\big({\mathbf{B}} \cdot {\mathbf{A}}^\prime\big) \xi_{2d} \eta_{1d}
\\
&
- 2\big({\mathbf{B}} \cdot {\mathbf{B}}^\prime\big) \xi_{2d} \eta_{2d}
+ 2\big({\mathbf{A}}^\prime \cdot {\mathbf{B}}^\prime\big) \eta_{1d} \eta_{2d}
\bigg]^{1/2}\end{aligned}$$ where $\xi$ and $\eta$ are functions of $y_1, y_2,$ and $y_3$ as in (\[CommonVertexTable\]).
#### $\mathcal{P}$ polynomials {#mathcalp-polynomials-2 .unnumbered}
In contrast to the common-triangle and common-edge cases, in the common-vertex case there is no integration over the original $P$ polynomial; instead, we simply evaluate the original $P$ polynomial at $w$- and ${\mathbf{y}}$-dependent arguments, expand the result as a power series in $w$, and identify the coefficients of this power series as the $\mathcal{P}({\mathbf{y}})$ polynomials:
[$$P\Big( w \xi_{1d}({\mathbf{y}}), w \xi_{2d}({\mathbf{y}}),
w \eta_{1d}({\mathbf{y}}), w \eta_{2d}({\mathbf{y}})
\Big)
\equiv \sum_{n} w^n \mathcal{P}_{dn}{^{\hbox{\tiny{CV}}}}({\mathbf{y}}).
\label{CommonVertexHToP}$$]{}
The $\alpha$, $\beta$, $\gamma$ Coefficients {#the-alpha-beta-gamma-coefficients .unnumbered}
--------------------------------------------
For twice-integrable kernels, the master TDM formulas (5) refer to parameters $\alpha$, $\beta$, and $\gamma$ defined for the various cases and the various subregions. These parameters are defined by completing the square under the radical sign in the reduced-distance functions $X_d$:
$$\begin{aligned}
X_d{^{\hbox{\tiny{CT}}}}(y_1)
&\equiv
\sqrt{ \alpha{^{\hbox{\tiny{CT}}}}_d
(y_1 + \beta{^{\hbox{\tiny{CT}}}}_d)^2
+ \gamma{^{\hbox{\tiny{CT}}}}_d }
\\
X_d{^{\hbox{\tiny{CE}}}}(y_1, y_2)
&\equiv
\sqrt{ \alpha{^{\hbox{\tiny{CE}}}}_d
(y_2 + \beta{^{\hbox{\tiny{CE}}}}_d)^2
+ \gamma{^{\hbox{\tiny{CE}}}}_d }
\\
X_d{^{\hbox{\tiny{CV}}}}(y_1, y_2, y_3)
&\equiv
\sqrt{ \alpha{^{\hbox{\tiny{CV}}}}_d
(y_3 + \beta{^{\hbox{\tiny{CV}}}}_d)^2
+ \gamma{^{\hbox{\tiny{CV}}}}_d }\end{aligned}$$
In all cases, the $\{\alpha, \beta, \gamma\}$ coefficients depend on the geometric parameters (${\mathbf{A}}, {\mathbf{B}},$ etc.). In the CE case, they depend additionally on the variable $y_1$, and in the CV case they depend additionally on the variables $y_1$ and $y_2$.
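Completing the square is elementary once the radicand has been collected into the form $a y^2 + b y + c$; a minimal sketch of that step:

```python
# Completing the square: if X_d(y)^2 = a*y^2 + b*y + c, then
# X_d(y) = sqrt(alpha*(y + beta)^2 + gamma) with the coefficients below.

def complete_square(a, b, c):
    alpha = a
    beta = b / (2.0 * a)
    gamma = c - b * b / (4.0 * a)
    return alpha, beta, gamma
```

In the CE and CV cases the coefficients $a$, $b$, $c$ themselves depend on $y_1$ (and $y_2$), which is how $\{\alpha,\beta,\gamma\}$ inherit that dependence.
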
First and Second Kernel Integrals {#FirstSecondIntegralAppendix}
=================================
In this Appendix we collect expressions for the first and second integrals of various commonly encountered kernel functions.
First Kernel Integrals {#first-kernel-integrals .unnumbered}
----------------------
In Section \[GeneralizedTaylorDuffySection\] we defined the “first integral” of a kernel function $K(r)$ to be
$$\mathcal{K}_n(X) \equiv \int_0^1 w^n K(wX) \, dw.$$ First integrals for some commonly encountered kernels are presented in Table \[FirstIntegralTable\].
$$\begin{array}{|c|c|} \hline
\ K(r)
&
\mathcal{K}_n(X)
\\\hline
r^p
&
\displaystyle{ \frac{X^p}{1+n+p} } \vphantom{\int_\int^\int}
\\\hline
\displaystyle{ \frac{e^{ikr}}{4\pi r} }
&
\displaystyle{ \frac{e^{ikX}}{4\pi n X} \text{ExpRel}(n,-ikX)
}
\\[5pt]\hline
\displaystyle{ (ikr-1)\frac{e^{ikr}}{4\pi r^3} }
&
\parbox{0.3\textwidth}
{ \begin{align*}
&\displaystyle{ \frac{e^{ikX}}{4\pi X}
\bigg[ \frac{ik}{(n-1)X}\text{ExpRel}(n-1,-ikX)
}\\
&\qquad\qquad
\displaystyle{ -\frac{1}{(n-2)X^2}\text{ExpRel}(n-2,-ikX)
\bigg]
}
\end{align*}
}
\vphantom{\int_\int^\int}
\\\hline
\end{array}$$
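The $r^p$ row of the table is easy to verify numerically. The sketch below (parameter values are arbitrary; for integrability one needs $n+p>-1$) compares Gauss–Legendre quadrature of the defining integral against the closed form $X^p/(1+n+p)$:

```python
import numpy as np

# Spot-check of the first row of Table [FirstIntegralTable]:
# K_n(X) = \int_0^1 w^n (wX)^p dw = X^p / (1 + n + p).

def first_integral(K, n, X, npts=50):
    """Evaluate K_n(X) by Gauss-Legendre quadrature mapped to [0, 1]."""
    x, wts = np.polynomial.legendre.leggauss(npts)
    w = 0.5 * (x + 1.0)
    return 0.5 * np.sum(wts * w**n * K(w * X))

n, p, X = 3, -1, 0.7
numeric = first_integral(lambda r: r**p, n, X)
exact = X**p / (1 + n + p)
```
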
The function $\text{ExpRel}(n,z)$ in Table \[FirstIntegralTable\] is the $n$th “relative exponential” function,[^5] defined as the usual exponential function with the first $n$ terms of its power-series expansion subtracted and the result normalized to have value 1 at $z=0$:
$$\begin{aligned}
\text{ExpRel}(n,z)
&\equiv \frac{n!}{z^n}\Big[ e^{z} - 1 - z - \frac{z^2}{2}
- \cdots - \frac{z^{n-1}}{(n-1)!}
\Big]
\label{ExpRelA}
\\
&= 1 + \frac{z}{(n+1)} + \frac{z^2}{(n+1)(n+2)} + \cdots
\label{ExpRelB}\end{aligned}$$
For $|z|$ small, the relative exponential function may be computed using the rapidly convergent series expansion (\[ExpRelB\]), and indeed for computational purposes at small $z$ it is important *not* to use the defining expression (\[ExpRelA\]), naïve use of which invites catastrophic loss of numerical precision. For example, at $z=10^{-4}$, each term subtracted from $e^z$ in the square brackets in (\[ExpRelA\]) eliminates 4 digits of precision, so that in standard 64-bit floating-point arithmetic a calculation of $\text{ExpRel}(3,z)$ would be accurate only to approximately 3 digits, while a calculation of $\text{ExpRel}(4,z)$ would yield pure numerical noise.
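The loss of precision is easy to reproduce. A sketch implementing both evaluation routes (the function names are ours, not from any library):

```python
import math

# Two evaluations of ExpRel(n, z): the defining subtraction (ExpRelA),
# which cancels catastrophically at small z, and the series (ExpRelB).

def exprel_subtraction(n, z):
    """Naive route via (ExpRelA): subtract the first n Taylor terms of e^z."""
    partial = sum(z**j / math.factorial(j) for j in range(n))
    return math.factorial(n) / z**n * (math.exp(z) - partial)

def exprel_series(n, z, nterms=30):
    """Stable route via the series (ExpRelB)."""
    total, term = 1.0, 1.0
    for j in range(1, nterms):
        term *= z / (n + j)
        total += term
    return total
```

At $z=10^{-4}$, `exprel_series(4, z)` returns $1 + z/5 + \cdots$ to full precision, while `exprel_subtraction(4, z)` is dominated by round-off, as described above.
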
On the other hand, for values of $z$ with large negative real part—which arise in calculations involving lossy materials at short wavelengths—it is most convenient to evaluate $\text{ExpRel}$ in a different way, as discussed below.
Second Kernel Integrals for the $r^p$ kernel {#second-kernel-integrals-for-the-rp-kernel .unnumbered}
--------------------------------------------
In Section \[TwiceIntegrableKernelSection\] we defined the “second integrals” of a kernel function $K(r)$ to be the following definite integrals involving the first integral: $$\begin{aligned}
\mathcal{J}_n(\alpha,\beta,\gamma)
&\equiv
\int_0^1 \mathcal{K}_n\Big( \alpha\sqrt{ (y+\beta)^2 + \gamma^2} \Big) \, dy
\\
\mathcal{L}_n(\alpha,\beta,\gamma)
&\equiv
\int_0^1 y\, \mathcal{K}_n\Big( \alpha\sqrt{ (y+\beta)^2 + \gamma^2} \Big) \, dy.
\label{SecondIntegralsAgain}\end{aligned}$$ For the particular kernel function $K(r)=r^p$ with (positive or negative) integer power $p$, the second integrals are the following analytically-evaluatable integrals: $$\begin{aligned}
\mathcal{J}_n(\alpha,\beta,\gamma)
&=\frac{\alpha^p}{(1+p+n)}
\underbrace{ \int_0^1 \Big[(y+\beta)^2 + \gamma^2\Big]^{p/2} \, dy
}_{\equiv \overline{{\mathcal}J_p}(\beta,\gamma)}
\\
\mathcal{L}_n(\alpha,\beta,\gamma)
&=\frac{\alpha^p}{(1+p+n)}
\underbrace{\int_0^1 y \Big[(y+\beta)^2 + \gamma^2\Big]^{p/2} \, dy
}_{\equiv \overline{{\mathcal}L_p}(\beta,\gamma)}\end{aligned}$$ The integral arising in the first line here is tabulated below for a few values of $p$. (The table is easily extended to arbitrary positive or negative values of $p$.) In this table, we have $S=\sqrt{\beta^2 + \gamma^2}, T=\sqrt{(\beta+1)^2+\gamma^2}.$
$$\renewcommand{\arraystretch}{2.5}
\begin{array}{|c|c|}\hline
p & \overline{{\mathcal}J_p}
\equiv \int_0^1 \Big[(y+\beta)^2 + \gamma^2\Big]^{p/2} \, dy
\\ \hline
-3
& \frac{1}{\gamma^2}\Big[ \frac{\beta+1}{T} - \frac{\beta}{S}
\Big]
\\ \hline
-2
& \frac{1}{\gamma}\Big[ \arctan\frac{\beta+1}{\gamma} -
\arctan\frac{\beta}{\gamma}
\Big]
\\ \hline
-1
& \ln \frac{\beta+1 + T }{\beta + S}
\\ \hline
1
& \frac{1}{2}\Big[ \beta(T-S) + T
+ \gamma^2 \overline{{\mathcal}J_{-1}}
\Big]
\\ \hline
2
& \frac{1}{2}\Big[ T^2 + S^2 - \frac{1}{3}\Big]
\\ \hline
\end{array}
\renewcommand{\arraystretch}{1.0}$$ The $\overline{{\mathcal}L}$ functions are related to the $\overline{{\mathcal}J}$ functions according to $$\overline{{\mathcal}L_p} =
\begin{cases}
\displaystyle{
-\beta \overline{{\mathcal}J_p}
+ \frac{1}{p+2}\left(T^{p+2} - S^{p+2}\right),
} \qquad &p\ne -2
\\[10pt]
\displaystyle{
-\beta \overline{{\mathcal}J_p}
+ \ln \left(\frac{T}{S}\right),
} \qquad &p=-2.
\end{cases}$$
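The table entries and the $\overline{{\mathcal}L}_p$ relation can be spot-checked against brute-force quadrature; a sketch with arbitrary $\beta, \gamma$ values:

```python
import numpy as np

# Numerical check of two rows of the Jbar table and the p = -2 case of
# the Lbar relation, for arbitrary beta, gamma.

x, w = np.polynomial.legendre.leggauss(200)
y, w = 0.5 * (x + 1.0), 0.5 * w          # Gauss-Legendre rule on [0, 1]

def Jbar_quad(p, beta, gamma):
    return np.sum(w * ((y + beta)**2 + gamma**2)**(p / 2.0))

def Lbar_quad(p, beta, gamma):
    return np.sum(w * y * ((y + beta)**2 + gamma**2)**(p / 2.0))

beta, gamma = 0.3, 0.8
S, T = np.hypot(beta, gamma), np.hypot(beta + 1.0, gamma)

Jm2 = (np.arctan((beta + 1) / gamma) - np.arctan(beta / gamma)) / gamma
Jm1 = np.log((beta + 1 + T) / (beta + S))
Lm2 = -beta * Jm2 + np.log(T / S)        # p = -2 case of the Lbar relation
```
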
Second kernel integrals for the Helmholtz kernel in the short-wavelength limit {#second-kernel-integrals-for-the-helmholtz-kernel-in-the-short-wavelength-limit .unnumbered}
------------------------------------------------------------------------------
For the EFIE kernel $K(r)=e^{ikr}/(4\pi r)$, the first integral $\mathcal{K}_n(X)$ (Table \[FirstIntegralTable\]) involves the quantity $e^{ikX}\text{ExpRel}(n,-ikX).$ As noted above, for small values of $|kX|$ the relative exponential function is well represented by the first few terms in the expansion (\[ExpRelB\]). However, for $|kX|$ large it is convenient instead to use the defining expression (\[ExpRelA\]), in terms of which we find $$\begin{aligned}
& e^{ikX}\text{ExpRel}(n,-ikX)
\label{ExpRelLargeK}\\
&\hspace{0.1in}=\frac{n!}{(-ikX)^n}
{\nonumber \\}&\qquad-
e^{ikX}\bigg[ \frac{n!}{(-ikX)^n}
\Big(1 - ikX + \cdots + \frac{(-ikX)^{n-1}}{(n-1)!}
\Big)\bigg].
\nonumber\end{aligned}$$ For $k$ values with large positive imaginary part, the first term here decays algebraically with $|k|$, while the second term decays *exponentially* and hence makes negligible contribution to the sum when $|k|$ is sufficiently large. This suggests that, for $k$ values with large positive imaginary part, we may approximate the first kernel integrals in Table \[FirstIntegralTable\] by retaining only the first term in (\[ExpRelLargeK\]). This leads to the following approximate expressions for the first kernel integrals in Table (\[FirstIntegralTable\]): $$\begin{array}{l}
\displaystyle{K(r)=\frac{e^{ikr}}{4\pi r}}
\\[6pt]
\quad\quad \Longrightarrow \,\,
{\mathcal}K_n(X) \,\,\xrightarrow{\text{Im }k\to\infty}\,\,
\displaystyle{\frac{(n-1)!}{4\pi(-ik)^n X^{n+1}}}
\\[12pt]
\displaystyle{K(r)=(ikr-1)\frac{e^{ikr}}{4\pi r^3}}
\\[6pt]
\quad\quad\Longrightarrow \,\,
{\mathcal}K_n(X) \,\,\xrightarrow{\text{Im }k\to\infty}\,\,
\displaystyle{-\frac{(n-1)[(n-3)!]}{4\pi(-ik)^{n-2} X^{n+1}}}
\end{array}
$$ The important point here is that the simpler $X$ dependence of these $\mathcal{K}$ functions in the $\text{Im }k\to\infty$ limit renders these kernels *twice integrable*. This allows us to make use of the twice-integrable versions of the TDM formulas to compute integrals involving these kernels in this limit. In particular, we find the following second integrals:
For $K(r)=\frac{e^{ikr}}{4\pi r}$ as $\text{Im }k\to \infty$: {#for-krfraceikr4pi-r-as-textim-kto-infty .unnumbered}
-------------------------------------------------------------
$$\begin{aligned}
\mathcal{J}_n(\alpha,\beta,\gamma)
&\to \frac{(n-1)!}{4\pi (-ik)^n \alpha^{n+1}}
\,
\overline{{\mathcal}J}_{-(n+1)}(\beta,\gamma)
\\
\mathcal{L}_n(\alpha,\beta,\gamma)
&\to \frac{(n-1)!}{4\pi (-ik)^n \alpha^{n+1}}
\,
\overline{{\mathcal}L}_{-(n+1)}(\beta,\gamma)\end{aligned}$$
For $K(r)=(ikr-1)\frac{e^{ikr}}{4\pi r^3}$ as $\text{Im }k\to \infty$: {#for-krikr-1fraceikr4pi-r3-as-textim-kto-infty .unnumbered}
----------------------------------------------------------------------
$$\begin{aligned}
\mathcal{J}_n(\alpha,\beta,\gamma)
&\to -\frac{(n-1)[(n-3)!]}{4\pi (-ik)^{n-2} \alpha^{n+1}}
\,
\overline{{\mathcal}J}_{-(n+1)}(\beta,\gamma)
\\
\mathcal{L}_n(\alpha,\beta,\gamma)
&\to -\frac{(n-1)[(n-3)!]}{4\pi (-ik)^{n-2} \alpha^{n+1}}
\,
\overline{{\mathcal}L}_{-(n+1)}(\beta,\gamma)
\\\end{aligned}$$
The $\overline{{\mathcal}J}, \overline{{\mathcal}L}$ functions were evaluated above in the discussion of the $K(r)=r^p$ kernel.
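As a sanity check, one can compare brute-force quadrature of the EFIE-kernel second integral $\mathcal{J}_n$ against the limiting expression above at a $k$ with large imaginary part. A sketch with arbitrary test parameters (here $k=40j$ is purely imaginary, so $\text{Im }k$ is large, and the $\overline{{\mathcal}J}$ integral is itself evaluated by quadrature):

```python
import math
import numpy as np

# Compare direct 2D quadrature of J_n for K(r) = e^{ikr}/(4 pi r) with
# the Im k -> infinity limit, whose Jbar factor carries the integrand
# [(y+beta)^2 + gamma^2]^{-(n+1)/2}. Parameters are arbitrary test values.

x, wts = np.polynomial.legendre.leggauss(120)
t, wts = 0.5 * (x + 1.0), 0.5 * wts      # Gauss-Legendre rule on [0, 1]

k, n = 40j, 3
alpha, beta, gamma = 1.0, 0.3, 0.8

def K_n(X):
    """First integral of the EFIE kernel, by quadrature over w."""
    return np.sum(wts * t**n * np.exp(1j * k * t * X) / (4 * np.pi * t * X))

X = alpha * np.sqrt((t + beta)**2 + gamma**2)
J_full = np.sum(wts * np.array([K_n(Xi) for Xi in X]))

Jbar = np.sum(wts * ((t + beta)**2 + gamma**2)**(-(n + 1) / 2.0))
J_limit = math.factorial(n - 1) / (4 * np.pi * (-1j * k)**n * alpha**(n + 1)) * Jbar
```

The two values agree to many digits because the neglected term is suppressed by $e^{-\text{Im}\,k\,X}$, which at the smallest $X$ occurring here is of order $e^{-34}$.
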
[10]{} \[1\][\#1]{} url@samestyle \[2\][\#2]{} \[2\][[l@\#1=l@\#1\#2]{}]{}
`http://homerreid.com/scuff-EM/SingularIntegrals`.
D. Taylor, “Accurate and efficient numerical integration of weakly singular integrals in [G]{}alerkin [EFIE]{} solutions,” *Antennas and Propagation, IEEE Transactions on*, vol. 51, no. 7, pp. 1630–1637, 2003.
M. G. Duffy, “Quadrature over a pyramid or cube of integrands with a singularity at a vertex,” *SIAM Journal on Numerical Analysis*, vol. 19, no. 6, pp. 1260–1262, 1982.
P. Yla-Oijala and M. Taskinen, “Calculation of [CFIE]{} impedance matrix elements with [RWG]{} and nx[RWG]{} functions,” *Antennas and Propagation, IEEE Transactions on*, vol. 51, no. 8, pp. 1837–1846, 2003.
S. Jarvenpaa, M. Taskinen, and P. Yla-Oijala, “Singularity subtraction technique for high-order polynomial vector basis functions on planar triangles,” *Antennas and Propagation, IEEE Transactions on*, vol. 54, no. 1, pp. 42–49, 2006.
M. S. Tong and W. C. Chew, “Super-hyper singularity treatment for solving 3d electric field integral equations,” *Microwave and Optical Technology Letters*, vol. 49, no. 6, pp. 1383–1388, 2007. \[Online\]. Available: <http://dx.doi.org/10.1002/mop.22443>
R. Klees, “,” **, vol. 70, no. 11, pp. 781–797, 1996. \[Online\]. Available: <http://dx.doi.org/10.1007/BF00867156>
W. Cai, Y. Yu, and X. C. Yuan, “Singularity treatment and high-order [RWG]{} basis functions for integral equations of electromagnetic scattering,” *International Journal for Numerical Methods in Engineering*, vol. 53, no. 1, pp. 31–47, 2002. \[Online\]. Available: <http://dx.doi.org/10.1002/nme.390>
M. Khayat and D. Wilton, “Numerical evaluation of singular and near-singular potential integrals,” *Antennas and Propagation, IEEE Transactions on*, vol. 53, no. 10, pp. 3180–3190, 2005.
I. Ismatullah and T. Eibert, “Adaptive singularity cancellation for efficient treatment of near-singular and near-hypersingular integrals in surface integral equation formulations,” *Antennas and Propagation, IEEE Transactions on*, vol. 56, no. 1, pp. 274–278, 2008.
R. Graglia and G. Lombardi, “Machine precision evaluation of singular and nearly singular potential integrals by use of [G]{}auss quadrature formulas for rational functions,” *Antennas and Propagation, IEEE Transactions on*, vol. 56, no. 4, pp. 981–998, 2008.
A. G. Polimeridis and J. R. Mosig, “Complete semi-analytical treatment of weakly singular integrals on planar triangles via the direct evaluation method,” *International Journal for Numerical Methods in Engineering*, vol. 83, no. 12, pp. 1625–1650, 2010. \[Online\]. Available: <http://dx.doi.org/10.1002/nme.2877>
A. Polimeridis, F. Vipiana, J. Mosig, and D. Wilton, “[DIRECTFN]{}: Fully numerical algorithms for high precision computation of singular integrals in [G]{}alerkin [SIE]{} methods,” *Antennas and Propagation, IEEE Transactions on*, vol. 61, no. 6, pp. 3112–3122, 2013.
H. Andrä and E. Schnack, “,” **, vol. 76, no. 2, pp. 143–165, 1997.
S. Sauter and C. Schwab, *Boundary Element Methods*, ser. Springer series in computational mathematics.1em plus 0.5em minus 0.4em Springer, 2010. \[Online\]. Available: <http://books.google.com/books?id=yEFu7sVW3LEC>
S. Erichsen and S. A. Sauter, “Efficient automatic quadrature in 3-d [G]{}alerkin [BEM]{},” *Computer Methods in Applied Mechanics and Engineering*, vol. 157, no. 3–4, pp. 215 – 224, 1998, <ce:title>Papers presented at the Seventh Conference on Numerical Methods and Computational Mechanics in Science and Engineering</ce:title>. \[Online\]. Available: <http://www.sciencedirect.com/science/article/pii/S0045782597002363>
R. F. Harrington, *Field Computation by Moment Methods*.1em plus 0.5em minus 0.4emWiley-IEEE Press, 1993.
W. Chew, M. Tong, and B. Hu, *Integral Equation Methods for Electromagnetic and Elastic Waves*, ser. Synthesis Lectures on Computational Electromagnetics Series.1em plus 0.5em minus 0.4emMorgan & Claypool Publishers, 2009. \[Online\]. Available: <http://books.google.com/books?id=PJN9meadzT8C>
M. Botha, “A family of augmented [D]{}uffy transformations for near-singularity cancellation quadrature,” *Antennas and Propagation, IEEE Transactions on*, vol. 61, no. 6, pp. 3123–3134, June 2013.
F. Vipiana, D. Wilton, and W. Johnson, “Advanced numerical schemes for the accurate evaluation of 4-d reaction integrals in the method of moments,” *Antennas and Propagation, IEEE Transactions on*, vol. PP, no. 99, pp. 1–1, 2013.
P. Caorsi, D. Moreno, and F. Sidoti, “Theoretical and numerical treatment of surface integrals involving the free-space [G]{}reen’s function,” *Antennas and Propagation, IEEE Transactions on*, vol. 41, no. 9, pp. 1296–1301, 1993.
T. Eibert and V. Hansen, “On the calculation of potential integrals for linear source distributions on triangular domains,” *Antennas and Propagation, IEEE Transactions on*, vol. 43, no. 12, pp. 1499–1502, 1995.
T. Sarkar and R. Harrington, “The electrostatic field of conducting bodies in multiple dielectric media,” *Microwave Theory and Techniques, IEEE Transactions on*, vol. 32, no. 11, pp. 1441–1448, 1984.
S. Rao, D. Wilton, and A. Glisson, “Electromagnetic scattering by surfaces of arbitrary shape,” *Antennas and Propagation, IEEE Transactions on*, vol. 30, no. 3, pp. 409–418, May 1982.
J. Rius, E. Ubeda, and J. Parron, “On the testing of the magnetic field integral equation with [R]{}[W]{}[G]{} basis functions in method of moments,” *Antennas and Propagation, IEEE Transactions on*, vol. 49, no. 11, pp. 1550–1553, Nov 2001.
L. N. Medgyesi-Mitschang, J. M. Putnam, and M. B. Gedera, “Generalized method of moments for three-dimensional penetrable scatterers,” *J. Opt. Soc. Am. A*, vol. 11, no. 4, pp. 1383–1398, Apr 1994. \[Online\]. Available: <http://josaa.osa.org/abstract.cfm?URI=josaa-11-4-1383>
P. Yla-Oijala and M. Taskinen, “Well-conditioned [M]{}üller formulation for electromagnetic scattering by dielectric objects,” *Antennas and Propagation, IEEE Transactions on*, vol. 53, no. 10, pp. 3316–3323, 2005.
`http://ab-initio.mit.edu/wiki/index.php/Cubature`.
M. Khayat, D. Wilton, and P. Fink, “An improved transformation and optimized sampling scheme for the numerical evaluation of singular and near-singular potentials,” *Antennas and Wireless Propagation Letters, IEEE*, vol. 7, pp. 377–380, 2008.
R. Cools, “An encyclopaedia of cubature formulas,” *Journal of Complexity*, vol. 19, no. 3, pp. 445 – 453, 2003, oberwolfach Special Issue. \[Online\]. Available: <http://www.sciencedirect.com/science/article/pii/S0885064X03000116>
R. Graglia, D. Wilton, and A. Peterson, “Higher order interpolatory vector bases for computational electromagnetics,” *Antennas and Propagation, IEEE Transactions on*, vol. 45, no. 3, pp. 329–342, 1997.
[^1]: M. T. Homer Reid is with the Department of Mathematics, Massachusetts Institute of Technology.
[^2]: J. K. White is with the Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology.
[^3]: S. G. Johnson is with the Department of Mathematics, Massachusetts Institute of Technology.
[^4]: This table is similar to Table III of [Ref. ]{}, but it corrects two errors in that table—namely, those in table entries $u_2,E_4$ and $\xi_2,E_1$.
[^5]: Our terminology here is borrowed from the [gnu]{} Scientific Library, `http://www.gnu.org/software/gsl/`.
---
abstract: 'The large amounts of data from molecular biology and neuroscience have led to a renewed interest in the inverse Ising problem: how to reconstruct the parameters of the Ising model (couplings between spins and external fields) from a number of spin configurations sampled from the Boltzmann measure. To invert the relationship between model parameters and observables (magnetisations and correlations), mean-field approximations are often used, allowing one to determine the model parameters from data. However, all known mean-field methods fail at low temperatures with the emergence of multiple thermodynamic states. Here we show how clustering spin configurations can approximate these thermodynamic states, and how mean-field methods applied to thermodynamic states allow an efficient reconstruction of Ising models also at low temperatures.'
author:
- 'H. Chau Nguyen and Johannes Berg'
title: 'Mean-field theory for the inverse Ising problem at low temperatures'
---
Taking a set of spin configurations sampled from the equilibrium distribution of an Ising model, can the underlying couplings between spins be reconstructed from a large number of such samples? This inverse Ising problem is a paradigmatic inverse problem with applications in neural biology [@bialek; @monasson], protein structure determination [@weigt], and gene expression analysis [@braunstein]. Typically a large number of spins (representing the states of neurons, genetic loci, or genes) is involved, as well as a large number of interactions between them.
Such large system sizes make the inverse Ising problem intrinsically difficult: solving the inverse problem involves first solving the Ising model, in some manner, for a given set of couplings and external fields. Then one can ask how the couplings between spins and the external fields need to be adjusted to match the inferred model with the observed statistics of the samples. An early and fundamental approach to the inverse Ising problem, Boltzmann machine learning [@ackley1985], follows this prescription quite literally. Proceeding iteratively, couplings and fields are updated in proportion to the remaining differences between the magnetisations and two-point correlations resulting from the current model parameters and the corresponding values observed in the data. Computing the magnetisations and two-point correlations requires a numerical simulation of the Ising model at each iteration, so this approach is limited to small systems.
Instead, mean-field theory is the basis of many approaches to the inverse Ising problem used in practice [@kappen; @Roudi_FCN]. Under the mean-field approximation, the Ising model can be solved easily for the magnetisations and correlations between spins. The mean-field solution is then inverted (see below) to yield the parameters of the model (couplings and external fields) as a function of the empirical observables (magnetisations and correlations). Yet, as temperature is decreased and correlations between spins grow and become more discernible, the reconstruction given by mean-field theory becomes less accurate, not as one might expect, more accurate. This effect has been called “an embarrassment to statistical physics” [@aurell_priv]. Mean-field reconstruction of the Ising model even breaks down entirely near the transition to a low-temperature phase [@Mezard_JOP]: in the low-temperature phase there is no correlation between reconstructed and underlying couplings. This low-temperature failure equally affects all refinements related to mean-field theory like the TAP approach [@kappen; @Mezard_JOP; @Roudi_FCN], susceptibility propagation [@Welling_NC; @Mezard_JOP], the Sessak–Monasson expansion [@Sessak], and Bethe reconstruction [@NguyenBerg].
The breakdown of mean-field reconstructions can have different roots: the emergence of multiple thermodynamic states at a phase transition, an increasing correlation length at lower temperatures, or the freezing of the spins into a reduced set of configurations at low temperatures requiring more samples to measure the correlations between spins. To address this issue, we first consider a very simple case where mean-field theory is exact: the Curie–Weiss model. The zero-field Hamiltonian of $N$ binary spins $s_i$ is ${{\mathcal H}}_J({{\bf s}})=-J/N \sum_{i<j} s_i s_j$ with $J=1$. This corresponds to equal couplings $J^0_{ij}=J/N$ between all pairs of spins, a fact that is of course not known when reconstructing the couplings. $M$ samples of spin configurations are taken from the equilibrium measure $\exp\{-\beta {{\mathcal H}}_J({{\bf s}})\}/Z$, where $\beta$ is the inverse temperature and $Z$ is the partition function. In a real-life reconstruction, these configurations would come from experimental measurements of neural activity, gene expression levels, etc. One then can calculate the observed magnetisations $\bar{m}_i=\frac{1}{M} \sum_{\mu} s^{\mu}_i$ and connected correlations $\bar{c}_{ik}= \frac{1}{M} \sum_{\mu} s^{\mu}_i s^{\mu}_k - \bar{m}_i \bar{m}_k$, with $\mu =1,\ldots,M$ denoting the sampled configurations.
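The empirical observables defined above are straightforward to compute from a sample matrix; a minimal sketch in NumPy (the function name is ours):

```python
import numpy as np

def observables(S):
    """Empirical magnetisations \bar{m}_i and connected correlations
    \bar{c}_{ik} from the M sampled configurations stored as the rows
    of S (entries +/-1)."""
    m = S.mean(axis=0)                        # \bar{m}_i
    C = (S.T @ S) / len(S) - np.outer(m, m)   # \bar{c}_{ik}
    return m, C
```

In a real application the rows of `S` would be the measured configurations (neural activity patterns, binarized expression profiles, etc.).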
The mean-field prediction for the magnetisations of the Curie-Weiss model is given by the solution of the self-consistent equation $$\label{nMF_selfconsistent}
m_i={\operatorname{tanh}}( \sum_{j \neq i} J_{ij} m_j + h_i) \ ,$$ where the couplings are rescaled with temperature $J_{ij} = \beta J^0_{ij}$. The connected correlations follow from (\[nMF\_selfconsistent\]) by considering the linear response $$\begin{aligned}
\label{nMF_correlations}
c_{ik}
&=& \frac{\partial m_i}{\partial h_k} = (1-m_i^2) \left( \sum_{j \neq i} J_{ij} \frac{\partial m_j}{\partial h_k} + \delta_{ik} \right) \nonumber\\
&=&(1-m_i^2) \left( \sum_{j \neq i} J_{ij} c_{jk} + \delta_{ik} \right)
\ .\end{aligned}$$
Inserting the observed magnetisations and correlations into (\[nMF\_correlations\]) gives [@kappen] $$\label{nMF_reconstruction}
\sum_{j \neq i} J_{ij} \bar{c}_{jk}= - \delta_{ik} + \bar{c}_{ik}/(1-\bar{m}_i^2) \ ,$$ which can be solved directly for the couplings $J_{ij} = - (\bar{c}^{-1})_{ij}$ ($i \neq j$) and the fields $h_i={\operatorname{arctanh}}\bar{m}_i - \sum_{j \neq i} J_{ij} \bar{m}_j$ using (\[nMF\_selfconsistent\]).
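As a quick numerical check (the script and all parameter values are ours, not from the original analysis), one can compute the magnetisations and correlations of a small zero-field system exactly by enumeration and verify that at weak coupling the reconstructed couplings approach $\beta J^0_{ij}$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, beta = 5, 0.05                  # small system, high temperature
J0 = np.triu(rng.uniform(-0.5, 0.5, size=(N, N)), 1)
J0 = J0 + J0.T                     # symmetric couplings, zero diagonal, h = 0

# exact magnetisations and connected correlations by enumerating all 2^N states
S = np.array(list(itertools.product([-1, 1], repeat=N)), dtype=float)
w = np.exp(0.5 * beta * np.einsum('si,ij,sj->s', S, J0, S))  # Boltzmann weights
p = w / w.sum()
m = p @ S                          # vanishes by symmetry since h = 0
C = np.einsum('s,si,sj->ij', p, S, S) - np.outer(m, m)

# naive mean-field reconstruction: J_ij = -(C^{-1})_ij for i != j
J = -np.linalg.inv(C)
np.fill_diagonal(J, 0.0)
h = np.arctanh(m) - J @ m          # reconstructed external fields
```

At this small $\beta$ the deviation of the reconstructed `J` from $\beta J^0$ is of higher order in $\beta J^0$ and hence tiny; at lower temperatures the systematic mean-field error grows, which is the point of the discussion that follows.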
Figure \[fig\_CW\]a shows how well this reconstruction performs at different inverse temperatures $\beta$ and different number of samples $M$. For $\beta<\beta_c=1$, the reconstruction error goes to zero with the number of samples as $M^{-1/2}$: since for the Curie–Weiss model the self-consistent equation (\[nMF\_selfconsistent\]) is exact, the reconstruction is limited only by fluctuations of the measured correlations resulting from the finite number of samples and by the finite system size.
Yet for $\beta>\beta_c$, the difference between the underlying couplings and the reconstructed couplings does not vanish with increasing number of samples. While the self-consistent equation (\[nMF\_selfconsistent\]) is still correct, the identification of its solutions with the observed magnetisations $\bar{m}_i$ is mistaken. For the ferromagnetic phase at $\beta>\beta_c$, there are two solutions of the self-consistent equation, denoted $m_i^{\pm}=\pm m$. The observed magnetisations are averages over these two thermodynamic states and they have nothing to do with either of the two solutions of (\[nMF\_selfconsistent\]). The same holds for the connected correlations $c_{ij}^{+}$ and $c_{ij}^{-}$ in the two states, and the observed correlations $\bar{c}_{ij}$. Any method explicitly or implicitly connecting the magnetisation in low temperature states with the average magnetisation over samples will thus fail at low temperatures. Note that this does not affect Boltzmann machine learning, where the magnetisation is averaged over all states.
A simple cure suggests itself: Since each sample stems from one of the two thermodynamic states, we divide the $M$ configurations into those configurations with positive total magnetisation $\sum_i s_i^{\mu}$, and those with negative total magnetisation. Then the magnetisations in the two thermodynamic states can be calculated separately, giving $\bar{m}_i^+=\frac{1}{M_+} \sum_{\mu \in +} s^{\mu}_i$ and similarly for $\bar{m}_i^-$ and the connected correlations. Identifying these magnetisations with the solutions of the self-consistent equation (\[nMF\_selfconsistent\]), we obtain in the place of (\[nMF\_reconstruction\]) *two* sets of equations $$\begin{aligned}
\sum_{j \neq i} J_{ij} \bar{c}_{jk}{^{\,+}}&=& - \delta_{ik} + \bar{c}_{ik}{^{\,+}}/(1-(\bar{m}_i^+)^2) \label{MF_reconstruction_up} \ , \\
\sum_{j \neq i} J_{ij} \bar{c}_{jk}{^{\,-}}&=& - \delta_{ik} + \bar{c}_{ik}{^{\,-}}/(1-(\bar{m}_i^-)^2) \label{MF_reconstruction_down} \ .\end{aligned}$$ If the couplings are reconstructed from a single state only, by solving say (\[MF\_reconstruction\_up\]), the observed positive magnetisation can be accounted for equally well by positive external fields (even though the samples were generated by a model with zero field) or, alternatively, by ferromagnetic couplings between the spins. One finds that solving (\[MF\_reconstruction\_up\]) leads to an underestimate of the couplings, and *positive* external fields calculated by (\[nMF\_selfconsistent\]) follow. Correspondingly, basing the reconstruction only on data from the down state by solving (\[MF\_reconstruction\_down\]) also leads to an underestimate of the couplings, and large *negative* fields. This effect has already been noted in the context of the inverse Hopfield problem [@zecchina]. We thus demand that the reconstructed fields obtained from either state are equal to each other $$\label{MF_fields}
\sum_{j \neq i} J_{ij} (\bar{m}_j^+ - \bar{m}_j^-) = {\operatorname{arctanh}}\bar{m}_i^+ - {\operatorname{arctanh}}\bar{m}_i^-$$ and claim that jointly solving equations (\[MF\_reconstruction\_up\]), (\[MF\_reconstruction\_down\]), and (\[MF\_fields\]) gives the correct mean-field reconstruction at low temperatures.
Already equations (\[MF\_reconstruction\_up\]) and (\[MF\_reconstruction\_down\]) provide two linear equations per coupling variable, so in general there is no exact solution to these equations. However, we expect that the underlying couplings used to generate the $M$ configurations actually solve these equations, at least up to fluctuations due to the finite number of configurations sampled and to finite-size effects. For an overdetermined linear equation of the form ${\bf A \cdot x}= {\bf b}$ with vectors of different lengths ${\bf x}$ and ${\bf b}$ and a non-square matrix ${\bf A}$, the Moore–Penrose pseudoinverse ${\bf A^+}$ [@moore; @penrose] gives a least-squares solution ${\bf x}={\bf A^+ \cdot b}$ such that the Euclidean norm $|| {\bf A\cdot x} - {\bf b}||_2$ is minimized. In this sense, the Moore–Penrose pseudoinverse allows us to solve (\[MF\_reconstruction\_up\]), (\[MF\_reconstruction\_down\]), and (\[MF\_fields\]) as well as possible. These linear equations can be written as a single matrix equation ${\bf J \cdot A}={\bf B}$, where ${\bf A}$ is the $N \times (2N+1)$ matrix $\left( {\bf \bar{c}^+, \bar{c}^-, \bar{m}^+ - \bar{m}^- }\right) $ and ${\bf B}$ is the $N \times (2N+1)$ matrix $( {\bf \bar{b}^+, \bar{b}^-, \tilde{m}^+ - \tilde{m}^-) }$, with $\bar{b}_{ij}^+= -\delta_{ij} + \bar{c}_{ij}{^{\,+}}/(1-(\bar{m}_i^+)^2)$ and analogously for $\bar{b}_{ij}^-$, and $\tilde{m}_i^+= {\operatorname{arctanh}}\bar{m}_i^+$ and analogously for $\tilde{m}_i^-$. The Moore–Penrose inverse is calculated using singular value decomposition [@nr] and right-multiplied with ${\bf B}$ to obtain the optimal solution ${\bf J}$. In general, this matrix will not be symmetric, and we use $(J_{ij}+J_{ji})/2,\ i\neq j$ for the reconstructed couplings. The external fields can be computed for each state from $h_i^+={\operatorname{arctanh}}\bar{m}_i^+ - \sum_{j \neq i} J_{ij} \bar{m}_j^+$, and analogously for $h_i^-$. Their average over the two states is used for the reconstructed fields.
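The least-squares step itself reduces to a single pseudoinverse call. A sketch with synthetic matrices standing in for the stacked observables (a random full-row-rank ${\bf A}$ is used purely to illustrate the algebra, not generated from Ising samples):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
Jtrue = rng.normal(size=(N, N))
Jtrue = (Jtrue + Jtrue.T) / 2
np.fill_diagonal(Jtrue, 0.0)       # symmetric couplings, zero diagonal

# A stands in for the N x (2N+1) matrix (c+, c-, m+ - m-); any matrix of
# full row rank illustrates the linear algebra, so we take a random one
A = rng.normal(size=(N, 2 * N + 1))
B = Jtrue @ A                      # consistent right-hand side

# least-squares solution of J . A = B via the Moore-Penrose pseudoinverse
J = B @ np.linalg.pinv(A)
J = (J + J.T) / 2                  # symmetrize as described in the text
```

Because the synthetic system is consistent and `A` has full row rank, the pseudoinverse recovers `Jtrue` exactly; with real sampled observables the result is instead the least-squares compromise between the stacked equations.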
Figure \[fig\_CW\]b shows how the reconstruction error now vanishes as $M^{-1/2}$ also in the ferromagnetic phase, albeit with a prefactor which grows as the temperature decreases. So while the mean-field reconstruction from many samples is still successful at low temperatures, more configurations are needed to obtain a given reconstruction error: at very low temperatures, most spins will be in the same state (either up or down); the connected correlations are small as a result and require many samples for their accurate determination. The quality of the reconstruction depends on configurations being correctly assigned to the thermodynamic states. Artificially introducing mistakes in this assignment, we find that the reconstruction error increases linearly with the fraction of configurations assigned to the wrong state.
In practice, couplings between spins will not all be equal to each other, as in the Curie–Weiss model. Ferromagnetic as well as antiferromagnetic couplings may be present in magnetic alloys, neurons have excitatory and inhibitory interactions, and regulatory interactions between genes can either enhance or suppress the expression of a target gene. The Curie–Weiss ferromagnet is not a good model for all those cases where the couplings are of different signs and magnitudes. In fact, in models where all spins interact with each other via couplings that can be positive or negative [@SK], the low-temperature regime may be characterized not by two, but by many thermodynamic states [@TAP; @MezardParisiVirasoro]. These so-called glassy states cannot be identified simply by the total magnetisation of each sample, as is the case for the ferromagnet. Nevertheless, configurations $\mu,\mu^{\prime}$ from the same thermodynamic state are typically close to each other, having a large overlap $(1/N) \sum_i s_i^{\mu} s_i^{\mu^{\prime}}$. Glassy thermodynamic states thus appear as clusters in the space of configurations [@domanystaufferhartmann; @montanarikrzalaparisi_pnas].
We use the $k$-means clustering algorithm [@kmeans] to find these clusters in the sampled spin configurations. Starting with a set of randomly chosen and normalized cluster centres, each configuration is assigned to the cluster centre it has the largest overlap with. Then the cluster centres are moved to lie in the direction of the centre of mass of all configurations assigned to that cluster, and the procedure is repeated until convergence. We also tried different algorithms from the family of hierarchical clustering methods, but found no significant difference in the reconstruction performance. Magnetisations and connected correlations are then computed for each cluster separately. Equations (\[MF\_reconstruction\_up\]), (\[MF\_reconstruction\_down\]), and (\[MF\_fields\]) can be written for $k$ thermodynamic states. The mean-field equation for each state and the condition that the external fields are equal in all states can again be written in the form of a matrix equation ${\bf J \cdot A}={\bf B}$. ${\bf A}$ is the $N \times (kN+k)$ matrix $\left( {\bf \bar{c}^1, \ldots, \bar{c}^k, \bar{m}^1 - \langle\bar{m} \rangle ,\ldots, \bar{m}^k - \langle\bar{m} \rangle } \right)$ where $\langle \cdot \rangle$ denotes the average over clusters, $\langle{\bf \bar{m}} \rangle = (1/k) \sum_{a=1}^k {\bf \bar{m}^a}$, and analogously for ${\bf B}$. The pseudoinverse of ${\bf A}$ can be computed in ${\mathcal O}(kN^3)$ steps [@nr], so up to a factor of $k$ coming from the number of clusters, this is just as fast as the high-temperature mean-field reconstruction based on Gaussian elimination to invert the correlation matrix.
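A minimal version of this clustering step might look as follows (function and parameter names are ours; a production version would iterate until the assignments stop changing rather than for a fixed number of sweeps):

```python
import numpy as np

def kmeans_overlap(S, k, iters=50, seed=0):
    """Assign spin configurations (rows of S, entries +/-1) to k clusters,
    each configuration going to the normalized centre it overlaps most."""
    rng = np.random.default_rng(seed)
    M, N = S.shape
    centres = rng.normal(size=(k, N))
    centres /= np.linalg.norm(centres, axis=1, keepdims=True)
    labels = np.zeros(M, dtype=int)
    for _ in range(iters):
        labels = (S @ centres.T).argmax(axis=1)    # largest overlap wins
        for a in range(k):
            members = S[labels == a]
            if len(members):                        # move centre to the mean
                c = members.mean(axis=0)
                centres[a] = c / (np.linalg.norm(c) + 1e-12)
    return labels
```

For a ferromagnet this reduces to sorting configurations by the sign of their total magnetisation; for glassy data the centres play the role of the (unknown) state magnetisation profiles.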
We test this approach using couplings drawn independently from a Gaussian distribution of zero mean and variance $1/N$ (the Sherrington–Kirkpatrick model [@SK]). Figure \[fig\_SK\]a shows the reconstruction at low temperatures improving with the number of clusters $k$ and the number of configuration samples $M$. We note that at high temperatures, magnetisations are $0 \pm {\mathcal O}(N^{-1/2})$, so for small system sizes the clustering erroneously identifies distinct clusters with small magnetisation. Thus at high temperatures, the low-temperature reconstruction based on many clusters does not work as well as the standard approach based on a single cluster.
A further improvement is possible. For disordered systems, the self-consistent equation is not exact. An additional term is required, the so-called Onsager reaction term describing the effect a spin has on itself via the response of its neighbouring spins. The Thouless–Anderson–Palmer (TAP) equation [@TAP] $$\label{TAP_selfconsistent}
m_i={\operatorname{tanh}}( \sum_{j \neq i} J_{ij} m_j - m_i \sum_{j \neq i} J_{ij}^2 (1-m_j^2)+ h_i)$$ turns out to be exact for models where all spins interact with each other. For each state $a$ we now obtain instead of (\[nMF\_reconstruction\]) $$\begin{aligned}
\label{TAP_reconstruction}
&&\sum_{j \neq i} J_{ij} \bar{c}_{jk}^a= - \delta_{ik} + \bar{c}_{ik}^a/(1-(\bar{m}_i^a)^2) + \nonumber \\
&&\ \ \bar{c}_{ik}^a \sum_{j \neq i} J_{ij}^2 (1-(\bar{m}_j^a)^2) - 2\bar{m}_i^a \sum_{j \neq i} J_{ij}^2 \bar{m}_j^a \bar{c}_{jk}^a \ .\end{aligned}$$ These equations are no longer linear in the couplings $J_{ij}$ and cannot be solved by the pseudoinverse. A simple gradient descent method still allows these equations to be solved in ${\mathcal O}(kN^3)$ steps per iteration. We define a quadratic cost function $S$ for the couplings ${\bf J}$ by squaring the difference between the left- and right-hand sides of equation (\[TAP\_reconstruction\]) and summing over all spin pairs $i,k$ and states $a$. Differences in the external fields $h_i^a={\operatorname{arctanh}}\bar{m}_i^a - \sum_{j \neq i} J_{ij} \bar{m}_j^a+ \bar{m}_i^a \sum_{j \neq i} J_{ij}^2 (1-(\bar{m}_j^a)^2)$ across thermodynamic states are penalized by an additional term $\sum_{i,a} \left( h_i^a - \langle h_i \rangle \right)^2$. The iterative prescription with rate $\eta$, $J_{ij} \leftarrow J_{ij} - \eta \partial S/\partial J_{ij}$, converges to a point near the solution of the TAP equation with small differences in the external fields across states (the deviations resulting from the finite number of samples and the finite system size). Figure \[fig\_SK\]b shows how the reconstruction error asymptotically tends to zero with growing $k$ and $M$.
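As an aside, for given couplings and fields the TAP equation itself can be solved by damped fixed-point iteration. A sketch under the assumption of weak couplings, for which the iteration converges (the function name and all parameter values are ours):

```python
import numpy as np

def solve_tap(J, h, iters=500, damping=0.5):
    """Damped fixed-point iteration of the TAP equation
    m_i = tanh( sum_j J_ij m_j - m_i sum_j J_ij^2 (1 - m_j^2) + h_i ).
    J is symmetric with zero diagonal and includes the factor beta."""
    m = np.zeros_like(h)
    for _ in range(iters):
        field = J @ m - m * (J**2 @ (1 - m**2)) + h
        m = (1 - damping) * m + damping * np.tanh(field)
    return m

rng = np.random.default_rng(3)
N = 8
J = rng.normal(scale=0.1, size=(N, N))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.2, size=N)
m = solve_tap(J, h)   # TAP magnetisations for this small sample system
```

At stronger couplings or lower temperatures the plain iteration can cease to converge, which is one reason the reconstruction above works with the cost function $S$ instead of iterating the TAP equation directly.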
Mean-field theories exist beyond the Curie–Weiss and Sherrington–Kirkpatrick models discussed here [@saad_opper_advancedMF]. We have shown that the use of mean-field methods to solve the inverse Ising problem at low temperatures hinges on our ability to reconstruct the thermodynamic states from the sampled data. With this proviso, the entire range of mean-field methods can now be used, for instance for tree-like couplings [@NguyenBerg] or couplings with local loops [@kikutchi].
We placed our focus on mean-field approaches, since they result in computationally efficient reconstructions independently of the underlying model (for instance a full connectivity matrix $J_{ij}$ versus a sparse matrix). Reconstructions based on pseudo-likelihood [@Ravikumar] can fail at low temperatures as well [@bento2009], although [@pseudolikelihood] finds a good reconstruction for several models also at low temperatures, albeit at a large computational cost. The adaptive cluster expansion recently introduced by Cocco and Monasson [@adaptive] is not affected by the transition to a low-temperature phase, but becomes computationally unwieldy for highly-connected models due to the large number of clusters to be considered.
[**Acknowledgements:**]{} We thank Erik Aurell, Filippos Klironomos, and Nico Riedel for discussions. Funding by the DFG under SFB 680 is acknowledged.
---
abstract: 'We investigate, by the use of the Martin - Siggia - Rose generating functional technique and the self - consistent Hartree approximation, the dynamics of the ring homopolymer collapse (swelling) following an instantaneous change to poor (good) solvent conditions. The equation of motion for the time dependent monomer - to - monomer correlation function is systematically derived. It is argued that for describing the coarse - graining process (which neglects the capillary instability and the coalescence of “pearls”) the Rouse mode representation is very helpful, so that the resulting equations of motion can simply be solved numerically. In the case of the collapse this solution is analyzed in the framework of the hierarchically crumpled fractal picture, with crumples of successively growing scale along the chain. The presented numerical results are in line with the corresponding simple scaling arguments, which in particular show that the characteristic collapse time of a segment of length $g$ scales as $t^* \sim \zeta_0 g/\tau$ (where $\zeta_0$ is a bare friction coefficient and $\tau$ is the depth of the quench). In contrast to the collapse, the globule swelling can be seen (when topological effects are neglected) as a homogeneous expansion of the globule interior. The swelling of each Rouse mode as well as of the gyration radius $R_g$ is discussed.'
author:
- 'Vakhtang G. Rostiashvili$^{1,2}$, Nam-Kyung Lee$^{1}$ and Thomas A. Vilgis$^{1,2}$'
title: |
Collapse or Swelling Dynamics of Homopolymer Rings:\
Self-consistent Hartree approach
---
Introduction
============
The equilibrium theory of the coil - globule transition is one of the major issues in polymer physics. Many theories have been developed to understand this physical phenomenon in more detail [@Gennes; @Gennes1; @Grosberg; @Kremer]. The theory for the kinetics of this transition [@abrams01; @Gennes2; @Pitard; @Pitard1; @Pitard2; @Brochard; @Halperin; @Klushin; @Byrne; @Timoshenko; @Shakh; @Chang], as well as the pertinent experiments [@Wu], have recently attracted broad interest. One of the main motivations for these studies is that the first stage of a protein folding process is believed to be a fast collapse to a compact but nonspecific state. This has been shown, for example, by lattice Monte Carlo simulations [@Socci]. The first stage of folding appears to be sequence - independent, and therefore the process is similar to the collapse of a homopolymer. Generally the collapse of a polymer chain can be attributed to an (abrupt) change of the second virial coefficient from the good solvent regime $v>0$ to the poor solvent regime $v<0$. The second virial coefficient depends on the temperature and has a Boyle point $v=0$ at the so called $\theta$ temperature. The resulting attractive two - body interaction requires at least the third virial coefficient (always positive), which prevents the chain from collapsing to a single point.
In de Gennes’ “expanding sausage model” [@Gennes2] a flexible chain changes its conformations on the shortest scale through the formation of crumples after being quenched to poor solvent conditions. Then crumples are formed on a larger scale, resulting in a sausage - like shape, which eventually leads to a compact globule. One can see that this model is translationally invariant along the chain backbone, provided that the chain has cyclic boundary conditions. This simplifies the problem and allows us later on to use simple Rouse transformations [@Pitard; @Pitard1; @Doi] of the chain coordinates.
Concerning de Gennes’ model, it was shown by Brownian dynamics simulations [@Byrne] that the “sausages” become unstable with respect to capillary waves. Consequently a so called pearl necklace structure is formed. This brought about a number of publications where scaling arguments [@Brochard; @Halperin; @Klushin] and the Gaussian self - consistent approach [@Timoshenko] have been put forward. Actually the pearl necklace formation breaks the translational invariance along the chain backbone and complicates these issues.
Alternative recent phenomenological considerations have been put forward by joining scaling arguments [@abrams01] and computer simulations [@abrams01; @Chang]. It was argued [@abrams01] that the relaxation times related to the capillary instability, as well as to the coalescence of pearls, are short compared to the characteristic time on which the “sausage” changes its configuration. Since the positions of the pearls along the chain are random they can be averaged over. The resulting “sausage” can be seen as an envelope (which has the form of a flexible cylinder) of the pearl necklaces, and the overall chain configurations are composed of random walks of sausages. This picture successfully explains the so called coarse - graining process shortly after the quench to a range of temperatures below the $\theta$ - point, but above eventual glassy globular relaxation regimes [@Pitard2; @Dokholyan; @Rost]. This argumentation reconciles in a sense the two scenarios mentioned above, but the microscopic picture is still lacking. Therefore, instead of considering the processes of formation and coalescence of pearls, we concentrate here on the coarse - grained dynamics of the “sausage”. This model, as already mentioned, is homogeneous in the sense that the translational invariance along the chain backbone is effectively preserved during the process of the collapse.
In this paper we provide a microscopic theory for the coil to globule transition. In order to gain insight and some intuitive picture for the later discussion, we start from scaling considerations. Then we study the Langevin dynamics of the problem based on the Martin - Siggia - Rose (MSR) generating functional technique together with the self - consistent Hartree approximation [@Horner; @Shapir; @Rost1]. How the low - molecular solvent dynamics is treated is quite important for the final results. In this paper we restrict our consideration to the random phase approximation (RPA), which is well - known in the context of the theory of both polymeric [@Vilgis] and low - molecular [@Boon] systems. The RPA fails to account for the hydrodynamic interaction because in the hydrodynamic regime collisions dominate and keep the solvent in a state of local thermodynamic equilibrium (see e.g. Sec.6.5 in ref. [@Boon]), so we leave this subject for future publications. As a main result, the generalized dynamical equation for the collapsing (or swelling) chain is derived. The relaxation laws for the early and late stages are investigated analytically, and the full numerical solution is also obtained and thoroughly discussed.
Scaling
=======
Collapse
--------
Let us first consider the dynamical time scales for the ring polymer collapse under Rouse dynamics conditions. By this assumption we neglect certain physical properties, such as the capillary instability and long - ranged hydrodynamic interactions. In this first section, our main goal is to provide the corresponding scaling picture for a Rouse chain, which we are going to discuss with more refined analytical methods below. Nevertheless the principal time scales are fixed. The relaxation of each length scale is clearly associated with the relaxation of a mode, as we will show later.
To model the collapse process we consider the initial state of globule formation to be composed of a Gaussian chain of $N$ monomers of size $b_1$. It can be considered as a fractal. At later times the corresponding structure is assumed to be of the same type, generated by a hierarchical random process, as illustrated in Fig.\[fig:gauss\]. This self similar hierarchical process defines static and dynamic properties by a set of exponents. On each scale and at each stage, the chain can again be re-expressed by random walks of $N/g$ coarse grained monomers of size $b(g)$ which contain $g$ original monomers [@abrams01]. The main effect of the quench to poor solvent conditions can be viewed as if the Gaussian ($\theta$ -) chain becomes instantly elastic and collapses in order to minimize its contacts with the solvent.
![\[fig:gauss\] Initial configuration of a chain can be viewed as hierarchical fractal in ring geometry. At time $t$ all length scales smaller than a certain $b_2$ becomes collapsed.](fractal.eps){width="10cm"}
At later times $t$, the Gaussian fractal structure becomes less complex, since on smaller scales the structure ($b_1$ in Fig. \[fig:gauss\]) becomes collapsed. All monomers on length scales smaller than, e.g., $b_1$ then condense to a scale of length $b_2$. On the other hand, the overall structure on larger scales remains a random walk of the new coarse grained monomers of size $b_2$. After further collapse of $g \sim b_2^2$ monomers, the longest dimension of each collapsed segment remains $b_3$. The vectorial sum of the net force acting on each segment from outside the segment is zero if the segment belongs to the fractal structure. In linear chain dynamics, the coarse graining picture based on the fractal structure moves to the late stage when the intermediate chain segment can no longer be considered as part of the fractal due to the continuous condensation. The typical late stage configuration for a linear chain is the collinear structure, where the terminal parts of the chain experience a net force [@abrams01].
Due to the absence of end effects in the chosen ring geometry, the collapse dynamics is a continuous coarse graining process until the polymer reaches its compact globule conformation. We will now discuss the exponent $x$ for the characteristic time of the collapse, i.e. $t^*_{\rm collapse} \sim N^{x}$. When the chain is quenched into poor solvent conditions, the initial random walk configuration contracts immediately. The energy of each segment of size $R_1$ is $E(R_1)
\sim \tau g k_{\rm B}T \sim \tau (R_1/b)^2 k_{\rm B}T$ where $\tau \sim v_{\rm
i} - v_{\rm f}$ is the depth of quench ($v_{\rm i}$ and $v_{\rm f}$ are the initial and the final second virial coefficients correspondingly). The contraction results in a finite net force, which can be estimated for a given segment of length $R_1(g)\sim b\sqrt{g}$ to be $$f(g)= \frac{d E}{d R_1(g)} \sim \frac{k_{\rm B}T}{b^2} \tau R_1 \sim
\frac{k_{\rm B}T}{b}\tau \sqrt{g} .
\label{force}$$ The corresponding velocity of each segment is given by $$u = \frac{f(g)}{\zeta}.
\label{eq:eq}$$ As we assume the Rouse dynamics, the corresponding friction coefficient of each segment scales with the number of monomers involved, i.e., $\zeta \sim \zeta_0
g$, where $\zeta_0$ is the bare friction coefficient (determined by the white noise correlation and the fluctuation dissipation theorem). Since $R_1^2 \sim
b^2 g$, the characteristic time, $t^* \sim R_1/u$, for the collapse of each (coarse grained) segment of length $R_1$ reads $$t^*(g) = \frac{g}{\tau}t_0.
\label{each_segment}$$ Here we define $t_0 \equiv \zeta_0 b^2/k_{\rm B}T = b^2/D$ where $D$ is the diffusion constant of a solvent particle and is related to $\zeta_0$ by Einstein relation $D\equiv k_{\rm B}T/\zeta_0$. Since the chain has self similar structure on all scales, this relation holds until the chain merges to a single compact globule. The total time for the collapse is the time required for the largest length scale with $N$ monomers. $$t_{\rm collapse}^* \sim \frac{N}{\tau} t_0.
\label{total}$$ If we use Zimm dynamics, the friction of a segment of size $R_1$ would no longer be determined by the number of monomers, but by the geometric size of the chain [@Doi], i.e., $\zeta_{Z} \sim \zeta_0 R_1$. The characteristic time for collapse is then $t^*_{Z}\sim \sqrt{g} t_0$. Note that in Ref.[@abrams01], the characteristic time for collapse is $t\sim g t_0$ with Zimm dynamics, where the infrastructure of a segment is a series of pearl necklaces due to the capillary instability. The net force of contraction in the necklace geometry is independent of the scale. The contraction effectively brings monomers which belong to the string into the globule. Therefore $f_{\rm necklace} \sim k_{\rm B}T/b$. We summarize the characteristic times for collapse in the different regimes in table \[table\].
                               uniform $f\sim \sqrt{g}$   necklace $f\sim 1$
  ---------------------------- -------------------------- -----------------------
  Rouse $\zeta \sim g$         $t^u_R \sim g$             $t^n_R \sim g^{3/2}$
  Zimm $\zeta \sim \sqrt{g}$   $t^u_Z \sim \sqrt{g}$      $t^n_Z \sim g$
\[table\]
The preceding scenario of the collapse, based on the hierarchical fractal picture, was first discussed in ref.[@abrams01]. It was shown there that the theoretical predictions are compatible with MD simulations. However we should emphasize that at later times, when all pearls have merged into a single cluster, the driving energy is no longer determined by the fractal regime, $E \sim \tau g
k_BT$, but mainly ruled by interfacial effects. This was pointed out by de Gennes [@Gennes2] in his sausage model. The dynamics in this regime is determined approximately by changes of the surface, which are described by $$E_{\rm inter} \sim \gamma L r \, .
\label{interface}$$ In eq.(\[interface\]) $\gamma \sim k_BT/\xi^2$ stands for the surface tension, $L$ is the “sausage” length and $r$ is its radius. The sausage is formed by thermal blobs of diameter $\xi \simeq b/\tau$. In this regime the collapse can be viewed as a minimization of the interfacial energy $E_{\rm inter}$ under the constraint of constant volume $V
\sim Lr^2$. The characteristic time which corresponds to this regime is given by [@Gennes2] $$t^{**} \sim \tau N^2 t_0
\label{t**}$$ The crossover between the two above-mentioned regimes is determined by matching the two characteristic times, i.e., $t^* \approx t^{**}$ or $\tau^2 N \approx {\rm const}$.
This matching is simply understood by taking into account that the globule behaves liquid like under the constraint of constant volume. The volume is given by the number of blobs $N/g_{\rm blob}$, where $g_{\rm blob} \sim 1/\tau^2$. Thus we may conclude that in the crossover range the number of blobs ${\cal N}_{\rm blob}^{(eq)}
\sim N/g_{\rm blob}$ corresponds to the equilibrium value. De Gennes’ dynamics is associated therefore with the re-packing of the incompressible “blob fluid”.
The presence of these two regimes has been shown by Brownian dynamics simulations (see Sec.III B and Fig.6 in ref.[@Chang]). It was found that the first regime becomes faster for a larger quench depth $\tau$ (see eq.(\[each\_segment\])), whereas the second one becomes slower as the quench depth grows (see eq.(\[t\*\*\])).
Swelling
--------
When the solvent quality is changed suddenly from poor to good or $\theta$ - conditions, the globule starts to swell. We consider here for brevity a quench from poor solvent to $\theta$ - solvent conditions and assume that the globule interior always contains enough solvent molecules, i.e., the solvent is not very poor ($\tau \le 0$). Then one can ignore the specific role of solvent transport and assume a homogeneous expansion of the globule. In this subsection we restrict ourselves to a scaling picture to set up the main time scales by employing an “expanding blob” picture. Let us therefore assume that during the process of expansion the segments on the length scale of the blob size $\xi$ maintain local equilibrium, whereas on larger scales the system is no longer in equilibrium. This amounts to the assumption that the initial state of the globule can be described by a free energy which is proportional to the number of blobs times the thermal energy [@Gennes]. We can then write the free energy for the overall quasi - equilibrium regime in the following form $$F_{\rm glob} \propto k_{B}T \left(\frac{N}{g}\right) \quad,
\label{F11}$$ where the number of monomers in the blob, $g$, is a time dependent function and still has a Gaussian statistics, i.e. $$g \propto \left(\frac{\xi}{b}\right)^2 \quad.
\label{g}$$ We assume as well that the quasi - equilibrium scaling for the overall globule size, $R$, is valid and gives $$R \propto \xi \left(\frac{N}{g}\right)^{1/3} \quad.
\label{R11}$$ The substitution of eqs.(\[g\]) and (\[R11\]) in eq. (\[F11\]) yields $$F_{\rm glob} \propto k_{B}T \frac{b^6N^3}{R^6} \quad.
\label{F1}$$ To take into account the correct long time limiting behavior, when the chain size becomes proportional to $b^2 N$, the elastic energy term should be included. This term counterbalances the over-swelling and stabilizes the system, so that the whole quasi - equilibrium free energy reads $$F \propto k_{B}T \left( \frac{b^6N^3}{R^6} + \frac{R^2}{b^2N}\right)\quad.
\label{F_whole}$$ After that the corresponding equation of motion for $R(t)$, i.e. $$\zeta \frac{d R}{d t} = - \frac{\delta F}{\delta R}
\label{zeta}$$ takes the following form $$\frac{N}{D} \frac{dR(t)}{dt} = \frac{b^6 N^3}{R^7} - \frac{R}{b^2
N}\quad.
\label{Eq_Motion}$$ In eq.(\[zeta\]) $\zeta \sim \zeta_0 N$ is the Rouse friction coefficient and in eq.(\[Eq\_Motion\]) $D = k_{B}T/ \zeta_0$ denotes the monomer diffusion constant. It should be noted, that we do not use the Rouse diffusion constant throughout the paper, since we wish to keep track of the chain length dependences explicitly.
The solution of eq.(\[Eq\_Motion\]) is $$R^2(t) = \left[R_0^8 e^{-8Dt/b^2N^2} + b^8 N^4 \left( 1 -
e^{-8Dt/b^2N^2}\right)\right]^{1/4} \quad.
\label{swell}$$ In the asymptotic limit of $t\rightarrow \infty$, we recover the Gaussian chain size, $R^2_{\infty} = b^2 N$. The characteristic time for swelling is determined by the condition: $R_g^2 \sim t^{1/2}_{\rm swell} \sim b^2 N$. The characteristic time $t^*_{\rm swell}$ scales with the chain length as $$t^*_{\rm swell} \sim N^2.$$
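As an aside, eq.(\[Eq\_Motion\]) can be checked numerically. The sketch below is a hedged illustration, not part of the derivation; the units $b = D = 1$ and the chain length $N = 100$ are arbitrary choices. It integrates the equation of motion with a forward - Euler step and compares the result with the closed - form solution (\[swell\]).

```python
import math

# Illustrative parameters (not from the text): units b = 1, D = 1.
b, D, N = 1.0, 1.0, 100
R0 = b * N ** (1.0 / 3.0)          # initial globule size, R_0 ~ b N^{1/3}

def closed_form_R2(t):
    """Eq. (swell): R^2(t) = [R0^8 e^{-8Dt/b^2N^2} + b^8 N^4 (1 - e^{-8Dt/b^2N^2})]^{1/4}."""
    e = math.exp(-8.0 * D * t / (b**2 * N**2))
    return (R0**8 * e + b**8 * N**4 * (1.0 - e)) ** 0.25

# forward-Euler integration of (N/D) dR/dt = b^6 N^3 / R^7 - R / (b^2 N)
R, t, dt = R0, 0.0, 1.0
for _ in range(200_000):
    R += dt * (D / N) * (b**6 * N**3 / R**7 - R / (b**2 * N))
    t += dt

print(round(R**2), round(closed_form_R2(t)))   # both approach b^2 N = 100
```

Both values converge to the Gaussian limit $b^2 N$, and the exponential in eq.(\[swell\]) makes the $t^*_{\rm swell} \sim N^2$ scaling explicit.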
We will see in Sec.V, devoted to the numerical investigation of the full equation of motion, that because of the coupling of different Rouse modes the driving force of the swelling is mainly determined by the large length scales (see e.g. Fig. \[fig:omega\]).
In this consideration the chain is treated as “phantom”: although the effective interaction is taken into account through the virial coefficients, segments can still cross each other. This use of pseudo - potentials does not allow one to include the topological effects discussed recently. If, however, the effects of topological constraints are dominant, the chain can swell only by reptation through the channels made of neighboring segments. The swelling process of an entangled globule needs further consideration [@Nechaev; @Rabin].
Effective Hamiltonians and local time scales
--------------------------------------------
The scaling arguments show what can be expected at large length scales only. It is at first sight difficult to estimate the expected time scales on the level of Rouse - type modes. Nevertheless some remarks on the scale dependence of the relaxation modes can be made. In the more detailed theory presented below one can recover the dynamical scales of the very late relaxation regimes. The chain size in globular states scales as $R
\simeq b (N/\tau)^{1/d}$ for scales larger than the thermal blob size $\xi_{\rm T} = b/\tau$. A general starting point is the Edwards Hamiltonian $H({{\bf R}(s)})$, where ${\bf R}(s)$ defines the chain position vectors and $s$ the contour variable. Any dynamic theory can be formulated through the Rouse modes, which are usually defined by a Fourier transform of the position vectors as ${\bf X}(p) = (1/N)\sum_{s} {\bf R}(s)\exp( i sp)$, where $p$ stands for the Rouse modes with $p = 2\pi n/N$ and $n = 0, 1, 2, \dots,N - 1$ [@Doi]. In general the effective Hamiltonian for a collapsed chain can be written in terms of Rouse modes as a simple quadratic form $$\label{effective}
H_{\rm eff} = k_{\rm B}T f(\tau) \sum_{p} p^{2a}|{\bf X}(p)|^{2} \quad,$$ where $a$ is determined by the static correlations and the function $f(\tau)$ equals 1 for good and $\theta$ - solvents but $f(\tau) \sim \tau^{2/d}$ under poor solvent conditions. In the case of extended chains it is easy to show that $2a=1+2\nu$ (where $\nu = 3/(d + 2)$), whereas in the case of compact chains $2a=1+2/d$. The Rouse dynamics for these effective chain variables ${\bf X}(p)$ is determined by a Rouse equation of the form $$\zeta_{0} \frac{\partial {\bf X}(p)} {\partial t} +
\frac{3k_{\rm B}T}{b^{2}} f(\tau) p^{2a} {\bf X}(p) = 0$$ This effective Rouse equation sets the time scales for the latest stages of the dynamical evolution. Namely, for the swelling the relaxation spectrum has the form $$\tau_{\rm rel}(p) \simeq \frac{\zeta_{0}}{p^{1+2\nu}}\quad,
\label{tau1}$$ whereas for the collapse it is $$\tau_{\rm rel}(p) \simeq \frac{\zeta_{0}}{\tau^{2/d}p^{1+2/d}}\quad.
\label{tau2}$$ These correspond naturally to the characteristic Rouse relaxation times, as we will discuss within more refined theories in Sec. IVB.
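For illustration, the two relaxation spectra can be evaluated numerically. In the sketch below $\zeta_0$ and the quench depth $\tau$ are arbitrary placeholder values; the only point is that the slowest mode $p = 2\pi/N$ reproduces the chain - length scalings $N^{1+2\nu}$ (swelling) and $N^{1+2/d}/\tau^{2/d}$ (collapse).

```python
import math

zeta0, d = 1.0, 3
tau_quench = 0.5                     # reduced temperature tau (assumed value)
nu = 3.0 / (d + 2)                   # Flory exponent of an extended chain

def tau_swell(p):
    # eq. (tau1): tau_rel(p) ~ zeta0 / p^{1 + 2 nu}
    return zeta0 / p ** (1 + 2 * nu)

def tau_collapse(p):
    # eq. (tau2): tau_rel(p) ~ zeta0 / (tau^{2/d} p^{1 + 2/d})
    return zeta0 / (tau_quench ** (2.0 / d) * p ** (1 + 2.0 / d))

# slowest mode p = 2 pi / N: tau_rel ~ N^{1+2nu} (swelling), N^{1+2/d} (collapse),
# so dividing by the predicted power of N gives an N-independent constant
for N in (64, 256, 1024):
    p = 2.0 * math.pi / N
    print(N, tau_swell(p) / N ** (1 + 2 * nu), tau_collapse(p) / N ** (1 + 2.0 / d))
```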
Equations of motion for correlators
===================================
Model
-----
In this section we provide a more general formulation of the Langevin dynamics for a polymer chain immersed in a solvent. The chain conformation is characterized by the $d$ - dimensional vector - function ${\bf R}(s,t)$ of the time $t$ and the contour variable $s$ ($1 \le s \le N$), the segment position along the chain. The corresponding intra - chain Hamiltonian has the following form $$\begin{aligned}
H=\frac{1}{2}\epsilon\sum_{s=1}^{N}\left[\nabla_{s}{\bf
R}(s,t)\right]^{2} + H_{\rm int}\left\{{\bf R}(s,t)\right\}
\quad,
\label{Hamilton}\end{aligned}$$ where $\epsilon=dk_B T/b^{2}$ is the elastic modulus with the Kuhn segment length $b$, $N$ is the total number of segments, the finite difference is $\nabla_{s}{\bf R}_{j}(s,t) = {\bf R}_{j}(s + 1,t) - {\bf R}_{j}(s,t)$, and the intra - chain interaction Hamiltonian has the form $$\begin{aligned}
H_{\rm int}\left\{{\bf R}(s,t)\right\} &=&\frac{1}{2}
\sum_{s=1}^{N}\sum_{s'=1}^{N}v({\bf R}(s,t) - {\bf R}(s',t))\nonumber\\
&+&\frac{1}{3!} \sum_{s=1}^{N}\sum_{s'=1}^{N}\sum_{s''=1}^{N}w({\bf
R}(s,t) - {\bf R}(s',t); {\bf R}(s',t) - {\bf R}(s'',t)) + \dots
\label{Interaction}\end{aligned}$$ In eq.(\[Interaction\]) $v({\bf r})$ and $w({\bf r}_1,{\bf r}_2)$ are the second and third virial coefficients, respectively. Let us treat the low - molecular solvent molecules as a separate component and specify their positions by the vector - functions ${\bf
r}^{(p)}(t)$, where $p = 1, 2, \dots, M$ labels the solvent molecules. We denote by $V_{\rm ps}({\bf r})$ and $V_{\rm ss}({\bf r})$ the polymer - solvent and solvent - solvent interaction potentials, respectively. The whole polymer - solvent dynamics is then described by the following Langevin equations: $$\begin{aligned}
\zeta_{0}\frac{\partial}{\partial
t}R_{j}(s,t)&-&\epsilon\Delta_{s}R_{j}(s,t) +\frac{\delta}{\delta R_{j}(s,t)} H_{\rm int}\left\{{\bf R}(s,t)\right\} \nonumber\\
&+&{\frac{\delta}{\delta R_{j}(s,t)}\sum_{p=1}^{M} V_{\rm ps}\left({\bf R}(s,t)-{\bf
r}^{(p)}(t)\right)}=f_{j}(s,t)
\label{R}\end{aligned}$$ and $$\begin{aligned}
\zeta'_{0}\frac{\partial}{\partial
t}r_{j}^{(p)}(t) &+&\frac{\delta}{\delta
r_j^{(p)}(t)}\sum_{m=1}^M V_{\rm ss}\left({\bf r}^{(p)}(t)-{\bf
r}^{(m)}(t)\right)\nonumber\\&+&\frac{\delta}{\delta
r_j^{(p)}(t)} \sum_{s=1}^{N} V_{\rm ps}\left({\bf r}^{(p)}(t)-{\bf
R}(s,t)\right)={\tilde f}_j^{(p)}(s,t)\quad,
\label{r}\end{aligned}$$ where $j$ labels the Cartesian components, $\zeta_{0}$ and $\zeta'_{0}$ are the bare friction coefficients of polymer segments and solvent molecules (which should be of the same order and are set equal in the following) and the second order finite difference $\Delta_{s}R_{j}(s,t) = R_{j}(s + 1,t) + R_{j}(s -
1,t) - 2R_{j}(s,t)$.
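If the interaction and solvent terms in eq.(\[R\]) are switched off, what remains is a free Rouse chain, which gives a quick numerical sanity check of the discretization and of the noise strength $2k_B T\zeta_{0}$. The Euler - Maruyama sketch below (all parameter values are illustrative) verifies bond - vector equipartition, $\langle [\nabla_{s}{\bf R}]^2\rangle = b^2$.

```python
import math
import random

# Euler-Maruyama sketch of eq. (R) with the interaction and solvent terms
# switched off (a free Rouse chain). Illustrative parameters: d = 3 and
# b = k_B T = zeta_0 = 1, so the elastic modulus is eps = d k_B T / b^2 = 3.
random.seed(0)
d, N = 3, 16
kT, zeta0, b = 1.0, 1.0, 1.0
eps = d * kT / b**2
dt = 0.01
noise = math.sqrt(2.0 * kT * dt / zeta0)   # <f f> = 2 k_B T zeta_0 delta(t-t')

R = [[float(s), 0.0, 0.0] for s in range(N)]   # start from a straight line

def laplacian(R, s):
    # second-order finite difference Delta_s R with free ends
    left = R[s - 1] if s > 0 else R[s]
    right = R[s + 1] if s < N - 1 else R[s]
    return [left[j] + right[j] - 2.0 * R[s][j] for j in range(d)]

acc, cnt = 0.0, 0
for step in range(7000):
    R = [[R[s][j] + dt * eps / zeta0 * laplacian(R, s)[j]
          + noise * random.gauss(0.0, 1.0) for j in range(d)] for s in range(N)]
    if step >= 2000:                  # discard the equilibration transient
        for s in range(N - 1):
            acc += sum((R[s + 1][j] - R[s][j]) ** 2 for j in range(d))
            cnt += 1
print(acc / cnt)                      # equipartition predicts b^2 = 1
```

The time average of the squared bond length settles near $b^2$ (up to a small Euler discretization bias), confirming the normalization of the noise term.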
In order to reformulate the Langevin problem (\[R\]) - (\[r\]) in a more convenient form we go to the MSR - functional integral representation [@Rost1]. The generating functional (GF) of the problem has the form $$\begin{aligned}
Z\left\{\cdots\right\}&=&\int DR_j(s,t)D{\hat
R}_j(s,t)\nonumber\\
&\times&\exp\left\{\Xi\left[{\bf R}(s,t), \hat
{\bf R}(s,t)\right] + A_{\rm intra}\left[{\bf R}(s,t),\hat{\bf
R}(s,t)\right]\right\}\quad,
\label{GF1}\end{aligned}$$ where the influence functional (which describes the influence of solvent molecules on the chain) reads $$\begin{aligned}
\Xi\left[{\bf R},{\bf \hat
{ R}}\right] &=&\ln\int\prod_{p=1}^{M}D{\bf r}^{(p)}(t)D\hat{\bf
r}^{(p)}(t) \exp\Bigg\{A_{\rm solvent}\left[{\bf r}^{(p)},\hat{\bf r}^{(p)}\right]\nonumber\\
&+&\sum_{p=1}^M\sum_{s=1}^N \int dt
i\hat{R}_{j}(s,t)\frac{\delta}{\delta R_{j}(s,t)}V_{\rm ps}\left({\bf R}(s,t)-{\bf
r}^{(p)}(t)\right)\nonumber\\
&+&\sum_{p=1}^M \sum_{s=1}^N \int dt
i\hat{r}_{j}^{(p)}(t)\frac{\delta}{\delta
r_{j}^{(p)}(t)}V_{\rm ps}\left({\bf r}^{(p)}(t)-{\bf
R}(s,t)\right)\Bigg\}\quad,
\label{Xi}\end{aligned}$$ the intra-chain action is given by $$\begin{aligned}
A_{\rm intra}\left[{\bf R}(s,t),{\hat {\bf R}}(s,t)\right] &=& \sum_{s=1}^N\int dt\Bigg\{i{\hat R}_{j}(s,t)\left[\zeta_{0}\frac{\partial}{\partial t}R_{j}(s,t)-\epsilon\Delta_{s}R_{j}(s,t)\right]\nonumber\\
&+&\frac{\delta}{\delta R_{j}(s,t)} H_{\rm
int}\left\{R_{j}(s,t)\right\} + k_B T\zeta_{0}\left[i{\hat
R}_{j}(s,t)\right]^{2}\Bigg\}
\label{testaction}\end{aligned}$$ and the solvent action has the form $$\begin{aligned}
A_{\rm solvent}\left[{\bf r}^{(p)}(t),{\hat {\bf r}}^{(p)}(t)\right]
&=&\sum_{p=1}^{M} \int dt\Bigg\{i{\hat
r}^{(p)}_{j}(t)\left[\zeta_{0}\frac{\partial}{\partial
t}r^{(p)}_{j}(t)
+\sum_{m=1}^{M} \frac{\delta}{\delta r^{(p)}_{j}(t)}V\left[{\bf
r}^{(p)}(t)-{\bf r}^{(m)}(t)\right]\right]\nonumber\\
&+&k_B T\zeta_{0}\left[i{\hat r}^{(p)}_{j}(t)\right]^{2}\Bigg\}
\label{solventaction}\end{aligned}$$ The representation (\[GF1\]) - (\[solventaction\]) is a suitable starting point for further approximations. In particular these expressions are very convenient for the integration over collective solvent variables, which will lead to effective “actions” for the polymer. We recall that this procedure is only possible if the solvent dynamics is much faster than the polymer dynamics. While this is in general ensured for long chains, this point has to be treated with care for the polymer collapse problem.
Self - consistent Hartree approximation
---------------------------------------
As mentioned already, it is important to point out how the solvent dynamics is treated. Indeed we may follow at least two different ways. The solvent can be considered within the hydrodynamic approximation in the same way as in ref. [@Fredrick], where the solvent dynamics was described by an incompressible Navier - Stokes liquid. This approach leads to a time - dependent hydrodynamic interaction. The other way is to treat the solvent as a dynamical background in a random phase approximation (RPA). Here we restrict ourselves to the dynamical RPA [@Rost1; @Rost2], which is well known in the dynamics of low - molecular liquids [@Boon]. It is also known [@Boon] that the RPA is a mean field type description in which the free - particle behavior is modified by an effective interaction. The main drawback of this approximation is that it does not reproduce the proper hydrodynamic behavior. The reason for this is that the RPA neglects the collisions which dominate in the hydrodynamic regime. We will come back to this point in a future publication.
In order to carry out the calculation for the present purpose we make use of the transformation to the collective solvent density [@Rost1] and integrate it out. The level of this procedure also determines the level of approximation. We relegate all technical details of this calculation to Appendix A.
The Hartree approximation then takes into account all mean field diagrams. Naturally the mean field description is poor under good solvent conditions, but it becomes better in the globular state, since fluctuations become less important as the globule density increases.
The GF determined by eqs.(\[GF2\]) and (\[A\]) is still highly nonlinear with respect to ${\bf R}(s,t)$ and $\hat{\bf R}(s,t)$. In order to handle the associated difficulties we use a Hartree - type approximation. In this approximation the real MSR - action is replaced by a Gaussian one in such a way that all terms which include more than two fields ${\bf R}(s,t)$ and $\hat{\bf R}(s,t)$ are written in all possible ways as products of pairs of ${\bf R}(s,t)$ and/or $\hat{\bf R}(s,t)$ coupled to the self - consistent averages of the remaining fields. In ref. [@Rost3] it was shown that if the number of field components is large the Hartree approximation and the next - to - saddle - point approximation merge and both become exact. The resulting Hartree action is a Gaussian functional with coefficients which can be represented in terms of correlation and response functions. All calculations are straightforward and details can be found in Appendix B of ref. [@Rehkopf]. The only difference is that here the second and third virial terms (the last two terms in eq.(\[A\])) enter the equation explicitly. After collecting all terms the final GF reads $$\begin{aligned}
Z\{\cdots \}&=&\int D{\bf R}D{\hat {\bf R}}\exp\Big\{A_{\rm intra}^{(0)}[{\bf R},{\hat {\bf
R}}]\nonumber\\
&+&\sum_{s=1}^{N}\sum_{s'=1}^{N}\int_{-\infty}^{\infty}dt\int_{-\infty}^{t}dt'\:i{\hat
R}_{j}(s,t)R_{j}(s',t')\lambda(s,s';t,t')\nonumber\\
&-&\sum_{s=1}^{N}\sum_{s'=1}^{N}\int_{-\infty}^{\infty}dt\int_{-\infty}^{t}dt'\:i{\hat
R}_{j}(s,t)R_{j}(s',t)\lambda(s,s';t,t')\nonumber\\
&+&\frac{1}{2}\int_{-\infty}^{\infty}dt\int_{-\infty}^{\infty}dt'\:i{\hat
R}_{j}(s,t)i{\hat R}_{j}(s',t')\chi(s,s';t,t')\Big\}\quad,
\label{GF3}\end{aligned}$$ where $$\begin{aligned}
\lambda(s,s';t,t') &=& \frac{1}{d}G(s,s';t,t')\int\frac{d^{d}k}{(2\pi)^{d}}k^{4}|V_{\rm ps}({\bf k})|^{2}F({\bf k};s,s';t,t')S_{00}({\bf k};t,t')\nonumber\\
&-&\int\frac{d^{d}k}{(2\pi)^{d}}k^{2}\left[|V_{\rm ps}({\bf k})|^{2}S_{01}({\bf
k};t,t') - V_{\rm ps}({\bf k})\delta(t-t')\right]F({\bf
k};s,s';t,t')\nonumber\\
&+&\sum_{s''=1}^{N}\int\frac{d^{d}kd^{d}q }{(2\pi)^{2d}}w({\bf k},{\bf
q})F({\bf q};s',s'';t,t)F({\bf k};s,s';t,t)\delta(t - t')
\label{Lambda}\end{aligned}$$ and $$\begin{aligned}
\chi(s,s';t,t')=\int\frac{d^{d}k}{(2\pi)^{d}}k^{2}|V({\bf
k})|^{2}F({\bf k};s,s';t,t')S_{00}({\bf k};t,t')
\label{Chi}.\end{aligned}$$ In eqs.(\[GF3\]) - (\[Chi\]) the response function $$\begin{aligned}
G(s,s';t,t')=\left<i{\hat {\bf R}}(s',t'){\bf R}(s,t)\right>
\label{G}\end{aligned}$$ and the chain density correlator $$\begin{aligned}
F({\bf k};s,s';t,t')=\exp\left\{-\frac{k^{2}}{d}Q(s,s';t,t')\right\}
\label{F}\end{aligned}$$ with $$\begin{aligned}
Q(s,s';t,t')\equiv\left<{\bf R}(s,t){\bf R}(s,t)\right>-\left<{\bf
R}(s,t){\bf R}(s',t')\right> \quad.
\label{Q}\end{aligned}$$ In eqs.(\[GF3\]) - (\[Chi\]) $S_{00}({\bf k};t,t')$ and $S_{01}({\bf k};t,t')$ are the solvent RPA - density correlation and response functions correspondingly (see eqs.(\[RPA1\]) and (\[RPA2\]) in the Appendix A). They embody information on the solvent dynamics.
The pointed brackets denote the self - consistent averaging with the Hartree - type GF, eq.(\[GF3\]). Below we will also be concerned with transient time regimes, so that keeping both time arguments of the correlator $F$ in eq.(\[Lambda\]) equal to each other does not necessarily mean that this is a static correlator $F_{\rm st}$. On the other hand we assume that the fluctuation - dissipation theorem (FDT) holds for both chain and solvent correlators; then $$\begin{aligned}
G(s,s';t-t') = (k_B T)^{-1}\frac{\partial}{\partial t'}Q(s,s';t-t')\:\:\:\:{\rm at}\:\:\: t>t'
\label{FDT1}\end{aligned}$$ $$\begin{aligned}
S_{01}({\bf k};t-t') = (k_B T)^{-1}\frac{\partial}{\partial t'}S_{00}({\bf
k};t-t')\:\:\:\:{\rm at}\:\:\: t>t'
\label{FDT2}.\end{aligned}$$ Note that in eq.(\[FDT2\]) the units of the correlation function $S_{00}$ and the response function $S_{01}$ are different (see Appendix A for the notation).
Now we can use eqs.(\[FDT1\]) and (\[FDT2\]) in eqs.(\[GF3\]) - (\[Q\]). After integration by parts with respect to time argument $t'$, we obtain $$\begin{aligned}
Z\{\cdots \}&=&\int D{\bf R}D{\hat {\bf R}}\exp\Big\{\sum_{s,s'=1}^{N}\int_{-\infty}^{\infty}dt\int_{-\infty}^{t}dt'\:i{\hat
R}_{j}(s,t)\left[ \zeta_0 \delta(t - t') + \theta(t -
t')\Gamma(s,s';t,t')\right]\frac{\partial}{\partial t}R_{j}(s',t')\nonumber\\
&-&\sum_{s,s'=1}^{N}\int_{-\infty}^{\infty}dt\int_{-\infty}^{t}dt'\:i{\hat
R}_{j}(s,t)\:\Omega(s,s';t)\:R_{j}(s',t)\nonumber\\
&+& k_B T \sum_{s,s'=1}^{N}\int_{-\infty}^{\infty}dt\int_{-\infty}^{\infty}dt'\:i{\hat
R}_{j}(s,t)\left[ \zeta_0 \delta(t - t') + \theta(t -
t')\Gamma(s,s';t,t')\right] i{\hat R}_{j}(s',t')\Big\}\quad,
\label{GF4}\end{aligned}$$ where the memory function $$\begin{aligned}
\Gamma(s,s';t)= \frac{1}{k_B T} \int\frac{d^{d}k}{(2\pi)^{d}} \:k^{2}|V_{\rm
ps}({\bf k})|^{2}F({\bf k};s,s';t)S_{00}({\bf k},t)
\label{Memory}\end{aligned}$$ and the effective elastic susceptibility $$\begin{aligned}
\Omega(s,s';t) &=& \epsilon\delta_{\rm ss'}\Delta_{\rm s} -
\int\frac{d^{d}k}{(2\pi)^{d}} \: k^{2}{\cal V}({\bf
k})\left[F ({\bf k};s,s';t,t) - \delta_{\rm
ss'}\sum_{s''=1}^{N} F({\bf k};s,s'';t,t)\right]\nonumber\\
&-&\frac{1}{2}\sum_{s''=1}^{N}\int\frac{d^{d}kd^{d}q}{(2\pi)^{2d}} \:
k^2 w({\bf k}, {\bf q})\nonumber\\
&\times&\left[F({\bf k};s,s';t,t)F({\bf q};s'',s';t,t) - \delta_{\rm ss'}\sum_{s'''=1}^{N} F({\bf k};s,s''';t,t)F({\bf q};s''',s'';t,t)\right]\quad.
\label{Omega}\end{aligned}$$ In eq.(\[Omega\]) the Fourier components of the effective segment - segment self - interaction function is given by $$\begin{aligned}
{\cal V}({\bf k}) = v(k) - \frac{|V_{\rm ps}({\bf k})|^2 \Phi_{\rm
st}({\bf k})/k_B T}{1 + V_{\rm ss}({\bf k}) \Phi_{\rm st}({\bf
k})/k_B T}
\quad,
\label{Self-interaction} \end{aligned}$$ where $\Phi_{\rm st}({\bf k})$ is the static density correlator of the free solvent system. In eq.(\[Self-interaction\]) the second term results from the coupling to the solvent degrees of freedom. The memory function (\[Memory\]) describes the renormalization of the Stokes friction coefficient $\zeta_0$, which originates from the coupling between polymeric and solvent fluctuations. The effective elastic susceptibility, eq.(\[Omega\]), accounts for the non - dissipative contributions, which arise not only from the local spring interaction (the first term in eq.(\[Omega\])) but also from effective two - and three - point interactions.
Equation of motion
------------------
Now we are in a position to derive the equations of motion for the time - displaced correlator $$\begin{aligned}
C(s,s';t,t') = \left<{\bf R}(s,t){\bf R}(s',t')\right>
\label{TD-correlator}\end{aligned}$$ as well as for the equal - time correlator $$\begin{aligned}
P(s,s';t) = \left<{\bf R}(s,t){\bf R}(s',t)\right> \quad.
\label{ET-correlator}\end{aligned}$$
The standard way to derive equations of motion starting from a Hartree action (see eq.(\[GF3\])) is discussed in Appendix B of ref.[@Horner]. Applying it to the Hartree action in eq.(\[GF3\]) and taking into account the FDT yields $$\begin{aligned}
\zeta_0 \frac{\partial}{\partial t}C(s,s';t,t') &-& \sum_{m = 1}^{N} \:
\Omega (s,m;t) C(m,s';t,t')\nonumber\\
&+& \sum_{m = 1}^{N}\int_{t'}^{t} \:
\Gamma(s,m;t,\tau) \frac{\partial}{\partial \tau}C(m,s';\tau,t') d \tau = -
2k_B T\zeta_0 G(s',s;t',t) \quad.
\label{EqMotion1}\end{aligned}$$
In the case of $t' < t$ the r.h.s. of eq.(\[EqMotion1\]) is zero and the time - displaced correlator satisfies the equation $$\begin{aligned}
\zeta_0 \frac{\partial}{\partial t}C(s,s';t,t') - \sum_{m = 1}^{N} \:
\Omega (s,m;t) C(m,s';t,t')
+ \sum_{m = 1}^{N}\int_{t'}^{t} \:
\Gamma(s,m;t,\tau) \frac{\partial}{\partial \tau}C(m,s';\tau,t')d \tau = 0 \quad.
\label{EqMotion2}\end{aligned}$$ In order to derive the equation for the equal - time correlator (\[ET-correlator\]) we recall that $$\begin{aligned}
\frac{\partial}{\partial t} P(s,s';t) = \left[\frac{\partial}{\partial t}
C(s,s';t,t')\right]_{t'=t} + \left[ \frac{\partial}{\partial t'} C(s,s';t,t')\right]_{t'=t}
\label{Recall}\end{aligned}$$ and the initial condition [@Rehkopf] $$\begin{aligned}
\zeta_0 G(s,s';t+0^+,t') = - d\delta(s - s') \quad.
\label{Initial}\end{aligned}$$ Let us interchange the time arguments, $t \leftrightarrow t'$, in eq. (\[EqMotion1\]). Combining the resulting equation with the original one and using eqs. (\[Recall\]) - (\[Initial\]) in the limit $t =
t' + \epsilon$ with $\epsilon \to 0$, one can derive the result: $$\begin{aligned}
\frac{1}{2}\zeta_0 \frac{\partial}{\partial t} P(s,s';t) - \sum_{m = 1}^{N} \:
\Omega (s,m;t) P(s,s';t) = d k_B T \delta(s - s')
\label{EqMotion3}\end{aligned}$$ It is of interest that the memory term drops out of eq.(\[EqMotion3\]).
As discussed in the Introduction, here we restrict ourselves to the case where the translational invariance along the chain backbone holds during the collapse (swelling). Generally speaking, the presence of the “pearl necklace” structure breaks this invariance, so that the correlator $ P(s,s';t)$ depends not only on the “chemical distance” $|s - s'|$ but also on the position along the chain backbone. Even the interface might violate this invariance, because chain segments on the surface and in the bulk experience quite different environments. Nevertheless, when the pearl formation is fast compared to the dynamics of the envelope and the pearl positions along the chain are random, these differences can be averaged out (by preparing an appropriate ensemble of pearl realizations) and the effective invariance still holds. In this situation the Rouse components are the “good variables” and it is worthwhile to make the Rouse transformation in the standard way [@Doi]:
$$\begin{aligned}
{\bf X}(p,t) = \frac{1}{N}\sum_{s = 1}^{N} {\bf R}(s,t)
\exp (is p)
\label{Fourier1}\end{aligned}$$
and $$\begin{aligned}
{\bf R}(s,t) = \sum_{p = 0}^{2\pi} {\bf X}(p,t)
\exp (- i s p) \quad,
\label{Fourier2}\end{aligned}$$ where the Rouse mode $p = 2\pi n /N$ at $n = 0, 1, \dots ,N - 1$ and we have used for simplicity the cyclic boundary conditions. After that the eq.(\[EqMotion3\]) reads $$\begin{aligned}
(2 D)^{-1} \frac{\partial}{\partial t} P(p;t) + \Omega
(p;t) P(p;t) = d N^{-1}\quad,
\label{EqMotionFourier}\end{aligned}$$ where $D = k_B T/\zeta_0$ is the bare diffusion coefficient and $$\begin{aligned}
\Omega(p ; t) &=& \frac{2 d}{b^2} (1 - \cos p) + \frac{N}{k_B T}\: \int
\frac{d^d k}{(2 \pi)^d} \: k^2 {\cal V}({\bf k}) \left[F({\bf k} ; p
;t,t) - F({\bf k} ; p=0 ;t,t)\right]\nonumber\\
&+& \frac{N^2}{2k_B T}\:\int \frac{d^d k\, d^d q}{(2 \pi)^{2d}} \: k^2 w({\bf
k},{\bf q})\Big[F({\bf k} ; p;t,t)F({\bf q} ; p =
0;t,t)\nonumber\\
&-& F({\bf k} ; p=0 ;t,t)F({\bf q} ; p=0 ;t,t)\Big]\quad.
\label{OmegaFourier}\end{aligned}$$ On the other hand, the Rouse transformation of the chain density correlator has the form $$\begin{aligned}
F({\bf k} ; p ;t,t) = \frac{1}{N} \sum_{n = 1}^{N} \cos (p
n)\exp\left\{ - \frac{k^2}{d} Q(n ; t,t)\right\}\quad.
\label{F-correlator}\end{aligned}$$ For a short range segment - segment interaction one can neglect the wave vector dependence of ${\cal V}$ and $w({\bf k},{\bf q})$ by putting ${\cal V} \approx v$ and $w({\bf k},{\bf q}) \approx
w$. Using eq.(\[F-correlator\]) in eq.(\[OmegaFourier\]) and performing the integration over ${\bf k}$ and ${\bf q}$ yields $$\begin{aligned}
\Omega(p ; t) &=& \frac{2d}{b^2} (1 - \cos p) - v \frac{d^{\frac{d}{2}
+ 2}}{2 k_B T
(4\pi)^{\frac{d}{2}}}\sum_{n = 1}^{N}
\: \frac{1 - \cos(p n)}{\left[Q(n,t)\right]^{\frac{d +
2}{2}}}\nonumber\\
&-& w \frac{d^{d + 2}}{4k_B T
(4\pi)^{d}}\sum_{n = 1}^{N}\sum_{m =
1}^{N - n}\: \frac{1 - \cos(p n)}{\left[Q(n,t)\right]^{\frac{d +
2}{2}}\left[Q(m,t)\right]^{\frac{d}{2}}}\quad.
\label{OmegaFourier1}\end{aligned}$$ We stress that the equations of motion (\[EqMotionFourier\]), (\[OmegaFourier1\]) for $P(p ;t)$ are highly nonlinear, because the correlator $Q(n,t)$ depends on all other $P(\kappa ;t)$ (which also provides the Rouse mode coupling) as follows $$\begin{aligned}
Q(n,t) &=& P(n, n;t) - P(n, 0;t)\nonumber\\
&=& \sum_{\kappa = 2 \pi/N}^{2 \pi} \: \left[1 - \cos(\kappa n)\right] P(\kappa ;t)
\label{Coupling} \end{aligned}$$ In equilibrium all equal - time correlators in eqs.(\[OmegaFourier1\]) - (\[Coupling\]) can be regarded as static ones: $P(p ; t) \to
C_{\rm st}(p)$ and $Q(n,t) \to Q_{\rm st}(n)$. Using this limit in eq.(\[EqMotionFourier\]) leads to the following equilibrium equation $$\begin{aligned}
\left[N C_{\rm st}(p)\right]^{-1} &=& \frac{2}{b^{2}} (1 - \cos p) - v
\frac{d^{\frac{d}{2}+1}}{2 k_B T
(4\pi)^{\frac{d}{2}}}\sum_{n = 1}^{N}
\: \frac{1 - \cos(p n)}{\left[Q_{\rm st}(n)\right]^{\frac{d +
2}{2}}}\nonumber\\
&-& w \frac{d^{d+1}}{4 k_B T
(4\pi)^{d}}\sum_{n = 1}^{N}\sum_{m =
1}^{N - n}\: \frac{1 - \cos(p n)}{\left[Q_{\rm st}(n)\right]^{\frac{d +
2}{2}}\left[Q_{\rm st}(m)\right]^{\frac{d}{2}}}\quad.
\label{Static}\end{aligned}$$
This equation is identical (up to prefactors) to the variational (Euler) equation which we have recently discussed in refs.[@Miglior; @Miglior1]. This provides the means for answering one of the important questions, namely whether the Edwards Hamiltonian can equally be used for dynamical calculations. Within the Hartree approximation the answer is positive, provided that the second and third virial coefficients are considered as free parameters. In fact, the Hartree approximation is the dynamical counterpart of the variational approach [@Miglior; @Miglior1].
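As a quick sanity check of eq.(\[Static\]): in the non - interacting limit $v = w = 0$ it reduces to the Gaussian result $C_{\rm st}(p) = b^2/[2N(1 - \cos p)]$, and with the cyclic boundary conditions used above the corresponding static correlator is $Q_{\rm st}(n) = b^2 n(N - n)/2N$. The few lines below (with arbitrary $N$) verify this numerically.

```python
import math

# Non-interacting check of eq. (Static): C_st(p) = b^2 / (2N(1 - cos p)),
# and Q_st(n) from eq. (Coupling) then equals b^2 n (N - n) / (2N)
# (cyclic boundary conditions). N and b are arbitrary illustration values.
b, N = 1.0, 32
modes = [2.0 * math.pi * j / N for j in range(1, N)]
C_st = [b**2 / (2.0 * N * (1.0 - math.cos(p))) for p in modes]

def Q_st(n):
    # eq. (Coupling) in the static limit: sum_kappa (1 - cos(kappa n)) C_st(kappa)
    return sum((1.0 - math.cos(p * n)) * C for p, C in zip(modes, C_st))

for n in (1, 5, 16):
    print(n, Q_st(n), b**2 * n * (N - n) / (2.0 * N))   # the two columns agree
```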
Early and latest stages of the collapse (swelling)
==================================================
Now we are in a position to consider some limiting cases which allow an analytical investigation. These are obviously the very early and the late stages of the transformation. We defer the full discussion, which is based on the numerical solution of the equation of motion, until the next section.
Let us consider an abrupt change of solvent quality from the initial value $v_{\rm i} > 0$ to the final value $v_{\rm f} < 0$, which is the case of a collapse experiment. For the swelling case the conditions read instead $v_{\rm i} < 0$ and $v_{\rm f} > 0$.
Early stages
------------
At the very early stage eq.(\[EqMotionFourier\]) can be linearized around the initial state, which leads to the following form $$\begin{aligned}
(2D)^{-1} \frac{\partial}{\partial t} P(p;t) +
\Omega_{\rm st}
(p) P(p;t) = d N^{-1} \quad,
\label{EqMotionLinear}\end{aligned}$$ where $$\begin{aligned}
\Omega_{\rm st}(p) &=& \frac{2d}{b^2} (1 - \cos p) - v_{\rm f} \frac{d^{\frac{d}{2}
+ 2}}{2 k_B T
(4\pi)^{\frac{d}{2}}}\sum_{n = 1}^{N}
\: \frac{1 - \cos(p n)}{\left[Q_{\rm st}(n)\right]^{\frac{d +
2}{2}}}\nonumber\\
&-& w \frac{d^{d + 2}}{4 k_B T
(4\pi)^{d}}\sum_{n = 1}^{N}\sum_{m =
1}^{N - n}\: \frac{1 - \cos(p n)}{\left[Q_{\rm st}(n)\right]^{\frac{d +
2}{2}}\left[Q_{\rm st}(m)\right]^{\frac{d}{2}}}\quad.
\label{OmegaFourierStatic}\end{aligned}$$ In eq.(\[OmegaFourierStatic\]) the static correlator $ Q_{\rm st}(n)$ should be understood as a coil correlator, i.e. $ Q_{\rm st}(n) = Q_{\rm st}^{coil}(n)$, in the case of collapse and as a globule correlator, $Q_{\rm st}(n) = Q_{\rm st}^{glob}(n)$, in the case of swelling.
Combining eq.(\[OmegaFourierStatic\]) with eq.(\[Static\]), which is valid in the equilibrium at $v = v_{\rm i}$, yields $$\begin{aligned}
\Omega_{\rm st}(p) = d \left[N C_{\rm st}(p)\right]^{-1} + (v_{\rm i}
- v_{\rm f}) \frac{d^{\frac{d}{2}
+ 2}}{2 k_B T
(4\pi)^{\frac{d}{2}}}\sum_{n = 1}^{N}
\: \frac{1 - \cos(p n)}{\left[Q_{\rm st}(n)\right]^{\frac{d +
2}{2}}}
\label{OmegaFourierStatic1}\end{aligned}$$ The very early stage can be characterized by the initial relaxation rate $$\begin{aligned}
X(p) \equiv \frac{\partial}{\partial t}
P(p;t)\biggr |_{t =0} = 2 D\left[d N^{-1} - \Omega_{\rm st}(p) C_{\rm st}(p)\right]\quad,
\label{RelaxationRate}\end{aligned}$$ which with the use of eq. (\[OmegaFourierStatic1\]) can be represented in the following form $$\begin{aligned}
X(p) = -\frac{(v_{\rm i} - v_{\rm f})}{\zeta_0}\: \: C_{\rm
st}(p) \: {\cal F}(p)\quad,
\label{RelaxationRate1}\end{aligned}$$ where $$\begin{aligned}
{\cal F}(p) = \frac{d^{\frac{d}{2}
+ 2}}{
(4\pi)^{\frac{d}{2}}}\sum_{n = 1}^{N}
\: \frac{1 - \cos(p n)}{\left[Q_{\rm st}^{(i)}(n)\right]^{\frac{d +
2}{2}}} \quad.
\label{ConstA}\end{aligned}$$ The relaxation law for the decrement of the gyration radius $\Delta R_g^2(t) =
\sum_{p = 2\pi/N}^{2\pi} \left[ P(p;t) - P(p;0)\right]$ at the early stage takes the form $$\begin{aligned}
\Delta R_g^2(t) = -\frac{(v_{\rm i} - v_{\rm f})}{\zeta_0}\:\:
\Bigl[ \sum_{p = 2\pi/N}^{2\pi}
C_{\rm st}(p) \: {\cal F}(p)\Bigr] \: t \quad.
\label{GerationRad}\end{aligned}$$ Taking into account the asymptotic behavior $\left[C_{\rm
st}(p)\right]^{-1} \propto p^{1 + 2\nu}$ (for small $p$) and $Q_{\rm st}(n)\propto n^{2 \nu}$ (for large $n$), where $\nu$ is the Flory exponent, one obtains for the function ${\cal F}(p)$ the following scaling $$\begin{aligned}
{\cal F}(p) \propto p^{\nu(d + 2) - 1}
\label{function_f}\end{aligned}$$ For the collapse case $v_{\rm i} > v_{\rm f}$. For a quench from a good solvent, $\nu = 3/(d + 2)$ and ${\cal F}_c(p) \propto p^{2}$, whereas for a quench from the $\theta$ - solvent $\nu = 1/2$ and ${\cal F}_c(p)
\propto p^{d/2}$.
In the case of swelling $v_{\rm i} < v_{\rm f}$. If one heats up the system starting from the globular state then $\nu = 1/d$ and ${\cal F}_s(p) \propto
p^{2/d}$, whereas for the $\theta$ - solvent initial state one has ${\cal F}_s(p) \propto p^{d/2}$.
We will leave the discussion of the relaxation rate $X(p)$ till the next section but one can immediately see that $X(p)$ has the following scaling forms. For collapse $$X_c(p) \propto - \frac{v_{\rm i} - v_{\rm f}}{\zeta_0}
\left\{\begin{array}{l@{\quad,\quad}l}
1/p^{(4-d)/(d+2)} &{\rm good- solvent} \\
1/p^{2 - d/2} &\theta -{\rm solvent}
\end{array}\right.
\label{Function_R1}$$ where the first and the second lines refer to the quench from a good and a $\theta$ - solvent, respectively.
For the swelling process we find respectively for different initial starting points $$X_s(p) \propto \frac{v_{\rm f} - v_{\rm i}}{\zeta_0}
\left\{\begin{array}{l@{\quad,\quad}l}
1/p &{\rm poor-solvent} \\
1/p^{2 - d/2} &\theta - {\rm solvent}
\end{array}\right.
\label{Function_R2}$$ Here the first and the second lines correspond to heating up from a poor and a $\theta$ - solvent, respectively.
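All four exponents in eqs.(\[Function\_R1\]) and (\[Function\_R2\]) follow from the single relation $X(p) \propto C_{\rm st}(p)\,{\cal F}(p) \propto p^{\nu d - 2}$. The snippet below verifies this exponent algebra in exact rational arithmetic; nothing here is model - specific.

```python
from fractions import Fraction

def X_exponent(nu, d):
    # X(p) ~ C_st(p) * F(p) ~ p^{-(1+2nu)} * p^{nu(d+2)-1} = p^{nu*d - 2}
    return nu * d - 2

d = 3
good = X_exponent(Fraction(3, d + 2), d)    # collapse: quench from a good solvent
theta = X_exponent(Fraction(1, 2), d)       # quench from (or heating to) theta
glob = X_exponent(Fraction(1, d), d)        # swelling: heating from the globule
print(good, theta, glob)                    # -1/5 -1/2 -1
```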
Latest stages
-------------
Now the system is close to the globular state (if the collapse is under discussion) and we can linearize the equation of motion around it. In this case eq.(\[EqMotionLinear\]) is still valid, but $\Omega_{\rm st}(p)$ should be calculated in the globular state, which follows from eq.(\[OmegaFourierStatic\]) after the substitution $Q_{\rm
st}(n) \to Q_{\rm st}^{(gl)}(n)$. In its turn, the globule correlator $Q_{\rm st}^{(gl)}(n)$ satisfies eq.(\[Static\]) at $v = v_{\rm f}$, $C_{\rm st}(p) = C_{\rm st}^{(gl)}(p)$ and $Q_{\rm st}(n) = Q_{\rm st}^{(gl)}(n)$. After simple calculations one obtains $$\begin{aligned}
\Omega_{\rm st}^{(gl)}(p) = d \left[N C_{\rm st}^{(gl)}(p)\right]^{-1},\end{aligned}$$ where $\left[N C_{\rm st}^{(gl)}(p)\right]^{-1}\propto p^{1 + 2/d}$. At the late stage only the slowest relaxation mode, $p = 2\pi/N$, contributes. Then for $\Delta r_g^2(t) = R_g^2(t) - R_g^2(\infty)$ we have $$\begin{aligned}
\Delta r_g^2(t) \propto \exp \left\{- 2 D \:\Omega_{\rm
st}^{(gl)}\left(p = \frac{2\pi}{N}\right) \:t \right\}\quad,\end{aligned}$$ i.e. the characteristic relaxation time is given by $$\begin{aligned}
\tau_{\rm rel} \propto \frac{1}{D} N^{1 + 2/d}\quad,
\label{tau1}\end{aligned}$$ which agrees with ref. [@Pitard1]. Note that we derived this behavior previously as well, when we used the effective Hamiltonian for the collapsed chain. Obviously these crude estimates already capture the main features of the dynamic behavior.
The same remark holds for the large time limit of the swelling in a good solvent ($v_{\rm f} > 0$). The formal dynamic theory gives the mode dependence as $$\begin{aligned}
\tau_{\rm rel} \propto \frac{1}{D} N^{\frac{d + 8}{d + 2}}\quad,
\label{tau2}\end{aligned}$$ which is also consistent with ref. [@Pitard1]. Again, we derived this already via the effective chain Hamiltonian in Sec. IIC. Physically this appears naturally, since the choice of the exponent in eq.(\[effective\]) implicitly uses the FDT to determine the statics correctly. It is interesting that eqs.(\[tau1\]) and (\[tau2\]) can be seen as a characteristic Rouse time, $\tau_{\rm Rouse} \sim N^{1 + 2\nu}$ (see e.g. [@PGG]), where the Flory exponent is $\nu = 1/d$ or $\nu = 3/(d + 2)$ in the case of eq.(\[tau1\]) (collapse) and eq.(\[tau2\]) (swelling in a good solvent), respectively.
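As a minimal consistency check (a Python sketch, not part of the original derivation), one can verify with exact rational arithmetic that both relaxation times are of the Rouse form $N^{1+2\nu}$ with the corresponding Flory exponent:

```python
from fractions import Fraction

def rouse_exponent(nu):
    # tau_Rouse ~ N**(1 + 2*nu)
    return 1 + 2 * nu

d = 3
# collapse: nu = 1/d reproduces the exponent 1 + 2/d of eq. (tau1)
assert rouse_exponent(Fraction(1, d)) == Fraction(d + 2, d)
# swelling in a good solvent: nu = 3/(d+2) reproduces (d+8)/(d+2) of eq. (tau2)
assert rouse_exponent(Fraction(3, d + 2)) == Fraction(d + 8, d + 2)
```

For $d=3$ this gives $\tau_{\rm rel} \sim N^{5/3}$ for collapse and $\tau_{\rm rel} \sim N^{11/5}$ for swelling in a good solvent.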
Numerical Studies
=================
Collapse
--------
To analyze these results in more detail we performed numerical studies of the corresponding equations. So far we have studied only the asymptotic behavior, since the intermediate time regimes are not analytically accessible. To go further, we explicitly computed the numerical solution of eq.(\[EqMotionFourier\]) for chain lengths $N=2^6 - 2^{10}$. For the whole procedure, the three-body interaction term is fixed as $v_3 = b^6 \equiv 1$. The time scale is measured in units of $t_0 =\zeta_0 b^2/k_B T$. The monomer density in a globule is determined by the balance between the two-body attraction and the three-body repulsion, $\rho = |v_2|/v_3 = -\tau/b^3$. The excluded volume provides the condition for the maximum density, $\rho_{\rm max}|v_2| \le 1$.
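The parameter bookkeeping just described can be sketched with a small hypothetical helper (units $b = k_BT = 1$ as stated above; this is not the authors' solver, only an illustration of the quench parameters):

```python
def globule_parameters(tau, b=1.0, v3=1.0):
    """Derived quantities for a quench of depth tau < 0 (hypothetical helper)."""
    v2 = tau * b**3                  # second virial coefficient, attractive for tau < 0
    rho = abs(v2) / v3               # globule density from two-/three-body balance
    virial_ok = v2**2 / v3 < 1.0     # validity condition of the virial expansion
    return v2, rho, virial_ok

v2, rho, ok = globule_parameters(tau=-0.4)
```

For the quench depth $\tau = -0.4$ used below, this gives $\rho = 0.4$ and $v_2^2/v_3 = 0.16 \ll 1$, safely inside the regime of validity.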
As mentioned earlier, the chain is phantom, i.e., we use pseudo-potentials only, which might cause a faster collapse since the chain segments are allowed to pass through each other. However, this effect is presumably a minor correction, at least in the early stages of the collapse. At later stages, artifacts caused by the phantom assumption (such as an overshooting of the relaxation curves) are expected to be more pronounced. In order to ensure the validity of the virial expansion and to minimize these artifacts, the solvent quality should remain in the limit where $v_2^2/v_3 \ll 1$. Let us now discuss the results in more detail.
### Solvent quality dependence
First we present the dependence on the solvent quality as a check of the theory. The mean squared radius of gyration $R_g^2(t)$ for various second virial coefficients $v_2$ is plotted in Fig. \[fig:sol\] for $N = 512$. Due to the finite size effect, the solvent quality at which the coil to globule transition occurs is expected to be shifted by $\sim 1/\sqrt{N}$. We observe that the transition occurs around $\tau \approx -0.25$.
![\[fig:sol\] The mean square radius of gyration $R_g^2(t)$ as a function of time after a quench to poor solvent conditions from the $\theta$ solvent condition, where the chain configuration is Gaussian. The equilibrium $R_g^2$ is shown in the inset for various solvent qualities. The coil to globule transition is observed at solvent quality $|\tau|= 0.25$.](sol-q512.eps){width="10cm"}
In equilibrium, the size of the globule in the poor solvent regime varies with the solvent quality as $R\sim b(N/|\tau|)^{1/3}$. This is in good agreement with the scaling predictions.
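The scaling $R\sim b(N/|\tau|)^{1/3}$ can be illustrated with a one-line sketch (assumed units $b=1$; illustration only):

```python
def globule_radius(N, tau, b=1.0):
    # equilibrium globule size in d = 3: R ~ b * (N/|tau|)**(1/3)
    return b * (N / abs(tau)) ** (1.0 / 3.0)

# doubling the chain length (or halving |tau|) enlarges R by a factor 2**(1/3)
ratio = globule_radius(1024, -0.4) / globule_radius(512, -0.4)
```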
### Chain length dependence
The next issue is the chain length dependence. In Fig. \[fig:chainlength\], the mean square radius of gyration $R^2_g(t)$ during the collapse is shown for various chain sizes $N$. The characteristic total collapse time increases linearly with the chain length $N$, as predicted by the scaling estimate given in eq.(\[total\]).
![\[fig:chainlength\] The mean square radius of gyration $R^2_g(t)$ for various chain lengths $N$ as a function of time after a quench to poor solvent conditions ($\tau = -0.4$); $N=64,128,256,512,1024$ from bottom to top. The inset shows the total characteristic collapse time for each chain vs. its length $N$. ](timeN.eps){width="10cm"}
### Relaxation times for individual modes
During the coil - globule transition the collapse occurs in hierarchical patterns, such that the collapsed segment on a smaller length scale can be considered as the unit length of the larger scale of the chain. In the process of the collapse transition the active modes disappear one after another. We expect such a hierarchical collapse to be visible in the corresponding Fourier mode correlator $P(p,t)$, where $p = 2\pi n/N$ and $n = 1, 2, \dots , N-1$. The characteristic time for mode $p$ is related to the relaxation time of the subchain of length $g= N/n$. After the elapse of time $t_n$ all length scales less than $g$ are collapsed (see Sec. II). There exist $N/g$ collapsed sub-chains at a given time, and the hierarchically larger structure is a random walk of these sub-chains. At times less than $t_n$, the contractions on length scales smaller than $g$ contribute essentially to the decrease of the correlator $P(p \sim 1/g,t)$. For times $t> t_n$, further contractions from the larger scales are reflected. Indeed, the relaxation behavior of $P(p,t)$ on smaller length scales (or larger $n \geq 2$) shows two clearly distinct regimes, see Fig. \[fig:n1024\]. However, the longest mode ($n=1$) shows only a single slope until the chain finds itself in the globular state. We may define the fast decreasing regime by the first characteristic time scale $t_p^*$ for each mode $p$. The first characteristic time $t_p^*$ is therefore related to the “internal” relaxation time of the subchain of length $g= N/n$. The subchain relaxation time scales with its length $g$ as $t_p^* \sim g\sim 1/p$, which is consistent with the scaling results (see Sec. IIA). The size on each length scale continues to decrease after time $t_p^*$, due to the larger scale contraction.
It is now instructive to give a more general estimate for the characteristic time $t_p^*$. According to eq.(\[RelaxationRate1\]) the driving force for the collapse transition scales as $f_p \sim (v_{\rm i} - v_{\rm f}) C_{\rm st}
{\cal F}(p) \sim \tau p^{\nu d - 2} \sim
\tau /g^{\nu d - 2}$. In $d=3$ and at $\nu = 1/2$, $f_p \sim
\tau \sqrt{g}$, which agrees with eq.(\[force\]). In the same manner as in Sec. IIA, the characteristic time scales as $t_p^* \sim R/u$, where $R$ describes the subchain size, $R \sim
g^{\nu}$, and $u \sim f_p/\zeta_0 g$, so that the resulting scaling reads $$t_p^* \propto \frac{\zeta_0}{\tau}\; g^{\nu(1 + d) - 1} \quad.
\label{resulting_scale}$$ In three dimensions, with $d=3$ and $\nu = 1/2$, we recover the relation $t_p^* \sim \zeta_0 g/\tau$, which is supported by the scaling analysis in Sec. IIA.
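A quick numerical check of the exponents entering this estimate (a sketch following the scaling relations quoted above):

```python
def force_exponent(nu, d):
    # driving force f_p ~ tau * g**(2 - nu*d)
    return 2 - nu * d

def subchain_time_exponent(nu, d):
    # t_p* ~ g**(nu*(1 + d) - 1), from t_p* ~ R/u with R ~ g**nu, u ~ f_p/(zeta0*g)
    return nu * (1 + d) - 1

# d = 3, nu = 1/2: f_p ~ tau*sqrt(g) and t_p* ~ g, as quoted in the text
assert force_exponent(0.5, 3) == 0.5
assert subchain_time_exponent(0.5, 3) == 1.0
```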
After the time $t_p^*$, each Fourier mode decreases slowly (for $n>2$). The total collapse time for each mode is identical to the first characteristic time of the longest mode, $t_{p=2\pi/N}^*$. The slope decrease after time $t_p^*$ is controlled (because of mode coupling) by the contraction on longer scales (or smaller $p$ modes). The fast relaxation time of the second largest mode for a chain of length $N$ coincides with the total collapse time for a chain of the corresponding half length (see again Sec. II). Naturally, the longest relaxation mode has only one characteristic regime.
![\[fig:n1024\] [ The relaxation of each mode $P(p = 2\pi n/N,t)$ during the collapse transition. ]{} ](n1024p.eps){width="10cm"}
Swelling
--------
We turn now to the case of swelling and present the numerical results for the expansion of the globule after a quench from a poor to a good (or $\theta$ - solvent) condition. The initial configuration is now assumed to be a compact globule, which is prepared in a poor solvent with a certain negative second virial coefficient. Contrary to the collapse dynamics, which is based on the hierarchically crumpled fractal picture, the swelling can be conceived as a homogeneous extension of the different Rouse modes.
The time dependence of $ P(p,t)$ is shown in Fig. \[fig:swellp\] for different values of the mode index $p$. Small length scales (or large $p$) relax fast, in the sense that they saturate faster, while larger scales grow more slowly. There is no clear dynamic exponent observed for the subchain relaxation time in the numerical solution. In the beginning of the swelling, overshooting is observed for large $p$ values. This is an obvious artifact of a phantom chain: the monomers move out of the dense globule explosively as soon as the solvent condition is switched to a theta solvent, since the phantom chain can pass through the surrounding dense phase without being hindered by topological constraints. The overshooting shows up when the size of the subchain of $g$ monomers, $r(g)$, is comparable to the boundary size of the total chain, $R(t)$. It appears also that the virial expansion is not simply applicable, at least in the early stage of swelling.
The growth of the overall size, $R_g^2(t) \sim t^{z}$, is also computed. The dynamic exponent is approximately $z \sim 1/2$. The total relaxation time $t^*_{\rm swell}$ grows as $N^2$ with increasing chain length.
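An exponent such as $z$ is typically extracted from a log-log fit; a minimal sketch of this procedure on synthetic data with the reported value $z = 1/2$ (illustration only, not the authors' analysis):

```python
import numpy as np

def powerlaw_exponent(t, y):
    """Least-squares slope in log-log coordinates for y ~ t**z."""
    z, _ = np.polyfit(np.log(t), np.log(y), 1)
    return z

# synthetic data obeying R_g^2(t) ~ t**0.5 exactly
t = np.linspace(1.0, 100.0, 200)
rg2 = 3.0 * t**0.5
z = powerlaw_exponent(t, rg2)
```

On real data one would of course restrict the fit window to the intermediate-time power-law regime.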
In order to demonstrate how different modes contribute to the swelling, it is useful to calculate the normalized relaxation rate $\overline{X}(p,t) = 1 - P (p,t)
\Omega (p,t)/d$ as a function of the wave vector $p$. We have already discussed (see Sec. IV) the asymptotic forms of this function, i.e. at $p \ll 1$, at the initial time moment. In Fig. \[fig:omega\], we compute the relaxation rate $\overline{X}(p,t)$ for the swelling condition. When the system is suddenly quenched to the $\theta$ solvent, the compact globule conformation is unstable and far from equilibrium. The relaxation rate $\overline{X}(p,t)$ reflects the discrepancy between the desired conformation and the conformation at a given time $t$. As the system approaches equilibrium, the relaxation rate obviously vanishes. The contribution to the total swelling from different modes $p$ changes as the swelling proceeds. The length scales smaller than the blob size do not contribute to the swelling. We may define $p_{\rm max} = 2\pi /g$ as the Rouse index of the shortest active mode at $t=0$, where $g$ is again the number of chain units in the Gaussian blob. (Modes with larger $p>
p_{\rm max}$ do not contribute.) As the chain swells, the size of the boundary grows, while the position of $p_{\rm max}(t)$ decreases accordingly. In Fig. \[fig:omega\] one can see the shift of $p_{\rm max}(t)$ to smaller values of $p$ in the course of time. These findings are consistent with the “expanding blob” picture which we discussed in Sec. IIB.
Conclusion
==========
Using the MSR - generating functional technique and the self - consistent Hartree approximation we have derived the equation of motion for the time dependent monomer - to - monomer correlation function. The numerical solutions of this equation for the Rouse modes and the gyration radius $R_g$ in the collapse and swelling regimes have been discussed. It has been shown that the simple scaling arguments for the collapse, based on the hierarchically crumpled fractal picture, agree well with our numerical findings. The size of the crumples (or the size of the collapsed segment of the chain) grows successively, so that the characteristic time of the collapse changes linearly with the length of the collapsed segment. Swelling, on the contrary, proceeds homogeneously, with the Rouse modes relaxing at different relaxation rates. The spectrum of the relaxation rates spans the interval between the minimal, $p_{\rm min} = 2\pi/N$, and maximal, $p_{\rm max} = 2\pi/g$, Rouse mode indices, where the “expanding blob” length $g$ is a growing function of time.
In this paper we have restricted ourselves to the ring polymer. The presence of free ends in an open polymer chain brings some special features, mainly in the later stage of the collapse. This question has been discussed in a number of papers [@abrams01; @ostrovsky]. In particular, in ref. [@abrams01] it has been shown that the late stage configuration consists of two globules with a connecting bridge between them (dumbbell structure). In this case the dynamics is determined by the bridge tension between the globules and the hydrodynamic friction experienced by the globules.
We should stress that the whole consideration has ignored up to now two important things: (i) the hydrodynamic interaction and (ii) topological (or entanglement) effects. The hydrodynamic interaction can be simply taken into account by treating the solvent as an incompressible Navier - Stokes liquid coupled to the chain monomers [@Fredrick]. Within the self - consistent Hartree approximation the effective friction coefficient depends on the monomer - to - monomer correlation function $Q(n,t)$, so that the closed equation of motion for the Rouse mode correlator can be solved numerically. This point will be discussed in detail in a forthcoming publication.
The systematic inclusion of topological constraints in the dynamics of collapse (or swelling) is a much more complicated theoretical problem. It was argued in ref. [@Nechaev] that for a chain length $N \gg N_{\rm e}$ (where $N_{\rm e}$ is an effective entanglement length) the topological constraints become essential. The self similar process of crumpling which we have discussed in Sec. IIA is also restricted by topological constraints as soon as a collapsed segment is longer than $N_{\rm e}$. These segments have a fixed topology and in a sense are in a partial equilibrium state called a “fractal crumpled globule”. The characteristic time of this first stage of collapse is $t_{\rm collapse} \approx t_{\rm collapse}^{(0)} (1 + b^6 /N_{\rm e}w)$, where $w$ is the third virial coefficient and $t_{\rm collapse}^{(0)}$ is the collapse time without the topological effects. At the subsequent stage the “crumpled globule” relaxes to full equilibrium via the penetration of the chain ends through the globule and the formation of many knots. This stage has a reptational mechanism, and as a result the characteristic relaxation time scales as $t_{\rm top} \sim N^3$. Unfortunately, at present it is not quite clear how to incorporate the topological constraints in the equation of motion, so this remains a challenging problem for future work. Finally, we mention as an interesting possible application of our approach the problem of globule dissolution under an external force [@Kreitmeier]. This problem is also important for the interpretation of the stress - strain relations in polymer networks [@Cifra].
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors have benefited from discussions with Albert Johner. V.G.R. and T.A.V. acknowledge financial support from the Laboratoire Européen Associé (L.E.A.). N-K.L. and T.A.V. appreciate financial support from the German Science Foundation (DFG, Schwerpunkt Polyelektrolyte) and support from the Ministry of Research (BMBF) via the Nanocenter Mainz.
Integration over solvent variables
==================================
In order to accomplish the RPA calculation let us make, as usual [@Rost1], the transformation to the collective solvent density $$\begin{aligned}
\rho({\bf r},t)=\sum_{p=1}^{M}\:\delta\left({\bf r}-{\bf
r}^{(p)}(t)\right)
\label{rho}\end{aligned}$$ and response field density $$\begin{aligned}
\pi({\bf
r},t)=\sum_{p=1}^{M}\sum_{j=1}^{d}\:i{\hat r}^{(p)}_{j}(t)\nabla_{j}\delta\left({\bf r}-{\bf r}^{(p)}(t)\right)\label{pi}\quad.\end{aligned}$$ These transform the influence functional (\[Xi\]) to the form $$\begin{aligned}
\Xi\left[{\bf R},{\hat {\bf R}}\right] &=& \ln \int D\rho({\bf k},t)D\pi({\bf
k},t)\nonumber\\
&\times&\exp\Bigg\{W[\rho,\pi]-\int dt\int\frac{d^{d}k}{(2\pi)^{d}}\pi (-{\bf
k},t)\rho({\bf k},t)V_{\rm ss}({\bf k})\nonumber\\
&+& \sum_{s=1}^N\int dt\:i{\hat
R}_{j}(s,t)\int\frac{d^{d}k}{(2\pi)^{d}}ik_{j}V_{\rm ps}({\bf k})\rho({\bf k},t)\exp\{i{\bf
k}{\bf R}(s,t)\}\nonumber\\
&-&\sum_{s=1}^N \int dt\int\frac{d^{d}k}{(2\pi)^{d}}\pi ({\bf
k},t)V_{\rm ps}(-{\bf
k})\exp\{i{\bf k}{\bf R}(s,t)\}\Bigg\}\dots,
\label{Xirho}\end{aligned}$$ where the functional $W$ depends only on properties of the free system $$\begin{aligned}
W\{\rho,\pi \}&=&\ln \int\prod_{p=1}^{M}D{\bf r}^{(p)}D{\hat {\bf
r}}^{(p)}\exp\left\{A_{\rm solvent}^{(0)}\left[{\bf r}^{(p)},{\hat {\bf
r}}^{(p)}\right]\right\}\nonumber\\
&\times&\delta\left(\rho({\bf r},t)-\sum_{p=1}^{M}\:\delta\left({\bf r}-{\bf
r}^{(p)}(t)\right)\right)\nonumber\\
&\times&\delta\left(\pi ({\bf
r},t)-\sum_{p=1}^{M}\sum_{j=1}^{d}\:i{\hat
r}^{(p)}_{j}(t)\nabla_{j}\delta\left({\bf r}-{\bf
r}^{(p)}(t)\right)\right) \ .
\label{W}\end{aligned}$$ Here $A_{\rm solvent}^{(0)}$ is the free solvent action. Following refs. [@Rost1; @Rost2] one can expand $W\{\rho,\pi \}$ up to second order with respect to $\rho$ and $\pi$, which formally corresponds to the dynamical RPA. Then the solvent variables in (\[Xirho\]) can be integrated over, and for the GF we obtain the following result $$\begin{aligned}
Z\left\{\cdots\right\}=\int DR_j(s,t)D{\hat
R}_j(s,t)\: \exp\left\{ A_{\rm eff}\left[{\bf R}(s,t),\hat{\bf
R}(s,t)\right]\right\}\quad,
\label{GF2}\end{aligned}$$ where $$\begin{aligned}
A_{\rm eff}\left[{\bf R},{\hat {\bf R}}\right]&=&A_{\rm intra}^{(0)}\left[{\bf R},{\hat {\bf
R}}\right]\nonumber\\
&+&\frac{1}{2}\int_{1}\int_{1'}\:i{\hat
R}_{j}(1)\int\frac{d^{d}k}{(2\pi)^{d}}k_{j}k_{p}|V_{\rm ps}({\bf
k})|^{2}\exp\left\{i{\bf k}\left[{\bf R}(1)-{\bf R}(1')\right]\right\}i{\hat
R}_{p}(1')S_{00}({\bf k},t-t')\nonumber\\
&-&\int_{1}\int_{1'}\:i{\hat
R}_{j}(1)\int\frac{d^{d}k}{(2\pi)^{d}}ik_{j}|V_{\rm ps}({\bf
k})|^{2}\exp\left\{i{\bf k}\left[{\bf R}(1)-{\bf
R}(1')\right]\right\}S_{01}({\bf k},t-t')\nonumber\\
&+&\int_{1}\int_{1'}\:i{\hat
R}_{j}(1)\int\frac{d^{d}k}{(2\pi)^{d}}ik_{j}v({\bf k})\exp\left\{i{\bf k}\left[{\bf R}(1)-{\bf
R}(1')\right]\right\}\delta(t-t')\nonumber\\
&+&\frac{1}{2}\int_{1}\int_{1'}\int_{1''}\:{\hat
R}_{j}(1)\int\frac{d^{d}kd^{d}q }{(2\pi)^{2d}}ik_{j}w({\bf k},{\bf
q})\nonumber\\
&\times&\exp\left\{i{\bf k}\left[{\bf R}(1)-{\bf
R}(1')\right] + i{\bf q} \left[{\bf R}(1')-{\bf
R}(1'')\right] \right\} \delta(t-t') \delta(t'-t'')\quad,
\label{A}\end{aligned}$$ where we have used the shorthand notation $\int_{1} \equiv
\sum_{s=1}^{N}\int_{-\infty}^{\infty}\:dt$, $1 \equiv (s,t)$, and $A_{\rm intra}^{(0)}$ is the free chain action. In eq.(\[A\]) $S_{00}({\bf k},t)$ and $S_{01}({\bf k},t)$ stand for the solvent RPA - correlation and response functions, respectively. They take especially simple forms after Fourier transformation with respect to the time argument: $$\begin{aligned}
S_{00}({\bf k},\omega) &=& \frac{\Phi_{00}({\bf k},\omega)}{\left[ 1 +
V_{\rm ss}({\bf k}) \Phi_{10}({\bf k},\omega)\right]\left[ 1 +
V_{\rm ss}({\bf k}) \Phi_{01}({\bf k},\omega)\right]}
\label{RPA1}\end{aligned}$$ and $$\begin{aligned}
S_{01}({\bf k},\omega) &=& \frac{\Phi_{01}({\bf k},\omega)}{\left[ 1 +
V_{\rm ss}({\bf k}) \Phi_{01}({\bf k},\omega)\right]} ,
\label{RPA2}\end{aligned}$$ where $\Phi_{00}({\bf k},\omega)$ and $\Phi_{01}({\bf k},\omega) = \Phi_{10}^{*}({\bf
k},\omega)$ are the correlation and response functions of the free solvent, respectively.
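Eqs. (\[RPA1\]) and (\[RPA2\]) are plain algebraic relations at fixed $({\bf k},\omega)$ and can be sketched directly (a hypothetical helper, with the functions treated as complex numbers at one point of the $({\bf k},\omega)$ plane):

```python
def rpa_correlators(phi00, phi01, phi10, Vss):
    """Dressed solvent correlators, eqs. (RPA1)-(RPA2), at fixed (k, omega)."""
    S01 = phi01 / (1.0 + Vss * phi01)
    S00 = phi00 / ((1.0 + Vss * phi10) * (1.0 + Vss * phi01))
    return S00, S01

# switching the solvent-solvent interaction off recovers the bare functions
S00, S01 = rpa_correlators(2.0, 1.0 + 0.5j, 1.0 - 0.5j, 0.0)
```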
[99]{} P.-G. de Gennes, [*Scaling Concepts in Polymer Physics*]{}, Cornell University Press: Ithaca, NY (1979). P.-G. de Gennes, J. de Phys. Lett. [**36**]{}, L-55 (1975). A.Yu. Grosberg and A.R. Khokhlov, [*Statistical Physics of Macromolecules*]{} (AIP, New York, 1994). K. Kremer, A. Baumgärtner and K. Binder, J. Phys. A [**15**]{}, 2879 (1981). C.F. Abrams, N.-K. Lee and S.P. Obukhov, Phys. Rev. Lett. (submitted); cond-mat/0109198. P.-G. de Gennes, J. de Phys. [**46**]{}, L-642 (1985). E. Pitard and H. Orland, Europhys. Lett. [**41**]{}, 467 (1998). E. Pitard, Eur. Phys. J. B [**7**]{}, 665 (1998). E. Pitard and J.-P. Bouchaud, Eur. Phys. J. E [**5**]{}, 133 (2001). A. Buguin, F. Brochard-Wyart and P.-G. de Gennes, C.R. Acad. Sci. Paris, Série IIb, [**322**]{}, 741 (1996). A. Halperin and P.M. Goldbart, Phys. Rev. E [**61**]{}, 565 (2000). L.I. Klushin, J. Chem. Phys. [**108**]{}, 7917 (1998). A. Byrne, Kiernan, D. Green and K.A. Dawson, J. Chem. Phys. [**102**]{} (1995). E.G. Timoshenko and K.A. Dawson, Phys. Rev. E [**51**]{}, 492 (1995); E.G. Timoshenko, Yu.A. Kuznetsov and K.A. Dawson, J. Chem. Phys. [**102**]{}, 1816 (1995); Yu.A. Kuznetsov, E.G. Timoshenko and K.A. Dawson, J. Chem. Phys. [**103**]{}, 4807 (1995); Yu.A. Kuznetsov, E.G. Timoshenko and K.A. Dawson, J. Chem. Phys. [**104**]{}, 3338 (1996). J. Ma, J.E. Straub and E.I. Shakhnovich, J. Chem. Phys. [**103**]{}, 2615 (1995). R. Chang and A. Yethiraj, J. Chem. Phys. [**114**]{}, 7688 (2001). C. Wu and S. Zhou, Phys. Rev. Lett. [**77**]{}, 3053 (1996); B. Chu, Q. Ying and A.Yu. Grosberg, Macromolecules [**28**]{}, 180 (1995). N. Socci and J. Onuchic, J. Chem. Phys. [**101**]{}, 1519 (1994). M. Doi and S.F. Edwards, [*The Theory of Polymer Dynamics*]{}, Clarendon Press, Oxford, 1986. N.V. Dokholyan, E. Pitard, S.V. Buldyrev and H.E. Stanley, Glassy behavior of a homopolymer from molecular dynamics simulations, cond-mat/0109198. V.G. Rostiashvili, G. Migliorini and T.A. Vilgis, Phys. Rev. E [**64**]{}, 051112 (2001). H. Kinzelbach and H. Horner, J. Phys. I [**3**]{}, 1329 (1993). D. Cule and Y. Shapir, Phys. Rev. E [**53**]{}, 1553 (1996). V.G. Rostiashvili, M. Rehkopf and T.A. Vilgis, J. Chem. Phys. [**110**]{}, 639 (1999). M. Benmouna, T.A. Vilgis and H. Benoit, Makromol. Chem. Theory Simul. [**1**]{}, 333 (1992). J.-P. Boon and S. Yip, [*Molecular Hydrodynamics*]{}, McGraw-Hill Inc., N.Y., 1980. A.Yu. Grosberg, S.K. Nechaev and E.I. Shakhnovich, J. Phys. France [**49**]{}, 2095 (1988). Y. Rabin, A.Yu. Grosberg and T. Tanaka, Europhys. Lett. [**32**]{}, 505 (1995). G.H. Fredrickson and E. Helfand, J. Chem. Phys. [**93**]{}, 2048 (1990). V.G. Rostiashvili, M. Rehkopf and T.A. Vilgis, Eur. Phys. J. B [**6**]{}, 233 (1998). V.G. Rostiashvili and T.A. Vilgis, Phys. Rev. E [**62**]{}, 1560 (2000). M. Rehkopf, V.G. Rostiashvili and T.A. Vilgis, J. Phys. II (France) [**7**]{}, 1469 (1997). G. Migliorini, V.G. Rostiashvili and T.A. Vilgis, Eur. Phys. J. E [**4**]{}, 475 (2001). G. Migliorini, N. Lee, V.G. Rostiashvili and T.A. Vilgis, Eur. Phys. J. E [**6**]{}, 259 (2001). P.-G. de Gennes, Macromolecules [**9**]{}, 587, 594 (1976). B. Ostrovsky and Y. Bar-Yam, Europhys. Lett. [**25**]{}, 409 (1994). S. Kreitmeier, J. Chem. Phys. [**112**]{}, 6925 (2000); S. Kreitmeier, M. Wittkop and D. Göritz, Phys. Rev. E [**59**]{}, 1982 (1999). P. Cifra and T. Bleha, Macromolecules [**31**]{}, 1358 (1998).
---
address: |
Applied Physics Dept., ETSIAE, Univ. Politécnica de Madrid, E-28040 Madrid, Spain; [email protected]\
In memory of my mother.
---
Introduction
============
The symmetry of the physical laws is probably the essential foundation of our current understanding of physics and the universe [@Feynman]. Symmetry principles are indeed essential in the formulation of quantum field theory, as one of the fundamental theories of physics [@Weinberg]. The oldest and most common symmetries are the space-time symmetries, namely the symmetry of the physical laws under space or time translations and under space rotations, a symmetry that is enlarged to the Poincaré symmetry group by the theory of relativity. These symmetries induce very relevant conservation laws, namely the conservation of linear and angular momenta and the conservation of energy. When gravity is added to quantum field theory, the space-time symmetries become more involved, because they only hold locally in inertial frames. However, the relativistic theories of gravity can be formulated as [*gauge*]{} theories of the space-time symmetries [@Hehl]. In addition to the mentioned space-time symmetries, there is another transformation of space-time that is intuitively appealing and has had an important role in physics and other sciences, even though it is not necessarily a symmetry, namely the transformation of scale or dilatation. It closes, together with the space-time transformations, a group that can be enlarged to the [*conformal group*]{} of transformations, of ever increasing interest in theoretical physics [@Kastrup].
Given the large range of sizes in the universe [@10], it is surely an old question how to know what determines the relevant sizes, from micro to macro-physics. While micro-physics or human scale physics involve various physical constants and laws, macro-physics, especially the large scale structure of the universe, is just the realm of gravity. Gravity has no intrinsic length scale, so one can wonder why large astronomical objects have a given size. In fact, beyond galaxies, whose size is determined by both gravity and the electromagnetic interaction [@Padma], there seems to be no way to construct objects of a definite size. Simply put, if one finds an object of a given size, there must be similar objects of larger size (and possibly of smaller size, as long as one does not go to too small scales). For example, take a cluster of galaxies; there must be similar [*superclusters*]{} of every possible size. Not surprisingly, the idea of a scale invariant structure of the universe on large scales is old, but its modern formulation had to await the advent of the appropriate mathematical description, namely fractal geometry [@Mandel]. Simple fractals are scale invariant and are indeed composed of clusters of clusters of …, down to the infinitesimally small. Naturally, in the universe, the self-similarity must stop at a scale about the size of galaxies, although it could be limitless towards the large scales, in principle.
However, the appealing idea of an infinite hierarchy of clusters of clusters of galaxies [@Vaucou; @Mandel] clashes with the large scale homogeneity prescribed by the standard cosmological principle and embodied in the Friedmann–Lemaitre–Robertson–Walker relativistic model of the universe [@Pee; @Peebles]. Nevertheless, a compromise is possible: the universe is homogeneous on very large scales but is fractal on smaller yet large scales (in the so-called strong-clustering regime). In the intermediate range of scales, the structure of matter in the universe undergoes a transition from fractal to homogeneous. Therefore, there is a scale of transition to homogeneity, which admits several definitions that should give approximately the same value. However, despite the work of many researchers along several decades, the debate about the scale of transition to homogeneity is not fully settled and quite different values appear in the literature [@Pee; @Peebles; @Cole-Pietro; @Borga; @Sylos-PR; @Jones-RMP; @I0]. This situation is surely a consequence of the different definitions used and the limitations of the current methods of observation. At any rate, this mainly observational issue is not crucial for us and we are content to study the fractal structure of the universe without worrying about the definitive value of the scale of transition to homogeneity.
It is normal in physics that scale invariance holds in one range of scales and is lost in another range or changes to a different type of scale invariance (in a sense, the homogeneous state is trivially scale invariant). This situation is common, for example, in critical phenomena in statistical physics, in which it is called [*crossover*]{}. The theory of critical phenomena is actually a fruitful domain of application of the theory of scale invariance and furthermore of the full theory of conformal invariance [@Henkel]. The purpose of the present paper is to study the theory of the fractal structure of the universe with methods of statistical physics and field theory. There is a basic difference between the large mass fluctuations in fractal geometry and the more moderate fluctuations in the theory of critical phenomena [@I0], but statistical field theory methods can nonetheless be applied to fractal geometry. At any rate, the scale invariance and fractal nature of gravitational clustering is due to the form of the law of gravity. In fact, the statistical field theory of gravitational systems is peculiar.
The Newton law, with its inverse dependence on distance, does not fulfill the condition of short-rangedness that makes an interaction tractable in statistical physics, thus there can be no homogeneous equilibrium state (§74, [@LL]). This problem leads to the peculiar statistical and thermodynamic properties of many-body systems with long-range interactions, such as negative specific heats, ensemble nonequivalence, metastable states whose lifetimes diverge with the number of bodies, and spatial inhomogeneity, among others [@Chava]. Fortunately, there are methods to study gravitationally interacting systems. In the mean field approximation for a system of $N$ bodies in gravitational interaction, which becomes exact for $N \ra \infty$, the equation that rules the distribution of the gravitational potential is a higher dimensional generalization of Liouville’s equation, originally introduced to describe the conformal geometry of surfaces [@Liouville]. The gravitational equation is also old and has been given various names; it is called the Poisson–Boltzmann–Emden equation in Bavaud’s review [@Bavaud]. Its scale covariance was remarked on by de Vega et al. [@deVega] (who actually studied the fractal structure of the interstellar medium, on galactic scales below the cosmological scales).
Deep studies of the fractal geometry of mass distributions related to the Poisson–Boltzmann–Emden equation have been carried out in the theory of stochastic processes, namely in the theory of random multiplicative cascades [@MFcascades; @LiouvilleQG; @Mchaos]. Historically, this theory arose in relation to the [*lognormal model*]{} of turbulence [@Kolmo62; @Frisch]. Random multiplicative cascades give rise to multifractal distributions, which have applications in several areas [@MandelMF; @Harte]. Indeed, random multiplicative cascades are naturally applied to cosmology: while in models of turbulence the energy or vorticity “cascades” down towards smaller scales, in cosmology, mass undergoes successive gravitational collapses. The result of the infinite iteration of random mass condensations is a multifractal mass distribution, that is to say, a generalization of a simple fractal structure.
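The mechanism of a random multiplicative cascade is simple enough to sketch in a few lines. The following is a canonical lognormal cascade on the unit interval (a generic illustration of the construction, not a cosmological code; the splitting rule and the choice of weight distribution are standard assumptions of the lognormal model):

```python
import random

def lognormal_cascade(levels, sigma=0.5, seed=0):
    """Canonical lognormal multiplicative cascade on the unit interval.
    Each cell splits in two; each child gets half the parent mass times an
    independent lognormal weight of unit mean, so mass is conserved on average."""
    rng = random.Random(seed)
    mu = -sigma**2 / 2.0              # E[W] = exp(mu + sigma**2/2) = 1
    masses = [1.0]
    for _ in range(levels):
        masses = [0.5 * m * rng.lognormvariate(mu, sigma)
                  for m in masses for _ in (0, 1)]
    return masses

cells = lognormal_cascade(10)         # 2**10 cells with strongly fluctuating mass
```

Iterating the construction to deeper levels produces the increasingly intermittent, singular measures characteristic of multifractal distributions.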
Gravity is the dominant force in the universe on large scales but it also dominates the very small scales near the Planck length, which is the domain of quantum gravity [@QG]. Field equations connected with the Poisson–Boltzmann–Emden equation appear in generalizations of the general theory of relativity that add a scalar field, such as the Dicke–Brans–Jordan theory [@QG]. Scalar–tensor theories of gravity are old but they gained popularity with the advent of string theory, a theory of quantum gravity in which a scalar field, the dilaton, naturally arises as a partner of the graviton (the metric tensor field quantum) [@Polyakov; @GSW]. The dilaton is essential in the understanding of the conformal symmetry of space-time [@tHooft]. However, the connection of the dilaton field with the gravitational potential in the Poisson–Boltzmann–Emden equation is, at best, indirect; except in two-dimensional relativistic gravity, which is defined in terms of only one scalar field [@Polyakov]. Of course, the fractal geometry of the large scale structure of the universe is produced by three-dimensional gravity, but the study of lower dimensional models also has interest in cosmology.
We begin with a summary of fractal geometry, in particular, of multifractal geometry, the description of the cosmic mass distribution. We proceed to study how to generate the fractal geometry of the large scale structure, especially combining the Vlasov dynamics and the Poisson–Boltzmann–Emden equation [@Bavaud] with the Zel’dovich approximation and the adhesion model of the early stage of structure formation [@Shan-Zel; @GSS]. The fractal geometry of the web structure that arises in the cosmological evolution according to the adhesion model is thoroughly studied in [@AinA]. In the present paper, we take a further step in the attempt to describe analytically the fractal geometry of the universe, by means of simple models and taking advantage of the power of the scale symmetry.
Multifractal Geometry {#MFgeom}
=====================
Multifractal geometry is, of course, the geometry of multifractal mass distributions, which are a natural generalization of the concept of simple fractal set, such that a range of dimensions appear instead of just a single dimension. However, multifractal geometry can almost be defined as the geometry of [*generic*]{} mass distributions, because most of them are actually susceptible to a multifractal analysis. To be precise, despite common prejudices, generic mass distributions are [*strictly singular*]{}, that is to say, the mass density is not well defined and is in fact either zero or infinity at every point [@Monti]. The singularities give rise to a spectrum of dimensions, as indeed happens in the mass distributions that appear in cosmology [@AinA]. Actually, the formation of structure in the universe occurs in a definite way, from the growth of small density fluctuations in an initially homogeneous and isotropic universe, up to a size such that the increased gravitational force leads to a collapse of mass patches towards the formation of singularities. In the adhesion model of structure formation, the collapse initially leads to matter sheets (two-dimensional singularities), then to filaments (one-dimensional singularities), and finally to point-like singularities [@Shan-Zel; @GSS]. The web structure so formed is a particular example of multifractal mass distribution and is indeed a rough model of the actual cosmic structure, but it is not a very accurate model [@AinA]. The problem is that the formation of matter filaments and especially point-like singularities, in Newtonian gravitation, can only occur after the dissipation of an infinite amount of gravitational energy, so that the process requires a fully relativistic treatment (Box 32.3, [@QG]), beyond the scope of the adhesion model.
At any rate, the adhesion model is not meant to describe accurately the formation of singularities, even within Newtonian gravitation. Gurevich and Zybin’s specific approach to this process (within Newtonian gravitation) [@GZ] obtains different results, namely the formation of singular power-law mass concentrations instead of the collapse of a number of dimensions (one, two or three) to zero size. Singular power-law mass concentrations are the hallmark of typical multifractals. On the other hand, multifractals are the natural generalization of simple hierarchical mass distributions, namely simple fractal sets, also referred to as [*monofractal*]{} distributions. Therefore, we are led to study the geometry of multifractal mass distributions and the proper way to characterize them [@AinA]. Examples of monofractal, multifractal and adhesion model web structures are shown in Figure \[devil\].
![Three different types of fractal structure that can represent the cosmic mass distribution (from left to right): (i) monofractal cluster hierarchy, with $D=1$ and therefore $m(r) \propto r$; (ii) cosmic web structure, rendered in two dimensions, showing one-dimensional filaments and zero-dimensional nodes organized in a self-similar structure; and (iii) multifractal lognormal mass distribution, of dimensions $\a$ such that $m(r,\bm{x}) \propto r^{\a(\bm{x})}$ from the point $\bm{x}$. []{data-label="devil"}](rand-Cantor_D=1.pdf "fig:"){height="4.95cm"} ![](cosmic-web2_crop.pdf "fig:"){height="4.95cm"} ![](lognormal.pdf "fig:"){height="4.95cm"}
The mathematical definition of multifractal can be found in books about fractal geometry [@Harte; @Falcon]. It is to be remarked that the definition of multifractal, as well as the simpler definition of fractal set, are formulated in terms of limits for some vanishing length scale. In other words, the scale symmetry only needs to hold asymptotically for vanishing length scales. However, there is an important class of multifractals in which scale symmetry holds in a finite range of scales, namely the [*self-similar*]{} fractals or multifractals, generated by some iterative process [@Harte; @Falcon]. These processes are especially interesting in the theory of cosmological structure formation. One reason for it is the symmetry of the initial state, which is strictly homogeneous and isotropic and is such that structure formation proceeds homogeneously in every part of the universe, defining a single scale of transition to homogeneity. Another reason, related to the preceding one, is that the standard cosmological principle admits a stochastic formulation, namely Mandelbrot’s conditional cosmological principle, in which every [*possible*]{} observer is replaced by every observer located at a [*material*]{} point (§22, [@Mandel]). This principle implies the self-similarity of the strong-clustering regime.
As mentioned in the Introduction, good models of the strong-clustering regime of the cosmological evolution, in terms of successive gravitational collapses, are the random multiplicative cascades. Before studying these cascade processes in detail in Section \[cascades\], we prepare ourselves by studying methods of multifractal analysis, especially those that take into account the statistical homogeneity and isotropy of the mass distribution.
A statistically homogeneous, isotropic, and scale invariant mass distribution can be characterized by the probability of a density field $\rho(\bm{x})$ with those symmetries, that is to say, by a suitable functional of the function $\rho(\bm{x})$. If we forgo the scale invariance, the simplest functional is, of course, the Gaussian functional, which in Fourier space is simply a product of the respective Gaussian probability functions over all the Fourier modes [@Padma]. However, in cosmology, it is normal to use, instead of the probability of $\rho(\bm{x})$, the $N$-point correlation functions of $\rho(\bm{x})$ or the method of [*counts in cells*]{} [@Peebles]. These counts are usually galaxy counts in a sphere of radius $r$ placed at random across a fair sample of the process [@Peebles]. Therefore, the density refers to just the galaxy number density. This method of galaxy counts in cells has been applied to the study of scale invariance in the large scale structure of the universe [@Bal-Scha]. Such counts do not consider the mass of galaxies. Nevertheless, in cosmological $N$-body simulations, which describe the mass distribution in terms of $N$ bodies (or particles) of equal mass, number density equals mass density, so the density is the real mass density or, to be precise, a [*coarse-grained*]{} mass density. Furthermore, the good mass resolution of the simulations makes them suitable for the multifractal analysis [@Valda; @Colom; @Yepes; @fhalos; @I4]. The results of these analyses are summarized in Section \[data\].
It is often assumed that the full set of $N$-point correlation functions allows recovering the probability functional of $\rho(\bm{x})$ or that the full set of integral statistical moments of the probability distribution function of the coarse-grained density allows recovering the probability distribution function of the coarse-grained density. Neither assumption necessarily holds for random multifractals. This is an issue that affects the strictly singular distributions, as they have mass densities that are zero or infinity at every point. It is clear that this singular behavior makes the probability distribution function of the coarse-grained density singular for a vanishing [*coarse-graining length*]{} (e.g., for a vanishing radius $r$ of the above-mentioned sphere): the probability distribution function gets concentrated on zero or infinity. In fact, the multifractal analysis can be made in terms of the probability distribution function of the coarse-grained density, analyzing [*how*]{} it becomes singular. This type of multifractal analysis is equivalent to the lattice-based or point-centered methods [@Harte; @Falcon], according to the type of coarse-graining, if we take into account the homogeneity and isotropy of the mass distribution. Of course, the mass distribution must also be self-similar, namely scale invariant in some range of scales and not just asymptotically for vanishing coarse-graining length.
It has to be remarked here that self-similarity is not equivalent to multifractality. A pertinent example is the case of the adhesion model with scale invariant initial conditions [@V-Frisch]. The scale invariant initial conditions consist of a uniform density and a [*fractional Brownian*]{} velocity field. This type of field is self-similar but not multifractal; in fact, it is a normal non-differentiable function (ch. IX, [@Mandel]). However, in the adhesion model, the mass distribution becomes singular and multifractal in the course of the evolution of the scale invariant initial conditions [@V-Frisch; @AinA].
In a multifractal distribution, the point density $\rho(\bm{x})$ is singular: in mathematical terms, it does not exist as a function. However, the coarse-grained density is a standard function, namely the coarse graining is a [*regularization*]{} of $\rho(\bm{x})$. It is, however, a function that depends on the coarse-graining length, say $r$, and on the precise method of coarse graining. Since the point density $\rho(\bm{x})$ is a random variable, so is the coarse-grained density. The probability distribution of this variable is independent of the base point, by statistical homogeneity, and it is a suitable starting point for the multifractal analysis outlined above. Let us call $\rho_r$ the density coarse grained with length $r$ and $P(\rho_r)$ its probability distribution function. We are interested in the statistical moments of this probability distribution function. These moments are related to the density correlation functions: in fact, the statistical moment of order $k$ can be expressed as the integral of the $k$-point correlation function over the $k$ points in the coarse-graining volume. Furthermore, there is a relation between the probability of given counts in cells in some volume and the statistical moments of the probability distribution function of the number density in that volume [@Bal-Scha; @White]. In particular, the probability of the volume being empty, called the [*void probability function*]{}, is the Laplace transform of the probability distribution function of the density and can be taken as the generating function of the statistical moments of this probability distribution function, following standard methods of statistical mechanics. Naturally, these relations play a role in the study of [*cosmic voids*]{} [@voids]. Here, we focus on just the probability $P(\rho_r)$.
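The relation between the void probability function and the statistical moments can be illustrated with a minimal numerical sketch. The gamma distribution used below is a purely hypothetical stand-in for $P(\rho_r)$, chosen only because its Laplace transform is elementary; the parameters $k$ and $\theta$ are arbitrary.

```python
import numpy as np

# Hypothetical illustration: take a gamma distribution as a stand-in for
# P(rho_r); its Laplace transform (the void probability function) is
# the elementary expression (1 + V*theta)**(-k).
rng = np.random.default_rng(0)
k, theta = 2.0, 1.5                          # arbitrary parameters
rho = rng.gamma(k, theta, size=400_000)      # sampled coarse-grained densities

# Void probability function P0(V) = <exp(-V*rho)>.
for V in (0.1, 0.5, 1.0):
    P0_mc = np.exp(-V * rho).mean()
    assert abs(P0_mc - (1 + V * theta) ** (-k)) < 5e-3

# P0 generates the moments: <rho**m> = (-1)**m * d^m P0/dV^m at V = 0.
# Check the first moment with a symmetric finite difference.
h = 1e-4
dP0 = (np.exp(-h * rho).mean() - np.exp(h * rho).mean()) / (2 * h)
assert abs(-dP0 - rho.mean()) < 1e-3
```

The Monte Carlo estimate of the Laplace transform matches the closed form, and differentiating it at the origin recovers the mean density, in the sense described above.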
The multifractal analysis consists in finding how $P(\rho_r)$ depends on $r$ through its [*fractional*]{} statistical moments. Within the [*scaling regime*]{}, every fractional statistical moment behaves as a power law of $r$, namely for $q \in \mathbb{R}$, $$\mu_q = \frac{\langle {\rho_r}^q\rangle}{\langle {\rho_r}\rangle^q} =
\int_0^\infty \left(\frac{{\rho_r}}{\langle {\rho_r}\rangle}\right)^q P(\rho_r) d\rho_r
\propto r^{\tau(q)-3(q-1)},
\label{tq}$$ where the somewhat complicated exponent of $r$ comes from the somewhat different definition of statistical moments that is standard in fractal geometry [@I4; @AinA]. Notice that we assume that the multifractal has support in the full three-dimensional space, that is to say, we have a [*nonlacunar*]{} multifractal, with $\tau(0)=-3$, as is natural in Newtonian cosmology. The function $\tau(q)$ describes the singularity structure of any realization of $\rho(\bm{x})$, that is to say, the local behavior of the mass distribution at every singularity. When this description is complete the distribution is said to fulfill the multifractal formalism [@Harte]. The local behavior of a realization of the mass distribution at a singular point $\bm{x}$ is given by the [*local dimension*]{} $\a(\bm{x})$, which defines how the mass grows from that point, that is to say, $$m(\bm{x},r) \sim r^{\a(\bm{x})}.
\label{mr}$$
Naturally, $\a \geq 0$. Every set of points with a given local dimension $\a$ constitutes a fractal set with a dimension that depends on $\a$, namely $f(\a)$. The multifractal formalism involves a relationship between $\tau(q)$ and the local behaviors given by $\a$ and $f(\a)$, in the form of a Legendre transform: $$f(\a) = q\a - \tau(q),
\label{fa}$$ and $\a(q) = \tau'(q)$. It is to be remarked that the multifractal formalism becomes trivial if there is only one dimension $\a=f(\a)$ (the case of a [*monofractal*]{} or [*unifractal*]{}); then, $\a=\tau(q)/(q-1)$, as follows from Equation (\[fa\]). The quotient $D_q=\tau(q)/(q-1)$ is called, in general, the Rényi dimension and has an information-theoretic meaning [@Harte]. In a monofractal, $D_q=\a$ is constant, but in a multifractal it is a function of $q$. A typical example of monofractal is a self-similar fractal [*set*]{} such that the mass is uniformly spread on it.
Before studying the theory of random multiplicative cascades, let us consider some simple examples of self-similar multifractals [@Mandel; @Harte; @Falcon]. They are constructed as self-similar fractal sets with mass distribution on them. The simplest example is surely the “Cantor measure” (example 17.1, [@Falcon]). If the mass is distributed on the three subintervals instead of just two of them, we have the “Besicovitch weighted curdling”, an example of [*nonlacunar fractal*]{} (p. 377, [@Mandel]). A simpler nonlacunar fractal is obtained by using just two equal subintervals, so defining the [*binomial multiplicative process*]{} (§6.2, [@Feder]). Its function $\tau(q)$ is very simple: $$\tau(q) = -\log_2 [p^q+(1-p)^q],
\label{tqm}$$ where $p$ is, say, the mass fraction in the left-hand subinterval. The generalization of these constructions leads to the general theory of deterministic self-similar multifractals (§17.3, [@Falcon]) (or Moran cascade processes (§6.2, [@Harte])). Notice that the average in Equation (\[tq\]), in the case of deterministic multifractals, is to be interpreted as a spatial average in a suitable domain. Random cascades are just an elaboration of the deterministic cascades, in the setting of random processes. Only in this setting can we achieve statistically homogeneous, isotropic and scale invariant mass distributions.
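As a sketch, the following snippet evaluates Equation (\[tqm\]) numerically, together with the Legendre transform of Equation (\[fa\]) and the Rényi dimensions; the mass fraction $p=0.7$ is an arbitrary choice.

```python
import numpy as np

# tau(q) of the binomial multiplicative process, Equation (tqm); the
# mass fraction p = 0.7 is an arbitrary choice.
p = 0.7

def tau(q):
    return -np.log2(p**q + (1 - p) ** q)

assert abs(tau(0) + 1) < 1e-12      # tau(0) = -1: support is the interval
assert abs(tau(1)) < 1e-12          # tau(1) = 0: normalized total mass

# Local dimension alpha(q) = tau'(q) and the Legendre transform of
# Equation (fa), f(alpha) = q*alpha - tau(q).
q = np.linspace(-10, 10, 2001)
alpha = np.gradient(tau(q), q)
f = q * alpha - tau(q)

assert np.all(np.diff(alpha) < 0)            # tau is concave, alpha decreases
assert abs(alpha[-1] + np.log2(p)) < 1e-2    # alpha -> alpha_min = -log2(p)
assert np.max(f) <= 1 + 1e-6                 # f bounded by the support dimension

# Renyi dimensions D_q = tau(q)/(q-1): constant (equal to 1 here) only
# in the monofractal case p = 1/2.
p = 0.5
qs = np.array([0.0, 2.0, 5.0])
assert np.allclose(tau(qs) / (qs - 1), 1.0)
```

The checks reproduce the general properties quoted below for multifractals with full support, here in one dimension: concavity of $\tau(q)$, the values $\tau(0)$ and $\tau(1)$, and the asymptote of slope $\a_\mathrm{min}$.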
Several properties of the self-similar multifractals defined by an iterated function system and a set of weights for the redistribution of mass are relatively easy to prove (these multifractals are sometimes called [*multinomial*]{} because one iteration gives rise to a multinomial distribution) [@Harte; @Falcon]. In a multifractal with support in $\mathbb{R}^3$, $\tau(q)$ is a concave and increasing function, such that $\tau(0)=-3$ and $\tau(1)=0$. Furthermore, $\tau(q)$ has definite asymptotes as $q\ra \infty$ or $q\ra -\infty$ (see the realistic example of Figure \[Bolshoi\]). The slopes can be computed and, in particular, $$\lim_{q\ra \infty}\frac{\tau(q)}{q} = \a_\mathrm{min},
$$ where $\a_\mathrm{min}$ is, of course, the minimum value of the local dimension and is smaller than 3 (or smaller than the ambient space dimension, in general). Therefore, the asymptotic value of the exponent of $r$ in Equation (\[tq\]) for $q\ra \infty$ is $(\a_\mathrm{min}-3)\,q < 0$. On the other hand, the exponent vanishes for $q=0$ and $q=1$. We deduce that the sequence of integral statistical moments has a reasonable behavior: as regards the $r$-dependence, the exponent in Equation (\[tq\]) is negative, thus the moments decrease with $r$; and as regards the asymptotic $q$-dependence, it is such that $P(\rho_r)$ is determined by the integral moments, according to the standard criteria for a probability distribution function to be determined by its integral moments (p. 20, [@Shohat]). This property of determinacy cannot be generalized to arbitrary random multiplicative cascades.

A remark is in order here. People familiar with the theory of critical phenomena may find that the description of a statistically homogeneous, isotropic and scale invariant mass distribution in this section is strange or restrictive, because the same symmetries are generally present in the critical phenomena of statistical physics, yet the treatment is quite different and the concept of multifractal is not necessary. The crucial point is that we are demanding here the scale symmetry of the [*full*]{} mass distribution and not just the scale symmetry of the mass [*fluctuations*]{} about a homogeneous and isotropic mass distribution. The importance of this distinction in cosmology is discussed in [@I0].
Gravity and Scale Symmetry {#gravity}
==========================
Let us consider the universe on large scales as a system of bodies in gravitational interaction and with a moderate range of velocities (the bodies may be galaxies, but this is not important). From such a model, we can draw an interesting conclusion in Newtonian gravity (§9, [@Mandel]): the accumulated mass $m(r)$ inside a sphere of radius $r$ centered on one body is proportional to $r$, that is to say, the system of bodies approaches, in a range of scales, a monofractal of dimension one \[see Equation (\[mr\])\]. The argument is also very simple: the average velocity of the bodies on the surface of the sphere of radius $r$ is about $[G m(r)/r]^{1/2}$, and this velocity must be almost constant. Of course, this argument ignores that any realization of a monofractal of dimension one has to be anisotropic. The relation $m(r)\propto r$ is anyway interesting and actually applies in various situations and goes beyond Newtonian gravity. For example, the mass of a black hole of radius $r$ is proportional to $r$ (the crucial velocity is then the velocity of light). [A more interesting example is the singular solution of the relativistic Oppenheimer–Volkoff equation for isothermal hydrostatic equilibrium, which yields $m(r)\propto r$ (p. 320, [@Weinberg_g]).]{}
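The velocity argument can be restated in symbols; this is only a sketch, with the constant velocity denoted $v_0$ (a symbol not used in the text).

```python
import sympy as sp

# If the typical velocity on the sphere of radius r, v = sqrt(G*m(r)/r),
# is a constant v0, then m(r) = v0**2 * r / G: a monofractal with local
# dimension alpha = 1 in the sense of Equation (mr).
r, G, v0, m_r = sp.symbols('r G v0 m_r', positive=True)

sol = sp.solve(sp.Eq(sp.sqrt(G * m_r / r), v0), m_r)
assert len(sol) == 1 and sp.simplify(sol[0] - v0**2 * r / G) == 0
```
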
The quantity $Gm(r)/r$ represents minus the value of the gravitational potential on the surface of the sphere, assuming isotropy (if there is anisotropy, then consider the average over the surface of the sphere). In a multifractal distribution of mass in the universe, we may think that the condition that the gravitational potential is almost constant everywhere is too restrictive, but we should require that it be bounded. Under this relaxed condition, $\a=1$ becomes a lower bound to the local dimension and this bound seems to agree with the analysis of data from cosmological simulations and observations of the galaxy distribution [@AinA] [(see the $\a=1$ lower bound in Figure \[Bolshoi\])]{}. If we further relax the condition of a bounded gravitational potential, the dimension one appears again: the set of points for which the potential diverges must have a (Hausdorff) dimension smaller than or equal to one (§18.2, [@Falcon]). In the cosmic web produced by the adhesion model, the set of filaments and point-like singularities is the set of a diverging potential and has precisely dimension equal to one. However, the analysis of data in [@AinA] favors a mass distribution in which the potential is either finite everywhere or its set of singularities is much more reduced (the precise meaning of this is explained below).
The above preliminary considerations are interesting but have limited predictive power; unless one takes too seriously the model of bodies with a restricted range of velocities and concludes that the mass distribution must adjust to a monofractal that is irregular but one-dimensional. Mandelbrot, who put forward the argument, also asked why the observational exponent of growth of $m(r)$ is larger than one (he quoted the value $1.23$). In retrospect, we can affirm that Mandelbrot’s assumption of a monofractal distribution was the main stumbling block for a better understanding of the issue. Indeed, in a multifractal, $m(r)$ has different exponents of growth at different points, and the analysis of the correlation function of galaxies, from which the value $1.23$ was obtained, only yields a sort of average of them. There are reasons to expect multifractality, given that it realizes the most general form of scale symmetry. One reason for multifractality lies in the predictions of the adhesion model, which is a reasonably good model for the early formation of structure, including matter sheets and the voids they leave in between, as well as filaments and point-like singularities.
[Nevertheless, the filaments and point-like singularities are more complex objects than the ones predicted by the adhesion model [@AinA]. However, this pertains to Newtonian gravity, while the formation of matter filaments ([*thread-like singularities*]{}) and point-like singularities is feasible in General Relativity (Box 32.3, [@QG]). One may wonder why not consider the problem of structure formation in General Relativity from the start. General Relativity has no intrinsic length scale, like Newtonian gravity, and should also give rise to a fractal structure. Actually, General Relativity has indeed been employed in the study of large-scale structure formation. For example, in the old cosmology models with locally inhomogeneous but globally homogeneous [spacetime]{}s that everywhere satisfy Einstein’s field equations [@Vittie; @Einstein; @Einstein+] (modernly called “Swiss cheese” models). However, these models are far from realistic, because they disregard that the initial conditions of structure formation cannot lead to such [spacetime]{}s. In fact, large-scale structure formation can be studied within simple Newtonian gravity almost everywhere, that is to say, with the exception of the zones with strong gravitational fields, where singularities can form. However, such zones have a very small size, in cosmic terms. ]{}
[The adhesion model is a very simplified model of the action of gravity in structure formation, although it gives a rough idea of the type of early structures.]{} Naturally, what we need for definite and accurate predictions of structure formation are dynamic models that are less simplified than the adhesion model. One can take the full set of cosmological equations of motion in the Newtonian limit [@Pee; @Padma; @Peebles]. They are applicable on scales small compared to the Hubble length and away from strong gravitational fields, but they are [*nonlinear*]{} and quite intractable. In fact, these equations bring in the classic and hard problem of [*fluid turbulence*]{}, with the added complication of the gravitational interaction [@AinA]. Indeed, methods of the theory of fluid turbulence can be applied in the theory of structure formation, and it happens that the peculiarities of the gravitational interaction can be useful as constraints.
For example, let us consider the [*stable clustering*]{} hypothesis, proposed by Peebles and collaborators for the strong clustering regime [@Pee]. It says that the average relative velocity of pairs of bodies vanishes. This hypothesis arose in the search for simplifying hypotheses to solve a statistical formulation of the cosmological equations, one of which was a scaling ansatz. Actually, the stable clustering hypothesis can be considered in its own right and is equivalent to the constancy in time of the average conditional density, namely the average density at distance $r$ from an occupied point. In a monofractal, it is constant and equals the derivative of $m(r)$ divided by the area of a spherical shell of radius $r$ [@Cole-Pietro; @Sylos-PR]. However, just the constancy of the average conditional density does not necessarily imply scale symmetry. Nevertheless, the fact that the average conditional density is singular and indeed a power law of $r$ can be argued on general grounds [@AinA].
While the preceding arguments somewhat justify the scale symmetry of the mass distribution, they may not be cogent enough and do not predict a definite type of mass distribution. In essence, the problem is that we have hardly considered the consequences of the equations of motion. The statistical formulation of these equations is very complicated but it can be simplified in the [*mean field limit*]{}, valid for a large number of interacting bodies. In this limit, the one-particle distribution function suffices, and it fulfills the [*collisionless Boltzmann equation*]{} (or Vlasov equation) (§1.5, [@Padma]) [@Bavaud]. Although this equation is time-reversible, it embodies the nonlinear and chaotic nature of the gravitational dynamics and leads to [*strong mixing*]{}: the flow of matter becomes [*multistreaming*]{} on ever decreasing scales and eventually a state of dynamic equilibrium arises, in a coarse-grained approximation [@GZ]. These dynamic equilibrium states fulfill the virial theorem, as is to be expected and is explicitly proved by the [*Layzer–Irvine equation*]{} (pp. 506–508, [@Peebles]). Naturally, the formation of dynamic equilibrium states can be considered as a concrete form of the stable clustering hypothesis.
An equilibrium state that fulfills the virial theorem is not necessarily a state of [*thermodynamic equilibrium*]{}. However, stationary solutions of the collisionless Boltzmann equation often mimic the properties of the states of thermodynamic equilibrium. As a case in point, let us take the state of equilibrium of an isolated singularity with spherical symmetry, obtained by Gurevich and Zybin [@GZ]. Its density is given by $$\rho(r) \propto r^{-2}\,[\ln(1/r)]^{-1/3}.
$$
This density profile can be compared with the density profile $\rho(r) \propto r^{-2}$ of the [*singular isothermal sphere*]{} (which is the asymptotic form of the density profile of any isothermal sphere for large $r$) [@Padma; @Bavaud]. The difference between the two profiles is very small. Let us remark that the density profile $\rho(r) \propto r^{-2}$ corresponds to the mass–radius relation $m(r)\propto r$ but is unrelated to any fractal property, because it applies to a smooth distribution with just one singularity at $r=0$, unlike a fractal, in which every mass point is singular. A smooth distribution of matter, especially dark matter, possibly with a galaxy at its center, is called in cosmology a [*halo*]{}, and it has been proposed that the large scale structure should be described in terms of halos ([*halo models*]{}) [@CooSh]. Certain distributions of halos can be considered as coarse-grained multifractals [@fhalos] and, in fact, halo models and fractal models have much in common [@AinA].
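A small numeric sketch shows how weak the logarithmic correction in the Gurevich–Zybin profile is; the range of $r$ (in units of some outer radius) is an arbitrary choice.

```python
import numpy as np

# Gurevich-Zybin profile vs. the singular isothermal sphere: the ratio
# of the two is the slowly varying factor [ln(1/r)]**(-1/3).
r = np.logspace(-8, -2, 7)                 # r in units of an outer radius
correction = np.log(1 / r) ** (-1.0 / 3.0)

# Over six decades in r the correction changes only by a factor ~1.6,
# while r**-2 itself changes by twelve orders of magnitude.
assert correction.max() / correction.min() < 1.7
assert np.isclose((r.min() ** -2) / (r.max() ** -2), 1e12)
```
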
Given the success of the assumption of thermodynamic equilibrium, we take the corresponding description of gravitationally bound states as a reasonable approach, although the temperature might have, in the end, a different meaning than it does in the thermodynamics of systems of particles that interact through a short range potential. The thermodynamic approach leads to the Poisson–Boltzmann–Emden equation: $$\Delta \phi = 4\pi G A \exp[-\phi/T],
\label{PBE}$$ where $\phi$ is the gravitational potential, $A$ is a normalizing constant, and [$T$ is the temperature in units of energy per unit mass (equal to one third of the mean square velocity of bodies)]{}. Several derivations of the equation appear in [@Bavaud] (see also (§1.5, [@Padma]), [@deVega]). Probably, the simplest way of understanding this equation is to realize that it is simply the Poisson equation with a source $\rho = A \exp[-\phi/T]$ given by hydrostatic equilibrium in the gravitational field defined by $\phi$ itself [(for the theory of thermodynamic equilibrium in an external field, see (§38, [@LL])).]{}
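The simplest solutions can be sketched numerically. With $\psi=\phi/T$ (up to an additive constant) and a suitably rescaled radius $\xi$ (the textbook reduction of the isothermal sphere, assumed here), the spherically symmetric case of Equation (\[PBE\]) becomes $\psi'' + (2/\xi)\psi' = e^{-\psi}$, and the density contrast $e^{-\psi}$ approaches the singular isothermal sphere $2/\xi^2$ at large $\xi$. The integration parameters below are arbitrary.

```python
import numpy as np

# Isothermal-sphere form of the Poisson-Boltzmann-Emden equation:
#   psi'' + (2/xi) psi' = exp(-psi),   psi(0) = psi'(0) = 0,
# with density contrast rho/rho_c = exp(-psi).
def rhs(xi, y):
    psi, u = y                       # u = dpsi/dxi
    return np.array([u, np.exp(-psi) - 2.0 * u / xi])

# Fixed-step fourth-order Runge-Kutta, started from the series expansion
# psi ~ xi**2/6 to avoid the coordinate singularity at xi = 0.
h, xi = 0.02, 0.01
y = np.array([xi**2 / 6.0, xi / 3.0])
while xi < 500.0:
    k1 = rhs(xi, y)
    k2 = rhs(xi + h / 2, y + h / 2 * k1)
    k3 = rhs(xi + h / 2, y + h / 2 * k2)
    k4 = rhs(xi + h, y + h * k3)
    y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    xi += h

# Asymptotics: rho/rho_c -> 2/xi**2, the singular isothermal sphere;
# xi**2 * exp(-psi) oscillates around 2 with slowly decaying amplitude.
assert 1.5 < xi**2 * np.exp(-y[0]) < 2.5
```

The regular solution thus tends to the scale-symmetric singular profile discussed below, illustrating the statement that the singular isothermal sphere is the large-$r$ asymptotic form of any isothermal sphere.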
The solutions of Equation (\[PBE\]) depend on the boundary conditions, of course. Simple solutions are obtained by imposing rotational symmetry, namely the already mentioned [*isothermal spheres*]{} [@Padma; @Bavaud]. Actually, each solution belongs to a family of solutions related by the scale covariance [@deVega] $${\phi_\l(\bm{x}) = \phi(\l \bm{x}) - T\, \log \l^2.}
$$
Thus, the scale-symmetric singular isothermal sphere can be considered as the limit of regular isothermal spheres in the same family of solutions. ([[Let us recall that a singular isothermal sphere with $\rho(r) \propto r^{-2}$ is also a solution in General Relativity, although the equation for thermodynamic equilibrium is more complicated than Equation (\[PBE\]) (p. 320, [@Weinberg_g]).]{}]{}) One can also obtain the symmetric solutions corresponding to lower dimension; that is to say, if we take Equation (\[PBE\]) in $\mathbb{R}^3$, we obtain the one- or two-dimensional solutions by making $\phi$ depend only on one or two variables. These solutions represent a two-dimensional sheet or a one-dimensional filament, the early structures in the adhesion model. More complex solutions of Equation (\[PBE\]) can be obtained as functions that asymptotically are combinations of the preceding solutions. Naturally, there are also totally asymmetric solutions, some of which look like those combinations of simple solutions (some numerical solutions appear in [@Plum-W]). Notice that the formulation of a partial differential equation such as Equation (\[PBE\]) assumes some regularity of the function, but we are mainly interested in singular solutions. We can interpret Equation (\[PBE\]) in a coarse-grained sense, that is to say, as an equation for the coarse-grained variable $\phi_r$ (or $\rho_r$), and eventually take the limit $r\ra 0$. However, we can obtain directly singular solutions if we reformulate Equation (\[PBE\]) as an integral equation (e.g., in “the Hammerstein description”) [@Bavaud].
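The scale covariance can be checked symbolically on the singular isothermal sphere, $\phi(r)=2T\log r$, which the transformation in fact maps to itself (it is scale invariant, not merely covariant). This is only a sketch; the normalization $A=T/(2\pi G)$ is the one fixed by substituting this $\phi$ into Equation (\[PBE\]).

```python
import sympy as sp

# Check that phi(r) = 2*T*log(r) solves Equation (PBE) with the
# normalization A = T/(2*pi*G), and that the scale covariance
# phi_lambda(r) = phi(lambda*r) - T*log(lambda**2) yields a solution too.
r, lam = sp.symbols('r lambda', positive=True)
T, G = sp.symbols('T G', positive=True)
A = T / (2 * sp.pi * G)

def pbe_residual(phi):
    # Radial Laplacian minus the source term of Equation (PBE).
    lap = sp.diff(r**2 * sp.diff(phi, r), r) / r**2
    return sp.simplify(lap - 4 * sp.pi * G * A * sp.exp(-phi / T))

phi = 2 * T * sp.log(r)
assert pbe_residual(phi) == 0

phi_lam = phi.subs(r, lam * r) - T * sp.log(lam**2)
assert pbe_residual(phi_lam) == 0
```
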
Equation (\[PBE\]), in two dimensions, arises in the differential geometry of surfaces, where it is called Liouville’s equation. In this context, it is the equation that rules the conformal factor of a metric on some surface [\[]{}of course, the temperature $T$ is not present, and anyway it is always scalable away in Equation (\[PBE\])[\]]{}. The higher dimensional generalization of Liouville’s equation also rules the conformal factor of a metric and is then connected with string theory, in which the dilaton is a field in its own right. A deeper connection of these relativistic equations, which appear in theories of quantum gravity, with the Newtonian Equation (\[PBE\]) may exist, but we are only concerned here with the interpretation and consequences of Equation (\[PBE\]) in the theory of structure formation. We can take advantage of the body of knowledge developed from the two-dimensional Liouville’s equation and Liouville’s field theory, in particular, the theory of random multiplicative cascades [@MFcascades; @LiouvilleQG; @Mchaos; @MandelMF; @Harte]. This theory arose in relation to the [*lognormal model*]{} of turbulence [@Kolmo62]. This model certainly has broader scope and often arises in connection with nonlinear processes.
Examples of the application of the lognormal probability distribution function in astrophysics abound, and there are often connections between them. Hubble found that the galaxy counts in cells on the sky can be fitted by a lognormal distribution [@Hubble]. A more relevant example is Zinnecker’s model of star formation by hierarchical cloud fragmentation (a random multiplicative cascade model) [@Zinne]. Furthermore, a lognormal model has been proposed as a plausible approximation to the large-scale mass distribution [@Coles-Jones]. Even though some of these models are explicitly constructed in terms of random multiplicative cascades, the form in which a scale invariant distribution arises is by no means evident. In fact, a lognormal probability distribution function $P(\rho_r)$ of the coarse-grained density $\rho_r$ does not imply by itself any scale symmetry, even if it holds for all $r$. It is just the dependence of $P(\rho_r)$ on $r$ that gives rise to scale symmetry. To be precise, in general, it is necessary that Equation (\[tq\]) holds. In the lognormal model, Equation (\[tq\]) is fulfilled for a particular form of $\tau(q)$.
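The last statement can be made concrete with a minimal toy model. The parametrization of the log-variance, $s^2=\lambda^2\ln(L/r)$, is an assumption chosen precisely so that Equation (\[tq\]) holds exactly at all scales; the value $\lambda^2=0.2$ is arbitrary.

```python
import numpy as np

# Toy lognormal model (assumed parametrization): ln(rho_r) is Gaussian
# with variance s2 = lam2 * ln(L/r) and mean -s2/2, so <rho_r> = 1 at
# every scale. The fractional moments are then exact power laws of r:
#   <rho_r**q> = exp(s2 * q * (q - 1) / 2) = (L/r)**(lam2 * q * (q - 1) / 2).
lam2, L = 0.2, 1.0

def moment(q, r):
    s2 = lam2 * np.log(L / r)
    return np.exp(s2 * q * (q - 1) / 2.0)

# Power-law check: log(moment) is linear in log(r), with the slope
# -lam2*q*(q-1)/2 playing the role of the exponent in Equation (tq).
for q in (0.5, 2.0, 3.0):
    r1, r2 = 1e-3, 1e-2
    slope = np.log(moment(q, r2) / moment(q, r1)) / np.log(r2 / r1)
    assert abs(slope + lam2 * q * (q - 1) / 2.0) < 1e-12

# Normalization: q = 0 and q = 1 give moment 1 at every scale.
r = np.logspace(-4, 0, 5)
assert np.allclose(moment(0.0, r), 1.0) and np.allclose(moment(1.0, r), 1.0)
```

The point is that the scale symmetry comes from the assumed $r$-dependence of the variance, not from the lognormal shape of $P(\rho_r)$ by itself.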
Thus, we are led to the study of random self-similar multiplicative cascades. They are randomized versions of the non-random self-similar multifractals briefly reviewed in the preceding section (Section \[MFgeom\]). The study of random cascades is the task of the next section.
Random Cascades {#cascades}
===============
Here, we are not interested in regular solutions of Equation (\[PBE\]) but in singular solutions, in particular, in solutions such that $\rho_r(\bm{x})$ is either very small or very large. As support for the existence of such solutions, we can argue that $\rho = A \exp[-\phi/T]$ is always positive but experiences great fluctuations, being close to zero at many points but very large at other points, in accord with the fluctuations of $\phi$. We are also interested in solutions that fulfill the principle of statistical homogeneity and isotropy of the mass distribution. The way to obtain such solutions is by means of the theory of random multiplicative cascades.
The construction of deterministic self-similar multifractals introduced in Section \[MFgeom\] is based on the concept of iterated function system. A function system consists of a set of contracting similarities (with some separation condition). In addition, a set of mass ratios is defined that specifies how the initial mass is split among the sets that result from the application of the function system. There are randomized versions of this construction that are not suitable for our problem. For example, one can randomize the distribution of mass on the sets that result from the function system, by changing the order at random without altering the mass ratios [@FalconRAND]. This construction has mathematical interest but is not very relevant for us, because it can produce neither nonlacunarity nor statistical homogeneity and isotropy. We instead focus on the random multiplicative cascades based on Obukhov–Kolmogorov’s 1962 model of turbulence [@Kolmo62; @Frisch]. Obukhov and Kolmogorov intended to describe spatial fluctuations of the energy dissipation rate in turbulence and were inspired by Kolmogorov’s lognormal law of the size distribution in the pulverization of mineral ore. This process can be essentially described as a multiplicative random cascade. In fact, similar models had been introduced earlier in economics, under the name of law of [*proportionate effect*]{} [@Gibrat]. All these models are by nature only statistical. The connection with fractal geometry is due to Mandelbrot, in the late 1960s (the history is recounted in the reprint book [@MandelMF]). He observed that a spatial random cascade process of the type used in turbulence, when continued indefinitely, leads to an energy dissipation generally concentrated on a set of non-integer (fractal) Hausdorff dimension. This property was manifest in some especially simple cascade models, e.g., the $\b$-model, in which the concentration set is monofractal [@Frisch].
However, monofractality implies a singular probability $P(\rho_r)$, because a nontrivial monofractal has to be lacunar. Therefore, $P(\rho_r)$ is unlike a lognormal probability distribution and is not relevant for models of the cosmic mass distribution. Actually, Mandelbrot’s own random multiplicative cascades generically give rise to multifractal mass distributions [@MandelMF]. These are the random cascades of interest here.
A [*multiplicative process*]{} is defined as follows. Suppose that we start with some positive variable of size $x_0$ that at each step $n$ can grow or shrink, according to some positive random variable $W_n$, so that $$x_n = x_{n-1} W_n\,.
\label{xn}$$
Let us think of $x$ as a mass. The proportionate effect consists in that the random growth of $x$ is proportional to its current mass but otherwise independent of it. One may want $x$ to be as likely to grow as to shrink, on average; that is to say, its mean value must stay constant. The variables $W_n$, for $n=1, \ldots$, are assumed to be independent, and indeed to be independent copies of the same random variable $W$. A further condition is that $W$ has moments of every order. In summary, the random variable $W$ is subject to: $$W \geq 0, \quad \langle W \rangle = 1, \quad \langle W^q \rangle < \infty,\,\forall q >0.
\label{W}$$
As is easy to notice, the definition of a multiplicative process is analogous to that of an additive process, which gives rise to the central limit theorem and the Gaussian distribution. However, the multiplicative process is very different. A simple example shows this: if we take $W$ to be zero or two with equal probability, then the product of $N$ instances of $W$ will be zero unless all the values $W_n$ turn out to be two, in which case the product is very large, namely $2^N$. The mean square value and the variance of the product are also large, and higher moments are even larger. In fact, the moments of the product of $N$ instances of $W$ are the $N$th powers of the moments of $W$, which are given by $$\log_2 \langle W^q \rangle = q-1.
$$
This simple example is actually a particular case of the $\b$-model, with $\b=1/2$ (§8.6.3, [@Frisch]).
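As a quick sanity check of these moment formulas, the following snippet (a minimal sketch in Python; the chosen values of $q$ are purely illustrative) computes $\langle W^q\rangle$ exactly for the two-valued $W$ and confirms that $\log_2 \langle W^q \rangle = q-1$, so that the moments of a product of $N$ copies are $N$th powers of the moments of $W$:

```python
import math

# beta-model with beta = 1/2: W takes the values 0 or 2 with equal probability
w_values = [0.0, 2.0]

def moment(q):
    """Exact q-th moment <W^q> of the two-valued W above."""
    return sum(w ** q for w in w_values) / len(w_values)

# <W> = 1 and log2 <W^q> = q - 1 (e.g. <W^2> = 2); the q-th moment of a
# product of N independent copies of W is then <W^q> ** N.
```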
The $\b$-model is somewhat singular, because it lets $W$ be null with a non-vanishing probability. We can easily change this feature by taking $W$ to be $2p$ or $2(1-p)$ with equal probability, where $0<p<1$ ($p=0$ or $p=1$ gives back the case already considered). Now, we have $$\log_2 \langle W^q \rangle = \log_2 \frac{(2p)^q+(2(1-p))^q}{2} =
\log_2 [p^q+(1-p)^q] + q-1.
$$
Therefore, the $q$-moment of the product of $N$ instances of $W$ is given by the expression $$\langle (\prod_{n=1}^N W_n)^q \rangle = \langle W^q \rangle^N = 2^{N[-\tau(q)+q-1]},
\label{qm}$$ where $\tau(q)$ is given by Equation (\[tqm\]), because the binomial multiplicative process of Section \[MFgeom\] is closely connected with this process. The binomial multifractal is possibly the simplest example of a nonlacunar self-similar multifractal, but it is deterministic. The connection with the present stochastic process is based on the identification of the average over $W$ in the multiplicative process with the spatial average over the mass distribution in the binomial multifractal. Of course, the binomial multifractal could be randomized by using a generalization of the procedure in [@FalconRAND], but the result would have undesirable features. First, the binary subdivision procedure makes itself noticeable in the final result, breaking the desired statistical homogeneity (as well as statistical isotropy, in the higher-dimensional generalization). Second, the total mass is fixed at the initial stage in some given volume, whereas we would rather have mass fluctuations that tend to zero as the volume tends to infinity.
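The identification of moment averages with spatial averages can be checked numerically on the deterministic binomial multifractal. The sketch below (Python with NumPy; $p$ and the depth $N$ are illustrative, and the standard partition-function convention $Z(q,l)=\sum_i \mu_i^q \sim l^{\tau(q)}$ with $l=2^{-N}$ is assumed) constructs the binomial measure and compares its partition function with $\tau(q)=-\log_2[p^q+(1-p)^q]$, consistent with Equation (\[qm\]):

```python
import numpy as np

p, N = 0.3, 12            # mass ratio and cascade depth (illustrative values)
mu = np.array([1.0])      # unit mass on the unit interval
for _ in range(N):
    # split the mass of every cell into fractions p and 1 - p; the spatial
    # ordering of the cells is irrelevant for the partition function
    mu = np.concatenate([p * mu, (1.0 - p) * mu])

def tau(q):
    # tau(q) of the binomial multifractal, as implied by Equation (qm)
    return -np.log2(p ** q + (1.0 - p) ** q)

# Z(q) = sum_i mu_i^q should equal l**tau(q) with l = 2**(-N)
```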
For the application of multiplicative processes to the description of the cosmic mass distribution, we have two options. One is to consider a fixed volume and to understand that the mass fluctuation in it grows with time, according to the natural generalization of Equation (\[qm\]), where $N$ is the number of time steps. Another is the standard generalization of the definition of multiplicative processes for the construction of self-similar multifractals, where the volume is reduced in every iteration and the average mass is reduced in the same proportion. For example, for a lattice multifractal in $\mathbb{R}^n$, we need to replace in Equation (\[W\]) $\langle W \rangle = 1$ with $\langle W \rangle = b^{-n}$, where $b \geq 2$ is the number of subintervals ($b^{-n}$ is the volume reduction factor) [@Harte]. In general, if we define $$\tau(q) = -\log_b\langle W^q \rangle -n,
$$ then, taking into account that $r/r_0=b^{-N}$, after $N$ steps, and that $\rho_r = m_r/r^3$, we recover Equation (\[tq\]) in the case $n=3$.
Let us now see how the lognormal distribution arises from Equation (\[xn\]). Taking logarithms, $$\log x_N = \log x_{0} + \sum_{n=1}^N \log W_n\,.
$$
Assuming that the random variables $\log W_n$ fulfill the standard conditions to apply the central limit theorem, this theorem implies that the sum of the variables converges to a normal distribution and, therefore, the quotient $x_N/x_0$ converges for large $N$ to a lognormal distribution. This argument has been criticized on the basis of the theory of large deviations, which shows that the limit distribution is expressed in terms of the [*Cramér function*]{}, whereas the central limit theorem only applies to small deviations about its maximum, in the form of a quadratic approximation (§8.6, [@Frisch]) [@MandelMF]. This quadratic approximation is insufficient to characterize a full multifractal spectrum and only provides an approximation of it, whereas the Cramér function is sufficient. The insufficiency of the central limit theorem is manifest in that there are many different quadratic approximations to the multifractal spectrum. Actually, two of them are especially important: the quadratic expansions about the two distinguished points of a multifractal, namely, the point where $\a=f(\a)$, which is the point of [*mass concentration*]{}, and the point where $f(\a)$ is maximum, which corresponds to the [*support*]{} of the mass distribution (the full $\mathbb{R}^3$ in our case, so that $f(\a)=3$) [@I4]. Both quadratic approximations can be employed, but they give different results in general. At any rate, a lognormal multiplicative process can be obtained by just requiring that the random variables $W_n$ be lognormal themselves, so that the multifractal spectrum has an exact parabolic shape (§8.6, [@Frisch]) [@MandelMF; @Harte]. However, this lognormal model presents some problems (§8.6.5, [@Frisch]) [@MandelMF].
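The lognormal case can be illustrated numerically. In the sketch below (Python with NumPy; the per-step log-variance $s^2$, the depth $N$, and the sample size are illustrative), the condition $\langle W\rangle=1$ fixes the mean of $\log W$ to $-s^2/2$, so the log of the product of $N$ lognormal factors is exactly Gaussian with mean $-Ns^2/2$ and variance $Ns^2$:

```python
import numpy as np

rng = np.random.default_rng(42)
s2, N, n_samples = 0.1, 50, 100_000
# W = exp(g) with g normal; <W> = 1 requires <g> = -s2/2 (lognormal moments)
log_w = rng.normal(-s2 / 2.0, np.sqrt(s2), size=(n_samples, N))
log_x = log_w.sum(axis=1)        # log of the product x_N / x_0
x = np.exp(log_x)
# log_x is Gaussian with mean -N*s2/2 and variance N*s2, so x is lognormal
# with <x> = 1; the high moments <x^q> = exp(N*s2*q*(q-1)/2) are dominated
# by rare samples and converge very slowly in Monte Carlo estimates.
```

The slow Monte Carlo convergence of the high moments is a numerical symptom of the large-deviation effects discussed above.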
A surprising feature of an exactly parabolic multifractal spectrum $f(\a)$ is that it extends to $f(\a)<0$, so there are negative fractal dimensions. The general nature of this anomaly has been discussed by Mandelbrot [@Mandel_neg]. He says that “the negative $f(\a)$ rule the sampling variability”. Any set of singularities of strength $\a$ with $f(\a)<0$ is [*almost surely*]{} empty, but Mandelbrot states that those values “measure usefully the degree of emptiness of empty sets.” The common practice in the multifractal analysis of experimental and observational cosmic mass distributions is to discard the part of the multifractal spectrum with $f(\a)<0$ [@I-SDSS].
One more point of concern is the discrete nature of multiplicative processes, which are built on a dyadic or $b$-adic tree from the larger to the smaller scales and produce multifractals with two main drawbacks: they display only discrete scale invariance and are not strictly translation invariant (or isotropic in $\mathbb{R}^n$, $n>1$). In statistical (or quantum) field theory, the scale symmetry takes place at [*fixed points*]{} of the renormalization group, which is usually formulated as an iteration of a discrete transformation but admits a continuous formulation [@Wil-Kog]. An analogous procedure can be carried out for multiplicative processes. A simple example of the continuous formulation of multiplicative processes is provided by replacing the random walk equation that produces Brownian motion, namely $$\frac{dx}{dt} = \xi(t),
$$ where $\xi(t)$ is a Gaussian process (e.g., white noise), by the multiplicative equation $$\frac{dx}{dt} = \xi(t)\,x.
$$
The solution of this equation is $$x(t) = x_0\, \exp\! \int_0^t \xi(s)\,ds.
$$
It undergoes large fluctuations, like the discrete multiplicative processes described above, and can actually be considered as a continuous formulation of the lognormal multiplicative process. The general definition of continuous multiplicative processes employs the concept of log-infinitely divisible probability distribution [@Muzy]. A particularly useful class of these continuous multiplicative processes is given by the log-Lévy generators (§6.3.7, [@Harte]). This is an interesting topic but rather technical, so we do not dwell on it.
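The continuous formulation can be sketched by discretizing the equation in the log variable, which reproduces the exponential solution above and keeps $x$ strictly positive (a minimal Python/NumPy sketch; the time horizon, step count, and path count are illustrative, and $\xi$ is taken as Gaussian white noise, whose integral over a step of length $\Delta t$ is normal with variance $\Delta t$):

```python
import numpy as np

rng = np.random.default_rng(7)
T, n_steps, n_paths = 1.0, 1_000, 10_000
dt = T / n_steps
# increments of int_0^t xi(s) ds over each step: independent N(0, dt)
d_int = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
x = np.exp(d_int.cumsum(axis=1))   # x(t) = exp( int_0^t xi ds ), x(0) = 1
# log x(T) is N(0, T): typical paths stay of order one, but rare paths
# reach exp of a few standard deviations, giving heavy-tailed fluctuations
```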
In summary, we have a construction of multifractals that are statistically homogeneous and isotropic (in $\mathbb{R}^n$, $n>1$) and have continuous scale invariance. Furthermore, they satisfy Equation (\[tq\]), for some function $\tau(q)$. In fact, we can consider the multifractal as a scale-symmetric mass distribution that is obtained in the limit of the continuous parameter $t \ra \infty$. In this sense, it is analogous to a statistical system at a critical point defined in terms of a fixed point of the renormalization group. However, let us insist that a multifractal is [*fully*]{} scale symmetric, whereas a critical statistical system is only required to have scale symmetric [*fluctuations*]{}. Nevertheless, the mass fluctuations in a continuous scale multifractal also tend to zero as $t \ra -\infty$ (at very large scale). Therefore, we may consider the transition from the fully homogeneous and isotropic cosmic mass distribution on very large scales to the multifractal distribution on smaller yet large scales as a [*crossover*]{} analogous to the ones that take place in the theory of critical phenomena in statistical physics. We must also notice that the scale parameter $t$, analogous to a renormalization group parameter, can be replaced by a real scale, such as the coarse-graining scale $r$ of Equation (\[tq\]). Naturally, $r/r_0 = e^{-t}$, as a generalization of $r/r_0=b^{-N}$ (the continuous limit can be thought of as the limit where $b\ra 1$ and $N \ra \infty$, with $N \ln b$ finite). The use of coarse-grained quantities is necessary to connect with the partial differential equation (Equation (\[PBE\])). This connection is explained in detail in the collective work of researchers in probability theory [@MFcascades; @LiouvilleQG; @Mchaos; @MFcascades0; @log-correl]. Here, we just need to notice the relation of the field $\phi$, with $$\langle \phi(x) \phi(y) \rangle = -\log\left| x-y \right|
$$ for small $\left| x-y \right|$, to the multifractal cascades. This relation comes from the mass density being $$\rho = A \exp[-\phi/T]
$$ in Equation (\[PBE\]), so that the correlations of the density field are power laws and, in particular, Equation (\[tq\]) holds. Of course, this relation alone cannot specify the multifractal properties, given by the function $\tau(q)$. The case of a lognormal distribution was studied by Coles and Jones [@Coles-Jones], although without connection with Equation (\[PBE\]). The lognormal model has a quadratic function $\tau(q)$, whose Legendre transform gives the parabolic multifractal spectrum $f(\a)$ (§6.3.16, [@Harte]).
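For completeness, here is the standard one-line computation behind the power-law statement (a sketch assuming that $\phi$ is a centered Gaussian field with a short-distance regularization, so that $\langle\phi(x)^2\rangle=\sigma^2$ is finite). Using $\langle e^{X}\rangle = e^{\langle X^2\rangle/2}$ for a centered Gaussian $X$,

```latex
\begin{aligned}
\langle \rho(x)\,\rho(y) \rangle
  &= A^2 \left\langle e^{-[\phi(x)+\phi(y)]/T} \right\rangle
   = A^2 \exp\!\left[ \frac{\langle\phi(x)^2\rangle + \langle\phi(y)^2\rangle
       + 2\langle\phi(x)\phi(y)\rangle}{2T^2} \right] \\
  &= A^2\, e^{\sigma^2/T^2}\,
     \exp\!\left[ \frac{\langle\phi(x)\phi(y)\rangle}{T^2} \right]
   = A^2\, e^{\sigma^2/T^2}\, \left| x-y \right|^{-1/T^2},
\end{aligned}
```

a power law with an exponent controlled by the temperature; higher-order correlations work out similarly.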
Experimental and Observational Results {#data}
======================================
Fortunately, nowadays we have good knowledge of the cosmic mass distribution, based on the considerable amount of data from cosmological simulations and from observations of the galaxy distribution. These data are suitable for various statistical analyses and, in particular, for multifractal analyses. Naturally, the highest-quality data come from cosmological $N$-body simulations. There has been a steady increase in the number of bodies $N$ that computers are capable of handling, and state-of-the-art simulations handle billions of particles, thus affording excellent mass resolution. Moreover, $N$-body simulations have a numerical experimentation capability, because one can tune several aspects of the cosmic evolution, such as the initial conditions, the content of baryons in relation to dark matter, etc.
A number of multifractal analyses of the cosmic mass distribution were made years ago. However, the quality of the data then was not sufficient to obtain reliable results. I have carried out multifractal analyses of the mass distribution in several recent $N$-body simulations, which have consistently yielded the same shape of the multifractal spectrum. Probably, the most interesting results to quote are the most recent ones, from the [*Bolshoi*]{} simulation [@AinA; @Bol]. This simulation has very good mass resolution of the cosmic structure: it contains $N=2048^3$ particles in a volume of $(250\,\mathrm{Mpc})^3/h^3$ \[$h$ is the Hubble constant normalized to $100$ km$/$(s Mpc)\]; so that each particle represents a mass of $1.35 \cdot 10^8\,h^{-1} M_\odot$, which is the mass of a small galaxy. Therefore, it is possible to establish convergence of several coarse-grained spectra to a limit function, as the coarse-graining length is shrunk to its minimum value available in the particle distribution. Actually, the shape of the multifractal spectrum is stable along a considerable range of scales, proving the self-similarity of the mass distribution (Figure \[Bolshoi\]). It is to be noticed that the Bolshoi simulation only describes dark matter particles. However, the [*Mare-Nostrum*]{} simulation describes both dark matter and baryon gas particles and gives essentially the same results [@MN].
![(**Left**) The multifractal spectra of the dark matter distribution in the Bolshoi simulation (coarse-graining lengths $l=3.91, 1.95, 0.98, 0.49, 0.24, 0.122, 0.061, 0.031
\;\mathrm{Mpc}/h$). (**Right**) The function $\tau(q)$ (calculated at $l=0.98\;\mathrm{Mpc}/h$), showing that $\tau(0)=-3$, corresponding to nonlacunarity. Nonlacunarity is also patent in the left graph, because it shows that $\mathrm{max}\, f(\a) =3$ (for sufficiently large $l$).[]{data-label="Bolshoi"}](mf-spec_Bolshoi_1.pdf "fig:"){width="7.4cm"} ![(**Left**) The multifractal spectra of the dark matter distribution in the Bolshoi simulation (coarse-graining lengths $l=3.91, 1.95, 0.98, 0.49, 0.24, 0.122, 0.061, 0.031
\;\mathrm{Mpc}/h$). (**Right**) The function $\tau(q)$ (calculated at $l=0.98\;\mathrm{Mpc}/h$), showing that $\tau(0)=-3$, corresponding to nonlacunarity. Nonlacunarity is also patent in the left graph, because it shows that $\mathrm{max}\, f(\a) =3$ (for sufficiently large $l$).[]{data-label="Bolshoi"}](tau-l=2-8_Bolshoi.pdf "fig:"){width="7.4cm"}
Of course, the most realistic results about the cosmic mass distribution must come from observations of the real universe. Even though there are increasingly better observations of the dark matter, the main statistical analyses of the overall mass distribution come from observations of the galaxy distribution. I have carried out a multifractal analysis of the distribution of stellar mass employing the rich Sloan Digital Sky Survey (data release 7) [@I-SDSS]. The stellar mass distribution is representative of the full baryonic matter distribution and is simply obtained from the distribution of galaxy positions by taking into account the stellar masses of the galaxies, which are available for this data release.
We can assert a good concordance between the multifractal geometry of the cosmic structure in cosmological $N$-body simulations and galaxy surveys, to the extent that the available data allow us to test it, that is to say, in the important part of the multifractal spectrum $f(\a)$ up to its maximum (the part such that $q>0$) [@AinA; @I-SDSS]. The other part, such that $q<0$, has $\a>3$ and would give information about voids in the stellar mass distribution, but the resolution of the SDSS data is insufficient in this range. The common features of the multifractal spectrum found in Ref. [@I-SDSS] and visible in Figure \[Bolshoi\] are: (i) a minimum singularity strength $\a_\mathrm{min} = 1$; (ii) a “supercluster set” of dimension $\a=f(\a)\simeq 2.5$ where the mass concentrates; and (iii) $\mathrm{max}\, f(\a) =3$, giving a non-lacunar structure (without totally empty voids). As regards Point (i), it is to be remarked that $\a_\mathrm{min} = 1$, with $f(\a_\mathrm{min}) =0$, lies at the edge of diverging gravitational potential. However, the multifractal spectrum $f(\a)$ prolongs to $f(\a)<0$, giving rise to stronger singularities, which have null probability of appearing in the limit of vanishing coarse-graining length, $l\ra 0$ (a set with negative dimension is [*almost surely*]{} empty). Nevertheless, these strong singularities do appear in any coarse-grained mass distribution and correspond to negative peaks of the gravitational potential $\phi$, which must nevertheless not diverge in the $l \ra 0$ limit. Thus, we seem to have a mass distribution in which the potential can become large (in absolute value) but is finite everywhere (recall the options brought up in Section \[gravity\]).
Given that we now know the multifractal spectrum of the cosmic mass distribution with reasonable accuracy, we can look for the type of random multiplicative cascade that produces such spectrum. This is an appealing task that is left for the future.
Discussion
==========
We show that there is a considerable range of scales in the universe in which scale symmetry is effectively realized, that is to say, the mass distribution is a self-similar multifractal, with identical appearance and properties at any scale. This symmetry is a consequence of the absence of any intrinsic length scale in Newtonian gravitation, which is the theory that rules the mass distribution on scales beyond the size of galaxies but small compared to the Hubble length. Indeed, it is found in the analysis of cosmological $N$-body simulations and in the analysis of the stellar mass distribution (with less precision) that the self-similarity extends from a fraction of a Megaparsec to several Megaparsecs. On larger scales, the multifractal mass distribution shows signs of undergoing the transition to the expected homogeneity of the Friedmann–Lemaître–Robertson–Walker relativistic model of the universe, with the standard cosmological principle.
We describe models that enforce scale symmetry in combination with the other relevant symmetries, namely the translational and rotational symmetries that impose homogeneity and isotropy and that must be understood in a statistical sense, related to Mandelbrot’s conditional cosmological principle. We show that those models are given by the theory of continuous random multiplicative cascades. We show how random multiplicative cascades can be constructed, made continuous, and produce the type of multifractal mass distribution that we need.
Of course, the use of continuous random multiplicative cascades could be regarded as somewhat [*ad hoc*]{}, as they seem unrelated to gravitational physics. However, we have explained the close connection of those models with the partial differential equation that arises from an approximate model of gravitational physics, namely the Poisson–Boltzmann–Emden equation that follows from the assumption of thermodynamic equilibrium. While this assumption may not be fully realized in the cosmological evolution of structure formation, it is a reasonable approach to states of virial equilibrium, that is, to the regime of strong and stable clustering.
We also mention that the early stage of structure formation is approximately described by the adhesion model, which predicts a self-similar cosmic web somewhat different from the result of the random multiplicative cascade related to the Poisson–Boltzmann–Emden equation, as can be perceived in Figure \[devil\]. It seems that both types of structures should be combined, with the web morphology appropriate on the larger scales and the full self-similar multifractal structure, related to the Poisson–Boltzmann–Emden equation, appropriate on smaller scales. This combination should take into account that the matter sheets and the corresponding voids present no problem in Newtonian gravity, whereas the matter filaments and point-like singularities do, and they should be replaced by weaker singularities of power-law type, precisely such as the ones found as simple solutions of the Poisson–Boltzmann–Emden equation, namely, radial or axial singular isothermal distributions. The combined structure achieves a mass distribution without singularities of the gravitational potential.
Since the Poisson–Boltzmann–Emden equation describes halo-like structures (Section \[gravity\]), we can expect that a coarse-grained formulation of the combination proposed above, of singular solutions of the Poisson–Boltzmann–Emden equation with a larger-scale web structure, would be equivalent to the fractal distribution of halos that can be deduced from a coarse-grained multifractal [@fhalos; @I4]. This distribution of halos should lie in web sheets or filaments, as is expected [@CooSh; @Bol].
Finally, we just mention here that the formalism described in this paper can possibly be applied in a very different range of scales and with a different theory of gravity, namely, on the very small scales that constitute the realm of quantum gravity. The relevance of scale symmetry and, more generally, conformal symmetry in theories of quantum gravity is well established; in fact, string theory is essentially based on conformal symmetry. Anyway, this very interesting connection lies beyond the scope of the present work.
[999]{}
Feynman, R. *The Character of Physical Law*; The MIT Press: [Cambridge, MA, USA,]{} 1967.
Weinberg, S. *The Quantum Theory of Fields, Vols. I and II*; Cambridge Univ. Press: New York, NY, USA, 1995.
Blagojević, M.; Hehl, F.W. (Eds.) *Gauge Theories of Gravitation*; World Scientific: [Singapore,]{} 2013.
Kastrup, H. On the Advancements of Conformal Transformations and their Associated Symmetries in Geometry and Theoretical Physics. [*Ann. Phys.*]{} [**2008**]{}, [*17*]{}, 631–690.
Morrison, P. *Powers of Ten: A Book About the Relative Size of Things in the Universe and the Effect of Adding Another Zero*; W.H. Freeman & Co.: [San Francisco, CA, USA,]{} 1985.
Padmanabhan, T. *Structure Formation in the Universe*; Cambridge Univ. Press: New York, NY, USA, 1993.
Mandelbrot, B.B. *The Fractal Geometry of Nature*; W.H. Freeman and Company: [New York, NY, USA,]{} 1983.
de Vaucouleurs, G. The case for a hierarchical cosmology. [*Science*]{} [**1970**]{}, [*167*]{}, 1203.
Peebles, P.J.E. [*The Large-Scale Structure of the Universe*]{}; Princeton University Press: [Princeton, NJ, USA,]{} 1980.
Peebles, P.J.E. *Principles of Physical Cosmology*; Princeton University Press: [Princeton, NJ, USA,]{} 1993.
Coleman, P.H.; Pietronero, L. The fractal structure of the Universe. *Phys. Rep.* [**1992**]{}, [*213*]{}, 311–389.
Borgani, S. Scaling in the Universe. *Phys. Rep.* [**1995**]{}, *251*, 1–152.
Labini, F.S.; Montuori, M.; Pietronero, L. Scale invariance of galaxy clustering. *Phys. Rep.* [**1998**]{}, *293*, 61–226.
Jones, B.J.; Martínez, V.; Saar, E.; Trimble, V. Scaling laws in the distribution of galaxies. *Rev. Mod. Phys.* [**2004**]{}, *76*, 1211–1266.
Gaite, J.; Domínguez, A.; Pérez-Mercader, J. The fractal distribution of galaxies and the transition to homogeneity. *Astrophys. J.* [**1999**]{}, *522*, L5–L8.
Christe, P.; Henkel, M. *Introduction to Conformal Invariance and Its Applications to Critical Phenomena*; Lecture Notes in Physics Monographs; Springer: [Berlin/Heidelberg, Germany,]{} 1993.
Landau, L.D.; Lifshitz, E.M. *Statistical Physics, Part 1*, 3rd ed.; Pergamon Press: [Oxford, UK,]{} 1980.
Chavanis, P.H. Dynamics and thermodynamics of systems with long-range interactions: Interpretation of the different functionals. *AIP Conf. Proc.* [**2008**]{}, *970*, 39.
Teschner, J. Liouville theory revisited. *Class. Quant. Grav.* [**2001**]{}, *18*, R153–R222.
Bavaud, F. Equilibrium properties of the Vlasov functional: The generalized Poisson-Boltzmann-Emden equation. [*Rev. Mod. Phys.*]{} [**1991**]{}, [*63*]{}, 129–150.
de Vega, H.J.; Sánchez, N.; Combes, F. Fractal dimensions and scaling laws in the interstellar medium: field theory approach. [*Phys. Rev. D*]{} [**1996**]{}, [*54*]{}, 6008–6020.
Rhodes, R.; Vargas, V. KPZ formula for log-infinitely divisible multifractal random measures. [*ESAIM Probab. Stat.*]{} [**2011**]{}, [*15*]{}, 358–371.
Duplantier, B.; Sheffield, S. Liouville quantum gravity and KPZ. [*Invent. Math.*]{} [**2011**]{}, [*185*]{}, 333–393.
Rhodes, R.; Vargas, V. Gaussian multiplicative chaos and applications: A review. [*Probab. Surv.*]{} [**2014**]{}, [*11*]{}, 315–392.
Kolmogorov, A.N. A refinement of previous hypotheses concerning the local structure of turbulence in incompressible fluid at high Reynolds number. [*J. Fluid Mech.*]{} [**1962**]{}, [*13*]{}, 82–85.
Frisch, U. [*Turbulence: The Legacy of A.N. Kolmogorov*]{}; Cambridge University Press: Cambridge, UK, 1995.
Mandelbrot, B.B. *Multifractals and $1/f$ Noise*; Springer: [Berlin/Heidelberg, Germany,]{} 1999.
Harte, D. *Multifractals: Theory and Applications*; Chapman & Hall/CRC: Boca Raton, FL, USA, 2001.
Misner, C.W.; Thorne, K.S.; Wheeler, J.A. *Gravitation*; Freeman: New York, NY, USA, 1973.
Polyakov, A.M. *Gauge Fields and Strings*; Harwood Academic Pub.: [Chur, Switzerland,]{} 1987.
Green, M.; Schwarz, J.; Witten, E. *Superstring Theory: Volume 1*; Cambridge University Press: Cambridge, UK, 1987.
’t Hooft, G. Local Conformal Symmetry: The Missing Symmetry Component for Space and Time. [*Int. J. Mod. Phys. D*]{} [**2014**]{}, [*24*]{}, 10.1142.
Shandarin, S.F.; Zel’dovich, Y.B. The large-scale structure of the universe: Turbulence, intermittency, structures in a self-gravitating medium. [*Rev. Mod. Phys.*]{} [**1989**]{}, [*61*]{}, 185–220.
Gurbatov, S.N.; Saichev, A.I.; Shandarin, S.F. Large-scale structure of the Universe. The Zeldovich approximation and the adhesion model. *Phys. Usp.* [**2012**]{}, [*55*]{}, 223–249.
Gaite, J. The Fractal Geometry of the Cosmic Web and its Formation. *Adv. Astron.* [**2019**]{}, *2019*, 6587138.
Monticino M. How to Construct a Random Probability Measure. *Int. Stat. Rev.* [**2001**]{}, *69*, 153–167.
Gurevich, A.V.; Zybin, K.P. Large-scale structure of the Universe: Analytic theory. *Phys. Usp.* **1995**, [*38*]{}, 687–722.
Falconer, K. *Fractal Geometry*; John Wiley and Sons: Chichester, UK, 2003.
Balian, R.; Schaeffer, R. Scale-invariant matter distribution in the universe I. Counts in cells. *Astron. Astrophys.* [**1989**]{}, [*263*]{}, 1–29.
Valdarnini, R.; Borgani, S.; Provenzale, A. Multifractal properties of cosmological $N$-body simulations. *Astrophys. J.* [**1992**]{}, [*394*]{}, 422–441.
Colombi, S.; Bouchet, F.R.; Schaeffer, R. Multifractal analysis of a cold dark matter universe. *Astron. Astrophys.* [**1992**]{}, [*263*]{}, 1.
Yepes, G.; Domínguez-Tenreiro, R.; Couchman, H.P.M. The scaling analysis as a tool to compare $N$-body simulations with observations—Application to a low-bias cold dark matter model. *Astrophys. J.* [**1992**]{}, *401*, 40–48.
Gaite, J. The fractal distribution of haloes. [*Europhys. Lett.*]{} [**2005**]{}, [*71*]{}, 332–338.
Gaite, J. Halos and voids in a multifractal model of cosmic structure. [*Astrophys. J.*]{} [**2007**]{}, [*658*]{}, 11–24.
Vergassola, M.; Dubrulle, B.; Frisch, U.; Noullez, A. Burgers’ equation, Devil’s staircases and the mass distribution for large scale structures. [*Astron. Astrophys.*]{} [**1994**]{}, [*289*]{}, 325–356.
White, S.D.M. The hierarchy of correlation functions and its relation to other measures of galaxy clustering. [*MNRAS*]{} [**1979**]{}, [*186*]{}, 145–154.
Gaite, J. Statistics and geometry of cosmic voids. *JCAP* [**2009**]{}, *11*, 004.
Feder, J. [*Fractals*]{}; Plenum Press: [New York, NY, USA,]{} 1988.
Shohat, J.A.; Tamarkin, J.D. [*The Problem of Moments*]{}; Mathematical Surveys No. 1; American Mathematical Society: New York, NY, USA, 1970.
Weinberg, S. *Gravitation and Cosmology*; John Wiley & Sons: [Hoboken, NJ, USA,]{} 1972.
McVittie, G.C. Condensations in an expanding universe. [*MNRAS*]{} [**1932**]{}, [*92*]{}, 500–518.
Einstein, A.; Straus, E.G. The Influence of the Expansion of Space on the Gravitation Fields Surrounding the Individual Stars. [*Rev. Mod. Phys.*]{} [**1945**]{}, [*17*]{}, 120–124.
Einstein, A.; Straus, E.G. Corrections and Additional Remarks to our Paper: The Influence of the Expansion of Space on the Gravitation Fields Surrounding the Individual Stars. [*Rev. Mod. Phys.*]{} [**1946**]{}, [*18*]{}, 148–149.
Cooray, A.; Sheth, R. Halo models of large scale structure. [*Phys. Rep.*]{} [**2002**]{}, [*372*]{}, 1–129.
Plum, M.; Wieners, C. New solutions of the Gelfand problem. [*J. Math. Anal. AppL.*]{} [**2002**]{}, [*269*]{}, 588–606.
Hubble, E. The Distribution of Extra-Galactic Nebulae. [*Astrophys. J.*]{} [**1934**]{}, [*79*]{}, 8–76.
Zinnecker, H. Star formation by hierarchical cloud fragmentation: A statistical theory of the log-normal initial mass function. [*MNRAS*]{} [**1984**]{}, [*210*]{}, 43–56.
Coles, P.; Jones, B.J. A lognormal model for the cosmological mass distribution. [*MNRAS*]{} [**1991**]{}, [*248*]{}, 1–13.
Falconer, K. The Multifractal Spectrum of Statistically Self-Similar Measures. [*J. Theor. Prob.*]{} [**1994**]{}, [*7*]{}, 681–702.
Gibrat, R. [*Les inégalités économiques; applications: Aux inégalités des richesses, à la concentration des entreprises, aux populations des villes, aux statistiques des familles, etc., d’une loi nouvelle, la loi de l’effect proportionnel*]{}; Recueil Sirey: Paris, France, 1931.
Mandelbrot, B.B. Negative fractal dimensions and multifractals. *Physica A* [**1990**]{}, [*163*]{}, 306–315.
Gaite, J. Fractal analysis of the large-scale stellar mass distribution in the Sloan Digital Sky Survey. [*JCAP*]{} [**2018**]{}, [*07*]{}, 010.
Wilson, K.G.; Kogut, J. The renormalization group and the $\e$ expansion. *Phys. Rep.* [**1974**]{}, [*12C*]{}, 75–200.
Muzy, J.-F.; Bacry, E. Multifractal stationary random measures and multifractal random walks with log infinitely divisible scaling laws. *Phys. Rev. E* [**2002**]{}, [*66*]{}, 056121.
Rhodes, R.; Vargas, V. Multidimensional multifractal random measures. [*Electron. J. Probab.*]{} [**2010**]{}, [*15*]{}, 241–258.
Duplantier, B.; Rhodes, R.; Sheffield, S.; Vargas, V. Log-correlated Gaussian fields: An overview. In [*Geometry, Analysis and Probability*]{}; Progress in Mathematics; Bost, J.-B., Hofer, H., Labourie, F., Le Jan, Y., Ma, X., , Eds.; Birkhäuser: Cham, Switzerland, 2017; Volume 310, pp 191–216.
Gaite, J. Smooth halos in the cosmic web. *JCAP* [**2015**]{}, *04*, 020.
Gaite, J. Fractal analysis of the dark matter and gas distributions in the Mare-Nostrum universe. [*J. Cosmol. Astropart. Phys.*]{} [*JCAP*]{} [**2010**]{}, [*3*]{}, 006.
|
---
abstract: 'We review past and present results on the non-local form-factors of the effective action of semiclassical gravity in two and four dimensions, computed by means of a covariant expansion of the heat kernel up to second order in the curvatures. We discuss the importance of these form-factors in the construction of mass-dependent beta functions for Newton’s constant and the other gravitational couplings.'
author:
- 'Sebasti[á]{}n A. Franchino-Vi[ñ]{}as'
- 'Tib[é]{}rio de Paula Netto'
- Omar Zanusso
title: |
Vacuum effective actions and mass-dependent\
renormalization in curved space[^1]
---
Introduction {#sect:introduction}
============
The Appelquist-Carazzone theorem implies that quantum effects induced by the integration of a massive particle are suppressed when studied at energies smaller than a threshold set by the particle’s mass [@AC]. The suppression mechanism has been well understood both quantitatively and qualitatively in flat space. From a renormalization group (RG) perspective it is convenient to adopt a mass-dependent renormalization scheme, which shows that the running of couplings that are induced by the integration of massive fields is suppressed below the mass threshold. Extensions of the above statements to curved space have been developed only more recently because of the additional difficulties in preserving covariance. In curved space it is convenient to compute the vacuum effective action, also known as the semiclassical action, which is the effective metric action induced by the integration of matter fields. If the effective action is computed correctly, the decoupling mechanism can be studied covariantly through the use of opportune form-factors among the curvatures. These form-factors are in fact covariant functions of the Laplacian, both in two- [@Ribeiro:2018pyo] and four- [@apco; @fervi; @BuGui; @Franchino-Vinas:2018gzr] dimensional curved space.
The simplest way to compute the necessary form-factors and maintain covariance is through the use of the heat kernel expansion [@bavi85]. For our purposes it is convenient to adopt a curvature expansion, which resums the covariant derivatives acting on the curvatures into the non-local form-factors [@bavi87; @bavi90]. More precisely, it proves essential to use a heat kernel expansion which resums the total derivative terms constructed from an arbitrary power of the Laplacian acting on a single curvature scalar $R$ [@Codello:2012kq]. This paper reviews the recent developments on the use of these boundary terms to investigate the decoupling of Newton’s constant [@Ribeiro:2018pyo; @Franchino-Vinas:2018gzr]. We believe that these developments might be useful in the broader context of constructing non-local effective actions with phenomenological implications. Among these we include the anomaly induced inflation models [@susykey; @Shocom; @asta], even though they are not sufficient for deriving Starobinsky’s inflation purely from quantum corrections [@star; @star83]. Our results might pave the way to the construction of a field theoretical model [@StabInstab]. More generally, renormalization-group-running Newton’s and cosmological constants could have measurable implications in both cosmology [@CC-Gruni] and astrophysics [@RotCurves]. For this purpose, runnings developed using spacetimes of non-zero constant curvature are a first step [@DCCrun; @Verd], which have to be reconciled with the same runnings that are obtained in the modified minimal subtraction ($\overline{\rm MS}$) scheme [@nelpan82; @buch84; @book].
Focussing our attention on phenomenologically interesting effective actions, it is important to mention that non-local actions are promising candidates to describe dark energy [@Maggiore; @CC-Gruni; @DCCrun; @Codello:2015pga], as well as satisfying templates to reconstruct the effective action induced by dynamical triangulations or asymptotic safety [@Knorr:2018kog]. The applications might even extend to Galileon models, especially if promoted to their covariant counterparts [@Codello:2012dx; @Brouzakis:2013lla] with form-factors that act also on extrinsic curvatures [@Codello:2011yf]. The most recent results on the renormalization of Newton’s constant in a massive scheme point to the necessity of connecting the renormalization of the operators $R$, $\Box R$ and $R^2$ [@Ribeiro:2018pyo; @Franchino-Vinas:2018gzr], and suggest that the couplings could be generalized to $\Box$-dependent functions, a fact which is reminiscent of previous analyses by Avramidi [@Avramidi:2007zz] and by Hamber and Toriumi [@Hamber:2010an; @Hamber:2011kc]. In this respect, the relations among the non-local form-factors of the above terms in the semiclassical theory have already been emphasized in [@anom2003].
This paper reviews the recent results on the mass-dependent renormalization of Newton’s constant induced by the integration of massive matter fields in two [@Ribeiro:2018pyo] and four [@Franchino-Vinas:2018gzr] dimensions, complementing the latter with results that previously appeared in [@apco; @fervi; @BuGui]. The outline of this review is as follows: In section \[sect:mass-dependent-schemes\] we briefly describe the decoupling of the electron’s loops in electrodynamics and connect it with the computation of the QED semiclassical action. In section \[sect:effective-action\] we introduce the covariant representation of the effective action that underlies this work. In sections \[sect:nonlocal-two\] and \[sect:nonlocal-four\] we apply our formalism to two- and four-dimensional curved space respectively. We concentrate on scalar, Dirac and Proca fields in both cases. In section \[sect:uv-structure\] we briefly describe the general structure of the effective action and make some general statements on its ultraviolet structure. In section \[sect:scheme\] we speculate that our formalism could have untapped potential for expressing results of the asymptotic safety conjecture [@Reuter:1996cp; @books] by making the case for scheme independence. The appendices \[sect:heat-kernel\] and \[sect:further\] contain mathematical details on the heat kernel and on the geometrical curvatures that would have otherwise burdened the main text.
Mass-dependent schemes {#sect:mass-dependent-schemes}
======================
In this section we outline our strategy to find explicit predictions of the Appelquist-Carazzone theorem in the simpler setting of quantum electrodynamics (QED) in flat space. In particular, we take this opportunity to bridge the gap between the more traditional approach and a fully covariant method. We begin by considering the regulated one-loop vacuum polarization tensor of QED in $d=4-\epsilon$ dimensions $$\label{eq:vacuum-polarization}
\begin{split}
\frac{e^2}{2\pi^2}\left(q^2 g_{\mu\nu}-q_\mu q_\nu\right)\left[ -\frac{1}{3\bar{\epsilon}} + \int_0^1{\rm d}\alpha \, \alpha(1-\alpha) \ln\left(\frac{m^2+\alpha(1-\alpha)q^2}{m^2}\right)\right]\,,
\end{split}$$ in which $q_\mu$ is the momentum of the external photon lines and $m^2$ is the square mass of the electron that is integrated in the loop. In the modified minimal subtraction scheme ($\overline{\rm MS}$) one subtracts the contribution proportional to $\frac{1}{\bar{\epsilon}}$ which includes the dimensional pole as well as some finite terms $$\begin{split}
\frac{1}{\bar{\epsilon}}
&
= \frac{1}{\ep} + \frac{1}{2}\ln \left(\frac{4\pi\mu^2}{m^2}\right)
- \frac{\ga}{2}
\end{split}$$ ($\ga\simeq 0.577$ is Euler’s constant), so that the resulting finite polarization is $$\begin{split}
\frac{e^2}{2\pi^2}\left(q^2 g_{\mu\nu}-q_\mu q_\nu \right)\int_0^1{\rm d}\alpha \, \alpha(1-\alpha) \ln\left(\frac{m^2+\alpha(1-\alpha)q^2}{m^2}\right)\,.
\end{split}$$ Customarily, the regularization procedure introduces a scale $\mu$ and the dependence of the renormalized constant $e(\mu)$ on this scale is encoded in the beta function $$\begin{split}
\beta^{\overline{\rm MS}}_{e}
= \frac{e^3}{12\pi^2}\,,
\end{split}$$ which comes essentially from the coefficient of the subtracted pole times $\frac{e}{2}$ [@BuGui]. Notice that we labelled the beta function with $\overline{\rm MS}$ so that it is clear that we used the modified minimal subtraction scheme to compute it.
An alternative to the $\overline{\rm MS}$ scheme would use some other scale to subtract the divergence; this choice generally results in a mass-dependent scheme if the new scale is not $\mu$. If we choose as new scale $q=\left|q_\mu\right|$, a different beta function can be computed by acting on the term between the brackets in (\[eq:vacuum-polarization\]) with $\frac{e}{2}q\partial_q$ [@apco], resulting in $$\label{eq:beta-qed}
\begin{split}
\beta_{e}
= \frac{e^3}{2\pi^2}\int_0^1{\rm d}\alpha \, \alpha(1-\alpha) \frac{\alpha(1-\alpha)q^2}{m^2+\alpha(1-\alpha)q^2}\,.
\end{split}$$ The new beta function explicitly depends on the mass of the electron, besides the scale $q$, thus allowing us to distinguish the following two limits $$\begin{split}
\beta_{e} & \simeq
\begin{cases}
\frac{e^3}{12\pi^2} & \qquad {\rm for} \quad q^2 \gg m^2 \,;\\
\frac{e^3}{60\pi^2} \frac{q^2}{m^2} & \qquad {\rm for} \quad q^2 \ll m^2\,.
\end{cases}
\end{split}$$ The physical interpretation of the above results goes as follows: in the ultraviolet, which corresponds to energies $q^2$ much bigger than the electron’s mass, the beta function coincides with its $\overline{\rm MS}$ counterpart which is a universal result at high energies.[^2] Instead in the infrared, which corresponds to energies $q^2$ smaller than the electron’s mass, the electron in the loop hits the mass threshold and effectively stops propagating. This results in a contribution to the renormalization group (RG) that goes to zero quadratically with the energy $q$. This latter effect is predicted in general terms by the Appelquist-Carazzone theorem and can be observed in any quantum field theoretical computation that involves massive particles propagating in the loops.
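The two limits quoted above can be verified directly from the Feynman-parameter integral defining the mass-dependent beta function. The following is a minimal numerical sketch (the function name, the choice $e=1$ and the simple midpoint quadrature are ours, not part of the original computation):

```python
import math

def beta_e(q2, m2, e=1.0, n=2000):
    """Mass-dependent QED beta function: midpoint rule on the alpha integral."""
    total = 0.0
    for i in range(n):
        alpha = (i + 0.5) / n
        w = alpha * (1.0 - alpha)                 # alpha(1 - alpha)
        total += w * (w * q2) / (m2 + w * q2)
    return e**3 / (2.0 * math.pi**2) * total / n

# UV limit q^2 >> m^2: recovers the MS-bar value e^3/(12 pi^2)
print(beta_e(1e8, 1.0) * 12.0 * math.pi**2)       # close to 1

# IR limit q^2 << m^2: quadratic damping, e^3 q^2/(60 pi^2 m^2)
print(beta_e(1e-6, 1.0) * 60.0 * math.pi**2 / 1e-6)  # close to 1
```

The quadratic suppression seen in the second ratio is precisely the Appelquist-Carazzone decoupling discussed in the text.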
As anticipated, in this contribution we generalize similar results to several types of massive fields in two- and four-dimensional curved spacetimes. In dealing with curved space it is convenient to have results that are always manifestly covariant [@Donoghue:2017pgk]. In order to achieve manifest covariance we are going to present an effective-action-based computation which can be done using the heat kernel methods described in appendix \[sect:heat-kernel\], and illustrate how the above results are derived from a covariant effective action. Using non-local heat kernel methods one finds that the renormalized contributions to the vacuum effective action of QED become $$\begin{split}
\Gamma[A] =
\frac{1}{4}\int {\rm d}^4x\, F_{\mu\nu}F^{\mu\nu}
- \frac{e^2}{8\pi^2} \int {\rm d}^4x \, F_{\mu\nu}\left\{
\int_0^1 {\rm d}\alpha \, \alpha(1-\alpha)\ln \left(\frac{m^2+\alpha(1-\alpha) \Delta}{4\pi\mu^2}\right)
\right\} F^{\mu\nu}\,,
\end{split}$$ in which $\Delta=-\partial_x^2$ is the Laplacian operator in flat space and $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ is the Abelian curvature tensor [@Codello:2015oqa]. It should be clear that the non-local form-factor appearing between the two copies of $F_{\mu\nu}$ is a covariant way of writing the renormalized vacuum polarization, in which the momentum scale $q^2$ comes from the Fourier transform of the differential operator $\Delta$.
Using this latter observation, one could proceed to the computation of the mass-dependent beta function by “undoing” the covariantization and by extracting the form-factor to obtain (\[eq:beta-qed\]). In practical computations we replace $\Delta$ with the square of the new reference scale $q^2$ and apply the derivatives with respect to $q$ as outlined before [@Goncalves:2009sk], thus following closely the steps that lead to (\[eq:beta-qed\]). This latter strategy of identifying the relevant scale with the covariant Laplacians of the effective action’s form-factors can be easily applied to curved space, in which there are more curvature tensors besides $F_{\mu\nu}$ and therefore more couplings, and it will prove fundamental for the rest of this review.
Heat kernel representation of the effective action in curved space {#sect:effective-action}
==================================================================
We now concentrate our attention on a $D$-dimensional spacetime in which the dimensionality can be either $D=2$ or $D=4$. We assume that the spacetime is equipped with a classical torsionless Euclidean metric $g_{\mu\nu}$, which for practical purposes can be assumed to come from the Wick rotation of a Lorentzian metric. Our task is to compute the vacuum effective actions for the classical metric induced by the integration of massive matter fields. If we limit our interest to fields of spin up to one, we must consider scalars, spinors and vectors, which is why we consider the following bare actions $$\label{eq:bare-actions}
\begin{split}
S_{\rm s}[\varphi] &= \frac{1}{2}\int {\rm d}^Dx\sqrt{g} \left( g^{\mu\nu}\partial_\mu\varphi\partial_\nu\varphi + m_{\rm s}^2\varphi^2+ \xi \varphi^2 R \right) \\
S_{\rm f}[\psi] &= \int {\rm d}^Dx \sqrt{g}\, \overline{\psi}\left(\slashed D+m_{\rm f} \right)\psi \\
S_{\rm p}[A] &= \int {\rm d}^Dx\sqrt{g} \left(-\frac{1}{4} F_{\mu\nu} F^{\mu\nu} + \frac{m_{\rm v}^2}{2} A_\mu A^\mu\right)
\end{split}$$ in which we defined $\slashed D = \gamma^a e^\mu{}_a D_\mu$, $D_\mu = \partial_\mu + S_\mu$ with $S_\mu$ the spin-$\frac{1}{2}$ connection, $F_{\mu\nu} = \nabla_\mu A_\nu -\nabla_\nu A_\mu$ and $R$ is the scalar curvature. The action $S_{\rm s}[\varphi]$ represents a non-minimally coupled free massive scalar field, while $S_{\rm f}[\psi]$ and $S_{\rm p}[A]$ represent minimally coupled massive Dirac spinors and massive Proca vectors respectively.
Given that the matter fields are quadratic, the one-loop effective action corresponds to the full integration of the path-integral and captures a physical situation in which the matter interactions are weak. If we have $n_{\rm s}$ scalars, $n_{\rm f}$ Dirac spinors and $n_{\rm p}$ Proca vectors of equal masses per spin, the full effective action is additive in its sub-parts $$\Gamma[g] = n_{\rm s}\, \Gamma_{\rm s}[g] + n_{\rm f}\, \Gamma_{\rm f}[g] + n_{\rm p}\, \Gamma_{\rm p}[g]\,,$$ in which the single contributions can be easily obtained from a standard path-integral analysis $$\label{eq:functional-traces}
\begin{split}
&
\Gamma_{\rm s}[g]
= \frac{1}{2} \Tr_{\rm s} \ln \left( \D + \xi R +m_{\rm s}^2\right) ,
\\
& \Gamma_{\rm f}[g] = -\Tr_{\rm f} \ln \left(\slashed D+m_{\rm f}\right) ,
\\
&
\Gamma_{\rm p}[g] = \frac{1}{2}\Tr_{\rm v} \ln\left( \delta_\mu^\nu \D
+\nabla_\mu\nabla^\nu + R_\mu{}^\nu + \delta_\mu^\nu m_{\rm v}^2\right) ,
\end{split}$$ and we defined the curved space Laplace operator $\D= - \nabla^2 = -\nabla_\mu\nabla^\mu= -g_{\mu\nu}\nabla^\mu\nabla^\nu$.
One notices that $\Gamma_{\rm s}[g]$ is a functional trace of an operator of Laplace-type, and therefore can be dealt with using standard heat kernel methods. The same is not true for the other two traces, but it is a well-known fact that we can manipulate them to recover a Laplace-type operator. For the Dirac fields it is sufficient to recall that the square $\left({\rm i}\slashed D\right)^2=\D + \frac{R}{4}$, which implies $$\begin{split}
\Gamma_{\rm f}[g] &= -\frac{1}{2} \Tr_{\rm f} \ln\left[\left(\slashed D+m_{\rm f}\right)^2\right]
=-\frac{1}{2} \Tr_{\rm f} \ln \left( \D + \frac{R}{4} + m_{\rm f}^2\right) \,,
\end{split}$$ if we assume a positive bounded spectrum for the Dirac operator. A more involved manipulation can be done to the Proca’s functional trace [@bavi85; @Ruf:2018vzq] and it results in $$\begin{split}
\Gamma_{\rm p}[g] &= \frac{1}{2} \Tr_{\rm v} \ln\left( \D+{\rm Ric}+m_{\rm v}^2\right) - \frac{1}{2} \Tr_{\rm s} \ln\left(\D +m_{\rm v}^2\right)\,.
\end{split}$$ The physical interpretation of the above difference is that a Proca field can be understood as a vector degree of freedom which is integrated in the first trace, minus one single scalar ghost which is integrated in the second trace, for a total of one degree of freedom in $D=2$ and three degrees of freedom in $D=4$. Recall now that the functional trace of a Maxwell’s $U(1)$ gauge field, which naively could be understood as a massless Proca vector, includes the subtraction of *two* ghost degrees of freedom, which is one more than the Proca’s. This shows that the naive limit $m_{\rm v}\to 0$ does not actually recover a Maxwell field, but rather it is discontinuous.
A simple glance at all the above traces shows that, modulo overall constants, we are generally interested in functional traces of Laplace-type operators in the form $$\label{eq:effective-action-trlog-generic}
\begin{split}
\Ga[g] &= \frac{1}{2} \Tr \ln \left(\D +E +m^2\right)\,,
\end{split}$$ in which we trace over the opportune degrees of freedom. The general endomorphism $E=E(x)$ acts on the field’s bundle and it is assumed to be arbitrary, so that by taking the opportune form we obtain the result of either of the above traces. Let us collectively denote the general Laplace-type operator ${\cal O}= \D+ E$ and its heat kernel ${\cal H}_D(s;x,x')$, in which we keep the subscript $D$ as a reminder of the spacetime dimension for later use. Following appendix \[sect:heat-kernel\] we use the heat kernel to represent as $$\begin{split} \label{eq:effective-action-divergent}
\Ga[g] &= -\frac{1}{2} {\rm tr} \int_0^\infty \frac{{\rm d}s}{s} \int{\rm d}^Dx ~ {\rm e}^{-sm^2} {\cal H}_D(s;x,x)\,,
\end{split}$$ in which the bi-tensor ${\cal H}_D(s;x,y)$ is the solution of the heat kernel evolution equation in $D$-dimensions. The effective action is generally an ultraviolet divergent functional: divergences appear as poles in the integration over the $s$ variable at $s=0$, because $s$ is conjugate to the square of a momentum. The leading power of the heat kernel is $s^{-D/2}$ and, after expanding in powers of $s$, one expects a finite number of poles for the first few terms of this expansion. In particular, the first two terms will contain divergences for $D=2$, or the first three for $D=4$ (see also below).
We regularize divergences by analytic continuation of the dimensionality to $d=D-\epsilon$. Since in curved space the dimensionality can appear in a multitude of ways (such as $g_\mu{}^\mu$) we have to be careful in our choice for the analytic continuation. We choose to continue only the leading power of the heat kernel, thus promoting ${\cal H}_D(s;x,x)\to{\cal H}_d(s;x,x)$, while at the same time keeping all geometrical objects in $D$ dimensions (implying, for example, that $g_\mu{}^\mu=D$ and *not* $g_\mu{}^\mu=d$). This choice is probably the simplest that one can make, but we should stress that any other choice differs from this one by finite terms which do not change the predictions of the renormalized effective action. After our continuation to $d$ dimensions the trace becomes $$\begin{split} \label{eq:effective-action-regularized}
\Ga[g] &= -\frac{\mu^{\epsilon}}{2} \tr \int_0^\infty \frac{{\rm d}s}{s} \int{\rm d}^Dx ~ {\rm e}^{-sm^2} {\cal H}_d(s;x,x)\,,
\end{split}$$ in which we have also introduced a reference scale $\mu$ to preserve the mass dimension of all quantities when leaving $D$ dimensions, and the label $d$ of the heat kernel is a reminder of the continuation $s^{-D/2}\to s^{-d/2}$ [@bro-cass].
Before concluding this section we find it convenient to introduce some further definitions. When studying the renormalization group it is sometimes useful to consider dimensionless variables. At our disposal we have the renormalization group scale $q$, which is related to the Laplacian through $\D \leftrightarrow q^2$ as discussed in section \[sect:mass-dependent-schemes\], and a mass $m$ which collectively denotes the species’ masses introduced before. For us it is natural to give every dimensionful quantity in units of the mass $m$, which leads to the following dimensionless operators $$\label{eq:dimensionless-operators}
z = \frac{\D}{m^2}\,,
\qquad
a = \sqrt{\frac{4z}{4+z}}\,,
\qquad
Y = 1-\frac{1}{a} \ln\left|{\frac{1+a/2}{1-a/2}}\right|\,.$$ We will also denote by $\hat{q}^2=q^2/m^2$ the dimensionless RG scale (the RG scale in units of the mass), which is related to $z \leftrightarrow \hat{q}^2$ according to the discussion of section \[sect:mass-dependent-schemes\]. We will not adopt further symbols for the operators $a$ and $Y$ after the identification, which means that from the point of view of the RG they will be functions of the ratio $\hat{q}^2=\frac{q^2}{m^2}$ and therefore change as a function of the energy.
Renormalized action in two dimensions {#sect:nonlocal-two}
=====================================
In $D=2$ the only independent curvature tensor is the Ricci scalar $R$ if there are no further gauge connections. We therefore choose to parametrize the most general form that a regularized effective action can take as $$\label{eq:effective-action-2d}
\begin{split}
\Ga[g] &=
\Ga_{\rm loc}[g]
+ \frac{1}{4\pi}\int {\rm d}^2 x \sqrt{g}\, \cB(z) R
- \frac{1}{96\pi}\int {\rm d}^2 x \sqrt{g}\,
R\, \frac{\cC(z)}{\D} \, R\,.
\end{split}$$ The part $\Ga_{\rm loc}[g]$ is a local function of the curvatures and as such contains the divergent contributions, which require the renormalization of both the zero-point energy and the coefficient of the scalar curvature. These two divergences correspond to the leading $s^{-d/2}$ and subleading $s^{-d/2+1}$ (logarithmic) powers of the expansion of the heat kernel. Starting from the terms that are quadratic in the scalar curvature, the parametric $s$ integration becomes finite.
The dimensional divergences that appear in $\Ga_{\rm loc}[g]$ can be renormalized by opportunely choosing two counterterms up to the first order in the curvatures. Consequently, after the subtraction of the divergences, the local part of the renormalized action contains $$\begin{split}
S_{\rm ren}[g]
&= \int {\rm d}^2 x \sqrt{g}\,\left\{b_0 + b_1 R\right\}
\end{split}$$ in which the couplings $b_0$ and $b_1$ are related to the two-dimensional cosmological and Newton’s constants. A popular parametrization of the Einstein-Hilbert action in two dimensions is $b_0=\Lambda$ and $b_1=-G^{-1}$, in which $\Lambda$ and $G$ are the two-dimensional cosmological and Newton’s constants respectively. The $\overline{\rm MS}$ procedure generates perturbative beta functions for the renormalized couplings which we denote with $\beta^{\overline{\rm MS}}_{b_0}$ and $\beta^{\overline{\rm MS}}_{b_1}$ and which depend on the specific matter content.
The non-local part of is also very interesting for our discussion. If the critical theory is conformally invariant, then we know that it contains the pseudo-local Polyakov action $$\label{eq:polyakov-action}
\begin{split}
S_{\rm P}[g]
&= - \frac{c}{96\pi}\int {\rm d}^2 x \sqrt{g}\,
R\, \frac{1}{\D} \, R\,,
\end{split}$$ in which we introduced the central charge of the conformal theory $c$ [@Barvinsky:2004he]. The Polyakov action accounts for the violations of the conformal symmetry from the measure of the path integral at the quantum level [@Codello:2014wfa]. The central charge counts the number of degrees of freedom of the model and it is generally understood as a property of the fixed points of the renormalization group, which in general means that $c=c(g^*)={\rm const.}$ for $g^*$ some fixed point coupling(s).
Since the Polyakov action is not required for the subtraction of any divergence we could deduce that the ${\overline{\rm MS}}$ scheme does not generate a flow for the central charge, or alternatively $\beta^{\overline{\rm MS}}_{c}=0$. This latter property is in apparent contradiction with Zamolodchikov’s theorem, which states that $\Delta c\leq 0$ along the flow, but the contradiction is qualitatively resolved by understanding that the ${\overline{\rm MS}}$ scheme captures only the far ultraviolet of the RG flow. A comparison of (\[eq:effective-action-2d\]) with (\[eq:polyakov-action\]) suggests the interpretation of the function $\cC(z)$ as an RG-running central charge in our massive scheme, recalling that $z$ is the square of our RG scale in units of the mass.
Our framework makes a quantitative connection with Zamolodchikov’s theorem: the non-local part of the effective action is parametrized by the functions $\cB(z)$ and $\cC(z)$, which are both dimensionless functions of the dimensionless argument $z$. Simple intuition allows us to interpret $\cB(z)$ as a non-local generalization of Newton’s constant, while we suggest to interpret $\cC(z)$ as a generalization of the central charge under the correct conditions (see below). In all applications below we observe that $\Delta \cC \leq 0$ for flows connecting known conformal theories, in agreement with the theorem [@Zamolodchikov:1986gt].
As discussed in section \[sect:mass-dependent-schemes\], we introduce the momentum scale $q$ and its dimensionless counterpart $\hat{q}=q/m$. Setting the momentum scale from $z=\hat{q}^2$ and interpreting the coefficient of $R$ as a scale dependent coupling we define the non-local beta function of $b_1$ $$\begin{split}
\beta_{b_1} &= q\frac{\partial}{\partial q} \frac{\cB(z)}{4\pi}= \hat{q}\frac{\partial}{\partial \hat{q}} \frac{\cB(z)}{4\pi} = \frac{z}{2\pi} \cB'(z)\,,
\end{split}$$ in which we used a prime to indicate a derivative with respect to the argument. Analogously we push the interpretation of the derivative of $\cC(z)$ as a running central charge $$\label{eq:running-central-charge}
\begin{split}
\beta_c &= q\frac{\partial}{\partial q} \cC(z) = 2 z \,\cC'(z)\,.
\end{split}$$ Again we stress that the corresponding variation $\Delta\cC$ is expected to be negative from the UV to the IR for trajectories connecting two conformal field theories, to comply with Zamolodchikov’s theorem.
In agreement with general arguments, we see that the UV limits of the non-local beta functions reproduce the standard ${\overline{\rm MS}}$ results. Specifically we have that the running of $b_1$ reproduces the $\overline{\rm MS}$ result at high energies $$\begin{split}
\beta_{b_1} &= \beta^{\overline{\rm MS}}_{b_1} + {\cal O}\left(\frac{m^2}{q^2}\right) \qquad {\rm for}\quad q^2\gg m^2\,.
\end{split}$$ We also see that the non-local generalization of the central charge is related to the central charge itself in the same limit $$\begin{split}
\cC(z) = c + {\cal O}\left(\frac{m^2}{q^2}\right) \qquad {\rm for}\quad q^2\gg m^2\,.
\end{split}$$ This latter property seems to be always true if $c$ is interpreted as the number of degrees of freedom of the theory. In particular it is true for the case of the Proca field, which, unlike the massless minimally coupled scalar or the massless Dirac field, is not conformally invariant. We will see in the next sections that $c=1$ for scalars with $\xi=0$, $c=1/2$ for spinors, and $c=1$ for Proca fields in two dimensions. All the explicit expressions for the functions $\cB(z)$, $\cC(z)$ and their derivatives are given in the next three subsections.
Non-minimally coupled scalar field in two dimensions {#sect:scalar2d}
----------------------------------------------------
We now give all the terms needed for the scalar field trace appearing in (\[eq:functional-traces\]) in $D=2$. As a template to assemble all terms we refer to (\[eq:effective-action-2d\]). The local part of the effective action is $$\label{eq:effective-action-2d-scalar-local}
\begin{split}
\Gamma_{\rm loc}[g] &=
\frac{1}{4\pi}\int {\rm d}^2 x \sqrt{g}\, \Bigl\{
\left(\frac{1}{\bar{\epsilon}}+\frac{1}{2}\right)m^2
+\left(\xi-\frac{1}{6}\right)\frac{1}{\bar{\epsilon}} R
\Bigr\}\,,
\end{split}$$ which has poles in both terms as expected. The non-local part of (\[eq:effective-action-2d\]) is captured by the functions $$\begin{split}
\cB(z)
&= \frac{1}{36}+\left(\xi-\frac{1}{4}\right)Y+\frac{Y}{3a^2}
\\
\cC(z)
&=
-\frac{1}{2}-\frac{6Y}{a^2}-12\left(\xi-\frac{1}{4}\right)Y+6\left(\xi-\frac{1}{4}\right)^2(1-Y)\,,
\end{split}$$ in which we use the notation (\[eq:dimensionless-operators\]). From the non-local functions we can derive the mass-dependent beta function $$\begin{split}
\beta_{b_1} &= \frac{z}{2\pi} \cB'(z)
= \frac{1}{2\pi}\Bigl\{
-\frac{1}{24}-\frac{Y}{2a^2}-\frac{1}{2}\left(\xi-\frac{1}{2}\right)Y-\frac{1}{8}\left(\xi-\frac{1}{4}\right)(1-Y)a^2
\Bigr\}\,.
\end{split}$$ The beta function in the mass-dependent scheme displays two limits $$\begin{split}
\beta_{b_1} &= \begin{cases}
\frac{1}{4\pi}\left(\frac{1}{6}-\xi\right) +{\cal O}\left(\frac{m^2}{q^2}\right) & \qquad {\rm for} \quad q^2 \gg m^2 \,; \\
\frac{1}{24\pi}\left(\frac{1}{5}-\xi \right) \frac{q^2}{m^2} +{\cal O}\left(\frac{q^2}{m^2}\right)^2 & \qquad {\rm for} \quad q^2 \ll m^2\,.
\end{cases}
\end{split}$$ The low energy limit shows a realization of the Appelquist-Carazzone theorem, for which Newton’s constant stops running below the threshold determined by the mass, with a quadratic damping factor. The high energy limit shows instead that $\beta_{b_1}$ reduces to minus the coefficient of the divergent $R$ term in (\[eq:effective-action-2d-scalar-local\]) and thus to its $\overline{\rm MS}$ counterpart. One can explicitly check that $\beta_{c}$ defined as in (\[eq:running-central-charge\]) is positive as a function of $z$ if $\xi=0$, meaning that $\Delta \cC\leq 0$ from the UV to the IR. For practical purposes we are interested in $$\begin{split}
\cC(z)&= \begin{cases}
1 -12 \xi +12 \xi^2 \ln \left(\frac{q^2}{m^2}\right) +{\cal O}\left(\frac{m^2}{q^2}\right) & \qquad {\rm for} \quad q^2 \gg m^2 \,; \\
0 +{\cal O}\left(\frac{q^2}{m^2}\right) & \qquad {\rm for} \quad q^2 \ll m^2\,.
\end{cases}
\end{split}$$ Notice in particular that $\cC(\infty)=1$ for $\xi=0$, which is the central charge of a single minimally coupled free scalar and is expected from the general result $\Delta \cC = c_{\rm UV} - c_{\rm IR}=1$ under the normalization $c_{\rm IR}=1$. The interpretation of this result is that for $\xi=0$ the RG trajectory connects a theory with $c=1$ with the massive theory with $c=0$ that lives in the infrared.
Dirac field in two dimensions {#sect:dirac2d}
-----------------------------
Here we report all the terms needed for the Dirac field trace appearing in (\[eq:functional-traces\]) in $D=2$. The template is again (\[eq:effective-action-2d\]) and we denote by $d_\gamma$ the dimensionality of the Clifford algebra, which factors in front of all formulas (see also the discussion at the end of appendix \[sect:further\]). The local part of the effective action is $$\begin{split}
\Gamma_{\rm loc}[g] &=
\frac{d_\gamma}{4\pi}\int {\rm d}^2 x \sqrt{g}\, \Bigl\{
-\left(\frac{1}{\bar{\epsilon}}+\frac{1}{2}\right)m^2
-\frac{1}{12}\frac{1}{\bar{\epsilon}} R
\Bigr\}\,,
\end{split}$$ which has poles in both terms as expected. The non-local part of (\[eq:effective-action-2d\]) is captured by the functions $$\begin{split}
\cB(z)
= d_\gamma\left\{-\frac{1}{36}-\frac{Y}{3a^2}\right\}\,,
&\qquad
\cC(z)
=
d_\gamma\left\{\frac{1}{2}-\frac{3}{2} Y + \frac{6Y}{a^2}\right\}\,.
\end{split}$$ From the first non-local function we can derive the mass-dependent beta function $$\begin{split}
\beta_{b_1}
&= \frac{d_\gamma}{2\pi}\left\{\frac{1}{24}-\frac{Y}{8} + \frac{Y}{2a^2}\right\}
\end{split}$$ which displays two limits $$\begin{split}
\beta_{b_1} &= \begin{cases}
\frac{d_\gamma}{24\pi}\frac{1}{2} +{\cal O}\left(\frac{m^2}{q^2}\right) & \qquad {\rm for} \quad q^2 \gg m^2 \,; \\
\frac{d_\gamma}{24\pi} \frac{1}{20} \frac{q^2}{m^2} +{\cal O}\left(\frac{q^2}{m^2}\right)^{\frac{3}{2}} & \qquad {\rm for} \quad q^2 \ll m^2\,.
\end{cases}
\end{split}$$ As in the scalar case, the generalization of the central charge is monotonically decreasing, starting from the UV value $$\begin{split}
\cC(z)&= \frac{d_\gamma}{2} +{\cal O}\left(\frac{m^2}{q^2}\right) \qquad {\rm for} \quad q^2 \gg m^2\,.
\end{split}$$ This agrees with the fact that $c=\frac{1}{2}$ is the expected central charge of a single fermionic degree of freedom in $D=2$.
Proca field in two dimensions {#sect:proca2d}
-----------------------------
Finally we report all the terms needed for the Proca field trace appearing in in $D=2$ to be used in conjunction with . The local part of the effective action is $$\begin{split}
\Gamma_{\rm loc}[g] &=
\frac{1}{4\pi}\int {\rm d}^2 x \sqrt{g}\, \Bigl\{
\left(\frac{1}{\bar{\epsilon}}+\frac{1}{2}\right)m^2
+\frac{5}{6}\frac{1}{\bar{\epsilon}} R
\Bigr\}\,.
\end{split}$$ The non-local part of is captured by the functions $$\begin{split}
\cB(z)
= \frac{1}{36}+\frac{3Y}{4}+\frac{Y}{3a^2}\,,
&\qquad
\cC(z)
=
-\frac{1}{2}+3Y- \frac{6Y}{a^2}+\frac{3}{8}(1-Y)a^2\,.
\end{split}$$ The non-local beta function related to the running of the Newton’s constant is $$\begin{split}
\beta_{b_1}
&= \frac{1}{2\pi}\left\{-\frac{1}{24}-\frac{Y}{4} - \frac{Y}{2a^2}-\frac{3}{32}(1-Y)a^2\right\}\,,
\end{split}$$ and it has the limits $$\begin{split}
\beta_{b_1} &= \begin{cases}
-\frac{5}{24\pi} +{\cal O}\left(\frac{m^2}{q^2}\right) & \qquad {\rm for} \quad q^2 \gg m^2 \,; \\
-\frac{1}{30\pi} \frac{q^2}{m^2}+{\cal O}\left(\frac{q^2}{m^2}\right)^{\frac{3}{2}} & \qquad {\rm for} \quad q^2 \ll m^2\,.
\end{cases}
\end{split}$$ The Proca field is not conformally coupled, either for non-zero mass or in the limit $m\to 0$. In fact, the conformally coupled “equivalent” of the Proca field is a Maxwell field, but we have established in section \[sect:effective-action\] that such a limit is discontinuous. Nevertheless in the ultraviolet $$\begin{split}
\cC(z)&= 1 +{\cal O}\left(\frac{m^2}{q^2}\right) \qquad {\rm for} \quad q^2 \gg m^2\,,
\end{split}$$ which correctly counts the number of degrees of freedom for a Proca field in $D=2$ (two degrees of freedom of a vector minus one from the ghost scalar).
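The same kind of numerical cross-check works for the Proca limits. As before, this sketch assumes the definitions $a^2=4z/(4+z)$ and $Y=1-\tfrac{1}{a}\ln\tfrac{2+a}{2-a}$ referenced in the text:

```python
import math

def aY(z):
    # assumed definitions: a^2 = 4z/(4+z), Y = 1 - (1/a) ln((2+a)/(2-a))
    a = 2.0 * math.sqrt(z / (4.0 + z))
    return a, 1.0 - math.log((2.0 + a) / (2.0 - a)) / a

def beta_b1_proca_2d(z):
    # beta_{b_1} = (1/2 pi) { -1/24 - Y/4 - Y/(2 a^2) - (3/32)(1-Y) a^2 }
    a, Y = aY(z)
    return (-1.0 / 24.0 - Y / 4.0 - Y / (2.0 * a * a)
            - 3.0 / 32.0 * (1.0 - Y) * a * a) / (2.0 * math.pi)

uv = beta_b1_proca_2d(1e8)   # document's UV limit: -5/(24 pi)
ir = beta_b1_proca_2d(1e-4)  # document's IR limit: -(1/(30 pi)) * z
print(uv, -5 / (24 * math.pi))
print(ir, -1e-4 / (30 * math.pi))
```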
Renormalized action in four dimensions {#sect:nonlocal-four}
======================================
In four dimensions the regularized effective action is much more complicated than the one shown in section \[sect:nonlocal-two\]. As a general template for its parametrization we define $$\begin{split} \label{eq:effective-action-full}
\Ga[g] &=
\Ga_{\rm loc}[g]
+ \frac{m^2}{2(4\pi)^2}\int {\rm d}^4 x \sqrt{g}\,
B(z) R
+ \frac{1}{2(4\pi)^2}\int {\rm d}^4 x \sqrt{g}\, \Bigl\{
C^{\mu\nu\alpha\beta} \, C_1(z) \, C_{\mu\nu\alpha\beta}
+ R \, C_2(z) \, R
\Bigr\}\,,
\end{split}$$ in which we used the four-dimensional Weyl tensor $C_{\mu\nu\rho\theta}$. In our settings the non-local functions $C_1(z)$ and $C_2(z)$ are four-dimensional generalizations of $\cC(z)$ and therefore we could speculate on their relations with the $a$- and $c$-charges that appear in four-dimensional generalizations of Zamolodchikov’s analysis [@Jack:1990eb] through local RG [@Osborn:1991gm]. It would be intriguing to establish a connection with the functional formalism of [@Codello:2015ana] but we do not delve further in this direction.
The heat kernel terms that require renormalization are those with zero, one and two curvatures, corresponding to poles coming from the integration of $s^{-d/2}$, $s^{-d/2+1}$ and $s^{-d/2+2}$. All the poles are local, which means that they are contained in $\Ga_{\rm loc}[g]$ and can be renormalized by introducing the counterterms. The renormalized local action is $$\begin{split} \label{eq:effective-action-local-renormalized}
S_{\rm ren}[g]
&= \int {\rm d}^4 x \sqrt{g}\,\left\{b_0 + b_1 R + a_1 C^2 +a_2 {\cal E}_4
+a_3 \Box R + a_4 R^2\right\}\,,
\end{split}$$ in which ${\cal E}_4$ is the operator associated with the Euler characteristic, which is the Gauss-Bonnet topological term in $D=4$. Our non-local heat kernel of appendix \[sect:heat-kernel\] is valid for asymptotically flat spacetimes, which has the unfortunate consequence of setting ${\cal E}_4=0$, but we can study every other term flawlessly [@bavi90]. The couplings of include the cosmological constant $\Lambda$ and Newton’s constant $G$ through the relations $b_0=2\Lambda G^{-1}$ and $b_1=-G^{-1}$. In general, we denote beta functions in the minimal subtraction scheme as $\beta^{\overline{\rm MS}}_g$ in which $g$ is any of the couplings appearing in .
Comparing with we can straightforwardly define the non-local renormalization group beta function for two of the quadratic couplings $$\label{eq:definitions-running-1}
\begin{split}
\beta_{a_1} =
\frac{z}{(4\pi)^2} C'_1(z)\,,\qquad
\beta_{a_4} = \frac{z}{(4\pi)^2} C'_2(z)\,,
\end{split}$$ and these definitions coincide with the ones made in [@apco; @fervi]. In contrast to the two-dimensional case, it is much less clear how to attribute the running of the function $B(z)$ because both $R$ and $\Box R$ require counterterms. We discuss some implications of this point in section \[sect:uv-structure\]. To handle the problem we define a master “beta function” for the couplings that are linear in the scalar curvature $$\label{eq:psi-def}
\begin{split}
\Psi ~ = ~ \frac{1}{(4\pi)^2}\, z\,\partial_z \Big[\frac{B(z)}{z}\Big]\,.
\end{split}$$ The function $\Psi$ includes the non-local running of both couplings $a_3$ and $b_1$, which can be seen from the general property $$\label{eq:psi}
\begin{split}
\Psi &= \begin{cases}
- \beta^{\overline{\rm MS}}_{a_3} & \qquad {\rm for} \quad q^2 \gg m^2 \\
\frac{m^2}{q^2} ~ \beta^{\overline{\rm MS}}_{b_1} & \qquad {\rm for} \quad q^2 \ll m^2
\end{cases}
\end{split}$$ that we observe for all the matter species that we considered. The function $\Psi$ “mutates” from the ultraviolet to the infrared, giving the universal $\overline{\rm MS}$ contributions to the running of both $a_3$ and $b_1$. Following the discussion of section \[sect:uv-structure\] we define the non-local beta functions by subtracting the asymptotic behaviors $$\begin{split}
\label{eq:definitions-running}
\beta_{a_3} ~ = ~ -\frac{1}{(4\pi)^2} \,z\,\partial_z
\Big[\frac{B(z)-B(0)}{z}\Big],
\qquad \beta_{b_1} ~ = ~ \frac{m^2}{(4\pi)^2} \,z\,
\partial_z\big[B(z)-B_\infty(z)\big].
\end{split}$$ In order to preserve the elegance of the form-factors and of the beta functions expressed only in terms of the dimensionless variables $a$ and $Y$, instead of subtracting the leading logarithm at infinity we subtract $$\begin{split}
a(1-Y)\simeq \ln (z) \,,
\end{split}$$ which is shown to be valid for $z\gg 1$ using the definitions .
Using the above definitions and , each separate beta function coincides with its $\overline{\rm MS}$ counterpart in the ultraviolet $$\begin{split}
\beta_g
&= \beta^{\overline{\rm MS}}_g
+ {\cal O}\left(\frac{m^2}{q^2}\right) \qquad {\rm for }\qquad q^2\gg m^2,
\end{split}$$ in which $g$ is any of the couplings of (with the possible exception of $a_2$ which is not present in asymptotically flat spacetimes). Furthermore, in the infrared the running of each coupling is slowed down by a quadratic factor of the energy $$\begin{split}
\beta_g
&= {\cal O}\left(\frac{q^2}{m^2}\right) \qquad {\rm for }\qquad q^2 \ll m^2\,,
\end{split}$$ which is practical evidence of the Appelquist-Carazzone theorem in four dimensions.
Non-minimally coupled scalar field in four dimensions {#sect:scalar4d}
-----------------------------------------------------
The effective action of the non-minimally coupled scalar field can be obtained by specifying the endomorphism $E=\xi R$ in the non-local heat kernel expansion and then performing the integration in $s$. We give all the results using the template . We find the local contributions of the regularized action to be $$\begin{gathered}
\Ga_{\rm loc}[g] =
\frac{1}{2(4\pi)^2} \int {\rm d}^4 x\sqrt{g} \, \Bigl\{
-m^4\Bigl(\frac{1}{\bar{\epsilon}}+\frac{3}{4}\Bigr)
- 2m^2\Bigl( \xi-\frac{1}{6} \Bigr)\frac{1}{\bar{\epsilon}} R
\\
+\frac{1}{3} \Bigl( \xi-\frac{1}{5} \Bigr)\frac{1}{\bar{\epsilon}} \Box R
-\frac{1}{60\bar{\epsilon}} C_{\mu\nu\rho\theta} C^{\mu\nu\rho\theta}
-\Bigl( \xi-\frac{1}{6} \Bigr)^2 \frac{1}{\bar{\epsilon}} R^2
\Bigr\}\,.\end{gathered}$$ The minimal subtraction of the divergences of local contributions induces the following $\overline{\rm MS}$ running $$\label{eq:beta-functions-scalar-perturbative}
\begin{array}{lll}
\beta_{b_0}^{\overline{\rm MS}} = \frac{1}{(4\pi)^2} \frac{m^4}{2} \,,
& \quad
\beta_{b_1}^{\overline{\rm MS}} = \frac{1}{(4\pi)^2} m^2 \xibar \,,
& \quad \\
\beta_{a_3}^{\overline{\rm MS}} = - \frac{1}{(4\pi)^2} \frac{1}{6} \left(\xi-\frac{1}{5}\right)\,,
& \quad
\beta_{a_1}^{\overline{\rm MS}} = \frac{1}{(4\pi)^2}\frac{1}{120}\,,
& \quad
\beta_{a_4}^{\overline{\rm MS}} = \frac{1}{(4\pi)^2} \frac{1}{2} \xibar^2\,,
\end{array}$$ which agree with [@bro-cass; @Jack:1983sk; @Martini:2018ska] in the overlapping region of validity. The non-local part of the effective action includes the following form-factors $$\begin{split}
\frac{B(z)}{z} &=
-\frac{4 Y}{15 a^4}+\frac{Y}{9 a^2}-\frac{1}{45 a^2}+\frac{4}{675}+\xibar \left(-\frac{4 Y}{3 a^2}-\frac{1}{a^2}+\frac{5}{36}\right)\,,
\\
C_1(z) &= -\frac{1}{300}-\frac{1}{45a^2}-\frac{4Y}{15a^4} \,,
\\
C_2(z) &=
-\frac{Y}{144}+\frac{7}{2160}-\frac{Y}{9 a^4}+\frac{Y}{18 a^2}-\frac{1}{108 a^2}
+\xibar \left(-\frac{2 Y}{3 a^2}+\frac{Y}{6}-\frac{1}{18}\right)-Y \xibar^2 \,.
\end{split}$$ Using our definitions and the non-local beta functions are $$\begin{split}
\beta_{b_1} &=\frac{m^2 z}{(4\pi)^2}\Bigl\{
\frac{2 Y}{5 a^4}-\frac{2 Y}{9 a^2}
+ \frac{1}{30 a^2}-\frac{aY}{180}
+\frac{a}{120}+\frac{Y}{24}-\frac{1}{40}
+
\xibar \left(\frac{2 Y}{3 a^2}+\frac{a Y}{6}-\frac{a}{4}
-\frac{Y}{2}+\frac{1}{2}\right)
\Bigr\}\,,
\\
\beta_{a_3} &= \frac{1}{(4\pi)^2}
\Bigl\{
-\frac{2 Y}{3 a^4}
+\frac{Y}{3 a^2}-\frac{1}{18 a^2}
-\frac{Y}{24}+\frac{7}{360}
+
\xibar
\left(-\frac{2 Y}{a^2}+\frac{Y}{2}-\frac{1}{6}\right)
\Bigr\}\,, \\
\beta_{a_1} &= \frac{1}{(4\pi)^2}\Bigl\{
-\frac{1}{180}+\frac{1}{18a^2}+\frac{2Y}{3a^4}-\frac{Y}{6a^2}
\Bigr\}\,,
\\
\beta_{a_4} &= \frac{1}{(4\pi)^2}\Bigl\{
\frac{5 Y}{18 a^4}-\frac{a^2 Y}{1152}-\frac{11 Y}{72 a^2}+\frac{a^2}{1152}+\frac{5}{216 a^2}+\frac{7 Y}{288}-\frac{1}{108}
\\&
\quad +\xibar \left(\frac{a^2 Y}{48}+\frac{Y}{a^2}-\frac{a^2}{48}-\frac{Y}{3}+\frac{1}{12}\right)
+\xibar^2 \left(-\frac{a^2 Y}{8}+\frac{a^2}{8}+\frac{Y}{2}\right)
\Bigr\}\,.
\end{split}$$ The effects of the Appelquist-Carazzone theorem for $\beta_{a_1}$ and $\beta_{a_4}$ have been observed in [@apco; @fervi], and for $\beta_{b_1}$ and $\beta_{a_3}$ in [@Franchino-Vinas:2018gzr]. We report the latter two because they are related to Newton’s constant through $b_1=-G^{-1}$. The non-local beta function of the coupling $b_1$ in units of the mass has the two limits $$\begin{split}
\frac{\beta_{b_1}}{m^2} &= \begin{cases}
\frac{1}{(4\pi)^2}\xibar + \frac{1}{(4\pi)^2}
\left\{\left(\frac{3}{5}-\xi\right)
-\xi \ln\left(\frac{q^2}{m^2}\right)\right\} \frac{m^2}{q^2}
+{\cal O}\left(\frac{m^2}{q^2}\right)^{2}
&
\quad {\rm for} \quad q^2 \gg m^2\,,
\\
\frac{1}{(4\pi)^2}\left(\frac{4}{9}\xi-\frac{77}{900}\right)
\frac{q^2}{m^2} +{\cal O}\left(\frac{q^2}{m^2}\right)^{\frac{3}{2}}
&
\quad {\rm for} \quad q^2 \ll m^2\,;
\end{cases} \end{split}$$ while the one of $a_3$ is $$\begin{split}
\beta_{a_3} &= \begin{cases}
-\frac{1}{6(4\pi)^2}\left(\xi-\frac{1}{5}\right)
+ \frac{1}{(4\pi)^2} \left\{ \frac{5}{18}-2\xi
+ \xibar \ln\left(\frac{q^2}{m^2}\right)\right\} \frac{m^2}{q^2}
+{\cal O}\left(\frac{m^2}{q^2}\right)^{2}
& \,\,\, {\rm for} \,\,\, q^2 \gg m^2\,,
\\
\frac{1}{(4\pi)^2} \frac{1}{840}
\left(3-14\xi\right)\frac{q^2}{m^2}
+{\cal O}\left(\frac{q^2}{m^2}\right)^2
& \,\,\, {\rm for} \,\,\, q^2 \ll m^2\,.
\end{cases}
\end{split}$$ These expressions show a standard quadratic decoupling in the IR, exactly as for QED [@AC] and for the fourth-derivative gravitational terms [@apco; @fervi].
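Both limits can be verified numerically from the full expression of $\beta_{b_1}$ given above. The sketch works with the dimensionless combination $(4\pi)^2\beta_{b_1}/(m^2 z)$ and assumes the definitions $a^2=4z/(4+z)$ and $Y=1-\tfrac{1}{a}\ln\tfrac{2+a}{2-a}$ used in the text:

```python
import math

def aY(z):
    # assumed: a^2 = 4z/(4+z), Y = 1 - (1/a) ln((2+a)/(2-a))
    a = 2.0 * math.sqrt(z / (4.0 + z))
    return a, 1.0 - math.log((2.0 + a) / (2.0 - a)) / a

def bracket(z, xi):
    # curly bracket of beta_{b_1} for the non-minimal scalar, with xibar = xi - 1/6
    a, Y = aY(z)
    xb = xi - 1.0 / 6.0
    return (2*Y/(5*a**4) - 2*Y/(9*a**2) + 1/(30*a**2) - a*Y/180
            + a/120 + Y/24 - 1.0/40
            + xb * (2*Y/(3*a**2) + a*Y/6 - a/4 - Y/2 + 0.5))

for xi in (0.0, 0.3):
    uv = 1e8 * bracket(1e8, xi)   # should tend to xibar = xi - 1/6
    ir = bracket(1e-4, xi)        # should tend to 4 xi/9 - 77/900
    print(xi, uv, ir)
```

The UV value reproduces $\xibar$ and the IR slope reproduces $\frac{4}{9}\xi-\frac{77}{900}$, for any value of the non-minimal coupling.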
Dirac field in four dimensions {#sect:dirac4d}
------------------------------
The effective action of the minimally coupled Dirac field requires the specification of the endomorphism $E=R/4$. The final result is proportional to the dimension $d_\gamma$ of the Clifford algebra and hence to the number of spinor components. We do not set $d_\gamma=4$, but choose instead to leave it arbitrary so that the formulas can easily be generalized to other spinor species. We find the local regularized action to be $$\begin{split}
\Ga_{\rm loc}[g] &=\frac{d_\gamma}{2(4\pi)^2} \int {\rm d}^4 x\sqrt{g} \, \Bigl\{
m^4\Bigl(\frac{1}{\bar{\epsilon}}+\frac{3}{4}\Bigr)
+\frac{m^2}{6\bar{\epsilon}} R -\frac{1}{60\bar{\epsilon}} \Box R
-\frac{1}{40\bar{\epsilon}} C_{\mu\nu\rho\theta} C^{\mu\nu\rho\theta}
\Bigr\}\,.
\end{split}$$ The minimal subtraction of the $1/\bar{\epsilon}$ divergences induces the following $\overline{\rm MS}$ beta functions $$\begin{array}{lll}
\beta_{b_0}^{\overline{\rm MS}} = -\frac{d_\gamma}{(4\pi)^2} \frac{m^4}{2} \,,
& \quad
\beta_{b_1}^{\overline{\rm MS}} = -\frac{d_\gamma}{(4\pi)^2} \frac{m^2}{12} \,,
& \quad \\
\beta_{a_3}^{\overline{\rm MS}} = \frac{d_\gamma}{(4\pi)^2} \frac{1}{120} \,,
& \quad
\beta_{a_1}^{\overline{\rm MS}} = \frac{d_\gamma}{(4\pi)^2}\frac{1}{80} \,,
& \quad
\beta_{a_4}^{\overline{\rm MS}} = 0\,.
\end{array}$$ The non-local part of the effective action includes the following form-factors $$\begin{split}
\frac{B(z)}{z} &=
d_\gamma\Bigl\{-\frac{7}{400}+\frac{19}{180a^2}+\frac{4Y}{15a^4} \Bigr\}\,,
\\
C_1(z) &= d_\gamma\Bigl\{-\frac{19}{1800}+\frac{1}{45a^2}+\frac{4Y}{15a^4}-\frac{Y}{6a^2}\Bigr\} \,,
\\
C_2(z) &= d_\gamma\Bigl\{-\frac{1}{1080}+\frac{1}{108a^2}+\frac{Y}{9a^4}-\frac{Y}{36a^2}\Bigr\} \,.
\end{split}$$ The non-local beta functions are $$\begin{split}
\beta_{b_1} &= \frac{d_\gamma m^2 z}{(4\pi)^2}\Bigl\{-\frac{2 Y}{5 a^4}+\frac{Y}{6 a^2}-\frac{1}{30 a^2}-\frac{a Y}{120}+\frac{a}{80}-\frac{1}{60} \Bigr\}\,,
\\
\beta_{a_3} &= \frac{d_\gamma}{(4\pi)^2} \Bigl\{ \frac{2 Y}{3 a^4}-\frac{Y}{6 a^2}+\frac{1}{18 a^2}-\frac{1}{180}\Bigr\}\,,
\\
\beta_{a_1} &= \frac{d_\gamma}{(4\pi)^2}\Bigl\{-\frac{2 Y}{3 a^4}+\frac{5 Y}{12 a^2}-\frac{1}{18 a^2}-\frac{Y}{16}+\frac{19}{720} \Bigr\}\,,
\\
\beta_{a_4} &= \frac{d_\gamma}{(4\pi)^2}\Bigl\{-\frac{5 Y}{18 a^4}+\frac{Y}{9 a^2}-\frac{5}{216 a^2}-\frac{Y}{96}+\frac{5}{864} \Bigr\}\,.
\end{split}$$ As in the scalar case, the non-local beta functions of $b_1$ and $a_3$ have two limits $$\begin{split}
\frac{\beta_{b_1}}{m^2}
&= \begin{cases}
-\frac{d_\gamma}{(4\pi)^2}\frac{1}{12}
-\frac{d_\gamma}{(4\pi)^2}\left[\frac{7}{20}
-\frac{1}{4}\ln\left(\frac{q^2}{m^2}\right)\right]\frac{m^2}{q^2} +{\cal O}\left(\frac{m^2}{q^2}\right)^{{2}}
& \qquad {\rm for} \quad q^2 \gg m^2\,;
\\
-\frac{d_\gamma}{(4\pi)^2}\frac{23}{900} \frac{q^2}{m^2} +{\cal O}\left(\frac{q^2}{m^2}\right)^{\frac{3}{2}}
& \qquad {\rm for} \quad q^2 \ll m^2\,.
\end{cases}\\
\beta_{a_3} &= \begin{cases}
\frac{d_\gamma}{(4\pi)^2} \frac{1}{120} + \frac{d_\gamma}{(4\pi)^2}\left\{\frac{2}{9}
-\frac{1}{12}\ln\left(\frac{q^2}{m^2}\right)\right\}\frac{m^2}{q^2}
+{\cal O}\left(\frac{m^2}{q^2}\right)^{{2}}
& \qquad {\rm for} \quad q^2 \gg m^2 \,;
\\
\frac{d_\gamma}{(4\pi)^2} \frac{1}{1680} \frac{q^2}{m^2}
+{\cal O}\left(\frac{q^2}{m^2}\right)^2
& \qquad {\rm for} \quad q^2 \ll m^2\,.
\end{cases}
\end{split}$$ As in the previous section there is the standard quadratic decoupling in the IR.
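The Dirac limits can be checked in the same way as the scalar ones, again through the dimensionless combination $(4\pi)^2\beta_{b_1}/(d_\gamma m^2 z)$ and assuming the definitions $a^2=4z/(4+z)$, $Y=1-\tfrac{1}{a}\ln\tfrac{2+a}{2-a}$:

```python
import math

def aY(z):
    # assumed: a^2 = 4z/(4+z), Y = 1 - (1/a) ln((2+a)/(2-a))
    a = 2.0 * math.sqrt(z / (4.0 + z))
    return a, 1.0 - math.log((2.0 + a) / (2.0 - a)) / a

def bracket_dirac(z):
    # curly bracket of beta_{b_1} for the Dirac field in D=4
    a, Y = aY(z)
    return (-2*Y/(5*a**4) + Y/(6*a**2) - 1/(30*a**2)
            - a*Y/120 + a/80 - 1.0/60)

uv = 1e8 * bracket_dirac(1e8)   # should tend to -1/12
ir = bracket_dirac(1e-4)        # should tend to -23/900
print(uv, ir)
```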
Proca field in four dimensions {#sect:proca4d}
------------------------------
The integration of the minimally coupled Proca field yields the local regularized action $$\begin{split}
\Ga_{\rm loc}[g] &=
\frac{1}{2(4\pi)^2} \int {\rm d}^4 x\sqrt{g} \, \Bigl\{
-m^4\Bigl(\frac{3}{\bar{\epsilon}}+\frac{9}{4}\Bigr)
-\frac{m^2}{\bar{\epsilon}} R +\frac{2}{15\bar{\epsilon}} \Box R
-\frac{13}{60\bar{\epsilon}} C_{\mu\nu\rho\theta} C^{\mu\nu\rho\theta}
-\frac{1}{36} R^2
\Bigr\}\,.
\end{split}$$ The minimal subtraction of the $1/\bar{\epsilon}$ poles induces the following $\overline{\rm MS}$ beta functions $$\begin{array}{lll}
\beta_{b_0}^{\overline{\rm MS}} = \frac{1}{(4\pi)^2} \frac{3m^4}{2} \,,
& \quad
\beta_{b_1}^{\overline{\rm MS}} = \frac{1}{(4\pi)^2} \frac{m^2}{2} \,,
& \quad \\
\beta_{a_3}^{\overline{\rm MS}} = -\frac{1}{(4\pi)^2} \frac{1}{15} \,,
& \quad
\beta_{a_1}^{\overline{\rm MS}} = \frac{1}{(4\pi)^2}\frac{13}{120} \,,
& \quad
\beta_{a_4}^{\overline{\rm MS}} = \frac{1}{(4\pi)^2}\frac{1}{72}\,.
\end{array}$$ The non-local part of the effective action includes the following form-factors $$\begin{split}
\frac{B(z)}{z} &= \frac{157}{1800} -\frac{17}{30 a^2} -\frac{4 Y}{5 a^4}-\frac{Y}{3 a^2} \,,
\\
C_1(z) &= \frac{91}{900} -\frac{1}{15 a^2}-\frac{Y}{2} -\frac{4 Y}{5 a^4}+\frac{4 Y}{3 a^2} \,,
\\
C_2(z) &= \frac{1}{2160} -\frac{1}{36 a^2} -\frac{Y}{3 a^4} -\frac{Y}{48} +\frac{Y}{18 a^2} \,.
\end{split}$$ The non-local beta functions are easily derived $$\begin{split}
\beta_{b_1} &= \frac{m^2 z}{(4\pi)^2} \left\{ \frac{6 Y}{5 a^4}-\frac{Y}{3 a^2}+\frac{1}{10 a^2}+\frac{a Y}{15}-\frac{a}{10}-\frac{Y}{8}+\frac{7 }{40} \right\}\,,
\\
\beta_{a_3} &= \frac{1}{(4\pi)^2} \Bigl\{ -\frac{2 Y}{a^4}-\frac{1}{6 a^2}+\frac{Y}{8}-\frac{1}{40} \Bigr\}\,,
\\
\beta_{a_1} &= \frac{1}{(4\pi)^2}\Bigl\{\frac{2 Y}{a^4}-\frac{a^2 Y}{16}-\frac{5 Y}{2 a^2}+\frac{a^2}{16}+\frac{1}{6 a^2}+\frac{3 Y}{4}-\frac{11}{60}\Bigr\}\,,
\\
\beta_{a_4} &= \frac{1}{(4\pi)^2}\Bigl\{\frac{5 Y}{6 a^4}-\frac{a^2 Y}{384}-\frac{7 Y}{24 a^2}+\frac{a^2}{384}+\frac{5}{72 a^2}+\frac{Y}{32}-\frac{1}{72} \Bigr\}\,.
\end{split}$$ The beta functions of $b_1$ and $a_3$ have the two limits $$\begin{split}
\frac{\beta_{b_1}}{m^2} &= \begin{cases}
\frac{1}{(4\pi)^2}\frac{1}{2}+\frac{1}{(4\pi)^2}\left(\frac{4}{5}
-\ln\left(\frac{q^2}{m^2}\right)\right)\frac{m^2}{q^2} +{\cal O}\left(\frac{m^2}{q^2}\right)^{2}
& \qquad {\rm for} \quad q^2 \gg m^2\,;
\\
\frac{1}{(4\pi)^2}\frac{169}{900} \frac{q^2}{m^2}
+{\cal O}\left(\frac{q^2}{m^2}\right)^{\frac{3}{2}}
& \qquad {\rm for} \quad q^2 \ll m^2\,,
\end{cases}\\
\beta_{a_3} &= \begin{cases}
-\frac{1}{(4\pi)^2} \frac{1}{15}
- \frac{1}{(4\pi)^2}\left\{\frac{7}{6}
-\frac{1}{2}\ln\left(\frac{q^2}{m^2}\right)\right\}\frac{m^2}{q^2}
+{\cal O}\left(\frac{m^2}{q^2}\right)^{2}
& \qquad {\rm for} \quad q^2 \gg m^2 \,;
\\
-\frac{1}{(4\pi)^2} \frac{1}{168} \frac{q^2}{m^2}
+{\cal O}\left(\frac{q^2}{m^2}\right)^2 & \qquad {\rm for}
\quad q^2 \ll m^2\,.
\end{cases}
\end{split}$$ We observe that the Proca field also exhibits a quadratic decoupling.
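As for the other species, the Proca limits follow numerically from the dimensionless combination $(4\pi)^2\beta_{b_1}/(m^2 z)$, assuming the definitions $a^2=4z/(4+z)$ and $Y=1-\tfrac{1}{a}\ln\tfrac{2+a}{2-a}$ used in the text:

```python
import math

def aY(z):
    # assumed: a^2 = 4z/(4+z), Y = 1 - (1/a) ln((2+a)/(2-a))
    a = 2.0 * math.sqrt(z / (4.0 + z))
    return a, 1.0 - math.log((2.0 + a) / (2.0 - a)) / a

def bracket_proca(z):
    # curly bracket of beta_{b_1} for the Proca field in D=4
    a, Y = aY(z)
    return (6*Y/(5*a**4) - Y/(3*a**2) + 1/(10*a**2)
            + a*Y/15 - a/10 - Y/8 + 7.0/40)

uv = 1e8 * bracket_proca(1e8)   # should tend to 1/2
ir = bracket_proca(1e-5)        # should tend to 169/900
print(uv, ir)
```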
Comments on the UV structure of the effective action {#sect:uv-structure}
====================================================
The local and non-local contributions to the effective action are not fully independent, but rather display some important relations which underline the properties described in section \[sect:nonlocal-four\]. We concentrate here on the running of a generic operator $O[g]$ on which a form-factor $B_O(z)$ acts, while keeping in mind that the explicit example would be to take $R$ as the operator and $B(z)$ as the corresponding form-factor. For small mass $m\sim 0$ we expect on general grounds that the regularized vacuum action is always of the form $$\begin{split}\label{eq:gamma-local-div}
\Ga[g] &\supset
- \frac{b _O}{(4\pi)^2\bar{\epsilon}} \int {\rm d}^4x~O[g]
+ \frac{1}{2(4\pi)^2}\int {\rm d}^4x~ B_O(z)~O[g]
\\
&
= -\frac{b _O}{2(4\pi)^2}\int {\rm d}^4x \Bigl[\frac{2}{\bar{\epsilon}} - \ln\left(\D/m^2\right)\Bigr]O[g] + \dots
\end{split}$$ which can be proven by coupling $O[g]$ to the path integral as a scalar composite operator. The dots hide subleading contributions in the mass and $b_O$ is a unique coefficient determined by the renormalization of the operator itself. The above relation underlines the explicit connection between the coefficient of the $1/\bar{\epsilon}$ pole and the leading ultraviolet logarithmic behavior of the form-factor [@El-Menoufi:2015cqw; @Donoghue:2015nba].
The subtraction of the pole requires the introduction of the renormalized coupling $g_O$ $$\begin{split}
S_{\rm ren}[g] \supset \int g_O ~ O[g]\,,
\end{split}$$ which in the $\overline{\rm MS}$ scheme will have the beta function $$\begin{split}
\beta^{\overline{\rm MS}}_{g_O} &= \frac{b _O}{(4\pi)^2}\,.
\end{split}$$ Following our discussion of section \[sect:nonlocal-four\] we find that if we subtract the divergence at the momentum scale $q^2$ coming from the Fourier transform of the form-factor we get a non-local beta function $$\begin{split}
\beta_{g_O} &= \frac{z}{(4\pi)^2} B'_{O}(z)\,.
\end{split}$$ Using it is easy to see that in the ultraviolet limit $z\gg 1$ $$\begin{split}
B_O(z)&= b_O \ln\left(z\right) + \dots \,,
\end{split}$$ from which one can infer in general that the ultraviolet limit of the non-local beta function coincides with the $\overline{\rm MS}$ result $$\label{eq:running-of-gO}
\begin{split}
\beta_{g_O}&=\beta^{\overline{\rm MS}}_{g_O} +\dots \qquad{\rm for}\quad z\gg 1\,.
\end{split}$$
It might not be clear at first glance, but in the above discussion we are implicitly assuming that the operator $O[g]$ is kept fixed under the action of the renormalization group operator $q\partial_q=2z\partial_z$. Suppose instead that the operator $O[g]$ is actually a total derivative of the form $$\begin{split}
O[g] = \Box \, O'[g] = -\Delta_g O'[g] \,,
\end{split}$$ in which we introduce another operator $O'[g]$ to be renormalized with a coupling $g_{O'}$ and a local term $g_{O'}\int\Box O'[g]$. If we act with $q\partial_q$ and keep $O'[g]$ fixed instead of $O[g]$ we get $$\begin{split}
\beta_{g_{O'}}O'[g] \propto -\frac{1}{2(4\pi)^2} q\partial_q\left( B_O(z)O[g]\right) = -z\beta_{g_O} O'[g] - \frac{1}{(4\pi)^2} z B_O(z) O'[g] \,.
\end{split}$$ Obviously we find an additional scaling term proportional to the form-factor $B_O(z)$ itself. The definitions take care of this additional scaling by dividing $B_O(z)$ by $z$ before applying the derivative with respect to the scale. In the general example of this appendix we would follow this strategy by defining $$\label{eq:running-of-gOprime}
\begin{split}
\beta_{g_{O'}} = -\frac{1}{(4\pi)^2} z\partial_z\left(\frac{B_O(z)}{z}\right)\,,
\end{split}$$ for the running of the total derivative coupling.
The definitions and now ensure the correct scaling behavior of the running, but are still sensitive to some problems, as shown in practice by . These problems are related to the fact that, in the UV/IR limits, some terms that should be attributed to one coupling’s running appear in the running of the other. For example, our mass-dependent running of $\Box R$ dominates $\Psi$ in the ultraviolet because $\Box\sim -q^2$ grows unbounded, while the same happens in the infrared for $R$. In of the main text we have adopted the convention of subtracting the asymptotic (clearly attributable) behavior of either coupling from the definition of the running of the other coupling as follows $$\label{eq:running-of-g0-gOprime-sub}
\begin{split}
\beta_{g_{O}} = \frac{1}{(4\pi)^2} z\partial_z\left(B_O(z)-B_{\infty,O}(z)\right)\,, \qquad\beta_{g_{O'}} = -\frac{1}{(4\pi)^2} z\partial_z\left(\frac{B_O(z)-B_O(0)}{z}\right)\,,
\end{split}$$ in which $B_{\infty,O}(z)$ is the asymptotic behavior of $B_O(z)$ at $z=\infty$ (see the discussion of section \[sect:nonlocal-four\] for the practical application). These definitions ensure that the dimensional $\overline{\rm MS}$ beta functions of both couplings are reproduced in the UV if both couplings require counterterms, and have the important property of agreeing with the predictions of the Appelquist-Carazzone theorem in the infrared.
Scheme dependence and quantum gravity {#sect:scheme}
=====================================
In this section we speculate on possible applications of the framework described in sections \[sect:nonlocal-two\] and \[sect:nonlocal-four\] to the context of quantum gravity and, more specifically, of asymptotically safe gravity [@Reuter:1996cp; @books]. We begin by recalling that the asymptotic safety conjecture suggests that the four-dimensional quantum theory of metric gravity might be asymptotically safe. An asymptotically safe theory is one in which the ultraviolet is controlled by a non-trivial fixed point of the renormalization group with a finite number of UV-relevant directions. The first and most important step in validating the asymptotic safety conjecture is thus to show that the gravitational couplings, in particular Newton’s constant, have a non-trivial fixed point in their renormalization group flow.
On general grounds, the RG of quantum gravity is induced by the integration of gravitons and all other fields, the latter including all matter species as well as gauge fields. In this review we have considered neither gauge nor graviton fields, but we can still capture some information about a presumed fixed point. If for example quantum gravity is coupled to a large number of minimally coupled scalar fields, $n_{\rm s}\gg 1$, then we can assume with reasonable certainty that fluctuations of the scalar fields dominate the running in the large-$n_{\rm s}$ expansion, and we could promote using $b_1=-G^{-1}$ and $\xi=0$ to obtain the beta function $\beta_G$ [@Martini:2018ska; @Codello:2011js] without having to deal with gauge-fixing and ghosts [@Eichhorn:2009ah; @Groh:2010ta].
One point of criticism of the use of $\beta_G$ for making physical predictions is that the running of Newton’s constant is strongly dependent on the scheme in which it is computed. If we use dimensional regularization and assume that $n_{\rm s}$ is large, we have the counterterm relation $$\begin{split}
-\frac{1}{G_0}
= \mu^{\epsilon}\left( -\frac{1}{G} - n_{\rm s} \,\frac{ m^2}{6(4\pi)^2\epsilon} \right)\,;
\end{split}$$ if instead we use any scheme involving a cutoff $\Lambda$ $$\label{eq:cc-scheme}
\begin{split}
-\frac{1}{G_0}
=
-\frac{1}{G} + A_{\rm sch} \, \Lambda^2
- n_{\rm s}\,\frac{m^2}{6(4\pi)^2}\ln\Lambda\,,
\end{split}$$ in which we introduced the constant $A_{\rm sch}$ that depends on the specific details of the scheme. We can see that the coefficient of the dimensional pole of the $\overline{\rm MS}$ subtraction is universal: it survives the change of scheme and it multiplies the logarithm in the massive scheme. This is of course a well-known relation of quantum field theory.
The vast body of literature dedicated to the conjecture points to the fact that the existence of the fixed point hinges on the inclusion of the scheme-dependent part, but this is often a reason for mistrust because the quantities computed using $A_{\rm sch}$ depend on the scheme in very complicated ways, especially if considered beyond the limitations of perturbation theory. In short, there are two very polarized points of view on the credibility of results based on , which seem impossible to reconcile conceptually. Ideally, in order to find common ground between the points of view, one would like to have a relation almost identical to , but in which $\Lambda$ is replaced by some scale $q^2$ which has physical significance, meaning that it is related to some momentum of a given magnitude. Our definition of the renormalization group as given in and does something very close, in that $q^2$ is a momentum variable of a form-factor which could in principle be related to some gravitational observable.
The function $B(z)$ could thus work as a scale-dependent Newton’s constant and $\Psi(z)$ as its beta function in the usual sense required by asymptotic safety, yet they could maintain some physical meaning thanks to the momentum scale $q^2$. From this point of view the scheme dependence of could be replaced by the dependence on the renormalization condition, hence on the appropriate observable that incorporates $B(z)$ and the scale $q^2$. This idea is certainly *very* speculative, but it becomes worth considering after identifying an interesting conclusion: we have observed in that $\Psi(z)$ always has two limits: in the infrared it reproduces the universal running of Newton’s constant, while in the ultraviolet it reproduces the universal running of the coupling of $\Box R$. This fact might suggest that in determining the ultraviolet nature of quantum gravity the operator $\Box R$ plays the role commonly associated with $R$. We hope that our results might offer some inspiration for further developments in the direction of a more formal proof of the asymptotic safety conjecture.
Conclusions {#sect:conclusions}
===========
We reviewed the covariant computation of the non-local form-factors of the metric-dependent effective action obtained by integrating out several massive matter fields on two- and four-dimensional Euclidean spacetimes. We established a connection between these form-factors and the mass-dependent beta functions of several gravitational couplings, which include Newton’s constant as the most recent result. All the beta functions that we have presented depend on a scale $q^2$ that is associated to the momentum dependence of the form-factors in Fourier space. The running displays two important limits: in the ultraviolet the beta functions coincide with their $\overline{\rm MS}$ counterparts, while in the infrared the same beta functions go to zero with the leading power $\frac{q^2}{m^2}$, as expected from the Appelquist-Carazzone theorem. We expect that our derivation of the semiclassical effective action could have relevant repercussions in the context of cosmology or astrophysics, as it predicts effective values of Newton’s constant, in units of the particles’ masses, which depend on a physical scale of the renormalization group.
Besides the effects of decoupling, several other interesting results have been presented in this review. In fact, we have discussed the pragmatic connection that is made in two dimensions with the expectations of Zamolodchikov’s theorem. Furthermore, in four dimensions we have established an interesting link between the renormalization of the $R$ and $\Box R$ operators, which might have implications for some approaches to quantum gravity. In particular, we have speculated on the utility of our framework for the asymptotic safety conjecture of quantum gravity, in which a consistent non-perturbative renormalization of four-dimensional Einstein-Hilbert gravity is assumed.
*Acknowledgements.* The research of O.Z. was funded by Deutsche Forschungsgemeinschaft (DFG) under the Grant Za 958/2-1. T.P.N. acknowledges support from CAPES through the PNPD program. S.A.F. acknowledges support from the DAAD and the Ministerio de Educación Argentino under the ALE-ARG program. O.Z. is grateful to Martin Reuter and all other participants of the workshop “Quantum Fields – From Fundamental Concepts to Phenomenological Questions” for the interest shown in the topics of this work. The authors are grateful to Tiago G. Ribeiro and Ilya L. Shapiro for collaborations on the projects discussed in this review, and to Carlo Pagani for useful comments on the draft.
The non-local expansion of the heat kernel {#sect:heat-kernel}
==========================================
The heat kernel of the Laplace-type operator ${\cal O}=\Delta_g + E$ is a bi-tensor that is defined as the solution of the differential equation $$\begin{split}
\left(\partial_s + {\cal O}_x \right) {\cal H}_D(s;x,x') = 0\,, \qquad {\cal H}_D(0;x,x')=\delta^{(D)}(x,x')\,,
\end{split}$$ in which $\delta^{(D)}(x,x')$ is the covariant Dirac delta. The formal solution is the exponential $$\begin{split}
{\cal H}_D(s;x,x') = \langle x | {\rm e}^{-s {\cal O}} | x '\rangle \,.
\end{split}$$ We keep the subscript $D$ as a reminder of the spacetime dimension for reasons explained in section \[sect:effective-action\]. A customary tool of quantum field theory is to consider the expression $$\begin{split}
\ln (\frac{x}{y}) &= -\int_0^\infty \frac{{\rm d}s}{s}\, \left( {\rm e}^{-s x}-{\rm e}^{-s y}\right)\,,
\end{split}$$ and use it to give a practical representation of the one loop functional trace $$\begin{split}
\Ga[g] &= -\frac{1}{2} \tr \int_0^\infty \frac{{\rm d}s}{s} \int{\rm d}^Dx ~ {\rm e}^{-sm^2} {\cal H}_D(s;x,x)
\end{split}$$ modulo a field-independent normalization, as shown in the main text in .
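The proper-time identity above is a Frullani integral and can be checked numerically. The following sketch is purely illustrative (it is not part of the text above); it evaluates the integral by a composite Simpson rule, using the fact that the integrand has the finite limit $y-x$ as $s \to 0$.

```python
import math

def frullani(x, y, s_max=80.0, n=20000):
    """Evaluate -int_0^{s_max} ds/s (e^{-s x} - e^{-s y}) by composite
    Simpson's rule; the integrand is finite at s = 0 with limit y - x."""
    def g(s):
        if s == 0.0:
            return y - x
        return (math.exp(-s * x) - math.exp(-s * y)) / s
    h = s_max / n
    acc = g(0.0) + g(s_max)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * g(i * h)
    return -(h / 3.0) * acc

print(frullani(2.0, 3.0), math.log(2.0 / 3.0))  # the two values agree
```

The truncation of the upper limit at `s_max = 80` is harmless here, since the tail of the integrand decays exponentially.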
The heat kernel of a Laplace-type operator admits an expansion in powers of $s$ that starts with the power $s^{-D/2}$, known as the Seeley-DeWitt expansion. The Seeley-DeWitt expansion is perfectly suited for the computation of the divergences of the effective action, and therefore for their $\overline{\rm MS}$ renormalization, but much less effective in obtaining the finite contributions of the effective action that we need in this work. As an alternative we consider the non-local expansion of the heat kernel [@bavi87; @bavi90; @Codello:2012kq]. This latter expansion is a special curvature expansion, known to third order, that is valid for asymptotically flat spacetimes and in which the effects of covariant derivatives are resummed. The trace of the coincidence limit to second order in the curvatures is $$\begin{gathered}
{\cal H}(s) = \frac{1}{(4\pi s)^{D/2}} \int {\rm d}^D x \sqrt{g}\, {\rm tr} \Bigl\{
\mathbf{1}
+ s G_E(s\Delta_g) E
+ s G_R(s\Delta_g) R
+s^2 R F_R(s\Delta_g)R \\
+s^2 R^{\mu\nu} F_{Ric}(s\Delta_g)R_{\mu\nu}
+s^2 E F_E(s\Delta_g)E
+s^2 E F_{RE}(s\Delta_g)R
+s^2 \Omega^{\mu\nu} F_\Omega(s\Delta_g) \Omega_{\mu\nu}
\Bigr\}
+ {\cal O}\left({\cal R}\right)^3\,,\end{gathered}$$ in which ${\cal O}\left({\cal R}\right)^3$ represents all possible non-local terms with three or more curvatures as described in [@bavi87; @bavi90]. The functions of $\Delta_g$ are known as form-factors of the heat kernel: they act on the rightmost curvature and should be regarded as non-local functions of the Laplacian. The form-factors appearing in the linear terms have been derived in [@Codello:2012kq] as $$\begin{split}
G_E(x) = -f(x)\,,\qquad G_R(x) = \frac{f(x)}{4}+\frac{f(x)-1}{2x}\,,
\end{split}$$ while those appearing in the quadratic terms have been derived in [@bavi87; @bavi90] as $$\begin{split}
&F_{Ric}(x) = \frac{1}{6x}+\frac{f(x)-1}{x^2} \,\qquad
F_R(x) = -\frac{7}{48x}+\frac{f(x)}{32}+\frac{f(x)}{8x}-\frac{f(x)-1}{8x^2} \\
&F_{RE}(x) = -\frac{f(x)}{4}-\frac{f(x)-1}{2x} \,\qquad
F_E(x) =\frac{f(x)}{2} \,\qquad
F_\Omega(x) = -\frac{f(x)-1}{2x}\,,
\end{split}$$ but we give them in the notation of [@Codello:2012kq]. Interestingly all the above form-factors depend on a basic form-factor which is defined as $$\begin{split}
f(x) &= \int_0^1 {\rm d}\alpha \, {\rm e}^{-\alpha(1-\alpha)x}\,.
\end{split}$$ All the form-factors admit well-defined expansions both for large and for small values of the parameter $s$ [@bavi87; @bavi90] and therefore allow us to go beyond the simple asymptotic expressions at small $s$.
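Since all the form-factors above are built from the basic function $f(x)$, it is instructive to evaluate it directly. The following sketch (an illustration, not code from the review) computes $f$ by quadrature and checks the limit $f(0)=1$, the small-$x$ expansion $f(x)\approx 1-x/6$ which follows from $\int_0^1\alpha(1-\alpha)\,{\rm d}\alpha = 1/6$, and the large-$x$ falloff $f(x)\sim 2/x$ expected from the endpoint contributions of the integral.

```python
import math

def f(x, n=2000):
    """Basic form-factor f(x) = int_0^1 da exp(-a(1-a)x), evaluated
    with composite Simpson's rule (n must be even)."""
    h = 1.0 / n
    acc = 2.0  # endpoint values: a(1-a) = 0 at a = 0 and a = 1
    for i in range(1, n):
        a = i * h
        acc += (4 if i % 2 else 2) * math.exp(-a * (1.0 - a) * x)
    return (h / 3.0) * acc

print(f(0.0))          # ≈ 1
print(f(0.01))         # ≈ 1 - 0.01/6, the small-x expansion
print(100.0 * f(100.0))  # ≈ 2, consistent with f(x) ~ 2/x at large x
```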
Further mathematical details {#sect:further}
============================
We collect here some useful formulas for dealing with simplifications of the curvature tensors and the Dirac operator that are used in sections \[sect:nonlocal-two\] and \[sect:nonlocal-four\]. In $D=2$ all Riemannian curvature tensors can be written in terms of the metric and the curvature scalar $R$ because only the conformal factor of the metric is an independent degree of freedom. The Riemann and the Ricci tensors simplify to $$R_{\mu\nu\rho\theta} = \tfrac{1}{2}R\left(g_{\mu\rho}g_{\nu\theta} - g_{\mu\theta}g_{\nu\rho}\right)\,, \qquad R_{\mu\nu} = \tfrac{1}{2}R\,g_{\mu\nu}\,.$$ Notice that in we use explicitly the above formulas to argue that the only relevant quadratic form-factor in $D=2$ involves two copies of the scalar curvature. As discussed in section \[sect:effective-action\] we have continued the dimensionality only through the dependence of the leading power of the heat kernel, and all geometric tensors behave as if they live in precisely two dimensions, which allows us to use the above simplifications. In $D=4$ instead all curvature tensors are generally independent and for we have chosen a basis that includes the Ricci scalar and the Weyl tensor, which is useful to disentangle the contributions coming from the conformal factor from those of purely spin-$2$ parts of $g_{\mu\nu}$ that are missing in $D=2$.
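These $D=2$ identities can be spot-checked on the unit two-sphere, whose curvature is known in closed form: $g = \mathrm{diag}(1, \sin^2\theta)$, $R = 2$, and $R_{\theta\varphi\theta\varphi} = \sin^2\theta$. The following sketch is purely illustrative and the sample point $\theta$ is arbitrary.

```python
import math

# Unit two-sphere data in standard coordinates
theta = 0.8
g_thth, g_phph = 1.0, math.sin(theta) ** 2
R = 2.0

# Riemann identity for the component (mu nu rho th) = (theta phi theta phi);
# the off-diagonal metric components vanish on the sphere.
riemann = 0.5 * R * (g_thth * g_phph - 0.0)
assert abs(riemann - math.sin(theta) ** 2) < 1e-12

# Ricci identity R_{mu nu} = (R/2) g_{mu nu}: the unit sphere has
# R_{theta theta} = 1 and R_{phi phi} = sin^2(theta).
assert abs(0.5 * R * g_thth - 1.0) < 1e-12
assert abs(0.5 * R * g_phph - math.sin(theta) ** 2) < 1e-12
print("D = 2 curvature identities verified on the round sphere")
```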
Our conventions for the Dirac operator are in form the same for both $D=2$ and $D=4$. The spin connection $\omega_\mu {}^{a}{}_{b}$ is constructed from the Levi-Civita connection in a straightforward way by introducing the $D$-bein $e^a{}_\mu$ that trivializes the metric $g_{\mu\nu} =e^a{}_\mu e^b{}_\nu \delta_{ab}$, and requiring the compatibility of the extended connection, $\nabla_\mu e^a{}_\nu=0$. We use the fact that the elements $\sigma_{ab}=\frac{i}{2}\left[\gamma_a,\gamma_b\right]$ of the Clifford algebra are generators of local Lorentz transformations to construct the covariant connection acting on Dirac fields $$D_\mu = \partial_\mu - \frac{i}{4}\omega_\mu {}^{ab}\sigma_{ab}\,,$$ which appears in . When applying the general formulas for the heat kernel we need the curvature two-form on Dirac fields $$\begin{aligned}
\Omega_{\mu\nu} = \left[D_\mu,D_\nu\right]
= - \frac{i}{4}F_{\mu\nu} {}^{ab}\sigma_{ab}\end{aligned}$$ in which $F_{\mu\nu} {}^{ab} = R_{\mu\nu}{}^{\rho\theta}e^a{}_\rho e^b{}_\theta$ is the spin curvature of $\omega_\mu {}^{a}{}_{b}$. Using some standard properties of the Clifford algebra, we explicitly find $$\begin{aligned}
\tr ~\Omega^2 = -\frac{d_\gamma}{8} R_{\mu\nu\rho\theta} R^{\mu\nu\rho\theta}\end{aligned}$$ in which $d_\gamma = \tr \, {\mathbf 1}$ is the dimensionality of the Clifford algebra. Interestingly, $d_\gamma$ factorizes from all formulas of sections \[sect:dirac2d\] and \[sect:dirac4d\] because our bare actions are invariant under chiral symmetry, signalling the fact that it is the product $n_{\rm f}\cdot d_\gamma$ that effectively counts the number of independent fermionic degrees of freedom.
[999]{}
T. Appelquist and J. Carazzone, “Infrared Singularities and Massive Fields,” Phys. Rev. D [**11**]{}, 2856 (1975). T. G. Ribeiro, I. L. Shapiro and O. Zanusso, “Gravitational form factors and decoupling in 2D,” Phys. Lett. B [**782**]{}, 324 (2018) \[arXiv:1803.06948 \[hep-th\]\]. S. A. Franchino-Viñas, T. de Paula Netto, I. L. Shapiro and O. Zanusso, “Form factors and decoupling of matter fields in four-dimensional gravity,” Phys. Lett. B [**790**]{}, 229 (2019) \[arXiv:1812.00460 \[hep-th\]\]. E. V. Gorbar and I. L. Shapiro, “Renormalization group and decoupling in curved space,” JHEP [**0302**]{}, 021 (2003) \[hep-ph/0210388\]. E. V. Gorbar and I. L. Shapiro, “Renormalization group and decoupling in curved space. 2. The Standard model and beyond,” JHEP [**0306**]{}, 004 (2003) \[hep-ph/0303124\]. I. L. Buchbinder, G. de Berredo-Peixoto and I. L. Shapiro, “Quantum effects in softly broken gauge theories in curved space-times,” Phys. Lett. B [**649**]{}, 454 (2007) \[hep-th/0703189\]. A. O. Barvinsky and G. A. Vilkovisky, “The Generalized Schwinger-Dewitt Technique in Gauge Theories and Quantum Gravity,” Phys. Rept. [**119**]{}, 1 (1985). A. O. Barvinsky and G. A. Vilkovisky, “Beyond the Schwinger-Dewitt Technique: Converting Loops Into Trees and In-In Currents,” Nucl. Phys. B [**282**]{}, 163 (1987). A. O. Barvinsky and G. A. Vilkovisky, “Covariant perturbation theory. 2: Second order in the curvature. General algorithms,” Nucl. Phys. B [**333**]{}, 471 (1990). A. Codello and O. Zanusso, “On the non-local heat kernel expansion,” J. Math. Phys. [**54**]{}, 013513 (2013) \[arXiv:1203.2034 \[math-ph\]\]. I. L. Shapiro and J. Sola, “Massive fields temper anomaly induced inflation,” Phys. Lett. B [**530**]{}, 10 (2002) \[hep-ph/0104182\]. A. M. Pelinson, I. L. Shapiro and F. I. Takakura, “On the stability of the anomaly induced inflation,” Nucl. Phys. B [**648**]{}, 417 (2003) \[hep-ph/0208184\]. I. L. 
Shapiro, “The Graceful exit from the anomaly induced inflation: Supersymmetry as a key,” Int. J. Mod. Phys. D [**11**]{}, 1159 (2002) \[hep-ph/0103128\]. A. A. Starobinsky, “A New Type of Isotropic Cosmological Models Without Singularity,” Phys. Lett. B [**91**]{}, 99 (1980) \[Adv. Ser. Astrophys. Cosmol. [**3**]{}, 130 (1987)\]. A. A. Starobinsky, “The Perturbation Spectrum Evolving from a Nonsingular Initially De-Sitter Cosmology and the Microwave Background Anisotropy,” Sov. Astron. Lett. [**9**]{}, 302 (1983). T. d. P. Netto, A. M. Pelinson, I. L. Shapiro and A. A. Starobinsky, “From stable to unstable anomaly-induced inflation,” Eur. Phys. J. C [**76**]{}, no. 10, 544 (2016) \[arXiv:1509.08882 \[hep-th\]\]. I. L. Shapiro, J. Sola and H. Stefancic, “Running G and Lambda at low energies from physics at M(X): Possible cosmological and astrophysical implications,” JCAP [**0501**]{}, 012 (2005) \[hep-ph/0410095\]. D. C. Rodrigues, P. S. Letelier and I. L. Shapiro, “Galaxy Rotation Curves from General Relativity with Infrared Renormalization Group Effects,” JCAP [**1004**]{} (2010), arXiv:1102.2188 \[astro-ph.CO\]. I. L. Shapiro and J. Sola, “On the possible running of the cosmological ’constant’,” Phys. Lett. B [**682**]{}, 105 (2009) \[arXiv:0910.4925 \[hep-th\]\]. M. B. Fr[ö]{}b, A. Roura and E. Verdaguer, “One-loop gravitational wave spectrum in de Sitter spacetime,” JCAP [**1208**]{}, 009 (2012) \[arXiv:1205.3097 \[gr-qc\]\]. B. L. Nelson and P. Panangaden, “Scaling Behavior Of Interacting Quantum Fields In Curved Space-time,” Phys. Rev. D [**25**]{}, 1019 (1982). I. L. Buchbinder, “Renormalization Group Equations In Curved Space-time,” Theor. Math. Phys. [**61**]{}, 1215 (1984) \[Teor. Mat. Fiz. [**61**]{}, 393 (1984)\]. I. L. Buchbinder, S. D. Odintsov and I. L. Shapiro, “Effective action in quantum gravity,” Bristol, UK: IOP (1992) 413 p M. Maggiore and M. Mancarella, “Nonlocal gravity and dark energy,” Phys. Rev. D [**90**]{}, no. 
2, 023005 (2014) \[arXiv:1402.0448 \[hep-th\]\]. A. Codello and R. K. Jain, “On the covariant formalism of the effective field theory of gravity and leading order corrections,” Class. Quant. Grav. [**33**]{}, no. 22, 225006 (2016) \[arXiv:1507.06308 \[gr-qc\]\];\
A. Codello and R. K. Jain, “On the covariant formalism of the effective field theory of gravity and its cosmological implications,” Class. Quant. Grav. [**34**]{}, no. 3, 035015 (2017) \[arXiv:1507.07829 \[astro-ph.CO\]\].
B. Knorr and F. Saueressig, “Towards reconstructing the quantum effective action of gravity,” Phys. Rev. Lett. [**121**]{}, no. 16, 161304 (2018) \[arXiv:1804.03846 \[hep-th\]\]. A. Codello, N. Tetradis and O. Zanusso, “The renormalization of fluctuating branes, the Galileon and asymptotic safety,” JHEP [**1304**]{}, 036 (2013) \[arXiv:1212.4073 \[hep-th\]\]. N. Brouzakis, A. Codello, N. Tetradis and O. Zanusso, “Quantum corrections in Galileon theories,” Phys. Rev. D [**89**]{}, no. 12, 125017 (2014) \[arXiv:1310.0187 \[hep-th\]\]. A. Codello and O. Zanusso, “Fluid Membranes and 2d Quantum Gravity,” Phys. Rev. D [**83**]{}, 125021 (2011) \[arXiv:1103.1089 \[hep-th\]\]. I. G. Avramidi, “Covariant Studies of Nonlocal Structure of Effective Action. (In Russian),” Sov. J. Nucl. Phys. [**49**]{}, 735 (1989) \[Yad. Fiz. [**49**]{}, 1185 (1989)\]. H. W. Hamber and R. Toriumi, “Cosmological Density Perturbations with a Scale-Dependent Newton’s G,” Phys. Rev. D [**82**]{}, 043518 (2010) \[arXiv:1006.5214 \[gr-qc\]\]. H. W. Hamber and R. Toriumi, “Scale-Dependent Newton’s Constant G in the Conformal Newtonian Gauge,” Phys. Rev. D [**84**]{}, 103507 (2011) \[arXiv:1109.1437 \[gr-qc\]\]. M. Asorey, E. V. Gorbar and I. L. Shapiro, “Universality and ambiguities of the conformal anomaly,” Class. Quant. Grav. [**21**]{}, 163 (2003) \[hep-th/0307187\]. M. Reuter, “Nonperturbative evolution equation for quantum gravity,” Phys. Rev. D [**57**]{}, 971 (1998) \[hep-th/9605030\]. M. Reuter and F. Saueressig, “Quantum Gravity and the Functional Renormalization Group,” published by Cambridge University Press (2019);\
R. Percacci, “An Introduction to Covariant Quantum Gravity and Asymptotic Safety,” published in the series “100 Years of General Relativity,” vol. 3, by World Scientific (2017).
J. F. Donoghue, M. M. Ivanov and A. Shkerin, “EPFL Lectures on General Relativity as a Quantum Field Theory,” arXiv:1702.00319 \[hep-th\]. A. Codello, R. Percacci, L. Rachwał and A. Tonero, “Computing the Effective Action with the Functional Renormalization Group,” Eur. Phys. J. C [**76**]{}, no. 4, 226 (2016) \[arXiv:1505.03119 \[hep-th\]\]. B. Goncalves, G. de Berredo-Peixoto and I. L. Shapiro, “One-loop corrections to the photon propagator in the curved-space QED,” Phys. Rev. D [**80**]{}, 104013 (2009) \[arXiv:0906.3837 \[hep-th\]\]. M. S. Ruf and C. F. Steinwachs, “Renormalization of generalized vector field models in curved spacetime,” Phys. Rev. D [**98**]{}, no. 2, 025009 (2018) \[arXiv:1806.00485 \[hep-th\]\];\
M. S. Ruf and C. F. Steinwachs, “Quantum effective action for degenerate vector field theories,” Phys. Rev. D [**98**]{}, no. 8, 085014 (2018) \[arXiv:1809.04601 \[hep-th\]\]. L. S. Brown and J. P. Cassidy, “Stress Tensor Trace Anomaly in a Gravitational Metric: General Theory, Maxwell Field,” Phys. Rev. D [**15**]{}, 2810 (1977). A. O. Barvinsky and D. V. Nesterov, “Nonperturbative heat kernel and nonlocal effective action,” hep-th/0402043. A. Codello and G. D’Odorico, “Scaling and Renormalization in two dimensional Quantum Gravity,” Phys. Rev. D [**92**]{}, no. 2, 024026 (2015) \[arXiv:1412.6837 \[gr-qc\]\]. A. B. Zamolodchikov, “Irreversibility of the Flux of the Renormalization Group in a 2D Field Theory,” JETP Lett. [**43**]{}, 730 (1986) \[Pisma Zh. Eksp. Teor. Fiz. [**43**]{}, 565 (1986)\]. I. Jack and H. Osborn, “Analogs for the $c$ Theorem for Four-dimensional Renormalizable Field Theories,” Nucl. Phys. B [**343**]{}, 647 (1990). H. Osborn, “Weyl consistency conditions and a local renormalization group equation for general renormalizable field theories,” Nucl. Phys. B [**363**]{}, 486 (1991). A. Codello, G. D’Odorico and C. Pagani, “Functional and Local Renormalization Groups,” Phys. Rev. D [**91**]{}, no. 12, 125016 (2015) \[arXiv:1502.02439 \[hep-th\]\]. I. Jack and H. Osborn, “Background Field Calculations in Curved Space-time. 1. General Formalism and Application to Scalar Fields,” Nucl. Phys. B [**234**]{}, 331 (1984). R. Martini and O. Zanusso, “Renormalization of multicritical scalar models in curved space,” arXiv:1810.06395 \[hep-th\]. B. K. El-Menoufi, “Quantum gravity of Kerr-Schild spacetimes and the logarithmic correction to Schwarzschild black hole entropy,” JHEP [**1605**]{}, 035 (2016) \[arXiv:1511.08816 \[hep-th\]\]. J. F. Donoghue and B. K. El-Menoufi, “Covariant non-local action for massless QED and the curvature expansion,” JHEP [**1510**]{}, 044 (2015) \[arXiv:1507.06321 \[hep-th\]\]. A. Codello, “Large N Quantum Gravity,” New J. Phys. 
[**14**]{}, 015009 (2012) \[arXiv:1108.1908 \[gr-qc\]\]. A. Eichhorn, H. Gies and M. M. Scherer, “Asymptotically free scalar curvature-ghost coupling in Quantum Einstein Gravity,” Phys. Rev. D [**80**]{}, 104003 (2009) \[arXiv:0907.1828 \[hep-th\]\]. K. Groh and F. Saueressig, “Ghost wave-function renormalization in Asymptotically Safe Quantum Gravity,” J. Phys. A [**43**]{}, 365403 (2010) \[arXiv:1001.5032 \[hep-th\]\].
[^1]: Prepared for the special issue of Universe collecting the contributions to the workshop “Quantum Fields—from Fundamental Concepts to Phenomenological Questions”, Mainz 26-28 September 2018
[^2]: This happens because the scale $\mu$ of dimensional regularization, which we use to subtract the poles, can be interpreted as a very high energy scale which is bigger than any other scale in the theory and in particular bigger than the electron’s mass.
---
abstract: 'We consider co–rotational wave maps from ($3+1$) Minkowski space into the three–sphere. This is an energy supercritical model which is known to exhibit finite time blow up via self–similar solutions. The ground state self–similar solution $f_0$ is known in closed form and based on numerics, it is supposed to describe the generic blow up behavior of the system. We prove that the blow up via $f_0$ is stable under the assumption that $f_0$ does not have unstable modes. This condition is equivalent to a spectral assumption for a linear second order ordinary differential operator. In other words, we reduce the problem of stable blow up to a linear ODE spectral problem. Although we are unable, at the moment, to verify the mode stability of $f_0$ rigorously, it is known that possible unstable eigenvalues are confined to a certain compact region in the complex plane. As a consequence, highly reliable numerical techniques can be applied and all available results strongly suggest the nonexistence of unstable modes, i.e., the assumed mode stability of $f_0$.'
address: 'University of Chicago, Department of Mathematics, 5734 South University Avenue, Chicago, IL 60637, U.S.A.'
author:
- Roland Donninger
bibliography:
- 'wmnlin.bib'
title: 'On stable self–similar blow up for equivariant wave maps'
---
Introduction
============
Wave maps are (formally) critical points of the action functional $$S(u)=\int_M \mathrm{tr}_g (u^*h)$$ for a map $u: M \to N$ where $(M,g)$ and $(N,h)$ are Lorentzian and Riemannian manifolds, respectively. Here, $\mathrm{tr}_g (u^*h)$ is the trace (with respect to $g$) of the pullback metric $u^*h$. The associated Euler–Lagrange equations in a local coordinate system $(x^\mu)$ on $M$ read $$\label{eq:wm}
\Box_g u^a(x)+g^{\mu \nu}(x)\Gamma^a_{bc}(u(x))\partial_\mu u^b(x) \partial_\nu u^c(x)=0$$ where Einstein’s summation convention is assumed and $\Gamma^a_{bc}$ are the Christoffel symbols on $N$. The system is called the wave maps equation (in the intrinsic form). In what follows we choose the base manifold $M$ to be Minkowski space. Due to their geometric origin, wave maps are appealing models in nonlinear field theory which naturally generalize the linear wave equation. They are relevant in various areas of physics but still simple enough to be accessible for rigorous mathematical analysis.
Basic questions for a nonlinear time evolution equation are: Do there exist global (that is, for all times) solutions for all data or is it possible to specify initial data that lead to a breakdown of the solution in finite time? If the latter is true, then how does the breakdown (blow up) occur? There exists a heuristic principle that provides guidance for scaling invariant equations possessing a positive energy (such as the wave maps equation, see [@DSA]). If the scaling behavior is such that shrinking of the solution is energetically favorable, then one expects finite time blow up. Conversely, if shrinking to ever smaller scales is energetically forbidden, then global existence is anticipated. These two cases are called energy supercritical and energy subcritical, respectively. There is also a borderline situation where the energy itself is scaling invariant, which is called energy critical. For wave maps, the criticality class is linked to the spatial dimension of the base manifold $M$. The equation is energy subcritical, critical or supercritical if $\dim M=1+1$, $\dim M=2+1$ or $\dim M \geq 3+1$, respectively. It is fair to say that our current understanding of large data problems is confined to energy subcritical and critical equations. However, this is a very unsatisfactory situation since many problems in physics turn out to be energy supercritical.
The initial value problem for the wave maps equation has attracted a lot of interest by the mathematical community in the past two decades which led to the development of new sophisticated tools that tremendously improved our understanding of nonlinear wave equations. In particular the case where the base manifold $M$ is assumed to be Minkowski space (which we shall do throughout) has been studied thoroughly. Since it is impossible for us to do justice to the huge amount of publications on the subject, we have to restrict ourselves to a certain selection of some important contributions. A considerable amount of the literature is devoted to problems with certain symmetry properties, such as radial or equivariant maps. We mention for instance [@Chr1], [@Chr2], [@shatah88], [@STZ94], [@struwe04a], [@struwe04b] where various fundamental aspects like local and global well–posedness, asymptotic behavior or blow up are studied. At the end of the 1990s new techniques were developed, capable of treating the full system without symmetry assumptions. A main objective in this respect was to prove small data global existence for various target manifolds and space dimensions, see e.g., [@struwe01], [@keel-tao], [@struwe99], [@tataru99], [@tataru01], [@tao01a], [@tao01b], [@klainerman-rodnianski02], [@shatah-struwe02], [@tataru05], [@krieger03], [@krieger04], [@nahmod02], [@nahmod03]. On the other hand, for large data and in the energy critical case, there are newer results on blow up, e.g., [@struwe03], [@KST08], [@rodnianski-sterbenz06], [@rodnianski-raphael09], [@carstea] and, very recently, also on global existence [@krieger-schlag09], [@tataru-sterbenz09a], [@tataru-sterbenz09b], [@tao09]. We will comment below in more detail on some of those works which are most relevant for us. Despite the already vast literature on the subject we are still only at the beginning and many questions remain open, even for highly symmetric problems. 
We also refer the reader to the monograph [@struwe98] for a general introduction on the subject as well as the survey article [@kriegersurv] for an in–depth review of recent results and applications of wave maps.
In this paper we study the simplest energy supercritical case: co–rotational wave maps from $(3+1)$ Minkowski space to the three–sphere. This model can be described by the single semilinear wave equation $$\label{eq:main}
\psi_{tt}-\psi_{rr}-\frac{2}{r}\psi_r+\frac{\sin(2\psi)}{r^2}=0$$ where $r \geq 0$ is the standard radial coordinate on Minkowski space. We refer to [@DSA] and references therein for more details on the underlying symmetry reduction which is a special case of equivariance. Global well–posedness of the Cauchy problem for Eq. for data which are small in a sufficiently high Sobolev space follows from [@sideris]. Furthermore, a number of results concerning the Cauchy problem for equivariant wave maps are obtained in [@STZ94], in particular, local well–posedness with minimal regularity requirements is studied. Global well–posedness for data with small energy can be concluded from the much more general result in [@tao01b]. On the other hand, it has been known for a long time that Eq. exhibits finite time blow up in the form of self–similar solutions [@shatah88], see also [@cazenave] for generalizations. By definition, a self–similar solution is of the form $\psi(t,r)=f(\frac{r}{T-t})$ for a constant $T>0$ and obviously, depending on the concrete form of $f$, “something singular” happens as $t \to T-$. As usual, by exploiting finite speed of propagation, a self–similar solution can be used to construct a solution with compactly supported initial data that breaks down in finite time. In fact, Eq. admits many self–similar solutions [@bizon00] and a particular one, henceforth denoted by $\psi^T$, is even known in closed form [@TS90] and given by $$\psi^T(t,r)=2 \arctan \left (\tfrac{r}{T-t} \right )=:f_0 \left (\tfrac{r}{T-t} \right ).$$ We call $\psi^T$ the *ground state* or *fundamental self–similar solution*.
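That $\psi^T(t,r)=2\arctan\left(\frac{r}{T-t}\right)$ solves the equation exactly can be verified numerically by evaluating a finite-difference residual of the equation on the closed-form solution. The sketch below is illustrative only; it assumes $T=1$ and arbitrary sample points in the interior of the lightcone.

```python
import math

def psi(t, r, T=1.0):
    """Ground state self-similar solution psi^T(t, r) = 2 arctan(r/(T-t))."""
    return 2.0 * math.atan(r / (T - t))

def residual(t, r, h=1e-4):
    """Finite-difference residual of psi_tt - psi_rr - (2/r) psi_r
    + sin(2 psi)/r^2 at an interior point (t, r)."""
    p_tt = (psi(t + h, r) - 2.0 * psi(t, r) + psi(t - h, r)) / h**2
    p_rr = (psi(t, r + h) - 2.0 * psi(t, r) + psi(t, r - h)) / h**2
    p_r = (psi(t, r + h) - psi(t, r - h)) / (2.0 * h)
    return p_tt - p_rr - (2.0 / r) * p_r + math.sin(2.0 * psi(t, r)) / r**2

print(residual(0.3, 0.5), residual(0.6, 0.2))  # vanish up to O(h^2)
```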
We remark in passing that the self–similar blow up here is very different from singularity formation in the analogous model of equivariant wave maps on ($2+1$) Minkowski space. For this energy critical problem it has been shown by Struwe [@struwe03] that the blow up (if it exists) takes place via shrinking of a harmonic map at a rate which is strictly faster than self–similar. This beautiful result holds for a large class of targets and, by ruling out the existence of finite energy harmonic maps, it can be used to show global existence. In the case of the two–sphere as a target, there do exist finite energy harmonic maps and indeed, blow up solutions for this model have been constructed in [@KST08], [@rodnianski-sterbenz06], [@rodnianski-raphael09].
The main result {#sec:intromainresult}
---------------
Based on numerical investigations [@bizon99], the solution $\psi^T$ is expected to be fundamental for the understanding of the dynamics of the system. This is due to the fact that it acts as an attractor in the sense that generic large data evolutions approach $\psi^T$ locally near the center $r=0$ as $t \to T-$. Consequently, the blow up described by $\psi^T$ is expected to be stable. In this paper we give a rigorous proof for the stability of $\psi^T$. Our result is conditional in the sense that it depends on a certain spectral property for a linear ordinary differential equation which cannot be verified rigorously so far. However, very reliable numerics and partial theoretical results leave no doubt that it is satisfied, see [@DSA] for a thorough discussion of this issue. In other words, our result reduces the question of (nonlinear) stability of $\psi^T$ to a linear ODE spectral problem. At this point we should remark that some aspects of the (linear) stability problem for self–similar wave maps are studied in [@ichpca2] by using a hyperbolic coordinate system adapted to self–similarity. In particular stability properties of excited self–similar solutions are derived in [@ichpca2]. However, as discussed in [@ichpca2], the hyperbolic coordinate system is not suitable for proving stability of $\psi^T$.
In order to formulate our main theorem, we need a few preparations. Based on the known numerical results, one expects convergence to the self–similar attractor $\psi^T$ only in $${\mathcal}{C}_T:=\{(t,r): t\in (0,T), r \in [0,T-t]\},$$ the backward lightcone of the blow up point. Consequently, we study the Cauchy problem $$\label{eq:maincauchy}
\left \{ \begin{array}{l}
\psi_{tt}(t,r)-\psi_{rr}(t,r)-\frac{2}{r}\psi_r(t,r)+\frac{\sin(2\psi(t,r))}{r^2}=0 \mbox{ for } (t,r) \in {\mathcal}{C}_T \\
\psi(0,r)=f(r), \psi_t(0,r)=g(r) \mbox{ for }r \in [0,T]
\end{array} \right .$$ with given initial data $(f,g)$. Our result applies to small perturbations $(f,g)$ of $(\psi^T(0,\cdot), \psi^T_t(0, \cdot))$, i.e., we view the nonlinear problem as a perturbation of the linearization of around $\psi^T$. Writing $$\label{eq:nonlinearity}
\sin(2(\psi^T+\varphi))=\sin(2\psi^T)+2\cos(2\psi^T)\varphi+N_T(\varphi)$$ with $N_T(\varphi)=O(\varphi^2)$ (if $\varphi$ is small), we obtain the equation $$\label{eq:mainlin}
\varphi_{tt}-\varphi_{rr}-\frac{2}{r}\varphi_r+\frac{2}{r^2}\varphi+\frac{2\cos(2\psi^T)-2}{r^2}\varphi+\frac{N_T(\varphi)}{r^2}=0$$ for perturbations $\varphi$ of $\psi^T$. Note that the “potential term” in Eq. is time–dependent since $\psi^T$ is self–similar. Consequently, it is convenient to remove this time dependence by switching to similarity coordinates $\tau:=-\log(T-t)$ and $\rho:=\frac{r}{T-t}$. This transforms the lightcone ${\mathcal}{C}_T$ to an infinite cylinder and the blow up point is shifted towards $\infty$. Thus, we obtain an asymptotic stability problem which is explicitly given by $$\label{eq:cssscalar}
\phi_{\tau \tau}+\phi_\tau+2\rho \phi_{\tau \rho}-(1-\rho^2)\phi_{\rho \rho}-2\frac{1-\rho^2}{\rho}\phi_\rho+\frac{V(\rho)}{\rho^2}\phi+\frac{N_T(\phi)}{\rho^2}=0$$ where $\phi(\tau,\rho)=\varphi(T-e^{-\tau}, e^{-\tau}\rho)$ and the “potential” $V$ reads $$\label{eq:V}
V(\rho)=2 \cos(4 \arctan(\rho))=\frac{2(1-6 \rho^2+\rho^4)}{(1+\rho^2)^2}.$$ The relevant coordinate domain is $\tau \geq -\log T$ and $\rho \in [0,1]$. The corresponding *linearized problem* is simply obtained by ignoring the nonlinear term, i.e., $$\label{eq:linearcssscalar}
\phi_{\tau \tau}+\phi_\tau+2\rho \phi_{\tau \rho}-(1-\rho^2)\phi_{\rho \rho}-2\frac{1-\rho^2}{\rho}\phi_\rho+\frac{V(\rho)}{\rho^2}\phi=0.$$ Inserting the mode ansatz $\phi(\tau,\rho)=e^{\lambda \tau}u_\lambda(\rho)$ into Eq. , we arrive at the aforementioned linear ODE spectral problem $$\label{eq:evodeintro}
-(1-\rho^2)u_\lambda''(\rho)-2\frac{1-\rho^2}{\rho}u_\lambda'(\rho)+2\lambda \rho u_\lambda'(\rho)+\lambda (\lambda+1)u_\lambda(\rho)+\frac{V(\rho)}{\rho^2}u_\lambda(\rho)=0$$ for the function $u_\lambda$. We say that $\lambda$ is an eigenvalue if Eq. has a solution $u_\lambda \in C^\infty[0,1]$. It is a consequence of [@DSA] that only smooth solutions are relevant here. Furthermore, the eigenvalue is said to be unstable if $\mathrm{Re}\lambda \geq 0$ and stable if $\mathrm{Re}\lambda<0$. It can be immediately checked that $\lambda=1$ is an (unstable) eigenvalue with $u_1(\rho)=\frac{\rho}{1+\rho^2}$. However, it turns out that this instability is an artefact of the similarity coordinates and it does not correspond to a “real” instability of the solution $\psi^T$. In fact, it is a manifestation of the time translation invariance of the wave maps equation. Consequently, we say that $\psi^T$ is *mode stable* if $\lambda=1$ is the only unstable eigenvalue. The mode stability of $\psi^T$, which we shall assume here, has been verified numerically using various independent techniques, see [@bizon99], [@ichdipl], [@ich0] and [@bizon05]. Furthermore, in [@ichpca2] it is rigorously proved that $\lambda=1$ is the only eigenvalue with $\mathrm{Re}\lambda
\geq 1$ and in [@ichpca1] we show that there do not exist *real* unstable eigenvalues (except $\lambda=1$). Finally, in [@DSA] it is shown that $\lambda=1$ is the only eigenvalue with real part greater than $\frac12$. All these results leave no doubt that $\psi^T$ is indeed mode stable although a completely rigorous proof of this property is still not available. At this point we also mention that according to numerics [@bizon99], [@ichdipl], [@ich0] and [@bizon05], the first stable eigenvalue is $\approx -0.54$. This is important since it *dictates the rate of convergence* to the self–similar solution, see Theorem \[thm:main\] below.
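As a simple sanity check (not part of the argument above), one can verify numerically that the two closed forms of $V$ agree and that $\lambda=1$ with $u_1(\rho)=\frac{\rho}{1+\rho^2}$ satisfies the mode equation identically; the derivatives of $u_1$ are inserted in closed form in the following sketch.

```python
import math

def V(rho):
    """Potential V(rho) in its rational form."""
    return 2.0 * (1.0 - 6.0 * rho**2 + rho**4) / (1.0 + rho**2) ** 2

def mode_lhs(rho, lam=1.0):
    """Left-hand side of the mode equation evaluated on u_1(rho) =
    rho/(1+rho^2), with the derivatives of u_1 in closed form."""
    u = rho / (1.0 + rho**2)
    up = (1.0 - rho**2) / (1.0 + rho**2) ** 2
    upp = 2.0 * rho * (rho**2 - 3.0) / (1.0 + rho**2) ** 3
    return (-(1.0 - rho**2) * upp
            - 2.0 * (1.0 - rho**2) / rho * up
            + 2.0 * lam * rho * up
            + lam * (lam + 1.0) * u
            + V(rho) / rho**2 * u)

for rho in (0.2, 0.5, 0.9, 1.0):
    # both closed forms of V agree, and u_1 solves the equation for lam = 1
    assert abs(V(rho) - 2.0 * math.cos(4.0 * math.atan(rho))) < 1e-12
    assert abs(mode_lhs(rho)) < 1e-12
print("gauge mode lambda = 1 verified")
```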
For pairs of functions $(f,g) \in C^2[0,R] \times C^1[0,R]$, $R>0$, that satisfy the boundary condition $f(0)=g(0)=0$ we introduce a norm $\|\cdot\|_{{\mathcal}{E}(R)}$ by setting $$\|(f,g)\|_{{\mathcal}{E}(R)}^2:=
\int_0^R |r f''(r)+3 f'(r)|^2 dr+\int_0^R |r g'(r)+2 g(r)|^2 dr.$$ Observe that $\|\cdot\|_{{\mathcal}{E}(R)}$ is indeed a norm since $r f''(r)+3f'(r)=0$ implies $f(r)=c_1+\frac{c_2}{r^2}$ for constants $c_1, c_2$ but this function does not belong to $C^2[0,R]$ unless $c_2=0$ and it does not satisfy the boundary condition $f(0)=0$ unless $c_1=0$. A similar statement is true for $g$. The motivation for the choice of the norm $\|\cdot\|_{{\mathcal}{E}(R)}$ is two–fold. First, it is derived from a conserved quantity of the *free equation* $$\label{eq:free}
\varphi_{tt}-\varphi_{rr}-\frac{2}{r}\varphi_r+\frac{2}{r^2}\varphi=0$$ which is obtained from Eq. by dropping the regularized “potential” and the nonlinearity. More precisely, for any sufficiently regular solution $\varphi$ of Eq. , the function $t \mapsto \|(\varphi(t,\cdot),\varphi_t(t,\cdot))\|_{{\mathcal}{E}(\infty)}$ is a constant. This can be seen as follows. Suppose $\varphi$ is a (sufficiently smooth) solution of Eq. and set $$\label{eq:phihat}
\hat{\varphi}(t,r):=r\varphi_r(t,r)+2\varphi(t,r).$$ It is important to note here that this transformation is invertible. Indeed, we have $$r\hat{\varphi}(t,r)=\partial_r (r^2 \varphi(t,r))$$ and this necessarily implies $$\varphi(t,r)=\frac{1}{r^2}\int_0^r r' \hat{\varphi}(t,r')dr'$$ since we assume regularity of $\varphi$ at the origin. Now note that $$\begin{aligned}
\hat{\varphi}_{tt}-\hat{\varphi}_{rr}&=\tfrac{1}{r}\partial_r(r^2 \varphi_{tt})-r\varphi_{rrr}
-4\varphi_{rr} \\
&=\tfrac{1}{r}\partial_r r^2 \left [\varphi_{tt}-\varphi_{rr}-\tfrac{2}{r}\varphi_r
+\tfrac{2}{r^2}\varphi \right ]=0\end{aligned}$$ and thus, $\hat{\varphi}$ satisfies the one–dimensional wave equation on the half–line $r \geq 0$. Furthermore, since $\varphi(t,0)=0$ by regularity, we have $\hat{\varphi}(t,0)=0$ for all $t$ and thus, $$\int_0^\infty \left [\hat{\varphi}_t^2(t,r)+\hat{\varphi}_r^2(t,r) \right ]dr
=\|(\varphi(t,\cdot), \varphi_t(t,\cdot))\|_{{\mathcal}{E}(\infty)}^2$$ is independent of $t$. Consequently, $\|\cdot\|_{{\mathcal}{E}(R)}$ is a local “higher energy norm” for the free equation since it requires one more derivative than the energy. The point of requiring more derivatives is that one can “see” self–similar blow up in this norm, which is the second important feature of $\|\cdot\|_{{\mathcal}{E}(R)}$. Explicitly, we have $$\begin{aligned}
\|(\psi^T(t,\cdot),\psi_t^T(t,\cdot))\|_{{\mathcal}{E}(T-t)}^2&=\int_0^{T-t}|r\psi_{rr}^T(t,r)+3\psi_r^T(t,r)|^2 dr
\\
&\quad +\int_0^{T-t}|r\psi_{tr}^T(t,r)+2\psi_t^T(t,r)|^2 dr \\
&=\int_0^{T-t} \left |\tfrac{r}{(T-t)^2}f_0''\left (\tfrac{r}{T-t} \right )+\tfrac{3}{T-t}f_0'\left (\tfrac{r}{T-t} \right ) \right |^2 dr \\
&\quad +\int_0^{T-t}\left |\tfrac{r^2}{(T-t)^3}f_0''\left (\tfrac{r}{T-t} \right )
+\tfrac{3r}{(T-t)^2}f_0'\left (\tfrac{r}{T-t} \right ) \right |^2 dr \\
&=\frac{1}{T-t}\left [\int_0^1 |\rho f_0''(\rho)+3f_0'(\rho)|^2 d\rho+\int_0^1 |\rho^2 f_0''(\rho)+3 \rho f_0'(\rho)|^2 d\rho \right ],\end{aligned}$$ and thus, $$\label{eq:blowup}
\|(\psi^T(t,\cdot),\psi_t^T(t,\cdot))\|_{{\mathcal}{E}(T-t)}=C(T-t)^{-\frac{1}{2}}$$ for a constant $C>0$ which shows that the norm of $\psi^T$ blows up. Note carefully that this is in stark contrast to the behavior of the energy norm. The local energy of $\psi^T$ in the cone ${\mathcal}{C}_T$ *decays* like $(T-t)$ as $t \to T-$ and thus, the blow up is “invisible” in the energy norm. This is a manifestation of the energy supercritical character of the equation. As a consequence, we have to work in a stronger topology, see also [@DSA] for a discussion on this issue.
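The reduction of the free equation to the one–dimensional wave equation via $\hat{\varphi}=r\varphi_r+2\varphi$ used above is a purely algebraic identity and can be confirmed symbolically. A minimal sympy sketch (our own naming, purely a sanity check):

```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)
phi = sp.Function('varphi')(t, r)

# free radial wave operator applied to varphi
free = sp.diff(phi, t, 2) - sp.diff(phi, r, 2) - 2*sp.diff(phi, r)/r + 2*phi/r**2

# the transformed variable: hat(varphi) = r varphi_r + 2 varphi
phihat = r*sp.diff(phi, r) + 2*phi

# identity: hat(varphi)_tt - hat(varphi)_rr = (1/r) d_r ( r^2 * free )
lhs = sp.diff(phihat, t, 2) - sp.diff(phihat, r, 2)
rhs = sp.diff(r**2*free, r)/r

assert sp.simplify(lhs - rhs) == 0
```

In particular, if $\varphi$ solves the free equation then the right–hand side vanishes and $\hat{\varphi}$ solves the one–dimensional wave equation, as claimed.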
Finally, for initial data $(f,g) \in C^3[0,\frac{3}{2}] \times C^2[0,\frac{3}{2}]$ with $f(0)=g(0)=0$, we define another norm $\|\cdot\|_{{\mathcal}{E}'}$ by $$\begin{aligned}
\|(f,g)\|_{{\mathcal}{E}'}^2:=&\int_0^{3/2} |rf'''(r)+4f''(r)|^2 r^2 dr+\int_0^{3/2} |rf''(r)+3f'(r)|^2 dr \\
&+\int_0^{3/2} |r^2 g''(r)+4r g'(r)+2 g(r)|^2 dr.\end{aligned}$$ The norm $\|\cdot\|_{{\mathcal}{E}'}$ is stronger than $\|\cdot\|_{{\mathcal}{E}(\frac{3}{2})}$ in the sense that it requires one additional derivative. This stronger norm is needed for certain technical reasons which will become clear below. Now we are ready to formulate our main result.
\[thm:main\] Assume that $\psi^T$ is mode stable and denote by $s_0$ the real part of the first [^1] stable eigenvalue. Let $\varepsilon>0$ be arbitrary but so small that $\omega:=\max\{-\frac{1}{2},s_0\}+\varepsilon<0$. Furthermore, let $(f,g) \in C^3[0,\frac{3}{2}] \times C^2[0,\frac{3}{2}]$ be initial data with $f(0)=g(0)=0$ and $$\|(f,g)-(\psi^1(0,\cdot),\psi_t^1(0,\cdot))\|_{{\mathcal}{E}'}<\delta$$ for a sufficiently small $\delta>0$. Then there exists a $T>0$ close to $1$ such that the Cauchy problem $$\left \{ \begin{array}{l}
\psi_{tt}(t,r)-\psi_{rr}(t,r)-\frac{2}{r}\psi_r(t,r)+\frac{\sin(2\psi(t,r))}{r^2}=0, \quad t \in (0,T),
r \in [0,T-t] \\
\psi(0,r)=f(r), \psi_t(0,r)=g(r), \quad r \in [0,T]
\end{array} \right .$$ has a unique solution $\psi$ that satisfies $$\|(\psi(t,\cdot),\psi_t(t,\cdot))-(\psi^T(t,\cdot),\psi_t^T(t,\cdot))\|_{{\mathcal}{E}(T-t)} \leq C_\varepsilon
|T-t|^{-\frac{1}{2}+|\omega|}$$ for all $t \in [0,T)$ where $C_\varepsilon>0$ is a constant that depends on $\varepsilon$.
Several remarks are in order.
- As usual, by a “solution” $\psi$ we mean a function that solves the equation in an appropriate weak sense and not necessarily in the sense of classical derivatives.
- According to Eq. , one should actually normalize the estimate from Theorem \[thm:main\] to $$|T-t|^{\frac{1}{2}}\|(\psi(t,\cdot),\psi_t(t,\cdot))-(\psi^T(t,\cdot),\psi_t^T(t,\cdot))\|_{{\mathcal}{E}(T-t)} \leq C_\varepsilon
|T-t|^{|\omega|}$$ for $t \in [0,T)$ which shows that the solution $\psi$ converges to $\psi^T$ in the backward lightcone of the blow up point $(T,0)$. Consequently, Theorem \[thm:main\] tells us that, if we start with initial data that are sufficiently close to $(\psi^1(0,\cdot),\psi_t^1(0,\cdot))$, then the solution $\psi$ blows up in a self–similar manner via $\psi^T$. In this sense, the blow up described by $\psi^T$ is stable. It is clear that even very small (generic) perturbations of the initial data $(\psi^1(0, \cdot),\psi_t^1(0,\cdot))$ will change the blow up time of the solution. That is why one has to adjust $T$.
- It should be emphasized that the rate of convergence to the attractor is dictated by the first stable eigenvalue. This complies with naive expectations and previous heuristics and numerics in the physics literature, e.g., [@bizon99].
- The radius $\frac{3}{2}$ in the smallness condition for the initial data is more or less an arbitrary choice. The problem is that one actually needs to prescribe the data on the interval $[0,T]$, but $T$ is not known in advance, which is circular. However, since our argument yields a $T$ close to $1$, one may assume that $T$ is always smaller than $\frac{3}{2}$.
- We have to require one more derivative of the initial data than we actually control in the time evolution. This is for technical reasons.
- As usual, one does not really need classical derivatives of the data $(f,g)$ — weak derivatives are fine too, as long as the norm $\|(f,g)\|_{{\mathcal}{E}'}$ is well–defined. In other words, the initial data may be taken from the completion of the space $$\left \{(f,g)\in C^3[0,\tfrac{3}{2}] \times C^2[0,\tfrac{3}{2}]: f(0)=g(0)=0 \right \}$$ with respect to the norm $\|\cdot\|_{{\mathcal}{E}'}$. It is worth noting here that the norm $\|\cdot\|_{{\mathcal}{E}'}$ is strong enough for the boundary conditions to “survive”, cf. Lemma \[lem:Hcont\] below.
- It seems to be difficult to rigorously *prove* mode stability of the ground state self–similar solution. However, this is a linear ODE problem and there are very reliable numerics as well as partial rigorous results that leave no doubt that $\psi^T$ is mode stable (cf. [@DSA], Sec. 3). Nevertheless, it would be desirable to have a proof for the mode stability and we plan to revisit this problem elsewhere.
Outline of the proof
--------------------
The proof of Theorem \[thm:main\] is functional analytic and the main tools we use are the linear perturbation theory developed in the companion paper [@DSA] as well as the implicit function theorem on Banach spaces. The implicit function theorem is used to “push” the linear results from [@DSA] to the nonlinear level. Consequently, our approach is perturbative in the sense that the nonlinearity is viewed as a perturbation of the linear problem.
In a first step, we write the Cauchy problem in Theorem \[thm:main\] as an ODE on a suitable Hilbert space of the form $$\label{eq:maincssopintro}
\left \{ \begin{array}{l}
\frac{d}{d \tau}\Phi(\tau)=L\Phi(\tau)+{\mathbf}{N}(\Phi(\tau)) \mbox{ for }\tau>-\log T \\
\Phi(-\log T)={\mathbf}{U}({\mathbf}{v},T)
\end{array} \right .$$ which is, in fact, an operator formulation of Eq. since we are working in similarity coordinates $(\tau,\rho)$. The field $\Phi$ describes a nonlinear perturbation of $\psi^T$. Here, the linear operator $L$ emerges from the linearization of the equation around the fundamental self–similar solution $\psi^T$. The properties of $L$ have been studied thoroughly in [@DSA]. Furthermore, the nonlinear operator ${\mathbf}{N}$ results from the nonlinear remainder of the equation and the free data $(f,g)$ from Theorem \[thm:main\] define the vector ${\mathbf}{v}$. In fact, ${\mathbf}{v}$ are the data relative to $\psi^1$. Up to some variable transformations we essentially have ${\mathbf}{v} \approx (f,g)-\psi^1[0]$ and $${\mathbf}{U}({\mathbf}{v},T) \approx {\mathbf}{v}+\psi^1[0]-\psi^T[0]=(f,g)-\psi^T[0]$$ where $\psi[0]$ is shorthand for $(\psi(0,\cdot),\psi_t(0,\cdot))$. Consequently, the smallness condition in Theorem \[thm:main\] ensures that ${\mathbf}{v}$ is small. We apply Duhamel’s formula and time translation in order to derive an integral equation for solutions of Eq. of the form $$\label{eq:maincssopmildintro}
\Psi(\tau)=S(\tau){\mathbf}{U}({\mathbf}{v},T)+\int_0^\tau S(\tau-\tau'){\mathbf}{N}(\Psi(\tau'))d\tau', \quad \tau \geq 0$$ where $\Psi(\tau)=\Phi(\tau-\log T)$. Here, $S$ is the semigroup generated by $L$, i.e., the solution operator to the linearized problem. The existence of $S$ has been proved in [@DSA] and, moreover, it has been shown that there exists a projection $P$ with one–dimensional range that commutes with $S(\tau)$ such that $S$ satisfies the estimates $$\|S(\tau)P\|\lesssim e^{\tau}, \quad \|S(\tau)(1-P)\|\lesssim e^{-|\omega|\tau}$$ for all $\tau \geq 0$ with $\omega$ from Theorem \[thm:main\], see Theorem \[thm:linear\] below. In other words, the subspace $\mathrm{rg}P$ is unstable for the linear time evolution. In order to compensate for this instability, we first construct a solution to a modified equation $$\begin{aligned}
\Psi(\tau)=&S(\tau){\mathbf}{U}({\mathbf}{v},T)-e^\tau P \left [{\mathbf}{U}({\mathbf}{v},T)+\int_0^\infty e^{-\tau'}{\mathbf}{N}(\Psi(\tau'))d\tau' \right ] \\
&+\int_0^\tau S(\tau-\tau'){\mathbf}{N}(\Psi(\tau'))d\tau', \quad \tau \geq 0.\end{aligned}$$ The “correction” $${\mathbf}{F}({\mathbf}{v},T):=P \left [{\mathbf}{U}({\mathbf}{v},T)+\int_0^\infty e^{-\tau'}{\mathbf}{N}(\Psi(\tau'))d\tau' \right ]$$ suppresses the instability of the solution. We remark that some aspects of this construction are inspired by the work of Krieger and Schlag on the critical wave equation [@schlag]. An application of the implicit function theorem yields the existence of a solution $\Psi$ to the modified equation that decays like $e^{-|\omega|\tau}$ provided the data ${\mathbf}{v}$ are small and $T$ is sufficiently close to $1$. Thus, we retain the linear decay on the nonlinear level. In a second step, we show that, for given small ${\mathbf}{v}$, there exists a $T$ close to $1$ such that ${\mathbf}{F}({\mathbf}{v},T)$ is in fact zero and thereby, we obtain a decaying solution to Eq. . This is again accomplished by an application of the implicit function theorem. Here, it is crucial that the Fréchet derivative $\partial_T {\mathbf}{U}({\mathbf}{0},T)|_{T=1}$ is an element of the unstable subspace $\mathrm{rg}P$. In other words, the unstable subspace is spanned by the tangent vector at $T=1$ to the curve $T \mapsto {\mathbf}{U}({\mathbf}{0},T) \approx \psi^T[0]$ in the initial data space. This curve is generated by varying the blow up time $T$ and therefore, the instability of the linear evolution is caused by the time translation symmetry of the wave maps equation.
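The mechanism of the second step, adjusting $T$ so that the unstable component of the data vanishes, can be illustrated by a finite–dimensional toy model. Everything below (the matrix, the data curve, the rates) is invented purely for illustration and has nothing to do with the actual operator $L$ of the paper:

```python
import numpy as np

# Toy model: one unstable direction (rate 1) and one stable direction (rate -1/2),
# mimicking the spectral splitting S(tau)P ~ e^tau, S(tau)(1-P) ~ e^{-tau/2}.
Lmat = np.diag([1.0, -0.5])
P = np.diag([1.0, 0.0])                 # projection onto the unstable subspace

def S(tau, u):
    return np.exp(np.diag(Lmat)*tau)*u  # semigroup e^{tau L} applied to u

# Hypothetical data curve T -> U(v, T); its tangent at T = 1 spans the
# unstable direction, as for the curve T -> psi^T[0] in the paper.
def U(v, T):
    return v + np.array([T - 1.0, np.sin(T - 1.0)**2])

v = np.array([0.01, 0.02])              # small perturbation of the data
T = 1.0 - v[0]                          # implicit-function step: kill P U(v, T)
u = U(v, T)
assert abs((P @ u)[0]) < 1e-14          # unstable component is gone

# With the unstable component removed, the evolution decays at the stable rate.
for tau in (0.0, 2.0, 5.0):
    assert np.linalg.norm(S(tau, u)) <= np.linalg.norm(u)*np.exp(-0.5*tau) + 1e-14
```

The point is that the free parameter $T$ moves the data along a curve whose tangent spans the unstable subspace, so a one–dimensional instability can be absorbed by a one–parameter adjustment.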
Notations and conventions
-------------------------
For Banach spaces $X,Y$ we denote by ${\mathcal}{B}(X,Y)$ the Banach space of bounded linear operators from $X$ to $Y$. As usual, we write ${\mathcal}{B}(X)$ if $X=Y$. For a Fréchet differentiable map $F: U \times V \subset X \times Y \to Z$ where $X,Y,Z$ are Banach spaces and $U \subset X$, $V \subset Y$ are open subsets, we denote by $D_1 F: U \times V \to {\mathcal}{B}(X,Z)$, $D_2 F: U\times V \to {\mathcal}{B}(Y,Z)$ the respective partial Fréchet derivatives and by $DF$ the (full) Fréchet derivative of $F$. Vectors are denoted by bold letters and the individual components are numbered by lower indices, e.g., ${\mathbf}{u}=(u_1,u_2)$. We do not distinguish between row and column vectors. Furthermore, for $a,b \in \mathbb{R}$ we use the notation $a \lesssim b$ if there exists a constant $c>0$ such that $a \leq cb$ and we write $a \simeq b$ if $a \lesssim b$ and $b \lesssim a$. Unless otherwise stated, it is implicitly assumed that the constant $c$ is absolute, i.e., it does not depend on any of the involved quantities in the inequality. The symbol $\sim$ is reserved for asymptotic equality. Finally, the letter $C$ (possibly with indices) denotes a generic nonnegative constant which is not supposed to have the same value at each occurrence.
Review of the linear perturbation theory
========================================
For the convenience of the reader we review the results recently obtained in [@DSA] which prepare the ground for the nonlinear stability theory of the self–similar wave map $\psi^T$ elaborated in the present paper.
First order formulation
-----------------------
Roughly speaking, the main outcome of our paper [@DSA] is a well–posed initial value formulation of the linear evolution problem . In order to accomplish this, the equation is first transformed to a first–order system in time by defining the new variables $$\begin{aligned}
\varphi_1(t,r)&:=\frac{r^2 \varphi_t(t,r)}{T-t} \\
\varphi_2(t,r)&:=r\varphi_r(t,r)+2 \varphi(t,r).\end{aligned}$$ This is motivated by the discussion in the introduction, observe in particular that $\varphi_2$ is nothing but $\hat{\varphi}$ in Eq. . The variable $\varphi_1$ is an appropriately scaled time derivative. As before, the inverse transformation is $$\varphi(t,r)=\frac{1}{r^2}\int_0^r r' \varphi_2(t,r')dr'$$ and by definition, the free operator is given by $$\varphi_{rr}+\tfrac{2}{r}\varphi_r-\tfrac{2}{r^2}\varphi=\tfrac{1}{r}\partial_r \varphi_2
-\tfrac{1}{r^2}\varphi_2.$$ Consequently, we obtain $$\begin{aligned}
\label{eq:sys1}
\partial_t \varphi_1&=\frac{r^2 \varphi_{tt}}{T-t}+\frac{r^2 \varphi_t}{(T-t)^2} \\
&=\frac{r^2}{T-t}\left [\varphi_{rr}+\frac{2}{r}\varphi_r-\frac{2}{r^2}\varphi-\frac{2\cos(2\psi^T)-2}{r^2}\varphi-\frac{N_T(\varphi)}{r^2} \right ]+\frac{\varphi_1}{T-t} \nonumber \\
&=\frac{1}{T-t}\left [ \varphi_1+r\partial_r \varphi_2-\varphi_2-\frac{2\cos(2\psi^T)-2}
{r^2}\int r \varphi_2-N_T \left (\tfrac{1}{r^2}\int r \varphi_2 \right ) \right ] \nonumber\end{aligned}$$ where $\int r \varphi_2$ is shorthand for $\int_0^r r' \varphi_2(t,r')dr'$ and, as the reader has already noticed, we are a bit sloppy with our notation and omit the arguments of the functions occasionally. The second equation in the first–order system is simply given by the identity $$\label{eq:sys2}
\partial_t \varphi_2=\tfrac{1}{r}\partial_r (r^2 \varphi_t)=\frac{T-t}{r}\partial_r \varphi_1.$$ Furthermore, the initial data $(\psi(0,\cdot),\psi_t(0,\cdot))=(f,g)$ from Eq. transform into $$\begin{aligned}
\label{eq:sys3}
\varphi_1(0,r)&=\tfrac{r^2}{T}\left [g(r)-\psi_t^T(0,r) \right ] \\
\varphi_2(0,r)&=r \left [f'(r)-\psi_r^T(0,r) \right ]+2 \left [f(r)-\psi^T(0,r) \right ]. \nonumber\end{aligned}$$ Now we rewrite Eqs. , and in similarity coordinates $$\tau=-\log(T-t),\quad \rho=\frac{r}{T-t}$$ by setting $$\phi_j(\tau,\rho):=\varphi_j(T-e^{-\tau}, e^{-\tau}\rho), \quad j=1,2$$ and recalling that $\partial_t=e^\tau (\partial_\tau+\rho \partial_\rho)$, $\partial_r=e^\tau \partial_\rho$. Furthermore, note that $$\frac{1}{r^2}\int_0^r r'\varphi_2(t,r')dr'=\frac{1}{\rho^2}\int_0^\rho \rho' \phi_2(\tau,\rho')d\rho'$$ and thus, we arrive at the system $$\label{eq:main1stcss}
\left \{ \begin{array}{l}
\left. \begin{array}{l}
\partial_\tau \phi_1=-\rho \partial_\rho \phi_1+\phi_1+\rho \partial_\rho \phi_2
-\phi_2-\frac{V(\rho)-2}{\rho^2}\int \rho \phi_2-N_T \left ( \rho^{-2}\int \rho \phi_2 \right )
\\
\partial_\tau \phi_2=\frac{1}{\rho}\partial_\rho \phi_1-\rho \partial_\rho \phi_2 \end{array} \right \}
\mbox{ in } {\mathcal}{Z}_T \\
\left. \begin{array}{l}
\phi_1(-\log T,\rho)=T\rho^2\left [g(T\rho)-\psi^T_t(0,T\rho) \right ] \\
\phi_2(-\log T,\rho)=T\rho \left [f'(T\rho)-\psi^T_r(0,T\rho) \right ] +2 \left [f(T\rho)-\psi^T(0,T\rho)
\right ]
\end{array} \right \}
\mbox{ for } \rho \in [0,1]
\end{array} \right .$$ where ${\mathcal}{Z}_T:=\{(\tau,\rho): \tau > -\log T, \rho \in [0,1]\}$ and $V$ is given by Eq. . The system is equivalent to the original Cauchy problem . We again recall that the original wave map $\psi$ can be reconstructed from $\phi_2$ by the formula $$\label{eq:origpsi}
\psi(t,r)=\psi^T(t,r)+\tfrac{1}{r^2}\int_0^r r' \phi_2\left (-\log(T-t), \tfrac{r'}{T-t} \right )dr'.$$ Furthermore, for the time derivative of the original field we have $$\label{eq:origpsit}
\psi_t(t,r)=\psi_t^T(t,r)+\tfrac{T-t}{r^2}\phi_1\left (-\log(T-t),\tfrac{r}{T-t} \right ).$$
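The operator identities $\partial_t=e^\tau (\partial_\tau+\rho \partial_\rho)$ and $\partial_r=e^\tau \partial_\rho$ used in this derivation follow from the chain rule for $\tau=-\log(T-t)$, $\rho=\frac{r}{T-t}$. They can be sanity–checked symbolically; the profile below is arbitrary and serves only to make the check explicit:

```python
import sympy as sp

t, r, T = sp.symbols('t r T', positive=True)
tau, rho = -sp.log(T - t), r/(T - t)    # similarity coordinates

# an arbitrary smooth sample profile phi(tau, rho)
def phi(a, b):
    return sp.exp(-a/2)*b**3/(1 + b**2)

a, b = sp.symbols('a b', positive=True)
varphi = phi(tau, rho)                  # the same field expressed in (t, r)
phi_tau = sp.diff(phi(a, b), a).subs({a: tau, b: rho})
phi_rho = sp.diff(phi(a, b), b).subs({a: tau, b: rho})

# partial_t = e^tau (partial_tau + rho partial_rho),  partial_r = e^tau partial_rho
assert sp.simplify(sp.diff(varphi, t) - sp.exp(tau)*(phi_tau + rho*phi_rho)) == 0
assert sp.simplify(sp.diff(varphi, r) - sp.exp(tau)*phi_rho) == 0
```

Note that $e^\tau=\frac{1}{T-t}$, so both identities encode the expected blow up of the derivatives as $t \to T-$.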
In [@DSA] we have studied the corresponding *linearized problem*, that is, $$\label{eq:main1stcsslin}
\left \{ \begin{array}{l}
\left. \begin{array}{l}
\partial_\tau \phi_1=-\rho \partial_\rho \phi_1+\phi_1+\rho \partial_\rho \phi_2
-\phi_2-\frac{V(\rho)-2}{\rho^2}\int \rho \phi_2
\\
\partial_\tau \phi_2=\frac{1}{\rho}\partial_\rho \phi_1-\rho \partial_\rho \phi_2 \end{array} \right \}
\mbox{ in } {\mathcal}{Z}_T \\
\left. \begin{array}{l}
\phi_1(-\log T,\rho)=T\rho^2\left [g(T\rho)-\psi^T_t(0,T\rho) \right ] \\
\phi_2(-\log T,\rho)=T\rho \left [f'(T\rho)-\psi^T_r(0,T\rho) \right ] +2 \left [f(T\rho)-\psi^T(0,T\rho)
\right ]
\end{array} \right \}
\mbox{ for } \rho \in [0,1]
\end{array} \right .$$ which is obtained from by dropping the nonlinearity. The concrete form of the initial data is irrelevant on the linear level and therefore, we will omit it in this section.
Operator formulation and well–posedness of the linearized problem {#sec:oplin}
-----------------------------------------------------------------
The basic philosophy of both the present paper and [@DSA] is to formulate the evolution problems and as ordinary differential equations on suitable Hilbert spaces. We would like to emphasize that *the following choice of a Hilbert space is central and of crucial importance for the whole construction*.
We define the Hilbert space ${\mathcal}{H}$ as the completion of $$\{u \in C^2[0,1]: u(0)=u'(0)=0\} \times \{u \in C^1[0,1]: u(0)=0\}$$ with respect to the norm $\|\cdot\|:=\sqrt{(\cdot|\cdot)}$ induced by the inner product $$({\mathbf}{u}|{\mathbf}{v}):=\int_0^1 u_1'(\rho)\overline{v_1'(\rho)}\frac{d\rho}{\rho^2}+\int_0^1 u_2'(\rho)\overline{v_2'(\rho)}d\rho.$$
In order to motivate the boundary conditions, recall that $$\varphi_1(t,r)=\frac{r^2 \varphi_t(t,r)}{T-t}, \quad \varphi_2(t,r)=r\varphi_r(t,r)+2\varphi(t,r)$$ and sufficiently regular solutions $\varphi$ of Eq. satisfy $\varphi_t(t,r)=O(r)$ as $r \to 0+$. The Hilbert space ${\mathcal}{H}$ is supposed to hold the solution $(\phi_1, \phi_2)$ of Eq. (or Eq. ) at a given instant of time and, since $\phi_j$ is nothing but $\varphi_j$ in similarity coordinates, the boundary conditions are natural requirements that follow from the behavior of regular solutions of Eq. at the origin. Note also that the norm $\|\cdot\|$ corresponds exactly to the local energy of $\hat{\varphi}$ as discussed in the introduction, cf. Eq. . We recall a convenient density result.
\[lem:Hdense\] The set $C^\infty_c(0,1] \times C^\infty_c(0,1]$ is dense in ${\mathcal}{H}$.
See [@DSA].
Furthermore, the space ${\mathcal}{H}$ is continuously embedded in $C[0,1] \times C[0,1]$.
\[lem:Hcont\] Let ${\mathbf}{u} \in {\mathcal}{H}$. Then ${\mathbf}{u} \in C[0,1] \times C[0,1]$ and we have the estimate $$\|{\mathbf}{u}\|_{L^\infty(0,1) \times L^\infty(0,1)} \lesssim \|{\mathbf}{u}\|$$ for all ${\mathbf}{u} \in {\mathcal}{H}$. In particular, every ${\mathbf}{u} \in {\mathcal}{H}$ satisfies the boundary condition ${\mathbf}{u}(0)={\mathbf}{0}$.
See [@DSA].
Next we define a differential operator $\tilde{L}: {\mathcal}{D}(\tilde{L}) \subset {\mathcal}{H} \to {\mathcal}{H}$ on a suitable dense domain ${\mathcal}{D}(\tilde{L})$ by setting $$\tilde{L}{\mathbf}{u}(\rho):=\left (\begin{array}{c}
-\rho u_1'(\rho)+u_1(\rho)+\rho u_2'(\rho)-u_2(\rho)-V_1(\rho)\int_0^\rho \rho' u_2(\rho')d\rho' \\
\frac{1}{\rho}u_1'(\rho)-\rho u_2'(\rho)
\end{array} \right )$$ where the *regularized potential* $V_1$ is given by $$V_1(\rho):=\frac{V(\rho)-2}{\rho^2}=-\frac{16}{(1+\rho^2)^2}.$$ We remark that in [@DSA] we actually start with a *free operator* $\tilde{L}_0$. In the notation of [@DSA] we have $\tilde{L}=\tilde{L}_0+L'$ where $L'$ contains the potential term and $L' \in {\mathcal}{B}({\mathcal}{H})$. Consequently, $$\label{eq:linoppre}
\left \{ \begin{array}{l}
\frac{d}{d\tau}\Phi(\tau)=\tilde{L}\Phi(\tau) \mbox{ for }\tau>-\log T \\
\Phi(-\log T)={\mathbf}{u}
\end{array} \right .$$ is an operator formulation of with $\Phi: [-\log T, \infty) \to {\mathcal}{H}$ and ${\mathbf}{u}$ are the initial data. We have the following well–posedness result from [@DSA].
\[prop:gen\] The operator $\tilde{L}$ is closable and its closure $L$ generates a strongly continuous one–parameter semigroup $S: [0,\infty) \to {\mathcal}{B}({\mathcal}{H})$ satisfying $$\|S(\tau)\|\leq e^{(-\frac{1}{2}+\|L'\|)\tau}$$ for all $\tau \geq 0$. In particular, the Cauchy problem $$\left \{ \begin{array}{l}
\frac{d}{d\tau}\Phi(\tau)=L\Phi(\tau) \mbox{ for }\tau>-\log T \\
\Phi(-\log T)={\mathbf}{u}
\end{array} \right .$$ for ${\mathbf}{u} \in {\mathcal}{D}(L)$ has a unique solution which is given by $$\Phi(\tau)=S(\tau+\log T){\mathbf}{u}$$ for all $\tau \geq -\log T$.
The proof of Proposition \[prop:gen\] consists of an application of the Lumer–Phillips Theorem of semigroup theory, see e.g., [@engel]. The rest of [@DSA] is concerned with a detailed spectral analysis of $L$ in order to improve the growth bound for $S(\tau)$.
As already mentioned in the introduction, the linearized problem Eq. has an exponentially growing solution which emerges from the time translation symmetry of the original equation. In the operator formulation, this is reflected by an unstable eigenvalue $\lambda=1$ in the spectrum of $L$. The associated eigenspace is one–dimensional and spanned by the *gauge mode* $${\mathbf}{g}(\rho):=\frac{1}{(1+\rho^2)^2}\left ( \begin{array}{c}
2 \rho^3 \\ \rho(3+\rho^2) \end{array} \right ).$$ In order to formulate the main result of [@DSA], we need to recall the definition of the spectral bound $s_0$.
\[def:sb\] The spectral bound $s_0$ is defined as $$s_0:=\sup\{{\mathrm{Re}}\lambda, \lambda \not=1: \mbox{ Eq.~\eqref{eq:evodeintro} has a nontrivial solution }
u_\lambda \in C^\infty[0,1] \}.$$
In other words, $s_0$ is the real part of the largest [^2] eigenvalue (in the sense of Sec. \[sec:intromainresult\]) apart from the gauge eigenvalue $\lambda=1$. We remark that in [@DSA] it is proved that $s_0<\frac{1}{2}$; however, numerical results (cf. [@DSA], Sec. 3) strongly suggest that in fact $s_0 \approx -0.54$. The main result from [@DSA] gives a satisfactory description of the linearized time evolution and, via $s_0$, establishes the link between the linear Cauchy problem and the mode stability ODE .
\[thm:linear\] Let $\varepsilon>0$ be arbitrary. There exists a projection $P \in {\mathcal}{B}({\mathcal}{H})$ onto $\langle {\mathbf}{g} \rangle$ which commutes with the semigroup $S(\tau)$ (and $PL \subset LP$) such that $$\|S(\tau)P{\mathbf}{f}\|=e^\tau \|P{\mathbf}{f}\|$$ as well as $$\|S(\tau)(1-P){\mathbf}{f}\| \leq C_\varepsilon e^{(\max\{-
\frac{1}{2},s_0\}+\varepsilon)\tau}\|(1-P){\mathbf}{f}\|$$ for all $\tau \geq 0$ and all ${\mathbf}{f} \in {\mathcal}{H}$ where $C_\varepsilon>0$ is a constant that depends on $\varepsilon$.
This important result tells us that the fundamental self–similar solution $\psi^T$ is linearly stable if one “ignores” the gauge instability and assumes that it is mode stable. We remark that the proof of Theorem \[thm:linear\] is highly nontrivial since the operator $L$ is not normal. This inconvenient fact introduces a number of technical difficulties one has to overcome. In order to prove the existence of the spectral projection $P$ one first shows that $\lambda=1$ (the gauge eigenvalue) is isolated in the spectrum of $L$. This follows from a compactness argument. Then the existence of $P$ is a consequence of the standard formula $$P=\frac{1}{2\pi i}\int_\Gamma (\lambda-L)^{-1}d\lambda$$ where $\Gamma$ is a suitable curve in the resolvent set of $L$ that encloses the point $1$. However, since $L$ is not normal, one also has to deal with the possibility that the algebraic multiplicity of the gauge eigenvalue is larger than one. This has to be excluded by ODE analysis of the eigenvalue equation. Finally, the semigroup bounds on the stable subspace follow from a general abstract result. We refer the reader to [@DSA] where the proof is carefully elaborated in full detail.
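As a consistency check on the gauge instability discussed above, one can verify by direct computation that the gauge mode ${\mathbf}{g}$ is indeed an eigenfunction, i.e., that $\tilde{L}{\mathbf}{g}={\mathbf}{g}$ with $V_1(\rho)=-\frac{16}{(1+\rho^2)^2}$. A sympy sketch of this verification (our own check, not part of [@DSA]):

```python
import sympy as sp

rho, s = sp.symbols('rho s', positive=True)

# the gauge mode g = (g1, g2) and the regularized potential V1
g1 = 2*rho**3/(1 + rho**2)**2
g2 = rho*(3 + rho**2)/(1 + rho**2)**2
V1 = -16/(1 + rho**2)**2

# (L g)_1 = -rho g1' + g1 + rho g2' - g2 - V1(rho) * int_0^rho s g2(s) ds
Lg1 = (-rho*sp.diff(g1, rho) + g1 + rho*sp.diff(g2, rho) - g2
       - V1*sp.integrate(s*g2.subs(rho, s), (s, 0, rho)))
# (L g)_2 = g1'/rho - rho g2'
Lg2 = sp.diff(g1, rho)/rho - rho*sp.diff(g2, rho)

assert sp.simplify(Lg1 - g1) == 0   # first component of L g equals g1
assert sp.simplify(Lg2 - g2) == 0   # second component of L g equals g2
```

The integral term evaluates to $\int_0^\rho s\, g_2(s)\,ds=\frac{\rho^3}{1+\rho^2}$, which is what makes the cancellation work out.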
The nonlinear perturbation theory
=================================
In this section we study the full nonlinear problem . As already remarked, is equivalent to the original Cauchy problem for co–rotational wave maps. We proceed by a completely abstract approach, i.e., we use operator theoretic methods.
Estimates for the nonlinearity
------------------------------
Our first goal is to formulate the Cauchy problem as a nonlinear ODE on the Hilbert space ${\mathcal}{H}$. In order to do so, we need to understand the mapping properties of the nonlinearity. We are going to need Hardy’s inequality in the following form.
Let $\alpha>1$. Then $$\int_0^1 \frac{|u(\rho)|^2}{\rho^\alpha}d\rho \leq \left (\frac{2}{\alpha-1} \right)^2
\int_0^1 \frac{|u'(\rho)|^2}{\rho^{\alpha-2}}d\rho$$ for all $u \in C_c^\infty(0,1]$.
Integration by parts and Cauchy–Schwarz yield $$\begin{aligned}
\int_0^1 \frac{|u(\rho)|^2}{\rho^\alpha}d\rho&=
\left . -\frac{1}{\alpha-1}\frac{|u(\rho)|^2}{\rho^{\alpha-1}} \right |_0^1+\frac{1}{\alpha-1}\int_0^1 \frac{u'(\rho)\overline{u(\rho)}+u(\rho)\overline{u'(\rho)}}{\rho^{\alpha-1}}d\rho \\
&=-\frac{1}{\alpha-1}|u(1)|^2 +\frac{2}{\alpha-1}\int_0^1
\frac{{\mathrm{Re}}[u'(\rho)\overline{u(\rho)}]}{\rho^{\alpha-1}}d\rho \\
&\leq \frac{2}{\alpha-1}\int_0^1 \frac{|u(\rho)|}{\rho^{\alpha/2}}\frac{|u'(\rho)|}{\rho^{\alpha/2-1}}d\rho \\
&\leq \frac{2}{\alpha-1}\left (\int_0^1 \frac{|u(\rho)|^2}{\rho^{\alpha}}d\rho \right )^{1/2}
\left ( \int_0^1 \frac{|u'(\rho)|^2}{\rho^{\alpha-2}}d\rho \right )^{1/2}
\end{aligned}$$ for all $u \in C_c^\infty(0,1]$ which implies the claim.
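A quick numerical sanity check of the inequality with scipy quadrature; the sample functions below vanish at $\rho=0$ but are not compactly supported away from the origin, so strictly speaking the inequality applies to them only by a density/limiting argument:

```python
import numpy as np
from scipy.integrate import quad

def hardy_gap(u, du, alpha):
    # RHS minus LHS of Hardy's inequality; nonnegative if the inequality holds
    lhs, _ = quad(lambda r: u(r)**2/r**alpha, 0, 1)
    rhs, _ = quad(lambda r: du(r)**2/r**(alpha - 2), 0, 1)
    return (2/(alpha - 1))**2*rhs - lhs

# sample functions vanishing at rho = 0, together with their derivatives
tests = [
    (lambda r: r**2, lambda r: 2*r),
    (lambda r: r*np.sin(np.pi*r/2),
     lambda r: np.sin(np.pi*r/2) + 0.5*np.pi*r*np.cos(np.pi*r/2)),
]
for u, du in tests:
    for alpha in (2.0, 3.0):
        assert hardy_gap(u, du, alpha) >= 0
```

For instance, $u(\rho)=\rho^2$ with $\alpha=2$ gives LHS $=\frac13$ against the bound $4\int_0^1 4\rho^2\,d\rho=\frac{16}{3}$, so there is plenty of room.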
We will also frequently use the following general fact on continuous extensions.
\[lem:context\] Let $X,Y$ be Banach spaces with norms $\|\cdot\|_X$, $\|\cdot\|_Y$, respectively. Assume further that $\tilde{X} \subset X$ is a dense subset of $X$ and let $\tilde{F}: \tilde{X} \to Y$ be a mapping that satisfies the estimate $$\|\tilde{F}(x_1)-\tilde{F}(x_2)\|_Y \leq \|x_1-x_2\|_X^\alpha \gamma (\|x_1\|_X, \|x_2\|_X)$$ for all $x_1,x_2 \in \tilde{X}$ where $\gamma: [0,\infty) \times [0,\infty) \to [0,\infty)$ is a continuous function and $\alpha>0$. Then there exists a unique continuous mapping $F: X \to Y$ with $F(x)=\tilde{F}(x)$ for all $x \in \tilde{X}$ and $$\|F(x_1)-F(x_2)\|_Y \leq \|x_1-x_2\|_X^\alpha \gamma (\|x_1\|_X, \|x_2\|_X)$$ for all $x_1,x_2 \in X$.
Let $x \in X$. Then, by the assumed density of $\tilde{X}$, there exists a sequence $(x_j) \subset \tilde{X}$ with $\|x-x_j\|_X \to 0$ as $j \to \infty$. This implies $\|x_j\|_X \to \|x\|_X$ and from $$\|\tilde{F}(x_j)-\tilde{F}(x_k)\|_Y \leq \|x_j-x_k\|_X^\alpha \gamma(\|x_j\|_X, \|x_k\|_X) \to 0$$ as $j,k \to \infty$ we see that $(\tilde{F}(x_j))\subset Y$ is a Cauchy sequence. Since $Y$ is a Banach space, this Cauchy sequence has a limit $$y:=\lim_{j\to\infty}\tilde{F}(x_j)$$ and we define the mapping $F: X \to Y$ by $F(x):=y$. For this to be well–defined we have to ensure that $y$ is independent of the chosen sequence $x_j \to x$. In order to see this, let $(\tilde{x}_j) \subset \tilde{X}$ be another sequence with $\|x-\tilde{x}_j\|_X \to 0$ as $j \to \infty$. Then we have $$\|x_j-\tilde{x}_j\|_X \leq \|x_j-x\|_X+\|x-\tilde{x}_j\|_X \to 0$$ as $j \to \infty$ and thus, $$\begin{aligned}
\|y-\tilde{F}(\tilde{x}_j)\|_Y &\leq \|y-\tilde{F}(x_j)\|_Y+\|\tilde{F}(x_j)-\tilde{F}(\tilde{x}_j)\|_Y \\
& \leq \|y-\tilde{F}(x_j)\|_Y+\|x_j-\tilde{x}_j\|_X^\alpha \gamma(\|x_j\|_X, \|\tilde{x}_j\|_X) \to 0 \end{aligned}$$ as $j \to \infty$ and this shows that $\lim_{j \to \infty}\tilde{F}(\tilde{x}_j)=y$. Consequently, $F: X \to Y$ is well–defined. If $x \in \tilde{X}$ we have $F(x)=\tilde{F}(x)$ by definition. This shows that $F$ extends $\tilde{F}$ to all of $X$. Now let $\varepsilon \in (0,1)$ be arbitrary and choose $x_0,x\in X$ with $\|x_0-x\|_X \leq \varepsilon$. By construction of $F$, we can find $\tilde{x}_0,\tilde{x} \in \tilde{X}$ with $\|x_0-\tilde{x}_0\|_X \leq \varepsilon$, $\|x-\tilde{x}\|_X \leq \varepsilon$ such that $\|F(x_0)-\tilde{F}(\tilde{x}_0)\|_Y \leq \varepsilon$ and $\|F(x)-\tilde{F}(\tilde{x})\|_Y \leq \varepsilon$. This implies $$\|\tilde{x}_0-\tilde{x}\|_X\leq \|\tilde{x}_0-x_0\|_X+\|x_0-x\|_X+\|x-\tilde{x}\|_X \leq 3 \varepsilon$$ and we conclude $$\begin{aligned}
\|F(x_0)-F(x)\|_Y&\leq \|F(x_0)-\tilde{F}(\tilde{x}_0)\|_Y+\|\tilde{F}(\tilde{x}_0)-\tilde{F}(\tilde{x})\|_Y
+\|\tilde{F}(\tilde{x})-F(x)\|_Y \\
&\leq 2\varepsilon +\|\tilde{x}-\tilde{x}_0\|_X^\alpha \gamma(\|\tilde{x}\|_X, \|\tilde{x}_0\|_X)
\leq 2\varepsilon + (3\varepsilon)^\alpha \sup_{0 \leq s, t \leq \|x_0\|_X+4} \gamma(s,t)\end{aligned}$$ since $$\begin{aligned}
\|\tilde{x}\|_X &\leq \|x_0\|_X+\|\tilde{x}-x_0\|_X\leq \|x_0\|_X+\|\tilde{x}-\tilde{x}_0\|_X+\|\tilde{x}_0-x_0\|_X \\
&\leq \|x_0\|_X+4\varepsilon \leq \|x_0\|_X+4\end{aligned}$$ and $$\|\tilde{x}_0\|_X\leq \|x_0\|_X+\|\tilde{x}_0-x_0\|_X \leq \|x_0\|_X+\varepsilon \leq \|x_0\|_X+4$$ for any $\varepsilon \in (0,1)$. Consequently, we obtain $$\|F(x_0)-F(x)\|_Y \leq 2\varepsilon + C_{x_0}\varepsilon^\alpha$$ for a constant $C_{x_0}>0$ depending on $\|x_0\|_X$. This constant is finite since the continuous function $\gamma$ attains its maximum on the compact set $[0,\|x_0\|_X+4] \times [0,\|x_0\|_X+4]$. This shows that $F: X \to Y$ is continuous. Uniqueness of $F$ follows from continuity and the fact that $F$ and $\tilde{F}$ coincide on a dense subset. Finally, the claimed estimate for $F$ follows from the estimate for $\tilde{F}$ which carries over to the extension by continuity.
We start the analysis of the nonlinearity by defining an auxiliary operator $A$ on $C_c^\infty(0,1]$ by setting $$(Au)(\rho):=\frac{1}{\rho^2}\int_0^\rho \rho' u(\rho')d\rho'.$$ Observe that the operator $A$ appears in the argument of the nonlinearity $N_T$ in .
\[lem:estA1\] The operator $A$ satisfies the estimate $$|Au(\rho)|\lesssim \sqrt{\rho}\|u'\|_{L^2(0,1)}$$ for all $\rho \in (0,1)$ and all $u \in C_c^\infty(0,1]$.
By Cauchy–Schwarz and Hardy’s inequality we obtain $$\begin{aligned}
|Au(\rho)|&\leq \frac{1}{\rho^2}\int_0^\rho \rho'^2 \frac{|u(\rho')|}{\rho'}d\rho'
\leq \frac{1}{\rho^2} \left ( \int_0^\rho \rho'^4 d\rho' \right )^{1/2} \left (\int_0^\rho \frac{|u(\rho')|^2}{\rho'^2}d\rho' \right )^{1/2} \\
&\lesssim \sqrt{\rho}\left ( \int_0^1 |u'(\rho)|^2 d\rho \right )^{1/2}\end{aligned}$$ for all $u \in C_c^\infty(0,1]$.
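Tracking constants in the proof gives the explicit bound $|Au(\rho)|\leq \frac{2}{\sqrt{5}}\sqrt{\rho}\,\|u'\|_{L^2(0,1)}$: the factor $2$ comes from Hardy's inequality with $\alpha=2$ and $\big(\int_0^\rho \rho'^4 d\rho'\big)^{1/2}=\rho^{5/2}/\sqrt{5}$. A quick numerical check with an arbitrary sample $u$:

```python
import numpy as np
from scipy.integrate import quad

def A(u, rho):
    # (Au)(rho) = rho^{-2} * int_0^rho s u(s) ds
    val, _ = quad(lambda s: s*u(s), 0, rho)
    return val/rho**2

u = lambda s: s*np.sin(np.pi*s)                       # sample with u(0) = 0
du = lambda s: np.sin(np.pi*s) + np.pi*s*np.cos(np.pi*s)
norm_du = np.sqrt(quad(lambda s: du(s)**2, 0, 1)[0])  # ||u'||_{L^2(0,1)}

for rho in (0.1, 0.5, 1.0):
    assert abs(A(u, rho)) <= 2/np.sqrt(5)*np.sqrt(rho)*norm_du
```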
Another useful observation is the following.
\[lem:estA1a\] The operator $A$ satisfies the estimates $$\int_0^1 \frac{|Au(\rho)|^2}{\rho^2}d\rho \lesssim \int_0^1 |(Au)'(\rho)|^2 d\rho \lesssim
\|u'\|_{L^2(0,1)}^2$$ for all $u \in C^\infty_c(0,1]$.
Note that $u \in C^\infty_c(0,1]$ implies $Au \in C^\infty_c(0,1]$. Consequently, Hardy’s inequality yields $$\begin{aligned}
\int_0^1 \frac{|Au(\rho)|^2}{\rho^2}d\rho &\lesssim \int_0^1 |(Au)'(\rho)|^2 d\rho \\
&\lesssim
\int_0^1 \frac{1}{\rho^6} \left |\int_0^\rho \rho' u(\rho')d\rho' \right |^2 d\rho+\int_0^1
\frac{|u(\rho)|^2}{\rho^2} d\rho \\
&\lesssim \int_0^1 \frac{|u(\rho)|^2}{\rho^2}d\rho \\
&\lesssim \|u'\|_{L^2(0,1)}^2 \end{aligned}$$ for all $u \in C^\infty_c(0,1]$.
From now on we restrict ourselves to real–valued functions. For the following results recall that ${\mathcal}{H}=H_1 \times H_2$ where the respective norms $\|\cdot\|_1$, $\|\cdot\|_2$ on $H_1$, $H_2$ are given by $$\|u\|_1^2=\int_0^1 \frac{|u'(\rho)|^2}{\rho^2} d\rho, \quad \|u\|_2^2=\int_0^1 |u'(\rho)|^2 d\rho=\|u'\|_{L^2(0,1)}^2,$$ see Sec. \[sec:oplin\] and [@DSA]. Note that the nonlinearity $N_T$ in appears in the first component and takes an argument from the second component. Thus, it is supposed to map from $H_2$ to $H_1$. However, the nonlinearity is composed with the operator $A$ and Lemma \[lem:estA1\] suggests including a weighted $L^\infty$–piece in the norm. Consequently, we view the nonlinearity as a mapping $X \to H_1$ where the Banach space $X$ is defined as the completion of $C^\infty_c(0,1]$ with respect to the norm $$\|u\|_X:=\|u\|_2+\sup_{\rho \in (0,1)}\left |\frac{u(\rho)}{\sqrt{\rho}} \right |.$$ Note, however, that this is just for notational convenience since, in fact, by Cauchy–Schwarz, we have $$\frac{|u(\rho)|}{\sqrt{\rho}}\leq \frac{1}{\sqrt{\rho}}\int_0^\rho |u'(\rho')|d\rho' \leq \|u'\|_{L^2(0,1)}$$ for all $u \in C_c^\infty(0,1]$ and thus, the norms $\|\cdot\|_2$ and $\|\cdot\|_X$ are equivalent. According to Lemmas \[lem:estA1\] and \[lem:estA1a\], the operator $A$ extends to a bounded linear operator $A: H_2 \to X$.
For a suitable function $F$ of two variables, we define the *composition* or *Nemitsky operator* $\hat{F}$, acting on $C^\infty_c(0,1]$, by $$\hat{F}(u)(\rho):=F(u(\rho),\rho).$$ The following two results give sufficient conditions on $F$ such that the corresponding Nemitsky operator is continuous from $X$ to $H_1$.
\[lem:Nemcont\] Let $F: \mathbb{R} \times [0,1] \to \mathbb{R}$ be twice continuously differentiable in both variables and assume that $\partial_1 F(0,\rho)=0$ for all $\rho \in [0,1]$. Suppose further that there exists a constant $C>0$ such that $$|\partial_{1j}F(x,\rho)|\leq C(|x|+\rho)$$ for all $(x,\rho) \in \mathbb{R}\times [0,1]$ and $j=1,2$. Then we have $\hat{F}(u) \in H_1$ for any $u \in C^\infty_c(0,1]$ and there exists a continuous function $\gamma: [0,\infty) \times [0,\infty) \to [0,\infty)$ such that $\hat{F}$ satisfies the estimate $$\|\hat{F}(u)-\hat{F}(v)\|_1 \leq \|u-v\|_X \gamma(\|u\|_X, \|v\|_X)$$ for all $u,v \in C^\infty_c(0,1]$. As a consequence, $\hat{F}$ uniquely extends to a continuous map $\hat{F}: X \to H_1$ and the above estimate remains valid for all $u,v \in X$.
Let $u,v \in C^\infty_c(0,1]$. By definition of $\hat{F}$ and $\|\cdot\|_1$, we have $$\begin{aligned}
\label{eq:proofNemcont}
\|\hat{F}(u)-\hat{F}(v)\|_1^2 &\lesssim \int_0^1 \frac{|\partial_1 F(u(\rho),\rho)u'(\rho)-\partial_1 F(v(\rho),\rho)v'(\rho)|^2}{\rho^2}d\rho \\
&\quad + \int_0^1 \frac{|\partial_2 F(u(\rho),\rho)
-\partial_2 F(v(\rho),\rho)|^2}{\rho^2}d\rho \nonumber \\
&\lesssim \int_0^1 \frac{\left |\partial_1 F(u(\rho),\rho)
\left [u'(\rho)-v'(\rho) \right ] \right |^2}{\rho^2}d\rho \nonumber \\
&\quad +\int_0^1 \frac{\left | \left [\partial_1 F(u(\rho),\rho)-\partial_1 F(v(\rho),\rho) \right ] v'(\rho) \right |^2}{\rho^2}d\rho \nonumber \\
&\quad + \int_0^1 \frac{|\partial_2 F(u(\rho),\rho)
-\partial_2 F(v(\rho),\rho)|^2}{\rho^2}d\rho=:A_1+A_2+A_3 \nonumber
\end{aligned}$$ Now we put the terms containing $u',v'$ into $L^2$ and the rest into $L^\infty$, i.e., $$A_1\leq \sup_{\rho \in (0,1)}\left | \frac{\partial_1 F(u(\rho),\rho)}{\rho} \right |^2 \int_0^1 |u'(\rho)-v'(\rho)|^2 d\rho$$ and, analogously, $$\begin{aligned}
A_2 &\leq \sup_{\rho \in (0,1)}\left | \frac{\partial_1 F(u(\rho),\rho)-\partial_1 F(v(\rho),\rho)}{\rho} \right |^2 \int_0^1 |v'(\rho)|^2 d\rho \\
A_3 &\leq \sup_{\rho \in (0,1)}\left | \frac{\partial_2 F(u(\rho),\rho)
-\partial_2 F(v(\rho),\rho)}{\rho} \right |^2.
\end{aligned}$$ Using the assumption on $\partial_{1j}F$ and applying the fundamental theorem of calculus we obtain $$\begin{aligned}
\left | \partial_{j}F(x,\rho)-\partial_{j}F(y,\rho) \right |
&=\left |\int_0^1 \partial_{j1}F(y+t(x-y),\rho)(x-y)dt \right | \\
&\leq C|x-y| \int_0^1 \left (|y+t(x-y)|+\rho \right )dt \\
&\lesssim |x-y| \left ( |x|+|y|+\rho \right )\end{aligned}$$ for all $x,y \in \mathbb{R}$, all $\rho \in [0,1]$ and $j=1,2$. In particular, this implies $$\label{eq:proofNemest}
|\partial_1 F(x,\rho)|\lesssim |x| \left (|x|+\rho \right )$$ since $\partial_1 F(0,\rho)=0$ for all $\rho \in [0,1]$ by assumption. Consequently, we obtain $$\begin{aligned}
A_1 &\lesssim \sup_{\rho \in (0,1)}\left | \frac{|u(\rho)|\left (|u(\rho)|+
\rho \right )}{\rho} \right |^2 \|u-v\|_2^2 \\
&\lesssim \sup_{\rho \in (0,1)}\left |\frac{u(\rho)}{\sqrt{\rho}} \right |^2 \sup_{\rho \in (0,1)} \left | \frac{|u(\rho)|+\rho}{\sqrt{\rho}} \right |^2 \|u-v\|_2^2 \\
&\lesssim \|u\|_X^2 (\|u\|_X^2+1)\|u-v\|_X^2\end{aligned}$$ as well as $$\begin{aligned}
A_2 & \lesssim \sup_{\rho \in (0,1)}\left | \frac{|u(\rho)-v(\rho)|(|u(\rho)|+|v(\rho)|+\rho)}{\rho} \right |^2 \|v\|_2^2 \\
&\lesssim \sup_{\rho \in (0,1)}\left |\frac{|u(\rho)-v(\rho)|}{\sqrt{\rho}} \right |^2
\sup_{\rho \in (0,1)}\left |\frac{|u(\rho)|+|v(\rho)|+\rho}{\sqrt{\rho}} \right |^2 \|v\|_2^2 \\
&\lesssim \|u-v\|_X^2 \left (\|u\|_X^2+\|v\|_X^2+1 \right )\|v\|_X^2\end{aligned}$$ and, analogously, $$A_3 \lesssim \|u-v\|_X^2 \left (\|u\|_X^2+\|v\|_X^2+1 \right ).$$ Putting everything together we arrive at the claimed estimate. The existence of the extension with the stated properties follows from Lemma \[lem:context\].
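The local Lipschitz estimate of Lemma \[lem:Nemcont\] predicts that the $H_1$-distance $\|\hat{F}(u+\varepsilon w)-\hat{F}(u)\|_1$ shrinks linearly in $\varepsilon$ on bounded sets. The following Python sketch checks this scaling for the toy choice $F(x,\rho)=\rho x^2$ (our own example, not from the text), which satisfies the hypotheses since $\partial_1F(0,\rho)=0$ and $|\partial_{11}F(x,\rho)|=2\rho$, $|\partial_{12}F(x,\rho)|=2|x|$ are both bounded by $2(|x|+\rho)$.

```python
import math

def midpoint(f, a, b, n=20000):
    """Midpoint-rule quadrature for smooth integrands on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Toy F(x, rho) = rho * x^2 and test profiles (all our own choices).
u, du = (lambda r: r**2 * (1 - r)), (lambda r: 2*r - 3*r**2)
w, dw = (lambda r: r**2),           (lambda r: 2*r)

def h1_dist(eps):
    """|| Fhat(u + eps*w) - Fhat(u) ||_1 for F(x, rho) = rho * x^2."""
    def integrand(r):
        a, da = u(r) + eps * w(r), du(r) + eps * dw(r)
        # d/drho [ rho a^2 - rho u^2 ] = (a^2 - u^2) + 2 rho (a a' - u u')
        d = (a**2 - u(r)**2) + 2 * r * (a * da - u(r) * du(r))
        return (d / r) ** 2
    return math.sqrt(midpoint(integrand, 0.0, 1.0))

ratio = h1_dist(1e-3) / h1_dist(2e-3)
assert abs(ratio - 0.5) < 0.01  # distance scales linearly in eps
```

Halving $\varepsilon$ halves the distance, as a Lipschitz bound with an $\varepsilon$-independent constant demands.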
Next, we turn to the question of differentiability of the Nemitsky operator $\hat{F}$. Recall that the Gâteaux derivative $D_G\hat{F}(u): X \to H_1$ of $\hat{F}$ at $u \in X$ is defined as $$D_G \hat{F}(u)v:=\lim_{h \to 0}\frac{\hat{F}(u+hv)-\hat{F}(u)}{h}$$ provided the right–hand side exists and defines a bounded linear operator from $X$ to $H_1$. We give sufficient conditions for the Gâteaux derivative to exist.
\[lem:NemGdiff\] Let $F \in C^3(\mathbb{R} \times [0,1])$ satisfy the assumptions from Lemma \[lem:Nemcont\]. Assume further that there exists a constant $C>0$ such that $$|\partial_{11j} F(x,\rho)|\leq C$$ for all $(x,\rho) \in \mathbb{R} \times [0,1]$ and $j=1,2$. Then, at every $u \in C^\infty_c(0,1]$, the Nemitsky operator $\hat{F}: X \to H_1$ is Gâteaux differentiable and its Gâteaux derivative at $u$ applied to $v \in X$ is given by $$[D_G \hat{F}(u)v](\rho)=\partial_1 F(u(\rho),\rho)v(\rho).$$
According to Lemma \[lem:Nemcont\], $\hat{F}$ extends to a continuous operator $\hat{F}: X \to H_1$. For $u,v \in C^\infty_c(0,1]$ define $$[\tilde{D}_G \hat{F}(u)v](\rho):=\partial_1 F(u(\rho),\rho)v(\rho).$$ Inserting the definition of $\|\cdot\|_1$ yields $$\begin{aligned}
\|\tilde{D}_G \hat{F}(u)v\|_1^2&\lesssim \int_0^1 \frac{|\partial_{11}F(u(\rho),\rho)u'(\rho)v(\rho)|^2}{\rho^2}d\rho
+\int_0^1 \frac{|\partial_{12}F(u(\rho),\rho)v(\rho)|^2}{\rho^2}d\rho \\
&\quad +\int_0^1 \frac{|\partial_1 F(u(\rho),\rho)v'(\rho)|^2}{\rho^2}d\rho \\
&\lesssim \sup_{\rho \in (0,1)}\left |\frac{\partial_{11} F(u(\rho),\rho)v(\rho)}{\rho} \right |^2
\int_0^1 |u'(\rho)|^2 d\rho
+\sup_{\rho \in (0,1)}\left | \frac{\partial_{12}F(u(\rho),\rho)v(\rho)}{\rho} \right |^2 \\
&\quad +\sup_{\rho \in (0,1)}\left |\frac{\partial_1 F(u(\rho),\rho)}{\rho} \right |^2 \int_0^1 |v'(\rho)|^2 d\rho
\end{aligned}$$ and by assumption we have $$\sup_{\rho \in (0,1)} \left |\frac{\partial_{1j}F(u(\rho),\rho)v(\rho)}{\rho} \right |^2 \lesssim
\sup_{\rho \in (0,1)}\left |\frac{(|u(\rho)|+\rho)|v(\rho)|}{\rho} \right |^2
\lesssim \left (\|u\|_X^2+1 \right )\|v\|_X^2$$ for $j=1,2$. Furthermore, Eq. yields $$\sup_{\rho \in (0,1)}\left |\frac{\partial_1 F(u(\rho),\rho)}{\rho} \right |^2 \lesssim
\sup_{\rho \in (0,1)}\left |\frac{|u(\rho)|(|u(\rho)|+\rho)}{\rho} \right |^2
\lesssim \|u\|_X^2 \left (\|u\|_X^2+1 \right )$$ and we infer $$\|\tilde{D}_G \hat{F}(u)v\|_1 \leq C_u \|v\|_X$$ for all $v \in C^\infty_c(0,1]$ where $C_u>0$ is a constant that depends on $\|u\|_X$. Since $C^\infty_c(0,1]$ is dense in $X$, $\tilde{D}_G \hat{F}(u)$ extends to a bounded linear operator from $X$ to $H_1$. Recall that, for general $v \in X$, the extension is defined by $$\tilde{D}_G \hat{F}(u)v=\lim_{j \to \infty}\tilde{D}_G \hat{F}(u)v_j$$ where $(v_j) \subset C^\infty_c(0,1]$ is an arbitrary sequence with $v_j \to v$ in $X$ and the limit is taken in $H_1$. Since convergence in $H_1$ or $X$ implies pointwise convergence (see Lemma \[lem:Hcont\]), the representation $$[\tilde{D}_G \hat{F}(u)v](\rho)=\partial_1 F(u(\rho),\rho)v(\rho)$$ remains valid for general $v \in X$. We claim that $\tilde{D}_G \hat{F}(u)$ is the Gâteaux derivative of $\hat{F}$ at $u$. In order to prove this, we have to show that $$\lim_{h \to 0} \left \|\frac{\hat{F}(u+hv)-\hat{F}(u)}{h}-\tilde{D}_G \hat{F}(u)v \right \|_1=0$$ for any $v \in X$.
Assume for the moment that $v \in C^\infty_c(0,1]$. By applying the fundamental theorem of calculus we obtain $$\hat{F}(u+hv)(\rho)-\hat{F}(u)(\rho)=F(u(\rho)+hv(\rho), \rho)-F(u(\rho),\rho)
=\int_0^h \partial_1 F(u(\rho)+\tilde{h}v(\rho),\rho)v(\rho)d\tilde{h}$$ and this implies $$\begin{aligned}
g_h(\rho)&:=\frac{\hat{F}(u+hv)(\rho)-\hat{F}(u)(\rho)}{h}-[\tilde{D}_G \hat{F}(u)v](\rho) \\
&=\frac{v(\rho)}{h}\int_0^h \left [ \partial_1 F(u(\rho)+\tilde{h}v(\rho),\rho)-\partial_1 F(u(\rho),\rho) \right ] d\tilde{h}.\end{aligned}$$ Now observe that $$\begin{aligned}
&\frac{d}{d\rho}\int_0^h \left [ \partial_1 F(u(\rho)+\tilde{h}v(\rho),\rho)-\partial_1 F(u(\rho),\rho) \right ] d\tilde{h} \\
&=\int_0^h \partial_\rho \left [ \partial_1 F(u(\rho)+\tilde{h}v(\rho),\rho)-\partial_1 F(u(\rho),\rho) \right ] d\tilde{h} \end{aligned}$$ for all $h \in [0,h_0]$ by dominated convergence since the function $$(\tilde{h},\rho)\mapsto \partial_1 F(u(\rho)+\tilde{h}v(\rho),\rho)-\partial_1 F(u(\rho),\rho)$$ belongs to $C^1([0,h_0] \times [0,1])$ for some $h_0>0$. Consequently, we infer $$\begin{aligned}
|g_h'(\rho)|&\leq |v'(\rho)|\sup_{\tilde{h} \in (0,h)}\left |
\partial_1 F(u(\rho)+\tilde{h}v(\rho), \rho)-\partial_1 F(u(\rho),\rho) \right | \\
&\quad +|v(\rho)||u'(\rho)|\sup_{\tilde{h} \in (0,h)}\left |\partial_{11}F(u(\rho)+\tilde{h}v(\rho),\rho)-\partial_{11} F(u(\rho),\rho) \right | \\
&\quad + |v(\rho)|\sup_{\tilde{h} \in (0,h)}\left | \partial_{12}F(u(\rho)+\tilde{h}v(\rho),\rho)-\partial_{12}F(u(\rho),\rho) \right | \\
&\quad + |v'(\rho)||v(\rho)|\sup_{\tilde{h} \in (0,h)} \left |\tilde{h} \partial_{11}F(u(\rho)+\tilde{h}v(\rho),\rho) \right |.\end{aligned}$$ Note that the partial derivatives of $F$ commute since $F$ is three–times continuously differentiable. Furthermore, by the assumption on $\partial_{11j}F$, we have $$\begin{aligned}
&\sup_{\tilde{h} \in (0,h)} \left |
\partial_{j1}F(x+\tilde{h}y,\rho)-\partial_{j1} F(x,\rho) \right | \\
&=\sup_{\tilde{h} \in (0,h)} \left | \int_0^{\tilde{h}} \partial_{j11}
F(x+h_1 y,\rho)y\:dh_1 \right | \\
&\leq C h |y|\end{aligned}$$ for $j=1,2$ and, from the proof of Lemma \[lem:Nemcont\], $$\begin{aligned}
\sup_{\tilde{h} \in (0,h)}\left |\partial_1 F(x+\tilde{h}y,\rho)-\partial_1 F(x,\rho) \right |
&\leq \sup_{\tilde{h} \in (0,h)} \tilde{h}|y|\left (|x|+|x+\tilde{h}y|+\rho \right ) \\
&\lesssim h |y| \left (|x|+|y|+\rho \right ) \end{aligned}$$ for all $x,y \in \mathbb{R}$, all $\rho \in [0,1]$ and $h \in [0,h_0]$. Also, by the assumption on $\partial_{11}F$, we have $$\sup_{\tilde{h}\in (0,h)} \left |\tilde{h} \partial_{11}F(x+\tilde{h}y,\rho) \right |
\lesssim \sup_{\tilde{h}\in (0,h)} \tilde{h}\left (|x+\tilde{h}y|+\rho \right )\leq h \left (|x|+|y|
+\rho \right )$$ in the above stated domains for $x,y,\rho$ and $h$. Consequently, we obtain $$\begin{aligned}
\|g_h\|_1^2&=\int_0^1 \frac{|g_h'(\rho)|^2}{\rho^2}d\rho \\
&\lesssim h^2 \sup_{\rho \in (0,1)}\left |\frac{|v(\rho)|\left (|u(\rho)|+|v(\rho)|+\rho \right )}{\rho} \right |^2 \int_0^1 |v'(\rho)|^2 d\rho
+ h^2 \sup_{\rho \in (0,1)} \left | \frac{|v(\rho)|^2}{\rho} \right |^2 \int_0^1 |u'(\rho)|^2 d\rho \\
&\quad +h^2 \sup_{\rho \in (0,1)}\left | \frac{|v(\rho)|^2}{\rho} \right |^2
+h^2 \sup_{\rho \in (0,1)} \left |\frac{|v(\rho)|\left (|u(\rho)|+|v(\rho)|+\rho\right )}{\rho} \right |^2 \int_0^1 |v'(\rho)|^2 d\rho\end{aligned}$$ and this shows that $\|g_h\|_1\leq h\gamma(\|u\|_X, \|v\|_X)$ for a suitable continuous function $\gamma: [0,\infty) \times [0,\infty) \to [0,\infty)$. By Lemma \[lem:Nemcont\], we know that $\hat{F}$ extends to a continuous map from $X$ to $H_1$. Now let $v \in X$ be arbitrary and choose a sequence $(v_j) \subset C^\infty_c(0,1]$ with $v_j \to v$ in $X$. By the continuity of $\hat{F}$, this implies $\hat{F}(u+hv_j) \to \hat{F}(u+hv)$ in $H_1$ for any $u \in C^\infty_c(0,1]$ and $h \in (0,h_0)$. We conclude that $$\begin{aligned}
\left \|\frac{\hat{F}(u+hv)-\hat{F}(u)}{h}-\tilde{D}_G \hat{F}(u)v \right \|_1 &=
\lim_{j \to \infty} \left \|\frac{\hat{F}(u+hv_j)-\hat{F}(u)}{h}-\tilde{D}_G \hat{F}(u)v_j \right \|_1 \\
&\leq h\lim_{j\to \infty}\gamma(\|u\|_X, \|v_j\|_X)=h\gamma(\|u\|_X, \|v\|_X)\end{aligned}$$ by the continuity of $\tilde{D}_G \hat{F}(u)$ and $\gamma$. This shows that, for any $v \in X$, we have $$\label{eq:proofNemGdiff}
\left \|\frac{\hat{F}(u+hv)-\hat{F}(u)}{h}-\tilde{D}_G \hat{F}(u)v \right \|_1 \leq h\gamma(\|u\|_X, \|v\|_X) \to 0$$ as $h \to 0$ and therefore, $\tilde{D}_G\hat{F}(u)=D_G \hat{F}(u)$, the Gâteaux derivative of $\hat{F}$ at $u \in C^\infty_c(0,1]$.
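The pointwise formula $[D_G \hat{F}(u)v](\rho)=\partial_1 F(u(\rho),\rho)v(\rho)$ can be sanity-checked with difference quotients. In this sketch, $F(x,\rho)=\rho\sin^2 x$ and the profiles $u,v$ are our own toy choices; this $F$ meets the hypotheses of Lemma \[lem:NemGdiff\] because $\partial_1 F(x,\rho)=\rho\sin 2x$ vanishes at $x=0$ and the relevant higher partial derivatives are bounded as required.

```python
import math

# Toy data (our own choices). F(x, rho) = rho * sin(x)^2 gives
# d1F(x, rho) = rho * sin(2x), so d1F(0, rho) = 0 as required.
F   = lambda x, r: r * math.sin(x) ** 2
d1F = lambda x, r: r * math.sin(2 * x)

u = lambda r: r**2 * (1 - r)
v = lambda r: r * (1 - r)

h = 1e-6
for r in (0.1, 0.5, 0.9):
    fd    = (F(u(r) + h * v(r), r) - F(u(r), r)) / h  # difference quotient
    exact = d1F(u(r), r) * v(r)                       # claimed Gateaux derivative
    assert abs(fd - exact) < 1e-5
```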
Finally, we show that the Gâteaux derivative is in fact a Fréchet derivative.
\[lem:NemFdiff\] Let $F$ satisfy the assumptions from Lemma \[lem:NemGdiff\]. Then $\hat{F}: X \to H_1$ is Gâteaux differentiable at every $u \in X$ and the Gâteaux derivative $D_G \hat{F}: X \to {\mathcal}{B}(X,H_1)$ is continuous and satisfies the estimate $$\|D_G \hat{F}(u)-D_G \hat{F}(\tilde{u})\|_{{\mathcal}{B}(X,H_1)}\leq \|u-\tilde{u}\|_X\; \gamma(\|u\|_X, \|\tilde{u}\|_X )$$ for all $u, \tilde{u} \in X$ where $\gamma: [0,\infty) \times [0,\infty) \to [0,\infty)$ is a suitable continuous function. As a consequence, $\hat{F}$ is continuously Fréchet differentiable and $D\hat{F}=D_G \hat{F}$.
Let $u,\tilde{u},v \in C^\infty_c(0,1]$. According to Lemma \[lem:NemGdiff\], $\hat{F}$ is Gâteaux differentiable at $u$ and $\tilde{u}$ and we set $$g_{u,\tilde{u},v}(\rho):=[D_G \hat{F}(u)v](\rho)-[D_G \hat{F}(\tilde{u})v](\rho)
=\left [\partial_1 F(u(\rho),\rho)-\partial_1 F(\tilde{u}(\rho), \rho) \right ]v(\rho)$$ where we have used Lemma \[lem:NemGdiff\] again. Differentiating with respect to $\rho$ we obtain $$\begin{aligned}
|g_{u,\tilde{u},v}'(\rho)|&\leq |v(\rho)| \left |\partial_{11}F(u(\rho),\rho)u'(\rho)
-\partial_{11}F(\tilde{u}(\rho),\rho)\tilde{u}'(\rho)\right |\\
&\quad +|v(\rho)| \left | \partial_{12}F(u(\rho),\rho)-\partial_{12}F(\tilde{u}(\rho),\rho) \right |\\
&\quad +|v'(\rho)| \left |\partial_1 F(u(\rho),\rho)-\partial_1 F(\tilde{u}(\rho),\rho) \right |.
\end{aligned}$$ By the assumptions on $F$ we have $$\begin{aligned}
|\partial_{j1}F(x,\rho)-\partial_{j1}F(\tilde{x},\rho)|\leq |x-\tilde{x}|\int_0^1 |\partial_{j11}F(\tilde{x}+t(x-\tilde{x}), \rho)|dt \lesssim |x-\tilde{x}|
\end{aligned}$$ for $j=1,2$ as well as $$|\partial_1 F(x,\rho)-\partial_1 F(\tilde{x},\rho)|\lesssim |x-\tilde{x}|\left (|x|+|\tilde{x}|
+\rho \right )$$ for all $x, \tilde{x} \in \mathbb{R}$ and $\rho \in [0,1]$ (cf. the proofs of Lemmas \[lem:Nemcont\] and \[lem:NemGdiff\]). This shows that $$\begin{aligned}
\left |\partial_{11}F(u(\rho),\rho)u'(\rho)-\partial_{11} F(\tilde{u}(\rho),\rho)\tilde{u}'(\rho) \right | &\leq
|\partial_{11}F(u(\rho),\rho)| |u'(\rho)-\tilde{u}'(\rho)| \\
&\quad + |\partial_{11}F(u(\rho),\rho)-\partial_{11}F(\tilde{u}(\rho),\rho)||\tilde{u}'(\rho)| \\
&\lesssim \left | |u(\rho)|+\rho \right | |u'(\rho)-\tilde{u}'(\rho)|+|u(\rho)-\tilde{u}(\rho)||\tilde{u}'(\rho)|\end{aligned}$$ and we obtain $$\begin{aligned}
\|g_{u,\tilde{u},v}\|_1^2&=\int_0^1 \frac{|g_{u,\tilde{u},v}'(\rho)|^2}{\rho^2}d\rho
\lesssim \sup_{\rho \in (0,1)}\left | \frac{|v(\rho)|(|u(\rho)|+\rho)}{\rho} \right |^2 \int_0^1 |u'(\rho)-\tilde{u}'(\rho)|^2 d\rho \\
&\quad + \sup_{\rho \in (0,1)} \left | \frac{|v(\rho)||u(\rho)-\tilde{u}(\rho)|}{\rho} \right |^2
\int_0^1 |\tilde{u}'(\rho)|^2 d\rho +\sup_{\rho \in (0,1)} \left |\frac{|v(\rho)||u(\rho)-\tilde{u}(\rho)|}{\rho} \right |^2 \\
&\quad + \sup_{\rho \in (0,1)}\left |\frac{|u(\rho)-\tilde{u}(\rho)|(|u(\rho)|+|\tilde{u}(\rho)|+\rho)}{\rho} \right |^2 \int_0^1 |v'(\rho)|^2 d\rho \\
&\lesssim \|u-\tilde{u}\|_X^2 \gamma^2 (\|u\|_X, \|\tilde{u}\|_X, \|v\|_X)\end{aligned}$$ for a continuous function $\gamma: [0,\infty)^3 \to [0,\infty)$. According to Lemma \[lem:NemGdiff\], the operator $D_G \hat{F}(u): X \to H_1$ is bounded and thus, the above estimate extends to all $v \in X$ by approximation. Consequently, we obtain $$\begin{aligned}
\label{eq:proofNemFdiff}
\|D_G \hat{F}(u)-D_G\hat{F}(\tilde{u})\|_{{\mathcal}{B}(X,H_1)}
&\lesssim \sup\left \{ \|u-\tilde{u}\|_X\:\gamma(\|u\|_X,\|\tilde{u}\|_X,\|v\|_X): v \in X, \|v\|_X=1 \right \} \\
&=\|u-\tilde{u}\|_X\; \gamma(\|u\|_X,\|\tilde{u}\|_X,1). \nonumber\end{aligned}$$ By Lemma \[lem:context\], the operator $D_G \hat{F}$ extends to a continuous map $D_G \hat{F}: X \to {\mathcal}{B}(X,H_1)$ and the estimate remains valid for all $u, \tilde{u} \in X$. Furthermore, Eq. shows that, for arbitrary $u \in X$, $D_G \hat{F}(u)$ is the Gâteaux derivative of $\hat{F}$ at $u$. Since $D_G \hat{F}$ is continuous, it follows from [@zeidler], p. 137, Proposition 4.8 that $\hat{F}$ is Fréchet differentiable at every $u \in X$ and $D_G \hat{F}=D\hat{F}$.
Now it is time to recall the definition of the nonlinearity which reads $$N_T(u)=\sin(2(\psi^T+u))-\sin(2\psi^T)-2\cos(2\psi^T)u.$$ However, due to our definition of the similarity coordinates we have $$\psi^T(t,r)=2 \arctan \left (\tfrac{r}{T-t} \right )=f_0(\rho)$$ and thus, the nonlinearity does in fact not depend on $T$. Consequently, we drop the subscript $T$ in the sequel and write $N$ instead of $N_T$.
\[lem:NFdiff\] The Nemitsky operator $\hat{N}$ associated to the nonlinearity $N$ extends to a continuous map $\hat{N}: X \to H_1$. Furthermore, $\hat{N}$ is continuously Fréchet differentiable at every $u \in X$ and the Fréchet derivative $D\hat{N}$ satisfies the estimate $$\|D\hat{N}(u)-D\hat{N}(\tilde{u})\|_{{\mathcal}{B}(X,H_1)} \leq \|u-\tilde{u}\|_X\; \gamma(\|u\|_X, \|\tilde{u}\|_X)$$ for all $u, \tilde{u} \in X$ and a suitable continuous function $\gamma: [0,\infty) \times [0,\infty) \to [0,\infty)$.
We have $$N(x,\rho)=\sin(2f_0(\rho)+2x)-\sin(2f_0(\rho))-2\cos(2f_0(\rho))x$$ and it suffices to verify the assumptions of Lemma \[lem:NemFdiff\] for $N$. Obviously, we have $N \in C^3(\mathbb{R}\times [0,1])$ and $$\partial_1 N(x,\rho)=2\cos (2f_0(\rho)+2x)-2\cos(2f_0(\rho))$$ shows that $\partial_1 N(0,\rho)=0$ for all $\rho \in [0,1]$. Furthermore, $$\begin{aligned}
\partial_{11}N(x,\rho)&=-4 \sin(2f_0(\rho)+2x) \\
\partial_{12}N(x,\rho)&=-4\sin(2f_0(\rho)+2x)f_0'(\rho)+4\sin(2f_0(\rho))f_0'(\rho)
\end{aligned}$$ and $$\begin{aligned}
\partial_{111}N(x,\rho)&=-8\cos(2f_0(\rho)+2x) \\
\partial_{121}N(x,\rho)&=-8\cos(2f_0(\rho)+2x)f_0'(\rho)
\end{aligned}$$ which shows that there exists a constant $C>0$ such that $$|\partial_{11j}N(x,\rho)|\leq C$$ for $j=1,2$ and all $(x,\rho) \in \mathbb{R} \times [0,1]$. Consequently, we obtain $$|\partial_{1j}N(x,\rho)|\leq |x|\int_0^1 |\partial_{11j}N(tx,\rho)|dt+|\partial_{1j}N(0,\rho)|
\lesssim |x|+\rho$$ for $j=1,2$ and all $(x,\rho) \in \mathbb{R} \times [0,1]$ and we conclude that $N$ indeed satisfies the assumptions of Lemma \[lem:NemFdiff\] which yields the claim.
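Both properties just verified for $N$ are easy to confirm numerically: the difference quotient of $x\mapsto N(x,\rho)$ vanishes at $x=0$, and Taylor's theorem for $\sin$ gives the explicit quadratic bound $|N(x,\rho)|\leq 2x^2$ (the constant $2$ is our own elementary estimate, from the second-order remainder with increment $2x$).

```python
import math

f0 = lambda r: 2 * math.atan(r)   # the blowup profile in similarity variables

def N(x, r):
    """The nonlinearity N(x, rho) from the text, with f0(rho) = 2 arctan(rho)."""
    a = 2 * f0(r)
    return math.sin(a + 2 * x) - math.sin(a) - 2 * math.cos(a) * x

h = 1e-6
for r in (0.1, 0.5, 1.0):
    # d1N(0, rho) = 0: the central difference quotient at x = 0 is negligible
    assert abs((N(h, r) - N(-h, r)) / (2 * h)) < 1e-8
    # quadratic smallness |N(x, rho)| <= 2 x^2 (second-order Taylor remainder)
    for x in (0.01, 0.1, 0.5):
        assert abs(N(x, r)) <= 2 * x * x
```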
After these preparations we are now ready to define the vector–valued nonlinearity ${\mathbf}{N}: {\mathcal}{H} \to {\mathcal}{H}$ by $${\mathbf}{N}({\mathbf}{u}):=\left ( \begin{array}{c} \hat{N}(A u_2) \\ 0 \end{array} \right ).$$ As a simple consequence of Lemma \[lem:NFdiff\], ${\mathbf}{N}$ is continuously Fréchet differentiable.
\[lem:N\] The nonlinearity ${\mathbf}{N}: {\mathcal}{H} \to {\mathcal}{H}$ is continuously Fréchet differentiable at any ${\mathbf}{u} \in {\mathcal}{H}$. Furthermore, there exist continuous functions $\gamma_1: [0,\infty) \to [0,\infty)$ and $\gamma_2: [0,\infty) \times [0,\infty) \to [0,\infty)$ such that $$\|{\mathbf}{N}({\mathbf}{u})\|\leq \|{\mathbf}{u}\|^2 \gamma_1(\|{\mathbf}{u}\|)$$ and $$\|D{\mathbf}{N}({\mathbf}{u}){\mathbf}{v}-D{\mathbf}{N}(\tilde{{\mathbf}{u}}){\mathbf}{v}\|
\leq \|{\mathbf}{u}-\tilde{{\mathbf}{u}}\|\|{\mathbf}{v}\|\gamma_2(\|{\mathbf}{u}\|, \|\tilde{{\mathbf}{u}}\|)$$ for all ${\mathbf}{u}, \tilde{{\mathbf}{u}}, {\mathbf}{v} \in {\mathcal}{H}$. Finally, $D{\mathbf}{N}({\mathbf}{0})={\mathbf}{0}$.
Define auxiliary operators $B: {\mathcal}{H} \to X$ and ${\mathbf}{T}: H_1 \to {\mathcal}{H}$ by $$B {\mathbf}{u}:=A u_2 \mbox{ and } {\mathbf}{T} u:=\left ( \begin{array}{c}u \\ 0 \end{array} \right ).$$ Obviously, ${\mathbf}{T}$ is linear and bounded. The same is true for $B$ since $$\|B {\mathbf}{u}\|_X=\|Au_2 \|_X \lesssim \|u_2\|_2 \lesssim \|{\mathbf}{u}\|$$ by Lemmas \[lem:estA1\] and \[lem:estA1a\]. Therefore, ${\mathbf}{N}$ can be written as ${\mathbf}{N}={\mathbf}{T} \circ \hat{N} \circ B$. Consequently, Lemma \[lem:NFdiff\] and the chain rule show that ${\mathbf}{N}$ is Fréchet differentiable at every ${\mathbf}{u} \in {\mathcal}{H}$ and we obtain $$D{\mathbf}{N}({\mathbf}{u})=D{\mathbf}{T}(\hat{N}(B{\mathbf}{u}))D\hat{N}(B{\mathbf}{u})DB({\mathbf}{u})={\mathbf}{T}D\hat{N}(B{\mathbf}{u})B.$$ By Lemma \[lem:NemGdiff\] we have $$[D\hat{N}(0)v](\rho)=\partial_1 N(0,\rho)v(\rho)=0$$ which shows $D{\mathbf}{N}({\mathbf}{0})={\mathbf}{0}$. Furthermore, the estimate from Lemma \[lem:NFdiff\] yields $$\begin{aligned}
\|D{\mathbf}{N}({\mathbf}{u}){\mathbf}{v}-D{\mathbf}{N}(\tilde{{\mathbf}{u}}){\mathbf}{v}\|& \lesssim
\|D\hat{N}(B{\mathbf}{u})B{\mathbf}{v}-D\hat{N}(B\tilde{{\mathbf}{u}})B{\mathbf}{v}\| \\
&\lesssim \|B{\mathbf}{u}-B\tilde{{\mathbf}{u}}\|_X \|B{\mathbf}{v}\|_X \; \gamma(\|B{\mathbf}{u}\|_X, \|B\tilde{{\mathbf}{u}}\|_X) \\
&\lesssim \|{\mathbf}{u}-\tilde{{\mathbf}{u}}\| \|{\mathbf}{v}\| \; \gamma(\|B{\mathbf}{u}\|_X, \|B\tilde{{\mathbf}{u}}\|_X)
\end{aligned}$$ and, with $D{\mathbf}{N}({\mathbf}{0})={\mathbf}{0}$, this shows in particular that $$\|D{\mathbf}{N}({\mathbf}{u}){\mathbf}{v}\|\lesssim \|{\mathbf}{u}\|\|{\mathbf}{v}\|\; \gamma(\|B{\mathbf}{u}\|_X, 0)$$ for all ${\mathbf}{u}, {\mathbf}{v} \in {\mathcal}{H}$. Consequently, by using the fundamental theorem of calculus, we obtain $$\begin{aligned}
\|{\mathbf}{N}({\mathbf}{u})\|\leq \int_0^1 \|D{\mathbf}{N}(t{\mathbf}{u}){\mathbf}{u}\|dt
\leq \|{\mathbf}{u}\|^2 \int_0^1 \gamma(t\|B{\mathbf}{u}\|_X, 0)dt\end{aligned}$$ since ${\mathbf}{N}({\mathbf}{0})={\mathbf}{0}$.
Global existence for the nonlinear problem
------------------------------------------
We establish global existence for the wave maps equation . However, at the moment we do not allow for arbitrary initial data but modify the given data along the one–dimensional subspace spanned by the gauge mode. Thereby, we suppress the gauge instability. The main result of this section can be viewed as a nonlinear version of Theorem \[thm:linear\].
As a first step we reformulate Eq. as a nonlinear ODE on the Hilbert space ${\mathcal}{H}$. With the nonlinear mapping ${\mathbf}{N}: {\mathcal}{H} \to {\mathcal}{H}$ from above we can simply write $$\label{eq:maincssop}
\left \{ \begin{array}{l}
\frac{d}{d \tau}\Phi(\tau)=L\Phi(\tau)+{\mathbf}{N}(\Phi(\tau)) \mbox{ for }\tau>-\log T \\
\Phi(-\log T)={\mathbf}{u}
\end{array} \right .$$ for a function $\Phi: [-\log T,\infty) \to {\mathcal}{H}$ with initial data ${\mathbf}{u}$. We do not specify the initial data explicitly but keep them general in this section. Our aim is to construct a mild solution of this equation. By a mild solution we mean a solution of the associated integral equation $$\label{eq:maincssopmild}
\Phi(\tau)=S(\tau+\log T){\mathbf}{u}+\int_{-\log T}^\tau S(\tau-\tau'){\mathbf}{N}(\Phi(\tau'))d\tau', \quad \tau \geq -\log T$$ where $S$ is the semigroup from Theorem \[thm:linear\]. Here, the integral is well–defined as a Riemann integral over a continuous function with values in a Banach space. Consequently, we restrict ourselves to continuous solutions $\Phi: [-\log T,\infty) \to {\mathcal}{H}$. Any solution of Eq. is also a solution of Eq. . The converse is not true since Eq. requires more regularity than Eq. . However, for sufficiently regular functions $\Phi$, both problems Eq. and Eq. are equivalent. As a consequence, the concept of a mild solution is more general.
In fact, though, we want to consider solutions of Eq. with different $T>0$ *simultaneously* so we should write $\Phi^T$ instead of $\Phi$ in order to indicate the dependence on $T$. Here, we encounter the technical problem that the domain of definition varies with $T$. In order to circumvent this difficulty, we construct a “universal” solution $\Psi$ for all $T>0$ by translation, i.e., we set $$\Psi(\tau):=\Phi^T(\tau-\log T)$$ for $\tau \geq 0$. The function $\Phi^T$ satisfies Eq. if and only if $\Psi$ satisfies the translated equation $$\label{eq:maincssopuni}
\Psi(\tau)=S(\tau){\mathbf}{u}+\int_0^\tau S(\tau-\tau'){\mathbf}{N}(\Psi(\tau'))d\tau', \quad \tau \geq 0$$ as follows by a straightforward change of variables. Obviously, $\Psi$ is independent of $T$ which justifies the notation. What makes this possible is nothing but the time translation invariance of the wave maps equation. Having constructed the solution $\Psi$, the function $\Phi^T$, for different values of $T>0$, can be obtained by simply noting that $$\Phi^T(\tau)=\Psi(\tau+\log T)$$ for $\tau \geq -\log T$. Consequently, it suffices to consider the solution $\Psi$.
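The role of the Duhamel formula behind the notion of a mild solution can be illustrated by a scalar toy model (our own choice, unrelated to the wave maps system): a classical solution of $x'=\lambda x+n(x)$ satisfies $x(t)=e^{\lambda t}x_0+\int_0^t e^{\lambda(t-s)}n(x(s))\,ds$, the scalar analogue of the integral equation above; translating time merely shifts the integration interval, which is the content of the change of variables relating $\Phi^T$ and $\Psi$.

```python
import math

# Scalar toy model (our choice): x' = lam*x + n(x) with n(x) = x^2.
lam, x0, T = -1.0, 0.3, 2.0
n = lambda x: x * x

# Integrate the ODE with classical RK4 on a fine grid.
steps = 4000
h = T / steps
f = lambda y: lam * y + n(y)
xs = [x0]
for _ in range(steps):
    x = xs[-1]
    k1 = f(x); k2 = f(x + h/2 * k1); k3 = f(x + h/2 * k2); k4 = f(x + h * k3)
    xs.append(x + h/6 * (k1 + 2*k2 + 2*k3 + k4))

# Evaluate the Duhamel integral on the same grid (trapezoidal rule).
g = [math.exp(lam * (T - k * h)) * n(xs[k]) for k in range(steps + 1)]
duhamel = math.exp(lam * T) * x0 + h * (sum(g) - 0.5 * (g[0] + g[-1]))

# The classical solution satisfies the mild (integral) formulation.
assert abs(xs[-1] - duhamel) < 1e-5
```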
For a small $\varepsilon>0$, we set $$\omega:=\max\left \{-\tfrac{1}{2},s_0 \right \}+\varepsilon$$ where $s_0$ is the spectral bound (see Definition \[def:sb\]). In [@DSA] it has been proved that $s_0<\frac{1}{2}$, however, as already mentioned in the beginning, there is no reasonable doubt that in fact $s_0<0$, i.e., that $\psi^T$ is mode stable.
Note that the estimate for the linear time evolution from Theorem \[thm:linear\] now reads $$\|S(\tau)(1-P){\mathbf}{f}\|\lesssim e^{-|\omega|\tau}\|(1-P){\mathbf}{f}\|$$ for all $\tau \geq 0$ and all ${\mathbf}{f} \in {\mathcal}{H}$. We define a spacetime Banach space ${\mathcal}{X}$ by setting $${\mathcal}{X}:=\left \{\Phi \in C([0,\infty),{\mathcal}{H}): \sup_{\tau > 0}e^{|\omega|\tau}\|\Phi(\tau)\|<\infty \right \}$$ and $$\|\Phi\|_{\mathcal}{X}:=\sup_{\tau > 0}e^{|\omega|\tau}\|\Phi(\tau)\|.$$ The vector space ${\mathcal}{X}$ equipped with $\|\cdot\|_{\mathcal}{X}$ is a Banach space and we have encoded the decay property of the linear evolution in the definition of ${\mathcal}{X}$. Thus, our hope is to retain the decay of the linear evolution on the nonlinear level.
Now we (formally) define the mapping $$\begin{aligned}
\label{eq:defK}
{\mathbf}{K}(\Psi, {\mathbf}{u})(\tau)&:=S(\tau)(1-P){\mathbf}{u}-\int_{0}^\infty e^{\tau-\tau'} P{\mathbf}{N}(\Psi(\tau')) d\tau' \\
& \quad +\int_0^\tau S(\tau-\tau'){\mathbf}{N}(\Psi(\tau'))d\tau', \quad \tau \geq 0 \nonumber\end{aligned}$$ where $P$ is the spectral projection from Theorem \[thm:linear\]. We will show in a moment (Lemma \[lem:KtoX\] below) that ${\mathbf}{K}$ is well defined as an operator from ${\mathcal}{X} \times {\mathcal}{H}$ to ${\mathcal}{X}$. The relevance of ${\mathbf}{K}$ for the Cauchy problem is based on the following observation. Suppose there exists a function $\Psi \in {\mathcal}{X}$ that satisfies $\Psi={\mathbf}{K}(\Psi,{\mathbf}{u})$. By comparison with Eq. , we see that $\Psi$ is a solution of Eq. with the rather strange–looking initial data $$\Psi(0)=(1-P){\mathbf}{u}-
\int_{0}^\infty e^{-\tau'} P{\mathbf}{N}(\Psi(\tau')) d\tau'.$$ However, this can also be written as $$\Psi(0)={\mathbf}{u}-P \left [{\mathbf}{u}+\int_{0}^\infty e^{-\tau'} {\mathbf}{N}(\Psi(\tau')) d\tau' \right ]$$ and therefore, the original initial data ${\mathbf}{u}$ have been modified by adding an element of the unstable subspace $\langle {\mathbf}{g} \rangle$ (recall that $P{\mathcal}{H}=\langle {\mathbf}{g} \rangle$, see Theorem \[thm:linear\]). It is important to note, however, that this correction depends on the solution $\Psi$ itself. We will discuss later how to deal with this inconvenient fact, see Sec. \[sec:globalarbitrary\] below. For the moment we record the following: if we can show that there exists a function $\Psi$ with $\Psi={\mathbf}{K}(\Psi,{\mathbf}{u})$, we obtain a global mild solution of Eq. for initial data that satisfy a “co–dimension one condition”. The reader may have noticed that there are many similarities with center–stable manifolds in the context of Hamiltonian evolution equations. In fact, the construction of the solution by adding this type of modification is *formally* exactly the same as e.g., in [@schlag]. Consequently, it is very likely that there exists a center–stable manifold approach to the problem at hand but we do not pursue this issue here any further.
Our goal is to apply the implicit function theorem on Banach spaces. In order to do so, we have to show that ${\mathbf}{K}$ is continuously Fréchet differentiable.
\[lem:KtoX\] Let $(\Psi, {\mathbf}{u}) \in {\mathcal}{X} \times {\mathcal}{H}$. Then we have $ {\mathbf}{K}(\Psi, {\mathbf}{u}) \in {\mathcal}{X}$.
Fix $(\Psi,{\mathbf}{u}) \in {\mathcal}{X} \times {\mathcal}{H}$. In the following, the implicit constants of the $\lesssim$ notation may depend on $\Psi$ and ${\mathbf}{u}$. Note first that the integrals in are well–defined as Riemann integrals over a continuous function (with values in a Banach space). Furthermore, according to Lemma \[lem:N\], we have $$\label{eq:proofKtoX}
\|P{\mathbf}{N}(\Psi(\tau'))\|\lesssim \|\Psi(\tau')\|^2 \gamma(\|\Psi(\tau')\|) \lesssim
e^{-2|\omega|\tau'}$$ for all $\tau' \geq 0$ by the continuity of $\gamma$ and this shows that the first integral in the definition of ${\mathbf}{K}$, Eq. , exists. We claim that ${\mathbf}{K}(\Psi, {\mathbf}{u})\in C([0,\infty),{\mathcal}{H})$. Obviously, by the strong continuity of the semigroup, the map $\tau \mapsto S(\tau){\mathbf}{u}: [0,\infty) \to {\mathcal}{H}$ is continuous. Thus, it remains to show continuity of the two integral terms. To this end choose an arbitrary $\tau_0 \geq 0$. Then we have $$\begin{aligned}
&\left \|\int_0^\infty e^{\tau_0-\tau'}P{\mathbf}{N}(\Psi(\tau'))d\tau'-
\int_0^\infty e^{\tau-\tau'}P{\mathbf}{N}(\Psi(\tau'))d\tau' \right \|\\
&\leq \left |e^{\tau_0}-e^\tau \right |\int_{0}^\infty e^{-\tau'}\|P{\mathbf}{N}(\Psi(\tau'))\|d\tau' \to 0
\end{aligned}$$ as $\tau \to \tau_0$. Furthermore, $$\begin{aligned}
&\left \|\int_{0}^{\tau_0} S(\tau_0-\tau'){\mathbf}{N}(\Psi(\tau'))d\tau'
-\int_{0}^\tau S(\tau-\tau'){\mathbf}{N}(\Psi(\tau'))d\tau' \right \| \\
&\leq \int_{0}^{\tau_0} \left \|S(\tau_0-\tau'){\mathbf}{N}(\Psi(\tau'))-S(\tau-\tau'){\mathbf}{N}(\Psi(\tau'))
\right \|d\tau' \\
&\quad +\left |\int_{\tau_0}^\tau \|S(\tau-\tau'){\mathbf}{N}(\Psi(\tau'))\|d\tau' \right | \to 0\end{aligned}$$ as $\tau \to \tau_0$ by dominated convergence since $$\|S(\tau_0-\tau'){\mathbf}{N}(\Psi(\tau'))-S(\tau-\tau'){\mathbf}{N}(\Psi(\tau'))\| \to 0$$ as $\tau \to \tau_0$ by the strong continuity of $S$. This proves that ${\mathbf}{K}(\Psi, {\mathbf}{u}) \in C([0,\infty),{\mathcal}{H})$.
It remains to show that $\|{\mathbf}{K}(\Psi,{\mathbf}{u})\|_{\mathcal}{X}<\infty$. To this end note first that, for any ${\mathbf}{f}$, we have $S(\tau)P{\mathbf}{f}=e^{\tau}P{\mathbf}{f}$. To see this, recall that $P{\mathcal}{H}=\langle {\mathbf}{g} \rangle$ (Theorem \[thm:linear\]) and therefore, for any ${\mathbf}{f} \in {\mathcal}{H}$, there exists a $c({\mathbf}{f})\in \mathbb{C}$ such that $P{\mathbf}{f}=c({\mathbf}{f}){\mathbf}{g}$. Since ${\mathbf}{g}$ is an eigenfunction of $L$ with eigenvalue $1$, we obtain $$S(\tau)P{\mathbf}{f}=c({\mathbf}{f})S(\tau){\mathbf}{g}=e^\tau c({\mathbf}{f}){\mathbf}{g}=e^\tau P{\mathbf}{f}$$ as claimed. Consequently, since $P$ commutes with $S$ (and also with the integrals since $P$ is bounded), we obtain $$\begin{aligned}
\|P{\mathbf}{K}(\Psi,{\mathbf}{u})(\tau)\| &= \left \|
-\int_{0}^\infty e^{\tau-\tau'}P{\mathbf}{N}(\Psi(\tau'))d\tau'+\int_{0}^\tau S(\tau-\tau')P{\mathbf}{N}(\Psi(\tau'))d\tau' \right \| \\
&\leq \int_\tau^\infty e^{\tau-\tau'}\|P{\mathbf}{N}(\Psi(\tau'))\|d\tau'
\lesssim \int_\tau^\infty e^{\tau-(1+2|\omega|)\tau'}d\tau' \\
&\lesssim e^{-2|\omega|\tau} \end{aligned}$$ for all $\tau \geq 0$ by Eq. . Furthermore, we have $$\begin{aligned}
\|(1-P){\mathbf}{K}(\Psi, {\mathbf}{u})(\tau)\|&\leq \|S(\tau)(1-P){\mathbf}{u}\|+\int_{0}^\tau \|S(\tau-\tau')(1-P){\mathbf}{N}(\Psi(\tau'))\|d\tau' \\
&\lesssim e^{-|\omega|\tau}\|{\mathbf}{u}\|+\int_{0}^\tau e^{-|\omega|(\tau-\tau')}\|{\mathbf}{N}(\Psi(\tau'))\|d\tau' \\
&\lesssim e^{-|\omega|\tau}+\int_{0}^\tau e^{-|\omega|(\tau+\tau')}d\tau' \\
&\lesssim e^{-|\omega|\tau}\end{aligned}$$ for all $\tau \geq 0$ by Theorem \[thm:linear\]. Adding up the two contributions we obtain $$\|{\mathbf}{K}(\Psi,{\mathbf}{u})(\tau)\|\leq \|P{\mathbf}{K}(\Psi,{\mathbf}{u})(\tau)\|+\|(1-P){\mathbf}{K}(\Psi,{\mathbf}{u})(\tau)\|\lesssim e^{-|\omega| \tau}$$ and this yields $$\|{\mathbf}{K}(\Psi,{\mathbf}{u})\|_{\mathcal}{X}=\sup_{\tau > 0}e^{|\omega| \tau}\|{\mathbf}{K}(\Psi,{\mathbf}{u})(\tau)\|\lesssim 1$$ which finishes the proof.
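The identity $S(\tau)P{\mathbf}{f}=e^{\tau}P{\mathbf}{f}$ used in this proof relies only on $P$ projecting onto an eigenspace of the generator. A finite-dimensional analogue (a $2\times 2$ toy generator of our own choosing, with eigenvalues $1$ and $-2$) makes this transparent.

```python
import math

# 2x2 toy generator (our choice): eigenvalue 1 with eigenvector g = (1, 0),
# eigenvalue -2 with eigenvector (1, -3); P projects onto span{g} along (1, -3).
L = [[1.0, 1.0], [0.0, -2.0]]

def mat_vec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

def expm_vec(A, v, tau, terms=60):
    """e^{tau A} v via the Taylor series of the matrix exponential."""
    out, term = v[:], v[:]
    for k in range(1, terms):
        term = [tau / k * c for c in mat_vec(A, term)]
        out = [o + t for o, t in zip(out, term)]
    return out

def P(f):
    """Spectral projection onto <g>: f = a*(1,0) + b*(1,-3) has a = f1 + f2/3."""
    return [f[0] + f[1] / 3.0, 0.0]

f, tau = [0.7, -1.2], 1.5
lhs = expm_vec(L, P(f), tau)              # S(tau) P f
rhs = [math.exp(tau) * c for c in P(f)]   # e^{tau} P f
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```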
We define ${\mathbf}{K}_j: {\mathcal}{X} \times {\mathcal}{H} \to {\mathcal}{X}$, $j=1,2$, by $$\begin{aligned}
{\mathbf}{K}_1(\Psi, {\mathbf}{u})(\tau)&:=S(\tau)(1-P){\mathbf}{u} \\
{\mathbf}{K}_2(\Psi,{\mathbf}{u})(\tau)&:=-\int_{0}^\infty e^{\tau-\tau'}P{\mathbf}{N}(\Psi(\tau'))d\tau'
+\int_{0}^\tau S(\tau-\tau'){\mathbf}{N}(\Psi(\tau'))d\tau'\end{aligned}$$ for $\tau \geq 0$. By definition we have ${\mathbf}{K}={\mathbf}{K}_1+{\mathbf}{K}_2$.
\[lem:K1\] The mapping ${\mathbf}{K}_1: {\mathcal}{X} \times {\mathcal}{H} \to {\mathcal}{X}$ is continuously Fréchet differentiable and its Fréchet derivative is given by $$[D {\mathbf}{K}_1(\Psi,{\mathbf}{u})(\Phi, {\mathbf}{v})](\tau)=S(\tau)(1-P){\mathbf}{v}$$ for all $\tau \geq 0$.
We consider the partial Fréchet derivatives. ${\mathbf}{K}_1$ does not depend on $\Psi$ so, trivially, we have $D_1 {\mathbf}{K}_1 (\Psi,{\mathbf}{u})={\mathbf}{0}$. ${\mathbf}{K}_1$ is linear with respect to the second variable which shows that $$[D_2 {\mathbf}{K}_1(\Psi, {\mathbf}{u}){\mathbf}{v}](\tau)=S(\tau)(1-P){\mathbf}{v}$$ for all ${\mathbf}{v} \in {\mathcal}{H}$. Since $\|S(\tau)(1-P){\mathbf}{v}\|\lesssim e^{-|\omega|\tau}\|(1-P){\mathbf}{v}\|$ by Theorem \[thm:linear\] and $D_2 {\mathbf}{K}_1(\Psi,{\mathbf}{u}){\mathbf}{v}$ is independent of $(\Psi,{\mathbf}{u})$, we trivially see that $D_2 {\mathbf}{K}_1: {\mathcal}{X} \times {\mathcal}{H} \to {\mathcal}{B}({\mathcal}{H}, {\mathcal}{X})$ is continuous. Consequently, all partial Fréchet derivatives of ${\mathbf}{K}_1$ exist and they are continuous. Thus, a standard result, see e.g., [@zeidler], p. 140, Proposition 4.14, implies that $D{\mathbf}{K}_1$ exists, is continuous and given by $$D{\mathbf}{K}_1(\Psi,{\mathbf}{u})(\Phi,{\mathbf}{v})=D_1 {\mathbf}{K}_1(\Psi,{\mathbf}{u})\Phi+D_2 {\mathbf}{K}_1(\Psi, {\mathbf}{u}){\mathbf}{v}.$$
\[lem:K2\] The mapping ${\mathbf}{K}_2: {\mathcal}{X} \times {\mathcal}{H} \to {\mathcal}{X}$ is continuously Fréchet differentiable and its partial Fréchet derivative with respect to the first variable is given by $$\begin{aligned}
[D_1{\mathbf}{K}_2(\Psi, {\mathbf}{u})\Phi](\tau)=&-\int_0^\infty e^{\tau-\tau'}PD{\mathbf}{N}(\Psi(\tau'))\Phi(\tau')d\tau' \\
&+\int_0^\tau S(\tau-\tau')D{\mathbf}{N}(\Psi(\tau'))\Phi(\tau')d\tau'
\end{aligned}$$ for all $\tau \geq 0$.
${\mathbf}{K}_2(\Psi,{\mathbf}{u})$ does not depend on ${\mathbf}{u}$ so we have $D_2 {\mathbf}{K}_2(\Psi,{\mathbf}{u})={\mathbf}{0}$. We set $$\begin{aligned}
[\tilde{D}_1{\mathbf}{K}_2(\Psi, {\mathbf}{u})\Phi](\tau):=&-\int_0^\infty e^{\tau-\tau'}PD{\mathbf}{N}(\Psi(\tau'))\Phi(\tau')d\tau' \\
&+\int_0^\tau S(\tau-\tau')D{\mathbf}{N}(\Psi(\tau'))\Phi(\tau')d\tau' \end{aligned}$$ and claim that $\tilde{D}_1{\mathbf}{K}_2=D_1 {\mathbf}{K}_2$. In order to prove this, we have to show that $$\frac{\left \|{\mathbf}{K}_2(\Psi+\Phi,{\mathbf}{u})-{\mathbf}{K}_2(\Psi, {\mathbf}{u})-\tilde{D}_1 {\mathbf}{K}_2(\Psi, {\mathbf}{u})\Phi \right \|_{{\mathcal}{X}}}{\|\Phi\|_{{\mathcal}{X}}} \to 0$$ as $\|\Phi\|_{\mathcal}{X}\to 0$. We have $$P{\mathbf}{K}_2(\Psi,{\mathbf}{u})(\tau)=-\int_\tau^\infty e^{\tau-\tau'}P{\mathbf}{N}(\Psi(\tau'))d\tau'$$ and, analogously, $$P[\tilde{D}_1 {\mathbf}{K}_2(\Psi,{\mathbf}{u})\Phi](\tau)=-\int_\tau^\infty e^{\tau-\tau'}PD{\mathbf}{N}(\Psi(\tau'))\Phi(\tau')d\tau'$$ for all $\tau \geq 0$. Consequently, we obtain $$\begin{aligned}
\label{eq:proofK2}
&\frac{1}{\|\Phi\|_{\mathcal}{X}}\|P{\mathbf}{K}_2(\Psi+\Phi, {\mathbf}{u})(\tau)-P{\mathbf}{K}_2(\Psi, {\mathbf}{u})(\tau)-P[\tilde{D}_1{\mathbf}{K}_2(\Psi,{\mathbf}{u})\Phi](\tau)\| \\
&\leq \frac{e^\tau}{\|\Phi\|_{\mathcal}{X}} \int_\tau^\infty e^{-\tau'}
\|{\mathbf}{N}(\Psi(\tau')+\Phi(\tau'))-{\mathbf}{N}(\Psi(\tau'))-D{\mathbf}{N}(\Psi(\tau'))\Phi(\tau')\|d\tau' \nonumber\end{aligned}$$ for all $\tau \geq 0$. An application of the fundamental theorem of calculus yields $$\begin{aligned}
\|{\mathbf}{N}({\mathbf}{u}+{\mathbf}{v})-{\mathbf}{N}({\mathbf}{u})-D{\mathbf}{N}({\mathbf}{u}){\mathbf}{v}\|&=
\left \|\int_0^1 \left [D{\mathbf}{N}({\mathbf}{u}+t{\mathbf}{v}){\mathbf}{v}-D{\mathbf}{N}({\mathbf}{u}){\mathbf}{v} \right ] dt \right \|\\
&\leq \|{\mathbf}{v}\|^2\int_0^1 t \gamma_2(\|{\mathbf}{u}+t{\mathbf}{v}\|,\|{\mathbf}{u}\|)dt\end{aligned}$$ where we have used Lemma \[lem:N\]. Thus, we obtain $$\begin{aligned}
\label{eq:proofK22}
&\|{\mathbf}{N}(\Psi(\tau')+\Phi(\tau'))-{\mathbf}{N}(\Psi(\tau'))-D{\mathbf}{N}(\Psi(\tau'))\Phi(\tau')\| \\
&\leq
\|\Phi(\tau')\|^2 \int_0^1 t \gamma_2(\|\Psi(\tau')+t\Phi(\tau')\|,\|\Psi(\tau')\|)dt \nonumber \\
&\leq \|\Phi(\tau')\|^2 \sup_{t \in [0,1]}\sup_{\tau'>0} \gamma_2(\|\Psi(\tau')+t\Phi(\tau')\|,\|\Psi(\tau')\|) \nonumber \\
&\leq C_\Psi\;\|\Phi(\tau')\|^2 \nonumber \end{aligned}$$ for all $\tau' \geq 0$ where $C_\Psi>0$ is a constant that depends on $\Psi$. Since we are interested in the limit $\|\Phi\|_{\mathcal}{X} \to 0$, we can assume that $\|\Phi\|_{\mathcal}{X} \leq 1$. Consequently, we have $$\|\Psi(\tau')+t\Phi(\tau')\|\leq \|\Psi\|_{\mathcal}{X}+1$$ for all $t \in [0,1]$, $\tau' \geq 0$ and thus, we may choose $$C_\Psi:=\sup_{0 \leq x,y \leq \|\Psi\|_{\mathcal}{X}+1}\gamma_2(x,y)$$ which is a finite number (depending on $\Psi$) since $\gamma_2$ is continuous. As a consequence, from Eq. above, we obtain $$\begin{aligned}
\label{eq:proofK2p1}
&\frac{1}{\|\Phi\|_{\mathcal}{X}}\|P{\mathbf}{K}_2(\Psi+\Phi, {\mathbf}{u})(\tau)-P{\mathbf}{K}_2(\Psi, {\mathbf}{u})(\tau)-P[\tilde{D}_1{\mathbf}{K}_2(\Psi,{\mathbf}{u})\Phi](\tau)\| \\
&\leq C_\Psi \;e^\tau\; \frac{\sup_{\tau'>\tau}e^{|\omega|\tau'} \|\Phi(\tau')\|}{\|\Phi\|_{\mathcal}{X}} \int_\tau^\infty e^{-\tau'-|\omega|\tau'}e^{|\omega| \tau'}\|\Phi(\tau')\|d\tau' \nonumber \\
&\lesssim C_\Psi\; e^{-|\omega|\tau}\|\Phi\|_{\mathcal}{X} \nonumber
\end{aligned}$$ for all $\tau \geq 0$. Furthermore, we have $$(1-P){\mathbf}{K}_2(\Psi,{\mathbf}{u})(\tau)=\int_0^\tau S(\tau-\tau')(1-P){\mathbf}{N}(\Psi(\tau'))d\tau'$$ and thus, $$\begin{aligned}
&\frac{1}{\|\Phi\|_{\mathcal}{X}}\left \|(1-P)\left [{\mathbf}{K}_2(\Psi+\Phi, {\mathbf}{u})(\tau)-{\mathbf}{K}_2(\Psi, {\mathbf}{u})(\tau)-[\tilde{D}_1{\mathbf}{K}_2(\Psi,{\mathbf}{u})\Phi](\tau) \right ] \right \| \\
&\leq \frac{1}{\|\Phi\|_{\mathcal}{X}}\int_0^\tau \left \|S(\tau-\tau')(1-P) \left [
{\mathbf}{N}(\Psi(\tau')+\Phi(\tau'))-{\mathbf}{N}(\Psi(\tau'))-D{\mathbf}{N}(\Psi(\tau'))\Phi(\tau') \right ] \right \| d\tau' \\
&\lesssim \frac{1}{\|\Phi\|_{\mathcal}{X}}\int_0^\tau e^{-|\omega|(\tau-\tau')}\left \|
{\mathbf}{N}(\Psi(\tau')+\Phi(\tau'))-{\mathbf}{N}(\Psi(\tau'))-D{\mathbf}{N}(\Psi(\tau'))\Phi(\tau') \right \|d\tau'\end{aligned}$$ by Theorem \[thm:linear\] and, via , this implies $$\begin{aligned}
\label{eq:proofK2p2}
&\frac{1}{\|\Phi\|_{\mathcal}{X}}\left \|(1-P)\left [{\mathbf}{K}_2(\Psi+\Phi, {\mathbf}{u})(\tau)-{\mathbf}{K}_2(\Psi, {\mathbf}{u})(\tau)-[\tilde{D}_1{\mathbf}{K}_2(\Psi,{\mathbf}{u})\Phi](\tau) \right ] \right \| \\
&\lesssim C_\Psi \frac{e^{-|\omega| \tau}}{\|\Phi\|_{\mathcal}{X}}\int_0^\tau e^{|\omega| \tau'}\|\Phi(\tau')\|^2 d\tau'
\lesssim C_\Psi \;e^{-|\omega| \tau}\;\|\Phi\|_{\mathcal}{X}
\int_0^\tau e^{-|\omega|\tau'}d\tau' \nonumber \\
&\lesssim C_\Psi\;e^{-|\omega| \tau}\;\|\Phi\|_{\mathcal}{X} \nonumber\end{aligned}$$ for all $\tau \geq 0$. Putting together the two pieces and , we infer the claim $\tilde{D}_1 {\mathbf}{K}_2=D_1 {\mathbf}{K}_2$.
It remains to show that $D_1 {\mathbf}{K}_2: {\mathcal}{X} \times {\mathcal}{H} \to {\mathcal}{B}({\mathcal}{X})$ is continuous. We have $$\begin{aligned}
&\left \| P[D_1 {\mathbf}{K}_2(\Psi, {\mathbf}{u})\Phi](\tau)-P[D_1 {\mathbf}{K}_2(\tilde{\Psi},\tilde{{\mathbf}{u}})\Phi](\tau) \right \| \\
&\lesssim \int_\tau^\infty e^{\tau-\tau'}\left \|D{\mathbf}{N}(\Psi(\tau'))\Phi(\tau')-D{\mathbf}{N}(\tilde{\Psi}(\tau'))\Phi(\tau') \right \|d\tau' \\
&\lesssim \int_\tau^\infty e^{\tau-\tau'}\|\Psi(\tau')-\tilde{\Psi}(\tau') \| \|\Phi(\tau')\| \gamma_2(\|\Psi(\tau')\|, \|\tilde{\Psi}(\tau')\|) d\tau' \\
&\leq C_\Psi \|\Psi-\tilde{\Psi}\|_{\mathcal}{X}\|\Phi\|_{\mathcal}{X}\int_\tau^\infty e^{\tau-\tau'-2|\omega|\tau'}d\tau' \\
&\lesssim C_\Psi \;e^{-2|\omega|\tau}\|\Psi-\tilde{\Psi}\|_{\mathcal}{X}\|\Phi\|_{\mathcal}{X}\end{aligned}$$ for all $\tau \geq 0$ by Lemma \[lem:N\] with a $\Psi$–dependent constant $C_\Psi$. Since we are interested in the limit $\tilde{\Psi} \to \Psi$, we may assume that $\|\Psi-\tilde{\Psi}\|_{\mathcal}{X}\leq 1$ which implies $$\|\tilde{\Psi}\|_{\mathcal}{X} \leq \|\tilde{\Psi}-\Psi\|_{\mathcal}{X}+\|\Psi\|_{\mathcal}{X}\leq \|\Psi\|_{\mathcal}{X}+1$$ and therefore, we may choose $$C_\Psi:=\sup_{0 \leq x,y \leq \|\Psi\|_{\mathcal}{X}+1}\gamma_2(x,y)< \infty$$ as before. Similarly, $$\begin{aligned}
&\left \|(1-P)\left [[D_1 {\mathbf}{K}_2(\Psi, {\mathbf}{u})\Phi](\tau)-[D_1 {\mathbf}{K}_2(\tilde{\Psi},\tilde{{\mathbf}{u}})\Phi](\tau) \right ] \right \| \\
&\lesssim \int_0^\tau \left \|S(\tau-\tau')(1-P)\left [D{\mathbf}{N}(\Psi(\tau'))\Phi(\tau')-D{\mathbf}{N}(\tilde{\Psi}(\tau'))\Phi(\tau')\right ] \right \|d\tau' \\
&\lesssim \int_0^\tau e^{-|\omega| (\tau-\tau')}\|\Psi(\tau')-\tilde{\Psi}(\tau')\|\|\Phi(\tau')\|
\gamma_2(\|\Psi(\tau')\|, \|\tilde{\Psi}(\tau')\|)d\tau' \\
&\leq C_\Psi \|\Psi-\tilde{\Psi}\|_{\mathcal}{X}\|\Phi\|_{\mathcal}{X}
\int_0^\tau e^{-|\omega|\tau-|\omega| \tau'}d\tau' \\
&\lesssim C_\Psi\; e^{-|\omega|\tau} \|\Psi-\tilde{\Psi}\|_{\mathcal}{X}\|\Phi\|_{\mathcal}{X}\end{aligned}$$ for all $\tau \geq 0$ by Lemma \[lem:N\] and Theorem \[thm:linear\]. Putting those two estimates together we obtain $$\left \|D_1 {\mathbf}{K}_2(\Psi, {\mathbf}{u})\Phi-D_1 {\mathbf}{K}_2(\tilde{\Psi},\tilde{{\mathbf}{u}})\Phi \right \|_{\mathcal}{X}
\lesssim C_\Psi\|\Psi-\tilde{\Psi}\|_{\mathcal}{X}\|\Phi\|_{\mathcal}{X}$$ for all $\Phi \in {\mathcal}{X}$ and thus, $$\left \|D_1 {\mathbf}{K}_2(\Psi, {\mathbf}{u})-D_1 {\mathbf}{K}_2(\tilde{\Psi},\tilde{{\mathbf}{u}}) \right \|_{{\mathcal}{B}({\mathcal}{X})}
\lesssim C_\Psi\|\Psi-\tilde{\Psi}\|_{\mathcal}{X}$$ which implies the continuity of $D_1 {\mathbf}{K}_2: {\mathcal}{X} \times {\mathcal}{H} \to {\mathcal}{B}({\mathcal}{X})$ at $(\Psi,{\mathbf}{u})$; since $(\Psi, {\mathbf}{u})$ was arbitrary, $D_1 {\mathbf}{K}_2$ is continuous everywhere. The claim now follows from [@zeidler], p. 140, Proposition 4.14.
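Combining Lemmas \[lem:K1\] and \[lem:K2\] with ${\mathbf}{K}={\mathbf}{K}_1+{\mathbf}{K}_2$, the full Fréchet derivative of ${\mathbf}{K}$ reads

```latex
[D{\mathbf}{K}(\Psi,{\mathbf}{u})(\Phi,{\mathbf}{v})](\tau)
  =S(\tau)(1-P){\mathbf}{v}
   -\int_0^\infty e^{\tau-\tau'}PD{\mathbf}{N}(\Psi(\tau'))\Phi(\tau')\,d\tau'
   +\int_0^\tau S(\tau-\tau')D{\mathbf}{N}(\Psi(\tau'))\Phi(\tau')\,d\tau'
```

for all $\tau \geq 0$.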
Now we can show the existence of a global (with respect to the time variable $\tau$) mild solution to the wave maps problem.
\[thm:global1\] Let $\delta>0$ be sufficiently small and assume that $\psi^T$ is mode stable. Then, for any ${\mathbf}{u} \in {\mathcal}{H}$ with $\|{\mathbf}{u}\|< \delta$, there exists a $\Psi(\cdot; {\mathbf}{u}) \in {\mathcal}{X}$ such that $\Psi(\cdot; {\mathbf}{u})={\mathbf}{K}(\Psi(\cdot;{\mathbf}{u}),{\mathbf}{u})$. Moreover, the solution $\Psi(\cdot; {\mathbf}{u})$ is unique in a sufficiently small neighborhood of ${\mathbf}{0}$ in ${\mathcal}{X}$ and the mapping ${\mathbf}{u} \mapsto \Psi(\cdot;{\mathbf}{u}): {\mathcal}{H} \to {\mathcal}{X}$ is continuously Fréchet differentiable.
Define $\tilde{{\mathbf}{K}}: {\mathcal}{X} \times {\mathcal}{H} \to {\mathcal}{X}$ by $\tilde{{\mathbf}{K}}(\Psi,{\mathbf}{u}):=\Psi-{\mathbf}{K}(\Psi,{\mathbf}{u})$. Then we have $\tilde{{\mathbf}{K}}({\mathbf}{0},{\mathbf}{0})={\mathbf}{0}$. According to Lemmas \[lem:K1\] and \[lem:K2\], $\tilde{{\mathbf}{K}}$ is continuously Fréchet differentiable. By Lemma \[lem:N\], we have $D{\mathbf}{N}({\mathbf}{0})={\mathbf}{0}$ and thus, by Lemma \[lem:K2\], we infer $D_1 {\mathbf}{K}({\mathbf}{0}, {\mathbf}{0})={\mathbf}{0}$. Consequently, $D_1 \tilde{{\mathbf}{K}}({\mathbf}{0},{\mathbf}{0})={\mathbf}{1}$, the identity on ${\mathcal}{X}$. In particular, $D_1 \tilde{{\mathbf}{K}}({\mathbf}{0},{\mathbf}{0})$ is an isomorphism and the implicit function theorem (see e.g., [@zeidler], p. 150, Theorem 4.B) yields the existence of a $\Psi(\cdot; {\mathbf}{u}) \in {\mathcal}{X}$ such that $\tilde{{\mathbf}{K}}(\Psi(\cdot; {\mathbf}{u}),{\mathbf}{u})={\mathbf}{0}$ for all ${\mathbf}{u} \in {\mathcal}{H}$ in a sufficiently small neighborhood of the origin. Furthermore, ${\mathbf}{u} \mapsto \Psi(\cdot; {\mathbf}{u}): {\mathcal}{H} \to {\mathcal}{X}$ is a $C^1$–map in the sense of Fréchet.
Global existence for arbitrarily small data {#sec:globalarbitrary}
-----------------------------------------
Theorem \[thm:global1\] provides us with a global mild solution for the wave maps problem. However, we are not able to specify the initial data freely. Instead, the chosen initial data get modified along the one–dimensional subspace spanned by the gauge mode. This is necessary in order to suppress the instability introduced by the time translation symmetry of the original problem. This instability shows up because we have in fact fixed the blow up time. However, perturbing the initial data of $\psi^T$ (i.e., choosing arbitrary $(f,g)$) does not, in general, preserve the blow up time of the solution. Therefore, one might hope to eliminate the shortcomings of Theorem \[thm:global1\] by allowing for a variable blow up time. This issue will be pursued in the current section. In fact, there is one situation where the specified data are not modified, namely when they are chosen to be identically zero. Then, the corresponding global solution from Theorem \[thm:global1\] is the zero solution. Loosely speaking, we are going to use the implicit function theorem to extend this to a neighborhood of zero.
Recall that we intend to solve the system and thus, the initial data we want to prescribe are of the form $$\Psi(0)(\rho)=\left ( \begin{array}{c}
T\rho^2\left [g(T\rho)-\psi^T_t(0,T\rho) \right ] \\
T\rho \left [f'(T\rho)-\psi^T_r(0,T\rho) \right ] +2 \left [f(T\rho)-\psi^T(0,T\rho)
\right ]
\end{array} \right )$$ where $(f,g)$ are free functions. This rather complicated looking expression is a consequence of the various variable transformations we have performed. In what follows it is convenient to rewrite this in a different form. First, we set $$\label{eq:defv}
{\mathbf}{v}(\rho):=\left ( \begin{array}{c}
\rho^2\left [g(\rho)-\psi^1_t(0,\rho) \right ] \\
\rho \left [f'(\rho)-\psi^1_r(0,\rho) \right ] +2 \left [f(\rho)-\psi^1(0,\rho)
\right ]
\end{array} \right ).$$ Note carefully that ${\mathbf}{v}$ does not depend on the blow up time $T$. Thus, varying ${\mathbf}{v}$ is equivalent to varying $(f,g)$ and therefore, ${\mathbf}{v}$ are the free data of the problem. The dependence on the blow up time $T$ is encoded in the operator ${\mathbf}{U}$, formally defined by $${\mathbf}{U}({\mathbf}{v},T)(\rho):=\left ( \begin{array}{c}
\frac{1}{T}v_1(T\rho)+T\rho^2 \left [\psi_t^1(0,T\rho)-\psi_t^T(0,T\rho) \right ] \\
v_2(T\rho)+T\rho \left [\psi_r^1(0,T\rho)-\psi_r^T(0,T\rho) \right ]+2 \left [\psi^1(0,T\rho)-\psi^T(0,T\rho) \right ]
\end{array} \right ).$$ This can be written in a simpler form by recalling that $\psi^T(t,r)=f_0(\frac{r}{T-t})$ and thus, $$\begin{aligned}
\psi^T(0,T\rho)=f_0(\rho) & \psi^1(0,T\rho)=f_0(T\rho) \\
\psi_t^T(0,T\rho)=\frac{\rho}{T} f_0'(\rho) & \psi_t^1(0,T\rho)=T\rho f_0'(T\rho) \\
\psi_r^T(0,T\rho)=\frac{1}{T}f_0'(\rho) & \psi_r^1(0,T\rho)=f_0'(T\rho).\end{aligned}$$ Consequently, we obtain $$\label{eq:defU}
{\mathbf}{U}({\mathbf}{v},T)(\rho)=\left ( \begin{array}{c}
\frac{1}{T} \left [ v_1(T\rho)+T^3\rho^3 f_0'(T\rho) \right ] -\rho^3 f_0'(\rho) \\
v_2(T\rho)+T\rho f_0'(T\rho)+2 f_0(T\rho)-\rho f_0'(\rho)-2 f_0(\rho)
\end{array} \right ).$$ With the above correspondence between ${\mathbf}{v}$ and $(f,g)$, we obviously have $$\Psi(0)={\mathbf}{U}({\mathbf}{v},T)$$ for the initial data. The advantage of this new notation is that the dependencies of the data on $(f,g)$ (or, equivalently, ${\mathbf}{v}$) on the one hand, and $T$ on the other, are clearly separated now. In order to obtain a well–posed initial value problem in the lightcone ${\mathcal}{C}_T$, the data $(f,g)$ have to be specified on the interval $[0,T]$. This, however, introduces the technical problem that the data space depends on $T$. In order to fix this, we restrict $T$ to have values in the interval $I:=(\frac{1}{2},\frac{3}{2})$. This is no real restriction since our existence argument will be perturbative around $T=1$ anyway. We need to find a space ${\mathcal}{Y}$ such that ${\mathbf}{U}$ has nice properties as a map from ${\mathcal}{Y} \times I$ to ${\mathcal}{H}$. To this end we set $$\tilde{Y}_1:=\left \{u \in C^2[0,\tfrac{3}{2}]: u(0)=u'(0)=0 \right \}, \quad
\tilde{Y}_2:=\left \{u \in C^2[0,\tfrac{3}{2}]: u(0)=0 \right \}$$ and define two norms $\|\cdot\|_{Y_1}$, $\|\cdot\|_{Y_2}$ on $\tilde{Y}_1$, $\tilde{Y}_2$, respectively, by setting $$\|u\|_{Y_1}^2:=\int_0^{3/2} |u''(\rho)|^2 d\rho, \quad
\|u\|_{Y_2}^2:=\int_0^{3/2} |u'(\rho)|^2 d\rho+\int_0^{3/2} |u''(\rho)|^2 \rho^2 d\rho.$$ Thanks to the boundary conditions for functions in $\tilde{Y}_j$, $j=1,2$, the mappings $\|\cdot\|_{Y_j}: \tilde{Y}_j \to [0,\infty)$ are really norms (not only seminorms). We denote by $Y_j$ the completion of $(\tilde{Y}_j, \|\cdot\|_{Y_j})$. Furthermore, we set ${\mathcal}{Y}:=Y_1 \times Y_2$ with the canonical norm $$\|{\mathbf}{u}\|_{\mathcal}{Y}^2:=\|u_1\|_{Y_1}^2+\|u_2\|_{Y_2}^2.$$ By construction, $Y_1$, $Y_2$ and ${\mathcal}{Y}$ are Banach spaces. For notational convenience we also set $\tilde{{\mathcal}{Y}}:=\tilde{Y}_1 \times \tilde{Y}_2$. Then ${\mathcal}{Y}$ is the completion of $(\tilde{{\mathcal}{Y}},\|\cdot\|_{{\mathcal}{Y}})$ and thus, the notation is consistent. Recall that, similarly, the Hilbert space ${\mathcal}{H}$ emerged as the completion of $$\tilde{{\mathcal}{H}}:=\{u \in C^2[0,1]: u(0)=u'(0)=0\} \times \{u \in C^1[0,1]: u(0)=0\}$$ with respect to the norm $$\|{\mathbf}{u}\|^2:=\int_0^1 \frac{|u_1'(\rho)|^2}{\rho^2}d\rho
+\int_0^1 |u_2'(\rho)|^2d\rho,$$ see Sec. \[sec:oplin\] or [@DSA]. Here, we also write ${\mathcal}{H}=:H_1 \times H_2$ and $\tilde{{\mathcal}{H}}=:\tilde{H}_1 \times \tilde{H}_2$ with the obvious definitions of $H_j$, $\tilde{H}_j$, $j=1,2$. We need a few preparatory technical lemmas.
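Many of the estimates below rely on the one–dimensional Hardy inequality, which we recall here for convenience: for $u \in C^1[0,1]$ with $u(0)=0$,

```latex
\int_0^1 \frac{|u(\rho)|^2}{\rho^2}\,d\rho \leq 4\int_0^1 |u'(\rho)|^2\,d\rho.
```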
\[lem:Fvcont\] For $(v_j, T) \in \tilde{Y}_j \times I$, $j=1,2$, define $[F_j(v_j,T)](\rho):=v_j(T\rho)$. Then $F_j(v_j, T) \in H_j$ and we have the estimate $$\|F_j(v_j, T)-F_j(\tilde{v}_j, T)\|_j \lesssim \|v_j - \tilde{v}_j\|_{Y_j}$$ for all $v_j, \tilde{v}_j \in \tilde{Y}_j$ and all $T \in I$. As a consequence, $F_j$ extends to a continuous mapping from $Y_j \times I$ to $H_j$.
Let $v_j \in \tilde{Y}_j$. Since $\rho \in [0,1]$ implies $T\rho \in [0,T] \subset [0,\frac{3}{2}]$ for $T \in I$, we see that $\rho \mapsto v_j(T\rho)$ defines a function in $C^2[0,1]$. To be more precise, this function is given by $v_j(T\cdot)|_{[0,1]} \in C^2[0,1]$, the restriction of $v_j(T\cdot)$ to the interval $[0,1]$. Consequently, $F_j(v_j,T) \in C^2[0,1]$, $j=1,2$. Furthermore, the boundary conditions $v_1(0)=v_1'(0)=v_2(0)=0$ imply that $$[F_1(v_1,T)](0)=[F_1(v_1,T)]'(0)=[F_2(v_2,T)]'(0)=0$$ which shows that $F_j(v_j, T) \in \tilde{H}_j \subset H_j$. By definition, we have $$[F_j(v_j,T)]'(\rho)=Tv_j'(T\rho)$$ and this implies $$\begin{aligned}
\|F_1(v_1,T)-F_1(\tilde{v}_1,T)\|_1^2&=T^2 \int_0^1
\frac{|v_1'(T\rho)-\tilde{v}_1'(T\rho)|^2}{\rho^2}d\rho \\
&\lesssim T^4 \int_0^1 |v_1''(T\rho)-\tilde{v}_1''(T\rho)|^2 d\rho \\
&=T^3 \int_0^T |v_1''(\rho)-\tilde{v}_1''(\rho)|^2 d\rho \\
&\lesssim \|v_1-\tilde{v}_1\|_{Y_1}^2\end{aligned}$$ for all $v_1, \tilde{v}_1 \in \tilde{Y}_1$ and $T \in I$ by Hardy’s inequality (recall that $v_1'(0)=0$). Analogously, we have $$\begin{aligned}
\|F_2(v_2,T)-F_2(\tilde{v}_2,T)\|_2^2&=T^2 \int_0^1 |v_2'(T\rho)-\tilde{v}_2'(T\rho)|^2 d\rho \\
&\lesssim \|v_2-\tilde{v}_2\|_{Y_2}^2\end{aligned}$$ for all $v_2,\tilde{v}_2 \in \tilde{Y}_2$ and $T \in I$, which proves the claimed estimate. Consequently, Lemma \[lem:context\] shows that, for any $T \in I$, the mapping $F_j(\cdot,T)$ extends to a continuous function $F_j(\cdot,T): Y_j \to H_j$ *and the continuity is uniform with respect to $T$*. Thus, in order to show that $F_j: Y_j \times I \to H_j$ is continuous, it suffices to show continuity of $F_j(v_j, \cdot): I \to H_j$ for any fixed $v_j \in Y_j$. To see this, note that $$\begin{aligned}
\label{eq:proofFcont}
\|F_1(v_1,T)-F_1(v_1,\tilde{T})\|_1^2 &\lesssim \int_0^1 \left |T^2 v_1''(T\rho)-\tilde{T}^2 v_1''(\tilde{T}\rho) \right |^2 d\rho \\
&\lesssim \int_0^1 \left |T^2 v_1''(T\rho)-T^2 \tilde{v}_1''(T\rho) \right |^2 d\rho+
\int_0^1 \left |T^2 \tilde{v}_1''(T\rho)-\tilde{T}^2 \tilde{v}_1''(\tilde{T}\rho) \right |^2 d\rho \nonumber \\
&\quad +\int_0^1 \left |\tilde{T}^2 \tilde{v}_1''(\tilde{T}\rho)-\tilde{T}^2 v_1''(\tilde{T}\rho) \right |^2 d\rho \nonumber \\
&\lesssim \|v_1-\tilde{v}_1\|_{Y_1}^2
+\int_0^1 \left |T^2 \tilde{v}_1''(T\rho)-\tilde{T}^2 \tilde{v}_1''(\tilde{T}\rho) \right |^2 d\rho \nonumber\end{aligned}$$ for all $v_1, \tilde{v}_1 \in Y_1$ and $T, \tilde{T} \in I$. Now let $\varepsilon>0$ be arbitrary. For any given $\delta>0$, we can find a $\tilde{v}_1 \in \tilde{Y}_1$ such that $\|v_1-\tilde{v}_1\|_{Y_1}<\delta$. If $\delta>0$ is chosen small enough, the above estimate implies $$\|F_1(v_1,T)-F_1(v_1,\tilde{T})\|_1\leq \tfrac{\varepsilon}{2}+C \left (\int_0^1 \left |T^2 \tilde{v}_1''(T\rho)-\tilde{T}^2 \tilde{v}_1''(\tilde{T}\rho) \right |^2 d\rho \right )^{1/2}$$ for an absolute constant $C>0$, and the integral goes to zero as $\tilde{T} \to T$ since $\tilde{v}_1'' \in C[0,\frac{3}{2}]$. This shows that $F_1(v_1,\cdot): I \to H_1$ is continuous for any $v_1 \in Y_1$. The proof for $F_2$ is completely analogous.
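The rescaling step in the preceding proof is the elementary substitution $s=T\rho$: for $T \in I$,

```latex
T^4\int_0^1 |v_1''(T\rho)-\tilde{v}_1''(T\rho)|^2\,d\rho
  =T^3\int_0^T |v_1''(s)-\tilde{v}_1''(s)|^2\,ds
  \leq T^3\,\|v_1-\tilde{v}_1\|_{Y_1}^2
```

and $T^3 \leq \left(\tfrac{3}{2}\right)^3$ is uniformly bounded on $I$.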
From now on we may assume that $F_j$ is defined on all of $Y_j \times I$. In the next lemma we show that $F_j$ has a continuous partial Fréchet derivative with respect to $T$.
\[lem:Fvdiff\] The mapping $F_j: Y_j \times I \to H_j$, $j=1,2$, from Lemma \[lem:Fvcont\] is partially Fréchet differentiable with respect to the second variable. Moreover, $v_j \in Y_j$ implies that $\rho \mapsto \rho v_j'(T\rho) \in C[0,1]$ and the derivative $D_2 F_j(v_j,T): \mathbb{R} \to H_j$ at $(v_j,T)$ applied to $\lambda \in \mathbb{R}$ is given by $$[D_2 F_j(v_j,T)\lambda](\rho)=\lambda \rho v_j'(T\rho).$$ Finally, $D_2 F_j: Y_j \times I \to {\mathcal}{B}(\mathbb{R},H_j)$ is continuous.
We proceed in three steps: First, we show that $F_j$ is partially Fréchet differentiable at any $(v_j,T) \in \tilde{Y}_j \times I$. Second, we prove that the derivative $D_2 F_j$ can be extended to a continuous function from $Y_j \times I$ to ${\mathcal}{B}(\mathbb{R},H_j)$. Finally, we show that the extended function is the partial Fréchet derivative of $F_j$.
For $(v_j,T) \in \tilde{Y}_j \times I$ we set $[\tilde{D}_2F_j(v_j, T)\lambda](\rho):=\lambda \rho v_j'(T\rho)$ and claim that $\tilde{D}_2 F_j(v_j,T)$ is the partial Fréchet derivative of $F_j$ at $(v_j,T)$. In order to prove this, we have to show that $$\frac{\|F_j(v_j, T+\lambda)-F_j(v_j, T)-\tilde{D}_2 F_j(v_j,T)\lambda\|_j}{|\lambda|} \to 0$$ as $|\lambda|\to 0$. Choose $\lambda_0$ so small that $T\pm \lambda_0 \in I$. Since we are interested in the limit $|\lambda| \to 0$, we assume in the following that $|\lambda| \in (0,\lambda_0]$. By definition we have $$[F_j(v_j,T+\lambda)]'(\rho)=(T+\lambda)v_j'((T+\lambda)\rho)$$ and $$[\tilde{D}_2 F_j(v_j,T)\lambda]'(\rho)=\lambda \left [T \rho v_j''(T\rho)+ v_j'(T\rho) \right ].$$ Consequently, by the fundamental theorem of calculus, we obtain $$\begin{aligned}
&\left | [F_j(v_j,T+\lambda)]'(\rho)-[F_j(v_j,T)]'(\rho)-[\tilde{D}_2F_j(v_j, T)\lambda]'(\rho) \right | \\
&=\left |(T+\lambda)v_j'((T+\lambda)\rho)-Tv_j'(T\rho)-
\lambda \left [T \rho v_j''(T\rho)+ v_j'(T\rho) \right ] \right | \\
&=\left | \lambda \int_0^1 \left [(T+h\lambda)\rho v_j''((T+h\lambda)\rho)+v_j'((T+h\lambda)\rho) \right ] dh
-\lambda \left [T \rho v_j''(T\rho)+ v_j'(T\rho) \right ] \right | \\
&\leq |\lambda| \left [ \rho \left |\int_0^1 \left [(T+h\lambda) v_j''((T+h\lambda)\rho)-T v_j''(T\rho) \right ]dh \right |
+\left |\int_0^1 \left [v_j'((T+h\lambda)\rho)-v_j'(T\rho) \right ]dh \right | \right ]\end{aligned}$$ and this shows $$\begin{aligned}
\label{eq:proofFdiff1}
&\frac{\|F_1(v_1, T+\lambda)-F_1(v_1, T)-\tilde{D}_2 F_1(v_1,T)\lambda\|_1^2}{|\lambda|^2} \\
&\leq \int_0^1 \left |
\int_0^1 \left [(T+h\lambda) v_1''((T+h\lambda)\rho)-T v_1''(T\rho) \right ]dh \right |^2 d\rho
\nonumber \\
&\quad + \int_0^1 \frac{1}{\rho^2} \left |
\int_0^1 \left [v_1'((T+h\lambda)\rho)-v_1'(T\rho) \right ] dh \right |^2 d\rho \nonumber \\
&\lesssim \int_0^1
\int_0^1 \left |(T+h\lambda) v_1''((T+h\lambda)\rho)-T v_1''(T\rho) \right |^2dh d\rho \nonumber\end{aligned}$$ by Cauchy–Schwarz and Hardy’s inequality. Here, the differentiation (with respect to $\rho$) under the integral sign is justified by the fact that $v_1' \in C^1[0,\frac{3}{2}]$. We set $$\chi_1(v_1,w_1,\lambda,\mu):=\left (
\int_0^1 \int_0^1 \left |(T+h\lambda) v_1''((T+h\lambda)\rho)-(T+h\mu) w_1''((T+h\mu)\rho) \right |^2dh d\rho \right )^{1/2}$$ for $v_1,w_1 \in \tilde{Y}_1$ and $\lambda,\mu \in [-\lambda_0,\lambda_0]$. Since $v_1'' \in C[0,\frac{3}{2}]$, we have $\chi_1(v_1,v_1,\lambda,\mu)\to 0$ as $\lambda \to \mu$. Similarly, we have $$\begin{aligned}
\label{eq:proofFdiff2}
&\frac{\|F_2(v_2, T+\lambda)-F_2(v_2, T)-\tilde{D}_2 F_2(v_2,T)\lambda\|_2^2}{|\lambda|^2} \\
&\lesssim \int_0^1 \int_0^1 \left |(T+h\lambda) \rho v_2''((T+h\lambda)\rho)-T \rho v_2''(T\rho) \right |^2 dh d\rho \nonumber
\\
&\quad + \int_0^1 \int_0^1 \left |v_2'((T+h\lambda)\rho)-v_2'(T\rho) \right |^2 dh d\rho \to 0 \nonumber\end{aligned}$$ as $|\lambda| \to 0$ and we also define $$\begin{aligned}
\chi_2(v_2,w_2,\lambda,\mu):=&\left ( \int_0^1 \int_0^1 \left |(T+h\lambda) \rho v_2''((T+h\lambda)\rho)
-(T+h\mu) \rho w_2''((T+h\mu)\rho) \right |^2 dh d\rho \right . \\
&+ \left . \int_0^1 \int_0^1
\left |v_2'((T+h\lambda)\rho)-w_2'((T+h\mu)\rho) \right |^2 dh d\rho \right )^{1/2}\end{aligned}$$ for $v_2,w_2 \in \tilde{Y}_2$. The fact that $\chi_j(v_j,v_j,\lambda,0) \to 0$ as $|\lambda| \to 0$ shows that $F_j$ is partially Fréchet differentiable with respect to the second variable at $(v_j,T) \in \tilde{Y}_j \times I$ and we have $D_2 F_j(v_j,T)=\tilde{D}_2F_j(v_j,T)$ as claimed.
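The fundamental theorem of calculus step used above rests on the identity

```latex
(T+\lambda)v_j'((T+\lambda)\rho)-Tv_j'(T\rho)
  =\int_0^1 \frac{d}{dh}\left[(T+h\lambda)\,v_j'((T+h\lambda)\rho)\right]dh
  =\lambda\int_0^1 \left[(T+h\lambda)\rho\, v_j''((T+h\lambda)\rho)
   +v_j'((T+h\lambda)\rho)\right]dh,
```

valid for $v_j \in \tilde{Y}_j$ since then $v_j' \in C^1[0,\frac{3}{2}]$.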
Now we turn to the second step, the continuity of the derivative. By definition, for all $v_j, \tilde{v}_j \in \tilde{Y}_j$, $T, \tilde{T} \in I$ and $\lambda \in \mathbb{R}$, we have $$\begin{aligned}
&\|D_2 F_1(v_1,T)\lambda-D_2 F_1(\tilde{v}_1,\tilde{T})\lambda\|_1^2 \\
&\lesssim |\lambda|^2 \left [ \int_0^1 \left |Tv_1''(T\rho)
-\tilde{T}\tilde{v}_1''(\tilde{T}\rho) \right |^2 d\rho+
\int_0^1 \frac{1}{\rho^2}\left | v_1'(T\rho)-\tilde{v}_1'(\tilde{T}\rho) \right |^2 d\rho \right ] \\
&\lesssim |\lambda|^2 \int_0^1 \left |T v_1''(T\rho)-\tilde{T} \tilde{v}_1''(\tilde{T}\rho) \right |^2 d\rho
\end{aligned}$$ and therefore, $\|D_2 F_1(v_1,T)\lambda-D_2 F_1(\tilde{v}_1,T)\lambda\|_1^2
\lesssim |\lambda|^2 \|v_1-\tilde{v}_1\|_{Y_1}^2$. Analogously, $$\begin{aligned}
&\|D_2 F_2(v_2,T)\lambda-D_2 F_2(\tilde{v}_2,\tilde{T})\lambda\|_2^2 \\
&\lesssim |\lambda|^2 \left [ \int_0^1 \left |Tv_2''(T\rho)-\tilde{T}\tilde{v}_2''(\tilde{T}\rho) \right |^2 \rho^2 d\rho+
\int_0^1 \left | v_2'(T\rho)-\tilde{v}_2'(\tilde{T}\rho) \right |^2 d\rho \right ],
\end{aligned}$$ and thus, $\|D_2 F_2(v_2,T)\lambda-D_2 F_2(\tilde{v}_2,\tilde{T})\lambda\|_2^2
\lesssim |\lambda|^2 \|v_2-\tilde{v}_2\|_{Y_2}^2$. Consequently, we obtain $$\|D_2 F_j(v_j,T)-D_2 F_j(\tilde{v}_j,T)\|_{{\mathcal}{B}(\mathbb{R},H_j)}\lesssim \|v_j-\tilde{v}_j\|_{Y_j}$$ for all $v_j, \tilde{v}_j \in \tilde{Y}_j$ and $T \in I$. This estimate implies that, for any fixed $T \in I$, $D_2 F_j(\cdot,T)$ can be extended to a continuous map $D_2 F_j(\cdot,T): Y_j \to {\mathcal}{B}(\mathbb{R},H_j)$ (use Lemma \[lem:context\] and recall that ${\mathcal}{B}(\mathbb{R},H_j)$ is a Banach space) *and the continuity is uniform with respect to $T \in I$*. Now let $v_1 \in Y_1$ and $\varepsilon>0$ be arbitrary and choose a $\tilde{v}_1 \in \tilde{Y}_1$ with $\|v_1-\tilde{v}_1\|_{Y_1}\leq \varepsilon$. Then we have $$\begin{aligned}
&\|D_2 F_1(v_1,T)-D_2 F_1(v_1,\tilde{T})\|_{{\mathcal}{B}(\mathbb{R},H_1)}^2 \lesssim \int_0^1 \left |T v_1''(T\rho)-\tilde{T} v_1''(\tilde{T}\rho) \right |^2 d\rho \\
&\lesssim \int_0^1 \left |T v_1''(T\rho)-T \tilde{v}_1''(T\rho) \right |^2 d\rho
+\int_0^1 \left |T \tilde{v}_1''(T\rho)-\tilde{T} \tilde{v}_1''(\tilde{T}\rho) \right |^2 d\rho \\
&\quad +\int_0^1 \left |\tilde{T} \tilde{v}_1''(\tilde{T}\rho)
-\tilde{T} v_1''(\tilde{T}\rho) \right |^2 d\rho \\
&\lesssim \|v_1-\tilde{v}_1\|_{Y_1}^2
+\int_0^1 \left |T \tilde{v}_1''(T\rho)-\tilde{T} \tilde{v}_1''(\tilde{T}\rho) \right |^2 d\rho \lesssim \varepsilon\end{aligned}$$ provided that $|T-\tilde{T}|$ is sufficiently small. Here we use that $\tilde{v}_1'' \in C[0,\frac{3}{2}]$. An analogous estimate holds for $D_2 F_2$ and we conclude that $D_2 F_j(v_j,\cdot): I \to {\mathcal}{B}(\mathbb{R},H_j)$ is continuous for any $v_j \in Y_j$. As a consequence, we see that $D_2 F_j: Y_j \times I \to {\mathcal}{B}(\mathbb{R},H_j)$ is continuous (recall that the continuity of $D_2 F_j(\cdot,T): Y_j \to {\mathcal}{B}(\mathbb{R},H_j)$ is uniform with respect to $T \in I$).
Finally, we want to justify the above notation, i.e., we want to show that $D_2 F_j$ is indeed the partial Fréchet derivative of $F_j$. First, we prove that the estimates and are preserved by the extension. To this end we show that, for fixed $\lambda, \mu \in [-\lambda_0,\lambda_0]$, $\chi_j$ can be extended to a continuous map $\chi_j(\cdot, \cdot, \lambda, \mu): Y_j \times Y_j \to \mathbb{R}$. As always, we intend to use Lemma \[lem:context\]. By applying Minkowski’s inequality, we obtain $$\begin{aligned}
&|\chi_1(v_1,w_1,\lambda,\mu)-\chi_1(\tilde{v}_1,\tilde{w}_1,\lambda,\mu)| \\
&=\left | \left (\int_0^1 \int_0^1 \left |(T+h\lambda) v_1''((T+h\lambda)\rho)
-(T+h\mu) w_1''((T+h\mu)\rho) \right |^2 dh d\rho \right )^{1/2} \right .\\
&\quad - \left .\left ( \int_0^1 \int_0^1 \left |(T+h\lambda) \tilde{v}_1''((T+h\lambda)\rho)
-(T+h\mu) \tilde{w}_1''((T+h\mu)\rho) \right |^2 dh d\rho \right )^{1/2} \right | \\
&\leq \left (\int_0^1 \int_0^1 \left |(T+h\lambda) v_1''((T+h\lambda)\rho)
-(T+h\mu) w_1''((T+h\mu)\rho) \right . \right . \\
& \quad \left . \left .-(T+h\lambda) \tilde{v}_1''((T+h\lambda)\rho)+(T+h\mu)
\tilde{w}_1''((T+h\mu)\rho) \right |^2 dh d\rho \right )^{1/2}\end{aligned}$$ and this implies $$\begin{aligned}
&|\chi_1(v_1,w_1,\lambda,\mu)-\chi_1(\tilde{v}_1,\tilde{w}_1,\lambda,\mu)| \\
& \leq \left (
\int_0^1 \int_0^1 |T+h\lambda|^2 \left | v_1''((T+h\lambda)\rho)
-\tilde{v}_1''((T+h\lambda)\rho) \right |^2 dh d\rho \right )^{1/2} \\
&\quad +\left (
\int_0^1 \int_0^1 |T+h\mu|^2 \left | w_1''((T+h\mu)\rho)
-\tilde{w}_1''((T+h\mu)\rho) \right |^2 dh d\rho \right )^{1/2}
=:I_1^{1/2}+I_2^{1/2}.\end{aligned}$$ We apply Fubini’s theorem (this is justified since we assume $v_j, \tilde{v}_j, w_j, \tilde{w}_j \in C^2[0,\frac{3}{2}]$) and a change of variables to obtain $$\begin{aligned}
I_1&=\int_0^1 |T+h\lambda| \int_0^{T+h\lambda} \left | v_1''(\rho)
-\tilde{v}_1''(\rho) \right |^2 d\rho dh \lesssim \|v_1-\tilde{v}_1\|_{Y_1}^2\end{aligned}$$ and, analogously, $I_2\lesssim \|w_1-\tilde{w}_1\|_{Y_1}^2$ for all $v_1, w_1, \tilde{v}_1, \tilde{w}_1 \in \tilde{Y}_1$. Note carefully that the implicit constants in these estimates are independent of $\lambda, \mu$ (as long as $\lambda, \mu \in [-\lambda_0, \lambda_0]$ which we assume throughout). The function $\chi_2$ can be treated in a completely analogous way and putting everything together we arrive at $$|\chi_j(v_j,w_j,\lambda,\mu)-\chi_j(\tilde{v}_j,\tilde{w}_j,\lambda,\mu)|\lesssim
\|(v_j,w_j)-(\tilde{v}_j,\tilde{w}_j)\|_{Y_j \times Y_j}$$ for all $(v_j,w_j), (\tilde{v}_j,\tilde{w}_j) \in \tilde{Y}_j \times \tilde{Y}_j$. Consequently, Lemma \[lem:context\] shows that $\chi_j$ can be (uniquely) extended to a continuous function $\chi_j(\cdot,\cdot,\lambda,\mu): Y_j \times Y_j \to \mathbb{R}$ and the estimates , remain valid for all $v_j \in Y_j$. Furthermore, directly from the definition we have $$\begin{aligned}
\chi_j(v_j,v_j,\lambda,0)&\lesssim \chi_j(v_j,\tilde{v}_j,\lambda,\lambda)
+\chi_j(\tilde{v}_j,\tilde{v}_j,\lambda,0)+\chi_j(\tilde{v}_j,v_j,0,0) \\
&\lesssim \|v_j-\tilde{v}_j\|_{Y_j}+\chi_j(\tilde{v}_j,\tilde{v}_j,\lambda,0)\end{aligned}$$ for all $v_j, \tilde{v}_j \in \tilde{Y}_j$ and $\lambda \in [-\lambda_0,\lambda_0]$ (cf. the estimate for $I_1$ above). By continuity, this estimate extends to all $v_j, \tilde{v}_j \in Y_j$. Now let $v_j \in Y_j$ and $\varepsilon>0$ be arbitrary. Then, for any $\delta>0$, we can find an element $\tilde{v}_j \in \tilde{Y}_j$ such that $\|v_j-\tilde{v}_j\|_{Y_j}< \delta$. Consequently, we obtain $$\chi_j(v_j,v_j,\lambda,0)\leq \tfrac{1}{2}\varepsilon+\chi_j(\tilde{v}_j,\tilde{v}_j
,\lambda,0) < \varepsilon$$ provided that $\delta$ and $|\lambda|$ are sufficiently small since $\chi_j(\tilde{v}_j,\tilde{v}_j,\lambda,0)\to 0$ as $|\lambda|\to 0$. This shows that $\chi_j(v_j,v_j,\lambda,0) \to 0$ as $|\lambda|\to 0$ and, in conjunction with the estimates , , this implies that the continuous extension $D_2 F_j$ of $\tilde{D}_2 F_j$ is indeed the partial Fréchet derivative of $F_j$ as suggested by the notation. Finally, the continuous extension $D_2 F_j$ is explicitly given by $$D_2 F_j(v_{j},T)\lambda=\lim_{k \to \infty} \tilde{D}_2 F_j(v_{jk}, T)\lambda$$ for an arbitrary sequence $(v_{jk}) \subset \tilde{Y}_j$ with $\|v_j-v_{jk}\|_{Y_j} \to 0$ as $k \to \infty$ where the limit is taken in $H_j$. Since convergence in $H_j$ implies pointwise convergence (see Lemma \[lem:Hcont\]), the explicit expression $$D_2 F_j(v_j,T)\lambda=\lambda \rho v_j'(T\rho)$$ remains valid for all $v_j \in Y_j$. In particular, we see that $v_j \in Y_j$ implies that $\rho \mapsto \rho v_j'(T\rho)$ belongs to $C[0,1]$.
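As an elementary cross-check of this formula, one can compare a finite-difference quotient in $T$ with the claimed derivative. The sketch below assumes that $F_j$ acts by rescaling, $F_j(v,T)(\rho)=v(T\rho)$ — consistent with $F_j(v_j,1)=v_j$ and with the derivative formula above — and uses an arbitrary smooth test function with $v(0)=v'(0)=0$:

```python
import math

# Assumed scaling form F(v, T)(rho) = v(T*rho); the claimed partial
# derivative with respect to T in direction lam is lam * rho * v'(T*rho).
def F(v, T):
    return lambda rho: v(T * rho)

# hypothetical test function with v(0) = v'(0) = 0
v  = lambda rho: rho**2 * math.sin(rho)
dv = lambda rho: 2 * rho * math.sin(rho) + rho**2 * math.cos(rho)

T, lam, h = 0.9, 1.0, 1e-6
for rho in (0.1, 0.5, 1.0):
    fd = (F(v, T + h * lam)(rho) - F(v, T)(rho)) / h  # finite difference in T
    exact = lam * rho * dv(T * rho)                   # claimed derivative
    assert abs(fd - exact) < 1e-4
```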
With these preparations at hand, we go back to the mapping ${\mathbf}{U}$. Note that the following result completely demystifies the role of the gauge mode.
\[lem:U\] The function ${\mathbf}{U}$ extends to a continuous mapping ${\mathbf}{U}: {\mathcal}{Y} \times I \to {\mathcal}{H}$. Furthermore, ${\mathbf}{U}$ is partially Fréchet differentiable with respect to the second variable, the derivative $D_2 {\mathbf}{U}: {\mathcal}{Y} \times I \to {\mathcal}{B}(\mathbb{R},{\mathcal}{H})$ is continuous and we have $$D_2 {\mathbf}{U}({\mathbf}{0},1)\lambda=2\lambda {\mathbf}{g}$$ for all $\lambda \in \mathbb{R}$ where ${\mathbf}{g}$ is the gauge mode.
By Eq. , we can write $${\mathbf}{U}({\mathbf}{v},T)=\left ( \begin{array}{c}
\frac{1}{T}\left [ F_1(v_1, T)+F_1(p^3 f_0', T) \right ]-p^3 f_0' \\
F_2(v_2, T)+F_2(p f_0', T)+2 F_2(f_0,T)-p f_0'-2 f_0
\end{array} \right )$$ for all ${\mathbf}{v} \in \tilde{{\mathcal}{Y}}$ and $T \in I$ where $p(\rho):=\rho$. Observe that $p^3 f_0' \in Y_1$ and $pf_0', f_0 \in Y_2$ since $f_0 \in C^\infty[0,\frac{3}{2}]$ and $f_0(0)=0$. By Lemma \[lem:Fvcont\], $F_j$ uniquely extends to a continuous map $F_j: Y_j \times I \to H_j$, $j=1,2$, and this yields the unique continuous extension of ${\mathbf}{U}$ to ${\mathbf}{U}: {\mathcal}{Y} \times I \to {\mathcal}{H}$. Furthermore, from Lemma \[lem:Fvdiff\] it follows that ${\mathbf}{U}$ is partially Fréchet differentiable with respect to the second variable and $D_2 {\mathbf}{U}: {\mathcal}{Y} \times I \to {\mathcal}{B}(\mathbb{R},{\mathcal}{H})$ is continuous. Finally, for any $v_j \in Y_j$, we have $F_j (v_j,1)=v_j$ as well as $[D_2 F_j(v_j,1)\lambda](\rho)=\lambda \rho v_j'(\rho)$ (Lemma \[lem:Fvdiff\]) and this yields $$[D_2 {\mathbf}{U}({\mathbf}{0},1)\lambda](\rho) =\lambda \left ( \begin{array}{c}
2\rho^3 f_0'(\rho)+\rho^4 f_0''(\rho) \\
3\rho f_0'(\rho)+\rho^2 f_0''(\rho)
\end{array} \right )
=\frac{2\lambda }{(1+\rho^2)^2} \left ( \begin{array}{c}
2 \rho^3 \\ \rho (3+\rho^2) \end{array} \right )=2\lambda {\mathbf}{g}(\rho)$$ as claimed.
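The closing computation can also be verified symbolically. The sketch below assumes the explicit profile $f_0(\rho)=2\arctan\rho$ for the fundamental self-similar solution (this form is not restated in the present section, so it is taken here as an assumption) and checks that both components of $D_2 {\mathbf}{U}({\mathbf}{0},1)$ reduce to the stated multiple of ${\mathbf}{g}(\rho)$:

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
f0 = 2 * sp.atan(rho)            # assumed form of the ground state f_0
f0p = sp.diff(f0, rho)           # f_0'(rho) = 2/(1+rho^2)
f0pp = sp.diff(f0, rho, 2)       # f_0''(rho) = -4 rho/(1+rho^2)^2

# first component: 2 rho^3 f_0' + rho^4 f_0''
c1 = 2 * rho**3 * f0p + rho**4 * f0pp
# second component: 3 rho f_0' + rho^2 f_0''
c2 = 3 * rho * f0p + rho**2 * f0pp

# compare with 2 g(rho) = (2/(1+rho^2)^2) * (2 rho^3, rho (3+rho^2))
assert sp.simplify(c1 - 4 * rho**3 / (1 + rho**2)**2) == 0
assert sp.simplify(c2 - 2 * rho * (3 + rho**2) / (1 + rho**2)**2) == 0
```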
Lemma \[lem:U\] can be interpreted as follows. The mapping $T \mapsto {\mathbf}{U}({\mathbf}{0},T)$ describes a curve in the initial data space ${\mathcal}{H}$ and ${\mathbf}{g}$ is (up to a factor $\frac{1}{2}$) the tangent vector to this curve at ${\mathbf}{U}({\mathbf}{0},1)={\mathbf}{0}$.
We can now return to the existence problem for the wave maps equation. Recall that we want to construct a global solution of the integral equation $$\label{eq:mainU}
\Psi(\tau)=S(\tau){\mathbf}{U}({\mathbf}{v},T)+\int_0^\tau S(\tau-\tau'){\mathbf}{N}(\Psi(\tau'))d\tau'$$ for $\tau \geq 0$. Theorem \[thm:global1\] constitutes a preliminary step in this direction since it yields a solution $\Psi(\cdot; {\mathbf}{u}) \in {\mathcal}{X}$ to $$\begin{aligned}
\label{eq:mainUcorr}
\Psi(\tau; {\mathbf}{u})=&S(\tau){\mathbf}{u}-e^\tau P \left [{\mathbf}{u}+\int_0^\infty e^{-\tau'}{\mathbf}{N}(\Psi(\tau'; {\mathbf}{u}))d\tau' \right ] \\
&+\int_0^\tau S(\tau-\tau'){\mathbf}{N}(\Psi(\tau'; {\mathbf}{u}))d\tau', \quad \tau \geq 0 \nonumber\end{aligned}$$ provided that $\|{\mathbf}{u}\|$ is sufficiently small, see the definition of ${\mathbf}{K}$ and recall that $S(\tau)P{\mathbf}{u}=e^\tau P{\mathbf}{u}$ (cf. the proof of Lemma \[lem:KtoX\]). Moreover, the map ${\mathbf}{u} \mapsto \Psi(\cdot; {\mathbf}{u})$ is continuously Fréchet differentiable. We call this map ${\mathbf}{E}$, i.e., for given ${\mathbf}{u} \in {\mathcal}{H}$ with $\|{\mathbf}{u}\|$ sufficiently small, we set ${\mathbf}{E}({\mathbf}{u}):=\Psi(\cdot; {\mathbf}{u})$ where $\Psi(\cdot; {\mathbf}{u}) \in {\mathcal}{X}$ is the solution from Theorem \[thm:global1\]. ${\mathbf}{E}: {\mathcal}{U} \subset {\mathcal}{H} \to {\mathcal}{X}$ is well–defined on a sufficiently small open neighborhood ${\mathcal}{U}$ of ${\mathbf}{0}$ in ${\mathcal}{H}$ and, according to Theorem \[thm:global1\], ${\mathbf}{E}$ is continuously Fréchet differentiable on ${\mathcal}{U}$. Note also that ${\mathbf}{E}({\mathbf}{0})={\mathbf}{0}$. This follows from the fact that the solution constructed in Theorem \[thm:global1\] is unique in a small neighborhood of ${\mathbf}{0}$ in ${\mathcal}{X}$ and obviously, ${\mathbf}{0} \in {\mathcal}{X}$ is a solution of Eq. if ${\mathbf}{u}={\mathbf}{0}$ (recall that ${\mathbf}{N}({\mathbf}{0})={\mathbf}{0}$, see Lemma \[lem:N\]). Since ${\mathbf}{U}({\mathbf}{0},1)={\mathbf}{0}$ and ${\mathbf}{U}: {\mathcal}{Y} \times I \to {\mathcal}{H}$ is continuous (Lemma \[lem:U\]), we see that ${\mathbf}{U}({\mathbf}{v},T) \in {\mathcal}{U}$ provided that $({\mathbf}{v}, T) \in {\mathcal}{V} \times J$ where ${\mathcal}{V}$ and $J$ are sufficiently small open neighborhoods of ${\mathbf}{0}$ in ${\mathcal}{Y}$ and $1$ in $I$, respectively. 
Consequently, the mapping ${\mathbf}{E} \circ {\mathbf}{U}: {\mathcal}{V} \times J \subset {\mathcal}{Y} \times I \to {\mathcal}{X}$ is well–defined and, by Lemma \[lem:U\] and the chain rule, ${\mathbf}{E} \circ {\mathbf}{U}$ is continuously partially Fréchet differentiable with respect to the second variable. We define ${\mathbf}{F}: {\mathcal}{V} \times J \subset {\mathcal}{Y} \times I \to \langle {\mathbf}{g} \rangle$ by $${\mathbf}{F}({\mathbf}{v},T):=P \left [{\mathbf}{U}({\mathbf}{v},T)+\int_0^\infty e^{-\tau'}{\mathbf}{N}({\mathbf}{E}({\mathbf}{U}({\mathbf}{v},T))(\tau'))d\tau' \right ].$$ Recall that $P {\mathcal}{H} = \langle {\mathbf}{g} \rangle$ (Theorem \[thm:linear\]) and thus, ${\mathbf}{F}$ has indeed range in $\langle {\mathbf}{g} \rangle$. Furthermore, observe that ${\mathbf}{F}({\mathbf}{0},1)={\mathbf}{0}$. For any $({\mathbf}{v},T) \in {\mathcal}{V} \times J$, Theorem \[thm:global1\] yields the existence of a solution to $$\begin{aligned}
\Psi(\tau)=&S(\tau){\mathbf}{U}({\mathbf}{v},T)-e^\tau {\mathbf}{F}({\mathbf}{v},T)
+\int_0^\tau S(\tau-\tau'){\mathbf}{N}(\Psi(\tau'))d\tau', \quad \tau \geq 0 \nonumber\end{aligned}$$ which is nothing but a reformulation of Eq. with ${\mathbf}{u}={\mathbf}{U}({\mathbf}{v},T)$. Consequently, if we can show that, for any ${\mathbf}{v} \in {\mathcal}{V}$, we can find a $T \in J$ such that ${\mathbf}{F}({\mathbf}{v},T)={\mathbf}{0}$, we obtain a solution to Eq. . This calls for an application of the implicit function theorem.
\[lem:F0\] Let ${\mathcal}{V} \subset {\mathcal}{Y}$ be a sufficiently small open neighborhood of ${\mathbf}{0}$. Then, for any ${\mathbf}{v} \in {\mathcal}{V}$, there exists a $T \in J$ such that ${\mathbf}{F}({\mathbf}{v},T)={\mathbf}{0}$.
Note first that the mapping ${\mathbf}{B}: \Psi \mapsto \int_0^\infty e^{-\tau'}\Psi(\tau')d\tau': {\mathcal}{X} \to {\mathcal}{H}$ is continuously Fréchet differentiable. This follows immediately from the fact that ${\mathbf}{B}$ is linear and, by $$\|{\mathbf}{B}\Psi\|=\left \|\int_0^\infty e^{-\tau'}\Psi(\tau')d\tau' \right \|\leq \int_0^\infty e^{-\tau'}\|\Psi(\tau')\|d\tau' \leq \sup_{\tau' > 0}\|\Psi(\tau')\|\leq \|\Psi\|_{\mathcal}{X},$$ also bounded. Now we define $\tilde{{\mathbf}{N}}: {\mathcal}{X} \to {\mathcal}{X}$ by $\tilde{{\mathbf}{N}}(\Psi)(\tau):={\mathbf}{N}(\Psi(\tau))$. By definition, ${\mathbf}{F}$ can be written as $${\mathbf}{F}({\mathbf}{v},T)=P \left [{\mathbf}{U}({\mathbf}{v},T)+{\mathbf}{B}\tilde{{\mathbf}{N}}({\mathbf}{E}({\mathbf}{U}({\mathbf}{v},T))) \right ].$$ We claim that $\tilde{{\mathbf}{N}}$ is continuously Fréchet differentiable. To show this, we define a mapping $\tilde{D}\tilde{{\mathbf}{N}}: {\mathcal}{X} \to {\mathcal}{B}({\mathcal}{X})$ by $$[\tilde{D}\tilde{{\mathbf}{N}}(\Psi)\Phi](\tau):=D{\mathbf}{N}(\Psi(\tau))\Phi(\tau)$$ for $\Psi,\Phi \in {\mathcal}{X}$ and $\tau \geq 0$. According to Lemma \[lem:N\], there exists a continuous function $\gamma: [0,\infty) \to [0,\infty)$ such that $$\|[\tilde{D}\tilde{{\mathbf}{N}}(\Psi)\Phi](\tau)\|\leq \|\Psi(\tau)\|\|\Phi(\tau)\|\gamma(\|\Psi(\tau)\|)$$ and this shows that $\tilde{D}\tilde{{\mathbf}{N}}$ is well–defined as a mapping ${\mathcal}{X} \to {\mathcal}{B}({\mathcal}{X})$. Invoking the fundamental theorem of calculus, we infer $$\begin{aligned}
\tilde{{\mathbf}{N}}(\Psi+\Phi)(\tau)-\tilde{{\mathbf}{N}}(\Psi)(\tau)&={\mathbf}{N}(\Psi(\tau)+\Phi(\tau))-{\mathbf}{N}(\Psi(\tau)) \\
&=\int_0^1 D{\mathbf}{N}(\Psi(\tau)+h\Phi(\tau))\Phi(\tau)dh \\
&=\int_0^1 [\tilde{D}\tilde{{\mathbf}{N}}(\Psi+h\Phi)\Phi](\tau)dh\end{aligned}$$ and this implies $$\begin{aligned}
&\|\tilde{{\mathbf}{N}}(\Psi+\Phi)(\tau)-\tilde{{\mathbf}{N}}(\Psi)(\tau)-[\tilde{D}\tilde{{\mathbf}{N}}(\Psi)\Phi](\tau)\| \\
&\leq \int_0^1 \|[\tilde{D}\tilde{{\mathbf}{N}}(\Psi+h\Phi)\Phi](\tau)
-[\tilde{D}\tilde{{\mathbf}{N}}(\Psi)\Phi](\tau)\| dh \\
&\leq \int_0^1 h \|\Phi(\tau)\|^2 \gamma_2(\|\Psi(\tau)+h\Phi(\tau)\|, \|\Psi(\tau)\|) dh \\
&\leq C_\Psi \|\Phi\|_{{\mathcal}{X}}\|\Phi(\tau)\|\end{aligned}$$ for all $\Phi \in {\mathcal}{X}$ with $\|\Phi\|_{\mathcal}{X} \leq 1$ where we have used the estimate from Lemma \[lem:N\]. Consequently, we obtain $$\frac{\|\tilde{{\mathbf}{N}}(\Psi+\Phi)-\tilde{{\mathbf}{N}}(\Psi)-\tilde{D}\tilde{{\mathbf}{N}}(\Psi)\Phi\|_{\mathcal}{X}}{\|\Phi\|_{\mathcal}{X}}\leq C_\Psi \|\Phi\|_{\mathcal}{X}$$ for all $\Phi \in {\mathcal}{X}$ with $\|\Phi\|_{\mathcal}{X} \leq 1$ and this shows that $\tilde{D}\tilde{{\mathbf}{N}}$ is the Fréchet derivative of $\tilde{{\mathbf}{N}}$. Another application of Lemma \[lem:N\] yields $$\begin{aligned}
\|D\tilde{{\mathbf}{N}}(\Psi)\Phi-D\tilde{{\mathbf}{N}}(\tilde{\Psi})\Phi\|_{\mathcal}{X}& \leq
\sup_{\tau>0}e^{|\omega|\tau}\left [\|\Psi(\tau)-\tilde{\Psi}(\tau)\|\|\Phi(\tau)\|\gamma_2(\|\Psi(\tau)\|,\|\tilde{\Psi}(\tau)\|) \right ]\\
&\leq C_\Psi \|\Psi-\tilde{\Psi}\|_{\mathcal}{X}\|\Phi\|_{\mathcal}{X}\end{aligned}$$ for all $\Phi \in {\mathcal}{X}$ and $\tilde{\Psi} \in {\mathcal}{X}$ with, say, $\|\Psi-\tilde{\Psi}\|\leq 1$. This estimate shows that $D\tilde{{\mathbf}{N}}: {\mathcal}{X} \to {\mathcal}{B}({\mathcal}{X})$ is continuous. We conclude that ${\mathbf}{F}: {\mathcal}{V} \times J \to \langle {\mathbf}{g} \rangle$ is continuous and continuously Fréchet differentiable with respect to the second variable and by the chain rule and Lemma \[lem:U\], we obtain $$\begin{aligned}
D_2 {\mathbf}{F}({\mathbf}{0},1)\lambda=PD_2 {\mathbf}{U}({\mathbf}{0},1)\lambda+{\mathbf}{B}\;\underbrace{D\tilde{{\mathbf}{N}}({\mathbf}{E}({\mathbf}{0}))}_{={\mathbf}{0}}\;D{\mathbf}{E}({\mathbf}{0})\;D_2 {\mathbf}{U}({\mathbf}{0},1)\lambda=2\lambda {\mathbf}{g}\end{aligned}$$ since ${\mathbf}{U}({\mathbf}{0},1)={\mathbf}{0}$, ${\mathbf}{E}({\mathbf}{0})={\mathbf}{0}$ and $D \tilde{{\mathbf}{N}}({\mathbf}{0})={\mathbf}{0}$ by Lemma \[lem:N\]. This shows that $D_2 {\mathbf}{F}({\mathbf}{0},1): \mathbb{R} \to \langle {\mathbf}{g} \rangle$ is an isomorphism and, since ${\mathbf}{F}({\mathbf}{0},1)={\mathbf}{0}$, the implicit function theorem (see e.g., [@zeidler], p. 150, Theorem 4.B) yields the claim.
We summarize the results of this section in a theorem.
\[thm:global\] Let $\varepsilon>0$ be arbitrary but small and assume that the fundamental self–similar solution $\psi^T$ is mode stable. Set $\omega:=s_0+\varepsilon<0$ where $s_0$ is the spectral bound and let ${\mathbf}{v} \in {\mathcal}{V} \subset {\mathcal}{Y}$ where ${\mathcal}{V}$ is a sufficiently small open neighborhood of ${\mathbf}{0}$ in ${\mathcal}{Y}$. Then there exists a $T$ close to $1$ such that the equation $$\Phi(\tau)=S(\tau+\log T){\mathbf}{U}({\mathbf}{v},T)+\int_{-\log T}^\tau S(\tau-\tau'){\mathbf}{N}(\Phi(\tau'))d\tau', \quad \tau \geq -\log T$$ has a continuous solution $\Phi: [-\log T,\infty) \to {\mathcal}{H}$ satisfying $$\|\Phi(\tau)\|\lesssim e^{-|\omega|\tau}$$ for all $\tau \geq -\log T$. Consequently, $\Phi$ is a global mild solution of Eq. with initial data $\Phi(-\log T)={\mathbf}{U}({\mathbf}{v},T)$.
Uniqueness of the solution
--------------------------
Finally, we show that the solution of Theorem \[thm:global\] is unique in the space $C([-\log T,\infty), {\mathcal}{H})$. This is the last ingredient and it completes the proof of nonlinear stability of $\psi^T$. The very last section \[sec:verylast\] is nothing but the translation from the operator formulation in similarity coordinates back to the original equation . The proof of uniqueness in $C([-\log T, \infty),{\mathcal}{H})$ is necessary in order to rule out the paradoxical situation that, given initial data $(f,g)$, there exist two (or even more) solutions with the same data and some of them decay whereas others do not. If such a situation could occur, the whole question of stability of a solution would be meaningless. Luckily, a standard argument can be applied to exclude this absurdity. As before, it suffices to consider solutions of the translated problem Eq. .
\[lem:unique\] Let ${\mathbf}{u} \in {\mathcal}{H}$ and suppose $\Psi_j \in C([0,\infty),{\mathcal}{H})$, $j=1,2$, satisfies $$\Psi_j(\tau)=S(\tau){\mathbf}{u}+\int_0^\tau S(\tau-\tau'){\mathbf}{N}(\Psi_j(\tau'))d\tau'$$ for all $\tau \geq 0$. Then $\Psi_1=\Psi_2$.
Recall first that, for all ${\mathbf}{u},{\mathbf}{v} \in {\mathcal}{H}$, $$\begin{aligned}
\|{\mathbf}{N}({\mathbf}{u})-{\mathbf}{N}({\mathbf}{v})\|&\leq
\int_0^1 \|D{\mathbf}{N}({\mathbf}{v}+t({\mathbf}{u}-{\mathbf}{v}))({\mathbf}{u}-{\mathbf}{v})\|dt \\
&\leq \|{\mathbf}{u}-{\mathbf}{v}\|\int_0^1 \gamma(\|{\mathbf}{v}+t({\mathbf}{u}-{\mathbf}{v})\|)dt
\end{aligned}$$ where $\gamma: [0,\infty) \to [0,\infty)$ is a suitable continuous function, see Lemma \[lem:N\]. Now let $\tau_0>0$ be arbitrary. Then we have $$\begin{aligned}
\|\Psi_1(\tau)-\Psi_2(\tau)\|&\leq C\int_0^\tau e^{\tau-\tau'}\|{\mathbf}{N}(\Psi_1(\tau'))-{\mathbf}{N}(\Psi_2(\tau'))\|d\tau' \\
&\leq C(e^\tau-1)\sup_{\tau' \in [0,\tau]} \left [\|\Psi_1(\tau')-\Psi_2(\tau')\|
\sup_{t \in [0,1]} \gamma(\|\Psi_2(\tau')+t(\Psi_1(\tau')-\Psi_2(\tau'))\|) \right ] \\
&\leq M(\tau_0, \Psi_1, \Psi_2)(e^\tau-1)\sup_{\tau' \in [0,\tau]}\|\Psi_1(\tau')-\Psi_2(\tau')\|\end{aligned}$$ where $$M(\tau_0, \Psi_1, \Psi_2):=C\sup_{\tau' \in [0,\tau_0]}\sup_{t \in [0,1]} \gamma(\|\Psi_2(\tau')+t(\Psi_1(\tau')-\Psi_2(\tau'))\|)$$ is finite by the continuity of $\Psi_j$ and $\gamma$. Consequently, choosing $\tau_1 \in (0,\tau_0]$ so small that $M(\tau_0, \Psi_1, \Psi_2)(e^{\tau_1}-1)\leq \tfrac{1}{2}$, we obtain $$\sup_{\tau \in [0,\tau_1]}\|\Psi_1(\tau)-\Psi_2(\tau)\|\leq \tfrac{1}{2}
\sup_{\tau \in [0,\tau_1]}\|\Psi_1(\tau)-\Psi_2(\tau)\|$$ which implies $\Psi_1(\tau)=\Psi_2(\tau)$ for all $\tau \in [0,\tau_1]$. Iterating this argument we obtain $\Psi_1(\tau)=\Psi_2(\tau)$ for all $\tau \in [0,\tau_0]$, and $\tau_0>0$ was arbitrary.
Proof of Theorem \[thm:main\] {#sec:verylast}
-----------------------------
Set $$v_1(\rho):=\rho^2 [g(\rho)-\psi_t^1(0, \rho)]$$ and $$v_2(\rho):=\rho [f'(\rho)-\psi_r^1(0,\rho)]+2[f(\rho)-\psi^1(0,\rho)],$$ cf. Eq. . Then $v_1, v_2 \in C^2[0,\frac{3}{2}]$ and $v_1(0)=v_1'(0)=v_2(0)=0$ which shows that $v_j \in \tilde{Y}_j \subset Y_j$, $j=1,2$. Furthermore, we have $$\begin{aligned}
\|v_1\|_{Y_1}^2&=\int_0^{3/2}|v_1''(\rho)|^2 d\rho \\
&=\int_0^{3/2}\left |r^2 [g''(r)-\psi^1_{trr}(0,r)]+4r [g'(r)-\psi_{tr}^1(0,r)]
+2 [ g(r)-\psi_t^1(0,r)] \right |^2 dr\end{aligned}$$ as well as $$\begin{aligned}
\|v_2\|_{Y_2}^2&=\int_0^{3/2}\left |v_2'(\rho) \right |^2 d\rho+\int_0^{3/2} \left |v_2''(\rho) \right|^2
\rho^2 d\rho \\
&=\int_0^{3/2} \left |r [f''(r)-\psi_{rr}^1(0,r)]+3[f'(r)-\psi_r^1(0,r)] \right |^2 dr \\
&\quad + \int_0^{3/2} \left |r [f'''(r)-\psi_{rrr}^1(0,r)]+4[f''(r)-\psi_{rr}^1(0,r)] \right |^2 r^2 dr\end{aligned}$$ and this shows $$\|{\mathbf}{v}\|_{{\mathcal}{Y}}=\|(f,g)-(\psi^1(0,\cdot),\psi_t^1(0,\cdot))\|_{{\mathcal}{E}'}<\delta$$ by assumption. Consequently, if $\delta$ is small enough, we obtain ${\mathbf}{v} \in {\mathcal}{V}$ and Theorem \[thm:global\] yields the existence of a global mild solution $\Phi \in C([-\log T,\infty),{\mathcal}{H})$ of Eq. with initial data $\Phi(-\log T)={\mathbf}{U}({\mathbf}{v},T)$ that satisfies $$\label{eq:estsol}
\|\Phi(\tau)\|\leq C_\varepsilon e^{-|\omega|\tau}$$ for all $\tau \geq -\log T$ where $T \in J$, i.e., $T>0$ is close to $1$. Furthermore, by Lemma \[lem:unique\], the solution $\Phi$ is unique in the class $C([-\log T,\infty),{\mathcal}{H})$. We conclude that $$(\phi_1(\tau,\rho),\phi_2(\tau,\rho))=\Phi(\tau)(\rho)$$ is a solution of the Cauchy problem Eq. and therefore, by Eq. , $$\label{eq:origpsi2}
\psi(t,r)=\psi^T(t,r)+\tfrac{1}{r^2}\int_0^r r' \phi_2\left (-\log(T-t), \tfrac{r'}{T-t} \right )dr'$$ is a solution of the original wave maps equation . Furthermore, by Eq. , its time derivative is given by $$\label{eq:origpsit2}
\psi_t(t,r)=\psi_t^T(t,r)+\tfrac{T-t}{r^2}\phi_1\left (-\log(T-t),\tfrac{r}{T-t} \right ).$$ As before, we write $\varphi=\psi-\psi^T$. Now note that $$r\varphi_{rr}+3\varphi_r=\partial_r\tfrac{1}{r}\partial_r(r^2\varphi)$$ and thus, $$\varphi_{rr}(t,r)+3\varphi_r(t,r)=\tfrac{1}{T-t}\partial_\rho \phi_2 \left (-\log(T-t),\tfrac{r}{T-t}\right )$$ by Eq. . Similarly, we have $$r\varphi_{tr}+2\varphi_t=\tfrac{1}{r}\partial_r (r^2 \varphi_t)$$ and therefore, by Eq. , we obtain $$r\varphi_{tr}(t,r)+2\varphi_t(t,r)=\tfrac{1}{r}\partial_\rho \phi_1\left (-\log(T-t), \tfrac{r}{T-t} \right ).$$ By recalling the definition of $\|\cdot\|_{{\mathcal}{E}(R)}$, this shows that $$\begin{aligned}
\|(\varphi(t,\cdot),\varphi_t(t,\cdot))\|_{{\mathcal}{E}(T-t)}^2&=\int_0^{T-t} \left |r\varphi_{rr}(t,r)+3\varphi_r(t,r) \right|^2 dr+\int_0^{T-t} \left | r\varphi_{tr}(t,r)+2\varphi_t(t,r) \right |^2 dr \\
&=\tfrac{1}{(T-t)^2}\int_0^{T-t} \left |\partial_\rho \phi_2 \left
(-\log(T-t),\tfrac{r}{T-t}\right ) \right |^2 dr \\
&\quad +\int_0^{T-t} \left | \frac{\partial_\rho \phi_1\left (-\log(T-t), \tfrac{r}{T-t} \right )}{r} \right |^2 dr \\
&=\tfrac{1}{T-t} \int_0^1 \left |\partial_\rho \phi_2 \left
(-\log(T-t),\rho \right ) \right |^2 d\rho \\
&\quad +\tfrac{1}{T-t}\int_0^1 \left | \frac{\partial_\rho \phi_1\left (-\log(T-t), \rho \right )}{\rho} \right |^2 d\rho \\
&=\tfrac{1}{T-t}\|\Phi(-\log(T-t))\|^2 \\
&\leq \frac{C_\varepsilon^2}{T-t}(T-t)^{2|\omega|}\end{aligned}$$ by Eq. . This proves the claimed estimate $$\|(\psi(t,\cdot),\psi_t(t,\cdot))-(\psi^T(t,\cdot),\psi_t^T(t,\cdot))\|_{{\mathcal}{E}(T-t)}\leq C_\varepsilon |T-t|^{-\frac{1}{2}+|\omega|}$$ and we are done.
[^1]: That is, the stable eigenvalue with the largest real part.
[^2]: The “largest” eigenvalue here means the eigenvalue with the largest real part.
---
abstract: 'The observability of the Higgs boson via the $WW^{*}$ decay channel at the Tevatron is discussed, taking into account the enhancements due to the possible existence of extra standard model (SM) families. It seems that the existence of new SM families can give the Tevatron experiments (D0 and CDF) the opportunity to observe the intermediate mass Higgs boson before the LHC.'
author:
- 'E. Arik'
- 'O. Çakir'
- 'S. A. Çetin'
- 'S. Sultansoy'
title: Observability of the Higgs Boson in the Presence of Extra Standard Model Families at the Tevatron
---
Introduction
============
It is known that the number of fermion generations is not fixed by the standard model (SM). Asymptotic freedom of quantum chromodynamics (QCD) suggests that this number is less than eight. Concerning the leptonic sector, the large electron positron collider (LEP) data determine the number of light neutrinos to be $N=2.994\pm 0.012$ \[1\]. On the other hand, flavor democracy (i.e. the democratic mass matrix approach \[2-5\]) favors the existence of a fourth SM family \[6-9\].
Direct searches for the new leptons ($\nu_4, \ell_4$) and quarks ($u_4, d_4$) led to the following lower bounds on their masses \[1\]: $m_{\ell_4} > 100.8$ GeV; $m_{\nu_4} > 45$ GeV (Dirac type) and $m_{\nu_4} > 39.5$ GeV (Majorana type) for stable neutrinos; $m_{\nu_4} > 90.3$ GeV (Dirac type) and $m_{\nu_4} > 80.5$ GeV (Majorana type) for unstable neutrinos; $m_{d_4}
> 199$ GeV (neutral current decays), $m_{d_4} > 128$ GeV (charged current decays). The precision electroweak data do not exclude the fourth SM family; even a fifth or sixth SM family is allowed, provided that the masses of their neutrinos are about 50 GeV \[14, 15\].
In the Standard Model, the Higgs boson is crucial for the understanding of the electroweak symmetry breaking and the mass generation for the gauge bosons and the fermions. Direct searches at the CERN $e^+e^-$ collider (LEP) yielded a lower limit for the Higgs boson mass of $m_H > 114.4$ GeV at 95% confidence level (C.L.) \[1\].
In this study, we present the observability of the Higgs boson at the Tevatron and find the accessible mass limits for the Higgs boson in the presence of extra SM fermion families (SM-4, SM-5 and SM-6).
Anticipation for the Fourth SM Family
=====================================
According to the SM with three families, before the spontaneous symmetry breaking, quarks are grouped into the following SU(2)$\times$U(1) multiplets: $$\begin{aligned}
\left( \begin{array} {c} {u_L^0} \\ {d_L^0} \end{array} \right)
u_R^0\, , \, d_R^0 \qquad
\left( \begin{array} {c} {c_L^0} \\ {s_L^0} \end{array} \right)
c_R^0\, , \, s_R^0 \qquad
\left( \begin{array} {c} {t_L^0} \\ {b_L^0} \end{array} \right)
t_R^0\, , \, b_R^0 \qquad\end{aligned}$$ where $0$ denotes the SM basis. In one family case, e.g. $d$-quark mass is obtained due to the Yukawa interaction $$\begin{aligned}
L_Y^{(d)} = a^d \, \left(\bar u_L \, \bar d_L \right)
\left( \begin{array} {c} {\phi^+} \\ {\phi^0} \end{array} \right) \,
d_{R} + h.c.\end{aligned}$$ which yields $$\begin{aligned}
L_m^{(d)} = m^d \bar d d\end{aligned}$$ where $m^d = a^d \eta/\sqrt{2}$ and $\eta = 2\, m_W/g_W = 1 /\sqrt{\sqrt{2} \, G_F}
\approx 246$ GeV. In the same manner, $m^u = a^u \eta/\sqrt{2}$, $m^e = a^e \eta/\sqrt{2}$ and $m^{\nu_e} = a^{\nu_e} \eta/\sqrt{2}$ if $ {\nu_e}$ is a Dirac particle.
In the $n$-family case $$\begin{aligned}
L_Y^{(d)} = \sum_{i,j = 1}^n a^d_{ij} \, \left(\bar u^0_{Li} \, \bar
d^0_{Li} \right) \left( \begin{array} {c} {\phi^+} \\ {\phi^0}
\end{array} \right) \, d^0_{Rj} + h.c. \Rightarrow \sum_{i,j = 1}^n
m^d_{ij} \, \bar d^0_{i}d^0_{j}\end{aligned}$$ where $d_1^0$ denotes $d^0$, $d_2^0$ denotes $s^0$ etc. and $m_{ij}^d \equiv a_{ij}^d \eta/\sqrt{2}$.
Before the spontaneous symmetry breaking, all quarks are massless and there are no differences between $d^0$, $s^0$, $b^0$, etc. In other words, fermions with the same quantum numbers are indistinguishable. This leads us to the first assumption \[2, 3\]:
- [Yukawa couplings are equal within each type of fermion families]{}
$$\begin{aligned}
a^d_{ij} \approx a^d \, , \, a^u_{ij} \approx a^u \, , \,
a^{\ell}_{ij} \approx a^{\ell} \, , \, a^{\nu}_{ij} \approx a^{\nu} \, . \,\end{aligned}$$
The first assumption results in $n-1$ massless particles and one massive particle with $m = n a^F \eta/\sqrt{2} \, (F = u, d, \ell, \nu)$ for each type of fermion $F$. If there is only one Higgs doublet which gives Dirac masses to all four types of fermions ($u, d, \ell, \nu)$, it seems natural to make the second assumption \[6, 8\]:
- [Yukawa couplings for different types of fermions should be nearly equal]{}
$$\begin{aligned}
a^d \approx a^u \approx a^{\ell} \approx a^{\nu} \approx a \, .\end{aligned}$$
Considering the mass values of the third SM generation $$\begin{aligned}
m_{\nu_\tau} << m_{\tau} < m_b << m_t \, ,\end{aligned}$$ the second assumption leads to the statement that according to the flavor democracy, the fourth SM family should exist. In terms of the mass matrix, the above arguments mean $$\begin{aligned}
M^0 =\frac{ a \, \eta} {\sqrt{2}}\,
\left( \begin{array} {c c c c}
1 & 1 & 1 & 1\\
1 & 1 & 1 & 1\\
1 & 1 & 1 & 1\\
1 & 1 & 1 & 1\\
\end{array} \right)\end{aligned}$$ which leads to $$\begin{aligned}
M^m =\frac{ 4a \, \eta}{\sqrt{2}}\,
\left( \begin{array} {c c c c}
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
\end{array} \right)\end{aligned}$$ where $m$ denotes the mass basis.
Now let us state the third assumption:
- [The coupling $a/\sqrt{2}$ is between $e = g_W \sin \theta_W$ and $g_Z = g_W / \cos \theta_W$. ]{}
Therefore, the fourth family fermions are almost degenerate, in agreement with the experimental value $\rho = 0.9998 \pm 0.0008$ \[1\], and their common mass lies between 320 GeV and 730 GeV. The last value is close to the upper limit on heavy quark masses which follows from the partial-wave unitarity at high energies \[10\]. It is interesting to note that with the preferred value of $a \approx \sqrt{2}\,g_W$, flavor democracy predicts the mass of the fourth generation to be $m_4 \approx 4 a \eta /\sqrt{2} \approx 8 m_W \approx 640$ GeV.
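Both statements — that the democratic matrix has exactly one massive eigenstate, and the numerical prediction $m_4 \approx 8 m_W$ — are easy to check directly. The sketch below is illustrative only; the value $m_W \approx 80.4$ GeV is inserted by hand:

```python
import numpy as np

# Democratic 4x4 mass matrix in units of a*eta/sqrt(2): its eigenvalues are
# 0 (three-fold) and 4, i.e. three massless states and one of mass 4*a*eta/sqrt(2).
M = np.ones((4, 4))
eig = np.sort(np.linalg.eigvalsh(M))
assert np.allclose(eig, [0.0, 0.0, 0.0, 4.0])

# With the preferred coupling a = sqrt(2)*g_W and eta = 2*m_W/g_W, the massive
# eigenvalue gives m_4 = 4*a*eta/sqrt(2) = 8*m_W, independent of g_W.
m_W = 80.4  # GeV (inserted by hand)
m_4 = 8 * m_W
print(m_4)  # ~643 GeV, consistent with the quoted m_4 ~ 640 GeV
```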
The masses of the first three families of fermions, as well as observable inter-family mixings, are generated due to the small deviations from the full flavor democracy \[7, 11, 12\]. The parametrization proposed in \[12\] gives the values for the fundamental fermion masses and at the same time predicts the values of the quark and the lepton CKM matrices. These values are in good agreement with the experimental data. In principle, flavor democracy provides the possibility to obtain the small masses for the first three neutrino species without the see-saw mechanism \[13\].
The fourth SM family quark pairs will be produced copiously at the LHC \[16, 17\] and at the future lepton-hadron colliders \[18\]. Furthermore, the fourth SM generation can manifest itself via the pseudo-scalar quarkonium production at the hadron colliders \[19\]. The fourth family leptons will clearly manifest themselves at the future lepton colliders \[20, 21\]. In addition, the existence of the extra SM generations leads to an essential increase in the Higgs boson production cross section via gluon fusion at the hadron colliders (see \[22-25\] and references therein). This indirect evidence may soon be observed at the Tevatron.
Implications for the Higgs Production
=====================================
The cross section for the Higgs boson production via gluon-gluon fusion at the Tevatron is given by $$\begin{aligned}
\sigma(p\bar{p}\to HX)=\sigma_0 \tau_H \int_{\tau_H}^1 {dx\over
x}g(x,Q^2)g(\tau_H/x,Q^2)\end{aligned}$$ where $\tau_H=m_H^2/s$, $g(x,Q^2)$ denotes the gluon distribution function and $$\begin{aligned}
\sigma_0(gg\to H)= {G_F\,\alpha_s^2(\mu^2)\over 288 \,\sqrt{2} \,
\pi} \, |I|^2\end{aligned}$$ is the partonic cross section. The amplitude $I$ is the sum of the quark amplitudes $I_q$ which is a function of $\lambda_q \equiv
(m_q/m_H)^2$, defined as \[26\] $$\begin{aligned}
I_q = \frac{3}{2}\, [4\lambda_q +\lambda_q (4\lambda_q-1)f(\lambda_q)]\,\,,\\
f(\lambda_q) = -4 \,(\arcsin ({1\over \sqrt{4\lambda_q}}))^2 \qquad {\rm
for} \qquad 4\lambda_q > 1 \,\,\,\, \\
f(\lambda_q) = (\ln \frac{1+
\sqrt{1-4\lambda_q}}{1-\sqrt{1-4\lambda_q}} -i\pi)^2
\qquad {\rm for} \qquad 4\lambda_q < 1 \,\,.\end{aligned}$$ The numerical calculations for the Higgs boson production cross sections in the three SM family case are performed using the HIGLU software \[27\] which includes next to leading order (NLO) QCD corrections \[28\]. In HIGLU, CTEQ6M \[29\] distribution is selected for $g(x,Q^2)$, the natural values are chosen for the factorization scale $Q^2(=m_H^2)$ of the parton densities and the renormalization scale $\mu\,(=m_H)$ for the running strong coupling constant $\alpha_s(\mu)$.
Quarks from the fourth SM generation contribute to the loop mediated process in the Higgs boson production $gg\to H$ at the hadron colliders, resulting in an enhancement of $\sigma_0$ by a factor of $\epsilon\cong |I_t + I_{u_4}+ I_{d_4}|^2/|I_t|^2$. Fig. 1 shows this enhancement factor as a function of the Higgs boson mass in the four SM families case with $m_4 = $ 200, 320, 640 GeV. For the extra SM families we find that the b-quark loop contribution increases $\epsilon$ by $9\%-4\%$ depending on the Higgs boson mass in the range $100-200$ GeV. In the infinitely heavy quark mass limit, the expected enhancement factors are 9, 25, and 49 for the cases of four, five and six generations, respectively. Fig. 2 shows the enhancement factor $\epsilon$ in the four, five and six SM families cases where quarks from the extra generations are assumed to be infinitely heavy whereas $m_t=175$ GeV. We also include the QCD corrections \[28\] in the decay of the Higgs boson by using the program HDECAY \[30\]. Below we deal with the mass region $115<m_H<200$ GeV; therefore, the formulation of $\epsilon$ with obvious modifications for the five and six SM families cases can be a good approximation. Theoretical uncertainties in the prediction of the Higgs boson production cross section originate from two sources: the dependence of the cross sections on the parton distributions (estimated to be around 10%) and the higher order QCD corrections.
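The enhancement factor can be reproduced directly from the loop amplitudes above. The sketch below is a leading-order illustration only: it omits the small b-quark contribution and the NLO corrections applied via HIGLU, and uses the mass values quoted in the text ($m_t=175$ GeV, $m_4=640$ GeV):

```python
import cmath
import math

def f(lam):
    """Loop function f(lambda_q) from the equations above, both branches."""
    if 4 * lam > 1:
        return -4 * math.asin(1 / math.sqrt(4 * lam))**2
    root = math.sqrt(1 - 4 * lam)
    return (math.log((1 + root) / (1 - root)) - 1j * math.pi)**2

def I(m_q, m_H):
    """Quark amplitude I_q with lambda_q = (m_q/m_H)^2; tends to 1 for m_q >> m_H."""
    lam = (m_q / m_H)**2
    return 1.5 * (4 * lam + lam * (4 * lam - 1) * f(lam))

def epsilon(m_H, m_t=175.0, extra=()):
    """Enhancement |I_t + sum_Q I_Q|^2 / |I_t|^2 for extra heavy quarks Q."""
    tot = I(m_t, m_H) + sum(I(m, m_H) for m in extra)
    return abs(tot)**2 / abs(I(m_t, m_H))**2

# Four-family case with degenerate u_4, d_4 at m_4 = 640 GeV and m_H = 160 GeV;
# in the limit where all quark masses grow large each amplitude tends to 1,
# so the factor approaches |1+2|^2 = 9 (|1+4|^2 = 25 and |1+6|^2 = 49 for
# five and six families).
print(epsilon(160.0, extra=(640.0, 640.0)))
```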
Recently, D0 and CDF collaborations have presented their results on the search for the Higgs boson in the channel $H\to WW^{(*)}\to
l\nu l\nu$ \[31-36\]. Further luminosity upgrade of the Tevatron could give a chance to observe the Higgs boson at the Tevatron if the fourth SM family exists.
The Higgs decay width $\Gamma(H\to gg)$ is altered by the presence of the extra SM generations; as a result, the $H \to WW^{(*)}$ branching ratio changes as shown in Fig. 3. The decay widths and branching ratios for the Higgs decays are calculated using the HDECAY program \[30\] after some modifications for extra SM families. Details on how the branching ratios of all Higgs decay channels change for extra SM families can be found in \[25\]. In this figure, 4n, 5n and 6n denote the cases of one, two and three extra SM generations with neutrinos of mass $\cong 50$ GeV, respectively. We present the numerical values of the branching ratios depending on the Higgs boson mass in Table I. SM-4 and SM-5 denote the extra SM families with unstable heavy neutrinos, whereas SM-4\*, SM-5\* and SM-6\* correspond to the extra SM families with $m_{\nu}\cong 50$ GeV. The difference between SM-4 (SM-5) and SM-4\* (SM-5\*) is due to the additional $H\to\nu_4\bar{\nu}_4$ decay channel in the latter case.
| Mass (GeV) | SM-3 | SM-4 | SM-5 | SM-4\* | SM-5\* | SM-6\* |
|-----------:|------|------|------|--------|--------|--------|
| 100 | $1.02\times 10^{-2}$ | $6.73\times 10^{-3}$ | $5.05\times 10^{-3}$ | $6.73\times 10^{-3}$ | $5.05\times 10^{-3}$ | $3.30\times 10^{-3}$ |
| 120 | $1.33\times 10^{-1}$ | $8.11\times 10^{-2}$ | $5.95\times 10^{-2}$ | $1.21\times 10^{-2}$ | $1.15\times 10^{-2}$ | $1.03\times 10^{-2}$ |
| 140 | $4.86\times 10^{-1}$ | $3.35\times 10^{-1}$ | $2.63\times 10^{-1}$ | $4.29\times 10^{-2}$ | $4.15\times 10^{-2}$ | $3.86\times 10^{-2}$ |
| 160 | $9.05\times 10^{-1}$ | $8.48\times 10^{-1}$ | $8.05\times 10^{-1}$ | $3.43\times 10^{-1}$ | $3.35\times 10^{-1}$ | $3.20\times 10^{-1}$ |
| 180 | $9.35\times 10^{-1}$ | $9.23\times 10^{-1}$ | $9.14\times 10^{-1}$ | $7.27\times 10^{-1}$ | $7.21\times 10^{-1}$ | $7.09\times 10^{-1}$ |
| 200 | $7.35\times 10^{-1}$ | $7.29\times 10^{-1}$ | $7.25\times 10^{-1}$ | $6.34\times 10^{-1}$ | $6.31\times 10^{-1}$ | $6.24\times 10^{-1}$ |
: The branching ratios as a function of the Higgs boson mass in the three, four, five and six SM families cases. The asterisk denotes that the calculations are performed assuming $m_{\nu}=50$ GeV for the extra families.
Results and Conclusions
=======================
In Fig. 4, we add our theoretical predictions for the case of two extra SM families (SM-5) with unstable heavy neutrinos ($m_{\nu}>
100$ GeV), as well as the possible exclusion limits for the integrated luminosities $L_{int}=2$ fb$^{-1}$ and 8 fb$^{-1}$. It is seen that the recent Tevatron data excludes SM-5 at 95% C.L. if the Higgs mass lies in the region 160 GeV $< m_H <$ 170 GeV (i.e. this mass region is excluded if there are two extra SM families with unstable heavy neutrinos). With 2 fb$^{-1}$ of integrated luminosity, the fourth SM family with an unstable neutrino (SM-4) can be verified or excluded for the region 150 GeV $ < m_H < $ 180 GeV. Similarly, SM-5 can be verified or excluded for the region $m_H
> 130$ GeV with 2 fb$^{-1}$. The upgraded Tevatron is expected to reach an integrated luminosity of 8 fb$^{-1}$ before LHC operation, which means that SM-4 (SM-5) will be verified or excluded for the Higgs mass region $m_H >$ 140 GeV (120 GeV). However, the LHC will be able to cover the whole region via the golden mode H $\rightarrow$ ZZ $ \rightarrow \ell \ell \ell \ell$ and detect the Higgs signal during the first year of operation if the fourth SM family exists \[25\].
{height="10cm" width="10cm"}
In Fig. 5, we present our $\sigma\times BR(H\to WW^{(*)})$ predictions for the cases of one, two and three extra SM families with $m_{\nu}\cong 50$ GeV, denoted SM-4\*, SM-5\* and SM-6\*, respectively. If the Higgs mass lies in the region 165 GeV $< m_H < 185$ GeV, SM-6\* is excluded at 95 $\%$ C.L. When $L_{int} = 2 $ fb$^{-1}$ is reached, the Tevatron data will be able to exclude or verify SM-6$^*$ (SM-5$^*$) for the mass region $m_H > $ 150 GeV (155 GeV). With 8 fb$^{-1}$ integrated luminosity, this limit changes to $m_H
> 145$ GeV (150 GeV), and SM-4$^{*}$ will be observed or excluded in the range 160 GeV $ < m_H < 195$ GeV.
{height="10cm" width="10cm"}
In Table II, we present the accessible Higgs mass limits at the Tevatron with $L_{int}=2$ fb$^{-1}$ and 8 fb$^{-1}$ for the extra SM families.
2 fb$^{-1}$ 8 fb$^{-1}$
-------- -- ------------------------- -- -------------------------
SM-4 150 $ < m_H < $ 180 GeV 140 $ < m_H < $ 200 GeV
SM-5 $ > 135$ GeV $ >125$ GeV
SM-4\* – 160 $< m_H<$ 195 GeV
SM-5\* $>155$ GeV $>150$ GeV
SM-6\* $>150$ GeV $>145$ GeV
: Accessible mass limits of the Higgs boson at the Tevatron with $L_{int}=2$ fb$^{-1}$ and 8 fb$^{-1}$ for extra SM families.
Another possibility to observe the fourth SM family quarks at the Tevatron is their anomalous production via the quark-gluon fusion process $q g \rightarrow q_4$, provided their anomalous couplings have sufficient strength \[37\]. Note that the process $q g
\rightarrow q_4$ is analogous to single excited quark production \[38\].
In conclusion, the existence of the fourth SM family can give the opportunity to observe intermediate mass Higgs boson production at the Tevatron experiments D0 and CDF before the LHC. The fourth SM family quarks can manifest themselves at the Tevatron as: a significant enhancement ($\sim $ 8 times) of the Higgs boson production cross section via gluon fusion; pair production of the fourth family quarks, if $m_{d_4}$ and/or $m_{u_4} < 300$ GeV; and single resonant production of the fourth family quarks via the process $q g \rightarrow q_4$.
[**Acknowledgments**]{}
This work is partially supported by the Turkish State Planning Organization (DPT) under grant Nos. 2002K120250, 2003K120190 and DPT-2006K-120470.
[99]{}
S. Eidelman [*et al.*]{}, Phys. Lett. B [**592**]{}, 1 (2004)
H. Harari, H. Haut, J. Weyers, Phys. Lett. B [**78**]{}, 459 (1978)
H. Fritzsch, Nucl. Phys. B [**155**]{}, 189 (1979)\
H. Fritzsch, Phys. Lett. B [**184**]{}, 391 (1987)\
H. Fritzsch, Phys. Lett. B [**189**]{}, 191 (1987)\
H. Fritzsch, J. Plankl, Phys. Rev. D [**35**]{}, 1732 (1987)
P. Kaus, S. Meshkov, Mod. Phys. Lett. A [**3**]{}, 1251 (1988)
H. Fritzsch, J. Plankl, Phys. Lett. B [**237**]{}, 451 (1990)
A. Datta, Pramana [**40**]{}, L503 (1993)
A. Datta, S. Raychaudhuri, Phys. Rev. D [**49**]{}, 4762 (1994)
A. Celikel, A.K. Ciftci, S. Sultansoy, Phys. Lett. B [**342**]{}, 257 (1995)
S. Sultansoy, hep-ph/0004271 (2000)
M.S. Chanowitz, M.A. Furlan, I. Hinchliffe, Nucl. Phys. B [**153**]{}, 402 (1979)
S. Atag [*et al.*]{}, Phys. Rev. D [**54**]{}, 5745 (1996)
A.K. Ciftci, R. Ciftci, S. Sultansoy, Phys. Rev. D [**72**]{}, 053006 (2005)
J. L. Silva-Marcos, Phys. Rev. D [**59**]{}, 091301 (1999)
V.A. Novikov, L.B. Okun, A.N. Rozanov, M.I. Vysotsky, Phys. Lett. B [**529**]{}, 111 (2002)
H.J. He, N. Polonsky, S. Su, Phys. Rev. D [**64**]{}, 053004 (2001)
E. Arik [*et al.*]{}, Phys. Rev. D [**58**]{}, 117701 (1998)
ATLAS Collaboration, Technical Design Report, CERN/LHCC/99-15 (1999)
A.T. Alan, A. Senol, O. Çakir, Eur. Phys. Lett. [**66**]{}, 657 (2004)
E. Arik, O. Cakir, S.A. Cetin, S. Sultansoy, Phys. Rev. D [**66**]{}, 116006 (2002)
A.K. Ciftci, R. Ciftci, S. Sultansoy, Phys. Rev. D [**65**]{}, 055001 (2002)
R. Ciftci, A.K. Ciftci, E. Recepoglu, S. Sultansoy, Tur. J. Phys. [**27**]{}, 179 (2003)
E. Arik et al., CERN-ATLAS Internal Note, ATL-PHYS-98-125, 27 August 1998\
ATLAS TDR 15, CERN/LHCC/99-15, Vol. [**2**]{}, Ch. 18 (1999)
I.F. Ginzburg, I.P. Ivanov, A. Schiller, Phys. Rev. D [**60**]{}, 095001 (1999)
O. Cakir, S. Sultansoy, Phys. Rev. D [**65**]{}, 013009 (2001)
E. Arik [*et al.*]{}, Eur. Phys. J. C [**26**]{}, 9 (2002)\
E. Arik, O. Cakir, S.A. Cetin, S. Sultansoy, Phys. Rev. D [**66**]{}, 003033 (2002)
V.D. Barger, R.J.N. Phillips, Collider Physics, Updated Edition, Addison-Wesley Publishing Company, Inc., page 433-434, 1987; M. Spira, Fortschritte der Physik/Progress of Physics, 46, 203 (1999).
M. Spira, [*HIGLU*]{}, Internal Report No. DESY T-95-05, hep-ph/9510347 (1995)
A. Djouadi, M.Spira, P.M. Zerwas, Phys. Lett. B [**264**]{}, 440 (1991)\
S. Dawson, Nucl. Phys. B [**359**]{}, 283 (1991)\
D. Graudenz, M. Spira, P.M. Zerwas, Phys. Rev. Lett. [**70**]{}, 1372 (1993)\
A. Djouadi, M. Spira, P. M. Zerwas, Phys. Lett. B [**311**]{}, 255 (1993)\
M. Spira, A. Djouadi, D. Graudenz, P. M. Zerwas, Phys. Lett. B [**318**]{}, 347 (1993)\
M. Spira, A. Djouadi, D. Graudenz, P.M. Zerwas, Nucl. Phys. B [**453**]{}, 17 (1995)\
A. Djouadi, M. Spira, P. M. Zerwas, Z. Phys. C [**70**]{}, 427 (1996).
CTEQ Collaboration, H.L. Lai et al., Phys. Rev. D [**55**]{}, 1280 (1997)
A. Djouadi, J. Kalinowski and M. Spira, HDECAY, Comp. Phys. Commun. 108, 56 (1998); hep-ph/9704448.
W.M. Yao, FERMILAB-CONF-04-307-E, hep-ex/0411053 (2004)
V. Buscher, hep-ex/0411063 (2004)
G.J. Davies, 12$^{th}$ Int. Conf. on Supersymmetry and Unification of Fundamental Interactions, SUSY04, Tsukuba, Japan (2004)
A. Kharchilava, hep-ex/0407010 (2004)
P. M. Jonsson, IC/HEP/04-9, talk given at Hadron Collider Physics (2004)
V.M. Abazov et al., Phys. Rev. Lett. [**96**]{}, 011801 (2006)
E. Arik, O. Cakir, S. Sultansoy, Phys. Rev. D [**67**]{}, 035002 (2003)\
E. Arik, O. Cakir, S. Sultansoy, Europhysics Lett. [**62**]{} (3), 332 (2003)\
E. Arik, O. Cakir, S. Sultansoy, Eur. Phys. J. C [**39**]{}, 499 (2005)
A. De Rujula, L. Maiani, R. Petronzio, Phys.Lett.B [**140**]{}, 253 (1984)\
Johann H. Kuhn, Peter M. Zerwas, Phys.Lett.B [**147**]{}, 189 (1984)\
U. Baur, I. Hinchliffe, D. Zeppenfeld, Int.J. Mod. Phys.A [**2**]{}, 1285 (1987)\
U. Baur, M. Spira, P. M. Zerwas, Phys. Rev. D [**42**]{}, 815 (1990)\
O. Adriani et al. (L3 Collaboration), Phys. Rep. [**236**]{}, 1 (1993)\
F. Abe et al. (CDF Collaboration), Phys. Rev. Lett. [**74**]{}, 3538 (1995)\
F. Abe et al. (CDF Collaboration), Phys. Rev. D [**55**]{}, R5263 (1997)\
O. Cakir, R. Mehdiyev, Phys.Rev. D [**60**]{}, 034004 (1999)
---
abstract: 'Considering the difference between the energy bands of graphene and silicene, we put forward a new model of a graphene-silicene-graphene (GSG) heterojunction. In the GSG, we study the valley polarization properties of a zigzag nanoribbon in the presence of an external electric field. We find that the energy range associated with the bulk gap of silicene has a valley polarization of more than $95\%$. Under the protection of the topological edge states of the silicene, the valley polarization persists even when small non-magnetic disorder is introduced. These results have practical significance for applications in future valley valves.'
author:
- Man Shen
- 'Yan-Yang Zhang'
- 'Xing-Tao An'
- 'Jian-Jun Liu'
- 'Shu-Shen Li'
title: 'Valley polarization in graphene-silicene-graphene heterojunction'
---
\[sec:level1\]INTRODUCTION
==========================
Graphene, the monolayer of carbon atoms in a honeycomb lattice, has special electronic and thermal transport properties[@K.; @S.; @NovoselovetNature; @K.; @S.; @NovoselovetScience; @A.; @K.; @Geim1; @K.; @I.; @Bolotin; @A.; @A.; @Balandin]. At the corners of the first Brillouin zone there are two degenerate and inequivalent valleys ($K$ and $K^{'}$). In momentum space the two valleys are widely separated, which leads to strong suppression of intervalley scattering[@A.; @F.; @Morpurgo; @S.; @V.; @Morozovet; @R.; @V.; @Gorbachevet]. Therefore the two valleys have been proposed as independent internal degrees of freedom of the conduction electrons. The low-energy dynamics in the $K$ and $K^{'}$ valleys is given by the Dirac theory. Valley-dependent phenomena in graphene have attracted an increasing amount of interest[@D.; @Xiao; @A.; @Rycerz1; @J.; @M.; @Pereira; @F.; @Zhai; @D.; @Gunlycke]. The spin-orbit interaction in graphene is quite small, so the spin degeneracy is hardly broken. Moreover, due to the small band gap in graphene, good valley polarization appears only in a narrow energy range[@A.; @Rycerz; @A.; @L.; @C.; @Pereira]. Therefore, it is hard to realize valleytronics in graphene experimentally.
Silicene, the monolayer of silicon, is isostructural to graphene[@A.; @K.; @Geim; @A.; @H.; @Castro; @Neto] and has been experimentally synthesized[@P.; @Vogt; @A.; @Fleurence]. Silicene has a strong spin-orbit interaction and a buckled sheet with two sublattices lying in two parallel planes. These give rise to strong spin-valley dependence and a valley Hall effect in silicene[@P.; @Vogt; @C.; @C.; @Liu1; @M.; @Ezawaprl; @J.; @Y.; @Zhang]. By applying an external electric field perpendicular to silicene's plane, the staggered potential between the sublattices can be changed and the bulk gap can be tuned. In the bulk gap, there exist robust edge states connecting the two valleys, giving rise to the quantum spin Hall effect[@C.; @C.; @Liu1; @M.; @Ezawaprl]. On the other hand, in the bulk band the spin-valley configuration is quite different from that in graphene because of the strong spin-orbit coupling[@Y.; @Y.; @Zhang]. It is therefore interesting to ask what valley transport will look like if graphene and silicene are connected together. Thereupon, we propose a graphene-silicene-graphene (GSG) heterojunction structure and investigate the valley polarization through it.
In this paper, we systematically investigate the properties of the valley polarization in graphene, silicene and the GSG with zigzag edges in the presence of an external perpendicular electric field, and the results are compared and analyzed. Within the four-band next-nearest-neighbor (NNN) tight-binding model, the Hamiltonian contains the nearest-neighbor (NN) hopping, the Rashba spin-orbit coupling term, the intrinsic spin-orbit coupling term and the staggered sublattice potential term. The Rashba spin-orbit coupling and the staggered sublattice potential can both be tuned by the external electric field, which changes the bulk band gaps and the spin splitting. Using the method of calculating the transmission coefficient from an incident channel to an out-going channel together with recursion techniques[@T.; @Ando; @T.; @Andoprb], we obtain the conductance in each valley. As we expect, nearly ideal valley polarization appears within a larger energy range in the GSG. In addition, we find that in the GSG the valley polarization is robust against small non-magnetic disorder because of the protection of the topological edge states of the silicene.
This paper is organized as follows. The theoretical framework is introduced in Sec. . In Sec. , we present and discuss our results and then give a summary in Sec. .
\[sec:level1\]THEORETICAL MODEL
===============================

The Hamiltonian of the silicene system can be described by the four-band NNN tight-binding model[@C.; @C.; @Liu; @M.; @Ezawaprl; @Y.; @Y.; @Zhang], $$\begin{aligned}
H&=&t\displaystyle{\sum_{\langle{ij}\rangle,\alpha}}c_{i\alpha}^\dag c_{j\alpha}
-i\frac{2\lambda_{R}}{3}
\displaystyle{\sum_{\langle\langle{ij}\rangle\rangle,\alpha\beta}}\mu_{i}c_{i\alpha}^\dag
(\boldsymbol{\sigma}\times{\boldsymbol{\hat{d}}_{ij}})_{\alpha\beta}^{z}c_{j\beta}\nonumber\\
&&+i\frac{\lambda_{SO}}{3\sqrt{3}}\displaystyle{\sum_{\langle\langle{ij}\rangle\rangle,\alpha\beta}}
\nu_{ij}c_{i\alpha}^\dag\sigma_{{\alpha\beta}}^{z}c_{j\beta}+\lambda_{\nu}\displaystyle{\sum_{i,\alpha}}\xi_{i}c_{i\alpha}^\dag c_{i\alpha},\label{Eq1}\end{aligned}$$ where $\langle{ij}\rangle$ and $\langle\langle{ij}\rangle\rangle$ denote all the NN and NNN hopping sites, respectively, and the indexes $\alpha,\beta$ label spin quantum numbers. The first term is the usual NN hopping with transfer energy $t = 1.6$eV for silicene, where $c_{i\alpha}^\dag$ creates an electron with spin polarization $\alpha$ at site $i$. The second term describes the Rashba SOC between NNN sites, where $\mu_{i}=\pm1$ for the A(B) site and $\boldsymbol{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})$ is the vector of the Pauli matrix of spin. $\boldsymbol{\hat{d}_{ij}}=\boldsymbol{d_{ij}}/|\boldsymbol{d_{ij}}|$ is the unit vector of $\boldsymbol{d_{ij}}$ which connects NNN sites $i$ and $j$. The third term represents the intrinsic SOC between NNN sites, where $\nu_{ij}=+1$ if the NNN hopping is anticlockwise with respect to the positive z axis and $\nu_{ij}=-1$ if it is clockwise. The fourth term describes the staggered sublattice potential term, and the parameter $\lambda_{\nu}=l_{z}E_{z}$ can be tuned by a perpendicular electric field $E_{z}$ because of the buckling distance $l_{z}$ between two sublattices. For silicene, the NN Rashba SOC can be ignored because it is very small and becomes zero at the gapless state[@M.; @Ezawaprl]. Thus, the main focus of this work is the NNN SOC terms and the staggered potential, which can be tuned by the external electric field. For undoped graphene, the Hamiltonian is the first term of the Eq. (1) with $t = 2.7$eV, the very small intrinsic SOC and staggered potential term. Hereafter, we adopt the silicene’s $t= 1.6$eV and lattice constant $a$ (NNN distance) as the units of energy and length, respectively.
The GSG is divided into three regions as shown in Fig. 1, with the left and right leads corresponding to graphene and the middle scattering region to silicene. The honeycomb lattices of carbon or silicon atoms in a strip have zigzag edges, as shown in Fig. 1. In our numerical calculations, we fix the conductor at 80 nanoribbon slices, with each slice containing 80 atoms.
In Fig. 1 (and also in our calculations), the geometrical difference between the lattice constants of graphene and silicene is ignored. The reasons are as follows. Firstly, in the calculation this difference only manifests itself in the bond-connecting configurations at the graphene-silicene interface; however, well-accepted configurations and values for the connecting bond hoppings at this interface, from experiments or first-principles calculations, are lacking. On the other hand, even with the same geometric “lattice constant”, the sudden change of Hamiltonian parameters across this interface is enough to induce the strong scattering we are interested in. Moreover, our calculations also offer intuitive pictures for cold atom systems, where such structures with designed model parameters can be readily realized[@ColdAtom1; @ColdAtom2].
At low temperature, the conductance $G$ is given by the multichannel version[@D.; @S.; @Fisher] of Landauer’s formula: $$\begin{aligned}
G=\frac{2e^{2}}{h}\displaystyle{\sum_{\mu\nu}{|t_{\mu\nu}|}^2},\label{Eq2}\end{aligned}$$ where $t_{\mu\nu}$ is the transmission coefficient from the incident channel $\nu$ with velocity $v_{\nu}$ to the out-going channel $\mu$ with velocity $v_{\mu}$, and can be calculated using the Green's function method in a quasi-one-dimensional lattice[@T.; @Ando; @Khomyakov2005]. Recursion techniques are employed in computing the Green functions[@T.; @Ando; @T.; @Andoprb].
The valley polarization of the transmitted current in $K$ valley and $K^{'}$ valley is defined by $$\begin{aligned}
P_{KK^{'}}=\frac{G_{K}-G_{K^{'}}}{G_{K}+G_{K^{'}}},\label{Eq3}\end{aligned}$$ and the polarization between different valleys is quantified by $$\begin{aligned}
P_{intrainter}=\frac{G_{intra}-G_{inter}}{G_{intra}+G_{inter}},\label{Eq4}\end{aligned}$$ where $G_{{K}(K^{'})}$ and $G_{intra(inter)}$ are the conductances transmitted into the $K (K^{'})$ valley and between the same (different) valleys, respectively.
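Eqs. (2)-(4) amount to summing squared transmission amplitudes per valley and forming a normalized difference. A minimal sketch with toy amplitudes (the channel-resolved $t_{\mu\nu}$ values below are illustrative, not output of the recursive Green's function calculation):

```python
def conductance(t):
    """Landauer conductance in units of 2e^2/h, Eq. (2):
    sum of |t_mu_nu|^2 over out-going (mu) and incident (nu) channels."""
    return sum(abs(amp) ** 2 for row in t for amp in row)

def polarization(g_a, g_b):
    """Polarization of Eqs. (3)-(4): (G_a - G_b) / (G_a + G_b)."""
    return (g_a - g_b) / (g_a + g_b)

# Toy transmission matrices for channels belonging to each valley:
t_K, t_Kp = [[0.9]], [[0.1]]
print(round(polarization(conductance(t_K), conductance(t_Kp)), 3))  # -> 0.976
```

With $G_K \gg G_{K'}$, $P_{KK'}$ approaches 1, which is the near-perfect polarization reported below for the GSG.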
In this paper, the parameters $\lambda_{R}=0t$, $\lambda_{SO}=0.01t$, $\lambda_{\nu}=0.001t$ are adopted for graphene, and $\lambda_{R}=0.5t$, $\lambda_{SO}=0.5t$, $\lambda_{\nu}=0.05t$ for silicene. These spin-orbit parameters for silicene are considerably larger than those in the realistic material, but this does not change the basic physics we will discuss[@Y.; @Y.; @Zhang]. As a matter of fact, we adopt them to make the physical consequences manifest in our finite-size simulations.
\[sec:level1\]RESULTS AND DISCUSSION
====================================

In Fig. 2 we show the energy bands obtained by diagonalizing the tight-binding Hamiltonian (1) with various parameters for a zigzag nanoribbon. When the Hamiltonian (1) contains only the NN hopping energy with $t = 2.7$eV and vanishing Rashba SOC and staggered potential, the electronic structure exhibits the semimetallic behavior of graphene, as shown in Fig. 2(a). In zigzag-edge graphene, there are edge states connecting the two valleys living in the lowest subband around the Dirac points, whose energy extension is inversely proportional to the transverse width of the ribbon. Hence transmission channels exist only in the $K^{'}$ valley in the vicinity of $E_f=0$ if the current is set to be right-going in the device. In silicene, with finite Rashba SOC, intrinsic SOC and staggered sublattice potential, gapless edge states appear in the bulk gap (see Fig. 2(b)); the gap magnitude is determined by the staggered potential $2\lambda_{\nu}$. Experimentally, $2\lambda_{\nu}$ is tunable by a perpendicular electric field due to the buckled structure of the two sublattices; therefore the bulk gap in silicene can be much larger than the edge-state region in graphene.
![\[fig\_3\]The conductance as a function of the incident electron energy for zigzag edge geometries in graphene ((a)), silicene ((b)) and GSG ((c)). The red lines and the blue lines represent the conductance through $K$ valley and $K^{'}$ valley, respectively. The total conductance is indicated in black.](Fig_3)
In Fig. 3, we show the normalized electrical conductance as a function of the incident electron energy for zigzag edge geometries in graphene ((a)), silicene ((b)) and the GSG ((c)), respectively. For graphene (see Fig. 3(a)), the total conductance shows perfect quantized step-like plateaus and always increases by $4e^2/h$, since two bands start to transmit at the same time and the spin is degenerate. Consequently, conductance plateaus occur only when $G/(2e^2/h)$ is even. Near zero energy, the electron transmits completely through the $K^{'}$ valley for $E_{f}>0$ and through the $K$ valley for $E_{f}<0$, which can be accounted for by the directions of the electronic velocity in the lowest energy subbands near $E=0$ (see Fig. 2(a)). Thereby valley polarization can be produced in graphene[@A.; @Rycerz]. For silicene with $\lambda_{R}=\lambda_{SO}=0.5$, $\lambda_{\nu}=0.05$, Fig. 3(b) shows that the conductance curves still have obvious plateaus as in graphene, with a conductance plateau of $2e^2/h$ due to the topological edge states.
In GSG heterojunctions, the charge conductance changes markedly compared with graphene and silicene (see Fig. 3(c)). The obvious conductance plateaus disappear, which is caused by the mismatch of the interface between the graphene and the silicene. This leads to oscillations of the total conductance and sharp dips at the edges of the conductance plateaus of the graphene, which arise from quantum interference between different spin channels in the GSG. But the most interesting phenomenon is the almost perfect valley blockade in the bulk gap region ($-0.45t<E_f<0.45t$) of silicene: the electronic states in the $K$ ($K^{'}$) valley cannot transport at negative (positive) Fermi energy. This originates from the fact that, at a definite Fermi energy in the bulk gap of silicene, the edge states around each valley possess identical velocity. Notice that this blockade is effective even in the bulk band of graphene. Practically, the bulk gap of silicene is more tunable and can be much larger than the finite-width level spacing of graphene; therefore this type of valley filter has a large and tunable working energy range compared with that of graphene itself[@A.; @Rycerz1].
![\[fig\_4\] The valley polarization of the transmitted current in $K$ valley and $K'$ valley as a function of the incident electron energy for graphene (blue line), silicene (red line) and GSG (black line).](Fig_4)
For a more visual view of the valley polarization, we plot the valley polarization of the transmitted current in the $K$ and $K^{'}$ valleys as a function of the incident electron energy in Fig. 4. For a pure graphene device, the plateau of full valley polarization extends from $E_f=-0.1885t$ to $0.1885t$, just within the lowest subbands corresponding to the zigzag edge states. In other energy regions the valley polarization decreases significantly. For comparison, the valleys of the edge states in silicene are also defined as those around the Dirac point $K$ or $K^{'}$. Around $E_f=0$ there is no valley polarization for the pure silicene device (see the red line in Fig. 4). This can be attributed to the crossing of edge states with opposite velocities and spins, and to the existence of spin-flip processes arising from nonzero $\lambda_{R}$[@XTAn]. However, in the GSG heterojunction, the valley polarization becomes more nearly perfect than in the pure graphene or silicene devices. Thus, the GSG heterojunction is a good candidate for controlling the valley degree of freedom.
![\[fig\_5\] (a) The conductance as a function of the incident electron energy between the same valley (red line), between different valleys (blue line) and in total (black dotted line) in the GSG. (b) The valley polarization of the transmitted current between different valleys as a function of the incident electron energy for graphene (blue dotted line), silicene (red dotted line) and the GSG (black line).](Fig_5)
In order to further investigate the stability of the valley polarization, the conductance and the valley polarization between the two valleys as a function of the incident electron energy are plotted in Fig. 5. Naturally, the inter-valley scattering in pure and clean graphene or silicene vanishes. Nevertheless, in the GSG, when the incident electrons lie within the bulk states of the graphene, they may be transmitted from one valley to the other (see Fig. 5(a)) due to the strong scattering at the mismatched interface. But this transmission probability is very small, which makes the valley polarization between the two different valleys exceed $90\%$ from $E_f=-0.459t$ to $0.4606t$. So the valley polarization in the $K$ and $K'$ valleys is well guaranteed.
![\[fig\_6\] The valley polarization $P_{KK^{'}}$ in GSG as a function of the incident electron energy for W=0.5t (black line), 1.0t (red line) and 2.0t (blue line). The results are the average over 100 disorder samples.](Fig_6)
Additionally, there are always other disorder effects in real materials, e.g., impurities and defects. We investigate the effect of non-magnetic disorder on the valley polarization in the $K$ and $K'$ valleys. A disordered on-site potential $W_i$ is added to each site $i$ in the central region, where $W_i$ is a random number uniformly distributed in the range $[-W/2, W/2]$ with disorder strength $W$. Fig. 6 shows the valley polarization $P_{KK^{'}}$ versus the incident electron energy at various disorder strengths for the GSG. From the black curve ($W=0.5t$) in Fig. 6, we can see that the valley polarization remains more than $90\%$ from $E_f=-0.45t$ to $0.45t$, which is due to the topological origin of the edge states. As the disorder strength increases, the transmission of the carriers gradually becomes weaker and more chaotic, so the valley polarization deteriorates.
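The disorder model just described is simple to generate. A minimal sketch of the ensemble used for the averages (the site count matches the 80$\times$80 central region quoted earlier; the seed and generator choice are our own, not from the paper):

```python
import random

def disorder_potentials(n_sites, w, rng):
    """On-site disorder W_i drawn uniformly from [-W/2, W/2]."""
    return [rng.uniform(-w / 2, w / 2) for _ in range(n_sites)]

rng = random.Random(0)
# 100 independent disorder configurations at strength W = 0.5 t:
samples = [disorder_potentials(80 * 80, 0.5, rng) for _ in range(100)]
# Each configuration would enter the transport calculation; the quoted
# P_KK' curves are averages over these 100 samples.
print(all(max(abs(v) for v in s) <= 0.25 for s in samples))  # -> True
```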
In summary, we have proposed the GSG model and calculated its conductance and valley polarization. Within the tight-binding Hamiltonian, the energy bands of graphene possess spin degeneracy and a smaller bulk gap; in contrast, in silicene the spin degeneracy is lifted and the bulk gap increases. In the GSG heterojunction the valley polarization is very strong over a wide energy range. In the GSG the carriers transmit mainly within the same valley, which ensures the stability of the valley polarization. The dependence of the valley polarization on non-magnetic disorder is also discussed. These features make the GSG system a good valley filter.
This work was supported by National Natural Science Foundation of China (Grant Nos. ??, ?? and ??).
K. S. Novoselovet, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature (London) **438**, 197 (2005). K. S. Novoselovet, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Science **306**, 666 (2004). A. K. Geim and K. S. Novoselov, Nat. Mater. **6**, 183 (2007). K. I. Bolotin, K. J. Sikes, Z. Jiang, G. Fudenberg, J. Hone, P. Kim, and H. L. Stormer, Solid State Commun. **146**, 351 (2008). A. A. Balandin, S. Ghosh, W. Bao, I. Calizo, D. Teweldebrhan, F. Miao, and C. N. Lau, Nano Lett. **8**, 902 (2008). A. F. Morpurgo and F. Guinea, Phys. Rev. Lett. **97**, 196804 (2006). S. V. Morozovet, K. S. Novoselov, M. I. Katsnelson, F. Schedin, L. A. Ponomarenko, D. Jiang, and A. K. Geim, Phys. Rev. Lett. **97**, 016801 (2006). R. V. Gorbachevet, F. V. Tikhonenko, A. S. Mayorov, D. W. Horsell, and A. K. Savchenko, Phys. Rev. Lett. **98**, 176805 (2007). D. Xiao, W. Yao, and Q. Niu, Phys. Rev. Lett. **99**, 236809 (2007). A. Rycerz, J. Tworzydlo, and C. W. J. Beenakker, Nature Phys. **3**, 172 (2007). J. M. Pereira Jr, F. M. Peeters, R. N. Costa Filho, and G. A. Farias, J. Phys.: Condens. Matter **21**, 045301 (2009). F. Zhai, X. F. Zhao, K. Chang, and H. Q. Xu, Phys. Rev. B **82**, 115442 (2010). D. Gunlycke and C. T. White, Phys. Rev. Lett. **106**, 136806 (2011). A. Rycerz, J. Tworzydlo, and C. W. J. Beenakker, Nature Phys. **3** 172 (2007). A. L. C. Pereira and P. A. Schulz, Phys. Rev. B **77**, 075416 (2008). A. K. Geim, Science **324**, 1530 (2009). A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. **81**, 109 (2009). P. Vogt, P. D. Padova, C. Quaresima, J. Avila, E. Frantzeskakis, M. C. Asensio, A. Resta, B. Ealet, and G. L. Lay, Phys. Rev. Lett. **108**, 155501 (2012). A. Fleurence, R. Friedlein, T. Ozaki, H. Kawai, Y. Wang, and Y. Yamada-Takamura, Phys. Rev. Lett. **108**, 245501 (2012). C. C. Liu, W. Feng, and Y. G. 
Yao, Phys. Rev. Lett. **107**, 076802 (2011). M. Ezawa, Phys. Rev. Lett. **109**, 055502 (2012). Y. Y. Zhang, W. F. Tsai, Kai Chang, X. T. An, G. P. Zhang, X. C. Xie, and S. S. Li, Phys. Rev. B **88**, 125431 (2013). J. Y. Zhang, B. Zhao, and Z. Q. Yang, Phys. Rev. B **88**, 165422 (2013). T. Ando, Phys. Rev. B **44**, 8017 (1991). P. A. Khomyakov, G. Brocks, V. Karpan, M. Zwierzycki, and P. J. Kelly, Phys. Rev. B **72**, 035450 (2005). T. Ando, Phys. Rev. B **40**, 5325 (1989). C. C. Liu, H. Jiang, and Y. G. Yao, Phys. Rev. B **84**, 195430 (2011). A. Bermudez, N. Goldman, A. Kubasiak, M. Lewenstein, and M. A. Martin-Delgado, New J. Phys. **12**, 033041 (2010). P. Soltan-Panahi, J. Struck, P. Hauke, A. Bick, W. Plenkers, G. Meineke, C. Becker, P. Windpassinger, M. Lewenstein, and K. Sengstock, Nature Phys. **7**, 434 (2011). D. S. Fisher and P. A. Lee, Phys. Rev. B **23**, 6851 (1981). X. T. An, Y. Y. Zhang, J. J. Liu, and S. S. Li, Phys. Rev. B **23**, 6851 (1981).
---
abstract: 'The derivation of the efficiency of the Carnot cycle is usually done by calculating the heats involved in the two isothermal processes and making use of the associated adiabatic relation for a given working substance''s equation of state, usually the ideal gas. We present a derivation of the Carnot efficiency using the same procedure with the Redlich-Kwong gas as working substance, in order to answer the calculation difficulties raised by Agrawal and Menon [@AM90]. We also show that, using the same procedure, the Carnot efficiency may be derived regardless of the functional form of the gas equation of state.'
address: |
Department of Physics, Faculty of Mathematics and Natural Sciences\
Universitas Katolik Parahyangan, Bandung 40141 - INDONESIA
author:
- 'Paulus C. Tjiang$^1$ and Sylvia H. Sutanto$^2$'
title: Efficiency of Carnot Cycle with Arbitrary Gas Equation of State
---
Introduction. {#intro}
=============
In any course of undergraduate thermodynamics, thermodynamic cycles and their efficiencies are rarely absent. The discussion of thermodynamic cycles always follows, or runs parallel to, the discussion of the second law of thermodynamics. For a reversible cycle operating between two temperatures, it is well known that the efficiency $\eta$ of the cycle is $$\eta=1- \frac{T_C}{T_H}, \label{efficiency}$$ where $T_H$ and $T_C$ are the absolute temperatures of the hot and cold heat reservoirs, respectively. For an irreversible cycle, since the total change of entropy is positive, the efficiency of the cycle is less than (\[efficiency\]).
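As a quick numerical illustration of Eq. (\[efficiency\]) (the reservoir temperatures below are arbitrary example values, not from the paper):

```python
def carnot_efficiency(t_hot, t_cold):
    """Efficiency eta = 1 - T_C / T_H of a reversible cycle operating
    between absolute temperatures T_H and T_C, with 0 < T_C < T_H."""
    if not 0.0 < t_cold < t_hot:
        raise ValueError("require 0 < T_C < T_H (absolute temperatures)")
    return 1.0 - t_cold / t_hot

# Reservoirs at 500 K and 300 K give eta = 0.4:
print(carnot_efficiency(500.0, 300.0))  # -> 0.4
```

Any irreversible cycle between the same reservoirs has an efficiency below this value.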
There are many theoretical cycles that satisfy the efficiency (\[efficiency\]) [@KW77], but the so-called [*Carnot cycle*]{} is of particular interest because of the simple manner in which it describes a reversible cycle. The Carnot cycle consists of two isothermal processes and two adiabatic processes, as shown in Fig. 1. In most textbooks of either elementary physics or thermodynamics, the Carnot efficiency (\[efficiency\]) is derived with the ideal gas as working substance, because of its mathematical simplicity, using the following customary procedure : [*calculating the heats involved in the isothermal expansion and compression in the $p - V$, $V - T$ or $p - T$ diagram, then relating them through the associated adiabatic relations*]{}. However, the second law of thermodynamics shows that the efficiency (\[efficiency\]) should be independent of the working substance [@Sears75], so it is natural to ask whether the Carnot efficiency can be obtained from other equations of state using the procedure above. Some attempts have been made to do this, among them the work of Agrawal and Menon [@AM90], who used the van der Waals equation of state to derive the Carnot efficiency through the procedure above; their result agreed with (\[efficiency\]). Nevertheless, they pointed out that calculation difficulties arise when the derivation is carried out for other real-gas equations of state (such as the Redlich-Kwong gas [@KW77]) using the same procedure, and they suggested deriving Eq. (\[efficiency\]) through an infinitesimal cycle using a suitable Taylor expansion of thermodynamic variables about the initial state of the cycle [@AM90].
![The $p - V$ indicator diagram of a Carnot cycle 1-2-3-4, where $T_C < T_H$.[]{data-label="Carnot"}](Carnot.eps)
In this paper we derive the Carnot efficiency (\[efficiency\]) with the Redlich-Kwong gas as the working substance, answering the calculation difficulties raised by Agrawal and Menon, and we also show that, using the customary procedure, the efficiency (\[efficiency\]) may be obtained regardless of the functional form of the equation of state. We start with a brief review of the generalized thermodynamic properties satisfied by any equation of state in Section \[general-property\]. Using the relations discussed in Section \[general-property\], we derive the Carnot efficiency from the Redlich-Kwong equation of state in Section \[redlich-kwong\]. In Section \[arbitrary\], we present the derivation of the Carnot efficiency without any knowledge of the substance’s equation of state, using the customary procedure. The discussion is concluded in Section \[summary\].
Generalized Thermodynamic Properties {#general-property}
====================================
In this section we shall briefly review some thermodynamic properties satisfied by any equation of state.
Maxwell Relations {#maxwell-relation}
-----------------
An equation of state in thermodynamics may be written as $$f(p,V,T) = C, \label{state-function}$$ where $p$, $V$ and $T$ are the pressure, volume and absolute temperature of the substance, respectively. Eq. (\[state-function\]) yields the following relations: $$\begin{aligned}
p & = & p(V,T), \label{p-function} \\
V & = & V(p,T), \label{V-function} \\
T & = & T(p,V). \label{T-function}\end{aligned}$$ However, the first law of thermodynamics and the definition of entropy suggest that there is another degree of freedom that should be taken into account, i.e. the entropy $S$, for $$dU = T dS - p dV \longrightarrow U = U(S,V), \label{U-SV}$$ where $U$ is the internal energy of the substance. From Eq. (\[U-SV\]), it is clear that $$\begin{aligned}
\left(\frac{\partial U}{\partial S} \right)_V & = & T, \nonumber \\
\left(\frac{\partial U}{\partial V} \right)_S & = & -p,\end{aligned}$$ which gives $$\left(\frac{\partial T}{\partial V} \right)_S = -
\left(\frac{\partial p}{\partial S} \right)_V
\label{Maxwell-relation-1}$$ according to the exactness condition of the internal energy $U$.
Using Legendre transformations [@Goldstein80], we may define $$\begin{aligned}
H(p,S) & = & U(S,V) + p V, \label{enthalpy} \\
F(V,T) & = & U(S,V) - T S, \label{Helmholtz} \\
G(p,T) & = & H(p,S) - T S, \label{Gibbs}\end{aligned}$$ where $H(p,S)$, $F(V,T)$ and $G(p,T)$ are the enthalpy, Helmholtz and Gibbs functions, respectively. Differentiating Eqs. (\[enthalpy\]), (\[Helmholtz\]) and (\[Gibbs\]) gives $$\begin{aligned}
dH & = & T dS + V dp, \\
dF & = & - p dV - S dT, \\
dG & = & V dp - S dT,\end{aligned}$$ which lead us to $$\begin{aligned}
\left(\frac{\partial T}{\partial p} \right)_S & = &
\left(\frac{\partial V}{\partial S} \right)_p,
\label{Maxwell-relation-2} \\
\left(\frac{\partial p}{\partial T} \right)_V & = &
\left(\frac{\partial S}{\partial V} \right)_T,
\label{Maxwell-relation-3} \\
\left(\frac{\partial V}{\partial T} \right)_p & = & -
\left(\frac{\partial S}{\partial p} \right)_T
\label{Maxwell-relation-4}\end{aligned}$$ due to the exactness of $H(p,S)$, $F(V,T)$ and $G(p,T)$. The set of Eqs. (\[Maxwell-relation-1\]), (\[Maxwell-relation-2\]), (\[Maxwell-relation-3\]) and (\[Maxwell-relation-4\]) is called the Maxwell relations [@KW77; @Sears75].
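As a quick consistency check, the Maxwell relation (\[Maxwell-relation-3\]) can be verified numerically for a concrete substance. The sketch below compares the two sides by central finite differences for the ideal gas, using the textbook entropy $S = n c_v \ln T + n R \ln V + \mathrm{const}$; this entropy expression and the numerical values are assumptions for illustration, not derived in the text.

```python
# Finite-difference check of the Maxwell relation (dp/dT)_V = (dS/dV)_T
# for the ideal gas, with S = n*cv*ln T + n*R*ln V (additive constant dropped).
import math

n, R = 2.0, 8.31
cv = 1.5 * R                      # monatomic ideal gas (assumed)

def p(V, T):
    return n * R * T / V

def S(V, T):
    return n * cv * math.log(T) + n * R * math.log(V)

V0, T0, h = 0.030, 300.0, 1e-6
dp_dT = (p(V0, T0 + h) - p(V0, T0 - h)) / (2 * h)    # (dp/dT)_V
dS_dV = (S(V0 + h, T0) - S(V0 - h, T0)) / (2 * h)    # (dS/dV)_T
assert abs(dp_dT - dS_dV) < 1e-6 * abs(dp_dT)
```

Both derivatives reduce analytically to $nR/V$, so the agreement is exact up to finite-difference error.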
General Properties of Entropy and Internal Energy {#general-S-U}
-------------------------------------------------
Now let us express the internal energy $U$ and the entropy $S$ in terms of measurable quantities. Let $U = U(V,T)$, then $$\begin{aligned}
dU & = & \left(\frac{\partial U}{\partial T}\right)_V dT +
\left(\frac{\partial U}{\partial V}\right)_T dV \nonumber \\
& = & C_v dT + \left(\frac{\partial U}{\partial V}\right)_T dV,
\label{differential-U}\end{aligned}$$ where $C_v$ is the heat capacity at constant volume. Inserting Eq. (\[differential-U\]) into Eq. (\[U-SV\]), we have $$dS = \frac{C_v}{T} dT + \frac{1}{T} \left[\left(\frac{\partial
U}{\partial V} \right)_T + p \right] dV. \label{differential-S-2}$$ Suppose $S = S(T,V)$, then $$\begin{aligned}
dS & = & \left(\frac{\partial S}{\partial T}\right)_V dT +
\left(\frac{\partial S}{\partial V}\right)_T dV \nonumber \\
& = & \left(\frac{\partial S}{\partial T}\right)_V dT +
\left(\frac{\partial p}{\partial T}\right)_V dV,
\label{differential-S}\end{aligned}$$ where we have used Eq. (\[Maxwell-relation-3\]). Comparing Eqs. (\[differential-S\]) and (\[differential-S-2\]), we obtain $$\begin{aligned}
\left(\frac{\partial S}{\partial T}\right)_V & = & \frac{C_v}{T}, \\
\left(\frac{\partial U}{\partial V}\right)_T & = & T
\left(\frac{\partial p}{\partial T} \right)_V - p.
\label{differential-S-3}\end{aligned}$$ Substitution of Eq. (\[differential-S-3\]) into Eq. (\[differential-U\]) gives $$dU = C_v dT + \left[T \left(\frac{\partial p}{\partial T}
\right)_V - p \right] dV. \label{differential-U-2}$$ Since $dU$ is an exact differential, the following exactness condition must be fulfilled: $$\left(\frac{\partial C_v}{\partial V}\right)_T = T
\left(\frac{\partial^2 p}{\partial T^2}\right)_V.
\label{exactness-U}$$ It is easy to see that Eq. (\[exactness-U\]) must also be satisfied to ensure the exactness of Eq. (\[differential-S\]). Eq. (\[exactness-U\]) also tells us the isothermal volume dependence of $C_v$.
General Relations of Isothermal and Adiabatic Processes {#general-isothermal-adiabatic}
-------------------------------------------------------
In an [*isothermal*]{} process, the change of internal energy is given by $$dU = \left[T \left(\frac{\partial p}{\partial T} \right)_V - p
\right] dV, \label{dU-isothermal}$$ using Eq. (\[differential-U-2\]). Using the first law of thermodynamics $dU = dQ - p \ dV$, the heat involved in this process is $$dQ = T \left(\frac{\partial p}{\partial T} \right)_V dV.
\label{dQ-isothermal}$$
In an [*adiabatic*]{} process where no heat is involved, the first law of thermodynamics, together with Eq. (\[differential-U-2\]) gives $$C_{v} dT = -T \left(\frac{\partial p}{\partial T} \right)_V dV
\label{general-adiabatic}$$
Equations (\[dQ-isothermal\]) and (\[general-adiabatic\]) will be used to obtain the Carnot efficiency of the Redlich-Kwong gas in the next section.
Carnot Efficiency of the Redlich-Kwong Equation of State {#redlich-kwong}
========================================================
In this section we shall derive the Carnot efficiency (\[efficiency\]) from the Redlich-Kwong gas, whose equation of state is given by $$p = \frac{n R T}{V - b} - \frac{n^2 a}{T^{1/2} V (V+b)}, \label{R-K}$$ where $n$ is the number of moles of the gas, $R \approx 8.31 \ J \
mol^{-1} K^{-1}$ is the gas constant, and $a$ and $b$ are constants evaluated from the critical state of the gas [@KW77]. We shall follow the process order of the Carnot cycle as shown in Fig. \[Carnot\].
From Eq. (\[exactness-U\]), the volume dependence of the heat capacity at constant volume $C_v$ for the Redlich-Kwong gas is $$\left(\frac{\partial C_v}{\partial V}\right)_T = - \frac{3 n^2 a}{4
T^{3/2} V (V+b)},$$ which leads to the following functional form of $C_v$ : $$C_v (V,T) = \frac{3 n^2 a}{4 b T^{3/2}} \ln{\frac{V + b}{V}} + f(T),
\label{Cv-RK}$$ where $f(T)$ is an arbitrary function of temperature, since we have no information about $\left(\frac{\partial C_v}{\partial T}\right)_V$.
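The functional form (\[Cv-RK\]) can be checked numerically: the volume derivative of the recovered $C_v(V,T)$ must reproduce $T \left(\partial^2 p/\partial T^2\right)_V$ of Eq. (\[exactness-U\]) for the Redlich-Kwong pressure (\[R-K\]). In the sketch below, the values of $a$ and $b$ are placeholders, not a fit to any real gas.

```python
# Check Eq. (Cv-RK) against the exactness condition (exactness-U)
# for the Redlich-Kwong equation of state, via finite differences.
import math

n, R, a, b = 1.0, 8.31, 14.25, 2.11e-5   # placeholder constants

def p(V, T):
    return n * R * T / (V - b) - n * n * a / (math.sqrt(T) * V * (V + b))

def Cv_V_part(V, T):   # volume-dependent part of Eq. (Cv-RK); f(T) drops out of d/dV
    return 3 * n * n * a / (4 * b * T ** 1.5) * math.log((V + b) / V)

V0, T0 = 1.0e-3, 350.0
hV, hT = 1e-9, 0.1
dCv_dV = (Cv_V_part(V0 + hV, T0) - Cv_V_part(V0 - hV, T0)) / (2 * hV)
d2p_dT2 = (p(V0, T0 + hT) - 2 * p(V0, T0) + p(V0, T0 - hT)) / hT ** 2
assert abs(dCv_dV - T0 * d2p_dT2) < 1e-3 * abs(dCv_dV)
```

The arbitrary $f(T)$ never enters this check, since it disappears under the volume derivative.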
Using Eqs. (\[dQ-isothermal\]) and (\[R-K\]), we obtain the heats involved in the isothermal expansion from state 1 to 2, as well as in the isothermal compression from state 3 to 4, as follows: $$\begin{aligned}
Q_{1 \rightarrow 2} & = & n R T_H \ln \frac{V_2 - b}{V_1 - b} +
\frac{n^2 a}{2 b T_H^{1/2}} \ln \frac{V_2 (V_1 + b)}{V_1 (V_2 + b)},
\label{isothermal-H} \\
Q_{3 \rightarrow 4} & = & n R T_C \ln \frac{V_4 - b}{V_3 - b} +
\frac{n^2 a}{2 b T_C^{1/2}} \ln \frac{V_4 (V_3 + b)}{V_3 (V_4 + b)}.
\label{isothermal-C}\end{aligned}$$
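The closed form (\[isothermal-H\]) can be verified by integrating $dQ = T (\partial p/\partial T)_V \, dV$ of Eq. (\[dQ-isothermal\]) numerically along the hot isotherm. The sketch below uses Simpson's rule; the values of $a$, $b$, $T_H$ and the volumes are placeholder choices.

```python
# Numerical check of Eq. (isothermal-H) for the Redlich-Kwong gas:
# Simpson integration of T*(dp/dT)_V dV versus the closed form.
import math

n, R, a, b = 1.0, 8.31, 14.25, 2.11e-5
TH, V1, V2 = 350.0, 1.0e-3, 3.0e-3

def dp_dT(V, T):   # analytic (dp/dT)_V for the Redlich-Kwong gas
    return n * R / (V - b) + n * n * a / (2 * T ** 1.5 * V * (V + b))

N = 2000
hV = (V2 - V1) / N
Q_num = 0.0
for i in range(N):                       # composite Simpson rule
    x0, x1 = V1 + i * hV, V1 + (i + 1) * hV
    xm = 0.5 * (x0 + x1)
    Q_num += hV / 6 * (dp_dT(x0, TH) + 4 * dp_dT(xm, TH) + dp_dT(x1, TH))
Q_num *= TH

Q_closed = (n * R * TH * math.log((V2 - b) / (V1 - b))
            + n * n * a / (2 * b * math.sqrt(TH))
            * math.log(V2 * (V1 + b) / (V1 * (V2 + b))))
assert abs(Q_num - Q_closed) < 1e-8 * abs(Q_closed)
```

The agreement confirms the antiderivative $\int dV / [V(V+b)] = (1/b) \ln[V/(V+b)]$ used in the text.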
For the adiabatic process, Eq. (\[general-adiabatic\]) leads to the following differential form with the help of Eq. (\[Cv-RK\]) : $$\begin{aligned}
M(V,T) \ dT & + & N(V,T) \ dV = 0, \label{adiabatic-RK-DE} \\
M(V,T) & = & \frac{3 n^2 a}{4 b T^{3/2}} \ln{\frac{V + b}{V}} +
f(T), \label{M}
\\
N(V,T) & = & \frac{n R T}{V - b} + \frac{n^2 a}{2 T^{1/2} V (V+b)}.
\label{N}\end{aligned}$$ It is clear that Eq. (\[adiabatic-RK-DE\]) is not an exact differential, which means that we have to find a suitable integrating factor in order to transform Eq. (\[adiabatic-RK-DE\]) into an exact differential. The corresponding integrating factor $\mu(V,T)$ for Eq. (\[adiabatic-RK-DE\]) is surprisingly simple: $$\mu(V,T) \longrightarrow \mu(T) = \frac{1}{T}.
\label{integrating-factor}$$ Multiplying Eq. (\[adiabatic-RK-DE\]) by $\mu(T)$ gives $$\begin{aligned}
\bar{M}(V,T) \ dT & + & \bar{N}(V,T) \ dV = 0, \label{adiabatic-RK-exact} \\
\bar{M}(V,T) & = & \frac{3 n^2 a}{4 b T^{5/2}} \ln{\frac{V + b}{V}}
+ \frac{f(T)}{T}, \label{bar-M}
\\
\bar{N}(V,T) & = & \frac{n R}{V - b} + \frac{n^2 a}{2 T^{3/2} V
(V+b)}, \label{bar-N}\end{aligned}$$ whose general solution is $$n R \ln (V-b) + \frac{n^2 a}{2 b T^{3/2}} \ln \frac{V}{V+b} + g(T) =
\mbox{constant}, \label{adiabatic-solution}$$ where $$g(T) = \int \frac{f(T)}{T} \ dT.$$
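One can confirm Eq. (\[adiabatic-solution\]) by checking that the gradient of its left-hand side reproduces $(\bar{M}, \bar{N})$ of Eqs. (\[bar-M\])–(\[bar-N\]). The sketch below takes the simplest choice $f(T) = c_0$ (constant), so $g(T) = c_0 \ln T$; the constants are arbitrary assumptions.

```python
# Gradient check of Eq. (adiabatic-solution):
# ds/dV should equal Nbar and ds/dT should equal Mbar, with f(T) = c0.
import math

n, R, a, b, c0 = 1.0, 8.31, 14.25, 2.11e-5, 12.47

def s(V, T):   # left-hand side of Eq. (adiabatic-solution), g(T) = c0*ln T
    return (n * R * math.log(V - b)
            + n * n * a / (2 * b * T ** 1.5) * math.log(V / (V + b))
            + c0 * math.log(T))

def Mbar(V, T):   # Eq. (bar-M) with f(T) = c0
    return 3 * n * n * a / (4 * b * T ** 2.5) * math.log((V + b) / V) + c0 / T

def Nbar(V, T):   # Eq. (bar-N)
    return n * R / (V - b) + n * n * a / (2 * T ** 1.5 * V * (V + b))

V0, T0 = 1.0e-3, 350.0
hV, hT = 1e-9, 1e-4
ds_dV = (s(V0 + hV, T0) - s(V0 - hV, T0)) / (2 * hV)
ds_dT = (s(V0, T0 + hT) - s(V0, T0 - hT)) / (2 * hT)
assert abs(ds_dV - Nbar(V0, T0)) < 1e-5 * abs(Nbar(V0, T0))
assert abs(ds_dT - Mbar(V0, T0)) < 1e-5 * abs(Mbar(V0, T0))
```

In particular, $\partial_T \left[T^{-3/2} \ln \frac{V}{V+b}\right]$ produces the $T^{-5/2} \ln \frac{V+b}{V}$ term of $\bar{M}$ with the correct sign.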
Using Eq. (\[adiabatic-solution\]), we obtain the relation between states $2$ and $3$ connected by adiabatic expansion as $$\begin{aligned}
& & n R \ln (V_2-b) + \frac{n^2 a}{2 b T_H^{3/2}} \ln
\frac{V_2}{V_2+b} + g(T_H) \nonumber \\
& = & n R \ln (V_3-b) + \frac{n^2 a}{2 b T_C^{3/2}} \ln
\frac{V_3}{V_3+b} + g(T_C). \label{adiabatic-BC}\end{aligned}$$ A similar relation holds for the adiabatic compression from state $4$ to $1$: $$\begin{aligned}
& & n R \ln (V_1-b) + \frac{n^2 a}{2 b T_H^{3/2}} \ln
\frac{V_1}{V_1+b} + g(T_H) \nonumber \\
& = & n R \ln (V_4-b) + \frac{n^2 a}{2 b T_C^{3/2}} \ln
\frac{V_4}{V_4+b} + g(T_C). \label{adiabatic-DA}\end{aligned}$$ Eqs. (\[adiabatic-BC\]) and (\[adiabatic-DA\]) may be rewritten as $$\begin{aligned}
g(T_H) - g(T_C) & = & n R \ln \frac{V_3-b}{V_2-b} + \frac{n^2 a}{2 b
T_C^{3/2}} \ln \frac{V_3}{V_3+b} \nonumber \\
& - & \frac{n^2 a}{2 b T_H^{3/2}} \ln \frac{V_2}{V_2+b}
\label{adiabatic-BC-1}\end{aligned}$$ and $$\begin{aligned}
g(T_H) - g(T_C) & = & n R \ln \frac{V_4-b}{V_1-b} + \frac{n^2 a}{2 b
T_C^{3/2}} \ln \frac{V_4}{V_4+b} \nonumber \\
& - & \frac{n^2 a}{2 b T_H^{3/2}} \ln \frac{V_1}{V_1+b},
\label{adiabatic-DA-1}\end{aligned}$$ respectively. Equating Eqs. (\[adiabatic-BC-1\]) and (\[adiabatic-DA-1\]) and performing some algebra, we get $$\begin{aligned}
& & n R \ln \frac{V_2 - b}{V_1 - b} + \frac{n^2 a}{2 b T_H^{3/2}}
\ln \frac{V_2 (V_1+b)}{V_1 (V_2+b)} \nonumber \\
& = & n R \ln \frac{V_3 - b}{V_4 - b} + \frac{n^2 a}{2 b T_C^{3/2}}
\ln \frac{V_3 (V_4+b)}{V_4 (V_3+b)}. \label{adiabatic-RK-relation}\end{aligned}$$
Now let us calculate the Carnot efficiency of the Redlich-Kwong gas. From Eqs. (\[isothermal-H\]) and (\[isothermal-C\]), the efficiency $\eta$ is $$\begin{aligned}
\eta & = & \frac{|Q_{1 \rightarrow 2}| - |Q_{3 \rightarrow 4}|}{|Q_{1 \rightarrow 2}|} = 1 - \frac{|Q_{3 \rightarrow 4}|}{|Q_{1 \rightarrow 2}|} \nonumber \\
& = & 1 - \frac{T_C \left(n R \ln \frac{V_3 - b}{V_4 - b} +
\frac{n^2 a}{2 b T_C^{3/2}} \ln \frac{V_3 (V_4 + b)}{V_4 (V_3 +
b)}\right)}{T_H \left(n R \ln \frac{V_2 - b}{V_1 - b} + \frac{n^2
a}{2 b T_H^{3/2}} \ln \frac{V_2 (V_1 + b)}{V_1 (V_2 + b)}\right)}
\longrightarrow 1 - \frac{T_C}{T_H} \label{Carnot-efficiency-RK}\end{aligned}$$ where we have used the adiabatic relation (\[adiabatic-RK-relation\]). It is clear that the Carnot efficiency (\[Carnot-efficiency-RK\]) coincides with Eq. (\[efficiency\]) in Section \[intro\] of this paper.
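This cancellation can also be exhibited on a concrete cycle: fix two states on the hot isotherm, solve Eq. (\[adiabatic-solution\]) numerically for the adiabatic endpoints, and evaluate $\eta$ from Eqs. (\[isothermal-H\])–(\[isothermal-C\]). In the sketch below, the constants $a$, $b$ and the choice $f(T) = \frac{3}{2} n R$ (hence $g(T) = \frac{3}{2} n R \ln T$) are placeholder assumptions; the derivation above guarantees the result is independent of $f(T)$.

```python
# Full numerical Carnot cycle for a Redlich-Kwong gas: eta = 1 - TC/TH.
import math

n, R, a, b = 1.0, 8.31, 14.25, 2.11e-5
TH, TC = 350.0, 300.0
V1, V2 = 1.0e-3, 2.0e-3                 # states 1 and 2 on the hot isotherm

def phi(V, T):                          # (V,T)-dependent part of Eq. (adiabatic-solution)
    return (n * R * math.log(V - b)
            + n * n * a / (2 * b * T ** 1.5) * math.log(V / (V + b)))

def g(T):                               # g(T) for the assumed f(T) = (3/2) n R
    return 1.5 * n * R * math.log(T)

def adiabat_end(V_start):               # solve phi(V,TC) + g(TC) = phi(V_start,TH) + g(TH)
    target = phi(V_start, TH) + g(TH) - g(TC)
    lo, hi = V_start, 100.0 * V_start   # phi(., TC) is increasing in V
    for _ in range(200):                # bisection
        mid = 0.5 * (lo + hi)
        if phi(mid, TC) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

V3, V4 = adiabat_end(V2), adiabat_end(V1)

def Q_iso(T, Va, Vb):                   # Eqs. (isothermal-H)/(isothermal-C)
    return (n * R * T * math.log((Vb - b) / (Va - b))
            + n * n * a / (2 * b * math.sqrt(T))
            * math.log(Vb * (Va + b) / (Va * (Vb + b))))

eta = 1.0 - abs(Q_iso(TC, V3, V4)) / abs(Q_iso(TH, V1, V2))
assert abs(eta - (1.0 - TC / TH)) < 1e-9
```

Changing $f(T)$ shifts $V_3$ and $V_4$ individually, but the ratio of heats, and hence $\eta$, is unchanged.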
Derivation of Carnot Efficiency of Arbitrary Gas Equation of State {#arbitrary}
==================================================================
The success of obtaining the Carnot efficiency with the van der Waals gas in Ref. [@AM90] and the Redlich-Kwong gas in the previous section tempts us to ask whether we may obtain Eq. (\[efficiency\]) from any working substance using the same procedure mentioned in Section \[intro\]. Let the substance’s equation of state be in the form of Eq. (\[p-function\]). With the volume dependence of $C_v$ given by Eq. (\[exactness-U\]), the functional form of $C_v$ is $$C_v (V,T) = T \int \left(\frac{\partial^2 p}{\partial T^2}\right)_V
dV + f (T), \label{Cv-general}$$ where $f(T)$ is an arbitrary function of temperature.
Using the same process order of the Carnot cycle as given in Fig. \[Carnot\] and with the help of Eq. (\[dQ-isothermal\]), the heats involved in the isothermal expansion from state $1$ to $2$, as well as in the isothermal compression from state $3$ to $4$, are $$\begin{aligned}
Q_{1 \rightarrow 2} & = & T_H \int_{V_1}^{V_2} \
\left(\frac{\partial p}{\partial T} \right)_V \ dV \equiv T_H \left[F(V_2,T_H) - F(V_1,T_H) \right], \label{heat-general-1-2}\\
Q_{3 \rightarrow 4} & = & T_C \int_{V_3}^{V_4} \
\left(\frac{\partial p}{\partial T} \right)_V \ dV \equiv T_C
\left[F(V_4,T_C) - F(V_3,T_C) \right], \label{heat-general-3-4}\end{aligned}$$ respectively, where $F(V,T) = \int \left(\frac{\partial p}{\partial
T} \right)_V \ dV$.
In the adiabatic process, with the help of Eq. (\[exactness-U\]) it is easy to see that Eq. (\[general-adiabatic\]) is not an exact differential. However, by multiplying Eq. (\[general-adiabatic\]) by a suitable integrating factor, which turns out to be $\mu(V,T) = \frac{1}{T}$, like the one used in Section \[redlich-kwong\], we obtain $$\frac{C_{v}}{T} \ dT + \left(\frac{\partial p}{\partial T} \right)_V
dV = 0. \label{exact-adiabatic}$$ With the help of Eq. (\[Cv-general\]), it is easy to see that Eq. (\[exact-adiabatic\]) is an exact differential, whose general solution is $$\int \left(\frac{\partial p}{\partial T} \right)_V dV + g(T) =
\mbox{constant} \longrightarrow F(V,T) + g(T) = \mbox{constant},
\label{general-adiabatic-arbitrary}$$ where $g(T) = \int \frac{f(T)}{T} \ dT$ is another arbitrary function of temperature. Using Eq. (\[general-adiabatic-arbitrary\]), the relation between states $2$ and $3$ in the adiabatic expansion, as well as the relation between states $4$ and $1$ in the adiabatic compression are $$\begin{aligned}
g(T_H) - g(T_C) & = & F(V_3,T_C) - F(V_2,T_H),
\label{adiabatic-BC-relation-arbitrary} \\
g(T_H) - g(T_C) & = & F(V_4,T_C) - F(V_1,T_H),
\label{adiabatic-DA-relation-arbitrary}\end{aligned}$$ respectively. Equating Eqs. (\[adiabatic-BC-relation-arbitrary\]) and (\[adiabatic-DA-relation-arbitrary\]), we get $$F(V_3,T_C) - F(V_4,T_C) = F(V_2,T_H) - F(V_1,T_H).
\label{adiabatic-ABCD-relation}$$
Finally, the Carnot efficiency $\eta$ is $$\begin{aligned}
\eta & = & 1 - \frac{|Q_{3 \rightarrow 4}|}{|Q_{1 \rightarrow 2}|} \nonumber \\
& = & 1 - \frac{T_C
\left|F(V_4,T_C) - F(V_3,T_C) \right|}{T_H \left|F(V_2,T_H) -
F(V_1,T_H) \right|} \longrightarrow 1 - \frac{T_C}{T_H}\end{aligned}$$ using Eq. (\[adiabatic-ABCD-relation\]). It is just the same efficiency as Eq. (\[efficiency\]) given in Section \[intro\].
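The equation-of-state independence can be illustrated with a deliberately artificial substance. The sketch below uses the toy pressure $p(V,T) = nRT/V + AT^3$, chosen only so that $(\partial^2 p/\partial T^2)_V \neq 0$; the constants $A$ and $c_0$ (with $f(T) = c_0$, so $g(T) = c_0 \ln T$) are arbitrary assumptions. Here $F(V,T) = nR \ln V + 3AT^2V$, and the cycle built from Eq. (\[general-adiabatic-arbitrary\]) again yields the Carnot efficiency.

```python
# Section (arbitrary) machinery applied to a toy equation of state:
# p = n*R*T/V + A*T^3, giving F(V,T) = n*R*ln V + 3*A*T^2*V.
import math

n, R, A, c0 = 1.0, 8.31, 1.0e-6, 12.47
TH, TC = 350.0, 300.0
V1, V2 = 1.0, 2.0                       # arbitrary units

def F(V, T):                            # F(V,T) = int (dp/dT)_V dV
    return n * R * math.log(V) + 3.0 * A * T * T * V

def g(T):                               # g(T) = int f(T)/T dT with f(T) = c0
    return c0 * math.log(T)

def adiabat_end(V_start):               # solve F(V,TC) + g(TC) = F(V_start,TH) + g(TH)
    target = F(V_start, TH) + g(TH) - g(TC)
    lo, hi = V_start, 100.0 * V_start   # F(., TC) is increasing in V
    for _ in range(200):                # bisection
        mid = 0.5 * (lo + hi)
        if F(mid, TC) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

V3, V4 = adiabat_end(V2), adiabat_end(V1)
Q_12 = TH * (F(V2, TH) - F(V1, TH))     # Eq. (heat-general-1-2)
Q_34 = TC * (F(V4, TC) - F(V3, TC))     # Eq. (heat-general-3-4), negative here
eta = 1.0 - abs(Q_34) / abs(Q_12)
assert abs(eta - (1.0 - TC / TH)) < 1e-9
```

The check is a direct instance of Eq. (\[adiabatic-ABCD-relation\]): the $g(T_H) - g(T_C)$ terms cancel between the two adiabats, so $F(V_3,T_C) - F(V_4,T_C) = F(V_2,T_H) - F(V_1,T_H)$ and the heats scale exactly as $T_C/T_H$.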
Summary and Conclusion {#summary}
======================
In this paper, we have derived the Carnot efficiency for the Redlich-Kwong gas as well as for arbitrary gas equations of state using the procedure given in Section \[intro\]. Both results are in agreement with Eq. (\[efficiency\]).
From the derivation using the Redlich-Kwong equation of state, we have shown that the derivation procedure succeeds even if the heat capacity at constant volume $C_v$ is a function of both volume and temperature, which is the difficulty encountered by Agrawal and Menon [@AM90] when deriving the Carnot efficiency using an equation of state with $\left(\frac{\partial C_v}{\partial V} \right)_T \neq 0$. As shown by Eq. (\[Cv-RK\]), we may write the analytical form of $C_v (V,T)$ up to an unknown function of temperature, since we know only the volume dependence of $C_v$ through $\left(\frac{\partial C_v}{\partial V} \right)_T$. From Eq. (\[adiabatic-RK-relation\]), it is clear that the adiabatic relations between states 1, 2, 3 and 4 do not depend on that unknown function of temperature.
Contrary to Agrawal and Menon’s suggestion in Ref. [@AM90] that it is difficult to apply the procedure stated in Section \[intro\] to a finite Carnot cycle with an arbitrary working substance, our results in Section \[arbitrary\] show that it is technically possible to derive the Carnot efficiency (\[efficiency\]) from the general thermodynamic properties discussed in Section \[general-property\]. However, since those thermodynamic properties are derived from the Maxwell relations, where the concept of entropy is used, the results in Section \[arbitrary\] are not surprising. Using Eqs. (\[general-adiabatic\]), (\[heat-general-1-2\])–(\[heat-general-3-4\]) and (\[adiabatic-ABCD-relation\]), it is easy to verify that the derivation given in Section \[arbitrary\] is completely equivalent to the condition of a reversible cycle, $\oint dS = 0$, which also produces the Carnot efficiency (\[efficiency\]) regardless of the working substance. The results in Section \[arbitrary\] may answer students’ questions concerning how the derivation of the Carnot efficiency from any given working substance may be carried out, using the procedure stated in Section \[intro\], to produce Eq. (\[efficiency\]).
Acknowledgement {#acknowledgement .unnumbered}
===============
The authors would like to thank Prof. B. Suprapto Brotosiswojo and Dr. A. Rusli of the Department of Physics, Institut Teknologi Bandung, Indonesia, for their helpful discussions and corrections on the subject of this paper.
References {#references .unnumbered}
===========
D. C. Agrawal and V. J. Menon, Eur. J. Phys. [**11**]{}, 88–90 (1990).
Ward, K., [*Thermodynamics*]{}, 9$^{th}$ Ed., McGraw-Hill Book Company, New York (1977).
Sears, F.W. and Salinger, G. L., [*Thermodynamics, Kinetic Theory and Statistical Thermodynamics*]{}, 3$^{rd}$ Ed., Addison-Wesley Pub. Co., Manila (1975); Zemansky, M. W. and Dittman, R. H., [*Heat and Thermodynamics*]{}, 6$^{th}$ Ed., McGraw-Hill, New York (1982).
Goldstein, H., [*Classical Mechanics*]{}, Addison-Wesley, Massachusetts (1980), p. 339.
---
abstract: |
The vacant set of random interlacements on ${{\mathbb Z}}^d$, $d \ge3$, has nontrivial percolative properties. It is known from Sznitman \[*Ann. Math.* **171** (2010) 2039–2087\], Sidoravicius and Sznitman \[*Comm. Pure Appl. Math.* **62** (2009) 831–858\] that there is a nondegenerate critical value $u_*$ such that the vacant set at level $u$ percolates when $u < u_*$ and does not percolate when $u >
u_*$. We derive here an asymptotic upper bound on $u_*$, as $d$ goes to infinity, which complements the lower bound from Sznitman \[*Probab. Theory Related Fields*, to appear\]. Our main result shows that $u_*$ is equivalent to $\log d$ for large $d$ and thus has the same principal asymptotic behavior as the critical parameter attached to random interlacements on $2d$-regular trees, which has been explicitly computed in Teixeira \[*Electron. J. Probab.* **14** (2009) 1604–1627\].
address: |
Departement Mathematik\
ETH Zürich\
CH-8092 Zürich\
Switzerland\
author:
-
title: On the critical parameter of interlacement percolation in high dimension
---
Introduction {#sec0}
============
Random interlacements have proven useful in understanding how trajectories of random walks can create large separating interfaces; see [@Szni09c; @Szni09d; @CernTeixWind09]. In the case of ${{\mathbb Z}}^d$, $d \ge3$, it is known that the interlacement at level $u \ge0$ is a random subset of ${{\mathbb Z}}^d$, which is connected, ergodic under translations and infinite when $u$ is positive; see [@Szni07a]. The density of this set monotonically increases from $0$ to $1$ as $u$ goes from $0$ to $\infty$. Its complement, the vacant set at level $u$, displays nontrivial percolative properties. There is a critical value $u_*$ in $(0,\infty)$ such that for $u < u_*$, the vacant set at level $u$ has an infinite connected component which is unique (see [@SidoSzni09a; @Teix09a]) and, for $u > u_*$, only has finite connected components; see [@Szni07a]. Little is known about $u_*$ and only recently was it shown that $u_*$ diverges when the dimension $d$ tends to infinity; see [@Szni09e]. The aim of the present article is to establish that $u_*$ is equivalent to $\log d$ as $d$ tends to infinity. In particular, this result shows that $u_*$ has the same principal asymptotic behavior for large $d$ as the corresponding critical parameter (which has been explicitly computed in [@Teix09b]) attached to the percolation of the vacant set of random interlacements on $2d$-regular trees.
We now describe the model. Precise definitions and pointers to the literature appear in Section \[sec1\]. Random interlacements are made of a cloud of paths, which constitute a Poisson point process on the space of doubly infinite ${{\mathbb Z}}^d$-valued trajectories modulo time shift, tending to infinity at positive and negative infinite times. The nonnegative parameter $u$ mentioned above plays the role (roughly speaking) of a multiplicative factor of the intensity measure of the Poisson point process. Actually, one simultaneously constructs, on a suitable probability space $(\Omega, {\mathcal{A}}, {{\mathbb P}})$, the whole family ${\mathcal{I}}^u$, $u
\ge0$, of random interlacements at level $u \ge0$ \[cf. (\[1.30\])\]. They are the traces on ${{\mathbb Z}}^d$ of the trajectories modulo time shift in the cloud having labels at most $u$. The complement ${\mathcal{V}}^u$ of ${\mathcal{I}}^u$ in ${{\mathbb Z}}^d$ is the vacant set at level $u$. It satisfies the following identity: $$\label{0.1}
{{\mathbb P}}[{\mathcal{V}}^u \supseteq K] = \exp\{- u \operatorname{cap}(K)\} \qquad\mbox{for all
finite $K \subseteq{{\mathbb Z}}^d$}.$$ In fact, this formula provides a characterization of the law on $\{0,1\}
^{{{\mathbb Z}}^d}$ of the indicator function of ${\mathcal{V}}^u$ (cf. (2.16) of [@Szni07a]). From Theorem 3.5 of [@Szni07a] and Theorem 3.4 of [@SidoSzni09a], one knows that there is a critical value $u_*$ in $(0,\infty)$ such that: $$\begin{aligned}
\label{0.2}\hspace*{20pt}
&&\mbox{\hphantom{i}(i)} \quad \mbox{for $u > u_*$, ${{\mathbb P}}$-a.s.}\qquad \mbox{all connected
components of ${\mathcal{V}}^u$ are finite};
\nonumber\\
&&\mbox{(ii)} \quad \mbox{for $u < u_*$, ${{\mathbb P}}$-a.s.}\qquad \mbox{there exists an
infinite connected}\\
&&\hspace*{123pt}\mbox{component in ${\mathcal{V}}^u$}.\nonumber\end{aligned}$$ From Theorem 0.1 of [@Szni09e], one has the following asymptotic lower bound on $u_*$ as $d$ tends to infinity: $$\label{0.3}
\liminf_d u_*\big/\log d \ge1 .$$
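For the simplest set $K = \{0\}$, formula (\[0.1\]) reads ${{\mathbb P}}[0 \in {\mathcal{V}}^u] = \exp\{- u \operatorname{cap}(\{0\})\}$, where, with the normalization used for random interlacements, $\operatorname{cap}(\{0\})$ equals the escape probability of simple random walk (about $0.66$ in $d = 3$). As a hedged illustration, not part of the paper's argument, the sketch below estimates this capacity by Monte Carlo; the cutoff and sample size are arbitrary choices, and truncating at a finite number of steps slightly overestimates escape.

```python
# Monte Carlo estimate of cap({0}) = P_0[simple random walk never returns to 0]
# in Z^3, illustrating formula (0.1) for K = {0}.
import math
import random

random.seed(0)
d = 3
steps = [tuple(s * (1 if i == j else 0) for j in range(d))
         for i in range(d) for s in (1, -1)]

def escapes(cutoff=2000):
    # walk until first return to the origin or until the step cutoff
    x = (0,) * d
    for _ in range(cutoff):
        dx = random.choice(steps)
        x = tuple(a + b for a, b in zip(x, dx))
        if x == (0,) * d:
            return False
    return True

trials = 1500
cap_hat = sum(escapes() for _ in range(trials)) / trials
# known value: cap({0}) = 1 - 0.3405... ~ 0.6595 in Z^3
assert 0.55 < cap_hat < 0.78

# probability that the origin is vacant at level u, per formula (0.1)
u = 1.0
p_vacant = math.exp(-u * cap_hat)
```

The estimate makes the monotonicity behind (\[0.2\]) concrete: ${{\mathbb P}}[0 \in {\mathcal{V}}^u]$ decays exponentially in $u$ at rate $\operatorname{cap}(\{0\})$.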
The main aim of the present article is to show that the above lower bound does capture the correct asymptotic behavior of $u_*$ and that the following statement holds.
\[theo0.1\] $$\label{0.4}
\lim_d u_*\big/\log d= 1 .$$
As a byproduct, this result shows that $u_*$ has the same principal asymptotic behavior as the critical value attached to random interlacements on $2d$-regular trees when $d$ goes to infinity; see Proposition 5.2 of [@Teix09b]. We refer the reader to Remark \[rem4.1\] for more on this matter. In addition, the proof of Theorem \[theo0.1\] also shows (cf. Remark \[rem4.1\]) that $$\label{0.5}
\lim_d u_{**}\big/\log d = 1 ,$$ where $u_{**} \in[u_*,\infty)$ is another critical value introduced in [@Szni09c]. Informally, $u_{**}$ is the critical level above which there is a polynomial decay in $L$ for the probability of existence of a vacant crossing between a box of side length $L$ and the complement of a concentric box of double side length. It is an important and presently unresolved question whether $u_* = u_{**}$ actually holds. However, it is known that the connectivity function of the vacant set at level $u$, that is, the probability that $0$ and a (distant) $x$ are linked by a path in ${\mathcal{V}}^u$ (i.e., the probability of a vacant crossing at level $u$ between $0$ and $x$) has a stretched exponential decay in $x$ when $u$ is bigger than $u_{**}$; see Theorem 0.1 of [@SidoSzni09b].
We will briefly comment on the proof of Theorem \[theo0.1\]. In view of (\[0.3\]), we only need to show that $$\label{0.6}
\limsup_d u_*\big/ \log d \le1 .$$ As for Bernoulli bond or site percolation, similarities between what happens on ${{\mathbb Z}}^d$ and on $2d$-regular trees for large $d$ lurk in the background of the proof. The statement corresponding to (\[0.6\]) for Bernoulli percolation is an asymptotic lower bound for the critical probability (a lower bound, not an upper bound, because the density of ${\mathcal{V}}^u$ decreases with $u$). Whereas the required lower bound in the Bernoulli percolation context follows from a short Peierls-type argument (cf. [@BroaHamm57], page 640, [@Kest90], page 222, or [@Grim99], page 25), the proof of (\[0.6\]) for random interlacements is quite involved. The long-range dependence present in the model is deeply felt.
An important feature of working in high dimension is that the $\ell
^1$-, Euclidean and $\ell^\infty$-distances all behave very differently on ${{\mathbb Z}}^d$; see (\[1.1\]). At large enough scales (i.e., Euclidean distance at least $d$), the Green function of the simple random walk “feels the invariance principle” and is well controlled by expressions of the type $(c \sqrt{d} / |\cdot|)^{d-2}$, where $c$ does not depend on $d$ and $|\cdot|$ stands for the Euclidean norm; see Lemma \[lem1.1\]. However, at shorter range, the walk feels more of the tree-like nature of the space and the use of bounds involving the $\ell^1$-distance becomes more pertinent \[cf. (\[1.14\]) and Remark \[rem1.2\]\].
The above dichotomy permeates the proof of (\[0.6\]). We use a modification of the renormalization scheme (“for fixed $d$”) employed in [@SidoSzni09b]. The renormalization scheme enables us to transform certain local controls on the probability of vacant crossings at level $u_0 = ( 1 + 5 \varepsilon) \log d$, $\varepsilon> 0$, small, into controls on the probability of vacant crossings at arbitrary large scales at a bigger level $u_\infty< ( 1 + 10 \varepsilon) \log d$.
The local estimates entering the initial step of the renormalization scheme are developed in Section \[sec3\]. These involve controls on the existence of vacant crossings moving at $\ell^1$-distance $c(\varepsilon)d$ from a box of side length $L_0 = d$ for the interlacement at level $u_0$. The $2d$-regular tree model lurks behind the control of these local crossings. The key estimates appear in Theorem \[theo3.1\] and Corollary \[cor3.4\]. These estimates result from an enhanced Peierls-type argument involving the consideration of what happens in $\frac{c}{\varepsilon^2}$ $\ell^1$-balls, each having an $\ell^1$-radius $c^\prime\varepsilon d$ and lying at mutual $\ell
^1$-distances of at least $c^{\prime\prime} d$. For this step, part of the difficulty stems from the fact that the local estimates need to be strong enough to overcome the combinatorial complexity involved in the selection of the dyadic trees entering the renormalization scheme.
The renormalization scheme is developed in Section \[sec2\]. It propagates along an increasing sequence of levels $u_n$, with initial value $u_0 =
( 1 + 5 \varepsilon) \log d$ and limiting value $u_\infty< ( 1+ 10
\varepsilon) \log d$, uniform estimates on the probability of events involving the presence of certain vacant crossings at level $u_n$. Roughly speaking, these events correspond to the presence in $2^n$ boxes of side length $L_0(=d)$ of paths in ${\mathcal{V}}^{u_n}$. The boxes can be thought of as the “bottom leaves” of a dyadic tree of depth $n$ and are well “spread out” within a box of side length $3L_n$, where $L_n = \ell^n_0 L_0$ and $\ell_0 = d$. The paths start in each of the $2^n$ boxes of side length $L_0$ and move at Euclidean (and hence $\ell^1$-) distance of order $c(\varepsilon)d$ from the boxes. The estimates are conducted uniformly over the possible dyadic trees involved (cf. Propositions \[prop2.1\] and \[prop2.3\]). The main induction step in the above procedure (cf. Proposition \[prop2.1\]) relies on the sprinkling technique introduced in [@Szni07a] to control the long-range interactions. The rough idea is to introduce more trajectories in the interlacement by letting the levels slightly increase along the convergent sequence $u_n$. In this way, one dominates the long-range dependence induced by trajectories of the interlacement traveling between distant boxes. In the present context, the method uses, in an essential way, quantitative estimates on Harnack constants in large Euclidean balls when the dimension $d$ goes to infinity. These estimates crucially enter the proof of Proposition \[prop2.3\]. The bounds on the Harnack constants are derived in Proposition \[prop1.3\] with the help of the general Lemma \[lemA.2\] from the Appendix, which is an adaptation of Lemma 10.2 of Grigoryan and Telcs [@GrigTelc01].
Let us now describe how this article is organized.
In Section \[sec1\], we introduce notation and recall several useful facts concerning random walks and random interlacements. An important role is played by the Green function bounds (see Lemma \[lem1.1\]) and by the bounds on Harnack constants; see Proposition \[prop1.3\].
In Section \[sec2\], we develop the renormalization scheme. It follows, with a number of changes, the general line of [@SidoSzni09b]. The key induction step appears in Proposition \[prop2.1\]. The main consequences of the renormalization scheme for the proof of Theorem \[theo0.1\] are stated in Proposition \[prop2.3\].
In Section \[sec3\], we derive the crucial local control on the existence of vacant crossings at level $u_0$ traveling at $\ell^1$-distance of order some suitable multiple of $d$. This local control is stated in Theorem \[theo3.1\]. It enables one to produce the required estimate to initiate the renormalization scheme. This estimate can be found in Corollary \[cor3.4\].
Section \[sec4\] provides the proof of (\[0.6\]). Combined with the lower bound (\[0.3\]) from [@Szni09e], this yields Theorem \[theo0.1\]. In Remark \[rem4.1\], we discuss some further questions concerning the asymptotic behavior of $u_*$ for large $d$.
In the Appendix, we first derive, in Lemma \[lemA.1\], an elementary inequality involved in the proof of the Green function bounds from Lemma \[lem1.1\]. We then present, in Lemma \[lemA.2\], a general result of independent interest, providing controls on Harnack constants in terms of killed Green functions for general nearest-neighbor Markov chains on graphs.
Finally, let us explain the convention we use concerning constants. Throughout the text, $c$ or $c^\prime$ denote positive constants with values which can change from place to place. These constants are independent of $d$. The numbered constants $c_0, c_1,\ldots$ are fixed as the values of their first appearances in the text. Dependence of constants on additional parameters appears in the notation, for instance, $c(\varepsilon)$ denotes a constant depending on $\varepsilon$.
Notation and random walk estimates {#sec1}
==================================
In this section, we introduce further notation and gather various useful estimates on simple random walk on ${{\mathbb Z}}^d$ for large $d$. Controls on the Green function and on Harnack constants in Euclidean balls play an important role in the sequel. These can be found in Lemma \[lem1.1\] and Proposition \[prop1.3\]. We also recall several useful facts concerning random interlacements.
We let ${{\mathbb N}}= \{0,1,2,\ldots\}$ denote the set of natural numbers. Given a nonnegative real number $a$, we let $[a]$ denote the integer part of $a$. We denote by $|\cdot|_1$, $|\cdot|$ and $|\cdot|_\infty
$ the $\ell^1$-, Euclidean and $\ell^\infty$-norms on ${{\mathbb R}}^d$, respectively. We have the following inequalities: $$\label{1.1}
|\cdot|_\infty\le| \cdot| \le| \cdot|_1 ,\qquad |\cdot| \le
\sqrt{d} |\cdot|_\infty,\qquad |\cdot|_1 \le\sqrt{d} |\cdot| .$$ Unless explicitly stated otherwise, we tacitly assume that $d \ge3$.
By *finite path*, we mean a sequence $x_0,\ldots, x_N$ in ${{\mathbb Z}}^d$, with $N \ge1$, which is such that $|x_{i+1} - x_i|_1 = 1$ for $0
\le i < N$. We sometimes write “path” in place of “finite path” when this causes no confusion. We denote by $B(x,r)$ and $S(x,r)$ the closed ball and the closed sphere, respectively, with radius $r \ge0$ and center $x \in{{\mathbb Z}}^d$. In the case of the $\ell^p$-distance where $p=1$ or $\infty$, the corresponding objects are denoted by $B_p(x,r)$ and $S_p(x,r)$. For $A,B \subseteq{{\mathbb Z}}^d$, we write $A + B$ for the set of $x+y$ with $x$ in $A$ and $y$ in $B$, and $d(A,B) = \inf\{
|x-y|; x \in A, y \in B\}$ for the mutual Euclidean distance between $A$ and $B$. We write $d_p(A,B)$, where $p=1$ or $\infty$, when the $\ell^p$-distance is used instead. The notation $K \subset\subset{{\mathbb Z}}^d$ indicates that $K$ is a finite subset of ${{\mathbb Z}}^d$. When $U$ is a subset of ${{\mathbb Z}}^d$, we write $|U|$ for the cardinality of $U$, $\partial U = \{x \in U^c; \exists y \in U$, $|x-y|_1 = 1\}$ for the boundary of $U$ and $\partial_{\mathrm{int}} U= \{x \in U$; $\exists y
\in U^c$, $|x-y|_1 = 1\}$ for the interior boundary of $U$. We also write $\overline{U}$ in place of $U \cup\partial U$.
We denote by $W_+$ the set of nearest-neighbor ${{\mathbb Z}}^d$-valued trajectories defined for nonnegative times and tending to infinity. We write ${\mathcal{W}}_+$ and $X_n$, $n \ge0$, for the canonical $\sigma
$-algebra and the canonical process on $W_+$, respectively. We denote by $\theta_n$, $n \ge0$, the canonical shift on $W_+$ so that $\theta
_n(w) = w(\cdot+ n)$ for $w \in W_+$ and $n \ge0$. Since $d \ge3$, the simple random walk on ${{\mathbb Z}}^d$ is transient and we write $P_x$ for the restriction to the set $W_+$ of full measure of the canonical law of the walk starting at $x \in{{\mathbb Z}}^d$. When $\rho$ is a measure on ${{\mathbb Z}}^d$, we denote by $P_\rho$ the measure $\sum_{x \in{{\mathbb Z}}^d} \rho
(x) P_x$ and by $E_\rho$ the corresponding expectation. Given $U
\subseteq{{\mathbb Z}}^d$, we write $H_U = \inf\{n \ge0; X_n \in U\}$, $\widetilde
{H}_U = \inf\{n \ge1; X_n \in U\}$ and $T_U = \inf\{n \ge0; X_n
\notin U\}$ for the entrance time in $U$, the hitting time of $U$ and the exit time from $U$, respectively. In the case of a singleton $\{x\}$, we simply write $H_x$ and $\widetilde{H}_x$.
We let $g(\cdot,\cdot)$ stand for the Green function: $$\label{1.2}
g(x,x^\prime) = \sum_{n \ge0} P_x[X_n = x^\prime] \qquad\mbox{for
$x,x^\prime$ in ${{\mathbb Z}}^d$}.$$ The Green function is symmetric in its two variables and, due to translation invariance, $g(x,x^\prime) = g(x^\prime- x) = g(x -
x^\prime)$, where $$\label{1.3}
g(x) = g(x,0) = g(0,x) \qquad\mbox{for } x \in{{\mathbb Z}}^d .$$ The $\ell^1$-distance is relevant for the description of the short-range behavior of $g(\cdot)$ in high dimension \[cf. Remark 1.3(1) of [@Szni09e] and Remark \[rem1.2\] below\]; the Euclidean distance becomes relevant in the description of the “mid-to-long-range” behavior of $g(\cdot)$. The following lemma will be repeatedly used in the sequel. We recall that the convention concerning constants is stated at the end of the Introduction.
\[lem1.1\] $$\begin{aligned}
\label{1.4}
g(x) &\le& \bigl(c_0 \sqrt{d} / |x|\bigr)^{d-2} \qquad\mbox{for $|x| \ge d$}
\\
\label{1.5}
g(x) &\ge& \bigl(c_1 \sqrt{d} / |x|\bigr)^{d-2} \nonumber\\[-8pt]\\[-8pt]
\eqntext{\mbox{for $|x|^2 \ge d
|x|_\infty> 0$ (and, in particular, when $|x| \ge d$)}}
\\
\label{1.6}
\hspace*{22pt}P_x\bigl[H_{B(0,L)} < \infty\bigr] &\le& \biggl(\frac{c L}{|x|} \biggr)^{d-2}
\wedge1 \qquad\mbox{for $L \ge d, x \in{{\mathbb Z}}^d$ (with $c \ge1$)}\end{aligned}$$
We begin with the proof of (\[1.4\]), (\[1.5\]). To this end, we denote by $p_t(u,v)$, $t \ge0$, $u,v \in{{\mathbb Z}}$, the transition probability of the simple random walk in continuous time on ${{\mathbb Z}}$ with exponential jumps of parameter $1$. The transition probability of the simple random walk on ${{\mathbb Z}}^d$ with exponential jumps of parameter $d$ can then be expressed as the product of one-dimensional transition probabilities. Relating the continuous- and the discrete-time random walks on ${{\mathbb Z}}^d$, we thus find that $$\label{1.7}
g(x) = d \int^\infty_0 \prod^d_{i=1} p_t(0,x_i) \,dt\qquad
\mbox{for $x = (x_1,\ldots,x_d) \in{{\mathbb Z}}^d$} .$$ From Theorem 3.5 of [@Pang93] and the fact that the function $$F(\gamma) = - \log\bigl(\gamma+ \sqrt{\gamma^2 + 1} \bigr) +
\frac{1}{\gamma} \bigl(\sqrt{\gamma^2 + 1} - 1 \bigr),\qquad
\gamma> 0 ,$$ appearing in Theorem 3.5 of [@Pang93] has derivative $-(1 + \sqrt
{\gamma^2 + 1})^{-1}$, tends to $0$ as $\gamma\rightarrow0$ and thus satisfies the inequality $\log(1 + \frac{\gamma}{2}) \le- F(\gamma) \le\log
(1 + \gamma)$ for $\gamma\ge0$, we see that for suitable constants $0 < \kappa< 1 < \kappa^\prime$, we have $$\begin{aligned}
\label{1.8}
&&\frac{1}{\kappa^\prime} (1 \vee t \vee
|u|)^{-{1/2}} \exp\biggl\{- |u| \log\biggl(1 + \kappa^\prime
\frac{|u|}{t} \biggr) \biggr\}\nonumber\\
&&\qquad \le p_t(0,u) \le
\frac{1}{\kappa} (1 \vee t \vee|u|)^{-{1/2}} \exp\biggl\{- |u| \log\biggl(1 + \kappa
\frac{|u|}{t} \biggr) \biggr\}\\
\eqntext{\mbox{for } t > 0, u \in{{\mathbb Z}}.}
$$ We now prove (\[1.4\]) and thus assume that $|x| \ge d$. By (\[1.7\]), (\[1.8\]), we bound $g(x)$ from above as follows (we also use the inequality $d \le2^d$ and Lemma \[lemA.1\] from the Appendix): $$\begin{aligned}
\label{1.9}
g(x) &\le& c^d \int^\infty_0 (1 \vee t)^{-{d/2}}
\exp\Biggl\{ - \sum^d_{i=1} |x_i| \log\biggl(1 + \kappa
\frac{|x_i|}{t} \biggr) \Biggr\} \,dt
\nonumber\\
&\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{A.1})}}}{\le} & c^d \int^\infty_0 (1 \vee
t)^{-{d/2}} \exp\biggl\{- |x| \log\biggl(1 + \kappa
\frac{|x|}{t} \biggr) \biggr\} \,dt
\nonumber\\[-8pt]\\[-8pt]
&\le& c^d \int_0^{\kappa|x|} (1 \vee t)^{-{d/2}}
\exp\biggl\{- |x| \log\biggl(1 + \kappa\frac
{|x|}{t} \biggr) \biggr\} \,dt
\nonumber\\
&&{} + c^d \int^\infty_{\kappa|x|} t^{-{d/2}} \exp
\biggl\{- \frac{\kappa|x|^2}{2 t} \biggr\} \,dt
,\nonumber\end{aligned}$$ where, in the last step, we have used the inequality $\log(1 + \gamma
) \ge\frac{\gamma}{2}$ for $0 \le\gamma\le1$. Performing the change of variable $s = \frac{\kappa|x|^2}{2t}$ in the last integral, we see that the last term of (\[1.9\]) is smaller than $$\label{1.10}\qquad
c^d |x|^{2-d} \int_0^{{|x|}/{2}} s^{{d}/{2} - 2}
e^{-s} \,d s \le c^d |x|^{2-d} \Gamma\biggl(\frac
{d}{2} - 1 \biggr) \le\bigl(c \sqrt{d} / |x|\bigr)^{d-2} ,$$ using the asymptotic behavior of the gamma function in the last step (cf. [@Olve74], page 88).
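To make the last step explicit (our gloss of the Stirling estimate, with unspecified constants as in the rest of the text): since $\Gamma(z) \le c (z/e)^z$ for $z \ge\frac{1}{2}$, one has

```latex
\Gamma\biggl(\frac{d}{2}-1\biggr)
\le c \biggl(\frac{d}{2e}\biggr)^{{d}/{2}-1}
= c \biggl(\sqrt{\frac{d}{2e}}\,\biggr)^{d-2}
\le c \bigl(\sqrt{d}\bigr)^{d-2} ,
```

and, since $c^d \le(c^3)^{d-2}$ for $d \ge3$ (when $c \ge1$), the middle term of (\[1.10\]) is indeed at most $(c^\prime\sqrt{d}/|x|)^{d-2}$.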
As for the first integral in the last line of (\[1.9\]), we note that for $1 \le s \le\kappa|x|$, the function $s \rightarrow-\frac
{d}{2} \log s - |x| \log(1 + \kappa\frac{|x|}{s})$ has derivative $$-\frac{d}{2s} + \frac{|x|}{s}
\frac{\kappa|x|}{s + \kappa|x|} \stackrel{s \le
\kappa|x|}{\ge} - \frac{d}{2s} +
\frac{|x|}{2s} \stackrel{|x| \ge d}{\ge} 0$$ and is hence nondecreasing. Thus, the first term in the last line of (\[1.9\]) is smaller than $c^d(\kappa|x|)^{-({d}/{2} - 1)} 2^{-|x|}$.
Observe that for $a \ge d$, $\frac{d-2}{2} \log a + a \log2 \ge
(d-2) \log\frac{a}{\sqrt{d}}$ (indeed, this inequality holds for $a=d$ and $\frac{d-2}{2a}+ \log2 \ge\frac{d-2}{a}$ for $a \ge d$). It follows that the first term in the last line of (\[1.9\]) is at most $(c \sqrt{d} / |x|)^{d-2}$. Together with (\[1.10\]), this completes the proof of (\[1.4\]).
We now prove (\[1.5\]) and assume that $x \not= 0$. Since $\log(1
+ \gamma) \le\gamma$ for $\gamma\ge0$, and $\kappa^\prime> 1$, it follows from (\[1.7\]), (\[1.8\]) that $$\begin{aligned}
\label{1.11}
g(x) & \ge & c^d \int^\infty_{\kappa^\prime|x|_\infty}
t^{-{d/2}} \exp\biggl\{- \kappa^\prime
\frac{|x|^2}{t} \biggr\} \,dt \nonumber\\
&\stackrel{s = {\kappa
^\prime
|x|^2}/{t}}{\ge}& c^d |x|^{2-d} \int_0^{
{|x|^2}/{|x|_\infty}}
s^{{d}/{2} -2} e^{-s} \,ds
\\
& \ge & \biggl(\frac{c\sqrt{d}}{|x|} \biggr)^{d-2}
\qquad\mbox{when $|x|^2 \ge d
|x|_\infty$} \nonumber\end{aligned}$$ and (\[1.5\]) follows.
Finally, (\[1.6\]) is a routine consequence of the identity $$\label{1.12}\quad
g(x) = E_x \bigl[g\bigl(X_{H_{B(0,L)}}\bigr), H_{B(0,L)} < \infty\bigr] \qquad\mbox{for $L
\ge0$ and $x \in{{\mathbb Z}}^d$} ,$$ combined with (\[1.4\]), (\[1.5\]) and the fact that $\inf
_{B(0,L)} g \ge\inf_{\partial B(0,L)} g$.
\[rem1.2\]
\(1) Although we will not need this fact in the sequel, let us mention that the following lower bound complementing (\[1.6\]) also holds: $$\label{1.13}\hspace*{25pt}
P_x\bigl[H_{B(0,L)} < \infty\bigr] \ge\biggl(\frac{c
L}{|x|} \biggr)^{d-2} \wedge1 \qquad\mbox{for $L \ge d$}, x \in
{{\mathbb Z}}^d \mbox{ (with $c \le1$)} .$$ Indeed, one uses (\[1.12\]), together with (\[1.4\]), (\[1.5\]), and, when $d + 1 \ge L( \ge d)$, the inequality $\sup
_{\partial_{\mathrm{int}} B(0,L)} g \le2d \sup_{\partial B (0,L)} g$, which follows from the fact that $g$ is harmonic outside the origin (the factor $2d$ can then be absorbed into a constant $\widetilde{c}^{d-2}$).
\(2) Let us point out that when $x = ([d^\alpha],0,\ldots,0)$ with $\frac{1}{2} < \alpha< 1$, the upper bound (\[1.4\]) does not hold when $d \ge c(\alpha)$. Indeed, it follows from (\[1.7\]), (\[1.8\]) that $$g(x) \ge d \int^2_1 p_t(0,[d^\alpha]) p_t(0,0)^{d-1} \,dt
\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{1.8})}}}{\ge} c^d d^{-{\alpha/2}} \exp\{-
d^\alpha\log(1 + \kappa^\prime d^\alpha)\} ,$$ which is much bigger than $(c_0 \sqrt{d} / |x|)^{d-2} \le c^{d-2} \exp
\{-(\alpha- \frac{1}{2}) (d-2) \log d\}$ for $d \ge c(\alpha)$.
\(3) We recall from (\[1.11\]) of [@Szni09e] that when $d \ge5$, $$\label{1.14}
g(x) \le\biggl(\frac{c_2 d}{|x|_1} \biggr)^{
{d/2} - 2} \qquad\mbox{for $x \in{{\mathbb Z}}^d$} .$$ The inequality is useful, for instance, when $|x| < d$, but $|x|_1 \ge
c_2 d$, a situation where (\[1.4\]) is of no help. We will use (\[1.14\]) in Section \[sec3\] when deriving local bounds on the connectivity function of random interlacements at a level $u_0$ close to $\log d$; see the proof of Theorem \[theo3.1\].
\(4) The asymptotic behavior of $g(x)$ for $d$ fixed and large $x$ is well known; see, for instance, [@HaraSlad92], page 313, or [@Lawl91], page 31: $$\lim_{x \rightarrow\infty} |x|^{d-2} g(x) = \frac{d}{2} \Gamma
\biggl(\frac{d}{2} - 1 \biggr) \pi^{-{d/2}} .$$ The asymptotic behavior of $g(\cdot)$ at the origin, or close to the origin when $d$ tends to infinity, is also well known; see, for instance, [@Mont56], page 246, or [@Szni09e], Remark 1.3(1). On the other hand, the behavior of $g(\cdot)$ at intermediate scales when $d$ tends to infinity seems much less well explored.
The bounds on the Green function of Lemma \[lem1.1\], together with Lemma \[lemA.2\] from the Appendix, enable us to derive quantitative controls on Harnack constants in suitably large Euclidean balls. These bounds will be instrumental for the renormalization scheme developed in the next section; see the proof of Lemma \[lem2.2\]. First, we recall some terminology. When $U \subseteq{{\mathbb Z}}^d$, we say that a function $u$ defined on $\overline{U}$ is harmonic in $U$ if, for all $x \in U$, $u(x) = \frac{1}{2d} \sum_{|e| = 1} u(x+e)$. We can now state the following proposition.
\[prop1.3\] Setting $c_3 = 4 + 10 \frac{c_0}{c_1}$ \[where $c_0 \ge c_1$—see (\[1.4\]), (\[1.5\])\], there exists $c > 1$ such that when $u$ is a nonnegative function defined on $\overline{B(0,c_3 L)}$ and harmonic in $B(0,c_3 L)$, we have $$\label{1.15}
\max_{B(0,L)} u \le c^d \min_{B(0,L)} u .$$
We define $U_1 = B(0,L) \subseteq U_2 = B(0,4L) \subseteq U_3 = B(0,c_3
L)$. In view of Lemma \[lemA.2\] from the Appendix, any $u$ as above satisfies the inequality $$\max_{U_1} u \le K \min_{U_1} u ,$$ where $$\label{1.16}
K = \max_{x,y \in U_1} \max_{z \in\partial_{\mathrm{int}} U_2} G_{U_3}(x,z) / G_{U_3}(y,z)$$ and $G_{U_3}(\cdot,\cdot)$ stands for the Green function of the walk killed outside $U_3$ \[cf. (\[A.8\])\]. Applying the strong Markov property at time $T_{U_3}$ and (\[1.2\]), we obtain the following identity: $$G_{U_3}(y,z) = G(y,z) - E_y [G(X_{T_{U_3}}, z)] \qquad\mbox{for } y,z
\in{{\mathbb Z}}^d .$$ Hence, when $x,y \in U_1$ and $z \in\partial_{\mathrm{int}} U_2$, we see that $$\label{1.17}
G_{U_3}(x,z) \le G(x,z) \stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{1.4})}}}{\le} \bigl(c_0 \sqrt{d}
/ (2L)\bigr)^{d-2}$$ and $$\begin{aligned}
\label{1.18}
G_{U_3}(y,z) & \ge & \bigl(c_1 \sqrt{d} / (5L)\bigr)^{d-2} - \bigl\{c_0 \sqrt{d} /
\bigl((c_3 - 4) L\bigr)\bigr\}^{d-2}
\nonumber\\
& = & \biggl(\frac{\sqrt{d}}{L} \biggr)^{d-2} \biggl(
\biggl(\frac{c_1}{5} \biggr)^{d-2} - \biggl(
\frac{c_0}{c_3 - 4} \biggr)^{d-2} \biggr)
\\
& = & \biggl(\frac{\sqrt{d}}{L} \biggr)^{d-2}
\biggl(\frac{c_1}{5} \biggr)^{d-2} \biggl(1 - \biggl(
\frac{1}{2}\biggr)^{d-2} \biggr) . \nonumber\end{aligned}$$ We thus find that $K \le2 (\frac{5}{2} \frac{c_0}{c_1})^{d-2}$ and the claim (\[1.15\]) follows.
We now briefly review some notation and basic properties concerning the equilibrium measure and the capacity. Given $K \subset\subset{{\mathbb Z}}^d$, we write $e_K$ for the equilibrium measure of $K$ and $\operatorname{cap}(K)$ for its total mass, the capacity of $K$: $$\begin{aligned}
\label{1.19}
e_K(x) &=& P_x[\widetilde{H}_K = \infty] 1_K(x),\qquad x \in{{\mathbb Z}}^d,\nonumber\\[-8pt]\\[-8pt]
\operatorname{cap}(K) &=& \sum_{x \in K} P_x[\widetilde{H}_K = \infty]
.\nonumber\end{aligned}$$ The capacity is subadditive \[a straightforward consequence of (\[1.19\])\]: $$\label{1.20}
\operatorname{cap} (K \cup K^\prime) \le\operatorname{cap}(K) + \operatorname{cap}(K^\prime),
\qquad\mbox{for } K, K^\prime\subset\subset{{\mathbb Z}}^d .$$ One can also express the probability of entering $K$ in the following well-known fashion: $$\label{1.21}
P_x[H_K < \infty] = \sum_{y \in K} g(x,y) e_K(y) \qquad\mbox{for
$x \in{{\mathbb Z}}^d$} .$$ Further, we have the following bound on the capacity of Euclidean balls: $$\label{1.22}
\operatorname{cap}(B(0,L)) \le\biggl(\frac{cL}{\sqrt{d}}
\biggr)^{d-2}\qquad
\mbox{for $L \ge d$} ,$$ which follows from (\[1.5\]), (\[1.6\]) and (\[1.21\]), by letting $x$ tend to infinity.
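In more detail (a routine gloss of the last assertion): for $|x|$ large, (\[1.21\]) together with (\[1.5\]) and (\[1.6\]) yields

```latex
\biggl(\frac{cL}{|x|}\biggr)^{d-2}
\ge P_x\bigl[H_{B(0,L)} < \infty\bigr]
= \sum_{y \in B(0,L)} g(x,y)\, e_{B(0,L)}(y)
\ge \biggl(\frac{c_1 \sqrt{d}}{|x|+L}\biggr)^{d-2} \operatorname{cap}\bigl(B(0,L)\bigr) ,
```

and letting $|x|$ tend to infinity gives (\[1.22\]).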
\[rem1.4\] Although we will not need this estimate in the sequel, let us mention that, arguing as for (\[1.22\]) but now using (\[1.4\]), (\[1.13\]) and (\[1.21\]), one finds that $$\label{1.23}
\operatorname{cap} (B(0,L)) \ge\biggl(\frac{c L}{\sqrt{d}} \biggr)^{d-2}
\qquad\mbox{for $L \ge d$} .$$
We now turn to the description of random interlacements. We refer to Section 1 of [@Szni07a] for details. We denote by $W$ the space of doubly infinite nearest-neighbor ${{\mathbb Z}}^d$-valued trajectories which tend to infinity at positive and negative infinite times and by $W^*$ the space of equivalence classes of trajectories in $W$ modulo time shift. We let $\pi^*$ stand for the canonical map from $W$ into $W^*$. We write ${\mathcal{W}}$ for the canonical $\sigma$-algebra on $W$ generated by the canonical coordinates $X_n$, $n \in{{\mathbb Z}}$, and ${\mathcal{W}}^* = \{A
\subseteq W^*; (\pi^*)^{-1}(A) \in{\mathcal{W}}\}$ for the largest $\sigma
$-algebra on $W^*$ for which $\pi^*\dvtx(W, {\mathcal{W}}) \rightarrow(W^*, {\mathcal{W}}^*)$ is measurable. The canonical probability space for random interlacements is now given as follows.
We consider the space of point measures on $W^* \times{{\mathbb R}}_+$: $$\begin{aligned}
\label{1.24}\hspace*{22pt}
\Omega &=& \biggl\{\omega= \sum_{i \ge0} \delta_{(w^*_i,u_i)},
\mbox{with $(w^*_i,u_i) \in W^* \times{{\mathbb R}}_+, i \ge0$, and}
\nonumber\\[-8pt]\\[-8pt]
&&\hspace*{47.27pt} \omega(W^*_K \times[0,u]) < \infty\mbox{ for any } K \subset\subset
{{\mathbb Z}}^d\mbox{ and }u \ge0 \biggr\},\nonumber\end{aligned}$$ where, for $K \subset\subset{{\mathbb Z}}^d$, $W^*_K \subseteq W^*$ stands for the set of trajectories modulo time shift that enter $K$, that is, $W^*_K = \pi^*(W_K)$, where $W_K$ is the subset of $W$ of trajectories that enter $K$.
We endow $\Omega$ with the $\sigma$-algebra ${\mathcal{A}}$ generated by the evaluation maps $\omega\rightarrow\omega(D)$, where $D$ runs over the $\sigma$-algebra ${\mathcal{W}}^* \times{\mathcal{B}}({{\mathbb R}}_+)$, and with the probability ${{\mathbb P}}$ on $(\Omega, {\mathcal{A}})$, which is the Poisson measure with intensity $\nu(d\omega^*) \,du$ giving finite mass to the sets $W^*_K \times[0,u]$ for $K \subset\subset{{\mathbb Z}}^d$, $u \ge0$, where $\nu$ is the unique $\sigma$-finite measure on $(W^*,{\mathcal{W}}^*)$ such that for any $K \subset\subset{{\mathbb Z}}^d$ (see Theorem 1.1 of [@Szni07a]), $$\label{1.25}
1_{W^*_K} \nu= \pi^* \circ Q_K ,$$ $Q_K$ here denoting the finite measure on $W^0_K$, the subset of $W_K$ of trajectories which are for the first time in $K$ at time $0$ and such that for $A,B \in{\mathcal{W}}_+$ \[we recall that ${\mathcal{W}}_+$ is defined above (\[1.2\])\] and $x \in{{\mathbb Z}}^d$, $$\begin{aligned}
\label{1.26}
&&Q_K [(X_{-n})_{n \ge0} \in A, X_0 = x, (X_n)_{n \ge0} \in
B]\nonumber\\[-8pt]\\[-8pt]
&&\qquad =
P_x [A | \widetilde{H}_K = \infty] e_K(x) P_x[B] .\nonumber\end{aligned}$$ For $K \subset\subset{{\mathbb Z}}^d$, $u \ge0$, one defines on $(\Omega,
{\mathcal{A}})$ the following random variable valued in the set of finite point measures on $(W_+, {\mathcal{W}}_+)$: $$\begin{aligned}
\label{1.27}
\mu_{K,u}(dw) = \sum_{i \ge0} \delta_{(w_i^*)^{K,+}} 1\{w_i^* \in
W^*_K, u_i \le u\} \nonumber\\[-8pt]\\[-8pt]
\eqntext{\mbox{for } \displaystyle\omega= \sum_{i \ge0} \delta
_{(w_i^*,u_i)} \in\Omega,}\end{aligned}$$ where, for $w^* \in W^*_K$, $(w^*)^{K,+}$ stands for the trajectory in $W_+$ which follows $w^*$ step-by-step from the first time it enters $K$.
When $0 \le u^\prime< u$, one defines $\mu_{K,u^\prime,u}(dw)$ in an analogous way to (\[1.27\]), replacing the condition $u_i \le u$ with $u^\prime< u_i \le u$ in the right-hand side of (\[1.27\]). Then, for $0 \le u^\prime< u$, $K \subset\subset{{\mathbb Z}}^d$, one finds that $$\label{1.28}
\begin{tabular}{p{260pt}}
$\mu_{K,u^\prime,u}$ and $\mu_{K,u^\prime}$ are independent
Poisson point processes
with respective intensity measures $(u-u^\prime) P_{e_K}$ and
$u^\prime P_{e_K}$.
\end{tabular}
$$ In addition, one has the identity $$\label{1.29}
\mu_{K,u} = \mu_{K,u^\prime} + \mu_{K,u^\prime, u} .$$ Given $\omega\in\Omega$, the interlacement at level $u \ge0$ is the following subset of ${{\mathbb Z}}^d$: $$\begin{aligned}
\label{1.30}
{\mathcal{I}}^u(\omega) & = & \bigcup_{u_i \le u} {\mathrm{range}}
(w_i^*) = \bigcup_{K \subset\subset{{\mathbb Z}}^d} \bigcup_{w
\in\operatorname{Supp} \mu_{K,u}(\omega)} w({{\mathbb N}}) \nonumber\\[-8pt]\\[-8pt]
\eqntext{\mbox{if } \displaystyle\omega= \sum_{i \ge0} \delta_{(w_i^*,u_i)} ,}\end{aligned}$$ where, for $w^* \in W^*$, range$(w^*) = w({{\mathbb N}})$ for any $w \in W$, with $\pi^*(w) = w^*$, and $\operatorname{Supp} \mu_{K,u}(\omega)$ refers to the support of the point measure $\mu_{K,u}(\omega)$. The vacant set at level $u$ is the complement of ${\mathcal{I}}^u(\omega)$: $$\label{1.31}
{\mathcal{V}}^u(\omega) = {{\mathbb Z}}^d \setminus{\mathcal{I}}^u(\omega)\qquad \mbox{for } \omega
\in\Omega, u \ge0 .$$ One also has (cf. (1.54) of [@Szni07a]) $$\label{1.32}\hspace*{28pt}
{\mathcal{I}}^u(\omega) \cap K = \bigcup_{w \in\operatorname{Supp} \mu
_{K^\prime,u}(\omega)} w({{\mathbb N}}) \cap K \qquad\mbox{for } K \subset
K^\prime\subset\subset{{\mathbb Z}}^d, u \ge0 .$$ From (\[1.28\]), one readily finds that, as mentioned in (\[0.1\]), $$\label{1.33}
{{\mathbb P}}[{\mathcal{V}}^u \supseteq K] = \exp\{- u \operatorname{cap}(K)\} \qquad\mbox{for
all } K \subset\subset{{\mathbb Z}}^d ,$$ an identity that characterizes the law $Q_u$ on $\{0,1\}^{{{\mathbb Z}}^d}$ of the indicator function of ${\mathcal{V}}^u(\omega)$; see also Remark 2.2(2) of [@Szni07a]. This concludes Section \[sec1\] and our brief review of the facts that will be used in the following sections.
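To spell out the computation behind (\[1.33\]) (a standard gloss, included for the reader's convenience): by (\[1.32\]), the event $\{{\mathcal{V}}^u \supseteq K\}$ means that $\omega(W^*_K \times[0,u]) = 0$, and this variable is Poisson distributed with parameter $u \nu(W^*_K)$; by (\[1.25\]) and (\[1.26\]),

```latex
\nu(W^*_K) = Q_K(W^0_K) = \sum_{x \in K} e_K(x) = \operatorname{cap}(K),
\qquad\mbox{so}\qquad
{{\mathbb P}}[{\mathcal{V}}^u \supseteq K]
= {{\mathbb P}}\bigl[\omega\bigl(W^*_K \times[0,u]\bigr) = 0\bigr]
= e^{-u \operatorname{cap}(K)} .
```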
From local to global: The renormalization scheme {#sec2}
================================================
In this section, we develop a renormalization scheme that follows, in its broad lines, the strategy of [@SidoSzni09b]. We introduce a geometrically increasing sequence of length scales $L_n$, $n \ge0$, and an increasing, but typically convergent, sequence of levels $u_n$, $n \ge0$. When the sequence $u_n$ is sufficiently increasing \[cf. (\[2.19\])\], we are able to propagate from scale to scale bounds on the key quantities $p_n(u_n)$ that appear in (\[2.17\]). Roughly speaking, these controls provide uniform upper bounds on the probability that in a box at scale $L_n$, $2^n$ “well spread” boxes at scale $L_0$ all witness certain crossing events at Euclidean distance of order $c L_0$ in the vacant set at level $u_n$. Interactions are handled by the sprinkling technique originally introduced in Section 3 of [@Szni07a]. The renormalization scheme enables us to transform local estimates on the existence of vacant crossings at scale $L_0$ in the vacant set at level $u_0$ into global estimates on crossings at arbitrary scales in the vacant set at level $u_\infty= \lim u_n$. The difficulty we encounter in the implementation of the scheme stems from the fact that we want both $u_0$ and $u_\infty$ to be “slightly above” the critical value $u_*$; see (\[4.7\]) and (\[0.3\]). However, the local controls on vacant crossings at level $u_0$, which we introduce into the renormalization scheme and develop in the next section, require $L_0$ to be rather small, that is, of order $d$. We are then forced to keep a tight control on the estimates we derive when $d$ goes to infinity. The Green function and entrance probability estimates from Lemma \[lem1.1\], together with the bounds on Harnack constants in Euclidean balls from Proposition \[prop1.3\], play a pivotal role in this scheme. 
The fact that the $\ell^\infty$- and Euclidean distances behave very differently for large $d$ \[see (\[1.1\])\] also forces upon us some modifications of the geometric constructions in [@SidoSzni09b]; see, for instance, (2.1) and (2.26). The main results of this section are Proposition \[prop2.1\], which contains the main induction step, and Proposition \[prop2.3\], which encapsulates the estimates we will use in Section \[sec4\].
We consider the length scales $$\label{2.1}
L_0 \ge d,\qquad \widehat{L}_0 = \bigl(\sqrt{d} + R\bigr) L_0\qquad \mbox{with $R \ge1$}$$ as well as $$\begin{aligned}
\label{2.2}
L_n = \ell_0^n L_0\hspace*{140pt}\nonumber\\[2pt]\\[-22pt]
\eqntext{\begin{tabular}{p{238pt}}
for $n \ge1$, where $\ell_0 \ge
1000 \dfrac{c_0}{c_1} \bigl(\sqrt{d} + R\bigr)$ is an integer
multiple of $100$ (we recall that $c_0 \ge c_1$; cf. Lemma
\ref{lem1.1}).\hspace*{-18pt}
\end{tabular}}\end{aligned}$$ We organize ${{\mathbb Z}}^d$ in a hierarchical way, with $L_0$ being the finest scale and $L_1 < L_2 < \cdots$ being coarser and coarser scales. Crossing events at the finest scale will involve the length scale $\widehat
{L}_0$. We introduce the following set of labels of boxes at level $n
\ge0$: $$\label{2.3}
I_n = \{n\} \times{{\mathbb Z}}^d .$$ To each $m = (n,i) \in I_n$, $n \ge0$, we attach the box $$\label{2.4}
C_m = \bigl(i L_n + [0,L_n)^d\bigr) \cap{{\mathbb Z}}^d .$$ In addition, when $n \ge1$, we define $$\label{2.5}
\widetilde{C}_m = \bigcup_{m^\prime\in I_n, d_\infty(C_{m^\prime
}, C_m) \le1} C_{m^\prime} ( \mbox{$\supseteq$} C_m) .$$ On the other hand, when $n = 0$ and $m = (0,i) \in I_0$, we define instead $$\label{2.6}
\widetilde{C}_m = B(i L_0, \widehat{L}_0) \stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{1.1}),
(\ref{2.1})}}}{\supseteq} \bigcup_{x \in C_m} B(x,R L_0) ( \mbox{$\supseteq$}
C_m) .$$ The above definitions slightly differ from (2.3) in [@SidoSzni09b] due to the special treatment of the scale $n=0$. It is relevant here to use Euclidean balls and, thanks to (\[1.6\]) of Lemma \[lem1.1\], to have a good control on the entrance probability of a simple random walk in $\widetilde{C}_m$. The radius of these balls has to be chosen sufficiently large so that we can show that crossing events at the bottom scale, from $C_m$ to $\partial_{\mathrm{int}} \widetilde{C}_m$, are unlikely (this will be done in the next section).
We then write $S_m = \partial_{\mathrm{int}} C_m $ and $\widetilde{S}_m =
\partial_{\mathrm{int}} \widetilde{C}_m$ for $m \in I_n$, $n \ge0$. Given $m
\in
I_n$ with $n \ge1$, we consider ${\mathcal{H}}_1(m)$, ${\mathcal{H}}_2(m) \subseteq
I_{n-1}$ defined by $$\begin{aligned}
\label{2.7}
{\mathcal{H}}_1(m) & = & \{\overline{m} \in I_{n-1}; C_{\overline{m}} \subseteq C_m
\mbox{ and } C_{\overline{m}} \cap S_m \not= \varnothing\},
\nonumber\\[-8pt]\\[-8pt]
{\mathcal{H}}_2(m) & = & \biggl\{\overline{m} \in I_{n-1}; C_{\overline{m}} \cap\biggl\{ z
\in{{\mathbb Z}}^d; d_\infty(z,C_m) = \frac{L_n}{2}
\biggr\}\not= \varnothing
\biggr\} .\nonumber\end{aligned}$$ We thus see that for $n \ge1$, $m \in I_n$, one has (see also Figure \[fig1\]): $$\begin{aligned}
\label{2.8}\qquad
&&\overline{m}_1 \in {\mathcal{H}}_1(m),\qquad
\overline{m}_2 \in {\mathcal{H}}_2(m)\nonumber\\[-8pt]\\[-8pt]
&&\mbox{implies that }
\widetilde{C}_{\overline{m}_1} \cap\widetilde{C}_{\overline{m}_2} =
\varnothing\mbox{ and }
\widetilde{C}_{\overline{m}_1} \cup\widetilde{C}_{\overline{m}_2}
\subseteq\widetilde{C}_m
\nonumber
$$ \[in the case $n=1$, we use the lower bound on $\ell_0$ in (\[2.2\]) as well as (\[1.1\])\].
![An illustration of the boxes $C_{\overline{m}_i}$ and balls $\widetilde
{C}_{\overline{m}_i}$, $i=1,2$, when $m$ belongs to $I_1$.[]{data-label="fig1"}](545f01.eps)
Then, to each $m \in I_n$, $n \ge0$, we associate a collection $\Lambda_m$ of “binary trees of depth $n$.” More precisely, we define $\Lambda_m$ to be the collection of subsets ${\mathcal{T}}$ of $\bigcup
_{0 \le k \le n} I_k$ such that, writing ${\mathcal{T}}^k = {\mathcal{T}}\cap I_k$, we have $$\begin{aligned}
\label{2.9}\hspace*{29pt}
&{\mathcal{T}}^n = \{m\},&
\\
\label{2.10}
&
\begin{tabular}{p{310pt}}
any $m^\prime\in{\mathcal{T}}^k$, $1 \le k \le n$, has two
``descendants,'' $\overline{m}_i(m^\prime) \in{\mathcal{H}}_i(m^\prime)$, \mbox{$i = 1,2$},
such that ${\mathcal{T}}^{k-1} = \bigcup_{m^\prime\in{\mathcal{T}}^k} \{
\overline{m}_1 (m^\prime), \overline{m}_2(m^\prime)\}$.
\end{tabular}
&
$$ For each ${\mathcal{T}}\in\Lambda_m$ and $m^\prime\in{\mathcal{T}}$, one can then define the subtree of “descendants of $m^\prime$ in ${\mathcal{T}}$” via $$\label{2.11}
{\mathcal{T}}_{m^\prime} = \{m^{\prime\prime} \in{\mathcal{T}}; \widetilde{C}_{m^{\prime
\prime}} \subseteq\widetilde{C}_{m^\prime}\} (\mbox{$\in$}\Lambda_{m^\prime}) .$$ Given $1 \le k \le n$, $m^\prime\in{\mathcal{T}}^k$, one thus has the following partition of ${\mathcal{T}}_{m^\prime}$: $$\label{2.12}
{\mathcal{T}}_{m^\prime} = \{m^\prime\} \cup{\mathcal{T}}_{\overline{m}_1(m^\prime)} \cup
{\mathcal{T}}_{\overline{m}_2(m^\prime)} .$$ In addition, we have the following rough bound on the collection $\Lambda_m$ of binary trees attached to $m \in I_n$: $$\label{2.13}\hspace*{28pt}
|\Lambda_m| \le(c_4 \ell_0)^{2(d-1)} (c_4 \ell_0)^{4(d-1)}
\cdots(c_4 \ell_0)^{2^n(d-1)} = (c_4 \ell_0)^{2(d-1) (2^n-1)} ,$$ where we have used the rough bound for $m^\prime\in I_k$, $1 \le k \le
n$, and, for $i = 1,2$, $$|{\mathcal{H}}_i(m^\prime)| \le2d \biggl(c \frac
{L_k}{L_{k-1}} \biggr)^{d-1} = 2d (c \ell_0)^{d-1} \le(c_4 \ell
_0)^{d-1} \qquad\mbox{for some $c_4 > 1$} .$$ We then introduce, for $u \ge0$, $m \in I_n$, with $n \ge0$, the event $$\label{2.14}
A^u_m = \bigl\{C_m \stackrel{{\mathcal{V}}^u}{\longleftrightarrow} \widetilde
{S}_m \bigr\} ,$$ where the expression in the right-hand side of (\[2.14\]) denotes the collection of $\omega$ in $\Omega$ such that there is a path between $C_m$ and $\widetilde{S}_m$ contained in ${\mathcal{V}}^u$. In an analogous fashion to Lemma 2.1 of [@SidoSzni09b], $A^u_m$ “cascades down to the bottom scale” because any path originating in $C_m$ and ending in $\widetilde{S}_m$ must go through some $C_{\overline{m}_1}$, $\overline
{m}_1 \in{\mathcal{H}}_1(m)$, reach $\widetilde{S}_{\overline{m}_1}$ and then go through some $C_{\overline
{m}_2}, \overline{m}_2\in{\mathcal{H}}_2(m)$, and reach $\widetilde{S}_{\overline
{m}_2}$. Thus, similarly to Lemma 2.1 of [@SidoSzni09b], we find that defining for $u \ge0$, $n \ge0$, $m \in I_n$ and ${\mathcal{T}}\in\Lambda_m$ $$\label{2.15}
A^u_{\mathcal{T}}= \bigcap_{m^\prime\in{\mathcal{T}}^0} A^u_{m^\prime}
\qquad\mbox{(recall that ${\mathcal{T}}^0 = {\mathcal{T}}\cap I_0$)} ,$$ one has the inclusion $$\label{2.16}
A^u_m \subseteq\bigcup_{{\mathcal{T}}\in\Lambda_m} A^u_{{\mathcal{T}}} .$$ We then introduce the key quantity $$\label{2.17}
p_n(u) = \sup_{{\mathcal{T}}\in\Lambda_m} {{\mathbb P}}[A^u_{\mathcal{T}}],\qquad u \ge0,
n \ge0 \mbox{ with $m \in I_n$ arbitrary} ,$$ which is well defined due to translation invariance, and find that $$\label{2.18}
{{\mathbb P}}[A^u_m] \le|\Lambda_m| p_n(u) \qquad\mbox{for $u \ge0, n \ge0$} .$$ The heart of the matter is now to find a recurrence relation bounding $p_{n+1}(u_{n+1})$ in terms of $p_n(u_n)$ for suitably increasing sequences $u_n$ (we are actually interested in increasing, but convergent, sequences). We recall that $R$ appears in (\[2.1\]).
\[prop2.1\] There exist positive constants $c_5,c_6,c$ such that if $\ell_0 \ge
c(\sqrt{d} + R)$, then, for any increasing sequences $u_n$, $n \ge0$, in $(0,\infty)$ and nondecreasing sequences $r_n$, $n \ge0$, of positive integers such that $$\label{2.19}
u_{n+1} \ge u_n \biggl(1 + \frac{\widehat{L}_0}{L_0}
\biggl(\frac{c_5}{\ell_0} \biggr)^{(n+1) (d-2)}
\biggr)^{r_n+1} \qquad\mbox{for all $n \ge0$} ,$$ one has, for all $n \ge0$, $$\begin{aligned}
\label{2.20}
p_{n+1}(u_{n+1}) &\le& p_n(u_{n+1})\biggl(p_n(u_n) + u_n \biggl(
\frac{\widehat{L}_0}{\sqrt{d}} \biggr)^{(d-2)}\nonumber\\[-8pt]\\[-8pt]
&&\hspace*{45.17pt}{}\times \biggl(4^n \biggl(c_6
\frac{\widehat{L}_0}{L_0} \biggr)^{(d-2)} \ell
_0^{-(n+1)(d-2)} \biggr)^{r_n} \biggr)\nonumber\end{aligned}$$ \[note that $p_n(\cdot)$ is nonincreasing so that $p_n(u_{n+1}) \le p_n(u_n)$\].
The proof of Proposition \[prop2.1\] is an adaptation of the proof of Proposition 2.2 of [@SidoSzni09b]; we sketch it below, highlighting the necessary modifications.
One considers some $n \ge0$, $m \in I_{n+1}$, ${\mathcal{T}}\in\Lambda_m$ and writes $\overline{m}_1, \overline{m}_2$ for the unique elements of ${\mathcal{H}}_1(m)$, ${\mathcal{H}}_2(m)$ in ${\mathcal{T}}^n$ ($={\mathcal{T}}\cap I_n$). One also writes $u^\prime$ and $u$, with $0 < u^\prime< u$, in place of $u_n$ and $u_{n+1}$.
If $\overline{{\mathcal{T}}} \in\Lambda_{\overline{m}}$ with $\overline{m} \in
I_n$, then one defines, for $\mu$, a point process on $W_+$ defined on $\Omega$ (i.e., a measurable map from $\Omega$ into the space of point measures on $W_+$): $$\begin{aligned}
\label{2.21}\qquad
A_{\overline{{\mathcal{T}}}}(\mu) &=& \bigcap_{m^\prime\in\overline{{\mathcal{T}}} \cap
I_0} \biggl\{ \omega\in\Omega;
\mbox{there is a path in }
\nonumber\\[-8pt]\\[-8pt]
&&{}\hspace*{39.7pt}\widetilde{C}_{m^\prime} \Bigm\backslash\biggl(\bigcup
_{w \in\operatorname{Supp} \mu(\omega)} w({{\mathbb N}}) \biggr)\mbox{ joining }C_{m^\prime}\mbox{ with }
\widetilde{S}_{m^\prime} \biggr\}.\nonumber\end{aligned}$$ As in (\[2.19\]) of [@SidoSzni09b], using independence, we have the bound $$\label{2.22}
{{\mathbb P}}[A^u_{{\mathcal{T}}}] \le p_n(u) {{\mathbb P}}[A_{\overline{{\mathcal{T}}}_2} (\mu_{2,2})] ,$$ where $\overline{{\mathcal{T}}}_2$ stands for ${\mathcal{T}}_{\overline{m}_2}$ and we have decomposed the point process $\mu_{V,u}$ \[see (\[1.27\])\], where $$\begin{aligned}
\label{2.23}
\hspace*{37.4pt}\hspace*{67.4pt} V & = & \widehat{C}_1 \cup\widehat{C}_2
\\
\label{2.24}
&&\mbox{with }\widehat{C}_i = \bigcup_{m^\prime\in\overline{{\mathcal{T}}}_i \cap I_0}
\widetilde{C}_{m^\prime} \subseteq\widetilde{C}_{\overline{m}_i}
\qquad\mbox{for $i=1,2$}\end{aligned}$$ (i.e., a union of $2^n$ pairwise disjoint Euclidean balls of radius $\widehat{L}_0$), into a sum of independent Poisson processes via $$\label{2.25}
\mu_{V,u} = \mu_{1,1} + \mu_{1,2} + \mu_{2,1} + \mu_{2,2} ,$$ where, for $i \not= j$ in $\{1,2\}$, we have set $$\mu_{i,j} = 1 \{X_0 \in\widehat{C}_i, H_{\widehat{C}_j} < \infty\} \mu
_{V,u} \quad\mbox{and}\quad \mu_{i,i} = 1\{X_0 \in\widehat{C}_i, H_{\widehat
{C}_j} = \infty\} \mu_{V,u} .$$ One introduces similar decompositions for $\mu_{V,u^\prime}$ in terms of analogously defined point processes $\mu^\prime_{i,j}$, $1 \le i,j
\le2$, and for $\mu_{V,u^\prime,u}$ in terms of $\mu^*_{i,j}$, $1
\le i,j \le2$.
The heart of the matter is to bound ${{\mathbb P}}[A_{\overline{{\mathcal{T}}}_2}(\mu_{2,2})]
= {{\mathbb P}}[A_{\overline{{\mathcal{T}}}_2}(\mu^\prime_{2,2} + \mu^*_{2,2})]$, which appears in the right-hand side of (\[2.22\]), in terms of $p_n(u^\prime)$ when $u-u^\prime$ is not too small. For this purpose, we employ the sprinkling technique of [@Szni07a] and, loosely speaking, establish that $\mu^*_{2,2}$ dominates “up to small corrections” the contribution of $\mu^\prime_{2,1} + \mu^\prime
_{1,2}$ in ${{\mathbb P}}[A^{u^\prime}_{\overline{{\mathcal{T}}}_2}] = {{\mathbb P}}[A_{\overline{{\mathcal{T}}}_2}(\mu^\prime_{2,2} + \mu^\prime_{2,1} + \mu^\prime_{1,2})]$.
With this in mind, we define a neighborhood $U$ of $\widetilde
{C}_{\overline
{m}_2}$ (and, in contrast to (2.20) of [@SidoSzni09b], we do not define $U$ as the $\ell^\infty$-neighborhood of $\widetilde
{C}_{\overline{m}_2}$ of size $\frac{L_{n+1}}{10}$). Instead, if $\overline{m}_2 =
(n,\overline{i}_2)
\in I_n$ \[see (\[2.3\])\], we define $U$ as the following Euclidean ball (which is much smaller than the corresponding $\ell^\infty$-ball of same radius): $$\label{2.26}
\quad U = B \biggl(\overline{i}_2 L_n, \frac{L_{n+1}}{10} \biggr)
\supseteq\widetilde{C}_{\overline{m}_2}\qquad \mbox{using (\ref{2.1}), (\ref
{2.2}), (\ref{2.5}), (\ref{2.6})} .$$ We then have the following important controls on Euclidean distances: $$\begin{aligned}
\label{2.27}
d(\partial U, \widehat{C}_2) & \ge & \frac{L_{n+1}}{10} -
3 \sqrt{d} L_n \stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{2.1}), (\ref{2.2})}}}{>}
\frac{L_{n+1}}{20} \qquad\mbox{when $n \ge1$}
\nonumber\\[-8pt]\\[-8pt]
& \ge & \frac{L_{n+1}}{10} - \widehat{L}_0 \stackrel
{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{2.1}), (\ref{2.2})}}}{>} \frac{L_{n+1}}{20}
\qquad\mbox{when $n =
0$}, \nonumber\end{aligned}$$ and we have used in the first line the fact that $\widehat{C}_2
\subseteq
\widetilde{C}_{\overline{m}_2}$ when $n \ge1$; see (\[2.8\]). Using similar considerations, we find that $$\begin{aligned}
\label{2.28}\hspace*{28pt}
d(\partial U, \widehat{C}_1) & \ge & \frac{L_{n+1}}{2} -
L_n - L_n - \frac{L_{n+1}}{10} - 1 >
\frac{L_{n+1}}{20} \qquad\mbox{when $n \ge1$}
\nonumber\\[-8pt]\\[-8pt]
& \ge & \frac{L_{n+1}}{2} - \widehat{L}_0 - L_0 -
\frac{L_{n+1}}{10} - 1 > \frac
{L_{n+1}}{20} \qquad\mbox{when $n=
0$} .\nonumber\end{aligned}$$ Since $V = \widehat{C}_1 \cup\widehat{C}_2$, we have established that $$\label{2.29}
d(\partial U, V) > \frac{L_{n+1}}{20} .$$ We then introduce the successive times of return to $\widehat{C}_2$ and departure from $U$: $$\begin{aligned}
\label{2.30}\hspace*{38pt}
R_1 & = & H_{\widehat{C}_2},\qquad D_1 = T_U \circ\theta_{R_1} + R_1 \quad\mbox
{and}\quad\mbox{for $k \ge1$, by induction,}
\\
R_{k+1} & = & R_1 \circ\theta_{D_k} + D_k,\qquad D_{k+1} = D_1 \circ\theta
_{D_k} +
D_k, \nonumber\end{aligned}$$ so that $0 \le R_1 \le D_1 \le\cdots\le R_k \le D_k \le\cdots\le
\infty$.
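The alternating structure of these stopping times can be illustrated with a short simulation. The sets below are illustrative stand-ins for $\widehat{C}_2$ and $U$ (concentric Euclidean balls), not the paper's constructions; the scan reproduces the inductive definition (\[2.30\]) along a sampled trajectory.

```python
import random

random.seed(0)

# Illustrative stand-ins (not the paper's sets): C2 and U are concentric balls
def in_C2(x):
    return sum(c * c for c in x) <= 2 ** 2

def in_U(x):
    return sum(c * c for c in x) <= 10 ** 2

# sample a finite simple-random-walk trajectory in Z^2, started outside U
pos, traj = (12, 0), [(12, 0)]
for _ in range(20000):
    i, s = random.randrange(2), random.choice((-1, 1))
    pos = tuple(c + (s if j == i else 0) for j, c in enumerate(pos))
    traj.append(pos)

# scan for the alternating times of (2.30): returns R_k to C2, departures D_k from U
times, looking_for_R = [], True  # times = [R_1, D_1, R_2, D_2, ...]
for t, x in enumerate(traj):
    if looking_for_R and in_C2(x):
        times.append(t)
        looking_for_R = False
    elif not looking_for_R and not in_U(x):
        times.append(t)
        looking_for_R = True

# by construction 0 <= R_1 <= D_1 <= R_2 <= ... holds along the trajectory
assert times == sorted(times)
```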
Letting $r \ge1$ play the role of $r_n$ in (\[2.19\]), (\[2.20\]), we further introduce the decompositions $$\begin{aligned}
\label{2.31}
\mu^\prime_{2,1} & = & \sum_{1 \le\ell\le r} \rho^\ell_{2,1} +
\overline{\rho}_{2,1},\qquad \mu^\prime_{1,2} = \sum_{1 \le\ell\le r}
\rho^\ell_{1,2} + \overline{\rho}_{1,2} ,
\nonumber\\[-8pt]\\[-8pt]
\mu^*_{2,2} & = & \sum_{1 \le\ell\le r} \rho^\ell_{2,2} + \overline
{\rho}_{2,2}
,\nonumber\end{aligned}$$ where, for $i \not= j$ in $\{1,2\}$ and $\ell\ge1$, we have set $$\begin{aligned}
\rho^\ell_{i,j} & = & 1\{R_\ell< D_\ell< R_{\ell+ 1} = \infty\}
\mu^\prime_{i,j},\\
\overline{\rho}_{i,j} & = & 1\{R_{r + 1} < \infty\} \mu
^\prime_{i,j},
\\
\rho^\ell_{2,2} & = & 1\{R_\ell< D_\ell< R_{\ell+ 1} = \infty\}
\mu^*_{2,2}\end{aligned}$$ and $$\overline{\rho}_{2,2} = 1\{R_{r + 1} < \infty
\} \mu^*_{2,2} .$$ The point processes $\overline{\rho}_{1,2}$ and $\overline{\rho}_{2,2}$ play the role of correction terms, eventually responsible for the last term in the right-hand side of (\[2.20\]). The bounds we derive on the intensity measures $\overline{\xi}_{2,1}$ and $\overline{\xi}_{1,2}$ of $\overline
{\rho}_{2,1}$ and $\overline{\rho}_{1,2}$ depart from (2.26), (2.27) in [@SidoSzni09b]. We write $$\begin{aligned}
\label{2.32}
\overline{\xi}_{2,1}(W_+) &=& u^\prime
P_{e_V} [X_0 \in\widehat{C}_2, H_{\widehat{C}_1} < \infty, R_{r+1} <
\infty]
\nonumber\\
& \stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{1.19})}}}{\le} &
u^\prime\operatorname{cap}(\widehat{C}_2) \sup_{x \in\widehat{C}_2} P_x
[R_{r+1} < \infty]
\\
& \stackrel{\mathrm{strong}\ \mathrm{Markov}}{\le} &
u^\prime\operatorname{cap}(\widehat{C}_2) \Bigl( \sup_{x \in\partial U}
P_x[H_{\widehat{C}_2} < \infty]\Bigr)^r .\nonumber
$$ Combining (\[1.6\]) and (\[2.29\]), we find that $$\label{2.33}
\sup_{x \in\partial U} P_x [H_{\widehat{C}_2} < \infty] \le
2^n \biggl( c \frac{\widehat{L}_0}{L_{n+1}}
\biggr)^{(d-2)} \stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{2.1}), (\ref{2.2})}}}{=} 2^n \biggl(c
\frac{\widehat{L}_0}{L_0} \ell_0^{-(n+1)} \biggr)^{(d-2)} .\hspace*{-35pt}$$ Moreover, from (\[1.20\]), (\[1.22\]), we have $$\label{2.34}
\operatorname{cap}(\widehat{C}_2) \le2^n \biggl(c \frac{\widehat
{L}_0}{\sqrt{d}} \biggr)^{(d-2)}$$ and hence $$\label{2.35}
\overline{\xi}_{2,1}(W_+) \le u^\prime\biggl(\frac{\widehat
{L}_0}{\sqrt{d}} \biggr)^{(d-2)} \biggl(4^n \biggl(c
\frac{\widehat{L}_0}{L_0} \biggr)^{(d-2)} \ell_0^{-(n+1)(d-2)} \biggr)^r
.$$ In a similar fashion, we also obtain $$\label{2.36}
\overline{\xi}_{1,2}(W_+) \le u^\prime\biggl(\frac{\widehat
{L}_0}{\sqrt{d}} \biggr)^{(d-2)} \biggl(4^n \biggl(c
\frac{\widehat{L}_0}{L_0} \biggr)^{(d-2)} \ell_0^{-(n+1)(d-2)} \biggr)^r
.$$ The next objective is to show that the trace on $\widehat{C}_2$ of paths in the support of $\sum_{1 \le\ell\le r} \rho^\ell_{2,1}$ and $\sum
_{1 \le\ell\le r} \rho^\ell_{1,2}$ is stochastically dominated by the corresponding trace on $\widehat{C}_2$ of paths in the support of $\mu
^*_{2,2}$ when $u-u^\prime$ is not too small. An important step is the following lemma.
\[lem2.2\] For $\ell_0 \ge c(\sqrt{d} + R)$, all $n \ge0$, $m \in I_{n+1}$, ${\mathcal{T}}\in\Lambda_m$, $x\in\partial U$ and $y \in\partial_{\mathrm{int}}
\widehat{C}_2$, one has $$\begin{aligned}
\label{2.37}
&& P_x[H_{\widehat{C}_1} < R_1 < \infty, X_{R_1} = y]
\nonumber\\[-8pt]\\[-8pt]
&&\qquad\le\biggl(
\frac{\widehat{L}_0}{L_0} \biggr)^{(d-2)} \biggl(
\frac{c}{\ell_0} \biggr)^{(d-2)(n+1)} P_x[H_{\widehat{C}_1} > R_1,
X_{R_1} = y] ,\nonumber
\\
\label{2.38}
&& P_x[H_{\widehat{C}_1} < \infty, R_1 = \infty] \nonumber\\[-8pt]\\[-8pt]
&&\qquad\le\biggl(
\frac{\widehat{L}_0}{L_0} \biggr)^{(d-2)} \biggl(
\frac{c}{\ell_0} \biggr)^{(d-2)(n+1)} P_x[R_1 = \infty= H_{\widehat
{C}_1}] .\nonumber\end{aligned}$$
The proof of (\[2.37\]) closely follows the proof of (\[2.30\]) in Lemma 2.3 of [@SidoSzni09b]. The difference lies in the control of Harnack constants. Indeed, we first observe that the function $h\dvtx z
\rightarrow P_z[R_1 < \infty$, $X_{R_1} = y] = P_z[H_{\widehat{C}_2} <
\infty$, $X_{H_{\widehat{C}_2}} = y]$ is a nonnegative function, harmonic in $\widehat
{C}_2^c$. By (\[2.29\]), it is therefore harmonic on any $B(z_0,
\frac{L_{n+1}}{20})$ with $z_0 \in\partial U$. One can then find $c$ such that for any $\widetilde{z}, \widetilde{z} ^\prime$ in $\partial
U$, there exists a sequence $z_i$, $0 \le i \le m$, in $\partial U$ with $m \le
c$, $z_0 = \widetilde{z}$, $z_m = \widetilde{z} ^\prime$ and $|z_{i+1}
- z_i|
\le\frac{L_{n+1}}{100 c_3}$, in the notation of Proposition \[prop1.3\]. Indeed, one simply projects $\widetilde{z}, \widetilde{z}
^\prime$ onto the Euclidean sphere in ${{\mathbb R}}^d$ of radius $\frac{L_{n+1}}{10}$ with center $\overline{i}_2 L_n$, the “center” of $U$ \[see (\[2.26\])\] and uses the great circle joining these two points to construct the sequence.
Using (\[1.15\]) and a standard chaining argument, it follows that $$\label{2.39}
\sup_{z \in\partial U} P_z[R_1 < \infty, X_{R_1} = y] \le
c^d \inf_{z \in\partial U} P_z[R_1 < \infty, X_{R_1} = y] .$$ The proof of (\[2.37\]) then proceeds as in Lemma 2.3 of [@SidoSzni09b] \[and we use a similar bound to (\[2.33\]) above, where $\widehat{C}_1$ replaces $\widehat{C}_2$\].
As for (\[2.38\]), we first note that for $x \in\partial U$, due to (\[1.6\]) and (\[2.29\]), we have $$\begin{aligned}
\label{2.40}
&&\inf_{x \in\partial U} P_x [R_1 = \infty, H_{\widehat{C}_1}
= \infty] \nonumber\\
&&\qquad\ge1 - 2 \cdot2^n \biggl(c \frac{\widehat
{L}_0}{L_{n+1}} \biggr)^{(d-2)} \stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{2.2})}}}{\ge} 1 -
\biggl(\frac{c}{\ell_0} \frac{\widehat
{L}_0}{L_0} \biggr)^{(d-2)} \\
&&\qquad\hspace*{-4.06pt}\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{2.1})}}}{\ge}
\frac{1}{2},\nonumber\end{aligned}$$ when $\ell_0 \ge c^\prime(\sqrt{d} + R)$.
On the other hand, a similar calculation leads to $$\begin{aligned}
\label{2.41}
P_x[H_{\widehat{C}_1} < \infty, R_1 = \infty] & \le & 2^n \biggl(c
\frac{\widehat{L}_{0}}{L_{n+1}} \biggr)^{(d-2)} \nonumber\\[-8pt]\\[-8pt]
&\le&\biggl(
\frac{\widehat{L}_0}{L_0} \biggr)^{(d-2)} \biggl(
\frac{c}{\ell_0} \biggr)^{(d-2)(n+1)}\nonumber\end{aligned}$$ and (\[2.38\]) follows.
The proof of Proposition \[prop2.1\] then proceeds like the proof of Proposition 2.3 of [@SidoSzni09b] and yields that, under (\[2.19\]) (with $u^\prime$ in place of $u_n$ and $u$ in place of $u_{n+1}$), $$\begin{aligned}
\label{2.42}\hspace*{12pt}
&&{{\mathbb P}}[A_{\overline{{\mathcal{T}}}_2} (\mu_{2,2})] \nonumber\\
&&\qquad\hspace*{18.21pt} \le
p_n(u^\prime) + \overline{\xi}_{2,1} (W_+) + \overline{\xi}_{1,2} (W_+)
\\
&&\qquad\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{2.35}), (\ref{2.36})}}}{\le}
p_n(u^\prime) + 2u^\prime\biggl(\frac{\widehat
{L}_0}{\sqrt{d}} \biggr)^{(d-2)} \biggl(4^n \biggl( c
\frac{\widehat{L}_0}{L_0} \biggr)^{(d-2)} \ell_0^{-(n+1)(d-2)} \biggr)^r .\nonumber
$$ Inserting this inequality into (\[2.22\]), we thus infer (\[2.20\]) under the assumption (\[2.19\]).
We assume from now on that $\ell_0 \ge c(\sqrt{d} + R)$, with $c > 2
c_5$ sufficiently large so that Proposition \[prop2.1\] holds. We then choose the sequences $u_n$, $n \ge0$ and $r_n$, $n \ge0$, as follows: $$\begin{aligned}
\label{2.43}
u_n & = & u_0 \exp\biggl\{ \biggl(\frac{\widehat
{L}_0}{L_0} \biggr)^{(d-2)} \sum_{0 \le k < n} (r_k + 1) \biggl(
\frac{c_5}{\ell_0} \biggr)^{(k +1)(d-2)} \biggr\},
\\
\label{2.44}
r_n & = & r_0 2^n ,\end{aligned}$$ where $u_0 > 0$ and $r_0$ is a positive integer. The choice (\[2.43\]) ensures that (\[2.19\]) is fulfilled and the increasing sequence $u_n$ has the finite limit $$\label{2.45}
u_\infty= u_0 \exp\biggl\{ \biggl(\frac{c_5 \widehat{L}_0}{\ell
_0 L_0} \biggr)^{(d-2)} \biggl(\frac{r_0}{1 - 2 (c_5 \ell
_0^{-1})^{(d-2)}} + \frac{1}{1 - (c_5 \ell
_0^{-1})^{(d-2)}} \biggr) \biggr\}.\hspace*{-35pt}$$ The next proposition reduces the task of bounding $p_n(u_n)$ to a set of conditions which enable us to initiate the induction procedure suggested by Proposition \[prop2.1\]. We view $u_\infty$ as a function of $u_0$, $r_0$, $\ell_0$, $R$ \[we introduced $R$ in (\[2.1\])\].
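The convergence claim behind (\[2.45\]) rests on the elementary geometric-series identity $\sum_{k \ge0} (r_0 2^k + 1) x^{k+1} = x (\frac{r_0}{1-2x} + \frac{1}{1-x})$ for $2x < 1$, with $x = (c_5/\ell_0)^{(d-2)}$. A quick numerical check, with illustrative placeholder values for the parameters (not the paper's constants):

```python
import math

# Illustrative values only (c5, ell0, L0, Lhat0 are placeholders, not the
# constants of the paper); we only need 2*(c5/ell0)**(d-2) < 1.
d, c5, ell0, L0 = 7, 2.0, 100.0, 7.0
Lhat0 = (math.sqrt(d) + 1.0) * L0      # \widehat{L}_0 with R = 1
u0, r0 = 1.0, 3

x = (c5 / ell0) ** (d - 2)             # common ratio entering (2.43)

# limit of the exponent in (2.43), with r_k = r0 * 2^k as in (2.44)
expo = (Lhat0 / L0) ** (d - 2) * sum(
    (r0 * 2 ** k + 1) * x ** (k + 1) for k in range(200)
)
u_limit = u0 * math.exp(expo)

# closed form (2.45)
u_inf = u0 * math.exp(
    (c5 * Lhat0 / (ell0 * L0)) ** (d - 2)
    * (r0 / (1 - 2 * x) + 1 / (1 - x))
)
assert abs(u_limit - u_inf) < 1e-12 * u_inf
```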
\[prop2.3\] There exists a positive constant $c$ such that when $u_0 > 0$, $r_0 \ge
1$, $\ell_0 \ge c(\sqrt{d} + R)$, $L_0 \ge d$, $\widehat{L}_0 = (\sqrt
{d} + R) L_0$, $R \ge1$ and $K_0 > \log2$ satisfy $$\begin{aligned}
\label{2.46}
u_\infty\biggl(\frac{\widehat{L}_0}{\sqrt{d}}
\biggr)^{d-2} \vee e^{K_0} &\le& \biggl(\frac{\ell_0
L_0}{c_6 \widehat{L}_0} \biggr)^{{r_0}/{2} (d-2)},
\\ \label{2.47}
p_0 (u_0) &\le& e^{-K_0},\end{aligned}$$ then $$\label{2.48}
p_n(u_n) \le e^{-(K_0 - \log2)2^n} \qquad\mbox{for each $n \ge0$} .$$
The argument is similar to the proof of Proposition 2.5 of [@SidoSzni09b]. We assume, as mentioned before, that $c > 2c_5$ is large enough so that Proposition \[prop2.1\] applies. Condition (\[2.46\]) implies that $c_6 \widehat{L}_0 \le\ell_0 L_0$ ($= L_1$). Thus, the last term in the right-hand side of (\[2.20\]) satisfies $$\begin{aligned}
\label{2.49}
&&
u_n \biggl(\frac{\widehat{L}_0}{\sqrt{d}} \biggr)^{(d-2)}
\biggl(4^n \biggl(c_6 \frac{\widehat{L}_0}{L_0}
\biggr)^{(d-2)} \ell_0^{-(n+1)(d-2)} \biggr)^{r_n} \nonumber\\
&&\hspace*{17.6pt}\qquad\le
u_\infty\biggl(\frac{\widehat{L}_0}{\sqrt{d}}
\biggr)^{(d-2)}\biggl (c_6 \frac{\widehat{L}_0}{ \ell_0
L_0} \biggr)^{(d-2)r_n} \biggl( \frac{4}{\ell
_0^{d-2}} \biggr)^{nr_n} \\
&&\qquad\stackrel{\mbox{\fontsize{8.36}{10.36}
\selectfont{(\ref{2.2}), (\ref{2.46})}}}{\le}
\biggl(c_6 \frac{\widehat{L}_0}{\ell_0 L_0}
\biggr)^{{r_n}/{2} (d-2)} .\nonumber
$$ As a result, (\[2.20\]) yields that for $n \ge0$, $$\label{2.50}
p_{n+1}(u_{n+1}) \le p_n(u_n) \biggl(p_n(u_n) + \biggl(c_6
\frac{\widehat{L}_0}{\ell_0 L_0} \biggr)^{({r_0}/{2})
2^n(d-2)} \biggr).$$ We then define by induction $K_n$, $n \ge0$, via the following relation valid for $n \ge1$: $$\label{2.51}\qquad\quad
K_n = K_0 - \sum_{0 \le n^\prime< n} 2^{-(n^\prime+ 1)} \log\biggl(
1 + e^{K_{n^\prime} 2^{n^\prime}} \biggl(c_6 \frac{
\widehat{L}_0}{\ell_0 L_0} \biggr)^{({r_0}/{2}) 2^{n^\prime}
(d-2)} \biggr)$$ so that $K_n \le K_0$ and hence $$\begin{aligned}
\label{2.52}\quad
K_n & \ge & K_0 - \sum_{n^\prime\ge0} 2^{-(n^\prime+ 1)} \log\biggl(
1 + e^{K_0 2^{n^\prime}} \biggl(c_6 \frac{ \widehat
{L}_0}{\ell_0 L_0} \biggr)^{({r_0}/{2}) 2^{n^\prime}(d-2)} \biggr)
\nonumber\\[-8pt]\\[-8pt]
& \stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{2.46})}}}{\ge} & K_0 - \sum_{n^\prime\ge0}
2^{-(n^\prime+ 1)} \log2 = K_0 - \log2 >
0 .\nonumber\end{aligned}$$ As we now show by induction, we have $p_n(u_n) \le e^{-K_n 2^n}$.
Indeed, this inequality holds for $n = 0$, due to (\[2.47\]), and if it holds for $n \ge0$, then, due to (\[2.50\]), we find that $$\begin{aligned}
p_{n+1}(u_{n+1}) & \le & e^{-K_n 2^n} \biggl(e ^{-K_n 2^n} + \biggl(c_6
\frac{\widehat{L}_0}{\ell_0 L_0} \biggr)^{
({r_0}/{2}) 2^n(d-2)} \biggr)
\\
& = & e^{-K_n 2^{n+1}} \biggl(1 + e^{K_n 2^n} \biggl(c_6
\frac{\widehat{L}_0}{\ell_0 L_0} \biggr)^{({r_0}/{2})
2^n(d-2)} \biggr) \\
&\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{2.51})}}}{=}&
e^{-K_{n+1} 2^{n+1}}.\end{aligned}$$ This proves that $p_n(u_n) \le e^{-K_n 2^n}$ for all $n \ge0$ and (\[2.48\]) follows.
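The doubly exponential decay produced by this induction can be observed numerically. The values below are illustrative, with the ratio $q$ playing the role of $(c_6 \widehat{L}_0/(\ell_0 L_0))^{(d-2)}$ (the factor $d-2$ absorbed into $q$), chosen so that the analogue of (\[2.46\]), $e^{K_0} q^{r_0/2} \le1$, holds:

```python
import math

# illustrative parameters; need K0 > log 2 and exp(K0) * q**(r0/2) <= 1,
# the analogue of condition (2.46)
q, r0, K0 = 0.1, 4, 4.0
assert K0 > math.log(2) and math.exp(K0) * q ** (r0 / 2) <= 1

p, K = math.exp(-K0), K0          # p_0(u_0) <= e^{-K_0}, cf. (2.47)
for n in range(6):
    corr = q ** ((r0 / 2) * 2 ** n)          # last term of (2.50)
    p = p * (p + corr)                       # recursion (2.50)
    K = K - 2 ** -(n + 1) * math.log(1 + math.exp(K * 2 ** n) * corr)  # (2.51)
    # the induction step of Proposition 2.3: p_{n+1} <= e^{-K_{n+1} 2^{n+1}}
    assert p <= math.exp(-K * 2 ** (n + 1)) * (1 + 1e-9)
    assert K >= K0 - math.log(2)             # cf. (2.52)
```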
\[rem2.4\] One of the main issues we now have to face is proving the local estimate $p_0(u_0) \le e^{-K_0}$ \[see (\[2.47\])\] for large $d$, with $u_0$ of order close to $\log d$ (and a posteriori close to $u_*$). We further need $K_0$ sufficiently large so that $e^{-(K_0 -
\log2)2^n}$ overcomes the combinatorial complexity arising from the choice of the binary trees in the upper bound (\[2.18\]), that is, overcomes the factor $|\Lambda_n|
\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{2.13})}}}{\le} (c_4 \ell
_0)^{2(d-1)(2^n-1)}$. Devising this local estimate will be the aim of the next section and will involve aspects of random interlacements at a shorter range, where features reminiscent of random interlacements on $2d$-regular trees (cf. Section 5 of [@Teix09b]) will be evident.
Local connectivity bounds {#sec3}
=========================
The aim of this section is to derive exponential bounds on the decay of the probability of existence of a path in the vacant set at level $u_0
= (1 +5 \varepsilon) \log d$, starting at the origin and traveling at $\ell^1$-distance $Md$, where $M$ is an arbitrary integer and $d \ge
c(\varepsilon, M)$ (cf. Corollary \[cor3.4\]). For this purpose, we develop an enhanced Peierls-type argument. The main step appears in Theorem \[theo3.1\] below. In the present section, aspects of random interlacements on ${{\mathbb Z}}^d$ for large $d$, reminiscent of random interlacements on $2d$-regular trees (cf. [@Teix09b]), will play an important role. We introduce the parameter $$\label{3.1}
0 < \varepsilon< \tfrac{1}{3}.$$ We also introduce, in the notation of (\[1.14\]), $$\label{3.2}
L= c_7 d\qquad \mbox{where } c_7 = [e^8 c_2] + 2 .$$ The main result of this section is the following estimate on the connectivity function.
\[theo3.1\] For any positive integer $M$, we have $$\label{3.3}\quad
{{\mathbb P}}\bigl[0 \stackrel{{\mathcal{V}}^{u_0}}{\longleftrightarrow} S_1 (0,ML)\bigr] \le
\exp\biggl\{ \frac{M(M-1)}{2} L + 3 Md -
\frac{\varepsilon^2}{5} Md \log d \biggr\},$$ where the notation is similar to (\[2.14\]) and $$\label{3.4}
u_0 = ( 1 + 5 \varepsilon) \log d .$$
Observe that any self-avoiding path from $0$ to $S_1 (0,ML)$ successively visits the $\ell^1$-spheres $S_1(0,iL)$, $i = 0, \ldots,
M-1$. Thus, considering the first $[\frac{\varepsilon}{10} d]$ steps of the path following the successive entrances in the various spheres $S_1(0,iL)$, we obtain $M$ self-avoiding paths $\pi_i$, $i =
0, \ldots, M-1$, where $\pi_i$ starts in $S_1(0,iL)$ and has $[\frac
{\varepsilon}{10} d]$ steps for each $i$. Denoting by $z_i$, $i =
0,\ldots, M-1$, the respective starting points of these paths, we find that $$\label{3.5}\quad
{{\mathbb P}}\bigl[0 \stackrel{{\mathcal{V}}^{u_0}}{\longleftrightarrow} S_1 (0,ML)\bigr] \le
\sum_{z_i,\pi_i} {{\mathbb P}}[{\mathcal{V}}^{u_0} \supseteq\operatorname{range} \pi_i
\mbox{ for } i = 0, \ldots, M-1] ,$$ where the above sum runs over $z_i \in S(0,iL)$ and self-avoiding paths $\pi_i$ with $[\frac{\varepsilon}{10} d]$ steps and starting points $z_i$, $i = 0,\ldots, M-1$. The next lemma provides a very rough bound on the cardinality of $\ell^1$-spheres and $\ell^1$-balls. Crucially, it shows that $\ell^1$-spheres and balls of radius $cd$ are “rather small,” that is, their cardinality grows at most geometrically in $d$.
\[lem3.2\] $$\begin{aligned}
\label{3.6}
&&\mbox{\hphantom{i}\textup{(i)}} \quad |S_1 (0,\ell) | \le2^d e^{\ell+ d},
\nonumber\\[-8pt]\\[-8pt]
&&\mbox{\textup{(ii)}} \quad |B_1(0,\ell) | \le2^d e^{\ell+ 1 + d} .\nonumber
$$
We express the generating function of $|S_1(0,k)|$, $k \ge0$, as follows. Given $|t| < 1$, we have
$$\begin{aligned}
\label{3.7}
\sum_{k \ge0} t^k |S_1(0,k)| & = & \sum_{k \ge0} t^k \mathop{\sum
_{m_1,\ldots,m_d \ge0}}_{m_1 + \cdots+ m_d = k} 2^{|\{i \in\{
1,\ldots,d\}; m_i \not= 0\}|}
\nonumber\\
& = &\sum_{m_1,\ldots,m_d \ge0} t^{m_1 + \cdots+ m_d} 2^{|\{i \in\{
1,\ldots, d\}; m_i \not= 0\}|} \nonumber\\[-8pt]\\[-8pt]
& = &\biggl(1 + 2 \sum_{m \ge1} t^m \biggr)^d
\nonumber\\
& = &\biggl( \frac{1+ t}{1-t} \biggr)^d \le\frac{2^d}{(1-t)^d}.\nonumber\end{aligned}$$
As a result, we see that for $0 < t < 1$, $\ell\ge0$, $$|S_1(0,\ell)| \le2^d(1-t)^{-d} t^{-\ell} .$$ Choosing $t = \ell/ (d+\ell)$, we find that $$\label{3.8}
|S_1 (0,\ell)| \le2^d \biggl(1 + \frac{\ell
}{d} \biggr)^d \biggl(1 + \frac{d}{\ell} \biggr)^\ell
\le2^d e^{\ell+ d} ,$$ where we have used the inequality $1 + u \le e^u$ in the last step. This proves (\[3.6\])(i). As for the inequality (\[3.6\])(ii), by (\[3.6\])(i), we can write $$\label{3.9}
|B_1(0,\ell)| \le2^d e^d \sum^\ell_{k=0} e^k = 2^d e^d
\frac{e^{\ell+1}-1}{e-1} \le2^d e^{\ell+ 1 + d}$$ and our claim follows.
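A brute-force enumeration for small $d$ confirms both the composition count underlying the generating function (\[3.7\]) (choose $j$ nonzero coordinates, their signs, and a positive composition of $k$ into $j$ parts) and the bound (\[3.6\])(i); this is a quick sanity check, not needed for the proof:

```python
import itertools
import math

def sphere_size(d, k):
    """|S_1(0,k)|: points of Z^d at l^1-distance exactly k, by enumeration."""
    return sum(
        1
        for x in itertools.product(range(-k, k + 1), repeat=d)
        if sum(abs(c) for c in x) == k
    )

d = 3
for k in range(7):
    s = sphere_size(d, k)
    # closed form: 2^j sign choices, C(d,j) coordinate choices,
    # C(k-1,j-1) positive compositions of k into j parts
    closed = 1 if k == 0 else sum(
        2 ** j * math.comb(d, j) * math.comb(k - 1, j - 1)
        for j in range(1, min(d, k) + 1)
    )
    assert s == closed
    assert s <= 2 ** d * math.exp(k + d)    # the bound (3.6)(i)
```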
We now come back to (\[3.5\]). By a very rough counting argument for the number of possible choices of $\pi_i$, we have a Peierls-type bound: $$\begin{aligned}
\label{3.10}
&&{{\mathbb P}}\bigl[0 \stackrel{{\mathcal{V}}^{u_0}}{\longleftrightarrow} S_1(0,ML)\bigr]\nonumber \\
&&\hspace*{7.75pt}\qquad\le
\Biggl(\prod^{M-1}_{k=0} |S_1(0,k L)| (2 d)^{
{\varepsilon}/{10} d} \Biggr) \nonumber\\
&&\hspace*{7.75pt}\qquad\quad{}\times\sup_{z_i,\pi_i} {{\mathbb P}}[{\mathcal{V}}^{u_0} \supseteq\operatorname{range} \pi_i, i = 0, \ldots, M-1]
\nonumber\\
&&\qquad\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{3.6})(i)}}}{\le}
\Biggl(\prod^{M-1}_{k=0} 2^d e^{k L+d} \Biggr) (2d)^{
{\varepsilon}/{10} M d} \\
&&\hspace*{7.75pt}\qquad\quad{}\times\sup_{z_i,\pi_i} {{\mathbb P}}[{\mathcal{V}}^{u_0}
\supseteq\operatorname{range} \pi_i, i = 0, \ldots, M-1] \nonumber\\
&&\hspace*{7.75pt}\qquad\le
e^{{M(M-1)}/{2} L + 2 M d} (2d)^{{\varepsilon}/{10} M d}\nonumber\\
&&\hspace*{7.75pt}\qquad\quad{}\times
\sup_{z_i,\pi_i} {{\mathbb P}}[{\mathcal{V}}^{u_0} \supseteq\operatorname{range}
\pi_i, i = 0, \ldots, M-1] ,\nonumber
$$ where the supremum runs over a similar collection as the sum in (\[3.5\]).
The next objective is to bound the probability in the last line of (\[3.10\]). For this purpose, for each $x$ in the set $$\begin{aligned}
\label{3.11}\qquad
\hspace*{-12pt}B \stackrel{\mathrm{def}}{=} \bigcup^{M-1}_{i=0} B_1 \biggl(z_i,
\frac{\varepsilon}{10} d \biggr) \nonumber\\[-8pt]\\[-8pt]
\eqntext{\mbox
{(pairwise disjoint $\ell^1$-balls appear in this union)},}\end{aligned}$$ we write $z_x$ for the unique $z_i$ such that $x \in B_1 (z_i,
\frac{\varepsilon}{10} d )$. We then define, for any $x$ in $B$, the subset $W^*_x$ of $W^*$—see above (\[1.24\])—(not to be confused with $W^*_{\{x\}}$): $$\begin{aligned}
\label{3.12}
W_x^* & = & \mbox{the image under $\pi^*$ of}\nonumber\\
&&{} \biggl\{\mbox{$w \in W$: the
minimum of $d_1(z_x,w(n))$, $n \in{{\mathbb Z}}$,}
\nonumber\\[-8pt]\\[-8pt]
&&\hspace*{7.4pt} \mbox{is reached for the first time at $w(n) = x$ and $w$ }\nonumber\\
&&\hspace*{11pt} \mbox{does not
enter any }B_1 \biggl(z_i, \frac{\varepsilon}{10} d
\biggr) \mbox{ with } z_i \not= z_x \biggr\} .\nonumber\end{aligned}$$ Note that, clearly, $W^*_x \subseteq W^*_{\{x\}}$ and that $$\label{3.13}
\mbox{$W^*_x$, $x \in B$, are pairwise disjoint measurable subsets of
$W^*$} .$$ It then follows that for $z_i, \pi_i$, $0 \le i \le M-1$, as in (\[3.10\]), we have $$\begin{aligned}
\label{3.14}
&&{{\mathbb P}}[{\mathcal{V}}^{u_0} \supseteq\operatorname{range} \pi_i, i = 0,\ldots, M-1]
\nonumber\\
&&\qquad\le
{{\mathbb P}}\Biggl[\omega\Biggl(\bigcup^{M-1}_{i=0} \bigcup
_{x \in\operatorname{range} \pi_i} W_x^* \times[0,u_0] \Biggr) = 0
\Biggr]\nonumber\\[-8pt]\\[-8pt]
&&\qquad =
\exp\Biggl\{- u_0 \sum^{M-1}_{i=0} \sum_{x \in\operatorname{range} \pi
_i} \nu(W^*_x) \Biggr\}
\nonumber\\
&&\qquad\le\exp\biggl\{ - u_0 M \frac{\varepsilon d}{10}
\times\inf_{x \in B} \nu(W^*_x) \biggr\} .\nonumber
$$ We will now seek a lower bound on $\nu(W^*_x)$ for $x \in B$.
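The equality in (\[3.14\]) is the void probability of a Poisson random measure: a set carrying total intensity $\lambda$ is avoided with probability $e^{-\lambda}$. A quick Monte Carlo illustration of this identity (the rate below is an arbitrary stand-in for $u_0 \sum_x \nu(W^*_x)$):

```python
import math
import random

random.seed(1)

def poisson(lam):
    # sample a Poisson(lam) count by inversion of the cdf
    x, p = 0, math.exp(-lam)
    s, u = p, random.random()
    while u > s:
        x += 1
        p *= lam / x
        s += p
    return x

lam = 1.7          # illustrative total intensity
trials = 200_000
empty = sum(poisson(lam) == 0 for _ in range(trials)) / trials
# void probability of a Poisson variable: P[count = 0] = e^{-lam}
assert abs(empty - math.exp(-lam)) < 0.01
```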
Choosing $K = \{x\}$ in (\[1.25\]), (\[1.26\]), by (\[1.19\]), we see that for any $x$ in $B$, $$\begin{aligned}
\label{3.15}
\nu(W^*_x) & = & P_x \bigl[|X_n - z_x|_1 \ge|x-z_x|_1\mbox{, for $n
\ge0$,}\nonumber\\
&&\hspace*{39.9pt}\mbox{and } H_{\bigcup_{z_i \not= z_x} B_1(z_i,
{\varepsilon}/{10} d)} = \infty\bigr]
\nonumber\\
&&{} \times P_x \bigl[|X_n - z_x|_1> |x-z_x|_1\mbox{, for $n >
0$,}\nonumber\\[-8pt]\\[-8pt]
&&\hspace*{53.2pt} \mbox{and }
H_{\bigcup_{z_i \not= z_x} B_1(z_i, {\varepsilon
}/{10} d)} = \infty\bigr]
\nonumber\\
& \ge & \biggl(P_x [|X_n - z_x|_1 > |x-z_x|_1\mbox{, for } n > 0]\nonumber\\
&&\hspace*{38.2pt}{} -
\sum_{z_i \not= z_x} P_x
\bigl[ H_{B_1(z_i, {\varepsilon}/{10}d)} < \infty\bigr] \biggr)^2_+.\nonumber\end{aligned}$$ In view of (\[1.14\]) and the choice of $L$ in (\[3.2\]), we see that when $d \ge8$, we have $$\begin{aligned}
\label{3.16}
&&\sum_{z_i \not= z_x} P_x \bigl[ H_{B_1(z_i, {\varepsilon
}/{10} d)} < \infty\bigr] \nonumber\\
&&\qquad\hspace*{21.28pt}\le\sum_{z_i \not= z_x} \sup
_{y \in B_1(z_i, {\varepsilon}/{10} d)} g(y-x) \biggl|B_1
\biggl(0, \frac{\varepsilon}{10} d \biggr) \biggr|\nonumber\\[-8pt]\\[-8pt]
&&\qquad\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{1.14}), (\ref{3.6})(ii)}}}{\le}
2 \sum_{j \ge1} \biggl(\frac{c_2 d}{j L -
{\varepsilon}/{5} d} \biggr)^{{d}/{2} - 2} 2^d e^{
{\varepsilon}/{10} d + 1 + d} \nonumber\\
&&\hspace*{17.1pt}\qquad\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{3.2})}}}{\le}
2e^{-8({d}/{2} - 2) + 3d + 1} \sum_{j \ge1} j^{-({d}/{2}
- 2)} \stackrel{d \ge8}{\le} c e^{-d} .\nonumber
$$ The next lemma yields a lower bound on the first term in the last line of (\[3.15\]).
\[lem3.3\] When $|y|_1 \le\frac{d}{2}$, one has $$\label{3.17}
P_y [ |X_n |_1 > |y|_1 \mbox{ for all } n > 0] \ge1 -
\frac{4(|y|_1 \vee1)}{2d - (|y|_1 \vee1)} .$$
We first note that for $z = (z_1,\ldots,z_d)$ in ${{\mathbb Z}}^d$, $P_z$-a.s., $ | |X_1|_1 - |z|_1 | = 1$, and $$\begin{aligned}
\label{3.18}
P_z[|X_1|_1 = |z|_1 + 1] &=& \frac{1}{2d} \Biggl(2d -
\sum^d_{k=1} 1\{z_k \not= 0\} \Biggr) \ge
p_{|z|_1}\nonumber\\[-8pt]\\[-8pt]
\eqntext{\mbox{where }
\displaystyle p_m \stackrel{\mathrm{def}}{=} \biggl(\frac{1}{2} +
\frac{1}{2} \biggl( 1 - \frac{m}{d} \biggr)_+\biggr)
\qquad\mbox{for $m \ge0$} .}
$$ We then introduce the canonical Markov chain $N_n$ on ${{\mathbb N}}$ that jumps to $m+1$ with probability $p_m$ and to $m-1$ with probability $q_m =
1-p_m$ when located at $m$. We denote by $Q_m$ the canonical law of this Markov chain starting in $m$. In view of (\[3.18\]), a coupling argument shows that we can construct $X_n$ and $N_n$ on the same probability space, with $X_0 = y \in{{\mathbb Z}}^d$ and $N_0 = |y|_1$, so that a.s. $|X_n|_1 \ge N_n$ for all $n \ge0$. Consequently, we see that when $y
\not= 0$, we have the bound (with $m = |y|_1 \le\frac{d}{2}$) $$\begin{aligned}
\label{3.19}
&&
P_y \bigl[ H_{S_1(0,d^2)} < \widetilde{H}_{B_1(0,|y|_1)} \bigr] \nonumber\\
&&\qquad\ge Q_{|y|_1} \bigl[H_{d^2}
< \widetilde{H}_{|y|_1}\bigr]
\\
&&\qquad = p_m (1 + \rho_{m+1} + \rho_{m+1} \rho_{m+2} + \cdots+ \rho
_{m+1} \cdots\rho_{d^2 -
1})^{-1},\nonumber\end{aligned}$$ where $\rho_\ell= \frac{q_\ell}{p_\ell}$ for $\ell\ge0$ and we have used [@Chun60], (5), page 73.
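The one-dimensional identity used in (\[3.19\]) is the classical gambler's-ruin formula for a birth-and-death chain. It can be cross-checked numerically against a direct solve of the harmonic recursion $h(x) = p_x h(x+1) + q_x h(x-1)$; the chain below uses the up-probabilities $p_m$ of (\[3.18\]) with an illustrative value of $d$:

```python
import math

d = 20

def p_up(m):
    # up-probability p_m of (3.18)
    return 0.5 + 0.5 * max(1.0 - m / d, 0.0)

def escape_formula(m, b):
    # right-hand side of (3.19): p_m (1 + rho_{m+1} + ... + rho_{m+1}...rho_{b-1})^{-1}
    s, prod = 1.0, 1.0
    for l in range(m + 1, b):
        prod *= (1.0 - p_up(l)) / p_up(l)     # rho_l = q_l / p_l
        s += prod
    return p_up(m) / s

def escape_recursion(m, b):
    # solve h(x) = Q_x[H_b < H_m] from h(x) = p_x h(x+1) + q_x h(x-1), h(m) = 0,
    # by shooting with h(m+1) = 1, then rescale so that h(b) = 1
    h_prev, h = 0.0, 1.0
    for x in range(m + 1, b):
        pm = p_up(x)
        h_prev, h = h, (h - (1.0 - pm) * h_prev) / pm
    return p_up(m) / h      # Q_m[H_b < tilde-H_m] = p_m * h(m+1)

for m, b in [(1, 15), (3, 30), (5, 50)]:
    assert abs(escape_formula(m, b) - escape_recursion(m, b)) < 1e-9
```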
Note that the expression in the right-hand side of (\[3.19\]) is a decreasing function of each $\rho_\ell$, $m + 1 < \ell< d^2$. If we further observe that $\rho_\ell\le(\frac{1}{2} - \frac{1}{2}
\times\frac{1}{4})(\frac{1}{2} + \frac{1}{2} \times\frac
{1}{4})^{-1} = \frac{3}{5}$ for $m + 1 < \ell\le\frac{3}{4} d$ and $\rho_\ell\le1$ for $\frac{3}{4} d < \ell\le d^2 - 1$, then we see that the above expression is bigger than $$\begin{aligned}
&&\biggl(1 - \frac{m}{2d} \biggr) \biggl( 1 +
\frac{m}{2d-m} \sum_{k \ge0} \biggl(
\frac{3}{5} \biggr)^k + \biggl(
\frac{3}{5} \biggr)^{[{3}/{4} d]-m} d^2 \biggr)^{-1}\\
&&\hspace*{5.41pt}\qquad\stackrel{m \le{d/2}}{\ge}
\biggl(1 - \frac{m}{2d} \biggr) \biggl( 1 +
\frac{5}{2} \frac{m}{2d-m} +
\frac{5}{3} \biggl(
\frac{3}{5} \biggr)^{{d}/{4}} d^2 \biggr)^{-1} \\
&&\qquad\mathop{\ge
}_{1 \le
m \le{d/2}}^{d \ge c} \biggl(1 -
\frac{m}{2d} \biggr) \biggl( 1 + 3
\frac{m}{2d-m} \biggr)^{-1}
\\
&&\hspace*{13.94pt}\qquad\ge\biggl(1 - \frac{m}{2d} \biggr) \biggl( 1 - 3
\frac{m}{2d-m} \biggr) \qquad\biggl( \mbox{$\ge$}0 \mbox{ since } m \le
\frac{d}{2} \biggr) .\end{aligned}$$ By the strong Markov property at time $H_{S_1(0,d^2)}$, we thus find that for $d \ge c$, $1 \le|y|_1 \le\frac{d}{2}$, we have $$\begin{aligned}
\label{3.20}\qquad
&&P_y [|X_n|_1 > |y|_1 \mbox{ for all } n > 0] \nonumber\\
&&\hspace*{4.3pt}\qquad\ge
\biggl(1 - \frac{|y|_1}{2d} \biggr) \biggl(1 -
\frac{3|y|_1}{2d - |y|_1} \biggr) \Bigl(1 - \sup
_{|z|_1= d^2} P_z\bigl[H_{B_1(0,{d}/{2})}< \infty\bigr] \Bigr)\\
&&\qquad \stackrel
{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{1.1})}}}{\ge}
\biggl(1 - \frac{|y|_1}{2d} \biggr) \biggl(1 -
\frac{3|y|_1}{2d - |y|_1} \biggr) - \sup_{|z|\ge
d^{3/2}} P_z \bigl[H_{B(0,d)} < \infty\bigr] \nonumber\\
&&\qquad\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{1.6})}}}{\ge}
1 - \frac{|y|_1}{2d} - \frac
{3|y|_1}{2d - |y|_1} + \frac{3|y|^2_1}{2d(2d -
|y|_1)} - \biggl(\frac{c}{\sqrt{d}} \biggr)^{(d-2)}\nonumber\\
&&\qquad\mathop{\ge}_{y \not= 0}^{d \ge c} 1 - \frac
{4|y|_1}{2d - |y|_1} .\nonumber
$$ This completes the proof of (\[3.17\]) when $y \not= 0$. The extension to the case $y=0$ is immediate.
We use the above lemma to bound the first term in the last line of (\[3.15\]) from below. In view of (\[3.16\]) and (\[3.17\]), we thus find that for $d \ge c$ and any $x \in B$ \[see (\[3.11\])\], $$\label{3.21}\qquad
\nu(W^*_x) \ge\biggl(1 - 5 \frac{|x-z_x|_1 \vee
1}{2d-(|x-z_x|_1 \vee1)} \biggr)^2 \ge1 - 10 \frac
{\varepsilon/10}{2-\varepsilon/10} \ge1 - \varepsilon.$$ Coming back to (\[3.14\]), we thus find that $$\label{3.22}
{{\mathbb P}}[{\mathcal{V}}^{u_0} \supseteq\operatorname{range} \pi_i, i = 0,\ldots,M-1] \le
\exp\biggl\{ - \frac{u_0}{10} M \varepsilon(1 -
\varepsilon) d \biggr\}.$$ Inserting this bound into the last line of (\[3.10\]), we obtain $$\begin{aligned}
&&{{\mathbb P}}\bigl[0 \stackrel{{\mathcal{V}}^{u_0}}{\longleftrightarrow} S_1(0,ML)\bigr] \\
&&\hspace*{4pt}\qquad\le
\exp\biggl\{ \frac{M(M-1)}{2} L+2 Md -
\frac{u_0}{10} \varepsilon(1-\varepsilon) Md \biggr\}
(2d)^{{\varepsilon}/{10} Md}
\\
&&\qquad\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{3.4})}}}{\le}
\exp\biggl\{ \frac{M(M-1)}{2} L+3 Md + \frac{\varepsilon}{10}
Md \log d\\
&&\qquad\quad\hspace*{57.2pt} - \frac{1}{10} (\varepsilon+ 4
\varepsilon^2 - 5 \varepsilon^3) Md \log d \biggr\}.\end{aligned}$$ Since $5\varepsilon^3 \le2\varepsilon^2$, due to (\[3.1\]), the claim (\[3.3\]) follows.
We will use the following corollary in the proof of Theorem \[theo0.1\] in the next section.
\[cor3.4\] If $M \ge1$, then for $d \ge c(M,\varepsilon)$, $$\label{3.23}
{{\mathbb P}}\bigl[0 \stackrel{{\mathcal{V}}^{u_0}}{\longleftrightarrow} S_1(0,ML)\bigr] \le
\exp\biggl\{ - \frac{\varepsilon^2}{10} dM \log
d \biggr\}.$$
This is an immediate consequence of (\[3.3\]).
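Indeed, dividing both exponents by $Md \log d$ and using $L = c_7 d$, the claim reduces to $\frac{(M-1)c_7}{2} + 3 \le\frac{\varepsilon^2}{10} \log d$, which holds once $\log d \ge10 (\frac{(M-1)c_7}{2} + 3)/\varepsilon^2$. A numerical check in terms of $\log d$ (with an illustrative placeholder for the constant $c_2$, hence for $c_7$):

```python
import math

c2 = 1.0                               # placeholder for the constant c_2 of (1.14)
c7 = math.floor(math.e ** 8 * c2) + 2  # cf. (3.2)
eps, M = 0.3, 3                        # any 0 < eps < 1/3 and integer M >= 1

A = (M - 1) / 2 * c7 + 3
log_d = 10 * A / eps ** 2 * 1.5        # any log d above the threshold 10*A/eps^2

# exponent of (3.3) per unit M*d versus exponent of (3.23) per unit M*d
lhs = A - eps ** 2 / 5 * log_d
rhs = -(eps ** 2) / 10 * log_d
assert lhs <= rhs
```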
\[rem3.5\] One should note that the bound of Theorem \[theo3.1\] deteriorates when $M$ becomes large. One can view Theorem \[theo3.1\] as a Peierls-type bound (slightly enhanced due to the role of $M$ in the proof). In the next section, we will choose $M$ as a large constant depending on $\varepsilon$ and use Corollary \[cor3.4\] to produce the local estimate which will enable us to initiate the renormalization scheme of Section \[sec2\]. In this way, the local estimate on crossings in ${\mathcal{V}}^{u_0}$ at $\ell^1$-distance of order $c(\varepsilon)d$ will be transformed into an estimate on crossings at all scales in ${\mathcal{V}}^{u_\infty}$, where $u_\infty\le(1 + 10 \varepsilon) \log d$.
Denouement {#sec4}
==========
In this section, we prove Theorem \[theo0.1\]. We combine the local bound on the connectivity function at level $u_0$ of the last section (cf. Corollary \[cor3.4\]) with the renormalization scheme of Section \[sec2\] (cf. Proposition \[prop2.3\]) in order to produce a bound on vacant crossings at a level $u_\infty\in[(1 + 5 \varepsilon) \log
d$, $(1
+ 10 \varepsilon) \log d]$, valid at arbitrarily large scales.
[Proof of Theorem \[theo0.1\]]{} We choose $\varepsilon$ and $u_0$ as in (\[3.1\]), (\[3.4\]), respectively. For the renormalization scheme of Section \[sec2\], we choose \[the constant $c_7$ appears in (\[3.2\])\] $$\label{4.1}
L_0 = d,\qquad \widehat{L}_0 = \bigl(\sqrt{d} + R\bigr) L_0\qquad \mbox{with $R = 300
c_7 \varepsilon^{-2}$}$$ and $$\label{4.2}
\ell_0 = d .$$ In the notation of Proposition \[prop2.3\] and (\[2.13\]), we choose $$\label{4.3}
r_0 = 24$$ and $$\label{4.4}
K_0 = \log\bigl(4 (c_4 \ell_0)^{2(d-1)}\bigr)
\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{4.2})}}}{=}
\log\bigl(4 (c_4 d)^{2(d-1)}\bigr) .$$ In the application of Corollary \[cor3.4\], we choose $$\label{4.5}
M = [100 \varepsilon^{-2}] + 1$$ so that in the notation of (\[3.2\]), (\[4.1\]), $$\label{4.6}
ML + 1 \le R L_0 .$$ We will now check that the assumptions of Proposition \[prop2.3\] hold for $d \ge c(\varepsilon)$. By (\[2.45\]), we see that for $d \ge
c(\varepsilon)$, $$\label{4.7}
u_0 = (1 + 5 \varepsilon) \log d < u_\infty< ( 1+ 10 \varepsilon) \log d$$ and also that $$\label{4.8}
\widehat{L}_0 \le2d^{{3/2}} .$$ As a result, we find that $$\label{4.9}
u_\infty\biggl(\frac{\widehat{L}_0}{\sqrt{d}}
\biggr)^{(d-2)} \le(1 + 10 \varepsilon) (\log d) (2d)^{(d-2)}$$ and that $$\label{4.10}
e^{K_0} = 4(c_4 d)^{2(d-1)} ,$$ whereas, on the other hand, $$\label{4.11}
\biggl(\frac{\ell_0 L_0}{c_6 \widehat{L}_0}
\biggr)^{{r_0}/{2} (d-2)} \ge(cd)^{6(d-2)} .$$ Since $2(d-1) < 6(d-2)$, we see that for $d \ge c(\varepsilon)$, the expression in the left-hand side of (\[4.11\]) dominates the corresponding expressions in (\[4.9\]) and (\[4.10\]), that is, (\[2.46\]) holds.
It remains to check (\[2.47\]). For this purpose, we apply Corollary \[cor3.4\] and find that for $d \ge c(\varepsilon)$, since $\widehat
{L}_0 \ge\sqrt{d} L_0 + M L + 1$ \[cf. (\[4.1\]), (\[4.6\])\], we have $$\begin{aligned}
\label{4.12}
p_0(u_0) & = & {{\mathbb P}}\bigl[ [0,L_0 - 1]^d \stackrel{{\mathcal{V}}^{u_0}}{\longleftrightarrow} \partial_{\mathrm{int}} B(0,\widehat{L}_0) \bigr]
\nonumber\\
& \le & L^d_0 {{\mathbb P}}\bigl[0 \stackrel{{\mathcal{V}}^{u_0}}{\longleftrightarrow} S_1(0,ML)\bigr]
\mathop{\le}_{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{4.5})}}}
^{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{3.23})}}} \exp\{ d \log d - 10 d
\log d\} \\
&=&
d^{-9d}.\nonumber\end{aligned}$$ We thus find that for $d \ge c(\varepsilon)$, $p_0(u_0) \le e^{-K_0}$, that is, (\[2.47\]) holds as well. It now follows from Proposition \[prop2.3\] that for $d \ge c(\varepsilon)$, $$\label{4.13}
p_n (u_\infty) \le e^{-(K_0 - \log2)2^n} \qquad\mbox{for all $n \ge0$} .$$ Taking (\[2.13\]), (\[2.18\]) into account yields that for all $n
\ge1$, $$\begin{aligned}
\label{4.14}
&&{{\mathbb P}}\bigl[[0,L_n - 1]^d \stackrel{{\mathcal{V}}^{u_\infty
}}{\longleftrightarrow} \partial_{\mathrm{int}} [-L_n, 2L_n - 1]^d \bigr]
\nonumber\\[-8pt]\\[-8pt]
&&\qquad\le(c_4 \ell_0)^{2(d-1)(2^n-1)} e^{-(K_0 - \log2)2^n}
\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref
{4.10})}}}{\le} 2^{-2^n} .\nonumber
$$ In particular, the above inequality implies that ${{\mathbb P}}[0 \stackrel{{\mathcal{V}}^{u_\infty}}{\longleftrightarrow} \infty] = 0$ and hence $u_* \le
u_\infty< (1 + 10 \varepsilon) \log d$ for $d \ge c(\varepsilon)$. The claim (\[0.6\]) readily follows. Combining this upper bound with the lower bound (\[0.3\]), we have thus proven Theorem \[theo0.1\].
\[rem4.1\]
\(1) The inequality (\[4.14\]), together with the fact that $L_n =
L_0 \ell_0^n$ for $n \ge0$, is more than enough to show that for $\varepsilon$ as in (\[3.1\]) and $d \ge c(\varepsilon)$, $$\lim_{L \rightarrow\infty} L^\gamma{{\mathbb P}}\bigl[B_\infty(0,L) \stackrel
{{\mathcal{V}}^{(1+10 \varepsilon)\log d}}{\longleftrightarrow} S_\infty(0,2L)\bigr] =
0 ,$$ for some, and, in fact, all, $\gamma> 0$. From the definition of the critical parameter $u_{**}$ in [@Szni09c], $$\begin{aligned}
\label{4.15}
u_{**} & = &\inf\{u \ge0; \alpha(u) > 0\}\hspace*{150pt}
\nonumber\\[-8pt]\\[-8pt]
\eqntext{\mbox{where }\displaystyle\alpha(u) = \sup\Bigl\{\alpha\ge0; \lim_{L \rightarrow\infty}
L^\alpha{{\mathbb P}}\bigl[B_\infty(0,L) \stackrel{{\mathcal{V}}^u}{\longleftrightarrow}
S_\infty(0,2L)\bigr] =
0\Bigr\} }\end{aligned}$$ (the supremum is, by convention, equal to zero when the set is empty), we thus find that for $d \ge c(\varepsilon)$, $$\label{4.16}
u_{**} \le( 1 + 10 \varepsilon) \log d .$$ Since $u_* \le u_{**}$, it follows that we have also proven that $$\label{4.17}
\lim_d u_{**} \big/ \log d = 1 .$$ It is presently an open question whether $u_* = u_{**}$; however, we know that $0 < u_* \le u_{**} < \infty$ for all $d \ge3$ (cf. [@Szni09e]) and that for $u > u_{**}$, the connectivity function has a stretched exponential decay (cf. [@SidoSzni09b]).
\(2) One may wonder whether the following reinforcement of (\[0.4\]) actually holds: $${{\mathbb P}}[0 \in{\mathcal{V}}^{u_*}] = e^{-u_*/g(0)} \sim(2d)^{-1} \qquad\mbox{as } d
\rightarrow\infty.$$ This would indicate a high-dimensional behavior similar to that of Bernoulli percolation; see [@AlonBenjStac04; @BollKoha94; @Gord91; @HaraSlad90; @Kest90]. In the case of interlacement percolation on a $2d$-regular tree, such an asymptotic behavior is known to hold (cf. [@Teix09a]).
\[app\]
Appendix {#appendix .unnumbered}
========
In this appendix, we prove an elementary inequality which is involved in the proof of the Green function estimate (\[1.14\]); see Lemma \[A.1\] below. We then prove, in Lemma \[lemA.2\], a bound on Harnack constants in terms of killed Green functions for nearest-neighbor Markov chains on graphs. The result is stated in a rather general formulation because it is of independent interest. It is an adaptation of Lemma 10.2 of [@GrigTelc01]. We recall that Lemma \[lemA.2\] enters the proof of Proposition \[prop1.3\].
\[lemA.1\] $$\begin{aligned}
\label{A.1}
&&\mbox{For } a,b \ge0\qquad \sqrt{a^2 + b^2} \log\bigl(1 + \sqrt{a^2
+ b^2}\bigr) \nonumber\\[-8pt]\\[-8pt]
&&\hspace*{52.8pt}\qquad\qquad\le a \log(1+a) + b \log(1+b) .\nonumber\end{aligned}$$
We introduce $\psi(u) = u \log(1 + u)$, $u \ge0$, as well as $\varphi_b(a) = \sqrt{a^2 + b^2}$ and $\chi_b(a) = \psi(a) + \psi
(b) - \psi(\varphi_b(a))$ for $a,b \ge0$. We want to show that $$\label{A.2}
\chi_b(a) \ge0 \qquad\mbox{for } a,b > 0 .$$ We note that $\chi_b(0) = 0$ and that $$\chi^\prime_b(a) = \log(1 + a) + 1 - \frac
{1}{1+a} - \biggl(\log\bigl(1 + \varphi_b(a)\bigr) + 1 - \frac
{1}{1 + \varphi_b(a)} \biggr) \frac{a}{\varphi
_b(a)} .$$ The claim (\[A.2\]) will follow once we show that $$\label{A.3}
\chi^\prime_b(a) \ge0\qquad \mbox{for $a,b > 0$} .$$ To this end, we note that for $a > 0$, $\chi^\prime_0(a) = 0$ and that $$\begin{aligned}
\label{A.4}
\frac{\partial}{\partial b} \chi^\prime_b(a) &
= &- \biggl( \frac{1}{1+\varphi_b(a)}
\frac{b}{\varphi_b(a)} + \frac{1}{(1+\varphi
_b(a))^2} \frac{b}{\varphi_b(a)} \biggr)
\frac{a}{\varphi_b(a)}
\nonumber\\
&&{} + \biggl(\log\bigl(1 + \varphi_b(a)\bigr) + 1 - \frac
{1}{1+\varphi_b(a)} \biggr) \frac{a b}{\varphi_b(a)^3}
\nonumber\\[-8pt]\\[-8pt]
& = &\frac{a b}{(1 + \varphi_b(a))\varphi_b(a)^3}\nonumber\\
&&\times{}
\biggl\{ \log\bigl(1 + \varphi_b(a)\bigr) \bigl(1 + \varphi_b(a)\bigr) -
\frac{\varphi_b(a)}{1+\varphi_b(a)} \biggr\} .\nonumber\end{aligned}$$ We introduce the function $\rho(u) = \log(1 + u) (1 + u) - \frac
{u}{1+u}$, $u \ge0$. Observe that $\rho(0) = 0$ and $\rho^\prime(u)
= \log(1 + u) + 1 - \frac{1}{(1+u)^2} \ge0$ so that $\rho(u) \ge0$ for $u \ge0$. Coming back to the last line of (\[A.4\]), we find that for $a > 0$, $\frac{\partial}{\partial b} \chi^\prime_b(a)
\ge0$ for $b \ge0$. This shows (\[A.3\]) and the claim (\[A.1\]) then follows.
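The inequality (\[A.1\]) and the positivity of $\rho$ established above can also be checked numerically. The following sketch (ours, not part of the proof) evaluates both sides of (\[A.1\]) and the function $\rho$ on a grid of nonnegative values:

```python
import math

def psi(u):
    # psi(u) = u * log(1 + u), the function underlying (A.1)
    return u * math.log1p(u)

def rho(u):
    # rho(u) = (1 + u) log(1 + u) - u / (1 + u), shown above to be >= 0
    return (1.0 + u) * math.log1p(u) - u / (1.0 + u)

grid = [0.1 * k for k in range(101)]  # a, b ranging over [0, 10]

# Largest violation of (A.1): should never be positive
max_violation = max(
    psi(math.hypot(a, b)) - (psi(a) + psi(b))
    for a in grid for b in grid
)
min_rho = min(rho(u) for u in grid)
```

Equality holds in (A.1) exactly when $a = 0$ or $b = 0$, which the grid check reproduces.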
We now turn to the second result of this appendix. We consider a connected graph $\Gamma$ with an at most countable vertex set $E$ and edge set $\mathcal{E}$ (a subset of the collection of unordered pairs of $E$). Given $U \subseteq E$, we define $\partial U$, $\partial_{\mathrm{int}} U$ and $\overline{U}$ similarly to what is described at the beginning of Section \[sec1\] (with obvious modifications). We consider an irreducible Markov chain on $E$, nearest-neighbor in the broad sense (i.e., at each step, the Markov chain moves to a vertex which is at graph-distance at most $1$ from its current location). We write $X_n$, $n \ge0$, for the canonical process, $P_x$ for the canonical law starting from $x \in E$ and otherwise use notation similar to that described at the beginning of Section \[sec1\]. We denote by $p(x,y)$, $x,y \in E$, the transition probability. We assume that the Markov chain satisfies the following ellipticity condition: $$\label{A.5}
p(x,y) > 0 \qquad\mbox{when $x,y$ are neighbors (i.e., $\{x,y\} \in\mathcal{E}$)}.$$ For $f$ a bounded function on $E$, we define $$\label{A.6}
L f(x) = E_x[f(X_1)] - f(x) = \sum_{y \sim x} p(x,y)\bigl(f(y) - f(x)\bigr)
\qquad\mbox{for $x \in E$} ,\hspace*{-35pt}$$ where $y \sim x$ means that $y = x$ or $y$ is a neighbor of $x$. Given $U \subseteq E$, a bounded function on $\overline{U}$ is said to be *harmonic in* $U$ when (with a slight abuse of notation) $$\label{A.7}
L f(x) = 0 \qquad\mbox{for $x \in U$} .$$ When $U$ is a finite strict subset of $E$, the Green function killed outside $U$ is defined as follows (the notation is similar to that in Section \[sec1\]): $$\label{A.8}
G_U(x,y) = E_x \biggl[\sum_{k \ge0} 1\{X_k = y, T_U > k\} \biggr],\qquad x,y
\in E .$$ It follows from the ellipticity assumption (\[A.5\]) that when $U$ is connected, $G_U(x,y) > 0$ for all $x,y \in U$. The next lemma is an adaptation of Lemma 10.2 of [@GrigTelc01].
\[lemA.2\] Assume that $\varnothing\not= U_1 \subseteq U_2 \subseteq U_3$ are finite strict subsets of $E$, with $U_3$ connected, and that $u$ is a bounded nonnegative function on $\overline{U}_3$ which is harmonic in $U_3$. We then have $$\label{A.9}
\max_{U_1} u \le K \min_{U_1} u ,$$ where $$\label{A.10}
K = \max_{x,y \in U_1} \max_{z \in\partial_{\mathrm{int}} U_2} G_{U_3} (x,z) / G_{U_3}(y,z) .$$
We define, for $x \in E$, $$\label{A.11}
v(x) = E_x [u (X_{H_{U_2}}), H_{U_2} < T_{U_3}] .$$ We first note that $$\label{A.12}\quad
u(x) \ge v(x) \qquad\mbox{for $x \in\overline{U}_3$}\quad\mbox{and}\quad u(x) = v(x)\qquad\mbox{for $x
\in U_2$} .$$ Indeed, in view of (\[A.11\]), $u$ and $v$ agree on $U_2$ and, thanks to our assumptions, $u(X_{n \wedge T_{U_3}})$, $n \ge0$, is a bounded martingale under $P_x$, $x \in\overline{U}_3$, so that by the stopping theorem, we find that $$\begin{aligned}
u(x) &=& E_x [u(X_{H_{U_2} \wedge T_{U_3}})] = v(x) + E_x
[u(X_{T_{U_3}}), T_{U_3} < H_{U_2}]
\\
&\ge& v(x) \qquad\mbox{for } x \in\overline{U}_3 .\end{aligned}$$ The claim (\[A.12\]) then follows.
Applying the simple Markov property at time 1 in (\[A.11\]), when $x
\in U_3 \setminus U_2$, we see that $$\label{A.13}
\mbox{$v$ is harmonic in $U_3 \setminus U_2$} .$$ In addition, we have, for $x \in U_2$, $$v(x) = u(x) = \sum_{y \sim x} p(x,y) u(y) \stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref
{A.12})}}}{\ge} \sum_{y \sim x} p(x,y) v(y)$$ and the last inequality is an equality when $x \in U_2 \setminus
\partial_{\mathrm{int}} U_2$. We have thus shown that $$\label{A.14}
Lv = 1_{\partial_{\mathrm{int}} U_2} Lv \le0 \qquad\mbox{on $U_3$} .$$ Applying the stopping theorem, we see that, under any $P_x$, $$v(X_{n \wedge T_{U_3}}) - \sum_{0 \le k < n \wedge T_{U_3}}
Lv(X_k),\qquad
n \ge0,\qquad \mbox{is a martingale} .$$ Taking expectations and letting $n$ tend to infinity, we obtain the identity $$\begin{aligned}
\label{A.15}
v(x) & = & E_x[v(X_{T_{U_3}})] - E_x \biggl[\sum_{0 \le k < T_{U_3}} Lv
(X_k) \biggr]
\nonumber\\
& = & - \sum_{z \in E} G_{U_3} (x,z) Lv(z) \\
&\stackrel{\mbox{\fontsize{8.36}{10.36}\selectfont{(\ref{A.14})}}}{=}&
\sum_{z \in\partial_{\mathrm{int}} U_2} G_{U_3} (x,z)
(-Lv)(z),\qquad x \in
E .\nonumber\end{aligned}$$ Since $v$ and $u$ agree on $U_2 \supseteq U_1$, (\[A.9\]) is a direct consequence of the above representation formula for $v$.
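As a concrete illustration of (\[A.9\]), (\[A.10\]) (our own toy example, not taken from the text), consider simple random walk on the path $\{0,\dots,9\}$ with $U_1 = \{3,\dots,6\} \subseteq U_2 = \{2,\dots,7\} \subseteq U_3 = \{1,\dots,8\}$ and the nonnegative function $u(x) = x$, which is harmonic in $U_3$ for this walk. The killed Green function (\[A.8\]) is computed as $(I - Q)^{-1}$ for the substochastic matrix $Q$ of the walk killed outside $U_3$:

```python
import numpy as np

# Simple random walk on the path {0,...,9}: p(x, x±1) = 1/2 for x in the
# interior; nearest-neighbor and elliptic in the sense of (A.5).
U3 = list(range(1, 9))          # {1,...,8}, finite strict connected subset
U2 = list(range(2, 8))          # {2,...,7}
U1 = list(range(3, 7))          # {3,...,6}

idx = {x: k for k, x in enumerate(U3)}
Q = np.zeros((len(U3), len(U3)))
for x in U3:
    for y in (x - 1, x + 1):
        if y in idx:
            Q[idx[x], idx[y]] = 0.5

# Killed Green function (A.8): G_{U3} = sum_k Q^k = (I - Q)^{-1} on U3 x U3.
G = np.linalg.inv(np.eye(len(U3)) - Q)

# Interior boundary of U2: points of U2 with a neighbor outside U2, i.e. {2, 7}.
bnd_U2 = [x for x in U2 if (x - 1) not in U2 or (x + 1) not in U2]

# Harnack constant K from (A.10).
K = max(G[idx[x], idx[z]] / G[idx[y], idx[z]]
        for x in U1 for y in U1 for z in bnd_U2)

# u(x) = x is nonnegative on {0,...,9} and harmonic in U3; check (A.9).
harnack_ok = max(U1) <= K * min(U1) + 1e-9
```

In this example one finds $K = 2$ and $\max_{U_1} u / \min_{U_1} u = 6/3 = 2$, so the bound (\[A.9\]) is attained with equality.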
[23]{}
, (). . .
(). . .
(). . .
, (). [http://www.math.ethz.ch/\\textasciitilde cerny/publications.html](http://www.math.ethz.ch/\textasciitilde cerny/publications.html).
(). . , .
(). . .
(). . .
(). , ed. . , .
(). . .
(). . .
(). . In . , .
(). . , .
(). . .
(). . , .
(). . .
(). . .
(). . .
(). . .
(). . .
(). . .
(). . Available at [arXiv:1003.0334](http://arxiv.org/abs/1003.0334).
(). . .
(). . .
---
address: |
Department of Astronomical Science, the Graduate University for Advanced Studies,\
Mitaka 181-8588, Japan
author:
- 'Kouji Nakamura[^1]'
title: ' Inclusion of the first-order vector- and tensor-modes in the second-order gauge-invariant cosmological perturbation theory '
---
Second-order general relativistic cosmological perturbation theory has wide physical motivation. In particular, the first-order approximation of our universe around a homogeneous, isotropic one has been confirmed by the recent observations of the Cosmic Microwave Background (CMB) by the Wilkinson Microwave Anisotropy Probe[@WMAP], which suggest that the fluctuations of our universe are adiabatic and Gaussian at least at first order. A natural next step is to clarify the accuracy of this result through non-Gaussianity, non-adiabaticity, and so on. To carry this out, it is necessary to discuss second-order cosmological perturbations.
However, general relativistic perturbation theory requires a delicate treatment of “gauges,” a point made clear by general arguments on perturbation theories. It is therefore worthwhile to formulate higher-order gauge-invariant perturbation theory from a general point of view. Motivated by this, we proposed a general framework for second-order gauge-invariant perturbation theory on a generic background spacetime[@KNs-general]. This framework was applied to cosmological perturbation theory[@KNs-cosmological], and all components of the second-order perturbation of the Einstein equation were derived in a gauge-invariant manner. The resulting second-order Einstein equations are quite similar to the first-order ones, but they contain source terms that are quadratic in the linear-order perturbations.
In this article, we extend the formulation of Refs. [@KNs-cosmological] to include the first-order vector- and tensor-modes, which were ignored there, in the source terms of the second-order Einstein equation.
As emphasized in Refs.[@KNs-general; @KNs-cosmological], in any perturbation theory, we always treat two spacetime manifolds. One is a physical spacetime ${\cal M}_{\lambda}$ and the other is the background spacetime ${\cal M}_{0}$. In this article, the background spacetime ${\cal M}_{0}$ is the Friedmann-Robertson-Walker universe filled with a perfect fluid whose metric is given by $$\begin{aligned}
g_{ab} = a^{2}(\eta)\left(
- (d\eta)_{a}(d\eta)_{b}
+ \gamma_{ij}(dx^{i})_{a}(dx^{j})_{b}
\right),
\label{eq:background-metric}\end{aligned}$$ where $\gamma_{ij}$ is the metric on the maximally symmetric three-space. The physical variable $Q$ on the physical spacetime is pulled back to ${}_{{\cal X}}\!Q$ on the background spacetime by an appropriate gauge choice ${\cal X}$, which is a point-identification map from ${\cal M}_{0}$ to ${\cal M}_{\lambda}$. The gauge transformation rules for the pulled-back variable ${}_{{\cal X}}\!Q$, which is expanded as ${}_{{\cal X}}\!Q_{\lambda}$ $=$ $Q_{0}$ $+$ $\lambda {}^{(1)}_{{\cal X}}\!Q$ $+$ $\frac{1}{2} \lambda^{2} {}^{(2)}_{{\cal X}}\!Q$, are given by $$\begin{aligned}
\label{eq:Bruni-47-one}
{}^{(1)}_{\;{\cal Y}}\!Q - {}^{(1)}_{\;{\cal X}}\!Q =
{\pounds}_{\xi_{(1)}}Q_{0}, \quad
{}^{(2)}_{\;\cal Y}\!Q - {}^{(2)}_{\;\cal X}\!Q =
2 {\pounds}_{\xi_{(1)}} {}^{(1)}_{\;\cal X}\!Q
+\left\{{\pounds}_{\xi_{(2)}}+{\pounds}_{\xi_{(1)}}^{2}\right\} Q_{0},\end{aligned}$$ where ${\cal X}$ and ${\cal Y}$ represent two different gauge choices, and $\xi_{(1)}^{a}$ and $\xi_{(2)}^{a}$ are the generators of the first- and the second-order gauge transformations, respectively. The metric $\bar{g}_{ab}$ on the physical spacetime ${\cal M}_{\lambda}$ is also expanded as $\bar{g}_{ab}$ $=$ $g_{ab}$ $+$ $\lambda h_{ab}$ $+$ $\frac{\lambda^{2}}{2} l_{ab}$ under a gauge choice. Inspecting the gauge transformation rules (\[eq:Bruni-47-one\]), the first-order metric perturbation $h_{ab}$ is decomposed as $h_{ab}$ $=:$ ${\cal H}_{ab}$ + ${\pounds}_{X}g_{ab}$, where ${\cal H}_{ab}$ and $X_{a}$ are transformed as ${}_{\;{\cal Y}}\!{\cal H}_{ab}$ $-$ ${}_{\;{\cal X}}\!{\cal H}_{ab}=0$, and ${}_{\;{\cal Y}}\!X_{a}$ $-$ ${}_{\;{\cal X}}\!X_{a}$ $=$ $\xi_{(1)a}$ under the gauge transformation (\[eq:Bruni-47-one\]), respectively[@KNs-cosmological]. The gauge-invariant part ${\cal H}_{ab}$ of $h_{ab}$ is given in the form $$\begin{aligned}
\label{eq:first-order-gauge-inv-metrc-pert-components}
{\cal H}_{ab}
&=&
- 2 a^{2} \stackrel{(1)}{\Phi} (d\eta)_{a}(d\eta)_{b}
+ 2 a^{2} \stackrel{(1)}{\nu_{i}} (d\eta)_{(a}(dx^{i})_{b)}
+ a^{2}
\left( - 2 \stackrel{(1)}{\Psi} \gamma_{ij}
+ \stackrel{(1)}{{\chi}_{ij}} \right)
(dx^{i})_{a}(dx^{j})_{b},\end{aligned}$$ where $D^{i}\stackrel{(1)}{\nu_{i}}$ $=$ $\stackrel{(1)}{\chi_{[ij]}}$ $=$ $\stackrel{(1)}{\chi^{i}_{\;i}}$ $=$ $D^{i}\stackrel{(1)}{\chi_{ij}}$ $=$ $0$ and $D^{i}:=\gamma^{ij}D_{j}$ is the covariant derivative associated with the metric $\gamma_{ij}$. In cosmological perturbation theory[@Bardeen-1980], $\{\stackrel{(1)}{\Phi},\stackrel{(1)}{\Psi}\}$, $\stackrel{(1)}{\nu_{i}}$, and $\stackrel{(1)}{\chi_{ij}}$ are called the scalar-, vector-, and tensor-modes, respectively. We note that we used the existence of the Green functions $\Delta^{-1}=:(D^{i}D_{i})^{-1}$, $(\Delta+2K)^{-1}$, and $(\Delta+3K)^{-1}$ to accomplish the above decomposition of $h_{ab}$.
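The role of the inverse Laplacian in this decomposition can be illustrated in a flat periodic toy model (our own sketch, which ignores the curved $\gamma_{ij}$ and the $(\Delta+2K)^{-1}$, $(\Delta+3K)^{-1}$ operators): a vector field is split into a gradient (scalar) part and a divergence-free (vector) part by applying the projector $D_{i}\Delta^{-1}D^{j}$ in Fourier space.

```python
import numpy as np

# Flat 2-torus toy model of the split v_i = D_i s + V_i with D^i V_i = 0.
n = 32
k = 2j * np.pi * np.fft.fftfreq(n)          # spectral derivative factors i*k
kx, ky = np.meshgrid(k, k, indexing='ij')

rng = np.random.default_rng(0)
v = rng.standard_normal((2, n, n))          # an arbitrary vector field

vh = np.fft.fft2(v)                          # Fourier transform, component-wise
k2 = kx * kx + ky * ky                       # symbol of the Laplacian
k2[0, 0] = 1.0                               # avoid 0/0; zero mode handled below

# Longitudinal (gradient) part: projector k_i k_j / |k|^2, i.e. D_i Delta^{-1} D^j
div_vh = kx * vh[0] + ky * vh[1]
long_h = np.stack([kx * div_vh / k2, ky * div_vh / k2])
long_h[:, 0, 0] = 0.0                        # constant mode carries no gradient part
trans_h = vh - long_h                        # divergence-free remainder

V = np.real(np.fft.ifft2(trans_h))           # the "vector-mode" part V_i
div_V = np.real(np.fft.ifft2(kx * trans_h[0] + ky * trans_h[1]))
```

The transverse part has vanishing divergence to machine precision, and the two parts sum back to the original field, mirroring the decomposition of $h_{ab}$ above.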
As shown in Ref.[@KNs-general], through the above variables $X_{a}$ and $h_{ab}$, the second-order metric perturbation $l_{ab}$ is decomposed as $l_{ab}$ $=:$ ${\cal L}_{ab}$ $+$ $2 {\pounds}_{X}h_{ab}$ $+$ $\left( {\pounds}_{Y} - {\pounds}_{X}^{2} \right) g_{ab}$. The variables ${\cal L}_{ab}$ and $Y^{a}$ are the gauge-invariant and gauge-variant parts of $l_{ab}$, respectively. The vector field $Y_{a}$ is transformed as ${}_{\;{\cal Y}}\!Y_{a}$ $-$ ${}_{\;{\cal X}}\!Y_{a}$ $=$ $\xi_{(2)a}$ $+$ $[\xi_{(1)},X]_{a}$ under the gauge transformations (\[eq:Bruni-47-one\]). The components of ${\cal L}_{ab}$ are given by $$\begin{aligned}
\label{eq:second-order-gauge-inv-metrc-pert-components}
{\cal L}_{ab}
&=&
- 2 a^{2} \stackrel{(2)}{\Phi} (d\eta)_{a}(d\eta)_{b}
+ 2 a^{2} \stackrel{(2)}{\nu_{i}} (d\eta)_{(a}(dx^{i})_{b)}
+ a^{2}
\left( - 2 \stackrel{(2)}{\Psi} \gamma_{ij}
+ \stackrel{(2)}{{\chi}_{ij}} \right)
(dx^{i})_{a}(dx^{j})_{b},\end{aligned}$$ where $D^{i}\stackrel{(2)}{\nu_{i}}$ $=$ $\stackrel{(2)}{\chi_{[ij]}}$ $=$ $\stackrel{(2)}{\chi^{i}_{\;\;i}}$ $=$ $D^{i}\stackrel{(2)}{\chi_{ij}}$ $=$ $0$. As shown in Ref.[@KNs-general], by using the above variables $X_{a}$ and $Y_{a}$, we can find the gauge invariant variables for the perturbations of an arbitrary field as $$\begin{aligned}
\label{eq:matter-gauge-inv-def-1.0}
{}^{(1)}\!{\cal Q} := {}^{(1)}\!Q - {\pounds}_{X}Q_{0},
\quad
% \label{eq:matter-gauge-inv-def-2.0}
{}^{(2)}\!{\cal Q} := {}^{(2)}\!Q - 2 {\pounds}_{X} {}^{(1)}Q
- \left\{ {\pounds}_{Y} - {\pounds}_{X}^{2} \right\} Q_{0}.\end{aligned}$$
As the matter contents, in this article, we consider a perfect fluid whose energy-momentum tensor is given by $\bar{T}_{a}^{\;\;b}$ $=$ $\left(\bar{\epsilon}+\bar{p}\right)\bar{u}_{a}\bar{u}^{b}$ $+$ $\bar{p}\delta_{a}^{\;\;b}$. We expand these fluid components $\bar{\epsilon}$, $\bar{p}$, and $\bar{u}_{a}$ as $$\begin{aligned}
\bar{\epsilon}
=
\epsilon
+ \lambda \stackrel{(1)}{\epsilon}
+ \frac{1}{2} \lambda^{2} \stackrel{(2)}{\epsilon}
,
\quad
\bar{p}
=
p
+ \lambda \stackrel{(1)}{p}
+ \frac{1}{2} \lambda^{2} \stackrel{(2)}{p}
,
\quad
\bar{u}_{a}
=
u_{a}
+ \lambda \stackrel{(1)}{u}_{a}
+ \frac{1}{2} \lambda^{2} \stackrel{(2)}{u}_{a}
.
\label{eq:Taylor-expansion-of-four-velocity}\end{aligned}$$ Following the definitions (\[eq:matter-gauge-inv-def-1.0\]), we easily obtain the corresponding gauge invariant variables for these perturbations of the fluid components: $$\begin{aligned}
% \label{eq:kouchan-016.13}
\stackrel{(1)}{{\cal E}}
&:=& \stackrel{(1)}{\epsilon} - {\pounds}_{X}\epsilon, \quad
% \label{eq:kouchan-016.14}
\stackrel{(1)}{{\cal P}}
:= \stackrel{(1)}{p} - {\pounds}_{X}p, \quad
% \label{eq:kouchan-016.15}
\stackrel{(1)}{{\cal U}_{a}}
:= \stackrel{(1)}{(u_{a})} - {\pounds}_{X}u_{a}, \nonumber
\quad
% \label{eq:kouchan-016.16}
\stackrel{(2)}{{\cal E}}
:= \stackrel{(2)}{\epsilon}
- 2 {\pounds}_{X} \stackrel{(1)}{\epsilon}
- \left\{
{\pounds}_{Y}
-{\pounds}_{X}^{2}
\right\} \epsilon
, \nonumber
\\
% \label{eq:kouchan-016.17}
\stackrel{(2)}{{\cal P}}
&:=& \stackrel{(2)}{p}
- 2 {\pounds}_{X} \stackrel{(1)}{p}
- \left\{
{\pounds}_{Y}
-{\pounds}_{X}^{2}
\right\} p
,
\quad
\label{eq:kouchan-016.18}
\stackrel{(2)}{{\cal U}_{a}}
:= \stackrel{(2)}{(u_{a})}
- 2 {\pounds}_{X} \stackrel{(1)}{u_{a}}
- \left\{
{\pounds}_{Y}
-{\pounds}_{X}^{2}
\right\} u_{a}.
\nonumber\end{aligned}$$ Through $\bar{g}^{ab}\bar{u}_{a}\bar{u}_{b}$ $=$ $g^{ab}u_{a}u_{b}$ $=$ $-1$, the components of $\stackrel{(1)}{{\cal U}_{a}}$ and $\stackrel{(2)}{{\cal U}_{a}}$ are given by $$\begin{aligned}
&&
\stackrel{(1)}{{\cal U}_{a}}
=
- a \stackrel{(1)}{\Phi} (d\eta)_{a}
+ a \left(
D_{i} \stackrel{(1)}{v} + \stackrel{(1)}{{\cal V}_{i}}
\right)(dx^{i})_{a},
\quad
\label{eq:kouchan-17.399}
\stackrel{(2)}{{\cal U}_{a}}
=
\stackrel{(2)}{{\cal U}_{\eta}} (d\eta)_{a}
+ a \left(
D_{i} \stackrel{(2)}{v}
+ \stackrel{(2)}{{\cal V}_{i}}
\right) (dx^{i})_{a}
, \\
&&
\stackrel{(2)}{{\cal U}_{\eta}}
:=
a \left\{
\left(\stackrel{(1)}{\Phi}\right)^{2}
- \stackrel{(2)}{\Phi}
- \left(
D_{i}\stackrel{(1)}{v}
+ \stackrel{(1)}{{\cal V}_{i}}
- \stackrel{(1)}{\nu_{i}}
\right)
\left(
D^{i}\stackrel{(1)}{v}
+ \stackrel{(1)}{{\cal V}^{i}}
- \stackrel{(1)}{\nu^{i}}
\right)
\right\}\end{aligned}$$ where $D^{i}\stackrel{(1)}{{\cal V}_{i}}$ $=$ $D^{i}\stackrel{(2)}{{\cal V}_{i}}$ $=$ $0$.
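As a sanity check on the first-order component $\stackrel{(1)}{{\cal U}_{a}} = -a\stackrel{(1)}{\Phi}(d\eta)_{a} + \dots$, the normalization $\bar{g}^{ab}\bar{u}_{a}\bar{u}_{b} = -1$ can be expanded to first order in a toy $1+1$-dimensional setting (one spatial direction, flat spatial metric); the following sympy sketch is ours and not part of the formulation, with `ue` denoting the unknown first-order $\eta$-component of $\bar{u}_{a}$.

```python
import sympy as sp

a, lam, Phi, Psi, nu, ue, v = sp.symbols('a lam Phi Psi nu ue v')

# Background FRW metric in (eta, x), flat spatial section, cf. (background-metric)
g = sp.Matrix([[-a**2, 0],
               [0,      a**2]])
# First-order gauge-invariant metric perturbation H_ab, one spatial dimension
H = sp.Matrix([[-2*a**2*Phi,  a**2*nu],
               [a**2*nu,     -2*a**2*Psi]])
gbar = g + lam*H

# Perturbed four-velocity: background u_a = -a (d eta)_a, plus a first-order
# perturbation with unknown eta-component ue and spatial part a*v
ubar = sp.Matrix([-a + lam*ue, lam*a*v])

# Normalization gbar^{ab} u_a u_b, expanded to first order in lam
norm = (ubar.T * gbar.inv() * ubar)[0, 0]
order0 = sp.simplify(norm.subs(lam, 0))               # background: -1
order1 = sp.simplify(sp.diff(norm, lam).subs(lam, 0))  # first-order coefficient
sol = sp.solve(sp.Eq(order1, 0), ue)
```

Solving the first-order coefficient for `ue` returns $-a\stackrel{(1)}{\Phi}$, reproducing the $\eta$-component of $\stackrel{(1)}{{\cal U}_{a}}$ above; the shift vector $\nu$ drops out at this order because the spatial part of $\bar{u}_{a}$ is itself first order.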
We also expand the Einstein tensor as $\bar{G}_{a}^{\;\;b}$ $=$ $G_{a}^{\;\;b}$ $+$ $\lambda {}^{(1)}\!G_{a}^{\;\;b}$ $+$ $\frac{1}{2} \lambda^{2} {}^{(2)}\!G_{a}^{\;\;b}$. From the decomposition of the first- and the second-order metric perturbation into gauge-invariant parts and gauge-variant parts, each order perturbation of the Einstein tensor is given by $$\begin{aligned}
{}^{(1)}\!G_{a}^{\;\;b}
=
{}^{(1)}{\cal G}_{a}^{\;\;b}\left[{\cal H}\right]
+ {\pounds}_{X}G_{a}^{\;\;b}
,
\quad
{}^{(2)}\!G_{a}^{\;\;b}
=
{}^{(1)}{\cal G}_{a}^{\;\;b}\left[{\cal L}\right]
+ {}^{(2)}{\cal G}_{a}^{\;\;b}\left[{\cal H}, {\cal H}\right]
+ 2 {\pounds}_{X} {}^{(1)}\!G_{a}^{\;\;b}
+ \left\{
{\pounds}_{Y}
-{\pounds}_{X}^{2}
\right\} G_{a}^{\;\;b}\end{aligned}$$ as expected from Eqs. (\[eq:matter-gauge-inv-def-1.0\]). Here, ${}^{(1)}{\cal G}_{a}^{\;\;b}\left[{\cal H}\right]$ and ${}^{(1)}{\cal G}_{a}^{\;\;b}\left[{\cal L}\right]
+ {}^{(2)}{\cal G}_{a}^{\;\;b}\left[{\cal H}, {\cal H}\right]$ are the gauge-invariant parts of the first- and the second-order perturbations of the Einstein tensor, respectively. On the other hand, the energy-momentum tensor of the perfect fluid is also expanded as $\bar{T}_{a}^{\;\;b}$ $=$ $T_{a}^{\;\;b}$ $+$ $\lambda {}^{(1)}\!T_{a}^{\;\;b}$ $+$ $\frac{1}{2} \lambda^{2} {}^{(2)}\!T_{a}^{\;\;b}$ and ${}^{(1)}\!T_{a}^{\;\;b}$ and ${}^{(2)}\!T_{a}^{\;\;b}$ are also given in the form $$\begin{aligned}
{}^{(1)}\!T_{a}^{\;\;b}
=
{}^{(1)}\!{\cal T}_{a}^{\;\;b}
+ {\pounds}_{X}T_{a}^{\;\;b}
,
\quad
{}^{(2)}\!T_{a}^{\;\;b}
=
{}^{(2)}\!{\cal T}_{a}^{\;\;b}
+ 2 {\pounds}_{X} {}^{(1)}\!T_{a}^{\;\;b}
+ \left\{
{\pounds}_{Y}
-{\pounds}_{X}^{2}
\right\} T_{a}^{\;\;b}\end{aligned}$$ through the definitions (\[eq:kouchan-016.18\]) of the gauge-invariant variables of the fluid components. Here, ${}^{(1)}\!{\cal T}_{a}^{\;\;b}$ and ${}^{(2)}\!{\cal T}_{a}^{\;\;b}$ are the gauge-invariant parts of the first- and the second-order perturbations of the energy-momentum tensor, respectively. Then, the first- and the second-order perturbations of the Einstein equation are necessarily given in terms of gauge-invariant variables: $$\begin{aligned}
\label{eq:linear-order-Einstein-equation}
{}^{(1)}{\cal G}_{a}^{\;\;b}\left[{\cal H}\right]
=
8\pi G {}^{(1)}{\cal T}_{a}^{\;\;b},
\quad
{}^{(1)}{\cal G}_{a}^{\;\;b}\left[{\cal L}\right]
+ {}^{(2)}{\cal G}_{a}^{\;\;b}\left[{\cal H}, {\cal H}\right]
=
8\pi G \;\; {}^{(2)}{\cal T}_{a}^{\;\;b}. \end{aligned}$$
In the single-perfect-fluid case, the traceless scalar part of the spatial component of the first equation in Eq. (\[eq:linear-order-Einstein-equation\]) yields $\stackrel{(1)}{\Psi}$ $=$ $\stackrel{(1)}{\Phi}$, due to the absence of anisotropic stress in the first-order perturbation of the energy-momentum tensor, and the other components of Eq. (\[eq:linear-order-Einstein-equation\]) give the well-known equations[@Bardeen-1980]. We show the second-order perturbations of the Einstein equation after imposing these first-order Einstein equations. Though we have derived all components of the second equation in Eq. (\[eq:linear-order-Einstein-equation\]), we only show its scalar parts for simplicity: $$\begin{aligned}
4\pi G a^{2} \stackrel{(2)}{{\cal E}}
&=&
\left(
- 3 {\cal H} \partial_{\eta}
+ \Delta
+ 3 K
- 3 {\cal H}^{2}
\right)
\stackrel{(2)}{\Phi}
- \Gamma_{0}
+
\frac{3}{2}
\left(
\Delta^{-1} D^{i}D_{j}\Gamma_{i}^{\;\;j}
- \frac{1}{3} \Gamma_{k}^{\;\;k}
\right)
\nonumber\\
&& \quad
-
\frac{9}{2}
{\cal H} \partial_{\eta}
\left( \Delta + 3 K \right)^{-1}
\left(
\Delta^{-1} D^{i}D_{j}\Gamma_{i}^{\;\;j}
- \frac{1}{3} \Gamma_{k}^{\;\;k}
\right)
\label{eq:kouchan-18.79}
, \\
8\pi G a^{2} (\epsilon + p) D_{i}\stackrel{(2)}{v}
&=&
- 2 \partial_{\eta}D_{i}\stackrel{(2)}{\Phi}
- 2 {\cal H} D_{i}\stackrel{(2)}{\Phi}
+ D_{i} \Delta^{-1} D^{k}\Gamma_{k}
\nonumber\\
&& \quad
- 3 \partial_{\eta}D_{i}
\left( \Delta + 3 K \right)^{-1}
\left(
\Delta^{-1} D^{i}D_{j}\Gamma_{i}^{\;\;j}
- \frac{1}{3} \Gamma_{k}^{\;\;k}
\right)
\label{eq:second-velocity-scalar-part-Einstein}
, \\
4 \pi G a^{2} \stackrel{(2)}{{\cal P}}
&=&
\left(
\partial_{\eta}^{2}
+ 3{\cal H} \partial_{\eta}
- K
+ 2\partial_{\eta}{\cal H}
+ {\cal H}^{2}
\right)
\stackrel{(2)}{\Phi}
-
\frac{1}{2}
\Delta^{-1} D^{i}D_{j}\Gamma_{i}^{\;\;j}
\nonumber\\
&& \quad
+
\frac{3}{2}
\left(
\partial_{\eta}^{2}
+ 2 {\cal H} \partial_{\eta}
\right)
\left( \Delta + 3 K \right)^{-1}
\left(
\Delta^{-1} D^{i}D_{j}\Gamma_{i}^{\;\;j}
- \frac{1}{3} \Gamma_{k}^{\;\;k}
\right)
,\\
\label{eq:kouchan-18.80}
\stackrel{(2)}{\Psi} - \stackrel{(2)}{\Phi}
&=&
\frac{3}{2}
\left( \Delta + 3 K \right)^{-1}
\left(
\Delta^{-1} D^{i}D_{j}\Gamma_{i}^{\;\;j}
- \frac{1}{3} \Gamma_{k}^{\;\;k}
\right)
.
\label{eq:kouchan-18.65}\end{aligned}$$ where ${\cal H}$ $:=$ $\partial_{\eta}a/a$. $\Gamma_{0}$, $\Gamma_{i}$ and $\Gamma_{ij}$ in Eqs. (\[eq:kouchan-18.79\])-(\[eq:kouchan-18.65\]) are defined by $$\begin{aligned}
\Gamma_{0}
&:=&
+ 8 \pi G a^{2} \left(\epsilon+p\right) D_{i}\stackrel{(1)}{v} D^{i}\stackrel{(1)}{v}
- 3 D_{k}\stackrel{(1)}{\Phi} D^{k}\stackrel{(1)}{\Phi}
- 8 \stackrel{(1)}{\Phi} \Delta\stackrel{(1)}{\Phi}
% \nonumber\\
% && \quad\quad\quad\quad\quad
- 3 \left(\partial_{\eta}\stackrel{(1)}{\Phi}\right)^{2}
- 12 \left(K + {\cal H}^{2}\right) \left(\stackrel{(1)}{\Phi}\right)^{2}
\nonumber\\
&&
- 4 \left(
\partial_{\eta}D_{i}\stackrel{(1)}{\Phi}+{\cal H} D_{i}\stackrel{(1)}{\Phi}
\right) \stackrel{(1)}{{\cal V}^{i}}
- 2 {\cal H} D_{k}\stackrel{(1)}{\Phi} \stackrel{(1)}{\nu^{k}}
% \nonumber\\
% &&
+ 8 \pi G a^{2} \left(\epsilon+p\right) \stackrel{(1)}{{\cal V}_{i}} \stackrel{(1)}{{\cal V}^{i}}
+ \frac{1}{2} D_{k}\stackrel{(1)}{\nu_{l}} D^{(k}\stackrel{(1)}{\nu^{l)}}
+ 3 {\cal H}^{2} \stackrel{(1)}{\nu^{k}} \stackrel{(1)}{\nu_{k}}
\nonumber\\
&&
+ D_{l}D_{k}\stackrel{(1)}{\Phi} \stackrel{(1)}{\chi^{lk}}
% \nonumber\\
% &&
- 2 {\cal H} D^{k}\stackrel{(1)}{\nu^{l}} \stackrel{(1)}{\chi_{kl}}
-\frac{1}{2}D^{k}\stackrel{(1)}{\nu^{l}}\partial_{\eta}\stackrel{(1)}{\chi_{lk}}
\nonumber\\
&&
+ \frac{1}{8} \partial_{\eta}\stackrel{(1)}{\chi_{lk}} \partial_{\eta}\stackrel{(1)}{\chi^{kl}}
+ {\cal H} \stackrel{(1)}{\chi_{kl}} \partial_{\eta}\stackrel{(1)}{\chi^{lk}}
- \frac{1}{8} D_{k}\stackrel{(1)}{\chi_{lm}} D^{k}\stackrel{(1)}{\chi^{ml}}
% \nonumber\\
% && \quad\quad\quad\quad\quad
+ \frac{1}{2} D_{k}\stackrel{(1)}{\chi_{lm}} D^{[l}\stackrel{(1)}{\chi^{k]m}}
- \frac{1}{2} \stackrel{(1)}{\chi^{lm}} \left(\Delta-K\right)\stackrel{(1)}{\chi_{lm}}
,
% \label{eq:kouchan-19.117}
\nonumber\\
\Gamma_{i}
&:=&
- 16 \pi G a^{2} \left(
\stackrel{(1)}{{\cal E}} + \stackrel{(1)}{{\cal P}}
\right)
D_{i}\stackrel{(1)}{v}
+ 12 {\cal H} \stackrel{(1)}{\Phi} D_{i}\stackrel{(1)}{\Phi}
- 4 \stackrel{(1)}{\Phi} \partial_{\eta}D_{i}\stackrel{(1)}{\Phi}
- 4 \partial_{\eta}\stackrel{(1)}{\Phi} D_{i}\stackrel{(1)}{\Phi}
\nonumber\\
&&
- 16 \pi G a^{2} \left(
\stackrel{(1)}{{\cal E}} + \stackrel{(1)}{{\cal P}}
\right)
\stackrel{(1)}{{\cal V}_{i}}
- 2 D^{j}\stackrel{(1)}{\Phi} D_{i}\stackrel{(1)}{\nu_{j}}
+ 2 D_{i}D^{j}\stackrel{(1)}{\Phi} \stackrel{(1)}{\nu_{j}}
+ 2 \Delta\stackrel{(1)}{\Phi} \stackrel{(1)}{\nu_{i}}
+ \stackrel{(1)}{\Phi} \Delta\stackrel{(1)}{\nu_{i}}
+ 2 K \stackrel{(1)}{\Phi} \stackrel{(1)}{\nu_{i}}
\nonumber\\
&&
- 4 {\cal H} \stackrel{(1)}{\nu^{j}} D_{i}\stackrel{(1)}{\nu_{j}}
% \nonumber\\
% &&
+ 2 D^{j}\stackrel{(1)}{\Phi} \partial_{\eta}\stackrel{(1)}{\chi_{ji}}
- 2 \partial_{\eta}D^{j}\stackrel{(1)}{\Psi} \stackrel{(1)}{\chi_{ij}}
\nonumber\\
&&
+ 2 D_{k}D_{[i}\stackrel{(1)}{\nu_{m]}} \stackrel{(1)}{\chi^{km}}
+ 2 D^{[k}\stackrel{(1)}{\nu^{j]}} D_{j}\stackrel{(1)}{\chi_{ki}}
+ 2 K \stackrel{(1)}{\nu^{j}} \stackrel{(1)}{\chi_{ij}}
- \stackrel{(1)}{\nu^{j}} \Delta\stackrel{(1)}{\chi_{ji}}
% \nonumber\\
% &&
- \frac{1}{2} \partial_{\eta}\stackrel{(1)}{\chi^{jk}} D_{i}\stackrel{(1)}{\chi_{kj}}
+ 2 \stackrel{(1)}{\chi^{kj}} \partial_{\eta}D_{[j}\stackrel{(1)}{\chi_{i]k}}
,
\label{eq:kouchan-19.118}
\\
\Gamma_{ij}
&:=&
16 \pi G a^{2} \left( \epsilon + p \right) D_{i}\stackrel{(1)}{v} D_{j}\stackrel{(1)}{v}
- 4 D_{i}\stackrel{(1)}{\Phi} D_{j}\stackrel{(1)}{\Phi}
- 8 \stackrel{(1)}{\Phi} D_{i}D_{j}\stackrel{(1)}{\Phi}
\nonumber\\
&& \quad\quad
+ \left\{
6 D_{k}\stackrel{(1)}{\Phi} D^{k}\stackrel{(1)}{\Phi}
+ 8 \stackrel{(1)}{\Phi} \Delta\stackrel{(1)}{\Phi}
+ 2 \left(\partial_{\eta}\stackrel{(1)}{\Phi}\right)^{2}
% \right.
% \nonumber\\
% && \quad\quad\quad\quad\quad
% \left.
+ 16 {\cal H} \stackrel{(1)}{\Phi} \partial_{\eta}\stackrel{(1)}{\Phi}
+ 8 \left(
2 \partial_{\eta}{\cal H} + K + {\cal H}^{2}
\right)
\left(\stackrel{(1)}{\Phi}\right)^{2}
\right\} \gamma_{ij}
\nonumber\\
&&
+ 32 \pi G a^{2} \left( \epsilon + p \right) D_{(i}\stackrel{(1)}{v} \stackrel{(1)}{{\cal V}_{j)}}
- 4 \partial_{\eta}\stackrel{(1)}{\Phi} D_{(i}\stackrel{(1)}{\nu_{j)}}
+ 4 \partial_{\eta}D_{(i}\stackrel{(1)}{\Phi} \stackrel{(1)}{\nu_{j)}}
+ \left(
4 \partial_{\eta}D_{k}\stackrel{(1)}{\Phi} \stackrel{(1)}{\nu^{k}}
+ 4 {\cal H} D_{k}\stackrel{(1)}{\Phi} \stackrel{(1)}{\nu^{k}}
\right) \gamma_{ij}
\nonumber\\
&&
+ 16 \pi G a^{2} \left( \epsilon + p \right) \stackrel{(1)}{{\cal V}_{i}} \stackrel{(1)}{{\cal V}_{j}}
- 2 \stackrel{(1)}{\nu^{k}} D_{k}D_{(i}\stackrel{(1)}{\nu_{j)}}
+ 2 \stackrel{(1)}{\nu_{k}} D_{i}D_{j}\stackrel{(1)}{\nu^{k}}
+ D_{i}\stackrel{(1)}{\nu^{k}} D_{j}\stackrel{(1)}{\nu_{k}}
+ D^{k}\stackrel{(1)}{\nu_{i}} D_{k}\stackrel{(1)}{\nu_{j}}
\nonumber\\
&& \quad\quad
+ \left(
- D_{k}\stackrel{(1)}{\nu_{l}} D^{[k}\stackrel{(1)}{\nu^{l]}}
- D_{k}\stackrel{(1)}{\nu_{l}} D^{k}\stackrel{(1)}{\nu^{l}}
- 2 \stackrel{(1)}{\nu_{k}} \Delta\stackrel{(1)}{\nu^{k}}
- 4 \partial_{\eta}{\cal H} \stackrel{(1)}{\nu_{k}}\stackrel{(1)}{\nu^{k}}
+ 6 {\cal H}^{2} \stackrel{(1)}{\nu_{k}} \stackrel{(1)}{\nu^{k}}
\right) \gamma_{ij}
\nonumber\\
&&
- 4 {\cal H} \partial_{\eta}\stackrel{(1)}{\Phi} \stackrel{(1)}{\chi_{ij}}
- 2 \partial_{\eta}^{2}\stackrel{(1)}{\Phi} \stackrel{(1)}{\chi_{ij}}
- 4 D^{k}\stackrel{(1)}{\Phi} D_{(i}\stackrel{(1)}{\chi_{j)k}}
+ 4 D^{k}\stackrel{(1)}{\Phi} D_{k}\stackrel{(1)}{\chi_{ij}}
- 8 K \stackrel{(1)}{\Phi} \stackrel{(1)}{\chi_{ij}}
+ 4 \stackrel{(1)}{\Phi} \Delta\stackrel{(1)}{\chi_{ij}}
\nonumber\\
&& \quad\quad
- 4 D^{k}D_{(i}\stackrel{(1)}{\Phi} \stackrel{(1)}{\chi_{j)k}}
+ 2 \Delta \stackrel{(1)}{\Phi} \stackrel{(1)}{\chi_{ij}}
+ 2 D_{l}D_{k}\stackrel{(1)}{\Phi} \stackrel{(1)}{\chi^{lk}} \gamma_{ij}
\nonumber\\
&&
- 2 D^{k}\stackrel{(1)}{\nu_{(i}} \partial_{\eta}\stackrel{(1)}{\chi_{j)k}}
- 2 \stackrel{(1)}{\nu^{k}} \partial_{\eta}D_{(i}\stackrel{(1)}{\chi_{j)k}}
+ 2 \stackrel{(1)}{\nu^{k}} \partial_{\eta}D_{k}\stackrel{(1)}{\chi_{ij}}
+ D^{k}\stackrel{(1)}{\nu^{l}} \partial_{\eta}\stackrel{(1)}{\chi_{lk}} \gamma_{ij}
\nonumber\\
&&
+ \partial_{\eta}\stackrel{(1)}{\chi_{ik}} \partial_{\eta}\stackrel{(1)}{\chi_{j}^{\;\;k}}
+ 2 D_{[l}\stackrel{(1)}{\chi_{k]i}} D^{k}\stackrel{(1)}{\chi_{j}^{\;\;l}}
- \frac{1}{2} D_{j}\stackrel{(1)}{\chi_{lk}} D_{i}\stackrel{(1)}{\chi^{lk}}
- \stackrel{(1)}{\chi^{lm}} D_{i}D_{j}\stackrel{(1)}{\chi_{ml}}
+ 2 \stackrel{(1)}{\chi^{lm}} D_{l}D_{(i}\stackrel{(1)}{\chi_{j)m}}
\nonumber\\
&& \quad\quad
- \stackrel{(1)}{\chi^{lm}} D_{m}D_{l}\stackrel{(1)}{\chi_{ij}}
+ \left(
- \frac{3}{4} \partial_{\eta}\stackrel{(1)}{\chi_{lk}} \partial_{\eta}\stackrel{(1)}{\chi^{kl}}
+ \frac{3}{4} D_{k}\stackrel{(1)}{\chi_{lm}} D^{k}\stackrel{(1)}{\chi^{ml}}
- \frac{1}{2} D_{k}\stackrel{(1)}{\chi_{lm}} D^{l}\stackrel{(1)}{\chi^{mk}}
+ K \stackrel{(1)}{\chi_{lm}} \stackrel{(1)}{\chi^{lm}}
\right) \gamma_{ij}
% \label{eq:kouchan-19.119}
\nonumber
,\end{aligned}$$ and $\Gamma_{i}^{\;\;j}$ $:=$ $\gamma^{jk}\Gamma_{ik}$. Equations (\[eq:kouchan-18.79\])-(\[eq:kouchan-18.65\]) coincide with those derived in Refs.[@KNs-cosmological] except for the definition of the source terms $\Gamma_{0}$, $\Gamma_{i}$, and $\Gamma_{ij}$. Further, as shown in Refs.[@KNs-cosmological], Eqs. (\[eq:kouchan-18.79\]) and (\[eq:kouchan-18.80\]) reduce to a single equation for $\stackrel{(2)}{\Phi}$. We have also derived similar equations in the case where the matter content of the universe is a single scalar field[@KNs-preparation].
In summary, we have extended our formulation without ignoring the first-order vector and tensor modes. As a result, the derived equations imply that, in principle, all types of mode coupling arise through the second-order Einstein equations. In some inflationary scenarios, tensor modes are also generated by quantum fluctuations. This extension will be useful to clarify the evolution of second-order perturbations in the presence of first-order tensor modes. Further, to apply this formulation to the non-linear effects in CMB physics[@Non-Gaussianity-in-CMB], we have to extend it to multi-field systems and to the Einstein-Boltzmann system. These extensions are left for future work.
[99]{} C.L. Bennett et al., Astrophys. J. Suppl. Ser. [**148**]{}, (2003), 1. K. Nakamura, Prog. Theor. Phys. [**110**]{} (2003), 723; K. Nakamura, Prog. Theor. Phys. [**113**]{} (2005), 481. K. Nakamura, Phys. Rev. D [**74**]{} (2006), 101301(R); K. Nakamura, Prog. Theor. Phys. [**117**]{} (2007), 17. K. Nakamura, in preparation. J. M. Bardeen, Phys. Rev. D [**22**]{} (1980), 1882; H. Kodama and M. Sasaki, Prog. Theor. Phys. Suppl. No.78, 1 (1984); V. F. Mukhanov, H. A. Feldman and R. H. Brandenberger, Phys. Rep. [**215**]{}, 203 (1992). N. Bartolo, S. Matarrese and A. Riotto, JCAP [**0401**]{}, 003 (2004); N. Bartolo, S. Matarrese and A. Riotto, Phys. Rev. Lett. [**93**]{} (2004), 231301; N. Bartolo, E. Komatsu, S. Matarrese and A. Riotto, Phys. Rept. [**402**]{}, 103 (2004); N. Bartolo, S. Matarrese, and A. Riotto, arXiv:astro-ph/0512481.
[^1]: E-mail:[email protected]
---
abstract: 'A strong entanglement monotone, which never increases under local operations and classical communications (LOCC), restricts quantum entanglement manipulation more strongly than the usual monotone since the usual one does not increase on average under LOCC. We propose new strong monotones in mixed-state entanglement manipulation under LOCC. These are related to the decomposability and 1-positivity of an operator constructed from a quantum state, and reveal geometrical characteristics of entangled states. These are lower bounded by the negativity or generalized robustness of entanglement.'
author:
- Satoshi Ishizaka
title: 'Strong monotonicity in mixed-state entanglement manipulation'
---
Introduction {#sec: Introduction}
============
It is a key concept for quantum information science that distant parties can manipulate quantum entanglement by local operations and classical communication (LOCC). However, entanglement manipulation is subject to some fundamental restrictions: LOCC cannot create entanglement [@Yang05a], and LOCC cannot increase the total amount of entanglement. This monotonicity is characterized by mathematical functions $E(\sigma)$, called entanglement monotones [@Vidal00a]. When a quantum state $\varrho_i$ is obtained from $\sigma$ with probability $p_i\!>\!0$ by LOCC ($i$ indexes the multiple outcomes), the functions satisfy $$E(\sigma)\!\ge\!\sum_i p_i E(\varrho_i),$$ and therefore $E(\sigma)$ does not increase [*on average*]{} under LOCC. Many such monotones have been proposed, such as entanglement measures (e.g. entanglement cost [@Bennett96a], distillable entanglement [@Bennett96a], and relative entropy of entanglement [@Vedral97a]), negativity [@Vidal02a; @Plenio05a], robustness of entanglement [@Vidal99b; @Steiner03a], best separable approximation (BSA) measure [@Karnas01a; @Lewenstein98a], and so on [@PlenioVirmani05a].
On the other hand, there exists a much stronger restriction in entanglement manipulation: the Schmidt number [@Terhal00b], which is a general extension of the Schmidt rank to mixed states, cannot increase even with an infinitesimally small probability. This type of restriction, called strong monotonicity here, may be characterized by strong monotone functions $M(\sigma)$ which satisfy $$M(\sigma)\!\ge\! M(\varrho_i) \hbox{~~for all $\varrho_i$}.$$ Namely $M(\sigma)$ [*never*]{} increases under LOCC. Note that the strong monotones are generally discontinuous functions as explicitly shown later. Concerning the conversion between bipartite pure states, the Schmidt number is the unique strong monotone (the conversion is impossible when the Schmidt rank of the target state is larger than that of the initial state, but otherwise the conversion is possible with nonzero probability [@Vidal99a; @Dur00b]). Positive partial transpose preserving (PPT-preserving) operations can overcome the monotonicity of the Schmidt number, and therefore all pure entangled states become convertible under PPT-preserving operations [@Ishizaka04b; @Ishizaka05a]. Here, a map $\Lambda$ is called a PPT-preserving operation [@Rains99b] when both $\Lambda$ and $\Gamma\!\circ\!\Lambda\!\circ\!\Gamma$ are completely positive (CP) maps, with $\Gamma$ being a map of the partial transpose [@Peres96a]. On the other hand, the manipulation of mixed-state entanglement still suffers restrictions even under PPT-preserving operations in single-copy settings [@Ishizaka04b; @Ishizaka05a]. This implies that there certainly exists strong monotonicity independent of the Schmidt number in mixed-state entanglement manipulation.
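The partial transpose map $\Gamma$ and the PPT condition invoked above are easy to check numerically. The following NumPy sketch (an illustration added here, not part of the original text; the states are arbitrary examples) implements $\Gamma$ on the second factor of a $d\otimes d$ state and confirms that the maximally entangled two-qubit state is NPT while a separable diagonal mixture stays PPT.

```python
import numpy as np

def partial_transpose(rho, d):
    """Partial transpose on the second factor of a state on C^d (x) C^d."""
    return rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

d = 2
phi = np.identity(d).reshape(d * d) / np.sqrt(d)   # |phi_d^+> = sum_i |ii>/sqrt(d)
P_plus = np.outer(phi, phi)                        # P_d^+

# Maximally entangled state: its partial transpose has a negative eigenvalue (NPT).
neg_eig = np.linalg.eigvalsh(partial_transpose(P_plus, d)).min()
print(neg_eig)                                     # -> -0.5 for d = 2

# A separable diagonal mixture of product states stays PPT.
sep = 0.5 * np.diag([1.0, 0, 0, 0]) + 0.5 * np.diag([0, 0, 0, 1.0])
print(np.linalg.eigvalsh(partial_transpose(sep, d)).min())  # -> 0.0
```

The same `partial_transpose` helper works for any $d$, since the map only permutes the two column indices of the density matrix.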
In this paper, we propose new strong monotones which are related to the decomposability and 1-positivity of an operator constructed from a quantum state. Here, an operator $Z$ is called decomposable when $Z$ is written as $Z\!=\!X\!+\!Y^\Gamma$ with $X,Y\!\ge\!0$, and $Z$ is called 1-positive when $\langle ef|Z|ef \rangle\!\ge\!0$ for every product state $|ef\rangle$ (see e.g. [@Horodecki96a; @Lewenstein00a; @Sanpera01a; @Kraus02a; @Ha03a; @Clarisse05a] for the relation between entanglement and decomposability or 1-positivity). The singlet fraction and negativity maximized over (stochastic) LOCC are also strong monotones, but the strong monotones studied in this paper are slightly different from those and reveal geometrical characteristics of entangled states. These are lower bounded by the negativity or generalized robustness of entanglement.
Strong monotone $M_1$ {#sec: Strong monotone M_1}
=====================
The first strong monotone we propose is the following:
[Theorem 1:]{} [*Let $\sigma^\Gamma\!=\!P-Q$ be the Jordan decomposition (orthogonal decomposition) of $\sigma^\Gamma$ and hence $P\!\ge\!0$, $Q\!\ge\!0$, and $PQ\!=\!0$. The function $M_1(\sigma)$, which is defined as the minimal $x$ such that $\sigma\!-\!(1\!-\!x)P^\Gamma$ is decomposable, is a strong monotone, i.e. $M_1(\sigma)$ never increases under LOCC (and even under PPT-preserving operations).* ]{}
Note that $\sigma\!-\!(1\!-\!x)P^\Gamma$ is always decomposable for $x\!=\!1$, and hence $M_1(\sigma)\!\le\!1$. Before proving the above theorem, let us show explicit examples of $M_1(\sigma)$ for several important classes of states.
\(i) [*Separable states:*]{} For every PPT state ($\sigma^\Gamma\!\ge\!0$), we have $P^\Gamma\!=\!\sigma$ and hence $\sigma\!-\!(1\!-\!x)P^\Gamma\!=\!x\sigma$, which is decomposable only for $x\!\ge\!0$. Since all separable states are PPT states, $M_1(\sigma)\!=\!0$ for every separable $\sigma$.
\(ii) [*Entangled pure states:*]{} Let $$|\phi_d^+\rangle=\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|ii\rangle$$ be a maximally entangled state on ${\mathbb C}^d\!\otimes\!{\mathbb C}^d$ and $P^+_d\!\equiv\!|\phi_d^+\rangle\langle\phi_d^+|$. When $\sigma\!=\!P^+_d$, $P\!=\!P_d^S/d$ where $P_d^S$ is the projector onto the symmetric subspace on ${\mathbb C}^d\!\otimes\!{\mathbb C}^d$. Any decomposable operator $Z$ must satisfy $\langle ef|Z|ef\rangle\!\ge\!0$ for every product states $|ef\rangle$, i.e. 1-positive, but $$\langle01|P^+_d-(1-x)\big(\frac{P_d^S}{d}\big)^\Gamma|01\rangle
=-\frac{1-x}{2d}.$$ Therefore $P^+_d\!-\!(1\!-\!x)(P_d^S/d)^\Gamma$ cannot be decomposable for $x\!<\!1$, and $M_1(P_d^+)\!=\!1$ independent of $d$. It has been shown that an entangled $|\psi\rangle$ can be converted to $P^+_r$ by LOCC, where $r\!\ge\!2$ is the Schmidt number of $|\psi\rangle$ [@Vidal99a]. Since $M_1$ never increases under LOCC and $M_1\!\le\!1$, we have $M_1(|\psi\rangle)\!=\!1$ for every entangled $|\psi\rangle$. Therefore, $M_1$ does not distinguish entangled pure states at all. This property, however, is desirable for the purpose of this paper, which is to study strong monotonicity independent of the Schmidt number. As mentioned before, the Schmidt number is the unique strong monotone concerning the conversion between pure states. Therefore, strong monotones independent of the Schmidt number should not distinguish entangled pure states.
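The matrix element above can be reproduced numerically. A minimal NumPy sketch, with illustrative values $d=3$ and $x=0.4$ (any $d\!\ge\!2$ and $x\!<\!1$ would do):

```python
import numpy as np

def partial_transpose(rho, d):
    """Partial transpose on the second factor of a state on C^d (x) C^d."""
    return rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

d, x = 3, 0.4                                      # illustrative values
phi = np.identity(d).reshape(d * d) / np.sqrt(d)
P_plus = np.outer(phi, phi)                        # P_d^+

V = np.zeros((d * d, d * d))                       # swap operator
for i in range(d):
    for j in range(d):
        V[i * d + j, j * d + i] = 1.0
P_sym = (np.identity(d * d) + V) / 2               # projector onto symmetric subspace

Z = P_plus - (1 - x) * partial_transpose(P_sym / d, d)
e01 = np.zeros(d * d); e01[1] = 1.0                # |01>, i.e. indices (0,1)
elem = e01 @ Z @ e01
print(elem, -(1 - x) / (2 * d))                    # both -> -0.1
```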
It should be noted that $M_1(|ef\rangle)\!=\!0$ as shown in (i) but in the close vicinity of $|ef\rangle$ there always exists a partially entangled pure state $|\psi\rangle$ for which $M_1(|\psi\rangle)\!=\!1$. As a result, it is found that $M_1(\sigma)$ is a discontinuous function.
According to the above examples (i) and (ii), the following is concluded:
[Corollary 1:]{} [*If $\sigma$ is single-copy distillable under LOCC (or under PPT-preserving operations), then $M_1(\sigma)\!=\!1$.* ]{}
\(iii) [*Antisymmetric Werner states:*]{} For the antisymmetric Werner state $\sigma^A_d\!=\!2/(d^2\!-\!d)P_d^A$, $P\!=\!(\openone_d\!-\!P_d^+)/(d^2\!-\!d)$, where $\openone_d$ and $P_d^A$ are the identity on ${\mathbb C}^d\!\otimes\!{\mathbb C}^d$ and the projector onto the antisymmetric subspace on ${\mathbb C}^d\!\otimes\!{\mathbb C}^d$, respectively. Then, $$\langle00|\sigma^A_d-(1-x)\big(\frac{\openone_d-P_d^+}{d^2-d}\big)^\Gamma|00\rangle
=-\frac{1-x}{d^2},$$ which cannot be decomposable for $x\!<\!1$, and $M_1(\sigma^A_d)\!=\!1$.
\(iv) [*Convex combination of $\sigma_0$ and $P_0^\Gamma$:*]{} Let $\sigma_0^\Gamma\!=\!P_0-Q_0$ be the Jordan decomposition. For the state of $$\sigma=\sigma_0+\lambda P_0^\Gamma,
\label{eq: convex combination}$$ $\sigma^\Gamma\!=\!(1+\lambda)P_0\!-\!Q_0$ (here $\sigma$ is not normalized but $M_1(\sigma)$ does not depend on the normalization). As a result, $$\begin{aligned}
\sigma-(1-x)P^\Gamma&\!\!\!=\!\!\!&\sigma_0+\lambda P_0^\Gamma
-(1-x)(1+\lambda)P_0^\Gamma \cr
&\!\!\!=\!\!\!&\sigma_0-\big[1-x(1+\lambda)\big]P_0^\Gamma,\end{aligned}$$ and therefore $$M_1(\sigma)=\frac{M_1(\sigma_0)}{1+\lambda}$$ for the state of Eq. (\[eq: convex combination\]). An entangled isotropic state $$\sigma_I=\eta P_d^++(1-\eta)\frac{\openone_d-P_d^+}{d^2-1},
\label{eq: Isotropic state}$$ where $\eta\!>\!1/d$, can be rewritten as $\sigma_I\!\propto\!\sigma_0\!+\!\frac{2d(1-\eta)}{(d\eta-1)(d+1)}P_0^\Gamma$ with $\sigma_0\!=\!P_d^+$ \[correspondingly $P_0^\Gamma\!=\!(\openone_d\!+\!dP_d^+)/(2d)$\], and therefore $$M_1(\sigma_I)=\frac{(d\eta-1)(d+1)}{(d\eta+1)(d-1)}.
\label{eq: monotone for an isotropic state}$$ Similarly, for an entangled Werner state $$\sigma_W=\mu \frac{2}{d^2-d}P_d^A + (1-\mu)\frac{2}{d^2+d}P_d^S,$$ where $\mu\!>\!1/2$, putting $\sigma_0\!=\!\sigma^A_d$ we have $$M_1(\sigma_W)=\frac{(2\mu-1)(d+1)}{2\mu+d-1}.
\label{eq: monotone for a Werner state}$$
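The rewriting of the isotropic state used above, and the resulting value of $M_1(\sigma_I)$, can be checked numerically. The sketch below uses the illustrative values $d=3$, $\eta=0.6$ (any $\eta\!>\!1/d$ would do):

```python
import numpy as np

d, eta = 3, 0.6                                    # illustrative values (eta > 1/d)
phi = np.identity(d).reshape(d * d) / np.sqrt(d)
P_plus = np.outer(phi, phi)                        # P_d^+
I = np.identity(d * d)

sigma_I = eta * P_plus + (1 - eta) * (I - P_plus) / (d**2 - 1)

lam = 2 * d * (1 - eta) / ((d * eta - 1) * (d + 1))
P0_Gamma = (I + d * P_plus) / (2 * d)              # Gamma of P_0 = P_d^S / d
candidate = P_plus + lam * P0_Gamma                # sigma_0 + lam * P_0^Gamma
candidate /= np.trace(candidate)                   # normalize the unnormalized state

print(np.allclose(sigma_I, candidate))             # -> True
# M_1(P_d^+)/(1+lam) reproduces Eq. (monotone for an isotropic state)
print(np.isclose(1 / (1 + lam),
                 (d*eta - 1) * (d + 1) / ((d*eta + 1) * (d - 1))))  # -> True
```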
It has been mentioned that $\eta$ of an isotropic state and $\mu$ of a Werner state cannot increase under LOCC [@Masanes05a]. This is confirmed by $M_1(\sigma_I)$ and $M_1(\sigma_W)$: since these functions are monotonic with respect to $\eta$ and $\mu$, respectively, and never increase under LOCC or even under PPT-preserving operations, it follows moreover that $\eta$ and $\mu$ cannot increase even under PPT-preserving operations.
Equating Eq. (\[eq: monotone for an isotropic state\]) and Eq. (\[eq: monotone for a Werner state\]), we have a relation: $$\mu=\frac{(d-1)\eta}{(d-2)\eta+1},
\label{eq: mu and eta}$$ and therefore the reversible conversion of $\sigma_W\!\leftrightarrow\!\sigma_I$ is not prohibited by $M_1$ if $\mu$ and $\eta$ satisfy Eq. (\[eq: mu and eta\]). Indeed, this reversible conversion is possible by PPT-preserving operations, whose trace non-preserving maps are $$\begin{aligned}
&&\textstyle
\sigma_W\!\rightarrow\!\sigma_I:
\Lambda(X)=(\hbox{tr}XP_d^A)P_d^+
+(\hbox{tr}XP_d^S)\frac{\openone_d-P_d^+}{d+1}, \cr
&&\textstyle
\sigma_I\!\rightarrow\!\sigma_W:
\Lambda(X)=(\hbox{tr}XP_d^+)P_d^A
+(\hbox{tr}X(\openone_d\!-\!P_d^+))\frac{P_d^S}{d+1}.\end{aligned}$$ It should be noted, however, that the conversion $\sigma_W\!\rightarrow\!\sigma_I$ under LOCC is still subject to the strong monotonicity of the Schmidt number, which is 2 for an entangled $\sigma_W$ but larger than 2 for $\sigma_I$ when $\eta\!>\!2/d$ ($d\!>\!2$) [@Terhal00b].
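The map $\sigma_W\!\rightarrow\!\sigma_I$ above can be verified numerically; the sketch below applies the trace non-preserving map to a Werner state (illustrative values $d=3$, $\mu=0.7$) and checks that the normalized output is the isotropic state with $\eta$ obtained by inverting Eq. (\[eq: mu and eta\]). The symmetric part of $\sigma_W$ is normalized by $2/(d^2+d)$, the inverse dimension of the symmetric subspace.

```python
import numpy as np

d, mu = 3, 0.7                                     # illustrative values (mu > 1/2)
phi = np.identity(d).reshape(d * d) / np.sqrt(d)
P_plus = np.outer(phi, phi)                        # P_d^+
I = np.identity(d * d)
V = np.zeros((d * d, d * d))                       # swap operator
for i in range(d):
    for j in range(d):
        V[i * d + j, j * d + i] = 1.0
P_S, P_A = (I + V) / 2, (I - V) / 2                # symmetric / antisymmetric projectors

sigma_W = mu * 2 / (d**2 - d) * P_A + (1 - mu) * 2 / (d**2 + d) * P_S

# trace non-preserving map sigma_W -> sigma_I from the text
out = np.trace(sigma_W @ P_A) * P_plus + np.trace(sigma_W @ P_S) * (I - P_plus) / (d + 1)
out /= np.trace(out)

eta = mu / ((d - 1) - (d - 2) * mu)                # Eq. (mu and eta) solved for eta
sigma_I = eta * P_plus + (1 - eta) * (I - P_plus) / (d**2 - 1)
print(np.allclose(out, sigma_I))                   # -> True
```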
\(v) [*Two-qubit states:*]{} Let us consider an entangled Bell diagonal state: $$\sigma_B=\sum_{i=0}^{3} p_i |e_i\rangle\langle e_i|, \,\,\,
(p_0 \ge p_1 \ge p_2 \ge p_3),$$ where $p_0\!>\!1/2$ so that $\sigma_B$ is entangled. The Bell basis is chosen as $|e_i\rangle\!=\!\{i|\psi^-\rangle,|\psi^+\rangle,|\phi^-\rangle,i|\phi^+\rangle\}$ with $|\phi^{\pm}\rangle\!=\!(|00\rangle\!\pm\!|11\rangle)/\sqrt{2}$ and $|\psi^{\pm}\rangle\!=\!(|01\rangle\!\pm\!|10\rangle)/\sqrt{2}$, which is the magic basis and hence $|\tilde e_i\rangle\!=\!(\sigma_2\otimes\sigma_2)|e_i^*\rangle\!=\!
|e_i\rangle$ [@Bennett96a; @Wootters98a]. Then we have $$\begin{aligned}
\lefteqn{\sigma_B-(1-x)P^\Gamma=
\big[p_0-\frac{(1-x)(2p_0-1)}{4}\big]|e_0\rangle\langle e_0|}\quad\quad\quad\cr
&&+\sum_{i=1}^{3}\!\big[p_i\!-\!\frac{(1\!-\!x)(2p_0\!-\!1\!+\!4p_i)}{4}\big]|e_i\rangle\langle e_i|. \quad
\label{eq: Bell diagonal}\end{aligned}$$ For the decomposability of such a Bell diagonal operator, the following is useful:
[Lemma 1:]{} [*An operator $A\!=\!\sum_{i=0}^{3} a_i |e_i\rangle\langle e_i|$ on ${\mathbb C}^2\!\otimes\!{\mathbb C}^2$, where $a_i\!\ge\!a_{i+1}$, is decomposable (and 1-positive) if and only if $a_2\!+\!a_3\!\ge\!0$.* ]{}
[*Proof:*]{} When $A$ on ${\mathbb C}^2\!\otimes\!{\mathbb C}^2$ is expressed as $A\!=\!(I\!\otimes\!\Theta)P_2^+$ with $\Theta$ being a map, the following four statements are equivalent [@Stormer63a; @Woronowicz76a]: (a) $A$ is decomposable, (b) $\Theta$ is a decomposable positive map, (c) $\Theta$ is a positive map, and (d) $A$ is 1-positive. Since $|e_i\rangle$ is an orthogonal set, any pure state is expanded as $|\psi\rangle=\sum_{i=0}^{3}\lambda_i|e_i\rangle$ with $\sum_i|\lambda_i|^2\!=\!1$. For $|\psi\rangle$ to be a product state, $\langle\tilde \psi|\psi\rangle\!=\!\sum_i \lambda_i^2\!=\!0$ must hold [@Bennett96a]. Writing the real and imaginary parts of $\lambda_i$ as $r_i$ and $c_i$, respectively, these two conditions become $\sum_i r_i^2\!=\!\sum_i c_i^2\!=\!1/2$ and $\sum_i r_i c_i=0$. Therefore, the four-dimensional real vectors $\vec r$ and $\vec c$, whose elements are $r_i$ and $c_i$, respectively, satisfy $|\vec r|^2\!=\!|\vec c|^2\!=\!1/2$ and $\vec r\cdot \vec c\!=\!0$, and it is easy to see $r_3^2\!+\!c_3^2\!\le\!1/2$. Then, $$\begin{aligned}
\langle \psi |A|\psi\rangle&\!\!\!=\!\!\!&\sum_{i=0}^3|\lambda_i|^2a_i
\ge \sum_{i=0}^2|\lambda_i|^2a_2+|\lambda_3|^2a_3 \cr
&\!\!\!=\!\!\!&a_2-(a_2-a_3)(r_3^2+c_3^2)\ge \frac{1}{2}(a_2+a_3),\end{aligned}$$ and therefore if $a_2\!+\!a_3\!\ge\!0$ then $\langle ef |A|ef\rangle\!\ge\!0$ for every $|ef\rangle$. Conversely, if $a_2\!+\!a_3\!<\!0$ then $\langle 00 |A|00\rangle\!<\!0$.
Using this lemma, it is found that Eq. (\[eq: Bell diagonal\]) is decomposable if and only if $x\!\ge\!(2p_0\!-\!1)/(1\!-\!2p_1)$, and $$M_1(\sigma_B)=\frac{2p_0-1}{1-2p_1}
\label{eq: monotone for Bell}$$ for an entangled $\sigma_B$. It has been shown that $p_0$ of the entangled Bell diagonal state cannot increase under LOCC [@Verstraete03b]. By Eq. (\[eq: monotone for Bell\]), it is found that $p_1$ also cannot increase unless $p_0$ is decreased. It has been shown further that almost all entangled two-qubit states can be converted to Bell diagonal states by LOCC in a reversible fashion [@Verstraete01a]. Such reversible LOCC does not change $M_1$, and hence $M_1(\sigma)$ for such an entangled two-qubit state $\sigma$ agrees with Eq. (\[eq: monotone for Bell\]) for the converted $\sigma_B$.
Note that Eq. (\[eq: monotone for Bell\]) is equal to 1 for $p_1\!=\!1\!-\!p_0$, and therefore $M_1(\sigma)\!=\!1$ for every rank-2 entangled two-qubit state. However, rank-2 two-qubit states are not single-copy distillable [@Kent98a] (though a special state is quasi-distillable [@Horodecki99a; @Verstraete01a]) under LOCC or even under PPT-preserving operations [@Ishizaka04b; @Ishizaka05a]. Therefore the converse of Corollary 1 does not hold in general.
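Equation (\[eq: monotone for Bell\]) can be checked numerically without solving the full optimization: the ratio $\langle00|Q^\Gamma|00\rangle/\langle00|P^\Gamma|00\rangle$, with $\sigma_B^\Gamma\!=\!P\!-\!Q$ the Jordan decomposition, reproduces $(2p_0\!-\!1)/(1\!-\!2p_1)$. The sketch below (with illustrative weights) builds $\sigma_B$ in the computational basis, where the phases of the magic basis drop out of the projectors.

```python
import numpy as np

def partial_transpose(rho, d):
    """Partial transpose on the second factor of a state on C^d (x) C^d."""
    return rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

p = np.array([0.6, 0.2, 0.15, 0.05])               # illustrative weights, p0 > 1/2
bell = np.array([[0, 1, -1, 0],                    # |psi->
                 [0, 1,  1, 0],                    # |psi+>
                 [1, 0,  0, -1],                   # |phi->
                 [1, 0,  0, 1]]) / np.sqrt(2)      # |phi+>
sigma_B = sum(p[i] * np.outer(bell[i], bell[i]) for i in range(4))

w, U = np.linalg.eigh(partial_transpose(sigma_B, 2))   # Jordan decomposition of sigma^Gamma
P = U @ np.diag(np.clip(w, 0, None)) @ U.T             # positive part
Q = U @ np.diag(np.clip(-w, 0, None)) @ U.T            # negative part

ratio = partial_transpose(Q, 2)[0, 0] / partial_transpose(P, 2)[0, 0]
print(ratio, (2 * p[0] - 1) / (1 - 2 * p[1]))          # both -> 1/3
```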
A family of strong monotones {#sec: A family of strong monotones}
============================
Let us now prove Theorem 1. Our starting point is the following function: $$M(\sigma)=\min_{\sigma_\pm\in{\cal C}_\pm} \sup_{\Omega \in {\cal L}_\sigma}
\frac{\hbox{tr}\Omega (\sigma_-)}
{\hbox{tr}\Omega (\sigma_+)},
\label{eq: function M}$$ where the minimization is performed over all possible decompositions of $\sigma\!=\!\sigma_+\!-\!\sigma_-$ such that $\sigma_+\!\in\!{\cal C}_+$ and $\sigma_-\!\in\!{\cal C}_-$. Note that $\sigma_\pm$ (unnormalized) are not necessarily positive, and the sets ${\cal C}_\pm$ are specified later. The supremum is taken over all possible operations $\Omega$ that belong to some operational class ${\cal L}_\sigma$, which may depend on $\sigma$. Then, suppose that $\sigma$ is converted to $\varrho$ by LOCC or PPT-preserving operations with nonzero probability $p$, and hence there exists a PPT-preserving map $\Lambda_{\sigma\rightarrow\varrho}$ such that $\Lambda_{\sigma\rightarrow\varrho}(\sigma)\!=\!p\varrho$ (LOCC is also PPT-preserving). Moreover, suppose that the sets ${\cal C}_\pm$ have been chosen such that $(1/p)\Lambda_{\sigma\rightarrow\varrho}(\sigma_\pm)\!\equiv\!\varrho_\pm
\!\in\!{\cal C}_\pm$, and suppose that ${\cal L}_\sigma$ has been chosen such that $\Omega\!\circ\!\Lambda_{\sigma\rightarrow\varrho}\!\in\!{\cal L}_\sigma$ for every $\Omega\!\in\!{\cal L}_\varrho$. Under these assumptions, the function $M(\sigma)$ is indeed a strong monotone because $$\begin{aligned}
M(\sigma)&\!\!\!=\!\!\!&\min_{\sigma_\pm\in{\cal C}_\pm}
\sup_{\Omega \in {\cal L}_\sigma}
\frac{\hbox{tr}\Omega (\sigma_-)}{\hbox{tr}\Omega (\sigma_+)} \cr
&\!\!\!\ge\!\!\!&\min_{\sigma_\pm\in{\cal C}_\pm}
\sup_{\Omega \in {\cal L}_\varrho}
\frac{\hbox{tr}\Omega \circ\Lambda_{\sigma\rightarrow\varrho} (\sigma_-)}
{\hbox{tr}\Omega \circ \Lambda_{\sigma\rightarrow\varrho} (\sigma_+)} \cr
&\!\!\!=\!\!\!&\min_{\sigma_\pm\in{\cal C}_\pm}
\sup_{\Omega \in {\cal L}_\varrho}
\frac{\hbox{tr}\Omega (\varrho_-)}{\hbox{tr}\Omega (\varrho_+)} \cr
&\!\!\!\ge\!\!\!&\min_{\varrho_\pm\in{\cal C}_\pm}
\sup_{\Omega \in {\cal L}_\varrho}
\frac{\hbox{tr}\Omega (\varrho_-)}{\hbox{tr}\Omega (\varrho_+)}=M(\varrho),
\label{eq: strong monotonicity}\end{aligned}$$ and $M$ never increases under the conversion of $\sigma\!\rightarrow\!\varrho$ if it is possible with nonzero probability. Note that the function $M$ is neither convex nor concave in general.
Then, let us consider the case where ${\cal C}_\pm\!=\!\{A|A^\Gamma\!\ge\!0\}$. Namely, the minimization in Eq. (\[eq: function M\]) is performed over all possible decompositions $\sigma^\Gamma\!=\!a_+\!-\!a_-$ with $a_+\!\ge\!0$ and $a_-\!\ge\!0$. This is the same decomposition introduced in [@Vidal02a] as the minimization problem for the negativity (see also [@PlenioVirmani05a]). Since $\Lambda_{\sigma\rightarrow\varrho}$ is PPT-preserving, $\Gamma\!\circ\!\Lambda_{\sigma\rightarrow\varrho}\!\circ\!\Gamma$ is a CP map. Therefore $\varrho_\pm^\Gamma\!=\!(1/p)\Gamma\!\circ\!
\Lambda_{\sigma\rightarrow\varrho}\!\circ\!\Gamma(a_\pm)\!\ge\!0$ and $\varrho_\pm\!\in\!{\cal C}_\pm$ is satisfied [@Vidal02a]. The explicit form of the function $M(\sigma)$ in this case, denoted by $M_1(\sigma)$, is $$M_1(\sigma)=
\min_{a_\pm\ge0} \sup_{\Omega \in {\cal L}_\sigma}
\frac{\hbox{tr}\Omega (a_-^\Gamma)}
{\hbox{tr}\Omega (a_+^\Gamma)}.
\label{eq: monotone 1}$$ where ${\cal L}_\sigma$ is chosen as a set of PPT-preserving operations restricted to $\hbox{tr}\Omega(\sigma)\!>\!0$ (note that the optimization is a supremum, considering $\hbox{tr}\Omega(\sigma)\!\rightarrow\!0$). For this choice of ${\cal L}_\sigma$, $\Omega\!\circ\!\Lambda_{\sigma\rightarrow\varrho}\!\in\!
{\cal L_\sigma}$ for every $\Omega\!\in\!{\cal L_\varrho}$ and hence the strong monotonicity Eq. (\[eq: strong monotonicity\]) holds. Here, suppose that there exists a nonzero positive operator $a\!\ge\!0$ such that $a'_\pm\!\equiv\!a_\pm\!-\!a\!\ge\!0$ for some decomposition of $\sigma^\Gamma\!=\!a_+\!-\!a_-$. Then, $$\max_\Omega
\frac{\hbox{tr}\Omega (a^\Gamma_-)}
{\hbox{tr}\Omega (a_+^\Gamma)}
=
\max_\Omega
\frac{\hbox{tr}\Omega (a'^\Gamma_-)
\!+\!\hbox{tr}\Omega (a^\Gamma)}
{\hbox{tr}\Omega(a'^\Gamma_+)
\!+\!\hbox{tr}\Omega(a^\Gamma)}
\ge
\max_\Omega
\frac{\hbox{tr}\Omega (a'^\Gamma_-)}
{\hbox{tr}\Omega(a'^\Gamma_+)},
\nonumber$$ where $\hbox{tr}\Omega(a^\Gamma)\!=\!
\hbox{tr}[\Gamma\!\circ\!\Omega\!\circ\!\Gamma(a)]\!\ge\!0$ and $\hbox{tr}\Omega(a'^\Gamma_+)=\hbox{tr}\Omega(\sigma)\!+\!\hbox{tr}\Omega(a'^\Gamma_-)\!\ge\!\hbox{tr}\Omega(a'^\Gamma_-)$ were used. As a result, it is found that the minimization in Eq. (\[eq: monotone 1\]) is reached when $a_\pm$ are orthogonal to each other and hence $$M_1(\sigma)=\max_{\Omega\in{\cal L}_\sigma}\frac{\hbox{tr}\Omega(Q^\Gamma)}{\hbox{tr}\Omega(P^\Gamma)}
\label{eq: monotone 1b}$$ with $\sigma^\Gamma\!=\!P\!-\!Q$ being the Jordan decomposition. Moreover, it can be assumed that $\Omega\!=\!{\cal R}\!\circ\!\Omega$ where $${\cal R}(X)=\int dUdV (U \otimes V)X(U \otimes V)^\dagger$$ is the random application of local unitary transformations. All such PPT-preserving operations have the form of $$\Omega(X)=(\hbox{tr}XA)\openone$$ with $A\!\ge\!0$ and $A^\Gamma\!\ge\!0$ [@Cirac01a]. Further, since $\Omega$ is restricted to $\hbox{tr}\Omega(\sigma)\!>\!0$ and hence $\hbox{tr}\Omega(P^\Gamma)=\hbox{tr}\Omega(\sigma)\!+\!\hbox{tr}\Omega(Q^\Gamma)\!>\!0$, we can assume $\hbox{tr}P^\Gamma A\!=\!1$ without loss of generality. Then, $M_1(\sigma)$ is reduced to $M_1(\sigma)\!=\!\max_{A} \hbox{tr}(Q^\Gamma A)$ subject to $$A\ge 0, \,\,\, A^\Gamma\ge0, \,\,\, \hbox{tr}P^\Gamma A=1.
\label{eq: primary constraints}$$ This is a convex optimization problem, whose optimal value coincides with the optimal value of the dual problem [@Boyd04a]. Since $$\begin{aligned}
\hbox{tr}Q^\Gamma A&\!\!\!=\!\!\!&
-\hbox{tr}Y A^\Gamma -x (1-\hbox{tr}P^\Gamma A) \cr
&&-\hbox{tr} A(-Q^\Gamma+xP^\Gamma-Y^\Gamma)+x,\end{aligned}$$ where $x$ is a Lagrange multiplier, the dual problem is $M_1(\sigma)\!=\!\min x$ subject to the constraints of $Y\!\ge\!0$ and $$X\equiv-Q^\Gamma+xP^\Gamma-Y^\Gamma=\sigma-(1-x)P^\Gamma-Y^\Gamma\ge0.$$ These constraints can be read as $\sigma\!-\!(1\!-\!x)P^\Gamma\!=\!X\!+\!Y^\Gamma$ with $X,Y\!\ge\!0$, and consequently the theorem 1 is obtained.
It is obvious that $M_1(\sigma)$ is a strong monotone under PPT-preserving operations because ${\cal L}_\sigma$ was chosen as a set of PPT-preserving operations. Then, let us consider another function $M_1^{sep}(\sigma)$ for which ${\cal L}_\sigma$ in Eq. (\[eq: monotone 1\]) is replaced by the set of LOCC restricted to $\hbox{tr}\Omega(\sigma)\!>\!0$. By this, since the set of stochastic LOCC coincides with the set of stochastic separable operations, the constraints of Eq. (\[eq: primary constraints\]) become [@Cirac01a] $$A\ge 0, \,\,\, A\hbox{~is separable}, \,\,\, \hbox{tr}P^\Gamma A=1.$$ Following an idea of [@Brandao05a] and putting $$A=\sum_i q_i |ef^{(i)}\rangle\langle ef^{(i)}|,$$ with $q_i\!\ge\!0$, we have $$\begin{aligned}
\hbox{tr}Q^\Gamma A&\!\!\!=\!\!\!&
-x \Big(1-\sum_i q_i\langle ef^{(i)}| P^\Gamma |ef^{(i)}\rangle\Big) \cr
&&-\sum_i q_i \langle ef^{(i)}|-Q^\Gamma+xP^\Gamma|ef^{(i)}\rangle +x.\end{aligned}$$ Therefore, it is found that the corresponding dual problem becomes $M_1^{sep}(\sigma)\!=\!\min x$ subject to $$\langle ef|\sigma-(1-x)P^\Gamma|ef\rangle\ge 0 \hbox{~~for every $|ef\rangle$},
\label{eq: separable constraint}$$ i.e. subject to the condition that $\sigma\!-\!(1\!-\!x)P^\Gamma$ is 1-positive. This dual problem for $M_1^{sep}(\sigma)$, like that for $M_1(\sigma)$, has a simple geometrical meaning, as shown in Fig. \[fig: Geometrical structure\].
\[0.45\][![ The set of positive operators (unnormalized states) and the set of PPT operators are schematically shown as two circles. The intersection of the two circles corresponds to the set of (unnormalized) PPT states. The set of decomposable operators corresponds to the convex cone of the two circles. Moreover, the set of 1-positive operators contains the set of decomposable operators. When $\sigma^\Gamma\!=\!P\!-\!Q$ is the Jordan decomposition, $P^\Gamma$ is located on the edge of the set of PPT operators. $P^\Gamma$ is an unnormalized PPT state for many classes of states, but sometimes $P^\Gamma$ is not a state (see [@Ishizaka04a]). On the other hand, $\delta\!\equiv\!\sigma\!-\!(1\!-\!x)P^\Gamma$ is located on the edge of the set of 1-positive operators. The ratio of the interior division $x$ corresponds to the strong monotone $M_1^{sep}(\sigma)$. When $\delta$ is located on the edge of the set of decomposable operators, the ratio of the interior division $x$ corresponds to the strong monotone $M_1(\sigma)$. []{data-label="fig: Geometrical structure"}](fig1.eps "fig:")]{}
Although $M_1^{sep}$ is a strong monotone only under LOCC (not under PPT-preserving operations), all examples (i)-(v) for $M_1$ also hold for $M_1^{sep}$ without any modification, namely $M_1(\sigma)\!=\!M_1^{sep}(\sigma)$ for those classes of states. Note that when $Z\!\equiv\!\sigma\!-\!(1\!-\!x)P^\Gamma$ is expressed as $Z\!=\!(F\!\otimes\!\Theta)P_d^+$, if $\Theta$ is a non-decomposable positive map for some $x$, then $M_1(\sigma)\!>\!M_1^{sep}(\sigma)$ (see also Fig. \[fig: Geometrical structure\]). Here $F$ is a map of the local filter that converts $\openone/d$ to $Z_A\!=\!\hbox{tr}_B Z$ ($Z_A$ must be positive so that $Z$ is 1-positive). Note further that from Eq. (\[eq: monotone 1\]), $M_1(\sigma)$ and $M_1^{sep}(\sigma)$ are lower bounded by the negativity $N(\sigma)\!=\!(\hbox{tr}|\sigma^\Gamma|\!-\!1)/2$ [@Vidal02a] as $$M_1(\sigma)\ge M_1^{sep}(\sigma)\ge
\frac{\hbox{tr}Q}{\hbox{tr}P}=\frac{N(\sigma)}{1+N(\sigma)}.
\label{eq: Lower bound for M_1}$$
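The lower bound (\[eq: Lower bound for M\_1\]) can be illustrated numerically for a Bell diagonal state, where $M_1$ is known in closed form from Eq. (\[eq: monotone for Bell\]); the weights below are illustrative.

```python
import numpy as np

def partial_transpose(rho, d):
    """Partial transpose on the second factor of a state on C^d (x) C^d."""
    return rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

p = np.array([0.6, 0.2, 0.15, 0.05])               # illustrative Bell-diagonal weights
bell = np.array([[0, 1, -1, 0], [0, 1, 1, 0],
                 [1, 0, 0, -1], [1, 0, 0, 1]]) / np.sqrt(2)
sigma_B = sum(p[i] * np.outer(bell[i], bell[i]) for i in range(4))

w = np.linalg.eigvalsh(partial_transpose(sigma_B, 2))
N = (np.abs(w).sum() - 1) / 2                      # negativity N(sigma)
M1 = (2 * p[0] - 1) / (1 - 2 * p[1])               # Eq. (monotone for Bell)
print(N, N / (1 + N), M1)                          # the bound N/(1+N) <= M1 holds
```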
Strong monotone $M_2$ {#sec: Strong monotone M_2}
=====================
Let us return to Eq. (\[eq: function M\]) and consider the case where ${\cal C}_+\!=\!\{A|A\!\ge\!0,A\hbox{~is separable}\}$ and ${\cal C}_-\!=\!\{A|A\!\ge\!0\}$, namely the same decomposition for the minimization problem of the generalized robustness of entanglement [@Vidal99b; @Steiner03a] (see also [@PlenioVirmani05a]). Moreover, ${\cal L}_\sigma$ is chosen as a set of LOCC restricted to $\hbox{tr}\Omega(\sigma)\!>\!0$. When $\Lambda_{\sigma\rightarrow\varrho}$ is LOCC, $\varrho_+\!=\!(1/p)\Lambda_{\sigma\rightarrow\varrho}(\sigma_+)$ is separable and $\varrho_\pm\!\in\!{\cal C}_\pm$ holds. The explicit form of the function $M(\sigma)$ in this case, denoted by $M_2^{sep}(\sigma)$, is $$M_2^{sep}(\sigma)=\min_{\sigma_\pm} \sup_{\Omega \in {\cal L}_\sigma}
\frac{\hbox{tr}\Omega (\sigma_-)}
{\hbox{tr}\Omega (\sigma_+)}
\label{eq: monotone 2}$$ with the constraints of $\sigma\!=\!\sigma_+\!-\!\sigma_-$, $\sigma_\pm\!\ge\!0$, and $\sigma_+$ is separable. Since $\hbox{tr}\Omega(\sigma_-)/\hbox{tr}\Omega(\sigma_+)
\!=\!\hbox{tr}\Omega(\sigma_-)/[\hbox{tr}\Omega(\sigma)\!+\!
\hbox{tr}\Omega(\sigma_-)]$, we have $0\!\le\!M_2^{sep}(\sigma)\!\le\!1$. As in the case of $M_1^{sep}(\sigma)$, the dual problem for $M_2^{sep}(\sigma)$ is $$\begin{aligned}
&& M_2^{sep}(\sigma)=\min_{\sigma_\pm}\min x,\cr
&&\sigma-(1-x)\sigma_+ \hbox{~is 1-positive}\cr
&&\sigma=\sigma_+-\sigma_-,\,\,\,\,\,\,\sigma_\pm\ge0,\,\,\,\,\,\,
\hbox{$\sigma_+$ is separable.}
\label{eq: monotone 2 dual}\end{aligned}$$
For every separable $\sigma$, $M_2^{sep}(\sigma)\!=\!0$ because the decomposition $\sigma_+\!=\!\sigma$ and $\sigma_-\!=\!0$ exists. On the other hand, it is somewhat surprising that $M_2^{sep}(|\psi\rangle)\!=\!1$ for every entangled $|\psi\rangle$. To see this, let $\sigma\!=\!P_2^+$ on ${\mathbb C}^2\!\otimes\!{\mathbb C}^2$, and let $Z\!\equiv\!P_2^+\!-\!(1\!-\!x)\sigma_+\!=\!(F\!\otimes\!\Theta)P_2^+$ using a local filtering map $F$ and some map $\Theta$. When $F$ corresponds to a rank-1 projection, $Z$ is 1-positive only when $Z\!\ge\!0$. However, $Z\!\ge\!0$ for $x\!<\!1$ only when $\sigma_+\!=\!P_2^+$, which is not separable and not allowed as a decomposition of $\sigma$. When $F$ corresponds to a projection of rank 2, $Z$ is 1-positive if and only if $Z$ is decomposable, and hence $Z\!=\!X+Y^\Gamma$ must hold with $X,Y\!\ge\!0$. For $x\!<\!1$, $Z$ is not positive, and hence $Y^\Gamma$ must have a negative eigenvalue; in that case $Y^\Gamma$ has three positive (and one negative) eigenvalues [@Verstraete01d; @Ishizaka04a], so $Z\!=\!X\!+\!Y^\Gamma$ must have three positive eigenvalues. However, $Z$ cannot have three positive eigenvalues because $\sigma_+\!\ge\!0$ implies $Z\!\le\!P_2^+$, which is of rank 1. Thus, the constraints in Eq. (\[eq: monotone 2 dual\]) cannot be satisfied for $x\!<\!1$, and hence $M_2^{sep}(P_2^+)\!=\!1$. Since every entangled pure $|\psi\rangle$ can be converted to $P_2^+$ by LOCC and $M_2^{sep}$ never increases under LOCC, $M_2^{sep}(|\psi\rangle)\!=\!1$. Moreover, as in the case of the example (iv) for $M_1(\sigma)$, for the mixed state of $$\sigma=\sigma_0+\lambda\sigma_+^*,$$ where $\sigma_+^*$ constitutes an optimal decomposition for $\sigma_0$, it is found that $$M_2^{sep}(\sigma)\le\frac{M_2^{sep}(\sigma_0)}{1+\lambda},$$ since $(1\!+\!\lambda)\sigma_+^*$ provides a feasible but not necessarily optimal decomposition for $\sigma$. In this way, $M_2^{sep}(\sigma)\!\in\![0,1]$ certainly represents non-trivial strong monotonicity under LOCC, and we have
[Theorem 2:]{} [*The function $M_2^{sep}(\sigma)$, which is defined by Eq. (\[eq: monotone 2 dual\]), is a strong monotone under LOCC. If $\sigma$ is single-copy distillable under LOCC, then $M_2^{sep}(\sigma)\!=\!1$.* ]{}
Note that $M_2^{sep}(\sigma)$ and $M_2(\sigma)$ (defined below) are lower bounded by the generalized robustness of entanglement $R_G$ [@Steiner03a] as $$M_2(\sigma)\ge M_2^{sep}(\sigma)\ge\min_{\sigma_\pm}\frac{\hbox{tr}\sigma_-}{\hbox{tr}\sigma_+}
=\frac{R_G(\sigma)}{1+R_G(\sigma)}.$$ Here, $M_2(\sigma)$ is a strong monotone for which ${\cal L}_\sigma$ in Eq. (\[eq: monotone 2\]) is chosen as the set of PPT-preserving operations, and hence $M_2(\sigma)$ is defined such that the constraint of the 1-positivity in Eq. (\[eq: monotone 2 dual\]) is replaced by the constraint of the decomposability.
Summary {#sec: Summary}
=======
We proposed four strong entanglement monotones, $M_1$, $M_1^{sep}$, $M_2$, and $M_2^{sep}$, and studied their properties. All of these strong monotones take the value 1 for every pure entangled state, and hence represent strong monotonicity independent of the Schmidt number in mixed-state entanglement manipulation. Moreover, these strong monotones provide necessary conditions for single-copy distillability, and they are lower bounded by the negativity or the generalized robustness of entanglement. All of them are derived from the strong monotone function $M(\sigma)$ given by Eq. (\[eq: function M\]), whose supremum over LOCC or PPT-preserving operations can be reduced to a simple convex optimization problem. The constraint of the dual optimization problem is described by the decomposability and 1-positivity of an operator constructed from a given quantum state, and it clearly reveals the geometrical characteristics of an entangled state, as shown in Fig. \[fig: Geometrical structure\]. In this paper, we concentrated our attention on bipartite systems, but it is obvious that these strong monotones are applicable to every bipartite partitioning of a multipartite system.
Finally, let us briefly discuss the relation between the strong monotones studied in this paper and asymptotic distillability. If $\sigma$ is asymptotically distillable, there must exist a trace-preserving LOCC (ending with twirling) which produces an isotropic state close to a maximally entangled state; namely, there must exist an LOCC operation $\Lambda$ such that $\Lambda(\sigma^{\otimes n})\!=\!\sigma_I$ with $\eta\!\rightarrow\!1$ for $n\!\rightarrow\!\infty$. Here $\sigma_I$ is an isotropic state given by Eq. (\[eq: Isotropic state\]), and the distillable entanglement $E_D$ is given by $(\log_2 d)/n\!\rightarrow\!E_D(\sigma)$ with $d$ being the dimension of $\sigma_I$ [@Horodecki00a]. On the other hand, $M_1(\sigma_I)\!\rightarrow\!1$ for $\eta\!\rightarrow\!1$, as explicitly shown in Eq. (\[eq: monotone for an isotropic state\]) \[$M_1(\sigma)$ is a discontinuous function, but it is continuous on an isotropic state\]. This implies that $$1\ge M_1(\sigma^{\otimes n})\ge M_1(\Lambda(\sigma^{\otimes n})) = M_1(\sigma_I) \rightarrow 1,$$ and therefore $M_1(\sigma^{\otimes n})$ must satisfy $M_1(\sigma^{\otimes n})\!\rightarrow\!1$ for $n\!\rightarrow\!\infty$ in order for $\sigma$ to be asymptotically distillable. In this way, the asymptotic behavior of $M_1(\sigma^{\otimes n})$ provides a necessary condition for asymptotic distillability. Unfortunately, however, since the negativity satisfies $N(\sigma^{\otimes n})\!\rightarrow\!\infty$ for every non-PPT (NPT) state, it follows from Eq. (\[eq: Lower bound for M\_1\]) that $M_1(\sigma^{\otimes n})\!\rightarrow\!1$ for every NPT state, and hence $M_1$ does not provide any nontrivial result concerning asymptotic distillability. The same holds for $M_1^{sep}(\sigma^{\otimes n})$. On the other hand, it remains open whether $M_2^{sep}(\sigma^{\otimes n})\!\rightarrow\!1$ for every NPT state \[as far as we know, the asymptotic behavior of $R_G(\sigma^{\otimes n})$ for NPT states has not been established\].
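The divergence $N(\sigma^{\otimes n})\!\rightarrow\!\infty$ for NPT states follows from the multiplicativity of the trace norm over tensor products, since $(\sigma^{\otimes n})^\Gamma=(\sigma^\Gamma)^{\otimes n}$. A minimal sketch in Python (the two-qubit isotropic state is our own illustrative choice; its partial transpose, $(1-p)I/4+(p/2)\,\mathrm{SWAP}$, has the closed-form spectrum used below):

```python
def negativity_isotropic(p):
    """Negativity of the two-qubit isotropic state (1-p) I/4 + p |Phi+><Phi+|.

    Partial transpose spectrum: (1+p)/4 with multiplicity 3 (SWAP triplet)
    and (1-3p)/4 with multiplicity 1 (SWAP singlet).  The negativity is the
    absolute sum of the negative eigenvalues.
    """
    eigs = [(1.0 + p) / 4.0] * 3 + [(1.0 - 3.0 * p) / 4.0]
    return sum(-e for e in eigs if e < 0.0)

def negativity_n_copies(p, n):
    """Negativity of n tensor copies.

    (sigma^{x n})^Gamma = (sigma^Gamma)^{x n} and the trace norm is
    multiplicative over tensor products, so ||.||_1 = (1 + 2 N(sigma))^n
    and N(sigma^{x n}) = ((1 + 2 N)^n - 1)/2, which diverges whenever N > 0.
    """
    return ((1.0 + 2.0 * negativity_isotropic(p)) ** n - 1.0) / 2.0
```

For $p>1/3$ the state is NPT with $N=(3p-1)/4$, and the $n$-copy negativity grows geometrically, which is why the lower bound of Eq. (\[eq: Lower bound for M\_1\]) forces $M_1(\sigma^{\otimes n})\!\rightarrow\!1$.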
Note that the continuity of $M_2^{sep}$ on an isotropic state is also open.
The problem of obtaining a tractable criterion for single-copy distillability in higher-dimensional systems remains open, as does the problem of asymptotic distillability. We hope the results in this paper shed some light on these open problems.
The author would like to thank M. Owari for helpful discussions.
[10]{}
D. Yang, M. Horodecki, R. Horodecki, and B. Synak-Radtke, Phys. Rev. Lett. [ **95**]{}, 190501 (2005).
G. Vidal, J. Mod. Opt. [**47**]{}, 355 (2000).
C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Phys. Rev. A [**54**]{}, 3824 (1996).
V. Vedral, M. B. Plenio, M. A. Rippin, and P. L. Knight, Phys. Rev. Lett. [ **78**]{}, 2275 (1997).
G. Vidal and R. F. Werner, Phys. Rev. A [**65**]{}, 032314 (2002).
M. B. Plenio, Phys. Rev. Lett. [**95**]{}, 090503 (2005).
G. Vidal and R. Tarrach, Phys. Rev. A [**59**]{}, 141 (1999).
M. Steiner, Phys. Rev. A [**67**]{}, 054305 (2003).
S. Karnas and M. Lewenstein, J. Phys. A [**34**]{}, 6919 (2001).
M. Lewenstein and A. Sanpera, Phys. Rev. Lett. [**80**]{}, 2261 (1998).
M. B. Plenio and S. Virmani, quant-ph/0504163.
B. M. Terhal and P. Horodecki, Phys. Rev. A [**61**]{}, 040301(R) (2000).
G. Vidal, Phys. Rev. Lett. [**83**]{}, 1046 (1999).
W. Dür, G. Vidal, and J. I. Cirac, Phys. Rev. A [**62**]{}, 062314 (2000).
S. Ishizaka, Phys. Rev. Lett. [**93**]{}, 190501 (2004).
S. Ishizaka and M. B. Plenio, Phys. Rev. A [**71**]{}, 052303 (2005).
E. M. Rains, Phys. Rev. A [**60**]{}, 173 (1999).
A. Peres, Phys. Rev. Lett. [**77**]{}, 1413 (1996).
M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A [**223**]{}, 1 (1996).
M. Lewenstein, B. Kraus, J. I. Cirac, and P. Horodecki, Phys. Rev. A [**62**]{}, 052310 (2000).
A. Sanpera, D. Bruß, and M. Lewenstein, Phys. Rev. A [**63**]{}, 050301(R) (2001).
B. Kraus, M. Lewenstein, and J. I. Cirac, Phys. Rev. A [**65**]{}, 042327 (2002).
K.-C. Ha, S.-H. Kye, and Y. S. Park, Phys. Lett. A [**313**]{}, 163 (2003).
L. Clarisse, Phys. Rev. A [**71**]{}, 032332 (2005).
L. Masanes, quant-ph/0508071.
W. K. Wootters, Phys. Rev. Lett. [**80**]{}, 2245 (1998).
E. Størmer, Acta Math. [**110**]{}, 233 (1963).
S. L. Woronowicz, Rep. Math. Phys. [**10**]{}, 165 (1976).
F. Verstraete and H. Verschelde, Phys. Rev. Lett. [**90**]{}, 097901 (2003).
F. Verstraete, J. Dehaene, and B. DeMoor, Phys. Rev. A [**64**]{}, 010101(R) (2001).
A. Kent, Phys. Rev. Lett. [**81**]{}, 2839 (1998).
M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. A [**60**]{}, 1888 (1999).
S. Boyd and L. Vandenberghe, [*Convex Optimization*]{} (Cambridge University Press, Cambridge, UK, 2004).
J. I. Cirac, W. Dür, B. Kraus, and M. Lewenstein, Phys. Rev. Lett. [**86**]{}, 544 (2001).
F. G. S. L. Brandão, Phys. Rev. A [**72**]{}, 022310 (2005).
F. Verstraete, K. Audenaert, J. Dehaene, and B. De Moor, J. Phys. A [**34**]{}, 10327 (2001).
S. Ishizaka, Phys. Rev. A [**69**]{}, 020301(R) (2004).
M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. Lett. [**84**]{}, 2014 (2000).
|
---
abstract: 'The FORWARD SolarSoft IDL package is a community resource for model-data comparison, with a particular emphasis on analyzing coronal magnetic fields. FORWARD allows the synthesis of coronal polarimetric signals at visible, infrared, and radio frequencies, and will soon be augmented for ultraviolet polarimetry. In this paper we focus on observations of the infrared (IR) forbidden lines of Fe XIII, and describe how FORWARD may be used to directly access these data from the Mauna Loa Solar Observatory Coronal Multi-channel Polarimeter (MLSO/CoMP), to put them in the context of other space- and ground-based observations, and to compare them to synthetic observables generated from magnetohydrodynamic (MHD) models.'
---
Introduction
============
Given a coronal model distribution of density, temperature, velocity, and vector magnetic field, many different synthetic observables may be produced via integration along a line of sight defined by the viewer’s position in heliographic coordinates. This is the purpose of the FORWARD SolarSoft IDL package: [*http://www.hao.ucar.edu/FORWARD/*]{}.
FORWARD includes routines to reproduce data from EUV/Xray imagers, UV/EUV spectrometers, white-light coronagraphs, and radio telescopes. In addition, FORWARD links to the Coronal Line Emission (CLE) polarimetry synthesis code ([@judgecasini_01 Judge & Casini 2001]) for forbidden coronal lines, allowing synthesis of polarimetric observations from visible to IR.
FORWARD includes several analytic magnetostatic and MHD models ([@lowhund Low & Hundhausen 1995], [@liteslow_97 Lites & Low 1997], [@giblow_98 Gibson & Low 1998], [@gibson_10 Gibson et al. 2010]) in its distribution, and it is straightforward to expand it to incorporate other analytic models. It works with user-inputted datacubes from numerical simulations, and, given calendar date input, automatically interfaces with the SolarSoft IDL Potential Field Source Surface (PFSS) package ([*http://www.lmsal.com/$\sim$derosa/pfsspack/*]{}) and Magnetohydrodynamics on a Sphere (MAS)-corona datacubes ([*http://www.predsci.com/hmi/dataAccess.php*]{}). In addition, it connects to the Virtual Solar Observatory and other web-served observations to download data in a format directly comparable to model predictions.
FORWARD creates SolarSoft IDL maps of a specified observable (e.g., Stokes I, Q, U, V). Maps of model plasma properties (e.g., density, temperature, magnetic field, velocity) may also be created. Plane-of-sky and Carrington plots are both standard outputs, or the user may create custom plots or simply evaluate the observable at a point or set of points in the plane of the sky. Either command-line or widget interfaces are available. Finally, information may be passed from one map to the next, enabling data-data, model-model, and model-data comparison. We now demonstrate this capability in the context of Fe XIII 1074.7 nm polarimetry.
CoMP observations and data access
=================================
The Coronal Multi-channel Polarimeter (CoMP) ([@tomczyk_08 Tomczyk et al. 2008]) at the Mauna Loa Solar Observatory (MLSO) in Hawaii comprises a coronagraph and a narrow-band imaging polarimeter. CoMP can measure the complete Stokes vector (I, Q, U, and V) through a 0.13 nm bandpass which can be tuned across the coronal emission lines of Fe XIII at 1074.7 and 1079.8 nm and the chromospheric He[i]{} line at 1083 nm. CoMP has a 20-cm aperture and observes the full field of view of the (occulted) corona from 1.05 to 1.38 solar radii.
CoMP Fe XIII data are available online, beginning from May 2011, and include intensity, Doppler velocity, line width, and Stokes linear polarization (Q and U, as well as $L = (Q^2+U^2)^{1/2}$ and azimuth $= \frac{1}{2}\arctan\left(\frac{U}{Q}\right)$). Data can be downloaded in FITS or image format via the MLSO web pages ([*http://www2.hao.ucar.edu/mlso*]{}).
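The archived $L$ and azimuth products are simple pointwise combinations of the Stokes Q and U maps. A minimal sketch in Python (function names are our own, not part of the CoMP pipeline; we use `atan2` rather than `atan` so that the quadrant is kept and the half-angle lands in $(-90^\circ, 90^\circ]$, the natural range for an axis defined modulo $180^\circ$):

```python
import math

def linear_polarization(Q, U):
    """Total linearly polarized intensity, L = sqrt(Q^2 + U^2)."""
    return math.hypot(Q, U)

def azimuth_deg(Q, U):
    """Polarization azimuth, 0.5 * atan2(U, Q), in degrees.

    The factor of 0.5 reflects the fact that linear polarization defines
    an axis, not a direction: the azimuth is only meaningful modulo 180
    degrees.
    """
    return 0.5 * math.degrees(math.atan2(U, Q))
```

Applied elementwise to the Q and U maps, these reproduce the form of the archived $L$ and azimuth data products.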
FORWARD offers another means of downloading CoMP data, and moreover acts as a tool for the display and analysis of CoMP data. Figure 1 demonstrates this, using the FORWARD widget interface. In this case a calendar date has been inputted via the widget, so that the CoMP standard “Quick Invert" data file for that date is automatically loaded when an observable is requested. The Quick Invert file represents an averaged image and may not include all CoMP data products. Comprehensive, non-averaged data are available in the “Daily Dynamics" and “Daily Polarization" FITS archives on the MLSO web pages, and once these are downloaded to a local directory they may be accessed by FORWARD by selecting “Data: COMP: By File" in the top-left widget.
At any time a SolarSoft IDL map may be saved for the data, either through the “SAVE" radio button in the top-left widget (which creates an IDL save set with a unique time-stamped name), or via the “Output" radio button, which allows customized naming. Similarly, images may be saved in TIFF or PostScript format (further image format options are available when using the command-line version of FORWARD).
FORWARD synthesis of CoMP-type observations
===========================================
The CLE code of [@judgecasini_01 Judge & Casini (2001)] is called by FORWARD to synthesize Stokes profiles for the Fe XIII forbidden lines. The code assumes that the lines are optically thin, and that they are excited by both anisotropic photospheric radiation and thermal particle collisions. Magnetism manifests in these lines firstly through circular polarization (Stokes V) arising from the longitudinal Zeeman effect, and secondly through a depolarization of the linearly polarized line through the saturated Hanle effect. The linearly polarized light (Q, U) has approximately 100 times more signal than the circularly polarized light (V), and so we focus on it from here on.
In order to calculate the Stokes profiles, it is necessary to specify a distribution of electron density, temperature, velocity, and vector magnetic field in the corona. A range of such model coronae have been specified and analyzed, resulting in a growing body of literature that demonstrates the usefulness of the linear polarization of Fe XIII 1074.7 nm for identifying characteristic magnetic topologies in the corona ([@judge_06 Judge et al. 2006], [@dove_11 Dove et al. 2011], [@rachmeler_12; @rachmeler_13; @rachmeler_14 Rachmeler et al. 2012; 2013; 2014], [@ula_13; @ula_14 Bąk-Stęślicka et al. 2013; 2014], [@gibsoniau_14; @gibson_15 Gibson 2014; 2015]).
Figure 2 shows an example of how FORWARD can be used to vary properties of a model corona and examine how this affects predicted linear polarization. In this case a model of a magnetic flux rope ([@fan_12 Fan 2012]) predicts a “lagomorphic" (rabbit-shaped) structure in linear polarization. When the flux rope orientation is varied, the lagomorphic structure loses visibility. Such linear-polarization lagomorphs are observed by CoMP ([@ula_13 Bąk-Stęślicka et al. 2013]) in association with polar crown filament cavities, which are oriented largely parallel to the viewer’s line of sight.
Comparison of CoMP data to FORWARD-synthesized observables
==========================================================
In addition to identifying the linear polarization signatures of particular types of structures such as flux ropes, FORWARD may be used to directly compare linear polarization synthesized from global magnetic models to CoMP data. Figure 3 shows an example of such a comparison. Some features are common to all three, such as the dark curved feature originating from approximately polar angle 130$^\circ$ (measured counter-clockwise from North). Other features may be better captured by the MAS model than by the PFSS model, such as the parallel curved features near polar angle 90$^\circ$. Still others are not captured by either model, such as the lagomorphic structures between polar angles of approximately 330$^\circ$ and 350$^\circ$. If the lagomorphs are polar crown filament flux ropes that are built up over days or even weeks, it is not surprising that the models – which do not employ a time-varying photospheric magnetic field as a boundary condition – might miss them. We must be cautious in interpreting these differences, however, as they may arise from time evolution, model assumptions, data uncertainty, or some combination of all of these.
Conclusions
===========
FORWARD is available as a SolarSoft package, and further documentation can be found at the web address referenced above. It continues to grow as capabilities in new wavelength regimes are added. For coronal magnetometry, comparing model to multi-wavelength data is particularly important, since different wavelengths probe different parts of the corona. The goal of FORWARD is to act as a community framework to facilitate such comprehensive model-data comparisons.
CoMP data are courtesy of the Mauna Loa Solar Observatory, operated by the High Altitude Observatory, as part of the National Center for Atmospheric Research (NCAR). NCAR is supported by the National Science Foundation. I thank the Air Force Office of Scientific Research for support under Grant FA9550-15-1-0030. I thank Terry Kucera, James Dove, Laurel Rachmeler, Blake Forland, Tim Bastian, Stephen White, Silvano Fineschi, Cooper Downs, Haosheng Lin, Don Schmit, Kathy Reeves, and Yuhong Fan for their code contributions to FORWARD, as well as the other members of the International Space Science Institute (ISSI) teams on coronal cavities (2008-2010) and coronal magnetism (2013-2014) who were key to guiding FORWARD development efforts. In addition, I acknowledge Steve Tomczyk and Chris Bethge for assistance with interfacing FORWARD with the CoMP data, Phil Judge and Roberto Casini for assistance with interfacing FORWARD with the CLE code, Cooper Downs and Jon Linker for assistance interfacing with the PSI/MAS model, Marc de Rosa for assistance with interfacing with his PFSS SolarSoft codes, Dominic Zarro for assistance with VSO interfacing, and Sam Freeland for assistance with SolarSoft interfacing.
[13]{} natexlab\#1[\#1]{}
Bąk-Stęślicka, U., Gibson, S. E., Fan, Y., Bethge, C., Forland, B., & Rachmeler, L. A. 2013, [*ApJ*]{} 770, 28
Bąk-Stęślicka, U., Gibson, S. E., Fan, Y., Bethge, C., Forland, B., & Rachmeler, L. A. 2014, in: B. Schmieder, J.-M. Malherbe & S.T. Wu (eds.), [*Nature of Prominences and their role in Space Weather*]{}, Proc. IAU Symposium No. 300 (Cambridge: CUP), p. 395
Dove, J., Gibson, S., Rachmeler, L. A., Tomczyk, S., & Judge, P. 2011, [*ApJ*]{} 731, 1
Fan, Y. 2012, [*ApJ*]{} 758, 60
Gibson, S. 2014, in: B. Schmieder, J.-M. Malherbe & S.T. Wu (eds.), [*Nature of Prominences and their role in Space Weather*]{}, Proc. IAU Symposium No. 300 (Cambridge: CUP), p. 139
Gibson, S. 2015, in: [*Solar Prominences, Astrophysics and Space Science Library*]{}, Volume 415 (Springer), p. 323
Gibson, S. E., Kucera, T. A., Rastawicki, D., Dove, J., de Toma, G., Hao, J., Hill, S., Hudson, H. S., Marque, C., McIntosh, P. S., Rachmeler, L., Reeves, K. K., Schmieder, B., Schmit, D. J., Seaton, D. B., Sterling, A. C., Tripathi, D., Williams, D. R., & Zhang, M. 2010, [*ApJ*]{} 723, 1133
Gibson, S. E., & Low, B. C. 1998, [*ApJ*]{} 493, 460
Judge, P. G., & Casini, R. 2001, in: M. Sigwarth (ed.), [*Advanced Solar Polarimetry – Theory, Observation, and Instrumentation*]{}, ASP Conf. Series 236 (San Francisco: ASP), p. 503
Judge, P. G., Low, B. C., & Casini, R. 2006, [*ApJ*]{} 651, 1229
Lites, B. W., & Low, B. C. 1997, [*Solar Phys.*]{} 174, 91
Low, B. C., & Hundhausen, J. R. 1995, [*ApJ*]{} 443, 818
Rachmeler, L. A., Casini, R., & Gibson, S. E. 2012, in: T. R. Rimmele, A. Tritschler, F. Wöger, M. Collados Vera, H. Socas-Navarro, R. Schlichenmaier, M. Carlsson, T. Berger, A. Cadavid, P. R. Gilbert, P. R. Goode, & M. Knölker (eds.), [*The Second ATST-EAST Meeting: Magnetic Fields from the Photosphere to the Corona*]{}, ASP Conf. Series 463 (San Francisco: ASP), p. 227
Rachmeler, L. A., Gibson, S. E., Dove, J. B., DeVore, C. R., & Fan, Y. 2013, [*ApJL*]{} 787, L3
Rachmeler, L. A., Platten, S. J., Bethge, C., Seaton, D. B., & Yeates, A. R., 2014, [*Solar Phys.*]{} 288, 617
Tomczyk, S., Card, G. L., Darnell, T., Elmore, D. F., Lull, R., Nelson, P. G., Streander, K. V., Burkepile, J., Casini, R., & Judge, P. G. 2008, [*Solar Phys.*]{} 247, 411
|
---
author:
- 'N. Fathi'
- 'K. Mertens'
- 'V. Putkaradze'
- 'P. Vorobieff'
title: 'Comment on “The role of wetting heterogeneities in the meandering instability of a partial wetting rivulet”'
---
Rivulets [@rivulet1] and their meandering on a partially wetting surface [@culkin] present an interesting problem, as complex behavior arises from a deceptively simple setup. Recently Couvreur and Daerr [@daerr-new] suggested that meandering is caused by an instability developing as the flow rate $Q$ increases to a critical value $Q_c$, with stationary (pinned) meandering being the final state of the flow. We tried to verify this assertion experimentally, but instead produced results contradicting the claim of Ref. [@daerr-new]. The likely reason behind the discrepancy is the persistence of flow-rate perturbations. Moreover, the theory presented in Ref. [@daerr-new] cannot reproduce the states it considers and disagrees with other theories [@homsy; @jfm08; @daerr-old].
First, we tried reproducing the critical flow rate precipitating meandering as reported in Ref. [@daerr-new]. We were unable to do so with two carefully constructed experimental arrangements (one at the University of New Mexico, another at the University of North Carolina), both using the same substrate (glass), the same fluid (water), and the same flow parameters as the experiments of Ref. [@daerr-new], with the fluid supply following the design described in our previous work [@jfm08]. The stationary pattern that emerged was a non-meandering, straight flow over a span exceeding 2 m. This applies to flow rates of 0.2–8 ml/s, while the flow-rate range of Ref. [@daerr-new] was 0.2–1.8 ml/s. The likely cause of this difference is the “constant level tank” Couvreur and Daerr employ: a constant (on average) level of fluid in the tank by itself guarantees only that the *average*, not the *instantaneous*, flow rate is constant, and the flow meandering is keenly sensitive to even modest perturbations of the flow rate $Q$, as discussed in [@jfm05; @jfm08; @prl08].
In a tank with a source of velocity fluctuations near the bottom (e.g., a pump), these fluctuations rapidly decay away from the source (consider the exponential decay in Stokes’ second problem). Thus the top (far) boundary may well appear unperturbed, while the discharge rate from the bottom of the tank is affected.
Any $Q$ variation (*e.g.*, a $Q$ increase) *can* temporarily destabilize a rivulet and mislead an observer into believing it has precipitated meandering. We have recorded [@aps12] transient meandering in response to a $Q$ increase or decrease between constant levels (how slow a rate change should be not to trigger meandering would be an interesting subject for further study). A stationary flow can be driven to meander with a short sequence of rate fluctuations (Fig. \[fig.1\]) retaining the average $Q$ and the tank fluid level. In all these cases, we see an almost immediate transition to meandering. However, after $Q$ becomes constant, the straight flow typically resumes, often in a matter of minutes, although sometimes it takes longer. Moreover, a “pinned” meandering pattern can be destroyed by a $Q$ increase, once again followed by formation of a straight rivulet [@aps12].
![a) - Straight rivulet at $Q=5.7$ ml/s (exists above meandering thresholds $Q_c$ predicted in Ref. [@daerr-new], requires no surface “preparing” to develop). b) – e) - image sequence showing destruction and re-emergence of the straight rivulet following a sequence of three 1 s flow-rate pulses with 1.5 s intervals (first pulse at $t=0$). Vertical image extent is 2 m.[]{data-label="fig.1"}](Graphic1.png)
The theory presented in Ref. [@daerr-new] may also contain unclear expositions, errors, or inconsistencies. It would help to specify that the axis of the independent coordinate $X$ ($x$ in dimensionless form) is normal to the direction of the rivulet, rather than pointing downstream. Setting $X$ to be the downstream direction and $h$ to be the deviation from the centerline (a common notation in the field) leads to apparent inconsistencies. The most important of these are as follows. First, in the balance of forces, terms that are normal to each other would be equated. Second, Eq. 3 for $h'''(X)$ would contain the stream curvature on the right-hand side, which cannot be taken as constant (as it is in Ref. [@daerr-new]). Third, as a simple calculation shows, there would be no consistent limit for $h(X)$ far downstream, as all solutions to Eq. 3 would develop a singularity corresponding to the stream flowing sideways. Thus we will regard the $X$-direction as transversal to the rivulet. Even then, questions persist about the validity of the key element of the theory, namely the discussion following Eq. 3.
Indeed, Eq. 3 for the cross-sectional profile $h(x)$ is $h'''(x) = - \alpha h^4(x)$, with $\alpha$ a constant depending on the parameters of the flow. In Eq. 4, a cubic, area-preserving polynomial expansion of this equation is assumed, stating $h = \theta_s/2 (1-x^2)(1+Ax/3)$, where $\theta_s$ is the (tangent of the) contact angle of the equilibrium profile and $A$ is the asymmetry parameter. This expansion, applied for $-1<x<1$, does not represent a true solution of the differential equation and introduces artificial constraints, such as an additional linear dependence between the derivatives evaluated at $x = \pm 1$. However, the most serious criticism concerns the assumption of constancy of the cross-section, used as an additional condition to determine the form of the polynomial (p. 2 of the paper). The constancy of the area, *i.e.* $\int_{-1}^1 h(x) \mbox{d} x$, does not follow from the equation for $h(x)$ or from any other physical principle. The quantity that is constant for all steady states is the fluid flux through a given cross-section and *not* the area. Since the theory is based on the *deviations* from the straight rivulet, an inaccurate description of the variations of the cross-sectional profile casts doubt on the validity of the whole theory.
In addition, the polynomial ansatz itself introduces errors, as shown in Fig. 2, which provides an exact solution of Eq. (3) (solid line) and the corresponding polynomial (dashed line) satisfying exactly the same boundary conditions for $h(-1)$ and $h'(-1)$, with $h''(-1)$ for Eq. (3) chosen so that $h(1)=0$, for a particular value of $\theta_s$. Clearly, the area under the solid curve is greater than that under the dashed curve. The difference between the areas changes with the contact angles, tending to increase when one of the contact angles increases, precisely where the theory is applied. This area mismatch is also present if one were to assign the same derivatives to the exact solution and the polynomial ansatz at both ends.
![Numerical solution for $h$ using the full equation (solid line) and polynomial ansatz (dashed line).[]{data-label="fig:wrong_eq"}](./WrongGraph.pdf){width="2.0in"}
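The exact profile in Fig. 2 is straightforward to compute. A sketch in Python (parameter values are our own illustrative choices: fixed-step RK4 integration of $h''' = -\alpha h^4$ from $x=-1$ with $h(-1)=0$ and prescribed $h'(-1)$, shooting on $h''(-1)$ so that the profile closes with $h(1)=0$):

```python
def integrate(alpha, s, c, n=2000):
    """RK4-integrate h''' = -alpha * h**4 on [-1, 1].

    Initial data: h(-1) = 0, h'(-1) = s, h''(-1) = c.
    Returns the list of (x, h) samples.
    """
    dx = 2.0 / n
    x, y = -1.0, [0.0, s, c]                       # y = (h, h', h'')

    def f(y):
        return [y[1], y[2], -alpha * y[0] ** 4]

    out = [(x, y[0])]
    for _ in range(n):
        k1 = f(y)
        k2 = f([y[i] + 0.5 * dx * k1[i] for i in range(3)])
        k3 = f([y[i] + 0.5 * dx * k2[i] for i in range(3)])
        k4 = f([y[i] + dx * k3[i] for i in range(3)])
        y = [y[i] + dx / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(3)]
        x += dx
        out.append((x, y[0]))
    return out

def shoot(alpha, s):
    """Bisection on h''(-1) so that the profile closes, h(1) = 0.

    For alpha = 0 the root is c = -s (parabolic cap), so [-4s, 0]
    brackets it for the moderate alpha used here; h(1) increases with c.
    """
    lo, hi = -4.0 * s, 0.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if integrate(alpha, s, mid)[-1][1] > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Comparing $\int_{-1}^{1} h\,\mathrm{d}x$ for this profile against the cubic ansatz matched to the same $h(-1)$ and $h'(-1)$ reproduces the area mismatch discussed above.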
Some of the difference in the interpretation of results may come from a different approach to time scales. The time necessary for a straight rivulet to establish itself in our experiment varies from minutes to hours. During the transition to the straight steady state (which is sustainable indefinitely – for days), meandering patterns that appear may look stationary, but they destabilize in a matter of minutes.
Finally, let us comment on the interpretation of theoretical results after Eq. 8. The authors take the RMS of curvature and use it as a length parameter in the problem. Our previous measurements show that for meandering, all measurable quantities, properly averaged, satisfy a power law distribution [@prl08]. In the light of this observation, any curvature RMS is likely to depend on the particular cutoff and numerical procedure and thus may not be suited for use as a robust length scale.
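The cutoff sensitivity is easy to make concrete: for a curvature distributed as $p(\kappa)\propto\kappa^{-\gamma}$ between cutoffs $\kappa_{\min}$ and $\kappa_{\max}$, the RMS has a closed form that, for shallow enough $\gamma$, grows without bound as $\kappa_{\max}$ increases. A sketch (the exponent below is purely illustrative, not fitted to any measured distribution):

```python
import math

def powerlaw_rms(gamma, kmin, kmax):
    """RMS of kappa under p(kappa) ~ kappa**(-gamma) on [kmin, kmax].

    RMS^2 = (second moment) / (zeroth moment), both in closed form;
    assumes gamma is not 1 or 3, where logarithms appear instead.
    """
    def moment(m):
        a = m - gamma + 1.0
        return (kmax ** a - kmin ** a) / a

    return math.sqrt(moment(2) / moment(0))
```

For example, for $\gamma=2$ the expression reduces to $\sqrt{\kappa_{\min}\kappa_{\max}}$, so the inferred "length scale" tracks the cutoffs directly rather than any intrinsic scale of the flow.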
The problem of rivulet meandering in a variety of settings is interesting both as an example of a simple flow with surprisingly complex behavior, and because of its practical importance in a wide variety of areas. However, this very complexity requires that the problem is treated rigorously and with attention to detail – both in theoretical consideration and in experimental approach.
[0]{}
.
.
.
.
.
.
|
---
abstract: 'The column density probability distribution function (N-PDF) of GMCs has been used as a diagnostic of star formation. Simulations and analytic predictions have suggested the N-PDF is composed of a low density lognormal component and a high density power-law component, tracing turbulence and gravitational collapse, respectively. In this paper, we study how various properties of the true 2D column density distribution create the shape, or “anatomy” of the PDF. We test our ideas and analytic approaches using both a real, observed, PDF based on Herschel observations of dust emission as well as a simulation that uses the ENZO code. Using a dendrogram analysis, we examine the three main components of the N-PDF: the lognormal component, the power-law component, and the transition point between these two components. We find that the power-law component of an N-PDF is the summation of N-PDFs of power-law substructures identified by the dendrogram algorithm. We also find that the analytic solution to the transition point between lognormal and power-law components proposed by @Burkhart_2017 is applicable when tested on observations and simulations, within the uncertainties. We reconfirm and extend the results of @Lombardi_2015, which stated that the lognormal component of the N-PDF is difficult to constrain due to the artificial choice of the map area. Based on the resulting anatomy of the N-PDF, we suggest avoiding analyzing the column density structures of a star forming region based solely on fits to the lognormal component of an N-PDF. We also suggest applying the N-PDF analysis in combination with the dendrogram algorithm, to obtain a more complete picture of the global and local environments and their effects on the density structures.'
author:
- 'Hope Chen, Blakesley Burkhart, Alyssa Goodman, & David C. Collins'
bibliography:
- 'main.bib'
title: 'The Anatomy of the Column Density Probability Distribution Function (N-PDF)'
---
Introduction {#intro}
============
Star formation occurs in dense filamentary structures within molecular environments that are governed by the complex interaction of gravity, magnetic fields, and turbulence [@McKee_2007]. The initial distribution of the gas density at parsec scales, which is affected by the average density, level of turbulence and magnetic field strength, may determine the AU scale properties of star formation such as the initial mass function (IMF) and the overall star formation rate [@Krumholz_2005; @Hennebelle_2011; @Padoan_2011b; @Padoan_2011a; @Federrath_2012; @Mocz_2017].
History
-------
The column density probability distribution function (N-PDF) is a commonly used tool for quantifying the distribution of gas. Simulations and observations have shown N-PDFs to be an important diagnostic of turbulence and star formation efficiency in local star forming clouds [@Federrath_2012; @Collins_2010; @Burkhart_2015a; @Myers_2015]. This is because N-PDFs can constrain the fraction of dense gas within molecular clouds and provide a means of comparison with analytic models as well as numerical simulations (via synthetic observations) of star formation.
N-PDFs, as available from observations, have been utilized extensively for many different tracers of the ISM. This includes molecular line tracers such as CO [@Lee_2012; @Burkhart_2013b] and column density tracers such as dust [@Kainulainen_2009; @Froebrich_2010; @Schneider_2013; @Schneider_2014; @Schneider_2015b; @Lombardi_2015]. Tracing the N-PDF using dust emission and absorption provides the largest dynamic range of densities, in contrast to molecular line tracers such as CO, which do not trace the true column density distribution due to depletion and opacity effects [@Goodman_2009b; @Burkhart_2013a; @Burkhart_2013b].
Both the true 3D density (volume density) PDF and N-PDFs have been used to understand the properties of galactic gas dynamics, from the diffuse ionized medium to dense star-forming clouds [@Hill_2008; @Burkhart_2010; @Maier_2017]. This is because the shape of the density/column density PDF is expected to be related to the underlying physics of the cloud and linked to the kinematics and the chemistry of the gas [@Vazquez_1994; @Padoan_1997; @Kritsuk_2007; @Burkhart_2013b; @Burkhart_2015a].
The low column density gas in molecular clouds, as well as in the diffuse neutral and ionized ISM, takes on the form of a lognormal [@Vazquez_1994; @Hill_2008; @Kainulainen_2013]. This is primarily attributed to the application of the central limit theorem to a hierarchical (e.g. turbulent) density field generated by a multiplicative process, such as shocks. If the width of the lognormal portion of the N-PDF can be measured, it may be related to the sonic Mach number of the gas in a nearly isothermal cloud [@Federrath_2008; @Burkhart_2009; @Kainulainen_2013; @Burkhart_2012; @Burkhart_2015a]. However, there are known caveats to constraining the exact shape of the lognormal N-PDF. Below the sensitivity limit, observations are incomplete and do not constitute a statistically meaningful representation of the column density distribution. The sampled N-PDF is then subject to the uncertainty due to the choice of the map area and the foreground/background contamination [see Figure \[fig:cartoon\]; @Lombardi_2015]. @Lombardi_2015 found that both changing the map area used to derive the PDF and subtracting the foreground/background uniformly over the map affect the PDF shape at the low column density end (around A$_K$ $\sim$ 0.1, or A$_V$ $\sim$ 1, for nearby molecular clouds). @Lombardi_2015 further suggests that the sampling is statistically unbiased only when the region used to sample the PDF is defined by a “closed contour” [@Alves_2017]. In evolved molecular clouds with active star formation, the PDF shape of the densest gas/dust has been observed to develop a power-law form [@Kainulainen_2013; @Schneider_2015a; @Lombardi_2015]. This is expected from both numerical simulations and the analytic theory which suggest that the PDF of self-gravitating gas should develop a power-law tail [@Shu_1977; @Federrath_2012; @Burkhart_2015a; @Meisner_2015]. 
The slope of the PDF power-law tail depends on self-gravity and the magnetic pressure [@Kritsuk_2011; @Ballesteros-Paredes_2011; @Collins_2012; @Federrath_2013; @Burkhart_2015a], and can be analytically related to the power-law index of a collapsing isothermal sphere [@Shu_1977; @Girichidis_2014].
Even in the most evolved molecular clouds which exhibit a power law tail towards the dense gas, a lognormal portion of the PDF can still be observed at low total gas and dust column densities. Recently, the HI PDF in and around GMCs has been measured and shown to be the primary component of the lognormal portion of the total gas+dust PDF [@Burkhart_2015b; @Imara_2016]. @Burkhart_2015b [@Imara_2016] have shown that the lognormal portion of the column density PDF in a sample of Milky Way GMCs is comprised of mostly atomic HI gas while the power-law tail is built up by the molecular H$_2$, with no contribution from HI. These studies, including an analytic study by @Burkhart_2017, suggest that the transition point in the column density PDF between the lognormal and power-law portions of the column density PDF traces important physical processes. These include the HI-H$_2$ transition and the so-called “post-shock density” regime where the background turbulent pressure equals the thermal pressure and therefore self-gravity becomes dynamically important in the molecular gas [@Burkhart_2017; @Li_2015; @Kritsuk_2011]. We illustrate the idealized PDF and the known physics associated with different components in Figure \[fig:cartoon\].

Understanding Anatomy
---------------------
What sets the shape of the N-PDF of a star forming molecular cloud? In this paper we seek to address this question by comparing dendrogram-based N-PDFs of observations with simulations that include turbulence, magnetic fields, and self-gravity.
*Dendrograms* are hierarchical tree-diagrams composed of branches, which are split into multiple substructures, and leaves, which have no measurable substructure [@Rosolowsky_2008]. Dendrograms have been applied to star forming turbulent clouds in the past towards understanding the hierarchical properties of star forming regions [@Rosolowsky_2008; @Goodman_2009a; @Beaumont_2012] and supersonic turbulence [@Burkhart_2013a]. We can use dendrograms to break up the column density PDF into its hierarchical constituent parts and relate these individual components to the underlying physics.
In this work, we use the column density PDF derived from dust tracers based on the Herschel observations of the dust thermal emission [the Gould Belt Survey, @Andre_2010], as well as the column density PDF from simulations that include turbulence and self-gravity [@Collins_2012; @Burkhart_2015a]. The paper is organized as follows: In §\[sec:data\] we describe the data we use in this paper, namely Herschel observations of L1689 in Ophiuchus and Enzo MHD simulations, as well as the methods used in our analysis of the N-PDF, including the *dendrogram* algorithm and the fitting to the lognormal and the power-law models. We present our analysis of the lognormal component in §\[sec:ln\] for the L1689 region in Ophiuchus and for the Enzo simulations. We then present our dendrogram lognormal + power-law PDF analysis for both simulations and observations in §\[sec:pl\]. We consider the relevance of the analytic solution to the column density at the transition point, as proposed by @Burkhart_2017, in §\[sec:obsv\_trans\] and in §\[sec:sim\_trans\], respectively for observations and simulations. Finally, we discuss our results in §\[sec:discussion\] followed by our conclusions in §\[sec:conclusion\].
Data and Methods {#sec:data}
================
Observation {#sec:obsv}
-----------
The observed column density for the L1689 region in Ophiuchus (Oph L1689) is derived from data taken by the *Herschel Space Observatory*, a satellite operated by the European Space Agency. Its Photodetector Array Camera and Spectrometer (PACS) and Spectral and Photometric Imaging Receiver (SPIRE) covered wavelengths from 55 to 670 $\mu$m, with six broad spectral bands identified by their nominal central wavelengths: PACS 70 $\mu$m, 100 $\mu$m, and 160 $\mu$m, and SPIRE 250 $\mu$m, 350 $\mu$m, and 500 $\mu$m. The data used to derive the column density presented in this paper were obtained as part of the Herschel Gould Belt Survey [@Andre_2010].
To derive the column density, we make use of the maps at 160 $\mu$m, 250 $\mu$m, 350 $\mu$m, and 500 $\mu$m, produced by the Herschel Interactive Processing Environment (HIPE; Version 11.1.0). The maps produced by HIPE were not absolutely calibrated, in the sense that there may be a per-band additive offset needed to correct the Herschel zero level. This issue has traditionally been addressed for small $\lesssim$1 square degree regions by adding a scalar offset to the Herschel mosaic under consideration at each wavelength. However, in this study we seek to calibrate Herschel data for a large region of the Ophiuchus cloud dozens of square degrees in size. Over this sizeable footprint, we found that a single per-band scalar offset could not satisfactorily correct the Herschel zero level.
Instead, we chose to allow for a spatially varying zero-level offset in each Herschel band. Specifically, we used the [@Meisner_2015] *Planck*-based thermal dust emission model to predict the Herschel emission at 10$'$ FWHM over our entire Ophiuchus footprint. These low-resolution predictions incorporated color corrections to account for the Herschel bandpasses. To correct the Herschel zero level, we then high-pass filtered each Herschel mosaic at 10$'$, and replaced the low-order spatial modes ($\geq$10$'$ FWHM) with the corresponding Planck-based predictions. We thereby achieved dust emission maps at 160$\mu$m, 250$\mu$m, 350$\mu$m, and 500$\mu$m which retain the high angular resolution of Herschel, but inherit the reliable zero level of Planck.
To derive reliable column density maps, we assume that the Herschel maps from 160$\mu$m to 500$\mu$m, following the zero level corrections, are dominated by “big grain” [BG; @Stepnik_2003] thermal dust emission. We also assume that the thermal dust emission can be described by a single modified blackbody (MBB). This assumption is only valid under certain conditions and becomes inaccurate at wavelengths shorter than 100$\mu$m due to contamination from “very small grain” (VSG) emission. Adopting a single-component modified blackbody model also presumes that the material along the line of sight can be characterized well by a single temperature. We incorporate a spatially varying value of the dust emissivity power-law index, $\beta$, in the following SED fitting.
We smooth the Herschel maps at 160$\mu$m, 250$\mu$m and 350$\mu$m to 36.1$''$ FWHM, to match the angular resolution of the SPIRE 500$\mu$m map. In the SED fitting, we assume a 10% fractional uncertainty on each Herschel intensity measurement. For each pixel, we then derive the optimal temperature and 350$\mu$m intensity via simple $\chi^2$ minimization. Using the equation $I_{\nu} = \tau_{\nu}B_{\nu}(T)$, we can then derive the optical depth at any frequency based on our two fitted parameters. A conversion from the 350$\mu$m optical depth to column density units is then obtained by convolving the map of the 350$\mu$m optical depth ($\tau_{350}$) to the nominal resolution of the NICEST map [@Lombardi_2009] and comparing to the NICEST extinction map (in the unit of K-band extinction magnitude, $A_K$). A simple power law, $A_K$ = $\gamma\tau_{350}^{c}$, is fitted for data points around the median value of the 350$\mu$m optical depth, $\sim$ 2.5$\times10^{-4}$. The resulting parameters, $\gamma$ $\sim$ 2520 and $c$ $\sim$ 1.11, are consistent with the solution suggested by @Lombardi_2014.
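The per-pixel single-MBB $\chi^2$ fit described above can be sketched as follows. This is a minimal illustration, not the actual pipeline: the emissivity index $\beta$ is held fixed here for brevity (the real fit uses a spatially varying $\beta$), and the temperature grid and function names are ours:

```python
import numpy as np

# Physical constants (SI)
H, K_B, C = 6.626e-34, 1.381e-23, 2.998e8

def planck(nu, T):
    """Planck function B_nu(T) in SI units."""
    return 2 * H * nu**3 / C**2 / (np.exp(H * nu / (K_B * T)) - 1.0)

def fit_mbb(intensities, wavelengths_um, beta=1.8, frac_err=0.10):
    """Grid-search chi^2 fit of I_nu = tau_350 * (nu/nu_350)^beta * B_nu(T)
    to intensities at the given wavelengths. Returns (T, tau_350).
    A 10% fractional uncertainty is assumed on each measurement."""
    nu = C / (np.asarray(wavelengths_um) * 1e-6)
    nu350 = C / 350e-6
    sigma = frac_err * np.asarray(intensities)
    best = (None, None, np.inf)
    for T in np.linspace(5.0, 50.0, 451):           # temperature grid
        shape = (nu / nu350)**beta * planck(nu, T)  # model per unit tau_350
        # tau_350 enters linearly, so solve for it by weighted least squares
        tau = np.sum(intensities * shape / sigma**2) / np.sum(shape**2 / sigma**2)
        chi2 = np.sum(((intensities - tau * shape) / sigma)**2)
        if chi2 < best[2]:
            best = (T, tau, chi2)
    return best[0], best[1]
```

Because the optical depth enters the model linearly, only the temperature needs a grid search; the optimal $\tau_{350}$ at each trial temperature follows in closed form.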
Figure \[fig:map\_obsv\] is the final column density map of the L1689 region. The map covers a region of $\sim$ 2.5 pc by 2.5 pc, and includes all material with column density larger than $A_K$ = 0.8 mag [used to define the “dense molecular cloud” where the star formation almost exclusively occurs; @Lada_1992; @Lada_2010].
The resulting column density measurements can then be converted to the unit of equivalent V-band extinction magnitudes, using $A_V$ = $A_K$/0.112 [@Rieke_1985], and to the number column density, using N(H$_2$)/$A_V$ = 9.4$\times$10$^{20}$ cm$^{-2}$ [@Bohlin_1978] or N(H$_2$)/$A_V$ = 6.9$\times$10$^{20}$ cm$^{-2}$ [@Draine_2003; @Evans_2009]. In this paper, for easier comparison with other works in various column density units and between simulation and observation, the column density is expressed in a dimensionless, normalized unit where the column density is divided by the median value above the detection limit. For L1689, the median column density is $\sim$ 7.3 mag in the unit of equivalent V-band extinction ($A_V$). Figure \[fig:map\_obsv\] shows the map used in the following analysis, and Figure \[fig:cartoon\_obsv\] shows the N-PDF of the entire L1689 region in the normalized units. Note that the conversion from physical to dimensionless units does not change the shape of the N-PDF on the logarithmic scale.
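The unit conversions above amount to simple scalings; a sketch (the function names are ours, and the conversion factors are those quoted in the text):

```python
def ak_to_av(a_k):
    """A_V = A_K / 0.112 [@Rieke_1985]."""
    return a_k / 0.112

def av_to_nh2(a_v, ratio=9.4e20):
    """N(H2) in cm^-2 per magnitude of A_V; ratio = 9.4e20 [@Bohlin_1978]
    or 6.9e20 [@Draine_2003; @Evans_2009]."""
    return a_v * ratio

def normalize(n, n0):
    """Dimensionless normalized column density N/N_0. Dividing by the
    median shifts the N-PDF on a logarithmic axis without changing shape."""
    return n / n0
```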
An overview of physical properties of Oph L1689 is listed in Table \[table:comparison\], in comparison to the Enzo simulation.
### Turbulence in Oph L1689 {#sec:obsv_turb}
To estimate the magnitude of turbulent motions in Oph L1689, we fit Gaussian profiles for spectra from the FCRAO observations of $^{13}$CO (1-0) molecular line emission [using data from the COMPLETE Survey, @Ridge_2006]. Assuming that the dust temperature derived from Herschel observations of thermal dust emission is representative of the gas temperature (*i.e.* the gas and the dust are in thermal equilibrium; see §\[sec:obsv\] for details on the fitting of dust properties), we calculate the sonic Mach number for Oph L1689, $\mathcal{M}_s$ = 4.50$^{+1.43}_{-1.09}$. Since the $^{13}$CO (1-0) line emission traces a density range similar to the density range (as traced by thermal dust emission) in question in this paper, we use the Gaussian line widths of the $^{13}$CO (1-0) transition and the estimated sonic Mach number to assess the dynamics of Oph L1689 in the following analyses (see §\[sec:obsv\_pl\]). The estimated sonic Mach number is also used to calculate the transitional column density (§\[sec:obsv\_trans\]), between the lognormal and the power-law components, in the analytic model proposed by @Burkhart_2017. See §\[sec:comparison\] and Table \[table:comparison\] for a comparison of physical properties to the Enzo simulation used in this paper.
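A sketch of the standard conversion from a fitted Gaussian line width to a sonic Mach number is given below. This is an illustration, not the exact pipeline: it assumes isotropic turbulence ($\sigma_{3D} = \sqrt{3}\,\sigma_{1D,\mathrm{nt}}$), gas–dust thermal equilibrium as stated above, a mean molecular weight $\mu$ = 2.33, and a molecular weight of 29 for $^{13}$CO:

```python
import numpy as np

K_B = 1.381e-23   # Boltzmann constant [J/K]
M_H = 1.673e-27   # hydrogen mass [kg]

def sonic_mach(fwhm_kms, t_kelvin, mol_weight=29.0, mu=2.33):
    """Sonic Mach number from a Gaussian FWHM line width (km/s) of a tracer
    of molecular weight mol_weight (29 for 13CO)."""
    sigma_obs = fwhm_kms * 1e3 / np.sqrt(8.0 * np.log(2.0))  # FWHM -> sigma [m/s]
    sigma_th_tracer2 = K_B * t_kelvin / (mol_weight * M_H)   # thermal part of the line
    sigma_nt2 = sigma_obs**2 - sigma_th_tracer2              # non-thermal part
    c_s = np.sqrt(K_B * t_kelvin / (mu * M_H))               # sound speed of the gas
    return np.sqrt(3.0 * sigma_nt2) / c_s
```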

Simulation {#sec:sim}
----------
In order to investigate the physics of the N-PDF without the uncertainties that plague observations, we use MHD simulations of a collapsing turbulent cloud produced by the Enzo code [@Collins_2010; @Collins_2012]. The Enzo simulations used in this paper are generated by solving the ideal MHD equations with large-scale solenoidal forcing [$b$ $\approx$ 1/3, where $b$ is the forcing parameter; see §\[sec:trans\] in this paper, and @Collins_2012; @Burkhart_2017]. The simulations have a sonic Mach number of $\mathcal{M}_s$ = 9 and an Alfvénic Mach number of $\mathcal{M}_A$ = 12, which scales to $\sim$ 4.4 $\mu$G assuming a sound speed of $c_s$ = 0.2 km s$^{-1}$ and an average volume density of $n_H$ = 1000 cm$^{-3}$ [the “mid” case in @Burkhart_2015a]. To investigate properties of the column density profile under the influence of gravitational collapse, the snapshot at the simulation time t = 0.6 t$_\text{free-fall}$ is taken. Each side of the simulation cube is $\sim$ 4.6 pc in length, and the density and the velocities are sampled on a 512$^3$ grid (resulting in a coarser grid cell size of $\sim$ 9$\times$10$^{-3}$ pc, or $\sim$ 1.8$\times$10$^3$ AU; compared to the smallest physical scale the simulation resolves with adaptive mesh refinement of $\sim$ 500 AU).
To obtain the column density, the density cube is integrated along one of its three axes, through the full 4.6 pc. The resulting column density map is then convolved with a Gaussian beam with the same size as that of the Herschel 500-$\mu$m beam. For subsequent analyses involving the *dendrogram* algorithm, we take a 2D map region of 2.3pc by 2.3pc, in order to compare to Oph L1689. Similarly, the column density is expressed in the normalized units, where the column density is divided by the median value. (See §\[sec:obsv\] for details about the “normalization”.) The median value of the simulated column density map is $\sim$ 1.3$\times$10$^{22}$ cm$^{-2}$, or $\sim$ 13.5 mag in the unit of equivalent V-band extinction ($A_V$). See @Collins_2012 and @Burkhart_2015a for details on scalings to physical units.
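The integration and beam convolution steps can be sketched as follows; this is a minimal illustration (the beam FWHM in pixels in the usage comment is an example value, not the actual number used):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def column_density(density_cube, cell_pc, axis=0):
    """Integrate a volume-density cube [cm^-3] along one axis to get a
    column density map [cm^-2]. cell_pc is the grid cell size in pc."""
    pc_cm = 3.086e18
    return density_cube.sum(axis=axis) * cell_pc * pc_cm

def convolve_beam(cd_map, beam_fwhm_pix):
    """Smooth the map with a Gaussian beam; scipy takes sigma, not FWHM."""
    sigma = beam_fwhm_pix / np.sqrt(8.0 * np.log(2.0))
    return gaussian_filter(cd_map, sigma)

# e.g., for a cube with ~9e-3 pc cells and an illustrative 4-pixel beam:
# N = convolve_beam(column_density(cube, 9e-3), beam_fwhm_pix=4.0)
```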
An overview of physical properties of the Enzo simulation is listed in Table \[table:comparison\], in comparison to Oph L1689.
### Turbulence in the Enzo simulation {#sec:sim_turb}
To estimate the magnitude of turbulence in the 2.3pc by 2.3pc by 4.6pc cube (a 2.3pc by 2.3pc map with a 4.6pc line of sight) from which the column density map (Figure \[fig:map\_sim\]) was derived, we calculate the sonic Mach number, $\mathcal{M}_s$, based on the 3D velocity dispersion in the 2.3pc by 2.3pc by 4.6pc cube. Since we chose a region where gravity is particularly dominant (see Figure \[fig:map\_sim\], in which multiple high-density structures are identifiable), we expect a smaller sonic Mach number (less turbulent material) than expected of the entire 4.6pc by 4.6pc by 4.6pc cube [$\mathcal{M}_s$ = 9; @Collins_2012]. For the 2.3pc by 2.3pc column density map, we find $\mathcal{M}_s$ = 8.14$^{+2.95}_{-1.81}$. The estimated sonic Mach number is used to calculate the transitional column density in the analytic model proposed by @Burkhart_2017. See §\[sec:trans\] for the analysis of the analytic model.
### Difference between observation and simulation {#sec:comparison}
Table \[table:comparison\] shows that Oph L1689 and the Enzo simulation used in this paper have different median column densities and sonic Mach numbers, indicating that Oph L1689 and the Enzo simulation are at different stages of star formation and/or under the effects of different levels of gravity, turbulence, and likely also magnetic fields. However, since the main goal of this paper is to examine the uncertainties in the N-PDF analysis and the origins of the different components of the N-PDF, we are more concerned with the relative column density structures than the absolute dynamics of Oph L1689 and the Enzo simulation. The dendrogram analyses of Oph L1689 and the Enzo simulation show that the two *do* have similar hierarchical density structures (see Table \[table:comparison\], and compare Figure \[fig:fancy\_obsv\] to Figure \[fig:fancy\_sim\]). As a side note, @Beaumont_2013 have also demonstrated that it remains difficult to make simulations “look like” a real molecular cloud, even in terms of the simplest observable diagnostics, including the column density distribution and the distribution of the CO line widths.
------------------------- ----------------------- ------------------------ ------------------ ---------------------------
                          Median Column Density   Sonic Mach Number
                          N$_0$ \[$A_V$\]         $\mathcal{M}_s$          Number of Levels   Number of Leaf Structures
Observation (Oph L1689)   7.3$\pm$3.5             4.50$^{+1.43}_{-1.09}$   8                  10
Simulation (Enzo)         13.5$\pm$4.7            8.14$^{+2.95}_{-1.81}$   8                  10
------------------------- ----------------------- ------------------------ ------------------ ---------------------------
[**a.** The dendrograms are computed in normalized units, N/N$_0$.]{}
[**b.** The Mach number is derived from the average Gaussian line widths for the $^{13}$CO (1-0) molecular line emissions in Oph L1689.]{}
[**c.** The Mach number is calculated for the 2.3pc by 2.3pc by 4.6pc cube, from which the 2.3pc by 2.3pc column density map shown in Figure \[fig:map\_sim\] is derived.]{}

Dendrogram {#sec:dendro}
----------
We use *dendrograms* [@Rosolowsky_2008] to identify substructures inside molecular clouds. The *dendrogram* is a structure finding algorithm that decomposes an N-dimensional (N $\geq$ 1) entity into smaller *substructures* based on the positions and the values of the pixels. The *dendrogram* algorithm can build a tree of substructures representative of the hierarchy of nested column density features inside a cloud [for previous examples of application in astrophysics and a detailed description of the algorithm, see @Rosolowsky_2008; @Goodman_2009a; @Burkhart_2013a]. This hierarchical representation is ideal for the following analysis, where we hope to find the composition of various components of the N-PDF.
Figure \[fig:dendrogram\] shows a cartoon demonstrating how a dendrogram is computed from a two-dimensional map. The *dendrogram* (shown on the left-hand side of Figure \[fig:dendrogram\]) is a tree diagram in which each branching splits a structure into exactly two substructures. The structures can be categorized into *leaf*, *branch*, or *trunk* structures. The *trunk* structure is a special case, as it is the bottommost structure in the dendrogram.
We define the *height* of any structure (leaf, branch, or trunk) to be its vertical extent as shown in Figure \[fig:dendrogram\]. Three input parameters affect the outcome of the dendrogram algorithm. First, the minimum value in a *trunk* structure has to be larger than `min_value`[^1] (the light blue dashed line in the dendrogram on the left and the light blue dashed contour in the 2D map on the top right of Figure \[fig:dendrogram\]). Second, the height of any substructure in the dendrogram has to be larger than `min_delta` (the left inset diagram on the bottom right of Figure \[fig:dendrogram\]). Lastly, the area of a *leaf* structure must be larger than `min_npix` (the right inset diagram on the bottom right of Figure \[fig:dendrogram\]).
In this paper, we use the *Python*-based `astrodendro` package to compute and analyze the dendrograms of the column density maps. (See §\[sec:obsv\] and §\[sec:sim\] for details on how we derive the column density maps from observations and simulations.) The `astrodendro` package offers a simple control of the three essential parameters—`min_value`, `min_delta`, and `min_npix`—in the dendrogram analysis, each defining one of the three criteria described in the above paragraph (see Figure \[fig:dendrogram\]). We use the same set of parameters in normalized units for observations and simulations in order to make sure that the analyses are consistent. We choose a minimum value (`min_value`) of N/N$_0$ = 0.95, which corresponds to $A_K$ $\sim$ 0.8 mag in observations of Oph L1689. Note that $A_K$ = 0.8 mag is also found to define the “dense cloud” where the star formation activity occurs [@Lada_2010]. We then choose a `min_npix` that corresponds to an area equivalent to a 0.05pc by 0.05pc square. To guarantee a representative sampling of the N-PDF, we also make sure that each substructure has more than 200 pixels [for discussions on the sampling, see @Clauset_2009]. Lastly, `min_delta` is selected to be N/N$_0$ = 0.475, roughly corresponding to the 3-$\sigma$ level in observations. The final dendrograms are shown as connections between panels in Figure \[fig:fancy\_obsv\] and Figure \[fig:fancy\_sim\].
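The three selection criteria can be illustrated with a toy thresholding sketch. This is *not* the `astrodendro` algorithm itself (which builds the full merger hierarchy), only a demonstration of how `min_value`, `min_delta`, and `min_npix` prune structures on a normalized column density map:

```python
import numpy as np
from scipy import ndimage

def find_leaves(cd_map, min_value=0.95, min_delta=0.475, min_npix=200):
    """Toy illustration of the three dendrogram criteria on a normalized
    column density map (N/N_0). Returns masks of surviving 'leaf'-like
    structures; astrodendro itself builds the full hierarchical tree."""
    labels, n = ndimage.label(cd_map > min_value)          # criterion 1: min_value
    leaves = []
    for i in range(1, n + 1):
        mask = labels == i
        if mask.sum() < min_npix:                          # criterion 3: min_npix
            continue
        if cd_map[mask].max() - min_value < min_delta:     # criterion 2: min_delta
            continue
        leaves.append(mask)
    return leaves
```

With `astrodendro` itself, the equivalent call is `Dendrogram.compute(data, min_value=0.95, min_delta=0.475, min_npix=200)`.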

Fitting the N-PDF {#sec:fitting}
-----------------
In this paper, the term “N-PDF” is used to indicate the frequency distribution of the column density in a map or a binned histogram representation of this frequency distribution. This definition is mathematically different from a well-defined probability distribution, where the distribution function $f(x)$ is normalized ($\int_0^\infty{f(x)dx}$ = 1). However, this discrepancy in definition does not affect the validity of results presented in this paper. In practice, since one can only sample the mathematical probability distribution function with a finite number of independent pixels (in both observations and simulations), the results in this paper can be directly compared to other work where the distributions of values in column density maps are examined.
Following results presented by @Schneider_2013 [@Myers_2015] and @Burkhart_2017 and the anatomical diagram shown in Figure \[fig:cartoon\], we assume that the N-PDF has a lognormal component in the low column density regime and a power-law component toward the high column density end. The two components are continuous at the transitional column density [@Myers_2015; @Burkhart_2017]. By defining the normalized column density on the logarithmic scale:
$$s \equiv \ln{\text{N}/\text{N}_0},$$
the distribution can be written as
$$\label{eq:piecewise}
p_s(s) =
\begin{cases}
M\frac{1}{\sqrt{2\pi}\sigma_s}\exp{\left[-\frac{\left(s-s_0\right)^2}{2\sigma_s^2}\right]},& s < s_t\\
Mp_0\exp{\left[-\alpha s\right]},& s > s_t,
\end{cases}$$
where $s_t$ is the transitional column density in the logarithmic normalized units, $p_0$ is the amplitude of the N-PDF at the transition point, and $M$ is the normalization/scaling parameter. To carry out the least-squares fitting, the N-PDF is first sampled using a binned histogram. The $\chi^2$-residual between the model and the histogram is minimized over a range of $s_t$, with a step size in $s_t$ equal to one third of the histogram bin size. For convenience, the modeled N-PDF based on this equation is called a “lognormal + power-law” model in discussions below.
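The model of Equation \[eq:piecewise\] and the grid search over $s_t$ can be sketched as follows. This is a minimal illustration with our own function names and initial-guess values; the continuity condition is folded in by evaluating the lognormal at $s_t$, an equivalent reparametrization of $M p_0 \exp[-\alpha s]$:

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_powerlaw(s, s0, sigma_s, alpha, M, s_t):
    """Piecewise N-PDF: lognormal below s_t, power law above; the power-law
    amplitude is set by continuity at s_t."""
    ln_part = np.exp(-(s - s0)**2 / (2 * sigma_s**2)) / (np.sqrt(2*np.pi) * sigma_s)
    p_t = np.exp(-(s_t - s0)**2 / (2 * sigma_s**2)) / (np.sqrt(2*np.pi) * sigma_s)
    pl_part = p_t * np.exp(-alpha * (s - s_t))
    return M * np.where(s < s_t, ln_part, pl_part)

def fit_npdf(s, hist, s_t_grid):
    """Least-squares fit at each candidate s_t; keep the lowest chi^2."""
    best = (np.inf, None, None)
    for s_t in s_t_grid:
        model = lambda s, s0, sig, a, M: lognormal_powerlaw(s, s0, sig, a, M, s_t)
        try:
            p, _ = curve_fit(model, s, hist, p0=[0.0, 0.5, 1.5, 1.0])
        except RuntimeError:
            continue  # fit failed to converge at this s_t
        chi2 = np.sum((hist - model(s, *p))**2)
        if chi2 < best[0]:
            best = (chi2, s_t, p)
    return best[1], best[2]
```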
We note that the least-squares fitting cannot be used to determine whether a lognormal + power-law model is a better fit to the observed distribution than a simple lognormal model [@Clauset_2009] and is subject to various fitting/binning choices. Thus, we limit our analyses involving $\chi^2$-fitting to the N-PDFs of the *cloud-scale* regions (the entire L1689 region or the integrated column density maps derived from the simulation cubes; see §\[sec:obsv\] and §\[sec:sim\] for how we define the regions in observations and simulations). At the cloud scale, the large number of independently sampled pixels (on the order of 10$^4$ to 10$^5$) makes the $\chi^2$-fitting results less dependent on fitting/binning choices [@Stutz_2015], and the existence of a power-law component is evident by eye (see Figure \[fig:cartoon\_obsv\] and Figure \[fig:cartoon\_sim\]). For smaller substructures within the cloud, we base our analysis on comparing (without fitting) the individual N-PDFs to the N-PDFs of entire regions. Thus, the results presented in this paper are essentially independent of the uncertainties in fitting. (See @Stutz_2015 for a comparison of various fitting/binning schemes, and @Burkhart_2015a for alternative N-PDF diagnostics that do not require fitting.)
The Lognormal Component {#sec:ln}
=======================
Observation {#sec:obsv_ln}
-----------
We investigate the lognormal component of the N-PDF of Oph L1689 and plot the total PDF in Figure \[fig:cartoon\_obsv\]. Figure \[fig:cartoon\_obsv\] shows that the lognormal component is observable only between N/N$_0$ $\sim$ 0.6 and $\sim$ 1.4 (the transition point; $A_V$ $\sim$ 4.4 mag and $\sim$ 10.2 mag). Figure \[fig:cartoon\_obsv\] also demonstrates that sampling of the lognormal component of the N-PDF is subject to the uncertainty due to the map area between N/N$_0$ $\sim$ 0.1 and $\sim$ 0.6 [$A_V$ $\sim$ 0.7 mag and $\sim$ 4.4 mag, consistent with @Lombardi_2015], and that the lognormal component is unobservable below the detection limit (N/N$_0$ $\sim$ 0.1 in this case). The uncertainty due to the map area and the detection limit leaves us with a very narrow range of well-sampled column density that can be used in an attempt to fit for the lognormal component. In the case of Oph L1689, the range of column density that is not affected by the uncertainties is $\lesssim$ 50% of the full range the fitted lognormal component spans [which includes the pixels outside the last closed contour; @Alves_2017]. Thus, a direct comparison of the width of the lognormal component to the cloud dynamics such as the sonic Mach number is *difficult and unreliable* using the column density maps based on dust tracers.

Simulation {#sec:sim_ln}
----------
Figure \[fig:cartoon\_sim\] shows that, at t = 0.6 t$_\text{free-fall}$, the 2.3pc by 2.3pc integrated column density map derived from the Enzo simulation has a lognormal component toward low column densities. The shape of the N-PDF is consistent with the N-PDF of the full 4.6pc by 4.6pc column density map derived from the entire cube [@Burkhart_2015a; @Collins_2012]. Figure \[fig:cartoon\_sim\] also shows that, when we mimic the observational constraint on the map area by changing the map size, the change in the N-PDF occurs at the low column density side of the peak column density [consistent with observations presented in §\[sec:obsv\_ln\] and Figure \[fig:cartoon\_obsv\] in this paper and in @Lombardi_2015], even though in simulations a complete sampling is possible by including the entire cube. There is no detection limit in simulations. Fitting of the lognormal component is thus stable and can be shown to correlate with physical properties [@Burkhart_2015a].

The Power-law Component and the Dendrogram Decomposition of the N-PDF {#sec:pl}
=====================================================================
Observation {#sec:obsv_pl}
-----------
Figure \[fig:cartoon\_obsv\] shows that the N-PDF of Oph L1689 has a power-law component above N/N$_0$ $\sim$ 1.4 (the transition point). In an attempt to understand the composition of the power-law component, we apply the dendrogram algorithm to find structures with physical sizes larger than 0.05 pc above a minimum value of N/N$_0$ = 0.95 [$A_K$ $\sim$ 0.8 mag, consistent with the “dense cloud” definition suggested by @Lada_2010], with `min_delta` = 0.475 in the normalized unit. (See §\[sec:dendro\] for the specifics of the *dendrogram* algorithm and the setup parameters.) The resulting Ophiuchus dendrogram is shown as the connections between panels in Figure \[fig:fancy\_obsv\].
When we examine the N-PDFs of individual “leaf” structures (color coded according to their levels in the dendrogram in Figure \[fig:fancy\_obsv\]), we find that the N-PDFs of the individual “leaf” structures fall roughly into three categories. First, there are “leaf” structures such as Structure 9, 12, and 18 (marked with “LN” in Figure \[fig:fancy\_obsv\]), which sit at the lower levels in the dendrogram and have N-PDFs mostly in the lognormal regime of the entire cloud. These are likely transient (unbound) over-densities, and their N-PDFs do not always look like a lognormal function because the sample size is small. Second, Structure 14 (marked with “LN/PL” in Figure \[fig:fancy\_obsv\]) could be at an early stage of gravitational collapse. It sits at a low level in the dendrogram tree, and has an N-PDF consisting of both a lognormal component and a power-law component. Lastly, there are several structures sitting toward the top of the dendrogram tree with N-PDFs almost entirely in the power-law regime of the entire Oph L1689. Structure 6, 8, 10, 11, 16 and 17 belong to the last category (marked with “PL” in Figure \[fig:fancy\_obsv\]). The N-PDF of each of these structures has a shape roughly resembling the power-law distribution. Some of these structures (*e.g.*, Structure 6, 8, and 10 in Figure \[fig:fancy\_obsv\]) span a smaller range of column density, while others (*e.g.*, Structure 11 and 17 in Figure \[fig:fancy\_obsv\]) are much denser with narrow $^{13}$CO (1-0) line widths suggesting that they are gravitationally bound (see a full virial analysis of the dynamics to be presented in Chen et al. 2017, in preparation). Notice that the slopes of the N-PDFs of the dense substructures are not necessarily the same as that of the entire Oph L1689. The different slopes may indicate that the structures are undergoing gravitational collapse at different evolutionary stages [@Stutz_2015; @Burkhart_2015b].
The sum of all the leaf N-PDFs, given by the grey shaded area in Figure \[fig:cartoon\_obsv\] (and in Figure \[fig:cartoon\] and Figure \[fig:cartoon\_sim\]), is a very good approximation to the observed PDF.
Following the nested structures of the dendrogram from the top-level “leaf” structures down to the “branch” structures, we find that the N-PDF of a “branch” structure containing several “leaf” structures with power-law N-PDFs has a power-law component similar to the power-law component of the N-PDF of the entire region. (For example, see the N-PDF of the branch that contains Structure 12, 11, 6, 8, and 10 in Figure \[fig:fancy\_obsv\].) Figure \[fig:cartoon\_obsv\] also shows that most of the pixels in the power-law component of the entire region are in the independent “leaf” structures in the dendrogram. Since a power-law N-PDF is analytically expected from the self-similar gravitational collapse of a cloud [@Shu_1977], these results suggest that the power-law component of an extended region is the summation of N-PDFs of dense, probably self-gravitating, substructures within the region.

Simulation {#sec:sim_pl}
----------
Similar to observations (Figure \[fig:cartoon\_obsv\]), Figure \[fig:cartoon\_sim\] shows that the N-PDF of the Enzo simulation at t = 0.6 t$_\text{free-fall}$ has a power-law component above N/N$_0$ $\sim$ 1.2. We then apply the dendrogram algorithm to the simulated column density map with the same set of setup parameters as that applied to the observations. (See §\[sec:dendro\] for details on the dendrogram setup parameters.) The dendrogram is shown as connections between panels in Figure \[fig:fancy\_sim\]. Notice that the dendrogram of the simulated column density map has a complexity similar to that of Oph L1689 (Figure \[fig:fancy\_obsv\] and Figure \[fig:fancy\_sim\]).
Figure \[fig:fancy\_sim\] shows the N-PDFs of substructures in the dendrogram. The N-PDFs of all “leaf” structures seem to sit in the power-law regime of the lognormal + power-law model fitted to the N-PDF of the entire region (Equation \[eq:piecewise\]; see §\[sec:fitting\] for details on the lognormal + power-law model). However, we can still identify “leaf” structures that have N-PDFs with shapes of a lognormal distribution (Structure 2), a lognormal + power-law distribution (Structure 3), or a power-law distribution (Structure 6, 12, 13, 15, 16, 17 and 18). The slopes of the power-law components of the denser substructures differ from each other and from the slope of the power-law component of the entire region. Similar to observations, Figure \[fig:cartoon\_sim\] shows that most of the pixels in the power-law component of the entire region are in the independent “leaf” structures. Most of these “leaf” structures also have power-law N-PDFs, albeit with different slopes. As in Oph L1689, the power-law component of the N-PDF of the selected region in the Enzo simulation is a summation of the N-PDFs of likely self-gravitating substructures within the cloud [@Shu_1977].

The Transition Point {#sec:trans}
====================
@Burkhart_2017 interpret the transition point as the density at which the turbulent energy density is equal to the thermal pressure and provide an analytic model of the transitional column density value [see also @Myers_2015]. In this section, we test this analytic model and investigate the physics behind the transitional column density value between lognormal and power-law distributions. In the following paragraphs, we present the analytic model derived by @Burkhart_2017 assuming an equilibrium between the turbulent energy density and the thermal pressure at the transitional column density. See @Myers_2015 for a more general mathematical description of the transition point.
We consider a piecewise form of the N-PDF as in Equation \[eq:piecewise\]. (See §\[sec:fitting\] for more details on the lognormal + power-law model.) By assuming that the N-PDF at the transition point is continuous and differentiable, the transition point was derived in @Burkhart_2017 as:
$$\label{eq:continuity}
s_t=\frac{1}{2}\left(2\left|\alpha\right|-1\right)\sigma_s^2 ,$$
where $s_t$ is the logarithmic normalized column density at the transition point, $\alpha$ is the slope of the power law tail, and $\sigma_s$ is the width of the lognormal component of the N-PDF.
In the strong collapse limit, where a well defined power-law tail is formed, the slope of the power-law component, $\left|\alpha\right|$, assumes a value of $\sim$ 1.5. In this limit, @Burkhart_2017 showed that the normalized column density at the transition point, N$_t$/N$_0$, can be expressed as a function of the sonic Mach number and the forcing parameter:
$$\label{eq:transition}
\frac{\text{N}_t}{\text{N}_0} \approx \left(1+b^2\mathcal{M}_s^2\right)^{A}\text{,}$$
where $b$ is the forcing parameter, varying from $\approx$ 1/3 (purely solenoidal forcing) to 1 (purely compressive forcing), and $A$ = 0.11 is the scaling constant from volume density to column density [@Federrath_2008; @Burkhart_2017]. We can then test Equation \[eq:transition\] by comparing this modeled transitional column density with the fitted transitional column density. (See §\[sec:fitting\] for details on how we fit for the transition point.) In the following subsections, we present the results of the tests in observations and simulations.
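Equation \[eq:transition\] is straightforward to evaluate numerically. The minimal Python sketch below (the function name and layout are ours, not from any published code) reproduces the modeled central values quoted in the following subsections:

```python
A = 0.11  # volume-to-column density scaling constant [@Federrath_2008; @Burkhart_2017]

def transition_column(mach_s, b, scaling=A):
    """Modeled transitional column density N_t/N_0 (Equation [eq:transition])."""
    return (1.0 + b ** 2 * mach_s ** 2) ** scaling

print(round(transition_column(4.50, 1 / 3), 2))  # Oph L1689, purely solenoidal: 1.14
print(round(transition_column(4.50, 1.0), 2))    # Oph L1689, purely compressive: 1.4
print(round(transition_column(8.14, 1 / 3), 2))  # Enzo simulation, b ~ 1/3: 1.26
```

Only central values are evaluated here; the uncertainties listed in Table \[table:trans\] come from the analysis described in the text.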
Observation {#sec:obsv_trans}
-----------
We compare the fitted transitional column density value to the modeled value [see Table \[table:trans\] in this paper; @Burkhart_2017]. Equation \[eq:transition\] shows that the normalized transition column density, N$_t$/N$_0$, depends on the sonic Mach number, $\mathcal{M}_s$. We fit Gaussian line profiles to spectra from FCRAO observations of $^{13}$CO (1-0) line emission [the COMPLETE Survey, @Ridge_2006] and find an average Mach number of Oph L1689, $\mathcal{M}_s$ = 4.50$^{+1.43}_{-1.09}$ (Table \[table:trans\]; see §\[sec:obsv\_turb\] for details on the estimation of the sonic Mach number in observations). Using Equation \[eq:transition\], we then derive the modeled transitional column density, (N/N$_0$)$_{\text{trans, model}}$ = 1.14$^{+0.12}_{-0.08}$ with the forcing parameter $b$ = 1/3 (purely solenoidal), and (N/N$_0$)$_{\text{trans, model}}$ = 1.40$^{+0.18}_{-0.15}$ with $b$ = 1 (purely compressive). Since, in observations, we do not know the forcing parameter, $b$, which varies between 1/3 and 1, the modeled transitional column density, (N/N$_0$)$_{\text{trans, model}}$, has a large uncertainty and ranges between 1.14 and 1.40.
To obtain the fitted transitional column density from the N-PDF, we first derive a binned histogram representation of the N-PDF and fit the histogram to the lognormal + power-law model. (See §\[sec:fitting\] for details on the $\chi^2$-fitting.) For the L1689 data, we find a fitted transitional column density, (N/N$_0$)$_{\text{trans, fit}}$ = 1.43$\pm$0.14 (Table \[table:trans\]). Compared to the modeled values presented in the above paragraph, we can say, with some uncertainties, that the observed N-PDF has a transition point mildly more consistent with the analytic model with a purely compressive forcing ($b$ = 1).
------------------------- ------------------------ ------------------- ----------------------------------- --------------------------------- ------------------------
Mach Number Forcing Parameter Modeled Transition Fitted Transition Whether the Prediction
$\mathcal{M}_s$ $b$ (N/N$_0$)$_{\text{trans, model}}$ (N/N$_0$)$_{\text{trans, fit}}$ Agrees with the Fit
Observation (Oph L1689) 4.50$^{+1.43}_{-1.09}$ 1/3 1.14$^{+0.12}_{-0.08}$ 1.43$\pm$0.14 No
4.50$^{+1.43}_{-1.09}$ 1 1.40$^{+0.18}_{-0.15}$ 1.43$\pm$0.14 Yes
Simulation (Enzo) 8.14$^{+2.95}_{-1.81}$ 1/3 1.26$^{+0.11}_{-0.08}$ 1.18$\pm$0.12 Yes
------------------------- ------------------------ ------------------- ----------------------------------- --------------------------------- ------------------------
[**a.** The forcing parameter ranges from 1/3 (purely solenoidal forcing) to 1 (purely compressive forcing).]{}
[**b.** The uncertainty is estimated based on the inherent uncertainties of Herschel observations.]{}
[**c.** The two values are said to agree when the ranges enclosed by the uncertainties overlap.]{}
[**d.** The Mach number is derived from the average velocity dispersion of Gaussian fits to the $^{13}$CO (1-0) molecular line emissions in the region.]{}
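Note c above defines agreement as overlap of the uncertainty ranges. A minimal sketch of that criterion (the interval endpoints below are simply the tabulated central values with their errors applied):

```python
def overlaps(lo1, hi1, lo2, hi2):
    """True when two uncertainty ranges share at least one value (note c)."""
    return lo1 <= hi2 and lo2 <= hi1

# Observation, b = 1/3: model 1.14 (+0.12 / -0.08) vs. fit 1.43 +/- 0.14
print(overlaps(1.06, 1.26, 1.29, 1.57))  # False -> "No"
# Observation, b = 1: model 1.40 (+0.18 / -0.15) vs. fit 1.43 +/- 0.14
print(overlaps(1.25, 1.58, 1.29, 1.57))  # True -> "Yes"
# Simulation: model 1.26 (+0.11 / -0.08) vs. fit 1.18 +/- 0.12
print(overlaps(1.18, 1.37, 1.06, 1.30))  # True -> "Yes"
```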
Simulation {#sec:sim_trans}
----------
@Burkhart_2017 have verified that the transitional column density can be described by the above analytic expression (§\[sec:trans\]), using a set of Enzo simulations with various setups of the sonic Mach number and the Alfvénic Mach number [see Figure 3 in @Burkhart_2017]. Since it is possible to completely sample the lognormal + power-law distribution in simulations, @Burkhart_2017 followed Equation \[eq:continuity\] [that is Equation 6 in @Burkhart_2017] and demonstrated that the fitted transitional column density matches the modeled value as a function of the fitted width of the lognormal component, $\sigma_s$, and the fitted slope of the power-law component, $\alpha$ (Equation \[eq:continuity\]).
In this section, we present a separate verification of the analytic model of the transition point, using the 2.3pc by 2.3pc integrated column density map (Figure \[fig:map\_sim\]; see also §\[sec:sim\] for details on how the map was made). Instead of modeling the transitional column density value based on the fitted width of the lognormal component and of the fitted slope of the power-law component [see Equation \[eq:continuity\]; @Burkhart_2017], we follow Equation \[eq:transition\] and model the transitional column density based on the Mach number, $\mathcal{M}_s$, and the forcing parameter of the simulations, $b$. Since the Mach number and the forcing parameter are independent from the fitting of the N-PDF, the test presented below following Equation \[eq:transition\] is stricter than (and *independent from*) the one presented in @Burkhart_2017, which follows Equation \[eq:continuity\] and involves fitting the N-PDF on both sides of the equation.
We calculate the sonic Mach number, $\mathcal{M}_s$, based on the 3D velocity dispersion of the 2.3pc by 2.3pc by 4.6pc cube (a 2.3pc by 2.3pc map with a 4.6pc line of sight) from which the column density map was derived. Since gravity in the 2.3pc by 2.3pc region is particularly dominant (see Figure \[fig:map\_sim\], in which multiple high-density structures are identifiable), we expect a smaller sonic Mach number (less turbulent material) than that of the entire cube [$\mathcal{M}_s$ = 9; @Collins_2012]. For the 2.3pc by 2.3pc region shown in Figure \[fig:map\_sim\], we find $\mathcal{M}_s$ = 8.14$^{+2.95}_{-1.81}$ (see §\[sec:sim\_turb\] for details on the estimation of the sonic Mach number in simulation, and also §\[sec:comparison\] for a discussion of differences in the physical properties between Oph L1689 and the Enzo simulation used in this paper). Knowing that the simulation has purely solenoidal forcing ($b$ $\approx$ 1/3), we then derive the transitional column density based on the analytic model (Equation \[eq:transition\]), (N/N$_0$)$_{\text{trans, model}}$ = 1.26$^{+0.11}_{-0.08}$. This is consistent with the fitted transitional column density, (N/N$_0$)$_{\text{trans, fit}}$ = 1.18$\pm$0.12 (Table \[table:trans\]). The result again verifies that the transition point of the N-PDF is well described by the analytic model proposed by @Burkhart_2017, at least in simulations.
Table \[table:trans\] gives an overview of comparisons between the fitted and modeled transitional column densities in observation and simulation. We see that when the transition point between the lognormal and the power-law components of an N-PDF can be fitted, the analytic model could be potentially useful for deriving the dynamics of star-forming materials from the column density distribution. Unfortunately, the ability to estimate the transitional column density in the lognormal + power-law model is limited by the uncertainty due to changing the map area used to derive the N-PDF in observations. On top of the difficulty in fitting, the forcing parameter, $b$, is usually difficult to measure in observations, adding a large uncertainty to the modeled transitional column density [see recent attempts by @Orkisz_2017; @Herron_2017; @Otto_2017].
Discussion {#sec:discussion}
==========
Our study highlights the importance of comparing observations and simulations. For example, the lognormal portion of the PDF in observations suffers biases, such as boundary effects and unresolved foreground/background contributions [@Schneider_2015a; @Lombardi_2015]. Simulations provide an avenue to study these effects as the density in the simulations is completely sampled due to mass conservation and periodic boundary conditions. However, the simulations are missing important physical effects such as feedback from stars [@Offner_2015], non-isothermal effects [@Nolan_2015] and non-ideal MHD effects [@Meyer_2014; @Burkhart_2015c]. No single simulation is able to capture all the physical processes and scales involved in star formation. Simulations, therefore, can serve only as a general reference to interpret the observations. One aspect not addressed in our study of the anatomy of the N-PDF is the influence of the magnetic field. The effect of the magnetic field on the shape and behavior of the PDF of gravitating turbulent clouds has been studied in the past [@Burkhart_2012; @Mocz_2017; @Kritsuk_2011; @Collins_2012; @Federrath_2012]. In general, the lognormal portion of the N-PDF is not strongly affected by changing the magnetic field strength [@Burkhart_2009; @Burkhart_2012; @Collins_2012; @Federrath_2012; @Burkhart_2015a]. However, studies have shown that the strength of the magnetic field can alter the slope of the power-law tail portion of the N-PDF [@Burkhart_2015a; @Mocz_2017]. These studies found that a higher magnetic field strength produces an N-PDF with a steeper power-law tail slope since the magnetic field inhibits the collapse.
The dendrogram analysis and the N-PDF of an independent structure {#sec:discussion_structure}
-----------------------------------------------------------------
The analysis presented in this paper shows that the power-law component of the N-PDF is the sum of individual substructures with power-law PDFs. Past a transition point where the shape of the PDF changes from lognormal to power-law, individual dendrogram structures show clear power-law forms, while at or below the transition point the column density PDFs of substructures primarily take on a lognormal form (e.g., Structures 11, 14, and 17 in Figure \[fig:fancy\_obsv\]). One notable deviation from the above picture is Structure 6 (see Figure \[fig:map\_obsv\] and Figure \[fig:fancy\_obsv\]) in the dendrogram analysis of the N-PDF of Oph L1689. Structure 6 sits above the transition point, so we would expect that its N-PDF should take on a power-law shape. However, the N-PDF of Structure 6 is visually indistinguishable from a lognormal distribution. Despite the fact that there are only $\sim$ 2$\times$10$^2$ pixels in Structure 6 and that the sampling of the N-PDF is thus probably incomplete, we want to ask the question: Does the lognormal shape of the N-PDF of Structure 6 mean that the substructure is, in fact, not gravitationally bound? Ongoing star formation is evident in this region on a time scale of $\sim$5$\times10^5$ years, since there are multiple young stellar objects nearby [@Gutermuth_2009]. Feedback from active star formation could have shredded the star forming materials in the region into smaller, unbound pieces like Structure 6. The shredded pieces would have N-PDFs that become increasingly steep, potentially mimicking a power-law distribution at high column density. (Similar effects of feedback on the N-PDF shape have been observed at cloud scales by @Schneider_2015a.) We also note that the Enzo simulation we compare to (Figure \[fig:fancy\_sim\]) lacks a similar lognormal substructure at these high densities, and that there is no feedback in the Enzo simulation. Local environmental effects such as feedback therefore most likely play a role in the unusual shape of Structure 6.
If an independent structure (a leaf structure in the dendrogram) is axially symmetric (*e.g.*, a filamentary structure) or radially symmetric (*e.g.*, a spherical structure), @Myers_2015 pointed out that the N-PDF is the inverse function of the column density profile along a cut perpendicular to the axis of symmetry or in the radial direction, respectively. @Myers_2015 observed that, for a filamentary structure with a Plummer-like profile perpendicular to the axis of the filament [typical of star forming filamentary structures; @Arzoumanian_2011], the inverse function of the Plummer-like column density profile deviates from the lognormal + power-law model of the N-PDF. @Myers_2015 suggested that this is likely due to the nearly constant column density within the first thermal scale length. Similarly, the inverse function of the narrow N-PDF of Structure 6 in Figure \[fig:fancy\_obsv\] would have a flat shape, consistent with a nearly constant column density profile. Is the thermal length scale then the scale limit for the N-PDF to be representative of the gravitational effect? A more detailed analysis of the velocity structure of dendrogram features such as Structure 6 in Oph L1689 (see Figure \[fig:fancy\_obsv\]) is needed to answer this question.
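The inverse-function relation of @Myers_2015 can be illustrated with a toy model. The sketch below (our construction; all parameter values are arbitrary) samples a Plummer-like filament profile on a uniform pixel grid and checks that the fraction of pixels above a column density $N_c$ equals the normalized offset $x(N_c)/X$ at which the profile drops to $N_c$; that is, the cumulative N-PDF is the normalized inverse profile:

```python
import math

# Toy filament: Plummer-like profile along a cut perpendicular to its axis.
# N0, R, X, and p are illustrative choices, not fitted values.
N0, R, X = 1.0, 0.1, 1.0   # peak column density, Plummer radius, half-width of the cut
p = 3                      # Plummer exponent; profile falls as (1 + (x/R)^2)^(-(p-1)/2)

def profile(x):
    """Column density at offset x from the filament axis."""
    return N0 / (1.0 + (x / R) ** 2) ** ((p - 1) / 2.0)

def inverse_profile(N):
    """Offset x at which the profile has dropped to N (the inverse function)."""
    return R * math.sqrt((N0 / N) ** (2.0 / (p - 1)) - 1.0)

# Uniform pixel grid along the cut, as in a map of the filament
n = 100000
columns = [profile((i + 0.5) / n * X) for i in range(n)]

# Cumulative N-PDF: the fraction of pixels with N >= N_c equals x(N_c)/X
N_c = 0.5 * N0
frac = sum(1 for N in columns if N >= N_c) / n
print(abs(frac - inverse_profile(N_c) / X) < 1e-3)  # -> True
```

Because the profile is nearly flat inside $x \lesssim R$, those pixels pile up in a narrow peak just below $N_0$, qualitatively like the narrow N-PDF of Structure 6.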
Conclusion {#sec:conclusion}
==========
We present the anatomy of the column density PDF (N-PDF) of star forming molecular clouds in both observations and simulations. By assuming a lognormal + power-law model, we examine how the lognormal component, the power-law component, and the transition point could be useful for estimating the dynamical properties of a star forming region. We also examine the uncertainties that could affect the N-PDF analysis. With the help of the dendrogram algorithm, *we demonstrate that the power-law component of an N-PDF is primarily a summation of N-PDFs of substructures inside the star forming cloud*. Most of these substructures show N-PDFs following power-law distributions, with power-law indices different from the N-PDF of the entire region. The power-law shapes and varying indices of N-PDFs suggest that these substructures could be going through different stages of gravitational collapse.
The analytic prediction of a transition point between lognormal and power-law components proposed by @Burkhart_2017 is verified independently (within uncertainties) in this work using data from both observations and simulations. We show that the transition point is generally well described by the analytic model [see Equation \[eq:transition\] in this paper; @Burkhart_2017]. The result suggests that finding the transitional column density value in a N-PDF could potentially provide information on the dynamics of the cloud. In particular, Equation \[eq:transition\] could potentially be useful for getting a general estimate of the sonic Mach number, even though it is usually difficult to determine the transition point and the forcing parameter in observations [see a recent attempt by @Orkisz_2017].
Based on the results presented in this paper, we give the following suggestions for future studies involving the N-PDF analysis. First, we do not suggest analyzing the dynamics based solely on fits to the lognormal component of the N-PDF derived from observations of dust tracers. Second, measuring the column density at the transition point between the lognormal and the power-law components could potentially provide information on the dynamical properties of a star forming region, but only when the transitional column density and the forcing parameter can be estimated. *Lastly, and most importantly, we recognize that the power-law component of an N-PDF is a summation of N-PDFs of substructures, which are likely going through various stages of gravitational collapse. Based on the results presented in this paper, we suggest combining the N-PDF analysis with the dendrogram algorithm to obtain a more complete picture of the effects of global and local environments, including gravity, turbulence, the magnetic field, and the feedback from star formation, in future attempts to analyze the column density structures of a star forming region.*
B.B. acknowledges support from the NASA Einstein Postdoctoral Fellowship. Computer time was provided through NSF TRAC allocations TG-AST090110 and TG-MCA07S014. The computations were performed on Nautilus and Kraken at the National Institute for Computational Sciences (<http://www.nics.tennessee.edu/>).
[^1]: \[fn:astrodendro\]`min_value`, `min_delta`, and `min_npix` are names of the input parameters in the *Python*-based `astrodendro` package. In this paper, we use them as shorthands for the three input parameters in the dendrogram algorithm. See <http://dendrograms.org> for documentation of `astrodendro`.
---
abstract: 'Using Jacobi’s identity, a simple formula expressing Bessel functions of integer order as simple combinations of powers and hyperbolic functions, plus higher order corrections, is obtained.'
author:
- 'V. Bârsan'
- 'S. Cojocaru'
title: Bessel functions of integer order in terms of hyperbolic functions
---
In this article we shall propose a simple formula expressing the modified Bessel functions of integer order, $I_{n},$ in terms of powers and hyperbolic functions of the same argument. It can be easily adapted for the Bessel functions $J_{n}.$
The starting point is the generalization of the Jacobi identity ([@W] p.22) used in the calculation of lattice sums [@1]:
$$\frac{1}{N}\sum_{m=0}^{N-1}\exp \left[ \frac{z}{2}\left( we^{i\frac{2\pi m}{N%
}}+\frac{1}{w}e^{-i\frac{2\pi m}{N}}\right) \right] =\sum_{k=-\infty
}^{\infty }w^{kN}\cdot I_{kN}(z). \label{1}$$
For $w=1$ and $N=2$, (\[1\]) gives a well-known formula:
$$\cosh z=\sum_{k=-\infty }^{\infty }I_{2k}(z)=I_{0}(z)+2\left[
I_{2}(z)+I_{4}(z)+...\right] , \label{2}$$
(see, for instance, [@2], eq. 9.6.39).
It is interesting to exploit (\[1\]) at $w=1$, for larger values of $N$. For $N=4$,
$$\frac{1}{2}\left( 1+\cosh z\right) =\cosh ^{2}\frac{z}{2}=I_{0}(z)+2%
\sum_{k=1}^{\infty }I_{4k}(z), \label{3}$$
and for $N=8:$
$$\frac{1}{4}(1+\cosh z+2\cosh \frac{z}{\sqrt{2}})=I_{0}(z)+2\sum_{k=1}^{%
\infty }I_{8k}(z). \label{4}$$
It is easy to see that, if the l.h.s. of these equations contains $p$ hyperbolic cosines, it provides an exact expression for the power series of $I_{0}(z)$ cut off at the $z^{4p}$ term. The generalization of (\[4\]) for $N=4p$, with $p$ an arbitrary integer, is indeed:
$$\frac{1}{2p}\left[ 1+\cosh z+2\cosh \left( z\cos \frac{\pi }{2p}\right)
+...+2\cosh \left( z\cos \left( p-1\right) \frac{\pi }{2p}\right) \right]
=I_{0}(z)+2\sum_{k=1}^{\infty }I_{4pk}(z). \label{5}$$
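Identities (\[3\])–(\[5\]) are easy to verify numerically. The pure-Python sketch below (the truncation depths are our choice) sums the power series of $I_{n}$ and compares the two sides of (\[5\]) for several values of $p$ and $z$; $p=1$ reproduces (\[3\]) and $p=2$ reproduces (\[4\]):

```python
import math

def bessel_I(n, z, terms=60):
    """Modified Bessel function I_n(z) summed from its power series."""
    return sum((z / 2.0) ** (n + 2 * k) / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

def lhs(p, z):
    """Left-hand side of the N = 4p identity (a sum of p hyperbolic cosines)."""
    cs = [math.cos(j * math.pi / (2 * p)) for j in range(1, p)]
    return (1.0 + math.cosh(z) + 2.0 * sum(math.cosh(c * z) for c in cs)) / (2 * p)

def rhs(p, z, kmax=5):
    """Right-hand side: I_0(z) plus the rapidly decaying tail of I_{4pk}(z) terms."""
    return bessel_I(0, z) + 2.0 * sum(bessel_I(4 * p * k, z) for k in range(1, kmax + 1))

dev = max(abs(lhs(p, z) - rhs(p, z)) for p in (1, 2, 3) for z in (0.5, 2.0, 4.0))
print(f"max |lhs - rhs| = {dev:.1e}")
```

The residual is at the level of double-precision round-off, as expected from the rapid decay of the $I_{4pk}(z)$ tail.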
Because
$$I_{n}(-iz)=i^{-n}J_{n}(z), \label{6}$$
our result (\[5\]) can be written as:
$$\frac{1}{2p}\left[ 1+\cos z+2\cos \left( z\cos \frac{\pi }{2p}\right)
+...+2\cos \left( z\cos \left( p-1\right) \frac{\pi }{2p}\right) \right] =
\label{7}$$
$$=J_{0}(z)+2\sum_{k=1}^{\infty }J_{4pk}(z).$$
It is easy to obtain formulae similar to (\[5\]) for any modified Bessel function of integer order. Let us introduce the notations:
$$c_{1}=\cos \frac{\pi }{2p},\qquad ...\qquad c_{p-1}=\cos \frac{p-1}{2p}\pi .
\label{8}$$
and let us define the functions:
$$S_{q}\left( z\right) =\sinh z+2\left( c_{1}\right) ^{q}\sinh \left(
c_{1}z\right) +...+2\left( c_{p-1}\right) ^{q}\sinh \left( c_{p-1}z\right)
,\qquad q>0 \label{9}$$
$$C_{q}\left( z\right) =\cosh z+2\left( c_{1}\right) ^{q}\cosh \left(
c_{1}z\right) +...+2\left( c_{p-1}\right) ^{q}\cosh \left( c_{p-1}z\right)
,\qquad q\geq 0 \label{10}$$
We have:
$$\frac{dS_{q}\left( z\right) }{dz}=C_{q+1}\left( z\right) ;\qquad \frac{%
dC_{q}\left( z\right) }{dz}=S_{q+1}\left( z\right) \label{11}$$
Applying recursively the formula ([@Gradst] 8.486.4):
$$\frac{d}{dz}I_{n}\left( z\right) -\frac{n}{z}I_{n}\left( z\right)
=I_{n+1}\left( z\right) \label{12}$$
and the notation:
$$T_{n}\left( z\right) =\left( \frac{1}{z}\frac{d}{dz}\right) ^{n}C_{0}\left(
z\right) , \label{13}$$
we get:
$$\frac{1}{2p}z^{n}T_{n}=I_{n}\left( z\right) +2z^{n}\left( \frac{1}{z}\frac{d%
}{dz}\right) ^{n}\sum_{k=1}^{\infty }I_{4pk}(z). \label{14}$$
For $n=1,2,3,4:$
$$T_{1}\left( z\right) =z^{-1}S_{1}\left( z\right) ,\qquad T_{2}\left(
z\right) =-z^{-3}S_{1}\left( z\right) +z^{-2}C_{2}\left( z\right) ,
\label{15}$$
$$T_{3}\left( z\right) =3z^{-5}S_{1}\left( z\right) -3z^{-4}C_{2}\left(
z\right) +z^{-3}S_{3}\left( z\right) , \label{16}$$
$$T_{4}\left( z\right) =-15z^{-7}S_{1}\left( z\right) +15z^{-6}C_{2}\left(
z\right) -6z^{-5}S_{3}\left( z\right) +z^{-4}C_{4}\left( z\right) .
\label{17}$$
The general expressions are:
$$T_{2n}=z^{-2n}\left[ \alpha _{1}^{\left( 2n\right) }z^{-2n+1}S_{1}+\alpha
_{2}^{\left( 2n\right) }z^{-2n+2}C_{2}+...+\alpha _{2n}^{\left( 2n\right)
}C_{2n}\right] , \label{18}$$
$$T_{2n+1}=z^{-2n-1}\left[ \alpha _{1}^{\left( 2n+1\right)
}z^{-2n}S_{1}+\alpha _{2}^{\left( 2n+1\right) }z^{-2n+1}C_{2}+...+\alpha
_{2n+1}^{\left( 2n+1\right) }S_{2n+1}\right] , \label{19}$$
We get:
$$\alpha _{1}^{\left( n\right) }=-\alpha _{2}^{\left( n\right) }=\left(
-1\right) ^{n+1}\left( 2n-3\right) !!,\qquad \alpha _{n-1}^{\left( n\right)
}=-\frac{\left( n-1\right) n}{2},\qquad \alpha _{n}^{\left( n\right) }=1.
\label{20}$$
and the following recurrence relation for the coefficients $\alpha_{q}^{\left( n\right) }$:
$$\alpha _{n-p}^{\left( n\right) }=\alpha _{n-p-1}^{\left( n-1\right) }-\left(
n+p-2\right) \alpha _{n-p}^{\left( n-1\right) },\qquad 2\leqslant p\leqslant
n-3. \label{21}$$
Other general expressions of the coefficients are:
$$\alpha _{3}^{\left( n\right) }=\left( -1\right) ^{n+1}\left( n-2\right)
\left( 2n-5\right) !! \label{22}$$
$$\alpha _{4}^{\left( n\right) }=\left( -1\right) ^{n}\left( n-3\right)
!\left\{ \frac{2^{n-4}}{0!}1!!+\frac{2^{n-3}}{1!}3!!+...+\frac{2^{0}}{\left(
n-4\right) !}\left( 2n-7\right) !!\right\} . \label{23}$$
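The recurrence (\[21\]) follows from applying $\left( \frac{1}{z}\frac{d}{dz}\right)$ to the general term of $T_{n-1}$ and using (\[11\]). The short sketch below (our implementation) builds the coefficient tables this way and reproduces (\[15\])–(\[17\]), along with spot checks of the closed forms (\[20\]) and (\[22\]):

```python
def t_coefficients(n_max):
    """Coefficient tables alpha_q^{(n)} of T_n = z^{-n} sum_q alpha_q z^{-n+q} F_q,
    where F_q = S_q for odd q and C_q for even q, so that F_q' = F_{q+1} (Eq. 11).

    Applying (1/z)(d/dz) to a term alpha * z^{-(2n-2-q)} F_q of T_{n-1} gives
    -(2n-2-q) alpha z^{-(2n-q)} F_q + alpha z^{-(2n-1-q)} F_{q+1}, i.e. the
    recurrence alpha_q^{(n)} = alpha_{q-1}^{(n-1)} - (2n-2-q) alpha_q^{(n-1)}.
    """
    alphas = {1: [1]}                                   # T_1 = z^{-1} S_1
    for n in range(2, n_max + 1):
        prev = alphas[n - 1]
        cur = []
        for q in range(1, n + 1):
            a = prev[q - 2] if q >= 2 else 0            # alpha_{q-1}^{(n-1)}
            b = prev[q - 1] if q <= n - 1 else 0        # alpha_q^{(n-1)}
            cur.append(a - (2 * n - 2 - q) * b)
        alphas[n] = cur
    return alphas

coeffs = t_coefficients(6)
print(coeffs[4])  # -> [-15, 15, -6, 1], the coefficients of T_4 in (17)
```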
Ignoring the series on the r.h.s. of (\[14\]), we obtain approximate expressions for $I_{n}$. Let us give these expressions here for $p=2$ and $n=0,1,2,3$:
$$I_{0}^{\left( ap\right) }(z)=\frac{1}{4}\cdot \left( 1+\cosh z+2\cosh \frac{z%
}{\sqrt{2}}\right) , \label{24}$$
$$I_{1}^{\left( ap\right) }\left( z\right) =\frac{1}{4}\cdot \left( \sinh z+%
\sqrt{2}\sinh \frac{z}{\sqrt{2}}\right) , \label{25}$$
$$I_{2}^{\left( ap\right) }\left( z\right) =\frac{1}{4}\cdot \left( -\frac{1}{z%
}\left( \sinh z+\sqrt{2}\sinh \frac{z}{\sqrt{2}}\right) +\cosh z+\cosh \frac{%
z}{\sqrt{2}}\right) , \label{26}$$
$$I_{3}^{\left( ap\right) }\left( z\right) =\frac{1}{4}\left[ \frac{3}{z^{2}}
\left( \sinh z+\sqrt{2}\sinh \frac{z}{\sqrt{2}}\right) -\frac{3}{z}\left(
\cosh z+\cosh \frac{z}{\sqrt{2}}\right) +\left( \sinh z+\frac{1}{\sqrt{2}}
\sinh \frac{z}{\sqrt{2}}\right) \right] . \label{27}$$
According to Table 1, even for this very small value of $p$, the approximation provided by these functions for “moderate” values of the argument $(z\lesssim 4)$ is very good.
Table 1

  ------------------------------------------------- --------------------- --------------------- --------------------- ---------------------
  $z$                                               $1$                   $2$                   $3$                   $4$
  $\frac{I_{0}^{\left( ap\right) }-I_{0}}{I_{0}}$   $1.6\times 10^{-7}$   $2.4\times 10^{-5}$   $3.3\times 10^{-4}$   $1.7\times 10^{-3}$
  $\frac{I_{1}^{\left( ap\right) }-I_{1}}{I_{1}}$   $2.3\times 10^{-6}$   $1.4\times 10^{-4}$   $1.2\times 10^{-3}$   $4.4\times 10^{-3}$
  $\frac{I_{2}^{\left( ap\right) }-I_{2}}{I_{2}}$   $7.1\times 10^{-5}$   $10^{-3}$             $4.5\times 10^{-3}$   $1.2\times 10^{-2}$
  $\frac{I_{3}^{\left( ap\right) }-I_{3}}{I_{3}}$   $1.8\times 10^{-3}$   $7.3\times 10^{-3}$   $1.7\times 10^{-2}$   $3\times 10^{-2}$
  ------------------------------------------------- --------------------- --------------------- --------------------- ---------------------
It is visible that the precision of the approximation decreases with the order $n$ of the Bessel function $I_{n}$. We can increase it arbitrarily by increasing the value of $p$.
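The $I_{0}$ row of Table 1 can be reproduced in a few lines of pure Python (the series truncation depth is our choice):

```python
import math

def bessel_I(n, z, terms=60):
    """I_n(z) summed from its power series (enough terms for double precision)."""
    return sum((z / 2.0) ** (n + 2 * k) / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

def I0_ap(z):
    """Approximation (24): the p = 2 truncation of (5) for I_0."""
    return 0.25 * (1.0 + math.cosh(z) + 2.0 * math.cosh(z / math.sqrt(2.0)))

for z in (1.0, 2.0, 3.0, 4.0):
    rel = (I0_ap(z) - bessel_I(0, z)) / bessel_I(0, z)
    print(f"z = {z:.0f}: relative error {rel:.1e}")
```

The printed relative errors agree with the tabulated values, from $1.6\times 10^{-7}$ at $z=1$ to $1.7\times 10^{-3}$ at $z=4$.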
The extension of these results to Bessel functions of real argument is trivial, using the formula (\[6\]) and:
$$S_{q}\left( -iz\right) =-i\left[ \sin z+2c_{1}^{q}\sin \left( c_{1}z\right)
+...+2c_{p}^{q}\sin \left( c_{p}z\right) \right] , \label{28}$$
$$C_{q}\left( -iz\right) =\cos z+2c_{1}^{q}\cos \left( c_{1}z\right)
+...+2c_{p}^{q}\cos \left( c_{p}z\right) . \label{29}$$
In conclusion, we have proposed a controlled analytic approximation for Bessel functions of integer order. The first $4p-n$ terms of the series representation of $I_{n}$ are generated exactly by the first $4p-n$ terms of the series of the elementary functions on the l.h.s. of (\[14\]). Our formulae can therefore be used, for instance, to find the series expansions of powers of Bessel functions, a subject discussed recently by Bender et al. [@Bender].
The results presented in this paper can be applied to a large variety of problems, mainly with cylindrical symmetry, involving Bessel functions at “moderate” arguments. They may also provide a useful “visualization” of $J_{n}$ and $I_{n}$ in terms of elementary functions. The method cannot be used for asymptotic problems.
[9]{} Watson G N 1948 A Treatise on the Theory of Bessel Functions (Cambridge: Cambridge University Press)
Cojocaru S 2006 *Int. J. Mod. Phys*. **20** 593
Abramowitz M and Stegun I A 1972 Handbook of Mathematical Functions (New York: Dover Publications)
Gradstein I S and Ryzhik I M 1980 Tables of Integrals, Series and Products (New York, London: Academic Press)
Bender C M, Brody D C and Meister B K 2003 *J.Math.Phys*. **44** 309
---
abstract: 'Using the Submillimeter Array, we have made the first high angular resolution measurements of the linear polarization of Sagittarius A\* at submillimeter wavelengths, and the first detection of intra-day variability in its linear polarization. We detected linear polarization at 340 GHz (880 [$\mu$m]{}) at several epochs. At the typical resolution of 1$\farcs$4$\times$2$\farcs$2, the expected contamination from the surrounding (partially polarized) dust emission is negligible. We found that both the polarization fraction and position angle are variable, with the polarization fraction dropping from 8.5% to 2.3% over three days. This is the first significant measurement of variability in the linear polarization fraction in this source. We also found variability in the polarization and total intensity within single nights, although the relationship between the two is not clear from these data. The simultaneous 332 and 342 GHz position angles are the same, setting a one-sigma rotation measure (RM) upper limit of $7\times10^5$ [rad m$^{-2}$]{}. From position angle variations and comparison of “quiescent” position angles observed here and at 230 GHz we infer that the RM is a few$\times10^5$ [rad m$^{-2}$]{}, a factor of a few below our direct detection limit. A generalized model of the RM produced in the accretion flow suggests that the accretion rate at small radii must be low, below $10^{-6}-10^{-7}$ [$M_\sun$ yr$^{-1}$]{} depending on the radial density and temperature profiles, but in all cases below the gas capture rate inferred from X-ray observations.'
author:
- 'Daniel P. Marrone, James M. Moran, Jun-Hui Zhao, Ramprasad Rao'
title: 'Interferometric Measurements of Variable 340 GHz Linear Polarization in Sagittarius A\*'
---
Introduction
============
The radio source Sagittarius A\* ([Sgr A\*]{}) has been conclusively identified in the radio and infrared with a black hole of mass $\sim3.5\times10^6 M_{\odot}$ at the center of our galaxy [@ReidBrun04; @SchodelE03; @GhezE05; @EisenhauerE05]. [[Sgr A\*]{}]{} is the nearest super-massive black hole, 100 times closer than its nearest neighbor, M31\*, and therefore should provide a unique opportunity to understand the physics and life cycle of these objects. For a black hole of its size, [[Sgr A\*]{}]{} is extremely under-luminous, only a few hundred solar luminosities and $10^{-8} L_{Edd}$. This surprisingly low luminosity has motivated many theoretical and observational efforts to understand the processes at work very near to [[Sgr A\*]{}]{}.
Accretion models of [[Sgr A\*]{}]{} generally seek to explain its faintness through inefficient radiative and accretion processes. A variety of physical mechanisms can be invoked to suppress accretion and radiation, including convection [@QuatGruz00-CDAF], jets [@FalckeE93], advection of energy stored in non-radiating ions [@NaraYi94], and winds [@BlandBegel99]. Many models incorporating combinations of these and other phenomena are able to account for the spectrum and low luminosity of [[Sgr A\*]{}]{}. Therefore, the physics of this source are not well constrained by these observations alone.
In recent years, millimeter and submillimeter polarimetry has emerged as an important tool for studies of [Sgr A\*]{}. Linear polarization and its variability can be used to understand the structure of the magnetic field in the emission region and turbulence in the accretion flow, and possibly to constrain the mechanisms responsible for the multi-wavelength variability of this source. Through Faraday rotation of the linear polarization, we can examine the density and magnetic field distributions along the line of sight, and eventually, in the context of more comprehensive models of the accretion flow structure, infer an accretion rate at the inner regions of the accretion flow [@QuatGruz00-LP; @Agol00; @MeliaLiuCoker00].
Previous observations of the linear polarization of [Sgr A\*]{} have found low ($<$1%) upper limits at 22, 43, and 86 GHz [@BowerE99-lp], with a 2% limit at 112 GHz [@BowerE01]. The lowest frequency detection of linear polarization is at 150 GHz [@AitkenE00], suggesting that these polarimetric probes of [Sgr A\*]{} can only be exploited at short millimeter and submillimeter wavelengths. @AitkenE00 found that the polarization fraction rises steeply from 150 to 400 GHz, although these observations were made with a single-aperture instrument and therefore required careful removal of contaminant emission within the telescope beam. The steep spectrum and a jump in the polarization position angle between 230 and 350 GHz in the @AitkenE00 data have been taken as evidence of a transition to optically thin synchrotron emission [e.g., @AitkenE00; @Agol00; @MeliaLiuCoker00]. Subsequent interferometric monitoring of the 230 GHz polarization, with angular resolution sufficient to avoid contamination from the surrounding emission, has shown that the 230 GHz polarization fraction appears to remain constant over 5 years, despite variations in the position angle on month to year timescales [@BowerE03; @BowerE05]. This variability reduces the significance of the observed position angle jump and demonstrates the need for contemporaneous measurements at multiple frequencies. @BowerE05 attribute the variations in the 230 GHz polarization to few$\times10^5$ [rad m$^{-2}$]{} changes in the rotation measure (RM), probably in the accretion medium, rather than to changes in the intrinsic source polarization. As of yet, no observations have been able to determine the RM, but they can place upper limits on the magnitude of the RM and infer temporal variations that are within a factor of a few of the upper limits.
Circular polarization has also been detected in this source, with a rising polarization fraction from 1.4 to 15 GHz [@BowerE99-cp; @SaultMacq99; @BowerE02]. Some models seeking to explain the millimeter/submillimeter linear polarization have also predicted appreciable circular polarization at these high frequencies due to the conversion of linear to circular polarization in a turbulent jet [@BeckertFalcke02; @Beckert03]. However, measurements to date at or above 100 GHz [e.g., @TsuboiE03; @BowerE03; @BowerE05] have not shown circular polarization at the percent level.
The Submillimeter Array (SMA) has the potential to contribute many new capabilities to these studies. It provides the first opportunity to measure the polarization above 230 GHz at angular resolution sufficient to separate [Sgr A\*]{} from its surroundings. Its large bandwidth (2 GHz per sideband), low latitude, and dry site make it far more sensitive for studies of this southern source than the 230 GHz observations of @BowerE03 [@BowerE05], which were made with the Berkeley-Illinois-Maryland Association array at Hat Creek, California. Given the sensitivity and the large (10 GHz) sideband separation, 340 GHz polarimetry with the SMA should improve limits on the RM, and future 230 GHz polarimetry may measure it directly. These advantages also apply to measurements of variability in total intensity and polarization, and of circular polarization. Here we present the first high angular resolution observations of the submillimeter polarization of [Sgr A\*]{}, using the newly dedicated SMA and its polarimetry system. Our observations and reduction are discussed in § \[s-obs\], the data and their relation to previous polarimetry in this source in § \[s-res\], and the implications of these new results in § \[s-disc\]. We offer concluding remarks in § \[s-concl\].
Observations {#s-obs}
============
[[Sgr A\*]{}]{} was observed on several nights in 2004 using the Submillimeter Array[^1] [@Blundell04; @HoMoranLo04]. The observing dates, zenith opacity, number of antennas used in the reduction, and on-source time are given in Table \[t-obslog\]. The local oscillators were tuned to a frequency of 336.7 GHz, centering the 2 GHz wide upper and lower sidebands (USB and LSB) on 341.7 and 331.7 GHz, respectively. This frequency choice avoided strong spectral lines and provided a reasonable match to the frequency response of the SMA polarimetry hardware, as discussed below. Our [Sgr A\*]{} tracks generally included source elevations between 20$^\circ$ and 41$^\circ$ (transit), a period of seven hours, although weather, calibration, and technical problems caused variations in the coverage. In the SMA “Compact-North” configuration we sampled projected baselines between 8 and 135 [k$\lambda$]{}. The average synthesized beam was approximately $1\farcs4\times2\farcs2$. According to the estimate in @AitkenE00, polarized emission within the $14''$ beam of the JCMT at 350 GHz contributes 100 mJy of polarized flux density. With a beam smaller by a factor of 60, and reduced sensitivity to large-scale emission, we expect this contaminant to be negligible in our data.
[ccccccc]{} 2004 May 25 & & 0.16 & & 7 & 100\
2004 May 26 & & 0.28 & & 6 & 160\
2004 July 5& & 0.11 & & 7 & 160\
2004 July 6& & 0.15 & & 7 & 180\
2004 July 7& & 0.29 & & 6 & 170\
2004 July 14 & & 0.23 & & 6 & 100\
Each SMA antenna was equipped with a single linearly polarized (LP) feed in each of its three observing bands. Ideally, interferometric observations of linear polarization are made with dual circularly-polarized (CP) feeds as they separate the total intensity (Stokes I) from the linear polarization Stokes parameters (Q and U). For polarimetry we have converted the 340 GHz LP feeds to left- and right-circularly polarized (LCP and RCP) feeds using positionable quartz and sapphire quarter-wave plates. The polarization handedness was selected by switching the angular position of the waveplate crystal axes between two positions $\pm45^\circ$ from the polarization angle of the receiver. Although we could only measure a single polarization in each antenna at a given time, we sampled all four polarized correlations (LL, LR, RL, RR) on each baseline by switching antennas between LCP and RCP in period-16 Walsh function patterns [e.g., @Rao99]. For 20-second integrations, a full cycle required just under seven minutes. These observations were made during the commissioning phase of the SMA polarimetry hardware; details of this instrument can be found in Marrone (2005, in preparation).
The conversion of LP to CP was not perfect, but we calibrated the (frequency-dependent) leakage of cross-handed polarization into each CP state of each antenna in order to properly determine source polarizations. We used a long observation of a polarized point source (in this case the quasar 3C279) to simultaneously solve for the quasar polarization and leakage terms [e.g., @SaultE96]. This polarization calibration was performed twice, on May 25 and July 14, yielding consistent leakages. The derived polarization leakages were at or below 3% in the USB and 5% in the LSB, with the exception of antenna 3, which used a sapphire waveplate with different frequency response and poorer performance (6% LSB leakage) than the other waveplates. Theoretical considerations of our design suggest that the real components of the L$\rightarrow$R and R$\rightarrow$L leakages should be identical for a given waveplate at a given frequency, and a comparison of the results on the two nights (a total of four measurements of each real component) shows that the rms variations in the measured leakage terms were below 1% for all antennas except antenna 7. One measured leakage on July 14 was responsible for this antenna’s large rms, and because of the disagreement between the real parts of the L$\rightarrow$R and R$\rightarrow$L leakages we know that this measurement was in error. Using the same comparison on the other antennas we found that on average the solutions for this date were of poorer quality, probably due to the difference in weather. Accordingly, we adopted the May 25 leakage values for all dates, although that required that we not use antenna 8, which was absent from that calibration track. Errors in the leakage calibration produce effects of varying importance, as outlined in @SaultE96; the most important for our purposes is the contamination of Q and U by Stokes I due to errors in the determination of the leakage calibrator polarization.
We have examined this effect by comparing the Q and U fractions across sidebands on the high signal-to-noise 3C279 data sets of May 25 and July 14; the two sidebands should give identical measurements of Q and U from the source, and differences can be ascribed to noise in the images and the difference of the independent errors in the leakage solutions in the two sidebands. With this procedure we found no inter-sideband differences that were consistent across the two data sets, and the differences present were consistent with the noise level, roughly 0.3% or smaller. Because an important part of our analysis is the comparison of position angles across sidebands, we had to ensure that the calibration did not create a position angle offset between the sidebands. Fortunately, although leakage errors could introduce spurious Stokes Q or U polarization, the phase difference between the RCP and LCP feeds, corresponding to a rotation of the sky polarization, is identically zero because each pair of CP feeds is in reality a single LP feed looking through both crystal axes of the same waveplate. Therefore, the only way to create a relative position angle difference between the sidebands would be through the leakage errors and the resulting contamination of Q and U, an effect which appears to be small in our data.
The flux density scale was derived from observations of Neptune on all nights except May 25 and July 14. We expect the absolute calibration to be accurate to about 25% on these nights. The May 25 flux density scale was transferred from three quasars that were also observed on May 26; these appeared to have the same relative flux densities on both nights to better than 10%, consistent with the overall uncertainty on that night, so we do not expect that the May 25 flux densities are any more uncertain than the others. The July 14 data were obtained in an engineering track primarily aimed at obtaining a second polarization calibration, so only three sources are present ([Sgr A\*]{}, 3C279, and 1743-038). Fortunately, 1743-038 has been very stable during more than two years of monitoring observations with the SMA (an rms flux density variation of only 20% in that time), with even smaller ($<10$%) variations observed from July 5$-$7, so we have used it as our flux density standard for the final track.
The data were averaged over the 7 minute polarization cycle to simulate simultaneous measurement of all four polarized visibilities, then phase self-calibrated using the LL and RR visibilities. Quasars were interleaved into the observations of [Sgr A\*]{} to allow variability monitoring and independent gain calibration. Transferring gains from the quasars, rather than self-calibrating, generally resulted in slightly lower signal-to-noise but did not change the polarization. We attribute the increased noise ($\sim20$%) to the 16$^\circ-$40$^\circ$ angular separation between [Sgr A\*]{} and the quasars. Following calibration, each sideband was separately imaged in Stokes I, Q, U, and V, using only baselines longer than 20 [k$\lambda$]{}, and then cleaned. Sample Stokes images are shown in Figure \[f-stokes\]. On July 14, due to poorer coverage in the [*uv*]{} plane in the short track, we increased the cut to 30 [k$\lambda$]{}. Flux densities were extracted from the center pixel of each image, and these are listed in Table \[t-pol\]. We also examined the polarization by fitting point sources to the central parts of the images; the point source flux densities matched well with those obtained from the central pixel when the signal was well above the noise, but the point source positions and peak flux densities became erratic for low signal-to-noise images (most Stokes Q and V images). Table \[t-pol\] also includes the polarization fraction ($m$), which has been corrected for the noise bias [through quadrature subtraction of a $1\sigma$ noise contribution, e.g., @WardleKronberg74], and the electric vector position angle ($\chi$, determined as $2\chi=\mathrm{tan}^{-1}\frac{\mathrm{U}}{\mathrm{Q}}$).
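The debiasing and position angle conventions just described can be illustrated with a short sketch (the `pol` helper is ours; the input values approximate the July 5 combined row of Table \[t-pol\], with Stokes values in mJy):

```python
import math

def pol(i, q, u, sigma):
    """Debiased polarization fraction (%) and EVPA (deg, 0-180) from Stokes
    parameters, quadrature-subtracting a 1-sigma noise bias
    (Wardle & Kronberg 1974); chi follows 2*chi = atan2(U, Q)."""
    p = math.sqrt(max(q**2 + u**2 - sigma**2, 0.0))
    chi = 0.5 * math.degrees(math.atan2(u, q)) % 180.0
    return 100.0 * p / i, chi

# July 5 combined values (mJy): I = 3200, Q = 42, U = -270, sigma ~ 10
m, chi = pol(3200.0, 42.0, -270.0, 10.0)
print(f"m = {m:.2f}%, chi = {chi:.1f} deg")  # close to the tabulated 8.52% and 139.5 deg
```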
[ccccccc]{} Date / Sideband & I (Jy) & Q (mJy) & U (mJy) & V (mJy) & $m$ (%) & $\chi$ ($^\circ$)\
2004 May 25& & & & & &\
USB & 3.79 $\pm$ 0.03 & 9 $\pm$ 15 & -244 $\pm$ 15 & -5 $\pm$ 22 & 6.43 $\pm$ 0.39 & 136.1 $\pm$1.7\
LSB & 3.79 $\pm$ 0.02 & 13 $\pm$ 17 & -201 $\pm$ 17 & -9 $\pm$ 21 & 5.28 $\pm$ 0.45 & 136.8 $\pm$2.4\
Both & 3.79 $\pm$ 0.02 & 13 $\pm$ 11 & -230 $\pm$ 11 & -5 $\pm$ 17 & 6.07 $\pm$ 0.28 & 136.7 $\pm$1.3\
2004 May 26& & & & & &\
USB & 3.19 $\pm$ 0.03 & 145 $\pm$ 20 & -97 $\pm$ 20 & -14 $\pm$ 21 & 5.43 $\pm$ 0.63 & 163.0 $\pm$3.3\
LSB & 3.11 $\pm$ 0.02 & 104 $\pm$ 18 & -138 $\pm$ 18 & -10 $\pm$ 22 & 5.53 $\pm$ 0.58 & 153.5 $\pm$3.0\
Both & 3.16 $\pm$ 0.02 & 118 $\pm$ 13 & -138 $\pm$ 13 & -17 $\pm$ 19 & 5.75 $\pm$ 0.43 & 155.3 $\pm$2.1\
2004 July 5& & & & & &\
USB & 3.23 $\pm$ 0.04 & 42 $\pm$ 14 & -267 $\pm$ 14 & -37 $\pm$ 17 & 8.35 $\pm$ 0.44 & 139.5 $\pm$1.5\
LSB & 3.13 $\pm$ 0.02 & 41 $\pm$ 12 & -273 $\pm$ 12 & -19 $\pm$ 17 & 8.84 $\pm$ 0.38 & 139.3 $\pm$1.2\
Both & 3.20 $\pm$ 0.02 & 42 $\pm$ 10 & -270 $\pm$ 10 & -38 $\pm$ 13 & 8.52 $\pm$ 0.31 & 139.5 $\pm$1.0\
2004 July 6& & & & & &\
USB & 3.19 $\pm$ 0.02 & 58 $\pm$ 21 & -169 $\pm$ 21 & -15 $\pm$ 25 & 5.56 $\pm$ 0.65 & 144.4 $\pm$3.3\
LSB & 3.15 $\pm$ 0.03 & 29 $\pm$ 18 & -164 $\pm$ 18 & -16 $\pm$ 22 & 5.27 $\pm$ 0.56 & 140.1 $\pm$3.0\
Both & 3.18 $\pm$ 0.02 & 52 $\pm$ 15 & -177 $\pm$ 15 & -18 $\pm$ 19 & 5.78 $\pm$ 0.49 & 143.2 $\pm$2.4\
2004 July 7& & & & & &\
USB & 2.71 $\pm$ 0.03 & 38 $\pm$ 22 & -35 $\pm$ 22 & -8 $\pm$ 29 & 1.72 $\pm$ 0.82 & 158.8 $\pm$ 13.7\
LSB & 2.78 $\pm$ 0.04 & 31 $\pm$ 22 & -67 $\pm$ 22 & -13 $\pm$ 38 & 2.53 $\pm$ 0.80 & 147.6 $\pm$9.0\
Both & 2.75 $\pm$ 0.03 & 44 $\pm$ 17 & -49 $\pm$ 17 & -17 $\pm$ 25 & 2.32 $\pm$ 0.61 & 156.1 $\pm$7.5\
2004 July 14& & & & & &\
USB & 3.00 $\pm$ 0.03 & 37 $\pm$ 27 & -243 $\pm$ 27 & 14 $\pm$ 32 & 8.14 $\pm$ 0.91 & 139.3 $\pm$3.2\
LSB & 3.00 $\pm$ 0.03 & 29 $\pm$ 19 & -175 $\pm$ 19 & -17 $\pm$ 25 & 5.87 $\pm$ 0.64 & 139.7 $\pm$3.1\
Both & 3.02 $\pm$ 0.03 & 75 $\pm$ 16 & -236 $\pm$ 16 & -15 $\pm$ 24 & 8.17 $\pm$ 0.55 & 143.8 $\pm$1.9\
All days& & & & & &\
USB & 3.33 $\pm$ 0.02 & 57 $\pm$ 10 & -197 $\pm$ 10 & -9 $\pm$ 15 & 6.15 $\pm$ 0.29 & 143.1 $\pm$1.3\
LSB & 3.29 $\pm$ 0.02 & 49 $\pm$ 10 & -202 $\pm$ 10 & -8 $\pm$ 13 & 6.32 $\pm$ 0.29 & 141.8 $\pm$1.3\
Both & 3.31 $\pm$ 0.02 & 59 $\pm$ 7 & -204 $\pm$ 7 & -17 $\pm$ 11 & 6.39 $\pm$ 0.23 & 143.1 $\pm$1.0\
Results {#s-res}
=======
Linear Polarization
-------------------
The polarization fraction and position angle for each sideband on each night are plotted in Figure \[f-pol\]. It can be seen from the figure and the data in Table \[t-pol\] that we have clear detections of the linear polarization in both sidebands on all nights. Among the six nights of our observations, July 7 stands out for its low polarization fraction, around 2%. The polarization was only detected at the $2-3\sigma$ level in each sideband, so the polarization position angle was poorly constrained. This is the lowest linear polarization fraction measured at or above 150 GHz, the lowest frequency where polarization has been detected. The weather on this night was the poorest of all the tracks, but only marginally worse than May 26, which did not show an unusually low polarization. Other sources in the July 7 track with measurable polarization, such as 3C279, did not show a significantly lower polarization than on other nights, as one might have expected from a systematic problem in that track. An obvious systematic error would be a substantial change in the leakages with respect to previous nights; this would most easily be caused by large changes to the alignment of the polarization hardware. However, the hardware was not moved between installation on July 5 and removal after the July 7 track, and the July 5 and 6 tracks show substantially larger polarization, so this possibility seems very unlikely. Moreover, because the leakages measured on July 14 are consistent with the May 25 leakages, as discussed in § \[s-obs\], any change between July 6 and 7 would have to have been reversed when the hardware was reinstalled on July 14. This low polarization fraction, along with the unusually high polarization two nights before, clearly demonstrates that the polarization fraction is variable.
Moreover, the polarization variations are present both in the polarization fraction and the polarized flux density, even after accounting for the 25% uncertainty in the overall flux density scale, and are not merely the result of a constant polarized emission component with a changing total intensity.
Variability was also observed in the polarization position angle. The position angle over four of the nights ranged between roughly 137$^\circ$ and 143$^\circ$, with a weighted average of 139.6$^\circ$. The position angle determined for May 26 differed significantly from this range, and July 7 had an extremely uncertain position angle due to the very low polarization fraction. Neither the combined six-night data set nor the individual nights showed significant inter-sideband differences, with the possible exception of May 26. On that night $\chi_{LSB}-\chi_{USB} = (153.5^\circ\pm3.0^\circ) -
(163.0^\circ\pm3.3^\circ) = -9.5^\circ\pm4.5^\circ$, which is marginally significant for the quoted errors. As we discussed in § \[s-obs\], although it is possible for Stokes I to contaminate Q and U (which determine $\chi$), this appears to be unimportant in these data. The 0.3% limit on this effect is smaller than the Q and U errors on May 26, which are 0.6% of Stokes I. Furthermore, any other systematic source of inter-sideband position angle offsets would show up equally on all nights, but the six-night average $\chi_{LSB}-\chi_{USB}$ is 1.3$^\circ\pm$1.8$^\circ$, consistent with zero. The May 26 result is considered further in the context of a Faraday rotation measure in § \[s-discRM\].
Circular Polarization
---------------------
Neither the averaged data nor the individual nights show CP at a significant level. The greatest deviation from zero is $-38\pm13$ mJy on July 5, corresponding to $-1.2\pm0.4$%. However, in addition to the quoted error, which is the measured noise in the cleaned map, there are well-known systematic effects. The MIRIAD reduction package [@miriad] uses linearized equations when solving for the polarized leakages, ignoring second-order terms in the leakages ($d$) and linear polarization fraction. These terms contribute a systematic error in Stokes V of the form $\mathrm{I}d^2$ and $md$ [@RobertsE94], which may be of the order of a few tenths of a percent for our leakages and the polarization of [Sgr A\*]{}. Moreover, the small difference in the sample times of the LL and RR correlations on a given baseline permits gain differences, due to weather, pointing, and system changes, to introduce differences between the LL and RR visibilities that would not be present if these were actually measured simultaneously (as our reduction assumes). These gain variations contaminate Stokes V with Stokes I and make the value of V at the peak of the I map more uncertain than the map rms would indicate. The average of all six tracks shows $-0.5\pm0.3$% CP, consistent with zero, with an additional systematic error of perhaps another 0.3%. The 0.5% sum of these errors can be taken as a limit on any persistent level of CP across the six nights, and is the most stringent limit yet on CP in [Sgr A\*]{} above 90 GHz.
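A rough scale for these neglected terms, using representative values we assume from the leakages and polarization quoted above:

```python
# Order-of-magnitude size of the second-order terms (I*d^2 and I*m*d)
# dropped by a linearized leakage solution (Roberts et al. 1994).
d = 0.04  # assumed representative leakage amplitude (leakages were 3-5%)
m = 0.06  # assumed linear polarization fraction of Sgr A* (~6%)
spurious_v = d**2 + m * d  # spurious Stokes V, as a fraction of Stokes I
print(f"{100 * spurious_v:.2f}% of I")  # a few tenths of a percent
```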
Intra-day Variability
---------------------
Intra-day variability in the total intensity (Stokes I), the polarization fraction, and the position angle is shown in Figure \[f-plc\]. The July 5$-$7 observations were obtained as part of a coordinated multi-wavelength [Sgr A\*]{} monitoring program, and the observed temporal variability in Stokes I on these nights is discussed in conjunction with results at other wavelengths in @EckartE05. In order to prevent antennas with variable performance from falsely modulating Stokes I, we use only the 5 antennas with the best gain stability for these light curves. Slow variations in the gain of the other antennas are likely due to pointing errors. We have reduced the effects of changing spatial sampling of extended emission by removing the two baselines that project to less than 24 [k$\lambda$]{} (angular scales $>$9$''$) during the [Sgr A\*]{} observations. Further details of the light curve reduction can be found in @EckartE05. The variability in the linear polarization is much harder to measure; with signals one to two orders of magnitude weaker than Stokes I it is difficult to obtain reliable results from a subdivided track, and we could not be as selective about which data to exclude in the hope of removing the imprint of instrumental variations from the polarization variation. Accordingly, polarized light curves could not be reliably extracted for May 26 and July 14, due to poor weather, nor for July 7, due to both weather and the very low polarization fraction. The remaining three nights have been subdivided into two or three segments at boundaries in the Stokes I curves and the polarization has been extracted as described in § \[s-obs\]. The large (160 minute) gap on May 25, due to instrument difficulties, served as one of the boundaries.
A great deal of variability is visible in the Stokes I curve on all three nights, with the most notable feature being the $\sim1.5$ Jy difference between the flux densities of the first and second halves of the May 25 data. No such difference shows up in the light curve of the calibrator, 1921$-$293, a source at nearly identical declination, suggesting that this result is not an instrumental artifact. Clear polarization variability is also measured on May 25 and July 6 in both $m$ and $\chi$. At all times the position angles in the USB and LSB are found to be very similar, as was observed in the full track averages reported above.
Discussion {#s-disc}
==========
Rotation Measure {#s-discRM}
----------------
The rotation measure associated with a plasma screen located between the source and observer can be inferred from the measurement of $\chi$ at two frequencies, since it introduces a frequency dependent change in the position angle given by $$\chi(\nu) = \chi_0 + \frac{c^2}{\nu^2}\mathrm{RM} ,
\label{e-chi}$$ where the RM is given by [e.g., @GardnerWhiteoak66] $$\mathrm{RM} = 8.1\times10^5 \int n_e \textit{\textbf{B}}
\cdot d\textit{\textbf{l}}
\label{e-RM}$$ for electron density $n_e$ in cm$^{-3}$, path length $d\textit{\textbf{l}}$ in parsecs, and magnetic field ***B*** in Gauss. The greatest obstacle to such a detection, as previously noted, is the variability in the polarization, which may prevent polarization measured at different times from being reliably compared.
The best method for measuring the RM from our data comes from the observed difference in the simultaneous position angles in the USB and LSB. Applying equation (\[e-chi\]) to the two sideband frequencies of these observations, and for position angles in degrees, we obtain $$\mathrm{RM} = 3.7\times10^5 \left(\chi_{LSB}-\chi_{USB}\right) .
\label{e-RMa}$$ Equation (\[e-RMa\]) implicitly assumes that the Faraday rotation occurs outside of the plasma responsible for the polarized emission. This assumption seems reasonable for [Sgr A\*]{}: VLBI measurements [@KrichbaumE98; @ShenE05; @BowerE04] suggest intrinsic sizes of $13-24 r_S$ at 215, 86, and 43 GHz, and for reasons described in § \[s-mdot\] we expect little contribution to the RM inside $300 r_S$. One other potential complication arises if the source polarization changes with radius and the two frequencies being compared probe different radii. For our 3% sideband separation, and assuming that the polarized submillimeter emission is thermal synchrotron [as is expected in ADAF models; @YuanE03], we expect a 5% opacity difference between our sidebands, while for non-thermal synchrotron [taking an electron energy spectral index of 2-3.5, e.g., @MarkoffE01; @YuanE03] the difference is 9-12%. Emission will be contributed from a range of radii around the $\tau=1$ surfaces, so we would have to postulate a large gradient in the source polarization to produce a large intrinsic inter-sideband polarization difference over such a small frequency range. Finally, the 2 GHz bandwidth at 340 GHz limits the allowed RM to approximately $2\times10^7$ [rad m$^{-2}$]{} if polarization is detected, as this RM would rotate the polarization by more than a radian across the band and wash out the signal (bandwidth depolarization). For highly polarized emission the vector average of the polarization may still be detectable but the position angles of the two sidebands are very unlikely to agree in this case. We can therefore ignore the possibility of full 180$^\circ$ wraps between sidebands, as a wrap requires a RM of $7\times10^7$ [rad m$^{-2}$]{}.
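The coefficient in equation (\[e-RMa\]) follows directly from the sideband wavelengths; a quick numerical check (sideband centers from § \[s-obs\]):

```python
import math

C = 2.99792458e8  # speed of light, m/s

def rm_per_degree(nu_low, nu_high):
    """RM (rad/m^2) that rotates the position angle by 1 degree between
    two observing frequencies (Hz), via chi = chi0 + RM * (c/nu)^2."""
    dlam2 = (C / nu_low) ** 2 - (C / nu_high) ** 2  # difference in lambda^2, m^2
    return math.radians(1.0) / dlam2

coeff = rm_per_degree(331.7e9, 341.7e9)
print(f"{coeff:.2e}")  # ~3.7e5 rad/m^2 per degree, as in eq. (e-RMa)
```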
As is clear from Table \[t-pol\], we do not see a significant change in the position angle between the two SMA sidebands on most of the observing nights (disregarding the uncertain position angle of July 7). In the most sensitive track, July 5, the sideband difference places a one-sigma limit of $7.1\times10^5$ [rad m$^{-2}$]{} on the RM on that particular night, which is the most sensitive limit to date from simultaneous interferometric observations. If the full data set is considered together (i.e., with Stokes images derived from the ensemble of data), the limit drops by a small amount to $6.8\times10^5$ [rad m$^{-2}$]{}, although if the RM is varying between observations this average will not actually represent a measurement of a RM. It should be noted here that the broadband observations of @AitkenE00 were able to place a similar limit of approximately $5\times10^5$ [rad m$^{-2}$]{} on the RM in August 1999 because of the large bandwidth of their 150 GHz bolometer.
The May 26 sideband difference of $-9.5^\circ\pm4.5^\circ$ is possibly significant, with an inferred rotation measure of $(-3.5\pm1.7)\times10^6$ [rad m$^{-2}$]{}. If this RM had been present on the previous night it would have shown up as a similarly large sideband difference, instead of the observed $0.7^\circ\pm3.1^\circ$, corresponding to a RM of $(+0.3\pm1.1)\times10^6$ [rad m$^{-2}$]{}. We can check the large RM by comparing the position angles on May 25 and 26, on the assumption that the emitted polarization ($\chi_0$ from eq. \[\[e-chi\]\]) is constant over timescales of a few days and observed position angle changes are due to RM changes. At this frequency, the relationship between the position angle change ($\Delta\chi$, in degrees) and the RM change is (see eq. \[\[e-chi\]\]) $$\Delta\mathrm{RM} = 2.2\times10^4 \Delta\chi .
\label{e-RMb}$$ We observed an increase in the position angle from May 25 to May 26 of $18.6^\circ\pm2.5^\circ$. If this is not a change in the intrinsic polarization, it corresponds to an increase in the RM of $4\times10^5$ [rad m$^{-2}$]{}, inconsistent with the small sideband difference on May 25 and large difference on May 26. The position angle is 180$^\circ$ degenerate, however, and a $\chi$ change of $18.6^\circ-180^\circ=-161.4^\circ$ requires a RM change of $-3.6\times10^6$ [rad m$^{-2}$]{}, which agrees well with the RM inferred from the May 26 sideband difference. It is therefore possible that we have observed a large change in the RM between these two nights, with the May 26 value far in excess of the limits on the other five nights. We discuss this further in § \[s-mdot\].
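The two candidate RM changes allowed by the 180$^\circ$ ambiguity can be evaluated directly from equation (\[e-RMb\]); a sketch:

```python
import math

C = 2.99792458e8           # speed of light, m/s
LAM2 = (C / 336.7e9) ** 2  # lambda^2 at the 336.7 GHz band center, m^2

def delta_rm(dchi_deg):
    """RM change (rad/m^2) implied by a position angle change (deg) at 336.7 GHz."""
    return math.radians(dchi_deg) / LAM2

# May 25 -> 26 rotation and its 180-degree-degenerate alternative:
print(f"{delta_rm(18.6):.1e}")          # ~ +4e5 rad/m^2
print(f"{delta_rm(18.6 - 180.0):.1e}")  # ~ -3.6e6 rad/m^2
```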
In the existing polarization data at 230 and 340 GHz the position angle seems to frequently return to the same value. The @BowerE05 230 GHz data are clustered around 111$^\circ$ between 2002 October and 2004 January, while four of our observations at 340 GHz have a mean position angle of 140$^\circ$. Assuming that these two angles sample the same $\chi_0$ (no source polarization changes between the two observing periods or observing frequencies), we can infer a “quiescent” RM of $-5.1\times10^5$ [rad m$^{-2}$]{}. This is just below the RM upper limit from our most sensitive night. If the idea of a quiescent RM is correct, then the change in the mean 230 GHz position angle observed between early 2002 [@BowerE03] and 2003 [@BowerE05] merely reflects a change in this RM by $-3\times10^5$ [rad m$^{-2}$]{}. This implies that the quiescent RM in early 2002 was around $-8\times10^5$ [rad m$^{-2}$]{}, which is conveniently below the detection limit of the @BowerE03 observations. If this scenario is correct, the RM should be detectable by the SMA at 230 GHz, where it would be observable as a 5$^\circ$ sideband difference.
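This quiescent RM can be reproduced from the two mean position angles; the band-center frequencies assumed below are approximate, which matters at the ten-percent level:

```python
import math

C = 2.99792458e8  # speed of light, m/s

def rm_between(chi_lo_deg, nu_lo, chi_hi_deg, nu_hi):
    """RM (rad/m^2) from position angles (deg) at two frequencies (Hz),
    assuming chi(nu) = chi0 + RM * (c/nu)^2 with a common chi0."""
    dchi = math.radians(chi_lo_deg - chi_hi_deg)
    dlam2 = (C / nu_lo) ** 2 - (C / nu_hi) ** 2
    return dchi / dlam2

# Mean angles: ~111 deg at 230 GHz (Bower et al. 2005), ~140 deg at 340 GHz (this work)
rm_q = rm_between(111.0, 230e9, 140.0, 340e9)
print(f"{rm_q:.1e}")  # roughly -5e5 rad/m^2
```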
Accretion Rate Constraints {#s-mdot}
--------------------------
Much of the importance placed on the RM determination stems from its use as a probe of the accretion rate near the black hole. However, the interpretation of a RM detection, or limit, in terms of an accretion rate requires a model for the density and magnetic field in the accretion flow, as these quantities actually determine the RM through equation (\[e-RM\]).
To estimate the RM predicted for a variety of accretion models we make several simplifying assumptions. First, we assume a generic picture with a central emission source surrounded by a roughly spherical accretion flow. Given the previously mentioned limits on the millimeter size of [Sgr A\*]{}, we could also accommodate models where the observed 340 GHz emission arises in a small jet component, as the jet would have to lie within $\sim10r_S$ of the black hole, and would effectively be a central emission source as seen from a Faraday screen tens to hundreds of $r_S$ further out. We characterize the radial density profile, $n(r)$, as a power law, $$n(r)=n_0(r/r_S)^{-\beta} ,
\label{e-nr}$$ where $r_S=2GM_{BH}/c^2$ is the Schwarzschild radius of [Sgr A\*]{}($10^{12}$ cm for $M_{BH} = 3.5\times10^6M_{\sun}$), and $n_0$ is the density at this radius. In the case of free-falling gas we have $\dot M(r)\propto r^p$ with $\beta=3/2-p$, as in @BlandBegel99. For spherical accretion [@Bondi52] or Advection-Dominated Accretion Flows [ADAF; @NaraYi94] we have $\beta=3/2$, while for a Convection-Dominated Accretion Flow [CDAF; @QuatGruz00-CDAF], formally an $\dot M = 0$ limiting case of convection-frustrated accretion, we have $\beta=1/2$. Intermediate values are also possible: the best-fit radiatively-inefficient accretion model in @YuanE03 has $\beta=0.8$, and accretion flow simulations [e.g., @PenE03] typically produce values between 1/2 and 1 [@Quataert03]. We take the ADAF and CDAF values as bounds on $\beta$ (i.e., 1/2 to 3/2).
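The Schwarzschild radius quoted above is easily checked in cgs units:

```python
# Schwarzschild radius r_S = 2 G M / c^2 for M_BH = 3.5e6 Msun (cgs units).
G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
MSUN = 1.989e33    # solar mass, g
C = 2.99792458e10  # speed of light, cm/s

r_s = 2.0 * G * 3.5e6 * MSUN / C**2
print(f"{r_s:.2e} cm")  # ~1e12 cm, as quoted
```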
Rather than using a separate parameter to describe the magnetic field profile, we tie it to the density by assuming equipartition between magnetic, kinetic, and gravitational energy, as many other modelers have done [e.g., @Melia92]. For pure hydrogen gas, with the use of equation (\[e-nr\]), we obtain $$B(r)=\sqrt{4\pi c^2 m_H n_0}
\left(\frac{r}{r_S}\right)^{-\left(\beta+1\right)/2} .
\label{e-Br}$$ We additionally assume that the magnetic field contains no reversals along the line of sight and is entirely radial, which should contribute only a small error unless the field is very nearly toroidal. The former simplification is a good approximation for strongly peaked RM vs r profiles (large $\beta$), where only a small radial range contributes significantly. For smaller $\beta$ and many field reversals, the effective field will only drop as the square root of the number of reversals.
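Equation (\[e-Br\]) can be verified against the equipartition condition it encodes, $B^2/8\pi = \frac{1}{2} n m_H v^2$ with free-fall velocity $v = c\,(r/r_S)^{-1/2}$; the sample values below are arbitrary:

```python
import math

M_H = 1.6726e-24   # proton (hydrogen) mass, g
C = 2.99792458e10  # speed of light, cm/s

def b_equip(n0, x, beta):
    """Equipartition field (G) from eq. (e-Br); x = r / r_S."""
    return math.sqrt(4.0 * math.pi * C**2 * M_H * n0) * x ** (-(beta + 1.0) / 2.0)

# Check B^2 / 8 pi == (1/2) n m_H v^2 at an arbitrary radius:
n0, beta, x = 1.0e7, 1.0, 10.0
n = n0 * x ** (-beta)  # density profile, eq. (e-nr)
v = C * x ** (-0.5)    # free-fall velocity
lhs = b_equip(n0, x, beta) ** 2 / (8.0 * math.pi)
rhs = 0.5 * n * M_H * v ** 2
print(abs(lhs / rhs - 1.0) < 1e-9)  # True
```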
In the [Sgr A\*]{} accretion flow we expect that the electron temperature ($T_e$) will rise to smaller radii, eventually bringing the electrons to relativistic temperatures ($T_e > 6\times10^9 \mathrm{K} = m_e
c^2/k$) at some radius $r_{in}$. The RM contribution from relativistic electrons is suppressed (by as much as log($\gamma$)/2$\gamma^2$ for Lorentz factor $\gamma$ in the ultra-relativistic thermal plasma limit; @QuatGruz00-LP), so we approximate this effect by truncating the RM integration at $r_{in}$ and by treating $r_{in}$ as a variable. From the density profile, and assuming that gas at $r_{in}$ is in free-fall, we can determine a mass flux across the $r=r_{in}$ surface $$\begin{aligned}
\dot M_{in} &=& 4\pi r_{in}^2 m_H n\left(r_{in}\right) v\left(r_{in}\right) \nonumber \\
&=& 4\pi r_S^2 m_H n_0 c \left(r_{in}/r_S\right)^{3/2-\beta}.
\label{e-mdot}\end{aligned}$$ This equation does not require that the density profile be followed down to $r=r_S$; $n_0=n\left(r_S\right)$ is merely a convenient quantity used to normalize the power-law density relation we are assuming for larger radii. The mass flux at $r_{in}$ ($\dot M_{in}$) can be taken to be an upper limit on the accretion rate at $r_S$, but the true rate of accretion onto the black hole could be lower if the loosely bound plasma falling from $r_{in}$ escapes as a wind or jet. Substituting equations (\[e-nr\]), (\[e-Br\]), and (\[e-mdot\]) into equation (\[e-RM\]), and converting $\dot M_{in}$ to units of [$M_\sun$ yr$^{-1}$]{} and $r$ to $r_S$, we obtain $$\begin{aligned}
RM &=& 3.4\times10^{19}
\left(\frac{M_{BH}}{3.5\times10^6 M_\sun}\right)^{-2} \times \nonumber \\
& & r_{in}^{(6\beta-9)/4} \dot M_{in}^{3/2}
\int_{r_{in}}^{r_{out}} r^{-\left(3\beta+1\right)/2} dr .
\label{e-dRM}\end{aligned}$$ Integrating and simplifying yields $$\begin{aligned}
RM &=& 3.4\times10^{19}
\left(1-\left(r_{out}/r_{in}\right)^{-\left(3\beta-1\right)/2}\right)
\times \nonumber \\
& & \left(\frac{M_{BH}}{3.5\times10^6 M_\sun}\right)^{-2}
\left(\frac{2}{3\beta-1}\right) r_{in}^{-7/4}\dot M_{in}^{3/2} .
\label{e-RM-mdot}\end{aligned}$$ To obtain an RM given $\beta$ and $\dot M_{in}$ we must also choose $r_{in}$ and $r_{out}$. The inner radius will vary by model, but it is typically around $300r_S$ [e.g., @YuanE03]. For these calculations we consider values of $r_{in}$ from 300 to $3r_S$ in order to account for variations among models and to allow for the possibility that the electrons do not become highly relativistic interior to $r_{in}$, in which case the RM would not be strongly suppressed. The outer radius depends on the coherence of the radial field. We examine two cases: a fully coherent field ($r_{out}\approx\infty$), and a field that persists for a factor of three in radius from $r_{in}$.
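Carrying the $r_{in}^{(6\beta-9)/4}$ prefactor of equation (\[e-dRM\]) through the integral leaves a net $r_{in}^{-7/4}$ dependence, so the predicted RM falls, and the implied accretion-rate limit rises, as the truncation radius grows. A sketch inverting the relation for the $\dot M_{in}$ limit (taking $r_{out}\rightarrow\infty$ and the fiducial black hole mass, so the mass factor is unity):

```python
def mdot_limit(rm_limit, beta, r_in):
    """Accretion-rate upper limit (Msun/yr) implied by an RM upper limit
    (rad/m^2), inverting eq. (e-RM-mdot) with r_out -> infinity and
    M_BH = 3.5e6 Msun; r_in is in Schwarzschild radii."""
    prefactor = 3.4e19 * (2.0 / (3.0 * beta - 1.0)) * r_in ** (-7.0 / 4.0)
    return (rm_limit / prefactor) ** (2.0 / 3.0)

# ADAF/Bondi-like case from the text: beta = 3/2, r_in = 30 r_S, RM < 7e5 rad/m^2
print(f"{mdot_limit(7e5, 1.5, 30):.1e}")  # of order 1e-7 Msun/yr, as quoted
```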
Figure \[f-mdot\] shows the accretion rate limits imposed by our RM limit of $7\times10^5$ [rad m$^{-2}$]{}, based on the model described above. From the two choices of $r_{out}$ we see that the effect of the magnetic field coherence is larger at small $\beta$. As mentioned before, for steep density profiles (large $\beta$) we expect that only a small range in radius around $r_{in}$ contributes to the RM, making the inferred accretion rate limit insensitive to the field coherence length. If we assume that the density profile follows equation (\[e-nr\]) down to $r=r_S$, our model imposes accretion rate limits that are a factor of $\dot M\left(r_S\right) / \dot M_{in} =
(r_{in}/r_S)^{\beta-3/2}$ lower than those in Figure \[f-mdot\], but the transition to supersonic flow makes this density extrapolation uncertain. However, in cases like the basic ADAF model [@NaraYi95] where the electron temperature ceases to rise at small radii and the electrons are only marginally relativistic, integration to smaller radii (the lower sets of curves) may set more relevant (and lower) accretion rate limits. In fact, taking $\beta =
3/2$ and $r_{in} = 30r_S$, we roughly have the ADAF/Bondi model used in @QuatGruz00-LP, and reproduce their $\dot M$ limit of $10^{-7}$ [$M_\sun$ yr$^{-1}$]{}. The high and low-$\beta$ limits are similar, but the field coherence is a larger concern for shallow profiles. Since the prototype for a low-$\beta$ model is a highly convective flow we may expect a tangled field, but in this case the accretion rate limit (proportional to $B^{-2/3}$) will increase only as $\dot M \propto
N^{1/3}$ for $N$ field reversals. In summary, the figure shows that for any choice of density profile, the maximum allowed accretion rate is $10^{-6}$ [$M_\sun$ yr$^{-1}$]{}, and may be much lower. This is an order of magnitude below the gas capture rate of $10^{-5}$ [$M_\sun$ yr$^{-1}$]{} inferred from X-ray observations [@BaganoffE03; @YuanE03] and from simulations of stellar winds in the Galactic Center [e.g., @Quataert04; @CuadraE05]. It is therefore likely that there is substantial mass lost between the gas capture at $r\sim10^5r_S$ and the event horizon.
Finally, this model of the accretion flow can be used to examine the proposed $-3.5\times10^6$ [rad m$^{-2}$]{} RM from May 26 (§ \[s-discRM\]). This RM would require a change of more than $2\times10^6$ [rad m$^{-2}$]{} between consecutive nights. This is very large compared to the RM changes implied by other position angle changes (again assuming that the source polarization remains constant). Based on the four other nights with strong polarization detections, all of which have position angles near $140^\circ$, the peak-to-peak $\chi$ change corresponds to an RM change of $1.5\times10^5$ [rad m$^{-2}$]{} and the rms variation is only $5\times10^4$ [rad m$^{-2}$]{}. The largest change on similar (day to week) timescales observed at 230 GHz is $3\times10^5$ [rad m$^{-2}$]{} [between 2003 December 27 and 2004 January 5; @BowerE05]. The longer timescale 230 GHz position angle changes and the difference between our position angles and the @AitkenE00 350 GHz position angle (reinterpreted as described in § \[s-LPvar\] or otherwise) also correspond to RM changes of a few$\times10^5$ [rad m$^{-2}$]{}. We expect that these variations are not more than order unity fractional RM changes, so they are all quite consistent with our inferred $-5\times10^5$ [rad m$^{-2}$]{} quiescent RM from § \[s-discRM\]. The May 26 RM would then correspond to a factor of 7 increase in the density or line of sight magnetic field. Such a change is difficult to accomplish with any density profile, but is particularly difficult for small $\beta$ where the entire line of sight contributes significantly to the RM. If the fluctuation is real, it suggests a steep density profile, as the associated density/field change should not be extended over decades of radius.
Unless such an event is observed again in future observations, the more likely interpretations appear to be that the position angle change from May 25 represents an RM fluctuation of $4\times10^5$ [rad m$^{-2}$]{} observed between consecutive nights or a transient change in the source polarization, and that the May 26 difference in the USB and LSB position angles is merely a $2\sigma$ measurement noise event.
Linear Polarization and Variability {#s-LPvar}
-----------------------------------
Our 340 GHz observations show a typical polarization fraction of 6.4%, with a range of 2.3$-$8.5%, and an rms variation of 2.0%. This is comparable to the $\sim$7.5% mean, $4.6-13.6$% range, and 2.2% rms measured at 230 GHz by @BowerE03 [@BowerE05]. The range of observed polarization is lower at 340 GHz than it is at 230 GHz, and the mean is slightly lower as well. It is difficult to explain a lower observed polarization fraction (and comparable variability) at higher frequencies with beam depolarization models [@Tribble91], as Faraday rotation and the resulting dispersion in polarization directions decreases with increasing frequency. If the polarization fraction decrease is intrinsic to the source and not generated in the propagation medium, it suggests that the magnetic field becomes increasingly disordered at smaller radii, as these observations should probe slightly smaller radii than the 230 GHz data. But across only 0.2 decades in frequency we expect little change in intrinsic polarization, so the difference, if present, may be best explained by time variability in the source polarization. To resolve this question, simultaneous or nearly-coincident polarimetry at multiple frequencies with interferometer resolution is clearly desirable.
@BowerE05 used the apparent stability of the 230 GHz polarization fraction to argue that the observed variations in the 230 GHz polarization position angle were more likely to be the result of changes in the rotation measure than due to intrinsic source changes. While our results do not refute this conclusion, they demonstrate that the polarization fraction is not stable, even over a single night. Note that two substantial excursions in the 230 GHz polarization fraction, one of which is labeled an “outlier” in @BowerE05, probably represent real variations similar to those seen here, but have lower significance because of the poorer sensitivity of their instrument.
The polarization fraction presented here is considerably lower than those measured in 1999 by @AitkenE00: $13^{+10}_{-4}$% and $22^{+25}_{-9}$% at 350 and 400 GHz, respectively. However, to determine the flux density of [Sgr A\*]{} @AitkenE00 had to correct for the contamination from dust and free-free emission in their large primary beam (14$-$125 at the highest frequencies), and it is possible that they over-corrected for the dust emission, which would make the polarized component appear to be a larger fraction of the total flux density of [Sgr A\*]{}. There is some support for this possibility from their measured flux densities: [Sgr A\*]{} was found to be only 2.3 and 1 Jy at 350 and 400 GHz, while our data (see Table \[t-pol\]) and previous measurements between 300 and 400 GHz have found higher values of $2.6-3.8$ Jy [@ZylkaE95; @SerabynE97; @PiercePriceE00]. If we assume that their 350 GHz data are well calibrated (the 400 GHz calibration is more uncertain) and assume our 3.3 Jy flux density for [Sgr A\*]{}, we can re-derive the intrinsic polarization of [Sgr A\*]{} using their Stokes Q and U decomposition method, and find a polarization of 9% at $158^\circ$. The polarization fraction drops further as the assumed flux density for [Sgr A\*]{} is increased, reaching 7.6% for 3.8 Jy. These values are within the polarization fraction variations we observe; one might expect that well calibrated 400 GHz measurements could be interpreted similarly and that the polarization fraction need not rise steeply to high frequencies. In arriving at a flux density of 2.3 Jy for [Sgr A\*]{}, @AitkenE00 estimated the dust emission in their central pixel from the average of the surrounding pixels, so by increasing the contribution from [Sgr A\*]{} we are also suggesting that there is a deficit of dust emission in the central $14''$ at 350 GHz.
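The essence of this re-derivation is flux conservation: the polarized flux density measured by @AitkenE00 is held fixed while the assumed total flux density of [Sgr A\*]{} is raised, so the fraction falls in inverse proportion. A minimal sketch (the small difference from the quoted 7.6% at 3.8 Jy reflects the full Q/U decomposition, which also shifts the position angle):

```python
def rescaled_fraction(m_orig, s_orig, s_new):
    """Hold the polarized flux p = m_orig * s_orig fixed and recompute
    the polarization fraction for a larger total flux density s_new [Jy]."""
    return m_orig * s_orig / s_new

# 13% of 2.3 Jy at 350 GHz, rescaled to our 3.3 Jy and to 3.8 Jy:
m_33 = rescaled_fraction(0.13, 2.3, 3.3)   # ~0.091, the 9% quoted above
m_38 = rescaled_fraction(0.13, 2.3, 3.8)   # ~0.079, near the quoted 7.6%
```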
Unfortunately, our observations are poorly sampled at short spacings, but the available visibilities shortward of 20 [k$\lambda$]{} show little excess over the point source flux density, consistent with such a central hole in the dust emission. The existence of this hole requires further confirmation, as could be achieved through simultaneous single-aperture and interferometer observations; our circumstantial evidence could be equally well explained if [Sgr A\*]{} had a higher polarization fraction and lower flux density in 1999 (at the time of the @AitkenE00 measurement) and if the emission in the central $30''$ is distributed smoothly on scales smaller than $10''$.
We observe variability on inter-night and intra-day intervals, in both the polarization and total intensity. The single-night flux densities we measure fall within the range of previous observations, and the rms variation of 0.3 Jy, or 10%, matches the recent results of @MauerhanE05 at 100 GHz. Within nights, the Stokes I light curves in Figure \[f-plc\] show unambiguous variations on timescales of hours, reminiscent of those seen at 100 and 140 GHz by @MiyazakiE04 and @MauerhanE05. This is slower than the variations seen in the near-infrared and X-ray [e.g., @BaganoffE01; @GenzelE03], which seem to vary on hour timescales, with some features requiring only minutes. These slow changes suggest that opacity is obscuring our view of the very inner regions of the accretion flow, regions unobscured at NIR/X-ray wavelengths, even at 340 GHz. At slightly higher frequencies the inner flow may become visible, although many estimates of the optically-thin transition frequency place it at or above 1 THz, a frequency that is difficult to access from the ground. It should be possible to search for the transition to optically thin emission using the change in the variability timescale; the more frequently proposed technique of looking for the turnover in the spectrum relies on precise flux density calibration at high frequencies, which is problematic because of contaminating emission in single-aperture beams and lack of unresolved calibrators in interferometers. A few instruments may be able to make these difficult observations before ALMA: the SMA, or perhaps SCUBA [@SCUBA] on the JCMT at 650 GHz, and SHARC II [@SHARCII] on the CSO at 650 or 850 GHz.
The intra-day variations in the linear polarization shown in Figure \[f-plc\] are the first linear polarization changes observed on intervals of hours rather than days. The three nights with time-resolved polarization measurements do not demonstrate a clear relationship between Stokes I and the polarization. For example, May 25 shows a very strong flare in I with $m$ very close to our average values, followed in the second half of the track by a lower I and a below average $m$. July 5 has the highest $m$ of our six nights, along with 20% modulation in I, but the polarization fraction is not modulated significantly with the total intensity. Finally, on July 6 we see below average $m$ in a period of high I and above average $m$ with low I, the inverse of the relationship seen on May 25. That the polarization fraction may vary in multiple ways during flares in the total intensity could suggest that there are multiple mechanisms (of varying polarization) responsible for the submillimeter Stokes I variability, or that the I and $m$ changes are not closely related. Diverse flare mechanisms could be expected to show different spectra at shorter wavelengths, so simultaneous infrared and X-ray data may be useful. However, based on the infrequency of infrared and X-ray flares [@EckartE04] and the lack of coincident activity in these bands during the SMA observations on July 6 and 7 [@EckartE05], it seems that the small changes we observe in the submillimeter are often imperceptible at shorter wavelengths. Therefore, the best way to determine whether the polarization changes are internal or external may be to increase the time resolution in the polarization light curves. In these data we observe $m$ changes on the shortest interval we can measure, around three hours (on July 6). 
This is close to the variability timescale observed in the total intensity, which suggests that given better time resolution we may see that the I and $m$ changes have similar temporal characteristics and therefore arise from the same processes.
The $m$ and $\chi$ curves seem to show more coordinated behavior than the total intensity and polarization do. Of the seven sub-night intervals plotted in Figure \[f-plc\], five show position angles close to the observed quiescent $\chi$ of $140^\circ$. Only in the two intervals with the lowest polarization, on May 25 and July 6, does $\chi$ deviate from this value, and if the deviations are caused by RM changes then both would represent increases in the RM. None of the intervals provide evidence for an RM through inter-sideband $\chi$ differences, but the largest $\chi$ change between intervals, $-20.7^\circ\pm3.8^\circ$ on July 6, only requires an RM decrease of $5\times10^5$ [rad m$^{-2}$]{}, still below our detection limits. Here again we face the question of whether the source polarization or an external process is responsible for the variability we see. It is possible to explain the $\chi$ changes with a two-component source, where the dominant polarization component is polarized close to the quiescent polarization direction and variable in amplitude while the weaker component causes the polarization to deviate from $140^\circ$ when the dominant component weakens. In this case we would expect to see a correlation between the polarization fraction and the position angle, something that is not excluded by our data. Such a source model is naturally identified with emission from a core and jet. A second model uses a turbulent plasma screen, in addition to the screen responsible for the putative mean RM (suggested by the difference in position angle between 230 and 340 GHz), to partially beam depolarize the emission. The fact that $\chi$ seems to faithfully return to $140^\circ$ implies that the source, or the source plus a stable RM component, is separated from the changes that cause the depolarization and position angle change. With better time resolution and better sensitivity to RM it should be possible to distinguish between these models.
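The position-angle/RM conversions used throughout this section follow from $\chi = \chi_0 + \mathrm{RM}\,\lambda^2$. A sketch (the sideband frequencies in the second example are illustrative placeholders for a $\sim$10 GHz sideband separation, not the exact tuning):

```python
import math

C = 2.998e8  # speed of light [m/s]

def rm_from_chi_change(delta_chi_deg, freq_hz):
    """RM change implied by a position-angle change at one frequency,
    from delta_chi = delta_RM * lambda^2. Returns rad/m^2."""
    return math.radians(delta_chi_deg) / (C / freq_hz)**2

def sideband_chi_diff(rm, f_lsb, f_usb):
    """Inter-sideband position-angle difference [deg] produced by an RM."""
    return math.degrees(rm * ((C / f_lsb)**2 - (C / f_usb)**2))

# The -20.7 deg change on July 6 at 340 GHz corresponds to roughly -5e5 rad/m^2:
drm = rm_from_chi_change(-20.7, 340e9)
# An RM of 7e5 rad/m^2 across hypothetical 335/345 GHz sidebands gives the
# ~2 deg inter-sideband rotation that sets the detection threshold:
dchi = sideband_chi_diff(7e5, 335e9, 345e9)
```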
Conclusions {#s-concl}
===========
Using the Submillimeter Array, outfitted with polarization conversion hardware (Marrone, in preparation), we have made sensitive measurements of the polarization of [Sgr A\*]{} at 340 GHz with angular resolution sufficient to separate the source from the surrounding contaminating emission. Our increased sensitivity has allowed us to make unequivocal measurements of the variability of the linear polarization of this source, in both position angle and polarization fraction. This is the first reliable detection of variation in the linear polarization fraction. Moreover, we have made the first detection of linear polarization changes within a night. These changes do not show an obvious correlation with the observed changes in the total intensity, possibly because of the coarse time resolution available at our sensitivity limits. The polarization variations occur on the shortest intervals we sample, around 3 hours, which is comparable to the modulation time observed in the total intensity here and in @MauerhanE05 at 100 GHz. It is not clear from these data whether the polarization variability can be best explained by changes in the source emission or by changes in an external Faraday screen, but polarization light curves with better time resolution should clarify the issue. The observed polarization fraction at 340 GHz is comparable to, and perhaps lower than, that observed at 230 GHz. This contradicts the polarization spectrum measured from 150 to 400 GHz by @AitkenE00, but we show that their polarization fraction at 350 GHz can be brought into agreement with ours through changes in their correction for dust emission. Whether or not the polarization fraction rises steeply to high frequency, as predicted by synchrotron optical depth explanations of the early polarization results [@Agol00; @MeliaLiuCoker00], is no longer clear, but this question should be resolved by future submillimeter polarimetry at 650 GHz.
We have also measured the circular polarization of this source to be less than 0.5% for a time-stable component, and do not detect CP at a slightly higher level in individual nights. This limit contradicts the predictions of the turbulence-driven polarization conversion model of @Beckert03, which was designed to match the @AitkenE00 linear polarization results, but can be matched to an earlier version of the model [@BeckertFalcke02] where the CP originated in a fully turbulent jet.
By comparing the position angles in the two sidebands, we place new upper limits on the RM allowed for this source. In single nights we obtain one-sigma upper limits of less than $10^6$ [rad m$^{-2}$]{} with our lowest limit of $7\times10^5$ [rad m$^{-2}$]{} coming on July 5. This is comparable to the lowest limit obtained in any other polarimetric observations of this source and well below the single-night limits of other interferometers. We can use a model accretion flow (with energy equipartition), parameterized only by the density power-law slope and the radius at which the electrons become relativistic, to convert this RM to a mass accretion rate limit, and find that for any density slope [Sgr A\*]{} is accreting at least an order of magnitude less matter than it should gravitationally capture based on X-ray measurements [@BaganoffE03], and may be accreting much less if the density profile is shallow. This result agrees with earlier interpretations of polarization detections. We note that the position angle at 340 GHz seems to show a persistent stable state, much like that observed at 230 GHz [@BowerE05], and we combine these two values to infer a stable “quiescent” RM of $-5\times10^{5}$ [rad m$^{-2}$]{}. This value is just below the detection limit of our observations. The possible proximity of the RM to the detection threshold, the need for more time-resolved polarimetry, and the potential for coordinated observations with other wavelengths suggest that expanded SMA capabilities may contribute considerably more to this study.
The authors thank the entire SMA team for their contributions to the array and to the new polarimetry system. In particular, we acknowledge the enormous contribution of K. Young for his work on the real-time software changes essential to these observations. We thank R. Narayan, E. Quataert, and G. Bower for useful discussions, and J. Greene for discussions and her help in developing the prototypes of the polarimetry system. DPM was supported by an NSF Graduate Research Fellowship. We thank an anonymous referee for a thorough reading and helpful comments. Finally, we extend our gratitude to the Hawaiian people, who allow us the privilege of observing from atop the sacred mountain of Mauna Kea.
[55]{}
, E. 2000, , 538, L121
, D. K., [Greaves]{}, J., [Chrysostomou]{}, A., [Jenness]{}, T., [Holland]{}, W., [Hough]{}, J. H., [Pierce-Price]{}, D., & [Richer]{}, J. 2000, , 534, L173
, F. K. [et al.]{} 2001, , 413, 45
—. 2003, , 591, 891
, T. 2003, Astronomische Nachrichten Supplement, 324, 459
, T. & [Falcke]{}, H. 2002, , 388, 1106
, R. D. & [Begelman]{}, M. C. 1999, , 303, L1
, R. 2004, in 15th International Symposium on Space Terahertz Technology, ed. G. Narayanan (Amherst: U Mass), 3 (astro-ph/0508492)
, H. 1952, , 112, 195
, G. C., [Falcke]{}, H., & [Backer]{}, D. C. 1999, , 523, L29
, G. C., [Falcke]{}, H., [Herrnstein]{}, R. M., [Zhao]{}, J.-H., [Goss]{}, W. M., & [Backer]{}, D. C. 2004, Science, 304, 704
, G. C., [Falcke]{}, H., [Sault]{}, R. J., & [Backer]{}, D. C. 2002, , 571, 843
, G. C., [Falcke]{}, H., [Wright]{}, M. C., & [Backer]{}, D. C. 2005, , 618, L29
, G. C., [Wright]{}, M. C. H., [Backer]{}, D. C., & [Falcke]{}, H. 1999, , 527, 851
, G. C., [Wright]{}, M. C. H., [Falcke]{}, H., & [Backer]{}, D. C. 2001, , 555, L103
—. 2003, , 588, 331
, J., [Nayakshin]{}, S., [Springel]{}, V., & [Di Matteo]{}, T. 2005, , in press
, C. D. [et al.]{} 2003, in Millimeter and Submillimeter Detectors for Astronomy. Edited by T. G. Phillips & J. Zmuidzinas. Proceedings of the SPIE, Volume 4855, 73
, A. [et al.]{} 2004, , 427, 1
—. 2005, , submitted
, F. [et al.]{} 2005, , 628, 246
, H., [Mannheim]{}, K., & [Biermann]{}, P. L. 1993, , 278, L1
, F. F. & [Whiteoak]{}, J. B. 1966, , 4, 245
, R., [Schödel]{}, R., [Ott]{}, T., [Eckart]{}, A., [Alexander]{}, T., [Lacombe]{}, F., [Rouan]{}, D., & [Aschenbach]{}, B. 2003, , 425, 934
, A. M., [Salim]{}, S., [Hornstein]{}, S. D., [Tanner]{}, A., [Lu]{}, J. R., [Morris]{}, M., [Becklin]{}, E. E., & [Duchêne]{}, G. 2005, , 620, 744
, P. T. P., [Moran]{}, J. M., & [Lo]{}, K. Y. 2004, , 616, L1
, W. S. [et al.]{} 1999, , 303, 659
, T. P. [et al.]{} 1998, , 335, L106
, S., [Falcke]{}, H., [Yuan]{}, F., & [Biermann]{}, P. L. 2001, , 379, L13
, J. C., [Morris]{}, M., [Walter]{}, F., & [Baganoff]{}, F. K. 2005, , 623, L25
, F. 1992, , 387, L25
, F., [Liu]{}, S., & [Coker]{}, R. 2000, , 545, L117
, A., [Tsutsumi]{}, T., & [Tsuboi]{}, M. 2004, , 611, L97
, R. & [Yi]{}, I. 1994, , 428, L13
—. 1995, , 452, 710
, U.-L., [Matzner]{}, C. D., & [Wong]{}, S. 2003, , 596, L207
, D. [et al.]{} 2000, , 545, L121
, E. 2003, Astronomische Nachrichten Supplement, 324, 435
—. 2004, , 613, 322
, E. & [Gruzinov]{}, A. 2000, , 545, 842
—. 2000, , 539, 809
, R. 1999, PhD thesis, University of Illinois
, M. J. & [Brunthaler]{}, A. 2004, , 616, 872
, D. H., [Wardle]{}, J. F. C., & [Brown]{}, L. F. 1994, , 427, 718
, R. J., [Hamaker]{}, J. P., & [Bregman]{}, J. D. 1996, , 117, 149
, R. J. & [Macquart]{}, J.-P. 1999, , 526, L85
, R. J., [Teuben]{}, P. J., & [Wright]{}, M. C. H. 1995, in ASP Conf. Ser. 77: Astronomical Data Analysis Software and Systems IV, 433
, R., [Ott]{}, T., [Genzel]{}, R., [Eckart]{}, A., [Mouawad]{}, N., & [Alexander]{}, T. 2003, , 596, 1015
, E., [Carlstrom]{}, J., [Lay]{}, O., [Lis]{}, D. C., [Hunter]{}, T. R., & [Lacy]{}, J. H. 1997, , 490, L77
, Z.-Q., [Lo]{}, K. Y., [Liang]{}, M.-C., [Ho]{}, P. T. P., & [Zhao]{}, J.-H. 2005, , 438, 62
, P. C. 1991, , 250, 726
, M., [Miyahara]{}, H., [Nomura]{}, R., [Kasuga]{}, T., & [Miyazaki]{}, A. 2003, Astronomische Nachrichten Supplement, 324, 431
, J. F. C. & [Kronberg]{}, P. P. 1974, , 194, 249
, F., [Quataert]{}, E., & [Narayan]{}, R. 2003, , 598, 301
, R., [Mezger]{}, P. G., [Ward-Thompson]{}, D., [Duschl]{}, W. J., & [Lesch]{}, H. 1995, , 297, 83
[^1]: The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics, and is funded by the Smithsonian Institution and the Academia Sinica.
---
abstract: 'Using Hall photovoltage measurements, we demonstrate that an anomalous Hall-voltage can be induced in few layer WTe$_2$ under circularly polarized light illumination. By applying a bias voltage along different crystal axes, we find that the photo-induced anomalous Hall conductivity coincides with a particular crystal axis. Our results are consistent with the underlying Berry-curvature exhibiting a dipolar distribution due to the breaking of crystal inversion symmetry. Using time-resolved optoelectronic auto-correlation spectroscopy, we find that the decay time of the anomalous Hall voltage exceeds the electron-phonon scattering time by orders of magnitude but is consistent with the comparatively long spin-lifetime of carriers in the momentum-indirect electron and hole pockets in WTe$_2$. Our observation suggests that a helical modulation of an otherwise isotropic spin-current is the underlying mechanism of the anomalous Hall effect.'
author:
- 'Paul Seifert$^1$, Florian Sigger$^{1,2}$, Jonas Kiemle$^1$, Kenji Watanabe$^3$, Takashi Taniguchi$^3$, Christoph Kastl$^4$, Ursula Wurstbauer$^{1,2}$ and Alexander Holleitner$^{1,2}$'
bibliography:
- 'library.bib'
date: 'October 1, 2018'
title: 'Photo-induced anomalous Hall effect in the type-II Weyl-semimetal WTe$_2$ at room temperature'
---
Introduction
============
In recent years, materials that exhibit a non-trivial topological band-structure and non-zero Berry-curvature have attracted considerable attention. These properties render the materials a very promising and robust platform for spintronic applications independent of the exact details of material composition or extrinsic influences such as temperature. The Berry-curvature describes the local self-rotation of a quantum wave-packet and can effectively act as a magnetic field largely impacting the electronic properties of a system [@Xiao2010]. The Berry-curvature impacts the Hall-conductivity and spin-Hall conductivity, which is the antisymmetric non-dissipative addition to the Ohmic conductivity in the absence of time-reversal symmetry or crystal inversion symmetry, respectively [@Haldane2004]. In this context, WTe$_2$, a layered van der Waals material, became the subject of extensive research. Monolayer WTe$_2$ was demonstrated to host both a topologically non-trivial quantum spin-Hall gap [@Fei2017; @Tang2017; @Wu2018] and a Berry curvature dipole that leads to the so-called circular photogalvanic effect when inversion symmetry is broken via an out-of-plane electric field [@Xu2018]. Bulk WTe$_2$ is known as the prototypical type-II Weyl semimetal, a material shown to exhibit chiral Weyl-fermions at the surface that break Lorentz-invariance [@Soluyanov2015]. Moreover, WTe$_2$ exhibits the highest values of a non-saturating magnetoresistance ever reported [@Ali2014]. In contrast to monolayer WTe$_2$, inversion symmetry is intrinsically broken in few-layered and bulk WTe$_2$ [@Brown1966]. This renders few layer WTe$_2$ a promising candidate to host non-trivial spin-transport phenomena, as a non-zero Berry curvature on the non-equilibrium Fermi surface is predicted [@Zhang2018b]. The Weyl points, which are the linear band-crossing points, represent monopoles of Berry curvature and come pairwise with opposite chirality [@Wan2011].
It is proposed that under circularly polarized (CP) illumination, a momentum-shift of the Weyl-points gives rise to a photoinduced anomalous Hall effect during optical excitation [@Chan2016]. In the present work, we explore the transverse conductivity in few layer WTe$_2$ under electrical current flow using polarization- and time-resolved photoexcitation at room temperature. We detect a finite photoinduced transverse voltage that switches sign with the polarity of the applied longitudinal current as well as with the photon helicity. The helicity dependent transverse conductivity only occurs when the current is injected orthogonal to the crystal’s mirror-plane. This observation suggests the intrinsic breaking of inversion symmetry with a respective dipolar structure of the Berry curvature as the underlying physics [@Zhang2018c; @Yao2008]. We further reveal a characteristic lifetime of the transverse conductivity of $>100$ ps, exceeding the reported carrier-phonon scattering time by orders of magnitude [@Dai2015]. Therefore, the lifetime of the observed transverse voltage is associated with the reported spin-lifetime of carriers in the momentum-indirect electron and hole pockets in WTe$_2$ [@Wang2018; @Dai2015]. Our observation suggests a photon helicity induced anisotropy in the spin-Hall conductivity via the excitation of a net spin polarization.


We exfoliate individual few-layer flakes of the type-II Weyl-semimetal WTe$_2$ (HQ Graphene) onto a transparent sapphire substrate with pre-fabricated Ti/Au contacts in a four-terminal geometry. We focus a circularly polarized laser with a photon energy of 1.5 eV onto the center of the flake. To avoid degradation effects and to maintain a high device quality, the WTe$_2$-flakes are encapsulated with a thin crystal of high quality hexagonal boron nitride [@Ye2016; @Watanabe2004]. Figure 1(a) sketches our measurement geometry of the photo-induced anomalous Hall effect. A bias voltage is applied between the source and drain contacts and drives a current **j**. A high impedance differential voltage amplifier is wired to the two remaining contacts in the perpendicular direction for the measurement of the photo-induced transverse voltage $V^\mathrm{photo}$. Under current flow, a transverse spin-current can be induced for a finite local spin-Hall conductivity $\sigma_H^s$ acting on the non-equilibrium Fermi surface. Without polarized illumination, the spin-Hall conductivity $\sigma_H^s$ induces a pure spin-current without any accompanying charge current. However, under circularly polarized illumination the photon’s magnetic moment induces a spin polarization and a corresponding net anomalous Hall-conductivity. The latter can be detected as a net Hall voltage orthogonal to the light propagation and the source-drain current density **j**. Figure 1(b) shows a microscope image of a four-terminal WTe$_2$/h-BN device and the electrical circuitry. The distance between the contacts of 14 $\mu$m is chosen to be much larger than the size of our laser spot of $\approx$ 1.5 $\mu$m in order to exclude extrinsic effects that stem from an illumination of the metal contacts or crystal boundaries [@Xu2018]. Figures 1(c) and (d) sketch the side- and top-view of the layered crystal structure of WTe$_2$.
In contrast to most TMDs, which exhibit a hexagonal structure with space-group $D_{6h}$, bulk WTe$_2$ crystallizes in an orthorhombic phase with space group $C_{2v}$ featuring a single mirror plane $\text{M}_{a}$ orthogonal to the a-axis. In the T$_d$-phase, atoms in neighbouring layers are rotated by a spatial angle of $\pi$ with respect to each other, which breaks inversion symmetry along the b-axis and leads to a dipolar structure of the Berry-curvature [@Tang2017; @Brown1966; @Jiang2016; @MacNeill2017]. Due to the different dielectric environment above (h-BN) and below (Al$_2$O$_3$) the WTe$_2$ flake, we also cannot exclude a weak breaking of inversion-symmetry along the c-direction of the crystal.
Figure 2 presents polarization-resolved transverse photovoltage measurements. The laser is focused at the center of the WTe$_2$ flake and a bias voltage $V^\mathrm{bias}_{1-3}$ is applied between source (1) and drain (3), while the photovoltage $V^\mathrm{photo}_{2-4}$ is measured between (2) and (4) (Fig. 2(a)). The laser polarization is modulated with a quarter-wave plate. Figures 2(b) and (c) show the polarization dependent transverse voltage $V^\mathrm{photo}_{2-4}$ for bias voltages of $V^\mathrm{bias}_{1-3}$ = + 0.4 V and $V^\mathrm{bias}_{1-3}$ = - 0.4 V, respectively. A clear transverse $V^\mathrm{photo}_{2-4}$ is detectable and can be modulated with the laser helicity. Figure 2(d) selectively shows the helicity-dependent photovoltage contribution $V^\mathrm{helical}_{2-4}$ as a function of bias voltages $V^\mathrm{bias}_{1-3}$ and laser polarization. The helicity-independent contributions are subtracted via a fit procedure based on their characteristic frequency in the quarter-wave-plate angle (compare supplementary Figs. S1 and S2) [@Seifert2018]. Line-cuts through the helical photovoltage map (Fig. 2(d)) are drawn in Fig. 2(e) along the polarization axis and Fig. 2(f) along the bias axis for different laser helicities. The transverse photovoltage $V^\mathrm{helical}_{2-4}$ changes polarity with the laser helicity and depends linearly on the longitudinal bias-voltage $V^\mathrm{bias}_{1-3}$. In order to determine how the transverse photovoltage depends on the crystal axis, we perform measurements on a sample where the crystal axes are aligned to the measurement axes of our four-terminal contact geometry during sample fabrication. The *x*-axis (*y*-axis, compare Fig. 2(a)) of our measurement geometry coincides with the b-axis (a-axis) of this WTe$_2$ crystal as determined by polarization resolved Raman spectroscopy (supplementary Fig. S3) [@Song2016].
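The fit procedure that isolates $V^\mathrm{helical}$ exploits the fact that the helicity-dependent term oscillates as $\sin(2\alpha)$ in the quarter-wave-plate angle $\alpha$, whereas residual linear-polarization terms oscillate with $4\alpha$. A minimal sketch with synthetic data (the model is linear in its coefficients, so ordinary least squares recovers the amplitudes; the numerical values are arbitrary):

```python
import numpy as np

# Quarter-wave-plate angles and a synthetic photovoltage trace:
# V(a) = C sin(2a) + L1 sin(4a) + L2 cos(4a) + D, with helical amplitude C = 5.
alpha = np.linspace(0, 2 * np.pi, 73)
v = 5.0 * np.sin(2 * alpha) + 1.5 * np.sin(4 * alpha) \
    - 0.8 * np.cos(4 * alpha) + 2.0

# Design matrix with one column per harmonic component plus a constant offset;
# linear least squares separates the contributions by their frequency in alpha.
A = np.column_stack([np.sin(2 * alpha), np.sin(4 * alpha),
                     np.cos(4 * alpha), np.ones_like(alpha)])
c_hel, l1, l2, offset = np.linalg.lstsq(A, v, rcond=None)[0]
```

Subtracting the $4\alpha$ and constant components and keeping `c_hel` corresponds to the helicity-dependent map of Fig. 2(d).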

Figure 3 (a) shows the helicity-dependent transverse photovoltage for a bias applied between contacts (1) and (3) (b-axis, grey dots) and between contacts (2) and (4) (a-axis, yellow dots). For a bias applied along the b-axis, no distinct helicity-dependent transverse photovoltage is visible. For a bias applied along the a-axis, a transverse photovoltage emerges for CP illumination. Figure 3 (b) shows the amplitude of the helicity-dependent transverse photovoltage $V^\mathrm{helical}$ vs the bias voltage $V^\mathrm{bias}$ applied along the respective crystal directions. We find a finite $V^\mathrm{helical}$, which depends linearly on the longitudinal bias, if the bias is applied along the a-axis (yellow). In contrast, we detect no significant $V^\mathrm{helical}$ when the bias is applied along the b-axis (grey). Next, we perform time-resolved auto-correlation measurements to extract the characteristic time scales of the involved processes.

Figure 4 (a) shows a schematic of our measurement geometry. A 150 fs pump pulse at a photon energy of 1.5 eV excites the center of the sample from the top at a fixed laser helicity. A probe pulse with identical duration and energy but variable helicity is delayed via a mechanical delay-stage and focused onto the same spot from the backside. Both pump and probe lasers are modulated at different frequencies, and the auto-correlation signal of the transverse photovoltage is detected at the sum-frequency with a lock-in amplifier. Figure 4 (b) shows the time-resolved difference of the transverse photovoltage for co-polarized and cross-polarized pump/probe excitation. We observe a finite difference directly after the excitation, which decays exponentially on a timescale of $\tau_\mathrm{slow} \approx 106.3 \pm 4.5$ ps. We interpret the difference between the co-polarized and cross-polarized auto-correlation of the transverse photovoltage as a signature of the photo-excited spin population. Intriguingly, the decay time surpasses the electron-phonon relaxation time of a few ps by orders of magnitude [@Dai2015; @Caputo2018]. In fact, the slow time-scale $\tau_\mathrm{slow}$ of spin relaxation was reported to be limited by the phonon-assisted recombination of momentum-indirect electron-hole pairs, suggesting that the charge-carrier relaxation to the Fermi-energy does not significantly randomize the spin-polarization [@Wang2018]. Figure 4 (c) shows the time-resolved auto-correlation of the longitudinal photo-current measured in the direction between the source and drain contacts. In contrast to the transverse photovoltage, the main time-scale dominating the longitudinal photocurrent, which we denote as $\tau_\mathrm{fast}$, is on the order of 2 ps and can be interpreted as an instantaneous photo-conductance increase, which is limited by the phonon-mediated charge-carrier relaxation to the Fermi-energy [@Wang2018; @Dai2015; @Caputo2018].
We note that a slower time-scale is also detectable in the photo-current, albeit to a lesser extent. This indicates that the charge carrier spin also impacts the longitudinal charge transport, e.g. by anisotropic optical absorption as a consequence of a chiral anomaly in Weyl-semimetals [@Ashby2014; @Mukherjee2017; @Yang2015]. Figure 4(d) shows a schematic energy band-diagram of the electron and hole pockets along the $\Gamma - X $ direction. The phonon-mediated carrier relaxation and recombination channels are indicated by their characteristic time-scales $\tau_\mathrm{fast}$ and $\tau_\mathrm{slow}$, respectively.\
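As a sketch of how $\tau_\mathrm{slow}$ is extracted, the co- minus cross-polarized auto-correlation difference can be fit with a single-exponential decay; the synthetic delay trace below merely stands in for the measured data, and the fit parameters are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amp, tau, offset):
    # single-exponential model for the co- minus cross-polarized
    # auto-correlation difference of the transverse photovoltage
    return amp * np.exp(-t / tau) + offset

# synthetic pump-probe delay trace (ps); values are placeholders
t = np.linspace(0.0, 500.0, 200)
rng = np.random.default_rng(1)
signal = decay(t, 1.0, 106.3, 0.0) + 0.02 * rng.standard_normal(t.size)

popt, pcov = curve_fit(decay, t, signal, p0=(1.0, 50.0, 0.0))
tau_slow, tau_err = popt[1], float(np.sqrt(pcov[1, 1]))
print(f"tau_slow = {tau_slow:.1f} +/- {tau_err:.1f} ps")
```

The diagonal of the fit covariance gives the quoted uncertainty on the time constant.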
#### Discussion
Our experiments demonstrate the observation of a photon-helicity dependent transverse photovoltage in few-layer WTe$_2$ under an applied longitudinal bias. We interpret our findings to stem from an anomalous Hall-conductivity as a result of a helicity-induced anisotropy in the system’s spin-Hall conductivity $\sigma_H^s$ based on the following arguments. In the presence of an electric field, an electron with eigenenergy $\epsilon_n(\textbf{k})$ occupying a specific band $n$ can pick up an additional anomalous velocity contribution, which is proportional to the Berry curvature $\Omega_n(\textbf{k})$ of the specific band [@Xiao2010]. The velocity reads [@Xiao2010] $$v_n(\textbf{k})=\frac{1}{\hbar}\frac{\partial\epsilon_n(\textbf{k})}{\partial\textbf{k}}-\frac{e}{\hbar}\textbf{E}\times \Omega_n(\textbf{k}),$$ where the second term is the anomalous velocity contribution. It is always transverse to the electric field and is responsible for various Hall effects [@Xiao2010]. The Berry curvature $\Omega_n(\textbf{k})$ is the curl of the phase shift of a wave-function between two points in $k$-space and is directly related to the symmetry of the system’s Hamiltonian. An electron’s velocity must remain unchanged under symmetry operations that reflect the symmetries of the unperturbed system [@Xiao2010]. Accordingly, Eq. 1 dictates that $\Omega_n(\textbf{k})=-\Omega_n(-\textbf{k})$ if the system is invariant under time-reversal and $\Omega_n(\textbf{k})=\Omega_n(-\textbf{k})$ if the system is invariant under spatial inversion [@Xiao2010]. One can directly see that the Berry curvature must vanish in systems which have both inversion and time-reversal symmetry. If time-reversal symmetry is broken, a net $\Omega_n$ can occur when integrating across the Brillouin-zone, leading to a net anomalous Hall conductivity and a corresponding Hall voltage under applied bias.
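To make the geometry of the anomalous term explicit, the snippet below evaluates $-\frac{e}{\hbar}\,\mathbf{E}\times\Omega_n$ for an in-plane field and an out-of-plane Berry curvature; the field strength and curvature magnitude are illustrative assumptions, not parameters extracted from the experiment.

```python
import numpy as np

e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s

# illustrative values only (not measured quantities from this work):
E_field = np.array([1.0e4, 0.0, 0.0])     # in-plane electric field, V/m
omega_n = np.array([0.0, 0.0, 5.0e-19])   # Berry curvature ~ 50 A^2, in m^2

# anomalous velocity, second term of Eq. 1: v_a = -(e/hbar) E x Omega_n
v_anom = -(e / hbar) * np.cross(E_field, omega_n)
print(v_anom)  # purely transverse: only the y-component is non-zero
```

The cross product guarantees that the anomalous velocity is perpendicular to the applied field, which is the origin of the Hall-type response discussed in the text.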
If inversion symmetry is broken, time-reversal symmetry dictates that the net $\Omega_n$ is zero when integrating across the Brillouin-zone, but the Berry curvature can exhibit a dipolar structure where opposite points in $k$-space exhibit opposite $\Omega_n(\textbf{k})$ and opposite anomalous velocities [@Xiao2010; @Shi2018a; @Zhang2018c]. In the additional presence of spin-orbit coupling, a lifted spin-degeneracy at opposite points in $k$-space can lead to a net $\sigma_H^s$ and a transverse spin-transport perpendicular to the electric field. Indeed, the breaking of inversion-symmetry in WTe$_2$ along one crystal axis is predicted to induce Rashba- and Zeeman-type spin-orbit coupling as well as a dipolar Berry curvature [@Shi2018a]. Such a Berry curvature dipole is consistent with the crystal-axis dependence of our measured transverse photovoltage (Fig. 3 and S3). We find a bias-dependent transverse photovoltage $V^\mathrm{helical}$ only if the bias is applied along the a-axis orthogonal to the mirror-plane M$_\mathrm{a}$. The bias-dependence clearly demonstrates that the observed $V^\mathrm{helical}$ is not caused by a local photo-voltage generation, e.g. due to built-in fields or impurities, but is intrinsic in nature and proportional to the applied external electric field [@Seifert2018]. The transverse photovoltage switches polarity with the helicity of the exciting laser, which establishes that the underlying origin of the transverse conductivity is either directly based on the photon chirality [@Chan2016] or on the corresponding excited spin-density in the WTe$_2$ [@Seifert2018; @Lee2018; @Liu2018]. As we detect the $V^\mathrm{helical}$ for a 150 fs pulsed laser excitation on a long characteristic time scale $\approx 100$ ps, we identify the transverse photovoltage to stem from the excited spin-density on the non-equilibrium Fermi surface, rather than from a laser-field driven anomalous Hall conductivity as suggested for a CW-laser excitation [@Chan2016].
In both cases, however, the underlying Hamiltonian is not invariant under time reversal, and in turn, it allows for a net anomalous Hall conductivity as well as a finite Hall voltage under applied bias. In principle, charge carriers can also acquire an anomalous velocity from extrinsic mechanisms such as skew-scattering and side-jump contributions, which can also be proportional to the Berry curvature [@Sinitsyn2005; @Onoda2006]. Experimentally, we generate the signal locally within the laser spot ($\approx 1.5~\mu$m) and detect the photovoltage at rather macroscopic distances of $\approx 14~\mu$m. In metallic systems such as WTe$_2$ the detection occurs instantaneously according to the so-called Shockley-Ramo theorem [@Song2014a]. Therefore, we cannot distinguish between intrinsic Berry-phase effects and extrinsic drift or diffusive contributions. However, the linear dependence of our observed $V^\mathrm{helical}$ on the applied bias makes a side-jump contribution less likely, as the latter is expected to show a non-linear contribution in the presence of non-zero Berry curvature [@Onoda2006; @Nagaosa2010]. Rationalized by this microscopic model, we interpret the transverse conductivity to stem from a laser-helicity induced transport anisotropy of an excited spin-ensemble, as observed in topological insulators [@Seifert2018; @Lee2018; @Liu2018]. Without illumination, the dipolar distribution of the Berry curvature gives rise to a spin-transport perpendicular to the applied bias. As time-reversal symmetry is preserved, the anomalous velocities of opposite spin orientations must cancel out, and no net Hall-voltage can be observed.
#### Conclusion {#conclusion .unnumbered}
We explore the transverse conductivity in few-layer WTe$_2$ under illumination and electrical current flow using polarization- and time-resolved transverse photovoltage measurements. Our findings suggest that a helicity-induced symmetry-breaking of an otherwise isotropic spin-current gives rise to an anomalous Hall-voltage. We further reveal that the characteristic decay-time of the anomalous Hall-voltage exceeds the carrier-phonon scattering time by orders of magnitude but is consistent with the spin-lifetime of carriers in the momentum-indirect electron and hole pockets in WTe$_2$.
#### Acknowledgements
We thank T. Schmidt and M. Burghard for discussions. This work was supported by the DFG via SPP 1666 (grant HO 3324/8), the Center of NanoScience (CeNS) in Munich, and the Munich Quantum Center (MQC). C. K. acknowledges funding by the Molecular Foundry supported by the Office of Science, Office of Basic Energy Sciences, of the US Department of Energy under Contract No. DE-AC02-05CH11231.
References
==========
Supplementary information Photo-induced anomalous Hall effect in the type-II Weyl-semimetal WTe$_2$ at room-temperature
========================================================================================================================


![Raman signature of crystal axis configurations. (a), Sample geometry and contact configuration. The blue and yellow arrows indicate laser polarization axes parallel to the measurement configurations in Fig. 3 of the main manuscript. (b), Polarization resolved Raman spectra of the WTe$_2$ crystal for different linear excitation polarizations. The polarization is rotated with a half-waveplate. (c), Raman spectra for the laser polarizations indicated by the blue and yellow arrows in (a). (d), Cuts through the polarization resolved Raman spectra in (b) along the wavenumbers 163.8 cm$^{-1}$ and 211.3 cm$^{-1}$, as indicated by the grey and yellow arrows in (b). The maxima of the Raman modes at 163.8 cm$^{-1}$ and 211.3 cm$^{-1}$ indicate a laser polarization along the b-axis and a-axis, respectively [@Song2016]. ](SIFigure_4_v01.pdf)
---
author:
- 'I. Rogachevskii , N. Kleeorin , A.D. Chernin'
- 'E. Liverts'
date: 'Received; accepted; published online'
title: 'New mechanism of generation of large-scale magnetic fields in merging protogalactic and protostellar clouds'
---
Introduction
============
The generation of magnetic fields in astrophysical objects, e.g., galaxies, stars, planets, is one of the outstanding problems of physics and astrophysics. The initial seed magnetic fields of galaxies and stars are very weak, and are amplified by the dynamo process. The generated magnetic field is saturated due to nonlinear effects.
The origin of seed magnetic fields in the early Universe, e.g., at phase transitions, is a subject of discussion. However, such seed magnetic fields could hardly produce the substantial large-scale magnetic fields observed at the present time (Peebles 1980; Zeldovich & Novikov 1983). The origin of seed magnetic fields in self-gravitating protogalactic clouds was studied by Birk et al. (2002) (see also Wiechen et al. 1998). It was suggested that the seed magnetization of protogalaxies can be provided by relative shear flows and collisional friction of the ionized and the neutral components in partially ionized, self-gravitating and rotating protogalactic clouds. Self-consistent plasma-neutral gas simulations by Birk et al. (2002) have shown that seed magnetic fields $\sim 10^{-14}$ G arise in self-gravitating protogalactic clouds on spatial scales of 100 pc during $7 \times 10^6$ years.
In this paper we discuss a new mechanism of generation of large-scale magnetic fields in colliding protogalactic and merging protostellar clouds. Interaction of the merging clouds causes large-scale shear motions which are superimposed on small-scale turbulence. Generation of the large-scale magnetic field is caused by a “shear-current” effect or “vorticity-current” effect (Rogachevskii & Kleeorin 2003; 2004). The mean vorticity is produced by the large-scale shear motions of colliding protogalactic and merging protostellar clouds (Chernin 1991; 1993).
Let us first discuss a scenario of formation of the large-scale shear motions in colliding protogalactic clouds. The Jeans process of gravitational instability and fragmentation can cause a very clumpy state of cosmic matter at the epoch of galaxy formation. A complex system of rapidly moving gaseous fragments embedded into rarefied gas might appear in some regions of protogalactic matter. Supersonic contact collisions of these protogalactic clouds might play the role of an important elementary process in the complex nonlinear dynamics of the protogalactic medium. Supersonic, non-central contact collisions of these protogalactic clouds could lead to their coalescence, the formation of shear motions, and the transformation of their initial orbital momentum into the spin momentum of the merged, gravitationally bound condensations (Chernin 1993).
Two-dimensional hydrodynamical models for inelastic non-central cloud-cloud collisions in the protogalactic medium have been developed by Chernin (1993). An evolutionary picture of the collision is as follows. At the first stage of the process the standard dynamical structure, i.e., two shock fronts and a tangential discontinuity between them, arises in the collision zone. Compression and heating of the gas which crosses the shock fronts occur. The heating entails intensive radiation emission and considerable energy loss by the system, which promotes gravitational binding of the cloud material. At the second stage of the process a dense core forms at the central part of the clump. In the vicinity of the core two kinds of jets form: “flyaway” jets of the material (which does not undergo the direct contact collision) and internal jets sliding along the curved surface of the tangential discontinuity. The flyaway jets are subsequently torn off, having overcome the gravitational attraction of the clump, whereas the internal jets remain bound in the clump. When the shock fronts reach the outer boundaries of the clump, the third stage of the process starts. Shocks are replaced by rarefaction waves, and overall differential rotation and large-scale shear motions arise. This structure can be considered as a model of the protogalactic condensation (Chernin 1993). The resulting large-scale sheared motions are superimposed on small-scale turbulence.
There are two important characteristics of the protogalactic cloud-cloud collisions: the mass bound in the resulting clump and the spin momentum acquired by it. These characteristics depend on the relative velocity and impact parameter of the collision (Chernin 1993). The parameters of a protogalactic cloud are as follows: the mass is $M \leq 10^{10} \, M_\odot$, the radius is $R \sim 10^{23}$ cm, the internal temperature is $T \sim 10^4 $ K, the mean velocity of the cloud is $V \sim 10^{6} - 10^{7}$ cm/s, where $M_\odot$ is the solar mass. Some other parameters for the protogalactic clouds (PGC) are given in Table 1 in Section 3.
An important feature of the dynamics of the interstellar matter is the fairly rapid motion of relatively dense matter fragments (protostellar clouds) embedded into rarefied gas. The origin of protostellar clouds might be a result of fragmentation of the cores of large molecular clouds. Supersonic and inelastic collisions of the protostellar clouds can cause merging of the clouds and formation of a condensation. A non-central collision of the protostellar clouds can cause conversion of the initial orbital momentum of the clouds into spin momentum and formation of differential rotation and shear motions (Chernin 1991). The internal part of the condensation would have only slow rotation, because the initial matter motions could be almost stopped in the zone of direct cloud contact. On the other hand, the minor outer part of the merged cloud matter of the condensation would have very rapid rotation due to the initial motions of those portions of the cloud material which would not stop in this zone, because they do not undergo any direct cloud collision (Chernin 1991). This material could keep its motion on gravitationally bound orbits around the major internal body of the condensation. The resulting large-scale sheared motions are superimposed on small-scale interstellar turbulence.
In the supersonic and inelastic collision of the protostellar clouds, an essential part of the initial kinetic energy of the cloud motions is lost together with the lost mass, and also through dissipation and subsequent radiative emission. The cooling time scale for the material compressed in the collision would be less than the time scale of the hydrodynamic processes. An estimate of the basic physical quantities which characterize the above processes has been made by Chernin (1991). The parameters of a protostellar cloud are as follows: the mass is $M \leq M_\odot$, the radius is $R \sim 10^{17}$ cm, the internal temperature is $T \sim 10$ K, the mean velocity of the cloud is $V \sim 10^{5} - 10^{6} $ cm/s. Some other parameters for the protostellar clouds (PSC) are given in Table 1 in Section 3.
Generation of large-scale magnetic field due to the shear-current effect
========================================================================
Now we discuss the generation of the large-scale magnetic field due to the shear-current effect. We suggest that this effect is responsible for the large-scale magnetic fields in colliding protogalactic clouds and merging protostellar clouds.
The large-scale magnetic field can be generated in a helical rotating turbulence due to the $\alpha$ effect. When the rotation is nonuniform, the generation of the mean magnetic field is caused by the $\alpha {\bf \Omega} $ dynamo. For a nonrotating and nonhelical turbulence the $\alpha$ effect vanishes. However, the large-scale magnetic field can be generated in a nonrotating and nonhelical turbulence with an imposed mean velocity shear due to the shear-current effect (see Rogachevskii & Kleeorin 2003; 2004). This effect is associated with the ${\mbox{\boldmath $ \delta$}} {\bf \times} {\bf J}$ term in the mean electromotive force, where ${\bf J}$ is the mean electric current. In order to elucidate the physics of the shear-current effect, we compare the $\alpha$ effect in the $\alpha
{\bf \Omega} $ dynamo with the ${\mbox{\boldmath $ \delta$}} {\bf \times} {\bf J}$ term caused by the shear-current effect. The $\alpha$ term in the mean electromotive force which is responsible for the generation of the mean magnetic field, reads $ {\mbox{\boldmath $ \cal E$}}^\alpha \equiv \alpha
{\bf B} \propto - ({\bf \Omega} \cdot {\bf \Lambda}) {\bf B}$ (see, e.g., Krause & Rädler 1980; Rädler et al. 2003), where $
{\bf \Lambda} = {\mbox{\boldmath $ \nabla$}} \langle {\bf u}^2 \rangle / \langle
{\bf u}^2 \rangle $ determines the inhomogeneity of the turbulence. The ${\mbox{\boldmath $ \delta$}} {\bf \times} {\bf J}$ term in the electromotive force caused by the shear-current effect is given by $ {\mbox{\boldmath $ \cal
E$}}^\delta \equiv - {\mbox{\boldmath $ \delta$}} {\bf \times} ({\mbox{\boldmath $ \nabla$}} {\bf
\times} {\bf B}) \propto ({\bf W} \cdot {\mbox{\boldmath $ \nabla$}}) {\bf B} $, where ${\mbox{\boldmath $ \delta$}}$ is proportional to the mean vorticity ${\bf W}
= {\mbox{\boldmath $ \nabla$}} {\bf \times} {\bf U}$ caused by the mean velocity shear (Rogachevskii & Kleeorin 2003; 2004).
The mean vorticity ${\bf W}$ in the shear-current dynamo plays the role of differential rotation, and an inhomogeneity of the mean magnetic field plays the role of the inhomogeneity of turbulence. During the generation of the mean magnetic field in both cases (in the $\alpha {\bf \Omega} $ dynamo and in the shear-current dynamo), a mean electric current along the original mean magnetic field arises. The $\alpha$ effect is related to the hydrodynamic helicity $ \propto ({\bf \Omega} \cdot {\bf \Lambda}) $ in an inhomogeneous turbulence. The deformations of the magnetic field lines are caused by upward and downward rotating turbulent eddies in the $\alpha {\bf
\Omega} $ dynamo. Since the turbulence is inhomogeneous (which breaks a symmetry between the upward and downward eddies), their total effect on the mean magnetic field does not vanish and it creates the mean electric current along the original mean magnetic field.
In a turbulent flow with an imposed mean velocity shear, the inhomogeneity of the original mean magnetic field breaks a symmetry between the influence of upward and downward turbulent eddies on the mean magnetic field. The deformations of the magnetic field lines in the shear-current dynamo are caused by upward and downward turbulent eddies which result in the mean electric current along the mean magnetic field and produce the magnetic dynamo.
![\[Fig1\] The dimensionless nonlinear coefficient $\sigma_{_{N}}(B)$ defining the shear-current effect for different values of the parameter $\epsilon$: $\, \, \, \epsilon=0$ (solid); $\epsilon=1$ (dashed).](FIG1.eps){width="8cm"}
Let us consider for simplicity a homogeneous turbulence with a mean linear velocity shear, i.e., the mean velocity ${\bf U} = (0, S \,
x, 0)$ and the mean vorticity ${\bf W} = (0,0,S)$. The mean magnetic field, ${\bf B} = B(x,z) \, {\bf e}_y + (D / S_\ast) \, {\mbox{\boldmath $ \nabla$}}
{\bf \times} [A(x,z) \, {\bf e}_y]$, is determined by the dimensionless dynamo equations $$\begin{aligned}
{\partial A \over \partial t} &=& \sigma_{_{N}}(B) \, \nabla_z B +
\Delta A \;,
\label{F11} \\
{\partial B \over \partial t} &=& - D \, \nabla_z A + \Delta B \;,
\label{F12}\end{aligned}$$ (Rogachevskii & Kleeorin 2003; 2004), where $D = (l_0 / L)^2 \,
S_\ast^2 \, \sigma_0 $ is the dynamo number, $S_\ast = S \, L^2 /
\eta_{_{T}}$ is the dimensionless shear number, $\sigma_0 = (4 /
135) \, (1 + 9 \epsilon) $, the parameter $\epsilon$ is the ratio of the magnetic and kinetic energies in the background turbulence (i.e., turbulence with a zero mean magnetic field), $L$ is the characteristic scale of the mean magnetic field variations, $\eta_{_{T}}$ is the turbulent magnetic diffusivity, and $\sigma_{_{N}}(B)$ is the function defining the nonlinear shear-current effect, which is normalized by $\sigma_0$. We adopt here the dimensionless form of the mean dynamo equations; in particular, length is measured in units of $L$, time is measured in units of $
L^{2} / \eta_{_{T}} $ and ${\bf B}$ is measured in units of the equipartition energy $B_{\rm eq} = \sqrt{4 \pi \rho} \, u_0 $; the turbulent magnetic diffusion coefficients are measured in units of the characteristic value of the turbulent magnetic diffusivity $\eta_{_{T}} = l_0 u_{0} / 3 $, where $u_0$ is the characteristic turbulent velocity in the maximum scale of turbulent motions $l_0$. In Eqs. (\[F11\]) and (\[F12\]) we have not taken into account a quenching of the turbulent magnetic diffusion. This facet is discussed in detail by Rogachevskii and Kleeorin (2004).
The nonlinear function $\sigma_{_{N}}(B)$ defining the shear-current effect for a weak mean magnetic field $B \ll B_{\rm eq} /4$ is given by $\sigma_{_{N}}(B) = 1$, and for $B \gg B_{\rm eq} / 4$ it is given by $\sigma_{_{N}}(B) = - 11 (1 + \epsilon) / 4 (1 + 9
\epsilon)$. The function $\sigma_{_{N}}(B)$ is shown in Fig. 1 for different values of the parameter $\epsilon$. The nonlinear function $\sigma_{_{N}}(B)$ changes its sign at some value of the mean magnetic field $B=B_\ast$. For instance, $B_\ast = 1.2 B_{\rm eq}$ for $\epsilon=0$, and $B_\ast = 1.4 B_{\rm eq}$ for $\epsilon=1$. The magnitude $B_\ast$ determines the level of the saturated mean magnetic field during its nonlinear evolution.
We seek the solution of Eqs. (\[F11\]) and (\[F12\]) for the kinematic problem in the form $ \propto \exp(\gamma \, t + i K_z
\, z) ,$ where $$\begin{aligned}
B_y(t,z) &=& B_0 \, \exp(\gamma \, t) \, \cos (K_z z) \;,
\label{M5} \\
B_x(t,z) &=& {l_0 \over L} \, \sqrt{\sigma_0} \, B_0 \, K_z \,
\exp(\gamma \, t) \, \cos (K_z z) \;, \label{M6}\end{aligned}$$ and we considered for simplicity the case when the mean magnetic field ${\bf B}$ is independent of $x$. The growth rate of the mean magnetic field is $\gamma = \sqrt{D} \, K_z - K_z^2$. The wave vector $K_z$ is measured in units of $L^{-1}$ and the growth rate $\gamma$ is measured in $ \eta_{_{T}} / L^{2} $. Consider the simple boundary conditions for a layer of the thickness $2L$ in the $z$ direction, $B(t,|z|=1) = 0$ and $ A'(t,|z|=1) = 0$, i.e., ${\bf
B}(t,|z|=1) = 0$, where $A'$ is the derivative with respect to $z$. The mean magnetic field is generated when $D > D_{\rm cr} =
\pi^2/4$, which corresponds to $K_z=\pi / 2$. Numerical solution of Eqs. (\[F11\]) and (\[F12\]) with these boundary conditions for the nonlinear problem is plotted in Fig. 2. In particular, Fig. 2 shows the nonlinear evolution of the mean magnetic field $B(t,z=0)$ due to the shear-current effect for $\epsilon=0$ and different values of the dynamo number $D$. Here $B(t,z=0)$ is measured in units of the equipartition energy $B_{\rm eq} = \sqrt{4 \pi \rho} \,
u_0 $.
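The kinematic dispersion relation $\gamma = \sqrt{D} \, K_z - K_z^2$ and the threshold $D_{\rm cr} = \pi^2/4$ can be checked with a direct numerical sketch of Eqs. (\[F11\]) and (\[F12\]) in the kinematic regime ($\sigma_{_{N}} = 1$). The explicit finite-difference scheme, grid and time step below are our own illustrative choices, not the scheme used by the authors:

```python
import numpy as np

Kz = np.pi / 2        # gravest mode allowed by the boundary conditions
D_cr = np.pi**2 / 4   # critical dynamo number quoted in the text

# the dispersion relation gamma = sqrt(D)*Kz - Kz**2 is marginal at D_cr
assert abs(np.sqrt(D_cr) * Kz - Kz**2) < 1e-12

def evolve(D, nz=101, dt=1e-4, t_end=4.0):
    """Explicit finite-difference integration of the kinematic
    Eqs. (F11)-(F12) with sigma_N = 1 on the layer |z| <= 1,
    with B(|z|=1) = 0 and A'(|z|=1) = 0."""
    z = np.linspace(-1.0, 1.0, nz)
    dz = z[1] - z[0]
    B = 1e-6 * np.cos(Kz * z)   # small seed field
    A = np.zeros(nz)

    def ddz(f):                 # centred first derivative (interior)
        out = np.zeros_like(f)
        out[1:-1] = (f[2:] - f[:-2]) / (2.0 * dz)
        return out

    def lap(f):                 # centred second derivative (interior)
        out = np.zeros_like(f)
        out[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dz**2
        return out

    for _ in range(int(t_end / dt)):
        A_new = A + dt * (ddz(B) + lap(A))        # dA/dt = dB/dz + Lap A
        B_new = B + dt * (-D * ddz(A) + lap(B))   # dB/dt = -D dA/dz + Lap B
        A, B = A_new, B_new
        B[0] = B[-1] = 0.0                        # B(|z| = 1) = 0
        A[0], A[-1] = A[1], A[-2]                 # A'(|z| = 1) = 0
    return float(np.max(np.abs(B)))

grown = evolve(D=10.0)    # supercritical: the seed field is amplified
decayed = evolve(D=1.0)   # subcritical: the seed field decays
print(grown > 1e-6, decayed < 1e-6)
```

For $D = 10 > D_{\rm cr}$ the gravest mode grows at roughly $\gamma \approx \sqrt{10}\,\pi/2 - \pi^2/4 \approx 2.5$, while for $D = 1 < D_{\rm cr}$ all modes decay, consistent with the threshold stated above.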
![\[Fig2\] The nonlinear evolution of the mean magnetic field $B(t,z=0)$ due to the shear-current effect for $\epsilon=0$ and different values of the dynamo number: $\, \, \, D=\pi^2/4 +1$ (solid); $D=10$ (dashed); $D=40$ (dashed-dotted).](FIG2.eps){width="8cm"}
The shear-current effect was studied for large hydrodynamic and magnetic Reynolds numbers using two different methods: the spectral $\tau$ approximation (a third-order closure procedure) and stochastic calculus, i.e., the Feynman-Kac path-integral representation of the solution of the induction equation and the Cameron-Martin-Girsanov theorem (Rogachevskii & Kleeorin 2003; 2004). Note that recent studies by Rädler & Stepanov (2006) and Rüdiger & Kichatinov (2006) have not found a dynamo action in nonrotating and nonhelical shear flows using the second-order correlation approximation (SOCA). This approximation is valid for small hydrodynamic Reynolds numbers. Indeed, even in the high-conductivity limit (large magnetic Reynolds numbers) SOCA can be valid only for small Strouhal numbers, while for large hydrodynamic Reynolds numbers (fully developed turbulence) the Strouhal number is of order unity.
Generation of the large-scale magnetic field in a nonhelical turbulence with an imposed mean velocity shear was recently investigated by Brandenburg (2005) and Brandenburg et al. (2005) using direct numerical simulations. The numerical results are in a good agreement with the theoretical predictions by Rogachevskii & Kleeorin (2004).
Discussion
==========
In this paper we discussed a new mechanism of generation of the large-scale magnetic fields in colliding protogalactic and merging protostellar clouds. Interaction of the merging clouds produces large-scale shear motions which are superimposed on small-scale turbulence. The scenario of the mean magnetic field evolution is as follows. In the kinematic stage, the mean magnetic field grows due to the shear-current effect from a very small seed magnetic field. During the nonlinear growth of the mean magnetic field, the shear-current effect changes its sign at some value $B_\ast$ of the mean magnetic field. The magnitude ${\bf B}_\ast$ determines the level of the saturated mean magnetic field. Since the shear-current effect is not quenched, it might be the only surviving effect, and this effect can explain the dynamics of large-scale magnetic fields in astrophysical objects with large-scale sheared motions which are superimposed on small-scale turbulence.
Note that the magnetic part of the $\alpha$ effect caused by the magnetic helicity is not zero even in nonhelical turbulence. It is a purely nonlinear effect. In this study we concentrated on the nonlinear shear-current effect and do not discuss the effect of magnetic helicity on the nonlinear saturation of the mean magnetic field (see, e.g., Kleeorin et al. 2000, 2002; Blackman & Brandenburg 2002; Brandenburg & Subramanian 2005). This is a subject of a separate ongoing study.
In Table 1 we presented typical parameters of flow and generated magnetic fields in colliding protogalactic clouds (PGC) and merging protostellar clouds (PSC). We use the following notations: $\Delta
V$ is the relative mean velocity, $\Delta R$ is the scale of the mean velocity inhomogeneity, $S = \Delta V / \Delta R$ is the mean velocity shear, $u_0$ is the characteristic turbulent velocity, $l_0$ is the maximum scale of turbulent motions, $\tau_0 = l_0 /
u_0$ is the characteristic turbulent time, $\eta_{_{T}}$ is the turbulent magnetic diffusivity, $t_\eta = (\Delta R)^2 /
\eta_{_{T}}$ is the turbulent diffusion time, $B_{\rm eq} = \sqrt{4
\pi \rho} \, u_0 $ is the equipartition large-scale magnetic field. Therefore, the estimated saturated large-scale magnetic field is of the order of several microgauss for merging protogalactic clouds, and of the order of several tens of microgauss for merging protostellar clouds (see Table 1).
\[tab1\]
|   | PGC | PSC |
|---|---|---|
| Mass | $M \leq 10^{10} \, M_\odot$ | $M \leq M_\odot$ |
| $R \, $ (cm) | $R \sim 10^{23}$ | $R \sim 10^{17}$ |
| $V \, $ (cm/s) | $10^{6} - 10^{7}$ | $10^{5} - 10^{6}$ |
| $\rho \, \,$ (g/cm$^{3}$) | $10^{-26}$ | $(1-5) \times 10^{-19}$ |
| $\Delta V \, $ (cm/s) | $10^{6} - 10^{7}$ | $10^{5}$ |
| $\Delta R \, $ (cm) | $2 \times 10^{23}$ | $10^{16} - 10^{17}$ |
| $S \,$ (s$^{-1}$) | $(0.5 - 5) \times 10^{-16}$ | $10^{-12} - 10^{-11}$ |
| $u_0 \, $ (cm/s) | $10^{6} - 10^{7}$ | $10^{4}$ |
| $l_0 \, $ (cm) | $10^{22}$ | $10^{15} - 10^{16}$ |
| $\tau_0 \, $ (years) | $(0.3 - 3) \times 10^{8}$ | $(0.3 - 3) \times 10^{4}$ |
| $\eta_{_{T}} \,$ (cm$^2$/s) | $(0.3 - 3) \times 10^{28}$ | $(0.3 - 3) \times 10^{19}$ |
| $t_\eta \, $ (years) | $(0.3 - 3) \times 10^{9}$ | $10^{6} - 10^{7}$ |
| $B_{\rm eq} \, $ ($\mu$G) | $0.3 - 3$ | $10 - 75$ |
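Several of the derived entries in Table 1 follow directly from the definitions above. The short script below reproduces the shear, turnover time, turbulent diffusivity and equipartition field for one representative parameter combination from each quoted range; the specific combinations are our choice, the agreement with the table is at the order-of-magnitude level, and the remaining entries ($t_\eta$, the dynamo number) follow from the same definitions.

```python
import math

YEAR = 3.156e7  # seconds per year

def derived(u0, l0, dR, dV, rho):
    """Derived quantities from the definitions above (CGS units)."""
    S = dV / dR                                  # mean velocity shear, 1/s
    tau0 = (l0 / u0) / YEAR                      # turnover time, years
    eta_T = l0 * u0 / 3.0                        # turbulent diffusivity, cm^2/s
    B_eq = math.sqrt(4.0 * math.pi * rho) * u0   # equipartition field, G
    return S, tau0, eta_T, B_eq * 1.0e6          # field returned in micro-G

# protogalactic cloud (PGC): u0 = 1e6 cm/s, l0 = 1e22 cm,
# Delta R = 2e23 cm, Delta V = 1e7 cm/s, rho = 1e-26 g/cm^3
print(derived(1e6, 1e22, 2e23, 1e7, 1e-26))
# protostellar cloud (PSC): upper-range values from Table 1
print(derived(1e4, 1e16, 1e16, 1e5, 5e-19))
```

For the PGC combination this yields $S = 0.5 \times 10^{-16}$ s$^{-1}$, $\tau_0 \approx 3 \times 10^{8}$ years, $\eta_{_T} \approx 0.3 \times 10^{28}$ cm$^2$/s and $B_{\rm eq} \approx 0.35$ $\mu$G, within the ranges quoted in the table.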
This work has benefited from research funding from the European Community’s sixth Framework Programme under RadioNet R113CT 2003 5058187.
Birk G.T., Wiechen H., Lesch H.: 2002, A&A 393, 685
Blackman E. G., Brandenburg A.: 2002, ApJ. 579, 359
Brandenburg A.: 2005, ApJ. 625, 539
Brandenburg A., Haugen N.E.L., Käpylä P.J., Sandin C.: 2005, Astron. Nachr. 326, 174
Brandenburg A., Subramanian K.: 2005, Phys. Rep. 417, 1
Chernin A.D.: 1991, Ap&SS 186, 159
Chernin A.D.: 1993, A&A 267, 315
Kleeorin N., Moss D., Rogachevskii I., Sokoloff D.: 2000, A&A 361, L5
Kleeorin N., Moss D., Rogachevskii I., Sokoloff D.: 2002, A&A 387, 453
Krause F., Rädler K.-H.: 1980, [*Mean-Field Magnetohydrodynamics and Dynamo Theory*]{}, Pergamon, Oxford
Peebles P.J.E.: 1980, [*The Large Scale Structure of the Universe*]{}, Princeton Univ. Press, Princeton
Rädler K.-H., Kleeorin N., Rogachevskii I.: 2003, GAFD 97, 249
Rädler K.-H., Stepanov R.: 2006, Phys. Rev. E, in press
Rogachevskii I., Kleeorin N.: 2003, Phys. Rev. E 68, 036301
Rogachevskii I., Kleeorin N.: 2004, Phys. Rev. E 70, 046310
Rüdiger G., Kichatinov, L.L.: 2006, Astr. Nachr., submitted
Wiechen H., Birk G.T., Lesch H.: 1998, A& A 334, 388
Zeldovich Ya.B., Novikov I.D.: 1983, [*Relativistic Astrophysics*]{}, Vol. 2, [*The structure and Evolution of the Universe*]{}, Chicago Univ. Press, Chicago
---
abstract: 'A generalized family of Adversary Robust Consensus protocols is proposed and analyzed. These are distributed algorithms for multi-agent systems seeking to agree on a common value of a shared variable, even in the presence of faulty or malicious agents which do not update their local state according to the protocol rules. In particular, we adopt monotone joint-agent interactions, a very general mechanism for processing locally available information which allows cross-comparisons between the state values of multiple agents simultaneously. The salient features of the proposed class of algorithms are abstracted as a Petri Net, and convergence criteria for the resulting time evolutions are formulated by employing structural invariants of the net.'
author:
- David Angeli and Sabato Manfredi
title: 'On Adversary Robust Consensus protocols through joint-agent interactions'
---
Introduction and motivations
============================
Algorithms for consensus were introduced in [@tsitsi] a few decades ago, in the context of distributed optimization, a topic which remains of great interest today [@boyd].\
The role played by the propagation of information in achieving consensus among interacting agents was first highlighted in the seminal paper [@Moreau05]. Therein, the authors formulated tight and explicit graph-theoretical requirements for asymptotic consensus in time-varying linear update protocols, by abstracting the network of agents’ interactions and its underlying dynamics as a graph. This sparked considerable interest in the scientific community in advancing and applying consensus protocols for multi-agent systems (see e.g. [@murray_survey; @Hendrickx12; @Martin16] and references therein). Subsequent developments in the theory of nonlinear consensus protocols have formalized and clarified the role of information spread along the graph of agents’ interactions in more general situations, including second- and higher-order agent dynamics, agents’ states evolving on manifolds [@manifold], and nonlinear interactions [@Maggiore07; @Manfredi_TAC]. More recently, graph-theoretical criteria have been similarly developed to encompass asymmetric confidence, as in the case of unilateral interactions [@ourpaper] or joint-agent interactions [@angelimanfredijoint].\
The latter, in particular, account for situations where individual agents impose “filtering thresholds” upon neighbours’ influences by cross-validating their opinions through mutual comparisons, so that only consistent influences (either from above or from below) of two or more neighbouring agents are acted upon. In this regard, unlike the majority of existing consensus protocols, which implicitly assume ‘additive’ dynamics and express variation rates as a disjunctive combination (sum) of neighbours’ influences, joint-agent interactions allow the formulation of conjunctive influences as well as of their additive combinations.\
It is worth stressing that complex contemporary social and engineering systems often need to deal with selfish or malicious users, node faults and attacks ([@Survey1]-[@Survey5]). In this respect, the evaluation of individual and group reputation plays a focal role in the safety of such systems. In recent years, different (centralised and distributed) algorithms have been proposed for the online estimation of the reputation of both individuals ([@back1]-[@back6]) and groups (clustering users according to their rating similarities [@Group_rank1; @Group_rank2]), so as to provide incentives for users to act responsibly and cooperatively.
Within this line of investigation, the problem of Adversary Robust Consensus Protocols (ARC-P) was formulated in [@leblanc1], following earlier seminal results in [@pease]. Therein, Leblanc and coworkers propose and analyze a discrete-time protocol which allows $n$ cooperating agents to converge towards a consensus state, within a complete all-to-all network, even when a subset of agents (of cardinality up to $\lfloor n/2 \rfloor$) is *malicious* or *faulty*, namely evolves in a completely arbitrary way, with the sole constraint of broadcasting its own state to all remaining agents. The proposed protocol simply orders state values in ascending (or descending) order and removes the $F$ highest and the $F$ lowest values from the ordered list, where $F$ is an a priori fixed bound on the number of malicious agents. Then, the average of the remaining values is computed and a standard linear consensus update equation is applied.\
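A single update step of this sort/trim/average rule can be sketched as follows (our own illustration; the function name and step size are not taken from the cited works):

```python
def arcp_step(x, i, F, eps=0.1):
    """One discrete-time trimmed-mean update for agent i: sort all broadcast
    values, discard the F largest and the F smallest, then move agent i's
    state toward the average of the surviving values (linear consensus step)."""
    trimmed = sorted(x)[F:len(x) - F]       # drop F lowest and F highest values
    target = sum(trimmed) / len(trimmed)    # average of the remaining values
    return x[i] + eps * (target - x[i])

# Example: 5 agents, at most F = 1 faulty; agent 0 performs one update.
x = [35.0, 10.0, 5.0, 15.0, 20.0]
x0_next = arcp_step(x, i=0, F=1)   # trimmed list is [10, 15, 20], average 15
```

Note that the trimming makes the update nonlinear in the broadcast values, even though the final averaging step is linear.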
Subsequent analysis in [@leblanc3] has been devoted to the important topic of relaxing the all-to-all topology requirement and investigating sufficient conditions for Adversary Robust Consensus on the basis of local information only, or in the presence of so-called Byzantine agents [@leblanc2], who may, either intentionally or due to faulty conditions, communicate different state values to different neighbours. A related line of investigation assumes the presence of trusted nodes [@abbas].\
Such protocols are interpreted, in this context, as a specific type of switching (time-varying) linear consensus, arising through the application of the so-called *sorting* function, its composition with the *reducing* function (responsible for discarding the highest and lowest values) and, finally, the averaging of the entries of the resulting vector. It turns out, however, that similar types of agent interactions can also be recast within the framework of joint-agent interactions.
The formalism of joint-agent interactions is, in this respect, even more flexible and may, for instance, allow one to partition the neighbours of every agent into several subgroups, to be suitably *sorted*, *reduced* and *averaged*, while adding (possibly with different weights) the influences resulting from distinct subgroups as a final step. This type of rule for processing local information results in consensus protocols which allow different levels of trust to be attributed to different sets of neighbours and, generally speaking, break the symmetry implicit in the use of a single sorting and reducing function.\
The extended class of intrinsically nonlinear consensus protocols afforded by the use of joint-agent interactions can be conveniently described and characterized, from a topological point of view, as *bipartite graphs*, and more specifically Petri Nets. It turns out that structural notions, developed in the context of Petri Nets to ascertain their liveness as Discrete Event Systems, play a crucial role in characterizing the ability of a network of agents to reach consensus regardless of initial conditions [@angelimanfredijoint].\
The specific details of the conditions needed for this to happen will be illustrated in a subsequent Section. Nevertheless, it is intuitive that if, on the one hand, the application of conjunctive filtering conditions among neighbouring agents limits the spread of information across the network (and therefore, if not done carefully, may prevent consensus from happening at all), on the other hand, it only allows “trustworthy” information to be propagated, and therefore may result (if carefully deployed) in Adversary Robust Consensus protocols. In this paper we address the issue of when a network of agents with arbitrary (and possibly asymmetric) interconnection topology (allowing, for instance, differentiated trust levels among neighbours) retains the ability to reach consensus despite a subset of its agents being either *faulty* or *malicious*, viz. able to influence other nodes according to their individual state value while, in fact, updating their own state in a completely arbitrary fashion. To illustrate the potential of the approach, we present below simulations of an all-to-all network of $5$ agents with linear interactions, and of a similar network with joint-agent interactions. Our theory allows us to prove that, in the latter network, robustness can be achieved for any set of $2$ out of the $5$ agents being malicious or faulty, while still guaranteeing that the remaining healthy agents reach exact consensus. In particular, we simulate the following linear network: $$\label{linlike}
\dot{x}_i = \sum_{j \neq i} a_{ij} (x_j - x_i), \qquad i,j \in \{1 \ldots 5 \},$$ for some $a_{ij}>0$ and compare it with the following nonlinear all to all network: $$\label{joint}
\dot{x}_i = \sum_{J \subset \{1 \ldots 5 \} \backslash \{ i \}: |J|=3}
f_{J \rightarrow i} (x)$$ where the function $f_{J \rightarrow i}: \mathbb{R}^n \rightarrow \mathbb{R}$ is defined below: $$f_{J \rightarrow i} (x) = \max_{j \in J} \; \min \{ x_j - x_i,0 \} + \min_{j \in J} \; \max \{x_j - x_i ,0 \}.$$ We pick the initial condition $[35,10,5,15,20]'$ and run the two consensus protocols assuming that agents $4$ and $5$ are faulty and follow the a priori fixed time evolutions: $$x_4 (t) = 15 + \frac{\cos\!\left(3\, t\right) - 1}{9} + \frac{t\, \sin\!\left(3\, t\right)}{3} + \frac{t^3}{150}$$ $$x_5(t)=20 + \frac{\sin\!\left(2\, t\right)}{4} + \frac{t\, \left(2\, {\sin\!\left(t\right)}^2 - 1\right)}{2}.$$ These agents effectively act as exogenous disturbance inputs for the remaining $3$ agents and, from a practical point of view, may be regarded as faulty or malicious agents trying to disrupt the consensus. In the case of equation (\[linlike\]), as expected due to the linearity and additivity of the interactions, the exogenous disturbances $x_4$ and $x_5$ are able to spread their influence to the remaining agents and effectively prevent them from asymptotically reaching consensus (see Fig. \[lineardisruption\]).
![Linear consensus protocol subject to faulty agents $4$ and $5$[]{data-label="lineardisruption"}](maliciouscompare1){width="12cm"}
In the case of equation (\[joint\]), instead, nonlinear joint-agent interactions allow the three “healthy” agents $1,2,3$ to asymptotically reach agreement within the convex hull of their initial values, regardless of $x_4$ and $x_5$ (see Fig. \[jointrobust\]).
![Joint-Agent consensus protocol subject to faulty agents $4$ and $5$[]{data-label="jointrobust"}](maliciouscompare2){width="12cm"}
In other words, while the faulty agents may, to a certain extent, affect the final consensus value reached, they are unable to disrupt it.
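The joint-agent simulation above can be reproduced with a plain forward-Euler integration (a sketch under our own choice of step size and horizon; any standard ODE solver would serve equally well):

```python
import math
from itertools import combinations

def f_joint(xi, xJ):
    """Joint influence of a group J on agent i, as in the definition of
    f_{J->i}: max_j min(x_j - x_i, 0) + min_j max(x_j - x_i, 0)."""
    return (max(min(xj - xi, 0.0) for xj in xJ)
            + min(max(xj - xi, 0.0) for xj in xJ))

def step(x, t, dt=1e-3):
    """One Euler step of the all-to-all joint-agent protocol, with agents
    4 and 5 (indices 3, 4) overridden by the prescribed faulty signals."""
    dx = [0.0] * 5
    for i in range(3):  # healthy agents 1, 2, 3
        others = [j for j in range(5) if j != i]
        dx[i] = sum(f_joint(x[i], [x[j] for j in J])
                    for J in combinations(others, 3))
    x = [x[i] + dt * dx[i] for i in range(5)]
    t = t + dt
    x[3] = 15 + (math.cos(3*t) - 1)/9 + t*math.sin(3*t)/3 + t**3/150
    x[4] = 20 + math.sin(2*t)/4 + t*(2*math.sin(t)**2 - 1)/2
    return x, t

x, t = [35.0, 10.0, 5.0, 15.0, 20.0], 0.0
for _ in range(20000):  # integrate up to t = 20
    x, t = step(x, t)
spread = max(x[:3]) - min(x[:3])  # residual disagreement among healthy agents
```

The healthy states remain confined to the convex hull $[5,35]$ of the initial values, and their mutual disagreement shrinks monotonically, in line with Fig. \[jointrobust\].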
Problem formulation
===================
The aim of this note is to derive necessary and sufficient conditions to characterize when networks of agents implementing joint-agent interactions may be able to achieve *robust* consensus in the presence of possibly *malicious* or *faulty* agents. In particular, we study networks described by the following class of nonlinear finite-dimensional differential equations: $$\label{net}
\dot{x} = f(x)$$ where $x \in \mathbb{R}^n$ is the state vector, and $f: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a Lipschitz continuous function, describing the update laws of each agent as a function of its own and neighbours’ state values. For convenience we ask that $f_j$ be monotonically non-decreasing with respect to all $x_i$ ($i \neq j$) so that the resulting flow is monotone with respect to initial conditions, once the standard order induced by the positive orthant is adopted. This assumption, while not essential, can to a certain extent simplify the analysis and the definition of interaction among agents. Many authors, in recent years, have elaborated conditions under which solutions of (\[net\]) asymptotically converge towards equilibria of the following form: $$\lim_{t \rightarrow + \infty} \varphi(t,x_0) = \bar{x} \textbf{1}$$ for some $\bar{x} \in \mathbb{R}$, where $\textbf{1}$ is the vector of all ones in $\mathbb{R}^n$. When this occurs for all solutions, and regardless of initial conditions, we say that system (\[net\]) achieves *global asymptotic consensus*.\
In this note, however, we consider a more general situation in which the state vector $x$ is partitioned into two subvectors, $x_H$ and $x_F$, associated to *healthy* and *faulty* agents respectively. Accordingly, we denote $f(x) = [f_H (x)',f_F(x)']'$ and wish to characterize under what assumptions solutions of $$\label{projectednet}
\dot{x}_H = f_H (x_H, x_F)$$ asymptotically converge to equilibria of the type: $$\lim_{t \rightarrow + \infty} x_H(t) = \bar{x} \textbf{1}_H$$ for all initial conditions $x_H(0)$ and all exogenous input signals $x_F(\cdot)$. A formal definition follows.
We say that network (\[net\]) achieves robust consensus in the face of faults in $F \subset \mathcal{N}$, if, partitioning the state vector according to $F$ and $H:=\mathcal{N} \backslash F$ yields: $$\lim_{t \rightarrow + \infty} \varphi_{H} ( t, x_H(0), x_F( \cdot) ) = \bar{x} \textbf{1}_H$$ for all $x_H(0)$, and all uniformly bounded exogenous input $x_F(\cdot)$, (where $\varphi_H (t, x_H(0), x_F(\cdot))$ denotes the solution of (\[projectednet\]) at time $t$ from initial condition $x_H(0)$ and input $x_F(\cdot)$).
In practice, for a given net, we will be interested in considering several possible combinations of faulty agents (corresponding to several choices of $F$) and, for each one of them, verify conditions for asymptotic convergence towards consensus of the remaining *healthy* agents $H$.\
In order to characterize the flow of information needed for achieving such kind of behaviour, we recall the notion of *joint agent interaction*, as proposed in [@angelimanfredijoint].
\[jointbilateral\] We say that a group of agents $I \subset \mathcal{N}$ jointly influences agent $j \in \mathcal{N} \backslash I$ if for all compact intervals $K \subset \mathbb{R}$ there exists a positive definite function $\rho$, such that, for all $x_I, x_j \in K$ it holds: $$\label{jointinter}
\textrm{sign}(x_I - x_j) f_j ( x_j \textbf{1} + (x_I-x_j) e_I ) \geq \rho ( |x_I - x_j | ).$$ We denote this by the following shorthand notation: $I \rightarrow j$.
Notice that influence from $I$ to $j$, denoted as $I \rightarrow j$, is monotone (in its first argument $I$) with respect to set-inclusion. In particular, if $j \in \mathcal{N} \backslash \tilde{I}$ we have: $$I \rightarrow j \textrm{ and } I \subset \tilde{I} \; \Rightarrow \tilde{I} \rightarrow j.$$ For this reason, it is normally enough to consider *minimal* influences alone. We say that $I$ influences $j$ and that this influence is *minimal* if there is no $\tilde{I} \subsetneq I$ such that $\tilde{I} \rightarrow j$.
Relevant Petri Net background
=============================
Our goal is to derive characterizations of a graph theoretical nature regarding the ability of networks with joint-agent interactions to exhibit robust consensus, in the face of faults or malicious attacks. We adopt, to this end, the formalism introduced in [@angelimanfredijoint]. In particular, we represent multiagent networks as Petri Nets. These are a type of bipartite graph, used to model Discrete Event Systems, and can be conveniently adopted in the present study. In fact, a rich literature on structural invariants for Petri Nets already exists, including software libraries to compute them as well as complexity analysis of the available algorithms.
An (ordinary) Petri Net is a quadruple $\{ P, T, E_I , E_O \}$, where $P$ and $T$ are finite sets (with $P \cap T = \emptyset$) referred to as *places* and *transitions*, respectively. These are nodes of a directed bipartite graph. In fact, directed edges are of two types: $E_I \subset T \times P$ connecting transitions to places and $E_O \subset P \times T$ connecting places to transitions.\
In our context places represent agents while transitions stand for interactions among them. More precisely, to each agent $i \in \mathcal{N}$ there corresponds a unique place $p_i \in P$. Furthermore, if agents in $J \subset \mathcal{N}$ jointly influence agent $i$, this is denoted as $J \rightarrow i$ and, provided this interaction is minimal, it is represented graphically by a single transition $t \in T$, with edges $(p_j, t) \in E_O$ for all $j \in J$ and a single edge $(t,p_i )$ in $E_I$. Notice that, unlike in general Petri Nets, every transition here affords exactly one outgoing edge. As an example, we show in Fig. \[netexamples\] the graphical representation of the Petri Nets associated to the list of interactions $$\label{simplestnet}
\{ 1 \} \rightarrow 2, \; \{ 1,2 \} \rightarrow 3,$$ and, next to it, for a kind of ring topology with $5$ agents and the following list of minimal joint agent interactions: $$\label{ring5}
\{1,2\} \rightarrow 3, \; \{2,3\} \rightarrow 4, \; \{3,4\} \rightarrow 5, \; \{4,5 \} \rightarrow 1, \; \{5, 1 \} \rightarrow 2.$$
![Petri Nets associated to network of interactions (\[simplestnet\]) and (\[ring5\]) []{data-label="netexamples"}](simplest "fig:"){height="3.5cm"} ![Petri Nets associated to network of interactions (\[simplestnet\]) and (\[ring5\]) []{data-label="netexamples"}](oddnumbernodes "fig:"){height="3.5cm"}
The next concepts will be crucial in characterizing, from the topological point of view, networks that guarantee asymptotic convergence towards consensus. The set of *input transitions* for a place $p$ is denoted as $$I(p) = \{ t \in T: (t,p) \in E_I \},$$ and, similarly, for a set of places $S \subset P$, its input transitions are: $$I(S) = \{ t \in T: \exists \, p \in S: (t,p) \in E_I \}.$$ Symmetrically, output transitions are denoted as: $$O(S) = \{ t \in T: \exists \, p \in S: (p,t) \in E_O \}.$$
\[siphondef\] A non-empty set of places $S \subset P$ is called a *siphon* if $I(S) \subset O(S)$. A siphon is minimal if no proper subset is also a siphon.
Informally, in a group of agents that correspond to a siphon, any influence needs to come (at least in part) from within the group. In [@angelimanfredijoint], a characterization of the ability of agents to asymptotically converge towards consensus (regardless of their initial conditions) is provided. This feature, called *structural consensuability*, is shown to be equivalent to the requirement that any pair of siphons in the associated Petri Net have non-empty intersection.
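On small nets, both the siphon property and the pairwise-intersection test can be checked by brute force; below is a sketch (our own encoding: each minimal interaction $J \rightarrow i$ is stored as a pair `(J, i)`, so that a set $S$ is a siphon exactly when every interaction targeting $S$ draws input from $S$):

```python
from itertools import combinations

def is_siphon(S, interactions):
    """S is a siphon iff every transition feeding a place of S also draws
    input from S, i.e. I(S) is contained in O(S)."""
    S = set(S)
    return bool(S) and all(S & set(J) for (J, i) in interactions if i in S)

def structurally_consensuable(places, interactions):
    """Every pair of siphons must intersect; it suffices to test pairs of
    minimal siphons, since every siphon contains a minimal one."""
    places = list(places)
    siphons = [set(S) for r in range(1, len(places) + 1)
               for S in combinations(places, r) if is_siphon(S, interactions)]
    minimal = [S for S in siphons if not any(T < S for T in siphons)]
    return all(S1 & S2 for S1 in minimal for S2 in minimal)

# The 5-agent ring (ring5): {1,2}->3, {2,3}->4, {3,4}->5, {4,5}->1, {5,1}->2.
ring5 = [({1, 2}, 3), ({2, 3}, 4), ({3, 4}, 5), ({4, 5}, 1), ({5, 1}, 2)]
print(structurally_consensuable(range(1, 6), ring5))
```

For the ring (\[ring5\]) the minimal siphons turn out to be certain triples of agents, and any two triples in a 5-element set intersect, so structural consensuability holds.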
To address robustness questions within the same set-up of joint agent interactions, an extension of the concept of siphon is needed. The following is, to the best of our knowledge, an original definition:
\[cs\] A non-empty set of places $S \subset P$ is an $F$-controlled siphon, if: $$I(S) \subset O(S) \cup O(F).$$
Notice that Definition \[cs\] boils down to the standard notion of siphon for $F= \emptyset$. The union of $F$-controlled siphons is again an $F$-controlled siphon and, in particular, if a set is an $F_1$-controlled siphon, it is also an $F_2$-controlled siphon for all $F_2 \supseteq F_1$. We call the set $F$ the *switch* of the siphon $S$. Informally, this terminology is adopted because malicious agents in $F$ may prevent healthy agents in $S$ from increasing (or decreasing) their own state values. This is indeed achievable by the malicious agents simply broadcasting values which are either below the minimum or, respectively, above the maximum of all values within the siphon.
The following notions are appropriate to characterize occurrence of robust consensus.
\[robconscond\] We say that a Petri Net fulfills robust consensuability with respect to faults in $F \subset \mathcal{N}$ if $H := \mathcal{N} \backslash F$ is a siphon and for all pairs of controlled siphons $S_1$,$S_2$ and associated switches $F_1,F_2 \subset F$, we have the following: $$\label{robcons}
S_1 \cap S_2 = \emptyset \Rightarrow F_1 \cap F_2 \neq \emptyset.$$
It is worth pointing out that robust consensuability, when $F= \emptyset$, boils down to *structural* consensuability as defined in [@angelimanfredijoint]. Also, a direct comparison with previously existing conditions for consensuability in networks where all influences are single-agent influences is not possible, as the condition would never be fulfilled. It is in fact the use of joint-agent interactions and cross-validations that makes adversary robust consensus achievable. On the other hand, we believe that our conditions boil down to those proposed in [@leblanc3] when only interactions obtained through a sorting and reducing function are allowed.
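The robust-consensuability condition can likewise be tested exhaustively on small nets. A brute-force sketch follows (our own helper functions; mirroring the role played by the sets $M(\tau), m(\tau)$ in the proof of the main result, we restrict candidate controlled siphons to subsets of the healthy set $H$):

```python
from itertools import combinations

def subsets(xs):
    """All subsets of xs, yielded as sets."""
    xs = sorted(xs)
    for r in range(len(xs) + 1):
        yield from (set(c) for c in combinations(xs, r))

def is_controlled_siphon(S, Fsw, interactions):
    """S is an Fsw-controlled siphon iff every transition feeding S draws
    input from S or from the switch Fsw: I(S) within O(S) union O(Fsw)."""
    S = set(S)
    return bool(S) and all(set(J) & (S | set(Fsw))
                           for (J, i) in interactions if i in S)

def robustly_consensuable(places, interactions, F):
    """H = places \\ F must be a siphon, and any two disjoint controlled
    siphons of healthy agents must have intersecting switches."""
    F = set(F)
    H = set(places) - F
    if not is_controlled_siphon(H, set(), interactions):
        return False
    pairs = [(S, Fsw) for S in subsets(H) if S
             for Fsw in subsets(F) if is_controlled_siphon(S, Fsw, interactions)]
    return all(S1 & S2 or F1 & F2 for (S1, F1) in pairs for (S2, F2) in pairs)

ring5 = [({1, 2}, 3), ({2, 3}, 4), ({3, 4}, 5), ({4, 5}, 1), ({5, 1}, 2)]
print(robustly_consensuable(range(1, 6), ring5, set()))  # reduces to structural
print(robustly_consensuable(range(1, 6), ring5, {4}))    # one fault breaks the ring
```

On the ring (\[ring5\]) a single faulty agent already destroys robust consensuability, whereas the all-to-all network (\[joint\]) of the Introduction passes the test with any two agents faulty.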
Main result and proofs
======================
In this Section we state the main result and clarify the steps of its proof. To this end, in order to allow dynamical properties of a multiagent system to be derived on the basis of structural conditions fulfilled by the associated Petri Net, it is important to establish a closer link between the considered equations and the associated Petri Net. For any transition $t \in T$, denote by: $$I(t):= \{ p \in P : (p,t) \in E_O \}$$ and by $j(t)$ the unique place such that $(t, j(t) )$ belongs to $E_I$. In particular, for a given Petri Net $\{P,T,E_I,E_O \}$ we consider non-decreasing locally Lipschitz functions $F_i: \mathbb{R}^{|I(p_i)|} \rightarrow \mathbb{R}$, with $F_i (0,0,\ldots,0)=0$, and such that $F_i$ is strictly increasing in each of its arguments at $0$. These are employed to define networks of equations: $$\label{specialstructure}
\dot{x}_i = F_i ( f_{I(t_1) \rightarrow i} (x), f_{I(t_2) \rightarrow i} (x), \ldots, f_{I(t_{|I(p_i)|}) \rightarrow i} (x) ), \qquad I(p_i)=\{ t_1, \ldots t_{|I(p_i)|} \}.$$ A typical example arises when $F_i ( f ) = \sum_{k} \alpha_k f_k$, for some choice of coefficients $\alpha_k>0$. Equation (\[specialstructure\]) is, however, more general and allows non-additive agents’ influences. As an example of a non-additive function $F_i$, one may consider for instance the map $F_i (f) = \min_{k \in \{1, \ldots, |I(p_i)| \}} f_k + \max_{k \in \{1 \ldots |I(p_i)| \} } f_k$.\
Composition of the above maps with monotonic increasing functions, such as saturations or (odd) powers are also legitimate choices, i.e. $F_i ( f ) = \sum_{k} \alpha_k \textrm{sat} (f_k)$ or $F_i ( f ) = ( \sum_{k} \alpha_k f_k )^3$.
We are now ready to state our main result and, later, to discuss the technical steps of its derivation.
\[mr\] Consider a cooperative network of agents as in (\[net\]) and let $N$ be the Petri Net associated to its set of *minimal* joint agent interactions. Consider a partition of $\mathcal{N}$ into two disjoint subgroups $F, H \subset\mathcal{N}$, which represent the Faulty and the Healthy agents (respectively), along with the projected dynamics, (\[projectednet\]). Then, robust consensus is achieved among the agents in $H$ provided $N$ fulfills robust consensuability with respect to faulty agents in $F$.
It is worth pointing out that the result assumes faulty agents follow arbitrary continuous evolutions and that these are reliably broadcast to all neighbouring agents. This hypothesis cannot model the situation in which malicious agents intentionally communicate different evolutions to different neighbours. Agents with this ability are usually referred to as Byzantine agents, and Byzantine consensus protocols exhibit robustness to such kinds of threats. Notice that the ability of malicious agents to differentiate the information sent to neighbours may disrupt consensus even when robust consensuability is fulfilled. An example of this situation is shown later in Section \[simuex\].\
We start the technical discussion by generalizing Proposition 11 in [@angelimanfredijoint].
\[invlemma\] Let $H$ be a siphon of $N$. Consider a network of equations (\[specialstructure\]) and let $x_H$ denote the state vector of agents in $H$, along with the corresponding equations $$\dot{x}_H (t) = f_H ( x_H(t), x_F(t) )$$ as introduced in (\[projectednet\]). Then, for any $c \in \mathbb{R}$, the sets: $$\bar{\mathcal{X}}_c := \{ x_H \in \mathbb{R}^{|H|}: x_H \leq c \textbf{1}_H \},$$ $$\underline{ \mathcal{X}}_c := \{ x_H \in \mathbb{R}^{|H|}: x_H \geq c \textbf{1}_H \}$$ are robustly forward invariant for any bounded input signal $x_F (\cdot)$.
Let $x_F(\cdot)$ take values in the compact set $K$ and let $h \in H$ be any agent whose associated state value fulfills $x_h=c$. To prove invariance of $\bar{\mathcal{X}}_c$ we need to show $f_h (x_H,x_F) \leq 0$ for all $x_F \in K$. This condition, in fact, amounts to $f (x_H,x_F) \in TC_x ( \bar{ \mathcal{X}}_c )$ for all $x_H \in \partial \bar{\mathcal{X}}_c $ and all $x_F \in K$. This, in turn, implies forward invariance of $\bar{\mathcal{X}}_c$ by Nagumo’s Theorem. Let $I(p_h) = \{ t_1, t_2, \ldots, t_{|I(p_h)|} \}$. Since $H$ is a siphon, for all $t_i$ in $I(p_h)$ there exists $\tilde{h} \in I (t_i) \cap H$. Hence: $$f_{ I (t_i) \rightarrow h } (x) \leq f_{ I (t_i) \rightarrow h } (\bar{x}_H \textbf{1}_H, x_F ) = 0.$$ By monotonicity of $F_h$ then: $$\dot{x}_h = F_h ( f_{I(t_1) \rightarrow h} (x), f_{I(t_2) \rightarrow h} (x), \ldots, f_{I(t_{|I(p_h)|}) \rightarrow h} (x) ) \leq F_h (0,0,\ldots,0) = 0.$$ This completes the proof of the Lemma.
The rest of the Section is devoted to illustrate the main technical steps of the proof.
Let $x_F (t)$ be an arbitrary bounded, continuous signal. Assume, in particular, that $x_F(t) \in K$ for some compact set $K \subset \mathbb{R}^{|F|}$. Pick any initial agent distribution $x_H(0)$ and define the evolution of healthy agents in $H$ according to the equation (\[projectednet\]). In particular, we denote the solution $x_H(t) := \varphi_H (t, x_H(0), x_F( \cdot ) )$, for all $t \geq 0$. Moreover, we let: $$\bar{x}_H := \max_{h \in H} x_h \qquad \underline{x}_H := \min_{h \in H} x_h.$$ Since $H$ is a siphon, by Lemma \[invlemma\], we see that for all $t_2 \geq t_1 \geq 0$: $$x_H (t_1) \in \bar{\mathcal{X}}_{\bar{x}_H(t_1)} \Rightarrow x_H(t_2) \in \bar{\mathcal{X}}_{\bar{x}_H(t_1)}.$$ In particular then, $\bar{x}_H(t_1) \geq \bar{x}_H (t_2)$, viz. $\bar{x}_H$ is monotonically non-increasing. As expected, a symmetric argument shows that $\underline{x}_H (t)$ is monotonically non-decreasing. Therefore $x_H(t)$ is uniformly bounded and the limits $$\label{limvalues}
\bar{x}_H^{\infty} := \lim_{t \rightarrow + \infty} \bar{x}_H(t) \qquad \underline{x}_H^{\infty} := \lim_{t \rightarrow + \infty} \underline{x}_H (t),$$ exist and are finite. For future reference, it is convenient to define the convex-valued differential inclusion given below: $$\label{timeinvariantdinc}
\dot{z} \in F_H (z) := \textrm{co} \left ( \bigcup_{x_F \in K} \{ f_H (z,x_F) \} \right ).$$ Due to the compactness of $K$ and the Lipschitz continuity of $f_H$, $F_H$ is a Lipschitz continuous set-valued map. In particular, $x_H(t)$ is also a (bounded) solution of (\[timeinvariantdinc\]). Consider next the associated $\omega$-limit set, which, by boundedness of $x_H(t)$, is non-empty and compact: $$\Omega_H := \left \{ x \in \mathbb{R}^{|H|}: \exists \, \{ t_{n} \}_{n=1}^{+ \infty}: \lim_{n \rightarrow + \infty} t_n = + \infty \textrm{ and } x = \lim_{n \rightarrow + \infty} x_H (t_n) \right \}.$$ Notice that, by definition, for any $z_H \in \Omega_H$ we have $\bar{z}_H = \bar{x}_H^{\infty}$ and $\underline{z}_H = \underline{x}_H^{\infty}$. As is well known, $\Omega_H$ is a weakly invariant set for the differential inclusion (\[timeinvariantdinc\]). Selecting any element $\tilde{z}_H$ in $\Omega_H$, there exists at least one viable solution $\tilde{z}_H (t)$ of (\[timeinvariantdinc\]) such that $\tilde{z}_H (t) \in \Omega_H$ for all $t$. Notice that, by Lipschitzness of $F_H$, the sets $$M(t) := \{ h \in H: \tilde{z}_h (t) = \bar{x}_H^{\infty} \},$$ and $$m(t) := \{ h \in H: \tilde{z}_h (t) = \underline{x}_H^{ \infty} \}$$ are monotonically non-increasing with respect to set-inclusion and, trivially, non-empty for all $t \geq 0$. Hence, there exists some finite $\tau\geq 0$ such that $M(t)=M(\tau)$ and $m(t)=m (\tau)$ for all $t \geq \tau$. Moreover, for all such values of $t$, we see that: $$\label{isconstant}
\dot{\tilde{z}}_h (t) = 0 \qquad \forall \, h \in M( \tau),$$ and similarly $$\label{isconstant2}
\dot{\tilde{z}}_h (t) = 0 \qquad \forall \, h \in m( \tau).$$ To prove asymptotic consensus, we need to show $M(\tau) \cap m(\tau) \neq \emptyset$. To this end, we claim that there exists $F_M \subset F$ such that $M( \tau )$ is an $F_M$-controlled siphon, as we argue next by contradiction.\
Should this not be the case, there would exist some $h \in M(t)$ and some $I \subset \mathcal{N}$ such that $I \rightarrow h$ and yet $I \cap ( M(t) \cup F ) = \emptyset$. In particular then, $\bar{ \tilde{z}}_I(t):= \max_{i \in I} \tilde{z}_i (t) < \bar{x}_H^{ \infty}$, and this violates (\[isconstant\]) by virtue of definition (\[jointinter\]), as for all $x_F \in K$: $$f_h (\tilde{z}_H (t), x_F) \leq f_h ( \bar{x}_H^{\infty} \textbf{1}_H + ( \bar{\tilde{z}}_I (t) - \bar{x}_H^{\infty} ) e_I, x_F ) \qquad \qquad \qquad$$ $$\qquad \qquad \qquad \qquad \leq - \rho ( \bar{x}_H^{\infty}-\bar{\tilde{z}}_I (t) )<0.$$ A similar argument can be used to show that $m(t)$ is an $F_m$-controlled siphon.\
Consider next any pair of switches $F_M, F_m \subset F $ such that $M( \tau )$ and $m( \tau)$ are, respectively, an $F_M$- and an $F_m$-controlled siphon. Assume, without loss of generality, that $F_M$ and $F_m$ are minimal with respect to set inclusion (among the switches of the respective siphons).\
In the following we argue by contradiction, considering the case $M( \tau ) \cap m( \tau ) = \emptyset$. By robust consensuability, this implies $F_M \cap F_m \neq \emptyset$ and we may pick $\bar{f} \in F_M \cap F_m$. By minimality of $F_M$ and $F_m$, moreover, removing $\bar{f}$ from them violates the definition of controlled siphon, viz. there exist $h_M \in M(\tau)$ and $h_m \in m(\tau)$ (distinct from each other), such that for some joint interactions $I_M \rightarrow h_M$ and $I_m \rightarrow h_m$ we see that $$\label{fshouldbeabove}
I_M \cap ( M(t) \cup ( F_M \backslash \{ \bar{f} \} ) ) = \emptyset,$$ and, similarly, $$\label{fshouldbebelow}
I_m \cap ( m(t) \cup ( F_m \backslash \{ \bar{f} \} ) ) = \emptyset.$$
Since $H$ is a siphon, however, $f_{h_M} (z, x_F) \leq 0$ for all $z \in \Omega_H$ and all $x_F \in K$. Moreover condition (\[fshouldbeabove\]) yields: $$x_{\bar{f}} < \bar{x}_H^{\infty} \Rightarrow
f_{h_M} ( \tilde{z}_H(t), x_F ) \leq - \rho ( \bar{x}_H^{\infty} - \max \{ \bar{\tilde{z}}_{I_M}(t), x_{\bar{f}} \} ) < 0.$$ Similarly, $f_{h_m} (z,x_F) \geq 0$ for all $z \in \Omega_H$ and all $x_F \in K$. In addition, $$x_{\bar{f}} > \underline{x}_H^{\infty} \Rightarrow f_{h_m} ( \tilde{z}_H(t), x_F ) \geq \rho ( \min \{ \underline{\tilde{z}}_{I_m}(t), x_{\bar{f}} \} - \underline{x}_H^{\infty} ) > 0.$$ Notice that, whenever $\underline{x}_H^{\infty} < \bar{x}_H^{\infty} $, we have $(- \infty, \bar{x}_H^{\infty} ) \cup ( \underline{x}_H^{\infty}, + \infty ) = \mathbb{R}$ and therefore, $$f_{h_M} ( \tilde{z}_H(t), x_F ) - f_{h_m} ( \tilde{z}_H(t), x_F ) \qquad \qquad \qquad \qquad$$ $$\leq - \rho ( \min \big \{ \bar{x}_H^{\infty} - \max \{ \bar{\tilde{z}}_{I_M}(t), (\underline{x}_H^{\infty} + \bar{x}_H^{\infty})/2 \}, \min \{ \underline{\tilde{z}}_{I_m}(t), (\underline{x}_H^{\infty} +
\bar{x}_H^{\infty})/2 \} -
\underline{x}_H^{\infty} \big \} ).$$ As a consequence: $$( e_{h_M} - e_{h_m} )' F_H ( \tilde{z}_H(t), x_F) \qquad \qquad \qquad$$ $$\leq - \rho ( \min \big \{ \bar{x}_H^{\infty} - \max \{ \bar{\tilde{z}}_{I_M}(t), (\underline{x}_H^{\infty} + \bar{x}_H^{\infty})/2 \}, \min \{ \underline{\tilde{z}}_{I_m}(t), (\underline{x}_H^{\infty} +
\bar{x}_H^{\infty})/2 \} -
\underline{x}_H^{\infty} \big \} ) < 0$$ for all $x_F \in K$. This, however, contradicts either (\[isconstant\]) or (\[isconstant2\]).\
Notice that, in the proof of Theorem \[mr\], it is crucial that the state values communicated by each malicious agent to all of its neighbours are consistent. If not, malicious agents could more easily prevent consensus by sending differentiated signals to individual agents, and a correspondingly stronger notion of structural consensuability would be needed.\
Examples and Simulation {#simuex}
=======================
We consider next an example with $9$ agents arranged in a $3 \times 3$ grid. Each agent is denoted by an (ordered) pair of integers in $\{1,2,3\}:=N$. In particular then $\mathcal{N}= N \times N$. We consider the following interconnection topology. For all $(i,j) \in N \times N$, we have two joint-agent interactions: $$( N \backslash \{i \} ) \times \{ j \} \rightarrow (i,j)$$ $$\{ i \} \times ( N \backslash \{j\} ) \rightarrow (i,j).$$ The associated Petri Net is shown in Fig. \[petrimess\].
![Petri Net associated to joint-agent interactions[]{data-label="petrimess"}](nicepetri.pdf){width="9cm"}
Notice that, by construction, whenever an agent belongs to a siphon, another agent from the same column and another from the same row must also belong to the siphon. For a set $\Sigma \subset N \times N$, let $\Sigma_i$ denote the elements of $\Sigma$ belonging to $\{i\} \times N$ and $\Sigma^j$ the elements of $\Sigma$ in $N \times \{j \}$. We see that a non-empty $\Sigma$ is a siphon if and only if $|\Sigma_i| \geq 2$ for all $i \in N$ such that $|\Sigma_i|>0$ and $|\Sigma^j| \geq 2$ for all $j \in N$ such that $|\Sigma^j|>0$. In particular, minimal siphons fulfill the equality rather than the strict inequality and are essentially of two kinds, as shown in Fig. \[basicsi\]. It is straightforward to see that, although there are siphons of cardinality strictly smaller than half the size of the group ($4 < 9/2$), their layout ensures that any two siphons share at least one element. Therefore, structural consensuability is fulfilled and asymptotic consensus is guaranteed in the absence of faulty agents for all initial conditions.\
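The row/column characterization makes these claims easy to verify exhaustively. The following sketch is our own encoding (the function and variable names are not from the paper): it enumerates all siphons of the $3 \times 3$ grid and confirms that minimal siphons have cardinality $4 < 9/2$ and that any two siphons share at least one element.

```python
from itertools import product, combinations

N = [1, 2, 3]
AGENTS = list(product(N, N))          # the 9 agents (i, j)

def is_siphon(S):
    """Characterization from the text: a non-empty S is a siphon iff every
    row and every column touched by S contains >= 2 elements of S."""
    if not S:
        return False
    for i in N:
        if 0 < sum(1 for (a, b) in S if a == i) < 2:   # |Sigma_i|
            return False
    for j in N:
        if 0 < sum(1 for (a, b) in S if b == j) < 2:   # |Sigma^j|
            return False
    return True

siphons = [frozenset(S) for r in range(1, 10)
           for S in combinations(AGENTS, r) if is_siphon(frozenset(S))]

min_card = min(len(S) for S in siphons)                        # -> 4
pairwise_intersect = all(S1 & S2 for S1, S2 in combinations(siphons, 2))
```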
\(a) ![Siphons of minimal support (gray)[]{data-label="basicsi"}](basicsiphon.pdf "fig:"){width="4.5cm"} (b) ![Siphons of minimal support (gray)[]{data-label="basicsi"}](basicsiphon2.pdf "fig:"){width="4.5cm"}
Next we investigate the possibility of achieving robust consensus in the presence of a single faulty agent. Due to the symmetry of the network we may choose any agent to be faulty; the corresponding analysis applies, up to permutations, to any other choice. For ease of graphical representation we choose the faulty agent to be $(2,2)$.\
First of all, it is easily seen that $N \times N \backslash \{ (2,2) \}$ is a siphon. Moreover, any siphon of the full Petri Net that does not contain $(2,2)$ is also a $\emptyset$-controlled siphon when $F= \{ (2,2) \}$. Next, we look for $(2,2)$-controlled siphons. These are siphons for which agent $(2,2)$ acts as a switch. It can be seen that $\Sigma$ is a $(2,2)$-controlled siphon if (and only if) $\Sigma \cup \{(2,2)\}$ is a siphon. The direct implication is true for all Petri Nets, but the converse need not hold in general. In particular, then, only two types of controlled siphons can be identified (up to permutations), as shown in Fig. \[contrsi\].
\(a) ![$(2,2)$-controlled siphons of minimal support (gray)[]{data-label="contrsi"}](fcontrolledsiphon.pdf "fig:"){width="4.5cm"} (b) ![$(2,2)$-controlled siphons of minimal support (gray)[]{data-label="contrsi"}](fcontrolledsiphon2.pdf "fig:"){width="4.5cm"}
Because of this, for any pair of $\emptyset$- or $(2,2)$-controlled siphons $\Sigma_1$, $\Sigma_2$, with associated switches $F_1$, $F_2$, it is true that $$(\Sigma_1 \cup F_1) \cap (\Sigma_2 \cup F_2) \neq \emptyset.$$ Moreover, since by definition $\Sigma_i \cap F_i = \emptyset$, the following is true: $$\Sigma_1 \cap \Sigma_2 = \emptyset \Rightarrow F_1 \cap F_2 \neq \emptyset.$$
Hence, robust structural consensuability is fulfilled and one may expect the $8$ healthy agents to reach asymptotic consensus despite the exogenous disturbance coming from agent $(2,2)$. As previously remarked, this is still true for all possible selections of a single faulty agent.\
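This single-fault robustness claim can also be checked by brute force. The sketch below (again our own encoding) builds all $\emptyset$-controlled siphons and, using the stated equivalence that $\Sigma$ is $(2,2)$-controlled precisely when $\Sigma \cup \{(2,2)\}$ is a siphon, all $(2,2)$-controlled siphons, and then verifies $(\Sigma_1 \cup F_1) \cap (\Sigma_2 \cup F_2) \neq \emptyset$ for every pair.

```python
from itertools import product, combinations

N = [1, 2, 3]
AGENTS = list(product(N, N))
FAULTY = (2, 2)

def is_siphon(S):
    # non-empty S is a siphon iff every touched row/column holds >= 2 of S
    if not S:
        return False
    rows_ok = all(not (0 < sum(a == i for (a, b) in S) < 2) for i in N)
    cols_ok = all(not (0 < sum(b == j for (a, b) in S) < 2) for j in N)
    return rows_ok and cols_ok

all_siphons = [frozenset(S) for r in range(1, 10)
               for S in combinations(AGENTS, r) if is_siphon(frozenset(S))]

# controlled siphons as pairs (Sigma, switch) for the single faulty agent
ctrl = [(S, frozenset()) for S in all_siphons if FAULTY not in S]
ctrl += [(S - {FAULTY}, frozenset({FAULTY}))
         for S in all_siphons if FAULTY in S and len(S) > 1]

robust = all((S1 | F1) & (S2 | F2)
             for (S1, F1), (S2, F2) in combinations(ctrl, 2))
```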
Next we explain why this network is not able, in general, to withstand more than a single faulty agent. If two faulty agents occur in the same row or column, then the set of healthy agents will exhibit a row or a column with a single agent. This implies that the set of healthy agents is not a siphon. Hence robust structural consensuability is not fulfilled. Indeed, the two faulty agents have the ability to influence the agent within the same column (or row) and disrupt its ability to reach asymptotic consensus.\
Consider next the case of two faulty agents that are not in the same row or column, for instance agents $(2,2)$ and $(3,3)$. Let $F = \{ (2,2), (3,3) \}$. As we have already characterized $\emptyset$-controlled siphons and siphons controlled by a switch of cardinality one, we need only look for $F$-controlled siphons. Any set $\Sigma$ such that $\Sigma \cup F$ is a siphon is also an $F$-controlled siphon. In addition, the network exhibits two types of $F$-controlled siphons that do not fulfill such condition. These are shown in Fig. \[f1f2si\].\
\(a) ![$\{ (2,2), (3,3) \}$-controlled siphons of minimal support (gray)[]{data-label="f1f2si"}](f1f2controlledsiphon.pdf "fig:"){width="4.5cm"} (b) ![$\{ (2,2), (3,3) \}$-controlled siphons of minimal support (gray)[]{data-label="f1f2si"}](f1f2controlledsiphon2.pdf "fig:"){width="4.5cm"}
Notice that $\{ (2,3) \}$ is an $F$-controlled siphon. On the other hand, the set $\{(1,1),(1,2),(3,1),(3,2)\}$ is an $\emptyset$-controlled siphon. These two controlled siphons, together with their associated switches, have empty intersection. Hence, robust structural consensuability does not hold for this choice of faulty agents. Indeed, the agents in $F$ have the ability to prevent consensus between the agents in the siphons described above.
![Malicious agents $(2,2)$ and $(3,3)$ disrupt consensus.[]{data-label="notrob"}](nonrobust.png){width="10cm"}
For instance, one may take initial conditions $x_{1,1}(0)=x_{1,2} (0) = x_{3,1} (0) = x_{3,2}(0)= 1$. Since this set is an $\emptyset$-controlled siphon, the corresponding solutions fulfill $x_{1,1}(t)=x_{1,2} (t) = x_{3,1} (t) = x_{3,2}(t)= 1$ for all $t$. At the same time, one may let the malicious agents fulfill $x_{3,3}(t)=x_{2,2} (t)=0$, so that any solution with $x_{2,3}(0)=0$ would result in $x_{2,3}(t)=0$ identically, thus preventing consensus. A similar issue arises by letting agents $x_{1,1},x_{1,2},x_{3,1},x_{3,2}$ be initialized with negative values, while $x_{2,2}$ and $x_{3,3}$ oscillate at some higher values, as shown in Fig. \[notrob\].
Due to the symmetry of the considered network, it follows that any selection of two faulty agents results in a violation of the conditions for robust consensuability.
We emphasize that the considered network is not robust with respect to Byzantine malicious agents. In particular, in the simulation we show the result of agent $(2,2)$ broadcasting higher values to agents $(1,2)$, $(2,1)$ than those broadcast to agents $(3,2)$, $(2,3)$. The Byzantine agent is initialized with $x_{2,2} (0)=0$ and does not change its position, while it sends the values $+2$ and $-2$ to its neighbors, thus disrupting consensus as shown in Fig. \[byz\]. It is worth pointing out that if the agent had consistently sent the same information to all its neighbours, robust consensus would have been achieved.
![Byzantine agent $(2,2)$ disrupts consensus.[]{data-label="byz"}](byzantine.png){width="10cm"}
Comparison with ARC-P protocols {#arcpsection}
===============================
Adversarially Robust Consensus was first introduced by LeBlanc and coworkers in [@leblanc1]. It was initially proposed for all-to-all networks and later extended in [@leblanc3] to networks with more general topologies. We start this Section by highlighting how the all-to-all topology considered in [@leblanc1] can be seen as a specific type of symmetric joint-agent interaction. Similar considerations apply when the set of neighbours of each agent is a proper subset of $\{1,2, \ldots, n \}$ but, for the sake of simplicity, this is not illustrated in detail. Let $\bar{\sigma}_k(x)$ denote the $k$-th largest entry in $x$ and, similarly, $\underline{\sigma}_k (x)$ the $k$-th smallest entry in $x$. We see that $$\bar{\sigma}_k (x) = \max_{J \subset \mathcal{N}: |J|= k } \; \min_{j \in J} x_j$$ $$\underline{ \sigma }_k (x) = \min_{J \subset \mathcal{N}: |J|= k } \; \max_{j \in J} x_j.$$ Moreover, $\bar{ \sigma }_k (x) = \underline{ \sigma }_{n+1-k} (x)$, and therefore for any integer $F$ with $n-F \geq F+1$ we see that $$\sum_{k= F+1}^{n-F} \bar{\sigma}_k (x) = \sum_{k= F+1}^{n-F} \underline{\sigma}_k (x).$$ Consider next the protocol described by the following set of equations: $$\label{forcomparison}
\dot{x}_i = -x_i + \frac{ \sum_{k= F+1}^{n-F} \bar{\sigma}_k (x)}{n-2F}.$$ This is, essentially, a continuous-time version of the algorithm proposed in [@leblanc1], where each agent is directed towards the average of the $n-2F$ agents’ opinions of intermediate value (as achieved in [@leblanc1] by using the sorting and reducing maps). It is easy to see that (\[forcomparison\]) is a monotone cooperative network; moreover, we claim that for all $J$ of cardinality $F+1$ and any agent $i$ it holds that $J \rightarrow i$. To this end, let $J$ be a subset of cardinality $F+1$ and let $x_J>x_i$ be the common value associated with the agents in $J$. All agents not in $J$, including agent $i$, have the value $x_i$ instead. Clearly $\bar{\sigma}_k(x) = x_J$ for all $k=1 \ldots F+1$ and $\bar{ \sigma}_k (x) = x_i$ for all $k= F+2 \ldots n$. In particular, then: $$\dot{x}_i = - x_i + \frac{ x_J + (n - 2 F - 1) x_i}{n - 2F} = \frac{ x_J - x_i }{ n -2F },$$ which proves a joint influence of the agents in $J$ towards $i$ from above. Similar results hold when $x_J < x_i$. Moreover, $J \rightarrow i$ is a minimal influence. In fact, any proper subset of $J$ consists of at most $F$ elements and therefore, assuming their value is $x_J$ while $x_i$ is the value of the other agents, a simple computation shows that $$\dot{x}_i = - x_i + \frac{ (n - 2F) x_i }{n - 2F} = 0,$$ thus ruling out the possibility of joint influences for sets of agents of cardinality $F$ or lower. Similar computations can be carried out when the network is not of all-to-all type, and each agent has a specific set of neighbours that get sorted, reduced and averaged upon.
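Both derivative computations above can be reproduced numerically. A minimal sketch of the reduced-average dynamics (\[forcomparison\]) with $n=5$, $F=1$ (the function name is ours):

```python
def xdot(x, F):
    """Right-hand side of the reduced-average protocol: each agent is
    driven toward the mean of the n-2F intermediate opinion values."""
    n = len(x)
    mids = sorted(x, reverse=True)[F:n - F]   # sigma_{F+1}, ..., sigma_{n-F}
    avg = sum(mids) / (n - 2 * F)
    return [-xi + avg for xi in x]

n, F = 5, 1
# |J| = F+1 agents at the common value x_J = 3, the rest at x_i = 1:
# derivative of an agent at 1 equals (x_J - x_i)/(n - 2F) = 2/3
d1 = xdot([3, 3, 1, 1, 1], F)
# |J| = F agents at 3: the intermediate values are untouched, derivative 0
d2 = xdot([3, 1, 1, 1, 1], F)
```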
Conclusions
===========
This paper has explored tight necessary and sufficient conditions for continuous-time Adversarially Robust Consensus protocols for networks with joint-agent interactions of arbitrary topology. This captures, as a particular case, the notion of ARC consensus studied in the discrete-time case by LeBlanc and co-workers using *sorting* and *selection* maps. Consensus is achieved in the face of agents that behave as arbitrary bounded disturbances and are only constrained to broadcasting the same information to all of the neighbours they have an influence upon. In this respect, the problem of *Byzantine* consensus, where agents may maliciously or unintentionally send different information to distinct neighbours, is an interesting open question for further research. Conditions are formulated in the language of Petri Nets, in particular making use of the notion of *controlled siphon*, in which faulty agents play the role of a ‘switch’ capable of disabling some influences by suitably positioning themselves above or below the value of the agents within the same joint-agent interaction. An example is presented to illustrate the applicability of the considered results. It does not fall within the class of networks considered in [@leblanc3], since each agent only has two distinct groups of neighbours (the vertical and horizontal ones in the picture), which are treated separately when cross-validating information in joint-agent interactions. In particular, the equations considered can never be achieved by means of sorting and selection functions.
[99]{} D. Angeli and S. Manfredi, On consensus protocols allowing joint-agent interactions, *IEEE Conf. on Decision and Control* submitted. W. Abbas, Y. Vorobeychik and X. Koutsoukos, Resilient Consensus Protocol in the Presence of Trusted Nodes, *7th International Symposium on Resilient Control Systems*, Denver (CO), U.S., August 2014. R. Olfati-Saber, J.A. Fax and R.M. Murray, Consensus and Cooperation in Networked Multi-Agent Systems, *Proceedings of the IEEE*, vol. 95, N. 1, pp. 215-233, 2007.
J. M. Hendrickx and J. N. Tsitsiklis. Convergence of type-symmetric and cut-balanced consensus seeking systems. [*IEEE Transactions on Automatic Control, 58, 1, 2013*]{}. S. Martin, J. M. Hendrickx, Continuous-time consensus under non-instantaneous reciprocity, in *IEEE Transactions on Automatic Control, 61, 9, 2484 –-2495, 2016*. L. Moreau. Stability of multiagent systems with time-dependent communication links. [*IEEE Transactions on Automatic Control, 50, 2, 2005*]{}. H. LeBlanc and X. Koutsoukos, Consensus in Networked Multi-Agent Systems with Adversaries, *Proc. of HSCC’2011*, Chicago, IL, USA, 2011. H. LeBlanc and X. Koutsoukos, Low Complexity Resilient Consensus in Networked Multi-Agent Systems with Adversaries, *Proc. of HSCC’2012*, Beijing, China, 2012. H. LeBlanc, H. Zhang, S. Sundaram and X. Koutsoukos, Consensus of Multi-Agent Networks in the Presence of Adversaries Using Only Local Information, *Proc. of HiCoNS’12*, Beijing, China, 2012. Z. Lin, B. Francis, and M. Maggiore. State agreement for continuous time coupled nonlinear systems. *SIAM J. Contr., 46, 1*, 2007. S. Manfredi, D. Angeli. A criterion for exponential consensus of time-varying non-monotone nonlinear networks. [*IEEE Transactions on Automatic Control, 62, 5, 2483–2489, 2016*]{}.
S. Manfredi, D. Angeli. Necessary and Sufficient Conditions for Consensus in Nonlinear Monotone Networks with Unilateral Interactions. *Automatica, 77, 51 –- 60*, 2017.
Y. D. Zhong, V. Srivastava and N. E. Leonard. On the Linear Threshold Model for Diffusion of Innovations in Multiplex Social Networks. *To appear in Proc. IEEE Conference on Decision and Control*, 2017.
M. A. Javarone. Social Influences in Opinion Dynamics: the Role of Conformity. *Physica A: Statistical Mechanics and its Applications*, 2014.
C. Alós-Ferrer, S. Hügelschäfer and J. Li. Inertia and Decision Making. *Frontiers in Psychology, Volume 7*, 2016.
M. Baddeley, Herding, social influence and economic decision-making: socio-psychological and neuroscientific analyses. *Phil. Trans. R. Soc. B, 365, 281–290, 2010*.
D.P. Bertsekas and J.N. Tsitsiklis,*Parallel and Distributed Computation: Numerical Methods*, Prentice Hall, 1989. S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein, *Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers*, Foundations and Trends in Machine Learning, Vol. 3, N.1., p. 1-122, 2010 C. Altafini, Consensus problems on networks with antagonistic interactions, *IEEE Transactions on Automatic Control, Vol. 58, No. 4, 2013*. S. Manfredi, D. Angeli. Frozen state conditions for exponential consensus of time-varying cooperative nonlinear networks. [*Automatica, 63, 182 – 189, 2016*]{}. A. Sarlette and R. Sepulchre, Consensus Optimization on manifolds, *SIAM Journal on Control and Optimization*, Vol. 48, N.1, pp. 56-76, 2009.
P. Resnick, K. Kuwabara, R. Zeckhauser, E. Friedman, [*Reputation systems*]{}, ACM 43, 12, 45–48., 2000 S.-R. Yan, X.-L. Zheng, Y. Wang, W.W. Song, W.-Y. Zhang, *A graph-based comprehensive reputation model: Exploiting the social context of opinions to enhance trust in social commerce, Inform*. Sci. 318, 51–72, 2015 X.-L.Zheng,C.-C.Chen,J.-L.Hung,W.He,F.-X.Hong,Z.Lin, *A hybrid trust-based recommender system for online communities of practice*, IEEE Transactions Learn. Technol. 8, 4, 345–356, 2015. B. Mortazavi, and G. Kesidis, *Cumulative Reputation Systems for Peer-to-Peer Content Distribution,* Proceeding NOSSDAV ’03 Proceedings of the 13th international workshop on Network and operating systems support for digital audio and video 144-152, 2003 A. Jsang, R. Ismail, and C. Boyd. *A survey of trust and reputation systems for online service provision.* Decision Support Systems, 43, 2, 618-644, 2007
W.Jiang, J.Wu, F.Li,G.Wang, H.Zheng,*Trust evaluation in online social networks using generalized flow*, IEEE Transactions Comput. 65,3, 952–963, 2016 K. Fujimura, T. Nishihara, *Reputation rating system based on past behavior of evaluators*, in: Proceedings of the 4th ACM Conference on Electronic Commerce, EC’03, ACM, New York, NY, USA, 246–247, 2003 X.-L. Liu, Q. Guo, L. Hou, C. Cheng, J.-G. Liu, *Ranking online quality and reputation via the user activity*, Physica A 436, 629–636, 2015 Liao H, Zeng A, Xiao R, Ren Z-M, Chen D-B, Zhang Y-C *Ranking Reputation and Quality in Online Rating Systems*. PLoS ONE 9, 5: e97146, 2014 S. Kamvar, M. Schlosser, and H. Garcia-Molina. *The Eigen- Trust algorithm for reputation management in P2P networks*. In Proceedings of the Twelwth International World-Wide Web Conference (WWW03), pages 446–458, 2003. M. Kinateder and K. Rothermel. *Architecture and algo- rithms for a distributed reputation system.* In Proceedings of Trust Management: First International Conference (iTrust 2003), LNCS, pages 1–16. Springer-Verlag, May 2003.
J.Gao,Y.-W.Dong,M.-S.Shang,S.-M.Cai,T.Zhou, *Group-based ranking method for on line rating systems with spamming attacks*, Europhys. Lett.110, 2, 28003, 2015 Jian Gao, Tao Zhou, *Evaluating user reputation in online rating systems via an iterative group-based ranking method,* Physica A 473, 546–560, 2017 M. Pease, R. Shostak and L. Lamport, Reaching Agreement in the Presence of Faults, *Journal of the ACM*, Vol. 27, N. 2, pp. 228-234, 1980.
---
abstract: |
Let $\{w_{i,j}\}_{1\leq i\leq n, 1\leq j\leq s} \subset
L_m=F(X_1,\dots,X_m)[{\partial
\over
\partial X_1},\dots, {\partial \over
\partial X_m}]$ be linear partial differential operators of orders at most $d$ with respect to ${\partial \over \partial X_1},\dots, {\partial \over \partial X_m}$. We prove an upper bound $$\begin{aligned}
n(4m^2d\min\{n,s\})^{4^{m-t-1}(2(m-t))} \nonumber\end{aligned}$$ on the leading coefficient of the Hilbert-Kolchin polynomial of the left $L_m$-module $\langle \{w_{1,j}, \dots , w_{n,j}\}_{1\leq j \leq s} \rangle \subset L_m^n$ having differential type $t$ (which also equals the degree of the Hilbert-Kolchin polynomial). The main technical tool is the complexity bound on solving systems of linear equations over [*algebras of fractions*]{} of the form $$L_m(F[X_1,\dots , X_m, {\partial
\over
\partial X_1},\dots, {\partial \over
\partial X_k}])^{-1}.$$
author:
- |
Dima Grigoriev\
IRMAR, Université de Rennes\
Beaulieu, 35042, Rennes, France\
[[email protected]]{}\
http://name.math.univ-rennes1.fr/dimitri.grigoriev
title: 'Weak Bézout inequality for D-modules'
---
Introduction {#introduction .unnumbered}
============
Denote the derivatives $D_i={\partial \over \partial X_i}, \ 1\leq i \leq m$, and by $A_m=F[X_1,\dots , X_m, D_1, \dots , D_m]$ the [*Weyl algebra*]{} [@B] over an infinite field $F$. It is well-known that $A_m$ is defined by the following relations: $$X_iX_j=X_jX_i,\quad D_iD_j=D_jD_i,\quad X_iD_i=D_iX_i-1,\quad X_iD_j=D_jX_i,\ i\neq j.$$
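The commutation rule $X_iD_i=D_iX_i-1$, equivalently $D_iX_i=X_iD_i+1$, i.e. $D\circ X = X\circ D + \mathrm{id}$ as operators, is just the Leibniz rule in disguise. It can be sanity-checked by letting the operators act on one-variable polynomials stored as coefficient lists (a toy check of ours, not part of the paper):

```python
def D(p):
    """d/dx on a polynomial given as a coefficient list p[k] = coeff of x^k."""
    return [k * c for k, c in enumerate(p)][1:] or [0]

def X(p):
    """Multiplication by x (shifts all coefficients up by one degree)."""
    return [0] + p

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

p = [1, 2, 3]             # the polynomial 1 + 2x + 3x^2
lhs = D(X(p))             # (D X) p
rhs = add(X(D(p)), p)     # (X D + 1) p
```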
For a family $\{w_{i,j}\}_{1\leq i\leq n, 1\leq j\leq s} \subset L_m$ of elements of the [*algebra of linear partial differential operators*]{} one can consider a system $$\label{-1}
\sum_{1\leq i\leq n} w_{i,j}u_i=0, \quad 1\leq j\leq s$$ of linear partial differential equations in the unknowns $u_1,\dots , u_n$. In particular, if the $F$-linear space of solutions of (\[-1\]) has a finite dimension $l$ then the quotient of the free $L_m$-module $L_m^n$ over the left $L_m$-module $L=
\langle \{w_{1,j}, \dots , w_{n,j}\}_{1\leq j \leq s} \rangle \subset L_m^n$ also has dimension $l$ over the field $F(X_1,\dots , X_m)$ [@K]. Denote by $t$ the [*differential type*]{} of $L$ [@K]; then $0\leq t \leq m$ (observe that the case treated in the previous sentence corresponds to $t=0$).
We consider the filtration on the algebra $L_m$ defined on the monomials by $ord(cD_1^{i_1}\cdots D_m^{i_m})=i_1+\cdots +i_m$, where the coefficient $c\in F(X_1,\dots , X_m)$. With respect to this filtration the module $L$ possesses the Hilbert-Kolchin polynomial [@K] $${l \over t!}z^t+l_{t-1}z^{t-1}+\cdots +l_0$$ of degree $t$ (which coincides with the differential type of $L$). The leading coefficient $l$ is called the [*typical differential dimension*]{} [@K]. In the particular case $t=0$ treated above, the dimension of the $F$-linear space of solutions of (\[-1\]) equals $l$.
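For intuition on these notions: when the leading terms of a (Gröbner- or Janet-type) basis of the submodule are monomials in $D_1,\dots,D_m$, the Hilbert-Kolchin function reduces to counting standard monomials. The toy count below (our illustration, not the method of the paper) recovers $t=1$ and typical differential dimension $l=2$ for the leading-monomial ideal generated by $D_1^2$ with $m=2$:

```python
from itertools import product

def hk_function(lead_monomials, m, z):
    """Count monomials D^I with |I| <= z not divisible by any of the given
    leading monomials (exponent tuples): the Hilbert-Kolchin function of the
    corresponding monomial left ideal."""
    count = 0
    for I in product(range(z + 1), repeat=m):
        if sum(I) <= z and not any(all(I[k] >= g[k] for k in range(m))
                                   for g in lead_monomials):
            count += 1
    return count

# leading monomial D_1^2, m = 2: HK function 2z + 1 for z >= 1,
# hence degree t = 1 and leading coefficient (l/t!) = 2, i.e. l = 2
vals = [hk_function([(2, 0)], 2, z) for z in range(1, 7)]
```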
In the present paper we prove (see Section \[bezout\]) the following inequality which could be viewed as a weak analogue of the Bézout inequality for differential modules. Let $ord(w_{i,j})\leq d, \quad 1\leq i \leq n, 1\leq j \leq s$. Then the leading coefficient of the Hilbert-Kolchin polynomial $$\begin{aligned}
l\leq n(4m^2d\min\{n,s\})^{4^{m-t-1}(2(m-t))} \nonumber
\end{aligned}$$
Actually, one could slightly improve this estimate at the price of making it more tedious. We note that the latter estimate becomes better for smaller values of $m-t$. In fact, for small values $m-t\leq 2$ much stronger estimates are known. In the case $m-t=0$ the bound $l\leq n$ is evident. In the case $m-t=1$ the bound $l\leq \max_{1\leq i\leq s} \{ord(w_{i,1})\} +\cdots + \max_{1\leq i\leq s} \{ord(w_{i,n})\}$ was proved in [@K] (moreover, the latter bound holds in the more general situation of [*non-linear*]{} partial differential equations, whereas in the situation of [*linear*]{} partial differential equations considered in the present paper a stronger [*Jacobi conjecture*]{} was established, see e.g. [@P]). In the case $m-t=2, \quad n=1$ the bound $l\leq ord(w_1)ord(w_2)$ was proved for the left ideal $\langle w_1, w_2, \dots \rangle \subset L_m$, where $ord(w_1)\geq ord(w_2) \geq \dots$ [@P], which could be viewed as a direct analogue of the Bézout inequality. In the case $m=3, \quad t=0, \quad n=1$ a counter-example of a left ideal $\langle w_1,w_2,w_3 \rangle \subset L_3$ is also produced in [@P], which shows that the expected upper bound $ord(w_1)ord(w_2)ord(w_3)$ on $l$ appears to be wrong. It would be interesting to clarify how sharp the estimate in Corollary \[bez\] is for large values of $m-t$.
The main technical tool in the proof of Corollary \[bez\] is the complexity bound on solving linear systems over [*algebras of fractions*]{} of $L_m$. Let $K\subset
\{1,\dots , m\}$ be a certain subset. Denote by $A_m^{(K)}=F[X_1,\dots , X_m,
\{D_k\}_{k
\in K}] \subset A_m$ the corresponding subalgebra of $A_m$. We consider the algebra of fractions $Q_m^{(K)}=A_m(A_m^{(K)})^{-1}$. For an element $a\in A_m$ we denote by $deg(a)$ the Bernstein filtration [@B], defined on a monomial $X_1^{j_1} \cdots X_m^{j_m} D_1^{i_1} \cdots D_m^{i_m}$ as $j_1+\cdots +j_m+i_1+\cdots +i_m$. Then for an element $ab^{-1}\in Q_m^{(K)}$, with $a\in A_m, \ b\in A_m^{(K)}$, we write $deg(ab^{-1})\leq \max \{deg(a), deg(b)\}$.
In Section \[fraction\] below we study the properties of $Q_m^{(K)}$ and the complexity bounds on manipulating in $Q_m^{(K)}$. In Section \[matrice\] we establish complexity bounds on [*quasi-inverse*]{} matrices over the algebra $Q_m^{(K)}$. Finally, in Section \[system\] we consider the problem of solving a system of linear equations over the algebra $Q_m^{(K)}$: $$\label{1}
\sum_{1\leq i\leq p}a_{j,i}V_i=a_j, \quad 1\leq j\leq q$$ where the coefficients $a_{j,i}, a_j \in A_m, \quad deg(a_{j,i}), deg(a_j) \leq
d$. We prove the following theorem. If (\[1\]) is solvable over $Q_m^{(K)}$ then (\[1\]) has a solution with $$\begin{aligned}
deg(v_i)\leq (16m^4d^2(\min\{p,q\})^2)^{4^{m-|K|}} \nonumber\end{aligned}$$
Assume now that the ground field $F$ is represented in an efficient way, say as a finitely generated extension either of $\mathbb Q$ or of a finite field (see e.g. [@G86]). Then one can define the bit-size $M$ of the coefficients in $F$ of the input $\{a_{j,i}, a_j\}$. One can test the solvability of (\[1\]) and, if it is solvable, produce some solution of it in time polynomial in $$\begin{aligned}
M,\enskip q, \enskip p^m, \enskip (md\min\{p,q\})^{4^{m-|K|}m} \nonumber\end{aligned}$$
Theorem \[solution\] and Corollary \[time\] generalize the results from [@G90] established for the algebra $Q_m^{(\emptyset)}=L_m$ of linear differential operators to the algebras of fractions $Q_m^{(K)}$. In [@G90] it is noticed that due to the example of [@M] the bounds in Theorem \[solution\] and Corollary \[time\] are close to sharp.
The problem in question generalizes that of solving linear systems over the algebra of polynomials, which was studied in [@S], where similar complexity bounds were proved. Unfortunately, one cannot directly extend the method of [@S] (which goes back to G. Hermann) to the ([*non-commutative*]{}) algebra $Q_m^{(K)}$ because the method involves determinants. Nevertheless, we exploit the general approach of [@S].
We also mention that certain algorithmic problems in the algebra of linear partial differential operators were posed in [@G].
Algebra of fractions of differential operators {#fraction}
==============================================
Let a matrix $B=(b_{i,j}), \quad 1\leq i\leq p-1, 1\leq j \leq p$ have its entries $b_{i,j} \in A_m^{(K)}$ and $deg(b_{i,j})\leq d$. The following lemma was proved in [@G90]. There exists a vector $0\neq c=(c_1,\dots , c_p) \in (A_m^{(K)})^p$ such that $Bc=0$ and moreover, $deg(c)\leq 2(m+|K|)(p-1)d=N$.
Consider an $F$-linear space $U\subset (A_m^{(K)})^p$ consisting of all the vectors $c=(c_1,\dots , c_p)$ such that $deg(c)\leq N$. Then $\dim U =p {N+m+|K| \choose
m+|K|}$. For any vector $c\in U$ we have $deg(Bc)\leq N+d$, i.e. $Bc \in W$ where the $F$-linear space $W$ consists of all the vectors $w=(w_1,\dots , w_{p-1})\in
(A_m^{(K)})^{p-1}$ for which $deg(w)\leq N+d$, thereby $\dim (W)=(p-1){N+d+m+|K| \choose m+|K|}$.
Let us verify an inequality $p {N+m+|K| \choose m+|K|} > (p-1){N+d+m+|K| \choose
m+|K|}$, whence the lemma would follow immediately. Indeed, $${N+d+m+|K| \choose m+|K|}/{N+m+|K| \choose m+|K|}={N+d+m+|K| \over N+m+|K|}\cdots
{N+d+1 \over N+1}\leq \Bigl( {N+d+1 \over N+1}\Bigr) ^{m+|K|}.$$ It suffices to check the inequality $({N+d+1 \over N+1})^{m+|K|} < {p \over p-1}$. The latter follows in its turn from the inequality $$\begin{aligned}
(1+{1 \over p-1})^{1/(m+|K|)} > 1 + \Bigl({1\over m+|K|}\Bigr) {1 \over p-1} +
{1\over 2}\Bigl({1\over m+|K|}\Bigr) \Bigl({1\over m+|K|}-1\Bigr) {1\over (p-1)^2}
>
\nonumber \\
1+{1\over 2}\Bigl({1\over m+|K|}\Bigr) {1 \over p-1} > 1+{d \over N+1}
\nonumber\end{aligned}$$
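The dimension count driving this proof is easily probed numerically; in the sketch below (ours), `mu` stands for $m+|K|$ and $N=2\mu(p-1)d$ as in Lemma \[vector\]:

```python
from math import comb
from itertools import product

def dimension_gap(mu, p, d):
    """Return (dim U, dim W) with N = 2*mu*(p-1)*d:
    dim U = p*C(N+mu, mu) and dim W = (p-1)*C(N+d+mu, mu)."""
    N = 2 * mu * (p - 1) * d
    return p * comb(N + mu, mu), (p - 1) * comb(N + d + mu, mu)

# dim U > dim W guarantees a non-zero kernel vector of degree <= N
checks = [dimension_gap(mu, p, d)
          for mu, p, d in product([1, 2, 3, 4], [2, 3, 5, 8], [1, 2, 5])]
ok = all(u > w for u, w in checks)
```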
Notice that Lemma \[vector\] implies that $A_m^{(K)}$ is an Ore domain [@B], i.e. the expressions of the form $b_1b_2^{-1}$, where $b_1,b_2 \in A_m^{(K)}$, constitute an algebra. Below we use the following notation: letters $a,\alpha$ (respectively, $b, \beta$) with subscripts denote elements of $A_m$ (respectively, of $A_m^{(K)}$). Our immediate purpose is to show that the expressions of the form $ab^{-1}$ also constitute an algebra $Q_m^{(K)}=A_m(A_m^{(K)})^{-1}$ (see the Introduction above) and to provide complexity bounds on performing arithmetic operations in $Q_m^{(K)}$. To verify that the sum $a_1b_1^{-1}+a_2b_2^{-1}$ can be represented in the desired form $a_3b_3^{-1}$ we note first the following bound on a [*(left) common multiple*]{} of a family of elements from $A_m^{(K)}$, which is a consequence of Lemma \[vector\]. For a family $b_1,\dots , b_p \in A_m^{(K)}$ with degrees $deg(b_1), \dots , deg(b_p) \leq d$ there exist $c_1, \dots , c_p \in A_m^{(K)}$ such that $b_1c_1=\dots =b_pc_p \neq 0$, with degrees $deg(c_1),\dots , deg(c_p) \leq 2(m+|K|)(p-1)d$.
Evidently, the same bound also holds for a [*right*]{} common multiple of $b_1,\dots , b_p$, which equals $c_1^{'}b_1=\dots =c_p^{'}b_p$.
To complete the consideration of the sum one can find $c_1,c_2 \in A_m^{(K)}$ such that $b=b_1c_1=b_2c_2$ according to Corollary \[multiple\], then $a_1b_1^{-1}+a_2b_2^{-1}=a_1c_1b^{-1}+a_2c_2b^{-1}=(a_1c_1+a_2c_2)b^{-1}$.
For an element $a\in A_m$ we denote by $ord^{(K)}(a)$ the filtration degree of $a$ with respect to the symbols $\{ {\partial \over \partial X_j} \}$ for $j \not \in K$ and by $deg^{(K)}(a)$ the filtration degree of $a$ with respect to the symbols $X_1,\dots , X_m, \{ {\partial \over
\partial X_k} \}$ for $k\in K$.
Next we verify that $(A_m^{(K)})^{-1}A_m=A_m(A_m^{(K)})^{-1}$ relying on the following lemma. Let $a\in A_m, \quad b\in A_m^{(K)}$ be such that $ deg^{(K)}(a), deg^{(K)}(b)\leq
d,
\quad
ord^{(K)}(a)=e$. Then there exist suitable elements $\alpha \in A_m, \quad \beta
\in
A_m^{(K)}$ such that $b\alpha=a\beta$ (or in other terms $\alpha \beta ^{-1}=b^{-1}a$) and moreover, $ord^{(K)}(\alpha)\leq ord^{(K)}(a), \quad deg^{(K)}(\alpha), deg^{(K)}(\beta)
\leq
2(m+|K|){e+m-|K| \choose e}d$.
Write down $\alpha= \sum_I D^I\beta_I$, where the indeterminates $\beta_I \in
A_m^{(K)}$ and the summation ranges over all the derivatives $D^I=\prod _{j\notin
K}
D_j^{i_j}$ with the orders $\sum _{j\notin K} i_j \leq e$. In a similar manner $a=\sum _I D^Ib_I$. Then the equality $b\alpha=a\beta$ turns into a linear system in ${e+m-|K| \choose e}$ equations in ${e+m-|K| \choose e}+1$ indeterminates $\beta,
\{ \beta _I \} _I$. Applying Lemma \[vector\] to this system, we complete the proof.
Lemma \[denominator\] entails that the product of two elements $a_1b_1^{-1}$ and $a_2b_2^{-1}$ from $Q_m^{(K)}$ again has the same form $a_3b_3^{-1}$: indeed, let $b_1^{-1}a_2=a_4b_4^{-1}$ for appropriate $a_4\in A_m, \quad b_4\in A_m^{(K)}$; then $a_1b_1^{-1}a_2b_2^{-1}=a_1a_4(b_2b_4)^{-1}$.
Finally, to complete the description of the algebra $Q_m^{(K)}$ we need to verify that the relation $\alpha \beta^{-1}=b^{-1}a\in Q_m^{(K)}$ being defined as $b\alpha=a\beta$, induces an equivalence relation on $Q_m^{(K)}$. To this end it suffices to show that the equalities $\alpha_1\beta_1^{-1}=b_1^{-1}a_1, \quad
b_1^{-1}a_1=\alpha_2\beta_2^{-1},
\quad \alpha_2\beta_2^{-1}=b_2^{-1}a_2$ imply the equality $\alpha_1\beta_1^{-1}=
b_2^{-1}a_2$. Due to Corollary \[multiple\] there exist $a_3,a_4\in A_m$ such that $a_3a_1=a_4a_2$, hence $a_4b_2\alpha_2=a_4a_2\beta_2=a_3a_1\beta_2=a_3b_1\alpha_2$, therefore $a_4b_2=a_3b_1$. Because of that $a_4b_2\alpha_1=a_3b_1\alpha_1=a_3a_1\beta_1=a_4a_2\beta_1$, thus $b_2\alpha_1=a_2\beta_1$ that was to be shown.
The following corollary summarizes the established above properties of the algebra $Q_m^{(K)}$. In the algebra of fractions $Q_m^{(K)}=A_m(A_m^{(K)})^{-1}=(A_m^{(K)})^{-1}A_m$ two elements $a_1b_1^{-1}, a_2b_2^{-1} \in A_m(A_m^{(K)})^{-1}$ are equal if and only if there exists an element $\beta^{-1}\alpha \in (A_m^{(K)})^{-1}A_m$ such that $\beta a_1=\alpha b_1, \quad \beta a_2=\alpha b_2$.
Quasi-inverse matrices over algebras of differential operators {#matrice}
==============================================================
Let us call a $p\times p$ matrix $C=(c_{i,j})$ a right (respectively, left) [*quasi-inverse*]{} of a $p\times p$ matrix $B=(b_{i,j})$, where the entries $c_{i,j},b_{i,j}\in A_m^{(K)}$, if the matrix $BC$ (respectively, $CB$) has diagonal form with non-zero diagonal entries. The following lemma was proved in [@G90]. If a $p\times p$ matrix $B$ over $A_m^{(K)}$ has a right quasi-inverse (we assume that $deg(B)\leq d$) then $B$ also has a left quasi-inverse $C$ over $A_m^{(K)}$ such that $deg(C)\leq 2(m+|K|)(p-1)d$.
First observe that there does not exist a vector $0\neq b \in
(A_m^{(K)})^p$ for which $bB=0$ since $A_m^{(K)}$ is a domain (see [@B] and also Section \[fraction\]). Consider the $p\times (p-1)$ matrix $B^{(i)}$ obtained from $B$ by deleting its $i$-th column, $1\leq i \leq
p$. Due to Lemma \[vector\] there exists a vector $0\neq c^{(i)} \in (A_m^{(K)})^p$ such that $c^{(i)} B^{(i)} =0$ and $deg(c^{(i)})\leq 2(m+|K|)(p-1)d$. Then the $p\times p$ matrix with the rows $c^{(i)}, \quad 1\leq i \leq
p$ is a left quasi-inverse of $B$.
We note that a matrix $G$ over $A_m$ (or over $Q_m^{(K)}$) has a quasi-inverse if and only if $G$ is [*non-singular*]{}, i.e. has an inverse over the skew-field $Q_m^{(\{1,\dots , m\})}=A_m(A_m)^{-1}$. The latter is equivalent to $G$ having a non-zero Dieudonné determinant [@A]. The rank $r=rk(G)$ is defined as the maximal size of the non-singular submatrices of $G$. The following lemma was proved in [@G90]. Let $G=(g_{i,j})$ be a $p_1\times p_2$ matrix over $A_m^{(K)}$ with the rank $rk(G)=r$ and assume that the $r\times r$ submatrix $G_1$ of $G$ in its upper-left corner is non-singular. Let an $r\times r$ matrix $C_1$ over $A_m^{(K)}$ be a left quasi-inverse of $G_1$. Then one can find a $(p_1-r)\times r$ matrix $C_2$ over the algebra $Q_m^{(K)}$ such that $$\left(\begin{array}{cc} C_1 & 0 \\ C_2 & E \end{array} \right) G=
\left(
\begin{array}{ccc|c}
g_1& & 0 &\\
&\ddots & & *\\
0 & & g_r &\\
\hline
& \mathbf{0} & & \mathbf{0}\\
\end{array}
\right)$$ where $E$ denotes the unit matrix.
The matrix $C_2$ is determined uniquely by the requirement that the lower-left corner of the matrix product on the right-hand side is zero. Then the lower-right corner is zero as well, by the definition of the rank.
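The shape of this block reduction is easy to see in the commutative case with exact rational arithmetic; in the noncommutative setting the entries of $C_1$ and $C_2$ live in $A_m^{(K)}$ and $Q_m^{(K)}$ instead, but the bookkeeping is the same. A sketch for a rank-$2$ matrix of size $3\times 3$ (all names are ours):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# G has rank 2: the third row is the sum of the first two.
G = [[F(1), F(2), F(3)],
     [F(0), F(1), F(1)],
     [F(1), F(3), F(4)]]

# C1 = adjugate of the upper-left 2x2 block G1, so that C1 @ G1 = det(G1) * I.
a, b, c, d = G[0][0], G[0][1], G[1][0], G[1][1]
detG1 = a * d - b * c
C1 = [[d, -b], [-c, a]]

# C2 kills the lower-left corner: C2 = -G_low @ G1^{-1} = -(G_low @ C1) / det(G1).
G_low = [G[2][:2]]
C2 = [[-sum(G_low[0][k] * C1[k][j] for k in range(2)) / detG1 for j in range(2)]]

# Assemble the block matrix ((C1, 0), (C2, E)) and multiply by G.
T = [C1[0] + [F(0)], C1[1] + [F(0)], C2[0] + [F(1)]]
print(matmul(T, G))  # diagonal top-left block, star column, zero bottom row
```

The product has exactly the diagonal-trapezium shape displayed in the lemma.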
We proceed to solving system (\[1\]). Denote $r=rk(a_{j,i})$. After renumbering the rows and columns one can suppose the $r\times r$ submatrix in the upper-left corner of $(a_{j,i})$ to be non-singular. Applying Lemma \[quasi-inverse\] to the $r\times r$ submatrix $(a_{j,i}), \quad 1\leq i,j\leq r$ one gets a matrix $C_1$; subsequently applying Lemma \[block\] one gets a matrix $C_2$. If the vector $(C_2\enskip E)(a_1, \dots , a_q)$ does not vanish then system (\[1\]) has no solutions. Otherwise, if $(C_2\enskip E)(a_1, \dots , a_q)=0$ then system (\[1\]) is equivalent to a linear system over $Q_m^{(K)}$ of the following form (see Lemma \[block\]): $$g_jV_j+\sum_{r+1\leq i\leq p} g_{j,i}V_i=f_j, \quad 1\leq j \leq r$$ where $g_j,g_{j,i},f_j \in A_m$. Lemma \[quasi-inverse\] implies that $deg(g_j),deg(g_{j,i}),deg(f_j)\leq (4m(r-1)+1)d$. Fix for the time being a certain $i, \quad r+1\leq i \leq p$. Applying Lemma \[vector\] to the $r\times (r+1)$ submatrix which consists of the first $r$ columns and of the $i$-th column of the matrix in the left-hand side of (\[2\]), we obtain $h_1^{(i)},\dots , h_r^{(i)},h^{(i)} \in A_m$ such that $$g_jh_j^{(i)}+g_{j,i}h^{(i)}=0, \quad 1\leq j \leq r.$$ Moreover, $deg(h_j^{(i)}), deg(h^{(i)})\leq 4mr(4m(r-1)+1)d \leq (16m^2r^2-1)d$.
Complexity of solving a linear system over an algebra of fractions of differential operators {#system}
============================================================================================
In the present section we design an algorithm to solve a linear system (\[2\]) over $Q_m^{(K)}$.
Fix for the time being a certain $\gamma \notin K$. An arbitrary element $h\in
A_m$ can be written as $$h=\sum_{0\leq s \leq t} D_{\gamma}^s h_s=\sum_{S=\{s_{\delta}\}_{\delta \notin K}} (\prod_{\delta \notin K} D_{\delta}^{s_{\delta}})h_S$$ where $h_s\in A_m^{(\{1,\dots ,m\} \setminus \gamma)}, \quad h_S\in A_m^{(K)}$. Denote the leading coefficient $lc_{\gamma}(h)=h_t\neq 0$. We say that $h$ is [*normalized with respect to $D_{\gamma}$*]{} when $lc_{\gamma}(h) \in A_m^{(K)}$. The following lemma plays the role of the normalization for the algebra $Q_m^{(K)}$ (cf. Lemma 2.3 [@Sit] or Lemma 4 [@G90]). For any finite family $H=\{h\}\subset A_m$ there exists a non-singular $F$-linear transformation of the $2(m-|K|)$-dimensional $F$-linear subspace of $A_m$ with the basis $\{X_{\delta}, {\partial \over \partial X_{\delta}}\}_{\delta \notin K}$ under which the vector $\{{\partial \over \partial X_{\delta}}\}_{\delta \notin K}$ is transformed as follows: $$\{{\partial \over \partial X_{\delta}}\}_{\delta \notin K} \rightarrow
\Omega \{{\partial \over \partial X_{\delta}}\}_{\delta \notin K}$$ where the $(m-|K|)\times (m-|K|)$ matrix $\Omega=(\omega_{\delta_1, \delta}),
\quad
\omega_{\delta_1, \delta} \in F$, and the vector $$\{X_{\delta}\}_{\delta \notin K} \rightarrow (\Omega^T)^{-1}
\{X_{\delta}\}_{\delta \notin K}$$ such that any transformed (under the transformation continued to $A_m$) element $\overline {h} \in A_m$ for $h\in H$ is normalized with respect to $D_{\gamma}$. Moreover, $deg_{D_{\gamma}}(\overline {h})=
ord^{(K)}(\overline {h})$.
One can verify that this linear transformation preserves the relations (\[0\]); therefore, one can consider $A_m$ as a Weyl algebra with respect to the variables $\{X_k\}_{k\in K} \cup (\Omega^T)^{-1}
\{X_{\delta}\}_{\delta \notin K}$ and the corresponding differential operators $\{ {\partial \over \partial X_k}\} \cup \Omega \{{\partial \over
\partial X_{\delta}}\}_{\delta \notin K}$ (cf. also [@G90]).
We rewrite (\[4\]) as $$h=\sum_{S_0=\{s_{\delta}\}_{\delta \notin K}}
(\prod_{\delta \notin K} D_{\delta}^{s_{\delta}})h_{S_0} + \Sigma_1$$ where in the first sum all the terms from (\[4\]) with the maximal value of the sum $\sum_{s_{\delta} \in S_0} s_{\delta} = ord^{(K)}(h)$ are gathered. Then the leading coefficient $$lc_{\gamma}(\overline h)= \sum_{S_0}(\prod_{\delta \notin K}
\omega_{\gamma, \delta}^{s_{\delta}}){\overline h}_{S_0} \in A_m^{(K)}.$$ Since the latter sum does not vanish if and only if the result of its linear transformation $$\sum_{S_0}(\prod_{\delta \notin K}
\omega_{\gamma, \delta}^{s_{\delta}}) h_{S_0} \in A_m^{(K)}$$ with respect to $\Omega^T$ does not vanish as well, the set of the entries $\{\omega_{\gamma, \delta}\}_{\delta \notin K}$ for which $lc_{\gamma}(\overline
h)$ does not vanish is open in the Zariski topology (and is thereby non-empty, taking into account that the ground field $F$ is infinite). Hence for an open set of the entries $\{\omega_{\gamma, \delta}\}_{\delta \notin K}$ the leading coefficients $lc_{\gamma}(\overline h)$ do not vanish for all $h \in H$. Therefore, $deg_{D_{\gamma}}(\overline {h})=ord^{(K)}(h)=ord^{(K)}(\overline {h})$ and thereby $\overline {h}$ is normalized with respect to $D_{\gamma}$.
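A toy commutative illustration of this genericity: the polynomial $h=xy$ has degree $2$ but no pure $y^{2}$ term, and a shear $x\mapsto x+\omega y$ produces one for every $\omega\neq 0$, i.e. for all $\omega$ outside a proper Zariski-closed set. The Python sketch below (hypothetical helper names; ordinary polynomials rather than differential operators) makes this explicit:

```python
from math import comb

def shear(poly, omega):
    # apply the substitution x -> x + omega * y to a polynomial {(i, j): coeff} in x, y
    out = {}
    for (i, j), c in poly.items():
        for k in range(i + 1):
            key = (i - k, j + k)
            out[key] = out.get(key, 0) + c * comb(i, k) * omega ** k
    return {e: c for e, c in out.items() if c}

h = {(1, 1): 1}        # h = x*y: degree 2, but the y^2 coefficient vanishes
print(shear(h, 0))     # omega = 0 is the non-generic choice: still no y^2 term
print(shear(h, 1))     # now the monomial (0, 2), i.e. y^2, appears with coefficient 1
```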
Applying Lemma \[normalization\] to the family $\{h^{(i)}\}_{r+1\leq i \leq p}$ constructed in (\[3\]), we can assume without loss of generality that $0\neq lc_{D_{\gamma}}(h^{(i)}) \in A_m^{(K)}, r+1\leq i\leq p$.
Consider a certain solution $v_i\in Q_m^{(K)}, \quad 1\leq i\leq p$ of system (\[2\]). Fix some $r+1\leq i\leq p$ for the time being. One can divide (from the right) $v_i$ by $h^{(i)}$ with the remainder in $Q_m^{(K)}$ with respect to $D_{\gamma}$, i.e. $v_i=h^{(i)}\phi_i+\psi_i$ for suitable $\phi_i,\psi_i \in Q_m^{(K)}$ such that $deg_{D_{\gamma}}(\psi_i)< deg_{D_{\gamma}}(h^{(i)})=t$. Let $v_i=\sum_{0\leq
s\leq t_1}
D_{\gamma}^sv_{i,s}$ where $v_{i,s} \in A_m^{(\{1,\dots ,m\} \setminus \gamma)}$ and $v_{i,t_1}=lc_{D_{\gamma}}(v_i)$. Taking into account that $h^{(i)}$ is normalized with respect to $D_{\gamma}$, one can rewrite $lc_{D_{\gamma}}(h^{(i)})D_{\gamma}^{t_1-t}
=D_{\gamma}^{t_1-t}lc_{D_{\gamma}}(h^{(i)})+\sum_{0\leq s\leq
t_1-t-1}D_{\gamma}^s\eta_s$ for appropriate $\eta_s \in A_m^{(K)}$. Thus, one can put the leading term of (the quotient) $\phi_i$ to be $\phi_{i,t_1-t}=D_{\gamma}^{t_1-t}(lc_{D_{\gamma}}(h^{(i)}))^{-1}lc_{D_{\gamma}}(v_i)
\in
Q_m^{(K)}$. Then $deg_{D_{\gamma}}(v_i-h^{(i)}\phi_{i,t_1-t})<t_1$ and one can continue the process of division with remainder, finally obtaining $\phi_i,\psi_i$.
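For $m=1$ the division with remainder just described can be made completely explicit. The following Python sketch (helper names are ours) implements the first Weyl algebra $A_1=F[x]\langle D\rangle$ with the relation $Dx=xD+1$ and divides $v=xD^{3}+D$ from the right by $h=D^{2}+x$, which is normalized with respect to $D$ since its leading $D$-coefficient is the constant $1$:

```python
from fractions import Fraction
from math import comb

# Elements of A_1 are dicts {s: p}, p being the coefficient polynomial of D^s,
# stored as a list of coefficients in x (lowest degree first).

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def pdiff(p, k):
    for _ in range(k):
        p = [i * c for i, c in enumerate(p)][1:] or [0]
    return p

def clean(h):
    return {s: p for s, p in h.items() if any(p)}

def wmul(h, g):
    # (p D^s)(q D^t) = sum_k C(s,k) p q^{(k)} D^{s+t-k}  (Leibniz rule for D^s q)
    out = {}
    for s, p in h.items():
        for t, q in g.items():
            for k in range(s + 1):
                term = pmul(p, [comb(s, k) * c for c in pdiff(q, k)])
                out[s + t - k] = padd(out.get(s + t - k, [0]), term)
    return clean(out)

def wsub(h, g):
    out = dict(h)
    for s, q in g.items():
        out[s] = padd(out.get(s, [0]), [-c for c in q])
    return clean(out)

def wdivmod(v, h):
    # right division v = h*phi + psi w.r.t. D, assuming lc_D(h) is a non-zero constant
    t, c = max(h), h[max(h)][0]
    phi, psi = {}, dict(v)
    while psi and max(psi) >= t:
        t1 = max(psi)
        f = [Fraction(a, c) for a in psi[t1]]      # next term f(x) D^{t1-t} of phi
        phi[t1 - t] = padd(phi.get(t1 - t, [0]), f)
        psi = wsub(psi, wmul(h, {t1 - t: f}))
    return phi, psi

v = {3: [0, 1], 1: [1]}        # x D^3 + D
h = {2: [1], 0: [0, 1]}        # D^2 + x, normalized with respect to D
phi, psi = wdivmod(v, h)
print(phi, psi)                # phi = x D - 2, psi = (1 - x^2) D + 2x
```

The degree in $D$ drops strictly at each step, exactly as in the argument above.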
For a fixed $1\leq j\leq r$ we multiply each of the equalities (\[3\]) for $r+1\leq i \leq p$ from the right by $\phi_i$ and subtract it from the corresponding equality (\[2\]); as a result we get a linear system equivalent to (\[2\]): $$g_j\psi_j+\sum_{r+1\leq i\leq p}g_{j,i}\psi_i=f_j, \quad 1\leq j\leq r$$ for certain $\psi_j \in Q_m^{(K)}$. Since $deg(f_j),deg(g_{j,i})\leq (4m(r-1)-1)d, \quad deg_{D_{\gamma}}(\psi_i)< deg_{D_{\gamma}}(h^{(i)})\leq (16m^2r^2-1)d$ (see the end of Section \[matrice\]) we conclude that $deg_{D_{\gamma}}(\psi_j) \leq N_1 \leq 16m^2r^2d, \quad 1\leq j \leq r$.
Represent $\psi_j=\sum_{0\leq s \leq N_1}D_{\gamma}^s\psi_{j,s}, \quad 1\leq j \leq p$ for appropriate $\psi_{j,s} \in A_m^{(\{1,\dots ,m\} \setminus \gamma)}(A_m^{(K)})^{-1}$. For each $0\leq s\leq N_1$ we have $$g_jD_{\gamma}^s=\sum_{0\leq l\leq N_0}D_{\gamma}^lg_{j,s,l}^{(1)}, \quad g_{j,i}D_{\gamma}^s=\sum_{0\leq l\leq N_0}D_{\gamma}^lg_{j,i,s,l}^{(1)}$$ for appropriate $g_{j,s,l}^{(1)},g_{j,i,s,l}^{(1)} \in A_m^{(\{1,\dots ,m\} \setminus \gamma)}$ where $N_0,deg(g_{j,s,l}^{(1)}), deg(g_{j,i,s,l}^{(1)})\leq 16m^2r^2d$. Substituting the expressions (\[6\]) in (\[41\]) and subsequently equating the coefficients at the same powers of $D_{\gamma}$, we obtain the following linear system over $A_m^{(\{1,\dots ,m\} \setminus \gamma)}(A_m^{(K)})^{-1}$: $$\sum_{j,s}g_{j,s,l}^{(2)}\psi_{j,s}=g_l^{(2)}$$ which is equivalent to system (\[41\]) and thereby to system (\[1\]); in other words, these systems are solvable simultaneously. Moreover, $g_{j,s,l}^{(2)},g_l^{(2)} \in A_m^{(\{1,\dots ,m\} \setminus \gamma)}, \quad deg(g_{j,s,l}^{(2)}), deg(g_l^{(2)})\leq 16m^2r^2d$, the number of the equations in system (\[7\]) does not exceed $16m^2r^2d$ and the number of the indeterminates $\psi_{j,s}$ is less than $16pm^2r^2d$.
We summarize what has been proved above in this section in the following lemma. A linear system (\[1\]) of $q$ equations in $p$ indeterminates with the degrees of the coefficients $a_{j,i},a_j$ at most $d$ is solvable over the algebra $Q_m^{(K)}$ if and only if the linear system (\[7\]) is solvable over the algebra $A_m^{(\{1,\dots ,m\} \setminus
\gamma)}(A_m^{(K)})^{-1}$. System (\[7\]) in at most $16pm^2r^2d$ indeterminates and in at most $16m^2r^2d$ equations has the coefficients from the algebra $A_m^{(\{1,\dots ,m\} \setminus \gamma)}$ of degrees less than $16m^2r^2d$, where $r\leq \min\{p,q\}$ is the rank of system (\[1\]).
Moreover, if system (\[7\]) has a solution with the degrees not exceeding a certain $\lambda$ then system (\[1\]) has a solution with the degrees not exceeding $\lambda+16m^2r^2d$.
Thus, we have eliminated the symbol $D_{\gamma}$. Continuing recursively, applying Lemma \[elimination\] we eliminate $D_{\delta}$ consecutively for all $\delta \notin K$ and finally obtain a linear system $$\sum_{1\leq l \leq N_3}g_{s,l}^{(0)}V_l^{(0)}=g_s^{(0)}, \quad 1\leq s \leq N_2$$ over the skew-field $A_m^{(K)}(A_m^{(K)})^{-1}$ with the coefficients $g_{s,l}^{(0)}, g_s^{(0)} \in A_m^{(K)}$ where $N_2, deg(g_{s,l}^{(0)}), deg(g_s^{(0)}) \leq N_4=
(2m)^{4^{m-|K|}}(dr)^{3^{m-|K|}}$ and the number of the indeterminates $N_3\leq pN_4$. Notice that system (\[8\]) is solvable simultaneously with system (\[1\]).
As in Section \[matrice\] one can reduce (with the help of Lemma \[block\]) system (\[8\]) to the diagonal-trapezium form similar to (\[2\]) with the coefficients from the algebra $A_m^{(K)}$ having the degrees less than $2(m+|K|)N_4^2$ due to Lemma \[quasi-inverse\]. Therefore, if system (\[8\]) has a solution in the skew-field $A_m^{(K)}(A_m^{(K)})^{-1}$ it should have a solution of the form $v_l^{(0)}=(b_l^{(1)})^{-1}b_l^{(2)} \in
(A_m^{(K)})^{-1}A_m^{(K)}$ with the degrees $deg(b_l^{(1)}), deg(b_l^{(2)})\leq
2(m+|K|)N_4^2$ taking into account the achieved diagonal-trapezium form. Applying Corollary \[multiple\] to $v_l^{(0)}$ one can represent $v_l^{(0)}=v_l^{(3)}
(v_l^{(4)})^{-1}$ for suitable $v_l^{(3)},v_l^{(4)} \in A_m^{(K)}$ with the degrees $deg(v_l^{(3)}), deg(v_l^{(4)})\leq 4(m+|K|)^2N_4^2, \quad 1\leq l \leq N_3$. Hence due to Lemma \[elimination\] it provides a solution of system (\[1\]) over the algebra $Q_m^{(K)}=A_m(A_m^{(K)})^{-1}$ with the bounds on the degrees $N_5=4(m+|K|)N_4^2$. This completes the proof of Theorem \[solution\].
Finally we observe that if system (\[1\]) has a solution then it also has a solution of the form $v_i=c_ib^{-1}$ for appropriate $c_i\in A_m, \quad b\in A_m^{(K)}$ with the degrees $deg(c_i), deg(b)\leq (2(m+|K|)p+1)N_5, \quad 1\leq i \leq p$ due to Corollary \[multiple\]. The algorithm looks for a solution of system (\[1\]) just in this form, with indeterminate coefficients over the field $F$ at the monomials in the symbols $X_1,\dots , X_m, D_1, \dots , D_m$, and treats (\[1\]), or equivalently $\sum_{1\leq i\leq p}a_{j,i}c_i=a_jb, \quad 1\leq j \leq q$, as a linear system over $F$ in these indeterminate coefficients. This completes the proof of Corollary \[time\].
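The last step is the classical method of undetermined coefficients. Its simplest commutative instance is instructive: to find a Bézout relation $au+bv=1$ in $\mathbb{Q}[x]$ with prescribed degree bounds, one equates the coefficients of each power of $x$ and solves the resulting linear system over $\mathbb{Q}$. A self-contained Python sketch (all names are ours):

```python
from fractions import Fraction as F

def solve(A, rhs):
    # Gaussian elimination over the rationals; returns one solution of A x = rhs
    m, n = len(A), len(A[0])
    M = [[F(c) for c in row] + [F(r)] for row, r in zip(A, rhs)]
    piv_cols, row = [], 0
    for col in range(n):
        p = next((r for r in range(row, m) if M[r][col] != 0), None)
        if p is None:
            continue
        M[row], M[p] = M[p], M[row]
        M[row] = [c / M[row][col] for c in M[row]]
        for r in range(m):
            if r != row and M[r][col] != 0:
                M[r] = [c - M[r][col] * d for c, d in zip(M[r], M[row])]
        piv_cols.append(col)
        row += 1
    # a zero coefficient row must have zero right-hand side
    assert all(any(M[r][c] != 0 for c in range(n)) or M[r][n] == 0 for r in range(m))
    x = [F(0)] * n
    for r, col in enumerate(piv_cols):
        x[col] = M[r][n]
    return x

def bezout(a, b, du, dv):
    # unknowns: coeffs u_0..u_du, v_0..v_dv; one equation per power of x in a*u + b*v = 1
    n = max(len(a) + du, len(b) + dv)
    A = [[(a[k - i] if 0 <= k - i < len(a) else 0) for i in range(du + 1)]
         + [(b[k - i] if 0 <= k - i < len(b) else 0) for i in range(dv + 1)]
         for k in range(n)]
    x = solve(A, [1] + [0] * (n - 1))
    return x[:du + 1], x[du + 1:]

a = [1, 0, 1]      # x^2 + 1  (coefficients, lowest degree first)
b = [2, 1]         # x + 2
u, v = bezout(a, b, 0, 1)
print(u, v)        # u = [1/5], v = [2/5, -1/5]: (x^2+1)/5 + (x+2)(2-x)/5 = 1
```

In the algorithm above the unknowns are instead the $F$-coefficients of $c_i$ and $b$ at monomials in $X_1,\dots,X_m,D_1,\dots,D_m$ up to the stated degree bound, but the reduction to linear algebra over $F$ is the same.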
A bound on the leading coefficient of the Hilbert-Kolchin polynomial of a linear differential module {#bezout}
====================================================================================================
In the sequel we use the notations from the Introduction. If the degree $0\leq t\leq m$ of the Hilbert-Kolchin polynomial of the left $L_m$-module $L$ equals $m$ then the leading coefficient $l$ is at most $n$ [@K].
From now on assume that $t<m$. For each $1\leq i_0 \leq n$ and any family $K=
\{k_0,\dots , k_t\} \subset \{1,\dots , m\}$ of $t+1$ integers there exists an element $0\neq (0,\dots ,0, b_{i_0}^{(0)},0,\dots ,0)\in L$ with a single non-zero coordinate at the $i_0$-th place, where $b_{i_0}^{(0)}\in A_m^{(K)}(F[X_1,\dots , X_m])^{-1}$, taking into account that the differential type of $L$ equals $t$ (cf. Proposition 2.4 [@Sit]). Rewriting the latter condition as a system of linear equations $$\sum_{1\leq j\leq s} C_jw_{i,j}=0, \quad i\neq i_0, \quad \sum_{1\leq j\leq s}
C_jw_{i_0,j}=1$$ in the indeterminates $C_1,\dots , C_s$ over the algebra $Q_m^{(K)}$ and making use of Theorem \[solution\] one can find a solution of this system in the form $c_1=(b_{i_0})^{-1}a_{1,i_0},\dots ,
c_s=(b_{i_0})^{-1}a_{s,i_0}\in Q_m^{(K)}$ for suitable $b_{i_0}\in A_m^{(K)}
, \quad a_{1,i_0},\dots ,a_{s,i_0} \in A_m$ with the degrees $deg(b_{i_0}),deg(a_{1,i_0}),\dots , deg(a_{s,i_0}) \leq
(16m^4d^2(\min\{n,s\})^2)^{4^{m-t-1}}$. Thus, $0\neq (0,\dots ,0,
b_{i_0},0,\dots ,0)\in L$.
Applying Lemma \[normalization\] to the family $\{b_{i_0}\}_{1\leq i_0 \leq n}$ we conclude that after an appropriate $F$-linear transformation $\Omega$ of the subspace with the basis $D_{k_0},\dots , D_{k_t}$ and the corresponding transformation $(\Omega^T)^{-1}$ of the subspace with the basis $X_{k_0},\dots , X_{k_t}$, one can suppose that $b_{i_0}=\alpha_eD_{k_0}^e+\beta_{e-1}D_{k_0}^{e-1}+\cdots +\beta_0$ is normalized with respect to $D_{k_0}$, where $0\neq \alpha_e \in F[X_1,\dots , X_m]$ and $\beta_{e-1}, \dots , \beta_0 \in A_m^{(K\setminus \{k_0\})}$. The Hilbert-Kolchin polynomial does not change under the $F$-linear transformation $\Omega$. Taking into account that these transformations preserve the relations (\[0\]) of the Weyl algebra (see the proof of Lemma \[normalization\]), in the applications of these transformations below we keep the same notations for the basis of the resulting Weyl algebra.
First we apply the construction described above to the family $K=\{1,\dots ,
t+1\}$ and obtain normalized elements $(0,\dots ,0,
b_{i_0}^{(1)},0,\dots ,0)\in L, \quad 1\leq i_0 \leq n$ with respect to $D_1$. Thereupon consecutively we take $K=\{2,\dots , t+2\}, \dots , K=\{m-t,\dots , m\}$ and obtain elements $(0,\dots ,0,
b_{i_0}^{(2)},0,\dots ,0), \dots , (0,\dots ,0,
b_{i_0}^{(m-t)},0,\dots ,0) \in L, \quad 1\leq i_0 \leq n$ being normalized with respect to $D_2, \dots , D_{m-t}$, correspondingly.
Hence any element in the quotient $F(X_1,\dots , X_m)$-vector space $L_m^n$ over the left $L_m$-module $L$ can be reduced to the form $(\sum_Ih_{1,I}D_1^{i_1}\cdots D_m^{i_m}, \dots ,
\sum_Ih_{n,I}D_1^{i_1}\cdots D_m^{i_m})$ where the coefficients $h_{j,I}\in
F(X_1,\dots , X_m)$ and $i_1,\dots , i_{m-t}\leq
(16m^4d^2(\min\{n,s\})^2)^{4^{m-t-1}}$. This completes the proof of Corollary \[bez\].
This work was partially supported by the Humboldt-Preis. The author is grateful to Michel Granger, Fritz Schwarz, and Serguey Tsarev for their attention, and to SCAI (Fraunhofer Institut) for its hospitality during his stay there.
[99]{} E. Artin, [*Geometric algebra*]{}, Interscience Publishers, 1957. J.-E. Björk, [*Rings of differential operators*]{}, North-Holland, 1979. A. Galligo, [*Some algorithmical questions on ideals of differential operators*]{}, Lect. Notes Comput. Sci. [**204**]{} (1985), 413–421. D. Grigoriev, [*Computational complexity in polynomial algebra*]{}, in Proc. Intern. Congress Math., Berkeley (1986), 1452–1460. D. Grigoriev, [*Complexity of solving systems of linear equations over the rings of differential operators*]{}, Progress in Math., Birkhäuser, [**94**]{} (1991), 195–202. E. Kolchin, [*Differential algebra and algebraic groups*]{}, Academic Press, 1973. M. Kondratieva, A. Levin, A. Mikhalev, E. Pankratiev, [*Differential and difference dimension polynomials*]{}, Kluwer, 1999. E. Mayr, A. Meyer, [*The complexity of the word problems for commutative semigroups and polynomial ideals*]{}, Adv. Math. [**46**]{} (1982), 305–329. A. Seidenberg, [*Constructions in algebra*]{}, Trans. Amer. Math. Soc. [**197**]{} (1974), 273–313. W. Yu. Sit, [*Typical differential dimension of the intersection of linear differential algebraic groups*]{}, J. Algebra [**32**]{} (1974), 476–487.
---
abstract: 'It has been pointed out that non-singular cosmological solutions in second-order scalar-tensor theories generically suffer from gradient instabilities. We extend this no-go result to second-order gravitational theories with an arbitrary number of interacting scalar fields. Our proof follows directly from the action of generalized multi-Galileons, and thus is different from and complementary to that based on the effective field theory approach. Several new terms for generalized multi-Galileons on a flat background were proposed recently. We find a covariant completion of them and confirm that they do not participate in the no-go argument.'
author:
- Shingo Akama
- Tsutomu Kobayashi
title: 'Generalized multi-Galileons, covariantized new terms, and the no-go theorem for non-singular cosmologies'
---
Introduction
============
Inflation [@Guth:1980zm; @Starobinsky:1980te; @Sato:1980yn] is an attractive scenario because it gives a natural resolution of the horizon and flatness problems in standard Big Bang cosmology and accounts for the origin of density perturbations consistent with observations such as those of the CMB. However, it has been argued that even inflation cannot resolve the initial singularity [@Borde:1996pt] and the trans-Planckian problem for cosmological perturbations [@Martin:2000xs]. Alternative scenarios such as bounces and Galilean Genesis have therefore been explored by a number of authors (see, e.g., Ref. [@Battefeld:2014uga] for a review).
To avoid the initial singularity, there must be a period in which the Hubble parameter $H$ is an increasing function of time. This indicates a violation of the null energy condition (NEC), possibly causing some kind of instability. It is easy to show that NEC-violating cosmological solutions are indeed unstable if the Universe is filled with a usual scalar field or a perfect fluid. However, this is not the case if the underlying Lagrangian depends on second derivatives of a scalar field [@Rubakov:2014jja], and one can construct explicitly a stable cosmological phase in which the NEC is violated in the Galileon-type scalar-field theory [@Creminelli:2010ba; @Deffayet:2010qz; @Kobayashi:2010cm].
Nevertheless, this does not mean that such non-singular cosmological solutions are stable at all times in the entire history; it is known that gradient instabilities occur at some moment in many concrete examples (see, e.g., Refs. [@Cai:2012va; @Koehn:2013upa; @Battarra:2014tga; @Qiu:2015nha; @Wan:2015hya; @Pirtskhalava:2014esa; @Kobayashi:2015gga]), and in some cases the instabilities show up even in the far future after the NEC-violating stage [@Qiu:2011cy; @Easson:2011zy; @Ijjas:2016tpn]. Recently, it was shown that this is a generic feature of non-singular cosmological solutions in the Horndeski/generalized Galileon theory [@Horndeski:1974wa; @Deffayet:2011gz; @Kobayashi:2011nu], i.e., in the most general scalar-tensor theory having second-order field equations, provided that graviton geodesics are complete [@Libanov:2016kfc; @Kobayashi:2016xpl; @Creminelli:2016zwa].
Since the no-go result is obtained in the single-field Horndeski theory, one could evade it by considering theories with multiple scalar fields or higher derivative theories beyond Horndeski. The latter route is indeed successful within the Gleyzes-Langlois-Piazza-Vernizzi scalar-tensor theory [@Gleyzes:2014dya; @Gleyzes:2014qga; @Zumalacarregui:2013pma], as pointed out in Refs. [@Cai:2016thi; @Creminelli:2016zwa] based on the effective field theory (EFT) of cosmological perturbations [@Cheung:2007st]. Gradient instabilities can also be cured if higher spatial derivative terms arise in the action for curvature perturbations [@Creminelli:2006xe; @Pirtskhalava:2014esa; @Kobayashi:2015gga]. This occurs in frameworks [@Gao:2014soa; @Gao:2014fra] more general than that of Ref. [@Gleyzes:2014dya], including Hořava gravity [@Horava:2009uw]. In some cases it is possible, even without such general frameworks, that the strong coupling scale cuts off the instabilities [@Koehn:2015vvy].
The purpose of the present paper is to show that, in contrast to the case of the higher derivative extension, the no-go theorem for non-singular cosmologies still holds in general multi-scalar-tensor theories of gravity. In a subclass of the generalized multi-Galileon theory [@Padilla:2012dx], the same conclusion as in the single-field case was obtained in [@Kolevatov:2016ppi]. It was found in [@Creminelli:2016zwa] that the no-go theorem can also be extended to the EFT of multi-field models in which a shift symmetry is assumed for the entropy mode [@Senatore:2010wk]. (See Ref. [@Noumi:2012vr] for the EFT of multi-field inflation without the shift symmetry.) In this paper, we provide a new proof which follows directly from the full action of the generalized multi-Galileon theory.
This paper is organized as follows. In the next section, we give a brief review on the generalized multi-Galileon theory and extend the proof of the no-go theorem for non-singular cosmologies to multi-field models. Recently, several new terms were found that are not included in the generalized multi-Galileon theory but still yield second-order field equations [@Allys:2016hfl]. To keep the proof as general as possible, we show in Sec. III that the main result is not changed by the addition of these new terms. In doing so, we find a covariant completion of the flat-space action of Ref. [@Allys:2016hfl]. In Sec. IV we give a comment on the (in)completeness of graviton geodesics viewed from the original (non-Einstein) frame. We draw our conclusions in Sec. V.
No-go theorem in generalized multi-Galileon theory
==================================================
Generalized multi-Galileon theory
---------------------------------
The most general single-scalar-tensor theory whose field equations are of second order is given by the Horndeski action [@Horndeski:1974wa]. To begin with, let us review briefly how the same theory was rediscovered in a different way starting from the Galileon theory. The Galileon theory is a scalar-field theory on a fixed Minkowski background having the Galilean shift symmetry, $\partial_\mu\phi\to\partial_\mu\phi + b_\mu$, and second-order field equations [@Nicolis:2008in]. To make the metric dynamical and consider an arbitrary spacetime, one can covariantize the Galileon theory by replacing $\partial_\mu$ with $\nabla_\mu$, but this procedure induces higher derivative terms in the field equations due to the noncommutativity of the covariant derivatives. However, the resulting higher derivative terms can be removed by introducing non-minimal derivative couplings to the curvature. The covariant Galileon theory is thus obtained [@Deffayet:2009wt]. Now the Galilean shift symmetry is lost, and what is more important is the second-order nature of the field equations, as it guarantees the absence of Ostrogradski instabilities. One can further generalize the covariant Galileon theory by promoting the coefficients of $X:=-g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi/2$ in the action to arbitrary functions of $\phi$ and $X$ while retaining the second-order field equations [@Deffayet:2011gz]. This yields the Lagrangian $$\begin{aligned}
{\cal L}&=G_2(X,\phi)-G_3(X,\phi)\Box\phi+G_4(X,\phi )R
\notag \\ & \quad
+\frac{\partial G_4}{\partial X}\left[(\Box\phi)^2-(\nabla_\mu\nabla_\nu\phi)^2 \right]
+G_5(X,\phi)G^{\mu\nu}\nabla_\mu\nabla_\nu\phi
\notag \\ & \quad
-\frac{1}{6}\frac{\partial G_5}{\partial X}\left[(\Box\phi)^3-3\Box\phi(\nabla_\mu\nabla_\nu\phi)^2
+2(\nabla_\mu\nabla_\nu\phi)^3\right],\label{actionHor}\end{aligned}$$ where $R$ is the Ricci scalar and $G_{\mu\nu}$ is the Einstein tensor. Interestingly, it can be shown that this Lagrangian is equivalent to the one obtained by Horndeski in an apparently different form [@Kobayashi:2011nu], and therefore is the most general one having second-order field equations.
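As a simple check of how familiar models sit inside Eq. (\[actionHor\]), choosing $$G_2=X-V(\phi),\quad G_3=0,\quad G_4=\frac{{M_{\rm Pl}}^2}{2},\quad G_5=0,$$ reduces the Lagrangian to general relativity plus a canonical scalar field, ${\cal L}={M_{\rm Pl}}^2R/2+X-V(\phi)$, since all the $X$-derivative terms drop out for constant $G_4$; the non-minimal derivative couplings mentioned above reside in the $X$-dependence of $G_4$ and $G_5$.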
The multi-field generalization can proceed in the following way. In Refs. [@Deffayet:2010zh; @Padilla:2010de; @Padilla:2010ir; @Padilla:2010ir2; @Trodden:2011xh; @Sivanesan:2013tba], the Galileons on a fixed Minkowski background were generalized to multi-field models, whose action is a functional of $N$ scalar fields $\phi^I$ ($I=1,\,2,\, ...,\,N$) and their derivatives of order up to two. Covariantizing the multi-Galileons and introducing arbitrary functions of the scalar fields and their first derivatives so that no higher derivative terms appear in the field equations, one arrives at the generalized multi-Galileon theory, whose Lagrangian is given, in a form analogous to Eq. (\[actionHor\]), by [@Padilla:2012dx] $$\begin{aligned}
\mathcal{L}&=G_2(X^{IJ},{\phi}^K)-G_{3L}(X^{IJ},{\phi}^K)\Box{\phi}^L+G_4(X^{IJ},{\phi}^K)R\nonumber\\
&
\quad
+G_{4,\langle{IJ}\rangle}
\bigl(
{\Box{\phi}}^I{\Box{\phi}}^J-{\nabla}_{\mu}{\nabla}_{\nu}{\phi}^I{\nabla}^{\mu}{\nabla}^{\nu}{\phi}^J
\bigr)
\nonumber\\
&
\quad
+G_{5L}(X^{IJ},{\phi}^K)G^{\mu\nu}{\nabla}_{\mu}{\nabla}_{\nu}{\phi}^L
-\frac{1}{6}G_{5I,\langle{JK}\rangle}
\nonumber\\
&
\quad \quad
\times \bigl(\Box{\phi}^I\Box{\phi}^J\Box{\phi}^K
-3\Box{\phi}^{(I}{\nabla}_{\mu}{\nabla}_{\nu}{\phi}^J{\nabla}^{\mu}{\nabla}^{\nu}{\phi}^{K)}
\nonumber\\
&
\quad \quad
+2{\nabla}_{\mu}{\nabla}_{\nu}{\phi}^I{\nabla}^{\nu}{\nabla}^{\lambda}{\phi}^J{\nabla}_{\lambda}{\nabla}^{\mu}{\phi}^K
\bigr),\label{multi-G-L}\end{aligned}$$ where $$\begin{aligned}
X^{IJ}&:=-\frac{1}{2}g^{\mu\nu}{\partial}_{\mu}{\phi}^I{\partial}_{\nu}{\phi}^J,
\\
G_{,\langle IJ \rangle}
&:=\frac{1}{2}\left(\frac{\partial{G}}{\partial{X^{IJ}}}+\frac{\partial{G}}{\partial{X^{JI}}}\right).\end{aligned}$$ In order for the field equations to be of second order, it is required that $$\begin{aligned}
& G_{3IJK}:=G_{3I,\langle JK\rangle},
&& G_{4IJKL}:=G_{4,\langle IJ \rangle,\langle KL\rangle},
\\
& G_{5IJK}:=G_{5I,\langle JK\rangle},
&& G_{5IJKLM}:=G_{5IJK,\langle LM \rangle},\end{aligned}$$ are symmetric in all of their indices $I, \,J, \,...$. In what follows we will write $G_{4,\langle IJ\rangle}$ as $G_{4IJ}$. It is obvious that $G_{4IJ}=G_{4JI}$.
The multi-scalar-tensor theory described by the Lagrangian (\[multi-G-L\]) seems very general and includes the earlier works [@Damour:1992we; @Horbatsch:2015bua] and more recent ones [@Kolevatov:2016ppi; @Naruko:2015zze; @Charmousis:2014zaa; @Saridakis:2016ahq; @Saridakis:2016mjd] as specific cases. However, in contrast to the case of the single Galileon, it is [*not*]{} the most general multi-scalar-tensor theory with second-order field equations. Indeed, as demonstrated in [@Kobayashi:2013ina], the multi-DBI Galileon theory [@RenauxPetel:2011uk] is not included in the above one. To date, no complete multi-field generalization of the Horndeski action is known. Taking the same approach as Horndeski did, rather than starting from the multi-Galileon theory, the authors of Ref. [@Ohashi:2015fma] obtained the most general second-order field equations of [*bi*]{}-scalar-tensor theories, but deducing the corresponding action and extending the bi-scalar result to the case of more than two scalars have not been successful so far. We will come back to this issue in the next section in light of the recent result reported in [@Allys:2016hfl].
Although the generalized multi-Galileon theory is thus not the most general one, it is definitely quite general, and so we choose to use the Lagrangian (\[multi-G-L\]). This is one of the best things one can do at this stage to draw general conclusions on the cosmology of multiple interacting scalar fields, and it is complementary to the approach based on the effective field theory of multi-field inflation [@Creminelli:2016zwa].
Stability of a non-singular universe in generalized multi-Galileon theory
-------------------------------------------------------------------------
We now show that the no-go theorem in [@Kobayashi:2016xpl] can be extended to the case of the generalized multi-Galileon theory.
The quadratic actions for perturbations around a flat Friedmann background have been calculated in [@Kobayashi:2013ina]. For tensor perturbations $h_{ij}(t,\Vec{x})$ we have $$\begin{aligned}
S_h^{(2)}=
\frac{1}{8} \int {{\rm d}}t{{\rm d}}^3x \,
a^3\left[
{\mathcal G}_T\dot{h}_{ij}^2-\frac{{\mathcal F}_T}{a^2}
(\Vec{\nabla}h_{ij})^2
\right],\label{ac2tens}\end{aligned}$$ where $$\begin{aligned}
{\mathcal G}_T:=2 \left[ G_4-2X^{IJ}G_{4IJ}-X^{IJ}(H\dot{\phi}^KG_{5IJK}-G_{5I,J}) \right]\end{aligned}$$ and $$\begin{aligned}
{\mathcal F}_T:=2 \left[ G_4-X^{IJ}(\ddot{\phi}^KG_{5IJK}+G_{5I,J}) \right].\end{aligned}$$ Here we defined $G_{,I}:=\partial{G}/\partial{\phi}^I$. Stability requires $$\begin{aligned}
{\cal G}_T>0,\quad {\cal F}_T>0,\end{aligned}$$ at any moment in the whole cosmological history.
To study scalar perturbations in multi-field models, it is convenient to use the spatially flat gauge. The quadratic action for scalar perturbations is of the form [@Kobayashi:2013ina] $$\begin{aligned}
S_Q^{(2)}
=\frac{1}{2}{\int}{{\rm d}}t{{\rm d}}^3x
a^3 &
\biggl[
{\cal K}_{IJ}\dot Q^I \dot Q^J-\frac{1}{a^2}{\cal D}_{IJ}\Vec{\nabla}Q^I\cdot\Vec{\nabla}Q^J
\notag \\
&
-{\cal M}_{IJ}Q^IQ^J+2\Omega_{IJ}Q^I\dot Q^J
\biggr],\end{aligned}$$ where $Q^I$’s are the perturbations of the scalar fields defined by $$\phi^I=\bar{\phi}^I(t)+Q^I(t,\vec{x}).$$ The explicit expressions for the matrices ${\cal K}_{IJ}$, ${\cal M}_{IJ}$, and $\Omega_{IJ}$ can be found in [@Kobayashi:2013ina], but are not necessary for the following discussion. Since gradient instabilities manifest most significantly at high frequencies, only the structure of the matrix ${\cal D}_{IJ}$ is crucial to our no-go argument. We will use the fact that ${\cal D}_{IJ}$ is given by [@Kobayashi:2013ina] $$\begin{aligned}
{\cal D}_{IJ}={\cal C}_{IJ}-\frac{{\cal J}_{(I}{\cal B}_{J)}}{\Theta}
+\frac{1}{a}\frac{{{\rm d}}}{{{\rm d}}t}\left(
\frac{a{\cal B}_I{\cal B}_J}{2\Theta}
\right),\label{DIJrelation}\end{aligned}$$ where ${\cal C}_{IJ}$ is the matrix satisfying the identity $$\begin{aligned}
{\cal C}_{IJ}X^{IJ}=2H\left(\dot{\cal G}_T+H{\cal G}_T\right)
-\dot\Theta-H\Theta-H^2{\cal F}_T,\label{XIJC}\end{aligned}$$ with $$\begin{aligned}
\Theta &:=
-\dot\phi^IX^{JK}G_{3IJK} + 2HG_4
\notag \\ & \quad
-8HX^{IJ}\left(G_{4IJ}+X^{KL}G_{4IJKL}\right)
\notag \\ & \quad
+2\dot\phi^IX^{JK}G_{4IJ, K}+\dot\phi^IG_{4,I}
\notag \\ & \quad
-H^2\dot\phi^IX^{JK}\left(5G_{5IJK}+2X^{LM}G_{5IJKLM}\right)
\notag \\ & \quad
+2HX^{IJ}\left(
3G_{5I,J}+2X^{KL}G_{5IJK,L}
\right).\label{deftheta}\end{aligned}$$ The explicit expressions for ${\cal J}_I$ and ${\cal B}_I$ in Eq. (\[DIJrelation\]) are also unimportant, but we will use the equation [@Kobayashi:2013ina] $$\begin{aligned}
\dot\phi^I{\cal J}_I+\ddot\phi^I{\cal B}_I+2\dot H{\cal G}_T=0.\label{idFRD}\end{aligned}$$ This follows from the background equations, and corresponds in the minimally coupled single-field case to the familiar equation $$\begin{aligned}
\dot\phi^2+2{M_{\rm Pl}}^2\dot H = 0.\end{aligned}$$
It is required for the stability of the scalar sector that the matrices ${\boldsymbol {\cal K}}=({\cal K}_{IJ})$ and ${\boldsymbol {\cal D}}=({\cal D}_{IJ})$ must be positive definite. Hence, a non-singular cosmological solution is free from gradient instabilities if, for every non-zero column vector ${\boldsymbol v}$, $$\begin{aligned}
{\boldsymbol v}^{{\rm T}}{\boldsymbol {\cal D}}{\boldsymbol v}>0,\label{gradsta}\end{aligned}$$ where ${\boldsymbol v}^{{\rm T}}$ is the transpose of ${\boldsymbol v}$. Now, let ${\boldsymbol v}$ be $$\begin{aligned}
{\boldsymbol v}=\left(
\begin{array}{c}
\dot\phi^1 \\
\dot\phi^2 \\
\vdots \\
\dot\phi^N
\end{array}
\right).\end{aligned}$$ Then, Eq. (\[gradsta\]) reads $$\begin{aligned}
{\boldsymbol v}^{{\rm T}}{\boldsymbol {\cal D}}{\boldsymbol v}=2X^{IJ}{\cal D}_{IJ}>0.\end{aligned}$$ Using Eqs. (\[DIJrelation\]), (\[XIJC\]), and (\[idFRD\]) and doing some manipulation, one finds $$\begin{aligned}
X^{IJ}{\cal D}_{IJ} = H^2\left(\frac{1}{a}\frac{{{\rm d}}\xi}{{{\rm d}}t}-{\cal F}_T\right)>0,\label{ineq1}\end{aligned}$$ where $$\begin{aligned}
\xi:=\frac{a{\mathcal G}_T^2}{\Theta}.\end{aligned}$$
The remaining part of the proof parallels that in the Horndeski case [@Kobayashi:2016xpl], because the inequality (\[ineq1\]) has the same structure as its single-field counterpart. In a non-singular universe, $\Theta$ never diverges, because it is composed of $H$ and $\phi^I$ as given in Eq. (\[deftheta\]) and we require that the functions $G_2$, $G_{3I}$, ... in the underlying Lagrangian remain finite throughout the cosmological history.[^1] We also have $a{\cal G}_T^2>0$, which follows from the stability of the tensor perturbations.[^2] Therefore, $\xi$ cannot cross zero. From Eq. (\[ineq1\]) we have $$\begin{aligned}
\frac{{{\rm d}}\xi}{{{\rm d}}t}>a{\cal F}_T>0,\label{ineq2}\end{aligned}$$ indicating that $\xi$ is a monotonically increasing function of $t$. Integrating Eq. (\[ineq2\]) from some $t_{\rm i}$ to $t_{\rm f}$, we obtain $$\begin{aligned}
\xi(t_{\rm f})-\xi(t_{\rm i}) > \int_{t_{\rm i}}^{t_{\rm f}} a{\cal F}_T{{\rm d}}t'.\label{ineq3}\end{aligned}$$ (We allow for the possibility that $\xi$ diverges at some $t_\ast$ where $\Theta=0$. In this case, $t_{\rm i}$ and $t_{\rm f}$ are taken to be such that $t_{\rm i}<t_{\rm f}<t_\ast$ or $t_\ast<t_{\rm i}< t_{\rm f}$.) If $\lim_{t\to-\infty}\xi=\,$const, we take $t_{\rm i}\to-\infty$ in Eq. (\[ineq3\]) and obtain $$\begin{aligned}
\int_{-\infty}^{t_{\rm f}}a{\cal F}_T{{\rm d}}t'<\xi(t_{\rm f})-\xi(-\infty)<\infty.\end{aligned}$$ Similarly, if $\lim_{t\to\infty}\xi=\,$const then we take $t_{\rm f}\to\infty$ to get $$\begin{aligned}
\int^{\infty}_{t_{\rm i}}a{\cal F}_T{{\rm d}}t'<\xi(\infty)-\xi(t_{\rm i})<\infty.\end{aligned}$$ Thus, we conclude that a non-singular cosmological solution in the generalized multi-Galileon theory is stable in the entire history provided that either $$\begin{aligned}
\int_{-\infty}^ta{\cal F}_T{{\rm d}}t'\quad
{\rm or} \quad
\int_t^\infty a{\cal F}_T{{\rm d}}t'\label{convint}\end{aligned}$$ is convergent. (If $\Theta = 0$ occurs, both of the above integrals must be convergent.) As is argued in Refs. [@Creminelli:2016zwa; @Cai:2016thi] and also in Sec. IV of the present paper, the convergence of the above integrals signals some kind of pathology in the tensor perturbations. If one prefers to avoid this pathology, all non-singular cosmological solutions in the generalized multi-Galileon theory are inevitably plagued with gradient instabilities.
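As a numerical sanity check of the monotonicity argument, the sketch below integrates the minimal growth rate implied by Eq. (\[ineq2\]) backwards in time for hypothetical toy choices (a non-singular bounce with $a(t)=\sqrt{1+t^2}$ and constant ${\cal F}_T=1$; these are illustrative assumptions, not a model from this paper). When $\int_{-\infty}^t a{\cal F}_T\,{{\rm d}}t'$ diverges, $\xi$ is forced to cross zero at a finite time in the past, which is forbidden when $\Theta$ remains finite:

```python
import math

# Toy numerical check of the no-go logic (hypothetical model choices,
# not taken from the paper): a non-singular bounce a(t) = sqrt(1+t^2)
# and a constant F_T = 1. Eq. (ineq2) gives d(xi)/dt > a*F_T > 0, so
# integrating the *minimal* growth rate backwards from any xi(t_f) > 0
# forces xi to cross zero at a finite past time, which is forbidden
# when Theta remains finite (xi = a*G_T^2/Theta cannot vanish).

def a(t):
    return math.sqrt(1.0 + t * t)  # toy non-singular scale factor

F_T = 1.0  # toy tensor-sector coefficient

def zero_crossing_time(xi_f, t_f, t_min=-100.0, dt=1e-3):
    """Integrate d(xi)/dt = a*F_T backwards from (t_f, xi_f) and return
    the first time at which xi < 0, or None if none is found."""
    t, xi = t_f, xi_f
    while t > t_min:
        xi -= a(t) * F_T * dt
        t -= dt
        if xi < 0.0:
            return t
    return None

t_cross = zero_crossing_time(xi_f=10.0, t_f=0.0)
print(t_cross)  # a finite negative time: xi cannot stay positive forever
```

When the integral instead converges, the backward integration never drives $\xi$ through zero, which is the loophole exploited by the geodesically incomplete solutions discussed below.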
One might naively expect that, in the presence of multiple interacting scalar fields, a dominant field can transfer its energy to another field or to matter before its instability shows up, thereby eliminating the instability. We have shown that this is not the case in the generalized multi-Galileon theory.
The same conclusion was reached using the EFT of multi-field cosmologies, in which a shift symmetry is assumed for the entropy mode [@Creminelli:2016zwa]. Our proof is different from, and complementary to, that based on the EFT. The EFT approach amounts to writing all the terms allowed by symmetry, which leads to the theory of cosmological perturbations on a given background. Therefore, the adiabatic and entropy modes are decomposed by construction in the EFT. In contrast, our guiding principle is the second-order nature of the field equations, and so we start from the general action of second-order multiple scalar-tensor theories that governs the perturbation evolution as well as the background dynamics. It should be noticed that we have not performed the adiabatic/entropy decomposition, as it is unnecessary for our no-go argument. Although the relation between the second-order theory and the EFT of cosmological perturbations has been clarified in the single-field case [@Gleyzes:2013ooa], to date, it is not obvious how the EFT of multi-field cosmology is related to the generalized multi-Galileon theory.
Covariantized new terms for multi-Galileon theory
=================================================
Very recently, the author of Ref. [@Allys:2016hfl] proposed new terms for scalar multi-Galileon theory that are not included in the existing multi-Galileon Lagrangian but nevertheless give rise to second-order field equations. The Lagrangians for these “extended” multi-Galileons are given by [@Deffayet:2010zh; @Allys:2016hfl] $$\begin{aligned}
{\cal L}_{{\rm ext}1}&=
A_{[IJ][KL]M}\delta_{\nu_1\nu_2\nu_3}^{\mu_1\mu_2\mu_3}
\partial_{\mu_1}\phi^I\partial_{\mu_2}\phi^J\partial^{\nu_1}\phi^K\partial^{\nu_2}\phi^L
\notag \\ &\quad \times
\partial_{\mu_3}\partial^{\nu_3}\phi^M,\label{fext1}
\\
{\cal L}_{{\rm ext}2}&=
A_{[IJ][KL](MN) }
\delta_{\nu_1\nu_2\nu_3\nu_4}^{\mu_1\mu_2\mu_3\mu_4}
\partial_{\mu_1}\phi^I\partial_{\mu_2}\phi^J
\notag \\ &\quad \times
\partial^{\nu_1}\phi^K\partial^{\nu_2}\phi^L
\partial_{\mu_3}\partial^{\nu_3}\phi^M\partial_{\mu_4}\partial^{\nu_4}\phi^N,\label{fext2}
\\
{\cal L}_{{\rm ext}3}&=
A_{[IJK][LMN]O }
\delta_{\nu_1\nu_2\nu_3\nu_4}^{\mu_1\mu_2\mu_3\mu_4}
\partial_{\mu_1}\phi^I\partial_{\mu_2}\phi^J\partial_{\mu_3}\phi^K
\notag \\ &\quad \times
\partial^{\nu_1}\phi^L\partial^{\nu_2}\phi^M\partial^{\nu_3}\phi^N
\partial_{\mu_4}\partial^{\nu_4}\phi^O,\label{fext3}\end{aligned}$$ where the coefficients $A_{[IJ][KL]M},\, ...$ are arbitrary functions of $\phi^I$ and $X^{IJ}$. These coefficients are antisymmetric in indices inside $[~]$ and symmetric in indices inside $(~)$. In order for the field equations to be of second order, we require that $$\begin{aligned}
&A_{[IJ][KL]\underline{M,\langle NO\rangle}},
\quad
A_{[IJ][KL]\underline{(MN),\langle OP\rangle}},
\notag \\ &
A_{[IJK][LMN]\underline{O,\langle PQ\rangle}},\end{aligned}$$ are symmetric in underlined indices.
The Lagrangians (\[fext1\])–(\[fext3\]) are those for scalar fields on fixed Minkowski spacetime. Let us explore a covariant completion of the above flat-space multi-scalar theory. To make the metric dynamical, we first promote $\partial_\mu$ to $\nabla_\mu$. It is easy to see that this procedure is sufficient for ${\cal L}_{{\rm ext}1}$ and ${\cal L}_{{\rm ext}3}$: $$\begin{aligned}
{\cal L}_{{\rm ext}1}'&=
A_{[IJ][KL]M}\delta_{\nu_1\nu_2\nu_3}^{\mu_1\mu_2\mu_3}
\nabla_{\mu_1}\phi^I\nabla_{\mu_2}\phi^J\nabla^{\nu_1}\phi^K\nabla^{\nu_2}\phi^L
\notag \\ &\quad \times
\nabla_{\mu_3}\nabla^{\nu_3}\phi^M,\label{cext1}
\\
{\cal L}_{{\rm ext}3}'&=
A_{[IJK][LMN]O }
\delta_{\nu_1\nu_2\nu_3\nu_4}^{\mu_1\mu_2\mu_3\mu_4}
\nabla_{\mu_1}\phi^I\nabla_{\mu_2}\phi^J\nabla_{\mu_3}\phi^K
\notag \\ &\quad \times
\nabla^{\nu_1}\phi^L\nabla^{\nu_2}\phi^M\nabla^{\nu_3}\phi^N
\nabla_{\mu_4}\nabla^{\nu_4}\phi^O,\label{cext3}\end{aligned}$$ have second-order equations of motion for the metric and scalar fields. However, the simple covariantization of ${\cal L}_{{\rm ext}2}$, $$\begin{aligned}
{\cal L}_{{\rm cext}2}&=
A_{[IJ][KL](MN) }
\delta_{\nu_1\nu_2\nu_3\nu_4}^{\mu_1\mu_2\mu_3\mu_4}
\nabla_{\mu_1}\phi^I\nabla_{\mu_2}\phi^J
\notag \\ &\quad \times
\nabla^{\nu_1}\phi^K\nabla^{\nu_2}\phi^L
\nabla_{\mu_3}\nabla^{\nu_3}\phi^M\nabla_{\mu_4}\nabla^{\nu_4}\phi^N,\end{aligned}$$ yields higher derivative terms in the field equations. To cancel such terms, we add a counter term, i.e., a coupling to the curvature tensor ${\cal L}_{{\rm curv}2}$. It turns out that the appropriate Lagrangian is the following: $$\begin{aligned}
{\cal L}_{{\rm curv}2}&=B_{[IJ][KL]}\delta_{\nu_1\nu_2\nu_3\nu_4}^{\mu_1\mu_2\mu_3\mu_4}
\notag \\ &
\quad \times
R^{\nu_3\nu_4}_{~~~~~\mu_3\mu_4}
\nabla_{\mu_1}\phi^I\nabla_{\mu_2}\phi^J\nabla^{\nu_1}\phi^K\nabla^{\nu_2}\phi^L,\end{aligned}$$ where $$\begin{aligned}
B_{[IJ][KL],\langle MN\rangle }=\frac{1}{2}A_{[IJ][KL](MN)}\end{aligned}$$ must be imposed. Thus, we find that the covariant completion of ${\cal L}_{{\rm ext}2}$ is given by $$\begin{aligned}
{\cal L}_{{\rm ext}2}'&={\cal L}_{{\rm curv}2}+{\cal L}_{{\rm cext}2}\end{aligned}$$ where $A_{[IJ][KL](MN)}=2B_{[IJ][KL],\langle MN\rangle }$ and $$\begin{aligned}
B_{[IJ][KL]\underline{MNOP}}:=B_{[IJ][KL],\langle MN\rangle,\langle OP\rangle }\end{aligned}$$ is symmetric in underlined indices.
One can check that the multi-DBI Galileon theory at leading order in the $X^{IJ}$ expansion [@Kobayashi:2013ina] is obtained by taking $$\begin{aligned}
B_{[IJ][KL]}= {\rm const}\times \left(\delta_{IK}\delta_{JL}-\delta_{IL}\delta_{JK}\right),\end{aligned}$$ though it seems extremely difficult to verify explicitly that the complete Lagrangian for the multi-DBI Galileons [@RenauxPetel:2011uk] can be reproduced by appropriately choosing the functions in the above Lagrangians.
Now the question is how the additional terms $$\begin{aligned}
{\cal L}_{\rm ext}:={\cal L}_{{\rm ext}1}'+{\cal L}_{{\rm ext}2}'+{\cal L}_{{\rm ext}3}'\end{aligned}$$ change the stability of cosmological solutions. Obviously, ${\cal L}_{\rm ext}$ does not change the background equations due to antisymmetry. We see that, in the quadratic actions for scalar and tensor perturbations, only the ${\cal C}_{IJ}$ coefficients are modified as follows: $$\begin{aligned}
{\cal C}_{IJ}\to{\cal C}_{IJ}+{\cal C}_{IJ}^{\rm ext},\end{aligned}$$ with $$\begin{aligned}
{\cal C}_{IJ}^{\rm ext}&:=32H\bigl(
-A_{[IK][JL]M}X^{KL}\dot\phi^M
+2HB_{[IK][JL]}X^{KL}
\notag \\ &\quad\qquad\quad
+4HB_{[IK][JL],\langle MN\rangle}X^{KL}X^{MN}
\bigr),\end{aligned}$$ and no other terms are affected by the addition of ${\cal L}_{\rm ext}$. Since $X^{IJ}{\cal C}_{IJ}^{\rm ext}=0$ due to antisymmetry, $X^{IJ}{\cal D}_{IJ}$ remains the same even if one adds ${\cal L}_{\rm ext}$: $$\begin{aligned}
X^{IJ}{\cal D}_{IJ}\to X^{IJ}{\cal D}_{IJ}.\end{aligned}$$ Therefore, the new terms proposed in Ref. [@Allys:2016hfl] do not change the no-go argument.
The new term ${\cal L}_{{\rm ext}}$ vanishes for the homogeneous background, which implies that ${\cal L}_{{\rm ext}}$ contributes only to the entropy modes at the level of perturbations. This is consistent with the result of [@Creminelli:2016zwa], where it can be seen using the EFT that the instability occurs in the adiabatic direction.
Graviton geodesics
==================
We have thus seen that within the multi-field extension of the generalized Galileons, non-singular cosmological solutions are possible only if either integral in Eq. (\[convint\]) is convergent, as in the single-field Horndeski case. In Ref. [@Kobayashi:2016xpl], this fact was noticed and a numerical example of a non-singular cosmological solution with the convergent integral was obtained for the first time in the single-field context. Later, the authors of Ref. [@Ijjas:2016vtq] followed Ref. [@Kobayashi:2016xpl] and presented another example.
One can move from the original frame (\[ac2tens\]) to the “Einstein frame” for tensor perturbations by performing a disformal transformation [@Creminelli:2014wna]. This is possible because a disformal transformation involves two independent functions of $t$, which can be chosen so as to bring ${\cal F}_T$ and ${\cal G}_T$ to their standard forms: ${\cal F}_T\to{M_{\rm Pl}}^2$, ${\cal G}_T\to {M_{\rm Pl}}^2$. It is clearly explained in Ref. [@Creminelli:2016zwa] that, because gravitons propagate along null geodesics in the Einstein frame and the integral $$\begin{aligned}
\int a{\cal F}_T{{\rm d}}t \label{int2grav}\end{aligned}$$ is nothing but the affine parameter of the null geodesics in the Einstein frame, the convergent integral (\[convint\]) implies past (future) incompleteness of graviton geodesics (see also Ref. [@Cai:2016thi]). This may signal some kind of pathology in the tensor perturbations, though it is not obvious whether the incompleteness of null geodesics in a disformally related frame causes actual problems.
Let us rephrase this potential pathology of gravitons without invoking the disformal transformation. The equation of motion for the tensor perturbation $h_{ij}$ derived from the action (\[ac2tens\]) can be written in the form $$\begin{aligned}
Z^{\mu\nu}{\cal D}_\mu{\cal D}_\nu h_{ij}=0,\label{effgeoh}\end{aligned}$$ where $$\begin{aligned}
Z_{\mu\nu}{{\rm d}}x^\mu{{\rm d}}x^\nu =-\frac{{\cal F}_T^{3/2}}{{\cal G}_T^{1/2}}{{\rm d}}t^2
+a^2\left({\cal F}_T{\cal G}_T\right)^{1/2}\delta_{ij}{{\rm d}}x^i{{\rm d}}x^j,\end{aligned}$$ and ${\cal D}_\mu$ is the covariant derivative associated with the “metric” $Z_{\mu\nu}$. Equation (\[effgeoh\]) shows that graviton paths can be interpreted as null geodesics in the effective geometry defined by $Z_{\mu\nu}$. It turns out that the affine parameter $\lambda$ of null geodesics in the metric $Z_{\mu\nu}$ is given by ${{\rm d}}\lambda = a{\cal F}_T{{\rm d}}t$. Therefore, the incompleteness of graviton geodesics can be made manifest even without working in the Einstein frame.
Summary
=======
In this paper, we have shown that all non-singular cosmological solutions are plagued with gradient instabilities in the multi-field generalization of scalar-tensor theories, if graviton geodesic completeness is required. This extends the recent no-go arguments of Refs. [@Libanov:2016kfc; @Kobayashi:2016xpl; @Kolevatov:2016ppi]. We have given a direct proof using the generalized multi-Galileon action, so that our proof is different from, and complementary to, that obtained from the effective field theory of cosmological fluctuations [@Creminelli:2016zwa]. Several new terms for multi-Galileons on a flat background were found recently [@Allys:2016hfl]. We have covariantized these terms and shown that including them does not change the no-go result.
We thank Yuji Akita, Norihiro Tanahashi, Masahide Yamaguchi, and Shuichiro Yokoyama for helpful discussions. This work was supported in part by the MEXT-Supported Program for the Strategic Research Foundation at Private Universities, 2014-2017, and by the JSPS Grants-in-Aid for Scientific Research No. 16H01102 and No. 16K17707 (T.K.).
[99]{}
A. H. Guth, “The Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems,” Phys. Rev. D [**23**]{}, 347 (1981). A. A. Starobinsky, “A New Type of Isotropic Cosmological Models Without Singularity,” Phys. Lett. B [**91**]{}, 99 (1980).
K. Sato, “First Order Phase Transition of a Vacuum and Expansion of the Universe,” Mon. Not. Roy. Astron. Soc. [**195**]{}, 467 (1981).
A. Borde and A. Vilenkin, “Singularities in inflationary cosmology: A Review,” Int. J. Mod. Phys. D [**5**]{}, 813 (1996) \[gr-qc/9612036\].
J. Martin and R. H. Brandenberger, “The TransPlanckian problem of inflationary cosmology,” Phys. Rev. D [**63**]{}, 123501 (2001) \[hep-th/0005209\].
D. Battefeld and P. Peter, “A Critical Review of Classical Bouncing Cosmologies,” Phys. Rept. [**571**]{}, 1 (2015) \[arXiv:1406.2790 \[astro-ph.CO\]\].
V. A. Rubakov, “The Null Energy Condition and its violation,” Phys. Usp. [**57**]{}, 128 (2014) \[arXiv:1401.4024 \[hep-th\]\].
P. Creminelli, A. Nicolis and E. Trincherini, “Galilean Genesis: An Alternative to inflation,” JCAP [**1011**]{}, 021 (2010) \[arXiv:1007.0027 \[hep-th\]\].
C. Deffayet, O. Pujolas, I. Sawicki and A. Vikman, “Imperfect Dark Energy from Kinetic Gravity Braiding,” JCAP [**1010**]{}, 026 (2010) \[arXiv:1008.0048 \[hep-th\]\].
T. Kobayashi, M. Yamaguchi and J. Yokoyama, “G-inflation: Inflation driven by the Galileon field,” Phys. Rev. Lett. [**105**]{}, 231302 (2010) \[arXiv:1008.0603 \[hep-th\]\]. Y. F. Cai, D. A. Easson and R. Brandenberger, “Towards a Nonsingular Bouncing Cosmology,” JCAP [**1208**]{}, 020 (2012) \[arXiv:1206.2382 \[hep-th\]\].
M. Koehn, J. L. Lehners and B. A. Ovrut, “Cosmological super-bounce,” Phys. Rev. D [**90**]{}, no. 2, 025005 (2014) \[arXiv:1310.7577 \[hep-th\]\].
L. Battarra, M. Koehn, J. L. Lehners and B. A. Ovrut, “Cosmological Perturbations Through a Non-Singular Ghost-Condensate/Galileon Bounce,” JCAP [**1407**]{}, 007 (2014) \[arXiv:1404.5067 \[hep-th\]\].
T. Qiu and Y. T. Wang, “G-Bounce Inflation: Towards Nonsingular Inflation Cosmology with Galileon Field,” JHEP [**1504**]{}, 130 (2015) \[arXiv:1501.03568 \[astro-ph.CO\]\].
Y. Wan, T. Qiu, F. P. Huang, Y. F. Cai, H. Li and X. Zhang, “Bounce Inflation Cosmology with Standard Model Higgs Boson,” JCAP [**1512**]{}, no. 12, 019 (2015) \[arXiv:1509.08772 \[gr-qc\]\]. D. Pirtskhalava, L. Santoni, E. Trincherini and P. Uttayarat, “Inflation from Minkowski Space,” JHEP [**1412**]{}, 151 (2014) \[arXiv:1410.0882 \[hep-th\]\].
T. Kobayashi, M. Yamaguchi and J. Yokoyama, “Galilean Creation of the Inflationary Universe,” JCAP [**1507**]{}, no. 07, 017 (2015) \[arXiv:1504.05710 \[hep-th\]\].
T. Qiu, J. Evslin, Y. F. Cai, M. Li and X. Zhang, “Bouncing Galileon Cosmologies,” JCAP [**1110**]{}, 036 (2011) \[arXiv:1108.0593 \[hep-th\]\].
D. A. Easson, I. Sawicki and A. Vikman, “G-Bounce,” JCAP [**1111**]{}, 021 (2011) \[arXiv:1109.1047 \[hep-th\]\].
A. Ijjas and P. J. Steinhardt, “Classically stable nonsingular cosmological bounces,” Phys. Rev. Lett. [**117**]{}, no. 12, 121304 (2016) \[arXiv:1606.08880 \[gr-qc\]\].
G. W. Horndeski, “Second-order scalar-tensor field equations in a four-dimensional space,” Int. J. Theor. Phys. [**10**]{}, 363 (1974).
C. Deffayet, X. Gao, D. A. Steer and G. Zahariade, “From k-essence to generalised Galileons,” Phys. Rev. D [**84**]{}, 064039 (2011) \[arXiv:1103.3260 \[hep-th\]\].
T. Kobayashi, M. Yamaguchi and J. Yokoyama, “Generalized G-inflation: Inflation with the most general second-order field equations,” Prog. Theor. Phys. [**126**]{}, 511 (2011) \[arXiv:1105.5723 \[hep-th\]\].
M. Libanov, S. Mironov and V. Rubakov, “Generalized Galileons: instabilities of bouncing and Genesis cosmologies and modified Genesis,” JCAP [**1608**]{}, no. 08, 037 (2016) \[arXiv:1605.05992 \[hep-th\]\].
T. Kobayashi, “Generic instabilities of nonsingular cosmologies in Horndeski theory: A no-go theorem,” Phys. Rev. D [**94**]{}, no. 4, 043511 (2016) \[arXiv:1606.05831 \[hep-th\]\].
P. Creminelli, D. Pirtskhalava, L. Santoni and E. Trincherini, “Stability of Geodesically Complete Cosmologies,” JCAP [**1611**]{}, no. 11, 047 (2016) \[arXiv:1610.04207 \[hep-th\]\].
J. Gleyzes, D. Langlois, F. Piazza and F. Vernizzi, “Healthy theories beyond Horndeski,” Phys. Rev. Lett. [**114**]{}, no. 21, 211101 (2015) \[arXiv:1404.6495 \[hep-th\]\]. J. Gleyzes, D. Langlois, F. Piazza and F. Vernizzi, “Exploring gravitational theories beyond Horndeski,” JCAP [**1502**]{}, 018 (2015) \[arXiv:1408.1952 \[astro-ph.CO\]\].
M. Zumalacárregui and J. García-Bellido, “Transforming gravity: from derivative couplings to matter to second-order scalar-tensor theories beyond the Horndeski Lagrangian,” Phys. Rev. D [**89**]{}, 064046 (2014) \[arXiv:1308.4685 \[gr-qc\]\].
Y. Cai, Y. Wan, H. G. Li, T. Qiu and Y. S. Piao, “The Effective Field Theory of nonsingular cosmology,” arXiv:1610.03400 \[gr-qc\].
C. Cheung, P. Creminelli, A. L. Fitzpatrick, J. Kaplan and L. Senatore, “The Effective Field Theory of Inflation,” JHEP [**0803**]{}, 014 (2008) \[arXiv:0709.0293 \[hep-th\]\].
P. Creminelli, M. A. Luty, A. Nicolis and L. Senatore, “Starting the Universe: Stable Violation of the Null Energy Condition and Non-standard Cosmologies,” JHEP [**0612**]{}, 080 (2006) \[hep-th/0606090\].
X. Gao, “Unifying framework for scalar-tensor theories of gravity,” Phys. Rev. D [**90**]{}, 081501 (2014) \[arXiv:1406.0822 \[gr-qc\]\]. X. Gao, “Hamiltonian analysis of spatially covariant gravity,” Phys. Rev. D [**90**]{}, 104033 (2014) \[arXiv:1409.6708 \[gr-qc\]\].
P. Horava, “Quantum Gravity at a Lifshitz Point,” Phys. Rev. D [**79**]{}, 084008 (2009) \[arXiv:0901.3775 \[hep-th\]\].
M. Koehn, J. L. Lehners and B. Ovrut, “Nonsingular bouncing cosmology: Consistency of the effective description,” Phys. Rev. D [**93**]{}, no. 10, 103501 (2016) \[arXiv:1512.03807 \[hep-th\]\].
A. Padilla and V. Sivanesan, “Covariant multi-galileons and their generalisation,” JHEP [**1304**]{}, 032 (2013) \[arXiv:1210.4026 \[gr-qc\]\].
R. Kolevatov and S. Mironov, “On cosmological bounces and Lorentzian wormholes in Galileon theories with extra scalar field,” arXiv:1607.04099 \[hep-th\].
L. Senatore and M. Zaldarriaga, “The Effective Field Theory of Multifield Inflation,” JHEP [**1204**]{}, 024 (2012) \[arXiv:1009.2093 \[hep-th\]\].
T. Noumi, M. Yamaguchi and D. Yokoyama, “Effective field theory approach to quasi-single field inflation and effects of heavy fields,” JHEP [**1306**]{}, 051 (2013) \[arXiv:1211.1624 \[hep-th\]\].
E. Allys, “New terms for scalar multi-galileon models, application to SO(N) and SU(N) group representations,” arXiv:1612.01972 \[hep-th\]. A. Nicolis, R. Rattazzi and E. Trincherini, “The Galileon as a local modification of gravity,” Phys. Rev. D [**79**]{}, 064036 (2009) \[arXiv:0811.2197 \[hep-th\]\].
C. Deffayet, G. Esposito-Farese and A. Vikman, “Covariant Galileon,” Phys. Rev. D [**79**]{}, 084003 (2009) \[arXiv:0901.1314 \[hep-th\]\].
C. Deffayet, S. Deser and G. Esposito-Farese, “Arbitrary $p$-form Galileons,” Phys. Rev. D [**82**]{}, 061501 (2010) \[arXiv:1007.5278 \[gr-qc\]\].
A. Padilla, P. M. Saffin and S. Y. Zhou, “Bi-galileon theory I: Motivation and formulation,” JHEP [**1012**]{}, 031 (2010) \[arXiv:1007.5424 \[hep-th\]\].
A. Padilla, P. M. Saffin and S. Y. Zhou, “Multi-galileons, solitons and Derrick’s theorem,” Phys. Rev. D [**83**]{}, 045009 (2011) \[arXiv:1008.0745 \[hep-th\]\].
A. Padilla, P. M. Saffin and S. Y. Zhou, “Bi-galileon theory II: Phenomenology,” JHEP [**1101**]{}, 099 (2011) \[arXiv:1008.3312 \[hep-th\]\].
M. Trodden and K. Hinterbichler, “Generalizing Galileons,” Class. Quant. Grav. [**28**]{}, 204003 (2011) \[arXiv:1104.2088 \[hep-th\]\].
V. Sivanesan, “Generalized multiple-scalar field theory in Minkowski space-time free of Ostrogradski ghosts,” Phys. Rev. D [**90**]{}, no. 10, 104006 (2014) \[arXiv:1307.8081 \[gr-qc\]\].
T. Damour and G. Esposito-Farese, “Tensor multiscalar theories of gravitation,” Class. Quant. Grav. [**9**]{}, 2093 (1992). M. Horbatsch, H. O. Silva, D. Gerosa, P. Pani, E. Berti, L. Gualtieri and U. Sperhake, “Tensor-multi-scalar theories: relativistic stars and 3 + 1 decomposition,” Class. Quant. Grav. [**32**]{}, no. 20, 204001 (2015) \[arXiv:1505.07462 \[gr-qc\]\]. A. Naruko, D. Yoshida and S. Mukohyama, “Gravitational scalar-tensor theory,” Class. Quant. Grav. [**33**]{}, no. 9, 09LT01 (2016) \[arXiv:1512.06977 \[gr-qc\]\].
C. Charmousis, T. Kolyvaris, E. Papantonopoulos and M. Tsoukalas, “Black Holes in Bi-scalar Extensions of Horndeski Theories,” JHEP [**1407**]{}, 085 (2014) \[arXiv:1404.1024 \[gr-qc\]\].
E. N. Saridakis and M. Tsoukalas, “Cosmology in new gravitational scalar-tensor theories,” Phys. Rev. D [**93**]{}, no. 12, 124032 (2016) \[arXiv:1601.06734 \[gr-qc\]\].
E. N. Saridakis and M. Tsoukalas, “Bi-scalar modified gravity and cosmology with conformal invariance,” JCAP [**1604**]{}, no. 04, 017 (2016) \[arXiv:1602.06890 \[gr-qc\]\].
T. Kobayashi, N. Tanahashi and M. Yamaguchi, “Multifield extension of $G$ inflation,” Phys. Rev. D [**88**]{}, no. 8, 083504 (2013) \[arXiv:1308.4798 \[hep-th\]\].
S. Renaux-Petel, S. Mizuno and K. Koyama, “Primordial fluctuations and non-Gaussianities from multifield DBI Galileon inflation,” JCAP [**1111**]{}, 042 (2011) \[arXiv:1108.0305 \[astro-ph.CO\]\].
S. Ohashi, N. Tanahashi, T. Kobayashi and M. Yamaguchi, “The most general second-order field equations of bi-scalar-tensor theory in four dimensions,” JHEP [**1507**]{}, 008 (2015) \[arXiv:1505.06029 \[gr-qc\]\].
A. Ijjas, “Cyclic anamorphic cosmology,” arXiv:1610.02752 \[astro-ph.CO\].
J. Gleyzes, D. Langlois, F. Piazza and F. Vernizzi, “Essential Building Blocks of Dark Energy,” JCAP [**1308**]{}, 025 (2013) \[arXiv:1304.4840 \[hep-th\]\].
A. Ijjas and P. J. Steinhardt, “Fully stable cosmological solutions with a non-singular classical bounce,” Phys. Lett. B [**764**]{}, 289 (2017) \[arXiv:1609.01253 \[gr-qc\]\].
P. Creminelli, J. Gleyzes, J. Noreña and F. Vernizzi, “Resilience of the standard predictions for primordial tensor modes,” Phys. Rev. Lett. [**113**]{}, no. 23, 231301 (2014) \[arXiv:1407.8439 \[astro-ph.CO\]\].
[^1]: Our postulate on this point is different from that adopted in Ref. [@Ijjas:2016tpn], in which [*singular*]{} functions are introduced in the underlying Lagrangian to obtain non-singular cosmological solutions.
[^2]: Our postulate on this point is different from that adopted in Ref. [@Ijjas:2016wtc], in which all the coefficients in the quadratic action for cosmological perturbations vanish at the same moment.
---
abstract: 'We have analyzed the [*XMM-Newton*]{} and [*Chandra*]{} data overlapping $\sim$16.5 deg$^2$ of Sloan Digital Sky Survey Stripe 82, including $\sim$4.6 deg$^2$ of proprietary [*XMM-Newton*]{} data that we present here. In total, 3362 unique X-ray sources are detected at high significance. We derive the [*XMM-Newton*]{} number counts and compare them with our previously reported [*Chandra*]{} Log$N$-Log$S$ relations and other X-ray surveys. The Stripe 82 X-ray source lists have been matched to multi-wavelength catalogs using a maximum likelihood estimator algorithm. We discovered the highest redshift ($z=5.86$) quasar yet identified in an X-ray survey. We find 2.5 times more high luminosity (L$_x \geq 10^{45}$ erg s$^{-1}$) AGN than the smaller area [*Chandra*]{} and [*XMM-Newton*]{} survey of COSMOS and 1.3 times as many identified by XBoötes. Comparing the high luminosity AGN we have identified with those predicted by population synthesis models, our results suggest that this AGN population is a more important component of cosmic black hole growth than previously appreciated. Approximately a third of the X-ray sources not detected in the optical are identified in the infrared, making them candidates for the elusive population of obscured high luminosity AGN in the early universe.'
author:
- |
Stephanie M. LaMassa$^1$[^1], C. Megan Urry$^1$, Nico Cappelluti$^{2,3}$, Francesca Civano$^{4,5}$, Piero Ranalli$^{6}$, Eilat Glikman$^1$, Ezequiel Treister$^7$, Gordon Richards$^{8}$, David Ballantyne$^{9}$, Daniel Stern$^{10}$, Andrea Comastri$^{2}$, Carie Cardamone$^{11}$, Kevin Schawinski$^{12}$, Hans Böhringer$^{13}$, Gayoung Chon$^{13}$, Stephen S. Murray$^{14,4}$, Paul Green$^{4}$, Kirpal Nandra$^{13}$\
$^1$Yale Center for Astronomy & Astrophysics, Yale University, Physics Department, PO Box 208120, New Haven, CT, 06520-8120, USA;\
$^2$INAF - Osservatorio Astronomico di Bologna, Via Ranzani 1, I-40127 Bologna, Italy;\
$^3$University of Maryland, Baltimore County, Center for Space Science & Technology, Physics Department,\
1000 Hilltop Circle, Baltimore, MD 21250, USA;\
$^4$Dartmouth College, Physics & Astronomy Department, Wilder Lab, Hanover, NH 03755, USA;\
$^5$Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA;\
$^6$National Observatory of Athens, Greece;\
$^7$Universidad de Concepción, Casilla 160-c Concepción, Chile;\
$^{8}$Drexel University, Department of Physics, 3141 Chestnut Street, Philadelphia, PA 19104, USA;\
$^{9}$Center for Relativistic Astrophysics, School of Physics, Georgia Institute of Technology, Atlanta, GA;\
$^{10}$Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Mail Stop 169-221, Pasadena, CA 91109, USA;\
$^{11}$Brown University, The Harriet W. Sheridan Center for Teaching and Learning, Box 1912, 96 Waterman Street, Providence, RI 02912, USA;\
$^{12}$Institute for Astronomy, Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 16, CH-8093 Zurich, Switzerland;\
$^{13}$Max-Planck-Institut für extraterrestrische Physik, D-85748 Garching, Germany;\
$^{14}$The Johns Hopkins University, Department of Physics & Astronomy, 3400 N. Charles Street, Baltimore, MD 21218, USA
title: 'Finding Rare AGN: [*XMM-Newton*]{} and [*Chandra*]{} Observations of SDSS Stripe 82'
---
Introduction
============
Supermassive black holes (SMBHs) that reside in galactic centers grow by accretion in a phase where they appear as Active Galactic Nuclei (AGN). To understand AGN demography and evolution, large samples over a range of redshifts and luminosities are necessary. Extragalactic surveys provide an ideal mechanism for locating large enough samples of growing black holes to study the ensemble statistically. Large area surveys have been undertaken in the optical via, e.g., the Sloan Digital Sky Survey [SDSS, @dr9] and in the near-infrared (NIR) via the Wide-Field Infrared Survey Explorer [[*WISE*]{}, @wright] and the UKIRT Infrared Deep Sky Survey [UKIDSS, @lawrence], locating over 100,000 AGN in the optical and millions of AGN candidates in the infrared.
However, optical selection is not ideal for studying high-luminosity, high-redshift AGN (quasars) that are heavily reddened or obscured. At redshifts greater than 0.5, diagnostic diagrams that use ratios of narrow emission lines to identify Type 2 (obscured) AGN [e.g., @bpt; @kewley; @kauff] become inefficient as H$\alpha$ is shifted out of the optical. Such Type 2 AGN can be found using alternate rest-frame optical diagnostics, e.g., ratios of narrow emission lines versus $g - z$ color [TBT, @trouille] and versus stellar mass [MEx, @juneau], probing out to redshifts $z<1.4$ and $z<1$, respectively. Narrow rest-frame UV emission lines also allow identification of SMBH accretion at $z>0.5$. Alternatively, obscured AGN candidates can be followed up with ground-based infrared spectroscopy to detect redshifted H$\alpha$ and \[NII\]$\lambda$6584. However, the @kewley and @kauff boundaries between star-forming galaxies, composites and Sy2s are only calibrated at low redshifts. As galaxies at $z>0.5$ have lower metallicities, it is unclear whether these dividing lines can unambiguously identify signatures of SMBH accretion.
The reliability of infrared color selection varies with the depth of the data, with [*Spitzer*]{} IRAC color cuts [@stern1; @lacy] and [*WISE*]{} color cuts [@stern; @assef] being most applicable at shallow depths. At fainter fluxes, contamination from normal galaxies can become appreciable [@cardamone; @donley; @mendez]. The revised IRAC color selection from @donley is more reliable for deeper data, yet at X-ray luminosities exceeding 10$^{44}$ erg s$^{-1}$, 25% (32%) of the [*XMM-Newton*]{}- ([*Chandra*]{}-) selected AGN are not recovered with this MIR identification method.
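To make the shallow-depth MIR selection concrete, the sketch below applies the two-band [*WISE*]{} color criterion of @stern, $W1-W2 \geq 0.8$ (Vega magnitudes), to a pair of invented catalog rows; the source names and magnitudes are placeholders for illustration, not survey data:

```python
# Minimal sketch of mid-infrared AGN candidate selection with the
# two-band WISE color cut of Stern et al. (2012): W1 - W2 >= 0.8
# (Vega magnitudes). The catalog rows below are invented placeholders.

sources = [
    {"name": "src1", "W1": 14.2, "W2": 13.1},  # W1-W2 = 1.1 -> AGN-like
    {"name": "src2", "W1": 15.0, "W2": 14.7},  # W1-W2 = 0.3 -> galaxy-like
]

def is_wise_agn_candidate(src, cut=0.8):
    """Apply the W1 - W2 >= `cut` color criterion to one catalog row."""
    return src["W1"] - src["W2"] >= cut

candidates = [s["name"] for s in sources if is_wise_agn_candidate(s)]
print(candidates)  # -> ['src1']
```

At faint fluxes the same cut admits increasing numbers of normal galaxies, which is why the deeper-data criteria discussed above trade simplicity for reliability.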
X-rays provide an alternate way to search for AGN because they can pierce through large amounts of dust and gas, complementing the optical and MIR identification techniques to provide a comprehensive view of black hole growth over cosmic time. X-ray emission is visible out to cosmological distances as long as it is not attenuated by Compton-thick (N$_H \geq 10^{24}$ cm$^{-2}$) obscuration. Normal star formation processes rarely produce an X-ray luminosity above 10$^{42}$ erg s$^{-1}$ [e.g., @persic; @bh], whereas AGN luminosities extend to $\sim10^{46}$ erg s$^{-1}$, making X-ray selection an efficient means of locating AGN at all redshifts. Indeed, X-ray surveys such as the [*Chandra*]{} Deep Fields North [@cdfn] and South [@giacconi; @cdfs], Extended [*Chandra*]{} Deep Field South [@lehmer; @virani], [*XMM-Newton*]{} survey of the [*Chandra*]{} Deep Field South [@comastri; @ranalli], [*XMM-Newton*]{} and [*Chandra*]{} surveys of COSMOS [@cap; @cap09; @C-Cosmos; @brusa3; @civano_mle], XBoötes [@murray; @kenter], the [*XMM-Newton*]{} survey of the Lockman Hole [@brunner], [*Chandra*]{} observations of All-Wavelength Extended Groth Strip International Survey [AEGIS, @aegis; @aegis2], XDEEP2 [@deep2], the [*XMM-Newton*]{} Serendipitous Survey [@Mateos] and the [*Chandra*]{} multi-wavelength campaign [ChaMP, @champ], have identified thousands of AGN, contributing significantly to our knowledge of AGN demography and galaxy and SMBH co-evolution.
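The luminosity threshold above can be made concrete with a short calculation: given an observed flux, $L_x = 4\pi d_L^2 f_x$, with $d_L$ the luminosity distance. The sketch below assumes a flat $\Lambda$CDM cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.3$ (illustrative parameter choices, not values quoted in this paper):

```python
import math

# Hedged sketch: convert an observed X-ray flux into a luminosity,
# L_x = 4*pi*d_L^2 * f_x, to compare against the ~1e42 erg/s threshold
# separating star formation from AGN. The cosmological parameters
# (H0 = 70 km/s/Mpc, Omega_m = 0.3, flat) are illustrative assumptions.

C_KM_S = 2.998e5   # speed of light [km/s]
MPC_CM = 3.086e24  # 1 Mpc in cm
H0, OM, OL = 70.0, 0.3, 0.7

def lum_distance_cm(z, n=10000):
    """Flat-LCDM luminosity distance: d_L = (1+z)(c/H0) int_0^z dz'/E(z')."""
    h = z / n
    integral = 0.0
    for i in range(n + 1):
        zp = i * h
        w = 0.5 if i in (0, n) else 1.0
        integral += w / math.sqrt(OM * (1.0 + zp) ** 3 + OL)
    integral *= h
    return (1.0 + z) * (C_KM_S / H0) * integral * MPC_CM

def x_ray_luminosity(flux_cgs, z):
    """L_x [erg/s] from a flux [erg/s/cm^2] at redshift z."""
    d_l = lum_distance_cm(z)
    return 4.0 * math.pi * d_l ** 2 * flux_cgs

# A flux of 1e-14 erg/s/cm^2 at z = 1 already implies L_x ~ 5e43 erg/s,
# well above what normal star formation can supply:
print(x_ray_luminosity(1e-14, 1.0))
```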
However, most of these X-ray surveys cover small ($<$1 deg$^2$) to moderate (3-5 deg$^2$) areas, sacrificing area for depth to uncover the faintest X-ray objects. The [*XMM*]{}-COSMOS [@cap; @cap09; @brusa3], [*Chandra*]{} COSMOS [@C-Cosmos; @civano_mle] and ongoing [*Chandra*]{} COSMOS Legacy Project (PI: Civano) strikes a good balance of moderate area at moderate depth to populate a large portion of the L$_x-z$ plane. But sources that are rare, like high luminosity and/or high redshift AGN, are under-represented in these small to moderate area X-ray samples as a larger volume of the Universe must be probed to locate them.
Considerable follow-up (optical/near-infrared imaging and spectroscopy) is needed to identify X-ray sources, and multi-wavelength data are needed to classify these objects. Since spectroscopic campaigns and multi-wavelength follow-up are time intensive, the output from wide area surveys such as XBoötes [$\sim$9 deg$^2$, @kenter; @kochanek], ChaMP [$\sim$33 deg$^2$, @champ; @trichas] and [*XMM*]{}-LSS [$\sim$11 deg$^2$, the first part of the expanded [*XMM*]{}-XXL 50 deg$^2$ survey, @lss1; @lss2], has taken many years to achieve. The high-redshift, high-luminosity X-ray-selected AGN population therefore remains poorly explored, prohibiting a comprehensive view of black hole growth.
To address this gap, we have begun a wide area X-ray survey in a region that already has a rich investment in multi-wavelength data and a high level of optical spectroscopic completeness ($>$ 400 objects deg$^{-2}$): the SDSS Stripe 82 region, which spans 300 deg$^2$ along the celestial equator (-60$^{\circ}$ $<$ R.A. $<$ 60$^{\circ}$, -1.25$^{\circ}$ $<$ Dec $<$ 1.25$^{\circ}$). The current non-overlapping X-ray coverage in Stripe 82 from archival [*Chandra*]{} and archival and proprietary [*XMM-Newton*]{} observations is $\sim$16.5 deg$^2$. The distribution of these pointings across Stripe 82 is shown in Figure \[pointings\]. As we are endeavoring to increase the survey area to $\sim$100 deg$^2$, we dub the present survey ‘Stripe 82X Pilot.’ Here we follow up on the work presented in @me, where we focused on just the [*Chandra*]{} overlap with Stripe 82, by adding in $\sim$10.5 deg$^2$ of [*XMM-Newton*]{} observations, 4.6 deg$^2$ of which were obtained by us as part of an approved AO10 proposal (PI: Urry), with the observations performed in ‘mosaic’ mode. We then match both catalogs to large optical [SDSS, @dr9], NIR [UKIDSS and [*WISE*]{}, @lawrence; @wright], ultraviolet [[*GALEX*]{}, @morrissey] and radio datasets [FIRST, @first] in this region. Observations covering Stripe 82 with [*Spitzer*]{} (P.I. Richards) and analysis of [*Herschel*]{} observations overlapping $\sim$55 deg$^2$ of the region (P.I. Viero) are on-going.
In Section 2, we discuss the reduction and analysis of the archival and proprietary mosaicked [*XMM-Newton*]{} data in Stripe 82. We use these data to calculate area-flux curves and in Section 3 present the Log$N$-Log$S$ relations, which we compare to the [*Chandra*]{} Stripe 82 number counts [@me] and those from other X-ray surveys. We then describe in Section 4 the matching of the [*XMM-Newton*]{} and [*Chandra*]{} X-ray source lists with multi-wavelength catalogs, producing multi-wavelength source lists. In Section 5, we describe the general characteristics of the Stripe 82X sample so far. In particular, we highlight the interesting science gaps our data are primed to fill: uncovering the population of rare high-luminosity AGN at high redshift and identifying candidates for high-luminosity obscured AGN at $z>1$. We have adopted a cosmology of H$_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_M$ = 0.27 and $\Omega_\Lambda$=0.73 throughout the paper.
[![\[pointings\] X-ray observations overlapping Stripe 82 used in this analysis, with [*Chandra*]{} observations shown as black diamonds and [*XMM-Newton*]{} pointings depicted as red circles. The dense [*Chandra*]{} pointings are part of the XDEEP2 survey [@deep2] while the dense [*XMM-Newton*]{} groupings represent the positions of the proprietary mosaicked observations we were awarded in AO 10.](pointings.eps "fig:")]{}
[*XMM-Newton*]{} Data Reduction
===============================
Archival Observations
---------------------
Fifty-seven [*XMM-Newton*]{} EPIC non-calibration observations overlap Stripe 82. Of these, twenty-four were removed due to flaring, substantial pile-up and read-out streaks, small-window mode set-up, or extended emission spanning the majority of the detector, all of which complicate serendipitous detections of point sources in the field. We were left with 33 archival observations well suited for our analysis, listed in Table \[obs\_summary\] and shown as red circles in Figure \[pointings\]; for 3 of these we dropped the PN detector due to significant pile-up which did not affect the MOS detectors as seriously.
The raw observational data files (ODFs) were processed with [*XMM-Newton*]{} Standard Analysis System (SAS) version 11. SAS tasks [*emchain*]{} and [*epchain*]{} were run to generate MOS1 and MOS2 event files as well as PN and PN out-of-Time (OoT) event files. OoT events result from photons detected during CCD readout, when photons are recorded at random positions along the readout column in the $y$ direction; the subsequent energy correction for these events is then incorrect. The fraction of OoT events is highest for the PN detector in full frame mode, affecting $\sim$6.3% of observing time. By generating simulated OoT event files, the PN images can be statistically corrected for this effect.
Good time intervals (GTIs) were applied to the data by searching for flaring in the high energy background (10-12 keV for MOS, 12-14 keV for PN and PN OoT), removing intervals where the count rate was $\geq$3$\sigma$ above the average. Low energy flares were removed from this filtered event list by removing intervals where the count rate was $\geq$3$\sigma$ above the average in the 0.3-10 keV range. In both the high energy and low energy cleaning, GTIs were extracted from single events (i.e., PATTERN = 0).
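As a rough illustration of the light-curve cleaning above (this is not the actual SAS tooling; the bin values, single clipping statistic, and iteration-to-convergence policy are our assumptions), the 3$\sigma$ flare filtering can be sketched as:

```python
from statistics import mean, stdev

def clip_flares(rates, nsigma=3.0):
    """Iteratively drop time bins whose count rate is >= nsigma standard
    deviations above the mean of the surviving bins; return kept indices."""
    kept = list(range(len(rates)))
    while True:
        vals = [rates[i] for i in kept]
        mu, sigma = mean(vals), stdev(vals)
        new = [i for i in kept if rates[i] < mu + nsigma * sigma]
        if len(new) == len(kept):
            return kept
        kept = new

# A quiescent light curve (ct/s per bin) with one strong flare at the end:
rates = [1.0] * 10 + [1.1] * 10 + [25.0]
good = clip_flares(rates)  # the flare bin (index 20) is excluded
```

Iteration matters here: a single pass can miss a flare because the flare itself inflates the mean and standard deviation it is tested against.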
MOS images were extracted from all valid events (PATTERN 0 to 12) whereas the PN and PN OoT images were extracted from the single and double events only (PATTERN 0 to 4). To avoid emission line features from the detector background (i.e., Al K$\alpha$ at 1.48 keV), the energy range 1.45 to 1.54 keV was excluded when extracting images from both the MOS and PN detectors. The PN background also has strong emission from Cu at $\sim$7.4 and $\sim$8.0 keV, so the 7.2-7.6 keV and 7.8-8.2 keV ranges were also excluded when extracting images from the PN detector. The PN OoT images were scaled by 0.063 to account for the loss of observing time due to photon detection during CCD readout, and were then subtracted from the PN images. Finally, MOS and PN images were extracted in the standard 0.5-2 keV, 2-10 keV and 0.5-10 keV ranges and were added among the detectors in each energy band.[^2]
Exposure maps were generated using the SAS task [*eexpmap*]{} for each detector and energy range. Since vignetting, the decrease in effective area with off-axis distance, increases as a function of energy, we created spectrally weighted exposure maps: the mean energy at which the maps were calculated was found assuming a spectral model where, consistent with previous [*XMM-Newton*]{} surveys [e.g. @cap], $\Gamma$=2.0 in the soft band and $\Gamma$=1.7 in the hard and full bands, since the spectral slope of AGN in the soft band tends to be steeper than in the hard band. The same spectral model was used to derive energy conversion factors (ECFs) to transform count rates to physical flux units, where the ECF depends on the filter for the observation and was calculated via PIMMS[^3] (see Table \[ecf\] for a summary). The exposure maps were added among the three detectors for each observation, normalized by these ECFs.[^4]
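To make the role of the ECFs concrete, here is a minimal count-rate-to-flux conversion using the Table \[ecf\] values. The table does not state the ECF units; we assume the common [*XMM-Newton*]{} convention of ECFs in $10^{11}$ counts cm$^2$ erg$^{-1}$, so this is a sketch rather than the exact pipeline arithmetic:

```python
# ECF values transcribed from Table 2 (thin filter); units are ASSUMED to
# be 10^11 counts cm^2 erg^-1, which is not stated in the table itself.
ECF_THIN = {
    ("PN", "soft"): 7.45, ("PN", "hard"): 1.22, ("PN", "full"): 3.26,
    ("MOS", "soft"): 2.00, ("MOS", "hard"): 0.45, ("MOS", "full"): 0.97,
}

def rate_to_flux(rate_cts_s, detector, band):
    """Convert a count rate (cts/s) to a flux (erg cm^-2 s^-1)."""
    return rate_cts_s / (ECF_THIN[(detector, band)] * 1e11)

# A faint 0.01 ct/s soft-band PN source corresponds to ~1.3e-14 cgs:
flux = rate_to_flux(0.01, "PN", "soft")
```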
Two regions in Stripe 82 had multiple X-ray observations (ObsIDs 0056020301, 0312190401 and 0111200101, 0111200201). In order to detect sources from these overlapping observations simultaneously, the events files were mapped to a common set of WCS coordinates using SAS task [*attcalc*]{} to update the ‘RA\_NOM’ and ‘DEC\_NOM’ header keywords. The subsequent data products (e.g., images, exposure maps, background maps, detector masks) then share common coordinates. Before running the source detection in ‘raster’ mode (see Section 2.4), the header keywords ‘EXP\_ID’ and ‘INSTRUME’ for these files were updated to common values.
[lrrrr]{} Obs. ID & R.A. & Dec & Detectors & Exp time\
& & & &(ks)\
0036540101 & 54.64 & 0.34 & MOS1,MOS2,PN & 21.77\
0041170101 & 45.68 & 0.11 & MOS1,MOS2,PN & 50.04\
0042341301$^{2}$ & 354.44 & 0.26 & MOS1,MOS2,PN & 13.36\
0056020301$^{2,3}$ & 44.16 & 0.08 & MOS1,MOS2,PN & 23.37\
0066950301$^{2}$ & 349.54 & 0.28 & MOS1,MOS2 & 11.45\
0084230401$^{2}$ & 28.20 & 0.99 & PN & 23.74\
0090070201$^{2}$ & 10.85 & 0.84 & MOS1,MOS2,PN & 20.53\
0093030201$^{2}$ & 322.43 & 0.07 & MOS1,MOS2,PN & 57.43\
0101640201$^{2}$ & 29.94 & 0.41 & MOS1,MOS2,PN & 10.54\
0111180201$^{2}$ & 310.06 & -0.89 & MOS1,MOS2,PN & 16.31\
0111200101$^{1,2,3}$ & 40.65 & 0.00 & MOS1,MOS2 & 38.39\
0111200201$^{1,2,3}$ & 40.65 & 0.00 & MOS1,MOS2 & 37.99\
0116710901$^{2}$ & 54.20 & 0.59 & MOS1,MOS2 & 7.64\
0134920901 & 58.45 & -0.10 & MOS1,MOS2,PN & 18.69\
0142610101$^{2}$ & 46.69 & 0.00 & PN & 65.89\
0147580401 & 356.88 & 0.88 & MOS1,MOS2,PN & 15.12\
0200430101 & 55.32 & -1.32 & MOS1,MOS2,PN & 11.46\
0200480401 & 37.76 & -1.03 & MOS1,MOS2,PN & 16.07\
0203160201$^{1,2}$ & 46.22 & 0.06 & MOS1,MOS2 & 15.08\
0203690101 & 9.83 & 0.85 & MOS1,MOS2,PN & 47.31\
0211280101$^{2}$ & 355.89 & 0.34 & MOS1,MOS2,PN & 40.68\
0303110401 & 14.07 & 0.56 & MOS1,MOS2,PN & 11.09\
0303110801 & 359.55 & -0.14 & MOS1,MOS2,PN & 9.63\
0303562201 & 10.88 & 0.00 & MOS1,MOS2,PN & 6.57\
0304801201 & 323.39 & -0.84 & MOS1,MOS2,PN & 13.27\
0305751001 & 1.20 & 0.11 & MOS1,MOS2,PN & 15.07\
0307000701 & 45.97 & -1.12 & MOS1,MOS2,PN & 15.84\
0312190401$^{3}$ & 43.82 & -0.20 & MOS1,MOS2,PN & 11.63\
0400570301 & 19.75 & 0.65 & MOS1,MOS2,PN & 25.94\
0401180101 & 331.47 & -0.34 & MOS1,MOS2,PN & 40.13\
0402320201 & 53.64 & 0.09 & MOS1,MOS2,PN & 10.51\
0403760301 & 2.76 & 0.86 & MOS1,MOS2,PN & 25.46\
0407030101$^{2}$ & 5.58 & 0.26 & MOS1,MOS2,PN & 27.15\
[lllllll]{} Band & PN & PN & PN & MOS & MOS & MOS\
& Thin & Medium & Thick & Thin & Medium & Thick\
Soft (0.5-2 keV) & 7.45 & 7.36 & 5.91 & 2.00 & 1.87 & 1.67\
Hard (2-10 keV) & 1.22 & 1.24 & 1.19 & 0.45 & 0.42 & 0.43\
Full (0.5-10 keV) & 3.26 & 3.25 & 2.75 & 0.97 & 0.91 & 0.85\
Proprietary Observations
------------------------
We were awarded 2 [*XMM-Newton*]{} mosaicked pointings in AO 10 (PI: C. Megan Urry, ObsIDs: 0673000101 (‘Stripe 82 XMM field 1’), 0673002301 (‘Stripe 82 XMM field 2’)), covering $\sim$4.6 deg$^2$. With this observing strategy, each pointing has $\sim$4.56 ks of exposure time, with adjacent pointings separated by 15$^{\prime}$. The exposure time in the regions with greatest overlap reaches a depth of $\sim$12 ks. The [*XMM-Newton*]{} mosaic procedure enables a relatively large region to be surveyed, in this case $\sim$2.5 deg$^2$ per mosaic, while minimizing overhead: after the first pointing, the EPIC offset tables do not need to be recalculated (PN) or re-uploaded (MOS). Each mosaic was made up of 22 individual, overlapping pointings, for a total observing time of 240 ks between both mosaics.
We split the events files for the mosaicked observations into individual pseudo-exposures using the SAS task [*emosaic\_prep*]{}. Each pseudo-exposure is then reduced in the same way as the archival pointings, producing cleaned events files, spectrally weighted exposure maps and appropriately modeled background maps (see below). As with overlapping archival observations, ‘RA\_NOM’, ‘DEC\_NOM’, ‘EXP\_ID’ and ‘INSTRUME’ were updated to common values, but ‘RA\_PNT’ and ‘DEC\_PNT’ also had to be set manually to reflect the center coordinates of each pointing for the point spread function (PSF) to be calculated correctly during source detection. One of the pointings from ObsID 0673002301 (pseudo-exposure field 22) was afflicted by flaring and consequently was not used in the source detection.
Background Modeling
-------------------
Following @cap, we used the following algorithm to model the background. First we created detection masks for each detector in each energy band for each observation and then ran the SAS task [*eboxdetect*]{} with a low detection probability ($likemin$ = 4) to generate a preliminary list of detected sources. The positions of these sources were then masked out when generating the background maps. Regions of significant extended emission (radius $>$1$^{\prime}$), piled-up sources and read-out streaks were also masked out manually.
As noted by @cap, the background has two components: unresolved X-ray emission, which comprises the cosmic X-ray background (CXB), and the local particle and detector background. The former is subject to vignetting while the latter is not. The residual area (i.e., regions where no sources are detected) was split into two parts based on the median of the effective exposure. Regions above the median, with low vignetting, are dominated by the CXB, whereas the detector background becomes more important below the median effective exposure. We set up templates to account for these two components of the background:
$$AM_{1,v} + BM_{1,unv} = C_1$$
$$AM_{2,v} + BM_{2,unv} = C_2,$$
where $M_{1,v}$ and $M_{2,v}$ are the vignetted exposure maps for the areas above and below the median effective exposure time, respectively; $M_{1,unv}$ and $M_{2,unv}$ are the unvignetted template exposure maps, and $C_{1}$ and $C_{2}$ are the background counts. We solve this system of linear equations for the normalizations $A$ and $B$. The vignetted and unvignetted exposure maps are normalized by $A$ and $B$ respectively and then added to obtain the background map for each detector and observation. The background maps among the multiple detectors were added, giving one background map per observation.
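The two template equations form a 2$\times$2 linear system in the normalizations $A$ and $B$. A minimal sketch of the solve, using Cramer's rule in place of whatever solver the pipeline actually employs (the numerical inputs are illustrative, not from real maps):

```python
def solve_background_norms(m1v, m1unv, c1, m2v, m2unv, c2):
    """Solve A*m1v + B*m1unv = c1 and A*m2v + B*m2unv = c2 for the
    vignetted (A, CXB-like) and unvignetted (B, particle/detector)
    background normalizations via Cramer's rule."""
    det = m1v * m2unv - m2v * m1unv
    a = (c1 * m2unv - c2 * m1unv) / det
    b = (m1v * c2 - m2v * c1) / det
    return a, b

# Illustrative summed template values and counts for the two regions:
A, B = solve_background_norms(m1v=4.0, m1unv=2.0, c1=10.0,
                              m2v=1.0, m2unv=3.0, c2=5.0)
# Sanity check: A*4 + B*2 == 10 and A*1 + B*3 == 5
```

The normalized vignetted and unvignetted maps are then summed to form the per-detector background map, as described above.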
\[src\_detect\] Source Detection
--------------------------------
We ran the source detection algorithm using the combined images, exposure maps and background maps generated as described above. We created detector masks on the combined images using the SAS task [*emask*]{}. For 15 observations, we manually updated these masks to screen out regions of extended emission, piled-up sources and read-out streaks, as noted in Section 2.3 (see Table \[obs\_summary\]). A preliminary list of sources was generated with the SAS task [*eboxdetect*]{}, which is a sliding box detection algorithm run in ‘map’ mode, where source counts are detected in a 5$\times$5 pixel box with a low probability threshold ($likemin = 4$). The source list generated by [*eboxdetect*]{} is used as an input for the SAS task [*emldetect*]{}, which performs a maximum likelihood point-source PSF fit to the source count distribution, using a likelihood threshold ($det\_ml$) of 6, where $det\_ml = -\ln P_{\rm random}$, with $P_{\rm random}$ being the Poisson probability that a detection is due to random fluctuations. We ran [*emldetect*]{} with the option to fit extended sources, where the PSF is convolved with a $\beta$ model profile. All extended sources (i.e., $ext$ flag $>$0 in the source list output by [*emldetect*]{}) are omitted from further analysis in this paper.
For overlapping archival observations, [*eboxdetect*]{} and [*emldetect*]{} were run in ‘raster’ mode, i.e., these tasks were run on an input list of images, exposure maps, detector masks and background maps, which as noted above were remapped to a common WCS grid. The source detection algorithm was run separately for the soft, hard and broad bands for the overlapping observations but simultaneously for the non-overlapping pointings; memory constraints precluded running [*eboxdetect*]{} and [*emldetect*]{} simultaneously for overlapping observations in multiple energy bands. The ECFs reported in Table \[ecf\] are summed among the detectors turned on for each observation and given as input in the source detection algorithm, converting count rates into physical flux units.
The 22 pointings for each mosaicked observation could not be fit simultaneously for source detection due to computational memory constraints. Instead, each group of mosaicked pointings was split into sub-groups so that source detection was run on two adjacent ‘rows’ in R.A. to accommodate overlapping pointings. Other than the pointings on the Eastern and Western edges of the mosaic, each R.A. row was included in two source detection runs to account for overlap and ensure the deepest possible exposures. Similar to the overlapping archival observations, the source detection was run separately for the soft, hard and full bands. From the source lists, we then generated a list of individual sources and searched for the inevitable duplicate identifications of the same source, since portions of every field were in more than one source detection fitting run. Similar to the algorithm used for the Serendipitous [*XMM-Newton*]{} Source Catalog to identify duplicates [@Watson], if the distance between any two sources is less than $d_{\rm cutoff}$ = min(0.9$\times d_{nn,1}$, 0.9$\times d_{nn,2}$, 15$^{\prime\prime}$, 3$\times(\sqrt{ra\_dec\_err_{1}^2+sys\_err^2} +\sqrt{ra\_dec\_err_{2}^2+sys\_err^2})$), where $d_{nn}$ is the distance between a source and its nearest neighbor in that pointing, $ra\_dec\_err$ is the positional X-ray error returned by [*emldetect*]{}, and $sys\_err$ is the systematic positional error (taken to be 1$^{\prime\prime}$), we consider the sources to be the same. We then chose the source with the higher $det\_ml$ as the detection from which to derive the position, positional error, flux and flux error. We chose a maximum search radius of 15$^{\prime\prime}$ based in part on the results of the simulations: matching the input simulated list to the detected source list showed that this threshold maximizes identification of counterparts while minimizing spurious associations.
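The duplicate-matching criterion above can be sketched directly. This is an illustrative helper of our own (flat-sky tangent-plane coordinates in arcseconds are assumed; real catalog matching would use proper spherical separations):

```python
from math import hypot, sqrt

SYS_ERR = 1.0  # 1 arcsec systematic positional error, as adopted in the text

def same_source(pos1, pos2, err1, err2, dnn1, dnn2):
    """Return True if two detections fall within d_cutoff of each other.
    Positions and positional errors are in arcsec (flat-sky approximation);
    dnn1/dnn2 are each source's nearest-neighbor distance in its pointing."""
    sep = hypot(pos1[0] - pos2[0], pos1[1] - pos2[1])
    tot_err = sqrt(err1 ** 2 + SYS_ERR ** 2) + sqrt(err2 ** 2 + SYS_ERR ** 2)
    d_cutoff = min(0.9 * dnn1, 0.9 * dnn2, 15.0, 3.0 * tot_err)
    return sep < d_cutoff
```

When the criterion is satisfied, the detection with the higher $det\_ml$ supplies the adopted position, positional error, flux and flux error, as described above.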
To merge the separate soft, hard and full band source lists into one single source list for the archival overlapping and mosaicked observations, we identified duplicate sources using the method described above. The positions among (or between, for cases where a match was found in 2 rather than 3 bands) the bands were averaged and the positional errors were added in quadrature. In our final point source list, we remove extended objects (i.e., where $ext > 0$ as reported by $emldetect$) and only include the objects where $det\_ml \ge$15 (5$\sigma$ significance) in at least one of the energy bands, to reduce spurious identifications and assure our catalog contains reliable X-ray detections [see @Mateos; @Loaring]. As summarized in Table \[src\_num\], we detected 2358 X-ray sources, of which 1607 were found in archival observations and 751 were discovered in our proprietary program. Of this total number, 182 were detected only in the full band, 261 were identified solely in the soft band and 18 in just the hard band.
[lrrr]{} Band & Archival & Proprietary & Total\
Soft (0.5-2 keV) & 1438 & 635 & 2073\
Hard (2-10 keV) & 432 & 175 & 607\
Full (0.5-10 keV) & 1411 & 668 & 2079\
Total & 1607 & 751 & 2358\
Monte Carlo Simulations: Source Detection Reliability & Survey Coverage
-----------------------------------------------------------------------
To assess the source detection efficiency and the survey area as a function of limiting flux, we have performed detailed Monte Carlo simulations. First, we generated a list of random fluxes following a published Log$N$-Log$S$ distribution for each observation, using the fits to the XMM-COSMOS soft and hard bands number counts [@cap09] and the fit to the ChaMP full band number counts [@champ]. These simulated sources are placed in random positions across the detector. Using part of the simulator written for the [*XMM-Newton*]{} survey of the CDFS by @ranalli[^5], each input source list is convolved with the [*XMM-Newton*]{} PSF, generating simulated event lists for all detectors turned on during each observation. Similar to the procedure for the real data, images are extracted from these simulated events files and added among the detectors. The background map for each observation is added to the combined simulated image and then Poisson noise is added to the combined source image and background map to replicate real observations. The source detection on these simulated images is then executed in the same manner as the real data. We simulated 20 images per pointing, providing us with an adequate number of input and detected sources to gauge source detection reliability and assess survey sensitivity.
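The flux-drawing step can be sketched by inverse-transform sampling from a single power law, $N(>S) \propto S^{-\alpha}$, above a flux limit. The real simulations drew from the fitted XMM-COSMOS and ChaMP number counts (broken power laws), so the $\alpha$ and $s_{\rm min}$ below are purely illustrative:

```python
import random

def draw_fluxes(n, s_min=1e-15, alpha=1.5, seed=0):
    """Inverse-transform sampling from N(>S) = K * S**-alpha above s_min:
    with u uniform in (0, 1], S = s_min * u**(-1/alpha), so that
    P(S > s) = (s / s_min)**-alpha as required."""
    rng = random.Random(seed)
    return [s_min * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

fluxes = draw_fluxes(1000)
# All draws lie above the limit, and bright sources are rare, as expected
# for a declining cumulative distribution.
```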
To estimate the fraction of spurious and confused sources, we compare the sources detected significantly from the simulations ($det\_ml \geq 15$) with the input source list. We consider a detected source within 15$^{\prime\prime}$ of an input source as a match. Any detected object lacking an input counterpart is deemed spurious. The fraction of spurious sources is 0.49%, 0.37%, and 0.20% in the soft, hard and full bands, respectively. Following the prescription of @cap, a source is considered confused if $S_{\rm out}/(S_{\rm in} + 3\sigma_{\rm out}) > 1.5$, where $S_{\rm out}$ and $S_{\rm in}$ are the output and input fluxes of the counterparts and $\sigma_{out}$ is the error on the detected flux. We estimate our fraction of confused sources in the soft, hard, and full bands as 0.34%, 0.23%, and 0.34%, respectively.
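A compact sketch of the spurious/confused bookkeeping above, with simplifying assumptions of our own: flat-sky positions in arcseconds, and the brightest input source within the match radius taken as the counterpart (the actual input-output matching may differ):

```python
from math import hypot

MATCH_RADIUS = 15.0  # arcsec, as adopted in the simulations above

def classify(det, inputs):
    """Label a significantly detected simulated source.
    det = (x, y, flux, flux_err); inputs = list of (x, y, flux).
    'spurious' if no input source lies within 15 arcsec; 'confused' if
    S_out / (S_in + 3*sigma_out) > 1.5 for the matched counterpart."""
    x, y, s_out, sig_out = det
    near = [s for (xi, yi, s) in inputs if hypot(x - xi, y - yi) <= MATCH_RADIUS]
    if not near:
        return "spurious"
    s_in = max(near)  # assumption: brightest nearby input is the counterpart
    if s_out / (s_in + 3.0 * sig_out) > 1.5:
        return "confused"
    return "ok"
```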
From these simulations, we also accurately gauge our survey sensitivity by determining the distribution of fluxes for both input and significantly detected sources. The ratio of these distributions as a function of flux provides us with the area-flux curves shown in Figure \[area\_flux\], where we show the area-flux curves separately for the [*XMM-Newton*]{} proprietary data ($\sim$4.6 deg$^2$), proprietary and archival [*XMM-Newton*]{} data ($\sim$10.5 deg$^2$), [*XMM-Newton*]{} and [*Chandra*]{} coverage ($\sim$16.5 deg$^2$), and [*Chandra*]{}-COSMOS [$\sim$0.9 deg$^2$ @C-Cosmos] for comparison; we note that the fluxes in the [*Chandra*]{} hard (2-7 keV) and full (0.5-7 keV) bands were converted to 2-10 keV and 0.5-10 keV ranges using the assumed spectral models of $\Gamma$=1.7 for Stripe 82 and $\Gamma$=1.4 for [*Chandra*]{}-COSMOS. We reach down to approximate flux limits (at $\sim$0.1 deg$^2$ of coverage) of 1.4$\times10^{-15}$ erg s$^{-1}$ cm$^{-2}$, 1.2$\times10^{-14}$ erg s$^{-1}$ cm$^{-2}$ and 5.6$\times10^{-15}$ erg s$^{-1}$ cm$^{-2}$ with half-survey area at 4.7$\times10^{-15}$ erg s$^{-1}$ cm$^{-2}$, 3.1$\times10^{-14}$ erg s$^{-1}$ cm$^{-2}$ and 1.6$\times10^{-14}$ erg s$^{-1}$ cm$^{-2}$ in the soft, hard and full bands, respectively. From these curves, we then generate the number counts below.
log$N$ - log$S$
===============
We present the number density of point sources as a function of flux, i.e., the log$N$ - log$S$ relation. In integral form, the cumulative source distribution is represented by:
$$N(>S) = \sum_{i=1}^{N_s} \frac{1}{\Omega_i},$$
where N($>$S) is the number of sources with a flux greater than $S$ and $\Omega_{i}$ is the limiting sky coverage associated with the $i$th source. The associated variance is:
$$\sigma^2 = \sum_{i=1}^{N_s} \left(\frac{1}{\Omega_i}\right)^2.$$
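These two sums translate directly into code. A minimal sketch with illustrative (flux, coverage) pairs (the coverage $\Omega_i$ for each source would come from the area-flux curves of Section 2.5):

```python
from math import sqrt

def cumulative_counts(sources):
    """Build N(>S) and its error from (flux, omega) pairs, where omega is
    the sky coverage (deg^2) at each source's flux. Returns a list of
    (S, N(>S), sigma) ordered from bright to faint."""
    out = []
    n, var = 0.0, 0.0
    for s, omega in sorted(sources, key=lambda p: -p[0]):
        n += 1.0 / omega
        var += 1.0 / omega ** 2
        out.append((s, n, sqrt(var)))
    return out

# Two bright sources seen over the full ~16.5 deg^2 and one faint source
# detectable over only 0.5 deg^2 (illustrative values):
counts = cumulative_counts([(1e-13, 16.5), (5e-14, 16.5), (2e-15, 0.5)])
```

Note how the faint source, visible over a small area, contributes far more to $N(>S)$ than either bright one, which is exactly the coverage correction the $1/\Omega_i$ weighting implements.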
To avoid biasing our Log$N$-Log$S$ relations by the inclusion of targeted sources, we removed the closest object located within 30$^{\prime\prime}$ of the target R.A. and Dec, taken from $RA\_OBJ$ and $Dec\_OBJ$ in the FITS header. Of the 33 archival pointings, 18 had objects within 30$^{\prime\prime}$ of the nominal target positions. Three of these were not detected at a significant level (i.e., $det\_ml \geq 15$) in any given band and were not in our final source list. Thus, only 15 sources were excluded when generating the Log$N$-Log$S$. Of the remaining 15 archival pointings, 12 had central regions masked out due to extended emission or pile-up (presumably from the targeted source) while the other 3 had no sources detected within 30$^{\prime\prime}$ of the targeted position.
The number counts in the soft, hard and full bands are shown in Figure \[logn\_logs\]. We have also overplotted the upper and lower bounds of the [*Chandra*]{} Log$N$-Log$S$ from Stripe 82 (S82 ACX) for comparison, where we have re-calculated the source fluxes and survey sensitivity from @me using the same spectral model applied to the [*XMM-Newton*]{} data. We note that 12 [*Chandra*]{} non-cluster pointings used for generation of the Log$N$-Log$S$ presented in @me at least partially overlap the [*XMM-Newton*]{} observations ($\sim$1.2 deg$^2$ of overlap). Since the hard and full bands are defined in S82 ACX up to 7 keV, the [*Chandra*]{} fluxes have been adjusted assuming a powerlaw model of $\Gamma$=1.7 to convert to the energy ranges used in our [*XMM-Newton*]{} analysis (i.e., the [*Chandra*]{} fluxes have been multiplied by factors of 1.36 and 1.2 for the hard and full bands, respectively). The [*XMM-Newton*]{} and S82 ACX number counts are largely consistent, with slight discrepancies apparent at moderate fluxes in the hard band ($\sim 5\times10^{-14}$ erg cm$^{-2}$ s$^{-1} <$ S$_{\rm 2-10keV} < 2\times10^{-13}$ erg cm$^{-2}$ s$^{-1}$) and at the low [*XMM-Newton*]{} flux limit in the full band ($< 10^{-14}$ erg cm$^{-2}$ s$^{-1}$). However, as noted in @me, short exposure times in [*Chandra*]{} observations, which constitute the majority of Stripe 82 ACX, have an effect on the Log$N$-Log$S$ normalization in the hard band, making the offset between [*XMM-Newton*]{} and [*Chandra*]{} in this energy range unsurprising.
In Figure \[comp\_logns\], we compare the Stripe 82 ACX Log$N$-Log$S$ using the spectral model from @me and the one used here. In @me, we adopted a spectral model used in [*Chandra*]{} surveys to which we compared our results while here we used a spectral model consistent with previous [*XMM-Newton*]{} surveys, such as [*XMM*]{}-COSMOS [@cap]. The difference in the hard band number counts is slight with this change of assumed spectral model, but shifts the normalization to lower values in the soft and especially the full band where the median offset between the 1$\sigma$ error bars in the discrepant ranges is $\sim$10%.
We also compare our Log$N$-Log$S$ relationships with those from previous X-ray surveys, spanning from wide [2XMMi, 132 deg$^2$; @Mateos] to moderate [XMM-COSMOS, 2 deg$^2$; @cap; @cap09] to small areas [E-CDFS, 0.3 deg$^2$, [*XMM*]{}-CDFS, $\sim$0.25 deg$^2$; @lehmer; @ranalli]. Where possible, we aim to compare our data with other [*XMM-Newton*]{} surveys. However, the [*XMM-Newton*]{} survey in the CDFS [@ranalli] only produced the Log$N$-Log$S$ in the hard band, so we use the [*Chandra*]{} E-CDFS survey [@lehmer] for comparison in the soft band. No previous [*XMM-Newton*]{} survey has produced a full band Log$N$-Log$S$, so we compare our Stripe 82 number counts with the small area [*Chandra*]{}-COSMOS [C-COSMOS, 0.9 deg$^2$; @C-Cosmos] and wide area ChaMP surveys [9.6 deg$^2$; @champ]. As ChaMP defines the full band to be 0.5-8 keV, their fluxes were adjusted to match our 0.5-10 keV range using their adopted spectral model (i.e., multiplied by a factor of 1.18). We note that the spectral shapes over a broad band are not well constrained, making the energy conversion factors in this range approximate and comparisons with number counts using other model assumptions difficult to quantify; these comparisons are for illustrative purposes. The model predictions from @Gilli have also been overplotted in the soft and hard bands.
The Stripe 82 [*XMM-Newton*]{} number counts are consistent with previous [*XMM-Newton*]{} surveys in the hard band. The 2XMM Log$N$-Log$S$ from @Mateos is systematically higher than our data in the soft band. However, they note that their 0.5-2 keV number counts are higher than several other X-ray surveys, which they attribute to the inclusion of moderately extended sources in their catalog. Similar to other surveys, we include only point sources, making this soft band discrepancy with @Mateos not surprising. Stripe 82 [*XMM-Newton*]{} is fully consistent with E-CDFS [@lehmer] and the model predictions from @Gilli in this energy range. Though the normalization for the full band Stripe 82 [*XMM-Newton*]{} Log$N$-Log$S$ seems low compared to ChaMP and C-COSMOS, this is likely due to differences in spectral models to convert from count rate to fluxes: ChaMP and C-COSMOS adopt a powerlaw model with $\Gamma=1.4$ whereas we use $\Gamma=1.7$. As shown in Figure \[comp\_logns\] (c), the difference between these two spectral models shifts the full band number counts in the right sense to account for the observed disagreement between Stripe 82 [*XMM-Newton*]{} and C-COSMOS and ChaMP. We also note that the ChaMP number counts seem to be somewhat higher than other [*Chandra*]{} surveys [@me] while C-COSMOS shows better agreement with our calculations.
As we show below, these X-ray objects do preferentially sample the high luminosity AGN population and include candidates for interesting rare objects: reddened quasars and high luminosity AGN at high redshift. In a future paper, we will quantify the evolution of these sources by generating the quasar luminosity function, beginning with the Log$N$-Log$S$ relations presented here.
Multi-wavelength Source Matching via Maximum Likelihood Estimator
=================================================================
The Stripe 82 X-ray source lists represent the [*XMM-Newton*]{} objects found above and the [*Chandra*]{} sources detected at $\geq 4.5\sigma$ level from all pointings overlapping the Stripe 82 area. In @me, we presented only those observations that did not target galaxy clusters, covering an area of $\sim$6.2 deg$^2$, garnering 709 objects. Inclusion of the previously omitted [*Chandra*]{} pointings adds an additional 1.2 deg$^2$ to produce a total of 1146 X-ray sources. About 1.5 deg$^2$ of the full 7.4 deg$^2$ of [*Chandra*]{} coverage in Stripe 82 overlaps the [*XMM-Newton*]{} pointings. Using the method described above to find duplicate observations of the same X-ray object, we cross-matched the [*XMM-Newton*]{} and [*Chandra*]{} source lists, finding 3362 unique objects over $\sim$16.5 deg$^2$ of non-overlapping area.
To assign multi-wavelength counterparts to the Stripe 82 X-ray sources, we employed a maximum likelihood estimator (MLE) algorithm which takes into account the distance between potential matches and the brightness of the ancillary counterpart [@mle]. The ancillary source at the closest distance to the X-ray object, as found using the nearest neighbor method, may not be the true match, but may instead be a spurious association due to random chance. As there are many more faint than bright objects, an association between a bright source and an X-ray target is more likely to represent a true counterpart than a match to a faint source. The MLE technique codifies this statistically, assigning reliability values to each potential match and has been successfully implemented in multi-wavelength catalog matching in previous X-ray surveys [e.g., @brusa1; @brusa2; @cardamone; @luo; @brusa3; @civano_mle].
All the objects within a search radius ($r_{\rm search}$) around each X-ray target are assigned a likelihood ratio ($LR$), which is the probability that the correct counterpart is found within $r_{\rm search}$ divided by the probability of finding an unassociated object by chance:
$$LR = \frac{q(m)f(r)}{n(m)},$$
where $q(m)$ is the expected normalized magnitude distribution of ancillary counterparts, $f(r)$ is the probability distribution of the positional errors (which is assumed to be a two-dimensional Gaussian, where $\sigma$ is derived by adding the X-ray and ancillary positional errors in quadrature), and $n(m)$ is the magnitude distribution of background sources. For the positional [*Chandra*]{} uncertainty, we added the major and minor axes of the 95% confidence level error ellipse, $err\_ellipse\_r0$ and $err\_ellipse\_r1$, in quadrature, while [*XMM-Newton*]{} positional errors are from the [*emldetect*]{} source detection script added in quadrature to a 1$^{\prime\prime}$ systematic error.[^6] As noted below, for ancillary catalogs where a positional error is not quoted, we adopted a uniform, survey dependent, positional uncertainty. Since [*Chandra*]{} has higher resolution and a smaller on-axis PSF than [*XMM-Newton*]{}, we chose different radii to search for ancillary counterparts for each catalog. For [*Chandra*]{} objects, $r_{\rm search}$ = 5$^{\prime\prime}$ [@civano_mle] while for [*XMM-Newton*]{} sources, $r_{\rm search}$ = 7$^{\prime\prime}$ [@brusa3]; the positional errors for 88% of [*Chandra*]{} and 99.9% of [*XMM-Newton*]{} sources are below the adopted search radii.
To determine the background distribution $n(m)$, we isolate the ancillary sources within an annulus around each X-ray source, with inner and outer radii of 7$^{\prime\prime}$ and 30$^{\prime\prime}$ for [*Chandra*]{} and 10$^{\prime\prime}$ and 45$^{\prime\prime}$ for [*XMM-Newton*]{}. The inner radius is chosen to avoid the inclusion of real counterparts and the outer radius is picked to ensure a large number of sources to estimate the background while minimizing overlap with other X-ray sources. Within these annular regions, there were 53 [*Chandra*]{} pairs and 7 [*Chandra*]{} triples ($\sim11\%$ of the sample) and 49 [*XMM-Newton*]{} pairs and 3 [*XMM-Newton*]{} triples ($\sim4\%$ of the sample), i.e., only a small fraction of the background histogram has duplicate objects.
We then calculate $q^\prime(m)$ as follows: we first find the magnitude distribution of ancillary objects within $r_{\rm search}$ of each X-ray source and divide by the search area to obtain a source density per magnitude; we similarly convert $n(m)$ to a density per unit area. The difference between the two is the expected density of true counterparts as a function of magnitude, and multiplying this difference by the search area gives $q^\prime(m)$. We then normalize $q^\prime(m)$ to $Q$, the ratio of the number of X-ray sources with counterparts found within $r_{\rm search}$ to the total number of X-ray sources, producing $q(m)$ [see @civano_mle].
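The construction of $q(m)$ just described can be sketched as follows; the function and array names are ours, and the sketch assumes the search-region and background magnitude lists have already been stacked over the full X-ray sample.

```python
import numpy as np

def expected_counterpart_distribution(mags_in_search, mags_in_annulus,
                                      area_search, area_annulus, Q, bins):
    """Build q(m): background-subtracted magnitude distribution, normalized to Q.

    mags_in_search  : magnitudes of ancillary objects within r_search of the
                      X-ray sources (stacked over the sample)
    mags_in_annulus : magnitudes of objects in the background annuli
    area_*          : total sky areas (same units) of the two regions
    Q               : fraction of X-ray sources expected to have a counterpart
    Illustrative sketch only; the paper's own code is not reproduced here.
    """
    dens_search = np.histogram(mags_in_search, bins=bins)[0] / area_search
    dens_bkg = np.histogram(mags_in_annulus, bins=bins)[0] / area_annulus
    q_prime = np.clip(dens_search - dens_bkg, 0, None) * area_search
    total = q_prime.sum()
    return Q * q_prime / total if total > 0 else q_prime
```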
From $LR$, we then calculate a reliability value for each source:
$$R = \frac{LR}{\Sigma_i (LR)_i + (1 - Q)},$$
where the sum over $LR$ is for each possible counterpart found within $r_{\rm search}$ around an individual X-ray source. We use $R$ to discriminate between true counterparts and spurious associations. Since $R$ depends on the source density and magnitude distribution of the ancillary sources, the critical $R$ value ($R_{\rm crit}$) we adopt to accept a match as ‘real’ differs among catalogs and strikes a fine balance between missing true counterparts and adding contamination from chance proximity to an unrelated source. To calibrate $R_{\rm crit}$, we shifted the positions of the X-ray sources by random amounts, with offsets ranging from $\sim$21$^{\prime\prime}$ to $\sim$35$^{\prime\prime}$, and re-ran the matching code. Any matches found should be due to random chance. We then plotted the distribution of reliability values for these spurious associations to estimate the contamination above $R_{\rm crit}$; full details regarding the estimate of false matches are given in Appendix A. We impose a lower limit on $R_{\rm crit}$ of 0.5, even in the cases where the reliability values for the shifted X-ray positions are consistent with zero. If there were multiple counterparts per X-ray source, or multiple X-ray sources per counterpart, the match with the highest reliability was favored.
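A minimal sketch of the reliability formula, applied to the set of candidates of a single X-ray source (names are ours):

```python
def reliabilities(lr_values, Q):
    """R_i = LR_i / (sum_j LR_j + (1 - Q)), where the sum runs over all
    candidate counterparts of one X-ray source and the (1 - Q) term accounts
    for the chance that the true counterpart is not in the catalog at all."""
    denom = sum(lr_values) + (1.0 - Q)
    return [lr / denom for lr in lr_values]
```

A candidate is accepted only when its $R$ exceeds the catalog-specific $R_{\rm crit}$ calibrated from the randomly shifted X-ray positions.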
In the on-line catalogues [available at CDS and searchable with VizieR, @vizier], we list the X-ray sources, fluxes and matches to the ancillary multi-wavelength catalogs, including the non-aperture matched photometry. Duplicate observations of the same X-ray object between the [*Chandra*]{} and [*XMM-Newton*]{} source lists are marked in the on-line tables. Objects not included in the Log$N$-Log$S$ relations, i.e., targets of observations and for [*Chandra*]{} objects, all sources identified in observations targeting galaxy clusters, are also noted. If the X-ray flux is not detected at a significant level in any individual band ($<4.5\sigma$ for [*Chandra*]{} and $det\_ml <$15 for [*XMM-Newton*]{}), the flux is listed as null in the on-line catalogues. A high level summary of the number of sources matched to each optical, near-infrared and ultraviolet catalog is reported in Table \[cp\_summary\], with the magnitude/flux density distributions for these counterparts shown in Figure \[mag\_distr\]. Appendix B details the columns for the on-line versions of the catalogs.
Sloan Digital Sky Survey
------------------------
Due to the high density of sources in SDSS, as well as its sub-arcsecond astrometric precision, we matched the X-ray sources separately to the $u$, $g$, $r$, $i$ and $z$ bands, using single-epoch photometry from Data Release 9 [@dr9 DR9]. A uniform 0.$^{\prime\prime}$1 error was assumed for all SDSS positions [@csc_sdss]. Comparing the reliability distributions for each band with the distributions for randomly shifted X-ray positions, we chose $R_{\rm crit}$=0.5 for both the [*Chandra*]{} and [*XMM-Newton*]{} source lists.
After vetting each individual band source list to include only objects exceeding $R_{\rm crit}$, we combined these source lists into a matched SDSS/[*Chandra*]{} catalog and an SDSS/[*XMM-Newton*]{} catalog. Where multiple SDSS objects (from separate band matchings) were paired to one X-ray source, we visually inspected the candidates and chose as the most likely counterpart the object matched in the greatest number of bands and/or the brightest object. We also imposed quality control cuts to ensure that the broad-band SEDs and derived photometric redshifts we will generate in a future paper (after careful aperture matching) are robust. We therefore require the SDSS objects not to be saturated[^7] or blended[^8] and to have well-measured photometry[^9]. After this vetting, every remaining SDSS match was visually inspected to remove objects contaminated by optical artifacts, e.g., diffraction spikes, or by proximity to a close object that was not caught in the pipeline flagging.
We identified 748 and 1444 SDSS counterparts to [*Chandra*]{} and [*XMM-Newton*]{} sources that exceeded $R_{\rm crit}$, corresponding to 65% and 61% of each sample, respectively. However, 72 and 161 of these were rejected for failing the quality control checks and visual inspection described above (marked as ‘yes’ in the ‘SDSS\_rej’ flag in the on-line catalogues), leaving 676 and 1283 reliable matches to [*Chandra*]{} and [*XMM-Newton*]{} sources, or 59% and 54% of the X-ray sources. In a follow-up paper in which we will generate the broad-band SEDs, we will use co-added data of the 50-60 epochs of Stripe 82 scans to search for counterparts for the remaining $\sim$35% of the X-ray sources [see @jiang; @mcgreer for studies of $z>5$ QSOs using co-added SDSS Stripe 82 data].
### Spectroscopy
We searched spectroscopic databases to find redshifts corresponding to our matched X-ray/SDSS catalogs, using SDSS DR9, 2SLAQ [@2slaq], WiggleZ [@wigglez] and DEEP2 [@spec_deep2]. This yielded spectroscopic redshifts for 306 [*Chandra*]{} sources ($\sim$27% of the sample): 286 from SDSS DR9; 10 from 2SLAQ; 3 from WiggleZ and 7 from DEEP2. For the [*XMM-Newton*]{} sources, 497 optical counterparts had spectroscopic redshifts ($\sim$21% of the sample): 468 from SDSS DR9, 20 from 2SLAQ, 4 from WiggleZ and 5 from DEEP2. We manually checked the spectra for the 25 SDSS sources where warning flags were set or for any object with $z > 5$: three spectra were re-fit to give more reliable redshifts, 11 were discarded due to poor spectra that could not be reliably fitted and we confirmed the redshifts for the remaining 11 objects. In Table \[cp\_summary\], the number of reported redshifts does not include the 11 that we discarded. Twenty-eight [*XMM-Newton*]{} sources had spectroscopic redshifts but unreliable photometry; we retain the redshift, but not the photometric, information for these objects. In the online catalogues, we indicate the database from which the spectroscopic redshifts were found, with $z$-source of 0, 1, 2, 3 and 4 referring to SDSS, 2SLAQ, WiggleZ, DEEP2 and SDSS spectra refitted/verified by us, respectively.
[*WISE*]{}
----------
For a source to be included in the [*WISE*]{} All Sky Source Catalog [@wright; @wise_cat], a SNR $>$ 5 detection was required for one of the four photometric bands, W1, W2, W3 or W4, corresponding to wavelengths 3.4, 4.6, 12, and 22 $\mu$m, with resolution 6.$^{\prime\prime}$1, 6.$^{\prime\prime}$4, 6.$^{\prime\prime}$5 and 12.$^{\prime\prime}$0. The X-ray sources were matched to the W1 band since this band has the greatest number of non-null values, including both detections and upper limits. In the full Stripe 82 area, no [*WISE*]{} sources had null W1 detections, so we do not miss any potential [*WISE*]{} counterparts by matching to only the W1 band. The matching was performed on all W1 values, regardless of whether the magnitude corresponded to a detection or an upper limit (i.e., where the W1 SNR is below 2). The R.A. and Dec errors were added in quadrature to provide an estimate of the [*WISE*]{} astrometric error.
If any bands suffered from saturation,[^10] spurious detections associated with artifacts (i.e., diffraction spikes, persistence from a short-term latent image, scattered halo light from a nearby bright source, or an optical ghost image from a nearby bright source), contamination from artifacts, or moon level contamination[^11], we consider the magnitude in that band unreliable. If no band passed these quality control tests, the source is not included in our final tally since we will not use the [*WISE*]{} data for generating the SEDs. For extended sources (where [*ext\_flag*]{} $>$0), the magnitudes measured from the profile-fitting photometry (i.e., [*wnmpro*]{}, where $n$ goes from 1-4) are unreliable. For these objects, we therefore focus on the magnitudes and quality flags associated with the elliptical apertures, [*wngmag*]{}, where $n$ goes from 1-4. Again, if all bands have null elliptical magnitudes and/or non-zero quality control flags, the source is not included in our catalog. The extended sources have the [*WISE*]{}\_ext flag set to ‘yes’ in the online catalogues.
When matching the [*Chandra*]{} catalog to [*WISE*]{}, we imposed an $R_{\rm crit}$ of 0.75 and found 595 counterparts that passed the photometry quality control checks, or 52% of the [*Chandra*]{} sample. Eight of these were extended. Photometry of 30 sources was compromised, 20 of which were extended. Our $R_{\rm crit}$ threshold for the [*XMM-Newton*]{} source list was 0.9, with 1324 counterparts identified with acceptable photometry (56% of the sample), of which 8 were extended. Sixty-five sources did not pass the quality control checks, of which 40 were extended. The X-ray sources with [*WISE*]{} counterparts removed for not passing the quality control checks are marked as ‘yes’ in the [*WISE*]{}\_rej field in the on-line catalogues.
UKIDSS
------
We searched the UKIDSS Large Area Survey (LAS) Data Release 8 [@lawrence; @casali; @hewett; @warren] for NIR counterparts to the Stripe 82 X-ray sources; details regarding maintenance of the UKIDSS science archive are described by @hambly. We used the LAS $YJHK$ Source table, which contains only fields that have coverage in every filter and merges the data from multiple detections of the same object. Only primary objects were selected[^12] so that we worked with a clean input list with no duplicate NIR sources. We [*a priori*]{} removed objects flagged as noise (those with [*mergedClass*]{} set to zero) and required [*PNoise*]{} $\leq$ 0.05; that is, we only retained objects that are consistent with real detections (not noise) at greater than the 2$\sigma$ level for our candidate source list and background histograms. The UKIDSS positional uncertainties are set to NULL in the catalog. @dye quote an internal accuracy of $\sim$100 mas and an external accuracy of $\sim$80 mas in each coordinate. Adding the resulting 180 mas per-coordinate uncertainty in quadrature over the two coordinates gives a positional error of $\sim$0.$^{\prime\prime}$25, which we apply uniformly to all UKIDSS sources.
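The adopted UKIDSS positional uncertainty follows from a quick check (the 100 mas and 80 mas figures are those quoted from @dye above):

```python
import math

per_coord = 0.100 + 0.080                    # arcsec: internal + external accuracy
combined = math.hypot(per_coord, per_coord)  # quadrature sum over the two coordinates
# combined ~ 0.25 arcsec, the uniform error adopted for all UKIDSS sources
```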
The X-ray source catalogs were matched separately to each UKIDSS band: $Y$ (0.97-1.07 $\mu$m), $J$ (1.17-1.33 $\mu$m), $H$ (1.49-1.78 $\mu$m) and $K$ (2.03-2.37 $\mu$m). The output matches were culled to include only sources exceeding $R_{\rm crit}$ and these individual band lists were then combined. Based on our test of shifting the X-ray positions by random amounts, we chose the values $R_{\rm crit}$=0.85, 0.75, 0.8 and 0.75 for the $Y$, $J$, $H$ and $K$ bands, respectively, for the [*Chandra*]{} matches; we used $R_{\rm crit}$=0.6, 0.6, 0.7 and 0.5 for the $Y$, $J$, $H$ and $K$ bands, respectively, in the [*XMM-Newton*]{} source matching.
When merging the individual UKIDSS matches with the X-ray source lists, more than one UKIDSS counterpart was matched to 1 [*Chandra*]{} object and to 45 [*XMM-Newton*]{} objects. We inspected these cases by eye and generally chose the brightest potential candidate in the most number of bands as the preferred match. In cases where the brightnesses were similar, we favored the candidate with the greatest number of matches among the UKIDSS bands. We note that a handful of these multiple potential candidates were duplicate observations of a bright star or a bright star and associated diffraction spike. We found 543 UKIDSS counterparts to the 1146 [*Chandra*]{} sources (47%) and 1266 UKIDSS counterparts to the 2358 [*XMM-Newton*]{} objects (54%). None of these IR sources was affected by saturation: no object had [*mergedClass*]{} set to $-9$, and [*PSaturated*]{} (the probability of saturation) was 0 for all objects.
[*GALEX*]{}
-----------
The [*GALEX*]{} catalog comprises sources detected over several surveys, including Deep, Medium and All Sky Imaging Surveys (DIS, MIS and AIS, respectively) as well as a Guest Investigator program. For a trade-off between depth and coverage, and to cleanly remove duplicate observations of the same source, we extracted objects from the MIS survey only. Since the survey has overlapping tiles [see @morrissey for observation details], multiple observations of the same source can appear in the catalog. To choose the best candidate list, we queried the MIS database from Galex Release 7 for primary sources, i.e., those that are inside the pre-defined position (‘SkyGrid’) cell within the field [see @budavari]. We further require that each primary is within 5$^{\prime}$ of the field center. Following the prescription of [@bianchi], we considered objects within 2.5$^{\prime\prime}$ as possible duplicates: if they are part of the same observation, i.e., had the same ‘photoextractid,’ they are considered unique sources, but if they are from different observations, the data corresponding to the longest exposure was used. We note that in many cases, sources with the same ‘photoextractid’ but different ‘objids’ (which identify unique sources) were actually unmerged FUV and NUV detections of the same source, where one observation had a FUV non-detection but a NUV detection or vice versa. However, since we matched the X-ray source lists separately to the NUV and FUV catalogs, such duplicates do not affect the results of our analysis.
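The deduplication rule above can be sketched as a simple pairwise pass; this is an illustrative implementation with hypothetical field names, not the query actually run against the [*GALEX*]{} database.

```python
import math

def dedupe_galex(sources, match_radius=2.5):
    """Drop duplicate GALEX observations: for any pair of entries closer than
    match_radius (arcsec) that come from *different* observations (different
    'photoextractid'), keep only the one with the longer exposure. Pairs from
    the same observation are treated as unique sources, as in the text.
    `sources` is a list of dicts; field names are illustrative."""
    keep = set(range(len(sources)))
    for i in range(len(sources)):
        for j in range(i + 1, len(sources)):
            a, b = sources[i], sources[j]
            # small-angle separation in arcsec
            dra = (a['ra'] - b['ra']) * math.cos(math.radians(a['dec'])) * 3600.0
            ddec = (a['dec'] - b['dec']) * 3600.0
            if math.hypot(dra, ddec) < match_radius and \
               a['photoextractid'] != b['photoextractid']:
                keep.discard(i if a['exptime'] < b['exptime'] else j)
    return [sources[k] for k in sorted(keep)]
```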
The [*Chandra*]{} and [*XMM-Newton*]{} source lists were matched to this cleaned [*GALEX*]{} catalog using $R_{\rm crit}$ = 0.5 for each band. We used the individual source positional errors reported in the [*GALEX*]{} database, rather than applying a systematic positional error to all sources. Matching the NUV and FUV detections separately, rather than focusing on the [*GALEX*]{} sources with detections in both bands, has the advantage that we locate ultraviolet counterparts that are detected in one band and not the other. We then merged the results of the individual band matching, locating [*GALEX*]{} counterparts for 164 [*Chandra*]{} and 249 [*XMM-Newton*]{} objects, corresponding to 14% and 11% of each parent sample, respectively.
FIRST
-----
Due to the low space density of both radio and X-ray sources, we matched our X-ray source lists to the FIRST [@first; @first_cat1] catalog using a simple nearest neighbor approach rather than MLE: the closest radio object within a search radius of 5$^{\prime\prime}$ for [*Chandra*]{} sources and within 7$^{\prime\prime}$ for [*XMM-Newton*]{} objects was chosen as the true counterpart. We used the FIRST catalog released in 2012, which contains all sources detected between 1993 and 2011, with a detection limit of 0.75 mJy over part of Stripe 82 ($319.6^{\circ} < $ R.A. $<$ 49.5$^{\circ}$, $-1^{\circ} <$ Dec $< 1^{\circ}$), and a 1 mJy detection limit for the rest of the region [@first_cat]. We identified radio counterparts for 42 [*Chandra*]{} sources (4% of the sample) and 82 [*XMM-Newton*]{} objects (3% of the sample). From shifting the X-ray positions by random amounts, we expect spurious associations for 1 [*Chandra*]{} source within 5$^{\prime\prime}$ and 4 [*XMM-Newton*]{} objects within 7$^{\prime\prime}$. Two [*Chandra*]{} sources had 2 potential radio counterparts within $r_{\rm search}$, but these X-ray sources were within the search radius of each other, so these duplicate potential matches are expected. Within the 7$^{\prime\prime}$ [*XMM-Newton*]{} search radius, 2 potential counterparts were found for 4 X-ray sources. In all of these cases, the nearest neighbor was also the brightest radio object.
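The nearest-neighbor step reduces to finding the closest catalog source within the search radius; a minimal sketch (small-angle approximation, names ours):

```python
import math

def nearest_within(x_ra, x_dec, cat, r_search):
    """Return the index of the closest catalog source within r_search (arcsec)
    of the X-ray position (degrees), or None if nothing falls inside it."""
    best, best_sep = None, r_search
    for k, (ra, dec) in enumerate(cat):
        dra = (ra - x_ra) * math.cos(math.radians(x_dec)) * 3600.0
        ddec = (dec - x_dec) * 3600.0
        sep = math.hypot(dra, ddec)
        if sep < best_sep:
            best, best_sep = k, sep
    return best
```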
[lrrr]{} Catalog & [*Chandra*]{} & [*XMM-Newton*]{} & Total$^{1}$\
X-ray & 1146 & 2358 & 3362\
SDSS & 676 & 1283 & 1892\
[*WISE*]{} & 595 & 1324 & 1855\
UKIDSS & 543 & 1266 & 1754\
[*GALEX*]{} & 164 & 301 & 447\
FIRST & 42 & 82 & 119\
Spec-$z$s & 306 & 497 & 759\
\
Discussion
==========
Here we use the results of the catalog matching to discuss general characteristics of the X-ray sources in Stripe 82, highlighting the science areas our survey is uniquely poised to investigate.
Probing the High X-ray Luminosity Regime of Black Hole Growth
-------------------------------------------------------------
We calculated full band X-ray luminosities for the sources with spectroscopic redshifts. After removing duplicate matches between the [*Chandra*]{} and [*XMM-Newton*]{} source catalogs and isolating the objects with luminosities exceeding 10$^{42}$ erg s$^{-1}$, the X-ray luminosity above which there are few or no starburst-dominated X-ray sources [e.g., @persic; @bh], we confirm that 645 of the 759 Stripe 82 X-ray sources with optical spectra are AGN; the remaining sources have X-ray luminosities consistent with star-forming galaxies or low-luminosity AGN or are stars. Below, we compare the X-ray luminosity distribution with other X-ray surveys and with model predictions and comment on the interesting sources we have discovered.
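An observed full-band luminosity follows from $L_X = 4\pi d_L^2 F_X$; the sketch below integrates the luminosity distance numerically in a flat $\Lambda$CDM model with illustrative parameters ($H_0 = 70$, $\Omega_m = 0.3$), since the assumed cosmology is not stated in this section.

```python
import math

def lum_distance_cm(z, H0=70.0, Om=0.3):
    """Luminosity distance (cm) in flat LCDM via midpoint integration;
    the cosmological parameters here are assumptions, not from the paper."""
    c = 2.99792458e5  # speed of light, km/s
    n = 10000
    integral = 0.0
    for i in range(n):
        zi = (i + 0.5) * z / n
        integral += 1.0 / math.sqrt(Om * (1 + zi)**3 + (1.0 - Om))
    dc = (c / H0) * integral * (z / n)   # comoving distance, Mpc
    return dc * (1 + z) * 3.0857e24      # Mpc -> cm

def xray_luminosity(flux_cgs, z):
    """Observed-frame L_X = 4 pi d_L^2 F_X (erg/s), with no K-correction,
    matching the observed luminosities quoted in the text."""
    dL = lum_distance_cm(z)
    return 4.0 * math.pi * dL**2 * flux_cgs
```

For example, a full-band flux of $10^{-14}$ erg cm$^{-2}$ s$^{-1}$ at $z = 1$ corresponds to roughly $5\times10^{43}$ erg s$^{-1}$, comfortably above the $10^{42}$ erg s$^{-1}$ AGN threshold.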
### Comparison with Other X-ray Surveys
The comparison X-ray surveys plotted in Figure \[lum\_distr\] span from deep, small area [the GOODS and MUSYC survey of E-CDFS and CDF-S, $\sim$0.3 deg$^2$, @giavalisco; @treister_04; @cardamone2], to moderate area and moderate depth [[*XMM*]{} and [*Chandra*]{} COSMOS, $\sim$2.1 deg$^2$, @cap09; @brusa3; @civano_mle], to wide area and shallow depth [XBoötes, $\sim$9 deg$^2$, @kenter; @kochanek]. Again, we focus on only the X-ray sources with spectroscopic redshifts, with a completeness of $\sim$28%, $\sim$45% and $\sim$44% for E-CDFS + CDF-S, COSMOS and XBoötes, respectively. For reference, the spectroscopic completeness of Stripe 82X, prior to any dedicated follow-up observations, is currently $\sim$23%, though the smaller X-ray surveys peer deeper, garnering many more faint optical counterparts where spectroscopic follow-up opportunities are limited. We report observed, full-band luminosities, which is 0.5-10 keV for the [*XMM-Newton*]{} surveys (obtained for [*XMM*]{}-COSMOS by summing the individual soft and hard band fluxes while @civano_mle provides full band fluxes for [*Chandra*]{}-COSMOS objects), 0.5-7 keV for the Stripe 82 [*Chandra*]{} sources and XBoötes, and 0.5-8 keV for E-CDFS + CDF-S.
As Figure \[lum\_distr\] illustrates, survey area determines the AGN population sampled. Small area surveys (e.g., E-CDFS + CDF-S) identify faint objects but leave the high luminosity objects sparsely sampled. Moving to wider areas expands the parameter space to higher luminosities since these objects are rare and more volume must be probed in order to locate them. This becomes very apparent at $z > 2$ (Figure \[lum\_distr\] b).
Wider area surveys, such as COSMOS and XBoötes, have higher levels of spectroscopic completeness than Stripe 82X due to dedicated multi-year spectroscopic campaigns. However, prior to any follow-up, we have identified 1.5 times more high luminosity AGN (L$_{0.5-10keV} > 3\times10^{44}$ erg s$^{-1}$) than [*XMM*]{}- and [*Chandra*]{}-COSMOS at all redshifts. Compared to XBoötes, Stripe 82X pilot has $\sim$30% more L$_{0.5-10keV} > 10^{45}$ erg s$^{-1}$ AGN when considering all redshifts, and finds almost as many in the young universe. Though the current spectroscopic completeness of Stripe 82X pilot is comparatively lower, more area is covered, enabling identification of more high luminosity AGN.
However, comparing the source density of the brightest objects among these two wider area surveys with Stripe 82X pilot indicates that there are still more high luminosity AGN left to find. The space density of L$_{0.5-10keV} >10^{45}$ erg s$^{-1}$ AGN found in COSMOS is 26 deg$^{-2}$ (i.e., 54 AGN in 2.1 deg$^2$) and in XBoötes is 12 deg$^{-2}$ (104 in 9 deg$^2$), while currently Stripe 82X has a space density of 8 deg$^{-2}$ (132 AGN in 16.5 deg$^2$). We therefore anticipate that as spectroscopic completeness improves, the source density of high luminosity AGN will increase. Of the L$_{0.5-10keV} >10^{45}$ erg s$^{-1}$ AGN already identified in Stripe 82 that have optical classifications, one is a narrow line AGN (optically classified as a ‘galaxy’) while the remainder have broad lines.
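The quoted surface densities follow directly from the counts and areas; a trivial check of the arithmetic in the text:

```python
# AGN with L(0.5-10 keV) > 1e45 erg/s per square degree, from the text
cosmos    = 54 / 2.1    # ~26 deg^-2 (COSMOS)
xbootes   = 104 / 9.0   # ~12 deg^-2 (XBootes)
stripe82x = 132 / 16.5  # 8 deg^-2 (Stripe 82X pilot)
```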
### Comparison with Model Predictions
With the larger dataset presented here, we expand on the work of @me and compare the luminosity distribution of X-ray AGN we immediately identify with X-ray background population synthesis predictions of @treister, @Gilli, and @ballantyne. We input the observed area-flux curves for [*Chandra*]{} and [*XMM-Newton*]{} into the @treister simulator,[^13] and convolved the predicted Log$N$-Log$S$ distributions from @Gilli[^14] and @ballantyne with our observed area-flux curves; we note that since the @Gilli predictions only allow the hard band flux to be defined from 2-10 keV, we corrected the output fluxes in the [*Chandra*]{} band to our 2-7 keV range, using our assumed spectral model where $\Gamma$=1.7. As @Gilli do not provide model predictions in the full band, we compare our observed full-band numbers to the models from @treister and @ballantyne. The predicted luminosity bins represent intrinsic, rest-frame luminosities, while the Stripe 82X data are observed luminosities. We removed any [*Chandra*]{} pointings that overlapped [*XMM-Newton*]{} pointings since the latter has more effective area. Since we did detect a handful of [*Chandra*]{} sources in these removed pointings that were not identified by [*XMM-Newton*]{}, the histograms presented in Figures \[pred\_all\] and \[pred\_zgt2\] are a subset of the total data.
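Convolving a model Log$N$-Log$S$ with the survey's area-flux curve amounts to weighting the differential number counts by the area sensitive at each flux; a schematic version (inputs and names are ours, not the published models):

```python
import numpy as np

def expected_counts(flux_grid, dNdS, area_of_flux):
    """N_expected = sum over flux bins of dN/dS * A(S) * dS, i.e., the model's
    differential counts (deg^-2 per unit flux) weighted by the survey area
    A(S) (deg^2) that is sensitive at each flux. Schematic sketch only."""
    dS = np.diff(flux_grid)
    S_mid = 0.5 * (flux_grid[1:] + flux_grid[:-1])  # bin midpoints
    return np.sum(dNdS(S_mid) * area_of_flux(S_mid) * dS)
```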
The models from @Gilli predict more AGN at all redshifts and more high luminosity ($>10^{45}$ erg s$^{-1}$) AGN at $z>2$ than the @treister models while the @ballantyne model predicts more L$>10^{44}$ erg s$^{-1}$ AGN than the @treister model. As @ballantyne produces predictions based on three different input luminosity functions [@ueda; @lafranca; @aird], which show a significant range in expected AGN numbers within most luminosity bins, discrepancies among models can be attributed to differences in luminosity functions. @Gilli uses the XLF from @hasinger in the 0.5-2 keV band, estimating the contribution of moderately obscured AGN ($10^{21}$ cm$^{-2} <$ N$_H < 10^{24}$ cm$^{-2}$) in the hard band by calculating the difference between the @ueda and @lafranca XLFs and the @hasinger XLF (after converting the latter to the hard band). The predictions from @treister are calibrated on the hard band XLF from @ueda.
Due to our limited spectroscopic completeness (see Section 4.1.1), we have identified fewer AGN than predicted given the constraints from our data. However, these ‘missing’ objects are predominantly at low to moderate luminosities ($<10^{45}$ erg s$^{-1}$). When considering objects at all spectroscopic redshifts, we found more high luminosity AGN than predicted by the @treister model, most of those predicted by the @Gilli model and a significant fraction to most, depending on the luminosity function, of those predicted by the @ballantyne model. The same result applies to objects at $z>2$ in the hard and full bands (though we do find more high luminosity objects than the @ballantyne predictions based on the @aird models in these bands), while in the soft band, the @treister and @ballantyne models predict slightly more high luminosity objects than we have yet discovered. Currently, the discrepancies between our observations and the @treister model are within $\sim$2$\sigma$ assuming Poisson uncertainties. Given the lower space density of these objects compared to surveys with higher spectroscopic completeness (i.e., Section 5.1.1), it seems clear that more high luminosity AGN will be confirmed. Even a small increase in the high luminosity population would surpass the predictions of @Gilli and the more conservative numbers of @ballantyne. We also expect that our luminosity distribution is systematically lower than the predictions as the latter use intrinsic, rather than observed, luminosities as input. The systematic effect would shift the Stripe 82X sources into higher luminosity bins if corrected for absorption, making our comparison at high luminosities conservative.
Finding a greater number of high X-ray luminosity AGN relative to the model predictions is consistent with what was reported earlier by @me, namely, that population synthesis models need to be refined to properly account for the high luminosity AGN regime. As this unexplored population is more numerous than predicted, quantifying its impact on AGN demography and evolution is critical for fully understanding black hole growth. Increased spectroscopic completeness will inform us as to the significance of the offset.
SMBH Growth at High Redshift
----------------------------
Though $z>5$ quasars identified by SDSS have been followed up with dedicated [*Chandra*]{} observations [e.g., @brandt; @shemmer; @vignali], not many have been found in X-ray surveys. Thus far, only 6 have been confirmed spectroscopically: $z=5.19$ in CDF-N [@barger], $z = 5.4$ from the [*Chandra*]{} Large Area Synoptic X-ray Survey [@steffen], $z = 5.3$ and $z = 5.07$ from [*Chandra*]{}-COSMOS [@civano_11], and 2 from ChaMP, with the most distant object having a redshift of 5.41 [@trichas]. None have been located in the 4 Ms of CDF-S, demonstrating that area trumps depth for locating these very high redshift sources. In this pilot survey of Stripe 82X, we have discovered only one object beyond a redshift of 5, but it is the most distant X-ray selected quasar from an X-ray survey to date, at $z = 5.86$ with L$_{0.5-10keV}$ = 4.4$\times10^{45}$ erg s$^{-1}$ ([*Chandra*]{} source, MSID = 165442). The SDSS spectrum of this source is shown in Figure \[highz\_spec\], revealing broad Ly$\alpha$ (i.e., this source is classified as a broad line AGN).
Such objects are expected to be quite rare. For instance, model predictions from @treister estimate that only 3 AGN at $z > 5$ with L$_{0.5-10keV}>10^{45}$ erg s$^{-1}$ exist in this survey area, given the observed full band area-flux curves. Similarly, using the soft band area-flux curves, the models from @Gilli predict 5 AGN at $z > 5$ with L$_{0.5-2keV}>10^{45}$ erg s$^{-1}$, but fewer than one when applying an exponential decline to the high-$z$ luminosity function. These types of objects will be below the flux limit of eROSITA [@merloni; @kolodzig], making the Stripe 82X survey important for constraining black hole formation models.
![\[highz\_spec\] SDSS spectrum of the highest redshift quasar yet discovered in an X-ray survey at $z=5.86$ with an X-ray luminosity of 4.4$\times10^{45}$ erg s$^{-1}$. The Ly$\alpha$ transition is marked. This source was discovered in the archival [*Chandra*]{} data (MSID = 165442).](spectrum_z5.86.eps)
Obscured AGN Beyond the Local Universe
--------------------------------------
### [*WISE*]{} AGN Candidates
In Figure \[w1\_w2\], we plot the [*WISE*]{} $W1 - W2$ color as a function of $W1$ for the 1713 Stripe 82 X-ray sources with significant detections (SNR $\geq$ 2) in both bands on top of the contours for all [*WISE*]{} sources with significant $W1$ and $W2$ detections in the full 300 deg$^2$ Stripe 82 area. The color cut of $W1 - W2 \geq 0.8$ used to identify [*WISE*]{} AGN candidates [@stern; @assef] is overplotted, with 904 of our X-ray/[*WISE*]{} objects falling within this region, or 53% of the total. This contrasts with the results of @stern who find that in the COSMOS field, a majority of X-ray sources with [*WISE*]{} counterparts have blue colors, i.e., $W1 - W2 < 0.8$; for instance, only 91 out of the 244 [*XMM-Newton*]{}/WISE sources in COSMOS (38%) have [*WISE*]{} AGN candidate colors. A higher fraction of the Stripe 82 X-ray sources have infrared colors consistent with obscured AGN.
Five hundred nine of the 1713 Stripe 82 X-ray sources with significant $W1$ and $W2$ detections have spectroscopic redshifts and X-ray luminosities indicative of AGN activity [L$_x > 10^{42}$ erg s$^{-1}$ @persic; @bh]. Of these, 165, or 32%, have [*WISE*]{} color $W1 - W2 < 0.8$ (green circles in Figure \[w1\_w2\]). These results indicate that two-thirds of our spectroscopically confirmed AGN are obscured (red [*WISE*]{} colors) and that identifying AGN candidates based on a simple color cut can miss up to a third of bluer AGN that can be recognized via other selection mechanisms, e.g., the optical and X-ray. As pointed out by @stern, this result reinforces the complementarity of MIR and X-ray selection in providing comprehensive views of SMBH growth.
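The $W1 - W2 \geq 0.8$ selection with the SNR $\geq 2$ requirement used above reduces to a simple vectorized cut; a sketch with illustrative array names:

```python
import numpy as np

def wise_agn_candidates(w1, w2, snr1, snr2, cut=0.8, snr_min=2.0):
    """Flag WISE AGN candidates with W1 - W2 >= cut (Vega mags), requiring
    significant detections (SNR >= snr_min) in both bands, as in the text."""
    w1, w2 = np.asarray(w1, dtype=float), np.asarray(w2, dtype=float)
    good = (np.asarray(snr1) >= snr_min) & (np.asarray(snr2) >= snr_min)
    return good & ((w1 - w2) >= cut)
```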
![\[w1\_w2\] [*WISE*]{} color $W1 - W2$ as a function of $W1$ with the contours indicating the density (10$^3$, 10$^4$, 10$^5$, 10$^6$ objects per contour) of all [*WISE*]{} objects in 300 deg$^2$ of Stripe 82, with our X-ray objects overplotted as red stars and the $W1 - W2 \geq 0.8$ AGN candidate color cut [e.g., @stern; @assef] marked by the dashed line. About half of our X-ray objects have redder colors, while 2/3 of the objects with spectra that are X-ray identified as AGN also exceed this boundary. The 166 spectroscopically identified X-ray AGN (L$_x > 10^{42}$ erg s$^{-1}$) with bluer colors are shown by the green circles.](w1_w2.eps)
### Optically Normal Galaxies
At $z > 0.5$, diagnostic line ratio diagrams used to discriminate between Type 2 AGN and star-forming galaxies [e.g., @bpt; @kewley; @kauff] become challenging in optical surveys as H$\alpha$ is shifted out of the rest-frame bandpass. Candidates for obscured AGN would then have to be identified via alternative optical diagnostics, such as ratios of narrow emission lines vs. stellar mass [MEx, @juneau] or vs. rest frame $g - z$ color [TBT, @trouille]. Follow-up of type 2 AGN candidates with ground-based NIR spectroscopy to observe the traditional BPT line diagnostics is also possible, but as the metallicity of host galaxies evolves with redshift, the applicability of line ratios calibrated for galaxies at $z < 0.5$ to higher redshift systems may not cleanly separate star-forming from active galaxies. Conversely, calculating an object’s X-ray luminosity provides a more efficient identification mechanism. In our survey, we have identified 22 X-ray AGN at $z > 0.5$ with luminosities exceeding 10$^{43}$ erg s$^{-1}$ that were classified as galaxies in SDSS, 2SLAQ or DEEP2 based on their optical spectra. One of these objects is extremely bright as noted above, L$_x = 10^{45}$ erg s$^{-1}$, and is an example of the kind of highly luminous obscured AGN our survey is designed to uncover. Currently these sources only represent 3% of our AGN sample, but we expect that more of these objects will be discovered during our spectroscopic follow-up campaign.
### Optical Dropouts
We identified 748 and 1444 optical counterparts to the [*Chandra*]{} and [*XMM-Newton*]{} sources, respectively, though 72 and 161 are discarded due to poor photometry. How many of the $\sim$400 [*Chandra*]{} and $\sim$900 [*XMM-Newton*]{} X-ray objects lacking SDSS counterparts ($r > 23$) do we find in the infrared? Most of these optical dropouts are either reddened by large amounts of dust or live at high redshift, so that the rest-frame optical light is shifted to redder wavelengths.
To answer this question, we look at two classes of optical dropouts: the X-ray sources with optical counterparts below $R_{\rm crit}$, including objects where no SDSS counterparts are found within the search radius, and the subset of X-ray sources without any optical counterpart within $r_{\rm search}$. In the former case, a true counterpart can be misclassified as a random association, especially if it is faint. The latter number then gives us a lower limit on the number of infrared-bright optical dropout X-ray sources. Comparison of the flux limits for SDSS, [*WISE*]{} and UKIDSS to the type 1 quasar SED (i.e., broad line AGN) from @elvis_sed demonstrates that SDSS is deeper than the [*WISE*]{} or UKIDSS observations, making the detection of IR sources that are SDSS dropouts a significant finding. We summarize these results in Table \[opt\_drop\], detailing the number of optical dropouts found in the IR generally and the numbers identified in the [*WISE*]{} and UKIDSS catalogs specifically. We note that the greater percentage of optical dropouts that have no counterpart within the search radius for the [*Chandra*]{} catalog compared to [*XMM-Newton*]{} can be understood by the larger search radius used for the latter catalog.
Over 30% of the optical dropouts are detected in the infrared, making them candidates for the elusive population of obscured high luminosity AGN at high redshift. In Figure \[wise\_color\] we plot the [*WISE*]{} colors of the 151 optical dropout X-ray sources ($\sim$12% of optical dropouts) that have significant W1, W2 and W3 detections (i.e., SNR $>$ 2 in each band). The [*WISE*]{} colors are overlaid on the diagram from @wright, where the colored loci represent different classes of astronomical objects. A majority of the optical dropouts detected in X-rays have [*WISE*]{} colors consistent with active galaxies, with nearly half having infrared colors akin to quasars. These are prime candidates for high-luminosity Type 2 AGN or highly reddened quasars, and we will follow them up with NIRSPEC on Keck and ISAAC on ESO’s VLT. For the remaining 840 optical dropouts without infrared associations (25% of the X-ray sample), deeper optical and infrared imaging is necessary to identify the multi-wavelength counterparts to the X-ray sources.
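The three-band significance requirement used to place sources on the color-color diagram can be expressed as a small filter; the function and the sample values are illustrative, with thresholds following the text:

```python
def wise_colors_if_significant(w1, w2, w3, snr1, snr2, snr3, min_snr=2.0):
    """Return (W1-W2, W2-W3) only if all three bands have SNR > min_snr.

    Magnitudes with SNR <= min_snr are upper limits, so such sources
    are excluded from the color-color diagram and None is returned.
    """
    if min(snr1, snr2, snr3) <= min_snr:
        return None
    return (w1 - w2, w2 - w3)

print(wise_colors_if_significant(15.0, 13.9, 10.5, 20, 15, 5))    # a color pair
print(wise_colors_if_significant(15.0, 13.9, 10.5, 20, 15, 1.5))  # None
```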
[lrrr]{} Catalog & [*Chandra*]{} & [*XMM-Newton*]{} & Total$^{1}$\
\
X-ray & 398 & 914 & 1312\
IR$^{2}$ & 112 & 371 & 472\
[*WISE*]{} & 95 & 313 & 401\
UKIDSS & 43 & 149 & 189\
\
X-ray & 317 & 486 & 781\
IR$^{2}$ & 88 & 161 & 240\
[*WISE*]{} & 73 & 124 & 192\
UKIDSS & 37 & 82 & 116\
\
\
![\[wise\_color\] [*WISE*]{} color-color diagram for X-ray sources (filled circles) detected significantly in the W1, W2 and W3 bands that have no optical counterpart within the search radius or where optical sources are found within $r_{\rm search}$ but are below $R_{\rm crit}$ and are therefore not likely candidates for the true optical counterpart to the X-ray source. The colored loci represent the classes of objects with these [*WISE*]{} colors, defined by @wright. Most of the optical dropouts are consistent with active galaxies.](wise_all_nosdss.eps)
Conclusions
===========
We have reduced and analyzed the $\sim$10.5 deg$^2$ of [*XMM-Newton*]{} data overlapping SDSS Stripe 82, including $\sim$4.6 deg$^2$ of proprietary data awarded to us in AO 10. From these observations, we detected 2358 unique X-ray sources at high significance, with 2073, 607 and 2079 in the soft (0.5-2 keV), hard (2-10 keV), and full (0.5-10 keV) bands, respectively. The Log$N$-Log$S$ relations show general agreement with previous surveys in these bands, given that the choice of spectral model affects the normalization in the full band. Using a maximum likelihood estimator algorithm [@mle; @brusa1; @brusa2; @cardamone; @luo; @brusa3; @civano_mle], we identified multi-wavelength counterparts to Stripe 82 X-ray sources, finding:
- 1892 optical matches from SDSS, of which 759 have spectroscopic redshifts; 1855 [*WISE*]{} counterparts; 1754 UKIDSS matches; 447 ultraviolet counterparts from [*GALEX*]{}; and 119 radio sources from FIRST (using nearest neighbor matching rather than MLE due to low source densities).
- Focusing on the subset of sources with spectroscopic redshifts, Stripe 82X harbors more high luminosity (L$_x \geq 10^{45}$ erg s$^{-1}$) AGN than E-CDFS and CDFS [$\sim$0.3 deg$^2$, @cardamone2], [*XMM*]{}- and [*Chandra*]{}-COSMOS [$\sim$2.1 deg$^2$, @cap09; @brusa3; @civano_mle] and even the larger XBoötes survey [$\sim$9 deg$^2$, @kenter; @kochanek]. Though these other surveys benefited from years of spectroscopic follow-up, Stripe 82X covers a wider area and thereby already uncovers more rare objects (high luminosity AGN at all redshifts and in the early universe, at $z >$ 2). These numbers will increase with the spectroscopic follow-up we are currently undertaking.
- We have compared the luminosity distribution of X-ray sources with spectroscopic redshifts with the population synthesis model predictions from @treister, @Gilli and @ballantyne taking into account the observational constraints of our observed area-flux curves in the soft, hard and full X-ray bands. As we showed in @me using a subset of these data and the full-band [@treister] model predictions given our area-flux curves, we discovered more high luminosity ($>10^{45}$ erg s$^{-1}$) AGN than predicted by @treister. Though the @Gilli and @ballantyne models predict more AGN, we have found most of those predicted at high luminosity (depending on the luminosity function for the @ballantyne model), and this number will continue to increase with our spectroscopic follow-up. Refinement of these models is clearly needed to better account for this important regime of black hole growth. As these rare, high luminosity AGN are more numerous than previously predicted, understanding their census, evolution and connection to the host galaxy becomes an important piece in completing the puzzle of cosmic black hole growth.
- We have found the most distant, spectroscopically confirmed X-ray selected quasar in an X-ray survey to date, at $z = 5.86$.
- About a third of the X-ray sources that are optical dropouts are identified in the infrared, making them candidates for reddened quasars and/or high luminosity Type 2 AGN at high redshift. Most of those with significant detections in the W1, W2 and W3 [*WISE*]{} bands have colors consistent with active galaxies, with more than half of them having quasar colors. We have a Keck-NIRSPEC campaign and were awarded ESO VLT ISAAC DDT time to follow up these objects.
The Stripe 82X survey provides an important pathfinder for the eRosita mission, scheduled to be launched in 2014, which will survey the entire sky in 0.5-10 keV X-rays, though with a poorer resolution than [*Chandra*]{} and [*XMM-Newton*]{} ($\sim$25$^{\prime\prime}$) and with a 0.5-2 keV flux limit that is 5 times higher than our proprietary [*XMM-Newton*]{} mosaicked observations [@merloni]. eRosita expects to uncover millions of AGN, of which a few tens will be $z>$6 QSOs. An efficient method will then need to be devised to isolate the very high redshift population: the results of Stripe 82X, with its wealth of multi-wavelength data, will help to inform robust identification techniques applicable to eRosita.
In other luminosity ranges, X-ray selection has uncovered a different, if overlapping, population of AGN compared to optical selection. While the jury is still out on how this impacts black hole growth at high luminosity, it is clear that the answer requires large samples selected at X-ray energies, so that the optically and X-ray selected samples can be compared.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank the referee for a careful reading of this manuscript and useful comments and suggestions. We also thank M. Brusa for helpful discussions.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. Funding for Yale participation in SDSS-III was provided by Yale University.
Ahn, C. P., Alexandroff, R., Allende Prieto, C., et al. 2012, ApJS, 203, 21
Aird, J., Nandra, K., Laird, E. S., et al. 2010, MNRAS, 401, 2531
Alexander, D. M., Bauer, F. E., Brandt, W. N., et al. 2003, AJ, 126, 539
Assef, R. J., Stern, D., Kochanek, C. S., et al. 2013, ApJ, 772, 26
Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, PASP, 93, 5
Ballantyne, D. R., Draper, A. R., Madsen, K. K., Rigby, J. R., & Treister, E. 2011, ApJ, 736, 56
Barger, A. J., Cowie, L. L., Capak, P., et al. 2003, ApJL, 584, L61
Becker, R. H., White, R. L., & Helfand, D. J. 1995, ApJ, 450, 559
Becker, R. H., Helfand, D. J., White, R. L., Gregg, M. D., & Laurent-Muehleisen, S. A. 2012, VizieR Online Data Catalog, 8090, 0
Bianchi, L., Efremova, B., Herald, J., et al. 2011, MNRAS, 411, 2770
Brandt, W. N., & Hasinger, G. 2005, ARAA, 43, 827
Brandt, W. N., Schneider, D. P., Fan, X., et al. 2002, ApJL, 569, L5
Brunner, H., Cappelluti, N., Hasinger, G., et al. 2008, A&A, 479, 283
Brusa, M., Civano, F., Comastri, A., et al. 2010, ApJ, 716, 348
Brusa, M., Zamorani, G., Comastri, A., et al. 2007, ApJS, 172, 353
Brusa, M., Comastri, A., Daddi, E., et al. 2005, A&A, 432, 69
Budav[á]{}ri, T., Heinis, S., Szalay, A. S., et al. 2009, ApJ, 694, 1281
Cappelluti, N., Brusa, M., Hasinger, G., et al. 2009, A&A, 497, 635
Cappelluti, N., Hasinger, G., Brusa, M., et al. 2007, ApJS, 172,
Cardamone, C. N., van Dokkum, P. G., Urry, C. M., et al. 2010, ApJS, 189, 270
Cardamone, C. N., Urry, C. M., Damen, M., et al. 2008, ApJ, 680, 130
Casali, M., Adamson, A., Alves de Oliveira, C., et al. 2007, A&A, 467, 777
Chiappetti, L., Clerc, N., Pacaud, F., et al. 2013, MNRAS, 429, 1652
Civano, F., Elvis, M., Brusa, M., et al. 2012, ApJS, 201, 30
Civano, F., Brusa, M., Comastri, A., et al. 2011, ApJ, 741, 91
Comastri, A., Ranalli, P., Iwasawa, K., et al. 2011, A&A, 526, L9
Croom, S. M., Richards, G. T., Shanks, T., et al. 2009, MNRAS, 392, 19
Cutri, R. M., Wright, E. L., Conrow, T., et al. 2012, Explanatory Supplement to the WISE All-Sky Data Release Products, 1
Davis, M., Guhathakurta, P., Konidaris, N. P., et al. 2007, ApJL, 660, L1
Donley, J. L., Koekemoer, A. M., Brusa, M., et al. 2012, ApJ, 748, 142
Drinkwater, M. J., Jurek, R. J., Blake, C., et al. 2010, MNRAS, 401, 1429
Dye, S., Warren, S. J., Hambly, N. C., et al. 2006, MNRAS, 372, 1227
Elvis, M., Civano, F., Vignali, C., et al. 2009, ApJS, 184, 158
Elvis, M., Wilkes, B. J., McDowell, J. C., et al. 1994, ApJS, 95, 1
Evans, I. N., Primini, F. A., Glotfelty, K. J., et al. 2010, ApJS, 189, 37
Georgakakis, A., Nandra, K., Laird, E. S., et al. 2007, ApJL, 660, L15
Giacconi, R., Rosati, P., Tozzi, P., et al. 2001, ApJ, 551, 624
Giavalisco, M., Ferguson, H. C., Koekemoer, A. M., et al. 2004, ApJL, 600, L93
Gilli, R., Comastri, A., & Hasinger, G. 2007, A&A, 463, 79
Goulding, A. D., Forman, W. R., Hickox, R. C., et al. 2012, ApJS, 202, 6
Hambly, N. C., Collins, R. S., Cross, N. J. G., et al. 2008, MNRAS, 384, 637
Hasinger, G., Miyaji, T., & Schmidt, M. 2005, A&A, 441, 417
Hewett, P. C., Warren, S. J., Leggett, S. K., & Hodgkin, S. T. 2006, MNRAS, 367, 454
Jiang, L., Fan, X., Bian, F., et al. 2009, AJ, 138, 305
Juneau, S., Dickinson, M., Alexander, D. M., & Salim, S. 2011, ApJ, 736, 104
Kauffmann, G., Heckman, T. M., Tremonti, C., et al. 2003, MNRAS, 346, 1055
Kenter, A., Murray, S. S., Forman, W. R., et al. 2005, ApJS, 161, 9
Kewley, L. J., Dopita, M. A., Sutherland, R. S., Heisler, C. A., & Trevena, J. 2001, ApJ, 556, 121
Kim, M., Wilkes, B. J., Kim, D.-W., et al. 2007, ApJ, 659, 29
Kochanek, C. S., Eisenstein, D. J., Cool, R. J., et al. 2012, ApJS, 200, 8
Kolodzig, A., Gilfanov, M., Sunyaev, R., Sazonov, S., & Brusa, M. 2012, arXiv:1212.2151
Lacy, M., Storrie-Lombardi, L. J., Sajina, A., et al. 2004, ApJS, 154, 166
La Franca, F., Fiore, F., Comastri, A., et al. 2005, ApJ, 635, 864
LaMassa, S. M., Urry, C. M., Glikman, E., et al. 2013, MNRAS, 432, 1351
Lawrence, A., Warren, S. J., Almaini, O., et al. 2007, MNRAS, 379, 1599
Lehmer, B. D., Brandt, W. N., Alexander, D. M., et al. 2005, ApJS, 161, 21
Loaring, N. S., Dwelly, T., Page, M. J., et al. 2005, MNRAS, 362, 1371
Luo, B., Brandt, W. N., Xue, Y. Q., et al. 2010, ApJS, 187, 560
Mateos, S., Warwick, R. S., Carrera, F. J., et al. 2008, A&A, 492, 51
McGreer, I. D., Jiang, L., Fan, X., et al. 2013, ApJ, 768, 105
Mendez, A. J., Coil, A. L., Aird, J., et al. 2013, ApJ, 770, 40
Merloni, A., Predehl, P., Becker, W., et al. 2012, arXiv:1209.3114
Morrissey, P., Conrow, T., Barlow, T. A., et al. 2007, ApJS, 173, 682
Murray, S. S., Kenter, A., Forman, W. R., et al. 2005, ApJS, 161, 1
Newman, J. A., Cooper, M. C., Davis, M., et al. 2013, ApJS, 208, 5
Ochsenbein, F., Bauer, P., & Marcout, J. 2000, A&AS, 143, 23
Persic, M., Rephaeli, Y., Braito, V., et al. 2004, A&A, 419, 849
Pierre, M., Valtchanov, I., Altieri, B., et al. 2004, JCAP, 9, 11
Ranalli, P., Comastri, A., Vignali, C., et al. 2013, A&A, 555, A42
Rots, A. H., & Budav[á]{}ri, T. 2011, ApJS, 192, 8
Shemmer, O., Brandt, W. N., Schneider, D. P., et al. 2006, ApJ, 644, 86
Steffen, A. T., Barger, A. J., Capak, P., et al. 2004, AJ, 128, 1483
Stern, D., Assef, R. J., Benford, D. J., et al. 2012, ApJ, 753, 30
Stern, D., Eisenhardt, P., Gorjian, V., et al. 2005, ApJ, 631, 163
Sutherland, W., & Saunders, W. 1992, MNRAS, 259, 413
Treister, E., Urry, C. M., & Virani, S. 2009, ApJ, 696, 110
Treister, E., Urry, C. M., Chatzichristou, E., et al. 2004, ApJ, 616, 123
Trichas, M., Green, P. J., Silverman, J. D., et al. 2012, ApJS, 200, 17
Trouille, L., Barger, A. J., & Tremonti, C. 2011, ApJ, 742, 46
Ueda, Y., Akiyama, M., Ohta, K., & Miyaji, T. 2003, ApJ, 598, 886
Vignali, C., Brandt, W. N., Schneider, D. P., & Kaspi, S. 2005, AJ, 129, 2519
Virani, S. N., Treister, E., Urry, C. M., & Gawiser, E. 2006, AJ, 131, 2373
Warren, S. J., Cross, N. J. G., Dye, S., et al. 2007, arXiv:astro-ph/0703037
Watson, M. G., Schr[ö]{}der, A. C., Fyfe, D., et al. 2009, A&A, 493, 339
White, R. L., Becker, R. H., Helfand, D. J., & Gregg, M. D. 1997, ApJ, 475, 479
Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868
Xue, Y. Q., Luo, B., Brandt, W. N., et al. 2011, ApJS, 195, 10
Reliability Thresholds for Counterpart Selection
================================================
As mentioned in the main text, our goal is to optimize selection of multi-wavelength counterparts to the Stripe 82 X-ray sources by maximizing the number of true associations while minimizing contamination from chance coincidences. We inspected the distribution of source ‘reliabilities’ calculated via MLE and picked a critical threshold ($R_{\rm crit}$) above which we expect the vast majority of the ancillary objects to represent true counterparts. By shifting the X-ray positions by random amounts and re-running the MLE code, the distribution of resulting reliabilities provides an empirical estimate of the contamination in our matched catalogs.
In Figures A1 - A7, we compare the reliability distribution of each wavelength band to which we matched (solid black histogram) with the reliability distribution after shifting the X-ray positions (by $\sim$21$^{\prime\prime}$ to $\sim$35$^{\prime\prime}$) overplotted (blue histogram). The dotted line indicates the specific $R_{\rm crit}$ value we used for that band. In the captions, we note the number of spurious associations expected, i.e., ancillary counterparts matched to random positions on the sky, above $R_{\rm crit}$. We stress that contamination percentages that can be calculated from this test are not exact, but are instead meant to provide an empirical method for calibrating the reliabilities on a band-by-band basis. As in all multi-wavelength surveys, a handful of true counterparts may be missed, falling below $R_{\rm crit}$, while several random coincident matches may be promoted as real matches. However, our empirical tests indicate that this effect is at the few percent level at most.
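The random-shift contamination test described above can be summarized in a short sketch; the function and the toy reliability values are illustrative, not the actual MLE pipeline:

```python
def contamination_above_threshold(real_rels, shifted_rels, r_crit):
    """Empirical contamination estimate for MLE counterpart matching.

    real_rels:    reliabilities from matching at the true X-ray positions
    shifted_rels: reliabilities after randomly shifting the X-ray positions
    r_crit:       reliability threshold for accepting a counterpart

    Matches to shifted positions that still exceed r_crit are spurious by
    construction; their number relative to the accepted real matches
    approximates the contamination fraction of the matched catalog.
    """
    n_real = sum(1 for r in real_rels if r > r_crit)
    n_spurious = sum(1 for r in shifted_rels if r > r_crit)
    return n_spurious / n_real if n_real else 0.0

# Toy example: 4 accepted real matches, 1 surviving random match -> 0.25.
print(contamination_above_threshold([0.9, 0.95, 0.8, 0.99, 0.3],
                                    [0.1, 0.85, 0.2], 0.5))
```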
Column Descriptions for On-line Versions of the Catalogs
========================================================
Non-significant X-ray fluxes have zero values in the on-line catalogs. When reporting the ancillary multi-wavelength data, numeric values of -999 and null strings indicate that a reliable counterpart was not identified for that X-ray source.
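A minimal helper for normalizing these missing-data conventions when reading the on-line catalogs might look like the following; the field names in the example row are taken from the column lists below, while the function itself is our illustration:

```python
SENTINELS = {-999, -999.0}

def clean_value(v):
    """Map the catalogs' missing-data markers to None.

    Numeric -999 and empty strings mean no reliable counterpart was
    identified for that X-ray source. A flux of 0 is kept as-is: it
    flags a non-significant detection, whose upper error bound then
    serves as the flux upper limit.
    """
    if v is None or v in SENTINELS or (isinstance(v, str) and v.strip() == ''):
        return None
    return v

row = {'Soft_Flux': 0.0, 'W1': -999, 'Class': '', 'Redshift': 1.23}
print({k: clean_value(v) for k, v in row.items()})
```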
[*Chandra*]{}
-------------
1. [**MSID**]{}: [*Chandra*]{} Source Catalog identification number [@csc]
2. [**ObsID**]{}: [*Chandra*]{} observation identification number
3. [**RA**]{}: [*Chandra*]{} RA (J2000)
4. [**Dec**]{}: [*Chandra*]{} Dec (J2000)
5. [**RADec\_err**]{}: [*Chandra*]{} positional error (arcsec)
6. [**Dist\_nn**]{}: Distance to nearest [*Chandra*]{} source (arcsec)
7. [**Soft\_Flux**]{}: 0.5-2 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$). Set to 0 if flux is not significant at $>$4.5$\sigma$ level.
8. [**Soft\_flux\_error\_high**]{}: higher bound on 0.5-2 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$). If flux is 0, this is the flux upper limit.
9. [**Soft\_flux\_error\_low**]{}: lower bound on 0.5-2 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$)
10. [**Hard\_flux**]{}: 2-7 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$). Set to 0 if flux is not significant at $>$4.5$\sigma$ level.
11. [**Hard\_flux\_error\_high**]{}: higher bound on 2-7 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$). If flux is 0, this is the flux upper limit.
12. [**Hard\_flux\_error\_lo**]{}: lower bound on 2-7 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$)
13. [**Full\_flux**]{}: 0.5-7 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$). Set to 0 if flux is not significant at $>$4.5$\sigma$ level.
14. [**Full\_flux\_error\_high**]{}: higher bound on 0.5-7 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$). If flux is 0, this is the flux upper limit.
15. [**Full\_flux\_error\_lo**]{}: lower bound on 0.5-7 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$)
16. [**Lum\_soft**]{}: log 0.5-2 keV luminosity (erg s$^{-1}$)
17. [**Lum\_hard**]{}: log 2-7 keV luminosity (erg s$^{-1}$)
18. [**Lum\_full**]{}: log 0.5-7 keV luminosity (erg s$^{-1}$)
19. [**In\_XMM**]{}: Set to ‘yes’ if X-ray source is in the [*XMM-Newton*]{} Stripe 82 catalog
20. [**Removed\_LogN\_LogS**]{}: Set to ‘yes’ if X-ray source was not part of the Log$N$-Log$S$ relation published in [@me]
21. [**SDSS\_Rej**]{}: Set to ‘yes’ if SDSS counterpart is found but rejected due to poor photometry.
22. [**SDSS\_Objid**]{}: SDSS object identification number
23. [**SDSS\_RA**]{}: SDSS RA (J2000)
24. [**SDSS\_Dec**]{}: SDSS Dec (J2000)
25. [**SDSS\_Rel**]{}: MLE reliability of SDSS match to X-ray source
26. [**SDSS\_Dist**]{}: Distance between X-ray and SDSS source (arcsec)
27. [**u\_mag**]{}: SDSS u mag
28. [**u\_err**]{}: SDSS u mag error
29. [**g\_mag**]{}: SDSS g mag
30. [**g\_err**]{}: SDSS g mag error
31. [**r\_mag**]{}: SDSS r mag
32. [**r\_err**]{}: SDSS r mag error
33. [**i\_mag**]{}: SDSS i mag
34. [**i\_err**]{}: SDSS i mag error
35. [**z\_mag**]{}: SDSS z mag
36. [**z\_err**]{}: SDSS z mag error
37. [**Specobjid**]{}: SDSS spectroscopic object identification number
38. [**Class**]{}: optical spectroscopic class (if available)
39. [**Redshift**]{}: spectroscopic redshift
40. [**z\_src**]{}: source of spectroscopic redshift; 0 - SDSS, 1 - 2SLAQ, 2 - WiggleZ, 3 - DEEP2, 4 - SDSS spectra re-fit/verified by us
41. [**WISE\_Name**]{}: [*WISE*]{} name
42. [**WISE\_RA**]{}: [*WISE*]{} RA (J2000)
43. [**WISE\_Dec**]{}: [*WISE*]{} Dec (J2000)
44. [**WISE\_sigra**]{}: [*WISE*]{} RA error (arcsec)
45. [**WISE\_sigdec**]{}: [*WISE*]{} Dec error (arcsec)
46. [**WISE\_Rel**]{}: MLE reliability of [*WISE*]{} match to X-ray source
47. [**WISE\_Dist**]{}: Distance between X-ray and [*WISE*]{} source (arcsec)
48. [**W1**]{}: [*WISE*]{} W1 mag. All [*WISE*]{} magnitudes are from profile-fitting photometry, unless the [**WISE\_ext**]{} flag is set to ‘yes,’ in which case the magnitudes are associated with elliptical apertures.
49. [**W1sig**]{}: [*WISE*]{} W1 error
50. [**W1SNR**]{}: [*WISE*]{} W1 SNR. Any [*WISE*]{} magnitudes with SNR $<$2 are upper limits.
51. [**W2**]{}: [*WISE*]{} W2 mag
52. [**W2sig**]{}: [*WISE*]{} W2 error
53. [**W2SNR**]{}: [*WISE*]{} W2 SNR
54. [**W3**]{}: [*WISE*]{} W3 mag
55. [**W3sig**]{}: [*WISE*]{} W3 error
56. [**W3SNR**]{}: [*WISE*]{} W3 SNR
57. [**W4**]{}: [*WISE*]{} W4 mag
58. [**W4sig**]{}: [*WISE*]{} W4 error
59. [**W4SNR**]{}: [*WISE*]{} W4 SNR
60. [**WISE\_ext**]{}: Set to ‘yes’ if [*WISE*]{} source is extended
61. [**WISE\_rej**]{}: Set to ‘yes’ if [*WISE*]{} counterpart is found but rejected for poor photometry
62. [**UKIDSS\_ID**]{}: UKIDSS ID
63. [**UKIDSS\_RA**]{}: UKIDSS RA (J2000)
64. [**UKIDSS\_Dec**]{}: UKIDSS Dec (J2000)
65. [**UKIDSS\_Rel**]{}: MLE reliability of UKIDSS match to X-ray source
66. [**UKIDSS\_Dist**]{}: Distance between X-ray and UKIDSS source (arcsec)
67. [**Ymag**]{}: UKIDSS Y mag
68. [**Ysig**]{}: UKIDSS Y error
69. [**Hmag**]{}: UKIDSS H mag
70. [**Hsig**]{}: UKIDSS H error
71. [**Jmag**]{}: UKIDSS J mag
72. [**Jsig**]{}: UKIDSS J error
73. [**Kmag**]{}: UKIDSS K mag
74. [**Ksig**]{}: UKIDSS K error
75. [**UKIDSS\_Rej**]{}: UKIDSS counterpart found but rejected for poor photometry
76. [**GALEX\_Objid**]{}: [*GALEX*]{} object identification number
77. [**GALEX\_RA**]{}: [*GALEX*]{} RA (J2000)
78. [**GALEX\_Dec**]{}: [*GALEX*]{} Dec (J2000)
79. [**NUV\_poserr**]{}: [*GALEX*]{} NUV positional error (arcsec)
80. [**FUV\_poserr**]{}: [*GALEX*]{} FUV positional error (arcsec)
81. [**GALEX\_Rel**]{}: MLE reliability of [*GALEX*]{} match to X-ray source
82. [**GALEX\_Dist**]{}: Distance between X-ray and [*GALEX*]{} source (arcsec)
83. [**NUV\_mag**]{}: [*GALEX*]{} NUV mag
84. [**NUV\_magerr**]{}: [*GALEX*]{} NUV error
85. [**FUV\_mag**]{}: [*GALEX*]{} FUV mag
86. [**FUV\_magerr**]{}: [*GALEX*]{} FUV error
87. [**FIRST Name**]{}: IAU Name of FIRST counterpart
88. [**FIRST\_RA**]{}: FIRST RA (J2000)
89. [**FIRST\_Dec**]{}: FIRST Dec (J2000)
90. [**FIRST\_Dist**]{}: Distance between X-ray and FIRST source (arcsec)
91. [**FIRST\_Flux**]{}: FIRST 1.4 GHz Flux Density (Jy)
92. [**FIRST\_err**]{}: FIRST 1.4 GHz Flux Density error (Jy)
[*XMM-Newton*]{}
----------------
1. [**Rec\_no**]{}: Unique record number assigned to each [*XMM-Newton*]{} source
2. [**ObsID**]{}: [*XMM-Newton*]{} observation identification number
3. [**RA**]{}: [*XMM-Newton*]{} RA (J2000)
4. [**Dec**]{}: [*XMM-Newton*]{} Dec (J2000)
5. [**RADec\_Err**]{}: [*XMM-Newton*]{} positional error (arcsec)
6. [**Dist\_nn**]{}: Distance to nearest [*XMM-Newton*]{} source (arcsec)
7. [**Soft\_flux**]{}: 0.5-2 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$). Flux is 0 if $det\_ml < 15$ in the soft band.
8. [**Soft\_flux\_err**]{}: error in 0.5-2 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$)
9. [**Hard\_flux**]{}: 2-10 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$). Flux is 0 if $det\_ml < 15$ in the hard band.
10. [**Hard\_flux\_err**]{}: error in 2-10 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$)
11. [**Full\_flux**]{}: 0.5-10 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$). Flux is 0 if $det\_ml < 15$ in the full band.
12. [**Full\_flux\_err**]{}: error in 0.5-10 keV flux (10$^{-14}$ erg cm$^{-2}$ s$^{-1}$)
13. [**Lum\_soft**]{}: log 0.5-2 keV luminosity (erg s$^{-1}$)
14. [**Lum\_hard**]{}: log 2-10 keV luminosity (erg s$^{-1}$)
15. [**Lum\_full**]{}: log 0.5-10 keV luminosity (erg s$^{-1}$)
16. [**In\_Chandra**]{}: Set to ‘yes’ if source is found in the [*Chandra*]{} catalog.
17. [**Removed\_LogN\_LogS**]{}: Set to ‘yes’ if source is removed from the Log$N$-Log$S$ calculation presented in the main text.
18. [**SDSS\_Rej**]{}: Set to ‘yes’ if SDSS counterpart is found but rejected due to poor photometry.
19. [**SDSS\_Objid**]{}: SDSS object identification number
20. [**SDSS\_RA**]{}: SDSS RA (J2000)
21. [**SDSS\_Dec**]{}: SDSS Dec (J2000)
22. [**SDSS\_Rel**]{}: MLE reliability of SDSS match to X-ray source
23. [**SDSS\_Dist**]{}: Distance between X-ray and SDSS source (arcsec)
24. [**u\_mag**]{}: SDSS u mag
25. [**u\_err**]{}: SDSS u mag error
26. [**g\_mag**]{}: SDSS g mag
27. [**g\_err**]{}: SDSS g mag error
28. [**r\_mag**]{}: SDSS r mag
29. [**r\_err**]{}: SDSS r mag error
30. [**i\_mag**]{}: SDSS i mag
31. [**i\_err**]{}: SDSS i mag error
32. [**z\_mag**]{}: SDSS z mag
33. [**z\_err**]{}: SDSS z mag error
34. [**Specobjid**]{}: SDSS spectroscopic object identification number
35. [**Class**]{}: optical spectroscopic class (if available)
36. [**Redshift**]{}: spectroscopic redshift
37. [**z\_src**]{}: source of spectroscopic redshift; 0 - SDSS, 1 - 2SLAQ, 2 - WiggleZ, 3 - DEEP2, 4 - SDSS spectra re-fit/verified by us
38. [**WISE\_Name**]{}: [*WISE*]{} name
39. [**WISE\_RA**]{}: [*WISE*]{} RA (J2000)
40. [**WISE\_Dec**]{}: [*WISE*]{} Dec (J2000)
41. [**WISE\_sigra**]{}: [*WISE*]{} RA error (arcsec)
42. [**WISE\_sigdec**]{}: [*WISE*]{} Dec error (arcsec)
43. [**WISE\_Rel**]{}: MLE reliability of [*WISE*]{} match to X-ray source
44. [**WISE\_Dist**]{}: Distance between X-ray and [*WISE*]{} source (arcsec)
45. [**W1**]{}: [*WISE*]{} W1 mag. All [*WISE*]{} magnitudes are from profile-fitting photometry, unless the [**WISE\_ext**]{} flag is set to ‘yes,’ in which case the magnitudes are associated with elliptical apertures.
46. [**W1sig**]{}: [*WISE*]{} W1 error
47. [**W1SNR**]{}: [*WISE*]{} W1 SNR. Any [*WISE*]{} magnitudes with SNR $<$2 are upper limits.
48. [**W2**]{}: [*WISE*]{} W2 mag
49. [**W2sig**]{}: [*WISE*]{} W2 error
50. [**W2SNR**]{}: [*WISE*]{} W2 SNR
51. [**W3**]{}: [*WISE*]{} W3 mag
52. [**W3sig**]{}: [*WISE*]{} W3 error
53. [**W3SNR**]{}: [*WISE*]{} W3 SNR
54. [**W4**]{}: [*WISE*]{} W4 mag
55. [**W4sig**]{}: [*WISE*]{} W4 error
56. [**W4SNR**]{}: [*WISE*]{} W4 SNR
57. [**WISE\_ext**]{}: Set to ‘yes’ if [*WISE*]{} source is extended
58. [**WISE\_rej**]{}: Set to ‘yes’ if [*WISE*]{} counterpart is found but rejected for poor photometry
59. [**UKIDSS\_ID**]{}: UKIDSS ID
60. [**UKIDSS\_RA**]{}: UKIDSS RA (J2000)
61. [**UKIDSS\_Dec**]{}: UKIDSS Dec (J2000)
62. [**UKIDSS\_Rel**]{}: MLE reliability of UKIDSS match to X-ray source
63. [**UKIDSS\_Dist**]{}: Distance between X-ray and UKIDSS source (arcsec)
64. [**Ymag**]{}: UKIDSS Y mag
65. [**Ysig**]{}: UKIDSS Y error
66. [**Hmag**]{}: UKIDSS H mag
67. [**Hsig**]{}: UKIDSS H error
68. [**Jmag**]{}: UKIDSS J mag
69. [**Jsig**]{}: UKIDSS J error
70. [**Kmag**]{}: UKIDSS K mag
71. [**Ksig**]{}: UKIDSS K error
72. [**UKIDSS\_Rej**]{}: UKIDSS counterpart found but rejected for poor photometry
73. [**GALEX\_Objid**]{}: [*GALEX*]{} object identification number
74. [**GALEX\_RA**]{}: [*GALEX*]{} RA (J2000)
75. [**GALEX\_Dec**]{}: [*GALEX*]{} Dec (J2000)
76. [**NUV\_poserr**]{}: [*GALEX*]{} NUV positional error (arcsec)
77. [**FUV\_poserr**]{}: [*GALEX*]{} FUV positional error (arcsec)
78. [**GALEX\_Rel**]{}: MLE reliability of [*GALEX*]{} match to X-ray source
79. [**GALEX\_Dist**]{}: Distance between X-ray and [*GALEX*]{} source (arcsec)
80. [**NUV\_mag**]{}: [*GALEX*]{} NUV mag
81. [**NUV\_magerr**]{}: [*GALEX*]{} NUV error
82. [**FUV\_mag**]{}: [*GALEX*]{} FUV mag
83. [**FUV\_magerr**]{}: [*GALEX*]{} FUV error
84. [**FIRST Name**]{}: IAU Name of FIRST counterpart
85. [**FIRST\_RA**]{}: FIRST RA (J2000)
86. [**FIRST\_Dec**]{}: FIRST Dec (J2000)
87. [**FIRST\_Dist**]{}: Distance between X-ray and FIRST source (arcsec)
88. [**FIRST\_Flux**]{}: FIRST 1.4 GHz Flux Density (Jy)
89. [**FIRST\_err**]{}: FIRST 1.4 GHz Flux Density error (Jy)
[^1]: E-mail:[email protected]
[^2]: We note that in some observations, only one or two detectors had data. See Table \[obs\_summary\].
[^3]: http://heasarc.nasa.gov/Tools/w3pimms.html
[^4]: In observations where only 2 detectors were active instead of 3, the normalization was adjusted accordingly. No normalization was necessary for observations with only 1 detector.
[^5]: https://github.com/piero-ranalli/cdfs-sim
[^6]: This systematic uncertainty takes into account that we used the coordinates as reported from [*emldetect*]{} as our attempt to use [*eposcorr*]{} to correct systematic astrometric offsets was unsuccessful, introducing different systematic offsets. The 1$^{\prime\prime}$ systematic error used here is consistent with the [*XMM-Newton*]{} Serendipitious Catalog procedure for estimating positional uncertainty for sources lacking independent astrometric corrections [@Watson].
[^7]: (NOT SATUR) OR (SATUR AND (NOT SATUR\_CENTER))
[^8]: (NOT BLENDED) OR (NOT NODEBLEND)
[^9]: (NOT BRIGHT) AND (NOT DEBLEND\_TOO\_MANY\_PEAKS) AND (NOT PEAKCENTER) AND (NOT NOTCHECKED) AND (NOT NOPROFILE)
[^10]: We consider the band to be affected by saturation if the fraction of saturated pixels exceeded 0.05, i.e., we could not rule out saturation at the 2$\sigma$ level.
[^11]: We consider [*moon\_lev*]{} $\geq$5 as contaminated, where [*moon\_lev*]{} is the number of frames affected by scattered moonlight normalized by the total number of frames in the exposure multiplied by 10, and spans from $0 \leq$ [*moon\_lev*]{} $\leq 9$.
[^12]: The ‘priOrSec’ flag is set to zero if there are no duplicate observations of the same source or to the best ‘frameSetId’ for duplicated observations. The SQL syntax to isolate primary observations is then ‘(priOrSec = 0 OR priORSec=frameSetId)’.
[^13]: Model predictions from the work of @treister for a range of input values are publicly available at http://agn.astroudec.cl/j\_agn/main.html
[^14]: http://www.bo.astro.it/$\sim$gilli/counts.html where we use their assumed spectral model of $\Gamma$=1.9 to convert the hard band luminosity bins into soft band luminosity bins required by their code.
---
abstract: 'Spin-orbit coupling in organic crystals is responsible for many spin-relaxation phenomena, ranging from spin diffusion to intersystem crossing. With the goal of constructing effective spin-orbit Hamiltonians to be used in multiscale approaches to the thermodynamical properties of organic crystals, we present a method that combines density functional theory with the construction of Wannier functions. In particular, we show that the spin-orbit Hamiltonian constructed over maximally localised Wannier functions can be computed by direct evaluation of the spin-orbit matrix elements over the Wannier functions constructed in the absence of spin-orbit interaction. This eliminates the problem of computing the Wannier functions for almost degenerate bands, a problem always present with the spin-orbit-split bands of organic crystals. Examples of the method are presented for isolated organic molecules, for one-dimensional chains of Pb and C atoms and for triarylamine-based one-dimensional single crystals.'
author:
- Subhayan Roychoudhury
- Stefano Sanvito
title: 'Spin-orbit Hamiltonian for organic crystals from first principles electronic structure and Wannier functions'
---
Introduction
============
Spintronic devices operate by detecting the spin of a carrier in the same way as a regular electronic device measures its electrical charge [@Wolf1488]. These devices are already the state of the art in the design of magnetic sensors such as the magnetic read-head of hard-disk drives [@Ornes05032013], but also have excellent prospects as logic gate elements [@DattaSLogic; @1995PhT; @Awschalom; @Igor]. Logic circuits using the spin degree of freedom may offer low energy consumption and high speed owing to the fact that the dynamics of spins takes place at a much smaller energy scale than that of the charge [@DattaSLogic; @Wolf1488].
Recent years have also witnessed a marked increase of interest in organic molecules and molecular crystals as a materials platform, initially for electronics [@HoroGill; @JOUR1] and lately also for spintronics [@Dediu2002181; @JOUR; @C1CS15047B]. The main reason behind such interest is that organic crystals, coming in a wide chemical variety, are typically much more flexible than their inorganic counterparts and they can exhibit an ample range of electronic properties, which are highly tuneable in practice. For example, it is possible to change the conductivity of organic polymers over fifteen orders of magnitude [@PhysRevLett.39.1098]. In addition to such an extreme spectrum of physical/chemical properties, organic materials are usually processed at low temperature. This is an advantage over inorganic compounds, which translates into a drastic reduction of the typical manufacturing and infrastructure costs [@Forrest]. Finally, specific to spintronics is the fact that both the spin-orbit (SO) and hyperfine interactions are very weak [@Sanvito] in organic compounds, resulting in weak spin scattering during electron transport [@Pramanik; @Tsukagoshi; @SCS2009].
Regardless of the type of media used, either organic or inorganic, spintronics always concerns phenomena related to the injection, manipulation and detection of spins in a solid-state environment [@C1CS15047B]. In the prototypical spintronic device, the spin-valve [@SpinValve], a non-magnetic spacer is sandwiched between two ferromagnets. Spins, which are initially aligned along the magnetization vector of the first ferromagnet, travel to the other ferromagnet through the spacer, and the resistance of the entire device depends on the relative orientation of the magnetization vectors of the two magnets. However, if the spin direction is lost across the spacer, the resistance will become independent of the magnetic configuration of the device. As such, in order to measure any spin-dependent effect one has to ensure that the charge carriers maintain their spin direction through the spacer. Notably, this requirement applies not only to spin-valves, but to any device based on spins. There are several mechanisms for spin relaxation in the solid state [@RevModPhys.76.323].
In an organic semiconductor (OSC) the unwanted spin relaxation can be caused by the presence of paramagnetic impurities, by SO coupling and by hyperfine interaction. In general paramagnetic impurities can be controlled to a very high degree of precision and they can be almost completely eliminated from an OSC during the chemical synthesis [@Impurity]. The hyperfine interaction instead can usually be considered small. This is because there are only a few elements typically present in organic molecules with abundant isotopes bearing nuclear spins. The most obvious exception is hydrogen. However, most of the OSC crystals are $\pi$-conjugated and the $\pi$-states, responsible for the extremal energy levels, and hence for the electron transport, are usually delocalized. This means that the overlap of the wave function over the H nuclei has to be considered small. Finally, also the SO coupling is weak, owing to the fact that most of the atoms composing organic compounds are light.
As such, since all the non-spin-conserving interactions are weak in OSCs, it is not surprising that there is contradictory evidence concerning the interaction mostly responsible for spin diffusion in organic crystals. Conflicting experimental evidence exists supporting either the SO coupling [@PhysRevB.81.153202; @Drew] or the hyperfine interaction [@PhysRevB.75.245324; @PhysRevB.78.115203], indicating that the dominant mechanism may depend on the specific material under investigation. For this reason it is important to develop methods for determining the strength of both the SO and the hyperfine coupling in real materials. These can eventually be the basis for constructing effective Hamiltonians to be used for the evaluation of the relevant thermodynamic quantities (e.g. the spin diffusion length). Here we present one such method for the case of the SO interaction.
The SO interaction is a relativistic effect arising from the electron motion in the nuclear potential. In the electron reference frame the nucleus moves and creates a magnetic field, which in turn interacts with the electron spin. This is the spin-orbit coupling [@cohen1977quantum]. Since the SO interaction allows the spin of an electron to change direction during the electron motion, it is an interaction responsible for spin relaxation. In fact, there exist several SO-based microscopic theories of spin relaxation in solid state systems [@RevModPhys.76.323]. In the case of inorganic semiconductors these usually require knowledge of the band structure of the material, some information about its mobility and an estimate of the spin-orbit strength. In the case of OSCs the situation, however, is more complex, mostly because the transport mechanism is more difficult to describe. Firstly, the band picture holds true only for a few cases, while for many others one has to consider the material as an ensemble of weakly coupled molecules with a broad distribution of hopping integrals [@Troisi]. Secondly, the typical phonon energies are of the same order of magnitude as the electronic bandwidth, indicating that electron-phonon scattering cannot be treated as a perturbation of the band structure. For all these reasons the description of the thermodynamical properties of OSCs requires the construction of a multi-scale theory, where the elementary electronic structure is mapped onto an effective Hamiltonian retaining only a handful of the original degrees of freedom [@doi:10.1021/ct500390a]. A rigorous and now standard method for constructing such an effective Hamiltonian consists in calculating the band structure over a set of Wannier functions [@PhysRev.52.191; @RevModPhys.34.645].
These can be constructed in a very general way as the Fourier transform of a linear combination of Bloch states, where the linear combination is taken so as to minimize the spatial extension of the Wannier functions. These are the so-called maximally localized Wannier functions (MLWFs) [@PhysRevB.56.12847; @RevModPhys.84.1419].
The MLWF method performs best for well-isolated bands. This is indeed the case of OSCs, where often the valence and conduction bands originate respectively from the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) of the gas-phase molecule. In fact, when the MLWF procedure is applied to such a band structure one obtains Wannier orbitals almost identical to the molecular HOMO and LUMO [@doi:10.1021/ct500390a]. Spin-orbit interaction, however, splits such well-defined bands, and in OSCs the split is typically a few tenths of $\mu$eV. Thus, in this case, one has to apply the MLWF procedure to bands which are indistinguishable at an energy scale larger than a few $\mu$eV. In such conditions the minimization becomes almost impossible to converge, the MLWFs cannot be calculated for SO-split bands and an alternative scheme must be implemented.
Here we describe a method for obtaining the SO matrix elements with respect to the Wannier functions calculated in the absence of the SO interaction. Since the SO coupling in OSCs is weak, such spin-independent Wannier functions represent a close approximation of those that one could, at least in principle, obtain in the presence of the SO interaction. Furthermore, when the MLWF basis spans the same Hilbert space defined by all the atomic orbitals relevant for describing a given bands manifold, our method provides an accurate description of the system even in the case of heavy elements, i.e. for strong spin-orbit interaction. In particular we implement our scheme together with the atomic-orbital, pseudopotential, density functional theory (DFT) code [Siesta]{} [@0953-8984-14-11-302]. [Siesta]{} is used to generate the band structure in the absence of the spin-orbit interaction and to calculate the SO potential, while the MLWF procedure is performed with the [Wannier90]{} code [@Mostofi2008685].
The paper is organized as follows. In the next section we describe our method in detail, by starting from the general idea and then going into the specific numerical implementation. A how-to workflow will also be presented. Next we discuss results obtained for rather diverse physical systems. Firstly, we evaluate the SO-split energy eigenvalues of a plumbane molecule and show how accurately these match those obtained directly from DFT including SO interaction. Then, we apply our procedure to the calculation of the band structure of a chain of Pb atoms, before moving to materials composed of light elements with low SO coupling. Here we will show that our method performs well for chains made of carbon atoms and of methane molecules. Finally we obtain the SO matrix elements for the Wannier functions derived from the HOMO band of a triarylamine-based nanowire, a relatively well-known semiconducting material with potential applications in photovoltaics [@B908802D] and spintronics.
Method
======
General idea
------------
Here we describe the idea behind our method, which is general and does not depend on the specific implementation used for calculating the band structure. Consider a set of $N^\prime$ isolated Bloch states, $\ket{\psi_{m\mathbf{k}}}$, describing an infinite lattice. These can be, for instance, the DFT Kohn-Sham eigenstates of a crystal. One can then obtain the associated $N^\prime$ Wannier functions from the definition, $$\label{equ1}
\begin{split}
\ket{w_{n\mathbf{R}}}=\frac{V}{(2\pi)^3}\int_\mathrm{BZ}\left[\sum_{m=1}^{N^\prime} U_{mn}^{\mathbf{k}}
\ket{\psi_{m\mathbf{k}}}\right]e^{-i\mathbf{k}\cdot\mathbf{R}}d\mathbf{k}\:,
\end{split}$$ where $\ket{w_{n\mathbf{R}}}$ is the $n$-th Wannier vector centred at the lattice site $\mathbf{R}$, $V$ is the volume of the primitive cell and the integration is performed over the first Brillouin zone (BZ). In Eq. (\[equ1\]) $U^{\mathbf{k}}$ is a unitary operator that mixes the Bloch states and hence defines the specific set of Wannier functions. A particularly convenient gauge choice for $U^{\mathbf{k}}$ consists in minimizing the spread of the Wannier functions, which reads $$\label{equ2}
\begin{split}
\Omega=\sum_n\left[\bra{w_{n\mathbf{0}}}r^2\ket{w_{n\mathbf{0}}}-|\bra{w_{n\mathbf{0}}}\mathbf{r}\ket{w_{n\mathbf{0}}}|^2\right]\:.
\end{split}$$ Such a choice defines the so-called maximally localized Wannier functions (MLWFs).
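To make Eq. (\[equ2\]) concrete, the sketch below evaluates the spread $\Omega$ for a single normalized function on a one-dimensional real-space grid. This is a toy discretization written for illustration only (the grid, the Gaussian trial function and the helper name `spread` are our own choices, not part of any MLWF code); production implementations evaluate these moments in reciprocal space.

```python
import numpy as np

# Quadratic spread of a normalized 1D "Wannier function" sampled on a grid:
# Omega = <w|x^2|w> - <w|x|w>^2.  Illustrative discretization only.
def spread(w, x, dx):
    density = np.abs(w) ** 2
    mean_x = np.sum(x * density) * dx
    mean_x2 = np.sum(x**2 * density) * dx
    return mean_x2 - mean_x**2

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
sigma = 0.7
g = np.exp(-x**2 / (4 * sigma**2))
g /= np.sqrt(np.sum(np.abs(g) ** 2) * dx)   # normalize on the grid

# For a Gaussian density of standard deviation sigma the spread is sigma^2.
assert np.isclose(spread(g, x, dx), sigma**2, atol=1e-6)
```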
In the absence of SO coupling a Wannier function of spin $s_1$ is composed exclusively of Bloch states with the same spin, $s_1$. By moving from a continuous to a discrete $k$-point representation the spin-polarized version of Eq. (\[equ1\]) becomes [@RevModPhys.84.1419] $$\label{equ9}
\begin{split}
\ket{w^{s_1}_{n\mathbf{R}}}=\frac{1}{N}\sum_{\mathbf{k}}\sum_m U^{s_1}_{mn}(\mathbf{k})
\ket{\psi_{m\mathbf{k}}^{s_1}}e^{-i\mathbf{k}\cdot\mathbf{R}}\:.
\end{split}$$ Note that this represents either a finite periodic lattice comprising $N$ unit cells or a sampling of $N$ uniformly distributed $k$-points in the Brillouin zone of an infinite lattice. Here the Bloch states, which are normalized within each unit cell according to the relation $\braket{\psi_{m\mathbf{k}}^{s_1}|\psi_{n\mathbf{k'}}^{s_2}}=N\delta_{m,n}\delta_{\mathbf{k},\mathbf{k'}}\delta_{s_1,s_2}$, obey the condition $\psi_{p\mathbf{k}}(\mathbf{r}_1)=\psi_{p\mathbf{k}}(\mathbf{r}_{N+1})$, where $\psi_{p\mathbf{k}}(\mathbf{r}_m)$ denotes the Bloch function for the $p$-th band at the wavevector **k** and position $\mathbf{r}_m$.
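As a toy check of Eq. (\[equ9\]), the snippet below builds the Wannier functions of a single-band one-dimensional tight-binding chain with the trivial gauge $U=1$: the discrete Fourier sum localizes each Wannier function on a single lattice site. All quantities are illustrative stand-ins, not [Siesta]{} or [Wannier90]{} output.

```python
import numpy as np

# Single-band 1D chain with N sites and periodic boundary conditions.
N = 8
sites = np.arange(N)
kpts = 2 * np.pi * np.arange(N) / N

# Bloch states psi_k(r_j) = e^{i k j}, normalized so <psi_k|psi_k'> = N delta.
psi = np.exp(1j * np.outer(kpts, sites))          # shape (N_k, N_sites)

# Eq. (equ9) with U = 1: w_R(r_j) = (1/N) sum_k e^{-i k R} psi_k(r_j)
W = np.array([(psi * np.exp(-1j * kpts[:, None] * R)).mean(axis=0)
              for R in sites])

# Each Wannier function is a lattice delta function: w_R(r_j) = delta_{R,j}.
assert np.allclose(W, np.eye(N))
```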
The projection of a generic Bloch state onto a MLWF in the absence of SO coupling can be written as $$\begin{gathered}
\label{equ10}
\begin{split}
&\braket{{\psi_{q\mathbf{k'}}^{s_1}}|{w_{n\mathbf{R}_2}^{s_2}}}= \\
&=\frac{1}{N}\sum_{\mathbf{k}}\sum_m U_{mn}^{s_2} (\mathbf{k})\braket{{\psi_{q\mathbf{k'}}^{s_1}}|{\psi_{m\mathbf{k}}^{s_2}}}
e^{-i\mathbf{k}\cdot\mathbf{R}}=\\
&=\frac{1}{N}\sum_{\mathbf{k}}\sum_m U^{s_2}_{mn}(\mathbf{k})e^{-i\mathbf{k}.\mathbf{R}}N\delta_{q,m}
\delta_{\mathbf{k},\mathbf{k'}}\delta_{s_1,s_2}=\\
&=U^{s_2}_{qn}(\mathbf{k'})e^{-i\mathbf{k'}\cdot\mathbf{R}}\delta_{s_1,s_2}\:.
\end{split}\end{gathered}$$ Hence a generic SO matrix element can be expanded over the MLWF basis set as $$\begin{gathered}
\label{equ11}
\begin{split}
&\bra{w_{m\mathbf{R}_1}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{n\mathbf{R}_2}^{s_2}}= \\
&=\frac{1}{N^2}\sum_{p,q}\sum_{\mathbf{k}_1,\mathbf{k}_2}\braket{{w_{m\mathbf{R}_1}^{s_1}}|{\psi_{p\mathbf{k}_1}^{s_1}}}
(\mathbf{V}_\mathrm{SO})^{s_1,s_2}_{p\mathbf{k}_1,q\mathbf{k}_2}
%\bra{\psi_{p\mathbf{k}_1}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\psi_{q\mathbf{k}_2}^{s_2}}
\braket{\psi_{q\mathbf{k}_2}^{s_2}|{w_{n\mathbf{R}_2}^{s_2}}}=\\
&=\frac{1}{N^2}\sum_{p,q}\sum_{\mathbf{k}_1,\mathbf{k}_2}U^{*(s_1)}_{pm}(\mathbf{k}_1)e^{i\mathbf{k}_1\cdot\mathbf{R}_1}
(\mathbf{V}_\mathrm{SO})^{s_1,s_2}_{p\mathbf{k}_1,q\mathbf{k}_2}\cdot\\
& \quad \cdot U^{s_2}_{qn}(\mathbf{k}_2)e^{-i\mathbf{k}_2\cdot\mathbf{R}_2}\:,
%& \quad \bra{\psi_{p\mathbf{k}_1}^{s_1}}\mathbf{V}_{SO}\ket{\psi_{q\mathbf{k}_2}^{s_2}}\:,
\end{split}\end{gathered}$$ where $$\label{VSOME}
(\mathbf{V}_\mathrm{SO})^{s_1,s_2}_{p\mathbf{k}_1,q\mathbf{k}_2}=
\bra{\psi_{p\mathbf{k}_1}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\psi_{q\mathbf{k}_2}^{s_2}}\:.$$ It must be noted that in the absence of SO coupling, the Bloch states are spin-degenerate, i.e. there are two states corresponding to each spatial wave-function, one with spin up, $\ket{\psi^{\uparrow}(\mathbf{r})}=\ket{\psi(\mathbf{r})}\otimes\ket{\uparrow}$, and one with spin down, $\ket{\psi^{\downarrow}(\mathbf{r})}=\ket{\psi(\mathbf{r})}\otimes\ket{\downarrow}$. The same is true for the Wannier functions, i.e. one always has the pair $\ket{w^{\uparrow}(\mathbf{r})}=\ket{w(\mathbf{r})}\otimes\ket{\uparrow}$, $\ket{w^{\downarrow}(\mathbf{r})}=\ket{w(\mathbf{r})}\otimes\ket{\downarrow}$. In the presence of SO coupling, spin mixing occurs and each Bloch and Wannier state is, in general, a linear combination of both spin vectors. Since the Bloch states (or the Wannier ones) obtained in the absence of SO coupling form a complete basis set in the Hilbert space, the SO coupling operator can be written over such a basis provided that one takes both spins into account. Therefore we use such spin-degenerate states as our basis for all calculations.
Numerical Implementation
------------------------
The derivation leading to Eq. (\[equ11\]) is general and the final result is simply a matrix transformation of the SO operator from the basis of the Bloch states to that of the Wannier ones. Note that both basis sets are those calculated in the absence of SO coupling, i.e. we have assumed that the spatial part of the basis function is not modified by the introduction of the SO interaction. For practical purposes we now wish to rewrite Eq. (\[equ11\]) in terms of a localized atomic-orbital basis set, i.e. we wish to make our method applicable to first-principles DFT calculations implemented over local orbitals. In particular all the calculations that will follow use the [Siesta]{} package, which expands the wave-function and all the operators over a numerical atomic-orbital basis set, {${\ket{\phi_{\mu,\mathbf{R}_j}^{s}}}$}, where $\ket{\phi_{\mu,\mathbf{R}_j}^{s}}$ denotes the $\mu$-th atomic orbital ($\mu$ is a collective label for the principal and angular momentum quantum numbers) with spin $s$ belonging to the cell at the position $\mathbf{R}_j$. [Siesta]{} uses relativistic pseudopotentials to generate the spin-orbit matrix elements with respect to the basis vectors and truncates the range of the SO interaction to the on-site terms [@0953-8984-18-34-012]. For a finite periodic lattice comprising $N$ unit cells, a Bloch state is written with respect to atomic orbitals as $$\label{equ3}
\begin{split}
\ket{\psi_{p\mathbf{k}}}=\sum_{j=1}^N e^{i\mathbf{k}\cdot\mathbf{R}_j}
\left(\sum_{\mu}C_{\mu p}(\mathbf{k})\ket{\phi_{\mu,\mathbf{R}_j}}\right)\:,
\end{split}$$ where the coefficients $C_{\mu p}(\mathbf{k})$ are in general complex numbers. This state is normalized over the unit cell, with the allowed $\mathbf{k}$-values being $\frac{m}{N} \mathbf{K}$, where $\mathbf{K}$ is the reciprocal lattice vector and $m$ is an integer.
Hence the SO matrix elements written with respect to the spin-degenerate Bloch states calculated in the absence of SO interaction are $$\begin{gathered}
\label{equ4}
\begin{split}
&\bra{\psi_{p\mathbf{k}_1}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\psi_{q\mathbf{k}_2}^{s_2}} =
\sum_{j,l}e^{i(\mathbf{k}_2\cdot\mathbf{R}_{l}-\mathbf{k}_1\cdot\mathbf{R}_{j})}\cdot\\
& \cdot\sum_{\mu,\nu}C_{\mu p}^{*s_1}(\mathbf{k}_1)C_{\nu q}^{s_2}(\mathbf{k}_2)
\bra{\phi_{\mu,\mathbf{R}_{j}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\phi_{\nu,\mathbf{R}_{l}}^{s_2}}\:.
\end{split}\end{gathered}$$ As mentioned above [Siesta]{} neglects all the SO matrix elements between atomic orbitals located at different atoms. This leads to the approximation $$\begin{gathered}
\label{equ5}
\begin{split}
\bra{\phi_{\mu,\mathbf{R}_{j}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\phi_{\nu,\mathbf{R}_{l}}^{s_2}}=
\bra{\phi_{\mu}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\phi_{\nu}^{s_2}}\delta_{\mathbf{R}_{j},\mathbf{R}_{l}}\:,
\end{split}\end{gathered}$$ so that Eq. (\[equ4\]) becomes $$\begin{gathered}
\label{equ6}
\begin{split}
&\bra{\psi_{p\mathbf{k}_1}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\psi_{q\mathbf{k}_2}^{s_2}} =
\sum_je^{i(\mathbf{k}_2-\mathbf{k}_1)\cdot\mathbf{R}_j}\cdot\\
& \cdot\sum_{\mu,\nu}C_{\mu p}^{*(s_1)}(\mathbf{k}_1)C_{\nu q}^{(s_2)}(\mathbf{k}_2)\bra{\phi_{\mu}^{s_1}}
\mathbf{V}_\mathrm{SO}\ket{\phi_{\nu}^{s_2}}\:.
\end{split}\end{gathered}$$ This can be further simplified by taking into account the relation $$\label{equ7}
\begin{split}
\sum_{j=1}^N e^{i(\mathbf{k}_1-\mathbf{k}_2)\cdot\mathbf{R}_j}=N\delta_{\mathbf{k}_1,\mathbf{k}_2}\:,
\end{split}$$ which leads to the final expression for the SO matrix elements $$\begin{gathered}
\label{equ8}
\begin{split}
&\bra{\psi_{p\mathbf{k}_1}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\psi_{q\mathbf{k}_2}^{s_2}} =\\
&=N\sum_{\mu,\nu}C_{\mu p}^{*(s_1)}(\mathbf{k}_1)C_{\nu q}^{(s_2)}(\mathbf{k}_1)
\bra{\phi_{\mu}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\phi_{\nu}^{s_2}}\delta_{\mathbf{k}_1,\mathbf{k}_2}\:.
\end{split}\end{gathered}$$
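In matrix form, Eq. (\[equ8\]) is a congruence transformation: at a given $\mathbf{k}$ the Bloch-basis SO matrix is $N\,C^\dagger(\mathbf{k})\,\mathbf{V}_\mathrm{SO}\,C(\mathbf{k})$, with the spin structure carried by $2\times 2$ blocks. A minimal sketch follows; the on-site SO matrix and the coefficients are random Hermitian placeholders, not actual [Siesta]{} data.

```python
import numpy as np

# Sketch of Eq. (equ8): SO matrix elements between Bloch states at the same k,
# built from the on-site atomic-orbital matrix and the coefficients C(k).
rng = np.random.default_rng(0)
n_orb, n_bands, N_cells = 4, 3, 8

# Hermitian on-site SO matrix with spin blocks, shape (2*n_orb, 2*n_orb).
A = rng.standard_normal((2 * n_orb, 2 * n_orb)) \
    + 1j * rng.standard_normal((2 * n_orb, 2 * n_orb))
V_so_ao = 0.5 * (A + A.conj().T)

# Spin-degenerate Bloch coefficients: same spatial part for up and down spin.
C = rng.standard_normal((n_orb, n_bands)) \
    + 1j * rng.standard_normal((n_orb, n_bands))
C_spin = np.kron(np.eye(2), C)                 # shape (2*n_orb, 2*n_bands)

# Eq. (equ8): V_{pq}(k) = N * sum_{mu,nu} C*_{mu p} C_{nu q} <phi_mu|V_SO|phi_nu>
V_so_bloch = N_cells * C_spin.conj().T @ V_so_ao @ C_spin

assert V_so_bloch.shape == (2 * n_bands, 2 * n_bands)
assert np.allclose(V_so_bloch, V_so_bloch.conj().T)   # Hermiticity is preserved
```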
With the result of Eq. (\[equ8\]) at hand we can now come back to the expression for the SO matrix elements written over the MLWFs computed in the absence of spin-orbit interaction \[see Eq. (\[equ11\])\]. In the case of the [Siesta]{} basis set this now reads $$\begin{gathered}
\label{equ12}
\begin{split}
&\bra{w_{m\mathbf{R}_1}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{n\mathbf{R}_2}^{s_2}}= \\
=&\frac{1}{N}\sum_{p,q,\mu,\nu}\sum_{\mathbf{k}}C_{\mu p}^{*s_1}(\mathbf{k})C_{\nu q}^{s_2}(\mathbf{k})
U^{*(s_1)}_{pm}(\mathbf{k})U^{s_2}_{qn}(\mathbf{k})\cdot\\
&\cdot e^{i\mathbf{k}\cdot(\mathbf{R}_1-\mathbf{R}_2)}\bra{\phi_{\mu}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\phi_{\nu}^{s_2}}\:.
\end{split}\end{gathered}$$ Finally, we go back to the continuous representation ($N\rightarrow\infty$), where the sum over **k** is replaced by an integral over the first Brillouin zone $$\begin{gathered}
\label{equ13}
\begin{split}
&\bra{w_{m\mathbf{R}_1}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{n\mathbf{R}_2}^{s_2}}= \\
&=\frac{V}{(2\pi)^3}\sum_{p,q,\mu,\nu}\int_\mathrm{BZ}C_{\mu p}^{*s_1}(\mathbf{k})C_{\nu q}^{s_2}(\mathbf{k})
U^{*s_1}_{pm}(\mathbf{k})U^{s_2}_{qn}(\mathbf{k})\cdot\\
&\cdot e^{i\mathbf{k}\cdot(\mathbf{R}_1-\mathbf{R}_2)}\bra{\phi_{\mu}^{s_1}}
\mathbf{V}_\mathrm{SO}\ket{\phi_{\nu}^{s_2}}d\mathbf{k}\:.
\end{split}\end{gathered}$$
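The discrete-$\mathbf{k}$ transformation of Eq. (\[equ12\]) can be sketched as follows: at each $\mathbf{k}$ the Bloch-basis SO matrix is rotated with $U(\mathbf{k})$ and the result is Fourier-summed to the Wannier representation at $\mathbf{R}_1-\mathbf{R}_2$. Here `U`, `V_bloch` and the helper `v_so_wannier` are illustrative placeholders standing in for the gauge matrices and Bloch-basis SO matrices of an actual calculation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_w, N_k = 3, 16                              # Wannier functions per spin, k-points
kpts = 2 * np.pi * np.arange(N_k) / N_k       # 1D Brillouin-zone sampling

def random_unitary(n):
    q, _ = np.linalg.qr(rng.standard_normal((n, n))
                        + 1j * rng.standard_normal((n, n)))
    return q

# Spin-degenerate gauge matrices and Hermitian Bloch-basis SO matrices.
U = [np.kron(np.eye(2), random_unitary(n_w)) for _ in kpts]
V_bloch = []
for _ in kpts:
    a = rng.standard_normal((2 * n_w, 2 * n_w)) \
        + 1j * rng.standard_normal((2 * n_w, 2 * n_w))
    V_bloch.append(0.5 * (a + a.conj().T))

def v_so_wannier(dR):
    """<w_{m,R1}|V_SO|w_{n,R2}> for R1 - R2 = dR (units of the lattice constant)."""
    out = np.zeros((2 * n_w, 2 * n_w), dtype=complex)
    for k, Uk, Vk in zip(kpts, U, V_bloch):
        out += np.exp(1j * k * dR) * (Uk.conj().T @ Vk @ Uk)
    return out / N_k

# The on-site block is Hermitian, and V(dR)^dagger = V(-dR).
V0 = v_so_wannier(0)
assert np.allclose(V0, V0.conj().T)
assert np.allclose(v_so_wannier(1).conj().T, v_so_wannier(-1))
```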
To summarize, our strategy consists in simply evaluating the SO matrix elements over the basis set of the MLWFs constructed in the absence of SO interaction. These are by definition spin-degenerate and they are in general easy to compute, since they are associated with well-separated bands. Our procedure thus avoids running the minimization algorithm necessary to fix the Wannier gauge over the SO-split bands, which in the case of OSCs have tiny splits. Our method is exact in the case where the MLWFs form a complete set describing a particular bands manifold. In other circumstances they constitute a good approximation, as long as the SO interaction is weak, namely when it does not significantly change the spatial shape of the Wannier functions. However, for a material with strong SO coupling (e.g. Pb), if the MLWFs under consideration do not span the entire Bloch states manifold, then the SO-split eigenvalues calculated with our method will not match those obtained directly with the first-principles calculation.
Workflow
--------
The following procedure is adopted when calculating the SO-split band structures from the MLWF Hamiltonian. The results are then compared to the band structure obtained directly from [Siesta]{} including the SO interaction.
1. We first run a self-consistent non-collinear spin-DFT [Siesta]{} calculation and obtain the band structure.
2. From the density matrix obtained at step (1), we run a non-self-consistent single-step [Siesta]{} calculation including SO coupling. This gives us the matrix elements $\bra{\phi_{\mu}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\phi_{\nu}^{s_2}}$. The band structure obtained in this calculation (from now on called the SO-DFT band structure) will then be compared with that obtained over the MLWFs. Note that we do not perform the [Siesta]{} DFT calculation including spin-orbit interaction in a self-consistent way. This is because the SO interaction changes the density matrix very little, so that such a calculation is often unnecessary. Furthermore, as we cannot run the MLWF calculation in a self-consistent way over the SO interaction, considering the non-self-consistent SO band structure at the [Siesta]{} level allows us to compare electronic structures arising from identical charge densities.
3. Since the current version of [Wannier90]{} implemented for [Siesta]{} works only with collinear spins, we run a regular self-consistent spin-polarized [Siesta]{} calculation. This gives us the coefficients $C_{\mu n}^{s}(\mathbf{k})$, which are spin-degenerate for a non-magnetic material, $C_{\mu n}^{\uparrow}(\mathbf{k})=C_{\mu n}^{\downarrow}(\mathbf{k})$.
4. We run a [Wannier90]{} calculation to construct the MLWFs associated with the band structure computed at point (3). This returns the unitary matrix, $U_{pm}^{s}(\mathbf{k})$, the Hamiltonian matrix elements $\bra{w_{m\mathbf{R}_1}^{s_1}}\mathbf{H}_0\ket{w_{n\mathbf{R}_2}^{s_2}}$ ($\mathbf{H}_0$ is the Kohn-Sham Hamiltonian in the absence of SO interaction) and the phase factors [^1] $e^{i\mathbf{k}\cdot\mathbf{R}}$. For a non-magnetic material the matrix elements of $\mathbf{H}_0$ satisfy the relation $\bra{w_{m\mathbf{R_1}}^{s_1}}\mathbf{H}_0\ket{w_{n\mathbf{R_2}}^{s_2}}=
\bra{w_{m\mathbf{R_1}}}\mathbf{H}_0\ket{w_{n\mathbf{R_2}}}\delta_{s_1,s_2}$.
5. From $\bra{\phi_{\mu}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\phi_{\nu}^{s_2}}$ and the $C_{\mu n}^{s}(\mathbf{k})$’s we calculate the matrix elements $\bra{\psi_{p\mathbf{k}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\psi_{q\mathbf{k}}^{s_2}}$ by using Eq. (\[equ8\]).
6. Next we transform the SO matrix elements constructed over the Bloch functions, $\bra{\psi_{p\mathbf{k}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\psi_{q\mathbf{k}}^{s_2}}$, into their Wannier counterparts, $\bra{w_{m\mathbf{R}_1}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{n\mathbf{R}_2}^{s_2}}$, by using Eq. (\[equ13\]).
7. The final complete Wannier Hamiltonian now reads $$\begin{gathered}
\label{equ14}
\begin{split}
\bra{w_{m\mathbf{R}_1}^{s_1}}\mathbf{H}\ket{w_{n\mathbf{R}_2}^{s_2}}
&=\bra{w_{m\mathbf{R}_1}^{s_1}}\mathbf{H}_0+\mathbf{V}_\mathrm{SO}\ket{w_{n\mathbf{R}_2}^{s_2}}\:,
\end{split}\end{gathered}$$ and the associated band structure can be directly compared with that computed at point (2) directly from [Siesta]{}.
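The band-structure comparison at the end of the workflow amounts to Fourier-summing the Wannier-basis matrix elements of Eq. (\[equ14\]) and diagonalizing at each $\mathbf{k}$. The sketch below does this for a 1D lattice; the on-site and hopping matrices are random placeholders (not actual Wannier90 output), and the only structural requirement imposed, $H(-\mathbf{R})=H(\mathbf{R})^\dagger$, guarantees a Hermitian $H(\mathbf{k})$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2                                          # Wannier functions per spin

def herm(a):
    return 0.5 * (a + a.conj().T)

onsite = herm(rng.standard_normal((2 * n, 2 * n))
              + 1j * rng.standard_normal((2 * n, 2 * n)))
hop = rng.standard_normal((2 * n, 2 * n)) \
      + 1j * rng.standard_normal((2 * n, 2 * n))

# Wannier-basis Hamiltonian H(R) = H0(R) + V_SO(R); H(-R) = H(R)^dagger.
H_R = {0: onsite, 1: hop, -1: hop.conj().T}

def bands(k):
    """SO-split eigenvalues at k from the Fourier sum H(k) = sum_R e^{ikR} H(R)."""
    Hk = sum(np.exp(1j * k * R) * HR for R, HR in H_R.items())
    return np.linalg.eigvalsh(Hk)

for k in np.linspace(-np.pi, np.pi, 11):
    e = bands(k)
    assert e.shape == (2 * n,)
    assert np.all(np.diff(e) >= 0)             # eigvalsh returns sorted eigenvalues
```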
Results and Discussion
======================
We now present our results, which are discussed in the light of the theory just described.
Plumbane Molecule
-----------------
We start our analysis by calculating the SO matrix elements and then the energy eigenvalues of a plumbane (PbH$_4$) molecule \[see figure \[fig:3\_structures\](a)\].
![(Color online) Atomic structure of (a) a plumbane molecule, (b) a chain of lead atoms and (c) a chain of methane molecules. We have also calculated the electronic structure of a chain of C atoms, which is essentially identical to that presented in (b). Color code: Pb = grey, H = light blue, C = yellow.[]{data-label="fig:three_structure"}](Fig1.jpg){width="44.00000%"}
\[fig:3\_structures\]
Due to the presence of lead, the molecular eigenstates change significantly when the SO interaction is switched on. For this non-periodic system the key relations in Eq. (\[equ8\]) and Eq. (\[equ11\]) reduce to $$\begin{gathered}
\label{equ16}
\begin{split}
\bra{\psi_{p}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\psi_{q}^{s_2}} =
\sum_{\mu,\nu}C_{\mu p}^{*s_1}C_{\nu q}^{s_2}\bra{\phi_{\mu}^{s_1}}
\mathbf{V}_\mathrm{SO}\ket{\phi_{\nu}^{s_2}}
\end{split}\end{gathered}$$ and $$\begin{gathered}
\label{equ17}
\begin{split}
&\bra{w_{m}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{n}^{s_2}} =
\sum_{p,q}U^{*s_1}_{pm}U^{s_2}_{qn}\bra{\psi_{p}^{s_1}}
\mathbf{V}_\mathrm{SO}\ket{\psi_{q}^{s_2}}\:,
\end{split}\end{gathered}$$ respectively, where now the vectors $\psi_n^s$ are simply the eigenvectors with quantum number $n$ and spin $s$.
In Table \[table:plumbane\] we report the first 10 energy eigenvalues of plumbane, calculated either with or without SO coupling. These have been computed within the LDA (local density approximation) and a double-zeta polarized basis set. The table compares results obtained with our MLWF procedure to those computed with SO-DFT by [Siesta]{}. Clearly in this case of a heavy ion the SO coupling changes the eigenvalues appreciably, in particular in the spectral region around $-13$ eV. Such a change is well captured by our Wannier calculation, which returns energy levels in close proximity to those computed with SO-DFT by [Siesta]{}. In order to estimate the error introduced by our method, we calculate the *Mean Relative Absolute Difference (MRAD)*, which we define as $\frac{1}{N}\sum\frac{|\epsilon_i^s-\epsilon_i^w|}{|\epsilon_i^s|}$ for a set of $N$ eigenvalues ($i=1,...,N$), where $\epsilon_i^s$ and $\epsilon_i^w$ are the $i$-th eigenvalues calculated from [Siesta]{} and from the MLWFs, respectively. Notably the *MRAD* is rather small both in the SO-free case and when the SO interaction is included. Most importantly, our procedure to evaluate the SO matrix elements over the MLWF basis clearly does not introduce any additional error.
| [Siesta]{} (no SO) | MLWF (no SO) | [Siesta]{} (with SO) | MLWF (with SO) |
|:------------------:|:------------:|:--------------------:|:--------------:|
| -33.93534 | -33.93521 | -33.93532 | -33.93521 |
| -33.93530 | -33.93521 | -33.93528 | -33.93521 |
| -13.02511 | -13.02507 | -14.69573 | -14.69568 |
| -13.02511 | -13.02507 | -14.69573 | -14.69568 |
| -13.02510 | -13.02506 | -12.64301 | -12.64298 |
| -13.02509 | -13.02506 | -12.64301 | -12.64298 |
| -13.02320 | -13.02315 | -12.64166 | -12.64162 |
| -13.02318 | -13.02315 | -12.64165 | -12.64162 |
| -5.75256 | -5.75251 | -5.75255 | -5.75251 |
| -5.75245 | -5.75251 | -5.75245 | -5.75251 |

  : First ten energy eigenvalues (in eV) of plumbane computed without SO coupling (left two columns) and with SO coupling (right two columns), from [Siesta]{} and from the MLWF Hamiltonian.[]{data-label="table:plumbane"}
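The *MRAD* figure of merit is straightforward to compute; a minimal sketch follows, with placeholder eigenvalues rather than the plumbane numbers of Table \[table:plumbane\].

```python
import numpy as np

# Mean Relative Absolute Difference between two sets of eigenvalues:
# MRAD = (1/N) * sum_i |eps_i^s - eps_i^w| / |eps_i^s|
def mrad(eps_siesta, eps_wannier):
    eps_siesta = np.asarray(eps_siesta, dtype=float)
    eps_wannier = np.asarray(eps_wannier, dtype=float)
    return np.mean(np.abs(eps_siesta - eps_wannier) / np.abs(eps_siesta))

assert np.isclose(mrad([-2.0, -4.0], [-2.0, -4.0]), 0.0)
assert np.isclose(mrad([-2.0, -4.0], [-2.2, -4.0]), 0.05)
```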
Before discussing some of the properties of the SO matrix elements associated with this particular case of a finite molecule, we wish to make a quick remark on the Wannier procedure adopted here. The eigenvalues reported in Table \[table:plumbane\] are the ten with the lowest energies. However, in order to construct the MLWFs we have considered all the states of the calculated Kohn-Sham spectrum. This means that, if our [Siesta]{} basis set describes PbH$_4$ with $N$ distinct atomic orbitals, then the MLWFs constructed are 2$N$ (the factor 2 accounts for the spin degeneracy). In this case the original local-orbital basis set and the constructed MLWFs span the same Hilbert space and the mapping is exact, whether or not the SO interaction is considered.
In most cases, however, one wants to construct the MLWFs by using only a subset of the spectrum, for instance the first $N^\prime$ eigenstates. Since in general the SO interaction mixes all states, there will be SO matrix elements between the selected $N^\prime$ states and the remaining $N-N^\prime$. This means that a MLWF basis constructed only from the first $N^\prime$ eigenstates will not be able to provide an accurate description of the SO-split spectrum. Importantly, one may in general expect that the SO interaction matrix elements between different Kohn-Sham orbitals, $\bra{\psi^{s_1}_p}\mathbf{V}_{\rm{SO}}\ket{\psi^{s_2}_q}$, are smaller than those calculated at the same orbital, $\bra{\psi^{s_1}_n}\mathbf{V}_{\rm{SO}}\ket{\psi^{s_2}_n}$. This is because of the short range of the SO interaction and the fact that the Kohn-Sham eigenstates are orthonormal. In the case of light elements, i.e. for a weak SO potential, one may completely neglect the off-diagonal SO matrix elements. This means that the SO spectrum constructed with the MLWFs associated to the first $N^\prime$ eigenstates will be approximately equal to the first $N^\prime$ eigenvalues of the MLWFs Hamiltonian constructed over the entire $N$-dimensional spectrum. Such a property is particularly relevant for OSCs, for which the SO interaction is weak.
We now move to discuss a general property of the MLWF SO matrix elements, namely the relations $\bra{w_m^{s}}\mathbf{V}_\mathrm{SO}\ket{w_m^{s}}=0$ and $\Re[\bra{w_m^{s}}\mathbf{V}_\mathrm{SO}\ket{w_n^{s}}]=0$. This means that the SO matrix elements for the same spin and the same Wannier function vanish, while those for the same spin and different Wannier functions are purely imaginary. This property can be understood from the following argument. The SO coupling operator is $\mathbf{V}_\mathrm{SO}=\sum_{\mathbf{R}_j}V_{\mathbf{R}_j}
\mathbf{L}_{\mathbf{R}_j}\cdot\mathbf{S}$, where $V_{\mathbf{R}_j}$ is a scalar potential independent of spin, and $\mathbf{L}_{\mathbf{R}_j}$ is the angular momentum operator corresponding to the central potential of the atom at position $\mathbf{R}_j$. Here $\mathbf{S}$ is the spin operator and the sum runs over all the atoms. By now expanding **S** in terms of the Pauli spin matrices one can see that for any vector $\ket{\gamma_i^s}=\ket{\gamma_i}\otimes\ket{s}$, which can be written as a tensor product of a spin-independent part, $\ket{\gamma_i}$, and a spinor $\ket{s}$, the following equality holds $$\begin{gathered}
\label{equ20}
\begin{split}
&\bra{\gamma_m^{s_1}}\mathbf{L}\cdot\mathbf{S}\ket{\gamma_n^{s_2}}=
\frac{1}{2}\left[\bra{\gamma_m}\hat{L}_z\ket{\gamma_n}\delta_{s_1\uparrow}\delta_{s_2\uparrow}\right.+\\
&+\bra{\gamma_m}\hat{L}_-\ket{\gamma_n}\delta_{s_1\uparrow}\delta_{s_2\downarrow} +
\bra{\gamma_m}\hat{L}_+\ket{\gamma_n}\delta_{s_1\downarrow}\delta_{s_2\uparrow}+\\
&+\left.\bra{\gamma_m}-\hat{L}_z\ket{\gamma_n}\delta_{s_1\downarrow}\delta_{s_2\downarrow}\right]\:.
\end{split}\end{gathered}$$ Eq. (\[equ20\]) can then be applied to both the Kohn-Sham eigenstates and the MLWFs, since they are both written as $\ket{\gamma_i^s}=\ket{\gamma_i}\otimes\ket{s}$.
Now, the atomic orbitals used by [Siesta]{} have the following form $$\begin{gathered}
\label{equ21}
\begin{split}
\ket{\phi_i}=\ket{R_{n_i,l_i}}\otimes\ket{l_i,M_i}\:,
\end{split}\end{gathered}$$ where $\ket{R_{n,l}}$ is a radial numerical function, while the angular dependence is described by the real spherical harmonic, [^2] $\ket{l,M}$. It can be proved that the real spherical harmonics follow the relation $$\begin{gathered}
\label{equ22}
\begin{split}
\bra{l,M_i}\hat{L}_z\ket{l,M_j}=-iM_i\delta_{M_i,-M_j}\:.
\end{split}\end{gathered}$$
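This property is easy to verify numerically for the $l=1$ shell. The sketch below (an illustration using the footnote's convention for real harmonics; not from the paper) confirms that the $\hat L_z$ matrix in the real-harmonic basis is purely imaginary, vanishes on the diagonal, and couples $M$ to $-M$:

```python
import numpy as np

s2 = np.sqrt(2.0)
# Real l = 1 spherical harmonics in terms of the complex |1, m> ones
# (footnote convention), rows M = +1, 0, -1; columns m = +1, 0, -1:
#   |1,+1> = (|1,1> - |1,-1>)/sqrt(2),  |1,-1> = (|1,1> + |1,-1>)/(i sqrt(2))
C = np.array([[1/s2,   0, -1/s2],
              [0,      1,  0   ],
              [-1j/s2, 0, -1j/s2]], dtype=complex)

Lz_complex = np.diag([1.0, 0.0, -1.0]).astype(complex)
Lz_real = C.conj() @ Lz_complex @ C.T        # <1,M_i| Lz |1,M_j>

assert np.allclose(Lz_real.real, 0)          # purely imaginary...
assert np.allclose(np.diag(Lz_real), 0)      # ...with vanishing diagonal
assert np.isclose(Lz_real[0, 2], -1j)        # couples M = +1 to M = -1
```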
Since any Kohn-Sham eigenstate, $\ket{\psi_p^{s_1}}$, can be written as a linear combination of products ${\ket{\phi_i}\otimes\ket{s_1}}$, Eq. (\[equ20\]) implies that only the terms in $\hat{L}_z$ (or $-\hat{L}_z$) contribute to the matrix elements between equal spins, $\bra{\psi_p^{s_1}}\mathbf{L}\cdot\mathbf{S}\ket{\psi_q^{s_1}}$. Eq. (\[equ22\]), together with the fact that the Kohn-Sham eigenstates are real for a finite molecule, further establishes that $\Re[\bra{\psi_p}\hat{L}_z\ket{\psi_q}]=0$. As a consequence $\bra{\psi_m}\hat{L}_z\ket{\psi_m}=0$. Finally, by keeping in mind that the unitary matrix elements transforming the Kohn-Sham eigenstates into MLWFs are real for a molecule, we have also $$\begin{gathered}
\begin{split}
& \bra{w_m^{s_1}}\mathbf{L}\cdot\mathbf{S}\ket{w_n^{s_1}} =\pm \bra{w_m}\hat{L}_z\ket{w_n}=\\
%&=\sum_{p,q}U_{pm}U_{qn}\bra{\psi_p}\hat{L}_z\ket{\psi_q} =
&=\sum_{p\neq q}U_{pm}U_{qn}\bra{\psi_p}\hat{L}_z\ket{\psi_q}\:,
\end{split}\end{gathered}$$ which has to be imaginary. Thus we have $\Re\bra{w_m^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_n^{s_1}}=0$ and $\bra{w_m^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_m^{s_1}}=0$ since $\mathbf{V}_\mathrm{SO}$ must have real expectation values.
Lead Chain
----------
\[fig:Pbbands\]
Next we move to calculating the SO matrix elements for a periodic structure. In particular we look at a 1D chain of Pb atoms with a unit cell length of 2.55 Å, which is the DFT equilibrium lattice constant obtained with the LDA. Note that free-standing one-dimensional Pb chains have never been reported in the literature, although there are studies of low-dimensional Pb structures encapsulated into zeolites [@PbZeo]. Here, however, we do not seek to describe a real compound; rather, we take the 1D Pb mono-atomic chain as a test-bench structure with which to apply our method to a periodic system with a large SO coupling. Also in this case we have constructed the MLWFs by taking the entire band manifold and not a subset of it. For the DFT calculations we have considered a simple $s$ and $p$ single-zeta basis set, which, in the absence of SO interaction, yields three bands with one of them being doubly degenerate \[see Fig. \[fig:Pbbands\](a)\]. The doubly-degenerate relatively-flat band just cuts across the Fermi energy, $E_\mathrm{F}$, and it is composed of the $p_y$ and $p_z$ orbitals orthogonal to the chain axis ($\pi$ band). The other two bands are $sp$ hybrids ($\sigma$ bands). The lowest one, at about 25 eV below $E_\mathrm{F}$, has mainly $s$ character ($\sigma$ band), while the other has mainly $p_x$ character ($\sigma^*$ band).
Spin-orbit coupling lifts the degeneracy of the $p$-type band manifold, which is now composed of three distinct bands. In particular the degeneracy is lifted only in the $\pi$ band at the edge of the 1D Brillouin zone, while it also involves the $\sigma$ one close to the $\Gamma$ point (after the band crossing). When the same band structure is calculated from the MLWFs we obtain the plot of Fig. \[fig:Pbbands\](b). This is almost identical to that calculated with SO-DFT, demonstrating the accuracy of our method also for periodic systems.
It must be noted that for a periodic structure the Bloch state expansion coefficients, $C_{\mu p}(\mathbf{k})$, and the elements of the unitary matrix $U$ are complex and consequently the diagonal elements of $\mathbf{V}_\mathrm{SO}$ with respect to Wannier functions are not zero in general. However, as expected $\bra{w_{m\mathbf{R}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{n\mathbf{R^\prime}}^{s_2}}$ tends to vanish as the separation $|\mathbf{R}-\mathbf{R^\prime}|$ increases. Furthermore, it is clear from Eq. (\[equ20\]) that the SO matrix elements for Wannier functions should obey the spin-box anti-hermitian relation $$\begin{gathered}
\label{equanti-hermi}
\begin{split}
\bra{w_{m\mathbf{R}}^{s_1}}\mathbf{V}_{SO}\ket{w_{n\mathbf{R'}}^{s_2}}=
-\bra{w_{m\mathbf{R}}^{s_2}}\mathbf{V}_\mathrm{SO}\ket{w_{n\mathbf{R'}}^{s_1}}^{*}\:.
\end{split}\end{gathered}$$ These two properties can be appreciated in Fig. \[fig:Pb\_MATelms\], where we plot the real \[panel (a)\] and imaginary \[panel (b)\] part of $\bra{w_{m\mathbf{0}}^{s_1}}\mathbf{V}_{SO}\ket{w_{n\mathbf{R}}^{s_2}}$ for some representative band combinations, $m$ and $n$, as a function of **R**.
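The spin-box anti-hermiticity of Eq. (\[equanti-hermi\]) can likewise be checked numerically. The sketch below (an illustration with an assumed $l=1$ orbital space and the footnote's real-harmonic convention standing in for a real Wannier-like basis; not data from the paper) builds $\mathbf{L}\cdot\mathbf{S}$ and verifies the relation block by block:

```python
import numpy as np

# l = 1 matrices in the complex |1, m> basis (m = +1, 0, -1) and S = sigma/2.
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Lp = np.zeros((3, 3), dtype=complex)
Lp[0, 1] = Lp[1, 2] = np.sqrt(2.0)
Lm = Lp.conj().T
Sp = np.array([[0, 1], [0, 0]], dtype=complex)    # S+
Sm = Sp.T                                         # S-
Sz = np.diag([0.5, -0.5]).astype(complex)
LS = np.kron(Lz, Sz) + 0.5*np.kron(Lp, Sm) + 0.5*np.kron(Lm, Sp)

# Transform the orbital part to the real-harmonic basis of the footnote,
# which plays the role of a real Wannier-like basis.
r2 = np.sqrt(2.0)
C = np.array([[1/r2, 0, -1/r2], [0, 1, 0], [-1j/r2, 0, -1j/r2]])
T = np.kron(C, np.eye(2))
V = T.conj() @ LS @ T.T            # <M_i s1| L.S |M_j s2>

blk = V.reshape(3, 2, 3, 2)
for s1 in (0, 1):                  # 0 = up, 1 = down
    for s2 in (0, 1):
        # spin-box anti-hermiticity: swap spins, conjugate, change sign
        assert np.allclose(blk[:, s1, :, s2], -np.conj(blk[:, s2, :, s1]))
```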
Carbon Chain
------------
Next we look at the case of a 1D mono-atomic carbon chain with an LDA-relaxed interatomic distance of $\thicksim1.3$ Å. This has the same structure and electron count as the Pb chain, and the only difference concerns the fact that the SO coupling in C is much smaller than that in Pb. In this situation we expect that an accurate SO-split band structure can be obtained even when the MLWFs are constructed only for a limited number of bands and not for the entire band manifold as in the case of Pb. This time the DFT band structure is calculated at the LDA level over a double-zeta polarized (DZP) [Siesta]{} basis set, comprising 13 atomic orbitals per unit cell. In contrast, the MLWFs are constructed only from the first four bands, which are well isolated in energy from the rest and again describe the $sp$ bands with $\sigma$ and $\pi$ symmetry. Since the SO interaction in carbon is small (the band split is of the order of a few meV) it is impossible to visualize the effects of the SO interaction in a standard band plot like that in Fig. \[fig:Pbbands\]. Hence, in Fig. \[fig:C\_BS\] we plot the difference between the band structure calculated in the presence and in the absence of SO coupling. In particular we compare the bands calculated with SO-DFT by [Siesta]{} (left-hand side panels in Fig. \[fig:C\_BS\]) with those obtained with the MLWFs scheme described here (right-hand side panels in Fig. \[fig:C\_BS\]). In the figure we have labelled the bands in order of increasing energy, neglecting the spin degeneracy. Thus, for instance, the $\psi_1$ and $\psi_2$ bands correspond to the two lowest $\sigma$ spin sub-bands (note that the band structure of the linear carbon chain is qualitatively identical to that of the Pb one and we can use Fig. \[fig:Pbbands\] to identify the various bands).
We note that the lowest $\sigma$ bands, defined as $\psi_1$ and $\psi_2$, do not split at all due to the SO interaction, exactly as in the case of Pb. This contrasts with the behaviour of both the $\pi$ ($\psi_3$ through $\psi_6$) and $\sigma^*$ ($\psi_7$ and $\psi_8$) bands, which instead are modified by the SO interaction. Notably the change in energy of the eigenvalues is never larger than 8 meV and it is perfectly reproduced by our MLWFs representation. This demonstrates that truncating the bands selected for constructing the MLWFs is a possible procedure for materials where SO coupling is weak. However, we should note that the truncation still needs to be carefully chosen. Here for instance we have considered all the 2$s$ and 2$p$ bands and neglected those with either higher principal quantum number (e.g. 3$s$ and 3$p$) or higher angular momentum (e.g. bands with $d$ symmetry originating from the $p$-polarized [Siesta]{} basis), which appear at much higher energies. Truncations where one considers only a particular orbital of a given shell (say the $p_z$ orbital in an $np$ shell) need to be carefully assessed, since it is unlikely that a clear energy separation between the bands takes place.
\[fig:C\_BS\]
Methane Chain
-------------
As a first basic prototype of 1D organic molecular crystal we perform calculations for a periodic chain of methane molecules. We use a double-zeta polarized basis set and a LDA-relaxed unit cell length of 3.45 Å (the cell contains only one molecule). Similarly to the previous case, the MLWFs are constructed over only the lowest 4 bands (8 when considering the spin degeneracy). When compared to the bands of the carbon chain, those of methane are much narrower. This is expected, since the bonding between the different molecules is small. In Fig. \[fig:Chain\_methane\] we plot the difference between the eigenvalues (1D band structure) calculated with, $E_\mathrm{SO}$, and without, $E_\mathrm{NSO}$, including SO interaction.
When SO interaction is included the spin-degeneracy is broken and one has now eight bands. These are labeled as $\psi_m$ in Fig. \[fig:Chain\_methane\] in increasing energy order. Again we find no SO split for the lowermost band and then a split, which is significantly smaller than that found in the case of the C chain. This is likely to originate from the crystal field of the C atoms in CH$_4$, which is different from that in the C chain (the C-C distance is different and there are additional C-H bonds). Again, as in the previous case, we find that our MLWFs procedure perfectly reproduces the SO-DFT band structure, indicating that in this case of weak SO interaction band truncation does not introduce any significant error.
![(Color on line) Difference, $E_\mathrm{SO}-E_\mathrm{NSO}$, between the band structure of a chain of methane molecules calculated with, $E_\mathrm{SO}$, and without, $E_\mathrm{NSO}$, considering SO interaction. The bands are labelled in increasing energy order without taking into account spin degeneracy. The left-hand side panels show results for the SO-DFT calculations performed with [Siesta]{}, while the right-hand side ones, those obtained from the MLWFs. The inset shows an isovalue plot of one of the four MLWFs with the red and blue surfaces denoting positive and negative isovalues, respectively. All the MLWFs have similar structure and they resemble those of the isolated methane molecule because of the small intermolecular chemical bonding owing to the large separation. []{data-label="fig:Chain_methane"}](Fig5){width="50.00000%"}
Triarylamine Chain
------------------
Finally we perform calculations for a real system, namely for triarylamine-based molecular nanowires. These can be experimentally grown through a photo-self-assembly process from the liquid phase [@ANIE:ANIE201001833], and have been the subject of numerous experimental and theoretical studies [@B908802D; @Vina]. In general, triarylamines can be used as materials for organic light emitting diodes, while their nanowire form appears to possess good transport and spin properties, making them a good platform for organic spintronics [@Akin]. Triarylamine-based molecular nanowires self-assemble only when particular radicals are attached to the main triarylamine backbone, and here we consider the case of C$_8$H$_{17}$, H and Cl radicals, corresponding to the precursor [**1**]{} of Ref. \[\] (see upper panel in Fig. \[fig:triarylamine\_structure\]). The nanowire then arranges itself in such a way as to have the central N atoms aligned along the wire axis (see Fig. \[fig:triarylamine\_structure\]).
![(Colour on line) Structure of the triarylamine molecule (upper picture) and of the triarylamine-based nanowire investigated here. The radicals associated to the triarylamine derivative are C$_8$H$_{17}$, H and Cl, respectively. Colour code: C=yellow, H=light blue, O=red, N=grey, Cl=green.[]{data-label="fig:triarylamine_structure"}](Fig6a "fig:"){width="35.00000%"} ![(Colour on line) Structure of the triarylamine molecule (upper picture) and of the triarylamine-based nanowire investigated here. The radicals associated to the triarylamine derivative are C$_8$H$_{17}$, H and Cl, respectively. Colour code: C=yellow, H=light blue, O=red, N=grey, Cl=green.[]{data-label="fig:triarylamine_structure"}](Fig6b "fig:"){width="35.00000%"}
In general self-assembled triarylamine-based molecular nanowires appear slightly $p$-doped, so that charge transport takes place in the HOMO-derived band. This is well isolated from the rest of the valence manifold and has a bandwidth of about 100 meV (see Fig. \[fig:BS\_triarylamine\] for the band structure). This band is almost entirely localized on the $p_z$ orbital of the central N atoms ($p_z$ is along the wire axis), a feature that has allowed us to construct a $p_z$-$sp^2$ model with the spin-orbit strength extracted from that of an equivalent mono-atomic N chain. The model was then used to calculate the temperature-dependent spin-diffusion length of such nanowires [@C4CC01710B]. Here we wish to use our MLWFs method to extract the SO matrix elements of triarylamine-based molecular nanowires in their own chemical environment, i.e. without approximating the backbone with a N atomic chain.
![(Color on line) Band structure of the 1D triarylamine-based nanowire constructed with the precursor [**1**]{} of Ref. \[\]. This is plotted over the 1D Brillouin zone (Z=$\pi/a$ with $a$ the lattice parameter). The Fermi level is marked with a dashed black line and it is placed just above the HOMO-derived valence band (in red). The lower panel is a magnification of the valence band. Note the bandwidth of about 100 meV and the fact that the band has a cosine shape, fingerprint of a single-orbital nearest-neighbour tight-binding-like interaction. Only the HOMO band is considered when constructing the MLWFs.[]{data-label="fig:BS_triarylamine"}](Fig7.jpg){width="30.00000%"}
\[fig:triarylamine\_BS\]
For this system we use a 1D lattice with LDA-optimized lattice spacing of 4.8 Å and run the DFT calculations with double-zeta polarized basis and the LDA functional. The MLWFs are constructed by using only the HOMO-derived valence band, i.e. we have a single spin-degenerate Wannier orbital. We can then drop the band index and write the SO matrix elements as $$\begin{gathered}
\label{equ82}
\begin{split}
& \bra{w_{\mathbf{0}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{\mathbf{R}}^{s_2}}\\
&=\frac{V}{(2\pi)^3}\int d\mathbf{k}\, U^*(\mathbf{k})U(\mathbf{k})e^{-i\mathbf{k}\cdot\mathbf{R}}\bra{\psi_{\mathbf{k}}^{s_1}}
\mathbf{V}_\mathrm{SO}\ket{\psi_{\mathbf{k}}^{s_2}} \\
& =\frac{V}{(2\pi)^3}\int d\mathbf{k}\, e^{-i\mathbf{k}\cdot\mathbf{R}}\bra{\psi_{\mathbf{k}}^{s_1}}
\mathbf{V}_\mathrm{SO}\ket{\psi_{\mathbf{k}}^{s_2}}\:,
\end{split}\end{gathered}$$ or in a discrete representation of the reciprocal space $$\begin{gathered}
\label{equ23}
\begin{split}
\bra{w_{\mathbf{0}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{\mathbf{R}}^{s_2}}
&=\frac{1}{N}\sum_{\mathbf{k}}e^{-i\mathbf{k}\cdot\mathbf{R}}\bra{\psi_{\mathbf{k}}^{s_1}}
\mathbf{V}_\mathrm{SO}\ket{\psi_{\mathbf{k}}^{s_2}} \\
\end{split}\end{gathered}$$ where the second equality comes from the unitarity of the gauge transformation, $U(\mathbf{k})$.
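The gauge independence of this single-band transform is easy to demonstrate with a small numpy sketch (the grid size, lattice constant and test function standing in for $\bra{\psi_{\mathbf{k}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{\psi_{\mathbf{k}}^{s_2}}$ are arbitrary assumptions, not data from the calculation):

```python
import numpy as np

N, a = 64, 1.0                          # k-points and lattice constant (model)
k = 2*np.pi*np.fft.fftfreq(N, d=a)      # uniform 1D Brillouin-zone grid
R = a*np.arange(-4, 5)                  # a few lattice vectors

# Assumed smooth test function standing in for the Bloch SO matrix element.
Vk = 0.1*np.sin(k*a) + 0.02j*(1 - np.cos(k*a))

# Arbitrary single-band gauge phases U(k): |U(k)| = 1, so U*(k)U(k) = 1
# and the gauge drops out of the same-band transform.
rng = np.random.default_rng(0)
U = np.exp(2j*np.pi*rng.random(N))

W_gauge = np.array([np.mean(np.conj(U)*U*np.exp(-1j*k*Rn)*Vk) for Rn in R])
W_plain = np.array([np.mean(np.exp(-1j*k*Rn)*Vk) for Rn in R])

assert np.allclose(W_gauge, W_plain)    # unitarity of the 1x1 gauge matrix
```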
In Fig. \[fig:triarylamine\_Bandstructure\] we plot the difference between the band structure computed by including SO interaction and those calculated without. Notably our MLWFs band structure is almost identical to that computed directly with SO-DFT, again demonstrating both the accuracy of our method and the appropriateness of the drastic band truncation used here.
![(Color on line) Plot of ($\rm{E_{SO}}-\rm{E_{NSO}}$), in arbitrary units, as a function of **k** over the Brillouin zone for the highest occupied band of a 1D chain of triarylamine derivatives. The blue and the red points correspond to calculations with [Siesta]{} and [Wannier90]{} respectively.[]{data-label="fig:triarylamine_Bandstructure"}](Fig8){width="48.00000%"}
\[fig:BSdiff\_triarylamine\]
In this particular case the SO band split is maximized half-way between the $\Gamma$ point and the edge of the 1D Brillouin zone, where it takes a value of approximately 80 $\mu$eV. Clearly such a split is orders of magnitude too small to be resolved by a direct construction of the MLWFs from the SO-split band structure. Note also that the SO split of the valence band calculated here is approximately a factor of ten smaller than that estimated previously for a N atomic chain [@C4CC01710B], indicating the importance of the details of the chemical environment in these calculations.
Finally we take a closer look at the calculated SO matrix elements. As mentioned earlier, in the [Siesta]{} on-site approximation [@0953-8984-18-34-012] only the matrix elements calculated over orbitals centred on the same atom do not vanish. As a consequence the components $\bra{w_{\mathbf{R}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{\mathbf{R'}}^{s_2}}$ drop to zero as $|\mathbf{R}-\mathbf{R'}|$ gets large. This can be clearly appreciated in Fig. \[fig:triarylamine\_spinorbit\](a) and Fig. \[fig:triarylamine\_spinorbit\](b), where we plot the SO matrix elements for same and different spins, respectively.
From Fig. \[fig:triarylamine\_spinorbit\](a) we can observe that $\Re \bra{w_{\mathbf{0}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{\mathbf{R}}^{s_1}}$ vanishes for all **R**. This can be understood in the following way. In general any expectation value of $\mathbf{V}_{\rm{SO}}$, $\bra{\psi_{\mathbf{k}}^{s}}\mathbf{V}_\mathrm{SO}\ket{\psi^{s}_{\mathbf{k}}}$, has to be real. It is also antisymmetric with respect to **k**, i.e. we have $\bra{\psi_{\mathbf{0}+\mathbf{k}}^{s}}\mathbf{V}_{\rm{SO}}\ket{\psi^{s}_{\mathbf{0}+\mathbf{k}}}=-\bra{\psi_{\mathbf{0}-\mathbf{k}}^{s}}\mathbf{V}_{\rm{SO}}\ket{\psi^{s}_{\mathbf{0}-\mathbf{k}}}$, where $\mathbf{k}=\mathbf{0}$ denotes the $\Gamma$ point of the Brillouin zone. Additionally, $e^{i\mathbf{k}\cdot\mathbf{R}}$ satisfies the relation $e^{i(\mathbf{0}+\mathbf{k})\cdot\mathbf{R}}=\left[e^{i(\mathbf{0}-\mathbf{k})\cdot\mathbf{R}}\right]^*$. Hence, by performing the **k**-sum over the first Brillouin zone we can write $$\begin{gathered}
\label{equ24}
\begin{split}
\Re \bra{w_{\mathbf{0}}^{s_1}}\mathbf{V}_{\rm{SO}}\ket{w_{\mathbf{R}}^{s_1}}=\Re \sum_{\mathbf{k}} e^{-i\mathbf{k}\cdot\mathbf{R}}\bra{\psi_{\mathbf{k}}^{s_1}}\mathbf{V}_{\rm{SO}}\ket{\psi_{\mathbf{k}}^{s_1}}=0\:,
\end{split}\end{gathered}$$ where $\bra{w_{\mathbf{0}}^{s_1}}\mathbf{V}_{\rm{SO}}\ket{w_{\mathbf{0}}^{s_1}}$ is the expectation value of $\mathbf{V}_{\rm{SO}}$ and must be real. This implies $$\begin{gathered}
\label{equ25}
\begin{split}
\bra{w_{\mathbf{0}}^{s_1}}\mathbf{V}_{\rm{SO}}\ket{w_{\mathbf{0}}^{s_1}}=0\:.
\end{split}\end{gathered}$$
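This cancellation can be reproduced numerically: for a same-spin Bloch element that is real and odd in $\mathbf{k}$, the discrete transform yields purely imaginary inter-site elements and a vanishing on-site element. A sketch with an assumed $\lambda\sin(ka)$ model (the grid and coupling scale are illustrative choices, not values from the paper):

```python
import numpy as np

N, a, lam = 32, 1.0, 0.05               # grid, lattice constant, assumed scale
k = 2*np.pi*np.fft.fftfreq(N, d=a)

# Same-spin Bloch element: real (an expectation value) and odd in k.
Vk = lam*np.sin(k*a)

n = np.arange(-3, 4)                    # lattice vectors R = n*a
W = np.array([np.mean(np.exp(-1j*k*m*a)*Vk) for m in n])

assert np.allclose(W.real, 0)           # Re <w_0|V_SO|w_R> = 0 for all R
assert np.isclose(abs(W[n == 0][0]), 0) # on-site element vanishes
# for the sin(ka) model only nearest neighbours survive: -/+ i*lam/2
assert np.allclose(W[n == 1][0], -0.5j*lam)
assert np.allclose(W[n == -1][0], 0.5j*lam)
```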
We can also see from Fig. \[fig:triarylamine\_spinorbit\](b) that for triarylamine the matrix elements $\bra{w_{\mathbf{R}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{\mathbf{R}}^{s_2}}$ are almost zero for $s_1 \neq s_2$. This follows directly from Eq. (\[equ20\]). In fact in the particular case of triarylamine nanowires the Wannier functions are constructed from one band only. As such, in order to have a non-zero matrix element, $\bra{w_{\mathbf{R}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{\mathbf{R}}^{s_2}}$, we must have non-zero values for $\bra{w_{\mathbf{R}}}\hat{L}_{\pm}\ket{w_{\mathbf{R}}}$. Therefore, the band under consideration must contain an appreciable mix of components of both the $\ket{l,p}$ and $\ket{l,p+1}$ complex spherical harmonics for some $l$ and $p$. As mentioned earlier, the triarylamine HOMO band is composed mostly of $p_z$ N orbitals. Hence, it has to be expected that the $\bra{w_{\mathbf{R}}^{s_1}}\mathbf{V}_\mathrm{SO}\ket{w_{\mathbf{R}}^{s_2}}$ matrix elements are small.
Conclusion
==========
We have presented an accurate method for obtaining the SO matrix elements between the MLWFs constructed in the absence of SO coupling. Our procedure, implemented within the atomic-orbital-based DFT code [Siesta]{}, allows one to avoid the construction of the Wannier functions over the SO-split band structure. In some cases, in particular for organic crystals, such splits are tiny and a direct construction is numerically impossible. The method was then put to the test for a number of material systems, ranging from isolated molecules to atomic nanowires and 1D molecular crystals. When the entire band manifold is used for constructing the MLWFs the mapping between Bloch and Wannier orbitals is exact and the method can be used for both light and heavy elements. In contrast, for weak spin-orbit interaction one can construct the MLWFs on a subset of the states in the band structure without any loss of accuracy. As such, our scheme appears as an important tool for constructing effective spin Hamiltonians for organic materials, to be used as input in a multiscale approach to their thermodynamic properties.
Acknowledgements {#ackowledgement .unnumbered}
==============
This work is supported by the European Research Council, Quest project. Computational resources have been provided by the supercomputer facilities at the Trinity Center for High Performance Computing (TCHPC) and at the Irish Center for High End Computing (ICHEC). Additionally, the authors would like to thank Ivan Rungger and Carlo Motta for helpful discussions and Akinlolu Akande for providing the structure of the triarylamine-based nanowire.
[42]{}
doi:10.1126/science.1065389
doi:10.1073/pnas.1302494110
doi:10.1063/1.881446
doi:10.1166/jctn.2006.003
doi:10.1016/S0038-1098(02)00090-X
doi:10.1039/C1CS15047B
doi:10.1103/PhysRevLett.39.1098
doi:10.1016/0304-8853(91)90311-W
doi:10.1103/RevModPhys.76.323
doi:10.1103/PhysRevB.81.153202
doi:10.1103/PhysRevB.75.245324
doi:10.1103/PhysRevB.78.115203
https://books.google.ie/books?id=BT5RAAAAMAAJ
doi:10.1021/ct500390a
doi:10.1103/PhysRev.52.191
doi:10.1103/RevModPhys.34.645
doi:10.1103/PhysRevB.56.12847
doi:10.1103/RevModPhys.84.1419
http://stacks.iop.org/0953-8984/14/i=11/a=302
doi:10.1016/j.cpc.2007.11.016
doi:10.1039/B908802D
http://stacks.iop.org/0953-8984/18/i=34/a=012
http://stacks.iop.org/0953-8984/5/i=8/a=009
doi:10.1002/anie.201001833
doi:10.1039/C4CC01710B
[^1]: The correctness of the elements $U_{pm}^{s}(\mathbf{k})$ and $e^{i\mathbf{k}\cdot\mathbf{R}}$ is easily verified by ensuring that the following relation is satisfied $$\begin{gathered}
\label{equ_footnote}
\begin{split}
\braket{w_{m\mathbf{R}_1}|w_{n\mathbf{R}_2}}&=\frac{1}{N}\sum_p \int_\mathrm{FBZ}
d\mathbf{k} \braket{w_{m\mathbf{R}_1}|\psi_{p\mathbf{k}}}\braket{\psi_{p\mathbf{k}}|w_{n\mathbf{R}_2}}=\\
&=\frac{1}{N}\sum_p \int_\mathrm{FBZ} d\mathbf{k}U^*_{pm}(\mathbf{k})U_{pn}(\mathbf{k})e^{i\mathbf{k}\cdot
(\mathbf{R}_1-\mathbf{R}_2)}=\\
&=\delta_{m,n}\delta_{\mathbf{R}_1,\mathbf{R}_2} \:.
\end{split}\end{gathered}$$
[^2]: The real spherical harmonics are constructed from the complex ones, $\ket{l,m}$, as $\ket{l,M}=\frac{1}{\sqrt2}[\ket{l,m}+(-1)^m\ket{l,-m}]$ and $\ket{l,-M}=\frac{1}{i\sqrt2}[\ket{l,m}-(-1)^m\ket{l,-m}]$. For $M=0$ the real and complex spherical harmonics coincide.
[ **Derivation of Relativistic Yakubovsky Equations\
under Poincaré Invariance\
**]{}
Hiroyuki Kamada ^$\star$^
Department of Physics, Faculty of Engineering, Kyushu Institute of Technology,\
Kitakyushu 804-8550, Japan\
${}^\star$ [[email protected]]{}
Abstract {#abstract .unnumbered}
========
[**Relativistic Faddeev-Yakubovsky four-nucleon scattering equations are derived, including a 3-body force. We present these equations in the momentum-space representation. The quadratic integral equations, solved by an iteration method in order to obtain the boosted potentials and the boosted 3-body force, are demonstrated.** ]{}
------------------------------------------------------------------------
------------------------------------------------------------------------
Introduction {#sec:intro}
============
At high energies one could expect deficiencies of the nonrelativistic Faddeev approach [@Faddeev:1960su; @Gloeckle:1995jg] in the three-nucleon system. We have been constructing a relativistic framework in the form of relativistic Faddeev equations [@Glockle:1986zz; @Keister:1991sb; @Kamada:1999wy; @Kamada:1999fz; @Kamada:2007ms; @Witala:2011yq; @Polyzou:2010kx; @Kamada:2014dba] according to the Bakamjian-Thomas theory [@Bakamjian:1953kh]. Using not only realistic nonrelativistic nucleon-nucleon (NN) potentials but also the Kharkov relativistic NN potential [@Arslanaliev:2018wkl], we obtained the triton wave function by solving the relativistic Faddeev equation [@Kamada:2003fh; @Kamada:2008xc; @Kamada:2017mmk]. However, in the three-body scattering states the relativistic effects appear to be generally small [@Sekiguchi:2005vq; @Maeda:2007zza] and insufficient to significantly improve the description of the data. For sensitive observables such as the $A_y$ puzzle the relativistic effect has certainly surfaced [@Witala:2008va], but the results obtained were in the direction of deterioration.
As the number of particles increases, subtle relativistic effects will accumulate and surface, so here we would like to rewrite the Yakubovsky equations [@Yakubovsky:1966ue], which solve the four-nucleon system exactly, into relativistic equations as well.
In Section 1 we organize the relativistic momenta and their Jacobian. Section 2 deals with rewriting the interaction into a relativistic potential by a Lorentz boost. The boosted potential satisfies the relativistic Lippmann-Schwinger (LS) equation, as shown in Section 3. Section 4 looks back on how the 3-body Faddeev equation was relativistically transformed using the boosted potential. In Section 5 we remodel the four-body Yakubovsky equations and derive the relativistic Yakubovsky equations. In Section 6 we derive the relativistic equations involving a 3-body force. A summary is given in Section 7.
2-body center of mass system {#2-body}
============================
In the 2-body system the rest masses of the particles are given as $m_i$ ($i = 1, 2$). The four-dimensional intrinsic momenta ${\mathfrak}{p}_i$ are $$\begin{aligned}
{\mathfrak}{p}_i = (p^\mu_i)=(p_i^0, \vec p_i)=(E_i(p_i), p_{i}^x, p_{i}^y, p_{i}^z)\end{aligned}$$ with $$\begin{aligned}
E_i(p_i) =\sqrt{m_i^2 + p_i^2}=\sqrt{m_i^2+\vec p_i \cdot \vec p_i}.\end{aligned}$$ Under a Lorentz transformation $L=(L_\mu ^\nu)$ with boost velocity $\vec \upsilon$, the transformed four-momenta $\overline{ {\mathfrak}{p}_i}$ are obtained as $$\begin{aligned}
\overline{ {\mathfrak}{p}_i }=L {\mathfrak}{p}_i =(\overline{ E_i} ,~\overline{ \vec p_i})=
(\gamma (E_i -\vec p_i\cdot \vec \upsilon),~\vec p_i +(\gamma-1)(\vec p_i \cdot \hat \upsilon ) \hat \upsilon -\gamma E_i \vec \upsilon ).\end{aligned}$$ with $$\begin{aligned}
\gamma \equiv {1\over \sqrt{1-\upsilon^2}}\end{aligned}$$ where $\hat \upsilon$ is the unit vector along $\vec \upsilon$ and the speed of light is set to 1.
The relativistic total momentum ${\mathfrak}{P}_{12}$ and the relative momentum $\vec k_{12}$ are given [@Fong:1986xx] as $$\begin{aligned}
{\mathfrak}{P}_{12} \equiv {\mathfrak}{p}_1 + {\mathfrak}{p}_2 =(E_1+E_2, \vec p_1+\vec p_2)=(E_{12},\vec P_{12}),\end{aligned}$$ $$\begin{aligned}
\vec k_{12} \equiv {{\epsilon _2\vec p_1 -\epsilon_1 \vec p_2} \over {\epsilon_1+\epsilon_2} },
\label{veck}\end{aligned}$$ with $$\begin{aligned}
\epsilon_i\equiv {1\over 2} (E_i +o_i ),
\label{epsilon}\end{aligned}$$ and $$\begin{aligned}
o_i\equiv \sqrt{m_i^2+k_{12}^2}
\label{omega}\end{aligned}$$ where we should note that Eqs. (\[veck\])-(\[omega\]) are coupled in $k_{12}$. The momentum $\vec k_{12}$ is regarded as the instantaneous relative momentum in the center-of-mass system.
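In practice the coupled system can be solved by a simple fixed-point iteration. The following numpy sketch (the masses and momenta are arbitrary test values, not from the paper) iterates Eqs. (\[veck\])-(\[omega\]) and checks the result against an explicit Lorentz boost to the pairwise c.m. frame:

```python
import numpy as np

def energy(m, p):
    return np.hypot(m, np.linalg.norm(p))      # sqrt(m^2 + |p|^2)

def boost(E, p, v):
    """Lorentz boost of the four-momentum (E, p) with velocity v (c = 1)."""
    g = 1.0/np.sqrt(1.0 - v @ v)
    vhat = v/np.linalg.norm(v)
    return g*(E - p @ v), p + (g - 1)*(p @ vhat)*vhat - g*E*v

def k_rel(m1, m2, p1, p2, n_iter=50):
    """Fixed-point iteration of the coupled Eqs. (veck)-(omega) for k12."""
    E1, E2 = energy(m1, p1), energy(m2, p2)
    k = 0.5*(p1 - p2)                          # nonrelativistic starting value
    for _ in range(n_iter):
        o1, o2 = energy(m1, k), energy(m2, k)
        e1, e2 = 0.5*(E1 + o1), 0.5*(E2 + o2)
        k = (e2*p1 - e1*p2)/(e1 + e2)
    return k

m1, m2 = 0.938, 0.140                          # illustrative masses (GeV)
p1 = np.array([0.3, 0.1, -0.2])
p2 = np.array([-0.1, 0.4, 0.0])
k12 = k_rel(m1, m2, p1, p2)

# Boosting both particles to the pairwise c.m. frame must give momenta +-k12.
u = (p1 + p2)/(energy(m1, p1) + energy(m2, p2))
_, q1 = boost(energy(m1, p1), p1, u)
_, q2 = boost(energy(m2, p2), p2, u)
assert np.allclose(q1, k12, atol=1e-10)
assert np.allclose(q2, -k12, atol=1e-10)
```

The iteration converges very quickly here because the map is strongly contracting for momenta well below the particle masses.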
We now move to the center-of-mass system, for which the boost velocity $\vec u$ is chosen instead of $\vec \upsilon$: $$\begin{aligned}
\vec u \equiv { {\vec p_1 +\vec p_2 } \over {E_1+E_2}}.\end{aligned}$$ We have $$\begin{aligned}
L {\mathfrak}{p}_1 = (o_1, \vec k_{12})\end{aligned}$$ and $$\begin{aligned}
L {\mathfrak}{p}_2 = (o_2, -\vec k_{12}).\end{aligned}$$ Solving these coupled equations for $\vec k_{12}$, we obtain $$\begin{aligned}
\vec k_{12}= {1\over 2} \Biggl( (\vec p_1-\vec p_2)
-(\vec p_1+\vec p_2) \bigl(
{E_1 -E_2 +{m_1^2-m_2^2 \over \sqrt{(E_1+E_2)^2-(\vec p_1+\vec p_2)^2} } \over
E_1+E_2+\sqrt{(E_1+E_2)^2-(\vec p_1+\vec p_2)^2} }\bigr) \Biggr). \end{aligned}$$ The Jacobian ${\cal J}_{12}$ of this transformation is $$\begin{aligned}
{\cal J}_{12} \equiv { \partial (\vec p_1 , \vec p_2) \over \partial (\vec k , \vec P) }
={E_1 E_2 \over E_1+E_2 }{o_1+o_2 \over o_1 o_2 }.\end{aligned}$$ If we take the case of equal mass ($m_1=m_2$), we have $o_1 =o_2$, $$\begin{aligned}
\vec k _{12}|_{m_1=m_2} = {1\over 2} \Biggl( (\vec p_1-\vec p_2)
-(\vec p_1+\vec p_2) \bigl(
{ E_1 -E_2
\over
E_1+E_2+\sqrt{(E_1+E_2)^2-(\vec p_1+\vec p_2)^2} }\bigr) \Biggr)
\label{G3.6}\end{aligned}$$ and $$\begin{aligned}
{\cal J}_{12}|_{m_1=m_2} ={E_1 E_2 \over E_1+E_2 }{4 \over \sqrt{(E_1+E_2)^2-(\vec p_1+\vec p_2)^2} } .
\label{G3.16}\end{aligned}$$ Eqs. (\[G3.6\]) and (\[G3.16\]) correspond to Eqs. (3.6) and (3.16) of [@Glockle:1986zz], respectively.
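A quick numerical consistency check of the equal-mass formulas (with illustrative momenta, not values from the paper) confirms that the closed-form $\vec k_{12}$ of Eq. (\[G3.6\]) satisfies $2\sqrt{m^2+k_{12}^2}=\sqrt{s}$, which is exactly the condition under which the general Jacobian reduces to Eq. (\[G3.16\]):

```python
import numpy as np

def energy(m, p):
    return np.hypot(m, np.linalg.norm(p))

m = 0.938                                   # equal masses (illustrative, GeV)
p1 = np.array([0.2, -0.1, 0.3])
p2 = np.array([0.1, 0.25, -0.05])
E1, E2 = energy(m, p1), energy(m, p2)
P = p1 + p2
s = (E1 + E2)**2 - P @ P                    # free invariant mass squared

# Closed-form relative momentum, Eq. (G3.6), equal masses
k = 0.5*(p1 - p2 - P*(E1 - E2)/(E1 + E2 + np.sqrt(s)))
o = energy(m, k)                            # omega(k) = sqrt(m^2 + k^2)
assert np.isclose(2*o, np.sqrt(s))          # 2 omega = sqrt(s) in the c.m.

# The general Jacobian then reduces to the equal-mass form, Eq. (G3.16):
J_general = E1*E2/(E1 + E2)*(o + o)/(o*o)
J_equal = E1*E2/(E1 + E2)*4.0/np.sqrt(s)
assert np.isclose(J_general, J_equal)
```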
Boosted potential {#Boost_potential}
=================
Let us consider two equal-mass particles ($m=m_1=m_2$), labeled 1 and 2, in the 2-body center-of-mass system with interaction $v_{12}$. The invariant mass $\sqrt{S_{12}}$ of the system is $$\begin{aligned}
\sqrt{S_{12}}=2\sqrt{m^2+k_{12}^2}+v_{12},\end{aligned}$$ where $k_{12}$ is the relative momentum between particles 1 and 2.
On the other hand, once we leave the 2-body c.m. system, the total momentum $\vec P_{12}\equiv \vec p_1+\vec p_2$ is nonzero. The invariant mass $\sqrt{S_{12}^{\rm boost} }$ is given as $$\begin{aligned}
\sqrt{S_{12}^{\rm boost} }= \sqrt {S_{12}+ P_{12}^2}=\sqrt{(2\sqrt{m^2+k_{12}^2}+v_{12})^2 +P_{12}^2}.\end{aligned}$$ Now, one introduces the so-called boosted potential $V_{12}$ as $$\begin{aligned}
V_{12}(P_{12}) &&\equiv \sqrt{S_{12}^{\rm boost}}- \sqrt{(2\sqrt{m^2+k_{12}^2}+ 0 )^2 +P_{12}^2} \cr && = \sqrt{(2\sqrt{m^2+k_{12}^2}+v_{12})^2 +P_{12}^2} -\sqrt{4(m^2+k_{12}^2)+P_{12}^2}.
\label{V}\end{aligned}$$ After quantization ($k_{12} \to \hat k_{12}, v \to \hat v $ and $V \to \hat V$) the boosted potential operator $\hat V_{12}(P_{12})$ is still diagonal in the boosting momentum $P_{12}$. We have the boosted Schrödinger equation for the wave function $\phi_{12}$, $$\begin{aligned}
\big( \sqrt{4(m^2+\hat k_{12}^2)+P_{12}^2} + \hat V_{12}(P_{12}) \big) \phi_{12}
= \sqrt{M ^2 + P_{12}^2} \phi_{12}\end{aligned}$$ and an unboosted one, $$\begin{aligned}
\big(2\sqrt{m^2+\hat k_{12}^2} + \hat v_{12} \big) \phi_{12} = M \phi_{12} ,\end{aligned}$$ where $M$ is an eigenvalue of the mass operator $\sqrt{\hat S_{12}}$.
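Eq. (\[V\]) can be evaluated directly for c-number arguments. The short sketch below (units with $m=1$; the values of $k_{12}$ and $v_{12}$ are illustrative) checks two expected properties: at $P_{12}=0$ the boosted potential reduces to $v_{12}$ itself, and for $P_{12}>0$ the interaction is kinematically weakened:

```python
import math

def boosted_potential(v12, k12, P12, m=1.0):
    """Boosted two-body potential V12(P12) of Eq. (V), scalar (c-number) version."""
    w = 2.0 * math.sqrt(m**2 + k12**2)   # free two-body mass 2*sqrt(m^2+k12^2)
    return math.sqrt((w + v12)**2 + P12**2) - math.sqrt(w**2 + P12**2)

v12, k12 = -0.3, 0.5
V0 = boosted_potential(v12, k12, 0.0)   # reduces to v12 in the 2-body c.m. frame
V2 = boosted_potential(v12, k12, 2.0)   # smaller in magnitude than v12
```

The monotonic weakening with $P_{12}$ follows from the difference of the two square roots flattening as the pair momentum grows.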
Relativistic Faddeev Equations {#Relativistic_Faddeev}
==============================
For the 3-body system, we add a third equal-mass particle. There are three pairwise subsystems, denoted (12), (23), and (31). For each pair (ij) we can define the boosted potential as in Eq. (\[V\]) through the boosting momentum $\vec P_{ij}$. $$\begin{aligned}
V_{ij}(P_{ij}) &&\equiv \sqrt{(2\sqrt{m^2+k_{ij}^2}+v_{ij})^2 +P_{ij}^2} -\sqrt{4(m^2+k_{ij}^2)+P_{ij}^2}.
\label{V_ij}\end{aligned}$$ We now choose the boosting momentum $\vec P_{ij}= - \vec p_k$ ($i\ne k\ne j$), which corresponds to the 3-body c.m. system: $$\begin{aligned}
\vec P_{ij}=\vec p_i+\vec p_j=-\vec p_k,~~~~ \vec p_1+\vec p_2+\vec p_3=0.\end{aligned}$$ The following 3-body invariant mass $\sqrt{S_{123}}$ then possesses a natural symmetry: $$\begin{aligned}
\sqrt{S_{123}}&&= \sqrt{m^2+p_1^2}+\sqrt{m^2+p_2^2}+\sqrt{m^2+p_3^2}
+V_{12}(P_{12})+V_{23}(P_{23})+V_{31}(P_{31}) \cr
&&=\sqrt{m^2+p_1^2}+\sqrt{m^2+p_2^2}+\sqrt{m^2+p_3^2}
+V_{12}(p_3)+V_{23}(p_1)+V_{31}(p_2) \cr
&&=\sqrt{(2\sqrt{m^2+k_{12}^2}+v_{12})^2 +p_3^2} +\sqrt{m^2+p_3^2}
+V_{23}(p_1)+V_{31}(p_2) \cr
&&=\sqrt{(2\sqrt{m^2+k_{23}^2}+v_{23})^2 +p_1^2} +\sqrt{m^2+p_1^2}
+V_{31}(p_2)+V_{12}(p_3) \cr
&&=\sqrt{(2\sqrt{m^2+k_{31}^2}+v_{31})^2 +p_2^2} +\sqrt{m^2+p_2^2}
+V_{12}(p_3)+V_{23}(p_1)
\label{3bodyeq}\end{aligned}$$ This symmetry helps us build the relativistic Faddeev equations. After quantization we write the relativistic Faddeev equation for the bound state as $$\begin{aligned}
\phi_{ij} = \hat G_0~\hat t_{ij}~(\phi_{jk} +\phi_{ki})
\label{Faddeev}\end{aligned}$$ where $\phi_{ij}$ is the Faddeev component for the subsystem (ij) $$\begin{aligned}
\phi_{ij} \equiv \hat G_0 \hat V_{ij} \Psi
\label{FC1}\end{aligned}$$ with the total wave function $\Psi$ $$\begin{aligned}
\Psi=\phi_{12}+\phi_{23}+\phi_{31}\end{aligned}$$ and $\hat G_0$ is the three-body Green’s function, $$\begin{aligned}
\hat G_0= {1 \over M_{123} - \bigl( \sqrt{4 (m^2+\hat k_{ij}^2 ) +\hat p_k^2} +\sqrt{m^2+\hat p_k^2} \bigr) },\end{aligned}$$ where $M_{123}$ is the eigenvalue of the mass operator $\sqrt{\hat S_{123}}$, and $ \hat t_{ij}$ is the t-matrix of subsystem (ij), which satisfies the Lippmann-Schwinger (LS) equation $$\begin{aligned}
\hat t_{ij} = \hat V_{ij} + \hat V_{ij}~\hat G_0~\hat t_{ij}.\end{aligned}$$
In the case of a system with identical particles, the permutation operators built from transpositions ${\cal P}_{ij}$, interchanging particles $i$ and $j$, are used to express the sum of all two-body interactions, $$\begin{aligned}
V_{12}+V_{23}+V_{31}\equiv (1+{\cal P}_{12}{\cal P}_{23}+{\cal P}_{13}{\cal P}_{23}) V_{12} = (1+{\cal P}) V,\end{aligned}$$ where we have singled out the $(12)$ pair and denoted $V\equiv V_{12}$, $t \equiv t_{12}$ and $\phi_{12} \equiv \phi$ with permutation operator ${\cal P}\equiv {\cal P}_{12}{\cal P}_{23}+{\cal P}_{13}{\cal P}_{23} $. Eq. (\[Faddeev\]) then takes the simple form $$\begin{aligned}
\phi = \hat G_0~\hat t~{\cal P} \phi.
\label{Faddeev1}\end{aligned}$$
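Eq. (\[Faddeev1\]) is a homogeneous equation: a bound state exists at the mass for which the kernel $\hat G_0 \hat t {\cal P}$ has eigenvalue one. The sketch below illustrates this standard numerical search; the $3\times 3$ free spectrum and the attractive matrix standing in for $\hat t {\cal P}$ are invented purely for illustration, not a realistic interaction:

```python
import numpy as np

def eta(M, h0, tP):
    """Largest-modulus eigenvalue of the discretized kernel G0(M) t P."""
    G0 = np.diag(1.0 / (M - h0))   # free Green's function, diagonal in this toy basis
    return np.max(np.abs(np.linalg.eigvals(G0 @ tP)))

h0 = np.array([1.0, 2.0, 3.0])     # toy free spectrum (hypothetical values)
tP = -0.5 * np.ones((3, 3))        # toy attractive kernel (hypothetical)

# For M below the free spectrum, eta(M) grows monotonically from 0 towards
# infinity as M approaches min(h0); bisection locates the mass where eta = 1.
lo, hi = -50.0, min(h0) - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if eta(mid, h0, tP) < 1.0 else (lo, mid)
M_bound = 0.5 * (lo + hi)
```

In realistic calculations the same eigenvalue condition is imposed on the partial-wave discretized kernel, with the eigenvalue typically found by iteration rather than full diagonalization.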
Relativistic Yakubovsky Equations {#Relativistic_Yakubovsky}
=================================
For the 4-body system, we add a fourth equal-mass particle. There are six pairwise subsystems: (12), (23), and (31) as before, together with (14), (24), and (34).
We now choose the boosting momentum $\vec P_{ij}= - \vec p_k -\vec p_l$ ($k\ne i,j,l$, and $l\ne i,j,k$), which corresponds to the 4-body c.m. system: $$\begin{aligned}
\vec P_{ij}=\vec p_i+\vec p_j=-\vec p_k -\vec p_l=-\vec P_{kl},~~~~ \vec p_1+\vec p_2+\vec p_3+\vec p_4=0.\end{aligned}$$ The following 4-body invariant mass $\sqrt{S_{1234}}$ then possesses a natural symmetry: $$\begin{aligned}
\sqrt{S_{1234}}
&&= \sqrt{m^2+p_1^2}+\sqrt{m^2+p_2^2}+\sqrt{m^2+p_3^2}+\sqrt{m^2+p_4^2}\cr
&&+V_{12}(P_{34})+V_{23}(P_{14})+V_{31}(P_{24}) +V_{14}(P_{23})+V_{24}(P_{31})+V_{34}(P_{12}). \label{4bodyeq}\end{aligned}$$ The key point is that, in generating the boosted potential $V_{ij}$, the momentum $P_{kl}$ behaves as a parameter. In other words, the boosted potential operator $\hat V_{ij}$ is diagonal in the momentum $P_{kl}$.
Using the 3-body relative momentum $\vec q_k$ between the subsystem (ij) and the third particle, and the \[2+2\]-partition relative momentum $\vec s_l$ between the subsystems (ij) and (kl) (see Appendix \[appA\]), we rewrite the 4-body invariant mass $\sqrt{S_{1234}}$: $$\begin{aligned}
\sqrt{S_{1234}}
&&= \sqrt{\bigl(\sqrt{(2\sqrt{m^2+k_{12}^2})^2+q_3^2}+\sqrt{m^2+q_3^2} \bigr)^2+p_4^2}+\sqrt{m^2+p_4^2}\cr
&&+V_{12}^{[3+1]}(q_3;p_4)+V_{23}+V_{31}+V_{14}+V_{24}+V_{34} \cr
&&= \sqrt{\bigl(\sqrt{(2\sqrt{m^2+k_{12}^2}+v_{12})^2+q_3^2}+\sqrt{m^2+q_3^2} \bigr)^2+p_4^2}+\sqrt{m^2+p_4^2}\cr
&&~~~~~~~~~~~~~~~~~~~~~~~~~+V_{23}+V_{31}+V_{14}+V_{24}+V_{34} \cr
&&= \sqrt{(2\sqrt{m^2+k_{12}^2})^2+s_4^2}+\sqrt{(2\sqrt{ m^2+ k_{34}^2 } )^2 + s_4^2 }\cr
&&+V^{[2+2]}_{12}(s_4)+V_{23}+V_{31}+V_{14}+V_{24}+V_{34} \cr
&&= \sqrt{(2\sqrt{m^2+k_{12}^2}+v_{12})^2+s_4^2}+\sqrt{(2\sqrt{ m^2+k_{34}^2 } )^2 + s_4^2 }\cr
&&~~~~~~~~~~~~~~~~~~~~+V_{23}+V_{31}+V_{14}+V_{24}+V_{34}.
\label{4bodyeq2}\end{aligned}$$ where $q_3$ is the relative momentum between the subsystem (12) and the third particle, and $s_4$ is the relative momentum between the subsystems (12) and (34) (see Appendix \[appA\]).
The following equations, from (\[rF\]) to (\[T3T22\]), take the usual nonrelativistic Yakubovsky form, except for the relativistic Green’s function (\[Green4\]) and the boosted potential. Similarly to Eq. (\[FC1\]), the Faddeev component $\phi_{ij}^{(4)}$ is defined through the four-body total wave function $\Psi^{(4)}$ as $$\begin{aligned}
\phi_{ij}^{(4)} \equiv \hat G_0^{(4)} \hat V_{ij} \Psi^{(4)}
\label{rF}\end{aligned}$$ where the total wave function $\Psi^{(4)}$ consists of the six Faddeev components $\phi_{ij}^{(4)}$: $$\begin{aligned}
\Psi^{(4)} = \sum_{(ij)} \phi_{ij}^{(4)}.\end{aligned}$$
In the case of identical particles, using the permutation operators ${\cal P}$, ${\cal P}_{34}$, and $\tilde {\cal P}$, we have the Faddeev equations for the four-body system as $$\begin{aligned}
\phi^{(4)}=\phi_{12}^{(4)}=
\hat G_0^{(4)} ~ \hat t^{(4)} ({\cal P}-{\cal P}_{34}{\cal P} + \tilde {\cal P} ) \phi^{(4)}
\equiv (1-{\cal P}_{34}) \psi_1 + \psi_2
\label{YC}\end{aligned}$$ with $$\begin{aligned}
\tilde {\cal P} \equiv {\cal P}_{13} {\cal P}_{24},\end{aligned}$$ where we have again singled out the $(12)$ pair and denoted $V\equiv V_{12}$, $t ^{(4)} \equiv t_{12}^{(4)}$ and $\phi_{12} ^{(4)} \equiv \phi ^{(4)}$. Here $\hat t ^{(4)} $ also obeys the Lippmann-Schwinger equation, $$\begin{aligned}
\hat t^{(4)} = \hat V + \hat V~\hat G_0^{(4)}~\hat t^{(4)},\end{aligned}$$ and $\hat G_0^{(4)}$ is the four-body Green’s function, $$\begin{aligned}
\hat G_0^{(4)}= && {1 \over M_{1234} -
\Bigl(\sqrt{\bigl(\sqrt{(2\sqrt{m^2+\hat k_{12}^2})^2+\hat q_3^2}+\sqrt{m^2+\hat q_3^2} \bigr)^2+\hat p_4^2}+\sqrt{m^2+\hat p_4^2} \Bigr) }\cr
&&=
{1 \over M_{1234} -
\Bigl( \sqrt{(2\sqrt{m^2+\hat k_{12}^2})^2+\hat s_4^2}+\sqrt{(2\sqrt{ m^2+ \hat k_{34}^2 } )^2 + \hat s_4^2 } \Bigr) }
\label{Green4}\end{aligned}$$ The Yakubovsky components $\psi_1$ and $\psi_2$, which already appear in Eq. (\[YC\]), are defined as $$\begin{aligned}
\psi_1\equiv \hat G_0^{(4)} ~\hat t ^{(4)} {\cal P} \phi^{(4)}
\label{YC1}\end{aligned}$$ $$\begin{aligned}
\psi_2 \equiv \hat G_0^{(4)} ~\hat t ^{(4)} \tilde {\cal P} \phi^{(4)}. \end{aligned}$$ The relativistic Yakubovsky equations for the bound state read $$\begin{aligned}
&&\psi_1= - \hat G_0^{(4)} ~\hat T{\cal P}_{34} \psi_1 + \hat G_0^{(4)}~\hat T \psi_2, \cr
&&\psi_2= \hat G_0^{(4)} ~ \hat {\tilde T} \tilde {\cal P} (1-{\cal P}_{34}) \psi_1.
\label{rY}\end{aligned}$$ where $\hat T$ and $\hat {\tilde T}$ are the 3-body and the \[2+2\]-partition t-matrix operators, respectively: $$\begin{aligned}
&&\hat T= \hat t^{(4)} + \hat t^{(4)}~{\cal P}~\hat G_0^{(4)}~\hat T,\cr
&&\hat {\tilde T}= \hat t^{(4)} + \hat t^{(4)}~ \tilde {\cal P}~ \hat G_0^{(4)}\hat {\tilde T}.
\label{T3T22}\end{aligned}$$ The relativistic Yakubovsky equations (\[rY\]) keep the same form as the nonrelativistic ones.
Inclusion of 3-body force {#Inclusion3NF}
=========================
In Refs. [@Huber:1996cg; @Kamada:2019irm], the Faddeev and Yakubovsky equations including a 3-body force were obtained. The 3-body force $w_{123}$ is naturally decomposed into three parts, $$\begin{aligned}
w_{123}\equiv w_{123}^{(1)}+w_{123}^{(2)}+w_{123}^{(3)} = (1+{\cal P}) w_{123}^{(3)}
=(1+{\cal P}) w\end{aligned}$$ with $w = w_{123}^{(3)} $. Instead of Eq. (\[FC1\]), the Faddeev component is defined to include a part of the 3-body force, $$\begin{aligned}
\phi = \hat G_0 (\hat V +\hat w ) \Psi.\end{aligned}$$ Including the 3-body force, the Faddeev equation for the bound state is rewritten as $$\begin{aligned}
\phi = \hat G_0 \hat \tau \phi
\label{incF}\end{aligned}$$ where $\hat \tau$ is defined as $$\begin{aligned}
\hat \tau \equiv \hat t {\cal P} + (1+\hat t \hat G_0) \hat w(1+{\cal P}).\end{aligned}$$ Because the 3-body force is given in the 3-body center-of-mass system, it need not be boosted.
On the other hand, in the case of the 4-body system, the 3-body force $\hat w_{123}$ must be boosted along the momentum $\vec p_4$ of the fourth particle. The invariant mass of the 4-body system, $\sqrt{S_{1234}}$, may be given as $$\begin{aligned}
\sqrt{S_{1234}}
&&= \sqrt{m^2+p_1^2}+\sqrt{m^2+p_2^2}+\sqrt{m^2+p_3^2}+\sqrt{m^2+p_4^2}\cr
&&+V_{12}+V_{23}+V_{31}+V_{14}+V_{24}+V_{34} \cr
&&+W_{123}(p_4)+W_{234}(p_1)+W_{341}(p_2)+W_{412}(p_3)\cr
&&= \sqrt{m^2+p_1^2}+\sqrt{m^2+p_2^2}+\sqrt{m^2+p_3^2}+\sqrt{m^2+p_4^2}\cr
&&+V_{12}+V_{23}+V_{31}+V_{14}+V_{24}+V_{34} \cr
&&+W_{123}^{(1)}(p_4)+W_{234}^{(2)}(p_1)+W_{341}^{(3)}(p_2)+W_{412}^{(4)}(p_3)\cr
&&+W_{123}^{(2)}(p_4)+W_{234}^{(3)}(p_1)+W_{341}^{(4)}(p_2)+W_{412}^{(1)}(p_3)\cr
&&+W_{123}^{(3)}(p_4)+W_{234}^{(4)}(p_1)+W_{341}^{(1)}(p_2)+W_{412}^{(2)}(p_3)\cr
&&= \sqrt{\bigl(\sqrt{4(m^2+k_{12}^2)+q_3^2}+\sqrt{m^2+q_3^2}+w_{123}^{(3)}\bigr)^2+p_4^2}+\sqrt{m^2+p_4^2}\cr
&&+V_{12}+V_{23}+V_{31}+V_{14}+V_{24}+V_{34} \cr
&&+W_{123}^{(1)}(p_4)+W_{234}^{(2)}(p_1)+W_{341}^{(3)}(p_2)+W_{412}^{(4)}(p_3)\cr
&&+W_{123}^{(2)}(p_4)+W_{234}^{(3)}(p_1)+W_{341}^{(4)}(p_2)+W_{412}^{(1)}(p_3)\cr
&&~~~~~~~~~~~~~~~~~+W_{234}^{(4)}(p_1)+W_{341}^{(1)}(p_2)+W_{412}^{(2)}(p_3)
\label{4bodyeq3}\end{aligned}$$ where $W_{ijk}^{(i)}(p_l)$, $W_{ijk}^{(j)}(p_l)$ and $W_{ijk}^{(k)}(p_l)$ are boosted 3-body forces (see Appendix \[appB\]). Similarly, instead of Eq. (\[YC1\]), the Yakubovsky component is defined to include a part of the boosted 3-body force $\hat W\equiv \hat W_{123}^{(3)}$: $$\begin{aligned}
&&\psi_1\equiv \hat G_0^{(4)} ~\hat t ^{(4)} {\cal P} \phi^{(4)}
+(1+\hat G_0^{(4)} \hat t^{(4)} ) \hat G_0 ^{(4)} \hat W \Psi^{(4)},\cr
&&\psi_2\equiv \hat G_0^{(4)} \hat t ^{(4)} \tilde {\cal P} \phi^{(4)}\end{aligned}$$ where $\Psi^{(4)}$ is the total wave function $$\begin{aligned}
\Psi^{(4)} =( 1+{\cal P} -{\cal P}_{34} {\cal P} +\tilde {\cal P})(\psi_1 -{\cal P}_{34}\psi_1 +\psi_2).\end{aligned}$$ The 4-body Yakubovsky equations for the bound state are rewritten as $$\begin{aligned}
\psi_1=&& - \hat G_0^{(4)} \hat {\cal T} {\cal P}_{34} \psi_1 +\hat G_0^{(4)} ~\hat {\cal T}~\psi_2 \cr
&&+(1 + \hat G_0^{(4)}~\hat {\cal T} )(1 + \hat G_0^{(4)}~\hat t^{(4)})~\hat G_0^{(4)}
\hat W( -{\cal P}_{34} {\cal P} + \tilde {\cal P} ) ( \psi_1 -{\cal P}_{34} ~ \psi_1 +\psi_2), \cr
\psi_2=&& \hat G_0^{(4)} \hat {\tilde T} \tilde {\cal P} (1-{\cal P}_{34}) \psi_1.
\label{rYw3}\end{aligned}$$ with $$\begin{aligned}
\hat {\cal T} = \hat \tau^{(4)} + \hat \tau^{(4)} \hat G_0^{(4)} \hat {\cal T}.\end{aligned}$$
Conclusion {#Conclusion}
==========
The relativistic Faddeev 3-body equations and the relativistic Yakubovsky 4-body equations are given in Eq. (\[Faddeev1\]) and Eq. (\[rY\]), respectively. These equations retain the form of the original nonrelativistic ones, with the boosted potentials of Eq. (\[V\]) and Eq. (\[4bodyeq2\]). The inclusion of a 3-body force is treated consistently, under Poincaré invariance, in the Faddeev equations (\[incF\]) and the Yakubovsky equations (\[rYw3\]). The boosted potentials and the boosted 3-body force are defined by Eqs. (\[V\[3+1\]\]), (\[V\[2+2\]\]) and (\[W\]) in Appendices \[appA\] and \[appB\]. The quadratic integral equations that determine these boosted potentials and the boosted 3-body force, solved by the iteration method [@Kamada:2007ms], are demonstrated in Appendix \[appC\].
Acknowledgements {#Acknow .unnumbered}
================
The author (H.K.) would like to thank H. Witała, J. Golak, R. Skibiński, K. Topolnicki, A. Nogga and E. Epelbaum for fruitful discussions during the 4th LENPIC meeting in Bochum, Germany (website: <http://www.lenpic.org/>).
Appendix A {#appA}
==========
The relative momentum $\vec q_3$ between subsystem (12) and the third particle is given as $$\begin{aligned}
\vec q_3 \equiv {\epsilon_{12} \vec p_3 -\epsilon_{3} \vec P_{12} \over \epsilon_{12} +\epsilon_{3}}\end{aligned}$$ with $$\begin{aligned}
\epsilon_{12}={1\over 2} \bigl(E_1+E_2+ \sqrt{(2\sqrt{m^2+k_{12}^2} )^2+ q_3^2} \bigr).\end{aligned}$$ Using the momentum $\vec q_3$ we have the boosted potential $V_{12}^{[3+1]} (q_3;p_4)$ in Eq. (\[4bodyeq2\]). $$\begin{aligned}
V_{12}^{[3+1]}(q_3;p_4) \equiv && \sqrt{\bigl(\sqrt{(2\sqrt{m^2+k_{12}^2}+v_{12})^2+q_3^2}+\sqrt{m^2+q_3^2} \bigr)^2+p_4^2} \cr
&&- \sqrt{\bigl(\sqrt{(2\sqrt{m^2+k_{12}^2})^2+q_3^2}+\sqrt{m^2+q_3^2} \bigr)^2+p_4^2}.
\label{V[3+1]}\end{aligned}$$
The \[2+2\] partition relative momentum $\vec s_4$ between subsystem (12) and (34) is given as $$\begin{aligned}
\vec s_4 \equiv {\epsilon_{12,34} \vec P_{34} -\epsilon_{34,12} \vec P_{12} \over \epsilon_{12,34} +\epsilon_{34,12}}\end{aligned}$$ with $$\begin{aligned}
&&\epsilon_{12,34}={1\over 2} \bigl(E_1+E_2 + \sqrt{(2\sqrt{m^2+k_{12}^2} )^2+ s_4^2} \bigr),\cr
&&\epsilon_{34,12}={1\over 2} \bigl(E_3+E_4 + \sqrt{(2\sqrt{m^2+k_{34}^2} )^2+ s_4^2} \bigr).\end{aligned}$$ Using the momentum $\vec s_4$ we have the boosted potential $V_{12}^{[2+2]} (s_4)$ in Eq. (\[4bodyeq2\]). $$\begin{aligned}
V_{12}^{[2+2]}(s_4) \equiv &&
\sqrt{(2\sqrt{m^2+k_{12}^2}+v_{12})^2+s_4^2} - \sqrt{(2\sqrt{m^2+k_{12}^2})^2+s_4^2} = V_{12} (s_4)
\label{V[2+2]}
\end{aligned}$$ This is in fact the same as the definition of $V_{12}$ in Eq. (\[V\]).
Appendix B {#appB}
==========
The boosted 3-body force $W_{123}^{(3)}(p_4)$ is defined as $$\begin{aligned}
W_{123}^{(3)}(p_4)\equiv && \sqrt{\bigl(\sqrt{(2\sqrt{m^2+k_{12}^2})^2+q_3^2}+\sqrt{m^2+q_3^2} +w_{123}^{(3)} \bigr)^2+p_4^2} \cr
&& -\sqrt{\bigl(\sqrt{(2\sqrt{m^2+k_{12}^2})^2+q_3^2}+\sqrt{m^2+q_3^2} \bigr)^2+p_4^2}, \cr
W_{123}^{(\eta)}(p_4)\equiv && \sqrt{\bigl(\sqrt{(2\sqrt{m^2+k_{12}^2})^2+q_3^2}+\sqrt{m^2+q_3^2} +w_{123}^{(\eta)} \bigr)^2+p_4^2} \cr
&& -\sqrt{\bigl(\sqrt{(2\sqrt{m^2+k_{12}^2})^2+q_3^2}+\sqrt{m^2+q_3^2} \bigr)^2+p_4^2}.
\label{W}\end{aligned}$$ where $\eta$ can be chosen as $1$, $2$, or $3$.
Appendix C {#appC}
==========
The boosted potential $V_{12} (\vec k,\vec k';q) \equiv \langle \vec k |\hat V_{12} (q) | \vec k' \rangle$ of Eq.(\[V\]) is a solution of the following quadratic integral equation [@Kamada:2007ms]. $$\begin{aligned}
V_{12}(\vec k,\vec k';q)= && \displaystyle \frac{1
}{\sqrt{4(m^{2}+k^{2})+q^{2}}+\sqrt{4(m^{2}+k^{\prime 2})+q^{2}}} \cr
&& \times \Biggl( \bigl( \sqrt{4(m^{2}+k^{2})}+\sqrt{4(m^{2}+k^{\prime 2})} \bigr)~v_{12}(\vec k,\vec k') \cr
&&~~~~+\displaystyle \int\left(v_{12}(\vec k,\vec k^{''})v_{12}(\vec k^{''},\vec k')-V_{12}(\vec k,\vec k'';q)V_{12}(\vec k'',\vec k';q)\right)d^{3}k'' \Biggr)
\label{eqV}\end{aligned}$$ with $$\begin{aligned}
v_{12}(\vec k,\vec k') \equiv \langle \vec k | \hat v_{12} | \vec k' \rangle .\end{aligned}$$ The boosted potential $V_{12}^{[3+1]} (\vec k,\vec k';q,r) \equiv
\langle \vec k |\hat V_{12}^{[3+1]} (q;r) | \vec k' \rangle$ of Eq.(\[V\[3+1\]\]) is a solution of the following quadratic integral equation. $$\begin{aligned}
&&V_{12}^{[3+1]}(\vec k,\vec k';q,r) \cr
&&=\displaystyle \frac{
1}{\sqrt{\left(\sqrt{4(m^{2}+k^{2})+q^{2}}+\sqrt{m^{2}+q^{2}}\right)^{2}+r^{2}}+\sqrt{\left(\sqrt{4(m^{2}+k^{\prime 2})+q^{2}}+\sqrt{m^{2}+q^{2}}\right)^{2}+r^{2}}} \cr
&&\times \Biggl( \bigl( \sqrt{4(m^{2}+k^{2})+q^{2}}+\sqrt{4(m^{2}+k^{\prime 2})+q^{2}}+2\sqrt{m^{2}+q^{2}} \bigr) ~V_{12}(\vec k,\vec k';q) \cr
&&
~~~~+\displaystyle \int\left(V_{12}(\vec k,\vec k'';q)V_{12}(\vec k'',\vec k';q)-V_{12}^{[3+1]}(\vec k,\vec k'';q,r)V_{12}^{[3+1]}(\vec k'',\vec k';q,r)\right)d^{3}k'' \Biggr).\end{aligned}$$ Finally, the boosted 3-body force $W_{123}^{(3)} (\vec k,\vec q,\vec k',\vec q' ;r)
\equiv \langle \vec k \vec q |\hat W_{123}^{(3)} (r) | \vec k' \vec q' \rangle$ of Eq.(\[W\]) is a solution of the following quadratic integral equation. $$\begin{aligned}
&&W_{123}^{(3)}(\vec k,\vec q,\vec k',\vec q';r)\cr
&&=\displaystyle \frac{1
}{\sqrt{\left(\sqrt{4(m^{2}+k^{2})+q^{2}}+\sqrt{m^{2}+q^{2}}\right)^{2}+r^{2}}+\sqrt{\left(\sqrt{4(m^{2}+k^{\prime 2})+q^{\prime 2}}+\sqrt{m^{2}+q^{\prime 2}}\right)^{2}+r^{2}}}
\cr
&&\times \Biggl( \bigl( \sqrt{4(m^{2}+k^{2})+q^{2}}+\sqrt{4(m^{2}+k^{\prime 2})+q^{\prime 2}}+\sqrt{m^{2}+q^{2}}+
\sqrt{m^{2}+q^{\prime 2}} \bigr) w_{123}^{(3)}(\vec k,\vec q,\vec k',\vec q') \cr
&&+\displaystyle \int\int\left(w_{123}^{(3)}(\vec k,\vec q,\vec k'',\vec q'')w_{123}^{(3)}(\vec k'',\vec q'',\vec k',\vec q')
-W_{123}^{(3)}(\vec k,\vec q,\vec k'',\vec q'';r)W_{123}^{(3)}(\vec k'',\vec q'',\vec k',\vec q';r)\right)\cr
&& \times d^{3}k'' d^{3}q'' \Biggr) \label{Weq}\end{aligned}$$ with $$\begin{aligned}
w_{123}^{(3)}(\vec k,\vec q, \vec k',\vec q')
\equiv \langle \vec k \vec q | \hat w_{123}^{(3)} | \vec k' \vec q' \rangle .\end{aligned}$$
These equations from (\[eqV\]) to (\[Weq\]) may be solved by the iteration method [@Kamada:2007ms].
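As a concrete illustration of this iteration, the sketch below solves a one-dimensional discretized toy version of Eq. (\[eqV\]); the weak Gaussian potential, momentum grid, and units ($m=1$) are purely illustrative stand-ins for a realistic partial-wave interaction. Two properties can then be verified: at $q=0$ the iteration converges back to $v_{12}$ itself, and for $q>0$ the converged matrix satisfies the quadratic equation to high accuracy.

```python
import numpy as np

m = 1.0
nodes, gw = np.polynomial.legendre.leggauss(40)
k = 2.0 * (nodes + 1.0)        # quadrature nodes mapped to (0, 4)
w = 2.0 * gw * k**2            # weights, including the k''^2 measure

# Weak, attractive, separable toy potential (illustrative only).
v = -0.1 * np.exp(-(k[:, None]**2 + k[None, :]**2))

def rhs(V, q):
    """Right-hand side of the discretized quadratic equation (eqV)."""
    w0 = 2.0 * np.sqrt(m**2 + k**2)
    wq = np.sqrt(4.0 * (m**2 + k**2) + q**2)
    num = ((w0[:, None] + w0[None, :]) * v
           + v @ (w[:, None] * v) - V @ (w[:, None] * V))
    return num / (wq[:, None] + wq[None, :])

def solve_boosted(q, n_iter=200):
    """Fixed-point iteration V_{n+1} = rhs(V_n), started from V = 0."""
    V = np.zeros_like(v)
    for _ in range(n_iter):
        V = rhs(V, q)
    return V
```

For a weak potential the iteration is a contraction and converges geometrically; stronger interactions may require damping of the update, as discussed in [@Kamada:2007ms].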
---
author:
- Shamik Gupta
- Thierry Dauxois
- Stefano Ruffo
title: 'Out-of-equilibrium fluctuations in stochastic long-range interacting systems'
---
Introduction
============
Fluctuations are ubiquitous in any physical system, and characterizing their behavior is one of the primary objectives of statistical physics. Fluctuations may originate spontaneously or may be triggered by an external force. When in thermal equilibrium, the system is unable to distinguish between the two sources of fluctuations, provided the fluctuations are small. As a result, the response of the system in thermal equilibrium to a small external force is related to the spontaneous fluctuations in equilibrium. The latter fact is encoded in the Fluctuation-Dissipation theorem (FDT), a cornerstone of statistical physics [@Kubo:1966]. Intensive research on generalizing the FDT to situations arbitrarily far from equilibrium led to the formulation of a set of exact relations, clubbed together as the Fluctuation Relations (FRs). Besides quantifying the fluctuations, these relations constrain the entropy production and work done on the system [@Seifert:2013]. Notable of the FRs are the Jarzynski equality [@Jarzynski:1997] and the Crooks theorem [@Crooks:1999] in which the system is driven far from an initial canonical equilibrium, and the Hatano-Sasa equality [@Hatano:2001] that applies when the system is initially in a non-equilibrium stationary state (NESS).
Despite such a remarkable success on the theoretical front, observing in experiments the full range of fluctuations captured by the FRs has been limited almost exclusively to small systems. In a macroscopic open system comprising a large number (of the order of Avogadro number) of constituents, the dynamics is governed by the interaction of the environment with these many constituents, so that any macroscopic observable such as the energy shows an average behavior in time, and statistical excursions are but rare. A small system, on the contrary, is one in which the energy exchange during its interaction with the environment in a finite time is small enough so that large deviations from the average behavior are much more amenable to observation [@Bustamante:2005]. Molecular motors constitute a notable example of small systems involved in efficiently converting chemical energy into useful mechanical work inside living cells. Recent advances in experimental manipulation at the microscopic level led to experimentally testing the FRs, e.g., in an RNA hairpin [@Collin:2005], and in a system of microspheres optically driven through water [@Trepagnier:2004].
In this work, we consider a macroscopic system with long-range interactions that is evolving under stochastic dynamics in presence of a time-dependent external force. The stochasticity in the dynamics is due to the interaction of the system with the environment. Long-range interacting (LRI) systems are those in which the inter-particle potential decays slowly with the separation $r$ as $r^{-\alpha}$ for large $r$, with $0 \le \alpha < d$ in $d$ dimensions [@review0].
Here, we study the out-of-equilibrium fluctuations of the work done on the system by the external force. We show that although the system is constituted of a large number $N$ of interacting particles, an effective single-particle nature of the dynamics, which becomes more prominent the larger the value of $N$ is, leads to significant statistical excursions away from the average behavior of the work. The single-particle dynamics is represented in terms of a Langevin dynamics of a particle evolving in a self-consistent mean field generated by its interaction with the other particles, and is thus evidently an effect stemming from the long-range nature of the interaction between the particles. For equilibrium initial conditions, we show that the work distributions for a given protocol of variation of the force in time and the corresponding time-reversed protocol exhibit a remarkable scaling and a symmetry when expressed in terms of the average and the standard deviation of the work. By virtue of the Crooks theorem, the distributions of the work per particle predict the equilibrium free energy per particle. For large $N$, the latter value is in excellent agreement with the analytical value obtained within the single-particle dynamics, thereby confirming its validity. For initial conditions in NESSs, we study the distribution of the quantity $Y$ appearing in the Hatano-Sasa equality (\[eq:hatano-sasa\]). We show that the distribution decays exponentially, with different rates on the left and on the right.
A recap of the fluctuation relations
====================================
Consider a system evolving under stochastic dynamics, and which is characterized by a dynamical parameter $\lambda$ that can be externally controlled. Let us envisage an experiment in which the system is subject to the following thermodynamic transformation: starting from the stationary state corresponding to a given value $\lambda=\lambda_1$, the system undergoes dynamical evolution under a time-dependent $\lambda$ that changes according to a given protocol, $\{\lambda(t)\}_{0 \le t \le
\tau};\lambda(0)\equiv\lambda_1,\lambda(\tau)\equiv\lambda_2$, over time $\tau$. Only when $\lambda$ changes slowly enough over a timescale larger than the typical relaxation timescale of the dynamics does the system pass through a succession of stationary states. On the other hand, for an arbitrarily fast variation, the system at all times lags behind the instantaneous stationary state. Dynamics at times $t > \tau$, when $\lambda$ does not change anymore with time, leads the system to eventually relax to the stationary state corresponding to $\lambda_2$. In case of transitions between equilibrium stationary states, the Clausius inequality provides a quantitative measure of the lag at every instant of the thermodynamic transformation between the stationary state and the actual state of the system [@Bertini:2015]. For transitions between NESSs, Hatano and Sasa showed that a quantity $Y$ similar to the dissipated work measures this lag [@Hatano:2001], where $Y$ is defined as $$Y \equiv \int_0^\tau \mbox{d}t~\dot{\lambda}(t)~\frac{\partial \Phi(\mathcal{C}(t),\lambda(t))}{\partial \lambda}.
\label{eq:Y-defn}$$ Here, $\Phi(\mathcal{C},\lambda) \equiv -\ln \rho_{\rm ss}(\mathcal{C};\lambda)$, and $\rho_{\rm ss}(\mathcal{C};\lambda)$ is the stationary state measure of the microscopic configuration $\mathcal{C}$ of the system at a fixed $\lambda$. Owing to the preparation of the initial state and the stochastic nature of the dynamics, each realization of the experiment yields a different value of $Y$. An average over many realizations corresponding to the same protocol $\{\lambda(t)\}$ leads to the following exact result due to Hatano and Sasa [@Hatano:2001]: $$\langle e^{-Y} \rangle = 1.
\label{eq:hatano-sasa}$$
In the particular case in which the stationary state at a fixed $\lambda$ is given by the Boltzmann-Gibbs canonical equilibrium state, let us denote by $\Delta F \equiv F_2 - F_1$ the difference between the initial value $F_1$ and the final value $F_2$ of the Helmholtz free energy that correspond respectively to canonical equilibrium at $\lambda_1$ and $\lambda_2$. Then, if $W$ is the work performed on the system during the thermodynamic transformation, the Jarzynski equality states [@Jarzynski:1997] that $$\langle e^{-\beta W} \rangle = e^{-\beta \Delta F},
\label{eq:jarzynski}$$ where $\beta$ is the inverse temperature of the initial canonical distribution. Subsequent to the work of Jarzynski, a remarkable theorem due to Crooks related (i) the distribution $P_{\rm F}(W_{\rm F})$ of the work $W_{\rm F}$ done during the forward process ${\rm F}$, in which the system is initially equilibrated at $\lambda_1$ and inverse temperature $\beta$, and then the parameter $\lambda$ is changed according to the given protocol $\{\lambda(t)\}$, to (ii) the distribution $P_{\rm R}(W_{\rm R})$ of the work $W_{\rm R}=-W_{\rm F}$ done during the reverse process ${\rm R}$, in which the system is initially equilibrated at $\lambda_2$ and $\beta$, and then the parameter $\lambda$ is changed according to the reverse protocol $\{\widetilde{\lambda}(t) \equiv \lambda(\tau-t)\}$. The theorem [@Crooks:1999] states that $$\frac{P_{\rm F}(W_{\rm F})}{P_{\rm R}(-W_{\rm F})}=e^{\beta(W_{\rm F}-\Delta F)}.
\label{eq:crooks}$$ Note that the two distributions intersect at $W_{\rm F}=\Delta F$. Multiplying both sides of the above equation by $\exp(-\beta W_{\rm F})$, and noting that $P_{\rm R}(-W_{\rm F})$ is normalized to unity, one recovers the Jarzynski equality.
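The Jarzynski equality (\[eq:jarzynski\]) is easy to verify numerically for an exactly solvable toy case, independent of the model studied below: an overdamped Brownian particle in a harmonic trap $V(x,\lambda)=\lambda x^2/2$ whose stiffness is ramped from $\lambda_1$ to $\lambda_2$, for which $\Delta F = (2\beta)^{-1}\ln(\lambda_2/\lambda_1)$. All parameter values in the sketch are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, lam1, lam2 = 1.0, 1.0, 2.0
tau, dt, ntraj = 10.0, 0.01, 20000
nsteps = int(tau / dt)
dlam = (lam2 - lam1) / nsteps

# Equilibrium initial condition at lam1: x ~ N(0, 1/(beta*lam1)).
x = rng.normal(0.0, 1.0 / np.sqrt(beta * lam1), size=ntraj)
W = np.zeros(ntraj)
lam = lam1
for _ in range(nsteps):
    W += 0.5 * dlam * x**2          # work increment (dV/dlam) * dlam
    lam += dlam
    # overdamped Langevin step at the updated stiffness
    x += -lam * x * dt + np.sqrt(2.0 * dt / beta) * rng.normal(size=ntraj)

dF_exact = np.log(lam2 / lam1) / (2.0 * beta)         # exact free-energy difference
dF_jar = -np.log(np.mean(np.exp(-beta * W))) / beta   # Jarzynski estimate
```

Even though individual work values fluctuate and the mean work exceeds $\Delta F$, the exponential average reproduces $\Delta F$ within statistical and time-discretization errors.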
Our Model {#sec:model}
=========
Our model comprises $N$ interacting particles, labelled $i=1,2,\ldots,N$, moving on a unit circle. Let the angle $\th_i \in [0,2\pi)$ denote the location of the $i$-th particle on the circle. A microscopic configuration of the system is $\mathcal{C}\equiv\{\th_i;i=1,2,\ldots,N\}$. The particles interact through a long-range potential $\mathcal{V}(\mathcal{C})\equiv K/(2N)\sum_{i,j=1}^N[1-\cos(\th_i-\th_j)]$, with $K$ being the coupling constant that we take to be unity in the following to consider an attractive interaction [@note]. An external field of strength $h$ produces a potential $\mathcal{V}_{\rm
ext}(\mathcal{C})\equiv-h\sum_{i=1}^N \cos \th_i$; thus, the total potential energy is $V(\mathcal{C})\equiv\mathcal{V}(\mathcal{C})+\mathcal{V}_{\rm
ext}(\mathcal{C})$. The external field breaks the rotational invariance of $\mathcal{V}(\mathcal{C})$ under equal rotation applied to all the particles.
The dynamics of the system involves configurations evolving according to a stochastic Monte Carlo (MC) dynamics. Every particle in a small time $\mbox{d}t \to 0$ attempts to hop to a new position on the circle. The $i$-th particle attempts with probability $p$ to move forward (in the anticlockwise sense) by an amount $\phi;~0<\phi<2\pi$, so that $\th_i \to \th_i'=\th_i+\phi$, while with probability $q=1-p$, it attempts to move backward by the amount $\phi$, so that $\th_i \to
\th_i'=\th_i-\phi$. In either case, the particle takes up the attempted position with probability $g(\Delta V(\mathcal{C}))\mbox{d}t$. Here, $\Delta V(\mathcal{C})$ is the change in the potential energy due to the attempted hop from $\th_i$ to $\th_i'$: $\Delta
V(\mathcal{C})=(1/N)\sum_{j=1}^N[-\cos(\th_i'-\th_j)+\cos(\th_i-\th_j)]-h [\cos \th_i'-\cos \th_i]$. The dynamics does not preserve the ordering of particles on the circle. The function $g$ has the form $g(z)=(1/2)[1-\tanh(\beta z/2)]$, where $\beta$ is the inverse temperature. Such a form of $g(z)$ ensures that for $p=1/2$, when the particles jump symmetrically forward and backward, the stationary state of the system is the canonical equilibrium state at inverse temperature $\beta$ [@can-note]. The case $p \ne q$ mimics the effect of an external drive on the particles to move in one preferential direction along the circle. The field strength $h$ has the role of the externally-controlled parameter $\lambda$ discussed in the preceding section.
The model was introduced in Ref. [@Gupta:2013] as an LRI system evolving under MC dynamics. Depending on the parameters in the dynamics, the system relaxes to either a canonical equilibrium state or a NESS. In either case, the single-particle phase space distribution can be solved [*exactly*]{} in the thermodynamic limit.
A model that has been much explored in the recent past to study static and dynamic properties of LRI systems is the so-called Hamiltonian mean-field (HMF) model [@review0]. This model involves $N$ particles moving on a circle, interacting via a long-range potential with the same form as $\mathcal{V}(\mathcal{C})$, and evolving under deterministic Hamilton dynamics. The dynamics leads at long times to an equilibrium stationary state. Our model may be looked upon as a generalization of the microcanonical dynamics of the HMF model to a stochastic dissipative dynamics in the overdamped regime, with an additional external drive causing a biased motion of the particles on the circle. The dissipation mimics the interaction of the system with an external heat bath.
In the Fokker-Planck limit $\phi \ll 1$, we may in the thermodynamic limit $N \to \infty$ consider, in place of the $N$-particle dynamics described above, the motion of a single particle in a self-consistent mean field generated by its interaction with all the other particles. The dynamics of the particle is described by the Langevin equation [@Gupta:2013] $$\dot \th=\frac{(2p-1)\phi}{2}-\frac{\beta \phi^2}{4}\frac{\partial \langle v \rangle}{\partial \th}+\frac{\phi}{\sqrt{2}}~\eta(t),
\label{eq:langevin-sp}$$ where the dot denotes differentiation with respect to time, and $\eta(t)$ is a Gaussian, white noise with $\overline{\eta(t)}=0,
~\overline{\eta(t)\eta(t')}=\delta(t-t')$. Here, overbars denote averaging over noise realizations. In Eq. (\[eq:langevin-sp\]), $\langle v \rangle \equiv \langle v
\rangle[\rho](\th,t)\equiv-m_x[\rho] \cos \th-m_y[\rho] \sin \th- h\cos
\th$ is the mean-field potential, with $(m_x[\rho],m_y[\rho])\equiv\int \mbox{d}\th ~(\cos \th, \sin
\th)\rho(\th,t)$, where $\rho(\th,t)$ is the probability density of the particle to be at location $\th$ on the circle at time $t$. Together with $\rho(\th,t)=\rho(\th+2\pi,t)$, and the normalization $\int_0^{2\pi} \mbox{d}\th ~\rho(\th,t)=1~~\forall~ t$, $\rho(\th,t)$ is a solution of the Fokker-Planck equation [@Gupta:2013] $$\frac{\partial \rho}{\partial t}=-\frac{\partial}{\partial \th}\Biggl[\Bigl(\frac{(2p-1)\phi}{2}-\frac{\beta \phi^2}{4}\frac{\partial \langle v \rangle}{\partial \th}\Bigr)\rho\Biggr]+\frac{\phi^2}{4}\frac{\partial^2 \rho}{\partial \th^2}.
\label{eq:FP-sp}$$
Steady state
============
Let $P(\mathcal{C},t)$ be the probability to observe configuration $\mathcal{C}$ at time $t$. At long times, the system relaxes to a stationary state corresponding to time-independent probabilities $P_{\rm
st}(\mathcal{C})$. For $p=1/2$, the system has an equilibrium stationary state in which the condition of detailed balance is satisfied, and $P_{\rm
st}(\mathcal{C})$ is given by the canonical equilibrium measure $P_{\rm
eq}(\mathcal{C}) \propto
e^{-\beta V(\mathcal{C})}$. On the other hand, for $p \ne 1/2$, the system at long times reaches a NESS, which is characterized by a violation of detailed balance that leads to closed loops of net non-zero probability current in the phase space.
For the single-particle dynamics (\[eq:langevin-sp\]), the stationary solution $\rho_{\rm ss}$ of Eq. (\[eq:FP-sp\]) is given by [@Gupta:2013] $$\rho_{\rm ss}(\th;h)=\rho_{\rm ss}(0;h)~e^{g(0)-g(\th)}\left[1+\bigl(e^{g(2\pi)-g(0)}-1\bigr)\frac{A(\th)}{A(2\pi)}\right];
\label{eq:sp-st-soln-fsame}$$ $g(\th) \equiv -2(2p-1)\th/\phi+\beta \langle v \rangle[\rho_{\rm
ss}](\th)$, $A(\th)\equiv \int_0^\th \mbox{d}\th'e^{g(\th')}$, while the constant $\rho_{\rm ss}(0;h)$ is fixed by the normalization $\int_0^{2\pi} \mbox{d}\th
~\rho_{\rm ss}(\th;h)=1~\forall~h$. To show the effectiveness of the single-particle dynamics in describing the stationary state of the $N$-particle dynamics for large $N$ and for $\phi \ll 1$, Fig. \[eq:rhoss-th-sim\] shows a comparison between the result (\[eq:sp-st-soln-fsame\]) and MC simulation results for the $N$-particle dynamics with $N=500,\phi = 0.1$, demonstrating an excellent agreement.
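The stationary distribution can be tabulated numerically. The sketch below assumes the standard periodic-boundary form $\rho_{\rm ss}(\th)\propto e^{g(0)-g(\th)}[1+(e^{g(2\pi)-g(0)}-1)A(\th)/A(2\pi)]$, with $g$ and $A$ as defined in the text, and again freezes the mean field $(m_x,m_y)$ by hand instead of determining it self-consistently:

```python
import math

def rho_ss_grid(p=0.55, phi=0.1, beta=1.0, h=1.0, mx=0.5, my=0.0, n=4000):
    """Tabulate the stationary density on a theta-grid.  The mean field
    (mx, my) is an input here, not determined self-consistently."""
    g = lambda th: (-2*(2*p - 1)*th/phi
                    + beta*(-(mx + h)*math.cos(th) - my*math.sin(th)))
    dx = 2*math.pi/n
    ths = [k*dx for k in range(n + 1)]
    A = [0.0]*(n + 1)                 # A(th) = int_0^th e^{g(th')} dth'
    for k in range(1, n + 1):
        A[k] = A[k-1] + 0.5*(math.exp(g(ths[k-1])) + math.exp(g(ths[k])))*dx
    w = math.exp(g(2*math.pi) - g(0.0)) - 1.0
    un = [math.exp(g(0.0) - g(t))*(1.0 + w*A[k]/A[n])
          for k, t in enumerate(ths)]
    Z = sum(0.5*(un[k-1] + un[k])*dx for k in range(1, n + 1))
    return ths, [u/Z for u in un]

ths, rho = rho_ss_grid(p=0.5)   # p = 1/2: reduces to exp(-beta <v>)/Z
```

For $p=1/2$ the circulating term drops out and the tabulated density reduces to the equilibrium Boltzmann form, e.g. $\rho(0)/\rho(\pi)=e^{2\beta(m_x+h)}$.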
![Stationary distribution $\rho_{\rm ss}(\th;h)$ for $p \ne 1/2$: A comparison between MC simulation results (points) for $\phi = 0.1, N=500$ and three values of field $h$, and the theoretical result (continuous lines) in the Fokker-Planck approximation in the limit $N \to \infty$ given by Eq. (\[eq:sp-st-soln-fsame\]) illustrates an excellent agreement.](rho-ss.pdf){width="5cm"}
ł[eq:rhoss-th-sim]{}
For $p=1/2$, Eq. (\[eq:sp-st-soln-fsame\]) gives the equilibrium single-particle distribution $\rho_{\rm eq}(\th;h)=e^{-\beta \langle v \rangle[\rho_{\rm
eq}](\th)}/Z(h)$, with $Z(h)\equiv
\int_0^{2\pi}\mbox{d}\theta~e^{-\beta \langle v \rangle[\rho_{\rm eq}](\th)}=2\pi
I_0(\beta m_{\rm eq})$, and $I_n(x)$ the modified Bessel function of order $n$. Here, $m_{\rm eq}\equiv \sqrt{(m_x^{\rm eq}+h)^2+(m_y^{\rm eq})^2}$ is obtained by solving the transcendental equation $m_{\rm eq}-h=I_1(\beta m_{\rm
eq})/I_0(\beta m_{\rm eq})$, see [@review0]. For $h=0$, $m_{\rm eq}$ as a function of $\beta$ decreases continuously from unity at $\beta=\infty$ to zero at the critical value $\beta_c=2$, and remains zero at smaller $\beta$, thus showing a second-order phase transition at $\beta_c$ [@review0]. For $h \ne 0$, the magnetization is non-zero at all $\beta$; hence, there is no phase transition.
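The transcendental condition can be solved by simple fixed-point iteration. A pure-Python sketch, computing $I_n$ from its power series and assuming the self-consistency condition in the form $m_{\rm eq}-h=I_1(\beta m_{\rm eq})/I_0(\beta m_{\rm eq})$ (as follows from the definition of $m_{\rm eq}$, which includes the field):

```python
import math

def bessel_i(n, x, terms=60):
    """Modified Bessel function I_n(x) from its power series."""
    return sum((x/2.0)**(2*k + n)/(math.factorial(k)*math.factorial(k + n))
               for k in range(terms))

def m_eq(beta, h, tol=1e-12):
    """Fixed-point iteration for m - h = I1(beta m)/I0(beta m)."""
    m = h + 0.5
    for _ in range(100_000):
        m_new = h + bessel_i(1, beta*m)/bessel_i(0, beta*m)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m_new
```

At $h=0$ the iteration reproduces the second-order transition: the fixed point vanishes for $\beta<\beta_c=2$ and is non-zero above it.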
Equilibrium initial condition
=============================
Let us consider $p=1/2$ in our model, when the system at a fixed value of $h$ has an equilibrium stationary state. In the following, we measure time in units of MC steps, where one MC step corresponds to $N$ attempted hops of randomly chosen particles. Starting with the system in equilibrium at $h=h_0$, we perform MC simulations of the dynamics while changing the field strength linearly over a total time $\tau$ (an integer number of MC steps), with $\tau \ll
\tau_{\rm eq}$, such that at the $\alpha$-th time step, the field value is $h_\alpha=h_0+\Delta h~\alpha/\tau;~\alpha \in [0,\tau]$. Here, $\Delta h$ is the total change in the value of the field over time $\tau$. Note that the FRs are expected to hold for arbitrary protocols $\{\lambda(t)\}$; the linear variation we consider is just a simple choice. Here, $\tau_{\rm eq}$ is the typical equilibration time at a fixed value of $h$, and the condition $\tau \ll
\tau_{\rm eq}$ ensures that the system during the thermodynamic transformation is driven arbitrarily far from equilibrium. The initial equilibrium configuration is prepared by sampling independently each $\theta_i$ from the single-particle distribution $\rho_{\rm eq}(\th;h)$, with $m_{\rm eq}$ determined by solving $m_{\rm eq}-h_0=I_1(\beta m_{\rm
eq})/I_0(\beta m_{\rm eq})$. The work done on the system during the evolution is [@Jarzynski:1997] $$W_{\rm F} \equiv \int_0^\tau \mbox{d}t~\frac{\partial h}{\partial t}\frac{\partial V}{\partial h}=-\frac{\Delta h}{\tau}\sum_{\alpha=1}^{\tau}\sum_{i=1}^N \cos\th_i^{(\alpha)},$$ \[eq:Wdefn\] where $\th_i^{(\alpha)}$ is the angle of the $i$-th particle at the $\alpha$-th time step of evolution. In another set of experiments, we prepare the system to be initially in equilibrium at $h=h_\tau$, and then evolve the system while decreasing the field strength linearly in time as $h_\alpha=h_\tau-\Delta h~\alpha/\tau$. During these forward and reversed protocols of changing the field, we compute the respective work distributions $P_{\rm F}(W_{\rm F})$ and $P_{\rm
R}(W_{\rm R})$, for $\phi \ll 1$ and a number of system sizes $N \gg 1$. We take $\tau_{\rm eq}=N^2$, having confirmed that the distributions $P_{\rm F}(W_{\rm F})$ and $P_{\rm R}(W_{\rm R})$ do not change appreciably on taking $\tau_{\rm eq}$ larger than $N^2$.
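The forward protocol can be sketched in code. The energy function used below, $V(\mathcal{C})=N/2-(S_x^2+S_y^2)/(2N)-hS_x$ with $S_x=\sum_i\cos\th_i$, $S_y=\sum_i\sin\th_i$, is an assumed mean-field (HMF-type) form consistent with $\langle v\rangle$; the hop size $\phi$ with symmetric proposals and Metropolis acceptance is likewise an assumption about the microscopic $p=1/2$ dynamics, not the authors' production code:

```python
import math, random

def run_forward(N=50, phi=0.1, beta=1.0, h0=1.0, dh=1.0, tau=10,
                eq_sweeps=200, seed=2):
    """Equilibrate at h = h0, then ramp the field linearly over tau MC
    steps while accumulating W_F = -(dh/tau) sum_alpha sum_i cos th_i."""
    random.seed(seed)
    th = [random.gauss(0.0, 0.5) % (2*math.pi) for _ in range(N)]
    sx = sum(math.cos(t) for t in th)
    sy = sum(math.sin(t) for t in th)

    def sweep(h):
        nonlocal sx, sy
        for _ in range(N):                      # one MC step = N attempts
            i = random.randrange(N)
            new = (th[i] + random.choice((-phi, phi))) % (2*math.pi)
            dsx = math.cos(new) - math.cos(th[i])
            dsy = math.sin(new) - math.sin(th[i])
            # change of V = N/2 - (Sx^2 + Sy^2)/(2N) - h*Sx
            dV = -((sx + dsx)**2 + (sy + dsy)**2 - sx**2 - sy**2)/(2*N) - h*dsx
            if dV <= 0 or random.random() < math.exp(-beta*dV):
                th[i] = new
                sx += dsx
                sy += dsy

    for _ in range(eq_sweeps):                  # equilibrate at h = h0
        sweep(h0)
    W = 0.0
    for alpha in range(1, tau + 1):             # linear ramp of the field
        sweep(h0 + dh*alpha/tau)
        W += -(dh/tau)*sx                       # -(dh/tau) sum_i cos th_i
    return W

W = run_forward()
# W < 0 here: the magnetization stays positive while h is increased
```

Repeating this over many noise realizations builds up the distribution $P_{\rm F}(W_{\rm F})$; the reverse protocol is obtained by starting at $h_0+\Delta h$ and ramping down.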
Figures \[fig:crooks-inhom\](a),(b) show the forward and the reverse work distribution for a range of system sizes $N$. Here, we have taken $\phi=0.1,h_0=1.0,\tau=10,\beta=1,\Delta h=1.0$. The data collapse evident from the plots suggests the scaling $$P_{\rm B}(W_{\rm B})\sim \frac{1}{\sigma_{\rm B}}\,g_{\rm B}\left(\frac{W_{\rm B}-\langle W_{\rm B}\rangle}{\sigma_{\rm B}}\right);~~{\rm B}={\rm F,R},$$ ł[eq:work-scaling-BHP]{} where $g_{\rm B}$ is the scaling function, while $\langle W_{\rm B}\rangle$ and $\sigma_{\rm B}$ are respectively the average and the standard deviation of the work. A similar scaling, termed the Bramwell-Holdsworth-Pinton (BHP) scaling, was first observed in the context of fluctuations of injected power in confined turbulence and magnetization fluctuations at the critical point of a ferromagnet [@Bramwell:1998]. Over the years, a similar scaling has been reported in a wide variety of different contexts, from models of statistical physics, such as Ising and percolation models, sandpiles, and granular media in a self-organized critical state [@Bramwell:2000], to fluctuations in river level [@Bramwell:2002], and even fluctuations in short electrocardiogram episodes in humans [@Bakucz:2014]. Here, the BHP scaling is shown for the first time to also hold for work distributions out of equilibrium. The dependence of the average and the standard deviation on the system size $N$ is shown in panels (e) and (f), respectively, with the numerical data suggesting that $\langle W_{\rm B}\rangle \propto N$, $\sigma_{\rm F}\sim N^{a};~a \approx 0.528$, and $\sigma_{\rm R}\sim N^b;~b \approx 0.504$. The data collapse in (c) suggests the remarkable symmetry $$g_{\rm F}(x)=g_{\rm R}(-x).$$ ł[eq:g-scaling]{} An understanding of the origin of this symmetry, and particularly, whether it is specific to our model or holds in general, is left for future work. Figure \[fig:crooks-inhom\](d) shows the distribution of the work per particle.
Two essential features of the plots are evident, namely, (i) significant fluctuations of the work values even for large system size, and (ii) the forward and the reverse distribution intersecting at a common value regardless of the system size. By virtue of the Crooks theorem (\[eq:crooks\]), this common value should be given by the free energy difference per particle $\Delta f$ between the canonical equilibrium states of the system at field values $h_\tau$ and $h_0$. This latter quantity may be computed theoretically by knowing the free energy per particle in the limit $N \to \infty$ and at a fixed value of $h$ [@review0]: $$f(h)=\frac{(m_{\rm eq}-h)^2}{2}-\frac{1}{\beta}\ln\left(\int_0^{2\pi}\mbox{d}\th~ e^{\beta m_{\rm eq}\cos\th}\right)=\frac{(m_{\rm eq}-h)^2}{2}-\frac{1}{\beta}\ln\left[2\pi I_0(\beta m_{\rm eq})\right].$$ ł[eq:sp-fe]{} Using the above gives $\Delta f\approx-0.725$, which is seen in Fig. \[fig:crooks-inhom\](d) to match very well with the intersection point of the forward and the reverse distribution of the work.
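Plugging numbers into the free energy reproduces the quoted value. A quick self-contained check (pure-Python Bessel series, and the self-consistency condition taken in the form $m_{\rm eq}-h=I_1(\beta m_{\rm eq})/I_0(\beta m_{\rm eq})$, which is the form that reproduces $\Delta f\approx-0.725$ for $\beta=1$, $h_0=1$, $h_\tau=2$):

```python
import math

def bessel_i(n, x, terms=60):
    """Modified Bessel function I_n(x) from its power series."""
    return sum((x/2.0)**(2*k + n)/(math.factorial(k)*math.factorial(k + n))
               for k in range(terms))

def m_eq(beta, h):
    """Fixed-point iteration for m - h = I1(beta m)/I0(beta m)."""
    m = h + 0.5
    for _ in range(200):          # contraction factor ~ 0.2: converges fast
        m = h + bessel_i(1, beta*m)/bessel_i(0, beta*m)
    return m

def free_energy(beta, h):
    # f(h) = (m_eq - h)^2 / 2 - (1/beta) ln[2 pi I0(beta m_eq)]
    m = m_eq(beta, h)
    return (m - h)**2/2.0 - math.log(2.0*math.pi*bessel_i(0, beta*m))/beta

delta_f = free_energy(1.0, 2.0) - free_energy(1.0, 1.0)   # about -0.725
```

The helpers are redefined here so the snippet runs on its own.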
![Starting with an initial equilibrium state at inverse temperature $\beta=1$ and field $h=h_0=1.0$, and then increasing the field linearly in time to $h=2.0$ over a time $\tau=10$ Monte Carlo steps (thus, $\Delta h=1.0$), panel (a) shows the scaled work distribution for this forward (F) protocol, while (b) shows the same for the corresponding reverse (R) protocol, both for a range of system sizes $N$. Scaling collapse in (c) suggests for the scaling functions in (a) and (b) the symmetry $g_{\rm F}(x)=g_{\rm R}(-x)$. (d) shows $NP_{\rm F}(W_{\rm F})$ (right set of curves) and $NP_{\rm
R}(W_{\rm R})$ (left set) as a function of $W_{\rm F}/N$ and $W_{\rm R}/N$, respectively, for different $N$, with the curves intersecting at a value given by the free energy difference per particle $\Delta f$ estimated using Eq. (\[eq:sp-fe\]) for single-particle equilibrium. (e) and (f) show respectively the dependence of the average and the standard deviation of the forward and the reverse work on $N$, suggesting that while the average grows linearly with $N$, one has $\sigma_{\rm F}\sim N^{a};~a \approx 0.528$, and $\sigma_{\rm R}\sim N^b;~b \approx 0.504$. Here, $\phi=0.1,p=0.5$.](work-inhom.pdf){width="6cm"}
ł[fig:crooks-inhom]{}
![ Plots similar to those in Fig. \[fig:crooks-inhom\], but with $\beta=0.5,h_0=0.0,\tau=10,\Delta h=1.0$. The black lines in panels (a) and (b) denote a Gaussian distribution with zero average and unit standard deviation. While the averages in (d) grow linearly with $N$, the standard deviations in (e) satisfy $\sigma_{\rm F}\sim N^{a};~a \approx 0.501$, and $\sigma_{\rm R}\sim N^b;~b \approx 0.5$.](work-hom.pdf){width="6cm"}
ł[fig:crooks-hom]{}
While Fig. \[fig:crooks-inhom\] was for inhomogeneous initial equilibrium, in order to validate our results also for homogeneous initial conditions, Figure \[fig:crooks-hom\] repeats the plots at $h_0=0.0$ and at a temperature larger than $1/\beta_c$. In this case, the scaled work distributions fit quite well to a Gaussian distribution with zero average and unit standard deviation, see Figs. \[fig:crooks-hom\](a),(b), so that $g(x)=\exp(-x^2/2)/\sqrt{2\pi}$, and therefore, the symmetry (\[eq:g-scaling\]) is obviously satisfied. The free energy difference $\Delta f$ can be estimated by using Eq. (\[eq:sp-fe\]), but can also be obtained by using Eq. (\[eq:crooks\]) and the fact that in the present situation, the scaled work distributions are Gaussian. Using the latter procedure, one gets the expression $\Delta f=(\langle W_{\rm
F}\rangle-\langle W_{\rm R}\rangle)/(2N)$ [@Dourache:2005]; then, on substituting our numerical values for $\langle W_{\rm F}\rangle$ and $\langle W_{\rm R}\rangle$, we get $\Delta f\approx -0.161$. In Fig. \[fig:crooks-hom\](c), we show that this value of $\Delta f$ coincides with the point of intersection of the forward and the reverse work distribution.
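The Gaussian result follows from locating the crossing of two equal-variance Gaussians: by the Crooks relation, $P_{\rm F}(W)$ and $P_{\rm R}(-W)$ intersect at $W^*=\Delta F=(\langle W_{\rm F}\rangle-\langle W_{\rm R}\rangle)/2$. A small numerical check of this statement (function name and parameter values chosen for illustration):

```python
import math

def gaussian_crossing(mu_f, mu_r, sigma, tol=1e-10):
    """Locate W* with P_F(W*) = P_R(-W*) for equal-variance Gaussians;
    Crooks then identifies W* = (mu_f - mu_r)/2 with Delta F."""
    pf = lambda w: math.exp(-(w - mu_f)**2/(2.0*sigma**2))
    pr = lambda w: math.exp(-(w + mu_r)**2/(2.0*sigma**2))   # P_R(-W)
    lo, hi = -mu_r, mu_f           # pf - pr changes sign between the peaks
    if lo > hi:
        lo, hi = hi, lo
    d = lambda w: pf(w) - pr(w)
    while hi - lo > tol:           # bisection; the crossing is unique here
        mid = 0.5*(lo + hi)
        if d(lo)*d(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)
```

With per-particle averages plugged in, the returned crossing coincides with $(\mu_{\rm F}-\mu_{\rm R})/2$ to solver precision.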
In Figs. \[fig:crooks-inhom\] and \[fig:crooks-hom\], it may be seen that $\sigma_{\rm F} > \sigma_{\rm R}$, and also $\langle
W_{\rm R}\rangle > |\langle W_{\rm F}\rangle|$. In computing $\sigma_{\rm F}$, we start with a less magnetized state (thus, with the particles more spread out on the circle) than the state we start with in computing $\sigma_{\rm R}$. As a result, the work done in the former case, during the thermodynamic transformation in which the increasing field tries to bring the particles closer together, shows more variation from one realization to another, resulting in $\sigma_{\rm F} > \sigma_{\rm R}$. Now, $\langle W \rangle$, either F or R, is, up to a sign and constant factors, the time-integrated magnetization, see Eq. (\[eq:Wdefn\]). During the forward process, we start with a less-magnetized equilibrium state with magnetization $m_0^{\rm eq}$, and then increase the field for a finite time. The final magnetization value $m_{\rm fin,F}$ reached thereby is smaller than the actual equilibrium value $m^{\rm eq}_1$ for the corresponding value of the field, since we do not allow the system to equilibrate during the transformation. For the reverse process, we start with this equilibrium value $m^{\rm eq}_1$, and during the transformation, when the field is decreased, the magnetization decreases, but not all the way down to the value $m^{\rm eq}_0$, since the system remains out of equilibrium during the transformation. As a result, the time-integrated forward magnetization, which sets $\langle W_{\rm
F}\rangle$, is smaller in magnitude than the time-integrated reverse magnetization, which sets $\langle W_{\rm R} \rangle$. In Fig. \[fig:crooks-hom\], $\langle W_{\rm F} \rangle$ is very close to zero. This is because here we start with a homogeneous equilibrium state, for which the magnetization is $m^{\rm eq}_0=0$, and then increase the field for a finite time to the not-so-high value $h=1$, so the magnetization does not increase much from its initial value. Hence, $\langle W_{\rm F} \rangle$, the time-integrated magnetization during this forward transformation, is close to zero.
Figures \[fig:crooks-inhom\] and \[fig:crooks-hom\], while illustrating the validity of the Crooks theorem (and hence, of the Jarzynski equality) for many-body stochastic LRI systems, underline the effective single-particle nature of the actual $N$-particle dynamics for large $N$ in the Fokker-Planck limit $\phi \ll 1$. This feature is further illustrated by our analysis of fluctuations while starting from NESSs, as we now proceed to discuss.
![Starting with initial conditions in a NESS at $h=h_0=1.0$, and then increasing the field linearly in time to $h=1.15$ over $\tau=15$ Monte Carlo steps (thus, $\Delta h=0.15$), the figure shows for two values of initial inverse temperature $\beta$ the distribution of the quantity $Y$ appearing in the Hatano-Sasa equality (\[eq:hatano-sasa\]). The black lines in the inset stand for the exponential fit $\sim \exp(aY)$ to the left tail, with $a\approx 280$ for $\beta=5$ and $a \approx 300$ for $\beta=10$, and the exponential fit $\sim \exp(-bY)$ to the right tail, with $b\approx 7$ for $\beta=5$ and $b \approx 11.5$ for $\beta=10$. Here, $N=500,\phi=0.1,p=0.55$.](HS-fig-new.pdf){width="4cm"}
ł[fig:HS]{}
Non-equilibrium initial condition
=================================
We now consider $p\ne1/2$ in our model. In this case, the system at a fixed value of $h$ relaxes to a NESS. We wish to compute the distribution of the quantity $Y$ appearing in the Hatano-Sasa equality (\[eq:hatano-sasa\]). To proceed, we consider a large value of $N$ and $\phi \ll 1$, and use a combination of the $N$-particle dynamics and the knowledge of the single-particle stationary-state distribution (\[eq:sp-st-soln-fsame\]). Starting with the initial value $h=h_0$, the field is varied linearly in time, as in the equilibrium case; specifically, at the $\alpha$-th time step, the field is $h_\alpha=h_0+\Delta h~\alpha/\tau;~\alpha \in [0,\tau]$. Again, the choice of the protocol is immaterial as far as the validity of the Hatano-Sasa equality is concerned. The steps in computing the $Y$-distribution for fixed values of $\beta,h_0,\Delta h,\tau$ are as follows. A state prepared by sampling independently each $\theta_i$ uniformly in $[0,2\pi)$ is allowed to evolve under the $N$-particle MC dynamics with $h=h_0$ to eventually relax to the stationary state, which is confirmed by checking that the resulting single-particle distribution is given by Eq. (\[eq:sp-st-soln-fsame\]). Subsequently, the particles are allowed to evolve under the time-dependent field $h_\alpha$ for a total time $\tau$, and the quantity $Y$ is computed along the trajectory of each particle according to Eq. (\[eq:Y-defn\]); discretizing the integral and the derivative appearing in Eq. (\[eq:Y-defn\]), one has for the $i$-th particle $$Y_i=-\sum_{\alpha=1}^{\tau}\ln\left(\frac{\rho_{\rm ss}(\theta_i^{(\alpha)};h_{\alpha})}{\rho_{\rm ss}(\theta_i^{(\alpha)};h_{\alpha-1})}\right),$$ \[eq:Yi\] where $\{\theta_i^{(\alpha)}\}_{0\le\alpha\le\tau}$ gives the trajectory of the $i$-th particle, and $\rho_{\rm ss}(\theta_i^{(\alpha)};h_{\alpha})$ is computed by using Eq. (\[eq:sp-st-soln-fsame\]). Repeating these steps yields the distribution of $Y$ for each particle, which is finally averaged over all the particles to obtain the distribution $P(Y)$ depicted in Fig. \[fig:HS\]. Here, we use two values of $\beta$, while the other parameters are $p=0.55,N=500,\phi=0.1,h_0=1.0,\Delta h=0.15,\tau=15$. As is evident from the figure, the distribution is highly asymmetric, and in particular, has exponential tails (see the inset). From the data for $P(Y)$, we find for $\langle
\exp(-Y)\rangle$ the value $1.04$ for $\beta=10$, and the value $1.11$ for $\beta=5$, which within numerical accuracy are consistent with the expected value of unity. We reiterate that obtaining the $Y$-distribution combined the $N$-particle dynamics with the exact single-particle stationary-state distribution; the consistency of the final results with the Hatano-Sasa equality further highlights the effective mean-field nature of the $N$-particle dynamics for large $N$.
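The per-trajectory accumulation of $Y$ is a one-line sum; a minimal sketch, assuming the discretized form $Y=-\sum_\alpha \ln[\rho_{\rm ss}(\th^{(\alpha)};h_\alpha)/\rho_{\rm ss}(\th^{(\alpha)};h_{\alpha-1})]$ with $\rho_{\rm ss}$ supplied as a callable:

```python
import math

def hatano_sasa_Y(traj, fields, rho_ss):
    """Discretized Hatano-Sasa functional along one trajectory:
    Y = -sum_alpha ln[rho_ss(th^(alpha); h_alpha)/rho_ss(th^(alpha); h_{alpha-1})].
    traj[alpha] is the angle after the alpha-th step, fields[alpha] the
    field at that step; rho_ss(th, h) is the stationary density."""
    Y = 0.0
    for alpha in range(1, len(fields)):
        th = traj[alpha]
        Y -= math.log(rho_ss(th, fields[alpha])/rho_ss(th, fields[alpha - 1]))
    return Y
```

Two limiting cases make useful sanity checks: an $h$-independent stationary density gives $Y=0$ identically, as does a constant protocol.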
To conclude, in this work, we studied the out-of-equilibrium fluctuations of the work done by a time-dependent external force on a many-particle system with long-range interactions and evolving under stochastic dynamics. For both equilibrium and non-equilibrium initial conditions, we characterized the fluctuations, and revealed how a simpler single-particle Langevin dynamics in a mean field gives accurate quantitative predictions for the $N$-particle dynamics for large $N$. This in turn highlights the effective mean-field nature of the original many-particle dynamics for large $N$. It would be interesting to generalize recent studies of work statistics in quantum many-body short-range systems, e.g., [@Russomanno:2015; @Dutta:2015], to those with long-range interactions, and to unveil any effective mean-field description there.
SG and SR thank the ENS de Lyon for hospitality. We acknowledge fruitful discussions with A. C. Barato, M. Baiesi, S. Ciliberto, G. Jona-Lasinio, and A. Naert.
[99]{} R. Kubo, Rep. Prog. Phys. [**29**]{}, 255 (1966).
U. Seifert, Rep. Prog. Phys. [**75**]{}, 126001 (2012).
C. Jarzynski, Phys. Rev. Lett. **78**, 2690 (1997).
G. Crooks, Phys. Rev. E, [**60**]{}, 2721 (1999).
T. Hatano and S. Sasa, Phys. Rev. Lett. [**86**]{}, 3463 (2001).
C. Bustamante, J. Liphardt and F. Ritort, Physics Today, [**58**]{}, Issue no. 7, 43 (2005).
D. Collin, F. Ritort, C. Jarzynski, S. B. Smith, I. Tinoco, Jr and C. Bustamante, Nature [**437**]{}, 231 (2005).
E. H. Trepagnier, C. Jarzynski, F. Ritort, G. E. Crooks, C. J. Bustamante, and J. Liphardt, PNAS [**101**]{}, 15038 (2004).
[*Physics of Long-Range Interacting Systems*]{}, A. Campa, T. Dauxois, D. Fanelli, and S. Ruffo (Oxford University Press, UK, 2014).
L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio, and C. Landim, J. Stat. Mech. P10018 (2015).
The exponent $\alpha$ characterizing the decay of the inter-particle potential with separation is zero here, thus corresponding to the extreme case of long-range interactions, when the potential does not decay at all with distance.
S. Gupta, T. Dauxois, and S. Ruffo, J. Stat. Mech.: Theory Exp. P11003 (2013).
Non-additivity of LRI systems brings in complications in deriving the canonical equilibrium while starting from a microcanonical one, whereby the former describes fluctuations in a subsystem that is a part of and is in interaction with the rest of the system. We however invoke canonical equilibrium in the sense that although non-additive, an LRI system in contact with an external short-ranged heat bath via a small coupling will be in canonical equilibrium at a temperature given by that of the bath [@Baldovin:2006].
F. Baldovin and E. Orlandini, Phys. Rev. Lett. [**96**]{}, 240602 (2006).
S. T. Bramwell, P. C. W. Holdsworth, and J.-F. Pinton, Nature [**396**]{}, 552 (1998).
S. T. Bramwell, K. Christensen, J.-Y. Fortin, P. C. W. Holdsworth, H. J. Jensen, S. Lise, J. M. López, M. Nicodemi, J.-F. Pinton, and M. Sellitto, Phys. Rev. Lett. [**84**]{}, 3744 (2000).
S. T. Bramwell, T. Fennell, P. C. W. Holdsworth, and B. Portelli, Europhys. Lett. [**57**]{}, 310 (2002).
P. Bakucz, S. Willems, and B. A. Hoffmann, Acta Polytechnica Hungarica [**11**]{}, 73 (2014).
F. Douarche, S. Ciliberto, and A. Petrosyan, J. Stat. Mech. P09011 (2005).
A. Russomanno, S. Sharma, A. Dutta, and G. E. Santoro, J. Stat. Mech. P08030 (2015).
A. Dutta, A. Das, and K. Sengupta, Phys. Rev. E [**92**]{}, 012104 (2015).
---
abstract: 'We consider the single-file motion of colloidal particles interacting via short-ranged repulsion and placed in a traveling wave potential, that varies periodically in time and space. Under suitable driving conditions, a directed time-averaged flow of colloids is generated. We obtain analytic results for the model using a perturbative approach to solve the Fokker-Planck equations. The predictions show good agreement with numerical simulations. We find peaks in the time-averaged directed current as a function of driving frequency, wavelength and particle density and discuss possible experimental realizations. Surprisingly, unlike a closely related exclusion dynamics on a lattice, the directed current in the present model does not show current reversal with density. A linear response formula relating current response to equilibrium correlations is also proposed.'
author:
- Debasish Chaudhuri
- Archishman Raju
- Abhishek Dhar
title: 'Pumping single-file colloids: Absence of current reversal'
---
In single-file motion, colloidal particles are confined to move in a narrow channel such that they cannot overtake each other. This was first studied by Hodgkin and Keynes [@hodgkin1955] while trying to describe ion transport in biological channels. One of the most interesting features of single-file motion is the sub-diffusive behavior that individual particles exhibit, which has been extensively studied both theoretically [@rodenbeck1998; @lizana2008; @barkai2009; @roy2013] and experimentally [@Hahn1996; @Kukla1996; @Wei2000; @Lutz2004; @lin2005; @das2010]. An exciting question has been that of obtaining directed particle currents in such single-file systems in closed geometries, for example colloidal particles moving in a circular micro-channel. Using periodic forces that vanish on average, it has been possible to drive particle currents in a unidirectional manner. These are referred to as Brownian ratchets and may, for example, be achieved through continual switching on and off of a spatially asymmetric potential profile [@Julicher1997; @Reimann2002]. Such phenomena have been studied experimentally using suitably constructed electrical gating [@Rousselet1994; @Leibler1994; @Marquet2002], and with the help of laser tweezers [@Faucheux1995; @Faucheux1995a; @Lopez2008]. Intracellular motor proteins such as kinesin and myosin, which move on their respective filamentous tracks [@Reimann2002], or the ${\rm Na}^+$-${\rm K}^+$-ATPase pumps associated with cell membranes [@Gadsby2009], are examples of naturally occurring stochastic pumps. With a few exceptions [@Derenyi1995; @Derenyi1996; @Aghababaie1999; @Slanina2008a; @Slanina2009; @savel2004], most theoretical studies of Brownian ratchets have focused on systems of non-interacting particles.
Recently a model of a classical stochastic pump [@Chaudhuri2011; @Marathe2008; @Jain2007] has been proposed, similar to those used in the study of quantum pumps [@Brouwer1998; @Citro2003]. Unlike Brownian ratchets, in these pump models the colloidal particles are driven by a traveling wave potential. Thus, while typical ratchet models consider particles in a potential of the form $V(x,t)=f(x)g(t)$, the pump model considers a form such as $V(x,t)=V_0 \cos(q x -\omega t)$. In Ref. [@Chaudhuri2011], the dynamics of colloidal particles with short-ranged repulsive interactions, confined to move on a ring in the presence of an external space-time varying potential, was studied by considering a discretized version. In the discrete space model, particles moved on a lattice with the exclusion constraint that sites cannot have more than one particle, and hopping rates between neighboring sites depended on the instantaneous potentials on the sites. This roughly mimics the over-damped Langevin dynamics of hard-core particles that is expected to be followed by sterically stabilized colloids. As expected, the traveling wave potential resulted in a DC particle current in the ring. An intriguing result was that the system showed a current direction-reversal on increasing the density beyond half-filling. This behavior was an outcome of the particle-hole symmetry of the discrete model [@Chaudhuri2011]. Current reversal has been observed in subsequent theoretical studies [@Dierl2014; @pradhan14]. Further interesting properties of this model, including a detailed phase diagram, were recently obtained for the case where the system was connected to reservoirs and a biasing field applied [@Dierl2014]. General conditions for pumping to occur have recently been discussed in [@Rahav2008; @Mandal2011; @Asban2014].
An important question is as to how much of the interesting qualitative features, seen in the lattice model, remain valid for real interacting colloidal particles executing single-file Brownian dynamics. This is one of the main motivations of this Letter. Here we consider the effect of a traveling wave potential on such particles which can be described by Langevin dynamics.
![(Color online) A circular potential trap, which confines the motion of colloids in one dimension, is denoted by the white annulus. The colloidal particles are shown by dark spheres. The oscillatory profile indicates a time-frozen version of the traveling wave potential $V = V_0 \cos(\w t - q x)$.[]{data-label="fig:cartoon"}](cartoon){width="7"}
Numerical and some analytic results based on the solution of the Fokker-Planck equation are presented. We derive a linear response formula for the DC current in terms of equilibrium correlation functions. We find that, unlike the lattice version [@Chaudhuri2011; @Marathe2008; @Jain2007], there is no current-reversal in this system. A proposal for possible experimental realization of particle pumping in colloidal systems, using traveling waves, is discussed.
We consider $N$ colloidal particles that are confined to move on a one-dimensional ring of length $L$. The particles interact via potentials $U(x)$ that are sufficiently short ranged that we take them to be only between nearest neighbors. In addition, a weak traveling wave potential of the form $ V(x,t) = \l k_B T \cos(\w t-q x)$ with $\l < 1$, acts on each particle. Let $x_i$, $i=1,2,\ldots,N$ denote the positions of the particles along the channel. Then the over-damped Langevin equations of motion of the system are given by $$\begin{aligned}
\f{dx_i}{dt}&= -\mu \f{\partial \cal U}{\partial x_i}+ \eta_i,
\label{lange} \\
{\rm where} ~{\cal U}&=\sum_{i=1}^N V(x_i) + \sum_{i=1}^N U(|x_i - x_{i+1}|) \nn \end{aligned}$$ is the total potential energy of the system, and $\eta_i(t)$ is white Gaussian noise with $\la \eta_i \ra=0$ and $\la \eta_i(t) \eta_j(t') \ra = 2 D \d_{i,j}\d(t-t')$, where $D=\mu \kb T$ is the diffusion constant, $\mu$ the mobility, $\kb$ the Boltzmann constant, and $T$ the ambient temperature. We have taken periodic boundary conditions $x_{N+1}=L+x_1$. Denoting the joint probability distribution of the $N$-particle system as $P({\bf x},t)$ with ${\bf x}=(x_1, x_2,\dots,x_N)$, the Fokker-Planck equation governing its time evolution is $$\p_t P = \sum_i \p_{x_i} \left[ D\, \p_{x_i} P + \mu\, P\, \p_{x_i} {\cal U}\right]~. \label{eq:fp}$$ The one-point distribution for the $i^{\rm th}$ particle is given by $P^{(1)}_i(x_i,t) = \int d x_1 d x_2 \dots d x_{i-1} dx_{i+1}\ldots d x_N P({\bf x},t)$. Similarly let $P^{(2)}_{i,i+1} (x_i,x_{i+1})$ be the two-point distribution obtained from $P({\bf x})$ by integrating out all coordinates other than $x_i,x_{i+1}$. Let us then define the averaged distributions $P^{(1)}(x,t) = \f{1}{N} \sum_i P^{(1)}_i(x,t)$ and $P^{(2)}(x,x',t) = \f{1}{N} \sum_i P^{(2)}_{i,i+1}(x,x',t) $. Integrating the $N$-particle Fokker-Planck equation one finds a BBGKY hierarchy of equations, the first of which is $$\begin{aligned}
\p_t P^{(1)}(x,t) =& - \p_x J~, \label{intPeq} \\
{\rm where}\, J =& - D \p_x P^{(1)}(x,t) -\mu \p_x V P^{(1)}(x,t) \nn \\
& - \mu \int d x' \p_x U(|x-x'|) P^{(2)}(x,x',t) . \nn
\end{aligned}$$ The local density of particles is given by $ \rho(x,t) = N P^{(1)}(x,t)$, and the corresponding current density is $j(x,t)=N J(x,t)$. The time and space averaged directed current in the system is given by $$j_{\rm DC} = \f{1}{\tau L}\int_0^\tau dt \int_0^L dx~ j(x,t)~,$$ where $\tau=2 \pi /\omega$.
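As a rough illustration of how the Langevin equations (\[lange\]) can be integrated, the sketch below performs one Euler step for particles on a ring with nearest-neighbour WCA repulsion and the traveling wave force. It is a minimal sketch with $\mu=\be=\kb T=1$ and illustrative parameter values, not the production code used for the figures:

```python
import math, random

def wca_force(r, sigma=1.0):
    """Magnitude of the repulsive WCA force -U'(r) (in units of kT/sigma)."""
    if r >= 2.0**(1.0/6.0)*sigma:
        return 0.0
    sr6 = (sigma/r)**6
    return 24.0*(2.0*sr6*sr6 - sr6)/r

def euler_step(x, t, L, lam=0.5, w=math.pi/2, q=1.2*math.pi, D=1.0, dt=1e-4):
    """One Euler step of the over-damped dynamics with mu = beta = 1;
    x lists the particle positions on the ring [0, L), assumed ordered."""
    N = len(x)
    sq = math.sqrt(2.0*D*dt)
    f = [0.0]*N
    for i in range(N):                       # nearest-neighbour repulsion
        r = (x[(i + 1) % N] - x[i]) % L
        fr = wca_force(max(r, 1e-6))
        f[i] -= fr                           # particle i pushed backward,
        f[(i + 1) % N] += fr                 # its right neighbour forward
    for i in range(N):
        # external force -dV/dx for V = lam * cos(w t - q x)
        f[i] += -lam*q*math.sin(w*t - q*x[i])
        x[i] = (x[i] + f[i]*dt + sq*random.gauss(0.0, 1.0)) % L
    return x
```

Averaging the displacement per step over time and realizations then estimates the directed current $j_{\rm DC}$.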
![(Color online) Directed current $j_{\rm DC}$ as a function of ($a$) driving frequency $\w$ and ($b$) driving wave-number $q$ in the non-interacting system. The points denote Langevin dynamics simulation results of free particles and the solid lines are the analytic prediction of Eq. (\[jdc\_free\]). The parameters used are: number of particles $N=128$, mean density $\r_0=0.3$, potential strength $\l=0.5$, diffusion constant $D=1$ and temperature $k_BT=1$. In (a) $q=1.2\,\pi$ while in (b) $\omega=\pi/2$.[]{data-label="nonint"}](graph12.pdf){width="8.6"}
[*Non-interacting system:*]{} We first analyze the non-interacting system ($U=0$). The Fokker-Planck equation for $\r(x,t)$ is $$\p_t \r(x,t) + \p_x j =0,\quad j(x,t) = -D\left[\p_x \r(x,t)+\be V'\, \r(x,t)\right] \label{eq:jxt}$$ with $V'=\p_x V$. We expand the density in a perturbative series in the small parameter $\l$ as $$\r(x,t) = \r_0 + \sum_{k=1,2,\ldots} \l^k \r^{(k)}(x,t),$$ where $\r_0 = N/L$ is the mean density of particles. The mean directed current $j_{\rm DC}$ gets a contribution only from the drift part of the current in Eq. (\[eq:jxt\]), which to leading order is given by $-D \be V' \lambda \r^{(1)}$. The time evolution for $\r^{(1)}$ is given by $$\p_t \r^{(1)} - D \p_x^2 \r^{(1)} = \r_0 D \p_x^2 (\be V/\l)~,$$ and this has the time-periodic steady state solution $$\r^{(1)} = -\r_0 q^2 D\, {\rm Re}\left[\f{e^{i(\w t - qx)}}{Dq^2+i\w}\right]~.$$ Thus, to leading order in the perturbation series in $\l$, the time averaged directed current is $$\begin{aligned}
j_{\rm DC} &=& \f{1}{\tau L}\int_0^\tau dt \int_0^L dx~ (-D \be V' \l \r^{(1)})\nonumber\\
&=& \f{\l^2 \r_0}{2}\, \f{D^2 q^3 \w}{D^2 q^4 + \w^2}~. \label{jdc_free}\end{aligned}$$ As expected, the current depends linearly on the particle density $\rho_0$. The dependence on driving frequency $\omega$ and wave-number $q$ is plotted in Fig. (\[nonint\]), where we also show a comparison of the results from the analytic perturbative theory with those from direct numerical simulations for $\lambda=0.5$. We see that there is excellent agreement even for this, not very small, value of $\lambda$.
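The frequency dependence of the non-interacting current is a simple Lorentzian-type curve. Assuming the form $j_{\rm DC}=(\l^2\r_0/2)\,D^2q^3\w/(D^2q^4+\w^2)$ (the $a\to 0$ limit of the interacting mean-field result quoted below), its peak over $\w$ sits at $\w^*=Dq^2$, which a quick scan confirms:

```python
import math

def j_dc_free(w, q, rho0=0.3, lam=0.5, D=1.0):
    """Non-interacting directed current:
    j_DC = (lam^2 rho0 / 2) * D^2 q^3 w / (D^2 q^4 + w^2)."""
    return 0.5*lam**2*rho0*D**2*q**3*w/(D**2*q**4 + w**2)

q = 1.2*math.pi
ws = [0.01*k for k in range(1, 4000)]
wstar = max(ws, key=lambda w: j_dc_free(w, q))
# analytically, the maximum over w is at w* = D q^2
```

The peak frequency grows with the driving wave-number, which is the qualitative trend seen in Fig. (\[nonint\])(a).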
![(Color online) Directed current $j_{\rm DC}$ as a function of ($a$) mean density $\r_0$, and ($b$) wave number $q$ in the interacting system. The points denote Langevin dynamics simulation results of particles interacting via WCA ($\circ$), soft-core ($\triangle$), and Fermi-function step potentials ($\triangledown$). The solid lines in ($a$) and ($b$) are plots of Eq. (\[jdc\_mf\]) with $a=0.75$. Parameters are: $\w=14$, and $\l=0.5$; in ($a$) $q=1.2\pi$, in ($b$) $\r_0 = 0.55$. []{data-label="fig:jdcro"}](graph34.pdf){width="8.6"}
[*Interacting system:*]{} Let us consider a hard-core interaction between the particles defined through the potential $U(x) = \infty$ if $|x| < a$ and $0$ otherwise. Since $N P^{(1)}=\rho(x,t)$ gives the density of particles and defining the pair distribution function $g(x,x',t)$ through the relation $N P^{(2)}=\rho(x,t) g(x,x',t)$, we see that Eq. (\[intPeq\]) can equivalently be written as $$\begin{aligned}
\p_t &\rho(x,t) = D \p^2_x \rho + \mu \p_x \left[ V(x,t) \rho(x,t)\right] \nn \\
& + \mu \p_x \left[ \rho(x,t) \int d x' \p_x U(|x-x'|) g(x,x',t)\right]~ . \label{intreq}
\end{aligned}$$ On expanding $\rho(x,t)$ and $g(x,x',t)$ as perturbation series in $\lambda$, we find that the resulting equations [*do not close*]{} at successive orders. This is different from the case of the discrete systems studied in [@Chaudhuri2011; @Marathe2008; @Jain2007], where the perturbative solution works even in the presence of interactions. We thus need to make further approximations before applying the perturbation theory. It turns out that a mean-field description of the interaction term in Eq. (\[intreq\]) makes the problem tractable. The pair correlation function $g(x,x',t)$ gives the probability of finding a particle at $x'$ given that there is a particle at $x$, while $-\p_x U(|x-x'|)$ is the force on the particle at $x$ due to a particle at $x'$. Hence the integral ${\cal I}=\int d x' [- \p_x U(|x-x'|)] g(x,x',t)$ has the interpretation of being the average force on a particle located at $x$. Next we note that for a hard rod centered at $x$, the force is localized at the points $x\pm a $, hence we can approximate the average force by the pressure difference between these points, i.e., ${\cal I} = \Pi (x-a)-\Pi (x+a)$. Here we assume that $\Pi(x,t)$ is the instantaneous local equilibrium pressure; finally, we relate this pressure to the density $\rho(x,t)$ through the equilibrium relation $\Pi=k_B T \rho/[1-\rho a]$ [@Chowdhury2000]. Using this form of the interaction term and expanding $\rho(x,t)$ to first order in $\l$, the time evolution equation for $\rho^{(1)}$ is $$\begin{aligned}
\p_t \r^{(1)} &- D \p_x^2 \r^{(1)} = \r_0 D \p_x^2 (\be V/\lambda) \nn \\
&+ D \alpha \p_x \left[ \rho^{(1)}(x+a,t) - \rho^{(1)}(x-a,t) \right]~, \nn \end{aligned}$$ where $\alpha= \rho_0/(1-\rho_0 a)^2$. The time-periodic steady-state solution of this equation is $$\rho^{(1)} = \rho_0\, q^2 D\, {\rm Re}\left[ \frac{e^{i(q x - \w t)}}{D[q^2 + 2\alpha q \sin(q a)] - i\w} \right]~.$$ This leads, to order $\l^2$ in the perturbation series, to the following average current $$j_{\rm DC} = \f{\l^2 \r_0}{2} \f{D^2 q^3 \w}{D^2 [q^2+2 \a q \sin(q a)]^2 + \w^2}~.
\label{jdc_mf}$$ This is the first main result of our paper. We now see a non-trivial dependence on the particle density $\rho_0$ and the wave-number $q$. For a fixed density there is an enhancement of the particle current at some $q$ values \[see Fig. (\[fig:jdcro\])\]. The current vanishes at the full-packing density $\rho_0=1/a$, as expected. However, unlike the lattice model, [*there is now no current reversal*]{}. In the discrete lattice model of the symmetric exclusion process driven by a potential $\l\cos(\w t-\phi n)$, with $n$ a lattice site and $\phi=q a$ [@Chaudhuri2011], $$j_{\rm DC} = \l^2 f_0^2\, (q_0 - 2 k_0)~, \label{eq:excl}$$ where $f_0=D/a^2$, and $(q_0 -2 k_0) =\eta(1-\eta)(1-2\eta)$ in the large $L/a$ limit, with $\eta=\rho_0 a$ the packing fraction. That dynamics has particle-hole symmetry, leading to current reversal at $\eta=1/2$. [In the continuum dynamics performed by colloidal particles, there is no such particle-hole symmetry. Note that]{} the continuum limit of Eq. (\[eq:excl\]), with $a/L \to 0$, $\phi = q a \ll 1$ and $\eta=\r_0 a \ll 1$, leads to the result Eq. (\[jdc\_free\]) for non-interacting [colloids]{}. [Presumably, the correct discrete model that one needs to consider, in order to obtain the correct continuum limit, is one in which particles occupy a large but finite number of sites, with the appropriate limits then taken.]{}
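For orientation, Eq. (\[jdc\_mf\]) is straightforward to evaluate numerically; the sketch below (parameter values are illustrative only) exhibits the non-monotonic density dependence and the suppression of the current near close packing, and with $a \to 0$ it reduces to the non-interacting result:

```python
import numpy as np

def j_dc_mf(lam, rho0, q, omega, D=1.0, a=0.75):
    """Mean-field DC current of Eq. (jdc_mf) for hard rods of length a."""
    alpha = rho0 / (1.0 - rho0 * a) ** 2   # from beta*Pi = rho/(1 - rho*a)
    denom = D**2 * (q**2 + 2 * alpha * q * np.sin(q * a))**2 + omega**2
    return 0.5 * lam**2 * rho0 * D**2 * q**3 * omega / denom

# density scan at fixed drive: the current is non-monotonic in rho0
rho = np.linspace(0.05, 1.30, 200)   # close packing is 1/a = 4/3 here
j = j_dc_mf(lam=0.1, rho0=rho, q=1.0, omega=1.0)
print(rho[np.argmax(j)], j.max())
```

Setting $a \to 0$ removes the $\alpha$ term and leaves $j_{\rm DC} = \l^2 \r_0 D^2 q^3 \w / [2(D^2 q^4 + \w^2)]$, the non-interacting current.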
[*Langevin dynamics simulations:*]{} To test our analytic predictions, we performed Langevin dynamics simulations of the model using Euler integration of Eq. (\[lange\]). The time scale is set by $\t_D = a^2/D$. For the non-interacting system, we used an integration time-step $\d t = 10^{-2} \t_D$. For the interacting single-file case, in order to avoid unphysical particle crossings at large densities, we used $\d t = 10^{-5}\t_D$. A total of $N=128$ particles were simulated. The particle flux is averaged over the system, and over a time period $100 \tau$ where $\tau=2\pi/\w$. The particle current is further averaged over $100$ realizations. The fluctuations over realizations provide the errors in the measured currents. To check the robustness of our results, we considered a number of smooth potentials to model the short-ranged inter-particle repulsion: (a) the Weeks-Chandler-Andersen (WCA) potential [@Weeks1971], $\be U(x) = 4 [ (\s/x)^{12} - (\s/x)^6 + 1/4]$ if $|x|<2^{1/6}\s$ and $U=0$ otherwise, (b) a soft-core potential, $\be U(x) = (\s/x)^{12} - 2^{-12}$ if $|x|<2\s$ and $U=0$ otherwise, and (c) a Fermi-function step potential, $\be U(x) = A/[\exp((x-a)/w)+1]$ with $A=100$, $w=0.02 \s$ and $a=0.75 \s$. In the simulations $\kb T = 1/\be$ and $\s$ set the energy and length scales, respectively. The simulation data for all three potentials agree with each other within numerical errors (Fig. \[fig:jdcro\]($a$)). They show a non-monotonic variation with density, with maximal current near $\r_0\s=0.55$. A plot of Eq. (\[jdc\_mf\]) with $a=0.75\s$ shows qualitative agreement with the numerical data. Fig. \[fig:jdcro\]($b$) shows $j_{\rm DC}$ as a function of the driving wave-number $q$ in a system of particles interacting via the Fermi-function step potential. Multiple maxima in $j_{\rm DC}$ appear, in qualitative agreement with Eq. (\[jdc\_mf\]). A comparison with Eq. 
(\[jdc\_free\]) shows another intriguing feature: the directed current in the presence of repulsive interactions can be higher than that of free particles. [ *Linear response theory*]{}: Even though the current response is ${\cal O}(\lambda^2)$ and hence nonlinear in the perturbation, we note that it was obtained from the first-order change in the density and hence should be calculable from linear response theory. We now show that it is indeed possible to express the current response to the perturbing traveling wave potential in terms of equilibrium correlation functions of various forces, using linear response theory. Let us write the equation of motion in the form $\dot{x}_i=\mu[F_i(t)+ f_i]+\eta_i$, where $F_i(t)=-\p_{x_i}V(x_i,t)$ and $f_i=-\p_{x_i}U(|x_i-x_{i+1}|)-\p_{x_i} U(|x_i-x_{i-1}|)$ is the total force on the $i^{\rm th}$ particle from its neighbors. We see that the total current is given by $\int_0^L dx \la j(x) \ra = \sum_{i=1}^N \la \dot{x}_i \ra = \mu \sum_{i=1}^N \la F_i \ra$, where $\la F_i \ra = \int d {\bf x} F_i(x_i,t) P({\bf x},t)$. The long-time solution $P({\bf x},t)$ can be obtained from perturbation theory. The Fokker-Planck equation for $P$, given by Eq. (\[eq:fp\]), can be expressed as $\p_t P = \cLo P + \cLf P$, where $\cLo = \sum_i [D \p_{x_i}^2 - \mu \p_{x_i} f_i ]$ and the external perturbation is $\cLf=-\sum_i \mu \p_{x_i} F_i$. Writing $P = P_0 + P_1$, where $P_0 = \exp[-\be \sum_i U(x_i, x_{i+1})]/Z$ is the equilibrium state, one gets, to ${\cal O}(\l)$, $P_1({\bf x},t) = \int_{-\infty}^t d t' e^{\cLo (t-t')} \cLf P_0({\bf x})$. Using this, to leading order in $\l$, one obtains $$\begin{aligned}
&\la F_i \ra = -\mu \int_{-\infty}^t dt' \int d{\bf x} F_i(t) e^{\cLo (t-t')} \sum_j \p_{x_j} [F_j(t') P_0({\bf x})] \nn \\
&= -\mu \int_{0}^\infty du \big\la F_i(t) e^{\cLo u} \sum_j \left[\p_{x_j} F_j(t-u) +\beta F_j(t-u) f_j\right] \big\ra_{0}, \nn\end{aligned}$$ where $\la\ldots\ra_0$ refers to an equilibrium average, and the time-dependence in $F_i(t)=F_i(x_i,t)$ only refers to the explicit time-dependence of the external force. Using the fact that $F_i=-\lambda k_B T q \sin (q x_i - \omega t)$ and that $\la A(t) B(0) \ra= \int d {\bf x} A({\bf x}) e^{\cLo t} B({\bf x}) P_0$ we get $$\begin{aligned}
&\la F_i \ra =-\l^2 q^2 \mu (k_B T)^2 \int_0^\infty du \big\la \sin (q x_i(u)-\omega t) \nn \\
& \times \sum_j \left[q \cos (q x_j -\omega (t-u)) + \beta f_j \sin (q x_j -\omega (t-u) ) \right] \big\ra_0~. \nn \end{aligned}$$ Finally, after averaging over a time period we get for the DC current: $$\begin{aligned}
j_{DC}&=\sum_{i,j} \f{-(\l q \mu k_B T)^2}{2 L} \int_0^\infty dt \Big[ q \big\la \sin [q (x_i(t)-x_j)-\omega t ] \big\ra_0 \nn \\
& + \big\la \beta f_j \cos [q (x_i(t)-x_j)-\omega t] \big\ra_0 \Big]~. \label{LR}\end{aligned}$$ This linear response formula, relating the DC current to equilibrium correlation functions, is the second main result of this paper.
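As a check of Eq. (\[LR\]): for non-interacting particles $f_j = 0$, only the $i=j$ term survives the equilibrium average, and free diffusion gives $\la \sin[q(x(t)-x(0))-\w t]\ra_0 = -\sin(\w t)\, e^{-q^2 D t}$, since $x(t)-x(0)$ is Gaussian with variance $2Dt$. Evaluating the time integral numerically (a sketch, with $\mu k_B T = D$ and $N/L = \r_0$) recovers the non-interacting current of Eq. (\[jdc\_free\]):

```python
import numpy as np

D, q, w, lam, rho0 = 1.0, 1.2, 0.7, 0.1, 0.5

# Free-diffusion average of the i = j correlator in Eq. (LR):
#   <sin[q(x(t)-x(0)) - w t]>_0 = -sin(w t) * exp(-q^2 D t)
t = np.linspace(0.0, 60.0, 400_001)
f = q * (-np.sin(w * t) * np.exp(-q**2 * D * t))
dt = t[1] - t[0]
time_integral = dt * (0.5 * (f[0] + f[-1]) + f[1:-1].sum())   # trapezoid rule

# prefactor -(lam q mu kB T)^2 N / (2L), with mu kB T = D and N / L = rho0
j_lr = -(lam * q * D)**2 * rho0 / 2 * time_integral

# non-interacting mean-field result for comparison
j_free = lam**2 * rho0 * D**2 * q**3 * w / (2 * (D**2 * q**4 + w**2))
print(j_lr, j_free)
```

The two numbers agree to the accuracy of the quadrature, confirming that the linear-response formula reduces correctly in the free limit.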
[*Possible experiment:*]{} Using oscillating mirrors, it is possible to move a strongly focused infrared laser beam along a circle so as to constrain $\s \approx 1~\mu$m polystyrene spheres to move along that circle [@Faucheux1995]. The steric interaction between polystyrene beads would lead to single-file motion. Using techniques similar to those of [@Faucheux1995], one can generate a cosine potential by passing the laser through an appropriately graded filter. Finally, a traveling wave potential can be formed by rotating the filter at the required frequency. If we choose the driving wavelength to be a few particle sizes, so that $qa \approx 1$, then the optimal driving frequency is $\omega \approx D q^2 \approx D/a^2 \approx 1~{\rm Hz}$, using $D \approx 1 \mu{\rm m}^2{\rm s}^{-1}$ at room temperature. This leads to a current $j_{\rm DC}\approx 0.05 {\rm s}^{-1}$ at a density $\rho_0 a \approx 0.5$. This is comparable to the currents obtained using the flashing ratchet mechanism in [@Faucheux1995].
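Plugging the numbers quoted above into Eq. (\[jdc\_mf\]) reproduces this order of magnitude. The driving amplitude is not fixed by the estimate above; the value $\l = 2$ (a potential amplitude of a couple of $\kb T$) used below is our assumption:

```python
import math

D, a = 1.0, 1.0          # um^2/s and um (particle size sigma ~ a)
q = 1.0 / a              # q * a ~ 1
w = D * q**2             # optimal drive, ~1 rad/s
rho0 = 0.5 / a           # packing fraction rho0 * a = 0.5
lam = 2.0                # drive amplitude in units of kB*T (assumed)

alpha = rho0 / (1.0 - rho0 * a)**2
j_dc = 0.5 * lam**2 * rho0 * D**2 * q**3 * w / (
    D**2 * (q**2 + 2 * alpha * q * math.sin(q * a))**2 + w**2)
print(j_dc)   # ~0.05 per second
```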
In summary, we investigated the dynamics of interacting colloidal particles confined to move in a narrow circular channel and driven by a traveling wave potential. Using a combination of mean-field-type assumptions and perturbation theory, analytic results were obtained for the average particle current in the channel. These compare quite well with simulation results. We have also proposed a linear response formula relating the current response to equilibrium correlations. This relation opens up further analytic possibilities. The current shows peaks as a function of the driving frequency, the wave number, and the particle density. The current vanishes as we approach the close-packing limit and, rather surprisingly, shows no current reversal, unlike what is seen in studies of discrete versions of this model [@Chaudhuri2011]. From the point of view of experiments, the pumping of colloidal particles in narrow channels using traveling wave potentials looks very accessible and could have potential applications.
DC and AR thank RRI Bangalore for hospitality where this work was initiated. DC thanks MPI-PKS Dresden for hosting him at various stages of this work, and ICTS-TIFR Bangalore for hospitality while writing the paper.
[10]{}
A. L. Hodgkin and R. Keynes, The Journal of Physiology [**128**]{}, 61 (1955).
C. R[ö]{}denbeck, J. K[ä]{}rger, and K. Hahn, Physical Review E [**57**]{}, 4382 (1998).
L. Lizana and T. Ambj[ö]{}rnsson, Physical Review Letters [**100**]{}, 200601 (2008).
E. Barkai and R. Silbey, Physical Review Letters [**102**]{}, 050602 (2009).
A. Roy, O. Narayan, A. Dhar, and S. Sabhapandit, Journal of Statistical Physics [**150**]{}, 851 (2013).
K. Hahn, J. Kärger, and V. Kukla, Phys. Rev. Lett. [**76**]{}, 2762 (1996).
V. Kukla, J. Kornatowski, D. Demuth, I. Girnus, H. Pfeifer, L. V. C. Rees, S. Schunk, K. K. Unger, and J. Kärger, Science [**272**]{}, 702 (1996).
Q. Wei, C. Bechinger, and P. Leiderer, Science [**287**]{}, 625 (2000).
C. Lutz, M. Kollmann, P. Leiderer, and C. Bechinger, J. Phys. Cond. Matt. [ **16**]{}, S4075 (2004).
B. Lin, M. Meron, B. Cui, S. A. Rice, and H. Diamant, Physical Review Letters [**94**]{}, 216001 (2005).
A. Das, S. Jayanthi, H. S. M. V. Deepak, K. V. Ramanathan, A. Kumar, C. Dasgupta, and A. K. Sood, ACS nano [**4**]{}, 1687 (2010).
F. Jülicher, A. Ajdari, and J. Prost, Reviews of Modern Physics [**69**]{}, 1269 (1997).
P. Reimann, Physics Reports [**361**]{}, 57 (2002).
J. Rousselet, L. Salome, A. Ajdari, and J. Prost, Nature [**370**]{}, 446 (1994).
S. Leibler, Nature [**370**]{}, 412 (1994).
C. Marquet, A. Buguin, L. Talini, and P. Silberzan, Physical Review Letters [**88**]{}, 168301 (2002).
L. Faucheux, L. Bourdieu, P. Kaplan, and A. Libchaber, Physical Review Letters [**74**]{}, 1504 (1995).
L. P. Faucheux, G. Stolovitzky, and A. Libchaber, Phys. Rev. E [**51**]{}, 5239 (1995).
B. Lopez, N. Kuwada, E. Craig, B. Long, and H. Linke, Physical Review Letters [**101**]{}, 220601 (2008).
D. C. Gadsby, A. Takeuchi, P. Artigas, and N. Reyes, Philosophical transactions of the Royal Society of London. Series B, Biological sciences [**364**]{}, 229 (2009).
I. Derényi and T. Vicsek, Physical Review Letters [**75**]{}, 374 (1995).
I. Derenyi and A. Ajdari, Physical Review E [**54**]{}, R5 (1996).
Y. Aghababaie, G. Menon, and M. Plischke, Physical Review E [**59**]{}, 2578 (1999).
F. Slanina, EPL (Europhysics Letters) [**84**]{}, 50009 (2008).
F. Slanina, Physical Review E [**80**]{}, 061135 (2009).
S. Savel’ev, F. Marchesoni, and F. Nori, Phys. Rev. E [**70**]{}, 061107 (2004).
D. Chaudhuri and A. Dhar, EPL (Europhysics Letters) [**94**]{}, 30006 (2011).
R. Marathe, K. Jain, and A. Dhar, Journal of Statistical Mechanics: Theory and Experiment [**2008**]{}, P11014 (2008).
K. Jain, R. Marathe, A. Chaudhuri, and A. Dhar, Physical Review Letters [ **99**]{}, 190601 (2007).
P. Brouwer, Physical Review B [**58**]{}, R10135 (1998).
R. Citro, N. Andrei, and Q. Niu, Physical Review B [**68**]{}, 165312 (2003).
M. Dierl, W. Dieterich, M. Einax, and P. Maass, Physical Review Letters [ **112**]{}, 150601 (2014).
R. Chatterjee, S. Chatterjee, P. Pradhan, and S. S. Manna, Phys. Rev. E [ **89**]{}, 022138 (2014).
S. Rahav, J. Horowitz, and C. Jarzynski, Physical Review Letters [**101**]{}, 140602 (2008).
D. Mandal and C. Jarzynski, Journal of Statistical Mechanics: Theory and Experiment [**2011**]{}, P10006 (2011).
S. Asban and S. Rahav, Physical Review Letters [**112**]{}, 050601 (2014).
D. Chowdhury and D. Stauffer, [*Principles of Equilibrium Statistical Mechanics*]{} (Wiley-VCH, Weinheim, 2000).
J. D. Weeks, D. Chandler, and H. C. Andersen, The Journal of Chemical Physics [**54**]{}, 5237 (1971).
---
abstract: 'Mass determinations are difficult to obtain and still frequently characterised by deceptively large uncertainties. We review below the various mass estimators used for star clusters of all ages and luminosities. We highlight a few recent results related to (i) very massive old star clusters, (ii) the differences and similarities between star clusters and cores of dwarf elliptical galaxies, and (iii) the possible strong biases on mass determination induced by tidal effects.'
author:
- Georges Meylan
title: Mass Determinations of Star Clusters
---
Introduction
============
Open and globular clusters are respectively located, because of their local definition, in the plane and in the halo of our Galaxy. This review will not make any formal distinction between these two kinds of star clusters since (i) the above definitions apply only to our Galaxy and (ii) we do not know if there are any genuine differences in their formation mechanisms. Consequently, we may use the words “open” and “globular”, but we shall essentially mean “star clusters”, whether they are light or massive, young or old. This approach is also justified by the fact that, in simulations of the Galactic globular cluster system, the dynamical evolution of an initial mass distribution, of either Gaussian or power-law type, always leads to a predicted distribution consistent with observations: light star clusters dissolve rather quickly, while heavy ones survive longer (Baumgardt 1998, 2001). See also Zhang & Fall (1999) in the case of the populations of star clusters of the Antennae Galaxies.
Individual masses of star clusters are not easy to measure. As an example, we shall quote the four different mass estimates given for the giant Galactic globular cluster by Ogorodnikov (1976): from a few stellar radial velocities, $\simeq$ 7 $\times$ $10^5 \mathcal{M}_{\odot}$; from the same radial velocities corrected for the effect of the internal rotation of the cluster, $\simeq$ 3 $\times$ $10^6 \mathcal{M}_{\odot}$; from low-quality proper motions measured on photographic plates, $\simeq$ 3 $\times$ $10^7 \mathcal{M}_{\odot}$; and from gravitational focusing from the same proper motions, $\simeq$ 2.8 $\times$ $10^8 \mathcal{M}_{\odot}$. These four approaches provide results differing by 3 orders of magnitude! Fortunately, things have improved since then, but even the best mass determinations of star clusters remain rather uncertain, typically by a factor of 2. It is interesting to realize that the age of the Universe, intuitively more difficult to determine than the mass of a nearby star cluster, is more accurately known, since the Hubble constant is now estimated at better than the 20% level.
The Giant Galactic Clusters and
=================================
Significant improvements in the quality and numbers of radial velocities have allowed the determinations of reliable mass estimates through the use of various dynamical models. On the one hand, there is the parametric approach. For example, Gunn & Griffin (1979) developed multi-mass models whose distribution functions $f$ depend on the stellar energy per unit mass $\varepsilon$ and the specific angular momentum $l$. Such models are spherical and have a radial anisotropic velocity dispersion ($\overline {v_r^2}$ $\not=$ $\overline {v_{\theta}^2}$ = $\overline {v_{\phi}^2}$). They are called King-Michie models and associate the lowered Maxwellian of the King model with the anisotropy factor of the Eddington models: $$f(\varepsilon,l) \propto
[\exp(-2j^2\varepsilon)-\exp(-2j^2\varepsilon_t)] ~
\exp(-j^2 l^2/r_a^2) \eqno(1)$$ On the other hand, one never knows which of the assumptions underlying such a model are adhered to by the real system and which are not (Dejonghe & Merritt 1992). These arguments suggest that it might be profitable to interpret kinematical data from globular clusters in an entirely different manner, placing much stronger demands on the data and making fewer ad hoc assumptions about the distribution function $f$ as well as the gravitational potential $\Phi$. Ideally, the unknown functions should be generated non-parametrically from the data, in an approach pioneered by Merritt (1993, 1996). We provide hereafter the results of two studies, parametric and non-parametric, respectively, of the globular cluster , both studies using exactly the same observational data, viz., the surface brightness profile and 469 stellar radial velocities.
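For concreteness, Eq. (1) can be transcribed directly; the sketch below is written up to normalisation, with illustrative values of the scale parameters $j$, $\varepsilon_t$ and $r_a$ (which in a real fit are adjusted cluster by cluster):

```python
import numpy as np

def f_king_michie(eps, ell, j=1.0, eps_t=1.0, r_a=2.0):
    """King-Michie distribution function of Eq. (1), up to a constant.

    eps   : stellar energy per unit mass (f vanishes for eps >= eps_t)
    ell   : specific angular momentum
    r_a   : anisotropy radius (r_a -> infinity recovers the isotropic King model)
    """
    lowered = np.exp(-2 * j**2 * eps) - np.exp(-2 * j**2 * eps_t)  # lowered Maxwellian
    aniso = np.exp(-(j * ell / r_a)**2)                            # Eddington anisotropy
    return np.where(eps < eps_t, lowered * aniso, 0.0)
```

The lowered Maxwellian truncates the cluster at the energy $\varepsilon_t$ (stars above it have escaped), while the anisotropy factor suppresses high-angular-momentum orbits beyond $r_a$.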
$\bullet$ Parametric: a simultaneous fit of these radial velocities and of the surface brightness profile to a multi-mass King-Michie dynamical model provides mean estimates of the total mass for equal to = 5.1 $\pm$ 0.6 $\times$ , with a corresponding mean mass-to-light ratio = 4.1 (Meylan 1995).
$\bullet$ Non-parametric: the potential and mass distribution inferred in this method provide a total mass for equal to = 2.9 $\pm$ 0.4 $\times$ , with a corresponding mean mass-to-light ratio = 2.3 (Merritt 1997).
Universal Mass-to-Light Ratio?
==============================
With a similar parametric approach applied to the observational constraints obtained for NGC 1835, an old Large Magellanic Cloud globular cluster, King-Michie models give = 1.0 $\pm$ 0.3 $\times$ , corresponding to a mean mass-to-light ratio = 3.4 $\pm$ 1.0 (Meylan 1988, Dubath & Meylan 1990).
These studies show that when the same kind of dynamical models (King-Michie) constrained by the same kind of observations (surface brightness profile and central value of the projected velocity dispersion) are applied to an old and bright Magellanic globular cluster, viz., NGC 1835, the results seem similar to those obtained in the case of Galactic globular clusters. Consequently, the rich old globular clusters in the Magellanic Clouds could be quite similar in mass and to the rich globular clusters in the Galaxy.
The Final Word from Thousands of Stellar Space Motions
======================================================
Recently, the amount of data related to stellar motions, viz., radial velocities and proper motions, has increased significantly. In a pioneering ground-based study of , van Leeuwen (2000) measured the individual proper motions of 7853 probable member stars, from photographic plates with epochs ranging from 1931 through 1935 and 1978 through 1983. An internal proper motion dispersion of 1.0 to 1.2 , equivalent to 25 to 29 for a distance of 5.1 kpc, is found for members near the cluster center. This dispersion decreases to 0.3 , equivalent to 7.5 in the outer regions. The full dynamical interpretation of these proper motions, combined with a few thousand radial velocities, is in preparation by the same group.
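The equivalences quoted above follow from the standard conversion $v_t = 4.74\,\mu\,d$ ($v_t$ in km s$^{-1}$, $\mu$ in arcsec yr$^{-1}$, $d$ in pc); a sketch, assuming the dispersions are quoted in mas yr$^{-1}$ (the units are garbled in this copy):

```python
def pm_to_kms(mu_mas_per_yr, d_kpc):
    """Tangential velocity from a proper motion: v = 4.74 * mu[arcsec/yr] * d[pc]."""
    return 4.74 * (mu_mas_per_yr * 1e-3) * (d_kpc * 1e3)

print(pm_to_kms(1.0, 5.1), pm_to_kms(1.2, 5.1))   # ~24 and ~29 km/s at 5.1 kpc
print(pm_to_kms(0.3, 5.1))                        # ~7.3 km/s in the outer regions
```

The small differences from the quoted 25 and 7.5 km s$^{-1}$ presumably reflect rounding or a slightly different adopted distance.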
Another group, using HST/WFPC2 images, has obtained slightly better measurements of proper motions for about 15,000 stars in the core of , within a time baseline of only 4 years, with 3 epochs separated by 2 years. See Anderson, King, & Meylan (1998) for a progress report after the second epoch. These data, combined with the radial velocities of about 5,000 stars, will provide, as in the case of , an insight into the dynamics of the core of , with fundamental by-products such as the cluster distance and photometry of variable stars and binaries.
With their new proper-motion techniques and software (Anderson & King 2000) applied to the HST/WFPC2 archive data for the first epoch and to their own observations for the second epoch, the members of the same group will soon have available similar sets of proper motions for about ten of the richest and nearest Galactic globular clusters.
The above studies, with thousands of proper motions and radial velocities constraining dynamical models with three integrals of the motion as well as non-parametric ones, will allow a significant step forward in our understanding of the internal dynamics of massive star clusters.
Mayall II $\equiv$ G1, a Giant Globular Cluster in M31
======================================================
Mayall II $\equiv$ G1 is one of the brightest globular clusters belonging to M31, the Andromeda galaxy. Observations with HST/WFPC2 provide photometric data for the $I$ vs. $V-I$ and $V$ vs. $V-I$ color-magnitude diagrams. They reach stars with magnitudes fainter than $V$ = 27 mag, with a well populated red horizontal branch at about $V$ = 25.3 mag (Meylan 2001). From model fitting, that study determines a rather high mean metallicity of \[Fe/H\] = – 0.95 $\pm$ 0.09, somewhat similar to . In order to determine the true measurement errors, Meylan (2001) have carried out artificial star experiments. They find a larger spread in $V-I$ than can be explained by the measurement errors. They attribute this to an intrinsic metallicity dispersion among the stars of G1, which may be the consequence of self-enrichment during the early stellar/dynamical evolutionary phases of this cluster. So far, only , the giant Galactic globular cluster, has been known to exhibit such an intrinsic metallicity dispersion. This is a phenomenon certainly related to the deep potential well of each of these two star clusters, which are massive enough to retain the gas expelled by the first generations of very massive stars.
The structural parameters of G1 are deduced from the same HST/WFPC2 data. Its surface brightness profile provides its core radius = 0.14 = 0.52 pc, its tidal radius $\simeq$ 54 = 200 pc, and its concentration $\simeq$ 2.5. Such a high concentration indicates the probable collapse of the core of G1. KECK/HIRES observations provide the central velocity dispersion = 25.1 , with = 27.8 once aperture corrected.
Three estimates of the total mass of this globular cluster can be obtained. The King-model mass is ${\cal M}_K$ = 15 $\times$ with $\simeq$ 7.5, and the Virial mass is ${\cal M}_{Vir}$ = 7.3 $\times$ with $\simeq$ 3.6. The King-Michie model fitted simultaneously to the surface brightness profile and the central velocity dispersion value provides mass estimates ranging from ${\cal M}_{KM}$ = 14 $\times$ to 17 $\times$ (Meylan 2001).
  Mass          Mayall II
  ------------- ----------- -----
  King             15         4.3
  Virial            7.3       2.9
  King-Michie     13--18      5.1
The spread between the three mass determination values listed in Table 1 gives a better idea of their true uncertainties than their individual (smaller) formal errors. The masses of both clusters are known to about a factor of two. Although not very precise, all of these mass estimates make G1 more than twice as massive as , the most massive Galactic globular cluster. G1 is unique in M31 because of its projected location 40 kpc away from the center of the galaxy, but there are at least three other bright globular clusters in this galaxy which have velocity dispersions larger than 20 , implying rather large masses.
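For orientation, the Virial estimate above can be sketched with a one-line calculator. The relation ${\cal M} \approx 2.5\,\langle v^2\rangle\, r_h/G$, with $\langle v^2\rangle = 3\sigma_{\rm los}^2$ for an isotropic system (Spitzer 1987), is a common approximation, not necessarily the exact estimator used in the studies quoted above; the half-light radius adopted below ($r_h = 4$ pc) is purely an assumed illustrative value:

```python
G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def virial_mass(sigma_los, r_h):
    """M ~ 2.5 <v^2> r_h / G with <v^2> = 3 sigma_los^2 (Spitzer-style estimate)."""
    return 2.5 * 3.0 * sigma_los**2 * r_h / G

# G1: sigma ~ 27.8 km/s (aperture-corrected); r_h = 4 pc is an assumed radius
print(virial_mass(27.8, 4.0))   # a few 10^6 solar masses
```

The result lands at a few $\times 10^6$ solar masses, consistent with the order of magnitude discussed above.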
On the Origin of the Most Massive Globular Clusters
===================================================
Such large masses are related to the metallicity spread the origin of which is still unknown. It may come either (i) from metallicity self-enrichment in a massive globular cluster, (ii) from primordial metallicity inhomogeneity in a binary proto-cluster cloud, followed by early merger, or (iii) from the fact that the present globular cluster is merely the remaining core of a previously larger entity, e.g., originally a dwarf galaxy subsequently pruned by dynamical evolution.
Although is the best studied globular cluster, because of its size and relative proximity ($\sim$ 5.1 kpc), many conundrums remain: viz., (i) the metallicity spread among stars (Freeman & Norris 1981), (ii) its double Main Sequence (Anderson 1997), (iii) the different kinematics between metal-rich and -poor stars (Norris 1997), and (iv) a correlation between metallicity and age, implying that this cluster enriched itself over a timescale $\sim$ 3 Gyr (Hughes & Wallerstein 2000 and Hilker & Richtler 2000).
  Parameters                       Mayall II                NGC 205
  -------------------------------- ------------------------ ------------------------
  central velocity dispersion      27.8                     30
  absolute visual magnitude        –10.94 mag               –9.6 mag
  core radius                      0.52 pc                  0.35 pc
  central surface brightness       13.47 mag/arcsec$^{2}$   12.84 mag/arcsec$^{2}$
Let us consider (see Table 2 and Meylan 2001) the following four parameters relative to G1: the central velocity dispersion = 28 , the integrated absolute visual magnitude = – 10.94 mag, the core radius = 0.52 pc, and the central surface brightness = 13.47 mag arcsec$^{-2}$. The positions of G1 in the different diagrams defined by Kormendy (1985), using the above four parameters, always put it on the sequence defined by globular clusters, and definitely away from the other sequences defined by elliptical galaxies, bulges, and dwarf spheroidal galaxies (Fig. 1). The same is true for (Meylan 2001).
Little is known about the positions, in these diagrams, of the nuclei of nucleated dwarf elliptical galaxies, which could be the progenitors of the most massive, if not all, globular clusters (Zinnecker 1988, Freeman 1993). The above four parameters are known only for the nucleus of one dwarf elliptical, viz., NGC 205, and their values put this object, in Kormendy’s diagram, close to G1, right on the sequence of globular clusters (see Table 2 and Fig. 1). This result does not prove by itself that all massive globular clusters are the remnant cores of nucleated dwarf galaxies.
At the moment, only the anti-correlation of metallicity with age recently observed in suggests that this cluster enriched itself over a time scale of about 3 Gyr (Hughes & Wallerstein 2000 and Hilker & Richtler 2000). This contradicts the general idea that all the stars in a globular cluster are coeval, and may favor the origin of as being the remaining core of a larger entity, e.g., of a former nucleated dwarf elliptical galaxy. In any case, by the mere fact that their large masses imply complicated stellar and dynamical evolutions, the very massive globular clusters may blur the former clear (or simplistic) difference between globular clusters and dwarf galaxies.
Mass Estimates of Young Star Clusters
=====================================
So far, we have discussed only mass estimates related to old, rich star clusters. HST has triggered numerous studies of young, bright starburst clusters which may be quite massive. See, e.g., Holtzmann (1992) in the case of the NGC 1275 clusters, and Schweizer & Seitzer (1993) and Whitmore (1993) in the case of NGC 7252. Conti & Vacca (1994) use HST/FOC UV imaging of various bright knots in the nuclear starburst region of the Wolf-Rayet galaxy He 2-10. The luminosities of the knots are compared to those predicted from stellar population synthesis models for ages between 1 and 10 Myr, from which they estimate the masses of the knots to be between 10$^5$ and 10$^6$ . In a more direct way, Böker (1999) measure a velocity dispersion = 33 $\pm$ 3 in the nuclear star cluster of the face-on giant Scd spiral galaxy IC 342 and deduce, by fitting the central surface brightness profile, a cluster mass = 6 $\times$ . They infer a best-fitting cluster age in the range 10 - 60 $\times$ .
It is worth emphasizing that mass estimates of young star clusters are more difficult to obtain, and more uncertain, than those of older clusters. This is due to the frequent lack of velocity dispersion measurements and to the large uncertainties inherent in the use of stellar population synthesis models for young stellar systems, whose luminosities evolve quickly (see, e.g., Bruzual 2001).
The Slow Destruction of Star Clusters
=====================================
Numerical work has demonstrated that continual two-body relaxation within globular clusters, combined with weak tidal encounters between globular clusters and the Galactic disk and/or bulge, will lead, around each globular cluster, to the development of both a halo of unbound stars and tidal tails. All observed clusters that do not suffer from strong observational biases present tidal tails (Grillmair 1995, Leon 2000). These tidal tails exhibit projected directions preferentially aligned with the cluster orbit or towards the Galactic center, betraying their recent dynamical evolution through disk and/or bulge shocking. See also Odenkirchen (2001).
Recent theoretical work corroborates these observations. In N-body simulations of globular clusters moving in the Galactic potential well, Combes (1999) observe that once the particles (stars) are unbound, they slowly drift along the globular cluster path and form two huge tidal tails. A cluster is thus always surrounded by two giant tidal tails and debris, lying permanently along its orbit. The length of these tidal tails is of the order of 5 tidal radii or more. The orientation of these tidal tails is the signature of the last disk crossing and can strongly constrain the cluster orbit and the Galactic model. Each disk/bulge crossing may extract up to about 1% of the total mass of the star cluster, leading slowly but surely to its complete evaporation. The lighter the cluster and the stronger the tidal shocks, the faster the destruction process.
Do Globular Clusters Have Dark Halos?
====================================
Are globular clusters the most massive stellar systems without non-baryonic dark matter? Since both the velocity dispersion profile and the rotation curve of decrease towards the outer parts of this cluster, we may conclude that there is no dynamical evidence of any massive halo made of non-baryonic dark matter.
To investigate this point further among the most poorly studied Galactic clusters, Côté (2001) have obtained KECK/HIRES high-accuracy radial velocities of about 20 stars in each of six outer-halo globular clusters, located between 20 and 100 kpc from the center of our Galaxy. The velocity dispersions range between = 1 and 5 , with corresponding values between 1 and 4. This is exactly what is expected from the nearby, well-studied globular clusters such as and .
However, there is one single, although conspicuous, exception in the sample of Côté (2001): Pal 13 exhibits a velocity dispersion = 2.6 $\pm$ 0.3 . With its low and uncertain total luminosity, Pal 13 has a corresponding in the range 10 $<$ $<$ 40. This is quite unique for a globular cluster. Simulations show that such a result could be mimicked, with difficulty, only by an uncomfortably large fraction of binary stars. Binaries exist in globular clusters, but not in large quantities. We should apply Ockham’s razor before invoking the presence of a massive dark halo. Such a high value could be an indication of a cluster in the late phases of its dynamical evolution, when a large fraction of its total mass is made of white dwarfs, as predicted by Vesperini & Heggie (1997). It could also be explained by a velocity dispersion inflated by a lack of Virial equilibrium: Pal 13 seems to be in the advanced stages of tidal disruption, a status supported by the recent observations of extra-tidal extensions around this cluster (Siegel 2001 and Côté 2001). These two explanations do not exclude each other.
Is Tidal Disruption an Ubiquitous Phenomenon?
=============================================
Leon (2000) show that any observed Galactic globular cluster which does not suffer from strong observational biases (e.g., intervening absorption by Galactic cirrus along the line-of-sight) displays a pair of tidal tails. The tidal disturbance may also be unveiled through its effect on the surface brightness profile. An isolated cluster suffering no tidal shock will have a normal King-model profile, characterised by a core surrounded by an envelope with a steep profile. Any tidally perturbed cluster will have an envelope profile departing from the King-model profile as if there were a very strong background of stars: the stronger the tidal effect, the higher the level of the background (i.e., the larger the departure from the King-model profile), this background being made of stars escaping the cluster’s gravitational attraction.
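The contrast just described is easy to visualise with the empirical King profile, $f(r) \propto [\,(1+(r/r_c)^2)^{-1/2} - (1+(r_t/r_c)^2)^{-1/2}]^2$, which falls steeply to zero at the tidal radius $r_t$; a tidally perturbed cluster shows an excess above this curve at large radii. A sketch (the G1-like values $r_c = 0.52$ pc and $r_t = 200$ pc are used purely for illustration):

```python
import numpy as np

def king_profile(r, r_c, r_t, k=1.0):
    """Empirical King surface-density profile; zero for r >= r_t."""
    term = lambda x: 1.0 / np.sqrt(1.0 + (x / r_c)**2)
    f = k * (term(r) - term(r_t))**2
    return np.where(r < r_t, f, 0.0)

r = np.logspace(-1, np.log10(300.0), 400)   # radii in pc
sb = king_profile(r, r_c=0.52, r_t=200.0)   # steep, truncated envelope
```

A tidally shocked cluster would show a shallow extra component over `sb` in the outer envelope, the signature discussed above.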
When observing a tidally perturbed globular cluster, there is a high probability of measuring radial velocities of escaping stars, which are no longer in virial equilibrium with the cluster. Such stars would inflate the measured velocity dispersion. The stronger the tidal disturbance, the higher the inflated velocity dispersion, and the larger the mass-to-light ratio. This is exactly what is observed in Fig. 2, from Majewski et al. (2001), which displays the surface brightness profiles of two globular clusters (NGC 288 with $M/L$ = 2.1 and Pal 13 with $M/L \sim 20$) and five dSph galaxies (Sculptor with $M/L$ = 3.0, Leo I with $M/L$ = 5.6, Leo II with $M/L$ = 17, Carina with $M/L$ = 31, and Ursa Minor with $M/L$ = 79). From the bottom panel (NGC 288) to the top one (Ursa Minor) we observe a sequence of increasingly disturbed profiles, with higher and higher departures from the King models, the most disturbed profile being that of Ursa Minor. Interestingly, this sequence of increasingly disturbed profiles corresponds to a sequence of increasing mass-to-light ratios, as if the $M/L$ values were a direct measure of the intensity of the tidal shocks. Is the evidence for dark matter in dSph galaxies evaporating with their stars?
It is a pleasure to thank Steve Majewski (Virginia) and my collaborators Patrick Côté (Rutgers) and George Djorgovski (Caltech) for allowing me to present results in advance of publication. I also thank Jennifer Lotz (JHU), Roeland van der Marel (STScI), and Brad Whitmore (STScI) for interesting discussions and information about NGC 205.
Anderson J., 1997, Ph.D. thesis, University of California, Berkeley
Anderson J., King I.R., Meylan G., 1998, BAAS, 30, 1347
Anderson J., King I.R., 2000, PASP, 112, 1360
Baumgardt H., 1998, A&A, 330, 480
Baumgardt H., 2001, in ASP Conf. Ser. Vol. ???, Modes of Star Formation and the Origin of Field Populations, ed. E.K. Grebel and W. Brandner (San Francisco: ASP), in press
Böker T., van der Marel R.P., Vacca W.D., 1999, AJ, 118, 831
Bruzual G.A., 2001, in XI Canary Islands Winter School of Astrophysics, Galaxies at High Redshift, ed. I. Pérez-Fournon and F. Sánchez (Cambridge: Cambridge Contemporary Astrophysics), in press
Combes F., Leon S., Meylan G., 1999, A&A, 352, 149
Conti P.S., Vacca W.D., 1994, ApJ, 423, L97
Côté P., Djorgovski S.G., Meylan G., Castro S., McCarthy J.K., 2001, AJ, submitted
Dejonghe H., Merritt D., 1992, ApJ, 391, 531
Dubath P., Meylan G., Mayor M., Magain P., 1990, A&A, 239, 142
Freeman K.C., 1993, in IAU Symp. 153, Galactic Bulges, ed. H. Dejonghe & H.J. Habing (Dordrecht: Kluwer), 263
Freeman K.C., Norris J.E., 1981, ARA&A, 19, 319
Grillmair C.J., Freeman K.C., Irwin M., 1995, AJ, 109, 2553
Gunn J.E., Griffin R.F., 1979, AJ, 84, 752
Hilker M., Richtler T., 2000, A&A, 362, 895
Holtzmann J.A., 1992, AJ, 103, 691
Hughes J., Wallerstein G., 2000, AJ, 119, 1225
Kormendy J., 1985, ApJ, 295, 73
Leon S., Meylan G., Combes F., 2000, A&A, 359, 907
Majewski S.R., et al., 2001, in preparation
Meylan G., 1988, ApJ, 331, 718
Meylan G., Heggie D.C., 1997, A&AR, 8, 1-143
Meylan G., Mayor M., Duquennoy A., Dubath P., 1995, A&A, 303, 761
Meylan G., Sarajedini A., Jablonka P., et al., 2001, AJ, in press
Merritt D., 1993, ApJ, 413, 79
Merritt D., 1996, AJ, 112, 1085
Merritt D., Meylan G., Mayor M., 1997, AJ, 114, 1074
Norris J.E., Freeman K.C., Mayor M., Seitzer P., 1997, ApJ, 487, L187
Odenkirchen M., Grebel E.K., et al., 2001, ApJ, 548, L165
Ogorodnikov K.F., Nezhinskii E.M., Osipkov L.P., 1976, Sov. Astr. Lett., 2, 57
Schweizer F., Seitzer P., 1993, ApJ, 417, L29
Siegel M.H., Majewski S.R., Cudworth K.M., Takamiya M., 2001, AJ, 121, 935
van Leeuwen F., Le Poole R.S., Reijns R.A., Freeman K.C., de Zeeuw P.T., 2000, A&A, 360, 472
Vesperini E., Heggie D.C., 1997, MNRAS, 289, 898
Whitmore B.C., Schweizer F., Leitherer C., Borne K., Robert C., 1993, AJ, 106, 1354
Zhang Q., Fall S.M., 1999, ApJ, 527, L81
Zinnecker H., Keable C.J., Dunlop J.S., Cannon R.D., Griffiths W.K., 1988, in IAU Symp. 126, [*Globular Cluster Systems in Galaxies*]{}, ed. J.E. Grindlay, A.G.D. Philip (Dordrecht: Kluwer), 603
Discussion {#discussion .unnumbered}
==========
[*Armandroff:*]{} You raised the possibility of tidal disruption leading to the large velocity dispersion in Ursa Minor. Models for the disruption of dSphs (Piatek & Pryor, 1995, AJ, 109, 1071; Oh et al., 1995, ApJ, 442, 142) predict a kinematic signature that resembles a rotation curve, as opposed to simply an increased dispersion. Large samples of radial velocities in Draco and Ursa Minor have not shown evidence for the kinematic signature of disruption. Do you have evidence for Ursa Minor or other dSphs matching the modeling?\
[*Meylan:*]{} The two papers you mention present results of N-body simulations which reach conclusions different from those obtained from other, more recent N-body simulations (Kroupa, 1997, New Astron, 2, 139; Klessen & Kroupa, 1998, ApJ, 498, 143). Kroupa’s work shows that dSphs are obtained under the extreme hypothesis that dSph progenitors are not dominated by dark matter but are significantly shaped by tides through many periastron passages; Piatek & Pryor simulated only one passage, and Kroupa’s results are completely consistent with their findings for one passage only. There are now in Draco photometric observations of stars beyond its measured tidal boundary (Piatek et al., 2001, AJ, 121, 841). It will be essential to obtain, for a few of these dSphs, samples of thousands of stellar radial velocities. At the moment, I simply find the correlation between tidal disturbance intensity and $M/L$ values rather intriguing. This definitely calls for more studies.\
[*Da Costa:*]{} Comment: Integrated spectrum of nucleus of NGC 205 is A-type, so its stellar population is very different from globular clusters like G1. Question: What is the core radius of the NGC 205 nucleus and does it fit a King model? (different from say M32 with its power-law cusp).\
[*Meylan:*]{} About the comment: Yes, the stellar populations in the core of NGC 205 are younger than those in G1. This means that the position, in the panels defined by Kormendy (see Fig. 1 above), of a dynamical system older than about 1 Gyr does not depend strongly on its age. Answer to the question: The core radius of the nucleus of NGC 205 is 100 mas = 0.35 pc (Jones et al., 1996, ApJ, 466, 742), similar to a very dense globular cluster easily fitted by a King model. The star cluster in the center of NGC 205 is very different from the nucleus of M32, which has, e.g., a much larger velocity dispersion $\sigma$ = 150 km s$^{-1}$ (Joseph et al., 2001, ApJ, 550, 668). See also Meylan et al. (2001).\
[*Grillmair:*]{} Two comments: 1) The onset of tidal effects (e.g., break from normal profile to power-law tidal tail, velocity dispersion increase) can depend significantly on the orbital phase of the cluster (strongest effect at apogalacticon). 2) Ben Moore (1996, ApJ, 461, L13) put an upper limit on the $M/L$ of NGC 7089 $\equiv$ M2, based on the existence of tidal tails.\
[*Wenderoth:*]{} Around 15 years ago, the idea that $\omega$ Centauri is a merger was proposed. Since then, is there more evidence to support this idea or to reject it?\
[*Meylan:*]{} There was a paper by Icke & Alcaino, 1988, A&A, 204, 115, suggesting that $\omega$ Centauri could be the result of a merger, in order to explain (i) its spread in metallicity, (ii) its strong flattening, and (iii) its large mass. There is now more evidence about the intrinsic complexity of this globular cluster, as mentioned above in Section 6, although none of these observational facts supports the merger scenario over the other two alternatives. If there was a merger, it must have happened very early in the life of the two merging proto-cluster clouds, since the mere encounter of two present-day globular clusters would not induce a merger, except in the case of orbital-parameter adjustments with vanishingly small probabilities.\
|
---
author:
- |
Chandrasekhar Chatterjee, Amitabha Lahiri\
Department of Theoretical Sciences\
S. N. Bose National Centre for Basic Sciences\
Block JD, Sector III, Salt Lake, Kolkata 700 098, W.B. India.
title: 'Flux dualization in broken SU(2)'
---
Introduction
===========================
It is widely believed that color confinement in the strong coupling regime should be a phenomenon dual to monopole confinement in a color superconductor at weak coupling. In this picture, the QCD vacuum behaves like a dual superconductor, created by condensation of magnetic monopoles, in which confinement is analogous to a dual Meissner effect. Quarks are then bound to the ends of a flux string [@Mandelstam:1974pi; @Nambu:1975ba; @Nambu:1974zg] analogous to the Abrikosov-Nielsen-Olesen vortex string of Abelian gauge theory [@Abrikosov:1956sx; @Nielsen:1973cs].
A construction of flux strings in the Weinberg-Salam theory was suggested by Nambu [@Nambu:1977ag], in which a monopole-anti-monopole pair is bound by a string of Z flux. The magnetic monopoles are introduced by hand. If we demand that the magnetic monopoles should appear from the underlying gauge theory, we need an additional adjoint scalar field. Such a construction of flux strings, involving two adjoint scalar fields in an SU(2) gauge theory, has been discussed in [@Nielsen:1973cs; @deVega:1976rt]. Recently there has been a resurgence of interest in such constructions [@Auzzi:2003fs; @Hanany:2004ea; @Shifman:2002yi; @'tHooft:1999au]. We have previously shown explicitly [@Chatterjee:2009pi] that an SU(2) gauge theory broken by two adjoint scalar fields at different energy scales has configurations of magnetic monopoles bound by flux strings.
In this paper we consider an SU(2) gauge theory coupled to an adjoint scalar field as well as a fundamental scalar field. The two fields break the symmetry at two scales. At the higher scale the adjoint scalar breaks the symmetry down to U(1) and produces ’t Hooft-Polyakov magnetic monopoles [@'tHooft:1974qc; @Polyakov:1974ek; @Prasad:1975kr]. The fundamental scalar breaks the remaining U(1) symmetry at a lower scale and produces a flux string.
Our starting point is the Lagrangian $$\begin{aligned}
\label{lagrangian_1}
{\it{ L}} = - {\frac
12}\Tr\left(G_{\mu\nu}G^{\mu\nu}\right) + \Tr\left(D_\mu \phi
D^\mu \phi \right) + \half(D_\mu \psi^{\dagger})(D^\mu
\psi) + V(\phi, \psi). \end{aligned}$$ Here $\phi$ is in the adjoint representation of $SU(2), \phi =
\phi^i\tau^i$ with real $\phi^i\,$ and $\psi$ is a fundamental doublet of $SU(2)$, with $V(\phi, \psi)$ some interaction potential for the scalars. The $SU(2)$ generators $\tau^i$ satisfy $\Tr(\tau^i\tau^j) = \half\delta^{ij}$. The covariant derivative $D_\mu$ and the Yang-Mills field strength tensor $G_{\mu\nu}$ are defined as $$\begin{aligned}
\left(D_\mu\phi\right)^i &=& \partial_\mu \phi^i + g
\epsilon^{ijk}A_{\mu}^j\phi^k\,, \hfil\\
G_{\mu\nu}^i &=& \partial_\mu {A^i}_{\nu}- \partial_\nu {A^i}_{\mu}
+ g \epsilon^{ijk}{A^j}_{\mu}{A^k}_{\nu}\,,\hfil\\
(D_\mu \psi)_\alpha &=& \d_\mu \psi_\alpha -
igA_\mu^i\tau^i_{\alpha\beta} \psi_\beta\,. \end{aligned}$$ We will sometimes employ vector notation, in which $$\begin{aligned}
D_\mu\vec\phi &=& \partial_\mu\vec\phi + g\vec A
\times \vec\phi\,, \\
D_\mu\psi &=& \partial_\mu \psi - ig A_\mu\psi\,,\\
\vec G_{\mu\nu}&=& \partial_\mu\vec A_\nu -
\partial_\nu \vec A_\mu + g \vec A_\mu \times\vec A_\nu\,,\, {\rm
etc.}
%\label{}\end{aligned}$$ Obviously, $\vec\phi$ and $\phi$ represent the same object. The simplest form of the potential $V(\phi, \psi)$ that will serve our purpose is, $$\begin{aligned}
\label{potential}
V(\phi, \psi) = - {\frac{\lambda_1} 4}(|\phi|^2 - v_1^2)^2
- {\frac{\lambda_2} 4}(\psi^\dagger\psi - v_2^2)^2 -
V_{mix}(\phi,\psi). \end{aligned}$$ Here $v_1\,, v_2$ are parameters with dimensions of mass and $\lambda_1\,, \lambda_2$ are dimensionless coupling constants. The last term $V_{mix}(\phi,\psi)$ includes all mixing terms in the potential, i.e. those involving products of the two scalar fields. We will take $V_{mix}(\phi\,,\psi) = 0$ for now, so the potential is minimized at $|\phi| = v_1$ and $(\psi^\dagger\psi)^{1/2} = v_2$, and we will refer to $v_1$ and $v_2$ as the vacuum expectation values of $\phi$ and $\psi$.
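As a quick symbolic sanity check (with $V_{mix}=0$, and treating only the moduli of the two fields), one can verify that the energy density is minimized at $|\phi| = v_1$ and $(\psi^\dagger\psi)^{1/2} = v_2$:

```python
import sympy as sp

phi, psi, v1, v2, lam1, lam2 = sp.symbols(
    'phi psi v1 v2 lambda1 lambda2', positive=True)

# Potential energy density of the two scalar moduli (V_mix = 0);
# the overall sign is such that the energy is bounded from below.
E = lam1/4*(phi**2 - v1**2)**2 + lam2/4*(psi**2 - v2**2)**2

# Both gradients vanish at (|phi|, |psi|) = (v1, v2) ...
assert sp.diff(E, phi).subs({phi: v1, psi: v2}) == 0
assert sp.diff(E, psi).subs({phi: v1, psi: v2}) == 0
# ... and the curvatures there are positive, so this is a minimum.
assert sp.simplify(sp.diff(E, phi, 2).subs(phi, v1) - 2*lam1*v1**2) == 0
assert sp.simplify(sp.diff(E, psi, 2).subs(psi, v2) - 2*lam2*v2**2) == 0
```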
The adjoint scalar $\phi$ acquires a vacuum expectation value (vev) $\vec v_1$, a vector in internal space, and breaks the symmetry group down to U(1). The ’t Hooft-Polyakov monopoles are associated with this breaking. The other scalar field $\psi$ acquires a non-vanishing vev of magnitude $v_2$, a vector in the fundamental representation. This vector can be associated uniquely with a vector in the adjoint space, which is free to wind around $\vec v_1$. A circle in space can be mapped onto this winding, giving rise to a vortex string. We then dualize the fields as in [@Davis:1988rw; @Mathur:1991ip; @Lee:1993ty; @Akhmedov:1995mw; @Chatterjee:2006iq] to write the action in terms of string variables.
The idea of two-scale symmetry breaking in SU(2), the first to produce monopoles and the second to produce strings, has appeared earlier [@Hindmarsh:1985xc]. Later this idea was used in a supersymmetric setting in [@Kneipp:2003ue; @Auzzi:2003em; @Eto:2006dx], where the idea of flux matching, following Nambu [@Nambu:1977ag], was also included. The model we discuss in this paper, with one adjoint and one fundamental scalar, has been considered previously in [@Shifman:2002yi]. Here we construct the flux strings explicitly in non-supersymmetric SU(2) theory with ’t Hooft-Polyakov monopoles of the same theory attached to the ends. The internal direction of symmetry breaking is left arbitrary, so that the magnetic flux may be chosen to be along any direction in the internal space. We also dualize the variables to write the effective theory of macroscopic string variables coupled to an antisymmetric tensor, and thus show explicitly that the flux at each end of the string is saturated by the magnetic monopoles, indicating confinement of magnetic flux.
Magnetic monopoles
==================
We assume that $v_1\,,$ the vacuum expectation value of $\phi\,,$ is large compared to the energy scale we are interested in. Below the scale $v_1\,,$ we find the $\phi$ vacuum, defined by the equations $$\begin{aligned}
\label{1sthiggsvacuum}
D_\mu{\vec\phi} &=& 0\,,\\
|\phi|^2 &=& v_1^2.\nonumber\end{aligned}$$ Below $v_1\,,$ the original SU(2) symmetry of the theory is broken down to U(1). At low energies the theory is essentially Abelian, with the component of $A$ along $\phi$ remaining massless. We can now write the gauge field below the scale $v_1$ as $$\begin{aligned}
\label{A_phivac}
\vec A_\mu = B_{\mu}\hat{\phi}_1 - {\frac 1g}
\hat{\phi}_1\times\d_{\mu} \hat{\phi}_1\,,\end{aligned}$$ where $B_{\mu} = \vec A_{\mu}\cdot \hat{\phi}_1$ and $\hat\phi_1 =
\vec\phi_1/v_1$ [@Corrigan:1975hd]. In this vacuum, until we include the second symmetry breaking, $B_\mu$ is a massless mode. The other two components of $A\,,$ which we call $A^\pm\,,$ and the modulus of the scalar field $\phi$ acquire masses, $$\begin{aligned}
M_{A^\pm} = gv_1, \qquad M_{|\phi|} = \sqrt{\lambda_1}\, v_1.\end{aligned}$$ Well below $v_1$ the modes $A^\pm$ are not excited, so they will not appear in the low energy theory. The second term on the right hand side of Eq. (\[A\_phivac\]) corresponds to the gauge field for SU(2) magnetic monopoles [@'tHooft:1999au].
A straightforward calculation shows that, $$\begin{aligned}
\Tr (G_{\mu\nu}G^{\mu\nu}) &=& {\frac 12}F_{\mu\nu}F^{\mu\nu},\end{aligned}$$ where $$\begin{aligned}
\label{F}
F_{\mu\nu} &=& \d_{[\mu}B_{\nu]} -
{\frac 1g}\hat{\phi}_1\cdot\d_\mu \hat{\phi}_1\times
\d_\nu\hat{\phi}_1 \, \equiv \partial_{[\mu} B_{\nu]} + M_{\mu\nu}.\end{aligned}$$ Then the Lagrangian can be written in the $\phi$-vacuum as $$\begin{aligned}
L = - {\frac 14}F_{\mu\nu}F^{\mu\nu}
+ (D_\mu \psi^{\dagger})(D^\mu \psi)
- {\frac{\lambda_2} 4}(\psi^\dagger \psi - v_2^2)^2 .\end{aligned}$$
The second term of Eq. (\[F\]) is the ‘monopole term’. In a configuration where the scalar field at spatial infinity goes as $\phi_1^i \to v_1\displaystyle{\frac {r^i}{r}}$, the $(ij)$ component of the second term of Eq. (\[F\]) becomes $-\displaystyle{{\frac{\epsilon_{ijk} r^k}{gr^3}}},$ which we can easily identify as the field of a magnetic monopole. The flux for this monopole field is $\displaystyle{\frac{4\pi}{g}}$. On the other hand, a monopole with magnetic charge $Q_m$ produces a flux of $4\pi Q_m,$ and thus we find the quantization condition for unit charge, $Q_m g = 1.$
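As a numerical cross-check of the flux, one can integrate the radial monopole field over a sphere and recover $4\pi/g$; the value of $g$ below is illustrative:

```python
import numpy as np

g = 2.0  # illustrative value of the gauge coupling

def B_field(r_vec):
    """Radial magnetic field B_k = r_k/(g r^3), read off from the
    monopole term -eps_{ijk} r^k/(g r^3) of F_{ij}."""
    r = np.linalg.norm(r_vec)
    return r_vec / (g * r**3)

# Midpoint-rule surface integral of B . n over a sphere of radius R;
# the result is independent of R, as expected for a monopole.
R = 2.5
n_th, n_ph = 200, 200
th = (np.arange(n_th) + 0.5) * np.pi / n_th
ph = (np.arange(n_ph) + 0.5) * 2 * np.pi / n_ph
flux = 0.0
for t in th:
    for p in ph:
        n_hat = np.array([np.sin(t)*np.cos(p), np.sin(t)*np.sin(p), np.cos(t)])
        flux += B_field(R * n_hat) @ n_hat * R**2 * np.sin(t)
flux *= (np.pi / n_th) * (2 * np.pi / n_ph)

assert np.isclose(flux, 4 * np.pi / g, rtol=1e-4)
```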
The scalar field $ \phi $ can be written as $\phi(x) =
|\phi(x)|\hat\phi(x)$, where $\hat\phi$ contains two independent fields (and $x\equiv \vec x$). Under a gauge transformation, $\hat\phi$ moves on $ S^2 $. Since $\phi$ is in the adjoint of SU(2), we can always write $\phi$ as $$\phi(x) = |\phi(x) | g(x)\tau ^3 g^{-1}(x) = |\phi(x)| \hat\phi(x)
\,,$$ with $g(x) \in $ SU(2). Then for a given $\phi(x)\,,$ we can locally decompose $g(x)$ as $g(x) = h(x)U(x)\,,$ with $h(x) = \exp
(- i\xi (x)\hat\phi(x))\,,$ and we can write $$\begin{aligned}
\phi(x) = |\phi(x)| U(\varphi(x), \theta(x))\tau ^3
U^\dagger(\varphi(x), \theta(x)),
\label{hU}\end{aligned}$$ Here $\xi(x), \varphi(x), \theta(x)$ are angles on $S^3 =$ SU(2). The matrix $U$ rotates $\hat\phi(x)$ in the internal space, and is an element of SU(2)/U(1), where the U(1) is the one generated by $h\,.$ If $|\phi|$ is zero at the origin and $|\phi|$ goes smoothly to its vacuum value $v_1$ on the sphere at infinity, the field $\phi$ defines a map from the sphere at infinity to the vacuum manifold, classified by the second homotopy group $\pi_2(S^2) = {\mathbb Z}$. Equating $\hat\phi$ with the unit radius vector of a sphere we can solve for $U(\theta(x),\varphi (x))$, $$\begin{aligned}
\label{Umonopole1}
U = \left(\begin{tabular}{cc}
$\cos{\theta\over 2} $ & $-\sin{\theta\over 2}e^{-i \varphi}$ \\
$\sin{\theta\over 2}e^{i \varphi}$ & $\cos{\theta\over 2}$\\
\end{tabular}\right)\, . \end{aligned}$$ An ’t Hooft-Polyakov monopole (in the point approximation, or as seen from infinity) at the origin is described by $$\begin{aligned}
\label{Umonopole2}
U = \cos{\theta\over 2}\left(\begin{tabular}{cc}
$e^{i \varphi}$ & 0 \\
0 & $e^{-i \varphi}$\\
\end{tabular}\right) + \sin{\theta\over 2}
\left( \begin{tabular}{cc}
$0\quad$ & $i$\\
$i\quad$ & $0$\\
\end{tabular}\right)\,, \end{aligned}$$ where $0\le\theta(\vec x)\le \pi$ and $0\le\varphi(\vec x)\le2\pi$ are two parameters on the group manifold. This choice of $U(\vec
x)$ is different from that in Eq. (\[Umonopole1\]) by a rotation of the axes. Both choices lead to the field configuration $$\begin{aligned}
\phi &=& v_1{\frac {r^i}{r}}\tau_i.\end{aligned}$$ For this case, $Q_m g = 1\,,$ as we mentioned earlier. A monopole of charge $n/g$ is obtained by making the replacement $\varphi \to
n\varphi$ in Eqs. (\[Umonopole1\], \[Umonopole2\]). The integer $n$ labels the homotopy class, $\pi_2(SU(2)/U(1))
\sim \pi_2(S^2) \sim Z\,,$ of the scalar field configuration. Other choices of $U(\vec x)$ can give other configurations. For example, a monopole-anti-monopole configuration [@Bais:1976fr] is given by the choice $$\begin{aligned}
\label{M-anti-M}
U = \sin{({\theta_1 - \theta_2})\over 2}\left(\begin{tabular}{cc}
0 & $- e^{ -i \varphi}$\\
$e^{i \varphi}$ & 0\\
\end{tabular}\right) + \cos{({\theta_1 - \theta_2})\over 2}
\left( \begin{tabular}{cc}
$1\quad$ & $0$\\
$0\quad$ & $1$\\
\end{tabular}\right) . \end{aligned}$$ For our purposes, we will need to consider a $\phi_1$-vacuum configuration with $U(\vec x) \in SU(2)$ corresponding to a monopole-anti-monopole pair separated from each other by a distance $> 1/v_1.$ Then the total magnetic charge vanishes, but each monopole (or anti-monopole) can be treated as a point particle.
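The statement that both choices of $U$ rotate $\tau^3$ into the radial direction can be checked directly; the following sketch verifies $U\tau^3 U^\dagger = \hat r\cdot\vec\tau$ for the $U$ of Eq. (\[Umonopole1\]) at a few sample angles:

```python
import numpy as np

# Pauli matrices; tau^i = sigma^i / 2 in the normalization of the text
s1 = np.array([[0, 1], [1, 0]], complex)
s2 = np.array([[0, -1j], [1j, 0]], complex)
s3 = np.array([[1, 0], [0, -1]], complex)

def U_mat(th, ph):
    """The SU(2)/U(1) matrix of Eq. (Umonopole1)."""
    c, s = np.cos(th/2), np.sin(th/2)
    return np.array([[c, -s*np.exp(-1j*ph)],
                     [s*np.exp(1j*ph), c]], complex)

rng = np.random.default_rng(0)
for th, ph in rng.uniform(0.1, 3.0, size=(5, 2)):
    U = U_mat(th, ph)
    lhs = U @ (s3/2) @ U.conj().T
    # rhat . tau with rhat = (sin th cos ph, sin th sin ph, cos th)
    rhs = (np.sin(th)*np.cos(ph)*s1 + np.sin(th)*np.sin(ph)*s2
           + np.cos(th)*s3) / 2
    assert np.allclose(lhs, rhs)
```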
Flux tubes
==========
We started with a theory with SU(2) symmetry and a pair of scalars $\phi,\psi\,.$ The nonzero vacuum expectation value $v_1$ of the field $\phi$ breaks the symmetry to U(1), so that below $v_1$ we have an effective Abelian theory with magnetic monopoles. The gauge group SU(2) acts transitively on the vacuum manifold $S^2$, so the Abelian effective theory is independent of the internal direction of $\phi$. The remaining symmetry of the theory is the U(1) little group of $\phi$: the group of rotations around the corresponding point on the vacuum manifold $S^2$.
The theory contains another scalar field, $\psi$, in the fundamental representation of SU(2). After breaking the original SU(2) down to the $\phi$-vacuum, the only remaining gauge symmetry acting on the SU(2) doublet $\psi$ is a transformation by the little group U(1). We will find flux tubes when this U(1) symmetry is spontaneously broken down to nothing. The elements of this U(1) are $h(x) = \exp[-i\xi (x)\hat\phi(x)]\,,$ rotations by an angle $\xi(x)$ around the direction of $\phi(x)$ at any point in space. This U(1) will be broken by the vacuum configuration of $\psi\,.$
Let us then define the $\psi$-vacuum by, $$\begin{aligned}
\label{2ndhiggs1}
\psi^{*i}\psi^i = v_2^2\\
\label{2ndhiggs2}
D_\mu \psi = 0,\end{aligned}$$ where $ D_\mu $ is defined using $ A_\mu $ in the $\phi$-vacuum, as in Eq. (\[A\_phivac\]). Multiplying Eq. (\[2ndhiggs2\]) by $\psi^\dagger\hat\phi$ from the left, its adjoint by $\hat\phi\psi$ from the right, and adding the results, we get $$\begin{aligned}
0 & =& \psi^\dagger\hat\phi D_\mu\psi +
(D_\mu\psi^\dagger ) \hat\phi\psi\, \nonumber\\
%%% &= & \psi^\dagger\hat\phi\d_\mu \psi +
%%% (\d_\mu\psi^\dagger)\hat\phi\psi -ig\psi^\dagger{\frac
%%% ig}\left[\hat\phi,\left[\hat\phi,\d_\mu\hat\phi\right]
%%% \right]\psi\, \nonumber \\
&= & \d_\mu\left[\psi^\dagger\hat\phi\psi\right]\,,\end{aligned}$$ from which it follows that $$\begin{aligned}
\label{phi}
\psi^\dagger\hat\phi\psi = \mathrm{constant}\,,\end{aligned}$$ or explicitly in terms of the components, $$\begin{aligned}
\label{psi1}
\Tr\left[{\psi^\dagger}_i\sigma^\alpha_{ij}\psi_j
\tau_\alpha\hat\phi\right] = \mathrm{constant}\,.\end{aligned}$$ It follows that the components parallel and orthogonal to $\phi$ are both constants. Then we can decompose $$\begin{aligned}
\label{psi2}
{\psi^\dagger}_i\sigma^\alpha_{ij}\psi_j \tau_\alpha =
v^2_2\cos\theta_c \hat\phi + v^2_2 \sin\theta_c \hat \kappa\,,\end{aligned}$$ where $\hat\kappa$ is a vector in the adjoint, orthogonal to $\hat\phi\,.$ We can always write $\hat\kappa$ as $$\begin{aligned}
\label{h}
\hat\kappa = hU\tau^2U^\dagger h^\dagger\,,\end{aligned}$$ where $h$ and $U$ are as defined before and in Eq. (\[hU\]).
Using the completeness identity $ {\sigma^\alpha}_{ij} {\sigma^\alpha}_{kl} =
2\delta_{il}\delta_{kj} - \delta_{ij}\delta_{kl}$, we find that $\psi$ is an eigenvector of the expression on the left hand side of Eq. (\[psi2\]). Then writing the right hand side of that equation in terms of $h$ and $U$, we find that $\psi$ can be written as $$\begin{aligned}
\label{psi1}
\psi = v_2hU \left(\begin{tabular}{l}
$\rho_1$\\
$\rho_2$\\
\end{tabular}\right)\,,\end{aligned}$$ where $\rho_1$ and $\rho_2$ are constants. Keeping $U$ fixed, we vary $\xi$ and find the periodicity $$\begin{aligned}
\psi(\xi) = \psi(\xi + 4\pi)\,.\end{aligned}$$ This $\xi$ is the angle parameter of the residual $U(1)$ gauge symmetry, and in the presence of a string solution $\xi$ winds along a circle around the string. In order to make $\psi$ single valued around the string, we need $\xi = 2 \chi$, where $ \chi $ is the angular coordinate for a loop around the string. Next let us calculate the Lagrangian of the scalar field $\psi$. We have $$\begin{aligned}
D_\mu\psi &=& \d_\mu\psi -igA_\mu\psi\\
&=& \d_\mu (hU\rho) -ig\left[B_\mu \hat\phi +
ig\left[\hat\phi,
\d_\mu\hat\phi\right]\right]hU\rho\\
&=& \d_\mu (Uh_0\rho) -ig\left[B_\mu \hat\phi +
ig\left[\hat\phi,
\d_\mu\hat\phi\right]\right]Uh_0\rho\\
&=& - iUh_0\tau^3\rho \left[2\d_\mu\chi + g\left(B_\mu
+N_\mu\right)\right]\,,\end{aligned}$$ where $h_0 = e^{- i2\chi\tau_3}\,,\, \rho^i\rho^i = {v_2}^2\,,$ and we have used the identity $ U^\dagger h U = \exp (- 2i \chi\tau^3)\,.$ We have also introduced the Abelian ‘monopole field’ $$\begin{aligned}
N_\mu &=& 2iQ_m \Tr\left[ \d_\mu U U^{\dagger}\hat
\phi\right]\,,\\
\d_{[\mu} N_{\nu]} &=& Q_mM_{\mu\nu} +
2iQ_m\Tr[(\d_{[\mu}\d_{\nu]}U)
U^{\dagger}\hat\phi]\,.\end{aligned}$$ The first term reproduces the magnetic field of the monopole configuration, while the second term is a gauge-dependent line singularity, the Dirac string. This singular string is a red herring, and we are going to ignore it because it is an artifact of our construction. We have used a $U(\vec x)$ which is appropriate for a point monopole. If we look at the system from far away, the monopoles will look like point objects and it would seem that we should find Dirac strings attached to each of them. However, we know that the ’t Hooft-Polyakov monopoles are actually not point objects, and their near magnetic field is not describable by an Abelian four-potential $N_\mu,$ so if we could do our calculations without the far-field approximation, we would not find a Dirac string. Further, as was pointed out in [@Chatterjee:2009pi], the actual flux tube occurs along the line of vanishing $\psi\,,$ and it is always possible to choose a $U(\vec x)$ appropriate for the monopole configuration such that the Dirac string lies along the zeroes of $\psi$. Since $|\psi|^2$ always multiplies the term containing $N_\mu$ in the action, the effect of the Dirac string can always be ignored.
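The group-theoretic identity $U^\dagger h U = \exp(-2i\chi\tau^3)$ invoked above follows from conjugating the exponent; a quick numerical check, using the closed form of the SU(2) exponential (valid whenever $(2M)^2 = 1$):

```python
import numpy as np

def exp_su2(chi, M):
    """exp(-2i*chi*M) for a 2x2 matrix M satisfying (2M)^2 = 1,
    e.g. M = tau^3 or M = U tau^3 U^dagger."""
    return np.cos(chi)*np.eye(2) - 2j*np.sin(chi)*M

tau3 = np.diag([0.5, -0.5]).astype(complex)
th, ph, chi = 0.8, 2.1, 0.6   # arbitrary test angles
c, s = np.cos(th/2), np.sin(th/2)
U = np.array([[c, -s*np.exp(-1j*ph)],
              [s*np.exp(1j*ph), c]], dtype=complex)

phihat = U @ tau3 @ U.conj().T    # hat(phi) = U tau^3 U^dagger
h = exp_su2(chi, phihat)          # h = exp(-i xi hat(phi)), with xi = 2 chi
# conjugating the exponent: U^dagger h U = exp(-2i chi tau^3)
assert np.allclose(U.conj().T @ h @ U, exp_su2(chi, tau3))
```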
With these definitions we can calculate $$\begin{aligned}
L &=& -{\frac 14}F^{\mu\nu}F_{\mu\nu}
+ {\frac {{v_2}^2}2}\left(\d_\mu\chi + e\left(B_\mu +
N_\mu\right)\right)^2
\label{fundareps}\end{aligned}$$ Here we have defined the electric charge $e = \frac{g}{2}$ and written the magnetic charge as $Q_m = \frac{1}{2e}$.
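The completeness relation for the Pauli matrices that underlies the eigenvector argument above can be verified index by index:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]
d = np.eye(2)

# Completeness: sum_a s^a_{ij} s^a_{kl} = 2 d_{il} d_{kj} - d_{ij} d_{kl}
for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                lhs = sum(sa[i, j]*sa[k, l] for sa in s)
                rhs = 2*d[i, l]*d[k, j] - d[i, j]*d[k, l]
                assert np.isclose(lhs, rhs)
```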
Dualization
===========
Let us now dualize the low energy effective action in order to express the theory in terms of the macroscopic string variables. The partition function $Z$ is simply the functional integral $$\begin{aligned}
Z &=& \int \D B_\mu\D\chi\exp i
\int d^4 x \left[- {\frac 14}F_{\mu\nu}F^{\mu\nu} +
{\frac {v_2^2}2}\left(eB_\mu +\d_\mu\chi +
eN_\mu\right)^2\right]\,.
\label{flux.Higgs}\end{aligned}$$ In the presence of flux tubes we can decompose the angle $\chi$ into a part $\chi^s$ which measures flux in the tube and a part $\chi^r$ describing single valued fluctuations around this configuration, $\chi = \chi^r + \chi^s\,.$ Then if $ \chi $ winds around the tube $n$ times, we can define $$\begin{aligned}
\epsilon^{\mu\nu\rho\lambda}\partial_{\rho}
\partial_{\lambda}\chi^s =
2\pi n\int_{\Sigma}d\sigma^{\mu\nu}(x(\xi))\,\delta^4(x-x(\xi))
\equiv \Sigma^{\mu\nu}\,,
\label{def.sigma}\end{aligned}$$ where $\xi = (\xi^1, \xi^2)$ are the coordinates on the world-sheet and $d\sigma^{\mu\nu}(x(\xi)) = \epsilon^{ab}\partial_a
x^\mu \partial_b x^\nu\,.$ The vorticity quantum is $2\pi$ in the units we are using and $n$ is the winding number [@Marino:2006mk].
The integration over $\chi$ has now become integrations over both $\chi^r$ and $\chi^s\,$. However $\chi^r$ is a single-valued field, so it can be absorbed into the gauge field $ B_\mu $ by a redefinition, or gauge transformation, $B_\mu \to B_\mu
+ \partial_\mu\chi^r$. We can linearize the action by introducing auxiliary fields $C_\mu, B_{\mu\nu}$ and $A^{m}_\mu$, $$\begin{aligned}
Z &=& \int \D B_\mu \D C_\mu \D \chi_s \D B_{\mu\nu} \D
A^m_{\mu}\nonumber \\
&& \exp i \int d^4 x
\left[ -\frac{1}{4} G^{\mu\nu}G_{\mu\nu} + \frac{1}{4}
\epsilon^{\mu\nu\rho\lambda}
G_{\mu\nu}F_{\rho\lambda} - \frac{1}{2v_2^2}C_\mu^2 - C^\mu (eB_\mu
+ eN_\mu + \d_\mu \chi_s) \right]\,,\nonumber \\\end{aligned}$$ where we have written $G_{\mu\nu} = \d_\mu A^m_\nu - \d_\nu A^m_\mu
+ ev_2 B_{\mu\nu}$ and $F_{\mu\nu}= \partial_\mu B_\nu
- \partial_\nu B_\mu + M_{\mu\nu}\,$. Now we can integrate over $B_\mu$ easily, $$\begin{aligned}
Z = \int \D C_\mu \D \chi_s \D B_{\mu\nu} \D A^m_{\mu}
\delta\left( C^\mu - \frac{v_2}{2}
\epsilon^{\mu\nu\rho\lambda}\d_\nu B_{\rho\lambda}\right)
\exp i \int d^4 x \qquad\qquad \qquad\nonumber\\
\left[ -\frac{1}{4} G^{\mu\nu}G_{\mu\nu} + \frac{ev_2}{4}
\epsilon^{\mu\nu\rho\lambda}
B_{\mu\nu}M_{\rho\lambda} - A^\mu j_\mu - \frac{1}{2v_2^2}C_\mu^2
- C^\mu (eB_\mu + eN_\mu + \d_\mu \chi_s) \right]\,.\end{aligned}$$ Here $j_m^{\mu} = - {\frac 1{2}} \epsilon^{\mu\nu\rho\lambda}\d_\nu
M_{\rho\lambda}$ is the magnetic monopole current. Integrating over $ C_\mu $ we get $$\begin{aligned}
Z = \int \D \chi_s \D B_{\mu\nu} \D A^m_{\mu}
\exp i \int d^4 x
\left[ -\frac{1}{4} G^{\mu\nu}G_{\mu\nu} + \frac{1}{12}
H^{\mu\nu\rho}H_{\mu\nu\rho} -
\frac{v_2}{2} \Sigma_{\mu\nu}B^{\mu\nu} - A^\mu j_\mu \right],\end{aligned}$$ where we have defined $H_{\mu\nu\rho} = \partial_\mu B_{\nu\rho} + \partial_\nu
B_{\rho\mu} + \partial_\rho B_{\mu\nu}\,,$ used Eq. (\[def.sigma\]) and also written $M_{\mu\nu} = (\d_\mu N_\nu -
\d_\nu N_\mu)\,.$
We can also replace the integration over $\D\chi^s$ by an integration over $\D x_\mu(\xi)$, representing a sum over all flux-tube world sheets, where $x_{\mu}(\xi)$ parametrizes the surface of singularities of $ \chi $. The Jacobian for this change of variables gives the action for the string on the background space-time [@Akhmedov:1995mw; @Orland:1994qt]. The string dynamics is given by the Nambu-Goto action, plus higher order operators [@Polchinski:1991ax], which can be obtained from the Jacobian. We will ignore the Jacobian below, but of course it is necessary to include it if we want to study the dynamics of the flux tube. $$\begin{aligned}
Z = \int \D x_\mu(\xi) \D B_{\mu\nu} \D A^m_{\mu}
\exp i \int d^4 x
\left[ -\frac{1}{4} G^{\mu\nu}G_{\mu\nu} + \frac{1}{12}
H^{\mu\nu\rho}H_{\mu\nu\rho} -
\frac{v_2}{2} \Sigma_{\mu\nu}B^{\mu\nu} - A^\mu j_\mu \right],
\label{flux.functional}\end{aligned}$$ The equations of motion for the field $B_{\mu\nu}$ and $A^{\mu}$ can be calculated from this to be $$\begin{aligned}
\label{flux.Beom}
\partial_\lambda H^{\lambda\mu\nu} &=& -m \, G^{\mu\nu} -
\frac{m}{e} \,\Sigma^{\mu\nu} \,,\\
\d_\mu G^{\mu\nu} &=& j_m^\nu
\label{flux.Aeom} \end{aligned}$$ where $G_{\mu\nu}= ev_2 B_{\mu\nu} + \partial_{\mu}A^m_{\nu} -
\partial_{\nu}A^m_{\mu}\,,$ and $m = e v_2$. Combining Eq. (\[flux.Aeom\]) and Eq. (\[flux.Beom\]) we find that $$\frac 1e \partial_\mu \Sigma^{\mu\nu}(x) + j_m^\nu(x) = 0\,.
\label{mono.coneq}$$ It follows that a vanishing magnetic monopole current implies $\partial_\mu \Sigma^{\mu\nu}(x) = 0\,;$ in other words, if there are no monopoles in the system, the flux tubes must be closed.
The magnetic flux through the tube is $\displaystyle{\frac{2n\pi}e}\,,$ while the total magnetic flux of a charge-$m$ monopole is $\displaystyle{\frac{4m\pi}g}\,,$ where $n, m$ are integers. Since $e Q_m = \half\,,$ i.e. $4\pi/g = 2\pi/e\,,$ it follows that for every integer $n$ there is a string that can confine a monopole and anti-monopole pair. Although this string configuration could be broken by creating a monopole-anti-monopole pair, there is a hierarchy of energy scales $v_1\gg v_2\,,$ which set, respectively, the mass of the monopole and the energy scale of the string, so this hierarchy can be expected to prevent string breakage by pair creation.
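The flux matching just described is a one-line consistency check: with $e = g/2$ and $Q_m = 1/g$, a winding-$n$ tube carries exactly the flux of $n$ unit monopoles:

```python
import sympy as sp

g, n = sp.symbols('g n', positive=True)
e = g/2                          # electric charge defined in the text
Qm = 1/g                         # unit magnetic charge from Q_m g = 1

string_flux = 2*sp.pi*n/e        # flux carried by a winding-n tube
monopole_flux = 4*sp.pi*(n*Qm)   # total flux of a charge n/g monopole

assert sp.simplify(string_flux - monopole_flux) == 0
```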
The conservation law of Eq. (\[mono.coneq\]) also follows directly from $ Z $ in Eq. (\[flux.functional\]) by introducing a variable $B'_{\mu\nu} = B_{\mu\nu} +
{\frac 1m } (\partial_\mu A^m_\nu - \partial_\nu A^m_\mu)$ and integrating over the field $ A^m_\mu $. If we do so we get $$\begin{aligned}
\label{confinement}
Z =\int \D x_{\mu}(\xi) \D B'_{\mu\nu}&&
\delta\Big[\frac1e\partial_\mu\Sigma^{\mu\nu}(x)
+ j^\nu_m(x)\Big] \nonumber\\
&&\exp\left[i\int\left\{\frac 1{12} H_{\mu\nu\rho}H^{\mu\nu\rho}
- \frac 14 m^2 {B'}^2_{\mu\nu}-
{\frac m{2e}} \Sigma_{\mu\nu}{B'}^{\mu\nu} \right\}\right] \,, \end{aligned}$$ with the delta functional enforcing the conservation law (\[mono.coneq\]). These strings are thus analogous to the confining strings in three dimensions [@Polyakov:1996nc]. The field $A^m_\mu$ no longer appears; the only gauge field present is $B'_{\mu\nu}$, which mediates the direct interaction between the confining strings.
The delta functional in Eq. (\[confinement\]) enforces that at every point of space-time the monopole current cancels the current of the end points of the flux tube, so the monopole current can be non-zero only at the ends of the flux tubes. Eq. (\[confinement\]) contains no Abelian gauge field $A^m_{\mu}$, only a massive second-rank tensor gauge field. All this confirms the permanent attachment of monopoles to the ends of the flux tubes, which does not allow the gauge flux to escape from the tubes. There are important differences between the results obtained from this construction and those obtained using two adjoint scalars. In the two-adjoint case the mass of the Abelian photon vanishes if the two adjoint vevs are aligned in the same direction, but this cannot happen for one adjoint and one fundamental scalar. Also, in the present case flux confinement is possible for all winding numbers of the string.
[0]{} S. Mandelstam, Phys. Rept. [**23**]{} (1976) 245. Y. Nambu, Phys. Rept. [**23**]{} (1976) 250. Y. Nambu, Phys. Rev. D [**10**]{} (1974) 4262.
A. A. Abrikosov, Sov. Phys. JETP [**5**]{}, 1174 (1957) \[Zh. Eksp. Teor. Fiz. [**32**]{}, 1442 (1957)\]. H. B. Nielsen and P. Olesen, Nucl. Phys. B [**61**]{}, 45 (1973). Y. Nambu, Nucl. Phys. B [**130**]{} (1977) 505. H. J. de Vega, Phys. Rev. D [**18**]{}, 2932 (1978).
R. Auzzi, S. Bolognesi, J. Evslin, K. Konishi and A. Yung, Nucl. Phys. B [**673**]{} (2003) 187
A. Hanany and D. Tong, JHEP [**0404**]{} (2004) 066
M. Shifman and A. Yung, Phys. Rev. D [**66**]{}, 045012 (2002)
G. ’t Hooft, “Monopoles, instantons and confinement,” arXiv:hep-th/0010225.
C. Chatterjee and A. Lahiri, JHEP [**0909**]{}, 010 (2009)
G. ’t Hooft, Nucl. Phys. B [**79**]{}, 276 (1974). A. M. Polyakov, JETP Lett. [**20**]{}, 194 (1974) \[Pisma Zh. Eksp. Teor. Fiz. [**20**]{}, 430 (1974)\].
M. K. Prasad and C. M. Sommerfield, Phys. Rev. Lett. [**35**]{}, 760 (1975).
R. L. Davis and E. P. S. Shellard, Phys. Lett. B [**214**]{}, 219 (1988). M. Mathur and H. S. Sharatchandra, Phys. Rev. Lett. [**66**]{}, 3097 (1991). K. M. Lee, Phys. Rev. D [**48**]{}, 2493 (1993) E. T. Akhmedov, M. N. Chernodub, M. I. Polikarpov and M. A. Zubkov, Phys. Rev. D [**53**]{}, 2087 (1996) C. Chatterjee and A. Lahiri, Europhys. Lett. [**76**]{}, 1068 (2006)
M. Hindmarsh and T. W. B. Kibble, Phys. Rev. Lett. [**55**]{} (1985) 2398.
M. A. C. Kneipp, Phys. Rev. D [**69**]{} (2004) 045007
R. Auzzi, S. Bolognesi, J. Evslin and K. Konishi, Nucl. Phys. B [**686**]{} (2004) 119 M. Eto [*et al.*]{}, Nucl. Phys. B [**780**]{} (2007) 161
E. Corrigan, D. I. Olive, D. B. Fairlie and J. Nuyts, Nucl. Phys. B [**106**]{}, 475 (1976). F. A. Bais, Phys. Lett. B [**64**]{}, 465 (1976).
P. A. M. Dirac, Proc. Roy. Soc. Lond. A [**133**]{} (1931) 60.
E. C. Marino, J. Phys. A [**39**]{}, L277 (2006). P. Orland, Nucl. Phys. B [**428**]{}, 221 (1994)
J. Polchinski and A. Strominger, Phys. Rev. Lett. [**67**]{}, 1681 (1991).
A. M. Polyakov, Nucl. Phys. B [**486**]{}, 23 (1997)
**Occurrence of periodic Lamé functions**
**at bifurcations in chaotic Hamiltonian systems**
**M Brack$^1$, M Mehta$^1$ and K Tanaka$^{1,2}$**
*$^1$Institute for Theoretical Physics, University of Regensburg, D-93040 Regensburg, Germany*
*$^2$Dept. of Physics, University of Saskatchewan, Saskatoon, SK, Canada S7N5E2*
**Abstract**
Introduction
============
One of the well-established routes to chaos in maps is the so-called Feigenbaum scenario [@feig], which consists of a cascade of successive period-doubling bifurcations of pitchfork type. They were first discussed for the 1-dimensional logistic map by Feigenbaum [@feig] and then also found in the area-conserving two-dimensional Hénon map [@fei2; @fei3], although the numerical scaling constants found there differ from those in the 1-dimensional case. One of us (M.B.) has recently investigated [@mbgu] similar cascades of pitchfork bifurcations occurring in 2-dimensional Hamiltonian systems with mixed dynamics, whereby the scaling constants can be determined analytically and depend on the potential parameters. In the present paper, we shall show that the new orbits born at these bifurcations are, near the bifurcation points, given analytically by periodic solutions of a linear second-order differential equation studied over 160 years ago by G. Lamé [@lame], and therefore called the “periodic Lamé functions” [@strt]. They were classified uniquely in 1940 by Ince [@inc1], who also derived their Fourier series expansions [@inc2]. We find that these not only reproduce accurately the periodic orbits found numerically by solving the equations of motion at the bifurcations, but that in the Hénon-Heiles [@hh] and similar potentials the Lamé functions can also be used to describe the evolution of the bifurcated orbits at higher energies. A particularly interesting case is the homogeneous quartic oscillator, for which the Lamé functions become finite polynomials in terms of Jacobi elliptic functions. Here we can also find analytical expressions for the algebraic Lamé functions which describe the orbits created at period-doubling bifurcations of island-chain type.
Bifurcations of a straight-line librating orbit {#stasec}
===============================================
We start from an autonomous two-dimensional Hamiltonian of a particle with unit mass in a smooth potential $V(x,y)$ $$H = \frac12\,(p_x^2 + p_y^2) + V(x,y)\,.$$ Assume that there exists a straight-line librating orbit, called A, along the $y$ axis, so that $$x_A(t)\equiv 0\,, \qquad y_A(t)=y_A(t+\TA)$$ are solutions of the equations of motion and $\TA$ is the period of the A orbit. Its stability is obtained from the stability matrix $\MA$ that describes the propagation of the linearized flow of a small perturbation $\delta x(t)$, $\delta p_x(t)=\delta\tx(t)$ transverse to the orbit A: $$\left(\begin{array}{r} \delta x(\TA)\\ \delta p_x(\TA) \end{array}\right) = \MA \left(\begin{array}{r} \delta x(0)\\ \delta p_x(0) \end{array}\right).$$ When $-2 < \trMA < +2$, the orbit is stable; for $|\trMA|>2$ it is unstable. Marginally stable orbits with $\trMA=+2$ occur in systems with continuous symmetries; in two dimensions this would imply integrability. We investigate here only non-integrable systems in which all orbits are isolated. Then, an orbit must undergo a bifurcation when $\trMA=+2$.
The elements of the stability matrix $\MA$ can be calculated from solutions of the linearized equation of motion in the transverse $x$ direction, which we write in the Newtonian form $$\ddot{\delta x}(t) + \left.\frac{\partial^2 V(x,y)}{\partial x^2}\right|_{x=0,\,y=y_A(t)} \delta x(t) = 0\,.$$ This equation is identical to Hill’s equation [@hill] in its standard form [@mawi] $$\ddot{\delta x}(t) + \left[\lambda + Q(t)\right] \delta x(t) = 0\,, \label{hill}$$ where $Q(t)$ is a $\TA$ (or $\TA/2$) periodic function whose constant Fourier component is zero. In general, the solutions of Eq. (\[hill\]) are non-periodic. However, periodic solutions with period $\TA$ (or $\TA/2$) and multiples thereof exist for specific discrete values of $\lambda$. This happens exactly at bifurcations of the A orbit where $\trMA=+2$. The periodic solutions $\delta x(t)$ found at these discrete values of $\lambda$ describe the $x$ motion of the bifurcated orbits infinitely close to the bifurcation point. A special case of the Hill equation is the Lamé equation, which we discuss in the following section.
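In numerical practice, $\MA$ and its discriminant $\trMA$ are obtained by propagating the two unit initial conditions of the linearized flow over one period. A minimal Python/SciPy sketch (the function name and the Mathieu-type modulation used to exercise it are illustrative stand-ins, not one of the potentials studied below):

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(Q, T, lam=0.0):
    """Stability matrix M of Hill's equation  x'' + [lam + Q(t)] x = 0
    over one period T.  The columns of M are the solutions (x, p_x)
    started from (1, 0) and (0, 1) at t = 0, evaluated at t = T."""
    def rhs(t, u):
        x, px = u
        return [px, -(lam + Q(t)) * x]

    M = np.empty((2, 2))
    for j, u0 in enumerate(([1.0, 0.0], [0.0, 1.0])):
        sol = solve_ivp(rhs, (0.0, T), u0, rtol=1e-11, atol=1e-12)
        M[:, j] = sol.y[:, -1]
    return M

# Example: a Mathieu-type modulation Q(t) = q cos(2t), period T = pi
M = monodromy(lambda t: 0.6 * np.cos(2.0 * t), np.pi, lam=1.0)
trM = np.trace(M)          # |trM| < 2: stable, |trM| > 2: unstable
detM = np.linalg.det(M)    # = 1 for an area-conserving (Hamiltonian) flow
```

The unit determinant of $\MA$ is a useful consistency check on the integration accuracy.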
The periodic Lamé functions {#lamsec}
===========================
One of the standard forms of the Lamé equation reads [@erde] $$\Lambda''(z) + \left[h - n(n+1)\,k^2\sn^2(z,k)\right]\Lambda(z) = 0\,, \label{lameq}$$ where $\sn(z,k)$ is a Jacobi elliptic function with modulus $k$ limited by $0\le k < 1$. The real period of $\sn(z,k)$ in the variable $z$ is $4\K$, where $$\K = K(k) = F(\pi/2,k)$$ is the complete elliptic integral of the first kind with modulus $k$. We follow throughout this paper the notation of Gradshteyn and Ryzhik [@gr] for elliptic functions and integrals. We are interested here only in real solutions for $\Lambda(z)$ with real argument $z$. Hence $h$ and $n(n+1)$ are assumed here to be arbitrary real constants. This means that $n$ is either real, or complex with real part $-\frac12$. There is a vast literature on the periodic solutions of Eq. (\[lameq\]); for an exhaustive presentation of their definition and series expansions, as well as the most relevant literature, we refer to Erdélyi [ *et al*]{} [@erde]. (See also [@strt] for literature prior to 1932.) Ince [@inc1; @inc2] has introduced a unique classification and nomenclature for the four types of periodic solutions, calling them Ec$_n^m(z)$ and Es$_n^m(z)$, where $n$ is the parameter appearing in Eq. (\[lameq\]) and $m$ is an integer giving the number of zeros in the interval $0\leq z < 2\K$. Following a slight redefinition by Erdélyi [@erd1], the Ec$(z)$ are even and the Es$(z)$ are odd functions of $z-\K$, respectively. When $m$ is an even integer, the Lamé functions have the period $2\K$ in the variable $z$, which is the same as the period of $\sn^2(z,k)$ appearing in Eq. (\[lameq\]); when $m$ is odd, they have the period $4\K$. Solutions with period $2p\K$ ($p=3,4,\dots$) can also be found; we shall discuss some solutions with period $8\K$ further below. All these periodic solutions exist only for discrete eigenvalues of $h$, denoted by $a_n^m$ and $b_n^m$ for the Ec$_n^m$ and Es$_n^m$, respectively; there exists exactly one solution of each of the above four types of Lamé functions for each $m\geq 0$.
The eigenvalues of $h$ can, in principle, be found by solving the characteristic equation obtained from an infinite continued fraction [@inc1] which is, however, rather difficult to evaluate in the general case. In the context of our paper, they are determined by bifurcations of a linear periodic orbit and we obtain them therefore from a numerical calculation of its stability discriminant $\trMA$.
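For the numerical work it is worth noting that SciPy parametrizes the Jacobi elliptic functions and complete elliptic integrals by the parameter $m=k^2$ rather than by the modulus $k$; a small sketch:

```python
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.9                  # elliptic modulus as used in the text
m = k**2                 # SciPy convention: parameter m = k^2
K = ellipk(m)            # quarter period K(k)

z = np.linspace(0.0, 4.0 * K, 9)
sn, cn, dn, ph = ellipj(z, m)    # sn, cn, dn and the amplitude am(z,k)

# sn(z,k) has real period 4K, so sn^2 -- the coefficient function of
# the Lame equation -- is 2K-periodic.
```

Mixing up $k$ and $m=k^2$ is a classic source of silently wrong periods in such calculations.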
The Fourier expansions derived by Ince [@inc2], with the modification by Erdélyi [@erd1], are given in terms of the variable $$\zeta = \pi/2 - \am(z,k)\,, \label{zeta}$$ where $\am(z,k)=\arcsin[\sn(z,k)]$, and read as follows: $$\begin{aligned}
{\rm Ec}_n^{2m}(z) &=&\frac12 A_0 + \sum_{r=1}^\infty A_{2r}\cos(2r\zeta)
\,,\qquad\qquad\qquad(\hbox{period }2\K)\label{ece}\\
{\rm Ec}_n^{2m+1}(z)&=&\sum_{r=0}^\infty A_{2r+1}\cos[(2r+1)\zeta]\,,\!
\qquad\qquad\qquad (\hbox{period }4\K) \label{eco}\\
{\rm Es}_n^{2m}(z) &=&\sum_{r=1}^\infty B_{2r}\sin(2r\zeta)\,,\qquad\quad\;
\qquad\qquad\qquad (\hbox{period }2\K)\label{ese}\\
{\rm Es}_n^{2m+1}(z)&=&\sum_{r=0}^\infty B_{2r+1}\sin[(2r+1)\zeta]\,,
\qquad\qquad\qquad (\hbox{period } 4\K) \label{eso}\end{aligned}$$ with $m=0,$ 1, 2, … The expansion coefficients can be calculated by two-step recurrence relations; we give them here only for the $A_{2r}$: $$\begin{aligned}
(n-1)(n+2)\,k^2 A_2 & = & \left[2h-k^2n(n+1)\right] A_0\,, \label{recrel0}\\
(n-2r-1)(n+2r+2)\,k^2A_{2r+2} & = & 2\left[2h - 4r^2(2-k^2) - k^2n(n+1)\right]A_{2r}\nonumber\\
& & -\, (n-2r+2)(n+2r-1)\,k^2 A_{2r-2}\,,\label{recrel}\end{aligned}$$ (with $r=1,$ 2, 3, …) and refer to Erdélyi ([@erde], ch 15.5.1) for the other recurrence relations, which look very similar.
Although the series - are known [@inc2] to converge for $k<1$, they turned out to be semiconvergent in our numerical calculations for the cases with complex $n$, due to the fact that the characteristic values of $h$ were only determined approximately. We have truncated the above series at the value $r_{max}$ where the corresponding coefficient has its smallest absolute value before starting to diverge. The cut-off values $r_{max}$ were found to increase with the order $m$ of the Lamé function; their values are given in the Tables \[hhlame\] and \[r4lame\] in Secs. \[hhsec\] and \[h4sec\], respectively.
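The truncation rule just described can be stated generically: scan the coefficients as they are produced by the recurrence, record where their magnitude is smallest, and cut the sum there once the terms start to grow again. A sketch (the factorially divergent test sequence is an artificial stand-in for the semiconvergent Lamé coefficients):

```python
import math

def r_max(coeffs):
    """Index of the smallest |coefficient|: the truncation point of a
    semiconvergent series (cut before the terms start to diverge)."""
    best_r, best = 0, float("inf")
    for r, a in enumerate(coeffs):
        if abs(a) < best:
            best_r, best = r, abs(a)
        elif abs(a) > 1e6 * best:   # terms clearly diverging; stop scanning
            break
    return best_r

# Stand-in semiconvergent sequence: a_r = r!/10^r decreases until r ~ 10,
# then grows factorially.
coeffs = [math.factorial(r) / 10.0**r for r in range(40)]
```

The cut-off so obtained plays the role of the $r_{max}$ values quoted in the tables.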
When $n$ is an integer, the Fourier series terminate at finite values of $r$. The Lamé functions then become [@inc1] finite polynomials in the Jacobi elliptic functions $\sn(z)$, $\dn(z)$, and $\cn(z)$, and are called the “Lamé polynomials” in short. In Sec. \[q4sec\] we will encounter a special case of the Lamé equation in which $h$ and $n$ are not independent, but where $h=\frac12 n(n+1)$ along with $k^2=\frac12$. Then, to each integer $n$ there exists only one value of $m$. Although the lowest few polynomials of this type and their eigenvalues of $h$ are included in the tables given by Ince [@inc1], we give below their explicit expressions, which take a particularly simple form. The basic four types of solutions correspond to the four rest classes modulo 4 of the integer $n$. With $p=0,$ 1, 2, 3, we obtain the following sums which are finite since the expansion coefficients become identically zero for $r>p$: $$\begin{aligned}
{\rm Ec}_{4p}^{2p}(z)&=&\sum_{r=0}^p A_{4r}\,\cn^{4r}(z)\,,\qquad\qquad\qquad\;\;\,(\hbox{period }2\K)\label{lce}\\
{\rm Ec}_{4p+1}^{2p+1}(z)&=&\cn(z)\sum_{r=0}^p C_{4r}\,\cn^{4r}(z)\,,\qquad\qquad(\hbox{period }4\K)\label{lco}\\
{\rm Es}_{4p+2}^{2p+1}(z)&=&\sn(z)\,\dn(z)\sum_{r=0}^p D_{4r}\,\cn^{4r}(z)\,,\quad\;\;(\hbox{period }4\K)\label{lso}\\
{\rm Es}_{4p+3}^{2p+2}(z)&=&\sn(z)\,\cn(z)\,\dn(z)\sum_{r=0}^p B_{4r}\,\cn^{4r}(z)\,.\;(\hbox{period }2\K)\label{lse}\end{aligned}$$ The simple one-step recurrence relations for the coefficients are (with $r=0,$ 1, 2, …, $p-1$) $$\begin{aligned}
(r+1)(4r+3)\,A_{4r+4} & = & - \left[p(4p+1)-r(4r+1)\right] A_{4r}\,,\\
2(r+1)(4r+5)\,C_{4r+4} & = & - \left[(2p+1)(4p+1)-(2r+1)(4r+1)\right] C_{4r}\,,\\
2(r+1)(4r+3)\,D_{4r+4} & = & - \left[(2p+1)(4p+3)-(2r+1)(4r+3)\right] D_{4r}\,,\\
(r+1)(4r+5)\,B_{4r+4} & = & - \left[(p+1)(4p+3)-(r+1)(4r+3)\right] B_{4r}\,.\end{aligned}$$ The first sixteen Lamé polynomials obtained from the above equations are given explicitly in Sec. \[q4sec\], with the normalization $A_0=B_0=C_0=D_0=1$.
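Because these recurrences terminate, the resulting Lamé polynomials can be checked mechanically against the Lamé equation. A Python/SciPy sketch for the first family (function names are illustrative; the finite sum is taken in powers of $\cn^4(z)$, and SciPy’s $m=k^2$ convention applies), verifying with a central-difference second derivative that the polynomial built from the $A_{4r}$ recurrence solves the equation for $k^2=\frac12$, $h=\frac12 n(n+1)$:

```python
import numpy as np
from scipy.special import ellipj

def ec_coeffs(p):
    """A_{4r}, r = 0..p, from the one-step recurrence
    (r+1)(4r+3) A_{4r+4} = -[p(4p+1) - r(4r+1)] A_{4r},  with A_0 = 1."""
    A = [1.0]
    for r in range(p):
        A.append(-(p * (4*p + 1) - r * (4*r + 1))
                 / ((r + 1) * (4*r + 3)) * A[r])
    return A

def ec(z, p, m=0.5):
    cn = ellipj(np.asarray(z), m)[1]        # SciPy: parameter m = k^2
    return sum(a * cn**(4*r) for r, a in enumerate(ec_coeffs(p)))

# Residual of the Lame equation for n = 4p, k^2 = 1/2, h = n(n+1)/2:
p, m = 2, 0.5
n, h = 4 * p, 0.5 * (4 * p) * (4 * p + 1)
z = np.linspace(0.3, 2.0, 40)
dz = 1e-3
lpp = (ec(z + dz, p) - 2.0 * ec(z, p) + ec(z - dz, p)) / dz**2  # Lambda''
sn = ellipj(z, m)[0]
residual = lpp + (h - n * (n + 1) * m * sn**2) * ec(z, p)       # ~ 0
```

For $p=1$ the recurrence gives $A_4=-\frac53$, i.e., Ec$_4^2(z)\propto 1-\frac53\cn^4(z)$.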
For half-integer values of $n$, one obtains periodic Lamé functions with period $8\K$ that have algebraic forms in the Jacobi elliptic functions and are called “algebraic Lamé functions” [@inc2; @erd2]. For the special case with $k^2=\frac12$ and $h=\frac12 n(n+1)$ we will encounter them in Sec. \[q4sec\] at period-doubling bifurcations of island-chain type. As shown in a beautiful paper by Ince [@inc2], there exist two linearly independent periodic solutions for $n=2p+\frac12$ with $p=0,$ 1, 2, … $$\begin{aligned}
{\rm Ec}_{2p+1/2}^{m+1/2}(z) & = & \sqrt{\dn(z)+\cn(z)}\,\Big\{\sum_{r=0}^p A_r\,\sn^{2r}(z)+\cn(z)\,\dn(z) \sum_{r=0}^{p-1}B_r\,\sn^{2r}(z)\Big\}\,,\label{ecalg}\\
{\rm Es}_{2p+1/2}^{m+1/2}(z) & = & \sqrt{\dn(z)-\cn(z)}\,\Big\{\sum_{r=0}^p A_r\,\sn^{2r}(z)-\cn(z)\,\dn(z) \sum_{r=0}^{p-1}B_r\,\sn^{2r}(z)\Big\}\,,\label{esalg}\end{aligned}$$ where $m$ is the number of zeros in the open interval $(0,2\K)$; the coefficients $A_r$ and $B_r$ are given by two coupled recurrence relations. For $k^2=\frac12$, $h=\frac12 n(n+1)=2p(p+1)+\frac38$, there is only one solution with $m=p$ for each $p$, and the recurrence relations read $$\begin{aligned}
2(2r+2)^2A_{r+1}+\left[4p(p+1)-6r(2r+1)\right]A_r-4(p-r+1)(p+r)A_{r-1}\qquad\nonumber\\
= 2(2r+2)B_{r+1}-3(2r+1)B_r+2rB_{r-1}\,,\label{alcof1}\\
(2r+2)^2B_{r+1}+\left[2p(p+1)-3(r+1)(2r+1)\right]B_r-2(p-r)(p+r+1)B_{r-1}\qquad\nonumber\\
= (2r+2)A_{r+1}\,.\label{alcof2}\end{aligned}$$ These relations hold for $r\geq 0$, provided that coefficients with negative indices $r$ are taken to be zero. It is quite easy to see that $B_r=A_{r+1}=0$ for $r\geq p$, which justifies the upper limits of the sums above. The coefficient $A_0$ may be used for the overall normalization of both functions.
The algebraic Lamé functions (\[ecalg\]) and (\[esalg\]) are even and odd functions of $z$, respectively, and related to each other by Ec$_{2p+1/2}^{m+1/2}(z+2\K)=$ Es$_{2p+1/2}^{m+1/2}(z)$, which amounts to a sign change in front of $\cn(z)$. Erdélyi [@erd2] showed that the linear combinations Ec$_{2p+1/2}^{m+1/2}(z)\;+$ Es$_{2p+1/2}^{m+1/2}(z)$ and Ec$_{2p+1/2}^{m+1/2}(z)\;-$ Es$_{2p+1/2}^{m+1/2}(z)$ are even and odd functions of $z-\K$, respectively, and proposed that they be used instead of the functions introduced by Ince. However, as we shall see in Sec. \[q4sec\], both pairs of independent functions are relevant in connection with period-doubling bifurcations. The Lamé functions found for $0\leq p\leq 3$ are given explicitly in Sec. \[q4sec\].
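The lowest of these functions is easy to check numerically: for $p=0$ the even algebraic Lamé function reduces, up to normalization, to $\sqrt{\dn(z)+\cn(z)}$, which one can verify solves the Lamé equation with $n=\frac12$ and $h=2p(p+1)+\frac38=\frac38$ at $k^2=\frac12$. A finite-difference sketch (Python/SciPy, $m=k^2$ convention):

```python
import numpy as np
from scipy.special import ellipj

m = 0.5                        # k^2 = 1/2 (SciPy parameter convention)
h = 3.0 / 8.0                  # = 2p(p+1) + 3/8 for p = 0, i.e. n = 1/2
nn1 = 0.5 * 1.5                # n(n+1) for n = 1/2

def ec_alg(z):
    sn, cn, dn, _ = ellipj(np.asarray(z), m)
    return np.sqrt(dn + cn)    # Ec_{1/2}^{1/2}, normalization A_0 = 1

z = np.linspace(0.2, 1.5, 30)
dz = 1e-3
lpp = (ec_alg(z + dz) - 2.0 * ec_alg(z) + ec_alg(z - dz)) / dz**2
sn = ellipj(z, m)[0]
residual = lpp + (h - nn1 * m * sn**2) * ec_alg(z)    # ~ 0
```

The odd partner is obtained by the sign change in front of $\cn(z)$, i.e., from $\sqrt{\dn(z)-\cn(z)}$.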
The Hénon-Heiles potential {#hhsec}
==========================
We investigate here the role of the straight-line librating orbit A in the Hénon-Heiles (HH) potential [@hh]. The Hamiltonian reads in scaled coordinates $$e = 6H = 6\left[\frac12\,(p_x^2+p_y^2) + V_{\rm H\!H}(x,y)\right],
\qquad V_{\rm H\!H}(x,y) = \frac12\,(x^2+y^2) + x^2y- \frac13\,y^3,
\label{hhxy}$$ whereby the scaled energy is $e=1$ at the saddle points. The Newton equations of motion are $$\begin{aligned}
\ddot x + (1 + 2y)\, x & = & 0\,, \label{hheomx}\\
\ddot y + y - y^2 + x^2& = & 0\, . \label{hheomy}\end{aligned}$$ These equations, and therefore the classical dynamics of the HH potential, depend only on the scaled energy $e$ as a single parameter. In our numerical investigations we have solved Eqs. (\[hheomx\],\[hheomy\]) and determined the periodic orbits by a Newton-Raphson iteration using their stability matrix [@kabr].
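The Newton equations (\[hheomx\],\[hheomy\]) are straightforward to integrate numerically; a minimal sketch of the flow used in such a computation (SciPy’s `solve_ivp` as an illustrative integrator; the Newton-Raphson root search on the return map is omitted here):

```python
import numpy as np
from scipy.integrate import solve_ivp

def hh_rhs(t, u):
    """Scaled Henon-Heiles flow, u = (x, y, px, py):
    x'' = -(1 + 2y) x ,   y'' = -y + y^2 - x^2 ."""
    x, y, px, py = u
    return [px, py, -(1.0 + 2.0 * y) * x, -y + y * y - x * x]

def scaled_energy(u):
    """e = 6H, conserved along the flow."""
    x, y, px, py = u
    return 6.0 * (0.5 * (px**2 + py**2)
                  + 0.5 * (x**2 + y**2) + x**2 * y - y**3 / 3.0)

u0 = [0.05, -0.3, 0.0, 0.1]            # arbitrary bound initial condition
sol = solve_ivp(hh_rhs, (0.0, 50.0), u0, rtol=1e-11, atol=1e-12)
drift = abs(scaled_energy(sol.y[:, -1]) - scaled_energy(u0))   # ~ 0
```

The conservation of the scaled energy $e$ provides a simple accuracy check on the integration.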
The basic periodic orbits with shortest periods found in the HH potential have been discussed mathematically by Churchill [@chur]; an exhaustive numerical search and classification has been performed by Davies [@hhdb]. We focus here on the straight-line A orbit which exists along the three symmetry axes of the HH potential, one of which coincides with the $y$ axis. This orbit undergoes an infinite series of bifurcations which were studied in Ref. [@mbgu]. They form a geometric progression on the scaled energy axis $e$, accumulating at the critical energy $e=1$ where the period $\TA$ becomes infinite and the orbit A becomes non-compact. All the orbits bifurcated from it exist, however, also at $e>1$ and stay in a bounded region of the $(x,y)$ space. Vieira and Ozorio de Almeida [@vioz] have investigated some of these orbits at $e>1$, both numerically and semi-analytically using Moser’s converging normal forms near a harmonic saddle. In Fig. \[selfsim\] we show the shapes of the orbits born at the isochronous bifurcations of orbit A, i.e., of those orbits having the same period $\TA$ as orbit A at the bifurcation points. The subscripts of their names O$_\sigma$ indicate their Maslov indices $\sigma$ needed in the context of the semiclassical periodic orbit theory [@gutz; @gubu; @book]. Although we make no use of the Maslov indices in the present paper, they are a convenient means of classification of the bifurcated orbits, as will become evident from the systematics below.
All orbits shown in Fig. \[selfsim\] are evaluated at the barrier energy $e=1$. In the upper part of the figure, the $x$ axis has been zoomed by a factor 0.163 from each panel to the next, in order to bring the shapes to the same scale. The orbits look practically identical in the lower 97% of their vertical range, but near the barrier ($y=1$) they make one more oscillation in the $x$ direction with each generation. In the lower part of the figure, we have zoomed also the $y$ axis by the same factor from one panel to the next and plotted the top part of each orbit, starting from $y=1$. In these blown-up
scales, the tips of the orbits exhibit a perfect self-similarity which has been described by analytical scaling constants in Ref. [@mbgu].
In Fig. \[zoom\] we show the stability discriminant $\trM$ for the orbit A (with its Maslov index $\sigma$ increasing by one unit at each bifurcation) and the orbits born at its isochronous bifurcations, plotted versus scaled energy $e$. In the lowest panel, we see the uppermost 3% of the energy scale available for the orbit A. The first bifurcation occurs at $e_5 =
0.969309$, where A$_5$ becomes unstable (with $\trMA$ $>2$) and the stable orbit R$_5$ is born. At $e_6 = 0.986709$, orbit A$_6$ becomes stable again and a new unstable orbit L$_6$ is born. In the middle panel, we have zoomed the uppermost 3% of the previous energy scale. Here the behavior of A repeats itself, with the new orbits R$_7$ and L$_8$ born at the next two bifurcations. Zooming with the same factor to the top panel, we see the birth of R$_9$ and L$_{10}$. This can be repeated [*ad infinitum*]{}: each new figure will be a replica of the previous one, with all the Maslov indices increased by two units and with $\trMA$ oscillating forever. This fractal behaviour is characteristic of the “Feigenbaum route to chaos” [@feig; @fei2; @fei3], although the present system is different from the Hénon map in that the pitchfork bifurcations seen in here are isochronous due to the reflection symmetry of the HH potential around the lines on which the bifurcating orbit A is situated (see Refs. [@nong; @maod; @then] for a discussion of the non-generic nature of the bifurcations in potentials with discrete symmetries). Also, the successive bifurcations happen here from one and the same orbit A, whereas in the standard Feigenbaum scenario one studies repeated period doubling bifurcations.
Note that the functions $\trM(e)$ of the bifurcated orbits in Fig. \[zoom\] are approximately linear and intersect at $e=1$ in two points, one for the librating orbits L$_\sigma$ with $\trM_{\rm L}(e=1)=+8.183$ (lying outside the figure), and one for the rotating orbits R$_\sigma$ with $\trM_{\rm R}(e=1)=-4.183$. We shall derive this linear behaviour in the limit $e\rightarrow1$ from an asymptotic analytical evaluation of $\trMA$ in a forthcoming publication [@fmmb].
Presently we focus on the shapes of the orbits born at the bifurcations of A. Infinitely close to the bifurcation points, their motion in the transverse $x$ direction is given by periodic solutions of the stability equation (\[hill\]). Note that this equation is identical with the full equation of motion (\[hheomx\]) in the $x$ direction, which happens to be linear for the HH potential. The function $y_A(t)$ describing the A orbit can easily be found analytically [@mbgu] and is given, with the initial condition $y_A(0)=y_1$, by $$y_A(t) = y_1 + (y_2-y_1)\,\sn^2(z,k)\,, \label{yaoft}$$ where $z$ is the scaled time variable $$z = \sqrt{(y_3-y_1)/6}\;t = at\,, \label{hhta}$$ and $y_i$ ($i=1,$ 2, 3) are the roots of the cubic equation $e=6\,V_{\rm H\!H} (x=0,y) = 3\,y^2-2\,y^3$. $y_1$ and $y_2$ are the turning points of the A orbit, whose period is $$\TA = 2\K/a = \K\sqrt{24/(y_3-y_1)}\,. \label{ta}$$ The modulus of the elliptic integral is given by $$k^2 = (y_2-y_1)/(y_3-y_1)$$ and tends to unity for $e\rightarrow 1$ where $y_2=y_3$. Rewriting Eq. (\[hheomx\]) in terms of the scaled time variable $z$, it becomes identical with the Lamé equation (\[lameq\]), with $$h = 6\,(1+2y_1)/(y_3-y_1)\,, \qquad n(n+1) = -12 \quad\Longrightarrow\quad n=-1/2\pm(i/2)\sqrt{47}\,. \label{hnhh}$$
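All quantities entering the Lamé equation for the A orbit follow from the scaled energy alone. A numerical sketch (the cubic roots are obtained with NumPy; the factor $a^2=(y_3-y_1)/6$ in the scaled time follows from inserting the $\sn^2$ solution into the equation of motion, and `ellipk` again takes $m=k^2$):

```python
import numpy as np
from scipy.special import ellipk

def a_orbit_parameters(e):
    """Turning points y1, y2 (and y3), modulus k^2, period TA, and the
    Lame constant h of the HH orbit A at scaled energy 0 < e < 1."""
    # Roots of the cubic e = 3 y^2 - 2 y^3, ordered y1 < y2 < y3:
    y1, y2, y3 = np.sort(np.roots([-2.0, 3.0, 0.0, -e]).real)
    k2 = (y2 - y1) / (y3 - y1)
    a = np.sqrt((y3 - y1) / 6.0)
    TA = 2.0 * ellipk(k2) / a          # SciPy's ellipk takes m = k^2
    h = 6.0 * (1.0 + 2.0 * y1) / (y3 - y1)
    return y1, y2, y3, k2, TA, h

# At e = 1/2 the roots are (1 - sqrt(3))/2, 1/2, (1 + sqrt(3))/2,
# so that k^2 = 1/2 exactly:
y1, y2, y3, k2, TA, h = a_orbit_parameters(0.5)
```

As $e\rightarrow 1$, $k^2\rightarrow 1$ and $\TA$ diverges logarithmically, in accordance with the accumulation of the bifurcation cascade at the saddle energy.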
$e^{\star}_\sigma$ O$_\sigma$ $x_\sigma(t)$ $r_{max}$ $P$ $e_\sigma$ O$_\sigma$ $x_\sigma(t)$ $r_{max}$ $P$
-------------------- ------------ ----------------- ----------- ----- ----------------- ------------ ----------------- ----------- -----
0.811715516 F$_9$ Ec$_n^3(at)$ 13 4 0.9693090904 R$_5$ Es$_n^4(at)$ 20 2
0.915214692 F$_{10}$ Es$_n^3(at)$ 12 4 0.9867092353 L$_6$ Ec$_n^4(at)$ 26 2
0.995013 F$_{13}$ Ec$_n^5(at)$ 25 4 0.9991878410 R$_7$ Es$_n^6(at)$ 39 2
0.99784905 F$_{14}$ Es$_n^5(at)$ 30 4 0.9996498 L$_8$ Ec$_n^6(at)$ 40 2
0.99986763 F$_{17}$ Ec$_n^7(at)$ 53 4 0.999978390 R$_9$ Es$_n^8(at)$ 67 2
0.99994292 F$_{18}$ Es$_n^7(at)$ 62 4 0.9999906955 L$_{10}$ Ec$_n^8(at)$ 104 2
0.999996482 F$_{21}$ Ec$_n^9(at)$ 123 4 0.999999424 R$_{11}$ Es$_n^{10}(at)$ 152 2
0.999998483 F$_{22}$ Es$_n^9(at)$ 162 4 0.9999997525 L$_{12}$ Ec$_n^{10}(at)$ 211 2
0.9999999065 F$_{25}$ Ec$_n^{11}(at)$ 290 4 0.99999998475 R$_{13}$ Es$_n^{12}(at)$ 450 2
0.9999999593 F$_{26}$ Es$_n^{11}(at)$ 276 4 0.99999999343 L$_{14}$ Ec$_n^{12}(at)$ 517 2
0.999999997514 F$_{29}$ Ec$_n^{13}(at)$ 765 4 0.9999999996046 R$_{15}$ Es$_n^{14}(at)$ 757 2
0.999999998928 F$_{30}$ Es$_n^{13}(at)$ 890 4 0.9999999998249 L$_{16}$ Ec$_n^{14}(at)$ 1203 2
Note that $h$ depends on the energy $e$. Hence, the discrete eigenvalues $h=a_n^m,$ $b_n^m$ can be directly related to the bifurcation energies $e_\sigma$ of the orbit A, and the corresponding Lamé functions Ec$_n^m(z)$ and Es$_n^m(z)$ to the motion $x(z)=x(at)$ of the new periodic orbits O$_\sigma$ born at the bifurcations. In Table \[hhlame\] we give the bifurcation energies $e_\sigma$ obtained from the numerical computation of $\trMA$, the names O$_\sigma$ of the bifurcated orbits, and the corresponding Lamé functions, with their periods $P$ in the variable $z=at$ given in units of $\K$. We also give the values $r_{max}$ at which the Fourier series (\[ece\])–(\[eso\]) have been truncated. The right part of the table contains the lowest isochronous bifurcations seen in Fig. \[zoom\] and the bifurcated orbits shown in Fig. \[selfsim\]; their Lamé functions all have the period 2$\K$.
The left part of the table contains the lowest non-trivial period-doubling bifurcations (where $\trMA=-2$), which are also of pitchfork type, and the names of the orbits born thereby. Their Lamé functions have the period 4$\K$. To avoid ambiguities, we denote their bifurcation energies by $e^{\star}_\sigma$. The period-doubling bifurcations of new orbits with the Maslov indices 11, 12, 15, 16, etc., are trivial in the sense that they just involve the second iterates of orbit A and of the bifurcated orbits R$_5$, L$_6$, R$_7$, L$_8$, etc. The shapes of the first six non-trivial new orbits born at these bifurcations have scaling properties similar to those of the orbits shown in Fig. \[selfsim\].
Mathematically speaking, the periodic solutions for $x_\sigma(t)$ in the form of the Lamé functions exist only at the bifurcation energies $e_\sigma$. However, the bifurcated orbits exist for all $e\geq e_\sigma$. As long as the amplitude of their $x$ motion remains small, it must be given by the Lamé equation with the constants appearing in Eq. (\[hnhh\]). But this equation has only periodic solutions when $h$ has an eigenvalue corresponding to a bifurcation energy $e_\sigma$. Therefore, the bifurcated orbits must keep their $y$ motion “frozen” at $y(t) = y_A(t)$ with the parameters corresponding to $e_\sigma$. Consequently, they also keep their periods at the bifurcation values. This has been confirmed numerically, as noticed already in [@mbgu], to hold up to $e=1$ and even beyond. Within the same small-amplitude limit of the $x$ motion, the energy of the $y$ motion is frozen at its value $e_\sigma$, and the excess energy $e-e_\sigma$ is consumed to rescale the amplitude of $x(t)$. In other words, we can determine the normalization of the Lamé function $x_\sigma(t)$ of each bifurcated orbit by exploiting the energy conservation. This is most easily done at the time $t_0=T_A/2$ where $y(t)$ has its maximum value, i.e., $y(t_0)=y_2$, ${\dot y}(t_0)=0$, and around which we know the symmetry of the Lamé functions. For the even functions Ec$_n^m$ we have ${\dot x}_\sigma(t_0)=0$, and $x_\sigma(t_0)$ is found from energy conservation to be $$x_\sigma(t_0) = \sqrt{\frac{e-e_\sigma}{3\,(1+2\,y_2)}}\,. \label{x0}$$ For the odd functions Es$_n^m$ we have $x_\sigma(t_0)=0$, and their slopes at $t_0$ are given by $$\dot x_\sigma(t_0) = \sqrt{\frac{e-e_\sigma}{3}}\,. \label{xd0}$$ In this way we can not only normalize the Lamé functions near the bifurcation points, but also predict their evolution at higher energies.
Figs. \[hhorb5\] - \[hhorb14\] show by solid lines some of the periodic orbits obtained numerically from solving the equations of motion at $e=1$, and compare them to those predicted in the frozen-$y$-motion approximation, using $y(t)= y_A(t)$ (given at their bifurcation energies $e_\sigma$ or $e^\star_\sigma$) and using for $x(t)$ the appropriately scaled Lamé functions of Table \[hhlame\]. We see that in all cases, even for the lowest bifurcations, the new orbits indeed keep, up to $e=1$, the $y$ motion acquired at their bifurcation energies: the two curves $y(t)$ and $y_A(t)$ can hardly be distinguished. As a consequence,
we can expect the functions $x(t)$ to be well described by the appropriate Lamé functions. This is, indeed, the case if the latter are correctly scaled. As we see, the normalization predicted by Eqs. (\[x0\]) and (\[xd0\]) becomes better the closer the bifurcation energy $e_\sigma$ lies to the saddle-point energy $e=1$. A rigorous justification of the frozen-$y$-motion approximation will be presented elsewhere [@fmmb].
We point out that all orbits born at the isochronous pitchfork bifurcations in the HH system are given by Lamé functions with period 2$\K$, since the orbit A is given by the function $y_A(t)$ in Eq. (\[yaoft\]) and hence has the same period as the $\sn^2$ function appearing in the Lamé equation. The Lamé functions with period 4$\K$ must therefore correspond to orbits born at period-doubling bifurcations (see the left part of Table \[hhlame\]).
Note also that according to bifurcation theory [@nong; @maod; @then; @ssun], two degenerate periodic orbits should be born at each isochronous pitchfork bifurcation. The librating orbits L$_\sigma$ come, indeed, in pairs that are symmetric to the $y$ axis, whereas the rotating orbits R$_\sigma$ can be run through in two opposite directions. Each of these pairs of orbits is, however, described by one and the same Lamé function for $x(t)$ which is invariant under the corresponding symmetry operation. This is in agreement with a theorem, proved by Ince [@inc2], that there cannot exist two linearly independent periodic solutions of the Lamé equation to the same characteristic value of $h$. An exception to this theorem is given by the transcendental and algebraic Lamé functions with integer or half-integer values of $n$ (see Sec. \[q4sec\] for an example).
We finally note that in the figures \[hhorb5\] - \[hhorb14\], the time scales are not always normalized such that $t=0$ corresponds to $y(0)=y_1$, as assumed in Eq. (\[yaoft\]), but were obtained rather randomly due to the way in which the periodic orbits were searched and found numerically. However, if we shift the time origin to $t'=0$ according to Eq. (\[yaoft\]), all figures illustrate how the Lamé functions Ec$_n^m(z)$ are even and the Es$_n^m(z)$ odd, according to their definition [@inc1; @erd2], around $t'=\K/a$ where $y(t')$ has its maximum. This demonstrates that the association of Lamé functions to bifurcated orbits allows one to understand (or predict) their symmetries.
The quartic Hénon-Heiles potential {#h4sec}
==================================
We next investigate the quartic Hénon-Heiles (H4) potential [@pert; @hhun] with the scaled Hamiltonian $$e = 4H = 4\left[\frac12\,(p_x^2+p_y^2) + V_{\rm H4}(x,y)\right], \qquad V_{\rm H4}(x,y) = \frac12\,(x^2+y^2) - \frac14\,(x^4+y^4) + \frac32\,x^2y^2, \label{r4xy}$$ which is similar to the HH potential but has four saddles at the scaled energy $e=1$; it has reflection symmetry at both coordinate axes and both diagonals. It contains straight-line librating orbits along all four symmetry lines; two of them, which we call again orbits A, oscillate between the two saddles lying on the coordinate axes. To be specific, we choose again the A orbit along the $y$ axis. It has the same behaviour as the A orbit in the HH potential, but it approaches a saddle at both ends. Its motion is, for $y_A(0)=0$, given by $$y_A(t) = y_1\,\sn(at,k)\,, \qquad a=y_2/\sqrt{2}\,, \qquad k = y_1/y_2\,, \label{yar4}$$ and its period is $$\TA = 4\sqrt{2}\,\K/y_2 = 4\K/a\,. \label{perar4}$$ Hereby $\pm y_1$ and $\pm y_2$ are the solutions of $e=4V_{\rm H4}(x=0,y) = 2\,y^2 -y^4$, i.e., $$y_1=\sqrt{1-\sqrt{1-e}}\,, \qquad y_2=\sqrt{1+\sqrt{1-e}}\,,$$ and $\pm y_1$ are the turning points of the orbit.
The linearized equation of motion in the $x$ direction, which decides about the stability of the orbit A, is for the H4 potential $$\ddot{\delta x}(t) + \left[1+3\,y_A^2(t)\right]\delta x(t) = 0\,,$$ neglecting here explicitly a term of order $\delta x^3$. Inserting the solution (\[yar4\]) for $y_A(t)$ and transforming to the scaled time variable $z=at$ leads again to the Lamé equation (\[lameq\]) with $$h = 2/y_2^2\,, \qquad n(n+1) = -6 \quad\Longrightarrow\quad n=-1/2\pm(i/2)\sqrt{23}\,.$$ Compared to the HH potential, we have now a new situation which is a consequence of the higher symmetry of the H4 potential: the periodic function $\sn^2(z,k)$ appearing in the stability equation has [*half*]{} the period, namely 2$\K$, of the orbit A itself, cf. Eq. (\[perar4\]). Therefore, all its periodic solutions with period $2\K$, corresponding to Lamé functions with an even number $m$ of zeros, also have the period $\TA/2$ at the bifurcations. The solutions involving the Lamé functions with odd $m$ share their period $4\K=a\TA$ with that of the A orbit.
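The H4 parameters can be sketched numerically in the same way as for the HH potential (writing the roots of $e = 2y^2 - y^4$ explicitly; the finite-difference check below verifies that $y_1\sn(at,k)$ indeed solves the on-axis equation of motion $\ddot y + y - y^3 = 0$):

```python
import numpy as np
from scipy.special import ellipj, ellipk

def h4_parameters(e):
    """Turning point y1, outer root y2, scaling a, modulus k, period TA,
    and Lame constant h of the H4 orbit A at scaled energy 0 < e < 1."""
    y1 = np.sqrt(1.0 - np.sqrt(1.0 - e))
    y2 = np.sqrt(1.0 + np.sqrt(1.0 - e))
    a, k = y2 / np.sqrt(2.0), y1 / y2
    TA = 4.0 * ellipk(k**2) / a        # SciPy's ellipk takes m = k^2
    h = 2.0 / y2**2
    return y1, y2, a, k, TA, h

# Check that y_A(t) = y1 sn(at, k) solves  y'' + y - y^3 = 0:
y1, y2, a, k, TA, h = h4_parameters(0.75)
t = np.linspace(0.1, 2.0, 25)
dt = 1e-3
yA = lambda s: y1 * ellipj(a * s, k**2)[0]
ypp = (yA(t + dt) - 2.0 * yA(t) + yA(t - dt)) / dt**2
residual = ypp + yA(t) - yA(t)**3          # ~ 0
```

At $e=0.75$, for instance, $k^2=1/3$ and $h=4/3$, well inside the stable band of the A orbit.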
The systematics of the isochronous bifurcations of the A orbit for increasing bifurcation energies $e_\sigma$ is given in Table \[r4lame\]. The new orbits appear with $m=2,$ 3, 4, …, with alternatingly odd and even Lamé functions. As in the HH case, the Ec$_n^m$ correspond to librations L$_\sigma$ and the Es$_n^m$ to rotations R$_\sigma$. They appear alternatingly as 2$\K$ and 4$\K$ periodic functions. The orbits given in the left part of the table are born stable and remain stable up to $e>1$, whereas those in the right part are unstable at all energies.
$e_\sigma$ O$_\sigma$ $x(t)$ $r_{max}$ $P$ $e_\sigma$ O$_\sigma$ $x(t)$ $r_{max}$ $P$
-------------- ------------ -------------- ----------- ----- -------------- ------------ -------------- ----------- -----
0.8561220 R$_5$ Es$_n^2(at)$ 7 2 0.8967139 L$_6$ Ec$_n^2(at)$ 9 2
0.9841765 L$_7$ Ec$_n^3(at)$ 11 4 0.9889128 R$_8$ Es$_n^3(at)$ 12 4
0.9982845 R$_9$ Es$_n^4(at)$ 18 2 0.9988004 L$_{10}$ Ec$_n^4(at)$ 22 2
0.9998140 L$_{11}$ Ec$_n^5(at)$ 29 4 0.99986995 R$_{12}$ Es$_n^5(at)$ 32 4
0.99997983 R$_{13}$ Es$_n^6(at)$ 46 2 0.99998590 L$_{14}$ Ec$_n^6(at)$ 48 2
0.999997812 L$_{15}$ Ec$_n^7(at)$ 77 4 0.9999984705 R$_{16}$ Es$_n^7(at)$ 98 4
0.9999997627 R$_{17}$ Es$_n^8(at)$ 117 2 0.9999998340 L$_{18}$ Ec$_n^8(at)$ 134 2
0.9999999742 L$_{19}$ Ec$_n^9(at)$ 194 4 0.9999999820 R$_{20}$ Es$_n^9(at)$ 236 4
The non-trivial period-doubling bifurcations in this potential are of island-chain type, and the orbits born thereby are given by Lamé functions of period 8$\K$. We will not investigate them here, but refer to the analogous situation in the quartic oscillator potential discussed in Sec. \[q4sec\].
In we show the stability discriminant $\trM$ versus energy $e$ for the orbit A and the orbits born at its lowest isochronous bifurcations. Different from the HH potential, here the functions $\trM(e)$ of the bifurcated orbits are, to a good approximation, quadratic in $e$. This can be derived analytically [@fmmb]. It is also striking that, different from , $\trM$ of all the orbits shown is always larger than or equal to $-2$. This behaviour, together with the systematics seen in , can be explained by the following arguments.
Bearing in mind that the stability matrix of the second iterate O$^2$ of a periodic orbit O is just M$_{\rm O}^2$, where M$_{\rm O}$ is that of the primitive orbit, one easily finds that its discriminant is {\rm tr\,M}_{{\rm O}^2} = {\rm tr}\left({\rm M}_{\rm O}^2\right) = \left({\rm tr\,M}_{\rm O}\right)^2 - 2, \[trmosq\]
which can never be less than $-2$. Hence, we can mathematically understand $\trM$ of the orbit A in to be that of a second iterate. Its primitive is half of the orbit A, having the same period as that of the function $\sn^2$ in the Lamé equation for its stability, and having a discriminant $\trM$ which oscillates around zero, exceeding the values $+2$ and $-2$ on both sides symmetrically, like $\trMA$ in the HH potential (). Their second iterates, which correspond to the full bifurcated orbits, therefore have a discriminant $\trM$ which is quadratic in $e$. These features, together with the systematics in , are all a consequence of the C$_{4v}$ symmetry of the H4 potential, including the reflection symmetry at the $x$ axis that divides the A orbit (and all the bifurcated orbits discussed here) into two equal halves. The quadratic behaviour of $\trM$ of the bifurcated orbits is also consistent with the fact that their next period-doubling bifurcations (where $\trM=-2$) are symmetry breaking (see Ref. [@then] for details).
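The lower bound $\trM\geq-2$ for any second iterate follows at once from the identity $(\trM_{\rm O})^2-2$; a two-line numerical illustration (the sampled range of primitive-orbit discriminants is arbitrary):

```python
import numpy as np

# discriminant of a second iterate: trM_{O^2} = (trM_O)^2 - 2
trM_O  = np.linspace(-6.0, 6.0, 1201)   # arbitrary sample of primitive-orbit discriminants
trM_O2 = trM_O**2 - 2.0

assert trM_O2.min() >= -2.0             # the bound trM >= -2 for any second iterate
print(trM_O2.min())                     # -2.0, attained where trM_O = 0
```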
Like in the HH case, the values of $\trM$ of the bifurcated orbits intersect in two points at $e=1$, one with $\trM(e=1)=-1.711$ for the orbits born stable, and one with $\trM(e=1) =+9.991$ for the orbits born unstable.
We thus obtain the result that Lamé functions with both periods 2$\K$ and 4$\K$ describe the orbits born at isochronous bifurcations of the A orbit. Two examples are shown in Figs. \[r4orb14\] and \[r4orb16\], where the $x$ motion of the orbits L$_{14}$ and R$_{16}$ is given by the functions Ec$_n^6(z)$ and Es$_n^7(z)$, respectively.
Whereas the latter shares its period with orbit A, the former has half its period. Like before, these orbits are evaluated at the critical energy $e=1$. The normalization of the Lamé functions has been chosen as for the HH potential, using the frozen-$y$-motion approximation and energy conservation, leading here to x\_(t\_0) = \_(t\_0) = .
The homogeneous quartic oscillator {#q4sec}
==================================
We now turn to the quartic oscillator (Q4) potential V\_[Q4]{}(x,y)= \frac14(x^4+y^4) + \frac{\epsilon}{2}\,x^2 y^2 \[q4xy\] which has been the object of several classical and semiclassical studies [@q4po]. Since it is homogeneous in the coordinates, the Hamiltonian can be rescaled together with coordinates and time such that its classical mechanics is independent of energy. Consequently, the system parameter is here not the energy but the parameter $\epsilon$. The potential has the same symmetry as the H4 potential and, correspondingly, possesses periodic straight-line orbits along both axes. The motion of the A orbit along the $y$ axis is given by y\_A(t) = y\_0\,\cn(y\_0t,k), \quad y\_0=(4E)\^[1/4]{}, \quad k\^2=1/2 \[yaq4\] with the period $T_A = 4\,\K/y_0$. Its turning points are $\pm y_0$. Note that this solution does not depend on the value of $\epsilon$. The stability of the orbit A, however, does depend on $\epsilon$. The linearized equation of motion for the transverse $x$ motion yields, after transformation to the coordinate $z=y_0t$, the Hill equation x''(z) + \epsilon\left[1-\sn^2(z,k)\right]x(z) = 0. \[lameq4\] This is a special case of the Lamé equation with h = \epsilon = \frac12\,n(n+1), \[hn\] where we have used $k^2=1/2$. The nice feature is that we here know analytically the eigenvalues $h=h_n$ of the Lamé equation, namely those given in Eq. . This agrees with the analytical result for the stability discriminant of the A orbit, which has been derived long ago by Yoshida [@yosh]: \trMA = 4\cos\left(\frac{\pi}{2}\sqrt{1+8\epsilon}\,\right) + 2. \[trmaq4\] It is easy to see that the bifurcation condition $\trMA=+2$ leads exactly to the values $\epsilon_n = \frac12\,n(n+1)$ of the parameter $\epsilon$.
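Yoshida's half-orbit result trM$_{{\rm A}/2}=2\sqrt{2}\cos[\pi\sqrt{1+8\epsilon}/4]$ (quoted with the reference list below), combined with trM$_{\rm A}=({\rm trM}_{{\rm A}/2})^2-2$, lets one verify the bifurcation energies numerically: the isochronous condition trM$_{\rm A}=+2$ holds exactly at $\epsilon_n=n(n+1)/2$ (the values 0, 1, 3, 6, 10, ... of the table), and trM$_{\rm A}=-2$ at $\epsilon_p=2p(p+1)+3/8$; a sketch:

```python
import math

def trM_A(eps):
    # Yoshida: trM_{A/2} = 2*sqrt(2)*cos(pi*sqrt(1+8*eps)/4); full orbit: trM_A = trM_{A/2}^2 - 2
    half = 2.0*math.sqrt(2.0)*math.cos(math.pi*math.sqrt(1.0 + 8.0*eps)/4.0)
    return half*half - 2.0

# isochronous bifurcations (trM = +2) at eps_n = n(n+1)/2: 0, 1, 3, 6, 10, ...
for n in range(16):
    assert abs(trM_A(n*(n + 1)/2) - 2.0) < 1e-9

# period doublings (trM = -2) at eps_p = 2p(p+1) + 3/8: 3/8, 4+3/8, 12+3/8, ...
for p in range(6):
    assert abs(trM_A(2*p*(p + 1) + 3/8) + 2.0) < 1e-9

print("all bifurcation values verified")
```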
The periodic solutions of the Lamé equation at the bifurcation values of $h=\epsilon$ are the Lamé polynomials discussed already in Sec. \[lamsec\]. Their explicit forms for $n=0,\,1,\,\dots,$ 15 are given in , using the short notation $\cn=\cn(z,k)$, etc., and a normalization such that their leading coefficient is unity. We also give in the names of the new-born orbits, using the same nomenclature as for the H4 potential in the previous section. Their shapes are shown in . They have exactly the same topologies as the orbits of the H4 potential and are again described by Lamé functions of pairwise alternating periods $2\K$ and $4\K$, as seen also in . Each of these orbits has a discrete degeneracy of 2, which is due to the time reversal symmetry for the rotations and to the reflection symmetries about the coordinate axes for the librations.
A special comment is due concerning the cases $n=1$ and $n=2$ which correspond to $\epsilon=1$ and 3, respectively. For these values of the parameter $\epsilon$, the Q4 potential is integrable [@q4po; @yosh] and no bifurcations occur for the shortest orbits. Nevertheless, the Lamé equation possesses mathematically the solutions Ec$_1^1$ and Es$_2^1$, respectively. The orbits B$_3$ and C$_4$ given in and have topologically the shapes given by these Lamé polynomials, but it should be emphasized that they are not generated through bifurcations but are generic orbits existing at all values of $\epsilon$. Under the symmetry operation $\epsilon\longrightarrow
(3-\epsilon)/(1+\epsilon)$, which corresponds to a rotation about 45 degrees and simultaneous stretching of the potential [@q4po], the orbits of type A are mapped onto the
---------------------------------------------------------------------------------------------------------
$n$ $\epsilon_n$ O$_\sigma$ Lamé polynomial for $x(t)$ $P$
----- -------------- ------------ ----------------------------------------------------------------- -----
0 0 L$_3$ Ec$_0^0\;$ = 1 2
1 1 \[B$_3$\] \[Ec$_1^1\;$ = cn\] 4
2 3 \[C$_4$\] \[Es$_2^1\;$ = dn\] 4
3 6 R$_5$ Es$_3^2\;$ = cndn 2
4 10 L$_6$ Ec$_4^2\;$ = $1 - \frac53\,\cn^4$ 2
5 15 L$_7$ Ec$_5^3\;$ = cn($1-\frac75\,\cn^4$) 4
6 21 R$_8$ Es$_6^3\;$ = dn($1-3\,\cn^4$) 4
7 28 R$_9$ Es$_7^4\;$ = cndn($1-\frac{11}{5}\,\cn^4$) 2
8 36 L$_{10}$ Ec$_8^4\;$ = $1-6\,\cn^4+\frac{39}{7}\,\cn^8$ 2
9 45 L$_{11}$ Ec$_9^5\;$ = cn($1-\frac{22}{5}\,\cn^4+\frac{11}{3}\,\cn^8$) 4
10 55 R$_{12}$ Es$_{10}^5$ = dn($1-\frac{26}{3}\,\cn^4+\frac{221}{21}\,\cn^8$) 4
11 66 R$_{13}$ Es$_{11}^6$ = cndn($1-6\,\cn^4+\frac{19}{3}\,\cn^8$) 2
12 78 L$_{14}$ Ec$_{12}^6$ = ($1-13\,\cn^4+\frac{221}{7}\,\cn^8 2
-\frac{221}{11}\,\cn^{12}$)
13 91 L$_{15}$ Ec$_{13}^7$ = cn($1-9\,\cn^4+19\,\cn^8 4
-\frac{437}{39}\,\cn^{12}$)
14 105 R$_{16}$ Es$_{14}^7$ = dn($1-17\,\cn^4+51\,\cn^8 4
-\frac{425}{11}\,\cn^{12}$)
15 120 R$_{17}$ Es$_{15}^8$ = cndn($1-\frac{57}{5}\,\cn^4+\frac{437}{15}\,\cn^8 2
-\frac{1311}{65}\,\cn^{12}$)
---------------------------------------------------------------------------------------------------------
orbits of type B and vice versa, and the orbits of type C are mapped onto themselves. This is seen easily in the Lamé polynomial describing the B orbit, Ec$_1^1(z)$ = cn$(z,k)$, which is proportional to the function describing the A orbit.
The scaling properties of these orbits and their evolution away from the bifurcation values $\epsilon_\sigma$ are more difficult to analyze than in the HH and H4 potentials and will be discussed elsewhere [@mmmb].
We now discuss period-doubling bifurcations of the A orbit in the Q4 potential. There is a series of trivial period doublings which just involve the second iterates of the orbit A and the orbits born at its isochronous bifurcations and listed in . The non-trivial period doublings occur when $\trMA=-2$, which leads with to the critical values \epsilon_p = 2p(p+1)+\frac38, \quad p = 0,1,2,… This is exactly one of the conditions [@inc2] for the existence of period-8$\K$ solutions of , namely that obtained by inserting $n=(4p+1)/2$ into Eq. . The solutions are the algebraic Lamé functions, given (up to $p=3$) in below. These bifurcations are of the island-chain type (see, e.g., Ref. [@ssun]): the quantity trM of the second iterate of orbit A – let us call it orbit A$^2$ – touches the value $+2$, but the orbit A$^2$ remains stable on either side. At the bifurcation, two doubly-degenerate orbits are born, one stable and one unstable. The situation is illustrated in around the bifurcation at $\epsilon=4+3/8$ ($p=1$). The unstable new orbit is here called F$_{10}$, and the stable new orbit is called P$_9$. Their shapes, together with those born at the other period doublings listed in , are shown in below. Their degenerate symmetry partners are called F$'_\sigma$ for the librating orbits (obtained by reflecting the F$_\sigma$ orbits at the $x$ axis) and P$'_\sigma$ for the rotating orbits (obtained by time reversal of the P$_\sigma$ orbits).
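The identification of the period-doubling energies with half-integer Lamé indices can be checked in exact rational arithmetic: inserting $n=(4p+1)/2$ into the eigenvalue relation $\epsilon=n(n+1)/2$ reproduces $\epsilon_p=2p(p+1)+3/8$ identically. A minimal sketch:

```python
from fractions import Fraction

for p in range(6):
    n = Fraction(4*p + 1, 2)                  # half-integer Lamé index n = (4p+1)/2
    eps_lame = n*(n + 1)/2                    # eigenvalue relation eps = n(n+1)/2
    eps_pd   = 2*p*(p + 1) + Fraction(3, 8)   # period-doubling values 3/8, 4+3/8, 12+3/8, ...
    assert eps_lame == eps_pd
print("exact agreement for p = 0..5")
```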
According to the theory of Ince [@inc2], the algebraic Lamé functions of period 8$\K$ are one exceptional case where two independent periodic solutions can coexist for the same critical value of $h$. These are the functions Ec$_{2p+1/2}^{\,p+1/2}$ and Es$_{2p+1/2}^{\,p+1/2}$ defined in Eqs. . As we see from , they correspond to the unstable orbits of type F$_\sigma$ and F$'_\sigma$. In contrast to the degenerate pairs of bifurcated orbits in the HH and H4 potentials, which are represented by one and the same periodic Lamé function, the pairs F$_\sigma$ and F$'_\sigma$ are here given by two linearly independent functions. With this, however, the number of independent solutions of the second-order differential equation is exhausted. Therefore the other pair of stable orbits of type P$_\sigma$ and P$'_\sigma$ born at the period doublings cannot be given by any new independent solutions. Indeed, we see from that the orbits P$_\sigma$ and P$'_\sigma$ are
------------------------------------------------------------------------------------------------------------------------------
$n\!$ $\epsilon_n\!$ O$_\sigma\!$ algebraic Lamé function for $x(t)$
---------------- ---------------- -------------- -----------------------------------------------------------------------------
$\frac12\!$ $\frac38\!$ F$_6\!$ Ec$_{1/2}^{1/2}$ = $
\!\sqrt{\dn+\cn}$
F$'_6\!$ Es$_{1/2}^{1/2}$ = $\!\sqrt{\dn-\cn}$
P$_7\!$ $\sqrt{\dn+\cn}\,+\sqrt{\dn-\cn}$
P$'_7\!$ $\sqrt{\dn+\cn}\,-\sqrt{\dn-\cn}$
$\frac52\!$ $4\frac38\!$ F$_{10}\!$ Ec$_{5/2}^{3/2}$ = $
\!\sqrt{\dn+\cn}\,(1-\frac47\,\sn^2-\frac87\,\dn\,\cn)$
F$'_{10}\!$ Es$_{5/2}^{3/2}$ = $\!\sqrt{\dn-\cn}\,
(1-\frac47\,\sn^2+\frac87\,\dn\,\cn)$
P$_9\!$ Ec$_{5/2}^{3/2}\;+\;$Es$_{5/2}^{3/2}$
P$'_9\!$ Ec$_{5/2}^{3/2}\;-\;$Es$_{5/2}^{3/2}$
$\frac92\!$ $12\frac38\!$ F$_{14}\!$ Ec$_{9/2}^{5/2}$ = $
\!\sqrt{\dn+\cn}\,
(1-\frac{36}{13}\,\sn^2+\frac{16}{13}\,\sn^4-\frac{8}{13}\,\dn\,\cn)$
F$'_{14}\!$ Es$_{9/2}^{5/2}$ = $\!\sqrt{\dn-\cn}\,
(1-\frac{36}{13}\,\sn^2+\frac{16}{13}\,\sn^4+\frac{8}{13}\,\dn\,\cn)$
P$_{13}\!$ Ec$_{9/2}^{5/2}\;+\;$Es$_{9/2}^{5/2}$
P$'_{13}\!$ Ec$_{9/2}^{5/2}\;-\;$Es$_{9/2}^{5/2}$
$\frac{13}2\!$ $24\frac38\!$ F$_{18}\!$ Ec$_{13/2}^{7/2}$ =$
\,\sqrt{\dn+\cn}\,
[1-\frac{1304}{347}\,\sn^2+\frac{1200}{347}\,\sn^4
-\frac{320}{347}\,\sn^6+\frac{272}{347}\,\dn\,\cn\,
(1-\frac{40}{17}\,\cn^4)]\!$
F$'_{18}\!$ Es$_{13/2}^{7/2}$ =$\,\sqrt{\dn-\cn}\,
[1-\frac{1304}{347}\,\sn^2+\frac{1200}{347}\,\sn^4
-\frac{320}{347}\,\sn^6-\frac{272}{347}\,\dn\,\cn\,
(1-\frac{40}{17}\,\cn^4)]\!$
P$_{17}\!$ Ec$_{13/2}^{7/2}\;+\;$Es$_{13/2}^{7/2}$
P$'_{17}\!$ Ec$_{13/2}^{7/2}\;-\;$Es$_{13/2}^{7/2}$
------------------------------------------------------------------------------------------------------------------------------
given by the two independent linear combinations Ec$_{2p+1/2}^{p+1/2}\,\pm$ Es$_{2p+1/2}^{p+1/2}$ which were constructed by Erdélyi [@erd2] to have the same symmetry properties as those of the $2\K$- and $4\K$-periodic Lamé functions, as discussed in Sec. \[lamsec\].
We thus have found the interesting result – which was new to us – that the stable and unstable pairs of orbits born at a period-doubling bifurcation of island-chain type are linear combinations of each other. It follows from our above arguments that this result must hold for all Hamiltonians with the double reflection symmetry C$_{2v}$.
In we illustrate the situation for the four orbits born at the bifurcation at $\epsilon=12+3/8$ ($p=2$). In the left panels, their shapes $x(y)$ are shown; in the right panels we plot the algebraic Lamé functions (and their linear combinations) which describe the motion $x(t)$ of the respective orbits. Looking merely at the shapes of these orbits, the pairwise linear dependence of their $x$ motion is not obvious at all.
Summary and conclusions {#susec}
=======================
We have investigated cascades of isochronous pitchfork bifurcations of straight-line librational orbits in two-dimensional potentials. The linearized equation of the $x$ motion transverse to these orbits, determining their stability, can be written in the form of the Lamé equation. Its eigenvalues correspond to the bifurcation values of the system parameter, given also by the condition trM$=+2$ for the stability discriminant of the straight-line orbit, and its eigenfunctions describe the $x$ motion of the new orbits born at the bifurcations. These eigenfunctions are the periodic Lamé functions of period $2\K$ or $4\K$, where $\K$ is the complete elliptic integral determining the period of the parent orbit at the corresponding bifurcation. In potentials with C$_{2v}$ symmetry, the solutions occur alternatingly as Lamé functions of period $2\K$ and $4\K$, respectively. When this symmetry is absent, the $4\K$ periodic solutions describe the orbits born at period-doubling pitchfork bifurcations.
We have shown numerically that the periodic Lamé functions describe very accurately the shapes of the bifurcated orbits obtained from a numerical integration of the equations of motion, as long as the amplitude of their $x$ motion remains small, i.e., as long as one is not too far from the bifurcation point. Exploiting the energy conservation in the Hénon-Heiles type potentials HH and H4 and the known symmetries of the Lamé functions, we can predict the propagation of the new orbits up to the critical saddle-point energy where they have all become unstable and the system is highly chaotic. We thus have found an analytical description of an infinite series of unstable periodic orbits in chaotic systems.
In the homogeneous quartic oscillator (Q4) potential, the series expansions of the periodic Lamé functions terminate and they become finite polynomials. In this potential we have also analyzed solutions of period $8\K$ which occur at period-doubling bifurcations of the straight-line orbits of island-chain type. The two pairs of orbits born thereby are represented by two independent sets of orthogonal periodic solutions of the Lamé equation, which here are identified with the so-called algebraic Lamé functions that can again be given in a closed form.
Similar cascades of pitchfork bifurcations have also been discussed in connection with the diamagnetic Kepler problem represented by hydrogen atoms in strong magnetic fields [@maod; @main; @frwi]. Expressing the Hamiltonian in (scaled) semiparabolic coordinates $(u,v)$, the effective potential for orbits with angular momentum $L_z=0$ (where the $z$ axis is the direction of the external magnetic field) becomes similar to the H4 and Q4 potentials discussed here (although it contains only quadratic and sixth-order terms in the coordinates). Since physically the $(u,v)$ coordinates are positive definite, the periodic orbits in the diamagnetic Kepler problem correspond to the half-orbits of the H4 and the Q4 potentials. There exist straight-line librating orbits, corresponding to oscillations of the electron along the symmetry axis, which bifurcate infinitely many times as the energy of the electron approaches the ionization threshold. The stability of these linear orbits is given by the Mathieu equation, which is analogous to the Lamé equation but with the function sn$^2(z,k)$ replaced by $\cos(2z)$. Its periodic solutions are the periodic Mathieu functions se$_m$ and ce$_m$ which have properties completely analogous to those of the periodic Lamé functions, and were actually studied in detail by Ince [@inc3] prior to his investigations of the Lamé functions. The topology of the Mathieu functions and of the bifurcated orbits described by them is exactly the same as for the Q4 and H4 potentials described here. In particular, the so-called “balloon” orbits B$_n$ and “snake” orbits S$_n$ with $n=1,$ 2, $\dots$ [@maod] correspond exactly to the alternating sequence of halves of the orbits R$_5$, R$_9$, $\dots$ and L$_7$, L$_{11}$, $\dots$ shown in . (The other orbits, born unstable at the bifurcations, were not considered in [@maod] since they do not pass through the centre.)
We believe that our analysis, applied in terms of the Mathieu functions, may be useful for further investigations of the bifurcations occurring in the diamagnetic Kepler problem.
The knowledge of the analytical properties of the bifurcated orbits will be useful in the application of the periodic orbit theory to the potentials studied here. First steps in this direction have been quite successful [@pert; @hhun; @hh1], but the orbits bifurcated from the A orbit were not considered. Their incorporation into the semiclassical trace formula is the object of further work in progress. We expect the bifurcated orbits, in particular, to play an important role in the semiclassical analysis of resonances above the barriers in Hénon-Heiles type or similar potentials.
[**Acknowledgments**]{}
We are grateful to S Fedotkin and A Magner for very stimulating discussions and critical comments. Valuable comments by M Sieber and H Then are highly appreciated. We also acknowledge financial support by the Deutsche Forschungsgemeinschaft.
[31]{}
Feigenbaum M J 1978 [*J. Stat. Phys.*]{} [**19**]{} 25\
see also Feigenbaum M J 1983 [*Physica*]{} [**7 D**]{} 16
Bountis T C 1981 [*Physica*]{} [**3 D**]{} 577
Greene J M, MacKay R S, Vivaldi F and Feigenbaum M J 1981 [*Physica*]{} [**3 D**]{} 468
Brack M 2001 [*Festschrift in honor of the 75th birthday of Martin Gutzwiller*]{} ed A Inomata [*et al*]{} [ *Foundations of Physics*]{} [**31**]{} 209 \[LANL preprint nlin.CD/0006034\]
Lamé G 1839 [*Journal de Mathématiques Pures et Appliquées (Liouville)*]{} [**4**]{} 126\
Lamé G 1839 [*ibid.*]{} [**4**]{} 351
see, e.g., Strutt M J O 1932 [*Ergebnisse der Mathematik und ihrer Grenzgebiete*]{} Vol. 1 no. 3 (Springer-Verlag, Berlin)
Ince E L 1940 [*Proc. Royal Soc. Edinburgh*]{} [**60**]{} 47
Ince E L 1940 [*Proc. Royal Soc. Edinburgh*]{} [**60**]{} 83
Hénon M and Heiles C 1964 [*Astr. J.*]{} [**69**]{} 73
Hill G W 1886 [*Acta Math.*]{} [**8**]{} 1
Magnus W and Winkler S 1966 [*Hill’s Equation*]{} (Interscience Publ., New York)
Erdélyi A [*et al*]{} 1955 [*Higher Transcendental Functions Vol III*]{} (McGraw-Hill, New York) ch 15.\
Beware of the misprint in equation (13) and elsewhere in chapter 15.5.1 of this reference; the correct definition of the variable $\zeta$ is that given in Eq. of our present paper
Gradshteyn I S and Ryzhik I M 1994 [*Table of Integrals, Series, and Products*]{} (Academic Press, New York, 5th edition) ch 8.1
Erdélyi A 1941 [*Phil. Mag.*]{} (7) [**31**]{} 123
Erdélyi A 1941 [*Phil. Mag.*]{} (7) [**32**]{} 348
Tanaka K and Brack M, to be published
Churchill R C, Pecelli G and Rod D L 1979 [*Stochastic Behavior in Classical and Quantum Hamiltonian Systems*]{} ed G Casati and J Ford (Springer-Verlag, New York) p 76
Davies K T R, Huston T E and Baranger M 1992 [*Chaos*]{} [**2**]{} 215
Vieira W M and Ozorio de Almeida A M 1996 [*Physica*]{} [ **90 D**]{} 9
Gutzwiller M C 1971 [*J. Math. Phys.*]{} [**12**]{} 343
Gutzwiller M C 1990 [*Chaos in classical and quantum mechanics*]{} (Springer, New York)
Brack M and Bhaduri R K 1997 [*Semiclassical Physics*]{}, Frontiers in Physics Vol. 96 (Addison-Wesley, Reading, USA)
de Aguiar M A M, Malta C P, Baranger M, and Davies K T R 1987 [*Ann. Phys. (NY)*]{} [**180**]{} 167
Mao J-M and Delos J B 1992 [*Phys. Rev.*]{} A [**45**]{} 1746
Then H L 1999 [*Diploma thesis*]{} (Universität Ulm)\
Then H L and Sieber M, to be published
Fedotkin S N, Magner A G, Mehta M and Brack M, to be published
Sieber M 1996 [*J. Phys.*]{} A [**29**]{} 4715\
Schomerus H and Sieber M 1997 [*J. Phys.*]{} A [**30**]{} 4537\
Sieber M and Schomerus H 1998 [*J. Phys.*]{} A [ **31**]{} 165
Brack M, Creagh S C and Law J 1998 [*Phys. Rev.*]{} A [ **57**]{} 788
Brack M, Meier P and Tanaka K 1999 [*J. Phys.*]{} A [**32**]{} 331
Eckardt B 1988 [*Phys. Rep.*]{} [**163**]{} 205\
Bohigas O, Tomsovic S and Ullmo U 1993 [*Phys. Rep.*]{} [**233**]{} 45\
Eriksson A B and Dahlqvist P 1993 [*Phys. Rev.*]{} E [**47**]{} 1002\
Lakshminarayan A, Santhanam M S and Sheorey V B 1996 [*Phys. Rev. Lett.*]{} [**76**]{} 396

Yoshida H 1984 [*Celest. Mech.*]{} [**32**]{} 73. Strictly speaking, Yoshida derived trM for the half-orbit to be trM$_{{\rm A}/2}=2\sqrt{2}\cos[\pi\sqrt{1+8\epsilon}/4]$, from which Eq. follows using Eq.
Mehta M and Brack M, work in progress
Main J, Wiebusch G, Holle A and Welge K H 1986 [*Phys. Rev. Lett.*]{} [**57**]{} 2789\
Main J, Wiebusch G, Holle A and Welge K H 1987 [*Z. Phys.*]{} D [**6**]{} 295
Wintgen D and Friedrich H 1987 [*Phys. Rev.*]{} A [**36**]{} 131\
Friedrich H and Wintgen D 1989 [*Phys. Rep.*]{} [**183**]{} 39\
Hasegawa H, Robnik M and Wunner G 1989 [*Prog. Th. Phys.*]{} Suppl. [**98**]{} 198
Ince E L 1932 [*Proc. Royal Soc. Edinburgh*]{} [**52**]{} 355
Brack M, Bhaduri R K, Law J and Murthy M V N 1993 [*Phys. Rev. Lett.*]{} [**70**]{} 568\
Brack M, Bhaduri R K, Law J, Maier Ch and Murthy M V N 1995 [*Chaos*]{} [**5**]{} 317,707(E)
|
---
abstract: 'The Coulomb interaction is widely known to enhance the effective mass of interacting particles and therefore tends to favor a localized state at commensurate filling. Here, we will show that, in contrast to this consensus, in a van der Waals heterostructure consisting of graphene and hexagonal boron nitride (h-BN), the onsite Coulomb repulsion will at first destroy the localized state. This is because the onsite Coulomb repulsion tends to suppress the asymmetry between neighboring carbons induced by the h-BN substrate. We corroborate this surprising phenomenon by solving a tight-binding model with onsite Coulomb repulsion treated within the coherent potential approximation, where the hopping parameters are derived from density functional theory calculations based on the graphene/h-BN heterostructure. Our results indicate that both the gapless and the gapped states observed experimentally in graphene/h-BN heterostructures can be understood once a realistic value of the onsite Coulomb repulsion as well as different interlayer distances are taken into account. Finally, we propose ways to enhance the gapped state, which is essential for potential applications of graphene in next-generation electronics. Furthermore, we argue that such many-body suppression of the band gap should also occur in other van der Waals heterostructures.'
author:
- 'Jin-Rong Xu'
- 'Ze-Yi Song'
- 'Chen-Guang Yuan'
- 'Yu-Zhong Zhang'
title: 'Interaction-induced metallic state in graphene on hexagonal boron nitride'
---
Introduction \[intro\]
======================
Graphene, a Dirac semimetal, is considered a potential candidate for replacing silicon in next-generation electronics, provided large gaps of the order of 300 K can be opened at the charge neutrality point. Various ways have been used to realize the gap opening [@Castro2007; @Han2007; @Elias2009; @Ci2010; @Hunt2013]. Among those, the heterostructure consisting of two-dimensional layers of graphene and atomically flat hexagonal boron nitride (h-BN) coupled by van der Waals interactions attracts considerable interest [@Hunt2013; @Xue2011; @Britnell2012; @Slawinska2010; @Dean2011; @Zhong2011; @Gao2012; @Amet2013; @Tang2013; @San-Jose075428; @Titov2014; @Wijk2014; @Abergel2015; @Slotman2015].
While it was predicted by density functional theory (DFT) calculations that a band gap of $53$ meV can be generated in the van der Waals heterostructure [@Giovannetti2007], such an insulating state was not always detected in experiments [@Hunt2013; @Xue2011; @Amet2013; @Usachov2010; @Roth2013; @Chen2014; @Sediri2015]. The discrepancy was originally ascribed to the lattice mismatch of 1.8% between graphene and h-BN, which leads to moiré patterns in real space with a periodicity determined by the interlayer twist angle [@Xue2011; @Roth2013; @Titov2014; @Sachs2011]. This is due to the fact that a nonzero twist may give rise to a restoration of sublattice inversion symmetry on spatial average, possibly leading to a metallic state [@Xue2011]. However, such a metallic state is not anticipated by the noninteracting model, where a small gap persists even if lattice mismatch is involved [@Kindermann2012]. Also the insulating state with a large band gap of around 100-300 K observed experimentally [@Hunt2013] cannot be accounted for by the noninteracting model [@Kindermann2012]. A natural speculation then arises that electronic interactions may enhance the gapped state. Indeed, various experiments indicate the importance of many-body effects in the heterostructure [@Dean2011; @Chen2014; @Dean2013; @Shi2014].
Recently, long-range Coulomb interactions have been studied in either the weak-coupling or the strong-coupling limit [@Song2013; @Bokdam2014; @Jung2015], which confirms the speculation. But it is known from constrained random phase approximation calculations that the dominant onsite Coulomb repulsion $U^*$ is reduced by a weighted average of nonlocal interactions $\bar{V}$, resulting in an effective on-site Coulomb interaction of $U=U^*-\bar{V}=1.6t$, where $t$ is the hopping integral of $\pi$ electrons between nearest-neighbor carbons [@Schueler2013]. Therefore, it is better first to understand the role of the effective on-site Coulomb interaction, which is of intermediate strength with respect to the kinetic energy of the $\pi$ electrons.
In this paper, we will show that the onsite Coulomb repulsion will first suppress the band gap in the graphene/h-BN heterostructure even in the absence of lattice mismatch. The underlying physics behind the unexpected phenomenon comes from competition between two types of insulating states, namely the band insulator and the Mott insulator. As is well known, at $U=0$, lattice inversion symmetry of graphene is broken by coupling to the h-BN substrate, leading to a charge imbalance between two neighboring carbons and a band gap at half-filling (i.e. the charge neutral point). However, as $U$ is switched on, the onsite Coulomb repulsion tends to localize the electrons on graphene and thus suppress the hybridization between graphene and h-BN. Therefore, the asymmetry between neighboring carbons as well as the corresponding band gap, induced by interlayer coupling, will be suppressed. Further increasing the onsite Coulomb repulsion will drive the system into the Mott insulating state.
Besides the reduction of the band gap induced by the onsite Coulomb repulsion of intermediate strength, variation of the interlayer distance, which dominates the interlayer coupling between graphene and h-BN, will also strongly affect the band gap. Since large variations of the interlayer distance may exist among different measurements due to the weak van der Waals interaction and diverse experimental conditions, and the onsite Coulomb repulsion inevitably exists in real materials, our findings may provide complementary ways to understand the conundrum of why both gapped and gapless states can be observed in different experiments. We corroborate the above findings by solving a tight-binding model with onsite Coulomb repulsion treated within the coherent potential approximation. The hopping parameters of the tight-binding model are derived from DFT calculations based on the graphene/h-BN heterostructure.
The paper is organized as follows. In Sec. \[MM\], we describe the details of the structure, the model, and the method we used. In Sec. \[res\], we present our results including density of states (DOS), gap, difference in occupation numbers between neighboring carbons, total self-energy, spectral function, and phase diagram in the plane of onsite Coulomb interaction and interlayer distance. Corresponding indications of our results to experiments are also discussed in this section. We summarize our findings in Sec. \[conclusion\].
Model and Methods \[MM\]
========================
Details of the heterostructures \[cartoon\]
-------------------------------------------
The graphene/h-BN heterostructure we studied is composed of one h-BN monolayer and one graphene monolayer coupled by van der Waals interaction. Since the physical picture of interaction-induced reduction of the band gap is independent of lattice structure, for simplicity, we only study the van der Waals heterostructure without lattice mismatch. A further advantage of using the lattice-matched structure is that, as long as the gapless state appears, it can be solely attributed to the inclusion of on-site Coulomb repulsion, since any possible reduction of the band gap due to lattice mismatch is completely precluded.
There are three stable and metastable configurations of the heterostructure, as shown in the insets of Fig. \[Fig:two\] and Fig. \[Fig:five\], respectively. The most stable configuration, with the lowest total energy, is called the AB-N stack, where one carbon sits above a boron atom and the other is centered above an h-BN hexagon, as seen in the insets of Fig. \[Fig:two\]. Two metastable configurations are given in the insets of Fig. \[Fig:five\], denoted the AB-B stack, with one carbon over a nitrogen atom and the other centered above an h-BN hexagon, and the AA stack, with one carbon over boron and the other over nitrogen. Unless specified otherwise, the lattice structure used in this paper is the AB-N stack.
Tight-binding model \[TBmodel\]
-------------------------------
The tight-binding model with onsite Coulomb interaction used to describe the graphene/h-BN heterostructure is given by $$\label{fullmodel}
H=\sum_{i,j\in C,B,N,\sigma }t_{ij}a_{i\sigma }^{\dag }a_{j\sigma
}+U\sum_{i\in C}(n_{i\uparrow }-1/2)(n_{i\downarrow }-1/2)$$where $a_{j\sigma }$ ($a_{i\sigma }^{\dag }$) is the annihilation (creation) operator of an electron with spin $\sigma $ at site $j$ ($i$), and $n_{i\sigma}=a_{i\sigma }^{\dag }a_{i\sigma }$ is the density operator. $i,j\in C,B,N$ denotes summation over the sites of both graphene and h-BN, while $i\in C$ runs over the graphene sites only. $t_{ij}$ is the hopping integral between sites $i$ and $j$ derived from DFT calculations, and $U$ is the effective onsite Coulomb repulsion. We are interested in the half-filled case, which corresponds to the charge neutrality point in experiments. Since no long-range magnetic order is observed in the heterostructure experimentally, we focus on the paramagnetic state throughout the paper.
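To illustrate the $U=0$ gap mechanism discussed in the Introduction, the sketch below diagonalizes a minimal two-band graphene Hamiltonian in which the h-BN substrate is represented solely by a staggered sublattice potential $\pm\Delta$ — a drastic simplification of model (\[fullmodel\]), which also contains the B and N orbitals and interlayer hoppings. The values of $t$ and $\Delta$ here are illustrative, not the DFT-derived parameters:

```python
import numpy as np

t_nn  = 2.7      # illustrative nearest-neighbour hopping (eV)
delta = 0.0265   # illustrative staggered sublattice potential (eV) mimicking h-BN

def H_k(kx, ky):
    # nearest-neighbour vectors of the honeycomb lattice (bond length = 1)
    dvec = np.array([[0.0, 1.0],
                     [np.sqrt(3)/2, -0.5],
                     [-np.sqrt(3)/2, -0.5]])
    f = np.sum(np.exp(1j*(kx*dvec[:, 0] + ky*dvec[:, 1])))   # structure factor
    return np.array([[delta,             -t_nn*f],
                     [-t_nn*np.conj(f), -delta]])

K = (4*np.pi/(3*np.sqrt(3)), 0.0)   # Dirac point for this lattice orientation
ev = np.linalg.eigvalsh(H_k(*K))
gap = ev[1] - ev[0]                 # equals 2*delta, since f(K) = 0
print(gap)
```

At the Dirac point the structure factor $f$ vanishes, so the gap is exactly $2\Delta$; restoring sublattice symmetry ($\Delta\to0$) closes it, which is precisely the limit towards which the onsite repulsion drives the system in the mechanism described above.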
Details of band structure calculations \[DFT\]
----------------------------------------------
In order to determine the hopping integrals $t_{ij}$ in model (\[fullmodel\]), we first perform DFT calculations within the local density approximation (LDA) to obtain the band structure of the graphene/h-BN heterostructure, based on the projector augmented wave (PAW) method [@Bloechl1994], as implemented in the VASP code [@Kresse1993; @Kresse1996]. The convergence of the total energy with respect to k-point sampling, cutoff energy, and the vacuum distance between neighboring heterostructures has been carefully examined. A plane-wave cutoff energy of 800 eV, a $\Gamma$-centered k-point grid of $33\times33\times1$, and a vacuum distance of 17 Å are chosen when the lattice constants and atomic positions of the lattice-matched graphene/h-BN heterostructures are fully relaxed, with residual forces on each atom less than 1 meV/Å. Since the distance between graphene and h-BN may vary considerably among experiments due to the weak van der Waals coupling and diverse experimental conditions, the interlayer distance is treated as a tuning parameter while the other structural parameters are fixed according to the lattice optimization. We use $41\times41\times1$ Monkhorst-Pack k-point meshes [@Monkhorst] to integrate over the Brillouin zone in order to precisely determine the Fermi energy. With the parameters described above, we can reproduce the band structures of Ref. \[\] and Ref. \[\], where different equilibrium interlayer distances are used.
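For concreteness, the $\Gamma$-centered mesh quoted above corresponds to a VASP KPOINTS file of the following standard form (a sketch of the input format, not a file from this work):

```text
Automatic mesh (Gamma-centered 33x33x1)
0
Gamma
33 33 1
0 0 0
```

Here the second line (0) requests automatic mesh generation, "Gamma" selects a $\Gamma$-centered grid, and the last line is the mesh shift.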
![(Color online) Comparison between the band structures of the graphene/h-BN heterostructure calculated from density functional theory (DFT) and from the derived tight-binding model (\[fullmodel\]) at $U=0$ for the AB-N stack with an interlayer distance of $d_0=3.022$ Å. (b) is a blowup of (a) around the Fermi level. Solid (red) lines are the band structure from the DFT calculations, and dashed (black) ones from the derived tight-binding model.[]{data-label="Fig:supplethree"}](Figure1){width="48.00000%"}
The hopping integrals are then derived by transforming from the Bloch basis to the basis of maximally localized Wannier functions (MLWFs) using the wannier90 code [@Marzari2012; @Mostofi]. Four bands close to the Fermi energy, contributed mainly by the $p_z$ orbitals of the two carbons on different sublattices, one nitrogen, and one boron, are taken into account. Comparisons between the band structures calculated from DFT and from the derived tight-binding model (\[fullmodel\]) at $U=0$ are shown in Fig. \[Fig:supplethree\] for the AB-N stack with an interlayer distance of $d_0=3.022$ Å. Only small differences around the $\Gamma$ point, due to strong band entanglement, can be seen; these do not affect the applicability of our results, as those states are far from the Fermi level and therefore irrelevant to the band gap. Please note that, since the noninteracting part of Eq. (\[fullmodel\]) is derived from DFT calculations, the effect of the nonlocal terms of the Coulomb interaction is implicitly incorporated in our study at the DFT level. In this paper, we focus on the effect of the optimal onsite Coulomb interaction beyond the DFT level [@CommentTB].
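The interpolation step behind this comparison can be sketched as follows: given real-space hopping matrices $H(R)$ in the MLWF basis, the Bloch Hamiltonian is $H(\mathbf{k})=\sum_R e^{i\mathbf{k}\cdot R}H(R)$, which is then diagonalized along the chosen path. The hopping matrices below are hypothetical placeholders, not the wannier90 output of this work.

```python
import numpy as np

norb = 4  # two carbon, one boron, one nitrogen p_z orbital

# H(R) for a few lattice vectors R (toy numbers); H(-R) = H(R)^dagger
rng = np.random.default_rng(0)
H0 = rng.normal(size=(norb, norb)); H0 = 0.5 * (H0 + H0.T)  # R = 0, Hermitian
H1 = 0.3 * rng.normal(size=(norb, norb))                    # R = a1
hoppings = {(0, 0): H0, (1, 0): H1, (-1, 0): H1.T}

a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3.0) / 2.0])

def hk(k):
    """Wannier-interpolated Bloch Hamiltonian H(k) = sum_R exp(ik.R) H(R)."""
    H = np.zeros((norb, norb), dtype=complex)
    for (n1, n2), HR in hoppings.items():
        R = n1 * a1 + n2 * a2
        H += np.exp(1j * np.dot(k, R)) * HR
    return H

k = np.array([0.7, 0.2])
H = hk(k)
bands = np.linalg.eigvalsh(H)  # four interpolated bands at this k
print(bands)
```

Because $H(-R)=H(R)^{\dagger}$, the interpolated $H(\mathbf{k})$ is Hermitian and its eigenvalues are real bands.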
Coherent potential approximation to the model (\[fullmodel\]) \[CPA\]
---------------------------------------------------------------------
In order to verify the above scenario of interaction-induced metallization and its relevance to the graphene/h-BN heterostructure, we employ the coherent potential approximation (CPA) [@Elliot1974; @Jarrell2001] to solve the tight-binding model (\[fullmodel\]). In the alloy-analogy approach [@Hubbard1963], the interacting system is viewed as a disordered alloy in which an electron with spin $\sigma$ moving on the graphene layer encounters a potential $U/2$ at a site where an electron with spin $-\sigma$ is present, and $-U/2$ otherwise. The model Hamiltonian (\[fullmodel\]) is then replaced by the one-particle Hamiltonian with a disorder potential
$$H=\sum_{i,j\in C,B,N,\sigma }t_{ij}a_{i\sigma }^{\dag }a_{j\sigma}+\sum_{i\in C,\sigma} E_{i\sigma} n_{i\sigma},
\label{Randommdoel}$$
where the disorder potential $E_{i\sigma}$ satisfies $$E_{i\sigma}=\left\{
\begin{array}{lccl}
U/2& &\text{with probability}& \langle n_{i\bar{\sigma}} \rangle\\
-U/2& &\text{with probability}& 1-\langle n_{i\bar{\sigma}}\rangle
\end{array}\right. .
\label{rand.potential}$$ Here, $\langle n_{i\sigma} \rangle$ is the average occupancy of electrons with spin $\sigma$ on site $i$ of the graphene layer. The Green’s function corresponding to the one-particle Hamiltonian has to be averaged over all possible disorder configurations. This averaging cannot be performed exactly; to solve the alloy problem, the coherent potential approximation (CPA) is used [@Elliot1974; @Jarrell2001], in which the disorder potential $E_{i\sigma}$ is replaced by a local, complex, energy-dependent self-energy. The details of the CPA method applied to the model (\[fullmodel\]) are given in Appendix \[GF\]. We stress that, although this treatment has a few shortcomings [@Gebhard1997], it remains valuable as a computationally simple theory capable of capturing the Mott metal-insulator transition of many-body systems. For example, it successfully reproduces the phase diagram of the ionic Hubbard model at half filling [@Hoang2010]. Moreover, the critical value $U_c/t \approx 3.5$ of the Mott transition [@RowlandsCPB2014], obtained within the CPA for a Hubbard model on the honeycomb lattice at half filling, is in excellent agreement with quantum Monte Carlo simulations [@Assaad2013; @Sorella2012; @Toldin2015] and cluster dynamical mean-field theory calculations [@Wu2010; @Liebsch2011; @CommentAF]. More discussion of the CPA can be found in Refs. \[\].
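As a minimal, self-contained illustration of this self-consistency (not the paper's 4-band calculation), consider a single band with a semicircular bare DOS of half-bandwidth $D$ and the binary disorder $\pm U/2$ of Eq. (\[rand.potential\]) at half filling, $\langle n_{i\bar\sigma}\rangle=1/2$. The hypothetical $U$ below is chosen small enough that the toy spectrum stays gapless:

```python
import numpy as np

D = 1.0      # half-bandwidth of the semicircular bare DOS
U = 0.8      # hypothetical onsite repulsion, below the split-band regime
p = 0.5      # <n_{-sigma}> at half filling
eta = 1e-3   # small positive broadening

def hilbert(z):
    """Local Green's function of the semicircular DOS, G(z) = 2(z - sqrt(z^2 - D^2))/D^2."""
    s = np.sqrt(z * z - D * D)
    if (s.imag * z.imag) < 0:   # pick the branch with G(z) ~ 1/z at large |z|
        s = -s
    return 2.0 * (z - s) / D ** 2

def cpa_green(omega, n_iter=300):
    """Iterate the CPA condition <G_imp> = G_bar at a single frequency."""
    z = omega + 1j * eta
    sigma = 0.0j
    for _ in range(n_iter):
        g_bar = hilbert(z - sigma)            # medium Green's function
        g0_inv = 1.0 / g_bar + sigma          # cavity Green's function (Dyson step)
        g_avg = (p / (g0_inv - U / 2.0)       # average over impurity potentials +/- U/2
                 + (1.0 - p) / (g0_inv + U / 2.0))
        sigma = g0_inv - 1.0 / g_avg          # updated local self-energy
    return g_avg

# disorder scattering reduces the DOS at the band center below the clean value 2/(pi*D)
dos0 = -cpa_green(0.0).imag / np.pi
print(f"CPA DOS at the band center: {dos0:.3f}")
```

The same four steps (medium Green's function, cavity, impurity average, self-energy update) are what the appendix carries out for the full $4\times4$ Hamiltonian with two carbon self-energies.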
Results \[res\]
===============
![(Color online) Densities of states (DOS) for three different values of $U$ with an interlayer distance of $d_0=3.022$ Å. The inset is a blow-up of the DOS around the Fermi level.[]{data-label="Fig:one"}](Figure2){width="48.00000%"}
Fig. \[Fig:one\] shows the DOS at three different values of $U$. Here, an equilibrium interlayer distance of $d_0=3.022$ Å between graphene and h-BN is used, according to Ref. . At a small value of $U=1.0$ eV, the DOS at the Fermi level remains zero, similar to the noninteracting case. However, when $U$ becomes larger, for example at $U=5.0$ eV, a finite DOS appears at the charge-neutral point, indicating the emergence of an unconventional metallic state induced by the onsite electronic Coulomb interaction. Increasing $U$ further again results in a gapped state, as evident from the DOS at $U=10.0$ eV.
![(Color online) Partial densities of states (DOS) for three different values of $U$ with an interlayer distance of $d_0=3.022$ Å. (a) $U=1.0$ eV, (b) $U=5.0$ eV, and (c) $U=10.0$ eV. The insets are blow-ups of the DOS around the Fermi level. C$_1$ and C$_2$ are the carbons of different sublattices. B and N denote boron and nitrogen, respectively.[]{data-label="Fig:new"}](pDOS){width="46.00000%"}
The partial DOS is presented in Fig. \[Fig:new\] for the same three values of $U$. In all cases, the DOS around the Fermi level is contributed mainly by the two carbons C$_1$ and C$_2$; the contributions of boron and nitrogen are negligibly small in comparison. Moreover, while a small difference between the DOS of C$_1$ and C$_2$ can be seen at $U=1.0$ eV, the difference becomes indiscernible at $U=5.0$ eV and $U=10.0$ eV. This implies that the asymmetry between the carbons induced by the h-BN substrate is suppressed by the onsite Coulomb repulsion. Due to this effective restoration of the inversion symmetry between the carbons, the band gap induced by the asymmetry gradually vanishes, and finally an interaction-induced metallic state appears.
![(Color online) Evolution of the gap size as a function of the onsite Coulomb repulsion for several interlayer distances, in units of the equilibrium distance $d_0=3.022$ Å. The insets show the most stable lattice configuration, with the lowest total energy, called the AB-N stack, where one carbon sits above boron and the other is centered above an h-BN hexagon. The upper panel is a side view and the lower one a top view. C$_1$ and C$_2$ are the carbons of different sublattices. B and N denote boron and nitrogen, respectively.[]{data-label="Fig:two"}](Figure3){width="48.00000%"}
The evolution of the gap size as a function of the onsite Coulomb repulsion is shown in Fig. \[Fig:two\] for several interlayer distances, in units of the equilibrium distance $d_0=3.022$ Å. In the weakly interacting region, both the gap amplitudes and the critical values at which the gap vanishes depend strongly on the interlayer distance, implying that the insulating behavior in this region is dominated by the coupling between graphene and h-BN. In the strongly interacting region, by contrast, these quantities remain almost unchanged, indicating that the gapped state at large $U$ is probably unrelated to the h-BN substrate. Moreover, the strong variation of the critical values with interlayer distance in the small-$U$ region suggests that the interaction-driven insulator-to-metal transition can be tuned by varying the interlayer distance between graphene and h-BN, which can easily be realized by applying out-of-plane strain to the van der Waals heterostructure.
![(Color online) Difference of the occupation number between two nearest-neighbor carbons, $\Delta n_c$, as a function of $U$ for several interlayer distances, in units of the equilibrium distance $d_0=3.022$ Å. The insets are cartoons of the graphene layer, where larger dots denote charge-richer sites and smaller ones charge-poorer sites. (a) and (b) represent the small-$U$ region, where interlayer hybridizations play the dominant role, and the large-$U$ region, where local potentials dominate the charge distribution, respectively. C$_1$ and C$_2$ are the carbons of different sublattices.[]{data-label="Fig:three"}](Figure4){width="48.00000%"}
In order to reveal the nature of these two insulating states, the difference of the occupation number between two nearest-neighbor carbons, $\Delta n_c$, and the imaginary part of the total self-energy of the two neighboring carbons, $Im \Sigma_C$, are investigated as functions of $U$, as shown in Figs. \[Fig:three\] and \[Fig:four\], respectively. We find that $\Delta n_c$ is rapidly suppressed as $U$ increases, indicating that the symmetry breaking between the two neighboring carbons induced by the h-BN substrate is continuously restored by the interaction, leading to the reduction of the gap amplitude seen in Fig. \[Fig:two\]. Combined with the fact that the imaginary part of the total self-energy at small $U$ vanishes in the vicinity of the charge-neutral point, as displayed in Fig. \[Fig:four\] for $U=1$ and $3$ eV, we conclude that in the weakly interacting region the system is still a band insulator, in which symmetry breaking dominates the gap opening. The situation is completely different at large $U$, where the imaginary part of the total self-energy diverges within the gap, as seen in Fig. \[Fig:four\] at $U=10$ eV, which is a characteristic feature of a Mott insulator [@Georges1996]. In the intermediate range of the onsite Coulomb interaction, the imaginary part of the total self-energy, which causes band broadening, becomes larger (see $U=4$ and $5$ eV in Fig. \[Fig:four\]), while the asymmetry between neighboring carbons becomes smaller, as indicated by the smaller charge difference in Fig. \[Fig:three\] compared to the weakly interacting case. A metallic state induced by the many-body interaction appears, since the small asymmetry cannot provide a large enough separation between the broadened conduction and valence bands.
Another interesting phenomenon seen in Fig. \[Fig:three\] is that $\Delta n_c$ changes from positive to negative, indicating a reversal of the charge-richer and charge-poorer sites in the graphene layer for all interlayer distances. This can be ascribed to two competing effects induced by the h-BN substrate: the substrate-induced difference of the local potentials on the neighboring carbons, and the difference of the interlayer hybridizations between the two carbons and the substrate. We find that the carbon above boron has a higher local potential and a stronger interlayer hybridization than the one sitting above the center of the h-BN hexagon. When $U$ is small, more electrons prefer to occupy the carbon above boron, since they gain more kinetic energy through the interlayer hopping. However, when $U$ becomes larger, the electrons in graphene become more localized and the interlayer hoppings are suppressed. As a result, more electrons occupy the carbon above the center of the h-BN hexagon, owing to its lower local potential. The same picture explains why a reversal of the charge-richer and charge-poorer sites occurs at fixed $U$ as the interlayer distance is varied (see, e.g., $U=3$ and $4$ eV in Fig. \[Fig:three\]). Owing to the competition between the site-dependent local potentials and the interlayer hybridizations, a smaller interlayer distance results in larger interlayer hybridizations and thus favors more electrons on the carbon above boron, so as to gain more kinetic energy, while a larger interlayer distance corresponds to weaker interlayer hybridizations and therefore fewer electrons on the carbon above boron, because of its higher local potential.
![(Color online) Imaginary part of total self-energy of two neighboring carbons $Im \Sigma_C$ for several values of $U$ with interlayer distance of $d_0=3.022$ Å.[]{data-label="Fig:four"}](Figure5){width="48.00000%"}
The resulting phase diagram in the plane of interaction and interlayer distance is shown in Fig. \[Fig:five\]. We present not only the phase boundaries for the most stable lattice configuration, but also those for the two metastable configurations: the AB-B stack, with one carbon above nitrogen and the other above the center of an h-BN hexagon, and the AA stack, with one carbon above boron and the other above nitrogen. We find that consecutive phase transitions from band insulator to metal and then to Mott insulator always occur, despite the differences in lattice configuration and interlayer distance, indicating that the interaction-driven metallization should occur even if lattice mismatch is taken into account.
Finally, let us discuss the implications of our results for the experimental conundrum that metallic and insulating states are both observed in different measurements [@Xue2011; @Hunt2013; @Amet2013; @Usachov2010; @Roth2013; @Chen2014; @Sediri2015]. Although it was initially argued that a nonzero twist may restore the sublattice inversion symmetry on spatial average, leading to a metallic state [@Xue2011], transport measurements found that the gap remains large, e.g., of the order of 200 K, even when the twist angle $\delta$ exceeds $2^{\circ}$ [@Hunt2013]. On the contrary, no gap was observed in another experiment when the twist angle $\delta$ was larger than $1^{\circ}$ [@Woods2014]. A similar inconsistency exists between transport measurements and angle-resolved photoemission spectroscopy (ARPES) studies. In ARPES, a band with linear dispersion is observed crossing the Fermi level in a heterostructure with a moiré wavelength of 9 nm, clearly indicating a metallic state [@Roth2013], in contrast to the gapped state at the same moiré wavelength reported by transport measurements [@Hunt2013]. These discrepancies indicate that the existing explanations of the gapless or gapped states observed in different experiments, based on lattice relaxations and misorientations [@Jung2015; @Woods2014], cannot resolve the conundrum completely.
![(Color online) Phase diagram in the plane of interaction, in units of eV, and interlayer distance, in units of the equilibrium distance $d_0=3.022$ Å. Lines are guides for the eye. The insets show the two metastable lattice configurations: the AB-B stack, with one carbon over nitrogen and the other centered above an h-BN hexagon (left two panels), and the AA stack, with one carbon over boron and the other over nitrogen (right two panels). The upper panels are side views and the lower panels top views. The most stable lattice configuration is shown in the insets of Fig. \[Fig:two\]. C$_1$ and C$_2$ are the carbons of different sublattices. B and N denote boron and nitrogen, respectively.[]{data-label="Fig:five"}](Figure6){width="48.00000%"}
Here, according to the phase diagram, we propose a complementary way to understand the contradictory experiments, based on the existence of the onsite Coulomb interaction and differences in the interlayer distance, both of which are unavoidable in real materials, in addition to the commensurate-incommensurate transition scenario [@Woods2014]. Once the onsite Coulomb repulsion is taken into account, adopting $U=3$ eV, the same value used for bilayer graphene [@Zhang2016], we find from Fig. \[Fig:five\] that the heterostructure lies in the vicinity of the phase boundary between the band insulator and the metal. The electronic properties of the heterostructure are therefore susceptible to variations of the interlayer distance, which may be considerable among different experiments owing to the weak van der Waals interaction between graphene and h-BN. In Fig. \[Fig:six\], we further show the spectral functions along the $\Gamma-K-M$ line in the Brillouin zone at $U=3$ eV for two interlayer distances, $0.9d_0$ (Fig. \[Fig:six\](a)) and $1.1d_0$ (Fig. \[Fig:six\](b)), with $d_0=3.022$ Å. While a small gap of $59$ meV is clearly visible at the distance of $0.9d_0$, a nearly linearly dispersive band crossing the Fermi level appears at the distance of $1.1d_0$, as observed in the ARPES studies [@Roth2013].
According to the phase diagram, we further propose that applying stress perpendicular to the graphene plane may open a large band gap, as required for logic transistors. Another way to enhance the gap is to place an additional h-BN layer on top of the graphene, which will enhance the asymmetry between neighboring carbons and therefore lead to a larger gap. Moreover, both approaches enhance the interlayer coupling and therefore lessen the lattice mismatch, which is detrimental to the gap opening.
![(Color online) (a) and (b) are the spectral functions along $\Gamma-K-M$ line in the Brillouin zone at $U=3$ eV for two different interlayer distances of $0.9d_0$ and $1.1d_0$ with $d_0=3.022$ Å, respectively. []{data-label="Fig:six"}](Figure7){width="48.00000%"}
Conclusions \[conclusion\]
==========================
In conclusion, the experimentally observed metallic state in the graphene/h-BN heterostructure has usually been attributed to the lattice mismatch [@Xue2011; @Roth2013; @Sachs2011; @Titov2014], and the electronic interaction is widely believed to enhance the gapped state [@Hunt2013; @Chen2014; @Song2013; @Bokdam2014; @Jung2015]. Here, however, we show that even without lattice mismatch, a dominant onsite Coulomb repulsion of realistic strength suppresses the band gap through the effective restoration of the inversion symmetry between neighboring carbons. The interaction-induced insulator-to-metal transition should also occur when lattice mismatch is involved, since the phase diagrams are qualitatively the same for the different lattice configurations. Finally, we argue that the suppression of a band gap by many-body effects should also be present in other van der Waals heterostructures, as long as a band gap induced by lattice asymmetry is already open in the noninteracting case.
Acknowledgements \[Acknowledgements\]
=====================================
We thank W. Ku for discussions. This work is supported by National Natural Science Foundation of China (No. 11474217), Program for New Century Excellent Talents in University (NCET-13-0428), and the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning as well as the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry. J.-R. is also supported by Educational Commission of Anhui Province of China (No. KJ2013B059) and Natural Science Foundation of Anhui Province (No. 1308085QA05).
Solution of Coherent potential approximation to the model (\[fullmodel\]) \[GF\]
================================================================================
The CPA average Green’s function can be written in matrix form $$\bar{G}^{-1}(\mathbf{k},\omega)=\omega + \mu - \widehat{H_{tb}}(\mathbf{k}) -
\begin{bmatrix}
\Sigma_{C_1}(\omega) & 0 & 0 & 0 \\
0 & \Sigma_{C_2}(\omega) & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix},$$ where $\mu$ is the chemical potential and $\widehat{H_{tb}}(\mathbf{k})$ is the $4\times4$ matrix obtained by applying the Fourier transformation to the model (\[fullmodel\]). $\Sigma_{C_1}(\omega)$ and $\Sigma_{C_2}(\omega)$ are the self-energies on the two neighboring carbons. All spin indices have been omitted, as we are interested in the paramagnetic phase. In real space, we have $$\bar{G}_{ii}(\omega)=\frac{1}{\Omega_{BZ}}\int_{BZ}d\mathbf{k}\, \bar{G}_{ii}(\mathbf{k},\omega),
\label{realGreen}$$ where the integral is over the first Brillouin zone of the sublattice and $i\in C,B,N$. Then a cavity Green’s function $\mathcal
{G}_{i}(\omega)$ can be obtained through the Dyson equation $$\mathcal {G}_{i}^{-1}(\omega)=\bar{G}_{ii}^{-1}(\omega)+\Sigma_{i}(\omega)$$ for sublattice ($i\in C_1,C_2$) in graphene layer, which describes a medium with self-energy at a chosen site removed. The cavity can now be filled by a real “impurity” with disorder potential, resulting in an impurity Green’s function $$G_{i}^{\gamma}(\omega)=[\mathcal {G}_{i}^{-1}(\omega)-E_{i}^{\gamma}]^{-1}$$ with impurity configurations of $E_{i}^{\gamma}=\begin{cases}U/2&\gamma=+\\
-U/2&\gamma=-\end{cases}$ as defined by Eq. (\[rand.potential\]). The CPA requires $$\langle G_{i}^{\gamma}(\omega)\rangle =\bar{G}_{ii}(\omega),
\label{aveGreen}$$ where the average is taken over the impurity configuration probabilities defined by Eq. (\[rand.potential\]).
Equations (\[realGreen\]) and (\[aveGreen\]) need to be solved self-consistently. By adjusting the chemical potential $\mu$, the half-filling condition $\sum\limits_{i\in C,B,N}\langle n_{i}\rangle=4$ can be satisfied, where $$\langle n_{i}\rangle=-\frac{1}{\pi}\int_{-\infty}^{\mu}Im\bar{G}_{ii}\,d\omega .$$ It is also necessary to ensure that the resulting integrated DOS for each site is consistent with the average occupation probabilities used in Eq. (\[aveGreen\]), so an extra self-consistency loop is added.
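The chemical-potential adjustment can be sketched as follows, with a hypothetical model DOS (one normalized Gaussian per $p_z$ orbital) standing in for $-\frac{1}{\pi}Im\bar{G}_{ii}$ and a toy target filling of 2 playing the role of the half-filling condition in this spinless illustration:

```python
import numpy as np

omega = np.linspace(-10.0, 10.0, 4001)
dw = omega[1] - omega[0]

def gaussian(w, center):
    return np.exp(-(w - center) ** 2) / np.sqrt(np.pi)  # unit area

centers = [-0.3, 0.3, -2.0, 2.0]                # toy band centers (C1, C2, B, N)
dos = sum(gaussian(omega, c) for c in centers)  # total DOS, area ~ 4
cumulative = np.cumsum(dos) * dw                # N(mu) = int_{-inf}^{mu} DOS dw

def filling(mu):
    return np.interp(mu, omega, cumulative)

# bisection on the monotone function filling(mu) for the target filling of 2
lo, hi = omega[0], omega[-1]
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if filling(mid) < 2.0:
        lo = mid
    else:
        hi = mid
mu = 0.5 * (lo + hi)
print(f"mu = {mu:.4f}, filling = {filling(mu):.4f}")
```

Because the cumulative occupation is monotone in $\mu$, the bisection is guaranteed to converge; in the full calculation each evaluation of the filling requires a converged CPA Green's function.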
[99]{}
E. V. Castro, K. S. Novoselov, S. V. Morozov, N. M. R. Peres, J. M. B. Lopes dos Santos, J. Nilsson, F. Guinea, A. K. Geim and A. H. Castro Neto, *Biased bilayer graphene: semiconductor with a gap tunable by the electric field effect*, Phys. Rev. Lett. **99**, 216802 (2007).
M. Y. Han, B. Özyilmaz, Y. Zhang and P. Kim, *Energy band-gap engineering of graphene nanoribbons*, Phys. Rev. Lett. **98**, 206805 (2007).
D. C. Elias, R. R. Nair, T. M. G. Mohiuddin, S. V. Morozov, P. Blake, M. P. Halsall, A. C. Ferrari, D. W. Boukhvalov, M. I. Katsnelson, A. K. Geim and K. S. Novoselov, *Control of graphene's properties by reversible hydrogenation: evidence for graphane*, Science **323**, 610 (2009).
L. Ci, L. Song, C. Jin, D. Jariwala, D. Wu, Y. Li, A. Srivastava, Z. F. Wang, K. Storr, L. Balicas, F. Liu and P. M. Ajayan, *Atomic layers of hybridized boron nitride and graphene domains*, Nat. Mater. **9**, 430 (2010).
B. Hunt, J. D. Sanchez-Yamagishi, A. F. Young, M. Yankowitz, B. J. LeRoy, K. Watanabe, T. Taniguchi, P. Moon, M. Koshino, P. Jarillo-Herrero and R. C. Ashoori, *Massive Dirac fermions and Hofstadter butterfly in a van der Waals heterostructure*, Science **340**, 1427 (2013).
J. Xue, J. Sanchez-Yamagishi, D. Bulmash, P. Jacquod, A. Deshpande, K. Watanabe, T. Taniguchi, P. Jarillo-Herrero and B. J. LeRoy, *Scanning tunnelling microscopy and spectroscopy of ultraflat graphene on hexagonal boron nitride*, Nat. Mater. **10**, 282 (2011).
F. Amet, J. R. Williams, K. Watanabe, T. Taniguchi and D. Goldhaber-Gordon, *Insulating behavior at the neutrality point in single-layer graphene*, Phys. Rev. Lett. **110**, 216601 (2013).
L. Britnell, R. V. Gorbachev, R. Jalil, B. D. Belle, F. Schedin, A. Mishchenko, T. Georgiou, M. I. Katsnelson, L. Eaves, S. V. Morozov, N. M. R. Peres, J. Leist, A. K. Geim, K. S. Novoselov and L. A. Ponomarenko, *Field-effect tunneling transistor based on vertical graphene heterostructures*, Science **335**, 947 (2012).
J. Slawińska, I. Zasada and Z. Klusek, *Energy gap tuning in graphene on hexagonal boron nitride bilayer system*, Phys. Rev. B **81**, 155433 (2010).
C. R. Dean, A. F. Young, P. Cadden-Zimansky, L. Wang, H. Ren, K. Watanabe, T. Taniguchi, P. Kim, J. Hone and K. L. Shepard, *Multicomponent fractional quantum Hall effect in graphene*, Nat. Phys. **7**, 693 (2011).

X. Zhong, Y. K. Yap and R. Pandey, *First-principles study of strain-induced modulation of energy gaps of graphene/BN and BN bilayers*, Phys. Rev. B **83**, 193403 (2011).
G. Gao, W. Gao, E. Cannuccia, J. Taha-Tijerina, L. Balicas, A. Mathkar, T. N. Narayanan, Z. Liu, B. K. Gupta, J. Peng, Y. Yin, A. Rubio and P. M. Ajayan, *Artificially stacked atomic layers: toward new van der Waals solids*, Nano Lett. **12**, 3518 (2012).
S. Tang, H. Wang, Y. Zhang, A. Li, H. Xie, X. Liu, L. Liu, T. Li, F. Huang, X. Xie and M. Jiang, *Precisely aligned graphene grown on hexagonal boron nitride by catalyst free chemical vapor deposition*, Sci. Rep. **3**, 2666 (2013).
P. San-Jose, A. Gutiérrez-Rubio, M. Sturla and F. Guinea, *Spontaneous strains and gap in graphene on boron nitride*, Phys. Rev. B **90**, 075428 (2014).
M. Titov and M. I. Katsnelson, *Metal-insulator transition in graphene on boron nitride*, Phys. Rev. Lett. **113**, 096801 (2014).
M. M. van Wijk, A. Schuring, M. I. Katsnelson and A. Fasolino, *Moiré patterns as a probe of interplanar interactions for graphene on h-BN*, Phys. Rev. Lett. **113**, 135504 (2014).
D. S. L. Abergel and M. Mucha-Kruczyński, *Infrared absorption of closely aligned heterostructures of monolayer and bilayer graphene with hexagonal boron nitride*, Phys. Rev. B **92**, 115430 (2015).
G. J. Slotman, M. M. van Wijk, P. Zhao, A. Fasolino, M. I. Katsnelson and S. Yuan, *Effect of structural relaxation on the electronic structure of graphene on hexagonal boron nitride*, Phys. Rev. Lett. **115**, 186801 (2015).
G. Giovannetti, P. A. Khomyakov, G. Brocks, P. J. Kelly and J. van den Brink, *Substrate-induced band gap in graphene on hexagonal boron nitride: Ab initio density functional calculations*, Phys. Rev. B **76**, 073103 (2007).
D. Usachov, V. K. Adamchuk, D. Haberer, A. Grüneis, H. Sachdev, A. B. Preobrajenski, C. Laubschat, and D. V. Vyalikh, *Quasifreestanding single-layer hexagonal boron nitride as a substrate for graphene synthesis*, Phys. Rev. B [**82**]{}, 075415 (2010).
S. Roth, F. Matsui, T. Greber and J. Osterwalder, *Chemical vapor deposition and characterization of aligned and incommensurate graphene/hexagonal boron nitride heterostack on Cu(111)*, Nano Lett. **13**, 2668 (2013).
Z. Chen, Z. Shi, W. Yang, X. Lu, Y. Lai, H. Yan, F. Wang, G. Zhang, Z. Li, *Observation of an intrinsic bandgap and Landau level renormalization in graphene/boron-nitride heterostructures*, Nat. Commun. **5**, 4461 (2014).
H. Sediri, D. Pierucci, M. Hajlaoui, H. Henck, G. Patriarche, Y. J. Dappe, S. Yuan, B. Toury, R. Belkhou, M. G. Silly, F. Sirotti, M. Boutchich, A. Ouerghi, *Atomically sharp interface in an h-BN-epitaxial graphene van der Waals heterostructure*, Sci. Rep. **5**, 16465 (2015).
B. Sachs, T. O. Wehling, M. I. Katsnelson and A. I. Lichtenstein, *Adhesion and electronic structure of graphene on hexagonal boron nitride substrates*, Phys. Rev. B **84**, 195414 (2011).
M. Kindermann, Bruno Uchoa and D. L. Miller, *Zero-energy modes and gate-tunable gap in graphene on hexagonal boron nitride*, Phys. Rev. B **86**, 115415 (2012).
C. R. Dean, L. Wang, P. Maher, C. Forsythe, F. Ghahari, Y. Gao, J. Katoch, M. Ishigami, P. Moon, M. Koshino, T. Taniguchi, K. Watanabe, K. L. Shepard, J. Hone and P. Kim, *Hofstadter's butterfly and the fractal quantum Hall effect in moiré superlattices*, Nature **497**, 598 (2013).
Z. Shi, C. Jin, W. Yang, L. Ju, J. Horng, X. Lu, H. A. Bechtel, M. C. Martin, D. Fu, J. Wu, K. Watanabe, T. Taniguchi, Y. Zhang, X. Bai, E. Wang, G. Zhang and F. Wang, *Gate-dependent pseudospin mixing in graphene/boron nitride moiré superlattices*, Nat. Phys. **10**, 743 (2014).
J. C. W. Song, A. V. Shytov and L. S. Levitov, *Electron interactions and gap opening in graphene superlattices*, Phys. Rev. Lett. **111**, 266801 (2013).
M. Bokdam, T. Amlaki, G. Brocks and P. J. Kelly, *Band gaps in incommensurable graphene on hexagonal boron nitride*, Phys. Rev. B **89**, 201404(R) (2014).
J. Jung, A. M. DaSilva, A. H. MacDonald and S. Adam, *Origin of band gaps in graphene on hexagonal boron nitride*, Nat. Commun. **6**, 6308 (2015).
M. Schüler, M. Rösner, T. O. Wehling, A. I. Lichtenstein, and M. I. Katsnelson, *Optimal Hubbard models for materials with nonlocal Coulomb interactions: graphene, silicene, and benzene*, Phys. Rev. Lett. **111**, 036601 (2013).
P. E. Blöchl, *Projector augmented-wave method*, Phys. Rev. B **50**, 17953 (1994).
G. Kresse and J. Hafner, *Ab initio molecular dynamics for liquid metals*, Phys. Rev. B **47**, 558 (1993).
G. Kresse and J. Furthmüller, *Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set*, Phys. Rev. B **54**, 11169 (1996).

H. J. Monkhorst and J. D. Pack, *Special points for Brillouin-zone integrations*, Phys. Rev. B **13**, 5188 (1976).
N. Marzari, A. A. Mostofi, J. R. Yates, I. Souza, D. Vanderbilt, *Maximally localized Wannier functions: Theory and applications*, Rev. Mod. Phys. [**84**]{}, 1419 (2012).
A. A. Mostofi, J. R. Yates, Y. S. Lee, I. Souza, D. Vanderbilt and N. Marzari, *wannier90: A Tool for Obtaining Maximally-Localised Wannier Functions*, Comput. Phys. Commun. **178**, 685 (2008).
Besides the semi-local approximations to the exchanges and correlations, density functional theory calculations (DFT) have also included the Hartree term of the nonlocal Coulomb interaction. Therefore, we mentioned that the effect of nonlocal terms of the Coulomb interactions has been implicitly incorporated in our study at the DFT level. However, the exact exchange term, i.e., the Fock term, is not involved in our DFT calculations based on local density approximation. As we know, the exact exchange term will enhance the band gap. While this will eventually lead to a shift of phase boundary, the overall phase diagram remains qualitatively unchanged.
R. J. Elliott, J. A. Krumhansl and P. L. Leath, *The theory and properties of randomly disordered crystals and related physical systems*, Rev. Mod. Phys. **46**, 465 (1974).
M. Jarrell and H. R. Krishnamurthy, *Systematic and causal corrections to the coherent potential approximation*, Phys. Rev. B **63**, 125102 (2001).
J. Hubbard, *Electron correlations in narrow energy bands*, Proc. Roy. Soc. (London) **A276**, 238 (1963).
F. Gebhard, *The Mott Metal-Insulator Transition: Models and Methods* (Springer, New York, 1997).
A. T. Hoang, *Metal-insulator transitions in the half-filled ionic Hubbard model*, J. Phys.: Condens. Matter **22**, 095602 (2010).
D. A. Rowlands, Y.-Z. Zhang, *Disappearance of the Dirac cone in silicene due to the presence of an electric field*, Chin. Phys. B **23**, 037101 (2014).
F. F. Assaad, I. F. Herbut, [*Pinning the Order: The Nature of Quantum Criticality in the Hubbard Model on Honeycomb Lattice*]{}, Phys. Rev. X [**3**]{}, 031010 (2013).
S. Sorella, Y. Otsuka, S. Yunoki, *Absence of a Spin Liquid Phase in the Hubbard Model on the Honeycomb Lattice*, Scientific Reports **2**, 992 (2012).
F. P. Toldin, M. Hohenadler, F. F. Assaad, *Fermionic quantum criticality in honeycomb and π-flux Hubbard models: Finite-size scaling of renormalization-group-invariant observables from quantum Monte Carlo*, Phys. Rev. B **91**, 165108 (2015).
W. Wu, Y.-H. Chen, H.-S. Tao, N.-H. Tong, W.-M. Liu, *Interacting Dirac fermions on honeycomb lattice*, Phys. Rev. B **82**, 245102 (2010).
A. Liebsch, *Correlated Dirac fermions on the honeycomb lattice studied within cluster dynamical mean field theory*, Phys. Rev. B **83**, 035113 (2011).
The critical value of the metal-insulator transition of the Hubbard model on honeycomb lattice obtained within antiferromagnetic state [@Assaad2013; @Sorella2012; @Toldin2015] is consistent with that within paramagnetic state [@Wu2010; @Liebsch2011], indicating that our conclusion should remain unchanged even if long-range antiferromagnetic order were taken into account.
D. A. Rowlands, Y.-Z. Zhang, *Inclusion of intersite spatial correlations in the alloy analogy approach to the half-filled ionic Hubbard model*, J. Phys.: Condens. Matter **26**, 274201 (2014).
D. A. Rowlands, Y.-Z. Zhang, *Dynamical nonlocal coherent-potential approximation for itinerant electron magnetism*, J. Phys.: Condens. Matter **26**, 475602 (2014).
Though the coherent potential approximation (CPA) can successfully capture the metal-insulator transition, here we would like to give a brief summary of its limitations, which are discussed in detail in Refs. \[\]: the first is the inability to describe magnetism, and the second is the inability to determine the nature of the metallic state. However, these limitations do not affect the applicability of CPA to our current research, since we are only interested in the nontrivial question of whether an interaction-induced insulator-metal transition can occur in graphene on hexagonal boron nitride, where no long-range magnetic order is observed. Please note that it is not our aim to settle the finer details of whether the metal-insulator transition is accompanied by an antiferromagnetic phase transition, or whether the metallic state is a Fermi liquid, a non-Fermi liquid, or a marginal Fermi liquid.
A. Georges, G. Kotliar, W. Krauth, M. J. Rozenberg, *Dynamical mean-field theory of strongly correlated fermion systems and the limit of infinite dimensions*, Rev. Mod. Phys. [**68**]{}, 13 (1996).
C. R. Woods, L. Britnell, A. Eckmann, R. S. Ma, J. C. Lu, H. M. Guo, X. Lin, G. L. Yu, Y. Cao, R. V. Gorbachev, A. V. Kretinin, J. Park, L. A. Ponomarenko, M. I. Katsnelson, Yu. N. Gornostyrev, K. Watanabe, T. Taniguchi, C. Casiraghi, H-J. Gao, A. K. Geim, K. S. Novoselov, *Commensurate-incommensurate transition in graphene on hexagonal boron nitride*, Nat. Phys. **10**, 451 (2014).
J. Xu, Z. Song, H. Lin and Y. Zhang, *Gate-induced gap in bilayer graphene suppressed by Coulomb repulsion*, Phys. Rev. B **93**, 035109 (2016).
---
abstract: 'We suggest that tunable orientational nonlinearity of nematic liquid crystals can be employed for all-optical switching in periodic photonic structures with liquid-crystal defects. We consider a one-dimensional periodic structure of Si layers with a local defect created by infiltrating a liquid crystal into a pore, and demonstrate, by solving numerically a system of coupled nonlinear equations for the nematic director and the propagating electric field, that the light-induced Freedericksz transition can lead to a sharp switching and diode operation in the photonic devices.'
address: 'Nonlinear Physics Centre and Centre for Ultra-high bandwidth Devices for Optical Systems (CUDOS), Research School of Physical Sciences and Engineering, Australian National University, Canberra ACT 0200, Australia'
author:
- 'Andrey E. Miroshnichenko, Igor Pinkevych, and Yuri S. Kivshar'
title: 'Tunable all-optical switching in periodic structures with liquid-crystal defects'
---
Introduction
============
During the past decade, photonic crystals (artificially fabricated one-, two- and three-dimensional periodic dielectric materials) have attracted a great deal of interest due to their ability to inhibit the propagation of light over special regions known as photonic band gaps [@book]. Such photonic bandgap materials are expected to revolutionize integrated optics and micro-photonics due to the efficient control of electromagnetic radiation they provide, in much the same way that semiconductors control the behavior of electrons [@eli].
In general, the transmission of light through photonic crystals depends on the geometry and the index of refraction of the dielectric material. Tunability of the photonic bandgap structures is a key feature required for the dynamical control of light transmission and various realistic applications of the photonic crystal devices. One of the most attractive and practical schemes for tuning the band gap in photonic crystals was proposed by Busch and John [@busch], who suggested that coating the surface of an inverse opal structure with a liquid crystal could be used to continuously tune the band gap, as was confirmed later in experiment [@yoshino]. This original concept generated a stream of interesting suggestions for tunable photonic devices based on the use of liquid crystals infiltrated into the pores of a bandgap structure [@busch2]. The main idea behind all those studies is the ability to continuously tune the bandgap spectrum of a periodic dielectric structure using the temperature dependent refractive index of a liquid crystal [@yoshino; @busch2; @schuller; @bjarklev], or its property to change the refractive index under the action of an applied electric field [@zakhidov; @escuti; @khoo].
Another idea of the use of liquid crystals for tunability of photonic crystals is based on infiltration of individual pores [@mingaleev] and creation of liquid crystal defects [@villar; @ozaki; @greece], and even defect-induced waveguide circuits [@mingaleev]. In this case, the transmission properties can be controlled, for example, by tuning resonant reflections associated with the Fano resonances [@fan; @miroshnichenko] observed when the frequency of the incoming wave coincides with the frequency of the defect mode. As a result, the defect mode becomes highly excited at the frequency of the resonant reflection, and it can be tuned externally, again by an electric field or temperature.
However, liquid crystals by themselves demonstrate a rich variety of nonlinear phenomena (see, for example, Refs. [@zeldovich81; @khoo81; @ong83; @zeldovich86]). Therefore, the nonlinear response of liquid crystals can be employed for all-optical control of light propagation in periodic structures and tunability of photonic crystals. In this paper, for the first time to our knowledge, we analyze the possibility of [*tunable all-optical switching*]{} in a one-dimensional periodic structure with a liquid-crystal defect. We demonstrate that a light field with an intensity above the critical value corresponding to the optical Freedericksz transition changes the optical properties of the liquid-crystal defect such that the nonlinear transmission of the photonic structure allows for all-optical switching, and that a similar concept can be employed to create a tunable all-optical diode.
Nonlinear transmission of a liquid crystal slab
===============================================
First, we study the light transmission of a single slab of nematic liquid crystal and derive a system of coupled nonlinear equations for the liquid-crystal director reorientation in the presence of the propagating electric field of a finite amplitude. The corresponding steady-state equation for the director $\mathbf{n}$ can be obtained by minimizing the free energy which can be written in the following form [@zeldovich81; @degennes] $$\label{eq:free_elastic1}
\begin{array}{l}
{\displaystyle f=f_{\rm el}+f_{\rm opt},}\\*[9pt]
{\displaystyle
f_{\rm
el}=\frac{1}{2}\left[K_{11}(\nabla\cdot\mathbf{n})^2+K_{22}(\mathbf{n}\cdot\nabla\times\mathbf{n})^2\;
+K_{33}(\mathbf{n}\times\nabla\times\mathbf{n})^2\right],}\\*[9pt]
{\displaystyle f_{\rm opt}=-(1/16\pi)\mathbf{D\cdot E^*}},
\end{array}$$ where $f_{\rm el}$ is the elastic part and $f_{\rm opt}$ is the optical part of the energy density. Here $K_{11}$, $K_{22}$ and $K_{33}$ are splay, twist, and bend elastic constants, respectively, $\mathbf{D}= \hat{\epsilon} \mathbf{E}$, $\hat{\epsilon}$ is the dielectric tensor, and the real electric field is taken in the form $\mathbf{E}_{real}=(1/2)[\mathbf{E}(\mathbf{r})\exp(-i\omega
t)+\mathbf{E}^{*}(\mathbf{r})\exp(i\omega t)]$.
We assume that a linearly polarized light wave propagates normally to the liquid-crystal slab with the initial homeotropic director orientation along $z$ \[see Fig. \[fig0\](a)\]. Under the action of the electric field polarized outside the slab along $x$, the director can change its direction in the $(x,z)$ plane and, therefore, we write the vector components of the director in the form $\mathbf{n}=\{\sin\phi(z),0,\cos\phi(z)\}$. Then the elastic part of the free energy density can be written as $$\begin{aligned}
\label{eq:free_elastic2}
f_{\rm el}=\frac{1}{2}\left(K_{11}\sin^2\phi+K_{33}\cos^2\phi\right)\left(\frac{d
\phi}{d z}\right)^2\;.\end{aligned}$$ Taking into account that the dielectric tensor $\hat{\epsilon}$ can be expressed in terms of the director components, $\epsilon_{ij}=\epsilon_{\bot}\delta_{ij}+\epsilon_a n_in_j$, where $\epsilon_a=\epsilon_{||}-\epsilon_{\bot}$ and $\epsilon_{||}$, $\epsilon_{\bot}$ are the liquid-crystal dielectric constants for the director parallel and perpendicular to the electric vector, we can write $$\begin{aligned}
\label{eq:tensor}
\hat{ \epsilon}=\left(
\begin{array}{ccc}
\epsilon_{\bot}+\epsilon_a\sin^2\phi&0&\epsilon_a\sin\phi\cos\phi\\
0&\epsilon_{\bot}&0\\
\epsilon_a\sin\phi\cos\phi&0&\epsilon_{\bot}+\epsilon_a\cos^2\phi
\end{array}
\right)\;.\end{aligned}$$ As a result, the optical part of the free energy density takes the form $$f_{\rm
opt}=-\frac{\epsilon_a}{16\pi}\left[\sin^2\phi|E_x|^2+\cos^2\phi|E_z|^2
+\sin\phi\cos\phi(E_xE_z^*+E_zE_x^*)\right]-\frac{\epsilon_{\bot}}{16\pi}|\mathbf{E}|^2\;.$$
After minimizing the free energy (\[eq:free\_elastic1\]) with respect to the director angle $\phi$, we obtain the nonlinear equation for the director in the presence of the light field
![Nonlinear transmission of a liquid-crystal slab. (a) Schematic of the problem. (b,c) Maximum angle of the director and transmission vs. the light intensity in the slab. Blue and red curves correspond to the increasing and decreasing light intensity, respectively.[]{data-label="fig0"}](fig/fig1){width="120mm"}
$$\begin{aligned}
\label{eq:director2} A(\phi) \frac{d^2\phi}{d
z^2}-B(\phi)\left(\frac{d\phi}{dz}\right)^2+
\frac{\epsilon_a\epsilon_{\bot}(\epsilon_a+\epsilon_{\bot})|E_x|^2\sin2\phi}{16\pi(\epsilon_{\bot}+\epsilon_a\cos^2\phi)^2}=0,\end{aligned}$$
where $A(\phi)= (K_{11}\sin^2\phi+K_{33}\cos^2\phi)$, $B(\phi)=(K_{33}-K_{11})\sin\phi\cos\phi$, and we take into account that, as follows from $D_{\rm z}=0$, the electric vector of the light field has a longitudinal component, $E_z=-(\epsilon_{xz}/\epsilon_{zz})E_x=-[\epsilon_a\sin\phi\cos\phi/(\epsilon_{\bot}+\epsilon_{a}\cos^2\phi)]E_x$ (see also Ref. [@zeldovich81]).
From Maxwell’s equations, we obtain the equation for the electric field $E_x$, $$\begin{aligned}
\label{eq:full_set} \frac{d^2
E_x}{dz^2}+k^2\frac{\epsilon_{\bot}(\epsilon_{\bot}+\epsilon_a)}{\epsilon_{\bot}+\epsilon_a\cos^2\phi}E_x=0,\end{aligned}$$ where $k=2\pi/\lambda$. Moreover, it can be shown [@zeldovich81; @ong83] that the $z$ component of the Poynting vector $I=S_z=(c/8\pi)E_xH^{*}_y$ remains constant during the light scattering and, therefore, it can be used to characterize the nonlinear transmission results.
As the boundary conditions for the coupled nonlinear equations (\[eq:director2\]) and (\[eq:full\_set\]), we assume that there is an infinitely rigid director anchoring at both surfaces of the slab, i.e. $$\begin{aligned}
\label{eq:boundaries1} \phi(0)=\phi(L)=0,\end{aligned}$$ and also introduce the scattering amplitudes for the optical field $$\begin{aligned}
\label{eq:boundaries2}
E_x(z)=\left\lbrace %
\begin{array}{lc}
{\cal E}_{\rm in}\exp(ikz)+{\cal E}_{\rm ref}\exp(-ikz),& z\le 0,\\
{\cal E}_{\rm out}\exp(ikz), & z\ge L,
\end{array}
\right.
\end{aligned}$$ where $L$ is the thickness of the liquid-crystal slab, ${\cal
E}_{\rm in}$, ${\cal E}_{\rm ref}$, and ${\cal E}_{\rm out}$ are the electric field amplitudes of incident, reflected, and outgoing waves, respectively.
To solve this nonlinear problem, first we fix the amplitude of the outgoing wave ${\cal E}_{\rm out}$ and find unique values for the amplitudes of the incident, ${\cal E}_{\rm in}$, and reflected, ${\cal E}_{\rm ref}$, waves. By using the so-called *shooting method* [@nr], in Eq. (\[eq:director2\]) for the director we fix the amplitude of the outgoing wave and, assuming that $\phi(L)=0$ at the right boundary, find the derivative $(d\phi/dz)_{z=L}$ such that after integrating we obtain a vanishing value of the director angle at the left boundary, i.e. $\phi(0)=0$. Because Eq. (\[eq:director2\]) is a generalized nonlinear pendulum equation, we look for periodic solutions with the period $2L$. Obviously, there exists an infinite number of such solutions and, therefore, there is an infinite set of the derivatives $(d\phi/dz)_{z=L}$ which satisfy Eq. (\[eq:director2\]) and the condition (\[eq:boundaries1\]). All such solutions correspond to extrema of the free energy of the system. However, we are interested only in the solution which realizes the minimum of the free energy. By analyzing our coupled nonlinear equations in a two-dimensional phase space, we can show that the corresponding solution lies just below the separatrix curve and has no node between the points $z=0$ and $z=L$. This observation allows us to reduce significantly the domain of our search for the required values of the derivative $(d\phi/dz)_{z=L}$.
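As an aside (this sketch is not part of the original paper), the shooting procedure can be illustrated on a simplified version of Eq. (\[eq:director2\]): in the one-constant approximation $K_{11}=K_{33}$ and for a fixed, $z$-independent field amplitude, the director equation reduces to the pendulum-type boundary-value problem $\phi''+q\sin 2\phi=0$ with $\phi(0)=\phi(L)=0$, where the parameter $q$ lumps together the intensity and the elastic constant (the values below are illustrative, not those of the paper). The code shoots from $z=0$ instead of $z=L$, which is equivalent by symmetry, and searches below the separatrix slope $\sqrt{2q}$, exactly as discussed above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Simplified director equation (one-constant approximation, fixed field):
#   phi'' + q*sin(2*phi) = 0,  rigid anchoring phi(0) = phi(L) = 0.
q, L = 2.0, 2.0   # illustrative values; above the Freedericksz threshold

def phi_at_L(slope):
    """Integrate from z=0 with phi(0)=0, phi'(0)=slope; return phi(L)."""
    sol = solve_ivp(lambda z, y: [y[1], -q*np.sin(2*y[0])],
                    (0.0, L), [0.0, slope], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# The sought nontrivial solution lies below the separatrix slope sqrt(2q);
# bisect the shooting parameter phi'(0) on that interval.
s = brentq(phi_at_L, 1e-6, np.sqrt(2*q) - 1e-6)
sol = solve_ivp(lambda z, y: [y[1], -q*np.sin(2*y[0])],
                (0.0, L), [0.0, s], dense_output=True,
                rtol=1e-10, atol=1e-12)
z = np.linspace(0, L, 401)
phi = sol.sol(z)[0]
print(f"phi'(0) = {s:.4f}, max deflection = {phi.max():.4f} rad")
```

The maximum deflection is reached at the middle of the slab, in agreement with the symmetry argument used in the paper.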
![Transmission of an one-dimensional periodic structure with an embedded liquid-crystal defect. In the linear regime, the transmission is characterized by the presence of an in-gap resonant peak due to the excitation of a defect mode. Nonlinear transmission displays bistability at the defect-mode frequency with two different thresholds for “up” and “down” directions and a hysteresis loop (see the insert).[]{data-label="fig1"}](fig/fig2){width="120mm"}
The obtained solutions can be characterized by the maximum angle $\phi_{\rm max}$ of the director deviation which, as is intuitively clear, should be reached near or at the middle of the slab. In Fig. \[fig0\](b,c), we plot the maximum angle $\phi_{\rm max}$ and the transmission coefficient of the liquid-crystal slab, defined as $T=|{\cal E}_{\rm out}|^2/|{\cal E}_{\rm in}|^2$, vs. the light intensity. For numerical calculations, we use the parameters $K_{11}=4.5\times10^{-7}$ dyn, $K_{33}=9.5\times10^{-7}$ dyn, $\epsilon_a=0.896$, $\epsilon_{\bot}=2.45$, $L=200$ nm, and $\lambda=1.5$ $\mu$m, which correspond to the PAA liquid crystal [@stephen]; because of a lack of the corresponding data at the wavelength $\lambda=1.5$ $\mu$m, the values of the dielectric constant are taken from the optical range.
From the results presented in Fig. \[fig0\](b,c), we observe sharp jumps of the director maximum angle $\phi_{\rm max}$ and the transmission coefficient $T$ due to the *optical Freedericksz transition* in the liquid-crystal defect. However, a variation of the transmission coefficient during this process is not larger than $15\%$. The threshold of the optical Freedericksz transition appears to be different for the increasing and decreasing intensity of the incoming light, so that this nonlinear system is bistable, and it displays a hysteresis behavior. The bistable transmission of the liquid-crystal slab is similar to that predicted for the slab of PAA liquid crystal in the geometric optics approximation [@ong83], and such a behavior is explained by the existence of the metastable state which the system occupies at the decreasing light intensity [@zeldovich81; @ong83].
Liquid-crystal defect in a periodic photonic structure
======================================================
Now, we study a similar problem for a liquid-crystal defect infiltrated into a pore of the periodic structure created by Si layers with the refractive index $n=3.4$. For simplicity, we consider a one-dimensional structure with the period $a=400$ nm and the layer thickness $d_1 = 200$ nm, which possesses a frequency gap between $1.4$ $\mu$m and $2.5$ $\mu$m. We assume that one of the holes is infiltrated with a PAA nematic liquid crystal with $\epsilon_{\bot}=2.45$. Such a defect modifies the linear transmission of the periodic structure by creating a sharp defect-mode peak at the wavelength $\lambda_d\approx1.5$ $\mu$m, as shown in Fig. \[fig1\].
![Example of a tunable all-optical diode based on the optical Freedericksz transition in a liquid-crystal defect. Asymmetrically placed defect leads to different threshold intensities of the switching for the waves propagating from the right and left, respectively.[]{data-label="fig2"}](fig/fig3){width="118mm"}
To solve the nonlinear transmission problem, we employ the transfer matrix approach [@yeh], implementing it for the solution of the full system of coupled equations (\[eq:director2\]) and (\[eq:full\_set\]). By tuning the input intensity at the defect mode, we observe the same scenario as for a single liquid-crystal slab \[cf. the insert in Fig. \[fig1\] and Fig. \[fig0\](c)\]. Namely, there exists a hysteresis loop in the transmission with two different thresholds for the increasing and decreasing intensities. The difference is, however, in the values. Due to the small width of the defect-mode resonance, even a small reorientation of the director leads to a sharp (up to $90\%$) change in the transmission. Another significant difference is that the threshold values are *lower by four orders of magnitude*, for a given periodic structure for which we take 10 layers on each side of the defect.
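The linear part of such a transfer-matrix calculation can be sketched with the standard characteristic-matrix method at normal incidence. The sketch below is a simplification, not the full computation of the paper: the nonlinear director coupling is omitted and the liquid-crystal defect is taken in its unperturbed state with $n=\sqrt{\epsilon_{\bot}}$; the layer parameters follow the structure described above.

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of one homogeneous layer (normal incidence)."""
    delta = 2*np.pi*n*d/lam
    return np.array([[np.cos(delta), 1j*np.sin(delta)/n],
                     [1j*n*np.sin(delta), np.cos(delta)]])

def transmission(layers, lam, n_in=1.0, n_out=1.0):
    """Intensity transmission and reflection of a layer stack in air."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    B, C = M @ np.array([1.0, n_out])
    t = 2*n_in/(n_in*B + C)
    r = (n_in*B - C)/(n_in*B + C)
    return (n_out/n_in)*abs(t)**2, abs(r)**2

n_si, d = 3.4, 200e-9            # Si layers, lattice constant a = 400 nm
n_lc = np.sqrt(2.45)             # unperturbed LC defect, n = sqrt(eps_perp)
period = [(n_si, d), (1.0, d)]   # one Si/air period

perfect = period*10
# symmetric structure: 10 Si layers on each side of the LC defect
defect_stack = period*9 + [(n_si, d), (n_lc, d), (n_si, d)] \
               + [(1.0, d), (n_si, d)]*9

for lam in (1.5e-6, 2.0e-6):
    T, R = transmission(defect_stack, lam)
    print(f"lambda = {lam*1e6:.1f} um: T = {T:.3e} (T + R = {T + R:.6f})")
```

Deep inside the gap the perfect stack is essentially opaque, while the defect opens a narrow resonance whose exact position depends on the stack details; for lossless layers, energy conservation $T+R=1$ provides a basic sanity check of the implementation.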
Finally, we notice that in a finite periodic structure an asymmetrically placed defect (see Fig. \[fig2\]) makes it possible to create a nonreciprocal device in which the threshold intensities for the molecular reorientation differ for light propagating from the right and from the left. This feature is associated with the operation of an *optical diode* [@scalora; @gallo]. As can be seen in Fig. \[fig2\], by shifting the infiltrated liquid-crystal defect closer to one of the edges of the structure while fixing its total length, we can increase the switching power and extend the diode operation region, at the price of a decreased transmitted power. Moreover, these results show that the threshold intensities depend strongly on the number of periods between the defect and the structure edge, due to the stronger confinement of the defect mode. This also gives us the possibility to reduce the switching power significantly simply by taking a larger number of periods in the photonic structure.
Conclusions
===========
We have demonstrated that the orientational nonlinearity of nematic liquid crystals can be employed to achieve tunable all-optical switching and diode operation in periodic photonic structures with infiltrated liquid-crystal defects. For the first time to our knowledge, we have solved a coupled system of nonlinear equations for the nematic director and the propagating electric field for the model of a one-dimensional periodic structure created by Si layers with a single (symmetric or asymmetric) pore infiltrated by a liquid crystal. We have demonstrated that the threshold of the optical Freedericksz transition in the liquid-crystal defect is reduced dramatically due to multiple reflections in the periodic structure, so that such a defect may allow a tunable switching and diode operation in the photonic structure.
Acknowledgements {#acknowledgements .unnumbered}
================
The work has been supported by the Australian Research Council. The authors thank B.Ya. Zeldovich, I.C. Khoo, M. Karpierz, and O. Lavrentovich for useful discussion of our results and suggestions, and I.V. Shadrivov for the help in numerical simulations.
[99]{}
J.D. Joannopoulos, R.D. Meade, and J.N. Winn, [*Photonic Crystals: Molding the Flow of Light*]{} (Princeton University Press, Princeton, NJ, 1995).
E. Yablonovitch, “Inhibited spontaneous emission in solid-state physics and electronics,” Phys. Rev. Lett. [**58**]{}, 2059-2062 (1987).
K. Busch and S. John, “Liquid-crystal photonic-band-gap materials: The tunable electromagnetic vacuum,” Phys. Rev. Lett. [**83**]{}, 967-970 (1999).
K. Yoshino, Y. Shimoda, Y. Kawagishi, K. Nakayama, and M. Ozaki, “Temperature tuning of the stop band in transmission spectra of liquid-crystal infiltrated synthetic opal as tunable photonic crystal,” Appl. Phys. Lett. [**75**]{}, 932-934 (1999).
S.W. Leonard, J.P. Mondia, H.M. van Driel, O. Toader, S. John, K. Busch, A. Birner, U. Gösele, and V. Lehmann,“Tunable two-dimensional photonic crystals using liquid-crystal infiltration,” Phys. Rev. B [**61**]{}, R2389–R2392 (2000).
Ch. Schuller, F. Klopf, J.P. Reithmaier, M. Kamp, and A. Forchel, “Tunable photonic crystals fabricated in III-V semiconductor slab waveguides using infiltrated liquid crystals,” Appl. Phys. Lett. [**82**]{}, 2767-2769 (2003).
T. T. Larsen, A. Bjarklev, D. S. Hermann, and J. Broeng, “Optical devices based on liquid crystal photonic bandgap fibers,” Opt. Express 11, 2589-2596 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-20-2589
D. Kang, J.E. Maclennan, N.A. Clark, A.A. Zakhidov, and R.H. Baughman, “Electro-optic behavior of liquid-crystal-filled silica photonic crystals: Effect of liquid-crystal alignment,” Phys. Rev. Lett. [**86**]{}, 4052-4055 (2001).
M.J. Escuti, J. Qi, and G.P. Crawford, “Tunable face-centered-cubic photonic crystal formed in holographic polymer dispersed liquid crystals,” Opt. Lett. [**28**]{}, 522-524 (2003).
E. Graugnard, J.S. King, S. Jain, C.J. Summers, Y. Zhang-Williams, and I.C. Khoo, “Electric-field tuning of the Bragg peak in large-pore TiO$_2$ inverse shell opals,” Phys. Rev. B [**72**]{}, 233105-4 (2005).
S.F. Mingaleev, M. Schillinger, D. Hermann, and K. Busch, “Tunable photonic crystal circuits: concepts and designs based on single-pore infiltration,” Opt. Lett. [**29**]{}, 2858-2860 (2004).
I. Del Villar, I. R. Matias, F. J. Arregui, and R. O. Claus, “Analysis of one-dimensional photonic band gap structures with a liquid crystal defect towards development of fiber-optic tunable wavelength filters,” Opt. Express 11, 430-436 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-5-430
R. Ozaki, T. Matsui, M. Ozaki, and K. Yoshino, “Electrically color-tunable defect mode lasing in one-dimensional photonic band-gap system containing liquid crystal,” Appl. Phys. Lett. [**82**]{}, 3593-3595 (2003).
E.P. Kosmidou, E.E. Kriezis, and T.D. Tsiboukis, “Analysis of tunable photonic crystal devices comprising liquid crystal materials as defects,” IEEE J. Quantum Electron. [**41**]{}, 657–665 (2005).
S. Fan, “Sharp asymmetric line shapes in side-coupled waveguide-cavity systems,” Appl. Phys. Lett. **80**, 908-910 (2002).
A.E. Miroshnichenko and Y.S. Kivshar, “Sharp bends in photonic crystal waveguides as nonlinear Fano resonators,” Opt. Express 13, 3969-3976 (2005), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-13-11-3969
B.Ya. Zel’dovich, N.V. Tabiryan, and Yu.S. Chilingaryan, “Freedericksz transition induced by light fields,” Zh. Eksp. Teor. Fiz. **81**, 72 (1981) \[Sov. Phys. JETP **54**, 32 (1981)\].
I.C. Khoo, “Optically induced molecular reorientation and third order nonlinear processes in nematic liquid crystals,” Phys. Rev. A **23**, 2077-2081 (1981).
H.L. Ong, “Optically induced Freedericksz transition and bistability in a nematic liquid crystal,” Phys. Rev. A **28**, 2393-2407 (1983).
N.V. Tabiryan, A.V. Sukhov, and B.Ya. Zel’dovich, “Orientational optical nonlinearity of liquid crystals,” Mol. Cryst. Liq. Cryst. **136**, 1-131 (1986).
P.G. de Gennes, *The Physics of Liquid Crystals*, (Clarendon Press, Oxford, 1979).
W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, *Numerical Recipes in C++*, (Cambridge University Press, 2002).
M.J. Stephen and J.P. Straley, “Physics of Liquid Crystals,” Rev. Mod. Phys. **46**, 617-704 (1974).
P. Yeh, *Optical Waves in Layered Media*, (John Wiley & Sons, New York, 1988).
M. Scalora, J.P. Dowling, C.M. Bowden, and M.J. Bloemer, “The photonic band edge optical diode,” J. Appl. Phys. **76**, 2023-2026 (1994).
K. Gallo, G. Assanto, K.R. Parameswaran, and M.M. Fejer, “All-optical diode in a periodically poled lithium niobate waveguide,” Appl. Phys. Lett. **79**, 314-316 (2001).
---
abstract: |
The paper proposes to introduce incomplete Srivastava’s triple hypergeometric matrix functions through application of the incomplete Pochhammer matrix symbols. We also derive certain properties such as matrix differential equation, integral formula, reduction formula, recursion formula, recurrence relation and differentiation formula of the incomplete Srivastava’s triple hypergeometric matrix functions.\
Keywords: Matrix functional calculus, recursion formula, Gamma matrix function, Incomplete gamma matrix function, Incomplete Pochhammer matrix symbol, Laguerre matrix polynomial, Bessel and modified Bessel matrix function.\
AMS Subject Classification: 15A15; 33C65; 33C45; 34A05.
author:
- |
Ashish Verma[^1]\
Department of Mathematics\
Prof. Rajendra Singh (Rajju Bhaiya)\
Institute of Physical Sciences for Study and Research\
V.B.S. Purvanchal University, Jaunpur (U.P.)- 222003, India\
[email protected]\
title: 'On the incomplete Srivastava’s triple hypergeometric matrix functions'
---
Introduction
============
Recently, Srivastava [*et al*]{}. [@HM] have studied incomplete Pochhammer symbols and incomplete hypergeometric functions and discussed applications of these functions in communication theory, probability theory and groundwater pumping modelling. Cetinkaya [@AC] introduced the incomplete second Appell hypergeometric functions and obtained certain properties of these functions. The incomplete Srivastava’s triple hypergeometric functions were also introduced recently, and certain of their properties were investigated in [@jc; @jc1]. Srivastava [*et al*]{}. [@SSK] have obtained several interesting properties of the incomplete $H$-functions. In his work on hypergeometric functions of three variables, Srivastava [@HM1; @HM2] noticed the existence of three additional complete triple hypergeometric functions of the second order. These functions are known in the literature as Srivastava’s triple hypergeometric functions $H_A$, $H_B$ and $H_C$ and are given in [@SK; @SM].
For a wide variety of other explorations involving incomplete hypergeometric functions in several variables, the interested reader may be referred to several recent papers [@8r1; @jc; @jc1; @8r2; @8r5; @8r3; @8r4].
Matrix theory has become pervasive in almost every area of mathematics in general, and in orthogonal polynomials and special functions in particular. The matrix analogue of the Gauss hypergeometric function was introduced by Jódar and Cortés [@LC]; this hypergeometric matrix function plays a very important role in solving numerous problems of mathematical physics, engineering and the mathematical sciences [@p2; @p3; @p1; @p5; @p4]. Quite recently, the incomplete hypergeometric matrix functions were introduced by Abdalla [@Ab], where some fundamental properties of these functions are discussed. In a similar vein, Bakhet [*et al*]{}. [@AY] introduced the Wright hypergeometric matrix functions and the incomplete Wright Gauss hypergeometric matrix functions and discussed some properties of these functions. The present paper continues our earlier work [@A1; @A2; @A3], which introduced the incomplete first, second and fourth Appell hypergeometric matrix functions and studied some of their basic properties, such as the matrix differential equation, integral formula, recursion formula, recurrence relation and differentiation formula.
The present work is organized as follows. In Section 2, we list the basic definitions that are needed in the sequel. In Section 3, we introduce two incomplete Srivastava’s triple hypergeometric matrix functions $\gamma_{\mathcal{A}}^{H}$ and $\Gamma_{\mathcal{A}}^{H}$ by using the incomplete Pochhammer matrix symbols. Some properties such as the matrix differential equation, integral representation, reduction formula, recursion formula, recurrence relation and differentiation formula are also derived. In Section 4, we introduce the incomplete Srivastava’s triple hypergeometric matrix functions $\gamma_{\mathcal{B}}^{H}$ and $\Gamma_{\mathcal{B}}^{H}$ by using the incomplete Pochhammer matrix symbols and investigate several properties of each of these functions. Finally, in Section 5, we define the incomplete Srivastava’s triple hypergeometric matrix functions $\gamma_{\mathcal{C}}^{H}$ and $\Gamma_{\mathcal{C}}^{H}$ and derive certain properties of each of these functions. The work at hand also presents connections between these matrix functions and the Bessel and Laguerre matrix functions.
Preliminaries
=============
Throughout this paper, let $\mathbb{C}^{r\times r}$ be the vector space of $r$-square matrices with complex entries. For any matrix $A\in \mathbb{C}^{r\times r}$, its spectrum $\sigma(A)$ is the set of eigenvalues of $A$. A square matrix $A$ in $\mathbb{C}^{r\times r}$ is said to be positive stable if $\Re(\lambda)>0$ for all $\lambda\in\sigma(A)$.
Let $A$ be a positive stable matrix in $\mathbb{C}^{r\times r}$. The Gamma matrix function $\Gamma(A)$ is defined as follows [@LC1]: $$\begin{aligned}
\Gamma(A)=\int_{0}^{\infty} e^{-t} t^{A-I} dt; \hskip1cm t^{A-I}= \exp((A-I)\ln t), \label{g1}\end{aligned}$$ where $I$ is the $r$-square identity matrix.
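As a quick numerical illustration (not part of the original text), the integral (\[g1\]) can be evaluated directly by forming $t^{A-I}=\exp((A-I)\ln t)$ with the matrix exponential; for a diagonal $A$ the result must reduce to ordinary scalar gamma values.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import gamma

def gamma_matrix(A, upper=60.0, n=4000):
    """Evaluate Gamma(A) = int_0^inf e^{-t} t^{A-I} dt with
    t^{A-I} = expm((A - I) ln t), by a plain trapezoidal rule.
    Illustrative only; assumes A is positive stable and well conditioned."""
    I = np.eye(A.shape[0])
    t = np.linspace(1e-8, upper, n)   # skip t = 0 (integrable endpoint)
    vals = np.array([np.exp(-ti)*expm((A - I)*np.log(ti)) for ti in t])
    dt = np.diff(t)[:, None, None]
    return (0.5*(vals[1:] + vals[:-1])*dt).sum(axis=0)

A = np.diag([1.5, 3.0])   # diagonal case reduces to ordinary scalar gammas
G = gamma_matrix(A)
print(np.round(np.diag(G), 4), gamma(1.5), gamma(3.0))
```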
Let $A$ be a positive stable matrix in $\mathbb{C}^{r\times r}$ and let $x$ be a positive real number. Then the incomplete gamma matrix functions $\gamma(A,x)$ and $\Gamma(A,x)$ are defined by [@Ab] $$\begin{aligned}
\gamma(A,x)=\int_{0}^{x} e^{-t} t^{A-I}dt \label{1eq4}\end{aligned}$$ and $$\begin{aligned}
\Gamma(A,x)= \int_{x}^{\infty} e^{-t} t^{A-I}dt\,,\label{1eq5}\end{aligned}$$ respectively and satisfy the following decomposition formula: $$\begin{aligned}
\gamma(A,x)+\Gamma(A,x)=\Gamma(A).\label{1eq6}\end{aligned}$$ Let $A$ be a matrix in $\mathbb{C}^{r\times r}$ and let $x$ be a positive real number. Then the incomplete Pochhammer matrix symbols $(A;x)_{n}$ and $[A; x]_{n}$ are defined as follows [@Ab] $$\begin{aligned}
(A; x)_{n}= \gamma(A+nI, x) \,\Gamma^{-1}(A)\label{1eq7}\end{aligned}$$ and $$\begin{aligned}
[A; x]_{n}= \Gamma(A+nI, x)\, \Gamma^{-1}(A).\label{1eq8}\end{aligned}$$ In view of (\[1eq6\]), these incomplete Pochhammer matrix symbols $(A; x)_{n}$ and $[A; x]_{n}$ satisfy the following decomposition relation $$\begin{aligned}
(A; x)_{n}+[A; x]_{n}= (A)_{n},\label{1eq9}\end{aligned}$$ where $(A)_{n}$ is the Pochhammer symbol given in [@LC].
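In the scalar ($1\times 1$) case the incomplete Pochhammer symbols reduce to regularized incomplete gamma functions, so the decomposition (\[1eq9\]) can be checked numerically; a short sketch with illustrative parameter values:

```python
from scipy.special import gammainc, gammaincc, poch

# Scalar (1x1) reduction of the incomplete Pochhammer matrix symbols:
#   (a; x)_n = gamma(a+n, x) / Gamma(a) = P(a+n, x) * (a)_n,
#   [a; x]_n = Gamma(a+n, x) / Gamma(a) = Q(a+n, x) * (a)_n,
# where P and Q are the regularized lower/upper incomplete gamma functions.
a, x, n = 1.7, 2.5, 4
lower = gammainc(a + n, x)*poch(a, n)    # (a; x)_n
upper = gammaincc(a + n, x)*poch(a, n)   # [a; x]_n
print(lower + upper, poch(a, n))         # decomposition (1eq9)
```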
Let $A$, $B$ and $C$ be matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ is invertible for all integers $k \geq0.$ The incomplete Gauss hypergeometric matrix functions are defined by [@Ab] $$\begin{aligned}
{ _2\gamma_{1}}\Big[(A; x), B; C; z\Big]= \sum_{n=0}^{\infty}(A;x)_{n} (B)_{n}(C)_{n}^{-1}\frac{z^{n}}{n!}\label{11eq10}\end{aligned}$$ and $$\begin{aligned}
{_2\Gamma_{1}}\Big[[A; x], B; C; z\Big]= \sum_{n=0}^{\infty}[A;x]_{n} (B)_{n}(C)_{n}^{-1}\frac{z^{n}}{n!}.\label{1eq10}\end{aligned}$$
The Bessel matrix function is defined by [@J1; @J2; @J3]: $$\begin{aligned}
J_{A}(z)=\sum_{m= 0}^{\infty}\frac{(-1)^{m}\,\,\Gamma^{-1}(A+(m+1)I)}{m!}\Big(\frac{z}{2}\Big)^{A+2mI},\label{j1}\end{aligned}$$ where $A+kI$ is invertible for all integers $k\geq 0$. The modified Bessel matrix functions are introduced in [@J3] in the form $$\begin{aligned}
&I_{A}(z)= e^{\frac{-Ai\pi}{2}} J_{A}(z e^{\frac{i\pi}{2}}); \,\,\, -\pi<\arg(z)<\frac{\pi}{2},\notag\\
&I_{A}(z)= e^{\frac{Ai\pi}{2}} J_{A}(z e^{\frac{-i\pi}{2}}); \,\,\, -\frac{\pi}{2}<\arg(z)<\pi.\label{j2}\end{aligned}$$
The Laguerre matrix polynomial is defined by [@LR] $$\begin{aligned}
L_{n}^{(A, \lambda)}(z)=\sum_{k=0}^{n}\frac{(-1)^{k}{\lambda}^{k}}{k!\, (n-k)!}(A+I)_{n}{[(A+I)_{k}]}^{-1} z^{k}.\end{aligned}$$ It follows that for $ \lambda=1$, we have $$\begin{aligned}
L_{n}^{(A)}(z)=\frac{(A+I)_{n}}{n!}\, _{1}F_{1}(-nI; A+I; z).\end{aligned}$$
The incomplete Srivastava’s triple hypergeometric matrix functions $\gamma_{\mathcal{A}}^{H}$ and $\Gamma_{\mathcal{A}}^{H}$
============================================================================================================================
In this section, we introduce the incomplete Srivastava’s triple hypergeometric matrix functions $\gamma_{\mathcal{A}}^{H}$ and $\Gamma_{\mathcal{A}}^{H}$ as follows: $$\begin{aligned}
&\gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C'; z_1, z_2, z_3]\notag\\&=\sum_{m, n, p\geq 0}(A;x)_{m+p} (B)_{m+n} (B')_{n+p} (C)^{-1}_{m}(C')^{-1}_{n+p}\frac{z_{1}^{m}z_2^{n}z_3^{p}}{m! n! p!},\label{2eq1}\\
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C' ; z_1, z_2, z_3]\notag\\&=\sum_{m, n, p\geq 0}[A;x]_{m+p} (B)_{m+n} (B')_{n+p} (C)^{-1}_{m}(C')^{-1}_{n+p}\frac{z_{1}^{m}z_2^{n}z_3^{p}}{m! n! p!},\label{2eq2}\end{aligned}$$ where $A$, $B$, $B'$, $C$, $C'$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ and $C'+kI$ are invertible for all integers $k\geq0$.\
From (\[1eq9\]), we have the following decomposition formula $$\begin{aligned}
&\gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,C' ; z_1, z_2, z_3]+ \Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,C' ; z_1, z_2, z_3]\notag\\&= H_{\mathcal{A}}[A, B, B'; C, C' ; z_1, z_2, z_3],\label{2eq3}\end{aligned}$$ where $H_{\mathcal{A}}[A, B, B'; C,C' ; z_1, z_2, z_3]$ is the Srivastava’s triple hypergeometric matrix function [@RD1].
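As a numerical illustration (not part of the derivation), in the scalar case $r=1$ the decomposition formula (\[2eq3\]) can be verified by truncating the triple series (\[2eq1\]) and (\[2eq2\]). The sketch below uses arbitrary parameters and small $|z_i|$ so that the truncation error is negligible:

```python
import numpy as np
from math import factorial
from scipy.special import gammainc, gammaincc, poch

def HA_split(a, b, bp, c, cp, z1, z2, z3, x, terms=20):
    """Truncated scalar-case (r = 1) series for gamma_A^H, Gamma_A^H and H_A."""
    lo = hi = full = 0.0
    for m in range(terms):
        for n in range(terms):
            for p in range(terms):
                core = (poch(b, m + n) * poch(bp, n + p)
                        / (poch(c, m) * poch(cp, n + p))
                        * z1**m * z2**n * z3**p
                        / (factorial(m) * factorial(n) * factorial(p)))
                # (a; x)_{m+p}, [a; x]_{m+p} and (a)_{m+p} in scalar form
                lo += gammainc(a + m + p, x) * poch(a, m + p) * core
                hi += gammaincc(a + m + p, x) * poch(a, m + p) * core
                full += poch(a, m + p) * core
    return lo, hi, full

lo, hi, full = HA_split(1.2, 0.8, 1.5, 2.0, 2.5, 0.1, 0.1, 0.1, 1.0)
assert np.isclose(lo + hi, full)   # gamma_A^H + Gamma_A^H = H_A, cf. (2.3)
```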
It is quite evident that the special cases of (\[2eq1\]) and (\[2eq2\]) when $z_2 = 0$ reduce to the known incomplete families of the second Appell hypergeometric matrix functions [@A3]. Also, the special cases of (\[2eq1\]) and (\[2eq2\]) when $z_2 = 0$ and $z_3 = 0$ or $z_1 = 0$ are seen to yield the known incomplete families of Gauss hypergeometric matrix functions [@Ab].
In view of the decomposition formula (\[2eq3\]), it is sufficient to discuss the properties and characteristics of $\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C' ; z_1, z_2, z_3]$; the corresponding properties of $\gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C' ; z_1, z_2, z_3]$ then follow at once.
Let $A$, $B$, $B'$, $C$ and $C'$ be commuting matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ and $C'+kI$ are invertible for all integers $k\geq0.$ Then the function defined by ${\mathcal{T}_1}={\mathcal{T}_1}(z_1, z_2, z_3) =\gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,C' ; z_1, z_2, z_3]+\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,C' ; z_1, z_2, z_3]$ satisfies the following system of partial differential equations: $$\begin{aligned}
&\Big[z_1\frac{\partial}{\partial z_1}{(z_1\frac{\partial}{\partial z_1}+C-I)- z_1(z_1\frac{\partial}{\partial z_1}+z_3\frac{\partial}{\partial z_3}+A)(z_1\frac{\partial}{\partial z_1}+z_2\frac{\partial}{\partial z_2}+B)}\Big] \mathcal{T}_1=O,\label{2eq4}\\
&\Big[z_2\frac{\partial}{\partial z_2}{(z_2\frac{\partial}{\partial z_2}+ z_3\frac{\partial}{\partial z_3}+C'-I)- z_2(z_2\frac{\partial}{\partial z_2}+z_1\frac{\partial}{\partial z_1}+B)(z_2\frac{\partial}{\partial z_2}+z_3\frac{\partial}{\partial z_3}+B')}\Big] \mathcal{T}_1=O,\label{m1}\\
&\Big[z_3\frac{\partial}{\partial z_3}{(z_2\frac{\partial}{\partial z_2}+z_3\frac{\partial}{\partial z_3}+C'-I)- z_3(z_1\frac{\partial}{\partial z_1}+z_3\frac{\partial}{\partial z_3}+A)(z_2\frac{\partial}{\partial z_2}+z_3\frac{\partial}{\partial z_3}+B')}\Big] \mathcal{T}_1=O.\label{2eq5}\end{aligned}$$
The assertion follows from the decomposition formula (\[2eq3\]), since $H_{\mathcal{A}}[A, B, B'; C, C' ; z_1, z_2, z_3]$ satisfies the system of matrix differential equations given in [@RD1].
Let $A$, $B$, $B'$, $C$ and $C'$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$, $C+kI$ and $C'+kI$ are invertible for all integers $k\geq0$. Then the following integral representation for $\Gamma_{\mathcal{A}}^{H}$ in (\[2eq2\]) holds true: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,C'; z_1, z_2, z_3]= \Gamma^{-1}(A) \Gamma^{-1}(B)\notag\\&\times \Big[\int_{x}^{\infty} \int_{0}^{\infty}e^{-t-s}\, t^{A-I} s^{B-I} {_{0}F_{1}}(-; C; z_1 st)\, {_{1}F_{1}}(B'; C' ; z_2 s+ z_3 t) dt ds\Big],\label{2eq6}\end{aligned}$$ where ${_{0}F_{1}}(-; C; z_1)$ is the hypergeometric matrix function with one denominator parameter and ${_{1}F_{1}}(B; C ; z_1)$ is the Kummer hypergeometric matrix function.
Replacing the incomplete Pochhammer matrix symbol $[A; x]_{m+p}$ in (\[2eq2\]) by its integral representation from (\[1eq5\]) and (\[1eq8\]), and then the Pochhammer matrix symbol $(B)_{m+n}$ by its integral representation, one attains the following equation $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,C' ; z_1, z_2, z_3]\notag\\&= \Gamma^{-1}(A) \sum_{m, n, p\geq0}^{}\Big(\int_{x}^{\infty}e^{-t}\, t^{A+(m+p-1)I}dt\Big)\notag\\
&\times (B)_{m+n} (B')_{n+p} (C)^{-1}_{m} (C')^{-1}_{n+p}\frac{z_1^{m} z_{2}^{n} z_{3}^{p}}{m! n! p!},\notag\\
&=\Gamma^{-1}(A) \Gamma^{-1}(B)\sum_{m, n, p\geq0}^{}\Big(\int_{x}^{\infty}\int_{0}^{\infty} e^{-t-s} t^{A+(m+p-1)I} s^{B+(m+n-1)I}dt ds\Big)\notag\\
&\times (B')_{n+p} (C)^{-1}_{m} (C')^{-1}_{n+p}\frac{z_1^{m} z_{2}^{n} z_{3}^{p}}{m! n! p!},\notag\\
&=\Gamma^{-1}(A) \Gamma^{-1}(B)\Big(\int_{x}^{\infty}\int_{0}^{\infty} e^{-t-s} t^{A-I} s^{B-I}dt ds\Big)\notag\\
&\times \Big(\sum_{m\geq 0}^{}(C)^{-1}_{m}\frac{(z_1 st)^{m}}{m!}\Big)\Big(\sum_{n, p\geq 0}^{} (B')_{n+p} (C')^{-1}_{n+p}\frac{ (z_{2} s)^{n} (z_{3} t)^{p}}{n! p!}\Big).\label{l1}\end{aligned}$$ Taking into account the summation formula [@SM] $$\begin{aligned}
\sum_{N\geq 0}^{}f(N) \frac{(z_1+z_2)^{N}}{N!}=\sum_{m, n\geq 0}^{}f(m+n)\frac{z_{1}^{m}}{m!}\frac{z_{2}^{n}}{n!},\label{l2}\end{aligned}$$ we get (\[2eq6\]). This completes the proof of (\[2eq6\]).
The following double integral representation holds true: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, -mI; C, C'+I ; z_1, z_2, z_3]\notag\\
&=m![(C'+I)_{m}]^{-1} \Gamma^{-1}(A) \Gamma^{-1}(B)\notag\\&\times \Big[\int_{x}^{\infty} \int_{0}^{\infty}e^{-t-s}\, t^{A-I} s^{B-I} {_{0}F_{1}}(-; C; z_1 st)\, L^{(C')}_{m}(z_2 s+ z_3 t) dt ds\Big],\end{aligned}$$ where $A$, $B$, $B'$, $C$ and $C'$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$, $C+kI$ and $C'+kI$ are invertible for all integers $k\geq0$.
The following double integral representations hold true: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C+I,C' ; -z_1, z_2, z_3]\notag\\
&=z_{1}^{\frac{-C}{2}}\, \Gamma^{-1}(A) \Gamma^{-1}(B)\Gamma(C+I)\notag\\
&\times \int_{x}^{\infty}\int_{0}^{\infty} e^{-t-s} \,t^{A-\frac{C}{2}-I} s^{B-\frac{C}{2}-I} J_{C}(2\sqrt{z_1 st}) {_{1}F_{1}}(B'; C' ; z_2 s+ z_3 t) dt ds;\end{aligned}$$
$$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C+I,C' ; z_1, z_2, z_3]\notag\\
&=z_{1}^{\frac{-C}{2}}\, \Gamma^{-1}(A) \Gamma^{-1}(B)\Gamma(C+I)\notag\\
&\times \int_{x}^{\infty}\int_{0}^{\infty} e^{-t-s} \,t^{A-\frac{C}{2}-I} s^{B-\frac{C}{2}-I} I_{C}(2\sqrt{z_1 st}) {_{1}F_{1}}(B'; C' ; z_2 s+ z_3 t) dt ds,\end{aligned}$$
where $A$, $B$, $B'$, $C$ and $C'$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$, $C+kI$ and $C'+kI$ are invertible for all integers $k\geq0$.
The following double integral representations hold true: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, -mI; C+I, C'+I ; -z_1, z_2, z_3]\notag\\
&=m!\,z_{1}^{\frac{-C}{2}}\,[(C'+I)_{m}]^{-1}\, \Gamma^{-1}(A) \Gamma^{-1}(B)\Gamma(C+I)\notag\\
&\times \int_{x}^{\infty}\int_{0}^{\infty} e^{-t-s} \,t^{A-\frac{C}{2}-I} s^{B-\frac{C}{2}-I} J_{C}(2\sqrt{z_1 st}) L^{(C')}_{m}(z_2 s+ z_3 t)dt ds;\end{aligned}$$ $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, -mI; C+I, C'+I ; z_1, z_2, z_3]\notag\\
&=m!\,z_{1}^{\frac{-C}{2}}\, [(C'+I)_{m}]^{-1}\,\Gamma^{-1}(A) \Gamma^{-1}(B)\Gamma(C+I)\notag\\
&\times \int_{x}^{\infty}\int_{0}^{\infty} e^{-t-s} \,t^{A-\frac{C}{2}-I} s^{B-\frac{C}{2}-I} I_{C}(2\sqrt{z_1 st}) L^{(C')}_{m}(z_2 s+ z_3 t) dt ds,\end{aligned}$$ where $A$, $B$, $B'$, $C$ and $C'$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$, $C+kI$ and $C'+kI$ are invertible for all integers $k\geq0$.
Let $A$, $B$, $B'$, $C$ and $C'$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$, $B'+kI$, $C+kI$ and $C'+kI$ are invertible for all integers $k\geq0$. Then the following triple integral representation for $\Gamma_{\mathcal{A}}^{H}$ in (\[2eq2\]) holds true: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,C' ; z_1, z_2, z_3]\notag\\
&= \Gamma^{-1}(A) \Gamma^{-1}(B) \Gamma^{-1}(B')\Big[\int_{x}^{\infty}\int_{0}^{\infty} \int_{0}^{\infty}e^{-t-s-u}\, t^{A-I} s^{B-I} u^{B'-I} \notag\\& \times {_{0}F_{1}}(-; C; z_1 st) \,{_{0}F_{1}}(-; C'; z_2 us+z_3ut) dt ds du\Big].\label{x1}\end{aligned}$$
Applying the integral representation of the Pochhammer matrix symbol $(B')_{n+p}$ in (\[l1\]) together with the summation formula (\[l2\]), one arrives at (\[x1\]).
The following triple integral representations hold true: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C+I, C' ; -z_1, z_2, z_3]\notag\\
&=z_{1}^{\frac{-C}{2}} \, \Gamma^{-1}(A) \Gamma^{-1}(B)\Gamma^{-1}(B')\Gamma(C+I)\notag\\
&\times \int_{x}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty} e^{-t-s-u} \,t^{A-\frac{C}{2}-I} s^{B-\frac{C}{2}-I} u^{B'-I}\notag\\&\times J_{C}(2\sqrt{z_1 st}) \,{_{0}F_{1}}(-; C'; z_2 us+z_3ut)dt dsdu;\end{aligned}$$ $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C+I, C' ; z_1, z_2, z_3]\notag\\
&=z_{1}^{\frac{-C}{2}} \, \Gamma^{-1}(A) \Gamma^{-1}(B)\Gamma^{-1}(B')\Gamma(C+I)\notag\\
&\times \int_{x}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty} e^{-t-s-u} \,t^{A-\frac{C}{2}-I} s^{B-\frac{C}{2}-I} u^{B'-I}\notag\\&\times I_{C}(2\sqrt{z_1 st}) \,{_{0}F_{1}}(-; C'; z_2 us+z_3ut)dt dsdu,\end{aligned}$$ where $A$, $B$, $B'$, $C$ and $C'$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$, $C+kI$ and $C'+kI$ are invertible for all integers $k\geq0$.
Let $A$, $B$, $B'$, $C$ and $C'$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$, $C+kI$ and $C'+kI$ are invertible for all integers $k\geq0$. Then the following reduction formula for $\Gamma_{\mathcal{A}}^{H}$ holds true: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,B' ; z_1, z_2, z_3]\notag\\
&= (1-z_3)^{-A} (1-z_2)^{-B}\notag\\
&\times {_{2}\Gamma_{1}}[(A; x(1-z_3)), B; C; \frac{z_1}{(1-z_2)(1-z_3)}].\label{x1i}\end{aligned}$$
Setting $C'=B'$ in (\[2eq6\]) and using ${_{1}F_{1}}(B'; B'; z)= e^{z}$, the equation becomes $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,B'; z_1, z_2, z_3]\notag\\&= \Gamma^{-1}(A) \Gamma^{-1}(B) \Big[\int_{x}^{\infty} \int_{0}^{\infty}e^{-t(1-z_3)-s(1-z_2)}\, t^{A-I} s^{B-I} {_{0}F_{1}}(-; C; z_1 st)\, dt ds\Big].\label{ii1}\end{aligned}$$ Putting $t(1-z_3)= u$, $s(1-z_2)=v$, so that $dt= \frac{du}{(1-z_3)}$, $ds= \frac{dv}{(1-z_2)}$, in (\[ii1\]) and evaluating the $v$-integral by means of $\int_{0}^{\infty} e^{-v}\, v^{B-I}\, {_{0}F_{1}}(-; C; wv)\, dv = \Gamma(B)\, {_{1}F_{1}}(B; C; w)$, the following simplified equation emerges: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,B'; z_1, z_2, z_3]\notag\\& = \Gamma^{-1}(A) (1-z_3)^{-A}(1-z_2)^{-B}\notag\\ &\times \Big[\int_{x(1-z_3)}^{\infty}e^{-u}\, u^{A-I} {_{1}F_{1}}(B; C; \frac{z_1u}{(1-z_2)(1-z_3)})\, du\Big].\label{ii2}\end{aligned}$$ Finally, using the known result in [@Ab]: $$\begin{aligned}
{_{2}\Gamma_{1}}[[A; x], B;C;z]= \Gamma^{-1}(A) \int_{x}^{\infty} e^{-t} t^{A-I} {_{1}F_{1}}(B; C; zt) dt\end{aligned}$$ in (\[ii2\]), we are led to the desired result (\[x1i\]).
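As an aside, the reduction formula (\[x1i\]) can be checked numerically in the scalar case $r=1$ by direct series truncation. This sketch assumes small arguments so that the truncated sums approximate the series well; the parameter values are arbitrary:

```python
import numpy as np
from math import factorial
from scipy.special import gammaincc, poch

def upper_poch(a, x, n):
    # scalar [a; x]_n = Gamma(a+n, x)/Gamma(a)
    return gammaincc(a + n, x) * poch(a, n)

def Gamma_AH(a, b, bp, c, cp, z1, z2, z3, x, terms=22):
    # truncated scalar-case series (2.2) for Gamma_A^H
    return sum(upper_poch(a, x, m + p) * poch(b, m + n) * poch(bp, n + p)
               / (poch(c, m) * poch(cp, n + p))
               * z1**m * z2**n * z3**p
               / (factorial(m) * factorial(n) * factorial(p))
               for m in range(terms) for n in range(terms) for p in range(terms))

def Gamma_21(a, b, c, z, x, terms=40):
    # truncated scalar-case incomplete Gauss series 2Gamma1[[a; x], b; c; z]
    return sum(upper_poch(a, x, n) * poch(b, n) / poch(c, n)
               * z**n / factorial(n) for n in range(terms))

a, b, bp, c = 1.3, 0.9, 1.1, 2.2
z1, z2, z3, x = 0.05, 0.1, 0.1, 1.0
lhs = Gamma_AH(a, b, bp, c, bp, z1, z2, z3, x)          # C' = B'
rhs = ((1 - z3)**(-a) * (1 - z2)**(-b)
       * Gamma_21(a, b, c, z1 / ((1 - z2) * (1 - z3)), x * (1 - z3)))
assert np.isclose(lhs, rhs)
```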
\[h1\]Let $B'+sI$ be invertible for all integers $s\geq0$. Then the following recursion formula holds true for $\Gamma_{\mathcal{A}}^{H}$: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'+sI; C,C' ; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C' ; z_1, z_2, z_3]\notag\\&\quad+ z_2{B}{C'}^{-1}\Big[\sum_{k=1}^{s}\Gamma_{\mathcal{A}}^{H}[(A; x), B+I, B'+kI; C, C'+I ; z_1, z_2, z_3]\Big]\notag\\
&\quad+ z_3{A}{C'}^{-1}\Big[\sum_{k=1}^{s}\Gamma_{\mathcal{A}}^{H}[(A+I; x), B, B'+kI; C, C'+I; z_1, z_2, z_3]\Big].\label{r11}\end{aligned}$$ Furthermore, if $B'-kI$ are invertible for integers $k\leq s$, then $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'-sI; C,C' ;z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,C' ;z_1, z_2, z_3]\notag\\&\quad- z_2{B}{C'}^{-1}\Big[\sum_{k=0}^{s-1}\Gamma_{\mathcal{A}}^{H}[(A; x), B+I, B'-kI; C, C'+I ; z_1, z_2, z_3]\Big]\notag\\
&\quad- z_3{A}{C'}^{-1}\Big[\sum_{k=0}^{s-1}\Gamma_{\mathcal{A}}^{H}[(A+I; x), B, B'-kI; C, C'+I; z_1, z_2, z_3]\Big],\label{r12}\end{aligned}$$ where $A$, $B$, $B'$, $C$ and $C'$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ and $C'+kI$ are invertible matrices for all integers $k\geq 0$.
Using the integral formula (\[2eq6\]) of the incomplete Srivastava’s triple hypergeometric matrix function $\Gamma_{\mathcal{A}}^{H}$ together with the transformation $$\begin{aligned}
(B'+I)_{n+p}= B'^{-1}(B')_{n+p}(B'+(n+p)I),\end{aligned}$$ we get the following contiguous matrix relation: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'+I; C, C' ;z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C' ; z_1, z_2, z_3]\notag\\&\quad+ z_2{B}{C'}^{-1}\Big[\Gamma_{\mathcal{A}}^{H}[(A; x), B+I, B'+I; C, C'+I ; z_1, z_2, z_3]\Big]\notag\\
&\quad+ z_3{A}{C'}^{-1}\Big[\Gamma_{\mathcal{A}}^{H}[(A+I; x), B, B'+I; C, C'+I; z_1, z_2, z_3]\Big].
\label{3eqp1}\end{aligned}$$ Application of this contiguous matrix relation to the matrix function $\Gamma_{\mathcal{A}}^{H}$ with the matrix parameter $B'+2I$ yields
$$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'+2I; C,C'; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,C' ; z_1, z_2, z_3]\notag\\&\quad+ z_2{B}{C'}^{-1}\Big[\sum_{k=1}^{2}\Gamma_{\mathcal{A}}^{H}[(A; x), B+I, B'+kI; C, C'+I ; z_1, z_2, z_3]\Big]\notag\\
&\quad+ z_3{A}{C'}^{-1}\Big[\sum_{k=1}^{2}\Gamma_{\mathcal{A}}^{H}[(A+I; x), B, B'+kI; C, C'+I; z_1, z_2, z_3]\Big].
\label{3eqp2}\end{aligned}$$
Repeating this process $s$ times, we obtain (\[r11\]).
For the proof of (\[r12\]), replace the matrix $B'$ with $B'-I$ in (\[3eqp1\]). As $B'-I$ is invertible, this gives $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'- I; C,C'; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,C'; z_1, z_2, z_3]\notag\\&\quad- z_2{B}{C'}^{-1}\Big[\Gamma_{\mathcal{A}}^{H}[(A; x), B+I, B'; C, C'+I ; z_1, z_2, z_3]\Big]\notag\\
&\quad- z_3{A}{C'}^{-1}\Big[\Gamma_{\mathcal{A}}^{H}[(A+I; x), B, B'; C, C'+I; z_1, z_2, z_3]\Big].\label{o1}\end{aligned}$$ Iterating this relation, we get (\[r12\]). This completes the proof of Theorem \[h1\].
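The contiguous matrix relation (\[3eqp1\]) on which the recursion rests admits a quick numerical sanity check in the scalar case $r=1$ via truncated series. The sketch below uses arbitrary parameters and is an illustration only, not part of the proof:

```python
import numpy as np
from math import factorial
from scipy.special import gammaincc, poch

def upper_poch(a, x, n):
    return gammaincc(a + n, x) * poch(a, n)   # scalar [a; x]_n

def Gamma_AH(a, b, bp, c, cp, z1, z2, z3, x, terms=18):
    # truncated scalar-case series (2.2) for Gamma_A^H
    return sum(upper_poch(a, x, m + p) * poch(b, m + n) * poch(bp, n + p)
               / (poch(c, m) * poch(cp, n + p))
               * z1**m * z2**n * z3**p
               / (factorial(m) * factorial(n) * factorial(p))
               for m in range(terms) for n in range(terms) for p in range(terms))

a, b, bp, c, cp = 1.4, 0.8, 1.2, 2.1, 1.9
z1, z2, z3, x = 0.05, 0.08, 0.07, 1.0
# contiguous relation (3.x): Gamma[B'+I] = Gamma[B'] + z2 B/C' Gamma[...] + z3 A/C' Gamma[...]
lhs = Gamma_AH(a, b, bp + 1, c, cp, z1, z2, z3, x)
rhs = (Gamma_AH(a, b, bp, c, cp, z1, z2, z3, x)
       + z2 * b / cp * Gamma_AH(a, b + 1, bp + 1, c, cp + 1, z1, z2, z3, x)
       + z3 * a / cp * Gamma_AH(a + 1, b, bp + 1, c, cp + 1, z1, z2, z3, x))
assert np.isclose(lhs, rhs)
```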
Other recursion formulas for the matrix function $\Gamma_{\mathcal{A}}^{H}$ in the matrix parameter $B'$ can be obtained as follows:
Let $B'+sI$ be invertible for all integers $s\geq0$. Then the following recursion formula holds true for $\Gamma_{\mathcal{A}}^{H}$: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'+sI; C, C' ; z_1, z_2, z_3]\notag\\
&=\sum_{k_1+k_2\leq s}^{}{s\choose k_1, k_2}z_{2}^{k_1} z_{3}^{k_2}{(A)_{k_2}}(B)_{k_1}(C')^{-1}_{k_1+k_2} \,\notag\\
&\quad\times\Big[\Gamma_{\mathcal{A}}^{H}[(A+k_2 I; x), B+k_1I, B'+(k_1+k_2)I; C, C'+(k_1+k_2)I; z_1, z_2, z_3]\Big].\label{3eqh1}\end{aligned}$$ Furthermore, if $B'-kI$ are invertible for integers $k\leq s$, then $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'-sI; C, C'; z_1, z_2, z_3]\notag\\
&=\sum_{k_1+k_2\leq s}^{}{s\choose k_1, k_2}(-z_{2})^{k_1} (-z_{3})^{k_2}{(A)_{k_2}}(B)_{k_1}(C')^{-1}_{k_1+k_2}\,\notag\\
&\quad\times\Big[\Gamma_{\mathcal{A}}^{H}[(A+k_2 I; x), B+k_1I, B'; C, C'+(k_1+k_2 )I; z_1, z_2, z_3]\Big],\label{3eqh2}\end{aligned}$$ where $A$, $B$, $B'$, $C$ and $C'$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ and $C'+kI$ are invertible matrices for all integers $k\geq 0$.
The proof of (\[3eqh1\]) is based upon the principle of mathematical induction on $s\in\mathbb{N}$. For $s=1$, the result (\[3eqh1\]) reduces to the contiguous matrix relation (\[3eqp1\]) and is therefore true. Suppose (\[3eqh1\]) is true for $s=t$, that is, $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'+tI; C, C' ; z_1, z_2, z_3]\notag\\
&=\sum_{k_1+k_2\leq t}^{}{t\choose k_1, k_2}z_{2}^{k_1} z_{3}^{k_2}{(A)_{k_2}}(B)_{k_1}\,(C')^{-1}_{k_1+k_2}\,\notag\\
&\quad\times\Big[\Gamma_{\mathcal{A}}^{H}[(A+k_2 I; x), B+k_1I, B'+(k_1+k_2)I; C, C'+(k_1+k_2) I; z_1, z_2, z_3]\Big].
\label{3eqp3}
\end{aligned}$$ Replacing $B'$ with $B'+I$ in (\[3eqp3\]) and using the contiguous matrix relation (\[3eqp1\]), the equation becomes $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'+(t+1)I; C, C' ; z_1, z_2, z_3]\notag\\
&=\sum_{k_1+k_2\leq t}^{}{t\choose k_1, k_2}z_{2}^{k_1} z_{3}^{k_2}{(A)_{k_2}}(B)_{k_1}\,(C')^{-1}_{k_1+k_2}\,\notag\\
&\quad\times\Big[\Gamma_{\mathcal{A}}^{H}[(A+k_2 I; x), B+k_1I, B'+(k_1+k_2)I; C, C'+(k_1+k_2) I; z_1, z_2, z_3]\notag\\&\quad+z_2{(B+k_1I)}(C'+(k_1+k_2)I)^{-1}\notag\\&\quad\times \Gamma_{\mathcal{A}}^{H}[(A+k_2I;x), B+k_1I+I, B'+(k_1+k_2+1)I; C, C'+(k_1+k_2+1)I ; z_1, z_2, z_3]\notag\\
&\quad+z_3 {(A+k_2I)}(C'+(k_1+k_2)I)^{-1}\notag\\&\quad \times \Gamma_{\mathcal{A}}^{H}[(A+(k_2+1)I;x), B+k_1I, B'+(k_1+k_2+1)I; C, C'+(k_1+k_2+1)I; z_1, z_2, z_3]\Big].
\label{3eqp4}
\end{aligned}$$ Simplifying, (\[3eqp4\]) takes the form $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'+(t+1)I; C, C' ; z_1, z_2, z_3]\notag\\
&=\sum_{k_1+k_2\leq t}^{}{t\choose k_1, k_2}z_{2}^{k_1} z_{3}^{k_2}{(A)_{k_2}}(B)_{k_1}\,(C')^{-1}_{k_1+k_2}\,\notag\\
&\quad\times\Gamma_{\mathcal{A}}^{H}[(A+k_2 I; x), B+k_1I, B'+(k_1+k_2)I; C, C'+(k_1+k_2) I; z_1, z_2, z_3]\notag\\
&\quad +\sum_{k_1+k_2\leq t+1}^{}{t\choose k_1-1, k_2}z_{2}^{k_1} z_{3}^{k_2}{(A)_{k_2}}(B)_{k_1}\,(C')^{-1}_{k_1+k_2}\notag\\&\quad\times\Gamma_{\mathcal{A}}^{H}[(A+k_2 I; x), B+k_1I, B'+(k_1+k_2)I; C, C'+(k_1+k_2) I; z_1, z_2, z_3]\notag\\
&\quad +\sum_{k_1+k_2\leq t+1}^{}{t\choose k_1, k_2-1}z_{2}^{k_1} z_{3}^{k_2}{(A)_{k_2}}(B)_{k_1}\,(C')^{-1}_{k_1+k_2}\notag\\&\times\Gamma_{\mathcal{A}}^{H}[(A+k_2 I; x), B+k_1I, B'+(k_1+k_2)I; C, C'+(k_1+k_2) I; z_1, z_2, z_3].
\label{3eqp5}
\end{aligned}$$ Using Pascal’s identity in (\[3eqp5\]), we have $$\begin{aligned}
& \Gamma_{\mathcal{A}}^{H}[(A; x), B, B'+(t+1)I; C, C'; z_1, z_2, z_3]\notag\\
&=\sum_{k_1+k_2\leq t+1}^{}{t+1\choose k_1, k_2}z_{2}^{k_1} z_{3}^{k_2}{(A)_{k_2}}(B)_{k_1}\,(C')^{-1}_{k_1+k_2}\,\notag\\
&\quad\times\Gamma_{\mathcal{A}}^{H}[(A+k_2 I; x), B+k_1I, B'+(k_1+k_2)I; C, C'+(k_1+k_2) I; z_1, z_2, z_3].
\end{aligned}$$ This establishes (\[3eqh1\]) for $s=t+1$. Hence, by induction, (\[3eqh1\]) holds for all $s\in\mathbb{N}$. The second recursion formula (\[3eqh2\]) can be proved in a similar manner.
Let $C-sI$ and $C'-sI$ be invertible matrices for all integers $s\geq0$. Then the following recursion formulas hold true for $\Gamma_{\mathcal{A}}^{H}$: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C-sI, C'; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C' ; z_1, z_2, z_3]\notag\\&\quad+ z_1 AB\Big[\sum_{k=1}^{s} \Gamma_{\mathcal{A}}^{H}[(A+I; x), B+I, B'; C+(2-k)I, C' ; z_1, z_2, z_3]\notag\\&\quad\times{(C-kI)^{-1}(C-(k-1)I)^{-1}}\Big];\label{zz1}\end{aligned}$$ $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C'-sI ; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C'; z_1, z_2, z_3]\notag\\&\quad+ z_2 B B'\Big[\sum_{k=1}^{s} \Gamma_{\mathcal{A}}^{H}[(A; x), B+I, B'+I;\,C, C'+(2-k)I ; z_1, z_2, z_3]\,\notag\\
&\quad\times{(C'-kI)^{-1}(C'-(k-1)I)^{-1}}\Big]\notag\\
&+ z_3 A B'\Big[\sum_{k=1}^{s} \Gamma_{\mathcal{A}}^{H}[(A+I; x), B, B'+I;\,C, C'+(2-k)I ; z_1, z_2, z_3]\notag\\
&\quad\times{(C'-kI)^{-1}(C'-(k-1)I)^{-1}}\Big],\label{ph2}\end{aligned}$$ where $A$, $B$, $B'$, $C$ and $C'$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ and $C'+kI$ are invertible matrices for all integers $k\geq 0$.
Applying the integral formula (\[2eq6\]) of the incomplete Srivastava’s triple hypergeometric matrix function $\Gamma_{\mathcal{A}}^{H}$ and the following transformation $$\begin{aligned}
(C-I)^{-1}_{m}=(C)^{-1}_{m}\left[I+{m}{(C-I)^{-1}}\right],\end{aligned}$$ we can easily get the contiguous matrix relation $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C-I, C'; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C' ; z_1, z_2, z_3]\notag\\&\quad+ z_1 AB \,\Big[\Gamma_{\mathcal{A}}^{H}[(A+I; x), B+I, B'; C+I, C' ; z_1, z_2, z_3]\Big]{C^{-1}(C-I)^{-1}}.
\label{2eqp9}\end{aligned}$$ Applying this contiguous matrix relation twice on the incomplete Srivastava’s triple hypergeometric matrix function $\Gamma_{\mathcal{A}}^{H}$ with the matrix parameter $C-2I$, we arrive at $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C-2I, C' ; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C-I, C' ; z_1, z_2, z_3]\notag\\&\quad+ z_1 AB \,\Big[\Gamma_{\mathcal{A}}^{H}[(A+I; x), B+I, B'; C, C' ; z_1, z_2, z_3]\Big]{(C-I)^{-1}(C-2I)^{-1}}\notag\\
&=\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C'; z_1, z_2, z_3]\notag\\&\quad+ z_1 AB \Big[{\Gamma_{\mathcal{A}}^{H}[(A+I;x), B+I, B'; C+I, C'; z_1, z_2, z_3]}{C^{-1}(C-I)^{-1}}\notag\\
&\quad +\,{\Gamma_{\mathcal{A}}^{H}[(A+I; x), B+I, B'; C, C' ; z_1, z_2, z_3]}{(C-I)^{-1}(C-2I)^{-1}}\Big].
\label{2eqp19}\end{aligned}$$ Iterating this method $s$ times on the incomplete Srivastava’s triple hypergeometric matrix function $\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C-sI, C'; z_1, z_2, z_3]$, we get the recursion formula (\[zz1\]). Recursion formula (\[ph2\]) can be proved in an analogous manner.
\[r1\] Let $A$, $B$, $B'$, $C$ and $C'$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ and $C'+kI$ are invertible matrices for all integers $k\geq 0$. Then the following recurrence relations hold true: $$\begin{aligned}
&(C'-(B'+I))\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C'; z_1, z_2, z_3]\notag\\
&= (C'-I) \Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C'-I; z_1, z_2, z_3]\notag\\
&\quad- B' \,\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'+I; C, C'; z_1, z_2, z_3].\label{2eq15}\end{aligned}$$
Using the integral formula (\[2eq6\]) and the following contiguous matrix relation [@ZM] $$\begin{aligned}
&(C'-(B'+I)) _{1}F_{1}(B'; C'; z)\notag\\&=(C'-I) _{1}F_{1}(B'; C'-I; z)- B'\,{ _{1}F_{1}}(B'+I; C'; z), \end{aligned}$$ the following recurrence relation emerges: $$\begin{aligned}
&(C'-(B'+I))\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,C'; z_1, z_2, z_3]\notag\\&= \Gamma^{-1}(A)\Gamma^{-1}(B)\Big[\int_{x}^{\infty} e^{-t-s}t^{A-I}s^{B-I} \Big((C'-I) {_{1}F_{1}}[B'; C'-I; z_2s+z_3t]\notag\\&\quad -B' {_{1}F_{1}}(B'+I; C'; z_2s+z_3t)\Big) {_{0}F_{1}}(-; C; z_1st) dtds\Big],\notag\\
&= (C'-I) \Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C'-I; z_1, z_2, z_3]- B' \,\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'+I; C, C'; z_1, z_2, z_3].\end{aligned}$$ This establishes (\[2eq15\]) and completes the proof of Theorem \[r1\].
Let $A$, $B$, $B'$, $C$ and $C'$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ and $C'+kI$ are invertible for all integers $k\geq0$. Then the following recurrence relation for $\Gamma_{\mathcal{A}}^{H}$ in (\[2eq2\]) holds true: $$\begin{aligned}
&\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C' ; z_1, z_2, z_3]=\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C-I, C' ; z_1, z_2, z_3]\notag\\
&-ABC^{-1}(C-I)^{-1} \Gamma_{\mathcal{A}}^{H}[(A+I; x), B+I, B'; C+I, C'; z_1, z_2, z_3].\end{aligned}$$
Applying the contiguous relation for the function $_{0}F_{1}$ $$\begin{aligned}
{_{0}F_{1}}(-; C-I; z_1)- {_{0}F_{1}}(-; C; z_1)- z_1 C^{-1}(C-I)^{-1} \,{_{0}F_{1}}(-; C+I; z_1)=0\end{aligned}$$ in the integral representation (\[2eq6\]), we are led to the desired result.
Let $A$, $B$, $B'$, $C$ and $C'$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ and $C'+kI$ are invertible for all integers $k\geq0$. Then the following derivative formulas for $\Gamma_{\mathcal{A}}^{H}$ in (\[2eq2\]) hold true: $$\begin{aligned}
&\frac{\partial^{m}}{\partial z_{1}^{m}}\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C' ; z_1, z_2, z_3]\notag\\&= (A)_{m}(B)_{m} (C)^{-1}_{m}\Big[\Gamma_{\mathcal{A}}^{H}[(A+mI; x), B+mI, B'; C+mI, C' ; z_1, z_2, z_3]\Big];\label{d1}\\
&\frac{\partial^{m+n}}{\partial z_{1}^{m}\partial z_{2}^{n}}\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C'; z_1, z_2, z_3]\notag\\&= (A)_{m}(B)_{m+n}(B')_{n}(C)^{-1}_{m} (C')^{-1}_{n}\notag\\& \times \Big[\Gamma_{\mathcal{A}}^{H}[(A+mI; x), B+(m+n)I, B'+nI; C+mI,C'+nI ; z_1, z_2, z_3]\Big];\label{d2}\\
&\frac{\partial^{m+n+p}}{\partial z_{1}^{m}\partial z_{2}^{n}\partial z_{3}^{p}}\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C,C' ; z_1, z_2, z_3]\notag\\&= (A)_{m+p}(B)_{m+n}(B')_{n+p}(C)^{-1}_{m} (C')^{-1}_{n+p}\notag\\& \times \Big[\Gamma_{\mathcal{A}}^{H}[(A+(m+p)I; x), B+(m+n)I, B'+(n+p)I; C+mI,C'+(n+p)I; z_1, z_2, z_3]\Big].\label{d3}\end{aligned}$$
Differentiating both sides of (\[2eq6\]) formally with respect to $z_1$, we obtain: $$\begin{aligned}
&\frac{\partial}{\partial z_{1}}\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C' ; z_1, z_2, z_3]
=AB C^{-1}\Gamma^{-1}(A+I) \Gamma^{-1}(B+I)\notag\\&\times \Big[\int_{x}^{\infty} \int_{0}^{\infty}e^{-t-s}\, t^{(A+I)-I} s^{(B+I)-I} {_{0}F_{1}}(-; C+I; z_1 st) {_1F_{1}}(B'; C' ; z_2 s+ z_3 t) dt ds\Big].\label{d4}\end{aligned}$$ Now, from (\[2eq6\]) and (\[d4\]), we obtain $$\begin{aligned}
&\frac{\partial}{\partial z_{1}}\Gamma_{\mathcal{A}}^{H}[(A; x), B, B'; C, C' ; z_1, z_2, z_3]\notag\\&= AB C^{-1}\Big[\Gamma_{\mathcal{A}}^{H}[(A+I; x), B+I, B'; C+I, C' ; z_1, z_2, z_3]\Big],\end{aligned}$$ which is (\[d1\]) for $m=1$. The general result follows by the principle of mathematical induction on $m$.
This completes the proof of (\[d1\]). Formulas (\[d2\]) and (\[d3\]) can be proved in an analogous manner.
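The first-order case of (\[d1\]) can be cross-checked numerically in the scalar case $r=1$ against a central finite difference. A sketch with arbitrary parameters, intended only as an illustration of the formula:

```python
import numpy as np
from math import factorial
from scipy.special import gammaincc, poch

def upper_poch(a, x, n):
    return gammaincc(a + n, x) * poch(a, n)   # scalar [a; x]_n

def Gamma_AH(a, b, bp, c, cp, z1, z2, z3, x, terms=18):
    # truncated scalar-case series (2.2) for Gamma_A^H
    return sum(upper_poch(a, x, m + p) * poch(b, m + n) * poch(bp, n + p)
               / (poch(c, m) * poch(cp, n + p))
               * z1**m * z2**n * z3**p
               / (factorial(m) * factorial(n) * factorial(p))
               for m in range(terms) for n in range(terms) for p in range(terms))

a, b, bp, c, cp = 1.3, 0.9, 1.1, 2.0, 2.4
z1, z2, z3, x, h = 0.05, 0.1, 0.1, 1.0, 1e-6
numeric = (Gamma_AH(a, b, bp, c, cp, z1 + h, z2, z3, x)
           - Gamma_AH(a, b, bp, c, cp, z1 - h, z2, z3, x)) / (2 * h)
# formula (d1) with m = 1: d/dz1 Gamma_A^H = (a b / c) Gamma_A^H[(a+1; x), b+1, b'; c+1, c']
exact = a * b / c * Gamma_AH(a + 1, b + 1, bp, c + 1, cp, z1, z2, z3, x)
assert np.isclose(numeric, exact, rtol=1e-4)
```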
The incomplete Srivastava’s triple hypergeometric matrix functions $\gamma_{\mathcal{B}}^{H}$ and $\Gamma_{\mathcal{B}}^{H}$
============================================================================================================================
In this section, we introduce the incomplete Srivastava’s triple hypergeometric matrix functions $\gamma_{\mathcal{B}}^{H}$ and $\Gamma_{\mathcal{B}}^{H}$ as follows: $$\begin{aligned}
&\gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]\notag\\&=\sum_{m, n, p\geq 0}(A;x)_{m+p} (B)_{m+n} (B')_{n+p} (C)^{-1}_{m}(C')^{-1}_{n}(C'')^{-1}_{p}\frac{z_{1}^{m}z_2^{n}z_3^{p}}{m! n! p!},\label{s2eq1}\\
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]\notag\\&=\sum_{m, n, p\geq 0}[A;x]_{m+p} (B)_{m+n} (B')_{n+p} (C)^{-1}_{m}(C')^{-1}_{n}(C'')^{-1}_{p}\frac{z_{1}^{m}z_2^{n}z_3^{p}}{m! n! p!},\label{s2eq2}\end{aligned}$$ where $A$, $B$, $B'$, $C$, $C'$ and $C''$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$, $C'+kI$ and $C''+kI$ are invertible for all integers $k\geq0$.\
From (\[1eq9\]), we have the following decomposition formula $$\begin{aligned}
&\gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]+ \Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]\notag\\&= H_{\mathcal{B}}[A, B, B'; C,C', C''; z_1, z_2, z_3],\label{s2eq3}\end{aligned}$$ where $H_{\mathcal{B}}[A, B, B'; C,C', C''; z_1, z_2, z_3]$ is the Srivastava’s triple hypergeometric matrix function [@RD1].
It is evident that the special cases of (\[s2eq1\]) and (\[s2eq2\]) when $z_2 = 0$ reduce to the known incomplete families of the second Appell hypergeometric matrix functions [@A3]. Also, the special cases of (\[s2eq1\]) and (\[s2eq2\]) when $z_2 = 0$ and $z_3 = 0$ or $z_1 = 0$ are seen to yield the known incomplete families of Gauss hypergeometric matrix functions [@Ab].
To discuss the properties and characteristics of $\gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]$, it is sufficient to determine the properties of $\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]$ according to the decomposition formula (\[s2eq3\]). We omit the proofs of the theorems given below, as they run parallel to those of the preceding section.
Let $A$, $B$, $B'$, $C$, $C'$ and $C''$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$, $C'+kI$ and $C''+kI$ are invertible for all integers $k\geq0.$ Then the function defined by ${\mathcal{T}_2}={\mathcal{T}_2}(z_1, z_2, z_3) =\gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]+\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]$ satisfies the following system of partial differential equations: $$\begin{aligned}
&\Big[z_1\frac{\partial}{\partial z_1}{(z_1\frac{\partial}{\partial z_1}+C-I)- z_1(z_1\frac{\partial}{\partial z_1}+z_3\frac{\partial}{\partial z_3}+A)(z_1\frac{\partial}{\partial z_1}+z_2\frac{\partial}{\partial z_2}+B)}\Big] \mathcal{T}_2=O,\label{s2eq4}\\
&\Big[z_2\frac{\partial}{\partial z_2}{(z_2\frac{\partial}{\partial z_2}+C'-I)- z_2(z_2\frac{\partial}{\partial z_2}+z_1\frac{\partial}{\partial z_1}+B)(z_2\frac{\partial}{\partial z_2}+z_3\frac{\partial}{\partial z_3}+B')}\Big] \mathcal{T}_2=O,\label{sm1}\\
&\Big[z_3\frac{\partial}{\partial z_3}{(z_3\frac{\partial}{\partial z_3}+C''-I)- z_3(z_1\frac{\partial}{\partial z_1}+z_3\frac{\partial}{\partial z_3}+A)(z_2\frac{\partial}{\partial z_2}+z_3\frac{\partial}{\partial z_3}+B')}\Big] \mathcal{T}_2=O.\label{s2eq5}\end{aligned}$$
Let $A$, $B$, $B'$, $C$, $C'$ and $C''$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$, $C+kI$, $C'+kI$ and $C''+kI$ are invertible for all integers $k\geq0$. Then the following integral representation for $\Gamma_{\mathcal{B}}^{H}$ in (\[s2eq2\]) holds true: $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]= \Gamma^{-1}(A) \Gamma^{-1}(B)\notag\\&\times \Big[\int_{x}^{\infty} \int_{0}^{\infty}e^{-t-s}\, t^{A-I} s^{B-I} {_{0}F_{1}}(-; C; z_1 st) \Psi_{2}(B'; C', C''; z_2 s, z_3 t) dt ds\Big],\label{s2eq6}\end{aligned}$$where $\Psi_{2}$ is the Humbert’s matrix function defined by [@MA; @SZ] $$\begin{aligned}
\Psi_{2}(A; C, C'; z_1, z_2)=\sum_{m, n \geq 0}^{}(A)_{m+n} (C)^{-1}_{m} (C')^{-1}_{n}\frac{z_{1}^{m} z_{2}^{n}}{m! n!}.\end{aligned}$$
The following double integral representations hold true: $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C+I,C', C''; -z_1, z_2, z_3]\notag\\
&=z_{1}^{\frac{-C}{2}}\, \Gamma^{-1}(A) \Gamma^{-1}(B)\Gamma(C+I)\notag\\
&\times \int_{x}^{\infty}\int_{0}^{\infty} e^{-t-s} \,t^{A-\frac{C}{2}-I} s^{B-\frac{C}{2}-I} J_{C}(2\sqrt{z_1 st}) \Psi_{2}(B'; C', C''; z_2 s, z_3 t) dt ds\end{aligned}$$ and $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C+I,C', C''; z_1, z_2, z_3]\notag\\
&=z_{1}^{\frac{-C}{2}}\, \Gamma^{-1}(A) \Gamma^{-1}(B)\Gamma(C+I)\notag\\
&\times \int_{x}^{\infty}\int_{0}^{\infty} e^{-t-s} \,t^{A-\frac{C}{2}-I} s^{B-\frac{C}{2}-I} I_{C}(2\sqrt{z_1 st}) \Psi_{2}(B'; C', C''; z_2 s, z_3 t) dt ds,\end{aligned}$$ where $A$, $B$, $B'$, $C$, $C'$ and $C''$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$, $C+kI$, $C'+kI$ and $C''+kI$ are invertible for all integers $k\geq0$.
Let $A$, $B$, $B'$, $C$, $C'$ and $C''$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$, $B'+kI$, $C+kI$, $C'+kI$ and $C''+kI$ are invertible for all integers $k\geq0$. Then the following triple integral representation for $\Gamma_{\mathcal{B}}^{H}$ in (\[s2eq2\]) holds true: $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]\notag\\
&= \Gamma^{-1}(A) \Gamma^{-1}(B) \Gamma^{-1}(B')\Big[\int_{x}^{\infty}\int_{0}^{\infty} \int_{0}^{\infty}e^{-t-s-u}\, t^{A-I} s^{B-I} u^{B'-I} \notag\\& \times {_{0}F_{1}}(-; C; z_1 st) \,{_{0}F_{1}}(-; C'; z_2 us) \,{_{0}F_{1}}(-; C''; z_3 ut) dt ds du\Big].\label{sx1}\end{aligned}$$
The following triple integral representations hold true: $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C+I, C'+I, C''+I; -z_1, -z_2, -z_3]\notag\\
&=z_{1}^{\frac{-C}{2}} z_{2}^{\frac{-C'}{2}} z_{3}^{\frac{-C''}{2}}\, \Gamma^{-1}(A) \Gamma^{-1}(B)\Gamma^{-1}(B')\Gamma(C+I)\Gamma(C'+I)\Gamma(C''+I)\notag\\
&\times \int_{x}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty} e^{-t-s-u} \,t^{A-\frac{C}{2}-\frac{C''}{2}-I} s^{B-\frac{C}{2}-\frac{C'}{2}-I} u^{B'-\frac{C'}{2}-\frac{C''}{2}-I}\notag\\&\times J_{C}(2\sqrt{z_1 st}) J_{C'}(2\sqrt{z_2 su})J_{C''}(2\sqrt{z_3 ut})dt dsdu\end{aligned}$$ and $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C+I, C'+I, C''+I; z_1, z_2, z_3]\notag\\
&=z_{1}^{\frac{-C}{2}} z_{2}^{\frac{-C'}{2}} z_{3}^{\frac{-C''}{2}}\, \Gamma^{-1}(A) \Gamma^{-1}(B)\Gamma^{-1}(B')\Gamma(C+I)\Gamma(C'+I)\Gamma(C''+I)\notag\\
&\times \int_{x}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty} e^{-t-s-u} \,t^{A-\frac{C}{2}-\frac{C''}{2}-I} s^{B-\frac{C}{2}-\frac{C'}{2}-I} u^{B'-\frac{C'}{2}-\frac{C''}{2}-I}\notag\\&\times I_{C}(2\sqrt{z_1 st}) I_{C'}(2\sqrt{z_2 su})I_{C''}(2\sqrt{z_3 ut})dt dsdu,\end{aligned}$$ where $A$, $B$, $B'$, $C$, $C'$ and $C''$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$, $B'+kI$, $C+kI$, $C'+kI$ and $C''+kI$ are invertible for all integers $k\geq0$.
\[h1\]Let $B'+sI$ be invertible for all integers $s\geq0$. Then the following recursion formula holds true for $\Gamma_{\mathcal{B}}^{H}$: $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'+sI; C,C', C''; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]\notag\\&\quad+ z_2{B}{C'}^{-1}\Big[\sum_{k=1}^{s}\Gamma_{\mathcal{B}}^{H}[(A; x), B+I, B'+kI; C, C'+I, C''; z_1, z_2, z_3]\Big]\notag\\
&\quad+ z_3{A}{C''}^{-1}\Big[\sum_{k=1}^{s}\Gamma_{\mathcal{B}}^{H}[(A+I; x), B, B'+kI; C, C', C''+I; z_1, z_2, z_3]\Big].\label{sr11}\end{aligned}$$ Furthermore, if $B'-kI$ are invertible for integers $k\leq s$, then $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'-sI; C,C', C''; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]\notag\\&\quad- z_2{B}{C'}^{-1}\Big[\sum_{k=0}^{s-1}\Gamma_{\mathcal{B}}^{H}[(A; x), B+I, B'-kI; C, C'+I, C''; z_1, z_2, z_3]\Big]\notag\\
&\quad- z_3{A}{C''}^{-1}\Big[\sum_{k=0}^{s-1}\Gamma_{\mathcal{B}}^{H}[(A+I; x), B, B'-kI; C, C', C''+I; z_1, z_2, z_3]\Big],\label{sr12}\end{aligned}$$ where $A$, $B$, $B'$, $C$, $C'$ and $C''$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$, $C'+kI$ and $C''+kI$ are invertible matrices for all integers $k\geq 0$.
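As an independent sanity check, the $s=1$ case of the recursion (\[sr11\]) can be verified numerically in the scalar case $r=1$ from the defining triple series, assuming $[a;x]_k=\Gamma(a+k,x)/\Gamma(a)$. All parameter values below are illustrative choices:

```python
# Scalar (r = 1) spot check of the recursion formula (sr11) with s = 1,
# using truncated triple series and [a; x]_k = Gamma(a+k, x)/Gamma(a).
# All parameter values are illustrative.
from mpmath import mp, mpf, gammainc, gamma, rf, fac

mp.dps = 25
N = 20  # truncation order; the series tails are negligible for |z_i| <= 0.15

def GB(a, b, bp, c, cp, cpp, z1, z2, z3, x):
    """Truncated series for Gamma_B^H[(a; x), b, b'; c, c', c''; z1, z2, z3]."""
    up = [gammainc(a + k, x, mp.inf) / gamma(a) for k in range(2 * N)]
    rb, rbp = [rf(b, k) for k in range(2 * N)], [rf(bp, k) for k in range(2 * N)]
    rc = [rf(c, k) for k in range(N)]
    rcp, rcpp = [rf(cp, k) for k in range(N)], [rf(cpp, k) for k in range(N)]
    f = [fac(k) for k in range(N)]
    return sum(up[m + p] * rb[m + n] * rbp[n + p]
               / (rc[m] * rcp[n] * rcpp[p])
               * z1**m * z2**n * z3**p / (f[m] * f[n] * f[p])
               for m in range(N) for n in range(N) for p in range(N))

a, b, bp = mpf('1.3'), mpf('0.8'), mpf('1.7')
c, cp, cpp, x = mpf('2.2'), mpf('1.9'), mpf('2.4'), mpf('0.6')
z1, z2, z3 = mpf('0.1'), mpf('-0.1'), mpf('0.15')

lhs = GB(a, b, bp + 1, c, cp, cpp, z1, z2, z3, x)
rhs = (GB(a, b, bp, c, cp, cpp, z1, z2, z3, x)
       + z2 * b / cp * GB(a, b + 1, bp + 1, c, cp + 1, cpp, z1, z2, z3, x)
       + z3 * a / cpp * GB(a + 1, b, bp + 1, c, cp, cpp + 1, z1, z2, z3, x))
assert abs(lhs - rhs) < mpf('1e-15')
```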
Other recursion formulas for the matrix functions $\Gamma_{\mathcal{B}}^{H}$ involving the matrix parameter $B'$ can be obtained as follows:
Let $B'+sI$ be invertible for all integers $s\geq0$. Then the following recursion formula holds true for $\Gamma_{\mathcal{B}}^{H}$: $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'+sI; C, C', C''; z_1, z_2, z_3]\notag\\
&=\sum_{k_1+k_2\leq s}^{}{s\choose k_1, k_2}z_{2}^{k_1} z_{3}^{k_2}{(A)_{k_2}}(B)_{k_1}(C')^{-1}_{k_1} (C'')^{-1}_{k_2} \,\notag\\
&\quad\times\Big[\Gamma_{\mathcal{B}}^{H}[(A+k_2 I; x), B+k_1I, B'+(k_1+k_2)I; C, C'+k_1I, C''+k_2 I; z_1, z_2, z_3]\Big].\label{s3eqh1}\end{aligned}$$ Furthermore, if $B'-kI$ are invertible for integers $k\leq s$, then $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'-sI; C, C', C''; z_1, z_2, z_3]\notag\\
&=\sum_{k_1+k_2\leq s}^{}{s\choose k_1, k_2}(-z_{2})^{k_1} (-z_{3})^{k_2}{(A)_{k_2}}(B)_{k_1}(C')^{-1}_{k_1} (C'')^{-1}_{k_2}\,\notag\\
&\quad\times\Big[\Gamma_{\mathcal{B}}^{H}[(A+k_2 I; x), B+k_1I, B'; C, C'+k_1I, C''+k_2 I; z_1, z_2, z_3]\Big],\label{s3eqh2}\end{aligned}$$ where $A$, $B$, $B'$, $C$, $C'$ and $C''$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$, $C'+kI$ and $C''+kI$ are invertible matrices for all integers $k\geq 0$.
Let $C-sI$, $C'-sI$ and $C''-sI$ be invertible matrices for all integers $s\geq0$. Then the following recursion formulas hold true for $\Gamma_{\mathcal{B}}^{H}$: $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C-sI, C', C'';z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C, C', C''; z_1, z_2, z_3]\notag\\&\quad+ z_1 AB\Big[\sum_{k=1}^{s} \Gamma_{\mathcal{B}}^{H}[(A+I; x), B+I, B'; C+(2-k)I, C', C''; z_1, z_2, z_3]\notag\\&\quad\times{(C-kI)^{-1}(C-(k-1)I)^{-1}}\Big];\label{szz1}\end{aligned}$$ $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C, C'-sI, C''; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C, C', C''; z_1, z_2, z_3]\notag\\&\quad+ z_2 B B'\Big[\sum_{k=1}^{s} \Gamma_{\mathcal{B}}^{H}[(A; x), B+I, B'+I;\,C, C'+(2-k)I, C''; z_1, z_2, z_3]\,\notag\\
&\quad\times{(C'-kI)^{-1}(C'-(k-1)I)^{-1}}\Big];\label{sph2}\end{aligned}$$ $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C, C', C''-sI; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C, C', C''; z_1, z_2, z_3]\notag\\&\quad+ z_3\, A B'\Big[\sum_{k=1}^{s} \Gamma_{\mathcal{B}}^{H}[(A+I; x), B, B'+I;\,C, C', C''+(2-k)I; z_1, z_2, z_3]\,\notag\\
&\quad\times{(C''-kI)^{-1}(C''-(k-1)I)^{-1}}\Big],\label{sph3}\end{aligned}$$ where $A$, $B$, $B'$, $C$, $C'$ and $C''$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$, $C'+kI$ and $C''+kI$ are invertible matrices for all integers $k\geq 0$.
Let $A$, $B$, $B'$, $C$, $C'$ and $C''$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$, $C'+kI$ and $C''+kI$ are invertible for all integers $k\geq0$. Then the following derivative formulas for $\Gamma_{\mathcal{B}}^{H}$ in (\[s2eq2\]) hold true: $$\begin{aligned}
&\frac{\partial^{m}}{\partial z_{1}^{m}}\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]\notag\\&= (A)_{m}(B)_{m} (C)^{-1}_{m}\Big[\Gamma_{\mathcal{B}}^{H}[(A+mI; x), B+mI, B'; C+mI,C', C''; z_1, z_2, z_3]\Big];\label{sd1}\\
&\frac{\partial^{m+n}}{\partial z_{1}^{m}\partial z_{2}^{n}}\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]\notag\\&= (A)_{m}(B)_{m+n}(B')_{n}(C)^{-1}_{m} (C')^{-1}_{n}\notag\\& \Big[\Gamma_{\mathcal{B}}^{H}[(A+mI; x), B+(m+n)I, B'+nI; C+mI,C'+nI, C''; z_1, z_2, z_3]\Big];\label{sd2}\\
&\frac{\partial^{m+n+p}}{\partial z_{1}^{m}\partial z_{2}^{n}\partial z_{3}^{p}}\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]\notag\\&= (A)_{m+p}(B)_{m+n}(B')_{n+p}(C)^{-1}_{m} (C')^{-1}_{n} (C'')^{-1}_{p}\notag\\& \Big[\Gamma_{\mathcal{B}}^{H}[(A+(m+p)I; x), B+(m+n)I, B'+(n+p)I; C+mI,C'+nI, C''+pI; z_1, z_2, z_3]\Big].\label{sd3}\end{aligned}$$
Let $A$, $B$, $B'$, $C$, $C'$ and $C''$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$, $C'+kI$ and $C''+kI$ are invertible for all integers $k\geq0$. Then the following recurrence relation for $\Gamma_{\mathcal{B}}^{H}$ in (\[s2eq2\]) holds true: $$\begin{aligned}
&\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C,C', C''; z_1, z_2, z_3]=\Gamma_{\mathcal{B}}^{H}[(A; x), B, B'; C-I,C', C''; z_1, z_2, z_3]\notag\\
&-ABC^{-1}(C-I)^{-1} \Gamma_{\mathcal{B}}^{H}[(A+I; x), B+I, B'; C+I,C', C''; z_1, z_2, z_3].\end{aligned}$$
The incomplete Srivastava’s triple hypergeometric matrix functions $\gamma_{\mathcal{C}}^{H}$ and $\Gamma_{\mathcal{C}}^{H}$
============================================================================================================================
In this section, we introduce the incomplete Srivastava’s triple hypergeometric matrix functions $\gamma_{\mathcal{C}}^{H}$ and $\Gamma_{\mathcal{C}}^{H}$ as follows: $$\begin{aligned}
&\gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C; z_1, z_2, z_3]\notag\\&=\sum_{m, n, p\geq 0}(A;x)_{m+p} (B)_{m+n} (B')_{n+p} (C)^{-1}_{m+n+p}\frac{z_{1}^{m}z_2^{n}z_3^{p}}{m! n! p!},\label{ps2eq1}\\
&\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C; z_1, z_2, z_3]\notag\\&=\sum_{m, n, p\geq 0}[A;x]_{m+p} (B)_{m+n} (B')_{n+p} (C)^{-1}_{m+n+p}\frac{z_{1}^{m}z_2^{n}z_3^{p}}{m! n! p!},\label{ps2eq2}\end{aligned}$$ where $A$, $B$, $B'$ and $C$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ is invertible for all integers $k\geq0$.\
From (\[1eq9\]), we have the following decomposition formula $$\begin{aligned}
&\gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C ; z_1, z_2, z_3]+ \Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C ; z_1, z_2, z_3]\notag\\&= H_{\mathcal{C}}[A, B, B'; C ; z_1, z_2, z_3],\label{ps2eq3}\end{aligned}$$ where $H_{\mathcal{C}}[A, B, B'; C ; z_1, z_2, z_3]$ is the Srivastava’s triple hypergeometric matrix function [@RD1].
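In the scalar case $r=1$ the decomposition (\[ps2eq3\]) holds termwise, since the incomplete Pochhammer symbols satisfy $(a;x)_k + [a;x]_k = (a)_k$ with $(a;x)_k=\gamma(a+k,x)/\Gamma(a)$ and $[a;x]_k=\Gamma(a+k,x)/\Gamma(a)$. The following Python sketch (illustrative parameter values, not taken from the paper) confirms this numerically:

```python
# Scalar (r = 1) check of the decomposition formula (ps2eq3): it holds
# termwise because the incomplete Pochhammer symbols satisfy
# (a; x)_k + [a; x]_k = (a)_k, with (a; x)_k = gamma(a+k, x)/Gamma(a)
# and [a; x]_k = Gamma(a+k, x)/Gamma(a).  Parameter values are illustrative.
from mpmath import mp, mpf, gammainc, gamma, rf, fac

mp.dps = 25
N = 10  # the identity is exact termwise, so any truncation order works

def triple(poch_vals, b, bp, c, z1, z2, z3):
    rb, rbp = [rf(b, k) for k in range(2 * N)], [rf(bp, k) for k in range(2 * N)]
    rc = [rf(c, k) for k in range(3 * N)]
    f = [fac(k) for k in range(N)]
    return sum(poch_vals[m + p] * rb[m + n] * rbp[n + p] / rc[m + n + p]
               * z1**m * z2**n * z3**p / (f[m] * f[n] * f[p])
               for m in range(N) for n in range(N) for p in range(N))

a, b, bp, c, x = mpf('1.3'), mpf('0.7'), mpf('2.1'), mpf('3.4'), mpf('0.5')
z1, z2, z3 = mpf('0.1'), mpf('-0.2'), mpf('0.15')

lower = [gammainc(a + k, 0, x) / gamma(a) for k in range(2 * N)]       # (a; x)_k
upper = [gammainc(a + k, x, mp.inf) / gamma(a) for k in range(2 * N)]  # [a; x]_k
full = [rf(a, k) for k in range(2 * N)]                                # (a)_k

gamma_C = triple(lower, b, bp, c, z1, z2, z3)
Gamma_C = triple(upper, b, bp, c, z1, z2, z3)
H_C = triple(full, b, bp, c, z1, z2, z3)
assert abs(gamma_C + Gamma_C - H_C) < mpf('1e-20')
```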
It is evident that the special cases of (\[ps2eq1\]) and (\[ps2eq2\]) with $z_2 = 0$ reduce to the known incomplete families of the first Appell hypergeometric matrix functions [@A1]. Likewise, the special cases of (\[ps2eq1\]) and (\[ps2eq2\]) with $z_2 = 0$ and $z_3 = 0$, or with $z_1 = 0$, yield the known incomplete families of Gauss hypergeometric matrix functions [@Ab].
To discuss the properties and characteristics of $\gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C ; z_1, z_2, z_3]$, it is sufficient to determine the properties of $\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C ; z_1, z_2, z_3]$, in view of the decomposition formula (\[ps2eq3\]). We omit the proofs of the theorems given below.
Let $A$, $B$, $B'$ and $C$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ is invertible for all integers $k\geq0.$ Then the function defined by ${\mathcal{T}_3}={\mathcal{T}_3}(z_1, z_2, z_3) =\gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C ; z_1, z_2, z_3]+\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C ; z_1, z_2, z_3]$ satisfies the following system of partial differential equations: $$\begin{aligned}
&\Big[z_1\frac{\partial}{\partial z_1}{(z_1\frac{\partial}{\partial z_1}+z_2\frac{\partial}{\partial z_2}+z_3\frac{\partial}{\partial z_3}+C-I)- z_1(z_1\frac{\partial}{\partial z_1}+z_3\frac{\partial}{\partial z_3}+A)(z_1\frac{\partial}{\partial z_1}+z_2\frac{\partial}{\partial z_2}+B)}\Big] \mathcal{T}_3=O,\label{p2eq4}\\
&\Big[z_2\frac{\partial}{\partial z_2}{(z_1\frac{\partial}{\partial z_1}+z_2\frac{\partial}{\partial z_2}+z_3\frac{\partial}{\partial z_3}+C-I)- z_2(z_2\frac{\partial}{\partial z_2}+z_1\frac{\partial}{\partial z_1}+B)(z_2\frac{\partial}{\partial z_2}+z_3\frac{\partial}{\partial z_3}+B')}\Big] \mathcal{T}_3=O,\label{pm1}\\
&\Big[z_3\frac{\partial}{\partial z_3}{(z_1\frac{\partial}{\partial z_1}+z_2\frac{\partial}{\partial z_2}+z_3\frac{\partial}{\partial z_3}+C-I)- z_3(z_1\frac{\partial}{\partial z_1}+z_3\frac{\partial}{\partial z_3}+A)(z_2\frac{\partial}{\partial z_2}+z_3\frac{\partial}{\partial z_3}+B')}\Big] \mathcal{T}_3=O.\label{ps2eq5}\end{aligned}$$
Let $A$, $B$, $B'$ and $C$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$ and $C+kI$ are invertible for all integers $k\geq0$. Then the following integral representation for $\Gamma_{\mathcal{C}}^{H}$ in (\[ps2eq2\]) holds true: $$\begin{aligned}
&\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C ; z_1, z_2, z_3]= \Gamma^{-1}(A) \Gamma^{-1}(B)\notag\\&\times \Big[\int_{x}^{\infty} \int_{0}^{\infty}e^{-t-s}\, t^{A-I} s^{B-I} \Phi_{3}(B'; C; z_2 s+ z_3 t, z_1 st) dt ds\Big],\label{ps2eq6}\end{aligned}$$where $\Phi_{3}$ is the Humbert’s matrix function defined by [@MA; @SZ] $$\begin{aligned}
\Phi_{3}(B'; C ; z_1, z_2)=\sum_{m, n \geq 0}^{}(B')_{m} (C)^{-1}_{m+n} \frac{z_{1}^{m} z_{2}^{n}}{m! n!}.\end{aligned}$$
Let $A$, $B$, $B'$ and $C$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $A+kI$, $B+kI$, $B'+kI$ and $C+kI$ are invertible for all integers $k\geq0$. Then the following triple integral representation for $\Gamma_{\mathcal{C}}^{H}$ in (\[ps2eq2\]) holds true: $$\begin{aligned}
&\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C ; z_1, z_2, z_3]\notag\\
&= \Gamma^{-1}(A) \Gamma^{-1}(B) \Gamma^{-1}(B')\Big[\int_{x}^{\infty}\int_{0}^{\infty} \int_{0}^{\infty}e^{-t-s-u}\, t^{A-I} s^{B-I} u^{B'-I} \notag\\& \times {_{0}F_{1}}(-; C ; z_1 st+z_2 us+z_3ut) dt ds du\Big].\label{psx1}\end{aligned}$$
\[ph1\]Let $B'+sI$ be invertible for all integers $s\geq0$. Then the following recursion formula holds true for $\Gamma_{\mathcal{C}}^{H}$: $$\begin{aligned}
&\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'+sI; C ; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C ; z_1, z_2, z_3]\notag\\&\quad+ z_2{B}{C}^{-1}\Big[\sum_{k=1}^{s}\Gamma_{\mathcal{C}}^{H}[(A; x), B+I, B'+kI; C+I ; z_1, z_2, z_3]\Big]\notag\\
&\quad+ z_3{A}{C}^{-1}\Big[\sum_{k=1}^{s}\Gamma_{\mathcal{C}}^{H}[(A+I; x), B, B'+kI; C+I; z_1, z_2, z_3]\Big].\label{psr11}\end{aligned}$$ Furthermore, if $B'-kI$ are invertible for integers $k\leq s$, then $$\begin{aligned}
&\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'-sI; C ; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C; z_1, z_2, z_3]\notag\\&\quad- z_2{B}{C}^{-1}\Big[\sum_{k=0}^{s-1}\Gamma_{\mathcal{C}}^{H}[(A; x), B+I, B'-kI; C+I ; z_1, z_2, z_3]\Big]\notag\\
&\quad- z_3{A}{C}^{-1}\Big[\sum_{k=0}^{s-1}\Gamma_{\mathcal{C}}^{H}[(A+I; x), B, B'-kI; C+I; z_1, z_2, z_3]\Big],\label{psr12}\end{aligned}$$ where $A$, $B$, $B'$ and $C$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ is invertible for all integers $k\geq 0$.
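The $s=1$ case of (\[psr11\]) can likewise be verified numerically in the scalar case $r=1$ from the defining series, assuming $[a;x]_k=\Gamma(a+k,x)/\Gamma(a)$; all parameter values in the sketch below are illustrative:

```python
# Scalar (r = 1) spot check of the recursion formula (psr11) with s = 1,
# using truncated triple series and the scalar incomplete Pochhammer symbol
# [a; x]_k = Gamma(a+k, x)/Gamma(a).  Parameter values are illustrative.
from mpmath import mp, mpf, gammainc, gamma, rf, fac

mp.dps = 25
N = 20  # truncation order; the series tails are negligible for |z_i| <= 0.15

def GC(a, b, bp, c, z1, z2, z3, x):
    """Truncated series for Gamma_C^H[(a; x), b, b'; c; z1, z2, z3]."""
    up = [gammainc(a + k, x, mp.inf) / gamma(a) for k in range(2 * N)]
    rb, rbp = [rf(b, k) for k in range(2 * N)], [rf(bp, k) for k in range(2 * N)]
    rc = [rf(c, k) for k in range(3 * N)]
    f = [fac(k) for k in range(N)]
    return sum(up[m + p] * rb[m + n] * rbp[n + p] / rc[m + n + p]
               * z1**m * z2**n * z3**p / (f[m] * f[n] * f[p])
               for m in range(N) for n in range(N) for p in range(N))

a, b, bp, c, x = mpf('1.3'), mpf('0.8'), mpf('1.7'), mpf('2.2'), mpf('0.6')
z1, z2, z3 = mpf('0.1'), mpf('-0.1'), mpf('0.15')

lhs = GC(a, b, bp + 1, c, z1, z2, z3, x)
rhs = (GC(a, b, bp, c, z1, z2, z3, x)
       + z2 * b / c * GC(a, b + 1, bp + 1, c + 1, z1, z2, z3, x)
       + z3 * a / c * GC(a + 1, b, bp + 1, c + 1, z1, z2, z3, x))
assert abs(lhs - rhs) < mpf('1e-15')
```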
Other recursion formulas for the matrix functions $\Gamma_{\mathcal{C}}^{H}$ involving the matrix parameter $B'$ can be obtained as follows:
Let $B'+sI$ be invertible for all integers $s\geq0$. Then the following recursion formula holds true for $\Gamma_{\mathcal{C}}^{H}$: $$\begin{aligned}
&\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'+sI; C ; z_1, z_2, z_3]\notag\\
&=\sum_{k_1+k_2\leq s}^{}{s\choose k_1, k_2}z_{2}^{k_1} z_{3}^{k_2}{(A)_{k_2}}(B)_{k_1}(C)^{-1}_{k_1+k_2} \,\notag\\
&\quad\times\Big[\Gamma_{\mathcal{C}}^{H}[(A+k_2 I; x), B+k_1I, B'+(k_1+k_2)I; C+(k_1+k_2)I; z_1, z_2, z_3]\Big].\label{ps3eqh1}\end{aligned}$$ Furthermore, if $B'-kI$ are invertible for integers $k\leq s$, then $$\begin{aligned}
&\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'-sI; C ; z_1, z_2, z_3]\notag\\
&=\sum_{k_1+k_2\leq s}^{}{s\choose k_1, k_2}(-z_{2})^{k_1} (-z_{3})^{k_2}{(A)_{k_2}}(B)_{k_1}(C)^{-1}_{k_1+k_2}\,\notag\\
&\quad\times\Big[\Gamma_{\mathcal{C}}^{H}[(A+k_2 I; x), B+k_1I, B'; C+(k_1+k_2)I ; z_1, z_2, z_3]\Big],\label{ps3eqh2}\end{aligned}$$ where $A$, $B$, $B'$ and $C$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ is invertible for all integers $k\geq 0$.
Let $C-sI$ be invertible for all integers $s\geq0$. Then the following recursion formulas hold true for $\Gamma_{\mathcal{C}}^{H}$: $$\begin{aligned}
&\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C-sI; z_1, z_2, z_3]\notag\\
&=\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C; z_1, z_2, z_3]\notag\\&\quad+ z_1 AB\Big[\sum_{k=1}^{s} \Gamma_{\mathcal{C}}^{H}[(A+I; x), B+I, B'; C+(2-k)I; z_1, z_2, z_3]\notag\\&\quad\times{(C-kI)^{-1}(C-(k-1)I)^{-1}}\Big]\notag\\&\quad+ z_2 B B'\Big[\sum_{k=1}^{s} \Gamma_{\mathcal{C}}^{H}[(A; x), B+I, B'+I; C+(2-k)I ; z_1, z_2, z_3]\,\notag\\
&\quad\times{(C-kI)^{-1}(C-(k-1)I)^{-1}}\Big]\notag\\
&+ z_3 A B'\Big[\sum_{k=1}^{s} \Gamma_{\mathcal{C}}^{H}[(A+I; x), B, B'+I; C+(2-k)I ; z_1, z_2, z_3]\notag\\
&\quad\times{(C-kI)^{-1}(C-(k-1)I)^{-1}}\Big],\label{pph2}\end{aligned}$$ where $A$, $B$, $B'$ and $C$ are positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ is invertible for all integers $k\geq 0$.
Let $A$, $B$, $B'$ and $C$ be positive stable and commutative matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$ is invertible for all integers $k\geq0$. Then the following derivative formulas for $\Gamma_{\mathcal{C}}^{H}$ in (\[ps2eq2\]) hold true: $$\begin{aligned}
&\frac{\partial^{m}}{\partial z_{1}^{m}}\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C; z_1, z_2, z_3]\notag\\&= (A)_{m}(B)_{m} (C)^{-1}_{m}\Big[\Gamma_{\mathcal{C}}^{H}[(A+mI; x), B+mI, B'; C+mI; z_1, z_2, z_3]\Big];\label{psd1}\\
&\frac{\partial^{m+n}}{\partial z_{1}^{m}\partial z_{2}^{n}}\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C ; z_1, z_2, z_3]\notag\\&= (A)_{m}(B)_{m+n}(B')_{n}(C)^{-1}_{m+n}\notag\\& \Big[\Gamma_{\mathcal{C}}^{H}[(A+mI; x), B+(m+n)I, B'+nI; C+(m+n)I; z_1, z_2, z_3]\Big];\label{psd2}\\
&\frac{\partial^{m+n+p}}{\partial z_{1}^{m}\partial z_{2}^{n}\partial z_{3}^{p}}\Gamma_{\mathcal{C}}^{H}[(A; x), B, B'; C ; z_1, z_2, z_3]\notag\\&= (A)_{m+p}(B)_{m+n}(B')_{n+p}(C)^{-1}_{m+n+p}\notag\\& \Big[\Gamma_{\mathcal{C}}^{H}[(A+(m+p)I; x), B+(m+n)I, B'+(n+p)I; C+(m+n+p)I; z_1, z_2, z_3]\Big].\label{psd3}\end{aligned}$$
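The first of the derivative formulas above (the $\partial/\partial z_1$ case with $m=1$) admits a simple numerical spot check in the scalar case $r=1$, comparing a numerical derivative of the truncated series with the right-hand side. The sketch assumes $[a;x]_k=\Gamma(a+k,x)/\Gamma(a)$ and uses illustrative parameter values:

```python
# Scalar (r = 1) spot check of the first derivative formula above
# (the d/dz1 case with m = 1), comparing a numerical derivative of the
# truncated series with the right-hand side.
# Assumes [a; x]_k = Gamma(a+k, x)/Gamma(a); parameter values are illustrative.
from mpmath import mp, mpf, gammainc, gamma, rf, fac, diff

mp.dps = 25
N = 18  # truncation order; the series tails are negligible for |z_i| <= 0.15

def GC(a, b, bp, c, z1, z2, z3, x):
    """Truncated series for Gamma_C^H[(a; x), b, b'; c; z1, z2, z3]."""
    up = [gammainc(a + k, x, mp.inf) / gamma(a) for k in range(2 * N)]
    rb, rbp = [rf(b, k) for k in range(2 * N)], [rf(bp, k) for k in range(2 * N)]
    rc = [rf(c, k) for k in range(3 * N)]
    f = [fac(k) for k in range(N)]
    return sum(up[m + p] * rb[m + n] * rbp[n + p] / rc[m + n + p]
               * z1**m * z2**n * z3**p / (f[m] * f[n] * f[p])
               for m in range(N) for n in range(N) for p in range(N))

a, b, bp, c, x = mpf('1.3'), mpf('0.8'), mpf('1.7'), mpf('2.2'), mpf('0.6')
z1, z2, z3 = mpf('0.1'), mpf('-0.1'), mpf('0.15')

lhs = diff(lambda z: GC(a, b, bp, c, z, z2, z3, x), z1)
rhs = a * b / c * GC(a + 1, b + 1, bp, c + 1, z1, z2, z3, x)
assert abs(lhs - rhs) < mpf('1e-10')
```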
The results in this article generalize the corresponding results of Choi *et al.* [@jc; @jc1] to the matrix case. For $x=0$, the incomplete Srivastava’s triple hypergeometric matrix functions reduce to the Srivastava’s triple hypergeometric matrix functions. Therefore, on setting $x=0$, the results obtained here for the incomplete Srivastava’s triple hypergeometric matrix functions reduce to the corresponding results for the Srivastava’s triple hypergeometric matrix functions.
[**Acknowledgements**]{} The author would like to thank the referees for their valuable comments and suggestions which have led to a better presentation of the paper. This work is inspired by and is dedicated to Professor Vivek Sahai, Department of Mathematics and Astronomy, University of Lucknow.
[99]{} M. Abdalla, On the incomplete hypergeometric matrix functions, [*Ramanujan J.*]{} 43 (2017), 663-678.
M. Abdalla, Special matrix functions: characteristics, achievements and future directions, *Linear Multilinear Algebra*, 68 (2020), 1-28.
A. Bakhet, Y. Jiao, F. He, On the Wright hypergeometric matrix functions and their fractional calculus, [*Integral Transforms Spec. Funct.*]{}, 30 (2019), 138-156.
A. Cetinkaya, The incomplete second Appell hypergeometric functions, [*Appl. Math. Comput.*]{} 219 (2013), 8332-8337.
M. A. Chaudhry, S. M. Zubair, Extended Incomplete Gamma Functions with Applications, [*J. Math. Anal. Appl.*]{} 274 (2002), 725-745.
J. Choi, R. Parmar, P. Chopra, The incomplete Srivastava’s triple hypergeometrics $\gamma^{H}_{A}$ and $\Gamma^{H}_{A}$, [*Miskolc Mathematical Notes*]{} 19 (2018), 191-200.
J. Choi, R. Parmar, P. Chopra, The incomplete Srivastava’s triple hypergeometrics $\gamma^{H}_{B}$ and $\Gamma^{H}_{B}$, [*Filomat*]{} 30 (2016), 1779-1787.
J. Choi, R. K. Parmar, P. Chopra, The Incomplete Lauricella and First Appell Functions and Associated Properties, [*Honam Math. J.*]{} 36 (2014), 531-542.
E. Defez, L. Jódar, Chebyshev matrix polynomials and second order matrix differential equations. [*Utilitas Math.*]{} 61 (2002), 107-123.
E. Defez, L. Jódar, A. Law, Jacobi matrix differential equation, polynomial solutions, and their properties. [*Comput Math Appl.*]{} 48 (2004), 789-803.
A. J. Duran, W. Van Assche, Orthogonal matrix polynomials and higher order recurrence relations. [*Linear Algebra Appl.*]{} 219 (1995), 261-280.
R. Dwivedi, V. Sahai, On the hypergeometric matrix functions of several variables, [*J. Math. Phys.*]{} 59 (2018), 023505, 15pp.
J. S. Geronimo, Scattering theory and matrix orthogonal polynomials on the real line, [*Circuits Syst Signal Process.*]{} 1 (1982), 471-495.
G. H. Golub, C. F. Van Loan, [*Matrix Computations*]{}, The Johns Hopkins Press Ltd, London, 1996.
L. Jódar, R. Company, E. Navarro, Laguerre matrix polynomials and systems of second order differential equations, [*Appl. Numer. Math.*]{} 15 (1994), 53-63.
L. Jódar, R. Company, E. Navarro, Bessel matrix functions: explicit solution of coupled Bessel type equations, [*Util. Math.*]{} 46 (1994), 129-141.
L. Jódar, J.C. Cortés, On the hypergeometric matrix function, [*J. Comput. Appl. Math.*]{} 99 (1998), 205-217.
L. Jódar, J.C. Cortés, Some properties of gamma and beta matrix functions, [*Appl. Math. Lett.*]{} 11 (1998), 89-93.
L. Jódar, J. Sastre, On the Laguerre matrix polynomials, [*Util. Math.*]{} 53 (1998), 37-48.
Z. M. Kishka, A. Shehata, M. Abul-Dahab, A new extension of hypergeometric matrix functions. [*Adv. Appl. Math. Sci.*]{} 10 (2011), 349-371.
S. Z. Rida, M. Abul-Dahab, M. A. Saleem, On Humbert matrix function $\Psi_{1}(A, B; C, C'; z, w)$ of two complex variables under differential operator, [*Int. J. Ind. Math.*]{} 32 (2010), 167-179.
J. Sastre, L. Jódar, Asymptotics of the modified Bessel and incomplete gamma matrix functions, [*Appl. Math. Lett.*]{} 16 (2003), 815-820.
V. Sahai, A. Verma, Generalized Incomplete Pochhammer Symbols and Their Applications to Hypergeometric Functions, [*Kyungpook Math. J.*]{} 58 (2018), 67-79.
H. M. Srivastava, Hypergeometric functions of three variables, *Ganita* 15 (1964), 97–108.
H. M. Srivastava, Some integrals representing triple hypergeometric functions, *Rend. Circ. Mat. Palermo* 16 (1967), 99-115.
H.M. Srivastava, M.A. Chaudhary, R.P. Agarwal, The incomplete Pochhammer symbols and their applications to hypergeometric and related functions, [*Integral Transforms Spec. Funct.*]{} [23]{} (2012), 659-683.
H.M. Srivastava, P.W. Karlsson, [*Multiple Gaussian Hypergeometric Series*]{}, Ellis Horwood Series: Mathematics and its Applications. Ellis Horwood Ltd., Chichester; Halsted Press \[John Wiley & Sons, Inc.\], New York, 1985.
H.M. Srivastava, H. L. Manocha, [*A Treatise on Generating Functions*]{}, Halsted Press (Ellis Horwood Limited, Chichester), John Wiley and Sons, New York, Chichester, Brisbane and Toronto, 1984.
H.M. Srivastava, R.K. Saxena, R.K. Parmar, Some Families of the Incomplete H-Functions and the Incomplete $\overline{H}$- Functions and Associated Integral Transforms and Operators of Fractional Calculus with Applications, *Russ. J. Math. Phys.* 25 (2018), 116-138.
R. Srivastava, Some Properties of a Family of Incomplete Hypergeometric Functions, [*Russian J. Math. Phys.*]{} 20 (2013), 121-128.
R. Srivastava, N. E. Cho, Generating Functions for a Certain Class of Incomplete Hypergeometric Polynomials, [*Appl. Math. Comput.*]{} 219 (2012), 3219-3225.
A. Verma, On the incomplete first Appell hypergeometric matrix functions $\gamma_{1}$ and $\Gamma_{1}$, *Ramanujan J.*, Communicated (2019).
A. Verma, On the incomplete fourth Appell hypergeometric matrix functions $\gamma_{4}$ and $\Gamma_{4}$, [*Asian-Eur. J. Math.*]{}, Communicated (2019).
A. Verma, S. Yadav, On the incomplete second Appell hypergeometric matrix functions, *Linear Multilinear Algebra*, Accepted (2019) DOI:10.1080/03081087.2019.1640178.
[^1]: Corresponding author
---
abstract: 'String inspired models can serve as potential candidates to replace general relativity (GR) in the high energy/high curvature regime where quantum gravity is expected to play a vital role. Such models not only subsume the ultraviolet nature of gravity but also exhibit promising prospects in resolving issues like dark matter and dark energy, which cannot be adequately addressed within the framework of GR. The Einstein-Maxwell dilaton-axion (EMDA) theory, which is central to this work is one such string inspired model arising in the low energy effective action of the heterotic string theory with interesting implications in inflationary cosmology and in the late time acceleration of the universe. It is therefore important to survey the role of such a theory in explaining astrophysical observations, e.g. the continuum spectrum of black holes which are expected to hold a wealth of information regarding the background metric. The Kerr-Sen spacetime corresponds to the exact, stationary and axi-symmetric black hole solution in EMDA gravity, possessing dilatonic charge and angular momentum originating from the axionic field. In this work, we compute the theoretical spectrum from the accretion disk around quasars in the Kerr-Sen background assuming the thin accretion disk model due to Novikov & Thorne. This is then used to evaluate the theoretical estimates of optical luminosity for a sample of eighty Palomar-Green quasars which are subsequently compared with the available observations. Our analysis based on error estimators like the $\chi^2$, the Nash-Sutcliffe efficiency, the index of agreement etc., indicates that black holes carrying non-existent or weak dilaton charges (viz, $0\lesssim r_2\lesssim 0.1$) are observationally more favored. The spins associated with the quasars are also estimated. Interestingly, a similar conclusion has been independently achieved by studying the observed jet power and the radiative efficiencies of microquasars. 
The implications are discussed.'
author:
- 'Indrani Banerjee[^1], Bhaswati Mandal[^2] and Soumitra SenGupta[^3]\'
bibliography:
- 'accretion.bib'
- 'KN-ED.bib'
- 'Brane.bib'
- 'Black\_Hole\_Shadow.bib'
- 'EMDA-Jet.bib'
- 'Gravity\_1\_full.bib'
- 'Gravity\_2\_full.bib'
- 'Gravity\_3\_partial.bib'
- 'axion.bib'
date:
title: 'Implications of Einstein-Maxwell dilaton-axion gravity from the black hole continuum spectrum'
---
Introduction {#Intro}
============
The remarkable agreement of general relativity (GR) with a host of experimental tests [@Will:2005yc; @Will:1993ns; @Will:2005va; @Berti:2015itd] makes it the most successful theory of gravity till date. With the advent of advanced ground-based and space-based missions, the predictions of GR, e.g. the presence of black holes and gravitational waves [@Abbott:2017vtc; @TheLIGOScientific:2016pea; @Abbott:2016nmj; @TheLIGOScientific:2016src; @Abbott:2016blz], have received ever-increasing observational confirmation. Yet, the quest for a more complete theory of gravity continues, as GR is marred with the black hole and big-bang singularities [@Penrose:1964wq; @Hawking:1976ra; @Christodoulou:1991yfa] and the quantum nature of gravity continues to be elusive [@Rovelli:1996dv; @Dowker:2005tz; @Ashtekar:2006rx]. On the observational front, GR falls short in resolving the nature of dark matter and dark energy [@Bekenstein:1984tv; @Perlmutter:1998np; @Riess:1998cb], often invoked to explain the galactic rotation curves and the accelerated expansion of the universe, respectively.
This has given birth to a wide variety of alternative gravity models, e.g. higher curvature gravity [@Nojiri:2003ft; @Nojiri:2006gh; @Capozziello:2006dj; @Lanczos:1932zz; @Lanczos:1938sf; @Lovelock:1971yv; @Padmanabhan:2013xyr], extra-dimensional models [@Shiromizu:1999wj; @Dadhich:2000am; @Harko:2004ui; @Carames:2012gr] and the scalar-tensor/scalar-vector-tensor theories of gravity [@Horndeski:1974wa; @Sotiriou:2013qea; @Babichev:2016rlq; @Charmousis:2015txa] which can potentially address the deficiencies of GR while reducing to GR in the low energy limit. Among the various alternatives to GR, string theory provides an interesting theoretical framework for quantum gravity and force unification. It is often said that it is difficult to detect any signature of string theory in the low energy regime. In this context, it is important to note that string theory in itself is not a model but provides a framework for building string inspired 4D models which can be confronted with the available observations [@Cicoli:2020bao].
This raises the question of whether we can discover some footprints of a stringy model in low energy observations. With this aim in mind, in this work we explore the observational signatures of the Einstein-Maxwell dilaton-axion (EMDA) gravity which arises in the low energy effective action of superstring theories [@Sen:1992ua] when the ten dimensional heterotic string theory is compactified on a six dimensional torus $T^6$. Such a scenario consists of $N=4$, $d=4$ supergravity coupled to $N=4$ super Yang-Mills theory and can be appropriately truncated to a pure supergravity theory exhibiting $S$ and $T$ dualities. The bosonic sector of this $N=4$, $d=4$ supergravity coupled to the $U(1)$ gauge field is known as the Einstein-Maxwell dilaton-axion (EMDA) gravity [@Rogatko:2002qe] which provides a simple framework to study classical solutions. Such a theory comprises the scalar dilaton field and the pseudo-scalar axion field coupled to the metric and the Maxwell field. The dilaton and the axion fields are inherited from string compactifications and have interesting implications in the late time acceleration of the universe and inflationary cosmology [@Sonner:2006yn; @Catena:2007jf]. It is therefore worthwhile to explore the role of such a theory in astrophysical observations. This has been explored extensively in the past [@Gyulchev:2006zg; @An:2017hby; @Younsi:2016azx; @Hioki:2008zw; @Narang:2020bgo] in the context of null geodesics, strong gravitational lensing and black hole shadow.
Since deviation from Einstein gravity is expected in the high curvature domain, the near horizon regime of black holes seems to be the ideal astrophysical laboratory to test these models against observations. In particular, the continuum spectrum emitted from the accretion disk around black holes bears the imprints of the background spacetime and hence can be used as a promising probe to test the nature of strong gravity. This requires one to look for black hole solutions of these string inspired low-energy effective theories. Fortunately, there exist various classes of black hole solutions bearing non-trivial charges associated with the dilaton and the anti-symmetric tensor gauge fields [@Gibbons:1987ps; @Garfinkle:1990qj; @Horowitz:1991cd; @Kallosh:1993yg]. The stationary and axi-symmetric black hole solution in EMDA gravity corresponds to the charged, rotating Kerr-Sen metric [@Sen:1992ua], where the electric charge stems from the axion-photon coupling and not from in-falling charged particles. Also, the axionic field renders angular momentum to such black holes. Testing the impact of such a background on the observed spectrum is important since this provides a testbed for string theory.
In this work we compute the continuum spectrum from the accretion disk assuming the spacetime around the black holes to be governed by the Kerr-Sen metric. The presence of dilatonic and axionic charges modify the continuum spectrum from the Kerr scenario. The theoretical spectrum thus computed is compared with the optical data of eighty Palomar Green quasars which allows us to discern the observationally favored magnitude of the dilaton parameter and also estimate the spins of the quasars. We compute several error estimators, e.g. chi-squared, Nash-Sutcliffe efficiency and index of agreement etc. to arrive at our conclusions.
The paper is organised as follows: In \[S2\] we describe the Einstein-Maxwell dilaton-axion (EMDA) theory and the Kerr-Sen solution. \[S3\] is dedicated to computing the theoretical spectrum from the accretion disk in the Kerr-Sen background. The theoretical spectrum is subsequently compared with the optical observations of eighty Palomar Green quasars and the error estimators are computed in \[S4\]. Finally, we conclude with a summary of our findings with some scope for future work in \[S5\].
Notations and Conventions: Throughout this paper, we use (-,+,+,+) as the metric convention and will work with geometrized units taking $G=c=1$.
Einstein-Maxwell dilaton-axion gravity: A brief overview {#S2}
=========================================================
The Einstein-Maxwell dilaton-axion (EMDA) gravity [@Sen:1992ua; @Rogatko:2002qe] is obtained when the ten dimensional heterotic string theory is compactified on a six dimensional torus $T^6$. The action $\mathcal{S}$ associated with EMDA gravity comprises couplings between the metric $g_{\mu\nu}$, the $U(1)$ gauge field $A_\mu$, the dilaton field $\chi$ and the third rank anti-symmetric tensor field $\mathcal{H}_{\mu\nu\alpha}$ such that, $$\begin{aligned}
\label{S2-1}
\mathcal{S} = \frac{1}{16\pi}\int\sqrt{-g}d^{4}x\bigg{[}~ \mathcal{R} - 2\partial_{\nu}\chi\partial^{\nu}\chi -\frac{1}{3}\mathcal{H}_{\rho\sigma\delta}\mathcal{H}^{\rho\sigma\delta} + e^{-2\chi}\mathcal{F}_{\rho\sigma}\mathcal{F}^{\rho\sigma}\bigg{]} \end{aligned}$$ where, $g$ is the determinant and $\mathcal{R}$ the Ricci scalar with respect to the 4-dimensional metric $g_{\mu\nu}$. In \[S2-1\], $\mathcal{F}_{\mu\nu}$ represents the second rank antisymmetric Maxwell field strength tensor such that $\mathcal{F}_{\mu\nu}=\nabla_\mu A_\nu-\nabla_\nu A_\mu$ while the dilaton field is denoted by $\chi$. The third rank antisymmetric tensor field $\mathcal{H}_{\rho\sigma\delta}$ in the above action can be expressed in the form, $$\begin{aligned}
\label{S2-2}
\mathcal{H}_{\rho\sigma\delta}=\nabla_\rho B_{\sigma\delta}+\nabla_\sigma B_{\delta \rho}+\nabla_\delta B_{\rho\sigma}-(A_\rho B_{\sigma\delta}+A_\sigma B_{\delta\rho}+A_\delta B_{\rho\sigma})\end{aligned}$$ where the second rank anti-symmetric tensor gauge field $B_{\mu\nu}$ in \[S2-2\] is known as the Kalb-Ramond field and its cyclic permutation with $A_\mu$ represents the Chern-Simons term.
In four dimensions $\mathcal{H}_{\mu\nu\alpha}$ is associated with the pseudo-scalar axion field $\xi$, such that, $$\begin{aligned}
\label{S2-3}
\mathcal{H}_{\rho\sigma\delta} = \frac{1}{2}e^{4\chi}\epsilon_{\rho\sigma\delta\gamma}\partial^{\gamma}\xi\end{aligned}$$ When expressed in terms of the axion field, \[S2-1\] assumes the form, $$\begin{aligned}
\label{S2-4}
\mathcal{S} = \frac{1}{16\pi}\int\sqrt{-g}~d^{4}x\bigg{[}\mathcal{R} - 2\partial_{\nu}\chi\partial^{\nu}\chi - \frac{1}{2}e^{4\chi}\partial_{\nu}\xi\partial^{\nu}\xi + e^{-2\chi}\mathcal{F}_{\rho\sigma}\mathcal{F}^{\rho\sigma} + \xi\mathcal{F}_{\rho\sigma}\tilde{\mathcal{F}}^{\rho\sigma}\bigg{]} \end{aligned}$$ By varying the action with respect to the dilaton, axion and the Maxwell fields we obtain their corresponding equations of motion. The equation of motion associated with the dilaton field is given by, $$\begin{aligned}
\nabla_{\mu}\nabla^{\mu}\chi - \frac{1}{2}e^{4\chi}\nabla_{\mu}\xi\nabla^{\mu}\xi + \frac{1}{2}e^{-2\chi}\mathcal{F}^{2} &= 0, \label{S2-5}\end{aligned}$$ while that of the axion is, $$\begin{aligned}
\nabla_{\mu}\nabla^{\mu}\xi + 4\nabla_{\nu}\xi\nabla^{\nu}\xi - e^{-4\chi}\mathcal{F}_{\rho\sigma}\tilde{\mathcal{F}}^{\rho\sigma} &= 0 \label{S2-6}\end{aligned}$$ The Maxwell’s equations coupled to the dilaton and the axion fields are given by, $$\begin{aligned}
\nabla_{\mu}(e^{-2\chi}\mathcal{F}^{\mu\nu} + \xi\tilde{\mathcal{F}}^{\mu\nu}) &= 0,\label{S2-7}\\
\nabla_{\mu}(\tilde{\mathcal{F}}^{\mu\nu}) &= 0 \label{S2-8}\end{aligned}$$ The solution of the axion, dilaton and the $U(1)$ gauge fields are respectively given by [@Ganguly:2014pwa; @Sen:1992ua; @Rogatko:2002qe], $$\begin{aligned}
\xi &= \frac{q^{2}}{\mathcal{M}}\frac{a\cos\theta}{r^{2} + a^{2}\cos^{2}\theta} \label{S2-9}\\
e^{2\chi} &= \frac{r^{2} + a^{2}\cos^{2}\theta}{r(r + r_{2}) + a^{2}\cos^{2}\theta}\label{S2-10}\\
A & =\frac{qr}{\tilde{\Sigma}}\bigg(-dt +a \mathrm{sin}^2\theta d\phi\bigg)
\label{S2-11}\end{aligned}$$ From above the non-zero components of $H_{\mu\nu\alpha}$ can also be evaluated [@Ganguly:2014pwa].
The gravitational field equations are obtained when the action is varied with respect to $g_{\mu\nu}$ which yields the Einstein’s equations, $$\begin{aligned}
\mathcal{G}_{\mu\nu} = \mathcal{T}_{\mu\nu}(\mathcal{F},\chi,\xi) \label{S2-12}\end{aligned}$$ where, $\mathcal{G}_{\mu\nu}$ is the Einstein tensor and the energy-momentum tensor $\mathcal{T}_{\mu\nu}$ on the right hand side of \[S2-12\] is given by, $$\begin{aligned}
\label{S2-13}
&\mathcal{T}_{\mu\nu}(\mathcal{F},\chi,\xi) = e^{2\chi}(4\mathcal{F}_{\mu\rho}\mathcal{F}_{\nu}^{\rho} - g_{\mu\nu}\mathcal{F}^{2}) - g_{\mu\nu}(2\partial_{\gamma}\chi\partial^{\gamma}\chi + \frac{1}{2}e^{4\chi}\partial_{\gamma}\xi\partial^{\gamma}\xi) + \partial_{\mu}\chi\partial_{\nu}\chi + e^{4\chi}
\partial_{\mu}\xi\partial_{\nu}\xi\end{aligned}$$ The stationary and axisymmetric solution of the Einstein’s equations corresponds to the Kerr-Sen metric [@Sen:1992ua] which when expressed in Boyer-Lindquist coordinates assumes the form [@Garcia:1995qz; @Ghezelbash:2012qn; @Bernard:2016wqo], $$\begin{aligned}
\label{S2-14}
ds^{2} &= - \bigg{(}1 - \frac{2\mathcal{M}r}{\tilde{\Sigma}}\bigg{)}~dt^{2} + \frac{\tilde{\Sigma}}{\Delta}(dr^{2} + \Delta d\theta^{2}) - \frac{4a\mathcal{M}r}{\tilde{\Sigma}}\sin^{2}\theta dt d\phi
&+ \sin^{2}\theta d\phi^{2}\bigg{[}r(r+r_{2}) + a^{2} + \frac{2\mathcal{M}ra^{2}\sin^{2}\theta}{\tilde{\Sigma}}\bigg{]}\end{aligned}$$ where, $$\begin{aligned}
\label{S2-14a}
\tilde{\Sigma} &= r(r + r_{2}) + a^{2}\cos^{2}\theta \tag{14a}\\
\Delta &= r(r + r_{2}) - 2\mathcal{M}r + a^{2} \tag{14b}\end{aligned}$$ In \[S2-14\], $\mathcal{M}$ is the mass, $r_{2} = \frac{q^{2}}{\mathcal{M}}e^{2\chi_{0}}$ is the dilaton parameter and $a$ is the angular momentum associated with the black hole. The dilaton parameter bears the imprints of the asymptotic value of the dilatonic field $\chi_{0}$ and the electric charge $q$ of the black hole. This charge essentially originates from the axion-photon coupling and not the in-falling charged particles since without the electric charge the field strengths corresponding to both the axion and the dilaton vanish (\[S2-9\] and \[S2-10\]). In such a scenario, \[S2-14\] reduces to the Kerr metric. We further note that in \[S2-14\] the spin of the black hole originates from the axion field since a non-rotating black hole ($a=0$) leads to a vanishing axionic field strength (\[S2-9\]). When the rotation parameter in \[S2-14\] vanishes, the resultant spherically symmetric spacetime represents a pure dilaton black hole labelled by its mass, electric charge and the asymptotic value of the dilaton field [@Garfinkle:1990qj; @Yazadjiev:1999xq].
The event horizon $r_H$ of the Kerr-Sen spacetime is obtained by solving for $\Delta=0$ such that, $$\begin{aligned}
\label{S2-15}
r_H=\mathcal{M}-\frac{r_2}{2} +\sqrt{\bigg(\mathcal{M}-\frac{r_2}{2}\bigg)^2 - a^2}\end{aligned}$$ From \[S2-15\] and the fact that $r_2$ depends on the square of the electric charge, it can be shown that $0\leq \frac{r_2}{\mathcal{M}} \leq 2$ leads to real, positive event horizons and hence black hole solutions. In this work we will be interested in this regime of $r_2$.
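The horizon condition above is straightforward to evaluate numerically. The following minimal sketch (in geometrized units with $\mathcal{M}=1$; an illustration, not part of our analysis pipeline) solves $\Delta=0$ and checks the bound $|a|\leq \mathcal{M}-r_2/2$:

```python
import math

def kerr_sen_horizon(a, r2, M=1.0):
    """Outer event horizon of the Kerr-Sen metric, from
    Delta = r(r + r2) - 2Mr + a^2 = 0; returns None when no horizon exists."""
    disc = (M - 0.5 * r2) ** 2 - a * a
    if disc < 0:
        return None  # naked singularity: |a| > M - r2/2
    return M - 0.5 * r2 + math.sqrt(disc)

# r2 = 0, a = 0 recovers the Schwarzschild horizon r_H = 2M
assert abs(kerr_sen_horizon(0.0, 0.0) - 2.0) < 1e-12
# an extremal example: r2 = 1, a = 0.5 gives r_H = M - r2/2 = 0.5
assert abs(kerr_sen_horizon(0.5, 1.0) - 0.5) < 1e-12
# spins outside |a| <= M - r2/2 leave no horizon
assert kerr_sen_horizon(0.9, 0.6) is None
```

The last check makes explicit why, for a given $r_2$, only spins with $|a|\leq 1-r_2/2$ (in units of $\mathcal{M}$) are scanned in the analysis below.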
In the next section we will discuss accretion in the Kerr-Sen background and explicitly demonstrate the dependence of the luminosity from the accretion disk on the background spacetime.
Spectrum from the accretion disk around black holes in the Kerr-Sen spacetime {#S3}
=============================================================================
The continuum spectrum emitted from the accretion disk surrounding the black holes is sensitive to the background metric and hence provides an interesting observational avenue to explore the signatures of the Kerr-Sen spacetime. In this section, we will compute the continuum spectrum from the accretion disk in a general stationary, axi-symmetric background and subsequently specialize to the Kerr-Sen metric. This in turn will enable us to probe the observable effects of EMDA gravity which can be used to distinguish it from the general relativistic scenario.
The continuum spectrum depends not only on the background metric but also on the properties of the accretion flow. In this work we will derive the continuum spectrum based on the Novikov-Thorne model which adopts the ‘thin-disk approximation’ [@Novikov_Thorne_1973; @Page:1974he]. In such a scenario, the accretion takes place along the equatorial plane ($\theta=\pi/2$ plane) such that the resultant accretion disk is geometrically thin with $\frac{h(r)}{r}\ll 1 $, $h(r)$ being the height of the disk at a radial distance $r$. The accreting particles are assumed to maintain *nearly circular* geodesics such that the azimuthal velocity $u_{\phi}$ far exceeds the radial and vertical velocities $u_r$ and $u_z$, i.e., $u_{z} \ll u_{r} \ll u_{\phi}$. The presence of viscous stresses imparts a minimal radial velocity to the accreting fluid which facilitates the gradual inspiral and fall of matter into the black hole. Since the vertical velocity is negligible, a thin accretion disk harbors *no outflows*.
The energy-momentum tensor associated with the accreting fluid can be expressed as, $$\begin{aligned}
T^{\alpha}_ {\beta}= \rho_{0}\left(1+\Pi\right)u^{\alpha}u_{\beta}+t^{\alpha}_{\beta}+u^{\alpha}q_{\beta}+q^{\alpha}u_{\beta}~,
\label{S3-1}\end{aligned}$$ where $\rho_{0}$ is the proper density and $u^{\alpha}$ is the four velocity of the accreting particles, such that the term $\rho_{0}u^{\alpha}u_{\beta}$ represents the contribution to the energy-momentum tensor due to the geodesic flow. The specific internal energy of the system is denoted by $\Pi$ and the associated term denotes the contribution to the energy density due to dissipation. In \[S3-1\], $t^{\alpha \beta}$ and $q^{\alpha}$ respectively denote the stress-tensor and the energy flux relative to the local inertial frame and consequently $t_{\alpha \beta}u^{\alpha}=0=q_{\alpha}u^{\alpha}$. Motion of the particles along the geodesics ensures that the gravitational pull of the central black hole dominates the forces due to radial pressure gradients and hence the specific internal energy of the accreting fluid can be ignored compared to its rest energy. Therefore, the special relativistic corrections to the local thermodynamic, hydrodynamic and radiative properties of the fluid can be safely neglected compared to the general relativistic effects of the black hole [@Novikov_Thorne_1973; @Page:1974he]. The loss of gravitational potential energy due to infall of matter towards the black hole generates electromagnetic radiation which interacts efficiently with the accreting fluid before being radiated out of the system. Since the specific internal energy $\Pi\ll 1$, the accreting fluid retains no heat and only the $z$-component of the energy flux vector $q^\alpha$ contributes non-trivially to the energy-momentum tensor.
In order to compute the flux and hence the luminosity from the accretion disk we assume that the black hole undergoes steady state accretion at a rate $\dot{M}$ and the accreting fluid obeys conservation of mass, energy and angular momentum. The conservation of mass assumes the form, $$\begin{aligned}
\label{S3-2}
\dot{M}=-2\pi \sqrt{-g}u^{r} \Sigma\end{aligned}$$ where $\Sigma$ denotes the average surface density of matter flowing into the black hole and $g$ corresponds to the determinant of the metric. The conservation of angular momentum and energy are respectively given by, $$\begin{aligned}
\label{S3-3}
\frac{\partial}{\partial r}\left[\dot{M}\mathcal{L}-2\pi \sqrt{-g}W^r_\phi \right]=4\pi \sqrt{-g} F \mathcal{L}~,~~~\rm{and } \end{aligned}$$ $$\begin{aligned}
\label{S3-4}
\frac{\partial}{\partial r}\left[\dot{M}\mathcal{E}-2\pi \sqrt{-g}\Omega W^r_\phi \right]=4\pi \sqrt{-g} F \mathcal{E} \end{aligned}$$ where $\Omega$, $\mathcal{E}$ and $\mathcal{L}$ are the angular velocity, the specific energy and the specific angular momentum of the accreting fluid. In a stationary and axi-symmetric spacetime $\mathcal{E}$ and $\mathcal{L}$ are conserved and can be expressed in terms of the metric coefficients such that, $$\begin{aligned}
\label{S3-5a}
\mathcal{E}&=\frac{-g_{tt}-\Omega g_{t\phi}}{\sqrt{-g_{tt}-2\Omega g_{t\phi}-\Omega^2 g_{\phi\phi}}}~,\end{aligned}$$ $$\begin{aligned}
\label{S3-5b}
\mathcal{L}&=\frac{\Omega g_{\phi\phi}+g_{t\phi}}{\sqrt{-g_{tt}-2\Omega g_{t\phi}-\Omega^2 g_{\phi\phi}}}~.\end{aligned}$$ and the angular velocity $\Omega$ is given by, $$\begin{aligned}
\label{S3-6}
\Omega=\frac{d\phi}{dt}=\frac{-g_{t\phi,r}\pm \sqrt{\left\lbrace-g_{t\phi,r}\right\rbrace^2-\left\lbrace g_{\phi\phi,r}\right\rbrace \left\lbrace g_{tt,r}\right\rbrace}}{g_{\phi\phi,r}}\end{aligned}$$ Since the motion is along the equatorial plane $\mathcal{E}$ and $\mathcal{L}$ are only functions of the radial coordinate and the $g_{\theta \theta}$ component does not contribute to the conserved quantities.
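The conserved quantities above can be evaluated directly from the equatorial metric coefficients of \[S2-14\]. The sketch below (a rough Python illustration in geometrized units $G=c=\mathcal{M}=1$, not the code used in this work) specializes to $\theta=\pi/2$, takes the upper (prograde) sign in the expression for $\Omega$, uses central finite differences for the $r$-derivatives, and verifies the Schwarzschild limit:

```python
import math

M = 1.0  # geometrized units, black hole mass set to unity

def metric_eq(r, a, r2):
    """Equatorial (theta = pi/2) Kerr-Sen components g_tt, g_tphi, g_phiphi."""
    sigma = r * (r + r2)
    g_tt = -(1.0 - 2.0 * M * r / sigma)
    g_tp = -2.0 * a * M * r / sigma
    g_pp = r * (r + r2) + a * a + 2.0 * M * r * a * a / sigma
    return g_tt, g_tp, g_pp

def omega_E_L(r, a, r2, h=1e-6):
    """Angular velocity, specific energy and angular momentum of a prograde
    circular equatorial geodesic (central differences for the derivatives)."""
    gtt_p, gtp_p, gpp_p = metric_eq(r + h, a, r2)
    gtt_m, gtp_m, gpp_m = metric_eq(r - h, a, r2)
    dgtt = (gtt_p - gtt_m) / (2 * h)
    dgtp = (gtp_p - gtp_m) / (2 * h)
    dgpp = (gpp_p - gpp_m) / (2 * h)
    omega = (-dgtp + math.sqrt(dgtp ** 2 - dgpp * dgtt)) / dgpp
    g_tt, g_tp, g_pp = metric_eq(r, a, r2)
    norm = math.sqrt(-g_tt - 2 * omega * g_tp - omega ** 2 * g_pp)
    E = (-g_tt - omega * g_tp) / norm
    L = (omega * g_pp + g_tp) / norm
    return omega, E, L

# Schwarzschild check (r2 = 0, a = 0) at r = 6M: Keplerian Omega = sqrt(M/r^3),
# and the familiar ISCO values E = 2*sqrt(2)/3, L = 2*sqrt(3)*M
om, E, L = omega_E_L(6.0, 0.0, 0.0)
assert abs(om - math.sqrt(M / 6.0 ** 3)) < 1e-6
assert abs(E - 2.0 * math.sqrt(2.0) / 3.0) < 1e-5
assert abs(L - 2.0 * math.sqrt(3.0)) < 1e-4
```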
In \[S3-3\] and \[S3-4\], $F$ denotes the flux of radiation generated by the accretion process and is given by, $$\begin{aligned}
\label{S3-6a}
F \equiv \left\langle q^z(r,h)\right\rangle=\left\langle -q^z(r,-h)\right\rangle \end{aligned}$$ while $W^{r}_{\phi}$ is associated with the time and height averaged stress tensor in the local rest frame of the accreting particles, i.e., $$\begin{aligned}
\label{S3-7}
W^\alpha_\beta=\int^h_{-h}dz\left\langle t^\alpha_\beta\right\rangle\end{aligned}$$
By manipulating the conservation laws an analytical expression for the flux $F$ from the accretion disk can be obtained, such that, $$\begin{aligned}
\label{S3-8}
F = \frac{\dot{M}}{4\pi\sqrt{-g}}f ~~~\rm{where}\end{aligned}$$ $$\begin{aligned}
\label{S3-9}
f=-\frac{\Omega_{,r}}{(\mathcal{E}-\Omega \mathcal{L})^2}\left[\mathcal{EL}-\mathcal{E}_{ms}\mathcal{L}_{ms}-2\int_{r_{ms}}^r \mathcal{LE}_{,r^\prime}dr^\prime \right]\end{aligned}$$ While deriving \[S3-8\] the viscous stress $W^r_\phi$ is assumed to vanish at the marginally stable circular orbit $r_{ms}$, such that after crossing this radius the azimuthal velocity of the accreting fluid vanishes and radial accretion takes over. The last stable circular orbit $r_{ms}$, corresponds to the inflection point of the effective potential in which the accreting particles move. The effective potential $V_{eff}$ is given by [@Banerjee:2019sae], $$\begin{aligned}
\label{S3-10}
V_{\rm eff}(r)=\frac{\mathcal{E}^2g_{\phi\phi}+2\mathcal{EL}g_{t\phi}+\mathcal{L}^2g_{tt}}{g_{t\phi}^2-g_{tt}g_{\phi\phi}}-1\end{aligned}$$ and $r_{ms}$ is obtained by solving for $V_{\rm eff}=\partial _{r}V_{\rm eff}=\partial _{r}^{2}V_{\rm eff}=0$. In \[S3-9\] $\mathcal{E}_{ms}$ and $\mathcal{L}_{ms}$ denote the energy and angular momentum at the marginally stable circular orbit.
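For circular orbits, $r_{ms}$ also coincides with the minimum of $\mathcal{E}(r)$, which offers a convenient numerical cross-check. The sketch below (illustrative Python with $G=c=\mathcal{M}=1$; not our production code) uses the Schwarzschild limit $r_2=0$, $a=0$ of \[S3-5a\], where the specific energy reduces to the closed form $\mathcal{E}(r)=(1-2\mathcal{M}/r)/\sqrt{1-3\mathcal{M}/r}$, and recovers the familiar $r_{ms}=6\mathcal{M}$:

```python
import math

M = 1.0

def energy_circular(r):
    """Specific energy E(r) of a circular equatorial geodesic in Schwarzschild
    (the r2 = 0, a = 0 limit of the Kerr-Sen expressions)."""
    return (1.0 - 2.0 * M / r) / math.sqrt(1.0 - 3.0 * M / r)

# r_ms is the radius minimizing E(r): locate it on a fine grid above r = 3M
rs = [3.01 + 0.001 * i for i in range(12000)]
r_ms = min(rs, key=energy_circular)

assert abs(r_ms - 6.0 * M) < 1e-2                                  # ISCO at 6M
assert abs(energy_circular(r_ms) - 2.0 * math.sqrt(2.0) / 3.0) < 1e-6
```

A simple grid scan suffices here because $\mathcal{E}(r)$ is smooth and has a single minimum; in the full Kerr-Sen analysis the same condition is imposed through the inflection point of $V_{\rm eff}$.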
Since the electromagnetic radiation emitted due to loss of gravitational potential energy undergoes repeated collisions with the accreting fluid, a thermal equilibrium between matter and radiation is established. This renders the accretion disk to be geometrically thin and optically thick such that it emits locally as a black body. Therefore, the temperature profile is given by the Stefan-Boltzmann law, i.e., $T(r)=\left(\mathcal{F}(r)/\sigma\right)^{1/4}$ ($\sigma$ being the Stefan-Boltzmann constant) and $\mathcal{F}(r)=F(r)c^6/G^2\mathcal{M}^2$ (where $F(r)$ is given by \[S3-8\] and \[S3-9\]). Hence the accretion disk emits a Planck spectrum at every radius with a peak temperature $T(r)$. Finally, the luminosity $L_{\nu}$ emitted by the disk at an observed frequency $\nu$ is obtained by integrating the Planck function $B_\nu(T(r))$ over the disk surface, $$\begin{aligned}
\label{S3-11}
L_{\nu}=8\pi^2 r_{\rm g}^2\cos i \int_{x_{\rm ms}}^{x_{\rm out}}\sqrt{g_{rr}} B_{\nu}(T)x dx;
\qquad
B_\nu (T)&=\frac{2h\nu^3/c^2}{{\rm exp}\left(\frac{h\nu}{z_{\rm g} kT}\right)-1}\end{aligned}$$ where, $x = r/r_{\rm g}$ is the radial coordinate expressed in units of the gravitational radius $r_{\rm g} = G\mathcal{M}/c^{2}$ and $``i"$ is the inclination angle between the line of sight and the normal to the disk. In \[S3-11\] $z_{\rm g}$ is the gravitational redshift factor given by, $$\begin{aligned}
\label{S3-12}
z_{\rm g}=\mathcal{E}\frac{\sqrt{-g_{tt}-2\Omega g_{t\phi}-\Omega^2 g_{\phi\phi}}}{\mathcal{E}-\Omega \mathcal{L}}\end{aligned}$$ which is associated with the change in the frequency suffered by the photon while travelling from the emitting material to the observer [@Ayzenberg:2017ufk].
![The above figure depicts the variation of the theoretically derived luminosity from the accretion disk with frequency for two different masses of the supermassive black holes, namely, $M=10^7 M_\odot$ and $M=10^9 M_\odot$. For both the masses, the black lines represent $r_2=0$, while the blue and red lines correspond to $r_2=0.6$ and $r_2=1.6$ respectively. For a given $r_2$, prograde spins are denoted by dashed lines, non-rotating black holes are denoted by solid lines while their retrograde counterparts are illustrated by the dotted lines. The Schwarzschild scenario is depicted by the solid black line. The accretion rate is assumed to be $1 M_{\odot}\textrm{yr}^{-1}$ and the inclination angle is taken to be $\cos i=0.8$. For more discussions see text. []{data-label="F1"}](spectra-2.pdf)
We have thus arrived at an analytical expression for the luminosity from the accretion disk given by \[S3-11\]. We note that it depends on the background metric through the energy, angular momentum, angular velocity and the radius of the marginally stable circular orbit, where in the present work we will consider the metric components corresponding to the Kerr-Sen spacetime given by \[S2-14\]. Apart from the metric parameters, the spectrum is also sensitive to the mass of the black hole, the accretion rate and the inclination angle of the disk to the line of sight.
\[F1\] depicts the variation of the theoretically derived luminosity from the accretion disk with the frequency, for two different masses of the supermassive black holes, viz., $\mathcal{M}=10^9 M_\odot$ and $\mathcal{M}=10^7 M_\odot$. The accretion rate is assumed to be $1 M_{\odot}\textrm{yr}^{-1}$ while the inclination angle is considered to be $\cos i=0.8$. We note that the spectrum from a lower mass black hole peaks at a higher frequency, which can be ascribed to the $T \propto \mathcal{M}^{-1/4}$ dependence of the peak temperature $T$ of a multi-color black body spectrum on the black hole mass [@Frank:2002]. It is evident from \[F1\] that the metric parameters $r_2$ and $a$ substantially affect the luminosity from the accretion disk, especially at high frequencies. We recall from the previous discussion that for the existence of the event horizon, the spin of the black hole should lie in the range $-(1-\frac{r_2}{2})\leq a \leq (1-\frac{r_2}{2})$ for a given $r_2$ (\[S2-15\]), where $a$ and $r_2$ are expressed in units of $\mathcal{M}$, a convention we adopt throughout the remaining discussion. We study three different dilaton parameters in \[F1\], $r_2=0$ (black lines), $r_2=0.6$ (blue lines) and $r_2=1.6$ (red lines). For each dilaton parameter we consider non-rotating black holes (solid lines) as well as prograde (dashed lines) and retrograde (dotted lines) spins in the allowed range. We note that for a given $r_2$ the disk luminosity associated with prograde black holes is maximum, followed by the luminosity corresponding to their non-spinning and retrograde counterparts. Similarly, for a given spin, accretion disks around dilaton black holes are more luminous compared to the general relativistic scenario.
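The $T \propto \mathcal{M}^{-1/4}$ scaling of the spectral peak can be checked with a short numerical sketch (illustrative Python; the temperatures are arbitrary round numbers, not fitted disk temperatures). For $B_\nu$, Wien's displacement law places the maximum at $h\nu_{\rm peak}\simeq 2.82\,kT$, so a black hole lighter by a factor of $100$ is hotter by $100^{1/4}\simeq 3.16$ and its spectrum peaks at a correspondingly higher frequency:

```python
import math

def b_nu(nu, T, h=6.626e-34, k=1.381e-23, c=3.0e8):
    """Planck function B_nu(T) in SI units."""
    return (2.0 * h * nu ** 3 / c ** 2) / math.expm1(h * nu / (k * T))

def peak_frequency(T):
    """Locate the maximum of B_nu on a coarse frequency grid (1 THz spacing)."""
    return max((i * 1e12 for i in range(1, 20000)), key=lambda nu: b_nu(nu, T))

# a 100x lighter hole is 100^(1/4) ~ 3.16x hotter at fixed accretion rate,
# so its disk spectrum peaks at a ~3.16x higher frequency
T_heavy = 1.0e4
T_light = T_heavy * 100 ** 0.25
assert abs(peak_frequency(T_light) / peak_frequency(T_heavy) - 100 ** 0.25) < 0.05
```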
Numerical Analysis {#S4}
==================
In this section we compute the optical luminosity of a sample of Palomar Green (PG) quasars [@Schmidt:1983hr; @Davis:2010uq] assuming the thin accretion disk model discussed in the last section. The optical luminosity $L_{opt}\equiv \nu L_\nu$ is evaluated at the wavelength 4861Å [@Davis:2010uq] and compared with the corresponding observed values [@Davis:2010uq], which in turn allows us to discern the observationally favored magnitude of the dilaton parameter and the black hole spins. These quasars have independent mass measurements based on the method of reverberation mapping [@Kaspi:1999pz; @Kaspi:2005wx; @Boroson:1992cf; @Peterson:2004nu] while the accretion rates of the quasars are reported in [@Davis:2010uq]. Since quasars are mostly face-on systems, the inclination angle $i$ is assumed to lie in the range $\cos i \in \left(0.5,1\right)$. Following [@Davis:2010uq; @Wu:2013zqa] we adopt a typical value of $\cos i \sim 0.8$ in our analysis. This is in agreement with the results of Piotrovich et al. who estimated the inclination angle of some of these quasars using the degree of polarisation of the scattered radiation from the accretion disk.
The bolometric luminosities of these quasars have been estimated [@Davis:2010uq] based on the observed data in the optical [@1987ApJS...63..615N], UV [@Baskin:2004wn], far-UV [@Scott:2004sv], and soft X-rays [@Brandt:1999cm]. The error in bolometric luminosity receives its dominant contribution from the far-UV extrapolation since the uncertainty in the UV luminosity supersedes other sources of error (e.g., optical or X-ray variability) [@Davis:2010uq]. Moreover, the UV part of the spectral energy distribution (SED) is contaminated by components other than the accretion disk, since physical mechanisms such as advection or a Comptonizing corona may redistribute the UV flux to the X-ray frequencies [@Davis:2010uq]. Therefore, although the maximum emission from the accretion disk of quasars generally peaks in the optical/UV part of the spectrum, disentangling the role of the metric from UV observations becomes difficult for the aforesaid reasons. This motivates us to focus on the optical domain and compare the optical observations of quasars with the corresponding theoretical estimates.
The requirement for the quasars to possess a real, positive event horizon imposes the constraint $0\leq r_2\leq 2$ on the dilaton parameter. Also, for a given $r_2$, the spin $a$ of the black hole can assume values between, $-(1-\frac{r_2}{2})\leq a \leq (1-\frac{r_2}{2})$. In order to arrive at the most favored dilaton parameter we proceed in the following way:
- We consider a quasar in the sample with known $\mathcal{M}$ and $\dot{M}$ and compute the theoretical optical luminosity at 4861Å for a given $r_2$ and all the allowed values of $a$ for that $r_2$. The value of $a$ that best reproduces the observed luminosity, is considered to be the spin of that quasar for the chosen $r_2$.
- We repeat the above procedure for all the quasars in the sample for the aforesaid $r_2$. This assigns a specific spin to each of the quasars for the given $r_2$.
- We now vary $r_2$ in the theoretically allowed range, ($0\leq r_2\leq 2$) and repeat the above two procedures. This ensures that for every $r_2$, the sample of quasars are associated with a spin that minimizes the error between the theoretical and the observed optical luminosities.
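Schematically, the procedure above amounts to a nested grid search. In the sketch below (Python), `model_lum` is a hypothetical stand-in for the full disk-luminosity computation of \[S3-11\], and the sample entries are fabricated toy numbers; only the structure of the loops mirrors our procedure:

```python
def model_lum(r2, a, mass, mdot):
    """Hypothetical stand-in for the disk optical luminosity at 4861 A;
    the real quantity comes from the Novikov-Thorne flux integral."""
    return mass * mdot * (1.0 + 0.5 * a + 0.2 * r2)  # toy dependence only

def best_spin(r2, mass, mdot, L_obs, n=201):
    """For a given r2, scan -(1 - r2/2) <= a <= (1 - r2/2) and return the
    spin minimizing the misfit with the observed luminosity."""
    a_max = 1.0 - 0.5 * r2
    grid = [-a_max + 2.0 * a_max * i / (n - 1) for i in range(n)]
    return min(grid, key=lambda a: abs(model_lum(r2, a, mass, mdot) - L_obs))

def chi2_of_r2(r2, quasars):
    """Outer step: assign each quasar its best spin, then accumulate chi^2."""
    total = 0.0
    for mass, mdot, L_obs, sigma in quasars:
        a = best_spin(r2, mass, mdot, L_obs)
        total += ((L_obs - model_lum(r2, a, mass, mdot)) / sigma) ** 2
    return total

sample = [(1.0, 1.0, 1.3, 0.1), (2.0, 0.5, 1.1, 0.1)]  # fabricated toy inputs
best_r2 = min((i / 100.0 for i in range(201)), key=lambda r2: chi2_of_r2(r2, sample))
assert 0.0 <= best_r2 <= 2.0
```

In the actual analysis the outer minimization is carried out with the error estimators discussed next, for the full sample of eighty quasars.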
In order to arrive at the magnitude of $r_2$ favored by optical observations of quasars, we compute several error estimators which we discuss next.
Error estimators {#S4-1}
----------------
In this section we discuss various error estimators which enable us to find the dilaton parameter $r_{2}$ minimizing the error between the theoretical and the observed optical luminosities.
- **Chi-square $\boldsymbol {\chi^{2}}~$: ** We consider the sample of quasars with observed optical luminosities $\{\mathcal{O}_{k}\}$ and errors $\{\sigma_{k}\}$. The theoretical estimates of the optical luminosity corresponds to $\left\lbrace \mathcal{T}_{k}\left(r_{2},\left\lbrace a_{(i)}\right\rbrace\right)\right\rbrace$ for a given $r_2$ (where $\left\lbrace a_{(i)}\right\rbrace$ denotes the set of best choice of spin parameters associated with the quasars for that $r_2$, as discussed above). With this, the chi-square ($\chi ^{2}$) of the distribution can be defined as,
$$\begin{aligned}
\label{chi2}
\chi ^{2}(r_{2},\left\lbrace a_{(i)}\right\rbrace)=\sum _{k}\frac{\left\{\mathcal{O}_{k}-\mathcal{T}_{k}(r_{2},\left\lbrace a_{(i)}\right\rbrace) \right\}^{2}}{\sigma _{k}^{2}}.\end{aligned}$$
The errors associated with the optical luminosities are not explicitly reported [@Davis:2010uq]. However, we have already mentioned that the error in optical luminosity can be ignored compared to the error in bolometric luminosity which receives maximum contribution from the far-UV extrapolation since the uncertainty in the UV luminosity dominates over other sources of error (e.g., optical or X-ray variability) [@Davis:2010uq]. Therefore, we consider the errors in the bolometric luminosity reported in [@Davis:2010uq], as the maximum possible error in the estimation of the optical luminosity.
![The figure illustrates the variation of $\chi^{2}$ with the dilaton parameter $r_{2}$ for a sample of eighty quasars.[]{data-label="Fig_2"}](chi2_vs_q.pdf)
It is important to recall that there are restrictions on the magnitudes of $r_2$ and $a$ (as discussed in the last section) and one cannot assume arbitrary values of these parameters. Consequently, we cannot use the reduced chi-square $\chi ^{2}_{Red}=\chi ^{2}/\nu$ ($\nu$ being the degrees of freedom) as an error estimator, since the definition of the degrees of freedom becomes ambiguous [@Andrae:2010gh] in such cases. We therefore analyze the error between the theoretical and the observed optical luminosities using $\chi^2$ as the error estimator.
From the definition of $\chi^2$ in \[chi2\], it is clear that the magnitude of $r_2$ which minimizes $\chi^2$ is most favored by the observations. In \[Fig\_2\] we plot the variation of $\chi^{2}$ with the dilaton parameter $r_{2}$. The figure clearly reveals that $r_2\sim 0.1$ minimizes the $\chi^2$, signalling that axion-dilaton black holes with mild dilaton charges are more favored by quasar optical data. The spins of the quasars $\left\lbrace a_{(i)}\right\rbrace$ corresponding to $r_2=0.1$ which can best explain the data are reported in \[Table1\]. At this point it may be worthwhile to mention that in a previous work [@Banerjee:2019sae] we compared the theoretical estimates of optical luminosity for braneworld black holes (which can accommodate a negative tidal charge) with the optical data of the same quasar sample. A similar analysis resulted in the conclusion that black holes carrying a negative tidal charge (realised in a higher dimensional scenario) are more favored compared to the general relativistic scenario.\
In what follows we consider a few more error estimators in order to verify our conclusion.
- **Nash-Sutcliffe Efficiency: ** This error estimator, denoted by $E$ [@NASH1970282; @WRCR:WRCR8013; @2005AdG.....5...89K], is associated with the sum of the squared differences between the observed and the theoretical values, normalized by the variance of the observed values. The functional form of the Nash-Sutcliffe efficiency is given by, $$\begin{aligned}
\label{NSE}
E(r_{2},\left\lbrace a_{(i)}\right\rbrace)=1-\frac{\sum_{k}\left\{\mathcal{O}_{k}-\mathcal{T}_{k}(r_{2},\left\lbrace a_{(i)}\right\rbrace)\right\}^{2}}{\sum _{k}\left\{\mathcal{O}_{k}-\mathcal{O}_{\rm av}\right\}^{2}}\end{aligned}$$ where, $\mathcal{O}_{\rm av}$ represents the mean value of the observed optical luminosities of the PG quasars.
In contrast to $\chi^{2}$, the dilaton parameter which maximizes $E$ is most favored by observations. Interestingly, $E$ can range from $-\infty$ to $1$. A negative $E$ indicates that the average of the observed data explains the observations better than the theoretical model, while $E \sim 1$ represents the ideal model which predicts the observations with great accuracy [@Goyal]. The variation of the Nash-Sutcliffe efficiency with $r_2$ is illustrated in \[Fig\_3a\]. We note that $E$ maximizes when $r_2\sim 0.1$, thereby supporting our earlier conclusion derived from the chi-square. The maximum value of the Nash-Sutcliffe efficiency is $E_{max}\sim 0.777$, which shows that this is a satisfactory model for the data [@Goyal].
- **Modified Nash-Sutcliffe Efficiency $\bf E_{1}$ : ** In order to overcome the oversensitivity of the Nash-Sutcliffe efficiency to higher values of the optical luminosity (which arises due to the presence of the square of the error in the numerator), a modified version has been proposed [@WRCR:WRCR8013], given by, $$\begin{aligned}
\label{E1}
E_{1}(r_{2},\left\lbrace a_{(i)}\right\rbrace)&=1-\frac{\sum_{k}|\mathcal{O}_{k}-\mathcal{T}_{k}(r_{2},\left\lbrace a_{(i)}\right\rbrace)|}{\sum _{k}|\mathcal{O}_{k}-\mathcal{O}_{\rm av}|}\end{aligned}$$ This modified estimator exhibits an enhanced sensitivity towards lower values of the observed optical luminosities. As with the Nash-Sutcliffe efficiency, the most favorable $r_{2}$ maximizes this modified version as well. It is observed from \[Fig\_3b\] that the maximization occurs at $r_{2}\sim 0$, which implies that the general relativistic scenario is more favored than EMDA gravity.
- **Index of Agreement and its modified form : ** The Nash-Sutcliffe efficiency and its modified form turn out to be insensitive to the differences of the theoretical and the observed luminosities from the corresponding observed mean [@WRCR:WRCR8013]. This is overcome by introducing the index of agreement $d$ [@willmott1984evaluation; @doi:10.1080/02723646.1981.10642213; @2005AdG.....5...89K] where, $$\begin{aligned}
\label{d}
d(r_{2},\left\lbrace a_{(i)}\right\rbrace)=1-\frac{\sum_{k}\left\{\mathcal{O}_{k}-\mathcal{T}_{k}(r_{2},\left\lbrace a_{(i)}\right\rbrace)\right\}^{2}}{\sum _{k}\left\{|\mathcal{O}_{k}-\mathcal{O}_{\rm av}|+|\mathcal{T}_{k}(r_{2},\left\lbrace a_{(i)}\right\rbrace)-\mathcal{O}_{\rm av}|\right\}^{2}}\end{aligned}$$ and $\mathcal{O}_{\rm av}$ refers to the average value of the observed luminosities. The denominator, known as the potential error, is related to the maximum value by which each pair of observed and predicted luminosities differ from the average observed luminosity.
Similar to the Nash-Sutcliffe efficiency, the index of agreement is also oversensitive to higher values of the optical luminosity due to the presence of the squared luminosities in the numerator. Therefore, a modified version, denoted by $d_1$, is proposed which assumes the form, $$\begin{aligned}
\label{d1}
d_{1}(r_{2},\left\lbrace a_{(i)}\right\rbrace)&=1-\frac{\sum_{k}|\mathcal{O}_{k}-\mathcal{T}_{k}(r_{2},\left\lbrace a_{(i)}\right\rbrace)|}{\sum_{k}\left\{|\mathcal{O}_{k}-\mathcal{O}_{\rm av}|+|\mathcal{T}_{k}(r_{2},\left\lbrace a_{(i)}\right\rbrace)-\mathcal{O}_{\rm av}| \right\}}\end{aligned}$$
It is clear from \[d\] and \[d1\] that the dilaton parameter which maximizes the index of agreement and its modified form best explains the data. \[Fig\_4a\] and \[Fig\_4b\] respectively illustrate the variation of $d$ and $d_1$ with $r_2$. We note that $d$ maximizes when $r_2\sim 0.1$ while $d_1$ attains its maximum at $r_2\sim 0$.
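For reference, the four estimators reduce to a few lines of code. The sketch below (illustrative Python, with fabricated toy data in place of the real quasar luminosities) implements \[chi2\], \[NSE\], \[E1\], \[d\] and \[d1\] and checks their limiting values:

```python
def chi2(obs, pred, sig):
    return sum(((o - p) / s) ** 2 for o, p, s in zip(obs, pred, sig))

def nse(obs, pred):  # Nash-Sutcliffe efficiency E
    m = sum(obs) / len(obs)
    return 1.0 - sum((o - p) ** 2 for o, p in zip(obs, pred)) / sum((o - m) ** 2 for o in obs)

def nse_mod(obs, pred):  # modified efficiency E1 (absolute deviations)
    m = sum(obs) / len(obs)
    return 1.0 - sum(abs(o - p) for o, p in zip(obs, pred)) / sum(abs(o - m) for o in obs)

def agreement(obs, pred):  # index of agreement d
    m = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(o - m) + abs(p - m)) ** 2 for o, p in zip(obs, pred))
    return 1.0 - num / den

def agreement_mod(obs, pred):  # modified index of agreement d1
    m = sum(obs) / len(obs)
    num = sum(abs(o - p) for o, p in zip(obs, pred))
    den = sum(abs(o - m) + abs(p - m) for o, p in zip(obs, pred))
    return 1.0 - num / den

obs = [43.91, 44.99, 45.47, 44.41]   # toy log-luminosities, not the real sample
pred = [43.95, 44.90, 45.50, 44.38]
sig = [0.35, 0.29, 0.02, 0.04]

# a perfect model scores chi^2 = 0 and E = E1 = d = d1 = 1
assert chi2(obs, obs, sig) == 0.0
assert nse(obs, obs) == nse_mod(obs, obs) == agreement(obs, obs) == agreement_mod(obs, obs) == 1.0
assert nse(obs, pred) <= 1.0 and agreement(obs, pred) <= 1.0
```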
By studying the variation of the error estimators we note that the dilaton parameter which best explains the observations corresponds to $0\lesssim r_2\lesssim 0.1$, indicating that the Kerr scenario or mildly charged dilaton-axion black holes are more favored. In another work [@Banerjee:2020ubc] we investigated the effect of Einstein-Maxwell dilaton-axion gravity on the power associated with ballistic jets and the radiative efficiencies derived from the continuum spectrum. The theoretical jet powers and radiative efficiencies derived in the Kerr-Sen background were compared with the available observations of microquasars. Such a study reveals that general relativity is more favored compared to the Einstein-Maxwell dilaton-axion scenario, thereby reinforcing our present findings with other independent observations using a different observational sample. It is however important to note that there exist several alternative gravity theories whose black hole solutions resemble the Kerr geometry of general relativity. Therefore, an observational verification of the Kerr solution cannot distinguish these modified gravity models from general relativity [@Psaltis:2007cw]. On the other hand, if deviations from general relativity are detected, then it will require revisiting our understanding of gravity in the high curvature domain. Interestingly, a similar analysis with the present quasar sample, when performed with braneworld black holes (resembling the Kerr-Newman spacetime, although the tidal charge parameter can also accommodate negative values), indicates that quasars with a negative tidal charge (realised in a higher dimensional scenario) are more favored [@Banerjee:2017hzw; @Banerjee:2019sae].\
Since quasars are rotating in nature it is worthwhile to provide an estimate of the spins of the quasars from the above analysis. This is addressed in the next section.
Estimating the spins of the quasars {#S4-2}
-----------------------------------
We have already discussed in the last section that the dilaton parameter which best describes the optical observations of quasars lies in the range $0\lesssim r_2\lesssim 0.1$. We now discuss the most favored values of the spins of the quasars from the same observations. The procedure for extracting the observationally favored spin parameters of the quasar sample, for a given $r_2$, has already been discussed in \[S4\]. Since the most favored dilaton parameter lies between $0$ and $0.1$, we report the spins of the quasars for $r_{2}\sim 0.1$ and $r_{2}\sim 0$ in \[Table1\].
$\rm Object$ $\rm log~ m$ $\rm log ~\dot{m}$ $\rm log ~L_{obs}$ $\rm log ~L_{bol}$ $a_{r_{2}=0.1}$ $a_{r_{2}=0}$
---------------- -------------- -------------------- -------------------- ---------------------- ----------------- ---------------
$\rm 0003+199$ $\rm 6.88$ $\rm -0.06$ $\rm 43.91$ $\rm 45.13 \pm 0.35$ $\rm 0.95 $ $\rm 0.99 $
$\rm 0026+129$ $\rm 7.74$ $\rm 0.80$ $\rm 44.99$ $\rm 46.15 \pm 0.29$ $\rm 0.95 $ $\rm 0.99 $
$\rm 0043+039$ $\rm 8.98$ $\rm 0.36$ $\rm 45.47$ $\rm 45.98 \pm 0.02$ $\rm -0.3 $ $\rm -0.3 $
$\rm 0050+124$ $\rm 6.99$ $\rm 0.58$ $\rm 44.41$ $\rm 45.12 \pm 0.04$ $\rm 0.95 $ $\rm 0.99 $
$\rm 0921+525$ $\rm 6.87$ $\rm -0.55$ $\rm 43.56$ $\rm 44.47 \pm 0.14$ $\rm 0.95 $ $\rm 0.99 $
$\rm 0923+129$ $\rm 6.82$ $\rm -0.49$ $\rm 43.58$ $\rm 44.53 \pm 0.15$ $\rm 0.95 $ $\rm 0.99 $
$\rm 0923+201$ $\rm 8.84$ $\rm -0.47$ $\rm 44.81$ $\rm 45.68 \pm 0.05$ $\rm 0.3 $ $\rm 0.3 $
$\rm 1001+054$ $\rm 7.47$ $\rm 0.59$ $\rm 44.69$ $\rm 45.36 \pm 0.12$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1011-040$ $\rm 6.89$ $\rm 0.17$ $\rm 44.08$ $\rm 45.02 \pm 0.23$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1022+519$ $\rm 6.63$ $\rm -0.36$ $\rm 43.56$ $\rm 45.10 \pm 0.39$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1048-090$ $\rm 9.01$ $\rm 0.30$ $\rm 45.45$ $\rm 46.57 \pm 0.32$ $\rm -0.1 $ $\rm -0.1 $
$\rm 1049-006$ $\rm 8.98$ $\rm 0.34$ $\rm 45.46$ $\rm 46.29 \pm 0.15$ $\rm -0.2 $ $\rm -0.2 $
$\rm 1100+772$ $\rm 9.13$ $\rm 0.29$ $\rm 45.51$ $\rm 46.61 \pm 0.25$ $\rm 0.1 $ $\rm 0.1 $
$\rm 1103-006$ $\rm 9.08$ $\rm 0.21$ $\rm 45.43$ $\rm 46.19 \pm 0.10$ $\rm 0.1 $ $\rm 0.1 $
$\rm 1115+407$ $\rm 7.38$ $\rm 0.49$ $\rm 44.58$ $\rm 45.59 \pm 0.21$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1119+120$ $\rm 7.04$ $\rm -0.06$ $\rm 44.01$ $\rm 45.18 \pm 0.34$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1126-041$ $\rm 7.31$ $\rm -0.02$ $\rm 44.19$ $\rm 45.16 \pm 0.28$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1211+143$ $\rm 7.64$ $\rm 0.68$ $\rm 44.85$ $\rm 46.41 \pm 0.50$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1226+023$ $\rm 9.01$ $\rm 1.18$ $\rm 46.03$ $\rm 47.09 \pm 0.24$ $\rm -0.95 $ $\rm -1.0 $
$\rm 1244+026$ $\rm 6.15$ $\rm 0.15$ $\rm 43.70$ $\rm 44.74 \pm 0.22$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1259+593$ $\rm 8.81$ $\rm 0.99$ $\rm 45.79$ $\rm 47.04 \pm 0.29$ $\rm -0.95 $ $\rm -1.0 $
$\rm 1302-102$ $\rm 8.76$ $\rm 0.92$ $\rm 45.71$ $\rm 46.51 \pm 0.12$ $\rm -0.95 $ $\rm -1.0 $
$\rm 1351+236$ $\rm 8.10$ $\rm -1.14$ $\rm 43.93$ $\rm 44.57 \pm 0.12$ $\rm 0.1 $ $\rm 0.1 $
$\rm 1351+640$ $\rm 8.49$ $\rm -0.38$ $\rm 44.69$ $\rm 45.31 \pm 0.05$ $\rm 0.0 $ $\rm 0.0 $
$\rm 1402+261$ $\rm 7.64$ $\rm 0.63$ $\rm 44.82$ $\rm 46.07 \pm 0.27$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1404+226$ $\rm 6.52$ $\rm 0.55$ $\rm 44.16$ $\rm 45.21 \pm 0.26$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1416-129$ $\rm 8.74$ $\rm -0.21$ $\rm 44.94$ $\rm 45.82 \pm 0.23$ $\rm 0.0 $ $\rm 0.0 $
$\rm 1425+267$ $\rm 9.53$ $\rm 0.07$ $\rm 45.55$ $\rm 46.35 \pm 0.20$ $\rm 0.5 $ $\rm 0.5 $
$\rm 1426+015$ $\rm 8.67$ $\rm -0.49$ $\rm 44.71$ $\rm 45.84 \pm 0.24$ $\rm 0.2 $ $\rm 0.2 $
$\rm 1440+356$ $\rm 7.09$ $\rm 0.43$ $\rm 44.37$ $\rm 45.62 \pm 0.29$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1512+370$ $\rm 9.20$ $\rm 0.20$ $\rm 45.48$ $\rm 47.11 \pm 0.50$ $\rm 0.2 $ $\rm 0.2 $
$\rm 1519+226$ $\rm 7.52$ $\rm 0.18$ $\rm 44.45$ $\rm 45.98 \pm 0.41$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1535+547$ $\rm 6.78$ $\rm -0.01$ $\rm 43.90$ $\rm 44.34 \pm 0.02$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1543+489$ $\rm 7.78$ $\rm 1.18$ $\rm 45.27$ $\rm 46.43 \pm 0.25$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1545+210$ $\rm 9.10$ $\rm 0.01$ $\rm 45.29$ $\rm 46.14 \pm 0.13$ $\rm 0.2 $ $\rm 0.2 $
$\rm 1552+085$ $\rm 7.17$ $\rm 0.56$ $\rm 44.50$ $\rm 45.04 \pm 0.01$ $\rm 0.95 $ $\rm 0.99 $
$\rm 1613+658$ $\rm 8.89$ $\rm -0.59$ $\rm 44.75$ $\rm 45.89 \pm 0.11$ $\rm 0.4 $ $\rm 0.4 $
$\rm 1704+608$ $\rm 9.29$ $\rm 0.38$ $\rm 45.65$ $\rm 46.67 \pm 0.21$ $\rm 0.1 $ $\rm 0.1 $
$\rm 2112+059$ $\rm 8.85$ $\rm 1.16$ $\rm 45.92$ $\rm 46.47 \pm 0.02$ $\rm -0.95 $ $\rm -1.0 $
$\rm 2130+099$ $\rm 7.49$ $\rm 0.05$ $\rm 44.35$ $\rm 45.52 \pm 0.32$ $\rm 0.95 $ $\rm 0.99 $
$\rm 2251+113$ $\rm 8.86$ $\rm 0.66$ $\rm 45.60$ $\rm 46.13 \pm 0.01$ $\rm -0.95 $ $\rm -1.0 $
$\rm 2308+098$ $\rm 9.43$ $\rm 0.22$ $\rm 45.62$ $\rm 46.61 \pm 0.22$ $\rm 0.4 $ $\rm 0.5 $
: Spin parameters of quasars corresponding to $r_{2}=0.1$ and $r_{2}=0$ (for comparison with GR)[]{data-label="Table1"}
Before presenting the bounds on the spin, we note that the theoretically estimated optical luminosity $L_{opt}$ depends both on the radius of the marginally stable circular orbit $r_{ms}$ and on the outer extent of the accretion disk $r_{\rm out}$ (\[S3-11\]). However, emission from the inner radii contributes much more to the total disk luminosity than the outer parts of the disk, since the flux from the accretion disk peaks close to the marginally stable circular orbit $r_{ms}$. Therefore, the choice of the outer disk radius $r_{\rm out}$ is not expected to significantly affect our results. In the present work we have taken $r_{\rm out}\sim 500~r_{\rm g}$ [@Walton:2012aw; @Bambi:2011jq]; however, we have verified that even if the theoretical luminosity $L_{opt}$ is estimated with $r_{\rm out}\sim 1000~r_{\rm g}$, our conclusions regarding the most favorable $r_2$ remain unchanged.
Interestingly, for some of the quasars in our sample, changing $r_{\rm out}$ does affect the best choice of spins corresponding to a given $r_2$. This is because, apart from the metric parameters, the theoretical optical luminosity is sensitive to $\mathcal{M}$ and $\dot{M}$, which vary between quasars. For a given $r_2$ and $a$, the ratio $\dot{M}/\mathcal{M}^2$ determines how sharply peaked the temperature profile $T(r)$ is near $r_{ms}$ (see \[S3-8\], \[S3-9\] and \[S3-11\]). When $T(r)$ is sharply peaked near $r_{ms}$, the outer disk has a negligible contribution to the disk luminosity and the choice of $r_{\rm out}$ does not play a significant role. This turns out to be a limitation of our method of estimating the spin, since the physical extent of the accretion disk should not modify the angular momentum of the quasars. Therefore we report in \[Table1\] the spins of only those quasars whose estimates remain unaltered by variation of $r_{\rm out}$. The most favored values of spin, corresponding to $r_2 \sim 0.1$ and $r_2 \sim 0$, are presented in \[Table1\] since the error is minimized for $0\lesssim r_2\lesssim 0.1$. These are compared with the previously available spin estimates of the quasars, although it is important to note that those spins are determined assuming general relativity and are highly model dependent [@Brenneman:2013oba; @Reynolds:2013qqa; @Reynolds:2019uxi].
We note from \[Table1\] that nearly half of the quasars are maximally spinning, with $a \sim 0.95$ for $r_2\sim 0.1$ and $a \sim 0.99$ when GR is assumed. By using the general relativistic disk reflection model [@Ross:2005dm], Crummy et al. [@Crummy:2005nj] studied the spectra of several quasars reported in \[Table1\] (PG 0003+199, PG 0050+124, PG 1115+407, PG 1211+143, PG 1244+026, PG 1402+261, PG 1404+226, PG 1440+356) and arrived at a similar conclusion. In particular, the spin of PG 0003+199 (also known as Mrk 335) is very well constrained, namely, $a \sim 0.89\pm 0.05$ [@Keek:2015apa] from its X-ray reflection spectrum and $a\sim 0.83^{+0.09}_{-0.13}$ [@Walton:2012aw] from the relativistic broadening of the Fe-$\rm K_{\alpha}$ line. These estimates are very much in agreement with our results based on general relativity, although their method of constraining the spin is different from ours. A study of the gravitationally red-shifted iron line of PG 1613+658 (Mrk 876) [@Bottacini:2014lva] reveals that the quasar harbors a rotating central black hole, which is in agreement with our findings. From polarimetric observations of AGNs, Afanasiev et al. [@Afanasiev:2018dyv] independently estimated the spins of some of the quasars reported in \[Table1\]. Such an analysis corroborates our spin estimates for PG 0003+199, PG 0026+129, PG 0050+124, PG 0923+129, PG 0923+201, PG 2130+099 and PG 2308+098, although the results for PG 0921+525, PG 1022+519, PG 1425+267, PG 1545+210, PG 1613+658 and PG 1704+608 show some variations. However, the spin of PG 1704+608 (3C 351) estimated from the correlation of the jet power with the black hole mass and spin [@Daly:2013uga] is consistent with our estimate of $a\sim 0.1$, assuming general relativity. Piotrovich et al. constrained the spins of some of the radio-loud quasars [@Sikora:2006xz; @Vasudevan:2007hz] in our sample, e.g. PG 1226+023 (3C 273), PG 1704+608 (3C 351) and PG 1100+772, to $a<1$, $a<0.998$ and $a\sim 0.88^{+0.02}_{-0.03}$, respectively.
Our analysis reveals that PG 1704+608 and PG 1100+772 are slowly spinning prograde systems while PG 1226+023 harbors a maximally spinning retrograde black hole. Recent studies [@Reynolds:2006uq; @Garofalo:2010hk; @Garofalo:2009ki; @Garofalo:2013ula] show that retrograde black holes are associated with strong radio jets which is in accordance with the high radio luminosity observed in these systems [@Polletta:2000gi; @Vasudevan:2007hz]. On the contrary, rapidly rotating prograde systems turn out to be radio-quiet, consistent with our findings [@Kellermann:1989tq; @Barvainis:2004wr; @Sikora:2006xz; @Villforth:2009eq].\
Summary and concluding remarks {#S5}
==============================
In this work we attempt to discern the signatures of Einstein-Maxwell dilaton-axion (EMDA) gravity from the quasar continuum spectrum, which is believed to be an important astrophysical site to examine the nature of gravitational interaction in the high curvature regime. EMDA gravity arises in the low energy effective action of the heterotic string theory and bears the coupling of the scalar dilaton and the pseudo-scalar axion to the metric and the Maxwell field. The dilaton and the axion fields are inherited in the action from string compactifications and exhibit interesting implications in inflationary cosmology and the late time acceleration of the universe. Therefore, it is worthwhile to search for the imprints of these fields in the available astrophysical observations since this provides a testbed for string theory.
The presence of the dilaton and axion in the theory results in substantial modifications of the gravitational field equations compared to general relativity. The stationary and axisymmetric black hole solution of these equations leads to the Kerr-Sen metric, which carries a dilaton charge, with the axionic field imparting angular momentum to the black hole. The presence of the Maxwell field makes the Kerr-Sen black hole electrically charged and renders non-trivial field strengths to the axion and the dilaton. The electric charge of the black hole, however, stems from the axion-photon coupling and not from in-falling charged particles. In the absence of the Maxwell field the effects of the dilaton and axion vanish and the Kerr-Sen metric reduces to the Kerr metric of general relativity.
The observational signatures of the Kerr-Sen spacetime have been explored in the context of strong gravitational lensing and black hole shadows [@Gyulchev:2006zg; @An:2017hby; @Younsi:2016azx; @Hioki:2008zw]. In this work we therefore investigate the impact of dilaton-axion black holes on the quasar continuum spectrum, which is believed to store a wealth of information regarding the background metric. Such an attempt has recently been made [@Heydari-fard:2020syf], although there the authors did not compare the theoretical spectra with observations. In this work we compute the theoretical estimate of the optical luminosity for a sample of eighty Palomar-Green quasars using the thin accretion disk model proposed by Novikov & Thorne. These are then compared with the corresponding optical observations of the quasars to obtain an estimate of the observationally favored dilaton parameter and the angular momenta of the quasars. Our study brings out that $0\lesssim r_2\lesssim 0.1$ is favored by the quasar optical data, based on the analysis of error estimators like the $\chi^{2}$, the Nash-Sutcliffe efficiency, the index of agreement and the modified versions of the last two.
The fact that $r_2\sim 0$ is supported by observations implies that the Kerr scenario is more favored compared to the Kerr-Sen background. This in turn indicates a negligible field strength for the axion, whose suppression has been observed in several other physical scenarios, e.g. in the inflationary era induced by higher curvature gravity [@Elizalde:2018rmz; @Elizalde:2018now] and higher dimensions [@Paul:2018jpq], in the warped braneworld scenario [@Randall:1999ee] with bulk Kalb-Ramond fields [@Mukhopadhyaya:2001fc; @Mukhopadhyaya:2002jn] and the related stabilization of the modulus [@Das:2014asa], and so on. We further point out that our results are in agreement with a previous work [@Banerjee:2020ubc] where such a conclusion was independently arrived at by comparing the observed jet power and radiative efficiencies of microquasars with the corresponding theoretical estimates in the Kerr-Sen background. It is also worthwhile to note that an observational verification of the Kerr solution ($r_2\sim 0$) does not confirm general relativity with certainty, since apart from general relativity, the Kerr metric represents the black hole solution of several other alternative gravity models [@Psaltis:2007cw].
The present analysis also allows us to constrain the spins of the quasars, which are mostly in agreement with previous estimates. However, there are limitations associated with spin measurements. This is because the spectral energy distribution (SED) of a quasar consists of emission from multiple components, e.g. the accretion disk, the corona, the jet and the dusty torus, which are not always easy to observe and model [@Brenneman:2013oba]. Discerning the effect of each of these components on the SED is often very challenging, which limits accurate determination of the mass, spin, distance and inclination. As a result, the spin of the same quasar estimated by different methods often leads to inconsistent results [@Brenneman:2013oba; @Reynolds:2013qqa; @Steiner:2012vq; @Gou:2009ks; @Reynolds:2019uxi].
We further note that the background metric affects the emission from only those components which reside close to the horizon. We are therefore interested in modelling the continuum spectrum from the accretion disk, since its inner edge approaches the vicinity of the horizon. The continuum spectrum, however, is sensitive not only to the background spacetime but also to the properties of the accretion flow. In the present work the spectrum is computed using the thin-disk approximation, which does not take into account the presence of outflows or the radial velocity of the accretion flow. A more comprehensive modelling of the disk would therefore impose stronger constraints on the background metric. At present these issues are addressed by considering several phenomenological models, which is beyond the scope of this work.
Apart from the continuum spectrum, there exist other astrophysical observations, e.g. the quasi-periodic oscillations observed in the power spectrum of black holes [@Maselli:2014fca; @Pappas:2012nt], the broadened and skewed iron K-$\alpha$ line in the reflection spectrum of black holes [@Bambi:2016sac; @Ni:2016uik], and the black hole shadow [@Akiyama:2019brx; @Akiyama:2019fyp; @Akiyama:2019eap], which can be used to further establish or falsify our present findings. We leave this study for future work, to be reported elsewhere.
Acknowledgements
================
The research of SSG is partially supported by the Science and Engineering Research Board-Extra Mural Research Grant No. (EMR/2017/001372), Government of India.
[^1]: [email protected]
[^2]: [email protected]
[^3]: [email protected]
---
abstract: 'We report on laboratory test results of the Compact Water Vapor Radiometer (CWVR) prototype for the NSF’s Karl G. Jansky Very Large Array (VLA), a five-channel design centered around the 22 GHz water vapor line. Fluctuations in precipitable water vapor cause fluctuations in atmospheric brightness emission, *T$_{B}$*, which are assumed to be proportional to phase fluctuations of the astronomical signal, $\Delta\phi_{V}$, seen by an antenna. Water vapor radiometry consists of using a radiometer to measure variations in *T$_{B}$* to correct for $\Delta\phi_{V}$. The CWVR channel isolation requirement of $<$ -20 dB is met, indicating $<$ 1% power leakage between any two channels. Gain stability tests indicate that Channel 1 needs repair, and that the fluctuations in output counts for Channel 2 to 5 are negatively correlated to the CWVR enclosure ambient temperature, with a change of $\sim$ 405 counts per 1$^{\circ}$ C change in temperature. With temperature correction, the single channel and channel difference gain stability is $<$ 2 $\times$ 10$^{-4}$, and the observable gain stability is $<$ 2.5 $\times$ 10$^{-4}$ over $\tau$ = 2.5 - 10$^3$ sec, all of which meet the requirements. Overall, the test results indicate that the CWVR meets specifications for dynamic range, channel isolation, and gain stability to be tested on an antenna. Future work consists of building more CWVRs and testing the phase correlations on the VLA antennas to evaluate the use of WVR for not only the VLA, but also the Next Generation Very Large Array (ngVLA).'
address: |
$^{1}$National Radio Astronomy Observatory Graduate Summer Student, 1003 Lopezville Road, Socorro, NM, 87801, USA\
$^{2}$Department of Astronomy and Astrophysics, University of Toronto, Toronto, ON, M5S 3H4, Canada\
$^{3}$National Radio Astronomy Observatory, 1003 Lopezville Road, Socorro, NM, 87801, USA
author:
- 'Ajay Gill$^{1,2}$, Robert J. Selina$^{3}$, Bryan J. Butler$^{3}$'
title: 'A Study of the Compact Water Vapor Radiometer for Phase Calibration of the Karl G. Jansky Very Large Array'
---
Background
==========
Phase fluctuations
------------------
The prospect of water vapor radiometry (WVR) for correcting atmospheric phase fluctuations has long been recognized by the radio astronomy community. ALMA relies on radiometry of the 183 GHz water vapor line for observations at high frequencies [@Sterling2004]. The expanded bandwidth of the NSF’s Karl G. Jansky Very Large Array (VLA) receivers and recent technological developments improve the prospects of WVR corrections for phase calibration for the VLA. Successful use of WVR for phase calibration at the VLA would also be a useful case study for using WVR corrections for the Next Generation Very Large Array (ngVLA).
The troposphere is the lowest layer of the atmosphere, extending from the ground to an elevation of 7 to 10 km. The clouds and precipitable water vapor (PWV) move across the troposphere at $v_{V}$ $\sim$ 10 m$/$s, leading to a time and frequency varying refractive index, $n_{V}(\nu,t)$, in the layer. Planar radio wavefronts propagating through regions of $n_{V}(\nu,t)$ undergo refraction and an excess in electrical path length, leading to fluctuations of the phase of the wavefronts seen by an antenna, as shown in Figure \[PF\].
The phase fluctuations due to PWV limit the spatial resolution of radio interferometers [@SH1997]. The total excess path length through the atmosphere can be estimated to be [@TS2001] $$\label{eqn1}
\mathcal{L} \simeq \mathcal{L}_{D} + \mathcal{L}_{V} \simeq 0.228P_{0} + 6.3w$$
where $\mathcal{L}_{D,V}$ is the excess path length due to dry air and PWV respectively, $P_{0}$ is the atmospheric pressure in millibars, and *w* is the height of the column of water condensed from the atmosphere. We assume $P_{0}$ and $\mathcal{L}_{D}$ are slowly varying, and that the primary cause of phase fluctuations is $\mathcal{L}_{V}$. PWV of extent *w* causes a phase change of the incoming radio wave, $\Delta\phi_{V}$, as [@B1999]
$$\label{eqn2}
\Delta\phi_{V} \simeq \frac {12.6\pi} {\lambda} \times w$$
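Equations \[eqn1\] and \[eqn2\] are simple enough to evaluate directly. The sketch below reproduces the numbers quoted later in the text ($w \sim 35~\mu$m of PWV at $\lambda = 7$ mm); the units (mm for lengths, mbar for pressure) are assumptions of this illustration:

```python
import math

def excess_path_mm(P0_mbar, w_mm):
    # Eq. [eqn1]: L ~ 0.228*P0 + 6.3*w (dry-air term + vapor term), in mm.
    return 0.228 * P0_mbar + 6.3 * w_mm

def phase_change_rad(w_mm, wavelength_mm):
    # Eq. [eqn2]: phase change of the incoming wave due to PWV of extent w.
    return 12.6 * math.pi / wavelength_mm * w_mm

w = 0.035             # 35 um of PWV, expressed in mm
vapor_path = 6.3 * w  # vapor-only excess path: ~0.22 mm, i.e. ~lambda/30 at 7 mm
dphi = phase_change_rad(w, 7.0)  # ~0.2 rad of phase at lambda = 7 mm
```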
A plot of the optical depth at the VLA site with *w* = 4 mm is shown in Figure \[OD\]. Below 130 GHz, both O$_{2}$ and H$_{2}$O contribute to the optical depth, whereas H$_{2}$O primarily contributes to the optical depth above 130 GHz.
Techniques for phase correction
-------------------------------
The various techniques used for phase calibration are briefly discussed below.
1. *Self calibration* - The target source itself can be used for phase correction if it has a signal-to-noise ratio of at least $\sim$ 2 over the self-calibration averaging time [@CF1999]. This technique is not possible for weaker continuum or spectral line sources.
2. *Fast switching* - This technique consists of observing a strong calibrator source with a calibration cycle time short enough to reduce atmospheric phase fluctuations [@FP1999; @HO1995]. Fast switching decreases observation time on the target source and places stringent constraints on the slew rate and mechanical settling time of the antenna.
3. *Paired array* - This technique uses a separate array that observes a nearby calibrator source while the target array continuously observes the target source [@Hold1992]. Paired elements constrain array configuration and reduce the sensitivity of the interferometer by a factor of 1.5 - 2 [@Clark2015].
4. *Water vapor radiometry* - Fluctuations in PWV cause fluctuations in atmospheric brightness emission, *T$_{B}$*, which are assumed to be proportional to phase fluctuations of the astronomical signal, $\Delta\phi_{V}$. WVR consists of using a radiometer to measure variations in *T$_{B}$* to correct for $\Delta\phi_{V}$ [@Lay1997]. WVR must be able to work well consistently under a variety of weather conditions.
Liquid water
------------
A complication arises for WVR because liquid and frozen water are also sources of continuum emission ($T_{B}$ $\propto$ $\nu$$^{2}$) but have a minimal contribution to the excess electrical path length. Therefore, water vapor emission may not correlate well with the astronomical phase fluctuations in the presence of liquid water, presenting a challenge for continuum radiometry.
Water vapor emits both line and continuum emission. To distinguish water vapor emission from liquid water emission, one of the water vapor lines (22 or 183 GHz) can be observed with multiple channels around the line. One of the channels should be away from the water vapor emission line to distinguish the $\nu$$^{2}$ continuum of liquid water from the vapor. WVR can be achieved in two ways: absolute and empirical radiometric phase correction.
Absolute radiometric phase correction
-------------------------------------
The brightness temperature due to atmospheric emission can be given by the radiometry equation as [@Dicke1946]
$$\label{Dicke}
T_{B} = T_{atm} (1 - e^{-\tau_{tot}})$$
where *T*$_{atm}$ is the physical temperature of the atmosphere, and $\tau$$_{tot}$ is the total optical depth, which depends on dry air and water vapor. We assume $\tau$$_{tot}$ can be separated into three parts [@Carilli]
$$\label{ttot}
\tau_{tot} = A_{\nu} w_{o} + B_{\nu} + A_{\nu} w_{rms}$$
where *A$_{\nu}$* is the optical depth per mm of PWV as a function of frequency, *w$_{o}$* is the temporally stable component of PWV, *B$_{\nu}$* is the temporally stable optical depth due to dry air as a function of frequency, and *w$_{rms}$* is the time-varying component of PWV. We assume a temporally stable optical depth term, $\tau$$_{o}$ $\equiv$ *A$_{\nu}$w$_{o}$* + *B$_{\nu}$*, and a time-varying optical depth term, $\tau$$_{rms}$ $\equiv$ *A$_{\nu}$w$_{rms}$*, and that $\tau$$_{o}$ $\gg$ $\tau$$_{rms}$. Inserting equation \[ttot\] into \[Dicke\] and assuming $\tau$$_{rms}$ $\ll$ 1 yields
$$\label{Tbred}
T_{B} = T_{atm} (1 - e^{-\tau_{o}}) + T_{atm} e^{-\tau_{o}} \bigg[A_{\nu}w_{rms} - \frac {(A_{\nu}w_{rms})^2} {2} + ...\bigg]$$
The first term in equation \[Tbred\] represents the temporally stable non-varying *T$_{B}$*, and the second term represents the fluctuating component due to variations in PWV, which to first-order reduces to $$\label{Tbrms}
T^{rms}_{B} = T_{atm} e^{-\tau_{o}} A_{\nu}w_{rms}$$
Absolute radiometric phase correction consists of measuring *T*$^{rms}_{B}$ using a radiometer, inverting equation \[Tbrms\] to derive the fluctuation in PWV, *w$_{rms}$*, and using equation \[eqn2\] to infer and correct for the fluctuation in phase, $\Delta\phi_{V}$, along the line of sight. However, there are some uncertainties involved with using absolute radiometry. The above derivations presume that the PWV fluctuations occur in a single narrow layer of known height, *h$_{turb}$*, that remains constant over time. If the PWV fluctuations are instead distributed over a large range of altitudes, then the height of the dominant fluctuation at any given time must be known to convert *T*$^{rms}_{B}$ into $\Delta\phi_{V}$. If PWV fluctuations at different heights contribute to *T*$^{rms}_{B}$ simultaneously, the conversion becomes inaccurate. Another uncertainty in making absolute radiometric corrections involves errors in the theoretical atmospheric models that relate *w* to *T$_{B}$*.
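Under those assumptions the first-order inversion is straightforward. Here is a minimal sketch; the values chosen for $T_{atm}$, $\tau_{o}$ and $A_{\nu}$ are illustrative placeholders, not numbers from the text:

```python
import math

def wrms_from_Tb_rms(Tb_rms_K, T_atm_K, tau_o, A_nu_per_mm):
    # Invert Eq. [Tbrms]: w_rms = T_B^rms / (T_atm * exp(-tau_o) * A_nu).
    return Tb_rms_K / (T_atm_K * math.exp(-tau_o) * A_nu_per_mm)

def dphi_from_wrms(w_rms_mm, wavelength_mm):
    # Eq. [eqn2] applied to the PWV fluctuation.
    return 12.6 * math.pi / wavelength_mm * w_rms_mm

# Assumed atmosphere: T_atm = 270 K, tau_o = 0.05, A_nu = 0.01 per mm of
# PWV. A measured 25 mK brightness fluctuation then maps to ~10 um of
# PWV, and from there to a line-of-sight phase fluctuation at 7 mm.
w_rms = wrms_from_Tb_rms(0.025, 270.0, 0.05, 0.01)
dphi = dphi_from_wrms(w_rms, 7.0)
```

The systematic uncertainties discussed above enter through $\tau_{o}$ and $A_{\nu}$: any model error in those coefficients biases every recovered $w_{rms}$ by the same factor.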
Empirical radiometric phase correction
--------------------------------------
Absolute radiometric phase correction at each antenna places stringent requirements on the accuracy of the measured *T$_{B}$*, ancillary data (*T$_{atm}$*, *h$_{turb}$*, etc.), and of the theoretical atmospheric models. A way to avoid some of these uncertainties is to empirically calculate the correlation between *T*$^{rms}_{B}$ and $\Delta\phi_{V}$ by observing a strong calibration source at regular intervals. The phase fluctuation is a differential measurement between the phase, $\phi$, of antennas *i* and *j* of a single baseline taken as [@Chandlera]
$$\label{diffP}
\Delta\phi_{V} = \phi_{i} - \phi_{j}$$
The brightness emission fluctuation, *T*$^{rms}_{B}$ $\equiv$ [*$\Delta$T$_{B}$*]{}, to be compared with $\Delta$$\phi_{V}$ is also a differential measurement between the *observable*, *$\Delta$T*, of antennas *i* and *j* of the same baseline taken as
$$\label{diffT}
\Delta T_{B} = \Delta T_{i} - \Delta T_{j}$$
The scaling factor derived from the correlation between $\Delta$$T_{B}$ and $\Delta$$\phi_{V}$ can then be used to correct for $\Delta$$\phi_{V}$. Empirical radiometry consists of finding correlations between *$\Delta$T$_{B}$* and $\Delta\phi_{V}$ for every baseline, so each antenna in the array requires a WVR. For preliminary tests with a single baseline, a minimum of two WVRs are required to test for correlations. A minimum of three WVRs are required to determine phase closure errors.
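The scaling factor itself can be obtained as a least-squares slope over the calibrator scans. A sketch with synthetic data; the proportionality constant of 40 rad/K is an arbitrary assumption of the example:

```python
import numpy as np

def empirical_scaling(dTb, dphi):
    # Least-squares slope k minimizing |dphi - k*dTb|^2 over a baseline's
    # calibrator scans, where dTb = DT_i - DT_j (Eq. [diffT]) and
    # dphi = phi_i - phi_j (Eq. [diffP]).
    dTb = np.asarray(dTb, dtype=float)
    dphi = np.asarray(dphi, dtype=float)
    return np.sum(dTb * dphi) / np.sum(dTb ** 2)

rng = np.random.default_rng(0)
dTb = rng.normal(0.0, 0.025, size=200)  # differential observable, K
dphi = 40.0 * dTb                       # phases exactly proportional here
k = empirical_scaling(dTb, dphi)        # recovers ~40.0 rad/K
```

In practice $k$ would be refit at regular intervals, since the effective conversion drifts with the height and structure of the turbulent layer.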
Prior water vapor radiometers
=============================
A three-channel WVR was tested on VLA antennas 26 and 28 for one year as described in [@Chandlerb]. The 183 GHz line at the VLA site is too saturated, so the 22 GHz line was used. The block diagram of the VLA WVR is shown in Figure \[WVRBD\].
![VLA water vapor radiometer block diagram [@Chandlerb].[]{data-label="WVRBD"}](WVRBD "fig:"){width="13cm" height="6cm"}
The channels were placed at $\nu_{1}$ = 21.00 GHz with $\Delta\nu_{1}$ = 750.00 MHz, $\nu_{2}$ = 22.25 GHz with $\Delta\nu_{2}$ = 1000.00 MHz, and $\nu_{3}$ = 23.50 GHz with $\Delta\nu_{3}$ = 750.00 MHz. The *observable*, $\Delta T$, per antenna was defined as
$$\label{eqn5}
\Delta T = w_{1}T_{1} + w_{2}T_{2} + w_{3}T_{3}$$
where the weights were *$w_{1}$* = - 0.5, *$w_{2}$* = 1.0, and *$w_{3}$* = - 0.5, and *$T_{1}$*, *$T_{2}$*, and *$T_{3}$* were the brightness temperatures of the three channels. The location of the channels is shown in Figure \[3ch\].
![Three channels of the VLA water vapor radiometer [@Chandlerb].[]{data-label="3ch"}](WVR "fig:"){width="3.5in"}
For $\sim$ 35 $\mu$m of PWV, an excess path length $\mathcal{L}_{V}$ $\sim$ 220 $\mu$m (or $\lambda$/30 for $\lambda$ = 7 mm) was predicted from equation \[eqn1\]. By considering possible atmospheric models and the channel locations, it was predicted that $\sim$ 35 $\mu$m of PWV would result in $\Delta$*T* $\sim$ 25 mK. For an observable $\Delta T_{rms}$ $\sim$ 25 mK with the chosen weights, the stability of an individual channel needed to be $\sim$ 20 mK. For typical system temperatures of 50 - 100 K, this implied a gain stability $\Delta g/g$ $\sim$ 2 - 4 $\times$ 10$^{-4}$. Test results showed that the sensitivity and gain stability requirements of the WVRs were met. The astronomical phase on baseline 26 - 28 was most significantly improved when the sky was clear and the phase fluctuations were large. Phase residuals were not significantly improved in the presence of clouds or when phase fluctuations were small.
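Two properties of this scheme are easy to verify numerically: the $(-0.5, 1.0, -0.5)$ weights cancel any contribution that is constant or linear in frequency across the three channels (which is how a liquid-water-like continuum slope is suppressed to first order), and the quoted gain-stability requirement follows from $\Delta g/g \sim \Delta T / T_{sys}$. A sketch:

```python
def observable(T1, T2, T3, w=(-0.5, 1.0, -0.5)):
    # Eq. [eqn5]: DT = w1*T1 + w2*T2 + w3*T3.
    return w[0] * T1 + w[1] * T2 + w[2] * T3

# Constant and linear-in-frequency brightness terms cancel exactly:
assert observable(60.0, 60.0, 60.0) == 0.0  # common mode removed
assert observable(59.0, 60.0, 61.0) == 0.0  # linear slope removed

# Required per-channel gain stability for ~20 mK on a 50-100 K system:
for Tsys in (50.0, 100.0):
    print(0.020 / Tsys)  # 0.0004 and 0.0002, i.e. the quoted 2-4 x 10^-4
```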
Compact water vapor radiometer
==============================
Motivation
----------
The prior WVRs were designed for the VLA rather than the Expanded Very Large Array (EVLA). The K-band for the VLA had a frequency range from 21.40 - 24.40 GHz, so the VLA WVRs were limited to three channels because of the narrow 3.00 GHz bandwidth. The K-band for the EVLA has a frequency range of 18.00 - 26.50 GHz with a bandwidth of 8.50 GHz, so a five-channel compact water vapor radiometer (CWVR) was proposed, designed, and built. The five-channel CWVR improves the ability to distinguish liquid water from water vapor. The CWVR is based on Monolithic Microwave Integrated Circuit (MMIC) technology instead of discrete components, and its smaller size makes it easier to fit inside the space available in the vertex room of the antenna.
CWVR system overview
--------------------
### Block diagram
The block diagram of the CWVR is shown in Figure \[CWVRBD\]. Major elements of the diagram are described in the following sections.
![Compact water vapor radiometer block diagram.[]{data-label="CWVRBD"}](CWVRBD "fig:"){width="6.5in"}
### K-band Dewar
The astronomical signals arrive at the feed horn of the K-band receiver. The signals pass through a 90$\degree$ phase shifter, which converts the two linear orthogonal polarizations into two circular polarizations. The two circular polarizations are then split into right-hand circularly polarized (RCP) and left-hand circularly polarized (LCP) signals using an Orthomode Transducer. The calibration signal from the noise diode is coupled into the RCP and LCP signals. The RCP and LCP signals are then sent through low-noise amplifiers (LNAs) with a gain of 35 dB built by the NRAO Central Development Lab. The K-band Dewar is cooled to two stages of 15K and 50K to minimize noise generated by the LNA. Note that the noise diode is temperature stabilized as part of the CWVR retrofit.
### Isolators, post-amplifiers, and filters
After an initial gain of 35 dB with the LNA, the RCP and LCP signals pass through a MICA T-318S50 isolator, a three-port device used to minimize reflections back into the input signal path by isolating the reflections to a third port. The signal from the isolator is sent to K&L Microwave bandpass filters with a center frequency of 22.25 GHz and a bandwidth of 8.50 GHz, providing the 18.00 - 26.50 GHz bandwidth of the K-band receiver. After the filters, the signal passes through another MICA T-318S50 isolator, after which it goes through a Quinstar QLN-2240J0 post amplifier with a gain of 32 dB. After the post amp, the signal passes through another MICA T-318S50 isolator and a directional coupler. The through signal from the directional coupler labeled *TO I.F* goes to the T303 IF down converter. The coupled signal is lower in power by 10 dB and goes to the MMIC block through another MICA T-318S50 isolator. This portion of the signal path is integrated into the CWVR as seen in Figure \[CWVRBD\] and mounted on a temperature controlled plate for gain stability.
### MMIC block
The block diagram of the MMIC block is shown in Figure \[MMIC\]. An input MA4SW210 PIN diode switch can select between the LCP and RCP input signals. The signal then passes through a CHT3091, which is a 100 step variable digital attenuator. The signal is amplified again by an ALH216 amplifier. The second MA4SW210 PIN diode switch can select between the incoming LCP/RCP signals and a termination to ground. The termination to ground allows DC offsets in the post-amps and diode detectors to be measured. The signal is then multiplexed with a frequency multiplexer into the 5 channels given in Table \[Channels\] and Figure \[5ch\].
[@ccccc@]{} Channel & $\nu$$_{low}$ & $\nu$$_{center}$ & $\nu$$_{high}$ & $\Delta\nu$\
& (GHz) & (GHz) & (GHz) & (GHz)\
1 & 18.500 & 19.250 & 20.000 & 1.500\
2 & 20.625 & 21.000 & 21.375 & 0.750\
3 & 21.750 & 22.250 & 22.750 & 1.000\
4 & 23.125 & 23.500 & 23.875 & 0.750\
5 & 24.500 & 25.250 & 26.000 & 1.500\
\[Channels\]
The DC offset counts were measured for a period of 90 minutes, and the mean values per channel are given in Table \[Offset\].
[@cccccc@]{} & Ch 1 & Ch 2 & Ch 3 & Ch 4 & Ch 5\
Mean offset counts & 272 & 270 & 296 & 235 & 207\
\[Offset\]
The signal then goes through another CHT3091 attenuator per channel. The attenuators in the MMIC block are controlled through the function control box with a DB25 cable. Then, the signal goes to a MBD1057 tunnel diode detector, with the detector voltage output amplified by TLC2652 and OP27 amplifiers. The 0 - 10 V output from the amplifiers, proportional to the input RF power per channel, is then sent to the Voltage to Frequency board.
### Voltage to Frequency board
The 0 - 10 V output per channel from the MMIC block is sent to a Voltage to Frequency (V-F) Converter (VFC110AP), which converts the 0 - 10 V voltage inputs to 0 - 2 MHz frequency outputs. The frequency outputs of the five channels are sent to the F318 module using ST-E2000 fiber optic cables.
### F318 module
The block diagram of the F318 module is shown in Figure \[F318BD\]. The F318 module is required to interface the EVLA monitor and control system to the fiber optic cables of the five channel CWVR. The F318 also monitors temperatures within the CWVR. Figure \[F318BD\] shows the main boards of the module. The Analog to Digital Interface Board brings in a total of twenty-nine analog voltages through four input channels. The F318 MIB Interface Board is the primary interface between the MIB and the CWVR, with the frequency counter implemented in the Xilinx FPGA. These two boards along with the MIB provide for a simple interface for analog and digital monitoring of the CWVR. Further details on the functionality of the F318 module with the CWVR are provided in [@Koski2017].
The 10 Hz noise diode calibration signal, with a period of 100 ms and a 50$\%$ duty cycle, gates the counter so that pulses are counted separately during the high and low halves of each cal cycle. The data are collected every 100 ms at the rising edge of the 10 Hz signal. For the waveforms shown in Figure \[VFWave\], as an example,
$$Ch\emph{1}_{low,high}= \frac {802.1 \times 10^3 \; counts} {sec} \times 50 \; ms \times \frac {1 \; sec} {10^3 \; ms} \simeq 40,000 \; counts$$
$$Ch\emph{1}_{total}= Ch\emph{1}_{low} + Ch\emph{1}_{high} \simeq 80,000 \;counts$$
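As a sanity check, the pulse-count arithmetic above can be reproduced in a few lines (a sketch; the 802.1 kHz V-F output frequency is simply the example value read off Figure \[VFWave\], not a fixed property of the instrument):

```python
def vf_counts(freq_hz, gate_ms):
    """Pulses accumulated by the frequency counter over one gate interval."""
    return freq_hz * gate_ms / 1e3  # gate time converted from ms to s

# 802.1 kHz V-F output counted over one 50 ms half of the 10 Hz cal cycle
ch1_low = vf_counts(802.1e3, 50.0)   # ~40,000 counts per half-cycle
ch1_total = 2 * ch1_low              # low + high halves, ~80,000 counts
```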
### Temperature control
The temperature of the temperature controlled plate (TCP) on which the post-amps, filters, and the MMIC block are mounted is maintained using a CP-036 Peltier Thermoelectric Cooler (TEC). The TEC is controlled using a MPT-5000 linear, bipolar temperature controller. The MPT-5000 contains a 12 turn trimpot, which can be adjusted to set the desired temperature. The heat from the TCP is absorbed into the heat sink of the TEC, and a fan is used to remove this heat to the outside of the CWVR. For laboratory testing, the temperature of the TCP was set to 30$\degree$C to minimize reflections and losses in Teflon cables. The PID control loop employs a TCS-620 temperature sensor, which has a resistance of 20 k$\ohm$ at 25$\degree$C.
The Temperature Setpoint (TS) and Temperature Monitor (TM) signals of the MPT-5000 are used to set the temperature of the TEC and to monitor the temperature of the TCS-620, respectively. During stable operation, the voltage of TS closely matches that of TM. Two additional AD590 temperature sensors were used to monitor the temperature of the TCP adjacent to the post amps and the ambient temperature within the CWVR enclosure. Table \[MIBlabels\] lists the names and descriptions of the MIB signals related to temperature control and monitoring.
[@cc@]{} Signal name & Description\
WVR 7 & Temperature monitor for TCS-620 (TCP)\
WVR 8 & Temperature setpoint for TEC\
WVR\_Temp1 & Temperature monitor for AD590 (CWVR ambient)\
WVR\_Temp2 & Temperature monitor for AD590 (TCP)\
\[MIBlabels\]
Compact water vapor radiometer requirements
===========================================
Channel isolation requirement
-----------------------------
The integrated isolation between any two channels is defined as
$$\label{Iso}
IS_{xy}= \frac {\int_{\nu_{i}}^{\nu_{f}} P_{x}(\nu) P_{y}(\nu) d\nu} {\int_{\nu_{i}}^{\nu_{f}} P_{y}(\nu) d\nu}$$
where $IS_{xy}$ is the power leakage from channel x into channel y, $\nu_{i}$ = 17.00 GHz, $\nu_{f}$ = 27.00 GHz, and $P_{x}(\nu)$ and $P_{y}(\nu)$ are the output powers at frequency $\nu$ of channels x and y, respectively. The requirement for channel isolation $IS_{xy}$ was specified to be $<$ -20 dB, or a power leakage from channel x into channel y of $<$ 1$\%$.
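Equation \[Iso\] can be evaluated numerically from measured (or modeled) passbands. The sketch below uses toy unit-peak Gaussian passbands, not the measured responses of Figure \[Presponse\], purely to illustrate the calculation:

```python
import numpy as np

def isolation_db(p_x, p_y, nu):
    """IS_xy of equation [Iso]: power leakage of channel x into
    channel y, expressed in dB.  Assumes a uniform frequency grid,
    so the grid spacing cancels in the ratio of the two integrals."""
    num = np.sum(p_x * p_y)
    den = np.sum(p_y)
    return 10.0 * np.log10(num / den)

nu = np.linspace(17.0, 27.0, 4001)                      # GHz
band = lambda c, w: np.exp(-0.5 * ((nu - c) / w) ** 2)  # toy passband
# leakage between two well-separated toy channels
print(isolation_db(band(21.0, 0.3), band(25.25, 0.3), nu))
```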
Gain stability requirement
--------------------------
From the initial stability tests, Channel 1 was measured to be defective, so it was subsequently ignored, and the gain stability requirements were set for Channels 2 to 5. For Channels 2 to 5 of the CWVR given in Table \[Channels\], the *observable*, $\Delta T$, per antenna can be defined as
$$\label{obs}
\Delta T \propto \Delta P_{in} = w_{2}P_{2} + w_{3}P_{3} + w_{4}P_{4} + w_{5}P_{5}$$
where the weights are typically $w_{2}$ = -0.5, $w_{3}$ = 1.0, $w_{4}$ = -0.5, $w_{5}$ = 0.25, and $P_{2}$, $P_{3}$, $P_{4}$, and $P_{5}$ are the power outputs due to sky emission in Channels 2 to 5. It can be estimated from equation \[eqn1\] that $\sim$ 35 $\mu$m of PWV leads to $\mathcal{L}_{V}$ $\sim$ 220 $\mu$m of excess electrical path length ($\lambda$/30 for $\lambda$ = 7 mm) and will result in $\Delta T$ $\sim$ 25 mK [@B1999]. For $\Delta T_{rms}$ $\sim$ 25 mK, the stability of each individual channel needs to be $\sim$ 20 mK. For typical system temperatures of 50 - 100 K, this requires a gain stability of $\Delta g/g$ $\sim$ 2.5 - 5 $\times$ $10^{-4}$ for $\Delta T$ and $\Delta g_{i}/g_{i}$ $\sim$ 2 - 4 $\times$ $10^{-4}$ for each individual channel.
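The observable of equation \[obs\] is just a fixed linear combination of the four channel powers; a minimal sketch with the nominal weights (the power values passed in below are arbitrary placeholders, not measured data):

```python
# Nominal channel weights from the text, keyed by channel number
W = {2: -0.5, 3: 1.0, 4: -0.5, 5: 0.25}

def delta_p_in(p):
    """Delta P_in of equation [obs]; p maps channel number -> sky power."""
    return sum(W[k] * p[k] for k in W)

# With equal power in every channel the weights sum to 0.25, not zero,
# so a perfectly flat spectrum is not fully rejected by this combination.
print(delta_p_in({2: 1.0, 3: 1.0, 4: 1.0, 5: 1.0}))  # 0.25
```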
The short timescale, $\tau_{short}$, for which the CWVR needs to be gain stable is
$$\label{taushort}
\tau_{short} = \frac {D} {v_{V}} = \frac {25 \; m} {10 \; m/sec} = 2.5 \; sec$$
where *D* is the diameter of the antenna and $v_{V}$ is the average speed of water vapor across the troposphere. The long timescale for which the CWVR needs to be gain stable is $\tau_{long}$ $\sim$ $10^{3}$ sec to allow for longer calibration cycles of the interferometer. The stability of the CWVR channels on different timescales, $\tau$, was characterized by the Allan Standard Deviation (ASD), $\sigma_{P}(\tau)$, defined by
$$\label{ASD}
\sigma_{P}(\tau) = \left\{\frac {1} {2} \langle \left\{P(t) - P(t-\tau)\right\}^2 \rangle \right\}^\frac {1} {2}$$
For single channel ASDs with N time and output power data points, equation \[ASD\] becomes
$$\label{ASDSC}
\sigma_{P}(\tau) = \left\{\frac {1} {2} \sum_{j=1}^{N/2} \frac {1} {N - \tau_{j}} \sum_{i=1}^{N - \tau_{j}} \left\{P(t_{i}+\tau_{j}) - P(t_{i})\right\}^2 \right\}^\frac {1} {2}$$
where $\sigma_{P}$($\tau$) is normalized with respect to
$$\label{MSC}
\mu_{P} = \langle P(t) \rangle$$
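The single-channel statistic can be sketched in a few lines; this implementation uses the conventional per-lag form of equation \[ASD\], normalized by the mean power as in equation \[MSC\], and assumes a uniformly sampled 1 Hz series:

```python
import numpy as np

def fractional_asd(p, lags):
    """Fractional Allan Standard Deviation of a uniformly sampled power
    series p at the given integer-sample lags: the per-lag form of
    eq. [ASD], normalized by the mean power mu_P of eq. [MSC]."""
    p = np.asarray(p, dtype=float)
    mu = p.mean()
    return np.array([np.sqrt(0.5 * np.mean((p[m:] - p[:-m]) ** 2)) / mu
                     for m in lags])
```

A convenient check: for white noise of rms $\sigma$ about mean $\mu$, the fractional ASD is $\sigma/\mu$ at every lag.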
For channel difference ASDs with N time and output power data points, equation \[ASD\] becomes
$$\label{ASDDC}
\sigma_{P_{x}-P_{y}}(\tau) = \left\{\frac {1} {2} \sum_{j=1}^{N/2} \frac {1} {N - \tau_{j}} \sum_{i=1}^{N - \tau_{j}} \left\{[P_{x}(t_{i}+\tau_{j}) - P_{y}(t_{i}+\tau_{j})] - [P_{x}(t_{i}) - P_{y}(t_{i})] \right\}^2 \right\}^\frac {1} {2}$$
where $\sigma_{P_{x}-P_{y}}$($\tau$) is normalized with respect to
$$\label{MDC}
\mu_{P_{x}-P_{y}} = \frac {\langle P_{x}(t) + P_{y}(t) \rangle} {2}$$
For the ASD of $\Delta$$P_{in}$ with N time and output power data points, equation \[ASD\] becomes
$$\label{ASDobs}
\sigma_{\Delta P_{in}}(\tau) = \left\{\frac {1} {2} \sum_{j=1}^{N/2} \frac {1} {N - \tau_{j}} \sum_{i=1}^{N - \tau_{j}} \left\{\sum_{k=2}^{5} w_{k}P_{k}(t_{i} + \tau_{j}) - \sum_{k=2}^{5} w_{k}P_{k}(t_{i}) \right\}^2 \right\}^\frac {1} {2}$$
where $\sigma_{\Delta P_{in}}$($\tau$) is normalized with respect to
$$\label{Mobs}
\mu_{\Delta P_{in}} = \frac {\langle P_{2}(t) + P_{3}(t) + P_{4}(t) + P_{5}(t)\rangle} {4}$$
Temperature stability requirement
---------------------------------
Since the temperature coefficients of the CWVR components were not established prior to these tests, the physical temperature stability requirement inside the CWVR and on the TCP was arbitrarily set to $\sim$ 25 mK over 10$^3$ sec timescales.
Results
=======
Dynamic range
-------------
The dynamic range of the MBD1057 tunnel diode detectors was determined by sending continuous wave input power at the center of each channel at power levels ranging from -90 to -30 dBm. The resulting plot of output counts versus input power level per channel is shown in Figure \[DRange\]. At -90 dBm input, the channels are at the noise floor. Channel 5 has the highest noise floor, possibly due to its wider than desired design bandwidth.
As the input power increases, the counts increase and approach the linear region labeled *square law*. In the square-law region, $\Delta T$ $\propto$ $\Delta P_{in}$ $\propto$ $V_{out}$ $\propto$ $Counts_{out}$. The CWVR should be operated in the square-law region, as it is in this region that the output counts are most sensitive to changes in input power. The square-law region ranges from $\sim$ -58 to -52 dBm, providing a dynamic range of $\sim$ 6 dB. At input powers above -52 dBm, the diode detectors go into saturation.
In the region labeled *fit*, ranging from -90 to -52 dBm, the curve for each channel was fit to the form given in equation \[Fit\] to find the relationship between counts and input power. The fit effectively increases the usable dynamic range of the detectors to $\sim$ 18 dB.
$$\label{Fit}
P_{in} \; {\rm (dBm)} =\frac {1} {A} \left\{\ln(Counts) - \ln(B)\right\}$$
where the constants A and B per channel are given in Table \[Constants\].
[@ccc@]{} & A & B\
Ch 1 & 0.225 & 1.518$\times$$10^{10}$\
Ch 2 & 0.232 & 2.191$\times$$10^{10}$\
Ch 3 & 0.228 & 1.809$\times$$10^{10}$\
Ch 4 & 0.231 & 2.107$\times$$10^{10}$\
Ch 5 & 0.224 & 1.455$\times$$10^{10}$\
\[Constants\]
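Equation \[Fit\] can be applied directly with the per-channel constants of Table \[Constants\]; a short sketch (the round trip $Counts = B\,e^{A P_{in}}$ recovers $P_{in}$ exactly, which is a useful self-check):

```python
import numpy as np

# (A, B) per channel, taken from Table [Constants]
FIT = {1: (0.225, 1.518e10), 2: (0.232, 2.191e10), 3: (0.228, 1.809e10),
       4: (0.231, 2.107e10), 5: (0.224, 1.455e10)}

def input_power_dbm(counts, channel):
    """Invert the detector response of eq. [Fit]:
    P_in (dBm) = (1/A) [ln(counts) - ln(B)]."""
    a, b = FIT[channel]
    return (np.log(counts) - np.log(b)) / a
```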
Channel isolation
-----------------
The power response of the CWVR was measured as a function of frequency. Frequency sweeps were done from 17.00 to 27.00 GHz at $\sim$ 6.41 MHz/sec at input power levels from -30 to -55 dBm in 5 dBm steps. The Keysight PSG E8257D 250.00 kHz - 40.00 GHz Analog Signal Generator was used for the sweeps, and a MegaPhase RF Orange cable with a maximum operating frequency of 50.00 GHz was used to connect the generator to the CWVR input. For the sweeps, input power was sent only to the LCP input of the CWVR. The overall power response was determined by combining the unsaturated regions of the response curves at input power levels ranging from -30 to -55 dBm, and it is shown as a function of frequency in Figure \[Presponse\].
Using the data in Figure \[Presponse\], the isolation between all channels was calculated using equation \[Iso\] and the results are given in Table \[IsoTable\]. All values are $<$ -20 dB, indicating $<$ 1$\%$ power leakage between any two channels, which meets the specification.
-------- -------- -------- --------
-25.09 -30.79 -29.25 -32.38
-27.42 -22.90 -26.10 -27.72
-31.99 -21.78 -20.25 -25.46
-31.15 -25.67 -20.95 -22.74
-32.34 -25.35 -24.22 -20.81
-------- -------- -------- --------
\[IsoTable\]
Gain stability
--------------
The instrumental setup used to test the gain stability of the CWVR is shown in Figure \[GSBlock\]. The K-band noise diode source from within the CWVR generates broadband noise from 18.00 - 26.50 GHz. The noise diode signal is taken as an output from the CWVR and is sent through a MICA T-318S20 isolator to minimize reflections. The unamplified noise diode signal is too weak to provide output counts in the square-law region, so a K-band LNA with a gain of 35 dB was used to amplify the signal. The LNA is within the Dewar, which is cooled to $\sim$ 6 K to minimize noise and gain fluctuations generated by the LNA itself. The output signal from the Dewar has 13 dB of attenuation before going to the LCP input of the CWVR.
With the combination of the 13 dB attenuation and adjustment of the CHT3091 attenuators in the MMIC block, the output counts were set to $\sim$ 60,000 counts per channel in the square-law region of the diode detectors. The test was run for a period of 64 hours, and data from the output counts per channel and the temperature sensors were collected every second.
The measured output counts per channel over the 64 hr period averaged with a 20 min running mean are shown in Figure \[Ch1-5\]. Channel 1 is much more unstable than the rest, possibly due to its attenuation varying with time, so it was subsequently ignored. The counts from Channel 2 to 5 averaged with a 20 min running mean are shown in Figure \[Ch2-5\]. There is a slight offset in counts between the channels due to the step size of the CHT3091 attenuators.
### Single channel Allan Standard Deviations
The ASD for Channels 2 to 5 was calculated using equations \[ASDSC\] and \[MSC\], and the results are shown in Figure \[SinglASD\]. The two horizontal dashed lines represent the single channel gain stability requirement of 2 - 4 $\times$ $10^{-4}$. The two vertical lines represent the timescale $\tau$ = 2.5 - $10^{3}$ sec over which the gain stability is required. Figure \[SinglASD\] shows that the gain stability for Channels 2 to 5 is $<$ 2 $\times$ $10^{-4}$ over $\tau$ = 2.5 - $10^{2.65}$ sec and within 2 - 4 $\times$ $10^{-4}$ from $\tau$ = $10^{2.65}$ - $10^{3}$ sec, so an improvement from $\tau$ = $10^{2.65}$ - $10^{3}$ sec is desirable.
### Channel difference Allan Standard Deviations
The channel difference ASDs were also calculated, as the single channel ASDs are limited by the stability of the input noise diode source and common signal path elements, whereas the difference ASDs measure the intrinsic stability of the independent signal paths. The channel difference ASDs were calculated for Channels 2 to 5 with respect to each other, and the results are shown in Figures \[Ch2diff\], \[Ch3diff\], \[Ch4diff\], and \[Ch5diff\]. The results show that the gain stability requirement of 2 - 4 $\times$ $10^{-4}$ over $\tau$ = 2.5 - $10^{3}$ sec is successfully met in all cases.
### Observable Allan Standard Deviation
The ASD of $\Delta P_{in}$ was calculated using equations \[ASDobs\] and \[Mobs\], and the result is shown in Figure \[ObsASD\]. The result shows that the gain stability requirement of 2.5 - 5 $\times$ $10^{-4}$ over $\tau$ = 2.5 - $10^{3}$ sec is successfully met.
### Temperature Allan Standard Deviations
The ASD of the three temperature sensors was calculated, and the results are shown in Figure \[TempASD\]. The results show that the temperature stability requirement of $\sim$ 25 mK was met for all the thermistors. The ASD of the TCP AD590 follows that of the TCP TCS-620 on longer timescales, as expected. The measuring circuit employed by the TCS-620 has lower noise on short timescales and is more representative of the true temperature stability of the TCP.
Gain stability with temperature correction
------------------------------------------
### Temperature correlation
Figure \[Ch2-5\] shows that the large-scale fluctuations in the output counts are consistent for Channels 2 to 5, suggesting a common source of the fluctuations. The CWVR ambient temperature and Channel 2 counts averaged with a 20 min running mean are plotted as a function of time over the 64 hr period in Figure \[Correlation\]. It is apparent from Figure \[Correlation\] that the CWVR ambient temperature and the output counts are negatively correlated.
A scatter plot of Channel 2 counts versus CWVR ambient temperature smoothed with a 5 min running mean is shown in Figure \[Scatter\]. The Pearson correlation coefficient between the Channel 2 counts and the CWVR ambient temperature is calculated to be r = -0.83 with 99.00$\%$ confidence. The coefficient of determination, R$^2$, for the linear fit is 0.69, providing an indication of the goodness of the fit. The slope of the fit indicates a change of $\sim$ 405 counts per 1$\degree$C change in the CWVR ambient temperature.
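The correlation statistics quoted above can be computed with standard tools; a sketch, where the input arrays stand in for the smoothed ambient temperature and count series (the values in the test are synthetic, not the measured data):

```python
import numpy as np

def temp_sensitivity(temp_c, counts):
    """Pearson r and linear-fit slope (counts per degC) between the
    smoothed CWVR ambient temperature and the channel output counts."""
    r = np.corrcoef(temp_c, counts)[0, 1]
    slope, _intercept = np.polyfit(temp_c, counts, 1)
    return r, slope
```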
### Temperature correction
It is possible to correct for the changes in counts due to changes in the CWVR ambient temperature. The temperature data is typically noisy over short timescales, so averaging over short timescales is necessary to effectively apply the correction. Equations \[Correct1\], \[Corr2\], and \[Corr3\] are used to apply the correction.
$$\label{Correct1}
Count_{corr,i} =
\begin{cases}
Count_{i} & \textit{for $i = 1 , 2 , 3 ,..., \;$n}\\
Count_{i} - A(T_{sm,i} - T_{ave}) & \textit{for $i = n+1,n+2,..., \;$N} \\
\end{cases}$$
$$\label{Corr2}
T_{ave} = \frac {1} {n} \sum_{i=1}^{n} T_{i}$$
$$\label{Corr3}
T_{sm,i} = \frac {1} {n} \sum_{j=i-n+1}^{i} T_{j}$$
where $A$ = -405, $Count_{corr,i}$ is the temperature corrected count at point $i$, $Count_{i}$ is the measured count at point $i$, $T_{ave}$ is the mean of the CWVR ambient temperature for the first $n$ seconds of an $N$-second total observation time, $T_{sm,i}$ is the temperature at point $i$ taken as the mean over the $n$ seconds up to and including point $i$, and $T_{j}$ is the measured CWVR ambient temperature at point $j$. The result of applying temperature correction with $n$ = 5 min for Channel 2 is shown in Figure \[CorrPlot\], and there is a clear improvement in the large-scale count fluctuations. Channels 3 to 5 show a similar improvement.
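The correction amounts to subtracting a scaled, trailing-mean temperature from the raw counts after the first $n$ samples; a sketch for a 1 Hz series, using a trailing $n$-sample mean for $T_{sm}$:

```python
import numpy as np

def correct_counts(counts, temp, n, a=-405.0):
    """Temperature-correct a 1 Hz counts series following
    equations [Correct1]-[Corr3]."""
    counts = np.asarray(counts, dtype=float)
    temp = np.asarray(temp, dtype=float)
    t_ave = temp[:n].mean()                    # eq. [Corr2]
    out = counts.copy()
    for i in range(n, len(counts)):
        t_sm = temp[i - n + 1:i + 1].mean()    # trailing mean ending at i
        out[i] -= a * (t_sm - t_ave)
    return out
```

With a constant temperature the correction vanishes, and a step in temperature is removed once the trailing window has passed the step, which makes for easy checks.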
### Single channel Allan Standard Deviations with temperature correction
The temperature corrected and uncorrected ASDs for Channel 2 for n = 5, 10, and 20 min are shown in Figure \[Ch2ASDCorr\]. The results show that while the temperature corrected ASDs are more stable than the uncorrected ASDs at $\tau_{long}$ = $10^{3}$ sec, the uncorrected ASDs provide better stability from $\tau$ $\sim$ $10^{0.8}$ - $10^{2.6}$ sec. A temperature averaging time of n = 5 min results in slightly worse gain stability from $\tau$ $\sim$ $10^{0.8}$ - $10^{2.65}$ sec but better gain stability at $\tau_{long}$ = $10^{3}$ sec than the n = 10 and 20 min cases. The results indicate that a temperature averaging time of n = 10 min is optimal for stability at both short and long timescales, and the temperature corrected ASDs with n = 10 min are $<$ 2 $\times$ $10^{-4}$ over $\tau$ = 2.5 - $10^{3}$ sec, which meets the requirement. Channels 3 to 5 show a similar improvement.
### Observable Allan Standard Deviation with temperature correction
The temperature corrected and uncorrected ASDs of the observable for n = 5, 10, and 20 min are shown in Figure \[DPinASDC\]. The results show that for a temperature averaging time of n = 10 min, the temperature corrected ASD is unaffected from $\tau$ = $10^{0.4}$ - $10^{2.4}$ sec and improves from $\tau$ = $10^{2.4}$ - $10^{3}$ sec compared to the uncorrected ASD. Therefore, it is recommended that n = 10 min be used as the optimum temperature averaging time for temperature correction of the output counts.
Summary
=======
The compact water vapor radiometer (CWVR) was characterized in the laboratory, and results show that the design meets the dynamic range, channel isolation, and gain stability requirements to be tested on an antenna. The channel isolation requirement of $<$ -20 dB was met, indicating $<$ 1% power leakage between any two channels. Channel 1 needs to be repaired. The fluctuations in output counts are negatively correlated with the CWVR ambient temperature, with a change of $\sim$ 405 counts per 1$\degree$C change in temperature. With temperature correction, the single channel and channel difference gain stability is $<$ 2 $\times$ $10^{-4}$, and the observable gain stability is $<$ 2.5 $\times$ $10^{-4}$ over $\tau$ = 2.5 - $10^{3}$ sec, all of which meet the gain stability requirement. Future work consists of building more CWVRs and testing the phase correlations on the VLA antennas to evaluate the use of WVR not only for the VLA, but also for the Next Generation Very Large Array (ngVLA).
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was possible through the National Radio Astronomy Observatory Graduate Student Research Assistantship program. I would like to thank R. Selina, B. Butler, R. Perley, J. Jackson, W. Grammer, and B. Willoughby for their guidance and support, C. Hennies for his assistance with the hardware, W. Koski for his assistance with the F318 module, the monitor and control system, and the 10 Hz calibration signal, G. Peck for the VHDL code, H. Frej for his assistance with the software, D. Urbain for his assistance with the K-band Dewar setup for the stability tests, and M. Morgan for the MMIC design. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
Butler, B. \[1999\] [*Some Issues for Water Vapor Radiometry at the VLA*]{}, VLA Scientific Memo $\#$ 177. Carilli, C. L., & Holdaway, M. A. \[1999\] [*Tropospheric phase calibration in millimeter interferometry*]{}, [*Radio Science*]{}, vol. 34, no. 4, pp. 817-840. Chandler, C. J., Brisken, W. F., Butler, B. J., Hayward, R. H., Morgan, M., Willoughby, B. E. \[2004\] [*Results of Water Vapor Radiometry Tests at the VLA*]{}, EVLA Memo $\#$ 73. Chandler, C. J., Brisken, W. F., Butler, B. J., Hayward, R. H., Morgan, M., Willoughby, B. E. \[2004\] [*A Proposal to Design and Implement a Compact Water Vapor Radiometer for the EVLA*]{}, EVLA Memo $\#$ 74. Clark, B. \[2015\] [*Calibration Strategies for the Next Generation VLA*]{}, ngVLA Memo $\#$ 2. Cornwell, T. J., and Fomalont, E. \[1999\] [*Aperture Synthesis in Radio Astronomy II*]{}, edited by Taylor, G., Carilli, C., & Perley, R., pp. 187-199, Astron. Soc. of the Pac., San Francisco, Calif. Desai, K. \[1993\] [*Measurement of turbulence in the interstellar medium*]{}, Ph.D. thesis, 89 pp., Univ. of Calif. at Santa Barbara. Dicke, R. H., Beringer, R., Kyhl, R. L., and Vane, A. B. \[1946\] [*Atmospheric absorption measurements with a microwave radiometer*]{}, *Phys. Rev., 70*, pp. 340-348. Fomalont, E., and Perley, R. A. \[1999\] [*Aperture Synthesis in Radio Astronomy II*]{}, edited by Taylor, G., Carilli, C., and Perley, R. A., pp. 79-109, Astron. Soc. of the Pac., San Francisco, Calif. Holdaway, M. A. \[1992\] [*Possible phase calibration schemes for the MMA*]{}, Millimeter Array Memo. 84, pp. 14. Holdaway, M. A., and Owen, F. N. \[1995\] [*A test of fast switching phase calibration with the VLA at 22 GHz*]{}, Millimeter Array Memo. 126, pp. 8. Koski, W. M. \[2017\] [*F318 WVR & Low-Band Interface Module*]{}, EVLA Front End Group, Document $\#$ A23185N0044, Revision B. Lay, O. P. \[1997\] [*Phase calibration and water vapor radiometry for millimeter-wave arrays*]{}, *Astron. Astrophys. Suppl. Ser., 122*, pp. 547-565. Stirling, A., Hills, R., Richer, J., Pardo, J. \[2004\] [*183 GHz water vapour radiometers for ALMA: Estimation of phase errors under varying atmospheric conditions*]{}, ALMA Memo $\#$ 496. Sutton, E. C., and Hueckstaedt, R. M. \[1997\] [*Radiometric monitoring of atmospheric water vapor as it pertains to phase correction in millimeter interferometry*]{}, *Astron. Astrophys. Suppl. Ser., 119*, pp. 559-567. Thompson, A. R., Moran, J. M., Swenson Jr., G. W. \[2001\] [*Interferometry and Synthesis in Radio Astronomy*]{}, 2nd edition (John Wiley & Sons, New York).
---
abstract: |
The nonzero and relatively large $\theta_{13}$ has been reported by the Daya Bay, T2K, MINOS, and Double Chooz Collaborations. In order to accommodate the nonzero $\theta_{13}$, we modified the tribimaximal (TB), bimaximal (BM), and democratic (DC) neutrino mixing matrices. Of the three modified neutrino mixing matrices, two (the modified BM and DC mixing matrices) can give nonzero $\theta_{13}$ compatible with the results of the Daya Bay and T2K experiments. The modified TB neutrino mixing matrix predicts a value of $\theta_{13}$ greater than the upper bound of the latest experimental results. By using the modified neutrino mixing matrices and imposing the additional assumption that the neutrino mass matrices have two zeros texture, we then obtain neutrino masses in the normal hierarchy when $(M_{\nu})_{22}=(M_{\nu})_{33}=0$ for the neutrino mass matrix from the modified TB neutrino mixing matrix and $(M_{\nu})_{11}=(M_{\nu})_{13}=0$ for the neutrino mass matrix from the modified DC neutrino mixing matrix. For these two patterns of neutrino mass matrices, either the atmospheric mass squared difference or the solar mass squared difference can be obtained, but not both simultaneously. Of the four patterns of two zeros texture considered for the neutrino mass matrix obtained from the modified BM neutrino mixing matrix, none correctly predicts the neutrino mass spectrum (normal or inverted hierarchy).
Keywords: Nonzero $\theta_{13}$; mixing matrix; neutrino mass\
PACS: 14.60.Pq, 14.60.Lm
title: 'NONZERO $\theta_{13}$ AND NEUTRINO MASSES FROM MODIFIED NEUTRINO MIXING MATRIX'
---
**ASAN DAMANIK**\
*Faculty of Science and Technology,\
Sanata Dharma University,\
Kampus III USD Paingan Maguwoharjo Sleman, Yogyakarta, Indonesia\
[email protected]*
Introduction
============
Recently, there has been convincing evidence that neutrinos have nonzero mass. This evidence is based on the experimental facts that both solar and atmospheric neutrinos undergo oscillations.[@Fukuda98]-[@Fukugita03] Since neutrinos are massive, there will be flavor mixing in the charged current interactions of the leptons, and a leptonic mixing matrix will appear, analogous to the mixing matrix in the quark sector. The mixing matrix in the neutrino sector links the mass eigenstates of the neutrino $(\nu_{1}, \nu_{2}, \nu_{3})$ to the flavor eigenstates $(\nu_{e}, \nu_{\mu}, \nu_{\tau})$ as follows: $$\bordermatrix{& \cr
&\nu_{e}\cr
&\nu_{\mu}\cr
&\nu_{\tau}\cr}=V\bordermatrix{& \cr
&\nu_{1}\cr
&\nu_{2}\cr
&\nu_{3}\cr}
\label{V}$$ where $V$ is the $3\times 3$ neutrino mixing matrix.
The neutrino mixing matrix $V$, which is also known as PMNS matrix[@Pontecorvo58; @Maki], contains three mixing angles and three CP violating phases (one Dirac type and two Majorana type). In the standard parametrization the neutrino mixing matrix $V$ is given by: $$V=\bordermatrix{& & &\cr
&c_{12}c_{13} &s_{12}c_{13} &z^{*}\cr
&-s_{12}c_{23}-c_{12}s_{23}z &c_{12}c_{23}-s_{12}s_{23}z &s_{23}c_{13}\cr
&s_{12}s_{23}-c_{12}c_{23}z &-c_{12}s_{23}-s_{12}c_{23}z &c_{23}c_{13}\cr},
\label{V1}$$ where $c_{ij}$ and $s_{ij}$ stand for $\cos\theta_{ij}$ and $\sin\theta_{ij}$ respectively, and $z=s_{13}e^{i\varphi}$.
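As a numerical cross-check, the standard parametrization of Eq. (\[V1\]) can be coded directly; the sketch below builds $V$ from the three mixing angles and the Dirac phase (the angle values used in the checks are arbitrary, chosen only to exercise the formula):

```python
import numpy as np

def pmns(th12, th23, th13, phi=0.0):
    """Neutrino mixing matrix V in the standard parametrization,
    with z = sin(th13) * exp(i*phi); unitary by construction."""
    c12, s12 = np.cos(th12), np.sin(th12)
    c23, s23 = np.cos(th23), np.sin(th23)
    c13, s13 = np.cos(th13), np.sin(th13)
    z = s13 * np.exp(1j * phi)
    return np.array([
        [c12 * c13,                   s12 * c13,                  np.conj(z)],
        [-s12 * c23 - c12 * s23 * z,  c12 * c23 - s12 * s23 * z,  s23 * c13],
        [s12 * s23 - c12 * c23 * z,  -c12 * s23 - s12 * c23 * z,  c23 * c13],
    ])
```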
From the theoretical point of view, there are three well-known patterns of the neutrino mixing matrix $V$: the tribimaximal mixing pattern (TB)[@Harrison]-[@He], the bimaximal mixing pattern (BM)[@Vissani]-[@Li], and the democratic mixing pattern (DC).[@Fritzsch96]-[@Fritzschb] Explicitly, the neutrino mixing matrices read: $$V_{\rm{TB}}=\bordermatrix{& & &\cr
&\sqrt{\frac{2}{3}} &\sqrt{\frac{1}{3}} &0\cr
&-\sqrt{\frac{1}{6}} &\sqrt{\frac{1}{3}} &\sqrt{\frac{1}{2}}\cr
&-\sqrt{\frac{1}{6}} &\sqrt{\frac{1}{3}} &-\sqrt{\frac{1}{2}}\cr},~V_{\rm{BM}}=\bordermatrix{& & &\cr
&\sqrt{\frac{1}{2}} &\sqrt{\frac{1}{2}} &0\cr
&-\frac{1}{2} &\frac{1}{2} &\sqrt{\frac{1}{2}}\cr
&\frac{1}{2} &-\frac{1}{2} &\sqrt{\frac{1}{2}}\cr},\nonumber$$ $$V_{\rm{DC}}=\bordermatrix{& & &\cr
&\sqrt{\frac{1}{2}} &\sqrt{\frac{1}{2}} &0\cr
&\sqrt{\frac{1}{6}} &-\sqrt{\frac{1}{6}} &-\sqrt{\frac{2}{3}}\cr
&-\sqrt{\frac{1}{3}} &\sqrt{\frac{1}{3}} &-\sqrt{\frac{1}{3}}\cr},
\label{tb}$$ which lead to $\theta_{13}=0$. However, the latest result from long baseline neutrino oscillation experiment T2K indicates that $\theta_{13}$ is relatively large. For a vanishing Dirac CP-violating phase, the T2K collaboration reported that the values of $\theta_{13}$ for neutrino mass in normal hierarchy (NH) are[@T2K]: $$5.0^{o}\leq\theta_{13}\leq 16.0^{o},$$ and $$5.8^{o}\leq\theta_{13}\leq 17.8^{o},$$ for neutrino mass in inverted hierarchy (IH), and the current combined world data[@Gonzales-Carcia]-[@Fogli]: $$\Delta m_{21}^{2}=7.59\pm0.20 (_{-0.69}^{+0.61}) \times 10^{-5}~\rm{eV^{2}},\label{21}$$ $$\Delta m_{32}^{2}=2.46\pm0.12(\pm0.37) \times 10^{-3}~\rm{eV^{2}},~\rm(for~ NH)\label{32}$$ $$\Delta m_{32}^{2}=-2.36\pm0.11(\pm0.37) \times 10^{-3}~\rm{eV^{2}},~\rm(for~ IH)\label{321}$$ $$\theta_{12}=34.5\pm1.0 (_{-2.8}^{3.2})^{o},~~\theta_{23}=42.8_{-2.9}^{+4.5}(_{-7.3}^{+10.7})^{o},~~\theta_{13}=5.1_{-3.3}^{+3.0}(\leq 12.0)^{o},
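The three patterns of Eq. (\[tb\]) can be written down numerically to verify that each is unitary and that each has a vanishing $e$-3 element, i.e. $\sin\theta_{13}=|V_{e3}|=0$ in the standard parametrization (a quick sketch):

```python
import numpy as np

r = np.sqrt
V_TB = np.array([[ r(2/3),  r(1/3),  0],
                 [-r(1/6),  r(1/3),  r(1/2)],
                 [-r(1/6),  r(1/3), -r(1/2)]])
V_BM = np.array([[ r(1/2),  r(1/2),  0],
                 [-1/2,     1/2,     r(1/2)],
                 [ 1/2,    -1/2,     r(1/2)]])
V_DC = np.array([[ r(1/2),  r(1/2),  0],
                 [ r(1/6), -r(1/6), -r(2/3)],
                 [-r(1/3),  r(1/3), -r(1/3)]])

for V in (V_TB, V_BM, V_DC):
    assert np.allclose(V @ V.T, np.eye(3))  # real unitary (orthogonal)
    assert V[0, 2] == 0                     # sin(theta13) = 0
```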
\label{GD}$$ at $1\sigma~(3\sigma)$ level. The latest experimental result on $\theta_{13}$ is reported by Daya Bay Collaboration which gives[@Daya]: $$\sin^{2}2\theta_{13}=0.092\pm 0.016 (\rm{stat}.)\pm 0.005 (\rm{syst.}).$$
In order to accommodate a nonzero $\theta_{13}$ value, several models and modifications of the neutrino mixing matrix have been proposed by many authors. By analyzing the two-zero texture of the neutrino mass matrix ($M_{\mu\mu}=M_{\tau\tau}=0$), a very large $\theta_{13}$ can be produced if atmospheric neutrino oscillations are not too nearly maximal.[@Frampton02]-[@Ludl] Nonzero $\theta_{13}$ in the context of the $A_{4}$ model was discussed in Refs. [@Ma10]-[@King], and by using $S_{4}$ flavor symmetry while leaving $\theta_{23}$ maximal and $\theta_{12}$ trimaximal in Refs. [@Morisi11]–[@Chu]. A relatively large $\theta_{13}$ can also be obtained by applying the permutation symmetry $S_{3}$ to both the charged lepton and neutrino mass matrices, in which the flavor symmetry is explicitly broken down with different symmetry breaking.[@Zhou11] Minimal modifications to the neutrino mixing matrix (tribimaximal, bimaximal, and democratic) can be found in Refs. [@Xing11]–[@Wchao]; nonzero $\theta_{13}$ and CP violation in the inverse neutrino mass matrix with two texture zeros are discussed in Refs. [@Verma11]–[@Rodejohann11]. By using the criterion that the mixing matrix can be parameterized by three rotation angles which are simple fractions of $\pi$, there are twenty successful mixing patterns consistent with the latest neutrino oscillation data.[@Rodejohann11] The nonzero $\theta_{13}$ can also be derived from a supersymmetric $B-L$ gauge model with $T_{7}$ lepton flavor symmetry, SO(10) with type II seesaw, finite quantum corrections in a quasi-degenerate neutrino mass spectrum, and by introducing a small correction term $\delta M$ in the neutrino sector (see Refs. [@Cao11]–[@Araki]).
The neutrino mixing matrix can be used to obtain the neutrino mass matrix. One of the interesting patterns of the neutrino mass matrix that has been extensively studied in the literature is texture zero. A neutrino mass matrix with texture zero is a consequence of the underlying symmetry in a given model and is phenomenologically useful in the sense that it guarantees the calculability of $M_{\nu}$, from which both the neutrino mass spectrum and the flavor mixing pattern can more or less be predicted [@Fritzsch]. In view of the latest T2K neutrino oscillation data, which hint at a relatively large $\theta_{13}$, in relation to texture zeros of the neutrino mass matrix, Kumar[@Kumar] discussed the implications of a class of neutrino mass matrices with texture zero that allow deviations from maximal mixing, Deepthi [*et al.*]{}[@Deepthi11] analyzed one texture zero of the neutrino mass matrix, and Fritzsch [*et al.*]{}[@Fritzsch] performed a systematic study of the neutrino mass matrix with two independent texture zeros.
In this paper, we use modified neutrino mixing matrices (TB, BM, and DC) to obtain nonzero $\theta_{13}$, similar to the paper of Deepthi [*et al.*]{}[@Deepthi11] but with different zero textures. We use the modified neutrino mixing matrices to obtain neutrino mass matrices with two texture zeros. The neutrino masses and their hierarchies are studied systematically from the obtained mass matrices, and their phenomenological consequences are discussed. This paper is organized as follows: in section 2, the modified neutrino mixing matrices (TB, BM, and DC) are reviewed; in section 3, the neutrino mass matrices with two texture zeros are constructed from the modified mixing matrices and their phenomenological consequences are discussed. Finally, section 4 is devoted to conclusions.
Modified Neutrino Mixing Matrices
=================================
As stated in section 1, in this section we modify the tribimaximal, bimaximal, and democratic neutrino mixing patterns of Eq. (\[tb\]). Introducing perturbation matrices into the mixing matrices of Eq. (\[tb\]) is the simplest way to obtain a nonzero $\theta_{13}$; its value is then expressed in terms of parameters that can be fitted to experimental results. In this paper, the modified neutrino mixing matrices to be considered are given by: $$V_{{\rm TB}}^{'}=V_{{\rm TB}}V_{23}V_{12},\label{Modi1}$$ $$V_{{\rm BM}}^{'}=V_{{\rm BM}}V_{23}V_{12},\label{Modi2}$$ $$V_{{\rm DC}}^{'}=V_{{\rm DC}}V_{23}V_{12},
\label{Modified}$$ where $V_{12}$ and $V_{23}$ are perturbation matrices applied to the neutrino mixing matrices. We take the perturbation matrices to be: $$V_{12}=\bordermatrix{& & &\cr
&c_{x} &s_{x} &0\cr
&-s_{x} &c_{x} &0\cr
&0 &0 &1\cr},~V_{23}=\bordermatrix{& & &\cr
&1 &0 &0\cr
&0 &c_{y} &s_{y}\cr
&0 &-s_{y} &c_{y}\cr}.
\label{xy}$$ where $c_{x}=\cos x$, $s_{x}=\sin x$, $c_{y}=\cos y$, and $s_{y}=\sin y$.
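As a quick numerical sanity check (a sketch, not part of the paper; the tribimaximal pattern of Eq. (\[tb\]) is reconstructed here from Eq. (\[Mo1\]) at $x=y=0$), the perturbation matrices are rotations, so the modified mixing matrices remain unitary:

```python
import numpy as np

def V12(x):
    """1-2 plane rotation of Eq. (xy)."""
    c, s = np.cos(x), np.sin(x)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def V23(y):
    """2-3 plane rotation of Eq. (xy)."""
    c, s = np.cos(y), np.sin(y)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

# Tribimaximal pattern, read off from Eq. (Mo1) at x = y = 0.
V_TB = np.array([[np.sqrt(2.0 / 3.0), 1.0 / np.sqrt(3.0), 0.0],
                 [-1.0 / np.sqrt(6.0), 1.0 / np.sqrt(3.0), 1.0 / np.sqrt(2.0)],
                 [-1.0 / np.sqrt(6.0), 1.0 / np.sqrt(3.0), -1.0 / np.sqrt(2.0)]])

x, y = np.radians(30.0), np.radians(-10.0)   # arbitrary test angles
Vp = V_TB @ V23(y) @ V12(x)                  # Eq. (Modi1)
print(np.allclose(Vp @ Vp.T, np.eye(3)))     # unitarity is preserved
```

Since $V_{12}$ and $V_{23}$ are orthogonal rotations, the same check holds for the BM and DC patterns.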
By inserting Eqs. (\[tb\]) and (\[xy\]) into Eqs. (\[Modi1\])-(\[Modified\]), we obtain the modified neutrino mixing matrices: $$V_{{\rm TB}}^{'}=\bordermatrix{& & &\cr
&\frac{\sqrt{3}}{3}(\sqrt{2}c_{x}-c_{y}s_{x}) &\frac{\sqrt{3}}{3}(\sqrt{2}s_{x}+c_{y}c_{x}) &\frac{\sqrt{3}}{3}s_{y}\cr
&-\frac{\sqrt{3}}{3}(\frac{\sqrt{2}}{2}c_{x}+c_{y}s_{x})+\frac{\sqrt{2}}{2}s_{y}s_{x} &-\frac{\sqrt{3}}{3}(\frac{\sqrt{2}}{2}s_{x}-c_{y}c_{x})-\frac{\sqrt{2}}{2}s_{y}c_{x} &\frac{\sqrt{3}}{3}s_{y}+\frac{\sqrt{2}}{2}c_{y}\cr
&-\frac{\sqrt{3}}{3}(\frac{\sqrt{2}}{2}c_{x}+c_{y}s_{x})-\frac{\sqrt{2}}{2}s_{y}s_{x} &-\frac{\sqrt{3}}{3}(\frac{\sqrt{2}}{2}s_{x}-c_{y}c_{x})+\frac{\sqrt{2}}{2}s_{y}c_{x} &\frac{\sqrt{3}}{3}s_{y}-\frac{\sqrt{2}}{2}c_{y}\cr},\label{Mo1}$$ $$V_{{\rm BM}}^{'}=\bordermatrix{& & &\cr
&\frac{\sqrt{2}}{2}(c_{x}-c_{y}s_{x}) &\frac{\sqrt{2}}{2}(s_{x}+c_{y}c_{x}) &\frac{\sqrt{2}}{2}s_{y}\cr
&-\frac{1}{2}(c_{x}+c_{y}s_{x}-\sqrt{2}s_{y}s_{x}) &-\frac{1}{2}(s_{x}-c_{y}c_{x}+\sqrt{2}s_{y}c_{x}) &\frac{1}{2}(s_{y}+\sqrt{2}c_{y})\cr
&\frac{1}{2}(c_{x}+c_{y}s_{x}+\sqrt{2}s_{y}s_{x}) &\frac{1}{2}(s_{x}-c_{y}c_{x}-\sqrt{2}s_{y}c_{x}) &-\frac{1}{2}(s_{y}-\sqrt{2}c_{y})\cr},\label{Mo2}$$ $$V_{{\rm DC}}^{'}=\bordermatrix{& & &\cr
&\frac{\sqrt{2}}{2}(c_{x}-c_{y}s_{x}) &\frac{\sqrt{2}}{2}(s_{x}+c_{y}c_{x}) &\frac{\sqrt{2}}{2}s_{y}\cr
&\frac{\sqrt{6}}{6}(c_{x}+c_{y}s_{x}-2s_{y}s_{x}) &\frac{\sqrt{6}}{6}(s_{x}-c_{y}c_{x}+2s_{y}c_{x}) &-\frac{\sqrt{6}}{6}(s_{y}+2c_{y})\cr
&-\frac{\sqrt{3}}{3}(c_{x}+c_{y}s_{x}+s_{y}s_{x}) &-\frac{\sqrt{3}}{3}(s_{x}-c_{y}c_{x}-s_{y}c_{x}) &\frac{\sqrt{3}}{3}(s_{y}-c_{y})\cr}.
\label{Mo}$$ By comparing Eqs. (\[Mo1\]), (\[Mo2\]), and (\[Mo\]) with the neutrino mixing in standard parameterization form as shown in Eq. (\[V1\]) with $\varphi=0$, then we obtain: $$\tan\theta_{12}=\left|\frac{\sqrt{2}s_{x}+c_{y}c_{x}}{\sqrt{2}c_{x}-c_{y}s_{x}}\right|,~~
\tan\theta_{23}=\left|\frac{\frac{\sqrt{3}}{3}s_{y}+\frac{\sqrt{2}}{2}c_{y}}{\frac{\sqrt{3}}{3}s_{y}-\frac{\sqrt{2}}{2}c_{y}}\right|,~~
\sin\theta_{13}=\left|\frac{\sqrt{3}}{3}s_{y}\right|,
\label{1}$$ for modified tribimaximal mixing, and $$\tan\theta_{12}=\left|\frac{s_{x}+c_{y}c_{x}}{c_{x}-c_{y}s_{x}}\right|,~~
\tan\theta_{23}=\left|-\frac{s_{y}+\sqrt{2}c_{y}}{s_{y}-\sqrt{2}c_{y}}\right|,~~
\sin\theta_{13}=\left|\frac{\sqrt{2}}{2}s_{y}\right|,
\label{2}$$ for modified bimaximal mixing, and $$\tan\theta_{12}=\left|\frac{s_{x}+c_{y}c_{x}}{c_{x}-c_{y}s_{x}}\right|,~~
\tan\theta_{23}=\left|-\frac{\sqrt{2}}{2}\left(\frac{s_{y}+2c_{y}}{s_{y}-c_{y}}\right)\right|,~~
\sin\theta_{13}=\left|\frac{\sqrt{2}}{2}s_{y}\right|,
\label{3}$$ for modified democratic mixing. It is apparent that as $y\rightarrow 0$, $\tan\theta_{23}\rightarrow 1$ for both modified TB and BM, while for modified DC $\tan\theta_{23}\rightarrow \sqrt{2}$. From Eqs. (\[1\]), (\[2\]), and (\[3\]), one can see that it is possible to determine the values of $x$ and $y$, and therefore the value of $\theta_{13}$, from the experimental values of $\theta_{12}$ and $\theta_{23}$.
By inserting the experimental values of $\theta_{12}$ and $\theta_{23}$ in Eq. (\[GD\]) into Eqs. (\[1\]), (\[2\]), and (\[3\]), we obtain: $$x\approx 32.21^{o},~~y\approx -88.22^{o}, ~~\rm{for~modified~TB},\label{xtb}$$ $$x\approx 45.01^{o},~~y\approx -3.14^{o}, ~~\rm{for~modified~BM},\label{xbm}$$ $$x\approx -9.22^{o},~~y\approx -16.68^{o}, ~~\rm{for~modified~DC}\label{xdc},$$ which imply: $$\theta_{13}\approx 35.06^{o},~~\rm{for~modified~TB},\label{TB}$$ $$\theta_{13}\approx 2.22^{o},~~\rm{for~modified~BM},\label{BM}$$ $$\theta_{13}\approx 11.71^{o},~~\rm{for~modified~DC}.\label{DC}$$
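These $\theta_{13}$ values can be cross-checked numerically from the quoted $x$ and $y$ and the $\sin\theta_{13}$ expressions in Eqs. (\[1\])-(\[3\]) (a sketch; the experimental inputs of Eq. (\[GD\]) are not reproduced here, and the modified-TB value drifts by about $0.2^{o}$ because the quoted angles are rounded):

```python
import numpy as np

def theta13_deg(prefactor, y_deg):
    """theta_13 from sin(theta_13) = prefactor * |sin y|, cf. Eqs. (1)-(3)."""
    return np.degrees(np.arcsin(prefactor * abs(np.sin(np.radians(y_deg)))))

tb = theta13_deg(np.sqrt(3.0) / 3.0, -88.22)   # modified TB, Eq. (xtb)
bm = theta13_deg(np.sqrt(2.0) / 2.0, -3.14)    # modified BM, Eq. (xbm)
dc = theta13_deg(np.sqrt(2.0) / 2.0, -16.68)   # modified DC, Eq. (xdc)
print(f"TB: {tb:.2f}, BM: {bm:.2f}, DC: {dc:.2f}")
```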
The values of $x$ and $y$ for both modified TB and BM are in the range of values given in Ref. [@Deepthi11], whereas the values of $x$ and $y$ for modified DC obtained here were not reported in Ref. [@Deepthi11]. All of the modified neutrino mixing matrices in the scheme of Eqs. (\[Modi1\])-(\[Modified\]) can produce nonzero $\theta_{13}$, but only the modified DC mixing matrix predicts a value of $\theta_{13}$ compatible with the T2K result. A relatively large $\theta_{13}$ can also be obtained from the bimaximal neutrino mixing matrix (BM) in specific discrete models; see for example Refs. [@Altarelli]–[@Tooropa].
Neutrino Mass Matrix and Neutrino Mass Spectrum
===============================================
In this section, we analyze the predictions of all the modified neutrino mixing matrices for the neutrino masses and the mass spectrum, since all of them predict nonzero $\theta_{13}$. The neutrino mass matrices are constructed from the modified mixing matrices reviewed in section 2, with the additional assumption that the obtained neutrino mass matrix has two texture zeros. The four patterns of two texture zeros considered here are: $$\begin{aligned}
(M_{\nu})_{22}=(M_{\nu})_{33}=0,\label{2233}\\
(M_{\nu})_{11}=(M_{\nu})_{13}=0,\label{1113}\\
(M_{\nu})_{12}=(M_{\nu})_{13}=0,\label{1213}\\
(M_{\nu})_{12}=(M_{\nu})_{23}=0.\label{1223}\end{aligned}$$
We construct the neutrino mass matrix in the flavor eigenstate basis (where the charged lepton mass matrix is diagonal). In this basis, the neutrino mass matrix can be diagonalized by a unitary matrix $V$ as follows: $$M_{\nu}=V M V^{T},
\label{Mf}$$ where the diagonal neutrino mass matrix $M$ is given by: $$M=\bordermatrix{& & &\cr
&m_{1} &0 &0\cr
&0 &m_{2} &0\cr
&0 &0 &m_{3}\cr}.
\label{Mb}$$ If the unitary matrix $V$ is replaced by $V_{TB}^{'}$, $V_{BM}^{'}$, or $V_{DC}^{'}$, then Eq. (\[Mf\]) becomes: $$M_{\nu}=V_{\alpha}^{'} M V_{\alpha}^{'T},
\label{A}$$ where $\alpha$ is the index for TB, BM, or DC.
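Eq. (\[A\]) is straightforward to evaluate numerically; the following sketch (with an arbitrary orthogonal $V^{'}$ and illustrative masses) checks that the resulting $M_{\nu}$ is symmetric with eigenvalues $m_{1}$, $m_{2}$, $m_{3}$, as expected in the real, CP-conserving case considered here:

```python
import numpy as np

def mass_matrix(Vp, m1, m2, m3):
    """M_nu = V' diag(m1, m2, m3) V'^T, cf. Eq. (A) (real mixing matrix)."""
    return Vp @ np.diag([m1, m2, m3]) @ Vp.T

# Any orthogonal V' will do for this check; illustrative masses in eV.
rng = np.random.default_rng(0)
Vp, _ = np.linalg.qr(rng.standard_normal((3, 3)))
M_nu = mass_matrix(Vp, 0.01, 0.02, 0.05)

print(np.allclose(M_nu, M_nu.T))                              # symmetric
print(np.allclose(np.linalg.eigvalsh(M_nu), [0.01, 0.02, 0.05]))
```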
Neutrino mass matrix from modified TB
-------------------------------------
By using Eqs. (\[Mo1\]), (\[Mb\]), and (\[A\]), we obtain the neutrino mass matrix from the modified tribimaximal mixing matrix as follows: $$M_{\nu}=\bordermatrix{& & &\cr
&(M_{\nu})_{11} &(M_{\nu})_{12} &(M_{\nu})_{13} \cr
&(M_{\nu})_{21} &(M_{\nu})_{22} &(M_{\nu})_{23} \cr
&(M_{\nu})_{31} &(M_{\nu})_{32} &(M_{\nu})_{33}\cr},$$ where $$(M_{\nu})_{11}=m_{1}\left(\frac{\sqrt{6}c_{x}}{3}-\frac{\sqrt{3}c_{y}s_{x}}{3}\right)^{2}+m_{2}\left(\frac{\sqrt{6}s_{x}}{3}+\frac{\sqrt{3}c_{y}c_{x}}{3}\right)^{2}+m_{3}\frac{s_{y}^{2}}{3},$$ $$\begin{aligned}
(M_{\nu})_{12}=m_{1}\left(\frac{\sqrt{6}c_{x}}{3}-\frac{\sqrt{3}c_{y}s_{x}}{3}\right)\left(-\frac{\sqrt{6}c_{x}}{6}-\frac{\sqrt{3}c_{y}s_{x}}{3}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\nonumber\\+m_{2}\left(\frac{\sqrt{6}s_{x}}{3}+\frac{\sqrt{3}c_{y}c_{x}}{3}\right)\left(-\frac{\sqrt{6}s_{x}}{6}+\frac{\sqrt{3}c_{y}c_{x}}{3}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\nonumber\\+m_{3}\frac{\sqrt{3}}{3}\left(\frac{\sqrt{3}s_{y}^{2}}{3}+\frac{\sqrt{2}s_{y}c_{y}}{2}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{13}=m_{1}\left(\frac{\sqrt{6}c_{x}}{3}-\frac{\sqrt{3}c_{y}s_{x}}{3}\right)\left(-\frac{\sqrt{6}c_{x}}{6}-\frac{\sqrt{3}c_{y}s_{x}}{3}-\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\nonumber\\+m_{2}\left(\frac{\sqrt{6}s_{x}}{3}+\frac{\sqrt{3}c_{y}c_{x}}{3}\right)\left(-\frac{\sqrt{6}s_{x}}{6}+\frac{\sqrt{3}c_{y}c_{x}}{3}+\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\nonumber\\+m_{3}\frac{\sqrt{3}}{3}\left(\frac{\sqrt{3}s_{y}^{2}}{3}-\frac{\sqrt{2}s_{y}c_{y}}{2}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{21}=m_{1}\left(\frac{\sqrt{6}c_{x}}{3}-\frac{\sqrt{3}c_{y}s_{x}}{3}\right)\left(-\frac{\sqrt{6}c_{x}}{6}-\frac{\sqrt{3}c_{y}s_{x}}{3}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\nonumber\\+m_{2}\left(\frac{\sqrt{6}s_{x}}{3}+\frac{\sqrt{3}c_{y}c_{x}}{3}\right)\left(-\frac{\sqrt{6}s_{x}}{6}+\frac{\sqrt{3}c_{y}c_{x}}{3}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\nonumber\\+m_{3}\frac{\sqrt{3}}{3}\left(\frac{\sqrt{3}s_{y}^{2}}{3}+\frac{\sqrt{2}s_{y}c_{y}}{2}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{22}=m_{1}\left(-\frac{\sqrt{6}c_{x}}{6}-\frac{\sqrt{3}c_{y}s_{x}}{3}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)^{2}\nonumber\\+m_{2}\left(-\frac{\sqrt{6}s_{x}}{6}+\frac{\sqrt{3}c_{y}c_{x}}{3}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)^{2}+m_{3}\left(\frac{\sqrt{3}s_{y}}{3}+\frac{\sqrt{2}c_{y}}{2}\right)^{2},\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{23}=m_{1}\left(-\frac{\sqrt{6}c_{x}}{6}-\frac{\sqrt{3}c_{y}s_{x}}{3}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\left(-\frac{\sqrt{6}c_{x}}{6}-\frac{\sqrt{3}c_{y}s_{x}}{3}-\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\nonumber\\+m_{2}\left(\frac{-\sqrt{6}s_{x}}{6}+\frac{\sqrt{3}c_{y}c_{x}}{3}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\left(-\frac{\sqrt{6}s_{x}}{6}+\frac{\sqrt{3}c_{y}c_{x}}{3}+\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\nonumber\\+m_{3}\left(\frac{\sqrt{3}s_{y}}{3}+\frac{\sqrt{2}c_{y}}{2}\right)\left(\frac{\sqrt{3}s_{y}}{3}-\frac{\sqrt{2}c_{y}}{2}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{31}=m_{1}\left(\frac{\sqrt{6}c_{x}}{3}-\frac{\sqrt{3}c_{y}s_{x}}{3}\right)\left(-\frac{\sqrt{6}c_{x}}{6}-\frac{\sqrt{3}c_{y}s_{x}}{3}-\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\nonumber\\+m_{2}\left(\frac{\sqrt{6}s_{x}}{3}+\frac{\sqrt{3}c_{y}c_{x}}{3}\right)\left(-\frac{\sqrt{6}s_{x}}{6}+\frac{\sqrt{3}c_{y}c_{x}}{3}+\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\nonumber\\+m_{3}\frac{\sqrt{3}}{3}\left(\frac{\sqrt{3}s_{y}^{2}}{3}-\frac{\sqrt{2}s_{y}c_{y}}{2}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{32}=m_{1}\left(-\frac{\sqrt{6}c_{x}}{6}-\frac{\sqrt{3}c_{y}s_{x}}{3}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\left(-\frac{\sqrt{6}c_{x}}{6}-\frac{\sqrt{3}c_{y}s_{x}}{3}-\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\nonumber\\+m_{2}\left(\frac{-\sqrt{6}s_{x}}{6}+\frac{\sqrt{3}c_{y}c_{x}}{3}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\left(-\frac{\sqrt{6}s_{x}}{6}+\frac{\sqrt{3}c_{y}c_{x}}{3}+\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\nonumber\\+m_{3}\left(\frac{\sqrt{3}s_{y}}{3}+\frac{\sqrt{2}c_{y}}{2}\right)\left(\frac{\sqrt{3}s_{y}}{3}-\frac{\sqrt{2}c_{y}}{2}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{33}=m_{1}\left(-\frac{\sqrt{6}c_{x}}{6}-\frac{\sqrt{3}c_{y}s_{x}}{3}-\frac{\sqrt{2}s_{y}s_{x}}{2}\right)^{2}\nonumber\\+m_{2}\left(-\frac{\sqrt{6}s_{x}}{6}+\frac{\sqrt{3}c_{y}c_{x}}{3}+\frac{\sqrt{2}s_{y}c_{x}}{2}\right)^{2}\nonumber\\+m_{3}\left(\frac{\sqrt{3}s_{y}}{3}-\frac{\sqrt{2}c_{y}}{2}\right)^{2},\end{aligned}$$
If we impose the four patterns of two texture zeros in Eqs. (\[2233\])-(\[1223\]) on the neutrino mass matrix obtained from the modified tribimaximal mixing matrix and insert the values of $x$ and $y$ from Eq. (\[xtb\]), then we have: $$\begin{aligned}
m_{1}=0.737880853~m_{2},~m_{3}=-1.783552908~ m_{2},~{\rm{for}}~(M_{\nu})_{22}=(M_{\nu})_{33}=0, \label{mtb}\\
m_{1}=-1.327010549~m_{2},~m_{3}=1.162476103~ m_{2},~{\rm{for}}~(M_{\nu})_{11}=(M_{\nu})_{13}=0, \label{mtb1}\\
m_{1}=0.9999999995 ~m_{2},~m_{3}= m_{2},~{\rm{for}}~(M_{\nu})_{12}=(M_{\nu})_{13}=0, \label{mtb2}\\
m_{1}=0.9999999993 ~m_{2},~m_{3}=0.9999999999 ~m_{2},~{\rm{for}}~(M_{\nu})_{12}=(M_{\nu})_{23}=0. \label{mtb3}\end{aligned}$$ From Eqs. (\[mtb\])-(\[mtb3\]), it is apparent that only the two-zero texture $(M_{\nu})_{22}=(M_{\nu})_{33}=0$ gives an acceptable neutrino mass spectrum. From Eq. (\[mtb\]), we have: $$\begin{aligned}
\left|\frac{m_{1}}{m_{2}}\right|,~\left|\frac{m_{2}}{m_{3}}\right|<1,
\label{IH1}\end{aligned}$$ which predicts the normal hierarchy (NH): $\left|m_{1}\right|<\left|m_{2}\right|<\left|m_{3}\right|$.
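The ratios in Eq. (\[mtb\]) can be reproduced numerically: with $m_{2}$ fixed, each two-zero condition $\sum_{k} m_{k}(V^{'}_{ik})^{2}=0$ is linear in $(m_{1},m_{3})$. The sketch below (a reconstruction, not the authors' code; small deviations in the last digits come from the rounded $x$, $y$ of Eq. (\[xtb\])) solves $(M_{\nu})_{22}=(M_{\nu})_{33}=0$ for modified TB:

```python
import numpy as np

def modified_TB(x_deg, y_deg):
    """V'_TB = V_TB V23 V12, cf. Eqs. (Modi1) and (xy)."""
    x, y = np.radians(x_deg), np.radians(y_deg)
    V_TB = np.array([[np.sqrt(2.0 / 3.0), 1.0 / np.sqrt(3.0), 0.0],
                     [-1.0 / np.sqrt(6.0), 1.0 / np.sqrt(3.0), 1.0 / np.sqrt(2.0)],
                     [-1.0 / np.sqrt(6.0), 1.0 / np.sqrt(3.0), -1.0 / np.sqrt(2.0)]])
    V23 = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(y), np.sin(y)],
                    [0.0, -np.sin(y), np.cos(y)]])
    V12 = np.array([[np.cos(x), np.sin(x), 0.0],
                    [-np.sin(x), np.cos(x), 0.0],
                    [0.0, 0.0, 1.0]])
    return V_TB @ V23 @ V12

Vp = modified_TB(32.21, -88.22)                  # angles from Eq. (xtb)
# (M_nu)_22 = (M_nu)_33 = 0 with m2 = 1: two linear equations in (m1, m3).
A = np.array([[Vp[1, 0] ** 2, Vp[1, 2] ** 2],
              [Vp[2, 0] ** 2, Vp[2, 2] ** 2]])
b = -np.array([Vp[1, 1] ** 2, Vp[2, 1] ** 2])
m1, m3 = np.linalg.solve(A, b)
print(round(m1, 3), round(m3, 3))                # close to Eq. (mtb)
```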
By using the experimental value of the squared mass difference shown in Eq. (\[21\]) together with Eq. (\[mtb\]), we obtain the absolute values of the neutrino masses as follows: $$\begin{aligned}
\left|m_{1}\right|=0.0095246222~\rm{eV},\nonumber\\
\left|m_{2}\right|=0.0129080761~\rm{eV},\nonumber\\
\left|m_{3}\right|=0.0230222366~\rm{eV},\label{MTB}\end{aligned}$$ which cannot correctly reproduce the atmospheric squared mass difference ($\Delta m_{32}^{2}$) in Eq. (\[32\]). Conversely, if we use the experimental value of the squared mass difference in Eq. (\[32\]), then Eq. (\[mtb\]) predicts the absolute neutrino masses as follows: $$\begin{aligned}
\left|m_{1}\right|=0.0247810607~\rm{eV},\nonumber\\
\left|m_{2}\right|=0.0335840950~\rm{eV},\nonumber\\
\left|m_{3}\right|=0.0598990103~\rm{eV},\end{aligned}$$ which cannot correctly predict the value of $\Delta m_{21}^{2}$ in Eq. (\[21\]).
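The numbers in Eq. (\[MTB\]) follow from the ratios of Eq. (\[mtb\]) once the solar splitting is fixed; the sketch below assumes $\Delta m_{21}^{2}=7.59\times 10^{-5}~\mathrm{eV^{2}}$ (the value that reproduces Eq. (\[MTB\]); Eq. (\[21\]) itself is not shown in this excerpt):

```python
import math

r1, r3 = 0.737880853, -1.783552908   # m1 = r1*m2, m3 = r3*m2, Eq. (mtb)
dm21_sq = 7.59e-5                    # eV^2, assumed solar splitting

# dm21^2 = m2^2 - m1^2 = m2^2 (1 - r1^2), so
m2 = math.sqrt(dm21_sq / (1.0 - r1 ** 2))
m1, m3 = abs(r1) * m2, abs(r3) * m2
print(f"|m1| = {m1:.10f} eV, |m2| = {m2:.10f} eV, |m3| = {m3:.10f} eV")
print(f"implied dm32^2 = {m3 ** 2 - m2 ** 2:.3e} eV^2")  # too small vs. Eq. (32)
```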
Neutrino mass matrix from modified BM
-------------------------------------
By using Eqs. (\[Mo2\]), (\[Mb\]), and (\[A\]), we have the neutrino mass matrix from the modified bimaximal neutrino mixing matrix as follows: $$\begin{aligned}
M_{\nu}=\bordermatrix{& & &\cr
&(M_{\nu})_{11} &(M_{\nu})_{12} &(M_{\nu})_{13} \cr
&(M_{\nu})_{21} &(M_{\nu})_{22} &(M_{\nu})_{23} \cr
&(M_{\nu})_{31} &(M_{\nu})_{32} &(M_{\nu})_{33}\cr},\end{aligned}$$ where $$\begin{aligned}
(M_{\nu})_{11}=m_{1}\left(\frac{\sqrt{2}c_{x}}{2}-\frac{\sqrt{2}c_{y}s_{x}}{2}\right)^{2}+m_{2}\left(\frac{\sqrt{2}s_{x}}{2}+\frac{\sqrt{2}c_{y}c_{x}}{2}\right)^{2}+m_{3}\frac{s_{y}^{2}}{2},\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{12}=m_{1}\left(\frac{\sqrt{2}c_{x}}{2}-\frac{\sqrt{2}c_{y}s_{x}}{2}\right)\left(-\frac{c_{x}}{2}-\frac{c_{y}s_{x}}{2}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\nonumber\\+m_{2}\left(\frac{\sqrt{2}s_{x}}{2}+\frac{\sqrt{2}c_{y}c_{x}}{2}\right)\left(-\frac{s_{x}}{2}+\frac{c_{y}c_{x}}{2}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\nonumber\\+m_{3}\frac{\sqrt{2}}{2}\left(\frac{s_{y}^{2}}{2}+\frac{\sqrt{2}s_{y}c_{y}}{2}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{13}=m_{1}\left(\frac{\sqrt{2}c_{x}}{2}-\frac{\sqrt{2}c_{y}s_{x}}{2}\right)\left(-\frac{c_{x}}{2}+\frac{c_{y}s_{x}}{2}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\nonumber\\+m_{2}\left(\frac{\sqrt{2}s_{x}}{2}+\frac{\sqrt{2}c_{y}c_{x}}{2}\right)\left(\frac{s_{x}}{2}-\frac{c_{y}c_{x}}{2}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\nonumber\\+m_{3}\frac{\sqrt{2}}{2}\left(-\frac{s_{y}^{2}}{2}+\frac{\sqrt{2}s_{y}c_{y}}{2}\right),\\
(M_{\nu})_{21}=m_{1}\left(\frac{\sqrt{2}c_{x}}{2}-\frac{\sqrt{2}c_{y}s_{x}}{2}\right)\left(-\frac{c_{x}}{2}-\frac{c_{y}s_{x}}{2}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\nonumber\\+m_{2}\left(\frac{\sqrt{2}s_{x}}{2}+\frac{\sqrt{2}c_{y}c_{x}}{2}\right)\left(-\frac{s_{x}}{2}+\frac{c_{y}c_{x}}{2}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\nonumber\\+m_{3}\frac{\sqrt{2}}{2}\left(\frac{s_{y}^{2}}{2}+\frac{\sqrt{2}s_{y}c_{y}}{2}\right),\\
(M_{\nu})_{22}=m_{1}\left(-\frac{c_{x}}{2}-\frac{c_{y}s_{x}}{2}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)^{2}+m_{2}\left(-\frac{s_{x}}{2}+\frac{c_{y}c_{x}}{2}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)^{2}\nonumber\\+m_{3}\left(\frac{s_{y}}{2}+\frac{\sqrt{2}c_{y}}{2}\right)^{2},\\
(M_{\nu})_{23}=m_{1}\left(-\frac{c_{x}}{2}-\frac{c_{y}s_{x}}{2}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\left(\frac{c_{x}}{2}+\frac{c_{y}s_{x}}{2}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\nonumber\\+m_{2}\left(-\frac{s_{x}}{2}+\frac{c_{y}c_{x}}{2}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\left(\frac{s_{x}}{2}-\frac{c_{y}c_{x}}{2}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\nonumber\\+m_{3}\left(\frac{s_{y}}{2}+\frac{\sqrt{2}c_{y}}{2}\right)\left(-\frac{s_{y}}{2}+\frac{\sqrt{2}c_{y}}{2}\right),\\
(M_{\nu})_{31}=m_{1}\left(\frac{\sqrt{2}c_{x}}{2}-\frac{\sqrt{2}c_{y}s_{x}}{2}\right)\left(\frac{c_{x}}{2}+\frac{c_{y}s_{x}}{2}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\nonumber\\+m_{2}\left(\frac{\sqrt{2}s_{x}}{2}+\frac{\sqrt{2}c_{y}c_{x}}{2}\right)\left(\frac{s_{x}}{2}-\frac{c_{y}c_{x}}{2}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\nonumber\\+m_{3}\frac{\sqrt{2}}{2}\left(-\frac{s_{y}^{2}}{2}+\frac{\sqrt{2}s_{y}c_{y}}{2}\right),\\
(M_{\nu})_{32}=m_{1}\left(-\frac{c_{x}}{2}-\frac{c_{y}s_{x}}{2}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\left(\frac{c_{x}}{2}+\frac{c_{y}s_{x}}{2}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)\nonumber\\+m_{2}\left(-\frac{s_{x}}{2}+\frac{c_{y}c_{x}}{2}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\left(\frac{s_{x}}{2}-\frac{c_{y}c_{x}}{2}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)\nonumber\\+m_{3}\left(\frac{s_{y}}{2}+\frac{\sqrt{2}c_{y}}{2}\right)\left(-\frac{s_{y}}{2}+\frac{\sqrt{2}c_{y}}{2}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{33}=m_{1}\left(\frac{c_{x}}{2}+\frac{c_{y}s_{x}}{2}+\frac{\sqrt{2}s_{y}s_{x}}{2}\right)^{2}+m_{2}\left(\frac{s_{x}}{2}-\frac{c_{y}c_{x}}{2}-\frac{\sqrt{2}s_{y}c_{x}}{2}\right)^{2}\nonumber\\+m_{3}\left(-\frac{s_{y}}{2}+\frac{\sqrt{2}c_{y}}{2}\right)^{2},\end{aligned}$$
For the neutrino mass matrix obtained from modified bimaximal mixing, when we impose the four patterns of two texture zeros in Eqs. (\[2233\])-(\[1223\]) and insert the values of $x$ and $y$ from Eq. (\[xbm\]), we have: $$\begin{aligned}
m_{2}=-3473.465412~m_{1},~m_{3}=4.218376~ m_{1},~{\rm{for}}~(M_{\nu})_{22}=(M_{\nu})_{33}=0, \label{mbm}\\
m_{1}=-47615.39155~m_{2},~m_{3}=-655.03754~ m_{2},~{\rm{for}}~(M_{\nu})_{11}=(M_{\nu})_{13}=0, \label{mbm1}\\
m_{1}=0.99999996 ~m_{3},~m_{2}= m_{3},~{\rm{for}}~(M_{\nu})_{12}=(M_{\nu})_{13}=0, \label{mbm2}\\
m_{1}=m_{3},~m_{2}=m_{3},~{\rm{for}}~(M_{\nu})_{12}=(M_{\nu})_{23}=0. \label{mbm3}\end{aligned}$$ From Eqs. (\[mbm\])-(\[mbm3\]), one can see that, for the neutrino mass matrix obtained from modified bimaximal mixing, none of the four patterns of two texture zeros yields a mass spectrum compatible with the known neutrino mass spectrum.
Neutrino mass matrix from modified DC
-------------------------------------
By using Eqs. (\[Mo\]), (\[Mb\]), and (\[A\]), we have the neutrino mass matrix from the modified democratic neutrino mixing matrix as follows: $$\begin{aligned}
M_{\nu}=\bordermatrix{& & &\cr
&(M_{\nu})_{11} &(M_{\nu})_{12} &(M_{\nu})_{13} \cr
&(M_{\nu})_{21} &(M_{\nu})_{22} &(M_{\nu})_{23} \cr
&(M_{\nu})_{31} &(M_{\nu})_{32} &(M_{\nu})_{33}\cr},\end{aligned}$$ where $$\begin{aligned}
(M_{\nu})_{11}=m_{1}\left(\frac{\sqrt{2}c_{x}}{2}-\frac{\sqrt{2}c_{y}s_{x}}{2}\right)^{2}+m_{2}\left(\frac{\sqrt{2}s_{x}}{2}+\frac{\sqrt{2}c_{y}c_{x}}{2}\right)^{2}+m_{3}\frac{s_{y}^{2}}{2},\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{12}=m_{1}\left(\frac{\sqrt{2}c_{x}}{2}-\frac{\sqrt{2}c_{y}s_{x}}{2}\right)\left(-\frac{\sqrt{6}c_{x}}{6}+\frac{\sqrt{6}c_{y}s_{x}}{6}+\frac{\sqrt{6}s_{y}s_{x}}{3}\right)\nonumber\\+m_{2}\left(\frac{\sqrt{2}s_{x}}{2}+\frac{\sqrt{2}c_{y}c_{x}}{2}\right)\left(-\frac{\sqrt{6}s_{x}}{6}-\frac{\sqrt{6}c_{y}c_{x}}{6}-\frac{\sqrt{6}s_{y}c_{x}}{3}\right)\nonumber\\+m_{3}\frac{\sqrt{2}}{2}\left(-\frac{\sqrt{6}s_{y}^{2}}{6}+\frac{\sqrt{6}s_{y}c_{y}}{3}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{13}=m_{1}\left(\frac{\sqrt{2}c_{x}}{2}-\frac{\sqrt{2}c_{y}s_{x}}{2}\right)\left(-\frac{\sqrt{3}c_{x}}{3}-\frac{\sqrt{3}c_{y}s_{x}}{3}-\frac{\sqrt{3}s_{y}s_{x}}{3}\right)\nonumber\\+m_{2}\left(\frac{\sqrt{2}s_{x}}{2}+\frac{\sqrt{2}c_{y}c_{x}}{2}\right)\left(-\frac{\sqrt{3}s_{x}}{3}+\frac{\sqrt{3}c_{y}c_{x}}{3}+\frac{\sqrt{3}s_{y}c_{x}}{3}\right)\nonumber\\+m_{3}\frac{\sqrt{2}}{2}\left(\frac{\sqrt{3}s_{y}^{2}}{3}-\frac{\sqrt{3}s_{y}c_{y}}{3}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{21}=m_{1}\left(\frac{\sqrt{2}c_{x}}{2}-\frac{\sqrt{2}c_{y}s_{x}}{2}\right)\left(-\frac{\sqrt{6}c_{x}}{6}+\frac{\sqrt{6}c_{y}s_{x}}{6}+\frac{\sqrt{6}s_{y}s_{x}}{3}\right)\nonumber\\+m_{2}\left(\frac{\sqrt{2}s_{x}}{2}+\frac{\sqrt{2}c_{y}c_{x}}{2}\right)\left(-\frac{\sqrt{6}s_{x}}{6}-\frac{\sqrt{6}c_{y}c_{x}}{6}-\frac{\sqrt{6}s_{y}c_{x}}{3}\right)\nonumber\\+m_{3}\frac{\sqrt{2}}{2}\left(-\frac{\sqrt{6}s_{y}^{2}}{6}+\frac{\sqrt{6}s_{y}c_{y}}{3}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{22}=m_{1}\left(-\frac{\sqrt{6}c_{x}}{6}+\frac{\sqrt{6}c_{y}s_{x}}{6}+\frac{\sqrt{6}s_{y}s_{x}}{3}\right)^{2}\nonumber\\+m_{2}\left(-\frac{\sqrt{6}s_{x}}{6}-\frac{\sqrt{6}c_{y}c_{x}}{6}-\frac{\sqrt{6}s_{y}c_{x}}{3}\right)^{2}\nonumber\\+m_{3}\left(-\frac{\sqrt{6}s_{y}}{6}+\frac{\sqrt{6}c_{y}}{3}\right)^{2},\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{23}=m_{1}\left(\frac{\sqrt{6}c_{x}}{6}-\frac{\sqrt{6}c_{y}s_{x}}{6}-\frac{\sqrt{6}s_{y}s_{x}}{3}\right)\left(\frac{\sqrt{3}c_{x}}{3}+\frac{\sqrt{3}c_{y}s_{x}}{3}+\frac{\sqrt{3}s_{y}s_{x}}{3}\right)\nonumber\\+m_{2}\left(-\frac{\sqrt{6}s_{x}}{6}-\frac{\sqrt{6}c_{y}c_{x}}{6}-\frac{\sqrt{6}s_{y}c_{x}}{3}\right)\left(-\frac{\sqrt{3}s_{x}}{3}+\frac{\sqrt{3}c_{y}c_{x}}{3}+\frac{\sqrt{3}s_{y}c_{x}}{3}\right)\nonumber\\+m_{3}\left(-\frac{\sqrt{6}s_{y}}{6}+\frac{\sqrt{6}c_{y}}{3}\right)\left(\frac{\sqrt{3}s_{y}}{3}-\frac{\sqrt{3}c_{y}}{3}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{31}=m_{1}\left(\frac{\sqrt{2}c_{x}}{2}-\frac{\sqrt{2}c_{y}s_{x}}{2}\right)\left(-\frac{\sqrt{3}c_{x}}{3}-\frac{\sqrt{3}c_{y}s_{x}}{3}-\frac{\sqrt{3}s_{y}s_{x}}{3}\right)\nonumber\\+m_{2}\left(\frac{\sqrt{2}s_{x}}{2}+\frac{\sqrt{2}c_{y}c_{x}}{2}\right)\left(-\frac{\sqrt{3}s_{x}}{3}+\frac{\sqrt{3}c_{y}c_{x}}{3}+\frac{\sqrt{3}s_{y}c_{x}}{3}\right)\nonumber\\+m_{3}\frac{\sqrt{2}}{2}\left(\frac{\sqrt{3}s_{y}^{2}}{3}-\frac{\sqrt{3}s_{y}c_{y}}{3}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{32}=m_{1}\left(-\frac{\sqrt{6}c_{x}}{6}+\frac{\sqrt{6}c_{y}s_{x}}{6}+\frac{\sqrt{6}s_{y}s_{x}}{3}\right)\left(-\frac{\sqrt{3}c_{x}}{3}-\frac{\sqrt{3}c_{y}s_{x}}{3}-\frac{\sqrt{3}s_{y}s_{x}}{3}\right)\nonumber\\+m_{2}\left(-\frac{\sqrt{6}s_{x}}{6}-\frac{\sqrt{6}c_{y}c_{x}}{6}-\frac{\sqrt{6}s_{y}c_{x}}{3}\right)\left(-\frac{\sqrt{3}s_{x}}{3}+\frac{\sqrt{3}c_{y}c_{x}}{3}+\frac{\sqrt{3}s_{y}c_{x}}{3}\right)\nonumber\\+m_{3}\left(-\frac{\sqrt{6}s_{y}}{6}+\frac{\sqrt{6}c_{y}}{3}\right)\left(\frac{\sqrt{3}s_{y}}{3}-\frac{\sqrt{3}c_{y}}{3}\right),\end{aligned}$$ $$\begin{aligned}
(M_{\nu})_{33}=m_{1}\left(-\frac{\sqrt{3}c_{x}}{3}-\frac{\sqrt{3}c_{y}s_{x}}{3}+\frac{\sqrt{3}s_{y}s_{x}}{3}\right)^{2}\nonumber\\+m_{2}\left(-\frac{\sqrt{3}s_{x}}{3}+\frac{\sqrt{3}c_{y}c_{x}}{3}+\frac{\sqrt{3}s_{y}c_{x}}{3}\right)^{2}\nonumber\\+m_{3}\left(-\frac{\sqrt{3}s_{y}}{3}-\frac{\sqrt{3}c_{y}}{3}\right)^{2},\end{aligned}$$
For the neutrino mass matrix obtained from the modified democratic mixing, if we impose the four patterns of two texture zeros in Eqs. (\[2233\])-(\[1223\]) and insert the values of $x$ and $y$ from Eq. (\[xdc\]), then we have: $$\begin{aligned}
m_{1}=-1.41771494~m_{3},~m_{2}=-0.66976966~ m_{3},~{\rm{for}}~(M_{\nu})_{22}=(M_{\nu})_{33}=0, \label{mdc}\\
m_{2}=-3.275173358~m_{1},~m_{3}=8.727515495~ m_{1},~{\rm{for}}~(M_{\nu})_{11}=(M_{\nu})_{13}=0, \label{mdc1}\\
m_{1}=0.999999999 ~m_{3},~m_{2}=0.999999999~m_{3},~{\rm{for}}~(M_{\nu})_{12}=(M_{\nu})_{13}=0, \label{mdc2}\\
m_{1}=m_{2},~m_{3}=0.999999999~m_{2},~{\rm{for}}~(M_{\nu})_{12}=(M_{\nu})_{23}=0. \label{mdc3}\end{aligned}$$ From Eqs. (\[mdc\])-(\[mdc3\]), one can see that, of the four patterns of two texture zeros for the neutrino mass matrix obtained from the modified democratic mixing, only the texture $(M_{\nu})_{11}=(M_{\nu})_{13}=0$ predicts an acceptable neutrino mass spectrum: $$\begin{aligned}
\left|\frac{m_{2}}{m_{1}}\right|,~\left|\frac{m_{3}}{m_{2}}\right|>1,
\label{IH}\end{aligned}$$ which implies the normal hierarchy: $\left|m_{1}\right|<\left|m_{2}\right|<\left|m_{3}\right|$.
By using the experimental value of the squared mass difference shown in Eq. (\[21\]), we obtain the absolute values of the neutrino masses for the modified democratic mixing matrix as follows: $$\begin{aligned}
\left|m_{1}\right|=0.0027934235~\rm{eV},\nonumber\\
\left|m_{2}\right|=0.0091489461~\rm{eV},\nonumber\\
\left|m_{3}\right|=0.0244075808~\rm{eV}.\label{dcm}\end{aligned}$$ The neutrino masses in Eq. (\[dcm\]) cannot correctly reproduce the atmospheric squared mass difference $\Delta m_{32}^{2}$ of Eq. (\[32\]). Conversely, if we first use $\Delta m_{32}^{2}$ in Eq. (\[32\]) to determine $m_{1}$, $m_{2}$, and $m_{3}$, then we have: $$\begin{aligned}
\left|m_{1}\right|=0.0061229117~\rm{eV},\nonumber\\
\left|m_{2}\right|=0.0200535970~\rm{eV},\nonumber\\
\left|m_{3}\right|=0.0534990351~\rm{eV},\label{dcm1}\end{aligned}$$ which cannot correctly reproduce the solar squared mass difference $\Delta m_{21}^{2}$ in Eq. (\[21\]).
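The same arithmetic reproduces Eq. (\[dcm\]) for modified DC from the ratios in Eq. (\[mdc1\]), again assuming $\Delta m_{21}^{2}=7.59\times 10^{-5}~\mathrm{eV^{2}}$ (a sketch; note that the quoted $\left|m_{3}\right|$ agrees with the ratio $m_{3}/m_{1}$ only to about three significant figures):

```python
import math

r2, r3 = -3.275173358, 8.727515495   # m2 = r2*m1, m3 = r3*m1, Eq. (mdc1)
dm21_sq = 7.59e-5                    # eV^2, assumed solar splitting

# dm21^2 = m2^2 - m1^2 = m1^2 (r2^2 - 1), so
m1 = math.sqrt(dm21_sq / (r2 ** 2 - 1.0))
m2, m3 = abs(r2) * m1, abs(r3) * m1
print(f"|m1| = {m1:.10f} eV, |m2| = {m2:.10f} eV, |m3| = {m3:.10f} eV")
print(f"implied dm32^2 = {m3 ** 2 - m2 ** 2:.3e} eV^2")  # far from Eq. (32)
```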
Conclusion
==========
The modified neutrino mixing matrices (TB, BM, DC) are obtained by introducing perturbation matrices into the neutrino mixing matrices. All of the modified mixing matrices can give nonzero $\theta_{13}$, but only the modified DC mixing matrix predicts a value of $\theta_{13}$ compatible with the latest experimental results: the modified TB mixing matrix predicts a value of $\theta_{13}$ above the upper bound of the T2K experiment, while the modified BM mixing matrix predicts a value below the T2K lower bound. When two texture zeros are imposed on the neutrino mass matrices obtained from the modified mixing matrices, only the mass matrix from modified TB with texture zeros $(M_{\nu})_{22}=(M_{\nu})_{33}=0$ and the one from modified DC with texture zeros $(M_{\nu})_{11}=(M_{\nu})_{13}=0$ give a mass spectrum in agreement with one of the known possibilities, namely the normal hierarchy: $\left|m_{1}\right|< \left|m_{2}\right|<\left|m_{3}\right|$. If we use the experimental value of the squared mass difference $\Delta m_{21}^{2}$ to obtain the neutrino masses, then the resulting masses cannot predict the correct value of $\Delta m_{32}^{2}$; conversely, if we use the experimental value of $\Delta m_{32}^{2}$, then the resulting masses cannot predict the correct $\Delta m_{21}^{2}$.
Acknowledgment {#acknowledgment .unnumbered}
==============
The author thanks the reviewer(s); the final version of this manuscript has changed substantially from the first version as a result of their comments and suggestions.
[00]{} Super-Kamiokande Collab. (Y. Fukuda [*et al.*]{}), [*Phys. Rev. Lett.*]{} [**81**]{}, 1158 (1998). Super-Kamiokande Collab. ( Y. Fukuda [*et al.*]{}), [*Phys. Rev. Lett*]{}. [**82**]{}, 2430 (1999). G. Giacomelli and M. Giorgini, hep-ex/0110021. SNO Collab. (Q.R. Ahmad [*et al.*]{}), [*Phys. Rev. Lett.*]{} [**89**]{}, 011301 (2002). K2K Collab. (M. H. Ahn [*et al.*]{}), [*Phys. Rev. Lett*]{}. [**90**]{}, 041801-1 (2003). M. Fukugita and T. Yanagida, [*Physics of Neutrinos and Application to Astrophysics*]{}, (Springer-Verlag, Haidelberg, 2003). B. Pontecorvo, [*Sov. Phys. JETP*]{} [**7**]{}, 172 (1958). Z. Maki, M. Nakagawa, and S. Sakata, [*Prog. Theor. Phys.*]{} [**28**]{}, 870 (1962). P. F. Harrison, D. H. Perkins, and W. G. Scott, [*Phys. Lett.*]{} [**B458**]{}, 79 (1999). P. F. Harrison, D. H. Perkins, and W. G. Scott, [*Phys. Lett.*]{} [**B530**]{}, 167 (2002). Z-z. Xing, [*Phys. Lett.*]{} [**B533**]{}, 85 (2002). P. F. Harrison and W. G. Scott, [*Phys. Lett.*]{} [**B535**]{}, 163 (2002). P. F. Harrison and W. G. Scott, [*Phys. Lett.*]{} [**B557**]{}, 76 (2003). X.-G. He and A. Zee, [*Phys. Lett.*]{} [**B560**]{}, 87 (2003). F. Vissani, hep/ph/9708483. V. D. Barger, S. Pakvasa, T. J. Weiler, and K. Whisnant, [*Phys. Lett.*]{} [**B437**]{}, 107 (1998). A. J. Baltz, A. S. Goldhaber, and M. Goldhaber, [*Phys. Rev. Lett.*]{} [**81**]{}, 5730 (1998). I. Stancu and D. V. Ahluwalia, [*Phys. Lett.*]{} [**B460**]{}, 431 (1999). H. Georgi and S. L. Glashow, [*Phys. Rev.*]{} [**D61**]{}, 097301 (2000). N. Li and B.-Q. Ma, [*Phys. Lett.*]{} [**B600**]{}, 248 (2004), arXiv: hep-ph/0408235. H. Fritzsch and Z-z. Xing, [*Phys. Lett.*]{} [**B372**]{}, 265 (1996). H. Fritzsch and Z-z. Xing, [*Phys. Lett.*]{} [**B440**]{}, 313 (1998). H. Fritzsch and Z-z. Xing, [*Phys. Rev.*]{} [**D61**]{}, 073016 (2000). T2K Collab. (K. Abe [*et al.*]{}), arXiv:1106.2822 \[hep-ph\]. M. Gonzales-Carcia, M. Maltoni and J. Salvado, arXiv:1001.4524 \[hep-ph\]. G. Fogli [*et al.*]{}, [*J. 
---
author:
- 'Javier Peña[^1]'
- 'Negar Soheili[^2]'
title: Computational performance of a projection and rescaling algorithm
---
Introduction
============
The projection and rescaling algorithm [@PenaS16] is a recent polynomial-time algorithm designed for solving the polyhedral feasibility problem $$\label{primal}
\text{find} \; x\in L\cap{\mathbb{R}}^n_{++},$$ where $L$ denotes a linear subspace in ${\mathbb{R}}^n$.
The projection and rescaling algorithm works by combining two building blocks, namely a [*basic procedure*]{} and a [*rescaling step*]{}, as follows. Let $P_L:{\mathbb{R}}^n \rightarrow L$ denote the orthogonal projection onto $L$. Within a bounded number of low-cost iterations, the basic procedure finds a non-zero $z\in{\mathbb{R}}^n_{+}$ such that either $$\label{SolvedCondition}
P_Lz \in {\mathbb{R}}^n_{++}$$ or $$\label{rescalingCondition}
\|(P_Lz)^+\|_1 \leq \dfrac{1}{2} \|z\|_\infty,$$ where $(P_Lz)^+ = \max\{0,P_Lz\}$. If \[SolvedCondition\] holds, then $x = P_Lz \in L \cap {\mathbb{R}}^n_{++}$ is a solution to the original problem \[primal\]. On the other hand, if \[rescalingCondition\] holds and $z_i = \|z\|_\infty$, then for every feasible solution $x$ to \[primal\] we have $$x_i \le \frac{1}{\|z\|_\infty}{\left\langle z , x \right\rangle} = \frac{1}{\|z\|_\infty}{\left\langle z , P_Lx \right\rangle} = \frac{1}{\|z\|_\infty}{\left\langle P_L z , x \right\rangle} \le \frac{1}{\|z\|_\infty}\|(P_L z)^+\|_1 \cdot\|x\|_\infty \leq \frac{1}{2} \|x\|_\infty.$$ In other words, if \[rescalingCondition\] holds and $z_i = \|z\|_\infty$, then every solution $x$ to \[primal\] has a small $i$-th component. The rescaling step takes $D:= I + e_ie_i{{^{\rm T}}}$ and transforms problem \[primal\] into the following equivalent rescaled problem: $$\label{rescaled}
\text{find} \; x\in D(L)\cap{\mathbb{R}}^n_{++}.$$ Observe that the solutions to the rescaled problem \[rescaled\] are in one-to-one correspondence with the solutions to \[primal\] via doubling of the $i$-th component.
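As a concrete illustration (our own sketch, not part of the paper, whose code is in MATLAB), the dichotomy between the two outcomes of the basic procedure can be checked mechanically; the function name `classify_candidate` is ours:

```python
import numpy as np

def classify_candidate(PL, z):
    """Given the orthogonal projector PL onto L and a candidate z >= 0,
    decide which of the two basic-procedure outcomes holds (if either)."""
    x = PL @ z
    if np.all(x > 0):
        return 'solved', x                       # x lies in L ∩ R^n_++
    if np.maximum(x, 0.0).sum() <= 0.5 * np.abs(z).max():
        return 'rescale', int(np.argmax(z))      # an i with z_i = ||z||_inf
    return 'continue', None

# Example: for L = span{(1,-1)}, z = (1,1) triggers the rescaling condition.
P = np.array([[0.5, -0.5], [-0.5, 0.5]])
outcome, i = classify_candidate(P, np.array([1.0, 1.0]))
```

In the `'rescale'` case the returned index $i$ is the component whose doubling defines $D = I + e_ie_i^{\rm T}$.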
As is easy to see and as detailed in [@PenaS16], when $D$ is as above, the rescaled problem \[rescaled\] is better conditioned than \[primal\] in the following sense. If $L \cap {\mathbb{R}}^n_{++}\ne \emptyset$ then $
\delta(D(L)\cap {\mathbb{R}}^n_{++}) = 2 \delta(L \cap {\mathbb{R}}^n_{++})
$ where $\delta(L\cap{\mathbb{R}}^n_{++})$ is the following [*condition measure*]{} of the problem \[primal\]: $$\label{eq:ConditionMeasure}
\delta(L\cap{\mathbb{R}}^n_{++}) := \max\left\{\prod_{j=1}^n x_j: x\in L\cap{\mathbb{R}}^n_{++}, \|x\|_\infty = 1\right\}.$$ By convention $\delta(L \cap {\mathbb{R}}^n_{++}) = -\infty$ when $L \cap {\mathbb{R}}^n_{++} = \emptyset$. Observe that $\delta(L \cap {\mathbb{R}}^n_{++}) \le 1$ is a measure of the [*most interior*]{} solution to \[primal\]. As detailed in [@PenaS16], it follows that when $L\cap{\mathbb{R}}^n_{++}\ne \emptyset$, the projection and rescaling algorithm finds a solution to \[primal\] in at most $\log_2(1/\delta(L\cap {\mathbb{R}}^n_{++}))$ rounds of basic procedure and rescaling step. Furthermore, each round of basic procedure and rescaling step requires a number of elementary operations bounded by a low-degree polynomial (quadratic or cubic) in $n$.
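As a toy illustration of the doubling property (our own example, not from the paper), consider a one-dimensional subspace $L = \operatorname{span}(v)$ with $v > 0$: the maximizer in \[eq:ConditionMeasure\] is $x = v/\|v\|_\infty$, and doubling a small component of $v$ doubles $\delta$:

```python
import numpy as np

def delta_span(v):
    """delta(L ∩ R^n_++) for the 1-D subspace L = span(v) with v > 0:
    the feasible set is {t v : t > 0}, so the maximizer subject to
    ||x||_inf = 1 is x = v / ||v||_inf."""
    x = v / np.abs(v).max()
    return float(np.prod(x))

v = np.array([1.0, 0.25])
d0 = delta_span(v)                        # condition measure of span(v)
d1 = delta_span(np.diag([1.0, 2.0]) @ v)  # rescale the small component: doubles
```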
The above projection and rescaling algorithm was originally proposed by Chubanov [@Chub12; @Chub15] and is in the same spirit as other rescaling methods in [@BellFV09; @Freund85; @PenaS13]. In addition to [@PenaS16], a number of articles [@DaduVZ16; @DaduVZ17; @HobeR17; @KitaT18; @LiRT15; @LourKMT16; @Roos18] have proposed new algorithmic developments by extending the projection and rescaling templates introduced in [@BellFV09; @Chub12; @Chub15; @PenaS13]. However, despite their interesting theoretical guarantees, there has been limited work on the computational effectiveness of the projection and rescaling algorithm as well as other methods based on rescaling. As far as we know, only the articles by Li et al. [@LiRT15] and by Roos [@Roos18] report numerical results on implementations of some variants of Chubanov’s projection and rescaling algorithm.
This paper documents a MATLAB implementation of an enhanced version of the projection and rescaling algorithm from [@PenaS16]. Our work differs from [@LiRT15; @Roos18] in several ways. Unlike the algorithms in [@LiRT15; @Roos18], our main algorithm solves both feasibility problems $L\cap {\mathbb{R}}^n_+$ and $L^\perp\cap {\mathbb{R}}^n_+$ in a symmetric fashion. We also perform and report a significantly larger set of computational experiments in greater detail. We compare, via numerous experiments, the performance of several possible schemes for the basic procedure. We provide full descriptions of the algorithms that we implement. The MATLAB code for our implementation is publicly available at the following website:
[http://www.andrew.cmu.edu/user/jfp/epra.html]{}
All of the numerical experiments reported in this paper can easily be replicated and verified via the above code. Furthermore, since our MATLAB code is a verbatim implementation of the algorithms described in the sequel, it is straightforward to replicate our implementation in other numerical computing environments such as R, Python, or Julia.
Algorithm \[algo.MPRA\], the main algorithm in our implementation, incorporates the following enhancements to the original Projection and Rescaling Algorithm in [@PenaS16]:
1. Let $L^\perp$ denote the orthogonal complement of $L$. Algorithm \[algo.MPRA\] finds [*most interior*]{} solutions to the problems $$\label{primal.again}
\text{ find} \;x \in L \cap {\mathbb{R}}^n_{+},$$ and $$\label{dual}
\text{ find} \;\hat x \in L^\perp \cap {\mathbb{R}}^n_{+}.$$ That is, Algorithm \[algo.MPRA\] terminates with points in the relative interiors of $L \cap {\mathbb{R}}^n_{+}$ and $L^\perp \cap {\mathbb{R}}^n_{+}$. In particular, if \[primal.again\] is strictly feasible then Algorithm \[algo.MPRA\] finds a point in $L \cap {\mathbb{R}}^n_{++}.$ Likewise, if \[dual\] is strictly feasible then Algorithm \[algo.MPRA\] finds a point in $L^\perp \cap {\mathbb{R}}^n_{++}.$
Unlike the Projection and Rescaling Algorithm in [@PenaS16] and the algorithms in [@Chub12; @Chub15; @LiRT15], Algorithm \[algo.MPRA\] requires no prior feasibility assumptions about \[primal.again\] or \[dual\].
2. We enforce an upper bound on the size of the entries of the diagonal rescaling matrices maintained throughout Algorithm \[algo.MPRA\]. The upper bound achieves two major goals. First, it prevents numerical overflow. Second, it yields a natural criterion to determine when the algorithm has found points in the relative interiors of $L \cap {\mathbb{R}}^n_{+}$ and $L^\perp \cap {\mathbb{R}}^n_{+}$.
3. In contrast to the rescaling step in the original Projection and Rescaling Algorithm that rescales $L$ only in one direction at each round, the rescaling step in Algorithm \[algo.MPRA\] performs a more aggressive rescaling along [*all*]{} directions that can improve the conditioning of the problem. This enhancement is fairly similar to a multiple direction rescaling step introduced by Louren[ç]{}o et al [@LourKMT16]. It is also similar in spirit to an idea proposed by Roos [@Roos18] to obtain sharper rescaling via a modified basic procedure.
The first two enhancements above enable Algorithm \[algo.MPRA\] to apply without the kind of feasibility assumption required by the original Projection and Rescaling Algorithm, namely that $L \cap {\mathbb{R}}^n_{++} \ne \emptyset$ or $L^\perp \cap {\mathbb{R}}^n_{++} \ne \emptyset$, and without concerns about numerical overflow due to excessively large rescaling. On the flip side, the correct termination of Algorithm \[algo.MPRA\] readily follows from the results in [@PenaS16] only when one of the conditions $L \cap {\mathbb{R}}^n_{++} \ne \emptyset$ or $L^\perp \cap {\mathbb{R}}^n_{++} \ne \emptyset$ holds and the upper bound $U$ on the rescaling matrices is sufficiently large. Although our numerical experiments demonstrate that Algorithm \[algo.MPRA\] correctly terminates in the majority of the cases, a formal proof of correct termination in the case when both $L \cap {\mathbb{R}}^n_{++} =\emptyset$ and $ L^\perp \cap {\mathbb{R}}^n_{++} = \emptyset$ is not yet known. The natural conjecture is that Algorithm \[algo.MPRA\] correctly terminates when $U$ is sufficiently large. We will tackle this interesting theoretical question in future work.
The basic procedure is the main building block of Algorithm \[algo.MPRA\]. We make a separate comparison of the performance of the following four different schemes for the basic procedure proposed in [@PenaS16]: perceptron, von Neumann, von Neumann with away-steps, and smooth perceptron schemes. These four schemes are described in Algorithm \[alg:perceptron\] through Algorithm \[alg:smooth\] below. According to the theoretical results established in [@PenaS16], the first three of these schemes have similar convergence rates while Algorithm \[alg:smooth\] (the smooth perceptron scheme) has a faster convergence rate but each main iteration of this scheme is computationally more expensive. Section \[sec:experiments.basic\] describes various numerical experiments that compare the performance of the four schemes. The experiments consistently demonstrate that indeed Algorithm \[alg:smooth\] has the best performance by a wide margin. Therefore, we use Algorithm \[alg:smooth\] as the basic procedure within Algorithm \[algo.MPRA\]. Section \[sec:experiments.mpra\] describes results on various numerical experiments that test the performance of Algorithm \[algo.MPRA\]. Our results demonstrate the significant advantage of using aggressive rescaling [ and confirm a similar observation by Chubanov [@Chub15 Section 4.2]]{}. They also provide promising evidence that Algorithm \[algo.MPRA\] can solve instances of moderate size.
The two main sections of the paper are organized as follows. In Section \[sec:rescaling\] we describe our enhanced version of the projection and rescaling algorithm. This section also recalls four different schemes for the basic procedure proposed in [@PenaS16]. In Section \[sec:experiments\] we present several sets of numerical experiments. To generate interesting problem instances, we devise a procedure to generate problem instances with arbitrary level of conditioning. We perform several numerical experiments to compare the different schemes for the basic procedure. We also perform a number of experiments to test the effectiveness of the enhanced projection and rescaling algorithm.
Enhanced projection and rescaling algorithm {#sec:rescaling}
===========================================
Main algorithm
--------------
Algorithm \[algo.MPRA\] below describes an enhanced version of the Projection and Rescaling Algorithm from [@PenaS16]. The algorithm relies on the following characterization of the relative interiors of $L\cap {\mathbb{R}}^n_+$ and $L^\perp\cap {\mathbb{R}}^n_+$ for a linear subspace $L \subseteq {\mathbb{R}}^n$. The characterization in Proposition \[prop.partition\] is a consequence of the classical Goldman-Tucker partition theorem as detailed in [@CheuCP03].
\[prop.partition\] Let $L \subseteq {\mathbb{R}}^n$ be a linear subspace. Then there exists a unique partition $B \cup N = \{1,\ldots,n\}$ such that $${{\rm ri\,}}(L\cap {\mathbb{R}}^n_+) = \{x\in L \cap {\mathbb{R}}^n_+: x_i > 0 \text{ for all } i \in B\},$$ and $${{\rm ri\,}}(L^\perp\cap {\mathbb{R}}^n_+) = \{\hat x \in L^\perp \cap {\mathbb{R}}^n_+: \hat x_i > 0 \text{ for all } i \in N\}.$$ In particular, $x\in {{\rm ri\,}}(L\cap {\mathbb{R}}^n_+)$ and $\hat x \in {{\rm ri\,}}(L^\perp\cap {\mathbb{R}}^n_+)$ if and only if $x\in L,\; \hat x\in L^\perp$, and $$\label{eq.relint}
x_B > 0, \, x_N = 0 \; \text{ and } \; \hat x_N > 0, \, \hat x_B = 0.$$
Observe that $B = \{1,\dots,n\}$ and $N = \emptyset$ when $L\cap {\mathbb{R}}^n_{++} \ne \emptyset$. Similarly, $B = \emptyset$ and $N = \{1,\dots,n\}$ when $L^\perp\cap {\mathbb{R}}^n_{++} \ne \emptyset$. In both of these cases we shall say that the partition $(B,N)$ is [*trivial*]{}. We shall say that the partition $(B,N)$ is [*non-trivial*]{} otherwise, that is, when $B\ne \emptyset$ and $N \ne \emptyset$.
Each main iteration of Algorithm \[algo.MPRA\] applies the following steps. First, apply the basic procedure to $D(L) \cap {\mathbb{R}}^n_+$ and $\hat D(L^\perp) \cap {\mathbb{R}}^n_+$ for some diagonal rescaling matrices $D$ and $\hat D$. Next, identify a potential partition $(B,N)$ and terminate if the basic procedures yield $x\in L$ and $\hat x \in L^\perp$ satisfying \[eq.relint\]. Otherwise, update the rescaling matrices $D$ and $\hat D$ and proceed to the next main iteration: Apply the basic procedure to $D(L) \cap {\mathbb{R}}^n_+$ and $\hat D(L^\perp) \cap {\mathbb{R}}^n_+$, etc.
To prevent numerical overflow, Algorithm \[algo.MPRA\] caps the entries of the rescaling matrices $D$ and $\hat D$ by some pre-specified upper bound $U$. This upper bound naturally determines a numerical threshold to verify if the algorithm has found solutions in the relative interiors of $L \cap {\mathbb{R}}^n_+$ and $L^\perp \cap {\mathbb{R}}^n_+$. More precisely, Algorithm \[algo.MPRA\] will terminate with points $x \in L$ and $\hat x \in L^\perp$ satisfying the following approximation of \[eq.relint\]: $$x_B > 0, \; \|x_N\|_\infty \le \frac{1}{U} \|x\|_\infty \; \text{ and } \;
\hat x_N > 0, \; \|\hat x_B\|_\infty \le \frac{1}{U} \|\hat x\|_\infty.$$
1. ([**Initialization**]{}) Let $D := I$ and $\hat D := I$. Let $P := P_L$ and $\hat P := P_{L^\perp}$. Let $U >0$ be a pre-specified upper bound on the entries of the rescaling matrices $D$ and $\hat D$.
2. ([**Basic procedure**]{}) Find $z \gneqq 0$ such that either $Pz > 0$ or $\|(Pz)^+\|_1 \le \frac{1}{2} \|z\|_\infty$. Find $\hat z \gneqq 0$ such that either $\hat P\hat z > 0$ or $\|(\hat P\hat z)^+\|_1 \le \frac{1}{2} \|\hat z\|_\infty$.
3. ([**Partition identification**]{}) Let $x:=D^{-1}Pz$ and $\hat x := \hat D^{-1}\hat P\hat z$. Let $B:=\{i: \vert\hat x_i\vert < \frac{1}{U}\|\hat x\|_\infty\}$ and $N:=\{i:\vert x_i \vert< \frac{1}{U}\|x\|_\infty\}$. If $x$ and $\hat x$ satisfy the above approximation of \[eq.relint\] for this $(B,N)$, then HALT and output $x \in {{\rm ri\,}}(L \cap {\mathbb{R}}^n_{+}), \; \hat x \in {{\rm ri\,}}(L^\perp \cap {\mathbb{R}}^n_{+})$.
4. ([**Rescaling step**]{}) Let $e:= \left(z/\|( P z)^+\|_1 -1\right)^+$, $D:=\min\left((I+{\mbox{\rm diag}}(e))D,U\right)$ and $P:=P_{D(L)}$. Let $\hat e:= \left(\hat z/\|(\hat P\hat z)^+\|_1 -1\right)^+$, $\hat D:=\min\left((I+{\mbox{\rm diag}}(\hat e))\hat D,U\right)$ and $\hat P:=P_{\hat D(L^\perp)}$. Go back to step 2.
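The rescaling step admits a compact vectorized form. The following Python sketch is our own (the paper's implementation is in MATLAB); it stores the diagonal of $D$ as a vector and performs the capped multi-direction update:

```python
import numpy as np

def rescale_update(D_diag, z, Pz, U):
    """Sketch of the multi-direction rescaling step:
    e = (z/||(Pz)^+||_1 - 1)^+ and D <- min((I + diag(e)) D, U),
    with the diagonal of D stored as the vector D_diag.
    Assumes ||(Pz)^+||_1 > 0."""
    pos_mass = np.maximum(Pz, 0.0).sum()      # ||(Pz)^+||_1
    e = np.maximum(z / pos_mass - 1.0, 0.0)   # all improving directions at once
    return np.minimum((1.0 + e) * D_diag, U)  # entrywise cap at U

# Example: only the component with z_i > ||(Pz)^+||_1 gets rescaled.
D_new = rescale_update(np.array([1.0, 1.0]),
                       np.array([1.0, 0.2]),
                       np.array([0.25, 0.25]), U=8.0)
```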
Basic procedure {#basicprocedures}
---------------
Let $P:{\mathbb{R}}^n \rightarrow {\mathbb{R}}^n$ be the orthogonal projection onto a linear subspace of ${\mathbb{R}}^n$ and $\epsilon \in (0,1)$. The goal of the basic procedure is to find a non-zero $z\in {\mathbb{R}}^n_+$ such that either $Pz > 0$ or $\|(Pz)^+\|_1 \le \epsilon \|z\|_\infty$. We choose $\epsilon = 1/2$ when the basic procedure is used within Algorithm \[algo.MPRA\]. We next recall the four schemes for the basic procedure proposed in [@PenaS16]. Algorithm \[alg:perceptron\] describes the simplest of these schemes, namely the perceptron scheme. In the algorithms below $\Delta_{n-1}$ denotes the standard simplex in ${\mathbb{R}}^n$, that is, $$\Delta_{n-1} = \{x\in {\mathbb{R}}^n_+: \|x\|_1 = 1\}.$$
1. Pick $z_0\in \Delta_{n-1}$ and set $t:=0$.
2. Pick $u \in \Delta_{n-1}$ such that ${\left\langle u , Pz_t \right\rangle} \le 0$.
3. Let $z_{t+1}:=\left(1-\frac{1}{t+1}\right) z_t + \frac{1}{t+1} u.$
4. Set $t :=t+1$ and return to step 2.
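A minimal Python sketch of this scheme (our own code; `perceptron_bp` and the iteration cap are our choices, and the termination test follows the specification of the basic procedure above):

```python
import numpy as np

def perceptron_bp(P, eps=0.5, max_iter=10000):
    """Perceptron basic procedure (sketch): returns z >= 0 with either
    Pz > 0 or ||(Pz)^+||_1 <= eps * ||z||_inf (or the last iterate)."""
    n = P.shape[0]
    z = np.ones(n) / n                     # z_0 in the simplex
    for t in range(max_iter):
        v = P @ z
        if np.all(v > 0) or np.maximum(v, 0.0).sum() <= eps * np.abs(z).max():
            return z
        u = np.zeros(n)
        u[np.argmin(v)] = 1.0              # a vertex with <u, Pz_t> <= 0
        z = (1 - 1.0 / (t + 1)) * z + (1.0 / (t + 1)) * u
    return z

# Example: P projects onto L = span{(1,1)}, so L ∩ R^2_++ is nonempty
# and the procedure stops with Pz > 0.
P = np.full((2, 2), 0.5)
z = perceptron_bp(P)
```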
Algorithm \[alg:vonNeumann\] describes the second basic procedure scheme, namely the von Neumann scheme. This scheme is a greedy variant of the perceptron scheme. This algorithm relies on the following mapping $$u(v) := {\mathop{\mathsf{argmin}}}_{u\in\Delta_{n-1}} {\left\langle u , v \right\rangle}.$$ At each iteration, Algorithm \[alg:vonNeumann\] chooses $z_{t+1}$ as the convex combination of $z_t$ and $u(Pz_t)$ that minimizes $\|Pz_{t+1}\|_2$.
1. Pick $z_0\in \Delta_{n-1}$ and set $t:=0$.
2. Let $u=u(Pz_t)$ and $z_{t+1}:= z_t + \theta_t(u-z_t)$ where $$\theta_t = {\mathop{\mathsf{argmin}}}_{\theta\in[0,1]} \|P(z_t + \theta (u-z_t))\|_2^2 =
\dfrac{\|Pz_t\|_2^2 -{\left\langle u , Pz_t \right\rangle}}{\|Pz_t\|_2^2 + \|Pu\|_2^2 - 2{\left\langle u , Pz_t \right\rangle}}.$$
3. Set $t :=t+1$ and return to step 2.
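A single von Neumann iteration can be sketched in Python as follows (our own code; the exact line search uses the closed-form step length displayed above):

```python
import numpy as np

def von_neumann_step(P, z):
    """One von Neumann iteration (sketch): move from z toward the simplex
    vertex u = u(Pz) minimizing <u, Pz>, with exact line search on ||Pz||_2."""
    v = P @ z
    i = int(np.argmin(v))                 # u(Pz) = e_i, so <u, Pz> = v[i]
    u = np.zeros_like(z)
    u[i] = 1.0
    Pu = P @ u
    # theta = (||Pz||^2 - <u,Pz>) / (||Pz||^2 + ||Pu||^2 - 2<u,Pz>)
    theta = (v @ v - v[i]) / (v @ v + Pu @ Pu - 2.0 * v[i])
    return z + theta * (u - z)

# Example: P projects onto span{(1,-1)}; a single step from e_1 reaches
# the minimizer of ||Pz||_2 over the simplex.
P = np.array([[0.5, -0.5], [-0.5, 0.5]])
z1 = von_neumann_step(P, np.array([1.0, 0.0]))
```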
Algorithm \[alg:vonNeumann.away\] describes the third basic procedure scheme, namely the von Neumann with away steps scheme, which in turn is a variant of the von Neumann scheme. Algorithm \[alg:vonNeumann.away\] relies on the following construction. Define the [*support*]{} of a current iterate $z$ as $S(z):=\{i\in\{1,\ldots,n\}: z_i > 0\}.$ At each main iteration Algorithm \[alg:vonNeumann.away\] chooses between two different kinds of steps: [*regular*]{} steps as in Algorithm \[alg:vonNeumann\] and [*away steps*]{} that decrease the weight on a component of $z$ belonging to $S(z)$. The away steps are computed via the mapping $$v(z):={\mathop{\mathsf{argmax}}}_{v\in\Delta_{n-1}\atop S(v) \subseteq S(z)} {\left\langle v , Pz \right\rangle}.$$
1. Pick $z_0\in \Delta_{n-1}$ and set $t:=0$.
2. Let $u = u(Pz_t)$ and $v = v(z_t)$.
3. If $\|Pz_t\|^2 - {\left\langle u , Pz_t \right\rangle} > {\left\langle v , Pz_t \right\rangle} - \|Pz_t\|^2$ then (regular step) let $a:= u-z_t$ and $\theta_{\max} := 1$; else (away step) let $a:= z_t-v$ and $\theta_{\max} := \frac{{\left\langle v , z_t \right\rangle}}{1-{\left\langle v , z_t \right\rangle}}$.
4. Let $z_{t+1}:= z_t + \theta a$ where $$\theta = {\mathop{\mathsf{argmin}}}_{\theta\in[0,\theta_{\max}]} \|P(z_t + \theta a)\|^2
=
\min\left\{ \theta_{\max} , -\dfrac{{\left\langle z_t , Pa \right\rangle}}{\|Pa\|^2}\right\}.$$
5. Set $t :=t+1$ and return to step 2.
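A Python sketch of one iteration, including the choice between regular and away steps (our own code, following the update rules above):

```python
import numpy as np

def vna_step(P, z):
    """One von Neumann-with-away-steps iteration (sketch)."""
    v = P @ z
    i = int(np.argmin(v))                # regular direction u(Pz_t) = e_i
    S = np.flatnonzero(z > 0)            # support of z_t
    j = int(S[np.argmax(v[S])])          # away direction v(z_t) = e_j
    nz2 = float(v @ v)                   # ||Pz_t||^2
    if nz2 - v[i] > v[j] - nz2:          # regular step toward e_i
        a = -z.copy()
        a[i] += 1.0                      # a = e_i - z_t
        theta_max = 1.0
    else:                                # away step from e_j
        a = z.copy()
        a[j] -= 1.0                      # a = z_t - e_j
        theta_max = z[j] / (1.0 - z[j])  # <v, z_t>/(1 - <v, z_t>)
    Pa = P @ a
    theta = min(theta_max, -(z @ Pa) / (Pa @ Pa))   # exact line search, capped
    return z + theta * a

# Example: P projects onto span{(1,-1)}; a regular step from (0.75, 0.25)
# reaches the minimizer of ||Pz|| over the simplex.
P = np.array([[0.5, -0.5], [-0.5, 0.5]])
z2 = vna_step(P, np.array([0.75, 0.25]))
```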
Algorithm \[alg:smooth\] describes the fourth basic procedure scheme, namely the smooth perceptron scheme, which in turn is a variant of the perceptron scheme that relies on the following smooth version of the mapping $u(\cdot)$. Let $\bar u \in \Delta_{n-1}$ be fixed. For $\mu > 0$ let $$u_\mu(v) := {\mathop{\mathsf{argmin}}}_{u\in\Delta_{n-1}}\left\{{\left\langle u , v \right\rangle} + \frac{\mu}{2}\|u-\bar u\|^2\right\}.$$
1. Let $u_0 := \bar u$, $\mu_0 := 2$, $z_0:=u_{\mu_0}(Pu_0)$, and $t:=0$.
2. Let $\theta_t:=\frac{2}{t+3}$.
3. Let $u_{t+1} :=(1-\theta_t)u_t + \theta_t z_t + \theta_t^2 u_{\mu_t}(Pu_t)$.
4. Let $\mu_{t+1} := (1-\theta_t)\mu_t$.
5. Let $z_{t+1} := (1-\theta_t)z_t + \theta_t u_{\mu_{t+1}}(Pu_{t+1})$.
6. Set $t :=t+1$ and return to step 2.
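Since $u_\mu(v)$ minimizes a quadratic over the simplex, it equals the Euclidean projection $\Pi_{\Delta_{n-1}}(\bar u - v/\mu)$. A Python sketch of the scheme using a standard simplex-projection routine (our own code; helper names are ours):

```python
import numpy as np

def proj_simplex(y):
    """Euclidean projection of y onto the standard simplex Delta_{n-1}."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.flatnonzero(u > css / np.arange(1, y.size + 1))[-1]
    return np.maximum(y - css[rho] / (rho + 1), 0.0)

def smooth_perceptron_bp(P, eps=0.5, max_iter=10000):
    """Smooth perceptron basic procedure (sketch),
    with u_mu(v) = proj_simplex(ubar - v/mu)."""
    n = P.shape[0]
    ubar = np.ones(n) / n
    u_mu = lambda v, mu: proj_simplex(ubar - v / mu)
    u, mu = ubar, 2.0
    z = u_mu(P @ u, mu)
    for t in range(max_iter):
        v = P @ z
        if np.all(v > 0) or np.maximum(v, 0.0).sum() <= eps * np.abs(z).max():
            return z
        theta = 2.0 / (t + 3)
        u = (1 - theta) * u + theta * z + theta**2 * u_mu(P @ u, mu)
        mu = (1 - theta) * mu
        z = (1 - theta) * z + theta * u_mu(P @ u, mu)
    return z

# Example: P projects onto span{(1,1)}; the scheme stops with Pz > 0.
P = np.full((2, 2), 0.5)
z = smooth_perceptron_bp(P)
```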
Numerical experiments {#sec:experiments}
=====================
This section describes various sets of numerical experiments that test the Enhanced Projection and Rescaling Algorithm described as Algorithm \[algo.MPRA\] above. We also performed numerical experiments to compare the four schemes for the basic procedure, namely Algorithm \[alg:perceptron\] through Algorithm \[alg:smooth\] on suitably generated instances.
Schemes to construct challenging instances {#sec.construct}
------------------------------------------
We should note that except for the case when the dimension of the subspace $L$ is about half the dimension of the ambient space ${\mathbb{R}}^n$, a naive approach to generate random instances yields results of limited interest. More precisely, suppose $L \subseteq {\mathbb{R}}^n$ is a random subspace generated via $L = \ker(A)$ where the entries of $A\in{\mathbb{R}}^{m\times n}$ are independently drawn from a standard normal distribution. From a classical result on coverage processes by Wendel [@Wend62 Equation (1)] it follows that $$\label{eq.prob}
\mathbb{P}(L^\perp\cap{\mathbb{R}}^n_{++} \ne \emptyset) = 2^{1-n}\sum_{k=0}^{m-1} {{n-1}\choose k}\;\;\text{ and }\;\;\mathbb{P}(L\cap{\mathbb{R}}^n_{++} \ne \emptyset) = 2^{1-n}\sum_{k=m}^{n-1} {{n-1}\choose k}.$$ In particular, \[eq.prob\] implies that if $n$ is even and $\dim(L) = n-m = n/2$ then $L\cap {\mathbb{R}}^n_{++} \ne \emptyset$ with probability 0.5. Furthermore, \[eq.prob\] implies that if $\dim(L) = n-m \gg n/2$ then $L\cap {\mathbb{R}}^n_{++} \ne \emptyset$ with high probability. Similarly, \[eq.prob\] implies that if $\dim(L) = n-m \ll n/2$ then $L^\perp\cap{\mathbb{R}}^n_{++} \ne \emptyset$ with high probability. The identity \[eq.prob\] also suggests that when $L\subseteq {\mathbb{R}}^n$ is a random subspace and $\dim(L)$ is far enough from $n/2$, then with high probability $\max\{\delta(L\cap {\mathbb{R}}^n_{++}),\delta(L^\perp\cap {\mathbb{R}}^n_{++})\}$ is bounded away from zero, as there is extra room for either $L$ or $L^\perp$ to cut deep inside ${\mathbb{R}}^n_{++}$. The latter fact can be rigorously stated and justified, albeit in somewhat technical terms, by using the machinery on coverage processes and probabilistic analysis of condition numbers developed by Bürgisser et al [@BurgCL10]. Our numerical experiments confirm that indeed most random instances $L$ with either $\dim(L)\gg n/2$ or $\dim(L)\ll n/2$ are easily solvable without rescaling (see Table \[table.interior.naive\] in Section \[sec:experiments\]). Therefore such random instances are not particularly interesting.
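Wendel's formula is easy to evaluate numerically; the following stdlib-only Python snippet (function names ours) computes both probabilities:

```python
from math import comb

def prob_dual_strict(m, n):
    """P(L_perp ∩ R^n_++ ≠ ∅) for L = ker(A), A an m-by-n standard
    Gaussian matrix (Wendel's formula)."""
    return sum(comb(n - 1, k) for k in range(m)) / 2 ** (n - 1)

def prob_primal_strict(m, n):
    """P(L ∩ R^n_++ ≠ ∅) under the same random model."""
    return sum(comb(n - 1, k) for k in range(m, n)) / 2 ** (n - 1)

# With n even and m = n/2, the primal problem is strictly feasible
# with probability exactly 0.5.
p_half = prob_primal_strict(500, 1000)
```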
We next describe schemes to generate collections of more interesting and challenging instances. First, we describe how to generate random subspaces $L\subseteq {\mathbb{R}}^n$ such that $L\cap {\mathbb{R}}^n_{++}\ne \emptyset$ with a [*controlled*]{} condition measure $\delta(L\cap {\mathbb{R}}^n_{++})$. We subsequently describe how to generate random subspaces $L\subseteq {\mathbb{R}}^n$ such that both $L\cap {\mathbb{R}}^n_{+}$ and $L^\perp \cap {\mathbb{R}}^n_{+}$ have non-trivial relative interiors.
\[prop.control.cn\] Let $\bar x\in{\mathbb{R}}^n_{++}$ and $\bar u\in {\mathbb{R}}^n_+$ be such that $\|\bar x\|_\infty = 1$, $\|\bar u\|_1 = n$ and $\bar u_j = 0$ whenever $\bar x_j < 1$ for $j=1,\dots,n$. Let $A = {\begin{bmatrix} a_1 & \cdots & a_m \end{bmatrix}}{{^{\rm T}}}\in {\mathbb{R}}^{m\times n}$ be such that $a_1 = \bar u-\bar X^{-1}{\mathbf{1}}$ and ${\left\langle a_j , \bar x \right\rangle} = 0$ for $j=2,\dots,m$ where $\bar X\in {\mathbb{R}}^{n\times n}$ is a diagonal matrix with the entries of $\bar x$ on the diagonal and ${\mathbf{1}}\in{\mathbb{R}}^n$ is the vector of ones. Then for $L = \ker(A) := \{x\in {\mathbb{R}}^n : Ax = 0\}$ we have $$\bar x = {\mathop{\mathsf{argmax}}}\left\{\prod_{j=1}^n x_j: x\in L\cap{\mathbb{R}}^n_{++}, \|x\|_\infty = 1\right\}.$$ In particular, $L\cap{\mathbb{R}}^n_{++} \ne \emptyset$ and $\delta(L\cap{\mathbb{R}}^n_{++}) = \prod_{j=1}^n \bar x_j.$
It suffices to show that $$\begin{aligned}
\label{deltaInfinityrelax}
\bar x &= {\mathop{\mathsf{argmax}}}_x\left\{\ln\left(\prod_{i=1}^n x_i\right): x\in L \cap {\mathbb{R}}^n_{++}, \|x\|_\infty = 1\right\} \notag\\
& ={\mathop{\mathsf{argmax}}}_x\left\{\ln\left(\prod_{i=1}^n x_i\right): x\in L \cap {\mathbb{R}}^n_{++}, \|x\|_\infty \leq 1\right\}. \end{aligned}$$ The conditions on the rows of $A$ readily ensure that $\bar x \in L \cap {\mathbb{R}}^n_{++}$. Thus $\bar x$ is a feasible solution to \[deltaInfinityrelax\]. Since $\zeta = \bar X^{-1} {\mathbf{1}}$ is the gradient of the objective function in \[deltaInfinityrelax\] at $\bar x$, to show that $\bar x$ is optimal it suffices to show that ${\left\langle \zeta , x - \bar x \right\rangle} \leq 0$ for any feasible solution to \[deltaInfinityrelax\]. Take $x\in L\cap{\mathbb{R}}^n_{++}$ with $\|x\|_\infty \leq 1$. Since $x\in L$, we have ${\left\langle \bar X^{-1}{\mathbf{1}}- \bar u , x \right\rangle} = {\left\langle a_1 , x \right\rangle} = 0$. Therefore ${\left\langle \zeta , x-\bar x \right\rangle} = {\left\langle \bar X^{-1} {\mathbf{1}}, x \right\rangle} - n \le {\left\langle \bar u , x \right\rangle} - \|\bar u\|_1\|x\|_{\infty} \leq 0$. The last two steps follow from $\|\bar u \|_1 = n$ (together with $\|x\|_\infty \le 1$) and Hölder’s inequality, respectively.
Proposition \[prop.control.cn\] readily suggests a scheme to generate subspaces $L \subseteq {\mathbb{R}}^n$ such that the condition measure $\delta(L\cap {\mathbb{R}}^n_{++})$ is positive but as small as we wish: pick $\bar x\in {\mathbb{R}}^n_{++}$ with $\|\bar x\|_\infty =1$ and generate $\bar u\in {\mathbb{R}}^n_+, A \in {\mathbb{R}}^{m\times n},$ and $L = \ker(A)$ as in Proposition \[prop.control.cn\]. We next explain how Proposition \[prop.control.cn\] can be further leveraged to generate $L \subseteq {\mathbb{R}}^n$ so that both $L\cap {\mathbb{R}}^n_+$ and $L^\perp\cap {\mathbb{R}}^n_+$ have non-trivial relative interiors. Suppose $(B,N)$ is a partition of $\{1,\dots,n\}$ and $$\label{eq.block.matrix}
A = \begin{bmatrix}A_{BB} & A_{NB}\\ 0& A_{NN}\end{bmatrix}$$ is such that $L_B = \ker(A_{BB}) \subseteq {\mathbb{R}}^B$ and $L_N = \text{Im}(A_{NN}{{^{\rm T}}}) \subseteq {\mathbb{R}}^N$ satisfy $L_B\cap {\mathbb{R}}^B_{++}\ne\emptyset$ and $
L_N\cap {\mathbb{R}}^N_{++}\ne\emptyset$. If $A_{NN}$ is full row-rank then it readily follows that the subspaces $L = \ker(A)$ and $L^\perp = \text{Im}(A{{^{\rm T}}})$ satisfy $${{\rm ri\,}}(L\cap {\mathbb{R}}^n_+) = \{x\in L\cap {\mathbb{R}}^n_+: x_i >0 \text{ for all } i \in B\}$$ and $${{\rm ri\,}}(L^\perp\cap {\mathbb{R}}^n_+) = \{\hat x\in L^\perp\cap {\mathbb{R}}^n_+: \hat x_i >0 \text{ for all } i \in N\}.$$
Hence we can generate subspaces $L \subseteq {\mathbb{R}}^n$ such that both $L\cap {\mathbb{R}}^n_+$ and $L^\perp\cap {\mathbb{R}}^n_+$ have non-trivial relative interiors by proceeding as follows. First, choose a partition $(B,N)$ of $\{1,\dots,n\}.$ Next, use the construction suggested by Proposition \[prop.control.cn\] to generate full row-rank matrices $A_{BB},A_{NN}$ such that $\ker(A_{BB})\cap {\mathbb{R}}^B_{++}\ne\emptyset$ and $
\text{Im}(A_{NN}{{^{\rm T}}})\cap {\mathbb{R}}^N_{++}\ne\emptyset$. Finally, let $L = \ker(A)$ where $A$ is of the form \[eq.block.matrix\] for some $A_{NB}$ of appropriate size.
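The first ingredient of this procedure, the controlled-condition generator of Proposition \[prop.control.cn\], can be sketched in Python as follows (our own code; for simplicity $\bar u$ puts all of its mass on a single index $i$ with $\bar x_i = 1$, which satisfies the hypotheses of the proposition):

```python
import numpy as np

def controlled_instance(xbar, m, rng):
    """Sketch: build A in R^{m x n} with ker(A) ∩ R^n_++ ≠ ∅ and
    delta(ker(A) ∩ R^n_++) = prod(xbar), per Proposition prop.control.cn.
    Assumes xbar > 0 and ||xbar||_inf = 1."""
    n = xbar.size
    i = int(np.argmax(xbar))       # an index with xbar_i = 1
    ubar = np.zeros(n)
    ubar[i] = n                    # ||ubar||_1 = n, supported where xbar = 1
    A = np.empty((m, n))
    A[0] = ubar - 1.0 / xbar       # a_1 = ubar - Xbar^{-1} 1
    for j in range(1, m):          # remaining rows: random, orthogonal to xbar
        w = rng.standard_normal(n)
        A[j] = w - (w @ xbar) / (xbar @ xbar) * xbar
    return A

xbar = np.array([1.0, 0.5, 0.25, 0.125])   # delta = prod(xbar) = 1/64
A = controlled_instance(xbar, 2, np.random.default_rng(0))
```

By construction every row of $A$ is orthogonal to $\bar x$, so $\bar x \in \ker(A) \cap {\mathbb{R}}^n_{++}$.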
Comparison of basic procedure schemes {#sec:experiments.basic}
-------------------------------------
The computational experiments summarized in this section compare the performance of the four schemes for the basic procedure, namely Algorithm \[alg:perceptron\] through Algorithm \[alg:smooth\]. We implemented these algorithms in MATLAB and ran them on collections of instances defined by $L
= \text{ker}(A)$, for $A \in {\mathbb{R}}^{m\times n}$. We used the QR-factorization to obtain the orthogonal projection mappings $P = P_L$ and $\hat P = P_{L^\perp}$.
We performed two main sets of experiments. The first set of experiments contains instances $L=\ker(A)$ where the entries of $A \in {\mathbb{R}}^{m\times n}$ are independently drawn from a standard normal distribution and $m = n/2$ for $n=200, 500, 1000, 2000.$ When $m$ significantly differs from $n/2$, random instances generated in this way are uninteresting as they can easily be solved by any of the four schemes. The second set of experiments contains more challenging instances $L=\ker(A)$ where $A\in {\mathbb{R}}^{m\times n}$ is generated via the procedure suggested by Proposition \[prop.control.cn\] for $n=1000$, $m = 100, 200, 800, 900$. More precisely, we generated $\bar x \in {\mathbb{R}}^n_{++}$ as follows. First, we set a randomly chosen subset of its entries uniformly at random between 0 and 0.001. Second, we set the remaining entries uniformly at random between 0 and 1. Third, we scaled the entries of $\bar x$ to obtain $\|\bar x\|_\infty = 1$. Once we generated $\bar x$ in this fashion, we generated $A \in {\mathbb{R}}^{m\times n}$ as in Proposition \[prop.control.cn\].
Table \[summaryTableeps1\] through Table \[summaryTableeps4.control\] summarize the results on various sets of experiments. Each row corresponds to a set of 1000 instances. To keep the number of iterations and CPU time manageable, we enforced an upper bound of 10000 iterations for all four schemes. The first two columns in each table indicate the size of $A \in {\mathbb{R}}^{m\times n}$. The other columns display three numbers for each of the four schemes: the average number of iterations, the average CPU time, and the success rate on the batch of 1000 instances of size $m$ by $n$. The success rate is the proportion of instances on which the scheme terminates normally before reaching the upper bound of 10000 iterations.
Table \[summaryTableeps1\] and Table \[summaryTableeps4\] display the results for the first set of experiments when $m=n/2$ and $A \in {\mathbb{R}}^{m\times n}$ is randomly generated without any control on the conditioning of $L\cap{\mathbb{R}}^n_{++}$. Table \[summaryTableeps1.control\] and Table \[summaryTableeps4.control\] display similar summaries for the second set of experiments where we generate $A\in {\mathbb{R}}^{m\times n}$ so that $L\cap{\mathbb{R}}^n_{++}$ has a controlled condition measure via the procedure suggested by Proposition \[prop.control.cn\]. The tables summarize results for two values of $\epsilon$: $\epsilon = 10^{-1}$ (large), and $\epsilon = 10^{-4}$ (small).
$m$ $n$ perceptron VN VNA smooth
------ ------ ----------------------- ----------------------- ----------------------- ----------------------- -- -- -- -- --
100 200 (6956.28, 0.27, 0.74) (5070.41, 0.26, 0.69) (3021.73, 0.23, 0.95) [(27.04, 0.03, 1)]{}
250 500 (9963.91, 0.85, 0.02) (9207.1, 0.26, 0.2) (8737.9, 0.23, 0.38) [ (43.88, 0.13, 1)]{}
500 1000 (10000, 8.67, 0) (9981.29, 8.84, 0.01) (9992.46, 14.3, 0.01) [(58.50, 0.42, 1)]{}
1000 2000 (10000, 34.72, 0) (10000, 35.54, 0) (10000, 67.24, 0) [(80.21, 1.42, 1)]{}
: Results for naive random instances, large $\epsilon$ ($\epsilon = 10^{-1}$), [ and 10000 iteration limit]{}
\[summaryTableeps1\]
$m$ $n$ perceptron VN VNA smooth
------ ------ ---------------------- ---------------------- ----------------------- ----------------------- -- -- -- -- --
100 200 (8236.2, 0.33, 0.35) (5395.1, 0.28, 0.67) (5861, 0.45, 0.66) [(123.6, 0.14, 1)]{}
250 500 (9981.9, 0.94, 0.01) (9258.1, 0.99, 0.2) (9518.5, 1.64, 0.16) [ (231.8, 0.64, 1)]{}
500 1000 (10000, 8.28, 0) (9973.7, 8.39, 0.01) (9988.5, 13.57, 0.01) [(337.64, 2.15, 1)]{}
1000 2000 (10000, 35.61, 0) (10000, 36.34, 0) (10000, 68.71, 0) [(465.94, 7.87, 1)]{}
: Results for naive random instances, small $\epsilon$ ($\epsilon = 10^{-4}$), [ and 10000 iteration limit]{}
\[summaryTableeps4\]
$m$ $n$ perceptron VN VNA smooth
----- ------ ----------------------- ----------------------- ----------------------- ------------------------ -- -- -- --
100 1000 (9134.38, 0.88, 0.32) (8519.12, 8.48, 0.25) (3329.84, 6.27, 1) [(130.77, 0.30, 1)]{}
200 1000 (9649.15, 8.02, 0.21) (8645.91, 7.35, 0.26) (5005.64, 8.12, 0.98) [ (140.21, 0.27, 1)]{}
800 1000 (3383.55, 2.86, 0.87) (9798.34, 8.33, 0.03) (6566.22, 10.61, 0.7) [(220.53, 0.42, 1)]{}
900 1000 (2156.34, 1.92, 0.99) (9842.71, 8.71, 0.03) (1429.66, 2.43, 1) [(198.58, 0.39, 1)]{}
: [ Results for controlled condition instances, large $\epsilon$ ($\epsilon = 10^{-1}$), and 10000 iteration limit]{}
\[summaryTableeps1.control\]
$m$ $n$ perceptron VN VNA smooth
----- ------ ---------------------- ----------------------- ----------------------- ----------------------------
100 1000 (9961.9, 8.26, 0) (9926.9, 16.06, 0.01) (9934.6, 16.01, 0.01) [(9463.5, 17.65, 0.16)]{}
200 1000 (9952.6, 8.74, 0.01) (9942.1, 16.95, 0.01) (9950.1, 16.95, 0.01) [ (9579.9, 19.16, 0.13)]{}
800 1000 (9964.52, 8.94, 0) (9961.16, 17.15, 0) (9967.6, 17.15, 0) [(8557.5, 17.05, 0.75)]{}
900 1000 (9939.5, 8.80, 0.01) (9915.7, 16.94, 0.01) (9939.1, 16.94, 0.01) [(7537.2, 14.85, 0.97)]{}
: [ Results for controlled condition instances, small $\epsilon$ ($\epsilon = 10^{-4}$), and 10000 iteration limit]{}
\[summaryTableeps4.control\]
When $\epsilon$ is large (Table \[summaryTableeps1\] and Table \[summaryTableeps1.control\]), the algorithms often stop when the condition $\|(Pz)^+\|_1\le \epsilon { \|z\|_\infty}$ is satisfied. Not surprisingly, when $\epsilon$ is small (Table \[summaryTableeps4\] and Table \[summaryTableeps4.control\]), the basic procedures more often stop when $Pz >0$ and require a larger number of iterations and longer CPU time. Also as expected, when the instances become larger, they become more challenging and more iterations are needed to find a feasible solution.
Our numerical experiments for large $\epsilon$ demonstrate that the smooth perceptron scheme is faster, both in number of iterations and in CPU time, than any of the other three schemes. The experiments also suggest that when enforcing the 10000 iteration limit the perceptron, von Neumann, and von Neumann with away steps schemes are comparable in terms of number of iterations and CPU time. Given the evidence in favor of the smooth perceptron scheme, we use this method within the Enhanced Projection and Rescaling Algorithm.
We note that the numerical experiments for small $\epsilon$ in Table \[summaryTableeps4.control\] confirm that the scheme for generating challenging instances indeed yields instances that are difficult to solve for all schemes and thus provide an interesting testbed for the Enhanced Projection and Rescaling Algorithm.
The low success rates in some of the entries in Table \[summaryTableeps1\] through Table \[summaryTableeps4.control\] reveal that for many instances the upper limit of 10000 iterations is reached by the perceptron, von Neumann, and von Neumann with away steps schemes. Thus, as an additional robustness check, we also performed extra sets of experiments without any limit on the number of iterations. The results are summarized in Table \[new.table1\] and Table \[new.table2\]. We ran fewer instances and used $\epsilon = 10^{-1}$ to keep the experiments manageable. (Some schemes run for several million iterations on some instances.) The last four columns of Table \[new.table1\] and Table \[new.table2\] report only the average number of iterations and average CPU times since all instances are run until successful termination without iteration limit. Each row corresponds to a set of 100 instances except for the last row for $m=1000, \; n= 2000$, for which we only ran 20 instances due to time limitations. Without iteration limit, some of these instances take multiple hours of CPU time.
The results in these two tables further confirm that the smooth perceptron scheme is faster both in number of iterations and in terms of CPU time than any of the other three schemes. Furthermore, the additional experiments suggest that without iteration limit the von Neumann scheme usually requires the highest number of iterations.
$m$ $n$ perceptron VN VNA smooth
------ ------ --------------------- --------------------- --------------------- --------------------- -- -- -- -- --
100 200 (8780.80, 0.31) (24453.42, 1.11) (3054.96, 0.22) [(66.7, 0.008)]{}
250 500 (62807.14, 11.8) (565958.64, 112.7) (22853.18, 8.13) [ (122.18, 0.06)]{}
500 1000 (267348.4, 227.73) (2301999.2, 2017.9) (91897.2, 151.3) [(177.0,0.34)]{}
1000 2000 (856072.0, 2739.47) (717508.8, 2331.06) (162726.9, 1010.66) [(226.6, 1.6)]{}
: [ Results for naive random instances, large $\epsilon$ ($\epsilon = 10^{-1}$), and no iteration limit]{}
\[new.table1\]
$m$ $n$ perceptron VN VNA smooth
----- ------ ------------------- --------------------- ------------------ -------------------- -- -- -- --
100 1000 (14982.34, 15.64) (47942.98, 51.68) (3585.23, 6.97 ) [(127.93, 0.29)]{}
200 1000 (16037.40, 16.67) (57652.07, 62.43) (5152.81, 9.87) [(146.49, 0.33)]{}
800 1000 (9157.77, 8.8) (892550.84, 905.88) (7243.26, 13.11) [ (220.8, 0.47)]{}
900 1000 (1975.82, 1.93) (692402.25, 695.27) (1410.76, 2.46) [(199.8, 0.43)]{}
: [ Results for controlled condition instances, large $\epsilon$ ($\epsilon = 10^{-1}$), and no iteration limit]{}
\[new.table2\]
Performance of the Enhanced Projection and Rescaling Algorithm {#sec:experiments.mpra}
--------------------------------------------------------------
This section describes the performance of Algorithm \[algo.MPRA\] on two main sets of problem instances. The first set contains instances of $L = \ker(A)$ for $A\in {\mathbb{R}}^{m\times n}$ with $L \cap {\mathbb{R}}^n_{++} \ne \emptyset$ generated via the approach based on Proposition \[prop.control.cn\] as described in Section \[sec.construct\]. The second set of instances $L = \ker(A)$ is also generated via a similar approach but ensuring that both ${{\rm ri\,}}(L \cap {\mathbb{R}}^n_{+}) \ne \{0\}$ and ${{\rm ri\,}}(L^\perp \cap {\mathbb{R}}^n_{+}) \ne \{0\}$. Most of these instances are sufficiently challenging that they cannot be solved by the basic procedure (via the smooth perceptron scheme) without rescaling.
We ran Algorithm \[algo.MPRA\] with $U=10^{10}$ in all of our experiments. Table \[table.interior\] displays the results for the first set of instances $L = \ker(A)$ with $L \cap {\mathbb{R}}^n_{++} \ne \emptyset$. Each row corresponds to a set of 500 instances of $A\in {\mathbb{R}}^{m\times n}$ for $m$ and $n$ as indicated in the first two columns. The other three columns display the average number of rescaling iterations, average total number of basic procedure iterations, and average CPU time for each set of 500 instances. Furthermore, we note that Algorithm \[algo.MPRA\] successfully solves all instances, that is, it terminates with a point $x\in L \cap {\mathbb{R}}^n_{++}$. It is noteworthy that the number of rescaling iterations ranges from 9 to 15 across instances of different sizes. To further illustrate this interesting fact, Figure \[fig.rescaling\] plots the number of rescaling iterations for some sets of instances.
$m$    $n$    resc.   basic     CPU
------ ------ ------- --------- -------
100 200 9.51 712.38 0.076
250 500 11.03 1419.98 0.76
100 1000 7.97 1843.74 4.48
200 1000 9.00 1954.73 4.41
500 1000 11.92 2487.47 5.49
800 1000 13.08 4026.00 6.69
900 1000 11.63 4184.90 6.64
1000 2000 12.18 4318.48 35.76
: [ Algorithm \[algo.MPRA\] on controlled condition instances with $L\cap {\mathbb{R}}^n_{++} \ne \emptyset$]{}
\[table.interior\]
![Number of rescaling iterations for controlled condition instances $L$ with $L\cap{\mathbb{R}}^n_{++}\ne \emptyset.$[]{data-label="fig.rescaling"}](rescaledfig.pdf){width=".7\textwidth"}
Table \[table.partition\] and Figure \[fig.rescaling.part\] display similar results for the second set of instances with both ${{\rm ri\,}}(L \cap {\mathbb{R}}^n_{+}) \ne \{0\}$ and ${{\rm ri\,}}(L^\perp \cap {\mathbb{R}}^n_{+}) \ne \{0\}$. To accommodate a wide and flexible range of dimensions of $L \cap {\mathbb{R}}^n_{+}$ and $L^\perp \cap {\mathbb{R}}^n_{+}$, for each fixed value of $n$ we construct $A\in {\mathbb{R}}^{m\times n}$ and $L=\ker(A)$ with varying values of $m$. For this second set of instances we also report the [*success rate*]{}, that is, the percentage of instances where the partition $(B,N)$ is correctly identified. The algorithm succeeds in identifying this partition for most instances. In the rare cases when it does not, failure occurs because either $B$ or $N$ is small and Algorithm \[algo.MPRA\] terminates with a point that is, within roundoff error, either in $L^\perp \cap {\mathbb{R}}^n_{++}$ or in $L \cap {\mathbb{R}}^n_{++}$. The experiments show that this second set of instances usually requires a higher number of rescaling iterations. This is to be expected, as these instances include the extra difficulty of finding a non-trivial partition $(B,N)$ of $\{1,\dots,n\}.$
$n$    $|B|$     resc.   basic     CPU     succ.
------ --------- ------- --------- ------- -------
100 51.21 16.16 657.01 0.075 0.946
200 100.55 17.27 1031.45 0.12 0.974
500 249.64 17.60 1763.59 1.09 0.984
800 405.86 17.72 2269.69 3.52 0.998
1000 499.72 17.76 2625.84 6.73 0.994
2000 1006.93 17.59 3752.04 47.07 0.994
: Algorithm \[algo.MPRA\] on instances with ${{\rm ri\,}}(L\cap {\mathbb{R}}^n_{+}) \ne \{0\}$ and ${{\rm ri\,}}(L^\perp\cap {\mathbb{R}}^n_{+}) \ne \{0\}$
\[table.partition\]
![Number of rescaling iterations for instances $L$ with ${{\rm ri\,}}(L\cap{\mathbb{R}}^n_{+})\ne \{0\}$ and ${{\rm ri\,}}(L^\perp\cap{\mathbb{R}}^n_{+})\ne \{0\}.$[]{data-label="fig.rescaling.part"}](rescaledpartfig.pdf){width=".7\textwidth"}
To further illustrate the partition and the solutions found by Algorithm \[algo.MPRA\], Figure \[fig:partitioninxands1k\] and Figure \[fig:partitioninxands2k\] plot the coordinates of the points $x$ and $\hat x$ found by Algorithm \[algo.MPRA\] for two representative instances of dimension $n = 1000$ and $n = 2000$ respectively. The two plots in the first row of Figure \[fig:partitioninxands1k\] and Figure \[fig:partitioninxands2k\] show the components of the points $x = (x_B, x_N)$ and $\hat x = (\hat x_B, \hat x_N)$ returned by Algorithm \[algo.MPRA\] for an instance of size $n=1000$ and for an instance of size $n=2000$. The set $B$ is $\{1,\ldots,424\}$ in the first instance and it is $\{1,\ldots,1137\}$ in the second instance. The large red circles in the plots show the size of $B$. For scaling purposes, in both instances the vectors $x$ and $\hat x$ are normalized so that $\|x\|_\infty = \|\hat x \|_\infty =1$. As Figure \[fig:partitioninxands1k\] and Figure \[fig:partitioninxands2k\] show, in both cases the solutions $x$ and $\hat x$ satisfy the conditions in .
Other experiments
-----------------
We also performed experiments to assess the effect of rescaling along multiple directions, as in Algorithm \[algo.MPRA\], versus rescaling only along one direction, as in the original Projection and Rescaling Algorithm in [@PenaS16]. More precisely, we compare the performance of Algorithm \[algo.MPRA\] versus the modification obtained by changing the update on $D$ and $\hat D$ in Step 5 to $D = \min\left(\left(I+{\mbox{\rm diag}}(e_i)\right)D, U\right)$ and $\hat D = \min\left(\left(I+{\mbox{\rm diag}}(e_j)\right)\hat D, U\right)$ where $i$ and $j$ are such that $z_i = \|z\|_\infty$ and $\hat z_j = \|\hat z\|_\infty$. In most instances the modified version that rescales along one direction failed to find a solution within a reasonable number (a hundred) of rescaling iterations. We note that [@PenaS16 Section 6] provides a closed-form formula to update the projection matrix $P$ at low cost after rescaling along one direction. The formula can be extended to handle multiple rescaling directions. However, the formula is computationally attractive only when the number of rescaling directions is small because it requires computing the spectral decomposition of a matrix with rank equal to the number of rescaling directions. Our numerical experiments indicate that even using the closed-form formula in [@PenaS16] does not compensate for the additional number of iterations required by the modified version of Algorithm \[algo.MPRA\] with a single rescaling direction.
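The single-direction update just described amounts to doubling one diagonal entry of the rescaling matrix and capping every entry at $U$. A minimal sketch, assuming $D$ is stored as a vector of positive diagonal entries (the function name is ours, not from the paper's MATLAB code):

```python
import numpy as np

def rescale_one_direction(D_diag, z, U):
    """Single-direction rescaling step: D <- min((I + diag(e_i)) D, U)
    with i chosen so that |z_i| = ||z||_inf, i.e. the i-th diagonal
    entry of D is doubled, capped at the upper bound U."""
    i = int(np.argmax(np.abs(z)))
    D_new = D_diag.copy()
    D_new[i] = min(2.0 * D_new[i], U)
    return D_new
```

The cap at $U$ corresponds to the upper bound $U=10^{10}$ on the rescaling matrices used throughout our experiments.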
Table \[table.interior.naive\] provides a summary similar to that displayed in Table \[table.interior\] of the performance of Algorithm \[algo.MPRA\] on a set of naive random instances. These instances were generated in the same way as those used in the experiments summarized in Table \[summaryTableeps1\] and Table \[summaryTableeps4\], namely, $L = \ker(A)$ where the entries of $A\in {\mathbb{R}}^{m\times n}$ are independently drawn from a standard normal distribution. In contrast to the results summarized in Table \[table.interior\] for controlled condition instances, Algorithm \[algo.MPRA\] solves most instances easily without rescaling and after a much lower number of total basic iterations. In particular, Algorithm \[algo.MPRA\] solves all naive random instances without rescaling when $m\ne n/2$ and only a few instances require a small number of rescaling steps when $m=n/2$. For additional illustration of the latter fact, Figure \[fig.rescaling.naive\] plots the number of rescaling iterations for the naive random instances with $m=n/2$. The last column of Table \[table.interior.naive\] shows the fraction of instances where $L\cap {\mathbb{R}}^n_{++} \ne \emptyset$. In contrast to the controlled condition instances, this is unknown for naive random instances. The fraction of instances where $L\cap {\mathbb{R}}^n_{++} \ne \emptyset$ is consistent with and the subsequent discussion.
$m$    $n$    resc.   basic     CPU      frac.
------ ------ ------- --------- -------- -------
100 200 0.496 147.094 0.0213 0.5
250 500 0.338 257.660 0.1428 0.496
100 1000 0 3.700 0.1574 1
200 1000 0 9.520 0.2008 1
500 1000 0.228 373.352 0.7646 0.502
800 1000 0 5.738 0.1175 0
900 1000 0 2.616 0.1156 0
1000 2000 0.188 602.674 4.4935 0.482
: Algorithm \[algo.MPRA\] on naive random instances
\[table.interior.naive\]
![Number of rescaling iterations for naive random instances[]{data-label="fig.rescaling.naive"}](rescalingnaivefig.pdf){width=".7\textwidth"}
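The expected value of the fraction in the last column of Table \[table.interior.naive\] can be computed in closed form via Wendel's theorem [@Wendel]. Under the standard identification of the event $L\cap{\mathbb{R}}^n_{++}\ne\emptyset$ with the origin lying in the interior of the convex hull of the $n$ Gaussian columns of $A$, the probability is $1 - 2^{-(n-1)}\sum_{k=0}^{m-1}\binom{n-1}{k}$. A short sketch (function name ours):

```python
from fractions import Fraction
from math import comb

def feasible_fraction(m, n):
    """Probability that ker(A) meets the open positive orthant for a
    standard Gaussian A in R^{m x n}, via Wendel's theorem:
    1 - 2^{-(n-1)} * sum_{k=0}^{m-1} C(n-1, k), as an exact rational."""
    infeasible = Fraction(sum(comb(n - 1, k) for k in range(m)), 2 ** (n - 1))
    return 1 - infeasible
```

For $m=n/2$ this probability equals exactly $1/2$, and it is essentially $1$ for $m\ll n/2$ and essentially $0$ for $m\gg n/2$, matching the fractions $0.5$, $1$, and $0$ reported in Table \[table.interior.naive\].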
We also compared the performance of Algorithm \[algo.MPRA\] with the state-of-the-art commercial solver CPLEX. Similar comparisons with Gurobi and MATLAB solvers are reported in [@LiRT15; @Roos18]. Consistent with the reported results in [@LiRT15; @Roos18], we observe that on average Algorithm \[algo.MPRA\] is faster than CPLEX by nearly an order of magnitude for problem instances where $L = \ker(A)$ with $A \in {\mathbb{R}}^{m \times n}$ generated naively at random as in [@LiRT15; @Roos18] and as in the set of experiments summarized in Table \[table.interior.naive\]. On the other hand, the difference in speed is roughly reversed, that is, CPLEX is nearly an order of magnitude faster when $A\in {\mathbb{R}}^{m\times n}$ is generated so that $L\cap{\mathbb{R}}^n_{++}$ has a controlled condition measure via the procedure suggested by Proposition \[prop.control.cn\] as in the set of experiments summarized in Table \[table.interior\]. We attribute this sharp difference to the fact that the naively generated instances are generally easier and can usually be solved within a single round of the basic procedure and without the need for rescaling even when $m = n/2$ as Table \[table.interior.naive\] illustrates. By contrast, instances with controlled condition measure, such as those in the set of experiments summarized in Table \[table.interior\], are significantly more challenging and require on average ten or more rounds of basic and rescaling steps. We note that for similarly generated instances, the numerical experiments reported in [@Roos18] generally require several rescaling iterations when $m = n/2$ while our algorithm solves most of these instances without any rescaling iterations. This difference is likely due to the different basic procedures used in [@Roos18] and in our numerical experiments. The rescaling method in [@Roos18] uses a variant of the von Neumann algorithm as its basic procedure while we use the smooth perceptron scheme.
Concluding remarks
==================
We have described a computational implementation and numerical experiments of an Enhanced Projection and Rescaling algorithm for finding most interior solutions to the feasibility problems $$\text{find} \; x\in L\cap{\mathbb{R}}^n_{+} \;\;\;\; \text{ and } \; \;\;\;\;
\text{find} \; \hat x\in L^\perp\cap{\mathbb{R}}^n_{+},$$ where $L$ denotes a linear subspace in ${\mathbb{R}}^n$ and $L^\perp$ denotes its orthogonal complement. Our numerical results provide promising evidence of the effectiveness of this algorithmic approach.
The MATLAB code for our implementation comprises a set of MATLAB functions with verbatim implementations of Algorithm \[algo.MPRA\] through Algorithm \[alg:smooth\]. The code is publicly available at the following website
[http://www.andrew.cmu.edu/user/jfp/epra.html]{}
The tables presented in this paper were created by averaging the results obtained from running the following MATLAB functions.
- [TestSimpleBasicProcedures(m,n,N,epsilon):]{} This is the code used to generate and test the set of instances summarized in each row of Table \[summaryTableeps1\], Table \[summaryTableeps4\], and Table \[new.table1\].
- [TestControlledConditionBasicProcedures(m,n,N,epsilon,delta):]{} This is the code used to generate and test the set of instances summarized in each row of Table \[summaryTableeps1.control\], Table \[summaryTableeps4.control\], and Table \[new.table2\].
- [TestControlledConditionRescaled(m,n,N,delta):]{} This is the code used to generate and test the set of instances summarized in each row of Table \[table.interior\].
- [TestPartitionRescaled(n,N):]{} This is the code used to generate and test the set of instances summarized in each row of Table \[table.partition\].
- [TestSimpleRescaled(m,n,N):]{} This is the code used to generate and test the set of instances summarized in each row of Table \[table.interior.naive\]. This code also compares the performance of Algorithm 1 with a modified version that rescales along one direction only including a more efficient update on the projection matrix after each rescaling step.
The input parameters for the above functions are as follows:
- [N:]{} Number of instances. We used [N = 1000]{} in Table \[summaryTableeps1\] through Table \[summaryTableeps4.control\], and [N = 100]{} in Table \[new.table1\] and Table \[new.table2\]. We used [N = 500]{} in Table \[table.interior\] through Table \[table.interior.naive\].
- [m:]{} Number of rows of $A\in {\mathbb{R}}^{m\times n}$ such that $L = \ker(A)$
- [n:]{} Dimension of the ambient space
- [epsilon:]{} Rescaling condition parameter
- [delta:]{} Upper bound on the values of a subset of randomly chosen positive entries of the most central solution. The smaller [delta,]{} the more ill-conditioned the problem. We used [delta = 0.001]{} for the experiments summarized in Table \[summaryTableeps1.control\], Table \[summaryTableeps4.control\], Table \[new.table2\], and Table \[table.interior\].
Algorithm \[algo.MPRA\] through Algorithm \[alg:smooth\] are implemented via the following MATLAB functions.
- [MultiEPRA(A,AA,n,z0,U):]{} This code implements Algorithm \[algo.MPRA\]. Assume $L = \ker({\tt A})\subseteq {\mathbb{R}}^n$ and $L^\perp = \ker({\tt AA})$. Use [z0]{} as starting point for the basic procedure and [U]{} to upper bound the rescaling matrices.
- [perceptron(P,z0,epsilon):]{} This code implements Algorithm \[alg:perceptron\].
- [VN(P,z0,epsilon):]{} This code implements Algorithm \[alg:vonNeumann\].
- [VNA(P,z0,epsilon):]{} This code implements Algorithm \[alg:vonNeumann.away\].
- [smooth(P,u0,epsilon):]{} This code implements Algorithm \[alg:smooth\].
[10]{}
A. Belloni, R. Freund, and S. Vempala. An efficient rescaled perceptron algorithm for conic systems. , 34(3):621–641, 2009.
P. B[ü]{}rgisser, F. Cucker, and M. Lotz. Coverage processes on spheres and condition numbers for linear programming. , 38(2):570–604, 2010.
D. Cheung, F. Cucker, and J. Pe[ñ]{}a. Unifying condition numbers for linear programming. , 28(4):609–624, 2003.
S. Chubanov. A strongly polynomial algorithm for linear systems having a binary solution. , 134:533–570, 2012.
S. Chubanov. A polynomial projection algorithm for linear feasibility problems. , 153:687–713, 2015.
D. Dadush, L. A V[é]{}gh, and G. Zambelli. Rescaled coordinate descent methods for linear programming. In [*International Conference on Integer Programming and Combinatorial Optimization*]{}, pages 26–37. Springer, 2016.
D. Dadush, L. A V[é]{}gh, and G. Zambelli. Rescaling algorithms for linear conic feasibility. , 2017.
R. Freund, R. Roundy, and M. Todd. Identifying the set of always-active constraints in a system of linear inequalities by a single linear program. , 1985.
R. Hoberg and T. Rothvoss. An improved deterministic rescaling for linear programming algorithms. In [*International Conference on Integer Programming and Combinatorial Optimization*]{}, pages 267–278. Springer, 2017.
T. Kitahara and T. Tsuchiya. An extension of [C]{}hubanov’s polynomial-time linear programming algorithm to second-order cone programming. , 33(1):1–25, 2018.
D. Li, C. Roos, and T. Terlaky. A polynomial column-wise rescaling von [N]{}eumann algorithm. Technical report, Lehigh University, 2015.
B. Louren[ç]{}o, T. Kitahara, M. Muramatsu, and T. Tsuchiya. An extension of [C]{}hubanov’s algorithm to symmetric cones. , pages 1–33, 2016.
J. Pe[ñ]{}a and N. Soheili. A deterministic rescaled perceptron algorithm. , 155:497–510, 2016.
J. Pe[ñ]{}a and N. Soheili. Solving conic systems via projection and rescaling. , 166:87–111, 2017.
C. Roos. An improved version of [C]{}hubanov’s method for solving a homogeneous feasibility problem. , 33:26–44, 2018.
J. Wendel. A problem in geometric probability. , 11:109–111, 1962.
[^1]: Tepper School of Business, Carnegie Mellon University, USA, [[email protected]]{}
[^2]: College of Business Administration, University of Illinois at Chicago, USA, [[email protected] ]{}
---
abstract: 'We consider in detail the situation of applying a time dependent external magnetic field to a $^{87}$Rb atomic Bose-Einstein condensate held in a harmonic trap, in order to adiabatically sweep the interatomic interactions across a Feshbach resonance to produce diatomic molecules. To this end, we introduce a minimal two-body Hamiltonian depending on just five measurable parameters of a Feshbach resonance, which accurately determines all low energy binary scattering observables, in particular, the molecular conversion efficiency of just two atoms. Based on this description of the microscopic collision phenomena, we use the many-body theory of T. Köhler and K. Burnett \[Phys. Rev. A **65**, 033601 (2002)\] to study the efficiency of the association of molecules in a $^{87}$Rb Bose-Einstein condensate during a linear passage of the magnetic field strength across the 100 mT Feshbach resonance. We explore different, experimentally accessible, parameter regimes, and compare the predictions of Landau-Zener, configuration interaction, and two level mean field calculations with those of the microscopic many-body approach. Our comparative studies reveal a remarkable insensitivity of the molecular conversion efficiency with respect to both the details of the microscopic binary collision physics and the coherent nature of the Bose-Einstein condensed gas, provided that the magnetic field strength is varied linearly. We provide the reasons for this universality of the molecular production achieved by [*linear*]{} ramps of the magnetic field strength, and identify the Landau-Zener coefficient determined by F.H. Mies [*et al.*]{} \[Phys. Rev. A **61**, 022721 (2000)\] as the main parameter that controls the efficiency.'
author:
- 'Krzysztof G[ó]{}ral'
- Thorsten Köhler
- 'Simon A. Gardiner'
- Eite Tiesinga
- 'Paul S. Julienne'
title: Adiabatic association of ultracold molecules via magnetic field tunable interactions
---
Introduction
============
The prospect of achieving quantum degenerate molecular gases has attracted considerable attention for some time now [@Levi00]. Such an accomplishment may open new avenues for research, for instance, bright sources of molecules for cold collision studies [@Weiner99], precise molecular spectroscopy, elucidating the nature of a possible BCS-BEC crossover in Fermi gases [@Randeria95] and, possibly, the exploitation of dipole-dipole interactions [@Baranov02]. It has been clear from the outset, however, that laser cooling techniques, essential for the production of Bose-Einstein condensates and degenerate Fermi gases of atoms, are difficult to apply in the case of molecules due to their typically complicated rovibrational energy spectrum. The association of ultracold atoms into diatomic molecules, which may also be quantum degenerate, therefore seems a very promising route. The molecular conversion can be achieved by photoassociation [@Wynar00] or by application of time dependent magnetic fields in the vicinity of Feshbach resonances [@Inouye98]. The Feshbach resonance technique has recently been exploited with great success to produce large, ultracold assemblies of diatomic molecules, using as a source both atomic Bose-Einstein condensates [@Donley02; @Claussen03; @Herbig03; @Duerr03; @Xu03] and quantum degenerate two component gases of Fermionic atoms [@Regal03; @Strecker03; @Cubizolles03; @JochimPRL03; @Regal04; @Greiner03; @JochimScience03; @Zwierlein03; @Bartenstein04; @Regal04-2; @Zwierlein04; @Bartenstein04-2].
The experiments reported in Refs. [@Herbig03; @Duerr03; @Xu03] achieve conversion of alkali atoms in dilute Bose-Einstein condensates to diatomic molecules by an adiabatic sweep of the strength of a homogeneous magnetic field across a Feshbach resonance from negative to positive scattering lengths. The observations indicate molecular fractions much smaller than the ideal limit of half the number of initial condensate atoms, even in the limit of perfect adiabaticity. This may be due to limitations in the initial production of molecules, or their long term stability in the presence of the surrounding gas.
The long term stability of the highest excited vibrational molecular bound state, produced by the adiabatic association technique, may be limited due to collisional deexcitation. Both theoretically and experimentally, very little is known about the associated rate constants for highly excited diatomic molecules composed of alkali atoms. To our knowledge, the only available exact calculations have been performed for the deexcitation of tightly bound Na$_2$ molecules upon collision with a Na atom [@Soldan02]. From an experimental viewpoint, the comparatively high relative momenta of the products of collisional deexcitation have so far prevented their detection. Conclusive experimental studies of the loss rates and the underlying microscopic processes are therefore difficult to achieve.
In such an uncertain situation it is important to first understand the production of the diatomic molecules. In this paper we consider the situation of adiabatically sweeping the magnetic field strength across a Feshbach resonance to form molecules from an initially Bose-Einstein condensed dilute gas of atoms. The calculations presented in this paper consider the 100 mT Feshbach resonance of $^{87}$Rb [@Marte02]. The underlying concepts can be extended to arbitrary species of Bosonic atoms, while the two-body physics is also applicable to any pair of atoms interacting via $s$ waves, including Fermionic species.
The structure of the paper is as follows: Section \[sec:twobody\] describes the binary physics of the association of ultracold atoms to a diatomic molecule, and Section \[sec:manybody\] analyses the many-body aspects of this process in a Bose-Einstein condensate. Section \[sec:conclusions\] summarises and presents our conclusions, and there is an appendix, which expands upon the explicit description of the binary collision physics specific to this work.
Section \[sec:twobody\] presents a two channel description of the magnetic field tunable resonance enhanced binary scattering and its relationship to the properties of the highest excited vibrational bound state. We develop the concept of a two-body Hamiltonian that accurately describes the relevant physics in terms of a minimal set of five parameters, which can usually be deduced from measurable properties of a Feshbach resonance. We then focus on the near resonance universal properties of the highest excited vibrational bound state, and identify the smooth transition between free and bound atoms in an ideally adiabatic passage across a Feshbach resonance. We then consider the dynamics of the association of just two atoms during a linear ramp of the magnetic field strength across a Feshbach resonance, and reveal the dependence of molecular conversion on the physical parameters. The last subsection is concerned with the determination of dissociation energy spectra, as dissociation of molecules by ramps of the magnetic field strength is frequently a necessary precursor to their detection. We show that both the molecular conversion efficiency and the dissociation spectra are determined by the same physical parameters of a Feshbach resonance, and are subject to a remarkable insensitivity with respect to the details of the binary collision dynamics, provided that the magnetic field strength is varied linearly.
Section \[sec:manybody\] introduces the relevant elements of the microscopic quantum dynamics approach [@KB02], which properly accounts for both the microscopic binary collision physics and the macroscopic coherent nature of an inhomogeneous atomic Bose-Einstein condensate. Using this approach, we study the many-body aspects of the molecular production during a linear adiabatic passage of the magnetic field strength across a Feshbach resonance. From the first order microscopic quantum dynamics approach, by applying the Markov approximation, we derive the commonly used two level mean field model [@Drummond98; @Timmermans99; @Yurovsky00; @Goral01; @Salgueiro01; @Adhikari01; @Cusack02; @Abdullaev03; @Naidon]. We discuss the deficits of two level models with respect to the description of the intermediate dynamics during passage of the magnetic field strength across a Feshbach resonance. Our comparative studies show, however, that the many-body approaches predict virtually the same final molecular production at all levels of approximation considered in this paper, provided that the ramp of the magnetic field strength across the Feshbach resonance is linear. We provide the reasons for this universality of the molecular conversion in linear ramps of the magnetic field strength with respect to the details of the underlying microscopic binary collision dynamics and to the coherent nature of the Bose-Einstein condensed gas. We then show that the molecular production efficiency is determined by the same physical parameters that we have previously identified in the associated two-body problem. Our findings strongly indicate that measurements of molecular production efficiencies as well as dissociation spectra obtained from [*linear*]{} ramps of the magnetic field strength are largely inconclusive with respect to the details of the underlying binary collision physics.
Adiabatic association of two atoms {#sec:twobody}
==================================
We consider a configuration of two atoms exposed to a homogeneous magnetic field whose strength can be varied. The concept underlying the adiabatic association of diatomic molecules in ultracold gases can be understood solely on the basis of binary collision physics. The key feature of this experimental technique is the adiabatic transfer of the zero energy binary scattering state into the highest excited diatomic vibrational bound state. In this section we shall study the binding energies of the diatomic molecules that determine the positions of the Feshbach resonances. We shall then show how the resonance enhanced interatomic collisions can be accurately described in terms of a minimal set of five quantities that can be determined from current experiments. Specifically, we consider the association of two asymptotically free ultracold $^{87}$Rb atoms with a total angular momentum quantum number of $F=1$ and an orientation quantum number of $m_F=+1$, at magnetic field strengths close to the broadest Feshbach resonance at about 100 mT. We shall also study the dissociation of the molecules, which plays an important role in the direct detection of ultracold molecular gases.
Feshbach resonances and vibrational bound states in $^{87}$Rb {#subsec:Feshbachandboundstates}
-------------------------------------------------------------
Throughout this paper, we will denote the open scattering channel of two asymptotically free atoms in the $(F=1,m_F=+1)$ electronic ground state as the open channel with an associated reference potential (the background scattering potential) $V_\mathrm{bg}(r)$. The dissociation threshold of $V_\mathrm{bg}(r)$ is determined by the internal energy of the noninteracting atoms, i.e. twice the energy corresponding to the $(F=1,m_F=+1)$ hyperfine state. When the atoms are exposed to an external homogeneous magnetic field the $m_F$ degeneracy of the atomic hyperfine levels is removed by the Zeeman effect. As a consequence, the potentials associated with the different asymptotic binary scattering channels are shifted with respect to each other. Although in general the interchannel coupling is weak, in the vicinity of certain magnetic field strengths the open channel can be strongly coupled to closed channels. This strong coupling leads to singularities of the $s$ wave scattering length known as Feshbach resonances. Figure \[fig:Eboverview\] (a) shows the theoretically predicted $s$ wave scattering length of two colliding $^{87}$Rb atoms in the electronic ground state that are exposed to a homogeneous magnetic field of strength $B$. The theoretical calculations use five coupled equations for one open and four closed channels to describe the $s$-wave collision of two $(F=1,m_F=+1)$ atoms [@Mies00]. We use standard methods to solve these equations for either scattering or bound states.
The Feshbach resonances are related to the binding energy of the highest excited vibrational state in a simple way; the singularities of the scattering length in Fig. \[fig:Eboverview\] (a) exactly match those magnetic field strengths that correspond to the zeros of the binding energy with respect to the threshold energy for dissociation into two asymptotically free atoms. Figure \[fig:Eboverview\] (b) gives an overview of the magnetic field dependence of the binding energies of $s$-wave symmetry molecular states of two $^{87}$Rb atoms below the dissociation threshold energy of the open channel.
![Magnetic field strength dependence of the $s$ wave scattering length (a) and the energies of the highest excited $s$-wave vibrational molecular bound states (b) of two $^{87}$Rb atoms. At each magnetic field strength $B$ the zero of energy is set at the threshold for dissociation into two asymptotically free atoms in the $(F=1,m_F=+1)$ hyperfine state. The lines in (b) at about -0.02 and -0.63 GHz parallel to the $E=0$ axis represent the energies of the last $s$-wave bound states of the open background channel. These states have the same magnetic moment as the separated atoms. The slanted lines in (b) represent closed channel molecular states that have different magnetic moments from the separated atoms. The weak avoided crossings at the intersection of parallel and slanted lines are due to interactions between the background and closed channels, as noted in Ref. [@Duerr03]. When the closed channel molecular states cross threshold at $E=0$, the zeros of the binding energy in (b) correspond to the positions of the Feshbach resonances in (a). Our calculated resonance positions are within one per cent of the measured positions [@Marte02].[]{data-label="fig:Eboverview"}](Feshbach.eps){width="\columnwidth"}
Two channel energy states {#subsec:energystates}
-------------------------
The scattering lengths and binding energies in Fig. \[fig:Eboverview\] have been obtained from exact solutions of the multichannel two-body Schrödinger equation as described in Subsection \[subsec:Feshbachandboundstates\]. These calculations were performed with a realistic potential matrix that accurately describes the bound and free molecular states over a wide range of energies and magnetic field strengths. Based on these exact considerations, Fig. \[fig:Eboverview\] reveals that the binding energy of the highest excited vibrational state determines the singularities of the scattering length and, in turn, all resonance enhanced low energy scattering properties of two atoms. Collisions in ultracold gases involve only a quite limited range of energies, and the adiabatic association of molecules takes place at magnetic field strengths in the close vicinity of a particular Feshbach resonance. We shall therefore restrict our analysis to an appropriately smaller range of energies and magnetic field strengths around the 100 mT Feshbach resonance.
### Background scattering {#subsubsec:background}
We shall first show how the low energy background scattering can be accurately described in terms of experimentally known physical quantities. At magnetic field strengths asymptotically far from the resonance the interchannel coupling is weak and the highest excited multichannel vibrational bound state can be determined directly from a single channel description with the background scattering potential $V_\mathrm{bg}(r)$. Considerations beyond the scope of this paper [@Gao98] show that the corresponding binding energy is determined, to an excellent approximation, by the long range asymptotic behaviour of $V_\mathrm{bg}(r)$ and the background scattering length $a_\mathrm{bg}$, i.e. the scattering length associated with the background scattering potential $V_\mathrm{bg}(r)$. Neglecting retardation phenomena, at large interatomic separations $V_\mathrm{bg}(r)$ has the universal form $V_\mathrm{bg}(r)\underset{r\to\infty}{\sim}-C_6/r^6$, where $C_6$ is the van der Waals dispersion coefficient. The low energy background scattering is determined by the same parameters $a_\mathrm{bg}$ and $C_6$ [@Gao98]. This universality is due to the fact that at typical ultracold collision energies the de Broglie wavelengths are much larger than the van der Waals length $$l_\mathrm{vdW}=\frac{1}{2}\left(\frac{mC_6}{\hbar^2}\right)^{1/4}.
\label{lvdW}$$ Here $m$ is the atomic mass. As the van der Waals length is the characteristic length scale set by the long range tail of the background scattering potential, the details of the potential $V_\mathrm{bg}(r)$ are not resolved in the collisions. At zero collision energy the background scattering length $a_\mathrm{bg}$ incorporates all the unresolved details of $V_\mathrm{bg}(r)$ into a single length scale. The van der Waals length determines the first correction at finite collision energies, which accounts for the long range asymptotic behaviour of $V_\mathrm{bg}(r)$. We note that any model of the background scattering potential will recover the binding energy of the highest excited vibrational state and all low energy scattering properties of the exact potential $V_\mathrm{bg}(r)$ to an excellent approximation, if it properly accounts for the parameters $C_6$ and $a_\mathrm{bg}$. We provide an appropriate minimal background scattering potential in the appendix.
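The van der Waals length of Eq. (\[lvdW\]) is easily evaluated for $^{87}$Rb. The following sketch uses the dispersion coefficient $C_6=4660$ a.u. quoted later in this section; the numerical values of the physical constants are assumptions of this illustration, not part of the derivation.

```python
# Sketch: evaluate l_vdW = (1/2) (m C6 / hbar^2)^(1/4) for 87Rb, with
# C6 = 4660 a.u. as quoted later in the text.  The constants below are
# assumed values, inserted for illustration only.
hbar = 1.054571817e-34        # J s
a_bohr = 0.52917721e-10       # m
u = 1.66053907e-27            # kg, atomic mass unit
m = 86.909 * u                # 87Rb atomic mass
C6 = 4660 * 9.5734e-80        # J m^6  (1 a.u. = 0.095734 yJ nm^6)

l_vdw = 0.5 * (m * C6 / hbar**2) ** 0.25
print(l_vdw / a_bohr)         # about 82 Bohr radii
```

The result of about $82\ a_\mathrm{Bohr}$ sets the length scale below which the details of $V_\mathrm{bg}(r)$ remain unresolved in ultracold collisions.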
### Two channel Schrödinger equation
We assume in the following that all the bound and free energy states associated with the background scattering potential have been determined, and on this basis derive the resonance enhanced collision properties of two atoms. In the vicinity of a Feshbach resonance, the strong coupling between the open channel and other asymptotic scattering channels originates from the near degeneracy of the magnetic field dependent energy $E_\mathrm{res}(B)$ of a closed channel vibrational state (a Feshbach resonance level) $\phi_\mathrm{res}(r)$ with the dissociation threshold energy of the open channel. Consequently, the resonance enhanced collision physics of two $^{87}$Rb atoms can be accurately described by the general form of a two-channel Hamiltonian matrix of the relative motion of the atoms: $$\begin{aligned}
H_\mathrm{2B}=
\left(
\begin{array}{cc}
-\frac{\hbar^2}{m}\nabla^2+V_\mathrm{bg}(r) & W(r)\\
W(r) & -\frac{\hbar^2}{m}\nabla^2+V_\mathrm{cl}(B,r)
\end{array}
\right).
\label{H2B2channel}\end{aligned}$$ Here $m$ is the atomic mass, $r$ is the distance between the atoms, $W(r)$ determines the strength of the coupling between the channels, and the closed channel potential $V_\mathrm{cl}(B,r)$ supports the resonance state: $$\left[-\frac{\hbar^2}{m}
\nabla^2+V_\mathrm{cl}(B,r)\right]\phi_\mathrm{res}(r)=
E_\mathrm{res}(B)\phi_\mathrm{res}(r).
\label{SEphires}$$ In the following the resonance state $\phi_\mathrm{res}(r)$ is normalised to unity. In Eq. (\[H2B2channel\]), as elsewhere in this paper, we have chosen the zero of energy as the dissociation threshold of the open channel, i.e. $V_\mathrm{bg}(r)\underset{r\to\infty}{\to}0$. The dissociation threshold of $V_\mathrm{cl}(B,r)$ is determined accordingly by the energy of two noninteracting atoms in the closed channel that is strongly coupled to the open channel. The relative Zeeman energy shift between the channels can be tuned by varying the magnetic field strength.
The bound and free energy states of the general Hamiltonian matrix in Eq. (\[H2B2channel\]) relate the remaining potentials $W(r)$ and $V_\mathrm{cl}(B,r)$ to a minimal set of measurable properties of a Feshbach resonance. The two channel states that we shall consider in the following are of the general form $|\mathrm{bg}\rangle\phi^\mathrm{bg}(\mathbf{r})
+|\mathrm{cl}\rangle\phi^\mathrm{cl}(\mathbf{r})$, where $|\mathrm{bg}\rangle$ and $|\mathrm{cl}\rangle$ denote the internal states of an atom pair in the open channel and the closed channel strongly coupled to it, respectively. The two components of the energy states are solutions of the stationary coupled Schrödinger equations $$\begin{aligned}
\label{SEphibg}
\left[-\frac{\hbar^2}{m}\nabla^2+V_\mathrm{bg}(r)\right]
\phi^\mathrm{bg}(\mathbf{r})
+W(r)\phi^\mathrm{cl}(\mathbf{r})&=E\phi^\mathrm{bg}(\mathbf{r}),\\
\label{SEphicl}
W(r)\phi^\mathrm{bg}(\mathbf{r})+
\left[-\frac{\hbar^2}m\nabla^2+V_\mathrm{cl}(B,r)\right]
\phi^\mathrm{cl}(\mathbf{r})&=E\phi^\mathrm{cl}(\mathbf{r}).\end{aligned}$$
### Continuum states
Bound and continuum energy states are distinguished by their energies and by their asymptotic behaviour at large interatomic distances. In the scattering continuum above the dissociation threshold all energies are in the spectrum of the Hamiltonian in Eq. (\[H2B2channel\]) and can be associated with the momentum $\mathbf{p}$ of the relative motion of two asymptotically noninteracting atoms in the open channel through $E=p^2/m$. Due to the continuous scattering angles between the atoms at a definite collision energy, the scattering energy states are infinitely degenerate. In the following, we will choose their wave functions to behave at large interatomic distances like: $$\phi_\mathbf{p}^\mathrm{bg}(\mathbf{r})
\underset{r\to \infty}{\sim}\frac{1}{(2\pi\hbar)^{3/2}}
\left[e^{i\mathbf{p}\cdot\mathbf{r}/\hbar}+f(\vartheta,p)
\frac{e^{ipr/\hbar}}{r}\right].
\label{BCphipbg}$$ This long range asymptotic behaviour corresponds to an incoming plane wave and an outgoing spherical wave in the open channel. Here and throughout this paper we will assume the plane wave momentum states to be normalised as $\exp(i\mathbf{p}\cdot\mathbf{r}/\hbar)/(2\pi\hbar)^{3/2}$. The function $f(\vartheta,p)$ in Eq. (\[BCphipbg\]) is the scattering amplitude, which depends on $p=\sqrt{mE}$ and on the scattering angle $\vartheta$ between the momentum $\mathbf{p}$ of the relative motion of the asymptotically noninteracting incoming atoms and their final relative position $\mathbf{r}$. The closed channel component $\phi_\mathbf{p}^\mathrm{cl}(\mathbf{r})$ of the wave function vanishes at asymptotically large distances between the colliding atoms. We shall also introduce the energy states $\phi_\mathbf{p}^{(+)}(\mathbf{r})$ of the background scattering that satisfy the Schrödinger equation $$\left[-\frac{\hbar^2}{m}\nabla^2+V_\mathrm{bg}(r)\right]
\phi_\mathbf{p}^{(+)}(\mathbf{r})=
\frac{p^2}{m}\phi_\mathbf{p}^{(+)}(\mathbf{r}),$$ with the long range asymptotic behaviour: $$\phi_\mathbf{p}^{(+)}(\mathbf{r})\underset{r\to \infty}{\sim}
\frac{1}{(2\pi\hbar)^{3/2}}
\left[e^{i\mathbf{p}\cdot\mathbf{r}/\hbar}+f_\mathrm{bg}(\vartheta,p)
\frac{e^{ipr/\hbar}}{r}\right].
\label{BCphipplus}$$
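At zero energy the $s$ wave radial function of a potential behaves at large distances as $u(r)\propto r-a$, so the scattering length can be read off from the tail of the zero energy solution. The sketch below is a hypothetical illustration with an attractive square well, not the actual $^{87}$Rb potential; the units ($\hbar=m=1$, consistent with the kinetic energy operator $-\frac{\hbar^2}{m}\nabla^2$ used in the text) and the integration scheme are assumptions of the illustration.

```python
import math

# Hypothetical square well V(r) = -V0 for r < R, zero outside, in units
# hbar = m = 1.  At zero energy the radial equation is u'' = -k0^2 u inside
# the well, with k0 = sqrt(m V0)/hbar; outside, u is linear, u(r) ~ r - a.
R = 1.0
k0 = 2.0                      # i.e. V0 = 4 in these units
n = 100_000
dr = R / n
u, du = 0.0, 1.0              # u(0) = 0, arbitrary initial slope

for _ in range(n):            # velocity-Verlet integration of u'' = -k0^2 u
    acc = -k0**2 * u
    u += du * dr + 0.5 * acc * dr**2
    du += 0.5 * (acc - k0**2 * u) * dr

a_numeric = R - u / du        # zero of the linear tail outside the well
a_exact = R * (1.0 - math.tan(k0 * R) / (k0 * R))
print(a_numeric, a_exact)     # both close to 2.0925
```

The numerical tail extraction reproduces the analytic square well result $a=R\left[1-\tan(k_0R)/(k_0R)\right]$, illustrating how a single length summarises the zero energy physics of a model potential.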
The coupled Schrödinger equations (\[SEphibg\]) and (\[SEphicl\]) can be expressed in terms of the energy dependent Green’s functions: $$\begin{aligned}
\label{Gbg}
G_\mathrm{bg}(z)&=
\left[z-\left(-\frac{\hbar^2}{m}\nabla^2+V_\mathrm{bg}\right)\right]^{-1},\\
G_\mathrm{cl}(B,z)&=
\left[z-\left(-\frac{\hbar^2}{m}\nabla^2+V_\mathrm{cl}(B)\right)\right]^{-1}.
\label{Gcl}\end{aligned}$$ Here $z$ is a complex parameter with the dimension of an energy. The coupled Schrödinger equations then read: $$\begin{aligned}
\label{LSphipbg}
\phi_\mathbf{p}^\mathrm{bg}&=\phi_\mathbf{p}^{(+)}+
G_\mathrm{bg}(E+i0)W\phi_\mathbf{p}^\mathrm{cl},\\
\phi_\mathbf{p}^\mathrm{cl}&=G_\mathrm{cl}(B,E)W\phi_\mathbf{p}^\mathrm{bg}.
\label{LSphipcl}\end{aligned}$$ The argument “$z=E+i0$” of the Green’s function $G_\mathrm{bg}(z)$ indicates that the physical energy $E=p^2/m$ is approached from the upper half of the complex plane. This choice of the energy argument ensures that the scattering wave function $\phi_\mathbf{p}^\mathrm{bg}(\mathbf{r})$ has the long range asymptotic form of Eq. (\[BCphipbg\]), in accordance with the asymptotic behaviour of the Green’s function at large interatomic distances: $$G_\mathrm{bg}(E+i0,\mathbf{r},\mathbf{r}')\underset{r\to\infty}{\sim}-
(2\pi\hbar)^{3/2}\frac{m}{4\pi\hbar^2}\frac{e^{ipr/\hbar}}{r}
\left[\phi_\mathbf{p}^{(-)}(\mathbf{r}')\right]^*.
\label{asympGbg}$$ Here $\phi_\mathbf{p}^{(-)}(\mathbf{r}')=
\left[\phi_{-\mathbf{p}}^{(+)}(\mathbf{r}')\right]^*$ is the incoming continuum energy state associated with the background scattering [@Newton82], and $\mathbf{p}=\left(\sqrt{mE}\right)\mathbf{r}/r$ can be interpreted as the asymptotic momentum associated with the relative motion of the scattered atoms.
As the resonance state $\phi_\mathrm{res}(r)$ fulfils the Schrödinger equation (\[SEphires\]), according to Eq. (\[Gcl\]) the Green’s function $G_\mathrm{cl}(B,z)$ has a singularity at $z=E_\mathrm{res}(B)$, i.e. $$\langle\phi_\mathrm{res}|G_\mathrm{cl}(B,z)|\phi_\mathrm{res}\rangle=
\frac{1}{z-E_\mathrm{res}(B)}.
\label{poleapproximation1}$$ At magnetic field strengths in the vicinity of a Feshbach resonance $E_\mathrm{res}(B)$ is nearly degenerate with the dissociation threshold energy of the open channel. Furthermore, the kinetic energies $E=p^2/m$ in ultracold collisions are small in comparison with the typical spacing between molecular vibrational bound states. As a consequence, the denominator in Eq. (\[poleapproximation1\]) becomes sufficiently small for the Green’s function $G_\mathrm{cl}(B,E)$ in Eq. (\[LSphipcl\]) to be excellently approximated by its resonance state component [@Child74]: $$G_\mathrm{cl}(B,E)\approx |\phi_\mathrm{res}\rangle
\frac{1}{E-E_\mathrm{res}(B)}
\langle\phi_\mathrm{res}|.
\label{poleapproximation}$$ Inserting this pole approximation of the Green’s function into Eq. (\[LSphipcl\]) determines the functional form of the closed channel component of the scattering wave function to be $$|\phi_\mathbf{p}^\mathrm{cl}\rangle=|\phi_\mathrm{res}\rangle A(B,E).
\label{phipcl}$$ The wave function $\phi_\mathbf{p}^\mathrm{bg}(\mathbf{r})$ is then determined by eliminating $\phi_\mathbf{p}^\mathrm{cl}$ on the right hand side of Eq. (\[LSphipbg\]) in terms of Eq. (\[phipcl\]) which gives $$|\phi_\mathbf{p}^\mathrm{bg}\rangle=|\phi_\mathbf{p}^{(+)}\rangle+
G_\mathrm{bg}(E+i0)W|\phi_\mathrm{res}\rangle A(B,E).
\label{phipbg}$$ The as yet unknown amplitude $$A(B,E)=\frac{\langle\phi_\mathrm{res}|W|\phi_\mathbf{p}^\mathrm{bg}\rangle}
{E-E_\mathrm{res}(B)}$$ can be determined straightforwardly by multiplying Eq. (\[phipbg\]) by $\langle\phi_\mathrm{res}|W$ from the left. This yields, after a short calculation: $$A(B,E)=\frac{\langle\phi_\mathrm{res}|W|\phi_\mathbf{p}^{(+)}\rangle}
{E-E_\mathrm{res}(B)-
\langle\phi_\mathrm{res}|WG_\mathrm{bg}(E+i0)W|\phi_\mathrm{res}\rangle}.
\label{amplitude}$$ Once all the energy states associated with the single channel potential $V_\mathrm{bg}(r)$ and the resonance state $\phi_\mathrm{res}(r)$ are known, Eqs. (\[phipcl\]), (\[phipbg\]) and (\[amplitude\]) establish the complete solution of the coupled Schrödinger equations (\[LSphipbg\]) and (\[LSphipcl\]) in the pole approximation. Under the assumption that the configuration of two atoms in the closed channels is restricted to the resonance state $\phi_\mathrm{res}$, the pole approximation becomes exact. This assumption implies the replacement $$-\frac{\hbar^2}{m}\nabla^2+V_\mathrm{cl}(B)\to
|\phi_\mathrm{res}\rangle E_\mathrm{res}(B)\langle\phi_\mathrm{res}|
\label{replacementHcl}$$ in the two channel Hamiltonian in Eq. (\[H2B2channel\]). Equations (\[phipcl\]), (\[phipbg\]) and (\[amplitude\]) then determine the exact scattering energy states of the resulting restricted two channel Hamiltonian.
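The quality of the pole approximation in Eq. (\[poleapproximation\]) can be illustrated with a toy matrix model: when a single closed channel level lies close to threshold and all other levels are far away, the resonance term dominates the spectral decomposition of $G_\mathrm{cl}$. The matrix and the coupling vector below are arbitrary stand-ins for the physical potentials, chosen only to exhibit this separation of scales.

```python
import numpy as np

# Toy model of the pole approximation: a "closed channel" Hamiltonian with
# one isolated level near threshold (E_res = 0.3) and all other levels far
# away (5 ... 50), in arbitrary units.
n = 40
levels = np.concatenate(([0.30], np.linspace(5.0, 50.0, n - 1)))
H_cl = np.diag(levels)

E_res = levels[0]
phi_res = np.eye(n)[:, 0]                   # resonance state (trivial here)

E = 0.295                                   # energy close to E_res
G = np.linalg.inv(E * np.eye(n) - H_cl)     # exact Green's function
w = np.ones(n)                              # stand-in for W * phi_bg

exact = w @ G @ w                           # full spectral sum
pole = (phi_res @ w) ** 2 / (E - E_res)     # rank-one pole approximation
print(exact, pole)                          # agree at the per-cent level
```

The near degeneracy $E\approx E_\mathrm{res}$ makes the resonance term two orders of magnitude larger than the contribution of all remaining levels combined.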
The scattering length $a$ is determined by the long range asymptotic form of the scattering wave function $\phi_\mathbf{p}^\mathrm{bg}(\mathbf{r})$ in Eq. (\[BCphipbg\]) through $$f(\vartheta,p)\underset{p\to 0}{\sim}-a.$$ Because the incoming plane wave is isotropic at zero energy, this limit is obviously independent of the scattering angle. In an analogous way, the background scattering length $a_\mathrm{bg}$ is related to the asymptotic form of the scattering wave function $\phi_\mathbf{p}^\mathrm{(+)}(\mathbf{r})$ in Eq. (\[BCphipplus\]), in the limit of zero energy. Inserting the known asymptotic form of the Green’s function $G_\mathrm{bg}(E+i0)$ at large interatomic distances in Eq. (\[asympGbg\]) and the amplitude in Eq. (\[amplitude\]) into Eq. (\[phipbg\]), a short calculation determines the scattering length to be: $$a=a_\mathrm{bg}-
\frac{\frac{m}{4\pi\hbar^2}(2\pi\hbar)^3
\left|\langle\phi_\mathrm{res}|W|\phi_0^{(+)}\rangle\right|^2}
{E_\mathrm{res}(B)+
\langle\phi_\mathrm{res}|WG_\mathrm{bg}(0)W|\phi_\mathrm{res}\rangle}.
\label{aofEres}$$ The energy $E_\mathrm{res}(B)$ of the closed channel state is determined by the Zeeman energy shift between the asymptotic open channel and the closed channel strongly coupled to it. Within the limited range of magnetic field strengths that we shall consider in this paper, the Zeeman effect is approximately linear in the magnetic field strength $B$. We shall denote by $B_\mathrm{res}$ the magnetic field strength at which $E_\mathrm{res}(B)$ crosses the dissociation threshold of the open channel, i.e. $E_\mathrm{res}(B_\mathrm{res})=0$. An expansion of $E_\mathrm{res}(B)$ about $B=B_\mathrm{res}$ yields: $$E_\mathrm{res}(B)=\left[\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})\right]
(B-B_\mathrm{res}).
\label{slope}$$ Equations (\[aofEres\]) and (\[slope\]) then determine the magnetic field dependence of the scattering length by the well known formula $$a(B)=a_\mathrm{bg}\left[1-\frac{(\Delta B)}{B-B_0}\right],
\label{aofB}$$ where $$(\Delta B)=\frac{m}{4\pi\hbar^2 a_\mathrm{bg}}
\frac{(2\pi\hbar)^3
\left|\langle\phi_\mathrm{res}|W|\phi_0^{(+)}\rangle\right|^2}
{\left[\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})\right]}
\label{resonancewidth}$$ is termed the resonance width and $$B_0=B_\mathrm{res}-
\frac{\langle\phi_\mathrm{res}|WG_\mathrm{bg}(0)W|\phi_\mathrm{res}\rangle}
{\left[\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})\right]}
\label{resonanceshift}$$ is the measurable resonance position, i.e. the magnetic field strength at which the scattering length has a singularity. We note that according to Eq. (\[resonanceshift\]) the resonance position $B_0$ is shifted with respect to the magnetic field strength $B_\mathrm{res}$ at which the energy of the resonance state becomes degenerate with the dissociation threshold energy of the open channel.
### Bound states
The molecular bound states vanish at asymptotically large interatomic distances, and their energies are below the dissociation threshold of the open channel. Similarly to Eqs. (\[LSphipbg\]) and (\[LSphipcl\]), the coupled Schrödinger equations (\[SEphibg\]) and (\[SEphicl\]) can be expressed in terms of the coupled integral equations $$\begin{aligned}
\label{LSphibbg}
\phi_\mathrm{b}^\mathrm{bg}&=
G_\mathrm{bg}(E_\mathrm{b})W\phi_\mathrm{b}^\mathrm{cl},\\
\phi_\mathrm{b}^\mathrm{cl}&=
G_\mathrm{cl}(B,E_\mathrm{b})W\phi_\mathrm{b}^\mathrm{bg},
\label{LSphibcl}\end{aligned}$$ which incorporate the long range asymptotic behaviour of the components, $\phi_\mathrm{b}^\mathrm{bg}$ and $\phi_\mathrm{b}^\mathrm{cl}$, of the bound state. Here $E_\mathrm{b}$ is the binding energy. A short calculation verifies that the pole approximation in Eq. (\[poleapproximation\]) leads to the normalised solutions $$\left(
\begin{array}{c}
\phi_\mathrm{b}^\mathrm{bg}\\
\phi_\mathrm{b}^\mathrm{cl}
\end{array}
\right)=\frac{1}{\mathcal{N}_\mathrm{b}}
\left(
\begin{array}{c}
G_\mathrm{bg}(E_\mathrm{b})W\phi_\mathrm{res}\\
\phi_\mathrm{res}
\end{array}
\right)
\label{phib}$$ with the normalisation constant $$\mathcal{N}_\mathrm{b}=\sqrt{1+
\langle\phi_\mathrm{res}|W
\left[G_\mathrm{bg}(E_\mathrm{b})\right]^2W|\phi_\mathrm{res}\rangle},
\label{twochannelnormalisation}$$ whenever the binding energy $E_\mathrm{b}$ fulfils the condition: $$E_\mathrm{b}=E_\mathrm{res}(B)+
\langle\phi_\mathrm{res}|WG_\mathrm{bg}(E_\mathrm{b})W|
\phi_\mathrm{res}\rangle.
\label{determinationEb}$$ We note that Eq. (\[determinationEb\]) recovers Eq. (\[resonanceshift\]) when the binding energy and the magnetic field strength are inserted as $E_\mathrm{b}=0$ and $B=B_0$, respectively, i.e. the binding energy, indeed, vanishes at the position of the resonance.
### Minimal two channel Hamiltonian
Both the resonance width in Eq. (\[resonancewidth\]) and the shift in Eq. (\[resonanceshift\]) depend only on the product $W(r)\phi_\mathrm{res}(r)$. Consequently, for a minimal description of the resonance enhanced scattering we only need to specify $W(r)\phi_\mathrm{res}(r)$ in terms of two parameters to recover the magnetic field dependence of the scattering length in Eq. (\[aofB\]) and its relationship with the binding energy of the highest excited vibrational state. A derivation beyond the scope of this paper shows that, once the width of the resonance is known experimentally, a determination of the resonance shift does not require a full solution of the coupled channel two-body Schrödinger equation with a realistic potential matrix. It turns out that Eq. (\[resonanceshift\]) is excellently approximated by [@Julienne89] $$B_0-B_\mathrm{res}=(\Delta B)\frac{\frac{a_\mathrm{bg}}{\overline{a}}
\left(1-\frac{a_\mathrm{bg}}{\overline{a}}\right)}
{1+\left(1-\frac{a_\mathrm{bg}}{\overline{a}}\right)^2}.
\label{magicformula}$$ Here $\overline{a}$ is the mean scattering length of the background scattering potential $V_\mathrm{bg}(r)$, which is related to the van der Waals length in Eq. (\[lvdW\]) and Euler’s $\Gamma$ function by [@GribakinFlambaum93] $$\overline{a}=\frac{1}{\sqrt{2}}
\frac{\Gamma(3/4)}{\Gamma(5/4)}l_\mathrm{vdW}.$$ This approximation is consistent with the treatment of the background scattering in terms of just the parameters $a_\mathrm{bg}$ and $l_\mathrm{vdW}$ in \[subsubsec:background\].
In addition to the parameters $C_6$ and $a_\mathrm{bg}$ of the background scattering potential $V_\mathrm{bg}(r)$, a minimal description of the resonance enhanced scattering in a two channel Hamiltonian thus requires us to account for the width $(\Delta B)$ in Eq. (\[resonancewidth\]) and the shift $B_0-B_\mathrm{res}$ in Eqs. (\[resonanceshift\]) and (\[magicformula\]), in terms of the interchannel coupling $W(r)\phi_\mathrm{res}(r)$, while the slope $\left[\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})\right]$ of the resonance in Eq. (\[slope\]) determines the component of the Hamiltonian in the closed channel that is strongly coupled to the open channel. For the 100 mT Feshbach resonance of $^{87}$Rb, four of the five parameters of the two channel Hamiltonian, i.e. $C_6=4660$ a.u. [@Roberts01] (1 a.u. = 0.095734 yJ nm$^6$), $a_\mathrm{bg}=100 \ a_\mathrm{Bohr}$ [@vanKempen02] ($a_\mathrm{Bohr}=0.052918\ $nm), $(\Delta B)=0.02$ mT and $B_0-B_\mathrm{res}=-0.006371$ mT, can be deduced either directly from experiments [@Volz03] or from Eq. (\[magicformula\]). The only parameter that is not easily accessible is the slope of the resonance. We have obtained $\frac{1}{h}\left[\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})\right]=38$ MHz/mT from the binding energies in Fig. \[fig:Eboverview\]. In the appendix we provide the explicit form of the minimal two channel Hamiltonian in the pole approximation that we apply in the following to determine the dynamics of the adiabatic association of molecules. Figure \[fig:EbofBtwochannel\] shows the magnetic field dependence of the binding energies as obtained from this minimal Hamiltonian. The low energy scattering properties of two asymptotically free $^{87}$Rb atoms in the open channel and the properties of the highest excited vibrational bound state are insensitive to the details of the implementation of the five parameter two channel Hamiltonian.
![The magnetic field dependence of the binding energy of the vibrational bound states of the two channel Hamiltonian in the appendix (solid curves). On the low field side of the resonance a new bound state $\phi_\mathrm{b}$ emerges at the resonance position $B_0=100.74$ mT [@Volz03] whose binding energy we have denoted by $E_\mathrm{b}(B)$. At magnetic field strengths asymptotically far from the resonance, the highest excited two channel vibrational bound state becomes identical with the highest excited single channel vibrational bound state $\phi_{-1}(r)$ that is associated with $V_\mathrm{bg}(r)$ and whose binding energy is denoted by $E_{-1}$ (dotted horizontal line). The dashed line indicates the energy $E_\mathrm{res}(B)$ of the resonance state.[]{data-label="fig:EbofBtwochannel"}](87RbEbofB.eps){width="\columnwidth"}
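As a consistency check of Eq. (\[magicformula\]), the quoted resonance shift can be reproduced from $C_6$, $a_\mathrm{bg}$ and $(\Delta B)$ alone. The sketch below first evaluates $\overline{a}$ from the van der Waals length; the numerical values of the physical constants are assumptions of the illustration.

```python
import math

# Consistency check of Eq. (magicformula): reproduce the quoted shift
# B0 - B_res = -0.006371 mT from C6, a_bg and Delta B alone.  The physical
# constants below are assumed values, inserted for illustration.
hbar = 1.054571817e-34          # J s
a_bohr = 0.52917721e-10         # m
m = 86.909 * 1.66053907e-27     # kg, 87Rb atomic mass
C6 = 4660 * 9.5734e-80          # J m^6

l_vdw = 0.5 * (m * C6 / hbar**2) ** 0.25                   # Eq. (lvdW)
a_mean = math.gamma(0.75) / math.gamma(1.25) * l_vdw / math.sqrt(2.0)

x = (100 * a_bohr) / a_mean     # a_bg / a_mean with a_bg = 100 a_Bohr
delta_B = 0.02                  # resonance width (mT)
shift = delta_B * x * (1 - x) / (1 + (1 - x) ** 2)         # Eq. (magicformula)
print(shift)                    # about -0.00637 mT, matching the quoted value
```

With $\overline{a}\approx 79\ a_\mathrm{Bohr}$ the formula gives $B_0-B_\mathrm{res}\approx-0.0064$ mT, in close agreement with the value $-0.006371$ mT quoted above.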
Universal properties of near resonant bound states {#subsec:universal}
--------------------------------------------------
We shall now focus on the properties of the highest excited vibrational bound state $\phi_\mathrm{b}$ on the low field side of the Feshbach resonance whose emergence at the resonance position $B_0=100.74$ mT causes the singularity of the scattering length (see Fig. \[fig:EbofBtwochannel\]). The vibrational state $\phi_\mathrm{b}$ is determined by its components in the open channel and in the closed channel strongly coupled to it, as given by Eq. (\[phib\]), while the binding energy is determined by Eq. (\[determinationEb\]). We note that the closed channel component $\phi_\mathrm{b}^\mathrm{cl}(r)$ in Eq. (\[phib\]) has the functional form of the resonance state $\phi_\mathrm{res}(r)$, which we have normalised to unity. Figure \[fig:mixingcoefficient\] shows the population $$4\pi \int_0^\infty r^2 dr \
\left|\phi_\mathrm{b}^\mathrm{bg}(r)\right|^2=
\frac{\mathcal{N}_\mathrm{b}^2-1}{\mathcal{N}_\mathrm{b}^2}$$ of the open channel component of the highest excited vibrational bound state $\phi_\mathrm{b}$ as a function of the magnetic field strength.
![The population of the open channel component of the highest excited vibrational bound state $\phi_\mathrm{b}$ as a function of the magnetic field strength on the low field side of the Feshbach resonance. The population was determined from the minimal two channel Hamiltonian in the appendix.[]{data-label="fig:mixingcoefficient"}](populationbg.eps){width="\columnwidth"}
At magnetic field strengths asymptotically far from the resonance, Fig. \[fig:mixingcoefficient\] shows that $\phi_\mathrm{b}$ becomes identical with the bound state $\phi_{-1}(r)$ of $V_\mathrm{bg}(r)$, i.e. the highest excited vibrational bound state in the absence of interchannel coupling. Figure \[fig:mixingcoefficient\] also reveals that $\phi_\mathrm{b}$ is dominated by its open channel component in a small region of magnetic field strengths in the close vicinity of the Feshbach resonance.
### Universal binding energy
We shall study the binding energy $E_\mathrm{b}$ that is determined by Eq. (\[determinationEb\]) in this small region of magnetic field strengths on the low field side of the Feshbach resonance. As the binding energy vanishes at the resonance position, these studies will involve the asymptotic behaviour of Eq. (\[determinationEb\]) in the limit $E_\mathrm{b}\to 0$. Inserting the resolvent identity [@Newton82] $$G_\mathrm{bg}(E_\mathrm{b})=G_\mathrm{bg}(0)-
E_\mathrm{b}G_\mathrm{bg}(0)G_\mathrm{bg}(E_\mathrm{b})$$ as well as Eqs. (\[slope\]) and (\[resonanceshift\]) into Eq. (\[determinationEb\]) yields: $$\begin{aligned}
\nonumber
E_\mathrm{b}=&\left[\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})\right]
(B-B_0)\\
&-E_\mathrm{b}
\langle\phi_\mathrm{res}|WG_\mathrm{bg}(0)G_\mathrm{bg}(E_\mathrm{b})
W|\phi_\mathrm{res}\rangle.
\label{Ebuniversal1}\end{aligned}$$ In accordance with Eq. (\[Gbg\]), the Green’s functions $G_\mathrm{bg}(0)$ and $G_\mathrm{bg}(E_\mathrm{b})$ in Eq. (\[Ebuniversal1\]) can be decomposed into the complete orthogonal set of bound and continuum energy states associated with the background scattering potential $V_\mathrm{bg}(r)$. These decompositions yield: $$\begin{aligned}
\nonumber
E_\mathrm{b}=&
\left[\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})\right](B-B_0)\\
&-E_\mathrm{b}
\left[
\int d\mathbf{p} \
\frac{\left|\langle\phi_\mathrm{res}|W|\phi_\mathbf{p}^{(+)}\rangle
\right|^2}
{-\frac{p^2}{m}
\left(E_\mathrm{b}-\frac{p^2}{m}\right)}+
\sum_v
\frac{\left|\langle\phi_\mathrm{res}|W|\phi_v\rangle\right|^2}
{-E_v\left(E_\mathrm{b}-E_v\right)}
\right].
\label{spectraldecompositionEb}\end{aligned}$$ Here the sum includes the indices $v=-1,-2,-3,\ldots$ of all vibrational bound states $\phi_v(r)$ associated with the potential $V_\mathrm{bg}(r)$. In the limit of vanishing binding energy $E_\mathrm{b}$, the momentum integral on the right hand side of Eq. (\[spectraldecompositionEb\]) is singular at $p=0$. As a consequence, the slowly varying matrix element $\langle\phi_\mathrm{res}|W|\phi_\mathbf{p}^{(+)}\rangle$ can be evaluated at $p=0$, while the sum over the vibrational bound states can be neglected. Using the general relationship between the matrix element $\langle\phi_\mathrm{res}|W|\phi_0^{(+)}\rangle$ and the resonance width in Eq. (\[resonancewidth\]), the asymptotic form of the remaining momentum integral is then given by: $$\begin{aligned}
\nonumber
\int d\mathbf{p} \ \frac{\left|
\langle\phi_\mathrm{res}|W|\phi_\mathbf{p}^{(+)}\rangle\right|^2}
{-\frac{p^2}{m}\left(E_\mathrm{b}-\frac{p^2}{m}\right)}
&\underset{E_\mathrm{b}\to 0}{\sim}
4\pi\int_0^\infty p^2dp \
\frac{\left|\langle\phi_\mathrm{res}|W|\phi_0^{(+)}\rangle\right|^2}
{-\frac{p^2}{m}\left(E_\mathrm{b}-\frac{p^2}{m}\right)}\\
&=\left[\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})\right]
\left(\Delta B\right)
a_\mathrm{bg}\sqrt{\frac{m}{\hbar^2\left|E_\mathrm{b}\right|}}.
\end{aligned}$$ With this evaluation of the integral, Eq. (\[spectraldecompositionEb\]) can be solved for $E_\mathrm{b}$ and recovers the universal form $$E_\mathrm{b}(B)=-\frac{\hbar^2}{m[a(B)]^2}
\label{Ebuniversal}$$ of the binding energy.
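The approach to the universal law can be checked numerically. The sketch below solves the near threshold form of Eq. (\[spectraldecompositionEb\]), $E_\mathrm{b}=\left[\frac{dE_\mathrm{res}}{dB}\right](B-B_0)-E_\mathrm{b}\left[\frac{dE_\mathrm{res}}{dB}\right](\Delta B)\,a_\mathrm{bg}\sqrt{m/(\hbar^2|E_\mathrm{b}|)}$, by bisection and compares the root with Eq. (\[Ebuniversal\]). The resonance parameters are those quoted in the text; the solver, the constants and the chosen field strength are assumptions of this illustration.

```python
import math

# Solve the near-threshold binding energy condition by bisection and compare
# with the universal law E_b = -hbar^2 / (m a^2), using the 100 mT resonance
# parameters quoted in the text.  Constants are assumed values.
hbar = 1.054571817e-34
h = 2 * math.pi * hbar
a_bohr = 0.52917721e-10
m = 86.909 * 1.66053907e-27     # kg, 87Rb atomic mass

s = h * 38e6                    # slope dE_res/dB in J per mT
delta_B = 0.02                  # resonance width (mT)
a_bg = 100 * a_bohr

a = 50_000 * a_bohr             # scattering length close to resonance
B_minus_B0 = -delta_B * a_bg / (a - a_bg)     # inverted Eq. (aofB)

def f(Eb):                      # root of f gives the binding energy
    return Eb * (1 + s * delta_B * a_bg * math.sqrt(m / (hbar**2 * abs(Eb)))) \
        - s * B_minus_B0

lo, hi = -1e-28, -1e-36         # f(lo) < 0 < f(hi)
for _ in range(200):            # plain bisection
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
Eb = 0.5 * (lo + hi)

Eb_universal = -hbar**2 / (m * a**2)          # Eq. (Ebuniversal)
print(Eb / h, Eb_universal / h)               # both ~ -16 Hz, within a few %
```

Closer to resonance the agreement improves further, while at smaller scattering lengths finite range corrections reduce $|E_\mathrm{b}|$ below the universal value.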
### Universal bound states
In accordance with Eq. (\[phib\]), near resonance the highest excited bound state is strongly dominated by its component in the asymptotic open channel, which is given by $$\phi_\mathrm{b}(r)=
\phi_\mathrm{b}^\mathrm{bg}(r)=\frac{1}{\sqrt{2\pi a(B)}}
\frac{e^{-r/a(B)}}{r}
\label{universalwavefunction}$$ at interatomic distances that are large in comparison with the van der Waals length. This wave function is extended far beyond the outer classical turning point $r_\mathrm{classical}=
\left[a(B)\left(2 l_\mathrm{vdW}\right)^2\right]^{1/3}$ of the background scattering potential $V_\mathrm{bg}(r)$, with a mean interatomic distance on the order of the scattering length: $$\langle r \rangle=
4\pi\int_0^\infty r^2 dr \ \left|\phi_\mathrm{b}(r)\right|^2 r=a(B)/2.
\label{bondlength}$$ As a consequence, the coupling between the channels in the close vicinity of a Feshbach resonance can be treated, to an excellent approximation, as a perturbation of the background scattering potential $V_\mathrm{bg}(r)$. In fact, to describe the universal low energy scattering properties as well as the highest excited vibrational bound state, the whole potential matrix can be replaced by a single channel potential $V(B,r)$ that recovers the correct magnetic field dependence of the scattering length and the length scale associated with the long range van der Waals interaction between the atoms. This involves the replacement of the two-body Hamiltonian matrix in Eq. (\[H2B2channel\]) by the single channel Hamiltonian $$H_\mathrm{2B}=-\frac{\hbar^2}{m}\nabla^2+V(B,r).
\label{H2B1channel}$$ Figure \[fig:haloboundstates\] shows a comparison between the component $\phi_\mathrm{b}^\mathrm{bg}$ of the highest excited vibrational bound state as determined from a full multichannel calculation, and from a single channel Hamiltonian with the background scattering potential modified in such a way that it recovers the scattering length of the full multichannel Hamiltonian.
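The normalisation of the universal wave function in Eq. (\[universalwavefunction\]) and the bond length in Eq. (\[bondlength\]) can be verified by direct quadrature; the dimensionless grid below is an implementation choice of this illustration.

```python
import numpy as np

# Numerical check of Eqs. (universalwavefunction) and (bondlength): the halo
# state phi_b(r) = exp(-r/a) / (r sqrt(2 pi a)) is normalised to unity and
# has mean interatomic distance <r> = a/2.  Units with a = 1 are used.
a = 1.0
r = np.linspace(1e-6, 60.0 * a, 400_001)
dr = r[1] - r[0]
phi2 = np.exp(-2.0 * r / a) / (2.0 * np.pi * a * r**2)   # |phi_b(r)|^2

norm = np.sum(4.0 * np.pi * r**2 * phi2) * dr            # -> 1
mean_r = np.sum(4.0 * np.pi * r**3 * phi2) * dr          # -> a/2
print(norm, mean_r)
```

Both quadratures reproduce the analytic results, confirming that the halo state places half of its probability beyond $r\approx 0.35\,a$ and is spatially dominated by the classically forbidden region.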
![Radial wave functions associated with the component $\phi_\mathrm{b}^\mathrm{bg}$ of the highest excited vibrational bound state as determined from a full multichannel calculation (solid curve) and from a single channel Hamiltonian (dashed curve) with the background scattering potential modified in such a way that it recovers the same scattering length $a(B)=7200 \ a_\mathrm{Bohr}$ as the full multichannel Hamiltonian. The dotted curve shows the result of an analogous calculation with the minimal two channel Hamiltonian in the appendix. The components $\phi_\mathrm{b}^\mathrm{bg}$ of the two- and multichannel wave functions agree at interatomic distances $r$ that are large in comparison with the van der Waals length of about $80 \ a_\mathrm{Bohr}$. The minimal two channel description does not account for all the bound states of the exact background scattering potential. The nodes of the multichannel wave function are therefore not recovered. The differences between the single channel wave function and the solid and dotted curves result from their different normalisation due to the small but still relevant 10 % admixture of the resonance state in the multichannel wave functions. We note that the radial coordinate is given on a logarithmic scale.[]{data-label="fig:haloboundstates"}](wavefunction.eps){width="\columnwidth"}
When the magnetic field strength approaches the resonance position from below, the spatial extent of $\phi_\mathrm{b}(r)$ becomes infinite and the bound state wave function becomes degenerate with the zero energy scattering state of two asymptotically free atoms in the open channel. Consequently, by sweeping the magnetic field strength adiabatically across the resonance from negative to positive scattering lengths the zero energy scattering state is transferred smoothly into the bound state $\phi_\mathrm{b}(r)$. This is the key feature of the adiabatic association of molecules in ultracold atomic gases. We note that applying the sweep of the magnetic field strength in the direction from positive to negative scattering lengths is not suited to associate molecules because there is no energetically accessible vibrational bound state on the high field side of the Feshbach resonance. Both the association mechanism and its asymmetry in the direction of the sweep can be understood purely in terms of two-body considerations.
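The near-resonance behaviour described above can be sketched numerically. The block below assumes the standard dispersive form $a(B)=a_\mathrm{bg}\left[1-(\Delta B)/(B-B_\mathrm{res})\right]$ for the scattering length and the universal halo relation $E_\mathrm{b}=-\hbar^2/(ma^2)$ (reduced mass $m/2$) for the highest excited bound state; the parameter values for $^{87}$Rb are illustrative assumptions, not values quoted in the text.

```python
import math

# Hedged sketch of the universal low energy behaviour near a Feshbach
# resonance: dispersive scattering length and halo bound state energy.
# All parameter values are illustrative assumptions for 87Rb.
hbar = 1.054571817e-34          # J s
m = 86.909 * 1.66053907e-27     # 87Rb mass, kg
a_bg = 100.0 * 0.5291772e-10    # assumed background scattering length, m
B_res = 100.74e-3               # resonance position, T
dB = 2.0e-5                     # assumed resonance width, T

def a(B):
    """Dispersive scattering length near an isolated Feshbach resonance."""
    return a_bg * (1.0 - dB / (B - B_res))

def E_b(B):
    """Universal halo bound state energy, valid for a(B) much larger
    than the van der Waals length."""
    return -hbar**2 / (m * a(B)**2)

# Below the resonance a(B) is large and positive and the state is a halo:
B = B_res - 1.0e-6              # 0.001 mT below the resonance
a_halo = a(B)
E_halo = E_b(B)
```

As the field approaches the resonance from below, $a(B)$ diverges and $E_\mathrm{b}$ approaches the threshold from below, which is the degeneracy with the zero energy scattering state noted above.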
Transition probability {#subsec:2Bapproach}
----------------------
We shall show in the following how the dynamics of the adiabatic association technique can be described in terms of the previous considerations. We assume that the magnetic field strength is swept linearly across the resonance position starting asymptotically far from the resonance on the high field side. The two-body Hamiltonian in Eq. (\[H2B2channel\]) becomes explicitly time dependent through its dependence on the magnetic field strength, i.e. $H_\mathrm{2B}=H_\mathrm{2B}(t)$. The probability for the adiabatic association of a pair of atoms in the initial state $|\Psi_i\rangle$ is determined in terms of the two-body time evolution operator $U_\mathrm{2B}(t_f,t_i)$ by: $$P_{fi}=\left|\langle\phi_\mathrm{b}(B_f)|
U_\mathrm{2B}(t_f,t_i)|\Psi_i\rangle\right|^2.
\label{transitionprobability}$$ Here $t_i$ and $t_f$ are the initial and final times before and after the linear ramp of the magnetic field strength, respectively, while $\phi_\mathrm{b}(B_f)$ is the highest excited vibrational bound state of the two-body Hamiltonian $H_\mathrm{2B}(t_f)$ at the final magnetic field strength $B(t_f)$. The time evolution operator is determined by the Schrödinger equation: $$i\hbar\frac{\partial}{\partial t}
U_\mathrm{2B}(t,t_i)=H_\mathrm{2B}(t)U_\mathrm{2B}(t,t_i).
\label{SEU2B}$$ The transition amplitude in Eq. (\[transitionprobability\]) for the minimal two channel Hamiltonian in the appendix can be obtained with methods similar to those applied to the determination of the single channel two-body time evolution operator in Ref. [@KGB03]. A transition amplitude similar to that in Eq. (\[transitionprobability\]) serves as an input to the microscopic quantum dynamics approach to the association of molecules in a trapped dilute Bose-Einstein condensate that is presented in Subsection \[subsec:microscopicquantumdynamics\].
We shall use Eq. (\[transitionprobability\]) to determine the probability for the association of two atoms in the ground state of a large box of volume $\mathcal{V}$ that is later taken to infinity. In the homogeneous limit the appropriate initial state in Eq. (\[transitionprobability\]) is given by the product state $$|\Psi_i\rangle=|0,\mathrm{bg}\rangle\sqrt{\frac{(2\pi\hbar)^3}{\mathcal{V}}},
\label{Psiinitialbox}$$ where $|0\rangle$ is the isotropic zero momentum plane wave of the relative motion of two atoms in free space and $|\mathrm{bg}\rangle$ denotes their internal state in the asymptotic open channel. The factor $\sqrt{(2\pi\hbar)^3/\mathcal{V}}$ in Eq. (\[Psiinitialbox\]) provides the appropriate normalisation. In the homogeneous limit the product $P_{fi}\mathcal{V}$ is independent of the volume $\mathcal{V}$.
Figure \[fig:transprob\] shows the product $P_{fi}\mathcal{V}$ as a function of the initial time $t_i$ for a linear sweep of the magnetic field strength across the 100.74 mT Feshbach resonance of $^{87}$Rb with a ramp speed of 0.1 mT/ms. The calculations were performed on the basis of Eq. (\[transitionprobability\]) with the two channel approach of the appendix (solid curve) and with a single channel Hamiltonian (dashed curve) that properly accounts for the width of the resonance. Although the detailed time evolution is slightly different in the single and two channel cases, the transition probabilities are virtually equal once the ramp starts sufficiently far outside the width of the resonance. We shall show in Subsection \[subsec:LandauZener\] how this independence of the final molecular population from the details of the two-body description can be derived in a systematic way. This derivation will reveal that in the limits $t_i\to-\infty$ and $t_f\to\infty$ the product $P_{fi}\mathcal{V}$ depends only on the atomic mass $m$, the background scattering length $a_\mathrm{bg}$, the width of the resonance $(\Delta B)$, and the ramp speed.
![The product $P_{fi}\mathcal{V}$ as a function of the initial time $t_i$ for two $^{87}$Rb atoms in the ground state of a large box with the volume $\mathcal{V}$. The solid curve shows results based on the two channel approach of the appendix, while the dashed curve corresponds to an analogous calculation with a single channel Hamiltonian that properly accounts for the width of the resonance. The final time in these calculations is $t_f=80\ \mu$s; $t_i=0$ corresponds to the initial time at which the linear ramp of the magnetic field strength starts at the position of the Feshbach resonance. In the case of $t_i<0$ the linear ramp crosses the resonance position from above. The horizontal arrow indicates the asymptotic Landau-Zener prediction for $P_{fi}\mathcal{V}$ as obtained from Eq. (\[deltaLZ\]) in Subsection \[subsec:LandauZener\].[]{data-label="fig:transprob"}](Pfi1Gperms.eps){width="\columnwidth"}
Landau-Zener approach {#subsec:LandauZener}
---------------------
An intuitive extension of the two-body dynamics to the adiabatic association of molecules in a dilute Bose-Einstein condensate has been developed by Mies [*et al.*]{} [@Mies00]. This approach is based on the time dependent two-body Schrödinger equation $$i\hbar\frac{\partial}{\partial t}\Psi(t)=H_\mathrm{2B}(t)\Psi(t)
\label{SEtimedependent}$$ in the spherically symmetric harmonic potential of an optical atom trap [@opticaltrap]. The confining atom trap modifies the potentials in Eq. (\[H2B2channel\]) to $V_\mathrm{bg}(r)\to V_\mathrm{bg}(r)+\frac{1}{2}\frac{m}{2}
\omega_\mathrm{ho}^2 r^2$ and $V_\mathrm{cl}(B,r)\to V_\mathrm{cl}(B,r)+
\frac{1}{2}\frac{m}{2}\omega_\mathrm{ho}^2 r^2$, where $\omega_\mathrm{ho}$ is the angular trap frequency. To extend the binary dynamics to the association of molecules in a dilute Bose-Einstein condensate, Mies [*et al.*]{} have formulated Eq. (\[SEtimedependent\]) in terms of a basis set expansion with respect to the single channel energy states in the open channel and the closed channel strongly coupled to it. In this subsection, we shall develop an improved version of this approach.
### Two-body configuration interaction approach
We label the spherically symmetric energy states associated with the background scattering by the vibrational quantum numbers $v=0,1,2,\ldots$. These energy states fulfil the stationary Schrödinger equation $$\left[-\frac{\hbar^2}{m}\nabla^2+V_\mathrm{bg}(r)\right]\phi_v(r)=
E_v\phi_v(r).$$ In contrast to the free space continuum energy states $\phi_\mathbf{p}^{(+)}(\mathbf{r})$ in Eq. (\[BCphipplus\]), the vibrational trap states $\phi_v(r)$ are confined in space and we shall assume them to be unit normalised. In the atom trap the dissociation threshold energy in the open channel is given by $E_0$. In analogy to Subsection \[subsec:universal\] we will label the vibrational bound states $\phi_v(r)$ below this threshold by negative quantum numbers $v=-1,-2,-3,\dots$. For realistic atom traps and background scattering potentials, we can presuppose the condition $|E_{-1}|\gg \hbar\omega_\mathrm{ho}$ to be fulfilled. As a consequence, the vibrational bound states below the dissociation threshold are hardly modified by the trapping potential. We shall assume furthermore that the configuration of the atoms in the closed channel that is strongly coupled to the open channel is restricted to the resonance state $\phi_\mathrm{res}(r)$. This assumption is analogous to the pole approximation in Eq. (\[poleapproximation\]). The basis set expansions for the open channel and the closed channel components of $\Psi(t)$ are then given by $$\begin{aligned}
\Psi_\mathrm{bg}(r,t)&=\sum_v\phi_v(r)C_v(t)\\
\Psi_\mathrm{cl}(r,t)&=\phi_\mathrm{res}(r)C_\mathrm{res}(t), \end{aligned}$$ respectively. The expansion coefficients $C_v(t)$ and $C_\mathrm{res}(t)$ are determined by Eq. (\[SEtimedependent\]) in terms of the equivalent infinite set of coupled configuration interaction (CI) equations: $$\begin{aligned}
\label{CIbg}
i\hbar\dot{C}_v(t)&=E_vC_v(t)+\langle\phi_v|W|\phi_\mathrm{res}\rangle
C_\mathrm{res}(t),\\
i\hbar\dot{C}_\mathrm{res}(t)&=E_\mathrm{res}(B(t))C_\mathrm{res}(t)+
\sum_v\langle\phi_\mathrm{res}|W|\phi_v\rangle C_v(t).
\label{CIcl}\end{aligned}$$ These equations determine the exact dynamics of the adiabatic association of two atoms in the confining potential of an atom trap.
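In matrix form, Eqs. (\[CIbg\]) and (\[CIcl\]) read $i\hbar\dot{C}=H(t)C$ with a bordered Hamiltonian: the trap energies $E_v$ on the diagonal and the couplings $\langle\phi_v|W|\phi_\mathrm{res}\rangle$ in the row and column of the resonance state. The sketch below builds this structure for a truncated basis with placeholder numbers (not computed matrix elements) and checks that the matrix is Hermitian, which is what guarantees norm conservation of the CI dynamics.

```python
import math

# Bordered CI matrix for a truncated basis of n_v trap states plus the
# resonance state.  Energies and couplings are placeholders only.
n_v = 6
E_v = [1.5 + 2.0 * v for v in range(n_v)]    # trap energies (placeholder)
W_v = [0.3 / (1.0 + v) for v in range(n_v)]  # <phi_v|W|phi_res> (placeholder)
E_res = 4.0                                  # resonance energy at fixed B

dim = n_v + 1
H = [[0.0] * dim for _ in range(dim)]
for v in range(n_v):
    H[v][v] = E_v[v]       # diagonal trap energies
    H[v][n_v] = W_v[v]     # coupling of trap state v to the resonance state
    H[n_v][v] = W_v[v]     # Hermitian conjugate (couplings taken real)
H[n_v][n_v] = E_res
```

Note that the trap states are not coupled among themselves: all interchannel coupling proceeds through the single resonance state, which is what makes the Landau-Zener reduction of the next subsection possible.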
### Many-body configuration interaction approach
There are two essential phenomena that should be taken into account to extend the two-body CI approach to the many-body physics of the adiabatic association of molecules in a dilute Bose-Einstein condensate. First, the many-body mean field interactions can cause the atom cloud to occupy a much larger volume than a single atom would in the harmonic trapping potential. The size of the cloud is determined by the nonlinearity parameter $k_\mathrm{bg}=Na_\mathrm{bg}/l_\mathrm{ho}$ of the Gross-Pitaevskii equation [@Dalfovo99]. Here $N$ is the number of atoms and $l_\mathrm{ho}=\sqrt{\hbar/(m\omega_\mathrm{ho})}$ is the oscillator length of the atom trap. We shall assume in the following that the Thomas Fermi condition $k_\mathrm{bg}\gg 1$ is fulfilled. This directly implies that the extension of the atom cloud is characterised by the Thomas Fermi radius: $$l_\mathrm{TF}=l_\mathrm{ho}\left(15 \ k_\mathrm{bg}\right)^{1/5}.$$ This radius determines the mean kinetic energy per atom [@Fetter98] $$\frac{\langle E_\mathrm{kin}\rangle}{N}=\hbar\omega_\mathrm{ho}\frac{5}{2}
\left(\frac{l_\mathrm{ho}}{l_\mathrm{TF}}\right)^2
\mathrm{ln}\left(\frac{l_\mathrm{TF}}{1.2683 \ l_\mathrm{ho}}\right).$$ The physical intuition underlying the Thomas Fermi limit relies upon the observation that the mean field potential cancels the trap potential at distances smaller than the Thomas Fermi radius. Under these assumptions the single atoms experience an effective flat potential rather than the harmonic potential of the atom trap. Motivated by this physical intuition, Mies [*et al.*]{} replaced the vibrational trap states in the matrix elements in Eqs. (\[CIbg\]) and (\[CIcl\]) by those that correspond to a spherical box with a zero point energy of $\langle E_\mathrm{kin}\rangle/N$. The radius of this box is then determined by: $$l_\mathrm{box}=l_\mathrm{TF}\left(\frac{2}{5}\right)^{1/2}
\frac{\pi}{\sqrt{\mathrm{ln}
\left(\frac{l_\mathrm{TF}}{1.2683 \ l_\mathrm{ho}}\right)}}.$$ The second many-body phenomenon taken into account by Mies [*et al.*]{} is the macroscopic occupation of the lowest energy mode corresponding to the Bose-Einstein condensate. Neglecting the curve crossing between $E_\mathrm{res}(B)$ and the energy $E_{-1}$ of the highest excited bound state of the background scattering potential in Fig. \[fig:EbofBtwochannel\], in a downward ramp of the magnetic field strength across the Feshbach resonance the prevailing asymptotic transition involves only the condensate mode and the resonance state. As each atom has $N-1$ atoms to interact with, the element of the potential matrix that is associated with this prevailing transition is enhanced by a factor of $\sqrt{N-1}$.
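A minimal numerical sketch of these Thomas Fermi estimates is given below; the atom number, trap frequency and background scattering length are assumptions chosen only for illustration, not values quoted in the text.

```python
import math

# Thomas Fermi estimates: nonlinearity parameter k_bg, Thomas-Fermi
# radius l_TF, mean kinetic energy per atom, and equivalent box radius
# l_box.  All parameter values are illustrative assumptions for 87Rb.
hbar = 1.054571817e-34            # J s
m = 86.909 * 1.66053907e-27       # 87Rb mass, kg
omega_ho = 2.0 * math.pi * 100.0  # assumed trap frequency, rad/s
a_bg = 100.0 * 0.5291772e-10      # assumed background scattering length, m
N = 5.0e4                         # assumed atom number

l_ho = math.sqrt(hbar / (m * omega_ho))  # oscillator length
k_bg = N * a_bg / l_ho                   # Gross-Pitaevskii nonlinearity
l_TF = l_ho * (15.0 * k_bg) ** 0.2       # Thomas-Fermi radius

log_term = math.log(l_TF / (1.2683 * l_ho))
E_kin_per_N = hbar * omega_ho * 2.5 * (l_ho / l_TF) ** 2 * log_term
l_box = l_TF * math.sqrt(0.4) * math.pi / math.sqrt(log_term)
```

For these assumed numbers $k_\mathrm{bg}$ is of order a few hundred, so the Thomas Fermi condition $k_\mathrm{bg}\gg 1$ is comfortably fulfilled and the cloud is several times larger than the single particle oscillator length.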
Combining these results leads to a two level Rabi flopping model for the adiabatic association of molecules in a Bose-Einstein condensate: $$\begin{aligned}
\label{Rabibg}
i\hbar\dot{C}_0(t)&=E_0C_0(t)+\frac{1}{2}\hbar\Omega^* C_\mathrm{res}(t),\\
i\hbar\dot{C}_\mathrm{res}(t)&=E_\mathrm{res}(B(t))C_\mathrm{res}(t)+
\frac{1}{2}\hbar\Omega C_0(t).
\label{Rabicl}\end{aligned}$$ Here the Rabi frequency is given by $$\Omega=2\sqrt{N-1}\frac{1}{\hbar}
\sqrt{\frac{(2\pi\hbar)^3}{\frac{4\pi}{3}l_\mathrm{box}^3}}
\langle\phi_\mathrm{res}|W|\phi_0^{(+)}\rangle.
\label{Rabifrequency}$$ We note that we have quoted the Rabi frequency in terms of the free space zero energy background scattering state $\phi_0^{(+)}(\mathbf{r})$ as introduced in Eq. (\[BCphipplus\]). The factor $\sqrt{(2\pi\hbar)^3/\left(\frac{4\pi}{3}l_\mathrm{box}^3\right)}$ provides the proper normalisation that accounts for the finite volume $\mathcal{V}=\frac{4\pi}{3}l_\mathrm{box}^3$ of the box. In accordance with Eq. (\[resonancewidth\]), the matrix element $\langle\phi_\mathrm{res}|W|\phi_0^{(+)}\rangle$ in Eq. (\[Rabifrequency\]) can be expressed in terms of the resonance width $(\Delta B)$, the slope $\left[\frac{d E_\mathrm{res}}{d B}(B_\mathrm{res})\right]$ of the resonance and the background scattering length $a_\mathrm{bg}$.
When the magnetic field strength is swept linearly across the Feshbach resonance the energy of the resonance state changes linearly in time: $$E_\mathrm{res}(B(t))=E_0+
\left[\frac{d E_\mathrm{res}}{d B}(B_\mathrm{res})\right]
\left[\frac{d B}{d t}(t_\mathrm{res})\right](t-t_\mathrm{res}).
\label{Eresoft}$$ Here $t_\mathrm{res}$ is the time at which the energy of the resonance state crosses the dissociation threshold energy of the open channel, i.e. $B(t_\mathrm{res})=B_\mathrm{res}$ and $E_\mathrm{res}(B_\mathrm{res})=E_0$. Under the assumption of a linear Feshbach resonance crossing with the initial populations $|C_0(t_i)|^2=1$ and $|C_\mathrm{res}(t_i)|^2=0$, the final populations $|C_0(t_f)|^2$ and $|C_\mathrm{res}(t_f)|^2$ can be determined by the Landau-Zener formulae: $$\begin{aligned}
\label{LZbg}
|C_0(t_f)|^2&=e^{-2\pi\delta_\mathrm{LZ}},\\
|C_\mathrm{res}(t_f)|^2&=1-e^{-2\pi\delta_\mathrm{LZ}},
\label{LZcl}\end{aligned}$$ in the limits $t_i\to -\infty$ and $t_f\to\infty$. Derivation of the asymptotic Landau-Zener populations requires a lengthy calculation. Given the known general form of the exponent $\delta_\mathrm{LZ}$, however, a short calculation using Eq. (\[resonancewidth\]) reveals its simple dependence on the background scattering length $a_\mathrm{bg}$, the resonance width $(\Delta B)$ and the ramp speed $\left|\frac{d B}{d t}(t_\mathrm{res})\right|$: $$\delta_\mathrm{LZ}=
\frac{\hbar|\Omega|^2}{4\left|\frac{d E_\mathrm{res}}{d B}
(B_\mathrm{res})\right|\left|\frac{d B}{d t}(t_\mathrm{res})
\right|}\\
=\frac{(N-1)4\pi\hbar|a_\mathrm{bg}||\Delta B|}{\mathcal{V}m\left|
\frac{d B}{d t}(t_\mathrm{res})\right|}.
\label{deltaLZ}$$ Although Eq. (\[deltaLZ\]) can be derived on the basis of the simple two level Rabi flopping model in Eqs. (\[Rabibg\]) and (\[Rabicl\]), the two-body ($N=2$) Landau-Zener prediction is accurate even in applications to transition probabilities that include a continuum of energy levels above the dissociation threshold (see Fig. \[fig:transprob\]). In fact, the Landau-Zener coefficient for the asymptotic population of the resonance state $\phi_\mathrm{res}(r)$ can be derived rigorously for an arbitrary number of linear curve crossings associated with a quasi continuum of two-body energy states [@Demkov68]. We note, however, that despite the universality of the asymptotic populations, a two level model is not suited to provide an adequate description of the intermediate states and their intermediate populations. From this viewpoint, the agreement between the asymptotic two- and multilevel descriptions is coincidental.
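The Landau-Zener exponent of Eq. (\[deltaLZ\]) and the asymptotic populations of Eqs. (\[LZbg\]) and (\[LZcl\]) are straightforward to evaluate. The sketch below uses assumed values of $a_\mathrm{bg}$ and $(\Delta B)$ for the 100.74 mT resonance of $^{87}$Rb (illustrative only) and also verifies that $\delta_\mathrm{LZ}\mathcal{V}$, and hence $P_{fi}\mathcal{V}$ in the regime $\delta_\mathrm{LZ}\ll 1$, is independent of the box volume.

```python
import math

# Landau-Zener exponent of Eq. (deltaLZ) and asymptotic populations of
# Eqs. (LZbg)/(LZcl) for the two-body case N = 2.  Parameter values are
# illustrative assumptions for the 100.74 mT resonance of 87Rb.
hbar = 1.054571817e-34          # J s
m = 86.909 * 1.66053907e-27     # kg
a_bg = 100.0 * 0.5291772e-10    # assumed background scattering length, m
dB = 2.0e-5                     # assumed resonance width, T
ramp = 0.1                      # ramp speed: 0.1 mT/ms = 0.1 T/s

def delta_LZ(N, V):
    """Landau-Zener exponent for N atoms in a box of volume V (m^3)."""
    return (N - 1) * 4.0 * math.pi * hbar * a_bg * dB / (V * m * ramp)

V = 1.0e-15                     # sample box volume, m^3
d = delta_LZ(2, V)
P_bg = math.exp(-2.0 * math.pi * d)   # pair remains free
P_res = 1.0 - P_bg                    # pair transferred to resonance state

# P_fi * V is volume independent in the perturbative regime d << 1:
P_fi_V = P_res * V
```

In the homogeneous limit $\mathcal{V}\to\infty$ the exponent vanishes as $1/\mathcal{V}$, so $P_{fi}\mathcal{V}\to 2\pi\delta_\mathrm{LZ}\mathcal{V}$, the volume independent combination shown in Fig. \[fig:transprob\].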
When applied to a gas of many atoms the linear two level Rabi flopping model in Eqs. (\[Rabibg\]) and (\[Rabicl\]) yields analytic predictions on the efficiency of molecular production. The treatment of the Bose enhancement in the linear model, however, does not account for the depletion of the condensate mode in the course of the adiabatic association. This depletion can be accounted for in a straightforward way by replacing the initial number of condensate atoms $N$ by the actual number $N|C_0(t)|^2$ in the enhancement factor. This inclusion of the depletion modifies the linear Rabi flopping model in Eqs. (\[Rabibg\]) and (\[Rabicl\]) to the nonlinear dynamic equations: $$\begin{aligned}
\label{NLRabibg}
i\hbar\dot{C}_0(t)&=E_0C_0(t)+\frac{1}{2}
\hbar\Omega^* |C_0(t)| C_\mathrm{res}(t),\\
i\hbar\dot{C}_\mathrm{res}(t)&=E_\mathrm{res}(B(t))C_\mathrm{res}(t)+
\frac{1}{2}\hbar\Omega |C_0(t)| C_0(t).
\label{NLRabicl}\end{aligned}$$ At final times $t_f\to\infty$ we shall interpret $N|C_0(t_f)|^2$ as the number of atoms in the remnant Bose-Einstein condensate and $N|C_\mathrm{res}(t_f)|^2$ as the number of atoms converted into molecules. The number of diatomic molecules produced in the adiabatic association is then given by $N|C_\mathrm{res}(t_f)|^2/2$. We show in Subsection \[subsec:comparison\] that Eqs. (\[NLRabibg\]) and (\[NLRabicl\]) significantly improve the predictions of the asymptotic Landau-Zener formulae in Eqs. (\[LZbg\]) and (\[LZcl\]), in particular when the molecular production begins to saturate.
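The nonlinear model of Eqs. (\[NLRabibg\]) and (\[NLRabicl\]) is easily integrated numerically. The sketch below uses a fourth order Runge-Kutta scheme in the frame where $E_0=0$ and the resonance detuning grows linearly in time; the Rabi frequency and sweep rate are illustrative assumptions, not values from the text. For a real Rabi frequency the equations conserve $|C_0|^2+|C_\mathrm{res}|^2$, which serves as a check on the integration.

```python
import math

# RK4 integration of the nonlinear Rabi model, Eqs. (NLRabibg) and
# (NLRabicl), with E_0 = 0 and a linearly growing resonance detuning.
# The Rabi frequency and the sweep rate are illustrative assumptions.
Omega = 2.0 * math.pi * 5.0e3   # assumed Rabi frequency, rad/s (real)
alpha = 2.0 * math.pi * 1.0e7   # assumed detuning sweep rate, rad/s^2

def rhs(t, C0, Cres):
    # hbar-scaled Eqs. (NLRabibg)/(NLRabicl) with |C_0| depletion factor
    dC0 = -0.5j * Omega * abs(C0) * Cres
    dCres = -1j * (alpha * t * Cres + 0.5 * Omega * abs(C0) * C0)
    return dC0, dCres

def rk4_step(t, C0, Cres, dt):
    a0, ar = rhs(t, C0, Cres)
    b0, br = rhs(t + dt / 2, C0 + dt / 2 * a0, Cres + dt / 2 * ar)
    c0, cr = rhs(t + dt / 2, C0 + dt / 2 * b0, Cres + dt / 2 * br)
    d0, dr = rhs(t + dt, C0 + dt * c0, Cres + dt * cr)
    return (C0 + dt / 6 * (a0 + 2 * b0 + 2 * c0 + d0),
            Cres + dt / 6 * (ar + 2 * br + 2 * cr + dr))

C0, Cres = 1.0 + 0.0j, 0.0 + 0.0j   # all atoms initially in the condensate
t, dt, t_f = -2.0e-3, 1.0e-7, 2.0e-3
while t < t_f:
    C0, Cres = rk4_step(t, C0, Cres, dt)
    t += dt

norm = abs(C0) ** 2 + abs(Cres) ** 2   # conserved by the model equations
```

For the strongly adiabatic parameters assumed here the sweep converts most of the population into the resonance state, while the residual condensate fraction reflects the loss of adiabaticity once the depleted coupling $\Omega|C_0|$ becomes small.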
Dissociation of molecules {#subsec:dissociation}
-------------------------
We note that the chemical bond of the diatomic molecules shifts the atomic spectral lines. This line shift can prevent the bound atoms from scattering light of probe lasers even in the case of the very weakly bound Feshbach molecules that are produced with the adiabatic association technique. As a consequence, many present-day experimental molecular detection schemes rely upon the spatial separation of bound and free atoms in the cloud and the subsequent dissociation of the molecules (cf., e.g., Ref. [@Duerr03]). The highest excited vibrational molecular bound state can be dissociated by crossing the Feshbach resonance from positive to negative scattering lengths, which corresponds to an upward ramp in the case of the 100 mT Feshbach resonance of $^{87}$Rb. The crossing of the Feshbach resonance transfers the bound molecules into correlated pairs of atoms which can have a comparatively high relative velocity, depending on the ramp speed. In the following we shall characterise the dissociation energy spectra for the 100 mT Feshbach resonance of $^{87}$Rb, under the assumption that many-body phenomena can be neglected. We expect this assumption to be accurate because the final continuum states of the molecular fragments have a low occupancy. Consequently, phenomena related to Bose enhancement should be negligible.
### General determination of dissociation energy spectra
We shall consider linear upward ramps from the initial magnetic field strength $B_i$ at time $t_i$ across the Feshbach resonance to the final field strength $B_f$ at time $t_f$. Starting from the highest excited multichannel vibrational molecular bound state $\phi_\mathrm{b}(B_i)$ at the magnetic field strength $B_i$, the state of a pair of atoms at any time $t$ is determined in terms of the time evolution operator in Eq. (\[SEU2B\]) by: $$\Psi(t)=U_\mathrm{2B}(t,t_i)\phi_\mathrm{b}(B_i).
\label{twobodystate}$$ The dissociation energy spectrum is usually measured in a time of flight experiment that allows the fragments to evolve freely after the final time $t_f$ of the ramp. At any time $t$ after $t_f$ the two-body Hamiltonian is stationary, and the time evolution operator can be factorised into the contribution of the linear ramp of the magnetic field strength and a part that describes the subsequent relative motion of a pair of atoms: $$U_\mathrm{2B}(t,t_i)=U_\mathrm{2B}(t-t_f)U_\mathrm{2B}(t_f,t_i).
\label{factorizationU2B}$$ Here $U_\mathrm{2B}(t-t_f)$ is determined in terms of the stationary two-body Hamiltonian $H_\mathrm{2B}(B_f)$ at the final magnetic field strength $B_f$ by: $$U_\mathrm{2B}(t-t_f)=e^{-iH_\mathrm{2B}(B_f)(t-t_f)/\hbar}.$$ We shall insert Eq. (\[factorizationU2B\]) into Eq. (\[twobodystate\]) and represent the time evolution operator of the relative motion of a pair of atoms after the time $t_f$ in terms of the complete set of multichannel energy states at the final magnetic field strength $B_f$. The state $\Psi(t)$ can then be decomposed as $$\Psi(t)=\Psi_\mathrm{free}(t)+\Psi_\mathrm{bound}(t),$$ where $\Psi_\mathrm{free}(t)$ describes the dissociation into two asymptotically free fragments, while $\Psi_\mathrm{bound}(t)$ describes those events in which the bound state $\phi_\mathrm{b}(B_i)$ is transferred into more tightly bound multichannel molecular vibrational states at the final magnetic field strength $B_f$. It is the continuum part $\Psi_\mathrm{free}(t)$ of the wave function $\Psi(t)$ that determines the measurable dissociation energy spectrum. In terms of the continuum multichannel energy states $\phi_\mathbf{p}(B_f)$ at the final magnetic field strength $B_f$ this is given by $$\begin{aligned}
\Psi_\mathrm{free}(t)= \int d\mathbf{p} \ \phi_\mathbf{p}(B_f)
e^{-iE(t-t_f)/\hbar} \langle\phi_\mathbf{p}(B_f)|
U_\mathrm{2B}(t_f,t_i)|\phi_\mathrm{b}(B_i)\rangle.
\label{Psifree}\end{aligned}$$ Here $E=p^2/m$ is the energy of the relative motion of the fragments that corresponds to their relative momentum $\mathbf{p}$. From Eq. (\[Psifree\]) we deduce the probability of detecting a pair of atoms with a relative energy between $E$ and $E+dE$ to be: $$n(E)dE=p^2dp\int d\Omega_\mathbf{p} \ \left|\langle\phi_\mathbf{p}(B_f)|
U_\mathrm{2B}(t_f,t_i)|\phi_\mathrm{b}(B_i)\rangle\right|^2.
\label{dissociationspectrumgeneral}$$ Here $d\Omega_\mathbf{p}$ denotes the angular component of $d\mathbf{p}$.
### Exact dissociation energy spectra
![Dissociation spectra of the highest excited vibrational bound state of $^{87}$Rb$_2$ as a function of the relative energy of the fragments. The speeds of the upward ramps across the 100 mT Feshbach resonance were varied between 0.1 mT/ms (dashed dotted curve) and 1 mT/ms (solid curve). The spectra are rather insensitive to the range of magnetic field strengths covered by the ramp; the solid curve indicates a 1 mT/ms ramp with $B_i=100.7$ mT and $B_f=100.78$ mT, while the data indicated by the dots on top of the solid curve correspond to a calculation for a 1 mT/ms ramp with initial and final magnetic field strengths that are half way closer to the resonance position of $B_0=100.74$ mT. We note that the energies are given on a logarithmic scale.[]{data-label="fig:dissociationspectra"}](spectra.eps){width="\columnwidth"}
Figure \[fig:dissociationspectra\] shows the spectral density $n(E)$ for different ramp speeds as obtained from an exact solution of the Schrödinger equation (\[SEU2B\]) with the low energy two channel Hamiltonian in the appendix. Although the low energy two-body Hamiltonian supports a comparatively tightly bound state at the high field side of the Feshbach resonance (see Fig. \[fig:EbofBtwochannel\]), the calculations do not indicate any transfer into this state for the realistic ramp speeds under consideration. Within this range of ramp speeds between 0.1 and 1 mT/ms, the spectra cover a broad range of energies of the relative motion of a pair of atoms that is much larger than the typical energy spread of a Bose-Einstein condensate. Consequently, the molecular fragments will be detected as a burst of correlated pairs of atoms moving radially outward from the original position of the molecular cloud.
### Dependence on the physical parameters of a Feshbach resonance
In the following, we shall study in more detail the dependencies of the dissociation spectra in Fig. \[fig:dissociationspectra\] on the five physical parameters of a Feshbach resonance (cf. Subsection \[subsec:energystates\]), which completely characterise the resonance enhanced low energy collision physics. To this end, we shall neglect the vibrational bound states of the background scattering potential and reformulate Eq. (\[dissociationspectrumgeneral\]) in terms of the energy states in the absence of interchannel coupling, in the formal asymptotic limits $t_i\to -\infty$ and $t_f\to \infty$. The idealising assumption that the background scattering potential does not support any vibrational bound states implies that the two channel Hamiltonian in Eq. (\[H2B2channel\]) supports just the initial vibrational bound state $\phi_\mathrm{b}(B_i)$ of Eq. (\[dissociationspectrumgeneral\]). In the formal limit $t_i\to-\infty$, i.e. when the linear ramp of the magnetic field strength starts asymptotically far from the resonance position, $\phi_\mathrm{b}(B_i)$ is then identical to the closed channel resonance state $\phi_\mathrm{res}$. Furthermore, in accordance with Eqs. (\[phipcl\]), (\[phipbg\]) and (\[amplitude\]), the final continuum state $\phi_\mathbf{p}(B_f)$ of Eq. (\[dissociationspectrumgeneral\]) is transferred into the energy state $\phi_\mathbf{p}^{(+)}$ of the background scattering \[cf. Eq. (\[BCphipplus\])\], in the formal limit $t_f\to\infty$. Under these assumptions, Eq. (\[dissociationspectrumgeneral\]) can be reformulated to be $$n(E)dE=p^2dp\int d\Omega_\mathbf{p} \
\left|
\langle\phi_\mathbf{p}^{(+)},\mathrm{bg}|
U_\mathrm{2B}(t_f,t_i)
|\phi_\mathrm{res},\mathrm{cl}\rangle
\right|^2.
\label{dissociationspectrumasymptotic}$$ Here $|\mathrm{bg}\rangle$ and $|\mathrm{cl}\rangle$ denote the internal states of an atom pair in the asymptotic open channel and the closed channel strongly coupled to it, respectively. The asymptotic dissociation spectrum in Eq. (\[dissociationspectrumasymptotic\]) can then be determined from the results of Ref. [@Demkov68] in analogy to the derivation of the Landau-Zener formulae of Subsection \[subsec:LandauZener\]. This yields the analytic result for the spectral density [@Mukaiyama03]: $$n(E)=-\frac{\partial}{\partial E}
\exp\left(-\frac{4}{3}\sqrt{\frac{mE}{\hbar^2}}\left|a_\mathrm{bg}\right|
\frac{E|\Delta B|}{\hbar\left|\frac{dB}{dt}(t_\mathrm{res})\right|}\right).
\label{finalspectraldensity}$$
We note that the asymptotic energy density of the dissociation spectrum in Eq. (\[finalspectraldensity\]) depends, like the Landau-Zener coefficient in Eq. (\[deltaLZ\]), only on those physical parameters that determine the universal low energy scattering properties in the close vicinity of a Feshbach resonance (cf. Subsection \[subsec:universal\]), while the slope $\left[\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})\right]$ of the resonance and the van der Waals dispersion coefficient $C_6$ do not contribute to Eq. (\[finalspectraldensity\]). This corroborates the observation in Fig. \[fig:transprob\] that the relevant physics occurs in a small region of magnetic field strengths in which the modulus of the scattering length far exceeds all the other length scales set by the binary interactions. In our applications to the 100 mT Feshbach resonance of $^{87}$Rb, the results obtained from Eq. (\[finalspectraldensity\]) are virtually indistinguishable from those of the exact calculations in Fig. \[fig:dissociationspectra\]. We note, however, that Eq. (\[finalspectraldensity\]) may not be applicable when the ramp of the magnetic field strength starts in the close vicinity of a Feshbach resonance. This situation may occur, for instance, in experimental applications involving the broad \[$(\Delta B)=1.1$ mT\] resonance of $^{85}$Rb at 15.5 mT [@Cornish00].
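Carrying out the derivative in Eq. (\[finalspectraldensity\]) gives $n(E)=\frac{3}{2}c\sqrt{E}\,e^{-cE^{3/2}}$ with $c=\frac{4}{3}\sqrt{m}\,|a_\mathrm{bg}||\Delta B|/\left(\hbar^2\left|\frac{dB}{dt}\right|\right)$, which integrates to unity as a spectral density must. The sketch below checks this normalisation numerically; the $^{87}$Rb parameter values are illustrative assumptions.

```python
import math

# Evaluation of the asymptotic dissociation spectral density obtained by
# differentiating Eq. (finalspectraldensity):
#   n(E) = (3/2) c sqrt(E) exp(-c E^(3/2)),
#   c = (4/3) sqrt(m) |a_bg| |Delta B| / (hbar^2 |dB/dt|).
# Parameter values are illustrative assumptions for 87Rb.
hbar = 1.054571817e-34
m = 86.909 * 1.66053907e-27
a_bg = 100.0 * 0.5291772e-10    # assumed background scattering length, m
delta_B = 2.0e-5                # assumed resonance width, T
ramp = 1.0                      # ramp speed: 1 mT/ms = 1 T/s

c = (4.0 / 3.0) * math.sqrt(m) * a_bg * delta_B / (hbar**2 * ramp)

def n(E):
    return 1.5 * c * math.sqrt(E) * math.exp(-c * E**1.5)

# trapezoidal check that n(E) integrates to unity; the spectrum decays
# on the energy scale c^(-2/3)
E_max = 20.0 * c ** (-2.0 / 3.0)
N_pts = 200000
dE = E_max / N_pts
total = sum(n(i * dE) for i in range(1, N_pts)) * dE + 0.5 * n(E_max) * dE
```

The characteristic energy $c^{-2/3}$ grows with the ramp speed as $\left|\frac{dB}{dt}\right|^{2/3}$, in line with the broadening of the spectra in Fig. \[fig:dissociationspectra\] for faster ramps.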
### Mean kinetic energy
In the following, we shall study the mean kinetic energies $$\langle E_\mathrm{kin}\rangle
=\frac{1}{2}\int_0^\infty dE \ E n(E)
\label{definitionmeankineticenergies}$$ of the molecular fragments after the dissociation. These energies characterise the speed of expansion of the gas of molecular fragments before the detection in related experiments [@Duerr03]. We note that the kinetic energy of a single atom is $E_\mathrm{kin}=p^2/(2m)$, which is half the energy of the relative motion of a pair. Equations (\[finalspectraldensity\]) and (\[definitionmeankineticenergies\]) allow us to represent $\langle E_\mathrm{kin}\rangle$ in terms of physical parameters of a Feshbach resonance, of the ramp speed and of Euler’s $\Gamma$ function: $$\langle E_\mathrm{kin}\rangle =\frac{1}{3}
\left[
\frac{3}{4}\sqrt{\frac{\hbar^2}{m\left(a_\mathrm{bg}\right)^2}}
\frac{\hbar\left|\frac{dB}{dt}(t_\mathrm{res})\right|}{|\Delta B|}
\right]^{2/3}\Gamma(2/3).
\label{asymptoticmeankineticenergies}$$ We expect this prediction to be accurate provided that the initial and final magnetic field strengths of the linear ramp are asymptotically far from the resonance position.
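Eq. (\[asymptoticmeankineticenergies\]) is simple to evaluate with `math.gamma`. The sketch below does so for two ramp speeds, using assumed values of $a_\mathrm{bg}$ and $(\Delta B)$, and exhibits the $\left|\frac{dB}{dt}\right|^{2/3}$ scaling of the fragment energies.

```python
import math

# Mean single particle kinetic energy of the fragments after
# dissociation, Eq. (asymptoticmeankineticenergies).  The background
# scattering length and resonance width are illustrative assumptions.
hbar = 1.054571817e-34
k_B = 1.3806503e-23
m = 86.909 * 1.66053907e-27
a_bg = 100.0 * 0.5291772e-10    # assumed background scattering length, m
delta_B = 2.0e-5                # assumed resonance width, T

def mean_E_kin(ramp):
    """Mean single particle kinetic energy for a ramp speed in T/s."""
    bracket = 0.75 * math.sqrt(hbar**2 / (m * a_bg**2)) \
        * hbar * ramp / delta_B
    return (1.0 / 3.0) * bracket ** (2.0 / 3.0) * math.gamma(2.0 / 3.0)

# faster ramps release more energetic fragments, E ~ |dB/dt|^(2/3)
E_slow = mean_E_kin(0.03)       # 0.03 mT/ms = 0.03 T/s
E_fast = mean_E_kin(1.0)        # 1 mT/ms = 1 T/s
```

With these assumed parameters the slow ramp yields mean energies of roughly $100$ nK in temperature units, consistent with the slow-ramp regime discussed in connection with Fig. \[fig:meandissociationenergies\].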
![Mean single particle kinetic energies of the molecular fragments associated with the exact dissociation spectra in Fig. \[fig:dissociationspectra\] (circles) as a function of the ramp speed. The solid curve has been obtained from the asymptotic prediction in Eq. (\[asymptoticmeankineticenergies\]). The single particle kinetic energy is half the energy of the relative motion of the fragments in Fig. \[fig:dissociationspectra\].[]{data-label="fig:meandissociationenergies"}](meanenergies.eps){width="\columnwidth"}
Figure \[fig:meandissociationenergies\] shows the strong dependence of the mean single particle kinetic energies on the ramp speed. The circles indicate the mean energies of the exact dissociation energy spectra in Fig. \[fig:dissociationspectra\], while the solid curve has been obtained from Eq. (\[asymptoticmeankineticenergies\]). Both approaches lead to virtually the same predictions for the realistic ramp speeds and physical parameters of the 100 mT Feshbach resonance of $^{87}$Rb. Figure \[fig:meandissociationenergies\] also reveals that ramp speeds of less than about 0.03 mT/ms would be required to suppress the mean single particle energies of the fragments to a range below $\langle E_\mathrm{kin}\rangle/k_\mathrm{B}=100 \ \mathrm{nK}$ ($k_\mathrm{B}=1.3806503\times 10^{-23}$ J/K), which would be close to the typical energy spread in a Bose-Einstein condensate [@Dalfovo99].
Adiabatic association of molecules in a trapped Bose-Einstein condensate {#sec:manybody}
========================================================================
The results in Subsection \[subsec:2Bapproach\] allow us to rigorously treat the two-body physics of the adiabatic association. In this section we shall study the many-body physics of the molecular production in a trapped dilute Bose-Einstein condensate. Inhomogeneous Bose-Einstein condensates are subject to a rich spectrum of collective energy modes that depend sensitively on the number of condensate atoms and their binary interactions, as well as on the confining atom trap. The crossing of a singularity of the scattering length leads to a dramatic change of the intermediate collision physics that transfers a substantial fraction of the initially coherent atomic cloud into strongly correlated pairs of atoms in the highest excited vibrational bound state. This violent dynamics can be expected to couple all energy modes. A proper description of the complex interplay between the macroscopic collective behaviour and the microscopic binary collision physics requires a full many-body treatment. We shall provide an appropriate many-body description of the adiabatic association of molecules in a trapped dilute Bose-Einstein condensate and compare it to previous theoretical predictions at different levels of approximation. This will highlight in what parameter regimes approximate descriptions are valid and to which physical observables they are applicable, as well as when we expect them to break down.
Microscopic quantum dynamics approach {#subsec:microscopicquantumdynamics}
-------------------------------------
The general many-body approach to the dynamics of atomic gases that we shall apply has been derived in Refs. [@KB02; @KGB03]. This approach has been applied previously to several different physical situations that involve the production of correlated pairs of atoms in four wave mixing experiments with Bose-Einstein condensates [@KB02], the determination of the mean field energy associated with three-body collisions [@TK02], and the dynamics of atom molecule coherence [@KGB03; @TKTGPSJKB03] as well as Feshbach resonance crossing experiments with degenerate Bose gases of $^{85}$Rb and $^{23}$Na atoms [@TKKG03]. The underlying method of cumulants [@KB02; @Fricke96] is based on the exact time dependent many-body Schrödinger equation. As the general technique has been derived before, we shall outline only those details specific to the adiabatic association of molecules in a Bose-Einstein condensate.
### Multichannel approach
We shall formulate the approach in terms of the full multichannel many-body Hamiltonian, which couples all the internal hyperfine states of the single atoms. We label the quantum numbers associated with the single atom energy states by Greek indices. In our applications to the low energy scattering in the vicinity of the $100$ mT Feshbach resonance of $^{87}$Rb, these indices are sufficiently characterised by the total angular momentum quantum number $F$ and its orientation quantum number $m_F$. In its second quantised form, the full many-body Hamiltonian then reads:
$$\begin{aligned}
H=\sum_\mu\int d\mathbf{x} \ \psi_\mu^\dagger(\mathbf{x})
H^\mathrm{1B}_\mu(B)\psi_\mu(\mathbf{x})
+\frac{1}{2}\sum_{\mu\nu\kappa\lambda}
\int d\mathbf{x}d\mathbf{y} \ \psi^\dagger_\mu(\mathbf{x})
\psi^\dagger_\nu(\mathbf{y})V_{\{\mu\nu\},\{\kappa\lambda\}}
(\mathbf{x}-\mathbf{y})
\psi_\kappa(\mathbf{x})\psi_\lambda(\mathbf{y}).
\label{MBHamiltonian}
\end{aligned}$$
Here the single particle annihilation and creation field operators satisfy the Bose commutation rules: $$\begin{aligned}
\left[\psi_\mu(\mathbf{x}),\psi_\nu^\dagger(\mathbf{y})\right]&=
\delta_{\mu\nu}\delta(\mathbf{x}-\mathbf{y}),\\
\left[\psi_\mu(\mathbf{x}),\psi_\nu(\mathbf{y})\right]&=0.\end{aligned}$$ Furthermore, $H_\mu^\mathrm{1B}(B)$ is the one body Hamiltonian associated with the internal atomic state $\mu$ that contains the kinetic energy of the atom, the external potential of the optical atom trap, and the internal hyperfine energy $E_\mu^\mathrm{hf}(B)$: $$H^\mathrm{1B}_\mu(B)=-\frac{\hbar^2}{2m}\nabla^2+V_\mathrm{trap}+
E_\mu^\mathrm{hf}(B).$$ The hyperfine energy depends on the magnetic field strength through the Zeeman effect. The microscopic binary potential $V_{\{\mu\nu\},\{\kappa\lambda\}}(\mathbf{r})$ in Eq. (\[MBHamiltonian\]) is associated with the asymptotic incoming and outgoing binary scattering channels that can be labelled by the pairs of internal atomic quantum numbers $\{\kappa\lambda\}$ and $\{\mu\nu\}$, respectively. We note that in this general formulation of the many-body Hamiltonian all potentials associated with the asymptotic binary scattering channels are chosen to vanish at infinite interatomic distances.
All physical properties of a gas of atoms are determined by correlation functions, i.e. quantum expectation values (denoted by $\langle\cdot\rangle_t$) of normal ordered products of field operators for the quantum state at time $t$. The correlation functions of main interest in the adiabatic association of molecules involve the atomic mean field $\langle\psi_\mu(\mathbf{x})\rangle_t$, the anomalous average $\langle\psi_\nu(\mathbf{y})\psi_\mu(\mathbf{x})\rangle_t$, and the one-body density matrix $\langle\psi_\nu^\dagger(\mathbf{y})\psi_\mu(\mathbf{x})\rangle_t$. The dynamics of the correlation functions is determined by the many-body Schrödinger equation through an infinite hierarchy of coupled dynamic equations. The cumulant approach consists in transforming the coupled set of dynamic equations for the correlation functions into an equivalent but more practical infinite set of dynamic equations for noncommutative cumulants [@KB02; @KGB03]. The transformed equations of motion for the cumulants can be truncated, at any desired degree of accuracy, in accordance with Wick’s theorem of statistical mechanics [@Fetter71]. The noncommutative cumulants that we shall consider in the following are the atomic mean field $$\Psi_\mu(\mathbf{x},t)=\langle\psi_\mu(\mathbf{x})\rangle_t,$$ the pair function $$\Phi_{\mu\nu}(\mathbf{x},\mathbf{y},t)=
\langle\psi_\nu(\mathbf{y})
\psi_\mu(\mathbf{x})\rangle_t-\Psi_\mu(\mathbf{x},t)\Psi_\nu(\mathbf{y},t),$$ and the density matrix of the noncondensed fraction $$\Gamma_{\mu\nu}(\mathbf{x},\mathbf{y},t)=
\langle\psi_\nu^\dagger(\mathbf{y})\psi_\mu(\mathbf{x})\rangle_t-
\Psi_\mu(\mathbf{x},t)\Psi_\nu^*(\mathbf{y},t).$$ In the first order cumulant approach [@KB02; @KGB03] that we shall apply in this paper the atomic mean field and the pair function are determined by the coupled dynamic equations:
$$\begin{aligned}
\label{meanfieldgeneral}
i\hbar\frac{\partial}{\partial t}\Psi_\mu(\mathbf{x},t)=&H_\mu^\mathrm{1B}(B)
\Psi_\mu(\mathbf{x},t)+
\sum_{\nu\kappa\lambda}\int d\mathbf{y} \ \Psi_\nu^*(\mathbf{y},t)
V_{\{\mu\nu\},\{\kappa\lambda\}}(\mathbf{x}-\mathbf{y})
\left[
\Phi_{\kappa\lambda}(\mathbf{x},\mathbf{y},t)+
\Psi_\kappa(\mathbf{x},t)\Psi_\lambda(\mathbf{y},t)
\right]\\
i\hbar\frac{\partial}{\partial t}\Phi_{\mu\nu}(\mathbf{x},\mathbf{y},t)
=&\sum_{\kappa\lambda}
\left[
H^\mathrm{2B}_{\{\mu\nu\},\{\kappa\lambda\}}(B)
\Phi_{\kappa\lambda}(\mathbf{x},\mathbf{y},t)+
V_{\{\mu\nu\},\{\kappa\lambda\}}(\mathbf{x}-\mathbf{y})
\Psi_\kappa(\mathbf{x},t)\Psi_\lambda(\mathbf{y},t)
\right].
\label{pairfunctiongeneral}\end{aligned}$$
Here $H^\mathrm{2B}(B)$ is the Hamiltonian matrix of two atoms whose matrix elements $$H_{\{\mu\nu\},\{\kappa\lambda\}}^\mathrm{2B}(B)=
[H_\mu^\mathrm{1B}(B)+H_\nu^\mathrm{1B}(B)]
\delta_{\kappa\mu}\delta_{\lambda\nu}+V_{\{\mu\nu\},\{\kappa\lambda\}}
\label{H2Bgeneral}$$ involve all incoming and outgoing asymptotic binary scattering channels associated with the pairs of atomic indices $\{\kappa\lambda\}$ and $\{\mu\nu\}$, respectively. Given the solution of Eqs. (\[meanfieldgeneral\]) and (\[pairfunctiongeneral\]), the density matrix of the noncondensed fraction is determined in terms of the pair function by [@KGB03] $$\Gamma_{\mu\nu}(\mathbf{x},\mathbf{x}',t)=\sum_\kappa\int d\mathbf{y} \
\Phi_{\mu\kappa}(\mathbf{x},\mathbf{y},t)
\Phi^*_{\nu\kappa}(\mathbf{x}',\mathbf{y},t).
\label{Gammageneral}$$ The general first order dynamic equations (\[meanfieldgeneral\]), (\[pairfunctiongeneral\]) and (\[Gammageneral\]) not only strictly conserve the total number of atoms $$N=\sum_{\mu}\int d\mathbf{x} \
\left[\Gamma_{\mu\mu}(\mathbf{x},\mathbf{x},t)+
|\Psi_\mu(\mathbf{x},t)|^2
\right]
\label{numberconservationgeneral}$$ at all times, but the explicit form of Eq. (\[Gammageneral\]) also ensures the crucial property of positivity of the one-body density matrix.
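The positivity guaranteed by Eq. (\[Gammageneral\]) can be checked directly on a discretised grid. The following minimal sketch, in which a random toy pair function stands in for an actual solution and all grid parameters are purely illustrative, contracts the pair function as in Eq. (\[Gammageneral\]) and verifies that the resulting one-body density matrix of the noncondensed fraction is Hermitian and positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretisation: 2 internal states (Greek indices), 8 grid points,
# volume element dy. All values are illustrative.
n_states, n_grid, dy = 2, 8, 0.5

# Random complex pair function Phi[mu, x, kappa, y] standing in for an actual
# solution of the coupled dynamic equations.
shape = (n_states, n_grid, n_states, n_grid)
Phi = rng.normal(size=shape) + 1j * rng.normal(size=shape)

# Discretised Eq. (Gammageneral):
#   Gamma_{mu nu}(x, x') = sum_kappa int dy Phi_{mu kappa}(x, y) Phi*_{nu kappa}(x', y)
Gamma = np.einsum('mxky,nzky->mxnz', Phi, Phi.conj()) * dy

# Over the combined index (mu, x) the contraction has the form A A^dagger,
# so Gamma is Hermitian and positive semidefinite by construction.
G = Gamma.reshape(n_states * n_grid, n_states * n_grid)
assert np.allclose(G, G.conj().T)
assert np.linalg.eigvalsh(G).min() >= -1e-12
```

Viewed as a matrix over the combined index $(\mu,\mathbf{x})$, the contraction is of the form $AA^\dagger$, which makes the positivity emphasised in the text manifest.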
Equation (\[Gammageneral\]) reveals that the density of the noncondensed fraction stems directly from the pair function. Consequently, the build-up of pair correlations is the main source of atom loss from a Bose-Einstein condensate. The energy delivered by a time dependent homogeneous magnetic field can transfer a pair of condensate atoms either into the highest excited vibrational molecular bound state or into the quasi-continuum of two-body energy states above the dissociation threshold. Not only the molecular association, but also the production of correlated pairs of atoms in the scattering continuum has been observed recently [@Donley02] as a burst of atoms ejected from the remnant condensate. As a time dependent homogeneous magnetic field delivers energy but no momentum to the gas, the centres of mass of all correlated bound and free pairs of atoms have the same momentum distribution as the initial Bose-Einstein condensate. In this sense, the molecules produced in the adiabatic association may be considered as a degenerate quantum gas. The number $N_\mathrm{b}$ of diatomic molecules in the state $\phi_\mathrm{b}$ is determined by counting the overlap of each pair of atoms with the multichannel molecular bound state [@Dollard73]. This relates the number of molecules to the two-body correlation function $$G^{(2)}_{\mu\nu\kappa\lambda}
(\mathbf{x},\mathbf{y};\mathbf{x}',\mathbf{y}')=
\langle\psi^\dagger_\kappa(\mathbf{x}')\psi^\dagger_\lambda(\mathbf{y}')
\psi_\nu(\mathbf{y})\psi_\mu(\mathbf{x})\rangle$$ by [@KGB03]:
$$N_\mathrm{b}=\frac{1}{2}\sum_{\mu\nu\kappa\lambda}
\int d\mathbf{r} d\mathbf{r}' d\mathbf{R}
\left[\phi_\mathrm{b}^{\{\mu\nu\}}(\mathbf{r})\right]^*
G^{(2)}_{\mu\nu\kappa\lambda}
\left(\mathbf{R}+\frac{\mathbf{r}}{2},\mathbf{R}-
\frac{\mathbf{r}}{2};\mathbf{R}+\frac{\mathbf{r}'}{2},
\mathbf{R}-\frac{\mathbf{r}'}{2}\right)
\phi_\mathrm{b}^{\{\kappa\lambda\}}(\mathbf{r}').
\label{Nbgeneral}$$
Here the spatial integration variables can be interpreted in terms of two-body centre of mass and relative coordinates $\mathbf{R}=(\mathbf{x}+\mathbf{y})/2$ and $\mathbf{r}=\mathbf{x}-\mathbf{y}$, respectively. The number of correlated pairs of atoms in the scattering continuum can be deduced from the two-body correlation function in a similar way [@KGB03] with the multichannel bound state wave function replaced by the continuum states. We note that Eq. (\[Nbgeneral\]) neither assumes any particular class of many-body states nor any approximation to the many-body Schrödinger equation. By expanding the two-body correlation function in Eq. (\[Nbgeneral\]) into cumulants and truncating the expansion in accordance with the first order cumulant approach, the density of bound molecules can be represented in terms of a molecular mean field [@KGB03]:
$$\Psi_\mathrm{b}(\mathbf{R},t)=\frac{1}{\sqrt{2}}\sum_{\mu\nu}\int
d\mathbf{r}
\left[\phi_\mathrm{b}^{\{\mu\nu\}}(\mathbf{r})\right]^*
\left[\Phi_{\mu\nu}(\mathbf{R},\mathbf{r},t)+
\Psi_\mu\left(\mathbf{R}+\frac{\mathbf{r}}{2},t\right)
\Psi_\nu\left(\mathbf{R}-\frac{\mathbf{r}}{2},t\right)\right].
\label{psibgeneral}$$
Here we have introduced the centre of mass and relative coordinates $\mathbf{R}$ and $\mathbf{r}$ and represented the pair function in terms of these variables. The molecular mean field determines the density of diatomic molecules in the state $\phi_\mathrm{b}$ by $n_\mathrm{b}(\mathbf{R},t)=|\Psi_\mathrm{b}(\mathbf{R},t)|^2$. The molecular mean field as well as the fraction of pairs of correlated atoms in the scattering continuum are determined completely by the solution of the coupled equations (\[meanfieldgeneral\]) and (\[pairfunctiongeneral\]).
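As a minimal numerical sketch of Eq. (\[psibgeneral\]), the overlap integral can be evaluated on a one-dimensional grid. All wave functions below (a single-channel exponential bound state, a Gaussian condensate and a Gaussian model pair function) are purely illustrative stand-ins for actual solutions of the coupled dynamic equations:

```python
import numpy as np

# 1D toy grids for the centre of mass (R) and relative (r) coordinates.
R = np.linspace(-5.0, 5.0, 101)
r = np.linspace(-5.0, 5.0, 201)
dR, dr = R[1] - R[0], r[1] - r[0]

# Illustrative normalised bound-state wave function of the relative motion.
phi_b = np.exp(-np.abs(r) / 0.7)
phi_b /= np.sqrt(np.sum(phi_b**2) * dr)

# Model atomic mean field and pair function (stand-ins for actual solutions).
Psi = np.exp(-R**2 / 2.0)
Phi = 0.1 * np.outer(np.exp(-R**2), np.exp(-r**2))   # Phi(R, r)

# Psi evaluated at R +- r/2 on the (R, r) grid.
Psi_p = np.interp(R[:, None] + r[None, :] / 2.0, R, Psi)
Psi_m = np.interp(R[:, None] - r[None, :] / 2.0, R, Psi)

# Eq. (psibgeneral): molecular mean field as the bound-state overlap.
Psi_b = np.sum(phi_b[None, :] * (Phi + Psi_p * Psi_m), axis=1) * dr / np.sqrt(2.0)
n_b = np.abs(Psi_b)**2            # molecular density n_b(R)
N_b = np.sum(n_b) * dR            # number of molecules
```

With these symmetric inputs the molecular density is peaked at the trap centre, mirroring the behaviour of the condensate density itself.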
### Two channel approach
The general form of the two-body Hamiltonian in Eq. (\[H2Bgeneral\]) with a realistic potential matrix allows us to describe the binary collision physics over a wide range of energies and magnetic field strengths, as indicated in Fig. \[fig:Eboverview\]. As the present applications involve only the adiabatic association of molecules in the vicinity of the 100 mT Feshbach resonance of $^{87}$Rb, we can restrict the asymptotic binary scattering channels to those that we have identified in Section \[sec:twobody\]. We shall thus insert the two channel description of Section \[sec:twobody\] into Eqs. (\[meanfieldgeneral\]) and (\[pairfunctiongeneral\]) and perform the pole approximation of Subsection \[subsec:energystates\]. The relevant potentials then involve the background scattering potential $V_\mathrm{bg}(r)$ and the off diagonal matrix element $W(r)$ between the open channel and the closed channel strongly coupled to it. In accordance with the pole approximation, the only configuration of the atoms in this closed channel is the diatomic resonance state $\phi_\mathrm{res}(r)$. Consequently, the atomic mean field is restricted to its component in the $(F=1,m_F=+1)$ state, which we shall denote simply by $\Psi(\mathbf{x},t)$. According to Eq. (\[meanfieldgeneral\]), the associated mean field dynamic equation is then given by
$$\begin{aligned}
i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{x},t)=
\left[
-\frac{\hbar^2}{2m}\nabla^2+V_\mathrm{trap}(\mathbf{x})
\right]\Psi(\mathbf{x},t)
+\int d\mathbf{y} \ \Psi^*(\mathbf{y},t)
\left(
W(|\mathbf{x}-\mathbf{y}|)
\Phi_\mathrm{cl}(\mathbf{x},\mathbf{y},t)
+V_\mathrm{bg}(|\mathbf{x}-\mathbf{y}|)
\left[
\Phi_\mathrm{bg}(\mathbf{x},\mathbf{y},t)+
\Psi(\mathbf{x},t)\Psi(\mathbf{y},t)
\right]
\right).
\label{Psi2ch}\end{aligned}$$
The pair function has a component in the open channel and in the closed channel strongly coupled to it. In accordance with Eq. (\[pairfunctiongeneral\]), their coupled dynamic equations read:
$$\begin{aligned}
i\hbar\frac{\partial}{\partial t}
\left(
\begin{array}{c}
\Phi_\mathrm{bg}(\mathbf{x},\mathbf{y},t)\\
\Phi_\mathrm{cl}(\mathbf{x},\mathbf{y},t)
\end{array}
\right)
=H_\mathrm{trap}^\mathrm{2B}(t)
\left(
\begin{array}{c}
\Phi_\mathrm{bg}(\mathbf{x},\mathbf{y},t)\\
\Phi_\mathrm{cl}(\mathbf{x},\mathbf{y},t)
\end{array}
\right)+
\left(
\begin{array}{c}
V_\mathrm{bg}(|\mathbf{x}-\mathbf{y}|)\\
W(|\mathbf{x}-\mathbf{y}|)
\end{array}
\right)
\Psi(\mathbf{x},t)
\Psi(\mathbf{y},t).
\label{Phi2ch}
\end{aligned}$$
Here $H_\mathrm{trap}^\mathrm{2B}(t)$ is the general two channel two-body Hamiltonian \[cf. Eq. (\[H2B2channel\])\] that includes the centre of mass kinetic energy of a pair of atoms as well as the confining harmonic potential of the atom trap. In the two channel formulation of the first order microscopic quantum dynamics approach, Eqs. (\[Psi2ch\]) and (\[Phi2ch\]) completely determine the dynamics of the coherent atomic condensate and the fraction of correlated pairs of atoms.
For numerical convenience, we shall solve the inhomogeneous linear Schrödinger equation (\[Phi2ch\]) formally in terms of the complete two-body time evolution operator $U_\mathrm{trap}^\mathrm{2B}(t,\tau)$ \[cf. Eq. (\[SEU2B\])\] that accounts for the centre of mass motion of the atoms and for the confining trap potential. We shall then insert the solution into the mean field equation (\[Psi2ch\]) to eliminate the pair function. We shall assume that at the initial time $t_i$ at the start of the ramp the gas is well described by a dilute zero temperature Bose-Einstein condensate, so that initial two-body correlations are negligible. The resulting dynamic equation for the atomic mean field can then be expressed in terms of the time dependent two-body transition matrix
$$\begin{aligned}
T_\mathrm{trap}^\mathrm{2B}(t,\tau)
=\langle\mathrm{bg}|
\left[
V(t)\delta(t-\tau)
+\frac{1}{i\hbar}\theta(t-\tau)
V(t)U_\mathrm{trap}^\mathrm{2B}(t,\tau)V(\tau)
\right]
|\mathrm{bg}\rangle,
\label{T2Btrap}\end{aligned}$$
which involves the potential matrix $$V(t)=
\left(
\begin{array}{cc}
V_\mathrm{bg} & W\\
W & V_\mathrm{cl}(B(t))
\end{array}
\right)$$ and the two-body time evolution operator $U_\mathrm{trap}^\mathrm{2B}(t,\tau)$. Here $|\mathrm{bg}\rangle$ denotes the internal state of a pair of atoms in the asymptotic open scattering channel, and $\theta(t-\tau)$ is the step function, which equals unity for $t>\tau$ and vanishes otherwise. A short calculation applying the methods derived in Refs. [@KB02; @KGB03] then shows that the elimination of the pair function in Eq. (\[Psi2ch\]) leads to a closed dynamic equation for the atomic mean field. Expressed in terms of the two-body transition matrix in Eq. (\[T2Btrap\]), this is given by:
$$\begin{aligned}
i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{x},t)=
\left[
-\frac{\hbar^2}{2m}\nabla^2+V_\mathrm{trap}(\mathbf{x})
\right]\Psi(\mathbf{x},t)
+\int_{t_i}^\infty d\tau\int d\mathbf{y} d\mathbf{x}' d\mathbf{y}' \
\Psi^*(\mathbf{y},t)
T_\mathrm{trap}^\mathrm{2B}(\mathbf{x},\mathbf{y},t;
\mathbf{x}',\mathbf{y}',\tau)
\Psi(\mathbf{x}',\tau)
\Psi(\mathbf{y}',\tau).
\label{NLSgeneral}\end{aligned}$$
In the case of a harmonic trap potential the two-body transition matrix in Eq. (\[T2Btrap\]) factorises into a centre of mass part and a contribution from the relative motion of an atom pair. In the following, we shall assume the confinement of the atom trap to be sufficiently weak and the ramp speeds to be sufficiently high that the time spent within the width of the resonance is much smaller than the trap periods in all spatial directions. The centre of mass motion of a pair of atoms then becomes negligible on this time scale. The explicit form of Eq. (\[T2Btrap\]) also reveals that the time evolution operator of the relative motion of two atoms is evaluated only within the spatial range of the binary interaction potentials, where it is hardly modified by the presence of the atom trap. Consequently, the trap potential can also be neglected in the two-body time evolution operator of the relative motion of an atom pair. Furthermore, as the atomic mean field is slowly varying on the length scales set by the binary interaction potentials, Eq. (\[NLSgeneral\]) becomes local in the spatial variables and acquires the form [@KB02; @KGB03]: $$\begin{aligned}
\nonumber
i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{x},t)=&
\left[-\frac{\hbar^2}{2m}\nabla^2+V_\mathrm{trap}(\mathbf{x})\right]
\Psi(\mathbf{x},t)\\
&-\Psi^*(\mathbf{x},t)\int_{t_i}^\infty d\tau \
\Psi^2(\mathbf{x},\tau)\frac{\partial}{\partial \tau}h(t,\tau).
\label{NLSElocal}\end{aligned}$$ Here we have introduced the coupling function $$\begin{aligned}
h(t,\tau)=\theta(t-\tau)(2\pi\hbar)^3\langle0,\mathrm{bg}|V(t)
U_\mathrm{2B}(t,\tau)|0,\mathrm{bg}\rangle,
\label{h2B}\end{aligned}$$ which depends on the time evolution operator $U_\mathrm{2B}(t,\tau)$ associated with the relative motion of an atom pair in free space, while $|0,\mathrm{bg}\rangle$ denotes the zero momentum plane wave state of the relative motion of two atoms in the asymptotic open scattering channel.
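For a spatially homogeneous condensate (no trap and no kinetic term), Eq. (\[NLSElocal\]) reduces to a single non-Markovian integro-differential equation for the amplitude $\Psi(t)$. The sketch below integrates it with an explicit Euler scheme, replacing the microscopic memory kernel $\partial h/\partial\tau$ of Eq. (\[h2B\]) by a model exponential kernel; the coupling strength, memory time and step size are illustrative assumptions:

```python
import numpy as np

hbar = 1.0
g, t_c = 0.2, 0.2            # model coupling strength and memory time (assumptions)
dt, n_steps = 1e-3, 1000
t = np.arange(n_steps) * dt

# Model exponential memory kernel standing in for d h(t, tau)/d tau; the
# microscopic h(t, tau) of Eq. (h2B) involves the two-body evolution operator.
def kernel(s):
    return (g / t_c) * np.exp(-s / t_c)

psi = np.zeros(n_steps, dtype=complex)
psi[0] = 1.0                 # homogeneous condensate amplitude, |psi|^2 = density

# Explicit Euler integration of
#   i hbar d(psi)/dt = -psi*(t) int_0^t dtau psi^2(tau) K(t - tau)
for n in range(n_steps - 1):
    mem = np.sum(psi[: n + 1]**2 * kernel(t[n] - t[: n + 1])) * dt
    psi[n + 1] = psi[n] + dt * (1j / hbar) * np.conj(psi[n]) * mem

# The memory term makes |psi|^2 time dependent: amplitude is exchanged with
# the pair function rather than conserved as in the Gross-Pitaevskii limit.
density = np.abs(psi)**2
```

With the microscopic kernel the amplitude removed from the mean field reappears in the pair function; the exponential kernel here only illustrates the non-Markovian structure of the equation.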
### Atomic condensate fraction
![The time evolution of the condensate fraction ($N_\mathrm{c}(t)=\int d\mathbf{x} \ |\Psi(\mathbf{x},t)|^2$) for a linear downward ramp of the magnetic field strength across the 100 mT Feshbach resonance of $^{87}$Rb. The solid curve shows a calculation for a weakly confining spherical atom trap with the frequency $\nu_\mathrm{ho}=10$ Hz, while the dashed curve corresponds to a comparatively strongly confining ($\nu_\mathrm{ho}=100$ Hz) spherical atom trap. The ramp speed in both calculations is 0.1 mT/ms and the initial number of condensate atoms is $N=50000$. The time at which the magnetic field strength crosses the resonance position is $t=500 \ \mu$s.[]{data-label="fig:87RbNcoft"}](87RbNcoft.eps){width="\columnwidth"}
As we have shown in Subsections \[subsec:universal\], \[subsec:2Bapproach\] and \[subsec:LandauZener\], the two-body time evolution of the main physical observables of interest in the adiabatic association of molecules is largely independent of the details of the implementation of the two-body Hamiltonian. The main requirement is that the two-body Hamiltonian properly accounts for the magnetic field dependence of the scattering length and the near resonant binding energy of the highest excited vibrational bound state. The differences between a single and a two channel treatment (cf. Subsection \[subsec:2Bapproach\]) in the dynamics of the atomic mean field in Eq. (\[NLSElocal\]) are marginal (cf., also, Ref. [@TKKG03]). The following calculations have, therefore, been performed with a single channel two-body Hamiltonian as introduced in Ref. [@KGB03] and described in the appendix. In the course of our studies, we have solved Eq. (\[NLSElocal\]) for a variety of linear ramps of the magnetic field strength across the 100 mT Feshbach resonance of $^{87}$Rb in spherically as well as cylindrically symmetric harmonic atom traps. Figure \[fig:87RbNcoft\] shows the time evolution of the number of condensate atoms $N_\mathrm{c}(t)=\int d\mathbf{x} \ |\Psi(\mathbf{x},t)|^2$ for linear downward ramps with a ramp speed of 0.1 mT/ms in spherically symmetric atom traps with the frequencies $\nu_\mathrm{ho}=100$ Hz and $\nu_\mathrm{ho}=10$ Hz. The atomic mean field at the initial time $t_i=0$ has been chosen in each calculation as the ground state of the Gross-Pitaevskii equation that corresponds to a dilute zero temperature Bose-Einstein condensate with $N=50000$ atoms and a scattering length of $a_\mathrm{bg}=100 \ a_\mathrm{Bohr}$. Figure \[fig:87RbNcoft\] reveals that the loss of condensate atoms mainly occurs during the passage across the Feshbach resonance. 
In accordance with the higher local density, the losses of condensate atoms are more pronounced in the comparatively strongly confining 100 Hz atom trap. The pronounced oscillations immediately after the passage across the resonance ($t>500 \ \mu$s) indicate a rapid exchange between the condensed and the noncondensed phases of the gas. Exchange on these short time scales suggests that the crossing of a Feshbach resonance drives the gas into a highly excited nonequilibrium state.
![Snapshots of the remnant condensate density after the crossing of the 100 mT Feshbach resonance of $^{87}$Rb in spherically symmetric atom traps with the frequencies $\nu_\mathrm{ho}=100$ Hz (a) and $\nu_\mathrm{ho}=10$ Hz (b). The densities correspond to the calculations in Fig. \[fig:87RbNcoft\] at the final time of the ramps.[]{data-label="fig:87Rbncfin1Gperms"}](87Rbncfin1Gperms.eps){width="\columnwidth"}
Figure \[fig:87Rbncfin1Gperms\] shows snapshots of the condensate density $n_\mathrm{c}(\mathbf{x},t)=|\Psi(\mathbf{x},t)|^2$ as a function of the radial coordinate $r=|\mathbf{x}|$ for the ramps and trap parameters in Fig. \[fig:87RbNcoft\] at the final time $t_f=580 \ \mu$s. The pronounced spatial variations of the remnant condensate density indicate the simultaneous occupation of many collective energy modes. We note that these highly occupied excited modes are described by a coherent classical mean field and are therefore distinguished from the pairs of correlated atoms in the initially unoccupied scattering continuum. In accordance with Eq. (\[Gammageneral\]), correlated pairs of atoms, as described by the pair function $\Phi(t)$, constitute the density of the noncondensed fraction.
### Noncondensed fraction
We shall study in the following how the final noncondensed fraction of the gas is distributed among the bound and free energy states of an atom pair. We shall focus on linear downward ramps of the magnetic field strength across the 100 mT Feshbach resonance of $^{87}$Rb. Given the solution of the dynamic equation (\[NLSElocal\]) for the atomic mean field, the pair function $\Phi(t)$ is determined completely by the inhomogeneous linear two-body Schrödinger equation (\[Phi2ch\]). We shall assume that the trap is switched off immediately after the ramp so that the gas expands freely in space. Inserting the complete set of two-body energy states (cf. Subsection \[subsec:energystates\]) at the final magnetic field strength $B_f$ into Eq. (\[Gammageneral\]), and integrating with respect to the centre of mass coordinate $\mathbf{R}=(\mathbf{x}+\mathbf{y})/2$ yields the number of noncondensate atoms [@KGB03]: $$\begin{aligned}
\nonumber
N_\mathrm{nc}=&\int d\mathbf{R}\int d\mathbf{p} \
\left|\langle\mathbf{R},\phi_\mathbf{p}(B_f)|\Phi(t_f)\rangle
\right|^2\\
&+\int d\mathbf{R} \
\left|\langle\mathbf{R},\phi_\mathrm{b}(B_f)|\Phi(t_f)\rangle
\right|^2.
\label{Nnc}\end{aligned}$$ Here we have presupposed that only the energetically accessible highest excited vibrational multichannel bound state $\phi_\mathrm{b}(B_f)$ on the low field side of the Feshbach resonance (cf. Subsection \[subsec:universal\]) will be populated in a downward ramp. We shall assume furthermore that the final magnetic field strength is sufficiently far from the Feshbach resonance that the gas is weakly interacting; i.e. $n_\mathrm{peak}[a(B_f)]^3\ll 1$, where $n_\mathrm{peak}$ is the peak density of the gas, and $a(B_f)$ is the final scattering length. The second, factorised contribution to the molecular mean field on the right hand side of Eq. (\[psibgeneral\]) can then be neglected [@KGB03]. Consequently, the bound state contribution to the noncondensed density on the right hand side of Eq. (\[Nnc\]) determines the number of atoms associated to molecules. The contribution of the continuum part of the two-body energy spectrum yields the burst fraction, composed of correlated atoms with a comparatively high relative momentum $|\mathbf{p}|$ in initially unoccupied modes [@KGB03]. According to Eqs. (\[numberconservationgeneral\]) and (\[Nnc\]), the number of condensate atoms, atoms associated to molecules and burst atoms then add up to the total number of atoms. We note that Eqs. (\[numberconservationgeneral\]) and (\[Nnc\]) are also applicable in the small region of magnetic field strengths in the close vicinity of the Feshbach resonance, in which the gas becomes strongly interacting. The dilute gas parameter $n_\mathrm{peak}[a(B)]^3$ is then on the order of unity or larger. Under these conditions, however, a separation of the gas into bound and free atoms is physically meaningless [@TKTGPSJKB03; @Molecularfraction].
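The decomposition in Eq. (\[Nnc\]) can be illustrated in a single-channel toy model: by completeness of the two-body energy states, the continuum (burst) part of the pair-function norm is the total norm minus the bound-state projection. The wave-function shapes below are illustrative only:

```python
import numpy as np

r = np.linspace(0.0, 20.0, 2001)
dr = r[1] - r[0]

# Illustrative normalised bound state and final pair function of the relative motion.
phi_b = np.exp(-r)
phi_b /= np.sqrt(np.sum(phi_b**2) * dr)
Phi = np.exp(-r / 2.5) * np.exp(1j * 0.3 * r)   # toy pair function Phi(r, t_f)

norm_total = np.sum(np.abs(Phi)**2) * dr        # total noncondensed norm
n_bound = np.abs(np.sum(phi_b * Phi) * dr)**2   # bound (molecular) part
n_burst = norm_total - n_bound                  # continuum (burst) part by completeness
```

The Cauchy-Schwarz inequality guarantees that the burst part obtained in this way is never negative, so the split into molecules and burst atoms is well defined whenever the final state is weakly interacting.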
![Density of $^{87}$Rb$_2$ molecules in the highest excited vibrational bound state after the crossing of the 100 mT Feshbach resonance in spherically symmetric atom traps with the frequencies $\nu_\mathrm{ho}=100$ Hz (a) and $\nu_\mathrm{ho}=10$ Hz (b). The molecular densities correspond to the calculations in Fig. \[fig:87RbNcoft\] at the final time of the ramps.[]{data-label="fig:87Rbnbfin1Gperms"}](87Rbnbfin1Gperms.eps){width="\columnwidth"}
Figure \[fig:87Rbnbfin1Gperms\] shows the densities of $^{87}$Rb$_2$ molecules in the highest excited vibrational molecular bound state at the final magnetic field strength of the linear downward ramps across the 100 mT Feshbach resonance, which are described in Fig. \[fig:87RbNcoft\]. The densities have been obtained from the molecular mean field $\Psi_\mathrm{b}(\mathbf{R},t_f)$ in Eq. (\[psibgeneral\]) through $n_\mathrm{b}(\mathbf{R},t_f)=|\Psi_\mathrm{b}(\mathbf{R},t_f)|^2$ with the bound state wave function $\phi_\mathrm{b}(B_f)$ at the final magnetic field strength $B_f$ of the ramps. The spatial extent of the molecular clouds roughly corresponds to the size of the remnant condensate densities in Fig. \[fig:87Rbncfin1Gperms\]. In accordance with the higher local densities, the density of molecules in the tight 100 Hz atom trap is higher than in the 10 Hz trap.
![The relative loss of condensate atoms $[N_\mathrm{c}(t_i)-N_\mathrm{c}(t_f)]/N_\mathrm{c}(t_i)$ in linear upward ramps (circles) as well as downward ramps (squares) of the magnetic field strength across the 100 mT Feshbach resonance of $^{87}$Rb as a function of the inverse ramp speed. The upward ramps transfer the condensate atoms into correlated pairs of atoms in initially unoccupied excited states, while the downward ramps adiabatically associate a substantial fraction of the condensate to bound molecules in the highest excited vibrational state. The diamonds indicate the relative number $2N_\mathrm{b}(t_f)/N_\mathrm{c}(t_i)$ of atoms associated to molecules in the downward ramps. The calculations were performed for a cylindrical atom trap with the axial (radial) frequencies of $\nu_\mathrm{axial}=116$ Hz ($\nu_\mathrm{radial}=75$ Hz) and $N=140000$ atoms. The small deviations in the calculated loss data of the remnant atomic condensate (circles) from an entirely smooth curve are due to rapid oscillations in the atomic condensate fraction after the passage of the Feshbach resonance (see Fig. \[fig:87RbNcoft\]), which have not entirely decayed at the final times that we have chosen for the different inverse ramp speeds.[]{data-label="fig:ox-ramps"}](ox-ramps.eps){width="\columnwidth"}
Figure \[fig:ox-ramps\] shows systematic studies of the dependence of the molecular production, after downward ramps of the magnetic field strength across the 100 mT Feshbach resonance of $^{87}$Rb, on the inverse ramp speed. The calculations have been performed on the basis of the dynamic equation (\[NLSElocal\]) for the atomic mean field. In these calculations we have chosen the potential of a comparatively tight cylindrically symmetric optical atom trap with axial and radial frequencies (see Fig. \[fig:ox-ramps\]) that resemble those in current experiments on the production of ultracold $^{87}$Rb$_2$ molecules in Oxford [@Boyer03]. The initial state has been chosen as the ground state of the Gross-Pitaevskii equation with $N_\mathrm{c}(t_i)=140000$ atoms and a scattering length of $a_\mathrm{bg}=100 \ a_\mathrm{Bohr}$. The squares in Fig. \[fig:ox-ramps\] indicate the relative loss of condensate atoms $[N_\mathrm{c}(t_i)-N_\mathrm{c}(t_f)]/N_\mathrm{c}(t_i)$ in the downward ramps, where $N_\mathrm{c}(t)=\int d\mathbf{x} \ |\Psi(\mathbf{x},t)|^2$ is the number of condensate atoms at time $t$. As expected from the two-body considerations in Subsection \[subsec:LandauZener\], the loss curve is a monotonic function of the inverse ramp speed and saturates when the remnant condensate is completely depleted. The number of atoms associated to diatomic molecules in the highest excited vibrational bound state $\phi_\mathrm{b}(B_f)$ is given by $2N_\mathrm{b}(t_f)=2\int d\mathbf{R} \ |\Psi_\mathrm{b}(\mathbf{R},t_f)|^2$, where $\Psi_\mathrm{b}(\mathbf{R},t_f)$ is the molecular mean field in Eq. (\[psibgeneral\]). The dependence of the relative molecular association efficiency $2N_\mathrm{b}(t_f)/N_\mathrm{c}(t_i)$ on the inverse ramp speed in the downward ramps is indicated by the diamonds in Fig. \[fig:ox-ramps\] and follows the same monotonic trend as the loss curve of condensate atoms in the downward ramps (squares). 
The quantitative agreement between both curves shows that the missing fraction of condensate atoms is transferred into diatomic molecules in the highest excited multichannel vibrational bound state. The excitation of atom pairs in continuum energy modes is suppressed. The rapid loss of condensate atoms, however, leads to an overall heating of the atomic cloud due to the excitation of collective energy modes (see Fig. \[fig:87Rbncfin1Gperms\]).
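The monotonic rise and saturation of the loss and association curves are often summarised by a Landau-Zener-type parametrisation, $p=1-\exp(-\alpha/\dot B)$, with a density-dependent constant $\alpha$. The value of $\alpha$ in the sketch below is purely illustrative; the microscopic value follows from the two-body considerations of Subsection \[subsec:LandauZener\]:

```python
import numpy as np

# Landau-Zener-type parametrisation of the condensate loss versus ramp speed:
#   p(B_dot) = 1 - exp(-alpha / B_dot)
alpha = 0.05                                      # mT/ms, illustrative constant
inverse_ramp_speed = np.linspace(1.0, 100.0, 50)  # ms/mT
p = 1.0 - np.exp(-alpha * inverse_ramp_speed)

# Monotonic rise and saturation towards complete depletion of the condensate.
assert np.all(np.diff(p) > 0)
assert 0.99 < p[-1] < 1.0
```

This captures the qualitative shape of the curves: slow ramps deplete the condensate completely, while fast ramps leave it nearly untouched.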
The circles in Fig. \[fig:ox-ramps\] indicate predictions of the total loss of condensate atoms in upward ramps of the magnetic field strength across the 100 mT Feshbach resonance of $^{87}$Rb. The initial state of the gas as well as the confining atom trap in these calculations are the same as in the downward ramps. The final magnetic field strength $B_f$, after the passage across the resonance in an upward ramp, is on the high field side of the resonance position. Figure \[fig:EbofBtwochannel\] reveals that there is no energetically accessible bound state at the magnetic field strength $B_f$. Consequently, atoms lost from the condensate are entirely transferred into the burst fraction. Figure \[fig:ox-ramps\] shows that despite an overall agreement in the monotonic behaviour of the condensate loss with increasing inverse ramp speeds, there is a small difference in the saturation region between the curves for the different ramp directions. This small deviation between the curves would be absent in the two-body description of Section \[sec:twobody\] and may indicate phenomena related to the coherent nature of the gas.
Two level mean field approach {#subsec:twolevel}
-----------------------------
One of the most common approaches to the association of molecules in dilute Bose-Einstein condensates is based on a model many-body Hamiltonian that describes atoms and molecules in terms of separate quantum fields. The coupling between these quantum fields leads to an exchange between the different species. This exchange then serves as a model for the molecular production.
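Before relating the model to the microscopic approach, it is instructive to integrate the generic homogeneous two-mode mean-field equations numerically. The conventions below (coupling $g$, linearly ramped detuning $\epsilon(t)$ and their numerical values) are illustrative assumptions rather than the microscopically determined parameters; the sketch verifies the characteristic conservation law $|\psi_\mathrm{a}|^2+2|\psi_\mathrm{m}|^2=\mathrm{const}$ of such models:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative two-mode model: atomic mean field psi_a and molecular mean
# field psi_m, coupled by g, with a linearly ramped detuning eps(t).
g = 1.0

def eps(t):
    return -5.0 + 2.0 * t      # sweeps through resonance (eps = 0) at t = 2.5

def rhs(t, y):
    psi_a, psi_m = y[0] + 1j * y[1], y[2] + 1j * y[3]
    # i d(psi_a)/dt = g psi_a* psi_m;  i d(psi_m)/dt = eps psi_m + (g/2) psi_a^2
    da = -1j * g * np.conj(psi_a) * psi_m
    dm = -1j * (eps(t) * psi_m + 0.5 * g * psi_a**2)
    return [da.real, da.imag, dm.real, dm.imag]

y0 = [1.0, 0.0, 0.0, 0.0]      # all atoms condensed, no molecules
sol = solve_ivp(rhs, (0.0, 5.0), y0, rtol=1e-10, atol=1e-12)

psi_a = sol.y[0] + 1j * sol.y[1]
psi_m = sol.y[2] + 1j * sol.y[3]

# The characteristic conservation law of the two-mode model: each molecule
# accounts for two atoms.
n_total = np.abs(psi_a)**2 + 2.0 * np.abs(psi_m)**2
assert np.allclose(n_total, n_total[0])
```

Sweeping the detuning through resonance transfers population from the atomic to the molecular mean field while the total atom number remains constant, which is the exchange mechanism that such two-component models use to describe the molecular production.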
### Markov approximation to the first order microscopic quantum dynamics approach
In order to reveal the physical meaning of the molecular quantum field and its relation to the microscopic quantum dynamics approach of Subsection \[subsec:microscopicquantumdynamics\], we shall derive the mean field equations associated with the two component model Hamiltonian on the basis of the first order coupled dynamic equations (\[Psi2ch\]) and (\[Phi2ch\]) that determine the atomic mean field and the pair function. We shall represent the pair function in terms of the centre of mass and relative coordinates $\mathbf{R}$ and $\mathbf{r}$, respectively. In accordance with the pole approximation of Subsection \[subsec:energystates\], the closed channel part of the pair function can be factorised into the resonance state $\phi_\mathrm{res}(r)$ and a centre of mass wave function: $$\Phi_\mathrm{cl}(\mathbf{R},\mathbf{r},t)=\sqrt{2}\phi_\mathrm{res}(r)
\Psi_\mathrm{res}(\mathbf{R},t).
\label{poleapproxPhicl}$$ In the following derivation we shall show that the centre of mass wave function $\Psi_\mathrm{res}(\mathbf{R},t)$ is identical to the mean field associated with the molecular quantum field in the two component model Hamiltonian. To this end we shall proceed in a way similar to the derivation of the dynamic equation (\[NLSElocal\]) for the atomic mean field in Subsection \[subsec:microscopicquantumdynamics\], except that we shall eliminate only the component $\Phi_\mathrm{bg}(\mathbf{R},\mathbf{r},t)$ of the pair function in the open channel from Eqs. (\[Psi2ch\]) and (\[Phi2ch\]). The formal solution of the dynamic equation for $\Phi_\mathrm{bg}(\mathbf{R},\mathbf{r},t)$ \[i.e. the first component of Eq. (\[Phi2ch\])\] can be expressed in terms of the two-body time evolution operator $U_\mathrm{trap}^\mathrm{bg}(t-\tau)$ that corresponds to the stationary diagonal element $\langle\mathrm{bg}|H_\mathrm{2B}|\mathrm{bg}\rangle$ of the full two-body Hamiltonian matrix $H_\mathrm{2B}(t)$ \[cf. Eq. (\[Phi2ch\])\] in the open channel:
$$\begin{aligned}
\nonumber
\Phi_\mathrm{bg}(\mathbf{R},\mathbf{r},t)=
&\int d\mathbf{R}' d\mathbf{r}' \
\langle\mathbf{R},\mathbf{r}|
U_\mathrm{trap}^\mathrm{bg}(t-t_i)|\mathbf{R}',\mathbf{r}'
\rangle
\Phi_\mathrm{bg}(\mathbf{R}',\mathbf{r}',t_i)\\
\nonumber
&+\frac{1}{i\hbar}\int_{t_i}^t d\tau \int d\mathbf{R}' d\mathbf{r}' \
\langle\mathbf{R},\mathbf{r}|
U_\mathrm{trap}^\mathrm{bg}(t-\tau)
V_\mathrm{bg}|\mathbf{R}',\mathbf{r}'\rangle
\Psi(\mathbf{R}'+\mathbf{r}'/2,\tau)\Psi(\mathbf{R}'-\mathbf{r}'/2,\tau)\\
&+\frac{1}{i\hbar}\int_{t_i}^t d\tau\int d\mathbf{R}' \
\langle\mathbf{R},\mathbf{r}|
U_\mathrm{trap}^\mathrm{bg}(t-\tau)
W|\mathbf{R}',\phi_\mathrm{res}\rangle
\sqrt{2}\Psi_\mathrm{res}(\mathbf{R}',\tau).
\label{formalsolutionPhibg}
\end{aligned}$$
We note that $U_\mathrm{trap}^\mathrm{bg}(t-\tau)$ includes the centre of mass and the relative motion of two atoms as well as the confining potential of the atom trap.
To obtain the mean field equations for $\Psi(\mathbf{x},t)$ and the centre of mass wave function $\Psi_\mathrm{res}(\mathbf{R},t)$ in Eq. (\[poleapproxPhicl\]), we shall insert Eq. (\[formalsolutionPhibg\]) into Eqs. (\[Psi2ch\]) and (\[Phi2ch\]) and perform the Markov approximation. The Markov approximation relies upon the assumption that the main contribution to the time integrals in Eq. (\[formalsolutionPhibg\]) stems from a small region near $\tau=t$, in which the variation of the functions $\Psi(\mathbf{R}'+\mathbf{r}'/2,\tau)$, $\Psi(\mathbf{R}'-\mathbf{r}'/2,\tau)$ and $\Psi_\mathrm{res}(\mathbf{R}',\tau)$ is negligible. Under this assumption, the functions can be evaluated at $\tau=t$. A similar argument applies to the spatial variation of $\Psi(\mathbf{R}'+\mathbf{r}'/2,\tau)$, $\Psi(\mathbf{R}'-\mathbf{r}'/2,\tau)$ and $\Psi_\mathrm{res}(\mathbf{R}',\tau)$ and leads to the replacements $\mathbf{R}'\to\mathbf{R}$ and $\mathbf{r}'\to 0$ in these functions. The subsequent formal procedure to derive the mean field equations is analogous to the derivation of the Gross-Pitaevskii equation in Ref. [@KB02] and involves neglecting the initial pair function $\Phi_\mathrm{bg}(\mathbf{R}',\mathbf{r}',t_i)$ and the centre of mass motion as well as the confining potential of the atom trap in the time evolution operator $U_\mathrm{trap}^\mathrm{bg}(t-\tau)$. A short calculation then yields the coupled equations $$\begin{aligned}
\nonumber
i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{x},t)=&
\left[
-\frac{\hbar^2}{2m}\nabla^2+\frac{m}{2}\omega_\mathrm{ho}^2|\mathbf{x}|^2
\right]\Psi(\mathbf{x},t)\\
\nonumber
&+g_\mathrm{bg}|\Psi(\mathbf{x},t)|^2\Psi(\mathbf{x},t)\\
&+g_\mathrm{res}\Psi^*(\mathbf{x},t)\sqrt{2}
\Psi_\mathrm{res}(\mathbf{x},t)
\label{2levelMFPsi}\end{aligned}$$ for the atomic mean field, and $$\begin{aligned}
\nonumber
i\hbar\frac{\partial}{\partial t}\Psi_\mathrm{res}(\mathbf{R},t)=&
\left[
-\frac{\hbar^2}{4m}\nabla^2+\frac{2m}{2}\omega_\mathrm{ho}^2|\mathbf{R}|^2
\right]
\Psi_\mathrm{res}(\mathbf{R},t)\\
\nonumber
&
+\left[\frac{d E_\mathrm{res}}{d B}(B_\mathrm{res})\right]
[B(t)-B_0]
\Psi_\mathrm{res}(\mathbf{R},t)\\
&+\frac{1}{\sqrt{2}}g_\mathrm{res}\Psi^2(\mathbf{R},t)
\label{2levelMFPsires}\end{aligned}$$ for the molecular mean field [@opticaltrap]. Here we have assumed the optical atom trap to be spherically symmetric. The coupling constants $g_\mathrm{bg}$ and $g_\mathrm{res}$ can be obtained by performing the spatial as well as the time integrations in Eq. (\[formalsolutionPhibg\]) in the limit $t-t_i\to\infty$. This yields (cf., also, Ref. [@KB02]): $$\begin{aligned}
\label{gbg}
g_\mathrm{bg}=&\frac{4\pi\hbar^2}{m}a_\mathrm{bg}\\
g_\mathrm{res}=&(2\pi\hbar)^{3/2}
\langle\phi_\mathrm{res}|W|\phi_0^{(+)}
\rangle.
\label{gres}\end{aligned}$$ Here $a_\mathrm{bg}$ is the background scattering length and $\phi_0^{(+)}(\mathbf{r})$ is the zero energy wave function associated with the background scattering (cf. Eq. (\[BCphipplus\])). We note that the off diagonal coupling constant $g_\mathrm{res}$ is defined only up to a global phase and we shall, therefore, choose it to be real. In accordance with Eq. (\[resonancewidth\]), $g_\mathrm{res}$ is then determined by the resonance width $(\Delta B)$, the background scattering length $a_\mathrm{bg}$ and the slope of the resonance $\left[\frac{d E_\mathrm{res}}{dB}(B_\mathrm{res})\right]$. The mean field equations (\[2levelMFPsi\]) and (\[2levelMFPsires\]) as well as the coupling constants in Eqs. (\[gbg\]) and (\[gres\]) are analogous to those that have been applied, for instance, in Ref. [@Abeelen99] to describe the atom loss in ramps of the magnetic field strength across Feshbach resonances in $^{23}$Na. In the applications of the mean field equations (\[2levelMFPsi\]) and (\[2levelMFPsires\]), the densities $n_\mathrm{c}(\mathbf{x},t)=|\Psi(\mathbf{x},t)|^2$ and $n_\mathrm{res}(\mathbf{R},t)=|\Psi_\mathrm{res}(\mathbf{R},t)|^2$ are usually interpreted as atomic and molecular condensate densities, respectively.
### Two component model Hamiltonian
Equations (\[2levelMFPsi\]) and (\[2levelMFPsires\]) can be derived formally as equations of motion for the mean fields $$\begin{aligned}
\Psi(\mathbf{x},t)&=\langle\psi(\mathbf{x})\rangle_t \\
\Psi_\mathrm{res}(\mathbf{R},t)&=\langle\psi_\mathrm{res}(\mathbf{R})
\rangle_t\end{aligned}$$ in the Hartree approximation. The corresponding two component model Hamiltonian is given by: $$\begin{aligned}
\nonumber
H=&\int d\mathbf{x} \
\psi^\dagger(\mathbf{x})\left[
-\frac{\hbar^2}{2m}\nabla^2+\frac{m}{2}\omega_\mathrm{ho}^2|\mathbf{x}|^2
\right]\psi(\mathbf{x})\\
\nonumber
&+\int d\mathbf{R} \
\psi^\dagger_\mathrm{res}(\mathbf{R})\left[
-\frac{\hbar^2}{4m}\nabla^2+\frac{2m}{2}\omega_\mathrm{ho}^2|\mathbf{R}|^2
\right]\psi_\mathrm{res}(\mathbf{R})\\
\nonumber
&+\int d\mathbf{R} \
\psi^\dagger_\mathrm{res}(\mathbf{R})
\left[
\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})
\right]
[B(t)-B_0]
\psi_\mathrm{res}(\mathbf{R})\\
\nonumber
&+\frac{1}{2}g_\mathrm{bg}\int d\mathbf{x} \
\psi^\dagger(\mathbf{x})
\psi^\dagger(\mathbf{x})
\psi(\mathbf{x})\psi(\mathbf{x})\\
&+\frac{1}{\sqrt{2}}g_\mathrm{res}\int d\mathbf{x} \
\left[
\psi_\mathrm{res}^\dagger(\mathbf{x})\psi(\mathbf{x})\psi(\mathbf{x})
+\mbox{h.c.}
\right].
\label{modelHamiltonian}\end{aligned}$$ Here the field operators $\psi(\mathbf{x})$ and $\psi_\mathrm{res}(\mathbf{R})$ fulfil the usual Bose commutation relations and commute among the different species: $$\begin{aligned}
[\psi(\mathbf{x}),\psi_\mathrm{res}(\mathbf{R})]=&0\\
[\psi(\mathbf{x}),\psi_\mathrm{res}^\dagger(\mathbf{R})]=&0.
\label{commPsiPsires}\end{aligned}$$
### Deficits of the two level mean field approach
Despite its common use in studies of molecular association in dilute Bose-Einstein condensates, the two level mean field model in Eqs. (\[2levelMFPsi\]) and (\[2levelMFPsires\]) and its customary interpretation are subject to two serious deficits. First, in the derivation of the mean field equations we have shown that the mean field $\Psi_\mathrm{res}(\mathbf{R},t)$ is associated with a pair of atoms in the closed channel resonance state $\phi_\mathrm{res}(r)$ \[see Eq. (\[poleapproxPhicl\])\]. Figure \[fig:mixingcoefficient\] clearly reveals that, in general, such an atom pair cannot be associated with a molecular bound state because in the close vicinity of the 100 mT Feshbach resonance of $^{87}$Rb, as well as asymptotically far from it, the wave function of the highest excited vibrational bound state is dominated by its component in the open channel [@85Rbadmixture].
The interpretation of $N_\mathrm{res}(t)=\int d\mathbf{R} \ |\Psi_\mathrm{res}(\mathbf{R},t)|^2$ in terms of the population in the resonance state (as opposed to the number of bound molecules) is a direct consequence of the commutation relation in Eq. (\[commPsiPsires\]) and of the form of the Hamiltonian in Eq. (\[modelHamiltonian\]), rather than an artifact of the derivation of the mean field equations (\[2levelMFPsi\]) and (\[2levelMFPsires\]). In fact, according to Eq. (\[commPsiPsires\]), a localised pair of atoms created from the vacuum state $|\mathrm{vac}\rangle$ by the field operator $\psi_\mathrm{res}^\dagger(\mathbf{R})$ is always orthogonal to the localised state of any atom pair created by $\psi^\dagger(\mathbf{y})\psi^\dagger(\mathbf{x})$ in the open channel; i.e. $$\langle\mathrm{vac}|\psi(\mathbf{x})\psi(\mathbf{y})
\psi_\mathrm{res}^\dagger(\mathbf{R})|\mathrm{vac}\rangle=0.$$ The two states of the atom pairs, therefore, correspond to different asymptotic scattering channels. It can also be verified from the Hamiltonian in Eq. (\[modelHamiltonian\]) that any physical two-body state associated with $\psi_\mathrm{res}^\dagger(\mathbf{R})|\mathrm{vac}\rangle$ is not stationary with respect to the time evolution in the relative motion of the atom pairs. The off diagonal coupling in Eq. (\[modelHamiltonian\]) leads to a decay into the open channel, which is determined by the coupling constant $g_\mathrm{res}$. The energy $E_\mathrm{res}(B)=\left[\frac{d E_\mathrm{res}}{dB}(B_\mathrm{res})\right]
(B-B_0)$, which can be associated with $\psi_\mathrm{res}^\dagger(\mathbf{R})|\mathrm{vac}\rangle$, is linear in the magnetic field strength and corresponds to the resonance state rather than to the bound molecular states (cf. Fig. \[fig:EbofBtwochannel\]). The off diagonal coupling constant $g_\mathrm{res}$ then determines the corresponding decay width.
Although it seems conceivable to measure the closed channel resonance state population $N_\mathrm{res}(t)=\int d\mathbf{R} \ |\Psi_\mathrm{res}(\mathbf{R},t)|^2$, several present day experiments on molecular association in Bose-Einstein condensates have clearly revealed the multichannel nature of the bound states. The experiments in Refs. [@Donley02; @Claussen03], for instance, have determined the near resonant universal form $E_\mathrm{b}(B)=-\hbar^2/\left\{m[a(B)]^2\right\}$ of the binding energy in Eq. (\[Ebuniversal\]), corresponding to a bound state wave function of the universal form of Eq. (\[universalwavefunction\]), which is dominated by its component in the asymptotic open channel (cf., also, Fig. \[fig:haloboundstates\]). The experiment in Ref. [@Duerr03] has determined the change of the magnetic moment of the molecular bound states in dependence on the magnetic field strength $B$, which is a consequence of the nonlinear dependence of the binding energy on $B$ (see Fig. \[fig:EbofBtwochannel\]). The magnetic moment of the resonance state, however, is independent of $B$. These experiments clearly reveal that the number of bound molecules in Eq. (\[Nbgeneral\]) is the relevant physical quantity rather than the resonance state population.
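The universal binding energy quoted above is straightforward to evaluate. The sketch below assumes the standard dispersive form $a(B)=a_\mathrm{bg}[1-(\Delta B)/(B-B_0)]$ of the scattering length together with an illustrative resonance width; it exhibits the nonlinear dependence of $E_\mathrm{b}$ on $B$ that underlies the magnetic-moment measurement of Ref. [@Duerr03].

```python
import numpy as np

hbar = 1.054571817e-34
amu = 1.66053906660e-27
a_bohr = 5.29177210903e-11
h = 2 * np.pi * hbar

m = 87 * amu
a_bg = 100 * a_bohr
B0 = 100.0e-3        # resonance position: 100 mT
delta_B = 0.02e-3    # resonance width (T), assumed for illustration

def a_of_B(B):
    """Standard dispersive form of the scattering length (assumed)."""
    return a_bg * (1.0 - delta_B / (B - B0))

def E_b(B):
    """Universal near-resonant binding energy, E_b = -hbar^2/(m a(B)^2)."""
    return -hbar**2 / (m * a_of_B(B)**2)

# Below resonance (a > 0) the binding energy is quadratic in 1/a(B),
# hence nonlinear in B: the molecular magnetic moment dE_b/dB varies with B.
B = B0 - 0.005e-3  # 5 microtesla below resonance
print("a(B)/a_bg  =", a_of_B(B) / a_bg)
print("E_b/h [Hz] =", E_b(B) / h)
```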
The second major deficit of the mean field equations (\[2levelMFPsi\]) and (\[2levelMFPsires\]) in the description of the adiabatic association technique is related to their two level nature; the only configurations of two atoms are a pair of condensate atoms and the resonance state. Neither the near resonant diatomic bound states nor the continuum states can be represented solely in terms of these two configurations. In particular, any coupling between the initial atomic condensate and the quasi continuum of states above the dissociation threshold energy of the open channel is ruled out. The two level nature of Eqs. (\[2levelMFPsi\]) and (\[2levelMFPsires\]) is a consequence of the application of the Markov approximation to Eqs. (\[Psi2ch\]) and (\[Phi2ch\]). This indicates that the presuppositions leading to the Markov approximation are violated during the passage across a Feshbach resonance. Consequently, Eqs. (\[2levelMFPsi\]) and (\[2levelMFPsires\]) fail quantitatively and even qualitatively in the dynamic description of the near resonant atom-molecule coherence experiments in Refs. [@Donley02; @Claussen03]. The most striking consequence of their two level nature, however, consists in the insensitivity of Eqs. (\[2levelMFPsi\]) and (\[2levelMFPsires\]) with respect to the ramp direction. The approach thus predicts molecular production even for an upward ramp across the 100 mT Feshbach resonance of $^{87}$Rb, although there is no energetically accessible diatomic vibrational bound state on the high field side of the resonance (cf. Fig. \[fig:EbofBtwochannel\]). The reason for this failure is the absence of continuum states that the closed channel resonance state could decay into. The results of Subsection \[subsec:LandauZener\] suggest, however, that the two level mean field approach may recover the far resonant asymptotic molecular population in a downward ramp of the magnetic field strength across the 100 mT Feshbach resonance of $^{87}$Rb.
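In the homogeneous limit, dropping the kinetic and trap terms of Eqs. (\[2levelMFPsi\]) and (\[2levelMFPsires\]) reduces the two level model to two coupled ordinary differential equations for the atomic and resonance amplitudes. The sketch below integrates a linear crossing of the resonance in dimensionless units with $\hbar=1$; the coupling strength and ramp rate are illustrative assumptions, not $^{87}$Rb parameters. It also verifies the conserved norm $|a|^2+2|b|^2$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless homogeneous limit of Eqs. (2levelMFPsi)/(2levelMFPsires):
# a = atomic mean field amplitude, b = resonance-state amplitude.
c_bg = 0.0        # background nonlinearity (switched off for clarity)
kappa = 1.0       # off-diagonal coupling (illustrative)
alpha = 0.5       # detuning ramp rate: eps(t) = alpha * t (illustrative)

def rhs(t, y):
    a = y[0] + 1j * y[1]
    b = y[2] + 1j * y[3]
    eps = alpha * t
    # i da/dt = c_bg |a|^2 a + sqrt(2) kappa a* b
    da = -1j * (c_bg * abs(a)**2 * a + np.sqrt(2) * kappa * np.conj(a) * b)
    # i db/dt = eps(t) b + (kappa/sqrt(2)) a^2
    db = -1j * (eps * b + kappa / np.sqrt(2) * a**2)
    return [da.real, da.imag, db.real, db.imag]

T = 40.0
sol = solve_ivp(rhs, [-T, T], [1.0, 0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
a2 = sol.y[0, -1]**2 + sol.y[1, -1]**2
b2 = sol.y[2, -1]**2 + sol.y[3, -1]**2
print("remnant atomic fraction:", a2)
print("molecular fraction     :", 2 * b2)
print("norm |a|^2 + 2|b|^2    :", a2 + 2 * b2)
```

Reversing the sign of `alpha` probes the ramp-direction insensitivity discussed above.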
### Applications of the two component model Hamiltonian beyond mean field
When applied beyond mean field [@Kokkelmans02; @Mackie02; @Yurovsky03], the model Hamiltonian in Eq. (\[modelHamiltonian\]) leads to ultraviolet singularities that are related to the energy independence of the coupling constants $g_\mathrm{bg}$ and $g_\mathrm{res}$; that is, the associated contact potentials lack any spatial extent. The approach in Ref. [@Kokkelmans02] circumvents this ultraviolet problem by means of an energy cutoff, which is adjusted in such a way that the Hamiltonian, when applied to only two atoms, recovers the exact binding energy of the highest excited vibrational bound state. This beyond mean field approach has been applied recently to the experiments in Ref. [@Donley02]. We expect this approach to give results similar to those of the first order microscopic quantum dynamics approach. It has been pointed out [@KGB03; @TKTGPSJKB03; @Borca03; @Braaten03; @Duine03], however, that the interpretation in Ref. [@Kokkelmans02] of the closed channel resonance state population $N_\mathrm{res}(t)$ in terms of the number of bound molecules (i.e. the number of undetected atoms in Ref. [@Donley02]) is not applicable.
Some approaches [@Braaten03; @Duine03] have suggested curing this deficit by multiplying the closed channel resonance state population $N_\mathrm{res}(t)$, as determined in Ref. [@Kokkelmans02], by a magnetic field dependent factor, termed the wave function renormalisation factor, which accounts for the component of the bound state wave function in the asymptotic open scattering channel. A short calculation using Eq. (\[twochannelnormalisation\]) shows that the wave function renormalisation factor is equivalent to the squared normalisation factor $$\mathcal{N}_\mathrm{b}^2=1+\frac{1}{2}
\left[
\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})
\right]
(\Delta B)\frac{a_\mathrm{bg}}{a(B)}\frac{m [a(B)]^2}{\hbar^2}
\label{wavefunctionrenormalisation}$$ of the two channel bound state wave function in Eq. (\[phib\]), provided that the magnetic field strength is sufficiently close to the resonance position that the bound state wave function and its binding energy are universal (see Subsection \[subsec:universal\]). The closed channel resonance state population of Ref. [@Kokkelmans02], once revised by multiplication with the right hand side of Eq. (\[wavefunctionrenormalisation\]) (as provided by Ref. [@Braaten03]), indeed largely recovers the magnitude of the molecular fractions predicted by the first order microscopic quantum dynamics approach in Refs. [@KGB03; @TKTGPSJKB03]. The revised resonance state population, however, shows unphysical oscillations between the populations of bound and free atoms (cf. Fig. 3 in Ref. [@Kokkelmans02]) for magnetic field strengths at which the spatial extent of the highest excited vibrational molecular bound state is far smaller than the mean interatomic distance in the dilute gas. This unphysical behaviour is due to the fact that not only the bound state wave function in Eq. (\[phib\]), but also the continuum wave functions in Eqs. (\[phipcl\]) and (\[phipbg\]), have a closed channel resonance state component. As a consequence, there is no [*general*]{} relationship between the closed channel resonance state population of a gas and the number of bound molecules.
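Equation (\[wavefunctionrenormalisation\]) can be evaluated directly. The sketch below assumes the standard dispersive form of $a(B)$ and illustrative width and slope values (assumptions, not fitted $^{87}$Rb parameters); it shows how $\mathcal{N}_\mathrm{b}^2$ grows as the resonance is approached from below.

```python
import numpy as np

hbar = 1.054571817e-34
amu = 1.66053906660e-27
a_bohr = 5.29177210903e-11

m = 87 * amu
a_bg = 100 * a_bohr
B0 = 100.0e-3
delta_B = 0.02e-3            # resonance width (T), assumed
slope = 2 * 9.274e-24        # dE_res/dB (J/T), assumed

def a_of_B(B):
    # standard dispersive form of the scattering length (assumed)
    return a_bg * (1.0 - delta_B / (B - B0))

def Nb_squared(B):
    """Squared normalisation factor, Eq. (wavefunctionrenormalisation)."""
    a = a_of_B(B)
    return 1.0 + 0.5 * slope * delta_B * (a_bg / a) * m * a**2 / hbar**2

for dB in [-1e-6, -1e-5, -1e-4]:   # approach the resonance from below
    print(f"B - B0 = {dB:+.1e} T  ->  N_b^2 = {Nb_squared(B0 + dB):.3f}")
```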
The general relationship between the number of bound molecules and the two-body correlation function in Eq. (\[Nbgeneral\]), however, is applicable also to the approach in Ref. [@Kokkelmans02]. Equation (\[Nbgeneral\]) involves the appropriate quantum mechanical observable and will therefore automatically determine the measurable fraction of bound molecules in a gas, provided that the identification of molecules as separate entities is at all a reasonable concept [@TKTGPSJKB03].
Comparison between different approaches {#subsec:comparison}
---------------------------------------
We have shown in Subsection \[subsec:microscopicquantumdynamics\] that, during a linear ramp of the magnetic field strength across a Feshbach resonance, the full microscopic quantum dynamics approach can predict a transfer of atoms from the atomic condensate not only to the bound, but also to the continuum part of the two-body energy spectrum. The transfer into the continuum energy states, however, occurred only in upward ramps of the magnetic field strength across the 100 mT Feshbach resonance of $^{87}$Rb. The two level configuration interaction approach of Subsection \[subsec:LandauZener\] restricts the configuration of an atom pair in the open channel to the lowest energetic quasi continuum state of the background scattering in the atom trap, while in the two level mean field model of Subsection \[subsec:twolevel\] all atoms in the $(F=1,m_F=+1)$ hyperfine state are described by a coherent mean field. Both two level approaches, therefore, rule out the production of correlated pairs of atoms in the quasi continuum of two-body energy levels from the outset.
Equations (\[NLRabibg\]) and (\[NLRabicl\]) of the configuration interaction approach and the mean field equations (\[2levelMFPsi\]) and (\[2levelMFPsires\]) describe the coupling between the atomic condensate and the closed channel resonance state in essentially the same way, except that the phases of the off diagonal coupling terms are different. The spatial configuration of the atomic condensate in the configuration interaction approach, however, is static, while the two level mean field approach allows the trapped atomic condensate to access coherent collective excitations with a high occupancy of the energy modes. Consequently, the configuration interaction approach can, at least to some extent, be interpreted as a local density approximation to the two level mean field model. In accordance with the derivations in Subsection \[subsec:twolevel\], both two level approaches can, therefore, be considered as the Markov approximation to the first order microscopic quantum dynamics approach of Subsection \[subsec:microscopicquantumdynamics\].
Our previous considerations have shown that there exists a qualitative agreement between the approaches, at different levels of approximation, with respect to the prediction of molecular formation in linear downward ramps of the magnetic field strength across the 100 mT Feshbach resonance of $^{87}$Rb. Their descriptions of the microscopic binary collision physics, as well as their treatment of the coherent nature of the Bose-Einstein condensate, however, differ considerably. In the following, we shall provide quantitative comparisons between their predictions with respect to the molecular production efficiency of the adiabatic association technique. We shall study in detail the dependence of the magnitude of the molecular fraction on experimentally accessible parameters.
### Universal properties of the molecular production efficiency of a linear ramp of the magnetic field strength
In the adiabatic association technique with linear ramps of the magnetic field strength, the ramp speed $\frac{d B}{d t}$ controls the interatomic interactions. Furthermore, the initial number $N$ of atoms can be varied over a wide range in present experiments, and the trap frequencies determine the confining potential of a harmonic atom trap. The atom trap, the number of atoms as well as their binary interactions control the spectrum of coherent collective excitations of a Bose-Einstein condensate. In the following, we shall consider spherically symmetric atom traps with a radial trap frequency that we denote by $\nu_\mathrm{ho}$.
![Predicted remnant condensate fraction and number of atoms associated to diatomic molecules in the highest excited vibrational bound state after a linear downward ramp of the magnetic field strength across the 100 mT Feshbach resonance of $^{87}$Rb in dependence on the inverse ramp speed. The initial state of the gas corresponds to a dilute zero temperature Bose-Einstein condensate with a scattering length of $a_\mathrm{bg}=100 \ a_\mathrm{Bohr}$ and $N=50000$ atoms in spherically symmetric atom traps with the frequencies 100 Hz (a) and 10 Hz (b). The abbreviations in the legend indicate the different approaches applied in the calculations. These approaches contain the first order microscopic quantum dynamics approach (MQDA), the nonlinear configuration interaction equations (\[NLRabibg\]) and (\[NLRabicl\]) (CI), the two level mean field equations (\[2levelMFPsi\]) and (\[2levelMFPsires\]) (2 level), and the Landau-Zener asymptotic populations in Eqs. (\[LZbg\]) and (\[LZcl\]) (LZ). The small deviations in the calculated MQDA data of the remnant atomic condensate (filled circles) from an entirely smooth curve are discussed in Fig. \[fig:ox-ramps\].[]{data-label="fig:87RbNatomswithLZ"}](87RbNatomswithLZ.eps){width="\columnwidth"}
Figure \[fig:87RbNatomswithLZ\] shows the predicted dependence of the remnant atomic condensate fraction and of the fraction of atoms associated to diatomic molecules in the highest excited vibrational bound state on the inverse speed of linear downward ramps of the magnetic field strength across the 100 mT Feshbach resonance of $^{87}$Rb. We have chosen trap frequencies of 100 Hz (a) and of 10 Hz (b) that correspond to a comparatively strongly and a weakly confining harmonic atom trap, respectively. The initial number of $N_\mathrm{c}(t_i)=50000$ condensate atoms is fixed in all calculations. The predictions correspond to the full first order microscopic quantum dynamics approach (MQDA), the nonlinear configuration interaction equations (\[NLRabibg\]) and (\[NLRabicl\]) (CI), the two level mean field equations (\[2levelMFPsi\]) and (\[2levelMFPsires\]) (2 level), and the Landau-Zener asymptotic populations in Eqs. (\[LZbg\]) and (\[LZcl\]) (LZ). The comparison reveals that the approaches predict similar conversion efficiencies with respect to the association of molecules as well as similar remnant condensate fractions. Only the saturation behaviour of the exponential Landau-Zener (LZ) curves \[cf. Eqs. (\[LZbg\]) and (\[LZcl\])\] differs significantly from all other approaches because it corresponds to the asymptotic populations of the linear equations (\[Rabibg\]) and (\[Rabicl\]). These results suggest a remarkable insensitivity of the molecular production in linear ramps of the magnetic field strength with respect to both the details of the microscopic binary collision physics and the coherent nature of the Bose-Einstein condensate.
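The exponential Landau-Zener saturation curves referred to above can be sketched in a few lines, assuming the common convention that the remnant condensate fraction is $\exp(-2\pi\delta_\mathrm{LZ})$ with $\delta_\mathrm{LZ}$ proportional to the inverse ramp speed; the proportionality constant below is purely illustrative.

```python
import numpy as np

# Assumed exponential Landau-Zener form of the asymptotic populations
# [cf. Eqs. (LZbg)/(LZcl)]: remnant fraction = exp(-2 pi delta_LZ).
delta_per_unit = 0.5   # delta_LZ per unit inverse ramp speed (illustrative)

def lz_fractions(inv_ramp_speed):
    delta_lz = delta_per_unit * inv_ramp_speed
    remnant = np.exp(-2 * np.pi * delta_lz)   # atoms left in the condensate
    molecules = 1.0 - remnant                 # transferred fraction
    return remnant, molecules

for s in [0.1, 0.5, 1.0, 2.0]:
    r, mol = lz_fractions(s)
    print(f"inverse ramp speed = {s:4.1f}  remnant = {r:.3f}  molecules = {mol:.3f}")
```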
![Predicted proportion of remnant condensate atoms and proportion of atoms associated to diatomic molecules in the highest excited vibrational bound state after a linear downward ramp of the magnetic field strength across the 100 mT Feshbach resonance of $^{87}$Rb in dependence on the inverse ramp speed. The atom number and the angular frequency $\omega_{\rm ho}$ of the atom trap are adjusted to keep the nonlinearity parameter $k_{\rm bg}$ and the quantity $\omega_{\rm ho}t_\mathrm{nat}$ constant, which also fixes the Landau-Zener parameter $\delta_{\mathrm{LZ}}$. The initial states of the gas correspond to a dilute zero temperature Bose-Einstein condensate with a scattering length of $a_\mathrm{bg}=100 \ a_\mathrm{Bohr}$ in spherically symmetric atom traps. In each case the shape of the initial condensate mode is, in units of $l_{\rm ho}$, identical to that of a dilute Bose-Einstein condensate with 50000 atoms in a spherically symmetric atom trap with a frequency of $\nu_\mathrm{ho}=10$ Hz. The abbreviations in the legend indicate the different approaches applied in the calculations. These approaches contain the first order microscopic quantum dynamics approach (MQDA), and the two level mean field equations (\[2levelMFPsi\]) and (\[2levelMFPsires\]) (2 level).[]{data-label="fig:87RbLZconst"}](87RbLZconst.eps){width="\columnwidth"}
### Dependence of the molecular production efficiency on experimentally accessible parameters
We shall now identify the universal physical parameter that controls the molecular production efficiency of the adiabatic association technique. As shown in Subsection \[subsec:LandauZener\], the linear Rabi flopping model in Eqs. (\[Rabibg\]) and (\[Rabicl\]) leads to an asymptotic fraction of diatomic molecules that can be described by a single parameter, the Landau-Zener coefficient $\delta_\mathrm{LZ}$. We shall show in the following that the asymptotic molecular population is characterised by $\delta_\mathrm{LZ}$ also in the nonlinear two level approaches. To this end, we consider Eqs. (\[2levelMFPsi\]) and (\[2levelMFPsires\]) of the two level mean field approach, for which it is revealing to introduce the natural time scale $t_\mathrm{nat}=(\Delta B)/\left(\frac{d B}{d t}\right)$, as well as the harmonic oscillator length scale $l_{\mathrm{ho}}=\sqrt{\hbar/(2\pi m\nu_\mathrm{ho})}$. Applying these natural time and length scales to the two level mean field equations reveals that they can be characterised purely in terms of the nonlinearity parameter $k_{\mathrm{bg}}=Na_\mathrm{bg}/l_\mathrm{ho}$ of the Gross-Pitaevskii equation [@Dalfovo99], in addition to the quantities $\omega_{\mathrm{ho}}t_\mathrm{nat}$ and $$k_\mathrm{eff}=
\frac{1}{\hbar}
\left[\frac{d E_{\mathrm{res}}}{d B}(B_{\mathrm{res}})\right]
(\Delta B)^{2}/\left(d B/d t\right).$$ Here $N$ is the total number of atoms and $\omega_\mathrm{ho}=2\pi\nu_\mathrm{ho}$ is the angular frequency of the spherically symmetric atom trap. We note that all three of these parameters can in principle be varied independently by manipulating the trap frequency, the ramp speed, and the total number of atoms. Of these three dimensionless quantities, the Landau-Zener parameter $\delta_\mathrm{LZ}$ in Eq. (\[deltaLZ\]) can be written as a function of just the nonlinearity parameter $k_{\mathrm{bg}}$ and $\omega_{\mathrm{ho}}t_\mathrm{nat}$. Indeed, as we observe in Fig. \[fig:87RbLZconst\], keeping $k_{\mathrm{bg}}$ and $\omega_{\mathrm{ho}}t_\mathrm{nat}$ constant while varying $k_\mathrm{eff}$ reveals only a very weak dependence of the molecular conversion efficiency on $k_\mathrm{eff}$. Using identical input parameters for the dynamic equation (\[NLSElocal\]) of the first order microscopic quantum dynamics approach reveals that this remarkable universality of the molecular production during a linear passage across a Feshbach resonance is preserved even when the complete quasi continuum of excited two-body energy modes is explicitly accounted for by the theory.
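For orientation, the three dimensionless quantities can be evaluated for typical parameters. The sketch below uses the atom number, trap frequency, and background scattering length of the figures; the resonance width, slope, and ramp speed are illustrative assumptions.

```python
import numpy as np

hbar = 1.054571817e-34
amu = 1.66053906660e-27
a_bohr = 5.29177210903e-11

m = 87 * amu
a_bg = 100 * a_bohr
N = 50000                    # total number of atoms
nu_ho = 10.0                 # trap frequency (Hz)
ramp_speed = 1.0e-3          # dB/dt (T/s), assumed
delta_B = 0.02e-3            # resonance width (T), assumed
slope = 2 * 9.274e-24        # dE_res/dB (J/T), assumed

omega_ho = 2 * np.pi * nu_ho
l_ho = np.sqrt(hbar / (m * omega_ho))      # harmonic oscillator length

k_bg = N * a_bg / l_ho                     # Gross-Pitaevskii nonlinearity
t_nat = delta_B / ramp_speed               # natural time scale
k_eff = slope * delta_B**2 / (hbar * ramp_speed)

print("k_bg             =", k_bg)
print("omega_ho * t_nat =", omega_ho * t_nat)
print("k_eff            =", k_eff)
```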
Conclusions {#sec:conclusions}
===========
We have presented a comprehensive theoretical analysis of the adiabatic association to diatomic molecules of initially Bose-Einstein condensed $^{87}$Rb atoms via magnetic field tunable interactions. In particular, we have considered the situation in which a gas of Bose-Einstein condensed atoms in the $(F=1,m_{F}=+1)$ state is exposed to a homogeneous magnetic field whose strength is varied linearly to cross the broadest Feshbach resonance at 100 mT, in order to produce strongly correlated pairs of atoms in the highest excited vibrational bound state. We have compared the predictions of Landau-Zener, nonlinear configuration interaction, and two level mean field approaches with the full first order microscopic quantum dynamics approach, which explicitly includes all energy states of two atoms.
We found that despite substantial differences between these many-body approaches with respect to the description of the underlying microscopic binary collision dynamics, the predicted molecular production efficiencies obtained from linear ramps of the magnetic field strength are virtually independent of the level of approximation. We have shown that the Landau-Zener coefficient of Eq. (\[deltaLZ\]) is the main parameter that controls the molecular production in all theoretical approaches. Consequently, the efficiency of molecular production in linear ramps of the magnetic field strength is remarkably insensitive with respect to the details of the binary collision dynamics and to the coherent nature of the gas. The adiabatic association of molecules in dilute Bose-Einstein condensed gases is thus subject to universal physical properties similar to those that we have identified in the associated two-body problem. This indicates that related experiments on the formation of molecules as well as their subsequent dissociation via linear ramps of the magnetic field strength (see, e.g., [@Mukaiyama03]) are largely inconclusive with respect to the details of the intermediate microscopic binary collision dynamics (cf., also, Ref. [@TKKG03]).
The universal properties of the molecular production efficiencies and dissociation spectra reported in this paper are restricted to [*linear*]{} ramps of the magnetic field strength. The interferometric studies of Refs. [@Donley02; @Claussen03] and their subsequent theoretical analysis [@Kokkelmans02; @Mackie02; @KGB03], for instance, have clearly revealed that sequences of linear variations of the magnetic field strength lead to molecular production efficiencies beyond the predictions of Landau-Zener or two level mean field models.
We thank Vincent Boyer, Donatella Cassetari, Rachel Godun, Eleanor Hodby, Giuseppe Smirne, C.M. Chandrashekar, Christopher Foot, Stephan Dürr, Thomas Gasenzer, and Keith Burnett for stimulating discussions. This research has been supported by the European Community Marie Curie Fellowship under Contract no. HPMF-CT-2002-02000 (K.G.), the United Kingdom Engineering and Physical Sciences Research Council as well as NASA (S.A.G.), the US Office of Naval Research (E.T. and P.S.J.), and a University Research Fellowship of the Royal Society (T.K.).
Low energy two channel Hamiltonian {#app:separablepotential}
==================================
In this appendix we shall provide the universal and practical model of the low energy two-channel potential matrix that we have used to determine the static as well as the dynamic properties of the resonance enhanced scattering in Sections \[sec:twobody\] and \[sec:manybody\]. Based on the general form of the energy states in Subsection \[subsec:energystates\], we shall determine the explicit parameters of the low energy potentials that correspond to the 100 mT Feshbach resonance of $^{87}$Rb.
Low energy background scattering potential {#app:background}
------------------------------------------
The two-channel bound and continuum energy states in Subsection \[subsec:energystates\] depend on the complete Green’s function $G_\mathrm{bg}(z)$ of the background scattering in Eq. (\[Gbg\]). Here $z$ is the complex parameter in Eqs. (\[phipbg\]) and (\[phib\]), respectively, which determines the asymptotic form of the continuum and bound state wave functions at large interatomic distances. The Green’s function $G_\mathrm{bg}(z)$ is determined by all bound and continuum energy states associated with the background scattering. The detailed form of the binary interaction potential is not resolved in ultracold collisions (cf. discussion in \[subsubsec:background\]). At magnetic field strengths asymptotically far from the position of a Feshbach resonance, the low energy scattering observables can be determined in terms of just two parameters, the background scattering length $a_\mathrm{bg}$ and the binding energy $E_{-1}$ (see Fig. \[fig:EbofBtwochannel\]) of the highest excited vibrational bound state of the background scattering potential $V_\mathrm{bg}$ [@Gao98; @secondchoice]. This universality allows us to choose a potential model of $V_\mathrm{bg}$ that recovers the exact scattering length $a_\mathrm{bg}$ as well as the exact binding energy $E_{-1}$. In the following, we thus use a convenient separable potential energy operator of the form [@Lovelace64] $$V_\mathrm{bg}=|\chi_\mathrm{bg}\rangle \xi_\mathrm{bg}
\langle\chi_\mathrm{bg}|,
\label{separablepotentialgeneral}$$ to determine $G_\mathrm{bg}(z)$. We choose the arbitrary but convenient Gaussian form of the function $$\langle\mathbf{p}|\chi_\mathrm{bg}\rangle=\chi_\mathrm{bg}(p)=
\frac{\exp\left(-\frac{p^2\sigma_\mathrm{bg}^2}{2\hbar^2}\right)}
{(2\pi\hbar)^{3/2}}
\label{separablepotentialGaussian}$$ in momentum space. We shall then adjust the amplitude $\xi_\mathrm{bg}$ and the range parameter $\sigma_\mathrm{bg}$ in such a way that they exactly recover $a_\mathrm{bg}$ as well as $E_{-1}$ (see Fig. \[fig:EbofBtwochannel\]).
The choice of a separable potential energy operator in Eq. (\[separablepotentialgeneral\]) allows us to determine the bound state $\phi_{-1}$ and the continuum energy states of the background scattering in an analytic form. The bound state $\phi_{-1}$ and its binding energy $E_{-1}$ are determined by the integral form of the Schrödinger equation, which for the separable potential in Eq. (\[separablepotentialgeneral\]) is given by: $$\begin{aligned}
|\phi_{-1}\rangle &=G_0(E_{-1})V_\mathrm{bg}|\phi_{-1}\rangle
=G_0(E_{-1})|\chi_\mathrm{bg}\rangle\xi_\mathrm{bg}
\langle\chi_\mathrm{bg}|\phi_{-1}\rangle.
\label{SEboundseparable}
\end{aligned}$$ Here $G_0(z)=\left[z-\left(-\hbar^2\nabla^2/m\right)\right]^{-1}$ is the free Green’s function, i.e. the Green’s function of the relative motion of an atom pair in the absence of interactions. The factor $\langle\chi_\mathrm{bg}|\phi_{-1}\rangle$ in Eq. (\[SEboundseparable\]) is determined by the unit normalisation of $\phi_{-1}$. The as yet undetermined binding energy $E_{-1}$ can be obtained by multiplying Eq. (\[SEboundseparable\]) by $\langle\chi_\mathrm{bg}|$ from the left. This yields: $$1-\xi_\mathrm{bg}\langle\chi_\mathrm{bg}|G_0(E_{-1})
|\chi_\mathrm{bg}\rangle=0.
\label{Eminus1determination}$$ A short calculation using the Gaussian form of $\chi_\mathrm{bg}$ in Eq. (\[separablepotentialGaussian\]) determines the matrix element of the free Green’s function in Eq. (\[Eminus1determination\]) by: $$\langle\chi_\mathrm{bg}|G_0(E_{-1})
|\chi_\mathrm{bg}\rangle=
\frac{m}{4\pi^{3/2}\hbar^2\sigma_\mathrm{bg}}
\left\{
\sqrt{\pi}xe^{x^2}\left[1-\mathrm{erf}(x)\right]-1
\right\}.
\label{matrixelementG0}$$ Here $\mathrm{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x e^{-u^2}du$ is the error function with the argument $x=\sqrt{m|E_{-1}|}\sigma_\mathrm{bg}/\hbar$.
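Equation (\[matrixelementG0\]) can be cross-checked by direct numerical quadrature: since the energy argument is negative, the momentum-space integrand has no pole. The short sketch below is not part of the original text; it assumes illustrative units $\hbar=m=\sigma_\mathrm{bg}=1$, in which the closed form reduces to $\left(\sqrt{\pi}xe^{x^2}\mathrm{erfc}(x)-1\right)/(4\pi^{3/2})$ with $E=-x^2$.

```python
import math

# Units: hbar = m = sigma_bg = 1; pick a bound-state energy E = -x**2 < 0.
x = 1.0
E = -x**2

# Closed form of Eq. (matrixelementG0) in these units:
closed = (math.sqrt(math.pi)*x*math.exp(x**2)*math.erfc(x) - 1.0) \
         / (4.0*math.pi**1.5)

# Direct quadrature: <chi|G0(E)|chi> = int d^3p |chi(p)|^2 / (E - p^2)
#                  = (1/(2 pi^2)) int_0^inf p^2 exp(-p^2) / (E - p^2) dp.
# The integrand is smooth (E < 0, no pole), so a plain trapezoid rule suffices.
n, pmax = 200000, 12.0
h = pmax / n
total = 0.0
for i in range(1, n):
    p = i * h
    total += p*p*math.exp(-p*p) / (E - p*p)
numeric = h * total / (2.0*math.pi**2)

print(closed, numeric)   # the two evaluations agree to high accuracy
```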
In addition to Eq. (\[Eminus1determination\]), the second condition that is needed to relate the two parameters $\xi_\mathrm{bg}$ and $\sigma_\mathrm{bg}$ of the separable potential operator to the background scattering length and the binding energy $E_{-1}$ is provided by the continuum energy wave functions, or equivalently, by the $T$ matrix. The $T$ matrix of the background scattering is determined by the Lippmann-Schwinger equation [@Newton82]: $$T_\mathrm{bg}(z)=V_\mathrm{bg}+V_\mathrm{bg}G_0(z)T_\mathrm{bg}(z).
\label{LSTmatrix}$$ For a separable potential energy operator, Eq. (\[LSTmatrix\]) can be solved by iteration, which yields the sum of the Born series: $$\begin{aligned}
T_\mathrm{bg}(z)=V_\mathrm{bg}\sum_{j=0}^\infty
\left[G_0(z)V_\mathrm{bg}\right]^j
=\frac{|\chi_\mathrm{bg}\rangle\xi_\mathrm{bg}\langle\chi_\mathrm{bg}|}
{1-\xi_\mathrm{bg}\langle\chi_\mathrm{bg}|G_0(z)
|\chi_\mathrm{bg}\rangle}.
\label{Bornseries}
\end{aligned}$$ The background scattering length is then determined in terms of the $T$ matrix in Eq. (\[Bornseries\]) by $$\begin{aligned}
a_\mathrm{bg}=\frac{m}{4\pi\hbar^2}(2\pi\hbar)^3
\langle 0|T_\mathrm{bg}(0)|0\rangle
=\frac{\frac{m}{4\pi\hbar^2}\xi_\mathrm{bg}}
{1-\xi_\mathrm{bg}\langle\chi_\mathrm{bg}|G_0(0)|\chi_\mathrm{bg}\rangle}.
\label{abgofTbg}
\end{aligned}$$ Here $|0\rangle$ is the zero momentum plane wave of the relative motion of an atom pair. The denominator on the right hand side of Eq. (\[abgofTbg\]) can be obtained directly from Eq. (\[matrixelementG0\]) by replacing the energy argument $E_{-1}$ by 0. This yields: $$a_\mathrm{bg}=\frac{\frac{m}{4\pi\hbar^2}\xi_\mathrm{bg}}
{1+\frac{m}{4\pi\hbar^2}\xi_\mathrm{bg}/
\left(\sqrt{\pi}\sigma_\mathrm{bg}\right)}.
\label{abgofxibgsigmabg}$$ Equation (\[abgofxibgsigmabg\]) can be used to eliminate $\xi_\mathrm{bg}$ from Eq. (\[Eminus1determination\]), which, in turn, determines the range parameter $\sigma_\mathrm{bg}$ in terms of the background scattering length $a_\mathrm{bg}$ and the binding energy $E_{-1}$. Given the range parameter $\sigma_\mathrm{bg}$, the remaining amplitude $\xi_\mathrm{bg}$ can then be obtained from Eq. (\[abgofxibgsigmabg\]). For the $^{87}$Rb parameters $a_\mathrm{bg}=100 \ a_\mathrm{Bohr}$ and $E_{-1}=-h\times 23$ MHz, as obtained from Fig. \[fig:Eboverview\], this yields $\sigma_\mathrm{bg}=42.90753599 \ a_\mathrm{Bohr}$ and $m\xi_\mathrm{bg}/\left(4\pi\hbar^2\right)=-317.5649079 \ a_\mathrm{Bohr}$.
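The quoted $^{87}$Rb numbers can be reproduced with a few lines of root finding. Eliminating $\xi_\mathrm{bg}$ from Eq. (\[Eminus1determination\]) using Eq. (\[abgofxibgsigmabg\]) leaves the single condition $\sigma_\mathrm{bg}/a_\mathrm{bg}=x\,e^{x^2}\mathrm{erfc}(x)$, with $x=\sqrt{m|E_{-1}|}\,\sigma_\mathrm{bg}/\hbar$. The sketch below is illustrative only; the physical constants and the $^{87}$Rb atomic mass are standard values assumed here, not taken from the text.

```python
import math

hbar = 1.054571817e-34          # J s
h    = 6.62607015e-34           # J s
u    = 1.66053906660e-27        # kg (atomic mass unit)
aB   = 5.29177210903e-11        # m  (Bohr radius)

m   = 86.909180 * u             # 87Rb mass (kinetic energy of relative motion is p^2/m)
abg = 100.0 * aB                # background scattering length
E1  = -h * 23.0e6               # binding energy E_{-1} = -h x 23 MHz

# Single condition on sigma_bg after eliminating xi_bg:
#   sigma/abg = x e^{x^2} erfc(x),   x = sqrt(m|E1|) sigma / hbar
def f(sigma):
    x = math.sqrt(m * abs(E1)) * sigma / hbar
    return x * math.exp(x*x) * math.erfc(x) - sigma / abg

# Bisection on a bracketing interval (in metres); f > 0 at lo, f < 0 at hi.
lo, hi = 1e-11, 1e-8
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
sigma_bg = 0.5 * (lo + hi)

# Amplitude from Eq. (abgofxibgsigmabg):
#   g = m xi_bg/(4 pi hbar^2) = abg / (1 - abg/(sqrt(pi) sigma_bg))
g = abg / (1.0 - abg / (math.sqrt(math.pi) * sigma_bg))

print(sigma_bg / aB)   # ~ 42.9 Bohr radii
print(g / aB)          # ~ -317.6 Bohr radii
```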
Off diagonal coupling
---------------------
In order to recover the universal properties of the near resonant bound states in Subsection \[subsec:universal\] (cf. Fig. \[fig:haloboundstates\]), the transition probabilities in Subsection \[subsec:2Bapproach\] (cf. Fig. \[fig:transprob\]), as well as the time evolution operator in Eq. (\[NLSElocal\]), it is sufficient to use a single channel Hamiltonian, as explained in Subsection \[subsec:universal\]. To this end, we have extended the potential model in Eq. (\[separablepotentialgeneral\]) to recover not only the background scattering, but also the scattering length $a(B)$. We have thus adjusted the separable potential energy operator in Eq. (\[separablepotentialgeneral\]), at each magnetic field strength $B$, to the magnetic field dependent scattering length $a(B)=a_\mathrm{bg}\left[1-(\Delta B)/(B-B_0)\right]$ and to the energy $E_{-1}$ (cf., also, Ref. [@KGB03]).
The dissociation spectra in Subsection \[subsec:dissociation\] have been obtained from a two channel description. A two channel approach recovers not only the exact magnetic field dependence of the scattering length, but it also accurately describes the binding energies of the multichannel bound states over a much wider range of magnetic field strengths (see Fig. \[fig:EbofBtwochannel\]) than a single channel treatment. Apart from the background scattering, a two channel description of resonance enhanced collisions requires us to specify the coupling between the open and the closed channels. Equations (\[resonancewidth\]) and (\[resonanceshift\]) relate the off diagonal coupling element $W(r)$ of the general two channel Hamiltonian in Eq. (\[H2B2channel\]) to the resonance width $(\Delta B)$ as well as to the shift $B_0-B_\mathrm{res}$. Although the resonance shift is not directly measurable, Eq. (\[magicformula\]) relates it to the van der Waals dispersion coefficient $C_6$ and to the width $(\Delta B)$, which can usually be determined from experimental data. In the pole approximation to the closed channel Green’s function \[cf. Eq. (\[poleapproximation\])\], the closed channel part of the two channel Hamiltonian in Eq. (\[H2B2channel\]) reduces to Eq. (\[replacementHcl\]), which restricts the state of an atom pair in the closed channels to the resonance state $\phi_\mathrm{res}$. Given the linear dependence of the energy $E_\mathrm{res}(B)$, associated with $\phi_\mathrm{res}$, on the magnetic field strength $B$, the closed channel part of the two channel Hamiltonian is determined completely by the slope $\left[\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})\right]$ of the Feshbach resonance (cf. Subsection \[subsec:energystates\]).
The only remaining quantity that needs to be determined is the product $W(r)\phi_\mathrm{res}(r)$, which provides the interchannel coupling in the pole approximation. Similar to the choice of the separable potential in Eq. (\[separablepotentialgeneral\]), we shall use a two parameter description of the interchannel coupling, in terms of a real amplitude $\zeta$ and a range parameter $\sigma$, to recover the width and the shift of the Feshbach resonance. This leads to the [*ansatz*]{} $$W|\phi_\mathrm{res}\rangle=|\chi\rangle\zeta,$$ where we have chosen the arbitrary but convenient Gaussian form of the function $$\langle\mathbf{p}|\chi\rangle=\chi(p)=
\frac{\exp\left(-\frac{p^2\sigma^2}{2\hbar^2}\right)}
{(2\pi\hbar)^{3/2}}
\label{couplingGaussian}$$ in momentum space. In the following, we shall adjust the parameters $\zeta$ and $\sigma$ in such a way that the two channel Hamiltonian recovers the resonance width $(\Delta B)$ and the shift $B_0-B_\mathrm{res}$ via Eqs. (\[resonancewidth\]) and (\[resonanceshift\]), respectively. To this end, we shall first determine the zero energy continuum state $\phi_0^{(+)}$ in Eq. (\[resonancewidth\]) and the zero energy Green’s function $G_\mathrm{bg}(0)$ in Eq. (\[resonanceshift\]) in terms of the $T$ matrix associated with the background scattering, which is given by Eq. (\[LSTmatrix\]). The continuum energy states associated with the relative momentum $\mathbf{p}$ fulfil the Lippmann-Schwinger equation [@Newton82] ($E=p^2/m$): $$|\phi_\mathbf{p}^{(+)}\rangle=|\mathbf{p}\rangle+
G_0(E+i0)T_\mathrm{bg}(E+i0)|\mathbf{p}\rangle.$$ Here the energy argument “$z=E+i0$” ensures that the wave function $\phi_\mathbf{p}^{(+)}(\mathbf{r})$ has the long range asymptotic form of Eq. (\[BCphipplus\]). Furthermore, the Green’s function $G_\mathrm{bg}(z)$ is completely determined by the $T$ matrix in Eq. (\[LSTmatrix\]) in terms of the general formula [@Newton82]: $$G_\mathrm{bg}(z)=G_0(z)+G_0(z)T_\mathrm{bg}(z)G_0(z).$$ In the separable potential approach the exact $T$ matrix of the background scattering is given by Eq. (\[Bornseries\]), so that the subsequent determination of the right hand sides of Eqs. (\[resonancewidth\]) and (\[resonanceshift\]) involves just the evaluation of matrix elements of the zero energy free Green’s function $G_0(0)$ between the wave functions $\chi_\mathrm{bg}$ and $\chi$. Given the Gaussian form of these wave functions in Eqs. (\[separablepotentialGaussian\]) and (\[couplingGaussian\]), the matrix elements can be readily determined analytically in complete analogy to Eq. (\[matrixelementG0\]). This yields $$(\Delta B)=\frac{\zeta^2}{
\left[
\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})
\right]}
\frac{m}{4\pi\hbar^2a_\mathrm{bg}}
\left(
1-\frac{a_\mathrm{bg}}{\sqrt{\pi}\overline{\sigma}}
\right)^2
\label{deltaBofzetasigma}$$ for the resonance width and $$B_0-B_\mathrm{res}=(\Delta B)\frac{a_\mathrm{bg}}{\sqrt{\pi}\sigma}
\frac{1-\frac{a_\mathrm{bg}}{\sqrt{\pi}\sigma}
\left(\frac{\sigma}{\overline{\sigma}}\right)^2}
{\left(
1-\frac{a_\mathrm{bg}}{\sqrt{\pi}\sigma}
\frac{\sigma}{\overline{\sigma}}
\right)^2}
\label{shiftozetasigma}$$ for the shift. Here we have introduced the mean range parameter $$\overline{\sigma}=
\sqrt{\frac{1}{2}
\left(
\sigma^2+\sigma_\mathrm{bg}^2
\right)}.$$ Inserting the experimentally known width of the resonance and Eq. (\[magicformula\]) for the shift into the left hand side of Eqs. (\[deltaBofzetasigma\]) and (\[shiftozetasigma\]), respectively, then determines $\zeta$ and $\sigma$ in terms of $(\Delta B)$, $C_6$ and the slope of the resonance. Using $(\Delta B)=0.02$ mT [@Volz03], $C_6=4660$ a.u. [@Roberts01] and $\left[\frac{dE_\mathrm{res}}{dB}(B_\mathrm{res})\right]=h\times 38$ MHz/mT for the 100 mT Feshbach resonance of $^{87}$Rb, the parameters of the interchannel coupling can be summarised as $\sigma=21.50035463 \ a_\mathrm{Bohr}$ and $m\zeta^2/(4\pi\hbar^2\sigma)=h\times 8.0536126$ MHz.
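As a consistency check (not part of the original text), the quoted coupling parameters can be inserted back into Eq. (\[deltaBofzetasigma\]); with the quoted $\sigma$, $\sigma_\mathrm{bg}$ and $m\zeta^2/(4\pi\hbar^2\sigma)$ this should recover the experimental width $(\Delta B)=0.02$ mT.

```python
import math

# 87Rb parameters quoted in the text (lengths in Bohr radii, energies in MHz/h)
abg      = 100.0            # background scattering length
sigma_bg = 42.90753599      # background range parameter
sigma    = 21.50035463      # interchannel-coupling range parameter
zeta2    = 8.0536126        # m zeta^2/(4 pi hbar^2 sigma) in MHz
slope    = 38.0             # dE_res/dB at B_res, in MHz per mT

# Mean range parameter
sigma_bar = math.sqrt(0.5 * (sigma**2 + sigma_bg**2))

# Resonance width, Eq. (deltaBofzetasigma); note that
# m zeta^2/(4 pi hbar^2) = zeta2 * sigma, since zeta2 is quoted divided by sigma.
deltaB = (zeta2 * sigma) / (slope * abg) \
         * (1.0 - abg / (math.sqrt(math.pi) * sigma_bar))**2

print(deltaB)   # ~ 0.0200 mT
```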
[99]{} For a review see: B. Goss Levi, Physics Today **53**, no. 9, 46 (2000). J. Weiner, V. Bagnato, S. Zilio, and P.S. Julienne, Rev. Mod. Phys. **71**, 1 (1999). M. Randeria, in: [*Bose-Einstein condensation*]{}, edited by A. Griffin, D.W. Snoke, and S. Stringari, 355 (Cambridge University Press, Cambridge, 1995). M. Baranov, [Ł]{}. Dobrek, K. G[ó]{}ral, L. Santos, and M. Lewenstein, Phys. Scr. **T102**, 74 (2002). R. Wynar, R.S. Freeland, D.J. Han, C. Ryu, and D.J. Heinzen, Science **287**, 1016 (2000). S. Inouye, M.R. Andrews, J. Stenger, H.-J. Miesner, D.M. Stamper-Kurn, and W. Ketterle, Nature (London) **392**, 151 (1998). E.A. Donley, N.R. Claussen, S.T. Thompson, and C.E. Wieman, Nature (London) **417**, 529 (2002). N.R. Claussen, S.J.J.M.F. Kokkelmans, S.T. Thompson, E.A. Donley, E. Hodby, and C.E. Wieman, Phys. Rev. A **67**, 060701 (2003). J. Herbig, T. Kraemer, M. Mark, T. Weber, C. Chin, H.-Ch. N[ä]{}gerl, and R. Grimm, Science **301**, 1510 (2003). S. Dürr, T. Volz, A. Marte, and G. Rempe, Phys. Rev. Lett. **92**, 020406 (2004). K. Xu, T. Mukaiyama, J.R. Abo-Shaeer, J.K. Chin, D.E. Miller, and W. Ketterle, Phys. Rev. Lett. **91**, 210402 (2003). C.A. Regal, C. Ticknor, J.L. Bohn, and D.S. Jin, Nature (London) **424**, 47 (2003). K.E. Strecker, G.B. Partridge, and R.G. Hulet, Phys. Rev. Lett. **91**, 080406 (2003). J. Cubizolles, T. Bourdel, S.J.J.M.F. Kokkelmans, G.V. Shlyapnikov, and C. Salomon, Phys. Rev. Lett. **91**, 240401 (2003). S. Jochim, M. Bartenstein, A. Altmeyer, G. Hendl, C. Chin, J. Hecker Denschlag, and R. Grimm, Phys. Rev. Lett. **91**, 240402 (2003). C. A. Regal, M. Greiner, and D. S. Jin, Phys. Rev. Lett. **92**, 083201 (2004). M. Greiner, C.A. Regal, and D.S. Jin, Nature (London) **426**, 537 (2003). S. Jochim, M. Bartenstein, A. Altmeyer, G. Hendl, S. Riedl, C. Chin, J. Hecker Denschlag, and R. Grimm, Science **302**, 2101 (2003). M.W. Zwierlein, C.A. Stan, C.H. Schunck, S.M.F. Raupach, S. Gupta, Z. Hadzibabic, and W. 
Ketterle, Phys. Rev. Lett. **91**, 250401 (2003). M. Bartenstein, A. Altmeyer, S. Riedl, S. Jochim, C. Chin, J. Hecker Denschlag, and R. Grimm, Phys. Rev. Lett. **92**, 120401 (2004). C. A. Regal, M. Greiner, and D. S. Jin, Phys. Rev. Lett. **92**, 040403 (2004). M. W. Zwierlein, C. A. Stan, C. H. Schunck, S. M. F. Raupach, A. J. Kerman, and W. Ketterle, Phys. Rev. Lett. **92**, 120403 (2004). M. Bartenstein, A. Altmeyer, S. Riedl, S. Jochim, C. Chin, J. Hecker Denschlag, and R. Grimm, arXiv e-print cond-mat/0403716 (2004). P. Soldán, M.T. Cvitaš, J.M. Hutson, P. Honvault, and J.-M. Launay, Phys. Rev. Lett. **89**, 153201 (2002). A. Marte, T. Volz, J. Schuster, S. Dürr, G. Rempe, E.G.M. van Kempen, and B.J. Verhaar, Phys. Rev. Lett. **89**, 283202 (2002). T. Köhler and K. Burnett, Phys. Rev. A **65**, 033601 (2002). P.D. Drummond, K.V. Kheruntsyan, and H. He, Phys. Rev. Lett. **81**, 3055 (1998). E. Timmermans, P. Tommasini, M. Hussein, and A. Kerman, Phys. Rep. **315**, 199 (1999). V.A. Yurovsky, A. Ben-Reuven, P.S. Julienne, and C.J. Williams, Phys. Rev. A **62**, 043605 (2000). K. G[ó]{}ral, M. Gajda, and K. Rzążewski, Phys. Rev. Lett. **86**, 1397 (2001). A.N. Salgueiro, M.C. Nemes, M.D. Sampaio, and A.F.R.D. Piza, Physica A **290**, 4 (2001). S.K. Adhikari, J. Phys. B **34**, 4231 (2001). B.J. Cusack, T.J. Alexander, E.A. Ostrovskaya, and Y.S. Kivshar, Phys. Rev. A **65**, 013609 (2002). F.K. Abdullaev and V.V. Konotop, Phys. Rev. A **68**, 013605 (2003). P. Naidon and F. Masnou-Seeuws, Phys. Rev. A **68**, 033612 (2003). F.H. Mies, E. Tiesinga, and P.S. Julienne, Phys. Rev. A **61**, 022721 (2000). B. Gao, Phys. Rev. A **58**, 4222 (1998). R.G. Newton, [*Scattering Theory of Waves and Particles*]{} (Springer, New York, 1982). M.S. Child, [*Molecular Collision Theory*]{} (Academic, London, 1974). The derivation of the formula is based on ideas of multichannel quantum defect theory as outlined in: P.S. Julienne and F.H. Mies, J. Opt. Soc. 
Am. B **6**, 2257 (1989). G.F. Gribakin and V.V. Flambaum, Phys. Rev. A **48**, 546 (1993). J.L. Roberts, J.P. Burke, Jr., N.R. Claussen, S.L. Cornish, E.A. Donley, and C.E. Wieman, Phys. Rev. A **64**, 024702 (2001). E.G.M. van Kempen, S.J.J.M.F. Kokkelmans, D.J. Heinzen, and B.J. Verhaar, Phys. Rev. Lett. **88**, 093201 (2002). T. Volz, S. Dürr, S. Ernst, A. Marte, and G. Rempe, Phys. Rev. A **68**, 010702 (2003). T. Köhler, T. Gasenzer, and K. Burnett, Phys. Rev. A **67**, 013601 (2003). We note that the $^{87}$Rb atoms in the $(F=1,m_F=+1)$ electronic ground state cannot be magnetically trapped. The harmonic potential is, therefore, provided by crossed laser beams. These optical atom traps can confine atoms and molecules at the same time. F. Dalfovo, S. Giorgini, L.P. Pitaevskii, and S. Stringari, Rev. Mod. Phys. **71**, 463 (1999). A.L. Fetter and D.L. Feder, Phys. Rev. A **58**, 3185 (1998). Yu.N. Demkov and V.I. Osherov, Sov. Phys. JETP **26**, 916 (1968). This formula for the spectral density can also be obtained from the intuitive derivation of: T. Mukaiyama, J.R. Abo-Shaeer, K. Xu, J.K. Chin, and W. Ketterle, e-print arXiv cond-mat/0311558. S.L. Cornish, N.R. Claussen, J.L. Roberts, E.A. Cornell, and C.E. Wieman, Phys. Rev. Lett. **85**, 1795 (2000). T. Köhler, Phys. Rev. Lett. **89**, 210404 (2002). T. Köhler, T. Gasenzer, P.S. Julienne, and K. Burnett, Phys. Rev. Lett. **91**, 230401 (2003). T. Köhler, K. G[ó]{}ral, and T. Gasenzer, e-print arXiv cond-mat/0305060. J. Fricke, Ann. Phys. (N.Y.) **252**, 479 (1996). A.L. Fetter and J.D. Walecka, [*Quantum Theory of Many-Particle Systems*]{} (McGraw-Hill, New York, 1971). For a general description of the detection of composite particles see, e.g., J.D. Dollard, J. Math. Phys. [**14**]{}, 708 (1973). The second, factorised contribution to the molecular mean field on the right hand side of Eq. 
(\[psibgeneral\]) can be interpreted as an overlap between the atomic mean field and the molecular bound state wave function. When the gas is strongly interacting the molecular bound state has a spatial extent comparable to the mean distance of the atoms in the gas (cf. Eq. (\[bondlength\])), and the overlap becomes significant. As a consequence, one and the same atom could contribute to several bound molecules, so that a separation of the gas into bound and free atoms becomes physically meaningless. Private communication from V. Boyer, D. Cassetari, R.M. Godun, G. Smirne, C.M. Chandrashekar, and C.J. Foot (2003). An explicit determination of the energy spectra of the burst atoms and their typical single particle energy scale of $k_\mathrm{B}\times$150 nK in the atom-molecule coherence experiments of Ref. [@Donley02] has been performed in Ref. [@KGB03]. F.A. van Abeelen and B.J. Verhaar, Phys. Rev. Lett. **83**, 1550 (1999). This work also accounts for the finite lifetime of the closed channel resonance state in terms of a decay constant. The admixture of the resonance state to the highest excited vibrational bound state depends on the Feshbach resonance. Over the whole range of magnetic field strengths between 15.5 mT and 16.22 mT in the atom-molecule coherence experiments [@Donley02] with $^{85}$Rb Bose-Einstein condensates, the admixture of the resonance state is even less than 30 % (cf., also, Ref. [@TKTGPSJKB03]). S.J.J.M.F. Kokkelmans and M.J. Holland, Phys. Rev. Lett. **89**, 180401 (2002). M. Mackie, K.-A. Suominen, and J. Javanainen, Phys. Rev. Lett. **89**, 180403 (2002). V.A. Yurovsky and A. Ben-Reuven, Phys. Rev. A **67**, 043611 (2003). B. Borca, D. Blume, and C.H. Greene, New J. Phys. **5**, 111 (2003). E. Braaten, H.-W. Hammer, and M. Kusunoki, e-print cond-mat/0301489. R.A. Duine and H.T.C. Stoof, Phys. Rev. A **68**, 013602 (2003). 
An equivalent choice of parameters would be $a_\mathrm{bg}$ and the van der Waals dispersion coefficient $C_6$ [@Gao98]. C. Lovelace, Phys. Rev. **135**, B1225 (1964).
---
abstract: 'We discuss techniques of the density matrix renormalization group and their application to interacting fermion systems in more than one dimension. We show numerical results for equal–time spin–spin and singlet pair field correlation functions, as well as the spin gap for the Hubbard model on two chains. The system is a gapped spin liquid at half–filling and shows weak algebraic $d$-wave–like pair field correlations away from half–filling.'
author:
- |
R.M. Noack and S.R. White\
Department of Physics\
University of California, Irvine, CA 92717\
\
D.J. Scalapino\
Department of Physics\
University of California, Santa Barbara, CA 93106
title: The Density Matrix Renormalization Group for Fermion Systems
---
Introduction
============
The numerical renormalization group was developed by Wilson [@wilson1] and used by him to solve the one impurity Kondo problem. The technique was subsequently applied to a number of quantum lattice systems [@earlyrg; @lee1] such as the Hubbard and Heisenberg models, but with little success. A suggestion by Wilson [@wilson2] to investigate why the technique fails for the simplest quantum lattice system, the one-dimensional electron gas, led to the development of a number of new techniques to overcome the difficulties of the numerical RG for this simple system [@whitenoack]. White [@whitedmrg] was able to generalize one of these techniques to interacting systems, applying it successfully to one dimensional quantum spin systems. This technique has come to be known as the density matrix renormalization group (DMRG).
This paper describes our current efforts to apply the DMRG to fermion systems in more than one dimension, and in particular to the Hubbard model. So far, we have successfully applied the method to the Hubbard model on one and two chains [@twochains]. Here we discuss the details of the methods we have developed for the two–chain Hubbard model and show results for equal–time pair field and spin–spin correlation functions and for the spin gap for the half–filled and doped systems on lattices of up to $2\times 32$ sites.
At half–filling, both pair field and spin–spin correlations decay exponentially, with the spin correlations having a longer correlation length. There is a spin gap present at half filling which gets smaller as the system is doped, but persists down to band fillings of $\langle n \rangle =0.75$. For the doped system, the largest pair field correlations are ones in which a spin singlet pair is formed on adjacent sites on different chains. The pair field symmetry is $d$–wave–like in that the pair field wave function has opposite sign along and between the chains. The form of the decay of the pair field correlations for the doped system is algebraic with a form close to that of the noninteracting system, which decays as $\ell^{-2}$.
The Density Matrix Renormalization Group
========================================
The goal of the procedures discussed here is to find the properties of the low-lying states of a quantum system on a particular finite lattice. One way to do this would be to diagonalize the Hamiltonian matrix using a sparse matrix diagonalization method such as the Lanczos technique. However, for interacting quantum lattice systems, the number of states grows exponentially with the size of the lattice. Since exact diagonalization techniques must keep track of all the states, the maximum possible lattice size for interacting Hamiltonians is severely limited. It is therefore desirable to develop a procedure in which the Hilbert space of the Hamiltonian can be truncated in a controlled way so that only states that are important in making up the low-lying states of the system are included in a diagonalization. The DMRG provides a procedure for building up such a representation of the Hamiltonian matrix, which is then diagonalized to provide the properties of the low-lying states of the finite system.
The strategy of the DMRG is to build up a portion of the system (called the system block) using a real–space blocking procedure and then truncate the basis of its Hamiltonian after each blocking. In this way, the size of the Hilbert space is kept manageable as the system block is built up. The key idea is the method of truncating the Hilbert space of the system block in a controlled way. This is done by forming the reduced density matrix for the system block, given an eigenstate of the entire lattice. Let us first examine this procedure.
The Density Matrix Projection
------------------------------
Consider a complete system (the “universe”), divided into two parts, the “system”, labeled by coordinate $i$, and the “environment” [@liang], labeled by coordinate $j$. If we knew the exact state $\psi_{ij}$ of the universe (assuming the universe is in a pure state), the prescription for finding the state of the system block would be to form the reduced density matrix of the system as part of the universe, $$\rho_{ii'} = \sum_j \psi_{ij} \psi^*_{i'j}.
\label{denmateqn}$$ The state of the system block is then given by a linear combination of the eigenstates of the density matrix with weight given by the eigenvalues. It is shown in Ref. [@whitedmrg] that the optimal reduced basis set for the system block is given by the eigenstates of the density matrix with the largest weights. The sum of the density matrix weights of the discarded states gives the magnitude of the truncation error.
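The statement about the truncation error can be verified directly: the squared norm lost by projecting $\psi_{ij}$ onto the $m$ kept density-matrix eigenstates equals the summed weights of the discarded ones. A minimal numerical sketch (not from the original text; random state and illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

dim_sys, dim_env, m = 16, 16, 4        # illustrative dimensions; keep m states

# Normalised pure state psi_{ij} of the "universe" (system index i, environment j)
psi = rng.standard_normal((dim_sys, dim_env))
psi /= np.linalg.norm(psi)

# Reduced density matrix of the system block, rho_{ii'} = sum_j psi_{ij} psi*_{i'j}
rho = psi @ psi.T
w, v = np.linalg.eigh(rho)             # eigenvalues in ascending order

kept = v[:, -m:]                       # the m highest-weighted eigenstates
truncation_error = w[:-m].sum()        # summed weights of the discarded states

# Projecting psi onto the kept basis loses exactly the discarded weight:
psi_trunc = kept @ (kept.T @ psi)
lost = 1.0 - np.linalg.norm(psi_trunc)**2

print(truncation_error, lost)          # equal up to round-off
```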
Algorithms
==========
The density matrix projection procedure gives us a way of truncating the basis set of the matrix for the system block in a controlled way as degrees of freedom are added to the system. The projection procedure of the previous section assumes that the wavefunction $\psi_{ij}$ of the system is known. Of course, finding $\psi_{ij}$ is the goal of the DMRG procedure, so effective algorithms must iteratively improve approximations to $\psi_{ij}$. We will first discuss the algorithms for one-dimensional systems, as developed in Ref. [@whitedmrg].
In order to perform the density matrix projection procedure, we form the Hamiltonian for a “superblock” which is an approximation to the universe of the previous section. In this case, the superblock will describe a one-dimensional lattice of $L$ sites, with, for example, a Heisenberg or Hubbard Hamiltonian. The superblock configuration used for the one–dimensional algorithms developed in Ref. [@whitedmrg] is shown in Fig. \[figsuper1d\]. The superblock is formed from an approximate Hamiltonian for the system block containing $\ell$ sites (labeled by $B_\ell$), the Hamiltonians for two single sites which can be treated exactly, represented by solid circles, and an approximate Hamiltonian for the rightmost $\ell'$ sites, labeled by $B^R_{\ell'}$. Thus, the superblock contains $L = \ell + \ell' + 2$ sites. The algorithm proceeds as follows:
[1.]{} The superblock Hamiltonian is diagonalized using a Lanczos or similar exact diagonalization technique to find a particular target eigenstate $\psi_{ij}$.
[2.]{} The reduced density matrix is formed for the system block $B'_{\ell+1}$ using Eq. (\[denmateqn\]).
[3.]{} The density matrix is diagonalized using a dense matrix diagonalization.
[4.]{} The Hamiltonian for $B'_{\ell+1}$ is transformed to a truncated basis formed by the $m$ highest weighted eigenstates of the density matrix.
[5.]{} This approximate Hamiltonian, labeled by $B_{\ell+1}$, is used as a starting point for the next iteration, starting with step 1.
Initially we choose $\ell$ to be small enough (a single site, for example) so that the Hamiltonian for $B_\ell$ can be treated exactly. The system block then grows by a single site at each iteration, but the dimension of its Hilbert space remains $m$. Only a single site is added to $B_\ell$ at each step in order to minimize the size of the superblock Hamiltonian, whose dimension will be $n^2mm'$ where $n$ is the number of states per site, and $m'$ is the size of the basis for $B^R_{\ell'}$.
The Infinite System Procedure
-----------------------------
The method we use to choose $B^R_{\ell'}$ at each step divides DMRG algorithms into two classes, the infinite system procedure and the finite system procedure. In the infinite system procedure, $B^R_{\ell'}$ is chosen to be the spatial reflection of $B_\ell$ so that $\ell=\ell'$. This means that the size $L$ of the superblock grows by two sites at each iteration. The procedure can be iterated until the energy, calculated in the superblock diagonalization, converges.
The advantage of the infinite system procedure is that calculated quantities scale to their infinite system values. In this sense, this procedure is in the spirit of the original real–space renormalization group. The disadvantages of the infinite system procedure are that for a given system size, it is less accurate than the finite system procedure, and that it cannot easily be generalized to two–dimensional systems. For a two-dimensional system, if a single site is added to the system block at each step, an environment block of the proper geometry cannot in general be formed from the reflected system block.
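For concreteness, the infinite system procedure of steps 1–5 can be written down compactly for the spin-$\frac{1}{2}$ Heisenberg chain. The sketch below is our illustrative reconstruction, not the authors' code: it uses dense linear algebra and a small number of kept states $m$, and its energy-per-site estimate $(E_L-E_{L-2})/2$ converges towards the Bethe-ansatz value $1/4-\ln 2\approx-0.4431$.

```python
import numpy as np

# Infinite-system DMRG for the S=1/2 Heisenberg chain (J = 1),
# H = sum_i [ Sz_i Sz_{i+1} + (S+_i S-_{i+1} + S-_i S+_{i+1})/2 ].

Sz1 = np.array([[0.5, 0.0], [0.0, -0.5]])
Sp1 = np.array([[0.0, 1.0], [0.0, 0.0]])        # single-site S+

def enlarge(H, Sz, Sp):
    """Add one exact site to the edge of a block (the real-space blocking)."""
    I_blk, I_site = np.eye(H.shape[0]), np.eye(2)
    H_enl = (np.kron(H, I_site) + np.kron(Sz, Sz1)
             + 0.5*(np.kron(Sp, Sp1.T) + np.kron(Sp.T, Sp1)))
    return H_enl, np.kron(I_blk, Sz1), np.kron(I_blk, Sp1)

m = 10                                           # states kept per block
H, Sz, Sp = np.zeros((2, 2)), Sz1, Sp1           # system block = one site
E_prev, e_site = 0.0, 0.0
for _ in range(30):
    # 1. superblock = enlarged block + its reflection; diagonalise
    H_enl, Sz_e, Sp_e = enlarge(H, Sz, Sp)
    d = H_enl.shape[0]
    I = np.eye(d)
    H_super = (np.kron(H_enl, I) + np.kron(I, H_enl) + np.kron(Sz_e, Sz_e)
               + 0.5*(np.kron(Sp_e, Sp_e.T) + np.kron(Sp_e.T, Sp_e)))
    w, v = np.linalg.eigh(H_super)
    E, psi = w[0], v[:, 0]
    # 2.-3. reduced density matrix of the system block and its eigenbasis
    psi_mat = psi.reshape(d, d)
    ew, ev = np.linalg.eigh(psi_mat @ psi_mat.T)
    # 4. truncate to the m highest-weighted density-matrix eigenstates
    T = ev[:, -min(m, d):]
    H, Sz, Sp = T.T @ H_enl @ T, T.T @ Sz_e @ T, T.T @ Sp_e @ T
    # 5. iterate; the superblock grows by two sites per step
    e_site = (E - E_prev) / 2.0                  # energy per added site
    E_prev = E

print(e_site)    # approaches 1/4 - ln 2 = -0.443147...
```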
The Finite System Procedure
---------------------------
In the finite system procedure, the superblock is formed so that it describes the same finite lattice at each iteration. In other words, the block $B^R_{\ell'}$ is chosen so that $L=\ell + \ell' + 2$ remains fixed. We can do this by repeating, more than once, the procedure (which we call a sweep through the lattice) in which the system block is built up from $\ell = 1$ to $\ell = L-3$. After one sweep, the system block can be built up from the other side of the lattice, and the stored set of system blocks from that sweep can be used as environment blocks for the next sweep. The procedure is analogous to zipping a zipper back and forth once through the lattice, where the location of the zipper is the location of the single site added to $B_\ell$. The sweeps can be repeated until the energy or some other quantity of interest converges. In practice, we have found that it only takes a few sweeps through the lattice to achieve convergence to within truncation error for a given $m$. The power of this procedure lies in the iterative improvement of the environment block.
On the initial sweep of the finite system procedure, the environment blocks are undefined. For one dimensional systems, however, one can build up the superblock size using the infinite system procedure and use reflections of the stored blocks $B_\ell$ for $B^R_{\ell'}$ on the initial sweep.
There are a number of advantages to the finite system procedure. First, since the environment blocks are iteratively improved with each sweep through the lattice, the finite system procedure gives much more accurate results for a particular lattice size than the infinite system procedure, although the infinite system procedure can give results that are closer to the thermodynamic limit. It might be possible to combine the two procedures in a hybrid algorithm to get more accurate results for a given $m$ in the thermodynamic limit.
Second, since the environment block no longer must be a reflection of the system block, it is possible to study lattices that are no longer reflection symmetric. This is useful, for example, in studying systems with impurities or disorder.
Third, in the finite system procedure, the target state of the superblock is the same at each iteration, with unchanging quantum numbers, unlike in the infinite system procedure. For the one-dimensional Heisenberg model calculations described in Ref. [@whitedmrg], the states are labeled only by the $z$ component of the total spin, $S_z$, so it is easy to find a state with the appropriate quantum number for different lattice sizes. For fermion systems such as the Hubbard model, however, $N_\uparrow$ and $N_\downarrow$, the number of spin up and spin down fermions, are good quantum numbers. Since $N_\downarrow$ and $N_\uparrow$ must be integers, it is impossible to choose them so that the overall occupation stays constant on all different lattice sizes, except at half–filling. The best one can do is to target one or more states closest to the proper density, and this leads to reduced accuracy for non–half–filled systems.
Fourth, it is much easier to extend the finite system procedure to lattices of more than one dimension.
Extension to Higher Dimensions
------------------------------
One way to extend these algorithms to more than one dimension would be to replace the single sites added between the blocks with a row of sites. However, the extra degrees of freedom added to the system at each real–space blocking would make the size of the superblock Hilbert space prohibitively large. Therefore, the two–dimensional algorithms we have developed still involve adding a single site at a time to the system block. This can be done by adding sites in a connected one-dimensional path through the two–dimensional lattice, i.e. by folding the one-dimensional zipper into two dimensions. A typical superblock configuration for the two-dimensional algorithm is shown in Fig. \[figsuper2d\]. The site added to the system block is enclosed by a dashed line and the dotted line shows the order in which sites are added to the system block for a sweep. One can see that it is not possible to reflect the system block into an environment block of the proper geometry at every iteration, so the finite system algorithm must be used. The two–dimensional procedure differs from the one–dimensional finite size procedure only in that there are additional connections between the system and environment blocks along the boundary.
For one–dimensional lattices, we use the infinite system procedure to build up the superblock to the proper size on the first sweep through the lattice. Since this can no longer be done for higher dimensional lattices, we must formulate a procedure for the initial sweep through the lattice. The simplest procedure is to use an empty environment block on the first sweep. One can diagonalize the Hamiltonian for the system block and keep the $m$ states of lowest energy. This procedure is equivalent to Wilson’s original numerical renormalization group procedure, and is not very accurate even for a single electron on a one–dimensional lattice, as shown in Ref. [@whitenoack]. In addition, for fermion systems, one must adjust the chemical potential $\mu$ so that states with the proper $N_\uparrow$ and $N_\downarrow$ quantum numbers have the lowest energy. The procedure is quite sensitive to these adjustments. This initialization technique thus tends to be inaccurate and hard to use for fermion systems.
Liang [@liang2] has tried two other techniques for the initial sweep. In the first, he performs an initial infinite system sweep for a one-dimensional lattice, then turns on the additional couplings needed to make the lattice two dimensional on subsequent finite system sweeps. In the second, he uses as the environment block an approximate Hamiltonian for a one–dimensional system of the size of the row length. Both of these procedures depend on representing portions of two-dimensional states by one-dimensional states, and thus give poor representations of the superblock initially.
The technique which we find works best for the initial sweep is a hybrid procedure in which the finite system procedure for a smaller lattice size is repeated for a few iterations, until the system block is big enough so that its reflection can be used for the environment block of a superblock that is a row larger. Thus, the superblock is extended a row at a time. Initially, the first row can be built up with a one-dimensional infinite system procedure. This procedure minimizes problems with target states with inappropriate quantum numbers and provides a reasonable representation for two–dimensional states.
We have found that the accuracy of the initial sweep is not critical as long as the first set of environment blocks has a set of states with appropriate quantum numbers. In most cases, a few sweeps of the finite system procedure will improve the environment blocks sufficiently so that the procedure will converge.
Performance Considerations
--------------------------
The number of states needed to maintain a certain truncation error in the density matrix projection procedure depends strongly on the number of operators connecting the two parts of the system. Best accuracy is obtained when the number of connections between the system and environment blocks is minimized. Therefore, we study systems with open rather than periodic or antiperiodic boundary conditions. Also, we find that the number of states $m$ needed to maintain a given accuracy depends strongly on the width and weakly on the length of the system.
Just how rapidly the truncation error increases with the width of the system is not clear in general. Liang [@liang2] studied the error in the energy as a function of width for a gas of noninteracting spinless fermions and found that the number of states needed to maintain a given accuracy grew exponentially with the width of the system. In an interacting system such as the Hubbard model, the detailed structure of the energy spectrum seems to be important. For example, in the two chain Hubbard model at half–filling, where there is a spin and pairing gap, the truncation error for a given $m$ is much smaller than away from half–filling, where the spin gap is reduced and the gap to pairing excitations is no longer present. For multiple Hubbard or Heisenberg chains, the presence or absence of a gap in the spin spectrum depends on whether the number of chains is even or odd[@twospinchains], so the truncation error for a given $m$ depends on the number of chains in a complicated way. Also, increasing the strength of on-site interactions can reduce the truncation error. The Hubbard model DMRG is most accurate for large $U$ and least accurate for $U=0$.
For systems of more than one dimension, it is therefore important to be able to keep as many states $m$ per block as possible. We have been able to improve the performance of the algorithm in a number of ways. One way of doing this is to minimize the size of the superblock Hilbert space, whose dimension is $n^2mm'$. For fermion systems, one can reduce the number of states per site $n$ from four to two by treating the spin degree of freedom on the same footing as a spatial coordinate. A site for a particular spatial coordinate and spin can have an occupancy of zero or one fermion. While this makes the path through the lattice (which now has an added dimension) somewhat more complicated, we have found that by adding these “half–sites” instead of full spatial sites on the last few sweeps through the lattice we can increase the accuracy by increasing $m$. We have also found that $m'$ can be made smaller than $m$ without losing much accuracy in the truncation[@whitedmrg]. Since the representation of the approximate block Hamiltonians is poor on the first few sweeps through the lattice, making $m$ large initially does not improve the representation very much. Therefore, the most efficient procedure is to increase $m$ after every sweep through the lattice, so that $m'$ is $m$ from the previous sweep.
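The bookkeeping behind the half–site trick can be made concrete with a toy count (a sketch; the helper `superblock_dim` and the particular $m$ values are our own illustration):

```python
def superblock_dim(n, m, mp):
    # superblock Hilbert space dimension: (system block, m states) x
    # (added site, n states) x (added site, n states) x
    # (environment block, mp states) = n^2 * m * mp
    return n * n * m * mp

# full fermion site: n = 4 (empty, up, down, doubly occupied)
full = superblock_dim(4, 200, 200)
# "half-site" (one spin species per site): n = 2 (empty, occupied)
half = superblock_dim(2, 200, 200)
assert full // half == 4
# at fixed superblock size, halving n allows doubling m and m':
assert superblock_dim(2, 400, 400) == superblock_dim(4, 200, 200)
```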
We have made a major effort to write the code in an efficient way in C++. We store only the nonzero parts of operators that link states with particular quantum numbers. These matrices are dense in general because the basis transformation at each step mixes matrix elements. This representation minimizes memory usage and makes it possible to highly optimize the multiplication of a vector by the Hamiltonian, the basic step needed for the Lanczos diagonalization. However, the resulting data structures are complicated and variable in size, so that it has been useful to take advantage of the object–oriented data structures and dynamic memory allocation available in C++. The code is currently limited more by memory usage than by computer time, although we minimize memory usage by writing to disk all operators not needed for a particular superblock diagonalization step. The current version of the code can handle $m=400$ or more, whereas the original Fortran code used in Ref. [@whitedmrg] for the computationally less demanding Heisenberg spin problem could keep at most $m=200$. We have found that $m \approx 400$ is necessary in order to obtain accurate results for the two–chain Hubbard model away from half–filling.
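The quantum-number-blocked storage scheme is language independent; although the paper's implementation is in C++, a minimal Python sketch conveys the idea (all names here are ours, and the "quantum numbers" are abstract labels):

```python
import numpy as np

def blocked_matvec(op_blocks, vec):
    """Apply a block-stored operator to a block-stored vector.

    A vector is a dict {qn: coefficient array}; an operator is a dict
    {(qn_out, qn_in): dense block} holding only the nonzero blocks
    that link states with particular quantum numbers."""
    out = {}
    for (q_out, q_in), block in op_blocks.items():
        if q_in in vec:
            out[q_out] = out.get(q_out, 0) + block @ vec[q_in]
    return out

# toy operator raising the particle number by one
rng = np.random.default_rng(0)
op = {(1, 0): rng.standard_normal((3, 2)),
      (2, 1): rng.standard_normal((1, 3))}
v = {0: np.ones(2)}       # a state purely in the qn = 0 sector
w = blocked_matvec(op, v) # lands purely in the qn = 1 sector
assert set(w) == {1} and w[1].shape == (3,)
```

Storing only these blocks is what makes the repeated Hamiltonian-times-vector step of the Lanczos iteration affordable at large $m$.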
Results for the two–chain Hubbard model
=======================================
The two–chain Hubbard model is described by the Hamiltonian $$\begin{aligned}
\begin{array}{cl}
H= &
- t_y \sum_{ i, \lambda \sigma}
( c^{\dagger}_{i,\lambda \sigma}c_{i+1,\lambda \sigma} +
c^{\dagger}_{i+1,\lambda\sigma}c_{i,\lambda \sigma} ) \\
&- t_x \sum_{i, \sigma}
( c^{\dagger}_{i,1 \sigma}c_{i,2 \sigma } +
c^{\dagger}_{i, 2\sigma}c_{i, 1 \sigma} )
+ U\sum_{ i, \lambda}
n_{i,\lambda \uparrow}n_{i,\lambda \downarrow } . \\
\end{array}\end{aligned}$$ We think of the lattice as being a ladder aligned with the $y$ axis so that $c^{\dagger}_{i, \lambda \sigma}$ creates an electron of spin $\sigma$ at rung $i$ and side $\lambda=1$ (left) or $2$ (right), the hopping along a chain is $t_y$, the hopping between chains on a rung is $t_x$, and $U$ is the on–site Coulomb repulsion. This system is thought to be relevant to a number of anisotropic two–dimensional systems, including $({\rm VO})_2 {\rm P}_2 {\rm O}_7$ [@johnston] and ${\rm Sr}_2 {\rm Cu}_4{\rm O}_6$ [@takano; @rice1], which have weakly coupled ladder–like structures arranged in planes. Here we will concentrate on a parameter regime relevant to the latter class of substances: $U/t_y=8$, and $t_x=t_y$. We will explore the phase diagram as a function of band filling as the half–filled system is doped with holes.
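As a point of reference for the correlation functions discussed below, at $U=0$ the ladder separates into bonding and antibonding bands. A minimal numerical sketch, assuming the standard tight-binding dispersion $\epsilon_\pm(k)=-2t_y\cos k\mp t_x$ (not written out explicitly above):

```python
import numpy as np

ty, tx = 1.0, 1.0  # isotropic case studied in the text
k = np.linspace(-np.pi, np.pi, 501)
# bonding (q_x = 0) and antibonding (q_x = pi) bands of the U = 0 ladder
e_bond = -2 * ty * np.cos(k) - tx
e_anti = -2 * ty * np.cos(k) + tx
# each band has width 4*t_y, and the rung hopping splits them by 2*t_x
assert np.isclose(e_bond.max() - e_bond.min(), 4 * ty)
assert np.isclose(e_anti.min() - e_bond.min(), 2 * tx)
# for t_x = t_y both bands cross the Fermi level near half filling,
# so there are two Fermi points per band
assert e_bond.min() < 0.0 < e_bond.max()
```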
At half–filling, the Hubbard model maps to the Heisenberg model in the large $U/t_y$ limit. Therefore, the dominant correlations should be antiferromagnetic spin correlations. However, it is known that in the Heisenberg model on two chains [@dagotto; @barnes; @strong], there is a spin gap leading to an exponential decay of the spin correlation function. The origin of the spin gap is easy to understand in the limit of strong coupling across the rungs. In this case, the only interaction will be an antiferromagnetic coupling between the two spins on a rung. This two-spin system forms a spin singlet state and a higher energy triplet state with an energy separation of the Heisenberg coupling $J$. Away from half–filling, it is not clear what correlations dominate the behavior. Some authors [@rice2; @tsunetsugu] have predicted that singlet superconductivity with a partial d-wave symmetry should be the dominant order.
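The strong-rung argument can be checked directly on a single isolated rung; a minimal sketch diagonalizing $H=J\,\vec S_1\cdot\vec S_2$:

```python
import numpy as np

# spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

J = 1.0
# H = J S1 . S2 for the two spins on one rung
H = J * sum(np.kron(s, s) for s in (sx, sy, sz)).real
e = np.sort(np.linalg.eigvalsh(H))
# one singlet at -3J/4 and a threefold degenerate triplet at +J/4,
# so the singlet-triplet splitting (spin gap) is exactly J
assert np.allclose(e, [-0.75, 0.25, 0.25, 0.25])
```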
In order to resolve these issues, we have calculated equal time spin–spin and pair field correlation functions $S_{\lambda\lambda '}(i,j)=\langle M^z_{i,\lambda} M^z_{j,\lambda '}\rangle$, $D_{xx}(i,j)= \langle \Delta_{x i} \Delta^\dagger_{x j} \rangle$, and $D_{yx}(i,j) = \langle \Delta_{y i} \Delta^\dagger_{x j} \rangle$ with $$\begin{aligned}
\begin{array}{cl}
M^z_{i,\lambda} & = n_{i,\lambda \uparrow} - n_{i,\lambda \downarrow}\\
\Delta^\dagger_{x i} & = c^\dagger_{i,1 \uparrow} c^\dagger_{i,2 \downarrow}
- c^\dagger_{i,1 \downarrow} c^\dagger_{i,2 \uparrow} \\
\Delta^\dagger_{y i} & =
c^\dagger_{i+1,2 \uparrow} c^\dagger_{i,2 \downarrow}
- c^\dagger_{i+1,2 \downarrow} c^\dagger_{i,2 \uparrow}. \\
\end{array}\end{aligned}$$ Here $S_{11}(i,j)$ and $S_{12}(i,j)$ measure the spin-spin correlations along a chain and between the chains respectively, and $D_{xx}(i,j)$ measures the singlet pair field correlations in which a singlet pair is added at rung $j$ and removed at rung $i$. In addition, $D_{yx} (i,j)$ measures the pair field correlations in which a singlet pair is added to rung $j$ and removed from the right–hand chain between rungs $i$ and $i+1$. The relative phase of the pair wave function across the $i$th rung and along one chain from $i$ to $i+1$ is given by comparing the phase of $D_{xx}(i,j)$ to that of $D_{yx}(i,j)$. This turns out to be negative, corresponding to the mean field result obtained in Ref. [@rice2]. However, the non-interacting $U=0$ result at a filling $\langle n \rangle = 0.875$ is also negative.
Fig. \[figsemilog\] shows the logarithm of the antiferromagnetic spin–spin correlation function and the cross–chain pairing correlation function $D_{xx}(i-j)$. Both correlation functions decay exponentially with $|i-j|$, but the pair field correlations decay much more rapidly. The correlation length, calculated from the slope of the lines in the semilog plot, is plotted as a function of $U/t_y$ in the inset. The spin–spin correlation length decreases as $U$ is increased, saturating at a value near 3 lattice spacings for large $U$. We have calculated the spin–spin correlation length for the isotropic two chain Heisenberg model using the DMRG [@twospinchains] and find a value of 3.19 lattice spacings, consistent with the large-$U$ limiting value. The pair field correlations decay with a correlation length of the order of a lattice spacing and are thus negligible at half–filling.
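Extracting a correlation length from the slope of a semilog plot, as described above, can be sketched on synthetic data (the value 3.19 lattice spacings is taken from the text; the fitting procedure itself is our illustration):

```python
import numpy as np

# synthetic staggered, exponentially decaying correlation function
xi_true = 3.19
r = np.arange(1, 15)
S = (-1) ** r * np.exp(-r / xi_true)

# on a semilog plot, log|S| vs r is a straight line of slope -1/xi
slope, _ = np.polyfit(r, np.log(np.abs(S)), 1)
xi_fit = -1.0 / slope
assert abs(xi_fit - xi_true) < 1e-6
```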
In order to determine the behavior of the spin correlations as the system is doped below half-filling, we have calculated the magnetic structure factor $S(q_x,q_y)$ by taking the Fourier transform of $S_{\lambda\lambda '}(i,j)$. Since the lattice is long in the $y$ direction and the spin–spin correlation function decays exponentially with $ | i - j | $, one can take a continuous Fourier transform in the $y$ direction without introducing much error. Since there are two chains, $q_x$ can take on the values $0$ and $\pi$. Only the $S(\pi,q_y)$ branch is interesting, because the correlations are always antiferromagnetic across the rungs. This function is plotted in Fig. \[figSq\] for the fillings $\langle n \rangle = 1.0, 0.96875, 0.875, 0.75$, corresponding to doping 0, 2, 8, and 16 holes into the half–filled $2\times 32$ lattice. As the system is doped away from half-filling, $S(\pi,q_y)$ peaks at a wavevector $q_y = \langle n \rangle \pi$. The residual peak at $q_y=\pi$ present for $\langle n \rangle = 0.875$ and $\langle n \rangle = 0.75$ is present only for even numbers of hole pairs and thus probably disappears in the thermodynamic limit. Therefore, we see that the spin–spin correlations develop incommensurate structure as the system is doped away from half–filling.
One can calculate the spin gap directly, by calculating the difference in energies between the ground state, which has total spin $S=0$, and the lowest lying $S=1$ state. We calculate the ground state energy for $N_\uparrow$ spin up electrons and $N_\downarrow$ spin down electrons, $E_0(N_\uparrow,N_\downarrow)$. The spin gap for a system with $N_\uparrow=N_\downarrow=N$ electrons is then given by $\Delta_{\rm spin} = E_0(N+1,N-1) - E_0(N,N)$. The spin gap plotted as a function of filling is shown in Fig. \[figspingap\]. It is largest at half–filling and becomes smaller as the system is doped with holes and seems to be present at least down to fillings of $\langle n \rangle = 0.75$. We show the spin gap for $2 \times 16$ and $2 \times 32$ lattices to show the size of the finite size effects and argue that they are small enough that the gap is present in the thermodynamic limit for two chains.
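The definition $\Delta_{\rm spin}=E_0(N+1,N-1)-E_0(N,N)$ can be illustrated on the smallest possible example, a single two-site Hubbard "rung" (a toy sketch, not one of the lattices studied here):

```python
import numpy as np

def spin_gap_dimer(t, U):
    """Delta_spin = E0(N+1, N-1) - E0(N, N) for a two-site Hubbard
    dimer with N_up = N_down = 1."""
    # (1 up, 1 down) sector in the basis |ud,0>, |u,d>, |d,u>, |0,ud>
    H = np.array([[U, -t,  t,  0],
                  [-t, 0,  0, -t],
                  [ t, 0,  0,  t],
                  [ 0, -t, t,  U]], dtype=float)
    e0_11 = np.linalg.eigvalsh(H).min()
    # (2 up, 0 down) sector: the single Pauli-blocked state |u,u>, E = 0
    e0_20 = 0.0
    return e0_20 - e0_11

t, U = 1.0, 8.0
gap = spin_gap_dimer(t, U)
# exact dimer result (sqrt(U^2 + 16 t^2) - U)/2, tending to J = 4 t^2/U
# in the large-U (Heisenberg) limit
assert np.isclose(gap, (np.sqrt(U * U + 16 * t * t) - U) / 2)
```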
We now turn to the behavior of the pair field correlations as the system is doped away from half–filling. We have seen that the pairing correlations with cross–chain symmetry decay exponentially in the half–filled system. This is true for all symmetries of the pair field wavefunction. Fig. \[figDij\] shows the pair field correlations $D_{xx}(i-j)$ and $D_{yx}(i-j)$ plotted as a function of $|i-j|$ for $\langle n \rangle = 1.0$ and $\langle n \rangle = 0.875$. One can see that $D_{xx}(i-j)$ and $D_{yx}(i-j)$ have opposite signs, as one would expect for $d$-wave like symmetry, at both fillings and are significantly enhanced for the doped system.
In order to determine the strength of the pairing correlations, one must consider their $\ell$–dependence at large distances. For a quasi–one–dimensional system, we expect that any pairing correlation will at best decay as a power of $\ell$ and can in some cases decay exponentially, as we have seen for the half-filled system. For two chains, one can compare with the non-interacting $U=0$ ladder, for which $$D_{xx}(\ell) = ( 1/2 \pi \ell )^2
\left[ 2 - \cos ( 2 k_f(0)\ell) - \cos (2 k_f(\pi) \ell) \right].
\label{uzpair}$$ Here $k_f(0) = \cos^{-1}[(t_x+\mu)/2]$ and $k_f(\pi) = \cos^{-1}[(t_x-\mu)/2]$ are the Fermi wave vectors corresponding to the bonding and antibonding bands of the two coupled chains, with $\mu$ the chemical potential. The pair correlations $D_{xx}(\ell)$ are shown in Fig. \[figllDx\], plotted on a log–log scale. The correlations of the interacting system decay approximately as $\ell^{-2}$ and do not seem to be significantly enhanced over those of the non–interacting system, as given by Eq. (\[uzpair\]).
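Eq. (\[uzpair\]) can be evaluated directly; a sketch (the chemical potential below is an arbitrary illustrative choice, in units of $t_y$):

```python
import numpy as np

tx, mu = 1.0, -0.25  # hypothetical parameters, below half filling
kf0 = np.arccos((tx + mu) / 2)   # bonding-band Fermi wave vector
kfpi = np.arccos((tx - mu) / 2)  # antibonding-band Fermi wave vector

def Dxx_free(l):
    # Eq. (uzpair): non-interacting pair field correlation
    return (1 / (2 * np.pi * l)) ** 2 * (
        2 - np.cos(2 * kf0 * l) - np.cos(2 * kfpi * l))

# the envelope decays as l**-2: D_xx * l**2 stays bounded and nonnegative
l = np.arange(1, 200)
env = Dxx_free(l) * l ** 2
assert env.max() <= 4 / (2 * np.pi) ** 2 + 1e-12
assert env.min() >= 0.0
```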
Conclusion
==========
We have discussed techniques we have developed to apply the density matrix renormalization group to Fermion systems in more than one dimension. In particular, we have been able to obtain accurate results for energy gaps and equal–time correlation functions for the Hubbard model on two coupled chains.
The two–chain Hubbard model is a gapped spin liquid at half–filling. Both spin–spin and pair field correlations decay exponentially, with the spin–spin correlations having the longest correlation length. As the system is doped with holes, the spin–spin correlations become incommensurate at a wave vector proportional to the filling and the spin gap becomes smaller, but persists in the thermodynamic limit. The pairing correlations are enhanced with a $d$-wave–like symmetry and decay algebraically with an exponent close to that of the non–interacting, $U=0$ system.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors thank N. Bulut, T.M. Rice, A. Sandvik, M. Vekic, E. Grannan, and R.T. Scalettar for useful discussions. R.M.N. and S.R.W. acknowledge support from the Office of Naval Research under grant No. N00014-91-J-1143 and D.J.S. acknowledges support from the National Science Foundation under grant DMR92–25027. The numerical calculations reported here were performed at the San Diego Supercomputer Center.
[99]{} K.G. Wilson, [*Rev. Mod. Phys.*]{} [**47**]{}, 773 (1975).
J.W. Bray and S.T. Chui, [*Phys. Rev. B*]{} [**19**]{}, 4876 (1979); S.T. Chui and J.W. Bray, [*Phys. Rev. B*]{} [**18**]{}, 2426 (1978); J.E. Hirsch, [*Phys. Rev. B*]{} [**22**]{}, 5259 (1980); C. Dasgupta and P. Pfeuty, [*J. Phys. C*]{} [**14**]{}, 717 (1981).
P.A. Lee, [*Phys. Rev. Lett.*]{} [**42**]{}, 1492 (1979).
K.G. Wilson, in an informal seminar.
S.R. White and R.M. Noack, [*Phys. Rev. Lett.*]{} [**68**]{}, 3487 (1992).
S.R. White, [*Phys. Rev. Lett.*]{} [**69**]{}, 2863 (1992); [*Phys. Rev. B*]{} [**48**]{}, 10345 (1993).
R.M. Noack, S.R. White, and D.J. Scalapino (to be published).
The term “environment” block is due to Shoudan Liang.
S. Liang (to be published).
S.R. White, R.M. Noack, and D.J. Scalapino (to be published).
D.C. Johnston [*et al.*]{}, [*Phys. Rev. B*]{} [**35**]{}, 219 (1987).
M. Takano, Z. Hiroi, M. Azuma, and Y. Takeda, Jap. J. of Appl. Phys. Series [**7**]{}, 3 (1992).
T.M. Rice, S. Gopalan, and M. Sigrist, Europhys. Lett. [**23**]{}, 445 (1993).
E. Dagotto, J. Riera, and D.J. Scalapino, [*Phys. Rev. B*]{} [**45**]{}, 5744 (1992).
T. Barnes [*et al.*]{}, [*Phys. Rev. B*]{} [**47**]{}, 3196 (1993).
S.P. Strong and A.J. Millis, [*Phys. Rev. Lett.*]{} [**69**]{}, 2419 (1992).
M. Sigrist, T.M. Rice, and F.C. Zhang (to be published); Sudha Gopalan, T.M. Rice, and M. Sigrist (to be published).
H. Tsunetsugu, M. Troyer, and T.M. Rice (to be published).
---
abstract: 'We investigate the spectral statistics of chaotic quasi one dimensional systems such as long wires. To do so we represent the spectral correlation function $R(\epsilon)$ through derivatives of a generating function and semiclassically approximate the latter in terms of periodic orbits. In contrast to previous work we obtain both non-oscillatory and oscillatory contributions to the correlation function. Both types of contributions are evaluated to leading order in $1/\epsilon$ for systems with and without time-reversal invariance. Our results agree with expressions from the theory of disordered systems.'
address:
- '$^1$ Fachbereich Physik, Universit[ä]{}t Duisburg-Essen, 47048 Duisburg, Germany'
- '$^2$ Institute of Physics, Saint-Petersburg University, 198504 Saint-Petersburg, Russia'
- '$^3$ Department of Mathematics, University of Bristol, Bristol BS8 1TW, United Kingdom'
author:
- 'Petr Braun$^{1,2}$, Sebastian Müller$^3$ and Fritz Haake$^1$'
title: 'Semiclassical spectral correlator in quasi one-dimensional systems'
---
Introduction
============
In the field of quantum chaos and disorder, the behavior of quasi one-dimensional systems such as long wires is clearly distinguished from the behavior of “normal” chaotic or disordered systems. Most importantly, quasi one-dimensional systems display Anderson localization [@Anderson], i.e., wave functions are localized in only part of the wire and the conductance is suppressed. Anderson localization has important consequences for the statistics of energy levels [@AltshulerShklovskii; @AndreevAltshuler]. For normal systems the energy levels tend to repel each other; the spectral statistics is universal and agrees with predictions made by averaging over random-matrix caricatures of the possible Hamiltonians, according to the so-called BGS conjecture [@BGS]. In contrast, the spectral statistics of quasi one-dimensional systems depends on the length (and thus the diffusion time $T_D$); in the limit of large length the energy levels belonging to the localized wave functions become independent and hence show Poissonian statistics.
This difference between normal and quasi one-dimensional systems is well understood for disordered systems. Notable approaches are based on the DMPK equation [@DMPK] and on the nonlinear sigma model, a field-theoretical technique to evaluate averages over different realizations of the disorder potential. From the latter, localization could be extracted in [@LSZ; @Kamenev]. The appropriate definition of quasi one-dimensional behavior arising in this context is that the classical diffusion time $T_{D}$ becomes comparable to or larger than the relevant quantum time scales, in particular the Heisenberg time $T_{H}=\frac{2\pi \hbar
}{\Delta }$ where $\Delta $ is the mean level spacing. A random-matrix model for systems of this type was considered in [@FyodorovMirlin; @Bible; @Efetov].
For clean chaotic systems (e.g. wires in which the classical motion becomes chaotic due to the shape of the boundary) the effects of quasi one-dimensionality are less well understood, and most of the literature is restricted to normal systems. A quantity that has attracted a lot of attention in this context is the spectral correlation function $R(\epsilon)$. For dynamical systems with a level density $\rho(E)$ this correlation function is defined by $$\label{defR}
R(\epsilon)=
\Delta^2\Big\langle
\rho\Big(E+\frac{\epsilon\Delta}{2\pi}\Big)
\rho\Big(E-\frac{\epsilon\Delta}{2\pi}\Big)
\Big\rangle-1\,$$ where $\epsilon$ is a real energy offset, the brackets denote an average over the center energy $E$ and $\Delta$ is the mean level spacing. For simplicity we assume that $\Delta$ is brought to 1 by appropriately scaling the energy levels. Random matrix theory (RMT) now makes predictions for $R(\epsilon)$: For systems without time-reversal invariance an average over the Gaussian Unitary Ensemble (GUE) of RMT gives $R(\epsilon)=-\frac{1}{2\epsilon^2}+\frac{\cos2\epsilon}{2\epsilon^2}$, while for time-reversal invariant systems an average over the Gaussian Orthogonal Ensemble (GOE) leads to infinite power series in $\frac{1}{\epsilon^n}$ and $\frac{\cos2\epsilon}{\epsilon^n}$. The slow (power law) decay of oscillations leads to a singularity in the Fourier transform of $R(\epsilon)$ (the spectral form factor) at time $t=T_H$.
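The quoted GUE expression is straightforward to evaluate numerically; a minimal sketch:

```python
import numpy as np

def R_gue(eps):
    # GUE two-point correlator: non-oscillatory plus oscillatory part
    return -1 / (2 * eps ** 2) + np.cos(2 * eps) / (2 * eps ** 2)

# at eps = pi/2, cos(2 eps) = -1, so both parts add up to -4/pi^2
assert np.isclose(R_gue(np.pi / 2), -4 / np.pi ** 2)
# whenever cos(2 eps) = 1 the two parts cancel exactly
assert np.isclose(R_gue(np.pi), 0.0)
# correlations die out for large energy separation
assert abs(R_gue(100.0)) < 1e-3
```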
To show that individual systems are faithful to these predictions a semiclassical approach was proposed in [@Berry; @Argaman; @SR]. The essential idea is to express $\rho(E)$ as a sum over periodic orbits, using Gutzwiller’s formula [@Gutzi], and then study the interference between contributions of these orbits. The leading non-oscillatory contribution arises from “diagonal” pairs of identical (up to time reversal) orbits [@Berry]. The remaining terms were accessed only recently in [@SR; @Oursmalltime; @Ourlargetime; @KM].
In the present paper we want to generalize these new results to quasi one-dimensional systems. For these systems the diagonal approximation to the small-time form factor (the Fourier image of the non-oscillatory part of $R(\epsilon)$) was evaluated by Dittrich [@Dittrich]. First results on off-diagonal contributions were obtained by Schanz and Smilansky [@Schanz] for one-dimensional quantum graphs, and by Brouwer and Altland [@Brouwer] who semiclassically explained localization for quasi one-dimensional systems modelled by an array of quantum dots. In contrast, we will focus on general quasi one-dimensional systems such as long wires. We use a periodic-orbit expansion not of the correlation function itself, but of a generating function which yields $R(\epsilon)$ upon taking derivatives [@Ourlargetime; @KM]. This enables us to determine, to leading order in $\frac{1}{\epsilon}$, both the non-oscillatory and the oscillatory parts of $R(\epsilon)$. In this order we see that the effects of quasi one-dimensionality reduce to a modification of the periodic-orbit sum rule suggested in [@Dittrich]. For systems without time reversal invariance it suffices to perform a diagonal approximation on the level of the generating function. In contrast, for time-reversal invariant systems this diagonal approximation still captures only the non-oscillatory part; the evaluation of the oscillatory part involves off-diagonal contributions of pairs of non-identical but similar orbits. Both for systems with and without time-reversal invariance we reach agreement with results for disordered systems by Andreev and Altshuler [@AndreevAltshuler]. Our results illustrate how semiclassical methods are useful not only for describing universal features of “normal” systems but also deviations from universality.
Higher-order corrections in $1/\epsilon$ should be similarly accessible; their calculation needs taking into account more complicated groups of correlated orbits introduced in previous work on normal systems[@SR; @Oursmalltime], combined with treatment of higher order effects of quasi one-dimensionality. An extension of this approach to Anderson localization appears within reach.
Two-point spectral correlator and the generating function
=========================================================
To get started we briefly review how the correlation function $R(\epsilon)$ can be accessed through a generating function. Following Ref. [@Ourlargetime] we write $R(\epsilon)$ as the real part of the complex correlation function $$\begin{aligned}
\label{defC}
C(\epsilon ^{+})&=&\frac{\Delta ^{2}}{2\pi ^{2}}
\left\langle
{\rm Tr}\left( E+\frac{\epsilon ^{+}\Delta }{2\pi }-\hat{H}\right)^{-1}
{\rm Tr}\left( E-\frac{\epsilon ^{+}\Delta }{2\pi }-\hat{H}\right)^{-1}
\right\rangle
-\frac{1}{2}\,, \nonumber\\
\epsilon^\pm&=&\epsilon\pm{{\rm i}}\gamma\,,\label{complexcorrelator}\\
R(\epsilon)&=&\lim_{\gamma \to 0}{\rm Re}\,C(\epsilon ^{+})\nonumber\end{aligned}$$ and determine the latter from a generating function, the energy-averaged combination of four spectral determinants $$Z\left(\epsilon _{A}^{+},\epsilon _{B}^{-},\epsilon_{C}^{+},
\epsilon_{D}^{-}\right) =
\left\langle \frac{
\det\big(E+\epsilon_{C}^{+}-\hat{H}\big)
\det\big(E+\epsilon_{D}^{-}-\hat{H}\big)}
{\det\big(E+\epsilon_{A}^{+}-\hat{H}\big)
\det\big(E+\epsilon_{B}^{-}-\hat{H}\big)}
\right\rangle
\label{defZ}$$ as $$R(\epsilon)=
\lim_{\gamma \rightarrow 0}{\rm Re}\,C\left( \epsilon ^{+}\right)=
-\frac{1}{2}+2\,{\rm Re}\,\lim_{\gamma \rightarrow 0}
\left.
\frac{\partial ^{2}Z}{\partial\epsilon_A^{+}\partial\epsilon_{B}^{-}}
\right\vert_{\parallel
,\times }\,. \label{CinZ}$$ Here the subscripts $\pm$ indicate small positive or negative imaginary parts. The symbols $\parallel ,\times $ denote two alternative ways of identifying the energy arguments, to be referred to as “columnwise” ($\parallel$) and “crosswise” ($\times$), $$\begin{aligned}
\parallel\; :&&
\epsilon _{A}^{+}=\epsilon _{C}^{+}=\epsilon ^{+},\,\epsilon
_{B}^{-}=\epsilon _{D}^{-}=-\epsilon ^{+}
\hspace{3cm} {\rm columnwise}\,, \label{para}\\
\times :&&
\epsilon _{A}^{+}=\epsilon ^{+},\,\epsilon _{B}^{-}=
-\epsilon^{+},\epsilon _{C}^{+}=-\epsilon ^{-},\epsilon _{D}^{-}=
\epsilon^{-},\gamma \rightarrow +0\quad \,\,{\rm crosswise}\,.
\label{cro}\end{aligned}$$
Both procedures would yield the same result for the two-point correlator if implemented rigorously. However, we shall have to calculate $Z$ semiclassically, and that approximation entails two different expressions, one ($\parallel$) reproducing the non-oscillatory part and the other ($\times$) the oscillatory part of $R(\epsilon)$. To obtain the full result both expressions have to be added. In [@KM] it was shown that this addition can be understood naturally in terms of an improved semiclassical approximation preserving the unitarity of the time evolution (the Riemann-Siegel lookalike formula [@BerryKeating]).
The semiclassical approximation for $Z$ is based on Gutzwiller’s formula for the trace of the resolvent $${\rm Tr}(E^+-\hat H)^{-1}=-\frac{{\rm i}\pi}{\Delta}+\sum_a F_a\,e^{{\rm i}S_a(E^+)/\hbar}\,.$$ The first term is the smooth (Weyl) part of the level density (equal to $1/\Delta$, up to the factor $-{\rm i}\pi$) and the sum runs over periodic orbits, with $S_{a}$, $T_{a}$, $F_{a}$ the action, period and stability coefficient of the $a$th orbit. Integration then yields the semiclassical approximation of the determinant $$\begin{aligned}
\det \left( E^{+}-\hat{H}\right)^{-1}
&=&
\exp\Big[-\int^{E^{+}}dE\,{\rm Tr}\,( E-H)^{-1}\Big]\\
&\sim &
\exp\Big(\frac{{{\rm i}}\pi E^{+}}{\Delta }+\sum_{a}F_{a}\,
\e^{\,\frac{{{\rm i}}}{\hbar }S_{a}( E^{+})}\Big)\end{aligned}$$ Substituting such expansions for all four determinants in $Z$ and expanding, e.g., $S_a\big(E+\frac{\epsilon_A^+\Delta}{2\pi}\big)\approx S_a(E)+\frac{T_a\Delta}{2\pi}\epsilon_A^+$, we obtain $$\begin{aligned}
Z &\approx &\e^{\frac{{{\rm i}}}{2}\left(\epsilon_{A}^{+}-\epsilon_{B}^{-}
-\epsilon_{C}^{+}+\epsilon_{D}^{-}\right) }\exp \Big[
\sum_{a}F_{a}\,\e^{\,\frac{{{\rm i}}}{\hbar}S_{a}(E)}
\left(\e^{\,{{\rm i}}\frac{T_{a}}{T_{H}}\epsilon_{A}^{+}}
-\e^{\,{{\rm i}}\frac{T_{a}}{T_{H}}\epsilon_{C}^{+}}\right)
\label{SemicZ} \\
&&+\sum_{a}F_{a}^*\,\e^{\,-\frac{{{\rm i}}}{\hbar }S_{a}(E)}
\left
(\e^{\,-{{\rm i}}\frac{T_{a}}{T_{H}}\epsilon _{B}^{-}}
-\e^{\,-{{\rm i}}\frac{T_{a}}{T_{H}}\epsilon_{D}^{-}}\right)\Big]
\nonumber\,.\end{aligned}$$
Systems without time reversal invariance
========================================
Diagonal approximation
----------------------
The semiclassical representation (\[SemicZ\]) factorizes into a product over periodic orbits, $$\begin{aligned}
\label{Zprod}
Z &=&e^{\frac{i}{2}\left( \epsilon _{A}^{+}-\epsilon
_{B}^{-}-\epsilon _{C}^{+}+\epsilon _{D}^{-}\right) }Z_{0}\,,
\qquad \qquad Z_{0} =\prod_{a}z_{a}, \\
z_{a} &=&\exp \big[ \underbrace{F_{a}
\left(
\e^{{{\rm i}}\frac{T_{a}}{T_{H}}\epsilon_{A}^{+}}
-\e^{\,{{\rm i}}\frac{T_{a}}{T_{H}}\epsilon_{C}^{+}}
\right)}_{=f_{AC}^a}\e^{\,\frac{{{\rm i}}}{\hbar }S_{a}(E)} \nonumber\\
&& \quad
+\underbrace{\,F_{a}^*\,
\left(
\e^{\,-{{\rm i}}\frac{T_{a}}{T_{H}}\epsilon _{B}^{-}}
-\e^{\,-{{\rm i}}\frac{T_{a}}{T_{H}}\epsilon_{D}^{-}}
\right)}_{=f_{BD}^{a*}}\e^{\,-\frac{{{\rm i}}}{\hbar }S_{a}(E)} \big] \,.\nonumber\end{aligned}$$ The diagonal approximation [@Berry] assumes that for systems without time-reversal invariance contributions of different periodic orbits are uncorrelated. For the generating function this means that the energy-averaged $Z_{0}$ becomes a product of single-orbit averages, $$\langle Z_{0}\rangle_{\mathrm{diag}}=\prod_{a}\langle z_{a}\rangle\,.$$
The energy average in $\langle z_{a}\rangle$ does away with rapid oscillations due to the phase factors $\e^{\pm{{\rm i}}S(E)/\hbar}$ in the exponent. Since $z_{a}$ is periodic in the phase $\Phi_{a}=S_{a}(E)/\hbar $ we may average with respect to $\Phi_a$, over a single period $2\pi$. This yields $$\begin{aligned}
\left\langle z_{a}\right\rangle &=& \frac{1}{2\pi }\int_{0}^{2\pi
}d\Phi \exp \left(f_{AC}^{a}\e^{{{\rm i}}\Phi }+f_{BD}^{a*}\e^{-{{\rm i}}\Phi
}\right) =I_{0}\left( 2\sqrt{f_{AC}^{a}f_{BD}^{a\ast }}\right) ,\end{aligned}$$ where $I_{0}$ is the imaginary-argument Bessel function. The expansion $\ln I_{0}\left( y\right)
=\frac{y^{2}}{4}-\frac{y^{4}}{64}+\ldots$ gives $ \langle
Z_{0}\rangle =\exp \big[ \sum_{a}f_{AC}^{a}f_{BD}^{a*}
-\frac{1}{4}\sum_{a}( f_{AC}^{a}f_{BD}^{a*})^{2}+\ldots\big]$. In the semiclassical limit ($T_{H}\rightarrow \infty $) it suffices to keep only the leading quadratic term in the exponent.[^1]
Our task is thus reduced to calculating the periodic-orbit sum $$\begin{aligned}
&&\ln \left\langle Z_{0}\right\rangle_{\rm diag}=
\sum_{a}f_{AC}^{a}f_{BD}^{a\ast } \label{lnZ} \\
&=&\sum_{a}|F_{a}|^{2}\left(\e^{{{\rm i}}\frac{T_{a}}{T_{H}}\epsilon_{A}^+}
-\e^{{{\rm i}}\frac{T_{a}}{T_{H}}\epsilon _{C}^{+}}\right) \left(\e^{-{{\rm i}}\frac{T_{a}}{T_{H}}\epsilon _{B}^{-}}
-\e^{-{{\rm i}}\frac{T_{a}}{T_{H}}\epsilon_D^-}\right) \nonumber\end{aligned}$$
Diffusive vs ergodic behavior
-----------------------------
For “normal” systems the sum over periodic orbits can be done using the well-known sum rule of Hannay and Ozorio de Almeida [@HOdA] $$\sum_{a}|F_{a}|^{2}\,(\cdot) =\int_{T_{0}}^{\infty }
\frac{dT}{T}\,(\cdot)$$ which expresses the approximately ergodic behavior of long orbits; short orbits below a certain classical period $T_0$ have to be excluded.
The quasi one-dimensional character of long wires requires a modification of that sum rule [@Dittrich]. The momentum components in all directions and the transverse coordinates can still effectively randomize after a few bounces against the boundary, but the longitudinal coordinate takes much longer to explore the whole length $L$ of the wire. Given chaos-inducing boundaries, each bounce will with finite probability change the sign of the longitudinal momentum component and thus entail diffusion along the wire. The longitudinal coordinate becomes randomized only for times long enough to explore the wire, i.e., times in excess of the diffusion (Thouless) time $T_D=L^2/D$, where $D$ denotes the diffusion constant. Orbits with periods $T$ much smaller than the Thouless time will have explored only the small fraction $\sqrt{DT}/L$ of the whole length, and the inverse of that fraction must appear as an enhancement factor on the right-hand side of the above sum rule, relative to orbits with periods $T\gg T_D$.
Dittrich [@Dittrich] has determined the aforementioned factor of increase for arbitrary values of the ratio $T/T_D$ as the integral $\int_0^Ldx_0 p_{x_0}(x_0,T|x_0)=P(T)$ of the probability density of return to an arbitrary point $x_0$ of departure after a time $T$, for a one-dimensional random walk. Solving the diffusion equation for a wire of length $L$ leads to the “enhanced-return” factor $$\label{retprob}
P(T)
= \sum_{n=0}^{\infty }\exp\Big( -\frac{\pi^{2}n^{2}T}{2T_{D}}\Big)
=\frac{1}{2}\Big[1+\vartheta_{3}\Big(0,\e^{-\pi^{2}T/2T_D}\Big)\Big]$$ where $\vartheta _{3}(u,q) $ denotes the elliptic theta-function of the third kind [@AbraSteg]. The sum rule modified for quasi one dimensional systems thus reads $$\sum_{a}|F_{a}|^{2}\,(\cdot) =\int_{T_{0}}^{\infty }
\frac{dT}{T}P(T)\,(\cdot)\, .$$ Hannay’s and Ozorio de Almeida’s sum rule is restored if $T_{D}$ is so small compared with $T$ that only the $n=0$ term in (\[retprob\]) survives and $P(T)\sim 1$. In the opposite limit $T_D\gg T$ we have $P(T)\sim \sqrt{T_D/2\pi T}=L/\sqrt{2\pi DT}$ which agrees with the above qualitative expectation.
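Both limits of the enhanced-return factor $P(T)$ can be checked directly from its series; a short numerical sketch (times measured in units where $T_D=1$):

```python
import math

def return_factor(T, T_D, n_max=2000):
    """P(T) = sum_{n>=0} exp(-pi^2 n^2 T / (2 T_D)), the enhanced-return factor."""
    q = math.pi ** 2 * T / (2.0 * T_D)
    return sum(math.exp(-q * n * n) for n in range(n_max))

T_D = 1.0
# Ergodic regime T >> T_D: only the n = 0 term survives, so P(T) -> 1.
P_long = return_factor(10.0 * T_D, T_D)
# Diffusive regime T << T_D: P(T) ~ sqrt(T_D / (2 pi T)) = L / sqrt(2 pi D T).
T_short = 1e-4 * T_D
P_short = return_factor(T_short, T_D)
asymptote = math.sqrt(T_D / (2.0 * math.pi * T_short))
print(P_long, P_short, asymptote)
```

At $T/T_D=10^{-4}$ the truncated sum already agrees with the diffusive asymptote to about one percent, and for $T\gg T_D$ it saturates at unity as required.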
Inasmuch as orbit periods of the order of the Heisenberg time $T_H$ determine the level statistics, the time-scale ratio $$\zeta =\frac{\pi ^{2}T_{H}}{2T_{D}}$$ will play an important role for the wires under study here. In particular, the crossover from normal ($\zeta\gg 1$) to quasi one-dimensional behavior takes place for $\zeta$ of order unity; for $\zeta\ll 1$ the spectral statistics must be compatible with localization.
We now invoke the modified sum rule and the identity $\int_0^\infty \frac{dT}{T} (\e^{{{\rm i}}aT}-\e^{{{\rm i}}bT})=\ln\frac{b}{a}$ (valid for ${\rm Im}\,a,{\rm Im}\,b>0$); the lower limit $T_0$ of the time integral could be replaced by zero, accepting a negligible error ${\mathrm
o}(T_0/T_H)$. We thus get the diagonal approximation for our generating function as $$\begin{aligned}
\label{lnZdiag}
\ln \left\langle Z_0\right\rangle _{\rm diag}
&\approx&\sum_{n=0}^{\infty }\int_{0}^{\infty }\frac{dT}{T}
\e^{-n^{2}\frac{\zeta T}{T_H}}\left[\e^{{{\rm i}}\frac{T}{T_H}(\epsilon_{A}^+
-\epsilon_B^-)}
-\e^{{{\rm i}}\frac{T}{T_H}(\epsilon_C^+ -\epsilon_B^-) }\right. \nonumber\\
&&\left. \hspace{2.5cm}+\,
\e^{-{{\rm i}}\frac{T}{T_H}(\epsilon_D^- -\epsilon_C^+)}
-\e^{-{{\rm i}}\frac{T}{T_H}(\epsilon_D^- -\epsilon_A^+) }\right] \nonumber\\
&=&\sum_{n=0}^{\infty }\ln \frac{\left({{\rm i}}\zeta n^{2}
+\epsilon_{C}^{+}-\epsilon _{B}^{-}\right)
\left( {{\rm i}}\zeta n^{2}
+\epsilon_{A}^{+}-\epsilon _{D}^{-}\right)}
{\left( {{\rm i}}\zeta n^{2}
+\epsilon_{A}^{+}-\epsilon_{B}^{-}\right)
\left( {{\rm i}}\zeta n^{2}
+\epsilon_{C}^{+}-\epsilon_{D}^{-}\right)}\,.\end{aligned}$$ The infinite product implicitly involved here can be brought to a closed form using $$\prod_{n=0}^{\infty }\frac{n^{2}+a}{n^{2}+b}=\frac{\varphi \left(
a\right) }{\varphi \left( b\right) },
\quad \varphi \left( x\right)= \sqrt{x}\sinh \pi \sqrt{x}\,,$$ whereupon we arrive at our final result for the generating function in the diagonal approximation, $$\langle Z\rangle_{\rm diag}=\e^{\frac{{{\rm i}}}{2}\left(
\epsilon_A^+-\epsilon_B^- -\epsilon_C^++\epsilon_D^-\right)}
\frac{\varphi \left(\frac{\epsilon_C^+ -\epsilon_B^-}{{{\rm i}}\zeta }\right)
\varphi \left(\frac{\epsilon_A^+ -\epsilon_D^-}{{{\rm i}}\zeta }\right) }
{\varphi \left(\frac{\epsilon_A^+ -\epsilon_B^-}{{{\rm i}}\zeta }\right)
\varphi \left(\frac{\epsilon_C^+ -\epsilon_D^-}{{{\rm i}}\zeta }\right) }.
\label{zuni}$$In the limit $T_{D}\to 0$ or $\zeta \to \infty$ we have $\varphi(x)\to
\pi x$, and the generating function tends to its familiar form for normal systems [@Oursmalltime].
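The infinite-product identity behind this closed form is readily verified numerically; a sketch with arbitrary real arguments $a,b>0$, truncating the product at a large cutoff:

```python
import math

def varphi(x):
    """phi(x) = sqrt(x) sinh(pi sqrt(x)), for real x > 0."""
    s = math.sqrt(x)
    return s * math.sinh(math.pi * s)

a, b = 0.5, 1.3
partial = 1.0
for n in range(200000):                 # truncation error of order |a - b| / n_max
    partial *= (n * n + a) / (n * n + b)
closed = varphi(a) / varphi(b)
print(partial, closed)
```

The $n=0$ factor contributes $a/b$, while the remaining factors reproduce the standard product representation of $\sinh$, which is why the two printed numbers agree up to the truncation error.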
Two-point correlator and form factor
------------------------------------
Substituting $\left\langle Z\right\rangle _{\mathrm{diag}}$ for $\left\langle Z\right\rangle $ in (\[CinZ\]) and identifying energies columnwise ($\parallel $) we obtain $$\begin{aligned}
\nonumber
C_{\parallel }\left(\epsilon \right)
&=&-2\sum_{n=0}^{\infty }\frac{1}{\left(
{{\rm i}}\zeta n^{2}+2\epsilon\right)^{2}} \\ \label{Cpar}
&=&-\frac{1}{2\epsilon^{2}}\left(\frac{1}{2}
+\frac{1}{4}\theta\cot\theta
+\frac{{{\rm i}}\epsilon\pi^{2}}{2\zeta\sin^{2}\theta}\right) ,
\\ \nonumber
\theta &=&\left( 1+{{\rm i}}\right)\pi\sqrt{\frac{\epsilon }{\zeta }}.\end{aligned}$$ Upon taking the real part we are led to a cumbersome expression for $R(\epsilon)$ equivalent to the earlier RMT [@AndreevAltshuler] and semiclassical [@Dittrich] results for the non-oscillatory part of the correlator. In the limit $\zeta \rightarrow \infty $ the GUE behavior $R_{\rm
non-osc}(\epsilon) =-1/2\epsilon ^{2}$ is restored.
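The resummation leading to the closed form above can be spot-checked numerically; the following sketch (with illustrative values $\zeta=1$, $\epsilon=0.3$) compares the truncated spectral sum with the closed-form expression and also confirms the GUE limit:

```python
import cmath
import math

zeta, eps = 1.0, 0.3

# Truncated spectral sum C_par = -2 sum_n (i zeta n^2 + 2 eps)^(-2).
direct = -2.0 * sum(1.0 / (1j * zeta * n * n + 2.0 * eps) ** 2
                    for n in range(200000))

# Closed form with theta = (1 + i) pi sqrt(eps / zeta).
theta = (1.0 + 1j) * math.pi * math.sqrt(eps / zeta)
closed = (-1.0 / (2.0 * eps ** 2)
          * (0.5
             + 0.25 * theta * cmath.cos(theta) / cmath.sin(theta)
             + 1j * eps * math.pi ** 2 / (2.0 * zeta * cmath.sin(theta) ** 2)))

# GUE limit: for zeta -> infinity only the n = 0 term survives, C_par -> -1/(2 eps^2).
gue = -2.0 * sum(1.0 / (1j * 1e7 * n * n + 2.0 * eps) ** 2 for n in range(1000))
print(abs(direct - closed), gue.real)
```

Since the closed form is even in $\theta$, the branch of the square root is immaterial.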
The crosswise ($\times $) identification of parameters, on the other hand, entails $$\label{Ccross}
C_\times(\epsilon) =\frac{2\pi^{2}\e^{{{\rm i}}2\epsilon }}{\epsilon\zeta
\left[ \cosh 2\pi \sqrt{\frac{\epsilon }{\zeta }}-\cos
2\pi \sqrt{\frac{\epsilon }{\zeta }}\right] }\,,$$ and now the real part yields the oscillatory part of the correlator $R_{\rm osc}\left(\epsilon \right)$, in agreement with what Andreev and Altshuler [@AndreevAltshuler] had found through an average over an ensemble of disordered systems. The GUE expression $R_{\rm osc}(\epsilon) =\cos 2\epsilon /2\epsilon^{2} $ follows in the limit $\zeta \to \infty$.
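A quick numerical sketch confirms that the oscillatory correlator above interpolates between the GUE form at large $\zeta$ and an exponentially suppressed amplitude at finite $\zeta$ (parameter values are illustrative):

```python
import cmath
import math

def c_cross(eps, zeta):
    """Oscillatory correlator C_x(eps) for a quasi one-dimensional system."""
    x = 2.0 * math.pi * math.sqrt(eps / zeta)
    return (2.0 * math.pi ** 2 * cmath.exp(2j * eps)
            / (eps * zeta * (math.cosh(x) - math.cos(x))))

eps = 0.5
gue = cmath.exp(2j * eps) / (2.0 * eps ** 2)   # GUE oscillatory correlator
print(abs(c_cross(eps, 1e8) - gue))            # large zeta: GUE form recovered
print(abs(c_cross(50.0, 1.0)))                 # finite zeta: exponentially small
```

For $\zeta\to\infty$ the bracket behaves as $\cosh x-\cos x\approx x^2=4\pi^2\epsilon/\zeta$, which cancels the prefactor and leaves $\e^{{{\rm i}}2\epsilon}/2\epsilon^2$.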
For finite $\zeta$ the amplitude of oscillations of the spectral correlation function tends to zero exponentially as $\epsilon\to\infty$, instead of following the power law characteristic of normal systems. That leads to qualitative changes in the spectral form factor $K(\tau)$, where $\tau=t/T_H$ is the dimensionless time. For $\tau>0$, the form factor can be defined as the Fourier transform $$\label{formfactor}
K\left( \tau \right) =\frac{1}{2\pi }\int_{-\infty +{{\rm i}}\gamma
}^{\infty +{{\rm i}}\gamma }\e^{-{{\rm i}}2\epsilon \tau }C\left( \epsilon
\right) d\epsilon .$$ In normal systems without time-reversal invariance, $K(\tau)$ experiences a discontinuity of its first derivative at $\tau=1$, introduced by the Fourier transform of the oscillatory part of the spectral correlation function. In quasi one-dimensional systems this discontinuity is replaced by a smooth transition from the small-time to the large-time behavior. Another change, associated with the non-oscillatory part of the spectral correlation function, is the much faster growth of $K(\tau)$ at small $\tau$: the respective (“parallel”) part of the diagonal form factor, first deduced semiclassically in [@Dittrich], $$\label{K<}
K_{\parallel }\left( \tau \right) =\tau \sum_{n=0}^{\infty
}e^{-\zeta n^{2}\tau }=\tau P(T_{H}\tau)\,$$ grows like a square root rather than linearly. That faster rise toward the saturation value unity may be seen as a (slightly indirect) hint at localization; in the Poissonian limit this rise would be a jump, $K(\tau)=1$ for all $\tau>0$.
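The square-root growth of $K_\parallel$ at small $\tau$ can be made explicit numerically; a sketch with the illustrative choice $\zeta=1$:

```python
import math

def k_parallel(tau, zeta, n_max=5000):
    """Diagonal ('parallel') form factor K(tau) = tau sum_n exp(-zeta n^2 tau)."""
    return tau * sum(math.exp(-zeta * n * n * tau) for n in range(n_max))

zeta, tau = 1.0, 1e-4
# For zeta * tau << 1 the sum is ~ (1/2) sqrt(pi/(zeta tau)) + 1/2, so that
# K grows like sqrt(tau), much faster than the linear GUE behavior K = tau.
approx = 0.5 * math.sqrt(math.pi * tau / zeta) + tau / 2.0
print(k_parallel(tau, zeta), approx, tau)
```

At $\tau=10^{-4}$ the form factor already exceeds the linear GUE value by almost two orders of magnitude.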
As a note of caution it has to be mentioned that, in contrast to normal systems without time reversal, the diagonal approximation for the correlation functions no longer coincides with the exact result in quasi one-dimensional systems. In particular, the Fourier transform of the oscillatory part (\[Ccross\]) is no longer zero for $\tau<1$, tending to a finite negative value as $\tau\to+0$. Consequently the total form factor of the diagonal approximation becomes negative for small $\tau$, although the exact form factor is known to be non-negative.
Time reversal invariance
========================
Interesting changes arise if time-reversal invariance holds. Periodic orbits then exist in time-reversed pairs ($a,\bar{a}$) with exactly the same action and stability coefficients. The generating function becomes a product of contributions of different pairs, and these pairs are uncorrelated in the diagonal approximation. Due to $z_{a}=z_{\bar{a}}$ the generating function is, apart from the Weyl factor, squared compared to (\[zuni\]), $$\langle Z\rangle_{\rm diag}=\e^{\frac{{{\rm i}}}{2}\left(
\epsilon _{A}^{+}-\epsilon _{B}^{-}-\epsilon _{C}^{+}+\epsilon
_{D}^{-}\right) }
\left[ \frac{\varphi \left( \frac{\epsilon
_{C}^{+}-\epsilon _{B}^{-}}{{{\rm i}}\zeta }\right) \varphi \left( \frac{\epsilon _{A}^{+}-\epsilon _{D}^{-}}{{{\rm i}}\zeta }\right) }{\varphi \left(
\frac{\epsilon _{A}^{+}-\epsilon _{B}^{-}}{{{\rm i}}\zeta }\right) \varphi
\left( \frac{\epsilon _{C}^{+}-\epsilon _{D}^{-}}{{{\rm i}}\zeta }\right) }\right] ^{2}\,. \label{zortho}$$ Using this generating function with columnwise identification of arguments we find that the non-oscillatory part of the two-point correlation function and the small-time form factor are doubled compared to (\[Cpar\]) and (\[K<\]); this is in line with Refs. [@AndreevAltshuler] and [@Dittrich].
Remarkably, for time-reversal invariant systems the diagonal approximation yields no oscillatory contributions to the correlation function, i.e., there are no terms of order $\frac{\cos 2\epsilon
}{\epsilon ^{2}}$. This can be understood as follows. In the crosswise limit (\[cro\]) we have $$\epsilon _{C}^{+}-\epsilon _{B}^{-}=\epsilon _{A}^{+}-\epsilon
_{D}^{-}={{\rm i}}2\gamma ,\quad \gamma \to 0\; \label{orders}$$ such that we can replace $\varphi(x)\to \pi x$ in the numerator of $\langle Z\rangle_{\rm diag}$; this gives $$\label{zorthocross}
\langle Z\rangle _{{\rm diag}}\propto
(\epsilon _{C}^{+}-\epsilon _{B}^{-})^2(\epsilon _{A}^{+}-\epsilon
_{D}^{-})^2$$ such that $\langle Z\rangle _{{\rm diag}}$ tends to zero like $\Or(\gamma^4)$. The two derivatives w.r.t. $\epsilon_A^+,\epsilon_B^-$ can only eliminate two factors $\gamma$. This leaves a result that tends to zero like $\Or(\gamma^2)$ which means that $C_{\mathrm{diag},\times }=0$.
To derive the oscillatory component of the spectral correlator we thus have to go beyond the diagonal approximation and take into account correlations between the factors $z_{a}$ in the generating function related to different periodic orbits. For the relevant correlated orbits the differences $\left\langle
z_{a}z_{b}\right\rangle -\left\langle z_{a}\right\rangle
\left\langle z_{b}\right\rangle ,\left\langle
z_{a}z_{b}z_{c}\right\rangle -\left\langle z_{a}\right\rangle
\left\langle z_{b}\right\rangle \left\langle z_{c}\right\rangle$, etc., must be non-zero. In view of the semiclassical limit this is possible only if the respective actions have a chance to cancel, e.g. if $S_{a}\left( E\right) \approx S_{b}\left( E\right)$ or $S_{a}(E)\approx S_{b}(E)+ S_{c}\left( E\right)$. Such correlations between orbits indeed exist for chaotic dynamics; they stem from “encounters”, i.e., places where two or more stretches of the same orbit or of different orbits are close and almost parallel or antiparallel to each other. By changing the connections inside these encounters one can turn, e.g., an orbit $a$ into an orbit $b$ with almost the same action, or split it into two orbits $b$ and $c$ whose sum of actions is close to the action of $a$. We shall refer to such sets of correlated orbits as “bunches”. The simplest encounter involves two almost antiparallel orbit stretches; the bunch it generates is the famous Sieber-Richter pair (containing one orbit where the encounter forms a crossing in configuration space and one where it forms an avoided crossing) [@SR]. More complicated scenarios were introduced in [@Oursmalltime; @Ourlargetime]. It has been shown in [@Ourlargetime] that taking into account both the “diagonal” correlations and those related to bunches gives the correct semiclassical asymptotics of the generating function and the correlation function of normal systems. The generating function was found as $$\left\langle Z\right\rangle = \left\langle Z\right\rangle
_{\mathrm{diag} }\,\left( 1+\left\langle Z\right\rangle
_{\mathrm{off}}\right) , \label{ZTRI}$$where the off-diagonal part $\langle Z\rangle_{\rm off}$ contains the contributions of the bunches mentioned.
Let us now determine the term in $\langle Z\rangle_{\rm off}$ responsible for the leading oscillatory contribution to the correlator. Multiplication with this term must remove one factor $\epsilon_C^+-\epsilon_B^-\to i2\gamma$ and one factor $\epsilon_A^+-\epsilon_D^-\to i2\gamma$ from (\[zorthocross\]). The product is then proportional to $(\epsilon_A^+-\epsilon_D^-)(\epsilon_C^+-\epsilon_B^-)$ and survives differentiation w.r.t. $\epsilon_A^+$ and $\epsilon_B^-$ and taking the limit $\gamma\to0$. For normal systems the term of lowest order in $\frac{1}{\epsilon}$ satisfying this condition reads (see Eqs. (13,14) of the on-line version of Ref. [@Ourlargetime]) $$\left\langle Z\right\rangle _{\mathrm{off},\times
}=-\frac{4}{\left( \epsilon _{A}^{+}-\epsilon _{D}^{-}\right)
\left( \epsilon _{C}^{+}-\epsilon _{B}^{-}\right) }. \label{ZOFF}$$
In a quasi one-dimensional system, due to the diffusive dynamics, Eq. (\[ZOFF\]) would be replaced by an expression analogous to the diagonal approximation, i.e. a summand as in (\[ZOFF\]), plus terms where the energy differences in the denominator are shifted by finite imaginary amounts of the type ${{\rm i}}\zeta n^2$, with integer $n$ as in $(\ref{lnZdiag})$. However, the shifted terms would, for $n\neq 0$, no longer diverge in the limit $(\times )$. Hence combined with the above factor $\Or(\gamma^2)$ they would yield vanishing contributions to the correlation function. Therefore the leading term in $C_{\times }(\epsilon )$ is still due to (\[ZOFF\]). Substituting (\[ZOFF\]) into (\[ZTRI\]) and calculating derivatives in the crosswise procedure, we obtain the oscillatory component of the complex correlator, $$\label{Ccrossortho}
\lim_{\gamma \to+0}C_{\times }(\epsilon)= \frac{8\pi^{4}\,\e^{\,{{\rm i}}2\epsilon }}{\epsilon^{2}\zeta^{2}\left(\cosh
2\pi\sqrt{\frac{\epsilon}{\zeta }}-\cos 2\pi \sqrt{\frac{\epsilon}
{\zeta}}\right)^{2}}\,.$$ Its real part coincides with the RMT result for the two-point spectral correlation function in the presence of time reversal [@AndreevAltshuler], now deduced semiclassically for individual chaotic quasi one-dimensional systems. In the normal-system limit $\zeta\to \infty$, (\[Ccrossortho\]) tends to the random-matrix expression $\exp\left({{\rm i}}2\epsilon\right)/2\epsilon^4$. On the other hand, for all finite $\zeta$ the amplitude of oscillations diminishes exponentially with the growth of $\epsilon$. As a consequence the discontinuity at $\tau=1$ of the third derivative of the GOE spectral form factor $K(\tau)$ [@Bible] is smoothed out in the quasi one-dimensional case.
Financial support of the Sonderforschungsbereich SFB/TR12 of the Deutsche Forschungsgemeinschaft is gratefully acknowledged.
References {#references .unnumbered}
==========
[99]{}
Anderson P W 1958 [**109**]{} 1492
Altshuler B L and Shklovskii B I 1986 [*Sov. Phys. -JETP*]{} [**64**]{} 127
Andreev A V and Altshuler B L 1995 [**75**]{} 902
Bohigas O, Giannoni M J and Schmit C 1984 [**52**]{} 1;\
McDonald S W and Kaufman A N 1979 [**42**]{} 1189;\
Casati G, Valz-Gris F and Guarneri I 1980 [*Lett. Nuovo Cim.*]{} [**28**]{} 279;\
Berry M V 1987 [*Proc. R. Soc. Lond.*]{} A [**413**]{} 183
Dorokhov O N 1982 [*JETP Lett.*]{} [**36**]{} 318;\
Mello P A, Pereyra P and Kumar N 1988 [**181**]{} 290
Lamacraft A, Simons B D and Zirnbauer M R 2004 [**70**]{} 075412
Altland A, Kamenev A and Tian C 2005 [**95**]{} 206601
Fyodorov Y V and Mirlin A D 1994 [*Int. J. Mod. Phys.*]{} [**B8**]{} 3795
Haake F 2001 *Quantum Signatures of Chaos* (Springer, Berlin, 2nd ed.)
Efetov K B 1997 *Supersymmetry in Disorder and Chaos* (Cambridge University Press, Cambridge)
Berry M V 1985 [*Proc. R. Soc.*]{} A **400** 229
Argaman N et al 1993 [**71**]{} 4326
Sieber M and Richter K 2001 T **90** 128
Gutzwiller M 1990 [*Chaos in Classical and Quantum Mechanics*]{} (Springer, New York)
Müller S, Heusler S, Braun P, Haake F and Altland A 2004 **93** 014103;\
Müller S, Heusler S, Braun P, Haake F and Altland A 2005 E **72** 046207
Heusler S, Müller S, Altland A, Braun P and Haake F 2007 **98** 044103;\
Heusler S, Müller S, Altland A, Braun P and Haake F 2007 `arXiv:nlin.CD/0610053`.
Keating J P and Müller S 2007 [*Proc. R. Soc.*]{} A [**463**]{} 3241
Dittrich T 1996 [*Physics Reports*]{} **271** 267
Schanz H and Smilansky U 2000 [**84**]{} 1427
Brouwer P and Altland A, 2007 `arXiv:0802.0976v1`
Berry M V 1986 [*Riemann’s zeta function: a model for quantum chaos?*]{} In [*Quantum chaos and statistical nuclear physics*]{} (eds Seligman T H & Nishioka H, Springer Lecture Notes in Physics, Berlin, Germany: Springer) no. 263, pp. 1–17;\
Berry M V and Keating J P 1990 [*J. Phys.*]{} A [**23**]{} 4839;\
Keating J P 1992 [*Proc. R. Soc. Lond.*]{} A [**436**]{} 99;\
Berry M V and Keating J P 1992 [*Proc. R. Soc. Lond.*]{} A [**437**]{} 151
Hannay J H and Ozorio de Almeida A M 1984 [*J. Phys.*]{} A **17** 3429
Abramovitz M and Stegun I A 1972 [*Handbook of mathematical functions*]{} (Dover, New York)
[^1]: For this term the decrease of the stability coefficients $|F_a|^2$ with the period is just compensated by the exponential increase in the number of orbits. For all other terms $F_a$ decreases faster; the contributions of the long orbits then become exponentially small whereas the shortest orbits make for corrections of the order ${\mathrm o}(T_0/T_H)$.
---
abstract: 'The origin of fast radio bursts (FRBs) is still a mystery. One model proposed to interpret the only known repeating object, FRB 121102, is that the radio emission is generated from asteroids colliding with a highly magnetized neutron star (NS). With $N$–body simulations, we model a debris disc around a central star together with an intruding NS on an eccentric orbit. As the NS approaches the first periastron passage, most of the comets are scattered away rather than being accreted by the NS. To match the observed FRB rate, the debris belt would have to be at least three orders of magnitude denser than the Kuiper belt. We also consider the rate of collisions on to the central object but find that the density of the debris belt must be at least four orders of magnitude higher than that of the Kuiper belt. These discrepancies in the density arise even if (1) one introduces a Kuiper-belt-like comet belt rather than an asteroid belt and assumes that comet impacts can also make FRBs; (2) the NS moves $\sim 2$ orders of magnitude slower than the typical proper-motion velocity produced by supernova kicks; and (3) the NS orbit is coplanar to the debris belt, which provides the highest rate of collisions.'
author:
- |
Jeremy L. Smallwood,[^1] Rebecca G. Martin and Bing Zhang\
Department of Physics and Astronomy, University of Nevada, Las Vegas, 4505 South Maryland Parkway, Las Vegas, NV 89154, USA
bibliography:
- 'main.bib'
date: 'Accepted XXX. Received YYY; in original form ZZZ'
title: 'Investigation of the asteroid–neutron star collision model for the repeating fast radio bursts'
---
\[firstpage\]
pulsars: general – minor planets, asteroids: general – radio continuum: general
Introduction
============
Fast radio bursts (FRBs) are bright transients of radio emission with millisecond durations. Despite rapid observational progress [@Lorimer2007; @Keane2011; @Thornton2013; @burke2014; @Spitler2014; @Ravi2015; @Petroff2015; @Masui2015; @Keane2016; @Spitler2016; @champion2016; @DeLaunay2016; @Chatterjee2017], we still do not know the origin(s) of these mysterious bursts. There are about two dozen FRBs with a known source. Of these, there has been only one repeating source, FRB 121102 [@Spitler2016; @Scholz2016; @Law2017]. Due to their high dispersion measures ($\sim 500$ – $\sim 3000\, \rm cm^{-3}\, pc$) [@Thornton2013; @Petroff2016], FRBs most likely originate at cosmological distances. The repeating FRB 121102 was discovered to be associated with a steady radio emission source and localized to a star-forming galaxy at redshift $z=0.193$ [@Chatterjee2017; @Marcote2017; @Tendulkar2017], firmly establishing the cosmological nature of FRBs at least for this source. The bursts of FRB 121102 are sporadic [@Scholz2016; @Law2017]. [@Spitler2016] reported $17$ bursts recorded from this source, which suggests a repetition rate of $\sim 3$ bursts per hour during the active phase [@Palaniswamy2016]. Recently, [@Michilli2018] reported almost 100% linear polarization of the radio burst emission from FRB 121102, with a roughly constant polarization angle within each burst as well as a high and varying rotation measure.
There have been many ideas proposed in the literature to explain the repeating bursts from FRB 121102. Widely discussed models include super-giant pulses from pulsars [@Connor2016; @Cordes2016] or young magnetars [@Katz2016; @Metzger2017; @Margalit2018]. [@Zhang2017] interpreted the repeating bursts from FRB 121102 as due to repeated interactions between a neutron star (NS) and a nearby variable outflow. [@Michilli2018] suggested that the steady radio emission of FRB 121102 could be associated with a low-luminosity accreting super-massive black hole. As a result, the source of the variable outflow could be this black hole. [@Zhang2018] showed that this model can interpret the available data satisfactorily.
This paper concerns another repeating FRB model that attributes the repeating bursts to multiple collisions of asteroids onto a NS [@Dai2016]. [@Geng2015] initially described a mechanism whereby asteroids/comets may impact a NS to produce FRBs. As the impactor penetrates the NS surface, a hot plasma fireball forms. The ionized material located interior to the fireball expands along magnetic field lines, and coherent radiation from the top of the fireball may then account for the observed FRBs. Since the acceleration and radiation mechanism of ultra-relativistic electrons remains unknown, a more detailed model of an asteroid-NS impactor was proposed by [@Dai2016], where a highly magnetized NS travels through an asteroid belt around another star. They suggested that the repeating radio emission could be caused by the NS encountering a large number of asteroids. During each NS-asteroid impact, the asteroid has a large electric field component parallel to the stellar magnetic field that causes electrons to be scattered off the asteroidal surface and accelerated to ultra-relativistic energies instantaneously. Furthermore, [@Bagchi2017] argued that the model can interpret both repeating (when the NS intrudes a belt) and non-repeating (when the NS possesses the belt itself) FRBs. Asteroid impacts on NSs were among early models for gamma ray bursts [@Harwit1973; @Colgate1981; @vanBuren1981; @Mitrofanov1990; @Shull1995] and soft gamma-ray repeaters [@Livio1987; @Boer1989; @Katz1994; @Zhang2000].
Debris discs are thought to be the remains of the planet formation process [@Wyatt2012; @Currie2015; @Booth2017]. They are observed to be common around unevolved stars [@Moro-mart2010; @Ballering2017; @anglada2017]. Debris discs around white dwarfs have not been directly observed, but their existence is implied by the pollution of their atmospheres by asteroidal material, perhaps from a debris disc that survived stellar evolution [@Gansicke2006; @Kilic2006; @vonHippel2007; @Farihi2009; @Jura2009; @Farihi2010b; @Melis2010; @Brown2017; @Bonsor2017; @xu2018; @Smallwood2018b]. However, the existence of debris discs around NSs is more uncertain [e.g., @Posselt2014]. The pulsar timing technique has a high level of precision which allows for the detection of small, asteroid-mass objects around millisecond pulsars [@Thorsett1992; @Bailes1993; @Blandford1993; @Wolszczan1994; @Wolszczan1997]. No asteroids have been confirmed by observations, and even detections of planets around pulsars are rare [@johnston1996; @Bell1997; @Manchester2005; @Kerr2015; @Martin2016pulsars]. However, [@Shannon2013] suggested that an asteroid belt, having a mass of about $0.05\, \rm M_{\oplus}$, may be present around pulsar B1937+21.
Putting aside whether collisions between asteroids and NSs can emit coherent radio emission with high brightness temperatures to interpret FRBs, here we only consider whether a NS passing through a debris disc around another star, either a main-sequence star or a white dwarf, is able to produce a collision rate to match the observed rate in the repeating FRB 121102 during the active phase. In Section \[analytic\] we examine analytically the expected rate of asteroid collisions for reasonable debris disc parameters. In Section \[numerical\], we use $N$–body simulations to model a binary system with a debris disc of asteroids around another star to determine the tidal disruption rate on to the companion NS. We then consider the case that the central object is also a NS and investigate the impact rate on to it. Finally we draw our conclusions in Section \[conc\].
Analytical collision rate for a neutron star traveling through an asteroid belt {#analytic}
===============================================================================
We follow the approach of [@Dai2016] to calculate the collision rate of asteroids with a NS passing through an asteroid belt. This analytical approximation is only relevant for the first periastron approach of the NS. As shown later in Section \[numerical\], numerical simulations allow us to test the collision rate over several periastron approaches and to model a system that represents a captured NS sweeping through a belt. [@Dai2016] considered a NS sweeping through the inner edge of an asteroid belt at $2\, \rm
au$. Each asteroid collision may give rise to an FRB. The impact rate is estimated as $$\mathcal{R}_a = \sigma_a v_* n_a,
\label{rate1}$$ where $n_a$ is the number density of the belt and $\sigma_a$ is the impact cross-section [@Safronov1972], given by $$\sigma_a = \frac{4\pi G M R_*}{v_*^2},
\label{sigma}$$ where $v_*$ is the proper velocity of the NS, $R_*$ is the radius of the NS, and $M$ is the mass of the NS. There are two parameters on which the rate depends sensitively: the number density of asteroids in the belt and the velocity with which the NS moves. We consider reasonable values for each below.
Number density {#numberdensity}
--------------
We estimate the number density of asteroids in the belt by assuming that the density is spatially uniform over the belt. For a belt of width and thickness $\eta R_{\rm a}$ with an inner radius $R_{\rm a}$, the number density is $$n_a = \frac{N_a}{2\pi \eta^2 R_a^3}.
\label{numden}$$ Taking the parameters of [@Dai2016], $N_{\rm a}=10^{10}$, $\eta=0.2$ and $R_{\rm a}=2\,\rm au$, the number density is $n_{\rm a}=4.97 \times 10^9 \, \rm au^{-3}$. With these parameters the collision rate may be sufficiently high to explain the repeating FRB (see also Section \[sec:rate\]). For comparison, we estimate the number density of the asteroid belt and the Kuiper belt in the solar system by assuming a cylindrical volume with height determined by the inclination distribution of the asteroids and comets.
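As a rough numerical sketch of the rate (\[rate1\]), one may combine the belt parameters quoted above with fiducial NS values; the mass $M=1.4\,\rm M_\odot$, radius $R_*=10\,\rm km$ and velocity $v_*=100\,\rm km\,s^{-1}$ are illustrative assumptions, not values fixed by the text:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11        # astronomical unit, m
M_SUN = 1.989e30     # solar mass, kg

# Belt parameters quoted in the text: N_a = 1e10 objects, eta = 0.2, R_a = 2 au.
N_a, eta, R_a = 1e10, 0.2, 2.0 * AU
n_a = N_a / (2.0 * math.pi * eta ** 2 * R_a ** 3)      # number density, m^-3

# Illustrative (assumed) NS parameters: M = 1.4 M_sun, R_* = 10 km, v_* = 100 km/s.
M, R_star, v_star = 1.4 * M_SUN, 1.0e4, 1.0e5

sigma = 4.0 * math.pi * G * M * R_star / v_star ** 2   # Safronov cross-section, m^2
rate = sigma * v_star * n_a                            # collisions per second
print(n_a * AU ** 3)       # ~ 4.97e9 au^-3, matching the value quoted above
print(rate * 3600.0)       # of order one collision per hour
```

With these assumed numbers the rate comes out at roughly one impact per hour, of the same order as the $\sim 3$ bursts per hour observed during the active phase.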
### Comparison to the Solar System asteroid belt {#abelt}
If the total energy released during an FRB is solely due to the gravitational potential energy of the asteroid and not the magnetic field energy of the NS, then the asteroid mass needed to produce an FRB can be estimated as done by [@Geng2015]. The asteroid mass required to produce an FRB as it collides with the NS is about $5.4 \times 10^{17}\, \rm g$ [@Geng2015], which is in the range of observed asteroid masses [$10^{16}$–$10^{18}\, \rm g$, e.g., @Colgate1981].
The present-day asteroid belt extends from about $2.0\, \rm au$ to $3.5\,\rm au$ [@Petit2001]. We can estimate the number of asteroids above the mass required to produce an FRB from the main-belt size frequency distribution given by [@Bottke2005]. In their table 1, the number of main-belt asteroids with a radius greater than $\sim 4\, \rm km$ is approximately $N \sim 2.3\times 10^4$. To estimate the number density of the asteroid belt, we assume a volume produced by the inclination distribution of the asteroid belt being uniformly distributed between $-30$ and $30$ degrees [@Terai2011]. Thus, the number density of asteroids that are massive enough to produce an FRB is $n_{\rm asteroid} \approx 2.75 \times 10^2\, \rm au^{-3}$.
[@Dai2016] considered a typical iron-nickel asteroid to have a mass of $m = 2\times 10^{18}\, \rm g$, which is an order of magnitude larger than the minimum mass required to produce an FRB found by [@Geng2015]. With $10^{10}$ asteroids, their belt has a total mass of $16.7\, \rm M_{\oplus}$. The mass of the present-day asteroid belt is about $5\times 10^{-4}\, \rm M_{\oplus}$ [@Krasinsky2002]. While this is thought to be only $1\%$ of the mass of the original asteroid belt [@Petit2001], the mass would need to be over five orders of magnitude higher to reach this level. Furthermore, the mass of a debris disc decreases over time due to secular and mean-motion resonances with giant planets [@Froeschle1986; @Yoshikawa1987; @Morbidelli1995; @Gladman1997; @Morbidelli1998; @Bottke2000; @PetitMorbidelli2001; @Ito2006; @Bro2008; @Minton2011; @Chrenko2015; @Granvik2017; @smallwood2018a; @Smallwood2018b]. These resonant perturbations cause eccentricity excitation which causes collisional grinding, which reduces the mass of the belt over time [@Wyatt2008]. A debris belt undergoes significant changes as the star evolves. If a belt is located within $\sim 100\, \rm au$ of the central star, as the star loses mass the belt undergoes adiabatic expansion in orbital separation [@veras2013]. Since debris discs lose mass over time due to collisional grinding, an asteroid belt around a NS may not be sufficiently massive to provide enough collisions.
### Comparison to the Kuiper belt {#Kuiper_belt}
A Kuiper belt analog, which is much more extended than an asteroid belt, may be a better source of FRB-causing collisions with a neutron star. The current observed mass of the Kuiper belt ranges from $0.01\, \rm M_{\oplus}$ [@Bernstein2004] to $0.1 \, \rm M_{\oplus}$ [@Gladman2001], but there is a mass deficit to explain how the Kuiper belt objects accreted at their present heliocentric locations. Thus, the mass estimated in the initial Kuiper belt may be as much as $\sim 10\, \rm M_{\oplus}$ [@Stern1996; @Stern1997a; @Stern1997b; @Kenyon1998; @Kenyon1999a; @Kenyon1999b; @Kenyon2004; @Delsanti2006]. The current Kuiper belt extends from about $30\, \rm au$ to $50\,\rm au$ [@Jewitt1995; @Wiessman1995; @Dotto2003]. The number of discovered comets is only a small fraction of the theoretical total. The number of Kuiper belt objects that have a radius greater than $R_{\rm min}$ is $$N_{> R_{\rm min}} = \frac{K}{2}\bigg[ \bigg(\frac{R_0}{R_{\rm min}}\bigg)^{2} -1 \bigg] + \frac{2K}{7},
\label{eq::total_num}$$ [@Holman1995; @Tremaine1990], where $R_0$ is the largest comet radius and $K$ is related to the total belt mass $M$ by $$M = \frac{4\pi}{3}\rho R_0^3 K C,$$ where $\rho$ is the comet density and the constant $C = 3$ [@Holman1995]. We assume an upper limit for the current mass of the Kuiper belt, $M=0.1 \, \rm M_{\oplus}$ [@Gladman2001]. We take $R_{\rm min}$ to be the minimum radius needed to produce an FRB. We assume a spherical cometary nucleus with density $\rho = 1 \rm \, g\, cm^{-3}$. With the critical mass required to produce an FRB being $5.4 \times 10^{17}\, \rm g$ [@Geng2015], the minimum radius of the object is $R_{\rm min} \approx 5 \, \rm km$. Thus, from equation (\[eq::total\_num\]) the total number of objects large enough to produce an FRB is $N_{> R_{\rm min}} = 8.38 \times 10^8$. If the inclination is uniformly distributed between $-10$ and $10$ degrees [@Gulbis2010], then the number density of objects in the Kuiper belt that are large enough to create an FRB is roughly $n_{\rm Kuiper} \approx 1.2 \times 10^4\,\rm au^{-3}$.
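The number densities quoted here and below follow from dividing the object counts by the volume of the belt, treated as a uniform annulus whose vertical extent is set by the $\pm 10^\circ$ inclination spread. A minimal sketch (the function name and the uniform-annulus geometry are our own assumptions, not taken from the paper):

```python
import math

def belt_density(N, r_in, r_out, i_max_deg=10.0):
    """Number density (au^-3) of a belt modelled as a uniform annulus
    whose vertical extent is set by the +/- i_max inclination spread."""
    r_mid = 0.5 * (r_in + r_out)
    # full vertical height at the mid-radius for inclinations up to i_max
    height = 2.0 * r_mid * math.sin(math.radians(i_max_deg))
    volume = math.pi * (r_out**2 - r_in**2) * height  # au^3
    return N / volume

# Current Kuiper belt: 8.38e8 FRB-capable objects between 30 and 50 au
n_current = belt_density(8.38e8, 30.0, 50.0)       # ~1.2e4 au^-3
# Primordial compact belt: 100x more objects between 15 and 30 au
n_primordial = belt_density(8.38e10, 15.0, 30.0)   # ~5e6 au^-3
```

This reproduces the quoted $1.2\times 10^4\,\rm au^{-3}$ for the current belt and gives $\approx 5\times 10^6\,\rm au^{-3}$ for the primordial belt, close to the quoted $4.8\times 10^6\,\rm au^{-3}$.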
Next we compare the estimated number density of the present-day Kuiper belt to that of the primordial Kuiper belt. In the Nice model the outer Solar system began in a compact state [$\sim 5.5\, \rm au$ to $\sim 14\, \rm au$, e.g., @Levison2008]; Jupiter then migrated slightly inward to its present-day location while Saturn, Uranus, and Neptune migrated outward. When Jupiter and Saturn crossed their mutual 2:1 mean-motion resonance, their eccentricities increased. This sudden jump in eccentricity drove the outward migration of Uranus and Neptune and destabilized the compact primordial Kuiper belt. The timescale for Jupiter and Saturn to cross the 2:1 resonance was between about $60 \, \rm Myr$ and $1.1\, \rm Gyr$ [@Gomes2005]. Thus, we can assume that the compact primordial Kuiper belt was stable during this period, which may give enough time for a NS to be captured and plummet through the compact disc. Based on the Nice model, the primordial Kuiper belt was compact ($15$–$30\, \rm au$) and had an initial mass of $\sim 10\, \rm M_{\oplus}$ [@Gomes2005; @Levison2008; @Morbidelli2010; @Pike2017].
Assuming a primordial Kuiper belt mass of $M\sim 10\, \rm M_{\oplus}$, we find that the total number of objects capable of producing an FRB is $N_{> R_{\rm min}} = 8.38 \times 10^{10}$. This calculation assumes that the comet size distribution is the same as in the current Kuiper belt. We estimate the number density of the primordial compact Kuiper belt to be $n_{\rm Kuiper,p} \approx 4.8\times 10^{6}\, \rm au^{-3}$, which is over two orders of magnitude higher than in the current Kuiper belt.
### Extrasolar debris discs {#extrasolar_discs}
Next, we compare extrasolar debris disc architectures with the Solar system and with the theoretical belt used by [@Dai2016]. Hundreds of extrasolar debris discs have been discovered over the past couple of decades [e.g. @Wyatt2008]. Since the emission from debris discs is optically thin, submillimeter continuum observations can be used to estimate disc masses, with the caveat that large bodies are missed. Since asteroid-sized objects cannot be detected in debris belts, the presence of dust is used as an indicator of total disc mass. The majority of dust in debris belts is produced by asteroid and comet collisions driven by eccentricity excitation from orbital resonances. Thus, the dust mass can be used as a predictor of the total mass of the disc through $$\frac{M_{\rm pb}}{t_{\rm age}}\approx \frac{M_d}{t_{\rm col}},
\label{discmass}$$ [e.g., @Chiang2009], where $M_{\rm pb}$ is the mass of the largest parent body at the top of the collisional cascade, $t_{\rm age}$ is the age of the system, $M_d$ is the dust mass, and $t_{\rm col}$ is the collisional lifetime. The mass of the largest parent body can be used as the minimum mass of the disc because still larger bodies may survive without colliding over $t_{\rm age}$ [e.g., @Dohnanyi1969].
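Equation (\[discmass\]) can be rearranged to give a lower limit on the disc mass from an observed dust mass. A minimal sketch, where the dust mass, system age, and collisional lifetime are purely illustrative assumptions rather than measurements:

```python
def minimum_disc_mass(M_dust, t_age, t_col):
    """Minimum parent-body mass implied by an observed dust mass,
    from M_pb / t_age ~ M_dust / t_col (same time units for both)."""
    return M_dust * t_age / t_col

# Illustrative only: 1e-3 M_earth of dust in a 100 Myr old system
# with a 1 Myr collisional lifetime implies >= 0.1 M_earth in parent bodies.
M_pb = minimum_disc_mass(1e-3, 100e6, 1e6)
```

The steep dependence on $t_{\rm col}$ is why we do not attempt this estimate for the observed systems: the collisional lifetime itself requires a detailed model.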
The dust mass residing within debris discs has been measured in a plethora of planetary systems. Depending on the size of the grains, dust masses have been observed in the range $10^{-6}\, \rm M_{\oplus}$ to $10^{-1}\, \rm M_{\oplus}$ [e.g., @Matthews2007; @Su2009; @Patience2011; @Hughes2011; @Matthews2014; @jilkov2015; @Kalas2015; @Nesvold2017]. Exozodiacal dust is the constituent of hot debris discs, and these dust environments have been detected around two dozen main-sequence stars [@Absil2009; @2013Absil; @Ertel2014]. [@Kirchschlager2017] analyzed nine of the two dozen systems and found that the dust should be located within $\sim 0.01$–$1\, \rm au$ of the star, depending on the luminosity, and that the dust masses amount to only $(0.2$–$3.5) \times 10^{-9}\, \rm M_{\oplus}$.
To calculate the minimum mass of the discs discussed above from the observed dust mass (see equation (\[discmass\])), we would have to calculate the collisional lifetime, which is outside the scope of this paper. The main point of discussing the observed disc dust masses is to compare them to the Kuiper belt, which has a dust mass of $(3$–$5) \times 10^{-7} \, \rm M_{\oplus}$ [@Vitense2012]. The dust mass in the Kuiper belt is so low because the belt has reached a steady state in which the amount of dust being ejected equals the amount being injected. The observed debris discs may not be in a steady state, and thus some have up to six orders of magnitude more dust than the Solar system. From equation (\[discmass\]), if the amount of dust is large and the collisional timescale is short, then some extrasolar debris discs may be more massive than the Kuiper belt or the asteroid belt. [@Heng2011] estimated the total mass of the debris disc in the system HD 69830, based on the dynamical survival models of [@Heng2010], to be $(3$–$4) \times 10^{-3}\, \rm M_{\oplus}$, several times more massive than our asteroid belt. [@Chiang2009] found the lower mass limit of Fomalhaut’s debris disc to be about $3\, \rm M_{\oplus}$, an order of magnitude more massive than the observed mass of the Kuiper belt.
Neutron star velocity {#velocity}
---------------------
One well-studied type of NS is the radio pulsar. We use measured pulsar velocities to represent the proper-motion velocities of NSs. Identifying pulsar proper motions and velocities is critical to understanding pulsar and NS astrophysics. Applications of pulsar velocity measurements include determining the birth rate of pulsars [@Ankay2004], further understanding supernova remnants [@Migliazzo2002] and the Galactic distribution of the progenitor population [@chennamangalam2014], and, for this work, calculating the collision rate of asteroids with a NS. Pulsar velocities are calculated by measuring their proper motions and distances. The high velocities of pulsars at birth, also known as their natal kick velocities, are thought to be driven by an asymmetric explosion mechanism [e.g., @Lai2006; @Wongwathanarat2013]; for a review of pulsar natal kick velocities, see [@Janka2017]. Observed supernova explosions are not spherically symmetric [@Blaauw1961; @Bhattacharya1991; @Wang2001]. Natal kick velocities have typical values of $200$–$500\, \rm km\, s^{-1}$, reaching up to about $1000\, \rm km\, s^{-1}$, with a mean velocity of $400\, \rm km\, s^{-1}$ [e.g., @Cordes1993; @Harrison1993; @Lyne1994; @Kaspi1996; @Fryer1998; @Lai2001; @Arzoumanian2002; @Chatterjee2005; @Hobbs2005]. The large eccentricities observed in Be/X-ray binaries also suggest large kick velocities [@Brandt1995; @Bildsten1997; @Martin2009].
The average observed pulsar velocity is several hundred $\rm km\, s^{-1}$ [e.g., @Bailes1990; @Caraveo1999; @Hobbs2005; @Deller2012; @Temim2017; @Deller2018]. Several mechanisms have been put forth to explain the high natal velocities of pulsars. Asymmetric neutrino emission was thought to provide kick velocities of up to $\sim 300\, \rm km\, s^{-1}$ [@Fryer2006], but this mechanism may be ruled out by its dependence on a very large magnetic field ($> 10^{16}\,\rm G$) and nonstandard neutrino physics [e.g, @Wongwathanarat2010; @Nordhaus2010; @Nordhaus2012; @Katsudia2018]. [@Harrison1975] suggested that the electromagnetic rocket effect from an off-centered dipole in a rapidly rotating pulsar can accelerate pulsars to similarly high velocities. Another mechanism is non-radial flow instabilities, such as convective overturn and the standing accretion shock instability [@Foglizzo2002; @Blondin2003; @Foglizzo2006; @Foglizzo2007; @Scheck2008], which can produce asymmetric mass ejection during supernova explosions and natal velocities from $100\, \rm km \, s^{-1}$ up to and even beyond $1000 \, \rm km \, s^{-1}$. In what follows we explore the collision rate of asteroids onto a pulsar with a pulsar velocity of $100\,\rm km\, s^{-1}$ [@Blaes1993; @Ofek2009; @Li2016]. This low value gives a larger cross-sectional area for collisions and hence a maximum value for the collision rate.
Collision rate {#sec:rate}
--------------
The collision rate given by equation (\[rate1\]) is estimated as $$\begin{aligned}
\mathcal{R}_a
& = 1.25\bigg(\frac{R_*}{10\, {\rm km}}\bigg)\bigg(\frac{M}{1.4\,M_{\odot}}\bigg)\bigg(\frac{v_{*}}{100\, {\rm km\, s^{-1}}}\bigg)^{-1}
\notag \\
& \quad \quad \times
\bigg( \frac{n_a}{4.97\times10^{9}\, {\rm au^{-3}}}\bigg) \rm h^{-1}.
\label{coll_rate}\end{aligned}$$ Instead of an asteroid belt, we use the primordial Kuiper belt to calculate this rate, setting $n_a$ equal to the density of the primordial Kuiper belt, $n_{\rm Kuiper,p} = 4.8\times 10^{6}\, \rm au^{-3}$. We estimate a collision rate of $0.0012\, \rm h^{-1}$, about three orders of magnitude below the analytical rate calculated by [@Dai2016], which requires an extremely high debris-disc density and a low NS velocity. Our analytical calculation suggests that this mechanism cannot produce a comet collision rate of $3\, \rm h^{-1}$, even in the extremely dense primordial Kuiper belt. In the next section, we explore whether our analytical findings are supported by numerical integrations.
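The scaled rate of equation (\[coll\_rate\]) can be evaluated directly; a minimal sketch (the function and variable names are our own):

```python
def collision_rate_per_hour(R_star_km=10.0, M_ns=1.4, v_kms=100.0,
                            n_au3=4.97e9):
    """Scaled collision rate of equation (coll_rate), in collisions per hour.
    Defaults are the fiducial values at which the rate equals 1.25 h^-1."""
    return (1.25 * (R_star_km / 10.0) * (M_ns / 1.4)
            * (100.0 / v_kms) * (n_au3 / 4.97e9))

# Primordial Kuiper belt density instead of Dai et al.'s asteroid belt:
rate = collision_rate_per_hour(n_au3=4.8e6)
# rate ~ 0.0012 per hour, roughly 2500x below the observed ~3 per hour
```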
Previous works used the tidal disruption radius to calculate collisions; instead, we use the impact radius associated with equation (\[sigma\]). [@Colgate1981] defined the break-up radius due to tidal forces to be $$\begin{aligned}
R_b & = \bigg(\frac{\rho_0 r_0^2 G M}{s_0}\bigg)^{1/3} \notag \\
& = 2.22 \times 10^4 \bigg(\frac{m}{10^{18}\, {\rm g}}\bigg)^{2/9} \bigg( \frac{\rho_0}{8\times 10^{15}\, {\rm g\, km^{-3}}}\bigg)^{1/9}
\notag \\
& \quad \quad \times \bigg(\frac{s_0}{10^{20}\, {\rm dyn\, km^{-2}}}\bigg)^{-1/3} \bigg(\frac{M}{1.4\,M_{\odot}}\bigg)^{1/3}\, \rm km,\end{aligned}$$ where $\rho_0$ is the density of the asteroid, $r_0$ is the cylindrical radius of the particle, and $s_0$ is the tensile strength. The impact radius is defined as $$\begin{aligned}
R_{\rm Impact} & = \sqrt{\frac{4GMR_*}{v_*^2}} \notag \\
& = 2.73\times 10^4 \bigg(\frac{M}{1.4\,{\rm M_{\odot}}}\bigg)^{1/2} \bigg(\frac{R_*}{10\, \rm km}\bigg)^{1/2}
\notag \\
& \quad \quad \times \bigg(\frac{v_*}{100\, \rm km\, s^{-1}}\bigg)^{-1} \, \rm km.\end{aligned}$$ We find that the impact radius is larger than the tidal break-up radius. [@Dai2016] specifically required asteroids rather than comets to produce FRBs, because an asteroid is small enough to produce a burst duration of order milliseconds, consistent with the typical durations of FRBs. For a spherical comet nucleus with radius $r_0 = 5\, \rm km$, the duration can be estimated as $$\Delta t \simeq \frac{12r_0}{5}\bigg(\frac{2GM}{R_{\rm impact}}\bigg)^{-1/2},$$ giving a duration of $3.3\, \rm ms$. This is consistent with the pulse width of FRB 121102, which is observed to be $3 \pm 0.5 \, \rm ms$ [@Spitler2014]. However, this calculation encompasses only the cometary nucleus and neglects the cometary tail; a long cometary tail could potentially destroy the coherent emission responsible for producing FRBs.
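These length and time scales can be checked numerically in cgs units. In the sketch below (variable names are our own), $r_0$ is derived from a sphere of the quoted mass and density, an assumption that reproduces the quoted radii to within $\sim 10\%$:

```python
import math

G = 6.674e-8               # gravitational constant, cgs
M = 1.4 * 1.989e33         # NS mass, g
R_star = 1.0e6             # 10 km NS radius, cm
v_star = 1.0e7             # 100 km/s NS velocity, cm/s

# Impact radius from gravitational focusing: R_impact = sqrt(4 G M R* / v*^2)
R_impact = math.sqrt(4.0 * G * M * R_star / v_star**2)   # ~2.7e9 cm

# Tidal break-up radius R_b = (rho0 r0^2 G M / s0)^(1/3),
# with r0 taken from a 1e18 g sphere of density 8 g/cm^3 (our assumption)
rho0, m_ast, s0 = 8.0, 1.0e18, 1.0e10   # s0 = 1e20 dyn/km^2 in cgs
r0 = (3.0 * m_ast / (4.0 * math.pi * rho0))**(1.0 / 3.0)
R_b = (rho0 * r0**2 * G * M / s0)**(1.0 / 3.0)           # ~2.4e9 cm

# Burst duration for a 5 km comet nucleus falling at the free-fall speed
r_nuc = 5.0e5                                            # cm
dt = (12.0 * r_nuc / 5.0) / math.sqrt(2.0 * G * M / R_impact)  # ~3.3 ms
```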
$N$–Body Simulations {#numerical}
====================
{width="8.7cm"} {width="8.7cm"}
{width="8.7cm"} {width="8.7cm"} {width="8.7cm"} {width="8.7cm"}
{width="8.7cm"} {width="8.7cm"}
{width="8.7cm"} {width="8.7cm"}
We investigate whether asteroid/comet collisions can occur on a NS at a rate high enough to explain the repeating FRB 121102. We examine two scenarios: in the first, the NS formed in a binary; in the second, the NS was captured into a binary.
In the non-capture scenario, the NS orbit is coplanar to the debris disc, with an eccentricity of $e = 0.5$, a semimajor axis of $a = 100\, \rm au$, and an orbital period of $P_{\rm orb} = 597.6\, \rm yr$. The assumption of coplanarity gives the highest possible collision rate. Because the NS is formed in a supernova explosion, it receives a kick that can lead to an eccentric orbit [@Blaauw1961; @Bhattacharya1991]. In the capture scenario we also assume coplanarity, along with an eccentricity of $e = 0.9$, a semimajor axis of $a = 500\, \rm au$, and an orbital period of $P_{\rm orb} = 6681.5\, \rm yr$. Even though an orbit with eccentricity $0.9$ is technically bound, for simplicity we take it to resemble a capture. In both scenarios the binary components have equal masses of $1.4\, M_{\odot}$, with the frame of reference centered on the central star hosting the debris disc. We create a Kuiper-belt-like fiducial disc of $10,000$ test particles with orbital elements as follows. The semimajor axis ($a$) is randomly allocated in the range \[0.1, 60\] au, the eccentricity ($e$) is randomly distributed in the range \[0, 0.1\], and the inclination ($i$) is randomly selected in the range \[0, 10\] degrees. The remaining rotational orbital elements, the argument of pericenter ($\omega$), the longitude of the ascending node ($\Omega$), and the mean anomaly ($\mathcal{M}$), are all randomly allocated in the range \[0, 360\] degrees. The NS companion begins at apastron.
Since in both cases the intruding NS is on a bound orbit, we calculate the periastron velocities in both scenarios and compare them to the NS natal kick velocity used in the analytical approximation of equation (\[coll\_rate\]). For NS eccentricities of $0.5$ and $0.9$, the periastron velocities are $6.1048\, \rm km\, s^{-1}$ and $6.8707\, \rm km\, s^{-1}$, respectively, each with a periastron distance of $50\, \rm au$. These velocities are about two orders of magnitude lower than average NS velocities, so the number of collisions in the numerical results should be enhanced by the extremely low periastron velocity.
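These periastron velocities follow from the vis-viva relation. The sketch below (our own function name) treats the NS as a test particle orbiting only the $1.4\, M_{\odot}$ central star, an assumption that reproduces the quoted values:

```python
import math

G = 6.674e-8            # gravitational constant, cgs
M = 1.4 * 1.989e33      # central star mass, g
AU = 1.496e13           # cm

def periastron_velocity_kms(a_au, e):
    """Vis-viva speed at periastron, v = sqrt(G M (1+e) / (a (1-e))),
    for a test particle orbiting a 1.4 Msun star."""
    a = a_au * AU
    return math.sqrt(G * M * (1.0 + e) / (a * (1.0 - e))) / 1.0e5

v1 = periastron_velocity_kms(100.0, 0.5)  # ~6.10 km/s, periastron 50 au
v2 = periastron_velocity_kms(500.0, 0.9)  # ~6.87 km/s, periastron 50 au
```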
We model the NS system along with a debris disc using the $N$–body symplectic integrator in the orbital dynamics package [mercury]{} [@Chambers1999]. We simulate this system for a duration of $100,000$ years, corresponding to $166.67\, P_{\rm orb}$ for the non-capture scenario and $14.97\, P_{\rm orb}$ for the capture scenario, and we count the number of test particles that impact the central star and the companion. We physically inflate the radii of the NS and the central star to the impact radius. When a test particle collides with either star, it is counted as an impact and removed from the simulation. The system is in an initially stable configuration without the intruding NS.
The left panel of Fig. \[setup\] shows the initial setup of the non-capture scenario. The orbit of the intruding NS that sweeps through the fiducial belt is shown by the red dashed line. The frame of reference is centered on the central star (not shown), located at the origin, $(0,0,0)$. The NS is initially at apastron; the red dot is inflated to make its location visible. [Mercury]{} uses the mean anomaly as one of the rotational elements. To construct the orbit of the NS in Fig. \[setup\], we use the first-order transformation from mean anomaly to true anomaly ($\nu$), $$\mathcal{M} = \nu - 2e\sin \nu,$$ where $e$ is the eccentricity of the NS.
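This first-order transformation can be compared against the exact relation through the eccentric anomaly; a minimal sketch (function names are our own):

```python
import math

def mean_from_true_exact(nu, e):
    """Exact mean anomaly: nu -> eccentric anomaly E -> M = E - e sin E,
    valid for nu in (-pi, pi)."""
    E = 2.0 * math.atan(math.sqrt((1.0 - e) / (1.0 + e))
                        * math.tan(nu / 2.0))
    return E - e * math.sin(E)

def mean_from_true_first_order(nu, e):
    """First-order (in e) expansion used to draw the NS orbit."""
    return nu - 2.0 * e * math.sin(nu)

# The expansion is accurate for small e but degrades at large e,
# so it is only a plotting aid, not an integrator-grade conversion.
err = abs(mean_from_true_exact(1.0, 0.1)
          - mean_from_true_first_order(1.0, 0.1))
```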
{width="8.7cm"} {width="8.7cm"}
{width="8.7cm"} {width="8.7cm"}
The right panel of Fig. \[setup\] shows the final distribution of the surviving debris disc after $100,000$ years. The majority of the debris belt becomes unstable except for a population that resides close to the central star. Fig. \[e0p5\] shows the eccentricity versus semimajor axis distribution of the test-particle population at times $t= 0\, P_{\rm orb}$, $0.67\, P_{\rm orb}$, $1.67\, P_{\rm orb}$, and $166.67\, P_{\rm orb}$. The NS begins at apastron and has an orbital period of roughly $600\, \rm yr$. As the system evolves, the outer parts of the belt become unstable, increasing the eccentricity of the test particles. By the time the NS approaches periastron, the majority of the debris disc has already been scattered. This instability extends throughout the belt as time increases. The belt remains stable close to the central star, within $R \lesssim 15\, \rm au$.
Next, we examine the scenario that resembles the NS being captured by a star with a debris belt. The left panel of Figure \[e0p9\] shows the initial setup for the NS capture model, while the right panel shows the final distribution of the debris belt. Much like in the non-capture scenario, the belt becomes unstable as the NS approaches periastron. Figure \[e0p9\_dist\] shows the eccentricity versus semimajor axis distribution of the test-particle population at times $t=0 \, P_{\rm orb}$ and $t=14.97 \, P_{\rm orb}$. Again, as the system evolves, the outer parts of the belt become unstable, increasing the eccentricity of the test particles. Next, we examine the impact rate of the test particles that have become unstable in each scenario.
Numerical collision rate
------------------------
A test particle with heightened eccentricity may impact the central star or the NS, be ejected from the system, or remain within the simulation domain. If a test particle collides with either star, it is counted as an impact and removed from the simulation. Figure \[rate\] shows the impact rate onto the central star and onto the intruding NS in both the non-capture (left panel) and capture (right panel) scenarios. We also show the time of first periastron approach for both models. In both scenarios the NS passes directly through the belt on the first periastron approach; however, there are only two collisions during the first periastron passage in the non-capture scenario and one in the capture scenario. This leads to an interesting prediction: the collision rate is highest during the first orbit and drops quickly in subsequent orbits. FRB 121102 has been observed for almost six years and becomes active time and time again, which does not seem consistent with this prediction. However, since the orbital periods of the simulations are long, the source of FRB 121102 may still be in the first-encounter phase; in this case, we focus on the first encounter and comment on the deficiency of the rate (as above). In any case, the periodicity mentioned by [@Bagchi2017] should be irrelevant. Thus, a NS simply passing through a belt may not experience a large number of collisions.
[@Dai2016] used an asteroid belt analog as the source of debris. The numerical setup in this work made use of a larger Kuiper-belt analog. We now estimate the density of a Kuiper-belt analog that is able to produce the repetitive rate and then compare that with the densities of the current Kuiper belt and the primordial Kuiper belt.
The observed rate of FRB 121102 during its active phase is about $1000\, \rm yr^{-1}$. According to our simulations, the total number of collisions onto the NS in each scenario is of the order of $10$ per $100,000\, \rm yr$ for a disc number density of the order of $10^{-2}\, \rm au^{-3}$. To achieve the repetition rate of $1000\, \rm yr^{-1}$, the density of our Kuiper belt analog would have to increase to $10^{7}\, \rm au^{-3}$. This density would predict $10^{10}$ collisions per $100,000\, \rm yr$; however, the velocity of the NS at periastron in our numerical simulations is two orders of magnitude smaller than observed NS proper-motion velocities, which are of order $100\, \rm km\, s^{-1}$ (see section \[velocity\]). Since the rate is inversely proportional to $v_*$, the numerical results overestimate the collision rate by two orders of magnitude. Thus, scaling the number density of our Kuiper-like belt up by $9$ orders of magnitude would match the repetition rate of $1000\, \rm yr^{-1}$. This density is three orders of magnitude greater than that of the current Kuiper belt and still an order of magnitude greater than that of the primordial Kuiper belt. Note that this scaled density assumes a coplanar intruding NS, which captures the highest rate of collisions. Realistically, the intruding NS would be misaligned with the plane of the debris belt, and the density of the belt would therefore need to be greater than $10^{7}\, \rm au^{-3}$ to match the repetition rate. Recall that [@Dai2016] analytically found the number density required of an asteroid belt to be $10^{9}\, \rm au^{-3}$. Thus, our numerical simulations suggest that a Kuiper belt analog could match the repetition rate only with a density greater than $10^7\, \rm au^{-3}$.
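The scaling argument above can be written out explicitly; a minimal sketch using the round numbers quoted in the text (variable names are our own):

```python
# Collision rate scales as n / v: scale the simulated rate up to the
# observed burst rate while correcting for the unrealistically low
# periastron velocity of the simulated NS.
n_sim = 1.0e-2               # simulated belt density, au^-3
rate_sim = 10.0 / 1.0e5      # ~10 collisions per 1e5 yr -> collisions/yr
rate_obs = 1000.0            # observed bursts per yr during active phase
v_correction = 100.0         # true NS velocity / simulated periastron velocity

n_required = n_sim * (rate_obs / rate_sim) * v_correction
# n_required ~ 1e7 au^-3: nine orders of magnitude above the simulated belt
```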
If the debris disc instead orbited the intruding NS (i.e., if the NS played the role of the central star in our simulations), the rate of impacts would be much lower and the density required to match the observed repetition rate would have to be larger than $10^8\, \rm au^{-3}$.
We find another drawback of the collision model based on our numerical simulations. The repetition of FRB 121102 is quite erratic, with a peak rate of about $3\, \rm hr^{-1}$ during its active phase [@Spitler2016; @Scholz2016; @Palaniswamy2016]. We examine our numerical results for such a short-timescale erratic component. Figure \[time\_int\] shows the number of collisions as a function of the time between successive collisions. The left panel shows the time-interval distribution for an NS eccentricity of $0.5$ and the right panel for an eccentricity of $0.9$. In the former case, the distribution is close to a one-component Gaussian with no short-timescale erratic component; in the latter case, the distribution is also close to a one-component Gaussian. With more initial test particles, such a one-component Gaussian distribution may be enhanced without developing a short-timescale erratic component.
Conclusions {#conc}
===========
We have examined the FRB-asteroid collision model that has been postulated to explain the repeating FRB 121102. We summarize all the findings of the scenario below:
- We first estimated the analytical rate of debris colliding onto an intruding NS, using the density of the primordial Kuiper belt and a low NS natal kick velocity. The primordial Kuiper belt is an extreme case, since the current mass of the Kuiper belt is $1\%$ of its initial mass. Even in this extreme case, the rate is still about three orders of magnitude lower than the observed rate of $3\, \rm h^{-1}$. This supports the findings of [@Dai2016] that the source is most likely not located within a Milky Way analog and that the potential progenitors could be in an extremely rare arrangement.
- We find that the analytical duration of an FRB produced by a comet is consistent with the pulse width of FRB 121102 ($3 \pm 0.5 \, \rm ms$), assuming an average cometary nucleus radius of $5\, \rm km$. This suggests that a comet may be able to produce an FRB, provided that the long cometary tail does not disrupt the coherent emission needed to produce FRBs.
- To compare our analytical interpretation with numerical integrations, we model a Kuiper-like debris disc around a central star with a NS on highly eccentric orbits ($e=0.5$ and $e=0.9$). In each scenario, the debris disc becomes unstable before the NS approaches periastron, which leads most comets to be scattered away from the belt rather than accreted by the NS.
- We estimate how dense our Kuiper-belt analog would have to be in order to reproduce the repetition rate. We constrain the estimated density to be larger than $10^7\, \rm au^{-3}$ to match the observed repeating radio bursts for an intruding NS. If the disc instead orbited the NS, the required density would have to be larger than $10^8\, \rm au^{-3}$. These densities are $3$–$4$ orders of magnitude greater than the current Kuiper belt and $1$–$2$ orders of magnitude greater than the primordial Kuiper belt, even if: (1) one introduces a Kuiper-belt-like comet belt rather than an asteroid belt and assumes that comet impacts can also make FRBs; (2) the NS moves $\sim 2$ orders of magnitude slower than normal proper-motion velocities due to supernova kicks; and (3) the NS orbit is coplanar with the debris belt, which provides the highest rate of collisions.
- Another drawback to this model is that the numerical simulations lack evidence for the erratic behavior of FRB 121102.
We conclude that if repeating FRBs are produced by comets colliding with a NS, the progenitor system must be in an extremely rare arrangement (i.e. an intruding NS plummeting through an extremely dense Kuiper-like comet belt or asteroid belt) to cause the repeating behavior observed in FRB 121102. Thus, we do not rule out the mechanism proposed by [@Dai2016], but the evidence for such arrangements is sparse.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank Z. G. Dai for discussion and an anonymous referee for helpful suggestions. JLS acknowledges support from a graduate fellowship from the Nevada Space Grant Consortium (NVSGC). We acknowledge support from NASA through grants NNX17AB96G and NNX15AK85G. Computer support was provided by UNLV’s National Supercomputing Center.
\[lastpage\]
[^1]: E-mail: [email protected]
---
abstract: 'We describe the Fundamental Neutron Physics Beamline (FnPB) facility located at the Spallation Neutron Source at Oak Ridge National Laboratory. The FnPB was designed for the conduct of experiments that investigate scientific issues in nuclear physics, particle physics, and astrophysics and cosmology using a pulsed slow neutron beam. We present a detailed description of the design philosophy, beamline components, and measured fluxes of the polychromatic and monochromatic beams.'
address:
- '$^{1}$University of Tennessee, Knoxville, TN, USA'
- '$^{2}$Oak Ridge National Laboratory, Oak Ridge, TN, USA'
- '$^{3}$University of Kentucky, Lexington, KY, USA'
- '$^{4}$North Carolina State University, Raleigh, NC, USA'
- '$^{5}$University of Manitoba, Winnipeg, Manitoba, Canada'
- '$^{6}$Indiana University and Center for the Exploration of Energy and Matter, Bloomington, IN, USA'
- '$^{7}$Los Alamos National Laboratory, Los Alamos, NM, USA'
author:
- 'N. Fomin$^{1}$'
- 'G. L. Greene$^{1,2}$'
- 'R. Allen$^{2}$'
- 'V. Cianciolo$^{2}$'
- 'C. Crawford$^{3}$'
- 'T. Ito$^{7}$'
- 'P. R. Huffman$^{2,4}$'
- 'E. B. Iverson$^{2}$'
- 'R. Mahurin$^{5}$'
- 'W. M. Snow$^{6}$'
bibliography:
- 'beamline\_main.bib'
title: Fundamental Neutron Physics Beamline at the Spallation Neutron Source at ORNL
---
---
author:
- 'A. Aab,'
- 'P. Abreu,'
- 'M. Aglietta,'
- 'I. Al Samarai,'
- 'I.F.M. Albuquerque,'
- 'I. Allekotte,'
- 'A. Almela,'
- 'J. Alvarez Castillo,'
- 'J. Alvarez-Muñiz,'
- 'G.A. Anastasi,'
- 'L. Anchordoqui,'
- 'B. Andrada,'
- 'S. Andringa,'
- 'C. Aramo,'
- 'F. Arqueros,'
- 'N. Arsene,'
- 'H. Asorey,'
- 'P. Assis,'
- 'J. Aublin,'
- 'G. Avila,'
- 'A.M. Badescu,'
- 'A. Balaceanu,'
- 'R.J. Barreira Luz,'
- 'J.J. Beatty,'
- 'K.H. Becker,'
- 'J.A. Bellido,'
- 'C. Berat,'
- 'M.E. Bertaina,'
- 'X. Bertou,'
- 'P.L. Biermann,'
- 'P. Billoir,'
- 'J. Biteau,'
- 'S.G. Blaess,'
- 'A. Blanco,'
- 'J. Blazek,'
- 'C. Bleve,'
- 'M. Boháčová,'
- 'D. Boncioli,'
- 'C. Bonifazi,'
- 'N. Borodai,'
- 'A.M. Botti,'
- 'J. Brack,'
- 'I. Brancus,'
- 'T. Bretz,'
- 'A. Bridgeman,'
- 'F.L. Briechle,'
- 'P. Buchholz,'
- 'A. Bueno,'
- 'S. Buitink,'
- 'M. Buscemi,'
- 'K.S. Caballero-Mora,'
- 'L. Caccianiga,'
- 'A. Cancio,'
- 'F. Canfora,'
- 'L. Caramete,'
- 'R. Caruso,'
- 'A. Castellina,'
- 'G. Cataldi,'
- 'L. Cazon,'
- 'A.G. Chavez,'
- 'J.A. Chinellato,'
- 'J. Chudoba,'
- 'R.W. Clay,'
- 'R. Colalillo,'
- 'A. Coleman,'
- 'L. Collica,'
- 'M.R. Coluccia,'
- 'R. Conceição,'
- 'F. Contreras,'
- 'M.J. Cooper,'
- 'S. Coutu,'
- 'C.E. Covault,'
- 'J. Cronin,'
- 'S. D’Amico,'
- 'B. Daniel,'
- 'S. Dasso,'
- 'K. Daumiller,'
- 'B.R. Dawson,'
- 'R.M. de Almeida,'
- 'S.J. de Jong,'
- 'G. De Mauro,'
- 'J.R.T. de Mello Neto,'
- 'I. De Mitri,'
- 'J. de Oliveira,'
- 'V. de Souza,'
- 'J. Debatin,'
- 'O. Deligny,'
- 'C. Di Giulio,'
- 'A. Di Matteo,'
- 'M.L. Díaz Castro,'
- 'F. Diogo,'
- 'C. Dobrigkeit,'
- 'J.C. D’Olivo,'
- 'Q. Dorosti,'
- 'R.C. dos Anjos,'
- 'M.T. Dova,'
- 'A. Dundovic,'
- 'J. Ebr,'
- 'R. Engel,'
- 'M. Erdmann,'
- 'M. Erfani,'
- 'C.O. Escobar,'
- 'J. Espadanal,'
- 'A. Etchegoyen,'
- 'H. Falcke,'
- 'G. Farrar,'
- 'A.C. Fauth,'
- 'N. Fazzini,'
- 'B. Fick,'
- 'J.M. Figueira,'
- 'A. Filipčič,'
- 'O. Fratu,'
- 'M.M. Freire,'
- 'T. Fujii,'
- 'A. Fuster,'
- 'R. Gaior,'
- 'B. García,'
- 'D. Garcia-Pinto,'
- 'F. Gaté,'
- 'H. Gemmeke,'
- 'A. Gherghel-Lascu,'
- 'P.L. Ghia,'
- 'U. Giaccari,'
- 'M. Giammarchi,'
- 'M. Giller,'
- 'D. Głas,'
- 'C. Glaser,'
- 'G. Golup,'
- 'M. Gómez Berisso,'
- 'P.F. Gómez Vitale,'
- 'N. González,'
- 'A. Gorgi,'
- 'P. Gorham,'
- 'A.F. Grillo,'
- 'T.D. Grubb,'
- 'F. Guarino,'
- 'G.P. Guedes,'
- 'M.R. Hampel,'
- 'P. Hansen,'
- 'D. Harari,'
- 'T.A. Harrison,'
- 'J.L. Harton,'
- 'A. Haungs,'
- 'T. Hebbeker,'
- 'D. Heck,'
- 'P. Heimann,'
- 'A.E. Herve,'
- 'G.C. Hill,'
- 'C. Hojvat,'
- 'E. Holt,'
- 'P. Homola,'
- 'J.R. Hörandel,'
- 'P. Horvath,'
- 'M. Hrabovský,'
- 'T. Huege,'
- 'J. Hulsman,'
- 'A. Insolia,'
- 'P.G. Isar,'
- 'I. Jandt,'
- 'S. Jansen,'
- 'J.A. Johnsen,'
- 'M. Josebachuili,'
- 'A. Kääpä,'
- 'O. Kambeitz,'
- 'K.H. Kampert,'
- 'I. Katkov,'
- 'B. Keilhauer,'
- 'E. Kemp,'
- 'J. Kemp,'
- 'R.M. Kieckhafer,'
- 'H.O. Klages,'
- 'M. Kleifges,'
- 'J. Kleinfeller,'
- 'R. Krause,'
- 'N. Krohm,'
- 'D. Kuempel,'
- 'G. Kukec Mezek,'
- 'N. Kunka,'
- 'A. Kuotb Awad,'
- 'D. LaHurd,'
- 'M. Lauscher,'
- 'R. Legumina,'
- 'M.A. Leigui de Oliveira,'
- 'A. Letessier-Selvon,'
- 'I. Lhenry-Yvon,'
- 'K. Link,'
- 'L. Lopes,'
- 'R. López,'
- 'A. López Casado,'
- 'Q. Luce,'
- 'A. Lucero,'
- 'M. Malacari,'
- 'M. Mallamaci,'
- 'D. Mandat,'
- 'P. Mantsch,'
- 'A.G. Mariazzi,'
- 'I.C. Mariş,'
- 'G. Marsella,'
- 'D. Martello,'
- 'H. Martinez,'
- 'O. Martínez Bravo,'
- 'J.J. Masías Meza,'
- 'H.J. Mathes,'
- 'S. Mathys,'
- 'J. Matthews,'
- 'J.A.J. Matthews,'
- 'G. Matthiae,'
- 'E. Mayotte,'
- 'P.O. Mazur,'
- 'C. Medina,'
- 'G. Medina-Tanco,'
- 'D. Melo,'
- 'A. Menshikov,'
- 'M.I. Micheletti,'
- 'L. Middendorf,'
- 'I.A. Minaya,'
- 'L. Miramonti,'
- 'B. Mitrica,'
- 'D. Mockler,'
- 'S. Mollerach,'
- 'F. Montanet,'
- 'C. Morello,'
- 'M. Mostafá,'
- 'A.L. Müller,'
- 'G. Müller,'
- 'M.A. Muller,'
- 'S. Müller,'
- 'R. Mussa,'
- 'I. Naranjo,'
- 'L. Nellen,'
- 'P.H. Nguyen,'
- 'M. Niculescu-Oglinzanu,'
- 'M. Niechciol,'
- 'L. Niemietz,'
- 'T. Niggemann,'
- 'D. Nitz,'
- 'D. Nosek,'
- 'V. Novotny,'
- 'H. Nožka,'
- 'L.A. Núñez,'
- 'L. Ochilo,'
- 'F. Oikonomou,'
- 'A. Olinto,'
- 'M. Palatka,'
- 'J. Pallotta,'
- 'P. Papenbreer,'
- 'G. Parente,'
- 'A. Parra,'
- 'T. Paul,'
- 'M. Pech,'
- 'F. Pedreira,'
- 'J. Pȩkala,'
- 'R. Pelayo,'
- 'J. Peña-Rodriguez,'
- 'L. A. S. Pereira,'
- 'M. Perlín,'
- 'L. Perrone,'
- 'C. Peters,'
- 'S. Petrera,'
- 'J. Phuntsok,'
- 'R. Piegaia,'
- 'T. Pierog,'
- 'P. Pieroni,'
- 'M. Pimenta,'
- 'V. Pirronello,'
- 'M. Platino,'
- 'M. Plum,'
- 'C. Porowski,'
- 'R.R. Prado,'
- 'P. Privitera,'
- 'M. Prouza,'
- 'E.J. Quel,'
- 'S. Querchfeld,'
- 'S. Quinn,'
- 'R. Ramos-Pollan,'
- 'J. Rautenberg,'
- 'D. Ravignani,'
- 'B. Revenu,'
- 'J. Ridky,'
- 'M. Risse,'
- 'P. Ristori,'
- 'V. Rizi,'
- 'W. Rodrigues de Carvalho,'
- 'G. Rodriguez Fernandez,'
- 'J. Rodriguez Rojo,'
- 'D. Rogozin,'
- 'M.J. Roncoroni,'
- 'M. Roth,'
- 'E. Roulet,'
- 'A.C. Rovero,'
- 'P. Ruehl,'
- 'S.J. Saffi,'
- 'A. Saftoiu,'
- 'F. Salamida,'
- 'H. Salazar,'
- 'A. Saleh,'
- 'F. Salesa Greus,'
- 'G. Salina,'
- 'F. Sánchez,'
- 'P. Sanchez-Lucas,'
- 'E.M. Santos,'
- 'E. Santos,'
- 'F. Sarazin,'
- 'R. Sarmento,'
- 'C.A. Sarmiento,'
- 'R. Sato,'
- 'M. Schauer,'
- 'V. Scherini,'
- 'H. Schieler,'
- 'M. Schimp,'
- 'D. Schmidt,'
- 'O. Scholten,'
- 'P. Schovánek,'
- 'F.G. Schröder,'
- 'A. Schulz,'
- 'J. Schulz,'
- 'J. Schumacher,'
- 'S.J. Sciutto,'
- 'A. Segreto,'
- 'M. Settimo,'
- 'A. Shadkam,'
- 'R.C. Shellard,'
- 'G. Sigl,'
- 'G. Silli,'
- 'O. Sima,'
- 'A. Śmiałkowski,'
- 'R. Šmída,'
- 'G.R. Snow,'
- 'P. Sommers,'
- 'S. Sonntag,'
- 'J. Sorokin,'
- 'R. Squartini,'
- 'D. Stanca,'
- 'S. Stanič,'
- 'J. Stasielak,'
- 'P. Stassi,'
- 'F. Strafella,'
- 'F. Suarez,'
- 'M. Suarez Durán,'
- 'T. Sudholz,'
- 'T. Suomijärvi,'
- 'A.D. Supanitsky,'
- 'J. Swain,'
- 'Z. Szadkowski,'
- 'A. Taboada,'
- 'O.A. Taborda,'
- 'A. Tapia,'
- 'V.M. Theodoro,'
- 'C. Timmermans,'
- 'C.J. Todero Peixoto,'
- 'L. Tomankova,'
- 'B. Tomé,'
- 'G. Torralba Elipe,'
- 'P. Travnicek,'
- 'M. Trini,'
- 'R. Ulrich,'
- 'M. Unger,'
- 'M. Urban,'
- 'J.F. Valdés Galicia,'
- 'I. Valiño,'
- 'L. Valore,'
- 'G. van Aar,'
- 'P. van Bodegom,'
- 'A.M. van den Berg,'
- 'A. van Vliet,'
- 'E. Varela,'
- 'B. Vargas Cárdenas,'
- 'G. Varner,'
- 'J.R. Vázquez,'
- 'R.A. Vázquez,'
- 'D. Veberič,'
- 'I.D. Vergara Quispe,'
- 'V. Verzi,'
- 'J. Vicha,'
- 'L. Villaseñor,'
- 'S. Vorobiov,'
- 'H. Wahlberg,'
- 'O. Wainberg,'
- 'D. Walz,'
- 'A.A. Watson,'
- 'M. Weber,'
- 'A. Weindl,'
- 'L. Wiencke,'
- 'H. Wilczyński,'
- 'T. Winchen,'
- 'M. Wirtz,'
- 'D. Wittkowski,'
- 'B. Wundheiler,'
- 'L. Yang,'
- 'D. Yelos,'
- 'A. Yushkov,'
- 'E. Zas,'
- 'D. Zavrtanik,'
- 'M. Zavrtanik,'
- 'A. Zepeda,'
- 'B. Zimmermann,'
- 'M. Ziolkowski,'
- 'and Z. Zong'
title: '**Search for photons with energies above 10$^{18}$ eV using the hybrid detector of the Pierre Auger Observatory**'
---
Introduction {#sect:intro}
============
Ultra-high energy (UHE) photons are among the possible particles contributing to the flux of cosmic rays. A flux of UHE photons is expected from the decay of $\pi^0$ particles produced by protons interacting with the cosmic microwave background (CMB) in the so-called Greisen-Zatsepin-Kuz’min (GZK) effect [@GZK1; @GZK2]. The energy threshold of the process is about 10$^{19.5}$ eV and the photons carry on average around 10% of the energy of the primary incident proton. The energy loss of the GZK protons limits their range to about a hundred Mpc: only sources within this horizon contribute to the observed cosmic-ray flux above the GZK energy threshold, producing a cut-off with respect to a continuation of the power-law energy spectrum. A flux suppression has been observed [@HiResSpectrum; @AugerSpectrum; @AugerSpectrum2; @TASpectrum], but the current experimental results are not sufficient to exclude other possible scenarios, such as a limitation in the maximal acceleration energy of cosmic rays at the source. A combined fit of the energy spectrum and the mass composition measured by the Pierre Auger Observatory [@NIM2015] - under simple assumptions on the astrophysical sources and on the propagation of cosmic rays - seems to favor the latter scenario [@DiMatteo]. Results obtained by the Telescope Array Collaboration prefer a GZK scenario when the observed mass composition is interpreted as proton-dominated up to the highest energies [@TAFluxInterpretation]. This interpretation is however challenged by the limits on cosmogenic neutrino fluxes [@Heinze:2015hhp; @Aartsen:2016ngq; @AugerNeutrino] and by the observed diffuse sub-TeV $\gamma$-radiation (see for example [@Berezinsky2016]). Within this context, the observation of GZK (or “cosmogenic”) photons (and neutrinos) would be an independent proof of the GZK process.
The expected flux of GZK photons is estimated to be of the order of 0.01-0.1% of the cosmic-ray flux, depending on the astrophysical model (e.g., mass composition and spectral shape at the source) [@Gelmini; @Hooper; @Kampert].
Moreover, a large flux of UHE photons is predicted in top-down models with ultra-high-energy cosmic rays (UHECR) originating from the decay of supermassive particles. Some of these models, severely constrained by previous experimental results on UHE photons [@AugerPhotonSD; @AugerPhotonHybrid; @AugerPhotonHybridICRC; @AugerPhotonSD2015], have been recently re-proposed to accommodate the existing photon limits and to test the lifetime-and-mass parameter space of putative Super Heavy Dark Matter (SHDM) particles [@AloisioSHDM]. As opposed to neutrinos, photons undergo interactions with the extragalactic background light (EBL) inducing electromagnetic cascades, see e.g. [@EleCa]. This makes photons sensitive to the extragalactic environment (e.g. EBL, magnetic fields). New physics scenarios (e.g., violation of Lorentz invariance, photon-axion conversion) related to interaction or propagation effects can also be tested with photons and neutrinos (see for example [@LIVphotons; @LIVphotons2; @LIVneutrinos; @axions]).
The production of UHE photons at astrophysical sources accelerating high-energy hadrons has been tested performing a blind search for excesses of photon-like events over the sky exposed to the Pierre Auger Observatory [@AugerDirectionalPhotons] and searching for a correlation with the directions of targeted sources [@AugerTargetPhotons]. These analyses consider events in the energy region between 10$^{17.3}$ and 10$^{18.5}$ eV. The reported null results set bounds to the photon flux emitted by discrete sources and on the extrapolation of $E^{-2}$ energy spectra of TeV (1 TeV = 10$^{12}$ eV) gamma-ray sources within or near the Galaxy.
No photons with energies above 1 EeV (10$^{18}$ eV) have been definitively identified so far, bounding their presence in the cosmic-ray flux to less than a few percent. Two analyses have been conducted by the Pierre Auger Collaboration in previous work, each one optimizing the energy range to the sensitivity of the two independent detectors comprising the Observatory. The photon detection efficiency of the surface detector enables a photon search with large event statistics at energies above 10 EeV [@AugerPhotonSD]. The analysis, recently updated in [@AugerPhotonSD2015], constrains the integral photon flux to less than $1.9\times 10^{-3}$, $1.0\times10^{-3}$ and $4.9\times10^{-4}$ km$^{-2}$ sr$^{-1}$ yr$^{-1}$ above 10, 20 and 40 EeV, respectively. A second analysis, based on the detection of air showers with the fluorescence telescopes operating in hybrid mode, extended the energy range down to 2 EeV with statistics lower by a factor of 10 because of the detector duty cycle [@AugerPhotonHybrid]. In that work the identification of photon-induced air showers relied on the measurement of the depth of the air-shower maximum for a sub-sample of hybrid events geometrically constrained to ensure a composition-independent detection efficiency. Upper limits were placed on the integral photon fraction of 3.8%, 2.4%, 3.5% and 11.7% above 2, 3, 5 and 10 EeV, respectively. A novel approach, combining the shower maximum observed by the fluorescence telescopes and the signal at ground measured by the surface detectors, is presented here. With respect to [@AugerPhotonHybrid], the data set is extended by six more years of data, and the improved background rejection together with a less stringent data selection achieves, for the first time, the sensitivity required to explore photon fractions in the all-particle flux down to 0.1% and extends the search for photons to 1 EeV.
The paper is organized as follows. After a brief description of the Pierre Auger Observatory (section \[sect:Auger\]), the observables sensitive to the electromagnetic and hadronic nature of extensive air showers (EAS) are introduced in section \[sect:observables\]. The analysis is applied to 9 years of high-quality selected data as discussed in section \[sect:dataset\]. The multi-variate analysis tuned to identify photon-like events is described in section \[sect:analysis\]. In the absence of any significant signal, upper limits on the integral photon flux are derived. Results and systematic uncertainties are reported in section \[sect:results\]. A discussion is given in section \[sect:conclusions\] of constraints on astrophysical and exotic models for the origin of UHECRs along with expectations of more sensitive searches for UHE photons in the future.
The Pierre Auger Observatory {#sect:Auger}
============================
The Pierre Auger Observatory is located in Malargüe, Argentina, and consists of a surface detector (SD) array of 1660 water Cherenkov stations deployed over a triangular grid of 1.5 km spacing and covering an area of 3000 km$^2$. The stations sample the density of the secondary particles of the air shower at the ground and are sensitive to the electromagnetic, muonic and hadronic components. The Cherenkov light produced in the water volume of the station is collected by three photo-multiplier tubes (PMTs) and measured in units of VEM (Vertical Equivalent Muon, i.e. the signal produced by a muon traversing the station vertically). The signals are acquired and sent to the central acquisition system if they are above a threshold of 1.75 VEM in the three PMTs or if they match the time-over-threshold (ToT) algorithm requirements of at least 13 time bins above a threshold of 0.2 VEM in a 3 $\mu$s window for at least two PMTs. The threshold trigger selects large signals, not necessarily spread in time, and is mostly effective for the detection of inclined showers for which only the muonic component reaches the ground. On the other hand, the ToT trigger selects signals spread in time and is thus more efficient for events with arrival directions closer to the zenith [@SDtrigg].
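The two station-level trigger conditions quoted above can be sketched in code. This is an illustrative reconstruction of the ToT logic, not the Observatory's firmware: the 25 ns sampling assumed to turn the 3 $\mu$s window into 120 time bins is an assumption of this sketch.

```python
def tot_trigger(pmt_traces, thresh_vem=0.2, min_bins=13, window_bins=120):
    """Time-over-threshold sketch: fire if at least two PMTs each have
    >= min_bins samples above thresh_vem within a sliding window.
    window_bins = 120 assumes 25 ns sampling, i.e. a 3 microsecond window."""
    def pmt_fires(trace):
        above = [1 if s > thresh_vem else 0 for s in trace]
        count = sum(above[:window_bins])
        if count >= min_bins:
            return True
        for i in range(window_bins, len(above)):
            count += above[i] - above[i - window_bins]  # slide the window by one bin
            if count >= min_bins:
                return True
        return False

    return sum(pmt_fires(t) for t in pmt_traces) >= 2
```

The threshold trigger, by contrast, would simply require the peak signal to exceed 1.75 VEM in all three PMTs.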
The SD array is overlooked by 27 telescopes grouped in 5 buildings forming the fluorescence detector (FD) [@FDpaper]. The FD observes the longitudinal development of the shower by detecting the fluorescence and Cherenkov light emitted during the passage of the secondary particles of the shower in the atmosphere. Unlike the SD, the fluorescence telescopes work only during clear and moonless nights, for an average duty cycle of about 14% [@exposure2010].
The presence of aerosols and clouds alters the intensity of light collected by the telescopes, the FD trigger efficiency and the observed longitudinal profile. Several monitoring systems are installed to measure the aerosol content and the cloud coverage. The vertical aerosol optical depth (VAOD) is measured using two lasers deployed at the center of the array (the Central Laser Facility, CLF, and the eXtreme Laser Facility, XLF) [@aerosol; @aerosol2; @CLF1; @CLF2]. Close to each FD site, a lidar system [@Lidar] provides a cross-check of the aerosol content and measures the coverage and height of the clouds. In addition, the cloud coverage for each pixel of the FD is inferred from the analysis of the images acquired by the infrared cameras installed on the roof of the FD buildings [@clouds].
If at least one SD station detects a signal in time and spatial coincidence with the FD, a hybrid reconstruction can be performed [@FDpaper]. In the hybrid mode the geometry of the event is determined from the arrival time of the light at the FD pixels with the additional constraint provided by the timing information from the SD. The longitudinal profile is then reconstructed taking into account the scattering and absorption of light from the shower axis to the telescope. It is the main measurement for determining the energy of the primary cosmic ray and constraining its mass [@Xmax]. The depth, [$X_{\rm max}$]{}, at which the shower reaches its maximum development is directly derived from the fit of a Gaisser-Hillas function [@Gaisser-Hillas] to the longitudinal profile of the air shower. The parameter [$X_{\rm max}$]{}is well known to be anti-correlated with the mass of the primary cosmic ray at any fixed energy. The total energy of the primary particle is determined from the integral of the fitted Gaisser-Hillas function, corrected for the invisible energy [@Mariazzi] carried by penetrating particles (mostly neutrinos and muons). The correction is about 1% for electromagnetic showers and 10-15% for nuclear primaries, depending only weakly on the primary mass and on the choice of hadronic interaction model.
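The energy determination described above can be illustrated with a short numerical sketch. The profile normalization, the shape parameters $X_0$ and $\lambda$, and the 12% invisible-energy correction below are illustrative placeholder values, not Auger fit results.

```python
import numpy as np

def gaisser_hillas(X, dEdX_max, X_max, X0=-121.0, lam=61.0):
    """Gaisser-Hillas longitudinal profile dE/dX (X in g/cm^2).
    X0 and lam are typical illustrative shape parameters, not fitted values."""
    z = (X - X0) / (X_max - X0)
    return dEdX_max * np.power(z, (X_max - X0) / lam) * np.exp((X_max - X) / lam)

# Hypothetical fitted profile for an EeV-scale shower
X = np.linspace(0.0, 2000.0, 4001)                        # slant depth, g/cm^2
profile = gaisser_hillas(X, dEdX_max=1.5e6, X_max=750.0)  # GeV per g/cm^2

# Calorimetric energy: trapezoidal integral of the fitted profile over depth
E_cal = float(np.sum(0.5 * (profile[1:] + profile[:-1]) * np.diff(X)))

# Total energy after an assumed ~12% invisible-energy correction
E_total = E_cal / (1.0 - 0.12)
```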
Unless otherwise specified, in this paper the photon energy $E_\gamma$ is used as the default for simulations and data, independently of the nature of the primary particle.
![ Distributions of the zenith angle (left) and the distance of the shower axis to the FD (right) are shown as examples of the agreement between time-dependent simulation (histograms) and data (markers) in two separate energy intervals (below and above the “ankle” spectral feature $E_{\rm{ankle}} \simeq 10^{18.68}$ eV [@AugerSpectrum2]). Events in data and simulations are selected applying the criteria described in section \[sect:dataset\], with the exception of the energy cut. Simulations are re-weighted according to the spectral index given in [@AugerSpectrum2] and a mixed composition (50% proton - 50% iron) is assumed.[]{data-label="fig:dataMC"}](figure1a.pdf "fig:"){width="49.00000%"} ![ Distributions of the zenith angle (left) and the distance of the shower axis to the FD (right) are shown as examples of the agreement between time-dependent simulation (histograms) and data (markers) in two separate energy intervals (below and above the “ankle” spectral feature $E_{\rm{ankle}} \simeq 10^{18.68}$ eV [@AugerSpectrum2]). Events in data and simulations are selected applying the criteria described in section \[sect:dataset\], with the exception of the energy cut. Simulations are re-weighted according to the spectral index given in [@AugerSpectrum2] and a mixed composition (50% proton - 50% iron) is assumed.[]{data-label="fig:dataMC"}](figure1b.pdf "fig:"){width="49.00000%"}
Observables for the photon search {#sect:observables}
=================================
The search for UHE photon primaries is based on the different development and particle content of electromagnetic and hadronic air showers. Photon-induced electromagnetic cascades develop more slowly than hadronic ones, so that [$X_{\rm max}$]{}is reached deeper in the atmosphere, closer to the ground. Proton and photon simulated showers have average [$X_{\rm max}$]{}values that differ by about 200 g/cm$^{2}$ in the EeV energy range. This difference is enhanced at energies above 10$^{19}$ eV because of the Landau-Pomeranchuk-Migdal (LPM) effect [@LPM1; @LPM2]. At higher energies, above 50 EeV, photons have a non-negligible probability to convert in the geomagnetic field [@preshower1; @preshower2; @Homola2007], producing a bunch of low-energy electromagnetic particles, called a “pre-shower”, entering the atmosphere. The [$X_{\rm max}$]{}of the pre-showered cascades is smaller than for non-converted ones and the separation between the average [$X_{\rm max}$]{}for photon and proton primaries is reduced.
The shower development and the nature of the primary cosmic ray determine the content and the shape of the distribution of particles at ground as a function of the distance from the shower axis (Lateral Distribution Function, LDF). Photon-induced showers generally have a steeper LDF compared to hadron primaries because of the sub-dominant role played by the flatter muonic component. The high-energy effects (LPM and pre-showering) do not affect the muon content, however the different stage of shower development (i.e., [$X_{\rm max}$]{}) leads to a modification of the observed LDF. Given the steeper LDF and the muon-driven SD triggers, the footprint at the ground, and consequently the number $N_{\rm{stat}}$ of triggered stations, is typically smaller for electromagnetic showers [@LTP]. These features are combined in the observable $S_b$ [@Ros]: $$S_b = \sum_{i}^N S_i \left( \frac{R_i}{R_0} \right)^{b}$$ where $S_i$ and $R_i$ are the signal and the distance from the shower axis of the $i$-th station, $R_0 = 1000$ m is a reference distance and $b = 4$ is a constant optimized to have the best separation power between photon and nuclear primaries in the energy region above 10$^{18}$ eV.
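The $S_b$ observable is a straightforward sum over the triggered stations; a minimal sketch follows, where the station signals and axis distances are made-up numbers for illustration.

```python
def s_b(signals_vem, distances_m, r0=1000.0, b=4.0):
    """S_b = sum_i S_i * (R_i / R_0)^b over the triggered stations,
    with S_i in VEM and R_i in metres."""
    return sum(s * (r / r0) ** b for s, r in zip(signals_vem, distances_m))

# Three hypothetical stations: signals in VEM, distances from the axis in m
sb = s_b([9.0, 2.0, 0.5], [500.0, 1200.0, 2000.0])  # about 12.71 VEM
```

The steep weight $(R_i/R_0)^4$ makes $S_b$ dominated by signals far from the axis, which is where the flatter muonic component of hadronic showers stands out.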
Detailed simulations of the air showers and of the detector response have been performed to study the photon/hadron discrimination. A data set of about 60000 photon-induced showers has been generated with CORSIKA version 6.990 [@CORSIKA], with energies between 10$^{17}$ eV and 10$^{20}$ eV following an $E^{-1}$ spectrum in bins of 0.5 in the logarithm of energy. Events are sampled from an isotropic distribution, with the zenith angle $\theta$ ranging between 0 and 65 degrees. The azimuth angle $\phi$ is uniformly distributed between 0 and 360$^\circ$. Pre-showering and LPM effects are included in the simulations. Proton and iron showers are simulated with CORSIKA version 7.4002 adopting the most up-to-date hadronic interaction models, EPOS LHC [@epos-lhc] and QGSJET-II-04 [@qgsjet2-04]. A total of 25000 showers has been generated for each hadronic model and primary type. Each shower is resampled 5 times, each time with a different impact point at ground, uniformly distributed within an area enclosing the array plus a border such that the trigger efficiency of each surface station is less than 1% outside it [@LTP]. Events are processed through the Offline software [@Offline], which includes a detailed simulation of the FD, the light propagation from the shower to the FD camera, and a Geant4-based [@Geant4] simulation of the SD. A time-dependent approach, developed for the energy spectrum in [@AugerSpectrum2012], is used for a realistic estimate of the detection efficiency and the discrimination performance. In this approach, the actual status of the FD and the SD, as well as the atmospheric conditions, are taken into account and the events are distributed according to the on-time of the hybrid detector. As a validation of the procedure, Fig. \[fig:dataMC\] shows the comparison between data and simulations for two reconstructed observables (zenith angle, left, and the shower-axis distance from the telescope, right) in two energy intervals. Fig. \[fig:scatter\] shows the correlation between the discriminating observables [$X_{\rm max}$]{}, $S_b$ and $N_{\rm{stat}}$ for selected samples of well-reconstructed photon (blue circles) and proton (red stars) events, the latter being the main source of background for this study.
![Correlation between the discriminating observables used in the multivariate analysis for the energy range $10^{18} < E_{\gamma} < 10^{19}$ eV: the red stars and the blue circles are the proton and photon simulated events, respectively. Events are selected applying the criteria in section \[sect:dataset\]. For a better visibility of the plot only 5% of events are plotted and a shift of 0.25 is applied to $N_{\rm{stat}}$ for proton events.[]{data-label="fig:scatter"}](figure2.png){width="96.00000%"}
Data set {#sect:dataset}
========
Criteria N events efficiency \[%\]
--------------------------- ---------- ------------------
Trigger 3306730 –
Detector 1490335 45.07
Geometry 610192 40.94
Profile 62776 10.29
$E_{\gamma} > 10^{18}$ eV 18968 30.22
$S_{b}$ 17297 91.19
Atmosphere 8178 47.28
: Event selection criteria, number of events after each cut and selection efficiency with respect to the previous cut.[]{data-label="tab:eff"}
The analysis presented in this work uses hybrid data collected between January 2005 and December 2013. Selection criteria are applied to ensure a good geometry and profile reconstruction and a reliable measurement of the discriminating observables. These cuts are detailed below.
#### [**Trigger and detector levels.**]{} {#trigger-and-detector-levels. .unnumbered}
The initial data set (trigger level) consists of all events passing the very loose trigger requirements of the data acquisition [@FDpaper]. Consequently it includes a fraction of triggers that are not due to air showers (e.g., lightning, or low-energy events with a random-coincidence station); these are discarded. Data periods without good FD or SD working conditions, mostly during the construction phase of the observatory (e.g., camera calibrations in the FD and unstable conditions of the SD trigger), are rejected.
#### [**Geometry level.**]{} {#geometry-level. .unnumbered}
The station selected in the hybrid reconstruction is required to be within 1500 m of the shower axis and its timing has to be within 200 ns of the expected arrival time of the shower front [@FDpaper]. Hybrid events with a successful reconstruction of the shower axis (the $\chi^2$ of the temporal fit has to be smaller than 7) and with a zenith angle up to 60$^\circ$ are considered. More inclined events are not included in this analysis because of the absorption of the electromagnetic component of the EAS in the atmosphere and the resulting small trigger efficiency for photons at low energies. As a quality selection criterion, the angular track length, defined as the angular separation between the highest and lowest FD pixels in the track, is required to be larger than 15$^{\circ}$. With these cuts, a resolution better than 50 m on the core position and better than 0.6$^\circ$ on the arrival direction is obtained for events with energy above 10$^{18}$ eV. Events are selected if they land within a fiducial distance from the telescope for which the FD trigger efficiency is flat within 5% when shifting the energy scale by $\pm$14% [@AugerSpectrum2012]. This distance, parameterized in different energy intervals, is based on simulations and is mostly independent of the mass composition and hadronic models. It is around 14 km at 10$^{18}$ eV and 30 km at 10$^{19}$ eV.
#### [**Profile level.**]{} {#profile-level. .unnumbered}
For a reliable measurement of [$X_{\rm max}$]{}and of the energy, the goodness of the Gaisser-Hillas fit is tested by requiring a reduced $\chi^2$ smaller than 2.5. Requiring a viewing angle between the shower axis and the telescope larger than 20$^\circ$ rejects events pointing toward the FD, which have a large Cherenkov light contamination. To avoid biases in the reconstruction of the longitudinal profile, [$X_{\rm max}$]{}has to be observed within the field of view of the telescope and gaps in the profile have to be shorter than 20% of the total observed length. To reject events with a flat profile, for which the [$X_{\rm max}$]{}determination is less reliable, the ratio between the $\chi^{2}$ of a Gaisser-Hillas fit and that of a linear fit of the profile is required to be smaller than 0.9 [@AugerPhotonHybrid]. Events are selected if the relative uncertainty on the reconstructed energy is smaller than 20%. These criteria ensure an energy resolution between 10 and 15%, improving with energy, and an [$X_{\rm max}$]{}resolution from about 20 g/cm$^{2}$ at 10$^{18}$ eV to about 15 g/cm$^2$ above 10$^{19}$ eV.
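Collected in code, the profile-level selection amounts to a simple boolean filter; the event-record field names below are hypothetical, chosen only to mirror the cuts listed above.

```python
def passes_profile_cuts(ev):
    """Profile-level quality cuts as a boolean filter.
    `ev` is a dict with hypothetical (illustrative) field names."""
    return (ev["gh_chi2_red"] < 2.5                      # Gaisser-Hillas fit quality
            and ev["viewing_angle_deg"] > 20.0           # limit Cherenkov contamination
            and ev["xmax_in_fov"]                        # Xmax inside the field of view
            and ev["profile_gap_fraction"] < 0.20        # gaps < 20% of observed length
            and ev["gh_chi2"] / ev["linear_chi2"] < 0.9  # reject flat profiles
            and ev["rel_energy_error"] < 0.20)           # energy resolution cut

# A hypothetical event record that passes all cuts
event = dict(gh_chi2_red=1.2, viewing_angle_deg=35.0, xmax_in_fov=True,
             profile_gap_fraction=0.05, gh_chi2=40.0, linear_chi2=100.0,
             rel_energy_error=0.10)
accepted = passes_profile_cuts(event)
```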
#### [**$S_b$ selections.**]{} {#s_b-selections. .unnumbered}
Artificially small values of $S_b$ and $N_{\rm{stat}}$ can be obtained for events landing in a region of the array close to the borders, or where station deployment was incomplete (during the construction phase of the Observatory), or where stations were inactive because of temporary detector inefficiencies. To reject such events, which would mimic photon candidates, at least 4 active stations are required within the first 1500 m hexagon around the station with the largest signal. This criterion rejects 9% of the events.
#### [**Atmosphere.**]{} {#atmosphere. .unnumbered}
To minimize biases from possible distortions of the longitudinal profile produced by clouds, a measurement of the cloud coverage by infrared camera or by the lidar system is required to be available and to be lower than 25%. Time periods without information on the aerosol content of the atmosphere or with poor viewing conditions are excluded requiring that the measured vertical aerosol optical depth (VAOD), integrated from the ground to 3 km, is smaller than 0.1.\
The selection efficiencies with respect to the full set of recorded events are given in table \[tab:eff\]. The final data set among which photon candidates are searched for contains 8178 events with energy $E_\gamma$ larger than 10$^{18}$ eV.
![Left: curve of the background rejection efficiency against the signal efficiency for different algorithms and observables. Right: distribution of the Boosted Decision Tree observables for signal (photon, blue), background (proton, red) and data (black). For simulations both the training and the test samples are shown. The cut at the median of the photon distribution is indicated by the dashed line. QGSJET-II-04 used as high-energy hadronic interaction model.[]{data-label="fig:MVA"}](figure3a.pdf "fig:"){width="48.00000%"} ![Left: curve of the background rejection efficiency against the signal efficiency for different algorithms and observables. Right: distribution of the Boosted Decision Tree observables for signal (photon, blue), background (proton, red) and data (black). For simulations both the training and the test samples are shown. The cut at the median of the photon distribution is indicated by the dashed line. QGSJET-II-04 used as high-energy hadronic interaction model.[]{data-label="fig:MVA"}](figure3b.pdf "fig:"){width="50.50000%"}
Analysis {#sect:analysis}
========
To identify a possible photon signal among the large background due to hadronic primaries, a multivariate analysis is performed adopting different algorithms. The Boosted Decision Tree (BDT) has been found to provide the best separation. This method also has the advantage of being more stable against the inclusion of observables with weak discriminating power. The variable ranking gives [$X_{\rm max}$]{}as the strongest variable, followed by $S_b$ and $N_{\rm{stat}}$. To take into account the energy and angular dependences of these three observables, the energy and zenith angle are included in the multivariate analysis. A test excluding the least significant discriminating observable, $N_{\rm{stat}}$, has been performed to evaluate its impact on the separation power. The background rejection versus signal efficiency for the BDT using all observables and for the case excluding $N_{\rm{stat}}$ is shown in Fig. \[fig:MVA\] (left). For a photon selection efficiency $\epsilon_\gamma$ = 50% the use of $N_{\rm{stat}}$ reduces the background contamination by more than a factor of 2, from 0.37% to 0.14%. Thus the analysis is performed considering all the discussed observables. In the preliminary analysis presented in [@AugerPhotonHybridICRC], a Fisher method trained only with [$X_{\rm max}$]{}and $S_b$ and optimized in three different energy ranges was adopted for the sake of simplicity. For comparison, the performance of the Fisher algorithm is also illustrated in Fig. \[fig:MVA\] (left). Its background rejection efficiency is found to be around 99% for $\epsilon_\gamma=$ 50%. In the multivariate analysis events are weighted according to a power-law spectrum $E^{-\Gamma}$ with $\Gamma = 2$. The performance of the BDT (using all the discriminating observables) has been tested against variations of the spectral index. For a simulated flux with $\Gamma = 1.5$ and $\Gamma = 2.5$, the background contamination at 50% photon efficiency is 0.07% and 0.24%, respectively (cf. 0.14% obtained in the case $\Gamma = 2$). These results are expected given the larger (smaller) contribution of the highest-energy events, for which [$X_{\rm max}$]{}and $S_{b}$ have better separation.
The BDT response is shown in Fig. \[fig:MVA\] (right) for data and for photon and proton QGSJET-II-04 simulations. The discrepancy between the data and the proton simulations is consistent with the current experimental indications of a mass composition evolving from light to heavier in the EeV range [@Xmax; @XmaxDistrib; @XmaxS1000] and with the muon deficit observed in simulations with respect to the Auger data [@Farrar; @muon_horiz]. To identify photons, a cut is defined at the median of the BDT response distribution for photons. This way, the signal efficiency remains constant, independently of the composition and hadronic-model assumptions. Events having a BDT response larger than the median cut (dashed vertical line in Fig. \[fig:MVA\], right) are selected as “photon candidates”. A background contamination of $\sim$ 0.14% is obtained for proton showers using QGSJET-II-04, and it becomes $\sim$ 0.21% when the EPOS LHC model is used. This background level overestimates the one expected in data because of the composition and muon arguments discussed above. As a reference, the multivariate analysis has been performed providing a mixture of 50% proton and 50% iron as input to the training phase. The background contamination in this case reduces to $\sim$ 0.04%, with the main contribution coming from the smaller values of [$X_{\rm max}$]{}. For the available data set, this background contamination corresponds to 11.4 (3.3) events in the case of a proton (mixed) composition, assuming the QGSJET-II-04 model.
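The median-cut strategy can be made concrete with a small numpy sketch, here using the simpler Fisher discriminant mentioned in the text rather than a BDT; the Gaussian toy distributions standing in for [$X_{\rm max}$]{}and $\log_{10} S_b$ are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Toy stand-ins for (Xmax [g/cm^2], log10(Sb)); all numbers are illustrative
protons = np.column_stack([rng.normal(750.0, 60.0, n), rng.normal(1.2, 0.4, n)])
photons = np.column_stack([rng.normal(950.0, 70.0, n), rng.normal(0.5, 0.4, n)])

# Fisher linear discriminant: w ~ Sw^-1 (mu_gamma - mu_p)
mu_p, mu_g = protons.mean(axis=0), photons.mean(axis=0)
Sw = np.cov(protons.T) + np.cov(photons.T)
w = np.linalg.solve(Sw, mu_g - mu_p)
score_p, score_g = protons @ w, photons @ w

# Cut at the median of the photon response: 50% signal efficiency by construction
cut = np.median(score_g)
eff_gamma = float(np.mean(score_g > cut))
contamination = float(np.mean(score_p > cut))
```

The point of cutting at the signal median is visible here: `eff_gamma` is pinned at 50% regardless of how the background is modeled, while `contamination` is what the training sample actually determines.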
![Left: Longitudinal profile and Gaisser-Hillas fit of one of the selected photon candidates (ID 6691838). Right: Correlation plot of [$X_{\rm max}$]{}and $S_b$ for the candidate (blue star) and dedicated proton events simulated with the same energy, geometry and detector configuration as the real event (red dots). Three out of 3000 simulated proton showers are selected as photon candidates (black circles). []{data-label="fig:cand"}](figure4a.pdf "fig:"){width="49.00000%"} ![Left: Longitudinal profile and Gaisser-Hillas fit of one of the selected photon candidates (ID 6691838). Right: Correlation plot of [$X_{\rm max}$]{}and $S_b$ for the candidate (blue star) and dedicated proton events simulated with the same energy, geometry and detector configuration as the real event (red dots). Three out of 3000 simulated proton showers are selected as photon candidates (black circles). []{data-label="fig:cand"}](figure4b.pdf "fig:"){width="49.00000%"}
Event ID $E_{\gamma}$ \[EeV\] Zenith \[$^\circ$\] [$X_{\rm max}$]{}\[g/cm$^{2}$\] $S_b$ \[VEM\] $N_{\rm{stat}}$ $l$ \[$^\circ$\] $b$ \[$^\circ$\]
---------- ---------------------- --------------------- --------------------------------- ------------------- ----------------- --------------------- ---------------------
3218344 1.40$\,\pm\,$0.18 34.9$\,\pm\,$0.9 851$\,\pm\,$31 2.04$\,\pm\,$0.77 2 218.21$\,\pm\,$1.29 -25.67$\,\pm\,$0.36
6691838 1.26$\,\pm\,$0.05 53.9$\,\pm\,$0.3 886$\,\pm\,$9 4.94$\,\pm\,$1.21 2 100.45$\,\pm\,$0.57 -46.25$\,\pm\,$0.25
12459240 1.60$\,\pm\,$0.14 49.4$\,\pm\,$0.4 840$\,\pm\,$21 9.57$\,\pm\,$2.56 3 324.94$\,\pm\,$0.37 -24.70$\,\pm\,$0.60
: List of the events selected as photon candidates with the main quantities used for photon-induced air-showers identification and with their arrival directions in galactic coordinates ($l$,$b$). []{data-label="tab:candidates"}
The BDT analysis is applied to the full data set described in section \[sect:dataset\]. After the selection, 8178, 3484, 2015, 983 and 335 events are left for the analysis above 1, 2, 3, 5 and 10 EeV, respectively. Three events pass the photon selection cuts, all of them in the first energy interval ($1-2$ EeV), close to the energy threshold of the analysis. This number of events is compatible with the expected nuclear background. Details of the candidate events are listed in table \[tab:candidates\]. The arrival directions of the three photon-like events have been checked against a catalogue of astrophysical sources of UHECRs whose distance is limited to a few Mpc because of the interaction of UHE photons with the extragalactic background radiation [@AugerTargetPhotons]. The smallest angular distance between the candidates and any of the objects in the catalogue is found to be around 10$^{\circ}$. One candidate (ID 6691838) was also selected in a previous analysis [@AugerPhotonHybridICRC]. Its longitudinal profile is shown in Fig. \[fig:cand\] (left). In Fig. \[fig:cand\] (right), the values of [$X_{\rm max}$]{}and $S_b$ for this event are compared to those obtained in dedicated simulations having the same geometry and energy as this event. In this sample of 3000 simulated proton showers, three pass the photon selections and are misclassified, in agreement with the expected average background contamination.
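The angular comparison with the source catalogue reduces to a great-circle separation between directions in galactic coordinates; a minimal sketch follows, where the catalogue direction used in the example is hypothetical.

```python
import numpy as np

def angular_distance_deg(l1, b1, l2, b2):
    """Great-circle separation in degrees between two directions
    given in galactic coordinates (l, b), also in degrees."""
    l1, b1, l2, b2 = map(np.radians, (l1, b1, l2, b2))
    cos_d = np.sin(b1) * np.sin(b2) + np.cos(b1) * np.cos(b2) * np.cos(l1 - l2)
    # Clip guards against round-off pushing |cos_d| slightly above 1
    return float(np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0))))

# Candidate ID 6691838 (table 2) against a hypothetical catalogue direction
d = angular_distance_deg(100.45, -46.25, 110.0, -40.0)
```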
Results {#sect:results}
=======
Since the number of selected photon candidates is compatible with the background expectation, upper limits (UL) on the integral photon flux at 95% confidence level (C.L.) are derived as:
$$\label{eq:UL}
\Phi_{UL}^{0.95} (E_{\gamma}>E_0)= \frac{N_{\gamma}^{0.95} (E_{\gamma} > E_0)}{\mathcal{E_{\gamma}}(E_{\gamma}>E_0 | E_{\gamma}^{-\Gamma})}$$
where $N^{0.95}_{\gamma}$ is the Feldman-Cousins upper limit at 95% CL on the number of photon candidates assuming zero background events and $\mathcal{E_{\gamma}}$ is the integrated exposure above the energy threshold $E_0$, under the assumption of a power law spectrum $E^{-\Gamma}$ (unless stated otherwise, $\Gamma = 2$ as in previous publications [@AugerPhotonSD]): $$\label{eq:exposure}
\mathcal{E_{\gamma}} = \frac{1}{c_E}\int_{E_{\gamma}}\int_{T}\int_{S} \int_{\Omega}E_{\gamma}^{-\Gamma} \epsilon(E_{\gamma},t,\theta,\phi,x,y)\,dS\,dt\,dE\,d\Omega$$ with $\epsilon$ being the overall efficiency for photons as a function of energy ($E_{\gamma}$), time ($t$), zenith angle ($\theta$), azimuth ($\phi$) and position ($x$,$y$) of the impact point at ground. $c_E$ is a normalization coefficient: $c_E = \int E^{-\Gamma} dE$. $\Omega$ is the solid angle and the area $S$ encloses the array and corresponds to the generation area used for the simulations. The hybrid exposure after photon selection criteria is shown in Fig. \[fig:expo\] (left).
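Once the time, area and solid-angle integrations have been carried out, the exposure of Eq. \[eq:exposure\] reduces to a spectrum-weighted average of an energy-dependent exposure, and the limit of Eq. \[eq:UL\] is a simple ratio. A minimal numerical sketch under that simplification; the energy grid, the exposure curve and the Feldman-Cousins value $N_{\gamma}^{0.95}=3.09$ (zero observed candidates, zero background) are illustrative placeholders, not the values of this analysis.

```python
import math

def _trapz(y, x):
    # Trapezoidal rule on an irregular grid.
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

def spectrum_weighted_exposure(energies, exposure_per_energy, gamma=2.0):
    """E^-gamma-weighted average of the exposure, assuming the
    time/area/solid-angle integrations are already folded in."""
    w = [e ** (-gamma) for e in energies]
    num = _trapz([wi * x for wi, x in zip(w, exposure_per_energy)], energies)
    c_e = _trapz(w, energies)          # normalization c_E = int E^-Gamma dE
    return num / c_e

# Toy energy grid (EeV) and an illustrative exposure curve (km^2 sr yr);
# these are NOT the real Auger hybrid values.
energies = [1.0 + 0.1 * i for i in range(91)]          # 1 .. 10 EeV
exposure = [120.0 * (1.0 - math.exp(-(e - 0.5))) for e in energies]

eff_exposure = spectrum_weighted_exposure(energies, exposure)
n95 = 3.09   # Feldman-Cousins 95% C.L. limit for 0 events, 0 background
print(f"effective exposure ~ {eff_exposure:.1f} km^2 sr yr")
print(f"flux upper limit  ~ {n95 / eff_exposure:.4f} km^-2 sr^-1 yr^-1")
```
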
![Upper limits on the integral photon flux derived from 9 years of hybrid data (blue arrows, Hy 2016) for a photon flux E$^{-2}$ and no background subtraction. The limits obtained when the detector systematic uncertainties are taken into account are shown as horizontal segments (light blue) delimiting a dashed-filled box at each energy threshold. Previous limits from Auger: (SD [@AugerPhotonSD2015] and Hybrid 2011 [@AugerPhotonHybridICRC]), for Telescope Array (TA) [@TAphotons], AGASA (A) [@Agasa], Yakutsk (Y) [@Yakutsk] and Haverah Park (HP) [@HaverahPark] are shown for comparison. None of them includes systematic uncertainties. The shaded regions and the lines give the predictions for the GZK photon flux [@Gelmini; @Kampert] and for top-down models (TD, Z-Burst, SHDM I[@SHDM2] and SHDM II [@AloisioSHDM]). []{data-label="fig:limits"}](figure6.pdf){width="70.00000%"}
Using equation \[eq:UL\] and the analysis trained on photon and proton QGSJET-II-04 simulations, with spectral index $\Gamma = 2$, upper limits on the integral photon flux are set to 0.027, 0.009, 0.008, 0.008 and 0.007 km$^{-2}$ sr$^{-1}$ yr$^{-1}$ for energy thresholds of 1, 2, 3, 5 and 10 EeV, respectively. They are derived under the conservative choice that the expected background is zero (relevant here only for $E_{0} = 1$ EeV), which makes the limits more robust against hadronic interaction and mass composition assumptions. Rescaling the photon flux limits by the measured all-particle spectrum [@AugerSpectrum2] results in photon fraction limits of 0.1%, 0.15%, 0.33%, 0.85% and 2.7% for the same energy thresholds.\
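As a consistency check, the quoted flux and fraction limits can be combined to recover the integral all-particle intensity they imply at each threshold. The sketch below uses only the rounded numbers quoted above, so the recovered intensities are approximate.

```python
# Flux ULs (km^-2 sr^-1 yr^-1) and fraction ULs quoted in the text,
# keyed by energy threshold in EeV.
flux_ul = {1: 0.027, 2: 0.009, 3: 0.008, 5: 0.008, 10: 0.007}
frac_ul = {1: 0.001, 2: 0.0015, 3: 0.0033, 5: 0.0085, 10: 0.027}

for e0 in sorted(flux_ul):
    # fraction UL = photon flux UL / integral all-particle flux, so:
    integral_cr_flux = flux_ul[e0] / frac_ul[e0]
    print(f"E > {e0:2d} EeV: implied all-particle flux ~ "
          f"{integral_cr_flux:6.2f} km^-2 sr^-1 yr^-1")
```
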
The robustness of the results is tested against several sources of systematic uncertainties. Some of them (see table \[tab:syst\]) are related to the knowledge of the detector and to the data reconstruction. A contribution of $\pm 6.4$% applies to the exposure (gray band in Fig. \[fig:expo\]) and is obtained as the quadrature sum of the 4% uncertainty on the ontime [@exposure2010] and the 5% uncertainty in the FD trigger efficiency after the fiducial distance cut (section \[sect:dataset\]). The other terms are due to the uncertainties on the energy scale, [$X_{\rm max}$]{} and $S_{b}$. Since these variables are used in the multi-variate analysis, the impact of their systematic uncertainties on the upper limits is evaluated by shifting the data by $\pm 1\sigma_{\rm{syst}}$ and applying the BDT to the new data set. Each variable is considered separately, even if a correlation is expected between the systematic uncertainties on the [$X_{\rm max}$]{} and energy scales because of the event reconstruction and the atmospheric contributions. A shift by $\Delta$[$X_{\rm max}$]{} = $\pm 10$ g/cm$^2$ [@Xmax] changes the number of selected candidates by $^{+1}_{-2}$ in the first energy interval ($E_0 > 1$ EeV) and leaves the limits at larger energy thresholds unaffected. The same result is obtained when shifting $E_{\gamma}$ by $\Delta E = \pm14$% [@Verzi2013]. The systematic uncertainties on $S_b$ are mostly due to the time synchronization between the SD and the FD and to a possible misalignment of the telescopes, both of which can affect the geometry reconstruction. The latter is periodically tested using lasers, and time periods with misaligned mirrors are rejected from the analysis. The SD/FD synchronization is checked using dedicated laser shots which are observed by the FD and for which a signal is simultaneously sent to an SD station connected to the CLF through an optical fiber.
Moreover, the discrepancies between the core positions reconstructed in hybrid and in SD-only modes $-$ which have independent systematic uncertainties on the geometry reconstruction $-$ are compared in data and in simulations. The difference between data and simulations is about +10 m in both easting and northing coordinates and independent of the zenith angle. This translates into a variation of $S_b$ by less than 5%. When applying a shift by $\Delta S_b = \pm 5\%$ to the data, the number of candidates changes by $^{-1}_{+1}$ in the energy range $1-2$ EeV. The relative change in the upper limits when each of the sources of systematic uncertainty is considered separately is given in the last column of table \[tab:syst\]. As an additional test, an altered data set is generated by applying a combined shift (+$\Delta$[$X_{\rm max}$]{}, +$\Delta E$, -$\Delta S_{b}$) which would make the data more photon-like. The number of candidates found in this scenario is 11, 1, 0, 0 and 0 above 1, 2, 3, 5 and 10 EeV, respectively. Six of the candidates between $1-2$ EeV were initially at energies below 1 EeV, and the candidate with energy above 2 EeV was previously not selected by the BDT cut. The maximum range of variation of the upper limits when considering all the experimental systematics (data and exposure) is shown in Fig. \[fig:limits\] as horizontal segments delimiting a dashed-filled box around each energy threshold. Other contributions, related to the assumptions used to train the BDT and select photon-like events, have been considered, and for each of them the full analysis, including BDT and selection optimization, data processing and exposure calculation, has been repeated. The selected number of candidates and the derived limits are summarized in table \[tab:models\] for each of the tested models.
To take into account the limited knowledge of hadronic interaction models and of the mass composition, the search for photons has also been performed using the Epos-LHC model and a proton-iron mix, respectively. Moreover, given the large uncertainties on the predicted flux of GZK photons, which depends strongly on the astrophysical scenario, and for consistency with previous results, a simple power-law assumption with $\Gamma = 2$ is used in this paper as the baseline. In table \[tab:models\], an estimate of the variation of the upper limits is provided for a range of values describing possible GZK photon fluxes.
[0.6]{}[c| \*[6]{}[Y]{}]{} E$_{0}$ \[EeV\] & 1 & 2 & 3 & 5 & 10\
&\
N$_{\gamma}$ & 7 & 1 & 0 & 0 & 0\
$\Phi^{95\% \rm{C.L.}}$ & 0.043 & 0.015 & 0.008 & 0.008 & 0.008\
&\
N$_{\gamma}$ & 2 & 0 & 0 & 0 & 0\
$\Phi^{95\% \rm{C.L.}}$ & 0.041 & 0.019 & 0.008 & 0.007& 0.007\
&\
N$_{\gamma}$ & 6 & 1 & 0 & 0 & 0\
$\Phi^{95\% \rm{C.L.}}$ & 0.046 & 0.017 & 0.010 & 0.009 & 0.009\
&\
N$_{\gamma}$ & 3 & 0 & 0 & 0 & 0\
$\Phi^{95\%\rm{C.L.}}$ & 0.025 & 0.008 & 0.008 & 0.007 & 0.006\
Discussion and conclusions {#sect:conclusions}
==========================
The upper limits derived in this paper are shown in Fig. \[fig:limits\], compared to other experimental results and to the photon fluxes predicted for the GZK and top-down models. In a previous paper [@AugerPhotonHybrid], hybrid events with large [$X_{\rm max}$]{} were used to search for photons above 2, 3, 5 and 10 EeV. Eight candidates were found in the first two energy intervals and upper limits were derived on the fraction of photons in the all-particle spectrum. The new results lower the upper limits on the photon fraction by a factor 4 at energies above 5 and 10 EeV and by up to a factor 25 at $E_{\rm{thr}}= 2$ EeV. This is a consequence of the larger exposure $-$ which equally affects all energy intervals and is responsible for the factor 4 improvement in the two highest energy bins $-$ and of the reduced background contamination, which explains the remaining gain at low energies. The factor 4 increase of the exposure is mostly due to the accumulation of 6 years of data. An additional gain arises from the accurate calculation of the exposure based on time-dependent simulations, avoiding the fiducial cut used in the past to mitigate the dependence of the detector acceptance on mass composition [@AugerPhotonHybrid; @AugerPhotonHybrid2007]. Moreover, the present analysis, based on a BDT and on the combination of SD and FD observables, achieves a background contamination of about 10$^{-3}$ ($\sim 4\,\cdot\,10^{-4}$) for protons (a proton-iron mix), which is at least 10 times lower than previous estimates [@AugerPhotonHybrid; @AugerPhotonHybridICRC] and has also allowed extending the analysis down to 1 EeV.
Some top-down scenarios proposed to explain the origin of trans-GZK cosmic rays (dashed lines) are illustrated, though mostly rejected by previous bounds on the photon flux. A recent super-heavy dark matter proposal (SHDM II), developed in the context of an inflationary theory, is shown as a long-dashed line. The case of a SHDM particle with mass $M_{\chi}=4.5\times10^{22}$ eV, lifetime $\tau_{\chi}=2.2 \times 10^{22}$ yr and inflaton potential index $\beta = 2$ is only marginally compatible with the limits presented in this work and severely constrained by the limits from the surface detector data [@AugerPhotonSD2015], in agreement with the interpretation of the Planck results in [@Planck]. Constraints on the lifetime-and-mass parameter space of the SHDM particle can be imposed by current and future limits on the photon flux, as obtained for example in [@ConstrainsSHDM].
The achieved sensitivity allows testing photon fractions of about 0.1% and exploring the region of photon fluxes predicted in some optimistic astrophysical scenarios (GZK proton-I in Fig. \[fig:limits\]) [@Gelmini]. A significant increase of the exposure is required to test more recent proton scenarios [@Kampert] (GZK proton-II in the figure) assuming a maximum acceleration energy of 10$^{21}$ eV and a strong evolution of the sources, a scenario only partially constrained by the limits on the neutrino flux above 10 PeV [@Aartsen:2016ngq]. Under similar astrophysical assumptions but with the acceleration of iron primaries at the source, the predicted flux of cosmogenic photons is suppressed by a factor 10. Extrapolating the present analysis up to 2025 would reach flux limits of a few times 10$^{-3}$ km$^{-2}$ sr$^{-1}$ yr$^{-1}$ at EeV energies, which is at the upper edge of the expected GZK proton-II flux region. A factor 10 larger statistics can be gained with a future SD-based analysis above about 10$^{18.5}$ eV using new SD triggers that have been installed in all array stations and that are designed to enhance the photon and neutrino detection efficiencies [@NIM2015; @UHECR2014]. The deployment of a 4 m$^2$ scintillator on top of each SD station is foreseen as part of the AugerPrime upgrade of the Observatory to determine the muon content of air showers at the ground, which may provide further information to distinguish between photon- and hadron-induced showers [@AugerPrime].
Acknowledgments {#acknowledgments .unnumbered}
===============
The successful installation, commissioning, and operation of the Pierre Auger Observatory would not have been possible without the strong commitment and effort from the technical and administrative staff in Malargüe. We are very grateful to the following agencies and organizations for financial support:
Argentina – Comisión Nacional de Energía Atómica; Agencia Nacional de Promoción Científica y Tecnológica (ANPCyT); Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET); Gobierno de la Provincia de Mendoza; Municipalidad de Malargüe; NDM Holdings and Valle Las Leñas; in gratitude for their continuing cooperation over land access; Australia – the Australian Research Council; Brazil – Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Financiadora de Estudos e Projetos (FINEP); Fundação de Amparo à Pesquisa do Estado de Rio de Janeiro (FAPERJ); São Paulo Research Foundation (FAPESP) Grants No. 2010/07359-6 and No. 1999/05404-3; Ministério de Ciência e Tecnologia (MCT); Czech Republic – Grant No. MSMT CR LG15014, LO1305 and LM2015038 and the Czech Science Foundation Grant No. 14-17501S; France – Centre de Calcul IN2P3/CNRS; Centre National de la Recherche Scientifique (CNRS); Conseil Régional Ile-de-France; Département Physique Nucléaire et Corpusculaire (PNC-IN2P3/CNRS); Département Sciences de l’Univers (SDU-INSU/CNRS); Institut Lagrange de Paris (ILP) Grant No. LABEX ANR-10-LABX-63 within the Investissements d’Avenir Programme Grant No. ANR-11-IDEX-0004-02; Germany – Bundesministerium für Bildung und Forschung (BMBF); Deutsche Forschungsgemeinschaft (DFG); Finanzministerium Baden-Württemberg; Helmholtz Alliance for Astroparticle Physics (HAP); Helmholtz-Gemeinschaft Deutscher Forschungszentren (HGF); Ministerium für Innovation, Wissenschaft und Forschung des Landes Nordrhein-Westfalen; Ministerium für Wissenschaft, Forschung und Kunst des Landes Baden-Württemberg; Italy – Istituto Nazionale di Fisica Nucleare (INFN); Istituto Nazionale di Astrofisica (INAF); Ministero dell’Istruzione, dell’Universitá e della Ricerca (MIUR); CETEMPS Center of Excellence; Ministero degli Affari Esteri (MAE); Mexico – Consejo Nacional de Ciencia y Tecnología (CONACYT) No. 
167733; Universidad Nacional Autónoma de México (UNAM); PAPIIT DGAPA-UNAM; The Netherlands – Ministerie van Onderwijs, Cultuur en Wetenschap; Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO); Stichting voor Fundamenteel Onderzoek der Materie (FOM); Poland – National Centre for Research and Development, Grants No. ERA-NET-ASPERA/01/11 and No. ERA-NET-ASPERA/02/11; National Science Centre, Grants No. 2013/08/M/ST9/00322, No. 2013/08/M/ST9/00728 and No. HARMONIA 5 – 2013/10/M/ST9/00062; Portugal – Portuguese national funds and FEDER funds within Programa Operacional Factores de Competitividade through Fundação para a Ciência e a Tecnologia (COMPETE); Romania – Romanian Authority for Scientific Research ANCS; CNDI-UEFISCDI partnership projects Grants No. 20/2012 and No.194/2012 and PN 16 42 01 02; Slovenia – Slovenian Research Agency; Spain – Comunidad de Madrid; Fondo Europeo de Desarrollo Regional (FEDER) funds; Ministerio de Economía y Competitividad; Xunta de Galicia; European Community 7th Framework Program Grant No. FP7-PEOPLE-2012-IEF-328826; USA – Department of Energy, Contracts No. DE-AC02-07CH11359, No. DE-FR02-04ER41300, No. DE-FG02-99ER41107 and No. DE-SC0011689; National Science Foundation, Grant No. 0450696; The Grainger Foundation; Marie Curie-IRSES/EPLANET; European Particle Physics Latin American Network; European Union 7th Framework Program, Grant No. PIRSES-2009-GA-246806; and UNESCO.
[99]{}
K. Greisen, *End to the Cosmic-Ray Spectrum?*, *Phys. Rev. Lett.* [**16**]{} (1966) 748.
G. T. Zatsepin, V. A. Kuz’min, *Upper Limit of the Spectrum of Cosmic Rays*, *J. Exp. Theor. Phys+* [**4**]{} (1966) 78.
The HiRes Collaboration (R. Abbasi et al.), *First Observation of the Greisen-Zatsepin-Kuzmin Suppression*, *Phys. Rev. Lett.* [**100**]{} (2008) 101101.
The Pierre Auger Collaboration (J. Abraham et al.), *Observation of the Suppression of the Flux of Cosmic Rays above 4$\times$10$^{19}$ eV*, *Phys. Rev. Lett.* [**101**]{} (2008) 061101.
I. Valiño for the Pierre Auger Collaboration, *The flux of ultra-high energy cosmic rays after ten years of operation of the Pierre Auger Observatory*, *PoS(ICRC2015)* 271 (2015) \[arXiv:1509.03732\].
The Telescope Array Collaboration (T. Abu-Zayyad et al.), *The cosmic-ray energy spectrum observed with the surface detector of the Telescope Array experiment*, *Astrophys. J.* [**768**]{} (2013) 1.
The Pierre Auger Collaboration (A. Aab et al.), *The Pierre Auger Cosmic Ray Observatory*, *Nucl. Instrum. Meth. A* [**798**]{} (2015) 172.
A. Di Matteo for the Pierre Auger Collaboration, *Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory*, *PoS(ICRC2015)* 249 (2015) \[arXiv 1509.03732\].
E. Kido, O.E. Kalashev for the Telescope Array Collaboration, *Interpretation of the energy spectrum observed with the Telescope Array surface detectors*, *PoS(ICRC2015)* 258 (2015).
J. Heinze, D. Boncioli, M. Bustamante, M. and W. Winter, *Cosmogenic Neutrinos Challenge the Cosmic Ray Proton Dip Model*, *Astrophys. J.* [**825**]{} (2016), 122.
The IceCube Collaboration (Aartsen, M. G., et al.), *Constraints on ultra-high-energy cosmic ray sources from a search for neutrinos above 10 PeV with IceCube*, (2016) \[arXiv:1607.05886\].
The Pierre Auger Collaboration (A. Aab et al.), *An improved limit to the diffuse flux of ultra-high energy neutrinos from the Pierre Auger Observatory*, *Phys. Rev. D* [**91**]{} (2015) 092008.
V. Berezinsky, A. Gazizov, O. Kalashev, *Cascade photons as test of protons in UHECR*, *Astropart. Phys.* [**84**]{} (2016) 52.
G. Gelmini, O. Kalashev, D. Semikoz, *GZK Photons as Ultra High Energy Cosmic Rays*, *J. Exp. Theor. Phys.* [**106**]{} (2008) 1061.
D. Hooper, A. M. Taylor, S. Sarkar, *Cosmogenic photons as a test of ultra-high energy cosmic ray composition*, *Astropart. Phys.* [**34**]{} (2011) 340.
B. Sarkar et al., *Ultra-High Energy Photon and Neutrino Fluxes in Realistic Astrophysical Scenarios*, *Proceedings of the 32nd Int. Cosmic Ray Conf.* [**2**]{} (2011) 198.
The Pierre Auger Collaboration (J. Abraham et al.), *Upper limit on the cosmic-ray photon flux above 10$^{19}$ eV using the surface detector of the Pierre Auger Observatory*, *Astropart. Phys.* [**29**]{} (2008) 243.
The Pierre Auger Collaboration, *Upper limit on the cosmic-ray photon fraction at EeV energies from the Pierre Auger Observatory*, *Astropart. Phys.* [**31**]{} (2009) 399.
M. Settimo for the Pierre Auger Collaboration, *An update on a search for ultra-high energy photons using the Pierre Auger Observatory*, *Proceedings of the 32nd Int. Cosmic Ray Conf.* [**2**]{} (2011) 55 \[arXiv:1107.4805\].
C. Bleve for the Pierre Auger Collaboration, *Updates on the neutrino and photon limits from the Pierre Auger Observatory*, *PoS(ICRC2015)* 1103 (2015) \[arXiv 1509.03732\].
R. Aloisio, S. Matarrese, A.V. Olinto, *Super Heavy Dark Matter in light of BICEP2, Planck and Ultra High Energy Cosmic Rays Observations*, *J. Cosmol. Astropart. P.* [**08**]{} (2015) 24.
M. Settimo, M. De Domenico, *Propagation of extragalactic photons at ultra-high energy with the EleCa code*, *Astropart. Phys.* [**62**]{} (2014) 92.
L. Maccione et al., *Ultra high energy photons as probes of Lorentz symmetry violations in stringy space-time foam models*, *Phys. Rev. Lett.* [**105**]{} (2010) 021101.
M. Galaverni, G. Sigl, *Lorentz Violation and Ultrahigh-Energy Photons*, *Phys. Rev. Lett.* [**100**]{} (2008) 021102.
S. T. Scully, F. W. Stecker, *Testing Lorentz Invariance with Neutrinos from Ultrahigh Energy Cosmic Ray Interactions*, *Astropart.Phys.* [**34**]{} (2011) 575.
E. Gabrielli, K. Huitu, S. Roy, *Photon propagation in magnetic and electric fields with scalar/pseudoscalar couplings: a new look*, *Phys. Rev. D* [**74**]{} (2006) 073002.
The Pierre Auger Collaboration (A. Aab et al.), *A search for point sources of EeV photons*, *Astrophys. J.* [**789**]{} (2014) 160.
The Pierre Auger Collaboration (A. Aab et al.), *A targeted search for point sources of EeV photons with the Pierre Auger Observatory*, to be submitted.
The Pierre Auger Collaboration (J. Abraham, et al.), *Trigger and Aperture of the Surface Detector Array of the Pierre Auger Observatory*, *Nucl. Instrum. Meth. A* [**613**]{} (2010) 29.
The Pierre Auger Collaboration (J. Abraham, et al.), *The fluorescence detector of the Pierre Auger Observatory*, *Nucl. Instrum. Meth. A* [**620**]{} (2010) 2.
The Pierre Auger Collaboration (P. Abreu et al.), *The exposure of the hybrid detector of the Pierre Auger Observatory*, *Astropart. Phys.* [**34**]{} (2011) 368.
M. Settimo for the Pierre Auger Collaboration, *Measurement of the cosmic ray energy spectrum using hybrid events of the Pierre Auger Observatory*, *Eur. Phys. J. Plus* [**127**]{} (2012) 87.
The Pierre Auger Collaboration (A. Aab et. al), *Origin of atmospheric aerosols at the Pierre Auger Observatory using studies of air mass trajectories in South America*, *Atmospheric Research* [**149**]{} (2014) 120.
The Pierre Auger Collaboration (J. Abraham, et al.), *A study of the effect of molecular and aerosol conditions in the atmosphere on air fluorescence measurements at the Pierre Auger Observatory*, *Astropart. Phys.* [**33**]{} (2010) 108.
B. Fick et al., *The Central Laser Facility at the Pierre Auger Observatory*, *J. Instrum.* [**1**]{} (2006) 11003.
The Pierre Auger Collaboration (P. Abreu et al.), *Techniques for measuring aerosol attenuation using the Central Laser Facility at the Pierre Auger Observatory*, *J. Instrum.* [**8**]{} (2013) P04009.
S.Y. BenZvi et al., *The Lidar System of the Pierre Auger Observatory*, *Nucl. Instrum. Meth. A* [**574**]{} (2007) 171.
J. Chirinos for the Pierre Auger Collaboration, *Proceedings of the 33rd Int. Cosmic Ray Conf.* (2013) \[arXiv:1307.5059\].
The Pierre Auger Collaboration (A. Aab et al.), *Depth of Maximum of Air-Shower Profiles at the Auger Observatory: Measurements at Energies above 10$^{17.8}$ eV*, *Phys. Rev. D* [**90**]{} (2014) 122005.
T. K. Gaisser and A. M. Hillas, *Reliability of the method of constant intensity cuts for reconstructing the average development of vertical showers*, *Proc. 15th Int. Cosmic Ray Conf.* (1977) 358.
A. Mariazzi for the Pierre Auger Collaboration, *A new method for determining the primary energy from the calorimetric energy of showers observed in hybrid mode on a shower-by-shower basis*, *Proceedings of the 32nd Int. Cosmic Ray Conf.* [**2**]{} (2011) 161 \[arXiv:1107.4809\].
L.D. Landau, I.Ya. Pomeranchuk, *The limits of applicability of the theory of bremsstrahlung by electrons and of creation of pairs at large energies*, *Dokl. Akad. Nauk SSSR* [**92**]{} (1953) 535.
A.B. Migdal, *Bremsstrahlung and Pair Production in Condensed Media at High Energies*, *Phys. Rev.* [**103**]{} (1956) 1811.
T. Erber, *High-Energy Electromagnetic Conversion Processes in Intense Magnetic Field*, *Rev. Mod. Phys.* [**38**]{} (1966) 626.
B. McBreen and C.J. Lambert, *Interactions of high-energy (E>5 x 10$^{19}$ eV) photons in the Earth’s magnetic field*, *Phys. Rev. D* [**24**]{} (1981) 2536.
P. Homola et al., *Characteristics of geomagnetic cascading of ultra-high energy photons at the southern and northern sites of the Pierre Auger Observatory*, *Astropart. Phys.* [**27**]{} (2007) 174 \[astro-ph/0608101\].
The Pierre Auger Collaboration (P. Abreu et al.), *The Lateral Trigger Probability function for UHE Cosmic Rays Showers detected by the Pierre Auger Observatory*, *Astropart. Phys.* [**35**]{} (2011) 266.
G. Ros et al., *A new composition-sensitive parameter for Ultra-High Energy Cosmic Rays*, *Astopart. Phys.* [**35**]{} (2011) 140 \[arXiv:1104.3399\].
D. Heck et al.,[*CORSIKA: A Monte Carlo Code to Simulate Extensive Air Showers*]{}, *Report FZKA* [**6019**]{} (1998).
T. Pierog, Iu. Karpenko, J. M. Katzy, E. Yatsenko, and K. Werner, *EPOS LHC: Test of collective hadronization with data measured at the CERN Large Hadron Collider*, *Phys. Rev. C* [**92**]{} (2015) 034906.
S. Ostapchenko, *Monte Carlo treatment of hadronic interactions in enhanced Pomeron scheme: QGSJET-II model*, *Phys. Rev. D* [**83**]{} (2011) 014018.
S. Argiro, S. L. C. Barroso, J. Gonzalez et al., *The offline software framework of the Pierre Auger Observatory*, *Nucl. Instrum. Meth. A* [**580**]{} (2007) 1485.
S. Agostinelli et al., *Geant4 - a simulation toolkit*, *Nucl. Instrum. Methods Phys. Res. A* [**506**]{} (2003) 250.
The Pierre Auger Collaboration (A. Aab et al.), *Depths of Maximum of Air-Shower Profiles at the Pierre Auger Observatory: Composition Implications*, *Phys. Rev. D* [**90**]{} (2014) 122006.
A. Yushkov for the Pierre Auger Collaboration, *Composition at the “ankle” measured by the Pierre Auger Observatory: pure or mixed?*, *PoS(ICRC2015)* 335 (2015) \[arXiv 1509.03732\].
The Pierre Auger Collaboration (A. Aab et al.), *Testing hadronic interactions at ultrahigh energies with air showers measured by the Pierre Auger Observatory*, *Phys. Rev. Lett.* [**117**]{} (2016) 192001.
The Pierre Auger Collaboration (A. Aab et al.), *Muons in air showers at the Pierre Auger Observatory: mean number in highly inclined events*, *Phys. Rev. D* [**91**]{} (2015) 032003; ERRATA: *Phys. Rev. D* [**91**]{} (2015) 059901.
V. Verzi for the Pierre Auger Collaboration, *The Energy Scale of the Pierre Auger Observatory*, *Proceedings of the 33rd Int. Cosmic Ray Conf.* (2013) \[arXiv:1307.5059\].
G.I. Rubtsov et al. for the Telescope Array Collaboration, *Telescope Array search for photons and neutrinos with the surface detector data*, *PoS(ICRC2015)* 331.
K. Shinozaki et al., *Upper Limit on Gamma-Ray Flux above 10$^{19}$ eV Estimated by the Akeno Giant Air Shower Array Experiment*, *Astrophys. J.* [**571**]{} (2002) L117.
A. Glushkov et al., *Constraints on the flux of primary cosmic-ray photons at energies E>10$^{18}$ eV from Yakutsk muon data*, *Phys. Rev. D* [**82**]{} (2010) 041101.
M. Ave et al., *New Constraints from Haverah Park Data on the Photon and Iron Fluxes of Ultrahigh-Energy Cosmic Rays*, *Phys. Rev. Lett.* [**85**]{} (2000) 2244. Conversion to upper limits on the photon flux based on a private communication from R. Vazquez, A. A. Watson, E. Zas.
J. Ellis et al., *Ultrahigh-energy cosmic rays particle spectra from crypton decays*, *Phys. Rev. D* [**74**]{} (2006) 115003.
The Pierre Auger Collaboration (J. Abraham at al.), *An upper limit to the photon fraction in cosmic rays above 10$^{19}$ eV from the Pierre Auger Observatory*, *Astropart. Phys.* [**27**]{} (2007) 155.
The Planck Collaboration (P. A. R. Ade et al.), *Planck 2015 results. XX. Constraints on inflation* \[arXiv:1502.02114\].
O. E. Kalashev and M. Yu. Kuznetsov, *Constraining heavy decaying dark matter with the high energy gamma-ray limits*, *Phys. Rev. D* [**94**]{} (2016) 063535.
M. Settimo for the Ice Cube, Pierre Auger and Telescope Array Collaborations, *Report from the Multi-Messenger Working Group at UHECR-2014 Conference*, UHECR 2014 symposium, Springdale, Utah, USA \[arXiv:1510.02050\].
The Pierre Auger Collaboration (A. Aab et al.), *The Pierre Auger Observatory Upgrade "AugerPrime”. Preliminary Design Report* \[arXiv:1604.03637\].
---
abstract: |
  Layzer’s approximation method for the investigation of two-fluid interface structures associated with Rayleigh-Taylor instability at arbitrary Atwood number is extended by including the second harmonic mode while leaving out the zeroth harmonic one. The modification makes the fluid velocities vanish at infinity and removes the need for the unphysical assumption of a time-dependent source at infinity.
  The present analysis shows that for an initial interface perturbation with curvature exceeding $1/(2\sqrt{A})$, where $A$ is the Atwood number, the spike undergoes an almost free fall, sharpening continuously as it descends. The curvature at the tip of the spike also increases with the Atwood number. Certain initial conditions may also result in the occurrence of a finite time singularity, as found earlier with the conformal mapping technique. However, the bubble growth rate is not appreciably affected.
author:
- |
M. R. Gupta[^1], Rahul Banerjee[^2], Labakanta Mandal, S. Roy, Manoranjan Khan[^3]\
Department of Instrumentation Science & Centre for Plasma Studies\
Jadavpur University, Kolkata-700032, India\
title: 'Spiky development at the interface in Rayleigh-Taylor instability: Layzer approximation with second harmonic '
---
Hydrodynamic instabilities such as Rayleigh-Taylor instability (RTI), which sets in when a lighter fluid supports a heavier fluid against gravity, or Richtmyer-Meshkov instability (RMI), which is initiated when a shock passes an interface between two fluids with different acoustic impedances, are of increasing importance in a wide range of physical phenomena, from inertial confinement fusion (ICF) to astrophysical ones such as supernova explosions. In ICF, the capsule shell undergoes RTI in both the acceleration and deceleration phases. RTI can retard the formation of the hot spot through the cold RTI spike of the capsule shell, resulting in the destruction of the ignition hot spot or of autoignition${\cite{lin04}-\cite{at04}}$. The hydrodynamic instabilities lead to the development of heavy-fluid ’spikes’ penetrating into the lighter fluid and ’bubbles’ of lighter fluid rising through the heavier fluid. Different approaches have been used for the study of such problems. Among these, Layzer’s${\cite{dl55}}$ approach applied to a single mode potential flow model${\cite{jh94}-\cite{ss03}}$ is a useful one, giving approximate estimates of the evolution of both Rayleigh-Taylor and Richtmyer-Meshkov instabilities. The bubbles were shown by Zhang${\cite{qz98}}$ to rise at a rate tending asymptotically to a constant terminal velocity, while spikes were shown to descend with a constant acceleration. However, whether for bubbles or for spikes, Zhang’s analysis was applicable only for Atwood number $A=1$, i.e., only for a fluid-vacuum interface. An extension to arbitrary values of the Atwood number $A$ was given by Goncharov${\cite{vg02}}$. Within the limitations of Layzer’s model pointed out by Mikaelian${\cite{mik08}}$, bubbles were shown to rise with a velocity tending to an asymptotic value dependent on $A$, in fairly close agreement with the simulation results of Ramaprabhu et al.${\cite{pr06}}$.
The spikes, however, were found to descend with a constant terminal velocity, in contrast to the constant acceleration obtained by Zhang${\cite{qz98}}$ for $A=1$.
An asymptotic spike evolution in Rayleigh-Taylor instability behaving almost as a free fall was obtained by Clavin and Williams${\cite{cl05}}$ and also by Duchemin et al.${\cite{du05}}$ by a conformal mapping method. Associated with the free fall of the spike, the surface curvature of the spike was also found to increase with time (i.e., the spike sharpens as it falls).
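This almost free fall with progressive sharpening can be illustrated by integrating the evolution equations (10)-(12) derived below with a classical fourth-order Runge-Kutta scheme. The sketch assumes that $r$ appearing in those equations denotes the heavy-to-light density ratio $\rho_h/\rho_l$ (so $A=(r-1)/(r+1)$), and the initial data are illustrative spike-type values, not taken from the paper.

```python
import math

def rhs(xi, r):
    """Right-hand sides of Eqs. (10)-(12) for (xi1, xi2, xi3):
    nondimensionalized tip elevation, curvature and velocity."""
    x1, x2, x3 = xi
    root = math.sqrt(12 * r + 13)
    alpha = ((r + 4) + root) / (2 * (r - 1))
    beta = ((r + 4) - root) / (2 * (r - 1))
    num = 2 * x2 * (x2 - 0.5) ** 2 + x3 ** 2 * (x2 - alpha) * (x2 - beta)
    den = 2 * (x2 - 0.5) * (x2 ** 2 - 0.25 * (r + 1) / (r - 1))
    return (x3, -(3 * x2 + 0.5) * x3, -num / den)

def rk4_step(xi, r, dt):
    # One classical fourth-order Runge-Kutta step.
    k1 = rhs(xi, r)
    k2 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(xi, k1)), r)
    k3 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(xi, k2)), r)
    k4 = rhs(tuple(x + dt * k for x, k in zip(xi, k3)), r)
    return tuple(x + dt * (a + 2 * b + 2 * c + d) / 6.0
                 for x, (a, b, c, d) in zip(xi, zip(k1, k2, k3, k4)))

# Illustrative spike-type initial data (xi1 < 0, xi2 > 0) at r = 3 (A = 0.5).
xi, r, dt = (-0.1, 0.3, 0.0), 3.0, 1e-3
for _ in range(500):                     # integrate up to tau = 0.5
    xi = rk4_step(xi, r, dt)
print("tau=0.5: elevation %.3f  curvature %.3f  velocity %.3f" % xi)
```

With these parameters the tip velocity becomes increasingly negative while the curvature grows, i.e. the spike falls and sharpens, consistent with the behaviour described above.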
In the single mode Layzer model with the generalization ${\cite{vg02}}$ to arbitrary Atwood number, the equation of the interface in the $x$-$y$ plane is taken as $$\begin{aligned}
\label{eq:1}
y\equiv\eta(x,t)=\eta_{0}(t)+\eta_{2}(t)x^2;\end{aligned}$$ with $\eta_{0}(t)>0$, $\eta_{2}(t)<0$ for bubble while $\eta_{0}(t)<0$, $\eta_{2}(t)>0$ for spike. The velocity potential describing the motion of the heavier fluid (density $\rho_{h}$) and the lighter fluid (density $\rho_{l}$) are ( gravity $g$ is along the negative $y$ direction) $$\begin{aligned}
\label{eq:2}
\phi_h(x,y,t)=a(t)\cos{(kx)}e^{-k(y-\eta_0(t))}; \quad y>0
\quad \mbox{(heavier fluid)}\end{aligned}$$ $$\begin{aligned}
\label{eq:3}
\phi_l(x,y,t)=b_{0}(t)y+b_1(t)\cos{(kx)}e^{k(y-\eta_0(t))}; \quad
y<0 \quad \mbox{(lighter fluid)}\end{aligned}$$ where $k$ is the wave number and $a(t)$, $b_{0}(t)$, $b_1(t)$ are amplitudes. This conventional single mode Layzer model has the drawback that, rather than conforming to the physical requirement $v_{ly}\rightarrow 0$ as $y\rightarrow-\infty$, it necessitates the assumption of a time-dependent source at $y\rightarrow-\infty$${\cite{ab03}}$. To avoid this difficulty we modify the single mode Layzer model by replacing the zeroth-mode term $b_{0}(t)y$ in Eq.(3) by a second harmonic term, viz., $$\begin{aligned}
\label{eq:4}
\phi_l(x,y,t)=b_1(t)\cos{(kx)}e^{k(y-\eta_0(t))}+b_2(t)\cos{(2kx)}e^{2k(y-\eta_0(t))};
\quad y<0\end{aligned}$$ Eqs.(2) and (4) give $$\begin{aligned}
\label{eq:5}
v_{hy}(y\rightarrow\infty)=-\frac{\partial\phi_{h}}{\partial
y}]_{y\rightarrow\infty}=0\end{aligned}$$ $$\begin{aligned}
\label{eq:6}
v_{ly}(y\rightarrow-\infty)=-\frac{\partial\phi_{l}}{\partial
y}]_{y\rightarrow-\infty}=0\end{aligned}$$ The kinematic boundary conditions at the interface (1) are $$\begin{aligned}
\label{eq:7}
\frac{\partial\eta}{\partial t}+v_{hx}\frac{\partial\eta}{\partial
x}=v_{hy}\end{aligned}$$ $$\begin{aligned}
\label{eq:8}
\frac{\partial\eta}{\partial x}(v_{hx}-v_{lx})=v_{hy}-v_{ly}\end{aligned}$$ Setting the pressure boundary condition $p_{h}=p_{l}$ in Bernoulli’s equation for the heavier and lighter fluids leads to${\cite{qz98}\cite{vg02}\cite{ss03}\cite{ss07}-\cite{rb11}}$ $$\begin{aligned}
\label{eq:9}
\rho_{h}[-\frac{\partial \phi_{h}}{\partial t}+
\frac{1}{2}(\vec{\nabla} \phi_{h})^{2}+ g
\eta]-\rho_{l}[-\frac{\partial \phi_{l}}{\partial t}+
\frac{1}{2}(\vec{\nabla} \phi_{l})^{2}+ g \eta]=f_{h}(t)-f_{l}(t)\end{aligned}$$ Following the usual procedure${\cite{mrg09}-\cite{rb11}}$, i.e., expanding $\eta(x,t)$ and the velocity potentials in powers of $(kx)$ and equating coefficients of $(kx)^{r},(r=0,2)$, we obtain from Eqs.(7)-(9) the evolution equations for the (nondimensionalized) RT bubble/spike tip elevation $\xi_{1}=k\eta_{0}$, curvature $\xi_{2}=\eta_{2}/k$ and velocity $\xi_{3}=k^2 a/\sqrt{kg}$ $$\begin{aligned}
\label{eq:10}
\frac{d\xi_1}{d\tau}=\xi_{3}\end{aligned}$$ $$\begin{aligned}
\label{eq:11}
\frac{d\xi_2}{d\tau}=-(3\xi_2 + \frac{1}{2})\xi_{3}\end{aligned}$$ $$\begin{aligned}
\label{eq:12}
\frac{d\xi_{3}}{d\tau}=-\frac{[2\xi_{2}(\xi_{2}-\frac{1}{2})^2+\xi_{3}^2(\xi_{2}-\alpha)(\xi_{2}-\beta)]}{[2(\xi_{2}-\frac{1}{2})(\xi_{2}^2
-\frac{1}{4}\frac{r+1}{r-1})]}\end{aligned}$$ where $$\begin{aligned}
\label{eq:13}
\alpha,\beta=\frac{(r+4)\pm\sqrt{12r+13}}{2(r-1)};
r=\frac{\rho_{h}}{\rho_{l}}\end{aligned}$$ and $$\begin{aligned}
\label{eq:14}
\tau=t\sqrt{kg}\end{aligned}$$ is the nondimensionalized time.
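The system (10)-(12) is straightforward to integrate numerically. The following sketch (our illustration, not the authors' code) uses a hand-rolled fourth-order Runge-Kutta step; the density ratio $r=5$, the step size, and the bubble-type initial data are illustrative choices.

```python
import math

def bubble_rhs(xi, r):
    """Right-hand sides of Eqs.(10)-(12), with alpha, beta from Eq.(13)."""
    xi1, xi2, xi3 = xi
    s = math.sqrt(12.0*r + 13.0)
    alpha = (r + 4.0 + s)/(2.0*(r - 1.0))
    beta = (r + 4.0 - s)/(2.0*(r - 1.0))
    d1 = xi3
    d2 = -(3.0*xi2 + 0.5)*xi3
    num = 2.0*xi2*(xi2 - 0.5)**2 + xi3**2*(xi2 - alpha)*(xi2 - beta)
    den = 2.0*(xi2 - 0.5)*(xi2**2 - 0.25*(r + 1.0)/(r - 1.0))
    return (d1, d2, -num/den)

def rk4_step(f, xi, r, dt):
    # Classical fourth-order Runge-Kutta step for a tuple-valued state.
    k1 = f(xi, r)
    k2 = f(tuple(x + 0.5*dt*k for x, k in zip(xi, k1)), r)
    k3 = f(tuple(x + 0.5*dt*k for x, k in zip(xi, k2)), r)
    k4 = f(tuple(x + dt*k for x, k in zip(xi, k3)), r)
    return tuple(x + dt/6.0*(a + 2.0*b + 2.0*c + d)
                 for x, a, b, c, d in zip(xi, k1, k2, k3, k4))

def evolve_bubble(r=5.0, dt=0.01, tmax=100.0):
    xi = (0.1, -0.1, 0.1)   # xi1 > 0, xi2 < 0, xi3 > 0: bubble initial data
    t = 0.0
    while t < tmax:
        xi = rk4_step(bubble_rhs, xi, r, dt)
        t += dt
    return xi
```

Note that stationarity of Eq.(11) with $\xi_{3}\neq 0$ already forces $\xi_{2}\rightarrow -1/6$, which the integration reproduces.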
Starting from a set of initial values $\xi_{1}>0$, $\xi_{2}<0$ and $\xi_{3}>0$, which describe the temporal evolution of the tip of the bubble, we arrive at the asymptotic values ($\tau\rightarrow\infty$) $$\begin{aligned}
\label{eq:15}
\xi_{2}\rightarrow-\frac{1}{6}\end{aligned}$$ and $$\begin{aligned}
\label{eq:16}
[\xi_{3}]_{asymp}=\sqrt{\frac{8A}{3(5A+3)}}>
[\xi_{3}]_{asymp}^{classical}=\sqrt{\frac{2}{3}\frac{A}{1+A}}\end{aligned}$$ (by classical we mean the single mode Layzer approximation as used by Goncharov${\cite{vg02}}$). The two values coincide as $A(=\frac{\rho_{h}-\rho_{l}}{\rho_{h}+\rho_{l}})\rightarrow1$. The growth rate of the height of the bubble tip is shown in Figure 1 and compared with the classical value. It is seen that the presence or absence of a source does not give rise to any qualitatively significant change in the growth rate of the bubble height${\cite{vg02}\cite{ss01}\cite{ss07}}$.
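A quick numerical comparison of the two asymptotic velocities in Eq.(16) (a standalone sketch; the Atwood numbers are those used in the figures) confirms that the present value exceeds the classical one for $A<1$ and that the two coincide at $A=1$:

```python
import math

def v_modified(A):
    """Asymptotic bubble velocity [xi_3]_asymp of Eq.(16), source-free model."""
    return math.sqrt(8.0*A/(3.0*(5.0*A + 3.0)))

def v_classical(A):
    """Classical single-mode Layzer value of Eq.(16) (Goncharov)."""
    return math.sqrt((2.0/3.0)*A/(1.0 + A))

for A in (1.0/3.0, 2.0/3.0, 19.0/21.0, 1.0):
    print(f"A={A:.3f}: modified={v_modified(A):.4f}, classical={v_classical(A):.4f}")
```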
To obtain spike-like behavior of the perturbation of the interface we use $$\begin{aligned}
\label{eq:17}
\xi_{1}<0,\quad \xi_{2}>0 \quad \textrm{and} \quad \xi_{3}<0\end{aligned}$$ corresponding to a start from the initial values $$\begin{aligned}
\label{eq:18}
[\xi_{2}]_{initial}>\frac{1}{2}\sqrt{\frac{r+1}{r-1}},\quad
[\xi_{3}]_{initial}<0\end{aligned}$$ Eq.(11) shows that $\xi_{2}$ increases monotonically ($\frac{d\xi_{2}}{d\tau}>0$ for all $\tau>0$), while from Eqs.(10) and (12) it follows that the depth of the spike tip below the surface of separation increases continuously ($\frac{d\xi_{1}}{d\tau }=\xi_{3}<0$ and $\frac{d\xi_{3}}{d\tau}<0$). These are shown in Figures 2(a) and 2(b) by plotting $\xi_{2}(\tau)$ and $\xi_{3}(\tau)$ as functions of $\tau$, obtained from numerical solution of Eqs.(11) and (12) by employing the fifth order Runge-Kutta-Fehlberg method. The initial values taken are $[\xi_{2}]_{initial}=1.0$ and $[\xi_{3}]_{initial}=-0.5$, which satisfy condition (18) for all of the following three values: $r=2 (A=\frac{1}{3})$, $r=5 (A=\frac{2}{3})$, $r=20 (A=\frac{19}{21})$. The value of $\xi_{2}$, which represents the curvature at the tip of the spike, is an increasing function of $\tau$ for every value of the Atwood number $A$ (Figure 2(a)). Moreover, for every given value of $\tau$ the curvature $\xi_{2}$ increases with $A$. This implies that the spike continues to sharpen with time as well as with increasing Atwood number, as explicitly shown in Figure 3. Figure 2(b) shows that, except very close to the starting instant, the spike descends with a constant acceleration $\simeq -g$ (i.e., nearly a free fall). This agrees with the conclusions${\cite{qz98}}$ for Atwood number $A=1$.
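The near-free fall of the spike can be reproduced with a short script (a standalone sketch using classical RK4 rather than RKF45; $r=5$ and the initial data $[\xi_{2}]=1.0$, $[\xi_{3}]=-0.5$ match those quoted above, while the step size and integration time are our choices). At late times the slope $d\xi_{3}/d\tau$ approaches $-1$, i.e., a dimensional acceleration $\approx -g$.

```python
import math

def spike_rhs(xi2, xi3, r):
    """Eqs.(11)-(12) for the spike; xi1 decouples and is omitted."""
    s = math.sqrt(12.0*r + 13.0)
    alpha = (r + 4.0 + s)/(2.0*(r - 1.0))
    beta = (r + 4.0 - s)/(2.0*(r - 1.0))
    d2 = -(3.0*xi2 + 0.5)*xi3
    num = 2.0*xi2*(xi2 - 0.5)**2 + xi3**2*(xi2 - alpha)*(xi2 - beta)
    den = 2.0*(xi2 - 0.5)*(xi2**2 - 0.25*(r + 1.0)/(r - 1.0))
    return d2, -num/den

def evolve_spike(r=5.0, xi2=1.0, xi3=-0.5, dt=0.002, tmax=5.0):
    hist = [(0.0, xi2, xi3)]
    t = 0.0
    while t < tmax:
        # RK4 step on (xi2, xi3)
        k1 = spike_rhs(xi2, xi3, r)
        k2 = spike_rhs(xi2 + 0.5*dt*k1[0], xi3 + 0.5*dt*k1[1], r)
        k3 = spike_rhs(xi2 + 0.5*dt*k2[0], xi3 + 0.5*dt*k2[1], r)
        k4 = spike_rhs(xi2 + dt*k3[0], xi3 + dt*k3[1], r)
        xi2 += dt/6.0*(k1[0] + 2.0*k2[0] + 2.0*k3[0] + k4[0])
        xi3 += dt/6.0*(k1[1] + 2.0*k2[1] + 2.0*k3[1] + k4[1])
        t += dt
        hist.append((t, xi2, xi3))
    return hist
```

The integration is stopped at a modest $\tau$ because the tip curvature $\xi_{2}$ grows extremely rapidly.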
The time development of spiky behavior for $[\xi_{2}]_{initial}>\frac{1}{2}\sqrt{\frac{r+1}{r-1}}$ and $[\xi_{3}]_{initial}<0$ is demonstrated in Figures 4(a) and 4(b). This is shown both for increasing $A$ with fixed $\tau$ (Figure 4(a)) and for increasing $\tau$ with given $A$ (Figure 4(b)). But for $\frac{1}{2}<[\xi_{2}]_{initial}<\frac{1}{2}\sqrt{\frac{r+1}{r-1}}$ with $[\xi_{3}]_{initial}<0$ one encounters the development of a finite time singularity, i.e., $\xi_{2}\rightarrow\infty$ and $\xi_{3}\rightarrow-\infty$ at a finite value of $\tau$. The possibility of the occurrence of such an eventuality at (or near) the tip of the spike is also found to arise when the RT instability is addressed by a conformal mapping method${\cite{tb03}-\cite{tn93}}$, as mentioned by Clavin and Williams${\cite{cl05}}$.
Finally, for a trajectory starting from $(\xi_{3})_{initial}<0$ and $0<(\xi_{2})_{initial}<\frac{1}{2}\sqrt{\frac{r+1}{r-1}}=\frac{1}{2}\sqrt{\frac{1}{A}}$ one finds that $\xi_{2}$ continues to increase towards $\xi_{2}=\frac{1}{2}$, i.e., the spike continues to sharpen as time progresses, while its speed of fall slowly decreases in magnitude. Because of the presence of the singularity at $\xi_{2}=\frac{1}{2}$ (Eq.(12)), it is not possible to continue the numerical integration towards and beyond this point. This is shown in Figure 5 for initial values in the domain mentioned above.
ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
================
This work is supported by the C.S.I.R, Government of India under ref. no. R-10/B/1/09.
J.D. Lindl, P. Amendt, R.L. Berger et al., [*[Phys. Plasmas]{}*]{} [**[11]{}**]{}, 339 (2004).

D. Batani, W. Nazakov, T. Hall, T. Lower, M. Koenig, B. Faral, A.B. Mounaix, N. Grandjouan, [*[Phys. Rev. E]{}*]{} [**[62]{}**]{}, 8573 (2000).

R. Dezulian, F. Canova, S. Barbanotti et al., [*[Phys. Rev. E]{}*]{} [**[73]{}**]{}, 047401 (2006).

S. Atzeni, J. Meyer-ter-Vehn, [*[The Physics of Inertial Fusion: Beam Plasma Interaction, Hydrodynamics, Hot Dense Matter]{}*]{} (Oxford University, London, 2004).

D. Layzer, [*[Astrophys. J.]{}*]{} [**[122]{}**]{}, 1 (1955).

J. Hecht, U. Alon, D. Shvarts, [*[Phys. Fluids]{}*]{} [**[6]{}**]{}, 4019 (1994).

Q. Zhang, [*[Phys. Rev. Lett.]{}*]{} [**[81]{}**]{}, 3391 (1998).

V.N. Goncharov, [*[Phys. Rev. Lett.]{}*]{} [**[88]{}**]{}, 134502 (2002).

Sung-Ik Sohn, Q. Zhang, [*[Phys. Fluids]{}*]{} [**[13]{}**]{}, 3493 (2001).

Sung-Ik Sohn, [*[J. Comput. Appl. Math.]{}*]{} [**[177]{}**]{}, 367 (2005).

Sung-Ik Sohn, [*[Phys. Rev. E]{}*]{} [**[67]{}**]{}, 026301 (2003).

K.O. Mikaelian, [*[Phys. Rev. E]{}*]{} [**[78]{}**]{}, 015303(R) (2008).

P. Ramaprabhu, G. Dimonte, Yuan-Nan Young, A.C. Calder, B. Fryxell, [*[Phys. Rev. E]{}*]{} [**[74]{}**]{}, 066308 (2006).

P. Clavin, F. Williams, [*[J. Fluid Mech.]{}*]{} [**[525]{}**]{}, 105 (2005).

L. Duchemin, C. Josserand, P. Clavin, [*[Phys. Rev. Lett.]{}*]{} [**[94]{}**]{}, 224501 (2005).

S.I. Abarzhi, K. Nishihara, J. Glimm, [*[Phys. Lett. A]{}*]{} [**[317]{}**]{}, 470 (2003).

Sung-Ik Sohn, [*[Phys. Rev. E]{}*]{} [**[75]{}**]{}, 066312 (2007).

Sung-Ik Sohn, [*[Phys. Rev. E]{}*]{} [**[80]{}**]{}, 055302(R) (2009).

M.R. Gupta, S. Roy, M. Khan, H.C. Pant, S. Sarkar, M.K. Srivastava, [*[Phys. Plasmas]{}*]{} [**[16]{}**]{}, 032303 (2009).

R. Betti, J. Sanz, [*[Phys. Rev. Lett.]{}*]{} [**[97]{}**]{}, 205002 (2006).

M.R. Gupta, L. Mandal, S. Roy, M. Khan, [*[Phys. Plasmas]{}*]{} [**[17]{}**]{}, 012306 (2010).

R. Banerjee, L. Mandal, S. Roy, M. Khan, M.R. Gupta, [*[Phys. Plasmas]{}*]{} [**[18]{}**]{}, 022109 (2011).

T. Yoshikawa, A. Balk, [*[Math. Comput. Modelling]{}*]{} [**[38]{}**]{}, 113 (2003).

S. Tanveer, [*[Proc. R. Soc. Lond. A]{}*]{} [**[441]{}**]{}, 501 (1993).
[^1]: e-mail: [email protected]
[^2]: e-mail: [email protected]
[^3]: e-mail: [email protected]
---
abstract: 'A two dimensional flow model is introduced with deterministic behavior consisting of bursts which become successively larger, with longer interburst time intervals between them. The system is symmetric in one variable $x$ and there are bursts on either side of $x=0$, separated by the presence of an invariant manifold at $x=0$. Further, the bursts can switch from positive to negative $x$ and vice-versa. The probability distribution of burst heights and interburst periods is studied, as is the dependence of the statistics on the noise variance. The modification of this behavior as the symmetry in $x$ is broken is studied, showing qualitatively similar behavior if the symmetry breaking is small enough. Experimental observations of a nonlinear circuit governed by the same equations are presented, showing good agreement.'
author:
- 'J. M. Finn[\*]{}, E. R. Tracy[\*]{}[\*]{}, W. E. Cooke[\*]{}[\*]{}, and A. S. Richardson[\*]{}[\*]{}'
bibliography:
- 'paper.bib'
title: NOISE STABILIZED RANDOM ATTRACTOR
---
T-15, Plasma Theory, Los Alamos National Laboratory; [\*]{}[\*]{}Department of Physics, College of William and Mary
Introduction
============
This paper concerns noise sensitivity in a two-dimensional flow of the form$$\frac{dx}{dt}=f(x,y)\equiv (y-1)x,\label{eq:x-eq.}$$ $$\frac{dy}{dt}=g(x,y)\equiv \epsilon y^{\nu }-x^{2}y.\label{eq:y-eq.}$$ This system is a low-dimensional model for the nonlinear behavior of a plasma instability in which $y$ represents the pressure gradient, and instability (with amplitude $x$) is driven by the pressure gradient and fixed magnetic field line curvature. Such pressure-driven instabilities are thought to be responsible for edge localized modes (ELMs).
For the system with zero noise, $x$ grows if $y>1$, but for large enough $x$ the term $-x^{2}y$, which represents the flattening of the pressure gradient due to the fluctuation, enters. This causes a decrease in $y$, which quenches the growth. For this flow, $x=0$ is an invariant manifold, and is in fact the unstable manifold of a fixed point at $x=y=0$. See Fig. 1. The $x-$axis is also an invariant manifold, the stable manifold of the same fixed point. There are two further fixed points, with $x=\pm x_{0}=\pm \sqrt{\epsilon }$ and $y=1$. The nonlinear deterministic behavior consists of spirals coming out of the fixed points with $x=\pm x_{0}$, coming closer to the two invariant axes on each pass, and developing successively larger bursts. Because of symmetry in $x$, identical bursts can occur on both sides of $x=0$, isolated from each other by the invariant manifold $x=0$.
With a small amount of uncorrelated Gaussian noise added to eq. (\[eq:x-eq.\]), we find that the resulting nonlinear stochastic equation has the following property: the bursts saturate in amplitude, leading to behavior that is qualitatively similar to deterministic chaos. Further, the noise allows transitions across the $y-$axis, an invariant manifold for the deterministic system. Statistically, the dynamics is symmetric in $x$, e.g. in the number of bursts with $x<0$ compared with those with $x>0$; with statistical symmetry these are equal. In the physical system motivating this work, the processes we model as noise have a much shorter correlation time than the processes described by the deterministic equations (\[eq:x-eq.\]), (\[eq:y-eq.\]), so modeling them as white noise is a reasonable idealization.
There have been several related papers on nonlinear stochastic equations which are sensitive to a small amount of noise. Sigeti and Horsthemke [@sigeti] studied the effect of noise at a saddle-node bifurcation, and found noise induced oscillations at a characteristic frequency. Stone and Holmes [@Stone-and-Holmes] studied systems with an attracting homoclinic orbit or an attracting heteroclinic cycle (structurally stable because of the presence of a symmetry) in the presence of noise. They found that the effect of the noise is to prevent the time between bursts from increasing on each cycle. Stone and Armbruster [@Stone-Armbruster] studied structurally stable (again because of symmetry) heteroclinic cycles in the presence of noise, and analyzed the jumping between invariant subspaces of the deterministic system. Armbruster and Stone [@Armbruster-Stone] studied heteroclinic networks in the presence of noise, and the induced switching between cycles. References [@Stone-and-Holmes; @Stone-Armbruster; @Armbruster-Stone] stressed the importance of the linear part of the flow near the saddles. Moehlis [@Moehlis] has investigated a system representing binary fluid convection, and found that states with large bursts can be very sensitive to noise.
The difference between our work and this previous work is the following. Our work concerns a system which, in the absence of noise, has successive bursts, each larger than its predecessor and separated by lengthening time intervals. In the presence of noise, our system exhibits a finite characteristic scale for the burst amplitude, a characteristic time for bursts, and random switching across an invariant manifold of the deterministic system. Further, our deterministic system is two-dimensional, and therefore cannot have deterministic chaos, but the noise introduces behavior which resembles deterministic chaos in several ways. In Refs. [@Stone-and-Holmes; @Stone-Armbruster; @Armbruster-Stone] systems with homoclinic or heteroclinic cycles were studied; the noise was found to induce switching between subspaces and introduced a characteristic time scale for intervals between bursts, but the bursts in the deterministic system were limited in magnitude. The model of Ref. [@Moehlis] is four-dimensional, and therefore can, unlike our system, exhibit chaotic behavior even without noise, in principle. It was found that this specific system can have periodic bursts of infinite magnitude. These infinite bursts are periodic in the sense that the solutions to the equations can reach the origin in finite time and can be integrated through it, leading to a periodic signal. These states with large periodic bursts were found to be sensitive to noise. This behavior is to be contrasted with the behavior we have found from eqs. (\[eq:x-eq.\]), (\[eq:y-eq.\]), in which (for $\nu <2$) successive bursts get larger in magnitude, but no single burst goes to infinity, and noise causes the bursts to behave in a way that resembles deterministic chaos. The model in Refs. [@billings-schwartz; @bollt-billings-schwartz] exhibits noise-induced chaos because of bi-instability, related to the presence of two nearby unstable orbits.
The model we introduce is similar to the models of Refs. [@Stone-and-Holmes; @Stone-Armbruster; @Armbruster-Stone] with a heteroclinic connection, in the formal sense that in our model the $y-$axis is a heteroclinic orbit between the saddle at $(x,y)=(0,0)$ and the point at infinity. After a change of variables, the point at infinity can be mapped to a finite point and the origin can be left fixed. The new unstable manifold maps from the origin to this second fixed point. However, additive noise in our system would then map to non-additive noise in the compactified version. In particular the noise disappears at the second fixed point, which is physically unrealistic.
In Sec. 2 we introduce the deterministic form of the model and show that with $\nu =1$ it is equivalent to the Lotka-Volterra predator-prey model. We discuss the surface of section map $x\rightarrow x'=F(x)$, taking minima of $x$ to maxima of $x$ (and vice-versa), as well as the composite map $x\rightarrow x''$.
In Sec. 3 we introduce the stochastic model and present results. These results include those on the Lyapunov exponent $h_{1}$ and the distribution of maxima of $|x|$ and the time interval $T$ between bursts, and the dependence of these quantities on the noise diffusion coefficient $D$. A brief discussion of the behavior near the $y-$axis is also given. In this limit, the behavior in $x$ is linear and can be treated by the Fokker-Planck equation, discussed in more detail in Appendix A.
In Sec. 4 we discuss the role of reflection symmetry in $x$ and the effect of weak symmetry breaking. We also present results involving modifications to the system at small and large $y$, and a modified form of the equations in which the noise is replaced by a sinusoidal perturbation. The results with an offset show that in a sense the system with noise is structurally stable. The results with a sinusoidal perturbation lend credence to the validity of the Lyapunov exponent for the random case.
In Sec. 5 we show results from an experiment with a nonlinear circuit, showing noise stabilization in a physical system.
In Sec. 6 we summarize our work.
Deterministic model
===================
The deterministic form of the model we study is eqs. (\[eq:x-eq.\]), (\[eq:y-eq.\]). The parameters $\epsilon ,\: \nu $ are the only parameters that cannot be removed by rescaling $x,$ $y,$ and $t$. Starting with $x=0$ and $y>0$, $y$ increases in time, going to infinity in finite time if $\nu >1$. If $x$ grows at a rapid enough rate relative to $y$ (to be quantified later), the second term in (\[eq:y-eq.\]) eventually dominates the first and $y$ decreases. For $\nu =1$ the system (\[eq:x-eq.\]), (\[eq:y-eq.\]) is the Lotka-Volterra predator-prey model. The usual form[@Strogatz] of this system, in scaled variables, is$$\frac{dX}{ds}=X(Y-1),$$ $$\frac{dY}{ds}=(E-X)Y.$$ With $X=x^{2}/2,\: Y=y,\: s=2t,\: E=\epsilon /2$, it can be put in the form of eqs. (\[eq:x-eq.\]), (\[eq:y-eq.\]) with $\nu =1$. Notice that in this latter form there is a symmetry $x\rightarrow -x$ not present in the usual form. For this value of $\nu $, equations (\[eq:x-eq.\]), (\[eq:y-eq.\]) can be written in terms of $q=\ln x$, $p=\ln y$ in the form$$\frac{dq}{dt}=e^{p}-1,$$ $$\frac{dp}{dt}=\epsilon -e^{2q}.$$ This system is Hamiltonian in the canonical variables $(q,p)$:$$H(q,p)=e^{p}-p+\frac{1}{2}e^{2q}-\epsilon q=y-\ln y+\frac{1}{2}x^{2}-\epsilon \ln x.\label{eq:hamiltonian}$$
Successive intersections of $H=const.$ with $y=1$ define a $1D$ surface of section map $x\rightarrow x'=F(x)$. See Fig. 2. There are fixed points at $x=\pm x_{0}=\pm \sqrt{\epsilon },\: y=1$. The mapping $F$ is determined by conservation of $H(q,p)$, i.e.$$\frac{1}{2}x^{2}-\epsilon \ln x=\frac{1}{2}x'^{2}-\epsilon \ln x'.\label{eq:LV-map}$$ For small $x$ we find $x\approx x'\exp (-x'^{2}/2\epsilon )$, which can be approximated further by $x'=\sqrt{-2\epsilon \ln x}$. Thus for small $x$ or very large $x'$, $F(x)$ is logarithmic in nature. For large $x$ or small $x'$ we have the inverse $x'=x\exp (-x^{2}/2\epsilon )$.
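The implicit relation above can be solved numerically for $x'$. The sketch below (our illustration, with $\epsilon =0.5$ as in the later figures) brackets the large root $x'>\sqrt{\epsilon }$ of the conserved combination $u(x)=x^{2}/2-\epsilon \ln x$ by bisection, and exhibits the logarithmic behavior for small $x$:

```python
import math

EPS = 0.5  # illustrative value of epsilon

def u(x, eps=EPS):
    # The conserved combination in the map relation: u(x) = u(x').
    return 0.5*x*x - eps*math.log(x)

def F(x, eps=EPS):
    """Map a small crossing x < sqrt(eps) at y = 1 to the large root
    x' > sqrt(eps) of u(x') = u(x); u is monotone increasing there."""
    c = u(x, eps)
    lo, hi = math.sqrt(eps), 10.0
    while u(hi, eps) < c:          # widen the bracket if necessary
        hi *= 2.0
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if u(mid, eps) < c:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

# Small-x behavior: F(x) is close to the logarithmic estimate sqrt(-2*eps*ln x).
print(F(1e-4), math.sqrt(-2.0*EPS*math.log(1e-4)))
```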
On the other hand, for $\nu >1$ the system is not Hamiltonian. It has fixed points at $y=1$, $x=\pm x_{0}=\pm \sqrt{\epsilon }$ and at $x=y=0$. Near these fixed points, orbits evolve according to the Jacobian $\mathsf{J}(x,y)=\nabla \mathbf{f}$, i.e.$$\frac{d}{dt}\delta \mathbf{x}(t)=\mathsf{J}\delta \mathbf{x}(t).\label{eq:variational}$$ For eqs. (\[eq:x-eq.\]), (\[eq:y-eq.\]),$$\mathsf{J}(x,y)=\left[\begin{array}{cc}
y-1 & x\\
-2xy & \epsilon \nu y^{\nu -1}-x^{2}\end{array}
\right].$$ For the two fixed points at $x=\pm x_{0},\: y=1$ the eigenvalues satisfy $\lambda ^{2}-\epsilon (\nu -1)\lambda +2\epsilon =0,$ and are complex with positive real parts (unstable) for $$0<\nu -1<\sqrt{8/\epsilon }.\label{eq:bound-on-epsilon-nu}$$ Orbits continue to spiral out for $\nu >1$. This is demonstrated by showing that the Hamiltonian for the case $\nu =1$ in eq. (\[eq:hamiltonian\]) is a Lyapunov function for $\nu \neq 1$. To show this, we note$$\frac{dH}{dt}=\frac{dx}{dt}\frac{\partial H}{\partial x}+\frac{dy}{dt}\frac{\partial H}{\partial y}=\epsilon \left(y-1\right)\left(y^{\nu -1}-1\right).$$ Thus, for $\nu >1$, $dH/dt>0$ and the orbits spiral outward for all time, since $H$ has a minimum at $x=x_{0},y=1$. For $\nu <1$, $dH/dt<0$ and the orbits spiral in toward the fixed point.
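Both statements are easy to verify numerically. The sketch below (illustrative parameter values and initial condition, not taken from the paper) integrates eqs. (\[eq:x-eq.\]), (\[eq:y-eq.\]) with a hand-rolled RK4 step and tracks $H$: it is conserved for $\nu =1$ and increases monotonically for $\nu >1$.

```python
import math

def rk4_step(x, y, eps, nu, dt):
    """One RK4 step of dx/dt = (y-1)x, dy/dt = eps*y**nu - x^2*y."""
    def f(x, y):
        return (y - 1.0)*x, eps*y**nu - x*x*y
    k1 = f(x, y)
    k2 = f(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1])
    k3 = f(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1])
    k4 = f(x + dt*k3[0], y + dt*k3[1])
    return (x + dt/6.0*(k1[0] + 2.0*k2[0] + 2.0*k3[0] + k4[0]),
            y + dt/6.0*(k1[1] + 2.0*k2[1] + 2.0*k3[1] + k4[1]))

def H(x, y, eps):
    # The nu = 1 Hamiltonian: H = y - ln y + x^2/2 - eps*ln x.
    return y - math.log(y) + 0.5*x*x - eps*math.log(x)

def track_H(nu, eps=0.5, x=0.5, y=1.5, dt=0.001, tmax=10.0):
    vals = [H(x, y, eps)]
    t = 0.0
    while t < tmax:
        x, y = rk4_step(x, y, eps, nu, dt)
        t += dt
        vals.append(H(x, y, eps))
    return vals
```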
The axes $x=0,\: y=0$ are invariant manifolds; we consider only $y>0$, and for the noise-free case orbits with $x(0)>0$ remain in that quadrant. For $\epsilon $ and $\nu $ in the range given in eq. (\[eq:bound-on-epsilon-nu\]), orbits spiral away from the fixed points at $(\pm x_{0},1)$ [\[]{}Fig. 1[\]]{}, approaching the $x-$ and $y-$axes, as shown in Fig. 3,
which has $\epsilon =0.5,\nu =1.2$. After an initial transient, the motion is bursty, with each successive oscillation coming closer to the axes, leading to a larger interburst interval, followed by a larger burst.
We compute the finite time Lyapunov exponent $$h_{1}(t)=\frac{1}{t}\ln \left(\frac{|\delta \mathbf{x}(t)|}{|\delta \mathbf{x}(0)|}\right),\label{eq:lyapunov}$$ where $\delta \mathbf{x}(t)$ is evolved according to eq. (\[eq:variational\]) and $\mathbf{x}(t)$ is evolved by eqs. (\[eq:x-eq.\]), (\[eq:y-eq.\]). In deterministic systems with a chaotic attractor, $h_{1}(t)$ measures the average exponential rate of divergence, or stretching, over $0<t'<t$. The largest Lyapunov exponent is the limit of $h_{1}(t)$ as $t\rightarrow \infty $ or the average, with suitable invariant measure, of $h_{1}(t)$ over the attractor. The exponent $h_{1}(t)$ is shown as a function of time in Fig. 3d. It is clear that $h_{1}(t)$ shows the bursts in $x$ and $y$, and decreases whenever the orbit is near enough to the origin. In Fig. 4 we show the zero contours of the larger eigenvalue $\rho (x,y)$ of the symmetrized Jacobian $\mathsf{J}_{s}=(\mathsf{J}+\mathsf{J}^{T})/2$, computed analytically.
This quantity is relevant because $|\delta \mathbf{x}(t)|=(\delta \mathbf{x}(t),\delta \mathbf{x}(t))^{1/2}$ evolves according to $$(d/dt)(\delta \mathbf{x}(t),\delta \mathbf{x}(t))=(\delta \mathbf{x}(t),2\mathsf{J}_{s}\delta \mathbf{x}(t))\leq 2\rho (t)(\delta \mathbf{x}(t),\delta \mathbf{x}(t)),$$ so that $\rho (t)=\rho (x(t),y(t))$ is an upper bound for the local contribution to $h_{1}(t)$, namely $(d/dt)\ln |\delta \mathbf{x}(t)|=|\delta \mathbf{x}(t)|^{-1}(d/dt)|\delta \mathbf{x}(t)|\leq \rho (t)$. From this we find $dh_{1}(t)/dt\leq [\rho (t)-h_{1}(t)]/t$, i.e., $(d/dt)(th_{1})\leq \rho (t)$, or $h_{1}(t)\leq t^{-1}\int _{0}^{t}\rho (s)ds$.
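The bound $h_{1}(t)\leq t^{-1}\int _{0}^{t}\rho (s)ds$ can be checked numerically. The sketch below (our illustration; forward Euler for both the orbit and the tangent vector, with $\epsilon =0.5$, $\nu =1.2$ and an arbitrary initial condition) accumulates both sides along a deterministic orbit, renormalizing the tangent vector to avoid overflow:

```python
import math

EPS, NU = 0.5, 1.2

def f(x, y):
    return (y - 1.0)*x, EPS*y**NU - x*x*y

def jac(x, y):
    # Jacobian J of the flow.
    return (y - 1.0, x), (-2.0*x*y, EPS*NU*y**(NU - 1.0) - x*x)

def rho(x, y):
    # Larger eigenvalue of the symmetrized Jacobian J_s = (J + J^T)/2.
    (a, b), (c, d) = jac(x, y)
    off = 0.5*(b + c)
    return 0.5*(a + d) + math.sqrt((0.5*(a - d))**2 + off*off)

def h1_and_bound(x=0.8, y=1.0, dt=2e-4, tmax=50.0):
    dx, dy, scale = 1e-8, 0.0, 1e-8
    logsum, rho_int, t = 0.0, 0.0, 0.0
    while t < tmax:
        (a, b), (c, d) = jac(x, y)
        fx, fy = f(x, y)
        rho_int += dt*rho(x, y)
        dx, dy = dx + dt*(a*dx + b*dy), dy + dt*(c*dx + d*dy)
        x, y = x + dt*fx, y + dt*fy
        n = math.hypot(dx, dy)
        logsum += math.log(n/scale)        # accumulate stretching, then renormalize
        dx, dy = dx/n*scale, dy/n*scale
        t += dt
    return logsum/t, rho_int/t
```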
Further insight into the bursty nature can be obtained by finding the surface of section, shown in Fig. 5 and discussed above for the Hamiltonian case $\nu =1$.
For the parameters of Fig. 3, this map $x\rightarrow x'=F(x)$ is shown in Fig. 5a. The slope $F'(x)$ at the fixed point $x=\sqrt{\epsilon }$, computed numerically, equals $s_{1}=-1.17$. This value agrees with the value obtained from the complex eigenvalues $\lambda $ of $\mathsf{J}(x_{0},y_{0})$, which satisfy $\lambda =\lambda _{r}\pm i\lambda _{i}$ with $\lambda _{r}=\epsilon (\nu -1)/2$ and $\lambda _{i}=\pm \sqrt{2\epsilon }\left(1+O(\epsilon (\nu -1)^{2})\right)$, which equals $\pm 1$ for $\epsilon =0.5$ and $\nu -1\ll 1$. This gives $s_{1}\approx -e^{\epsilon (\nu -1)\pi /2}$. For $\epsilon =1/2,\: \nu =1.2$, this gives $s_{1}=-1.17$, in agreement with the numerical results. This value $s_{1}$ is less than $-1$, as it must be because the fixed point is unstable. Note that the values of $x'$ for small $x$ rise rapidly as $x\rightarrow 0$ [\[]{}$x'$ is approximately proportional to $\sqrt{-\ln x}$, as suggested by the $\nu =1$ (Lotka-Volterra) results discussed after eq. (\[eq:LV-map\])[\]]{}, indicating that orbits that are near $x=0$ when they pass $y=1$ lead to large succeeding maxima. Even more pronounced is that for $x>3$ the values of $x'$ are vanishingly small, showing that moderately large maxima lead to succeeding minima that are extremely close to the $y-$axis. In Fig. 5b we show the composite surface of section $x\rightarrow x''$, from one minimum to the next, or one maximum to the next. The slope at the fixed point is $1.37\approx s_{1}^{2}$, as expected. For large $x$, $x''=F^{2}(x)$ appears to be exponential in $x$.
Next, we turn to a discussion of the choice of the parameter $\nu $. Let us investigate the range of the parameters $\nu ,\epsilon $ for which the system exhibits successively larger, more widely separated bursts.
Consider eqs. (\[eq:x-eq.\]), (\[eq:y-eq.\]) for large $y$ and small $x$, i.e.$$\frac{dx}{dt}=yx,\label{eq:x-eq-y-large}$$ $$\frac{dy}{dt}=g(0,y)\approx \epsilon y^{\nu }.\label{eq:y-eq-y-large}$$ From these we conclude $$x=x_{c}\exp \left[\frac{y^{2-\nu }}{\epsilon (2-\nu )}\right],\label{eq:large-y-solution}$$ where $x_{c}\exp \left[1/\epsilon (2-\nu )\right]$ is the value of $x$ when the orbit passes $y=1$ with small $x$. Let us compare the two terms on the right in eq. (\[eq:y-eq.\]), first for $\nu =1$ (Lotka-Volterra). The second term exceeds the first if $x^{2}>\epsilon $ and, since $x\sim e^{y/\epsilon }$, the nullcline $dy/dt=0$ is crossed, and $y$ eventually decreases. For $1<\nu <2$, the nullcline is crossed when $x^{2}\geq \epsilon y^{\nu -1}$ or$$x_{c}^{2}\exp \left[\frac{2y^{2-\nu }}{\epsilon (2-\nu )}\right]\geq \epsilon y^{\nu -1},\label{eq:nullcline}$$ which occurs eventually. So, in each burst, $y$ reaches a maximum and begins to decrease, starting a new cycle, as long as $x\neq 0$. (The orbits with $x=0$ go to infinity in finite time for $\nu >1$.)
For $\nu =2$, we can use eq. (\[eq:x-eq-y-large\]) with eq. (\[eq:y-eq.\]) for arbitrary $x$ (including the term $-x^{2}y$) to obtain, for large $y$,$$\frac{dy}{dx}=\epsilon \frac{y}{x}-x.$$ The solution is $$y=\zeta x^{\epsilon }-\frac{x^{2}}{2-\epsilon },$$ with $\zeta >0$; the nullcline has $y=x^{2}/\epsilon $. For $\epsilon <2$, the nullcline is crossed and the cycle begins again. For $\epsilon >2$ the nullcline is not crossed and the orbit can go off to infinity in one cycle, in finite time.
For $\nu >2$, the nullcline in eq. (\[eq:nullcline\]) is never reached if $x_{c}$ is small enough. This means that if the value of $x$ when the orbit crosses $y=1$ is below some critical value, the orbit will go off to infinity before another cycle. Therefore, an orbit starting near the fixed point $(x,y)=(\sqrt{\epsilon },1)$ will encircle the fixed point a finite number of times and then go off to infinity in finite time.
Stochastic model and results
============================
Model
-----
With noise, the system based on eqs. (\[eq:x-eq.\]), (\[eq:y-eq.\]) is a nonlinear stochastic ODE, of the form$$\frac{dx}{dt}=f(x,y)+\sqrt{2D}\xi (t),\label{eq:x-with-noise}$$
$$\frac{dy}{dt}=g(x,y),\label{eq:y-with-noise}$$
with $\xi (t)$ representing uncorrelated unit variance Gaussian noise, having $\langle \xi (t)\rangle =0,\: \langle \xi (t)\xi (t')\rangle =\delta (t-t')$. Here, $D$ is the Brownian diffusion coefficient. For a low noise level, $\xi (t)$ affects the dynamics only near the $y-$axis, where $f(x,y)$ is small. The motivation for including noise in the $x-$equation but not in the $y-$equation is the following. Without noise, when the orbit is traveling along the $y-$axis for $y<1$, $x(t)$ can decrease to a level that is unrealistically small for modeling any physical application with noise. Noise prevents $x$ from becoming so small for $0<y<1$, and therefore is expected to prevent the successive bursts from continuing to increase in magnitude, with increasing interburst time interval. We do not include noise in the $y-$equation because noise could cause $y$ to become negative when the orbit is near the $x-$axis. We will discuss a model allowing negative $y$ in Sec. 4.
We integrate the nonlinear stochastic ODE system (\[eq:x-with-noise\]), (\[eq:y-with-noise\]) numerically, with a noise term in $x$ added at each time step. Specifically, the time stepping from $t$ to $t+h$ is $$\begin{array}{c}
x(t+h)=x(t)+hf\left(\frac{x(t)+x(t+h)}{2}\: ,\: \frac{y(t)+y(t+h)}{2}\right)+\sqrt{2Dh}\xi (t),\\
y(t+h)=y(t)+hg\left(\frac{x(t)+x(t+h)}{2}\: ,\: \frac{y(t)+y(t+h)}{2}\right).\end{array}
\label{eq:finite-difference}$$ The implicit form of the deterministic part of the equations is solved by a simple Picard iteration. The random term is added after this iteration on the deterministic equations has converged. Each value $\xi (t)$ is an independent random number with zero mean Gaussian distribution and unit variance, and the coefficient $\sqrt{2Dh}$ is chosen to give results independent of the time step $h$ (in a mean-square sense) in the limit $h\rightarrow 0$.
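A minimal implementation of this scheme is sketched below (the step size, duration, seed, starting point, and the small positivity guard on $y$ are our illustrative additions, not from the paper). Run with $\epsilon =0.5$, $\nu =1.2$, $D=5\times 10^{-9}$, it reproduces the qualitative behavior of Fig. 6: bursty dynamics with bounded burst amplitude.

```python
import math, random

EPS, NU, D = 0.5, 1.2, 5e-9   # parameters of Figs. 3 and 6

def f(x, y):
    return (y - 1.0)*x

def g(x, y):
    # Positivity guard on y**nu is our addition; y stays positive in practice.
    return EPS*max(y, 1e-12)**NU - x*x*y

def sde_step(x, y, h, rng):
    """One step of the semi-implicit midpoint scheme: Picard-iterate the
    deterministic part, then add the Gaussian increment to x."""
    xn, yn = x, y
    for _ in range(4):             # a few Picard iterations suffice for small h
        xm, ym = 0.5*(x + xn), 0.5*(y + yn)
        xn = x + h*f(xm, ym)
        yn = y + h*g(xm, ym)
    return xn + math.sqrt(2.0*D*h)*rng.gauss(0.0, 1.0), yn

rng = random.Random(12345)         # fixed seed for reproducibility
x, y, h, t = math.sqrt(EPS) + 0.01, 1.0, 0.005, 0.0
xs = []
while t < 300.0:
    x, y = sde_step(x, y, h, rng)
    xs.append(x)
    t += h
```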
Numerical results
-----------------
Results for the same parameters as in Fig. 3, with noise having $D=5\times 10^{-9}$, are shown in Fig. 6, with $0\leq t\leq 1000$.
The orbits are still of a bursty nature, but the bursts and the interburst time intervals are limited in magnitude. The successive bursts appear to be uncorrelated and bursts with $x$ negative are as common as those with $x$ positive, after the transient near the fixed point at $x=x_{0}=\sqrt{\epsilon },y=1$. To the eye, these results appear similar to those of a chaotic deterministic system, e.g. the $y-z$ projection of the Lorenz system[@Ott].
In Figure 7 we show the finite time Lyapunov exponent $h_{1}(t)$ for the case of Fig. 6 for $0\leq t\leq 10^{4}$. The orbits $\mathbf{x}(t)=(x(t),y(t))$ given by eqs. (\[eq:x-with-noise\]), (\[eq:y-with-noise\]) are affected by the noise $\xi (t)$ but the variational form for $\delta \mathbf{x}(t)$ is eq. (\[eq:variational\]) and does not directly involve the noise. [\[]{}Two orbits $\mathbf{x}_{1}(t)$ and $\mathbf{x}_{2}(t)=\mathbf{x}_{1}(t)+\delta \mathbf{x}(t)$ with slightly different initial conditions are integrated in time with the same realization of the noise $\xi (t)$.[\]]{} For these parameters $h_{1}(t)$ converges to $0.032$ as $t\rightarrow \infty $. For several other values of $\epsilon ,\nu ,$ and $D$, with $1<\nu <2$ and satisfying (\[eq:bound-on-epsilon-nu\]), similar results are obtained. This positive Lyapunov exponent shows exponential divergence between nearby orbits.
To analyze the bursts in terms of amplitude and time interval between bursts, we introduce $x_{n}$, $x_{n+1}$ and $T_{n}$. (See Fig. 3.) These are, respectively, the amplitude (in $x$) of a burst (a local maximum for positive $x$, a local minimum for negative $x$), the amplitude of the following burst, and the time interval between them. In Fig. 8
we show scatter plots of $T_{n}$ vs. $x_{n}$, $x_{n+1}$ vs. $T_{n}$, and the composite $x_{n+1}$ vs. $x_{n}$ for the parameters of the case of Figs. 6 and 7, indicating the probability density functions $f_{1}(x_{n},T_{n}),\: f_{2}(T_{n},x_{n+1})$ and $f_{3}(x_{n},x_{n+1})$. These are the marginal distributions of the full distribution $g(x_{n},T_{n},x_{n+1})$ projected over $x_{n+1}$, $x_{n}$, and $T_{n}$, respectively. The first has very little scatter. This lack of scatter shows a very strong correlation. However, this correlation is strongly nonlinear and would not be reflected in the linear correlation coefficient, but would require a diagnostic such as the conditional entropy [@Cover_Thomas]. The other plots show the expected symmetry in $x$. Specifically, there are four equivalent peaks in the four quadrants in Fig. 8c, showing that successive peaks are positive or negative, independent of the sign of the previous peak. Fig. 8b shows a long tail in $T_{n}$, and sharp cutoffs for small $|x_{n}|$ and small $T_{n}$.
In Fig. 9
are histograms, showing the marginal distributions of $x_{n}$, at the maxima of $\vert x\vert $, and the interburst time $T_{n}$. (See Fig. 3.) The maximum time was $t=10^{6}$ and there were about $23000$ peaks in $x_{n}$ and the same number of interburst intervals $T_{n}$. The histogram of $x_{n}$ is symmetric and shows peaks at $|x_{n}|=3.7$, with tails around $|x_{n}|=4.5$ and a sharp cutoff inside at $|x_{n}|=3.3$. The latter histogram, reflecting the nonlinear correlation of $T_{n}$ with $x_{n}$ shown in Fig. 8a, has a strong cutoff inside $T_{n}=30$, a peak at $T_{n}=38$, and a tail for $T\sim 60-80$.
Fokker-Planck analysis near $x=0$
---------------------------------
The peaks discussed in Figs. 8 and 9 are maxima in $|x|$, which occur at $y=1$. These are related to the values of $x$ near zero for which $y=1$: for small values of $D$, the noise is important only near the $y-$axis, and as the orbit lifts off this manifold it essentially obeys the deterministic equations, and therefore the peaks in $|x|$ are determined to high accuracy by the crossing of $y=1$ for small $x$. In this section we quantify this behavior by means of analysis involving the Fokker-Planck equation for behavior near the $y-$axis.
As the orbit travels near the $y-$axis, $x(t)$ satisfies the linear stochastic equation
$$\frac{dx}{dt}=\gamma (t)x+\xi (t),\label{eq:linear-stochastic}$$
where $\gamma (t)=y(t)-1$; for small $x$, $y$ satisfies $\dot{y}=\epsilon y^{\nu }$, independent of $x$. The noise $\xi (t)$ has the statistical characteristics described after eqs. (\[eq:x-with-noise\]), (\[eq:y-with-noise\]). Linearization in $x$ holds for small $D$, up to the time when the term $-x^{2}y$ in eq. (\[eq:y-eq.\]) becomes important. For low noise level (small $D$), the successive bursts are large in magnitude, leading to small values of $x$ on the next pass. On each successive pass near $y=1$, the correlation with the previous peak of $|x|$ is lost, according to the results shown in Fig. 8. This behavior is due to the fact that for $g(0,y)=\epsilon y^{\nu }$ with $\nu >1$, $x$ becomes small enough to become dominated by the noise while $y<1$.
In Appendix A we have included an analysis based on the Fokker-Planck equation for orbits near $x=0$, where eq. (\[eq:linear-stochastic\]) is valid. Conclusions based on this Fokker-Planck analysis and direct simulations are the following. The mean value $\left\langle |x_{n}|\right\rangle $ (cf. Fig. 9a) decreases with $D$. Its dependence on $D$ is shown in Fig. 11a.
The mean of the histogram of the interburst time $T_{n}$ as a function of $D$ is shown in Fig. 11b. The results for small $D$ in Fig. 11a are qualitatively similar to the behavior of $F(x)$ shown in Fig. 5a. This is expected because, as we have discussed in Appendix A, the orbits cross $y=1$ with typical values of $x$ proportional to $\sigma _{x}\sim \alpha ^{1/2}\sim D^{1/2}/\epsilon ^{1/4}$, and proceed with little subsequent effect of noise. The dependence of $\left\langle |x_{n}|\right\rangle $ on $D$ appears to be approximately logarithmic for small $D$, consistent with the approximately logarithmic behavior of the map $F$ shown in Fig. 5a. It is also interesting to note that, although $h_{1}$ increases with $D$, the increase is logarithmic (for $D\lesssim 5\times 10^{-5}$) and slow, varying by just over a factor of two for $5\times 10^{-12}<D<5\times 10^{-4}$. This logarithmic behavior extrapolates to $h_{1}=0$ at the very low level $D=10^{-19}$, giving $\sqrt{2Dh}=2\times 10^{-11}$ [cf. eq. (\[eq:finite-difference\])].
The analysis in Appendix A shows that for small $x$, near the intersection with $y=1$, $x$ has a Gaussian distribution, $f(x)\propto e^{-x^{2}/2\sigma _{x}^{2}}$. This yields a distribution for $x'$, at the next crossing of $y=1$ where $|x_{n}|$ is a maximum, equal to$$g(x')=\vert dx/dx'\vert f(x(x')),$$ where the functional form for $x(x')$ is shown in Fig. 5a. The second factor is responsible for the sharp cutoff to the left of the peak in Fig. 9a; the tail to the right of the peak is due to the Jacobian factor $\vert dx/dx'\vert $. For example, for $\nu =1.2$ the behavior for small $x$ from Fig. 5 is similar to that for $\nu =1$, derived after eq. (\[eq:LV-map\]), namely $x'\sim \sqrt{-\ln x}$. From the Gaussian form for $f(x)$ we obtain $|dx/dx'|\sim x'e^{-x'^{2}}$ and$$g(x')\propto \left(x'e^{-x'^{2}}\right)e^{-\frac{x(x')^{2}}{2\sigma _{x}^{2}}}.$$ The first (Jacobian) factor $x'e^{-x'^{2}}$ gives a Gaussian-like tail for large $x'$ and the second factor gives a cutoff for $x'$ close to the fixed point $x'=x_{0}=\sqrt{\epsilon }$, where $x'-x_{0}=-s_{1}\left(x-x_{0}\right)$. This cutoff is sharp if $\sigma _{x}\ll x_{0}$.
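The shape of $g(x')$ can be evaluated directly using the illustrative small-$x$ form $x'\sim \sqrt{-\ln x}$ quoted above, i.e. $x(x')=e^{-x'^{2}}$, together with the Gaussian $f(x)$. The value $\sigma _{x}=10^{-4}$ below is an assumed one; the true map is the $F$ of Fig. 5a:

```python
# Illustrative evaluation of g(x') = |dx/dx'| f(x(x')) with the small-x
# inverse map x(x') = exp(-x'^2) and Gaussian f(x) = exp(-x^2/2 sigma_x^2).
# sigma_x is an assumed value; the shape shows a sharp left cutoff and a
# Gaussian-like right tail, as described in the text.
import math

sigma_x = 1e-4   # assumed throat width

def g(xp):
    x = math.exp(-xp * xp)                 # inverse map x(x')
    jac = 2.0 * xp * math.exp(-xp * xp)    # Jacobian |dx/dx'|
    return jac * math.exp(-x * x / (2.0 * sigma_x**2))

grid = [0.5 + 0.01 * i for i in range(500)]   # x' in [0.5, 5.5]
vals = [g(xp) for xp in grid]
gmax = max(vals)
xpeak = grid[vals.index(gmax)]
print(f"peak at x' = {xpeak:.2f}")
print(f"g(2.5)/gmax = {g(2.5)/gmax:.1e}  (sharp left cutoff)")
print(f"g(4.0)/gmax = {g(4.0)/gmax:.1e}  (Gaussian-like right tail)")
```

The peak sits near $x'\approx \sqrt{-\ln \sigma _{x}}$, with the density dropping abruptly on the left and gently on the right, as in Fig. 9a.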
To deal with issue (a), we show results with a constant offset added to eq. (\[eq:x-with-noise\]). These results suggest a modification to the notion of structural stability in the presence of noise: the behavior is qualitatively unchanged if the offset is small relative to the noise. To deal with issue (b), we show results in which the behavior for large $y$ is modified, preventing orbits from going to large values of $y$.
Breaking of the symmetry in $x$
-------------------------------
We have investigated the effect of breaking the reflection symmetry $x\rightarrow -x$ in eqs. (\[eq:x-with-noise\]), (\[eq:y-with-noise\]), motivated by the experimental results shown in Sec. 5.3. The simplest way of breaking this symmetry is to introduce a constant offset. With this offset, eq. (\[eq:x-with-noise\]) takes the form $$\frac{dx}{dt}=(y-1)x+a+\sqrt{2D}\xi (t),\label{eq:with-offset}$$ with the $y-$equation unchanged. (For $a<0$ the results are identical, with $x\rightarrow -x$.) For $D=0$ and $a>0$, the $y-$axis is no longer invariant and the growing bursts are replaced by a stable limit cycle on the right. Therefore the zero noise results of Sec. 2 are not structurally stable with respect to such an offset.
However, in the presence of noise, the results change considerably. In Figs. 12a,b we show $x(t)$ and the phase portrait $y$ vs $x$ for a case with the same parameters as in Fig. 6 (in particular with $D=5\times 10^{-9}$), but with $a=5\times 10^{-5}$. The results are qualitatively similar to those in Fig. 6 except that most of the bursts go to the right. In Fig. 12c we show the fraction $\Phi $ of bursts that go to the left as a function of the offset $a$ for three values of $D$, and in Fig. 12d we show the Lyapunov exponent $h_{1}$. For $a\lesssim \sqrt{D}$, $h_{1}$ and the fraction $\Phi $ are appreciable and the orbits behave qualitatively as in Fig. 6. For $a\gtrsim \sqrt{D}$, on the other hand, nearly all bursts go to the right ($\Phi \approx 0$), the orbits have negative Lyapunov exponent, and they therefore behave qualitatively as the limit cycle found for $D=0,\: a>0$. These results, and those of Appendix A showing $\sigma _{x}\sim \sqrt{D}$, indicate that the offset changes the results qualitatively if it moves the orbit outside the region near $x=0$ where noise dominates.
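The threshold behavior at $a\sim \sqrt{D}$ can be probed numerically. The sketch below counts left-going bursts for zero offset and for an offset well above $\sqrt{D}$; the model parameters follow Fig. 12, while the factor of $50$ in the large-offset case, the step size, and the burst threshold are our choices:

```python
# Fraction Phi of left-going bursts for the offset model
# dx/dt = (y-1)x + a + sqrt(2D) xi, dy/dt = eps*y^nu - x^2*y.
# Parameters eps = 0.5, nu = 1.2, D = 5e-9 follow Fig. 12; a = 50*sqrt(D)
# for the "large offset" case is an illustrative choice.
import math
import random

def burst_signs(a, D=5e-9, eps=0.5, nu=1.2, dt=2e-3, nsteps=500_000, seed=2):
    rng = random.Random(seed)
    sig = math.sqrt(2.0 * D * dt)
    x, y, signs = 0.1, 0.5, []
    for _ in range(nsteps):
        y_old = y
        x += ((y - 1.0) * x + a) * dt + sig * rng.gauss(0.0, 1.0)
        y += (eps * y_old**nu - x * x * y_old) * dt
        if y_old >= 1.0 > y and abs(x) > 1.0:   # burst peak at y = 1
            signs.append(1 if x > 0 else -1)
    return signs

runs = {}
for a in (0.0, 50.0 * math.sqrt(5e-9)):
    s = burst_signs(a)
    runs[a] = s
    print(f"a = {a:.1e}: {len(s)} bursts, Phi = {s.count(-1) / len(s):.2f}")
```

For $a=0$ both signs occur with comparable frequency; for $a\gg \sqrt{D}$ essentially all bursts go to the right, consistent with Fig. 12c.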
This brings up the issue of structural stability of the behavior observed for $a=0$. For zero noise, this behavior, seen in Fig. 3, is certainly not structurally stable. However, for $D>0$ the qualitative behavior persists as long as $a\lesssim \sqrt{D}$. In this modified sense, the system with finite noise is structurally stable.
We will return to the issue of an offset in the electronic circuit in the next section.
Modifications for large $y$
---------------------------
We have discussed the deterministic model for $\nu >2$ in Sec. 2, showing that orbits go to infinity after a few passes near the fixed point $(x,y)=(x_{0}=\sqrt{\epsilon },1)$. The dynamics in the presence of noise is the following: if the noise is large enough, the value of $x$ at the throat where $y=1$ will typically be large enough that the system encircles $(x_{0},1)$ many times. Even with noise, however, eventually an orbit comes through the throat with small enough $x$ that the subsequent burst escapes to infinity before another cycle can occur.
A system related to eqs. (\[eq:x-eq.\]), (\[eq:y-eq.\]) with orbits that do not go to infinity is the predator-prey system of Odell [@Odell; @Strogatz]. This system can be put into the form$$\frac{dX}{ds}=X(Y-\eta ),$$ $$\frac{dY}{ds}=Y^{2}(1-Y)-XY,$$ or by a change of variables ($X=\eta x^{2}/2,\: Y=\eta y,\: s=2t/\eta $) $$\frac{dx}{dt}=(y-1)x,\label{eq:odell-1}$$ $$\frac{dy}{dt}=\epsilon y^{\nu }(1-\eta y)-x^{2}y,\label{eq:odell-2}$$ with $\epsilon =\nu =2$, i.e. the form of eqs. (\[eq:x-eq.\]), (\[eq:y-eq.\]) with $\nu =2$ and $y^{2}\rightarrow y^{2}(1-\eta y)$. This system has fixed points at $x=\pm \sqrt{\epsilon (1-\eta )},\: y=1$. For $\epsilon =\nu =2$ these fixed points are unstable if $\eta <1/2$ and oscillating (complex eigenvalues) if $\eta <\sqrt{3}/2$. This system also has a saddle at $x=y=0$, with zero eigenvalue in the $y$ direction. In addition, it has a fourth fixed point, with $x=0$ and $y=1/\eta $. This fixed point is a saddle, stable in the $y-$direction and unstable in the $x-$direction; the section of the $y-$axis with $0<y<1/\eta $ is a heteroclinic line. Because of the presence of this saddle, there are two stable limit cycles, related by the reflection symmetry in $x$, to which typical orbits converge. For $\eta $ small, this limit cycle has large excursions, with peaks in $y$ approaching $1/\eta $. We have studied eqs. (\[eq:odell-1\]), (\[eq:odell-2\]) with noise in $x$, and with $\epsilon $, $\nu $ in the range of parameters of Fig. 3. The results are similar to those of (\[eq:x-with-noise\]), (\[eq:y-with-noise\]), as long as $D$ is large enough that the excursions almost always have $y\ll 1/\eta $. Specifically, the value of $h_{1}$ and the probability density plots as in Figs. 8, 9 are essentially identical. The effect of positive $\eta $ is similar to the effect of clipping of the voltage corresponding to $y$ in the circuit (see Appendix B), except that by design the clipping turns on much more rapidly than the factor $(1-\eta y)$ in eq. (\[eq:odell-2\]).
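The quoted stability thresholds can be checked directly from the Jacobian of eqs. (\[eq:odell-1\]), (\[eq:odell-2\]) at the spiral fixed points, as in this sketch for $\epsilon =\nu =2$:

```python
# Eigenvalues of the Jacobian of the Odell-type system at the spiral fixed
# points x0 = +/- sqrt(eps*(1-eta)), y = 1, for eps = nu = 2.  The fixed
# points should be unstable for eta < 1/2 and have complex eigenvalues
# (spirals) for eta < sqrt(3)/2, as stated in the text.
import math

def eigenvalues(eta, eps=2.0):
    x0 = math.sqrt(eps * (1.0 - eta))
    # Jacobian of ((y-1)x, eps*y^2*(1-eta*y) - x^2*y) evaluated at (x0, 1):
    a11, a12 = 0.0, x0
    a21, a22 = -2.0 * x0, eps * (2.0 - 3.0 * eta) - x0 * x0
    tr, det = a11 + a22, a11 * a22 - a12 * a21
    disc = tr * tr - 4.0 * det
    if disc < 0:                         # complex pair: spiral
        re, im = tr / 2.0, math.sqrt(-disc) / 2.0
        return complex(re, im), complex(re, -im)
    r = math.sqrt(disc)                  # real pair: node
    return (tr + r) / 2.0, (tr - r) / 2.0

for eta in (0.4, 0.6, 0.9):
    print(eta, eigenvalues(eta))
```

For $\eta =0.4$ this gives an unstable spiral, for $\eta =0.6$ a stable spiral, and for $\eta =0.9>\sqrt{3}/2$ a stable node with real eigenvalues.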
Modifications near $y=0$
------------------------
We have studied the system (\[eq:x-with-noise\]), (\[eq:y-with-noise\]) with $\epsilon y^{\nu }\rightarrow g_{0}(y)=\epsilon (\beta y+y^{\nu })$. This modification regularizes the vicinity of $y=0$: the saddle at the origin is no longer dominated by $y^{\nu }$, and has eigenvalues $-1,\: \epsilon \beta $. The spiraling fixed points have $x=\pm x_{0}=\pm \sqrt{\epsilon (1+\beta )},\: y=1$. We have found that noise has the same qualitative influence for positive $\beta $ as it does for $\beta =0$. In Fig. 13a we show the scatter plot of $x_{n+1}$ vs. $x_{n}$ for $\beta =1$. For $\beta =1$ the eigenvalue $\epsilon \beta >1$, which implies that, when following a deterministic orbit along the $x-$axis and up along the $y-$axis, it ends up further from the $y-$axis than it started from the $x-$axis. (For the equations linearized about the origin, $x^{\epsilon \beta }y$ is constant.) This is related to the liftoff phenomenon of Refs. [@Stone-Armbruster; @Armbruster-Stone]. Based on this consideration, one might expect that the sign of $x_{n+1}$ might correlate with the sign of $x_{n}$, and the symmetry of the scatter plot would be broken, with the distribution $f_{3}(x_{n},x_{n+1})$ having more points in the NE and SW quadrants and fewer in the SE and NW quadrants, while of course still preserving the symmetry in the marginal distribution of $x_{n}$, $\int f_{3}(x_{n},x_{n+1})dx_{n+1}$. Nevertheless, the scatter plot for $\beta =1$ appears to have the same symmetry as for $\beta =0$.
This four-fold symmetry is explained by Fig. 13b, which shows the surface of section $x\rightarrow x''=F^{2}(x)$ for $0<x<x_{0}$, similar to that in Fig. 5b. For both cases $x''\ll x$. For these parameters $x''\sim x^{3}$ for $\beta =1$, while $x''$ goes to zero faster than any power when $\beta =0$. The origin is so strongly attracting under $F^{2}$ because small $x$ maps to large $x'$ under $F$ and the orbit from there passes extremely close to $y=0$, thereby leading to extremely small $x''$ in spite of $\epsilon \beta >1$. Because of this property, if an orbit starts with $x\sim \sigma _{x}$ at $y=1$ and executes one cycle, the value $x=x''$ when it crosses $y=1$ after this cycle will be so small ($x''\sim \sigma _{x}^{3}$ for $\beta =1$) that it is dominated by the noise added for small $x$ and will, even for $\beta =1$, be nearly independent of $x$. This four-fold symmetry was observed for these parameters for $5\times 10^{-13}<D<5\times 10^{-3}$.
Although the increase of $\beta $ has no effect on the symmetry of the scatter plot $x_{n}\rightarrow x_{n+1}$, it has a profound influence on the burst intervals $T_{n}$. For larger $\beta $, typical values of $T_{n}$ (not shown) are much smaller because of the liftoff phenomenon. This dependence of $T$ on $\beta $ is understood easily. Suppose the orbit enters the region $[0,a]\times [0,b]$ with $y=y_{0}$. We find that if $\beta y\gg y^{\nu }$, the time to exit the region equals $T_{1}\equiv (1/\epsilon \beta )\ln (b/y_{0})\sim \beta ^{-1}$. If, on the other hand, $\nu >1$ and the orbit is far enough from the origin that $y^{\nu }\gg \beta y$, then $\beta $ can be neglected and the time interval equals $T_{2}\equiv \left(y_{0}^{1-\nu }-b^{1-\nu }\right)/\left[\epsilon (\nu -1)\right]$. For example, for $\epsilon =1.5,\beta =1,\nu =1.2,b=1,y_{0}=10^{-4}$ we find $T_{1}=6.1$ and $T_{2}=18$.
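The two exit-time estimates can be checked against the quoted numbers:

```python
# Worked check of the two exit-time estimates for the region [0,a] x [0,b]:
# T1 = ln(b/y0)/(eps*beta)   when the linear term beta*y dominates, and
# T2 = (y0**(1-nu) - b**(1-nu)) / (eps*(nu-1))   when y^nu dominates,
# using the parameter values quoted in the text.
import math

eps, beta, nu, b, y0 = 1.5, 1.0, 1.2, 1.0, 1e-4
T1 = math.log(b / y0) / (eps * beta)
T2 = (y0**(1.0 - nu) - b**(1.0 - nu)) / (eps * (nu - 1.0))
print(f"T1 = {T1:.1f}, T2 = {T2:.0f}")   # T1 = 6.1, T2 = 18
```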
We have considered other models in which $g_{0}(y)$ is linear in $y$ near $y=0$ but behaves as $\epsilon y^{\nu }$ for large $y$. The cases investigated were $g_{0}(y)=\epsilon y(\beta ^{p}+y^{p(\nu -1)})^{1/p}$ for various values of $p$, including $p=1$. [Note that $g_{0}$ is analytic at $y=0$ if $p(\nu -1)$ is an integer.] The results for all the tested values of $p$ are similar to the $p=1$ case described above.
We have also considered the case $g_{0}(y)=\epsilon (\beta y+y^{2})$. In Sec. 2 we concluded that the deterministic system for $\nu =2$ continued to have bursts of increasing amplitude and time interval (rather than being capable of going to infinity in finite time in a single burst) if $\epsilon <2$. Results for various values of $\epsilon <2$ and $D$ show that the results are similar to those for $\nu <2$, as long as $\epsilon $ is small enough, $\beta $ is large enough, and $D$ is large enough. Note that for this case the flow is analytic everywhere, including $y<0$, and that for the deterministic form there is a fixed point at $x=0,\: y=-\beta $, as well as the fixed point at the origin, the latter having the $x-$axis as its stable manifold. This new fixed point is attracting in both directions, and therefore any noise in the $y-$direction eventually leads the orbit to this fixed point.
Limitation for large $y$ with asymmetry in $x$
----------------------------------------------
A phase portrait for a flow including both saturation in $y$ and symmetry breaking in $x$ [cf. eqs. (\[eq:with-offset\]), (\[eq:odell-2\])],$$\frac{dx}{dt}=(y-1)x+a,$$ $$\frac{dy}{dt}=\epsilon y^{\nu }(1-\eta y)-x^{2}y,$$ is shown in Fig. 14, with $\eta =0.1$ and $a=0.015$.
The unstable spirals are now slightly asymmetric due to the finite value of $a$. There are two saddle points near the origin, at $x=a,\: y=0$ and at $x\approx a,\: y\approx (a^{2}/\epsilon )^{1/(\nu -1)}$. For these parameters, the second fixed point has $y\sim a^{10}/\epsilon ^{5}\sim 10^{-17}$, and the dynamics of the system can be described as if only the saddle at $x=a,\: y=0$ exists. This saddle still has the $x-$axis as its stable manifold (with right and left pieces labeled $1SR$ and $1SL$). The unstable manifold of this saddle (labeled $1U$) now is no longer the $y-$ axis, but bends slightly to the right and eventually asymptotes to a limit cycle (not shown) orbiting the right spiral fixed point. Another saddle at approximately $x\sim -a\eta ,\: y\sim 1/\eta $ (filled circle) has an unstable manifold with right and left pieces (labeled $2UR$ and $2UL$, respectively). The invariant manifolds bend downward, coming into the vicinity of the $x-$axis, pass very close to the saddle at the origin, and both converge onto the unstable manifold $1U$, thus approaching the limit cycle on the right as well. The stable manifold for the upper saddle point, labeled $2S$, if followed backward in time, asymptotes to the spiral on the left. Hence, a narrow region on the $y-$axis near $y=1$ that is bounded by $2S$ on the left and $1U$ on the right sets the scale for the noise response. If the noise amplitude is smaller than the width of this region (denoted $\Delta $), nearly all points passing through this region will go to the right and asymptote to the limit cycle. If $\sigma _{x}>\Delta $, then orbits will get kicked to the left and right with nearly equal probability, leading to noise stabilized behavior that prevents the relaxation onto the limit cycle.
Thus, the presence of the symmetry breaking term in the deterministic $dx/dt$ equation destroys the heteroclinic connection between the two saddle points, leading generically to a limit cycle either on the right or left, depending upon the sign of the offset. Thus, the deterministic dynamics for $a=0,\eta >0$ discussed in Sec. 4.2 is not structurally stable, but the behavior with noise is structurally stable in the sense discussed at the end of Sec. 4.1. The noise response is very similar to the noise response of the model with $\eta =a=0$ (Sec. 3), to the model with $\eta =0,a\neq 0$ (Sec. 4.1) and to the model with $\eta >0,a=0$ (Sec. 4.2).
Sinusoidal perturbation
-----------------------
We have integrated eqs. (\[eq:x-eq.\]), (\[eq:y-eq.\]) with a sinusoidal term $\xi (t)=b\sin (\omega t)$ added to the $x-$equation rather than random noise. We chose $\omega $ to be large enough so that the sine goes through many cycles when the orbit is along the $x-$axis, but small enough to avoid aliasing, i.e. $\omega h<\pi $, where $h$ is the time step. The sinusoidal and random forms of $\xi (t)$ are extremes of temporal driving, with quasiperiodic time dependence and colored random time dependence as intermediate cases. In all such cases the analysis of Sec. 3.2 indicates that the typical value of $x$ at $y=y_{2}$ is the important factor. (See Sec. 3.2 and Fig. 10.) This suggests that the Lyapunov exponent $h_{1}$ has validity in all these cases. To explore this further, we have obtained results for $\nu =1.2,\: \epsilon =0.5$, as in Fig. 6, and with various values of $\omega $ and $b$. The results were found to be qualitatively similar to those with noise, with a simple relation between $b$ and $D$, showing that indeed the accumulated effect on $x$ at the time $y=y_{2}$ is the determining factor. That is, $\sigma _{x}\sim b/\omega $ or $b/\omega \sim D^{1/2}/\epsilon ^{1/4}$. In particular, the behaviors of $\left\langle |x_{n}|\right\rangle $, $\left\langle T_{n}\right\rangle $ and $h_{1}$ are similar. Thus, the similarity of the results for this deterministic non-autonomous system and the nonlinear stochastic system (\[eq:x-with-noise\]), (\[eq:y-with-noise\]) lends credence to the idea that $h_{1}$ as defined in Sec. 2 and used in Sec. 3.1 is the appropriate form of the Lyapunov exponent for the stochastic system. It is known that a system with periodic driving can be distinguished from an autonomous system or one with more complex temporal driving by means of nonlinear symbolic time series analysis [@NLSTSA]. This distinction is possible because of definite dips in the conditional entropy of symbolic time series when the sampling time equals the period $2\pi /\omega $ [@NLSTSA].
This condition distinguishes periodic driving from all other temporal driving (autonomous, quasi-periodic, colored noise, white noise), but does not distinguish the other possible varieties from each other. This topic is outside the scope of the present investigation.
Electronic circuit
==================
In order to test for noise stabilization in a physical system, we have constructed a circuit which integrates eqs. (13) and (14). In dimensionless integral form, these equations are $x(\tau )=x_{0}+\int _{\tau _{0}}^{\tau }{\left(({y-1})x+\hat{\xi }(\tau ')\right)}d\tau '$ and $y(\tau )=y_{0}+\int _{\tau _{0}}^{\tau }{\left(\epsilon y^{\nu }-x^{2}y\right)}d\tau '$, and the parameters used in the circuit were $\epsilon =0.5$ and $\nu =1.2$, as in Figs. 3, 5-9, 11, 12. The circuit design is shown in Fig. 18. The white noise, $\hat{\xi }(t)=\sqrt{2D}\xi (t)$, stabilized the oscillations, and Figs. 15-17 show that the circuit output agreed well with numerical solution of eqs. (13) and (14). We also observed the structural instability in these equations. See Appendix B for a description of the circuit design.
Properties of the added noise
-----------------------------
The noise was generated by creating random numbers and recording them to a `.wav` file to play back via the computer’s audio output at the standard rate of 44 kHz. This net process effectively filters the noise through a lowpass filter. When we sampled the noise using a digital oscilloscope, we found that the noise had a relatively constant spectrum to frequencies as high as 20 kHz. We autocorrelated the noise, and found that it was well represented by $$\left\langle V_{N}(t)V_{N}(t')\right\rangle =\frac{A_{0}}{\pi (t-t')}\sin {2\pi \frac{(t-t')}{T}}$$ with a period $T=50$ $\mu $s, which also represents a flat spectrum filtered by a 20 kHz low-pass filter. For time differences longer than $T/(2\pi )$, this autocorrelation function is a good approximation of $A_{0}\delta (t-t')$. By evaluating the autocorrelation function at $t=t'$, we can determine that $A_{0}=\frac{T}{2}\left\langle V_{N}^{2}\right\rangle $, so the diffusion rate is $\frac{A_{0}}{2}$, or $$D=\left\langle \left(\frac{V_{N}}{V_{2}}\right)^{2}\right\rangle \left(\frac{R_{2}}{R_{4}}\right)^{2}\frac{T}{4R_{1}C_{1}}$$ in terms of the scaled variables used in Appendix B. The theoretical minimum diffusion constant for our circuit parameters given by eq. (\[eq:dmin\_est\]) is well below the intrinsic noise in the circuit. This intrinsic noise is not well characterized and occurs in both the $x$ and $y$ variables. We use a large enough value of the noise amplitude so that the intrinsic noise contribution is negligible. We show in Figs. 16 and 17 the quantities $T_{n-1}$ vs $x_{n}$ and $T_{n}$ vs $x_{n}$, first obtained from the experiment and also by integrating numerically the differential equations with the same parameters, in particular $D=4.7\times 10^{-4}$. (These results are similar to those in Fig. 8, but with a different value of $D$.) The agreement is very good.
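The quoted autocorrelation can be checked against a brute-force inverse Fourier transform of an ideally band-limited flat spectrum. The cutoff $f_{c}=1/T=20$ kHz and $T=50$ $\mu$s follow the text; the spectral density $A_{0}=1$ is an arbitrary normalization:

```python
# A flat (two-sided) spectral density A0 cut off at f_c = 1/T has
# autocorrelation R(tau) = A0*sin(2*pi*tau/T)/(pi*tau), with
# R(0) = 2*A0/T, which gives A0 = (T/2) <V_N^2> as in the text.
import math

T = 50e-6          # s
fc = 1.0 / T       # 20 kHz cutoff
A0 = 1.0           # arbitrary spectral density

def R_quad(tau, n=20000):
    # midpoint-rule inverse Fourier transform of the flat spectrum
    df = 2.0 * fc / n
    return sum(A0 * math.cos(2.0 * math.pi * (-fc + (k + 0.5) * df) * tau) * df
               for k in range(n))

for tau in (1e-6, 1e-5, 4e-5):
    closed = A0 * math.sin(2.0 * math.pi * tau / T) / (math.pi * tau)
    print(f"tau = {tau:.0e}: quadrature {R_quad(tau):.1f}, closed {closed:.1f}")
print(f"R(0) = {R_quad(0.0):.1f}, 2*A0/T = {2.0 * A0 / T:.1f}")
```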
Offsets and symmetry breaking
-----------------------------
The primary difficulty in designing this circuit is that small DC offsets at the input of the integrators significantly change the differential equations. In particular, an offset in the input to the $y$-integrator either drives the $V_{y}$-output negative to create an error in the AD538 computational unit, or it leads to a stable limit cycle similar to that described in Sec. 4.4. We adjusted a small current ($\sim 0.45\mu $A) to minimize the $V_{y}$-offset, using the automatic reset circuit to recover whenever $V_{y}$ became negative. The reset kicks the circuit back into the vicinity of one of the unstable fixed points. The $x$-integrator naturally follows, bringing $V_{x}$ to a value near its fixed point. Without this reset, a negative value of $V_{y}$ leading to the failure of the AD538 causes the circuit to fall to a stable fixed point with a large negative value of $V_{y}$. An external trigger can also reset the circuit to values near its unstable fixed point.
Similarly, we also corrected the offset in the $x$-integrator by adding $\sim 0.2$ $\mu $A at the integrator input. We adjusted this value until the noise signal generated equal numbers of negative and positive $x$ pulses. After these adjustments, we observed the basic structure of the oscillations as they evolved away from the fixed point, in order to verify that the circuit waveforms were the same as the model calculations (see Fig. 15). The fact that such a simple adjustment can give results in agreement with the symmetric model is consistent with the extended concept of structural stability discussed at the end of Sec. 4.1. The results also show that the circuit is a sensitive detector of offsets.
Summary
=======
We have performed a study of a nonlinear stochastic ODE whose deterministic form has unstable spirals, leading to bursty behavior, with successive bursts growing in magnitude and with larger time intervals between them. This bursty behavior is due to the fact that after each burst, the orbit comes closer to the unstable manifold ($y-$axis) of a hyperbolic fixed point at the origin, and therefore travels farther along this unstable manifold before diverging from it to form the next burst.
In the presence of noise at a very small level, the bursts get stabilized in the sense of becoming limited in magnitude. The time interval between them is also limited, and the bursts can go to either positive or negative $x$. In many qualitative senses, the behavior appears like deterministic chaos.
This system has reflection symmetry in $x$; an offset $a$ in $x$ destroying this symmetry can lead to completely different behavior, depending on its magnitude relative to the noise. That is, the bursty behavior seen in the symmetric deterministic equations is not structurally stable. With noise and a small value of the offset $|a|<\sqrt{2D}$ ($D$ is the Brownian diffusion coefficient), the bounded bursty behavior persists, but with more bursts going to the right if $a>0$ (to the left if $a<0$). For larger offset $a\gtrsim \sqrt{2D}$, all bursts go to the right and essentially give a noisy form of the stable limit cycle. In this sense, the results in the presence of noise and $a=0$ are structurally stable.
We have considered modifications to the model allowing for saturation of $y$, because bursts cannot continue to grow without bound in a physical system. We have also considered modifications near the saddle at the origin, to give the saddle at the origin a positive eigenvalue. This change in the linear part of the flow near the saddle affects the time intervals between bursts, making their characteristic value much smaller, but does not affect the properties of the burst amplitudes, or the signs (in $x$) of the bursts.
We have described briefly results on a nonlinear circuit satisfying the same equations as the model. The circuit behaves similarly to the model. In particular, the circuit is very sensitive to the presence of an offset, and in practice the offset is adjusted to minimize the asymmetry of the signal. More details are presented in Ref. [@Conference-on-Expermental-Chaos] and in Appendix B.
The system (\[eq:x-with-noise\]), (\[eq:y-with-noise\]) and its generalizations in Sec. 4 are arguably the simplest realizations of systems in which a small noise level can limit the amplitude of bursts and lead to qualitatively distinct behavior. We have listed in the Introduction physical examples of systems in which this effect may be important. For the tokamak example, the results here should have an impact on low dimensional modeling of ELMs. Indeed, the observation of chaotic time dependence of ELM data suggests that a simple autonomous ODE model must be three-dimensional. However, tokamaks are known to have a broad spectrum of fluctuations (turbulence). If these fluctuations can be treated as uncorrelated noise, i.e. if their correlation time is much shorter than ELM time scales, it is justifiable to explore two-dimensional models with noise such as the models studied here.
Appendix A: Fokker-Planck Analysis {#section-1 .unnumbered}
==================================
The stochastic behavior of eq. (\[eq:linear-stochastic\]) is governed by the Fokker-Planck equation for the probability density function $f(x,t)$,$$\frac{\partial f}{\partial t}+\frac{\partial }{\partial x}\left(\gamma (t)xf\right)=\frac{\partial }{\partial x}\left(D\frac{\partial f}{\partial x}\right),\label{eq:fokker-planck}$$ where $D=\sigma ^{2}/2$ is the diffusion coefficient. For arbitrary $\gamma (t)$, eq. (\[eq:fokker-planck\]) has the exact solution$$f(x,t)=\sqrt{\frac{1}{2\pi \alpha (t)}}e^{-x^{2}/2\alpha (t)}$$ if the variance or temperature $\alpha (t)$ satisfies $$\dot{\alpha }=2\gamma (t)\alpha +2D.\label{eq:alpha-equation}$$ Eq. (\[eq:alpha-equation\]) has the solution $$\alpha (t)=2D\int _{-\infty }^{t}ds_{1}e^{2\int _{s_{1}}^{t}\gamma (s_{2})ds_{2}},$$ assuming $\alpha (t\rightarrow -\infty )=0$. Thus, $\alpha (t)$ is proportional to $D$, with a coefficient depending on $\gamma (t)$.
If $\gamma $ is approximately constant ($|\dot{\gamma }/\gamma ^{2}|\ll 1$) and negative, $\alpha $ approaches a slowly varying state with $\alpha (t)=D/|\gamma (t)|$, in which the inward motion due to the advective term in (\[eq:fokker-planck\]) balances diffusion and $\partial f/\partial t$ is negligible. This limit gives$$f(x,t)\rightarrow \sqrt{|\gamma |/2\pi D}e^{-|\gamma |x^{2}/2D}.\label{eq:FP-constant-gamma}$$ Another limit is recovered by neglecting $\gamma (t)$ in eq. (\[eq:fokker-planck\]), giving $$\alpha (t)=\alpha (t_{1})+2D(t-t_{1})=2Dt+2\alpha _{0},$$ where without loss of generality we have set the time at which $\gamma =0$ to $t=0$. This range, in which the advective term in eq. (\[eq:fokker-planck\]) is small, gives the purely diffusive random walk result $$f(x,t)\sim \frac{1}{\sqrt{4\pi (Dt+\alpha _{0})}}e^{-x^{2}/4(Dt+\alpha _{0})}.\label{eq:FP-gamma=0}$$ A third range has $\gamma $ positive with advection dominating diffusion. We find $$\alpha (t)=\alpha (t_{2})\exp \left(2\int _{t_{2}}^{t}\gamma (s)ds\right),\label{eq:alpha-late}$$ where $t_{2}$ is the time this range is entered, i.e. where $\gamma (t_{2})\alpha (t_{2})\sim D$. In this range the noise becomes negligible.
As an example relevant to the passage through $y=1$, consider the linear ramp $\gamma (t)=\dot{\gamma }_{0}t$ with $\dot{\gamma }_{0}>0$. Again taking $\alpha (t=-\infty )=0$, we find $$\alpha (t)=2De^{\dot{\gamma }_{0}t^{2}}\int _{-\infty }^{t}e^{-\dot{\gamma }_{0}s^{2}}ds.$$ In this example $\alpha (t)$ has slow growth for $t<t_{1}\equiv -1/\sqrt{\dot{\gamma }_{0}}$, diffusive increase for $t_{1}<t<t_{2}$, where $t_{2}=1/\sqrt{\dot{\gamma }_{0}}$, and exponential growth for $t>t_{2}$. The value of $\alpha (t)$ at $t=0$ (corresponding to $y=1$) is $\sigma _{x}^{2}\equiv \alpha (0)\sim D/\sqrt{\dot{\gamma }_{0}}$.
For application to eqs. (\[eq:x-with-noise\]), (\[eq:y-with-noise\]), consider $x$ small so that its equation is linear (when the second term on the right in (\[eq:y-with-noise\]) is negligible). We then note that if $\alpha $ is small for $y\approx 0$, then $\alpha (t)$ at the crossing of $y=1$ is proportional to $D/\sqrt{\dot{\gamma }_{0}}$. Since $\dot{\gamma }=\dot{y}\sim \epsilon $, we have $\alpha (y\approx 1)\sim D/\sqrt{\epsilon }$. After a diffusive stage, $\alpha $ continues to increase as in eq. (\[eq:alpha-late\]), with noise no longer playing a role. Thus, the nonlinear orbit for later times depends only on the noise accumulated by the time (here $t=t_{2}$) just after the orbits cross the throat at $y=1$; the value of $x$ at $y\approx y_{2}$, when noise last plays a role, is proportional to $\sqrt{\alpha }\propto D^{1/2}/\epsilon ^{1/4}$. See Fig. 10. Thus, in essence, the orbit from the crossing of $y=1$ with small $x$ out to the next crossing and back to near the origin is deterministic, and the noise plays its role only along the $y-$axis.
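The estimate $\sigma _{x}^{2}=\alpha (0)\sim D/\sqrt{\dot{\gamma }_{0}}$ can be checked by integrating the variance equation $\dot{\alpha }=2\gamma (t)\alpha +2D$ directly for the linear ramp $\gamma (t)=\dot{\gamma }_{0}t$. The values of $D$ and $\dot{\gamma }_{0}$ below are arbitrary:

```python
# Forward-Euler integration of d(alpha)/dt = 2*gamma(t)*alpha + 2D with
# gamma(t) = gdot0*t, starting from alpha = 0 at large negative t (where the
# omitted equilibrium value D/|gamma| is exponentially forgotten).  The
# closed-form value at t = 0 is alpha(0) = D*sqrt(pi/gdot0).
import math

D, gdot0 = 1e-2, 1.0
t, dt, alpha = -8.0, 1e-4, 0.0
while t < 0.0:
    alpha += (2.0 * gdot0 * t * alpha + 2.0 * D) * dt
    t += dt

exact = D * math.sqrt(math.pi / gdot0)
print(f"alpha(0) numerical = {alpha:.5f}, closed form = {exact:.5f}")
```

The agreement confirms the scaling $\sigma _{x}\sim D^{1/2}/\dot{\gamma }_{0}^{1/4}$ used in the text.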
Appendix B: Circuit Design {#appendix-b-circuit-design .unnumbered}
==========================
The design of our circuit is basically the same as reported in Ref. [@Conference-on-Expermental-Chaos], but we have adjusted our circuit parameters, and extended the analysis of the circuit behavior. For the sake of completeness, we have included all of the new circuit parameters in this appendix, as well as our analysis of the minimum noise amplitude necessary to keep the circuit elements from saturating.
The analog circuit consists of three basic sub-circuits: the $x$-integrator, the $y$-integrator, and the reset controller, as shown in Fig. 18.
The integrators use OPA 4228 operational amplifiers (low noise, 33 MHz bandwidth) with capacitive feedback (10 nF) to integrate their inputs. $V_{1}$ and $V_{2}$ are constant applied voltages, while $V_{x}$ and $V_{y}$ are time varying voltages, proportional to $x(\tau )$ and $y(\tau )$, respectively.
The input to the $y$-integrator uses an AD538 real-time computational unit (400 kHz bandwidth) to raise the $V_{y}$ voltage to a fractional power, $V_{y}(t)^{\nu -1}$, by taking its logarithm, scaling the result by $\nu -1$, and then exponentiating to generate $V_{1}(V_{y}(t)/V_{2})^{\nu -1}$. This output is then added into the output of an MPY634 precision multiplier (10 MHz bandwidth) that creates the ratio $V_{x}^{2}(t)/V_{2}$. A second MPY634 multiplies this combined signal by $V_{y}/V_{2}$ before it enters the integrator. We also use additional small adjustable current sources to eliminate offsets.
The input to the $x$-integrator is the sum of $V_{x}$, the noise source, and $V_{x}V_{y}/V_{2}$ formed by another MPY634. The net output signal of the entire circuit has a maximum frequency of 2 kHz, well within the bandwidth limit of all the components. This circuit does the following integrations:$$\begin{array}{l}
V_{x}(t)=V_{x}(t_{0})+\int _{t_{0}}^{t}{\left(\frac{R_{2}V_{y}(t')}{R_{3}V_{2}}-1\right)V_{x}(t')\frac{dt'}{R_{2}C_{2}}}+\int _{t_{0}}^{t}{V_{N}(t')\frac{dt'}{R_{4}C_{2}}},\\
V_{y}(t)=V_{y}(t_{0})+\int _{t_{0}}^{t}{\left(V_{1}\left(\frac{V_{y}(t')}{V_{2}}\right)^{\nu }-{\left(\frac{V_{x}(t')}{V_{2}}\right)}^{2}V_{y}(t')\right)\frac{dt'}{R_{1}C_{1}}},\end{array}$$ where the circuit components had the values listed in Table 1, and the parameter $\nu -1$ was set to 0.2 in the AD538 component by a voltage divider composed of a 2200 $\Omega $ resistor and a 560 $\Omega $ resistor. This dimensional form of the equations is related to the dimensionless form by defining $x$, $y$, $\tau $, $\epsilon $ and $\eta $ as:
$$\begin{array}{l}
y=\frac{R_{2}}{R_{3}}\frac{V_{y}}{V_{2}}\\
\tau =\frac{t}{R_{2}C_{2}}\\
\epsilon =\frac{R_{2}C_{2}}{R_{1}C_{1}}\frac{V_{1}}{V_{2}}\left(\frac{R_{3}}{R_{2}}\right)^{\nu -1}\\
x=\sqrt{\frac{R_{2}C_{2}}{R_{1}C_{1}}}\frac{V_{x}}{V_{2}}=\sqrt{\epsilon }\frac{V_{x}}{\sqrt{V_{1}V_{2}}}\left(\frac{R_{2}}{R_{3}}\right)^{\frac{\nu -1}{2}}\\
\eta =\sqrt{\frac{R_{2}C_{2}}{R_{1}C_{1}}}\frac{V_{N}}{V_{2}}\frac{R_{2}}{R_{4}}\end{array}$$
This leads to fixed points at: $$\begin{array}{l}
V_{y*}=\frac{R_{3}}{R_{2}}V_{2}\\
V_{x*}=\sqrt{V_{1}V_{2}}{\left(\frac{R_{3}}{R_{2}}\right)}^{\frac{\nu -1}{2}}\end{array}$$ Thus, a circuit design with a given value of $\epsilon $ has its fixed points and its voltage scaling determined by the choice of the ratio $R_{3}/R_{2}$. This value can be optimally set by forcing both the $x$ circuit and the $y$ circuit to reach saturation values on the same cycle. For the $\nu =1$ case, neglecting the logarithmic terms of the Hamiltonian $H(x,y)$ in eq. (\[eq:hamiltonian\]), the peak value of $y$ ($y_{p}$) and its following peak value of $x$ ($x_{p}$) are related by $x_{p}^{2}=2y_{p}$ if $H$ is large enough, i.e. for bursts with $x_{p},\: y_{p}$ large enough. These two peak values cannot require voltages in excess of $V_{2}$, or the multipliers will saturate. To optimize, we equate these peaks when they reach $V_{2}$; for the $\nu =1$ case this gives $\epsilon V_{2}/V_{1}=2R_{2}/R_{3}$ or, for our values of $V_{1}$ and $V_{2}$,$$\frac{R_{2}}{R_{3}}=\frac{\epsilon }{2}\frac{V_{2}}{V_{1}}=6.25.$$
This choice then implies maximum values of $x_{m}=\sqrt{2\left(\epsilon V_{2}/2V_{1}\right)}=3.53$, and $y_{m}=\epsilon V_{2}/2V_{1}=6.25$. These maximum values of $x$ and $y$ determine the minimum noise amplitude that must be present to keep the voltage peaks within the operating range of the multipliers. The logarithmic dependence observed in Fig. 11 can be approximated as $\left\langle x\right\rangle =(1/8)\ln \left(10^{5}/D\right)$, so that: $$D_{min}=10^{5}e^{-8x_{m}}=10^{5}e^{-8\sqrt{2\left(\frac{\epsilon V_{2}}{2V_{1}}\right)}}\sim 2\times 10^{-10}.\label{eq:dmin_est}$$ When the amplitudes are low enough to avoid clipping, the measured results are in agreement with those given in Sec. 3.2.
$$\begin{array}{cc}
V_{1} & 0.4V\\
V_{2} & 10V\\
R_{1} & {6.8k}\Omega \\
R_{2} & {122k}\Omega \\
R_{3} & {19.5k}\Omega \\
R_{4} & {67k}\Omega \\
C_{1} & 10nF\\
C_{2} & 10nF\end{array}$$
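As a quick numerical cross-check of the design relations above (a sketch we add for illustration, using the component values listed in Table 1 and $\nu -1=0.2$ as stated), the following snippet reproduces $\epsilon \approx 0.5$, $R_{2}/R_{3}\approx 6.25$, and the saturation values $x_{m}$, $y_{m}$ from the $\nu =1$ design rule:

```python
import math

# Component values from Table 1 and the stated settings
V1, V2 = 0.4, 10.0                              # volts
R1, R2, R3, R4 = 6.8e3, 122e3, 19.5e3, 67e3     # ohms
C1 = C2 = 10e-9                                 # farads
nu = 1.2                                        # nu - 1 = 0.2 (voltage divider)

# Dimensionless growth-rate ratio epsilon, from the definition above
eps = (R2*C2)/(R1*C1) * (V1/V2) * (R3/R2)**(nu - 1)

# Optimal resistor ratio and saturation values (nu = 1 design rule)
ratio = R2/R3                  # target: eps*V2/(2*V1) = 6.25
y_m = eps*V2/(2*V1)
x_m = math.sqrt(2*y_m)

print(f"eps = {eps:.3f}, R2/R3 = {ratio:.2f}, x_m = {x_m:.2f}, y_m = {y_m:.2f}")
```

The check confirms that the chosen resistor ratio matches the optimization condition to within component tolerances.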
ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============
We wish to thank C. Doering, J. Guckenheimer, E. Ott, and D. Sigeti for valuable discussions. This work was supported by the U. S. Department of Energy, under contract W-7405-ENG-36, Office of Science, Office of Fusion Energy Sciences, and by the NSF-DOE Program in Basic Plasma Physics under contract PHY-0317256.
---
abstract: |
Recent experiments on conduction between a semiconductor and a superconductor have revealed a variety of new mesoscopic phenomena. Here is a review of the present status of this rapidly developing field. A scattering theory is described which leads to a conductance formula analogous to Landauer’s formula in normal-state conduction. The theory is used to identify features in the conductance which can serve as “signatures” of phase-coherent Andreev reflection, i.e. for which the phase coherence of the electrons and the Andreev-reflected holes is essential. The applications of the theory include a quantum point contact (conductance quantization in multiples of $4e^2/h$), a quantum dot (non-Lorentzian conductance resonance), and quantum interference effects in a disordered normal-superconductor junction (enhanced weak-localization and reflectionless tunneling through a potential barrier). The final two sections deal with the effects of Andreev reflection on universal conductance fluctuations and on the shot noise.\
[Lectures at the Les Houches summer school, Session LXI, 1994, to be published in:]{} [*Mesoscopic Quantum Physics*]{}, E. Akkermans, G. Montambaux, and J.-L. Pichard, eds. (North-Holland, Amsterdam).
address: |
Instituut-Lorentz, University of Leiden\
P.O. Box 9506, 2300 RA Leiden, The Netherlands
author:
- 'C. W. J. Beenakker'
date: 'June, 1994'
title: |
Quantum transport in\
semiconductor–superconductor microjunctions
---
Introduction
============
At the interface between a normal metal and a superconductor, dissipative electrical current is converted into dissipationless supercurrent. The mechanism for this conversion was discovered thirty years ago by A. F. Andreev [@And64]: An electron excitation slightly above the Fermi level in the normal metal is reflected at the interface as a hole excitation slightly below the Fermi level (see fig. \[reflection\]). The missing charge of $2e$ is removed as a supercurrent. The reflected hole has (approximately) the same momentum as the incident electron. (The two momenta are precisely equal at the Fermi level.) The velocity of the hole is minus the velocity of the electron (cf. the notion of a hole as a “time-reversed” electron). This curious scattering process is known as retro-reflection or [*Andreev reflection*]{}.
The early theoretical work on the conductance of a normal-metal – superconductor (NS) junction treats the dynamics of the quasiparticle excitations [*semiclassically*]{}, as is appropriate for macroscopic junctions. Phase coherence of the electrons and the Andreev-reflected holes is ignored. Interest in “mesoscopic” NS junctions, where phase coherence plays an important role, is a recent development. Significant advances have been made during the last few years in our understanding of quantum interference effects due to phase-coherent Andreev reflection. Much of the motivation has come from the technological advances in the fabrication of a highly transparent contact between a superconducting film and the two-dimensional electron gas in a semiconductor heterostructure. These systems are ideal for the study of the interplay of Andreev reflection and the mesoscopic effects known to occur in semiconductor nanostructures [@Eer91], because of the large Fermi wavelength, large mean free path, and because of the possibility to confine the carriers electrostatically by means of gate electrodes. In this series of lectures we review the present status of this rapidly developing field of research.
To appreciate the importance of phase coherence in NS junctions, consider the resistance of a normal-metal wire (length $L$, mean free path $l$). This resistance increases monotonically with $L$. Now attach the wire to a superconductor via a tunnel barrier (transmission probability $\Gamma$). Then the resistance has a [*minimum*]{} when $L\simeq l/\Gamma$. The minimum disappears if the phase coherence between the electrons and holes is destroyed, by increasing the voltage or by applying a magnetic field. The resistance minimum is associated with the crossover from a $\Gamma^{-1}$ to a $\Gamma^{-2}$ dependence on the barrier transparency. The $\Gamma^{-2}$ dependence is as expected for tunneling into a superconductor, being a two-particle process. The $\Gamma^{-1}$ dependence is surprising. It is as if the Andreev-reflected hole can tunnel through the barrier without reflections. This so-called “reflectionless tunneling” requires relatively transparent NS interfaces, with $\Gamma\gtrsim l/L$. Semiconductor–superconductor junctions are convenient, since the Schottky barrier at the interface is much more transparent than a typical dielectric tunnel barrier. The technological effort is directed towards making the interface as transparent as possible. A nearly ideal NS interface ($\Gamma\simeq 1$) is required if one wishes to study how Andreev reflection modifies the quantum interference effects in the normal state. (For $\Gamma\ll 1$ these are obscured by the much larger reflectionless-tunneling effect.) The modifications can be quite remarkable. We discuss two examples.
The first is weak localization. In the normal state, weak localization can not be detected in the current–voltage ($I$–$V$) characteristic, but requires application of a magnetic field. The reason is that application of a voltage (in contrast to a magnetic field) does not break time-reversal symmetry. In an NS junction, however, weak localization can be detected in the $I$–$V$ characteristic, because application of a voltage destroys the phase coherence between electrons and holes. The result is a small dip in $\partial I/\partial
V$ versus $V$ around $V=0$ for $\Gamma\simeq 1$. On reducing $\Gamma$, the dip crosses over to a peak due to reflectionless tunneling. The peak is much larger than the dip, but the widths are approximately the same.
The second example is universal conductance fluctuations. In the normal state, the conductance fluctuates from sample to sample with a variance which is independent of sample size or degree of disorder. This is one aspect of the universality. The other aspect is that breaking of time-reversal symmetry (by a magnetic field) reduces the variance by precisely a factor of two. In an NS junction, the conductance fluctuations are also size and disorder independent. However, application of a time-reversal-symmetry breaking magnetic field has no effect on the magnitude.
These three phenomena, weak localization, reflectionless tunneling, and universal conductance fluctuations, are discussed in sections 4, 5, and 6, respectively. Sections 2 and 3 are devoted to a description of the theoretical method and to a few illustrative applications. The method is a scattering theory, which relates the conductance $G_{\rm NS}$ of the NS junction to the $N\times N$ transmission matrix $t$ in the normal state ($N$ is the number of transverse modes at the Fermi level). In the limit of zero temperature, zero voltage, and zero magnetic field, the relationship is $$G_{\rm NS}=\frac{4e^{2}}{h}\sum_{n=1}^{N}
\frac{T_{n}^{2}}{(2-T_{n})^{2}},\label{GNS1}$$ where the transmission eigenvalue $T_{n}$ is an eigenvalue of the matrix product $tt^{\dagger}$. The same numbers $T_{n}$ ($n=1,2,\ldots N$) determine the conductance $G_{\rm N}$ in the normal state, according to the Landauer formula $$G_{\rm N}=\frac{2e^{2}}{h}\sum_{n=1}^{N}T_{n}.\label{GN1}$$ The fact that the same transmission eigenvalues determine both $G_{\rm N}$ and $G_{\rm NS}$ means that one can use the same (numerical and analytical) techniques developed for quantum transport in the normal state. This is a substantial technical and conceptual simplification.
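Both formulas are easy to evaluate channel by channel. The sketch below (illustrative eigenvalues of our own choosing, in units of $e^{2}/h$) exhibits the two limits discussed above: a fully open channel contributes twice its normal-state value, while an almost closed channel contributes $\sim T_{n}^{2}$, the two-particle tunneling behavior.

```python
def gns_term(T):
    """Per-channel contribution to G_NS, eq. (GNS1), in units of e^2/h."""
    return 4*T**2/(2 - T)**2

def gn_term(T):
    """Per-channel contribution to G_N (Landauer), eq. (GN1), in units of e^2/h."""
    return 2*T

for T in (1.0, 0.5, 0.1):
    print(T, gns_term(T), gn_term(T))
# T = 1: gns_term = 4 = 2 * gn_term (Andreev doubling)
# T << 1: gns_term ~ T**2 (two-particle process), while gn_term ~ T
```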
The scattering theory can also be used for other transport properties, other than the conductance, both in the normal and the superconducting state. An example, discussed in section 7, is the shot noise due to the discreteness of the carriers. A doubling of the ratio of shot-noise power to current can occur in an NS junction, consistent with the notion of Cooper pair transport in the superconductor.
We conclude in section 8.
We restrict ourselves in this review (with one exception) to two-terminal geometries, with a single NS interface. Equation (\[GNS1\]), as well as the Landauer formula (\[GN1\]), only describes the two-terminal conductance. More complex multi-terminal geometries, involving several NS interfaces, have been studied theoretically by Lambert and coworkers [@Lam93a; @Lam93b], and experimentally by Petrashov et al. [@Pet93]. Since we focus on phase-coherent effects, most of our discussion concerns the linear-response regime of infinitesimal applied voltage. A recent review by Klapwijk provides more extensive coverage of the non-linear response at higher voltages [@Kla94]. The scattering approach has also been applied to the Josephson effect in SNS junctions [@Bee91], resulting in a formula for the supercurrent–phase relationship in terms of the transmission eigenvalues $T_{n}$ in the normal state. We do not discuss the Josephson effect here, but refer to ref. [@Shima] for a review of mesoscopic SNS junctions. Taken together, ref. [@Shima] and the present work describe a unified approach to mesoscopic superconductivity.
Scattering theory
=================
The model considered is illustrated in fig. \[diagram\]. It consists of a disordered normal region (hatched) adjacent to a superconductor (S). The disordered region may also contain a geometrical constriction or a tunnel barrier. To obtain a well-defined scattering problem we insert ideal (impurity-free) normal leads ${\rm N}_{1}$ and ${\rm N}_{2}$ to the left and right of the disordered region. The NS interface is located at $x=0$. We assume that the only scattering in the superconductor consists of Andreev reflection at the NS interface, i.e. we consider the case that the disorder is contained entirely within the normal region. The spatial separation of Andreev and normal scattering is the key simplification which allows us to relate the conductance directly to the normal-state scattering matrix. The model is directly applicable to a superconductor in the clean limit (mean free path in S large compared to the superconducting coherence length $\xi$), or to a point-contact junction (formed by a constriction which is narrow compared to $\xi$). In both cases the contribution of scattering within the superconductor to the junction resistance can be neglected [@Pip71].
The scattering states at energy $\varepsilon$ are eigenfunctions of the Bogoliubov-de Gennes (BdG) equation. This equation has the form of two Schrödinger equations for electron and hole wavefunctions ${\rm
u}(\vec{r})$ and ${\rm v}(\vec{r})$, coupled by the pair potential ${\mit\Delta}(\vec{r})$ [@deG66]: $$\begin{aligned}
\left(\begin{array}{cc} {\cal H}_{0}&{\mit\Delta}\\ {\mit\Delta}^{\ast}&-{\cal
H}_{0}^{\ast} \end{array}\right) \left(\begin{array}{c}{\rm u}\\{\rm
v}\end{array}\right)=\varepsilon \left(\begin{array}{c}{\rm u}\\{\rm
v}\end{array}\right) .\label{BdG1}\end{aligned}$$ Here ${\cal H}_{0}=(\vec{p}+e\vec{A})^{2}/2m+V-E_{\rm F}$ is the single-electron Hamiltonian, containing an electrostatic potential $V(\vec{r})$ and vector potential $\vec{A}(\vec{r})$. The excitation energy $\varepsilon$ is measured relative to the Fermi energy $E_{\rm F}$. To simplify construction of the scattering basis we assume that the magnetic field $\vec{B}$ (in the $z$-direction) vanishes outside the disordered region. One can then choose a gauge such that $\vec{A}\equiv 0$ in lead ${\rm N}_{2}$ and in S, while $A_{x},A_{z}=0$, $A_{y}=A_{1}\equiv{\rm constant}$ in lead ${\rm N}_{1}$.
The pair potential in the bulk of the superconductor ($x\gg\xi$) has amplitude ${\mit\Delta_{0}}$ and phase $\phi$. The spatial dependence of ${\mit\Delta}(\vec{r})$ near the NS interface is determined by the self-consistency relation [@deG66] $${\mit\Delta}(\vec{r})=|g(\vec{r})|\sum_{\varepsilon>0}{\rm
v}^{\ast}(\vec{r}){\rm u}(\vec{r})[1-2f(\varepsilon)],\label{selfconsist}$$ where the sum is over all states with positive eigenvalue, and $f(\varepsilon)=[1+\exp(\varepsilon/k_{\rm B}T)]^{-1}$ is the Fermi function. The coefficient $g$ is the interaction constant of the BCS theory of superconductivity. At an NS interface, $g$ drops abruptly (over atomic distances) to zero, in the assumed absence of any pairing interaction in the normal region. Therefore, ${\mit\Delta}(\vec{r})\equiv 0$ for $x<0$. At the superconducting side of the NS interface, ${\mit\Delta}(\vec{r})$ recovers its bulk value $\Delta_{0}{\rm e}^{{\rm i}\phi}$ only at some distance from the interface. We will neglect the suppression of ${\mit\Delta}(\vec{r})$ on approaching the NS interface, and use the step-function model $${\mit\Delta}(\vec{r})={\mit\Delta_{0}}{\rm e}^{{\rm
i}\phi}\theta(x).\label{stepfunction}$$ This model is also referred to in the literature as a “rigid boundary condition”. Likharev [@Lik79] discusses in detail the conditions for its validity: If the width $W$ of the NS junction is small compared to $\xi$, the non-uniformities in ${\mit\Delta}(\vec{r})$ extend only over a distance of order $W$ from the junction (because of “geometrical dilution” of the influence of the narrow junction in the wide superconductor). Since non-uniformities on length scales $\ll\xi$ do not affect the dynamics of the quasiparticles, these can be neglected and the step-function model holds. A point contact or microbridge belongs in general to this class of junctions. Alternatively, the step-function model holds also for a wide junction if the resistivity of the junction region is much larger than the resistivity of the bulk superconductor. This condition is formulated more precisely in ref. [@Lik79]. A semiconductor–superconductor junction is typically in this second category. Note that both cases are consistent with our assumption that the disorder is contained entirely within the normal region.
It is worth emphasizing that the absence of a pairing interaction in the normal region ($g(\vec{r})\equiv 0$ for $x<0$) implies a vanishing pair potential ${\mit\Delta}(\vec{r})$, according to eq. (\[selfconsist\]), but does not imply a vanishing order parameter $\Psi(\vec{r})$, which is given by $$\Psi(\vec{r})=\sum_{\varepsilon>0}{\rm v}^{\ast}(\vec{r}){\rm
u}(\vec{r})[1-2f(\varepsilon)].\label{orderparam}$$ Phase coherence between the electron and hole wave functions u and v leads to $\Psi(\vec{r})\neq 0$ for $x<0$. The term “proximity effect” can therefore mean two different things: One is the suppression of the pair potential ${\mit\Delta}$ at the superconducting side of the NS interface. This is a small effect which is neglected in the present work (and in most other papers in this field). The other is the induction of a non-zero order parameter $\Psi$ at the normal side of the NS interface. This effect is fully included here, even though $\Psi$ does not appear explicitly in the expressions which follow. The reason is that the order parameter quantifies the degree of phase coherence between electrons and holes, but does not itself affect the dynamics of the quasiparticles. (The BdG equation (\[BdG1\]) contains ${\mit\Delta}$ not $\Psi$.)
We now construct a basis for the scattering matrix ($s$-matrix). In the normal lead ${\rm N}_{2}$ the eigenfunctions of the BdG equation (\[BdG1\]) can be written in the form $$\begin{aligned}
&&\psi_{n,{\rm e}}^{\pm}({\rm N}_{2})=
{\renewcommand{\arraystretch}{0.6}
\left(\begin{array}{c}1\\ 0\end{array}\right)}
(k_{n}^{\rm e})^{-1/2}\,\Phi_{n}(y,z)
\exp(\pm{\rm i}k_{n}^{\rm e}x),\nonumber\\
&&\psi_{n,{\rm h}}^{\pm}({\rm N}_{2})=
{\renewcommand{\arraystretch}{0.6}
\left(\begin{array}{c}0\\ 1\end{array}\right)}
(k_{n}^{\rm h})^{-1/2}\,\Phi_{n}(y,z)
\exp(\pm{\rm i}k_{n}^{\rm h}x),\label{PsiN}\end{aligned}$$ where the wavenumbers $k_{n}^{\rm e}$ and $k_{n}^{\rm h}$ are given by $$\begin{aligned}
k_{n}^{\rm e,h}\equiv (2m/\hbar^{2})^{1/2}(E_{\rm F}
-E_{n}+\sigma^{\rm e,h}\varepsilon)^{1/2}, \label{keh}\end{aligned}$$ and we have defined $\sigma^{\rm e}\equiv 1$, $\sigma^{\rm h}\equiv -1$. The labels e and h indicate the electron or hole character of the wavefunction. The index $n$ labels the modes, $\Phi_{n}(y,z)$ is the transverse wavefunction of the $n$-th mode, and $E_{n}$ its threshold energy: $$\begin{aligned}
[(p_{y}^{2}+p_{z}^{2})/2m+V(y,z)]\Phi_{n}(y,z)= E_{n}\Phi_{n}(y,z).\label{Phin}\end{aligned}$$ The eigenfunction $\Phi_{n}$ is normalized to unity, $\int\! {\rm d}y\int\!
{\rm d}z \,|\Phi_{n}|^{2}=1$. With this normalization each wavefunction in the basis (\[PsiN\]) carries the same amount of quasiparticle current. The eigenfunctions in lead ${\rm N}_{1}$ are chosen similarly, but with an additional phase factor $\exp[-{\rm i}\sigma^{\rm e,h}(eA_{\rm 1}/\hbar)y]$ from the vector potential.
A wave incident on the disordered normal region is described in the basis (\[PsiN\]) by a vector of coefficients $$\begin{aligned}
c_{\rm N}^{\rm in}\equiv\bigl(
c_{\rm e}^{+}({\rm N}_{1}), c_{\rm e}^{-}({\rm N}_{2}),
c_{\rm h}^{-}({\rm N}_{1}), c_{\rm h}^{+}({\rm N}_{2})\bigr).\label{cNin}\end{aligned}$$ (The mode-index $n$ has been suppressed for simplicity of notation.) The reflected and transmitted wave has vector of coefficients $$\begin{aligned}
c_{\rm N}^{\rm out}\equiv\bigl(
c_{\rm e}^{-}({\rm N}_{1}), c_{\rm e}^{+}({\rm N}_{2}),
c_{\rm h}^{+}({\rm N}_{1}), c_{\rm h}^{-}({\rm N}_{2})\bigr).\label{cNout}\end{aligned}$$ The $s$-matrix $s_{\rm N}$ of the normal region relates these two vectors, $$c_{\rm N}^{\rm out}=s_{\rm N}^{\vphantom{{\rm in}}}c_{\rm N}^{\rm
in}.\label{sNdef}$$ Because the normal region does not couple electrons and holes, this matrix has the block-diagonal form $$\begin{aligned}
s_{\rm N}(\varepsilon)=
{\renewcommand{\arraystretch}{0.8}
\left(\begin{array}{cc}
s_{0}(\varepsilon)&0\\
\!0&s_{0}(-\varepsilon)^{\ast}
\end{array}\right)},\,
s_{\rm 0}\equiv{\renewcommand{\arraystretch}{0.6}
\left(\begin{array}{cc}
r_{11}&t_{12}\\t_{21}&r_{22}
\end{array}\right)}.
\label{sN}\end{aligned}$$ Here $s_{0}$ is the unitary $s$-matrix associated with the single-electron Hamiltonian ${\cal H}_{0}$. The reflection and transmission matrices $r(\varepsilon)$ and $t(\varepsilon)$ are $N\times N$ matrices, $N(\varepsilon)$ being the number of propagating modes at energy $\varepsilon$. (We assume for simplicity that the number of modes in leads ${\rm N}_{1}$ and ${\rm N}_{2}$ is the same.) The matrix $s_{0}$ is unitary ($s_{0}^{\dagger}s_{0}^{\vphantom{\dagger}}=1$) and satisfies the symmetry relation $s_{0}(\varepsilon,B)_{ij}=s_{0}(\varepsilon,-B)_{ji}$.
For energies $0<\varepsilon<{\mit\Delta_{0}}$ there are no propagating modes in the superconductor. We can then define an $s$-matrix for Andreev reflection at the NS interface which relates the vector of coefficients $\bigl( c_{\rm
e}^{-}({\rm N}_{2}), c_{\rm h}^{+}({\rm N}_{2})\bigr)$ to $\bigl( c_{\rm
e}^{+}({\rm N}_{2}), c_{\rm h}^{-}({\rm N}_{2})\bigr)$. The elements of this $s$-matrix can be obtained by matching the wavefunctions (\[PsiN\]) at $x=0$ to the decaying solutions in S of the BdG equation. If terms of order ${\mit\Delta_{0}}/E_{\rm F}$ are neglected (the socalled Andreev approximation [@And64]), the result is simply $$\begin{aligned}
&&c_{\rm e}^{-}({\rm N}_{2})= \alpha\,{\rm e}^{{\rm i}\phi}c_{\rm h}^{-}({\rm
N}_{2}),\nonumber\\
&&c_{\rm h}^{+}({\rm N}_{2})= \alpha\,{\rm e}^{-{\rm i}\phi}c_{\rm e}^{+}({\rm
N}_{2}),\label{sA}\end{aligned}$$ where $\alpha\equiv\exp[-{\rm i}\arccos(\varepsilon/{\mit\Delta_{0}})]$. Andreev reflection transforms an electron mode into a hole mode, without change of mode index. The transformation is accompanied by a phase shift, which consists of two parts:
1. A phase shift $-\arccos(\varepsilon/{\mit\Delta_{0}})$ due to the penetration of the wavefunction into the superconductor.
2. A phase shift equal to plus or minus the phase of the pair potential in the superconductor ([*plus*]{} for reflection from hole to electron, [*minus*]{} for the reverse process).
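A one-line numerical check of the Andreev amplitude (a sketch we add; the equivalent algebraic form in the last comment follows from $\exp(-{\rm i}\theta)=\cos\theta-{\rm i}\sin\theta$): for $0\le\varepsilon<{\mit\Delta_{0}}$ the amplitude $\alpha$ lies on the unit circle, and at the Fermi level $\alpha=-{\rm i}$, the value used later at $\varepsilon=0$.

```python
import cmath, math

def alpha(eps, delta0=1.0):
    # Andreev amplitude for 0 <= eps < delta0: alpha = exp(-i arccos(eps/delta0))
    return cmath.exp(-1j*math.acos(eps/delta0))

# Unit modulus: below the gap the quasiparticle is reflected with probability one
print(abs(alpha(0.3)))      # 1 up to rounding
# At the Fermi level: alpha = exp(-i pi/2) = -i
print(alpha(0.0))
# Equivalent algebraic form: alpha = eps/delta0 - i*sqrt(1 - (eps/delta0)**2)
```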
We can combine the $2N$ linear relations (\[sA\]) with the $4N$ relations (\[sNdef\]) to obtain a set of $2N$ linear relations between the incident wave in lead ${\rm N}_{1}$ and the reflected wave in the same lead: $$\begin{aligned}
&&c_{\rm e}^{-}({\rm N}_{1})= s_{\rm ee}^{\vphantom{+}}c_{\rm e}^{+}({\rm
N}_{1})+s_{\rm eh}^{\vphantom{+}}c_{\rm h}^{-}({\rm N}_{1}),\nonumber\\
&&c_{\rm h}^{+}({\rm N}_{1})= s_{\rm he}^{\vphantom{+}}c_{\rm e}^{+}({\rm
N}_{1})+s_{\rm hh}^{\vphantom{+}}c_{\rm h}^{-}({\rm N}_{1}).\label{sdef}\end{aligned}$$ The four $N\times N$ matrices $s_{\rm ee}$, $s_{\rm hh}$, $s_{\rm eh}$, and $s_{\rm he}$ form together the scattering matrix $s$ of the whole system for energies $0<\varepsilon<{\mit\Delta_{0}}$. An electron incident in lead ${\rm
N}_{1}$ is reflected either as an electron (with scattering amplitudes $s_{\rm
ee}$) or as a hole (with scattering amplitudes $s_{\rm he}$). Similarly, the matrices $s_{\rm hh}$ and $s_{\rm eh}$ contain the scattering amplitudes for reflection of a hole as a hole or as an electron. After some algebra we find for these matrices the expressions $$\begin{aligned}
s_{\rm
ee}^{\vphantom{\ast}}(\varepsilon)&=&r_{11}^{\vphantom{\ast}}(\varepsilon)+
\alpha^{2}t_{12}^{\vphantom{\ast}}(\varepsilon)r_{22}^{\ast}
(-\varepsilon)M_{\rm e}^{\vphantom{\ast}}t_{21}^{\vphantom{\ast}}(\varepsilon),
\label{see}\\
s_{\rm hh}^{\vphantom{\ast}}(\varepsilon)&=&r_{11}^{\ast}(-\varepsilon)+
\alpha^{2}t_{12}^{\ast}(-\varepsilon)r_{22}^{\vphantom{\ast}}
(\varepsilon)M_{\rm
h}^{\vphantom{\ast}}t_{21}^{\ast}(-\varepsilon),\label{shh}\\
s_{\rm eh}^{\vphantom{\ast}}(\varepsilon)&=&\alpha\,{\rm e}^{{\rm
i}\phi}t_{12}^{\vphantom{\ast}}(\varepsilon)M_{\rm
h}^{\vphantom{\ast}}t_{21}^{\ast}(-\varepsilon),\label{seh}\\
s_{\rm he}^{\vphantom{\ast}}(\varepsilon)&=&\alpha\,{\rm e}^{-{\rm
i}\phi}t_{12}^{\ast}(-\varepsilon)M_{\rm
e}^{\vphantom{\ast}}t_{21}^{\vphantom{\ast}}(\varepsilon), \label{she}\end{aligned}$$ where we have defined the matrices $$\begin{aligned}
&&M_{\rm e}^{\vphantom{\ast}}\equiv[1-\alpha^{2}
r_{22}^{\vphantom{\ast}}(\varepsilon)r_{22}^{\ast}(-\varepsilon)]^{-1},
\nonumber\\
&&M_{\rm h}^{\vphantom{\ast}}\equiv[1-\alpha^{2}
r_{22}^{\ast}(-\varepsilon)r_{22}^{\vphantom{\ast}}(\varepsilon)]^{-1}.
\label{MeMh}\end{aligned}$$ One can verify that the $s$-matrix constructed from these four sub-matrices satisfies unitarity ($s^{\dagger}s=1$) and the symmetry relation $s(\varepsilon,B,\phi)_{ij}=s(\varepsilon,-B,-\phi)_{ji}$, as required by quasiparticle-current conservation and by time-reversal invariance, respectively.
For the linear-response conductance $G_{\rm NS}$ of the NS junction at zero temperature we only need the $s$-matrix at the Fermi level, i.e. at $\varepsilon=0$. We restrict ourselves to this case and omit the argument $\varepsilon$ in what follows. We apply the general formula [@Blo82; @Lam91; @Tak92a] $$G_{\rm NS}=\frac{2e^{2}}{h}{\rm Tr}\,(1-s_{\rm ee}^{\vphantom{\dagger}}s_{\rm
ee}^{\dagger}+s_{\rm he}^{\vphantom{\dagger}}s_{\rm
he}^{\dagger})=\frac{4e^{2}}{h}{\rm Tr}\,s_{\rm he}^{\vphantom{\dagger}}s_{\rm
he}^{\dagger}.\label{Gdef}$$ The second equality follows from unitarity of $s$, which implies $1-s_{\rm
ee}^{\vphantom{\dagger}}s_{\rm ee}^{\dagger}=s_{\rm
eh}^{\vphantom{\dagger}}s_{\rm eh}^{\dagger}=(s_{\rm ee}^{\dagger})^{-1}s_{\rm
he}^{\dagger}s_{\rm he}^{\vphantom{\dagger}}s_{\rm ee}^{\dagger}$, so that ${\rm Tr}\,(1-s_{\rm ee}^{\vphantom{\dagger}}s_{\rm ee}^{\dagger})={\rm
Tr}\,s_{\rm he}^{\vphantom{\dagger}}s_{\rm he}^{\dagger}$. We now substitute eq. (\[she\]) for $\varepsilon=0$ ($\alpha=-{\rm i}$) into eq. (\[Gdef\]), and obtain the expression $$\begin{aligned}
G_{\rm NS}=\frac{4e^{2}}{h}{\rm
Tr}\,t_{12}^{\dagger}t_{12}^{\vphantom{\dagger}}
(1+r_{22}^{\ast}r_{22}^{\vphantom{\ast}})^{-1}t_{21}^{\ast}t_{21}^{\rm
T}(1+r_{22}^{\dagger}r_{22}^{\rm T})^{-1},\label{key}\end{aligned}$$ where $M^{\rm T}\equiv (M^{\ast})^{\dagger}$ denotes the transpose of a matrix. The advantage of eq. (\[key\]) over eq. (\[Gdef\]) is that the former can be evaluated by using standard techniques developed for quantum transport in the normal state, since the only input is the normal-state scattering matrix. The effects of multiple Andreev reflections are fully incorporated by the two matrix inversions in eq. (\[key\]).
In the absence of a magnetic field the general formula (\[key\]) simplifies considerably. Since the $s$-matrix $s_{0}$ of the normal region is symmetric for $B=0$, one has $r_{22}^{\vphantom{\rm T}}=r_{22}^{\rm T}$ and $t_{12}^{\vphantom{\rm T}}=t_{21}^{\rm T}$. Equation (\[key\]) then takes the form $$\begin{aligned}
G_{\rm NS}&=&\frac{4e^{2}}{h}{\rm
Tr}\,t_{12}^{\dagger}t_{12}^{\vphantom{\dagger}}
(1+r_{22}^{\dagger}r_{22}^{\vphantom{\ast}})^{-1}
t_{12}^{\dagger}t_{12}^{\vphantom{\rm T}}(1+r_{22}^{\dagger}
r_{22}^{\vphantom{\rm T}})^{-1}\nonumber\\
&=&\frac{4e^{2}}{h}{\rm
Tr}\left(t_{12}^{\dagger}t_{12}^{\vphantom{\dagger}}
(2-t_{12}^{\dagger}t_{12}^{\vphantom{\dagger}})^{-1}\right)^{2}.
\label{GBzero}\end{aligned}$$ In the second equality we have used the unitarity relation $r_{22}^{\dagger}r_{22}^{\vphantom{\ast}}+
t_{12}^{\dagger}t_{12}^{\vphantom{\dagger}}=1$. The trace (\[GBzero\]) depends only on the eigenvalues of the Hermitian matrix $t_{12}^{\dagger}t_{12}^{\vphantom{\dagger}}$. We denote these eigenvalues by $T_{n}$ ($n=1,2,\ldots N$). Since the matrices $t_{12}^{\dagger}t_{12}^{\vphantom{\dagger}}$, $t_{12}^{\vphantom{\dagger}}t_{12}^{\dagger}$, $t_{21}^{\dagger}t_{21}^{\vphantom{\dagger}}$, and $t_{21}^{\vphantom{\dagger}}t_{21}^{\dagger}$ all have the same set of eigenvalues, we can omit the indices and write simply $tt^{\dagger}$. We obtain the following relation between the conductance and the transmission eigenvalues: $$G_{\rm NS}=\frac{4e^{2}}{h}\sum_{n=1}^{N}\frac{T_{n}^{2}}{(2-T_{n})^{2}}.
\label{keyzero}$$ This is the central result of ref. [@Bee92].
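Equation (\[keyzero\]) can be verified numerically against the block construction of the previous section. The sketch below (our own random-matrix check; the construction $s_{0}=UU^{\rm T}$, which yields a symmetric unitary matrix as required for $B=0$, is our choice and not from the text) builds $s_{\rm he}$ from eq. (\[she\]) at $\varepsilon=0$ and compares ${\rm Tr}\,s_{\rm he}^{\vphantom{\dagger}}s_{\rm he}^{\dagger}$ with $\sum_{n}T_{n}^{2}/(2-T_{n})^{2}$.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 4  # propagating modes per lead

# Random symmetric unitary s0 (B = 0): s0 = U U^T with U unitary
A = rng.normal(size=(2*N, 2*N)) + 1j*rng.normal(size=(2*N, 2*N))
U, _ = np.linalg.qr(A)
s0 = U @ U.T
r11, t12 = s0[:N, :N], s0[:N, N:]
t21, r22 = s0[N:, :N], s0[N:, N:]

alpha = -1j    # Andreev amplitude at the Fermi level, eps = 0
phi = 0.7      # arbitrary superconducting phase (drops out of |s_he|^2)
Me = np.linalg.inv(np.eye(N) - alpha**2 * (r22 @ r22.conj()))
s_he = alpha * np.exp(-1j*phi) * (t12.conj() @ Me @ t21)   # eq. (she)

g_she = 4*np.real(np.trace(s_he @ s_he.conj().T))   # eq. (Gdef), units e^2/h
T = np.linalg.eigvalsh(t12.conj().T @ t12)          # transmission eigenvalues
g_eig = 4*np.sum(T**2/(2 - T)**2)                   # eq. (keyzero)
print(g_she, g_eig)
```

The two values agree to machine precision for any symmetric unitary $s_{0}$, as the derivation above requires.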
Equation (\[keyzero\]) holds for an arbitrary transmission matrix $t$, i.e. for arbitrary disorder potential. It is the [*multi-channel*]{} generalization of a formula first obtained by Blonder, Tinkham, and Klapwijk [@Blo82] (and subsequently by Shelankov [@She84] and by Zaĭtsev [@Zai84]) for the [*single-channel*]{} case (appropriate for a geometry such as a planar tunnel barrier, where the different scattering channels are uncoupled). A formula of similar generality for the normal-metal conductance $G_{\rm N}$ is the multi-channel Landauer formula $$G_{\rm N}=\frac{2e^{2}}{h}{\rm Tr}\,tt^{\dagger}\equiv\frac{2e^{2}}{h}
\sum_{n=1}^{N}T_{n}.\label{Landauer}$$ In contrast to the Landauer formula, eq. (\[keyzero\]) for the conductance of an NS junction is a [*non-linear*]{} function of the transmission eigenvalues $T_{n}$. When dealing with a non-linear multi-channel formula as eq. (\[keyzero\]), it is of importance to distinguish between the transmission eigenvalue $T_{n}$ and the modal transmission probability ${\cal
T}_{n}\equiv\sum_{m=1}^{N}|t_{nm}|^{2}$. The former is an eigenvalue of the matrix $tt^{\dagger}$, the latter a diagonal element of that matrix. The Landauer formula (\[Landauer\]) can be written equivalently as a sum over eigenvalues or as sum over modal transmission probabilities: $$\begin{aligned}
\frac{h}{2e^{2}}G_{\rm N}=\sum_{n=1}^{N}T_{n}\equiv\sum_{n=1}^{N}{\cal
T}_{n}.\label{Landauer2}\end{aligned}$$ This equivalence is of importance for (numerical) evaluations of the Landauer formula, in which one calculates the probability that an electron injected in mode $n$ is transmitted, and then obtains the conductance by summing over all modes. The non-linear scattering formula (\[keyzero\]), in contrast, can not be written in terms of modal transmission probabilities alone: The off-diagonal elements of $tt^{\dagger}$ contribute to $G_{\rm NS}$ in an essential way. Previous attempts to generalize the one-dimensional Blonder-Tinkham-Klapwijk formula to more dimensions by summing over modal transmission probabilities (or, equivalently, by angular averaging) were not successful precisely because only the diagonal elements of $tt^{\dagger}$ were considered.
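To make the distinction concrete, consider a two-channel example (our own illustrative construction): eigenvalues $T_{1}=1$, $T_{2}=0.2$ mixed by an orthogonal matrix give modal transmission probabilities ${\cal T}_{1}={\cal T}_{2}=0.6$. The linear (Landauer) sums agree, but evaluating eq. (\[keyzero\]) with modal probabilities instead of eigenvalues gives the wrong answer.

```python
import numpy as np

# t with transmission eigenvalues 1 and 0.2, mixed by an orthogonal matrix
V = np.array([[1.0, 1.0], [1.0, -1.0]])/np.sqrt(2)
t = V @ np.diag([1.0, np.sqrt(0.2)])

T_eig = np.linalg.eigvalsh(t.conj().T @ t)   # eigenvalues: 0.2, 1.0
T_mod = (np.abs(t)**2).sum(axis=1)           # modal sums |t_nm|^2: 0.6, 0.6

g_lin_eig, g_lin_mod = 2*T_eig.sum(), 2*T_mod.sum()   # Landauer: identical
g_ns_eig = 4*np.sum(T_eig**2/(2 - T_eig)**2)          # correct G_NS
g_ns_mod = 4*np.sum(T_mod**2/(2 - T_mod)**2)          # modal average: wrong
print(g_lin_eig, g_lin_mod, g_ns_eig, g_ns_mod)
```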
Three simple applications
=========================
To illustrate the power and generality of the scattering formula (\[keyzero\]), we discuss in this section three simple applications to the ballistic, resonant-tunneling, and diffusive transport regimes [@Bee92].
Quantum point contact
---------------------
Consider first the case that the normal metal consists of a ballistic constriction with a normal-state conductance quantized at $G_{\rm
N}=2N_{0}e^{2}/h$ (a [*quantum point contact*]{}). The integer $N_{0}$ is the number of occupied one-dimensional subbands (per spin direction) in the constriction, or alternatively the number of transverse modes at the Fermi level which can propagate through the constriction. Note that $N_{0}\ll N$. An “ideal” quantum point contact is characterized by a special set of transmission eigenvalues, which are equal to either zero or one [@Eer91]: $$\begin{aligned}
T_{n}=\left\{\begin{array}{ll}
1 &\;{\rm if}\; 1\leq n\leq N_{0},\\
0 &\;{\rm if}\; N_{0}<n\leq N,
\end{array}\right.\label{TQPC}\end{aligned}$$ where the eigenvalues have been ordered from large to small. We emphasize that eq. (\[TQPC\]) does not imply that the transport through the constriction is adiabatic. In the case of adiabatic transport, the transmission eigenvalue $T_{n}$ is equal to the modal transmission probability ${\cal T}_{n}$. In the absence of adiabaticity there is no direct relation between $T_{n}$ and ${\cal
T}_{n}$. Substitution of eq. (\[TQPC\]) into eq. (\[keyzero\]) yields $$\begin{aligned}
G_{\rm NS}=\frac{4e^{2}}{h}N_{0}.\label{GQPC}\end{aligned}$$ The conductance of the NS junction is quantized in units of $4e^{2}/h$. This is [*twice*]{} the conductance quantum in the normal state, due to the current-doubling effect of Andreev reflection [@Hou91].
In the classical limit $N_{0}\rightarrow\infty$ we recover the well-known result $G_{\rm NS}=2G_{\rm N}$ for a [*classical*]{} ballistic point contact [@Blo82; @She84; @Zai80]. In the quantum regime, however, the simple factor-of-two enhancement only holds on the conductance plateaus, where eq. (\[TQPC\]) applies, and not in the transition region between two successive plateaus of quantized conductance. To illustrate this, we compare in fig. \[sqpcn\] the conductances $G_{\rm NS}$ and $2G_{\rm N}$ for Büttiker’s model [@But90] of a saddle-point constriction in a two-dimensional electron gas. Appreciable differences appear in the transition region, where $G_{\rm
NS}$ lies below twice $G_{\rm N}$. This is actually a rigorous inequality, which follows from eqs. (\[keyzero\]) and (\[Landauer\]) for [*arbitrary*]{} transmission matrix: $$\begin{aligned}
G_{\rm NS}\leq 2G_{\rm N},\;\forall\, t.\label{Gleg2G0}\end{aligned}$$
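Both the quantization (\[GQPC\]) and the inequality (\[Gleg2G0\]) can be checked directly from the eigenvalue form of eq. (\[keyzero\]); the inequality holds termwise because $T^{2}/(2-T)^{2}\leq T$ for $0\leq T\leq 1$. A minimal numerical sketch (our own illustration):

```python
import numpy as np

def g_ns(T):  # NS conductance in units of e²/h
    T = np.asarray(T, dtype=float)
    return 4 * np.sum(T**2 / (2 - T)**2)

def g_n(T):   # Landauer conductance in units of e²/h
    return 2 * np.sum(np.asarray(T, dtype=float))

# ideal quantum point contact: N0 open channels, the rest closed
N, N0 = 20, 7
T_qpc = np.array([1.0] * N0 + [0.0] * (N - N0))
assert np.isclose(g_ns(T_qpc), 4 * N0)           # plateaus in units of 4e²/h
assert np.isclose(g_ns(T_qpc), 2 * g_n(T_qpc))   # factor-of-two enhancement

# G_NS <= 2 G_N for arbitrary eigenvalues, since T²/(2-T)² <= T on [0,1]
rng = np.random.default_rng(2)
for _ in range(100):
    T = rng.uniform(0, 1, size=N)
    assert g_ns(T) <= 2 * g_n(T) + 1e-12
```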
Quantum dot
-----------
Consider next a small confined region (of dimensions comparable to the Fermi wavelength), which is weakly coupled by tunnel barriers to two electron reservoirs. We assume that transport through this [*quantum dot*]{} occurs via resonant tunneling through a single bound state. Let $\varepsilon_{\rm
res}$ be the energy of the resonant level, relative to the Fermi level in the reservoirs, and let $\gamma_{1}/\hbar$ and $\gamma_{2}/\hbar$ be the tunnel rates through the two barriers. We denote $\gamma\equiv\gamma_{1}+\gamma_{2}$. If $\gamma\ll\Delta E$ (with $\Delta E$ the level spacing in the quantum dot), the conductance $G_{\rm N}$ in the case of non-interacting electrons has the form $$\begin{aligned}
\frac{h}{2e^{2}}G_{\rm N}=\frac{\gamma_{1}\gamma_{2}}{\varepsilon_{\rm
res}^{2}+{{\textstyle\frac{1}{4}}}\gamma^{2}}\equiv T_{\rm BW},\label{G0BW}\end{aligned}$$ with $T_{\rm BW}$ the Breit-Wigner transmission probability at the Fermi level. The normal-state transmission matrix $t_{12}(\varepsilon)$ which yields this conductance has matrix elements [@But88] $$\begin{aligned}
t_{12}(\varepsilon)=U_{1}\tau(\varepsilon)U_{2},
\;\tau(\varepsilon)_{nm}\equiv\frac{\sqrt{\gamma_{1n}\gamma_{2m}}}
{\varepsilon-\varepsilon_{\rm res}+{\textstyle\frac{1}{2}}{\rm
i}\gamma},\label{sBW}\end{aligned}$$ where $\sum_{n}\gamma_{1n}\equiv\gamma_{1}$, $\sum_{n}\gamma_{2n}\equiv\gamma_{2}$, and $U_{1}$, $U_{2}$ are two unitary matrices (which need not be further specified).
Let us now investigate how the conductance (\[G0BW\]) is modified if one of the two reservoirs is in the superconducting state. The transmission matrix product $t_{12}^{\vphantom{\dagger}}t_{12}^{\dagger}$ (evaluated at the Fermi level $\varepsilon=0$) following from eq. (\[sBW\]) is $$\begin{aligned}
t_{12}^{\vphantom{\dagger}}t_{12}^{\dagger}=
U_{1}^{\vphantom{\dagger}}MU_{1}^{\dagger},
\;M_{nm}^{\vphantom{\dagger}}\equiv\frac{T_{\rm
BW}}{\gamma_{1}}\sqrt{\gamma_{1n}\gamma_{1m}}.\label{ttBW}\end{aligned}$$ Its eigenvalues are $$\begin{aligned}
T_{n}=\left\{\begin{array}{ll}
T_{\rm BW} &\;{\rm if}\; n=1,\\
0 &\;{\rm if}\; 2\leq n\leq N.
\end{array}\right.\label{TBW}\end{aligned}$$ Substitution into eq. (\[keyzero\]) yields the conductance $$G_{\rm NS}=\frac{4e^{2}}{h}\left( \frac{T_{\rm BW}}{2-T_{\rm BW}}\right) ^{2}
=\frac{4e^{2}}{h}\left(\frac{2\gamma_{1}\gamma_{2}}{4\varepsilon_{\rm
res}^{2}+\gamma_{1}^{2}+\gamma_{2}^{2}}\right) ^{2}.\label{GBW}$$ The conductance on resonance ($\varepsilon_{\rm res}=0$) is maximal in the case of equal tunnel rates ($\gamma_{1}=\gamma_{2}$), and is then equal to $4e^{2}/h$ — independent of $\gamma$. The lineshape for this case is shown in fig. \[sqdotn\] (solid curve). It differs substantially from the Lorentzian lineshape (\[G0BW\]) of the Breit-Wigner formula (dotted curve).
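The second equality in eq. (\[GBW\]) rests on the identity $T_{\rm BW}/(2-T_{\rm BW})=2\gamma_{1}\gamma_{2}/(4\varepsilon_{\rm res}^{2}+\gamma_{1}^{2}+\gamma_{2}^{2})$, which follows from $\gamma^{2}/2-\gamma_{1}\gamma_{2}=(\gamma_{1}^{2}+\gamma_{2}^{2})/2$. A quick numerical check (parameter values are arbitrary illustrations):

```python
import numpy as np

gamma1, gamma2 = 0.3, 0.7      # tunnel rates (illustrative values)
gamma = gamma1 + gamma2
eps = np.linspace(-5, 5, 201)  # resonance energy relative to the Fermi level

T_bw = gamma1 * gamma2 / (eps**2 + gamma**2 / 4)  # Breit-Wigner, eq. (G0BW)
lhs = T_bw / (2 - T_bw)
rhs = 2 * gamma1 * gamma2 / (4 * eps**2 + gamma1**2 + gamma2**2)
assert np.allclose(lhs, rhs)   # the identity behind eq. (GBW)

# on resonance with equal rates, T_BW = 1 and the NS conductance reaches
# 4e²/h independently of gamma
g1 = g2 = 0.123
t_res = g1 * g2 / ((g1 + g2)**2 / 4)              # equals 1 for g1 = g2
assert np.isclose(4 * (t_res / (2 - t_res))**2, 4.0)
```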
The amplitude and lineshape of the conductance resonance (\[GBW\]) do not depend on the relative magnitude of the resonance width $\gamma$ and the superconducting energy gap ${\mit\Delta_{0}}$. This is in contrast to the supercurrent resonance in a superconductor — quantum dot — superconductor Josephson junction, which depends sensitively on the ratio $\gamma/{\mit\Delta_{0}}$ [@Gla89; @SQUID91]. The difference can be traced to the fact that the conductance (in the zero-temperature, zero-voltage limit) is strictly a Fermi-level property, whereas all states within ${\mit\Delta_{0}}$ of the Fermi level contribute to the Josephson effect. (For an extension of eq. (\[GBW\]) to finite voltages, see ref. [@Khl93].) Since we have assumed non-interacting quasiparticles, the above results apply to a quantum dot with a small charging energy $U$ for double occupancy of the resonant state. Devyatov and Kupriyanov [@Dev90], and Hekking et al. [@Hek93a], have studied the influence of Coulomb repulsion on resonant tunneling through an NS junction, in the temperature regime $k_{\rm B}T\gg\gamma$ where the resonance is thermally broadened. The extension to the low-temperature regime of an intrinsically broadened resonance remains to be investigated.
Disordered junction
-------------------
We now turn to the regime of diffusive transport through a disordered point contact or microbridge between a normal and a superconducting reservoir. The model considered is that of an NS junction containing a disordered normal region of length $L$ much greater than the mean free path $l$ for elastic impurity scattering, but much smaller than the localization length $Nl$. We calculate the average conductance of the junction, averaged over an ensemble of impurity configurations. We begin by parameterizing the transmission eigenvalue $T_{n}$ in terms of a channel-dependent localization length $\zeta_{n}$: $$T_{n}=\frac{1}{\cosh^{2}(L/\zeta_{n})}.\label{xip}$$ A fundamental result in quantum transport is that the inverse localization length is [*uniformly*]{} distributed between $0$ and $1/\zeta_{\rm
min}\simeq 1/l$ for $l\ll L\ll Nl$ [@Dor84; @Imr86; @Pen92; @Naz94]. One can therefore write $$\frac{\left\langle\sum_{n=1}^{N}f(T_{n})\right\rangle}
{\left\langle\sum_{n=1}^{N}T_{n}\right\rangle}=\frac{\int_{0}^{L/\zeta_{\rm
min}}\! {\rm d}x \, f(\cosh^{-2}x)}{\int_{0}^{L/\zeta_{\rm min}}\! {\rm d}x \,
\cosh^{-2}x}=\int_{0}^{\infty}\! {\rm d}x\, f(\cosh^{-2}x),\label{avf}$$ where $\langle\ldots\rangle$ indicates the ensemble average and $f(T)$ is an arbitrary function of the transmission eigenvalue such that $f(T)\rightarrow 0$ for $T\rightarrow 0$. In the second equality in eq. (\[avf\]) we have used that $L/\zeta_{\rm min}\simeq L/l\gg 1$ to replace the upper integration limit by $\infty$.
Combining eqs. (\[keyzero\]), (\[Landauer\]), and (\[avf\]), we find $$\langle G_{\rm NS}\rangle=2\langle G_{\rm N}\rangle\int_{0}^{\infty}\! {\rm
d}x\,\left( \frac{\cosh^{-2}x}{2-\cosh^{-2}x}\right) ^{2}=\langle G_{\rm
N}\rangle .\label{Gav}$$ We conclude that — although $G_{\rm NS}$ according to eq. (\[keyzero\]) is of [*second*]{} order in the transmission eigenvalues $T_{n}$ — the ensemble average $\langle G_{\rm NS}\rangle$ is of [*first*]{} order in $l/L$. The resolution of this paradox is that the $T$’s are not distributed uniformly, but are either exponentially small (closed channels) or of order unity (open channels) [@Imr86]. Hence the average of $T_{n}^{2}$ is of the same order as the average of $T_{n}$. Off-diagonal elements of the transmission matrix $tt^{\dagger}$ are crucial to arrive at the result (\[Gav\]). Indeed, if one would evaluate eq. (\[keyzero\]) with the transmission eigenvalues $T_{n}$ replaced by the modal transmission probabilities ${\cal T}_{n}$, one would find a totally wrong result: Since ${\cal T}_{n}\simeq l/L\ll 1$, one would find $G_{\rm NS}\simeq (l/L)G_{\rm N}$ — which underestimates the conductance of the NS junction by the factor $L/l$.
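The final equality in eq. (\[Gav\]) amounts to $2\int_{0}^{\infty}{\rm d}x\,[\cosh^{-2}x/(2-\cosh^{-2}x)]^{2}=1$, which follows from the substitution $u=\tanh x$: the integrand becomes $(1-u^{2})/(1+u^{2})^{2}\,{\rm d}u$, with antiderivative $u/(1+u^{2})$. A numerical check (our own sketch, truncating the integral at $x=20$):

```python
import numpy as np

x = np.linspace(0.0, 20.0, 200001)  # sech²(2x) decays like e^{-4x}: 20 ~ ∞
sech2 = 1.0 / np.cosh(x)**2

# the integrand of eq. (Gav); note sech²x / (2 - sech²x) = 1/cosh(2x)
f = (sech2 / (2.0 - sech2))**2
assert np.allclose(f, 1.0 / np.cosh(2.0 * x)**2)

# trapezoidal rule; the integral equals 1/2, hence <G_NS> = <G_N>
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
assert abs(2.0 * integral - 1.0) < 1e-6
```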
Previous work [@And66; @Art79] had obtained the equality of $G_{\rm NS}$ and $G_{\rm N}$ from [*semiclassical*]{} equations of motion, as was appropriate for macroscopic systems which are large compared to the normal-metal phase-coherence length $l_{\phi}$. The present derivation, in contrast, is fully quantum mechanical. It applies to the “mesoscopic” regime $L<l_{\phi}$, in which transport is phase coherent. Takane and Ebisawa [@Tak92b] have studied the conductance of a disordered phase-coherent NS junction by numerical simulation of a two-dimensional tight-binding model. They found $\langle G_{\rm
NS}\rangle =\langle G_{\rm N}\rangle$ within numerical accuracy for $l\ll L\ll
Nl$, in agreement with eq. (\[Gav\]).
If the condition $L\ll Nl$ is relaxed, differences between $\langle G_{\rm
NS}\rangle$ and $\langle G_{\rm N}\rangle$ appear. To lowest order in $L/Nl$, the difference is a manifestation of the [*weak-localization*]{} effect, as we discuss in the following section.
Weak localization
=================
An NS junction shows an [*enhanced*]{} weak-localization effect, in comparison with the normal state [@Bee92]. The origin of the enhancement can be understood in a simple way, as follows.
We return to the parameterization $T_{n}\equiv 1/\cosh^{2}(L/\zeta_{n})$ introduced in eq. (\[xip\]), and define the density of localization lengths $\rho(\zeta,L)\equiv\langle\sum_{n}
\delta(\zeta-\zeta_{n})\rangle_{L}$. The subscript $L$ refers to the length of the disordered region. Using the identity $\cosh 2x=2\cosh^{2}x-1$, the ensemble average of eq. (\[keyzero\]) becomes $$\langle G_{\rm NS}\rangle_{L} =\frac{4e^{2}}{h}\int_{0}^{\infty}\!
{\rm d}\zeta\,\rho(\zeta,L)\cosh^{-2}(2L/\zeta).\label{GNSzeta}$$ In the same parameterization, one has $$\langle G_{\rm N}\rangle_{L} =\frac{2e^{2}}{h}\int_{0}^{\infty}\!
{\rm d}\zeta\,\rho(\zeta,L)\cosh^{-2}(L/\zeta).\label{GNzeta}$$ In the “open-channel approximation” [@Sto91], the integrals over $\zeta$ are restricted to the range $\zeta >L$ of localization lengths greater than the length of the conductor. In this range the density $\rho(\zeta,L)$ is approximately independent of $L$. The whole $L$-dependence of the integrands in eqs. (\[GNSzeta\]) and (\[GNzeta\]) lies then in the argument of the hyperbolic cosine, so that $$\langle G_{\rm NS}\rangle_{L}=2\langle G_{\rm
N}\rangle_{2L}.\label{openchannel}$$ This derivation formalizes the intuitive notion that Andreev reflection at an NS interface effectively doubles the length of the normal-metal conductor [@Tak92b].
Consider now the geometry $W\ll L$ relevant for a microbridge. In the normal state one has $$\langle G_{\rm N}\rangle=(W/L)\sigma_{\rm Drude}-\delta G_{\rm
N},\label{weaklocaN}$$ where $\sigma_{\rm Drude}$ is the classical Drude conductivity. The $L$-independent term $\delta G_{\rm N}$ is the weak-localization correction, given by [@Mel91] $\delta G_{N}=\frac{2}{3}\,e^{2}/h$. Equation (\[openchannel\]) then implies that $$\langle G_{\rm NS}\rangle
=(W/L)\sigma_{\rm Drude}-\delta G_{\rm NS},\label{weaklocaNS}$$ with $\delta G_{\rm NS}=2\,\delta G_{\rm N}$. We conclude that Andreev reflection increases the weak-localization correction, by a factor of two according to this qualitative argument [@Bee92]. A rigorous theory [@Bee94; @Mac94; @Tak94] of weak localization in an NS microbridge shows that the increase is actually somewhat less than a factor of two,[^1] $$\delta G_{\rm NS}=(2-8\pi^{-2})\,e^{2}/h=1.78\,\delta G_{\rm
N}.\label{weaklocaNSN}$$
As pointed out in ref. [@Mar93], the enhancement of weak localization in an NS junction can be observed experimentally as a [*dip*]{} in the differential conductance $G_{\rm NS}(V)=\partial I/\partial V$ around zero voltage. The dip occurs because an applied voltage destroys the enhancement of weak localization by Andreev reflection, thereby increasing the conductance by an amount $$\delta G_{\rm NS}-\delta G_{\rm N}\approx 0.5\,e^{2}/h\label{dipsize}$$ at zero temperature. \[At finite temperatures, we expect a reduction of the size of the dip by a factor[^2] $(L_{\rm c}/L)^{2}$, where $L_{\rm c}={\rm min}\,(l_{\phi},\sqrt{\hbar D/k_{\rm
B}T})$ is the length over which electrons and holes remain phase coherent.\] We emphasize that in the normal state, weak localization can [*not*]{} be detected in the current–voltage characteristic. The reason why a dip occurs in $G_{\rm NS}(V)$ and not in $G_{\rm N}(V)$ is that an applied voltage (in contrast to a magnetic field) does not break time-reversal symmetry — but only affects the phase coherence between the electrons and the Andreev-reflected holes (which differ in energy by up to $2eV$). The width $V_{\rm c}$ of the conductance dip is of the order of the Thouless energy $E_{\rm c}\equiv\pi\hbar D/L^{2}$ (with $D$ the diffusion coefficient of the junction; $L$ should be replaced by $L_{\rm c}$ if $L>L_{\rm c}$). This energy scale is such that an electron and a hole acquire a phase difference of order $\pi$ on traversing the junction. The energy $E_{\rm c}$ is much smaller than the superconducting energy gap ${\mit\Delta_{0}}$, provided $L\gg\xi$ (with $\xi\simeq(\hbar D/{\mit\Delta_{0}})^{1/2}$ the superconducting coherence length in the dirty-metal limit). The separation of energy scales is important, in order to be able to distinguish experimentally the current due to Andreev reflection below the energy gap from the quasi-particle current above the energy gap.
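As a check of the arithmetic behind eqs. (\[weaklocaNSN\]) and (\[dipsize\]): with $\delta G_{\rm N}=\frac{2}{3}\,e^{2}/h$ and $\delta G_{\rm NS}=(2-8\pi^{-2})\,e^{2}/h$ one indeed finds the ratio $1.78$ and a zero-temperature dip of about $0.5\,e^{2}/h$:

```python
import math

dG_N = 2 / 3                 # weak-localization correction, units of e²/h
dG_NS = 2 - 8 / math.pi**2   # NS junction, rigorous result

assert abs(dG_NS / dG_N - 1.78) < 0.01   # "1.78 δG_N"
assert abs((dG_NS - dG_N) - 0.5) < 0.03  # dip size ~ 0.5 e²/h
```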
The first measurement of the conductance dip predicted in ref. [@Mar93] has been reported recently by Lenssen et al. [@Len94]. The system studied consists of the two-dimensional electron gas in a GaAs/AlGaAs heterostructure with Sn/Ti superconducting contacts ($W=10\,\mu{\rm m}$, $L=0.8\,\mu{\rm m}$). No supercurrent is observed, presumably because $l_{\phi}\simeq 0.4\,\mu{\rm
m}$ is smaller than $L$. (The phase-coherence length $l_{\phi}$ is estimated from a conventional weak-localization measurement in a magnetic field.) The data for the differential conductance is reproduced in fig. \[Lenssen\]. At the lowest temperatures (10 mK) a rather small and narrow conductance dip develops, superimposed on a large and broad conductance minimum. The size of the conductance dip is about $2\,e^{2}/h$. Since in the experimental geometry $W>L>l_{\phi}$, and there are two NS interfaces, we would expect a dip of order $2(W/l_{\phi})(l_{\phi}/L)^2\times 0.5\,e^{2}/h\simeq 6\,e^{2}/h$, simply by counting the number of phase-coherent segments adjacent to the superconductor. This is three times as large as observed, but the presence of a tunnel barrier at the NS interface might easily account for this discrepancy. (The Schottky barrier at the interface between a semiconductor and superconductor presents a natural origin for such a barrier.) The conductance dip has width $V_{\rm
c}\simeq 0.25\,{\rm mV}$, which is less than the energy gap ${\mit\Delta_{0}}=0.56\,{\rm meV}$ of bulk Sn — but not by much. Experiments with a larger separation of energy scales are required for a completely unambiguous identification of the phenomenon.
An essential requirement for the appearance of a dip in the differential conductance is a high probability for Andreev reflection at the NS boundary. This is illustrated in fig. \[marmo\_fig2\], which shows the results of numerical simulations [@Mar93] of transport through a disordered normal region connected via a tunnel barrier to a superconductor. The tunnel barrier is characterized by a transmission probability per mode $\Gamma$. The dash-dotted lines refer to an ideal interface ($\Gamma=1$), and show the conductance [*dip*]{} due to weak localization, discussed above. For $\Gamma\simeq 0.2$–$0.4$ the data for $G_{\rm NS}$ (filled circles) shows a crossover[^3] to a conductance [*peak*]{}. This is the phenomenon of [*reflectionless tunneling*]{}, discussed in the following section.
Reflectionless tunneling
========================
In 1991, Kastalsky et al. [@Kas91] discovered a large and narrow peak in the differential conductance of a Nb–InGaAs junction. We reproduce their data in fig. \[Kastalsky\]. (A similar peak is observed as a function of magnetic field.) Since then a great deal of experimental [@Ngu92; @Man92; @Agr92; @Xio93; @Len94b; @Bak94; @Mag94], numerical [@Mar93; @Tak93a], and analytical work [@Wee92; @Tak92c; @Vol93; @Hek93; @Bee94b] has been done on this effect. Here we focus on the explanation in terms of [*disorder-induced opening of tunneling channels*]{} [@Naz94; @Bee94b], which is the most natural from the viewpoint of the scattering formula (\[keyzero\]), and which we feel captures the essence of the effect. Equivalently, the conductance peak can be explained in terms of a non-equilibrium proximity effect, which is the preferred explanation in a Green’s function formulation of the problem [@Vol93; @Zai90; @Vol92a; @Vol92b]. We begin by reviewing the numerical work [@Mar93].
Numerical simulations
---------------------
A sharp peak in the conductance around $V,B=0$ is evident in the numerical simulations for $\Gamma=0.2$ (dotted lines in fig. \[marmo\_fig2\]). While $G_{\rm N}$ depends only weakly on $B$ and $V$ in this range (open circles), $G_{\rm NS}$ drops abruptly (filled circles). The widths of the conductance peak in $B$ and $eV$ are of order $B_{\rm c}=h/eLW$ (one flux quantum through the normal region) and $eV_{\rm c}=\pi\hbar D/L^{2}\equiv E_{\rm c}$ (the Thouless energy), respectively. The width of the peak is the same as the width of the conductance dip due to weak localization, which occurs for larger barrier transparencies. The size of the peak, however, is much greater than that of the dip.
It is instructive to first discuss the [*classical*]{} resistance $R_{\rm
NS}^{\rm class}$ of the NS junction. The basic approximation in $R_{\rm
NS}^{\rm class}$ is that currents rather than amplitudes are matched at the NS interface [@And66]. The result is $$R_{\rm NS}^{\rm class}=(h/2Ne^{2})\left[L/l+2\Gamma^{-2}+{\cal
O}(1)\right].\label{RNSclass}$$ The contribution from the barrier is $\propto\Gamma^{-2}$ because tunneling into a superconductor is a two-particle process [@She80]: Both the incident electron and the Andreev-reflected hole have to tunnel through the barrier (the net result being the addition of a Cooper pair to the superconducting condensate [@And64]). Equation (\[RNSclass\]) is to be contrasted with the classical resistance $R_{\rm N}^{\rm class}$ in the normal state, $$R_{\rm N}^{\rm class}=(h/2Ne^{2})\left[L/l+\Gamma^{-1}+{\cal
O}(1)\right],\label{GNclass}$$ where the contribution of a resistive barrier is $\propto\Gamma^{-1}$. In the absence of a tunnel barrier (i.e. for $\Gamma=1$), $R_{\rm NS}^{\rm
class}=R_{\rm N}^{\rm class}$ for $L\gg l$, in agreement with refs. [@And66; @Art79]. Let us now see how these classical results compare with the simulations [@Mar93].
In fig. \[marmo\_fig1\] we show the resistance (at $V=0$) as a function of $\Gamma$ in the absence and presence of a magnetic field. (The parameters of the disordered region are the same as for fig. \[marmo\_fig2\].) There is good agreement with the classical eqs. (\[RNSclass\]) and (\[GNclass\]) for a magnetic field corresponding to 10 flux quanta through the disordered segment (fig. \[marmo\_fig1\]b). For $B=0$, however, the situation is different (fig. \[marmo\_fig1\]a). The normal-state resistance (open circles) still follows approximately the classical formula (solid curve). (Deviations due to weak localization are noticeable, but small on the scale of the figure.) In contrast, the resistance of the NS junction (filled circles) lies much below the classical prediction (dotted curve). The numerical data shows that for $\Gamma\gg l/L$ one has approximately $$R_{\rm NS}(B=0,V=0)\approx R_{\rm N}^{\rm class},\label{RNSzeroB}$$ which for $\Gamma\ll 1$ is much smaller than $R_{\rm NS}^{\rm class}$. This is the phenomenon of [*reflectionless tunneling*]{}: In fig. \[marmo\_fig1\]a the barrier contributes to $R_{\rm NS}$ in order $\Gamma^{-1}$, just as for single-particle tunneling, and not in order $\Gamma^{-2}$, as expected for two-particle tunneling. It is as if the Andreev-reflected hole is not reflected by the barrier. The interfering trajectories responsible for this effect were first identified by Van Wees et al. [@Wee92]. The numerical data of fig. \[marmo\_fig1\]a is in good agreement with the Green’s function calculation of Volkov, Zaĭtsev, and Klapwijk [@Vol93] (dashed curve). Both these papers have played a crucial role in the understanding of the effect. 
The scaling theory reviewed below [@Bee94b] is essentially equivalent to the Green’s function calculation, but has the advantage of explicitly demonstrating how the opening of tunneling channels on increasing the length $L$ of the disordered region induces a transition from a $\Gamma^{-2}$ dependence to a $\Gamma^{-1}$ dependence when $L\simeq l/\Gamma$.
Scaling theory
--------------
We use the parameterization $$T_{n}=\frac{1}{\cosh^{2}x_{n}},\label{Txdef}$$ similar to eq. (\[xip\]), but now with a dimensionless variable $x_{n}\in[0,\infty)$. The density of the $x$-variables, for a length $L$ of disordered region, is denoted by $$\rho(x,L)=\langle{\textstyle\sum_{n}}\delta(x-x_{n})\rangle_{L}.
\label{rhoxdef}$$ For $L=0$, i.e. in the absence of disorder, we have the initial condition imposed by the barrier, $$\rho(x,0)=N\delta(x-x_{0}),\label{rhox0}$$ with $\Gamma=1/\cosh^{2}x_{0}$. The scaling theory describes how $\rho(x,L)$ evolves with increasing $L$. This evolution is governed by the equation $$\frac{\partial}{\partial s}\rho(x,s)=-\frac{1}{2N} \frac{\partial}{\partial
x}\rho(x,s)\frac{\partial}{\partial x}
\int_{0}^{\infty}\!\!{\rm d}x'\,
\rho(x',s)\ln|\sinh^{2}x-\sinh^{2}x'|,\label{scaling}$$ where we have defined $s\equiv L/l$. This non-linear diffusion equation was derived by Mello and Pichard [@Mel89] from a Fokker-Planck equation [@Sto91; @Dor82; @Mel88] for the joint distribution function of all $N$ eigenvalues, by integrating out $N-1$ eigenvalues and taking the large-$N$ limit. This limit restricts its validity to the metallic regime ($N\gg L/l$), and is sufficient to determine the leading order contribution to the average conductance, which is ${\cal O}(N)$. The weak-localization correction, which is ${\cal O}(1)$, is neglected here. A priori, eq. (\[scaling\]) holds only for a “quasi-one-dimensional” wire geometry (length $L$ much greater than width $W$), because the Fokker-Planck equation from which it is derived requires $L\gg W$. Numerical simulations indicate that the geometry dependence only appears in the ${\cal O}(1)$ corrections, and that the ${\cal O}(N)$ contributions are essentially the same for a wire, square, or cube.
In ref. [@Bee94b] it is shown how the scaling equation (\[scaling\]) can be solved exactly, for arbitrary initial condition $\rho(x,0)\equiv\rho_{0}(x)$. The method of solution is based on a mapping of eq. (\[scaling\]) onto Euler’s equation for the isobaric flow of a two-dimensional ideal fluid: $L$ corresponds to time and $\rho$ to the $y$-component of the velocity field on the $x$-axis. \[Please note that in this section $x$ is the auxiliary variable defined in eq. (\[Txdef\]) and [*not*]{} the physical coordinate in fig. \[diagram\].\] The result is $$\rho(x,s)=(2N/\pi)\,{\rm Im}\,U(x-{\rm i}0^{+},s),\label{rhoxU}$$ where the complex function $U(z,s)$ is determined by $$U(z,s)=U_{0}\bigl(z-sU(z,s)\bigr).\label{Udef}$$ The function $U_{0}(z)$ is fixed by the initial condition, $$U_{0}(z)=\frac{\sinh 2z}{2N} \int_{0}^{\infty}\!\!{\rm
d}x'\,\frac{\rho_{0}(x')}{\sinh^{2}z-\sinh^{2}x'}.\label{U0def}$$ The implicit equation (\[Udef\]) has multiple solutions in the entire complex plane; we need the solution for which both $z$ and $z-sU(z,s)$ lie in the strip between the lines $y=0$ and $y=-\pi/2$, where $z=x+{\rm i}y$.
The initial condition (\[rhox0\]) corresponds to $$U_{0}(z)={\textstyle\frac{1}{2}}\sinh 2z\,(\cosh^{2}z
-\Gamma^{-1})^{-1}.\label{U0x0}$$ The resulting density (\[rhoxU\]) is plotted in fig. \[rhoplot\] (solid curves), for $\Gamma=0.1$ and several values of $s$. For $s\gg 1$ and $x\ll s$ it simplifies to $$\begin{aligned}
&&x={\textstyle\frac{1}{2}}{\rm arccosh}\,\tau-{\textstyle\frac{1}{2}}\Gamma
s(\tau^{2}-1)^{1/2}\cos\sigma,\nonumber\\
&&\sigma\equiv \pi sN^{-1}\rho(x,s),\;\;\tau\equiv\sigma(\Gamma
s\sin\sigma)^{-1},\label{rhoxapprox}\end{aligned}$$ shown dashed in fig. \[rhoplot\]. Equation (\[rhoxapprox\]) agrees with the result of a Green’s function calculation by Nazarov [@Naz94]. For $s=0$ (no disorder), $\rho$ is a delta function at $x_{0}$. On adding disorder the eigenvalue density rapidly spreads along the $x$-axis (curve a), such that $\rho\leq N/s$ for $s>0$. The sharp edges of the density profile, so uncharacteristic for a diffusion profile, reveal the hydrodynamic nature of the scaling equation (\[scaling\]). The upper edge is at $$x_{\rm max}=s+{\textstyle\frac{1}{2}}\ln(s/\Gamma)+{\cal O}(1).\label{xmax}$$ Since $L/x$ has the physical significance of a localization length [@Sto91], this upper edge corresponds to a minimum localization length $\xi_{\rm min}=L/x_{\rm max}$ of order $l$. The lower edge at $x_{\rm min}$ propagates from $x_{0}$ to $0$ in a “time” $s_{\rm c}=(1-\Gamma)/\Gamma$. For $1\ll s\leq s_{\rm c}$ one has $$x_{\rm min}={\textstyle\frac{1}{2}}{\rm arccosh}\,(s_{\rm c}/s)
-{\textstyle\frac{1}{2}}[1-(s/s_{\rm c})^{2}]^{1/2}.\label{xmin}$$ It follows that the maximum localization length $\xi_{\rm max}=L/x_{\rm min}$ [*increases*]{} if disorder is added to a tunnel junction. This paradoxical result, that disorder enhances transmission, becomes intuitively obvious from the hydrodynamic correspondence, which implies that $\rho(x,s)$ spreads both to larger [*and*]{} smaller $x$ as the fictitious time $s$ progresses. When $s=s_{\rm c}$ the diffusion profile hits the boundary at $x=0$ (curve c), so that $x_{\rm min}=0$. This implies that for $s>s_{\rm c}$ there exist scattering states (eigenfunctions of $tt^{\dagger}$) which tunnel through the barrier with near-unit transmission probability, even if $\Gamma\ll 1$. The number $N_{\rm open}$ of transmission eigenvalues close to one ([*open channels*]{}) is of the order of the number of $x_{n}$’s in the range $0$ to $1$ (since $T_{n}\equiv 1/\cosh^{2}x_{n}$ vanishes exponentially if $x_{n}>1$). For $s\gg s_{\rm c}$ (curve e) we estimate $$N_{\rm open}\simeq\rho(0,s)=N(s+\Gamma^{-1})^{-1},\label{Nopen}$$ where we have used eq. (\[rhoxapprox\]). The disorder-induced opening of tunneling channels was discovered by Nazarov [@Naz94]. It is the fundamental mechanism for the $\Gamma^{-2}$ to $\Gamma^{-1}$ transition in the conductance of an NS junction, as we now discuss.
According to eqs. (\[keyzero\]), (\[Landauer\]), (\[Txdef\]), and (\[rhoxdef\]), the average conductances $\langle G_{\rm NS}\rangle$ and $\langle G_{\rm N}\rangle$ are given by the integrals $$\begin{aligned}
\langle G_{\rm NS}\rangle&=&\frac{4e^{2}}{h}\int_{0}^{\infty}\!
{\rm d}x\,\rho(x,s)\cosh^{-2}2x,\label{GNSx}\\
\langle G_{\rm N}\rangle&=&\frac{2e^{2}}{h}\int_{0}^{\infty}\!
{\rm d}x\,\rho(x,s)\cosh^{-2}x.\label{GNx}\end{aligned}$$ Here we have used the same trigonometric identity as in eq. (\[GNSzeta\]). For $\Gamma\gg l/L$ one is in the regime $s\gg s_{\rm c}$ of curve e in fig. \[rhoplot\]. Then the dominant contribution to the integrals comes from the range $x/s\ll 1$ where $\rho(x,s)\approx\rho(0,s)=N(s+\Gamma^{-1})^{-1}$ is approximately independent of $x$. Substitution of $\rho(x,s)$ by $\rho(0,s)$ in eqs. (\[GNSx\]) and (\[GNx\]) yields directly $$\langle G_{\rm NS}\rangle\approx\langle G_{\rm N}\rangle\approx 1/R_{\rm
N}^{\rm class},\label{GNSGN}$$ in agreement with the result (\[RNSzeroB\]) of the numerical simulations.
Equation (\[GNSGN\]) has the linear $\Gamma$ dependence characteristic for reflectionless tunneling. The crossover to the quadratic $\Gamma$ dependence when $\Gamma\lesssim l/L$ is obtained by evaluating the integrals (\[GNSx\]) and (\[GNx\]) with the density $\rho(x,s)$ given by eq. (\[rhoxU\]). The result is [@Bee94b] $$\begin{aligned}
\langle G_{\rm NS}\rangle&=&(2Ne^{2}/h)(s+Q^{-1})^{-1},\label{GNSresult}\\
\langle G_{\rm N}\rangle&=&(2Ne^{2}/h)(s+\Gamma^{-1})^{-1}.\label{GNresult}\end{aligned}$$ The “effective” tunnel probability $Q$ is defined by $$Q=\frac{\theta}{s\cos\theta}\left(\frac{\theta} {\Gamma
s\cos\theta}(1+\sin\theta)-1\right),\label{Qdef}$$ where $\theta\in(0,\pi/2)$ is the solution of the transcendental equation $$\theta[1-{\textstyle\frac{1}{2}}\Gamma(1-\sin\theta)]=\Gamma s\cos\theta.
\label{phidef}$$ For $\Gamma\ll 1$ (or $s\gg 1$) eqs. (\[Qdef\]) and (\[phidef\]) simplify to $Q=\Gamma\sin\theta$, $\theta=\Gamma s\cos\theta$, in precise agreement with the Green’s function calculation of Volkov, Zaĭtsev, and Klapwijk [@Vol93]. According to eq. (\[GNresult\]), the normal-state resistance increases [*linearly*]{} with the length $L$ of the disordered region, as expected from Ohm’s law. This classical reasoning fails if one of the contacts is in the superconducting state. The scaling of the resistance $R_{\rm
NS}\equiv 1/\langle G_{\rm NS}\rangle$ with length, computed from eq. (\[GNSresult\]), is plotted in fig. \[GNSplot\]. For $\Gamma=1$ the resistance increases monotonically with $L$. The ballistic limit $L\rightarrow
0$ equals $h/4Ne^{2}$, half the contact resistance of a normal junction because of Andreev reflection (cf. section 3.1). For $\Gamma\lesssim 0.5$ a [*resistance minimum*]{} develops, somewhat below $L=l/\Gamma$. The resistance minimum is associated with the crossover from a quadratic to a linear dependence of $R_{\rm NS}$ on $1/\Gamma$.
If $\Gamma s\gg 1$ one has $\theta\rightarrow\pi/2$, hence $Q\rightarrow\Gamma$. In the opposite regime $\Gamma s\ll 1$ one has $\theta\rightarrow\Gamma s$, hence $Q\rightarrow\Gamma^{2}s$. The corresponding asymptotic expressions for $\langle G_{\rm NS}\rangle$ are (assuming $\Gamma\ll
1$ and $s\gg 1$): $$\begin{aligned}
\langle G_{\rm NS}\rangle&=&(2Ne^{2}/h)(s+\Gamma^{-1})^{-1},\;\;{\rm if}\;\;
\Gamma s\gg 1,\label{asympta}\\
\langle G_{\rm NS}\rangle&=&(2Ne^{2}/h)\Gamma^{2}s,\;\;{\rm if}\;\; \Gamma s\ll
1.\label{asymptb}\end{aligned}$$ In either limit the conductance is greater than the classical result $$G_{\rm NS}^{\rm class}=(2Ne^{2}/h)(s+2\Gamma^{-2})^{-1}, \label{GNSclass}$$ which holds if phase coherence between electrons and holes is destroyed by a voltage or magnetic field. The peak in the conductance around $V,B=0$ is of order $\Delta G_{\rm NS}=\langle G_{\rm NS}\rangle-G_{\rm NS}^{\rm class}$, which has the relative magnitude $$\frac{\Delta G_{\rm NS}}{\langle G_{\rm
NS}\rangle}\approx\frac{2}{2+\Gamma^{2}s}.\label{peakheight}$$
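As an added numerical illustration (not part of the original text; the parameter values are arbitrary), the crossover between the two limits can be obtained by solving $\theta=\Gamma s\cos\theta$ by root finding and placing the effective tunnel probability $Q=\Gamma\sin\theta$ in series with the diffusive resistance, $\langle G_{\rm NS}\rangle=(2Ne^{2}/h)(s+1/Q)^{-1}$, the form that reduces to eqs. (\[asympta\]) and (\[asymptb\]) in the respective regimes:

```python
import numpy as np
from scipy.optimize import brentq

def effective_tunnel_probability(Gamma, s):
    """Solve theta = Gamma*s*cos(theta) on [0, pi/2]; return Q = Gamma*sin(theta)."""
    theta = brentq(lambda t: t - Gamma * s * np.cos(t), 0.0, np.pi / 2)
    return Gamma * np.sin(theta)

def G_NS(Gamma, s):
    """<G_NS> in units of 2Ne^2/h: effective barrier in series with s = L/l."""
    Q = effective_tunnel_probability(Gamma, s)
    return 1.0 / (s + 1.0 / Q)

# Gamma*s >> 1: reflectionless tunneling, G -> (s + 1/Gamma)^(-1), eq. (asympta)
print(G_NS(Gamma=0.01, s=5000), 1.0 / (5000 + 100))

# Gamma*s << 1: G -> Gamma^2 * s, eq. (asymptb)
print(G_NS(Gamma=0.01, s=10), 0.01**2 * 10)
```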
The scaling theory assumes zero temperature. Hekking and Nazarov [@Hek93] have studied the conductance of a resistive NS interface at finite temperatures, when $L$ is greater than the correlation length $L_{\rm c}={\rm
min}\,(l_{\phi},\sqrt{\hbar D/k_{\rm B}T})$. Their result is consistent with the limiting expression (\[asymptb\]), if $s=L/l$ is replaced by $L_{\rm
c}/l$. The implication is that, if $L>L_{\rm c}$, the non-linear scaling of the resistance shown in fig. \[GNSplot\] only applies to a disordered segment of length $L_{\rm c}$ adjacent to the superconductor. For the total resistance one should add the Ohmic contribution of order $(h/e^{2})(L-L_{\rm c})/l$ from the rest of the wire.
Double-barrier junction
-----------------------
In the previous subsection we have discussed how the opening of tunneling channels (i.e. the appearance of transmission eigenvalues close to one) by disorder leads to a minimum in the resistance when $L\simeq l/\Gamma$. The minimum separates a $\Gamma^{-1}$ from a $\Gamma^{-2}$ dependence of the resistance on the transparency of the interface. We referred to the $\Gamma^{-1}$ dependence as “reflectionless tunneling”, since it is as if one of the two quasiparticles which form the Cooper pair can tunnel through the barrier with probability one. In the present subsection we will show, following ref. [@Mel94], that a qualitatively similar effect occurs if the disorder in the normal region is replaced by a second tunnel barrier (tunnel probability $\Gamma'$). The resistance at fixed $\Gamma$ shows a minimum as a function of $\Gamma'$ when $\Gamma'\simeq\Gamma$. For $\Gamma'\lesssim\Gamma$ the resistance has a $\Gamma^{-1}$ dependence, so that we can speak again of reflectionless tunneling.
We consider an ${\rm NI}_{1}{\rm NI}_{2}{\rm S}$ junction, where N = normal metal, S = superconductor, and ${\rm I}_i$ = insulator or tunnel barrier (transmission probability per mode $\Gamma_{i}\equiv 1/\cosh^{2}\alpha_{i}$). We assume ballistic motion between the barriers. (The effect of disorder is discussed later.) A straightforward calculation yields the transmission probabilities $T_{n}$ of the two barriers in series, $$\begin{aligned}
&&T_{n}=(a+b\cos\varphi_{n})^{-1},\label{eq:tnphin}\\
&&a={\textstyle\frac{1}{2}}+{\textstyle\frac{1}{2}}\cosh 2\alpha_{1}\cosh
2\alpha_{2},\;\;
b={\textstyle\frac{1}{2}}\sinh 2\alpha_{1}\sinh 2\alpha_{2},\label{coeffalpha}\end{aligned}$$ where $\varphi_{n}$ is the phase accumulated between the barriers by mode $n$. Since the transmission matrix $t$ is diagonal, the transmission probabilities $T_n$ are identical to the eigenvalues of $tt^{\dagger}$. We assume that $L\gg\lambda_{\rm F}$ ($\lambda_{\rm F}$ is the Fermi wavelength) and $N\Gamma_{i}\gg 1$, so that the conductance is not dominated by a single resonance. In this case, the phases $\varphi_{n}$ are distributed uniformly in the interval $(0,2\pi)$ and we may replace the sum over the transmission eigenvalues in eqs. (\[keyzero\]) and (\[Landauer\]) by integrals over $\varphi$: $\sum_{n=1}^{N}f(\varphi_{n})\rightarrow(N/2\pi)\int_{0}^{2\pi}{\rm
d}\varphi\,f(\varphi)$. The result is $$\begin{aligned}
G_{\rm NS}&=&\frac{4Ne^2}{h}\frac{\cosh 2\alpha_{1}\cosh 2\alpha_{2}} {\left(
\cosh^{2}2\alpha_{1}+\cosh^{2}2\alpha_{2}-1 \right)^{3/2}},\label{gnsintphi}\\
G_{\rm N}&=&\frac{4Ne^{2}}{h}(\cosh 2\alpha_{1}+\cosh
2\alpha_{2})^{-1}\label{gnintphi}.\end{aligned}$$ These expressions are symmetric in the indices 1 and 2: It does not matter which of the two barriers is closest to the superconductor. In the same way we can compute the entire distribution of the transmission eigenvalues, $\rho(T)\equiv\sum_{n}\delta(T-T_{n}) \rightarrow(N/2\pi)\int_{0}^{2\pi}{\rm
d}\varphi\,\delta(T-T(\varphi))$. Substituting $T(\varphi)=(a+b\cos\varphi)^{-1}$ from eq. (\[eq:tnphin\]), one finds $$\rho(T)=\frac{N}{\pi T}\left(b^{2}T^{2}-(aT-1)^{2}\right)^{-1/2}.
\label{rhoT}$$
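The closed-form results (\[gnsintphi\]) and (\[gnintphi\]) can be verified by carrying out the $\varphi$-average numerically. The following Python sketch is an added check (per mode, in units of $e^{2}/h$); it assumes the relation $G_{\rm NS}=(4e^{2}/h)\sum_{n}T_{n}^{2}(2-T_{n})^{-2}$ for eq. (\[keyzero\]) and the Landauer formula for $G_{\rm N}$:

```python
import numpy as np

def ninis_conductances(Gamma1, Gamma2, n_phi=20000):
    """Per-mode G_NS and G_N (units e^2/h) of a ballistic NINIS junction,
    obtained by averaging over the uniformly distributed phase phi."""
    a1 = np.arccosh(1.0 / np.sqrt(Gamma1))   # Gamma_i = 1/cosh^2(alpha_i)
    a2 = np.arccosh(1.0 / np.sqrt(Gamma2))
    a = 0.5 + 0.5 * np.cosh(2 * a1) * np.cosh(2 * a2)
    b = 0.5 * np.sinh(2 * a1) * np.sinh(2 * a2)
    phi = 2 * np.pi * (np.arange(n_phi) + 0.5) / n_phi
    T = 1.0 / (a + b * np.cos(phi))          # eq. (eq:tnphin)
    g_ns = 4 * np.mean(T**2 / (2 - T)**2)    # eq. (keyzero), per mode
    g_n = 2 * np.mean(T)                     # Landauer formula, per mode
    return g_ns, g_n

# compare with the closed forms (gnsintphi) and (gnintphi)
G1, G2 = 0.3, 0.7
c1, c2 = 2 / G1 - 1, 2 / G2 - 1              # cosh(2*alpha_i) = 2/Gamma_i - 1
g_ns, g_n = ninis_conductances(G1, G2)
print(g_ns, 4 * c1 * c2 / (c1**2 + c2**2 - 1) ** 1.5)
print(g_n, 4 / (c1 + c2))
```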
In fig. \[NINISfig\] we plot the resistance $R_{\rm N}= 1/G_{\rm N}$ and $R_{\rm NS}=1/G_{\rm NS}$, following from eqs. (\[gnsintphi\]) and (\[gnintphi\]). Notice that $R_{\rm N}$ follows Ohm’s law, $$R_{\rm N}=\frac{h}{2Ne^2}(1/\Gamma_{1}+1/\Gamma_{2}-1),\label{Ohmslaw}$$ as expected from classical considerations. In contrast, the resistance $R_{\rm
NS}$ has a [*minimum*]{} if one of the $\Gamma$’s is varied while keeping the other fixed. This resistance minimum cannot be explained by classical series addition of barrier resistances. If $\Gamma_{2}\ll 1$ is fixed and $\Gamma_{1}$ is varied, as in fig. \[NINISfig\], the minimum occurs when $\Gamma_{1}=\sqrt{2}\,\Gamma_{2}$. The minimal resistance $R_{\rm
NS}^{\rm min}$ is of the same order of magnitude as the resistance $R_{\rm N}$ in the normal state at the same value of $\Gamma_{1}$ and $\Gamma_{2}$. In particular, we find that $R_{\rm NS}^{\rm min}$ depends linearly on $1/\Gamma_{i}$, whereas for a single barrier $R_{\rm NS}\propto 1/\Gamma^{2}$.
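The location of the minimum can be confirmed by numerically minimizing the resistance following from eq. (\[gnsintphi\]). The short sketch below (an added illustration; the value $\Gamma_{2}=0.01$ is an arbitrary choice) recovers $\Gamma_{1}=\sqrt{2}\,\Gamma_{2}$ in the limit $\Gamma_{2}\ll 1$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def R_NS(Gamma1, Gamma2):
    """NINIS resistance in units h/4Ne^2, inverse of eq. (gnsintphi)."""
    c1, c2 = 2.0 / Gamma1 - 1.0, 2.0 / Gamma2 - 1.0   # cosh(2*alpha_i)
    return (c1**2 + c2**2 - 1.0) ** 1.5 / (c1 * c2)

Gamma2 = 0.01          # fixed, strongly reflecting interface
res = minimize_scalar(lambda g1: R_NS(g1, Gamma2),
                      bounds=(1e-4, 0.5), method="bounded")
print(res.x / Gamma2)  # close to sqrt(2) ~ 1.414 for Gamma2 << 1
```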
The linear dependence on the barrier transparency shows the qualitative similarity of a ballistic NINIS junction to the disordered NIS junction considered in the previous subsection. To illustrate the similarity, we compare in fig. \[rhotplot\] the densities of normal-state transmission eigenvalues. The left panel is for an NIS junction \[computed using eq. (\[rhoxU\])\], the right panel is for an NINIS junction \[computed from eq. (\[rhoT\])\]. In the NIS junction, disorder leads to a bimodal distribution $\rho(T)$, with a peak near zero transmission and another peak near unit transmission (dashed curve). A similar bimodal distribution appears in the ballistic NINIS junction, for approximately equal transmission probabilities of the two barriers. There are also differences between the two cases: The NIS junction has a unimodal $\rho(T)$ if $L/l<1/\Gamma$, while the NINIS junction has a bimodal $\rho(T)$ for any ratio of $\Gamma_{1}$ and $\Gamma_{2}$. In both cases, the opening of tunneling channels, i.e. the appearance of a peak in $\rho(T)$ near $T=1$, is the origin of the $1/\Gamma$ dependence of the resistance.
The scaling equation of section 5.2 can be used to investigate what happens to the resistance minimum if the region of length $L$ between the tunnel barriers contains impurities, with elastic mean free path $l$. As shown in ref. [@Mel94], the resistance minimum persists as long as $l\gtrsim\Gamma L$. In the diffusive regime ($l\ll L$) the scaling theory is found to agree with the Green’s function calculation by Volkov, Zaĭtsev, and Klapwijk for a disordered NINIS junction [@Vol93]. For strong barriers ($\Gamma_{1},\Gamma_{2}\ll 1$) and strong disorder ($L\gg l$), one has the two asymptotic formulas $$\begin{aligned}
G_{\rm NS}&=& \frac{2Ne^2}{h}\frac{\Gamma_{1}^{2}\Gamma_{2}^{2}}
{\left(\Gamma_{1}^{2}+\Gamma_{2}^{2}\right)^{3/2}},\;\;{\rm if}\;\;
\Gamma_{1},\Gamma_{2}\ll l/L,\label{eq:glimitls}\\
G_{\rm NS}&=&\frac{2Ne^2}{h}(L/l+1/\Gamma_{1}+1/\Gamma_{2})^{-1},\;\;{\rm
if}\;\;\Gamma_{1},\Gamma_{2}\gg l/L.\label{eq:glimitss}\end{aligned}$$ Equation (\[eq:glimitls\]) coincides with eq. (\[gnsintphi\]) in the limit $\alpha_{1},\alpha_{2}\gg 1$ (recall that $\Gamma_{i}\equiv
1/\cosh^2\alpha_{i}$). This shows that the effect of disorder on the resistance minimum can be neglected as long as the resistance of the junction is dominated by the barriers. In this case $G_{\rm NS}$ depends linearly on $\Gamma_{1}$ and $\Gamma_{2}$ only if $\Gamma_{1}\approx\Gamma_{2}$. Equation (\[eq:glimitss\]) shows that if the disorder dominates, $G_{\rm NS}$ has a linear $\Gamma$-dependence regardless of the relative magnitude of $\Gamma_{1}$ and $\Gamma_{2}$.
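As an added check (not in the original text), the strong-barrier limit (\[eq:glimitls\]) can be compared numerically with the ballistic result (\[gnsintphi\]); both are quoted below per mode, in units of $2e^{2}/h$, for a few arbitrary choices of $\Gamma_{1},\Gamma_{2}\ll 1$:

```python
def g_ninis_exact(Gamma1, Gamma2):
    """Ballistic NINIS conductance per mode (units 2e^2/h), eq. (gnsintphi)."""
    c1, c2 = 2 / Gamma1 - 1, 2 / Gamma2 - 1    # cosh(2*alpha_i)
    return 2 * c1 * c2 / (c1**2 + c2**2 - 1) ** 1.5

def g_barrier_limit(Gamma1, Gamma2):
    """Strong-barrier asymptote, eq. (eq:glimitls), per mode (units 2e^2/h)."""
    return Gamma1**2 * Gamma2**2 / (Gamma1**2 + Gamma2**2) ** 1.5

for G1, G2 in [(0.01, 0.01), (0.02, 0.005)]:
    print(g_ninis_exact(G1, G2), g_barrier_limit(G1, G2))
```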
We have assumed zero temperature, zero magnetic field, and infinitesimal applied voltage. Each of these quantities is capable of destroying the phase coherence between the electrons and the Andreev-reflected holes, which is responsible for the resistance minimum. As far as the temperature $T$ and voltage $V$ are concerned, we require $k_{\rm B}T,eV\ll\hbar/\tau_{\rm dwell}$ for the appearance of a resistance minimum, where $\tau_{\rm dwell}$ is the dwell time of an electron in the region between the two barriers. For a ballistic NINIS junction $\tau_{\rm dwell}\simeq L/v_{\rm F}\Gamma$, while for a disordered junction $\tau_{\rm dwell}\simeq L^{2}/v_{\rm F}\Gamma
l$ is larger by a factor $L/l$. It follows that the condition on temperature and voltage becomes more restrictive if the disorder increases, even if the resistance remains dominated by the barriers. As far as the magnetic field $B$ is concerned, we require $B\ll h/eS$ (with $S$ the area of the junction perpendicular to $B$), if the motion between the barriers is diffusive. For ballistic motion the trajectories enclose no flux, so no magnetic field dependence is expected.
A possible experiment to verify these results might be scanning tunneling microscopy (STM) of a metal particle on a superconducting substrate [@Hesl94]. The metal–superconductor interface has a fixed tunnel probability $\Gamma_{2}$. The probability $\Gamma_{1}$ for an electron to tunnel from the STM tip to the particle can be controlled by varying the distance. (Volkov has recently analyzed this geometry in the regime in which the motion from tip to particle is diffusive rather than by tunneling [@Vol94].) Another possibility is to create an NINIS junction using a two-dimensional electron gas in contact with a superconductor. An adjustable tunnel barrier could then be implemented by means of a gate electrode.
Circuit theory
--------------
The scaling theory of ref. [@Bee94b], which was the subject of section 5.2, describes the transition from the ballistic to the diffusive regime. In the diffusive regime it is equivalent to the Green’s function theory of ref. [@Vol93]. A third, equivalent, theory for the diffusive regime was presented recently by Nazarov [@Naz94b]. Starting from a continuity equation for the Keldysh Green’s function [@Lar77], and applying the appropriate boundary conditions [@Kup88], Nazarov was able to formulate a set of rules which reduce the problem of computing the resistance of an NS junction to a simple exercise in circuit theory. Furthermore, the approach can be applied without further complications to multi-terminal networks involving several normal and superconducting reservoirs. Because of its practical importance, we discuss Nazarov’s circuit theory in some detail.
The superconductors $S_{i}$ should all be at the same voltage, but may have a different phase $\phi_{i}$ of the pair potential. Zero temperature is assumed, as well as infinitesimal voltage differences between the normal reservoirs (linear response). The reservoirs are connected by a set of diffusive normal-state conductors (length $L_{i}$, mean free path $l_{i}$; $s_{i}\equiv
L_{i}/l_{i}\gg 1$). Between the conductors there may be tunnel barriers (tunnel probability $\Gamma_{i}$). The presence of superconducting reservoirs has no effect on the resistance $(h/2Ne^{2})s_{i}$ of the diffusive conductors, but affects only the resistance $h/2Ne^{2}\Gamma_{i}^{\rm eff}$ of the tunnel barriers. The tunnel probability $\Gamma_{i}$ of barrier $i$ is renormalized to an effective tunnel probability $\Gamma_{i}^{\rm eff}$, which depends on the entire circuit.
Nazarov’s rules to compute the effective tunnel probabilities are as follows. To each node and to each terminal of the circuit one assigns a vector $\vec{n}_{i}$ of unit length. For a normal reservoir, $\vec{n}_{i}=(0,0,1)$ is at the north pole, for a superconducting reservoir, $\vec{n}_{i}=(\cos\phi_{i},\sin\phi_{i},0)$ is at the equator. For a node, $\vec{n}_{i}$ is somewhere on the northern hemisphere. The vector $\vec{n}_{i}$ is called a “spectral vector”, because it is a particular parameterization of the local energy spectrum. If the tunnel barrier is located between spectral vectors $\vec{n}_{1}$ and $\vec{n}_{2}$, its effective tunnel probability is[^4] $$\Gamma^{\rm eff}=(\vec{n}_{1}\cdot\vec{n}_{2})\Gamma=
\Gamma\cos\theta_{12},\label{Gamma-eff}$$ where $\theta_{12}$ is the angle between $\vec{n}_{1}$ and $\vec{n}_{2}$. The rule to compute the spectral vector of node $i$ follows from the continuity equation for the Green’s function. Let the index $k$ label the nodes or terminals connected to node $i$ by a single tunnel barrier (with tunnel probability $\Gamma_{k}$). Let the index $q$ label the nodes or terminals connected to $i$ by a diffusive conductor (with $L/l\equiv s_{q}$). The spectral vectors then satisfy the sum rule [@Naz94b] $$\sum_{k}(\vec{n}_{i}\times\vec{n}_{k})\Gamma_{k}+
\sum_{q}(\vec{n}_{i}\times\vec{n}_{q})\frac{{\rm
arccos}(\vec{n}_{i}\cdot\vec{n}_{q})}
{s_{q}\sqrt{1-(\vec{n}_{i}\cdot\vec{n}_{q})^{2}}}=0.\label{sumrule}$$ This is a sum rule for a set of vectors perpendicular to $\vec{n}_{i}$ of magnitude $\Gamma_{k}\sin\theta_{ik}$ or $\theta_{iq}/s_{q}$, depending on whether the element connected to node $i$ is a tunnel barrier or a diffusive conductor. There is a sum rule for each node, and together the sum rules determine the spectral vectors of the nodes.
As a simple example, let us consider the system of section 5.2, consisting of one normal terminal (N), one superconducting terminal (S), one node (labeled A), and two elements: A diffusive conductor (with $L/l\equiv s$) between N and A, and a tunnel barrier (tunnel probability $\Gamma$) between A and S (see fig. \[NScircuit\]). There are three spectral vectors, $\vec{n}_{\rm N}$, $\vec{n}_{\rm S}$, and $\vec{n}_{\rm A}$. All spectral vectors lie in one plane. (This holds for any network with a single superconducting terminal.) The resistance of the circuit is given by $R=(h/2Ne^{2})(s+1/\Gamma^{\rm eff})$, with the effective tunnel probability $$\Gamma^{\rm eff}=\Gamma\cos\theta_{\rm AS}=\Gamma\sin\theta.\label{thetaAS}$$ Here $\theta\in[0,\pi/2]$ is the polar angle of $\vec{n}_{\rm A}$. This angle is determined by the sum rule (\[sumrule\]), which in this case takes the form $$\Gamma\cos\theta-\theta/s=0.\label{sumruleAS}$$ Comparison with section 5.2 shows that $\Gamma^{\rm eff}$ coincides with the effective tunnel probability $Q$ of eq. (\[Qdef\]) in the limit $s\gg 1$, i.e. if one restricts oneself to the diffusive regime. That is the basic requirement for the application of the circuit theory.
Let us now consider the “fork junction” of fig. \[forkcircuit\], with one normal terminal (N) and two superconducting terminals ${\rm S}_{1}$ and ${\rm
S}_{2}$ (phases $\phi_{1}\equiv-\phi/2$ and $\phi_{2}\equiv\phi/2$). There is one node (A), which is connected to N by a diffusive conductor ($L/l\equiv s$), and to ${\rm S}_{1}$ and ${\rm S}_{2}$ by tunnel barriers ($\Gamma_{1}$ and $\Gamma_{2}$). This structure was studied theoretically by Hekking and Nazarov [@Hek93] and experimentally by Pothier et al. [@Pot94]. For simplicity, let us assume two identical tunnel barriers $\Gamma_{1}=\Gamma_{2}\equiv\Gamma$. Then the spectral vector $\vec{n}_{\rm
A}=(\sin\theta,0,\cos\theta)$ of node A lies symmetrically between the spectral vectors of terminals ${\rm S}_{1}$ and ${\rm S}_{2}$. The sum rule (\[sumrule\]) now takes the form $$2\Gamma|\cos{\textstyle\frac{1}{2}}\phi|
\cos\theta-\theta/s=0.\label{sumrulefork}$$ Its solution determines the effective tunnel rate $\Gamma^{\rm
eff}=\Gamma|\cos{\textstyle\frac{1}{2}}\phi|\sin\theta$ of each of the two barriers in parallel, and hence the conductance of the fork junction, $$G=\frac{2Ne^{2}}{h}[s+{\textstyle\frac{1}{2}}(\Gamma
|\cos{\textstyle\frac{1}{2}}\phi|\sin\theta)^{-1}]^{-1}.\label{Gfork}$$ Two limiting cases of eqs. (\[sumrulefork\]) and (\[Gfork\]) are $$\begin{aligned}
G&=&(2Ne^{2}/h)(s+{\textstyle\frac{1}{2}}\Gamma^{-1}
|\cos{\textstyle\frac{1}{2}}\phi|^{-1})^{-1},\;\;{\rm
if}\;\;s\Gamma|\cos{\textstyle\frac{1}{2}}\phi|\gg 1,\label{Gforka}\\
G&=&(4Ne^{2}/h)s\Gamma^{2}(1+\cos\phi),\;\;{\rm
if}\;\;s\Gamma|\cos{\textstyle\frac{1}{2}}\phi|\ll 1.\label{Gforkb}\end{aligned}$$ For $\phi=0$ (and $2\Gamma\rightarrow\Gamma$) these expressions reduce to the results (\[asympta\]) and (\[asymptb\]) for an NS junction with a single superconducting reservoir. The limit (\[Gforkb\]) agrees with the finite-temperature result of Hekking and Nazarov [@Hek93], if $s$ is replaced by $L_{\rm c}/l$ and a series resistance is added due to the normal segment which is further than a correlation length from the NS interfaces. The possibility of a dependence of the conductance on the superconducting phase difference was noted also in other theoretical works, for different geometries [@Spi82; @Alt87; @Nak91; @Tak92; @Lam93; @Hui93].
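As an added numerical illustration (the parameter values are arbitrary), the sum rule (\[sumrulefork\]) can be solved by root finding and inserted into eq. (\[Gfork\]). In the regime $s\Gamma|\cos\frac{1}{2}\phi|\ll 1$ the result reproduces the sinusoidal $\phi$-dependence of eq. (\[Gforkb\]):

```python
import numpy as np
from scipy.optimize import brentq

def fork_conductance(Gamma, s, phi):
    """G of the fork junction in units 2Ne^2/h, from eqs. (sumrulefork), (Gfork).
    Two identical barriers Gamma, diffusive segment s = L/l, phase difference phi."""
    c = abs(np.cos(phi / 2))
    if c == 0:
        return 0.0   # effective barrier transparency vanishes at phi = pi
    theta = brentq(lambda t: 2 * Gamma * c * np.cos(t) - t / s, 0.0, np.pi / 2)
    return 1.0 / (s + 0.5 / (Gamma * c * np.sin(theta)))

# limit s*Gamma*|cos(phi/2)| << 1: eq. (Gforkb), G -> 2*s*Gamma^2*(1+cos(phi))
Gamma, s = 1e-3, 10.0
for phi in (0.0, np.pi / 2, 0.9 * np.pi):
    print(fork_conductance(Gamma, s, phi), 2 * s * Gamma**2 * (1 + np.cos(phi)))
```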
The $\phi$-dependence of the conductance of a fork junction has recently been observed by Pothier et al. [@Pot94]. Some of their data is reproduced in fig. \[Pothier\]. The conductance of a Cu wire attached to an oxidized Al fork oscillates as a function of the applied magnetic field. The period corresponds to a flux increment of $h/2e$ through the area enclosed by the fork and the wire, and thus to $\Delta\phi=2\pi$. The experiment is in the regime where the junction resistance is dominated by the tunnel barriers, as in eq. (\[Gforkb\]).[^5] The metal-oxide tunnel barriers in such structures typically have very small transmission probabilities ($\Gamma\simeq 10^{-5}$ in ref. [@Pot94]), so that the regime of eq. (\[Gforka\]) is not easily accessible. Larger $\Gamma$’s can be realized by the Schottky barrier at a semiconductor–superconductor interface. It would be of interest to observe the crossover with increasing $\Gamma$ to the non-sinusoidal $\phi$-dependence predicted by eq. (\[Gforka\]), as a further test of the theory.
Universal conductance fluctuations
==================================
So far we have considered the [*average*]{} of the conductance over an ensemble of impurity potentials. In fig. \[variance\] we show results of numerical simulations [@Mar93] for the [*variance*]{} of the sample-to-sample fluctuations of the conductance, as a function of the average conductance in the normal state. A range of parameters $L,W,l,N$ was used to collect this data, in the quasi-one-dimensional, metallic, diffusive regime $l<W<L<Nl$. An ideal NS interface was assumed ($\Gamma=1$). The results for ${\rm Var\,}G_{\rm N}$ are as expected theoretically [@Sto91; @Mel91] for “universal conductance fluctuations” (UCF): $${\rm Var\,}G_{\rm N}=\frac{8}{15}\,\beta^{-1}(e^{2}/h)^{2}.\label{UCFN}$$ The index $\beta$ equals 1 in the presence and 2 in the absence of time-reversal symmetry. The $1/\beta$ dependence of ${\rm Var\,}G_{\rm N}$ implies that the variance of the conductance fluctuations is reduced by a factor of two upon application of a magnetic field, as observed in the simulation (see the two dotted lines in the lower part of fig. \[variance\]). The data for ${\rm Var\,}G_{\rm NS}$ at $B=0$ shows approximately a four-fold increase over ${\rm Var\,}G_{\rm N}$. For $B\neq 0$, the simulation shows that ${\rm Var\,}G_{\rm NS}$ is essentially [*unaffected*]{} by a time-reversal-symmetry breaking magnetic field. In contrast to the situation in the normal state, the theory for UCF in an NS junction is quite different for zero and for non-zero magnetic field, as we now discuss.
In zero magnetic field, the conductance of the NS junction is given by eq. (\[keyzero\]), which is an expression of the form $A=\sum_{n}a(T_{n})$. Such a quantity $A$ is called a [*linear statistic*]{} on the transmission eigenvalues. The word “linear” refers to the fact that $A$ does not contain products of different $T_{n}$’s. The function $a(T)$ may well depend non-linearly on $T$, as it does for $G_{\rm NS}$, where $a(T)$ is a rational function of $T$. The Landauer formula (\[Landauer\]) for the normal-state conductance is also a linear statistic, with $a(T)\propto T$. It is a general theorem in random-matrix theory [@Bee93] that the variance of a linear statistic has a $1/\beta$ dependence on the symmetry index $\beta$. Moreover, the magnitude of the variance is independent of the microscopic properties of the system (sample size, degree of disorder). This is Imry’s fundamental explanation for UCF [@Imr86].
For a wire geometry, there exists a formula for the variance of an arbitrary linear statistic [@Mac94; @Bee93b; @Cha94], $$\begin{aligned}
{\rm Var}\,A=-\frac{1}{2\beta\pi^{2}}\int_{0}^{1}
\!\!dT\int_{0}^{1}
\!\!dT'\left(\frac{da(T)}{dT}\right)
\left(\frac{da(T')}{dT'}\right)\nonumber\\
\times\ln\left(\frac{1+\pi^{2}[x(T)+x(T')]^{-2}}
{1+\pi^{2}[x(T)-x(T')]^{-2}}\right),
\label{VarAresult}\end{aligned}$$ where $x(T)={\rm arccosh}\,T^{-1/2}$. In the normal state, substitution of $a(T)=(2e^{2}/h)T$ into eq. (\[VarAresult\]) reproduces the result (\[UCFN\]). In the NS junction, substitution of $a(T)=(4e^{2}/h)T^{2}(2-T)^{-2}$ yields, for the case $\beta=1$ of zero magnetic field, $${\rm Var\,}G_{\rm NS}=\frac{32}{15}(2-90\pi^{-4})(e^{2}/h)^{2}=4.30\,{\rm
Var}\,G_{\rm N}.\label{UCFNS}$$ A factor of four between ${\rm Var\,}G_{\rm NS}$ and ${\rm Var\,}G_{\rm N}$ was estimated by Takane and Ebisawa [@Tak92b], by an argument similar to that which we described in section 4 for the weak-localization correction. (A diagrammatic calculation by the same authors [@Tak91] gave a factor of six, presumably because only the dominant diagram was included.) The numerical data in fig. \[variance\] is within 10 % of the theoretical prediction (\[UCFNS\]) (upper dotted line). Similar numerical results for ${\rm
Var\,}G_{\rm NS}$ in zero magnetic field were obtained in refs. [@Tak92b; @Bru94].
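Equation (\[VarAresult\]) can also be evaluated numerically; the logarithmic singularity at $T=T'$ is integrable, so adaptive quadrature suffices. The Python sketch below (an added check, with conductances in units of $e^{2}/h$) reproduces eqs. (\[UCFN\]) and (\[UCFNS\]) for $\beta=1$:

```python
import numpy as np
from scipy.integrate import dblquad

def variance(da, beta=1):
    """Var A from eq. (VarAresult), for a linear statistic with derivative da(T).
    Conductances measured in units of e^2/h."""
    def x(T):
        return np.arccosh(T ** -0.5)
    def kernel(Tp, T):
        xs, xd = x(T) + x(Tp), x(T) - x(Tp)
        # tiny offset guards against an exact hit on the T = T' singularity
        return da(T) * da(Tp) * np.log((1 + np.pi**2 / xs**2) /
                                       (1 + np.pi**2 / (xd**2 + 1e-300)))
    integral, _ = dblquad(kernel, 0, 1, 0, 1)
    return -integral / (2 * beta * np.pi**2)

var_N = variance(lambda T: 2.0)                      # a(T) = 2T  (Landauer)
var_NS = variance(lambda T: 16 * T / (2 - T) ** 3)   # a(T) = 4T^2/(2-T)^2
print(var_N, 8 / 15)                                 # eq. (UCFN), beta = 1
print(var_NS, (32 / 15) * (2 - 90 / np.pi**4))       # eq. (UCFNS)
print(var_NS / var_N)                                # ratio ~ 4.30
```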
We conclude that UCF in zero magnetic field is basically the same phenomenon for $G_{\rm N}$ and $G_{\rm NS}$, because both quantities are linear statistics for $\beta=1$. If time-reversal symmetry (TRS) is broken by a magnetic field, the situation is qualitatively different. For $G_{\rm N}$, broken TRS does not affect the universality of the fluctuations, but merely reduces the variance by a factor of two. No such simple behavior is to be expected for $G_{\rm NS}$, since it is no longer a linear statistic for $\beta=2$. That is a crucial distinction between eq. (\[key\]) for $G_{\rm NS}$ and the Landauer formula (\[Landauer\]) for $G_{\rm N}$, which remains a linear statistic regardless of whether TRS is broken or not. This expectation [@Bee92] of an anomalous $\beta$-dependence of ${\rm Var}\,G_{\rm NS}$ was borne out by numerical simulations [@Mar93], which showed that the conductance fluctuations in an NS junction without TRS remain independent of disorder, and of approximately the same magnitude as in the presence of TRS (compare $+$ and $\times$ data points in the upper part of fig. \[variance\]). An analytical theory remains to be developed.
Shot noise
==========
The conductance, which we studied in the previous sections, is the [*time-averaged*]{} current $I$ divided by the applied voltage $V$. Time-dependent fluctuations $\delta I(t)$ in the current give additional information on the transport processes. The zero-frequency noise power $P$ is defined by $$P=4\int_{0}^{\infty}{\rm d}t\,\langle\delta I(t)\delta I(0)\rangle.\label{Pdef}$$ At zero temperature, the discreteness of the electron charge is the only source of fluctuations in time of the current. These fluctuations are known as “shot noise”, to distinguish them from the thermal noise at non-zero temperature. A further distinction between the two is that the shot-noise power is proportional to the applied voltage, whereas the thermal noise does not vanish at $V=0$. Shot noise is therefore an intrinsically non-equilibrium phenomenon. If the transmission of an elementary charge $e$ can be regarded as a sequence of uncorrelated events, then $P=2e|I|\equiv P_{\rm Poisson}$ as in a Poisson process. In this section we discuss, following ref. [@Jon94], the enhancement of shot noise in an NS junction. The enhancement originates from the fact that the current in the superconductor is carried by Cooper pairs in units of $2e$. However, as we will see, a simple factor-of-two enhancement applies only in certain limiting cases.
In the normal state, the shot-noise power (at zero temperature and infinitesimal applied voltage) is given by [@But90b] $$P_{\rm N}=P_{0}{\rm Tr}\,tt^{\dagger}(1-tt^{\dagger})
=P_{0}\sum_{n=1}^{N}T_{n}(1-T_{n}),\label{e2}$$ with $P_{0}\equiv 2e|V|(2e^{2}/h)$. Equation (\[e2\]) is the multi-channel generalization of earlier single-channel formulas [@Khl87; @Les89]. It is a consequence of the Pauli principle that closed ($T_{n}=0$) as well as open ($T_{n}=1$) scattering channels do not fluctuate and therefore give no contribution to the shot noise. In the case of a tunnel barrier, all transmission eigenvalues are small ($T_{n}\ll 1$, for all $n$), so that the quadratic terms in eq. (\[e2\]) can be neglected. Then it follows from comparison with eq. (\[Landauer\]) that $P_{\rm N}=2e|V|G_{\rm
N}=2e|I|=P_{\rm Poisson}$. In contrast, for a quantum point contact $P_{\rm
N}\ll P_{\rm Poisson}$. Since on the plateaus of quantized conductance all the $T_{n}$’s are either 0 or 1, the shot noise is expected to be observable only at the steps between the plateaus [@Les89]. For a diffusive conductor of length $L$ much longer than the elastic mean free path $l$, the shot noise $P_{\rm N}=\frac{1}{3}P_{\rm Poisson}$ is one-third the Poisson noise, as a consequence of noiseless open scattering channels.
The analogue of eq. (\[e2\]) for the shot-noise power of an NS junction is [@Jon94] $$P_{\rm NS}=4P_{0} {\rm Tr}\,
s_{\rm he}^{\vphantom{\dagger}}s_{\rm he}^{\dagger}
(1-s_{\rm he}^{\vphantom{\dagger}}s_{\rm he}^{\dagger})=P_{0}\sum_{n=1}^{N}
\frac{16T_{n}^{2}(1 - T_{n})}{(2-T_{n})^{4}},\label{e17}$$ where we have used eq. (\[she\]) (with $\varepsilon=0$) to relate the scattering matrix $s_{\rm he}$ for Andreev reflection to the transmission eigenvalues $T_{n}$ of the normal region. This requires zero magnetic field. As in the normal state, scattering channels which have $T_{n}=0$ or $T_{n}=1$ do not contribute to the shot noise. However, the way in which partially transmitting channels contribute is entirely different from the normal state result (\[e2\]).
Consider first an NS junction without disorder, but with an arbitrary transmission probability $\Gamma$ per mode of the interface. In the normal state, eq. (\[e2\]) yields $P_{\rm N}=(1-\Gamma)P_{\rm Poisson}$, implying full Poisson noise for a high tunnel barrier ($\Gamma\ll 1$). For the NS junction we find from eq. (\[e17\]) $$P_{\rm NS}=P_{0}N\frac{16\Gamma^{2}(1-\Gamma)}{(2-\Gamma)^{4}}=
\frac{8(1-\Gamma)}{(2-\Gamma)^{2}}P_{\rm Poisson},\label{e18}$$ where in the second equality we have used eq. (\[keyzero\]). This agrees with results obtained by Khlus [@Khl87], and by Muzykantskiĭ and Khmel’nitskiĭ [@Muz94], using different methods. If $\Gamma<2(\sqrt{2}-1)\approx 0.83$, one observes a shot noise above the Poisson noise. For $\Gamma\ll 1$ one has $$P_{\rm NS}=4e|I|=2 P_{\rm Poisson},\label{e19}$$ which is a doubling of the shot-noise power divided by the current with respect to the normal-state result. This can be interpreted as uncorrelated current pulses of $2e$-charged particles.
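A quick numerical check of eq. (\[e18\]) (an added illustration): the ratio $P_{\rm NS}/P_{\rm Poisson}$ decreases monotonically with $\Gamma$, crosses unity at $\Gamma=2(\sqrt{2}-1)$, and approaches the doubled value (\[e19\]) for $\Gamma\ll 1$:

```python
import numpy as np
from scipy.optimize import brentq

def fano_NS(Gamma):
    """P_NS / P_Poisson for a ballistic NS tunnel junction, eq. (e18)."""
    return 8 * (1 - Gamma) / (2 - Gamma) ** 2

# crossover transparency below which the shot noise exceeds the Poisson value
Gamma_c = brentq(lambda g: fano_NS(g) - 1, 0.01, 0.99)
print(Gamma_c, 2 * (np.sqrt(2) - 1))   # both ~ 0.828

print(fano_NS(1e-4))                   # approaches 2: doubled noise, eq. (e19)
```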
Consider next an NS junction with a disordered normal region, but with an ideal interface ($\Gamma=1$). We may then apply the formula (\[avf\]) for the average of a linear statistic on the transmission eigenvalues to eqs. (\[keyzero\]) and (\[e17\]). The result is $$\frac{\left\langle P_{\rm NS}\right\rangle}
{\left\langle G_{\rm NS}\right\rangle}=
\frac{2}{3}\,\frac{P_0}{2e^{2}/h}\;\Rightarrow\;\left\langle P_{\rm
NS}\right\rangle=
{\textstyle\frac{4}{3}}e|I|={\textstyle\frac{2}{3}}P_{\rm Poisson}.
\label{e20a}$$ Equation (\[e20a\]) is twice the result in the normal state, but still smaller than the Poisson noise. Corrections to (\[e20a\]) are of lower order in $N$ and due to quantum-interference effects [@Jon92].
Finally, consider an NS junction which contains a disordered normal region (length $L$, mean free path $l$) as well as a non-ideal interface. The scaling theory of section 5.2 has been applied to this problem in ref. [@Jon94]. Results are shown in fig. \[shotnoise\], where $\langle P_{\rm NS}\rangle/P_{\rm Poisson}$ is plotted against $\Gamma L/l$ for various $\Gamma$. Note the crossover from the ballistic result (\[e18\]) to the diffusive result (\[e20a\]). For a high barrier ($\Gamma \ll 1$), the shot noise decreases from twice the Poisson noise to two-thirds the Poisson noise as the amount of disorder increases.
Conclusion
==========
We have reviewed a scattering approach to phase-coherent transport across the interface between a normal metal and a superconductor. For the reflectionless-tunneling phenomenon, complete equivalence to the non-equilibrium Green’s function approach has been demonstrated. (The other effects we discussed have so far mainly been treated in the scattering approach.) Although mathematically equivalent, the physical picture offered by the two approaches is quite different. We chose to focus on the scattering approach because it makes direct contact with the quantum interference effects studied extensively in the normal state. The same techniques used for weak localization and universal conductance fluctuations in normal conductors could be used to study the modifications by Andreev reflection in an NS junction.
In the limit of zero voltage, zero temperature, and zero magnetic field, the transport properties of the NS junction are determined entirely by the transmission eigenvalues $T_{n}$ of the normal region. A scaling theory for the distribution of the $T_{n}$’s then allows one to obtain analytical results for the mean and variance of any observable of the form $A=\sum_{n}a(T_{n})$. The conductance is of this form, as well as the shot-noise power. The only difference with the normal state is the functional form of $a(T)$ (polynomial in the normal state, rational function for an NS junction), so that the general results of the scaling theory \[valid for any function $a(T)$\] can be applied at once. At finite $V$, $T$, or $B$, one needs the entire scattering matrix of the normal region, not just the transmission eigenvalues. This poses no difficulty for a numerical calculation, as we have shown in several examples. However, analytical progress using the scattering approach becomes cumbersome, and a diagrammatic Green’s function calculation is more efficient.
[*Note added February 1995:*]{} The theory of section 4 has been extended to non-zero voltage and magnetic field by P. W. Brouwer and the author (submitted to Phys. Rev. B). The results are $\delta
G_{\rm NS}(V=0,B\neq 0)=\frac{1}{3}e^{2}/h$, $\delta G_{\rm NS}(V\neq
0,B=0)=\frac{2}{3}e^{2}/h$, $\delta G_{\rm NS}(V\neq 0,B\neq 0)=0$. The disagreement with the numerical simulations discussed in section 4 is due to an insufficiently large system size.
Acknowledgments {#acknowledgments .unnumbered}
===============
It is a pleasure to acknowledge my collaborators in this research: R. A. Jalabert, M. J. M. de Jong, I. K. Marmorkos, J. A. Melsen, and B. Rejaei. Financial support was provided by the “Nederlandse organisatie voor Wetenschappelijk Onderzoek” (NWO), by the “Stichting voor Fundamenteel Onderzoek der Materie” (FOM), and by the European Community. I have greatly benefitted from the insights of Yu. V. Nazarov. Permission to reproduce the experimental figs. \[Lenssen\], \[Kastalsky\], and \[Pothier\] was kindly given by K.-M. H. Lenssen, A. W. Kleinsasser, and H. Pothier, respectively.
A. F. Andreev, Zh. Eksp. Teor. Fiz. [**46**]{}, 1823 (1964) \[Sov. Phys. JETP [**19**]{}, 1228 (1964)\]. C. W. J. Beenakker and H. van Houten, Solid State Phys. [**44**]{}, 1 (1991). C. J. Lambert, J. Phys. Condens. Matter [**5**]{}, 707 (1993). C. J. Lambert, V. C. Hui, and S. J. Robinson, J. Phys.Condens. Matter [**5**]{}, 4187 (1993). V. T. Petrashov, V. N. Antonov, P. Delsing, and T. Claeson, Phys. Rev. Lett. [**70**]{}, 347 (1993). T. M. Klapwijk, Physica B [**197**]{}, 481 (1994). C. W. J. Beenakker, Phys. Rev. Lett. [**67**]{}, 3836 (1991); [**68**]{}, 1442(E) (1992). C. W. J. Beenakker, in: [*Transport Phenomena in Mesoscopic Systems*]{}, ed. by H. Fukuyama and T. Ando (Springer, Berlin, 1992). The contribution of scattering inside the superconductor to the resistance of an NS junction has been studied extensively, see for example: A. B. Pippard, J. G. Shepherd, and D. A. Tindall, Proc. R. Soc. London A [**324**]{}, 17 (1971); A. Schmid and G. Schön, J. Low Temp. Phys. [**20**]{}, 207 (1975); T. Y. Hsiang and J. Clarke, Phys. Rev. B [**21**]{}, 945 (1980). P. G. de Gennes, [*Superconductivity of Metals and Alloys*]{} (Benjamin, New York, 1966). K. K. Likharev, Rev. Mod. Phys. [**51**]{}, 101 (1979). G. E. Blonder, M. Tinkham, and T. M. Klapwijk, Phys. Rev. B [**25**]{}, 4515 (1982). C. J. Lambert, J. Phys. Condens. Matter [**3**]{}, 6579 (1991). Y. Takane and H. Ebisawa, J. Phys. Soc. Jpn. [**61**]{}, 1685 (1992). C. W. J. Beenakker, Phys. Rev. B [**46**]{}, 12841 (1992). A. L. Shelankov, Fiz. Tverd. Tela [**26**]{}, 1615 (1984) \[Sov. Phys. Solid State [**26**]{}, 981 (1984)\]. A. V. Zaĭtsev, Zh. Eksp. Teor. Fiz. [**86**]{}, 1742 (1984) \[Sov. Phys. JETP [**59**]{}, 1015 (1984)\]. H. van Houten and C. W. J. Beenakker, Physica B [**175**]{}, 187 (1991). A. V. Zaĭtsev, Zh. Eksp. Teor. Fiz. [**78**]{}, 221 (1980); [**79**]{}, 2016(E) (1980) \[Sov. Phys. JETP [**51**]{}, 111 (1980); [**52**]{}, 1018(E) (1980)\]. M. Büttiker, Phys. Rev. 
B [**41**]{}, 7906 (1990). M. Büttiker, IBM J. Res. Dev. [**32**]{}, 63 (1988). L. I. Glazman and K. A. Matveev, Pis’ma Zh. Eksp. Teor.Fiz. [**49**]{}, 570 (1989) \[JETP Lett. [**49**]{}, 659 (1989)\]. C. W. J. Beenakker and H. van Houten, in: [*Single-Electron Tunneling and Mesoscopic Devices*]{}, ed. by H. Koch and H. Lübbig (Springer, Berlin, 1992). V. A. Khlus, A. V. Dyomin, and A. L. Zazunov, Physica C [**214**]{}, 413 (1993). I. A. Devyatov and M. Yu. Kupriyanov, Pis’ma Zh. Eksp.Teor. Fiz. [**52**]{}, 929 (1990) \[JETP Lett. [**52**]{}, 311 (1990)\]. F. W. J. Hekking, L. I. Glazman, K. A. Matveev, and R. I. Shekhter, Phys. Rev. Lett. [**70**]{}, 4138 (1993). O. N. Dorokhov, Solid State Comm. [**51**]{}, 381 (1984). Y. Imry, Europhys. Lett. [**1**]{}, 249 (1986). J. B. Pendry, A. MacKinnon, and P. J. Roberts, Proc. R. Soc.London A [**437**]{}, 67 (1992). Yu. V. Nazarov (to be published). A. F. Andreev, Zh. Eksp. Teor. Fiz. [**51**]{}, 1510 (1966) \[Sov. Phys. JETP [**24**]{}, 1019 (1967)\]. S. N. Artemenko, A. F. Volkov, and A. V. Zaĭtsev, Solid State Comm. [**30**]{}, 771 (1979). Y. Takane and H. Ebisawa, J. Phys. Soc. Jpn. [**61**]{}, 2858 (1992). A. D. Stone, P. A. Mello, K. A. Muttalib, and J.-L. Pichard, in: [*Mesoscopic Phenomena in Solids*]{}, ed. by B. L. Al’tshuler, P. A. Lee, and R. A. Webb (North-Holland, Amsterdam, 1991). P. A. Mello and A. D. Stone, Phys. Rev. B [**44**]{}, 3559 (1991). C. W. J. Beenakker, Phys. Rev. B [**49**]{}, 2205 (1994). A. M. S. Macêdo and J. T. Chalker, Phys. Rev. B [**49**]{}, 4695 (1994). Y. Takane and H. Otani \[J. Phys. Soc. Jpn. (to be published)\] find $\delta G_{\rm NS}=\frac{4}{3}\,e^{2}/h$, in slight disagreement with eq. (\[weaklocaNSN\]). I. K. Marmorkos, C. W. J. Beenakker, and R. A. Jalabert, Phys.Rev. B [**48**]{}, 2811 (1993). K.-M. H. Lenssen, P. C. A. Jeekel, C. J. P. M. Harmans, J. E. Mooij, M. R. Leys, J. H. Wolter, and M. C. 
Holland, in: [*Coulomb and Interference Effects in Small Electronic Structures*]{}, ed. by D. C. Glattli and M. Sanquer (Editions Frontières, to be published). A. Kastalsky, A. W. Kleinsasser, L. H. Greene, R. Bhat, F. P. Milliken, and J. P. Harbison, Phys. Rev. Lett. [**67**]{}, 3026 (1991). C. Nguyen, H. Kroemer, and E. L. Hu, Phys. Rev. Lett. [**69**]{}, 2847 (1992). R. G. Mani, L. Ghenim, and T. N. Theis, Phys. Rev. B [**45**]{}, 12098 (1992). N. Agraït, J. G. Rodrigo, and S. Vieira, Phys. Rev. B [**46**]{}, 5814 (1992). P. Xiong, G. Xiao, and R. B. Laibowitz, Phys. Rev. Lett., 1907 (1993). K.-M. H. Lenssen, L. A. Westerling, P. C. A. Jeekel, C. J. P. M. Harmans, J. E. Mooij, M. R. Leys, W. van der Vleuten, J. H. Wolter, and S. P. Beaumont, Physica B [**194–196**]{}, 2413 (1994). S. J. M. Bakker, E. van der Drift, T. M. Klapwijk, H. M. Jaeger, and S. Radelaar, Phys. Rev. B [**49**]{}, 13275 (1994). P. H. C. Magnée, N. van der Post, P. H. M. Kooistra, B. J. van Wees, and T. M. Klapwijk, Phys. Rev. B (to be published). Y. Takane and H. Ebisawa, J. Phys. Soc. Jpn. [**62**]{}, 1844 (1993). B. J. van Wees, P. de Vries, P. Magnée, and T. M. Klapwijk, Phys. Rev. Lett. [**69**]{}, 510 (1992). Y. Takane and H. Ebisawa, J. Phys. Soc. Jpn. [**61**]{}, 3466 (1992). A. F. Volkov, A. V. Zaĭtsev, and T. M. Klapwijk, Physica C [**210**]{}, 21 (1993). F. W. J. Hekking and Yu. V. Nazarov, Phys. Rev. Lett. [**71**]{}, 1625 (1993); Phys. Rev. B [**49**]{}, 6847 (1994). C. W. J. Beenakker, B. Rejaei, and J. A. Melsen, Phys. Rev.Lett. [**72**]{}, 2470 (1994). A. V. Zaĭtsev, Pis’ma Zh. Eksp. Teor. Fiz. [**51**]{}, 35 (1990) \[JETP Lett. [**51**]{}, 41 (1990)\]; Physica C [**185–189**]{}, 2539 (1991). A. F. Volkov and T. M. Klapwijk, Phys. Lett. A [**168**]{}, 217 (1992). A. F. Volkov, Pis’ma Zh. Eksp. Teor. Fiz. [**55**]{}, 713 (1992) \[JETP Lett. [**55**]{}, 746 (1992)\]; Phys. Lett. A [**174**]{}, 144 (1993). A. L. Shelankov, Pis’ma Zh. Eksp. Teor. Fiz. 
[**32**]{}, 122 (1980) \[JETP Lett. [**32**]{}, 111 (1980)\]. P. A. Mello and J.-L. Pichard, Phys. Rev. B [**40**]{}, 5276 (1989). O. N. Dorokhov, Pis’ma Zh. Eksp. Teor. Fiz. [**36**]{}, 259 (1982) \[JETP Lett. [**36**]{}, 318 (1982)\]. P. A. Mello, P. Pereyra, and N. Kumar, Ann. Phys. [**181**]{}, 290 (1988). J. A. Melsen and C. W. J. Beenakker, Physica B (to be published). D. R. Heslinga, S. E. Shafranjuk, H. van Kempen, and T. M. Klapwijk, Phys. Rev. B [**49**]{}, 10484 (1994). A. F. Volkov, Phys. Lett. A [**187**]{}, 404 (1994). Yu. V. Nazarov (to be published). A. I. Larkin and Yu. N. Ovchinnikov, Zh. Eksp. Teor. Fiz., 1915 (1975); [**73**]{}, 299 (1977) \[Sov. Phys. JETP [**41**]{}, 960 (1975); [**46**]{}, 155 (1977)\]. M. Yu. Kupriyanov and V. F. Lukichev, Zh. Eksp. Teor. Fiz., 139 (1988) \[Sov. Phys. JETP [**67**]{}, 1163 (1988)\]. Yu. V. Nazarov, contribution at the NATO Adv. Res. Workshop on “Mesoscopic Superconductivity” (Karlsruhe, May 1994). H. Pothier, S. Guéron, D. Esteve, and M. H. Devoret (to be published). B. Z. Spivak and D. E. Khmel’nitskiĭ, Pis’ma Zh. Eksp.Teor. Fiz. [**35**]{}, 334 (1982) \[JETP Lett. [**35**]{}, 412 (1982)\]. B. L. Al’tshuler and B. Z. Spivak, Zh. Eksp. Teor. Fiz., 609 (1987) \[Sov. Phys. JETP [**65**]{}, 343 (1987)\]. H. Nakano and H. Takayanagi, Solid State Comm. [**80**]{}, 997 (1991). S. Takagi, Solid State Comm. [**81**]{}, 579 (1992). C. J. Lambert, J. Phys. Condens. Matter [**5**]{}, 707 (1993). V. C. Hui and C. J. Lambert, Europhys. Lett. [**23**]{}, 203 (1993). C. W. J. Beenakker, Phys. Rev. Lett. [**70**]{}, 1155 (1993); Phys. Rev. B [**47**]{}, 15763 (1993). C. W. J. Beenakker and B. Rejaei, Phys. Rev. Lett. [**71**]{}, 3689 (1993); Phys. Rev. B [**49**]{}, 7499 (1994). J. T. Chalker and A. M. S. Macêdo, Phys. Rev. Lett. [**71**]{}, 3693 (1993). Y. Takane and H. Ebisawa, J. Phys. Soc. Jpn. [**60**]{}, 3130 (1991). J. Bruun, V. C. Hui, and C. J. Lambert, Phys. Rev. B [**49**]{}, 4010 (1994). M. J. M. de Jong and C. 
W. J. Beenakker, Phys. Rev. B (to be published). M. Büttiker, Phys. Rev. Lett. [**65**]{}, 2901 (1990); Phys. Rev. B [**46**]{}, 12485 (1992). V. A. Khlus, Zh. Eksp. Teor. Fiz. [**93**]{}, 2179 (1987) \[Sov. Phys. JETP [**66**]{}, 1243 (1987)\]. G. B. Lesovik, Pis’ma Zh. Eksp. Teor. Fiz. [**49**]{}, 513 (1989) \[JETP Lett. [**49**]{}, 592 (1989)\]. C. W. J. Beenakker and M. Büttiker, Phys. Rev. B [**46**]{}, 1889 (1992). K. E. Nagaev, Phys. Lett. A [**169**]{}, 103 (1992). B. A. Muzykantskiĭ and D. E. Khmel’nitskiĭ (to be published). M. J. M. de Jong and C. W. J. Beenakker, Phys. Rev. B [**46**]{}, 13400 (1992).
[^1]: Equation (\[weaklocaNSN\]) follows from the general formula $\delta
A=\frac{1}{4}a(1)+\int_{0}^{\infty}\!{\rm d}x\,(4x^{2}+
\pi^{2})^{-1}a(\cosh^{-2}x)$ for the weak-localization correction in a wire geometry, where $A$ is an arbitrary transport property of the form $A=\sum_{n}a(T_{n})$.
[^2]: The reduction factor $(L_{\rm c}/L)^{2}$ for the size of the conductance dip when $W<L_{\rm c}<L$ is estimated as follows: Consider the wire as consisting of $L/L_{\rm c}$ phase-coherent segments of length $L_{\rm c}$ in series. The first segment, adjacent to the superconductor, has a conductance dip $\delta
G_{1}\simeq e^{2}/h$, while the other segments have no conductance dip. The resistance $R_{1}$ of a single segment is a fraction $L_{\rm c}/L$ of the total resistance $R$ of the wire. Since $\delta G/G=-\delta R/R=-\delta R_{1}/R$ and $\delta R_{1}=-R_{1}^{2}\delta G_{1}\simeq -(L_{\rm c}/L)^{2}R^{2}e^{2}/h$, we find $\delta G\simeq(L_{\rm c}/L)^{2}e^{2}/h$.
[^3]: The crossover is accompanied by an “overshoot” around $eV\approx E_{\rm c}$, indicating the absence of an “excess current” (i.e. the linear $I$–$V$ characteristic for $eV\gg E_{\rm c}$ extrapolates back through the origin). We do not have an analytical explanation for the overshoot.
[^4]: It may happen that $\cos\theta_{12}<0$, in which case the effective tunnel probability is negative. Nazarov has given an example of a four-terminal circuit with $\Gamma^{\rm eff}<0$, so that the current through this barrier flows in the direction opposite to the voltage drop [@Naz94c].
[^5]: Equation (\[Gforkb\]) provides only a qualitative description of the experiment, mainly because the motion in the arms of the fork is diffusive rather than ballistic. This is why the conductance minima in fig.\[Pothier\] do not go to zero. A solution of the diffusion equation in the actual experimental geometry is required for a quantitative comparison with the theory [@Pot94].
---
abstract: 'The degree distribution of an ordered tree $T$ with $n$ nodes is $\vec{n} = (n_0,\ldots,n_{n-1})$, where $n_i$ is the number of nodes in $T$ with $i$ children. Let $\mathcal{N}(\vec{n})$ be the number of trees with degree distribution $\vec{n}$. We give a data structure that stores an ordered tree $T$ with $n$ nodes and degree distribution $\vec{n}$ using $\log \mathcal{N}(\vec{n})+O(n/\log^t n)$ bits for every constant $t$. The data structure answers tree queries in constant time. This improves the current data structures with lowest space for ordered trees: The structure of Jansson et al. \[JCSS 2012\] that uses $\log\mathcal{N}(\vec{n})+O(n\log\log n/\log n)$ bits, and the structure of Navarro and Sadakane \[TALG 2014\] that uses $2n+O(n/\log^t n)$ bits for every constant $t$.'
author:
- 'Dekel Tsur[^1]'
bibliography:
- 'ds.bib'
- 'dekel.bib'
title: Representation of ordered trees with a given degree distribution
---
Introduction
============
A problem which was extensively studied in recent years is designing a succinct data structure that stores a tree while supporting queries on the tree, like finding the parent of a node, or computing the lowest common ancestor of two nodes [@Jacobson89; @MunroR01; @BenoitDMRRR05; @DelprattRR06; @GearyRR06; @GearyRRR06; @GolynskiGGRR07; @RamanRS07; @HeMS12; @JanssonSS12; @MunroRRR12; @FarzanM14; @NavarroS14; @MunroRS01; @RamanR03; @FarzanM11; @ArroyueloDS16; @GuptaHSV07]. The problem of storing a static ordinal tree was studied in [@Jacobson89; @MunroR01; @BenoitDMRRR05; @DelprattRR06; @GearyRR06; @GearyRRR06; @GolynskiGGRR07; @JanssonSS12; @MunroRRR12; @FarzanM14; @NavarroS14]. These papers show that an ordinal tree with $n$ nodes can be stored using $2n+o(n)$ bits while answering queries in constant time. The space of $2n+o(n)$ bits matches the lower bound of $2n-\Theta(\log n)$ bits for this problem. In most of these papers, the $o(n)$ term is $\Omega(n\log\log n/\log n)$. The only exception is the data structure of Navarro and Sadakane [@NavarroS14], which uses $2n+O(n/\log^t n)$ bits for every constant $t$.
Jansson et al. [@JanssonSS12] studied the problem of storing a tree with a given degree distribution. The *degree distribution* of an ordered tree $T$ with $n$ nodes is $\vec{n} = (n_0,\ldots,n_{n-1})$, where $n_i$ is the number of nodes in $T$ with $i$ children. Let $\mathcal{N}(\vec{n})$ be the number of trees with degree distribution $\vec{n}$. Jansson et al. showed a data structure that stores a tree $T$ with degree distribution $\vec{n}$ using $\log\mathcal{N}(\vec{n})+O(n\log\log n/\log n)$ bits, and answers tree queries in constant time. This data structure is based on Huffman code that stores the sequence of node degrees (according to preorder). A different data structure was given by Farzan and Munro [@FarzanM14]. The space complexity of this structure is $\log\mathcal{N}(\vec{n})+O(n\log\log n/\sqrt{\log n})$ bits. The data structure of Farzan and Munro is based on a tree decomposition approach.
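Although the paper only uses $\log\mathcal{N}(\vec{n})$ as a space bound, the count $\mathcal{N}(\vec{n})$ itself has a classical closed form via the cycle lemma, $\mathcal{N}(\vec{n}) = \frac{1}{n}\binom{n}{n_0,n_1,\ldots}$. A short sketch computing it exactly (the function name is mine, not the paper's):

```python
from math import factorial

def num_trees(degree_dist):
    """Exact N(n-vec): the number of ordered trees whose degree distribution
    is degree_dist = (n_0, n_1, ...).  Uses the classical cycle-lemma
    formula N = (1/n) * n! / (n_0! n_1! ...), which requires
    sum(n_i) = n and sum(i * n_i) = n - 1."""
    n = sum(degree_dist)
    assert sum(i * c for i, c in enumerate(degree_dist)) == n - 1
    m = factorial(n)
    for c in degree_dist:
        m //= factorial(c)  # exact: partial quotients are multinomials
    return m // n           # exact by the cycle lemma

# Sanity check: the three-node distributions account for Catalan(2) = 2 trees.
```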
In this paper, we give a data structure that stores a tree $T$ using $\log\mathcal{N}(\vec{n})+O(n/\log^t n)$ bits, for every constant $t$. This result improves both the data structure of Navarro and Sadakane [@NavarroS14] (since $\log \mathcal{N}(\vec{n})\leq 2n$) and the data structure of Jansson et al. [@JanssonSS12]. Our data structure supports many tree queries, which are answered in constant time. See Table \[tab:queries\] for some of the queries supported by our data structure.
Query Description
----------------------------------- ----------------------------------------------------------------------------------
${\mathrm{depth}(x)}$ The depth of $x$.
${\mathrm{height}(x)}$ The height of $x$.
${\mathrm{num\_descendants}(x)}$ The number of descendants of $x$.
${\mathrm{parent}(x)}$ The parent of $x$.
${\mathrm{lca}(x,y)}$ The lowest common ancestor of $x$ and $y$.
${\mathrm{level\_ancestor}(x,i)}$ The ancestor $y$ of $x$ for which ${\mathrm{depth}(y)} = {\mathrm{depth}(x)}-i$.
${\mathrm{degree}(x)}$ The number of children of $x$.
${\mathrm{child\_rank}(x)}$ The rank of $x$ among its siblings.
${\mathrm{child\_select}(x,i)}$ The $i$-th child of $x$.
${\mathrm{pre\_rank}(x)}$ The preorder rank of $x$.
${\mathrm{pre\_select}(i)}$ The $i$-th node in the preorder.
: Some of the tree queries supported by our data structure. \[tab:queries\]
Our data structure is based on two components. The first component is the tree decomposition method of Farzan and Munro [@FarzanM14]. While Farzan and Munro used two levels of decomposition, we use an arbitrarily large constant number of levels. The second component is the aB-tree of Patrascu [@Patrascu08], which is a structure for storing an array of poly-logarithmic size with almost optimal space, while supporting queries on the array in constant time. This structure has been used for storing trees in [@NavarroS14; @Tsur_labeled]. However, in these papers the tree is converted to an array and tree queries are handled using queries on the array. In this paper we give a generalized aB-tree which can directly store an object from some decomposable family of objects. This generalization may be useful for the design of succinct data structures for other problems.
The rest of this paper is organized as follows. In Section \[sec:tree-decomposition\] we describe the tree decomposition of Farzan and Munro [@FarzanM14]. In Section \[sec:aB-tree\] we generalize the aB-tree structure of Patrascu [@Patrascu08]. Finally, in Section \[sec:data-structure\], we describe our data structure for ordinal trees.
Tree decomposition {#sec:tree-decomposition}
==================
One component of our data structure is the tree decomposition of Farzan and Munro [@FarzanM14]. In this section we describe a slightly modified version of this decomposition.
\[lem:tree-decomposition\] For a tree $T$ with $n$ nodes and an integer $L$, there is a collection ${\mathcal{D}_{T,L}}$ of subtrees of $T$ with the following properties.
1. \[enu:decomposition-edge\] Every edge of $T$ appears in exactly one tree of ${\mathcal{D}_{T,L}}$.
2. \[enu:decomposition-size\] The size of every tree in ${\mathcal{D}_{T,L}}$ is at most $2L+1$ and at least $2$.
3. \[enu:decomposition-num\] The number of trees in ${\mathcal{D}_{T,L}}$ is $O(n/L)$.
4. \[enu:boundary\] For every $T'\in {\mathcal{D}_{T,L}}$, at most two nodes of $T'$ can appear in other trees of ${\mathcal{D}_{T,L}}$. These nodes are called the *boundary nodes* of $T'$.
5. \[enu:boundary2\] A boundary node of a tree $T'\in {\mathcal{D}_{T,L}}$ can be either a root of $T'$ or a leaf of $T'$. In the latter case the node will be called the *boundary leaf* of $T'$.
6. \[enu:intervals\] For every $T'\in {\mathcal{D}_{T,L}}$, there are at most two maximal intervals $I_1$ and $I_2$ such that a node $x\in T$ is a non-root node of $T'$ if and only if the preorder rank of $x$ is in $I_1 \cup I_2$.
We now describe an algorithm that generates the decomposition of Lemma \[lem:tree-decomposition\] (the algorithm is based on the algorithm of Farzan and Munro with minor changes). The algorithm uses a procedure ${{\mathrm{pack}}(x,x_1,\ldots,x_k)}$ that receives a node $x$ and some children $x_1,\ldots,x_k$ of $x$, where each child $x_i$ has an associated subtree $S_i$ of $T$ that contains $x_i$ and some of its descendants. Each tree $S_i$ has size at most $L-1$. The procedure merges the trees $S_1,\ldots,S_k$ into larger trees as follows.
1. For each $i$, add the node $x$ to $S_i$, and make $x_i$ the child of $x$.
2. $i\gets 1$.
3. Merge the tree $S_i$ with the trees $S_{i+1},S_{i+2},\ldots$ (by merging their roots) and stop when the merged tree has at least $L$ nodes, or when there are no more children of $x$.
4. Let $S_j$ be the last tree merged with $S_i$. If $j<k$, set $i\gets j+1$ and go to step 3.
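The grouping performed by ${\mathrm{pack}}$ can be sketched as follows. This toy version operates only on the sizes of the associated subtrees and omits attaching the shared root $x$ to every group (function name and representation are my own):

```python
def pack_groups(sizes, L):
    """Index groups produced by the greedy merging in pack: scan the
    subtree sizes S_1, ..., S_k in order and close a group as soon as its
    total reaches L (each size is assumed < L, so totals stay below 2L).
    Attaching the shared root x to every group is omitted here."""
    groups, current, total = [], [], 0
    for i, s in enumerate(sizes):
        current.append(i)
        total += s
        if total >= L:
            groups.append(current)
            current, total = [], 0
    if current:                # trailing group may be smaller than L
        groups.append(current)
    return groups
```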
We say that a node $x\in T$ is *heavy* if $|{T\langle x\rangle}| \geq L$, where ${T\langle x\rangle}$ is the subtree of $T$ that contains $x$ and all its descendants. A heavy node is *type 2* if it has at least two heavy children, and otherwise it is *type 1*.
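A minimal sketch of the heavy-node classification, assuming the tree is given as a dictionary mapping each node to its ordered list of children (this representation is my choice, not the paper's):

```python
def classify_heavy(children, root, L):
    """children: dict node -> ordered list of children.
    Returns (size, heavy, type2): the subtree sizes |T<x>|, the set of
    heavy nodes (|T<x>| >= L), and the set of type-2 heavy nodes (heavy
    nodes with at least two heavy children); other heavy nodes are type 1."""
    size, heavy, type2 = {}, set(), set()
    order, stack = [], [root]
    while stack:                       # iterative traversal; reversed order
        x = stack.pop()                # visits children before parents
        order.append(x)
        stack.extend(children.get(x, []))
    for x in reversed(order):
        size[x] = 1 + sum(size[c] for c in children.get(x, []))
        if size[x] >= L:
            heavy.add(x)
            if sum(1 for c in children.get(x, []) if c in heavy) >= 2:
                type2.add(x)
    return size, heavy, type2
```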
The decomposition algorithm has two phases. In the first phase the algorithm processes the type 2 heavy nodes. Let $x$ be a type 2 heavy node and let $x_1,\ldots,x_k$ be the children of $x$. Suppose that the heavy nodes among $x_1,\ldots,x_k$ are $x_{h_1},\ldots,x_{h_{k'}}$, where $h_1 < \cdots < h_{k'}$. The algorithm adds to ${\mathcal{D}_{T,L}}$ the following trees.
1. A subtree whose nodes are $x$ and the parent of $x$ (if $x$ is not the root of $T$).
2. For $i=1,\ldots,k'$, a subtree whose nodes are $x$ and $x_{h_i}$.
3. For $i=1,\ldots,k'+1$, the subtrees generated by ${{\mathrm{pack}}(x,x_{h_{i-1}+1},\ldots,x_{h_i-1})}$, where the subtree associated with each $x_j$ is ${T\langle x_j\rangle}$. We assume here that $h_0 = 0$ and $h_{k'+1} = k+1$.
In the second phase, the algorithm processes maximal paths of type 1 heavy nodes. Let $x_1,\ldots,x_k$ be a maximal path of type 1 heavy nodes ($x_i$ is the child of $x_{i-1}$ for all $i$). If $x_k$ has a heavy child, denote this child by $x'$. Let $S$ be a subtree of $T$ containing $x_1$ and its descendants, except $x'$ and its descendants if $x'$ exists. Let $i$ be the maximal index such that $|{S\langle x_i\rangle}| \geq L$. If no such index exists, $i=1$. Now, run ${{\mathrm{pack}}(x_i,y_1,\ldots,y_d)}$, where $y_1,\ldots,y_d$ are the children of $x_i$ in $S$. The subtree associated with each $y_j$ is ${S\langle y_j\rangle}$. Each tree generated by procedure ${\mathrm{pack}}$ is added to ${\mathcal{D}_{T,L}}$. If $i>1$, add to ${\mathcal{D}_{T,L}}$ the subtree whose nodes are $\{x_i,x_{i-1}\}$, and continue recursively on the path $x_1,\ldots,x_{i-1}$.
For a tree $T$ and an integer $L$ we define a tree ${\mathcal{T}_{T,L}}$ as follows. The tree ${\mathcal{T}_{T,L}}$ has a node $v_S$ for every tree $S \in {\mathcal{D}_{T,L}}$, and a node $v_r$ which is the root of the tree. For two trees $S_1,S_2\in{\mathcal{D}_{T,L}}$, $v_{S_1}$ is the parent of $v_{S_2}$ in ${\mathcal{T}_{T,L}}$ if and only if the root of $S_2$ is the boundary leaf of $S_1$. The node $v_r$ is the parent of $v_S$ for every $S\in{\mathcal{D}_{T,L}}$ such that the root of $S$ is the root of $T$.
\[obs:heavy\] For every tree $S\in {\mathcal{D}_{T,L}}$, if $v_S$ is a leaf of ${\mathcal{T}_{T,L}}$, the only node of $S$ that is a heavy node of $T$ is the root of $S$. Otherwise, the set of nodes of $S$ that are heavy nodes of $T$ consists of all the nodes on the path from the root of $S$ to the boundary leaf of $S$.
Generalized aB-trees {#sec:aB-tree}
====================
In this section we describe the aB-tree (augmented B-tree) structure of Patrascu [@Patrascu08], and then generalize it. An *aB-tree* is a data structure that stores an array with elements from a set $\Sigma$. Let ${\mathcal{A}}$ be the set of all such arrays. Let $B$ be an integer (not necessarily constant), and let ${f}\colon {\mathcal{A}}\to \Phi$ be a function that has the following property: There is a function ${f'}\colon \mathbb{N}\times \Phi^B \to \Phi$ such that for every array $A\in {\mathcal{A}}$ whose size is divisible by $B$, ${f}(A) = {f'}(|A|,{f}(A_1),\ldots,{f}(A_B))$, where $A = A_1 \cdots A_B$ is a partition of $A$ into $B$ equal-sized sub-arrays.
Let $A\in {\mathcal{A}}$ be an array of size $m=B^t$. An *aB-tree* of $A$ is a $B$-ary tree defined as follows. The root $r$ of the tree stores ${f}(A)$. The array $A$ is partitioned into $B$ sub-arrays of size $m/B$, and we recursively build aB-trees for these sub-arrays. The $B$ roots of these trees are the children of $r$. The recursion stops when the sub-array has size 1. An aB-tree supports queries on $A$ using the following algorithm. The algorithm performs a descent in the aB-tree, starting at the root. At each node $v$, the algorithm decides to which child of $v$ to go by examining the ${f}$ values stored at the children of $v$. We assume that if these values are packed into one word, the decision is performed in constant time. When a leaf is reached, the algorithm returns the answer to the query. Let ${\mathcal{N}(n,\alpha)}$ denote the number of arrays $A\in {\mathcal{A}}$ of size $n$ with ${f}(A)=\alpha$. The following theorem is the main result in [@Patrascu08].
\[thm:aB-tree\] If $B=O(w/\log(|A|+|\Phi|))$ (where $w\geq \log n$ is the word size), the aB-tree of an array $A$ can be stored using at most $\log{\mathcal{N}(|A|,{f}(A))}+2$ bits. The time for performing a query is $O(\log_B |A|)$ using pre-computed tables of size $O(|\Sigma|+|\Phi|^{B+1}+B\cdot|\Phi|^B)$.
We note that the value ${f}(A)$ is required in order to answer queries, and the space for storing this value is not included in the bound $\log{\mathcal{N}(|A|,{f}(A))}+2$ of the theorem.
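To make the descent concrete, here is a toy (and deliberately not space-optimal) instance with $B=2$: arrays over $\Sigma=\{0,1\}$, ${f}(A)$ = (length, number of ones), and a select-style query that locates the $k$-th one by examining only the children's ${f}$ values at each level. The array length is assumed to be a power of $B$; all names are mine:

```python
B = 2  # branching factor of this toy aB-tree

def build(A):
    """Build the toy aB-tree of a 0/1 array A (len(A) a power of B).
    Each node stores f = (length, number of ones) of its sub-array."""
    if len(A) == 1:
        return ((1, A[0]), None)
    step = len(A) // B
    kids = [build(A[i * step:(i + 1) * step]) for i in range(B)]
    return ((len(A), sum(f[1] for f, _ in kids)), kids)

def select(node, k):
    """0-based position of the k-th one (k >= 1), found by a root-to-leaf
    descent that inspects only the f values stored at the children."""
    (length, ones), kids = node
    assert 1 <= k <= ones
    pos = 0
    while kids is not None:
        for f, ckids in kids:
            if k <= f[1]:          # the k-th one lies in this child
                kids = ckids
                break
            k -= f[1]              # skip this child's ones ...
            pos += f[0]            # ... and its positions
    return pos
```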
In the rest of this section we generalize Theorem \[thm:aB-tree\]. Let ${\mathcal{A}}$ be a set of objects (for example, ${\mathcal{A}}$ can be a set of ordered trees). As before, assume there is a function ${f}\colon {\mathcal{A}}\to \Phi$. We assume that ${f}(A)$ encodes the size of $A$ (namely, $|A|$ can be computed from $f(A)$). Suppose that there is a *decomposition algorithm* that receives an object $A\in {\mathcal{A}}$ and generates sub-objects $A_1,\ldots,A_B$ (some of these objects can be of size 0) and a value $\beta \in \Phi_2$ which contains the information necessary to reconstruct $A$ from $A_1,\ldots,A_B$. Formally, we denote by ${{\mathrm{Decompose}}(A)} = (\beta,A_1,\ldots,A_B)$ the output of the decomposition algorithm. We also define a function ${g}\colon {\mathcal{A}}\to \Phi_2$ by ${g}(A) = \beta$ and functions ${f}_i\colon {\mathcal{A}}\to \Phi$ by ${f}_i(A) = {f}(A_i)$. Let ${\mathcal{F}}= \{(g(A),f_1(A),\ldots,f_B(A)) \colon A\in {\mathcal{A}}\}$. We assume that the decomposition algorithm has the following properties.
1. \[enu:first\] \[enu:join-partition\] There is a function ${\mathrm{Join}}\colon \Phi_2\times {\mathcal{A}}^B \to {\mathcal{A}}$ such that ${{\mathrm{Join}}({{\mathrm{Decompose}}(A)})} = A$ for every $A\in {\mathcal{A}}$.
2. \[enu:partition-join\] ${{\mathrm{Decompose}}({{\mathrm{Join}}(\beta,A_1,\ldots,A_B)})} = (\beta,A_1,\ldots,A_B)$ for every $A_1,\ldots,A_B \in {\mathcal{A}}$ and $\beta\in \Phi_2$ such that $(\beta,{f}(A_1),\ldots,{f}(A_B))\in {\mathcal{F}}$.
3. \[enu:fprime\] There is a function ${f'}\colon {\mathcal{F}}\to \Phi$ such that ${f}(A) = {f'}({g}(A),{f}_1(A),\ldots,{f}_B(A))$ for every $A\in {\mathcal{A}}$.
4. \[enu:size\] There is a constant $\delta\leq B/2$ such that if ${{\mathrm{Decompose}}(A)} = (\beta,A_1,\ldots,A_k)$, then $|A_i| \leq \delta |A|/B$ for all $i$. \[enu:last\]
Let ${\mathcal{N}(\alpha,\beta)}$ denote the number of objects $A\in {\mathcal{A}}$ for which ${f}(A)=\alpha$ and ${g}(A)=\beta$. Let $$\mathcal{X}_{\alpha,\beta} = \{(\vec{\alpha},\vec{\beta}) \colon
\vec{\alpha}=(\alpha_1,\ldots,\alpha_B)\in\Phi^{B},
\vec{\beta}\in\Phi_2^B,
(\beta,\alpha_1,\ldots,\alpha_B)\in\mathcal{F},
{f'}(\beta,\alpha_1,\ldots,\alpha_B)=\alpha
\}.$$
\[lem:N-alpha-beta\] For every $\alpha\in\Phi$ and $\beta\in\Phi_2$, $\sum_{((\alpha_1,\ldots,\alpha_B),(\beta_1,\ldots,\beta_B))\in
\mathcal{X}_{\alpha,\beta}} \prod_{i=1}^B {\mathcal{N}(\alpha_i,\beta_i)}
= {\mathcal{N}(\alpha,\beta)}$.
Let ${\mathcal{A}}_1$ be the set of all tuples $(A_1,\ldots,A_B) \in {\mathcal{A}}^B$ such that $$(({f}(A_1),\ldots,{f}(A_B)),({g}(A_1),\ldots,{g}(A_B)))\in
\mathcal{X}_{\alpha,\beta}.$$ Let ${\mathcal{A}}_2$ be the set of all $A\in {\mathcal{A}}$ such that ${f}(A) = \alpha$ and ${g}(A) = \beta$. We need to show that $|{\mathcal{A}}_1| = |{\mathcal{A}}_2|$. Define a mapping $h$ by $h(A_1,\ldots,A_B) = {{\mathrm{Join}}(\beta,A_1,\ldots,A_B)}$. We will show that $h$ is a bijection from ${\mathcal{A}}_1$ to ${\mathcal{A}}_2$.
Fix $(A_1,\ldots,A_B) \in {\mathcal{A}}_1$ and denote $A={{\mathrm{Join}}(\beta,A_1,\ldots,A_B)}$. By the definition of ${\mathcal{A}}_1$ and $\mathcal{X}_{\alpha,\beta}$, $(\beta,{f}(A_1),\ldots,{f}(A_B))\in{\mathcal{F}}$, and by Property (P\[enu:partition-join\]), ${{\mathrm{Decompose}}(A)}=(\beta,A_1,\ldots,A_B)$. Hence, ${f}_i(A)={f}(A_i)$ for all $i$ and ${g}(A) = \beta$. We have ${f}(A)={f'}({g}(A),{f}_1(A),\ldots,{f}_B(A)) =
{f'}(\beta,{f}(A_1),\ldots,{f}(A_B))=\alpha$, where the first equality follows from Property (P\[enu:fprime\]) and the third equality follows from the definition of $\mathcal{X}_{\alpha,\beta}$. We also showed above that ${g}(A) = \beta$. Therefore, $h(A_1,\ldots,A_B) \in {\mathcal{A}}_2$.
The mapping $h$ is injective due to Property (P\[enu:partition-join\]). We next show that $h$ is surjective. Fix $A\in {\mathcal{A}}_2$. By definition, ${f}(A) = \alpha$ and ${g}(A) = \beta$. Let ${{\mathrm{Decompose}}(A)} = (\beta,A_1,\ldots,A_B)$. By Property (P\[enu:join-partition\]), $h(A_1,\ldots,A_B) = A$, so it remains to show that $(A_1,\ldots,A_B) \in {\mathcal{A}}_1$. By definition, $(\beta,{f}(A_1),\ldots,{f}(A_B)) = (\beta,{f}_1(A),\ldots,{f}_B(A))
\in {\mathcal{F}}$. By Property (P\[enu:fprime\]), ${f'}(\beta,{f}(A_1),\ldots,{f}(A_B))={f}(A)=\alpha$. Therefore, $(A_1,\ldots,A_B) \in {\mathcal{A}}_1$.
We now define a generalization of an aB-tree. A generalized aB-tree of an object $A\in {\mathcal{A}}$ is defined as follows. The root $r$ of the tree stores ${f}(A)$ and ${g}(A)$. Suppose that ${{\mathrm{Decompose}}(A)} = (\beta,A_1,\ldots,A_B)$. Recursively build aB-trees for $A_1,\ldots,A_B$, and the roots of these trees are the children of $r$. The recursion stops when the object has size 1 or 0.
The following theorem generalizes Theorem \[thm:aB-tree\]. The proof of the theorem is very similar to the proof of Theorem \[thm:aB-tree\] and uses Lemma \[lem:N-alpha-beta\] in order to bound the space.
\[thm:aB-tree-2\] If $B=O(w/\log(|\Phi|+|\Phi_2|))$, the generalized aB-tree of an object $A\in {\mathcal{A}}$ can be stored using at most $\log{\mathcal{N}({f}(A),{g}(A))}+2$ bits. The time for performing a query is $O(\log_B |A|)$ using pre-computed tables of size $O(a_1+|\Phi|^B\cdot|\Phi_2|^B\cdot(|\Phi|\cdot|\Phi_2|+B))$, where $a_1$ is the number of objects in ${\mathcal{A}}$ of size $1$.
The data structure {#sec:data-structure}
==================
For a tree $T$ with degree distribution $\vec{n} = (n_0,\ldots,n_{n-1})$ define the tree degree entropy ${H^*(T)} = \frac{1}{n}\sum_{i\colon n_i>0} n_i \log \frac{n}{n_i}$. Since $n{H^*(T)} = \log \mathcal{N}(\vec{n}) + O(\log n)$, it suffices to show a data structure for $T$ that uses $n{H^*(T)}+O(n/\log^t n)$ bits for any constant $t$.
Let $t$ be some constant. Define $B=\log^{1/3} n$ and $L=\log^{t+2}n$. As in [@Patrascu08], define ${e(i)}$ to be $\log \frac{n}{n_i}$ rounded up to a multiple of $1/L$. If $n_i = 0$, ${e(i)}$ is $\log n$ rounded up to a multiple of $1/L$. For a tree $S$ define ${E(S)} = \sum_{i=1}^{|S|} {e({\mathrm{degree}({\mathrm{pre\_select}_{S}(i)})})}$, where ${\mathrm{pre\_select}_{S}(i)}$ is the $i$-th node of $S$ in preorder. Let $\Sigma = \{i\leq n-1\colon n_i > 0 \}$. We say that a tree $S$ is a *$\Sigma$-tree* if for every node $x$ of $S$, except perhaps the root, ${\mathrm{degree}(x)} \in \Sigma$.
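The quantities ${e(i)}$ and ${E(S)}$ can be sketched as follows; to keep ${E(S)}$ exact, the values are represented as integer multiples of $1/L$ (a representation choice of mine, assuming base-2 logarithms):

```python
from math import ceil, log2

def make_e(n, counts, L):
    """e(i) = log(n / n_i) rounded up to a multiple of 1/L, or log n rounded
    up if n_i = 0; returned as an integer number of 1/L units so that sums
    stay exact (counts[i] plays the role of n_i)."""
    def e(i):
        c = counts[i] if i < len(counts) else 0
        return ceil((log2(n / c) if c > 0 else log2(n)) * L)
    return e

def E(degrees, e):
    """E(S): sum of e(degree(x)) over the nodes of S, in units of 1/L."""
    return sum(e(d) for d in degrees)
```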
\[lem:entropy\] For every $m\leq n$ and $a\geq 0$, the number of $\Sigma$-trees $S$ with $m$ nodes and ${E(S)} = a$ is at most $2^{a+1}$.
For a string $A$ over the alphabet $\Sigma$ define ${E(A)} = \sum_{i=1}^{|A|} {e(A[i])}$. Let ${\mathcal{N}(m,a)}$ be the number of strings $A$ over $\Sigma$ of length $m$ with ${E(A)} = a$. We first prove that ${\mathcal{N}(m,a)} \leq 2^a$ using induction on $m$ (we note that this inequality was stated in [@Patrascu08] without a proof). The base case $m=0$ is trivial. We now prove the induction step. Let $A$ be a string of length $m$ with ${E(A)} = a$. Clearly, ${e(A[1])} \leq a$, since otherwise ${E(A)} > a$, contradicting the assumption that ${E(A)} = a$. If we remove $A[1]$ from $A$, we obtain a string $A'$ of length $m-1$ with ${E(A')} = {E(A)}-{e(A[1])}\geq 0$. Therefore, ${\mathcal{N}(m,a)} = \sum_{i\in \Sigma\colon {e(i)} \leq a} {\mathcal{N}(m-1,a-{e(i)})}$. Using the induction hypothesis, we obtain that $${\mathcal{N}(m,a)} \leq \sum_{i\in\Sigma\colon {e(i)} \leq a} 2^{a-{e(i)}}
\leq \sum_{i\in\Sigma} 2^{a-\log \frac{n}{n_i}}
= 2^a \sum_{i\in\Sigma} \frac{n_i}{n} = 2^a.$$
We now bound the number of $\Sigma$-trees with $m$ nodes and ${E(S)} = a$. We say that a $\Sigma$-tree is of *type 1* if the degree of its root is in $\Sigma$, and otherwise the tree is of *type 2*. With every $\Sigma$-tree $S$ we associate a string $A_S$ in which $A_S[i] = {\mathrm{degree}({\mathrm{pre\_select}_{S}(i)})}$. If $S$ is a type 1 $\Sigma$-tree then $A_S$ is a string over the alphabet $\Sigma$ and ${E(A_S)} = {E(S)}$. Therefore, the number of type 1 $\Sigma$-trees $S$ with $m$ nodes and ${E(S)} = a$ is at most ${\mathcal{N}(m,a)} \leq 2^a$. If $S$ is a type 2 $\Sigma$-tree then ${A_S[2..m]}$ is a string over the alphabet $\Sigma$ and ${E({A_S[2..m]})} = {E(S)}-a'$, where $a'$ is $\log n$ rounded up to a multiple of $1/L$. Since there are at most $m$ ways to choose the degree of the root of $S$, it follows that the number of type 2 $\Sigma$-trees $S$ with $m$ nodes and ${E(S)} = a$ is at most $m{\mathcal{N}(m-1,a-a')} \leq n 2^{a-a'} \leq 2^a$.
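The string bound ${\mathcal{N}(m,a)} \leq 2^a$ used in the proof above can be checked by brute force on small parameters (a verification sketch of mine, with $e(i)$ kept in integer units of $1/L$):

```python
from itertools import product
from math import ceil, log2

def check_string_bound(n, counts, L, m):
    """Brute-force check of N(m, a) <= 2^a: enumerate every string of
    length m over Sigma = {i : n_i > 0}, bucket the strings by E(A), and
    compare each bucket size with 2^a.  e(i) is stored as an integer
    number of 1/L units, so the real exponent is a / L."""
    sigma = [i for i, c in enumerate(counts) if c > 0]
    e = {i: ceil(log2(n / counts[i]) * L) for i in sigma}
    buckets = {}
    for A in product(sigma, repeat=m):
        a = sum(e[x] for x in A)
        buckets[a] = buckets.get(a, 0) + 1
    return all(cnt <= 2 ** (a / L) for a, cnt in buckets.items())
```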
To build our data structure on $T$, we first partition $T$ into *macro trees* using the decomposition algorithm of Lemma \[lem:tree-decomposition\] with parameter $L$. On each macro tree $S$ we build a generalized aB-tree as follows.
Let ${\mathcal{A}}$ be the set of all $\Sigma$-trees with at most $2L+1$ nodes, and in which one of the leaves may be designated a boundary leaf. We first describe procedure ${\mathrm{Decompose}}$. For a tree $S\in {\mathcal{A}}$, ${{\mathrm{Decompose}}(S)}$ generates subtrees $S_1,\ldots,S_B$ of $S$ by applying the algorithm of Lemma \[lem:tree-decomposition\] on $S$ with parameter $L(S)=\Theta(|S|/B)$, where the constant hidden in the $\Theta$ notation is chosen such that the number of trees in the decomposition is at most $B$ (such a constant exists due to part \[enu:decomposition-num\] of Lemma \[lem:tree-decomposition\]). This algorithm generates subtrees $S_1,\ldots,S_k$ of $S$, with $k\leq B$. The subtrees $S_1,\ldots,S_k$ are numbered according to the preorder ranks of their roots, and two subtrees with a common root are numbered according to the preorder rank of the first child of the root. If $k<B$ we add empty subtrees $S_{k+1},\ldots,S_B$.
We next describe the mappings ${f}\colon {\mathcal{A}}\to \Phi$ and ${g}\colon {\mathcal{A}}\to \Phi_2$. Recall that ${g}(S)$ is the information required to reconstruct $S$ from $S_1,\ldots,S_B$. In our case, ${g}(S)$ is the balanced parenthesis string of the tree ${\mathcal{T}_{S,L(S)}}$. The number of nodes in ${\mathcal{T}_{S,L(S)}}$ is $k+1$. Since $k\leq B$, ${g}(S)$ is a binary string of length at most $2B+2$. Thus, $|\Phi_2|=O(2^{2B})$. We define ${f}(S)$ to be a vector $({E(S)},{|S|},{s_{S}},{s'_{S}},{s''_{S}},
{d_{S}},{l_{S}},{p_{S}})$ whose components are defined as follows.
- ${s_{S}}=|{S\langle x\rangle}|$, where $x$ is the rightmost child of the root of $S$ (recall that ${S\langle x\rangle}$ is the subtree of $S$ containing $x$ and its descendants).
- ${s'_{S}}=|{S\langle x'\rangle}|$, where $x'$ is the child of the root of $S$ which is on the path between the root of $S$ and the boundary leaf of $S$. If $S$ does not have a boundary leaf, ${s'_{S}}=0$.
- ${s''_{S}}=\max_y |{S\langle y\rangle}|$ where the maximum is taken over every node $y$ of $S$ whose parent is on the path between the root of $S$ and the boundary leaf of $S$, and $y$ is not on this path. If $S$ does not have a boundary leaf, the maximum is taken over all children $y$ of the root of $S$.
- ${d_{S}}$ is the number of children of the root of $S$.
- ${l_{S}}$ is the distance between the root of $S$ and the boundary leaf of $S$. If $S$ does not have a boundary leaf, ${l_{S}} = 0$.
- ${p_{S}}$ is the number of nodes in $S$ that appear before the boundary leaf of $S$ in the preorder of $S$. If $S$ does not have a boundary leaf, ${p_{S}} = 0$.
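As a concrete (and deliberately naive) illustration, the components other than ${E(S)}$ can be computed directly from an explicit tree. The representation below (children lists under preorder numbering, plus an optional boundary-leaf id) and the function names are ours, not the paper's; the actual structure maintains these values bottom-up inside the aB-tree rather than recomputing them:

```python
# Naive computation of the f(S) components for a tree given as children
# lists. Nodes are numbered in preorder starting at 0 (so every child id
# exceeds its parent's); boundary_leaf is the designated boundary leaf
# or None. Illustrative only.

def subtree_sizes(children):
    n = len(children)
    size = [1] * n
    for v in range(n - 1, -1, -1):   # children have larger preorder ids
        for c in children[v]:
            size[v] += size[c]
    return size

def f_components(children, boundary_leaf=None):
    size = subtree_sizes(children)
    root = 0
    d = len(children[root])                   # d_S: degree of the root
    s = size[children[root][-1]] if d else 0  # s_S: rightmost child's subtree
    parent = [None] * len(children)
    for v, cs in enumerate(children):
        for c in cs:
            parent[c] = v
    path = set()                              # root-to-boundary-leaf path
    if boundary_leaf is not None:
        v = boundary_leaf
        while v is not None:
            path.add(v)
            v = parent[v]
    l = len(path) - 1 if path else 0          # l_S: root -> boundary leaf
    s1 = 0                                    # s'_S: path child of the root
    if boundary_leaf is not None:
        for c in children[root]:
            if c in path:
                s1 = size[c]
    # s''_S: largest subtree hanging off the path (or off the root)
    anchors = path if path else {root}
    hang = [size[c] for v in anchors for c in children[v] if c not in path]
    s2 = max(hang) if hang else 0
    # p_S: nodes preceding the boundary leaf in preorder = its preorder id
    p = boundary_leaf if boundary_leaf is not None else 0
    return {'size': len(children), 's': s, "s'": s1, "s''": s2,
            'd': d, 'l': l, 'p': p}
```

For example, for the five-node tree with root 0, children 1 and 4, and grandchildren 2 and 3 under node 1, designating node 3 as the boundary leaf gives $l_S = 2$ and $p_S = 3$.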
We note that the value ${E(S)}$ is required in order to bound the space of the aB-trees. The values ${|S|}$, ${s_{S}}$, ${s'_{S}}$, ${s''_{S}}$, ${d_{S}}$, and ${l_{S}}$ are required in order to satisfy Property (P\[enu:partition-join\]) of Section \[sec:aB-tree\] (see the proof of Lemma \[lem:properties-2\] below), and are also used when answering queries. The value ${p_{S}}$ is needed only for answering queries.
The values ${|S|},{s_{S}},{s'_{S}},{s''_{S}},{d_{S}},
{l_{S}},{p_{S}}$ are integers bounded by $L$. Moreover, ${E(S)}$ is a multiple of $1/L$ and ${E(S)} \leq L(\log n+1/L)=L\log n+1$. Therefore, $|\Phi| = O(L^7 \cdot L^2\log n) = O(L^9 \log n)$. It follows that the condition $B=O(w/\log(|\Phi|+|\Phi_2|))$ of Theorem \[thm:aB-tree-2\] is satisfied (since $B=\log^{1/3} n$ and $w/\log(|\Phi|+|\Phi_2|) = \Omega(w/B) = \Omega(\log^{2/3}n)$). Moreover, the size of the lookup tables of Theorem \[thm:aB-tree-2\] is $O(2^{2B(B+1)})=O(\sqrt{n})$.
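The lookup-table bound $O(2^{2B(B+1)})=O(\sqrt{n})$ is asymptotic: with $B=\log^{1/3} n$, the exponent $2B(B+1)$ only drops below $\frac{1}{2}\log n$ for very large $n$. A quick numeric check (the concrete values of $n$ are purely illustrative):

```python
# Sanity check of the lookup-table bound 2^{2B(B+1)} = O(sqrt(n)) for
# B = log^{1/3} n. We compare exponents (base-2 logs) rather than the
# huge numbers themselves: 2B(B+1) versus (1/2) log2(n).

def exponents(log2_n):
    """Return (lookup-table exponent, sqrt(n) exponent) for a given log2(n)."""
    B = log2_n ** (1.0 / 3.0)
    return 2 * B * (B + 1), 0.5 * log2_n

# For n = 2^1000 the table exponent (220) is well below 500, so the
# tables fit in O(sqrt(n)); for n = 2^64 it is not yet below 32 --
# the inequality holds only asymptotically.
```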
The following lemmas show that Properties (P\[enu:first\])–(P\[enu:last\]) of Section \[sec:aB-tree\] are satisfied.
\[lem:properties-1\] Property (P\[enu:join-partition\]) is satisfied.
We define the function ${\mathrm{Join}}$ as follows. Given a balanced parenthesis string $\beta$ of a tree $S_\beta$ and trees $S_1,\ldots,S_B$, the tree $S = {{\mathrm{Join}}(\beta,S_1,\ldots,S_B)}$ is constructed as follows. For $i=1,\ldots,B$, associate the tree $S_i$ to the node ${\mathrm{pre\_select}_{S_\beta}(i+1)}$. For every internal node $v$ in $S_\beta$, merge the boundary leaf of the tree $S_i$ associated with $v$, and the roots of the trees associated with the children of $v$ (if $v$ is the root of $S_\beta$ just merge the roots of the trees associated with the children of $v$). By definition, ${{\mathrm{Join}}({{\mathrm{Decompose}}(S)})} = S$ for every tree $S$.
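${\mathrm{Join}}$ relies on preorder selection in the balanced parenthesis string $\beta$: the $i$-th node of $S_\beta$ in preorder corresponds to the $i$-th opening parenthesis. A minimal sketch of the two helpers (our naive linear-scan stand-ins for the constant-time succinct operations):

```python
# Minimal balanced-parenthesis helpers (sketch): in a BP string, the
# i-th '(' is the preorder-i node (1-indexed), and a matching pair of
# parentheses delimits that node's subtree.

def pre_select(bp, i):
    """Index of the i-th '(' in bp, i.e. the node with preorder rank i."""
    count = 0
    for pos, ch in enumerate(bp):
        if ch == '(':
            count += 1
            if count == i:
                return pos
    raise ValueError("fewer than i nodes")

def find_close(bp, pos):
    """Position of the ')' matching the '(' at position pos."""
    depth = 0
    for j in range(pos, len(bp)):
        depth += 1 if bp[j] == '(' else -1
        if depth == 0:
            return j
    raise ValueError("unbalanced string")
```

For instance, in `"((()())())"` (a root with two children, the first of which has two leaf children), the node with preorder rank 2 is the parenthesis at index 1, and its subtree ends at the matching parenthesis at index 6.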
\[lem:boundary-size-degree\] Let $S={{\mathrm{Join}}(\beta,S_1,\ldots,S_B)}$. If a node $x\in S$ is a boundary node of some tree $S_i$, the values of $|{S\langlex\rangle}|$ and ${\mathrm{degree}(x)}$ can be computed from $\beta,{f}(S_1),\ldots,{f}(S_B)$.
Let $S_\beta$ be the tree whose balanced parenthesis string is $\beta$. Assume that $x$ is not the root of $S$ (the proof for the case when $x$ is the root is similar). Therefore, $x$ is the boundary leaf of some tree $S_i$. Let $I$ be a set containing every index $j\neq i$ such that ${\mathrm{pre\_select}_{S_\beta}(j+1)}$ is a descendant of ${\mathrm{pre\_select}_{S_\beta}(i+1)}$. Observe that $|{S\langlex\rangle}| = 1+\sum_{j\in I} ({|S_j|}-1)$. Similarly, let $I_2$ be a set containing every index $j$ such that ${\mathrm{pre\_select}_{S_\beta}(j+1)}$ is a child of ${\mathrm{pre\_select}_{S_\beta}(i+1)}$. By part \[enu:decomposition-edge\] of Lemma \[lem:tree-decomposition\], ${\mathrm{degree}(x)} = \sum_{j\in I_2} {d_{S_j}}$. The lemma now follows since $I,I_2$ can be computed from $\beta$ and ${|S_j|},{d_{S_j}}$ are components of ${f}(S_j)$.
\[lem:properties-2\] Property (P\[enu:partition-join\]) is satisfied.
Suppose that $\beta\in \Phi_2$ is a balanced parenthesis string and $S_1,\ldots,S_B \in {\mathcal{A}}$ are trees such that $(\beta,{f}(S_1),\ldots,{f}(S_B))\in {\mathcal{F}}$ (recall that ${\mathcal{F}}= \{(g(S),f_1(S),\ldots,f_B(S)) \colon S\in {\mathcal{A}}\}$). We need to show that ${{\mathrm{Decompose}}({{\mathrm{Join}}(\beta,S_1,\ldots,S_B)})} = (\beta,S_1,\ldots,S_B)$. Denote $S = {{\mathrm{Join}}(\beta,S_1,\ldots,S_B)}$. By the definition of ${\mathcal{F}}$, there is a tree $S^*$ such that ${g}(S^*) = \beta$ and ${f}_i(S^*) = {f}(S_i)$ for all $i$. Denote ${{\mathrm{Decompose}}(S^*)} = (\beta,S^*_1,\ldots,S^*_B)$. Let $S_\beta$ be a tree whose balanced parenthesis string is $\beta$.
By Lemma \[lem:boundary-size-degree\], $|S|=|S^*|$ and therefore $L(S) = L(S^*)$. Recall that a node of $S$ or $S^*$ is heavy if the size of its subtree is at least $L(S)$. Define the *skeleton* of a tree to be the subtree that contains the heavy nodes of the tree. We first claim that the skeleton of $S^*$ can be reconstructed from $\beta,{f}(S^*_1),\ldots,{f}(S^*_B)$. To prove this claim, define trees $P_1,\ldots,P_B$, where $P_i$ is a path of length ${l_{S^*_i}}$. By Observation \[obs:heavy\], the skeleton of $S^*$ is isomorphic to ${{\mathrm{Join}}(\beta,P_1,\ldots,P_B)}$.
We now show that $S$ and $S^*$ have isomorphic skeletons. Consider some subtree $S_i$ such that ${\mathrm{pre\_select}_{S_\beta}(i+1)}$ is not a leaf of $S_\beta$. Let $x$ be the boundary leaf of $S_i$, and let $x^*$ be the boundary leaf of $S^*_i$. By Lemma \[lem:boundary-size-degree\] and Observation \[obs:heavy\], $|{S\langlex\rangle}| = |{S^*\langlex^*\rangle}| \geq L(S^*) = L(S)$, so $x$ is a heavy node of $S$. Therefore, all the nodes of $S$ that are on the path between the root of $S_i$ and the boundary leaf of $S_i$ are heavy nodes of $S$ (this follows from the fact that all ancestors of a heavy node are heavy). Let $S'$ be the subtree of $S$ containing all the nodes of $S$ that are nodes on the path between the root and the boundary leaf of $S_i$, for every $S_i$ such that ${\mathrm{pre\_select}_{S_\beta}(i+1)}$ is not a leaf of $S_\beta$. Since ${l_{S_i}} = {l_{S^*_i}}$ for all $i$, it follows that $S'$ is isomorphic to ${{\mathrm{Join}}(\beta,P_1,\ldots,P_B)}$ and to the skeleton of $S^*$. It remains to show that $S'$ is the skeleton of $S$. Assume, for contradiction, that there is a heavy node $y$ of $S$ which is not in $S'$. We can choose such $y$ whose parent $x$ is in $S'$. Let $S_i$ be the tree containing $y$. Since $y$ is not on the path between the root and the boundary leaf of $S_i$, all the descendants of $y$ are in $S_i$. Since $x$ is on the path between the root and the boundary leaf of $S_i$ (if $S_i$ does not have a boundary leaf, $x$ is the root of $S_i$), ${s''_{S_i}} \geq |{S\langley\rangle}| \geq L(S) = L(S^*)$. It follows that ${s''_{S^*_i}} = {s''_{S_i}} \geq L(S^*)$ which means that $S^*_i$ has a heavy node which is not on the path between the root and the boundary leaf. This contradicts Observation \[obs:heavy\]. Therefore, $S$ and $S^*$ have isomorphic skeletons.
We now prove that ${{\mathrm{Decompose}}(S)} = (\beta,S_1,\ldots,S_B)$. Suppose we run the decomposition algorithm on $S$ and on $S^*$. In its first phase, the algorithm processes type 2 heavy nodes. Since $S$ and $S^*$ have isomorphic skeletons, there is a bijection between the type 2 heavy nodes of $S$ and the type 2 heavy nodes of $S^*$. Let $x$ be a type 2 heavy node of $S$ and let $x^*$ be the corresponding type 2 heavy node of $S^*$. Let $x^*_1,\ldots,x^*_k$ be the children of $x^*$, and let $x^*_{h_1},\ldots,x^*_{h_{k'}}$ be the heavy children of $x^*$. When processing $x^*$, the decomposition algorithm generates the following subtrees of $S^*$.
1. A subtree whose nodes are $x^*$ and its parent.
2. For $j=1,\ldots,k'$, a subtree whose nodes are $x^*$ and $x^*_{h_j}$.
3. For $j=1,\ldots,k'+1$, the subtrees generated by ${{\mathrm{pack}}(x^*,x^*_{h_{j-1}+1},\ldots,x^*_{h_j-1})}$.
For every subtree $S^*_j$ of the first two types above that is generated when processing $x^*$, the subtree $S_j$ is generated when processing $x$ (since the number of heavy children of $x$ is equal to the number of heavy children of $x^*$). We now consider the subtrees of the third type. Suppose without loss of generality that $h_1>1$. Consider the call to ${{\mathrm{pack}}(x^*,x^*_1,\ldots,x^*_{h_1-1})}$. The first tree generated by this call, denoted $S^*_{a}$, consists of $x^*$, some children $x^*_1,\ldots,x^*_l$ of $x^*$, and all the descendants of $x^*_1,\ldots,x^*_l$, where $l = {d_{S^*_{a}}}$. From the definition of procedure ${\mathrm{pack}}$, $\sum_{j=1}^{l-1}|{S^*\langlex^*_j\rangle}| < L(S^*)-1$. Additionally, if $l < h_1-1$, $\sum_{j=1}^{l}|{S^*\langlex^*_j\rangle}| \geq L(S^*)-1$.
Let $I = \{{\mathrm{pre\_rank}(x^*_j)}-1\colon j=1,\ldots,h_1-1\}$. We have that $h_1-1 = \sum_{j\in I} {d_{S^*_j}}$. The number of children of $x$ before the first heavy child of $x$ is equal to $\sum_{j\in I} {d_{S_j}} = \sum_{j\in I} {d_{S^*_j}} = h_1-1$. Let $x_1,\ldots,x_{h_1-1}$ be these children.
Since ${d_{S_{a}}} = {d_{S^*_{a}}} = l$, when the decomposition algorithm processes the node $x$ of $S$ we have $$\sum_{j=1}^{l-1}|{S\langlex_j\rangle}|
= {|S_{a}|}-{s_{S_{a}}} - 1
= {|S^*_{a}|}-{s_{S^*_{a}}} - 1
= \sum_{j=1}^{l-1}|{S^*\langlex^*_j\rangle}|
< L(S^*)-1 = L(S)-1.$$ Additionally, if $l < h_1-1$, $$\sum_{j=1}^{l}|{S\langlex_j\rangle}|
= {|S_{a}|} - 1
= {|S^*_{a}|} - 1
= \sum_{j=1}^{l}|{S^*\langlex^*_j\rangle}|
\geq L(S^*) = L(S).$$ Therefore, the first tree generated by ${{\mathrm{pack}}(x,x_1,\ldots,x_{h_1-1})}$ is $S_{a}$. Continuing with the same arguments, we obtain that for every tree $S^*_j$ generated by a call to ${{\mathrm{pack}}(x^*,\cdot)}$ when processing $x^*$, the tree $S_j$ is generated by a call to ${{\mathrm{pack}}(x,\cdot)}$ when processing $x$. Now consider the second phase of the algorithm. Let $x^*_1,\ldots,x^*_k$ be a maximal path of type 1 heavy nodes of $S^*$, and let $x_1,\ldots,x_k$ be the corresponding maximal path of type 1 heavy nodes of $S$. For simplicity, assume that $x^*_k$ does not have a heavy child. Let $S^*_{a_1},S^*_{a_2},\ldots$ be the subtrees generated by ${{\mathrm{pack}}(x^*_i,y^*_1,y^*_2,\ldots)}$, where $y^*_1,y^*_2,\ldots$ are the children of $x^*_i$. Let $S^*_a$ be the subtree from $S^*_{a_1},S^*_{a_2},\ldots$ that contains $x^*_k$. Let $l = {l_{S^*_a}} = {l_{S_a}}$. By the definition of the decomposition algorithm, ${s'_{S^*_a}} = |{S^*\langlex^*_{k-l+1}\rangle}| < L(S^*)$. Moreover, if $l < k-1$, $1+\sum_j ({|S^*_{a_j}|}-1) = |{S^*\langlex^*_{k-l}\rangle}| \geq L(S^*)$. Therefore, $|{S\langlex_{k-l+1}\rangle}| = {s'_{S_a}} = {s'_{S^*_a}} < L(S)$ and if $l < k-1$, $|{S\langlex_{k-l}\rangle}| = 1+\sum_j ({|S_{a_j}|}-1)
= 1+\sum_j ({|S^*_{a_j}|}-1) \geq L(S)$. Therefore, when processing the path $x_1,\ldots,x_k$, the decomposition algorithm makes a call to ${{\mathrm{pack}}(x_i,y_1,y_2,\ldots)}$, where $y_1,y_2,\ldots$ are the children of $x_i$. Using the same argument used for the first phase of the algorithm, we obtain that the trees $S_{a_1},S_{a_2},\ldots$ are generated by ${{\mathrm{pack}}(x_i,y_1,y_2,\ldots)}$.
\[lem:properties-3\] Property (P\[enu:fprime\]) is satisfied.
Let $S$ be a tree and ${{\mathrm{Decompose}}(S)} = (\beta,S_1,\ldots,S_B)$. Recall that ${f}(S)=({E(S)},{|S|},{s_{S}},{s'_{S}},
{s''_{S}},{d_{S}},{l_{S}},{p_{S}})$ and ${g}(S) = \beta$ is the balanced parenthesis string of ${\mathcal{T}_{S,L(S)}}$. A node $x$ of $S$ is called an *inner boundary node* if it is a boundary node of some subtree $S_i$.
By definition, ${E(S)}$ is equal to $\sum_{i=1}^B ({E(S_i)}-{e({d_{S_i}})})$ plus the sum of ${e({\mathrm{degree}(x)})}$ for every inner boundary node $x$ of $S$. By Lemma \[lem:boundary-size-degree\], every such value ${e({\mathrm{degree}(x)})}$ can be computed from ${g}(S),{f}(S_1),\ldots,{f}(S_B)$. Therefore, ${E(S)}$ can be computed from ${g}(S),{f}(S_1),\ldots,{f}(S_B)$.
Similarly, ${|S|}$ is equal to $\sum_{i=1}^B ({|S_i|}-1)$ plus the number of inner boundary nodes of $S$. The number of inner boundary nodes of $S$ is equal to the number of internal nodes in ${\mathcal{T}_{S,L(S)}}$. Thus, ${|S|}$ can be computed from ${g}(S),{f}(S_1),\ldots,{f}(S_B)$.
We next consider ${s_{S}}$. Let $x$ be the rightmost child of the root of $S$. Let $v_{S_i}$ be the rightmost child of the root of ${\mathcal{T}_{S,L(S)}}$. Then, the tree $S_i$ contains both the root of $S$ and $x$. If $x$ is not the boundary leaf of $S_i$ then all the descendants of $x$ are in $S_i$. Thus, ${s_{S}} = {s_{S_i}}$. Otherwise, by Lemma \[lem:boundary-size-degree\], ${s_{S}}$ can be computed from ${g}(S),{f}(S_1),\ldots,{f}(S_B)$.
The other components of ${f}(S)$ can also be computed from ${g}(S),{f}(S_1),\ldots,{f}(S_B)$. We omit the details.
\[lem:properties-4\] Property (P\[enu:size\]) is satisfied.
The lemma follows from part \[enu:decomposition-size\] of Lemma \[lem:tree-decomposition\].
Our data structure for the tree $T$ consists of the following components.
- For each macro tree $S$, the aB-tree of $S$, stored using Theorem \[thm:aB-tree-2\].
- For each macro tree $S$, the values ${f}(S)$ and ${g}(S)$.
- Additional information and data structures for handling queries, which will be described later.
The space of the aB-trees and the values ${f}(S),{g}(S)$ is $\sum_S (\log {\mathcal{N}({f}(S),{g}(S))}+2+\lceil \log |\Phi|\rceil
+ \lceil \log |\Phi_2|\rceil )$, where the summation is over every macro tree $S$. By part \[enu:intervals\] of Lemma \[lem:tree-decomposition\], $$\sum_S (2+\lceil \log |\Phi|\rceil + \lceil \log |\Phi_2|\rceil )
= O(n/L\cdot B) = O(n/\log^t n).$$ We next bound $\sum_S \log {\mathcal{N}({f}(S),{g}(S))}$. Since ${E(S)},|S|$ are components of ${f}(S)$, we have from Lemma \[lem:entropy\] that ${\mathcal{N}({f}(S),{g}(S))} \leq 2^{{E(S)}+1}$. Therefore, $\sum_S \log {\mathcal{N}({f}(S),{g}(S))} \leq \sum_S ({E(S)}+1)$. By definition, $\sum_S {E(S)}$ is equal to ${E(T)}+\sum_S {e(d_S)}$ minus the sum of ${e({\mathrm{degree}(x)})}$ for every node $x$ of $T$ which is a boundary node of some macro tree. Therefore, $$\sum_S {E(S)} \leq {E(T)}+\sum_S {e({d_{S}})}
\leq (n{H^*(T)}+O(n/L))+O(n/L \cdot \log n)
= n{H^*(T)}+O(n/\log^t n).$$
Most of the queries on $T$ are handled similarly to the way they are handled in the data structure of Farzan and Munro [@FarzanM14]. We give some examples below. We assume that a node $x$ in $T$ is represented by its preorder number. In order to compute the macro tree that contains a node $x$, we store the following structures.
- A rank-select structure on a binary string $B$ of length $n$ in which $B[x] = 1$ if nodes $x$ and $x-1$ belong to different macro trees.
- An array $M$ in which $M[i]$ is the number of the macro tree that contains node $x={\mathrm{select}_{1}(B,i)}$.
By part \[enu:intervals\] of Lemma \[lem:tree-decomposition\], the number of ones in $B$ is $O(n/L)$. Therefore, the space for $B$ is $O(n/L\cdot\log L)+O(n/\log^t n) = O(n/\log^t n)$ bits (using the rank-select structure of Patrascu [@Patrascu08]), and the space for $M$ is $O(n/L\cdot \log n) = O(n/\log^t n)$ bits.
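The two structures above can be mimicked naively as follows (a linear-scan rank in place of the $O(1)$ succinct rank-select structure; the layout of the example data is ours):

```python
# Naive stand-ins for the node -> macro-tree mapping: `bits` marks the
# first node of each maximal run of nodes from one macro tree, and M[i]
# is the macro-tree id of the i-th run. A macro tree may own several
# preorder intervals, which is why M is needed on top of rank. The real
# structure replaces the scan with an O(1) succinct rank-select.

def rank1(bits, x):
    """Number of ones in bits[0..x] (inclusive)."""
    return sum(bits[:x + 1])

def macro_tree_of(bits, M, x):
    # Node x lies in the run opened by the rank1(x)-th one.
    return M[rank1(bits, x) - 1]
```

For example, with `bits = [1,0,0,1,0,1,0,0]` and `M = [0, 1, 0]`, nodes 0–2 and 5–7 belong to macro tree 0 (two separate intervals) while nodes 3–4 belong to macro tree 1.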
For handling ${\mathrm{depth}(x)}$ queries, the data structure stores the depths of the roots of the macro trees. The required space is $O(n/L\cdot \log n)=O(n/\log^t n)$ bits. A ${\mathrm{depth}(x)}$ query is answered by finding the macro tree $S$ containing $x$, and then adding the depth of the root of $S$ (which is stored in the data structure) to the distance between $x$ and the root of $S$. The latter value is computed using the aB-tree of $S$. It suffices to describe how to compute this value when the aB-tree is stored naively. Recall that the root of the aB-tree corresponds to $S$, and the children of the root correspond to subtrees $S_1,\ldots,S_B$ of $S$. Finding the subtree $S_i$ that contains $x$ can be done using a lookup table indexed by ${g}(S)$, ${|S_1|},\ldots,{|S_B|}$, and ${p_{S_1}},\ldots,{p_{S_B}}$. Next, compute the distance between the root of $S_i$ and the root of $S$ using a lookup table indexed by ${g}(S)$ and ${l_{S_1}},\ldots,{l_{S_B}}$. The query algorithm then descends to the $i$-th child of the root of the aB-tree and continues the computation in a similar manner.
The handling of level ancestor queries differs from the way these queries are handled in the structure of Farzan and Munro. We define weights on the edges of ${\mathcal{T}_{T,L}}$ as follows. For every non-root node $v_S$ in ${\mathcal{T}_{T,L}}$, the weight of the edge between $v_S$ and its parent is ${l_{S}}$. The data structure stores a weighted ancestor structure on ${\mathcal{T}_{T,L}}$. We use the structure of Navarro and Sadakane [@NavarroS14] which has $O(1)$ query time. The space of this structure is $O(n'\log n'\cdot\log (n'W)+n'W/\log^{t'}(n'W))$ for every constant $t'$, where $n' = |{\mathcal{T}_{T,L}}|$ and $W$ is the maximum weight of an edge of ${\mathcal{T}_{T,L}}$. Since $n' = O(n/L)$ and $W = O(L)$, we obtain that the space is $O(n/\log^t n)$ bits.
In order to answer a ${\mathrm{level\_ancestor}(x,d)}$ query, first find the macro tree $S$ that contains $x$. Then use the aB-tree of $S$ to find ${\mathrm{level\_ancestor}(x,d)}$ if this node is in $S$. Otherwise, let $r$ be the root of $S$ and let $d'$ be the distance between $r$ and $x$ ($d'$ is computed using the aB-tree). Next, perform a ${\mathrm{level\_ancestor}({\mathrm{parent}(v_S)},d-d')}$ on ${\mathcal{T}_{T,L}}$, and let $v_{S'}$ be the answer. Let $v_{S''}$ be the child of $v_{S'}$ which is an ancestor of $v_S$. The node ${\mathrm{level\_ancestor}(x,d)}$ is in the macro tree $S''$, and it can be found using a query on the aB-tree of $S''$.
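For reference, the query answered by the machinery above is simply the following (a naive parent-pointer walk; we assume the "ancestor at distance $d$ above $x$" convention, which is our reading of the text):

```python
# Reference semantics of a level-ancestor query: the ancestor at
# distance d above x. The structure in the text answers this in O(1)
# via the aB-trees plus the weighted ancestor structure on the macro
# tree; this naive walk is for illustration only.

def level_ancestor(parent, x, d):
    for _ in range(d):
        if parent[x] is None:
            raise ValueError("d exceeds the depth of x")
        x = parent[x]
    return x
```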
[^1]: Department of Computer Science, Ben-Gurion University of the Negev. Email: `[email protected]`
---
abstract: |
Utilizing $\sim 50$ ks of Chandra X-ray Observatory imaging, we present an analysis of the intracluster medium (ICM) and cavity system in the galaxy cluster [RBS 797]{}. In addition to the two previously known cavities in the cluster core, the new and deeper X-ray image has revealed additional structure associated with the active galactic nucleus (AGN). The surface brightness decrements of the two cavities are unusually large, and are consistent with elongated cavities lying close to our line-of-sight. We estimate a total AGN outburst energy and mean jet power of $\approx 3 \dash 6 \times
10^{60}$ erg and $\approx 3 \dash 6 \times 10^{45} ~\lum$, respectively, depending on the assumed geometrical configuration of the cavities. Thus, [RBS 797]{} is apparently among the most powerful AGN outbursts known in a cluster. The average mass accretion rate needed to power the AGN by accretion alone is $\sim 1 ~\msolpy$. We show that accretion of cold gas onto the AGN at this level is plausible, but that Bondi accretion of the hot atmosphere is probably not. The BCG harbors an unresolved, non-thermal nuclear X-ray source with a bolometric luminosity of $\approx 2 \times
10^{44} ~\lum$. The nuclear emission is probably associated with a rapidly-accreting, radiatively inefficient accretion flow. We present tentative evidence that star formation in the BCG is being triggered by the radio jets and suggest that the cavities may be driving weak shocks ($M \sim 1.5$) into the ICM, similar to the process in the galaxy cluster [MS 0735.6+7421]{}.
author:
- |
K. W. Cavagnolo, B. R. McNamara, M. W. Wise, P. E. J. Nulsen,\
M. Brüggen, M. Gitti, and D. A. Rafferty
bibliography:
- 'cavagnolo.bib'
title: '[A Powerful AGN Outburst in RBS 797]{}'
---
Introduction {#sec:intro}
============
Evidence gathered over the last decade suggests that the growth of galaxies and that of supermassive black holes (SMBHs) are coupled, and that energetic feedback from active galactic nuclei (AGN) strongly influences galaxy evolution. The discovery of AGN-induced cavities in the hot halos surrounding many massive galaxies has strengthened this idea [see @cfreview; @mcnamrev for reviews] by revealing that AGN mechanical heating is capable of regulating halo radiative cooling [@birzan04; @dunn06; @rafferty06]. Current models of radio-mode AGN feedback posit that cooling processes in a galaxy’s hot halo drive mass accretion onto a central SMBH, promoting AGN activity that eventually offsets halo cooling via a thermally regulated feedback loop [@croton06; @bower06; @sijacki07]. While there is direct evidence that halo cooling and feedback are linked [@haradent; @rafferty08], observational constraints on how AGN are fueled and powered are more difficult to establish.
Gas accretion alone can, in principle, fuel most AGN [@pizzolato05; @2006MNRAS.372...21A]. However, for some relatively gas-poor systems hosting energetic AGN, for example, Hercules A, Hydra A, [MS 0735.6+7421]{}, and 3C 444, where the output exceeds $10^{61}$ erg [@herca; @hydraa; @ms0735; @2010arXiv1011.6405C], it appears that gas accretion alone may have difficulty sustaining their AGN unless the accretion is unusually efficient. This problem has led to speculation that some BCGs may host ultramassive black holes [$> 10^{10} ~\msol$; @msspin], or that some AGN are powered by the release of energy stored in a rapidly-spinning SMBH [@minaspin]. In this paper, we explore these and other issues through an analysis of the powerful AGN outburst in [RBS 797]{}.
The discovery of AGN-induced cavities in the intracluster medium (ICM) of the galaxy cluster [RBS 797]{} was first reported by @schindler01 using data from the Chandra X-ray Observatory. Multifrequency radio observations showed that the cavities are co-spatial with extended radio emission centered on a strong, jetted radio source coincident with the [RBS 797]{} brightest cluster galaxy (BCG) [@2002astro.ph..1349D; @gitti06; @birzan08]. The observations implicate an AGN in the BCG as the cavities’ progenitor, and @birzan04 [hereafter B04] estimate the AGN deposited $\approx
10^{60}$ erg of energy into the ICM at a rate of $\approx 10^{45}
~\lum$.
The B04 analysis assumed the cavities are roughly spherical, symmetric about the plane of the sky, and that their centers lie in a plane passing through the central AGN and perpendicular to our line-of-sight. However, the abnormally deep surface brightness decrements of the cavities, and the nebulous correlation between the radio and X-ray morphologies, suggest the system may be more complex than B04 assumed. Using a longer, follow-up observation, we conclude that the cavities are probably elongated along the line-of-sight and provide evidence that additional cavities may be present at larger radii. We conservatively estimate the total AGN energy output and power to be six times larger than the B04 values, $\approx 6 \times 10^{60}$ erg and $\approx 6 \times 10^{45} ~\lum$ respectively.
Reduction of the X-ray and radio data is discussed in Section \[sec:obs\]. Interpretation of observational results is given throughout Section \[sec:results\], and a brief summary concludes the paper in Section \[sec:con\]. At a redshift of $z =
0.354$, the look-back time is 3.9 Gyr, $\da = 4.996$ kpc arcsec$^{-1}$, and $\dl = 1889$ Mpc. All spectral analysis errors are 90% confidence, while all other errors are 68% confidence.
Observations {#sec:obs}
============
X-ray Data {#sec:xray}
----------
[RBS 797]{} was observed with Chandra in October 2000 for 11.4 ks using the ACIS-I array and in July 2007 for 38.3 ks using the ACIS-S array. Datasets were reduced using CIAO and CALDB versions 4.2. Events were screened of cosmic rays using grade and [<span style="font-variant:small-caps;">vfaint</span>]{} filtering. The level-1 event files were reprocessed to apply the most up-to-date corrections for the time-dependent gain change, charge transfer inefficiency, and degraded quantum efficiency of the ACIS detector. The afterglow and dead area corrections were also applied. Time intervals affected by background flares exceeding 20% of the mean background count rate were isolated and removed using light curve filtering. The final, combined exposure time is 48.8 ks. Point sources were identified and removed via visual inspection and use of the tool [<span style="font-variant:small-caps;">wavdetect</span>]{}. We refer to the data free of point sources and flares as the “clean” data. A mosaiced, fluxed image (see Figure \[fig:img\]) was generated by exposure correcting each clean dataset and reprojecting the normalized images to a common tangent point.
Radio Data {#sec:radio}
----------
Very Large Array (VLA) radio images at 325 MHz (A-array), 1.4 GHz (A- and B-array), 4.8 GHz (A-array), and 8.4 GHz (D-array) are presented in @gitti06 and @birzan08. Our re-analysis of the archival radio observations yielded no significant differences with these prior studies. Using the rms noise ($\sigma_{\rm{rms}}$) values given in @gitti06 and @birzan08 for each observation, emission contours between $3\sigma_{\rm{rms}}$ and the peak image intensity were generated. These are the contours referenced and shown in all subsequent discussion and figures.
Results {#sec:results}
=======
Cavity Morphologies and ICM Substructure {#sec:morph}
----------------------------------------
Shown in Figure \[fig:img\] is the 0.7–2.0 keV mosaiced clean image. Outside of $\approx 50$ kpc, the global ICM morphology is regular and elliptical in shape, with the appearance of being elongated along the northwest-southeast direction. The cavities discovered by @schindler01 are clearly seen in the cluster core east and west of the nuclear X-ray source and appear to be enclosed by a thick, bright, elliptical ridge of emission which we discuss further in Section \[sec:ecav\]. The western cavity has more internal structure, and its boundaries are less well-defined, than the eastern cavity. The emission from the innermost region of the core is elongated north-south and has a distinct ‘S’-shape punctuated by a hard nuclear X-ray source.
Multifrequency radio images overlaid with ICM X-ray contours are shown in Figure \[fig:composite\]. [RBS 797]{} radio properties are discussed in @gitti06, and we summarize them here. As seen in projection, the nuclear 4.8 GHz jets are almost orthogonal to the axis connecting the cavities. Radio imaging at $\approx 5\arcs$ resolution reveals that the 325 MHz, 1.4 GHz, and 8.4 GHz radio emission is diffuse and extends well beyond the cavities, more similar to the morphology of a radio mini-halo than to relativistic plasma confined to the cavities. Of all the radio images, the 1.4 GHz emission imaged at $\approx 1 \arcs$ resolution most closely traces the cavity morphologies, yet it is still diffuse and uniform over the cavities with little structure outside the radio core. Typically, the connection between a cavity system, coincident radio emission, and the progenitor AGN is unambiguous. This is not the case for [RBS 797]{}, which suggests the cavity system may be more complex than it appears. Indeed, there is a hint of barely resolved structure in the Chandra image associated with the 4.8 GHz radio structure, with X-ray deficits just beyond the tips of the radio jets, bounded to one side by the ‘S’-shaped feature noted above. This is suggestive of small cavities from a new outburst episode with asymmetric bright rims created by young radio jets.
To better reveal the cavity morphologies, residual X-ray images of [RBS 797]{} were constructed by modeling the ICM emission and subtracting it off. The X-ray isophotes of two exposure-corrected images – one smoothed by a $1\arcs$ Gaussian and another by a $3\arcs$ Gaussian – were fitted with ellipses using the task <span style="font-variant:small-caps;">ellipse</span>. The ellipse centers were fixed at the location of the BCG X-ray point source, and the eccentricities and position angles were free to vary. A 2D surface brightness model was created from each fit using the task <span style="font-variant:small-caps;">bmodel</span>, normalized to the parent X-ray image, and then subtracted off. The residual images are shown in Figure \[fig:subxray\].
In addition to the central east and west cavities (labeled E1 and W1), tentative evidence for depressions north and south of the nucleus (labeled N1 and S1) is revealed. N1 and S1 lie along the 4.8 GHz jet axis and are coincident with spurs of significant 1.4 GHz emission. A possible depression coincident with the southeastern concentration of 325 MHz emission is also found (labeled E2), but no radio counterpart on the opposite side of the cluster is seen (labeled W2). The N1, S1, and E2 depressions do not show up in surface brightness profiles extracted in wedges passing through each feature. So while coincidence with the radio emission hints at additional cavities, they may be spurious structures and a deeper image is required to confirm them. There is also an X-ray edge which extends southeast from E2 and sits along a ridge of 325 MHz and 8.4 GHz emission. No substructure associated with the westernmost knot of 325 MHz emission is found, but there is a stellar object co-spatial with this region. The X-ray and radio properties of the object are consistent with those of a galactic RS CVn star [@1993RPPh...56.1145S] – if the star is less than 1 kpc away, $\lx \la 10^{31} ~\lum$ and $L_{325} \sim
10^{27} ~\lum$ – suggesting the western 325 MHz emission may not be associated with the cluster.
Radial ICM Properties {#sec:icm}
---------------------
In order to analyze the [RBS 797]{} cavity system and AGN outburst energetics in detail, the radial ICM density, temperature, and pressure structure need to be measured. The radial profiles are shown in Figure \[fig:gallery\]. A temperature profile was created by extracting spectra from concentric circular annuli (2500 source counts per annulus) centered on the cluster X-ray peak, binning the spectra to 25 counts per energy channel, and then fitting each spectrum in XSPEC 12.4 [@xspec]. Spectra were modeled with an absorbed, single-component MEKAL model [@mekal1] over the energy range 0.7–7.0 keV. For each annulus, weighted responses were created and a background spectrum was extracted from the ObsID-matched blank-sky dataset normalized using the ratio of 9–12 keV count rates for an identical off-axis, source-free region of the blank-sky and target datasets. The absorbing Galactic column density was fixed to $\nhgal = 2.28 \times 10^{20} ~\pcmsq$ [@lab], and a spectral model for the Galactic foreground was included as an additional, fixed background component during spectral fitting (see @2005ApJ...628..655V and @xrayband for method). Gas metal abundance was free to vary and normalized to the @ag89 solar ratios. Spectral deprojection using the [<span style="font-variant:small-caps;">projct</span>]{} model did not produce significantly different results, thus only projected quantities are discussed.
A 0.7–2.0 keV surface brightness profile was extracted from the mosaiced clean image using concentric $1\arcs$ wide elliptical annuli centered on the BCG X-ray point source (central $\approx
1\arcs$ and cavities excluded). A deprojected electron density () profile was derived from the surface brightness profile using the method of @kriss83 which incorporates the 0.7–2.0 keV count rates and best-fit normalizations from the spectral analysis [see @accept for details]. Errors for the density profile were estimated using 5000 Monte Carlo simulations of the original surface brightness profile. A total gas pressure profile was calculated as $P
= n \tx$ where $n \approx 2.3 \nH$ and $\nH \approx
\nelec/1.2$. Profiles of enclosed X-ray luminosity, entropy ($K = \tx
\nelec^{-2/3}$) and cooling time ($\tcool = 3 n \tx~(2 \nelec \nH
\Lambda)^{-1}$, where $\Lambda$ is the cooling function), were also generated. Errors for each profile were determined by summing the individual parameter uncertainties in quadrature.
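The derived quantities follow directly from the density and temperature profiles. A sketch with made-up input values (not RBS 797 measurements), assuming for simplicity a constant cooling function $\Lambda$:

```python
# Illustrative computation of the derived ICM quantities from an
# electron density n_e [cm^-3] and temperature kT [keV]. Input values
# in the example are placeholders, not RBS 797 measurements.

def icm_quantities(n_e, kT_keV, cooling_fn=1e-23):
    """Return (pressure, entropy, cooling time).

    pressure  P = n * kT, with n ~ 2.3 n_H and n_H ~ n_e/1.2  [keV cm^-3]
    entropy   K = kT * n_e^(-2/3)                              [keV cm^2]
    t_cool    = 3 n kT / (2 n_e n_H Lambda)                    [s], for an
                assumed constant cooling function Lambda [erg cm^3 s^-1].
    """
    KEV = 1.602e-9                        # erg per keV
    n_H = n_e / 1.2
    n = 2.3 * n_H
    P = n * kT_keV                        # keV cm^-3
    K = kT_keV * n_e ** (-2.0 / 3.0)      # keV cm^2
    t_cool = 3 * n * kT_keV * KEV / (2 * n_e * n_H * cooling_fn)  # s
    return P, K, t_cool
```

For example, $n_e = 0.1~{\rm cm^{-3}}$ and $kT = 5$ keV give $K \approx 23~{\rm keV~cm^2}$ and a cooling time below 1 Gyr, typical of a cool core.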
The radial profiles are consistent with those of a normal cool core cluster: an outwardly increasing temperature profile, a centrally peaked abundance profile, a central cooling time of $\approx 400$ Myr within 9 kpc of the cluster center, and a pronounced low entropy core. Fitting the ICM entropy profile with the function $K = \kna +\khun (r/100
~\kpc)^{\alpha}$, where $\kna$ is the core entropy, $\khun$ is a normalization at 100 kpc, and $\alpha$ is a dimensionless index, yields $\kna = 17.9 \pm 2.2 ~\ent$, $\khun = 92.1 \pm 6.2 ~\ent$, and $\alpha = 1.65 \pm 0.11$ with a fit statistic (DOF) of $9.3(57)$. Further, the radial analysis does not indicate any significant temperature, density, or pressure discontinuities that would signal the existence of large-scale shocks or cold fronts.
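For reference, the best-fit entropy model can be evaluated directly (this reuses the quoted parameters; it does not redo the fit):

```python
# Evaluate the best-fit entropy model K(r) = K0 + K100 (r/100 kpc)^alpha
# with the reported parameters K0 = 17.9 keV cm^2, K100 = 92.1 keV cm^2,
# alpha = 1.65. Evaluation only; the fit itself used the binned profile.

K0, K100, ALPHA = 17.9, 92.1, 1.65

def entropy_model(r_kpc):
    return K0 + K100 * (r_kpc / 100.0) ** ALPHA

# K(r) -> K0 at small radii (the low-entropy core), and
# K(100 kpc) = K0 + K100 = 110 keV cm^2.
```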
ICM Cavities {#sec:cavities}
------------
### Cavity Energetics {#sec:ecav}
Of all the ICM substructure, the E1 and W1 cavities are unambiguous detections, and their energetics were determined following standard methods (see B04). Our fiducial cavity configuration assumes the cavities are symmetric about the plane of the sky, and the cavity centers lie in a plane which is perpendicular to our line-of-sight and passes through the central AGN (configuration-1 in Figure \[fig:config\]). Hereafter, we denote the line-of-sight distance of a cavity’s center from this plane as $z$, and the cavity radius along the line-of-sight as [$r_{\rm{los}}$]{}. The volume of a cavity with $z = 0$ is thus $V = 4\pi a b {\ensuremath{r_{\rm{los}}}}/3$ where $a$ and $b$ are the projected semi-major and -minor axes, respectively, of the by-eye determined ellipses in Figure \[fig:subxray\]. The projected morphologies of E1 and W1 are similar enough that congruent regions were used for both. Initially, the cavities were assumed to be roughly spherical and [$r_{\rm{los}}$]{} was set equal to the projected effective radius, ${\ensuremath{r_{\rm{eff}}}}=
\sqrt{ab}$. For configuration-1, the distance of each cavity from the central AGN, $D$, is simply the projected distance from the ellipse centers to the BCG X-ray point source. A systematic error of 10% is assigned to the cavity dimensions.
Cavity ages were estimated using the sound-crossing, buoyancy, and refill timescales discussed in B04. The sound-crossing age assumes a cavity reaches its present distance from the central AGN by moving at the local sound speed, the buoyancy age assumes the cavity buoyantly rises at its terminal velocity, and the refill time is simply the time required to refill the volume displaced by the cavity. The energy required to create each cavity is estimated by its enthalpy, $\ecav =
\gamma PV/(\gamma-1)$, which was calculated assuming cavity pressure support is provided by a relativistic plasma ($\gamma = 4/3$). We assume the cavities are buoyant structures and that the time-averaged power needed to create each cavity is $\pcav = \ecav/\tbuoy$. The individual cavity properties are given in Table \[tab:cavities\] and the aggregate properties in Table \[tab:totals\].
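For concreteness, the volume and enthalpy calculation reduces to a few lines of arithmetic. The numerical inputs below (cavity axes, local pressure, buoyancy age) are placeholders rather than the measured values in Tables \[tab:cavities\] and \[tab:totals\]; only the formulas $V = 4\pi a b r_{\rm los}/3$ and $E = \gamma PV/(\gamma-1)$ are taken from the text.

```python
import math

KPC_CM = 3.086e21   # cm per kpc
MYR_S = 3.156e13    # seconds per Myr

def cavity_energetics(a_kpc, b_kpc, r_los_kpc, p_cgs, t_buoy_myr, gamma=4.0 / 3.0):
    """Volume V = 4*pi*a*b*r_los/3, enthalpy E = gamma*P*V/(gamma-1)
    (= 4PV for a relativistic plasma), and power P_cav = E / t_buoy."""
    V = (4.0 * math.pi / 3.0) * a_kpc * b_kpc * r_los_kpc * KPC_CM**3
    E = gamma / (gamma - 1.0) * p_cgs * V
    return V, E, E / (t_buoy_myr * MYR_S)

# Placeholder inputs, not the measured values:
V, E, P = cavity_energetics(12.0, 8.0, 10.0, p_cgs=2.0e-10, t_buoy_myr=50.0)
```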
For the representative assumption of configuration-1, we measure a total cavity energy $\ecav = 3.23 ~(\pm 1.16) \times 10^{60}$ erg and total power $\pcav = 3.34 ~(\pm 1.41) \times 10^{45} ~\lum$. These values are larger than those of B04 ($\ecav = 1.52 \times 10^{60}$ erg, $\pcav = 1.13 \times 10^{44} ~\lum$) by a factor of $\approx 3$ due to our larger E1 and W1 volumes. Using the B04 cavity volumes in place of our own produces no significant differences between our energetics calculations and those of B04. The largest uncertainties in the energetic calculations are the cavity volumes and their 3-dimensional locations in the ICM, issues we consider next.
### Cavity Surface Brightness Decrements {#sec:dec}
A cavity’s morphology and location in the ICM affect the X-ray surface brightness decrement it induces. If the surface brightness of the undisturbed ICM can be estimated, then the decrement is useful for constraining the cavity line-of-sight size and, through simple geometric calculation, its distance from the launch point [see @hydraa for details]. We adopt the @hydraa definition of surface brightness decrement, $y$, as the ratio of the X-ray surface brightness inside the cavities to the value of the best-fit ICM surface brightness $\beta$-model at the same radius. Note the potential for confusion: $y=1$ if there is no decrement, while small values of $y$ correspond to large decrements. Consistent with the analysis of @schindler01, we find the best-fit $\beta$-model has parameters $S_0 = 1.65 ~(\pm 0.15) \times 10^{-3}$ , $\beta = 0.62 \pm 0.04$, and $r_{\rm{core}} = 7.98\arcs \pm
0.08$ for $\chi^2$(DOF) = 79(97). Using a circular aperture with radius $1\arcs$ centered on the deepest part of each cavity, we measure mean decrements of $\bar{y}_{\rm{W1}} = 0.50 \pm 0.18$ and $\bar{y}_{\rm{E1}} = 0.52 \pm 0.23$, with minima of $y^{\rm{min}}_{\rm{W1}} = 0.44$ and $y^{\rm{min}}_{\rm{E1}} = 0.47$.
To check if the representative approximation for cavity configuration-1 can produce the measured decrements, the best-fit $\beta$-model was integrated over each cavity with a column of gas equal to $2{\ensuremath{r_{\rm{los}}}}$ excluded. For this case, the most significant decrement obtained was $y = 0.67$, indicating that $r_{los}$ must exceed $r_{eff}$. If the centers of E1 and W1 lie in the plane of the sky, the minimum line-of-sight depths needed to reproduce the decrements of E1 and W1 are ${\ensuremath{r_{\rm{los}}}}^{\rm{E1}} = 23.4$ kpc and ${\ensuremath{r_{\rm{los}}}}^{\rm{W1}} = 25.9$ kpc. The energetics for these morphologies are given as configuration-1a in Tables \[tab:cavities\] and \[tab:totals\]. If the cavities are moving away from the AGN in the plane of the sky, then it is surprising to find that each has its shortest axis in the plane of the sky and perpendicular to its direction of motion. However, if the bright rims are masking the true extent of the cavities along this short axis, the cavities may be larger than we realize.
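The decrement calculation described above can be sketched by integrating the $\beta$-model emissivity along the line of sight with an emission-free column removed. The core radius in kpc and the integration limits below are assumptions for illustration; only the $\beta$-model form and the qualitative behavior follow the text.

```python
import numpy as np

def beta_emissivity(R, r_core, beta):
    # 3-D emissivity implied by a beta-model surface brightness (prop. to n_e^2)
    return (1.0 + (R / r_core) ** 2) ** (-3.0 * beta)

def cavity_decrement(rho, z_c, r_los, r_core=40.0, beta=0.62):
    """Decrement y at projected radius rho from an emission-free column of
    half-length r_los centred at line-of-sight offset z_c (lengths in kpc).
    r_core here is an assumed physical core radius, not the fitted arcsec value."""
    z, dz = np.linspace(-500.0, 500.0, 40001, retstep=True)
    eps = beta_emissivity(np.hypot(rho, z), r_core, beta)
    full = eps.sum() * dz
    removed = eps[np.abs(z - z_c) < r_los].sum() * dz
    return 1.0 - removed / full

# A longer r_los gives a deeper decrement (smaller y); pushing the cavity
# centre off the plane of the sky (z_c > 0) weakens the decrement:
y_shallow = cavity_decrement(rho=20.0, z_c=0.0, r_los=10.0)
y_deep = cavity_decrement(rho=20.0, z_c=0.0, r_los=25.0)
y_offset = cavity_decrement(rho=20.0, z_c=40.0, r_los=25.0)
```

This is the degeneracy exploited in configuration-2: a large cavity displaced along the line of sight can reproduce the same $y$ as a smaller cavity in the plane of the sky.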
It is also possible that the cavities are inflating close to our line-of-sight and thus may be large, elongated structures as shown in configuration-2 of Figure \[fig:config\]. Presented in Figure \[fig:decs\] are curves showing how surface brightness decrement changes as a function of $z$ for various [$r_{\rm{los}}$]{}. The plots demonstrate that it is possible to reproduce the measured E1 and W1 decrements using larger cavities that have centers displaced from the plane of the sky. Consequently, the limiting case of $z = 0$ is a lower-limit on the AGN outburst energetics. If the cavity system lies close to our line-of-sight, and is much larger and more complex than the data allows us to constrain, it may explain the additional ICM substructures seen in the residual images and the ambiguous relationship between the X-ray and radio morphologies.
It should be noted that deep cavities like those in [RBS 797]{} are uncommon, most other cavity systems having minima $y \ga 0.6$ (B04). The large decrements raise the concern that the $\beta$-model fit has been influenced by the rim-like structures of E1-W1, producing artificially large decrements (small $y$). The prominent rims can be seen in Figure \[fig:pannorm\], which shows the normalized surface brightness variation in wedges of a $2.5\arcs$ wide annulus centered on the X-ray peak and passing through the cavity midpoints. In addition to excluding the cavities, we tried excluding the rims during $\beta$-model fitting, but this removed too much of the surface brightness profile and the fit did not converge. Extrapolating the surface brightness profile at larger radii inward resulted in even lower decrements. The @hydraa decrement model assumes a cavity replaces some X-ray emitting gas with non-emitting material, without disturbing the ambient gas. To be valid, the model requires that there is little uplifted gas surrounding a cavity [@2005ApJ...628..629N; @2001ApJ...558L..15B] and that the cavity is not driving significant shocks into its surroundings (see Section \[sec:shocks\]). The rims, and their possible connection to gas shocking, are discussed below.
### Do the Rims Indicate Shocks? {#sec:shocks}
The Chandra X-ray image of [RBS 797]{} bears a strong resemblance to that of [MS 0735.6+7421]{}, with a bright elliptical region at its center enclosing two prominent cavities. In [MS 0735.6+7421]{}, the bright central region is bounded by Mach $\simeq 1.4$ shocks. Although there is no evidence for density or temperature jumps in the Chandra data for [RBS 797]{}, the spatial resolution of the [RBS 797]{} observations is poorer than for [MS 0735.6+7421]{} (the angular scale of shocks in [RBS 797]{} would be $\sim 10\times$ smaller). Based on their similar appearances, we suggest that the prominent central ellipse in the X-ray image of [RBS 797]{} may also be surrounded by moderately strong shocks, and the lack of evidence for shocks may simply be a consequence of the poor spatial resolution.
A simple model that captures how a bright, shocked rim helps enhance a cavity decrement is to assume the shocked region is cylindrical and uniformly compressed. Ignoring emission outside the shocked cylinder, a compression $\chi$ gives a decrement of $y = \sqrt{\chi} -
\sqrt{\chi - 1}$, hence a decrement of $\ge 0.44$ can be obtained with $\chi \la 1.84$, not dissimilar from the shock Mach numbers in Hercules A and [MS 0735.6+7421]{}. This is a rough estimate (arguably pessimistic since the compressed gas is assumed to be uniform), but it demonstrates how bright rims created by shocks may enhance the cavity decrements. Assuming axial symmetry for the unshocked gas, including emission outside the cylinder adds more to the bright rim than the cavity center, increasing the surface brightness difference from rim to cavity, but generally reducing $y$. Because the ICM surface brightness drops rapidly outside the shocked region, the correction due to emission from outside the cylinder is modest, and it scales as $\chi^{-3/2}$, so its effect decreases as the compression increases. Thus the analogy with [MS 0735.6+7421]{} is also consistent with the relatively large cavity decrements found in [RBS 797]{}.
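A minimal sketch of this rim model: the decrement $y(\chi) = \sqrt{\chi} - \sqrt{\chi-1}$ decreases monotonically with compression, so it can be inverted numerically for the $\chi$ needed to reach a given decrement.

```python
import math

def rim_decrement(chi):
    """Decrement from a uniformly compressed (factor chi) cylindrical rim,
    ignoring emission outside the cylinder: y = sqrt(chi) - sqrt(chi - 1)."""
    return math.sqrt(chi) - math.sqrt(chi - 1.0)

def compression_for(y, lo=1.0, hi=100.0, tol=1e-10):
    # y(chi) falls monotonically from 1 at chi = 1, so bisect for chi(y)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rim_decrement(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

chi = compression_for(0.44)   # deepest measured decrement; chi ~ 1.84
```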
It is often assumed that cavity energetics calculations, like those in Section \[sec:ecav\], provide a reasonable estimate of the jet power. However, $\ecav$ and $\pcav$ do not account for AGN output energy which may be channeled into shocks. If significant ICM shocking has occurred in [RBS 797]{}, then the cavity energetics clearly underestimate the AGN output, which directly impacts the discussion of AGN fueling in Section \[sec:accretion\]. In the cases where cavity and shock energies have been directly compared, they are generally comparable. Thus the impact of shocks on the power estimates may be modest.
Powering the Outburst {#sec:accretion}
---------------------
If the AGN was powered by mass accretion alone, then $\ecav$ gives a lower limit to the gravitational binding energy released as mass accreted onto the SMBH. The total mass accreted can then be approximated as $\macc = \ecav/(\epsilon c^2)$ with an average accretion rate $\dmacc = \macc/\tbuoy$. Consequently, the black hole’s mass grew by $\ddmbh = (1-\epsilon)\macc$ at an average rate $\dmbh =
\ddmbh/\tbuoy$. Here, $\epsilon$ is the mass-energy conversion factor and we adopt the commonly used value $\epsilon = 0.1$ [@2002apa..book.....F]. The accretion properties associated with each cavity configuration are given in Table \[tab:totals\], and $\macc$ lies in the range $2 \dash 3 \times 10^7 \msol$. Below, we consider whether the accretion of cold or hot gas pervading the BCG could meet these requirements without conflicting with observed BCG and ICM properties.
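The accretion bookkeeping above is a one-line conversion; the sketch below uses the configuration-1 total cavity energy with an assumed round-number $\sim 100$ Myr buoyancy age (the actual per-configuration ages are in the tables).

```python
C_CM_S = 2.998e10   # speed of light, cm/s
MSUN_G = 1.989e33   # grams per solar mass

def accretion_budget(E_cav_erg, t_buoy_yr, eps=0.1):
    """M_acc = E_cav / (eps * c^2), mean rate M_acc / t_buoy, and black-hole
    growth (1 - eps) * M_acc, all in solar-mass units."""
    m_acc = E_cav_erg / (eps * C_CM_S**2) / MSUN_G
    return m_acc, m_acc / t_buoy_yr, (1.0 - eps) * m_acc

# E_cav from configuration-1; t_buoy ~ 100 Myr is an assumed round number
m_acc, mdot_acc, dm_bh = accretion_budget(3.23e60, 1.0e8)   # ~2e7 Msun total
```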
### Cold Accretion {#sec:cold}
Optical nebulae and substantial quantities of cold molecular gas are found in many cool core clusters [@crawford99; @edge01]. This gas may be the end point of thermal instabilities in the hot atmosphere, and is potentially a significant source of fuel for AGN activity and star formation [@pizzolato05; @2006NewA...12...38S; @2010MNRAS.408..961P]. We are unaware of a molecular gas mass ($\mmol$) measurement for [RBS 797]{}, so we estimate it using the $\mmol$-$\halpha$ correlation in @edge01. An optical spectrum (3200-7600 Å) of the [RBS 797]{} BCG reveals a strong $\hbeta$ emission line [@rbs1; @rbs2], which we used as a surrogate for $\halpha$ by assuming a Balmer decrement of EW$_{\hbeta}$/EW$_{\halpha}$ = 0.29 [@2006ApJ...642..775M], where EW is line equivalent width. We estimate $\mmol \sim 10^{10} ~\msol$, well in excess of the $\sim 10^7 ~\msol$ needed to power the outburst. Thus, there is reason to believe that an ample cold gas reservoir is available to fuel the AGN. It must be noted that the $\mmol$-$\halpha$ relation has substantial scatter [@salome03], and that the [RBS 797]{} emission line measurements are highly uncertain (A. Schwope, private communication), so this should be regarded as a crude estimate.
Accreting gas from the reservoir will cause the central SMBH mass to grow, but, on average, black hole mass growth is coupled to the star formation properties of the host galaxy. Assuming star formation accompanied the phase of accretion which powered the AGN outburst, the Magorrian relation [@magorrian] implies that for each unit of black hole mass growth, several hundred times as much goes into bulge stars [@2004ApJ...604L..89H]. Thus, the $\sim 1 ~\msolpy$ mass accretion rate needed to power the [RBS 797]{} outburst suggests that $\sim 700 ~\msolpy$ of star formation would be required to grow the galaxy and its SMBH along the Magorrian relation. The present BCG star formation rate (SFR) is $\sim 1 \dash 10 ~\msolpy$ (see Section \[sec:bcg\]), implying that, if the SFR preceding and during the AGN outburst was of the order of the present rate, the SMBH has grown faster than the slope of the Magorrian relation implies.
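The star-formation requirement above is a simple proportionality; the factor of 700 is the "several hundred" bulge-to-black-hole growth ratio cited in the text.

```python
MAGORRIAN_RATIO = 700.0   # bulge stars formed per unit of BH mass growth

def coeval_sfr(mdot_bh_msun_yr, ratio=MAGORRIAN_RATIO):
    """SFR needed for the host to track the Magorrian relation while its
    black hole grows at the rate mdot_bh."""
    return ratio * mdot_bh_msun_yr

sfr_needed = coeval_sfr(1.0)   # ~1 Msun/yr of BH growth -> 700 Msun/yr of stars
```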
### Hot Accretion
Direct accretion of the hot ICM via the Bondi mechanism provides another possible AGN fuel source. The accretion flow arising from this process is characterized by the Bondi equation (Equation \[eqn:bon\]) and is often compared with the Eddington limit describing the maximal accretion rate for a SMBH (Equation \[eqn:edd\]): $$\begin{aligned}
\dmbon &=& 0.013 ~\kbon^{-3/2} \left(\frac{\mbh}{10^9
~\msol}\right)^{2} ~\msolpy \label{eqn:bon}\\
\dmedd &=& \frac{2.2}{\epsilon} \left(\frac{\mbh}{10^9~\msol}\right)
~\msolpy \label{eqn:edd}\end{aligned}$$ where $\epsilon$ is a mass-energy conversion factor, $\kbon$ is the mean entropy \[$\ent$\] of gas within the Bondi radius, and $\mbh$ is the black hole mass \[$\msol$\]. We chose the relations of @2002ApJ...574..740T and @2007MNRAS.379..711G to estimate $\mbh$, and find a range of $0.6 \dash 7.8 \times 10^9 ~\msol$, from which we adopted the weighted mean value $1.5^{+6.3}_{-0.9}
\times 10^9 ~\msol$ where the errors span the lowest and highest $1\sigma$ values of the individual estimates. For $\epsilon = 0.1$ and $\kbon = \kna$, the relevant accretion rates are $\dmbon \approx 4
\times 10^{-4} ~\msolpy$ and $\dmedd \approx 33 ~\msolpy$. The Eddington and Bondi accretion ratios are given in Table \[tab:totals\], with ${\ensuremath{\dot{m_{\rm{E}}}}}\equiv \dmacc/\dmedd \approx 0.02$ and ${\ensuremath{\dot{m_{\rm{B}}}}}\equiv \dmacc/\dmbon \approx 2000 \dash 4000$.
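Equations \[eqn:bon\] and \[eqn:edd\] evaluate directly with the quoted numbers ($\kbon = \kna = 17.9$, $\mbh = 1.5 \times 10^9 ~\msol$, $\epsilon = 0.1$):

```python
def mdot_bondi(K_bondi, M_bh_1e9):
    """Bondi rate, Eq. (bon): 0.013 * K^(-3/2) * (M_BH / 1e9 Msun)^2, Msun/yr."""
    return 0.013 * K_bondi**-1.5 * M_bh_1e9**2

def mdot_eddington(M_bh_1e9, eps=0.1):
    """Eddington rate, Eq. (edd): (2.2 / eps) * (M_BH / 1e9 Msun), Msun/yr."""
    return 2.2 / eps * M_bh_1e9

mdot_B = mdot_bondi(17.9, 1.5)    # ~4e-4 Msun/yr
mdot_E = mdot_eddington(1.5)      # ~33 Msun/yr
```

The steep $\kbon^{-3/2}$ dependence is why the Bondi rate falls so far short of the $\sim 1 ~\msolpy$ required.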
The value of [$\dot{m_{\rm{B}}}$]{} integrated over the duration of the AGN outburst implies more accreted mass than the available hot gas supply near the Bondi radius ($\approx 10$ pc). Using the upper limit of $\mbh$, $\approx 8 \times 10^9 ~\msol$, and assuming gas near the Bondi radius has a mean temperature of 1.0 keV, a [$\dot{m_{\rm{B}}}$]{} of unity requires $\kbon$ be less than $0.9 ~\ent$, corresponding to $\nelec \approx 1.0 ~\pcc$. This is nine times the measured ICM central density, although the gas density near the Bondi radius could be considerably higher and evade detection. Nevertheless, a sphere more than 1 kpc in radius would be required to house the up to $10^7 ~\msol$ needed to power the AGN. These are properties of a galactic corona [@coronae]. A $\sim 1$ kpc, $\tx \sim 1$ keV, $\nelec \sim 0.4 ~\pcc$ corona would be easily detected in the observations as a bright point source with a distinct thermal spectrum. However, the spectrum of the observed nuclear X-ray point source (see Section \[sec:nuc\]) is inconsistent with a thermal origin (i.e., no $E < 2$ keV Fe L-shell emission hump and no $E \approx 6.5$ keV Fe K-shell emission line blend). If the pervading ICM is instead the source of accretion fuel, these numbers imply the inner core of the cluster would have been fully consumed during the outburst ($<$ 100 Myr), which seems unlikely.
Because the nucleus is unresolved, the gas density at the Bondi radius could be much higher, particularly if enhanced Compton cooling is significant [@2010MNRAS.402.1561R]. Furthermore, it is possible that the central gas density was much higher in the past, as the AGN was turning on. Therefore, Bondi accretion cannot be ruled out. However, [RBS 797]{}, like other powerful AGN in clusters, could power its AGN by Bondi accretion only with great difficulty [@rafferty06; @minaspin].
### Black Hole Spin
In an effort to find lower accretion rates which still produce powerful jets, we consider below whether a rapidly-spinning SMBH could act as an alternate power source. We consider the spin model with the caveats that, like cold- and hot-mode accretion, spin is fraught with its own difficulties, is hard to evaluate for any one system, and it is unclear how rapidly spinning black holes are created (see @msspin and @minaspin for discussion on these points).
In hybrid spin models [@1999ApJ...522..753M; @2001ApJ...548L...9M; @2006ApJ...651.1023R; @2007MNRAS.377.1652N; @2009MNRAS.397.1302B; @gesspin] jets are produced by a combination of the Blandford-Znajek [@bz] and Blandford-Payne [@bp] mechanisms, by extracting energy from a spinning black hole and its accretion disk via the poloidal component of strong magnetic fields. In these hybrid models, the accretion rate sets an upper limit to the strength of the magnetic field that can thread the inner disk and the black hole, which, in turn, determines the emergent jet power. Of the model parameters, only the jet power is truly measured, so a range of [$\dot{m_{\rm{E}}}$]{} and dimensionless spin values ($j$) can produce any particular jet power. However, because $j$ needs to be as near unity as possible to avoid excessively large accretion rates (the shortcoming of pure mass accretion mechanisms), the choice of [$\dot{m_{\rm{E}}}$]{} is not completely arbitrary.
In the hybrid spin model of @2007MNRAS.377.1652N, the jet efficiency ($\epsilon$ above) is strongly dependent on black hole spin. For example, for a disk viscosity parameter of $\alpha = 0.25$, we require a black hole spin parameter of $j \ga 0.96$ to obtain $\epsilon > 0.1$. In general, a high jet efficiency demands a high spin parameter. Alternatively, for $j = 0.7$ the jet efficiency would be $\approx 0.01$, boosting the mass required to fuel the outburst by an order of magnitude and exacerbating the issues of accounting for the fuel discussed above. Additional relief can be found in the model of @gesspin, which has the feature that extremely powerful jets are produced when the material in the disk is orbiting retrograde relative to the spin of the SMBH. In the @gesspin model, for $|j| \ge 0.9$ and $\mbh \ge 1.5 \times 10^9 ~\msol$, the required mass accretion rates are ${\ensuremath{\dot{m_{\rm{E}}}}}\le 0.005$ or $\la 0.1 ~\msolpy$.
Nuclear Emission {#sec:nuc}
----------------
There is a bright BCG X-ray point source apparent in the Chandra image that coincides with the nucleus of the BCG, and its properties may also provide clues regarding on-going accretion processes. A source spectrum was extracted from a region enclosing 90% of the normalized PSF specific to the nuclear source’s median photon energy and off-axis position. The source region had an effective radius of $0.86\arcs$, and a background spectrum was taken from an enclosing annulus with 5 times the area. The shape and features of the background-subtracted spectrum, shown in Figure \[fig:nucspec\], are inconsistent with thermal emission. The spectrum was modeled using an absorbed power law with two Gaussians added to account for emission features around 1.8 keV and 3.0 keV. Including an absorption edge component to further correct for the ACIS 2.0 keV iridium feature did not significantly improve any fit, resulted in optical depths consistent with zero, and did not negate the need for a Gaussian around 1.8 keV. For the entire bandpass of both observations, the number of readout frames exposed to two or more X-ray photons (i.e., the pile-up percentage) inside an aperture twice the size of the source region was $< 6\%$, and the addition of a pileup component to the spectral analysis resulted in no significant improvements to the fits.
A variety of absorption models were fit to the spectrum, and the best-fit values are given in Table \[tab:agn\]. The model with a power-law distribution of $\nhobs \sim 10^{22} ~\pcmsq$ absorbers yielded the best statistical fit, and the low column densities indicate the nucleus may not be heavily obscured. If the Gaussian components represent emission line blends, they would be consistent with ion species of sulfur, silicon, argon and calcium, possibly indicating emission or reflection from dense, ionized material in or near the nucleus [@1990ApJ...362...90B; @1998MNRAS.297.1219I].
If mass accretion is powering the nuclear emission, the X-ray source bolometric luminosity, $\lbol = 2.3 \times 10^{44} ~\lum$, implies an accretion rate of $\dmacc \approx \lbol/(0.1 c^2) \approx 0.04
~\msolpy \approx 0.001 {\ensuremath{\dot{m_{\rm{E}}}}}$, well into the regime where accretion flows are expected to be radiatively inefficient, advection-dominated flows [@adaf]. Depending upon the exact properties of the accretion disk, for $\mbh > 10^9 ~\msol$, the luminosity of a ${\ensuremath{\dot{m_{\rm{E}}}}}\sim 0.001$ advection-dominated accretion disk is $\la 10^{42} ~\lum$ [@2002ApJ...570L..13C], which, for a point source, is too faint to be detected against the extended emission of the BCG nucleus (see Section \[sec:bcg\]). The low column densities suggested by the X-ray modeling could not conceal a $> 10^{42} ~\lum$ source, so if there is a bright, undetected optical point source, the emission must be beamed away from us.
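The radiative-efficiency arithmetic above, as a sketch ($\lbol$ and the Eddington rate are the values quoted in the text; $\epsilon = 0.1$ is the adopted efficiency):

```python
C_CM_S = 2.998e10   # speed of light, cm/s
MSUN_G = 1.989e33   # grams per solar mass
YR_S = 3.156e7      # seconds per year

def mdot_from_lbol(L_bol_erg_s, eps=0.1):
    """Accretion rate implied by L_bol = eps * mdot * c^2, in Msun/yr."""
    return L_bol_erg_s / (eps * C_CM_S**2) / MSUN_G * YR_S

mdot = mdot_from_lbol(2.3e44)   # ~0.04 Msun/yr
m_E = mdot / 33.0               # ~0.001 of the Eddington rate from above
```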
Extrapolation of the best-fit X-ray spectral model to radio frequencies reveals good agreement with the measured 1.4 GHz and 4.8 GHz nuclear radio fluxes. Further, the continuous injection synchrotron model of @1987MNRAS.225..335H produced an acceptable fit to the X-ray, 1.4 GHz, and 4.8 GHz nuclear fluxes (see Figure \[fig:sync\]). These results suggest the nuclear X-ray source may be unresolved synchrotron emission from the jets which are obvious in the high-resolution 4.8 GHz radio image. Regardless, there is no indication the X-ray source is the remnant of a very dense, hot gas phase which might be associated with a prior or on-going accretion event.
AGN-BCG Interaction and Constraints on Star Formation {#sec:bcg}
-----------------------------------------------------
The BCG’s stellar structure and star formation properties were investigated using archival observations from the Hubble Space Telescope (HST), GALEX, and the XMM-Newton/Optical Monitor (OM). HST imaged [RBS 797]{} using the ACS/WFC instrument and the F606W (4500–7500 Å; [$V$]{}) and F814W (6800–9800 Å; [$I$]{}) filters. Images produced using the Hubble Legacy Archive pipeline version 1.0 were used for analysis. The [$V$]{}+[$I$]{} image is shown in Figure \[fig:hst\]. Comparison of the X-ray and HST images reveals the BCG coincides with the nuclear X-ray source. The BCG has the appearance of being bifurcated, perhaps due to a dust lane, and there is no evidence of an optical point source. There are also several distinct knots of emission in the halo, and a faint arm of emission extending south from the BCG. There are two ACS artifacts in the [$I$]{} image which begin at $\approx 2.7\arcs$ and $\approx 5.4\arcs$ from the BCG center. Within a $2\arcs$ aperture centered on the BCG, the measured magnitudes are $m_{{\ensuremath{V}}} = 19.3 \pm
0.7$ mag and $m_{{\ensuremath{I}}} = 18.2 \pm 0.6$ mag, consistent with non-HST measurements of @rbs1, indicating the photometry in this region is unaffected.
[RBS 797]{} was imaged in the far-UV (FUV; 1344–1786 Å) and near-UV (NUV; 1771–2831 Å) with GALEX, and in the near-UV with the XMM-Newton/Optical Monitor (OM) using filters UVW1 (2410–3565 Å) and UVM2 (1970–2675 Å). The pipeline-reduced GALEX observations of Data Release 5 were used for analysis, and the OM data were processed using SAS version 8.0.1. [RBS 797]{} is detected in all but the UVM2 observation as an unresolved source co-spatial with the optical and X-ray BCG emission. The individual filter fluxes are $f_{\rm{FUV}}
= 19.2 \pm 4.8 ~\mu$Jy, $f_{\rm{NUV}} = 5.9 \pm 2.1 ~\mu$Jy, $f_{\rm{UVW1}} = 14.6 \pm 4.6$ $\mu$Jy and $f_{\rm{UVM2}} < 117$ $\mu$Jy, all of which lie above the nuclear power-law emission we attribute to unresolved jets (see Section \[sec:nuc\] and Figure \[fig:sync\]).
A radial $({\ensuremath{V}}-{\ensuremath{I}})$ color profile of the central $2\arcs$ ($\approx
10$ kpc) was extracted from the images and fitted with the function $\Delta({\ensuremath{V}}-{\ensuremath{I}}) \log r + b$, where $\Delta({\ensuremath{V}}-{\ensuremath{I}})$ \[mag dex$^{-1}$\] is the color gradient, $r$ is radius, and $b$ \[mag\] is a normalization. The best-fit parameters are $\Delta({\ensuremath{V}}-{\ensuremath{I}}) =
-0.20 \pm 0.02$ and $b = 1.1 \pm 0.01$ for $\chi^2$(DOF) = 0.009(21), revealing a flat color profile.
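The gradient fit is an ordinary least-squares line in log radius; the noiseless mock colors below, evaluated at the quoted best fit, illustrate the model (the sample radii are assumed).

```python
import numpy as np

# Color model from the text: (V - I)(r) = Delta * log10(r) + b
r_kpc = np.array([1.0, 2.0, 3.0, 5.0, 7.0, 10.0])   # assumed sample radii
color = -0.20 * np.log10(r_kpc) + 1.1               # mock profile at the best fit

Delta, b = np.polyfit(np.log10(r_kpc), color, 1)    # slope = color gradient
```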
BCGs, and elliptical galaxies in general, have red centers and bluer halos due to the higher metallicity of the central stellar populations [@1990ApJS...73..637M]. BCGs in cooling flows with significant star formation have unusually blue cores relative to the quiescent halos [@rafferty06]. Our analysis shows a flat color profile, which is consistent with a modest level of nuclear star formation, but is inconsistent with star formation occurring at several tens of solar masses per year. Lacking calibrated colors, the color gradient analysis alone is ambiguous, since other factors, such as dust, emission line contamination in the passbands, and metallicity variations can alter the profile slope.
Star formation rates were estimated from the UV fluxes using the relations of @kennicutt2 and @salim2007. We estimate rates in the range $1 \dash 10 ~\msolpy$ (see Table \[tab:sfr\]), which are probably consistent with the flat color profile. These estimates should be considered upper limits because sources of significant uncertainty, such as AGN contamination, have been neglected.
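As an illustration of the UV-based rates, the widely used @kennicutt2 calibration SFR $= 1.4 \times 10^{-28} L_\nu$ (with $L_\nu$ in erg s$^{-1}$ Hz$^{-1}$) can be applied to the FUV flux. The luminosity distance below is an assumed round number for $z \approx 0.35$, and no K-correction, dust correction, or AGN subtraction is applied, so this is an upper-limit-style estimate.

```python
import math

MPC_CM = 3.086e24   # cm per Mpc

def sfr_uv_kennicutt(f_nu_uJy, d_L_mpc):
    """Kennicutt (1998) UV calibration: SFR = 1.4e-28 * L_nu [erg/s/Hz]."""
    f_nu = f_nu_uJy * 1.0e-29                        # uJy -> erg/s/cm^2/Hz
    L_nu = 4.0 * math.pi * (d_L_mpc * MPC_CM) ** 2 * f_nu
    return 1.4e-28 * L_nu                            # Msun/yr

# FUV flux from the text; d_L ~ 1900 Mpc is an assumed value for z ~ 0.35
sfr_fuv = sfr_uv_kennicutt(19.2, 1900.0)   # of order 10 Msun/yr
```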
The HST images have revealed a great deal of structure in the BCG. In order to investigate this structure, residual galaxy images were constructed by first fitting the [$V$]{} and [$I$]{} isophotes with ellipses using the [<span style="font-variant:small-caps;">iraf</span>]{} tool [<span style="font-variant:small-caps;">ellipse</span>]{}. Foreground stars and other contaminating sources were rejected using a combination of $3\sigma$ clipping and masking. The ellipse centers were fixed at the galaxy centroid, and ellipticity and position angle were fixed at $0.25 \pm 0.02$ and $-64\mydeg \pm 2\mydeg$, respectively – the mean values when they were free parameters. Galaxy light models were created using [<span style="font-variant:small-caps;">bmodel</span>]{} in [<span style="font-variant:small-caps;">iraf</span>]{} and subtracted from the corresponding parent image, leaving the residual images shown in Figure \[fig:subopt\]. A color map was also generated by subtracting the fluxed [$I$]{} image from the fluxed [$V$]{} image.
The close alignment of the optical substructure with the nuclear AGN outflow clearly indicates the jets are interacting with material in the BCG’s halo. An $\approx 1$ kpc radius spheroid $\approx 4$ kpc northeast of the nucleus resembles a galaxy which may be falling through the core and being stripped [see @2007ApJ...671..190S for example]. The numbered regions overlaid on the residual [$V$]{} image are the areas of the color map which have the largest color difference with surrounding galaxy light. Regions 1–5 are relatively the bluest with $\Delta({\ensuremath{V}}-{\ensuremath{I}}) = -0.40, -0.30,
-0.25, -0.22$, and $-0.20$, respectively. Regions 6–8 are relatively the reddest with $\Delta({\ensuremath{V}}-{\ensuremath{I}}) =$ +0.10, +0.15, and +0.18, respectively. Clusters like [RBS 797]{} with a core entropy less than $30 ~\ent$ often host a BCG surrounded by extended nebulae [@mcdonald10]. Like these systems, the residual [$I$]{} image reveals what appear to be 8–10 kpc long “whiskers” surrounding the BCG (regions 9–11). It is interesting that blue regions 1 and 4 reside at the point where the southern jet appears to be encountering whiskers 9 and 10.
It is unclear whether the structure is associated with dusty star formation, emission line nebulae, or both. The influence of optical emission lines on the photometry was estimated crudely in the nucleus using the ratio of line EWs taken from @rbs1 to filter widths. The nuclear contribution to the [$I$]{} image is estimated at $\approx 3\%$ using the scaled $\halpha$ line (see Section \[sec:cold\]), and the combined contribution of the remaining strong lines to the nuclear [$V$]{} image is $\approx 7\%$. The observed structures have magnitude differences that significantly exceed these values, which suggests that emission lines alone may not be responsible for the structure. However, because the stellar continuum drops rapidly into the halo of the galaxy, bright knots of nebular emission would have larger equivalent widths at larger radii, and thus may be able to explain the extended structure.
Conclusions {#sec:con}
===========
We have presented results from a study of the AGN outburst in the galaxy cluster [RBS 797]{}. The Chandra observations have enabled us to constrain the energetics of the AGN outburst and analyze different powering mechanisms. We have shown the following:
1. In addition to the two previously-known cavities near the cluster core, residual imaging reveals extensive structure in the ICM (Figure \[fig:subxray\]) associated with the AGN. The ICM substructure and deep cavity decrements lead us to speculate that the cavity system may be much larger and more complex than the present data allows us to constrain.
2. We find that the two central cavities have surface brightness decrements that are unusually deep and are inconsistent with the cavities being spheres whose centers lie in a plane perpendicular to our line of sight which passes through the central AGN. Motivated by the decrement analysis, we propose that the cavities are either highly-elongated structures in the plane of the sky, or result from the superposition of much larger structures lying along our line of sight.
3. Using the cavity decrements as a constraint, we estimate the total AGN outburst energy at $3 \dash 6 \times 10^{60}$ erg with a total power output of $3 \dash 6 \times 10^{45} ~\lum$. Because the cavities may be larger than we have assumed, we consider the energetics estimates to be lower limits. The thick, bright rims surrounding the cavities may also be signaling the presence of shocks, which would further boost the AGN energy output.
4. We show that the AGN can be plausibly powered by accretion of cold gas, but accretion of hot gas via the Bondi mechanism is almost certainly implausible. We also demonstrate that the outburst could be powered by tapping the energy stored in a maximally spinning SMBH.
5. Archival HST imagery has revealed a great deal of structure in the BCG associated with either star formation, nebular emission, or both. The association of the optical structure with the radio lobes and X-ray cavities indicates that the AGN is interacting with cooler gas in the host galaxy. While we are unable to determine if any regions of interest host star formation or line emission, the convergence of what appear to be extended optical filaments, bluish knots of emission, and the tip of one jet suggest there may be AGN-driven star formation.
KWC acknowledges financial support from L’Agence Nationale de la Recherche through grant ANR-09-JCJC-0001-01, and thanks David Gilbank, Sabine Schindler, Axel Schwope, and Chris Waters for helpful input. BRM was supported by a generous grant from the Canadian Natural Science and Engineering Research Council, and acknowledges support provided by the National Aeronautics and Space Administration through Chandra Award Number G07-8122X issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics Space Administration under contract NAS8-03060. MG acknowledges support from the grants ASI-INAF I/088/06/0 and CXO GO0-11003X. Some results are based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). This research has made use of NASA’s Astrophysics Data System (ADS), Extragalactic Database (NED), High Energy Astrophysics Science Archive Research Center (HEASARC), data obtained from the Chandra Data Archive, and software provided by the Chandra X-ray Center (CXC).
[*Facilities:*]{} , , , , ,
---
abstract: |
We proceed from the premise that the spectrum of elementary excitations in the normal component in Landau’s theory of superfluidity should depend on the superfluid helium temperature. This leads to generalization of the Landau superfluidity criterion. On this basis, taking into account available experimental data on inelastic neutron scattering, it is shown that, in addition to phonon–roton excitations, there is one more type of elementary excitations in superfluid helium, which we called helons. The energy spectrum with such a momentum dependence was first proposed by Landau. The helon energy spectrum shape and its temperature dependence make it possible to explain the singular behavior of the heat capacity of superfluid helium near its phase transition to the normal state.
PACS number(s): 67.25.de, 47.37.+q, 67.25.dj, 67.10.Fj\
address: |
$^1$Joint Institute for High Temperatures, Russian Academy of Sciences, 13/19, Izhorskaia Str., Moscow 125412, Russia;\
$^2$ Eindhoven University of Technology, P.O. Box 513, MB 5600 Eindhoven, The Netherlands; emails: [email protected],[email protected]
author:
- 'V.B.Bobrov$^2$, S.A.Trigger$^{1,2}$'
title: ELEMENTARY EXCITATIONS AND HEAT CAPACITY SINGULARITY IN SUPERFLUID HELIUM
---
According to the seminal phenomenological Landau theory \[1,2\], superfluid helium is a liquid consisting of superfluid and normal components. The superfluid component moves without friction and is not involved in the energy transport in the form of heat. The normal component moves with friction and is involved in heat transport. In this case, according to the Landau theory \[1\], the normal component is a gas of elementary excitations which are characterized by the dependence of the energy spectrum $\varepsilon(p)$ on the momentum $p$. If the flow velocity $V$ of the superfluid component reaches the critical velocity $V_{cr}$, determined from the condition $$\begin{aligned}
V_{cr}=min \left(\varepsilon(p)/p\right), \label{A1}\end{aligned}$$ superfluidity breakdown occurs. Thus, the superfluidity phenomenon cannot be observed at velocities $V>V_{cr}$. This statement \[1\], known as the Landau superfluidity criterion, is in qualitative agreement with experimental data on superfluid helium motion in capillaries (see \[3\] for more details). To describe the superfluid helium motion in capillaries quantitatively, the superfluid component inhomogeneity caused by boundary effects (see, e.g., \[4\]) should be taken into account. In this paper, we consider the infinite-medium model corresponding to the thermodynamic limit.
Thus, the Landau superfluidity criterion is in fact a superfluidity breakdown criterion, since it assumes the existence of superfluidity from the outset. Otherwise, considering that there are well-defined acoustic elementary excitations (phonons) in any liquid, we would obtain a nonzero critical velocity equal to the speed of sound in the corresponding liquid. To clarify the problem, it should be taken into account that the elementary excitation spectrum $\varepsilon$ in the normal component is a function not only of the momentum $p$, but also of the thermodynamic parameters of the system under consideration, e.g., the temperature $T$, i.e., $$\begin{aligned}
\varepsilon=\varepsilon(p;T).\label{A2}\end{aligned}$$ Hence, the critical velocity $V_{cr}$ determined from relation (1) is also a function of thermodynamic parameters, $V_{cr}=V_{cr}(T)$. Let us further take into account that the superfluidity phenomenon is absent at the temperature $T>T_\lambda$, where $T_\lambda$ is the superfluid transition temperature, i.e., liquid is normal. Therefore, it should be accepted that $$\begin{aligned}
V_{cr}(T>T_\lambda)=0 .\label{A3}\end{aligned}$$ Thus, we can formulate the generalized Landau superfluidity criterion exactly as the superfluidity criterion, rather than the superfluidity breakdown criterion, in the following form: if the spectrum of elementary excitations in liquid satisfies the conditions $$\begin{aligned}
V_{cr}(T)>0 \qquad \mbox{for} \qquad T<T_\lambda; \qquad V_{cr}(T)=0 \qquad \mbox{for}\qquad T>T_\lambda, \label{A4}\end{aligned}$$ then the corresponding liquid at temperatures $T<T_\lambda$ is superfluid; the superfluidity breakdown occurs at velocities $V>V_{cr}$.
As noted above, there are well-defined acoustic elementary excitations in any liquid, both normal and superfluid; therefore, the phonon spectrum of elementary excitations $\varepsilon(p)=cp$, where $c$ is the speed of sound, does not satisfy the generalized Landau superfluidity criterion (4). This means that one more branch of elementary excitations, differing essentially from the phonon spectrum, should exist in addition to phonons. Thus, we should introduce one more correction to the formulation of the generalized Landau superfluidity criterion, associated with the fact that several “branches” of elementary excitations can exist in a liquid. Among all values of the critical velocity $V^\alpha_{cr}(T)$ determined by the individual spectra (spectrum index $\alpha$), our interest is only in the one providing the minimum. Hence, the quantity $V_{cr}(T)$ appearing in relation (4) is determined from the condition $$\begin{aligned}
V_{cr}(T)=min_{\alpha}V^\alpha_{cr}(T); \qquad V^\alpha_{cr}(T)= min \left(\varepsilon^\alpha(p,T)/p\right).\label{A5}\end{aligned}$$ We note that, according to (4) and (5), two cases are possible:\
- either there is an excitation branch with $V^\alpha_{cr}=0$ in normal liquid, which differs essentially from the phonon spectrum and, during the transition to the superfluid state, yields $V^\alpha_{cr}>0$,\
- or there is one more branch of elementary excitations in the normal component of superfluid liquid, which differs essentially from the phonon spectrum and disappears at temperatures $T>T_\lambda$.
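The two cases above, together with criterion (4)-(5), can be illustrated with a toy numerical model (all parameter values below are illustrative choices of ours, not fitted to helium): an always-present phonon branch $\varepsilon = cp$ and a gapped branch $\varepsilon = \Delta(T) + p^2/2\mu$ whose gap closes at $T_\lambda$.

```python
import numpy as np

# Toy check of the generalized criterion (4)-(5): two branches, one of
# which loses its gap at T_lambda. All parameter values are illustrative.
c_s, mu, Delta0, T_lambda, gamma = 240.0, 1.0, 1.0, 2.17, 1.0
p = np.linspace(1e-6, 10.0, 100000)

def V_cr(T):
    """Eq. (5): minimum of eps(p)/p over p and over the branches alpha."""
    Delta = Delta0 * max(0.0, 1.0 - (T / T_lambda)**gamma)
    branches = [c_s * p, Delta + p**2 / (2.0 * mu)]
    return min(float(np.min(eps / p)) for eps in branches)

# V_cr(T) > 0 below T_lambda and vanishes above it, as required by (4):
print(V_cr(1.0), V_cr(3.0))
```

With these inputs the gapped branch always provides the minimum, and its critical velocity goes to zero continuously as the gap closes at $T_\lambda$.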
Before turning to the discussion of the possible shape of the energy spectrum of the additional branch of elementary excitations differing essentially from phonons, let us consider the situation with the phonon–roton spectrum of elementary excitations $\varepsilon^{ph-rot}(p)$ (see Fig. 1) in superfluid helium, which was proposed by Landau in \[2\]. The shape of the phonon–roton spectrum of elementary excitations was confirmed in experiments on inelastic neutron scattering in superfluid helium (see, e.g., \[5,6\]).
Furthermore, numerous experiments on inelastic neutron scattering (see, e.g., \[7\]-\[9\]) show that the phonon–roton spectrum of elementary excitations depends very weakly on temperature up to $T_\lambda =2.17$ K for all values of momenta, including the phonon and roton spectral regions. Moreover, phonon–roton excitations also exist at temperatures $T>T_\lambda$, where liquid helium is in the normal state \[10\]. Thereby, according to the above discussion, there is reason to believe that the phonon–roton spectrum of elementary excitations is not directly related to the explanation of the superfluidity phenomenon in liquid helium. This point of view is confirmed by the experimental results on inelastic neutron scattering in liquid metals (Fig. 1), where the phonon–roton spectrum of elementary excitations was detected (see, e.g., \[11\]-\[14\]), as was noticed in \[15\]. Similar excitations were also experimentally detected in the two-dimensional Fermi liquid \[16\].
![[]{data-label="Fig.1"}](Triger_fig1.eps){width="6cm"}
Let us now note that Landau in his first paper \[1\] proposed to consider, in addition to phonons, elementary excitations which he initially called “rotons”, with the spectrum $$\begin{aligned}
\varepsilon^r=\Delta^r+\frac{p^2}{2\mu^r}.\label{A6}\end{aligned}$$ Let us refer to this type of excitation as “Landau rotons”, in contrast to the later introduced rotons \[2\] that form part of the single phonon–roton branch of excitations. In (6), $\Delta^r$ is the energy of the Landau roton with the effective mass $\mu^r$ at zero momentum $p=0$. It is clear that the critical velocity $V^r_{cr}$ of Landau rotons, according to (1), (5), and (6), is given by $$\begin{aligned}
V_{cr}^r=\sqrt{2\Delta^r/\mu^r}.\label{A7}\end{aligned}$$ Assuming that $\Delta^r$ in (6) depends on temperature and satisfies the condition $$\begin{aligned}
\Delta^r (T)=0 \qquad \mbox{for}\qquad T>T_\lambda, \label{A8}\end{aligned}$$ the spectrum of such excitations satisfies the generalized Landau superfluidity criterion (4).
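As a quick numerical sanity check (arbitrary units, illustrative values of $\Delta^r$ and $\mu^r$), the minimization in Eq. (1) applied to the spectrum (6) reproduces the closed form $\sqrt{2\Delta^r/\mu^r}$:

```python
import numpy as np

# Minimize eps(p)/p for eps(p) = Delta + p^2/(2*mu), Eq. (6); the minimum
# sits at p* = sqrt(2*mu*Delta) and equals sqrt(2*Delta/mu), Eq. (7).
Delta, mu = 1.0, 2.0          # illustrative values, arbitrary units
p = np.linspace(1e-4, 10.0, 400000)
v_num = np.min((Delta + p**2 / (2.0 * mu)) / p)
v_closed = np.sqrt(2.0 * Delta / mu)
print(v_num, v_closed)
```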
Note that, if condition (8) holds, elementary excitations with energy spectrum (6) exist only in superfluid helium, in contrast to the phonon–roton spectrum characteristic of any liquid. To distinguish elementary excitations with the spectrum (6)–(8) both from the “rotons” of the phonon–roton spectrum (as the term is used in the literature) and from the vortical “Landau rotons” now dropped from consideration, and taking into account that the vortical nature of these excitations in an infinite medium is not obvious, in what follows we refer to these elementary excitations as “helons” (index $h$).
The existence of helons with the spectrum (6)–(8) is in fact confirmed by experiments on inelastic neutron scattering \[17,18\] (see Fig. 2), in which, in addition to the maxima in the dynamic structure factor of superfluid helium corresponding to the phonon–roton spectrum of elementary excitations, further maxima were detected whose positions appeared close to the spectrum of the free helium atom $\varepsilon^{(a)}(p)=p^2/2m$ (here $m$ is the helium atom mass) in the region of transferred momenta $q>0.5 {\AA}^{-1}$. The corresponding experimental data were called the spectrum of “single-atom scattering”. It is clear that the spectrum of the free helium atom $\varepsilon^{(a)}(p)$ (as well as any other spectrum with the same $p$-dependence at small $p$) does not satisfy the generalized Landau superfluidity criterion (4), (5). Therefore, new experiments are required at smaller values of $q=p/\hbar$ and at different temperatures, which could prove the existence of helons and give an estimate of the value $\Delta^{(h)}(T)$.
![[]{data-label="Fig.2"}](Triger_fig2.eps){width="6cm"}
In attempts to theoretically explain the experimentally observed maxima in the dynamic structure factor of superfluid helium, corresponding to the “single-atom scattering” spectrum, the possible existence of helons was not taken into consideration (see, e.g.,\[20\]-\[22\] and references therein). We also note that theoretical models were repeatedly proposed in microscopic descriptions of superfluid helium, in which the spectrum of elementary excitations, similar to the spectrum of helons arises (see, e.g.,\[23\]-\[26\] and references therein); however, the existence of the corresponding maximum in the dynamic structure factor has not yet been confirmed in these models.
To provide condition (8), there is an appropriate quantity in the Landau theory, namely the superfluid component density $n_s$, for which the condition $$\begin{aligned}
n_s(T)=0 \qquad \mbox{for}\qquad T>T_\lambda\label{A9}\end{aligned}$$ is satisfied. Then we can assume that $\Delta^{(h)} \simeq [n_s]^\gamma$, $\gamma>0$, at $T<T_\lambda$.
In this case, for dimensionality reasons, to determine the quantity $\Delta^{(h)}(T)$, several quantities with energy dimension can be constructed, based on the superfluid component density $n_s$, in particular, $\hbar^2 n_s^{2/3}(T)/m$ and $\hbar^2 L n_s(T)/m$, where $L$ is the so-called scattering length which is completely defined by the interparticle interaction potential of helium atoms.
Thus, there is good reason to believe that, in addition to phonon–roton elementary excitations, there are helons with spectrum (6)–(8) in the normal component of superfluid helium in the absence of boundary effects. Let us consider consequences from this statement. According to the Landau superfluidity theory \[1,2\], the free energy per unit volume of superfluid helium at temperature $T$ can be written as $$\begin{aligned}
F=E_0+\sum_a F^{(a)}, \qquad F^{(a)}= T \int \frac{d^3 p}{(2\pi\hbar)^3}\ln\left\{ 1-\exp\left[-\frac{\varepsilon^{(a)}(p;T)}{T}\right]\right\}. \label{A10}\end{aligned}$$ Here $E_0$ is the ground state energy per unit volume of superfluid helium, which depends only on its density $n$ equal to the sum of the densities of superfluid and normal components, $n=n_N+n_s$. The quantity $F^{(a)}$ is the free energy per unit volume of superfluid helium, corresponding to elementary excitations of type $a$ with energy spectra $\varepsilon^{(a)}(p;T)$ and corresponding to helons (6)–(8) and phonon–roton excitations.
It immediately follows from (10) that the average (internal) energy per unit volume of superfluid helium is given by $$\begin{aligned}
E=E_0+\sum_a E^{(a)}, \qquad E^{(a)}= \int \frac{d^3 p}{(2\pi\hbar)^3}\, \frac{\varepsilon^{(a)}(p;T)-T[\partial \varepsilon^{(a)}(p;T)/\partial T]}{\exp\left[\varepsilon^{(a)}(p;T)/T\right]-1}, \label{A11}\end{aligned}$$ since $E=F-T(\partial F/\partial T)_V$. In turn, from (11), it is easy to verify that the heat capacity $c_V = (\partial E/\partial T)_V$ of superfluid helium, by virtue of condition (8), has a singularity at the temperature $T=T_\lambda$ for the helon energy spectrum, caused by the temperature dependence of $\Delta^{(h)}(T)$: $$\begin{aligned}
\lim_{T\rightarrow (T_\lambda-0)}c_V=\infty, \qquad \mbox{for} \qquad \lim_{T\rightarrow (T_\lambda-0)}\Delta^{(h)} (T)= 0, \qquad \lim_{T\rightarrow (T_\lambda-0)}\frac{d \Delta^{(h)} (T)}{d T} \neq 0 \label{A12}\end{aligned}$$ which is widely known in the literature as the $\lambda$-curve of the heat capacity.
The simple calculation for the singular part of the heat capacity below $T_\lambda$ leads to the temperature dependence $$\begin{aligned}
c_V\rightarrow\frac {A}{\sqrt{1-\left(\frac{T}{T_\lambda}\right)^\gamma}},\label{A13}\end{aligned}$$ where $A= \gamma^2 [2\mu^{(h)} \Delta^{(h)}(T=0)]^{3/2}/8\pi\hbar^3$. Here we assumed the temperature dependence $\Delta^{(h)}(T)=\Delta^{(h)}(T=0) [1-(T/T_\lambda)^\gamma]$ with $\gamma>0$ and used the small-$p$ expansion of the integrand for $c_V$ suggested in \[27\]. The value $\Delta^{(h)}(T=0)$ is a functional of the interparticle interaction potential and tends to zero when the interaction disappears. In that case the singularity is absent and the heat capacity is continuous, with the well-known kink at $T=T_\lambda$. Probably, at the transition point there is an infinite jump of the heat capacity. These assumptions are based on the disappearance of helons at $T=T_\lambda$, as well as on the character of the experimental data in the vicinity of the transition point (see, e.g., \[3\], \[28\], \[29\]). An explanation of the behavior of the heat capacity above $T_\lambda$ requires a microscopic treatment of the correlation effects in the quantum liquid.
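The divergence (12)-(13) can be reproduced numerically from Eq. (11) for the helon branch alone. The sketch below uses $\hbar=k_B=1$, $\gamma=1$, and illustrative values of $\mu^{(h)}$ and $\Delta^{(h)}(0)$ chosen by us; only the qualitative growth of $c_V$ as $T\to T_\lambda-0$ is meaningful.

```python
import numpy as np

# Helon contribution to E, Eq. (11), with eps(p;T) = Delta(T) + p^2/(2*mu)
# and Delta(T) = Delta0*(1 - T/T_lambda) (i.e. gamma = 1). hbar = k_B = 1.
mu, Delta0, T_lambda = 1.0, 1.0, 2.17
p = np.linspace(1e-6, 30.0, 200000)
dp = p[1] - p[0]

def energy(T):
    # integrand of Eq. (11): (eps - T*d(eps)/dT) / (exp(eps/T) - 1)
    Delta = Delta0 * (1.0 - T / T_lambda)
    dDelta_dT = -Delta0 / T_lambda
    eps = Delta + p**2 / (2.0 * mu)
    integrand = p**2 * (eps - T * dDelta_dT) / np.expm1(eps / T)
    return integrand.sum() * dp / (2.0 * np.pi**2)

def c_V(T, h=1e-4):
    # heat capacity c_V = dE/dT by a central finite difference
    return (energy(T + h) - energy(T - h)) / (2.0 * h)

# c_V grows without bound as T approaches T_lambda from below:
print(c_V(1.5), c_V(2.1), c_V(2.16))
```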
Let us pay attention that the phonon–roton spectrum of excitations does not exhibit a similar anomaly of the specific heat $c_V$ even taking into account its temperature dependence due to the linearity of the phonon–roton spectrum at small momenta, $\varepsilon^{ph-rot}(p\rightarrow 0)\rightarrow c p$.
Thus, according to the above consideration, in addition to elementary excitations with the phonon–roton energy spectrum in superfluid helium, there are helons (6)–(8) which satisfy the generalized Landau superfluidity criterion. The consideration of the temperature dependence of the helon energy spectrum allows explanation of the anomalous behavior of the specific heat of superfluid helium in the vicinity of the phase transition to the normal state.
Acknowledgment {#acknowledgment .unnumbered}
==============
This study was supported by the Netherlands Organization for Scientific Research (NWO) and the Russian Foundation for Basic Research, projects no. 12-08-00822-a and no. 12-02-90433-Ukr-a.\
[99]{}
\[1\] L.Landau, J.Phys. (USSR) 5, 71 (1941)\
\[2\] L.Landau, J.Phys. (USSR) 11, 91 (1947)\
\[3\] J.G.Daunt and R.S.Smith, Rev.Mod.Phys. 26, 172 (1954)\
\[4\] V.L.Ginzburg and A.A.Sobyanin, Sov.Phys.Usp. 19, 773 (1977); 31, 239 (1988)\
\[5\] H.Palevsky, K.Otnes, and K.E.Larsson, Phys.Rev. 112, 11 (1958)\
\[6\] D.G.Henshaw, Phys.Rev.Lett.1, 127 (1958)\
\[7\] E.F.Tabbot, H.R.Glyde, W.G.Stirling, and E.C.Svensson, Phys.Rev. B38, 11229 (1988)\
\[8\] W.G.Stirling, and H.R.Glyde, Phys.Rev. B42, 4224 (1990)\
\[9\] K.H.Andersen, W.G.Stirling, R.Schern, A.Stanault, B.Fak, H.Godfrin, and A.J.Dianoux, J.Phys.: Cond.Matt. 6, 821 (1994)\
\[10\] K.S.Pedersen and K.Carneiro, Phys.Rev. B22, 191 (1980)\
\[11\] J.R.D. Copley, J.M. Rowe, Phys. Rev. Lett. 32, 49 (1974)\
\[12\] W.Glaser, S.Hagen, U.Loffler, J.-B.Suck, and W.Schrommers, in The Properties of Liquid Metals (Taylor and Francis, London, 1973)\
\[13\] J.-B.Suck, Condensed Matt.Phys. 11, 7 (2008)\
\[14\] S.J.Cocking and P.A.Egelstaff, Phys.Lett. 16, 130 (1965)\
\[15\] A.M.Belyayev, V.B.Bobrov, and S.A.Trigger, J.Phys.: Cond.Matt. 1, 9665 (1989)\
\[16\] H. Godfrin, M. Meschke, H.-J. Lauter et al., Nature, 483, 576 (2012)\
\[17\] N.M.Blagoveshchenskii, E.B.Dokukin, Zh.A.Kozlov, and V.A.Parfenov, Sov.Phys. JETP Lett. 31, 4 (1980)\
\[18\] E.C.Svensson and D.C.Tennant, Jap. J. Appl. Phys. 26, 31 (1987)\
\[19\] A.Griffin and S.H.Payne, J. Low Temp. Phys. 64, 155 (1986)\
\[20\] R.Sridhar, Phys.Rep. 146, 259 (1987)\
\[21\] V.B.Bobrov, S.A.Trigger, and Yu.P.Vlasov, Physica B203, 95 (1994)\
\[22\] A.Griffin, Excitations in a Bose-condensed Liquid, Cambridge University Press (2005)\
\[23\] S.A.Trigger and P.P.J.M.Schram, Physica B228, 107 (1996)\
\[24\] C.-H.Zhang and H.A.Fertig, Phys.Rev. A74, 023613 (2006)\
\[25\] P.Navez, Physica A387, 4070 (2008)\
\[26\] V.B.Bobrov, S.A.Trigger, and I.M.Yurin, Phys.Lett. A374, 1938 (2010)\
\[27\] L.D.Landau and E.M.Lifshitz, Statistical Physics, part 1 (Butterworth-Heinemann, Oxford, 1980)\
\[28\] V.D. Arp, Int. J. Termophys. 26, 1477 (2005)\
\[29\] R.J. Donnelly, C.F. Barenghi, J. Phys. Chem. Ref. Data 27, 1217 (1998)\
---
abstract: 'This study deals with the problem of pricing compound options when the underlying asset follows a mixed fractional Brownian motion with jumps. An analytic formula for compound options is derived under the risk neutral measure. Then, these results are applied to value extendible options. Moreover, some special cases of the formula are discussed and numerical results are provided.'
address: 'Department of Mathematics and Statistics, University of Vaasa, P.O. Box 700, FIN-65101 Vaasa, FINLAND'
author:
- Foad Shokrollahi
bibliography:
- '../../reference10.bib'
title: Pricing compound and extendible options under mixed fractional Brownian motion with jumps
---
Introduction {#sec:1}
============
A compound option is a standard option whose underlying asset is itself a standard option. Compound options have been used extensively in corporate finance. When the total value of a firm’s assets is regarded as the risky underlying asset, the various corporate securities can be valued as claims contingent on the underlying asset, and an option on such a security is termed a compound option. Compound option models were first used by Geske [@geske1979valuation] to value an option on a share of common stock. Roll [@roll1977analytic] extended Geske’s work and obtained a closed-form solution for the price of an American call. Selby and Hodges [@selby1987evaluation] studied the valuation of compound options.
Extendible options are a generalized form of compound options whose maturities can be extended on the maturity date, at the choice of the option holder, and this extension may require the payment of an additional premium. They are widely applied in financial fields such as real estate, junk bonds, warrants with exercise-price changes, and shared-equity mortgages, and many researchers have therefore developed theoretical models for pricing these options.
Early valuations of extendible bonds were presented by Brennan et al. [@brennan] and Ananthanarayanan et al. [@ananthanarayanan]. Longstaff [@longstaff] extended their work to develop a set of pricing models for a wide variety of extendible options. Since these models assume that the asset price follows geometric Brownian motion, they cannot capture the abnormal fluctuations in the asset price that occur when important new information arrives. Merton [@merton] considered the impact of sudden events on the asset price in the financial market and proposed a geometric Brownian motion with jumps to match the abnormal fluctuations of financial asset prices, which he introduced into the derivation of the option pricing model. Based on this theory, Dias and Rocha [@dias] considered the problem of pricing extendible options under petroleum concessions in the presence of jumps. Kou [@kou2002jump] and Cont and Tankov [@cont] also considered the problem of pricing options in a jump-diffusion environment in a larger setting. Moreover, Gukhal [@gukhal] derived a pricing model for extendible options when the asset dynamics are driven by a jump-diffusion process. Hence, the analysis of compound and extendible options by applying jump processes is a significant issue and provides the motivation for this paper.
All the research above assumes that the logarithmic returns of the asset are independent, identically distributed normal random variables. However, empirical studies have demonstrated that the distributions of logarithmic returns in asset markets generally reveal excess kurtosis, with more probability mass around the origin and in the tails and less in the flanks than would occur for normally distributed data [@cont]. Financial return series are nonnormal, nonindependent, nonlinear and self-similar, with heavy tails in both autocorrelations and cross-correlations, and exhibit volatility clustering [@huang; @cajueiro; @kang1; @kang2; @ding]. Since fractional Brownian motion $(FBM)$ has two substantial features, self-similarity and long-range dependence, it is well suited to capturing the behavior of financial assets [@podobnik; @carbone; @wang1; @wang2; @xiao3]. Unfortunately, because $FBM$ is neither a Markov process nor a semimartingale, we are unable to apply the classical stochastic calculus to analyze it [@bjork]. To get around this problem and still take into account the long-memory property, it is reasonable to use the mixed fractional Brownian motion $(MFBM)$ to capture the fluctuations of the financial asset [@cheridito1; @el]. The $MFBM$ is a linear combination of Brownian motion and $FBM$ processes. Cheridito [@cheridito1] proved that, for $H \in(3/4,1)$, the mixed model with dependent Brownian motion and $FBM$ is equivalent to one with Brownian motion alone, and hence it is arbitrage-free. For $H\in(\frac{1}{2},1)$, Mishura and Valkeila [@mishura2002absence] proved that the mixed model is arbitrage-free.
In this paper, to capture the long-range property, to exclude arbitrage in the $FBM$ environment, and to account for the jump or discontinuous component of asset prices, we consider the problem of pricing compound options in a jump mixed fractional Brownian motion $(JMFBM)$ environment. We then apply the result to value extendible options. We also provide representative numerical results. The $JMFBM$ is based on the assumption that the underlying asset price is generated by a two-part stochastic process: (1) small, continuous price movements are generated by a $MFBM$ process, and (2) large, infrequent price jumps are generated by a Poisson process. This two-part process is intuitively appealing, as it is consistent with an efficient market in which major information arrives infrequently and randomly. The rest of this paper is organized as follows. In Section \[sec:2\], we briefly state some definitions related to $MFBM$ that will be used in the forthcoming sections. In Section \[sec:2-1\], we analyze the problem of pricing compound options whose values follow a $JMFBM$ process and present an explicit pricing formula for compound options. In Section \[sec:3\], we derive an analytical valuation formula for pricing extendible options by the compound option approach with a single extendible maturity under the risk-neutral measure, and then extend this result to the valuation of an option with $N$ extendible maturities. Section \[sec:4\] deals with simulation studies for our pricing formula; the comparison of our $JMFBM$ model with traditional models is also undertaken in this section. Section \[sec:5\] concludes.
Auxiliary facts {#sec:2}
===============
In this section we recall some definitions and results which we need for the rest of paper [@mishura2002absence; @el; @xiao3].
**Definition 2.1:** A $MFBM$ of parameters $\epsilon, \alpha$ and $H$ is a linear combination of $FBM$ and Brownian motion, defined on a probability space $(\Omega ,F,P)$ for any $t\in
R^+$ by: $$\begin{aligned}
M_t^H=\epsilon B_t+\alpha B_t^H,\end{aligned}$$ where $B_t$ is a Brownian motion, $B_t^H$ is an independent $FBM$ with Hurst parameter $H\in(0,1)$, and $\epsilon$ and $\alpha$ are two real constants such that $(\epsilon,\alpha)\neq (0,0)$.\
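For intuition, a $MFBM$ can be sampled exactly on a fixed time grid: the $FBM$ part from a Cholesky factor of its covariance $E[B_t^H B_s^H]=\frac{1}{2}(t^{2H}+s^{2H}-|t-s|^{2H})$, and the Brownian part from independent Gaussian increments. This is our illustrative sketch, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def mfbm_paths(t, H, eps, alpha, n_paths):
    """Exact joint samples of M_t^H = eps*B_t + alpha*B_t^H on grid t."""
    t = np.asarray(t, dtype=float)
    # FBM covariance matrix and its Cholesky factor (tiny jitter for PD)
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                 - np.abs(t[:, None] - t[None, :])**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))
    fbm = L @ rng.standard_normal((len(t), n_paths))
    # independent Brownian motion built from Gaussian increments
    dB = np.sqrt(np.diff(t, prepend=0.0))[:, None] \
         * rng.standard_normal((len(t), n_paths))
    return eps * np.cumsum(dB, axis=0) + alpha * fbm

paths = mfbm_paths(np.linspace(0.05, 1.0, 20), H=0.8, eps=1.0,
                   alpha=1.0, n_paths=40000)
# by independence, Var(M_1^H) = eps^2 * 1 + alpha^2 * 1^{2H} = 2
print(paths[-1].var())
```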
Consider a frictionless continuous time economy where information arrives both continuously and discontinuously. This is modeled as a continuous component and as a discontinuous component in the price process. Assume that the asset does not pay any dividends. The price process can hence be specified as a superposition of these two components and can be represented as follows:
$$\begin{aligned}
dS_t&=&S_t(\mu-\lambda k) dt+\sigma S_tdB_t\nonumber\\
&+&\sigma
S_tdB_t^H+(J-1)S_tdN_t,\,0<t\leq T,\,S_{T_0}=S_0,
\label{eq:1}\end{aligned}$$
where $\mu,\sigma, \lambda$ are constants, $B_t$ is a standard Brownian motion, $B_t^H$ is an independent $FBM$ with Hurst parameter $H$, $N_t$ is a Poisson process with rate $\lambda$, $J-1$ is the proportional change due to a jump, and $\ln J\sim N(\mu_J, \sigma_J^2)$ with $\mu_J=\ln(1+k)-\frac{1}{2}\sigma_J^2$, where $k=E[J-1]$ is the expected proportional jump size. The Brownian motion $B_t$, the $FBM$ $B_t^H$, the Poisson process $N_t$ and the jump amplitude $J$ are mutually independent.
Using the Itô lemma [@li], the solution of the stochastic differential equation (\[eq:1\]) under the risk-neutral measure is
$$\begin{aligned}
S_t=S_0\exp\Big[(r-\lambda k)t+\sigma B_t+\sigma B_t^H-\frac{1}{2}\sigma^2t-\frac{1}{2}\sigma^2t^{2H}\Big]J(N(t)).
\label{eq:2}\end{aligned}$$
where $J(n)=\prod_{i=1}^nJ_i$ for $n\geq 1$ and $J(0)=1$; the $J_i$ are independent and identically distributed, and $N(t)$ is Poisson distributed with parameter $\lambda t$. Let $x_t=\ln\frac{S_t}{S_0}$. From Eq. (\[eq:2\]) we easily get
$$\begin{aligned}
dx_t=\big(r-\lambda k-\frac{1}{2}\sigma^2-H\sigma^2t^{2H-1}\big)dt+\sigma dB
_t+\sigma dB_t^H+\ln(J)dN_t.
\label{eq:3}\end{aligned}$$
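Since $B_T$ and $B_T^H$ are independent Gaussians, the continuous part of the exponent in (2) at a fixed horizon $T$ (taking $T_0=0$) is a single normal with variance $\sigma^2(T+T^{2H})$, which makes terminal prices easy to sample by Monte Carlo. The sketch below (our illustration, with made-up parameter values) also checks the risk-neutral consistency $E[e^{-rT}S_T]=S_0$:

```python
import numpy as np

rng = np.random.default_rng(7)

def terminal_price(S0, r, sigma, H, lam, k, sigma_J, T, n_paths):
    """Sample S_T from the closed-form solution (2), with T_0 = 0."""
    mu_J = np.log(1.0 + k) - 0.5 * sigma_J**2   # so that E[J] = 1 + k
    var_c = sigma**2 * (T + T**(2 * H))         # continuous-part variance
    gauss = np.sqrt(var_c) * rng.standard_normal(n_paths)
    n_jumps = rng.poisson(lam * T, n_paths)
    # sum of n i.i.d. normal jump logs ~ N(n*mu_J, n*sigma_J^2)
    jumps = (mu_J * n_jumps
             + sigma_J * np.sqrt(n_jumps) * rng.standard_normal(n_paths))
    return S0 * np.exp((r - lam * k) * T - 0.5 * var_c + gauss + jumps)

S_T = terminal_price(100.0, 0.05, 0.2, 0.8, 0.5, 0.1, 0.25, 1.0, 400000)
print(np.exp(-0.05) * S_T.mean())   # should be close to S0 = 100
```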
Consider a European call option with maturity $T$ and the strike price $K$ written on the stock whose price process evolves as in Eq. (\[eq:1\]). The value of this call option is known from [@shokrollahi1] and is given by
$$\begin{aligned}
&&C(S_0,K,T-T_0)\nonumber\\
&&=\sum_{n=0}^\infty\frac{e^{-\lambda'(T-T_0)}(\lambda'(T-T_0))^n}{n!}\Big[ S_0\Phi(d_1)-Ke^{-r_n(T-T_0)}\Phi(d_2)\Big],
\label{eq:4}\end{aligned}$$
where $$\begin{aligned}
d_1&=&\frac{\ln\frac{S_0}{K}+r_n(T-T_0)+\frac{1}{2}[\sigma^2(T-T_0)+\sigma^2(T^{2H}-T_0^{2H})+n\sigma_J^2]}{\sqrt{\sigma^2(T-T_0)+\sigma^2(T^{2H}-T_0^{2H})+n\sigma_J^2}},\nonumber\\
d_2&=&d_1-\sqrt{\sigma^2(T-T_0)+\sigma^2(T^{2H}-T_0^{2H})+n\sigma_J^2}\nonumber,
\label{eq:5}\end{aligned}$$
$\lambda'=\lambda (1+k)$, $r_n=r-\lambda k+\frac{n\ln(1+k)}{T-T_0}$, and $\Phi(\cdot)$ is the cumulative normal distribution function.
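A direct implementation of the series (4) can be sketched as follows (our illustration, with $T_0=0$ and the Poisson sum truncated at `n_max`; each term is a Black-Scholes-type price at the jump-adjusted rate $r_n$, as in Merton-type jump formulas):

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def jmfbm_call(S0, K, r, sigma, H, lam, k, sigma_J, T, n_max=60):
    """Series formula (4) for the JMFBM European call, T_0 = 0."""
    lam_p = lam * (1.0 + k)
    total = 0.0
    for n in range(n_max + 1):
        weight = math.exp(-lam_p * T) * (lam_p * T)**n / math.factorial(n)
        r_n = r - lam * k + n * math.log(1.0 + k) / T
        # total variance: Brownian + FBM + jump contributions
        v = sigma**2 * T + sigma**2 * T**(2 * H) + n * sigma_J**2
        d1 = (math.log(S0 / K) + r_n * T + 0.5 * v) / math.sqrt(v)
        d2 = d1 - math.sqrt(v)
        total += weight * (S0 * Phi(d1) - K * math.exp(-r_n * T) * Phi(d2))
    return total

print(jmfbm_call(100.0, 100.0, 0.05, 0.2, 0.8, 0.5, 0.1, 0.25, 1.0))
```

In the no-jump limit $\lambda=0$ only the $n=0$ term survives and the formula reduces to a Black-Scholes price with total variance $\sigma^2(T+T^{2H})$.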
Compound options {#sec:2-1}
================
In order to derive a compound option pricing formula in a jump mixed fractional market, we make the following assumptions.
1. There are no transaction costs or taxes and all securities are perfectly divisible;
2. security trading is continuous;
3. there are no riskless arbitrage opportunities;
4. the short-term interest rate $r$ is known and constant through time;
5. the underlying asset price $S_t$ is governed by the stochastic differential equation (\[eq:1\]).
Consider a compound call option written on the European call $C(K,T_2)$ with expiration date $T_1$ and exercise price $K_1$, where $T_1<T_2$. Let $CC\left[C(K,T_2), K_1, T_1\right]$ denote this compound option. This compound option is exercised at time $T_1$ when the value of the underlying call, $C(S_1, K, T_1, T_2)$, exceeds the strike price $K_1$. When $C(S_1, K, T_1, T_2)<K_1$, it is not optimal to exercise the compound option, which hence expires worthless. The asset price at which one is indifferent between exercising and not exercising is specified by the following relation: $$\begin{aligned}
C(S_1, K, T_1, T_2)=K_1.
\label{eq:6}\end{aligned}$$
Let $S_1^*$ denote this indifference price, which can be obtained as the numerical solution of Eq. (\[eq:6\]). When it is optimal to exercise the compound option at time $T_1$, the option holder pays $K_1$ and receives the European call $C(K, T_1, T_2)$. This European call can in turn be exercised at time $T_2$ when $S_T$ exceeds $K$ and expires worthless otherwise. Hence, the cashflows to the compound option are an outflow of $K_1$ at time $T_1$ when $S_1>S_1^*$, a net cashflow at time $T_2$ of $S_T -K$ when $S_1>S_1^*$ and $S_T>K$, and none in the other states. The value of the compound option is the expected present value of these cashflows as follows: $$\begin{aligned}$$
&&CC\left[C(K,T_2), K_1, T_0, T_1\right]\nonumber\\
&=&E_{T_0}\left[e^{-r(T_2-T_0)}(S_T-K)\textbf{1}_{S_T>K}\right]+E_{T_0}\left[e^{-r(T_1-T_0)}(-K_1)\textbf{1}_{S_1>S_1^*}\right]\nonumber\\
&=&E_{T_0}\left[e^{-r(T_1-T_0)}E_{T_1}\left[e^{-r(T_2-T_1)}(S_T-K)\textbf{1}_{S_T>K}\right]\textbf{1}_{S_1>S_1^*}\right]\nonumber\\
&&-E_{T_0}\left[e^{-r(T_1-T_0)}K_1\textbf{1}_{S_1>S_1^*}\right]\nonumber\\
&=&E_{T_0}\left[e^{-r(T_2-T_0)}C(S_1, K, T_1, T_2)\textbf{1}_{S_1>S_1^*}\right]-E_{T_0}\left[e^{-r(T_1-T_0)}K_1\textbf{1}_{S_1>S_1^*}\right]
\label{eq:7}\end{aligned}$$ where $C(S_1, K, T_1, T_2)$ is given in Eq. (\[eq:4\]).
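Eq. (6) has no closed-form solution, but since the call price is strictly increasing in $S_1$ the root $S_1^*$ is unique and a bisection search finds it. For brevity this sketch uses the no-jump ($\lambda=0$) special case of Eq. (4) as the call value; the full series could be substituted for `call` unchanged. Parameter values are illustrative.

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call(S1, K, r, sigma, H, T1, T2):
    """C(S1, K, T1, T2) in the no-jump (lambda = 0) special case of (4)."""
    v = sigma**2 * (T2 - T1) + sigma**2 * (T2**(2 * H) - T1**(2 * H))
    d1 = (math.log(S1 / K) + r * (T2 - T1) + 0.5 * v) / math.sqrt(v)
    return S1 * Phi(d1) - K * math.exp(-r * (T2 - T1)) * Phi(d1 - math.sqrt(v))

def indifference_price(K, K1, r, sigma, H, T1, T2, lo=1e-8, hi=1e4):
    # bisection: call value is increasing in S1, so the root is unique
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if call(mid, K, r, sigma, H, T1, T2) < K1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

S1_star = indifference_price(100.0, 10.0, 0.05, 0.2, 0.8, 0.5, 1.0)
print(S1_star, call(S1_star, 100.0, 0.05, 0.2, 0.8, 0.5, 1.0))
```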
Let the number of jumps in the intervals $[T_0,T_1)$ and $[T_1,T_2]$ be denoted by $n_1$ and $n_2$, respectively, and let $m=n_1+n_2$ denote the number of jumps in the interval $[T_0,T_2]$. Then, using the Poisson probabilities, we have
$$\begin{aligned}
&&E_{T_0}\left[e^{-r(T_2-T_0)}C(S_1, K, T_1, T_2)\textbf{1}_{S_1>S_1^*}\right]\nonumber\\
&=&E_{T_0}\left[e^{-r(T_1-T_0)}E_{T_1}\left[e^{-r(T_2-T_1)}(S_T-K)\textbf{1}_{S_T>K}\right]\textbf{1}_{S_1>S_1^*}\right]\nonumber\\
&=&\sum_{n_1=0}^\infty\sum_{n_2=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}\frac{e^{-\lambda'(T_2-T_1)}(\lambda'(T_2-T_1))^{n_2}}{n_2!}\nonumber\\
&&\times E_{T_0}\left[e^{-r(T_1-T_0)}E_{T_1}\left[e^{-r(T_2-T_1)}(S_T-K)\textbf{1}_{S_T>K}\right]\textbf{1}_{S_1>S_1^*}|n_1,n_2\right]\nonumber\\
&=&\sum_{n_1=0}^\infty\sum_{n_2=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}\frac{e^{-\lambda'(T_2-T_1)}(\lambda'(T_2-T_1))^{n_2}}{n_2!}\nonumber\\
&&\times E_{T_0}\left[e^{-r(T_2-T_0)}(S_T-K)\textbf{1}_{S_T>K}\textbf{1}_{S_1>S_1^*}|n_1,n_2\right]\nonumber
\label{eq:8}\end{aligned}$$
The evaluation of this expectation requires the joint density of two Poisson-weighted sums of correlated normals. From this point on, we work with the logarithmic return, $x_t=\ln\frac{S_t}{S_0}$, rather than the stock price. It is important to note that the correlation between the logarithmic returns $x_{T_1}$ and $x_{T_2}$ depends on the number of jumps in the intervals $[T_0,T_1)$ and $[T_1,T_2]$. Conditioning on the numbers of jumps $n_1$ and $n_2$, $x_{T_1}$ has a normal distribution with mean and variance $$\begin{aligned}
\mu_{J_{T_1-T_0}}&=&(r-\lambda k)(T_1-T_0)-\frac{1}{2}\sigma^2(T_1-T_0)\nonumber\\
&-&\frac{1}{2}\sigma^2(T_1^{2H}-T_0^{2H})+n_1[\ln(1+k)-\frac{1}{2}\sigma_J^2]\nonumber\\
\sigma_{J_{T_1-T_0}}^2&=&\sigma^2(T_1-T_0)+\sigma^2(T_1^{2H}-T_0^{2H})+n_1\sigma_J^2,\nonumber
\label{eq:9}\end{aligned}$$ and $x_{T_2}\sim N(\mu_{J_{T_2-T_0}},\sigma_{J_{T_2-T_0}}^2)$ where $$\begin{aligned}
\mu_{J_{T_2-T_0}}&=&(r-\lambda k)(T_2-T_0)-\frac{1}{2}\sigma^2(T_2-T_0)\nonumber\\
&-&\frac{1}{2}\sigma^2(T_2^{2H}-T_0^{2H})+m[\ln(1+k)-\frac{1}{2}\sigma_J^2]\nonumber\\
\sigma_{J_{T_2-T_0}}^2&=&\sigma^2(T_2-T_0)+\sigma^2(T_2^{2H}-T_0^{2H})+m\sigma_J^2\nonumber.
\label{eq:10}\end{aligned}$$ The correlation coefficient between $x_{T_2}$ and $x_{T_1}$ is as follows $$\begin{aligned}
\rho=\frac{cov(x_{T_1},x_{T_2})}{\sqrt{var(x_{T_1})\times var(x_{T_2})}}\nonumber.
\label{eq:11}\end{aligned}$$
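The conditional moments in Eqs. (\[eq:9\])–(\[eq:10\]) are straightforward to evaluate numerically. Below is a small Python sketch (function and parameter names are ours) computing the mean and variance of the conditional log-return for a given number of jumps; note that for $H=\tfrac12$ the fractional term $T^{2H}-T_0^{2H}$ collapses into the ordinary diffusion contribution:

```python
import math

def cond_moments(T0, T, n, r, lam, k, sigma, sigma_J, H):
    """Mean and variance of x_T = ln(S_T/S_0) given n jumps in [T0, T],
    following Eqs. (9)-(10) of the text."""
    dt = T - T0
    frac = T ** (2 * H) - T0 ** (2 * H)  # mixed-fBm contribution
    mu = ((r - lam * k) * dt
          - 0.5 * sigma ** 2 * dt
          - 0.5 * sigma ** 2 * frac
          + n * (math.log(1.0 + k) - 0.5 * sigma_J ** 2))
    var = sigma ** 2 * dt + sigma ** 2 * frac + n * sigma_J ** 2
    return mu, var
```

With these moments in hand, the coefficient $\rho$ in Eq. (\[eq:11\]) only requires, in addition, the covariance of the two returns, which again depends on $n_1$ and $m$.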
Evaluating the first expectation in Eq. (\[eq:7\]) gives $$\begin{aligned}
&&E_{T_0}\left[e^{-r(T_2-T_0)}C(S_1, K, T_1, T_2)\textbf{1}_{S_1>S_1^*}\right]\nonumber\\
&&=\sum_{n_1=0}^\infty\sum_{n_2=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}\frac{e^{-\lambda'(T_2-T_1)}(\lambda'(T_2-T_1))^{n_2}}{n_2!}\nonumber\\
&&\times\Big[S_0\Phi_2(a_1,b_1,\rho)-Ke^{-r(T_2-T_0)}\Phi_2(a_2,b_2,\rho)\Big]
\label{eq:12}\end{aligned}$$ where $$\begin{aligned}
a_1&=&\frac{\ln\frac{S_0}{S_1^\ast}+\mu_{J_{T_1-T_0}}+\sigma_{J_{T_1-T_0}}^2}{\sqrt{\sigma_{J_{T_1-T_0}}^2}},\quad a_2=a_1-\sqrt{\sigma_{J_{T_1-T_0}}^2}\nonumber\\
b_1&=&\frac{\ln\frac{S_0}{K}+\mu_{J_{T_2-T_0}}+\sigma_{J_{T_2-T_0}}^2}{\sqrt{\sigma_{J_{T_2-T_0}}^2}},\quad b_2=b_1-\sqrt{\sigma_{J_{T_2-T_0}}^2}\nonumber
\label{eq:13}\end{aligned}$$ $\Phi(x)$ is the standard univariate cumulative normal distribution function and $\Phi_2(x,y,\rho)$ is the standard bivariate cumulative normal distribution function with correlation coefficient $\rho$.
The second expectation in Eq. (\[eq:7\]) can be evaluated to give
$$\begin{aligned}
&&E_{T_0}\left[e^{-r(T_1-T_0)}K_1\textbf{1}_{S_1>S_1^*}\right]\nonumber\\
&&=\sum_{n_1=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}E_{T_0}\left[e^{-r(T_1-T_0)}K_1\textbf{1}_{S_1>S_1^*}|n_1\right]\nonumber\\
&&=\sum_{n_1=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}K_1e^{-r(T_1-T_0)}\Phi(a_2),
\label{eq:14}\end{aligned}$$
where $a_2$ is defined above. Then, the following result for a compound call option is obtained.
The value of a compound call option with maturity $T_1$ and strike price $K_1$, written on a call option with maturity $T_2$ and strike $K$ whose underlying asset follows the process in Eq. (\[eq:1\]), is given by $$\begin{aligned}
&&CC\left[C(K,T_2), K_1, T_0, T_1\right]\nonumber\\
&&=\Big\{\sum_{n_1=0}^\infty\sum_{n_2=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}\frac{e^{-\lambda'(T_2-T_1)}(\lambda'(T_2-T_1))^{n_2}}{n_2!}\nonumber\\
&&\times\Big[S_0\Phi_2(a_1,b_1,\rho)-Ke^{-r(T_2-T_0)}\Phi_2(a_2,b_2,\rho)\Big]\Big\}\nonumber\\
&&-\sum_{n_1=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}K_1e^{-r(T_1-T_0)}\Phi(a_2)\nonumber
\label{eq:15}\end{aligned}$$ where $a_1, a_2, b_1, b_2,$ and $\rho$ are as defined previously. \[th:1\]
For a compound option on an asset with dividend payment rate $q$, the result is similar to Theorem \[th:1\], with $r$ replaced by $r-q$.
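Since the pricing formula involves only $\Phi$, $\Phi_2$ and Poisson weights, it can be evaluated with elementary numerics. The following Python sketch (a minimal implementation; names are ours) computes $\Phi_2(a,b,\rho)$ by the standard reduction to a one-dimensional integral, which is all that is needed beyond the univariate CDF:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi2(a, b, rho, n=4000):
    """Bivariate standard normal CDF Phi_2(a, b, rho), assuming |rho| < 1,
    via Phi_2(a,b,rho) = int_{-inf}^a phi(x) Phi((b - rho x)/sqrt(1-rho^2)) dx,
    evaluated with the composite trapezoidal rule."""
    s = math.sqrt(1.0 - rho * rho)
    lo = -8.0  # phi(x) is negligible below this point
    if a <= lo:
        return 0.0
    h = (a - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal weights
        total += w * math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi) \
                   * Phi((b - rho * x) / s)
    return total * h
```

In practice the infinite Poisson sums in Eq. (\[eq:15\]) are truncated after a few dozen terms, since the weights decay factorially.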
Extendible option pricing formulae {#sec:3}
==================================
Based on the assumptions of the previous section, let $EC$ be the value of an extendible call option with time to expiration $T_1$. At the expiration time $T_1$, the holder of the extendible call can
1. let the call expire worthless if $S_{T_1}<L$, or
2. exercise the call and get $S_{T_1}-K_1$ if $S_{T_1}> M$, or
3. make a payment of an additional premium $A$ to extend the call to $T_2$ with a new strike of $K_2$ if $ L\leq S_{T_1}\leq M$,
where $S_{T_1}$ is the underlying asset price at time $T_1$, $K_1$ is the strike price at time $T_1$, and $L$ and $M$ are what Longstaff [@longstaff] calls the critical values, with $L<M$.
If at the expiration time $T_1$ the option is worth more than the extended option with a new strike price $K_2$, obtained by paying a fee $A$ to extend the expiration from $T_1$ to $T_2$, then it is best to exercise; that is, $S_{T_1}-K_1\geq C(S_{T_1},K_2,T_2-T_1)-A$. Otherwise, it is best to extend the expiration of the option to $T_2$, which pays off whenever the extended option is worth more than zero; that is, $C(S_{T_1},K_2,T_2-T_1)-A> 0$. Moreover, the holder of the option should be indifferent between extending and letting the option expire at the value $L$, and indifferent between exercising and extending at the value $M$. Therefore, the critical values $L$ and $M$ are the unique solutions of $M-K_1= C(M,K_2,T_2-T_1)-A$ and $C(L,K_2,T_2-T_1)-A=0$. See Longstaff [@longstaff] and Gukhal [@gukhal] for an analysis of these conditions.
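To make the two indifference conditions concrete, here is a Python sketch that solves them by bisection in the diffusion limit ($H=\tfrac12$, $\lambda=0$), where $C$ reduces to the Black–Scholes call price; in the general model the same two equations hold with $C$ replaced by the jump-adjusted price. The helper names and test parameters are ours:

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes call price (diffusion limit of the model's C); tau > 0."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * Phi(d1) - K * math.exp(-r * tau) * Phi(d2)

def bisect(f, lo, hi, iters=100):
    """Simple bisection; assumes f changes sign on [lo, hi]."""
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

def critical_values(K1, K2, tau, A, r, sigma):
    """L solves C(L, K2, tau) - A = 0 (indifferent: extend vs. expire);
    M solves M - K1 = C(M, K2, tau) - A (indifferent: exercise vs. extend)."""
    L = bisect(lambda S: bs_call(S, K2, r, sigma, tau) - A, 1e-8, 100.0 * K2)
    M = bisect(lambda S: (S - K1) - (bs_call(S, K2, r, sigma, tau) - A),
               1e-8, 100.0 * K2)
    return L, M
```

Since the second map is strictly increasing in $S$ (the call delta is below one), the root $M$ is unique whenever it exists, matching the uniqueness claim above.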
The value of the extendible call, whose time to expiration $T_1$ can be extended to $T_2$, is then given as the discounted conditional expected payoff:
$$\begin{aligned}
EC(S_0,K_1,T_1,K_2,T_2,A)&=& E_{T_0}\Big[e^{-r(T_1-T_0)}(S_{T_1}-K_1)\textbf{1}_{S_{T_1}>M}\Big]\nonumber\\
&+&E_{T_0}\Big[e^{-r(T_1-T_0)}\Big(C(S_{T_1},K_2,T_2-T_1)-A\Big)\textbf{1}_{L \leq S_{T_1}\leq M}\Big]\nonumber\\
&=&E_{T_0}\Big[e^{-r(T_1-T_0)}(S_{T_1}-K_1)\textbf{1}_{S_{T_1}>M}\Big]\nonumber\\
&+&E_{T_0}\Big[e^{-r(T_1-T_0)}\Big(C(S_{T_1},K_2,T_2-T_1)-A\Big)\nonumber\\
&\times&\Big(\textbf{1}_{S_{T_1}\geq L}-\textbf{1}_{S_{T_1}\geq M}\Big)\Big].
\label{eq:16}\end{aligned}$$
Then, proceeding in the same way as for the compound call option, we have $$\begin{aligned}
&&E_{T_0}\Big[e^{-r(T_1-T_0)}(S_{T_1}-K_1)\textbf{1}_{S_{T_1}>M}\Big]\nonumber\\
&=&\sum_{n_1=0}^\infty \frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!} E_{T_0}\Big[e^{-r(T_1-T_0)}(S_{T_1}-K_1)\textbf{1}_{S_{T_1}>M}|n_1\Big],
\label{eq:17}\end{aligned}$$ $$\begin{aligned}
&&E_{T_0}\Big[e^{-r(T_1-T_0)}\Big(C(S_{T_1},K_2,T_2-T_1)-A\Big)\Big(\textbf{1}_{S_{T_1}\geq L}-\textbf{1}_{S_{T_1}\geq M}\Big)\Big]\nonumber\\
&=&E_{T_0}\Big[e^{-r(T_1-T_0)}E_{T_1}\Big(e^{-r(T_2-T_1)}(S_{T_2}-K_2)\textbf{1}_{S_{T_2}>K_2}\Big)\Big(\textbf{1}_{S_{T_1}\geq L}-\textbf{1}_{S_{T_1}\geq M}\Big)\Big]\nonumber\\
&-&E_{T_0}\Big[e^{-r(T_1-T_0)}A\Big(\textbf{1}_{S_{T_1}\geq L}-\textbf{1}_{S_{T_1}\geq M}\Big)\Big]\nonumber\\
&=&\Big\{\sum_{n_1=0}^\infty\sum_{n_2=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}\frac{e^{-\lambda'(T_2-T_1)}(\lambda'(T_2-T_1))^{n_2}}{n_2!}\nonumber\\
&\times&E_{T_0}\big[e^{-r(T_2-T_0)}(S_{T_2}-K_2)\textbf{1}_{S_{T_2}>K_2}\textbf{1}_{S_{T_1}>L}|n_1,n_2\big]\Big\}\nonumber\\
&-&\Big\{\sum_{n_1=0}^\infty\sum_{n_2=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}\frac{e^{-\lambda'(T_2-T_1)}(\lambda'(T_2-T_1))^{n_2}}{n_2!}\nonumber\\
&\times&E_{T_0}\big[e^{-r(T_2-T_0)}(S_{T_2}-K_2)\textbf{1}_{S_{T_2}>K_2}\textbf{1}_{S_{T_1}>M}|n_1,n_2\big]\Big\}\nonumber\\
&-&\Big\{\sum_{n_1=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}E_{T_0}\big[e^{-r(T_1-T_0)}A(\textbf{1}_{S_{T_1}>L}|n_1-\textbf{1}_{S_{T_1}>M}|n_1)\big]\Big\}.
\label{eq:18}\end{aligned}$$
Now, we assume that the asset price satisfies Eq. (\[eq:1\]). Then, by calculating the expectations in Eqs. (\[eq:17\]) and (\[eq:18\]), the following result is derived.
The price of an extendible call option with time to expiration $T_1$ and strike price $K_1$, whose expiration time can be extended to $T_2$ with a new strike price $K_2$ by the payment of an additional premium $A$, is given by
$$\begin{aligned}
&&EC(S_t,K_1,T_1,K_2,T_2,A)\nonumber\\
&=&\sum_{n_1=0}^\infty \frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}\Big[S_0\Phi(a_1)-K_1e^{-r(T_1-T_0)}\Phi(a_2)\Big]\nonumber\\
&&+\Big\{\sum_{n_1=0}^\infty\sum_{n_2=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}\frac{e^{-\lambda'(T_2-T_1)}(\lambda'(T_2-T_1))^{n_2}}{n_2!}\nonumber\\
&&\times\Big[S_0\Phi_2(b_1,c_1,\rho)-K_2e^{-r(T_2-T_0)}\Phi_2(b_2,c_2,\rho)\Big]\Big\}\nonumber\\
&&-\Big\{\sum_{n_1=0}^\infty\sum_{n_2=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}\frac{e^{-\lambda'(T_2-T_1)}(\lambda'(T_2-T_1))^{n_2}}{n_2!}\nonumber\\
&&\times\Big[S_0\Phi_2(a_1,c_1,\rho)-K_2e^{-r(T_2-T_0)}\Phi_2(a_2,c_2,\rho)\Big]\Big\}\nonumber\\
&&-\Big\{\sum_{n_1=0}^\infty\frac{e^{-\lambda'(T_1-T_0)}(\lambda'(T_1-T_0))^{n_1}}{n_1!}Ae^{-r(T_1-T_0)}\nonumber\\
&&\times\Big[\Phi(b_2)- \Phi(a_2)\Big]\Big\},
\label{eq:19}\end{aligned}$$
where $$\begin{aligned}
a_1&=&\frac{\ln\frac{S_0}{M}+\mu_{J_{T_1-T_0}}+\sigma_{J_{T_1-T_0}}^2}{\sqrt{\sigma_{J_{T_1-T_0}}^2}},\quad a_2=a_1-\sqrt{\sigma_{J_{T_1-T_0}}^2}\nonumber\\
b_1&=&\frac{\ln\frac{S_0}{L}+\mu_{J_{T_1-T_0}}+\sigma_{J_{T_1-T_0}}^2}{\sqrt{\sigma_{J_{T_1-T_0}}^2}},\quad b_2=b_1-\sqrt{\sigma_{J_{T_1-T_0}}^2}\nonumber\\
c_1&=&\frac{\ln\frac{S_0}{K_2}+\mu_{J_{T_2-T_0}}+\sigma_{J_{T_2-T_0}}^2}{\sqrt{\sigma_{J_{T_2-T_0}}^2}},\quad c_2=c_1-\sqrt{\sigma_{J_{T_2-T_0}}^2}\nonumber
\label{eq:20}\end{aligned}$$
$\Phi(x)$ is the standard univariate cumulative normal distribution function and $\Phi_2(x,y,\rho)$ is the standard bivariate cumulative normal distribution function with correlation coefficient $\rho$. \[th:2\]
If $H=\frac{1}{2}$, the asset price satisfies the Merton jump-diffusion equation $$\begin{aligned}
dS_t&=&S_t(\mu-\lambda\kappa) dt+\sigma S_tdB_t+(J-1)S_tdN_t,\,0<t\leq T,\,S_{T_0}=S_0,
\label{eq:20-2}\end{aligned}$$
then our results are consistent with the findings in [@gukhal].
When $\lambda=0$, the asset price follows the $MFBM$ model shown below $$\begin{aligned}
dS_t=S_tr dt+\sigma S_tdB_t+\sigma
S_tdB_t^H.
\label{eq:21}\end{aligned}$$ and the formula (\[eq:8\]) reduces to the diffusion case. The result is as follows.
The price of an extendible call option with time to expiration $T_1$ and strike price $K_1$, whose expiration time can be extended to $T_2$ with a new strike price $K_2$ by the payment of an additional premium $A$, written on an asset following Eq. (\[eq:21\]), is
$$\begin{aligned}
&&EC(S_t,K_1,T_1,K_2,T_2,A)\nonumber\\
&=&S_0\Phi(a_1)-K_1e^{-r(T_1-T_0)}\Phi(a_2)\nonumber\\
&&+S_0\Phi_2(b_1,c_1,\rho)-K_2e^{-r(T_2-T_0)}\Phi(b_2,c_2,\rho)\nonumber\\
&&-\Big[S_0\Phi_2(a_1,c_1,\rho)-K_2e^{-r(T_2-T_0)}\Phi(a_2,c_2,\rho)\Big]\nonumber\\
&&-Ae^{-r(T_1-T_0)}\Big[\Phi(b_2)- \Phi(a_2)\Big],
\label{eq:22}\end{aligned}$$
where $$\begin{aligned}
a_1&=&\frac{\ln\frac{S_0}{M}+r(T_1-T_0)+\frac{\sigma^2}{2}(T_1-T_0)+\frac{\sigma^2}{2}(T_1^{2H}-T_0^{2H})}{\sqrt{\sigma^2(T_1-T_0)+\sigma^2(T_1^{2H}-T_0^{2H})}},\nonumber\\
a_2&=&a_1-\sigma\sqrt{T_1^{2H}-T_0^{2H}+T_1-T_0}\nonumber\\
b_1&=&\frac{\ln\frac{S_0}{L}+r(T_1-T_0)+\frac{\sigma^2}{2}(T_1-T_0)+\frac{\sigma^2}{2}(T_1^{2H}-T_0^{2H})}{\sqrt{\sigma^2(T_1-T_0)+\sigma^2(T_1^{2H}-T_0^{2H})}},\nonumber\\
b_2&=&b_1-\sigma\sqrt{T_1^{2H}-T_0^{2H}+T_1-T_0}\nonumber\\
c_1&=&\frac{\ln\frac{S_0}{K_2}+r(T_2-T_0)+\frac{\sigma^2}{2}(T_2 -T_0)+\frac{\sigma^2}{2}(T_2^{2H}-T_0^{2H})}{\sqrt{\sigma^2(T_2-T_0)+\sigma^2(T_2^{2H}-T_0^{2H})}}.\nonumber\\
c_2&=&c_1-\sigma\sqrt{T_2^{2H}-T_0^{2H}+T_2-T_0}.
\label{eq:23}\end{aligned}$$
Let us now consider an extendible option with $N$ extension times; the result is presented in the following corollary.
The value of the extendible call expiring at time $T_1$, written on an asset whose price is governed by Eq. (\[eq:1\]) and whose maturity can be extended to $T_2 < T_3 < \cdots < T_{N+1}$ with new strikes $K_2,K_3,\ldots,K_{N+1}$ by the payment of the corresponding premiums $A_1,A_2,\ldots,A_{N+1}$, is given by
$$\begin{aligned}
EC_N(S_0,K_1,T_0,T_1)&=&\sum_{j=1}^{N+1}\Big\{\Big[S_0\Phi_j(a_{1j}^*,R_j^*)-K_je^{-r(T_j-t)}\Phi_j(a_{2j}^*,R_j^*) \Big]\nonumber\\
&-&\Big[S_0\Phi_j(c_{1j}^*,R_j^*)-K_je^{-r(T_j-t)}\Phi_j(c_{2j}^*,R_j^*)\Big]\nonumber\\
&-&A_je^{-r(T_j-t)}\Big[\Phi_j(b_{2j}^*,R_{-1j}^*)-\Phi_j(a_{2j}^*,R_{-1j}^*)\Big]
\Big\}
\label{eq:27}\end{aligned}$$
where $A_0=0$ and $\Phi_j(a_{1j}^*,R_j^*)$ is the $j$-dimensional multivariate normal integral with upper limits of integration given by the $j$-dimensional vector $a_{1j}^*$ and correlation matrix $R_j^*$, with $a_{1j}^*= \big[a_1(M_1,T_1-t),-a_1(M_2,T_2-t),...,-a_1(M_j,T_j-t)\big]$. The quantities $\Phi_j(c_{1j}^*,R_j^*)$ and $\Phi_j(b_{2j}^*,R_j^*)$ are defined analogously, with
$$\begin{aligned}
c_{1j}^*&=& \big[b_1(L_1,T_1-t),a_1(M_2,T_2-t),...,b_1(L_{j-1},T_{j-1}-t),a_1(M_j,T_j-t)\big]\nonumber\\
b_{2j}^*&=& \big[b_2(L_1,T_1-t),b_2(M_2,T_2-t),...,b_2(L_j,T_j-t)\big]\nonumber
\label{eq:28}\end{aligned}$$
$R_j^*$ is a $j \times j$ matrix with the correlation coefficient $\rho_{p-1,p}$ as the $p$th diagonal element, with $0$ and the negative correlation coefficient $\rho_{j-1,j}$, respectively, as the first and the last diagonal elements, and with correlation coefficients $\rho_{p-1,s}$ ($s = p + 1,..., j$) elsewhere. As to the rest of the elements, we note that $\rho_{p-1,s}$ equals the negative correlation coefficient $\rho_{pj}$ when $s=j$, and that $\rho_{p-1,s}$ equals zero when $p=1$, $s = 0,..., p-1$; the terms $T_j$ and $M_j, L_j$ respectively represent the $j$th time instant and the critical prices as defined previously. \[cor:2\]
As $N$ increases to infinity the exercise opportunities become continuous, and hence the value of the approximate option converges in the limit to the value of the extendible option. Thus, the values $EC_1, EC_2, EC_3,\ldots$ form a converging sequence whose limit is the value of the extendible option, i.e. $\lim_{N\rightarrow \infty}EC_N(S_0,K_1,T_0,T_1)=EC(S_0,K_1,T_0,T_1)$. To minimize the impact of this computational complexity, we use the Richardson extrapolation method [@geske1984american] with two points. This technique uses the first two values of the sequence to obtain its limit and leads to the following equation, $$\begin{aligned}
EC_2=2EC_1-EC_0,
\label{eq:29}\end{aligned}$$ where $EC_2$ stands for the extrapolated limit using $EC_1$ and $EC_0$.
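Eq. (\[eq:29\]) is the standard two-point Richardson step. As a sanity check, for any sequence whose error decays like $c/(N+1)$ the extrapolation recovers the limit exactly; a minimal Python sketch (the error model is an illustrative assumption, not taken from the text):

```python
def richardson2(ec0, ec1):
    """Two-point Richardson extrapolation, Eq. (29): EC_2 = 2*EC_1 - EC_0."""
    return 2.0 * ec1 - ec0

# If EC_N = EC_inf + c/(N+1), the leading error term cancels exactly:
EC_inf, c = 5.0, 0.8
ec0 = EC_inf + c / 1.0   # N = 0
ec1 = EC_inf + c / 2.0   # N = 1
```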
Numerical studies {#sec:4}
=================
Table \[table:1\] provides numerical results for extendible call options when the underlying asset pays no dividends. Column (3) displays the values obtained using the Merton model and column (4) shows the results of the Gukhal [@gukhal] method. Column (5) reports the results of the $JMFBM$ model, and the values obtained using the Richardson extrapolation technique with $EC_1$ and $EC_0$ are shown in column (6). Comparing the Merton, Gukhal, $JMFBM$ and Richardson columns of Table \[table:1\] for the low- and high-maturity cases, we conclude that the call option prices obtained by these valuation methods are close to each other.
------------- ----- -------- --------- --------- ------------
$T_1$ $K$ Merton Gukhal $JMFBM$ Richardson
\[0.5ex\] 1 10 0.1127 0.11143 0.1228 0.1330
1 11 0.0960 0.0997 0.1075 0.1190
1 12 0.0812 0.0852 0.0922 0.1031
1 13 0.0687 0.0707 0.0768 0.0850
1 14 0.0587 0.0561 0.0615 0.0566
0.5 10 1.0347 0.7521 0.7799 0.5250
0.5 11 0.8387 0.6541 0.6783 0.5180
0.5 12 0.6662 0.5560 0.5768 0.4875
0.5 13 0.5412 0.4579 0.4753 0.4094
0.5 14 0.4598 0.3598 0.3738 0.2871
\[1ex\]
------------- ----- -------- --------- --------- ------------
: Results by different pricing models. Here, $r=0.1, \sigma=0.1, L=5, M=15, A=0.05, H=0.8,S=12, \sigma_J=0.3, k=-0.004 $.
\[table:1\]
Fig. \[fig:4\] displays the differences in extendible call option prices between the Merton, Gukhal and $JMFBM$ models, as functions of the primary exercise date $T_1$ and strike price $K_1$.
![The relative difference between our $JMFBM$, Gukhal and Merton models. The fixed parameters are $r=0.3, \sigma=0.4, L=.1, M=1.5, A=0.02, H=0.8,S=1.2, \sigma_J=0.05, k=0.4 $ and $t=0.1.$ []{data-label="fig:4"}](M.eps){width="100.00000%"}
Conclusions {#sec:5}
===========
Mixed fractional Brownian motion is a strongly correlated stochastic process, and jumps are a significant component of financial markets. Combining them provides a better fit to empirical observations, because the resulting model can capture the salient features of high-frequency financial returns: jumps, long memory, volatility clustering, skewness, and excess kurtosis. In this paper, we use a jump mixed fractional Brownian motion to capture the behavior of the underlying asset price dynamics and derive the pricing formula for compound options. We then apply this result to the valuation of extendible options in a jump mixed fractional Brownian motion environment. Numerical results and some special cases are provided for extendible call options.
---
abstract: |
We propose a new method for measuring $CP$ violation in neutrino oscillation experiments. The idea is to isolate the term due to the $CP$-violating phase out of the oscillation probability by taking difference between yields of two (or three) detectors at path-lengths $L = 250 \left(\frac{E}{1.35 \mbox{GeV}}\right)
\left(\frac{\Delta m^2}{10^{-2}\mbox{eV}^2}\right)^{-1} \mbox{km}$ and at $L/3$ (and also at $2L/3$ in the case of three detectors). We use possible hierarchies in neutrino masses suggested by the astrophysical and the cosmological observations to motivate the idea and to examine how the method works.
address: |
[$^1$]{}Department of Physics, Tokyo Metropolitan University\
Minami-Osawa, Hachioji, Tokyo 192-03, Japan\
[$^2$]{}Instituto de Física Corpuscular - C.S.I.C.\
Departament de Física Teòrica, Universitat de València\
46100 Burjassot, València, Spain\
author:
- Hisakazu Minakata$^1$ and Hiroshi Nunokawa$^2$
date: 'June, 1997'
title: 'How to Measure $CP$ Violation in Neutrino Oscillation Experiments?'
---
0.5cm
$CP$ violation in the lepton sector is an unexplored, fascinating subject in particle physics. If observed, it should shed light on the deep relationship between quarks and leptons, the most fundamental structure of matter that we know to date. Moreover, it has been suggested that $CP$ violation in the lepton sector is one of the key ingredients of the mechanism for generating the baryon number asymmetry of the universe [@FY].
A viable way of observing $CP$ violation in the lepton sector is to utilize the phenomenon of neutrino oscillation. It was pointed out in refs. [@Barger; @Pakvasa] that the difference between the oscillation probabilities of neutrinos and their antineutrinos is proportional to the leptonic analogue of the Jarlskog factor [@Jarlskog], the unique (for three neutrino flavors) phase-convention-independent measure of $CP$ violation. Recently, measuring $CP$ violation in long-baseline neutrino oscillation experiments has attracted considerable interest in the literature [@Tanimoto; @AS; @AKS; @MN; @BGW].
However, there exists a potential obstacle to measuring $CP$ violation in long-baseline neutrino experiments: the contamination due to the matter effect. Since the earth's matter is not $CP$ symmetric, its effect inevitably produces a fake $CP$ violation [@matter] which contaminates the genuine $CP$-violating effect due to the leptonic Kobayashi-Maskawa phase [@KM]. Even worse, the matter effect dominates over the $CP$ phase effect in a certain region of the mixing parameters in the $\nu_\mu \rightarrow \nu_e$ experiment [@Tanimoto; @AS; @AKS; @MN].
In this paper, we suggest a novel way of measuring $CP$ violation in neutrino oscillation experiments by proposing the multiple detector difference method.[^1] We will show that our method is relatively free from the problem of the matter effect contamination in long-baseline neutrino oscillation experiments.
One way of avoiding the problem of “matter effect pollution” is to look for the oscillation channel in which the genuine $CP$-violating effect dominates over the matter effect. This idea is examined in detail in ref. [@MN], with the neutrino mass hierarchy restricted to that motivated by hot dark matter and the atmospheric neutrino anomaly. It is found that, under the constraints from the terrestrial experiments, the unique case where the $CP$ phase effect dominates is the $\nu_\mu \rightarrow \nu_e$ channel in the region of large-$s_{13}$ and arbitrary-$s_{23}$ (the region (B) to be defined later). The $\nu_\mu \rightarrow \nu_\tau$ channel is relatively free from matter-effect contamination, but the contamination is not negligible. Unfortunately, the expected $CP$-violating effect is at most $\sim$ 1% for this type of mass hierarchy, owing to the strong constraints on the mixing angles from the terrestrial experiments.
These discussions of the absolute and relative magnitudes of $CP$ violation are based on the $\nu-\bar{\nu}$ difference method.[^2] The major experimental problem with this method is the difficulty of determining the relative normalization of the neutrino and antineutrino beams. If the $CP$ violation to be measured is of the order of a few %, it would require calibrating the flux of the neutrino beams to a better accuracy than that, which would be extremely difficult, if not impossible, experimentally.
The multiple detector difference method which we discuss in this paper aims to overcome this problem. Since the absolute flux of the neutrino beam is hard to determine to an accuracy better than $\sim$ 10% [@nishi], we may have to give up comparing two beams, or two different experiments, if we want to perform a measurement with an accuracy at the level of a few %. Therefore, we stick to using a single neutrino beam, $\nu_{\mu}$ for example, in the experiment. Then, how can one measure $CP$ violation to such precision, or even to an accuracy at the 1% level?
In this paper we confine ourselves to the three-flavor mixing scheme of neutrinos. To develop the multiple detector difference method we work with the mass hierarchy $$\Delta M^2 \equiv \Delta m^2_{13} \simeq \Delta m^2_{23} \gg \Delta m^2_{12}
\equiv \Delta m^2,
\label{eqn:hierarchy}$$ where $\Delta m^2_{ij} \equiv m^2_j-m^2_i$ ($i,j$ = (1,3), (2,3), (1,2)), motivated by the solar and the atmospheric neutrino observation [@solar; @atmospheric] and the neutrinos as hot dark matter in the mixed dark matter cosmology [@hcdm].
We first focus on the case of mass hierarchy motivated by the dark-matter and the atmospheric neutrino observation, $\Delta M^2 = 5-100$ eV$^2$ and $\Delta m^2 = 10^{-3}-10^{-2}$eV$^2$. To find a hint on how to isolate the $CP$ violating term we first ignore the matter effect and analyze the structure of the neutrino oscillation probability in vacuum. With the mass hierarchy the oscillation probability in vacuum for the long-baseline experiments can be written as $$P(\nu_{\beta} \rightarrow \nu_{\alpha}) = A_{\beta\alpha} + B_{\beta\alpha}
(1-\cos\Delta) + C_{\beta\alpha}\sin\Delta
\label{eqn:probab1}$$ with $$\Delta \equiv \frac{\Delta m^2 L}{2E},$$ where $L$ denotes the path-length of the baseline, and $A_{\beta\alpha}, B_{\beta\alpha}$ and $C_{\beta\alpha}$ are constants which depend on the mixing angles and the $CP$ phase. We note that $C_{\beta\alpha} = 2J$, up to the sign, where $J$ indicates the Jarlskog factor whose explicit expression will be given later. (See eq.(\[eqn:Jarlskog\]).) The rapid oscillation due to large $\Delta M^2$, $$\frac{\Delta M^2 L}{4E} = 127\left(\frac{\Delta M^2}{1\mbox{eV}^2}\right)
\left(\frac{L}{100\mbox{km}}\right)\left(\frac{E}{1\mbox{GeV}}\right)^{-1},
\label{eqn:largeDM}$$ is averaged out, which produces the first term in (\[eqn:probab1\]).
We want to isolate the last term, the $J$-term, which is the measure of $CP$ violation, from the others. It is a simple matter to see that the best way to do this is to perform the measurements at $\Delta = \frac{\pi}{2}$ and $\frac{3}{2}\pi$: $$\begin{aligned}
P(\nu_{\beta}\rightarrow\nu_{\alpha};\;\Delta=\frac{3\pi}{2}) &=&
A_{\beta\alpha} + B_{\beta\alpha} + 2J \nonumber\\
P(\nu_{\beta}\rightarrow\nu_{\alpha};\;\Delta=\frac{\pi}{2}) &=&
A_{\beta\alpha} + B_{\beta\alpha} - 2J \end{aligned}$$ where we took a particular sign for the $J$-term. Therefore, the difference $\Delta P$ between the oscillation probabilities at $\Delta=\frac{3}{2}\pi$ and at $\Delta=\frac{\pi}{2}$ is nothing but the $CP$ violation $4J$.
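The cancellation can be checked mechanically: with the vacuum form of Eq. (\[eqn:probab1\]) and the sign convention $C_{\beta\alpha} = -2J$ (so that $P(\Delta=\frac{3\pi}{2}) = A+B+2J$, as above), the $A$ and $B$ terms drop out of the difference and $\Delta P = 4J$ for any values of the constants. A short Python check (function names are ours):

```python
import math

def P_vac(Delta, A, B, J):
    """Averaged vacuum probability, Eq. (2), with C = -2J as the sign
    convention chosen in the text."""
    return A + B * (1.0 - math.cos(Delta)) - 2.0 * J * math.sin(Delta)

def delta_P(A, B, J):
    """Two-detector difference P(Delta=3*pi/2) - P(Delta=pi/2) = 4J."""
    return P_vac(1.5 * math.pi, A, B, J) - P_vac(0.5 * math.pi, A, B, J)
```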
Of course, $\Delta$ is a function of $L/E$. Therefore, the measurement of the difference $P(\Delta=\frac{3\pi}{2})-P(\Delta=\frac{\pi}{2})\equiv\Delta P_2$ can be done by varying $L$ or $E$, or both. However, we argue that a measurement of $CP$ violation with an accuracy better than a few % would require measuring $\Delta P_2$ by placing two detectors at $\Delta =\frac{\pi}{2}$ and $\Delta=\frac{3}{2}\pi$. If we tried to measure $\Delta P_2$ by varying the energy of the neutrino beam with a single detector, the energy would, of course, have to be varied by a factor of 3. Under such re-adjustment of the neutrino beam energy, the relative normalization of the beam would become uncertain at the level of $\sim$10%. Therefore, the best conceivable way of avoiding the uncertainty in the relative normalization of the neutrino flux is to perform the measurement with 2 detectors, using the same neutrino beam, one at $\Delta =\frac{3}{2}\pi$ and the other at $\Delta = \frac{\pi}{2}$. For the KEK-PS$\rightarrow$Superkamiokande experiment, in which $L = $250 km, the neutrino beam energy should be tuned to $E=1.35 (\frac {\Delta m^2}{10^{-2}eV^2})^{-1}$ GeV so that the location of Superkamiokande corresponds exactly to $\Delta=\frac{3}{2}\pi$. For the MINOS experiment with $L=730$ km the beam energy to be used is $E=3.94 (\frac {\Delta m^2}{10^{-2}eV^2})^{-1}$ GeV. Then, the second detector to be built should be located at 1/3 of the baseline, $L_2=83.3$ km for the KEK-PS$\rightarrow$Superkamiokande and $L_2=243$ km for the MINOS experiments.[^3]
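The quoted beam energies follow directly from the condition $\Delta = \frac{3}{2}\pi$ at the far detector, using the familiar conversion $\Delta m^2 L/4E = 1.267\,(\Delta m^2/\mbox{eV}^2)(L/\mbox{km})(E/\mbox{GeV})^{-1}$. A quick numerical check in Python (function names are ours):

```python
import math

def Delta(dm2_eV2, L_km, E_GeV):
    """Delta = dm^2 L / (2E), using dm^2 L/(4E) = 1.267 dm2[eV^2] L[km] / E[GeV]."""
    return 2.0 * 1.267 * dm2_eV2 * L_km / E_GeV

def tuned_energy(dm2_eV2, L_km):
    """Beam energy placing the far detector at Delta = 3*pi/2."""
    return 2.0 * 1.267 * dm2_eV2 * L_km / (1.5 * math.pi)
```

The intermediate detector at $L/3$ then automatically sits at $\Delta = \pi/2$, since $\Delta$ is linear in $L$.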
Of course, one cannot make the neutrino beam monochromatic; it must have a spread in energy. However, it seems possible to make the energy spread of the neutrino beam as small as $\sim$ 20 % of the beam energy [@nishi]. Therefore, the possibility is at least worth considering.
Does the matter effect give rise to serious contamination of the genuine $CP$-violating effect in the 2 detector difference method? It appears that the problem is in better shape than in the $\nu-\bar\nu$ difference method. To understand this point we write down the expression for the neutrino oscillation probability with the correction due to the earth matter effect. The expression [@MN] is based on the adiabatic approximation and is valid to first order in matter perturbation theory. If we define the neutrino mixing matrix by $\nu_\alpha = U_{\alpha i}\nu_i$, the oscillation probability is given by $$\begin{aligned}
&&P(\nu_\beta \to \nu_\alpha) = \cr
&& -2 \sum_{i=1,2}\biggl[
\mbox{Re}[U_{\alpha i}U^*_{\alpha3}U^*_{\beta i}U_{\beta3}]
+ \mbox{Re}(UUU\delta V)_{\alpha\beta \: ;\: i3}\biggr]\cr
&& -4 \mbox{Re}[U_{\alpha 1}U^*_{\alpha 2}U^*_{\beta 1}U_{\beta 2}]
\biggl[ \sin^2\biggl(\frac{\Delta m^2}{4E} L\biggr)
+ \frac{1}{2}aL \biggl(|U_{e2}|^2- |U_{e1}|^2\biggr)
\sin\biggl(\frac{\Delta m^2}{2E} L\biggr) \biggr] \cr
&& -2 J \biggl[\sin\biggl(\frac{\Delta m^2}{2E} L\biggr)
+ aL \biggl(|U_{e2}|^2- |U_{e1}|^2\biggr)
\cos\biggl(\frac{\Delta m^2}{2E} L\biggr) \biggr]\cr
&& -4 \mbox{Re}(UUU\delta V)_{\alpha\beta \: ;\: 12}
\sin^2\biggl(\frac{\Delta m^2}{4E} L\biggr) \cr
&& -2 \mbox{Im}(UUU\delta V)_{\alpha\beta \: ;\: 12}
\sin\biggl(\frac{\Delta m^2}{2E} L\biggr)
\label{eqn:probab2}\end{aligned}$$ where the $(UUU\delta V)_{\alpha\beta \: ;\: ij}$ represent first-order corrections due to the matter effect, whose expressions are given in ref. [@MN]. More precisely, we made the following approximations to derive eq. (\[eqn:probab2\]): We took the average over the rapid oscillations with the period (\[eqn:largeDM\]), which produces the first two terms in (\[eqn:probab2\]). We ignored terms of the order of $\frac{Ea}{\Delta M^2}$ because of the extreme hierarchy between the dark matter mass scale and the matter potential, $$\frac{Ea}{\Delta M^2} = 1.04 \times 10^{-4}
\left(\frac{\rho}{2.72\mbox{gcm}^{-3}}\right)
\left(\frac{E}{1\mbox{GeV}}\right)
\left(\frac{\Delta M^2}{1\mbox{eV}^2}\right)^{-1}.$$ We used the constant matter density approximation and ignored the terms of order $(aL)^2$ or higher, where $L$ is the path length of the baseline and $a=\sqrt{2}G_FN_e$ with $N_e$ being the electron number density. We note that $$aL=0.132\left(\frac{\rho}{2.72\mbox{gcm}^{-3}}\right)
\left(\frac{L}{250\mbox{km}}\right).
\label {eqn:aL}$$ Therefore, ignoring the $(aL)^2$ term should give a good approximation, at least for the KEK$\to$Superkamiokande experiment.
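Both dimensionless combinations quoted above can be reproduced from the standard constants ($G_F$, Avogadro's number, $\hbar c$). The following Python sketch (function names are ours) converts the electron density to natural units and checks Eq. (\[eqn:aL\]) and the $Ea/\Delta M^2$ estimate:

```python
import math

G_F = 1.1664e-5        # Fermi constant, GeV^-2
N_A = 6.0221e23        # Avogadro's number, mol^-1
HBARC_CM = 1.9733e-14  # hbar * c, GeV * cm

def matter_potential(rho_g_cm3, Y_e=0.5):
    """a = sqrt(2) G_F N_e in GeV, for density rho and electron fraction Y_e."""
    n_e_cm3 = rho_g_cm3 * Y_e * N_A      # electrons per cm^3
    n_e_gev3 = n_e_cm3 * HBARC_CM ** 3   # converted to GeV^3
    return math.sqrt(2.0) * G_F * n_e_gev3

def aL(rho_g_cm3, L_km, Y_e=0.5):
    """Dimensionless combination a*L of Eq. (eqn:aL)."""
    L_inv_gev = L_km * 1.0e5 / HBARC_CM  # km -> cm -> GeV^-1
    return matter_potential(rho_g_cm3, Y_e) * L_inv_gev

def Ea_over_dM2(E_GeV, dM2_eV2, rho_g_cm3, Y_e=0.5):
    """Expansion parameter E*a / Delta M^2 (1 eV^2 = 1e-18 GeV^2)."""
    return E_GeV * matter_potential(rho_g_cm3, Y_e) / (dM2_eV2 * 1.0e-18)
```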
The difference between the probabilities, $\Delta P_2$, measured at the 2 detectors is, to first order in the matter potential $a$, given by $$\begin{aligned}
\Delta P_2(\nu_\beta\to\nu_\alpha) &\equiv&
P(\nu_{\beta}\rightarrow\nu_{\alpha};\;\Delta=\frac{3}{2}\pi)
-P(\nu_{\beta}\rightarrow\nu_{\alpha};\;\Delta=\frac{\pi}{2})\nonumber\\
&=& 4J
- 8\pi\frac{Ea}{\Delta m^2}
\mbox{Re}[U_{\alpha1}U^*_{\alpha2}U^*_{\beta1}U_{\beta2}]
\cos2\theta_{12} c_{13}^2 \nonumber\\
&& + 8 J \frac{Ea}{\Delta m^2} \cos2\theta_{12} c_{13}^2
\label{eqn:Pdiff}\end{aligned}$$ where $(\alpha, \beta) = (e, \mu), (\mu, \tau)$ and $(\tau, e)$. We have used the standard form of the CKM matrix $$U=\left[
\begin{array}{ccc}
c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta}\nonumber\\
-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta} &
c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13}\nonumber\\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} &
-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13}\nonumber\\
\end{array}
\right]$$ for the neutrino mixing matrix. With this parametrization, $J$ is given as $$J\equiv \mbox{Im}
[U_{\alpha 1}U_{\alpha 2}^*U_{\beta 1}^*U_{\beta 2}]
=\pm c_{12}s_{12}c_{23}s_{23}c_{13}^2s_{13}\sin\delta,
\label{eqn:Jarlskog}$$ where $+$ sign is for cyclic permutations, i.e., $(\alpha,\beta)=(e,\mu),(\mu,\tau),(\tau,e)$ and $-$ is for anti-cyclic ones.
We note that for the mass hierarchy with the dark matter mass scale, the reactor and accelerator experiments constrain the mixing angles to three regions in the plane spanned by $s_{13}^2$ and $s_{23}^2$ [@raconstr]. Since our present analysis is motivated by the atmospheric neutrino anomaly we restrict ourselves to the two regions (A) and (B):
1. small-$s_{13}$ and small-$s_{23}$
2. large-$s_{13}$ and arbitrary $s_{23}$
In long-baseline experiments, we expect large ($\sim$ order 1) oscillation probabilities for the $\nu_\mu\rightarrow\nu_e$ and $\nu_\mu\rightarrow\nu_\tau$ channels in the regions (A) and (B), respectively. For brevity we shall call these channels the dominant channels in the respective parameter regions, and the alternative ones, i.e., $\nu_\mu\rightarrow\nu_\tau$ in the region (A) etc., the minor channels. Note that $-$ 4Re$[U_{\alpha1}U^*_{\alpha2}U^*_{\beta1}U_{\beta2}]$ is nothing but the coefficient of the $\sin^2 (\frac{\Delta m^2}{4E} L)$ term in the oscillation probabilities, and is therefore of order unity for the dominant channels. For $\Delta M^2 = 5$ eV$^2$, $4J$ is at most $\sim 10^{-2}$ in both regions (A) and (B); see Fig. 1 of ref. [@MN]. We note that $\frac{Ea}{\Delta m^2} \sim 10^{-2}$ for the mass hierarchy with which we are dealing.
Now we can roughly estimate the relative magnitudes of the matter and the $CP$-violating effects in $\Delta P_2$. They depend on the regions (A) and (B): in the region (A), the $CP$ phase and the matter effects are, roughly speaking, comparable unless $\cos2\theta_{12}$ happens to be small. We have to perform the experiments in the minor channel $\nu_\mu\rightarrow\nu_\tau$ to avoid matter-effect contamination. In the region (B), on the other hand, $c_{13}$ is small, $c_{13}^2 \sim 10^{-2}$, and therefore the $CP$ phase effect always dominates over the matter effect. One can perform experiments in the dominant channel in the region (B). However, to carry out the subtraction between the yields of the far and the intermediate detectors it may be better to always work in the minor channel; in the dominant channels one has to subtract a large number from a comparably large number to obtain a small one.
Special attention has been paid to the long-baseline neutrino experiment because, with the dark-matter-motivated mass hierarchy, the effect of $CP$ violation cannot be observed in short-baseline experiments: it can be shown that the $CP$-violating effect is suppressed by a factor of $\frac {\Delta m^2}{\Delta M^2}$. Let us confirm by numerical computation that the qualitative results we have obtained so far are correct. First we pick the following two sets of parameters, (a) and (b), from the allowed regions (A) and (B), respectively, $$\begin{aligned}
&\mbox{(a)}& \ \ s_{23}^2 = 3.0\times 10^{-3}, s_{13}^2 = 2.0\times 10^{-2}
\label{eqn:seta}\\
&\mbox{(b)}&\ \ s_{23}^2 = 2.0\times 10^{-2}, s_{13}^2 = 0.98.
\label{eqn:setb}\end{aligned}$$ They are chosen so that the largest $CP$ violating effect is expected in each of the regions (A) and (B); the same sets of parameters were used in our analysis in ref. [@MN].
In Fig. 1 we present $\Delta P_2(\nu_\mu\to\nu_e)$ and $\Delta P_2(\nu_\mu\to\nu_\tau)$ for these two parameter sets for the KEK$-$Superkamiokande distance $L=250$ km. We carried out the calculation using the exact solutions obtained in ref. [@Zaglauer] for a constant matter density, with $\rho$ = 2.72 g cm$^{-3}$ and electron fraction $Y_e$ = 0.5. We took the average over the rapid oscillations due to the dark matter scale $\Delta M^2$. In the same figure we also plot the values obtained with our analytic formula (\[eqn:Pdiff\]) to show that it gives a reasonably good approximation.
In Fig. 2 we present the same quantities but for $L=730$ km, i.e., the Fermilab$-$Soudan 2 detector distance (MINOS experiment). This is important for the $\nu_\mu\to\nu_\tau$ channel because a $\tau$ cannot be produced at a neutrino energy as low as $E=1.35$ GeV. The nice feature of $\Delta P_2$ in Figs. 1 and 2 is that the matter effect contamination is relatively smaller than in the $\nu-\bar\nu$ difference method discussed in ref. [@MN]. This is particularly true for $L=250$ km. Therefore, if we can measure $\Delta P_2$ to an accuracy of $\sim$ 1 %, it is, in principle, possible to observe $CP$ violation in neutrino oscillation experiments.
Now we turn to the question of how the multiple detector difference method works for the neutrino mass hierarchy motivated by the atmospheric [@atmospheric] and the solar [@solar] neutrino observations, $\Delta M^2 = 10^{-3} - 10^{-2} \mbox{eV}^2$ and $\Delta m^2 = 10^{-6} - 10^{-4} \mbox{eV}^2$. In this case we cannot use the matter perturbation theory developed in ref. [@MN] because $\frac{Ea}{\Delta m^2} \simeq 1-10^2$ cannot be used as an expansion parameter. In this paper we rely on the approximate formula derived by Arafune, Koike and Sato [@AKS] who use as expansion parameters $aL$ in (\[eqn:aL\]) and $$\frac{\Delta m^2 L}{2E} = 6.4\times 10^{-2} \left(
\frac{\Delta m^2}{10^{-4}\mbox{eV}^2} \right) \left(
\frac{L}{250\mbox{km}} \right) \left(
\frac{E}{1\mbox{GeV}} \right)^{-1}.$$
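The coefficient $6.4\times 10^{-2}$ in this formula can be verified with a few lines of arithmetic; only the standard conversion constant $\hbar c$ is assumed:

```python
# Check Delta m^2 * L / (2E) for the reference values in the formula above.
dm2 = 1e-4            # Delta m^2 in eV^2
L = 250e3             # baseline in meters
E = 1e9               # neutrino energy in eV (1 GeV)
hbar_c = 1.9733e-7    # eV*m

phase = dm2 * L / (2 * E * hbar_c)   # dimensionless expansion parameter
```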
The general structure of the oscillation probability [@AKS] can be expressed, to leading order in these expansion parameters, as $$P(\nu_{\beta}\rightarrow\nu_{\alpha})
= \bar{A}_{\beta \alpha}(1-\cos\Delta)
+ \bar{B}_{\beta \alpha}\Delta\sin\Delta
+ \bar{C}_{\beta \alpha}\Delta(1-\cos\Delta)$$ where $\Delta\equiv\frac{\Delta M^2L}{2E}$. The coefficient of the last term is given by $\bar{C}_{\beta \alpha} = 2J\frac{\Delta m^2}{\Delta M^2}$. The coefficient $\bar{B}_{\beta \alpha}$ contains terms proportional to either $\frac{\Delta m^2}{\Delta M^2}$ or $\frac{Ea}{\Delta M^2}$, which are of similar order of magnitude as the last term. With three terms of different $\Delta$ dependence and similar magnitudes, it is unlikely that one can separate the last term using only 2 detectors. We therefore generalize our method to 3 detectors, located at $\Delta = \pi /2$, $\pi$, and $3\pi/2$. It is then simple to show that $$\Delta P_3(\nu_\beta\to\nu_\alpha) \equiv
P(\Delta=\frac{3}{2}\pi) + 3P(\Delta=\frac{\pi}{2})-2P(\Delta=\pi)
= 2\pi J\frac{\Delta m^2}{\Delta M^2}\ .
\label {eqn:Delta-P3}$$ To get a feeling for the magnitude of $CP$ violation we choose $\Delta m^2 = 10^{-4}\mbox{eV}^2$ and $\Delta M^2=10^{-2}\mbox{eV}^2$, and take $s_{12}= 1/2$, $s_{23}=1/\sqrt{2}$, and $s_{13}= \sqrt{0.1}$, as done by Arafune et al. [@AKS]. The RHS of (\[eqn:Delta-P3\]) is then estimated to be 0.39$ \times 10^{-2}$. We should note, however, that these parameters are not those which maximize $\Delta P_3$.
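Both the cancellation behind (\[eqn:Delta-P3\]) and the quoted estimate can be checked directly. The sketch below verifies symbolically that the 3-detector combination eliminates the $\bar{A}$ and $\bar{B}$ terms, leaving a piece of magnitude $\pi|\bar{C}|=2\pi J \Delta m^2/\Delta M^2$, and then evaluates it numerically; the parametrization $J=s_{12}c_{12}s_{23}c_{23}s_{13}c_{13}^2\sin\delta$ is our assumption for the convention in use.

```python
import math
import sympy as sp

# Symbolic check: P(3pi/2) + 3*P(pi/2) - 2*P(pi) removes the A and B terms.
A, B, C = sp.symbols('A B C')
P = lambda d: A*(1 - sp.cos(d)) + B*d*sp.sin(d) + C*d*(1 - sp.cos(d))
combo = sp.simplify(P(3*sp.pi/2) + 3*P(sp.pi/2) - 2*P(sp.pi))
# combo is proportional to pi*C only (the overall sign depends on conventions)

# Numerical estimate with the parameters of Arafune et al.
s12, s23, s13 = 0.5, 1/math.sqrt(2), math.sqrt(0.1)
c12, c23, c13 = (math.sqrt(1 - s*s) for s in (s12, s23, s13))
J = s12*c12*s23*c23*s13*c13**2        # sin(delta) = 1 for delta = pi/2
dP3 = 2*math.pi*J*(1e-4/1e-2)         # 2*pi*J*(dm^2/dM^2)
```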
The above discussion of the relative and absolute magnitudes of $CP$ violation can be confirmed by a precise computation of $\Delta P_3$, as we did for $\Delta P_2$. In Fig. 3 we plot $\Delta P_3(\nu_\mu\to\nu_e)$ and $\Delta P_3(\nu_\mu\to\nu_\tau)$ as a function of $s_{13}^2$. We see that the matter effect contamination is small in both channels and $\Delta P_3 \sim 0.5$ % for $L=250$ km. In Fig. 4 we plot the same as in Fig. 3 but for $L=730$ km. In this case, however, the magnitude of the matter effect can be comparable to the genuine $CP$ effect, which also implies that our analytic formula (\[eqn:Delta-P3\]) is not accurate.
What statistics are required for an experiment designed to measure $CP$ violation at the 1 % level? Suppose that we design a long-baseline experiment which will produce $N$ muon events at each detector in the absence of oscillation. We assume that the number of appearance events in the dominant channel is $\sim N/2$. The number of events in the minor channel may then be of the order of $\sim 10^{-3}N$. We have to take differences between 2 or 3 detectors. If we require an uncertainty of less than 10 % in the number of events in the minor channel, this means that $N \sim 10^5$. Achieving such statistics with a very narrow-band beam is certainly not easy, but may be possible in the future Japan Hadron Project and the MINOS experiments.
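The estimate $N \sim 10^5$ follows from Poisson statistics alone; a minimal sketch (the $10^{-3}$ minor-channel fraction is the one assumed in the text):

```python
# Events needed so that the minor channel is measured to 10% (Poisson statistics).
minor_fraction = 1e-3     # minor-channel events relative to N, from the text
precision = 0.10          # required fractional uncertainty

n_minor = (1 / precision) ** 2      # 1/sqrt(n) = precision  ->  n = 100 events
N = n_minor / minor_fraction        # unoscillated muon events per detector
```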
A potential problem, though a technical one, with the multiple detector difference method is that the intermediate detector must be built underground: one has to dig down to 1.1 km and 9.3 km for the KEK-PS$\rightarrow$Superkamiokande and the MINOS experiments, respectively. We hope that this technical problem can be overcome by the time neutrino experiments for measuring $CP$ violation are on the timetable.
To establish the multiple detector difference method, much more careful study is required of various aspects, including beam design. Among the most important is the effect of averaging over the finite energy width of the beam; our preliminary investigation indicates that $\Delta P$ is sensitive to the variation of the neutrino energy. Keeping these problems in mind, we want to emphasize that this method may be the only practical way of measuring $CP$ violation to an accuracy of a few %.
We thank Koichiro Nishikawa for informative discussions on the KEK-PS $\rightarrow$ Superkamiokande experiment. One of us (H.M.) is partially supported by Grant-in-Aid for Scientific Research \#09640370 of the Ministry of Education, Science and Culture, and by Grant-in-Aid for Scientific Research \#09045036 under International Scientific Research Program, Inter-University Cooperative Research. The other (H.N.) has been supported by a DGICYT postdoctoral fellowship at Universitat de València under Grant PB95-1077 and TMR network ERBFMRXCT960090 of the European Union.
[99]{}
M. Fukugita and T. Yanagida, Phys. Lett. [**B174**]{} (1986) 45.
V. Barger, K. Wisnant and R. J. N. Phillips, Phys. Rev. Lett. [**45**]{} (1980) 2084.
S. Pakvasa, in [*Proceedings of the XXth International Conference on High Energy Physics*]{}, edited by L. Durand and L. G. Pondrom, AIP Conf. Proc. No. 68 (AIP, New York, 1981), Vol. 2, pp. 1164.
C. Jarlskog, Phys. Rev. Lett. [**55**]{} (1985) 1039.
M. Tanimoto, Phys. Rev. [**D55**]{} (1997) 322; preprint EHU-96-12, hep-ph/9612444.
J. Arafune and J. Sato, Phys. Rev. [**D55**]{} (1997) 1653.
J. Arafune, M. Koike and J. Sato, preprint ICRR-385-97-8, hep-ph/9703351.
H. Minakata and H. Nunokawa, preprint TMUP-HEL-9704/FTUV/97-21, hep-ph/9705208.
S. M. Bilenky, C. Giunti and W. Grimus, preprint UWThPh-1997-11/DFTT 26/97, hep-ph/9705300.
T. K. Kuo and J. Pantaleone, Phys. Lett. [**B198**]{} (1987) 406; P. I. Krastev and S. T. Petcov, Phys. Lett. [**B205**]{} (1988) 84.
M. Kobayashi and T. Maskawa, Prog. Theor. Phys. [**49**]{} (1973) 652.
H. Minakata, Talk presented at Workshop on Physics of JHP, Yaizu, Shizuoka-ken, September 21-23, 1996; Talk given at the Conference on Neutrino Physics at Miyako, Miyako, Iwate-ken, January 8-10, 1997, in [*Proceedings of the Conference on Neutrino Physics at Miyako*]{}, edited by T. Hasegawa, F. Suekane, A. Suzuki, and M. Yamaguchi, TOHOKU-HEP-NOTE-97-01, January 1997.
D. Beavis et al. (E889 Collaboration), Physics Design Report, BNL No. 52459, April 1995.
K. Nishikawa, private communications.
B. T. Cleveland et al., Nucl. Phys. B (Proc. Suppl.) [**38**]{} (1995) 47; Y. Suzuki, ibid [**38**]{} (1995) 54; P. Anselmann et al., Phys. Lett. [**B285**]{} (1992) 376; [**B314**]{} (1993) 445; [**327**]{}, (1994) 377; [**B342**]{}, (1995) 440; J. N. Abdurashitov et al., Nucl. Phys. B (Proc. Suppl.) [**38**]{} (1995) 60.
K. S. Hirata et al., Phys. Lett. [**B205**]{} (1988) 416; [**B280**]{} (1992) 146; Y. Fukuda et al., ibid [**B335**]{} (1994) 237; R. Becker-Szendy et al., Phys. Rev. [**D46**]{} (1992) 3720; W. W. M. Allison et al., Phys. Lett. [**B391**]{} (1997) 491.
J. A. Holtzman, Astrophys. J. Suppl. [**71**]{} (1989) 1; J. A. Holtzman and J. R. Primack, Astrophys. J. [**405**]{} (1993) 428; J. R. Primack, J. Holtzman, A. Klypin, and D. O. Caldwell, Phys. Rev. Lett. [**74**]{} (1995) 2160; K. S. Babu, R. K. Schaefer, and Q. Shafi, Phys. Rev. [**D53**]{} (1996) 606; D. Pogosyan and A. Starobinsky, astro-ph/9502019.
H. Minakata, Talk presented at IInd Rencontres du Vietnam, Ho Chi Minh, Vietnam, October 21-28, 1996, in [*Physics at the Frontiers of the Standard Model*]{}, pp 477, edited by Nguyen van Hieu and J. Tran Thanh Van (Editions Frontieres, Gif-sur-Yvette, 1996).
H. Minakata, Phys. Rev. [**D52**]{} (1995) 6630; Phys. Lett. [**B356**]{} (1995) 61; S. M. Bilenky, A. Bottino, C. Giunti, and C. W. Kim, Phys. Lett. [**B356**]{} (1995) 273; G. L. Fogli, E. Lisi, and G. Scioscia, Phys. Rev. [**D52**]{} (1995) 5334.
H. W. Zaglauer and K. H. Schwarzer, Z. Phys. [**C40**]{} (1988) 273.
Fig. 1: We plot in (i) and (ii) $\Delta P_2(\nu_\mu\to\nu_e)$ and in (iii) and (iv) $\Delta P_2(\nu_\mu\to\nu_\tau)$ as a function of $s_{12}^2$, for the cases where only the genuine $CP$ effect (open circles), only the matter effect (dotted lines), and both effects (solid lines) exist. We fixed the other mixing parameters as $s_{23}^2 = 3.0\times 10^{-3}$, $s_{13}^2 = 2.0\times 10^{-2}$ for the left two panels (i) and (iii), and $s_{23}^2 = 2.0\times 10^{-2}$, $s_{13}^2$ = 0.98 for the right two panels (ii) and (iv). The other parameters are the same for all the cases (i)-(iv), i.e., $E$=1.35 GeV, $\Delta M^2 = 5$ eV$^2$, $\Delta m^2 = 10^{-2}$ eV$^2$, $\delta = \pi/2$ and $L=250$ km. We also plot the approximate values for the cases where only the matter (open squares) and the matter + $CP$ (asterisks) effects exist, except for (ii) where no appreciable difference between the exact and the approximate values can be seen.
Fig. 2: The same as in Fig. 1 but for $L=730$ km and $E=3.94$ GeV.
Fig. 3: We plot in (i) $\Delta P_3(\nu_\mu\to\nu_e)$ and in (ii) $\Delta P_3(\nu_\mu\to\nu_\tau)$ as a function of $s_{13}^2$, for the cases where only the genuine $CP$ effect (open circles), only the matter effect (dotted lines), and both effects (solid lines) exist. We fixed the other parameters as $s_{12}^2 = 0.25$, $s_{23}^2 = 0.5$, $E$=1.35 GeV, $\Delta M^2 = 10^{-2}$ eV$^2$, $\Delta m^2 = 10^{-4}$ eV$^2$, $\delta = \pi/2$ and $L=250$ km.
Fig. 4: The same as in Fig. 3 but for $L=730$ km and $E=3.94$ GeV.
[^1]: Preliminary descriptions of the two-detector difference method were given in ref. [@mina1]. We should mention that the idea of placing multiple detectors in long-baseline experiments is not new; for example, it appeared in the Brookhaven proposal [@BNL]. But their motivation and the basic idea behind the use of multiple detectors are entirely different from ours, and they do not discuss the possibility of measuring $CP$ violation.
[^2]: Related but different proposals in this context have been discussed in ref. [@AKS].
[^3]: Their geographical locations are, respectively, around the city of Honjo in Saitama prefecture and a midpoint between Westfield and Packwaukee, about 55 miles north of Madison, Wisconsin.
---
abstract: 'The Yukawa interaction sector of superstring inspired models that give superconducting strings can be described in terms of a supersymmetric quantum mechanics algebra. We relate the Witten index of SUSY quantum mechanics to an index characteristic of superconducting string models.'
author:
- |
V.K.Oikonomou[^1]\
Technological Education Institute of Serres,\
Dept. of Informatics and Communications 62124 Serres, Greece\
and\
Dept. of Theoretical Physics Aristotle University of Thessaloniki,\
Thessaloniki 541 24 Greece
title: Witten Index and Superconducting Strings
---
Introduction {#introduction .unnumbered}
============
Superconducting strings are known to have important cosmological implications [@witten; @supercondstrings]. Cosmic strings can become superconducting if charged fermionic transverse zero modes are trapped along the strings [@rossi]. For example, in [@rossi] a single massive fermion was considered, which acquired its mass through a Yukawa-type interaction with a scalar field having a varying phase around the string. In [@weinberg] an index theorem was obtained which determines the minimum number of zero modes.
Moreover, in [@ganoulis] an index theorem was developed which applies to more realistic theories. In particular, in grand-unified or superstring inspired models one has many left- and right-handed fermions coupled to a number of scalar fields through Yukawa interactions. Some of these models admit cosmic string solutions, and it is interesting to know which of these are superconducting. The index theorem developed in reference [@ganoulis] gives an adequate answer to this question and applies to models where one has a matrix of Higgs fields and many charged fermion flavors coupled to this matrix, with arbitrary phase variations around the string. Moreover, a nonzero index (we denote the index $I_q$) is a criterion for whether the cosmic strings are superconducting or not.
In this letter we shall relate the index $I_q$ to the Witten index of supersymmetric quantum mechanical systems. Indeed, we shall see that models that admit superconducting string solutions can be written in terms of an $N=2$ supersymmetric quantum mechanics system, and that the Witten index of this system is identical to the index $I_q$. Thus we relate a purely mathematical property of a system to the phenomenology of a grand-unified or superstring inspired model.
We shall briefly present some features of supersymmetric quantum mechanics, as well as the required background on superconducting strings and the index $I_q$, in order to make the article self-contained.
Supersymmetric Quantum Mechanics and Superconducting Strings {#supersymmetric-quantum-mechanics-and-superconducting-strings .unnumbered}
============================================================
Supersymmetric Quantum Mechanics {#supersymmetric-quantum-mechanics .unnumbered}
--------------------------------
Let us briefly review some properties of supersymmetric quantum mechanics. The presentation is based on [@susyqm]. A quantum system described by a Hamiltonian $H$ and characterized by the set $\{H,Q_1,...,Q_N\}$, with $Q_i$ self-adjoint operators, is called supersymmetric if the following anti-commutation relation holds for $i,j=1,2,...,N$, $$\label{susy1}
\{Q_i,Q_j\}=H\delta_{i{\,}j}$$ The self-adjoint operators are then called supercharges and the Hamiltonian $H$ is called the SUSY Hamiltonian. The algebra (\[susy1\]) describes a symmetry called $N$-extended supersymmetry. Of course, SUSY quantum mechanics can also be defined in terms of non-self-adjoint supercharges, as we will see shortly. The superalgebra (\[susy1\]) imposes some restrictions on the SUSY Hamiltonian; in particular, it follows from the anti-commutation relation that, $$\label{susy3}
H=2Q_1^2=2Q_2^2=\ldots =2Q_N^2=\frac{2}{N}\sum_{i=1}^{N}Q_i^2.$$ A supersymmetric quantum system $\{H,Q_1,...,Q_N\}$ is said to have good SUSY (unbroken supersymmetry) if its ground-state energy vanishes, that is $E_0=0$. For a positive ground-state energy, $E_0>0$, SUSY is said to be broken. It is obvious that for good supersymmetry the ground states must be annihilated by all supercharges, that is, $$\label{s1}
Q_i |\psi_0^j\rangle=0$$ for all $i,j$. We now describe the basic features of $N=2$ supersymmetric quantum mechanics. The $N=2$ algebra consists of two supercharges $Q_1$ and $Q_2$ and a Hamiltonian $H$, which obey the following relations,
$$\label{sxer2}
\{Q_1,Q_2\}=0,{\,}{\,}{\,}H=2Q_1^2=2Q_2^2=Q_1^2+Q_2^2$$
A more frequently used notation involves the following operators, $$\label{s2}
Q=\frac{1}{\sqrt{2}}(Q_{1}+iQ_{2})$$ and the adjoint, $$\label{s255}
Q^{\dag}=\frac{1}{\sqrt{2}}(Q_{1}-iQ_{2})$$ The operators of relations (\[s2\]) and (\[s255\]) satisfy the following equations, $$\label{s23}
Q^{2}={Q^{\dag}}^2=0$$ while their anticommutator reproduces the Hamiltonian, $$\label{s4}
\{Q,Q^{\dag}\}=H$$ It is always possible for $N=2$ to define the Witten parity operator, $W$, which is defined through the following relations, $$\label{s45}
[W,H]=0$$ and $$\label{s5}
\{W,Q\}=\{W,Q^{\dag}\}=0$$ Also $W$ satisfies, $$\label{s6}
W^{2}=I$$ Using $W$, we can split the Hilbert space $\mathcal{H}$ of the quantum system into positive and negative Witten-parity subspaces, defined as $\mathcal{H}^{\pm}=P^{\pm}\mathcal{H}=\{|\psi\rangle :
W|\psi\rangle=\pm |\psi\rangle \}$. Thus the Hilbert space $\mathcal{H}$ is decomposed into the eigenspaces of $W$, so $\mathcal{H}=\mathcal{H}^+\oplus \mathcal{H}^-$, and each operator acting on the vectors of $\mathcal{H}$ is represented in general by a $2N\times 2N$ matrix. We shall use the representation $$\label{s7345}
W=\bigg{(}\begin{array}{ccc}
I & 0 \\
0 & -I \\
\end{array}\bigg{)}$$ with $I$ the $N\times N$ identity matrix. Bearing in mind that $Q^2=0$ and $\{Q,W\}=0$, the supercharges are necessarily of the form, $$\label{s7}
Q=\bigg{(}\begin{array}{ccc}
0 & A \\
0 & 0 \\
\end{array}\bigg{)}$$ and $$\label{s8}
Q^{\dag}=\bigg{(}\begin{array}{ccc}
0 & 0 \\
A^{\dag} & 0 \\
\end{array}\bigg{)}$$ which imply, $$\label{s89}
Q_1=\frac{1}{\sqrt{2}}\bigg{(}\begin{array}{ccc}
0 & A \\
A^{\dag} & 0 \\
\end{array}\bigg{)}$$ and also, $$\label{s10}
Q_2=\frac{i}{\sqrt{2}}\bigg{(}\begin{array}{ccc}
0 & -A \\
A^{\dag} & 0 \\
\end{array}\bigg{)}$$ The $N\times N$ matrices $A$ and $A^{\dag}$ are generalized annihilation and creation operators. In particular, $A$ acts as $A: \mathcal{H}^-\rightarrow \mathcal{H}^+$ and $A^{\dag}$ as $A^{\dag}: \mathcal{H}^+\rightarrow \mathcal{H}^-$. In the representation (\[s7345\]), (\[s7\]), (\[s8\]) the quantum mechanical Hamiltonian $H$ can be written in the diagonal form, $$\label{s11}
H=\bigg{(}\begin{array}{ccc}
AA^{\dag} & 0 \\
0 & A^{\dag}A \\
\end{array}\bigg{)}$$ Thus for an $N=2$ supersymmetric quantum system, the total supersymmetric Hamiltonian $H$ consists of two superpartner Hamiltonians, $$\label{h1}
H_{+}=A{\,}A^{\dag},{\,}{\,}{\,}{\,}{\,}{\,}{\,}H_{-}=A^{\dag}{\,}A$$ The above two Hamiltonians are known to be isospectral for eigenvalues different from zero, that is, $$\label{isosp}
\mathrm{spec}(H_{+})\setminus \{0\}=\mathrm{spec}(H_{-})\setminus
\{0\}$$ The eigenstates of $P^{\pm}$ are called positive and negative parity eigenstates and are denoted as $|\psi^{\pm}\rangle$, with, $$\label{fd1}
P^{\pm}|\psi^{\pm}\rangle =\pm |\psi^{\pm}\rangle$$ In the representation (\[s7345\]), the parity eigenstates are represented in the form, $$\label{phi5}
|\psi^{+}\rangle =\left(\begin{array}{c}
|\phi^{+}\rangle \\
0 \\
\end{array}\right)$$ and also, $$\label{phi6}
|\psi^{-}\rangle =\left(\begin{array}{c}
0 \\
|\phi^{-}\rangle \\
\end{array}\right)$$ with $|\phi^{\pm}\rangle$ $\epsilon$ $H^{\pm}$.
Let us now examine the ground state properties for good supersymmetry. For good supersymmetry, as we noted before, there exists at least one state in the Hilbert space with vanishing energy eigenvalue, that is $H|\psi_{0}\rangle =0$. Since $H=\{Q,Q^{\dag}\}$ is positive semi-definite, this is equivalent to $Q|\psi_{0}\rangle =0$ and $Q^{\dag}|\psi_{0}\rangle =0$. For a positive parity ground state, $$\label{phi5gs}
|\psi^{+}_0\rangle =\left(\begin{array}{c}
|\phi^{+}_{0}\rangle \\
0 \\
\end{array}\right)$$ this implies that $A^{\dag}|\phi^{+}_{0}\rangle =0$, whereas for a negative parity ground state, $$\label{phi6s6}
|\psi^{-}_{0}\rangle =\left(\begin{array}{c}
0 \\
|\phi^{-}_0\rangle \\
\end{array}\right)$$ it implies that $A|\phi^{-}_{0}\rangle =0$. In general a ground state can have positive or negative Witten parity, and when the ground state is degenerate both cases can occur. When $E\neq
0$ the number of positive parity eigenstates is equal to the number of negative parity eigenstates. This does not happen for the ground states. A tool to decide whether there are zero modes is the so-called Witten index. Let $n_{\pm}$ be the number of zero modes of $H_{\pm}$ in the subspace $\mathcal{H}^{\pm}$. For finite $n_{+}$ and $n_{-}$ the quantity, $$\label{phil}
\Delta =n_{-}-n_{+}$$ is called the Witten index. Whenever the Witten index is a non-zero integer, supersymmetry is good (unbroken). If the Witten index is zero, one cannot tell whether supersymmetry is broken (which would mean $n_{+}=n_{-}=0$) or not ($n_{+}=n_{-}\neq 0$). The Witten index is related to the Fredholm index of the operator $A$ we mentioned earlier, $$\label{ker}
\mathrm{ind} A = \mathrm{dim}{\,}\mathrm{ker}
A-\mathrm{dim}{\,}\mathrm{ker} A^{\dag}=
\mathrm{dim}{\,}\mathrm{ker}A^{\dag}A-\mathrm{dim}{\,}\mathrm{ker}AA^{\dag}$$ The importance of the Fredholm index is that it is a topological invariant. We shall use only Fredholm operators. For a discussion on non-Fredholm operators and the Witten index, see [@susyqm]. The Witten index is obviously related to the Fredholm index of $A$, as, $$\label{ker1}
\Delta=\mathrm{ind} A=\mathrm{dim}{\,}\mathrm{ker}
H_{-}-\mathrm{dim}{\,}\mathrm{ker} H_{+}$$
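These kernel relations are easy to illustrate in a finite-dimensional toy model. Note that for a square matrix $A$ the index vanishes identically, so the sketch below deliberately uses a rectangular toy $A$ (our assumption, purely for illustration) to produce a nonzero index; it also checks the isospectrality (\[isosp\]).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # toy "annihilation" operator (rectangular)

H_plus = A @ A.T                  # H_+ = A A^dag
H_minus = A.T @ A                 # H_- = A^dag A

def dim_ker(M, tol=1e-10):
    """Number of numerically zero eigenvalues of a symmetric matrix."""
    return int(np.sum(np.abs(np.linalg.eigvalsh(M)) < tol))

# Witten index: Delta = dim ker H_-  -  dim ker H_+  = dim ker A - dim ker A^dag
delta = dim_ker(H_minus) - dim_ker(H_plus)   # = 5 - 3 = 2 for a generic 3x5 A

# Isospectrality: the nonzero spectra of H_+ and H_- coincide
ev_p = np.sort(np.linalg.eigvalsh(H_plus))
ev_m = np.sort(np.linalg.eigvalsh(H_minus))
nonzero_match = np.allclose(ev_p[ev_p > 1e-10], ev_m[ev_m > 1e-10])
```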
Superconducting Strings {#superconducting-strings .unnumbered}
-----------------------
We now briefly present the theory of superconducting strings in terms of Yukawa interactions of left-handed and right-handed fermions with Higgs scalars. We follow closely reference [@ganoulis]. Consider a theory containing $N$ left-handed fermion fields $\psi_{\alpha}$ and $N$ right-handed fermions $\chi_{\alpha}$, interacting with the Higgs sector according to the following Lagrangian, $$\label{lagrangian}
\mathcal{L}=i
\bar{\psi_{\alpha}}\gamma^{\mu}\partial_{\mu}\psi_{\alpha}+i
\bar{\chi_{\alpha}}\gamma^{\mu}\partial_{\mu}\chi_{\alpha}-(\bar{\chi_{\alpha}}M_{\alpha
\beta}\psi_{\beta}+\mathrm{H.c.}),$$ with $\alpha ,\beta =1,...,N$. The $N\times N$ matrix $M$ contains the scalar fields with the interaction couplings. In general, in a string background the matrix $M$ depends only on the polar coordinates $r$ and $\phi$ around the string. Due to the cylindrical symmetry of the string, the theory is effectively two-dimensional and we can work in terms of two-component spinors. The chiral fermions can be written, $$\label{psibars}
\psi_{\alpha}=\frac{1}{\sqrt{2}}\left(\begin{array}{c}
\widehat{\psi_{\alpha}} \\
-\widehat{\psi_{\alpha}} \\
\end{array}\right)$$ and also, $$\label{psibars1}
\chi_{\alpha}=\frac{1}{\sqrt{2}}\left(\begin{array}{c}
\widehat{\chi_{\alpha}} \\
-\widehat{\chi_{\alpha}} \\
\end{array}\right)$$ Using an appropriate representation for the $\gamma$-matrices, the Lagrangian can be written, $$\begin{aligned}
\label{yeaaah}
&\mathcal{L}=i\widehat{\psi_{\alpha}}^{\dag}\partial_{0}\widehat{\psi_{\alpha}}-i\widehat{\psi_{\alpha}}^{\dag}\sigma^{j}\partial_{j}\widehat{\psi_{\alpha}}
+i\widehat{\chi_{\alpha}}^{\dag}\partial_{0}\widehat{\chi_{\alpha}}-i\widehat{\chi_{\alpha}}^{\dag}\sigma^{j}\partial_{j}\widehat{\chi_{\alpha}}\notag
\\& - \widehat{\chi_{\alpha}}^{\dag}M_{\alpha
\beta}\widehat{\psi_{\beta}}-\widehat{\psi_{\alpha}}^{\dag}M^{\dag}_{\alpha
\beta}\widehat{\chi_{\beta}}\end{aligned}$$ The equations of motion corresponding to the Lagrangian (\[yeaaah\]) are, $$\begin{aligned}
\label{ref1}
&-\partial_{0}\widehat{\psi_{\alpha}}+\sigma^{j}\partial_{j}\widehat{\psi_{\alpha}}-iM_{\alpha
\beta}^{\dag}\widehat{\chi_{\beta}}=0\\&\notag
-\partial_{0}\widehat{\chi_{\alpha}}+\sigma^{j}\partial_{j}\widehat{\chi_{\alpha}}-iM_{\alpha
\beta}\widehat{\psi_{\beta}}=0\end{aligned}$$ with $\alpha,\beta=1,2,...,N$, and $\sigma^j$ the Pauli matrices. Set, $$\label{ref2}
\widehat{\psi_{\alpha}}=f(x_3,t)\left(\begin{array}{c}
\psi_{\alpha}(r,\phi) \\
0 \\
\end{array}\right)$$ and also, $$\label{ref2}
\widehat{\chi_{\alpha}}=f(x_3,t)\left(\begin{array}{c}
0 \\
\chi_{\alpha}(r,\phi) \\
\end{array}\right)$$ Using the above two, the transverse zero-mode equations in the $x_1{\,}x_2$ plane, read, $$\begin{aligned}
\label{koryfaia}
&(\partial_{1}+i\partial_{2})\psi_{\alpha}-iM_{\alpha
\beta}^{\dag}\chi_{\beta}=0\\&\notag
(\partial_{1}-i\partial_{2})\chi_{\alpha}+iM_{\alpha
\beta}\psi_{\beta}=0\end{aligned}$$ Additionally one must have, $$\label{yeah}
(\partial_{0}-\partial_{3})f=0$$ The last equation means that both $\psi$ and $\chi$ are left movers (L-movers, see [@witten]). Another possibility is to have, $$\label{ref2}
\widehat{\psi_{\alpha}}=f(x_3,t)\left(\begin{array}{c}
0 \\
\psi_{\alpha}(r,\phi) \\
\end{array}\right)$$
$$\label{ref2wer}
\widehat{\chi_{\alpha}}=f(x_3,t)\left(\begin{array}{c}
\chi_{\alpha}(r,\phi) \\
0 \\
\end{array}\right)$$
with corresponding equations of motion, $$\begin{aligned}
\label{koryfaiax}
&(\partial_{1}-i\partial_{2})\psi_{\alpha}-iM_{\alpha
\beta}^{\dag}\chi_{\beta}=0\\&\notag
(\partial_{1}+i\partial_{2})\chi_{\alpha}+iM_{\alpha
\beta}\psi_{\beta}=0\end{aligned}$$ and also, $$\label{yeah345}
(\partial_{0}+\partial_{3})f=0$$ In this case both $\psi$ and $\chi$ are right movers (R-movers, see [@witten]). The main interest in these theories is focused on the above zero modes. For a general mass matrix $M_{\alpha
\beta}$, the solutions of (\[koryfaia\]) and (\[koryfaiax\]) are difficult to find. We define, $$\label{dmatrix}
\mathcal{D}=\left(\begin{array}{cc}
\partial_{1}+i\partial_{2} & -iM^{\dag} \\
iM & \partial_{1}-i\partial_{2} \\
\end{array}\right)_{2N\times 2N}$$ and additionally, $$\label{dmatrixdag}
\mathcal{D}^{\dag}=\left(\begin{array}{cc}
\partial_{1}-i\partial_{2} & -iM^{\dag} \\
iM & \partial_{1}+i\partial_{2} \\
\end{array}\right)_{2N\times 2N}$$ acting on $$\label{wee}
\left(\begin{array}{c}
\psi_{\alpha} \\
\chi_{\alpha} \\
\end{array}\right)$$ The solutions of (\[koryfaia\]) and (\[koryfaiax\]) are the zero modes of $\mathcal{D}$ and of $\mathcal{D}^{\dag}$, respectively. The index $I_q$ associated with the operator $\mathcal{D}$ is defined as, $$\label{indexd}
\mathrm{I}_q=\mathrm{dim{\,}ker}(\mathcal{D}^{\dag})-\mathrm{dim{\,}ker}(\mathcal{D}),$$ which is the number of zero modes of $\mathcal{D}^{\dag}$ minus the number of zero modes of $\mathcal{D}$, and equals the number of right movers $R$ minus the number of left movers $L$. The mass matrix is assumed to have the following form, $$\label{indexofd}
M_{\alpha \beta}(r,\phi)=S_{\alpha \beta}(r)e^{iq_{\alpha
\beta}\phi}$$ The integers $q_{\alpha \beta}$ are related to the charges of the fields with respect to the group generator $Q$ which corresponds to the string [@witten]. With $\bar{q}_{\alpha}$ and $q_{\beta}$ the charges of the fermion fields $\chi_{\alpha}^{\dag}$ and $\psi_{\beta}$, the neutrality of $\chi_{\alpha}^{\dag}M_{\alpha \beta}\psi_{\beta}$ implies $$\label{tetanus}
q_{\alpha \beta}=\bar{q}_{\alpha}-q_{\beta}$$
It is proved that $I_q=\sum_{\alpha =1}^{N}q_{\alpha
\alpha}$ [@weinberg; @ganoulis]. Therefore the index of $\mathcal{D}$ is related to the charges of the fermions under the string gauge group.
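Since $q_{\alpha \beta}=\bar{q}_{\alpha}-q_{\beta}$, the index depends only on the diagonal windings, $I_q=\sum_{\alpha}(\bar{q}_{\alpha}-q_{\alpha})$. A minimal sketch with invented toy charges (the values below are illustrative only, not taken from any model in the text):

```python
# Toy illustration: I_q depends only on the diagonal of q_{alpha beta}.
q_bar = [1, 0, 2]   # charges of chi_alpha^dag (invented values)
q = [0, 0, 1]       # charges of psi_alpha (invented values)

winding = [[qb - qp for qp in q] for qb in q_bar]    # q_{alpha beta} matrix
I_q = sum(winding[a][a] for a in range(len(q)))      # sum over the diagonal
# Equivalently, I_q = sum(q_bar) - sum(q)
```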
We can see that the theory of superconducting string zero modes defines an $N=2$ supersymmetric quantum mechanical system. Indeed we can write, $$\label{wit2}
Q=\bigg{(}\begin{array}{ccc}
0 & D \\
0 & 0 \\
\end{array}\bigg{)}$$ and additionally, $$\label{wit3}
Q^{\dag}=\bigg{(}\begin{array}{ccc}
0 & 0 \\
D^{\dag} & 0 \\
\end{array}\bigg{)}$$ Also the Hamiltonian of the system can be written, $$\label{wit4}
H=\bigg{(}\begin{array}{ccc}
DD^{\dag} & 0 \\
0 & D^{\dag}D \\
\end{array}\bigg{)}$$ It is obvious that the above matrices obey $\{Q,Q^{\dag}\}=H$, $Q^2=0$, ${Q^{\dag}}^2=0$, $\{Q,W\}=0$, $W^2=I$ and $[W,H]=0$. Thus we can relate the Witten index of the $N=2$ supersymmetric quantum mechanics system to the index $I_q$ built from the fermion charges. Indeed we have $I_q=-\Delta$, because, $$\label{kerIq}
I_q=\mathrm{dim}{\,}\mathrm{ker}
\mathcal{D}^{\dag}-\mathrm{dim}{\,}\mathrm{ker} \mathcal{D}=
\mathrm{dim}{\,}\mathrm{ker}\mathcal{D}\mathcal{D}^{\dag}-\mathrm{dim}{\,}\mathrm{ker}\mathcal{D}^{\dag}\mathcal{D}=-\mathrm{ind}\mathcal{D}=-\Delta=n_+-n_-$$ So it is clear that the underlying supersymmetric algebra is related to the phenomenology of the model on which the superconducting string is based. This is very valuable because one can answer the question of whether a model gives superconducting string solutions by examining the Witten index of the corresponding $N=2$ supersymmetric algebra. Before proceeding to an example, let us discuss some important issues. Due to the supersymmetric quantum mechanical structure of the system, the zero modes of the operator $\mathcal{D}$ are related to the zero modes of the operator $\mathcal{D}^{\dag}\mathcal{D}$. Therefore the zero modes of $\mathcal{D}\mathcal{D}^{\dag}$ and $\mathcal{D}^{\dag}\mathcal{D}$ can be classified, according to the Witten parity, into positive and negative parity solutions. This is valuable for finding solutions to the equations (\[koryfaia\]) and (\[koryfaiax\]). It is known [@ganoulis] that when $I_q\neq 0$ string superconductivity is guaranteed. According to relation (\[kerIq\]), string superconductivity then occurs when the Witten index $\Delta$ is non-zero (SUSY unbroken). So when supersymmetry is not broken, the theory we described admits superconducting solutions; conversely, when the theory admits superconducting solutions (R-movers or L-movers) supersymmetry is good (unbroken). However, according to [@ganoulis], when $I_q=0$ it is not certain whether superconducting strings exist or not; indeed, there may be cases in which solutions exist while $I_q=0$. Does this mean that the number of R-movers equals the number of L-movers, or that there are no zero modes at all? It is found in [@ganoulis] that the index $I_q$ is a good criterion for dealing with these questions, and therefore we can decide whether supersymmetry is broken or not.
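The algebraic relations invoked above can be verified numerically for any matrix $\mathcal{D}$; a sketch with a random toy matrix standing in for the differential operator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
D = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))  # toy D
Z = np.zeros((n, n), dtype=complex)
I = np.eye(n, dtype=complex)

Q = np.block([[Z, D], [Z, Z]])
Qd = Q.conj().T
W = np.block([[I, Z], [Z, -I]])
H = Q @ Qd + Qd @ Q               # {Q, Q^dag}

algebra_ok = (
    np.allclose(Q @ Q, 0)                              # Q^2 = 0
    and np.allclose(Q @ W + W @ Q, 0)                  # {Q, W} = 0
    and np.allclose(W @ W, np.eye(2*n))                # W^2 = I
    and np.allclose(H @ W, W @ H)                      # [W, H] = 0
    and np.allclose(H, np.block([[D @ D.conj().T, Z],  # H = diag(DD^dag, D^dag D)
                                 [Z, D.conj().T @ D]]))
)
```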
Let us give an example at this point (we follow [@ganoulis]). Consider a superstring inspired model based on a subgroup $G$ of $E_6$, which has an additional $U(1)$ factor along with the Standard Model group, that is $G=SU(3)_c\times
SU(2)_L\times U(1)_Y\times U(1)_{L-R}$. The breaking of $U(1)_{L-R}$ gives rise to cosmic strings. These models contain singlets under $SU(3)_c\times SU(2)_L\times U(1)_Y$, which are responsible for the breaking of the additional $U(1)_{L-R}$. When one performs a non-trivial $L-R$ transformation, the $SU(3)_c\times SU(2)_L\times U(1)_Y$ singlets acquire a phase around the string. These fields are, $S_i=\langle S_i\rangle
e^{i\phi}$, $\tilde{S_i}=\langle \tilde{S_i}\rangle e^{i\phi}$ ($i=1,2,3$), $N=\langle N\rangle e^{i\phi}$. The field $\tilde{S_i}$ is the mirror of $S_i$; the field $N$ does not have a mirror. The model contains the Higgs doublets, $H=\langle
H\rangle e^{in\phi}$ and also $\tilde{H}=\langle \tilde{H}\rangle
e^{i(n+1)\phi}$, with $n$ an integer. If we examine the down-quark mass matrix, ignoring fermion states from incomplete multiplets, the mass matrix is, $$\label{naai}
\begin{array}{ccc}
&\begin{array}{ccccc}
g & & & & Q_D \\
& & & & \\
\end{array}\\
M=\begin{array}{c}
g_c \\
D_c \\
\end{array} & \left(\begin{array}{c|c}
\langle S_1\rangle & 0 \\
\hline
\langle N\rangle e^{i\phi} & \langle H\rangle e^{in\phi} \\
\end{array}\right)_{18\times 18} \\
\end{array}$$ The interactions that give rise to the above mass matrix are, $gg_cS$, $gD_cN$ and $Q_DD_cN$. The heavy quark states $g$, and $g_c$ mix with the $d$-quark states $D_c$ and $Q_D$. The fermion families are 3 and each flavor has 3 colors so each block has $9\times 9$ dimension. In the above when $n\neq -1$, then $I_q\neq
0$ (and actually $I_q=1+n$ according to $I_q=\sum_{\alpha
=1}^{n}q_{\alpha \alpha}$) the cosmic strings are superconducting. Thus in this case the scalar-fermion sector has Witten index $\Delta \neq 0$. Therefore the quantum mechanical supersymmetry is unbroken (good supersymmetry). However when $n=-1$, then $I_q=0$, nevertheless according to [@ganoulis], there are $9$ L-movers and $9$ R-movers. So someone could say that supersymmetry is good (unbroken), and so the positive parity states are equal to negative parity states.
[99]{}
E. Witten, Nucl. Phys. B249, 557 (1985)

J. P. Ostriker, C. Thompson, E. Witten, Phys. Lett. B180, 231 (1986); E. M. Chudnovsky, G. Field, D. Spergel, A. Vilenkin, Phys. Rev. D34, 944 (1986)

R. Jackiw, P. Rossi, Nucl. Phys. B190, 681 (1981)

E. J. Weinberg, Phys. Rev. D24, 2669 (1981)

N. Ganoulis, G. Lazarides, Phys. Rev. D38, 547 (1988); Nucl. Phys. B316, 443 (1989)

G. Junker, “Supersymmetric Methods in Quantum and Statistical Physics”, Springer, 1996; M. Combescure, F. Gieres, M. Kibler, J. Phys. A: Math. Gen. 37, 10385 (2004)
[^1]: [email protected]
---
abstract: 'This contribution to the published Proceedings records the opening talk I presented on the first morning of the 2005 International Linear Collider Workshop in Snowmass, CO, August 14 - 27, 2005. It includes a summary of the motivation for the workshop, the scientific goals and charges for the working groups, the initial plans of the accelerator, detector, and physics groups, and the activities of the communication, education, and outreach group. This document also describes organizational aspects of the meeting, particularly the scientific committee structure, the self-organization of the working groups, the composition of the indispensable secretariat and computer support teams, and the sources of funding support. The report serves as an introduction to the proceedings whose individual papers and summary documents must be consulted for an appreciation of the accomplishments and progress made at Snowmass in 2005 toward the realization of an International Linear Collider.'
author:
- 'Edmond L. Berger'
title: |
\
Overview and Charge - Snowmass Workshop 2005
---
INTRODUCTION {#sec:intro}
============
Remarkable strides were taken in 2004 and 2005 toward the achievement of an international electron-positron linear collider having an initial center-of-mass energy of 500 GeV and the capability of extension to higher energy. Particularly significant are the choice of superconducting technology by the International Technology Recommendation Panel in August 2004 [@techchoice]; the start of the Global Design Effort (GDE) led by Barry Barish; and the articulation of the essential and mutually supportive relationship of the Large Hadron Collider (LHC) and International Linear Collider (ILC) physics programs in the 2005 HEPAP subpanel document [@hepapsubpanel]. That great challenges lie ahead is an obvious understatement. A detailed design is required for the accelerator, along with full detector concepts, ever sharper physics arguments, and requisite funding.
At the Linear Collider Workshop in Victoria in July/August 2004, consensus developed that the summer of 2005 would be an opportune time for the full community of physicists and engineers to gather for an extended period to work together to advance the design of the detectors and to understand better the scientific case for a linear collider. A proposal was then formulated in the American Linear Collider Physics Group (ALCPG), in consultation with international partners, to host a fully international detectors and physics workshop of duration long enough to facilitate substantial progress in addressing many of the challenges. Uriel Nauenberg and I agreed to co-chair the Organizing Committee. In the fall of 2004, the ILC Steering Committee decided to hold the Second ILC Accelerator Workshop in conjunction with the Physics and Detector Workshop. This joint workshop was designed expressly with international participation in all the advisory committees and in the scientific program committees that organized the accelerator, detector, and physics activities.
The Local Organizing Committee (LOC) selected Snowmass, Colorado, as the site of the workshop, formulated a set of specific goals, and wrote a four-part charge for the accelerator, detectors, physics, and education and outreach components of the workshop. We submitted and defended successful funding proposals to the US Department of Energy and the National Science Foundation. We made other funding appeals, primarily to national laboratories. Scientific committees were organized. A web site, http://alcpg2005.colorado.edu/, was developed and updated regularly. A computer support team and a secretariat were assembled. The scientific program was developed, special events were organized, wireless computer access was installed, meeting rooms were obtained and assigned, along with myriad other tasks.
It is highly gratifying that over 670 enthusiastic participants are assembled in Snowmass, eager to contribute to the exciting endeavor that the ILC represents.
CHARGE {#sec:charge}
======
There are four inter-related aspects of the Charge for the workshop.
The primary ILC [**accelerator**]{} goal is to define an ILC Baseline Configuration Document (BCD), to be completed by the end of 2005, and a research and development (R&D) plan. The BCD will be the basis for the design and costing effort as well as for developing the supporting R&D program. The accelerator groups at Snowmass will work toward agreement on the collider design, identify outstanding issues and develop paths toward their resolution, start documentation of the BCD, and identify critical R&D topics and time scales.
The [**detector**]{} groups will develop design studies with a firm understanding of the technical details and physics performance of the candidate detector concepts, the required future R&D, test-beam plans, machine-detector interface issues, beam-line instrumentation, cost estimates, and many related topics.
On the [**physics**]{} front, the goal is to advance and sharpen ILC physics studies, including precise calculations, synergy with the LHC, connections to cosmology and astrophysics, and, very importantly, relationships to the detector design studies.
A fourth aspect of the charge has two components: to [**facilitate and strengthen the broad participation**]{} of the scientific and engineering communities in ILC physics, detectors, and accelerators, and to [**engage the greater public**]{} in the excitement of this work. The first component relates to the broad community of high energy physicists and engineers who may not have participated previously in linear collider activities or workshops. The second addresses our fellow scientists in fields other than particle physics as well as members of the general public.
PLAN OF SCIENTIFIC ACTIVITIES {#sec:plan}
=============================
Accelerator
-----------
The Working Groups established for the First ILC workshop at KEK in 2004 form the basis of the organizing units through Snowmass: Low-Emittance Transport and Beam Dynamics, Linac Design, Sources, Damping Rings, Beam Delivery, Superconducting Cavities and Couplers, Communications and Outreach. In addition, six Global Groups were formed to work toward a realistic reference design: Parameters, Controls and Instrumentation, Operations and Availability, Civil and Siting, Cost and Engineering, and Options.
The accelerator activities kick off on the first morning with a presentation by Barry Barish on the Global Design Effort. The ILC working groups present introductory overviews in plenary sessions during the first afternoon. Lunchtime accelerator tutorials [**– an accelerator school –**]{} designed for experimenters and theorists begin on the second day of the workshop and continue through the next-to-last day.
The very ambitious schedule of accelerator activities leads to plenary ILC Global Group summaries and working group summaries by the end of the first week of this two-week Snowmass workshop.
Detectors
---------
Three Detector Concept Studies are based on complementary philosophies: the Silicon Detector Concept (SiD), the Large Detector Concept (LDC, with Time Projection Chamber (TPC) tracking), and GLD (the largest, also with a TPC). These concepts are introduced in plenary sessions on the first afternoon. In the SiD approach, the calorimeter consists of a tungsten absorber and silicon detectors. A relatively small inner calorimeter radius of 1.3 m is chosen. Shower separation and momentum resolution are achieved with a 5 Tesla (T) magnetic field and silicon detectors for charged particle tracking. LDC, derived from the detector in the TESLA TDR, uses a somewhat larger radius of 1.7 m, a silicon-tungsten calorimeter, and a large TPC for charged particle tracking. GLD chooses a radius of 2.1 m, a calorimeter with coarser segmentation, and gaseous tracking similar to LDC. The Snowmass workshop is a major opportunity for the collaborations to draft their Concept detector outline documents before the international 2006 Linear Collider Workshop LCWS06 in Bangalore in March.
Detector capabilities are challenged by the precise physics planned at the ILC. The environment is relatively clean, but the detector performance must be a factor two to ten better than at LEP and the SLAC SLD. Tracking, vertexing, calorimetry, software algorithms, and other aspects of the detectors are all on the agenda. Among the leading questions and ambitions for the detector working groups are: R&D requirements; particle flow calorimetry for which a special session is organized on the second Monday; vertex detection at small radius; machine-detector interface issues (MDI) for which a joint MDI/accelerator/Concepts session is organized; agreement on feasible intersection-region (IR) parameters; the question of possibly two high energy IRs and detectors, discussed in a Town Meeting on Thursday afternoon during the first week; and the desirability of various options such as $e^+$ polarization, $\gamma \gamma$ collisions, $e \gamma$ collisions, and $e^- e^-$ collisions.
Physics
-------
The Physics activities begin on the first morning with presentations by Joe Lykken and Peter Zerwas on physics at the ILC and the LHC. The ILC offers the capability to control the collision energy, polarize one or both beams, and measure cleanly the particles produced. It will allow us to zero in on crucial features of the physics landscape, a rich world of Higgs bosons, supersymmetric particles, dark matter, and extra spatial dimensions. Four physics working groups are assembled under the headings Higgs, Supersymmetry (SUSY), Beyond the Standard Model (BSM), and top quark plus quantum chromodynamics (Top/QCD), along with three cross-cutting Special Topics groups: Cosmology Connections, LHC/ILC Connections, and precise high-order calculations (Loopfest). Plenary sessions for the Physics working groups take place on the first Tuesday at which the conveners outline the activities planned for each group. A two-day Loopfest Conference takes place Thursday and Friday of the first week, and a day is devoted to Cosmology and the ILC on the second Wednesday, August 24.
A partial menu of physics topics includes Physics benchmarks, with a plenary session on the first Tuesday afternoon; the Higgs mechanism – lessons about electroweak symmetry breaking at the ILC that we can learn no other way; SUSY – determination of masses and other parameters at the focus point and for other Snowmass points and slope scenarios from a combination of LHC and ILC data; extra-dimensions and strings; and precise high-order calculations to match the expected high precision of the ILC data. Permeating all these discussions is the paramount question of what ILC detector capabilities are needed. Detector benchmarking adds an important dimension and emphasis to the physics discussions at Snowmass.
There is a tremendous amount to accomplish in the two weeks of this workshop leading to the physics and detector summary talks at the end of the second week.
The individual papers and summary documents in these Proceedings must be consulted for an appreciation of the accomplishments of the workshop and progress made at Snowmass in 2005. A summary written for a more general audience is published in the December 2005 issue of the [*CERN Courier*]{} [@Courier].
COMMUNICATIONS, EDUCATION, and OUTREACH
=======================================
One of our important responsibilities is to engage our fellow citizens in the excitement of particle physics. Some of what we do may be mysterious, but none of it is a secret. Most of those with whom we speak are more curious and genuinely interested than we might guess.
A [**Dark Matter Cafe and Quantum Universe Exhibit**]{} will be set up on the Snowmass Mall during our first weekend here, with volunteers among us enjoying conversations with local residents and tourists. A [**Workshop on Dark Matter and Cosmic Ray Showers**]{} is organized for high school teachers on Friday of this first week, followed by a display of working cosmic ray shower detectors on the Aspen Mall on Saturday, staffed by members of the Education and Outreach working group and other volunteers from among the participants. A [**Physics Fiesta**]{} takes place on Sunday at the Roaring Fork High School in Carbondale, and volunteers, especially those with Spanish language skills, will meet with students, family members, and teachers.
Two Public Lectures are scheduled at 6:30 PM, one in Aspen on August 17 by Young-Kee Kim entitled [*$E = mc^2$: Opening Windows on the World*]{}, and the second in Snowmass on August 22 by Hitoshi Murayama, [*Seeing the Invisible – Challenge to 21st Century Particle Physics and Cosmology*]{}.
Complementing these activities organized by the Education and Outreach Committee of the Physics and Detectors component of this overall joint workshop, the ILC Communications group is hosting a series of workshops and invites all participants. The new website http://www.linearcollider.org is being launched at Snowmass, starting with daily coverage of the workshop, along with ILC NewsLine, http://www.linearcollider.org/newsline/, a weekly newsletter free to all subscribers.
OTHER SPECIAL EVENTS
====================
The importance of involving industry in the design and execution of the accelerator and detectors is recognized in an ILC Industry Forum on Tuesday, August 16 at 7:30PM.
The ILC International Steering Committee (ILCSC) convenes in an all day meeting on Tuesday August 23.
An evening Forum on Tuesday the 23rd addresses [**Challenges for Realizing the ILC: Funding, Regionalism, and International Collaboration**]{}. Eight distinguished speakers representing committees and funding agencies with direct responsibility for the ILC share their wisdom and perspectives: Jonathan Dorfan (ICFA Chairman), Fred Gilman (US-HEPAP), Pat Looney (formerly of OSTP), Robin Staffin (US-DOE), Michael Turner (US-NSF), Shin-ichi Kurokawa (ACFA Chair and incoming ILCSC Chair), Roberto Petronzio (Funding Agencies for the Linear Collider, FALC), and Albrecht Wagner (incoming ICFA Chair). Ample time is set aside for animated questions and comments from members of the audience.
Workshop Dinners take place on Thursday of the first week and Wednesday of the second week. Tickets are available for purchase.
FUNDING SUPPORT
===============
We acknowledge and are most appreciative of the generous grants from the US Department of Energy (DOE) and the US National Science Foundation (NSF). The DOE funds underwrite workshop expenses, such as meeting rooms and secretariat and computing expenses. The NSF funds provide subsidies for the local expenses of young participants and for the education and outreach activities.
We received indispensable financial contributions from Argonne National Laboratory, Cornell Laboratory for Elementary Particle Physics, Brookhaven National Laboratory, Lawrence Berkeley National Laboratory, and Thomas Jefferson Laboratory to assist with workshop expenses. We are most grateful for this support and for grants from the Universities Research Association (URA) and from Stanford University that subsidize the opening reception on Sunday, August 14, and the two workshop dinners.
Support for participants from Europe was received from DESY, PPARC (UK), IN2P3 (France), and CERN. We acknowledge both the funds provided and the instrumental assistance in obtaining and administering these funds from Rolf-Dieter Heuer, David Miller, Francois Richard, and Jos Engelen.
The workshop could not have taken place without the enormous in-kind contributions from Fermilab (leadership of the secretariat, members of the secretariat and computer support teams, and equipment), SLAC (proceedings, and members of the secretariat and computer support team), and Cornell (leadership of the computer support team).
HEAVY LIFTING
=============
Local Organizing Committee
--------------------------
Recognition of the tremendous contributions of talent and time from many individuals begins with the members of the Local Organizing Committee. Foremost among these is my co-Chair Uriel Nauenberg, along with his associates Valerie Melendez and webmaster Will Ruddick from the University of Colorado. ALCPG Co-Chairs Jim Brau (Oregon) and Mark Oreglia (Chicago) along with accelerator community representatives Shekhar Mishra (Fermilab) and Nan Phinney (SLAC) round out the LOC. I am indebted to all my colleagues on the LOC for their constant dedication and many hours of effort on behalf of this workshop during the past year. And we have much left to do!
Executive Committee
-------------------
Members of the international executive committee provided timely and valued assistance in many respects. They are Barry Barish (Caltech), Edmond Berger (Argonne, co-Chair), James Brau (Oregon), Sally Dawson (Brookhaven), Rolf Heuer (DESY), David Miller (UC London), Shekhar Mishra (Fermilab), Uriel Nauenberg (Colorado, co-Chair), Mark Oreglia (Chicago), Hwanbae Park (Kyungpook National University), Michael Peskin (SLAC), Tor Raubenheimer (SLAC), and Hitoshi Yamamoto (KEK).
Maury Tigner of Cornell and Pier Oddone of Fermilab were generous advisors, sources of good counsel and [*savoir faire*]{}.
Working Group Organizing Committees
-----------------------------------
Four working committees were appointed by the Local Organizing Committee and asked to define the topics of the working groups and to secure conveners of these working groups. The members of these four committees for the accelerator, detector, physics, and education and outreach efforts are:
### Accelerator
David Burke (SLAC), Jean-Pierre Delahaye (CERN), Gerald Dugan (Cornell), Hitoshi Hayano (KEK), Steve Holmes (Fermilab), Olivier Napoly (Saclay), Kenji Saito (KEK), Nick Walker (DESY), and Kaoru Yokoya (KEK).
### Detectors
Philip Bambade (Orsay), Ties Behnke (DESY), Tiziano Camporesi (CERN), John Jaros (SLAC), Dean Karlen (Victoria), Akiya Miyamoto (KEK), Mark Oreglia (Chicago, Chair), Daniel Peterson (Cornell), Harry Weerts (Michigan State/Argonne), and Satoru Yamashita (Tokyo).
### Physics
Sally Dawson (Brookhaven, vice-Chair), Jonathan Feng (UC Irvine), Rohini Godbole (Bangalore), Norman Graf (SLAC), Howard Haber (UC Santa Cruz), Kaoru Hagiwara (KEK), Joseph Lykken (Fermilab), Michael Peskin (SLAC, Chair), W. James Stirling (Durham), Rick Van Kooten (Indiana), and Peter Zerwas (DESY).
### Education and Outreach
Marjorie Bardeen (Fermilab), Neil Calder (SLAC), Ulrich Heintz (Boston University), Judy Jackson (Fermilab), Hitoshi Murayama (UC Berkeley, Chair), and Gregory Snow (Nebraska).
Working Groups
--------------
In a sense, the two-dozen or so working groups are self-organizing units. Individuals choose whether to participate, and the groups define how they will best use their time and talents at Snowmass to achieve their goals. Nevertheless, the international conveners of the working groups, with representation from all regions, are the unsung heroes and heroines. They must cajole and motivate their members, keeping them focused. I apologize for not listing all of the conveners by name. Their identities and the agendas of the working group programs can be found from links on the workshop program page: http://alcpg2005.colorado.edu:8080/alcpg2005/program/.
The number of different working groups requires 22 meeting rooms for break-out sessions. A few of these are obtained by partitioning the large ballroom used for the plenary sessions on the first day and for the summary talks on the final two days. Many of the other meeting rooms are distributed among the different properties at Snowmass. All are within walking distance of the central Conference Center. A spreadsheet on the workshop web page lists room assignments, but updates, additions, changes will be made as needed. Mark Oreglia did a heroic job juggling a complex mix of competing room requirements! Rooms are equipped with LCD projectors and screens; overhead projectors are also available by special request. Working group conveners are asked to provide laptop computers for projecting presentations, with file transfer to be done via USB thumb drives.
SECRETARIAT, COMPUTER TEAM, and SNOWMASS PERSONNEL
==================================================
Snowmass Personnel
------------------
The particle physics community in the United States has run many extended summer workshops in Snowmass over the past two decades. We return because we find the concentrated facilities we need for a large gathering with many breakout rooms, and a group of local representatives who go the extra mile to provide what a group of physicists requires. I wish to acknowledge the forthcoming assistance received from Jim Pilcher and his staff at the Silvertree Hotel, Jim O’Leary of the Snowmass Village Resort Association, and Maidy Reside of Top of the Village.
Secretariat
-----------
The local staff at Snowmass assisted admirably with lodging reservations for participants, but the innumerable aspects of registrations, negotiations with vendors, and daily logistics have fallen on the shoulders of the dedicated and most capable Secretariat headed by Cynthia Sazama of Fermilab. Cynthia was an enthusiastic member of the organization from the first moment she was approached and asked to participate. Her team is made up of Maura Chatwell, Albe Larsen, and Naomi Nagahashi (SLAC) and Carol Angarola, Jody Federwitz, and Suzanne Weber (Fermilab). Their main office is at the Top of the Village (ToV) in condo unit Slope 210, with weekday hours of operation 8:00 AM - 6:00 PM, and Saturday hours 8:00 AM - 1 PM. Registration takes place in the Conference Center on the first day and in ToV Slope 210 thereafter.
Computer Facilities and Support
-------------------------------
Chips were down many months ago when we desperately needed a person to define the configuration of the workshop computer facility and to make it work. Maury Tigner stepped up as he has done so many times in his distinguished career. Maury offered the services of Ray Helmke, Director of the Computing Facility at Cornell’s Laboratory for Elementary Particle Physics. The computer support team assembled under Ray’s superb leadership includes his deputy John Urish (Fermilab), David Tang and Quinton Healy (Fermilab), Mike DiSalvo and Ken Zhou (SLAC), Bryan Abshier (LBL), and Andrew Hahn, Joseph Proulx, Martin Nagel, and Jason Gray (U Colorado).
Part of the networking and computing capacity is satisfied by an existing T1 line that serves the Silvertree Hotel and the Snowmass Conference Center, but a second dedicated line was brought into the Conference Center specifically for the greater requirements of the workshop.
The computer setup for this workshop is principally a laptop based facility, relying on equipment brought by individual participants, most functioning in wireless mode. There are also some hardwired connections and printers in the computer center rooms located in three condos at Top of the Village (ToV), Trails 109, 108, and 105, where 18 Windows PC’s on loan from Fermilab may also be found. The computer center is staffed daily during the hours 8 AM to 10 PM. Wireless access in the computer center and in condos in the ToV complex relies on a commercial cable system recently installed by ToV.
The computer support team set up the facility in the three condos in ToV during the week before the workshop and tested it. They prepared a descriptive document, available on the workshop web site.
Our workshop and the international ILC effort owe a debt of gratitude to all members of the Computer Support Team and of the Secretariat.
PROCEEDINGS
===========
Electronic files of all presentations at the workshop are linked on the web pages. Proceedings will appear on the SLAC Electronic Conference Proceedings Archive (eConf), a permanent repository for conference proceedings, and a CD will be produced of written contributions. Norman Graf (SLAC) is leading the editorial effort. The Proceedings link on the front page of the web site may be consulted for instructions and page limits.
L’ENVOI
=======
The organization of this workshop has benefited from the dedication and talents of scientists, engineers, and support personnel from many institutions and all regions of the world. I reviewed the motivation for the workshop, the scientific goals and charge to the working groups, the initial plans of the accelerator, detector, and physics working groups, and the activities of the education and outreach working group. I also summarized organizational aspects of the meeting, particularly the scientific committee structure, the self-organization of the working groups, the composition of the indispensable secretariat and computer support teams, and the sources of funding support.
This 2005 Snowmass Workshop is now in your capable hands to:
- design the accelerator,
- flesh out the detectors,
- hone the physics reasoning, and
- engage your fellow citizens
all within a very ambitious time scale.
Work in the Argonne High Energy Physics Division is supported by the US Department of Energy, Division of High Energy Physics, under contract W-31-109-ENG-38.
[9]{}
, Jean-Eudes Augustin, Jonathan Bagger, Barry Barish (chair), Giorgio Bellettini, Paul Grannis, Norbert Holtkamp, George Kalmus, G. S. Lee, Akira Masaike, Katsunobu Oide, Volker Soergel, and Hirotaka Sugawara, August 19, 2004, $\rm{http://lcdev.kek.jp/LCoffice/ITRPexecsum.pdf}$.
, Report of the 2005 DOE/NSF High Energy Physics Advisory Panel, Joseph Lykken and James Siegrist, co-Chairs, July 2005, $\rm{http://www.linearcollider.org/pdf/Report{\_}7.26.pdf}$.
, E. L. Berger [*et al*]{}, CERN Courier [**45**]{} 10 pp 24 - 27, December 2005.
---
abstract: 'In this paper we present Combined Array for Research in Millimeter-wave Astronomy (CARMA) 3.5 mm observations and SubMillimeter Array (SMA) 870 $\mu$m observations toward the high-mass star-forming region [IRAS 18162-2048]{}, the core of the HH 80/81/80N system. Molecular emission from HCN, HCO$^+$ and SiO traces two molecular outflows (the so-called Northeast and Northwest outflows). These outflows have their origin in a region close to the position of MM2, a millimeter source known to harbor a couple of protostars. We estimate for the first time the physical characteristics of these molecular outflows, which are similar to those of $10^3-5\times10^3$ [$L_{\sun}$]{} protostars, suggesting that MM2 harbors high-mass protostars. High-angular resolution CO observations show an additional outflow due southeast. We identify for the first time its driving source, MM2(E), and see evidence of precession. All three outflows have a monopolar appearance, but we link the NW and SE lobes, explaining their asymmetric shape as a consequence of possible deflection.'
author:
- 'M. Fernández-López'
- 'J. M. Girart'
- 'S. Curiel'
- 'L. A. Zapata'
- 'J. P. Fonfría'
- 'K. Qiu'
bibliography:
- 'biblio.bib'
title: 'Multiple monopolar outflows driven by massive protostars in IRAS 18162-2048'
---
Introduction
============
High-mass stars exert a huge influence on the interstellar medium, ejecting powerful winds and large amounts of ionizing photons, or exploding as supernovae. Despite their importance in injecting energy and momentum into the gas of galaxies, how they form is still not well understood. High-mass protostars are typically more than 1 kpc from Earth (@2005Cesaroni). They are deeply embedded inside dense molecular clouds and often accompanied by close members of clusters. Hence, detailed studies have only been possible in a few cases. From a handful of studies based on high-angular resolution observations, we know that early B-type protostars possess accretion disks (e.g., @2001Shepherd [@2005Cesaroni; @2005Patel; @2010GalvanMadrid; @2010Kraus; @2011FernandezLopez1]), while for O-type protostars there is no unambiguous evidence for accretion disks yet, but vast toroids of dust and molecular gas have been seen rotating around them, showing signs of infalling gas (e.g., @2005Sollins1 [@2005Sollins2; @2006Beltran; @2009Zapata; @2012Qiu; @2012JimenezSerra; @2013Palau]). On the other hand, molecular outflow studies display diverse scenarios in the few cases where both high-angular resolution and relatively nearby targets were available. For example, the two massive protostars nearest to the Earth (Orion BN/KL and Cep A HW2) have molecular outflows that are very difficult to interpret. It has been suggested that Orion BN/KL shows an explosion-like isotropic ejection of molecular material [@2009Zapata], while Cep A HW2 drives a very fast jet (@2006Curiel), which is apparently pulsing and precessing due to the gravitational interaction of a small cluster of protostars [@2009Cunningham; @2013Zapata]. These two examples show how analyzing the outflow activity of this kind of region can provide important insight into its nature.
They also show us that although accretion disks may be found ubiquitously around all kinds of protostars, a complete understanding of the real nature of the massive star-formation process must almost inevitably include the interaction between close-by protostars. This introduces much more complexity to the observations, not only because of the possible interaction between multiple outflows, but also because of the difficulty of resolving the multiple systems with most telescopes.
Table \[t\_observ\]: Observational parameters of the CARMA and SMA data.

| Array | Band | Transition | Frequency (GHz) | Bandwidth (km s$^{-1}$) | $\Delta v$ (km s$^{-1}$) | $E_u/k$ (K) | Beam ($\arcsec$) | P.A. ($\degr$) | rms (mJy beam$^{-1}$) |
|-------|------|------------|-----------------|-------------------------|--------------------------|-------------|------------------|----------------|-----------------------|
| CARMA | 3 | Continuum | 84.80200 | | | | $9.2\times5.8$ | 18 | 2 |
| CARMA | 3 | SiO (2-1) | 86.84696 | 400 | 5.2 | 6.3 | $8.6\times6.0$ | 7 | 20 |
| CARMA | 2 | HCO$^+$ (1-0) | 89.18852 | 400 | 5.2 | 4.3 | $9.0\times5.6$ | 14 | 30 |
| CARMA | 1 | HCN (1-0) | 88.63160 | 400 | 5.2 | 4.4 | $9.3\times5.9$ | 14 | 30 |
| CARMA | 1 | HC$_3$N (10-9) | 90.97902 | 400 | 5.2 | 24.0 | $8.9\times5.8$ | 17 | 35 |
| SMA | 2 | CO (3-2) | 345.79599 | 767 | 1.4 | 33.2 | $0.45\times0.34$ | 31 | 77 |
| SMA | 1 | CO (3-2) | 345.79599 | 357 | 1.4 | 33.2 | $3.0\times1.6$ | -6 | 117 |
[IRAS 18162-2048]{}, also known as the GGD27 nebula, is associated with the unique HH 80/81/80N radio jet. It is among the nearest high-mass protostars (1.7 kpc). The large radio jet ends at HH 80/81 in the south [@1993Marti] and at HH 80N in the north [@2004Girart], wiggling in a precessing motion across a projected distance of 7.5 pc [@1998Heathcote]. Recently, [@2012Masque] have proposed that this radio jet could be larger, extending to the north, with a total length of 18.4 pc. Linear polarization has been detected for the first time in radio emission from this jet, indicating the presence of magnetic fields in a protostellar jet [@2010CarrascoGonzalez]. The jet is apparently launched close to the position of a young and massive protostar, surrounded by a very massive disk of dust and gas rotating around it [@2011FernandezLopez1; @2011FernandezLopez2; @2012CarrascoGonzalez]. In spite of this apparently easy-to-interpret scenario, the region contains other massive protostars [@2009Qiu; @2011FernandezLopez1] which are driving other high-velocity outflows. (Sub)mm observations of the central part of the radio jet revealed two main sources, MM1 and MM2, separated by about 7$\arcsec$ [@2003Gomez], that are probably in different evolutionary stages [@2011FernandezLopez1]. MM1 is at the origin of the thermal radio jet, while MM2 is spatially coincident with a water maser and has been resolved into a possible massive Class 0 protostar and a second, even younger source [@2011FernandezLopez2]. MM2 is also associated with a young monopolar southeast CO outflow [@2009Qiu]. There is evidence of a possible third source, MC, which is observed as a compact molecular core, detected only through several millimeter molecular lines. At present, MC is interpreted as a hot core, but the observed molecular line emission could also be explained by shocks originating in the interaction between the outflowing gas from MM2 and a molecular core or clump [@2011FernandezLopez2].
In this paper we focus our attention on the CO(3-2), SiO(2-1), HCO$^+$(1-0) and HCN(1-0) emission toward [IRAS 18162-2048]{}, aiming at detecting and characterizing outflows from the massive protostars inside the [IRAS 18162-2048]{}core and their interaction with the molecular cloud. Millimeter continuum and line observations of [IRAS 18162-2048]{}were made with CARMA, while submillimeter continuum and line observations were made with the SMA. In Section 2, we describe the observations undertaken in this study. In Section 3 we present the main results and in Section 4 we carry out some analysis of the data. Section 5 is dedicated to discussing the data, and in Section 6 we give the main conclusions of this study.
Observations
============
CARMA
-----
[IRAS 18162-2048]{}observations used the CARMA 23-element mode: six 10.4 m antennas, nine 6.1 m antennas, and eight 3.5 m antennas. This CARMA23 observing mode included up to 253 baselines, providing extra short-spacing baselines and thus minimizing the missing flux when observing extended objects.
The 3.5 mm (84.8 GHz) observations were obtained in 2011 October 9, 11 and 2011 November 11. The weather was good for 3 mm observations in all three nights, with $\tau_{230GHz}$ fluctuating between 0.2 and 0.6. The system temperature varied between 200 and 400 K. During those epochs, CARMA was in its D configuration and the baselines ranged from 5 to 148 m. The present observations are thus sensitive to structures between $6\arcsec$ and $70\arcsec$, approximately[^1]. We made a hexagonal nineteen-pointing Nyquist-sampled mosaic with the central tile pointing at R.A.(J2000.0)=$18^h 19^m 12\fs430$ and DEC(J2000.0)=$-20\degr 47\arcmin 23\farcs80$ (see Fig. \[contsen\]). This kind of mosaic pattern is used to reach a uniform rms noise in the whole area covered. The field of view of the mosaic thus has a radius of about $70\arcsec$. Beyond this radius the rms noise increases over 20%.
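As a rough check on the quoted range of recoverable spatial scales, the largest and smallest structures can be estimated from the baseline range alone. This is a minimal sketch using the factor-of-2 flux-filtering correction described in the footnote; the numerical constants are the standard radian-to-arcsecond conversion and the values quoted in the text.

```python
# Rough CARMA D-configuration spatial-scale check.
# Values taken from the text: lambda = 3.5 mm, baselines 5-148 m.
wavelength_mm = 3.5   # observing wavelength (mm)
b_min_m = 5.0         # shortest baseline (m)
b_max_m = 148.0       # longest baseline (m)

# Largest well-recovered scale, including the ~50% flux-filtering
# correction: Theta ~ 100" * lambda(mm) / B_min(m)
theta_max = 100.0 * wavelength_mm / b_min_m            # arcsec

# Smallest resolvable scale ~ lambda / B_max, converted to arcsec
theta_min = 206265.0 * (wavelength_mm * 1e-3) / b_max_m

print(theta_min, theta_max)  # roughly 5" and 70", matching the quoted 6"-70"
```

The synthesized-beam estimate comes out near $5\arcsec$, consistent with the approximate $6\arcsec$ quoted once weighting is taken into account.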
At the time of the observations, the correlator of the CARMA 23-element array provided 4 separate spectral bands of variable width. One spectral band was set with a bandwidth of 244 MHz, used for the 3.5 mm continuum emission. The other three bands were set with a bandwidth of 125 MHz, providing 80 channels with a spectral resolution of 1.56 MHz ($\sim5.2$[ km s$^{-1}$]{}). One of these three bands was always tuned at the SiO (2-1) frequency. The other two bands were tuned at two of the following lines: HCO$^+$ (1-0), and HCN (1-0) and HC$_3$N (10-9).
The calibration scheme was the same during all the nights, with an on-source time of about 2 hours each. The gain calibrator was J1733-130 (with a measured flux density of 4.1$\pm$0.4 Jy), which we also used as the bandpass calibrator. Observations of MWC349, with an adopted flux of 1.245 Jy provided the absolute scale for the flux calibration. The uncertainty in the flux scale was 10% between the three observations, while the absolute flux error for CARMA is estimated to be 15-20%. In the remainder of the paper we will only consider statistical uncertainties.
The data were edited, calibrated, imaged and analyzed using the MIRIAD package (@1995Sault) in a standard way. GILDAS[^2] was also used for imaging. Continuum emission was built in the uv–plane from the line-free channels and was imaged using a uniform weighting to improve the angular resolution of the data. For the line emission, the continuum was removed from the uv–plane. We then imaged the continuum-free emission using a natural weighting to improve the signal-to-noise ratio. The continuum and the line images have rms noises of about 2[ mJy beam$^{-1}$]{}and 20-35[ mJy beam$^{-1}$]{}per channel, respectively.
SMA
---
The CO(3-2) maps at subarcsecond angular resolution presented in this paper were obtained from SMA observations taken on 2011 July 18 and October 3 in the extended and very extended configurations, respectively. The calibration, reduction and imaging procedures used in these observations are described in Girart et al. (2013, in preparation). Table 1 shows the imaging parameters for this data set.
The CO(3-2) maps at an angular resolution of $\simeq 2\arcsec$ presented in this paper were obtained from the public SMA data archive. The observations were done in the compact configuration on 2010 April 9. We downloaded the calibrated data and used the spectral windows around the CO(3-2) to produce integrated intensity maps.
Results
=======
Continuum emission at 3.5 mm
----------------------------
Using CARMA we detected a resolved 3.5 mm source composed of two components (Fig. \[cont\]). The spatial distribution of the continuum emission resembles very well that of the continuum at 1.4 mm shown by @2011FernandezLopez1. We applied a two-component Gaussian model to fit the continuum data, which left no residuals over a 3-$\sigma$ level. From this fit, the main southwestern component is spatially coincident with the exciting source of the thermal jet, MM1, and has an integrated flux of 94[ mJy beam$^{-1}$]{}. The component to the northeast coincides with the position of MM2 and has an integrated flux of 36[ mJy beam$^{-1}$]{}. This is the first time that a 3 mm flux measurement of MM2 has been obtained, and it is in good agreement with the flux density expected at this wavelength from the estimated spectral index for MM2, using previous millimeter and submillimeter observations (@2011FernandezLopez1). Although no continuum emission from the molecular core (MC) detected by [@2009Qiu] was needed to explain the whole continuum emission, the angular resolution of the present CARMA observations does not allow us to rule out the possibility that the molecular core contributes to the observed dust emission.
The total flux measured on the field of view is $145\pm15$[ mJy beam$^{-1}$]{}, which is also consistent with the total flux reported by @2003Gomez. However, they proposed that all the emission is associated with MM1, while we find that a large fraction of the emission is associated with MM1 and the rest of the emission comes from MM2.
Molecular emission
------------------
### Detection of outflows
CARMA observations of the classical outflow tracers SiO(2-1), HCN(1-0) and HCO$^+$(1-0) were aimed at studying the emission from the outflow associated with MM1 and its powerful radio jet (@2001Ridge [@2004Benedettini]). We do not detect any emission associated with this radio jet in the velocity range (-211,+189)[ km s$^{-1}$]{}covered by the observations, although part of the HCN and HCO$^+$ low-velocity emission could be associated with material encasing the collimated radio jet. In what follows we adopt $v_{lsr}=11.8$[ km s$^{-1}$]{}as the cloud velocity (@2011FernandezLopez2). The CARMA observations do not show molecular emission from the southeast monopolar outflow associated with MM2 and previously reported by [@2009Qiu], but they show two other monopolar outflows, possibly originating from MM2 and/or MC (Fig. \[moms0\]).
The SiO(2-1) emission arises only at redshifted velocities (from +4 to +38[ km s$^{-1}$]{}with respect to the cloud velocity[^3]) and from two different spots: a well collimated lobe running in the northeast direction, east of MM2, and a poorly collimated lobe running in the northwest direction, north of MM2 and apparently coinciding with the infrared reflection nebula (e.g., @1992Aspin) seen in grey scale in Fig. \[sio\] (which only shows the channels with the main emission, from -12 to +26[ km s$^{-1}$]{}).
Fig. \[hcn\] shows the HCN(1-0) velocity cube. The spectral resolution of these observations does not resolve the hyperfine structure of this line, since the two strongest transitions are within 5[ km s$^{-1}$]{}. The redshifted HCN(1-0) emission mainly coincides with the same two outflows seen in SiO(2-1), but with lower velocities (near the cloud velocity), and the emission is more spread out, nearly matching the spatial distribution of the infrared reflection nebula and following the radio jet path. At these velocities, the HCN(1-0) also shows elongated emission due southwest (see also Fig. \[moms0\]).
HCO$^+$(1-0) redshifted emission is weaker, but it also coincides with the two SiO(2-1) outflows (Fig. \[hco\]). The low velocity emission from this molecular line also appears extended and mostly associated with the infrared reflection nebula and possibly the radio jet path.
In addition to the CARMA observations, we present here new high-angular resolution SMA observations showing CO(3-2) emission from the southeast outflow. Fig. \[co32\] shows the CO(3-2) SMA images in three panels corresponding to three different velocity regimes: low (LV), medium (MV) and high velocity (HV). The southeast outflow appears in all of them, comprising blueshifted and redshifted emission at LV and MV, and mostly blueshifted emission at HV. The HV panel, the map with the highest angular resolution, clearly shows that the origin of the southeast outflow is MM2(E). Inspecting that image, it is evident that this outflow is wiggling, being ejected due east at the origin and then turning southeast (a change of about $30\degr$ in position angle). Such behavior can be produced by precessing motion of the source (e.g., @2009Raga). The middle panel shows a possible additional east turn in the blueshifted emission far from MM2(E). Finally, the LV and MV maps also show the redshifted northwest lobe and hints of the northeast lobe, but the angular resolution of these maps does not allow identification of their origin.
From now on we will call the two high velocity SiO lobes, NE (Northeast, P.A.$=71\degr$) and NW (Northwest, P.A.$=-22\degr$) outflows. The southeast outflow (P.A.$=126\degr$) will be designated as the SE outflow.
[lcccccc]{}\[h\] N(SiO) ($10^{13}$ cm$^{-2}$) & 1.2$\pm0.2$ & 1.1$\pm0.2$ & & & &\
\
Position angle () & 71$\pm$3 & -22$\pm$3 & 126 & & &\
$\lambda$ (pc) & 0.54$\pm$0.03 & 0.13$\pm$0.03 & 0.2 & & &\
v$_{max}$ ([ km s$^{-1}$]{}) & 38$\pm$3 & 27$\pm$3 & 100 & & &\
t$_{dyn}$ ($10^3$ yr) & 18$\pm$2 & 5$\pm$2 & 2.2 & & 10 & 38\
\
M ([ $M_{\sun}$]{}) & 1.60$\pm$0.08 & 0.74$\pm$0.05 & 0.22 & $10^{-3}-0.4$ & 2 & 10\
$\dot{M}$ (10$^{-5}$ [ $M_{\sun}$]{} yr$^{-1}$) & 9$\pm$1 & 14$\pm$7 & 10 & & 21 & 37\
P ([ $M_{\sun}$]{} [ km s$^{-1}$]{}) & 24$\pm$6 & 9$\pm$3 & 4.9 & $\sim0.02$ & 16 & 54\
$\dot{P}$ (10$^{-3}$[ $M_{\sun}$]{} [ km s$^{-1}$]{} yr$^{-1}$) & 1.3$\pm$0.5 & 1.9$\pm$1 & 2.2 & & 1.7 & 1.9\
E ($10^{45}$ erg) & 5.0$\pm$0.6 & 1.5$\pm$0.2 & & $10^{-4}-10^{-3}$ & 2 & 4\
L$_{mech}$ ([ $L_{\sun}$]{}) & 2.3$\pm$0.5 & 2$\pm$1 & & & 2 & 1 \[outflows\]
### Gas tracing the reflection nebula
Figures \[moms0\], \[sio\], \[hcn\] and \[hco\] allow comparisons between the observed molecular emission and the 2MASS K-band emission from the infrared reflection nebula and the 6 cm VLA radio continuum emission from the radio jet launched from MM1. The HCN(1-0) and HCO$^+$(1-0) lines are tracing the gas from the molecular outflows, but also other kind of structures. We have also detected emission from the HC$_3$N(10-9) transition, which is weaker than the other detected lines and at velocities close to systemic (Fig. \[hc3n\]).
The 2MASS K-band image shows the well-known bipolar reflection nebula (@1991Aspin [@1992Aspin]). This nebula wraps the radio jet path. Toward the position of MM1, the nebula becomes narrower and splits into two U-shaped lobes. The north lobe, with an intricate structure, matches quite well most of the HCN(1-0), HCO$^+$(1-0) and HC$_3$N(10-9) emission between -12 and +4[ km s$^{-1}$]{}. The blueshifted molecular line emission (-12 to -7[ km s$^{-1}$]{}) traces the base of the reflection nebula and spreads out due north, covering the whole northern lobe at velocities in the range -1[ km s$^{-1}$]{}to +4[ km s$^{-1}$]{}. At these velocities, part of the HCN(1-0) and HCO$^+$(1-0) emission follows the radio jet trajectory towards the north of the MC position. The southern lobe has fainter K-band emission than the northern lobe and has no molecular emission associated with it. The lack of strong molecular line emission in that area of the nebula is probably due to the lack of dense molecular gas, as shown in the C$^{17}$O(2-1) emission map by @2011FernandezLopez2.
Analysis of the outflow properties from SiO emission
====================================================
SiO is a good tracer of outflows (e.g. @2010JimenezSerra [@2011LopezSepulcre] and references therein). Its abundance is dramatically increased in shocks (e.g., @1997Schilke [@1996PineauDesForets; @2008Gusdorf2; @2008Gusdorf1; @2009Guillet]) and it has been used as a tool to map the innermost part of outflows [@1992MartinPintado; @2009SantiagoGarcia]. In addition, it suffers minimal contamination from quiescent and infalling envelopes and cannot be masked by blended hyperfine components, as in the case of the HCN(1-0) line. Thus, we choose this molecular outflow tracer to derive the characteristics of the NE and NW outflows.
SiO(2-1) has been previously detected toward [IRAS 18162-2048]{}with the IRAM 30 m by [@1997Acord]. They reported an integrated flux of $1.9\pm0.4$ K [ km s$^{-1}$]{}, while here we have measured $2.5\pm0.05$ K [ km s$^{-1}$]{}. Applying a $27\arcsec$ beam dilution (the angular size of the IRAM 30 m beam), the CARMA measurement becomes 1.9 K [ km s$^{-1}$]{}. This implies that CARMA is recovering the same flux as the IRAM 30 m for this transition and hence we can use it to estimate some physical parameters of the outflow.
First, we estimate the column density using the following equation (derived from the expressions in the appendix of @2010Frau), written in convenient units: $$N_{H_2} =2.04\times10^{20}\; \chi(SiO)^{-1} \;\frac{Q(T_{ex})\; e^{E_u/T_{ex}}}{\Omega_s\; \nu^3 \; S\mu^2}\, \int{S_{\nu}\,dv}\quad,$$ where $\chi(SiO)$ is the abundance of SiO relative to H$_2$, $Q(T_{ex})$ is the partition function, $E_u$ and $T_{ex}$ are the upper-level energy and the excitation temperature, both in K, $\Omega_s$ is the angular size of the outflow in square arcseconds, $\nu$ is the rest frequency of the transition in GHz, and $S\mu^2$ is the product of the intrinsic line strength in erg cm$^3$ D$^{-2}$ and the squared dipole moment in D$^2$. Following the NH$_3$ observations of [@2003Gomez], we adopted an excitation temperature of 30 K. $E_u$, $Q(T_{ex})$ and $S\mu^2=19.2~erg~cm^3$ were extracted from the CDMS catalogue (Muller et al. 2005). The term $\int{S_{\nu}\,dv}$ is the measured integrated line emission in Jy beam$^{-1}$ [ km s$^{-1}$]{}.
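The SiO column density itself (before dividing by the abundance to obtain $N_{H_2}$) can be sketched numerically as follows. The solid angle and integrated flux below are illustrative placeholders, and $Q(30\,\mathrm{K})\approx29$ and $E_u=6.25$ K are assumed CDMS-like values for SiO(2-1), not numbers quoted in the text.

```python
import math

# Minimal sketch of the column-density estimate in the equation above.
chi_sio = 5e-9        # adopted SiO abundance relative to H2 (from the text)
T_ex = 30.0           # excitation temperature (K, from the text)
Q = 29.1              # partition function at 30 K (assumed CDMS-like value)
E_u = 6.25            # upper-level energy for SiO(2-1) (K, assumed)
nu = 86.847           # SiO(2-1) rest frequency (GHz, assumed)
S_mu2 = 19.2          # S*mu^2 (erg cm^3, from the text)
omega_s = 100.0       # outflow solid angle (arcsec^2, placeholder)
int_flux = 10.0       # integrated line flux (Jy/beam km/s, placeholder)

# SiO column density (cm^-2): the abundance-independent part of the equation
N_sio = 2.04e20 * Q * math.exp(E_u / T_ex) / (omega_s * nu**3 * S_mu2) * int_flux

# H2 column density follows by dividing by the SiO abundance
N_h2 = N_sio / chi_sio
```

With these placeholder inputs $N(\mathrm{SiO})$ comes out at a few $\times10^{13}$ cm$^{-2}$, the same order of magnitude as the values listed in Table \[outflows\].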
We analyzed the two SiO outflows separately (NE and NW outflows), finding average column densities of $1.1\pm0.2\times10^{13}$[ cm$^{-2}$]{}and $1.2\pm0.2\times10^{13}$[ cm$^{-2}$]{}respectively. We used these values to derive the mass ($M=N_{H_2}\mu_g m_{H_2}\Omega_s$) and other properties of the outflow, assuming a mean gas atomic weight $\mu_g=1.36$. The results are shown in Table \[outflows\].
We used an SiO abundance of $\chi(SiO)=5\times10^{-9}$. This value is in good agreement with the $\chi(SiO)$ found in outflows driven by other intermediate/high-mass protostars. For instance, in a recent paper, [@2013SanchezMonge] found that $\chi(SiO)$ in the outflows of 14 high-mass protostars ranges between $10^{-9}$ and $10^{-8}$ (similar values were also found in @2001Hatchell [@2007Qiu; @2013Codella]). It is important to mention that the uncertainty in the SiO abundance is probably one of the main sources of error in our estimates (e.g., @2009Qiu2). One order of magnitude in $\chi(SiO)$ translates into one order of magnitude in most of the outflow properties given in Table \[outflows\] (M, $\dot{M}$, P, $\dot{P}$, E and L$_{mech}$). Most of the outflow properties in this table were derived following the approach of [@2007Palau]. In order to derive the mass and momentum rates, together with the mechanical luminosity, we need to know the dynamical timescale of the outflows, which can be determined as $t_{dyn}=\lambda/v_{max}$, where $\lambda$ is the outflow length and $v_{max}$ the maximum outflow velocity. No correction for the inclination was included. Hence, we estimate the dynamical times of the NE and NW outflows as 18000 yr and 5000 yr, and their masses as 1.60 and 0.74[ $M_{\sun}$]{}, respectively (Table \[outflows\]). In addition, it is worth noting that most of the mass (about 80%) of the NE outflow is concentrated in its brightest condensation (see Fig. \[moms0\]).
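The dynamical-time and mass-rate step above can be sketched as follows, using the NW values quoted in Table \[outflows\]; small differences from the tabulated numbers reflect rounding of the inputs.

```python
# Sketch of t_dyn = lambda / v_max and Mdot = M / t_dyn,
# with no inclination correction (as in the text).
PC_KM = 3.086e13      # km per parsec
YR_S = 3.156e7        # seconds per year

def t_dyn_yr(length_pc, v_kms):
    """Outflow dynamical time in years."""
    return length_pc * PC_KM / v_kms / YR_S

# NW outflow: lambda = 0.13 pc, v_max = 27 km/s, M = 0.74 Msun (Table values)
t_nw = t_dyn_yr(0.13, 27.0)    # ~5e3 yr, matching the table
mdot_nw = 0.74 / t_nw          # Msun/yr, ~1.5e-4
```

A one-order-of-magnitude change in the assumed $\chi(SiO)$ propagates linearly into the mass and hence into $\dot{M}$, as noted in the text.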
Discussion
==========
Asymmetric outflows and their possible origin
---------------------------------------------
Recent work has provided evidence supporting MM1 being an 11-15[ $M_{\sun}$]{}high-mass protostar, probably still accreting material from a 4[ $M_{\sun}$]{}rotating disk, and MM2 being a massive dusty core containing at least one high-mass protostar, MM2(E), in a less evolved stage than MM1 (@2011FernandezLopez2). MM2 also contains another core, MM2(W), thought to be in a still earlier evolutionary stage. In addition to these well known protostars, it is at present controversial whether MC harbors another protostar, since neither thermal radio continuum nor dust continuum emission has been detected in this molecular core. Hence, in the MM2-MC area we can count two or three protostars that could be associated with the molecular outflows reported in this work.
[IRAS 18162-2048]{}has protostars surrounded by accretion disks and/or envelopes, and these protostars are associated with molecular outflows and jets, resembling low-mass star-forming systems. The most apparent case is that of MM1, with one of the largest bipolar radio jets associated with a protostar. Being a scaled-up version of the low-mass star formation scenario implies more energetic outflows (as we see in MM1 and MM2), which means larger accretion rates, but it also implies large amounts of momentum imparted to the surrounding environment, occasionally affecting the nearby protostellar neighbors and their outflows. In this case, [IRAS 18162-2048]{}is a region with multiple outflows, most of them apparently monopolar. Our observations show no well-opposed counterlobes for the NW, NE and SE outflows. That could be explained if the counterlobes are passing through the cavity excavated by the radio jet or through regions of low molecular abundance. Another explanation for the asymmetry of the outflows, perhaps more plausible in a high-mass star-forming scenario with a number of protostars and powerful outflows, is deflection after hitting a dense clump of gas and dust (e.g., @2002Raga).
Adding a little more to the complexity of the region, the SE outflow comprises two very different kinematical components. Fig. \[co32\] shows high velocity emission at two well separated velocities lying along almost the same projected path. A spatial overlap of blueshifted and redshifted emission has usually been interpreted as an outflow lying close to the plane of the sky. However, the SE outflow has large radial blueshifted and redshifted velocities ($\sim\pm50$[ km s$^{-1}$]{}). One possibility is that the high velocity emission of the SE outflow comes from two molecular outflows, one redshifted and the other blueshifted, both originating close to the MM2 position. A second possibility is that the SE outflow is precessing with an angle $\alpha=15\degr$ (see Fig. 2 in @2009Raga). We derived this angle by taking half the observed wiggling angle of the outflow axis in projection (from P.A.$\simeq95\degr$ at the origin of the outflow to P.A.$\simeq126\degr$ away from it). Hence, the absolute velocity of the ejecta should be $\sim200$[ km s$^{-1}$]{}, which seems to be a reasonable value (see e.g., @2009Bally). A smaller $\alpha$ angle would imply a higher outflow velocity. From the central panel of Fig. \[co32\], we roughly estimate the period of the wiggles of the SE outflow as $\lambda\simeq 16\arcsec$ (27000 AU). We have assumed that the SE outflow completes one precession period between the position of MM2(E) and the end of the CO(3-2) blueshifted emission in our map. Thus, the precession period is $\tau_p=\lambda / (v_j~\cos{\alpha}) \simeq 660~yr$. If the precession is caused by the tidal interaction between the disk of a protostar in MM2(E) and a companion protostar in a non-coplanar orbit (e.g. @1999Terquem1 [@2009Montgomery]), then it is possible to obtain some information about the binary system. We use an equivalent form of equation (37) of [@2009Montgomery] for circular precessing Keplerian disks.
This equation relates the angular velocity at the disk edge ($\omega_d$), the Keplerian orbital angular velocity of the companion around the primary protostar ($\omega_o$) and the retrograde precession rate of the disk and the outflow ($\omega_p$): $$\omega_p=-\frac{15}{32}\frac{\omega_o^2}{\omega_d}\cos{\alpha}\quad,$$ where $\alpha$ is the inclination of the orbit of the companion with respect to the plane of the disk (or obliquity angle), and this angle is the same as the angle of the outflow precession (i.e. the angle between the outflow axis and the line of maximum deviation of the outflow from this axis; @1999Terquem2). Using this expression and adopting reasonable values for the mass of the primary protostar and the radius of its disk, we can constrain the orbital period and the radius of the companion protostar. [@1995Gomez] assigned a B4 ZAMS spectral type to MM2(E) based on its flux at 3.5 cm. A B4 spectral type protostar has about 6-7[ $M_{\sun}$]{}(Table 5 in @1998Molinari). On the other hand, the dust emission of MM2(E) has a radius no larger than 300 AU (@2011FernandezLopez1), and we consider 50 AU a reasonable lower limit for the disk radius. With all of this we derived an orbital period between 200 yr and 800 yr and an orbital radius between 35 $M_2^{1/3}$ AU and 86 $M_2^{1/3}$ AU for the putative MM2(E) binary system, where $M_2$ is the mass of the secondary protostar expressed in solar masses.
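The chain of estimates above can be reproduced numerically: the precession period from the wiggle wavelength and jet velocity, then the companion's orbital period by solving the precession relation (taking the magnitude of $\omega_p$) with a Keplerian $\omega_d$ at the disk edge. This is a minimal sketch assuming a 6.5[ $M_{\sun}$]{}primary, the midpoint of the B4 mass range quoted in the text.

```python
import math

AU_KM = 1.496e8           # km per AU
YR_S = 3.156e7            # seconds per year

lam_au = 27000.0          # wiggle wavelength (AU, from the text)
v_j = 200.0               # jet velocity (km/s, from the text)
alpha = math.radians(15)  # precession half-angle

# Precession period tau_p = lambda / (v_j cos(alpha)), in years
tau_p = lam_au * AU_KM / (v_j * math.cos(alpha)) / YR_S   # ~660 yr

def orbital_period(m1_msun, r_disk_au):
    """Companion orbital period (yr), solving |omega_p| = (15/32)(omega_o^2/omega_d) cos(alpha)."""
    omega_p = 2 * math.pi / tau_p
    # Keplerian angular velocity at the disk edge: T_d = sqrt(R^3 / M) in yr (R in AU, M in Msun)
    omega_d = 2 * math.pi / math.sqrt(r_disk_au**3 / m1_msun)
    omega_o = math.sqrt(32.0 / 15.0 * omega_p * omega_d / math.cos(alpha))
    return 2 * math.pi / omega_o

p_min = orbital_period(6.5, 50.0)    # disk radius lower limit -> ~200 yr
p_max = orbital_period(6.5, 300.0)   # disk radius upper limit -> ~780 yr
```

The two disk-radius limits recover the 200-800 yr orbital-period range quoted in the text.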
The HV panel of Fig. \[co32\] clearly shows for the first time the precise origin of the SE-blueshifted outflow: MM2(E). Its corresponding redshifted counterlobe appears truncated between MM2(E) and MM2(W). However, the CO(3-2) LV and MV panels show that the redshifted NW outflow reaches the position of MM2 (see also the HCN(1-0) emission in Fig. \[hcn\]). It is therefore possible that the counterpart of the SE outflow is the NW outflow. This would resolve the monopolar nature of both the NW and SE outflows and would explain the prominent redshifted wing spectral line profiles of the H$_2$CO and SO transitions previously observed by [@2011FernandezLopez2] at the position of MC, as being produced by a receding outflow from MM2(E) colliding with a dense cloudlet at the position of MC. The change in the SE-NW outflow direction could be explained by a deflection due north of the NW outflow. The cause of the deflection could be a direct impact against MM2(W), or the action of the powerful HH 80/81/80N wind on the NW lobe.
The NE outflow has a monopolar structure at first sight too (Figs. \[sio\], \[moms0\] and \[hcn\]). Its origin cannot be well determined due to angular resolution constraints, but it is also in the MM2-MC area. Figs. \[moms0\], \[hcn\] and \[hco\] (channels at -1 and +4[ km s$^{-1}$]{}) show some signs that a low-velocity counterlobe may exist with a position angle of -112$\degr$, almost opposite to the NE outflow. It would spread out 0.20-0.25 pc from the MM2-MC position.
Evolutionary stage of outflows and protostars
---------------------------------------------
As stated in several works, SiO is a commonly used molecular tracer of shocked gas in outflows from low-mass protostars (e.g., @2006Hirano [@2008Lee; @2009SantiagoGarcia]), but SiO is also found in outflows from high-mass protostars (e.g., @1999Cesaroni [@2001Hatchell; @2009Qiu2; @2004Beuther; @2007Zhang2; @2007Zhang1; @2011LopezSepulcre; @2012Zapata1; @2013Leurini]). However, with the present CARMA observations we have not detected SiO(2-1) emission associated with the HH 80/81/80N radio jet, nor with the SE outflow, within the $\sim400$[ km s$^{-1}$]{}covered by the CARMA SiO(2-1) bandwidth. The HH 80/81/80N and SE outflows have not been detected in the other molecular transitions of this study either, HCO$^+$(1-0) and HCN(1-0), which are also known to be good outflow tracers. If anything, some HCO$^+$ and HCN emission may come from gas pushed away by the collimated, high-velocity jets or may be due to low-velocity winds. Both outflows have been observed in CO lines at the velocities sampled by the CARMA observations, though. Then, what is producing the different chemistry in the outflows of the region? Why is SiO(2-1) not detected in the SE outflow or the radio jet, while it is in the NE and NW outflows? Furthermore, why is only one lobe of each of the NE and NW outflows detected? There are other cases in the literature where a similar behavior is observed in CO and SiO (e.g., @2007Zhang2 [@2007Zhang1; @2008Reid; @2012Zapata1; @2013Codella]).
It has been proposed that the SiO abundance can decrease with the age of the outflow (@1999Codella [@2006Miettinen; @2010Sakai; @2011LopezSepulcre]), which could explain the differences in SiO emission among the outflows of the same region. This hypothesis implies that during the early stages, the gas surrounding the protostar is denser and rich in grains, producing stronger shocks between the outflow and the ambient material, and thus an abundant release of SiO molecules. Later, in more evolved stages, the outflow digs a large cavity close to the protostar, so the shocks are weaker and grains are rarer. The hypothesis is further supported by the SiO depletion timescale (before SiO freezes out onto the dust grains), which is shorter than the typical outflow timescale (some $10^4$ yr), together with the removal of SiO from the gas phase through the formation of SiO$_2$ (@1996PineauDesForets [@2004Gibb]). It could describe well the case of the HH 80/81/80N jet, which has produced a cavity probably devoid of dust grains. Now, we can compare the timescales of the outflows in the central region of [IRAS 18162-2048]{}. The radio jet HH 80/81/80N has an age of 10$^6$ yr (@2004Benedettini), the SE outflow has an age of about $2\times10^3$ yr (@2009Qiu), and the NE and NW outflows have ages of $2\times10^4$ and $5\times10^3$ yr, respectively. Therefore, except for the SE outflow, the outflow timescales would be in good agreement with SiO decreasing its abundance with time. Actually, as indicated before, the case of the SE outflow is more complex. It has high velocity gas and it is apparently precessing, which may indicate a longer outflow path. In addition, the CO(3-2) observations are constrained by the SMA primary beam, implying that the outflow could be larger than observed and therefore older. In any case, if the SE outflow is the counterlobe of the NW one (see §5.1), then a different explanation must be found to account for the chemical differences between these two outflow lobes.
We can also compare the characteristics of the NE and NW outflows with those of outflows ejected by (i) high-mass protostars and (ii) low-mass Class 0 protostars, in order to put additional constraints on the ejecting sources. [@2005Zhang] made an outflow survey toward high-mass star-forming regions using CO single-dish observations. From this work we summarize in Table \[outflows\] the properties of outflows from $L<10^3$[ $L_{\sun}$]{}protostars and $L\in(10^3,5\times10^3)$[ $L_{\sun}$]{}protostars. This table also shows information on outflows from low-mass Class 0 protostars, gathered from several sources (@2004Arce [@2005Arce; @2006Kwon; @2011Davidson]). The characteristics of the NE and NW outflows in [IRAS 18162-2048]{}(as well as those of the SE outflow) are similar to those of outflows from high-mass protostars, with the NE outflow being more energetic and carrying more momentum than the NW and SE outflows. On the contrary, the outflows from low-mass Class 0 protostars, although similar in length and dynamical time, have in general lower masses and kinetic energies, the latter about four orders of magnitude lower. All of this indicates that the NE, NW and SE outflows in [IRAS 18162-2048]{}could be associated with intermediate- or high-mass protostars in a very early evolutionary stage (massive Class 0 protostars). Therefore, given the powerful outflowing activity from MM2, the protostars would be undergoing a powerful accretion process in which the gas from the dusty envelope (about 11[ $M_{\sun}$]{}) is probably falling directly onto the protostars. Such objects are very rare. Maybe the closest case is that of Cepheus E (@2003Smith).
The outflow from this intermediate-mass protostar, which is surrounded by a massive $\sim25$[ $M_{\sun}$]{}envelope, is very young (t$_{dyn}\sim1\times10^3$ yr), with a mass and an energy (M$\sim0.3$[ $M_{\sun}$]{}, E$\sim5\times10^{45}$ ergs) resembling those obtained for the NE, NW and SE outflows.
Conclusions
===========
We have carried out CARMA low-angular resolution observations at 3.5 mm and SMA high-angular resolution observations at 870 $\mu$m toward the massive star-forming region [IRAS 18162-2048]{}. We have also included the analysis of SMA low-angular resolution archival data of the CO(3-2) line. The analysis of several molecular lines, all of which are good outflow tracers, resulted in the physical characterization of two outflows not well detected before (the NE and NW outflows) and the clear identification of the driving source of a third outflow (the SE outflow). The main results of this work are as follows:
- We observed three apparently monopolar or asymmetric outflows in [IRAS 18162-2048]{}. The NE and NW outflows were detected in most of the observed molecular lines (SiO, HCN, HCO$^+$ and CO), while the SE outflow was only clearly detected in CO. The outflow associated with HH 80/81/80N was undetected. At most, it could explain some HCN and HCO$^+$ low-velocity emission associated with the infrared reflection nebula, which could be produced by dragged gas or a wide open angle low-velocity wind from MM1.
- The NE and NW outflows have their origins close to MM2.
- We have estimated the physical properties of the NE and NW outflows from their SiO emission. They have characteristics similar to those found in molecular outflows from massive protostars, with the NE outflow being more massive and energetic than the NW and SE outflows.
- SMA high-angular resolution CO(3-2) observations have identified the driving source of the SE outflow: MM2(E). These observations provide evidence of precession along this outflow, which shows a change of about 30$\degr$ in position angle and a precession period of 660 yr. If the precession of the SE outflow is caused by the misalignment between the plane of the disk and the orbit of a binary companion, then the orbital period of the binary system is 200-800 yr and the orbital radius is 35-86 $M_2^{1/3}$ AU.
- We discuss the monopolar or asymmetric appearance of all three outflows. We provide evidence that the SE and NW outflows are linked and that precession and a possible deflection are the causes of the asymmetry of the outflow. In addition, the NE outflow could have a smaller and slower southwest counterlobe, maybe associated with elongated HCN and HCO$^+$ emission.
- Finally, we argue that the SiO content of the outflows in [IRAS 18162-2048]{}could be related to outflow age. This would explain the non-detection of SiO toward the HH 80/81/80N radio jet.
The authors wish to dedicate a special remembrance to our good fellow Yolanda Gómez, who helped at the very beginning of this work with her contagious optimism.
We thank all members of the CARMA and SMA staff that made these observations possible. We thank Pau Frau for helping with the SMA observations. MFL acknowledges financial support from University of Illinois and thanks John Carpenter and Melvin Wright for their patience with CARMA explanations. MFL also thanks the hospitality of the Instituto de Astronomía (UNAM), México D.F., and of the CRyA, Morelia. JMG is supported by the Spanish MICINN AYA2011-30228-C03-02 and the Catalan AGAUR 2009SGR1172 grants. SC acknowledges support from CONACyT grants 60581 and 168251. LAZ acknowledges support from CONACyT.
Support for CARMA construction was derived from the Gordon and Betty Moore Foundation, the Kenneth T. and Eileen L. Norris Foundation, the James S. McDonnell Foundation, the Associates of the California Institute of Technology, the University of Chicago, the states of Illinois, California, and Maryland, and the National Science Foundation. Ongoing CARMA development and operations are supported by the National Science Foundation under a cooperative agreement, and by the CARMA partner universities.
The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics, and is funded by the Smithsonian Institution and the Academia Sinica.
[^1]: Note that while the observations are often said to be [*[sensitive]{}*]{} to structures on scales $\Theta=\lambda/B_{min}\simeq206\arcsec~[\lambda(mm)/B_{min}(m)]$, the interferometer flux filtering could reduce the actual largest scale detection roughly in a factor 2, so that $\Theta\simeq100\arcsec~[\lambda(mm)/B_{min}(m)]$ (see appendix in @1994Wilner). Here we report a value estimated using the last expression.
[^2]: The GILDAS package is available at http://www.iram.fr/IRAMFR/GILDAS
[^3]: All the velocities in this work are given with respect to the cloud velocity, adopted as $v_{lsr}=11.8$[ km s$^{-1}$]{}.
---
abstract: |
[ *In this paper, we study the linear stability of the elliptic rhombus homographic solutions in the classical planar four-body problem, which depends on the shape parameter $u \in (1/\sqrt{3}, \sqrt{3})$ and the eccentricity $e\in [0,1)$. By an analytical result obtained in the study of the linear stability of elliptic Lagrangian solutions, we prove that the linearized Poincaré map of the elliptic rhombus solution possesses at least two pairs of hyperbolic eigenvalues when $(u,e) \in (u_3, 1/u_3) \times [0,1)$ or $(u,e) \in\left([1/\sqrt{3}, u_3)
\cup( 1/u_3, \sqrt{3}]\right) \times [0, \hat{f}(\frac{27}{4})^{-1/2})$, where $u_3\approx 0.6633$ and $\hat{f}(\frac{27}{4})^{-1/2} \approx 0.4454$. By a numerical result obtained in the study of the elliptic Lagrangian solutions, we analytically prove that the elliptic rhombus solution is hyperbolic, i.e., it possesses four pairs of hyperbolic eigenvalues, when $(u,e)\in [1/\sqrt{3}, \sqrt{3}] \times [0,1)$.*]{}
author:
- |
Bowen Liu$^{1}$[^1]\
$^{1}$ Chern Institute of Mathematics\
Nankai University, Tianjin 300071, China\
title: 'Linear Stability of Elliptic Rhombus Solutions of the Planar Four-body Problem'
---
[**2010 Mathematics Subject Classification:**]{} 58E05, 37J45, 34C25
[**Key words:**]{} Linear stability, Morse Index, Maslov-type $\om$-index, hyperbolic region, elliptic rhombus solution, planar four-body problem.
[**Running title:**]{} Linear Stability of Elliptic Rhombus Solution.
Introduction
============
We consider the classical planar four-body problem in celestial mechanics. Denote by $q_1 ,q_2 ,q_3, q_4\in \R^2$ the position vectors of four particles with masses $m_1 ,m_2 ,m_3,m_4 > 0$ respectively. By Newton’s second law and the law of universal gravitation, the system of equations for this problem is $$m_i \ddot{q}_i = \frac{\partial U}{\partial q_i}, \qquad i = 1, 2, 3, 4, \tag{1.1}$$ where $U(q) = U(q_1, q_2, q_3, q_4) = \sum_{1\leq i< j\leq 4} \frac{m_i m_j}{\|q_i-q_j\|}$ is the potential function and $\|\cdot\|$ is the standard norm of a vector in $\R^2$. For periodic solutions with period $T$, the system is the Euler-Lagrange equation of the action functional $$\mathcal{A}(q) = \int_{0}^{T} \Big[\sum_{i=1}^{4}\frac{m_i\|\dot{q}_i(t)\|^2}{2} + U(q(t))\Big]\, dt, \tag{1.2}$$ defined on the loop space $W^{1,2} (\R/T \Z, \hat{\chi})$, where $$\hat{\chi}:=\Big\{q =(q_1, q_2, q_3, q_4)\in(\R^2)^4 \,\Big|\, \sum_{i = 1}^{4} m_i q_i = 0,\; q_i \neq q_j,\; \forall i\neq j\Big\}$$ is the configuration space of the planar four-body problem. The periodic solutions of (\[1.1\]) correspond to critical points of the action functional (\[1.2\]).
It is a well-known fact that (\[1.1\]) can be reformulated as a Hamiltonian system. Let $p_1 , p_2 , p_3, p_4 \in \R^2$ be the momentum vectors of the particles respectively. The Hamiltonian system corresponding to (\[1.1\]) is $$\dot{p}_i = -\frac{\partial H}{\partial q_i}, \qquad \dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad i = 1, 2, 3, 4, \tag{1.4}$$ with the Hamiltonian function $$H(p, q) = \sum_{i =1}^{4} \frac{\|p_i\|^2}{2m_i} - U(q_1,q_2, q_3, q_4).$$
A central configuration is a solution $(q_1, ..., q_4) = (a_1, ..., a_4)$ of $$-\lm m_i a_i = \frac{\partial U}{\partial q_i}(a), \qquad i = 1, 2, 3, 4,$$ where $\lm = \frac{U(a)}{2I(a)} > 0$ and $I(a) = \frac{1}{2}\sum_{i=1}^{4} m_i\|a_i\|^2$ is the moment of inertia. Readers may refer to [@Long2012Notes] and [@Moeckel1990MathZ] for detailed properties of central configurations.
In this paper, we consider the linear stability of the elliptic rhombus solution of the planar $4$-body problem. We assume that $m_1 = m_3 = m$ and $m_2 = m_4 = 1$. By (5.10) of [@Long2003ANS], the central configuration $a = (a_1(u), a_2(u), a_3(u), a_4(u))$ satisfies $$a_1 = (0, u)^T, \quad a_2 = (1, 0)^T, \quad a_3 = (0, -u)^T, \quad a_4 = (-1, 0)^T, \tag{1.7}$$ and $$m = \frac{u^3\big(8-(1+u^2)^{3/2}\big)}{8u^3-(1+u^2)^{3/2}}, \tag{1.8}$$ where $1/\sqrt{3}<u< \sqrt{3}$ and $\aa = \sqrt{2m u^2 +2}$. We also assume that the constant $\mu$ satisfies $$\mu = U(a) = \frac{m^2}{2u} + \frac{1}{2} + \frac{4m}{\sqrt{1+u^2}}. \tag{1.9}$$
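Since the central-configuration condition determines $m$ for each shape parameter $u$, the relation between $m$ and $u$ can be sanity-checked numerically. Below is a minimal sketch: the closed form and the multiplier are derived here directly from the configuration (\[1.7\]) with masses $(m,1,m,1)$, as an independent check rather than a quotation of [@Long2003ANS].

```python
import numpy as np

def rhombus_mass(u):
    """Mass m for which (0,u),(1,0),(0,-u),(-1,0) with masses (m,1,m,1)
    is a central configuration; derived from -lam*m_i*a_i = dU/dq_i(a)."""
    s = 2.0 / (1.0 + u**2) ** 1.5
    return (s - 0.25) / (s - 0.25 / u**3)

def cc_residual(u, m):
    """Max over i of | sum_{j!=i} m_j (a_j-a_i)/|a_j-a_i|^3 + lam*a_i |."""
    a = np.array([[0.0, u], [1.0, 0.0], [0.0, -u], [-1.0, 0.0]])
    masses = np.array([m, 1.0, m, 1.0])
    lam = 0.25 + 2.0 * m / (1.0 + u**2) ** 1.5  # from body 2's x-equation
    res = 0.0
    for i in range(4):
        acc = sum(masses[j] * (a[j] - a[i]) / np.linalg.norm(a[j] - a[i]) ** 3
                  for j in range(4) if j != i)
        res = max(res, np.linalg.norm(acc + lam * a[i]))
    return res

print(rhombus_mass(1.0))                    # the square: equal masses, m = 1
print(cc_residual(0.9, rhombus_mass(0.9)))  # residual near machine epsilon
```

Note that $m \to 0$ as $u \to \sqrt{3}$ and $m$ blows up as $u \to 1/\sqrt{3}$, in line with the admissible interval $1/\sqrt{3}<u<\sqrt{3}$.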
In 2002, Long and Sun in [@LongSun2002ARMA] proved that any convex non-collinear central configuration of the planar 4-body problem with equal opposite masses must be a kite. In 2003, Long in [@Long2003ANS] studied the possible shapes of $4$-body non-collinear relative equilibria for any positive masses and estimated the geometric quantities of the shape. In particular, Long obtained that the configuration is central if the configuration (\[1.7\]) and the mass $m$ satisfy (\[1.8\]) and $1/\sqrt{3}<u< \sqrt{3}$. In 2007, Perez-Chavela and Santoprete in [@PerezSantoprete2007ARMA] proved that if the configuration is convex and $m_1 = m_3 = m$, $m_2 = m_4 = 1$, then the central configuration must be a rhombus, and this central configuration is unique. In 2008, Albouy, Fu and Sun in [@AlbouyFuSun2008] studied the symmetry of central configurations of the four-body problem and proved that, for four particles forming a convex quadrilateral central configuration, the configuration is symmetric with respect to a diagonal if and only if the two particles on opposite sides of that diagonal have equal masses.
In 2005, Meyer and Schmidt in [@MeyerSchmidt2005JDE] symplectically decomposed the fundamental solution of the elliptic Lagrangian orbit into two parts using central configuration coordinates. They obtained stability results by normal form theory for small enough eccentricity $e \geq 0$. In 2010-2014, Hu, Long and Sun introduced a Maslov-type index method and operator theory to study the stability of elliptic Lagrangian solutions of the planar three-body problem in [@HuLongSun2014] and [@HuSun2010]. In [@HuLongSun2014], the authors analytically proved the stability bifurcation diagram of the elliptic Lagrangian solutions in the parameter rectangle $(\bb, e) \in [0,9]\times[0,1)$. In 2015, Hu, Ou and Wang in [@HuOuWang2015ARMA] built up trace formulas for Hamiltonian systems and used them to estimate the stable and hyperbolic regions of the elliptic Lagrangian solutions. Using the trace formula, Hu and Ou in [@HuOu2013RCD] studied the hyperbolic region and proved that the elliptic relative equilibrium of the square central configuration, where $m_1= m_2 =m_3=m_4 = 1$, is hyperbolic for any eccentricity $e$. In 2017, Mansur, Offin and Lewis in [@Offin2017QTDS] proved the instability of the constrained elliptic rhombus solution in the reduced space by the minimizing property of the action functional, assuming the nondegeneracy of the variational problem; that is, the linearized Poincaré map, which is the end point of the fundamental solution of the linearized problem, possesses at least one pair of hyperbolic eigenvalues. In particular, when $e= 0$, by [@OuyangXie2005], they obtained instability in the reduced space, i.e., the linearized Poincaré map possesses one pair of hyperbolic eigenvalues. In this paper, without the nondegeneracy assumption, we prove analytically that the fundamental solution at the end point possesses at least two pairs of hyperbolic eigenvalues, which yields the instability.
By the numerical results on the linear stability of elliptic Lagrangian solutions, we obtain that the eigenvalues of the linearized Poincaré map of the essential part are all hyperbolic.
Furthermore, in 2017, Zhou and Long applied the Maslov-type index theory to the elliptic Euler-Moulton solutions. They reduced the elliptic Euler-Moulton solutions of the $N$-body problem to those of the $3$-body problem in [@ZhouLong2017CMDA] by the central configuration coordinates and obtained the linear stability of the elliptic Euler solution of the $3$-body problem by the Maslov-type indices in [@ZhouLong2017ARMA].
In this paper, we use the technique introduced by Meyer and Schmidt in [@MeyerSchmidt2005JDE] to reduce the system to three independent Hamiltonian systems of $\ga_{1}(t)$, $\ga_{u,e}(t)$ and $\eta_{u,e}(t)$. The Hamiltonian system of $\ga_1(t)$ is fully studied in [@HuSun2010]. For the remaining two Hamiltonian systems $\ga_{u,e}(t)$ and $\eta_{u,e}(t)$, we analyze the $\om$-Maslov-type indices of $\ga_{u,e}(t)$ and $\eta_{u,e}(t)$ and the $\om$-Morse indices of the corresponding operators.
Before stating our results, we need the following results on the positivity of certain operators obtained in the studies of the linear stability of the elliptic Lagrangian solutions in [@HuOuWang2015ARMA] and [@Simo2006JDEI].
[**Lemma 1.1.** ]{}
*(i) By the analytical result of Theorem 1.8 of [@HuOuWang2015ARMA], the operator $A(\bb,e)$ defined by (\[2.160\]) is positive definite for any $\om$-boundary condition with zero nullity, where $\om \in \U$ and $(\bb,e)\in \{\frac{27}{4}\} \times [0,\hat{f}(\frac{27}{4})^{-1/2})$ or $(\bb,e)\in \{\bb_1\} \times [0,\hat{f}(\bb_1)^{-1/2})$, where $\bb_1$ is given by (\[4.28\]); $\hat{f}(\frac{27}{4})^{-1/2} \approx 0.4454$ and $\hat{f}(\bb_1)^{-1/2}\approx 0.4435$ can be obtained by Theorem 1.8 of [@HuOuWang2015ARMA].*
\(ii) By the numerical result in section 7 of [@Simo2006JDEI], for $(\bb,e)\in \{\frac{27}{4}\} \times [0,1)$ or $(\bb,e)\in \{\bb_1\} \times [0,1)$, the operator $A(\bb,e)$ is positive definite for any $\om$-boundary condition with zero nullity where $\om \in \U$.
By the analytical and numerical results on the elliptic Lagrangian solutions in Lemma 1.1, we analytically obtain the linear stability of the elliptic rhombus solutions.
[**Theorem 1.2.** ]{}
*(i) By (i) of Lemma 1.1, when $(u,e) \in (u_3, 1/u_3) \times [0,1)$ or $(u,e) \in\left((1/\sqrt{3}, u_3)
\cup(1/u_3, \sqrt{3})\right) \times [0, \hat{f}(\frac{27}{4})^{-1/2})$, where $u_3 \approx 0.6633$ is given by (\[4.42\]), the linearized Poincaré map, which is the end point $\gamma_0(2\pi)$ of the fundamental solution of the linearized Hamiltonian system, possesses at least two pairs of hyperbolic eigenvalues, i.e., at least two pairs of eigenvalues are not on $\U$.*
\(ii) By (ii) of Lemma 1.1, for $(u,e)\in [1/\sqrt{3}, \sqrt{3}] \times [0,1)$, $\ga_0(2\pi)$ possesses four pairs of hyperbolic eigenvalues, i.e., all the eigenvalues of the essential parts are hyperbolic.
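Theorem 1.2 counts pairs of eigenvalues of the linearized Poincaré map lying off the unit circle $\U$. As a small numerical illustration of that count (the helper below is ours, not part of the paper's machinery):

```python
import numpy as np

def hyperbolic_pairs(M, tol=1e-9):
    """Count pairs of eigenvalues of M off the unit circle; eigenvalues of
    a symplectic matrix come in pairs lam, 1/lam, so we divide by 2."""
    lams = np.linalg.eigvals(M)
    return int(np.sum(np.abs(np.abs(lams) - 1.0) > tol)) // 2

# one hyperbolic block diag(2, 1/2) plus one elliptic rotation block
hyp = np.diag([2.0, 0.5])
rot = np.array([[np.cos(1.0), -np.sin(1.0)], [np.sin(1.0), np.cos(1.0)]])
M = np.block([[hyp, np.zeros((2, 2))], [np.zeros((2, 2)), rot]])
print(hyperbolic_pairs(M))  # 1
```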
This paper is organized as follows. In Section 2, we introduce the $\om$-Maslov-type indices and $\om$-Morse indices, and reduce the linearized Hamiltonian system to three subsystems. In Section 3, we study the linear stability along the three boundary segments of the rectangle $(u,e) \in [1/\sqrt{3},\sqrt{3}] \times [0,1)$. In Section 4, we study the linear stability in the rectangle $(u,e) \in [1/\sqrt{3},\sqrt{3}] \times [0,1)$ and prove Theorem 1.2.
Preliminaries
=============
$\omega$-Maslov-Type Indices and $\omega$-Morse Indices {#sec:1}
-------------------------------------------------------
Let $(\R^{2n},\Omega)$ be the standard symplectic vector space with coordinates $(x_1,...,x_n$, $y_1,...,y_n)$ and the symplectic form $\Omega=\sum_{i=1}^{n}dx_i \wedge dy_i$. Let $J=(\begin{smallmatrix}0&-I_n\\
I_n&0\end{smallmatrix})$ be the standard symplectic matrix, where $I_n$ is the identity matrix on $\R^n$. Given any two $2m_k\times 2m_k$ matrices of square block form $M_k=(\begin{smallmatrix}A_k&B_k\\
C_k&D_k\end{smallmatrix})$ with $k=1, 2$, the symplectic sum of $M_1$ and $M_2$ is defined (cf. [@Long1999PacificJMathBott] and [@Long2002BookIndexTheory]) by the following $2(m_1+m_2)\times 2(m_1+m_2)$ matrix $M_1\dm M_2$: $$M_1\dm M_2=\begin{pmatrix}
A_1 & 0 & B_1 & 0\\
0 & A_2 & 0 & B_2\\
C_1 & 0 & D_1 & 0\\
0 & C_2 & 0 & D_2
\end{pmatrix}.$$ For any two paths $\ga_j\in\P_{\tau}(2n_j)$ with $j=0$ and $1$, let $\ga_0 \dm \ga_1(t)= \ga_0(t) \dm \ga_1(t)$ for all $t\in [0,\tau]$.
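The symplectic sum $\dm$ is a block reshuffle rather than a plain direct sum. A short sketch (helper names are ours) that also checks the standard fact that $\dm$ preserves symplecticity:

```python
import numpy as np

def J(n):
    """Standard symplectic matrix on R^{2n}."""
    Z = np.zeros((n, n))
    return np.block([[Z, -np.eye(n)], [np.eye(n), Z]])

def symplectic_sum(M1, M2):
    """Symplectic sum M1 <> M2 of 2m1 x 2m1 and 2m2 x 2m2 matrices in
    (A B / C D) block form, following the 4 x 4 block layout in the text."""
    m1, m2 = M1.shape[0] // 2, M2.shape[0] // 2
    A1, B1, C1, D1 = M1[:m1, :m1], M1[:m1, m1:], M1[m1:, :m1], M1[m1:, m1:]
    A2, B2, C2, D2 = M2[:m2, :m2], M2[:m2, m2:], M2[m2:, :m2], M2[m2:, m2:]
    n = m1 + m2
    M = np.zeros((2 * n, 2 * n))
    M[:m1, :m1], M[:m1, n:n + m1] = A1, B1
    M[n:n + m1, :m1], M[n:n + m1, n:n + m1] = C1, D1
    M[m1:n, m1:n], M[m1:n, n + m1:] = A2, B2
    M[n + m1:, m1:n], M[n + m1:, n + m1:] = C2, D2
    return M

# the symplectic sum of two symplectic matrices is again symplectic
R = lambda t: np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
M = symplectic_sum(R(0.3), R(1.1))
print(np.allclose(M.T @ J(2) @ M, J(2)))  # True
```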
It is well known that the fundamental solution $\ga(t)$ of a linear Hamiltonian system with continuous symmetric periodic coefficients is a path in the symplectic matrix group $\Sp(2n)$ starting from the identity. In the Lagrangian case, when $n =2$, the Maslov-type index $i_{\om}(\ga)$ is defined by the usual homotopy intersection number about the hypersurface $\Sp(2n)^0 = \{M\in\Sp(2n)\,|\, D_{\om}(M)=0\}$, where $ D_{\om}(M) = (-1)^{n-1}\ol{\om}^n\det(M-\om I_{2n})$. The nullity is defined by $\nu_{\om}(M)=\dim_{\C}\ker_{\C}(\ga(2\pi) - \om I_{2n})$. Please refer to [@Long1999PacificJMathBott; @Long2000AdvMath; @Long2002BookIndexTheory] for more details on this index theory of symplectic matrix paths and periodic solutions of Hamiltonian systems.
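The nullity $\nu_{\om}(M)=\dim_{\C}\ker_{\C}(M-\om I)$ can be computed numerically via singular values. A small illustration on a rotation in $\Sp(2)$, whose eigenvalues are $e^{\pm it}$ (helper names are ours):

```python
import numpy as np

def nu_omega(M, omega, tol=1e-9):
    """nu_omega(M) = dim_C ker(M - omega*I), via near-zero singular values."""
    n = M.shape[0]
    s = np.linalg.svd(M.astype(complex) - omega * np.eye(n), compute_uv=False)
    return int(np.sum(s < tol))

R = lambda t: np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
print(nu_omega(R(np.pi / 3), np.exp(1j * np.pi / 3)))  # 1: omega hits e^{it}
print(nu_omega(R(np.pi / 3), 1.0))                     # 0: omega not an eigenvalue
print(nu_omega(np.eye(2), 1.0))                        # 2: full kernel
```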
For $T>0$, suppose $x$ is a critical point of the functional $$F(x)=\int_0^TL(t,x,\dot{x})dt, \qquad \forall\,\, x\in W^{1,2}(\R/T\Z,\R^n),$$ where $L\in C^2((\R/T\Z)\times \R^{2n},\R)$ and satisfies the Legendrian convexity condition $L_{p,p}(t,x,p)>0$. It is well known that $x$ satisfies the corresponding Euler-Lagrange equation: $$\frac{d}{dt}L_p(t,x,\dot{x})-L_x(t,x,\dot{x})=0, \tag{p2.7}$$ $$x(0)=x(T), \qquad \dot{x}(0)=\dot{x}(T). \tag{p2.8}$$
For such an extremal loop, define $$P(t) = L_{p,p}(t,x(t),\dot{x}(t)), \qquad Q(t) = L_{x,p}(t,x(t),\dot{x}(t)), \qquad R(t) = L_{x,x}(t,x(t),\dot{x}(t)).$$ Note that $$F''(x)=-\frac{d}{dt}\Big(P\frac{d}{dt}+Q\Big)+Q^T\frac{d}{dt}+R.$$
For $\omega\in\U$, set $$D(\omega,T)=\{y\in W^{1,2}([0,T],\C^n)\,|\, y(T)=\omega y(0) \}.$$ We define the $\omega$-Morse index $\phi_\omega(x)$ of $x$ to be the dimension of the largest negative definite subspace of $ \langle F\,''(x)y_1,y_2 \rangle$, for all $y_1,y_2\in D(\omega,T)$, where $\langle\cdot,\cdot\rangle$ is the inner product in $L^2$. For $\omega\in\U$, we also set $$\ol{D}(\omega,T)= \{y\in W^{2,2}([0,T],\C^n)\,|\, y(T)=\omega y(0),\; \dot{y}(T)=\omega \dot{y}(0) \}.$$ Then $F''(x)$ is a self-adjoint operator on $L^2([0,T],\R^n)$ with domain $\ol{D}(\omega,T)$. We also define $$\nu_\omega(x)=\dim\ker(F''(x)).$$
In general, for a self-adjoint operator $A$ on the Hilbert space $\mathscr{H}$, we set $\nu(A)=\dim\ker(A)$ and denote by $\phi(A)$ its Morse index which is the maximum dimension of the negative definite subspace of the symmetric form $\langle A\cdot,\cdot\rangle$. Note that the Morse index of $A$ is equal to the total multiplicity of the negative eigenvalues of $A$.
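In finite dimensions the Morse index of a symmetric matrix is simply the count of its negative eigenvalues with multiplicity. A minimal stand-in for the operator case (our helper, not from the references):

```python
import numpy as np

def morse_index(A, tol=1e-9):
    """phi(A): total multiplicity of the negative eigenvalues of a
    symmetric matrix A."""
    return int(np.sum(np.linalg.eigvalsh(A) < -tol))

def nullity(A, tol=1e-9):
    """nu(A) = dim ker A."""
    return int(np.sum(np.abs(np.linalg.eigvalsh(A)) < tol))

A = np.diag([-2.0, -1.0, 0.0, 3.0])
print(morse_index(A), nullity(A))  # 2 1
```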
On the other hand, $\td{x}(t)=(\partial L/\partial\dot{x}(t),x(t))^T$ is the solution of the corresponding Hamiltonian system of (\[p2.7\])-(\[p2.8\]), and its fundamental solution $\gamma(t)$ is given by $$\dot{\gamma}(t) = JB(t)\gamma(t), \qquad \gamma(0) = I_{2n},$$ with $$B(t)=\begin{pmatrix}
P^{-1}(t)& -P^{-1}(t)Q(t)\\
-Q(t)^TP^{-1}(t)& Q(t)^TP^{-1}(t)Q(t)-R(t)
\end{pmatrix}.$$
[**Lemma 2.3.**]{} [*([@Long2002BookIndexTheory], p. 172) For the $\omega$-Morse index $\phi_\omega(x)$ and nullity $\nu_\omega(x)$ of the solution $x=x(t)$ and the $\omega$-Maslov-type index $i_\omega(\gamma)$ and nullity $\nu_\omega(\gamma)$ of the symplectic path $\ga$ corresponding to $\td{x}$, for any $\omega\in\U$ we have*]{} $$\phi_\omega(x) = i_\omega(\gamma), \qquad \nu_\omega(x) = \nu_\omega(\gamma).$$
A generalization of the above lemma to arbitrary boundary conditions is given in [@HuSun2009CMPIndexTheory]. For more information on these topics, readers may refer to [@Long2002BookIndexTheory].
The Essential Part of the Fundamental Solution
----------------------------------------------
In 2005, Meyer and Schmidt gave the essential part of the fundamental solution of the elliptic Lagrangian orbit (cf. p. 275 of [@MeyerSchmidt2005JDE]). Readers may also refer to [@Long2012Notes]. Note that $$\sum_{i=1}^{4} m_i a_i =0, \qquad \sum_{i=1}^{4} m_i |a_i|^2 = 1.$$ We define $M = \diag\{m_1 I , m_2 I , m_3I, m_4I\}$ and $\tilde{J} = \diag\{J_2, J_2, J_2 ,J_2\}$, where $J_2$ is the standard $2 \times 2$ symplectic matrix.
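The normalization $\sum_i m_i a_i = 0$, $\sum_i m_i|a_i|^2 = 1$ holds for the rhombus configuration (\[1.7\]) after rescaling by $1/\aa$, since $\aa^2 = 2mu^2+2$. A minimal numerical check (the rescaling by $1/\aa$ is our reading of the convention, and $m$ is passed in by hand):

```python
import numpy as np

def normalized_rhombus(u, m):
    """Rescale configuration (1.7) by 1/alpha, alpha = sqrt(2*m*u^2 + 2),
    so that sum_i m_i a_i = 0 and sum_i m_i |a_i|^2 = 1."""
    alpha = np.sqrt(2.0 * m * u**2 + 2.0)
    a = np.array([[0.0, u], [1.0, 0.0], [0.0, -u], [-1.0, 0.0]]) / alpha
    masses = np.array([m, 1.0, m, 1.0])
    return a, masses

a, masses = normalized_rhombus(0.8, 1.3)
print(np.allclose(masses @ a, 0.0))                            # center of mass at origin
print(np.isclose(np.sum(masses * np.sum(a**2, axis=1)), 1.0))  # unit normalization
```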
We take the second derivative of the potential $U(q)$ at the central configuration $a$ and obtain, for $i \neq j$, $$B_{ij} \doteq \left.\frac{\partial^2 U}{\partial q_i \partial q_j}\right|_{q=a} = \frac{m_i m_j}{|a_i-a_j|^3}\left(I - \frac{3(a_i-a_j)(a_i-a_j)^T}{|a_i-a_j|^2}\right),$$ and $$B_{ii} \doteq \left.\frac{\partial^2 U}{\partial q_i^2}\right|_{q=a} = \sum_{j\neq i} \frac{m_i m_j}{|a_i-a_j|^3}\left(-I + \frac{3(a_i-a_j)(a_i-a_j)^T}{|a_i-a_j|^2}\right).$$ By the symmetry of the configuration, we have that $a_1-a_2 = a_4-a_3$ and $a_2 -a_3 = a_1-a_4$. These yield that $$B_{12} = B_{21} =B_{34} = B_{43} = \frac{m}{|a_1-a_2|^3(1+u^2)}\begin{pmatrix}
u^2 -2 & 3u\\
3u & 1-2u^2
\end{pmatrix},$$ $$B_{14} = B_{41}=B_{23} = B_{32} = \frac{m}{|a_1-a_4|^3(1+u^2)}\begin{pmatrix}
u^2 -2 & -3u\\
-3u & 1-2u^2
\end{pmatrix},$$ $$B_{13} = B_{31} = \frac{m^2}{|a_1-a_3|^3}\begin{pmatrix}
1 & 0\\
0 & -2
\end{pmatrix}, \qquad
B_{24} = B_{42} = \frac{1}{|a_2-a_4|^3}\begin{pmatrix}
-2 & 0\\
0 & 1
\end{pmatrix}.$$
Note that $B_{ii} =- \sum_{j\neq i} B_{ij}$. These yield that $$B_{11} = B_{33} = \frac{2m}{|a_1-a_2|^3(1+u^2)}\begin{pmatrix}
2- u^2 & 0\\
0 & 2u^2-1
\end{pmatrix}
+ \frac{m^2}{|a_1-a_3|^3}\begin{pmatrix}
-1 & 0\\
0 & 2
\end{pmatrix};$$ $$B_{22} = B_{44} = \frac{2m}{|a_1-a_2|^3(1+u^2)}\begin{pmatrix}
2-u^2 & 0\\
0 & 2u^2-1
\end{pmatrix} + \frac{1}{|a_2-a_4|^3}\begin{pmatrix}
2 & 0\\
0 & -1
\end{pmatrix}.$$
As in p. 263 of [@MeyerSchmidt2005JDE] and Section 11.2 of [@Long2012Notes], we define $$P = \begin{pmatrix} p_1\\ p_2\\ p_3\\ p_4 \end{pmatrix}, \qquad Q = \begin{pmatrix} q_1\\ q_2\\ q_3\\ q_4 \end{pmatrix}, \qquad Y = \begin{pmatrix} G\\ Z\\ W_3\\ W_4 \end{pmatrix}, \qquad X = \begin{pmatrix} g\\ z\\ w_3\\ w_4 \end{pmatrix},$$ where $p_i$, $q_i$, $i =1, 2, 3, 4$ and $G$, $Z$, $W_3$, $W_4$, $g$, $z$, $w_3$, $w_4$ are all column vectors in $\R^2$. We make the symplectic coordinate change $$P= A^{-T}Y, \qquad Q = AX,$$ where the matrix $A$ is constructed as in the proof of Proposition 2.1 in [@MeyerSchmidt2005JDE]. Concretely, the matrix $A \in GL(\R^8)$ is given by $$A = \begin{pmatrix}
I_2 & A_{12} & A_{13} & A_{14}\\
I_2 & A_{22} & A_{23} & A_{24}\\
I_2 & A_{32} & A_{33} & A_{34}\\
I_2 & A_{42} & A_{43} & A_{44}
\end{pmatrix}$$
, satisfying that A = A, A\^T MA = I. Note that (\[2.28\]) is equivalent to A\_[ij]{}J = J A\_[ij]{}, \_[i=1]{}\^[4]{} A\_[ij]{}\^TMA\_[ik]{} =\_[j]{}\^k I\_2. $A_{i2}$ is given by A\_[12]{} = J\_2, A\_[22]{} =I\_2, A\_[32]{}= -J\_2, A\_[42]{} = -I\_2. Readers may verify that $\sum_{i=1}^{4} m_i A_{i2} = 0$ and $\sum_{i=1}^{4} m_iA_{i2}^T A_{i2} = I_2$ hold. We define $A_{i3}$s by A\_[13]{} = A\_[33]{} = I\_2, A\_[23]{} = A\_[43]{} = I\_2. Readers may verify that $\sum_{i=1}^{4} m_i A_{i3} = 0$, $\sum_{i=1}^{4} m_iA_{i2}^T A_{i3} = 0$ and $\sum_{i=1}^{4} m_iA_{i3}^T A_{i3} = I_2$ hold. We define $A_{i4}$s by A\_[14]{} = -I\_2, A\_[24]{} = -J\_2, A\_[34]{} = I\_2, A\_[44]{} = J\_2. Readers may verify that $\sum_{i=1}^{4} m_i A_{i4} = 0$, $\sum_{i=1}^{4} m_iA_{i2}^T A_{i4} = 0$, $\sum_{i=1}^{4} m_iA_{i3}^T A_{i4} = 0$ and $\sum_{i=1}^{4} m_iA_{i4}^T A_{i4} = I_2$ hold. Above all, we have the matrix $A$ satisfying (\[2.29\]) which is A =
I & A\_[12]{} & A\_[13]{} & A\_[14]{}\
I & A\_[22]{} & A\_[23]{} & A\_[24]{}\
I & A\_[32]{} & A\_[33]{} & A\_[34]{}\
I & A\_[42]{} & A\_[43]{} & A\_[44]{}
=ł(
1 & 0 & 0 & - & - & 0 &-& 0\
0 & 1 & & 0 & 0 & - & 0 & -\
1 & 0 & & 0 & & 0 & 0 &\
0 & 1 & 0 & & 0 & & - &0\
1 & 0 & 0 & & - & 0 & & 0\
0 & 1 & - & 0 &0 & - &0&\
1 & 0 & - & 0 & & 0 &0& -\
0 & 1 &0 & - &0 & & & 0
).\
In the following discussion, we also need to name each column of $A$ by writing $A = (c_1, c_2, ..., c_8)$, where the $c_i$ are column vectors.
Under the change of (\[2.26\]), we have the kinetic energy $$K= \frac{1}{2}\big(|G|^2 + |Z|^2+ |W_3|^2+|W_4|^2\big),$$ and the potential function $U(z,w_3, w_4)$ obtained by expressing $U = \sum_{1\leq i< j\leq 4} \frac{m_i m_j}{|q_i-q_j|}$ in the new coordinates via $Q = AX$. Recall that each $Z, W_i , z, w_i$ with $i =3,4$ is a vector in $\R^2$. Here $z = z(t)$ is the Kepler elliptic orbit given through the true anomaly $\th = \th(t)$, $$r(\th(t)) = |z(t)| = \frac{p}{1+e\cos\th(t)}, \tag{2.38}$$ where $p = a(1-e^2)$ is the semi-latus rectum and $a> 0$ is the semi-major axis of the ellipse. We paraphrase the proposition of [@MeyerSchmidt2005JDE] (pp. 271-273) and Proposition 2.1 of [@ZhouLong2017CMDA] in the case of $n =4$.
[**Proposition 2.4.**]{} [*There exists a symplectic coordinate change = (Z, W\_3, W\_4, z, w\_3, w\_4)\^T | = (|[Z]{}, |[W]{}\_3, |[W]{}\_4, |[z]{}, |[w]{}\_3, |[w]{}\_4)\^T such that using the true anomaly $\th$ as the variable the resulting Hamiltonian function of the $n$-body problem is given by H(, |[Z]{}, |[W]{}\_3, |[W]{}\_4, |[z]{}, |[w]{}\_3, |[w]{}\_4) &=& ł(||[Z]{}|\^2 + \_[k=3]{}\^[4]{} ||[W]{}\_k|\^2) + (|[z]{}J\_2|[Z]{} + \_[k=3]{}\^[4]{} |[w]{}\_kJ\_2|[W]{}\_[k]{})\
&&+ł(||[z]{}|\^2 + \_[k=3]{}\^[4]{} ||[w]{}\_k|\^2) - U(|[z]{}, |[w]{}\_3, |[w]{}\_4), where $r(\th) = \frac{p}{1+e\cos \th}$, $\mu$ is given by (\[1.9\]), $\sg = (\mu p)^{1/4}$ and $p$ is given in (\[2.38\]).* ]{}
The proof of this proposition can be found in pp. 271-275 of [@MeyerSchmidt2005JDE] and pp.403-407 of [@ZhouLong2017CMDA]. We omit it here.
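The true-anomaly relation $r(\th) = p/(1+e\cos\th)$ with $p = a(1-e^2)$, used throughout the reduction, can be checked at the apsides. A small sketch (variable names are ours):

```python
import numpy as np

def kepler_radius(theta, p, e):
    """Conic radius r(theta) = p / (1 + e*cos(theta)) in the true anomaly."""
    return p / (1.0 + e * np.cos(theta))

a, e = 1.0, 0.3
p = a * (1.0 - e**2)  # semi-latus rectum
print(np.isclose(kepler_radius(0.0, p, e), a * (1.0 - e)))    # perihelion
print(np.isclose(kepler_radius(np.pi, p, e), a * (1.0 + e)))  # aphelion
```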
[**Proposition 2.5.**]{}
*Using the notations in (\[2.25\]), the elliptic rhombus solution $(P(t), Q(t))^T$ of the system (\[1.4\]) with $$Q(t) = (r(t)R(\th(t))a_1,\; r(t)R(\th(t))a_2,\; r(t)R(\th(t))a_3,\; r(t)R(\th(t))a_4)^T, \qquad P(t) = M\dot{Q}(t),$$ in time $t$, with the matrix $M = \diag\{m_1 I_2 , m_2 I_2 , m_3I_2, m_4I_2\}$, is transformed to the new solution $(Y(\th), X(\th))^T$ in the variable true anomaly $\th$ with $G = g = 0$, with respect to the original Hamiltonian function H of (\[2.40\]), which is given by Y() =*
|[Z]{}()\
|[W]{}\_1()\
|[W]{}\_2()
=
0\
\
0\
0\
0\
0
, X() =
|[z]{}()\
|[w]{}\_1()\
|[w]{}\_2()
=
\
0\
0\
0\
0\
0
, Moreover, the linearized Hamiltonian system at the elliptic rhombus solution \_0 = (Y(), X())\^T = (0, , 0, 0,0,0,,0,0,0,0, 0) \^[12]{}, depending on the true anomaly $\th$, with respect to the Hamiltonian function $H$ defined in (\[2.40\]), is given by
() = JB() \_0() with $B(\th)$ is given by
B() &=& H”(, |[Z]{}, |[W]{}\_3, |[W]{}\_[4]{}, |[z]{}, |[w]{}\_3, |[w]{}\_[4]{})|\_[| = \_0]{}\
&=&
I\_2 & O & O && -J & O & O\
O & I\_2 & O && O & -J & O\
O& O & I\_2 && O & O &-J\
J & O & O && H\_[zz]{}(, \_0) & O& O\
O & J &O &&O & H\_[w\_3w\_3]{}(, \_0) & O\
O & O & J && O & O & H\_[w\_4w\_4]{}(, \_0)\
and $H_{zz} (\th, \xi_0)$ is given by H\_[zz]{} (, \_0)=
- & 0\
0 & 1
, $H_{w_3w_3}(\th, \xi_0)$ is given by H\_[w\_3w\_3]{}(, \_0)=ł(1- )I -
2-u\^2 & 0\
0 & 2u\^2-1
, $H_{w_4w_4}(\th, \xi_0)$ is given by &&H\_[w\_4w\_4]{}(, \_0) = (1- )I -\
&&ł(
2m\^2u\^4+(6m-m\^2-1)u\^2+2 & 0\
0 & -m\^2u\^4+(2m\^2-6m+2)u\^2-1
.\
&&ł.+ ł(+ )
-1 & 0\
0 & 2
), where $H''$ is the Hessian matrix of $H$ with respect to its variables $\bar{Z}$, $\bar{W}_3$, $\bar{W}_4$, $\bar{z}$, $\bar{w}_3$ and $\bar{w}_4$. The corresponding quadratic Hamiltonian function is given by
H\_2(, |[Z]{},|[W]{}\_3,|[W]{}\_4, |[z]{}, |[w]{}\_3,|[w]{}\_4) &=& ||[Z]{}|\^2 + |[Z]{}J |[z]{} + H\_[|[z]{}|[z]{}]{}(,\_0)|[z]{} |[z]{}\
&&+\_[i = 3]{}\^4(||[W]{}\_i|\^2 + |[W]{}\_iJ |[w]{}\_i + H\_[|[w]{}\_i|[w]{}\_i]{}(,\_0)|[w]{}\_i |[w]{}\_i).
[**Proof.** ]{} The proof is similar to those of Proposition 11.11 and Proposition 11.13 of [@Long2012Notes]. Readers may also refer to a similar proof on pp. 404-407 of [@ZhouLong2017CMDA]. We focus only on $H_{\bar{z}\bar{z}}(\th, \xi_0)$, $H_{\bar{z}\bar{w}_3}(\th, \xi_0)$, $H_{\bar{z}\bar{w}_4}(\th, \xi_0)$, $H_{\bar{w}_3\bar{w}_3}(\th, \xi_0)$, $H_{\bar{w}_3\bar{w}_4}(\th, \xi_0)$ and $H_{\bar{w}_4\bar{w}_4}(\th, \xi_0)$.
For simplicity, we omit all the upper bars on the variables of $H$ in (\[2.40\]) in this proof. Note that we have transformed $(x_1, x_2, x_3, x_4)$ to $(g, z, w_3, w_4)$ by $Q= AX$. Under this transformation, the linearized system is given by
H\_[zz]{} = I - U\_[zz]{}(z, w\_3, w\_4), &\
H\_[zw\_l]{} = H\_[w\_lz]{}= -U\_[zw\_l]{}(z, w\_3, w\_4), & l =3,4;\
H\_[w\_lw\_l]{} = I -U\_[w\_lw\_l]{}(z, w\_3, w\_4), & l=3,4;\
H\_[w\_lw\_s]{} = H\_[w\_s w\_l]{} = -U\_[w\_lw\_s]{}(z, w\_3, w\_4), & l,s=3,4, ls.
Then we have B() &=& H”(, |[Z]{}, |[W]{}\_3, |[W]{}\_[4]{}, |[z]{}, |[w]{}\_3, |[w]{}\_[4]{})|\_[| = \_0]{}\
&=&
I\_2 & O & O && -J & O & O\
O & I\_2 & 0 && O & -J & O\
O & O & I\_2 && O & O & -J\
J & O & O && H\_[zz]{}(, \_0) & H\_[zw\_3]{}(, \_0) & H\_[zw\_[4]{}]{}(, \_0)\
O & J & O && H\_[zw\_3]{}(, \_0) & H\_[w\_3w\_3]{}(, \_0) & H\_[w\_[3]{}w\_[4]{}]{}(, \_0)\
O & O & J && H\_[w\_[4]{}z]{}(, \_0) & H\_[w\_[4]{}w\_3]{}(, \_0) & H\_[w\_[4]{}w\_[4]{}]{}(, \_0)
.
We define $\Phi_{ij} $ and $\Psi_{ij}(k)$ by \_[ij]{} &=& A\_[i2]{} - A\_[j2]{} =(a\_i-a\_j, J(a\_i-a\_j));\
\_[ij]{}(k) &=& A\_[ik]{}-A\_[jk]{} = (a\_[ik]{}-a\_[jk]{}, J(a\_[ik]{}-a\_[jk]{})), where $a_i = (a_{i1}, a_{i2})$ and A\_[ij]{} =
a\_[ij,1]{} & -a\_[ij,2]{}\
a\_[ij,2]{} & a\_[ij,1]{}
. Then the potential $U(x)$ can be written as U(z,x) = \_[1i < j 4]{} .
Note that $|\Phi_{ij} z|= |a_i-a_j| |z|$ and define .K\_[ij]{}|\_[\_0]{} &=& {- I}. Therefore, $K_{ij} = K_{ji}$.
By the definition of $\xi_0$ in (\[2.43\]), (\[2.54\]) and (\[2.55\]), .|\_[\_0]{} &=& \_[1i < j 4]{} {3\^T\_[ij]{}(\_[ij]{}z )|\_[ij]{}z |(z\^T\^T\_[ij]{})\_[ij]{}-\_[ij]{}\^T\_[ij]{}|\_[ij]{}z|\^3}\
&=& \_[1i < j 4]{} (A\_[i2]{}-A\_[j2]{})\^TK\_[ij]{} (A\_[i2]{}-A\_[j2]{})\
&=&\_[1i < j 4]{}
2 & 0\
0 & -1
\
&=&
2 & 0\
0 & -1
,
By the definition of $\xi_0$ in (\[2.43\]), (\[2.54\]) and (\[2.55\]), &&.|\_[\_0]{} =\_[1i < j 4]{}m\_i m\_j {+}\
&=&\_[1i < j 4]{} \^T\_[ij]{}(l) {- I}\_[ij]{}(s)\
&=&\_[1i < j 4]{} (A\_[il]{} - A\_[jl]{})\^T K\_[ij]{}(A\_[is]{} - A\_[js]{})\
&=&ł(\_[1i < j 4]{}A\_[il]{}\^T K\_[ij]{}(A\_[is]{} - A\_[js]{})- \_[1i < j 4]{} A\_[jl]{}\^T K\_[ij]{}(A\_[is]{} - A\_[js]{}))\
&=&ł(\_[1i < j 4]{}A\_[il]{}\^T K\_[ij]{}(A\_[is]{} - A\_[js]{})+ \_[1j < i 4]{} A\_[il]{}\^T K\_[ji]{}(A\_[is]{} - A\_[js]{}))\
&=&ł(\_[i=1]{}\^[4]{}A\_[il]{}\^T \_[j=1,ji]{}\^[4]{} -B\_[ij]{}(A\_[is]{} - A\_[js]{}))\
&=&\_[i=1]{}\^[4]{}\_[j=1]{}\^[4]{}A\_[il]{}\^TB\_[ij]{}A\_[js]{}. This yields that .|\_[\_0]{} &=& \_[i=1]{}\^[4]{}\_[j=1]{}\^[4]{}A\_[is]{}\^TB\_[ij]{}A\_[js]{}.
For the case of $s= 3$, since (\[2.19\]-\[2.24\]) and (\[2.31\]), we have that A\^T\_[13]{}B\_[11]{}A\_[13]{}=A\^T\_[33]{}B\_[33]{}A\_[33]{},&& A\^T\_[23]{}B\_[22]{}A\_[23]{}=A\^T\_[43]{}B\_[44]{}A\_[43]{},\
A\^T\_[33]{}B\_[31]{}A\_[13]{}=A\^T\_[13]{}B\_[13]{}A\_[33]{},&& A\^T\_[23]{}B\_[24]{}A\_[43]{}=A\^T\_[43]{}B\_[42]{}A\_[23]{},\
A\^T\_[23]{}B\_[21]{}A\_[13]{}=A\^T\_[43]{}B\_[43]{}A\_[33]{},&& A\^T\_[13]{}B\_[12]{}A\_[23]{}=A\^T\_[33]{}B\_[34]{}A\_[43]{},\
A\^T\_[43]{}B\_[41]{}A\_[13]{}=A\^T\_[23]{}B\_[23]{}A\_[33]{},&& A\^T\_[33]{}B\_[32]{}A\_[23]{}=A\^T\_[13]{}B\_[14]{}A\_[43]{}.
By direct computations, we have following equations hold. A\^T\_[13]{}B\_[11]{}A\_[13]{} = ł(
2- u\^2 & 0\
0 & 2u\^2-1
)+
-1 & 0\
0 & 2
,
A\^T\_[23]{}B\_[22]{}A\_[23]{}= ł(
2-u\^2 & 0\
0 & 2u\^2-1
) + ł(
2 & 0\
0 & -1
),
A\^T\_[33]{}B\_[31]{}A\_[13]{}=
1 & 0\
0 & -2
, A\^T\_[23]{}B\_[24]{}A\_[43]{}=
-2 & 0\
0 & 1
,
A\^T\_[13]{}B\_[12]{}A\_[23]{} = A\^T\_[23]{}B\_[21]{}A\_[13]{}=
u\^2 -2 & 3u\
3u & 1-2u\^2
,
A\^T\_[33]{}B\_[32]{}A\_[23]{} =A\^T\_[43]{}B\_[41]{}A\_[13]{}=
u\^2 -2 & -3u\
-3u & 1-2u\^2
. Then we have that .|\_[\_0]{} &=& 2( A\^T\_[13]{}B\_[11]{}A\_[13]{}+A\^T\_[23]{}B\_[22]{}A\_[23]{}+A\^T\_[33]{}B\_[31]{}A\_[13]{}+A\^T\_[23]{}B\_[24]{}A\_[43]{}\
&&+A\^T\_[23]{}B\_[21]{}A\_[13]{}+A\^T\_[13]{}B\_[12]{}A\_[23]{}+A\^T\_[43]{}B\_[41]{}A\_[13]{}+A\^T\_[33]{}B\_[32]{}A\_[23]{})\
&=&
2-u\^2 & 0\
0 & 2u\^2-1
.
Next, we consider $\frac{\partial^2 U}{\partial w_{3}\partial w_{4}}$ which satisfies .|\_[\_0]{} = \_[i=1]{}\^[4]{}\_[j=1]{}\^[4]{}A\_[i4]{}\^TB\_[ij]{}A\_[j3]{}. Since (\[2.19\]-\[2.24\]) and (\[2.32\]), we have that A\^T\_[14]{}B\_[11]{}A\_[13]{}=-A\^T\_[34]{}B\_[33]{}A\_[33]{}, && A\^T\_[24]{}B\_[22]{}A\_[23]{}=-A\^T\_[44]{}B\_[44]{}A\_[43]{},\
A\^T\_[34]{}B\_[31]{}A\_[13]{}=-A\^T\_[14]{}B\_[13]{}A\_[33]{}, && A\^T\_[24]{}B\_[24]{}A\_[43]{}=-A\^T\_[44]{}B\_[42]{}A\_[23]{},\
A\^T\_[24]{}B\_[21]{}A\_[13]{}=-A\^T\_[44]{}B\_[43]{}A\_[33]{}, && A\^T\_[14]{}B\_[12]{}A\_[23]{}=-A\^T\_[34]{}B\_[34]{}A\_[43]{},\
A\^T\_[44]{}B\_[41]{}A\_[13]{}=-A\^T\_[24]{}B\_[23]{}A\_[33]{}, && A\^T\_[34]{}B\_[32]{}A\_[23]{}=-A\^T\_[14]{}B\_[14]{}A\_[43]{}. We can rearrange the order of $\sum_{i=1}^{4}\sum_{j=1}^{4}A_{i4}^TB_{ij}A_{j3}$ and obtain that \^3.|\_[\_0]{} &=& ( A\_[14]{}\^TB\_[11]{}A\_[13]{} +A\_[34]{}\^TB\_[33]{}A\_[33]{}) +( A\_[24]{}\^TB\_[22]{}A\_[23]{} +A\_[44]{}\^TB\_[44]{}A\_[43]{})\
&&+( A\_[14]{}\^TB\_[13]{}A\_[33]{} +A\_[34]{}\^TB\_[31]{}A\_[13]{}) +( A\_[24]{}\^TB\_[24]{}A\_[43]{} +A\_[44]{}\^TB\_[42]{}A\_[23]{})\
&&+( A\_[24]{}\^TB\_[21]{}A\_[13]{}+A\_[44]{}\^TB\_[43]{}A\_[33]{}) +( A\_[14]{}\^TB\_[12]{}A\_[23]{} +A\_[34]{}\^TB\_[34]{}A\_[43]{})\
&&+( A\_[44]{}\^TB\_[41]{}A\_[13]{} +A\_[24]{}\^TB\_[23]{}A\_[33]{}) +( A\_[34]{}\^TB\_[32]{}A\_[23]{} +A\_[14]{}\^TB\_[14]{}A\_[43]{})\
&=& 0, where the last equality holds because every bracket is zero by (\[2.86\]-\[2.89\]).
Next, we consider $\left.\frac{\partial^2 U}{\partial w_{4}^2}\right|_{\xi_0}$ which satisfies .|\_[\_0]{} = \_[i=1]{}\^[4]{}\_[j=1]{}\^[4]{}A\_[i4]{}\^TB\_[ij]{}A\_[j4]{}. Since (\[2.19\]-\[2.24\]) and (\[2.31\]-\[2.32\]), we have that A\^T\_[14]{}B\_[11]{}A\_[14]{}=A\^T\_[34]{}B\_[33]{}A\_[34]{}, && A\^T\_[24]{}B\_[22]{}A\_[24]{}=A\^T\_[44]{}B\_[44]{}A\_[44]{},\
A\^T\_[34]{}B\_[31]{}A\_[14]{}=A\^T\_[14]{}B\_[13]{}A\_[34]{}, && A\^T\_[24]{}B\_[24]{}A\_[44]{}=A\^T\_[44]{}B\_[42]{}A\_[24]{},\
A\^T\_[24]{}B\_[21]{}A\_[14]{}=A\^T\_[44]{}B\_[43]{}A\_[34]{}, && A\^T\_[14]{}B\_[12]{}A\_[24]{}=A\^T\_[34]{}B\_[34]{}A\_[44]{},\
A\^T\_[44]{}B\_[41]{}A\_[14]{}=A\^T\_[24]{}B\_[23]{}A\_[34]{}, && A\^T\_[34]{}B\_[32]{}A\_[24]{}=A\^T\_[14]{}B\_[14]{}A\_[44]{}. Then we only need to calculate the left-hand side of each of the equations (\[2.94\]-\[2.97\]). A\^T\_[14]{}B\_[11]{}A\_[14]{} =
2- u\^2 & 0\
0 & 2u\^2-1
+
-1 & 0\
0 & 2
, A\^T\_[24]{}B\_[22]{}A\_[24]{} =
2u\^2-1& 0\
0 & 2-u\^2
+
-1 & 0\
0 & 2
, A\^T\_[34]{}B\_[31]{}A\_[14]{}=
-1 & 0\
0 & 2
, A\^T\_[24]{}B\_[24]{}A\_[44]{} =
-1 & 0\
0 & 2
, A\^T\_[24]{}B\_[21]{}A\_[14]{}=
3u & 1-2u\^2\
2-u\^2 & -3u
, A\^T\_[14]{}B\_[12]{}A\_[24]{}=
3u & 2-u\^2\
1-2u\^2 & -3u
, A\^T\_[44]{}B\_[41]{}A\_[14]{}=
3u & 2u\^2-1\
u\^2-2 & -3u
, A\^T\_[34]{}B\_[32]{}A\_[24]{}=
3u &u\^2 -2\
2u\^2-1 & -3u
. By (\[2.98\]-\[2.105\]), we have that &&.|\_[\_0]{} = \_[i=1]{}\^[4]{}\_[j=1]{}\^[4]{}A\_[i4]{}\^TB\_[ij]{}A\_[j4]{}\
&=& ł(
2m\^2u\^4+(6m-m\^2-1)u\^2+2 & 0\
0 & -m\^2u\^4+(2m\^2-6m+2)u\^2-1
.\
&&ł.+ ł(+ )
-1& 0\
0 & 2
).
Then, $\left.\frac{\partial^2 U}{\partial z \partial w_s}\right|_{\xi_0}$ is obtained by following computations. &&.|\_[\_0]{}= \_[1i < j n]{} {-\_[ij]{}\^T\_[ij]{}(s)|\_[ij]{}z|\^3+3\^T\_[ij]{}(\_[ij]{}z )|\_[ij]{}z |(z\^T\^T\_[ij]{})\_[ij]{}(s)}\
&=& \_[1i < j 4]{} (A\_[i2]{}-A\_[j2]{})\^TK\_[ij]{} (A\_[is]{}-A\_[js]{})\
&=&ł(\_[1i < j 4]{}(A\_[i2]{}-A\_[j2]{})\^T K\_[ij]{}A\_[is]{}+ \_[1j < i 4]{} (A\_[i2]{}-A\_[j2]{})\^T K\_[ij]{} A\_[is]{})\
&=&ł(\_[i=1]{}\^[4]{}\_[j=1, ji]{}\^[4]{})\
&=&ł(\_[i=1]{}\^[4]{}\_[j=1, ji]{}\^[4]{}
2(a\_i-a\_j)\^T\
(a\_i-a\_j)\^TJ
A\_[is]{})\
&=&
2 ł<c\_3, c\_[2s-1]{}>\_M & 2 ł<c\_3,c\_[2s]{}>\_M\
-ł<c\_4, c\_[2s-1]{}>\_M & -ł<c\_4, c\_[2s]{}>\_M
, where the last equality holds because $a$ is the central configuration and satisfies the following equation m\_i a\_i + \_[j=1, ji]{}\^[4]{}(a\_j-a\_i) =0, and $c_i$ is the i-th column of $A$. By (\[2.28\]), $\l<c_i, c_j\r>_M = 0$ for $i\neq j$. Therefore, we have that |\_[\_0]{} = 0.
We have now derived the linearized Hamiltonian system at the elliptic rhombus solution. By (\[2.90\]) and (\[2.110\]), H\_[zw\_3]{} &=& H\_[w\_3 z]{}= -U\_[zw\_3]{}(z, w\_3, w\_4)= O\_[22]{},\
H\_[zw\_4]{} &=& H\_[w\_4 z]{}= -U\_[zw\_4]{}(z, w\_3, w\_4) = O\_[22]{},\
H\_[w\_3w\_4]{} &=& H\_[w\_4 w\_3]{} = -U\_[w\_3w\_4]{}(z, w\_3, w\_4) = O\_[22]{}. Since $\sg^4 = \mu p$ and $r = \frac{p}{1+e\cos\th}$, we have that $H_{zz}(\th, \xi_0)$ is given by H\_[zz]{}(, \_0)= I - U\_[zz]{}(z, w\_3,w\_[4]{})=
- & 0\
0 & 1
, $ H_{w_3w_3} (\th, \xi_0)$ is given by H\_[w\_3w\_3]{} (, \_0) &=& I -U\_[w\_3w\_3]{}(z, w\_3, w\_4)\
&=&ł(1- )I -
2-u\^2 & 0\
0 & 2u\^2-1
, and $H_{w_4w_4} (\th, \xi_0) $ is given by &&H\_[w\_4w\_4]{} (, \_0)= I -U\_[w\_4w\_4]{}(z, w\_3, w\_4)\
&=& ł(1- )I -\
&&ł(
2m\^2u\^4+(6m-m\^2-1)u\^2+2 & 0\
0 & -m\^2u\^4+(2m\^2-6m+2)u\^2-1
.\
&&ł.+ ł(+ )
-1 & 0\
0 & 2
). This completes the proof.
Then the Hamiltonian system (\[2.40\]) can be decomposed into three independent Hamiltonian systems. The first one is the Kepler $2$-body problem at the corresponding Kepler orbit, which is given by \_1 ’ = JB\_0\_1=J
1 & 0 & 0 &1\
0 & 1 & -1 & 0\
0 & -1 &- & 0\
1 & 0 & 0 & 1
\_[1]{}. According to Proposition 3.6 of [@HuSun2010], p. 1012 of [@HuLongSun2014] and (3.4)-(3.5) of [@ZhouLong2017ARMA], we have that i\_(\_[1]{})=
0, & =1,\
2, & {1},
\_(\_[1]{})=
3, & =1,\
0, & {1}.
In the following, we only need to discuss the linear stability of the remaining two linearized Hamiltonian systems ’\_[u,e]{} &=& JB\_1 \_[u,e]{} =J
I & -J\
J & H\_[w\_3w\_3]{}(u,e)
\_[u,e]{},\
’\_[u,e]{} &=& JB\_2\_[u,e]{} =J
I & -J\
J & H\_[w\_4w\_4]{}(u, e)
\_[u,e]{}, where $(u,e)\in (1/\sqrt{3}, \sqrt{3})\times [0, 1)$.
To simplify notation in the following discussion, for $u\in (1/\sqrt{3},\sqrt{3})$ we define \_1(u) &=& 1 + ,\
\_2(u) &=& 1 + ,\
\_1(u) &=& 1 +ł(-- ),\
\_2(u) &=& 1 + ł(++). In the following discussion, we will write $\vf_i$ and $\psi_i$ instead of $\vf_i(u)$ and $\psi_i(u)$ when this causes no confusion. Note that $\vf_i$ and $\psi_i$ are smooth functions of $u$ on the interval $1/\sqrt{3} <u < \sqrt{3}$ because $m$, $\mu$ and $\aa$ are smooth functions of $u$ on that interval. Furthermore, $\vf_i$ and $\psi_i$, for $i=1,2$, all converge as $u$ tends to $1/\sqrt{3}$ and to $\sqrt{3}$: \_[u ]{}\_1(u) = \_[u 1/]{}\_2(u) = \_[u ]{}\_1(u) = \_[u 1/]{}\_1(u) =,\
\_[u 1/]{}\_1(u) = \_[u ]{}\_2(u) = \_[u ]{}\_2(u) = \_[u 1/]{}\_2(u) =. Then we extend the domain of $u$ to $[1/\sqrt{3}, \sqrt{3}]$. By direct computations, we have that for $1/\sqrt{3} \leq u \leq \sqrt{3} $, \_1 (u)= \_2(1/u), \_1 (u)= \_1(1/u), \_2 (u)= \_2(1/u). We define $K_{u,e}(t)$ and $T_{u,e}(t)$ by K\_[u,e]{}(t) & &
\_1 & 0\
0 & \_2
,\
T\_[u,e]{}(t) &&
\_1 & 0\
0 & \_2
. Therefore, $H_{w_3w_3}(t)$ and $H_{w_4w_4}(t)$ can be respectively written as H\_[w\_3w\_3]{}(t)&=&I -K\_[u,e]{}(t)=I -
\_1 & 0\
0 & \_2
,\
H\_[w\_4w\_4]{}(t) &=&I - T\_[u,e]{}(t)=I -
\_1 & 0\
0 & \_2
.
[**Proposition 2.6.**]{} [ *For any given $(u,e ) \in [1/\sqrt{3}, \sqrt{3}]\times [0,1)$, the $\om$-Maslov-type indices and nullities of $\ga_{u,e}(t)$ and $\eta_{u,e}(t)$ satisfy, for any $\om \in \U $,*]{} i\_(\_[u,e]{})= i\_(\_[1/u,e]{}), i\_(\_[u,e]{}) = i\_(\_[1/u,e]{}),\
\_(\_[u,e]{}) = \_(\_[1/u,e]{}), \_(\_[u,e]{}) = \_(\_[1/u,e]{}).
[**Proof.**]{} Note that $J_4^{-1} B_1(u,e) J_4$ with $J_4 = \diag(J_2, J_2)$ satisfies J\_4\^[-1]{} B\_1(u,e) J\_4 &=& B\_1(1/u,e), where the equality holds because of $\vf_1 (u)= \vf_2(1/u)$. Next we consider the following system \_[1/u,e]{}(t) = JB\_1(1/u,e) \_[1/u,e]{}(t) = JJ\_4\^[-1]{} B\_1(u,e) J\_4 \_[1/u,e]{}(t)=J\_4\^[-1]{} J B\_1(u,e) J\_4 \_[1/u,e]{}(t) , where the third equality holds because of $J_4^{-1} J = JJ_4^{-1}$. Therefore, the fundamental solutions $\ga_{1/u,e}(t)$ and $\ga_{u,e}(t)$ satisfy \_[1/u,e]{}(t) = J\_4\^[-1]{} \_[u,e]{}(t)J\_4. Then we have that for any $\om \in \U$ and $(u,e)\in [1/\sqrt{3},\sqrt{3}]\times [0,1)$, i\_(\_[u,e]{})= i\_(\_[1/u,e]{}), \_(\_[u,e]{}) = \_(\_[1/u,e]{}).
Note that $\psi_1 (u)= \psi_1(1/u)$ and $\psi_2 (u)= \psi_2(1/u)$. We have that $T_{u,e}(t) = T_{1/u, e}(t)$, and then $\eta_{u,e}(t) =\eta_{1/u,e}(t)$. Therefore, we have that i\_(\_[u,e]{}) = i\_(\_[1/u,e]{}), \_(\_[u,e]{}) = \_(\_[1/u,e]{}).
Therefore, this proposition holds.
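The conjugation argument above can be illustrated numerically. The following Python sketch (an illustration only, with a constant symmetric $B$ standing in for the time-dependent $B_1(u,e)$) checks that $J_4 = \diag(J_2,J_2)$ commutes with the standard $J$, and that conjugate linear Hamiltonian systems have conjugate fundamental solutions and hence the same spectrum:

```python
import numpy as np
from scipy.linalg import expm

# Standard symplectic J on R^4 and the constant symplectic matrix
# J4 = diag(J2, J2) used in the proof of Proposition 2.6.
J2 = np.array([[0.0, -1.0], [1.0, 0.0]])
Z2 = np.zeros((2, 2))
J = np.block([[Z2, -np.eye(2)], [np.eye(2), Z2]])
J4 = np.block([[J2, Z2], [Z2, J2]])

# J4 commutes with J -- the identity J4^{-1} J = J J4^{-1} used above.
assert np.allclose(J4 @ J, J @ J4)

# Toy autonomous system x' = J B x with symmetric B, whose fundamental
# solution is gamma(t) = expm(t J B).  (B is a stand-in for B_1(u,e).)
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
B = (M + M.T) / 2

t = 2.0 * np.pi
gamma = expm(t * (J @ B))
gamma_conj = expm(t * (J @ np.linalg.inv(J4) @ B @ J4))

# Conjugate systems have conjugate fundamental solutions ...
assert np.allclose(gamma_conj, np.linalg.inv(J4) @ gamma @ J4)

# ... hence the same spectrum, and therefore the same linear stability.
key = lambda z: (round(z.real, 8), round(z.imag, 8))
ev1 = sorted(np.linalg.eigvals(gamma), key=key)
ev2 = sorted(np.linalg.eigvals(gamma_conj), key=key)
assert np.allclose(ev1, ev2)
print("conjugate monodromy matrices share the same spectrum")
```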
A modification of the path $\ga_{u,e}(t)$
-----------------------------------------
According to the discussion in [@HuLongSun2014], we can transform the linearized system into a simpler linear operator corresponding to a second order Hamiltonian system with the same linear stability as $\ga_{u,e}(2\pi)$ and $\eta_{u,e}(2\pi)$. Using $R(t)$ and $R_4 = \diag (R(t),R(t))$ as in [@HuLongSun2014], we let \_[u,e]{}(t) = R\_4(t) \_[u,e]{}(t), , (u,e) \[0,1), and \_[u,e]{}(t) = R\_4(t) \_[u,e]{}(t), , (u,e) \[0,1). One can show by direct computations that \_[u,e]{}(t) &=& J ł(
I\_2 &0\
0 & R(t) (I\_2 - K\_[u,e]{}(t))R(t)\^T
)\_[u,e]{}(t),\
\_[u,e]{}(t) &=& J ł(
I\_2 &0\
0 & R(t) (I\_2 - T\_[u,e]{}(t))R(t)\^T
)\_[u,e]{}(t), where $K_{u,e}(t)$ is given by (\[2.132\]) and $T_{u,e}(t)$ is given by (\[2.133\]). Note that $R_4 (0) = R_4 (2\pi) = I_4$, so $\ga_{u,e} (2\pi) = \xi_{u,e} (2\pi)$ and $\eta_{u,e} (2\pi) = \zeta_{u,e} (2\pi)$ hold. Then the linear stabilities of the systems are determined by the same matrix and thus are precisely the same.
By (\[2.148\]) and (\[2.149\]) the symplectic paths $\ga_{u,e}$ and $\xi_{u,e}$ are homotopic to each other via the homotopy $h(s,t) =R_4 (st)\ga_{u,e} (t)$ for $(s,t) \in [0,1] \times [0,2\pi]$. Because $R_4 (s)\ga_{u,e} (2\pi)$ for $s \in [0,1] $ is a loop in $\Sp(4)$ which is homotopic to the constant loop $\ga_{u,e} (2\pi)$, $h(\cdot,2\pi)$ is contractible in $\Sp(4)$. Therefore by the proof of Lemma 5.2.2 on p.117 of [@Long2012Notes], the homotopy between $\ga_{u,e}$ and $\xi_{u,e}$ can be modified to fix the end point $\ga_{u,e} (2\pi)$ for all $s \in [0,1]$. Thus by the homotopy invariance of the Maslov-type index (cf. (i) of Theorem 6.2.7 on p.147 of [@Long2012Notes]), we obtain that for $(u,e)\in [1/\sqrt{3},\sqrt{3}]\times [0,1)$, i\_(\_[u,e]{}) = i\_(\_[u,e]{}) , \_(\_[u,e]{}) = \_(\_[u,e]{}), . Similarly, we have that for $(u,e)\in [1/\sqrt{3},\sqrt{3}]\times [0,1)$, i\_(\_[u,e]{}) = i\_(\_[u,e]{}) , \_(\_[u,e]{}) = \_(\_[u,e]{}), .
Note that the first order linear Hamiltonian systems (\[2.150\]) and (\[2.151\]) correspond to the following second order linear Hamiltonian systems respectively (t) = -x(t) + R(t)K\_[u,e]{}(t)R(t)\^Tx(t), and (t) = -x(t) + R(t)T\_[u,e]{}(t)R(t)\^Tx(t). For $(u,e) \in[1/\sqrt{3},\sqrt{3}] \times [0,1)$, the second order differential operators defined on the domain $D(\om,2\pi)$ corresponding to (\[2.154\]) and (\[2.155\]) are given by (u,e) = -I\_2 -I\_2 +R(t)K\_[u,e]{}(t)R(t)\^T, and (u,e) = -I\_2 -I\_2 +R(t)T\_[u,e]{}(t)R(t)\^T, where $K_{u,e}(t)$ and $T_{u,e}(t)$ are defined by (\[2.132\]-\[2.133\]) and $D(\om, 2\pi)$ is given by (\[2.10\]). By direct computations, we have that (u,e) &=& -I\_2 -I\_2 + ((\_1+\_2)I\_2 + (\_1-\_2)S(t)),\
(u,e) &=& -I\_2 -I\_2 + ((\_1+\_2)I\_2 + (\_1-\_2)S(t)), where $S(t) = (\begin{smallmatrix}
\cos 2t & \sin 2t\\
\sin 2t & -\cos 2t
\end{smallmatrix})$. In [@HuLongSun2014], the authors defined an operator $A(\bb,e)$ given by A(,e)= -I\_2 -I\_2 + (3I\_2 + S(t)). We will use this operator $A(\bb,e)$ in Section 3 and Section 4.
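The terms $(\vf_1+\vf_2)I_2 + (\vf_1-\vf_2)S(t)$ above arise from conjugating $\diag(\vf_1,\vf_2)$ by a rotation. Assuming $R(t)$ is the standard planar rotation matrix (as in [@HuLongSun2014]), the identity $R(t)\diag(a,b)R(t)^T = \frac{1}{2}\l((a+b)I_2 + (a-b)S(t)\r)$ can be checked numerically:

```python
import numpy as np

def R(t):
    # standard planar rotation matrix (assumed form of R(t))
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def S(t):
    # the reflection-type matrix appearing in A(beta, e)
    return np.array([[np.cos(2*t), np.sin(2*t)], [np.sin(2*t), -np.cos(2*t)]])

rng = np.random.default_rng(1)
a, b = rng.standard_normal(2)
for t in np.linspace(0.0, 2*np.pi, 25):
    lhs = R(t) @ np.diag([a, b]) @ R(t).T
    rhs = 0.5 * ((a + b) * np.eye(2) + (a - b) * S(t))
    assert np.allclose(lhs, rhs)
    # S(t) is a reflection: S(t)^2 = I_2, eigenvalues +1 and -1
    assert np.allclose(S(t) @ S(t), np.eye(2))
print("R(t) diag(a,b) R(t)^T = ((a+b) I_2 + (a-b) S(t)) / 2 verified")
```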
The operators $\cA(u,e) $ and $\B(u,e)$ are both self-adjoint and depend on the parameters $u$ and $e$. By p. 172 of [@Long2002BookIndexTheory], for any $(u,e)\in[1/\sqrt{3},\sqrt{3}] \times [0,1)$, the Morse indices $\phi_{\omega} (\cA(u,e))$ and $\phi_{\omega} (\B(u,e))$ and the nullities $\nu_{\om} (\cA(u,e))$ and $\nu_{\om} (\B(u,e))$ on the domain $D(\om,2\pi)$ satisfy \_ (\_[u,e]{}) = i\_(\_[u,e]{}), \_ (\_[u,e]{}) = \_(\_[u,e]{}), , and \_ (\_[u,e]{}) = i\_(\_[u,e]{}), \_ (\_[u,e]{}) = \_(\_[u,e]{}), . In the rest of this paper, we shall use both of the paths $\ga_{u,e}$ and $\xi_{u,e}$ to study the linear stability of $\ga_{u,e}(2\pi) = \xi_{u,e}(2\pi)$ and use both of the paths $\zeta_{u,e}$ and $\eta_{u,e}$ to study the linear stability of $\zeta_{u,e}(2\pi) = \eta_{u,e}(2\pi)$. Because of (\[2.152\]) and (\[2.153\]), in many cases and proofs below, we shall not distinguish these two paths.
Stability on the Three Boundary Segments of the Rectangle $[1/\sqrt{3},\sqrt{3}] \times [0,1)$
==============================================================================================
The boundary segment $[1/\sqrt{3},\sqrt{3}]\times \{0\}$
--------------------------------------------------------
If $e = 0$, i.e., the orbits of the four bodies are circles, then $ H_{w_3w_3}(t)$ and $ H_{w_4w_4}(t)$ are given by H\_[w\_3w\_3]{}(t)&=&I - K\_[u,e]{}(t)=I -
\_1 & 0\
0 & \_2
,\
H\_[w\_4w\_4]{}(t) &=& I - T\_[u,e]{}(t)=I -
\_1 & 0\
0 & \_2
, where $\vf_i$s and $\psi_i$s are given by (\[2.123\]-\[2.126\]). The system of $\ga_{u,0}$ is given by \_[u,0]{} ’ = JB\_2 \_[u,0]{}=J
1 & 0 & 0 &1\
0 & 1 & -1 & 0\
0 & -1 & 1-\_1 & 0\
1 & 0 & 0 & 1-\_2
\_[u,0]{}, and the system of $\eta_{u,0}$ is given by \_[u,0]{}’ = JB\_3\_[u,0]{} =J
1 & 0 & 0 &1\
0 & 1 & -1 & 0\
0 & -1 & 1-\_1 & 0\
1 & 0 & 0& 1-\_2
\_[u,0]{}.
[**Theorem 3.1.**]{} [*For any given $(u,e)\in [1/\sqrt{3},\sqrt{3}]\times \{0\}$ and $\om \in \U$, the eigenvalues of the matrices $\ga_{u,0}(2\pi)$ and $\eta_{u,0}(2\pi)$ are all hyperbolic, i.e., none of the eigenvalues are on $\U$, and i\_ (\_[u,0]{}) = \_ ((u,0)) = 0, && \_ (\_[u, 0]{}) = \_ ((u,0) )=0,\
i\_ (\_[u,0]{}) = \_ ((u,0)) =0, && \_ (\_[u, 0]{}) = \_ ((u,0) )= 0. Therefore, the operators $\cA(u, 0)$ and $\B(u, 0)$ are positive definite on the space $\bar{D}(\om, 2\pi)$ with zero nullity.* ]{}
[**Proof.**]{} The characteristic polynomial $\det(JB_2-\lm I)$ of $JB_2$ is given by p\_2() = \^4+(4-\_1-\_2)\^2+\_1\_2. The roots of $p_2(\lm)$ are all purely imaginary if and only if &&4-\_1-\_2 > 0,\
&&\_1\_2> 0,\
&& (4-\_1-\_2 )\^2-4\_1\_2 0, hold at the same time. Note that 4-\_1-\_2 &=& 2- (u\^2+1), and \_1\_2= (2-u\^2 )(2u\^2-1) + (u\^2+1) +1. Note that the denominator of $\frac{\d}{\d u}(4-\vf_1-\vf_2)$ is positive and the numerator of $\frac{\d}{\d u}(4-\vf_1-\vf_2)$ is a polynomial on $\Z[u, \sqrt{1+u^2}]$ of degree 20. Note that $\frac{\d}{\d u}(4-\vf_1-\vf_2)|_{u = 1} =0$. By the numerical computations with the step length $\frac{\sqrt{3} - 1/\sqrt{3}}{10000}$, $u = 1$ is the only root of $\frac{\d}{\d u}(4-\vf_1-\vf_2) = 0$ in the interval $[1/\sqrt{3},\sqrt{3}]$. Since $\frac{\d}{\d u}(4-\vf_1-\vf_2)|_{u = 0.8} \approx -0.729662 $ and $\frac{\d}{\d u}(4-\vf_1-\vf_2)|_{u = \sqrt{3}} = \frac{3\sqrt{3}}{4}$, we have that $\frac{\d}{\d u}(4-\vf_1-\vf_2) < 0$ on the interval $[1/\sqrt{3},1]$ and $\frac{\d}{\d u}(4-\vf_1-\vf_2) >0$ on the interval $[1, \sqrt{3}]$. This yields that when $1/\sqrt{3} \leq u \leq \sqrt{3}$, by (\[2.102\]), = 4-\_1(1)-\_2(1) 4-\_1-\_2 4-\_1ł()-\_2ł() = 1. The denominator of $\frac{\d}{\d u}(\vf_1\vf_2)$ is positive and the numerator of $\frac{\d}{\d u}(\vf_1\vf_2)$ is a polynomial on $\Z[u, \sqrt{1+u^2}]$ of degree 39. Note that $\frac{\d}{\d u}(\vf_1\vf_2)|_{u = 1} =0$. By the numerical computations with the step length $\frac{\sqrt{3} - 1/\sqrt{3}}{10000}$, $u = 1$ is the only root of $\frac{\d}{\d u}(\vf_1\vf_2) = 0$ in the interval $[1/\sqrt{3},\sqrt{3}]$. Since $\frac{\d}{\d u}(\vf_1\vf_2)|_{u = 0.8} \approx 3.37596 $ and $\frac{\d}{\d u}(\vf_1\vf_2)|_{u = \sqrt{3}} = -\frac{27\sqrt{3}}{32}$, we have $\frac{\d}{\d u}(\vf_1\vf_2) > 0$ on the interval $[1/\sqrt{3},1]$ and $\frac{\d}{\d u}(\vf_1\vf_2) < 0$ on the interval $[1, \sqrt{3}]$. This yields that when $1/\sqrt{3} \leq u \leq \sqrt{3}$, by (\[2.102\]), =\_1ł()\_2 ł() \_1\_2\_1(1)\_2(1)=. Then (4-\_1-\_2 )\^2-4\_1\_2ł(4-\_1ł()-\_2ł() )\^2-4\_1ł()\_2ł() =-.
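The root isolation above relies on uniform sampling with step length $\frac{\sqrt{3}-1/\sqrt{3}}{10000}$. A minimal sketch of such a sign-change scan follows; the function `f` is a hypothetical stand-in with a single interior zero at $u=1$, since the actual derivatives depend on $m$, $\mu$ and $\aa$ defined earlier:

```python
import numpy as np

def scan_roots(f, lo, hi, n=10000):
    """Locate sign changes of f on [lo, hi] by uniform sampling with
    n subintervals, mirroring the step length used in the proof."""
    u = np.linspace(lo, hi, n + 1)
    v = f(u)
    sign_changes = np.nonzero(np.sign(v[:-1]) * np.sign(v[1:]) < 0)[0]
    # return midpoints of the bracketing subintervals
    return [(u[i] + u[i + 1]) / 2 for i in sign_changes]

# Stand-in function with a single interior zero at u = 1.
f = lambda u: (u - 1.0) * (u**2 + 1.0)
roots = scan_roots(f, 1/np.sqrt(3), np.sqrt(3))
print(roots)  # approximately [1.0], within one grid step
```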
Let $\bar{\lm} = \lm^2$ and we have that |[p]{}\_2(|) = | \^2+(4-\_1-\_2)| +\_1\_2. Therefore, the two roots of $\bar{p}_2(\bar\lm)$ are given by |\_1 &=& r\_0 e\^[i\_0]{}=((4-\_1-\_2)+),\
|\_2 &=&r\_0 e\^[-i\_0]{}=((4-\_1-\_2)-), where $r_0^2 = |\bar\lm_1\bar\lm_2| = \vf_1\vf_2$ and $\th_0 \neq \pi$ because $(4-\vf_1-\vf_2 )^2-4\vf_1\vf_2 < 0$ for $1/\sqrt{3} \leq u \leq \sqrt{3}$. Therefore, the four roots of $p_2(\lm)$, which are \_1= e\^, \_2= e\^[+]{}, \_3= e\^, \_4= e\^[+]{}, are complex numbers with non-zero real parts because $\th_0\neq \pi$. This yields that $\ga_{u,0}(2\pi)$ is hyperbolic and for any $\om \in \U$ and $u \in [1/\sqrt{3}, \sqrt{3}]$, i\_ (\_[u,0]{}) = 0, \_ (\_[u, 0]{}) = 0. By (\[2.161\]), for any $\om \in \U$ and $u \in [1/\sqrt{3},\sqrt{3}]$, the operator $\cA(u,0)$ is non-degenerate and \_ ((u,0)) = 0, \_ ((u,0) )= 0.
The characteristic polynomial $\det(JB_3-\lm I)$ of $JB_3$ is given by p\_3() = \^4+(4-\_1-\_2)\^2+\_1\_2. Note that 4-\_1-\_2 = 2- ł(++)= 1, and \_1\_2 &=& ł(--)\
&&ł(++)+2 where the last equality of (\[3.26\]) is obtained by symbolic computation in Mathematica. The roots of $p_3(\lm)$ are all purely imaginary if and only if && 4-\_1-\_2 =1> 0,\
&&\_1\_2 > 0,\
&& (4-\_1-\_2)\^2-4\_1\_2 = 1- 4 \_1\_2 0, hold at the same time.
Note that the denominator of $\frac{\d}{\d u}(\psi_1\psi_2)$ is positive and the numerator of $\frac{\d}{\d u}(\psi_1\psi_2)$ is a polynomial in $\Z[u, \sqrt{1+u^2}]$ of degree 35. Note that $\frac{\d}{\d u}(\psi_1\psi_2)|_{ u=1} = 0$. Since $\frac{\d}{\d u}(\psi_1\psi_2)|_{ u=1/\sqrt{3} + 0.001} \approx 17.7222 $, $\frac{\d}{\d u}(\psi_1\psi_2)|_{ u=0.8} \approx -2.3374 $, $\frac{\d}{\d u}(\psi_1\psi_2)|_{ u= 1.2} \approx 1.3857 $ and $\frac{\d}{\d u}(\psi_1\psi_2)|_{ u=\sqrt{3}} = -\frac{9(27+146\sqrt{3})}{416}$, there exist at least two more roots of $\frac{\d}{\d u}(\psi_1\psi_2) = 0$ in the interval $[1/\sqrt{3}, \sqrt{3}]$ besides $u= 1$. By the numerical computations, $u = \bar{u}_3 \approx 0.663332$, $u =1$ and $u = 1/\bar{u}_3$ are the three roots of $\frac{\d}{\d u}(\psi_1\psi_2) = 0$ in the interval $[1/\sqrt{3},\sqrt{3}]$. Then we have that $\frac{\d}{\d u}(\psi_1\psi_2) > 0$ on the interval $(1/\sqrt{3},\bar{u}_3)\cup (1, 1/\bar{u}_3)$, and $\frac{\d}{\d u}(\psi_1\psi_2) <0$ on the interval $(\bar{u}_3, 1) \cup (1/\bar{u}_3, \sqrt{3})$. By (\[2.102\]), when $1/\sqrt{3} \leq u \leq \sqrt{3}$, = \_1ł()\_2ł()\_1\_2 \_1(|[u]{}\_3)\_2(|[u]{}\_3) = 2.25000, and -8.00000 = 1- 4 \_1(|[u]{}\_3)\_2(|[u]{}\_3) 1- 4 \_1\_2 1- 4 \_1ł()\_2ł()= -. Let $\tilde{\lm} = \lm^2$ and we have that |[p]{}\_3() = \^2+ +\_1\_2. Therefore, the two roots of $\bar{p}_3(\tilde{\lm})$ are given by \_1 &=& \_0 e\^[i\_0]{}=(-1+),\
\_2 &=&\_0 e\^[-i\_0]{}= (-1-), where $\tilde{r}_0^2 = |\tilde\lm_1\tilde\lm_2| = \psi_1\psi_2$ and $\bar{\th}_0 \neq \pi$ by $1-4\psi_1\psi_2< 0$ for $1/\sqrt{3} \leq u \leq \sqrt{3}$. Therefore, the four roots of $p_3(\lm)$, given by \_1= e\^, \_2= e\^[+]{}, \_3= e\^, \_4= e\^[+]{}, are complex numbers with non-zero real parts because $\bar{\th}_0\neq \pi$. This yields that $\eta_{u,0}(2\pi)$ is hyperbolic, i.e., i\_ (\_[u,0]{}) = 0, \_ (\_[u, 0]{}) = 0. By (\[2.162\]), we have that for any $\om \in \U$ the operator $\B(u,0)$ is non-degenerate and \_ ((u,0)) = 0, \_ ((u,0) )= 0, when $u \in [1/\sqrt{3},\sqrt{3}]$. Then this theorem is proved.
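The hyperbolicity criterion used twice in this proof — for $p(\lm) = \lm^4 + b\lm^2 + c$ with $b > 0$, $c > 0$ and $b^2 - 4c < 0$, all four roots have non-zero real part — can be checked numerically for illustrative coefficient values (the listed $(b,c)$ pairs below are samples, not the actual values of $\vf_i$ or $\psi_i$):

```python
import numpy as np

# For p(lm) = lm^4 + b lm^2 + c with b > 0, c > 0 and b^2 - 4c < 0, the
# roots of lm^2 form a complex-conjugate pair off the negative real axis,
# so all four roots of p have non-zero real part (hyperbolicity).
def quartic_roots(b, c):
    return np.roots([1.0, 0.0, b, 0.0, c])

for b, c in [(1.0, 1.0), (0.5, 2.25), (1.0, 2.0)]:
    assert b > 0 and c > 0 and b**2 - 4*c < 0
    r = quartic_roots(b, c)
    assert all(abs(z.real) > 1e-8 for z in r)   # no purely imaginary roots
    assert np.isclose(abs(np.prod(r)), c)       # product of roots equals c
print("all sampled quartics are hyperbolic (roots off the imaginary axis)")
```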
The segment $ \{1\}\times [0, 1)$
---------------------------------
This case has been discussed in [@HuOu2013RCD]. Here we restate their results in our notation. When $u = 1$, we have that $m = 1$, $\aa = 2$, $\mu = 4\sqrt{2}+2$ and \_1 (1)= 1 + , \_2(1) = 1 + , \_1(1) =1 +, \_2(1) =1 +. Therefore, the operators $\cA(1,e)$ and $\B(1,e)$ are given by (1,e) &=&-I\_2 -I\_2 + I\_2,\
(1,e) &=&-I\_2 -I\_2 + I\_2 + S(t). By Proposition 2 of [@HuOu2013RCD] and $\frac{4\sqrt{2}+1}{2\sqrt{2}+1} > 1$, they obtained the following result.
[**Theorem 3.2.**]{} (cf. Theorem 2 of [@HuOu2013RCD]) [*For any $\om \in \U$ and $e\in [0,1)$, the operators $\cA(1,e)$ and $\B(1,e)$ are positive definite on $\ol{D}(\om, 2\pi)$ with zero nullity, i.e., i\_ (\_[1,e]{}) = \_((1,e)) = 0, && \_ (\_[1,e]{}) = \_((1,e)) = 0;\
i\_ (\_[1,e]{}) = \_((1,e)) = 0, && \_ (\_[1,e]{}) = \_((1,e)) = 0. Therefore, all the eigenvalues of $\ga_{1,e}(2\pi)$ and $\eta_{1,e}(2\pi)$ are hyperbolic, i.e., all the eigenvalues are not on $\U$.* ]{}
The boundary segment $\{\sqrt{3}\}\times [0, 1)$ and $\{1/\sqrt{3}\}\times [0, 1)$
----------------------------------------------------------------------------------
In this subsection, we consider the linear stability of the system when $(u, e) \in \{\sqrt{3}\}\times [0, 1)\cup \{1/\sqrt{3}\}\times [0, 1)$. When $u = \sqrt{3}$, by (\[2.116\]-\[2.117\]), (\[2.134\]) and (\[2.135\]), we have that H\_[w\_3w\_3]{} (,e) &=&I-
3 & 0\
0 & 9
,\
H\_[w\_4w\_4]{}(,e) &=&I-
3 & 0\
0 & 9
. Note that $H_{w_3w_3}(\sqrt{3},e) = H_{w_4w_4}(\sqrt{3},e)$. When $u = 1/\sqrt{3}$, we have that H\_[w\_3w\_3]{} (1/,e) &=&I-
9 & 0\
0 & 3
,\
H\_[w\_4w\_4]{}(1/,e) &=&I-
3 & 0\
0 & 9
.
[**Theorem 3.3.**]{}
*(i) By (i) of Lemma 1.1, when $(u, e) \in\{1/\sqrt{3}\}\times [0, \hat{f}(\frac{27}{4})^{-1/2})$ or $(u, e) \in \{\sqrt{3}\}\times [0, \hat{f}(\frac{27}{4})^{-1/2})$, for any $\om \in \U$, the operators $\cA(u, e)$ and $\B(u, e)$ are positive definite with zero nullity on the space $\bar{D}(\om, 2\pi)$, i.e., \_ ((u,e)) = i\_ (\_[u,e]{}) =0, &&\_ ((u,e) )= \_ (\_[u, e]{}) = 0,\
\_ ((u,e)) = i\_ (\_[u,e]{}) = 0, &&\_ ((u,e) )= \_ (\_[u, e]{}) =0. Then all eigenvalues of the matrices $\ga_{u,e}(2\pi)$ and $\eta_{u,e}(2\pi)$ are hyperbolic, i.e., all the eigenvalues are not on $\U$, when $(u, e) \in\{1/\sqrt{3}\}\times [0, \hat{f}(\frac{27}{4})^{-1/2})$ or $(u, e) \in \{\sqrt{3}\}\times [0, \hat{f}(\frac{27}{4})^{-1/2})$.*
\(ii) By (ii) of Lemma 1.1, when $(u, e) \in\{1/\sqrt{3}\}\times [0, 1)$ or $(u, e) \in \{\sqrt{3}\}\times [0, 1)$, the results of (i) hold.
[**Proof.**]{} Since for all $e\in [0,1)$, $
H_{w_4w_4}(\sqrt{3},e)(t) = H_{w_4w_4}(1/\sqrt{3},e)(t) = H_{w_3w_3}(\sqrt{3},e)(t),$ we have that $\ga_{\sqrt{3},e}(t) = \eta_{\sqrt{3},e}(t)=\eta_{1/\sqrt{3},e}(t)$. This yields that for any $\om \in \U$, i\_(\_[,e]{}) =i\_(\_[,e]{}) = i\_(\_[1/,e]{}),\
\_(\_[,e]{}) =\_(\_[,e]{}) = \_(\_[1/,e]{}) . By Proposition 2.6, we have that i\_(\_[1/,e]{}) = i\_(\_[,e]{}) =i\_(\_[,e]{}) = i\_(\_[1/,e]{}),\
\_(\_[1/,e]{}) = \_(\_[,e]{}) =\_(\_[,e]{}) = \_(\_[1/,e]{}).
For the system of $\ga_{1/\sqrt{3},e}(t)$, by (\[2.158\]), the corresponding operator is given by (1/,e) = -I\_2 -I\_2 + (3I\_2 + S(t)). By the definition of $A(\bb, e)$ in (\[2.160\]), when $\bb = \frac{27}{4}$, we have that (1/,e) = A(,e).
By (i) of Lemma 1.1, when $\bb = \frac{27}{4}$ and $0\leq e<\hat{f}(\frac{27}{4})^{-1/2} \approx 0.4454$, $A(\frac{27}{4},e)$ is a positive operator with zero nullity for any $\om$ boundary condition with $\om\in \U$, where $\hat{f}(\bb)$ is given by (1.22) of [@HuOuWang2015ARMA], i.e., \_(A(,e)) = 0, \_ (A(,e))= 0. By (\[3.62\]), (\[3.56\]-\[3.57\]), and i\_(\_[1/,e]{}) = \_((1/,e) )=\_(A(,e)),\
\_ (\_[1/,e]{})= \_((1/,e) = \_ (A(,e)), we have that (i) of this theorem holds.
By (ii) of Lemma 1.1, we have that for any $e\in [0,1)$ and any $\om \in \U$, \_(A(,e)) = 0, \_ (A(,e))= 0. By (\[3.63\]), (ii) of this theorem holds.
The stability in the rectangle $[1/\sqrt{3},\sqrt{3}] \times [0,1)$
===================================================================
By direct computations, the denominator of $\vf_1(u) -\vf_2(u)$ is negative on the interval $[1/\sqrt{3},\sqrt{3}]$ and the numerator of $\vf_1(u) -\vf_2(u)$ is a polynomial in $\Z[u,\sqrt{1+u^2}]$. Furthermore, $u = 1$ is the only root of $\vf_1(u) -\vf_2(u) = 0$, and \_1(u) -\_2(u) > 0, && 1/ u < 1,\
\_1(u) -\_2(u) < 0, && 1 < u. By (\[2.158\]), we define |(u,e) =
+ , 1/ u < 1,\
- , 1 < u .
Then when $1/\sqrt{3}\leq u < 1$, $\cA(u,e)$ can be written as (u,e) = (\_1-\_2)ł( + )= (\_1-\_2) |(u,e), and when $1 < u \leq \sqrt{3}$, $\cA(u,e)$ can be written as (u,e) = (\_2-\_1)ł( - )= (\_2-\_1) |(u,e). By (\[4.1\]-\[4.2\]) and (\[4.4\]-\[4.5\]), we have that \_((u,e)) = \_(|(u,e)), \_((u,e)) = \_(|(u,e)).
By direct computations, the denominator of $\frac{\d (\vf_2-\vf_1)}{\d u}$ is positive on the interval $[1/\sqrt{3},\sqrt{3}]$ and the numerator of $\frac{\d (\vf_2-\vf_1)}{\d u}$ is a polynomial in $\Z[u, \sqrt{1+u^2}]$ of degree 24. Note that $\frac{\d (\vf_2-\vf_1)}{\d u}|_{u = 0.6} \approx -0.366067$, $\frac{\d (\vf_2-\vf_1)}{\d u}|_{u = 1} =\frac{12(4-\sqrt{2})}{7} $ and $\frac{\d (\vf_2-\vf_1)}{\d u}|_{u = \sqrt{3}} =-\frac{3\sqrt{3}}{8} $. Then there exist at least two roots of $\frac{\d (\vf_2-\vf_1)}{\d u} = 0$ in the interval $[1/\sqrt{3},\sqrt{3}]$. The numerical computations show that $u_1 \approx 0.606169$ and $u_2 = 1/u_1 $ are the only two roots of $\frac{\d (\vf_2-\vf_1)}{\d u} = 0$ in the interval $[1/\sqrt{3},\sqrt{3}]$. Therefore, we have that < 0, && 1/ < u < u\_1,\
>0, && u\_1< u < 1 , and < 0, && 1 < u < u\_2,\
>0. && u\_2< u < .
[**Lemma 4.1.** ]{}
*(i) For each fixed $e\in [0 ,1)$, the operator $ \bar{\cA}(u,e)$ is increasing with respect to $u\in (u_1, 1) \cup (u_2, \sqrt{3})$ and is decreasing with respect to $u\in (1/\sqrt{3}, u_1) \cup (1, u_2)$ where $u_1$ and $u_2$ are two roots of $\frac{\partial (\vf_2-\vf_1)}{\partial u} = 0$ when $u \in [1/\sqrt{3},\sqrt{3}]$. Especially, |(u,e)|\_[u = u\_0]{} =*
, 1/ < u < 1;\
, 1 < u < .
for $u \in [1/\sqrt{3},\sqrt{3}] $ is a positive definite operator when $u \in (u_1, 1) \cup (u_2, \sqrt{3}]$ and is a negative definite operator when $u\in [1/\sqrt{3}, u_1) \cup (1, u_2)$.
\(ii) For every eigenvalue $\lm_{u_0} = 0$ of $\bar{\cA}(u_0,e_0)$ with $\om \in \U$ for some $(u_0, e_0) \in [1/\sqrt{3},\sqrt{3}] \times [0,1)$, there hold \_[u]{}|\_[u= u\_0]{} > 0, u\_0(u\_1, 1) (u\_2, \], and
\frac{\d}{ \d u}\lm_{u}|_{u= u_0} < 0 \;\text{when} \; u_0\in (1/\sqrt{3}, u_1) \cup (1, u_2).

{\bf Proof.} Note that $\bar{\cA}(u,e_0)$ is an analytic path of strictly increasing self-adjoint operators with respect to $u$ when $u\in (u_1, 1) \cup (u_2, \sqrt{3})$, and is an analytic path of strictly
decreasing self-adjoint operators
with respect to $u$ when $ u\in[1/\sqrt{3}, u_1) \cup (1, u_2)$.
Following Kato (\cite{Kato1976book}, p.120 and p.386), we can choose
a smooth path of
unit norm eigenvectors $x_u$ with $x_{u_0}= x_0$ belonging to a smooth
path of real eigenvalues
$\lm_{u}$ of the self-adjoint operator $\bar{\cA}(u,e_0)$ on $\ol{D}(\om,2\pi)$
such that for small
enough $|u-u\_0|$, we have
\be \bar{\cA}(u,e_0)x_u=\lambda_u x_u, \lb{4.16} \ee
where $\lm_{u_0}=0$. Taking the inner product with $x_u$ on both sides of (\ref{4.16})
and then differentiating it with respect to $u$ at $u_0$, we get
\bea \frac{\pt}{\pt u}\lambda_{u}|_{u=u_0}
&=& \<\frac{\pt}{\pt u}\bar{A}( u,e_0)x_{ u},x_{ u}\>|_{ u= u_0}
+ 2\<\bar{A}( u,e_0)x_{ u},\frac{\pt}{\pt u}x_{ u}\>|_{ u= u_0} \nn\\
&=& \<\frac{\pt}{\pt u}\bar{A}( u_0,e_0)x_0,x_0\> \nn\\
&=& \begin{cases}
\frac{1}{(\vf_1-\vf_2)^2}\frac{\partial (\vf_2-\vf_1)}{\partial u} \<\cA(1,e)x_0,x_0\>, \quad \text{when}\; 1/\sqrt{3} < u < 1;\\
\frac{1}{(\vf_2-\vf_1)^2}\frac{\partial (\vf_1-\vf_2)}{\partial u} \<\cA(1,e)x_0,x_0\>, \quad \text{when}\; 1 < u < \sqrt{3}.
\end{cases}
\eea
where the second equality follows from (\ref{4.16}), the last equality
follows from the definition of $|( u,e)$.
By (\ref{4.8} - \ref{4.11}) and the non-negative definiteness of
$\cA(1,e)$, we have that
\be
\frac{\d}{ \d u}\lm_{u}|_{u= u_0} > 0, \; \mbox{when} \; u_0\in (u_1, 1) \cup (u_2, \sqrt{3}),
\ee
and
\be
\frac{\d}{ \d u}\lm_{u}|_{u= u_0} < 0 \;\text{when} \; u_0\in (1/\sqrt{3}, u_1) \cup (1, u_2).
\ee
Thus, this lemma holds. \hfill\hb
\medskip
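The first-order formula $\frac{\pt}{\pt u}\lm_{u}|_{u=u_0} = \<\frac{\pt}{\pt u}\bar{\cA}(u_0,e_0)x_0,x_0\>$ used in the proof above is the standard perturbation formula for a simple eigenvalue of a self-adjoint family. It can be illustrated by a finite-dimensional finite-difference check; the matrix family below is a toy stand-in, not the operator $\bar{\cA}(u,e)$ itself:

```python
import numpy as np

# Toy symmetric family A(u) = D + u*P; for a simple eigenvalue lm(u) with
# unit eigenvector x(u), the derivative is  d lm/du = < A'(u) x(u), x(u) >.
D = np.diag([1.0, 2.0, 3.0, 4.0])
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
A = lambda u: D + u * P            # so A'(u) = P for every u

u0, h = 0.3, 1e-6
w, V = np.linalg.eigh(A(u0))
x0 = V[:, 0]                       # unit eigenvector of the smallest eigenvalue
predicted = x0 @ P @ x0            # first-order perturbation formula
numeric = (np.linalg.eigh(A(u0 + h))[0][0]
           - np.linalg.eigh(A(u0 - h))[0][0]) / (2.0 * h)
assert abs(predicted - numeric) < 1e-5
print(predicted, numeric)
```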
{\bf Corollary 4.2.} {\it
For every fixed $e\in [0,1)$ and $\om\in \U$, the index function
$\phi_{\om}(\cA(u,e))$, and consequently $i_{\om}(\ga_{u,e})$, is non-decreasing
as $u$ increases from $u_1$ to $1$ and from $u_2$ to $\sqrt{3}$; and they are non-increasing as
$u$ increases from $1/\sqrt{3}$ to $u_1$ and from $1$ to $u_2$. Especially, the index function
$\phi_{\om}(\cA(u,e))$ satisfies }
\bea
&&\phi_{\om}(\cA(u,e)) \geq \phi_{\om}(\cA(u_1,e)),\; \mbox{when}\; u\in (1/\sqrt{3}, 1],\\
&&\phi_{\om}(\cA(u,e)) \geq \phi_{\om}(\cA(u_2,e)),\; \mbox{when}\; u\in [1, \sqrt{3}).
\eea
\medskip
{\bf Proof.} For $u_1\leq u'<u'' < 1$ and fixed $e\in[0,1)$, when $u$ increases from $u'$ to
$u''$, it is possible that negative eigenvalues of $\bar{\cA}(u',e)$ pass through $0$ and become
positive ones of $\bar{\cA}(u'',e)$, but it is impossible that positive eigenvalues of
$\bar{\cA}(u',e)$ pass through $0$ and become negative, by (ii) of Lemma 4.1. Similar arguments
also hold when $u$ is in the intervals $(u_2, \sqrt{3})$, $(1/\sqrt{3}, u_1)$ and $(1, u_2)$.
Therefore the first and the second claims hold. \hfill\hb
\medskip
Next we consider the Morse index and nullity of $\cA(u,e)$ when $u = u_1$ and $u = u_2$.
\medskip
{\bf Lemma 4.3. }{\it (i) By (i) of Lemma 1.1, for any $\om$ boundary condition, when
$e\in [0, \hat{f}(\bb_1)^{-1/2})$,
both the operators $\cA(u_1,e)$ and $\cA(u_2,e)$ are non-degenerate
positive operators with zero nullity, i.e.,
\be
\phi_{\om}(\cA(u_1,e))= \phi_{\om}(\cA(u_2,e))=0, \; \nu_{\om}(\cA(u_1,e)) =\nu_{\om}(\cA(u_2,e)) =0.
\ee
\medskip
(ii) By (ii) of Lemma 1.1, when $e\in [0, 1)$, the results of (i) hold.}
\medskip
{\bf Proof.}
By $u_2 = 1/u_1$ and Proposition 2.6, we have that
\be
i_{\om}(\ga_{u_1,e})= i_{\om}(\ga_{u_2,e}), \; \nu_{\om}(\ga_{u_1,e}) =\nu_{\om}(\ga_{u_2,e}).
\ee
Then
\be
\phi_{\om}(\cA(u_1,e))= \phi_{\om}(\cA(u_2,e)), \; \nu_{\om}(\cA(u_1,e)) =\nu_{\om}(\cA(u_2,e)).\lb{4.24}
\ee
We only need to consider the case $u = u_1$.
By the direct computations, we have that
\be
\vf_1(u_1) + \vf_2 (u_1) \approx 3.10002, \quad \vf_1(u_1) -\vf_2(u_1) \approx 1.52657.
\ee
The operator $\cA(u_1,e)$ is given by
\bea
\cA(u_1,e) = -\frac{\d^2}{\d t^2}I_2 -I_2 + \frac{1}{2(1+e\cos t)}
((\vf_1(u_1) + \vf_2 (u_1))I_2 +(\vf_1(u_1) -\vf_2(u_1) ) S(t)).
\eea
Since $\vf_1(u_1) + \vf_2 (u_1)> 3$ and $\frac{1}{2(1+e\cos t)}$ is
a positive operator on $\bar{D}(\om,2\pi)$, we have
\be
\cA(u_1,e) > -\frac{\d^2}{\d t^2}I_2 -I_2 + \frac{1}{2(1+e\cos t)}
(3 I_2+(\vf_1(u_1) -\vf_2(u_1) ) S(t)) .
\ee
Note that there exists a $\bb_1 = 9-(\vf_1(u_1) -\vf_2(u_1))^2 \approx 9-(1.52657)^2 \approx 6.66958$ such that
\be
A(\bb_1, e) = -\frac{\d^2}{\d t^2}I_2 -I_2 + \frac{1}{2(1+e\cos t)}
(3 I_2+\sqrt{9-\bb_1} S(t)). \lb{4.28}
\ee
where $A(\bb, e)$ is defined by (\ref{2.160}).
Then we have that for any $\om$ boundary condition
\be \cA(u_1,e) > A(\bb_1, e). \lb{4.29}\ee
By (i) of Lemma 1.1, when $\bb = \bb_1$ and $0\leq e<\hat{f}(\bb_1)^{-1/2} \approx 0.4435$,
$A(\bb_1,e)$ is a positive operator with zero nullity for any $\om$ boundary
condition where $\om\in\U$ and
$\hat{f}(\bb)$ is given by (1.22) of \cite{HuOuWang2015ARMA}. Then for $e\in[0,\hat{f}(\bb_1)^{-1/2})$ and $\om\in\U$,
\be
\phi_{\om}(\cA(u_1,e))= 0, \; \nu_{\om}(\cA(u_1,e)) =0,\ \forall \om \in \U.
\ee
By (\ref{4.24}), we obtain (i) of this lemma.
By (ii) of Lemma 1.1 and (\ref{4.29}), we have that for any $e\in[0,1)$ and $\om\in\U$,
$\cA(u_1,e)$ is also positive definite with zero nullity, i.e.,
\be
\phi_{\om}(\cA(u_1,e))= 0, \; \nu_{\om}(\cA(u_1,e)) =0,\ \forall \om \in \U.
\ee
Again, by (\ref{4.24}), we obtain (ii) of this lemma.
\hfill\hb
\medskip
{\bf Theorem 4.4.} {\it
(i) By (i) of Lemma 1.1, for any $(u,e)\in[1/\sqrt{3},\sqrt{3}] \times [0,\hat{f}(\bb_1)^{-1/2})$ and $\om \in \U$,
$\cA(u,e)$ is a positive definite operator with zero nullity on the space $\ol{D}(\om, 2\pi)$, i.e.,
\bea
i_{\om}(\ga_{u,e})= \phi_{\om}(\cA(u, e))= 0, \;\nu_{\om}(\ga_{u,e})= \nu_{\om}(\cA(u,e)) =0.
\eea
Then all the eigenvalues of the matrix $\ga_{u,e}(2\pi)$ are hyperbolic, i.e., all the eigenvalues are not on $\U$.
\medskip
(ii) By (ii) of Lemma 1.1, for any $(u,e)\in[1/\sqrt{3},\sqrt{3}] \times [0,1)$, the results of (i) hold.}
\medskip
{\bf Proof.} By Lemma 4.1, Corollary 4.2 and Lemma 4.3, we have that for any given $e,\
&&\_((u,e)) \_((u\_2,e))> 0, u\[1, ). By (i) of Lemma 4.3, we have that for any $(u,e)\in[1/\sqrt{3},\sqrt{3}] \times [0,\hat{f}(\bb_1)^{-1/2})$, \_((u, e))= 0, \_((u,e)) =0. Since i\_(\_[u,e]{})= \_((u, e)), \_(\_[u\_2,e]{})= \_((u,e)), we have that (i) of the theorem holds.
By (\[4.33\]-\[4.34\]), (ii) of Lemma 4.3 and (\[4.36\]), we have that (ii) of this theorem holds.
Next we consider the operator $\B(u,e)$ and the symplectic path $\eta_{u,e}(t)$. Since $\psi_i(u) = \psi_i(1/u)$ for $i =1, 2$, $\B(u,e) = \B(1/u,e)$ and $\eta_{u,e}(t) = \eta_{1/u,e}(t)$ for $(u,e)\in[1/\sqrt{3}, \sqrt{3}] \times [0,1)$. We only need to consider $\B(u,e)$ and $\eta_{u,e}(t)$ in the domain $(u,e)\in[1/\sqrt{3}, 1] \times [0,1)$.
By direct computations, the denominator of $\psi_1(u) -\psi_2(u)$ is positive on the interval $[1/\sqrt{3},\sqrt{3}]$ and the numerator of $\psi_1(u) -\psi_2(u)$ is a polynomial on $\Z[u, \sqrt{1+u^2}]$ of degree 12. Note that $\psi_1(u) -\psi_2(u)|_{ u =1} = \frac{3(9-4\sqrt{2})}{7}$ and $\psi_1(u) -\psi_2(u)|_{ u =1/\sqrt{3}} = \psi_1(u) -\psi_2(u)|_{ u =\sqrt{3}} =-\frac{3}{2}$. Then there exists at least one root of $\psi_1(u) -\psi_2(u) = 0$ in the interval $[1/\sqrt{3},1]$. By numerical computations, $u = u_3\approx 0.6633$ is the only root of $\psi_1(u) -\psi_2(u) = 0$ in the interval $[1/\sqrt{3},1]$. Therefore, &&\_1(u) -\_2(u) <0, 1/ u< u\_3,\
&&\_1(u) -\_2(u) > 0, u\_3 < u 1. When $u = u_3$, we have that \_1(u\_3) + \_2 (u\_3)= 3, \_1(u\_3) -\_2(u\_3) = 0. The operator $\B(u_3,e)$ is given by (u\_3,e) = -I\_2 -I\_2 + . By the definition of $A(\bb,e)$ in (\[2.160\]), we have that (u\_3,e) = A(9, e). By Corollary 4.3 of [@HuLongSun2014], we have that $A(9, e)$ is a positive definite operator with zero nullity for any $\om$ boundary condition. So is $\B(u_3,e)$. We define the operator $\bar{\B}(u,e)$ by |(u,e)=
-, 1/< u<u\_3,\
+ , u\_3< u< 1.
By the definition of $\B(u,e)$ in (\[2.159\]), when $1/\sqrt{3}< u<u_3$, $\B(u,e)$ can be written as (u,e) = (\_2-\_1)ł( - ) = (\_2-\_1) |(u,e), and when $u_3< u< 1$, $\B(u,e)$ can be written as (u,e) = (\_1-\_2)ł( + ) = (\_1-\_2) |(u,e).
[**Lemma 4.6.** ]{}
*(i) For each fixed $e\in [0 ,1)$, the operator $\bar{\B}(u,e)$ is increasing when $u\in [1/\sqrt{3}, u_3) $ and is decreasing when $u\in (u_3, 1) $ . Especially, |(u,e)|\_[u = u\_0]{} =*
, 1/ u<u\_3,\
, u\_3< u 1.
is a positive definite operator when $u \in [1/\sqrt{3}, u_3)$ and is a negative definite operator when $u\in (u_3,1]$.
\(ii) For every eigenvalue $\lm_{u_0} = 0$ of $\bar{\B}(u_0,e_0)$ with $\om \in \U$ for some $(u_0, e_0) \in (1/\sqrt{3},1) \times [0,1)$, there hold \_[u]{}|\_[u= u\_0]{} > 0, u\_0 (1/, u\_3), and
\_[u]{}|\_[u= u\_0]{} < 0 u\_0(u\_3, 1) .
[**Proof.**]{} Note that $\frac{\B(u_3,e)}{(\psi_1-\psi_2)^2}$ is always a positive definite operator on $\ol{D}(\om ,2\pi)$. By direct computations, the denominator of $\frac{\d (\psi_1-\psi_2)}{\d u}$ is positive on the interval $[1/\sqrt{3},\sqrt{3}]$ and the numerator of $\frac{\d (\psi_1-\psi_2)}{\d u}$ is a polynomial in $\Z[u, \sqrt{1+u^2}]$ of degree 22. Note that $\frac{\d (\psi_1-\psi_2)}{\d u}|_{ u = 1} = 0$. Furthermore, $\frac{\d (\psi_1-\psi_2)}{\d u}|_{ u = 0.8} \approx 4.42996$ and $\frac{\d (\psi_1-\psi_2)}{\d u}|_{ u = \sqrt{3}} = -\frac{3(27+146\sqrt{3})}{104}$. By the numerical computations with the step length $\frac{\sqrt{3} - 1/\sqrt{3}}{10000}$, $u = 1$ is the only root of $\frac{\d (\psi_1-\psi_2)}{\d u} = 0$ in the interval $[1/\sqrt{3},\sqrt{3}]$. Then we have that when $1/\sqrt{3} \leq u < u_3$, $\frac{\d (\psi_1-\psi_2)}{\d u} > 0$; when $u_3< u \leq 1 $, $\frac{\d (\psi_2-\psi_1)}{\d u} < 0$. Therefore, the eigenvalues of $\bar{\B}(u,e)$ are non-decreasing when $1/\sqrt{3} \leq u < u_3$ and the eigenvalues of $\bar{\B}(u,e)$ are non-increasing when $u_3< u \leq 1 $. This lemma can then be proved in the same way as Lemma 4.1.
[**Corollary 4.7.**]{} [ *For every fixed $e\in [0,1)$ and $\om\in \U$, the index function $\phi_{\om}(\B(u,e))$ and consequently $i_{\om}(\eta_{u,e})$ are non-decreasing as $u$ increases from $1/\sqrt{3}$ to $u_3$ and are non-increasing as $u$ increases from $u_3$ to $1$. Especially, the index function $\phi_{\om}(\B(u,e))$ satisfies*]{} \_((u,e)) \_((1/,e)), && u,\
\_((u,e)) \_((1,e)), && u.
The proof of Corollary 4.7 is similar to that of Corollary 4.2, and we omit it here.
[**Theorem 4.8.**]{}
*(i) By (i) of Lemma 1.1, for any $(u,e)\in [1/\sqrt{3},u_3)\times [0,\hat{f}(\frac{27}{4})^{-1/2}) $, $(u,e)\in (1/u_3,\sqrt{3}]\times [0,\hat{f}(\frac{27}{4})^{-1/2})$, or $(u,e)\in [u_3, 1/u_3]\times [0,1) $, the operator $\B(u,e)$ is positive definite with zero nullity on the space $\ol{D}(\om, 2\pi)$, i.e., i\_(\_[u,e]{})= \_((u, e))= 0, \_(\_[u,e]{})= \_((u,e)) =0. Then all the eigenvalues of the matrix $\eta_{u,e}(2\pi)$ are hyperbolic, i.e., all the eigenvalues are not on $\U$.*
\(ii) By (ii) of Lemma 1.1, when $(u,e)\in[1/\sqrt{3},\sqrt{3}] \times [0,1)$, the results of (i) hold.
Since the proof of Theorem 4.8 is similar to that of Theorem 4.4, we only sketch it here.
[**Sketch of proof.**]{} By (\[4.47\]) and (i) of Theorem 3.3, we have that (\[4.49\]) holds when $(u,e)\in [1/\sqrt{3},u_3)\times [0,\hat{f}(\frac{27}{4})^{-1/2}) $ and $(u,e)\in (1/u_3,\sqrt{3}]\times [0,\hat{f}(\frac{27}{4})^{-1/2})$. By (\[4.48\]) and Theorem 3.2, (\[4.49\]) holds when $(u,e)\in [u_3, 1/u_3]\times [0,1)$. This yields that (i) of this theorem holds.
By Corollary 4.7, Theorem 3.2 and (ii) of Theorem 3.3, (\[4.49\]) holds when $(u,e)\in[1/\sqrt{3},\sqrt{3}] \times [0,1)$. Then (ii) of this theorem holds.
[*Proof of Theorem 1.2.*]{} Note that the fundamental solution $\ga_0(2\pi)$ of the linearized Hamiltonian system satisfies $\ga_0(2\pi) = \ga_1(2\pi) \diamond \ga_{u,e}(2\pi)\diamond \eta_{u,e}(2\pi)$. By (i) of Theorem 4.8, $\eta_{u,e}(2\pi)$ possesses two pairs of hyperbolic eigenvalues when $(u,e) \in [u_3, 1/u_3] \times [0,1)$. By (i) of Theorem 4.4, (i) of Theorem 4.8 and $\hat{f}(\frac{27}{4})^{-1/2} > \hat{f}(\bb_1)^{-1/2}$, $\ga_{u,e}(2\pi)$ possesses two pairs of hyperbolic eigenvalues when $(u,e) \in \l((1/\sqrt{3}, \sqrt{3})\r) \times [0, \hat{f}(\bb_1)^{-1/2})$ and $\eta_{u,e}(2\pi)$ possesses two pairs of hyperbolic eigenvalues when $(u,e) \in \l((1/\sqrt{3}, u_3)\cup( u_3, \sqrt{3})\r) \times [0, \hat{f}(\frac{27}{4})^{-1/2})$. Hence $\ga_{u,e}(2\pi) \diamond \eta_{u,e}(2\pi)$ possesses at least two pairs of hyperbolic eigenvalues when $(u,e) \in \l((1/\sqrt{3}, u_3)\cup( u_3, \sqrt{3})\r) \times [0, \hat{f}(\frac{27}{4})^{-1/2})$, and (i) of Theorem 1.2 holds.
By (ii) of Theorem 4.4 and (ii) of Theorem 4.8, $\ga_0(2\pi)$ possesses four pairs of hyperbolic eigenvalues when $(u,e) \in[1/\sqrt{3},\sqrt{3}] \times [0,1)$. Then (ii) of Theorem 1.2 holds.
[**Acknowledgment.** ]{}[*This paper is a part of my Ph.D. thesis. I would like to express my sincere thanks to my advisor Professor Yiming Long for his valuable guidance, help, suggestions and encouragements during my study and discussions on this topic. I would also like to express my sincere thanks to Dr. Yuwei Ou for the valuable discussions on this topic.*]{}
[^1]: Partially supported by NSFC (No. 11131004, No. 11671215). Email: [email protected]
---
author:
- |
[^1]\
Physics Dept., 510c, Brookhaven National Laboratory, Upton, NY 11973-5000,USA\
E-mail:
title: |
Hard and Soft Physics at RHIC\
with implications for LHC
---
Large Transverse Momentum $\pi^0$ production—from ISR to RHIC
=============================================================
PHENIX has presented measurements of $\pi^0$ production at mid-rapidity in p-p collisions at two values of c.m. energy $\sqrt{s}$=200 GeV and 62.4 GeV (Fig. \[fig:PXpi0pp\]). Some of my younger colleagues are amazed at the excellent agreement of Next to Leading Order (NLO) and Next to Leading Log (NLL) perturbative Quantum Chromodynamics (pQCD) calculations [@ppg063; @ppg087] with the measurements.
![(left) PHENIX measurement of invariant cross section, $E {d^3\sigma}/{d^3p}$, as a function of transverse momentum $p_T$ for $\pi^0$ production at mid-rapidity in p-p collisions at c.m. energy $\sqrt{s}=200$ GeV [@ppg063]. (right) PHENIX measurement of $\pi^0$ in p-p collisions at $\sqrt{s}=62.4$ GeV [@ppg087]. []{data-label="fig:PXpi0pp"}](figs/ppg063-cross.eps "fig:"){width="0.50\linewidth"} ![(left) PHENIX measurement of invariant cross section, $E {d^3\sigma}/{d^3p}$, as a function of transverse momentum $p_T$ for $\pi^0$ production at mid-rapidity in p-p collisions at c.m. energy $\sqrt{s}=200$ GeV [@ppg063]. (right) PHENIX measurement of $\pi^0$ in p-p collisions at $\sqrt{s}=62.4$ GeV [@ppg087]. []{data-label="fig:PXpi0pp"}](figs/ppg087-fig3-rev2.eps "fig:"){width="0.56\linewidth"}
However, this comes as no surprise to me, because hard scattering in p-p collisions was discovered at the CERN ISR by the observation of a very large flux of high transverse momentum $\pi^0$ with a power-law tail which varied systematically with the c.m. energy of the collision. This observation in 1972 proved that the partons of deeply inelastic scattering were strongly interacting. Further ISR measurements, utilizing inclusive single hadrons or hadron pairs, established that high transverse momentum particles are produced from states with two roughly back-to-back jets. These jets result from the scattering of point-like constituents of the protons, as described by QCD, which was developed during the course of these measurements.
ISR Data, Notably CCR 1972-73
-----------------------------
The CERN Columbia Rockefeller (CCR) Collaboration [@CCR] (and also the Saclay Strasbourg [@SS] and British Scandinavian [@BS] collaborations) measured high $p_T$ pion production at the CERN-ISR (Fig. \[fig:CCR\]). The $e^{-6p_T}$ shape breaks to a power law at high $p_T$ with a characteristic $\sqrt{s}$ dependence. [^2] The large rate indicates that [*partons interact strongly ($\gg$ EM) with each other*]{}, [**but,**]{} to quote the authors [@CCR]: “Indeed, the possibility of a break in the steep exponential slope observed at low $p_T$ was anticipated by Berman, Bjorken and Kogut [@BBK]. However, the electromagnetic form they predict, $p_{T}^{-4} F(p_{T}/\sqrt{s})$, is not observed in our experiment. On the other hand, a constituent exchange model proposed by Blankenbecler, Brodsky and Gunion [@CIM], and extended by others, does give an excellent account of the data.” The data fit $p_{T}^{-n} F(p_{T}/\sqrt{s})$, with $n\simeq8$.
Constituent Interchange Model (CIM) 1972
----------------------------------------
Inspired by the [*dramatic features*]{} of pion inclusive reactions revealed by “the recent measurements at CERN ISR of single-particle inclusive scattering at $90^\circ$ and large transverse momentum”, Blankenbecler, Brodsky and Gunion [@CIM] proposed a new general scaling form: $$E \frac{d^3\sigma}{dp^3}=\frac{1}{p_T^{n}} F({p_T \over \sqrt{s}})
\label{eq:bbg}$$ where $n$ gives the form of the force-law between constituents. For QED or Vector Gluon exchange, $n=4$, but perhaps more importantly, BBG predict [**$n$=8**]{} for the case of quark-meson scattering by the exchange of a quark (CIM) as apparently observed.
First prediction using ‘QCD’ 1975—WRONG!
----------------------------------------
R. F. Cahalan, K. A. Geer, J. Kogut and Leonard Susskind [@CGKS] generalized, in their own words: “The naive, pointlike parton model of Berman, Bjorken and Kogut to scale-invariant and asymptotically free field theories. The asymptotically free field generalization is studied in detail. Although such theories contain vector fields, [**[ single vector-gluon exchange contributes insignificantly to wide-angle hadronic collisions.]{}**]{} This follows from (1) the smallness of the invariant charge at small distances and (2) the [*breakdown of naive scaling*]{} in these theories. These effects should explain the apparent absence of vector exchange in inclusive and exclusive hadronic collisions at large momentum transfers observed at Fermilab and at the CERN ISR.”[^3]
Nobody’s perfect; they get [*one*]{} thing right! They introduce the “effective index” $n_{\rm eff}(x_T, \sqrt{s})$ to account for ‘scale breaking’: $$E \frac{d^3\sigma}{dp^3}={1 \over {p_T^{{n_{\rm eff}(x_T,\sqrt{s})}} } }
F({x_T})={1\over {\sqrt{s}^{{\,n_{\rm eff}(x_T,\sqrt{s})}} } }
\: G({x_T})\qquad ,
\label{eq:nxt}$$ where $x_T=2p_T/\sqrt{s}$.
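The scaling form of Eq. (\[eq:nxt\]) can be checked numerically: at fixed $x_T$, the ratio of invariant cross sections at two c.m. energies isolates $n_{\rm eff}$. A minimal sketch with synthetic data obeying exact $x_T$ scaling (the toy $G(x_T)$ and normalization are assumptions, not the measured cross sections):

```python
import numpy as np

def n_eff(sigma1, sigma2, rs1, rs2):
    """Point-by-point effective index from Eq. (nxt): at fixed x_T,
    E d3sigma/dp3 = G(x_T)/sqrt(s)^n_eff, so the ratio of the cross
    sections at two c.m. energies isolates n_eff."""
    return np.log(sigma2 / sigma1) / np.log(rs1 / rs2)

def invariant_xsec(pT, rs, n=6.3):
    """Toy cross section obeying exact x_T scaling (G(x_T) made up)."""
    xT = 2.0 * pT / rs
    return (1.0 - xT) ** 10 / rs ** n

rs1, rs2 = 62.4, 200.0                     # GeV
xT = np.linspace(0.05, 0.30, 6)
s1 = invariant_xsec(xT * rs1 / 2.0, rs1)   # same x_T points at each energy
s2 = invariant_xsec(xT * rs2 / 2.0, rs2)
print(n_eff(s1, s2, rs1, rs2))             # -> 6.3 at every x_T
```

Note that any overall normalization error common to both energies cancels in the ratio, which is the point made below about the absolute $p_T$ scale.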
CCOR 1978—Higher $p_T>7$ GeV/c—$n_{\rm eff}(x_T, \sqrt{s}) \rightarrow 5=4^{++}$. QCD works!
---------------------------------------------------------------------------------------------
The CCOR measurement [@CCOR] (Fig. \[fig:ccorpt\]) with a larger
apparatus and much increased integrated luminosity extended their previous $\pi^0$ measurement [@CCR; @CCRS] to much higher $p_T$. The $p_T^{-8}$ scaling-fit which worked at lower $p_T$ extrapolated below the higher $p_T$ measurements for $\sqrt{s} > 30.7$ GeV and $p_T \geq 7$ GeV/c (Fig. \[fig:ccorpt\]a). A fit to the new data [@CCOR] for $7.5\leq p_T\leq 14.0$ GeV/c, $53.1\leq \sqrt{s}\leq 62.4$ GeV gave , (including [*all*]{} systematic errors).
The effective index $n_{\rm eff}(x_T, \sqrt{s})$ was also extracted point-by-point from the data as shown in Fig. \[fig:ccorpt\]b where the CCOR data of Fig. \[fig:ccorpt\]a for the 3 values of $\sqrt{s}$ are plotted vs $x_T$ on a log-log scale. $n_{\rm eff}(x_T, \sqrt{s})$ is determined for any 2 values of $\sqrt{s}$ by taking the ratio as a function of $x_T$ as shown in Fig. \[fig:ccorpt\]c. $n_{\rm eff}(x_T, \sqrt{s})$ clearly varies with both $\sqrt{s}$ and $x_T$, it is not a constant. For $\sqrt{s}=53.1$ and 62.4 GeV, $n_{\rm eff}(x_T, \sqrt{s})$ varies from $\sim 8$ at low $x_T$ to $\sim 5$ at high $x_T$. An important feature of the scaling analysis (Eq. \[eq:nxt\]) relevant to determining $n_{\rm eff}(x_T, \sqrt{s})$ is that [*the absolute $p_T$ scale uncertainty cancels!*]{}
![(left)-(top) Invariant cross section for inclusive $\pi^0$ for several ISR experiments, compiled by ABCS Collaboration [@ABCS]; (left)-(bottom) $n_{\rm eff}(x_T,\sqrt{s})$ from ABCS 52.7, 62.4 GeV data only. There is an additional common systematic error of $\pm0.7$ in $n$. (right)-a) $\sqrt{s}{\rm (GeV)}^{6.38} \times Ed^3\sigma/dp^3$ as a function of $x_T=2p_T/\sqrt{s}$ for the PHENIX 62.4 and 200 GeV $\pi^0$ data from Fig. \[fig:PXpi0pp\]; (right)-b) point-by-point $n_{\rm eff}(x_T,\sqrt{s})$. \[fig:otherxt\] ](figs/xt_scaling_n_6380_with_par_fix0_scaleband_rev.eps "fig:"){width="0.55\linewidth" height="0.68\linewidth"}
The effect of the absolute scale uncertainty, which is the main systematic error in these experiments, can be gauged from Fig. \[fig:otherxt\]-(left)-(top) [@ABCS], which shows the $\pi^0$ cross sections from several experiments. The absolute cross sections disagree by factors of $\sim 3$ for different experiments, but the values of $n_{\rm eff}(x_T, \sqrt{s})$ for the CCOR [@CCOR] (Fig. \[fig:ccorpt\]-(right)-(bottom)) and ABCS [@ABCS] (Fig. \[fig:otherxt\]-(left)-(bottom)) experiments are in excellent agreement due to the cancellation of the error in the absolute $p_T$ scale. The $x_T$ scaling of the PHENIX p-p $\pi^0$ data at $\sqrt{s}=200$ and 62.4 GeV from Fig. \[fig:PXpi0pp\] with $n_{\rm eff}(x_T, \sqrt{s})\approx 6.38$ is shown in Fig. \[fig:otherxt\]-(right). The log-log plot emphasizes the pure power-law $p_T$ dependence of the invariant cross section, $E d^3\sigma/dp^3\simeq p_T^{-n}$ for $p_T >4$ GeV/c, with $n=8.11\pm 0.05$ at $\sqrt{s}=200$ GeV [@ppg087].
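A pure power law is a straight line on a log-log plot, so the exponent can be extracted by a linear fit in log-log space. An illustrative sketch with toy data (the 2% scatter and normalization are made up, not the PHENIX points):

```python
import numpy as np

# Extract the power-law exponent n of E d3sigma/dp3 ~ A * pT**(-n)
# by a straight-line fit in log-log space, as for the pT > 4 GeV/c
# region of the 200 GeV pi0 spectrum.
rng = np.random.default_rng(0)
pT = np.linspace(4.0, 15.0, 12)            # GeV/c
n_true = 8.11                              # exponent quoted in the text
y = 1e4 * pT ** (-n_true) * rng.normal(1.0, 0.02, pT.size)  # toy points

slope, intercept = np.polyfit(np.log(pT), np.log(y), 1)
print(f"fitted n = {-slope:.2f}")          # recovers ~8.1
```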
The first modern QCD calculations and predictions for high $p_T$ single particle inclusive cross sections, including non-scaling and initial state radiation were done in 1978, by Jeff Owens and collaborators. [@Owens78] Jets in $4\pi$ Calorimeters at ISR energies or lower are invisible below $\sqrt{\hat{s}}\sim E_T \leq 25$ GeV [@Gordon]; but there were many false claims which led to skepticism about jets in hadron collisions, particularly in the USA. [@MJTIJMPA] A ‘phase change’ in belief-in-Jets was produced by one UA2 event at the 1982 ICHEP in Paris [@Paris82], but that’s another story. [@MJTJyv1]
The major discovery at RHIC–$\pi^0$ suppression in A+A collisions.
==================================================================
The discovery, at RHIC, that $\pi^0$ are suppressed by roughly a factor of 5 compared to point-like scaling of hard-scattering in central Au+Au collisions is arguably [*the*]{} major discovery in Relativistic Heavy Ion Physics. In Fig. \[fig:f2\]-(left), the data for $\pi^0$ and non-identified charged particles ($h^{\pm}$) are presented as the Nuclear Modification Factor, $R_{AA}(p_T)$, the ratio of the yield of $\pi^0$ (or $h^{\pm}$) per central Au+Au collision (upper 10%-ile of observed multiplicity) to the point-like-scaled p-p cross section: $$R_{AA}(p_T)={{d^2N^{\pi}_{AA}/dp_T dy N_{AA}}\over {\langle T_{AA}\rangle d^2\sigma^{\pi}_{pp}/dp_T dy}} \quad ,
\label{eq:RAA}$$
![(left) Nuclear modification factor $R_{AA}(p_T)$ for $\pi^0$ and $h^{\pm}$ in central Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV [@pi0-QM05]; (right) $R_{AA}(p_T)$ for $\pi^0$ in Cu+Cu central collisions at $\sqrt{s_{NN}}=200$, 62.4 and 22.4 GeV [@ppg084], together with theory curves [@Vitev2]. []{data-label="fig:f2"}](figs/Raa_pi0_h_compari_AUAU_200GEV_0to10cent.eps "fig:"){width="0.53\linewidth" height="0.4\linewidth"} ![(left) Nuclear modification factor $R_{AA}(p_T)$ for $\pi^0$ and $h^{\pm}$ in central Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV [@pi0-QM05]; (right) $R_{AA}(p_T)$ for $\pi^0$ in Cu+Cu central collisions at $\sqrt{s_{NN}}=200$, 62.4 and 22.4 GeV [@ppg084], together with theory curves [@Vitev2]. []{data-label="fig:f2"}](figs/raa_CuCu_energy_Cc08.eps "fig:"){width="0.47\linewidth"}
where $\langle T_{AA}\rangle$ is the overlap integral of the nuclear thickness functions. The $\pi^0$ data at nucleon-nucleon c.m. energy $\sqrt{s_{NN}}=200$ GeV are consistent with a constant $R_{AA}\sim 0.2$ over the range $4\leq p_T\leq 20$ GeV/c, while the suppressions of non-identified charged hadrons and $\pi^0$ differ for $2\leq p_T \leq 6$ GeV/c and come together for $p_T > 6$ GeV/c.
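Eq. (\[eq:RAA\]) is a simple ratio once $\langle T_{AA}\rangle$ is known; a minimal sketch (all numerical values, including the $\langle T_{AA}\rangle$ scale, are illustrative placeholders, not PHENIX numbers):

```python
def R_AA(yield_AA_per_event, TAA, xsec_pp):
    """Nuclear modification factor, Eq. (RAA): per-event AA yield at a
    given pT divided by the point-like-scaled p-p cross section."""
    return yield_AA_per_event / (TAA * xsec_pp)

# Placeholder numbers, chosen to show a factor-of-5 suppression:
TAA_central = 25.0                       # mb^-1, illustrative scale only
xsec_pp = 1.0e-6                         # mb/(GeV/c), illustrative
yield_AA = 0.2 * TAA_central * xsec_pp   # per-event yield giving R_AA = 0.2
print(R_AA(yield_AA, TAA_central, xsec_pp))   # ~ 0.2
```

$R_{AA}=1$ would mean the A+A yield is exactly the p-p cross section scaled by the number of binary collisions, i.e., no medium effect.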
A new PHENIX result [@ppg084] (Fig. \[fig:f2\]-(right)) nicely illustrates that parton suppression begins somewhere between $\sqrt{s_{NN}}$=22.4 and 62.4 GeV for Cu+Cu central collisions. This confirms that $\pi^0$ (jet) suppression is unique at RHIC energies and occurs at both $\sqrt{s_{NN}}=200$ and 62.4 GeV. The suppression is attributed to energy-loss of the outgoing hard-scattered color-charged partons due to interactions in the presumably deconfined and thus color-charged medium produced in Au+Au (and Cu+Cu) collisions at RHIC [@BSZARNPS].
### Precise and accurate reference spectra are crucial
It is important to note that PHENIX did not measure the reference p-p spectrum at $\sqrt{s}=22.4$ GeV but used a QCD-based fit [@Arleo-DdE] to the world’s data on charged and neutral pions which was checked against PHENIX p-p measurements at 62.4 and 200 GeV using $x_T$ scaling (Fig. \[fig:xT22-200\]) [@ppg084].
![Plots for $\sqrt{s_{NN}}$=22.4, 62.4 and 200 GeV of: a) measured invariant $\pi^0$ yields in central Cu+Cu collisions; b) measured invariant $\pi^0$ cross sections in p-p collisions at 62.4 and 200 GeV and fit at 22.4 GeV [@Arleo-DdE]; c) the p-p data and fit from (b) plotted in the form $\sqrt{s}{\rm (GeV)}^{n_{\rm eff}} \times Ed^3\sigma/dp^3$ to exhibit $x_T$ scaling with $n_{\rm eff}$ 6.1–6.4, consistent with Fig. \[fig:otherxt\]-(right)-b). []{data-label="fig:xT22-200"}](figs/pi0_spectra_cucu_allinone_2_xt.eps){width="1.0\linewidth"}
A key issue in this fit is that the data at $\sqrt{s}=22.4$ GeV were consistent with each other and with pQCD [@Arleo-DdE], except for one outlier which was excluded based on experience from a previous global fit [@DdE] to the world’s data at $\sqrt{s}=62.4$ GeV, where there are large disagreements (recall Fig. \[fig:otherxt\]-(left)-(top)). The PHENIX measurement of the p-p reference spectrum at 62.4 GeV [@ppg087] agreed with the measurements shown in Figs. \[fig:ccorpt\] and \[fig:otherxt\]-(left) to within the systematic error of the absolute $p_T$ scales, but disagreed significantly with the global fit at 62.4 GeV [@DdE], which did not attempt to eliminate outliers and had no basis for adjusting the absolute $p_T$ scales of the various measurements.

![(left) Comparison of $R_{AA}(p_T)$ for $\pi^0$ in $\sqrt{s_{NN}}=62.4$ GeV central Au+Au collisions using the fit [@DdE] to the previous world 62.4 GeV p-p data or the measured PHENIX reference 62.4 GeV p-p data [@ppg087]. (right) Final $R_{AA}(p_T)$ in central Au+Au collisions at $\sqrt{s_{NN}}=62.4$ and 200 GeV [@Gabor08].[]{data-label="fig:RAA2ways"}](figs/plot1-prel.eps "fig:"){width="0.48\linewidth"} ![(left) Comparison of $R_{AA}(p_T)$ for $\pi^0$ in $\sqrt{s_{NN}}=62.4$ GeV central Au+Au collisions using the fit [@DdE] to the previous world 62.4 GeV p-p data or the measured PHENIX reference 62.4 GeV p-p data [@ppg087]. (right) Final $R_{AA}(p_T)$ in central Au+Au collisions at $\sqrt{s_{NN}}=62.4$ and 200 GeV [@Gabor08].[]{data-label="fig:RAA2ways"}](figs/plot3-prel.eps "fig:"){width="0.48\linewidth"}

In Fig. \[fig:RAA2ways\]-(left), the $R_{AA}(p_T)$ for Au+Au central collisions at $\sqrt{s_{NN}}=62.4$ GeV computed with the global p-p fit [@DdE] and with the measured reference spectrum [@ppg087] are shown, while the final $R_{AA}(p_T)$ for Au+Au central collisions at 62.4 GeV is compared to $R_{AA}(p_T)$ at 200 GeV in Fig. \[fig:RAA2ways\]-(right) [@Gabor08]. If it weren’t already obvious, this should be a lesson to the LHC physicists (and management) of the importance of making reference measurements in the same detector for p-p collisions at the identical $\sqrt{s}$ as the $\sqrt{s_{NN}}$ of the A+A collisions.
$J/\Psi$-suppression—still golden?
----------------------------------
The dramatic difference in suppression of hard-scattering at RHIC compared to SPS fixed target c.m. energy ($\sqrt{s_{NN}}=17$ GeV) stands in stark contrast to $J/\Psi$ suppression, originally thought to be the gold-plated signature of deconfinement and the Quark Gluon Plasma (QGP) [@MatsuiSatz]. $R_{AA}$ for $J/\Psi$ suppression is the same, if not identical, at SPS and RHIC (see Fig. \[fig:f3\]-(left)), casting serious doubt on the value of $J/\Psi$ suppression as a probe of deconfinement. The medium at RHIC makes $\pi^0$’s nearly vanish but leaves the $J/\Psi$ unchanged compared to lower $\sqrt{s_{NN}}$. One possible explanation is that $c$ and $\bar{c}$ quarks in the QGP recombine to regenerate $J/\Psi$ (see Fig. \[fig:f3\]-(right)), miraculously making the observed $R_{AA}$ equal at SPS and RHIC c.m. energies. The good news is that such models predict $J/\Psi$ enhancement ($R_{AA}> 1$) at LHC energies, which would be spectacular, if observed.
![(left) $R_{AA}^{J/\Psi}$ vs centrality ($N_{\rm part}$) at RHIC and SpS energies [@PRL98-1]. (right) Predictions for $R_{AA}^{J/\Psi}$ in a model with regeneration [@GRB].[]{data-label="fig:f3"}](figs/gunjiRAA.eps "fig:"){width="0.48\linewidth" height="0.6\linewidth"} ![(left) $R_{AA}^{J/\Psi}$ vs centrality ($N_{\rm part}$) at RHIC and SpS energies [@PRL98-1]. (right) Predictions for $R_{AA}^{J/\Psi}$ in a model with regeneration [@GRB].[]{data-label="fig:f3"}](figs/raa_npart_rapp.eps "fig:"){width="0.48\linewidth" height="0.6\linewidth"}
This leaves us with the interesting question: Will Peter Higgs or Helmut Satz have to wait longer at LHC to find out whether they are right?
The baryon anomaly and $x_T$ scaling
====================================
Many RHI physicists tend to treat non-identified charged hadrons $h^{\pm}$ as if they were $\pi^{\pm}$. While this may be a reasonable assumption in p-p collisions, it is clear from Fig. \[fig:f2\]-(left) that the suppression of non-identified charged hadrons and $\pi^0$ is very different for $1 < p_T \leq 6$ GeV/c.
If the production of high-$p_T$ particles in Au+Au collisions is the result of hard scattering according to pQCD, then $x_T$ scaling should work just as well in Au+Au collisions as in p-p collisions and should yield the same value of the exponent $n_{\rm eff}(x_T,\sqrt{s})$. The only assumption required is that the structure and fragmentation functions in Au+Au collisions should scale, in which case Eq. \[eq:nxt\] still applies, albeit with a $G(x_T)$ appropriate for Au+Au. In Fig. \[fig:nxTAA\], $n_{\rm eff}(x_T,\sqrt{s_{NN}})$ in Au+Au is shown for $\pi^0$ and $h^{\pm}$ in peripheral and central collisions, derived by taking the ratio of $E d^3\sigma/dp^3$ at a given $x_T$ for $\sqrt{s_{NN}} = 130$ and 200 GeV, in each case. [@PXxTAuAu]
![Power-law exponent $n_{\rm eff}(x_T)$ for $\pi^0$ and $h^{\pm}$ spectra in central and peripheral Au+Au collisions at $\sqrt{s_{NN}} = 130$ and 200 GeV [@PXxTAuAu]. []{data-label="fig:nxTAA"}](figs/xTloghalf.eps){width="0.8\linewidth"}
The $\pi^0$’s exhibit $x_T$ scaling, with the same value of $n_{\rm eff} = 6.3$ as in p-p collisions, for both Au+Au peripheral and central collisions. The $x_T$ scaling establishes that high-$p_T$ $\pi^0$ production in peripheral and central Au+Au collisions follows pQCD as in p-p collisions, with parton distributions and fragmentation functions that scale with $x_T$, at least within the experimental sensitivity of the data. The fact that the fragmentation functions scale for $\pi^0$ in Au+Au central collisions indicates that the effective energy loss must scale, i.e. $\Delta E(p_T)/p_T$ is a constant, which is consistent with the constant value of $R_{AA}(p_T)$ for $p_T>4$ GeV/c (Fig. \[fig:f2\]-(left)), given that the $\pi^0$ $p_T$ spectrum is a pure power-law (Fig. \[fig:otherxt\]-(right)-a)).
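One simple way to see why a constant fractional energy loss gives a flat $R_{AA}$ for a power-law spectrum: if a hadron observed at $p_T$ originated at $p_T/(1-S)$ with $S=\Delta E(p_T)/p_T$ constant, the quenched yield is the vacuum power law shifted and multiplied by the Jacobian of the shift, giving $R_{AA}=(1-S)^{n-1}$, independent of $p_T$. A numerical check (toy spectrum; the 20% loss fraction is an assumption chosen to land near $R_{AA}\approx 0.2$):

```python
import numpy as np

n = 8.1      # power-law exponent of the pi0 spectrum (from the text)
S = 0.20     # constant fractional shift Delta E(pT)/pT -- an assumption

def spectrum(pT):
    """Vacuum (p-p-like) invariant yield, pure power law."""
    return pT ** (-n)

pT = np.linspace(4.0, 20.0, 9)
# A hadron seen at pT originated at pT/(1-S); the factor 1/(1-S) is
# the Jacobian of the shift.
quenched = spectrum(pT / (1.0 - S)) / (1.0 - S)
RAA = quenched / spectrum(pT)
print(RAA)   # flat: (1-S)**(n-1) ~ 0.2 at every pT
```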
The deviation of $h^{\pm}$ in Fig. \[fig:nxTAA\] from $x_T$ scaling in central Au+Au collisions is indicative of and consistent with the strong non-scaling modification of particle composition of identified hadrons observed in Au+Au collisions compared to that of p-p collisions in the range $2.0\leq p_T\leq 4.5$ GeV/c, where particle production is the result of jet-fragmentation. This is called the Baryon Anomaly. As shown in Fig. \[fig:banomaly\]-(left) the $p/\pi^{+}$ and $\bar{p}/\pi^{-}$ ratios as a function of $p_T$ increase dramatically to values $\sim$1 as a function of centrality in Au+Au collisions at RHIC [@ppg015]. This is nearly an order of magnitude larger than had ever been seen previously in either fragmentation of jets in $e^+ e^-$ collisions or in the average particle composition of the bulk matter in Au+Au central collisions [@ppg026].
![(left) $p/\pi^+$ and $\bar{p}/\pi^-$ as a function of $p_T$ and centrality from Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV [@ppg015] compared to other data indicated; (right) Conditional yields, per trigger meson (circles), baryon (squares) with $2.5< p_T < 4$ GeV/c, of associated mesons with $1.7 < p_T < 2.5$ GeV/c integrated within $\Delta\phi=\pm 0.94$ radian of the trigger (near side-full) or opposite azimuthal angle (open), for Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV [@ppg072]. []{data-label="fig:banomaly"}](figs/ppg015-Fig1color.eps "fig:"){width="0.52\linewidth"} ![(left) $p/\pi^+$ and $\bar{p}/\pi^-$ as a function of $p_T$ and centrality from Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV [@ppg015] compared to other data indicated; (right) Conditional yields, per trigger meson (circles), baryon (squares) with $2.5< p_T < 4$ GeV/c, of associated mesons with $1.7 < p_T < 2.5$ GeV/c integrated within $\Delta\phi=\pm 0.94$ radian of the trigger (near side-full) or opposite azimuthal angle (open), for Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV [@ppg072]. []{data-label="fig:banomaly"}](figs/ppg072-fig7.eps "fig:"){width="0.52\linewidth"}
This ‘baryon anomaly’ was beautifully explained as due to the coalescence of an exponential (thermal) distribution of constituent quarks (a.k.a. the QGP) [@GFH03]. Unfortunately, measurements of correlations of $h^{\pm}$ in the range $1.7\leq p_{T_a}\leq 2.5$ GeV/c associated to identified meson or baryon triggers with $2.5\leq p_{T_t}\leq 4.0$ GeV/c showed the same near side and away side peaks and yields (Fig. \[fig:banomaly\]-(right)) characteristic of di-jet production from hard-scattering [@PXPRC71; @ppg072], rather than from soft coalescence, apparently ruling out this beautiful model.
There are still plenty of other models of the baryon anomaly, but none is definitive. For instance, Stan Brodsky presented at this meeting [@StanAnne] a higher-twist model of the baryon anomaly as the result of the reaction $q+q\rightarrow p+\bar{q}$. This predicts an isolated proton with no same-side jet, but with an opposite jet, a clear and crucial test. Another test (from the CIM [@CIM]) is that $n_{\rm eff}\rightarrow 8$ for these protons. This effect will be emphasized in central collisions because the higher-twist subprocesses have ‘small size’ and are ‘color transparent’, so they propagate through the nuclear medium without absorption. This is consistent with the reduced near-side correlation to baryon triggers compared to meson triggers shown for the most central collisions in Fig. \[fig:banomaly\]-(right) [@ppg072]; but definitive detection of isolated $p$ or $\bar{p}$ in central A+A collisions, or precision measurements of $x_T$ scaling for $p$ and $\bar{p}$ as a function of centrality, remain very interesting projects for the future.
Guy Paic recently [@CuautlePaic08] claimed to explain the baryon anomaly by simple radial flow, which occurs late in the expansion, even in the hadronic phase. The radial flow velocity Lorentz-boosts the heavier protons to larger $p_T$ than the lighter pions. Also, protons have a shorter formation time than pions, so they may participate in the radial flow even if they result from parton fragmentation. If Guy is correct, then it is time to re-examine the subject of the elliptic flow ($v_2$) and $p_T$ spectra of identified hadrons, which was very popular several years ago. The last time I looked, in the PHENIX White Paper [@PXWP], the hydro models with radial flow could explain either the $p_T$ spectra of $\bar{p}$ and $\pi$ or the $v_2$, but not both. If the steadily improving models can now explain both $v_2$ and the $p_T$ spectra, this would spell the end of the ‘baryon anomaly’. However, the anomalously large $\bar{p}/\pi$ ratio would remain a signature property of the inclusive identified-particle $p_T$ spectra in central Au+Au collisions.
In this vein, I was temporarily thrown for a loop by a result from a STAR presentation at Quark Matter 2008, as reported by Marco van Leeuwen at Hard Probes 2008 [@MarcoHP08] (and Christine Nattrass [@Nattthis] at this meeting), which measured the particle composition in the near-side jet and ‘ridge’ and seemed to indicate that the $p/\pi$ (or baryon/meson) ratio in the near-side jet was not anomalous. I first thought that this disagreed with everything that I said previously in this section about the ‘baryon anomaly’. To quote Marco [@MarcoHP08], “the $p/\pi$ ratio in the ridge is similar to the inclusive $p/\pi$ ratio in Au+ Au events, which is much larger than in p+p events. The $p/\pi$ ratio in the jet-like peak is similar to the inclusive ratio in p + p events.” However, at a Ridge workshop at BNL [@Ridgewks], I found out that what STAR really meant to say was that “The $p/\pi$ ratio of the conditional yield for near side-correlations associated to an $h^{\pm}$ trigger with $p_{T_t}>4$ GeV/c is similar in the jet-like peak to the inclusive ratio in p + p events; while the $p/\pi$ ratio in the ridge is similar to the inclusive $p/\pi$ ratio in Au+ Au events, which is much larger than in p+p events.”—i.e. STAR was talking about associated yields to the trigger $h^{\pm}$ and did not include the triggering particle in the yield. Hence there is no disagreement. In fact, the STAR result is actually in agreement with a recent PHENIX measurement [@ppg034] of the ratio of the associated baryon and meson conditional near-side yields from an $h^{\pm}$ trigger with $2.5< p_{T_t}< 4$ GeV/c in Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV.
To reiterate, the anomalously large $\bar{p}/\pi$ ratio remains a signature property of the inclusive identified-particle $p_T$ spectra in central Au+Au collisions. In fact, Christine Nattrass’ [@Nattthis] thorough demonstration at this meeting of the properties and particle composition of ‘the ridge’ convinced me that the ridge is nothing other than a random coincidence of any trigger particle with the background or bulk of inclusive particles, which, as Stan Brodsky commented, is generally biased towards the trigger direction due to the $k_T$ effect. Naturally, several details, such as the azimuthal width of the ridge, still remain to be explained. In my opinion, a key test of this idea is that a ridge of same-side correlations with large $\Delta\eta$ should exist for direct-$\gamma$ triggers, or, in the PHENIX acceptance, the same-side correlation to a direct-$\gamma$ should exist at the same rate and azimuthal width as the ridge we observed in $\pi^0$ or inclusive-$\gamma$ same-side correlations [@ppg083].
Direct photons at RHIC—Thermal photons?
=======================================
Internal Conversions—the first measurement anywhere of direct photons at low $p_T$
----------------------------------------------------------------------------------
Internal conversion of a photon from $\pi^0$ and $\eta$ decay is well known and is called Dalitz decay [@egNPS]. Perhaps less well known in the RHI community is the fact that for any reaction (e.g. $q+g\rightarrow \gamma +q$) in which a real photon can be emitted, a virtual photon (e.g. an $e^+ e^-$ pair of mass $m_{ee}\geq 2m_e$) can also be emitted. This is called internal conversion and is generally given by the Kroll-Wada formula [@KW; @ppg086]: $$\begin{aligned}
{1\over N_{\gamma}} {{dN_{ee}}\over {dm_{ee}}}&=& \frac{2\alpha}{3\pi}\frac{1}{m_{ee}} (1-\frac{m^2_{ee}}{M^2})^3 \quad \times \cr & &|F(m_{ee}^2)|^2 \sqrt{1-\frac{4m_e^2}{m_{ee}^2}}\, (1+\frac{2m_e^2}{m^2_{ee}})\quad ,
\label{eq:KW}
\end{aligned}$$ where $M$ is the mass of the decaying meson or the effective mass of the emitting system. The dominant terms are on the first line of Eq. \[eq:KW\]: the characteristic $1/m_{ee}$ dependence and the cutoff of the spectrum for $m_{ee}\geq M$ (Fig. \[fig:ppg086Figs\]-(left)) [@ppg086]. Since the main background for direct-single-$\gamma$ production is a photon from $\pi^0\rightarrow \gamma +\gamma$, selecting $m_{ee} \gsim 100$ MeV effectively reduces this background by an order of magnitude by eliminating the pairs from $\pi^0$ Dalitz decay, $\pi^0\rightarrow \gamma + e^+ + e^- $, at the expense of a factor $\sim 1000$ in rate. This allows the direct photon measurements to be extended (for the first time in both p-p and Au+Au collisions) below the value of $p_T\sim 4$ GeV/c possible with real photons, down to $p_T=1$ GeV/c (Fig. \[fig:ppg086Figs\]-(right)) [@ppg086], which is a real achievement. The solid lines on the p-p data are QCD calculations, which work down to $p_T=2$ GeV/c. The dashed line is a fit of the p-p data to the modified power law $B (1+p_T^2/b)^{-n}$, used in the related Drell-Yan [@Ito81] reaction, which flattens as $p_T\rightarrow 0$.
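To make the shape of Eq. \[eq:KW\] concrete, here is a minimal numerical sketch of the Kroll-Wada spectrum. It assumes a constant form factor $|F|^2=1$ and GeV units; the function name and default argument are my own illustration, not from [@ppg086]:

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
M_E = 0.000511          # electron mass [GeV]

def kroll_wada(m_ee, M, form_factor=1.0):
    """(1/N_gamma) dN_ee/dm_ee of Eq. [eq:KW].

    m_ee : e+e- pair mass [GeV]
    M    : mass of the decaying meson, or effective mass of the
           emitting system [GeV]
    Returns 0 outside the kinematic limits 2 m_e < m_ee < M.
    """
    if not (2.0 * M_E < m_ee < M):
        return 0.0
    phase_space = (1.0 - m_ee**2 / M**2) ** 3          # cutoff as m_ee -> M
    electron_terms = (math.sqrt(1.0 - 4.0 * M_E**2 / m_ee**2)
                      * (1.0 + 2.0 * M_E**2 / m_ee**2))
    return (2.0 * ALPHA / (3.0 * math.pi)) / m_ee \
        * phase_space * abs(form_factor) ** 2 * electron_terms

# The 1/m_ee fall-off plus the (1 - m_ee^2/M^2)^3 cutoff is why a
# pair-mass cut m_ee > ~100 MeV removes pi0 Dalitz pairs (M = 0.135 GeV)
# while barely touching a 'direct' source with large effective mass M.
```

For $M=m_{\pi^0}$ the spectrum vanishes above $m_{ee}\approx 135$ MeV, while for a hard-scattering source with $M$ of a few GeV it continues essentially as $1/m_{ee}$, which is the handle exploited in the measurement.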
The relatively flat, non-exponential spectra of the direct-$\gamma$ and Drell-Yan reactions as $p_T\rightarrow 0$ are due to the fact that there is no soft-physics production process for them, only production via the partonic subprocesses $g+q\rightarrow \gamma+q$ and $\bar{q}+q\rightarrow e^+ + e^-$, respectively. This is quite distinct from the case of hadron production, e.g. $\pi^0$, where the spectra are exponential as $p_T\rightarrow 0$, due to soft production processes, in p-p collisions (Fig. \[fig:PXpi0pp\]) as well as in Au+Au collisions. Thus, for direct-$\gamma$ in Au+Au collisions, the exponential spectrum of excess photons above the $\mean{T_{AA}}$-extrapolated p-p fit is unique and therefore suggestive of a thermal source.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![(left) Invariant mass ($m_{e^+e^-}$) distribution of $e^+e^-$ pairs from Au+Au minimum-bias events for $1.0< p_T<1.5$ GeV/c [@ppg086]. Dashed lines are Eq. \[eq:KW\] for the mesons indicated. Blue solid line is $f_c(m)$, the total di-electron yield from the sum of contributions or ‘cocktail’ of meson Dalitz decays; red solid line is $f_{dir}(m)$, the internal-conversion $m_{e^+e^-}$ spectrum from a direct photon ($M\gg m_{e^+e^-}$). Black solid line is a fit of the data to the sum of cocktail plus direct contributions in the range $80< m_{e^+e^-} < 300$ MeV/c$^2$. (right) Invariant cross section (p-p) or invariant yield (Au+Au) of direct photons as a function of $p_T$ [@ppg086]. Filled points are from virtual photons, open points from real photons. \[fig:ppg086Figs\] ](figs/ppg086Fig2KWdist.eps "fig:"){width="0.55\linewidth"} ![(left) Invariant mass ($m_{e^+e^-}$) distribution of $e^+e^-$ pairs from Au+Au minimum-bias events for $1.0< p_T<1.5$ GeV/c [@ppg086]. Dashed lines are Eq. \[eq:KW\] for the mesons indicated. Blue solid line is $f_c(m)$, the total di-electron yield from the sum of contributions or ‘cocktail’ of meson Dalitz decays; red solid line is $f_{dir}(m)$, the internal-conversion $m_{e^+e^-}$ spectrum from a direct photon ($M\gg m_{e^+e^-}$). Black solid line is a fit of the data to the sum of cocktail plus direct contributions in the range $80< m_{e^+e^-} < 300$ MeV/c$^2$. (right) Invariant cross section (p-p) or invariant yield (Au+Au) of direct photons as a function of $p_T$ [@ppg086]. Filled points are from virtual photons, open points from real photons. \[fig:ppg086Figs\] ](figs/ppg086Fig4.eps "fig:"){width="0.45\linewidth"}
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Low $p_T$ vs high $p_T$ direct-$\gamma$—Learn a lot from a busy plot
--------------------------------------------------------------------
The unique behavior of direct-$\gamma$ at low $p_T$ in Au+Au relative to p+p, compared to any other particle, is more dramatically illustrated by examining the $R_{AA}$ of all particles measured by PHENIX in central Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV (Fig. \[fig:Tshirt\]) [@ThanksAM]. For the entire region $p_T\leq 20$ GeV/c so far measured at RHIC, apart from the $p+\bar{p}$, which are enhanced in the region $2\leq p_T \lsim 4$ GeV/c (‘the baryon anomaly’), the production of [*no other particle*]{} is enhanced over point-like scaling.
![Nuclear Modification Factor, $R_{AA}(p_T)$ for all identified particles so far measured by PHENIX in central Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV. [@ThanksAM] \[fig:Tshirt\] ](figs/raa_Tshirt_v07g.eps){width="0.90\linewidth"}
The behavior of $R_{AA}$ of the low-$p_T$ ($\leq 2$ GeV/c) direct-$\gamma$ is totally and dramatically different from that of all the other particles, exhibiting an order-of-magnitude exponential enhancement as $p_T\rightarrow 0$. This exponential enhancement is certainly suggestive of a new production mechanism in central Au+Au collisions, different from the conventional soft and hard particle production processes in p-p collisions, and its unique behavior is attributed to thermal photon production by many authors [@DdE-DP].
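For reference, the nuclear modification factor plotted in Fig. \[fig:Tshirt\] is the standard ratio $R_{AA} = (dN_{AA}/dp_T)\,/\,(\mean{T_{AA}}\; d\sigma_{pp}/dp_T)$. A minimal sketch of this bookkeeping (argument names are mine, and the numbers in the comments are illustrative, not PHENIX values):

```python
def r_aa(dn_aa, t_aa, dsigma_pp):
    """Nuclear modification factor:
        R_AA = (dN_AA/dp_T) / (<T_AA> * dsigma_pp/dp_T)

    dn_aa     : per-event yield in A+A at a given p_T
    t_aa      : <T_AA>, the nuclear overlap function [mb^-1]
    dsigma_pp : p-p cross section at the same p_T [mb]

    R_AA = 1 means point-like (binary-collision) scaling;
    R_AA < 1 means suppression; R_AA > 1 means enhancement.
    """
    return dn_aa / (t_aa * dsigma_pp)

# Example (illustrative numbers): if the A+A yield is half of the
# T_AA-scaled p-p expectation, R_AA = 0.5, i.e. suppression.
```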
### Direct photons and mesons up to $p_T=20$ GeV/c
Other instructive observations can be gleaned from Fig. \[fig:Tshirt\]. The $\pi^0$ and $\eta$ continue to track each other to the highest $p_T$. At lower $p_T$, the $\phi$ meson tracks the $K^{\pm}$ very well, but with a different value of $R_{AA}(p_T)$ than the $\pi^0$, while at higher $p_T$, the $\phi$ and $\omega$ vector mesons appear to track each other. Interestingly, the $J/\Psi$ seems to track the $\pi^0$ for $0\leq p_T\leq 4$ GeV/c; it will be interesting to see whether this trend continues at higher $p_T$.
The direct-$\gamma$’s also show something interesting at high $p_T$, which may indicate trouble ahead at the LHC. With admittedly large systematic errors, which should not be ignored, the direct-$\gamma$ appear to become suppressed for $p_T> 14$ GeV/c, with a trend towards equality with $R_{AA}^{\pi^0}$ for $p_T\sim 20$ GeV/c. Should $R_{AA}^{\gamma}$ become equal to $R_{AA}^{\pi^0}$, it would imply that energy loss in the final state is no longer a significant effect for $p_T\gsim 20$ GeV/c and that the equal suppression of direct-$\gamma$ and $\pi^0$ is due to the initial-state structure functions. If this were true, it could mean that going to much higher $p_T$ would not be useful for measurements of parton suppression. In this vein, the new EPS09 structure functions for quarks and gluons in nuclei were presented at this meeting [@EPS09]; they represent the best estimate of shadowing, derived by fitting all the DIS data in $\mu(e)$-A scattering and, notably, including in the fit the PHENIX $\pi^0$ data in d+Au and p-p as a function of centrality. Clearly, improved measurements of both direct-$\gamma$ and $\pi^0$ in the range $10<p_T<20$ GeV/c are of the utmost importance for both the RHIC and LHC programs.
Precision measurements, key to the next step in understanding
=============================================================
There are many different models of parton suppression, with totally different assumptions, which all give results in agreement with the PHENIX measurement $R_{AA}^{\pi^0}\approx 0.20$ for $4\leq p_T\leq 20$ GeV/c in central Au+Au collisions. In PHENIX, Jamie Nagle got all the theorists to send us predictions as a function of the main single parameter that characterizes the medium in each model, in order to do precision fits to the latest PHENIX $\pi^0$ data including the correct treatment of correlated experimental systematic errors (Fig. \[fig:pi0pqm\]) [@ppg079].
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![a) (left) PHENIX $\pi^0$ $R_{AA}(p_T)$ for Au+Au central (0-5%) collisions at $\sqrt{s_{NN}}=200$ GeV [@ppg079] compared to PQM model predictions [@PQM] as a function of $\mean{\hat{q}}$. The thick red line is the best fit. b) (center) Values of $R_{AA}$ at $p_T=20$ GeV/c as a function of $\mean{\hat{q}}$ in the PQM model [@PQM], corresponding to the lines in the left panel. c) (right) Same as b), but on a log-log scale, with fit. \[fig:pi0pqm\] ](figs/ppg079-figure2a.eps "fig:"){width="0.74\linewidth"} ![a) (left) PHENIX $\pi^0$ $R_{AA}(p_T)$ for Au+Au central (0-5%) collisions at $\sqrt{s_{NN}}=200$ GeV [@ppg079] compared to PQM model predictions [@PQM] as a function of $\mean{\hat{q}}$. The thick red line is the best fit. b) (center) Values of $R_{AA}$ at $p_T=20$ GeV/c as a function of $\mean{\hat{q}}$ in the PQM model [@PQM], corresponding to the lines in the left panel. c) (right) Same as b), but on a log-log scale, with fit. \[fig:pi0pqm\] ](figs/ppg079-figure2_pqm_loglog.eps "fig:"){width="0.29\linewidth"}
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Systematic uncertainties of the theory predictions were not considered.
The large value of the transport coefficient $\mean{\hat{q}}=13.2^{+2.1}_{-3.2}$ GeV$^2$/fm from the best fit to the PQM model [@PQM] (where $\hat{q}=\mu^2/\lambda$, with $\mu$ the average 4-momentum transfer to the medium per mean free path $\lambda$) is a subject of some debate in both the more fundamental QCD community [@BS06] and the more phenomenological community [@fragility]. For instance, it was stated in Ref. [@fragility] that “the dependence of $R_{AA}$ on $\hat{q}$ becomes weaker as $\hat{q}$ increases”, as is clear from Fig. \[fig:pi0pqm\]b. It was also asserted that “when the values of the time-averaged transport coefficient $\hat{q}$ exceeds 5 GeV$^2$/fm, $R_{AA}$ gradually loses its sensitivity.” That statement also appeared reasonable. However, given the opportunity of looking at a whole range of theoretical predictions (kindly provided by the PQM authors [@PQM]), rather than just the one that happens to fit the data, we experimentalists learned something about the theory that was different from what the theorists emphasized. By simply looking at the PQM predictions on a log-log plot (Fig. \[fig:pi0pqm\]c), it became evident that the PQM prediction could be parameterized as $R_{AA}[p_T=20 {\rm~GeV/c}]=0.75/\sqrt{\hat{q}\,({\rm~GeV^2/fm})}$ over the range $5<\hat{q}<100$ GeV$^2$/fm. This means that in this range the fractional sensitivity to $\hat{q}$ is simply proportional to the fractional uncertainty in $R_{AA}$, i.e. $\Delta\hat{q}/\hat{q}=2.0\times \Delta R_{AA}/R_{AA}$, so that improving the precision of $R_{AA}$, e.g. in the range $10\leq p_T\leq 20$ GeV/c, will lead to improved precision on $\mean{\hat{q}}$. This is a strong incentive for experimentalists. Similarly, it should give the theorists an incentive to improve their (generally unstated) systematic uncertainties.
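The log-log observation above can be checked with a few lines of arithmetic. A minimal sketch, assuming only the quoted parameterization $R_{AA}[p_T=20~{\rm GeV/c}]=0.75/\sqrt{\hat{q}}$ over $5<\hat{q}<100$ GeV$^2$/fm (the function names are mine):

```python
import math

def raa_pqm_20gev(qhat):
    """Parameterization of the PQM prediction at p_T = 20 GeV/c,
    R_AA = 0.75 / sqrt(qhat), valid roughly for 5 < qhat < 100 GeV^2/fm."""
    return 0.75 / math.sqrt(qhat)

def qhat_from_raa(raa):
    """Inverted parameterization: qhat = (0.75 / R_AA)^2, in GeV^2/fm."""
    return (0.75 / raa) ** 2

# Since qhat scales as R_AA**-2, error propagation gives
#   Delta(qhat)/qhat = 2.0 * Delta(R_AA)/R_AA,
# so e.g. a 10% measurement of R_AA at p_T = 20 GeV/c would pin down
# qhat to about 20%.
```

Note that plugging the best-fit $\mean{\hat{q}}\approx 13.2$ GeV$^2$/fm into the parameterization indeed returns $R_{AA}\approx 0.21$, consistent with the measured $R_{AA}^{\pi^0}\approx 0.20$.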
$R_{AA}$ vs. the reaction plane
-------------------------------
Another good synergy between experimentalists and theorists is the study of $R_{AA}$ as a function of the angle to the reaction plane and centrality, in order to understand the effect of varying the initial conditions (centrality) and the path length through the medium (angle). When PHENIX first presented results on $R_{AA}(p_T)$ vs. the angle $\Delta\phi$ to the reaction plane [@ppg054], there was a reaction from the flow community that this is nothing other than a different way to present the anisotropic flow, $v_2$. This is strictly not true, for two reasons: 1) $v_2$ measurements are relative, while $R_{AA}(\Delta\phi, p_T)$ is an absolute measurement including efficiency, acceptance and all other such corrections; 2) if and only if the angular distribution of high-$p_T$ suppression around the reaction plane were simply a second harmonic, so that all the harmonics other than $v_2$ vanish (and why should that be?), then $R_{AA}(\Delta\phi, p_T)/R_{AA}(p_T)=1+2 v_2\cos 2\Delta\phi$. Nevertheless, whatever the actual form of the angular distribution, it is true that $R_{AA}(\Delta\phi, p_T)/R_{AA}(p_T)=[dN(\Delta\phi, p_T)/d\Delta\phi]/\mean{dN(\Delta\phi, p_T)/d\Delta\phi}$; but without the absolute values it is impossible to tell whether $R_{AA}(\Delta\phi, p_T)$ approaches or exceeds 1 (or any other value) at some value of $\Delta\phi$.
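The pure-second-harmonic special case in point 2) is easy to sketch numerically: if the azimuthal distribution really contained only a $v_2$ modulation, the ratio would be $1+2v_2\cos 2\Delta\phi$, which averages to unity over $\Delta\phi$, illustrating why the relative modulation alone carries no absolute-scale information. A minimal sketch (the $v_2$ value is passed in as an illustrative parameter):

```python
import math

def raa_ratio_pure_v2(dphi, v2):
    """R_AA(dphi, p_T) / R_AA(p_T) in the special case that the azimuthal
    distribution of the suppressed yield is a pure second harmonic, i.e.
    all harmonics other than v2 vanish: 1 + 2 v2 cos(2 dphi)."""
    return 1.0 + 2.0 * v2 * math.cos(2.0 * dphi)

# The ratio is maximal in-plane (dphi = 0) and minimal out-of-plane
# (dphi = pi/2), and it averages to 1 over dphi by construction, so the
# relative modulation fixes nothing about the absolute normalization.
```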
For instance in a new result this year, PHENIX has observed a striking difference in the behavior of the dependence of the in-plane $R_{AA}(\Delta\phi\sim0, p_T)$ for $\pi^0$ as a function of centrality $N_{\rm part}$ compared to the dependence of the $R_{AA}(\Delta\phi\sim \pi/2, p_T)$ in the direction perpendicular to the reaction plane [@ppg092] (Fig. \[fig:ppg092\]).
![Nuclear Modification Factor, $R_{AA}^{\pi^0}$ in reaction-plane bins as a function of $p_T$ and centrality ($N_{\rm part}$) in Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV [@ppg092]. Filled circles represent $R_{AA}(0< \Delta\phi < 15^\circ)$ (in-plane), open squares $R_{AA}(75< \Delta\phi < 90^\circ)$ (out-of-plane) and filled triangles $R_{AA}(30< \Delta\phi < 45^\circ)$. \[fig:ppg092\] ](figs/fig_RAANpartPt_logy.eps){width="0.75\linewidth"}
The $R_{AA}$ perpendicular to the reaction plane is relatively constant with centrality, while the predominant variation of $R_{AA}$ with centrality comes in the direction parallel to the reaction plane. This is a clear demonstration of the sensitivity of $R_{AA}$ to the length traversed in the medium, which is relatively constant as a function of centrality perpendicular to the reaction plane but depends strongly on centrality parallel to it. This is a fantastic but reasonable result, and it suggests that tests of the many models of energy loss should first concentrate on comparing the centrality dependence in the direction parallel to the reaction plane, where the length traversed depends strongly on centrality, to that in the perpendicular direction, where the length doesn’t change much with centrality, before tackling the entire angular distribution.
The theorists have not been idle on this issue and are making great strides by attempting to put all the theoretical models of jet quenching into a common nuclear-geometrical and medium-evolution formalism, so as to get an idea of the fundamental differences in the models [@Bass*], “evaluated on identical media, initial state and final fragmentation. The only difference in models will be in the Eloss kernel.” The different models [@Bass*] all agreed with the measured $R_{AA}(p_T)$ (Fig. \[fig:th-angle\]a); but the agreement with the measured $R_{AA}(\Delta\phi, p_T)$, as shown by the $R_{AA}$(out)/$R_{AA}$(in) ratio, is not very good (Fig. \[fig:th-angle\]b). Hopefully the latest PHENIX results [@ppg092] (Fig. \[fig:ppg092\]) will suggest the way to further improvement.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![a) (left) Three theoretical models [@Bass*] compared to PHENIX $R_{AA}^{\pi^0}(p_T)$ in Au+Au collisions at 0-5% and 20-30% centrality [@pi0-QM05]. b) (right) $R_{AA}$(out)/$R_{AA}$(in) ratio for same 3 models at 20-30% centrality ($N_{\rm part}=167\pm 5$) . \[fig:th-angle\] ](figs/RAA_centrality.eps "fig:"){width="0.48\linewidth"} ![a) (left) Three theoretical models [@Bass*] compared to PHENIX $R_{AA}^{\pi^0}(p_T)$ in Au+Au collisions at 0-5% and 20-30% centrality [@pi0-QM05]. b) (right) $R_{AA}$(out)/$R_{AA}$(in) ratio for same 3 models at 20-30% centrality ($N_{\rm part}=167\pm 5$) . \[fig:th-angle\] ](figs/RAA_out_in_ratio.eps "fig:"){width="0.48\linewidth"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Do direct-$e^{\pm}$ from Heavy Flavors indicate one or two theoretical crises? {#sec:bc}
==============================================================================
PHENIX was specifically designed to be able to detect charm particles via direct single-$e^{\pm}$ from their semileptonic decay. Fig. \[fig:f7\]a shows our direct single-$e^{\pm}$ measurement in p-p collisions at $\sqrt{s}=200$ GeV [@PXcharmpp06] in agreement with a QCD calculation [@forRamona] of $c$ and $b$ quarks as the source of the direct single-$e^{\pm}$ (heavy-flavor $e^{\pm}$).
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![a) (left) Invariant cross section of direct single-$e^{\pm}$ in p-p collisions [@PXcharmpp06] compared to theoretical predictions from $c$ and $b$ quark semileptonic decay. [@forRamona] b) (right) $R_{AA}$ as a function of $p_T$ for direct single-$e^{\pm}$ [@PXPRL97e], $\pi^0$ and $\eta$ in Au+Au central (0-10%) collisions at $\sqrt{s_{NN}}=200$ GeV.[]{data-label="fig:f7"}](figs/single-e-pp.eps "fig:"){width="0.53\linewidth"} ![a) (left) Invariant cross section of direct single-$e^{\pm}$ in p-p collisions [@PXcharmpp06] compared to theoretical predictions from $c$ and $b$ quark semileptonic decay. [@forRamona] b) (right) $R_{AA}$ as a function of $p_T$ for direct single-$e^{\pm}$ [@PXPRL97e], $\pi^0$ and $\eta$ in Au+Au central (0-10%) collisions at $\sqrt{s_{NN}}=200$ GeV.[]{data-label="fig:f7"}](figs/AverbeckQM08-e.eps "fig:"){width="0.51\linewidth"}
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
In Au+Au collisions, a totally unexpected result was observed: the direct single-$e^{\pm}$ from heavy quarks are suppressed the same as the $\pi^0$ and $\eta$ from light quarks (and gluons) in the range $4\leq p_T\leq 9$ GeV/c, where the $b$ and $c$ contributions are roughly equal (Fig. \[fig:f7\]b) [@PXPRL97e]. This strongly disfavors the QCD energy-loss explanation of jet quenching because, naively, heavy quarks should radiate much less than light quarks and gluons in the medium; but it opens up a whole range of new possibilities, including string theory [@egsee066].
Zichichi to the rescue?
-----------------------
In September 2007, I read an article by Nino Zichichi, “Yukawa’s gold mine”, in the CERN Courier, taken from his talk at the 2007 International Nuclear Physics meeting in Tokyo, Japan, in which he proposed: “the reason why the top quark appears to be so heavy (around 200 GeV) could be the result of some, so far unknown, condition related to the fact that the final state must be QCD-colourless. We know that confinement produces masses of the order of a giga-electron-volt. Therefore, according to our present understanding, the QCD colourless condition cannot explain the heavy quark mass. However, since the origin of the quark masses is still not known, it cannot be excluded that in a QCD coloured world, the six quarks are all nearly massless and that the colourless condition is ‘flavour’ dependent.”
Nino’s idea really excited me, even though (or perhaps because) it appeared to overturn two of the major tenets of the Standard Model, since it seemed to imply that QCD isn’t flavor blind and that the masses of quarks aren’t given by the Higgs mechanism. Massless $b$ and $c$ quarks in a color-charged medium would be the simplest way to explain the apparent equality of gluon, light-quark and heavy-quark suppression indicated by the equality of $R_{AA}$ for $\pi^0$ and $R_{AA}$ of direct single-$e^{\pm}$ in regions where both $c$ and $b$ quarks dominate. Furthermore, RHIC and LHC-Ions are the only places in the Universe to test this idea. Nino’s idea seems much more reasonable to me than the string-theory explanations of heavy-quark suppression (especially since they can’t explain light-quark suppression). Nevertheless, just to be safe, I asked some distinguished theorists what they thought, with these results: “Oh, you mean the Higgs field can’t penetrate the QGP” (Stan Brodsky); “You mean that the propagation of heavy and light quarks through the medium is the same” (Rob Pisarski); “The Higgs coupling to vector bosons $\gamma$, $W$, $Z$ is specified in the standard model and is a fundamental issue. One big question to be answered by the LHC is whether the Higgs gives mass to fermions or only to gauge bosons. The Yukawa couplings to fermions are put in by hand and are not required”; “What sets fermion masses, mixings?” (Chris Quigg, Moriond 2008); “No change in the $t$-quark, $W$, Higgs mass relationship if there is no Yukawa coupling: but there could be other changes” (Bill Marciano).
Nino proposed to test his idea by shooting a proton beam through a QGP formed in a Pb+Pb collision at the LHC and seeing the proton ‘dissolved’ by the QGP. My idea is to use the new PHENIX vertex detector, to be installed in 2010, to map out, on an event-by-event basis, the di-hadron correlations from identified $b,\overline{b}$ di-jets and identified $c,\bar{c}$ di-jets, which do not originate from the vertex, and light quark and gluon di-jets, which originate from the vertex and can be measured with $\pi^0$-hadron correlations. These measurements will confirm in detail (or falsify) whether the different flavors of quarks behave as if they have the same mass in a color-charged medium. Depending on when LHC-Ions running starts, it is conceivable that ALICE or another LHC experiment with a good vertex detector could beat RHIC to the punch, since this measurement compares the energy loss of light and heavy quarks and may not need p-p comparison data.
If Nino’s proposed effect is true, that the masses of fermions are not given by the Higgs, and we can confirm the effect at RHIC or LHC-Ions, this would be a case where Relativistic Heavy Ion Physics may have something unique to contribute at the most fundamental level to the Standard Model—a “transformational discovery.” Of course the LHC or Tevatron could falsify this idea by finding the Higgs decay to $b,\overline{b}$ at the expected rate in p-p collisions.
Soft physics projections for LHC
================================
Some soft physics issues at LHC are also very interesting to me. Marek Gazdzicki has popularized 3 features from the NA49 results [@MGQM04] at the CERN SpS fixed target heavy ion program, which he calls ‘the kink’, ‘the horn’ and ‘the step’. I believe that ‘the kink’ is certainly correct (Fig. \[fig:kink\]a) and has relevance to the LHC program. The ‘kink’ reflects the fact that the wounded nucleon model (WNM) [@WNM] works only at $\sqrt{s_{NN}}\sim 20$ GeV where it was discovered [@Busza; @WA80] and fails above and below $\sqrt{s_{NN}}\sim 20$ GeV: wounded projectile nucleons below 20 GeV at mid-rapidity [@E802-87]; wounded projectile quarks (AQM) [@AQM], 31 GeV and above [@Ochiai; @Nouicer].
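The ‘kink’ is usually displayed against Gazdzicki’s variable $F=(\sqrt{s_{NN}}-2m_N)^{3/4}/\sqrt{s_{NN}}^{1/4}$ (see the caption of Fig. \[fig:kink\]a). A minimal Python sketch (my own illustration; the nucleon mass value is an assumed standard input, not from the text) shows that $F$ indeed tracks $\sqrt{s_{NN}}^{1/2}$ once the threshold term becomes negligible at collider energies:

```python
import math

M_N = 0.939  # nucleon mass in GeV (assumed standard value)

def fermi_F(sqrt_s_nn):
    """Gazdzicki's variable F = (sqrt(s_NN) - 2 m_N)^(3/4) / sqrt(s_NN)^(1/4)."""
    return (sqrt_s_nn - 2 * M_N) ** 0.75 / sqrt_s_nn ** 0.25

# From SpS to RHIC energies, F approaches sqrt(sqrt(s_NN)):
for e in (8.8, 17.3, 200.0):
    print(f"sqrt(s_NN) = {e:6.1f} GeV   F = {fermi_F(e):6.2f}   sqrt(sqrt_s) = {math.sqrt(e):6.2f}")
```

At $\sqrt{s_{NN}}=200$ GeV the two quantities agree to better than 2%, while at SpS energies the $2m_N$ threshold term still matters.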
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![a) (left) Gazdzciki’s plot [@MGQM04] of pions/participant vs. $F=(\sqrt{s_{NN}}-2m_N)^{3/4}/\sqrt{s_{NN}}^{1/4} \approx \sqrt{s_{NN}}^{1/2}$ in A+A and p+p collisions. b) (right) Wit Busza’s prediction for the number of charged particles per participant pair vs. $\ln^2 \sqrt{s}$(GeV) [@LastCall]. \[fig:kink\] ](figs/thekink-fig4a.eps "fig:"){width="0.48\linewidth"} ![a) (left) Gazdzciki’s plot [@MGQM04] of pions/participant vs. $F=(\sqrt{s_{NN}}-2m_N)^{3/4}/\sqrt{s_{NN}}^{1/4} \approx \sqrt{s_{NN}}^{1/2}$ in A+A and p+p collisions. b) (right) Wit Busza’s prediction for the number of charged particles per participant pair vs. $\ln^2 \sqrt{s}$(GeV) [@LastCall]. \[fig:kink\] ](figs/WBfig_4.eps "fig:"){width="0.52\linewidth"}
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
This led me to speculate that maybe the charged-particle multiplicity or sum-transverse energy might be the only quantity to exhibit point-like $N_{\rm coll}$ scaling at LHC energies. However, Wit Busza’s prediction has the charged-particle multiplicity per participant pair increasing by the same ratio from RHIC to LHC in both A+A and p-p collisions, which implies that the AQM will still work at the LHC. This makes me think that $N_{\rm coll}$ scaling for soft processes at LHC is unlikely.
A more interesting soft-physics issue for the LHC concerns the possible increase of the anisotropic flow $v_2$ beyond the ‘hydrodynamic limit’. Wit Busza’s extrapolation [@LastCall] of $v_2$ to the LHC energy is shown in Fig. \[fig:v2limit\]a, a factor of 1.6 increase from RHIC.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --
![a) (left) Busza’s extrapolation of $v_2$ to LHC [@LastCall]. b) (center) $v_2/\varepsilon$ vs ‘Bjorken multiplicity density’, $(1/S) dN_{\rm ch}/dy$ [@Alt]. c) (right) ‘Hydro Limit’ calculated in viscous Hydrodynamics for several values of the initial energy density $e_0$ [@SongHeinz]. \[fig:v2limit\] ](figs/WBfig_10.eps "fig:"){width="0.35\linewidth"} ![a) (left) Busza’s extrapolation of $v_2$ to LHC [@LastCall]. b) (center) $v_2/\varepsilon$ vs ‘Bjorken multiplicity density’, $(1/S) dN_{\rm ch}/dy$ [@Alt]. c) (right) ‘Hydro Limit’ calculated in viscous Hydrodynamics for several values of the initial energy density $e_0$ [@SongHeinz]. \[fig:v2limit\] ](figs/Alt-v2_over_eps.eps "fig:"){width="0.33\linewidth"} ![a) (left) Busza’s extrapolation of $v_2$ to LHC [@LastCall]. b) (center) $v_2/\varepsilon$ vs ‘Bjorken multiplicity density’, $(1/S) dN_{\rm ch}/dy$ [@Alt]. c) (right) ‘Hydro Limit’ calculated in viscous Hydrodynamics for several values of the initial energy density $e_0$ [@SongHeinz]. \[fig:v2limit\] ](figs/SongHeinzFig7b.eps "fig:"){width="0.33\linewidth"}
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --
A previous paper by NA49 [@Alt], which compared $v_2$ measurements from AGS and CERN fixed-target experiments to RHIC as a function of the ‘Bjorken multiplicity density’, $(1/S)\,dn_{\rm ch}/d\eta$, where $S$ is the overlap area of the collision zone and $\varepsilon$ is its eccentricity, showed an increase in $v_2/\varepsilon$ from fixed-target energies to RHIC, leading to a “hydro limit” (Fig. \[fig:v2limit\]b). This limit was confirmed in a recent calculation using viscous relativistic hydrodynamics [@SongHeinz], which showed a clear hydro limit of $v_2/\varepsilon=0.20$ (Fig. \[fig:v2limit\]c). This limit is sensitive to the ratio of viscosity to entropy density, the now famous $\eta/s$, but negligibly sensitive to the maximum energy density of the collision, so I assume that this calculation would give a hydro limit at the LHC not too different from RHIC, $v_2/\varepsilon\approx 0.20$. Busza’s extrapolation of a factor of 1.6 increase in $v_2$ from RHIC to LHC, combined with $v_2/\varepsilon$ from Fig. \[fig:v2limit\]b, gives $v_2/\varepsilon=0.32$ at LHC. In my opinion this is a measurement which can be done to high precision on the first day of Pb+Pb collisions at the LHC, since it is high rate and needs no p-p comparison data. Personally, I wonder what the hydro aficionados would say if both Heinz’s and Busza’s predictions turned out to be correct.
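The arithmetic of this comparison is simple enough to spell out; a minimal sketch using the numbers quoted above (assuming the eccentricity $\varepsilon$ is unchanged, so that scaling $v_2$ by 1.6 scales $v_2/\varepsilon$ by the same factor):

```python
hydro_limit = 0.20        # v2/eps hydro limit from viscous hydrodynamics (Song & Heinz)
v2_over_eps_rhic = 0.20   # RHIC sits essentially at the hydro limit (Fig. b)
busza_factor = 1.6        # Busza's extrapolated v2 increase, RHIC -> LHC

# With eps fixed, a factor-1.6 increase in v2 is a factor-1.6 increase in v2/eps:
v2_over_eps_lhc = busza_factor * v2_over_eps_rhic
print(f"Busza extrapolation: v2/eps(LHC) = {v2_over_eps_lhc:.2f} vs hydro limit {hydro_limit:.2f}")
```

The extrapolated value, 0.32, overshoots the hydro limit by 60%, which is exactly the tension posed above.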
[99]{} A. Adare, [*et al.*]{} (PHENIX), . A. Adare, [*et al.*]{} (PHENIX), . F. W. Büsser, [*et al.*]{}, , see also [*Proc. 16th Int. Conf. HEP*]{}, eds. J. D. Jackson and A. Roberts, (NAL, Batavia, IL, 1972) Vol. 3, p. 317. M. Banner, [*et al.*]{}, . B. Alper, [*et al.*]{}, . S. M. Berman, J. D. Bjorken and J. B. Kogut, . R. Blankenbecler, S. J. Brodsky, J. F. Gunion, . R. F. Cahalan, K. A. Geer, J. Kogut and Leonard Susskind, . A. L. S. Angelis, [*et al.*]{}, . See also, A. G. Clark, [*et al.*]{}, . F. W. Büsser, [*et al.*]{}, . C. Kourkoumelis, [*et al.*]{}, . J. F. Owens, E. Reya, M. Glück, ; J. F. Owens and J. D. Kimel, . Also, see, R. P. Feynman, R. D. Field and G. C. Fox, . T. [Å]{}kesson, [*et al.*]{}, . e.g. for a review, see M. J. Tannenbaum, . , Paris, 1982, eds P. Petiau, M. Porneuf, J. Phys. C[**3**]{} (1982): see J. P. Repellin, p. C3-571; also see M. J. Tannenbaum, p. C3-134, G. Wolf, p. C3-525. e.g. see M. J. Tannenbaum, PoS(LHC07)004. M. Shimomura, et al. (PHENIX), . A. Adare, [*et al.*]{} (PHENIX Collab.), . I. Vitev, , and private communication to Ref. [@ppg084]. See R. Baier, D. Schiff and B. G. Zakharov, . F. Arleo and D. d’Enterria, . D. d’Enterria, . G. David, . T. Matsui and H. Satz, . A. Adare, [*et al.*]{} (PHENIX Collab.), . L. Grandchamp, R. Rapp, and G. E. Brown, . S. S. Adler, [*et al.*]{} (PHENIX Collab.), . S. S. Adler, [*et al.*]{} (PHENIX Collab.), . S. S. Adler, [*et al.*]{} (PHENIX Collab.), . A. Adare, [*et al.*]{} (PHENIX Collab.), . V. Greco, C. M. Ko and P. Levai,; R. J. Fries, B. Müller, and C. Nonaka, [*ibid*]{}, [202303]{}; R. C. Hwa, and references therein. S. S. Adler, [*et al.*]{} (PHENIX Collab.), . S. J. Brodsky and A. Sickles, .
E. Cuautle and G. Paic, . K. Adcox, [*et al.*]{} (PHENIX Collab.), . M. van Leeuwen, [*et al.*]{} (STAR Collab.), to appear in [**]{}, DOI: 10.1140/epjc/s10052-009-1007-1. C. Nattrass, these proceedings. Workshop on the Ridge, September 22–24, 2008, Brookhaven National Laboratory, Upton, NY 11973-5000, USA. S. Afanasiev, [*et al.*]{} (PHENIX Collab.), . A. Adare, [*et al.*]{} (PHENIX Collab.), . e.g. see N. P. Samios, . N. M. Kroll and W. Wada, . A. Adare, [*et al.*]{} (PHENIX Collab.), arXiv:0804.4168v1, subm. [**]{} . A. S. Ito, [*et al.*]{}, . H. Fritzsch and P. Minkowski, . S. S. Adler [*et al.*]{} (PHENIX Collab.), . Thanks to Sasha Milov for the plot of $R_{AA}(p_T)$ for all PHENIX published and preliminary measurements. With the exception of the internal-conversion direct-$\gamma$ where the fit to the p-p data is used to compute $R_{AA}$, all the other values of $R_{AA}$ are computed from Eq. \[eq:RAA\] using the measured Au+Au and p-p data points. D. d’Enterria and D. Peressounko, . H. Paukkunen, K. J. Eskola and C. A. Salgado, arXiv:0903.1956v1. A. Adare, [*et al.*]{} (PHENIX Collab.), . A. Dainese, C. Loizides and G. Paic, ; C. Loizides, [*ibid*]{}, . R. Baier and D. Schiff, [*J. High Energy Phys.*]{} 09 (2006) 059. K. J. Eskola, H. Honkanen, C. A. Salgado and U. A. Wiedemann, . S. S. Adler, [*et al.*]{} (PHENIX Collab.), . S. Afanasiev, [*et al.*]{} (PHENIX Collab.), arXiv:0903.4886v1, submitted to . S. A. Bass, C. Gale, A. Majumder, C. Nonaka, G.-Y. Qin, T. Renk, J. Ruppert, ; arXiv:0808.0908v3. A. Adare, [*et al.*]{} (PHENIX Collab.), . M. Cacciari, P. Nason and R. Vogt, . A. Adare, [*et al.*]{} (PHENIX Collab.), . e.g. see Ref. [@PXPRL97e] for a list of references. M. Gazdzicki, [*et al.*]{} (NA49 Collab.), . N. Armesto, N. Borghini, S. Jeon, U. A. Wiedemann, [*et al.*]{}, . A. Białas, A. Błeszyński and W. Czyż, . W. Busza, [*et al.,*]{} . See also Ref. [@WA80]. S.P. Sorensen [*et al.*]{} (WA80 Collab.), ; R. Albrecht [*et al.*]{} (WA80 Collab.), ; .
T. Abbott, [*et al.*]{} (E802 Collab.), ; L.P. Remsberg, M.J. Tannenbaum [*et al.*]{} (E802 Collab.), ; A. Białas, W. Czyż and L. Leśniak, . T. Ochiai, ; , and references therein. R. Nouicer, ; see also B. De and S. Bhattacharyya, . C. Alt, [*et al.*]{} (NA49 Collab.), . H. Song and U. Heinz, .
[^1]: Supported by the U.S. Department of Energy, Contract No. DE-AC02-98CH1-886.
[^2]: The clear break of the exponential to a power law at $\sqrt{s}=200$ GeV is shown in the inset of Fig. \[fig:PXpi0pp\]-(left).
[^3]: There is an acknowledgement in this paper which is worthy of note:“Two of us (J. K. and L. S.) also thank S. Brodsky for [*emphasizing to us* ]{} that the present data on wide-angle hadron scattering [*show no evidence for vector exchange.”*]{}
---
abstract: 'For any $k\geq 1$, this paper studies the number of polynomials having $k$ irreducible factors (counted with or without multiplicities) in $\mathbf{F}_q[t]$ among different arithmetic progressions. We obtain asymptotic formulas for the difference of counting functions uniformly for $k$ in a certain range. In the generic case, the bias dissipates as the degree of the modulus or $k$ gets large, but there are cases when the bias is extreme. In contrast to the case of products of $k$ prime numbers, we show the existence of complete biases in the function field setting, that is, the difference function may have constant sign. Several examples illustrate this new phenomenon.'
address:
- 'Centre de recherches mathématiques, Université de Montréal, Pavillon André-Aisenstadt, 2920 Chemin de la tour, Montréal, Québec, H3T 1J4, Canada'
- 'Mathematisches Institut, Georg-August Universität Göttingen, Bunsenstra[ß]{}e 3-5, D-37073 Göttingen, Germany'
author:
- Lucile Devin
- Xianchang Meng
bibliography:
- 'biblio.bib'
title: 'Chebyshev’s bias for products of irreducible polynomials'
---
Introduction
============
Background
----------
The notion of Chebyshev’s bias originally refers to the observation in [@ChebLetter] that there seem to be more primes congruent to $3 \bmod 4$ than to $1 \bmod 4$ in initial intervals of the integers. More generally, it is interesting to study the function $ \pi(x; q,a) -\pi(x;q, b)$, where $\pi(x; q,a)$ is the number of primes $\leq x$ that are congruent to $a \bmod q$. Under the Generalized Riemann Hypothesis (GRH) and the Linear Independence (LI) conjecture for zeros of the Dirichlet $L$-functions, Rubinstein and Sarnak [@RS] gave a framework to study Chebyshev’s bias quantitatively. Precisely, they showed that the logarithmic density $\delta(q;a,b)$ of the set of $x\geq2$ for which $\pi(x; q,a)>\pi(x;q, b)$ exists, and in particular $\delta(4;3,1)\approx 0.9959$. Many related questions have been asked and answered since then; we refer to the expository articles of Ford and Konyagin [@FordKonyagin_expository] and of Granville and Martin [@GranvilleMartin] for detailed reviews of the subject.
In this article we consider products of $k$ irreducible polynomials among different congruence classes. Our results are uniform for $k$ in a certain range, and we show that in some cases the *bias* (see Definition \[Defn-bias\]) in the distribution can approach any value between $0$ and $1$. The idea of this paper is motivated by two different generalizations of Chebyshev’s bias.
On one hand, Ford and Sneed [@FordSneed2010] adapted the observation of Chebyshev’s bias to quasi-prime numbers, i.e. products of two primes $p_1 p_2$ (allowing $p_1=p_2$). They showed under GRH and LI that the direction of the bias for products of two primes is opposite to the bias among primes, and that the bias decreases. Similar results were developed in [@DummitGranvilleKisilevsky; @Moree2004]. Recently, under GRH and LI, the second author [@Meng2017; @Meng2017L] generalized the results of [@RS], [@FordSneed2010] and [@DummitGranvilleKisilevsky] to products of any $k$ primes among different arithmetic progressions, and observed that the bias changes direction according to the parity of $k$.
On the other hand, using the analogy between the ring of integers and polynomial rings over finite fields, Cha [@Cha2008] adapted the results of [@RS] to irreducible polynomials over finite fields. Cha discovered a surprising phenomenon: in the case of polynomial rings there are biases in unexpected directions. Further generalizations have been studied since then in [@ChaKim; @ChaIm; @CFJ; @Perret-Gentil].
Fix a finite field $\mathbf{F}_{q}$ and a polynomial $M \in \mathbf{F}_{q}[t]$ of degree $d \geq 1$; we study the distribution in congruence classes modulo $M$ of monic polynomials with $k$ irreducible factors. More precisely, let $A, B\subset (\mathbf{F}_{q}[t]/(M))^{*}$ be subsets of invertible classes modulo $M$; for any $k\geq 1$ and any $X \geq 1$ we define the normalized[^1] difference function $$\begin{gathered}
\Delta_{f_k}(X; M, A, B) \\ := \frac{X (k-1)!}{q^{X/2}(\log X)^{k-1}} \Big(\frac{1}{\lvert A\rvert}\lvert \lbrace N \in \mathbf{F}_{q}[t]: N \text{ monic, } \deg{N} \leq X,~ f(N)=k, N \bmod M \in A \rbrace\rvert \\
- \frac{1}{\lvert B\rvert}\lvert \lbrace N \in \mathbf{F}_{q}[t]: N \text{ monic, } \deg{N} \leq X,~ f(N)=k, N \bmod M \in B \rbrace\rvert \Big) \\\end{gathered}$$ where $f=\Omega$ or $\omega$ is the number of irreducible factors, counted with or without multiplicity. We study the distribution of the values of the function $\Delta_{f_k}(X; M, A, B)$; in particular we are interested in the bias of this function towards positive values.
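For small $q$ and $X$, the counting functions inside $\Delta_{f_k}$ can be checked by brute force. The following pure-Python sketch (our illustration, restricted to prime $q$ for simplicity; all helper names are ours, not the paper's) enumerates monic polynomials over $\mathbf{F}_p$, computes $\Omega(N)$ by trial division against a sieved list of monic irreducibles, and tallies one class count:

```python
from itertools import product

def trim(c):
    c = list(c)
    while len(c) > 1 and c[-1] == 0:
        c.pop()
    return c

def poly_rem(num, den, p):
    """Remainder of num modulo den over F_p (p prime); coefficients low -> high."""
    num = list(num)
    d = len(den) - 1
    inv = pow(den[-1], p - 2, p)  # inverse of leading coefficient (Fermat)
    for i in range(len(num) - 1, d - 1, -1):
        c = num[i] * inv % p
        for j in range(d + 1):
            num[i - d + j] = (num[i - d + j] - c * den[j]) % p
    return trim(num)

def poly_quo(num, den, p):
    """Quotient of num by den over F_p (used only when den divides num)."""
    num = list(num)
    d = len(den) - 1
    inv = pow(den[-1], p - 2, p)
    quo = [0] * max(len(num) - d, 1)
    for i in range(len(num) - 1, d - 1, -1):
        c = num[i] * inv % p
        quo[i - d] = c
        for j in range(d + 1):
            num[i - d + j] = (num[i - d + j] - c * den[j]) % p
    return trim(quo)

def monic(deg, p):
    """All monic polynomials of exact degree deg over F_p."""
    for tail in product(range(p), repeat=deg):
        yield list(tail) + [1]

def irreducibles(maxdeg, p):
    """Monic irreducibles of degree <= maxdeg: f of degree d is irreducible
    iff no irreducible of degree <= d/2 divides it."""
    irr = []
    for d in range(1, maxdeg + 1):
        for f in monic(d, p):
            if all(poly_rem(f, g, p) != [0] for g in irr if 2 * (len(g) - 1) <= d):
                irr.append(f)
    return irr

def big_omega(f, p, irr):
    """Omega(f): number of irreducible factors counted with multiplicity."""
    f = trim(f)
    count = 0
    for g in irr:
        while len(f) > 1 and poly_rem(f, g, p) == [0]:
            f = poly_quo(f, g, p)
            count += 1
    return count

def count_in_class(p, X, k, M, cls):
    """|{N monic : deg N <= X, Omega(N) = k, N mod M in cls}| (cls: residue tuples)."""
    irr = irreducibles(X, p)
    return sum(1 for d in range(1, X + 1) for N in monic(d, p)
               if big_omega(N, p, irr) == k and tuple(poly_rem(N, M, p)) in cls)
```

For instance, with $p=3$, $M=t$, $k=1$ and $X=2$, the class $1 \bmod t$ contains the irreducibles $t+1$ and $t^2+1$, while $t+2$, $t^2+t+2$ and $t^2+2t+2$ lie in $2 \bmod t$: a tiny instance of the race studied below.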
\[Defn-bias\] Let $F:\mathbf{N}\rightarrow\mathbf{R}$ be a real function, we define the *bias* of $F$ towards positive values as the natural density (if it exists) of the set of integers having positive image by $F$: $$\begin{aligned}
\operatorname{dens}(F >0) = \lim_{X\rightarrow\infty}\frac{ \lvert\lbrace n \leq X : F(n)>0 \rbrace\rvert }{X}.\end{aligned}$$ If the limit does not exist, we say that the bias is not well defined.
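Definition \[Defn-bias\] can be illustrated on a toy oscillating function (our example, not from the paper): for $F(n)=\cos(0.7n)+0.3$, equidistribution of $0.7n \bmod 2\pi$ gives $\operatorname{dens}(F>0) = \arccos(-0.3)/\pi \approx 0.597$, and the finite-$X$ proportions converge to that value:

```python
import math

def empirical_bias(F, X):
    """Proportion of n <= X with F(n) > 0: a finite-X approximation of dens(F > 0)."""
    return sum(1 for n in range(1, X + 1) if F(n) > 0) / X

# Toy oscillation with a positive mean shift; 0.7/(2*pi) irrational, so the
# angles equidistribute and the bias is the measure of {cos(theta) > -0.3}.
F = lambda n: math.cos(0.7 * n) + 0.3
predicted = math.acos(-0.3) / math.pi
print(empirical_bias(F, 10**5), predicted)
```

The functions $\Delta_{f_k}$ studied here oscillate in a similar almost-periodic way, which is why their bias is a natural density rather than a limit of the function itself.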
Values of the bias
------------------
In this section we present our main result which is the consequence of the asymptotic formula obtained in Theorem \[Th\_Difference\_k\_general\_deg<X\].
Given a field $\mathbf{F}_{q}$ with odd characteristic and a square-free polynomial $M$ in $\mathbf{F}_{q}[t]$, we examine more carefully the case of races between quadratic residues ($A = \square$) and non-quadratic residues ($B =\boxtimes$) modulo $M$.
We say that $M$ satisfies (LI) if the multi-set $$\lbrace\pi\rbrace\cup\bigcup\limits_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 , \chi \neq \chi_0}}\lbrace \gamma \in [0,\pi] : L(\frac{1}{2} + i\gamma, \chi) = 0 \rbrace$$ is linearly independent over $\mathbf{Q}$ (see Section \[subsec\_Lfunctions\] for the definition of the $L$-functions).
We study the variation of the values of the bias when the degree of the modulus $M$ gets large. In particular, we show that the values of the bias are dense in $[\tfrac{1}{2},1]$.
\[Th central limit for M under LI\] Let $\mathbf{F}_{q}$ be a finite field of odd characteristic and $k$ a positive integer. Suppose that for every $d,r$ large enough, there exists a monic polynomial $M_{d,r} \in \mathbf{F}_q[t]$ with
1. $\deg M_{d,r} =d$,
2. $\omega(M_{d,r}) = r$,
3. $M_{d,r}$ satisfies *(LI)*.
Then for $f=\Omega$ or $\omega$, $$\overline{\lbrace \operatorname{dens}((\epsilon_f)^k \Delta_{f_k}(\cdot;M_{d,r},\square,\boxtimes) > 0) : d \geq1, r\geq1 \rbrace} = [\tfrac12,1],$$ where $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$.
Note that when $k$ is odd, the possible values of $\operatorname{dens}( \Delta_{\Omega_k}(\cdot;M,\square,\boxtimes) > 0)$ are dense in $[0,\tfrac{1}{2}]$ as $M$ varies in $\mathbf{F}_q[t]$, while the function $\Delta_{\omega_k}(\cdot;M,\square,\boxtimes)$ is biased in the direction of quadratic residues independently of the parity of $k$.
From [@Kowalski2010 Prop. 1.1], we expect the hypothesis (LI) to hold for most monic square-free polynomials $M \in \mathbf{F}_{q}[t]$ when $q$ is large enough. When $d, r$ are large, the set of polynomials of degree $d$ having $r$ irreducible factors should be large enough to contain at least one polynomial satisfying (LI). However, in Proposition \[Prop Chebyshev many factors\], similarly to [@Fiorilli_HighlyBiased Th. 1.2], we only need a hypothesis on the multiplicity of the zeros to prove the existence of extreme biases.
In [@Cha2008 Th. 6.2], Cha considered the case $k=1$ and showed that the values of the bias $ \operatorname{dens}(\Delta_{\Omega_1}(\cdot;M_{d,r},\square,\boxtimes) > 0)$ approach $\tfrac{1}{2}$ when $M$ varies among irreducible polynomials of increasing degree. In the case $k=1$, Fiorilli [@Fiorilli_HighlyBiased Th. 1.1] proved that the values of the biases in prime number races between non-quadratic and quadratic residues are dense in $[\tfrac{1}{2},1]$. We also show in Proposition \[Prop using Honda Tate\] that the values $\tfrac12$ and $1$ can be obtained as values of $\operatorname{dens}((-1)^k \Delta_{f_k}(\cdot;M,\square,\boxtimes) > 0)$, when $q>3$. These values are obtained for polynomials $M$ not satisfying (LI).
In the case $q =5$, Cha showed that there exists $M \in \mathbf{F}_5[t]$ with $\operatorname{dens}( \Delta_{\Omega_{1}}(\cdot;M,\square,\boxtimes) > 0) = 0.6$, uncovering a bias in “the wrong direction”. We wonder whether such a phenomenon occurs for every $q$ and $k$.
Asymptotic formulas {#subsec_Lfunctions}
-------------------
Before stating the asymptotic formulas, let us set some notations. For $M \in \mathbf{F}_{q}[t]$ we denote $\phi(M) = \lvert \left( \mathbf{F}_{q}[t]/ (M) \right)^* \rvert$ the number of invertible congruence classes modulo $M$.
Recall that we define the Dirichlet $L$-function associated to a Dirichlet character $\chi$ by $$L(s, \chi) = \sum_{a \text{ monic }} \frac{\chi(a)}{\lvert a \rvert^{s}}$$ where $\lvert a \rvert = q^{\deg a}$. It can also be written as an Euler product over the irreducible polynomials: $$L(s, \chi) = \prod_{P \text{ irreducible }}\left( 1- \frac{\chi(P)}{\lvert P \rvert^s}\right)^{-1}.$$
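As a sanity check on the Euler product in the zeta case (all $\chi(a)=1$), unique factorization forces $\prod_P (1-u^{\deg P})^{-1} = \sum_{d\geq 0} q^d u^d = 1/(1-qu)$. A short sketch (our own verification, not from the paper) computes the counts $N_d$ of monic irreducibles of degree $d$ from the recursion $q^d = \sum_{e \mid d} e\,N_e$ and checks the identity coefficientwise:

```python
from math import comb

def irreducible_counts(q, D):
    """N_d = number of monic irreducibles of degree d over F_q,
    from the recursion q^d = sum_{e | d} e * N_e."""
    N = {}
    for d in range(1, D + 1):
        N[d] = (q**d - sum(e * N[e] for e in range(1, d) if d % e == 0)) // d
    return N

def euler_product_coeffs(q, D):
    """Coefficients of prod_{d <= D} (1 - u^d)^(-N_d), truncated at degree D."""
    N = irreducible_counts(q, D)
    prod = [1] + [0] * D
    for d in range(1, D + 1):
        factor = [0] * (D + 1)
        for m in range(D // d + 1):
            factor[d * m] = comb(N[d] + m - 1, m)  # binomial series of (1-u^d)^(-N_d)
        prod = [sum(prod[i] * factor[j - i] for i in range(j + 1)) for j in range(D + 1)]
    return prod

# The coefficient of u^j must be q^j, the number of monic polynomials of degree j.
print(euler_product_coeffs(3, 5))
```

The coefficient of $u^j$ in the truncated product counts multisets of irreducibles of total degree $j$, i.e. monic polynomials of degree $j$, of which there are exactly $q^j$.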
Recall that (e.g. [@Rosen2002 Prop. 4.3]), for $\chi \neq \chi_0$ a Dirichlet character modulo $M \in\mathbf{F}_{q}[t]$, the Dirichlet $L$-function $L(s,\chi)$ is a polynomial in $u = q^{-s}$ of degree at most $\deg M-1$. Thanks to the deep work of Weil [@Weil_RH], we know that the analogue of the Riemann Hypothesis is satisfied.
In the following we denote $\alpha_{j}(\chi)= \sqrt{q}e^{i\gamma_{j}(\chi)}$, $\gamma_{j}(\chi) \in (-\pi, \pi)\setminus \lbrace 0 \rbrace$, the distinct non-real inverse zeros of $\mathcal{L}(u,\chi) = \mathcal{L}(q^{-s},\chi) = L(s,\chi)$ of norm $\sqrt{q}$, with multiplicity $m_j(\chi)$. The real inverse zeros will play an important role; we denote $m_{\pm}(\chi)$ the multiplicity of $\pm \sqrt{q}$ as an inverse zero of $\mathcal{L}(u,\chi)$, and $d_{\chi}$ the number of distinct non-real inverse zeros of norm $\sqrt{q}$. We summarize the notations in the following formula: $$\label{Not_L-function}
\mathcal{L}(u,\chi) = (1-u \sqrt{q})^{m_+(\chi)}(1+u \sqrt{q})^{m_-(\chi)} \prod_{j=1}^{d_{\chi}} (1 - u \alpha_{j}(\chi) )^{m_{j}(\chi)}\prod_{j'=1}^{d'_{\chi}} (1- u\beta_{j'}(\chi))$$ where $\lvert \beta_{j'}(\chi) \rvert =1$. Recently, Wanlin Li proved [@Li2018 Th. 1.2] that $m_{+}(\chi) >0$ for some primitive quadratic character $\chi$ over $\mathbf{F}_{q}[t]$, for any odd $q$. This result disproves the analogue of a conjecture of Chowla on the non-vanishing of $L$-functions at the central point. We present some such examples in Section \[subsec\_examples\_realZero\] to exhibit large biases. As $k$ increases, we observe a new phenomenon: such characters can induce complete biases in races between quadratic and non-quadratic residues (see Section \[subsec\_examples\_realZero\]), and those biases do not dissipate as $k$ gets large (see Proposition \[Prop k limit\]).
Denote $$\gamma(M) = \min\limits_{\chi \bmod M}\min\limits_{1\leq i\neq j\leq d_{\chi}}(\lbrace \lvert \gamma_i(\chi) - \gamma_j(\chi) \rvert, \lvert \gamma_{i}(\chi) \rvert, \lvert \pi -\gamma_{i}(\chi) \rvert \rbrace).$$
We have the following asymptotic formulas, which hold unconditionally and uniformly for $k$ in a reasonable range (for example, $k\leq \frac{0.99\log\log X}{\log d}$).
\[Th\_Difference\_k\_general\_deg<X\] Let $M \in \mathbf{F}_{q}[t]$ be a non-constant polynomial of degree $d$, and let $A, B \subset (\mathbf{F}_{q}[t]/(M))^*$ be two sets of invertible classes modulo $M$. For any integer $k \geq1 $, with $k=o((\log X)^{\frac12})$, one has $$\begin{gathered}
\label{Form_deg=n}
\Delta_{\Omega_k}(X; M, A,B) \\
= (-1)^k \Bigg\{ \sum_{\chi \bmod M}c(\chi,A,B)\bigg( {\left(}m_+(\chi) +\tfrac{\delta(\chi^2)}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q}-1} +{\left(}m_-(\chi)+\tfrac{\delta(\chi^2)}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q} + 1}(-1)^{X} \\
+\sum_{\gamma_j(\chi)\neq 0, \pi} m_j(\chi)^k \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} e^{iX\gamma_{j}(\chi)} \bigg) + O\bigg(\frac{d^k k(k-1)}{\gamma(M)\log X} + dq^{-X/6}\bigg) \Bigg\},
\end{gathered}$$ and if $q\geq 5$, $$\begin{gathered}
\label{Form_deg=n_littleomega}
\Delta_{\omega_k}(X; M, A,B) \\
= (-1)^k \Bigg\{ \sum_{\chi \bmod M}c(\chi,A,B)\bigg( {\left(}m_+(\chi) -\tfrac{\delta(\chi^2)}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q}-1} +{\left(}m_-(\chi)-\tfrac{\delta(\chi^2)}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q} + 1}(-1)^{X} \\
+\sum_{\gamma_j(\chi)\neq 0, \pi} m_j(\chi)^k \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} e^{iX\gamma_{j}(\chi)} \bigg) + O\bigg(\frac{d^k k(k-1)}{\gamma(M)\log X} + dq^{-X/6}\bigg) \Bigg\},
\end{gathered}$$ where the implicit constants are absolute, $\delta(\chi^2) = 1$ if $\chi$ is real and $0$ otherwise, and $$c(\chi,A,B) =
\frac{1}{\phi(M)}\bigg( \frac{1}{\lvert A\rvert}\sum_{a\in A}\bar\chi(a) -
\frac{1}{\lvert B\rvert}\sum_{b\in B}\bar\chi(b) \bigg).$$
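To make the coefficient $c(\chi,A,B)$ concrete: in the race between quadratic residues and non-residues, the quadratic character has mean $+1$ on $\square$ and $-1$ on $\boxtimes$, giving $c(\chi,\square,\boxtimes) = 2/\phi(M)$, while the trivial character contributes $0$. A hedged sketch in the integer analogue (Legendre symbol modulo a prime; our illustration, not the paper's function field setting):

```python
def c_coeff(chi, A, B, phi):
    """c(chi, A, B) = (mean of conj(chi) on A - mean of conj(chi) on B) / phi(M).
    For real characters conj(chi) = chi."""
    return (sum(chi(a) for a in A) / len(A) - sum(chi(b) for b in B) / len(B)) / phi

p = 11  # integer analogue: invertible classes mod a prime
squares = {a * a % p for a in range(1, p)}
nonsquares = set(range(1, p)) - squares
legendre = lambda a: 1 if a in squares else -1

print(c_coeff(legendre, squares, nonsquares, p - 1))  # = 2/(p-1)
```

Characters orthogonal to the race (here, the trivial character) drop out of the sum over $\chi$, so only the quadratic character drives the $\square$ versus $\boxtimes$ bias.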
This theorem follows from the asymptotic formula obtained in Section \[Sec\_proof\_deg<X\]. The method we use here is not a straightforward generalization of the method used in [@Cha2008], since the analogue of the weighted counting function is not suited to detecting products of irreducible elements (see [@Meng2017]). In contrast to the results in [@RS], [@FordSneed2010] and [@Meng2017], we obtain asymptotic formulas for the corresponding difference functions unconditionally, and the density we derive in this paper is the natural density rather than the logarithmic density. Our starting point is motivated by a combinatorial idea in [@Meng2017], but the main proof is not a parallel translation, since the desired counting function is not derived, as in [@Meng2017], via Perron’s formula.
\[Rk\_on\_Th\_deg<n\]
1. \[Item\_IntegerCase\] Note that in the case $\Delta_{\Omega_1}$ this result is [@Cha2008 Th. 2.5]. Our formulas are more general analogues of the result in [@Meng2017] including the multiplicities of the zeros.
2. \[Item\_CompareOmegas\] As $\lvert m_+(\chi) -\frac{\delta(\chi^2)}{2} \rvert \leq \lvert m_+(\chi) +\frac{\delta(\chi^2)}{2} \rvert $, we expect a larger bias in the race between polynomials with $\Omega(N) = k$ than in the race between polynomials with $\omega(N) = k$. Note also that if $m_{+}(\chi) =0$ for all $\chi$, the two mean values have different signs when $k$ is odd; hence we expect the two biases to point in different directions when $k$ is odd.
3. \[Item\_largestMult\] We observe from the formula that the inverse zeros with largest multiplicity will determine the behavior of the function as $k$ grows. This is the point of Proposition \[Prop k limit\] below. Moreover the real zeros play an important role in determining the bias.
4. \[Item\_Range\_k\] For degree $X$ polynomials, the typical number of irreducible factors is $\log X$. Hence, one may expect an asymptotic formula which holds for $k\ll \log X$, or at least for $k=o(\log X)$. However, we are not able to reach this range in general, and the factor $d^k$ in the error term is inevitable in our proof. Through personal communication, we know that Sam Porritt is currently using a different method to study these asymptotic formulas [@Porritt].
In the case of the race of quadratic residues against non-quadratic residues modulo $M$, the expressions in and can be simplified. This is studied in more detail in Section \[Sec\_Examples\]. For the race between polynomials with $\Omega(N) = k$, we expect a bias in the direction of quadratic residues or non-quadratic residues according to the parity of $k$. We show that the existence of the real zero $\sqrt{q}$ sometimes leads to extreme biases.
In the generic case, we expect $m_{\pm} = 0$ and the other zeros to be simple; then the asymptotic formulas in Theorem \[Th\_Difference\_k\_general\_deg<X\] give a connection between $\Delta_{f_k}(X; M, A,B)$ and $\Delta_{f_1}(X; M, A,B)$ ($f=\Omega$ or $\omega$), similar to the case of products of primes [@Meng2017 Cor. 1.1, Cor. 2.1]. In this case, we expect the polynomials with $\Omega(N) = k$ to prefer quadratic non-residue classes when $k$ is odd, and quadratic residue classes when $k$ is even, whereas the polynomials with $\omega(N) = k$ always prefer quadratic residue classes. Moreover, as $k$ increases, the biases become smaller and smaller in both cases. This observation is justified by Proposition \[Prop k limit\] (expected generic case).
Further behaviour of the bias
-----------------------------
The asymptotic formula from Theorem \[Th\_Difference\_k\_general\_deg<X\] helps us understand the bias in the distribution of polynomials with a given number of irreducible factors in congruence classes.
In the case of a sequence of polynomials with few irreducible factors, we give a precise rate of convergence of the bias to $\tfrac{1}{2}$ in the following result.
\[Th\_CentralLimit\] Let $\lbrace M \rbrace$ be a sequence of square-free polynomials in $\mathbf{F}_{q}[t]$ satisfying *(LI)* and such that $\frac{2^{\omega(M)}}{\deg M} \rightarrow 0$. Then, for $f = \Omega$ or $\omega$, as $\deg M \rightarrow\infty$, the limiting distribution $\mu_{M,f_k}^{\mathrm{norm}}$ of $$\begin{aligned}
&\Delta_{f_k}^{\mathrm{norm}}(X;M) := \frac{\lvert \boxtimes\rvert \sqrt{q-1} }{\sqrt{q2^{\omega(M)-1}\deg M}}\Delta_{f_k}(X;M,\square,\boxtimes)\end{aligned}$$ exists and converges weakly to the standard Gaussian distribution. More precisely, one has $$\sup_{x\in\mathbf{R}}\left\lvert \int_{-\infty}^{x}{\mathop{}\!\mathrm{d}}\mu_{M,f_k}^{\mathrm{norm}} - \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-t^2/2} {\mathop{}\!\mathrm{d}}t \right\rvert \ll \frac{\sqrt{2^{\omega(M)}}}{2^k\sqrt{\deg M}} + \frac{\log \deg M}{\deg M}.$$ In particular the bias dissipates as $\deg M$ gets large.
Note that a sequence of irreducible polynomials $M$ with increasing degree satisfies the hypothesis $\frac{2^{\omega(M)}}{\deg M} \rightarrow 0$, so Theorem \[Th\_CentralLimit\] generalizes [@Cha2008 Th. 6.2]. We observe in particular that the rate of convergence to the Gaussian distribution increases with $k$; this justifies an observation in the number field setting [@Meng2017]: the race seems to be less biased when $k$ is large.
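The mechanism behind Theorem \[Th\_CentralLimit\] — a sum of many oscillating terms with $\mathbf{Q}$-linearly independent frequencies behaves like a sum of independent random variables — can be illustrated numerically. A minimal sketch, where the frequencies $\sqrt{p}$ and unit coefficients are illustrative stand-ins for the arguments of the zeros and the coefficients coming from an actual $L$-function:

```python
import math

# Frequencies sqrt(p), p prime, are Q-linearly independent together with pi,
# so (n*sqrt(2), ..., n*sqrt(29)) equidistributes on the full torus.
gammas = [math.sqrt(p) for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29)]
var = 2 * len(gammas)            # each term 2*cos(n*g) contributes variance 2
Y = 200_000
samples = [sum(2 * math.cos(n * g) for g in gammas) / math.sqrt(var)
           for n in range(1, Y + 1)]

mean = sum(samples) / Y                      # should be close to 0
second = sum(x * x for x in samples) / Y     # should be close to 1
fourth = sum(x ** 4 for x in samples) / Y    # exact limit here: 2.85 (Gaussian: 3)
cdf1 = sum(x <= 1 for x in samples) / Y      # Gaussian value: ~0.8413
print(round(mean, 3), round(second, 3), round(fourth, 2), round(cdf1, 2))
```

With only $10$ frequencies the moments and the distribution function are already close to the Gaussian values, consistent with the Berry–Esseen-type rate in the theorem.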
In the other direction, fixing a modulus and letting $k$ grow, we obtain the following result.
\[Th central limit for k under LI\] Let $\mathbf{F}_{q}$ be a finite field of odd characteristic and $M \in \mathbf{F}_q[t]$ satisfying *(LI)*. Then, for $f =\Omega$ or $\omega$, the bias in the distribution of $\Delta_{f_k}(X;M,\square,\boxtimes)$ dissipates as $k\rightarrow\infty$.
This is a corollary of Proposition \[Prop k limit\], which is more general and unconditional.
Limiting distribution and bias {#Sec_LimDist}
==============================
In this section, the assertions are stated in the context of almost periodic functions, as in [@ANS], since we expect them to be useful for other work on Chebyshev’s bias over function fields. Our main results are based on the existence of a limiting distribution for functions defined on the integers; let us briefly recall the definitions and ideas needed to obtain such results.
Let $F:\mathbf{N}\rightarrow\mathbf{R}$ be a real function. We say that $F$ admits a limiting distribution if there exists a probability measure $\mu$ on the Borel sets of $\mathbf{R}$ such that for any bounded Lipschitz continuous function $g$, we have $$\begin{aligned}
\lim_{Y\rightarrow\infty}\frac{1}{Y}\sum_{n\leq Y}g(F(n)) =
\int_{\mathbf{R}}g(t){\mathop{}\!\mathrm{d}}\mu(t).\end{aligned}$$ We call $\mu$ the limiting distribution of the function $F$.
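For instance, the purely oscillating function $F(n) = 2\cos(n\sqrt{2})$ admits a limiting distribution: since $\sqrt{2}/2\pi$ is irrational, $n\sqrt{2}$ equidistributes modulo $2\pi$, and $\mu$ is the push-forward of the uniform measure on the circle under $\theta\mapsto 2\cos\theta$ (an arcsine law on $[-2,2]$). A quick numerical check of the definition with the test function $g(x)=x^2$, which is bounded and Lipschitz on the support:

```python
import math

Y = 100_000
# Cesaro average of g(F(n)) for F(n) = 2*cos(n*sqrt(2)) and g(x) = x^2
avg = sum((2 * math.cos(n * math.sqrt(2))) ** 2 for n in range(1, Y + 1)) / Y
# limit: (1/2pi) * integral over the circle of (2*cos(theta))^2 = 2
print(round(avg, 3))  # -> 2.0
```

The error is $O(1/Y)$ here because $4\cos^2(n\sqrt 2) = 2 + 2\cos(2n\sqrt 2)$ and the exponential sum is uniformly bounded.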
Note that if the function $F$ admits a limiting distribution $\mu$ with $\mu(\lbrace 0\rbrace) = 0$, then by the dominated convergence theorem, the bias of $F$ towards positive values (see Definition \[Defn-bias\]) is well defined and we have $\operatorname{dens}(F >0) = \mu((0,\infty))$.
We focus on the limiting distribution to study the bias of the difference function. For $f = \Omega$ or $\omega$, and for any $k\geq 1$, the fact that the function $\Delta_{f_k}(\cdot; M, A,B)$ admits a limiting distribution follows directly from the asymptotic formula of Theorem \[Th\_Difference\_k\_general\_deg<X\] and the following result.
\[Prop\_limitingDist\] Let $\gamma_2,\ldots,\gamma_N \in (0,\pi)$ be distinct real numbers. For any $C_0,c_1\in \mathbf{R}$, $c_2,\ldots, c_N \in \mathbf{C}^{*}$, let $F:\mathbf{N} \rightarrow \mathbf{R}$ be a function satisfying $$\label{Almost periodic function}
F(n) = C_0 + c_1e^{in\pi} + \sum_{j=2}^{N}{\left(}c_je^{in\gamma_j} + \overline{c_j}e^{-in\gamma_j}{\right)}+ o(1)$$ as $n\rightarrow \infty$. Then the function $F$ admits a limiting distribution $\mu$ with mean value $C_0$ and variance $c_1^2 + 2\sum\limits_{j=2}^{N}\lvert c_j\rvert^2$. Moreover
1. \[Item\_Support\] the measure $\mu$ has support in $\left[C_0 - \lvert c_1\rvert - \sum_{j=2}^{N}2\lvert c_j\rvert, C_0 + \lvert c_1\rvert +\sum_{j=2}^{N}2\lvert c_j\rvert\right]$,\
in particular, if $\lvert C_0 \rvert > \lvert c_1\rvert + \sum_{j=2}^{N}2\lvert c_j\rvert $ then $\operatorname{dens}(C_0F >0) = 1$;
2. \[Item\_continuous\] if there exists $j\in \lbrace 2, \ldots, N\rbrace$ such that $\gamma_j \notin \mathbf{Q}\pi$, then $\mu$ is continuous,\
in particular $\operatorname{dens}(F >0) = \mu((0,\infty))$;
3. \[Item\_symmetry\] if the smallest sub-torus of $\mathbf{T}^{N}$ containing $\lbrace (n\pi, n\gamma_2,\ldots, n\gamma_N) : n\in \mathbf{Z} \rbrace$ is symmetric, then the distribution $\mu$ is symmetric with respect to $C_0$;
4. \[Item\_Fourier\] if the set $\lbrace \pi, \gamma_2,\ldots,\gamma_N\rbrace$ is linearly independent over $\mathbf{Q}$, then the Fourier transform $\hat\mu$ of the measure $\mu$ is given by $$\hat\mu(\xi) = e^{-iC_{0}\xi}\cos(c_1\xi)
\prod_{j=2}^{N}J_{0}\left( 2\lvert c_j \rvert \xi \right),$$ where $J_{0}(z) = \int_{-\pi}^{\pi}\exp\left(iz\cos(\theta)\right) \frac{{\mathop{}\!\mathrm{d}}\theta}{2\pi}$ is the $0$-th Bessel function of the first kind.
Kowalski [@Kowalski2010 Prop. 1.1] showed that in certain families of polynomials $M \in \mathbf{F}_q[t]$, the hypothesis of Linear Independence (LI) is satisfied generically when $q$ is large (with fixed characteristic) for the $L$-function of the primitive quadratic character modulo $M$. That is, the imaginary parts of the zeros of $L(\cdot,\chi_{M})$ are linearly independent over $\mathbf{Q}$. In particular the hypotheses in \[Item\_continuous\], \[Item\_symmetry\] and \[Item\_Fourier\] are satisfied generically for $F = \pi_{k}(\cdot,\chi_M)$ (see ). We expect this to hold more generally, for example when racing quadratic residues against non-quadratic residues as in Section \[Sec\_Examples\]. Linear Independence has also been proved generically in other families of $L$-functions over function fields [@CFJ_Indep; @Perret-Gentil]. Proposition \[Prop\_limitingDist\] is a consequence of a general version of the Kronecker–Weyl Equidistribution Theorem (see [@Humphries Lem. 2.7], [@Devin2018 Th. 4.2], also [@MartinNg Lem. B.3]).
\[Th\_KW\] Let $\gamma_1,\ldots,\gamma_N \in \mathbf{R}$ be real numbers. Denote by $A(\gamma)$ the closure of the $1$-parameter group $\lbrace y(\gamma_{1},\ldots,\gamma_{N}) : y\in\mathbf{Z}\rbrace/(2\pi\mathbf{Z})^{N}$ in the $N$-dimensional torus $\mathbf{T}^{N}:= (\mathbf{R}/2\pi\mathbf{Z})^{N}$. Then $A(\gamma)$ is a sub-torus of $\mathbf{T}^{N}$ and, for any continuous function $h: \mathbf{T}^{N}\rightarrow \mathbf{C}$, we have $$\lim_{Y\rightarrow\infty}\frac{1}{Y}\sum_{n=0}^{Y}h(n\gamma_{1},\ldots,n\gamma_{N})
= \int_{A(\gamma)}h(a){\mathop{}\!\mathrm{d}}\omega_{A(\gamma)}(a)$$ where $\omega_{A(\gamma)}$ is the normalized Haar measure on $A(\gamma)$.
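The closure $A(\gamma)$ is detected by the characters of the torus: the Cesàro mean of $n\mapsto e^{in(a_1\gamma_1+\cdots+a_N\gamma_N)}$ is $1$ when the character is trivial on $A(\gamma)$, i.e. $\sum_j a_j\gamma_j \in 2\pi\mathbf{Z}$, and tends to $0$ otherwise. A sketch with the rationally dependent pair $(\gamma_1,\gamma_2) = (\sqrt 2, 2\sqrt 2)$, whose orbit closure is a one-dimensional sub-torus of $\mathbf{T}^2$:

```python
import math, cmath

Y = 100_000
def cesaro(beta):
    """Cesaro mean of e^{i*n*beta} over n < Y."""
    return sum(cmath.exp(1j * n * beta) for n in range(Y)) / Y

g1, g2 = math.sqrt(2), 2 * math.sqrt(2)
triv = cesaro(2 * g1 - g2)   # the relation 2*g1 - g2 = 0: trivial on A(gamma)
nontriv = cesaro(g1 + g2)    # 3*sqrt(2) is not in 2*pi*Z: mean tends to 0
print(round(abs(triv), 3), round(abs(nontriv), 3))  # -> 1.0 0.0
```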
As in the proof of [@Cha2008 Th. 3.2], we combine Lemma \[Th\_KW\] with the asymptotic formula of Proposition \[Prop\_limitingDist\] and Helly’s selection theorem [@Billingsley Th. 25.8 and Th. 25.10]. From this, one can show that the corresponding limiting distribution exists and is a push-forward of the Haar measure on the sub-torus generated by the $\gamma_j$’s.
Then \[Item\_Support\] is straightforward, and since the measure has compact support, its moments can be computed using compactly supported approximations of polynomials; this gives the result on the mean value and variance. The point \[Item\_continuous\] follows along the same lines as [@Devin2018 Th. 2.2], using the fact that the set of zeros is finite and being more careful about the rational multiples of $\pi$ to ensure that the sub-torus is not discrete (see also [@Devin2019 Th. 4]). The point \[Item\_symmetry\] follows directly from the proof of [@Devin2018 Th. 2.3].
To prove the point \[Item\_Fourier\], we compute the Fourier transform: $$\begin{aligned}
\hat\mu(\xi) &= \lim_{Y\rightarrow\infty} \frac{1}{Y} \sum_{n\leq Y}\exp(-i\xi F(n)) \\
&= e^{-iC_0 \xi} \int_{A(\pi,\gamma_2,\ldots,\gamma_N)}
\exp\Bigg(-i\xi\bigg( c_1(-1)^{a_1} + \sum_{j=2}^{N}2\operatorname{Re}(c_je^{ia_j}) \bigg)\Bigg) {\mathop{}\!\mathrm{d}}\omega(a) \\
&= e^{-iC_0\xi} \frac{1}{2}\left( e^{i\xi c_1} + e^{-i\xi c_1} \right) \prod_{j=2}^{N}\int_{-\pi}^{\pi}\exp\left(i\xi 2 \lvert c_j\rvert \cos(\theta)\right) \frac{{\mathop{}\!\mathrm{d}}\theta}{2\pi},\end{aligned}$$ where in the last line we use the linear independence to write $A(\pi,\gamma_2,\ldots,\gamma_N) = \lbrace 0, \pi\rbrace \times \mathbf{T}^{N-1}$, and the corresponding Haar measure as the product of the Haar measures. This concludes the proof.
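The closed form for $\hat\mu$ can be checked numerically: with one frequency besides $\pi$ (here $\gamma_2 = \sqrt 2$; the set $\lbrace\pi,\sqrt 2\rbrace$ is $\mathbf{Q}$-linearly independent since $\pi$ is transcendental), the empirical characteristic function of $F$ should match $e^{-iC_0\xi}\cos(c_1\xi)J_0(2\lvert c_2\rvert\xi)$. A stdlib sketch with illustrative coefficients, computing $J_0$ from the integral definition above:

```python
import math, cmath

C0, c1, c2, gamma = 0.3, 0.7, 0.5 + 0.2j, math.sqrt(2)

def F(n):
    return C0 + c1 * (-1) ** n + 2 * (c2 * cmath.exp(1j * n * gamma)).real

def J0(z, N=2000):
    # midpoint rule for J0(z) = (1/pi) * integral_0^pi cos(z*cos(t)) dt
    return sum(math.cos(z * math.cos((j + 0.5) * math.pi / N))
               for j in range(N)) / N

Y, xi = 200_000, 1.3
emp = sum(cmath.exp(-1j * xi * F(n)) for n in range(Y)) / Y
closed = cmath.exp(-1j * C0 * xi) * math.cos(c1 * xi) * J0(2 * abs(c2) * xi)
print(abs(emp - closed) < 0.02)  # -> True
```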
Note that when all the $\gamma_j$’s are rational multiples of $\pi$, the main term in the asymptotic expansion is a periodic function. Thus the limiting distribution obtained in Proposition \[Prop\_limitingDist\] is a linear combination of Dirac deltas supported on the image of this periodic function. If this image does not contain $0$, the limiting distribution has no mass at the point $0$, hence the bias is well defined. Otherwise, determining the bias requires studying lower-order terms in the asymptotic expansion, which are for now out of reach.
Special values of the bias {#Sec_Examples}
==========================
In this section, we assume that the field $\mathbf{F}_{q}$ has characteristic $\neq 2$ and that the polynomial $M$ is square-free. When $q$ and the degree of $M$ are small, it is possible to compute the Dirichlet $L$-functions associated to the quadratic characters modulo $M$ explicitly. In particular, we illustrate our results in the case of races between quadratic residues ($\square$) and non-quadratic residues ($\boxtimes$) modulo $M$. In this case the asymptotic formula of Theorem \[Th\_Difference\_k\_general\_deg<X\] is a sum over quadratic characters. Indeed, if $\chi$ is a non-trivial, non-quadratic character, then it induces a non-trivial character on the subgroup $\square$ of quadratic residues, so by orthogonality one has $c(\chi,\square,\boxtimes) = 0.$ For $\chi$ a quadratic character, using $\sum_{a\in\boxtimes}\chi(a) = -\lvert\square\rvert$, one has $$\begin{aligned}
c(\chi,\square,\boxtimes) &=
\frac{1}{\phi(M)}{\left(}\frac{1}{\lvert \square\rvert}\sum_{a\in \square}\chi(a) -
\frac{1}{\lvert \boxtimes\rvert} \sum_{a\in \boxtimes}\chi(a) {\right)}\\
&= \frac{1}{\phi(M)}{\left(}1 +
\frac{\lvert \square\rvert}{\lvert \boxtimes\rvert} {\right)}=
\frac{1}{\lvert \boxtimes\rvert}.\end{aligned}$$ Thus, for $ k =o((\log X)^{\frac12})$, one has $$\begin{gathered}
\label{Formula_Res_vs_NonRes}
\Delta_{\Omega_k}(X; M, \square,\boxtimes) \\
= \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg\{ \sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 \\ \chi\neq \chi_0}}\Bigg(\ {\left(}m_+(\chi) +\tfrac{1}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q}-1} +{\left(}m_-(\chi)+\tfrac{1}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q} + 1}(-1)^{X} \\
+\sum_{\gamma_j\neq 0, \pi} m_j(\chi)^k \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} e^{iX\gamma_{j}(\chi)} \Bigg) + O{\left(}\frac{d^k k^2}{\gamma(M)\log X}{\right)}\Bigg\},
\end{gathered}$$ and, if $q \geq 5$, $$\begin{gathered}
\label{Formula_Res_vs_NonRes_littleomega}
\Delta_{\omega_k}(X; M, \square,\boxtimes) \\
= \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg\{ \sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 \\ \chi\neq \chi_0}}\Bigg(\ {\left(}m_+(\chi) -\tfrac{1}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q}-1} +{\left(}m_-(\chi)-\tfrac{1}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q} + 1}(-1)^{X} \\
+\sum_{\gamma_j\neq 0, \pi} m_j(\chi)^k \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} e^{iX\gamma_{j}(\chi)} \Bigg) + O{\left(}\frac{d^k k^2}{\gamma(M)\log X}{\right)}\Bigg\}.
\end{gathered}$$ By Proposition \[Prop\_limitingDist\], we know that for all $k$ the function in admits a limiting distribution $\mu_{M,\Omega_k}$ with mean value $$\label{Form_MeanValue}
\mathbf{E}\mu_{M,\Omega_k} = \frac{(-1)^k}{\lvert \boxtimes\rvert} \sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 \\ \chi\neq \chi_0}}\ {\left(}m_+(\chi) +\frac{1}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q}-1},$$ and variance $$\begin{aligned}
&\operatorname{Var}(\mu_{M,\Omega_k}) \\
&= \frac{1}{\lvert \boxtimes\rvert^2} \Bigg( \Bigg(\sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 \\ \chi\neq \chi_0}}{\left(}m_-(\chi)+\frac{1}{2}{\right)}^{k}\Bigg)^2 \frac{q}{(\sqrt{q} + 1)^2}
+\sum_{\alpha_j\neq \pm \sqrt{q}}\Bigg( \sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 \\ \chi\neq \chi_0}} m_j^k(\chi) \frac{\lvert\alpha_j\rvert}{\lvert\alpha_j -1\rvert}\Bigg)^2 \Bigg). \end{aligned}$$ The results are similar for $\Delta_{\omega_k}(X; M, \square,\boxtimes)$, with ${\left(}m_{\pm}(\chi)+\frac{1}{2}{\right)}$ replaced by ${\left(}m_{\pm}(\chi)-\frac{1}{2}{\right)}$; we denote by $\mu_{M,\omega_k}$ the corresponding limiting distribution.
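The orthogonality computation above only uses the structure of the unit group, so it can be sanity-checked in the integer analogue $(\mathbf{Z}/15\mathbf{Z})^{*}$, assuming the normalization $c(\chi,A,B) = \phi(M)^{-1}\big(\lvert A\rvert^{-1}\sum_{a\in A}\chi(a) - \lvert B\rvert^{-1}\sum_{a\in B}\chi(a)\big)$ suggested by the display (for the real characters involved, conjugation is immaterial):

```python
import math
from fractions import Fraction

M = 15
units = [a for a in range(M) if math.gcd(a, M) == 1]
sq = sorted({a * a % M for a in units})            # the "square" classes
nsq = [a for a in units if a not in sq]

def legendre(a, p):  # quadratic character modulo an odd prime p
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def c(chi):  # assumed normalization of c(chi, squares, non-squares)
    t1 = sum(Fraction(chi(a)) for a in sq) / len(sq)
    t2 = sum(Fraction(chi(a)) for a in nsq) / len(nsq)
    return (t1 - t2) / len(units)

# the three non-trivial quadratic characters mod 15 give c = 1/|non-squares|
quad = [lambda a: legendre(a, 3), lambda a: legendre(a, 5),
        lambda a: legendre(a, 3) * legendre(a, 5)]
print(all(c(chi) == Fraction(1, len(nsq)) for chi in quad))  # -> True

# a character of order 4 (value i at 2 mod 5, trivial mod 3) is non-trivial
# on the squares, so both averages vanish and c(chi, .., ..) = 0
i4 = {1: 1, 2: 1j, 3: -1j, 4: -1}
t1 = sum(i4[a % 5] for a in sq) / len(sq)
t2 = sum(i4[a % 5] for a in nsq) / len(nsq)
print(t1 == 0 and t2 == 0)  # -> True
```

Here $\lvert\square\rvert = 2$, $\lvert\boxtimes\rvert = 6$, $\phi(15) = 8$, matching $c = \tfrac{1}{8}\left(1 + \tfrac{2}{6}\right) = \tfrac{1}{6}$.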
In the following subsections we study various square-free polynomials $M$; we denote by $\chi_M$ the primitive quadratic character modulo $M$. In the case of prime numbers, it has been observed that the bias tends to disappear as $k\rightarrow\infty$. Moreover, in the race with fixed $\Omega$, the bias changes direction with the parity of $k$, whereas in the race with fixed $\omega$ the bias always stays in the direction of the quadratic residues. We present here various examples where this does (or does not) happen in the polynomial setting.
Case with no real inverse zero
------------------------------
In the generic case, we expect that $m_{\pm}(\chi) =0$. In particular, for $k$ even, $\mu_{M,\Omega_k}= \mu_{M,\omega_k}$; and if the arguments of the non-real zeros are linearly independent from $\pi$, then for $k$ odd, $\mu_{M,\Omega_k}$ is the reflection of $\mu_{M,\omega_k}$ about $0$. Moreover, for $f = \Omega$ or $\omega$, the mean value of $\mu_{M,f_k}$ becomes negligible as $k$ grows. This situation is very similar to the case of primes in $\mathbf{Z}$ (see [@Meng2017]). More precisely, we can simplify the expression of the mean value in . One has $$\begin{aligned}
\epsilon_f^k\mathbf{E}\mu_{M,f_k} = \frac{1}{\lvert \boxtimes\rvert} \sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 \\ \chi\neq \chi_0}} {\left(}\frac{1}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q}-1}
= \frac{1}{\lvert \square\rvert} \frac{1}{2^k} \frac{\sqrt{q}}{\sqrt{q}-1},\end{aligned}$$ where $\epsilon_{\Omega} = -1$, $\epsilon_{\omega} = 1$. Note that in this case, $\mathbf{E}\mu_{M,\Omega_k}$ alternates sign as $k$ changes parity and $\mathbf{E}\mu_{M,\omega_k}$ has the same absolute value but stays positive. Finally, if the sum over the non-real inverse zeros is not empty, one has $$\mathbf{E}\mu_{M,f_k} \ll_M \frac{\sqrt{\operatorname{Var}(\mu_{M,f_k})}}{2^k}.$$ This hints towards a vanishing bias as $k$ gets large, see Proposition \[Prop k limit\] for a precise statement.
Let us start with an irreducible polynomial $M$ (as in [@Cha2008 Sec. 5]). Assume that the $L$-function $\mathcal{L}(\cdot,\chi_M)$ has only simple zeros that are not real. Then for $k\geq 1$ we have the formulas $$\begin{gathered}
\Delta_{\Omega_k}(X; M, \square,\boxtimes)\\
=(-1)^{k+1} \left\{ \Delta_{\Omega_1}(X; M, \square,\boxtimes)
+\frac{ 1 - \frac{1}{2^{k-1}} }{2\lvert\square\rvert} \left[ \frac{\sqrt{q}}{\sqrt{q}-1}+(-1)^X \frac{\sqrt{q}}{\sqrt{q}+1} \right] + O_{M}{\left(}\frac{d^k k^2}{\log X} {\right)}\right\},
\end{gathered}$$ $$\begin{gathered}
\Delta_{\omega_k}(X; M, \square,\boxtimes)\\
=(-1)^{k+1} \left\{ \Delta_{\Omega_1}(X; M, \square,\boxtimes)
+\frac{1}{2\lvert\square\rvert} \left[ \frac{\sqrt{q}}{\sqrt{q}-1}+(-1)^X \frac{\sqrt{q}}{\sqrt{q}+1} \right]\right\} \\ +\frac{1}{2^k \lvert\square\rvert} \left[ \frac{\sqrt{q}}{\sqrt{q}-1}+(-1)^X \frac{\sqrt{q}}{\sqrt{q}+1} \right] + O_{M}{\left(}\frac{d^k k^2}{\log X} {\right)}.
\end{gathered}$$ Note that the term $\frac{ -1 }{2\lvert \square\rvert} \frac{\sqrt{q}}{\sqrt{q}-1}$ above is the mean value $\mathbf{E}\mu_{M,\Omega_1}$ of the limiting distribution associated to the function $ \Delta_{\Omega_1}(X; M, \square,\boxtimes)$.
Thus, up to a change of sign, the function $\Delta_{f_k}(\cdot; M, \square,\boxtimes)$ satisfies properties similar to those of the function $\Delta_{\Omega_1}(\cdot; M, \square,\boxtimes)$ regarding the behavior at infinity and the limiting distribution, with the mean value of the limiting distribution going to $0$ as $k$ grows.
In [@Cha2008 Ex. 5.3], Cha studies the polynomial $M = t^5 + 3t^4 + 4t^3 + 2t+ 2 \in \mathbf{F}_{5}[t]$; from his work, we observe that the function[^2] $$X \mapsto\Delta_{\Omega_1}(X; M, \square,\boxtimes)
+\frac{1}{2\lvert\square\rvert} \left[ \frac{\sqrt{5}}{\sqrt{5}-1}+(-1)^X \frac{\sqrt{5}}{\sqrt{5}+1} \right]$$ is periodic of period $10$ and takes positive values larger than $\frac{1}{2\lvert\square\rvert} \left[ \frac{\sqrt{5}}{\sqrt{5}-1}+(-1)^X \frac{\sqrt{5}}{\sqrt{5}+1} \right]$ for $6$ values of $X \bmod 10$. Thus there is a bias in the “wrong direction”: one has for all $k\geq 1$ $$\begin{aligned}
\operatorname{dens}((-1)^k\Delta_{\Omega_k}(\cdot;M,\square,\boxtimes) > 0) = \frac{4}{10} < \frac12.
\end{aligned}$$ Contrary to what is expected in the generic case, when $k$ is odd the bias is in the direction of the quadratic residues. Similarly, we obtain that $$\begin{aligned}
\operatorname{dens}(\Delta_{\omega_1}(\cdot;M,\square,\boxtimes) > 0) = \frac{7}{10} > \frac12,
\end{aligned}$$ and for $k \geq 2$, $$\begin{aligned}
\operatorname{dens}((-1)^{k+1}\Delta_{\omega_k}(\cdot;M,\square,\boxtimes) > 0) = \frac{6}{10} > \frac12.
\end{aligned}$$ In particular the bias changes direction according to the parity of $k$, and when $k$ is odd the bias is in the direction of the quadratic residues.
As observed in [@Li2018], when $M$ is not irreducible, the $L$-function $\mathcal{L}(\cdot,\chi_M)$ can have non-simple zeros and real zeros. Moreover, in Proposition \[Prop Chebyshev many factors\], we obtain extreme biases in races modulo polynomials $M$ with many irreducible factors. We now focus on square-free, non-irreducible polynomials.
\[Ex\_double\]
Take $q=5$, and $M= t^6 + 2t^4 + 3t + 1$ in $\mathbf{F}_5[t]$. One has $$\mathcal{L}(u,\chi_M) = (1 + u + 5u^2)^2(1-u) = (1 - 2\sqrt{5}\cos(\theta_1) u + 5u^2)^2 (1-u),$$ where $\theta_1 = \pi + \arctan\sqrt{19}$. The polynomial $M$ has two irreducible factors of degree $3$ in $\mathbf{F}_{5}[t]$. We denote $M=M_1 M_2$, and for $i=1$, $2$ let $\chi_i$ be the character modulo $M$ induced by the character $\chi_{M_i}$. We have $$\mathcal{L}(u,\chi_1) = (1-u +5u^2)(1-u^{3}) = (1 - 2\sqrt{5}\cos(\theta_1 - \pi) u + 5u^2) (1-u^3)$$ and $$\mathcal{L}(u,\chi_2) = (1 +3u +5u^2)(1-u^{3})
= (1 - 2\sqrt{5}\cos(\theta_2) u + 5u^2) (1-u^3),$$ where $\theta_2 = \pi + \arctan(\sqrt{11}/3)$; the factor $(1-u^3)$ comes from the fact that the $\chi_i$ are not primitive (see e.g. [@Cha2008 Prop. 6.4]). Inserting this information in we obtain $$\begin{gathered}
\Delta_{\Omega_k}(X; M, \square,\boxtimes)
\\ = \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg(\ \frac{3}{2^{k+2}} {\left(}5+ \sqrt{5} + (-1)^{X} (5-\sqrt{5}){\right)}+ 2^{k+1}\operatorname{Re}{\left(}\frac{10}{11 + i\sqrt{19}} e^{iX\theta_1}{\right)}\\ +
2\operatorname{Re}{\left(}\frac{10}{9+ i\sqrt{19}}
e^{iX(\theta_1 -\pi)}{\right)}+
2\operatorname{Re}{\left(}\frac{10}{13 + i\sqrt{11}} e^{iX \theta_2}{\right)}\Bigg) + O_{M}{\left(}\frac{6^k k^2}{\log X}{\right)}.
\end{gathered}$$
We observe that $\theta_1$ is not a rational multiple of $\pi$. This follows from the fact that for any $n\geq 1$ the $5$-adic valuation of $\cos(n\theta_1)$ is $-n/2$; thus we cannot have $\cos(n\theta_1)=\pm 1$ except for $n=0$. Hence, by Proposition \[Prop\_limitingDist\].\[Item\_continuous\], for each $k\geq 1$, the corresponding limiting distribution is continuous. Moreover it has mean value $ \mathbf{E} \asymp\frac{(-1)^k}{2^{k}(k-1)!} $ and variance $\operatorname{Var}\asymp\frac{2^{2k}}{(k-1)!^2}$.
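The $5$-adic valuation claim reduces to exact integer arithmetic: the inverse zeros $\alpha,\bar\alpha$ of $1+u+5u^2$ satisfy $z^2+z+5=0$, so $s_n := \alpha^n + \bar\alpha^n = 2\cdot 5^{n/2}\cos(n\theta_1)$ is the integer sequence $s_0 = 2$, $s_1 = -1$, $s_n = -s_{n-1}-5s_{n-2}$, and the claim is that $5\nmid s_n$ (indeed $s_n \equiv (-1)^n \bmod 5$ for $n\geq 1$). A quick check:

```python
import math

# theta_1 = pi + arctan(sqrt(19)) satisfies 2*sqrt(5)*cos(theta_1) = -1,
# i.e. alpha = sqrt(5)*e^{i*theta_1} is a root of z^2 + z + 5
theta1 = math.pi + math.atan(math.sqrt(19))
assert abs(2 * math.sqrt(5) * math.cos(theta1) + 1) < 1e-12

s = [2, -1]                      # s_n = alpha^n + conj(alpha)^n
for _ in range(2, 60):
    s.append(-s[-1] - 5 * s[-2])

# 5 never divides s_n, so cos(n*theta1) = s_n / (2 * 5^(n/2)) has 5-adic
# valuation exactly -n/2; in particular cos(n*theta1) != +-1 for n >= 1,
# and theta_1 is not a rational multiple of pi
print(all(s[n] % 5 != 0 for n in range(1, 60)))  # -> True
```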
Note that LI is not satisfied in this example. However, Damien Roy and Luca Ghidelli observed that the set $\lbrace \pi, \theta_1,\theta_2 \rbrace$ is linearly independent over $\mathbf{Q}$. For any $(a,b,c) \in \mathbf{Z}^3$, using the Chebyshev polynomials of the second kind, we see that $\sin(a\pi + b\theta_1) \in \sqrt{19}\mathbf{Q}(\sqrt{5})$ and $\sin(c\theta_2) \in \sqrt{11}\mathbf{Q}(\sqrt{5})$; hence the only way for them to be equal is for both to be $0$.
$k$ $\#\{ X \leq 10^9 : {\Delta}_{\Omega_k}(X; M, \square,\boxtimes) >0 \} $ $\#\{ X \leq 10^9 : {\Delta}_{\omega_k}(X; M, \square,\boxtimes) >0 \} $
----- -------------------------------------------------------------------------- --------------------------------------------------------------------------
1 $194\ 355\ 543$ $805\ 644\ 606$
2 $563\ 506\ 459$ $ 563\ 506\ 459 $
3 $484\ 542\ 923$ $515\ 457\ 280$
4 $503\ 903\ 947$ $503\ 903\ 947$
5 $499\ 014\ 553$ $500\ 985\ 439$
6 $500\ 247\ 844$ $500\ 247\ 844 $
7 $499\ 937\ 823$ $500\ 062\ 193$
8 $ 500\ 015\ 580$ $500\ 015\ 580$
9 $499\ 996\ 073$ $500\ 003\ 876$
10 $500\ 000\ 986$ $500\ 000\ 986$
: Approximation of the bias of $\Delta_{\Omega_k}$ and $\Delta_{\omega_k}$ for $k \in \lbrace 1,\ldots, 10\rbrace$ []{data-label="Table_Ex2"}
We observe that the term $2^{k+1}\operatorname{Re}{\left(}\frac{10}{11 + i\sqrt{19}} e^{iX\theta_1}{\right)}$ will become the leading term as $k$ grows. This term corresponds to a symmetric distribution with mean value equal to zero. Proposition \[Prop k limit\] predicts that the bias tends to $\frac{1}{2}$ as $k$ grows. We observe this tendency in the data; in Table \[Table\_Ex2\] we present an approximation of the bias for the functions $\Delta_{f_k}(X; M, \square,\boxtimes)$ (up to the $o(1)$ term), with $f = \Omega$ or $\omega$, computed for $1\leq X \leq 10^9$ and $1\leq k\leq 10$.
Case where $\sqrt{q}$ or $-\sqrt{q}$ is an inverse zero {#subsec_examples_realZero}
--------------------------------------------------------
In [@Li2018], Li showed the existence of a family of polynomials $M$ satisfying $m_{+}(\chi_M) >0$. We now use some of these polynomials to obtain completely biased races between quadratic residues and non-quadratic residues.
\[Ex\_sqrt(q)\]
Taking $q=9$, we study polynomials with coefficients in $\mathbf{F}_{9}= \mathbf{F}_{3}[a]$ (i.e. $a$ is a generator of $\mathbf{F}_9$ over $\mathbf{F}_3$). Let $M= t^4 + 2t^3 + 2t + a^7$. This polynomial is square-free and has the particularity that $m_{+}(\chi_{M}) =2$ where $\chi_{M}$ is the primitive quadratic character modulo $M$ (see [@Li2018]). More precisely, $$\mathcal{L}(u,\chi_M) = (1 - 3u)^2.$$
The polynomial $M$ has two irreducible factors of degree $2$ in $\mathbf{F}_{9}[t]$. We denote $M=M_1 M_2$, and for $i=1$, $2$, let $\chi_i$ be the character modulo $M$ induced by the character $\chi_{M_i}$. Then for $i = 1$, $2$, one has $$\mathcal{L}(u,\chi_i) = (1-u)(1-u^{2}).$$ In particular, the only inverse zero of a quadratic character modulo $M$ with norm $\sqrt{9} = 3$ is the real zero $\alpha = 3$ with multiplicity $2$. Inserting this information in and , we obtain $$\Delta_{\Omega_k}(X; M, \square,\boxtimes)
= \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg\{ \frac{2 + 5^k}{2^k} \frac{3}{2} +\frac{3}{2^k} \frac{3}{4}(-1)^{X} \Bigg\}
+ O_{M}{\left(}\frac{4^k k^2}{\log X}{\right)},$$ and $$\Delta_{\omega_k}(X; M, \square,\boxtimes)
= \frac{1}{\lvert \boxtimes\rvert} \Bigg\{ \frac{2 + (-3)^k}{2^k} \frac{3}{2} +\frac{3}{2^k} \frac{3}{4}(-1)^{X} \Bigg\}
+ O_{M}{\left(}\frac{4^k k^2}{\log X}{\right)}.$$ In each case, for each $k\geq 1$, the limiting distribution is a sum of two Dirac deltas, symmetric with respect to the mean value. One can observe that, in each case and for any $k\geq 2$, the constant term is larger in absolute value than the oscillating term. We deduce that, for $k\geq 2$, $$\operatorname{dens}((-1)^k \Delta_{\Omega_k}(\cdot;M,\square,\boxtimes) > 0)
= \operatorname{dens}((-1)^k \Delta_{\omega_k}(\cdot;M,\square,\boxtimes) > 0)= 1.$$ We say that the bias is complete. Note that in this case, contrary to the case of prime numbers, when $k$ is odd, the function $\Delta_{\omega_k}(\cdot;M,\square,\boxtimes)$ does not have a bias towards quadratic residues.
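The dominance of the constant term in Example \[Ex\_sqrt(q)\] is an exact inequality between the coefficients, here written without the common factor $\frac{1}{2^k\lvert\boxtimes\rvert}$, which can be verified with rational arithmetic:

```python
from fractions import Fraction as Fr

osc = 3 * Fr(3, 4)                      # coefficient of (-1)^X, q = 9
for k in range(2, 13):
    const_Omega = (2 + 5 ** k) * Fr(3, 2)
    const_omega = (2 + (-3) ** k) * Fr(3, 2)
    # for k >= 2 the constant part dominates, so the two Dirac masses of
    # the limiting distribution lie on the same side of 0: complete bias
    assert abs(const_Omega) > osc and abs(const_omega) > osc
print("constant term dominates for 2 <= k <= 12")
```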
We note that the complete bias obtained in Example \[Ex\_sqrt(q)\] could be one of the simplest ways to observe such a phenomenon. Previously, in the setting of prime number races, Fiorilli [@Fiorilli_HighlyBiased] observed that arbitrarily large biases could be obtained in the race between quadratic residues and non-quadratic residues modulo an integer with many prime factors (see also Proposition \[Prop Chebyshev many factors\] for a translation into our setting). Fiorilli’s large bias is due to the squares of prime numbers. Note that over number fields, the infinitude of zeros of the $L$-functions is (under GRH) an obstruction to the existence of complete biases in prime number races with positive coefficients (see [@RS Rk. 2.5]). The first observation of a complete bias is in [@CFJ Th. 1.5], in the context of Mazur’s question on Chebyshev’s bias for elliptic curves over function fields. As in [@CFJ], our complete bias is due to a “large rank”, i.e. a vanishing of the $L$-function at the central point.
\[Ex\_-sqrt(q)\] Taking $q=9$, we study polynomials with coefficients in $\mathbf{F}_{9}= \mathbf{F}_{3}[a]$ (as in Example \[Ex\_sqrt(q)\]). Let $M= t^3 -t$. This polynomial is square-free and has the particularity that $m_{-}(\chi_{M}) =2$. More precisely, $$\mathcal{L}(u,\chi_M) = (1 + 3u)^2.$$
The polynomial $M$ has three irreducible factors of degree $1$ in $\mathbf{F}_{9}[t]$. We denote $M=M_1 M_2 M_3$, and for $i=1$, $2$, $3$ let $\chi_i$ be the character modulo $M$ induced by the character $\chi_{M_i}$. For $i\neq j \in \lbrace 1,2,3\rbrace$ one has $$\mathcal{L}(u,\chi_i) = (1-u)^2, \quad \text{ and } \quad
\mathcal{L}(u,\chi_i\chi_j) = (1-u)^2.$$ In particular, the only inverse zero of a quadratic character modulo $M$ with norm $\sqrt{9} = 3$ is the real zero $\alpha = -3$ with multiplicity $2$. Inserting this information into and , we obtain $$\Delta_{\Omega_k}(X; M, \square,\boxtimes)
= \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg\{ \ \frac{7}{2^k} \frac{3}{2} + \frac{6 + 5^k}{2^k} \frac{3}{4}(-1)^{X} \Bigg\}
+ O_{M}{\left(}\frac{3^k k^2}{\log X}{\right)},$$ and $$\Delta_{\omega_k}(X; M, \square,\boxtimes)
= \frac{1}{\lvert \boxtimes\rvert} \Bigg\{ \ \frac{7}{2^k} \frac{3}{2} + \frac{6 + (-3)^k}{2^k} \frac{3}{4}(-1)^{X} \Bigg\}
+ O_{M}{\left(}\frac{3^k k^2}{\log X}{\right)}.$$ In each case, for each fixed $k$, the limiting distribution is again a sum of two Dirac deltas, symmetric with respect to the mean value. We observe that for $k =1$ the constant term dominates the sign of the function, so there are complete biases: $\operatorname{dens}(\Delta_{\Omega_1}(\cdot;M,\square,\boxtimes) > 0) = 0$ and $\operatorname{dens}(\Delta_{\omega_1}(\cdot;M,\square,\boxtimes) > 0) = 1$. For $k\geq 2$, the two Dirac deltas are on opposite sides of zero, hence $$\operatorname{dens}(\Delta_{\Omega_k}(\cdot;M,\square,\boxtimes) > 0) = \operatorname{dens}(\Delta_{\omega_k}(\cdot;M,\square,\boxtimes) > 0)= \frac{1}{2},$$ and the race is unbiased.
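The transition at $k=2$ in Example \[Ex\_-sqrt(q)\] is again an exact comparison of coefficients (written without the common factor $\frac{1}{2^k\lvert\boxtimes\rvert}$):

```python
from fractions import Fraction as Fr

const = 7 * Fr(3, 2)                     # constant part, q = 9
def osc(base, k):                        # coefficient of (-1)^X
    return (6 + base ** k) * Fr(3, 4)    # base = 5 for Omega, -3 for omega

for base in (5, -3):
    assert const > abs(osc(base, 1))      # k = 1: complete bias
    for k in range(2, 13):
        assert abs(osc(base, k)) > const  # k >= 2: Dirac masses straddle 0
print("k = 1: complete bias; 2 <= k <= 12: unbiased")
```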
The examples in this section illustrate the following more general result: we can always find unbiased and completely biased races.
\[Prop using Honda Tate\] Let $\mathbf{F}_{q}$ be a finite field of odd characteristic, then there exists $M_{\frac12}\in \mathbf{F}_{q}[t]$ such that, for $f= \Omega$ or $\omega$, and for $k$ large enough, one has $$\operatorname{dens}(\Delta_{f_k}(\cdot;M_{\frac12},\square,\boxtimes)>0) = \frac12.$$ Moreover, if $q >3$, there exists $M_{1}\in \mathbf{F}_{q}[t]$ such that, for $f= \Omega$ or $\omega$, and for $k$ large enough, one has $$\operatorname{dens}((-1)^k\Delta_{f_k}(\cdot;M_1,\square,\boxtimes)>0) = 1.$$
It is interesting to note that in the case of an extreme bias, the bias for the function $\Delta_{\omega_k}(\cdot;M,\square,\boxtimes)$ changes direction with the parity of $k$, whereas in the case of integers [@Meng2017] the analogous function has a bias towards squares independently of the parity of $k$.
In the case where $q$ is a square, this result is a consequence of the Honda–Tate theorem for elliptic curves: by [@Waterhouse Th. 4.1] there exist two elliptic curves $E_{\pm}$ over $\mathbf{F}_{q}$ whose Weil polynomials are $P_{\pm}(u) = (1 \mp \sqrt{q}u)^2$. If $q >3$ is not a square, by [@Waterhouse Th. 4.1], there exists an elliptic curve $E_{\frac12}$ over $\mathbf{F}_{q}$ whose Weil polynomial is $P_{\frac12}(u) = 1 + qu^2$, and by [@HNR Th. 1.2], there exists a hyperelliptic curve $C_1$ of genus $2$ whose Weil polynomial is $P_1(u) = (1 - qu^2)^2$.
Since $q$ is odd, using the Weierstrass form, each of the elliptic curves $E_{a}$, where $a = +,-$ or $\frac12$, has an affine model with equation $y^2 = M_{a}(x)$, where $M_{a} \in \mathbf{F}_q[t]$ has degree $3$. Similarly, the hyperelliptic curve $C_1$ has an affine model with equation $y^2 = M_{1}(x)$, with $M_1 \in \mathbf{F}_q[t]$ of degree $5$. Then $\mathcal{L}(u,\chi_{M_{a}}) = P_{a}(u)$, for $a\in \lbrace +,-,\frac12,1\rbrace$.
For $a\in \lbrace +,-,\frac12\rbrace$, let $D$ be a strict divisor of $M_a$; then $\deg D\leq 2$, so $\mathcal{L}(u,\chi_{D})$ does not have inverse zeros of norm $\sqrt{q}$. If $D$ is a strict divisor of $M_1$, then $\mathcal{L}(u,\chi_{D}) \in \mathbf{Z}[u]$ has at most two inverse zeros of norm $\sqrt{q}$, which are conjugate; in particular, these inverse zeros are simple.
Thus the case where $q$ is a square follows in the same way as in Examples \[Ex\_sqrt(q)\] and \[Ex\_-sqrt(q)\].
In the case where $q$ is not a square, using the information above in we obtain, for $f = \Omega$ or $\omega$, $$\begin{gathered}
\Delta_{f_k}(X; M_{\frac12}, \square,\boxtimes)
= \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg\{ {\left(}-\tfrac{\epsilon_{f}}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q}-1} +{\left(}-\tfrac{\epsilon_f }{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q} + 1}(-1)^{X} \\
+2\operatorname{Re}\left( \frac{i\sqrt{q}}{i\sqrt{q} -1} e^{iX\frac{\pi}2}\right)\Bigg\} + O_k{\left(}(\log X)^{-1}{\right)},
\end{gathered}$$ where $\epsilon_{\Omega} = -1$, $\epsilon_{\omega} = 1$. The periodic part inside the brackets takes $4$ different values: $$2q{\left(}{\left(}-\tfrac{\epsilon_{f}}{2}{\right)}^k\tfrac{1}{q-1} \pm \tfrac{1}{q+1}{\right)}, \quad 2\sqrt{q}{\left(}{\left(}-\tfrac{\epsilon_{f}}{2}{\right)}^k\tfrac{1}{q-1} \pm \tfrac{1}{q+1}{\right)},$$ exactly $2$ of which are positive and $2$ negative when $q>3$ or $k\geq 2$. So the race is unbiased.
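The sign count can be checked exactly: up to the positive factors $2q$ and $2\sqrt q$, the four values have the signs of $x \pm y$ with $x = (-\epsilon_f/2)^k/(q-1)$ and $y = 1/(q+1)$ (the grouping of the residues of $X \bmod 4$ is worked out from the display), and two of each sign occur precisely when $\lvert x\rvert < y$, i.e. $q+1 < 2^k(q-1)$:

```python
from fractions import Fraction as Fr

def two_positive_two_negative(q, k, eps):
    x = Fr(-eps, 2) ** k / (q - 1)
    y = Fr(1, q + 1)
    # the four values of the periodic part are positive multiples of
    # x + y (twice, for X = 0, 1 mod 4) and x - y (twice, for X = 2, 3)
    return x + y > 0 and x - y < 0

for q in (5, 7, 9, 11, 13, 1009):
    for k in range(1, 8):
        for eps in (-1, 1):          # eps_Omega = -1, eps_omega = 1
            assert two_positive_two_negative(q, k, eps)
assert not two_positive_two_negative(3, 1, -1)   # excluded case q = 3, k = 1
print("2 positive and 2 negative values whenever q > 3 or k >= 2")
```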
Similarly for $M_1$ we have $$\begin{gathered}
\Delta_{f_k}(X; M_1, \square,\boxtimes)
= \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg\{ {\left(}2 -\epsilon_{f}\tfrac{1}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q}-1} +{\left(}2 -\epsilon_f \tfrac{1}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q} + 1}(-1)^{X} \\
+2\operatorname{Re}\left( m_1 \frac{\alpha_1}{\alpha_1 -1} e^{iX\gamma_{1}}\right) + O_k{\left(}(\log X)^{-1}{\right)}\Bigg\},
\end{gathered}$$ where $m_1 = 0$ or $1$ and $\alpha_1 = \sqrt{q}e^{i\gamma_1}$ is an inverse zero of $\mathcal{L}(u,\chi_{D})$, for $D$ a strict divisor of $M_1$. We observe that the constant term dominates for $k$ large enough; one has an extreme bias: $\operatorname{dens}((-1)^k \Delta_{f_k}(X;M,\square,\boxtimes) > 0) = 1,$ with different directions of the bias according to the parity of $k$.
Limit behaviours {#subsec central limit}
================
In this section we study the limit behaviour of the measures $\mu_{M,f_k}$, for $f= \Omega$ or $\omega$, as $k$ or $\deg M$ gets large. We present the results in order of increasing strength of the assumptions needed.
Unconditional results as $k$ grows {#subsec k limit}
----------------------------------
First we focus on $k$ getting large while the modulus $M$ is fixed. We obtain the following unconditional result (see also Remark \[Rk\_on\_Th\_deg<n\].\[Item\_largestMult\]) regarding the $k$-limit of the limiting distributions.
\[Prop k limit\] Fix $M\in \mathbf{F}_{q}[t]$, let $f= \Omega$ or $\omega$, and $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$. We define $$m_{f,\max} = \max_{\chi, j}\lbrace m_{\pm}(\chi) -\epsilon_{f} \frac{1}{2}, m_j(\chi)\rbrace,$$ where the maximum is taken over all non-trivial quadratic characters $\chi$ modulo $M$. Then, as $k\rightarrow\infty$, the limiting distribution of $$\begin{aligned}
&\Delta_{f_k}^{\mathrm{norm}}(\cdot;M) := \frac{(-1)^k\lvert \boxtimes\rvert \sqrt{q-1}}{m_{f,\max}^{k}\sqrt{q}}\Delta_{f_k}(\cdot;M,\square,\boxtimes)\end{aligned}$$ converges weakly to a probability measure $\mu_{M,f}$, depending only on the set of zeros of maximal multiplicity. In particular,
1. \[Item max integer\]*(Expected generic case)* if $m_{\Omega,\max}$ is an integer and if the set of zeros of maximal multiplicity generates a symmetric sub-torus, then $\mu_{M,\Omega}=\mu_{M,\omega}$ is symmetric, so the bias dissipates as $k$ gets large;
2. \[Item max is m+\] if for some non-trivial quadratic character $\chi_1$ modulo $M$ one has $$\max_{\chi}\lbrace m_{-}(\chi) \rbrace < m_{+}(\chi_1) = m_{\Omega,\max} - \frac{1}{2}$$ then $\mu_{M,\Omega}$ is a Dirac delta, so the bias tends to be extreme as $k$ gets large.
Note that in the generic case, we expect LI to be satisfied and to have $m_{\Omega,\max}=1$ an integer. So, for most $M \in \mathbf{F}_{q}[t]$ square-free, the bias should dissipate in the race between polynomials with $k$ irreducible factors in the quadratic residues and non-quadratic residues modulo $M$.
Let $\varphi_{M,f_k}$ be the Fourier transform of the limiting distribution of $\Delta_{f_k}^{\mathrm{norm}}(\cdot;M)$. One has $$\begin{gathered}
\varphi_{M,f_k}(\xi)
= \exp\left(-i\xi \frac{\sqrt{q-1} \sum_{\chi} \left( m_{+}(\chi) - \epsilon_f \frac{1}{2} \right)^k}{m_{f,\max}^{k}(\sqrt{q} - 1)} \right) \\ \times
\int_{A}\exp\Bigg\lbrace -i\xi\Bigg( \frac{\sqrt{q-1}\sum_{\chi} \left( m_{-}(\chi) - \epsilon_f \frac{1}{2} \right)^k}{m_{f,\max}^{k}(\sqrt{q} +1)}(-1)^{a_1}
\\+ \sqrt{\frac{q-1}{q}}\sum_{j=2}^{N} 2\frac{m_{j}^k}{m_{\max}^k}\operatorname{Re}\left(\frac{\sqrt{q}e^{i\gamma_j}}{\sqrt{q}e^{i\gamma_j} -1}e^{ia_j}\right) \Bigg)\Bigg\rbrace {\mathop{}\!\mathrm{d}}\omega_{A}(a),\end{gathered}$$ where we write the ordered list of inverse-zeros with multiplicities $$\label{Def zeros multiplicities}
\lbrace (\gamma_{2},m_2), \ldots, (\gamma_{N},m_N)\rbrace = \bigcup\limits_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}\lbrace (\gamma,m)\in (0,\pi)\times\mathbf{N}_{>0} : L(\tfrac{1}{2} + i\gamma, \chi) = 0 \text{ with multiplicity } m \rbrace,$$ and $A$ is the closure of the $1$-parameter group $\lbrace y(\pi,\gamma_2,\ldots,\gamma_N) : y \in \mathbf{Z} \rbrace/2\pi\mathbf{Z}^{N}$.
Now, by the dominated convergence theorem, there are four cases according to which zeros have maximal multiplicity. In each case it is easy to see that the limit function $\varphi_{M,f}$ is indeed the Fourier transform of a measure $\mu_{M,f}$, and the conclusion follows by Lévy’s Continuity Theorem.
Suppose that $m_{f,\max}$ is an integer, i.e. the zeros of maximal order are not real. Up to reordering, we can assume that the first $d$ zeros in have maximal multiplicity: $m_2 = \ldots = m_d = m_{\max} > \max_{j>d}\lbrace m_j \rbrace$. We have for $f=\Omega$ or $\omega$ and for every $\xi \in\mathbf{R}$, $$\begin{aligned}
\varphi_{M,f}(\xi) := \lim_{k\rightarrow\infty} \varphi_{M,f_k}(\xi) = \int_{A(\max)}
\exp\left(-i\xi\left( \sqrt{\frac{q-1}{q}}\sum_{j=2}^{d} 2\operatorname{Re}\left(\frac{\alpha_j}{\alpha_j -1}e^{ia_j}\right) \right)\right) {\mathop{}\!\mathrm{d}}\omega_{A(\max)}(a),\end{aligned}$$ where $A(\max)$ is the closure of the $1$-parameter group $\lbrace y(\gamma_2,\ldots,\gamma_d) : y \in \mathbf{Z} \rbrace/2\pi\mathbf{Z}^{d-1}$. This follows from the fact that the projection $A\rightarrow A(\max)$ induces a bijection between a sub-torus of $A$ and $A(\max)$; then, by uniqueness of the normalized Haar measure, the measure induced on the sub-torus by the normalized Haar measure of $A$ is exactly the normalized Haar measure of $A(\max)$.
By Proposition \[Prop\_limitingDist\].\[Item\_symmetry\], the function $\varphi_{M,f}$ is even if the sub-torus $A(\max)$ is symmetric. This concludes the proof of Proposition \[Prop k limit\].\[Item max integer\]. Note that in the case $m_{\Omega,\max}$ is an integer, we have $m_{\omega,\max} = m_{\Omega,\max}$ and the zeros of maximal order are the same for the two functions $\Delta_{\Omega_k}$ and $\Delta_{\omega_k}$. In particular $\mu_{M,\Omega} = \mu_{M,\omega}$.
In the case where the zeros of maximal order are real, there are the following three possibilities.
1. The maximum $m_{f,\max}$ is reached only by $m_{+}(\chi_1), \ldots, m_{+}(\chi_d)$, for $\chi_1,\ldots,\chi_d$ non-trivial quadratic characters modulo $M$, then for every $\xi \in\mathbf{R}$, we have $$\begin{aligned}
\varphi_{M,f}(\xi) := \lim_{k\rightarrow\infty} \varphi_{M,f_k}(\xi) = \exp\left(-i\xi \frac{d \sqrt{q-1}}{\sqrt{q} - 1} \right). \end{aligned}$$ This is the Fourier transform of a Dirac delta at a positive value, thus $$\lim_{k\rightarrow\infty}\operatorname{dens}((-1)^k\Delta_{f_k}(X;M,\square,\boxtimes) >0) = 1.$$
2. The maximum $m_{f,\max}$ is reached only by $m_{-}(\chi_1), \ldots, m_{-}(\chi_d)$, for $\chi_1,\ldots,\chi_d$ non-trivial quadratic characters modulo $M$, then for every $\xi \in\mathbf{R}$, we have $$\begin{aligned}
\varphi_{M,f}(\xi) := \lim_{k\rightarrow\infty} \varphi_{M,f_k}(\xi) = \cos\left(\xi \frac{d\sqrt{q-1}}{\sqrt{q} +1}\right). \end{aligned}$$ This is the Fourier transform of a combination of two half-Dirac deltas symmetric with respect to $0$, thus $$\lim_{k\rightarrow\infty}\operatorname{dens}(\Delta_{f_k}(X;M,\square,\boxtimes) >0) = \frac{1}{2}.$$
3. The maximum $m_{f,\max}$ is reached by $m_{+}(\chi_1), \ldots, m_{+}(\chi_d)$ and by $m_{-}({\chi'}_1), \ldots, m_{-}({\chi'}_{d'})$ for $\chi_1,\ldots,\chi_d, {\chi'}_1, \ldots, {\chi'}_{d'}$ non-trivial quadratic characters modulo $M$, then for every $\xi \in\mathbf{R}$, we have $$\begin{aligned}
\varphi_{M,f}(\xi) := \lim_{k\rightarrow\infty} \varphi_{M,f_k}(\xi) = \exp\left(-i\xi \frac{d \sqrt{q-1}}{\sqrt{q} - 1} \right)\cos\left(\xi \frac{d' \sqrt{q-1}}{\sqrt{q} +1}\right). \end{aligned}$$ This is the Fourier transform of a combination of two half-Dirac deltas symmetric with respect to $ \frac{d \sqrt{q-1}}{\sqrt{q} - 1}$. Thus the limit measure has a complete bias, or is unbiased, or its bias is not well defined depending on whether $\frac{d}{\sqrt{q} -1} > \frac{d'}{\sqrt{q} + 1}$, or $<$, or $=$.
Finally, note that when $m_{\Omega,\max} \notin \mathbf{N}$, the sets of zeros attaining the maximum for $\Delta_{\Omega_k}$ and $\Delta_{\omega_k}$ can differ; they coincide if $m_{\omega,\max} \notin \mathbf{N}$.
Existence of extreme biases for moduli with many irreducible factors {#subsec extreme bias M limit}
--------------------------------------------------------------------
We now keep $k$ fixed and vary the modulus $M$. Following the philosophy of [@Fiorilli_HighlyBiased Th. 1.2], we obtain that, as the number of irreducible factors of $M$ increases, extreme biases appear in the race between quadratic and non-quadratic residues modulo $M$. Thus $1$ is in the closure of the values of the densities in Theorem \[Th central limit for M under LI\]. As in the work of Fiorilli, the full strength of (LI) is not necessary here.
\[Prop Chebyshev many factors\] Let $\lbrace M \rbrace$ be a sequence of polynomials in $\mathbf{F}_{q}[t]$ such that the multi-set $\mathcal{Z}(M) = \bigcup\limits_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 , \chi \neq \chi_0}}\lbrace \gamma \in [0,\pi] : L(\frac{1}{2} + i\gamma, \chi) = 0 \rbrace$ is linearly independent of $\pi$ over $\mathbf{Q}$. Assume also that the multiplicities of the zeros are bounded: there exists $B>0$ such that for each $M$, for each $\gamma \in \mathcal{Z}(M)$ one has $$\sum_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 , \chi \neq \chi_0}} m_{\gamma}(\chi) \leq B.$$
Then, for $f=\Omega$ or $\omega$, as $\omega(M)\rightarrow\infty$, one has $$\operatorname{dens}((\epsilon_f)^k\Delta_{f_k}(\cdot;M,\square,\boxtimes) >0) \geq 1 - O\left( \frac{(2B)^{2k} q \deg M}{2^{\omega(M)}} \right),$$ where $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$.
The proof follows the idea of [@Fiorilli_HighlyBiased Th. 1.2], [@Devin2018 Cor. 5.8], using Chebyshev’s inequality (e.g. [@Billingsley (5.32)]). However, unlike the results in loc. cit. that use a limiting density for a function over $\mathbf{R}$, we need to be careful about the influence of $\pi$.
Thanks to the hypothesis of linear independence applied to , we have that $\mu_{M,f_k}$ is the convolution of two probability measures: $\mu_{M,f_k} = D_{M,f_k} \ast \nu_{M,k}$, where $D_{M,f_k}$ is the combination of two half-Dirac deltas at $\frac{2(2^{\omega(M)} -1)}{\lvert \boxtimes\rvert} {\left(}\frac{\epsilon_f}{2}{\right)}^k \frac{\sqrt{q}}{q-1}$ and $\frac{2(2^{\omega(M)} -1)}{\lvert \boxtimes\rvert} {\left(}\frac{\epsilon_f}{2}{\right)}^k \frac{q}{q-1}$, with $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$, while $\nu_{M,k}$ has mean value $0$ and variance $$\begin{aligned}
\operatorname{Var}(\nu_{M,k}) = \frac{1}{\lvert \boxtimes\rvert^2} \sum_{\gamma \in \mathcal{Z}(M)}\Bigg( \sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 , \chi\neq \chi_0}} m_{\gamma}^{k}(\chi) \frac{\lvert\sqrt{q}e^{i\gamma}\rvert}{\lvert\sqrt{q}e^{i\gamma}-1\rvert}\Bigg)^2
\ll \frac{B^{2k} 2^{\omega(M)} \deg M}{\lvert \boxtimes\rvert^2}.\end{aligned}$$ To obtain this bound, note that there are $2^{\omega(M)} -1$ quadratic characters modulo $M$ (see e.g. [@Rosen2002 Prop. 1.6]) and for each of them the associated Dirichlet $L$-function is of degree smaller than $\deg M$. (Note also that $\nu_{M,k}$ is independent of $f=\Omega$ or $\omega$.) Thus, by Proposition \[Prop\_limitingDist\].\[Item\_continuous\] and Chebyshev’s inequality, $$\begin{aligned}
\operatorname{dens}\left((\epsilon_f)^k\Delta_{f_k}(X;M,\square,\boxtimes) >0\right) &\geq \nu_{M,k}\left(\bigg( \frac{-\sqrt{q}}{(q-1)}\frac{(2^{\omega(M)} -1)}{ 2^{k-1}\lvert \boxtimes\rvert} , \infty\bigg)\right) \\
&\geq 1 - O\left( \frac{B^{2k} 2^{\omega(M)} \deg M}{\lvert \boxtimes\rvert^2} \left(\frac{\sqrt{q}}{(q-1)} \frac{(2^{\omega(M)} -1)}{ 2^{k-1}\lvert \boxtimes\rvert} \right)^{-2} \right) \end{aligned}$$ which concludes the proof.
Limit behaviour for moduli satisfying the linear independence {#subsec M limit under LI}
-------------------------------------------------------------
In this section, following [@Fiorilli_HighlyBiased Sec. 3], we generalize [@Cha2008 Th. 6.2] on the central limit behaviour of the measure $\mu_{M,f_k}$ for $f=\Omega$ or $\omega$. In particular we prove Theorem \[Th central limit for M under LI\]; under (LI), the bias can approach any value in $[\tfrac12 ,1]$ as the degree of $M$ gets large. We already proved that $1$ can be approached by the values of the bias without assuming (LI) in Proposition \[Prop Chebyshev many factors\], thus it remains to prove Theorem \[Th central limit for M under LI\] for the interval $[\frac12,1)$.
As noted in Section \[subsec extreme bias M limit\] when using Chebyshev’s inequality, assuming enough linear independence of the zeros, the distribution $\mu_{M,f_k}$ is well described by the data $$B(M) :=\frac{\lvert \mathbf{E}\mu_{M,f_k}\rvert} {\sqrt{\operatorname{Var}(\nu_{M,k})}}= \frac{(2^{\omega(M)} -1)\sqrt{q}}{2^k(\sqrt{q} -1)}\Bigg( \sum_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}\sum_{j=1}^{d_{\chi}} \left\lvert \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} \right\rvert^2 \Bigg)^{-1/2}$$ where $\nu_{M,k}$ is as defined in the proof of Proposition \[Prop Chebyshev many factors\]. By [@Cha2008 (44)] and [@Cha2008 Prop. 6.4], we have for every non-trivial quadratic character $\chi$ modulo $M$, $$\begin{aligned}
\sum_{j=1}^{d_{\chi}}\left\lvert\frac{\alpha_j(\chi)}{\alpha_{j}(\chi)-1}\right\rvert^2 = \frac{q}{q-1}(\deg M^*(\chi) -2) + O(\log (\deg M^*(\chi) +1)),\end{aligned}$$ where $M^*(\chi)$ is the modulus of the primitive character that induces $\chi$. Note that the sum is empty if $\deg M^*(\chi) \leq 2$. Thus, summing over the non-trivial quadratic characters we have $$\begin{aligned}
\label{bound IM}
I(M) :=\sum_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}
\sum_{j=1}^{d_{\chi}} \left\lvert \frac{\alpha_j(\chi)}{\alpha_{j}(\chi)-1}\right\rvert^2
&= \frac{q}{q-1}\sum_{\substack{D\mid M' \\ D \neq 1}} (\deg D -2) + O(2^{\omega(M)} \log (\deg M' +1))\nonumber\\
&= \frac{q}{q-1}(2^{\omega(M)} -1) \frac{\deg M' -4}{2} + O(2^{\omega(M)} \log (\deg M' +1)),\end{aligned}$$ where $M'$ is the largest square-free divisor of $M$. Thus, if $\deg M'>4$, $$B(M)= 2^{-k}\frac{\sqrt{2(q-1) (\tau(M') -1)}}{(\sqrt{q} -1)\sqrt{\deg M' -4}} \left( 1 + O\left(\frac{\log(\deg M') }{\deg M'}\right)\right),$$ where $\tau$ counts the number of divisors. We show that, as $\deg M' \rightarrow\infty$, $B(M)$ can approach any non-negative real number.
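The divisor sum in the first line of the display above can be checked exactly on a toy example: for square-free $M'$ with factor degrees $d_1,\ldots,d_\omega$, each irreducible factor appears in half of the $2^{\omega}$ divisors, so $\sum_{D\mid M',\, D\neq 1}(\deg D - 2) = 2^{\omega-1}\deg M' - 2(2^{\omega} -1)$. A minimal sketch (the factor degrees below are an arbitrary choice):

```python
from itertools import combinations

# Degrees of the distinct irreducible factors of a hypothetical square-free M'.
degs = [1, 2, 3, 5]
omega = len(degs)

# Divisors of M' correspond to subsets of its irreducible factors.
divisor_degrees = [sum(c) for r in range(1, omega + 1)
                   for c in combinations(degs, r)]          # skips D = 1 (empty subset)
lhs = sum(d - 2 for d in divisor_degrees)
rhs = 2 ** (omega - 1) * sum(degs) - 2 * (2 ** omega - 1)   # closed form
```

The exact value differs from the approximation $(2^{\omega}-1)(\deg M' - 4)/2$ by $\deg M'/2$, which is absorbed in the error term.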
\[Lem any possible value\] For any fixed $0\leq c <\infty$, there exists a sequence of square-free monic polynomials $M_n \in \mathbf{F}_q[t]$ such that $$\deg M_n \rightarrow\infty \text{ and } \tau(M_n) = c\deg(M_n) + O(1).$$
The case $c=0$ follows from taking a sequence of irreducible polynomials. Now fix $0<c<\infty$. For $\omega$ large enough, there exist integers $0<d_1 < d_2 < \ldots < d_{\omega}$ such that $$\left[\frac{2^{\omega}}{c}\right] = d_1 + d_2 + \ldots + d_{\omega}.$$ For each $1\leq i \leq \omega$, there exists an irreducible polynomial $P_i \in \mathbf{F}_{q}[t]$ of degree $d_i$. Then the polynomial $M := P_1 P_2 \ldots P_\omega$ is square-free and satisfies $$\begin{aligned}
\tau(M) = 2^{\omega} = c( \deg M + O(1)).\end{aligned}$$ This concludes the proof.
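The partition of $[2^{\omega}/c]$ into distinct parts used in the proof can be made explicit; a minimal sketch (taking parts $1, 2, \ldots, \omega-1$ plus one large last part is one valid choice among many):

```python
def distinct_parts(n_parts, total):
    """Write `total` as a sum of `n_parts` strictly increasing positive integers."""
    base = list(range(1, n_parts))          # 1, 2, ..., n_parts - 1
    last = total - sum(base)
    assert last > n_parts - 1, "total too small for this many distinct parts"
    return base + [last]

# Factor degrees for a square-free M = P_1 ... P_omega with tau(M) = 2**omega,
# so that tau(M) is approximately c * deg M.
c, omega = 3.0, 10
degrees = distinct_parts(omega, int(2 ** omega / c))
```

Since $2^{\omega}$ grows much faster than $\omega(\omega+1)/2$, the assertion inside `distinct_parts` always holds once $\omega$ is large enough.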
\[Rk the measure is as smooth as you want\] Note that we only used the fact that there exist irreducible polynomials of each degree in $\mathbf{F}_{q}[t]$. In the sequence we constructed, we have $\deg M_n\rightarrow\infty$, thus $I(M_n)\rightarrow \infty$, and we deduce that the number of zeros $\lvert \mathcal{Z}(M_n) \rvert$ gets large too. In particular, we can always assume that $\lvert \mathcal{Z}(M_n) \rvert\geq 3$ so that the limiting distribution $\mu_{M_n,f_k}$ is absolutely continuous (see [@MartinNg Th. 1.5]).
In the case of a sequence of polynomials $(M_n)_{n\in \mathbf{N}}$ satisfying (LI) and for which $B(M_n)$ converges, we show that the limiting bias can be precisely described.
\[Prop M limit using Berry Esseen\] Let $b \in [0,\infty)$, and suppose there exists a sequence of polynomials $M_n \in \mathbf{F}_q[t]$ with $\deg M_n \rightarrow\infty$, $B(M_n) \rightarrow b$, such that for each $n$, $M_n$ satisfies *(LI)*. Then, for $f = \Omega$ or $\omega$, as $\deg M_n \rightarrow\infty$, the limiting distribution $\mu_{M,f_k}^{\mathrm{norm}}$ of $$\begin{aligned}
&\Delta_{f_k}^{\mathrm{norm}}(\cdot;M) := \frac{(\epsilon_f)^k}{\sqrt{\operatorname{Var}\nu_{M,k}}}\Delta_{f_k}(\cdot;M,\square,\boxtimes)\end{aligned}$$ converges weakly to the distribution $\frac{1}{2}(\delta_{2b/(\sqrt{q} +1)} + \delta_{2b\sqrt{q}/(\sqrt{q} +1)})\ast \mathcal{N}$, where $\mathcal{N}$ is the standard Gaussian distribution, and where $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$. More precisely, one has $$\sup_{x\in\mathbf{R}}\left\lvert \int_{-\infty}^{x}{\mathop{}\!\mathrm{d}}\mu_{M,f_k}^{\mathrm{norm}} - \frac{1}{2\sqrt{2\pi}}\int_{-\infty}^{x} {\left(}e^{-\frac{1}{2}(t - b\frac{2}{\sqrt{q} +1})^2} + e^{-\frac{1}{2}(t - b\frac{2\sqrt{q}}{\sqrt{q} +1})^2} {\right)}{\mathop{}\!\mathrm{d}}t \right\rvert \ll 2^{-\omega(M_n)}(\deg M'_n)^{-1} + \lvert B(M_n) - b\rvert$$ where $M'_n$ is the square-free part of $M_n$.
The proof follows ideas from the proof of [@Fiorilli_HighlyBiased Th. 1.1] and [@CFJ Th. 4.5], and is based on the use of Berry–Esseen inequality [@Esseen Chap. II, Th. 2a]. Let $M\in\mathbf{F}_{q}[t]$ be a polynomial satisfying (LI). We begin with computing the Fourier transform $\varphi_{M,f_k}$ of the limiting distribution $\mu^{\mathrm{norm}}_{M,f_k}$ of $\frac{(\epsilon_f)^k}{\sqrt{\operatorname{Var}\nu_{M,k}}}\Delta_{k}$ using Proposition \[Prop\_limitingDist\].\[Item\_Fourier\] for where we assume linear independence. With the notations of , one has $$\begin{aligned}
\varphi_{M,f_k}(\xi) &= \hat\mu_{M,f_k}\left(\frac{(\epsilon_f)^k}{\sqrt{\operatorname{Var}\nu_{M,k}}}\xi\right) \\
&= \exp\left(-i B(M)\xi \right)
\cos\left( \frac{\sqrt{q} - 1}{\sqrt{q} + 1} B(M)\xi \right)
\prod_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}\prod_{j=1}^{d_{\chi}/2}J_{0}\Big( 2 I(M)^{-1/2}
\left\lvert \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} \right\rvert (-\epsilon_f)^k\xi \Big),\end{aligned}$$ where, using the functional equation, we assume that the first $\frac{d_{\chi}}{2}$ non-real inverse zeros have positive imaginary part (recall that, since $\chi$ is real, up to reordering we can write $\alpha_{d_{\chi} +1 - j}(\chi) = \overline{\alpha_{j}(\chi)}$ for $j\in \lbrace1,\ldots,d_{\chi}\rbrace$).
Using the parity of the Bessel function and the power series expansion $\log J_0(z) = - \frac{z^2}{4} + O(z^4)$, for $\lvert z\rvert <\frac{12}{5}$ (see e.g. [@FiorilliMartin Lem. 2.8]), we get that, for any $\lvert \xi\rvert < \frac{1}{2} I(M)^{1/2}$, $$\begin{gathered}
\label{Asympt Fourier deg M}
\log \left\lbrace\varphi_{M,f_k}(\xi) \exp\left(i B(M)\xi \right)
\cos\left( \tfrac{\sqrt{q} - 1}{\sqrt{q} + 1} B(M)\xi \right)^{-1} \right\rbrace \\
= - \frac{1}{2}\xi^2
+ O\Big( I(M)^{-2}\sum_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}
\sum_{j=1}^{d_{\chi}/2} \left\lvert \frac{\alpha_j(\chi)}{\alpha_{j}(\chi)-1}\right\rvert^4 \xi^4 \Big).\end{gathered}$$ Since $\left\lvert \frac{\alpha_j(\chi)}{\alpha_{j}(\chi)-1}\right\rvert \leq \frac{\sqrt{q}}{\sqrt{q} - 1}$, the error term in is $O( \xi^4 I(M)^{-1})$. In the other direction, in the range $\lvert \xi \rvert > \tfrac{1}{2}I(M)^{\frac14}$, we have $$\begin{aligned}
2 I(M)^{-1/2}
\left\lvert \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} \right\rvert \lvert\xi\rvert > I(M)^{-1/4}
\left\lvert \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} \right\rvert \in [0,\tfrac{5}{3}]\end{aligned}$$ for $I(M)$ large enough. Since $J_0$ is positive and decreasing on the interval $[0,\tfrac{5}{3}]$, and since for all $z \geq \tfrac{5}{3}$ we have $\lvert J_0(z) \rvert \leq J_0(\tfrac{5}{3})$, we deduce $$\begin{gathered}
\label{Asympt Fourier xi large}
\log \left\lbrace\varphi_{M,f_k}(\xi) \exp\left(i B(M)\xi \right)
\cos\left( \tfrac{\sqrt{q} - 1}{\sqrt{q} + 1} B(M)\xi \right)^{-1} \right\rbrace \\ \leq \sum_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}\sum_{j=1}^{d_{\chi}/2}\log J_{0}\Big( I(M)^{-1/4}
\left\lvert \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} \right\rvert \Big)
= - \frac{1}{4} I(M)^{\frac12}
+ O(1).\end{gathered}$$
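The expansion $\log J_0(z) = -\frac{z^2}{4} + O(z^4)$ used above can be verified numerically from the power series $J_0(z)=\sum_{m\geq 0}(-1)^m (z/2)^{2m}/(m!)^2$; a small sketch (the truncation length and test points are arbitrary choices, valid for small $\lvert z\rvert$ only):

```python
import math

def J0(z, terms=30):
    """Bessel function of the first kind of order 0, via its power series."""
    return sum((-1) ** m * (z / 2) ** (2 * m) / math.factorial(m) ** 2
               for m in range(terms))

# The ratio |log J0(z) + z^2/4| / z^4 stays bounded as z -> 0; its limit is
# the z^4 coefficient of -log J0(z), namely 1/64.
ratios = [abs(math.log(J0(z)) + z ** 2 / 4) / z ** 4 for z in (0.5, 0.25, 0.125)]
```

One computes $\log J_0(z) = -\tfrac{z^2}{4} - \tfrac{z^4}{64} - \tfrac{z^6}{576} - \cdots$, so the ratios above approach $1/64$.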
Note that is enough to show, by Lévy’s Continuity Theorem, that $\nu_{M,k}^{\mathrm{norm}}$ converges weakly to the standard Gaussian distribution as $I(M) \rightarrow\infty$. Since limit and convolution are compatible, we deduce that if $B(M)\rightarrow b$ converges, then $\mu_{M,f_k}^{\mathrm{norm}}$ converges weakly to an equal mixture of two Gaussian distributions centered at $b(1 \pm \tfrac{\sqrt{q} -1}{\sqrt{q} +1})$.
The precise rate of convergence of the distribution function is obtained via the Berry–Esseen inequality [@Esseen Chap. II, Th. 2a]. Let $F$ and $G$ be the cumulative distribution functions of $\frac{1}{2}(\delta_{2b/(\sqrt{q} +1)} + \delta_{2b\sqrt{q}/(\sqrt{q} +1)})\ast \mathcal{N}$ and $\mu_{M,f_k}^{\mathrm{norm}}$, precisely: $$F(x) = \frac{1}{2\sqrt{2\pi}}\int_{-\infty}^{x}( e^{-\frac{1}{2}(t - b\frac{2}{\sqrt{q} +1})^2} + e^{-\frac{1}{2}(t - b\frac{2\sqrt{q}}{\sqrt{q} +1})^2} ){\mathop{}\!\mathrm{d}}t
\quad \text{ and } \quad G(x) = \int_{-\infty}^{x}{\mathop{}\!\mathrm{d}}\mu_{M,f_k}^{\mathrm{norm}}.$$ As observed in Remark \[Rk the measure is as smooth as you want\], when $\deg M$ is large enough, the function $G$ is differentiable. For any $T>0$, we have $$\label{Eq Berry Esseen}
\lvert G(x) - F(x) \rvert \ll \int_{-T}^{T}\left\lvert \frac{\varphi_{M,f_k}(\xi) - \exp(-ib\xi) \cos\left( \tfrac{\sqrt{q} - 1}{\sqrt{q} + 1} b\xi \right)e^{-\frac{1}{2}\xi^2}}{\xi} \right\rvert{\mathop{}\!\mathrm{d}}\xi
+ \frac{\lVert G' \rVert_{\infty}}{T}.$$ Let us first estimate the second term in the right-hand side of . We have for all $x\in \mathbf{R}$ $$\begin{aligned}
G'(x) = \frac{1}{2\pi}\int_{\mathbf{R}}e^{-ix\xi}\varphi_{M,f_k}(\xi){\mathop{}\!\mathrm{d}}\xi \ll \int_{\mathbf{R}} \prod_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}\prod_{j=1}^{d_{\chi}/2}\left\lvert J_{0}\Big( 2 I(M)^{-1/2}
\left\lvert \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} \right\rvert \xi \Big) \right\rvert{\mathop{}\!\mathrm{d}}\xi.
\end{aligned}$$ Using the bound $\lvert J_0(z) \rvert \ll \min(1,\lvert z\rvert^{-\frac12})$, and that $\lvert \mathcal{Z}(M) \rvert \geq 3$, we obtain that $$\label{Bound BerryEsseen derivative}
\lVert G' \rVert_{\infty} \ll \int_{\mathbf{R}} \min(1, I(M)^{3/4}\lvert\xi\rvert^{-3/2})
{\mathop{}\!\mathrm{d}}\xi \ll I(M)^{3/4}.$$ To bound the integral in , we cut the interval of integration in two ranges. First, by , the integral in the range $\lvert\xi\rvert \leq \frac{1}{2}I(M)^{1/4}$ is $$\begin{aligned}
\label{Ineq Berry--Esseen first range}
\int_{-\frac{1}{2}I(M)^{1/4}}^{\frac{1}{2}I(M)^{1/4}}
\frac{e^{-\frac{1}{2}\xi^2}} {2\lvert\xi\rvert} &
\left\lvert \sum_{\pm} e^{-ib(1 \pm \tfrac{\sqrt{q} - 1}{\sqrt{q} + 1})\xi} \left( 1 - \exp\left(-i(B(M) - b)(1 \pm \tfrac{\sqrt{q} - 1}{\sqrt{q} + 1})\xi + O(\xi^4 I(M)^{-1})\right)\right) \right\rvert
{\mathop{}\!\mathrm{d}}\xi \nonumber\\
\ll&
\int_{-\frac{1}{2}I(M)^{1/4}}^{\frac{1}{2}I(M)^{1/4}}
e^{-\frac{1}{2}\xi^2}\left( \lvert B(M) - b \rvert + \lvert \xi\rvert^3 I(M)^{-1} \right)
{\mathop{}\!\mathrm{d}}\xi \\
\ll& \lvert B(M) - b \rvert + I(M)^{-1}.\nonumber
\end{aligned}$$ In the range $\frac{1}{2}I(M)^{1/4} \leq \lvert\xi\rvert \leq T$, we use the bound from : $$\begin{aligned}
\label{Ineq Berry--Esseen second range}
\int_{\frac{1}{2}I(M)^{1/4} \leq \lvert\xi\rvert \leq T}&
\left\lvert \frac{\varphi_{M,f_k}(\xi) - \exp(-ib\xi) \cos\left( \tfrac{\sqrt{q} - 1}{\sqrt{q} + 1} b\xi \right)e^{-\frac{1}{2}\xi^2}}{\xi} \right\rvert{\mathop{}\!\mathrm{d}}\xi \\
\ll& \int_{\frac{1}{2}I(M)^{1/4} \leq \lvert\xi\rvert \leq T} e^{-\frac14 I(M)^{\frac12}} \frac{{\mathop{}\!\mathrm{d}}\xi}{\lvert \xi \rvert} +
\int_{\frac{1}{2}I(M)^{1/4} \leq \lvert\xi\rvert \leq T}
e^{-\frac{1}{2}\xi^2}\frac{{\mathop{}\!\mathrm{d}}\xi}{\lvert \xi\rvert }\nonumber \\
\ll& e^{-\frac14 I(M)^{\frac12}}\log T + e^{-\frac18 I(M)^{\frac12}}.\nonumber\end{aligned}$$ Now combining , and in for $T= I(M)^{\frac74}$, and the estimate gives the result, as $\deg M\rightarrow\infty$.
The proof of Theorem \[Th central limit for M under LI\] follows.
Let $\eta \in [\tfrac12,1]$. If $\eta = 1$, then by Proposition \[Prop Chebyshev many factors\], there exists a sequence of polynomials $M\in \mathbf{F}_q[t]$ with $\operatorname{dens}((\epsilon_f)^k\Delta_{f_k}(\cdot;M,\square,\boxtimes) > 0) \rightarrow \eta$ as $\deg M \rightarrow \infty$.
Now assume $\eta \in [\frac12,1)$. Since the function $b \mapsto \frac{1}{2\sqrt{2\pi}}\int_{0}^{\infty}( e^{-\frac{1}{2}(t - b\frac{2}{\sqrt{q} +1})^2} + e^{-\frac{1}{2}(t - b\frac{2\sqrt{q}}{\sqrt{q} +1})^2} ){\mathop{}\!\mathrm{d}}t $ is continuous, increasing, and takes values in $[\tfrac12,1)$ when $b\in [0,\infty)$, there exists a unique $b$ such that $\frac{1}{2\sqrt{2\pi}}\int_{0}^{\infty}( e^{-\frac{1}{2}(t - b\frac{2}{\sqrt{q} +1})^2} + e^{-\frac{1}{2}(t - b\frac{2\sqrt{q}}{\sqrt{q} +1})^2} ){\mathop{}\!\mathrm{d}}t = \eta$. Lemma \[Lem any possible value\] ensures the existence of a sequence of square-free monic polynomials $M$ with $$\begin{aligned}
B(M) = \begin{cases} b\left(1 + O\left(\frac{\log\deg M}{\deg M}\right)\right) \text{ if } b > 0,\\
O\left( 2^{-k} 2^{\omega(M)/2} (\deg M)^{-\frac12} \right) \text{ if } b=0,
\end{cases}\end{aligned}$$ as $\deg M\rightarrow\infty$. Those polynomials are only defined by their degree and number of divisors, according to the hypothesis of Theorem \[Th central limit for M under LI\], we can assume that each of them satisfies (LI). Then applying Proposition \[Prop M limit using Berry Esseen\] to this sequence, we get $$\lvert \mu_{M,f_k}^{\mathrm{norm}}([0,\infty)) - \eta\rvert \ll \begin{cases} (2^{-\omega(M)} + \log \deg M)(\deg M)^{-1} \text{ if } b >0, \\
2^{-\omega(M)}(\deg M)^{-1} + 2^{-k} 2^{\omega(M)/2} (\deg M)^{-\frac12} \text{ if } b=0.
\end{cases}$$ Since we assume (LI) for $M$, one has $$\mu_{M,f_k}^{\mathrm{norm}}([0,\infty)) = \operatorname{dens}((\epsilon_f)^k \Delta_{f_k}(\cdot;M,\square,\boxtimes) > 0),$$ which concludes the proof.
Character sums over polynomials of degree $n$ with $k$ irreducible factors {#Sec_proof_deg=n}
==========================================================================
For $k\geq 1$, $\chi$ a Dirichlet character modulo $M$, and $f = \Omega$ or $\omega$ we define $$\label{Eq defi pi(chi)}
\pi_{f_k}(n, \chi)=\sum_{\substack{N \text{ monic, } (N,M)=1\\ \deg(N) = n,~ f(N)=k }} \chi(N).$$ In this section, we prove the following result about the asymptotic expansion of $\pi_{f_k}(n,\chi)$ by induction over the number of irreducible factors $k$.
\[Prop\_k\_general\] Let $M \in \mathbf{F}_{q}[t]$ be of degree $d \geq 1$. Let $k$ be a positive integer. Let $\chi$ be a non-trivial Dirichlet character modulo $M$, and $$\gamma(\chi) = \min\limits_{1\leq i\neq j\leq d_{\chi}}{\left(}\lbrace \lvert \gamma_i(\chi) - \gamma_j(\chi) \rvert, \lvert \gamma_{i}(\chi) \rvert, \lvert \pi -\gamma_{i}(\chi) \rvert \rbrace{\right)}.$$ With notations as in , for $f = \Omega$ or $\omega$, under the conditions $k =o((\log n)^{\frac{1}{2}})$, and $q\geq 5$ if $f= \omega$, one has $$\begin{aligned}
\pi_{f_k}(n,\chi) = \frac{(-1)^k}{(k-1)!} & \Bigg\{
{\left(}{\left(}m_+(\chi) -\epsilon_f\frac{\delta(\chi^2)}{2}{\right)}^k+(-1)^n {\left(}m_-(\chi) -\epsilon_f\frac{\delta(\chi^2)}{2}{\right)}^k {\right)}\frac{q^{n/2}(\log n)^{k-1}}{n} \\
+ &\sum_{\alpha_j\neq\pm\sqrt{q}} m_j(\chi)^k \frac{\alpha_j^n(\chi) (\log n)^{k-1}}{n} +O \left( d^k \frac{k(k-1)}{\gamma(\chi)} \frac{q^{n/2}(\log n)^{k-2}}{n} + d\frac{q^{n/3}}{n} \right) \Bigg\},\end{aligned}$$ where the implicit constant is absolute, $\delta(\chi^2) = 1$ if $\chi^2 = \chi_0$ and $0$ otherwise, $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$.
Case $k=1$
----------
We start by recalling the usual setting of the race between irreducible polynomials, which is the base case of our induction. In this situation we obtain a better error term.
\[Prop\_k=1\] Let $\chi$ be a non-trivial Dirichlet character modulo $M$. Its Dirichlet $L$-function $\mathcal{L}(u,\chi)$ is a polynomial, let $\alpha_{1}(\chi), \ldots, \alpha_{d_{\chi}}(\chi)$ denote the distinct non-real inverse zeros of norm $\sqrt{q}$ of $\mathcal{L}(u,\chi)$, and $m_{1},\ldots,m_{d_{\chi}} \in \mathbf{Z}_{>0}$ be their multiplicities. For $f=\Omega$ or $\omega$, one has $$\begin{aligned}
\pi_{f_1}(n,\chi)&
=-\sum_{j=1}^{d_{\chi}} m_{j}(\chi)\frac{\alpha_{j}(\chi)^n}{n} - {\left(}-\epsilon_f \delta\left(\frac{n}{2},\chi^{2}\right) +m_{+}(\chi) +(-1)^{n}m_{-}(\chi){\right)}\frac{q^{n/2}}{n} + O(\frac{dq^{n/3}}{n}),
\end{aligned}$$ where $\epsilon_{\Omega} = -1$, $\epsilon_{\omega} =1$, and $$\delta\left(\frac{n}{2},\chi^{2}\right) =\begin{cases}
1, & \text{if} ~n~\text{is even and} ~\chi^{2} = \chi_{0};\\
0, & \text{otherwise.}
\end{cases}$$
We write the Dirichlet $L$-function in two different ways. First it is defined as an Euler product: $$\begin{aligned}
\mathcal{L}(u,\chi) = \prod_{n=1}^{\infty} \prod_{\substack{P \text{ irred.}\\ \deg(P) = n \\ P\nmid M}} (1- \chi(P)u^n)^{-1}.
\end{aligned}$$ As $\chi \neq \chi_{0}$, the function $\mathcal{L}(u,\chi)$ is a polynomial in $u$, using the notations of , $$\begin{aligned}
\mathcal{L}(u,\chi) = (1-\sqrt{q}u)^{m_+}(1+\sqrt{q}u)^{m_-}\prod_{j=1}^{d_{\chi}} (1- \alpha_{j}(\chi)u)^{m_{j}} \prod_{j'=1}^{d'_{\chi}} (1- \beta_{j'}(\chi)u),
\end{aligned}$$ where $\lvert \beta_{j}(\chi)\rvert =1$. By comparing the coefficients of degree $n$ in the two expressions of the logarithm we obtain $$\begin{aligned}
\sum_{\ell\mid n} \frac{\ell}{n}\sum_{\substack{P \text{ irred.}\\ \deg(P) = \ell \\ P\nmid M}} \chi(P)^{n/\ell} &= -\frac{q^{n/2}}{n}(m_+ +(-1)^{n}m_-) -\sum_{j=1}^{d_\chi} m_{j}\frac{\alpha_{j}(\chi)^n}{n} - \sum_{j'=1}^{d'_\chi}\frac{\beta_{j'}(\chi)^n}{n}.
\end{aligned}$$
Thus $$\begin{aligned}
\pi_{\Omega_1}(n,\chi) &= -\frac{q^{n/2}}{n}(m_+ +(-1)^{n}m_-) -\sum_{j=1}^{d_\chi} m_{j}\frac{\alpha_{j}(\chi)^n}{n} + O\left(\frac{d'_{\chi}}{n}\right) - \sum_{\substack{\ell\mid n \\ \ell\neq n}} \frac{\ell}{n}\sum_{\substack{P \text{ irred.}\\ \deg(P) = \ell \\ P\nmid M}} \chi(P)^{n/\ell} \\
&=-\frac{q^{n/2}}{n}(m_+ +(-1)^{n}m_-) -\sum_{j=1}^{d_\chi} m_{j}\frac{\alpha_{j}(\chi)^n}{n} - \frac{1}{2}\pi_{\Omega_1}\left(\frac{n}{2},\chi^{2}\right) + O\left(\frac{d + q^{n/3}}{n}\right),
\end{aligned}$$ and $$\begin{aligned}
\pi_{\omega_1}(n,\chi) &= -\frac{q^{n/2}}{n}(m_+ +(-1)^{n}m_-) -\sum_{j=1}^{d_\chi} m_{j}\frac{\alpha_{j}(\chi)^n}{n} + O\left(\frac{d'_{\chi}}{n}\right) + \sum_{\substack{\ell\mid n \\ \ell\neq n}} (1 -\frac{\ell}{n})\sum_{\substack{P \text{ irred.}\\ \deg(P) = \ell \\ P\nmid M}} \chi(P)^{n/\ell} \\
&=-\frac{q^{n/2}}{n}(m_+ +(-1)^{n}m_-) -\sum_{j=1}^{d_\chi} m_{j}\frac{\alpha_{j}(\chi)^n}{n} + \frac{1}{2}\pi_{\Omega_1}\left(\frac{n}{2},\chi^{2}\right) + O\left(\frac{d+q^{n/3}}{n}\right),
\end{aligned}$$
where $\pi_{\Omega_1}\left(\frac{n}{2},\chi^{2}\right) =0$ if $n$ is odd, and it can be included in the error term if $\chi^{2} \neq \chi_{0}$. If $n$ is even and $\chi^{2} = \chi_{0}$, one has ([@Rosen2002 Th. 2.2]) $$\pi_{\Omega_1}\left(\frac{n}{2},\chi^{2}\right) = 2\frac{q^{n/2}}{n} + O(q^{n/4}).$$ This concludes the proof.
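The count of irreducibles invoked above ([@Rosen2002 Th. 2.2]) can be checked by brute force for small cases; a sketch over $\mathbf{F}_2$, encoding a polynomial as a bitmask (bit $i$ holding the coefficient of $t^i$; the sieve below is an illustration, not the method of the paper):

```python
def poly_mult_f2(a, b):
    """Carry-less multiplication: product of two F_2[t] polynomials as bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def count_irreducible_f2(n):
    """Count monic irreducible polynomials of degree n over F_2 by sieving products."""
    reducible = set()
    for a in range(2, 1 << n):            # factors of degree >= 1
        for b in range(a, 1 << n):
            p = poly_mult_f2(a, b)
            if p < (1 << (n + 1)):        # keep products of degree <= n
                reducible.add(p)
    # Masks in [2^n, 2^(n+1)) are exactly the monic polynomials of degree n.
    return sum(1 for p in range(1 << n, 1 << (n + 1)) if p not in reducible)
```

The counts match $\frac{1}{n}\sum_{d \mid n}\mu(d) q^{n/d}$ for $q=2$: there are $1, 2, 3, 6, 9$ irreducibles of degrees $2, 3, 4, 5, 6$, each within $O(q^{n/2})$ of $q^{n}/n$.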
Newton’s formula
----------------
To prove the general case of Theorem \[Prop\_k\_general\], we use a combinatorial argument.
Let $x_1$, $x_2$, $\ldots$ be an infinite collection of indeterminates. If a formal power series $P(x_1, x_2, \ldots)$ with bounded degree is invariant under all finite permutations of the variables $x_1$, $x_2$, $\ldots$, we call it a *symmetric function*. We define the $n$-th *homogeneous symmetric function* $h_n=h_n(x_1, x_2, \ldots)$ by the following generating function $$\sum_{n=0}^{\infty} h_n z^n=\prod_{i=1}^{\infty}\frac{1}{1-x_i z}.$$ Thus, $h_n$ is the sum of all possible monomials of degree $n$. The $n$-th *elementary symmetric function* $e_n=e_n(x_1, x_2, \ldots)$ is defined by $$\sum_{n=0}^{\infty}e_n z^n=\prod_{i=1}^{\infty}(1+x_i z).$$ Precisely, $e_n$ is the sum of all square-free monomials of degree $n$. Finally the $n$-th *power symmetric function* $p_n=p_n(x_1, x_2, \ldots)$ is defined to be $$p_n=x_1^n+x_2^n+\cdots.$$
The following result is due to Newton or Girard (see [@Mac Chap. 1, (2.11)], or [@Me-Re Th. 2.8]).
\[lem-Newton\] For any integer $k\geq 1$, we have $$kh_k=\sum_{\ell=1}^k h_{k-\ell}p_{\ell},
\qquad
ke_k=\sum_{\ell=1}^k (-1)^{\ell-1} e_{k-\ell} p_\ell.$$
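Both identities are easy to confirm numerically by evaluating the symmetric functions at a finite list of values; a small sketch (the sample values and the choice $k=4$ are arbitrary; note the alternating sign $(-1)^{\ell-1}$ in the elementary-symmetric identity):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

xs = [2, 3, 5, 7, 11]        # sample values for x_1, ..., x_5

def e(k):
    """Elementary symmetric function e_k evaluated at xs."""
    return sum(prod(c) for c in combinations(xs, k)) if k else 1

def h(k):
    """Complete homogeneous symmetric function h_k evaluated at xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k)) if k else 1

def p(k):
    """Power sum p_k evaluated at xs."""
    return sum(x ** k for x in xs)

k = 4
newton_h = (k * h(k), sum(h(k - l) * p(l) for l in range(1, k + 1)))
newton_e = (k * e(k), sum((-1) ** (l - 1) * e(k - l) * p(l) for l in range(1, k + 1)))
```

Each pair collects the two sides of the corresponding identity, which agree for every $k$.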
Products of $k$ irreducible polynomials — Induction step
--------------------------------------------------------
We will prove Theorem \[Prop\_k\_general\] by induction on $k$. First we use the combinatorial arguments from Lemma \[lem-Newton\] to obtain a relation between $\pi_{f_k}$ and $\pi_{f_{k-1}}$; the two relations are obtained by different calculations according to whether $f= \Omega$ or $\omega$.
\[lem recurence big Omega\] Let $M\in \mathbf{F}_q[t]$ be of degree $d\geq1$, and let $\chi$ be a non-trivial Dirichlet character modulo $M$. For any integer $k \geq 2$, assume that for all $1\leq \ell\leq k-1$ there exists $A_{\Omega,\ell}>0$ such that one has $\lvert\pi_{\Omega_{\ell}}(n,\chi)\rvert \leq A_{\Omega,\ell} \frac{d^{\ell}}{(\ell -1)!} \frac{q^{n/2}}{n}(\log n)^{\ell -1}$ for all $n\geq 1$. Then one has $$\begin{aligned}
\pi_{\Omega_k}(n, \chi)=\frac{1}{k}\sum_{n_{1} + n_{2} = n}&\pi_{\Omega_{k-1}}(n_{1},\chi)\pi_{\Omega_1}(n_{2},\chi)+O_k \left(\frac{q^{n/2}(\log n)^{k-2}}{n} \right),\end{aligned}$$ where the implicit constant depends on $k$ and is bounded by $$\frac{d^{k}}{k!}\sum_{\ell=2}^{k} ( 2 + \frac{\ell}{\log n})A_{\Omega,k-\ell}\frac{d^{-\ell} q^{-\ell/2+1}(\log n)^{2-\ell}(k-1)!}{(k-\ell -1)!}$$ for all $n$.
We study the function $$F_{\Omega_k}(u,\chi) = \sum_{n=1}^{\infty} \sum_{\substack{N \text{ monic} \\ \deg(N) = n \\ \Omega(N)=k}} \chi(N) u^{n} = \sum_{n=1}^{\infty}\pi_{\Omega_k}(n,\chi) u^{n}.$$ Adapting the idea of [@Meng2017], we choose $x_P=\chi(P)u^{\deg P}$ for each irreducible polynomial $P$. Using Lemma \[lem-Newton\], we obtain $$\label{general-k-series}
F_{\Omega_k}(u,\chi) = \frac{1}{k}\sum_{\ell=1}^{k}F_{\Omega_{k-\ell}}(u,\chi)F_{\Omega_1}(u^{\ell},\chi^{\ell}),$$ where we use the convention $F_{\Omega_0}(u,\chi) = 1$. Comparing the coefficients of degree $n$, we see that the first term gives the main term and the other terms contribute to the error term. For $2\leq \ell\leq k-1$, using the trivial bound for $\pi_{\Omega_1}$ together with the hypothesis, the coefficient of degree $n$ of $F_{\Omega_{k-\ell}}(u,\chi)F_{\Omega_1}(u^{\ell},\chi^{\ell})$ is bounded by: $$\begin{aligned}
\label{general-k-error}
\sum_{n_{1} + \ell n_{2} = n}&\pi_{\Omega_{k-\ell}}(n_{1},\chi)\pi_{\Omega_1}\left(n_{2},\chi^{\ell} \right) \\
&\leq A_{\Omega,k-\ell} \frac{d^{k-\ell}}{(k-\ell-1)!}\sum_{n_{1} + \ell n_{2} = n}\frac{q^{n_{1}/2} q^{n_{2}} }{n_{1} n_{2}} (\log n_1)^{k-\ell-1}\nonumber\\
&\leq A_{\Omega,k-\ell} \frac{d^{k-\ell}}{(k-\ell -1)!} q^{n/2-\ell/2+1}(\log n)^{k-\ell -1} \sum_{n_1+\ell n_2=n}\frac{1}{n_1 n_2} \nonumber\\
&\leq A_{\Omega,k-\ell}\frac{d^{k-\ell} q^{n/2-\ell/2+1}(\log n)^{k-\ell -1}}{(k-\ell -1)! n}(2 \log n + \ell). \nonumber \end{aligned}$$ The coefficient of degree $n$ of $F_{\Omega_{0}}(u,\chi)F_{\Omega_1}(u^{k},\chi^{k})$ is non-zero only when $k\mid n$, and it is bounded by $\lvert \pi_{\Omega_1}(\tfrac{n}{k},\chi^{k})\rvert \ll \frac{k q^{\frac{n}{k}}}{n} \leq 2A_{\Omega,0} \frac{q^{n/2 - k/2 + 1}}{n}$, for a good choice of $A_{\Omega,0} >0$. Then, by \[general-k-series\] and \[general-k-error\], summing over $2\leq \ell\leq k$ we obtain Lemma \[lem recurence big Omega\].
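With the trivial character, the identity \[general-k-series\] specializes to the integer recurrence $k\,\pi_{\Omega_k}(n)=\sum_{\ell=1}^{k}\sum_{n_1+\ell n_2=n}\pi_{\Omega_{k-\ell}}(n_1)\,\pi_{\Omega_1}(n_2)$, which can be checked by brute force in small degrees. The following Python sketch (polynomials over $\mathbf{F}_2$ encoded as bitmasks; the helper names are ours) does so for all monic polynomials of degree at most $8$:

```python
def divmod2(a, b):
    """Quotient and remainder in F_2[t]; polynomials encoded as bitmasks."""
    q, db = 0, b.bit_length() - 1
    while a.bit_length() - 1 >= db:
        sh = a.bit_length() - 1 - db
        q ^= 1 << sh
        a ^= b << sh
    return q, a

def big_omega(p):
    """Number of irreducible factors of p, counted with multiplicity."""
    cnt, d = 0, 2
    while p != 1:
        quo, r = divmod2(p, d)
        if r == 0:           # smallest divisor found is automatically irreducible
            p, cnt = quo, cnt + 1
        else:
            d += 1
    return cnt

N = 8
# c[n][k] = number of monic polynomials of degree n over F_2 with Omega = k
c = [[0] * (N + 1) for _ in range(N + 1)]
c[0][0] = 1
for n in range(1, N + 1):
    for p in range(1 << n, 1 << (n + 1)):
        c[n][big_omega(p)] += 1

# k * pi_{Omega_k}(n) = sum_{l=1}^{k} sum_{n1 + l*n2 = n} pi_{Omega_{k-l}}(n1) * pi_{Omega_1}(n2)
for n in range(1, N + 1):
    for k in range(1, N + 1):
        rhs = sum(c[n - l * n2][k - l] * c[n2][1]
                  for l in range(1, k + 1)
                  for n2 in range(1, n // l + 1))
        assert k * c[n][k] == rhs
```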
\[lem recurence small omega\] Let $M\in \mathbf{F}_q[t]$ be of degree $d\geq1$, and let $\chi$ be a non-trivial Dirichlet character modulo $M$. For any integer $k \geq 2$, assume that for all $2\leq \ell\leq k-1$ there exists $A_{\omega, \ell}>0$ such that one has $\lvert\pi_{\omega_{\ell}}(n,\chi)\rvert \leq A_{\omega,\ell} \frac{d^{\ell}}{(\ell -1)!} \frac{q^{n/2}}{n}(\log n)^{\ell -1}$ for all $n\geq 1$. Then one has $$\begin{aligned}
\pi_{\omega_k}(n,\chi) = \frac{1}{k} \sum_{n_1 + n_2 = n}\pi_{\omega_{k-1}}(n_1,\chi)\pi_{\omega_1}(n_2,\chi)
+ O_{k}\left(\frac{q^{n/2}(\log n)^{k-2}}{n} \right),\end{aligned}$$ where the implicit constant depends on $k$ and is bounded by $$2\frac{d^{k}}{k!} \sum_{\ell = 2}^{k}\sum_{j = \ell}^{n-1}A_{\omega,k-\ell}j\binom{j-1}{\ell-1}\frac{q^{1-\frac{j}{2}}d^{1-\ell}(\log n)^{2-\ell}(k-1)!}{(k-\ell -1)!}$$ for all $n$.
We study the function $$F_{\omega_k}(u,\chi) = \sum_{n=1}^{\infty} \sum_{\substack{N \text{ monic} \\ \deg(N) = n \\ \omega(N)=k}} \chi(N) u^{n} = \sum_{n=1}^{\infty}\pi_{\omega_k}(n,\chi) u^{n}.$$ Adapting the idea of [@Meng2017] for $x_{P} = \sum_{j\geq 1}\chi(P)^j u^{j\deg P}$, and using Lemma \[lem-Newton\], we obtain $$\label{littleomega-general-k-series}
F_{\omega_k}(u,\chi) = \frac{1}{k}\sum_{\ell=1}^{k}(-1)^{\ell +1}F_{\omega_{k-\ell}}(u,\chi)\tilde{F}(u,\chi;\ell),$$ where $$\begin{aligned}
\tilde{F}(u,\chi;\ell) = \sum\limits_{P\text{ irred.}}{\left(}\sum\limits_{j\geq 1} \chi(P)^{j}u^{j\deg P} {\right)}^{\ell}
= \sum\limits_{P\text{ irred.}}\sum\limits_{j\geq \ell}\binom{j-1}{\ell -1} \chi(P)^{j}u^{j\deg P}.\end{aligned}$$ Note that $\tilde{F}(u,\chi;1) = F_{\omega_1}(u,\chi)$, and we use the convention $F_{\omega_0}(u,\chi) = 1$. Then, comparing the coefficients of $u^n$ in \[littleomega-general-k-series\], we show that the terms for $\ell \geq 2$ all contribute to the error term. For $2\leq \ell \leq k-1$, the coefficient of degree $n$ of $F_{\omega_{k-\ell}}(u,\chi)\tilde{F}(u,\chi;\ell)$ is bounded, by hypothesis, by: $$\begin{aligned}
\sum_{j\geq \ell}\binom{j-1}{\ell -1}\sum_{n_{1} + j n_{2} = n}&\pi_{\omega_{k-\ell}}(n_{1},\chi)\pi_{\Omega_1}\left(n_{2},\chi^{j} \right) \\
&\leq A_{\omega,k-\ell}\sum_{j = \ell}^{n-1}\binom{j-1}{\ell -1} \frac{d^{k-\ell +1}}{(k-\ell-1)!}\sum_{n_{1} + j n_{2} = n}\frac{q^{n_{1}/2} q^{n_{2}} }{n_{1} n_{2}} (\log n_1)^{k-\ell-1}\nonumber\\
&\leq A_{\omega,k-\ell} \frac{d^{k-\ell+1}(\log n)^{k-\ell-1} q^{n/2}}{(k-\ell-1)!}\sum_{j = \ell}^{n-1}\binom{j-1}{\ell -1}\sum_{n_1+ j n_2=n} \frac{q^{-(j/2-1)n_2}}{n_1 n_2} \nonumber\\
&\leq 2 A_{\omega,k-\ell}\frac{d^{k}}{(k-\ell -1)!} \frac{q^{n/2}(\log n)^{k-\ell}}{n} \sum_{j = \ell}^{n-1} j\binom{j-1}{\ell -1} q^{1-j/2} d^{1-\ell}.\nonumber\end{aligned}$$ The coefficient of degree $n$ of $F_{\omega_{0}}(u,\chi)\tilde{F}(u,\chi;k)$ is bounded by $$\sum_{\substack{k\leq j \leq n-1 \\ j\mid n }}\binom{j-1}{k-1}\frac{j q^{\frac{n}{j}}}{n} \leq 2A_{\omega,0} \frac{q^{n/2}}{n} \sum_{j = k}^{n-1} j\binom{j-1}{k -1} q^{1-j/2},$$ for a good choice of $A_{\omega,0}>0$. Summing over $2\leq \ell\leq k$, we obtain Lemma \[lem recurence small omega\].
To avoid confusion with the complete sum over all zeros, in what follows we use $\sum'$ to denote the sum over the non-real zeros of the $L$-function. Throughout this section, all multiplicities and zeros are understood to depend on $\chi$.
For $f = \Omega$ or $\omega$, $\ell \geq 1$ and $\chi \bmod M$ (with the convention $0!=1$), we denote $$\begin{aligned}
Z_{\ell}(n,\chi)& = \frac{(-1)^{\ell}}{(\ell-1)!} \sideset{}{'}\sum_{1\leq j\leq d_{\chi}} m_j^{\ell} \frac{\alpha_j^n(\chi) (\log n)^{\ell-1}}{n},\\
B_{f_\ell}(n,\chi) &= \frac{(-1)^{\ell}}{(\ell-1)!} {\left(}{\left(}m_+ -\epsilon_f\frac{\delta(\chi^2)}{2}{\right)}^{\ell} +(-1)^n {\left(}m_- -\epsilon_f \frac{\delta(\chi^2)}{2}{\right)}^\ell{\right)}\frac{q^{n/2}(\log n)^{\ell-1}}{n},\end{aligned}$$ where $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$. With these notations, we rewrite the formulas in Theorem \[Prop\_k\_general\] and Proposition \[Prop\_k=1\] in the following form: there exist positive constants $C_{f,\ell}$ such that $$\begin{aligned}
\label{formula Th deg=n}
E_{f_\ell}(n,\chi) := \lvert\pi_{f_\ell}(n,\chi) - Z_{\ell}(n,\chi) - B_{f_\ell}(n,\chi)\rvert \leq \begin{cases} C_{f,\ell}\frac{d^{\ell}}{(\ell-1)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2} (\log n)^{\ell-2}}{n}& \text{ for } \ell \geq 2\\
C_{f,1}d \frac{q^{n/3}}{n} & \text{ for } \ell =1,
\end{cases}\end{aligned}$$ where, for $2\leq \ell = o((\log n)^{\frac12})$ (and assuming $q>3$ when $f = \omega$), we need to show that $C_{f,\ell} \leq C\ell(\ell -1)$ with $C$ an absolute constant. By Lemma \[lem recurence big Omega\] (resp. \[lem recurence small omega\]), it suffices to study the coefficient of $u^n$ in $F_{f_{k-1}}(u,\chi)F_{f_1}(u,\chi)$, that is: $$\begin{aligned}
\label{general-k-main}
\sum_{n_{1} + n_{2} = n}&\pi_{f_{k-1}}(n_{1},\chi)\pi_{f_1}(n_{2},\chi)\nonumber \\
=& \sum_{n_{1} + n_{2} = n} \big\{ Z_{k-1}(n_{1},\chi)+B_{f_{k-1}}(n_{1},\chi)+E_{f_{k-1}}(n_{1},\chi) \big\}\big\{Z_1(n_{2},\chi)+B_{f_1}(n_{2},\chi)+E_{f_1}(n_{2},\chi)\big\}\nonumber\\
=&\sum_{n_{1} + n_{2} = n}Z_{k-1}(n_1, \chi)Z_1(n_2, \chi)+\sum_{n_{1} + n_{2} = n}B_{f_{k-1}}(n_1, \chi)B_{f_1}(n_2, \chi)\nonumber\\
&\quad + \sum_{n_{1} + n_{2} = n}\big\{ Z_{k-1}(n_1, \chi)B_{f_1}(n_2, \chi)+B_{f_{k-1}}(n_1, \chi)Z_1(n_2, \chi)\big\}\nonumber \\
&\quad +\sum_{n_{1} + n_{2} = n} \big\{ {\left(}Z_{k-1}(n_{1},\chi)+B_{f_{k-1}}(n_{1},\chi){\right)}E_{f_1}(n_2, \chi) \big\}
+\sum_{n_{1} + n_{2} = n} E_{f_{k-1}}(n_{1},\chi) \pi_{f_1}(n_2,\chi).\end{aligned}$$ We will now study each of these sums separately.
Bounds for certain exponential sums
-----------------------------------
We first give a bound for certain exponential sums that appear several times in the proof of Lemmas \[Lem\_general\_k\_Zeros\]–\[Lem-mixed-term\]. The following result follows from partial summation.
\[Lem\_Abel\] Let $f$ be a differentiable function on $[1,+\infty)$ such that $f'(x) \in L^{1}[1,\infty)$. Then for every $\theta \in (-\frac{\pi}{2}, \frac{\pi}{2}]$, $\theta \neq 0$, one has $$\begin{aligned}
\sum_{n=1}^{N} e^{i\theta n}f(n) = O\left(\frac{ \lVert f' \rVert_{L^{1}} + \lVert f \rVert_{\infty}}{\lvert\theta\rvert}\right)\end{aligned}$$ as $N \rightarrow +\infty$, with an absolute implicit constant.
As $e^{i\theta} \neq 1$, one has $$\begin{aligned}
H(x) := \sum_{n\leq x} e^{i\theta n} = \frac{e^{i\theta}- e^{i\theta ([x]+1)}}{1-e^{i\theta}} = O{\left(}\frac{1}{\lvert\theta\rvert}{\right)}.\end{aligned}$$ So applying Abel’s identity, one has $$\begin{aligned}
\sum_{n=1}^{N} e^{i\theta n}f(n) = H(N)f(N) + \int_{1}^{N} H(t)f'(t) {\mathop{}\!\mathrm{d}}t
= O\left(\frac{\lvert f(N)\rvert}{\lvert\theta\rvert}\right) + O\left(\frac{1}{\lvert\theta\rvert}\int_{1}^{N} \lvert f'(t) \rvert {\mathop{}\!\mathrm{d}}t\right).\end{aligned}$$
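The geometric-sum bound $H(x) = O(1/\lvert\theta\rvert)$ is easy to illustrate numerically: for $\theta \in (0,\pi]$ one has $\lvert 1 - e^{i\theta}\rvert = 2\sin(\theta/2) \geq 2\theta/\pi$, hence $\lvert H(x)\rvert \leq \pi/\theta$. The following Python sketch (the range of $N$ and the values of $\theta$ are arbitrary choices of ours) checks this explicit constant against running partial sums:

```python
import cmath

# Running maximum of |H(N)| = |sum_{n<=N} e^{i theta n}|, compared with pi/theta.
for theta in (0.05, 0.3, 1.0, 3.0):
    s, worst = 0.0, 0.0
    for n in range(1, 1501):
        s += cmath.exp(1j * theta * n)
        worst = max(worst, abs(s))
    assert worst <= cmath.pi / theta + 1e-9
```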
Sum over non-real zeros
-----------------------
\[Lem\_general\_k\_Zeros\] For any $k \geq 2$, one has $$\begin{gathered}
\sum_{n_{1} + n_{2} = n}Z_{k-1}(n_{1},\chi)Z_{1}(n_{2},\chi) \\
= \frac{(-1)^{k}k}{(k-1)!} \left\{ \sideset{}{'}\sum_{j=1}^{d_{\chi}} m_{j}^{k} \frac{\alpha_{j}(\chi)^{n}(\log n)^{k-1}}{n} + O\left( d^k \left( k + \frac{1}{\gamma(\chi)}\right)\frac{q^{n/2}(\log n)^{k-2}}{n}\right) \right\}, \end{gathered}$$ where the implicit constant is absolute.
We separate the sum into a diagonal term and an off-diagonal term: $$\begin{aligned}
\sideset{}{'}\sum_{j_{1}=1}^{d_{\chi}} \sideset{}{'}\sum_{j_{2}=1}^{d_{\chi}} \sum_{n_{1}+ n_{2} = n}\frac{(-1)^k}{(k-2)!}m_{j_{1}}^{k-1}m_{j_{2}}\frac{\alpha_{j_{1}}(\chi)^{n_{1}}\alpha_{j_{2}}(\chi)^{n_{2}}(\log n_{1})^{k-2}}{n_{1}n_{2}} = \Sigma_{1} + \Sigma_{2},\end{aligned}$$ where $$\begin{aligned}
\Sigma_{1} =\frac{(-1)^k}{(k-2)!}\sideset{}{'}\sum_{j=1}^{d_{\chi}}\sum_{n_{1}+ n_{2} = n} m_{j}^{k}\frac{\alpha_{j}(\chi)^{n_{1} + n_{2}}(\log n_{1})^{k-2}}{n_{1}n_{2}}, \end{aligned}$$ and $$\begin{aligned}
\Sigma_{2} = \frac{(-1)^k}{(k-2)!}\sideset{}{'}\sum_{j_{1}\neq j_2} \sum_{n_{1}+ n_{2} = n}m_{j_{1}}^{k-1}m_{j_{2}}\frac{\alpha_{j_{1}}(\chi)^{n_{1}}\alpha_{j_{2}}(\chi)^{n_{2}}(\log n_{1})^{k-2}}{n_{1}n_{2}}. \end{aligned}$$
The diagonal term gives the main term: for $1\leq j \leq d_{\chi}$ one has $$\begin{aligned}
\label{main-zero}
\sum_{n_{1}+ n_{2} = n} m_{j}^{k}\frac{\alpha_{j}(\chi)^{n_{1} + n_{2}}(\log n_{1})^{k-2}}{n_{1}n_{2}}= m_{j}^{k}\frac{\alpha_{j}(\chi)^{n}}{n} \sum_{n_{1} + n_{2} = n}\left(\frac{(\log n_{1})^{k-2}}{n_{1}} + \frac{(\log n_{1})^{k-2}}{n_{2}} \right).\end{aligned}$$ By partial summation, we have $$\label{sum1-1}
\sum_{n_1+n_2=n} \frac{(\log n_{1})^{k-2}}{n_{1}}=\sum_{n_1=1}^{n-1} \frac{(\log n_{1})^{k-2}}{n_{1}}=\frac{1}{k-1} \left( (\log n)^{k-1}+O\left( k(\log n)^{k-2}\right) \right).$$ For the second sum in \[main-zero\], we have $$\begin{aligned}
\label{sum1-2-0}
\sum_{n_1+n_2=n}\frac{(\log n_{1})^{k-2}}{n_{2}}&=\sum_{1\leq n_2\leq n/2} \frac{(\log(n-n_2))^{k-2}}{n_{2}}+\sum_{n/2<n_2<n}\frac{(\log(n-n_2))^{k-2}}{n_{2}} \nonumber \\
&=\sum_{1\leq n_2\leq n/2} \frac{(\log n+\log (1-n_2/n))^{k-2}}{n_{2}}+O{\left(}\frac{n}{2}\cdot \frac{(\log n)^{k-2}}{n} {\right)}.\end{aligned}$$ For $1\leq n_2\leq n/2$ one has $\lvert\log(1-n_2/n)\rvert<1$, and thus $$\begin{aligned}
\label{sum1-2}
\sum_{n_1+n_2=n}\frac{(\log n_{1})^{k-2}}{n_{2}}&=\sum_{1\leq n_2\leq n/2}{\left(}\frac{(\log n)^{k-2} }{n_2}+ \frac{O(k(\log n)^{k-3})}{n_2}{\right)}+O{\left(}(\log n)^{k-2}{\right)}\nonumber\\
&=(\log n)^{k-1}+O{\left(}k(\log n)^{k-2}{\right)}.\end{aligned}$$ Inserting \[sum1-1\] and \[sum1-2\] into \[main-zero\], we get $$\label{sum1-3}
\sum_{n_{1}+ n_{2} = n} m_{j}^{k}\frac{\alpha_{j}(\chi)^{n_{1} + n_{2}}(\log n_{1})^{k-2}}{n_{1}n_{2}}=\frac{k m_j^k}{k-1}\left( \frac{\alpha_j(\chi)^{n}(\log n)^{k-1}}{n}+O{\left(}k\frac{q^{n/2}(\log n)^{k-2}}{n}{\right)}\right).$$ Thus, $$\begin{aligned}
\Sigma_1=\frac{(-1)^k k}{(k-1)!}\left\{{\sideset{}{'}}\sum_{j=1}^{d_{\chi}} m_j^k \frac{\alpha_j(\chi)^{n}(\log n)^{k-1}}{n}+O{\left(}d^k k\frac{q^{n/2}(\log n)^{k-2}}{n}{\right)}\right\}.\end{aligned}$$
For $\alpha_{j_{1}} \neq \alpha_{j_{2}}$, one has $$\begin{aligned}
\label{sum-2-1}
\sum_{n_{1}+ n_{2} = n}\frac{\alpha_{j_{1}}(\chi)^{n_{1}}\alpha_{j_{2}}(\chi)^{n_{2}}(\log n_{1})^{k-2}}{n_{1}n_{2}}
= &\frac{\alpha_{j_{2}}(\chi)^{n}}{n}\sum_{n_{1} =1}^{n-1}\frac{\left(\alpha_{j_{1}}(\chi)/\alpha_{j_{2}}(\chi)\right)^{n_{1}}(\log n_{1})^{k-2}}{n_{1}}\nonumber\\& +
\frac{\alpha_{j_{1}}(\chi)^{n}}{n}\sum_{n_{2} =1}^{n-1}\frac{\left(\alpha_{j_{2}}(\chi)/\alpha_{j_{1}}(\chi)\right)^{n_{2}}(\log (n-n_{2}))^{k-2}}{n_{2}},\end{aligned}$$ where $\lvert \alpha_{j_{1}}(\chi)/\alpha_{j_{2}}(\chi) \rvert =1$, and $\alpha_{j_{1}}(\chi)/\alpha_{j_{2}}(\chi) \neq 1$. We apply Lemma \[Lem\_Abel\] with $f(x) = \frac{(\log x)^{k-2}}{x}$ to the first sum to deduce that this sum is $O\left( (\log n)^{k-2} \lvert \gamma_{j_1} - \gamma_{j_2} \rvert^{-1} \right)$. The second term can be separated at $\frac{n}{2}$ as in \[sum1-2-0\]; it yields $$\sum_{1\leq n_2\leq n/2}{\left(}\frac{\left(\alpha_{j_{2}}(\chi)/\alpha_{j_{1}}(\chi)\right)^{n_{2}} (\log n)^{k-2} }{n_2}+ \frac{O(k(\log n)^{k-3})}{n_2}{\right)}+O{\left(}(\log n)^{k-2}{\right)}.$$ Then we apply Lemma \[Lem\_Abel\] with $f(x) = \frac{1}{x}$ to the first term above. In the end we obtain $$\begin{aligned}
\Sigma_{2} = O\left( d^k \frac{k\left(k + \frac{1}{\gamma(\chi)}\right)}{(k-1)!}\frac{q^{n/2}(\log n)^{k-2}}{n}\right).\end{aligned}$$
The proof of Lemma \[Lem\_general\_k\_Zeros\] is complete.
Bias term
---------
\[Lem\_general\_k\_Bias\] For $f=\Omega$ or $\omega$, and for any $k\geq 2$, we have $$\begin{gathered}
\sum_{n_{1} + n_{2} = n} B_{f_{k-1}}(n_1, \chi)B_{f_1}(n_2, \chi)
= \frac{(-1)^k k}{(k-1)!}\frac{q^{n/2}(\log n)^{k-1}}{n}\Bigg\{ {\left(}m_+ - \epsilon_f\frac{\delta(\chi^2)}{2}{\right)}^k \\ +(-1)^n {\left(}m_- -\epsilon_f\frac{\delta(\chi^2)}{2}{\right)}^k +O {\left(}d^k k (\log n)^{-1} {\right)}\Bigg\},\end{gathered}$$ where $\epsilon_{\Omega} = -1$, $\epsilon_{\omega} =1$ and the implicit constant is absolute.
We write the sum as a sum of four parts, $$\begin{aligned}
\sum_{n_{1} + n_{2} = n} B_{f_{k-1}}(n_1, \chi)&B_{f_1}(n_2, \chi) \nonumber \\
=\frac{(-1)^k q^{n/2}}{(k-2)!} \sum_{n_1+n_2=n}\Bigg\{&{\left(}m_+ -\epsilon_f \frac{\delta{\left(}\chi^2{\right)}}{2}{\right)}^{k-1}
{\left(}m_+ -\epsilon_f \frac{\delta{\left(}\chi^2{\right)}}{2} {\right)}\nonumber\\
&+
{\left(}m_+ -\epsilon_f \frac{\delta{\left(}\chi^2{\right)}}{2}{\right)}^{k-1}
(-1)^{n_2}{\left(}m_{-} -\epsilon_f \frac{\delta{\left(}\chi^2{\right)}}{2}{\right)}\nonumber\\
&+
(-1)^{n_1}{\left(}m_- -\epsilon_f \frac{\delta{\left(}\chi^2{\right)}}{2}{\right)}^{k-1}
{\left(}m_{+} -\epsilon_f \frac{\delta{\left(}\chi^2{\right)}}{2}{\right)}\nonumber\\
&+
(-1)^{n_1}{\left(}m_- -\epsilon_f \frac{\delta{\left(}\chi^2{\right)}}{2}{\right)}^{k-1}
(-1)^{n_2}{\left(}m_{-} -\epsilon_f \frac{\delta{\left(}\chi^2{\right)}}{2}{\right)}\Bigg\} \frac{ (\log n_1)^{k-2}}{n_1 n_2}\nonumber\\
=&: \frac{(-1)^k q^{n/2}}{(k-2)!} \left\{ S_1+S_2+S_3+S_4 \right\}.\end{aligned}$$ First, we see that $S_1$ and $S_4$ should give the main term, and we expect $S_2$ and $S_3$ to be in the error term. Using \[sum1-1\] and \[sum1-2\], we have $$\begin{aligned}
\sum_{n_1+n_2=n} \frac{(\log n_1)^{k-2}}{n_1 n_2} = \frac{k}{k-1}\frac{(\log n)^{k-1}}{n} + O{\left(}k\frac{(\log n)^{k-2}}{n} {\right)}.\end{aligned}$$ Thus $$\begin{aligned}
S_1 &= \frac{k}{k-1}{\left(}m_+ -\epsilon_f \frac{\delta{\left(}\chi^2{\right)}}{2}{\right)}^{k}\frac{(\log n)^{k-1}}{n} + O{\left(}k{\left(}m_+ -\epsilon_f \frac{1}{2}{\right)}^{k} \frac{(\log n)^{k-2}}{n} {\right)}, \\
S_4 &= (-1)^n\frac{k}{k-1}{\left(}m_- -\epsilon_f \frac{\delta{\left(}\chi^2{\right)}}{2}{\right)}^{k}\frac{(\log n)^{k-1}}{n} + O{\left(}k{\left(}m_- -\epsilon_f \frac{1}{2}{\right)}^{k} \frac{(\log n)^{k-2}}{n} {\right)}.\end{aligned}$$ Similarly, applying Lemma \[Lem\_Abel\] to the alternating sum (the factor $(-1)^{n_1}$ corresponds to $\theta = \pi$), we have $$\begin{aligned}
\sum_{n_1+n_2=n} (-1)^{n_1} \frac{(\log n_1)^{k-2}}{n_1 n_2} = O{\left(}\left(k + \frac{1}{\pi}\right)\frac{(\log n)^{k-2}}{n} {\right)}.\end{aligned}$$ Thus $$\begin{aligned}
S_2 + S_3 = O{\left(}d^k k\frac{(\log n)^{k-2}}{n} {\right)}.\end{aligned}$$ Combining $S_1$, $S_2$, $S_3$, and $S_4$, we obtain Lemma \[Lem\_general\_k\_Bias\].
Other error terms
-----------------
\[lem\_general\_k\_mixed-bias-zero\] For $f=\Omega$ or $\omega$ and for any $k\geq 2$, one has $$\begin{aligned}
\sum_{n_{1} + n_{2} = n}\big\{ Z_{k-1}(n_1, \chi)B_{f_1}(n_2, \chi)+B_{f_{k-1}}(n_1, \chi)Z_1(n_2, \chi)\big\} = O{\left(}d^k \frac{k \left( k + \frac{1}{\gamma(\chi)}\right)}{(k-1)!}\frac{q^{n/2} (\log n)^{k-2}}{n}{\right)},\end{aligned}$$ where the implicit constant is absolute.
Let $\alpha_{j}$ be a non-real inverse zero of the $L$-function; one has $$\begin{gathered}
\label{sum_zero_bias_1-2}
\sum_{n_1+n_2=n}m_j^{k-1} {\left(}{\left(}m_{+} -\epsilon_f \frac{\delta(\chi^2)}{2}{\right)}+(-1)^{n_2}{\left(}m_{-} -\epsilon_f \frac{\delta(\chi^2)}{2}{\right)}{\right)}\frac{\alpha_{j}^{n_{1}}(\log n_1)^{k-2}}{n_{1}} \frac{q^{n_{2}/2}}{n_{2}} \\
= O {\left(}m_j^{k-1} ( \max{\left(}m_{+},m_{-}{\right)}+ \tfrac{1}{2}) \left( k + \frac{1}{\min(\lvert \gamma_j\rvert, \lvert\pi - \gamma_j\rvert) }\right)\frac{q^{n/2} (\log n)^{k-2}}{n}{\right)},\end{gathered}$$ which follows from the same idea as for \[sum-2-1\]. We sum over the zeros to obtain $$\begin{aligned}
\sum_{n_{1} + n_{2} = n} Z_{k-1}(n_1, \chi)B_{f_1}(n_2, \chi)
&= O{\left(}d^k \frac{k \left( k + \frac{1}{\gamma(\chi)}\right)}{(k-1)!}\frac{q^{n/2} (\log n)^{k-2}}{n}{\right)}.\end{aligned}$$ The proof is similar for the other term.
\[Lem-mixed-term\] For $f = \Omega$ or $\omega$ and for any $k\geq 2$, one has $$\begin{aligned}
\sum_{n_{1} + n_{2} = n} & {\left(}Z_{k-1}(n_{1},\chi)+B_{f_{k-1}}(n_{1},\chi){\right)}E_{f_1}(n_2, \chi) = O{\left(}d^k \frac{k}{(k-1)!} \frac{q^{n/2} (\log n)^{k-2}}{n} {\right)}.
\end{aligned}$$
We use the following bound, for $k-1 \geq 1$: $$\begin{aligned}
\lvert Z_{k-1}(n,\chi) + B_{f_{k-1}}(n,\chi)\rvert \ll d^{k-1} \frac{1}{(k-2)!} \frac{q^{n/2}(\log n)^{k-2}}{n},\end{aligned}$$ where the implicit constant is absolute. In particular, the term evaluated in Lemma \[Lem-mixed-term\] satisfies $$\begin{aligned}
\sum_{n_{1} + n_{2} = n}& {\left(}Z_{k-1}(n_{1},\chi)+B_{f_{k-1}}(n_{1},\chi){\right)}E_{f_1}(n_2, \chi) \\
&= \sum_{n_{1} + n_{2} = n} O{\left(}d^{k} \frac{1}{(k-2)!} \frac{q^{n_1/2}q^{n_2/3}(\log n_1)^{k-2}}{n_1}{\right)}\\
&= O {\left(}d^{k} \frac{q^{n/2}(\log n)^{k-2}}{(k-2)!} {\left(}\frac{1}{n}\sum_{n_{2} \leq n/2} q^{-{n_2}/6} + q^{-n/6}\sum_{n_{1} \leq n/2}\frac{q^{n_{1}/6}}{n_1} {\right)}{\right)}\\
&= O {\left(}\frac{d^k}{(k-2)!} \frac{q^{\frac{n}{2}}(\log n)^{k-2}}{n} {\right)},\end{aligned}$$ with an absolute implicit constant. This concludes the proof.
Proof of Theorem \[Prop\_k\_general\]
-------------------------------------
We now have all the ingredients to finish the proof of Theorem \[Prop\_k\_general\].
We proceed by induction on $k$; the base case $k=1$ is Proposition \[Prop\_k=1\]. Now suppose that for all $2\leq \ell\leq k-1$ we have $$\begin{aligned}
\label{Eq bound Error}
E_{f_\ell}(n,\chi) \leq C_{f,\ell} \frac{d^{\ell}}{(\ell-1)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2} (\log n)^{\ell-2}}{n},\end{aligned}$$ where $C_{f,\ell}\leq C\ell(\ell-1)$ as stated in \[formula Th deg=n\]. In particular, the condition of Lemma \[lem recurence big Omega\] (resp. Lemma \[lem recurence small omega\]) is satisfied for $k$: one has $$\begin{aligned}
\lvert \pi_{f_\ell}(n,\chi) \rvert &\leq \lvert Z_{\ell}(n,\chi) + B_{f_{\ell}}(n,\chi)\rvert + C_{f,\ell} \frac{d^{\ell}}{(\ell-1)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2} (\log n)^{\ell-2}}{n}\nonumber \\
&\leq \Big(1 + \frac{C_{f,\ell}}{\gamma(\chi)\log n}\Big) \frac{d^{\ell}}{(\ell -1)!} \frac{q^{n/2}}{n}(\log n)^{\ell -1},
\label{Eq bound on A Omega}\end{aligned}$$ for $1\leq \ell \leq k-1$ and for all $n\geq 2$. Thus, taking $A_{\Omega, \ell}=1 + \frac{C_{\Omega,\ell}}{\gamma(\chi)\log n}$ in Lemma \[lem recurence big Omega\] and evaluating each sum in Equation \[general-k-main\] thanks to Lemmas \[Lem\_general\_k\_Zeros\]–\[Lem-mixed-term\] yields $$\begin{aligned}
\label{Eq induction Omega}
\frac{n}{q^{n/2}(\log n)^{k-2}}\lvert E_{\Omega_{k}}(n,\chi) \rvert \leq& \frac{d^{k}}{k!}\sum_{\ell=2}^{k} \Big(1 + \frac{C_{\Omega,k-\ell}}{\gamma(\chi)\log n}\Big)\Big(2 + \frac{\ell}{\log n}\Big)\frac{d^{-\ell} q^{-\ell/2+1}(\log n)^{2-\ell}(k-1)!}{(k-\ell -1)!} \nonumber\\ &+ \frac{C_0}{2} d^k \frac{k + \frac{1}{\gamma(\chi)}}{(k-1)!} + \frac{1}{k} \frac{n}{q^{n/2}(\log n)^{k-2}}\sum_{n_1 + n_2 = n}\lvert E_{\Omega_{k-1}}(n_1,\chi) \pi_{f_1}(n_2,\chi) \rvert,\end{aligned}$$ where $C_0$ is an absolute constant. In the case $k=2$, we get $$\begin{aligned}
\frac{n}{q^{n/2}}\lvert E_{\Omega_{2}}(n,\chi) \rvert \ll \frac{d^2}{\gamma(\chi)} + \frac{n}{q^{n/2}} \sum_{n_1 + n_2 = n} d\frac{q^{n_1 /3}}{n_1} d\frac{q^{n_2/2}}{n_2}
\ll \frac{d^2}{\gamma(\chi)},\end{aligned}$$ which is the expected bound. For $k\geq 3$, using the bound \[Eq bound Error\], we have $$\begin{aligned}
\label{Eq applying induction Omega}
\sum_{n_1 + n_2 = n}&\lvert E_{\Omega_{k-1}}(n_1,\chi) \pi_{f_1}(n_2,\chi) \rvert\\ &\leq \sum_{n_1 + n_2 = n}C_{\Omega,k-1}\frac{d^{k-1}}{(k-2)!}\frac{1}{\gamma(\chi)}\frac{q^{n_1/2}(\log n_1)^{k-3}}{n_1} \Big(1 + C_{\Omega,1}q^{-n_2/6}\Big) d \frac{q^{n_2/2}}{n_2}\nonumber \\
&\leq C_{\Omega,k-1}\frac{d^k}{(k-2)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2}}{n}\sum_{n_1 + n_2 = n} \left(\frac1{n_1} + \frac{1}{n_2}\right)(\log n_1)^{k-3}\Big(1 + C_{\Omega,1}q^{-n_2/6}\Big)\nonumber \\
&\leq C_{\Omega,k-1}\frac{d^k}{(k-2)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2}}{n}\left( \frac{k-1}{k-2}(\log n)^{k-2} + O(k(\log n)^{k-3} ) \right),\nonumber\end{aligned}$$ which, together with the bound \[Eq induction Omega\], proves the existence of $C_{\Omega,k}$ satisfying $$\begin{aligned}
E_{\Omega_k}(n,\chi) \leq C_{\Omega,k} \frac{d^{k}}{(k-1)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2} (\log n)^{k-2}}{n}.\end{aligned}$$ Now, when $k = o((\log n)^{1/2})$, by the induction hypothesis \[Eq bound Error\], one has $C_{\Omega,\ell} \leq C\ell(\ell -1) = o(\log n)$ for $2\leq \ell\leq k-1$ and some absolute constant $C$. In the following, we show how to choose $C$ and close the induction. We simplify the bounds \[Eq induction Omega\] and \[Eq applying induction Omega\] to obtain $$\begin{aligned}
C_{\Omega,k} \leq& \frac{2(k-1)(k-2)}{k}\sum_{\ell=2}^{k} (\gamma(\chi) + o(1))d^{-\ell} q^{-\ell/2+1} + \frac{C_0}{2}k + C_{\Omega,k-1}\frac{k-1}{k}\left( \frac{k-1}{k-2} + o(k^{-1}) \right) \\
\leq& C\left(\frac{(k-1)(k-2)}{2k} + \frac{k}{2} + \frac{(k-1)^3}{k} + o(k) \right) \leq C k(k-1),\end{aligned}$$ provided we choose $C\geq \max \lbrace C_0, 6\pi \rbrace \geq 4\gamma(\chi)\sum_{\ell=2}^{k} d^{-\ell} q^{-\ell/2+1}$ and $k$ is large enough (say $k\geq K$ for some finite $K$). In the end, choosing $C\geq \max\{ \frac{C_{\Omega, 2}}{2}, \cdots, \frac{C_{\Omega, K}}{K(K-1)}, C_0,6\pi \}$, we deduce that $C_{\Omega, k}\leq Ck(k-1)$ for all $2\leq k=o((\log n)^{1/2})$. This closes the induction step for $C_{\Omega,k}$.
The proof works similarly for $C_{\omega,k}$ using Lemma \[lem recurence small omega\]. For $k\geq 2$, we have $$\begin{aligned}
\label{Eq induction omega}
\frac{n}{q^{n/2}(\log n)^{k-2}}\lvert E_{\omega_{k}}(n,\chi) \rvert \leq& 2\frac{d^{k}}{k!} \sum_{\ell = 2}^{k}\sum_{j = \ell}^{n-1}\left(1 + \frac{C_{\omega,k-\ell}}{\gamma(\chi)\log n}\right)j\binom{j-1}{\ell-1}\frac{q^{1-\frac{j}{2}}d^{1-\ell}(\log n)^{2-\ell}(k-1)!}{(k-\ell -1)!}\nonumber
\\ &+ \frac{C_0}{2} d^k \frac{k + \frac{1}{\gamma(\chi)}}{(k-1)!} + \frac{1}{k} \frac{n}{q^{n/2}(\log n)^{k-2}}\sum_{n_1 + n_2 = n}\lvert E_{\omega_{k-1}}(n_1,\chi) \pi_{f_1}(n_2,\chi) \rvert.\end{aligned}$$ The last term is handled as in . The first term is bounded independently of $n$ (but a priori not independently of $q$ if $q = 3$) by observing that the series $$\sum_{j \geq \ell} \frac{j!}{(j-\ell)!} q^{-\frac{j}{2}} = \frac{\sqrt{q}}{\sqrt{q}-1} \ell ! (\sqrt{q} - 1)^{-\ell}$$ is convergent. Up to increasing the constant to include the case $q=3$, this proves the existence of $C_{\omega,k}$ satisfying $$\begin{aligned}
E_{\omega_k}(n,\chi) \leq C_{\omega,k} \frac{d^{k}}{(k-1)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2} (\log n)^{k-2}}{n}.\end{aligned}$$ Now, assuming $q \geq 5$, one has $$\begin{aligned}
\sum_{\ell = 2}^{k}\sum_{j = \ell}^{n-1}j\binom{j-1}{\ell-1}q^{1-\frac{j}{2}}d^{1-\ell} &\leq 2dq\sum_{\ell = 2}^{k}\ell (d(\sqrt{q} -1))^{-\ell}. \end{aligned}$$ The series is convergent and can be bounded independently of $q$ and $d$; we may choose $C \geq \max\{ C_0, 8\gamma(\chi)dq\sum_{\ell \geq 2}\ell (d(\sqrt{q} -1))^{-\ell}\}$. Thus, for $q\geq 5$ and $k = o((\log n)^{1/2})$, using the induction hypothesis $C_{\omega,\ell} \leq C\ell(\ell -1)$ for $2\leq \ell\leq k-1$, the bound \[Eq induction omega\] becomes $$\begin{aligned}
C_{\omega,k}\leq C\left(\frac{(k-1)(k-2)}{2k} + \frac{k}{2} + \frac{(k-1)^3}{k} + o(k) \right). \end{aligned}$$ By the same argument as for $C_{\Omega, k}$, we conclude that $C_{\omega, k}\leq C k(k-1)$ for some absolute constant $C$.
Counting polynomials of degree $\leq n$ with $k$ irreducible factors in congruence classes {#Sec_proof_deg<X}
==========================================================================================
The asymptotic formula in Theorem \[Th\_Difference\_k\_general\_deg<X\] is obtained as a corollary of Theorem \[Prop\_k\_general\], by summing over the characters and over the degree of the polynomials.
For $A \subset (\mathbf{F}_q[t]/(M))^{*}$, for $f = \Omega$ or $\omega$ and for any integers $n,k \geq 1$, we define the function $$\pi_{f_k}(n; M, A) = \lvert\lbrace N \in\mathbf{F}_{q}[t] : N \text{ monic, } \deg{N} = n,~ f(N)=k,~ N \bmod M \in A \rbrace\rvert,$$ so that $$\begin{aligned}
\Delta_{f_k}(X; M, A, B)
= \frac{X (k-1)!}{q^{X/2}(\log X)^{k-1}}\sum_{n\leq X}\left( \frac{1}{\lvert A\rvert }\pi_{f_k}(n; M, A) - \frac{1}{\lvert B \rvert}\pi_{f_k}(n; M, B)\right).\end{aligned}$$
Before we give the proof of Theorem \[Th\_Difference\_k\_general\_deg<X\], let us prove the following preliminary lemma.
\[Lem\_SumOverN\] Let $k\geq 0$ be an integer. For any complex number $\alpha$ with $\lvert\alpha\rvert \geq \sqrt{2}$, as $X\rightarrow\infty$ we have that $$\begin{aligned}
\frac{X}{\alpha^{X}(\log X)^{k}}\sum_{n=1}^{X}\frac{\alpha^n(\log n)^{k}}{n} = \frac{\alpha}{\alpha -1} + O{\left(}\frac{1}{\lvert\alpha\rvert^{X}}+\frac{1 + \frac{k}{\log X}}{X\log X} {\right)}.\end{aligned}$$
The proof is adapted from [@Cha2008 Lem. 2.2]. Applying Abel's identity yields $$\begin{aligned}
\sum_{n=1}^{X} \frac{\alpha^n(\log n)^{k}}{n} &= \frac{\alpha^{X+1} -{\alpha}}{\alpha -1}\frac{(\log X)^k}{X} + \int_{1}^{X} \frac{\alpha^{[t]+1} -{\alpha}}{\alpha -1} \frac{(k-1)(\log t)^{k-2} - (\log t)^{k-1}}{t^2} {\mathop{}\!\mathrm{d}}t \\
&= \frac{\alpha}{\alpha -1}(\alpha^{X} +O(1)) \frac{(\log X)^k}{X}
+ O{\left(}{\left(}k(\log X)^{k-2} + (\log X)^{k-1}{\right)}\int_{1}^{X} \frac{ \lvert \alpha\rvert^{t}}{t^2} {\mathop{}\!\mathrm{d}}t {\right)}.\end{aligned}$$ Cha proved that $\int_{1}^{X} \frac{ \lvert \alpha\rvert^{t}}{t^2} {\mathop{}\!\mathrm{d}}t = O{\left(}\frac{\lvert \alpha\rvert^X}{X^2}{\right)}$ via integration by parts. This concludes the proof.
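Lemma \[Lem\_SumOverN\] is straightforward to check numerically. The following Python sketch does so for the illustrative real value $\alpha = 2$ with $k = 2$ and $X = 400$ (the tolerance is ours; the approach to the limit $\alpha/(\alpha-1) = 2$ is only at speed $O(1/X)$, so the agreement is to a few parts in a thousand):

```python
import math

# Check that X / (alpha^X (log X)^k) * sum_{n <= X} alpha^n (log n)^k / n
# is close to alpha / (alpha - 1), here for alpha = 2 and k = 2.
alpha, k, X = 2.0, 2, 400
s = sum(alpha ** n * math.log(n) ** k / n for n in range(2, X + 1))  # n = 1 term vanishes
ratio = X * s / (alpha ** X * math.log(X) ** k)
assert abs(ratio - alpha / (alpha - 1)) < 0.05
```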
Let us first sum over the characters. By orthogonality of characters, for every $A \subset (\mathbf{F}_{q}[t]/(M))^*$, one has $$\pi_{f_k}(n; M, A) =
\frac{1}{\phi(M)}\sum_{\chi \bmod M}\sum_{a\in A} \bar\chi(a) \pi_{f_k}(n,\chi).$$ Hence for any $A, B \subset (\mathbf{F}_{q}[t]/(M))^*$, one has $$\begin{aligned}
\frac{1}{\lvert A\rvert}\pi_{f_k}(n; M, A) - \frac{1}{\lvert B\rvert}\pi_{f_k}(n; M, B)
&= \frac{1}{\phi(M)}\sum_{\chi \bmod M}{\left(}\frac{1}{\lvert A\rvert}\sum_{a\in A} \bar\chi(a) - \frac{1}{\lvert B\rvert}\sum_{b\in B} \bar\chi(b) {\right)}\pi_{f_k}(n,\chi) \\
&= \sum_{\chi \bmod M}c(\chi,A,B) \pi_{f_k}(n,\chi). \end{aligned}$$ Note that the case $\chi = \chi_0$ is trivial: one has $c(\chi_0,A,B) =0$.
We have $\lvert c(\chi,A,B) \rvert \leq 2$, so when we sum over the degree $n$, the implicit constants in the error terms are at most multiplied by $2$.
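The orthogonality relation used above is the same as for characters of any finite abelian group. As a concrete illustration (in the classical setting of $(\mathbf{Z}/5\mathbf{Z})^{*}$ rather than $(\mathbf{F}_{q}[t]/(M))^{*}$; the encoding via discrete logarithms is ours), the following Python sketch checks that averaging $\chi(a)\bar\chi(b)$ over all characters detects the condition $a = b$:

```python
import cmath

# Dirichlet characters mod 5: (Z/5Z)^* is cyclic of order 4, generated by 2.
dlog = {1: 0, 2: 1, 4: 2, 3: 3}   # discrete logarithm base 2 mod 5

def chi(r, n):
    """The r-th character mod 5 (r = 0 is the principal character)."""
    return cmath.exp(2 * cmath.pi * 1j * r * dlog[n % 5] / 4)

# Orthogonality: (1/phi(5)) sum_chi chi(a) conj(chi(b)) equals 1 if a = b, else 0.
for a in (1, 2, 3, 4):
    for b in (1, 2, 3, 4):
        s = sum(chi(r, a) * chi(r, b).conjugate() for r in range(4)) / 4
        assert abs(s - (1 if a == b else 0)) < 1e-12
```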
Now, let us sum over the degree. We divide the range $n\leq X$ into the two parts $n\leq \frac{X}{3}$ and $\frac{X}{3}<n\leq X$. For $n\leq \frac{X}{3}$, we use the trivial bound $\pi_{f_k}(n; M, A)\leq q^{n}$. We have $$\begin{aligned}
\Delta_{f_k}(X; M, A, B)&= \sum_{\chi \bmod M}c(\chi,A,B) \frac{X (k-1)!}{q^{X/2}(\log X)^{k-1}}\Bigg\{\sum_{n\leq \frac{X}{3}}+\sum_{\frac{X}{3}<n\leq X}\Bigg\} \pi_{f_k}(n,\chi)\nonumber\\
&= \sum_{\chi \bmod M}c(\chi,A,B)\frac{X (k-1)!}{q^{X/2}(\log X)^{k-1}}\sum_{\frac{X}{3}<n\leq X} \pi_{f_k}(n,\chi) +O{\left(}Xq^{1 - \frac{X}{6}}\frac{(k-1)!}{(\log X)^{k-1}}{\right)}.\end{aligned}$$ When $\frac{X}{3}<n\leq X$, we have $k=o((\log X)^{\frac12})=o((\log n)^{\frac12})$, so the asymptotic formula in Theorem \[Prop\_k\_general\] yields $$\begin{aligned}
&\sum_{\frac{X}{3}<n\leq X} \pi_{f_k}(n,\chi)\nonumber\\
&= \frac{(-1)^k}{(k-1)!} \sum_{\frac{X}{3}<n\leq X} \Bigg\{\frac{q^{n/2}(\log n)^{k-1}}{n}
{\left(}{\left(}m_+(\chi) -\epsilon_f\frac{\delta(\chi^2)}{2}{\right)}^k+(-1)^n {\left(}m_-(\chi) -\epsilon_f\frac{\delta(\chi^2)}{2}{\right)}^k {\right)}\\
&\quad + \sum_{\alpha_j\neq\pm\sqrt{q}} m_j(\chi)^k \frac{\alpha_j^n(\chi) (\log n)^{k-1}}{n}
+O \left( d^k k(k-1)\frac{1}{\gamma(\chi)} \frac{q^{n/2}(\log n)^{k-2}}{n} + d\frac{q^{n/3}}{n}\right) \Bigg\}\\
&= \frac{(-1)^k}{(k-1)!} \sum_{n\leq X} \Bigg\{\frac{q^{n/2}(\log n)^{k-1}}{n}
{\left(}{\left(}m_+(\chi) -\epsilon_f\frac{\delta(\chi^2)}{2}{\right)}^k+(-1)^n {\left(}m_-(\chi) -\epsilon_f\frac{\delta(\chi^2)}{2}{\right)}^k {\right)}\\
&\quad + \sum_{\alpha_j\neq\pm\sqrt{q}} m_j(\chi)^k \frac{\alpha_j^n(\chi) (\log n)^{k-1}}{n}
+O \left( d^k \frac{k(k-1)}{(k-1)!\gamma(\chi)} \frac{q^{n/2}(\log n)^{k-2}}{n} + d\frac{q^{n/3}}{n} \right) \Bigg\} \\
&\qquad+ O \left( d^k \frac{1}{(k-1)!} \frac{q^{X/6}(\log X)^{k-1}}{X} \right).\end{aligned}$$ Now, applying Lemma \[Lem\_SumOverN\] for each $\alpha_j = \sqrt{q}e^{i\gamma_j(\chi)}$ (real or not), and using $k= o(\log X)$, one has $$\begin{aligned}
\frac{X}{(\log X)^{k-1}q^{X/2}}\sum_{n=1}^{X}\frac{\alpha_j^n(\log n)^{k-1}}{n} = \frac{\alpha_j}{\alpha_j -1}{\left(}\frac{\alpha_j}{\sqrt{q}}{\right)}^{X} + O{\left(}\frac{1}{X\log X}{\right)}.\end{aligned}$$ We also apply Lemma \[Lem\_SumOverN\] and [@Cha2008 Lem. 2.2] to the sum of the error term and derive that $$\begin{aligned}
&\Delta_{f_k}(X; M, A, B)\nonumber\\
&= (-1)^k \sum_{\chi}c(\chi,A,B)\Bigg(\ {\left(}m_+(\chi) +\frac{\delta(\chi^2)}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q}-1} +(-1)^X {\left(}m_-(\chi)+\frac{\delta(\chi^2)}{2}{\right)}^k \frac{\sqrt{q}}{\sqrt{q}+1} \nonumber\\
&\qquad +\sideset{}{'}\sum_{j=1}^{d_{\chi}} m_j^k e^{iX\gamma_{j}(\chi)} \frac{\alpha_j(\chi)}{\alpha_j(\chi)-1} \Bigg) + O{\left(}\frac{d^k k(k-1)}{\gamma(M)\log X} + dq^{-X/6}{\right)}.\end{aligned}$$ This concludes the proof of Theorem \[Th\_Difference\_k\_general\_deg<X\].
Acknowledgements {#acknowledgements .unnumbered}
================
The authors thank Peter Humphries and Lior Bary-Soroker for suggesting the project. The authors are grateful to Daniel Fiorilli for his feedback and careful reading. The authors are also grateful to Byungchul Cha and Andrew Granville for their helpful advice and for pointing out the works of Wanlin Li and Sam Porritt respectively. We are grateful to Wanlin Li for explaining her paper and for her help in finding interesting examples, and to Sam Porritt for sending us his preprint. This paper also benefited from conversations with Florent Jouve, Jon Keating, Corentin Perret-Gentil, and K. Soundararajan. We would like to thank the CRM, McGill University, Concordia University, the University of Ottawa and MPIM for providing good working conditions that made this collaboration possible.
The computations in this paper were performed using SageMath and Matlab.
[^1]: Using the Sathe–Selberg method, Afshar and Porritt [@AfsharPorritt Th. 2] gave an asymptotic formula for the number of monic polynomials of degree $X$ with $k$ irreducible factors in congruence classes modulo $M\in \mathbf{F}_{q}[t]$, whose main term is $\frac{q^{X}(\log X)^{k-1}}{X (k-1)! \phi(M)}$ when $k = o(\log X)$ and the modulus $M$ does not vary with $X$. In this paper, we focus on the error terms and expect square-root cancellation in the error terms.
[^2]: Note that [@Cha2008 Ex. 5.3] contains a typo; we have $\mathcal{L}(u,\chi_M) = (1 - 2 \sqrt{5} \cos(\tfrac{\pi}{5})u + 5u^2 )(1 - 2 \sqrt{5} \cos(\tfrac{2\pi}{5})u + 5u^2 ).$
---
abstract: 'We derive exact results for close-packed dimers on the triangular kagome lattice (TKL), formed by inserting triangles into the triangles of the kagome lattice. Because the TKL is a non-bipartite lattice, dimer-dimer correlations are short-ranged, so that the ground state at the Rokhsar-Kivelson (RK) point of the corresponding quantum dimer model on the same lattice is a short-ranged spin liquid. Using the Pfaffian method, we derive an exact form for the free energy, and we find that the entropy is $\frac{1}{3} \ln 2$ per site, regardless of the weights of the bonds. The occupation probability of every bond is $\frac{1}{4}$ in the case of equal weights on every bond. Similar to the case of lattices formed by corner-sharing triangles (such as the kagome and squagome lattices), we find that the dimer-dimer correlation function is identically zero beyond a certain (short) distance. We find in addition that monomers are deconfined on the TKL, indicating that there is a short-ranged spin liquid phase at the RK point. We also find exact results for the ground state energy of the classical Heisenberg model. The ground state can be ferromagnetic, ferrimagnetic, locally coplanar, or locally canted, depending on the couplings. From the dimer model and the classical spin model, we derive upper bounds on the ground state energy of the quantum Heisenberg model on the TKL.'
author:
- 'Y. L. Loh'
- 'Dao-Xin Yao'
- 'E. W. Carlson'
title: Dimers on the Triangular Kagome Lattice
---
Introduction
============
The nontrivial statistical mechanics problem of dimer coverings of lattices, which may be used to model, [*e.g.*]{}, the adsorption of diatomic molecules onto a surface[@diatomic], experienced a renaissance with the discovery of exact mappings to Ising models[@fisher-1961; @kasteleyn1963]. A second renaissance came with the search for[@anderson-rvb-1973; @rokhsar-kivelson-1988] and discovery of[@sondhi-triangular] a true spin liquid phase with deconfined spinons. In the latter case, the problem of classical dimer coverings of a lattice illuminates the physics of the corresponding [*quantum*]{} dimer model. At the Rokhsar-Kivelson (RK) point of the quantum dimer model, the ground states are an equal amplitude superposition of dimer coverings within the same topological sector,[@rokhsar-kivelson-1988; @sondhi-triangular][^1] and in fact dimer correlations at this point correspond to the dimer correlations of the classical dimer model.
Results on classical hard-core dimer models in two[@kasteleyn1963] and higher dimensions[@huse-prl-2003] point to two classes of models, depending upon the monomer-monomer correlation function, which is defined as the ratio of the number of configurations available with two test monomers inserted to the number of configurations available with no monomers present. On bipartite lattices (such as the square and honeycomb lattices), monomers are confined, with power law correlations.[@kasteleyn1963; @fisher1963] On nonbipartite lattices (such as the triangular, kagome, and the triangular kagome lattice discussed here), monomers can be either confined or deconfined, and correlators exhibit exponential decay except at phase transitions.[@krauth-2003; @fendley-2002; @misguich-2002; @fisher1966; @moessner-2003] This implies that while the RK point of the quantum dimer model is critical on bipartite lattices, so that at $T=0$ a (critical) spin liquid exists only at the RK point, on non-bipartite lattices, such as the triangular lattice and lattices made of corner-sharing triangles (the kagome and squagome lattices), the RK point has been shown to correspond to a disordered spin liquid. Correspondingly, it was established in both of these cases that there exist finite regions of parameter space where the ground state is a gapped spin liquid with deconfined spinons. Part of the interest in such states is the topological order that accompanies such ground states, and hence such states may be useful examples of the toric code. Interest also stems from the original proposals that the [*doped*]{} spin liquid phase leads to superconductivity.[@anderson-1987; @rokhsar-kivelson-1988]
![(Color online) A dimer covering of a portion of the triangular kagome lattice (TKL). The TKL can be derived from the triangular lattice by periodically deleting seven out of every sixteen lattice sites. This structure has two different sublattices “a” (closed circles) and “b” (open circles), which correspond to small trimers and large trimers, respectively. Each site has four nearest neighbors. The primitive unit cell contains 6 $a$-sites, 3 $b$-sites, 6 $a$–$a$ bonds, and 12 $a$–$b$ bonds. Thick blue lines represent dimers. A typical close-packed dimer covering is shown. []{data-label="f:valence-bonds"}](valence-bonds4.eps){width="0.85\columnwidth"}
In this paper, we analyze the problem of classical close-packed dimers on the triangular kagome lattice (TKL), a non-bipartite lattice expected to display a spin liquid phase, as the first step in understanding the RK point of the corresponding quantum dimer model. The TKL, depicted in Fig. \[f:valence-bonds\], has a physical analogue in the positions of Cu atoms in the materials $\mbox{Cu}_{9}\mbox{X}_2(\mbox{cpa})_{6}\cdot x\mbox{H}_2\mbox{O}$ (cpa=2-carboxypentonic acid, a derivative of ascorbic acid; X=F,Cl,Br) [@gonzalez93; @maruti94; @mekata98]. We have previously studied Ising spins[@lohyaocarlson2007] and XXZ/Ising spins[@xxzising] on the TKL; this paper represents an alternative approach to the problem. Using the well-known Pfaffian method,[@kasteleyn1963] we obtain exact solutions of close-packed dimers on the TKL. We obtain an analytic form of the free energy for arbitrary bond weights. The entropy is $\frac{1}{3} \ln 2$ per site, independent of the weights of the bonds, $z_{aa}$ and $z_{ab}$. We find the occupation probability of every bond is a constant $\frac{1}{4}$ in the absence of an orienting potential. The system has only local correlations, in that the dimer-dimer correlation function is exactly zero beyond two lattice constants, much like the situation on lattices made from corner-sharing triangles such as the kagome and squagome lattices[@misguich-2002]. We use exact methods to find the monomer-monomer correlation function, and show that monomers are deconfined on the TKL. We also solve for the ground states of the classical Heisenberg model on this lattice. In addition to collinear phases (ferromagnetic and ferrimagnetic), we find a canted ferrimagnetic phase which interpolates smoothly between the two. We obtain a variational upper bound to the ground state energy of the TKL quantum Heisenberg antiferromagnet using the close-packed dimer picture.
Model, Thermodynamic Properties, and Correlation Function \[model\]
===================================================================
In this paper we consider the close-packed dimer model on the TKL, a lattice which can be obtained by inserting triangles inside of the triangles of the kagome lattice (see Fig. \[f:valence-bonds\]). The dimer generating function is defined as $$\begin{aligned}
Z &= \sum_{\text{dimer coverings}} \prod_{{\left<ij\right>}} z_{ij}^{n_{ij}} ,
\label{e:partition-function}
\end{aligned}$$ where ${\left<ij\right>}$ indicates a product over nearest-neighbor bonds, $z_{ij}$ is the weight on the bond joining site $i$ and site $j$, and $n_{ij}$ is the number of dimers (either 0 or 1) on bond $ij$ for the dimer covering under consideration. The term “close-packed” refers to the constraint that every lattice site must be occupied by one dimer, that is, that vacancies are not allowed. Therefore the number of sites $N_\text{sites}$ is twice the number of dimers $N_\text{dimers}=\sum_{{\left<ij\right>}} n_{ij}$. We allow for the possibility of different weights $z_{\alpha}=e^{-\beta\epsilon_{\alpha}}$ for six different types of bonds $\alpha=1,2,3,4,5,6$, as depicted in Fig. \[f:weights\]. Figure \[f:valence-bonds\] shows an example of a dimer covering.
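As a concrete illustration of Eq. \[e:partition-function\], $Z$ can be evaluated by brute-force enumeration on any small graph. The sketch below is our own illustration (not part of the Pfaffian machinery used in this paper); applied to a single hexamer of the TKL (an $a$-trimer together with its three $b$-sites), it finds exactly two close-packed coverings, both of which use only $a$–$b$ bonds.

```python
def dimer_Z(n_sites, weights):
    """Generating function Z = sum over close-packed dimer coverings of
    prod_{<ij>} z_ij^{n_ij}, by brute-force recursion on a small graph.
    `weights` maps bonds (i, j) with i < j to their weight z_ij."""
    def rec(unmatched):
        if not unmatched:
            return 1.0
        i = min(unmatched)            # match the lowest-index uncovered site
        total = 0.0
        for (a, b), z in weights.items():
            if i in (a, b):
                j = b if a == i else a
                if j in unmatched:
                    total += z * rec(unmatched - {i, j})
        return total
    return rec(frozenset(range(n_sites)))

# One TKL hexamer: a-sites 0,1,2 (small trimer), b-sites 3,4,5; within a
# single hexamer each b-site couples to two a-sites.
z_aa, z_ab = 1.0, 1.0
hexamer = {(0, 1): z_aa, (1, 2): z_aa, (0, 2): z_aa,
           (0, 3): z_ab, (1, 3): z_ab, (1, 4): z_ab,
           (2, 4): z_ab, (0, 5): z_ab, (2, 5): z_ab}
print(dimer_Z(6, hexamer))   # -> 2.0: both coverings use only a-b bonds
```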
![Our assignment of weights $z_{\alpha}$ to bonds in the TKL. Solid (open) circles represent $a$-sites ($b$-sites). \[f:weights\]](activities.eps){width="0.7\columnwidth"}
Several properties of this model, including the free energy, entropy, and dimer-dimer correlation function, can be calculated exactly using the well-known Pfaffian method[@kasteleyn1963]. We begin by defining a Kasteleyn orientation[@kasteleyn1963] (or Pfaffian orientation) for this lattice, [*i.e.*]{} a pattern of arrows laid on the bonds such that in going clockwise around any closed loop with an even number of bonds, there is an odd number of arrows pointing in the clockwise direction along the bonds. For the TKL, we have found it necessary to double the unit cell in order to obtain a valid Kasteleyn orientation. [^2] Such an orientation is shown in Fig. \[pfaffian-orientation\]. The doubled unit cell contains 18 sites.
The antisymmetric weighted adjacency matrix associated with this orientation, $A_{ij}$, is a $N_\text{sites} \times N_\text{sites}$ square matrix with a “doubly Toeplitz” block structure. The generating function of the dimer model is given by the Pfaffian of this matrix: $Z = \text{Pf}~ \mathbf{A} = \sqrt{ \det \mathbf{A}}$. In the infinite-size limit, this approaches an integral over the two-dimensional Brillouin zone: $$\begin{aligned}
f &=& \lim_{N_\text{sites} \rightarrow \infty} \frac{F}{N_\text{sites}} \\ \nonumber
&=& \frac{1}{18}
\int_0^{2\pi} \frac{dk_x}{2\pi}
\int_0^{2\pi} \frac{dk_y}{2\pi}
~\tfrac{1}{2} \ln \left| \det \mathbf{M} (k_x, k_y) \right| \end{aligned}$$ where we have normalized the free energy by the temperature such that $F \equiv \ln Z$, and where $\mathbf{M}(k_x,k_y)$ is the $18\times 18$ matrix below,
$$\begin{aligned}
\mathbf{M} &=
\left(
\begin{array}{llllllllllllllllll}
0 & z_1 & z_3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{z_3}{u} & -\frac{z_1}{u} \\
-z_1 & 0 & -z_5 & 0 & 0 & z_1 & -z_6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-z_3 & z_5 & 0 & -v z_3 & 0 & 0 & z_4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{z_3}{v} & 0 & -z_2 & 0 & -\frac{z_2}{v} & -z_3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & z_2 & 0 & z_2 & 0 & z_4 & z_6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -z_1 & 0 & 0 & -z_2 & 0 & z_2 & 0 & z_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & z_6 & -z_4 & v z_2 & 0 & -z_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & z_3 & -z_4 & 0 & 0 & 0 & -z_5 & -z_3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -z_6 & -z_1 & 0 & z_5 & 0 & z_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & z_3 & -z_1 & 0 & z_1 & z_3 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -z_1 & 0 & -z_5 & 0 & 0 & z_1 & -z_6 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -z_3 & z_5 & 0 & -v z_3 & 0 & 0 & z_4 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{z_3}{v} & 0 & z_2 & 0 & -\frac{z_2}{v} & z_3 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -z_2 & 0 & z_2 & 0 & z_4 & z_6 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -z_1 & 0 & 0 & -z_2 & 0 & z_2 & 0 & z_1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & z_6 & -z_4 & v z_2 & 0 & -z_2 & 0 & 0 & 0 \\
-u z_3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -z_3 & -z_4 & 0 & 0 & 0 & -z_5 \\
u z_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -z_6 & -z_1 & 0 & z_5 & 0
\end{array}
\right)\end{aligned}$$
where, for brevity, we have written $u=e^{ik_x}$ and $v=e^{ik_y}$. The determinant of this matrix is independent of $k_x$ and $k_y$: $$\begin{aligned}
&\det \mathbf{M} (k_x, k_y)
\nonumber\\
&=64 z_1^2 z_2^2 z_3^2 \left(z_1 z_4+z_2 z_5\right){}^2 \left(z_1 z_4+z_3 z_6\right){}^2 \left(z_2 z_5+z_3 z_6\right){}^2\end{aligned}$$ Taking the logarithm and integrating over the Brillouin zone gives the free energy per doubled unit cell. Hence, the free energy per site is $$\begin{aligned}
f &=&\frac{1}{18} \ln\big[ 8 z_1 z_2 z_3 \left(z_1 z_4+z_2 z_5\right) \\ \nonumber
& &\hspace{.25in}\times \left(z_1 z_4+z_3 z_6\right) \left(z_2 z_5+z_3 z_6\right)\big]~.
\label{e:free-energy}\end{aligned}$$
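The closed form Eq. \[e:free-energy\] is easy to check numerically (our own sketch): at equal weights $z_\alpha=1$ (all $\epsilon_\alpha=0$, so $F$ equals the entropy) it reduces to $\frac{1}{18}\ln(8\cdot 2\cdot 2\cdot 2)=\frac{1}{3}\ln 2$ per site, the value quoted in the abstract.

```python
import math

def f_per_site(z1, z2, z3, z4, z5, z6):
    """Free energy per site of close-packed dimers on the TKL, Eq. (free-energy)."""
    return math.log(8 * z1 * z2 * z3
                    * (z1*z4 + z2*z5)
                    * (z1*z4 + z3*z6)
                    * (z2*z5 + z3*z6)) / 18

# Equal weights: f = (1/18) ln(8*2*2*2) = (1/18) ln 64 = (1/3) ln 2 per site.
print(f_per_site(1, 1, 1, 1, 1, 1), math.log(2) / 3)
```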
![(Color online) The arrows represent a Kasteleyn orientation on the triangular kagome lattice (TKL). Solid (open) circles represent “a” (“b”) sublattices. The shaded region represents the doubled unit cell. \[pfaffian-orientation\]](pfaffian-orientation.eps){width="0.9\columnwidth"}
The occupation probability of each bond may be calculated by differentiating the free energy with respect to the weight of each bond. Let $N_\alpha$ be the total number of dimers on $z_\alpha$-bonds (as defined in Fig. \[f:weights\]), averaged over all configurations of the system. Since $Z=\sum_\text{configs} \prod_\alpha z_\alpha {}^ {N_\alpha}$, we have $N_\alpha = z_\alpha \frac{\partial F}{\partial z_\alpha}$. We define the occupation probability of each $\alpha$-bond as $p_\alpha = \frac{N_\alpha}{B_\alpha}$, where $B_\alpha$ is the total number of type-$\alpha$ bonds on the lattice. If $N_\text{cells}$ is the number of primitive unit cells, then $N_\text{sites}=9N_\text{cells}$, $B_1=B_2=B_3=4N_\text{cells}$, and $B_4=B_5=B_6=2N_\text{cells}$. The resulting occupation probabilities are $$\begin{aligned}
p_1 &= \frac{1}{8}
\left(1 + \frac{z_1 z_4 }{z_1 z_4+z_3 z_6}+\frac{z_1 z_4 }{z_1 z_4+z_2 z_5}\right),\\
p_4 &= \frac{1}{4}
\left(\frac{z_1 z_4 }{z_1 z_4+z_3 z_6}+\frac{z_1 z_4 }{z_1 z_4+z_2 z_5}\right).\end{aligned}$$ Expressions for $p_2$, $p_3$, $p_5$, $p_6$ follow by cyclic permutation of $\{1,2,3\}$ simultaneously with permutation of $\{4,5,6\}$. The entropy can be computed by the usual Legendre transformation, $S=F + \sum_{\alpha=1}^6 \beta\epsilon_\alpha N_\alpha$. [^3]
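The probabilities above satisfy a weight-independent sum rule: since $B_1=B_2=B_3=4N_\text{cells}$ and $B_4=B_5=B_6=2N_\text{cells}$, the dimer counts per primitive cell are $4(p_1+p_2+p_3)=3$ on $a$–$b$ bonds and $2(p_4+p_5+p_6)=\frac{3}{2}$ on $a$–$a$ bonds for any choice of weights. The sketch below (our own check, with the cyclic permutation written out explicitly) verifies this for random weights.

```python
import random

def bond_probs(z1, z2, z3, z4, z5, z6):
    """p_alpha as in the text; p2, p3, p5, p6 follow from p1, p4 by cyclically
    permuting {1,2,3} together with {4,5,6}."""
    A, B, C = z1*z4, z2*z5, z3*z6
    p1 = (1 + A/(A+C) + A/(A+B)) / 8
    p2 = (1 + B/(B+A) + B/(B+C)) / 8
    p3 = (1 + C/(C+B) + C/(C+A)) / 8
    p4 = (A/(A+C) + A/(A+B)) / 4
    p5 = (B/(B+A) + B/(B+C)) / 4
    p6 = (C/(C+B) + C/(C+A)) / 4
    return p1, p2, p3, p4, p5, p6

random.seed(1)
for _ in range(5):
    z = [random.uniform(0.1, 5.0) for _ in range(6)]
    p = bond_probs(*z)
    n_ab = 4 * (p[0] + p[1] + p[2])   # dimers per cell on a-b bonds
    n_aa = 2 * (p[3] + p[4] + p[5])   # dimers per cell on a-a bonds
    assert abs(n_ab - 3.0) < 1e-9 and abs(n_aa - 1.5) < 1e-9
```

At equal weights every fraction equals $\frac12$, so all six probabilities reduce to $\frac14$, as quoted in the symmetric-case section.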
The behavior of the correlation functions can be deduced in the same way as in Ref. . To find the dimer-dimer correlation functions, the standard method is to first calculate the “fermion” Green function, which is the inverse of the matrix $\mathbf{A}$, Fourier-transform it to real space, and use the result to construct the dimer-dimer correlation functions. The inverse of the matrix $\mathbf{A}$, $\mathbf{G}(k_x,k_y)= \left[ \mathbf{A}(k_x,k_y)
\right]^{-1}$, can be written as the matrix of cofactors of $\mathbf{A}$ divided by the determinant of $\mathbf{A}$. Since $\det\mathbf{A}$ is independent of $k_x$ and $k_y$, the only dependence on $k_x$ and $k_y$ enters through the cofactor matrix. Each cofactor is at most a monomial in $e^{ik_x}$ and $e^{ik_y}$. From the rules of Fourier transformation it is easily seen that the real-space Green function $\mathbf{G}(x,y)$ is zero when $|x|>1$ or $|y|>1$, i.e., beyond a short cutoff distance. Hence the dimer-dimer correlation function will be zero beyond a distance of two unit cells. This is true *regardless of the values of the bond weights* depicted in Fig. \[f:weights\]. This extremely short-ranged behavior of the correlation function is similar to that for dimers on the kagome lattice[@misguich-2002], and also to the spin-spin correlation for Ising spins in the frustrated parameter regime.[@lohyaocarlson2007] It underscores the special role played by kagome-like lattices (cf. Ref. ).
Whereas quantum dimer models on bipartite lattices do not support deconfined spinons, quantum dimer models on non-bipartite lattices can have deconfined spinons. The connection to classical dimer models is that at the RK point, correlations in the quantum dimer model are the same as the correlations of the corresponding classical dimer problem. The only non-bipartite lattice for which deconfined spinons have been rigorously demonstrated is the triangular lattice, by explicitly calculating the classical monomer-monomer correlation function using Pfaffian methods.[@fendley-2002] On the kagome lattice, while no correspondingly rigorous calculation of the monomer-monomer correlation function has yet been demonstrated, there have been several indications that the spinons in quantum dimer models on the kagome lattice are deconfined (and therefore that monomers in the corresponding classical dimer model are similarly deconfined), from [*e.g.*]{}, the energetics of static spinon configurations[@misguich-2002], the behavior of the single-hole spectral function[@poilblanc-2004], and in the limit of easy-axis anisotropy[@balents-2002]. We have calculated the monomer-monomer correlation for the kagome lattice dimer model using the Pfaffian approach of Fisher and Stephenson[@fisher1963] and we find that it is strictly constant, with $M(r)=1/4$ for any $r>0$. [^4] Because the triangular kagome lattice dimer model maps to the kagome dimer model (with an extra degeneracy of 4 per unit cell), the monomer-monomer correlation on the TKL is also $M(r)=1/4$ for monomers on any two $b$-sites, or for any combination of $a$ and $b$ sites at least three sites apart.
Effects of an Orienting Potential
=================================
In the $\mbox{Cu}_{9}\mbox{X}_2(\mbox{cpa})_{6}\cdot
x\mbox{H}_2\mbox{O}$ materials[@gonzalez93; @maruti94; @mekata98], the $a$ spins are closer to each other than they are to the $b$ spins, so the exchange couplings satisfy $|J_{aa}| > |J_{ab}|$. In the classical dimer approximation described in Sec. \[s:heisenberg\], this corresponds to unequal weights for dimers on $ab$ bonds [*vs.*]{} those on $aa$ bonds, $|z_{aa}| > |z_{ab}|$. Aside from this intrinsic difference in bond weights, it may also be possible to apply anisotropic mechanical strain to vary the lattice geometry (and hence the exchange couplings and dimer weights) in different directions.
To obtain some insight into the behavior of the classical dimer model under these conditions, we write $z_{\alpha}=e^{-\beta\epsilon_{\alpha}}$, where $\beta=1/T$ is the inverse temperature and $\epsilon_\alpha$ is the potential energy for dimers on bond $\alpha$. We use the following parametrization for the potential energy on each bond: $$\begin{aligned}
\epsilon_1&=\epsilon_{ab} - \delta, \quad
\epsilon_2=
\epsilon_3=\epsilon_{ab}, \\
\epsilon_4&=\epsilon_{aa} - \delta, \quad
\epsilon_5=
\epsilon_6=\epsilon_{aa},
\end{aligned}$$ where $\delta$ is an orienting potential (i.e., an anisotropy parameter) which favors dimers in one direction. The bond occupation probabilities and entropy are independent of the values of $\epsilon_{ab}$ and $\epsilon_{aa}$, and depend smoothly upon $\beta\delta$ (see Fig. \[density-and-entropy\]): $$\begin{aligned}
p_1 &= \frac{1}{8} \left( 2 + \tanh \beta\delta \right), \label{e:p1} \\
p_4 &= \frac{1}{4} \left( 1 + \tanh \beta\delta \right), \\
p_2 &= p_3= \frac{1}{16} \left( 4 - \tanh \beta\delta \right), \\
p_5 &= p_6= \frac{1}{8} \left( 2 - \tanh \beta\delta \right), \label{e:p4} \\
s &= \frac{S}{N_\text{sites}} = \frac{1}{18} \left[
\ln \left(64\cosh^2\beta\delta\right)
- 2\beta\delta\tanh \beta\delta
\right] .
\end{aligned}$$ These results show that the TKL dimer model has neither a deconfinement transition (as a function of $\epsilon_{ab}-\epsilon_{aa}$) nor a Kasteleyn transition (as a function of $\delta$). It does, however, have a Curie-like “polarizability” with respect to an orienting potential. This is in contrast to the situation on the kagome lattice,[@wang:040105] where the bond occupation probabilities do not depend on the orienting potential.
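A quick numerical check of Eqs. \[e:p1\]–\[e:p4\] (our own sketch): the total dimer count per cell, $4(p_1+p_2+p_3)+2(p_4+p_5+p_6)$, remains $\frac{9}{2}$ at every $\beta\delta$, so the orienting potential only redistributes dimers among bond types; and the entropy decreases monotonically from $\frac{1}{3}\ln 2$ at $\beta\delta=0$ to $\frac{2}{9}\ln 2$ in the fully polarized limit (both limits follow directly from the formula for $s$ above).

```python
import math

def probs_and_entropy(bd):
    """Bond occupation probabilities and entropy per site for orienting
    potential strength beta*delta = bd, Eqs. (p1)-(p4) and s of the text."""
    t = math.tanh(bd)
    p1, p4 = (2 + t) / 8, (1 + t) / 4
    p2 = p3 = (4 - t) / 16
    p5 = p6 = (2 - t) / 8
    s = (math.log(64 * math.cosh(bd)**2) - 2 * bd * t) / 18
    return (p1, p2, p3, p4, p5, p6), s

for bd in (0.0, 0.5, 2.0, 10.0):
    p, s = probs_and_entropy(bd)
    dimers_per_cell = 4 * (p[0] + p[1] + p[2]) + 2 * (p[3] + p[4] + p[5])
    print(bd, dimers_per_cell, s)   # dimers_per_cell = 4.5 at every bd
```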
Results for symmetrical case
============================
In the absence of the orienting potential (i.e., $\delta=0$), the expressions for the bond occupation probabilities and entropy become very simple: $$\begin{aligned}
&p_\alpha = \frac{1}{4}, \qquad \alpha=1,2,3,4,5,6, \\
&s = \frac{1}{3} \ln 2.\end{aligned}$$ Note that these quantities are independent of the relative bond weights $z_{aa}$ and $z_{ab}$. The comparison with other lattices in Table \[comparison-table\] shows that the entropy per site for the TKL is the same as that for the kagome lattice. Although the two lattices are related, the equality of the entropies per site is in fact a coincidence, as the following argument shows. Consider the number of $b$-spins per unit cell which have a dimer that connects to a different unit cell. Because there is an odd number of sites per unit cell, this number must be odd, [*i.e.*]{} either $1$ or $3$. Since the $b$-spins themselves form a kagome lattice, the same is in fact true of the kagome lattice. The difference is that for a given pattern of external dimers connecting to $b$ spins, there is no further degeneracy in the kagome case, whereas for the TKL there are four different internal dimer patterns corresponding to any given pattern of external dimers connecting to the $b$ spins. This means that the TKL has a further $4$-fold degeneracy, so that the kagome entropy per unit cell of $s_{\rm cell} = \ln 2$ becomes an entropy per unit cell of $s_{\rm cell} = \ln 8 = 3 \ln 2$ in the TKL. Since there are $9$ spins per unit cell in the TKL, this yields $s=(1/3) \ln 2$ per site.
The total numbers of dimers on $a$–$a$ bonds and on $a$–$b$ bonds are $$\begin{aligned}
N_{aa}=\frac{1}{3}N_\text{dimers},
\label{e:Naa}
\\
N_{ab}=\frac{2}{3}N_\text{dimers},
\label{e:Nab}
\end{aligned}$$ where $N_\text{dimers}$ is the total number of dimers and $N_\text{dimers}=\frac{1}{2} N_\text{sites}$. (Of course, $N_{aa} = N_4+N_5+N_6$ and $N_{ab}=N_1+N_2+N_3$.) Note that because there are twice as many $a$–$b$ bonds in the lattice as there are $a$–$a$ bonds, this implies that the dimer density [*is the same on every bond*]{}, regardless of the weights of the bonds. Since the number of sites is twice the number of dimers in the close-packed case, $N_{\rm sites} = 2 N_{\rm dimers}$, there are on average $9/2$ dimers per unit cell. One third of those are on the $aa$ bonds, or $3/2$ per unit cell. Since there are six $aa$ bonds per cell, there are $(3/2)/6 = 1/4$ dimers per $aa$ bond. A similar analysis shows that there are $1/4$ dimers per $ab$ bond. In other words, there are $1/4$ dimers per bond, regardless of the relative weights $z_{aa}$ and $z_{ab}$, and regardless of whether it is an $aa$ or $ab$ bond. Under the constraint of close-packing, the dimer densities are set by [*geometry*]{}, rather than by energetics, similar to the case of classical dimers on the kagome lattice[@wang:040105; @wunderlich; @elser-1989; @elser-1993].
Our results for close-packed, classical dimers on the TKL are summarized in Table \[comparison-table\], along with known results for the corresponding properties on the square, honeycomb, triangular, and kagome lattices. Notice that the kagome and TKL are special in having simple, closed-form expressions for the entropies. In fact, the entropy per unit cell in each case is the logarithm of an integer. On the triangular lattice, as well as on the two bipartite lattices shown in the table (square and honeycomb), the entropy is not expressible as the logarithm of an integer.
Lattice Entropy Dimer corr. Monomer corr. Polarizability
-------------------------------------- ----------------------------- ----------------- ----------------------- ----------------------
Square[@fisher1963] $0.2915609$ $r^{-2}$ $r^{-1/2}$ finite
Honeycomb[@moessner2003] $0.161533$ $r^{-2}$ $r^{-1/2}$ Kasteleyn transition
Triangular[@fendley-2002] $0.4286$ $e^{-r/0.6014}$ const+$e^{-r/0.6014}$ finite
Kagome[@misguich-2002; @wang:040105] $\frac{1}{3}\ln 2=0.231049$ local deconfined 0
TKL $\frac{1}{3}\ln 2=0.231049$ local deconfined finite
: Properties of close-packed dimer models on various lattices. Entropies are quoted per site. “Local” means that the correlation function is exactly zero beyond a certain radius – it has “finite support”. The triangular, kagome, and triangular kagome lattices have deconfined monomers. The honeycomb dimer model not only has a finite dimer polarizability, but it has a Kasteleyn transition at $\delta=\delta_c$. The polarizability describes the changes in bond occupation probabilities induced by an orienting potential $\delta$. \[comparison-table\]
The square and honeycomb lattices, being bipartite, admit a mapping to a solid-on-solid model[@height-model], and therefore have power-law correlations for both the dimer-dimer correlations and the monomer-monomer correlations. In the corresponding quantum dimer models, these lattices do not support deconfined spinons. As conjectured in Ref. , the non-bipartite lattices have exponential (or faster) falloff of the dimer-dimer correlations. In the triangular, kagome, and TKL lattices, monomers are deconfined, which means that spinons are deconfined in the corresponding quantum dimer model at the RK point. In fact, Moessner and Sondhi showed that there is a finite region of parameter space in which a stable spin liquid [*phase*]{} is present on the triangular lattice.[@sondhi-triangular]
Bounds on the Ground State Energy of the Quantum Heisenberg Model\[s:heisenberg\]
=================================================================================
It is thought that the materials $\mbox{Cu}_{9}\mbox{X}_2(\mbox{cpa})_{6}\cdot x\mbox{H}_2\mbox{O}$ can be described in terms of quantum $S=1/2$ spins on the Cu atoms coupled by superexchange interactions. Nearest-neighbor isotropic antiferromagnetic couplings between $S=1/2$ spins on a 2D lattice with sublattice structure can lead to Néel order. For example, $2$-sublattice Néel order is favored on the square lattice, whereas $3$-sublattice Néel order is favored on the triangular lattice.[@huse-elser-1988] However, on the kagome lattice and the TKL, quantum fluctuations are much more severe, and there is a possibility that they may lead to alternative ground states (such as valence-bond liquids).
![(Color online) Comparison of upper bounds on the ground state energy per site of the quantum Heisenberg model on the TKL, obtained by considering various trial wavefunctions. In the figure, we have set $S=1/2$. []{data-label="f:energies"}](Energies.eps){width="1.1\columnwidth"}
A valence bond state is a direct product of singlet pair states. Using a fermionic representation for the spins, $$\begin{aligned}
\left| \Psi_{ \{ n \} } \right>
&=
\Bigg[
\prod_{{\left<ij\right>}}
\frac{1}{\sqrt{2}}
\left( c^\dag_{i{\uparrow}} c^\dag_{j{\downarrow}} - c^\dag_{i{\downarrow}} c^\dag_{j{\uparrow}} \right)
^ {n_{ij}}
\Bigg]
\left| \text{vacuum} \right>
\end{aligned}$$ where $n_{ij}=0$ or $1$ is the number of valence bonds on bond $ij$, just as in Eq. .
Consider a quantum Hamiltonian with isotropic antiferromagnetic Heisenberg interactions $$\begin{aligned}
\hat{H}
&= -
\sum_{{\left<ij\right>}}
J_{ij} {\hat{\mathbf{S}}}_i \cdot {\hat{\mathbf{S}}}_j,
\end{aligned}$$ where $J_{ij}$ is negative. The expectation value of this Hamiltonian in a valence-bond state is $$\begin{aligned}
\left< \Psi_{ \{ n \} } \right|
\hat{H}
\left| \Psi_{ \{ n \} } \right>
&=
-\frac{3}{4} \sum_{{\left<ij\right>}} n_{ij} |J_{ij}|~.
\end{aligned}$$ For close-packed dimers, the densities of valence bonds on $aa$ and $ab$ bonds are given by Eqs. and . Therefore, the total energy of the close-packed valence-bond “trial wavefunction” is $$\begin{aligned}
E_\text{VB}&=-\frac{3}{4} \big( N_{aa}|J_{aa}| + N_{ab}|J_{ab}| \big)\\
&=-\frac{1}{4} \big( |J_{aa}| + 2|J_{ab}| \big) N_\text{dimers}\\
&=-\frac{1}{8} \big( |J_{aa}| + 2|J_{ab}| \big) N_\text{sites}.
\label{e:close-packed-vb-energy}
\end{aligned}$$ This serves as an upper bound of the ground state energy of the quantum Heisenberg model. Of course, matrix elements of the Hamiltonian which connect one dimer covering to another can serve to lower the actual energy even further.
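The factor $-\frac{3}{4}$ per occupied bond is the singlet expectation value $\left<\hat{\mathbf{S}}_i\cdot\hat{\mathbf{S}}_j\right> = -\frac{3}{4}$ for two spins $\frac12$, which can be verified directly in the four-dimensional two-spin Hilbert space. The snippet below is an elementary check of this standard fact, not part of the paper's calculation.

```python
import numpy as np

# Spin-1/2 operators (Pauli matrices / 2) and the two-site S_i . S_j.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
S_dot_S = sum(np.kron(s, s) for s in (sx, sy, sz))

# Singlet (|up,down> - |down,up>)/sqrt(2) in the basis |00>,|01>,|10>,|11>.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
val = (singlet @ S_dot_S @ singlet).real
print(val)   # close to -0.75, so each dimer contributes -(3/4)|J_ij| to <H>
```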
One may also consider a more dilute dimer state. For large $|J_{aa}|$, one may expect dimers to preferentially occupy $a-a$ bonds, so that hexamers with three $a-b$ bonds are disallowed. In such a trial dimer state, the associated energy is $$E_{\rm dilute} = -{1 \over 6} (|J_{aa}| + |J_{ab}|)N_{sites}~.
\label{e:dilute-vb}$$ As shown in Fig. \[f:energies\], this upper bound to the ground state energy is lower than the others for large $|J_{aa}|$. If $J_{ab}$ is ferromagnetic and $J_{aa}$ is still antiferromagnetic, we expect another diluted dimer state, where dimers preferentially occupy $a-a$ bonds, and other spins tend to be aligned (ferromagnetic phase). The corresponding energy is $$E_{\rm dilute+FM}= - \left({1 \over 6}|J_{aa}| + {1 \over 9}|J_{ab}| \right)N_{sites}~.$$
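For orientation, the dimer-based bounds can be compared directly. From the formulas above, $E_{\rm dilute}$ drops below $E_{\rm VB}$ precisely when $|J_{aa}| > 2|J_{ab}|$, consistent with the statement that the dilute bound wins for large $|J_{aa}|$. A minimal sketch (ours) tabulating both bounds per site:

```python
def e_vb(J_aa, J_ab):
    """Close-packed valence-bond energy per site, Eq. (close-packed-vb-energy)."""
    return -(abs(J_aa) + 2 * abs(J_ab)) / 8

def e_dilute(J_aa, J_ab):
    """Dilute dimer-state energy per site, Eq. (dilute-vb)."""
    return -(abs(J_aa) + abs(J_ab)) / 6

# Crossover: e_dilute < e_vb  <=>  |J_aa| > 2|J_ab|; at |J_aa| = 2|J_ab|
# both bounds equal -(1/2)|J_ab| per site.
J_ab = 1.0
for J_aa in (-1.0, -2.0, -4.0):
    print(J_aa, e_vb(J_aa, J_ab), e_dilute(J_aa, J_ab))
```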
Other bounds can be obtained by considering the *classical* ground states of the Heisenberg model on the TKL (in which the spins are 3-vectors of magnitude $S=1/2$). In the materials of interest, there is not yet consensus whether the coupling $J_{ab}$ is ferromagnetic or antiferromagnetic. However, the Hamiltonian of the classical Heisenberg model is invariant under the transformation $S_b \rightarrow - S_b$ with $J_{ab} \rightarrow -J_{ab}$, so the thermodynamics are independent of the sign of $J_{ab}$.
![(Color online) Canted state of a hexamer of classical Heisenberg spins on the TKL. $\alpha$ and $\beta$ are the canting angles of the $a$- and $b$-spins from the vertical axis. When $\alpha=\beta=0$, this reduces to a collinear state (which is ferromagnetic or antiferromagnetic depending on the sign of $J_{ab}$). When $\alpha=\beta=\pi/2$, it reduces instead to a coplanar state, in which the spins are all at $\pi/3$ to each other. []{data-label="f:canted"}](canted.eps){width="0.6\columnwidth"}
![(Color online) Canting angles in the ground state of the classical Heisenberg model on the TKL as a function of the coupling ratio $J_{aa}/|J_{ab}|$. The thin line shows the canting angle $\alpha$ of the $a$-spins, and the thick line shows the canting angle $\beta$ of the $b$-spins, with respect to the collinear state, which is ferromagnetic or antiferromagnetic depending on the sign of $J_{ab}$. []{data-label="f:canting-angles"}](CantingAngles.eps){width="0.95\columnwidth"}
First, let us consider classical Heisenberg spins on a *single* hexamer. By direct minimization of the energy of a single hexamer, we find that its classical ground state may be either collinear, coplanar, or canted. For $J_{aa} > -|J_{ab}|/2$, the ground state is collinear; the $a$-spins are aligned with each other, the $b$-spins are aligned with each other, and the $a$- and $b$-spins are parallel if $J_{ab}$ is ferromagnetic or antiparallel if $J_{ab}$ is antiferromagnetic. For $J_{aa}<-|J_{ab}|$, the ground state is coplanar; the $a$-spins are at $120^\circ$ to each other, the $b$-spins are at $120^\circ$ to each other, and adjacent $a$- and $b$-spins are at $60^\circ$ if $J_{ab}$ is ferromagnetic or at $120^\circ$ if $J_{ab}$ is antiferromagnetic. At intermediate couplings, $-|J_{ab}| < J_{aa} < -|J_{ab}|/2$, the ground state is a canted state in which neither the $a$-spins nor the $b$-spins are coplanar; rather, each sublattice is canted away from Néel order, and each sublattice is canted away from the other. We define the canting angles of the $a$- and $b$-spins, $\alpha$ and $\beta$, such that $\alpha = \beta = 0$ in the collinear state (see Fig. \[f:canted\]). The canting angles evolve continuously from $0^\circ$ (collinear) to $90^\circ$ (coplanar) as a function of the coupling ratio $J_{aa}/|J_{ab}|$ (see Fig. \[f:canting-angles\]): the classical ground state has two continuous transitions. Now, we observe that each of these hexamer states can tile the TKL. Therefore, the ground state energy of each hexamer can be used to deduce the ground state energy of the entire system. In the collinear regime ($J_{aa} > -|J_{ab}|/2$), the collinear hexamer states lead to a unique global spin configuration (up to a global SU(2) rotation), so there is long-range ferromagnetic order (if $J_{ab}>0$) or ferrimagnetic order (if $J_{ab}<0$), and there is no macroscopic residual entropy.
The ground state energy of the system is $$E_{\rm collinear} = {1 \over 6}(|J_{aa}| - 2 |J_{ab}|)N_{sites}~.
\label{e:collinear}$$ In the coplanar regime ($J_{aa}<-|J_{ab}|$), there are infinitely many ways to tile the TKL with coplanar hexamer configurations (e.g., corresponding to 3-sublattice or 9-sublattice Néel order). Furthermore, there are an infinite number of zero modes (rotations of a few spins that cost zero energy). The ground state energy is $$E_{\rm coplanar} = -{1 \over 12} (|J_{aa}|+2 |J_{ab}|)N_{sites}~.
\label{e:coplanar}$$ The physics is essentially the same as that of the classical Heisenberg kagome model. For that model, the prevailing point of view[@chalker1992; @moessnerchalker1998; @reimers1993; @ritchey1993] is that *globally* coplanar configurations are selected at finite temperature via an order-by-disorder mechanism, and the spin chiralities develop nematic order; recently, Zhitomirsky[@zhitomirsky2008] has argued that there is an additional octupolar ordering which is, in fact, the true symmetry-breaking order parameter. The canted regime $-|J_{ab}| < J_{aa} < -|J_{ab}|/2$ has the interesting property that in general $\alpha \ne \beta$, so there is a net magnetic moment on each hexamer. We have found that there are still infinitely many ways to tile the TKL, and that there are still an infinite number of zero modes. It is possible that the zero modes cause the directions f the local moment to vary from place to place, destroying the long-range order with net magnetization; however, it is conceivable that the spin correlation length gradually increases towards infinity in going from the locally coplanar state to the collinear state. The energy of the canted state is $$\begin{aligned}
E_{\rm canted} &= {2 \over 9} \bigg( -{7 |J_{aa}|\over 4} + {5 J_{ab}^2 \over 8 |J_{aa}|}
\nonumber \\&
-|J_{ab}| \sqrt{\left(1 - {J_{aa}^2 / J_{ab}^2 }\right) \left({{J_{ab}}^2 / J_{aa}^2} - 1 \right) } \bigg)N_{sites}.
\label{e:canted}
\end{aligned}$$
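As a numerical consistency check, the three per-site energies above can be compared at the regime boundaries: with these expressions the collinear and canted energies agree at $J_{aa}=-|J_{ab}|/2$, and the canted and coplanar energies agree at $J_{aa}=-|J_{ab}|$, consistent with the two continuous transitions described above. A minimal sketch (transcribing the equations with $N_{sites}=1$):

```python
from math import sqrt, isclose

def e_collinear(Jaa, Jab):
    # per-site collinear energy, valid for J_aa > -|J_ab|/2
    return (abs(Jaa) - 2 * abs(Jab)) / 6.0

def e_coplanar(Jaa, Jab):
    # per-site coplanar energy, valid for J_aa < -|J_ab|
    return -(abs(Jaa) + 2 * abs(Jab)) / 12.0

def e_canted(Jaa, Jab):
    # per-site canted energy, valid for -|J_ab| < J_aa < -|J_ab|/2
    root = sqrt((1 - Jaa**2 / Jab**2) * (Jab**2 / Jaa**2 - 1))
    return (2.0 / 9.0) * (-7 * abs(Jaa) / 4
                          + 5 * Jab**2 / (8 * abs(Jaa))
                          - abs(Jab) * root)

Jab = 1.0
# collinear/canted boundary at J_aa = -|J_ab|/2: both give -1/4
assert isclose(e_collinear(-0.5, Jab), e_canted(-0.5, Jab))
# canted/coplanar boundary at J_aa = -|J_ab|: both give -1/4
assert isclose(e_coplanar(-1.0, Jab), e_canted(-1.0, Jab))
```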
Equations \[e:collinear\], \[e:coplanar\], and \[e:canted\] are the exact ground state energies for the classical Heisenberg model on the TKL. They serve as upper bounds on the ground state energy for the *quantum* Heisenberg model. Figure \[f:energies\] shows these upper bounds, plotted together with the upper bounds derived from dimer coverings, Eq. and , as explained earlier in this section. Notice that the upper bound for the ground state energy set by considering dimer configurations beats the classical ground states for $J_{aa}$ large and negative (antiferromagnetic). In this highly frustrated regime, we expect that the true ground state of the quantum Heisenberg model is significantly modified by quantum fluctuations from that of the classical case.
Conclusions \[conclusions\]
===========================
In conclusion, we have studied the close-packed dimer model on the triangular kagome lattice (TKL), using exact analytic methods. We find that (in the absence of an orienting potential) the entropy is $s=\frac{1}{3} \ln 2$ per site, regardless of the weights of the bonds, $z_{aa}$ and $z_{ab}$. The occupation probability of every bond is $p_\alpha=\frac{1}{4}$. The dimer-dimer correlation function vanishes identically beyond two lattice sites, faster than that in the triangular lattice, and similar to the falloff in the case of the kagome lattice.[@misguich-2002] The monomer-monomer correlation function is $M(r)=1/4$ for $r$ greater than two lattice constants, indicating that monomers are deconfined in this lattice. This implies that the Rokhsar-Kivelson point of the corresponding quantum dimer model is a short-ranged, deconfined spin liquid.
In addition, we find that the classical ground state of the Heisenberg model on the TKL is ferromagnetic (if $J_{ab}$ is ferromagnetic) or ferrimagnetic (if $J_{ab}$ is antiferromagnetic) when the coupling between $a$ spins on small trimers is large enough compared to the coupling between $a$ spins and $b$ spins, $J_{aa} > -|J_{ab}|/2$. For $J_{aa}<-|J_{ab}|$, the ground state of a single hexamer is a coplanar state, and the physics reduces to that of the classical Heisenberg kagome model.[@chalker1992; @moessnerchalker1998; @reimers1993; @ritchey1993] In between, there is a *canted* classical ground state in which the $a$ spins and $b$ spins within a hexamer both cant away from the coplanar state. Such a state does not arise in a simple model of frustrated magnetism on the kagome lattice. This type of canted ground state of the hexamer can tile the lattice, and therefore it is the building block of the classical ground state of the macroscopic system. There is a corresponding macroscopic degeneracy associated with the many ways in which this local hexamer ground state can tile the lattice. Each hexamer possesses a local moment; it is not yet clear whether the local magnetic moments from different hexamers cancel out due to the presence of zero modes.
Acknowledgments {#acknowledgments .unnumbered}
===============
It is a pleasure to thank M. Ma for helpful discussions. D. X. Y. acknowledges support from Purdue University. This work was also supported by Research Corporation (Y. L. L. and E. W. C.).
[10]{}
R. H. Fowler and G. S. Rushbrooke, Trans. Faraday Soc. [**33**]{}, 1272 (1937).
M. E. Fisher, Phys. Rev. [**124**]{}, 1664 (1961).
P. W. Kasteleyn, J. Math. Phys. [**4**]{}, 287 (1963).
P. W. Anderson, Mater. Res. Bull. [**8**]{}, 153 (1973).
D. S. Rokhsar and S. A. Kivelson, Phys. Rev. Lett. [**61**]{}, 2376 (1988).
R. Moessner and S. L. Sondhi, Phys. Rev. Lett. [**86**]{}, 1881 (2001).
D. A. Huse, W. Krauth, R. Moessner, and S. L. Sondhi, Phys. Rev. Lett. [ **91**]{}, 167004 (2003).
M. E. Fisher and J. Stephenson, Phys. Rev. [**132**]{}, 1411 (1963).
G. Misguich, D. Serban, and V. Pasquier, Phys. Rev. Lett. [**89**]{}, 137202 (2002).
P. Fendley, R. Moessner, and S. L. Sondhi, Phys. Rev. B [**66**]{}, 214513 (2002).
W. Krauth and R. Moessner, Phys. Rev. B [**67**]{}, 064503 (2003).
M. E. Fisher, J. Math. Phys. [**7**]{}, 1776 (1966).
R. Moessner and S. L. Sondhi, Phys. Rev. B [**68**]{}, 054405 (2003).
P. W. Anderson, Science [**235**]{}, 1196 (1987).
M. Gonzalez, F. Cervantes-Lee, and L. W. ter Haar, Mol. Cryst. Liq. Cryst. [ **233**]{}, 317 (1993).
S. Maruti and L. W. ter Haar, J. Appl. Phys. [**75**]{}, 5949 (1993).
M. Mekata, M. Abdulla, T. Asano, H. Kikuchi, T. Goto, T. Morishita, and H. Hori, J. Magn. Magn. Matt. [**177**]{}, 731 (1998).
Y. L. Loh, D. X. Yao, and E. W. Carlson, Phys. Rev. B [**77**]{}, 134402 (2008).
D. X. Yao, Y. L. Loh, E. W. Carlson, and M. Ma, Phys. Rev. B [**78**]{}, 052507 (2008).
F. Wang and F. Y. Wu, Phys. Rev. E [**75**]{}, 040105 (2007).
A. Läuchli and D. Poilblanc, Phys. Rev. Lett. [**92**]{}, 236404 (2004).
L. Balents, M. P. A. Fisher, and S. M. Girvin, Phys. Rev. B [**65**]{}, 224412 (2002).
A. J. Phares and F. J. Wunderlich, Nuovo Cimento Soc. Ital. Fis. [**101B**]{}, 653 (1988).
V. Elser, Phys. Rev. Lett. [**62**]{}, 2405 (1989).
V. Elser and C. Zeng, Phys. Rev. B [**48**]{}, 13647 (1993).
R. Moessner and S. L. Sondhi, Phys. Rev. B [**68**]{}, 064411 (2003).
H. W. J. Blote and H. J. Hilhorst, J. Phys. A [**15**]{}, L631 (1982).
D. A. Huse and V. Elser, Phys. Rev. Lett. [**60**]{}, 2531 (1988).
J. T. Chalker, P. C. W. Holdsworth, and E. F. Shender, Phys. Rev. Lett. [ **68**]{}, 855 (1992).
R. Moessner and J. T. Chalker, Phys. Rev. B [**58**]{}, 12049 (1998).
J. N. Reimers and A. J. Berlinsky, Phys. Rev. B [**48**]{}, 9539 (1993).
I. Ritchey, P. Chandra, and P. Coleman, Phys. Rev. B [**47**]{}, 15342 (1993).
M. E. Zhitomirsky, arXiv:0805.0676 (2008).
S. Sachdev, Rev. Mod. Phys. [**75**]{}, 913 (2003).
T. T. Wu, Journal of Mathematical Physics [**3**]{}, 1265 (1962).
G. Misguich, private communication.
[^1]: A topological sector is defined as the following. Draw a line through the system, without touching any site. For a given topological sector, the number of dimers which cross that line is invariant modulo 2 under local rearrangements of the dimer covering. See, [*e.g.*]{}, Ref. .
[^2]: Kasteleyn’s theorem may be generalized to allow complex phase factors in the weighted adjacency matrix: for a transition cycle passing through sites $1,2,\dotsc,2n$, the phase factors must satisfy $\eta_{12} \eta_{34} \dotso \eta_{2n-1,2n} =
-\eta_{23} \eta_{45} \dotso \eta_{2n,1}$. Complex phase factors provide a more elegant solution of the square lattice dimer model[@wu1962]. However, they do not help in the case of the kagome lattice[@wang:040105] or TKL; we have found that any orientation with the periodicity of the original lattice violates the generalized Kasteleyn theorem, even if the phase factors are allowed to be arbitrary complex numbers.
[^3]: The expression for $S$ does not simplify appreciably when $F$, etc., are substituted in.
[^4]: The long distance behavior can also be seen from the perspective of the quantum dimer model of Ref. , in that each monomer removes an Ising degree of freedom, since it merges two hexagons. Therefore each monomer removes half of the configurations.[@misguich-private]
|
---
abstract: 'The metric dimension of a graph $\Gamma$ is the least number of vertices in a set with the property that the list of distances from any vertex to those in the set uniquely identifies that vertex. We consider the jellyfish graph $JFG(n, m)$, defined as follows. If $n$ copies of $K_{1,m}$ and a cycle $C_n$ are joined by merging any vertex of $C_n$ with the maximum-degree vertex of $K_{1,m}$, then the resulting graph is called the jellyfish graph $JFG(n, m)$. In this paper, we find the metric dimension of the jellyfish graph $JFG(n, m)$, determine the cardinality $\psi(JFG(n, m))$ of its minimal doubly resolving sets, and compute its strong metric dimension. Moreover, we find the adjacency dimension of $JFG(n, m)$.'
address: 'Department of Mathematics, Faculty of Science, Payame Noor University, P.O. Box 19395-4697, Tehran, Iran'
author:
- Ali Zafari
bibliography:
- 'mybibfile.bib'
title: ' Metric dimension, minimal doubly resolving sets and the strong metric dimension of jellyfish graphs $JFG(n, m)$ '
---
Metric dimension; resolving set; doubly resolving; strong resolving; jellyfish graph. 05C75,05C35,05C12
Introduction {#sec:introduction}
============
In this paper we consider finite, simple, and connected graphs. The vertex and edge sets of a graph $\Gamma$ are denoted by $V(\Gamma)$ and $E(\Gamma)$, respectively. For $u, v \in V (\Gamma)$, the length of a shortest path from $u$ to $v$ is called the distance between $u$ and $v$ and is denoted by $d_{\Gamma}(u, v)$, or simply $d(u, v)$. The adjacency and non-adjacency relations are denoted by $\sim$ and $\nsim$, respectively. Metric dimension was first introduced in the 1970s, independently by Harary and Melter [@f-1] and by Slater [@o-1]. In recent years, a considerable literature has developed [@a-1-1]. This concept has different applications in the areas of network discovery and verification [@b-1], robot navigation [@g-1], chemistry [@e-1], and combinatorial optimization [@n-1]. A vertex $x\in V(\Gamma)$ is said to resolve a pair $u, v \in V(\Gamma)$ if $d_{\Gamma}(u, x)\neq d_{\Gamma}(v, x)$. For an ordered subset $W = \{w_1, w_2, ..., w_k\}$ of vertices in a connected graph $\Gamma$ and a vertex $v$ of $\Gamma$, the metric representation of $v$ with respect to $W$ is the $k$-vector $r(v | W) = (d(v, w_1), d(v, w_2), ..., d(v, w_k ))$. If every pair of distinct vertices of $\Gamma$ has different metric representations then the ordered set $W$ is called a resolving set of $\Gamma$. Indeed, the set $W$ is called a resolving set for $\Gamma$ if $r(u | W) = r(v | W)$ implies that $u = v$ for all pairs $u, v$ of vertices of $\Gamma$. A resolving set of minimum cardinality for a graph $\Gamma$ is called a minimum resolving set or a basis for $\Gamma$. The metric dimension $\beta(\Gamma)$ is the number of vertices in a basis for $\Gamma$. If $\beta(\Gamma)=k$, then $\Gamma$ is said to be $k$-dimensional. Chartrand et al. [@e-1] determined the bounds of the metric dimensions for any connected graphs and determined the metric dimensions of some well known families of graphs such as trees, paths, and complete graphs. 
Bounds on $\beta(\Gamma)$ are presented in terms of the order and the diameter of $\Gamma$. All connected graphs of order $n$ having metric dimension $1, n-1$, or $n-2$ are determined. Notice, for each connected graph $\Gamma$ and each ordered set $W = \{w_1, w_2, ..., w_k\}$ of vertices of $\Gamma$, that the $i^{th}$ coordinate of $r(w_i | W)$ is $0$ and that the $i^{th}$ coordinate of all other vertex representations is positive. Thus, certainly $r(u | W) = r(v | W)$ implies that $u = v$ for $u\in W$. Therefore, when testing whether an ordered subset $W$ of $V(\Gamma)$ is a resolving set for $\Gamma$, we need only be concerned with the vertices of $V(\Gamma)-W$.
Cáceres et al. [@c-1] define the notion of a doubly resolving set as follows. Vertices $x, y$ of the graph $\Gamma$ of order at least 2, are said to doubly resolve vertices $u, v$ of $\Gamma$ if $d(u, x) - d(u, y) \neq d(v, x) - d(v, y)$. A set $Z = \{z_1, z_2, ..., z_l\}$ of vertices of $\Gamma$ is a doubly resolving set of $\Gamma$ if every two distinct vertices of $\Gamma$ are doubly resolved by some two vertices of $Z$. A minimal doubly resolving set is a doubly resolving set of minimum cardinality. The cardinality of a minimum doubly resolving set is denoted by $\psi(\Gamma)$. The minimal doubly resolving sets for Hamming and Prism graphs have been obtained in [@i-1] and [@d-1], respectively. Other researchers [@a-1] determined the minimal doubly resolving sets for the necklace graph. If $x, y$ doubly resolve $u, v$, then $d(u, x) - d(v, x) \neq 0$ or $d(u, y) - d(v, y) \neq 0$, and hence $x$ or $y$ resolves $u, v$. Therefore, a doubly resolving set is also a resolving set and $\beta(\Gamma) \leq\psi(\Gamma)$.
The strong metric dimension problem was introduced by A. Sebö and E. Tannier [@n-1] and further investigated by O. R. Oellermann and J. Peters-Fransen [@l-1]. Recently, the strong metric dimension of distance hereditary graphs has been studied by T. May and O. R. Oellermann [@j-1]. A vertex $w$ strongly resolves two vertices $u$ and $v$ if $u$ belongs to a shortest $v - w$ path or $v$ belongs to a shortest $u - w$ path. A set $N= \{n_1, n_2, ..., n_m\}$ of vertices of $\Gamma$ is a strong resolving set of $\Gamma$ if every two distinct vertices of $\Gamma$ are strongly resolved by some vertex of $N$. A strong resolving set of smallest cardinality is called a strong metric basis of $\Gamma$. The strong metric dimension of a graph $\Gamma$ is defined as the cardinality of a strong metric basis, denoted by $sdim(\Gamma)$. It is easy to see that if a vertex $w$ strongly resolves vertices $u$ and $v$ then $w$ also resolves these vertices. Hence every strong resolving set is a resolving set and $\beta(\Gamma) \leq sdim(\Gamma)$.
All three previously defined problems are NP-hard in the general case. The proofs of NP-hardness are given for the metric dimension problem in [@g-1], for the minimal doubly resolving set problem in [@h-1] and for the strong metric dimension problem in [@l-1]. Intrinsic metrics on a graph have become of interest, as generally discussed in [@f-1-1b; @i-1; @k-1-1; @k-1-2; @m-1; @z-1], for instance. An interesting family of graphs of order $nm+n$ is defined as follows. If $n$ copies of $K_{1,m}$ and a cycle $C_n$ are joined by merging any vertex of $C_n$ to the vertex with maximum degree of $K_{1,m}$, then the resulting graph is called the jellyfish graph $JFG(n, m)$ with parameters $m$ and $n$. In particular, if $n$ is an even integer then the jellyfish graph $JFG(n, m)$ is a bipartite graph. In this paper, we first find the metric dimension of the jellyfish graph $JFG(n, m)$; in fact, we prove that if $n\geq3$ and $m\geq2$ then the metric dimension of $JFG(n, m)$ is $nm-n$. We then determine the cardinality $\psi(JFG(n, m))$ of minimal doubly resolving sets of $JFG(n, m)$ and the strong metric dimension of $JFG(n, m)$. Moreover, we find the adjacency dimension of $JFG(n, m)$.
Definitions And Preliminaries
=============================
[@f-1-1] \[b.1\] Let $\Gamma$ be a graph, and let $W = \{w_1, ... ,w_k\} \subseteq V(\Gamma)$. For each vertex $v \in V(\Gamma)$, the adjacency representation of $v$ with respect to $W$ is the $k$-vector $$\hat{r}(v|W) = (a_\Gamma(v,w_1), ..., a_\Gamma(v,w_k)),$$ where $$a_\Gamma(v, w_i)= \left\{
\begin{array}{lr}
0 \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\ if \,\ v=w_i, \\
1 \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\ if \,\ v\sim w_i, \\
2 \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\ if \,\ v\nsim w_i. \\
\end{array} \right.$$ The set $W$ is an adjacency resolving set for $\Gamma$ if the vectors $\hat{r}(v|W)$ for $v \in V (\Gamma)$ are distinct. The minimum cardinality of an adjacency resolving set is the adjacency dimension of $\Gamma$, denoted by $\hat{\beta}(\Gamma)$. An adjacency resolving set of cardinality $\hat{\beta}(\Gamma)$ is an adjacency basis of $\Gamma$.
[@f-1-1]\[b.2\] Let $\Gamma$ be a graph of order $n$.
1\) If $diam(\Gamma) = 2$, then $\hat{\beta}(\Gamma)=\beta(\Gamma)$.
2\) If $\Gamma$ is connected, then $\beta(\Gamma)\leq \hat{\beta}(\Gamma)$.
3\) $1\leq \hat{\beta}(\Gamma) \leq n-1$.
Main results
============
\[f.1\] Let $n, m$ be integers such that $n\geq 3$, $m\geq 2$. Then the metric dimension of jellyfish graph $JFG(n, m)$ is $nm-n$.
Let $V(JFG(n, m))=V_1 \cup V_2$, where $V_1=\{1,2, ..., n\}$, $V_2=\{A_{1j}, A_{2j}, ..., A_{nj}\}$, and let $A_{ij}=\cup_{j=1} ^ m v_{ij}$, $1\leq i \leq n$. Suppose that every vertex $i\in V_1$ is adjacent to vertices $v_{i1}, v_{i2}, ..., v_{im}\in A_{ij}\subset V_2$. We can show that the diameter of jellyfish graph $JFG(n, m)$ is $[\frac{n}{2}]+2$. In the following cases, we show that the metric dimension of jellyfish graph $JFG(n, m)$ is $nm-n$.
Case 1. Let $W$ be an ordered subset of $V_1$ in the jellyfish graph $JFG(n, m)$ such that $|W|\leq n$. It is easy to see that if $|W|< n$ then $W$ is not a resolving set of jellyfish graph $JFG(n, m)$. In particular, if $|W|= n$ then we show that $W$ is not a resolving set of jellyfish graph $JFG(n, m)$. Without loss of generality one can assume that an ordered subset of vertices in jellyfish graph $JFG(n, m)$ is $W=\{1,2, ..., n\}$. Hence, $V(JFG(n, m))- W=\{A_{1j}, A_{2j}, ..., A_{nj}\}$. Therefore, the metric representation of the vertices $v_{11}, v_{12}, ..., v_{1m}\in A_{1j}$ with respect to $W$ is the same $n$-vector. Thus, $W$ is not a resolving set of jellyfish graph $JFG(n, m)$.
Case 2. Let $W$ be an ordered subset of $V_2$ in the jellyfish graph $JFG(n, m)$ such that $W=\{A_{2j}, A_{3j}, ..., A_{nj}\}$. Hence, $V(JFG(n, m))- W=\{1, 2, ..., n, A_{1j}\}$. We know that $|W|= nm-m$. So, the metric representation of the vertices $v_{11}, v_{12}, ..., v_{1m}\in A_{1j}$ with respect to $W$ is the same $(nm-m)$-vector. Therefore, $W$ is not a resolving set of jellyfish graph $JFG(n, m)$.
Case 3. Let $W$ be an ordered subset of $V_2$ in the jellyfish graph $JFG(n, m)$ such that $W=\{A_{1j}, A_{2j}, A_{3j}, ..., A_{nj}-\{v_{n1} , v_{n2}\}\}$. Hence, $V(JFG(n, m))- W=\{1, 2, ..., n, v_{n1}, v_{n2} \}$. We know that $|W|= nm-2$. So, the metric representation of the vertices $v_{n1}, v_{n2}\in A_{nj}$ with respect to $W$ is the same $(nm-2)$-vector. Therefore, $W$ is not a resolving set of jellyfish graph $JFG(n, m)$.
Case 4. Let $W$ be an ordered subset of $V_2$ in the jellyfish graph $JFG(n, m)$ such that $|W|= nm-1$. We show that $W$ is a resolving set of jellyfish graph $JFG(n, m)$. Without loss of generality one can assume that an ordered subset is $W=\{A_{1j}, A_{2j}, ..., A_{nj}-v_{nm}\}$. Hence, $V(JFG(n, m))- W=\{1, 2, ..., n, v_{nm}\}$. We can show that all the vertices $1, 2, ..., n, v_{nm}\in V(JFG(n, m))-W$ have different representations with respect to $W$. Because, for every $k\in V(JFG(n, m))- W$, $1\leq k\leq n$ and $v_{ij}\in A_{ij}$, $1\leq i \leq n$, $1\leq j \leq m$, if $k=i$ then we have $d(k, v_{ij})=1$, otherwise $d(k, v_{ij})>1$. Also, for the vertex $v_{nm} \in V(JFG(n, m))- W$ with $v_{nm}\neq v_{ij}\in A_{ij}$, $1\leq i \leq n$, $1\leq j \leq m$, if $i=n$ then we have $d(v_{nm}, v_{ij})=2$, otherwise $d(v_{nm}, v_{ij})>2$. Therefore, all the vertices $1, 2, ..., n, v_{nm}\in V(JFG(n, m))-W$ have different representations with respect to $W$. This implies that $W$ is a resolving set of jellyfish graph $JFG(n, m)$.
Case 5. Let $W$ be an ordered subset of $V_2$ in the jellyfish graph $JFG(n, m)$ such that $W=\{A_{1j}-v_{1m}, A_{2j}-v_{2m}, ..., A_{nj}-v_{nm}\}$. Hence, $V(JFG(n, m))- W=\{1, 2, ..., n, v_{1m}, v_{2m}, ..., v_{nm}\}$. We know that $|W|= nm-n$. In a similar fashion which is done in Case 4, we can show that all the vertices $1, 2, ..., n, v_{1m}, v_{2m}, ..., v_{nm}\in V(JFG(n, m))-W$ have different representations with respect to $W$. This implies that $W$ is a resolving set of jellyfish graph $JFG(n, m)$.
Case 6. In particular, let $W$ be an ordered subset of $V_2$ in the jellyfish graph $JFG(n, m)$ such that $|W|= nm$. We show that $W$ is a resolving set of jellyfish graph $JFG(n, m)$. Without loss of generality one can assume that an ordered subset is $W=\{A_{1j}, A_{2j}, ..., A_{nj}\}$. Hence $V(JFG(n, m))- W=\{1, 2, ..., n\}$. We can show that all the vertices $1, 2, ..., n\in V(JFG(n, m))-W$ have different representations with respect to $W$. This implies that $W$ is a resolving set of jellyfish graph $JFG(n, m)$.
From the above cases, we now conclude that the minimum cardinality of a resolving set of the jellyfish graph $JFG(n, m)$ is $nm-n$.
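For small parameters, the value $\beta(JFG(n, m)) = nm-n$ of Theorem \[f.1\] can be confirmed by exhaustive search. The sketch below is our own illustration (not part of the proof; pure Python, feasible only for tiny $n$ and $m$): it builds $JFG(n, m)$ with hubs $0, \dots, n-1$ on the cycle $C_n$ and leaf $(i, j)$ attached to hub $i$, then brute-forces the smallest resolving set.

```python
from collections import deque
from itertools import combinations

def jellyfish(n, m):
    """JFG(n, m): hubs 0..n-1 on a cycle C_n; leaf (i, j) attached
    to hub i for j = 0..m-1 (each hub is the center of a K_{1,m})."""
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        for j in range(m):
            adj[(i, j)] = {i}
            adj[i].add((i, j))
    return adj

def bfs(adj, s):
    # distances from s to all vertices
    d, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def metric_dimension(adj):
    dist = {v: bfs(adj, v) for v in adj}
    V = sorted(adj, key=str)
    for k in range(1, len(V) + 1):
        for W in combinations(V, k):
            # W is resolving iff all distance vectors are distinct
            if len({tuple(dist[v][w] for w in W) for v in V}) == len(V):
                return k

for n, m in [(3, 2), (4, 2), (3, 3)]:
    assert metric_dimension(jellyfish(n, m)) == n * m - n
```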
\[f.4-1\] Let $n, m$ be integers such that $n\geq 3$, $m\geq 2$. Then the subset $Z=\{A_{1j}-v_{1m}, A_{2j}-v_{2m}, ..., A_{nj}-v_{nm}\}$ of vertices in the jellyfish graph $JFG(n, m)$ is not a doubly resolving set of jellyfish graph $JFG(n, m)$.
We know that an ordered subset $Z=\{A_{1j}-v_{1m}, A_{2j}-v_{2m}, ..., A_{nj}-v_{nm}\}$ of vertices in the jellyfish graph $JFG(n, m)$ is a resolving set of jellyfish graph $JFG(n, m)$ of size $nm-n$. Also by Theorem \[f.1\], the metric dimension of jellyfish graph $JFG(n, m)$ is $\beta(JFG(n, m))=nm-n$. Moreover, $\beta(JFG(n, m))\leq \psi(JFG(n, m))$. We show that the subset $Z=\{A_{1j}-v_{1m}, A_{2j}-v_{2m}, ..., A_{nj}-v_{nm}\}$ of vertices in jellyfish graph $JFG(n, m)$ is not a doubly resolving set of jellyfish graph $JFG(n, m)$. Indeed, if $u=v_{im}$ and $v=i$, $1\leq i\leq n$, then for every $x,y\in Z$, we have $d(u, x) - d(u, y) = d(v, x) - d(v, y)$.
\[f.4-2\] Let $n, m$ be integers such that $n\geq 3$, $m\geq 2$. Then the subset $Z=\{A_{1j}, A_{2j}, ..., A_{nj}-v_{nm}\}$ of vertices in the jellyfish graph $JFG(n, m)$ is not a doubly resolving set of jellyfish graph $JFG(n, m)$.
We show that the subset $Z=\{A_{1j}, A_{2j}, ..., A_{nj}-v_{nm}\}$ of vertices in the jellyfish graph $JFG(n, m)$ is not a doubly resolving set of jellyfish graph $JFG(n, m)$. Indeed, if $u=v_{nm}$ and $v=n$, then for every $x,y\in Z$, we have $d(u, x) - d(u, y) = d(v, x) - d(v, y)$.
\[f.4\] Let $n, m$ be integers such that $n\geq 3$, $m\geq 2$. Then the cardinality of minimum doubly resolving set of jellyfish graph $JFG(n, m)$ is $nm$.
Let $V(JFG(n, m))=V_1 \cup V_2$, where $V_1=\{1,2, ..., n\}$, $V_2=\{A_{1j}, A_{2j}, ..., A_{nj}\}$, and let $A_{ij}=\cup_{j=1} ^ m v_{ij}$, $1\leq i \leq n$. Suppose that every vertex $i\in V_1$ is adjacent to vertices $v_{i1}, v_{i2}, ..., v_{im}\in A_{ij}\subset V_2$. We know that an ordered subset $Z=\{A_{1j}, A_{2j}, ..., A_{nj}\}$ of vertices in the jellyfish graph $JFG(n, m)$ is a resolving set of jellyfish graph $JFG(n, m)$ of size $nm$. Also by Theorem \[f.1\], the metric dimension of jellyfish graph $JFG(n, m)$ is $\beta(JFG(n, m))=nm-n$. Moreover, $\beta(JFG(n, m))\leq \psi(JFG(n, m))$. We show that the subset $Z=\{A_{1j}, A_{2j}, ..., A_{nj}\}$ of vertices in jellyfish graph $JFG(n, m)$ is a doubly resolving set of jellyfish graph $JFG(n, m)$. It is sufficient to show that for two vertices $u$ and $v$ of jellyfish graph $JFG(n, m)$ there are vertices $x, y \in Z$ such that $d(u, x) - d(u, y) \neq d(v, x) - d(v, y) $. Consider two vertices $u$ and $v$ of jellyfish graph $JFG(n, m)$. Then we have the following:
Case 1. Let $u\notin Z$ and $v\notin Z$. Hence, $u,v\in V_1=\{1, 2, ..., n\}$. We can assume without loss of generality that $u=i$ and $v=j$, $1\leq i, j\leq n$ and $i\neq j$. Therefore, if $x=v_{i1}$ and $y=v_{j1}$, then we have $d(u, x) - d(u, y) \neq d(v, x) - d(v, y)$, because $d(u, x) - d(u, y)<0$ and $d(v, x) - d(v, y)>0$.
Case 2. Let $u\in Z$ and $v\in Z$. Hence, $u,v\in V_2=\{A_{1j}, A_{2j}, ..., A_{nj}\}$. Therefore, if $x=u$ and $y=v$, then we have $d(u, x) - d(u, y) \neq d(v, x) - d(v, y)$, because $d(u, x) - d(u, y)<0$ and $d(v, x) - d(v, y)>0$.
Case 3. Finally, let $u\notin Z$ and $v\in Z$. Hence, $u\in V_1=\{1, 2, ..., n\} $ and $v\in V_2=\{A_{1j}, A_{2j}, ..., A_{nj}\}$. We can assume without loss of generality that $u=k$, $1\leq k\leq n$ and $v=v_{11}\in A_{11}$. Therefore, if $x=v_{k2}$ and $y=v_{11}$, then we have $d(u, x) - d(u, y) \neq d(v, x) - d(v, y)$, because $d(u, x) - d(u, y)\leq0$ and $d(v, x) - d(v, y)>0$.
Thus, by Lemma \[f.4-1\], Lemma \[f.4-2\] and the above cases we now conclude that the cardinality of a minimum doubly resolving set of jellyfish graph $JFG(n, m)$ is $nm$.
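The value $\psi(JFG(n, m)) = nm$ can likewise be checked exhaustively for small parameters. The sketch below is our own illustration; it uses the equivalent criterion that $Z$ doubly resolves $u, v$ iff the differences $d(u, z) - d(v, z)$, $z\in Z$, are not all equal.

```python
from collections import deque
from itertools import combinations

def jellyfish(n, m):
    # hubs 0..n-1 on C_n; leaf (i, j) attached to hub i
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        for j in range(m):
            adj[(i, j)] = {i}
            adj[i].add((i, j))
    return adj

def bfs(adj, s):
    d, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def psi(adj):
    """Smallest cardinality of a doubly resolving set (brute force)."""
    dist = {v: bfs(adj, v) for v in adj}
    V = sorted(adj, key=str)
    for k in range(2, len(V) + 1):
        for Z in combinations(V, k):
            # Z doubly resolves u, v iff d(u, z) - d(v, z) is not
            # constant over z in Z
            if all(len({dist[u][z] - dist[v][z] for z in Z}) > 1
                   for u, v in combinations(V, 2)):
                return k

n, m = 3, 2
assert psi(jellyfish(n, m)) == n * m   # = 6
```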
\[f.5\] Let $n, m$ be integers such that $n\geq 3$, $m\geq 2$. Then the subset $N=\{A_{1j}-v_{1m}, A_{2j}-v_{2m}, ..., A_{nj}-v_{nm}\}$ of vertices in the jellyfish graph $JFG(n, m)$ is not a strong resolving set of jellyfish graph $JFG(n, m)$.
Let $M=V_2-N=\{v_{1m}, v_{2m}, ..., v_{nm}\}$, where $V_2$ is the set which is defined already. It is not hard to see that for every two distinct vertices $u, v\in M$ there is no vertex $w\in N$ such that $u$ belongs to a shortest $v - w$ path or $v$ belongs to a shortest $u - w$ path. So, the subset $N=\{A_{1j}-v_{1m}, A_{2j}-v_{2m}, ..., A_{nj}-v_{nm}\}$ of vertices in jellyfish graph $JFG(n, m)$ is not a strong resolving set of jellyfish graph $JFG(n, m)$. We conclude that if $N$ is a strong resolving set of jellyfish graph $JFG(n, m)$ then $|N|\geq nm-1$, because $|M|$ must be less than $2$.
\[f.6\] Let $n, m$ be integers such that $n\geq 3$, $m\geq 2$. Then the strong metric dimension of jellyfish graph $JFG(n, m)$ is $nm-1$.
By Lemma \[f.5\] we know that if $N$ is a strong resolving set of the jellyfish graph $JFG(n, m)$ then $|N|\geq nm-1$. We show that the subset $N=\{A_{1j}, A_{2j}, ..., A_{nj}-v_{nm}\}$ of vertices in jellyfish graph $JFG(n, m)$ is a strong resolving set of jellyfish graph $JFG(n, m)$. It is sufficient to prove that every two distinct vertices $u,v \in V(JFG(n, m))-N=\{1,2, ...,n, v_{nm}\}$ are strongly resolved by a vertex $w\in N$. In the following cases we show that the strong metric dimension of jellyfish graph $JFG(n, m)$ is $nm-1$.
Case 1. Let $u$ and $v$ be two distinct vertices in $V(JFG(n, m))-N$ such that $u, v\in V_1=\{1, 2, ..., n\}$. So, there is $i, j\in V_1$ such that $u=i$ and $v=j$. Therefore $i$ and $j$ will be strongly resolved by some $v_{ij}\in A_{ij}$, because $i$ and $v_{ij}$ are adjacent, and hence $i$ belongs to a shortest $v_{ij} - j$ path.
Case 2. Now, let $u$ and $v$ be two distinct vertices in $V(JFG(n, m))-N$ such that $u\in V_1=\{1, 2, ..., n\}$ and $v=v_{nm}$. Without loss of generality we may assume $u=i$, where $i\in V_1$. Therefore $i$ and $v_{nm}$ will be strongly resolved by some $v_{ij}\in A_{ij}$, because $i$ and $v_{ij}$ are adjacent, and hence $i$ belongs to a shortest $v_{ij} -v_{nm}$ path.
From the above cases, we now conclude that the strong metric dimension of the jellyfish graph $JFG(n, m)$ is $nm-1$.
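Theorem \[f.6\] can also be verified exhaustively for small parameters, using the definition directly: $w$ strongly resolves $u, v$ iff $d(v, w) = d(v, u) + d(u, w)$ or $d(u, w) = d(u, v) + d(v, w)$. A minimal sketch (our own illustration, feasible only for tiny $n$ and $m$):

```python
from collections import deque
from itertools import combinations

def jellyfish(n, m):
    # hubs 0..n-1 on C_n; leaf (i, j) attached to hub i
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        for j in range(m):
            adj[(i, j)] = {i}
            adj[i].add((i, j))
    return adj

def bfs(adj, s):
    d, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def sdim(adj):
    """Strong metric dimension by brute force."""
    dist = {v: bfs(adj, v) for v in adj}
    V = sorted(adj, key=str)

    def strong(u, v, w):
        # u lies on a shortest v-w path, or v on a shortest u-w path
        return (dist[v][w] == dist[v][u] + dist[u][w]
                or dist[u][w] == dist[u][v] + dist[v][w])

    for k in range(1, len(V) + 1):
        for N in combinations(V, k):
            if all(any(strong(u, v, w) for w in N)
                   for u, v in combinations(V, 2)):
                return k

n, m = 3, 2
assert sdim(jellyfish(n, m)) == n * m - 1   # = 5
```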
\[f.7\] Let $n, m$ be integers such that $n\geq 3$, $m\geq 2$. Then an ordered subset $W=\{A_{1j}-v_{1m}, A_{2j}-v_{2m}, ..., A_{nj}-v_{nm}\}$ of vertices in the jellyfish graph $JFG(n, m)$ is not the adjacency resolving set of jellyfish graph $JFG(n, m)$.
Let $M=V_2-W=\{v_{1m}, v_{2m}, ..., v_{nm}\}$, where $V_2$ is the set which is defined already. Thus, the adjacency representation of the vertices $v_{1m}, v_{2m}, ..., v_{nm}\in V(JFG(n, m))-W$ with respect to $W$ is the $(nm-n)$-vector $\hat{r}(v_{1m}|W) =\hat{r}(v_{2m}|W)=...=\hat{r}(v_{nm}|W)= (2, 2, ..., 2)$. Because, for every vertex $w \in W$ we have $a_\Gamma(w, v_{1m}) = a_\Gamma(w, v_{2m}) = ...=a_\Gamma(w, v_{nm})=2$. We conclude that if $W$ is an adjacency resolving set of jellyfish graph $JFG(n, m)$ then $|W|\geq nm-1$, because $|M|$ must be less than $2$.
\[f.8\] Let $n, m$ be integers such that $n\geq 3$, $m\geq 2$. Then the adjacency dimension of jellyfish graph $JFG(n, m)$ is $nm-1$.
By Lemma \[f.7\] we know that if $W$ is an adjacency resolving set of the jellyfish graph $JFG(n, m)$ then $|W|\geq nm-1$. Let $W$ be an ordered subset of $V_2$ in jellyfish graph $JFG(n, m)$ such that $|W|= nm-1$, where $V_2$ is the set which is defined already. We show that $W$ is an adjacency resolving set of jellyfish graph $JFG(n, m)$. Without loss of generality one can assume that an ordered subset is $W=\{A_{1j}, A_{2j}, ..., A_{nj}-v_{nm}\}$. Hence $V(JFG(n, m))- W=\{1, 2, ..., n, v_{nm}\}$. We can show that all the vertices $1, 2, ..., n, v_{nm}\in V(JFG(n, m))-W$ have different adjacency representations with respect to $W$. Because, for every $k\in V(JFG(n, m))- W$, $1\leq k\leq n$ and $ v_{ij}\in A_{ij}$, $1\leq i \leq n$, $1\leq j \leq m$, if $k=i$ then we have $a_\Gamma(k, v_{ij})=1$, otherwise $a_\Gamma(k, v_{ij})=2$. Also, for the vertex $v_{nm} \in V(JFG(n, m))- W$ with $v_{nm}\neq v_{ij}\in A_{ij}$, $1\leq i \leq n$, $1\leq j \leq m$, we have $a_\Gamma(v_{nm}, v_{ij})=2$. Therefore, all the vertices $1, 2, ..., n, v_{nm}\in V(JFG(n, m))-W$ have different adjacency representations with respect to $W$. This implies that $W$ is an adjacency resolving set of the jellyfish graph $JFG(n, m)$. We now conclude that the minimum cardinality of the adjacency resolving set of jellyfish graph $JFG(n, m)$ is $nm-1$.
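As with the other invariants, the value $\hat{\beta}(JFG(n, m)) = nm-1$ can be confirmed by exhaustive search for small parameters; here only the adjacency information $a_\Gamma(v, w)\in\{0,1,2\}$ of Definition \[b.1\] is used. A minimal sketch (our own illustration):

```python
from itertools import combinations

def jellyfish(n, m):
    # hubs 0..n-1 on C_n; leaf (i, j) attached to hub i
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        for j in range(m):
            adj[(i, j)] = {i}
            adj[i].add((i, j))
    return adj

def adjacency_dimension(adj):
    """Adjacency dimension by brute force."""
    V = sorted(adj, key=str)

    def a(v, w):
        # 0 if v = w, 1 if v ~ w, 2 otherwise
        return 0 if v == w else (1 if w in adj[v] else 2)

    for k in range(1, len(V) + 1):
        for W in combinations(V, k):
            # W is adjacency resolving iff all vectors are distinct
            if len({tuple(a(v, w) for w in W) for v in V}) == len(V):
                return k

n, m = 3, 2
assert adjacency_dimension(jellyfish(n, m)) == n * m - 1   # = 5
```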
[5]{} , *Minimal doubly resolving sets of necklace graph*, *MATH. REPORTS*, **20(70)** (2018), 123-129. , *Base size, metric dimension and other invariants of groups and graphs*, *Bull. London Math. Soc*. **43** (2011), 209-242. , *Network discovery and verification*, *IEEE J. Sel. Area. Commun*. **24** (2006), 2168-2181. , *On the metric dimension of Cartesian products of graphs*, *SIAM J. Discrete Math*. **21(2)** (2007), 423-441.
, *Minimal doubly resolving sets of prism graphs*, *Optimization*. **62** (2013), 1037-1043. , *Resolvability in graphs and the metric dimension of a graph*, *Discrete Appl. Math*. **105** (2000), 99-113. , *On the metric dimension of a graph*, *Ars Combin*. **2** (1976), 191-195. , *The metric dimension of graph with pendant edges*, *J. Combin. Math. Combin. Comput*. **65** (2008), 139-146. , *The metric dimension of the lexicographic product of graphs*, *Discrete Mathematics*. **312(22)** (2012), 3349-3356. , *Landmarks in graphs*, *Discrete Appl. Math*. **70 (3)** (1996), 217-229. , *Computing minimal doubly resolving sets of graphs*, *Comput. Oper. Res*. **36(7)** (2009), 2149-2159. , *Minimal doubly resolving sets and the strong metric dimension of Hamming graphs*, *Appl. Anal. Discrete Math*. **6** (2012), 63-71. , *The strong metric dimension of distance hereditary graphs*, *J. Combin. Math. Combin. Comput*. In press. , *Computing Metric Dimension of Certain Families of Toeplitz Graphs*, *IEEE Access*. **4** (2019), 1-8. , *Resolvability and fault-tolerant resolvability structures of convex polytopes*, *Theoretical Computer Science*. In Press.
, *The strong metric dimension of graphs and digraphs*, *Discrete Appl. Math*. **155** (2007), 356-364. , *The metric dimension of the lexicographic product of graphs*, *Discrete Math*. **313** (2013), 1045-1051. , *On metric generators of graphs*, *Math. Oper. Res*. **29 (2)** (2004), 383-393. , *Leaves of trees*, *Congr. Numer*. **14** (1975), 549-559. , *Minimal doubly resolving sets and the strong metric dimension for some classes of graphs*, to appear in arxive.
---
abstract: 'I discuss possible implications that a symmetry relating gravity with antigravity might have for smoothing out the cosmological constant puzzle. For this purpose, a very simple model with spontaneous symmetry breaking is explored, based on Einstein-Hilbert gravity with two self-interacting scalar fields. The second (exotic) scalar particle, with negative energy density, could alternatively be interpreted as an antigravitating particle with positive energy.'
author:
- Israel Quiros
title: 'Symmetry relating Gravity with Antigravity: A possible resolution of the Cosmological Constant Problem?'
---
One of the most profound mysteries in fundamental physics is the cosmological constant problem (see references [@lambdap] for recent reviews on this subject). Two faces of this problem have recently been distinguished: 1) Why is the cosmological constant so small? and 2) Why is it comparable to the critical density of the Universe precisely at present? In the present letter I will focus on point 1) of the problem, leaving point 2) for further research.
The physical basis for the cosmological constant $\Lambda$ is the zero-point vacuum fluctuations. The expectation value of the energy-momentum tensor for vacuum can be written in the Lorentz invariant form $<T_{ab}>_{vac}=(\Lambda/8\pi G)g_{ab}$, where $G$ is Newton’s constant. It is divergent for both bosons and fermions. Since bosons and fermions (of identical mass) contribute equally but with opposite sign to the vacuum expectation of physical quantities, supersymmetry was expected to account for the (nearly) zero value of the cosmological constant, through an accurate balance between bosons and fermions in Nature. However, among other objections, the resulting scenario is not the one expected if a Universe with an early period of inflation (large $\Lambda$) and a very small current value of $\Lambda$ is to be described[@sahni]. Although other mechanisms and principles, among them a running $\Lambda$ and the anthropic principle[@lambdap; @sahni], have been invoked to solve the cosmological constant (vacuum energy density) puzzle, none of them has been able to give a definitive answer, and the problem remains a mystery.
At present, there is no known fundamental symmetry in Nature that would set the value of $\Lambda$ to zero [@lambdap; @sahni]. Hence, the search for such a symmetry remains a challenge. In the present letter I want to put forward the possibility that such a fundamental symmetry could be the one under the interchange of gravity and antigravity. The idea behind this possibility is that, once this symmetry is assumed, to each gravitating standard model particle (antiparticle) there corresponds an antigravitating partner, whose contribution to the vacuum energy exactly cancels the gravitating particle’s contribution. This can be visualized as if there were two distinct vacua: one that gravitates and another that antigravitates, so that the resulting “total” (averaged) vacuum does not gravitate at all. The kind of symmetry I am proposing to account for the (nearly) zero value of the cosmological constant opens up the possibility that such exotic entities as antigravitating objects might exist in Nature. Why, then, are objects of this kind not observed in our Universe? A similar question, this time for antimatter, has been raised before. In that case, a possible mechanism for generating the desired amount of baryon asymmetry[@m-antim] relies on three necessary (Sakharov’s) conditions: i) baryon number non-conservation, ii) C and CP violation, and iii) deviations from thermal equilibrium. In the same fashion, the problem of gravitating-antigravitating matter asymmetry could be approached. In this sense, one should expect non-conservation of the charge associated with gravity-antigravity symmetry.
In this letter I call gravity-antigravity transformations (G-aG transformations for short) the following set of simultaneous transformations: $G\rightarrow -G$ and $g_{ab}\rightarrow -g_{ab}$. This means that, simultaneously with the interchange of gravity and antigravity, an interchange of time-like and space-like domains is also required. It is straightforward to check that the purely gravitational part of the Einstein-Hilbert action $S=\int d^4x\sqrt{|g|} R/(16\pi G)$ is G-aG symmetric. Actually, under $g_{ab}\rightarrow -g_{ab}$, the Ricci tensor is unchanged, $R_{ab}\rightarrow R_{ab}$, while the curvature scalar $R\rightarrow -R$. The introduction of a $\Lambda$ term in the above Einstein-Hilbert action breaks this symmetry. This fact hints at the possibility that precisely this kind of symmetry could account for a zero value of the cosmological constant $\Lambda$.
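The sign bookkeeping behind this statement can be made explicit (a sketch, in four dimensions):

```latex
% Christoffel symbols: both g^{ad} and the metric derivatives flip sign
\Gamma^{a}_{\;bc}=\tfrac{1}{2}g^{ad}\left(\partial_b g_{dc}+\partial_c g_{bd}-\partial_d g_{bc}\right)
\;\longrightarrow\;
\tfrac{1}{2}\left(-g^{ad}\right)\left(-\partial_b g_{dc}-\partial_c g_{bd}+\partial_d g_{bc}\right)
=\Gamma^{a}_{\;bc}.
% Riemann and Ricci are built from \Gamma alone, hence invariant, while
R=g^{ab}R_{ab}\;\longrightarrow\;(-g^{ab})R_{ab}=-R,
\qquad
\sqrt{|\det(-g_{ab})|}=\sqrt{|(-1)^{4}\det g_{ab}|}=\sqrt{|g|}.
% So the integrands transform as
\frac{\sqrt{|g|}\,R}{16\pi G}\;\longrightarrow\;\frac{\sqrt{|g|}\,(-R)}{16\pi(-G)}
=\frac{\sqrt{|g|}\,R}{16\pi G},
\qquad
\frac{\sqrt{|g|}\,(-2\Lambda)}{16\pi G}\;\longrightarrow\;
-\frac{\sqrt{|g|}\,(-2\Lambda)}{16\pi G}.
```

The last line shows explicitly why a $\Lambda$ term breaks the symmetry while the curvature term preserves it.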
Since models with spontaneous symmetry breaking are relevant to the cosmological constant problem [@sahni], I will explore a very simple model, that is based on general relativity plus self-interacting scalar fields with symmetry breaking potentials, as sources of gravity. The starting point will be the following action which includes a single scalar field $\phi$:
$$S=\int\frac{d^4x\sqrt{|g|}}{16\pi G}\{R-(\nabla\phi)^2-2V(\phi)\},
\label{1}$$
where $V(\phi)$ is the self-interaction (symmetry breaking) potential. This action respects the reflection symmetry $\phi\rightarrow -\phi$ if $V$ is an even function. However, it is invariant under G-aG transformations, with the inclusion of reflection, only if, simultaneously, $V(\phi)\rightarrow -V(\phi)$. The latter transformation, under reflection, restricts the self-interaction potential to be an odd function of $\phi$. Therefore, a very wide class of symmetry breaking potentials (including the typical “Mexican hat” potential) is ruled out by G-aG symmetry. In order to extend this symmetry to any kind of potential, one can introduce in the action (\[1\]) a second self-interacting scalar field $\bar\phi$, with the wrong sign of both kinetic and potential energy terms, and, at the same time, introduce the innocuous factor $\epsilon\equiv G/|G|$ ($\epsilon=+1$ for gravity and $\epsilon=-1$ for antigravity) in the kinetic terms for both $\phi$ and $\bar\phi$. The improved action is:
$$\begin{aligned}
S=\int\frac{d^4x\sqrt{|g|}}{16\pi\epsilon
|G|}\{R-\epsilon(\nabla\phi)^2-2V(\phi)\nonumber\\
+\epsilon(\nabla\bar\phi)^2+2V(\bar\phi)\}. \label{2}\end{aligned}$$
Notice I kept the same symbol $V$ for the self-interacting potential, meaning that the functional form of both $V(\phi)$ and $V(\bar\phi)$ is the same. The fact that both kinetic and potential energies of $\bar\phi$ enter with the wrong sign, means that the energy density of the second scalar field $\rho_{\bar\phi}=\epsilon(\nabla\bar\phi)^2/2-V(\bar\phi)$ is negative if the potential $V$ is a positive definite function. A second interpretation could be that, $\bar\phi$ has positive energy ($\rho_{\bar\phi}\rightarrow\rho_{\bar\phi}^+=-\rho_{\bar\phi}>0$), but it antigravitates ($\epsilon\rightarrow -\epsilon$). This second interpretation is apparent if one realizes that, in the right-hand side (RHS) of Einstein’s equations, that are derivable from (\[2\]), one has the combination: $8\pi\epsilon |G|
(T_{ab}^\phi-T_{ab}^{\bar\phi})$, where the stress-energy tensor for scalar field degrees of freedom is defined in the usual way (except for the innocuous factor $\epsilon$): $T_{ab}^\chi=\epsilon(\nabla_a\chi\nabla_b\chi-\frac{1}{2}g_{ab}(\nabla\chi)^2)
-g_{ab}V(\chi)$ ($\chi$ is the collective name for $\phi$ and $\bar\phi$). Hence, one could hold the view that, the minus sign in the second term of the RHS of Einstein’s field equations, could be absorbed into the innocuous factor $\epsilon$: $8\pi |G|
(\epsilon T_{ab}^\phi+(-\epsilon) T_{ab}^{\bar\phi})$. In any case, $\bar\phi$ represents a kind of exotic particle, whose existence could be justified only in quantum systems such as the quantum vacuum.
The action (\[2\]) is explicitly invariant under the (enhanced) set of G-aG transformations:
$$\epsilon\rightarrow -\epsilon,\;\; g_{ab}\rightarrow
-g_{ab},\;\;\phi\leftrightarrow\bar\phi. \label{3}$$
A remarkable property of this model is that the Klein-Gordon equations for both $\phi$ and $\bar\phi$
$$\Box\phi=\epsilon\frac{dV(\phi)}{d\phi},
\;\;\Box\bar\phi=\epsilon\frac{dV(\bar\phi)}{d\bar\phi}, \label{4}$$
coincide. The consequence is that both fields will tend to run down the potentials towards smaller energies. Therefore, if $V$ has global minima, both $\phi$ and $\bar\phi$ will tend to approach one of these minima. This is, precisely, the key ingredient in the present model to explain the small value of the vacuum energy density, through weak violation of G-aG symmetry. To illustrate this point, let us consider the “Mexican hat” potential:
$$V(\chi)=V_0-\frac{\mu_\chi^2}{2}\chi^2+\frac{\lambda_\chi}{4}\chi^4,\;\;\lambda_\chi>0.
\label{5}$$
The symmetric state $(\phi,\bar\phi)=(0,0)$ is unstable and the system settles in one of the following ground states $(\phi,\bar\phi)$: $(\sqrt{\mu^2/\lambda},\sqrt{\bar\mu^2/\bar\lambda})$, $(\sqrt{\mu^2/\lambda},-\sqrt{\bar\mu^2/\bar\lambda})$, $(-\sqrt{\mu^2/\lambda},-\sqrt{\bar\mu^2/\bar\lambda})$, $(-\sqrt{\mu^2/\lambda},\sqrt{\bar\mu^2/\bar\lambda})$. This means that the reflection symmetry $\phi\rightarrow -\phi$, $\bar\phi\rightarrow -\bar\phi$, inherent in theory (\[2\]) with potential (\[5\]), is spontaneously broken. The immediate consequence is that the stress-energy tensor of the vacuum of the theory takes the Lorentz invariant form
$$T_{ab}=\frac{\Lambda}{8\pi |G|} g_{ab}, \label{6}$$
where the cosmological constant $\Lambda=((\mu^4/\lambda)-(\bar\mu^4/\bar\lambda))/4\epsilon$ ($\mu\equiv\mu_\phi$, $\bar\mu\equiv\mu_{\bar\phi}$, etc.). Now it is apparent that the resulting theory (with broken reflection symmetry), that is given by the action $S=\int
d^4x\sqrt{|g|}\{R-2\Lambda\}/16\pi\epsilon |G|$, is invariant under G-aG transformations (\[3\]) only if $(\mu,\lambda)=(\bar\mu,\bar\lambda)\Rightarrow\Lambda=0$. In consequence, the small observed value $\Lambda\sim 10^{-47}
GeV^4$ signals a weak violation of G-aG symmetry, due to a degeneracy of the reflection symmetry energy scales $\alpha\equiv\mu^4/8\pi |G|\lambda$ and $\bar\alpha\equiv\bar\mu^4/8\pi |G|\bar\lambda$.
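As a quick symbolic cross-check of the vacuum energy entering $\Lambda$, the sketch below (using sympy; the symbol names are illustrative) evaluates the potential (\[5\]) at its nonzero minimum, confirming $V_{\rm min}=V_0-\mu^4/4\lambda$; since the $V_0$ contributions of the $\phi$ and $\bar\phi$ sectors enter the action (\[2\]) with opposite signs and cancel, what remains is $\Lambda\propto\mu^4/\lambda-\bar\mu^4/\bar\lambda$, as quoted.

```python
import sympy as sp

chi, mu, lam, V0 = sp.symbols('chi mu lamda V_0', positive=True)
# "Mexican hat" potential, eq. (5)
V = V0 - sp.Rational(1, 2) * mu**2 * chi**2 + sp.Rational(1, 4) * lam * chi**4

chi_min = mu / sp.sqrt(lam)           # nonzero stationary point, chi^2 = mu^2/lam
V_min = sp.simplify(V.subs(chi, chi_min))
print(V_min)                          # equals V_0 - mu**4/(4*lamda)
```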
Which physical mechanism is responsible for a small violation of G-aG symmetry is a question that can be clarified only once exotic (in principle, antigravitating) fields like $\bar\phi$ are built into a fundamental theory of the physical interactions, including gravity. In the absence of such a fundamental theory of the unified interactions, one can only conjecture about the physical origin of the degeneracy in the reflection symmetry energy scales. In this regard, a possible origin of the aforementioned degeneracy could be associated with the following reasoning. It is an observational fact that almost all real (observable) standard model particles in Nature gravitate. Therefore, their interactions with gravitating and antigravitating vacuum particles are of a different nature. The difference is accentuated in the early stages of cosmic evolution, due to the stronger character of gravitational interactions at high energy scales, while, at present, the difference is very tiny, due to the very weak intensity of gravitational effects at the low energies prevailing in the Universe. This fits well with a scenario in which a large value of $\Lambda$ is required to produce the due amount of inflation in the early Universe, while a small current value of $\Lambda$ reproduces the present stage of cosmic evolution[@sahni]. A less physically inspired possibility is that, during the course of cosmic evolution, the relative difference between the reflection symmetry breaking scales is nearly constant: $(\alpha-\bar\alpha)/\alpha\sim const.$ Then, for larger (reflection) symmetry breaking scales, the cosmological constant $\Lambda=2\pi |G|(\alpha-\bar\alpha)/\epsilon$ is larger.
Although the model explored in this letter is far from giving a final answer to the cosmological constant puzzle and, besides, it is incomplete in that it gives no explanation about a realistic mechanism for present small violation of G-aG symmetry, nevertheless, it hints at a possible connection between this (would be) fundamental symmetry and the vacuum energy density. The study of this symmetry could be relevant, also, to the understanding of the role orbifold symmetry plays in Randall-Sundrum (RS) brane models[@rs]. Actually, in RS scenario $M_{Pl}^2\propto\int dy M_{Pl,5}^3$ ($y$ accounts for the extra coordinate) so, symmetry under $y\rightarrow
-y\Leftrightarrow M_{Pl}^2\rightarrow -M_{Pl}^2$ ($G\rightarrow
-G$). Treatment of the second face of the cosmological constant puzzle (see the introductory part of this letter), within the present approach, requires further research.
I am grateful to the MES of Cuba for financial support of this research.
[99]{}
T Padmanabhan, Phys. Rept. [**380**]{} (2003) 235-320, hep-th/0212290; A D Dolgov, hep-ph/0203245; hep-ph/0405089; A Vilenkin, hep-th/0106083; S Weinberg, astro-ph/0005265; Rev. Mod. Phys. [**61**]{} (1989) 1.
V Sahni, astro-ph/0403324; Class. Quant. Grav. [**19**]{} (2002) 3435-3448, astro-ph/0202076; S M Carroll, Living Rev. Rel. [**4**]{} (2001) 1, astro-ph/0004075.
T Hertog, G T Horowitz and K Maeda, Phys. Rev. D[**69**]{} (2004) 105001, hep-th/0310054; JHEP [**0305**]{} (2003) 060, hep-th/0304199; G T Horowitz, hep-th/0312123; N Dadhich, Phys. Lett. B[**492**]{} (2000) 357-360, hep-th/0009178; J Polchinski, L Susskind and N Toumbas, Phys. Rev. D[**60**]{} (1999) 084006, hep-th/9903228; K D Olum, Phys. Rev. Lett. [**81**]{} (1998) 3567-3570, gr-qc/9806091; R Mann, Class. Quant. Grav. [**14**]{} (1997) 2927-2930, gr-qc/9705007.
A D Dolgov, hep-ph/0211260; V A Rubakov and M E Shaposhnikov, Usp. Fiz. Nauk [**166**]{} (1996) 493-537; Phys. Usp. [**39**]{} (1996) 461-502, hep-ph/9603208; A G Cohen, D B Kaplan and A E Nelson, Ann. Rev. Nucl. Part. Sci. [**43**]{} (1993) 27-70, hep-ph/9302210.
L Randall and R Sundrum, Phys. Rev. Lett. [**83**]{} (1999) 4690-4693, hep-th/9906064; Phys. Rev. Lett. [**83**]{} (1999) 3370-3373, hep-ph/9905221.
---
abstract: 'I show that the characteristic diffusion timescale and the gamma-ray escape timescale of Type Ia supernova ejecta are related to each other through the time when the bolometric luminosity, $L_{\rm bol}$, intersects the instantaneous radioactive decay luminosity, $L_\gamma$, for the second time after the light-curve peak. Analytical arguments, numerical radiation-transport calculations, and observational tests show that $L_{\rm bol}$ generally intersects $L_\gamma$ at roughly $1.7$ times the characteristic diffusion timescale of the ejecta. This relation implies that the gamma-ray escape timescale is typically 2.7 times the diffusion timescale, and also that the bolometric luminosity 15 days after the peak, $L_{\rm bol}(t_{15})$, must be close to the instantaneous decay luminosity at that time, $L_\gamma (t_{15})$. With the employed calculations and observations, the accuracy of $L_{\rm bol}=L_\gamma$ at $t=t_{15}$ is found to be comparable to the simple version of “Arnett’s rule” ($L_{\rm bol}=L_\gamma$ at $t=t_{\rm peak}$). This relation aids the interpretation of Type Ia supernova light curves and may also be applicable to general hydrogen-free explosion scenarios powered by other central engines.'
author:
- Tuguldur Sukhbold
title: 'Properties of Type-I Supernova Light Curves'
---
INTRODUCTION
============
[[\[sec:intro\]]{}]{}
Supernovae of Type-Ia are believed to result from thermonuclear explosions of white dwarfs [@Hoy60]. While they play a major role as a cosmographic tool, the identity of the progenitors and the nature of the ignition process remain a mystery [for a general review, see @Mao14]. In this work, however, an agnostic stance is taken on the exact nature of the progenitor or explosion, and instead I focus on the generic properties of the light curves.
It is well known that the optical display of a Type-Ia explosion is predominantly powered by the radioactive decay chain of $^{56}$Ni $\rightarrow ^{56}$Co $\rightarrow ^{56}$Fe [@Pan62; @Tru67; @Bod68; @Col69],[^1] whose power is a precisely known exponentially decaying function of time [e.g., @Nad94], $$L_\gamma = \frac{M_{\rm Ni}}{{\ensuremath{\mathrm{M}_\odot}}}(C_{\rm Ni}e^{-t/\tau_{\rm Ni}}+C_{\rm Co}e^{-t/\tau_{\rm Co}})\ \rm ergs\ s^{-1},$$ where $t$ and $M_{\rm Ni}$ are time and $^{56}$Ni mass, and $C_{\rm Ni}\approx6.45\times10^{43}$, $C_{\rm Co}\approx1.45\times10^{43}$, $\tau_{\rm Ni}=8.8$ days and $\tau_{\rm Co}=111.3$ days. A time-dependent fraction of this energy input is thermalized in the expanding ejecta, and thus the actual shape of the light curve is dictated by the competition between adiabatic degradation of internal energy into kinetic energy, and the loss of internal energy via radiation.
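For concreteness, the decay luminosity above is straightforward to evaluate; the sketch below implements it in Python with the constants quoted in the text (the chosen $^{56}$Ni mass is illustrative).

```python
import math

def L_gamma(t_days, m_ni=1.0):
    """Instantaneous ^56Ni -> ^56Co -> ^56Fe decay power in erg/s,
    for m_ni solar masses of ^56Ni, using the constants in the text."""
    C_NI, C_CO = 6.45e43, 1.45e43        # erg/s per Msun of ^56Ni
    TAU_NI, TAU_CO = 8.8, 111.3          # e-folding times in days
    return m_ni * (C_NI * math.exp(-t_days / TAU_NI)
                   + C_CO * math.exp(-t_days / TAU_CO))

print(L_gamma(0.0))   # 7.9e43 erg/s at t = 0 for 1 Msun of ^56Ni
```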
![Schematic representation of a Type-Ia light curve. The bolometric luminosity, $L_{\rm bol}$, is shown in solid black, the instantaneous radioactive decay power, $L_\gamma$, is the red solid curve, and the heating rate (thermalized fraction of $L_\gamma$), $H$, is shown as the dashed-pink curve. $L_{\rm bol}$ crosses $L_\gamma$ at two points, A and B. The former marks the well-known “Arnett’s” rule, while this study aims to understand how the time of point B, $t_{\rm B}$, is related to the time of point A, which is approximately the characteristic diffusion timescale of the ejecta, $t_{\rm d}$. [[\[fig:schem\]]{}]{}](fig1.pdf){width="48.00000%"}
[[[[Fig. [[\[fig:schem\]]{}]{}]{}]{}]{}]{} provides a schematic Type-Ia light curve. Since the explosion starts from a compact star, the ejecta begins its life as an opaque ball of plasma and thus the bolometric luminosity ($L_{\rm bol}$, black solid curve) rises at early times. Though the instantaneous radioactive power ($L_\gamma$, red solid curve) is very high at this time, the time that it takes for radiation to diffuse out of the ejecta is much larger than the age. As the expansion reduces the density, eventually the age of the ejecta surpasses the diffusion time of $$t_{\rm d} = \Bigg[\frac{3\kappa M_{\rm ej}}{4\pi c v}\Bigg]^{1/2},
{{\label{eq:diff}}}$$ where $\kappa$, $M_{\rm ej}$, $c$, and $v$ are the opacity, ejecta mass, speed of light and ejecta velocity, respectively. Near this point, a significant amount of the deposited energy can be radiated rather than converted into kinetic energy for the first time, and the light curve reaches its peak. The well-known “Arnett’s rule” [@Arn79; @Arn82] describes this feature: Arnett first showed that the peak luminosity of the light curve must be close to the instantaneous decay power at that time (point A).
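Plugging representative numbers into this expression (illustrative values, matching those adopted for the numerical models later in the text: $\kappa=0.1\ \rm cm^2\ g^{-1}$, $M_{\rm ej}=1.4$ [$\mathrm{M}_\odot$]{}, $v=10^9$ cm s$^{-1}$) gives a diffusion timescale of roughly two and a half weeks:

```python
import math

M_SUN, C_LIGHT = 1.989e33, 2.998e10      # g, cm/s

def diffusion_time(kappa, m_ej_msun, v):
    """t_d = sqrt(3*kappa*M_ej / (4*pi*c*v)) in days (cgs inputs)."""
    t_sec = math.sqrt(3.0 * kappa * m_ej_msun * M_SUN
                      / (4.0 * math.pi * C_LIGHT * v))
    return t_sec / 86400.0

print(round(diffusion_time(0.1, 1.4, 1e9), 1))   # about 17 days
```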
But this is not the only time the bolometric luminosity equals the decay power. Shortly after the peak of the light curve, there is a significant amount of radiation still trapped and diffusing outward in the ejecta [@Pin00a]. The bolometric luminosity remains greater than the decay power for some time while this “excess” energy is drained and the decay power falls onto the more slowly declining $^{56}$Co $\rightarrow ^{56}$Fe curve. Therefore the bolometric luminosity crosses the decay power again after the peak at point B.
Note that it is the decay power that the bolometric luminosity crosses for a second time, not the actual heating rate ($H$, pink-dashed). Due to the escape of $\gamma$-rays, the non-thermalized fraction of the decay power increases with time, so the heating rate crosses $L_{\rm bol}$ only once near the peak, and at late times it asymptotes to $L_{\rm bol}$. Also, the existence of point B may not be a universal feature of Type-Ia light curves. In extreme cases, the decay power may barely graze the bolometric luminosity at a single point near the peak, or it may not cross it at all.
The aim of this study is to understand how the time, $t_{\rm B}$, of point B relates to the time, $\sim t_{\rm d}$, of point A. While the classical papers by @Arn82 and @Pin00a provide semi-analytical model light curves, simple arguments are employed here to show why the time of point B is expected to be a constant multiple of the characteristic diffusion timescale of the ejecta. The claim is tested using both a set of Monte-Carlo radiation-transport calculations describing various possible ejecta configurations, and observational data, yielding $t_{\rm B}/t_{\rm d}\approx1.7$. In the end, one of the main implications of this finding, namely that the bolometric luminosity at 15 days after the peak is close to the radioactive decay power at that time, $L_{\rm bol}(t_{15})\approx L_\gamma(t_{15})$, is tested, and I discuss how this could potentially be applied in the study of the observed width-luminosity relation [WLR; @Phi99].
Semi-Analytical Arguments
=========================
[[\[sec:args\]]{}]{}
Energy conservation for the expanding ejecta implies that $$\frac{{\rm d}E}{{\rm d}t}+P\frac{{\rm d}V}{{\rm d}t}+L_{\rm bol}=H,
{{\label{eq:1st}}}$$ where $E$, $P$, and $V$ are internal energy, pressure, and volume, respectively. The energy deposited from radioactive decay is stored, spent doing work on the expanding ejecta, and lost through the photosphere. Next, we employ two excellent and commonly invoked assumptions. First, the ejecta are homologously expanding in time with an isotropic velocity gradient, such that for a uniform density profile ${\rm d}V/{\rm d}t=4\pi v^3 t^2$. Second, the plasma is radiation-dominated, with $P=E/3V$. Under these assumptions, $$\frac{{\rm d}E}{{\rm d}t}+P\frac{{\rm d}V}{{\rm d}t}=\frac{1}{t}\frac{{\rm d}(tE)}{{\rm d}t}.$$ Writing the heating rate as $H=L_\gamma F_\gamma$, where $F_\gamma$ is the time-dependent deposition fraction, [[eq. [[([[\[eq:1st\]]{}]{})]{}]{}]{}]{} becomes $$\frac{1}{t}\frac{{\rm d}(tE)}{{\rm d}t}=L_\gamma F_\gamma - L_{\rm bol}.
{{\label{eq:main}}}$$ This is equivalent to equation 10 of @Kas10a with a magnetar spin-down power $L_{\rm p}$ instead of radioactivity, and also to equation 2 of @Kat13, before time integration.
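The reduction of the work term used in this step can be verified symbolically; the sketch below assumes a homologously expanding uniform sphere, $V=\frac{4}{3}\pi (vt)^3$, and radiation domination, $P=E/3V$.

```python
import sympy as sp

t, v = sp.symbols('t v', positive=True)
E = sp.Function('E')(t)                      # internal energy

V = sp.Rational(4, 3) * sp.pi * (v * t)**3   # homologous, uniform ejecta
P = E / (3 * V)                              # radiation-dominated

lhs = sp.diff(E, t) + P * sp.diff(V, t)      # dE/dt + P dV/dt
rhs = sp.diff(t * E, t) / t                  # (1/t) d(tE)/dt
print(sp.simplify(lhs - rhs))                # 0 -> identity holds
```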
A critical piece in [[eq. [[([[\[eq:main\]]{}]{})]{}]{}]{}]{} is $F_\gamma$, which must be a function that stays close to unity at early times and then gradually asymptotes to zero at late times. First, consider early times, as they provide a simple way to recover point A[^2]. When $F_\gamma=1$, the condition $L_{\rm bol}=L_\gamma$ is satisfied only when ${\rm d}(tE)/{\rm d}t=0$. Employing the diffusion equation, one can derive an approximate relation, $$L_{\rm bol}\approx\frac{4\pi Rc E}{3\kappa\rho V}=\frac{tE}{t_{\rm d}^2},
{{\label{eq:tE}}}$$ between bolometric luminosity and internal energy, where $R$ and $\rho$ are the radius and density. Taking time derivatives of both sides in [[eq. [[([[\[eq:tE\]]{}]{})]{}]{}]{}]{}, one sees that ${\rm d}(tE)/{\rm d}t=0$ is true when ${\rm d}L_{\rm bol}/{\rm d}t=0$, which is satisfied at the peak of the light curve (near the point A).
![Deposition fraction $F_\gamma=1-e^{-\tau}$ (red curves) as compared to a sample radiation-transport calculation (black circles). The thicknesses of the curves correspond to differing values of $t_\gamma$, where $\tau(t_\gamma)=1$. Though it typically overpredicts the energy deposition at early times for small $t_\gamma$, this prescription provides a good overall description of the deposition fraction. The sample model is well fit with $t_\gamma=46$ days (dashed red curve). [[\[fig:fgam\]]{}]{}](fig2.pdf){width="48.00000%"}
As a more general and realistic description I adopt $F_\gamma=1-e^{-\tau}$ [e.g., @Pin01]. Here, $\tau=(t_\gamma/t)^2$ is the mean optical depth for gamma-rays, representing the mostly absorptive nature of Compton opacity. Due to the homologous expansion, the optical depth scales as $t^{-2}$ and the timescale $t_\gamma$ is chosen such that $\tau(t_\gamma)=1$.
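A minimal implementation of this deposition fraction (the value of $t_\gamma$ used below is illustrative):

```python
import math

def F_gamma(t, t_gamma):
    """Gamma-ray deposition fraction 1 - exp(-tau), tau = (t_gamma/t)**2."""
    return 1.0 - math.exp(-(t_gamma / t) ** 2)

# Fully trapped early, mostly escaping late (t_gamma = 46 days, illustrative)
print(F_gamma(5.0, 46.0), F_gamma(500.0, 46.0))
```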
Positrons from the decay of $^{56}$Co only start escaping the ejecta at very late times [e.g., @Mil99], and therefore, more formally, $F_\gamma$ should apply only to the gamma-ray component of $L_\gamma$. For the sake of simplicity, however, the energetically less important positron contribution is ignored. A comparison of this prescription with a sample Monte-Carlo radiation-transport calculation (presented in [[§[[\[sec:num\]]{}]{}]{}]{}) is illustrated in [[[[Fig. [[\[fig:fgam\]]{}]{}]{}]{}]{}]{}, where $t_\gamma\sim46$ days reproduces this specific model well.
With this description of $F_\gamma$, [[eq. [[([[\[eq:main\]]{}]{})]{}]{}]{}]{} becomes $$L_\gamma e^{-\tau}+\frac{1}{t}\frac{{\rm d}(tE)}{{\rm d}t}=L_\gamma - L_{\rm bol},$$ where $L_\gamma=L_{\rm bol}$ is satisfied on two conditions: (1) $L_\gamma e^{-\tau}={\rm d}(tE)/{\rm d}t/t=0$, equivalent to the condition for the point A discussed above for $F_\gamma\equiv1$; and (2) $$\frac{{\rm d}(tE)}{{\rm d}t}=-L_\gamma e^{-\tau}t,
{{\label{eq:scond}}}$$ which is the condition for point B, and is satisfied only when $t=t_{\rm B}$. Unfortunately, [[eq. [[([[\[eq:scond\]]{}]{})]{}]{}]{}]{} cannot be integrated analytically in terms of elementary functions without approximating the term $e^{-(t_\gamma/t)^2}$.
![The relevant quantities for [[eq. [[([[\[eq:scond\]]{}]{})]{}]{}]{}]{} are shown as a function of time for a sample radiation-transport calculation. For clarity, the output from the Monte-Carlo calculation is smoothed by a Savitzky-Golay filter. Note that the curves for ${\rm d}(tE)/{\rm d}t$ and $-L_\gamma e^{-\tau}t$ first cross at point A near zero (simplified condition for “Arnett’s rule”), and then cross again at point B. Also, near the time of point B, ${\rm d}(tE)/{\rm d}t$ shares a common tangent with $-E$ and therefore their derivatives are expected to be similar near this point. [[\[fig:econs\]]{}]{}](fig3.pdf){width="48.00000%"}
But [[eq. [[([[\[eq:scond\]]{}]{})]{}]{}]{}]{} can be simplified using the fact that the internal energy changes in time roughly as ${\rm d}E/{\rm d}t\approx -2E/t$, near point B. This can be seen from the relationship between $E$ and ${\rm d}(tE)/{\rm d}t$, illustrated in [[[[Fig. [[\[fig:econs\]]{}]{}]{}]{}]{}]{} for a sample model. Note the quantity ${\rm d}(tE)/{\rm d}t$ shares a common tangent with $-E$ near the time $t\sim t_{\rm B}$. Considering a general form of ${\rm d}E/{\rm d}t= xE/t$, where $x$ is a negative real number for $t>t_{\rm d}$, the time derivative of the left-hand term in [[eq. [[([[\[eq:scond\]]{}]{})]{}]{}]{}]{} becomes $$\frac{{\rm d}}{{\rm d}t}\Bigg[\frac{{\rm d}(tE)}{{\rm d}t}\Bigg] = \frac{E}{t}(x^2+x).
{{\label{eq:x}}}$$ For example, the rate of change in the internal energy at the minimum of ${\rm d}(tE)/{\rm d}t$ can be found by solving $x^2+x=0$ for its non-zero root as ${\rm d}E/{\rm d}t=-E/t$. Since the curves ${\rm d}(tE)/{\rm d}t$ and $-E$ are tangents to each other near the point of interest, the time derivatives must also be similar at that point. Thus, the solution to $x^2+x=-x$ gives ${\rm d}E/{\rm d}t=-2E/t$ near $t\sim t_{\rm B}$. While this argument does not explain why the two curves are expected to share a common tangent near $t\sim t_{\rm B}$, numerical calculations ([[§[[\[sec:num\]]{}]{}]{}]{}) demonstrate that it is a good assumption.
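The algebra of this step can be checked with the explicit power-law solution $E\propto t^x$ of ${\rm d}E/{\rm d}t=xE/t$ (treating $x$ as constant, as in the text):

```python
import sympy as sp

t, A = sp.symbols('t A', positive=True)
x = sp.symbols('x', real=True)

E = A * t**x                       # solves dE/dt = x*E/t for constant x
d_tE = sp.diff(t * E, t)           # = (1 + x) * E

# eq. (x): d/dt[d(tE)/dt] = (E/t)*(x**2 + x)
print(sp.simplify(sp.diff(d_tE, t) - (E / t) * (x**2 + x)))   # 0

# matching tangents: x**2 + x = -x  ->  nonzero root x = -2
print(sp.solve(sp.Eq(x**2 + x, -x), x))
```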
Taking ${\rm d}E/{\rm d}t\approx-2E/t$, and replacing $E$ with [[eq. [[([[\[eq:tE\]]{}]{})]{}]{}]{}]{}, the [[eq. [[([[\[eq:scond\]]{}]{})]{}]{}]{}]{} becomes $$L_{\rm bol}(t_{\rm B})t_{\rm d}^2 t_{\rm B}^{-1} \approx L_\gamma(t_{\rm B}) e^{-(t_\gamma/t_{\rm B})^2}t_{\rm B}.$$ Recall that the condition for the point B is $L_{\rm bol}(t_{\rm B}) = L_\gamma(t_{\rm B})$, and thus the time of point B satisfies $$t_{\rm B}/t_{\rm d} \approx e^{(t_\gamma/t_{\rm B})^2/2}.
{{\label{eq:final}}}$$ This implies that \emph{if} the time of point B, $t_{\rm B}$, is proportional to, or varies slowly with, $t_\gamma$ across a wide range of models, then one should expect it to also be a constant multiple of the diffusion timescale $t_{\rm d}$. Furthermore, since the energy deposition terms cancel, [[eq. [[([[\[eq:final\]]{}]{})]{}]{}]{}]{} may also be applicable to other types of central power sources, such as magnetar spin-down [@Woo10; @Kas10a] and black hole accretion [@Dex13].
Numerical Models and Observations
=================================
[[\[sec:tests\]]{}]{}
In this section, the arguments presented in [[§[[\[sec:args\]]{}]{}]{}]{} are explored with simple numerical calculations and with a set of bolometric measurements available from the literature.
Radiation-transport Calculations
--------------------------------
[[\[sec:num\]]{}]{}
The time-dependent Monte-Carlo radiation-transport code developed by @Luc05 is used. The code includes Compton-scattering and photoelectric absorption for $\gamma$-ray transport, and employs gray transport for the optical radiation. Despite its simplicity, the resulting bolometric light curves are in good agreement with more advanced tools [e.g., [$\mathrm{\texttt{SEDONA}}$]{}, @Kas06].
![Sample bolometric light curves from the grid of models computed with the Monte-Carlo radiation-transport code from @Luc05. The trends seen from the variation in ejecta mass (top) and $^{56}$Ni mass (bottom) are consistent with prior calculations. The radioactive decay curve for $M_{\rm Ni}=0.55$ [$\mathrm{M}_\odot$]{} is shown as a dashed gray curve in the top panel, while the decay curves in the lower panel are shown only during the period when $L_\gamma < L_{\rm bol}$ (i.e. between the points A and B). Note that this duration decreases as the ratio of $M_{\rm Ni}/M_{\rm ej}$ increases. [[\[fig:lcs\]]{}]{}](fig4a.pdf "fig:"){width="48.00000%"} ![Sample bolometric light curves from the grid of models computed with the Monte-Carlo radiation-transport code from @Luc05. The trends seen from the variation in ejecta mass (top) and $^{56}$Ni mass (bottom) are consistent with prior calculations. The radioactive decay curve for $M_{\rm Ni}=0.55$ [$\mathrm{M}_\odot$]{} is shown as a dashed gray curve in the top panel, while the decay curves in the lower panel are shown only during the period when $L_\gamma < L_{\rm bol}$ (i.e. between the points A and B). Note that this duration decreases as the ratio of $M_{\rm Ni}/M_{\rm ej}$ increases. [[\[fig:lcs\]]{}]{}](fig4b.pdf "fig:"){width="48.00000%"}
A small grid of 48 model light curves was generated from a simple template. The ejecta are assumed to have a uniform density, with the $^{56}$Ni mass confined to the innermost region. The ejecta mass is varied over $0.8<M_{\rm ej}<2.2$ [$\mathrm{M}_\odot$]{}, and the $^{56}$Ni fraction over $0.2<M_{\rm Ni}/M_{\rm ej}<0.7$, so as to sample the wide range of possibilities emerging from various progenitor and explosion scenarios. For simplicity, a constant gray opacity of $0.1\ \rm cm^2\ g^{-1}$ is employed, and the initial outer radius and velocity of the ejecta are kept constant at $10^8$ cm and $10^{9}$ cm s$^{-1}$, respectively, for each model. Each spherically symmetric model is computed with 100 spatial zones and $5\times10^6$ radioactive matter packets. This choice is computationally cheap yet produces results with an acceptable level of noise. The final smooth light curves were obtained by applying a Savitzky-Golay filter with a second-order polynomial.
Generic features of the resulting light curves are presented in [[[[Fig. [[\[fig:lcs\]]{}]{}]{}]{}]{}]{} for $M_{\rm ej}=1.4$ [$\mathrm{M}_\odot$]{} with varying $^{56}$Ni mass, and for $M_{\rm Ni}=0.55$ [$\mathrm{M}_\odot$]{} with varying ejecta mass. With increasing ejecta mass, the diffusion timescale is longer, so the light curve evolves more slowly, reaching lower peak luminosities. With increasing $^{56}$Ni mass, the light curves are also broader, but they reach higher peak luminosities. These well-known general attributes are in good agreement with the results from many prior studies [e.g., @Pin00a; @Woo07]. Note that, with increasing $M_{\rm Ni}/M_{\rm ej}$, the point A crossing is delayed with respect to the peak of the light curve, and $L_\gamma$ spends less time under $L_{\rm bol}$ (lower $t_{\rm B}/t_{\rm d}$). For extreme ratios, roughly when $M_{\rm Ni}/M_{\rm ej}\gtrsim0.8$ in these models, $L_\gamma$ never crosses $L_{\rm bol}$.
![Dimensionless quantity $x$ defined in [[eq. [[([[\[eq:x\]]{}]{})]{}]{}]{}]{} for all models at the time of point B. Note that it has a nearly constant value for a wide range of $t_{\rm B}$, justifying the assumption of ${\rm d}E/{\rm d}t\approx-2E/t$ made in deriving [[eq. [[([[\[eq:final\]]{}]{})]{}]{}]{}]{}. [[\[fig:xcheck\]]{}]{}](fig5.pdf){width="48.00000%"}
Using this grid of models, we can numerically justify one of the key assumptions used in the derivation of [[§[[\[sec:args\]]{}]{}]{}]{} – that the change in time-weighted internal energy is always tangent to the negative of the internal energy near point B, so that ${\rm d}E/{\rm d}t|_{t=t_{\rm B}}\approx-2E/t_{\rm B}$. [[[[Fig. [[\[fig:xcheck\]]{}]{}]{}]{}]{}]{} illustrates the dimensionless quantity $x$ defined in [[eq. [[([[\[eq:x\]]{}]{})]{}]{}]{}]{}, evaluated at $t_{\rm B}$ for all models. It has a value close to $\sim -1.8$ for a wide range of $t_{\rm B}$, justifying the assumption.
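This check can be reproduced numerically. The exact eq. (\[eq:x\]) is outside this excerpt, so the sketch assumes $x=(t/E)\,{\rm d}E/{\rm d}t$, which equals $-2$ exactly when $E\propto t^{-2}$:

```python
import numpy as np

def x_of(E, t):
    """Dimensionless logarithmic derivative x = (t/E) dE/dt via finite
    differences. (Assumed form; the exact definition is in eq. (x).)"""
    return t * np.gradient(E, t) / E

t = np.linspace(10.0, 60.0, 500)       # days
E = 1e49 * (t / 20.0) ** -2.0          # internal energy ~ t^-2 (illustrative)
x = x_of(E, t)
# For E exactly proportional to t^-2, dE/dt = -2E/t, so x = -2 at all times;
# the radiation-transport models give x ~ -1.8 near point B.
```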
The time $t_\gamma$, defined as $\tau(t_\gamma)=1$, is evaluated by fitting the models with the adopted functional form of $F_\gamma$, as illustrated in [[[[Fig. [[\[fig:fgam\]]{}]{}]{}]{}]{}]{}. Finally, the time ratios in [[eq. [[([[\[eq:final\]]{}]{})]{}]{}]{}]{} are shown in [[[[Fig. [[\[fig:ratio\]]{}]{}]{}]{}]{}]{} for all of our models. The apparent anti-correlation is not captured in the simplified arguments of [[§[[\[sec:args\]]{}]{}]{}]{}. But as can be seen in [[[[Fig. [[\[fig:lcs\]]{}]{}]{}]{}]{}]{}, in models with increasing $M_{\rm Ni}/M_{\rm ej}$, the decay power spends less time under the bolometric luminosity, which results in a shorter $t_{\rm B}$. Also, some of the spread in the ratios of $t_\gamma/t_{\rm B}$ and $t_{\rm B}/t_{\rm d}$ is due to the assumptions of constant gray opacity and ejecta velocity, which result in a diffusion timescale that is not sensitive to $^{56}$Ni mass. Nonetheless, the points cluster within a fairly confined region, with only a 30% variation in $t_{\rm B}/t_{\rm d}$, despite the very wide range of ejecta parameters. The ratio $t_\gamma/t_{\rm B}$ also clusters in a narrow range, as expected from [[eq. [[([[\[eq:final\]]{}]{})]{}]{}]{}]{}. Taking central values of $t_{\rm B}/t_{\rm d}=1.7$ and $t_\gamma/t_{\rm B}=1.6$, we see that the ratio of $t_\gamma/t_{\rm d}\approx2.7$ is also approximately constant.
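The fit for $t_\gamma$ can be sketched as follows. The paper's exact functional form of $F_\gamma$ is given elsewhere; here a commonly used interpolating form with $\tau=(t_\gamma/t)^2$ is assumed, so that $\tau(t_\gamma)=1$ by construction, and the "model" points are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def f_dep(t, t_gamma):
    """Gamma-ray deposition fraction with optical depth tau = (t_gamma/t)**2.
    A commonly used interpolating form, assumed here for illustration."""
    return 1.0 - np.exp(-(t_gamma / t) ** 2)

# Synthetic "model" deposition fractions generated with t_gamma = 38 d + noise.
rng = np.random.default_rng(0)
t = np.linspace(15.0, 120.0, 60)
data = np.clip(f_dep(t, 38.0) + 0.01 * rng.standard_normal(t.size), 0.0, 1.0)

popt, _ = curve_fit(f_dep, t, data, p0=[30.0])
t_gamma_fit = popt[0]   # recovers ~38 d; tau(t_gamma) = 1 by construction
```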
As a simple sensitivity test, a subset of four Chandrasekhar mass models with varying $^{56}$Ni masses were recomputed with a broken power-law density profile, instead of the uniform one. Following the setup from @Kas10b, I adopt an inner segment of density, $\rho_{\rm i} \propto r^{-\delta}$ for $v < v_{\rm t}$, and an outer segment, $\rho_{\rm o}\propto r^{-n}$ for $v\geq v_{\rm t}$, with $\delta=1$ and $n=10$. Here, $r$ is radius and $v_{\rm t}$ is a transitional velocity point connecting the two profile segments [Equation (1) of @Kas10b]. These power-law models have higher interior (innermost solar mass) and lower exterior density for a given kinetic energy and mass. The effect of higher central density is dominant, and generally this leads to longer diffusion timescales and slower evolving light curves. However, the dimensionless quantity $x$ is still close to $-2$ in all four models, which implies $-E$ and ${\rm d}(tE)/{\rm d}t$ still share a common tangent near point B. The ratios of the timescales are systematically shifted by $\sim 0.1$, but otherwise they show the same clustered pattern. While this simple test demonstrates the generic nature of these results, it should be noted that the post-peak “hump” seen in bolometric light curves is not modeled here. As noted by many prior studies [e.g., @She18], the complex redistribution due to fluorescence requires non-gray radiative transport without the assumption of local thermodynamic equilibrium.
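The broken power-law setup can be sketched in homologous velocity coordinates; the velocity grid, transition velocity and normalization time below are illustrative choices, with $\delta=1$ and $n=10$ as in the text:

```python
import numpy as np

def broken_power_law_rho(v, v_t, delta=1.0, n=10.0):
    """Unnormalized broken power-law density in homologous coordinates:
    rho ~ v^-delta inside v_t, rho ~ v^-n outside (cf. Kasen 2010)."""
    return np.where(v < v_t, (v / v_t) ** -delta, (v / v_t) ** -n)

v = np.linspace(1e8, 3e9, 1000)            # cm/s (illustrative grid)
rho = broken_power_law_rho(v, v_t=1.1e9)   # assumed transition velocity

# Normalize so the profile integrates to a chosen ejecta mass at time t,
# using r = v t and shell volumes dV = 4 pi (v t)^2 t dv.
t = 86400.0                                # 1 day after explosion, s
M_ej = 1.4 * 1.989e33                      # g
shell = 4.0 * np.pi * (v * t) ** 2 * (t * np.gradient(v))
rho *= M_ej / np.sum(rho * shell)
```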
Comparison with Observations
----------------------------
[[\[sec:obs\]]{}]{}
Approximate bolometric light curves of actual SNe Ia can provide an independent, practical check on whether the ratios $t_{\rm B}/t_{\rm d}$ and $t_\gamma/t_{\rm B}$ are really constant across a wide range of explosions. Unfortunately, bolometric measurements are difficult and are not frequently published. In this work, a small but modern set of spectroscopically *normal* Type-Ia light curves compiled by the Nearby Supernova Factory project[^3] is employed. For details on the sample selection and construction of the bolometric luminosity see @Sca14, @Chi13 and @Ald02.
The reconstructed $M_{\rm Ni}$, $M_{\rm ej}$ values, and “joint” host galaxy reddening values from Tables 2 and 3 of @Sca14 are adopted. The radioactive decay power is assumed to cross the bolometric curve near the time of peak, which is estimated through the characteristic diffusion timescale with corresponding $M_{\rm ej}$, but with constant $\kappa=0.1\ \rm cm^2\ g^{-1}$ and $v=10^9\ \rm cm\ s^{-1}$. From the original list of 19 events, 3 cases, `SNF20080717-000`, `SNF20080913-031`, and `SNF20080918-004`, were discarded due to their irregular luminosity evolution such that $L_\gamma$ never crosses $L_{\rm bol}$, or crosses it more than twice.
![Time ratios derived in [[eq. [[([[\[eq:final\]]{}]{})]{}]{}]{}]{} for all 48 radiation-transport calculations. Despite the models spanning a very wide range of parameter space, the resulting ratio of the point B time and the diffusion timescale varies by only $\sim30$%, around $t_{\rm B}/t_{\rm d}\approx1.7$. The ratio $t_\gamma/t_{\rm B}$ is also confined to a narrow range around $\sim1.6$, as expected from [[eq. [[([[\[eq:final\]]{}]{})]{}]{}]{}]{}. [[\[fig:ratio\]]{}]{}](fig6.pdf){width="48.00000%"}
[[[[Fig. [[\[fig:obs\]]{}]{}]{}]{}]{}]{} shows the ratio $t_{\rm B}/t_{\rm d}$ measured for the remaining 16 events. In all cases, the ratio is distributed in a narrow range around a central value of $t_{\rm B}/t_{\rm d}\approx1.65$. This is in close agreement with the simple radiation-transport calculations presented in [[[[Fig. [[\[fig:ratio\]]{}]{}]{}]{}]{}]{}. Note that these 16 events had a relatively narrow range in ejecta mass $0.9<M_{\rm ej}<1.4$ [$\mathrm{M}_\odot$]{}, but $0.3<M_{\rm Ni}/M_{\rm ej}<0.65$, similar to the range covered by the model light curves. Therefore the tighter distribution of $t_{\rm B}/t_{\rm d}$ as compared to the models may indicate a systematic difference between the reconstructed parameters from this observational sample and the simple radiation-transport calculations used in [[§[[\[sec:num\]]{}]{}]{}]{}.
Recently @Wyg17 proposed a novel method of estimating $t_\gamma$ from the bolometric light curve, and for the same sample of observations [@Sca14], they found that it spans a narrow range between 30 and 45 days. The values of $t_{\rm B}$ measured for these events in this study span between 19 and 30 days. This implies $30<t_\gamma<48$ days for $t_\gamma/t_{\rm B}=1.6$ ([[[[Fig. [[\[fig:ratio\]]{}]{}]{}]{}]{}]{}), in excellent agreement with the range measured in @Wyg17. This agreement suggests that $t_\gamma$ can be estimated without integrating the bolometric measurements; instead, a simple estimate of the peak time (e.g., in the B-band) can be used, with $t_{\rm d}\approx t_{\rm peak}$ and thus $t_\gamma\approx2.7t_{\rm d}$.
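This shortcut amounts to chaining the two empirical ratios; a minimal sketch:

```python
def estimate_t_gamma(t_peak_days, tb_over_td=1.7, tg_over_tb=1.6):
    """Estimate the gamma-ray escape time from the rise time alone,
    using t_d ~ t_peak and the central ratios found in this work."""
    return tg_over_tb * tb_over_td * t_peak_days   # ~2.7 * t_peak

# A SN Ia rising in ~15 d gives t_gamma ~ 41 d, inside the 30-45 d
# range reported by Wygoda et al. for the same sample.
```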
Luminosity 15 Days after Peak
=============================
[[\[sec:15d\]]{}]{}
Simple analytic arguments, numerical calculations, and observational tests indicate that $t_{\rm B}\approx 1.7 t_{\rm d}$ for a wide range of Type-Ia explosion light curves. But how can this relation be useful?
Consider the following aspects of Type-Ia light curves. (1) Both theoretical and observational studies have shown that the simple version of “Arnett’s rule” works with an accuracy of about $10$% in Type-Ia explosions [e.g., @Blo13; @Str06]: in most cases $L_\gamma$ crosses $L_{\rm bol}$ near the peak of the light curve at $t\sim t_{\rm d}$. (2) The range of characteristic diffusion timescales, or equivalently of observed light-curve rise times, is not that great: it is almost always between 10 and 25 days, which, according to $t_{\rm B}\approx 1.7 t_{\rm d}$, implies $t_{\rm B}$ will be roughly between 7 and 18 days *after the peak*. (3) The rates of change ${\rm d}L_{\rm bol}/{\rm d}t$ and ${\rm d}L_\gamma/{\rm d}t$ are not too different near $t_{\rm B}$, unless the light curve evolves very rapidly.
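For concreteness, the instantaneous decay power entering these comparisons can be written with the standard $^{56}$Ni $\rightarrow$ $^{56}$Co $\rightarrow$ $^{56}$Fe parametrization of Nadyozhin (1994), assuming the full decay energy with no gamma-ray escape correction:

```python
import numpy as np

def decay_power(t_days, m_ni_msun):
    """Instantaneous 56Ni -> 56Co -> 56Fe decay power in erg/s,
    using the Nadyozhin (1994) coefficients (no gamma-ray escape)."""
    return (6.45 * np.exp(-t_days / 8.8)
            + 1.45 * np.exp(-t_days / 111.3)) * 1e43 * m_ni_msun

# For 0.55 Msun of 56Ni near a typical peak time t_d ~ 19 d, the decay
# power is ~1.1e43 erg/s and declines slowly (Co-dominated) thereafter.
```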
![The ratio $t_{\rm B}/t_{\rm d}$ as measured for the set of Type-Ia bolometric light curves from @Sca14. The Nearby Supernova Factory event names have been shortened to their discovery dates (i.e., `060907-000` is `SNF20060907-000`). Three events with peculiar luminosity evolution have been excluded from the analysis. For the 16 remaining events, the ratio falls in a narrow range centered around $t_{\rm B}/t_{\rm d}\approx1.65$, consistent with the calculations presented in [[[[Fig. [[\[fig:ratio\]]{}]{}]{}]{}]{}]{}. [[\[fig:obs\]]{}]{}](fig7.pdf){width="48.00000%"}
{width="99.00000%"}
{width="50.00000%"} {width="50.00000%"}
These aspects suggest that the time $t_{\rm B}$ will typically happen a few days earlier than the time 15 days after the peak, $t_{15}$, and therefore normally $L_\gamma(t_{\rm B})>L_{\rm bol}(t_{15})$. However, because $L_\gamma$ is a decreasing function of time, the decay power at $t_{15}$ should be much closer to the bolometric luminosity at that time, i.e. the following holds whether $t_{\rm B}>t_{15}$ or $t_{\rm B}< t_{15}$: $$|L_\gamma(t_{15})-L_{\rm bol}(t_{15})|<|L_\gamma(t_{15})-L_{\rm bol}(t_{\rm B})|$$ In order to test how close $L_\gamma(t_{15})$ is to $L_{\rm bol}(t_{15})$, the luminosities measured at $t_{15}$ are shown as a function of peak luminosity, $L_{\rm peak}$, in [[[[Fig. [[\[fig:LL\]]{}]{}]{}]{}]{}]{}. The “true” values at $t_{15}$ have been measured from the model light curves computed in [[§[[\[sec:tests\]]{}]{}]{}]{} (gray circles), and compared to the corresponding instantaneous radioactive power at that time (black bars).
In general, the light curves that reach higher peak luminosities will have higher luminosities at $t_{15}$, and for a given ratio of $M_{\rm Ni}/M_{\rm ej}$, this trend correlates with the ejecta mass (gray circle sizes). Note that $L_\gamma(t_{15})$ always overpredicts the luminosity for the lowest ejecta masses, and it underpredicts it for the highest ejecta masses. This reflects the fact that $t_{\rm B}>t_{15}$ for lower ejecta masses and the opposite at higher ejecta masses. Overall, excluding the handful of rapidly evolving models with the lowest ejecta and $^{56}$Ni masses, the radioactive decay power at $t_{15}$ closely matches the bolometric luminosity at that time (to within $\lesssim10$%).
The left panel of [[[[Fig. [[\[fig:perc\]]{}]{}]{}]{}]{}]{} shows the correlation between $L_{15}$ and $L_\gamma(t_{15})$. All of the radiation-transport calculation values (circles) fall closely around the one-to-one correlation. Also shown are the values measured from the sample of 16 SNe Ia presented in [[§[[\[sec:obs\]]{}]{}]{}]{} (stars). As mentioned earlier, the simple calculations from [[§[[\[sec:num\]]{}]{}]{}]{} do not model the post-peak “hump” due to fluorescence, and therefore the observational values are slightly less luminous for a given $L_\gamma(t_{15})$. Since most of the poorly behaving cases are due to overpredicted luminosities, the overall fit can be improved by introducing a small offset factor, $L_{15}=f_{\rm off}L_\gamma(t_{15})$, with $f_{\rm off}=0.85$ (dashed gray).
Note that most of the overpredicted cases are of lower ejecta mass, and thus the overall fit can be further improved with a simple ejecta-mass-dependent offset, $f_{\rm off}=0.15M_{\rm ej}+0.7$. The right panel of [[[[Fig. [[\[fig:perc\]]{}]{}]{}]{}]{}]{} illustrates the absolute fractional difference between $f_{\rm off}L_\gamma(t_{15})$ and $L_{\rm bol}(t_{15})$. Without any correction factor ($f_{\rm off}=1$), nearly all model values (gray circles) are below 10%, but the observed values (gray stars) are off by as much as 35%. With the mass-dependent offset, all of the models and the majority of observed events lie below 10% (colored circles and stars).
Given the simplistic nature of the numerical models and the uncertainties in parameters estimated from the observations, it is hard to determine which comparison deserves more weight as a test. But it is encouraging to see that $L_\gamma(t_{15})$ very closely matches with $L_{15}$ from a set of models that span a wide range of parameter space, and that a simple offset can greatly improve the overall agreement.
Conclusion
==========
[[\[sec:conclude\]]{}]{}
This study investigates the properties of SN Ia light curves through energy conservation arguments, radiation-transport calculations, and observational tests. The main finding is that for a wide range of parameters, the time when the instantaneous radioactive decay power crosses the bolometric luminosity for the second time, after the peak of the light curve, appears to be a constant multiple of the characteristic diffusion timescale of the ejecta. For these sets of simulations and observed SNe Ia, this constant turns out to be $\sim1.7$, i.e., $$t_{\rm B}\approx1.7t_{\rm d}.$$ It has been shown that the gamma-ray escape timescale is also related to the diffusion timescale, roughly as $t_\gamma \approx 2.7 t_{\rm d}$.
The primary implication of this finding is that the bolometric luminosity 15 days after the peak must be very close to the instantaneous radioactive decay luminosity at that time, i.e., $$L_{\rm bol}(t_{15})\approx L_\gamma (t_{15}).$$ It may serve as a simple tool that connects the observables of the WLR to a physical description of the ejecta.
A calibrated version of this relation that works for individual band-absolute magnitudes is needed for a more practical application to the WLR. For instance, the $B$-band magnitudes will evolve significantly faster than the bolometric magnitude after the peak, so without any calibration, it is likely that the $B$-band magnitude 15 days after the peak will be systematically overpredicted in all cases. These types of effects may be explored by employing more advanced radiation-transport tools, e.g., `SEDONA` [@Kas06], `STELLA` [@Bli06], `CMFGEN` [@Hil12], and `JEKYLL` [@Erg18], to see if reliable calibrations can be built for specific bands. A proper radiative transfer treatment is crucial in modeling the post-peak “hump” in bolometric light curves as well.
There may also be other subtle uses of this relation, where it could be employed in the interpretation of certain poorly sampled light-curve measurements. For instance, if the peak of the light curve is missed, but the $^{56}$Ni mass is estimated from the nebular spectra, then $t_{\rm B}$ can be measured from the light curve. This relation implies $t_{\rm d}\approx t_{\rm B}/1.7$ and the peak luminosity would be $L_{\rm peak}\approx L_\gamma(t_{\rm d})$.
As was emphasized originally in @Arn82, the Type-Ia explosion light curves are physically simpler than core-collapse Type-Ib/c, where there is a much weaker association between the main heating agent and the kinetic energy source. However, since the energy source terms cancel in the derivation of [[eq. [[([[\[eq:final\]]{}]{})]{}]{}]{}]{}, the proposed relation will also approximately hold for Type-Ib/c explosions. It would be an interesting future project to explore this relation in Type-Ib/c light curves, including the cases that are dominantly powered by energy sources other than radioactivity.
In general, a larger sample of observations with good constraints on the explosion date, host reddening and preferably with independently measured $^{56}$Ni masses [e.g. nebular spectra, @Chi15], would go a long way in demonstrating the usefulness of this relation.
Acknowledgments {#acknowledgments .unnumbered}
===============
I wish to thank the anonymous referee for providing a thorough review of this work. I also thank Todd Thompson, Stan Woosley, Anthony Piro, Chris Kochanek, Gantumur Tsogtgerel and Maximilian Stritzinger for many helpful discussions. The radiation-transport code used in this work was developed by Leon Lucy, who sadly passed away earlier this year. I wish to thank him for his many pioneering contributions to astrophysics. Support for this work was provided by NASA through the NASA Hubble Fellowship grant \#60065868 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555.
[[*Software:*]{}]{} `matplotlib` [@Hun07], `numpy` [@Van11].
[99]{}
Aldering, G., Adam, G., Antilogus, P., et al. 2002, , 4836, 61
Arnett, W. D. 1979, , 230, L37
Arnett, W. D. 1982, , 253, 785
Arnett, W. D., Fryer, C., & Matheson, T. 2017, , 846, 33
Blinnikov, S. I., R[ö]{}pke, F. K., Sorokina, E. I., et al. 2006, , 453, 229
Blondin, S., Dessart, L., Hillier, D. J., & Khokhlov, A. M. 2013, , 429, 2127
Bodansky, D., Clayton, D. D., & Fowler, W. A. 1968, , 16, 299
Childress, M., Aldering, G., Antilogus, P., et al. 2013, , 770, 107
Childress, M. J., Hillier, D. J., Seitenzahl, I., et al. 2015, , 454, 3816
Colgate, S. A., & McKee, C. 1969, , 157, 623
Dexter, J., & Kasen, D. 2013, , 772, 30
Ergon, M., Fransson, C., Jerkstrand, A., et al. 2018, , 620, A156
Hillier, D. J., & Dessart, L. 2012, , 424, 252
Hoyle, F., & Fowler, W. A. 1960, , 132, 565
Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90
Kasen, D., Thomas, R. C., & Nugent, P. 2006, , 651, 366
Kasen, D., & Bildsten, L. 2010, , 717, 245
Kasen, D. 2010, , 708, 1025
Katz, B., Kushnir, D., & Dong, S. 2013, arXiv:1301.6766
Khatami, D. K., & Kasen, D. N. 2018, arXiv:1812.06522
Khokhlov, A., Mueller, E., & Hoeflich, P. 1993, , 270, 223
Lucy, L. B. 2005, , 429, 19
Maoz, D., Mannucci, F., & Nelemans, G. 2014, , 52, 107
Milne, P. A., The, L.-S., & Leising, M. D. 1999, , 124, 503
Nadyozhin, D. K. 1994, , 92, 527
Pankey, T., Jr. 1962, Ph.D. Thesis,
Phillips, M. M., Lira, P., Suntzeff, N. B., et al. 1999, , 118, 1766
Pinto, P. A., & Eastman, R. G. 2000, , 530, 744
Pinto, P. A., & Eastman, R. G. 2001, , 6, 307
Scalzo, R., Aldering, G., Antilogus, P., et al. 2014, , 440, 1498
Shen, K. J., Kasen, D., Miles, B. J., & Townsley, D. M. 2018, , 854, 52
Stritzinger, M., Mazzali, P. A., Sollerman, J., & Benetti, S. 2006, , 460, 793
Truran, J. W., Arnett, W. D., & Cameron, A. G. W. 1967, Canadian Journal of Physics, 45, 2315
Van Der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, arXiv:1102.1523
Woosley, S. E., & Weaver, T. A. 1986, , 24, 205
Woosley, S. E., Kasen, D., Blinnikov, S., & Sorokina, E. 2007, , 662, 487
Woosley, S. E. 2010, , 719, L204
Wygoda, N., Elbaz, Y., & Katz, B. 2019, , 484, 3941
[^1]: Though he did not provide a correct description of Si-burning, Dr. Titus Pankey Jr. appears to have been the first to speculate on the connection between the decay of $^{56}$Ni and Type-Ia light curves.
[^2]: The opacity changes with temperature and composition and may have significant time dependence [e.g., @Kho93], which is ignored in this simplistic argument. For a more general derivation see @Arn17, and @Kha18.
[^3]: https://snfactory.lbl.gov
|
---
abstract: 'A compact versatile photoacoustic (PA) sensor for trace gas detection is reported. The sensor is based on an integrating sphere as the PA absorption cell with an organ pipe tube attached to increase the sensitivity of the PA sensor. The versatility and enhancement of the sensitivity of the PA signal is investigated by monitoring specific ro-vibrational lines of CO$_2$ in the 2 $\mu$m wavelength region and of NO$_2$ in the 405 nm region. The measured enhancement factor of the PA signal exceeds 1200, which is due to the acoustic resonance of the tube and the absorption enhancement of the integrating sphere relative to a non-resonant single-pass cell. It is observed that the background absorption signals are highly attenuated due to the thermal conduction and diffusion effects in the polytetrafluoroethylene cell walls. This demonstrates that careful choice of cell wall materials can be highly beneficial to the sensitivity of the PA sensor. These properties make the sensor suitable for various practical sensor applications in the ultraviolet (UV) to the near infrared (NIR) wavelength region, including climate, environmental and industrial monitoring.'
address: '$^1$Danish Fundamental Metrology, Matematiktorvet 307, DK-2800 Kgs. Lyngby, Denmark'
author:
- 'Mikael Lassen,$^{1,*}$ David Balslev-Clausen,$^1$ Anders Brusch,$^1$ and Jan C. Petersen$^1$'
title: A versatile integrating sphere based photoacoustic sensor for trace gas monitoring
---
[99]{}
M.W. Sigrist, *Air Monitoring by Spectroscopic Techniques* (John Wiley & Sons Inc.,1994).
M. W. Sigrist, R. Bartlome, D. Marinov, J. M. Rey, D. E. Vogler, and H. Wächter, “Trace gas monitoring with infrared laser-based detection schemes,” Appl. Phys. B **90**, 289–300 (2008).
C. K. N. Patel, “Laser photoacoustic spectroscopy helps fight terrorism: High sensitivity detection of chemical warfare agent and explosives,” Eur. Phys. J. Special Topics, **153**, 1, 1–18 (2008).
F. M. J. Harren, G. Cotti, J. Oomens, and S. te Lintel Hekkert, *Photoacoustic spectroscopy in trace gas monitoring, in Ensyclopedia of Analytical Chemistry* ed. by R. A. Meyers, (John Wiley & Sons Inc., 2000).
M. Nägele and M. W. Sigrist, “Mobile laser spectrometer with novel resonant multipass photoacoustic cell for trace-gas detection,” Appl. Phys. B **70**, 895–901 (2000).
A. Miklos, P. Hess, and Z. Bozoki, “Application of acoustic resonators in photoacoustic trace gas analysis and metrology,” Rev. Sci. Instrum., **72**, 1937–1955 (2001).
M. Xu, and L. V. Wang, “Photoacoustic imaging in biomedicine,” Review of Scientific Instruments, **77**, 041101 (2006).
K.H. Michaelian, *Photoacoustic Infrared Spectroscopy, Chemical Analysis Series*, ed. by J.D. Winefordner, (John Wiley & Sons Inc., 2003).
V. Koskinen, J. Fonsen, K. Roth and J. Kauppinen, “Progress in cantilever enhanced photoacoustic spectroscopy,” Vibr. Spectrosc. **48**, 1, 16–21, (2008).
J.-P. Besson, S. Schilt, and L. Thévenaz, “Sub-ppm multi-gas photoacoustic sensor,” Spectrochim. Acta A, **63**, 899–904 (2006).
A. Rosencwaig, *Photoacoustics and Photoacoustic Spectroscopy* (John Wiley & Sons Inc., 1980).
M. Webber, M. Pushkarsky, and C. Patel, “Fiber-amplifier-enhanced photoacoustic spectroscopy with near-infrared tunable diode lasers,” Appl. Opt. **42**, 2119–2126 (2003).
J. Rey, D. Marinov, D. Vogler, and M. Sigrist, “Investigation and optimisation of a multipass resonant photoacoustic cell at high absorption levels,” Appl. Phys. B **80**, 261–266 (2005).
A. Miklos, S.C. Pei, and A.H. Kung, “Multipass acoustically open photoacoustic detector for trace gas measurements,” Appl. Opt. **45**, 2529–2534 (2006).
J. Saarela, J. Sand, T. Sorvajärvi, A. Manninen, and J. Toivonen, “Transversely Excited Multipass Photoacoustic Cell Using Electromechanical Film as Microphone,” Sensors **10**, 5294–5307 (2010).
A. Manninen, B. Tuzson, H. Looser, Y. Bonetti, and L. Emmenegger, “Versatile multipass cell for laser spectroscopic trace gas analysis,” Applied Physics B **109**, 3, 461–466 (2012).
P. Elterman, “Integrating Cavity Spectroscopy Applied Optics,” Vol. **9**, 2140–2142 (1970).
Jane Hodgkinson, Dackson Masiyano, and Ralph P. Tatam, “Using integrating spheres as absorption cells: path-length distribution and application of Beer’s law”, Applied Optics, Vol. **48**, 30, 5748–5758 (2009).
S. Tranchart, I. H. Bachir, and J.-L. Destombes, “Sensitive trace gas detection with near-infrared laser diodes and an integrating sphere,” Appl. Opt. **35**, 7070–7074 (1996).
E. Hawe, E. Lewis, and C. Fitzpatrick, “Hazardous gas detection with an integrating sphere in the near-infrared,” J. Phys. Conf. Ser. **15**, 250–255 (2005).
R. Lewicki, G. Wysocki, A. A. Kosterev, and F. K. Tittel, “Carbon Dioxide and ammonia detection using 2µm diode laser based quartz-enhanced photoacoustic spectroscopy,” Appl. Phys. B **87**, 157–162 (2007).
E. Hawe, G. Dooly, C. Fitzpatrick, E. Lewis, and P. Chambers, “UV based pollutant quantification in automotive exhausts,” Proc. SPIE **6198**, 619807 (2006).
R. Bernhardt, G. D. Santiago, V. B. Slezak, A. Peuriot, and M. G. González, “Differential, LED-excited, resonant NO$_2$ photoacoustic system,” Sens. Actuators B **150**, 513–516 (2010).
H. Yi, K. Liu, W. Chen, T. Tan, L. Wang, and X. Gao, “Application of a broadband blue laser diode to trace NO$_2$ detection using off-beam quartz-enhanced photoacoustic spectroscopy,” Opt. Lett. **36**, 481–483 (2011).
A. G. Bell, “The production of sound by radiant energy,” Phil. Mag. **11**, 510 (1881).
A. C. Tam, “Applications of photoacoustic sensing techniques,” Reviews of Modern Physics, **58**, 381, (1986).
W. Demtroder, *Laser Spectroscopy: Basic Concepts and Instrumentation* third Edition, (Springer-Verlag, 2003).
F. Harren and J. Reuss, *Photoacoustic Spectroscopy*, ed. G.L. Trigg, (Wiley-VCH Verlag GmbH, 1979).
S. Schilt and L. Thevenaz, “Wavelength modulation photoacoustic spectroscopy: Theoretical description and experimental results,” Infrared Phys. Technol. **48**, 154–162 (2006).
J. N. Pitts Jr., J. H. Sharp, and S. I. Chan,“Effects of Wavelength and Temperature on Primary Processes in the Photolysis of Nitrogen Dioxide and a Spectroscopic—Photochemical Determination of the Dissociation Energy,” J. Chem. Phys. **40**, 3655, (1964)
N. Barreiro, A. Vallespi, A. Peuriot, V. Slezak, and G. Santiago, “Quenching effects on pulsed photoacoustic signals in NO$_2$-air samples,” Appl. Phys. B: Lasers Opt. **99**, 591–597 (2010)
J. Saarela, T. Sorvajärvi, T. Laurila, and J. Toivonen, “Phase-sensitive method for background-compensated photoacoustic detection of NO$_2$ using high-power LEDs”, Optics Express, Vol. **19**, 725–732 (2011)
I. S. Sidorov, S. V. Miridonov, E. Nippolainen, and A. A. Kamshilin, “Estimation of light penetration depth in turbid media using laser speckles,” Optics Express, **20**, 13, 13692–13701 (2012)
R.B. Bird, W.E. Stewart and E.N. Lightfoot, *Transport Phenomena* (John Wiley & sons, 1976)
Introduction
============
Versatile, highly sensitive, low cost and easy to operate trace gas detection systems are important for a number of practical applications, including climate, environmental and industrial monitoring. Trace gas measurements in the MIR and NIR wavelength regions are particularly important due to the presence of strong ro-vibrational bands of most molecules, resulting in high sensitivity [@Sigrist1994; @Sigrist2008; @Patel2008]. Photoacoustic spectroscopy (PAS) is a very promising method due to its ease of use, relatively low cost and the capability of allowing trace gas measurements at the sub-ppb level [@Harren2000; @Nägele2000; @Miklos2001; @Xu2006; @Michaelian2003; @Koskinen2008; @Besson2006]. These outstanding features of PAS can only be fully exploited using a suitably designed acoustic resonator in combination with a high power light source, since the sensitivity is proportional to the optical power and the acoustic enhancement factor [@Rosencwaig1980Book]. The modulation frequency of the light source, and thus of the acoustic wave, needs to be matched to the resonance frequency of the acoustic cell, resulting in a PA signal amplified by the acoustic quality factor $Q$. High power and tunable light sources tend to be big and bulky and limit the compactness of the spectrometer [@Webber2003]. However, the PA signal may also be enhanced by optical multi-pass techniques, resulting in an increase of the sensitivity of the PA spectrometer due to the increased light absorption path length from multiple reflections. Various multi-pass and single-pass configurations have so far been exploited for PAS configurations, such as ring cells, cavity based cells and transverse square cells [@Nägele2000; @Rey2005; @Miklos2006; @Saarela2010; @Manninen2012]. However, most existing technologies with small and compact size have relatively low sensitivity and limited spectral resolution. Therefore, novel solutions are desirable.
The use of an integrating sphere as absorption cell simplifies the sensor since the optical alignment is very simple and multiple passes automatically are achieved. The sensor can be made compact and highly versatile. The integrating sphere based PA sensor is in principle suited for many different detection techniques and has so far mostly been used for direct absorption including various modulation techniques [@Elterman1970; @Hodgkinson2009; @Tranchart1996; @Hawe2005].
The scope of this paper is to demonstrate the signal enhancement and the versatility of the combination of an integrating sphere as multi-pass absorption cell and an attached organ pipe tube as a coupled acoustic resonator. A PA sensor based on an integrating sphere manufactured from polytetrafluoroethylene (PTFE) is presented. The PTFE material is highly reflective in the wavelength range 250 - 2500 nm (UV - NIR); in this region the average reflectivity of PTFE exceeds $95\%$. However, due to the uniform distribution of the light field inside the integrating sphere, acoustic resonances cannot be exploited directly [@Rosencwaig1980Book]. It is demonstrated that the sensitivity of the PA sensor can be increased by attaching a 90 mm long organ pipe tube to the integrating sphere, thereby making use of the acoustic resonance of the tube. The versatility and enhancement of the PAS sensitivity of the integrating sphere has been investigated by monitoring specific ro-vibrational lines of CO$_2$ in the 2 $\mu$m wavelength region and of NO$_2$ in the 405 nm wavelength region [@Lewicki2007; @Hawe2006; @Bernhardt2010; @Yi2011]. The absolute sensitivity of the system is estimated as a minimum single-shot detectable CO$_2$ concentration of approximately 30 ppm in the 2 $\mu$m region and a minimum detectable NO$_2$ concentration of 1.9 ppm, both at a SNR = 1. These two gases are important in climate and environmental monitoring. NO$_2$ is a toxic atmospheric pollutant and is mainly emitted into the atmosphere by combustion processes. The average mixing ratio of NO$_2$ in the atmosphere is typically between 5 and 30 parts per billion; close to a combustion engine, however, it can be orders of magnitude higher. Enhancements of the PA signal exceeding 1200 were observed. It is observed that the background absorption signals are highly attenuated due to the thermal conduction and diffusion effects in the PTFE cell walls.
This suggests that the background signal issue of typical PA measurements can be circumvented by an appropriate choice of cell wall materials, in addition to careful optical alignment, to ensure background-free PA measurements. This makes the sensor suitable for many practical applications in the UV - MIR wavelength range, including climate, environmental and industrial monitoring of trace gases.
Theoretical considerations
==========================
The PA effect was first reported in 1881 by Bell [@PAS1881], however, it was not until after the invention of the laser that the effect found use in sensitive spectroscopy in the 1970s [@Tam1986]. The PA technique is based on the detection of sound waves that are generated due to absorption of modulated optical radiation. A microphone is used to monitor the acoustic waves that appear after the laser radiation is absorbed and converted to local heating via molecular collisions and de-excitation in the PA cell [@Demtroder2003]. The magnitude of the measured PA signal is given by [@Rosencwaig1980Book]: $$S_{PA} = S_m P F \alpha,
\label{eq.PAsignal}$$ where $P$ is the power of the incident radiation, $\alpha$ is the absorption coefficient, which depends on the total number of molecules per cm$^3$ and the absorption cross section, $S_m$ is the sensitivity of the microphone and $F$ is the cell-specific constant, which depends on the geometry of the acoustic cell and the quality factor $Q$ of the acoustic resonance [@Sigrist1994; @Harren1997]. Since, according to Eq. (\[eq.PAsignal\]), the PA signal is proportional to the density of molecules, the technique measures absorption directly, rather than deriving it from the transmission spectrum. However, the PAS technique is not an absolute metrological technique and requires calibration using a certified gas reference sample. Ideally, a highly sensitive PA system should amplify the sound wave and reject acoustic and electrical noise as well as in-phase background absorption signals from other materials in the cell (walls and windows). Eq. (\[eq.PAsignal\]) shows that the sensitivity of the PA signal increases with laser power. Higher sensitivity can thus be achieved either by increased laser power or by overlapping laser beams multiple times through the gas sample. Various multi-pass and single-pass configurations have so far been exploited for PAS, including standard single pass cylindrical cells, ring cells, cavity based cells and transverse square cells [@Miklos2006; @Saarela2010].
Integrating sphere as absorption cell
-------------------------------------
Using an integrating sphere as the absorption cell simplifies the system. Integrating spheres are typically used for a variety of optical, photometric or radiometric measurements. For integrating spheres the measure of optical intensity enhancement is the so-called sphere multiplier $M$, which accounts for the increase in radiance due to multiple reflections in the sphere. The multiplier is simply a function of the average reflectance of the sphere, $\bar{\rho}$, and the reflectance for the initially incident radiation, $\rho_0$, and is given by: $$M = \frac{\rho_0}{1-\bar{\rho}}\label{eq.Q multiplier}.$$ The PTFE sphere used in this work has an average reflectance $\bar{\rho} = 0.95$ at 2 $\mu$m and $\bar{\rho} = 0.98$ at 405 nm, where we have taken the apertures of the integrating sphere into account. This results in multiplier values of 20 and 50, respectively, enhancing the optical intensity, and hence the sensitivity of the sensor, by these factors. The total enhancement of the system relative to a single-pass non-resonant PAS is therefore: $ S_{enhancement}= Q \times M$.
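As a quick numerical illustration, the multiplier of Eq. (\[eq.Q multiplier\]) can be evaluated for the two wavelengths used here. This is a minimal sketch, assuming $\rho_0 \approx \bar{\rho}$ (i.e. that the first bounce also occurs on the PTFE wall), which is not stated explicitly in the text:

```python
def sphere_multiplier(rho0, rho_bar):
    """Integrating-sphere radiance multiplier M = rho0 / (1 - rho_bar)."""
    return rho0 / (1.0 - rho_bar)

# Average PTFE reflectances quoted above (sphere apertures included)
M_2um = sphere_multiplier(0.95, 0.95)   # 2 um:  ~19, i.e. roughly 20
M_405 = sphere_multiplier(0.98, 0.98)   # 405 nm: ~49, i.e. roughly 50
```

The values reproduce the multipliers of 20 and 50 quoted in the text to within rounding.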
![Simulations of the coupled acoustic system. (a) Acoustic pressure response as a function of frequency for the organ pipe tube. (b) Acoustic pressure response as a function of frequency for the sphere. The figure shows the 3D simulation of the first three eigenfrequencies of the coupled sphere and cylindrical acoustic resonator measured at the end of the tube: i) 743 Hz, ii) 2229 Hz and iii) 3716 Hz. The blue and red colors indicate the maximum and minimum acoustic pressure, with opposite phase. The white color is zero acoustic pressure. The corresponding experimental data is shown in Fig. \[NO2\_sensor\].[]{data-label="comsolsimulations"}](fig1.pdf){width="10cm"}
In a typical PA system the PA signal is enhanced by the acoustic resonances of the absorption cell. Unfortunately, the acoustic resonance cannot be exploited directly in the present setup due to the uniform distribution of the light field inside the integrating sphere. This can be seen from the overlap integral between the acoustic pressure field, $p_j$, and the optical intensity distribution $I(r,\omega)$. For a spatially constant optical field distribution the overlap integral vanishes [@Rosencwaig1980Book] for all $j$ except $j=0$, which is the only mode with a non-zero overlap and has an acoustic frequency of $\omega_0=0$. In order to overcome this constraint of the non-resonant integrating sphere and exploit acoustic resonances as an enhancement factor, a 90 mm long organ pipe tube is attached to the integrating sphere, thus allowing for an enhancement of the PAS signal due to the acoustic resonance of the tube. Simulations of the coupled acoustic system (sphere and organ pipe) were performed with a 3-dimensional model using the finite element method (FEM) multi-physics simulation program COMSOL. The pressure acoustics module is used to solve the Helmholtz equation. The boundary conditions were hard walls. The acoustic field is excited by applying a uniform pressure on the sphere walls. The simulation conditions were 1 atm, 25 $^\circ$C and a speed of sound of 343 m/s. The simulations, shown in Fig. \[comsolsimulations\], give the theoretical frequency response of the coupled acoustic system. The tube material is aluminum and its length and radius are 90 mm and 2 mm, respectively. The integrating sphere absorption cell has a radius of 25.4 mm and the material is PTFE. These are the same conditions as for the experimental realization. When the integrating sphere is excited by a uniform pressure wave, no resonance is observed in the integrating sphere; however, acoustic resonances in the organ pipe are excited. In Fig. 
\[comsolsimulations\](a) the first three eigenmodes of the coupled acoustic system are shown, depicted as if a microphone were attached to the end of the organ pipe tube. The eigenfrequencies of the modes are at i) 743 Hz, ii) 2229 Hz and iii) 3716 Hz, respectively. Note that the first eigenfrequency is relatively low, which could pose a problem in practical applications since acoustic noise from the surroundings would interfere with the PA signal. This can be circumvented by choosing a shorter organ pipe tube, thus moving the resonance toward higher eigenfrequencies. Note that in a typical experiment the light field is not necessarily completely uniform and acoustic resonances may be excited in the integrating sphere. In the simulations this can be realised by exciting the integrating sphere with a nonuniform pressure wave. The top figure in Fig. \[comsolsimulations\](b) shows the theoretical frequency response of the sphere as if a microphone were attached to the equator of the sphere, where responses at 1500 Hz and 3000 Hz are found. Note that in pure CO$_2$ the speed of sound is approximately 267 m/s at 1 atm and 20 $^\circ$C, thus it is expected that the fundamental resonance is shifted to approximately 650 Hz for pure CO$_2$.
Experimental setup
==================
![(a) The experimental setup for CO$_2$ monitoring consists of a distributed-feedback (DFB) diode laser emitting radiation at 2.004 $\mu$m, an integrating sphere with a diameter of 50.8 mm and two microphones: one attached directly to the integrating sphere and one attached via the 90 mm organ pipe tube. DAQ is a data acquisition card. (b) The setup for measuring NO$_2$ includes a 405 nm LED source and a lock-in amplifier connected to a DAQ card.[]{data-label="experimentsetup"}](fig2.pdf){width="10cm"}
A typical experimental setup for PAS involves a light source, either mechanically chopped or current modulated, and an absorption cell with microphones or a pressure-sensitive detector. Two experimental setups were used: one using a DFB laser light source for monitoring specific ro-vibrational lines of CO$_2$ in the 2 $\mu$m region and one using a LED light source for investigating NO$_2$ in the UV/blue region [@Demtroder2003]. The schematics of these are shown in Fig. \[experimentsetup\]. The laser based setup is shown in Fig. \[experimentsetup\](a). The DFB laser emits light at 2.004 $\mu$m ($\pm 0.002$ $\mu$m) and is used to probe the R(12) line of CO$_2$. The laser wavelength was fine tuned by changing the temperature, allowing for a tuning rate of 0.26 nm/$^\circ$C. The 50.8 mm diameter integrating sphere was manufactured from polytetrafluoroethylene (PTFE), which is a highly reflective bulk material in the wavelength range 250 - 2500 nm (UV - NIR). The reflectivity in this region is higher than 95$\%$, resulting in a mean light path length of approximately 1.2 m. The light beam enters the integrating sphere via an uncoated 3 mm thick calcium fluoride window. The optical transmission is 92$\%$ in the wavelength region 0.2-8 $\mu$m and the absorption coefficient is 10$^{-4}$/cm. The laser beam then hits the cell wall opposite the window and is scattered so that the light field is evenly distributed over all angles. Due to the placement of the window outside the sphere the background signal is decoupled from the PA signal and was not detectable. Two microphones were attached to the integrating sphere: one directly on the sphere and one at the end of the organ pipe tube. The measurements were performed at 1 atm and 20 $^\circ$C. The optical modulation was approximately 1.4 mW peak-to-peak.
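The quoted mean path length of approximately 1.2 m can be rationalized with a simple order-of-magnitude sketch. This assumes the standard average-chord-length formula for a sphere, $\bar{\ell} = 4r/3$, and an effective wall reflectance of about 0.97 (a value within the quoted ">95%" range, not stated explicitly in the text):

```python
def mean_path_length(radius_m, rho_bar):
    """Mean optical path in a diffusely reflecting sphere:
    average chord length (4r/3) times the mean number of
    bounces before the light is lost, 1/(1 - rho_bar)."""
    mean_chord = 4.0 * radius_m / 3.0
    return mean_chord / (1.0 - rho_bar)

# r = 25.4 mm; rho_bar = 0.97 is an assumed effective value
L_mean = mean_path_length(0.0254, 0.97)   # ~1.1 m, close to the quoted 1.2 m
```

The estimate is sensitive to the exact reflectance, so it should be read as a consistency check rather than a derivation of the 1.2 m figure.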
The amplitude modulation of the optical field is achieved by switching the laser current on and off, thus generating the PA signal at this particular modulation frequency. However, pure amplitude modulation is not easily achieved, as residual wavelength modulation also occurs. Alternatively, the PA sensor could be operated in the wavelength modulation mode, in which case the PA signal would be excited at the overtone frequency [@Schilt2006]. The data are collected with a DAQ card having a bandwidth of 9 kHz at 50 kS/s and 12 bit resolution.
The schematic of the experimental setup for the NO$_2$ measurements is shown in Fig. \[experimentsetup\](b), where the DFB laser has been substituted with a 405 nm LED having a 13 nm bandwidth (FWHM). The NO$_2$ molecule has a strong and broad absorption spectrum covering the 250-650 nm spectral region; however, below 415 nm photochemical dissociation of NO$_2$ occurs [@Pitts1964; @Barreiro2010]. Above 415 nm approximately 90$\%$ of the absorbed light is converted to heat/pressure through the PA effect. The data are processed using a lock-in amplifier. The LED modulation is controlled by a signal generator, which also acts as the local oscillator for the lock-in amplifier. The peak-to-peak modulation is 130 mW, of which approximately 80 mW is coupled into the sphere. The data from the lock-in amplifier are collected with a DAQ card with a bandwidth of 9 kHz at 256 kS/s and 16 bit resolution.
Results
-------
![Enhancement factors of the integrating sphere based PA sensor for (a) CO$_2$ and (b) NO$_2$. The black curves are the data recorded by Mic2 at the end of the organ pipe while the red curves are the data recorded by Mic1, situated inside the sphere. The enhancement factors for the two cases are indicated.[]{data-label="DataTwoMics"}](fig3.pdf){width="10cm"}
Fig. \[DataTwoMics\] shows the enhancement of the PA signal due to the acoustic resonance of the organ pipe tube. The PA signals from the two microphones are shown: one at the end of the organ pipe (Mic2, black curve) and one directly attached to the sphere (Mic1, red curve). The latter provides the non-resonant signal. The signal shown in Fig. \[DataTwoMics\](a) is due to a pure CO$_2$ filled sphere. The PA signal (sensitivity) is enhanced by a factor of approximately 58 compared with the non-resonant signal. It can be concluded that the Q-factor of the organ pipe is approximately 58. The signal shown in Fig. \[DataTwoMics\](b) is due to a sphere filled with 300 ppm NO$_2$ in N$_2$. The PA signal is enhanced by a factor of approximately 30. This enhancement factor would probably be slightly higher if the background absorption and the interference from ambient noise were taken into account. The enhancement due to the long path length provided by the sphere compared with a single-pass absorption cell can be estimated from Eq. (\[eq.Q multiplier\]) to be approximately 20 and 50 for the CO$_2$ and NO$_2$ setup, respectively. The total enhancement factor for the CO$_2$ and NO$_2$ measurements is approximately 1200 and 1500, respectively, compared with a simple single-pass non-resonant cell with the same incident optical power.
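The total enhancement quoted above is simply the product $Q \times M$ of the measured resonant Q-factors and the sphere multipliers from Eq. (\[eq.Q multiplier\]); a minimal sketch:

```python
# Measured resonant enhancements (Q) and sphere multipliers (M) from the text
Q_co2, M_co2 = 58, 20   # CO2 setup at 2 um
Q_no2, M_no2 = 30, 50   # NO2 setup at 405 nm

total_co2 = Q_co2 * M_co2   # 1160, i.e. approximately 1200
total_no2 = Q_no2 * M_no2   # 1500
```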
The SNR (PA signal over the background signal with no light, or SBR) of the high-concentration CO$_2$ measurements is approximately 4500. Note that with pure CO$_2$ the PA signal is saturated; the SNR would therefore be higher for pure CO$_2$ if there were no saturation effects. It is observed that diluting the pure CO$_2$ with atmospheric air down to a CO$_2$ concentration of 20$\%$ does not decrease the PA signal size. The saturation of the PA signal occurs when the pump becomes depleted in the highly dense gas, so that all light is absorbed within the first 20-30 cm of the integrating sphere. Since the mean light path length of the sphere is approximately 1.2 m, a 20$\%$ CO$_2$ mixture with atmospheric air therefore gives the same signal as pure CO$_2$. The pressure is kept at atmospheric pressure at all times. The absolute concentration, and thus the sensitivity of the sensor, is difficult to estimate from these high-concentration (pure) CO$_2$ measurements. By observing the shift in the resonance frequency due to the change in the speed of sound, which is 267 m/s and 343 m/s for pure CO$_2$ and atmospheric air, respectively, it is estimated that the concentration is approximately 15$\%$ CO$_2$. With this SNR the minimum detectable CO$_2$ concentration in the 2 $\mu$m region is approximately 30 ppm (SNR = 1) for a single shot. We would like to point out that the scope of this paper is to demonstrate the signal enhancement due to the integrating sphere as a simple kind of multi-pass absorption cell, and the further enhancement of the signal due to the attached organ pipe tube, not to demonstrate the absolute sensitivity of the system. However, the sensitivity can easily be enhanced by using higher optical power, better microphones and longer integration time, which would make the system comparable to state-of-the-art PA sensors.
![Data for measurement on a 300 ppm NO$_2$ mixture, SNR = 22 dB. (a) Sphere microphone PA signal (black curve) and background signal (red curve). (b) Tube microphone PA signal (black curve) and background signal (red curve). The eigenresonances are found at approximately 740 Hz and 2500 Hz for the tube microphone. (c) Monitoring of a 300 ppm NO$_2$ concentration over 3.5 minutes, resulting in a standard deviation of 0.9 ppm. []{data-label="NO2_sensor"}](fig4.pdf){width="10cm"}
The results of the NO$_2$ measurements are shown in Fig. \[NO2\_sensor\]. The measurements are processed with a lock-in amplifier with an integration time of 1 second. Fig. \[NO2\_sensor\] (a) and (b) show the frequency response, for the PA signals and background signals, of the microphones attached to the integrating sphere and the organ pipe tube, respectively. The SNR is approximately 160 for the organ pipe tube signal at a modulation frequency of 740 Hz. The minimum detectable NO$_2$ concentration is therefore approximately 1.9 ppm (SNR = 1). The largest contribution to the background seems to originate from external ambient noise sources, such as vacuum pumps, and therefore the SNR is probably considerably higher (500 or more). A SNR of 500 corresponds to a minimum detectable absorption coefficient of 3.8 $\cdot 10^{-7}$ cm$^{-1}$ Hz$^{1/2}$ W$^{-1}$, which is comparable to state-of-the-art PA NO$_2$ measurements [@Saarela2011]. The SNR for the PA signal measured with the sphere microphone is approximately 4 at 1500 Hz, thus the minimum detectable NO$_2$ concentration is approximately 75 ppm (SNR = 1). This relatively high background signal is due to the fact that the microphone is placed inside the sphere and is heated by the incident light. The tube microphone, on the other hand, is protected from stray light by being placed at the far end of the tube. The resonances of the tube can be observed at 740 Hz and 2500 Hz, as expected from the simulations discussed above. The sphere has a resonance at approximately 1500 Hz, in good agreement with the simulations discussed above. Note that the total acoustic pressure amplitude, and hence the Q-factor of the peaks, in the simulation does not agree very well with the experiment. This is attributed to the fact that our simulation does not take damping and general loss factors into account. Fig. 
\[NO2\_sensor\](c) shows the results of a continuous flow of 300 ppm NO$_2$ through the PA cell, resulting in a standard deviation (STD) of 0.9 ppm for measurements over 3.5 minutes. The flow was maintained in order to keep the concentration at the same level while photo-dissociation of the NO$_2$ molecules was active. It is expected that the STD would be lower by a factor of 10 if photo-dissociation were not present. When the flow of NO$_2$ into the sphere is stopped, the photochemical dissociation of the molecules becomes apparent and within 2 minutes the concentration drops by a factor of 10.
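The detection limits quoted above follow from simple linear scaling of the measured SNR down to SNR = 1; a sketch, assuming the PA signal is linear in concentration:

```python
def min_detectable_ppm(concentration_ppm, snr):
    """Concentration at which SNR drops to 1, assuming a linear PA response."""
    return concentration_ppm / snr

c_tube   = min_detectable_ppm(300, 160)   # ~1.9 ppm (organ pipe microphone)
c_sphere = min_detectable_ppm(300, 4)     # 75 ppm (sphere microphone)
```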
Background Signal
-----------------
These relatively high SNRs/SBRs of the PA signals are slightly surprising, since the mean reflectivity is only 95$\%$ (98$\%$) for the 2 $\mu$m (405 nm) measurements and in the case of an empty cell (no absorbance by gas) all incident light is absorbed in the cell walls (PTFE material). In standard PAS experiments such a high level of background absorption would be detrimental to the performance of the system. This surprising feature is due to a combination of mechanisms, namely the large light penetration depth, the low thermal diffusion length and heat conduction to the outer aluminium casing of the integrating sphere. The light penetration depth is around 1.39 mm at a wavelength of 633 nm [@Sidorov2012]. The diffusion length scale for pulsed heating is given by [@Bird1976]: $\mu_t = 2 \sqrt{\alpha t}$, where $\alpha$ is the thermal diffusivity, which is 0.124 mm$^2$/s for PTFE at 25 $^\circ$C, and $t$ is the pulse duration time. In our experiment the modulation frequency is around 700 Hz, which leads to a diffusion length of approximately 27 $\mu$m. Thus the light that penetrates the PTFE material deeper than 27 $\mu$m only contributes to the background absorption with a constant DC heating component and does not contribute an in-phase PA signal at 700 Hz. The fraction of absorbed light that contributes to the background signal is given by the ratio between the diffusion length and the penetration depth and is on the order of 2$\%$. In our CO$_2$ experiment the average optical power is 2 mW, the modulation frequency 700 Hz and the duty cycle 50-50. This would lead to a background signal of approximately 20 ppb. For comparison, consider aluminum as the cell material. Aluminum has a thermal diffusivity of 84 mm$^2$/s, which leads to a diffusion length of 700 $\mu$m; however, since the optical penetration depth is only a few nm, diffusion effects are negligible and all of the absorbed light contributes to an in-phase PA signal. 
The background signal would in this case be approximately 50 times larger than for the PTFE cell. Due to heat conduction and the insulation properties of the PTFE material the background signal is attenuated further. This has been modeled using a COMSOL simulation with the heat transfer module, where it was assumed that the material 15 $\mu$m into the PTFE layer is heated by 40 $\mu$K due to light absorption. From the simulation we find that the heating of the surface is “only” approximately 5 $\mu$K. This is due to heat being conducted away via the contact points to the aluminium outer casing of the integrating sphere.
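The heat-diffusion estimates above can be reproduced numerically; a minimal sketch using $\mu_t = 2\sqrt{\alpha t}$ with $t$ taken as one modulation period (an assumption consistent with the ~27 $\mu$m figure in the text):

```python
import math

def diffusion_length_um(alpha_mm2_s, f_mod_hz):
    """Thermal diffusion length mu_t = 2*sqrt(alpha*t), with t = 1/f."""
    mu_mm = 2.0 * math.sqrt(alpha_mm2_s / f_mod_hz)
    return mu_mm * 1e3   # mm -> um

mu_ptfe = diffusion_length_um(0.124, 700)   # ~27 um for PTFE
mu_al   = diffusion_length_um(84.0, 700)    # ~700 um for aluminum

# Fraction of absorbed light producing an in-phase PA signal in PTFE:
# diffusion length over optical penetration depth (~1.39 mm at 633 nm)
frac_in_phase = mu_ptfe / 1390.0            # ~2%
```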
These calculations, together with the experimental results, demonstrate that reduction of the background signal of typical PA measurements can be addressed by careful choice of the cell wall material in addition to careful scatter-free optical design. The heat diffusion and conduction mechanisms suggest that the integrating sphere based PA sensor may be used in the region from 2500 - 3500 nm even though the reflection of the PTFE material is less than $90\%$. This spectral region is very important for trace gas sensing, since the fundamental vibrations of many molecules lie in this spectral region [@Sigrist2008; @Demtroder2003].
Conclusion
==========
The scope of the paper was to demonstrate the PA signal enhancement and the versatility of an integrating sphere based PA sensor as a simple multi-pass absorption cell in combination with an attached organ pipe tube for signal enhancement. A PA enhancement factor of approximately 58(30) for CO$_2$(NO$_2$) for resonant over non-resonant excitation has been demonstrated as the result of attaching a 90 mm long organ pipe tube. Further enhancement has been demonstrated due to the integrating sphere itself, resulting in enhancement factors of 20(50) due to the increased path length. A total enhancement factor of approximately 1200(1500) relative to a simple single-pass non-resonant cell with the same incident optical power and microphones has been achieved. The minimum single-shot detectable CO$_2$ concentration in the 2 $\mu$m region is approximately 30 ppm and the minimum detectable NO$_2$ concentration is 1.9 ppm, both at SNR = 1 (or SBR = 1). It is anticipated that the sensitivity can be further enhanced by applying wavelength modulation techniques, more sensitive microphones, better acoustic insulation, higher optical power and differential measurements; however, this is outside the scope of this paper and will be the focus of the next generation sensor.
It has also been shown that the PA system can be decoupled from various background noise sources, such as the in-phase background absorption signal, due to thermal conduction effects and heat diffusion. We believe this is an important result, as it demonstrates that the background signal issue of typical PA measurements can be approached from a direction other than careful scatter-free designs. In the region from 2500 - 3500 nm the reflection of PTFE is less than $90\%$. Since we found that the background absorption signal is greatly attenuated due to thermal conduction of heat away from the absorption cell, we believe that the integrating sphere based PA sensor might be used in the region up to 3700 nm even though the reflectivity is relatively low. We foresee that this system can be made highly sensitive and versatile at the same time and be very cheap to produce, and therefore attract attention as a product in the rapidly growing sensor field for climate, environmental and industrial monitoring.\
**Acknowledgments**\
We acknowledge the financial support from the Danish Council for Technology and Production Sciences (Sapere Aude project no. 10-0935849).
---
abstract: 'We have measured the ultrafast anisotropic optical response of highly doped graphene to an intense single cycle terahertz pulse. The time profile of the terahertz-induced anisotropy signal at 800 nm has minima and maxima repeating those of the pump terahertz electric field modulus. It grows with increasing carrier density and demonstrates a specific nonlinear dependence on the electric field strength. To describe the signal, we have developed a theoretical model that is based on the energy and momentum balance equations and takes into account optical phonons of graphene and substrate. According to the theory, the anisotropic response is caused by the displacement of the electronic momentum distribution from zero momentum induced by the pump electric field in combination with polarization dependence of the matrix elements of interband optical transitions.'
author:
- 'A. A. Melnikov'
- 'A. A. Sokolik'
- 'A. V. Frolov'
- 'S. V. Chekalin'
- 'E. A. Ryabov'
title: Anisotropic ultrafast optical response of terahertz pumped graphene
---
Due to the peculiar electronic band structure of graphene [@CastroNeto; @DasSarma] the field-induced motion of electrons was predicted to be strongly nonlinear in this material [@Mikhailov]. High nonlinearity, together with its unique electronic and optical properties, makes graphene a promising material for photonic and optoelectronic applications. In light of this perspective, nonlinear optical phenomena in graphene are actively studied [@Bao; @Bonaccorso; @Ooi]. Among them are harmonic generation [@Hong; @Cheng; @Kumar; @Soavi; @Jiang; @Taucer; @Baudisch; @Yoshikawa; @Bowlan; @Hafez2018], saturable absorption [@Winzer; @Marini; @Cihan], self-phase modulation [@Vermeulen], and four-wave mixing [@Hendry; @Koenig]. In the optical range the frequency of light is higher than the electron-electron scattering rate, so the resulting “coherent” electronic response is determined by the properties of the single-electron band structure [@Taucer; @Baudisch; @Yoshikawa]. In the THz range the other limiting case is realized — the characteristic time of electron-electron scattering processes is shorter than the period of the light wave. The energy imparted by the electric field to the electrons is quickly redistributed, heating the electron gas, while electron-phonon collisions cool and decelerate the gas. The concept of “incoherent” nonlinearity, which appears due to the change of the electron gas conductivity upon heating [@Mics; @Razavipour], was employed recently to explain the highly effective generation of THz harmonics in graphene [@Hafez2018].
The routine technique used to evaluate the optical nonlinearity of graphene is the spectral analysis of light transmitted through the sample in search of harmonics of the pump frequency radiated by the nonlinear current. In the present work we employ an alternative approach by using an optical probe to detect the transient THz field-induced shift of electron momentum distribution (note that such shift can induce the optical 2-nd harmonic generation, as was recently observed [@Tokman]). In graphene, due to the specific polarization dependence of matrix elements of interband transitions in combination with Pauli blocking, an anisotropy of electronic distribution implies an anisotropy of infrared optical conductivity, which can be measured by detecting depolarization of probe light reflected from the sample. We measure the ultrafast anisotropic optical response of graphene to intense THz pulses and show that though the corresponding signal is rather weak, it can be reliably detected for heavily doped graphene and contains specific nonlinear features. To interpret the signal, we develop a model based on the Boltzmann kinetic equation solved in the hydrodynamic approximation.
The sample used in our experiments was a sheet of single-layer CVD graphene on the SiO$_2$/Si substrate (the thickness of SiO$_2$ was 300 nm). Four indium contacts were attached to the sample in order to apply gate voltage and to measure the resistance of the graphene layer. Nearly single-cycle THz pulses with a duration of about 1 ps were generated in a lithium niobate crystal in the process of optical rectification of femtosecond laser pulses with tilted fronts (see, e.g., Ref. [@Stepanov] for details). The THz generation stage was fed by 50 fs laser pulses at 800 nm, 1.2 mJ per pulse at 1 kHz repetition rate. THz radiation was focused by a parabolic mirror so that the peak electric field of the THz pulses incident on the sample was $\sim$ 400 kV/cm (denoted below as $E_\mathrm{max}$). The waveform of the pulses was characterized by means of electro-optic detection in a 0.15 mm thick (110)-cut ZnTe crystal. The central frequency of the THz pulse was $\sim$ 1.5 THz, while its spectral width $\sim$ 2 THz (FWHM). In the experiments we detected transient anisotropic changes of reflectance of the sample caused by the pump THz pulses. The probe 50 fs pulses at 800 nm were polarized before the sample at 45$^{\circ}$ relative to the vertical polarization of pump THz pulses. Both pump and probe beams were incident onto the sample at an angle of $\sim$ 7$^{\circ}$. Upon reflection from the sample excited by THz radiation the probe pulses experienced a small rotation of polarization, which was detected by measuring the intensities of two orthogonal polarization components of the reflected probe beam $I_{r,x}$ and $I_{r,y}$ using a Wollaston prism and a pair of photodiodes. The quantity $$\begin{aligned}
F=1-\frac{I_{r,y}}{I_{r,x}}\label{F}\end{aligned}$$ as a function of the probe pulse delay time is referred to as the anisotropy signal or anisotropic response.
The anisotropic response of the sample induced by the pump THz pulse is shown in Fig. \[Fig1\] for the peak values of the electric field of $E_\mathrm{max}$ and $E_\mathrm{max}/2$. Since the signal from the regions of the SiO$_2$/Si substrate not covered by graphene was below the noise level, we conclude that the source of the observed anisotropy signal was the graphene layer itself. Fig. \[Fig1\] also shows the temporal profile of the electric field of the pump THz pulse. In order to ensure linearity of the electro-optic detection in the ZnTe crystal we attenuated the pump THz beam power by a factor of $\sim$ 400 using a variable metallic filter. To record the electric field profile the filter was “closed” so that the THz beam passed through the fused silica plate covered by the thickest metallic layer. The sample response at $E_\mathrm{max}/2$ was measured with the “opened” filter, so that the pump THz pulses passed only through fused silica (the 2 mm thick fused silica plate reduces the THz field by a factor of $\approx2$). Finally, in order to detect the anisotropic response induced by the strongest pump electric field available ($E_\mathrm{max}$) we removed the variable filter so that the THz radiation traveled to the sample only through air. Since the fused silica plate causes a large additional retardation of the THz pulses, the anisotropic response measured at $E_\mathrm{max}$ was time-shifted so that it matched the signal detected at $E_\mathrm{max}/2$ in the time domain. As follows from Fig. \[Fig1\], the third peak in the signal measured at $E_\mathrm{max}$ occurs earlier than the corresponding peaks in the signal detected at $E_\mathrm{max}/2$ and in the electric field profile $|E(t)|$. This effect is due to group velocity dispersion in the fused silica plate, which leads to a $\sim$ 20% lengthening of the pulse and of the signal. The variation of the relative amplitude of the third peak in $|E(t)|$ caused by the plate is negligible.
To estimate the doping level of graphene, we measured the resistance of the sample as a function of gate voltage $V_\mathrm{g}$ soon after its preparation (thick line in the inset to Fig. \[Fig2\]). We approximated this dependence by the formula $R(V_\mathrm{g})\approx
R_0+A/|V_\mathrm{g}-V_\mathrm{CNP}|+B/|V_\mathrm{g}-V_\mathrm{CNP}|^{3/2}+C/|V_\mathrm{g}-V_\mathrm{CNP}|^2$ with the charge neutrality point location $V_\mathrm{CNP}\approx65\,\mbox{V}$, which takes into account short- and long-range impurities [@DasSarma] (the first two terms), and corrections proportional to higher powers of the inverse Fermi momentum (the last two terms). This approximation was extrapolated to the measured values of $R$, allowing us to estimate the current Fermi level position $\mu$. We found that, when the ultrafast measurements were performed several months later, the Fermi level of graphene had shifted considerably, probably due to doping by water molecules adsorbed from ambient air. The shift was of such magnitude as if an effective gate voltage $V_{\mathrm{g}0}\approx-125\,\mbox{V}$ had been applied. Application of the real gate voltages $\mp30\,\mbox{V}$, which were effectively added to $V_{\mathrm{g}0}$ resulting in the total effective gate voltages $V_{\mathrm{g}0}\mp30\,\mbox{V}$, allowed us to increase (decrease) the charge carrier concentration, leading to an increase (decrease) of the anisotropy signal. The experiment illustrated by Fig. \[Fig1\] was performed even later than the one whose results are shown in Fig. \[Fig2\]. The doping level in this case was estimated as $\mu\approx-500\,\mbox{meV}$, corresponding to the hole density $n\approx2\times10^{13}\,\mbox{cm}^{-2}$. (In calculations below we assume positive $\mu$ for clarity, because our model is particle-hole symmetric.)
Time evolution of the electron gas in highly doped graphene under intense THz field $\mathbf{E}(t)$ is dominated by its intraband dynamics [@Tani; @Hafez; @Tomadin], described in terms of two separate momentum distribution functions $f_\pm(\mathbf{k},t)$ for electrons in conduction and valence bands. Time evolution of these functions is described by the semiclassical Boltzmann kinetic equation $$\begin{aligned}
\frac{\partial f_\gamma}{\partial t}=-\frac{e\mathbf{E}}\hbar\cdot\frac{\partial f_\gamma}{\partial\mathbf{k}}
+\frac{\langle f_\gamma\rangle+\mathbf{n}\cdot\langle\mathbf{n}f_\gamma\rangle-f_\gamma}{\tau_\mathrm{imp}(k)}
\nonumber\\+\Gamma_\gamma^\mathrm{in}(1-f_\gamma)-\Gamma_\gamma^\mathrm{out}f_\gamma+ \left(\frac{\partial
f_\gamma}{\partial t}\right)_\mathrm{ee}.\label{Boltzmann}\end{aligned}$$ The terms in the right hand side describe, respectively, electron acceleration by the applied electric field, elastic collisions with impurities [@Kashuba] with momentum-dependent scattering time $\tau_\mathrm{imp}(k)$, electron-phonon and electron-electron collisions. $\Gamma_\gamma^\mathrm{in,out}(\mathbf{k},t)$ are the rates of electron scattering into the $\mathbf{k}\gamma$ state and out of this state [@Malic; @Kim; @Brida].
In our case, the interband dynamics of the electron gas induced by the THz field should be slow compared with the fast electron-electron collisions, which thermalize the electron gas on a time scale of less than $30\,\mbox{fs}$ [@Johanssen; @Gierz]. Consequently, $f_\gamma(\mathbf{k},t)$ can be taken in the form of the “hydrodynamic” distribution function $$\begin{aligned}
f^\mathrm{drift}_\gamma(\mathbf{k},t)=\left\{\exp\left[\frac{\epsilon_{k\gamma}-\hbar\mathbf{k}\cdot\mathbf{V}(t)
-\mu(t)}{T(t)}\right]+1\right\}^{-1},\label{drift}\end{aligned}$$ which is formed due to electron-electron collisions with conservation of the total energy, momentum, and particle number [@Kashuba; @Briskot]. Here $\epsilon_{k\gamma}=\hbar\gamma v_\mathrm{F}k$ are the single-particle energies, while the temperature $T$, chemical potential $\mu$, and drift velocity $\mathbf{V}$ are slowly varying functions of time. Fig. \[Fig3\] shows examples of (\[drift\]) for $n$-doped graphene. Note that owing to the linear dispersion in graphene the distribution function (\[drift\]) is not just a shifted Fermi sphere, as it would be in the case of massive electrons, but rather describes a gas with an anisotropic temperature. The combined action of the strong THz field, which accelerates the electrons, and the rapid thermalization makes the distribution function elongated in the direction of the THz field, while the subsequent impurity and phonon scattering tends to make $f_\gamma(\mathbf{k},t)$ isotropic, leading to electron gas heating.
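The elongation mechanism can be illustrated with a minimal numerical sketch (not the code used for the figures; all parameter values are illustrative assumptions): the $\cos2\varphi$ harmonic of the distribution (\[drift\]) at a fixed probed energy grows quadratically with the drift velocity, the lowest symmetry-allowed order.

```python
import numpy as np

# Drift distribution (drift) at fixed energy eps; energies in eV, k_B = 1.
# mu, T and eps_pr are illustrative assumptions, not fitted values.
def f_drift(eps, phi, V_over_vF, mu=0.5, T=0.05):
    x = (eps - eps * V_over_vF * np.cos(phi) - mu) / T
    return 1.0 / (np.exp(x) + 1.0)

phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
dphi = phi[1] - phi[0]
eps_pr = 0.775   # probed energy, eV (assumed)

def a2(V_over_vF):
    # second angular harmonic of the conduction-band occupation
    return np.sum(np.cos(2.0 * phi) * f_drift(eps_pr, phi, V_over_vF)) * dphi

ratio = a2(0.02) / a2(0.01)
print(f"a2(2V)/a2(V) = {ratio:.2f}")  # close to 4, i.e. a2 grows as V^2
```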
A nonzero drift velocity $V_x$ (we take $\mathbf{E}$ and $\mathbf{V}$ along the $x$ axis) makes the distribution functions (\[drift\]) angularly anisotropic at the probe pulse wave vector modulus $|\mathbf{k}|=k_\mathrm{pr}=\omega_\mathrm{pr}/(2v_\mathrm{F})$. In combination with the angular dependence of the matrix elements of interband transitions [@Malic], this leads to the anisotropy of the optical conductivity tensor at $\omega=\omega_\mathrm{pr}$: $$\begin{aligned}
\left\{\begin{array}{ll}\sigma_{xx}\\
\sigma_{yy}\end{array}\right\}=\frac{e^2}{4\pi\hbar}\int\limits_0^{2\pi}d\varphi\:
\left\{\begin{array}{lr}\sin^2\varphi\\
\cos^2\varphi\end{array}\right\}\left.(f_--f_+)\right|_{k=k_\mathrm{pr}}.\end{aligned}$$
The small difference between $\sigma_{xx}$ and $\sigma_{yy}$ manifests itself in the reflectances $R_{x,y}$ of the whole graphene/SiO$_2$/Si structure for the $x$- and $y$-polarized probe pulses at normal incidence. Defining the optical contrast of graphene on a substrate as $C\approx-(\sigma/R)(\partial R/\partial\sigma)$ [@Blake; @Casiraghi; @Fei], we can calculate the anisotropy signal (\[F\]) as $$\begin{aligned}
F=\frac{R_x-R_y}{R_x}\approx C\frac{\sigma_{yy}-\sigma_{xx}}{e^2/4\hbar}\nonumber\\
=\frac{C}{\pi}\int\limits_0^{2\pi}d\varphi\:\cos2\varphi\left.(f_--f_+)\right|_{k=k_\mathrm{pr}}.\label{F_theor}\end{aligned}$$ For graphene on Si covered by a $300\,\mbox{nm}$-thick $\mathrm{SiO}_2$ layer, the optical contrast at $\lambda_\mathrm{pr}=800\,\mbox{nm}$ is rather small and negative [@Casiraghi]. Calculating it using the transfer matrix method [@Fei], which allows us to take into account multiple reflections from graphene and the Si substrate, and taking the universal optical conductivity of graphene, $\sigma=e^2/4\hbar$, we get $C\approx-0.0044$. In principle, the observed signal could be increased by adjusting the $\mathrm{SiO}_2$ layer thickness [@Blake] so as to enhance the visibility of graphene.
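A minimal sketch of Eq. (\[F\_theor\]) with assumed (not fitted) parameters reproduces the two qualitative properties discussed next: the signal is positive for negative $C$, and it is even in the drift velocity.

```python
import numpy as np

C = -0.0044              # optical contrast quoted in the text
mu, T = 0.5, 0.05        # assumed chemical potential and temperature, eV
eps_pr = 0.775           # assumed probed energy hbar*omega_pr/2, eV
phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
dphi = phi[1] - phi[0]

def F_signal(V_over_vF):
    # F = (C/pi) * integral of cos(2*phi) * (f_- - f_+) at k = k_pr
    def occ(gamma):
        x = (gamma * eps_pr - eps_pr * V_over_vF * np.cos(phi) - mu) / T
        return 1.0 / (np.exp(x) + 1.0)
    return (C / np.pi) * np.sum(np.cos(2.0 * phi) * (occ(-1) - occ(+1))) * dphi

Fp, Fm = F_signal(0.05), F_signal(-0.05)
print(Fp > 0, np.isclose(Fp, Fm))  # True True: F positive (C < 0) and even in V
```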
The physical origin of the optical anisotropy is illustrated by Fig. \[Fig3\](b). The interband transitions for the $y$-polarized light become suppressed with respect to those for the $x$ polarization due to Pauli blocking, caused by the thermal tail of the displaced distribution function at $V_x\neq0$. The resulting difference of the conductivities, $\sigma_{yy}<\sigma_{xx}$, leads to a positive anisotropy signal (\[F\_theor\]) since $C$ is negative. This picture is symmetric when $V_x$ changes sign, so in the limit of low drift velocity $F\propto V_x^2$. Unlike studies with linearly polarized optical pump [@Mittendorff; @Trushin; @Malic; @Yan1; @Danz; @Konig-Otto], where momentum distribution of the photoexcited electrons and holes is highly anisotropic ($\sim\sin^2\varphi$) from the very beginning in spite of the zero total momentum, in our case the anisotropy arises as the electron Fermi sphere is displaced from zero momentum by the strong THz field. In both cases the distribution functions acquire nonzero second angular harmonics ($\sim\cos2\varphi$) that is necessary for the anisotropy of the optical response.
We solve the Boltzmann equation (\[Boltzmann\]) in the hydrodynamic approximation (\[drift\]), using balance equations for the total energy, momentum, and particle number of the electron gas, similarly to the works on electron transport in graphene in stationary high electric fields [@Bistritzer; @DaSilva]. In these equations, the energy and momentum time derivatives caused by phonons are calculated using full electron-phonon collision integrals. We consider six phonon modes: four modes of graphene $\boldsymbol\Gamma$ and $\mathbf{K}$ optical phonons [@Malic; @Kim; @Brida; @Butscher] and two modes of $\mathrm{SiO}_2$ surface polar phonons [@Fratini; @Konar; @Perebeinos; @Yan2]. We assume polarization- and momentum-independent phonon occupation numbers $n_\mu=[\exp(\hbar\omega_\mu/T_\mu)-1]^{-1}$ determined by two separate temperatures for the graphene optical ($T_\mathrm{GO}$) and surface polar ($T_\mathrm{SPP}$) phonons. Since hot phonons play an important role in the electron gas dynamics in strong fields [@Malic; @Butscher; @Perebeinos], we calculate the time evolution of $T_\mathrm{GO}$ and $T_\mathrm{SPP}$ from the energy balance for the corresponding phonon gases, which exchange energy with the electrons and additionally lose energy via phonon decay with the characteristic times $\tau_\mathrm{ph}\approx 2\,\mbox{ps}$ [@Johanssen; @Gierz] and $\tau_\mathrm{SPP}\approx1\,\mbox{ps}$ [@Yan2]. For the scattering time on long-range impurities, relevant for graphene on a $\mathrm{SiO}_2$ substrate [@Al-Naib], we take $\tau_\mathrm{imp}(k)\approx s k$, where $s$ can be related to the low-field carrier mobility $\mu_\mathrm{c}=2ev_\mathrm{F}s/\hbar$, which is about $1000\,\mbox{cm}^2/\mbox{V}\cdot\mbox{s}$ in our sample. In the numerical calculations we use a THz electric field strength of $\sim70\,\mbox{kV/cm}$, which is several times lower than the incident field $E_\mathrm{max}$. This reduction of the field acting on graphene electrons is caused by the destructive interference of the incident THz wave with the wave reflected from the underlying $p$-doped Si substrate with the reflectivity $R_\mathrm{THz}\approx0.6$–$0.7$ [@Ray].
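As an order-of-magnitude check (a sketch; it reads the momentum dependence in the dimensionally consistent form $\tau_\mathrm{imp}(k)=sk$, and all input numbers are the values quoted above):

```python
import math

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C
v_F = 1.0e6              # m/s (assumed)
mu_c = 1000e-4           # m^2/(V s), low-field mobility from the text
mu = 0.5 * e             # J, Fermi level of 0.5 eV (assumed)

s = hbar * mu_c / (2.0 * e * v_F)   # from mu_c = 2 e v_F s / hbar
k_F = mu / (hbar * v_F)             # Fermi wave number
tau = s * k_F                       # tau_imp at k = k_F, reading tau = s k
print(f"tau_imp(k_F) = {tau * 1e15:.0f} fs")   # prints tau_imp(k_F) = 25 fs
```

A momentum relaxation time of a few tens of femtoseconds is consistent with the sub-cycle dynamics discussed below.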
Typical calculation results for $F$ are shown in Fig. \[Fig1\]. We take the doping level $\mu=500\,\mbox{meV}$ and the values $70\,\mbox{kV/cm}$ and $35\,\mbox{kV/cm}$ for the peak strength of the electric field acting on graphene electrons. One can see that our numerical model reproduces the magnitude of $F$ and the general similarity of $F(t)$ and $|E(t)|$ relatively well at realistic parameters. Fig. \[Fig2\] illustrates the dependence of the calculated $F$ on the doping level. For the calculation we used Fermi levels $\mu=430$ and $480\,\mbox{meV}$ for the cases of lower and higher doping. These values were close to those extracted from the resistance measurements and allowed us to reproduce the experimental results relatively well. Both the theory and the experiment demonstrate the same qualitative effect — $F$ grows with increasing doping level. However, generally the theory predicts highly nonlinear doping dependence of $F$, especially for strong THz fields. Note that increasing $|\mu|$ or decreasing $\omega_\mathrm{pr}$ in order to bring optically probed energy regions $\pm\hbar\omega_\mathrm{pr}/2$ closer to the Fermi level would significantly increase the anisotropy signal.
The detected anisotropic response of graphene, however, contains specific features that our model is not able to reproduce. First, the third peak behaves differently from the first two: its growth upon doubling the electric field is considerably higher ($\sim4.5$) than for the first two peaks ($\sim2.5$), and is underestimated by the theory. This anomalous behavior at the end of the THz pulse can be caused by the heating of the electron gas in graphene, whose temperature is expected to be maximal after the action of the peak electric field, as shown by the calculated profiles of $T(t)$ in Fig. \[Fig4\](a). It should also be noted that the rise time of the third peak is the shortest of all three peaks ($\sim50\,\mbox{fs}$) and is comparable to the characteristic time of electron-electron interactions in graphene, so in this regime our hydrodynamic approximation (\[drift\]) can miss some features of the coherent collisionless dynamics of electrons driven by the high electric field.
One more interesting property of the anisotropic response is the sharp bend, or kink, observed in the signal at $\sim40\,\mbox{fs}$, after the THz electric field has reached its maximum and just begun to decrease. It is visible in the signals recorded at both field strengths and is marked by arrows in the inset to Fig. \[Fig1\]. One can see that due to this kink the form of the anisotropy signal differs considerably from the THz waveform. The latter evolves smoothly, similarly to a sine wave, while the former resembles a wave crest, indicating the nonlinearity of the THz response of graphene. Such behavior of the anisotropic signal near the peak electric field can be a signature of similar nonlinear features in the THz-induced current, although we do not measure the latter directly in our experiment.
Finally, in view of the long-standing search for a nonlinear current response of graphene in the THz range [@Mikhailov; @Bowlan; @Hafez2018], we calculate the electric current density $j$ (Fig. \[Fig4\](b)). The electric current demonstrates strong nonlinearities: first, its peak values change insignificantly when the electric field is doubled, which can be considered a manifestation of electric current saturation [@DaSilva; @Perebeinos], and, second, $j$ becomes lower at the same field strength near the end of the pulse, which can be attributed to the influence of electron gas heating.
In conclusion, we have measured the ultrafast anisotropic optical response of highly doped graphene under intense THz excitation and developed a model of the temporal dynamics of the momentum distribution functions based on the Boltzmann equation, solved in the hydrodynamic approximation. The theoretical calculations provide a good description of the general shape and magnitude of the anisotropy signal at realistic parameters, and also predict strong nonlinearities of the THz-field-induced electric current. We demonstrate that the anisotropic optical response measured with subcycle temporal resolution contains information on the ultrafast dynamics of the electron gas, its heating, isotropization, and concomitant nonlinearities. Our work links the areas of nonlinear THz electrodynamics of graphene [@Mikhailov], ultrafast pseudospin dynamics of Dirac electrons [@Mittendorff; @Trushin; @Malic; @Yan1; @Danz; @Konig-Otto], and strong-current graphene physics [@DaSilva; @Perebeinos], thereby providing an alternative tool for studying high-field phenomena in graphene in the far-IR and THz range.
This work was supported by the Ministry of Education and Science of the Russian Federation (Project No. RFMEFI61316X0054). The experiments were performed using the Unique Scientific Facility “Multipurpose femtosecond spectroscopic complex” of the Institute for Spectroscopy of the Russian Academy of Sciences.
[99]{}
A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. **81**, 109 (2009).
S. Das Sarma, S. Adam, E. H. Hwang, and E. Rossi, Rev. Mod. Phys. **83**, 407 (2011).
S. A. Mikhailov, EPL **79**, 27002 (2007).
Q. Bao and K. P. Loh, ACS Nano **6**, 3677 (2012).
F. Bonaccorso, Z. Sun, T. Hasan, and A. C. Ferrari, Nat. Photon. **4**, 611 (2010).
K. J. A. Ooi and D. T. H. Tan, Proc. R. Soc. A **473**, 20170433 (2017).
S.-Y. Hong, J. I. Dadap, N. Petrone, P.-C. Yeh, J. Hone, and R. M. Osgood, Jr., Phys. Rev. X **3**, 021014 (2013).
L. Cheng, N. Vermeulen, and J. E. Sipe, New J. Phys. **16**, 053014 (2014).
N. Kumar, J. Kumar, C. Gerstenkorn, R. Wang, H.-Y. Chiu, A. L. Smirl, and H. Zhao, Phys. Rev. B **87**, 121406(R) (2013).
G. Soavi, G. Wang, H. Rostami, D. G. Purdie, D. De Fazio, T. Ma, B. Luo, J. Wang, A. K. Ott, S. A. Bourelle, J. E. Muench, I. Goykhman, S. Dal Conte, M. Celebrano, A. Tomadin, M. Polini, G. Cerullo, and A. C. Ferrari, Nat. Nanotechnol. **13**, 583 (2018).
T. Jiang, D. Huang, J. Cheng, X. Fan, Z. Zhang, Y. Shan, Y. Yi, Y. Dai, L. Shi, K. Liu, C. Zeng, J. Zi, J. E. Sipe, Y.-R. Shen, W.-T. Liu, and S. Wu, Nat. Photon. **12**, 430 (2018).
M. Taucer, T. J. Hammond, P. B. Corkum, G. Vampa, C. Couture, N. Thiré, B. E. Schmidt, F. Légaré, H. Selvi, N. Unsuree, B. Hamilton, T. J. Echtermeyer, and M. A. Denecke, Phys. Rev. B **96**, 195420 (2017).
M. Baudisch, A. Marini, J. D. Cox, T. Zhu, F. Silva, S. Teichmann, M. Massicotte, F. Koppens, L. S. Levitov, F. J. García de Abajo, and J. Biegert, Nat. Commun. **9**, 1018 (2018).
N. Yoshikawa, T. Tamaya, and K. Tanaka, Science **356**, 736 (2017).
P. Bowlan, E. Martinez-Moreno, K. Reimann, T. Elsaesser, and M. Woerner, Phys. Rev. B **89**, 041408(R) (2014).
H. A. Hafez, S. Kovalev, J.-C. Deinert, Z. Mics, B. Green, N. Awari, M. Chen, S. Germanskiy, U. Lehnert, J. Teichert, Z. Wang, K.-J. Tielrooij, Z. Liu, Z. Chen, A. Narita, K. Müllen, M. Bonn, M. Gensch, and D. Turchinovich, Nature **561**, 507 (2018).
T. Winzer, A. Knorr, M. Mittendorff, S. Winnerl, M.-B. Lien, D. Sun, T. B. Norris, M. Helm, and E. Malic, Appl. Phys. Lett. **101**, 221115 (2012).
A. Marini, J. D. Cox, and F. J. García de Abajo, Phys. Rev. B **95**, 125408 (2017).
C. Cihan, C. Kocabas, U. Demirbas, and A. Sennaroglu, Opt. Lett. **43**, 3969 (2018).
N. Vermeulen, D. Castelló-Lurbe, M. Khoder, I. Pasternak, A. Krajewska, T. Ciuk, W. Strupinski, J. Cheng, H. Thienpont, and J. Van Erps, Nat. Commun. **9**, 2675 (2018).
E. Hendry, P. J. Hale, J. Moger, A. K. Savchenko, and S. A. Mikhailov, Phys. Rev. Lett. **105**, 097401 (2010).
J. C. König-Otto, Y. Wang, A. Belyanin, C. Berger, W. A. de Heer, M. Orlita, A. Pashkin, H. Schneider, M. Helm, and S. Winnerl, Nano Lett. **17**, 2184 (2017).
Z. Mics, K.-J. Tielrooij, K. Parvez, S. A. Jensen, I. Ivanov, X. Feng, K. Müllen, M. Bonn, and D. Turchinovich, Nat. Commun. **6**, 7655 (2015).
H. Razavipour, W. Yang, A. Guermoune, M. Hilke, D. G. Cooke, I. Al-Naib, M. M. Dignam, F. Blanchard, H. A. Hafez, X. Chai, D. Ferachou, T. Ozaki, P. L. Levesque, and R. Martel, Phys. Rev. B **92**, 245421 (2015).
M. Tokman, S. B. Bodrov, Y. A. Sergeev, A. I. Korytin, I. Oladyshkin, Y. Wang, A. Belyanin, and A. N. Stepanov, arXiv:1812.10192.
A. G. Stepanov, J. Hebling, and J. Kuhl, Appl. Phys. Lett. **83**, 3000 (2003).
I. Al-Naib, M. Poschmann, and M. M. Dignam, Phys. Rev. B **91**, 205407 (2015).
S. Fratini and F. Guinea, Phys. Rev. B **77**, 195415 (2008).
A. Konar, T. Fang, and D. Jena, Phys. Rev. B **82**, 115452 (2010).
V. Perebeinos and P. Avouris, Phys. Rev. B **81**, 195442 (2010).
H. A. Hafez, I. Al-Naib, M. M. Dignam, Y. Sekine, K. Oguri, F. Blanchard, D. G. Cooke, S. Tanaka, F. Komori, H. Hibino, and T. Ozaki, Phys. Rev. B **91**, 035422 (2015).
S. Tani, F. Blanchard, and K. Tanaka, Phys. Rev. Lett. **109**, 166603 (2012).
A. Tomadin, S. M. Hornett, H. I. Wang, E. M. Alexeev, A. Candini, C. Coletti, D. Turchinovich, M. Klaui, M. Bonn, F. H. L. Koppens, E. Hendry, M. Polini, and K.-J. Tielrooij, Sci. Adv. **4**, eaar5313 (2018).
O. Kashuba, B. Trauzettel, and L. W. Molenkamp, Phys. Rev. B **97**, 205129 (2018).
R. Kim, V. Perebeinos, and P. Avouris, Phys. Rev. B **84**, 075449 (2011).
D. Brida, A. Tomadin, C. Manzoni, Y. J. Kim, A. Lombardo, S. Milana, R. R. Nair, K. S. Novoselov, A. C. Ferrari, G. Cerullo, and M. Polini, Nat. Commun. **4**, 1987 (2013).
E. Malic, T. Winzer, E. Bobkin, and A. Knorr, Phys. Rev. B **84**, 205406 (2011).
J. C. Johannsen, S. Ulstrup, F. Cilento, A. Crepaldi, M. Zacchigna, C. Cacho, I. C. E. Turcu, E. Springate, F. Fromm, C. Raidel, T. Seyller, F. Parmigiani, M. Grioni, and P. Hofmann, Phys. Rev. Lett. **111**, 027403 (2013).
I. Gierz, J. C. Petersen, M. Mitrano, C. Cacho, I. C. E. Turcu, E. Springate, A. Stöhr, A. Köhler, U. Starke, and A. Cavalleri, Nat. Mater. **12**, 1119 (2013).
U. Briskot, M. Schütt, I. V. Gornyi, M. Titov, B. N. Narozhny, and A. D. Mirlin, Phys. Rev. B **92**, 115426 (2015).
P. Blake, E. W. Hill, A. H. Castro Neto, K. S. Novoselov, D. Jiang, R. Yang, T. J. Booth, and A. K. Geim, Appl. Phys. Lett. **91**, 063124 (2007).
C. Casiraghi, A. Hartschuh, E. Lidorikis, H. Qian, H. Harutyunyan, T. Gokus, K. S. Novoselov, and A. C. Ferrari, Nano Lett. **7**, 2711 (2007).
Z. Fei, Y. Shi, L. Pu, F. Gao, Y. Liu, L. Sheng, B. Wang, R. Zhang, and Y. Zheng, Phys. Rev. B **78**, 201402(R) (2008).
M. Mittendorff, T. Winzer, E. Malic, A. Knorr, C. Berger, W. A. de Heer, H. Schneider, M. Helm, and S. Winnerl, Nano Lett. **14**, 1504 (2014).
X.-Q. Yan, J. Yao, Z.-B. Liu, X. Zhao, X.-D. Chen, C. Gao, W. Xin, Y. Chen, and J.-G. Tian, Phys. Rev. B **90**, 134308 (2014).
M. Trushin, A. Grupp, G. Soavi, A. Budweg, D. De Fazio, U. Sassi, A. Lombardo, A. C. Ferrari, W. Belzig, A. Leitenstorfer, and D. Brida, Phys. Rev. B **92**, 165429 (2015).
J. C. König-Otto, M. Mittendorff, T. Winzer, F. Kadi, E. Malic, A. Knorr, C. Berger, W. A. de Heer, A. Pashkin, H. Schneider, M. Helm, and S. Winnerl, Phys. Rev. Lett. **117**, 087401 (2016).
T. Danz, A. Neff, J. H. Gaida, R. Bormann, C. Ropers, and S. Schäfer, Phys. Rev. B **95**, 241412(R) (2017).
R. Bistritzer and A. H. MacDonald, Phys. Rev. B **80**, 085109 (2009).
A. M. DaSilva, K. Zou, J. K. Jain, and J. Zhu, Phys. Rev. Lett. **104**, 236601 (2010).
S. Butscher, F. Milde, M. Hirtschulz, E. Malić, and A. Knorr, Appl. Phys. Lett. **91**, 203103 (2007).
H. Yan, T. Low, W. Zhu, Y. Wu, M. Freitag, X. Li, F. Guinea, P. Avouris, and F. Xia, Nat. Photonics **7**, 394 (2013).
S. K. Ray, T. N. Adam, R. T. Troeger, J. Kolodzey, G. Looney, and A. Rosen, J. Appl. Phys. **95**, 5301 (2004).
---
abstract: |
We explicitly determine all magnetic curves corresponding to the Killing magnetic fields on the 3-dimensional Euclidean space.
[**Keywords and Phrases.**]{} Killing magnetic field, Lorentz force, magnetic curve.
[**2010 MSC:**]{} 53A04, 65D17
address:
- |
’Al. I. Cuza’ University of Iaşi\
Department of Sciences\
Lascăr Catargi Street, no. 54\
700107 Iaşi, Romania\
email: [simona.druta (at) uaic.ro]{}
- |
’Al. I. Cuza’ University of Iaşi, Faculty of Mathematics\
Bd. Carol I, no. 11, 700506 Iaşi, Romania\
<http://www.math.uaic.ro/~munteanu>
- |
Michigan State University\
Department of Mathematics\
Wells Hall\
48824-1029 East Lansing\
USA, email: [marian.ioan.munteanu (at) gmail.com]{}
author:
- 'Simona Luiza Druţă-Romaniuc'
- Marian Ioan Munteanu
title: 'Magnetic curves corresponding to Killing magnetic fields in ${\mathbb{E}}^3$'
---
Introduction
============
The geodesic flow on a Riemannian manifold represents the extremals of the least action principle; namely, it is determined by the motion of a certain physical system in the manifold. It is known that the geodesic equations are second-order nonlinear differential equations, and they usually appear in the form of Euler-Lagrange equations of motion. Magnetic curves generalize geodesics. In physics, such a curve represents the trajectory of a charged particle moving on the manifold under the action of a magnetic field.
Let $(M, g)$ be an $n$-dimensional Riemannian manifold. A [*magnetic field*]{} is a closed 2-form $F$ on $M$ and the [*Lorentz force*]{} of a magnetic field $F$ on $(M, g)$ is a $(1,1)$-tensor field $\Phi$ given by $$\label{Lorentzforce}
g(\Phi(X), Y) = F(X, Y), \quad \forall X,Y\in \chi(M).$$ The [*magnetic trajectories*]{} of $F$ are curves $\gamma$ on $M$ that satisfy the [*Lorentz equation*]{} (sometimes called the [*Newton equation*]{}) $$\label{Lorentzeq}
\nabla_{\gamma^\prime}\gamma^\prime=\Phi(\gamma^\prime).$$
The Lorentz equation generalizes the equation satisfied by the geodesics of $M$, namely $$\nabla_{\gamma^\prime}\gamma^\prime=0.$$ Therefore, from the point of view of dynamical systems, a geodesic corresponds to the trajectory of a particle moving without the action of a magnetic field, while a magnetic trajectory is [*a flowline of the dynamical system*]{} associated with the magnetic field.
Since the Lorentz force is skew-symmetric we have $$\frac{d}{dt}g(\gamma^\prime,\gamma^\prime)=2g(\nabla_{\gamma^\prime}\gamma^\prime,\gamma^\prime)=0,$$ so the magnetic curves (trajectories) have constant speed $v(t) =||\gamma^\prime|| = v_0$. When the magnetic curve $\gamma(t)$ is parametrized by arc length $(v_0 = 1)$, it is called a [*normal magnetic curve*]{}.
Recall that a vector field $V$ on $M$ is [*Killing*]{} if and only if it satisfies the Killing equation: $$\label{Kill_eq:dm}
g(\nabla_YV,Z)+g(\nabla_ZV,Y)=0$$ for all vector fields $Y, Z$ on $M$, where $\nabla$ is the Levi-Civita connection on $M$.
A typical example of uniform magnetic fields is obtained by multiplying the volume form on a Riemannian surface by a scalar $s$ (usually called the [*strength*]{}). When the surface is of constant Gaussian curvature $K$, trajectories of such magnetic fields are well known. More precisely, on the sphere ${\mathbb{S}}^2(K)$, $K>0$, trajectories are small (Euclidean) circles of radius $(s^2+K)^{-1/2}$; on the Euclidean plane they are circles and the period of motion equals $\frac{2\pi}{s}$; while, on a hyperbolic plane ${\mathbb{H}}^2(-K)$, $K>0$, trajectories can be either closed curves (when $|s|>\sqrt{K}$) or open curves. Moreover, when $|s|=\sqrt{K}$, normal trajectories are horocycles (see e.g. ).
This problem has also been extended to other ambient spaces. For example, if the ambient space is a complex space form, Kähler magnetic fields are studied (see [@Ada95]); in particular, explicit trajectories for Kähler magnetic fields were found in the complex projective space ${\mathbb{CP}}^n$ [@Ada94]. Kähler magnetic fields appear in theoretical and mathematical physics, varying from quantum field theory and string theory to general relativity.
If the ambient is a contact manifold, the fundamental 2-form defines the so-called [*contact magnetic field*]{}. Interesting results are obtained when the manifold is Sasakian, namely the angle between the velocity of a normal magnetic curve and the Reeb vector field is constant (see [@Cabrerizo1]). Moreover, explicit description for normal flowlines of the contact magnetic field on a 3-dimensional Sasakian manifold is known [@Cabrerizo1].
In the case of a 3-dimensional Riemannian manifold $(M,g)$, 2-forms and vector fields may be identified via the Hodge star operator $\star$ and the volume form $dv_g$ of the manifold. Thus, magnetic fields correspond to divergence-free vector fields (see e.g. [@Cabrerizo]). In particular, Killing vector fields define an important class of magnetic fields, called [*Killing magnetic fields*]{}. It is known that geodesics can be defined as extremal curves for the action energy functional. A variational approach to describe Killing magnetic flows in spaces of constant curvature is given in [@BarrosRomero].
Note that one can define on $M$ the [*cross product*]{} of two vector fields $X,Y\in\chi(M)$ as follows $$g(X\times Y,Z)=dv_g(X,Y,Z),\quad \forall Z\in\chi(M).$$ If $V$ is a Killing vector field on $M$, let $F_V=\iota_Vdv_g$ be the corresponding Killing magnetic field. By $\iota$ we denote the interior product. Then, the Lorentz force of $F_V$ is (see [@Cabrerizo]) $$\Phi(X)=V\times X.$$ Consequently, the Lorentz force equation (\[Lorentzeq\]) can be written as $$\nabla_{\gamma^\prime}\gamma^\prime=V\times \gamma^\prime.$$
In what follows we consider the 3-dimensional Euclidean space ${\mathbb{E}}^3$, endowed with the usual scalar product $\langle~,~\rangle$.
The fundamental solutions of the Killing equation (\[Kill\_eq:dm\]) are $\{\partial_x, \partial_y,
\partial_z, -y \partial_x + x \partial_y, -z \partial_y + y \partial_z, z \partial_x - x \partial_z\}$ and they give a basis of Killing vector fields on ${\mathbb{E}}^3$. Here $x, y, z$ denote the global coordinates on ${\mathbb{E}}^3$ and ${\mathbb{R}}^3 = {\rm span}\{\partial_x, \partial_y,
\partial_z\}$ is regarded as a vector space.
The easiest example is to consider the Killing vector field $\xi_0=\partial_z$. (Similar discussions can be made for $\partial_x$ and $\partial_y$, respectively.) Its trajectories are helices with axis $\partial_z$, namely $t\mapsto(x_0+a\cos t,y_0+a \sin t, z_0+bt)$, where $(x_0,y_0,z_0)\in{\mathbb{R}}^3$ and $a,b\in{\mathbb{R}}$. An interesting fact is that Lancret curves (i.e. general helices) in ${\mathbb{E}}^3$ are characterized by the following property (in our framework): they are magnetic trajectories associated with magnetic fields parallel to their axis. A similar result relating Killing magnetic fields and Lancret curves is provided on the 3-sphere (see e.g. [@BarrosRomero]). Theorems of Lancret for general helices in 3-dimensional real space forms are presented in [@Bar97].
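This can be checked directly; the following sketch (with arbitrary constants $a$, $b$ and initial point) verifies that the helix satisfies the Lorentz equation $\gamma''=V\times\gamma'$ for $V=\partial_z$:

```python
import numpy as np

# Verify that gamma(t) = (x0 + a cos t, y0 + a sin t, z0 + b t) solves
# gamma'' = V x gamma' with V = d/dz = (0, 0, 1). Constants are arbitrary.
a, b = 1.3, 0.7
V = np.array([0.0, 0.0, 1.0])

for ti in np.linspace(0.0, 10.0, 7):
    gp = np.array([-a * np.sin(ti), a * np.cos(ti), b])      # gamma'(t)
    gpp = np.array([-a * np.cos(ti), -a * np.sin(ti), 0.0])  # gamma''(t)
    assert np.allclose(gpp, np.cross(V, gp))
print("the helix satisfies gamma'' = V x gamma'")
```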
In this paper we consider the following magnetic field $F_V=-(xdx+ydy)\wedge dz$ in ${\mathbb{E}}^3$, determined by the Killing vector field $V=-y\partial_x + x\partial_y$. The other two rotational vector fields $-z\partial_y+y\partial_z$ and $z\partial_x-x\partial_z$ give rise to analogue classifications for corresponding magnetic trajectories. The aim of this note is to find all magnetic curves corresponding to $F_V$. The main result we obtain is the following:
[**Theorem.**]{} [*The magnetic trajectories of the Killing magnetic field $F_V$ are: [(a)]{} planar curves situated in a vertical strip; [(b)]{} circular helices and [(c)]{} curves parametrized by* ]{} $$x(t)=\rho(t)\cos\phi(t),\ y(t)=\rho(t)\sin\phi(t),\ z(t)=-\frac12\int\limits^t\rho^2(\zeta)d\zeta$$ [*where $\rho$ and $\phi$ satisfy*]{} $$\left(\frac{d\rho^2}{dt}\right)^2+P\big(\rho^2(t)\big)=0,\quad \rho^2(t)\phi'(t)={\rm constant}$$ [*and $P$ is a polynomial of degree $3$.*]{}
We are able to obtain explicit solutions in case (c) and we represent some examples by using numerical approximations for some integrals.
Recall, for later use, some basic facts on [*normal elliptic integral of the first kind*]{} (see for example [@BF71]): $$\int\limits_0^y\frac{dt}{\sqrt{(1-t^2)(1-k^2t^2)}}=\int\limits_0^\varphi\frac{d\vartheta}{\sqrt{1-k^2\sin^2\vartheta}}
=u=\sn^{-1}(y,k)=F(\varphi,k),$$ where $y=\sin\varphi$ and $\varphi={\rm am\ } u$. The angle $\varphi$ is called the [*Jacobi amplitude*]{} and the function $\sn$ is known as the [*Jacobi elliptic sine*]{}. The number $k$ is called the [*modulus*]{}, and in applications to engineering and physics it belongs to $(0,1)$.
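These identities are straightforward to verify numerically; the sketch below uses SciPy, which parametrizes the elliptic integrals and functions by the parameter $m=k^2$ rather than by the modulus $k$:

```python
import numpy as np
from scipy.special import ellipkinc, ellipj

k = 0.6
m = k**2                       # SciPy uses the parameter m = k^2
for phi in np.linspace(0.05, 1.4, 10):
    u = ellipkinc(phi, m)      # u = F(phi, k), incomplete elliptic integral
    sn, cn, dn, am = ellipj(u, m)
    assert abs(sn - np.sin(phi)) < 1e-10   # sn(F(phi,k), k) = sin(phi)
    assert abs(am - phi) < 1e-10           # am(u) = phi, the Jacobi amplitude
print("sn and am identities verified")
```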
Rotational magnetic trajectories in ${\mathbb{E}}^3$
====================================================
Let us consider the Killing vector field $V=-y \partial_x + x\partial_y$ on ${\mathbb{E}}^3\setminus Oz$, which defines the magnetic field $F_V=-(xdx+ydy)\wedge dz$. The Lorentz force $\Phi_V$ acts on the vector space ${\mathbb{R}}^3$ as follows: $$\Phi_V\partial_x=-x\partial_z,\ \Phi_V\partial_y=-y\partial_z,\
\Phi_V\partial_z=x\partial_x+y\partial_y.$$ For the Euclidean space ${\mathbb{E}}^3$ the Lorentz force equation becomes $$\label{magn}
\gamma''=V\times \gamma^\prime$$ where the curve $\gamma:I=[0,l]\longrightarrow{\mathbb{E}}^3$, $\gamma(t)=(x(t),y(t),z(t))$ is parametrized by arc length, namely $$\label{arc}
x'(t)^2+y'(t)^2+z'(t)^2=1, \quad \forall t\in I$$ and at the moment $t=0$ it passes through the point $(x_0,y_0,z_0)$, with the velocity $(u_0,v_0,w_0)$, such that $$u_0^2+v_0^2+w_0^2=1.$$
[**Proof of the Theorem.**]{} Our aim is to determine the magnetic curves of $F_V$. The equation (\[magn\]) yields the following ordinary differential equations system $$\label{syst}
\begin{cases}
x''=xz'\\
y''=yz'\\
z''=-(xx'+yy').
\end{cases}$$ In order to solve it, note that from the first two equations we get a prime integral $$\label{xy'}
x'y-y'x=u_0y_0-x_0v_0$$ while from the third equation we obtain $$\label{zp}
z'=-\frac{1}{2}(x^2+y^2)+\frac{1}{2}(x_0^2+y_0^2)+w_0.$$ Notice that $z'$ cannot vanish identically (on a subinterval of $I$). Indeed, if $z'=0$ then $x'=u_0$, $y'=v_0$ and $z=z_0$ with $u_0^2+v_0^2=1$. Hence, $x(t)=x_0+u_0t$, $y(t)=y_0+v_0t$ and combining with (\[zp\]) we get a contradiction. It follows that one cannot have horizontal magnetic curves corresponding to $V$.
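The two prime integrals, together with the conservation of speed, can be verified by integrating the system numerically (a sketch with arbitrary unit-speed initial data):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, w):
    x, y, z, xp, yp, zp = w
    # system (syst): x'' = x z',  y'' = y z',  z'' = -(x x' + y y')
    return [xp, yp, zp, x * zp, y * zp, -(x * xp + y * yp)]

x0, y0, z0 = 1.0, 0.5, 0.0
u0, v0 = 0.6, 0.2
w0 = np.sqrt(1.0 - u0**2 - v0**2)      # unit speed at t = 0
sol = solve_ivp(rhs, (0.0, 5.0), [x0, y0, z0, u0, v0, w0],
                rtol=1e-10, atol=1e-12)

x, y, z, xp, yp, zp = sol.y
I1 = xp * y - yp * x                    # should stay equal to u0*y0 - x0*v0
I2 = zp + 0.5 * (x**2 + y**2)           # should stay equal to w0 + (x0^2+y0^2)/2
speed = xp**2 + yp**2 + zp**2           # arc length parametrization is preserved
assert np.allclose(I1, u0 * y0 - x0 * v0, atol=1e-6)
assert np.allclose(I2, w0 + 0.5 * (x0**2 + y0**2), atol=1e-6)
assert np.allclose(speed, 1.0, atol=1e-6)
print("first integrals and unit speed conserved")
```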
In the sequel it is more convenient to consider cylindrical coordinates $\{\rho,\phi,z\}$ on ${\mathbb{E}}^3\setminus Oz$. Thus, for our curve we have $$\begin{cases}
x=\rho(t)\cos\phi(t)\\
y=\rho(t)\sin\phi(t)\\
z=z(t)
\end{cases}$$ where $\rho^2(t)=x^2(t)+y^2(t)$, $\rho(t)>0$.
**Case I.** First we study the general case, when $z'$ is not constant (equivalently $\rho$ is not constant). The relations (\[xy'\]) and (\[zp\]) lead to $$\label{rho}
\rho^2(t)\phi'(t)=p_0$$ $$\label{zpt}
z'(t)=q_0-\frac{1}{2}\rho^2(t)$$ where we put $p_0=x_0v_0-u_0y_0$ and $q_0=\frac{1}{2}\left(x_0^2+y_0^2\right)+w_0$.
The arc length parametrization condition (\[arc\]), together with (\[zpt\]), becomes $$\label{rhoarc}
\rho'^2(t)+\rho^2(t)\phi'^2(t)+q_0^2-q_0\rho^2(t)+\frac{1}{4}\rho^4(t)=1.$$ Multiplying by $4 \rho^2(t)$, using (\[rho\]) and denoting $\rho^2(t)$ by $f(t)>0$, for all $t\in I$, one gets $$\label{F}
f'^2+f^3-4q_0f^2+4(q_0^2-1)f+4p_0^2=0.$$
We start to study the above differential equation for some particular values of the constants $p_0$ and $q_0$.
If $p_0=0$, i.e. $x_0v_0=y_0u_0$, it follows that the angle $\phi$ is constant, $\phi=\phi_0$, so the magnetic trajectory is a planar curve, with $$x(t)=\rho(t)\cos\phi_0,\ y(t)=\rho(t)\sin\phi_0.$$ More precisely, the curve lies in the plane $(\sin\phi_0)x-(\cos\phi_0)y=0$. The initial conditions, expressed in cylindrical coordinates, may be written as $$x_0=\rho_0\cos\phi_0,\ y_0=\rho_0\sin\phi_0$$ and the condition $x_0v_0=y_0u_0$ becomes $u_0=\zeta_0\cos\phi_0$, $v_0=\zeta_0\sin\phi_0$, for a certain $\zeta_0\in{\mathbb{R}}$. It follows that $\zeta_0^2+\big(q_0-\frac12~\rho_0^2\big)^2=1$, and hence $$-1+\frac12~\rho_0^2\leq q_0\leq 1+\frac12~\rho_0^2.$$ Since $\rho_0>0$ it follows that $q_0>-1$.
Let us solve the equation (\[F\]), for three particular situations arising from the initial conditions:
1. If $p_0=0$ and $q_0=0$, then the equation (\[F\]) takes the form $$f'^2(t)+f(t)\big(f(t)-2\big)\big(f(t)+2\big)=0$$ and it has a solution if and only if $f(t)\leq 2$, i.e. $\rho(t)\in (0,\sqrt{2}]$, so the magnetic curve $\gamma$ lies inside a cylinder. In fact, being a planar curve, $\gamma$ stays in a vertical strip centered on the $z$-axis, of width $2\sqrt{2}$.
We have $f'(t)=\pm\sqrt{f(t)\big(4-f(t)^2\big)}$ and we consider only the [*plus*]{} sign (the other situation may be treated in a similar way). Supposing $\rho_0\neq \sqrt{2}$, we have that $f$ and the integral ${\mathcal{I}}(f)=\displaystyle\int^f_{\rho_0^2} \frac{d\zeta}{\sqrt{\zeta (4-\zeta^2)}}$ are strictly increasing functions. Thus, the equation ${\mathcal{I}}(f)=t$ has a unique solution in the interval $(\rho_0^2,2)$, namely $f={\mathcal{J}}(t)$, where ${\mathcal{J}}$ is the inverse function of ${\mathcal{I}}$. Consequently, $\rho(t)=\sqrt{{\mathcal{J}}(t)}$. In fact ${\mathcal{J}}$ may be expressed in terms of elliptic functions. More precisely, $${\mathcal{J}}(t)=\frac{2~\sn^2(t+t_0,\frac1{\sqrt{2}})}{2-\sn^2(t+t_0,\frac1{\sqrt{2}})}$$ where $t_0$ is determined by $\sn(t_0,\frac1{\sqrt{2}})=\frac{\sqrt{2}\rho_0}{\sqrt{2+\rho_0^2}}$.
Summarizing, the magnetic curve is given by $$x(t)=\sqrt{{\mathcal{J}}(t)}\cos\phi_0,\
y(t)=\sqrt{{\mathcal{J}}(t)}\sin\phi_0,\
z(t)=-\frac{1}{2}\int_0^t {\mathcal{J}}(\zeta)d\zeta.$$
In order to draw a picture of our curve, one can use Matlab to compute the parametrization. The idea is to calculate the integrals numerically, as Riemann sums. See the Appendix. \[Ii\]
2. If $p_0=0,\ q_0=1$, then the equation (\[F\]) becomes $f'^2(t)+f^2(t)\big(f(t)-4\big)=0,$ from which we have that $f(t)\leq 4,$ equivalently $\rho(t)\leq 2$, so the magnetic curve $\gamma$ stays inside a cylinder of radius 2. In fact, being planar, the curve lies in a vertical strip centered on the $z$-axis. The equation can be written in the form $$\frac{df}{f\sqrt{4-f}}=\pm~dt.$$ Taking the [*plus*]{} sign, one gets the solution $$f(t)=\frac{4}{\cosh^2(t-t_0)},\quad t\in (0,t_0)$$ where $t_0=-\frac{1}{2}\ln\frac{2-\sqrt{4-\rho_0^2}}{2+\sqrt{4-\rho_0^2}}$. Hence $$\rho(t)=\frac{2}{\cosh(t-t_0)}$$ and the magnetic curve is parametrized by $$x(t)=\frac{2\cos\phi_0}{\cosh(t-t_0)},\
y(t)=\frac{2\sin\phi_0}{\cosh(t-t_0)},\
z(t)=z_0+t-2 \left(\tanh(t-t_0)+\tanh t_0\right).$$
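Both the expression for $t_0$ and the ODE can be checked numerically; a small Python sketch (the value $\rho_0=1$ is an arbitrary sample in $(0,2)$):

```python
import numpy as np

rho0 = 1.0                       # sample initial radius in (0, 2)
a = np.sqrt(4.0 - rho0**2)
t0 = -0.5 * np.log((2.0 - a) / (2.0 + a))

f = lambda t: 4.0 / np.cosh(t - t0)**2   # f = rho^2

# initial condition: rho(0) = sqrt(f(0)) = rho0
assert abs(np.sqrt(f(0.0)) - rho0) < 1e-12

# ODE check: f'^2 + f^2 (f - 4) = 0, via central differences on (0, t0)
h = 1e-6
for t in (0.2, 0.7, 1.2):
    fp = (f(t + h) - f(t - h)) / (2.0 * h)
    assert abs(fp**2 + f(t)**2 * (f(t) - 4.0)) < 1e-6
```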
We draw a picture of this (planar) curve.
\[fig:sol\_03\]
{width="75mm"}
Let us finalize the examination of the equation for $p_0=0$. The polynomial $P(f)=f\Big(f^2-4q_0f+4(q_0^2-1)\Big)$ has three real roots, namely $f_1=2(q_0-1)$, $f_2=2(q_0+1)$ and $f_3=0$. If $f$ is a solution of (\[F\]), then $P(f)$ should be nonpositive. Recall that $q_0>-1$. We have
1. If $q_0\in(-1,1)$, then $f_1<0<f_2$. It follows that $\rho(t)\in(0,\sqrt{2(q_0+1)})$ and the discussion is similar to the case $q_0=0$. More precisely we have $$\rho(t)^2=\frac{2(1-q_0^2)~\sn^2\big(t+t_0,\sqrt{\frac{q_0+1}2}\big)}{2-(q_0+1)~\sn^2\big(t+t_0,\sqrt{\frac{q_0+1}2}\big)}$$ where $t_0$ is defined by $\sn\big(t_0,\sqrt{\frac{q_0+1}2}\big)=\sqrt{\frac2{q_0+1}}\frac{\rho_0}{\sqrt{\rho_0^2-2(q_0-1)}}$.
2. If $q_0>1$, then $0<f_1<f_2$. It follows that $f(t)\in(f_1,f_2)$. Thus, the curve $\gamma$ lies between two cylinders, since $\rho(t)\in\left(\sqrt{2(q_0-1)},\sqrt{2(q_0+1)}\right)$. As before, the curve is situated in a union of two vertical strips. Again, the discussion is similar to the case $q_0=0$. In terms of elliptic functions, we may write $$\rho(t)^2=\frac{q_0^2-1}{\frac{q_0+1}2-\sn^2\big(\sqrt{\frac{q_0+1}2} t+t_0,\sqrt{\frac2{q_0+1}}\big)}$$ where $t_0$ is defined by $\sn\big(t_0,\sqrt{\frac2{q_0+1}}\big)=\sqrt{\frac{q_0+1}2}\frac{\sqrt{\rho_0^2-2(q_0-1)}}{\rho_0}$.
In order to visualize an example, consider the following initial conditions: $x_0=2$, $y_0=0$, $z_0=0$ and $u_0=0$, $v_0=0$, $w_0=1$ (this yields $p_0=0$ and $q_0=3$).
We will use again Matlab to compute the integrals (numerically) and to draw the picture.
\[fig:exb\_pg5\]
{width="75mm"}
Return to (\[F\]) for $p_0\neq0$ and notice that the equation $$\label{eq14}
P(f)=f^3-4q_0f^2+4(q_0^2-1)f+4p_0^2=0$$ has the discriminant $$\Delta=-16\big[27p_0^4+8p_0^2q_0(q_0^2-9)-16(q_0^2-1)^2\big]$$ and the following situations appear:
- the polynomial $P$ has three distinct real roots iff $\Delta>0.$
- the polynomial $P$ has multiple roots iff $\Delta=0$.
- the polynomial $P$ has one real root and two complex conjugate roots iff $\Delta<0$.
A detailed analysis of the above situations leads us to conclude, after taking into account the classical Viète’s formulas, that the equation (\[F\]) has solutions if and only if $\Delta>0$.
Indeed, if $\Delta<0$, let $A\in{\mathbb{C}}\setminus{\mathbb{R}}$ and $\bar A$ be the complex solutions of (\[eq14\]), and $B$ its real solution. Then, the ODE can be rewritten as $$f'(t)^2+\left(f(t)^2-2~Re(A)~f(t)+|A|^2\right)\left(f(t)-B\right)=0,$$ where $Re(A)$ denotes the real part of the complex number $A$. From the third Viète’s formula we conclude that $B$ should be negative, and consequently, the previous equality cannot occur.
On the other hand, if $\Delta=0$, analyzing the coefficients we see that $P$ cannot have a triple root (since $16q_0^2\neq12(q_0^2-1)$). Hence, let $A\in{\mathbb{R}}$ be the double root, and let $B\in{\mathbb{R}}$ be the third one. With a similar argument as above, $B$ is negative and the ODE becomes $$f'(t)^2+\left(f(t)-A\right)^2\left(f(t)-B\right)=0.$$ Again, this equality cannot hold.
It follows that $\Delta$ should be (strictly) positive. Let $A$, $B$, $C\in {\mathbb{R}}$ be the three distinct solutions of (\[eq14\]). The third Viète’s formula yields $ABC=-4p_0^2<0$, and hence
- either $A,B,C$ are all negative,
- or two of them, $A$ and $B$, are positive and the third one, $C$, is negative.
In case a), the ODE (\[F\]), written as $f'(t)^2+(f(t)-A)(f(t)-B)(f(t)-C)=0$, has no solution. This happens if and only if $q_0<-1$ and $p_0\neq0$ (for the proof use the second and the third Viète’s formulas), together with $\Delta>0$ (for example, if $q_0=-3$ and $p_0=1$).
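The example quoted for case a) can be confirmed numerically (a Python sketch; `numpy.roots` is used here in place of an explicit cubic formula):

```python
import numpy as np

q0, p0 = -3.0, 1.0   # the example quoted above for case a)

# discriminant of f^3 - 4 q0 f^2 + 4 (q0^2 - 1) f + 4 p0^2
disc = -16.0 * (27.0*p0**4 + 8.0*p0**2*q0*(q0**2 - 9.0) - 16.0*(q0**2 - 1.0)**2)
assert disc > 0   # three distinct real roots

roots = np.roots([1.0, -4.0*q0, 4.0*(q0**2 - 1.0), 4.0*p0**2])
assert np.all(np.abs(roots.imag) < 1e-8)   # all roots real
assert np.all(roots.real < 0)              # all negative: no solution of (F)
```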
In case b), which occurs precisely when $\Delta>0$, $q_0>-1$ and $p_0\neq 0$, the equation has a solution in the interval defined by the positive solutions $A<B$ of (\[eq14\]). Since the function ${\mathcal{I}}(f)=\displaystyle\int_{A}^f\frac{d\zeta}{\sqrt{(\zeta-A)(B-\zeta)(\zeta-C)}}$ is strictly increasing, ${\mathcal{I}}(f)=t$ has a unique solution $f$, denoted by ${\mathcal{J}}(t)$. Thus we have $\rho=\sqrt{{\mathcal{J}}(t)}$, and $\phi(t)=\phi_0+p_0\displaystyle\int_0^t\frac{d\zeta}{{\mathcal{J}}(\zeta)}$. In this case, the magnetic curve $\gamma$ is given by $$x(t)=\sqrt{{\mathcal{J}}(t)}\cos\Big(\phi_0+p_0\int_0^t\frac{d\zeta}{{\mathcal{J}}(\zeta)}\Big),\
y(t)=\sqrt{{\mathcal{J}}(t)}\sin\Big(\phi_0+p_0\int_0^t\frac{d\zeta}{{\mathcal{J}}(\zeta)}\Big),$$ $$z(t)=z_0+q_0t-\frac{1}{2}\int_0^t{\mathcal{J}}(\zeta)d\zeta.$$ We may express $\rho$ in terms of elliptic functions, namely $$\rho(t)^2=\frac{Ak^2-C\sn^2(rt+t_0,\frac1k)}{k^2-\sn^2(rt+t_0,\frac1k)}$$ where $k^2=\frac{B-C}{B-A}$, $r=\frac{\sqrt{B-C}}2$, and $\sn(t_0,\frac1k)=k\sqrt{\frac{\rho_0^2-A}{\rho_0^2-C}}$.
In the Appendix we will draw a picture (using the same technique in Matlab as before) corresponding to the following data: $p_0=\frac{\sqrt{4\sqrt{6}-6}}2$, $q_0=\frac{3-\sqrt{6}} 2$, for which we have $A=1$, $B=2$ and $C=3-2\sqrt{6}$. (Note that $4p_0^2$ must equal $-ABC=4\sqrt{6}-6$, by the third Viète’s formula.) \[ex\_ABC\]
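The bookkeeping for this example can be checked with Viète’s formulas (a Python sketch; the roots are prescribed and $q_0$, $p_0^2$ are recovered from them):

```python
import numpy as np

# prescribe the three real roots of case b): A, B > 0 and C < 0
A, B, C = 1.0, 2.0, 3.0 - 2.0*np.sqrt(6.0)

# Viete's formulas for f^3 - 4 q0 f^2 + 4 (q0^2 - 1) f + 4 p0^2:
q0 = (A + B + C) / 4.0        # sum of roots = 4 q0
p0sq = -(A * B * C) / 4.0     # product of roots = -4 p0^2

# consistency: the pairwise sums must equal 4 (q0^2 - 1)
assert abs((A*B + A*C + B*C) - 4.0*(q0**2 - 1.0)) < 1e-12

# numpy recovers the prescribed roots from the resulting cubic
roots = np.roots([1.0, -4.0*q0, 4.0*(q0**2 - 1.0), 4.0*p0sq])
assert np.allclose(np.sort(roots.real), np.sort([A, B, C]))
```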
The situation ${\mathcal{I}}(f)=-t$ can be treated in a similar way.
Finally, notice that for $q_0=-1$ we get $\Delta=-16p_0^2(27p_0^2+64)$ and this case was discussed above.
**Case II.** Now, let us study the remaining case when $z'(t)=w_0\neq0$. We immediately have that $$z(t)=z_0+tw_0,$$ and from (\[zp\]) we obtain $$\label{xy}
x^2+y^2=x_0^2+y_0^2.$$ This means that the magnetic trajectory $\gamma$ lies on the circular cylinder of radius $\rho_0=\sqrt{x_0^2+y_0^2}$.
Two subcases must be discussed: $w_0<0$ and $w_0>0$.
II.1
: In the case when $w_0<0$ the magnetic curve is given by $$\label{1}
\begin{cases}
x(t)=x_0\cos(\sqrt{-w_0}~t)+\frac{u_0}{\sqrt{-w_0}}\sin(\sqrt{-w_0}~t)\\[2mm]
y(t)=y_0\cos(\sqrt{-w_0}~t)+\frac{v_0}{\sqrt{-w_0}}\sin(\sqrt{-w_0}~t)\\[2mm]
z(t)=z_0+tw_0.
\end{cases}$$ This curve is a helix around the above cylinder.
At this point, we have to find the initial conditions leading to this situation. A direct computation shows that the following relations must hold: $$w_0=-\frac2{\rho_0^2+\sqrt{\rho_0^4+4}}\ ,\ u_0=\varepsilon\rho_0\sqrt{-w_0}\sin\phi_0,\
v_0=-\varepsilon\rho_0\sqrt{-w_0}\cos\phi_0$$ where $\rho_0$ and $\phi_0$ have the usual meaning and $\varepsilon=\pm1$.
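That these initial conditions indeed keep the trajectory on the cylinder (\[xy\]) of radius $\rho_0$ can be checked directly (a Python sketch; $\rho_0$, $\phi_0$ are arbitrary sample values and $\varepsilon=1$):

```python
import numpy as np

# sample values; rho0 and phi0 arbitrary, eps = +1
rho0, phi0, eps = 1.3, 0.4, 1.0
w0 = -2.0 / (rho0**2 + np.sqrt(rho0**4 + 4.0))
om = np.sqrt(-w0)

x0, y0 = rho0*np.cos(phi0), rho0*np.sin(phi0)
u0 = eps * rho0 * om * np.sin(phi0)
v0 = -eps * rho0 * om * np.cos(phi0)

x = lambda t: x0*np.cos(om*t) + (u0/om)*np.sin(om*t)
y = lambda t: y0*np.cos(om*t) + (v0/om)*np.sin(om*t)

# the helix stays on the cylinder x^2 + y^2 = rho0^2
for t in np.linspace(0.0, 10.0, 7):
    assert abs(x(t)**2 + y(t)**2 - rho0**2) < 1e-9
```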
II.2
: If $w_0>0$, the ODE system has the following solution $$\begin{cases}
x(t)=x_0\cosh(\sqrt{w_0}~t)+\frac{u_0}{\sqrt{w_0}}\sinh(\sqrt{w_0}~t)\\
y(t)=y_0\cosh(\sqrt{w_0}~t)+\frac{v_0}{\sqrt{w_0}}\sinh(\sqrt{w_0}~t)\\
z(t)=z_0+tw_0
\end{cases}$$ but in this case the condition (\[xy\]) is satisfied if and only if $x_0=y_0=0$ and $u_0=v_0=0$. This situation cannot occur.
Review of the classical magnetic field on ${\mathbb{E}}^3$
==========================================================
As we have already said in the Introduction, the best known example of a magnetic field in the Euclidean space ${\mathbb{E}}^3$ is furnished by the 2-form $F_0=dx\wedge dy$, corresponding to the Killing vector field $\xi_0=\frac\partial{\partial z}$.
In this section we consider the Killing magnetic field $F_\xi=s~F_0=s~dx\wedge dy$, determined by the Killing vector field $\xi=s~\xi_0=s~\partial_z$ on ${\mathbb{E}}^3$, where $s\neq0$ is an arbitrary constant. We briefly describe its magnetic curves.
The action of the Lorentz force $\Phi_\xi$ on the vector space ${\mathbb{R}}^3$ is given by: $$\Phi_\xi\partial_x=s\partial_y,\ \Phi_\xi\partial_y=-s\partial_x,\
\Phi_\xi\partial_z=0.$$
Solving the Lorentz force equation $\gamma_s''=\Phi_\xi(\gamma_s')$, we obtain the family of magnetic curves $\gamma_s(t)=(x(t),y(t),z(t))$, parametrized by $$\begin{cases}
x(t)=\frac{u_0}{s}\sin(st)+\frac{v_0}{s}\cos(st)+x_0-\frac{v_0}{s}\\
y(t)=-\frac{u_0}{s}\cos(st)+\frac{v_0}{s}\sin(st)+y_0+\frac{u_0}{s}\\
z(t)=w_0t+z_0.
\end{cases}$$
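Componentwise, the Lorentz force equation reads $x''=-sy'$, $y''=sx'$, $z''=0$, and the parametrization above can be verified numerically (a Python sketch with arbitrary sample data):

```python
import numpy as np

# arbitrary sample data: s is the strength, the rest are initial conditions
s, x0, y0, z0, u0, v0, w0 = 1.5, 0.3, -0.2, 0.1, 0.8, 0.4, 0.6

def gamma(t):
    x = (u0/s)*np.sin(s*t) + (v0/s)*np.cos(s*t) + x0 - v0/s
    y = -(u0/s)*np.cos(s*t) + (v0/s)*np.sin(s*t) + y0 + u0/s
    z = w0*t + z0
    return np.array([x, y, z])

# Lorentz equation gamma'' = Phi_xi(gamma'): x'' = -s y', y'' = s x', z'' = 0
h = 1e-5
for t in (0.0, 0.9, 2.4):
    g1 = (gamma(t + h) - gamma(t - h)) / (2.0*h)              # velocity
    g2 = (gamma(t + h) - 2.0*gamma(t) + gamma(t - h)) / h**2  # acceleration
    assert np.allclose(g2, [-s*g1[1], s*g1[0], 0.0], atol=1e-4)
```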
Write the first Frenet equation $$\gamma''=\kappa N$$ where $\kappa$ is the curvature and $N$ is the unit normal of the curve. Using the parametrization above we obtain that the square of the curvature is $$\kappa^2=s^2(1-w_0^2).$$ Moreover, classical computations give the torsion $\tau=sw_0.$
Notice that even though both the curvature $\kappa$ and the torsion $\tau$ depend on the strength $s$, the ratio $\tau\over\kappa$ does not. We conclude with some comments:
1. If $w_0=0$ the curvature is $\kappa=s$ and the torsion is $\tau=0,$ so the magnetic line is a (planar) circle.
2. If $w_0=\pm 1,$ then $\kappa=0,\ \tau=\pm s$, so the magnetic curves are vertical lines.
3. In other cases the magnetic curves are circular helices.
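The formulas $\kappa^2=s^2(1-w_0^2)$ and $\tau=sw_0$ can be recovered from the classical expressions $\kappa=\frac{|\gamma'\times\gamma''|}{|\gamma'|^3}$ and $\tau=\frac{\langle\gamma'\times\gamma'',\gamma'''\rangle}{|\gamma'\times\gamma''|^2}$; a Python sketch (assuming arclength parametrization, $u_0^2+v_0^2+w_0^2=1$, and sample values for $s$, $w_0$):

```python
import numpy as np

s, t = 1.5, 0.7
u0, v0, w0 = 0.8, 0.0, 0.6    # unit speed: u0^2 + v0^2 + w0^2 = 1

# derivatives of the magnetic curve; gamma''' follows by differentiating
# the Lorentz equation x'' = -s y', y'' = s x', z'' = 0
d1 = np.array([u0*np.cos(s*t) - v0*np.sin(s*t),
               u0*np.sin(s*t) + v0*np.cos(s*t), w0])
d2 = np.array([-s*d1[1], s*d1[0], 0.0])
d3 = np.array([-s*d2[1], s*d2[0], 0.0])

cross = np.cross(d1, d2)
kappa = np.linalg.norm(cross) / np.linalg.norm(d1)**3
tau = np.dot(cross, d3) / np.dot(cross, cross)

assert abs(kappa**2 - s**2 * (1.0 - w0**2)) < 1e-10
assert abs(tau - s*w0) < 1e-10
```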
Appendix
========
In this section we present a Matlab program in order to compute, by numerical approximation of the involved integrals, the parametrization of the magnetic curve obtained in Case I (i). Since the curve is planar we consider $\phi_0=0$.
clear all
%%% Compute the integral I(f) as a Riemann sum
rho0=1.41;               % initial radius rho_0 (just below sqrt(2))
f_max=2;                 % upper bound for f = rho^2
N=1000;                  % number of sampled values of f
L=(f_max-rho0^2)/N;
for K=1:N+1
a=0.001;                 % lower limit, kept away from the singularity at zero
b=rho0^2+(K-1)*L;
n=1000;                  % subintervals of each Riemann sum
h=(b-a)/n;
k=0:n-1;
x=a+k*h;
f=1./sqrt(x.*(4-x.^2)); % integrand of I
I(K)=h*sum(f);          % left Riemann sum approximating I(b)
J(K)=b;
end
%% \phi_0=0
xx=sqrt(J);
%yy=0*J;
zz(1)=0;
for K=1:N
% accumulate z(t) = -(1/2) int J dzeta with time step dt = I(K+1)-I(K)
zz(K+1)=zz(K)-0.5*(I(K+1)-I(K))*J(K);
end
%% the curve is planar
plot(xx,zz,'g-')
text(0.25,-0.75,'\rho_0=1.41','Color','g')
hold on
Representation of the magnetic curves depending on the initial position:
\[fig:Ii\]
{width="85mm"}
Using the previous Matlab program adapted to the example furnished above, and for the initial data $\phi_0=0$ and $z_0=0$, we can represent the corresponding magnetic curves:
\[fig:Ii\]
{width="85mm"}
[**Acknowledgements.**]{} The first author is a postdoctoral researcher in the framework of the program POSDRU 89/1.5/S/49944, ’Al. I. Cuza’ University of Iaşi, Romania. The second author is supported by a Fulbright Grant no. 498 at the Michigan State University, USA.
[00]{}
Adachi, T., [*Kähler Magnetic Field on a Complex Projective Space*]{}, Proc. Japan Acad. [**70**]{} Ser. A (1994), 12–13.
Adachi, T., [*Kähler Magnetic Flow for a Manifold of Constant Holomorphic Sectional Curvature*]{}, Tokyo J. Math. [**18**]{} (1995) 2, 473–483.
Barros, M., [*General Helices and a Theorem of Lancret*]{}, Proc. AMS [**125**]{} (1997) 5, 1503–1509.
Barros, M., Cabrerizo, J. L., Fernández, M., and Romero, A., [*Magnetic vortex filament flows*]{}, J. Math. Phys. [**48**]{} (2007) 8, 082904:1–27.
Barros, M. and Ferrández, A., [*A conformal variational approach for helices in nature*]{}, J. Math. Phys. [**50**]{} (2009) 10, 103529:1–20.
Barros, M., Romero, A., Cabrerizo, J. L., and Fernández, M., [*The Gauss-Landau-Hall problem on Riemannian surfaces*]{}, J. Math. Phys. [**46**]{} (2005), 112905:1–15.
Barros, M. and Romero, A., [*Magnetic vortices*]{}, EPL [**77**]{} (2007), 34002:1–5.
Byrd, P. F., Friedman, M. D., [*Handbook of Elliptic Integrals for Engineers and Scientists*]{}, 2nd Edition revised, Springer, 1971.
Cabrerizo, J. L., Fernández, M., and Gómez, J. S., [*On the existence of almost contact structure and the contact magnetic field*]{}, Acta Math. Hungar., [**125**]{} (2009)(1–2), 191–199.
Cabrerizo, J. L., Fernández, M., and Gómez, J. S., [*The contact magnetic flow in $3D$ Sasakian manifolds*]{}, J. Phys. A: Math. Theor., [**42**]{} (2009), 19, 195201:1–10.
Comtet, A., [*On the Landau levels on the hyperbolic plane*]{}, Ann. of Phys. [**173**]{} (1987), 185–209.
Sunada, T., [*Magnetic flows on a Riemann surface*]{}, Proceedings of KAIST Mathematics Workshop, 1993, 93–108.
---
abstract: |
The length distribution of proteins measured in amino acids follows the CoHSI (Conservation of Hartley-Shannon Information) probability distribution. In previous papers we have verified various predictions of this using the Uniprot database but here we explore a novel predicted relationship between the longest proteins and evolutionary time. We demonstrate from both theory and experiment that the longest protein and the total number of proteins are intimately related by Information Theory and we give a simple formula for this. We stress that no evolutionary explanation is necessary; it is an intrinsic property of a CoHSI system. While the CoHSI distribution favors the appearance of proteins with fewer than 750 amino acids (characteristic of most functional proteins or their constituent domains) its intrinsic asymptotic power-law also favors the appearance of unusually long proteins; we predict that there are as yet undiscovered proteins longer than 45,000 amino acids. In so doing, we draw an analogy between the process of protein folding driven by favorable pathways (or funnels) through the energy landscape of protein conformations, and the preferential information pathways through which CoHSI exerts its constraints in discrete systems.
Finally, we show that CoHSI predicts the recent appearance in evolutionary time of the longest proteins, specifically in eukaryotes because of their richer unique alphabet of amino acids, and by merging with independent phylogenetic data, we confirm a predicted consistent relationship between the longest proteins and documented and potential undocumented mass extinctions.
author:
- 'Les Hatton[^1], Gregory Warr[^2]'
bibliography:
- 'bibliography.bib'
title: 'CoHSI III: Long proteins and implications for protein evolution'
---
Statement of computational reproducibility
==========================================
There is a growing awareness of the problem of computational irreproducibility in the software-consuming sciences, and so as with our previous papers in this area [@HatTSE14; @HattonWarr2015; @HattonWarr2017], this paper is accompanied by a complete computational reproducibility suite including all software source code, data references and the various glue scripts necessary to reproduce each figure, table and statistical analysis and then regress local results against a gold standard embedded within the suite to help build confidence in the theory and results we are reporting. This follows the methods broadly described by [@Ince2012] and exemplified in a tutorial and case study [@HattonWarr2016]. These reproducibility suites are currently available at http://leshatton.org/ until a suitable public archive appears, to which they can be transferred.
Introduction
============
The diversity of life that has evolved on earth could not have done so without the successive emergence of novel proteins to carry out newly essential functions. It is axiomatic (albeit requiring some qualification) that the sequence of a protein determines its structure and thus its function ([@Anfinsen1973; @Dill1042]). The function and evolution of proteins pose, on the surface, two major questions associated with the astronomical size of theoretical protein sequence space and, even for a given protein of average length, the vast number of possible conformational states it could explore before finding the “correct” (i.e. native) functional conformation ([@Levinthal1969; @Dryden2008]). In recent decades these questions have been simplified by considerations that minimize e.g. the effective size of the amino acid alphabet and that have engaged polymer theory to consider protein folding in terms of energy landscapes ([@Ben-Naim2012; @Dryden2008; @GhoshDill2009]). However, the constraints imposed by information theory (through the Conservation of Hartley-Shannon Information or CoHSI) on protein sequence space, and their implications for protein evolution and function, have hitherto been unexplored.
We know a great deal about how variation arises in protein-encoding genes, how processes at the cellular and population level lead to the spread and fixation of novel genes in populations, and how speciation occurs. It is generally accepted that we have a good (if continually developing) understanding of the biological processes that underpin evolution. Thus, we are under no illusions that the idea we are introducing here, that the emergence of novelty in protein sequences is constrained by a conservation principle (CoHSI) arising from information theory, will be controversial.
Information can be defined in different (but valid) ways, and information theory has been applied to gain insight into many aspects of biology (e.g. [@Frank2009; @Vinga2014]) including evolutionary biology (e.g. [@Adami2012]). Often the starting point for these studies is to ask how the information present in a system can be used to infer functional or evolutionary relationships, e.g. quoting from [@Adami2012] “Information stored in a biological organism’s genome is used to generate the organism and to maintain and control it.” Our approach to the implications of information theory for proteins is quite (indeed totally) different from this; it arises from an interest in the fundamental properties of discrete systems *as a whole,* i.e. systems that consist of components each of which is made from smaller indivisible pieces (called tokens.)
Discrete systems are ubiquitous; they include e.g. the elements, software, texts, musical compositions, DNA and RNA and proteins. Our interest was therefore not to understand the role of information contained or indeed “used” in any particular discrete system, but rather to understand how fundamental information theory might constrain the properties of *all* discrete systems. Thus, to be applicable to proteins as well as software, texts and musical compositions etc. the theory has to be token agnostic, i.e. eschewing any meaning or function associated with the tokens; we therefore used information in the sense of Hartley-Shannon Information, i.e. a change of sign without reference to any meaning. Indeed Hartley specifically cautioned against associating *any meaning* to the tokens [@Hartley1928]. We emphasize that in both Hartley-Shannon Information and the Statistical Mechanical framework in which it is embedded in our theory that the *type* of token has no relevance. This is quite counter-intuitive when we are used to mechanisms e.g. looking at proteins from the perspective of protein chemistry and biochemical function, but CoHSI has a completely different perspective, considering any discrete system as an ergodic ensemble of components unrestrained with respect to space or time and characterised only by equi-probable microstates.
Embedding Hartley-Shannon information [@Hartley1928; @Shannon1948] in a Statistical Mechanical framework [@HattonWarr2017; @HattonWarr2018a] showed that the length distribution of components in a discrete system is overwhelmingly likely to obey a differential equation (the CoHSI equation) which implicitly defines the probability distribution function. Solving this equation [@HattonWarr2018a] gave a canonical distribution of component sizes that predicted the length distribution of components in any discrete system of sufficient size and complexity. When this prediction was examined experimentally, the length distribution of component sizes in such disparate systems as the total collection of known proteins, texts and the functions in computer programs (written in any programming language) were shown to conform accurately to the CoHSI distribution [@HattonWarr2018a].
Since proteins are a classical discrete system, and intimately involved with the evolution of life, it is worthwhile to ask here what constraints might be imposed by CoHSI on the evolution of protein novelty. We have already shown that the canonical CoHSI distribution is observable in proteins, as predicted, at all scales [@HattonWarr2017]. As such, it constrains the distribution of protein lengths at taxonomic levels ranging from individual species through the domains of life and to the entirety of protein sequences documented in the databases. We have also shown that the average length of proteins is highly constrained, as predicted by CoHSI [@HattonWarr2018b]. Here we address a no less interesting property of proteins: an important prediction of the length distribution constrained by the CoHSI equation is that very long components are **inevitable**. Thus we can ask the question, is the longest protein in an aggregation *directly predicted* by the total number of proteins in the aggregation? If it is, what impact have the constraints imposed by CoHSI had on the emergence of novel protein sequences as life has evolved on earth?
The CoHSI distribution and taxonomic level: scale independence
===============================================================
From [@HatTSE14; @HattonWarr2015; @HattonWarr2017] the fundamental CoHSI equation (\[eq:minifst\]) was derived; its solution gives the canonical distribution of component lengths shown in Fig \[fig:mdata\].
$$\log t_{i} = -\alpha -\beta ( \frac{d}{dt_{i}} \log N(t_{i}, a_{i}; a_{i} ) ), \label{eq:minifst}$$
where $t_{i}$ is the length (in tokens) of the $i^{th}$ component and $N(t_{i}, a_{i}; a_{i})$ is the number of different ways in which the $t_{i}$ tokens in the $i^{th}$ component, chosen from a unique alphabet of $a_{i}$ tokens, can be arranged when order is important. The two undetermined Lagrange multipliers are $\alpha$ and $\beta$. For the proteins, the tokens are the amino acids, comprising both the 22 directly encoded in the genome and those that have been subjected to post-translational modification (PTM).
![\[fig:mdata\]Illustrating a typical solution of the CoHSI equation described in [@HattonWarr2017]. Both a sharp unimodal peak and power-law tail can be seen clearly](generateModelPDF_single_far.eps){width="50.00000%"}
The canonical distribution shown in Fig \[fig:mdata\] can then be compared with the distribution (Fig \[fig:tdata\]) of protein sizes (expressed as their length in amino acids) observed in a large aggregation of protein sequences (TrEMBL v15-07, https://uniprot.org/); the predicted sharp unimodal peak at small $t_{i}$ and precise power-law tail of higher lengths of Fig \[fig:mdata\] can be seen clearly in Fig \[fig:tdata\] (see [@HattonWarr2017] for a detailed analysis of this).
![\[fig:tdata\]Illustrating the length distribution of the proteins measured in amino acids in TrEMBL version 15-07.](trembl_length_distribution.eps){width="50.00000%"}
We note that the canonical distribution of protein sizes can also be seen at scales ranging from the domains of life (archaea Fig \[fig:archaea\_eps\], bacteria Fig \[fig:bacteria\_eps\], eukaryota Fig \[fig:eukaryota\_eps\]) down to the level of species (human Fig \[fig:human\_eps\], maize Fig \[fig:maize\_eps\], and fruit fly Fig \[fig:drosi\_eps\]).
[0.5]{}
{width="6cm"} \[fig:archaea\_eps\]
[0.5]{}
{width="6cm"} \[fig:human\_eps\]
[0.5]{}
{width="6cm"} \[fig:bacteria\_eps\]
[0.5]{}
{width="6cm"} \[fig:maize\_eps\]
[0.5]{}
{width="6cm"} \[fig:eukaryota\_eps\]
[0.5]{}
{width="6cm"} \[fig:drosi\_eps\]
The presence of the power-law means that the distribution of protein sizes is long-tailed, and along with the sharp unimodal peak at small $t_{i}$, leads to a distribution which is palpably right-skewed. This significantly complicates the meaning of the word *average* as we discussed in detail in [@HattonWarr2018b]. By way of example and to show that common measures of the average such as mean, median and mode can differ substantially, if we compute each of these for the length distributions of the domains of life in TrEMBL release 15-07 of Figure \[fig:tdata\], we get the values shown in Table \[tab:average\]. The values for the mean lengths of proteins agree closely with those calculated by Kozlowski [@doi:10.1093/nar/gkw978] for example. The effects of the skew can be clearly seen in the significant difference between mean and median.
Domain of Life Mean Median Mode
---------------- ------ -------- ------
Archaea 287 246 130
Bacteria 312 272 156
Eukaryota 435 350 379
: Measures of average length of proteins in the domains of life
\[tab:average\]
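The ordering mode < median < mean seen in Table \[tab:average\] is characteristic of right-skewed distributions in general; a small Python illustration with a synthetic lognormal sample (purely illustrative, not protein data):

```python
import numpy as np

# purely illustrative right-skewed sample (lognormal); NOT real protein data
rng = np.random.default_rng(0)
lengths = rng.lognormal(mean=5.7, sigma=0.7, size=1_000_000).astype(int)

mean = lengths.mean()
median = np.median(lengths)
mode = np.bincount(lengths).argmax()

# right skew pushes the mean above the median, which sits above the mode
assert mode < median < mean
```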
The two undetermined Lagrange parameters $\alpha,\beta$ play an interesting role. From [@HatTSE14; @HattonWarr2015; @HattonWarr2017], $\beta$ was shown to be intimately related to the range of unique token alphabet sizes encountered in a discrete system. It is in fact the asymptotic slope of the power-law *in the pdf of the unique alphabets,* $p_{i} \sim a_{i}^{-\beta}$. For systems with bigger unique alphabets than the known collection of proteins, such as software systems where the tokens are programming language tokens, the $a_{i}$ can be in the hundreds or even thousands. For such systems, the power-law is flatter corresponding to a smaller $\beta$, [@HatTSE14; @HattonWarr2017]. For the proteins the alphabet ranges are, at least currently, much smaller primarily because of technological limitations in the measurement of the degree to which post-translational modification enhances the basic 22-letter amino acid alphabet directly coded from DNA. For example, the largest unique alphabet of any protein in the SwissProt subset of TrEMBL release v.13-11 using the results of the Selene project [@Selene2013] was 33 [@HattonWarr2015]. We expect the largest unique alphabet of any protein to increase markedly in size in the years to come, as the thousands of PTMs both known or predicted are better identified and annotated [@ApweilerHermjakobSharon1999; @ZafarNasirBokhari2011]. This we expect to happen rather slowly as PTM identification is time-consuming. **We therefore expect $\beta$ to vary slowly as the size of the collection of known proteins increases.**
Now, as we showed in [@HattonWarr2017; @HattonWarr2018a], the *length distribution* of proteins measured in amino acids also asymptotes to a power-law, but whose slope $\beta'$ say, is related in a complex way to *both* $\alpha$ and $\beta$. However, this too is slowly varying so *we expect that as the known total collection of proteins increases in size, the power-law in the tail of the length distribution will retain a nearly constant slope $\beta'$*. In other words, the shape of the canonical CoHSI length distribution is essentially scale-invariant. This is predicted from the nature of the CoHSI equation [@HattonWarr2017] and it is possible to test this prediction.
To see this in real data, Figure \[fig:tremblversions\] shows the length distribution of two versions of the full TrEMBL protein database, versions 15-07 and 17-03 as a complementary cumulative distribution function (ccdf), in which the power-law has slope $-\beta'+1$, [@Newman2006]. In the 20 months separating these two releases, the total number of proteins appearing in the database has increased by 60% from around $5 \times 10^{7}$ to $8 \times 10^{7}$. *As predicted, the power-law slopes are both self-similar and emphatic.* We will consider this as an equilibrium state for the system of proteins as we discussed in [@HattonWarr2018b].
![\[fig:tremblversions\] The complementary cumulative distribution function (ccdf) of two versions of the full TrEMBL protein database separated by 20 months.](trembl_length_cdf_combined.eps){width="50.00000%"}
This scale-independence is also clearly visible in subsets of the same dataset. Figure \[fig:swissprottrembl\] shows the full TrEMBL dataset, version 15-07, and the curated SwissProt subset of this version. Even though the difference in size of the datasets is two orders of magnitude, the self-similarity is obvious as predicted.
![\[fig:swissprottrembl\]Illustrating the complementary cumulative distribution function (ccdf) for the curated SwissProt subset and the full TrEMBL dataset in version 15-07, [@HattonWarr2017].](SwissProt_Trembl_lengths.eps){width="50.00000%"}
Exactly the same phenomenon is observed in collections of software [@HatTSE14]. Along with the highly conserved average component length [@HattonWarr2018a], this self-similarity is a natural property of all CoHSI systems and has a particularly interesting implication - the prediction that there is a relationship between the total number of components in an aggregation (the y-axis of Fig. \[fig:tremblversions\]) and the maximum size of component within that aggregation (the x-axis of Fig. \[fig:tremblversions\]). In the following section we test this prediction for proteins.
A total size v. maximum length relationship
===========================================
Here we use a simple geometric argument based on self-similarity to derive a relationship between the total number of proteins in a collection and the maximum length of protein within that collection.
Consider Figure \[fig:geometry\]. This is a ccdf schematic with logarithmic axes to base 10 so we can derive a simple geometric relationship by imagining a system growing in size from $f$ to $f'$ in the total number of proteins and a matching self-similar growth from $t$ to $t'$ in the maximum length of a protein in that population. We then get,
$$\frac{\log_{10} f - \log_{10} f'}{\log_{10} t' - \log_{10} t} = - \beta' + 1$$
As mentioned earlier, we have used the fact that the power-slope on a cumulative distribution function is one greater than its slope on a pdf [@Newman2006], which we have already taken to be $\beta'$. Re-arranging, we derive the relationship
$$t' = t. \big [ (\frac{f'}{f})^{1/(\beta'-1)} \big ]
\label{eq:geometry}$$
![\[fig:geometry\]A simple geometric model of the self-similarity of the protein length distribution represented as a ccdf.](scalepic.eps){width="50.00000%"}
We can quantify this by inspecting Figure \[fig:tremblversions\]. The graphs are noisy for the largest protein itself but we can get a more robust idea by identifying the protein length *after which* the longest 1000 proteins appear when arranged in increasing order. For TrEMBL release 15-07, this length is 8,886 amino acids and for release 17-03 it is 10,787 amino acids. This corresponds to a relative increase in length of $t'/t = 10787/8886 = 1.21$.
Moreover, there are $f = 49,01,998$ proteins in release 15-07 and $f' = 80,204,459$ proteins in release 17-03. If we substitute these numbers into (\[eq:geometry\]) for a value of the slope of around $\beta' = 4.13$ [@HattonWarr2017], we get an estimate of $t'/t \approx 1.17$ which is satisfactorily close.
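The arithmetic of this check can be reproduced in a few lines (a sketch; the value of $f$ for release 15-07 is taken here as approximately $4.9\times10^{7}$, consistent with the $\sim 5 \times 10^{7}$ figure quoted earlier):

```python
# check of t'/t = (f'/f)**(1/(beta'-1)) with the release sizes quoted in
# the text; f for release 15-07 is approximate
f, f_prime, beta_prime = 4.9e7, 80_204_459, 4.13

ratio_pred = (f_prime / f) ** (1.0 / (beta_prime - 1.0))
ratio_obs = 10787 / 8886   # longest-1000 threshold lengths, 17-03 vs 15-07

assert abs(ratio_pred - 1.17) < 0.01
assert abs(ratio_obs - 1.21) < 0.01
```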
**We close this section by asserting therefore that the longest protein in a collection of proteins is constrained solely by the total number of proteins in that collection by (\[eq:geometry\]). No evolutionary mechanism is necessary, it is simply a general property of the CoHSI equation, its natural and precise asymptote to a power-law in length, and the relative constancy of the slope.**
Possible constraints on the longest proteins
============================================
In order to explore the implications of the assertion that the longest proteins of a collection will grow in a predictable way with the total number of proteins in that collection, we will first attempt to identify if current theory places any physico-chemical or combinatorial constraints on such growth.
What physico-chemical bounds might constrain the longest protein size?
----------------------------------------------------------------------
An examination of theoretical work in protein folding does not appear to place any obvious bounds, although much of the work has focused on globular proteins. At the heart of this discussion is the Levinthal paradox [@Levinthal1969], which states that an exhaustive search of all folding possibilities would lead to unreasonably long folding times for any reasonably long protein. In a fascinating discussion, Dryden et al. [@Dryden2008], amongst other things, address this question of how proteins can find their native folded conformation given the large (in principle astronomically vast) number of possible intermediate folding states. Using the concept of both unique amino acid alphabets and insensitivity to the actual amino acids used (but without reference to information content), they found that the effective number of possibilities could be greatly reduced. They made several assumptions, the most important of which is that for proteins to fold into a native state, the physico-chemical properties of the amino acid side chains can be reduced effectively to two classes, hydrophilic and hydrophobic. Such a reduced functional alphabet of amino acids (in our nomenclature $a_{i} = 2$) is based on the assumptions discussed by Dryden et al. [@Dryden2008] that protein folding is driven essentially by hydrophobic effects and that protein functionality is not necessarily constrained to unique sequences supported by the framework of the native protein.
Numerous other mechanisms have been proposed to resolve the Levinthal paradox, as discussed in detail by [@Finkelstein2013]. Moreover, the existence of partitioning into semi-autonomous protein domains of 25-500 amino acids in length also ameliorates this paradox. [@Finkelstein2013] go on to give a phenomenological formula for the dependence of the folding rate on the size, shape and stability of protein folds.
Studies such as those by [@LanePande2013] show that for domains up to about 300 amino acids in length, various rate-folding laws such as power-law and exponential fit experimental data well, with corresponding folding times between around $10^{-8}$ s and $10^{2}$ s. Using a theoretical argument based on thermodynamics, [@GhoshDill2009] derive a relationship between protein stability and various physico-chemical properties such as temperature, denaturant, pH and salt. Importantly, this relationship is linear, rather than non-linear, in protein chain-length measured in amino acids.
Together, these studies do not appear to reveal any insurmountable barriers to the gradually increasing length of the longest proteins that we predict based on the total number of proteins.
We will consider later in this discussion a possible link between the longest proteins in a given species with the point in evolutionary time when the species emerged, invoking a bridging medium of the total number of proteins. Thus we need to explore the issue of the total number of proteins that have existed through the history of life on earth.
What bounds might exist theoretically on the total number of proteins?
----------------------------------------------------------------------
The total possible population of proteins must by definition have grown from zero in early time and, despite its arguably monotonic growth as an ergodic system acting on all proteins considered over time and space, the number of proteins present at any given time is a moot point given the history of successive mass extinction events [@Benton2005; @Rothman2017]. Clearly a precise knowledge of the number and diversity of proteins present throughout evolutionary history is beyond our power to obtain, and thus plausible assumptions need to be explored.
Dryden et al. [@Dryden2008] made assumptions regarding the upper and lower bounds for the number of novel proteins generated during 4Gyr of evolution. The lower bound is $4\times10^{21}$ proteins, and the upper bound is $4\times10^{43}$ proteins. They claim that “most of the sequence space may have been explored” during evolution. This then suggests that after an early phase when the sequence space was being explored by life, the total number of proteins rose rapidly from zero to some value at which it remained stable. This is questionable. The authors’ argument that the physico-chemical complexity of amino acid structures can be reduced to a unique alphabet as low as 2 or 3 is certainly challenging, but we also note that their argument is restricted to proteins of 100 amino acids or less in length; they consider that “The exploration of longer chains of 100 amino acids with only two types of residue is obviously much less complete but it is not a negligible fraction of the total”. This appears to be an overstatement. The median length of proteins (Table 1) currently ranges from 246 amino acids for the archaea to 350 amino acids for the eukaryota [@HattonWarr2018b], implying by definition that half are longer. Even with the most parsimonious assumption of just $a_{i} = 2$ types of amino acid, a median length archaean protein offers $2^{246} \approx 1.1\times10^{74}$ possible ways of arranging its amino acids and a median length eukaryotic protein offers $2^{350} \approx 2.29\times10^{105}$.
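These sequence-space sizes are trivially checked; a minimal sketch using the median lengths quoted above and the most parsimonious alphabet of two residue types:

```python
import math

# Back-of-envelope check of the sequence-space sizes quoted in the text,
# using the median protein lengths from Table 1 and a_i = 2 residue types.
def arrangements(length: int, alphabet: int = 2) -> int:
    """Number of distinct sequences of the given length over the alphabet."""
    return alphabet ** length

archaea = arrangements(246)    # median archaean protein length
eukaryota = arrangements(350)  # median eukaryotic protein length

print(f"2^246 ~ 10^{math.log10(archaea):.1f}")    # ~ 10^74
print(f"2^350 ~ 10^{math.log10(eukaryota):.1f}")  # ~ 10^105
```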
As we will confirm when we consider the amount of protein sequence space which has been documented so far, this suggests that only a tiny fraction of this space could have been explored in evolutionary history. It can be argued of course that we have no sequences for all the lifeforms that have disappeared but we must also consider that half of all sequences are longer than the median sequence and some of these, because of the slowly decaying power-law tail of the CoHSI distribution, are 1 to 2 orders of magnitude longer.
Let us examine more closely how CoHSI allows us a different perspective into the theoretical extent of relevant sequence space. The CoHSI distribution has two defining features of relevance for the evolution of novelty in proteins, Fig \[fig:tdata\]. On the one hand, the majority of proteins are smaller than 750 amino acids, whilst on the other, very long proteins (1 or 2 orders of magnitude larger than the median) are overwhelmingly likely to be present. We can recall [@HattonWarr2017] that CoHSI begins with a basic Statistical Mechanics formulation in which the total number of possible arrangements of $T$ tokens distinguishable by their order amongst $M$ components with $t_{i}$ tokens in the $i^{th}$ component is
$$\Omega(T,M) = \frac{T!}{\prod_{i=1}^{M} (t_{i}!)} \label{eq:distinguishable}$$
This is the total ergodic space of Statistical Mechanics. To render it a little less overwhelming (even for modest systems $T!$ is astronomically large compared with the denominator), it is conventional to work with the logarithm of $\Omega(T,M)$ and employ Stirling’s theorem, yielding
$$\log \Omega(T,M) \sim T \log T - {\sum_{i=1}^{M} (t_{i} \log t_{i})} \label{eq:logdistinguishable}$$
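The quality of this Stirling form can be checked numerically; a minimal sketch comparing the exact $\log \Omega(T,M)$ of (\[eq:distinguishable\]) with the approximation, for a small hypothetical partition of tokens into components:

```python
import math

# Compare the exact log Omega(T, M) with the Stirling form used in the
# text, for a small illustrative partition of T tokens into M components.
def log_omega_exact(t):
    """log of T! / prod(t_i!) computed via log-gamma."""
    T = sum(t)
    return math.lgamma(T + 1) - sum(math.lgamma(ti + 1) for ti in t)

def log_omega_stirling(t):
    """T log T - sum(t_i log t_i); the -T and +sum(t_i) terms cancel."""
    T = sum(t)
    return T * math.log(T) - sum(ti * math.log(ti) for ti in t)

t = [100, 200, 300]  # hypothetical component occupancies
exact = log_omega_exact(t)
approx = log_omega_stirling(t)
print(exact, approx)  # the two agree to within about 1% here
```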
By applying constraints to this, we define subsets that are the most likely pathways (or highways) through the information landscape of ergodic space, thus whittling down the number of possible configurations, i.e. the size of sequence space, in which systems can appear. In the case of CoHSI, we constrain the possibilities by considering all those possible systems in time and space which have the same $T$ and a fixed $I$ where $I$ is the total Hartley-Shannon information split such that the information content of the $i^{th}$ component is $I_{i}$. In this case, $I_{i}$ is simply the $N()$ function of equation (\[eq:minifst\])
$$I_{i} = N(t_{i}, a_{i}; a_{i} ) \label{eq:hsi}$$
The motivation for this is that the Statistical Mechanical methodology will then reveal the most likely length distribution which any system of given $T,I$ will have, exactly analogous to the Maxwell-Boltzmann distribution of classical kinetic theory when energy and size are the constraints [@GlazerWark2001].
In the language of Statistical Mechanics then the length distributions governing these information highways through ergodic space are given by maximising
$$\begin{gathered}
\log \Omega(T,M) = T \log T - T - \sum_{i=1}^{M} \lbrace t_{i} \log (t_{i}) - t_{i} \rbrace \\
+ \alpha \lbrace T - \sum_{i=1}^{M}t_{i} \rbrace + \beta \lbrace I - \sum_{i=1}^{M} I_{i} \rbrace \label{eq:mini}\end{gathered}$$
The solution of this process is of course equivalent to solving the differential equation (\[eq:minifst\]), giving the range of distributions we explored in [@HattonWarr2018a]. These distributions all asymptote to power-laws for components large compared with their unique alphabet of tokens, perhaps inviting the term *information superhighways* for these routes through sequence space.
### Information landscapes - an analogy with protein folding?
The concept of an information landscape with preferential pathways (highways or superhighways) that constrain the properties of discrete systems has an analogy with the energy landscapes that are associated with protein folding. [@Finkelstein2013] review numerous mechanisms which attempt to provide an answer to the question posed by Levinthal [@Levinthal1969], of how proteins of even modest length can reach their final stable, correct (i.e. native) conformation given the astronomically large number of folding intermediates that are possible. This challenge is reduced to manageable proportions through the understanding that the process of folding offers an energy landscape in which energetically preferential (lower energy) states exist and by means of which pathways through this landscape (characterized by [@DillChan1997] as “funnels”) channel the protein towards its native conformation as it folds.
We note however that strictly speaking, CoHSI and the methodology of Statistical Mechanics do not remove possible configurations - *CoHSI is not a mechanism.* All it does is recognize those configurations which, because they have many more corresponding equi-probable microstates, are most likely to be seen in a given system. As we have noted before, it is not a strait-jacket. However, these configurations all asymptote to a power-law for a given $T, I$ because of the constraint on the total information. So ergodic space is whittled down simply because many configurations are so rare that it would be extremely unlikely ever to see them. It is not easy to judge how much this process whittles down what is likely to be observed. Looking at Fig \[fig:mdata\], the canonical solution of equation (\[eq:minifst\]), all the other solutions with an area of 1 under the curve, which therefore satisfy the requirement of a probability distribution, have been eliminated in the sense of being very unlikely. It is certainly clear that what is likely to be observed (i.e. the CoHSI solution) is a tiny subset of what might under extreme circumstances be observed. Quantifying this observation is challenging but it is entirely consistent with the simple calculation earlier in this section.
How much of protein sequence space is documented?
-------------------------------------------------
The study of [@Dryden2008] discussed above dealt with the theoretical extent of protein space. Given that we cannot realistically access that sector of protein sequence space that was explored by extinct organisms, we can address only what can be observed in modern life forms. This discussion is normally couched in terms of species rather than proteins. For example, [@Mora2011] estimated that 86% of land species and 91% of marine species remain to be described, so we have perhaps sampled some 10% of the total species thought to be extant; we have concerns over these numbers that we will consider at the end of this section. However a discussion based on species is not obviously the same as a discussion based on proteins because we would need to know how proteins are distributed by species. This we can explore using CoHSI.
### Viewpoint using proteins
A basic property of a CoHSI system is that the more components there are, the more emphatic is the CoHSI signal. For smaller numbers the signal is visible but for large numbers it is completely dominant as we saw in the discussion leading from Fig. \[fig:tdata\] to Figs \[fig:archaea\_eps\] - \[fig:drosi\_eps\]. This is because when we choose proteins as components with tokens of amino acids to generate the CoHSI length distribution, then the numbers are indeed large, for example, there are around 80 million proteins with a median length of around 300 amino acids in TrEMBL 17-03. CoHSI is then completely dominant with a demonstrably precise power-law in the tail of both the length and alphabet distributions [@HattonWarr2015].
The point of this discussion is that we are trying to get to a theoretical relationship between species and proteins. Following the success of using proteins as components and amino acids as tokens in revealing the CoHSI length distributions, *suppose then we now use species as components and proteins as their tokens*. We note that the definition of components and their token contents is only a matter of categorisation - provided it is consistent, CoHSI is still applicable [@HattonWarr2017]; this is, after all, how we humans categorise and then measure the universe we live in. As a result we expect species and proteins also to be related by a CoHSI distribution, but with rather more noise present as there are self-evidently many fewer species in TrEMBL than there are proteins.
Before continuing with this argument, we should take a moment to consider the non-trivial problem of extracting species names from the TrEMBL releases and define what we mean by a species. This will become important when we attempt below to associate TrEMBL protein data, using the species name, with a phylogenetic dataset. Paraphrasing the Uniprot documentation (https://web.expasy.org/docs/userman.html, https://www.uniprot.org/docs/speclist), there are *real* organism codes used in both Swissprot and TrEMBL and *virtual* organism codes used only in TrEMBL. Real organism codes correspond to a specific and specified organism. Virtual organism codes regroup organisms at a certain taxonomic level and generally correspond to a ’pool’ of organisms, which may be as wide as a kingdom, as it is not possible in a reasonable timeframe to manually assign organism codes to all species represented in TrEMBL. Instead, Uniprot assigns a specific “official” name to an aggregated group of organisms such as Amphibia. Bioinformatically, Uniprot and TrEMBL releases identify an organism by a card image in the form:
ID X_Y ...
In the Swissprot release, X is a protein name and Y is a real organism code. In TrEMBL, X is the accession number (https://web.expasy.org/docs/userman.html\#AC\_line) and Y the organism code (real or virtual). The virtual codes all begin with the number 9. There are some 23,204 organism records in TrEMBL 18-02, each identified by an “ID X\_Y” card image, and of these 285 are virtual; some of these virtual organisms contain many related species and in some cases correspondingly large aggregations of sequences. The question is, of course: how should we treat the virtual organism codes, acting as they do as a ’pool’ of organisms which cross species boundaries? Taxonomically, this would be very complex and many of them have very few proteins sequenced, which is why they are grouped in this way prior to the manual review winnowing process in Swissprot. CoHSI however offers a unique simplification of this problem - it is agnostic with regard to the categories provided they are consistent. *We will simply define our species name as the value of the Y code on the “ID” card image, and extract the full scientific name from the “OS” or Organism Species card image (https://web.expasy.org/docs/userman.html\#OS\_line).*
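The extraction of the Y code and the virtual/real distinction can be sketched as follows. This is a minimal sketch based on the card-image format described above; the example lines are hypothetical, not real TrEMBL records:

```python
# Minimal sketch of extracting the organism code Y from "ID X_Y ..." card
# images as described in the text, and flagging virtual codes (which all
# begin with the digit 9). The example lines below are hypothetical.
def organism_code(id_line: str) -> str:
    """Return the Y part of an 'ID X_Y ...' card image."""
    entry_name = id_line.split()[1]      # the X_Y field
    return entry_name.rsplit("_", 1)[1]  # the organism code Y

def is_virtual(code: str) -> bool:
    """Virtual organism codes all begin with the digit 9."""
    return code.startswith("9")

print(organism_code("ID   ABC123_HUMAN   Reviewed;   100 AA."))  # HUMAN
print(is_virtual(organism_code("ID   XYZ789_9AMPH   100 AA.")))  # True
```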
This approach using our definition of species as components and proteins as tokens might seem to be in conflict with our previous analyses using proteins as components and amino acids as tokens. This is not the case, since we already know that the same discrete system can house multiple concurrent CoHSI systems, but with different unique alphabets of categorisation. For example, we illustrated this by demonstrating the presence of two heterogeneous CoHSI systems co-existing in digital music depending on whether or not we include musical note duration as a separate category in addition to pitch [@HattonWarr2017]. We also demonstrated a homogeneous CoHSI system when we count words in texts co-existing with a heterogeneous CoHSI system when we count letters in words in the very same texts. (A homogeneous CoHSI system is one in which each component contains only tokens of the same type unique to that component. A heterogeneous CoHSI system is one in which each component contains multiple types of token.)
*We therefore consider a heterogeneous CoHSI system in which components are **species** as defined above, and tokens are **proteins**. In this system, we expect the distribution of numbers of proteins in species also to obey the CoHSI distribution (with different $\alpha, \beta$ of course).* This we can test and we use the largest dataset, TrEMBL for this purpose. Furthermore, to test the agnosticism of CoHSI with regard to the categories, we will do this by both including and excluding the virtual organism records.
Fig. \[fig:speciesproteins\] is a ccdf of the distribution of numbers of proteins in species in the full release 18-02 of TrEMBL and shows the data broken down for the domains of life, i.e. archaea, bacteria and eukaryota *including* the virtual organism records. Fig. \[fig:speciesproteins\_novirtual\] is the same dataset but this time *excluding* the virtual organism records. We can note three things.
1. The tails of the distributions show the linearity expected from the predicted power-law, reflecting the insensitivity of CoHSI to the categorisation provided it is consistent; however, the tails of the distributions have different slopes.
2. When the virtual organisms are excluded, the slopes of the tails of the distributions are more consistent.
3. In the eukaryotic distributions there is a noticeable kink or plateau between around 10 and 10,000 sequenced proteins which is not present in either the archaea or bacteria. Given that this kink appears in *both* Fig. \[fig:speciesproteins\] and \[fig:speciesproteins\_novirtual\], this is probable evidence of researcher bias in exploring the vastness of life. We note that this bias does not appear in Fig. \[fig:swissprottrembl\] which (because proteins are the components in this analysis) contains many more datapoints, thereby strengthening the CoHSI signal. We will now look at this probable researcher bias in a little more detail.
![\[fig:speciesproteins\]A ccdf of the distributions of number of proteins in species in the full TrEMBL 18-02 dataset for the domains of life, individual and in aggregate INCLUDING the virtual organisms.](trembl_proteinCounts.eps){width="50.00000%"}
![\[fig:speciesproteins\_novirtual\]A ccdf of the distributions of number of proteins in species in the full TrEMBL 18-02 dataset for the domains of life, individual and in aggregate EXCLUDING the virtual organisms.](trembl_proteinCounts_novirtual.eps){width="50.00000%"}
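A ccdf of the kind plotted in the figures above can be constructed directly from a list of per-species protein counts; a minimal sketch (the counts below are hypothetical, not TrEMBL data):

```python
# Minimal sketch of the ccdf construction used in the figures: for each
# observed count x, the fraction of species with at least x sequenced
# proteins. Ties are handled by simple rank order, which is the usual
# convention for plotting on log-log axes.
def ccdf(values):
    xs = sorted(values)
    n = len(xs)
    # fraction of observations >= x, for each sorted x
    return [(x, (n - i) / n) for i, x in enumerate(xs)]

counts = [3, 10, 10, 250, 4000, 21000]  # hypothetical proteins-per-species
for x, frac in ccdf(counts):
    print(x, frac)
```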
First of all, if we took a minimum qualifying number of proteins even as low as 500 in order to measure how well researchers have covered a particular species (using our definition of species above), then more than 98% of the species appearing in the better curated SwissProt dataset would **not** qualify. Even if the very much larger TrEMBL v 18-02 is used (larger by about a factor of 100x), almost 85% of the species would still not qualify.
Another manifestation of this, at least currently, is that the number of species real or virtual, with sequenced proteins appearing in the full TrEMBL database is growing much more slowly than the number of proteins. Table \[tab:tremblcontents\] shows that in a space of some 20 months between two recent versions of TrEMBL, the total number of proteins increased by over 60% but the number of species (real and virtual) increased by less than 4%. In other words current emphasis appears to be almost exclusively on making the proteomes of species which are already in TrEMBL more complete. It is hard to predict how long this differential bias will persist.
  TrEMBL version    Species     Proteins    Average proteins per species
  ---------------- ---------- ------------ ------------------------------
  v\. 15-07          22,030    49,401,998               2,242
  v\. 17-03          22,824    80,204,459               3,514
: Numbers of species and proteins in versions of TrEMBL
\[tab:tremblcontents\]
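The differential growth rates quoted above follow directly from the table; a quick check:

```python
# Check of the growth rates quoted in the text between TrEMBL v.15-07 and
# v.17-03 (numbers taken from the table above).
species_1507, proteins_1507 = 22_030, 49_401_998
species_1703, proteins_1703 = 22_824, 80_204_459

protein_growth = proteins_1703 / proteins_1507 - 1
species_growth = species_1703 / species_1507 - 1

print(f"proteins: +{protein_growth:.1%}")  # over 60%
print(f"species:  +{species_growth:.1%}")  # under 4%
```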
Finally, it is clear that an incomplete picture of proteome size for species in the TrEMBL database is presented in Fig. \[fig:speciesproteins\_novirtual\] (since the analysis was not restricted to organisms with whole genomes sequenced). We know from [@doi:10.1093/nar/gkw978] that when there were only 63 million proteins in TrEMBL, the mean size of proteomes, i.e. the mean number of sequenced proteins per species for viruses, archaea, bacteria and eukaryota, was 42, 2358, 3200 and 15145 respectively, suggesting that some flattening of the curve for species in the lower ranges of sequenced proteins might be expected. Fig. \[fig:speciesproteins\_novirtual\] provides substance for this but with some subtleties. For example, we note that this figure appears to support our predictions about the CoHSI asymptotic state in that both archaea (1% of all proteins in TrEMBL) and bacteria (48% of all proteins in TrEMBL) appear to exhibit the least researcher bias, as their curves are closest to the predicted CoHSI asymptotic ccdf shape [@HattonWarr2018a]. In contrast the eukaryota (47% of all proteins in TrEMBL) show a substantial dip between 10 and 10,000 proteins, representing primarily those eukaryotic species currently with a significant shortfall in the sequencing of their proteomes. This shortfall observed in the eukaryotes is also manifest in a smaller but similar dip in the curve for the full TrEMBL dataset. *We can therefore make the (falsifiable) prediction that in the fullness of time the eukaryote dataset (and the full TrEMBL dataset) will morph into the asymptotic CoHSI ccdf already observed in Fig \[fig:speciesproteins\_novirtual\] for the archaea and bacteria.*
So we can indeed observe a relationship, guided by CoHSI, between species and proteins (and as a beneficial side-effect, we can observe the effects of researcher bias). It is therefore reasonable to consider from this different perspective (i.e. arguments focused upon species) the central question of this section - the degree to which protein sequence space is documented.
### Viewpoint using species
To close this section, we reflect on the above estimates of how far we have to go in sequencing the proteins of life by considering species. [@Mora2011] reported that 86% of land species and 91% of marine species remained to be described. In terms of the actual numbers of species they estimated “∼8.7 million (+/-1.3 million SE) eukaryotic species globally, of which ∼2.2 million (+/-0.18 million SE) are marine. In spite of 250 years of taxonomic classification and over 1.2 million species already catalogued in a central database, our results suggest that some 86% of existing species on Earth and 91% of species in the ocean still await description.” It is worth noting that these authors give estimates of species numbers that are very much lower than those reported by others. Locey and Lennon [@Locey201521291], applying ecological scaling laws, estimated upwards of one trillion species on earth, with the majority being bacteria. Larsen et al. [@Larsen2017] estimated the total number of species to be 1-6 billion, with up to 90% of these being bacterial species.
Even if we accept the lowest of these estimates [@Mora2011] of the total number of species, *only 0.2% have been sequenced, and these only partially,* and yet these already take up 18.5 Gigabytes when optimally compressed! This suggests that to sequence all of the known lifeforms on earth will take some $18.5 \times 500$ Gigabytes $\sim 10$ Terabytes, multiplied by whatever factor $F$ we decide the current TrEMBL database is undersampled by for the species it already contains. Given the rate of change in protein addition compared with species addition, a factor of 10 is probably very conservative but nevertheless leads to an estimate of perhaps 100 Terabytes in an optimally compressed form, which uncompressed would amount to some 350 Terabytes. Even at this level, only a tiny fraction of the potential space would have been explored by all the protein sequences on earth.
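The arithmetic behind this estimate can be reconstructed as a sketch under the stated assumptions (18.5 GB compressed covers ~0.2% of species, an undersampling factor $F = 10$, and an assumed compression ratio of about 3.5x):

```python
# Rough reconstruction of the storage arithmetic in the text. The
# compression ratio of ~3.5x is our assumption, inferred from the text's
# rounded figures of ~100 TB compressed and ~350 TB uncompressed.
current_gb = 18.5         # optimally compressed sequence data so far
species_factor = 500      # 1 / 0.2% of species sequenced
F = 10                    # assumed undersampling factor
compression_ratio = 3.5

compressed_tb = current_gb * species_factor * F / 1000
uncompressed_tb = compressed_tb * compression_ratio

print(compressed_tb, uncompressed_tb)  # ~92.5 TB compressed, ~324 TB uncompressed
```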
However, we can put even these huge estimates into context by noting that terrifying though this prospect may sound, these numbers are already dwarfed by the amount of data acquired by the Large Hadron Collider, which in July 2017 surpassed 200,000 Terabytes[^3].
What has been the effect of mass extinctions?
---------------------------------------------
During the largest of the five commonly documented mass extinctions, the Permian-Triassic, some estimates suggest that 90% of all species disappeared [@Benton2005]. Even though proteins shared amongst species (through, for example, the mechanism of horizontal gene transfer) could survive if some but not all of those species disappeared, it seems unavoidable that this extinction would have reduced the number of proteins substantially, although it has been argued that the biomass associated with complex multicellular eukaryotes has remained approximately stable [@Franck2006]. We would anticipate that after such an extinction, if it occurred sufficiently rapidly, the equilibrium relationship we describe between the total number of proteins and the longest might take some time to re-establish. Of course we have no modern data on this but taxonomic and ecological recovery have been described in detail [@Sahney2008]. We will return to this shortly when we look for extinction footprints in the protein data.
Piecing these factors together
------------------------------
Given that CoHSI exerts its constraints regardless of how proteins might be distributed amongst species, we could reasonably argue that the total number of proteins might increase by several orders of magnitude if we were able to sequence the entire population of species on the Earth, problems with continual extinction and speciation notwithstanding.
Given also that the largest protein in the TrEMBL distribution is currently around 36,000 amino acids, using (\[eq:geometry\]), this would suggest that the largest protein which might be discovered in the future may well be as large as $36000 \times (10^{1/3.13}) \sim 75,126$ amino acids. Of course, this is a noisy estimate as explained earlier and in addition, there may well be biochemical and structural stability limits on such long proteins, although our discussion of protein folding did not reveal any obvious barriers. However, we can be a little more circumspect, given the noise in the tail of longest proteins in the protein length distribution, by extending our analysis using v. 17-03 of TrEMBL as a convenient baseline, with regard to the point beyond which there are only 1,000 proteins, which roughly corresponds to the $99.999^{th}$ percentile. As can be seen by studying Fig. \[fig:tremblversions\], a value of 1,000 amino acids in length is comfortably within the self-similar linear tail in both distributions. For TrEMBL release 17-03, the length corresponding to the $99.999^{th}$ percentile is 10,787 amino acids, which we will notate as $l_{1000}$. In other words this is the length of the $1000^{th}$ longest protein. We therefore extend this by using (\[eq:geometry\]) with various very conservative scenarios as shown in Table \[tab:predictedbins\] for the same value of $\beta' = 4.13$ used earlier. The first row simply reproduces the observed results for version 17-03 of TrEMBL.
  Factor increase on 17-03   $l_{1000}$   $l_{500}$   $l_{100}$
  -------------------------- ------------ ----------- -----------
  x1                             10,787      13,580      21,538
  x5                             18,039      22,710      36,018
  x10                            22,511      28,339      44,946
  x15                            25,624      32,259      51,163
: Predicted bins of long proteins in the complete proteome, from a baseline of version 17-03 of TrEMBL.
\[tab:predictedbins\]
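The scaled rows of the table follow from the power-law tail: for a ccdf slope of $\beta' - 1 = 3.13$, the length at a fixed rank scales as (total proteins)$^{1/3.13}$. A minimal sketch, using the observed 17-03 baseline row:

```python
# Sketch reproducing the scaled rows of the table above: under a power-law
# ccdf with slope beta' - 1 = 3.13, the length of the k-th longest protein
# scales with the total number of proteins n as n^(1/3.13).
baseline = {"l1000": 10_787, "l500": 13_580, "l100": 21_538}  # TrEMBL 17-03

def scaled_length(l0: float, factor: float, slope: float = 3.13) -> float:
    """Length at a fixed rank after the total count grows by `factor`."""
    return l0 * factor ** (1.0 / slope)

for f in (5, 10, 15):
    row = {k: round(scaled_length(v, f)) for k, v in baseline.items()}
    print(f"x{f}: {row}")
```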
*Taking a guess of a x10 factor increase in total number of proteins over version 17-03 in the next few years, it seems likely that there will be a significant number of proteins (around 100) which are longer than about 45,000 amino acids in species yet to be discovered and/or sequenced. We discuss later the domain of life in which this is most likely to occur.*
The longest proteins and their evolutionary significance
========================================================
We have demonstrated both by theory and experiment in preceding sections that it is overwhelmingly likely that the longest protein in a collection of proteins is determined *solely* by a) the total number of proteins in that collection and b) the power-law slope of the pdf of the length distribution, which we will call $\beta'$. We have also shown that the value of $\beta'$ tends to be highly conserved, accounting for the self-similar nature of Figures \[fig:tremblversions\] and \[fig:swissprottrembl\]. Given that the total number of proteins must by definition have grown from zero when life first appeared, although perhaps not monotonically as speciations and extinctions have occurred, *we would expect our theory to show that the longest proteins will tend to occur in recent times, and might perhaps provide some evidence of the mass extinctions.*
CoHSI and the recent emergence of long proteins
-----------------------------------------------
In order to test this hypothesis, we merged protein data *using only real organism codes* in version 18-02 of the TrEMBL dataset with the evolutionary data on the time of species emergence in the *Time Tree of Life*[^4]. The Time Tree of Life is assembled as a phylogenetic tree and the TrEMBL dataset contains only those species for which some subset (potentially a large proportion) of their proteome has been sequenced. Clearly the Time Tree of Life will contain species now extinct which have no such proteome, whereas the TrEMBL dataset can contain the proteome of ancient species, but only if they have been extant in the last few thousand years and their DNA is still accessible (for example, that of the Mammoth). Merging the two datasets using the species name extracted from the “OS” header lines in the TrEMBL .dat format files for real organism codes, and the species name extracted from the phylogenetic data, gives a potential population of 9,469 species which appear in both the Time Tree of Life (file TimetreeOfLife2015.nwk, as of 28-Sep-2018) and also in version 18-02 of the TrEMBL dataset.
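The merge described above amounts to an inner join on the extracted species name; a minimal sketch, in which the two dictionaries are hypothetical stand-ins for the parsed TrEMBL and Time Tree of Life data:

```python
# Minimal sketch of the dataset merge described above: intersecting
# species names extracted from TrEMBL "OS" lines with species names from
# the phylogenetic (Time Tree of Life) data. The entries below are
# hypothetical stand-ins, not real parsed values.
trembl_species = {"Homo sapiens": 170_000, "Mus musculus": 84_000}  # name -> protein count
timetree_mya = {"Homo sapiens": 6.7, "Danio rerio": 435.0}          # name -> emergence (Mya)

merged = {
    name: (trembl_species[name], timetree_mya[name])
    for name in trembl_species.keys() & timetree_mya.keys()
}
print(merged)  # only species present in BOTH datasets survive the merge
```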
Before proceeding, we note that predicting the role of the longest protein depends of course on reliable estimates of that longest protein, given that many species’ proteomes are incomplete. There are in fact complete proteomes available in Uniprot, specifically documented as *reference proteomes* (https://uniprot.org/proteomes), including for example *Homo sapiens*; however there are not very many of these, so we have a typical statistical trade-off between completeness and size of dataset. We therefore approach this in two ways.
### Estimating the longest proteins of a species
In order to get a reasonable estimate of the size of the longest proteins in a species, we should specify some minimum qualifying number of proteins in the species’ proteome. The appropriate size of the sample that should be available from a species genome is a difficult question to resolve statistically, as we are confronted (amongst other challenges) by potential unknown researcher bias in acquiring the data. Thus we took an empirical approach, specifying a variety of minimum qualifying numbers (from 1 - 10,000) of proteins for a species to be included in the analysis. Data for each qualifying species were extracted from TrEMBL version 18-02 and plotted against the time of evolutionary emergence. Figures \[fig:time\_maxprot\_1prot\_150\_ALL\] - \[fig:time\_maxprot\_10000prot\_150\_ALL\] illustrate the observed relationship between the maximum length of protein in each qualifying species and its time of evolutionary emergence from the Time Tree of Life, with the qualifying level for each species set at 1; 10; 100; 500; 5,000; or 10,000 proteins.
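The qualifying-threshold step can be sketched as follows; the per-species length lists are hypothetical:

```python
# Sketch of the qualifying-threshold step described above: a species
# enters the analysis only if at least `min_proteins` of its proteins
# are sequenced, and we then record its longest protein.
def longest_qualifying(species_lengths, min_proteins=500):
    """Map qualifying species to the length of their longest protein."""
    return {
        name: max(lengths)
        for name, lengths in species_lengths.items()
        if len(lengths) >= min_proteins
    }

data = {
    "species_A": [300] * 499,           # under threshold: excluded
    "species_B": [250] * 600 + [9_000], # qualifies: longest = 9,000 aa
}
print(longest_qualifying(data))
```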
[0.5]{}
{width="6cm"} \[fig:time\_maxprot\_1prot\_150\_ALL\]
[0.5]{}
{width="6cm"} \[fig:time\_maxprot\_10prot\_150\_ALL\]
[0.5]{}
{width="6cm"} \[fig:time\_maxprot\_100prot\_150\_ALL\]
[0.5]{}
{width="6cm"} \[fig:time\_maxprot\_500prot\_150\_ALL\]
[0.5]{}
{width="6cm"} \[fig:time\_maxprot\_5000prot\_150\_ALL\]
[0.5]{}
{width="6cm"} \[fig:time\_maxprot\_10000prot\_150\_ALL\]
Studying these results, we see obvious horizontal linearities at low values on the Y-axis for minimum qualifying numbers of 1, 10, and 100 proteins; these suspected artifacts disappear progressively as the number of qualifying proteins per species increases, and we interpret these linearities as reflecting the statistical likelihood that a random small sample of proteins from a species will contain predominantly the relatively shorter and more numerous proteins (see Fig \[fig:tdata\]). Apart from the linearities, the data for all numbers of qualifying proteins look broadly similar, with a clear trend for the longest proteins in a species to correlate inversely with the time since its emergence. Thus we chose 500 proteins as our minimum size to qualify a species’ proteome for further analysis in the context of time of evolutionary emergence. However, we stress that until we consider reference proteomes, this is an arbitrary decision. Considering only species with at least 500 proteins sequenced leaves 954 qualifying species. Figures \[fig:time\_maxprot\_500prot\_500\_ALL\] and \[fig:time\_maxprot\_500prot\_4000\_ALL\] show these data for all qualifying species in TrEMBL version 18-02 whose times of evolutionary emergence are estimated up to 500 Mya or up to 4,000 Mya.
[0.5]{}
{width="6cm"} \[fig:time\_maxprot\_500prot\_500\_ALL\]
[0.5]{}
{width="6cm"} \[fig:time\_maxprot\_500prot\_4000\_ALL\]
With a view to attempting to falsify the CoHSI predictions in the spirit of Popperian analysis, we note from Figures \[fig:time\_maxprot\_500prot\_500\_ALL\] and \[fig:time\_maxprot\_500prot\_4000\_ALL\] that two things become clear. First, the growth in the longest protein as evolutionary time advances towards the present day is obvious, exactly as predicted by CoHSI. Second, there are intriguing minima in the signal at around 80 Mya, 120 Mya, 250 Mya, 430 Mya, 1,700 Mya and 2,500 Mya, and an obvious interpretation is that these reflect extinction events. Before discussing these further, however, we will gain further insight using the reference proteomes of Uniprot.
### Using reference proteomes
Certainly at first glance, the data of Figures \[fig:time\_maxprot\_500prot\_500\_ALL\] and \[fig:time\_maxprot\_500prot\_4000\_ALL\] support the predictions of CoHSI made earlier, i.e. that as the total number of proteins increases so does the size of the longest proteins. Could there be other explanations? Looking deeper, it is certainly possible that the results seen in Figs \[fig:time\_maxprot\_1prot\_150\_ALL\] - \[fig:time\_maxprot\_10000prot\_150\_ALL\] may simply reflect researcher bias, but we can test and resolve this by looking at the same evolutionary timeline for the different domains of life separately, and by using only the *reference proteomes* of Uniprot, thereby eliminating concerns about estimating the longest protein in a species’ proteome from an incomplete subset. Reference proteomes are complete sequenced proteomes for chosen species (around 10,000 currently for bacteria and eukaryota, of which 705 also have estimated times of evolutionary emergence). The results of merging the data on longest proteins with times of evolutionary emergence for these species with reference proteomes are shown in Figures \[fig:time\_maxprot\_500prot\_500\_BACTERIA\] and \[fig:time\_maxprot\_500prot\_500\_EUKARYOTA\] for the bacteria and eukaryota respectively.
[0.5]{}
{width="6cm"} \[fig:time\_maxprot\_500prot\_500\_BACTERIA\]
[0.5]{}
{width="6cm"} \[fig:time\_maxprot\_500prot\_500\_EUKARYOTA\]
Figures \[fig:time\_maxprot\_500prot\_500\_BACTERIA\] and \[fig:time\_maxprot\_500prot\_500\_EUKARYOTA\] present intriguing results and are consistent with our estimation method which led to Figs \[fig:time\_maxprot\_500prot\_500\_ALL\] and \[fig:time\_maxprot\_500prot\_4000\_ALL\]. Comparing the two figures makes it clear that the growth in longest proteins in recent evolutionary time is an exclusive property of the eukaryota, and is *not* shared by the bacteria. This seriously undermines the possibility of a consistent researcher-induced bias due to incompleteness.
However, it might also be said that since CoHSI predicts that the longest protein is simply a function of the total number of proteins, the more numerous bacteria (which represent a greater share of protein sequences, over 60% in TrEMBL version 18-02) should show a similar and perhaps even bigger growth in longest proteins than do the eukaryotes. CoHSI, however, also shows that the longest protein is a function not only of the total number of proteins *but also of the decay rate of its power-law, $\beta'$*. Thus, within the overall CoHSI distribution, we predict that the subset of bacterial proteins must have a value of $\beta'$ distinct from that of the eukaryotic protein distribution. How can we test this? We have discussed in previous work that the decay rate $\beta'$ is a complex function of the distribution of unique alphabets of amino acids found in each collection [@HattonWarr2017; @HattonWarr2018a], and thus we could falsify CoHSI if the two domains of life, bacteria and eukaryota, are *indistinguishable* in their average unique amino acid alphabets. This in turn would imply the same $\beta'$ in both domains, and therefore that we should have seen an even more emphatic growth in long proteins in bacteria than in the eukaryota, given the greater abundance of proteins in bacteria. Since this is clearly not the case, we test whether the eukaryota and bacteria do indeed have the same average unique amino acid alphabet.
To do this we take all bacterial and eukaryotic proteins greater than 1,000 amino acids in length, which places them squarely in the power-law tail, and compare their average unique amino acid alphabets using the better annotated SwissProt version 18-02, which has more detailed information on post-translationally modified (PTM) amino acids. PTM amino acids are of course instrumental in increasing the size of the unique alphabet [@HattonWarr2015], since there is an absolute maximum of 22 amino acids directly encoded by the genome. Both the Kolmogorov-Smirnov test ($D = 0.52633, p < 2.2 \times 10^{-16}$) and a Welch two sample t-test ($t = -85.813, df = 15838, p < 2.2 \times 10^{-16}$) emphatically reject the null hypothesis that the two samples are drawn from the same population. In fact, the average unique amino acid alphabet of the eukaryota exceeds that of the bacteria by around 5.8% (21.18 v. 20.02), which we interpret as showing the greater influence of PTM in the eukaryota.
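The Welch comparison can be reproduced in spirit on synthetic data; the means below are the ones quoted in the text, but the spreads and sample sizes are assumptions made purely for illustration, not the SwissProt values.

```python
import random
import statistics

random.seed(0)
# synthetic unique-alphabet sizes (means from the text; spreads assumed)
bact = [random.gauss(20.02, 0.8) for _ in range(5000)]
euk = [random.gauss(21.18, 0.8) for _ in range(12000)]

def welch_t(a, b):
    """Welch two-sample t statistic (unequal variances)."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (ma - mb) / se

t = welch_t(bact, euk)
print(f"Welch t = {t:.1f}")  # a large negative t: bacteria < eukaryota
```

Even with these invented spreads, a mean difference of about 1.16 alphabet letters at these sample sizes yields a t statistic far beyond any conventional significance threshold, consistent with the emphatic rejection reported above.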
We note in passing that, in contrast to the information theoretic explanation based on unique alphabet size that we have presented (and tested experimentally) above, a number of essentially biological arguments can also be adduced to explain the failure of the longest bacterial proteins to show an increase in size over evolutionary time. These arguments include, but are not limited to, consequences of the relatively small size and the highly crowded cytoplasm of typical bacteria [@McGuffee2010; @Yu2016], which impact properties including diffusivity, the frequency and strength of multiple weak interactions with neighboring macromolecules, and necessarily longer times for protein translation. However, since such arguments about the physicochemical properties of longer prokaryotic proteins are difficult to falsify in the Popperian sense, we feel it appropriate to reserve judgment on their possible relevance to the observations we have reported here.
We thus conclude this section by stating 1) that the CoHSI prediction of growth in the longest proteins in recent evolutionary time is entirely consistent with what we observe in the sequence databases. However, 2) our results suggest that the emergence of longer proteins is occurring predominantly in the most recently emerging eukaryotes; the converse of this is that once a species has emerged it is not likely to be the source of the emerging novel longest proteins. The concept that the evolutionary emergence of a taxon is associated with a lack of change at the molecular level is not borne out for mutation rates. Indeed, the concept that mutations accumulate at measurable rates (regardless of time of emergence) is the basis of the molecular clocks by which phylogenetic trees can be calibrated ([@Ho2014]). *This suggests that the emergence of novel very long proteins as predicted by CoHSI (and experimentally observed) operates in evolution independently of the well-understood processes of genetic mutation.*
CoHSI and the extinctions
-------------------------
Since our estimation method was essentially verified by using reference proteomes, we return to the discussion of Figures \[fig:time\_maxprot\_500prot\_500\_ALL\] and \[fig:time\_maxprot\_500prot\_4000\_ALL\]. Minima or even gaps in the plot of maximum length of proteins against evolutionary time are clearly visible. It is reasonable to question whether or not these correspond to known extinction events. An obvious example in Figure \[fig:time\_maxprot\_500prot\_4000\_ALL\] is the dip at 2,500 Mya that is coincident with the Great Oxygenation Event (https://en.wikipedia.org/wiki/Great\_Oxygenation\_Event, accessed 08-Oct-2018). A less obvious example is seen at around 1,700 Mya and this would be a candidate for a currently undocumented extinction event.
However, to see the more recent extinction events of the last 500 Mya, we can consider the analyses shown in Figure \[fig:boxplot\], where box and whisker plots of maximum protein lengths are given for all species within 10 Mya bins. This plot is annotated with the documented major and some minor extinction events: EO = Eocene-Oligocene, CP = Cretaceous-Paleogene, A = Aptian, EJ = End-Jurassic, TJ = Triassic-Jurassic, PT = Permian-Triassic, LD = Late-Devonian and OS = Ordovician-Silurian. Those annotated in red are the five conventionally recognized major extinctions. Each extinction is shown with a line underneath indicating its documented date. Although the indicated extinction dates are only approximate, there is an intriguing correspondence between these extinction events and a contemporaneous fall in the average longest protein across species. Whilst qualitative, the data are certainly not inconsistent with the CoHSI longest protein hypothesis, bearing in mind that the relationship between species number and protein numbers is not trivial, as we described earlier in section 5.3.2.
![\[fig:boxplot\]Boxplots of the maximum protein lengths for all species in 10 Mya bins, annotated with major extinctions in red and minor ones in blue. Boxplots show the interquartile range with the median marked as a line. The whiskers, where shown, indicate values within $1.5 \times$ the interquartile range; values outside this, known as outliers, are shown as circles.](boxplot_time.eps){width="100.00000%"}
Conclusions
===========
The most significant point emerging from this paper is that the asymptotic power-law nature of the CoHSI length distribution means that the longest protein of any collection of proteins (e.g. a single species’ proteome, a pan-proteome embracing multiple species, or the full collection of proteins in a database such as TrEMBL) is intimately and indeed simply related to the total number of proteins in that collection, through the parameter $\beta'$. In other words as the collection grows in the number of proteins, the longest protein to be found in that collection grows in an entirely predictable way. This process borrows nothing from evolution. It is an information theoretic property of any discrete system which obeys the CoHSI distribution and we know through various preceding papers [@HattonWarr2017; @HattonWarr2018a] that the known full collection of proteins in the TrEMBL and SwissProt databases very accurately obeys this length distribution. We explored the consequences of this in several ways.
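The scaling logic behind this claim can be illustrated with a toy calculation. This is not the CoHSI fit itself: for a tail $P(L > x) \sim (x/x_0)^{-\alpha}$, the typical longest of $N$ draws solves $N \cdot P(L > x_{max}) \sim 1$, giving $x_{max} \sim x_0 N^{1/\alpha}$. The exponent `alpha` and scale `x0` below are placeholder values, not fitted parameters.

```python
# Back-of-envelope power-law maximum: x_max ~ x0 * N**(1/alpha).
# alpha and x0 are illustrative placeholders, not CoHSI fit values.

def expected_longest(n_proteins, alpha=3.0, x0=1000.0):
    return x0 * n_proteins ** (1.0 / alpha)

for n in (1e7, 1e8, 1e9):
    print(f"N = {n:.0e}: typical longest ~ {expected_longest(n):,.0f} aa")
```

The qualitative point survives any particular choice of parameters: as the collection grows in the number of proteins, the expected longest protein grows predictably, with the growth rate governed by the tail decay.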
- Using this relationship and version 17-03 of the full TrEMBL distribution as a baseline, we have predicted under various scenarios how many proteins longer than some target figure are likely to appear in the months and years to come. For example, if the total number of proteins grows by a factor of 10 within 5-10 years (fairly likely given that sequences have probably been obtained from fewer than 0.2% of the current lifeforms on earth, and the known current rate of growth in TrEMBL), then we predict that approximately 100 proteins will be found longer than about 45,000 amino acids, either in existing species whose proteomes have not yet been fully sequenced, or in species yet to be sequenced. We also pointed out that current researcher bias seems to be most obvious with the eukaryota because, in the analysis using species as components and proteins as tokens, they depart from the predicted CoHSI asymptotic state (Fig \[fig:speciesproteins\_novirtual\]) by a larger amount than the archaea or bacteria. Researcher bias also currently appears to favor completing the full proteome sequencing of existing species rather than investigating new species.
- We then explored the degree to which sequence space has been populated by evolution on earth based on previous authors’ studies, and suggest that only a tiny percentage of possible space has indeed yet been explored, although admittedly we have no means of assessing the contribution of extinct species. Of course we can make no comment on how much more of this space has been populated by any extraterrestrial life with amino acid-based proteins, although we were able to give some insights as to how CoHSI and its backbone methodology from Statistical Mechanics explores the all-encompassing ergodic space.
- Finally, we merged phylogenetic data and TrEMBL data to explore an intriguing prediction by CoHSI that the longest proteins should have appeared recently in evolutionary history, and that there should be traces of the earth’s extinction events in these merged data. These two predictions were well supported by the merged data.
In summary, we have shown that another property of CoHSI systems, the inevitability of very long proteins, manifests itself in a number of interesting ways in real data. Each of these analyses further supports the fundamental role we believe CoHSI plays in setting bounds on the evolution of life.
[^1]: Emeritus Professor, Kingston University, KT1 2EE, U.K., [email protected]
[^2]: Emeritus Professor, Medical University of South Carolina, 96 Jonathan Lucas St, Charleston, SC 29425, USA, [email protected]
[^3]: https://home.cern/about/updates/2017/07/cern-data-centre-passes-200-petabyte-milestone, accessed 19-Oct-2018
[^4]: http://www.timetree.org/, accessed 07-Oct-2018
---
abstract: 'The latest results from the Double Chooz experiment on the neutrino mixing angle $\theta_{13}$ are presented. A detector located at an average distance of 1050 m from the two reactor cores of the Chooz nuclear power plant has accumulated a live time of 467.90 days, corresponding to an exposure of 66.5 GW-ton-year (reactor power $\times$ detector mass $\times$ live time). A revised analysis has boosted the signal efficiency and reduced the backgrounds and systematic uncertainties compared to previous publications, paving the way for the two detector phase. The measured $\sin^2 2\theta_{13} = 0.090^{+0.032}_{-0.029}$ is extracted from a fit to the energy spectrum. A deviation from the prediction above a visible energy of 4 MeV is found, being consistent with an unaccounted reactor flux effect, which does not affect the $\theta_{13}$ result. A consistent value of ${\theta_{13}}$ is measured in a rate-only fit to the number of observed candidates as a function of the reactor power, confirming the robustness of the result.'
address: 'Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, CIEMAT, 28040, Madrid, Spain'
author:
- 'J.I. Crespo-Anadón'
bibliography:
- 'DC.bib'
title: 'Double Chooz: Latest results'
---
reactor, neutrino, oscillation, $\theta_{13}$
Introduction {#sec:Introduction}
============
Neutrino oscillations in the standard three-flavor framework are described by three mixing angles, three mass-squared differences (two of which are independent) and one CP-violating phase. Excepting the phase which still remains unknown, all the other parameters have been measured [@PDG2014]. ${\theta_{13}}$ was the last to be measured by short-baseline reactor and long-baseline accelerator experiments [@DC2ndPub; @DCHPub; @DCRRMElsevier; @DC3rdPub; @DayaBayShape; @RENO; @MINOSTheta13; @T2KTheta13].
For the energies and distances relevant to Double Chooz, the oscillation probability is well approximated by the two-flavor case. Thus, the survival probability reads: $$P_{{\overline{\nu}_{e}}\to {\overline{\nu}_{e}}} = 1 - \sin^2 2{\theta_{13}}\sin^2 \left( 1.27 \frac{\Delta m^2_{31} [\text{eV}^{2}] L [\text{m}]}{E_{\nu}[\text{MeV}]} \right)$$ So ${\theta_{13}}$ can be measured from the deficit in the electron antineutrino flux emitted by the reactors. In this analysis, $\Delta m^2_{31} = 2.44^{+0.09}_{-0.10} \times 10^{-3} \text{\,eV}^2$, taken from [@MINOSDM231], assuming normal hierarchy.
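As a minimal numerical sketch of this formula, the survival probability can be evaluated at the far detector baseline; the function name and default values (taken from numbers quoted in this text) are our own.

```python
import math

def p_survival(E_nu_MeV, L_m, sin2_2theta13=0.090, dm2_31=2.44e-3):
    """Two-flavor anti-nu_e survival probability.
    dm2_31 in eV^2, L in m, E in MeV (units as in the text's formula)."""
    phase = 1.27 * dm2_31 * L_m / E_nu_MeV
    return 1.0 - sin2_2theta13 * math.sin(phase) ** 2

# Far detector baseline (~1050 m) at a typical reactor nu energy
print(p_survival(E_nu_MeV=4.0, L_m=1050.0))
```

At this baseline and energy the deficit is a few percent, which is why a precise flux prediction (or a near detector) is essential to resolve it.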
Antineutrinos are detected through the inverse beta-decay (IBD) process on protons, ${\overline{\nu}_{e}}+ p \to e^+ + n$, which provides two signals: a prompt signal in the range of 1 - 10 MeV is given by the positron kinetic energy and the resulting $\gamma$s from its annihilation. This visible energy is related to the ${\overline{\nu}_{e}}$ energy by $E_{vis} \approx E_{\nu} - 0.8 \text{\,MeV}$. A delayed signal is given by the $\gamma$s released in the radiative capture of the neutron by a Gd or H nucleus. The results presented here correspond only to captures in Gd, which occur after a mean time of 31.1 $\mu$s and release a total energy of 8 MeV, which is far above the natural radioactivity energies. The coincidence of these two signals grants the experiment a powerful background suppression.
The Double Chooz experiment {#sec:DoubleChooz}
===========================
Double Chooz (DC) is a 2-detector experiment located in the surroundings of the Chooz nuclear power plant (France), which has two pressurized water reactor cores producing 4.25 $\rm{GW_{th}}$ each. The Near Detector (ND), placed at $\sim 400$ m from the cores, has a 120 m.w.e. overburden and is currently being commissioned. The Far Detector (FD), placed at $\sim 1050$ m from the cores, has a 300 m.w.e. overburden and its data are used here. The 2-detector concept allows ${\theta_{13}}$ to be extracted with high precision from the relative comparison of the ${\overline{\nu}_{e}}$ flux at the two detectors. Because the detectors are built to be identical, all the uncertainties correlated between them cancel.
Since the ND was not operative for this analysis yet, an accurate reactor flux simulation was needed to obtain the ${\overline{\nu}_{e}}$ prediction. Électricité de France provides the instantaneous thermal power of each reactor, and the location and initial composition of the reactor fuel. The simulation of the evolution of the fission rates and the associated uncertainties is done with `MURE` [@MURE1; @MURE2], which has been benchmarked with another code [@DRAGON]. The reference ${\overline{\nu}_{e}}$ spectra for $\rm{{}^{235}U}$, $\rm{{}^{239}Pu}$ and $\rm{{}^{241}Pu}$ are computed from their $\beta$ spectrum [@ILL1; @ILL2; @ILL3], while [@Haag238U] is used for $\rm{{}^{238}U}$ for the first time. The short-baseline Bugey4 ${\overline{\nu}_{e}}$ rate measurement [@Bugey4] is used to suppress the normalization uncertainty on the ${\overline{\nu}_{e}}$ prediction, correcting for the different fuel composition in the two experiments. The systematic uncertainty on the ${\overline{\nu}_{e}}$ rate amounts to 1.7%, dominated by the 1.4% of the Bugey4 measurement. Had the Bugey4 measurement not been included, the uncertainty would have been 2.8%.
![Double Chooz far detector design.[]{data-label="fig:Detector"}](./figures/detectorDesign.pdf){width="50.00000%"}
The DC detector is composed of four concentric cylindrical vessels (see figure \[fig:Detector\]). The innermost volume, the $\nu$-target (NT), is an 8 mm thick acrylic vessel (transparent from UV to visible) filled with 10.3 $\rm{m^3}$ of liquid scintillator loaded with Gd (1 g/l) to enhance the neutron captures. The $\gamma$-catcher (GC), a 55 cm thick layer of liquid scintillator (Gd-free) enclosed in a 12 mm thick acrylic vessel, surrounds the NT to maximize the energy containment. Surrounding the GC is the buffer, a 105 cm thick layer of mineral oil (non-scintillating) contained in a stainless steel tank where 390 low background 10-inch photomultiplier tubes (PMT) are installed, and which shields from the radioactivity of the PMTs and the surrounding rock. The elements described so far constitute the inner detector (ID). Enclosing the ID and optically separated from it, the inner veto (IV), a 50 cm thick layer of liquid scintillator, serves as a cosmic muon veto and as an active shield against incoming fast neutrons, observed by 78 8-inch PMTs positioned on its walls. A 15 cm thick demagnetized steel shield protects the whole detector from external $\gamma$-rays. The outer veto (OV), two orthogonally aligned layers of plastic scintillator strips placed on top of the detector, allows a 2D reconstruction of impinging muons. An upper OV covers the chimney, which is used for filling the volumes and for the insertion of calibration sources (encapsulated radioactive sources of $^{137}$Cs, $^{68}$Ge, $^{60}$Co and ${\rm{{}^{252}Cf}}$, and a laser). Attached to the ID and IV PMTs, a multi-wavelength LED-fiber light injection system is used to periodically calibrate the readout electronics.
Waveforms from all ID and IV PMTs are digitized and recorded by dead-time free flash-ADC electronics.
DC has pioneered the measurement of ${\theta_{13}}$ using the ${\overline{\nu}_{e}}$ spectral information thanks to its exhaustive treatment of the energy scale, which is applied in parallel to the recorded data and the Monte Carlo (MC) simulation. A linearized photoelectron (PE) calibration produces a PE number in each PMT, corrected for dependencies on the gain non-linearity and time. A uniformity calibration corrects for the spatial dependence of the PE, equalizing the response within the detector. The conversion from PE to energy units is obtained from the analysis of neutron captures on H from a ${\rm{{}^{252}Cf}}$ calibration source deployed at the center of the detector. A stability calibration is applied to the data to remove the remaining time variation by analyzing the evolution of the H capture peak from spallation neutrons; this is also crosschecked at different energies using the Gd capture peak and the $\alpha$ decays of $\rm{{}^{212}Po}$. Two further calibrations are applied to the MC to correct for the energy non-linearity relative to the data: the first is applied to every event and arises from the modeling of the readout systems and the charge integration algorithm; the second, applied only to positrons, is associated with the scintillator modeling. The total systematic uncertainty in the energy scale amounts to 0.74%, improving on the previous one [@DC2ndPub] by a factor of 1.5.
Neutrino selection {#sec:Selection}
==================
The minimum energy for a selected event is $E_{vis} > 0.4 \text{\,MeV}$, where the trigger is already 100% efficient. Events with $E_{vis} > 20 \text{\,MeV}$ or $E_{IV} > 16 \text{\,MeV}$ are rejected and tagged as muons, imposing a 1 ms veto after them to also reject muon-induced events. *Light noise* is a background caused by spontaneous light emission from some PMT bases, and it is avoided by requiring the selected events to satisfy all of the following cuts: i) $q_{max}$, the maximum charge recorded by a PMT, must be less than or equal to 12% of the total charge of the event; ii) $1/N \times \sum\nolimits_{i = 0}^{N} (q_{max} - q_i)^2/q_i < 3 \times 10^4 \text{\,charge units}$, where $N$ is the number of PMTs located at less than 1 m from the PMT with the maximum charge; iii) $\sigma_t < 36 \text{\,ns}$ or $\sigma_q > (464 - 8\sigma_t)\text{\, charge units}$, where $\sigma_t$ and $\sigma_q$ are the standard deviations of the PMT hit time and integrated charge distributions, respectively. Events passing the previous cuts are used to search for coincidences, which must satisfy the following conditions: the prompt $E_{vis}$ must be in $(0.5, 20){\text{\,MeV}}$, the delayed $E_{vis}$ in $(4, 10) {\text{\,MeV}}$, the correlation time between the signals must be in $(0.5, 150) \,\mu$s, and the distance between reconstructed vertex positions must be less than 1 m. In addition, only the delayed signal can be in a time window spanning 200 $\mu$s before and 600 $\mu$s after the prompt signal.
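The three light-noise cuts can be rendered schematically as a single predicate. The event fields and example numbers below are hypothetical summary quantities, not the experiment's data format; the thresholds are those quoted above (charge in the paper's charge units, $\sigma_t$ in ns).

```python
# Schematic light-noise rejection, following the three cuts in the text.

def passes_light_noise_cuts(ev):
    # i) max single-PMT charge <= 12% of total event charge
    if ev["q_max"] > 0.12 * ev["q_tot"]:
        return False
    # ii) mean of (q_max - q_i)^2 / q_i over PMTs within 1 m of the
    #     max-charge PMT must stay below 3e4 charge units
    neighbours = ev["neighbour_charges"]
    spread = sum((ev["q_max"] - q) ** 2 / q for q in neighbours) / len(neighbours)
    if spread >= 3e4:
        return False
    # iii) sigma_t < 36 ns  OR  sigma_q > 464 - 8 * sigma_t
    return ev["sigma_t"] < 36.0 or ev["sigma_q"] > 464.0 - 8.0 * ev["sigma_t"]

event = {"q_max": 50.0, "q_tot": 1000.0,
         "neighbour_charges": [45.0, 48.0, 52.0],
         "sigma_t": 20.0, "sigma_q": 100.0}
print(passes_light_noise_cuts(event))  # True: this toy event passes all cuts
```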
Background measurement and vetoes {#sec:Background}
=================================
The backgrounds are non-neutrino processes which mimic the characteristic coincidence of the IBD.
#### Cosmogenic isotopes
Unstable isotopes are produced by spallation of nuclei inside the detector by cosmic muons. Products such as ${\rm{{}^{9}Li}}$ and ${\rm{{}^{8}He}}$ have a decay mode in which a neutron is emitted along with an electron, indistinguishable from an IBD interaction. Moreover, the lifetimes of ${\rm{{}^{9}Li}}$ and ${\rm{{}^{8}He}}$ are 257 ms and 172 ms, respectively, so the 1 ms after-muon veto is not effective. A cut on a likelihood based on the event distance to the muon track and the number of neutron candidates following the muon within 1 ms allows 55% of ${\rm{{}^{9}Li}}$ and ${\rm{{}^{8}He}}$ to be rejected. The ${\rm{{}^{9}Li}}/{\rm{{}^{8}He}}$ contamination is determined from fits to the time correlation between the IBD candidates and the previous muon. The estimate of the remaining ${\rm{{}^{9}Li}}/{\rm{{}^{8}He}}$ background in the IBD candidate sample is $0.97^{+0.41}_{-0.16}$ events/day. The events vetoed by the likelihood cut are used to build the prompt energy spectrum (see figure \[fig:BGRSFit\]), which also includes captures on H to enhance the statistics.
#### Fast neutrons and stopping muons
Fast neutrons originating from spallation by muons in the surrounding rock can enter the detector and reproduce the IBD signature by producing recoil protons (prompt signal) before being captured (delayed signal). Stopping muons are muons which stop inside the detector, giving the prompt signal, and then decay, producing a Michel electron that fakes the delayed signal. In order to reject this background, events fulfilling at least one of the following conditions are discarded: (i) events with an OV trigger coincident with the prompt signal; (ii) events whose delayed signal is not consistent with a point-like vertex inside the detector; (iii) events in which the IV shows activity correlated with the prompt signal. The three vetoes together reject 90% of the events with a prompt $E_{vis} > 12 {\text{\,MeV}}$, where this background is dominant. The veto (iii) is used to extract the fast neutron/stopping muon prompt energy spectrum, which is found to be flat. This shape is further confirmed by using the other vetoes. The rate of this background in the candidate sample is estimated from an IBD-like coincidence search in which the prompt signal has an energy in the $(20, 30){\text{\,MeV}}$ region, and it amounts to $0.604\pm 0.051$ events/day.
#### Accidental background
These are random coincidences of two triggers satisfying the selection criteria. Because of their random nature, their rate and spectrum (see figure \[fig:BGRSFit\]) can be studied with great precision from the data by an off-time coincidence search, identical to the IBD selection except for the correlated time window, which is opened more than 1 s after the prompt signal. The use of multiple windows allows high statistics to be collected. The background rate is measured to be $0.0701 \pm 0.0003 \text{\,(stat)} \pm 0.026 \text{\,(syst)}$ events/day.
Other backgrounds, such as the $^{13}$C($\alpha$, n)$^{16}$O reaction or the $^{12}$B decay, were considered but they were found to have negligible occurrence. Table \[table:BGSummary\] summarizes the estimated background rates and the reduction with respect to the previous publication [@DC2ndPub].
Background Rate (d$^{-1}$) [@DC3rdPub]/[@DC2ndPub]
------------------------------- ------------------------ -------------------------
$^{9}$Li/$^{8}$He $0.97^{+0.41}_{-0.16}$ 0.78
Fast-n/stop-$\mu$ $0.604 \pm 0.051$ 0.52
Accidental $0.070 \pm 0.003$ 0.27
$^{13}$C($\alpha$, n)$^{16}$O $< 0.1$ N/A in [@DC2ndPub]
$^{12}$B $< 0.03$ N/A in [@DC2ndPub]
: Summary of background rate estimations. [@DC3rdPub]/[@DC2ndPub] shows the reduction of the background rate in [@DC3rdPub] with respect to the previous publication [@DC2ndPub], after correcting for the different prompt energy range.[]{data-label="table:BGSummary"}
IBD detection efficiency {#sec:Efficiency}
========================
A dedicated effort was carried out to decrease the detection efficiency uncertainty. This signal normalization uncertainty is dominated by the neutron detection uncertainty, which has been reduced from 0.96% in [@DC2ndPub] to the current 0.54% in [@DC3rdPub]. This was achieved thanks to the reduction of the volume-wise selection systematic uncertainty by using two new methods to estimate the neutron detection efficiency in the full Target. The first one uses the neutrons produced by the IBD interactions, which are homogeneously distributed in the detector, to produce a direct measurement of the volume-wide efficiency. The second method exploits the symmetry shown by the neutron detection efficiency, in which the data from the ${\rm{{}^{252}Cf}}$ source deployed along the vertical coordinate can be extrapolated to the radial coordinate. Another reduction was obtained on the uncertainty arising from the spill-in/spill-out currents (neutron migration into and out of the NT, respectively), which are sensitive to the low energy neutron physics. It was decreased by comparing the custom DC `Geant4` simulation, which includes an analytical modeling of the impact of the molecular bonds on low energy neutrons, to `Tripoli4`, a MC code with a specially accurate model of low energy neutron physics.
After accounting for the uncertainties introduced by the background vetoes and the scintillator proton number, the detection-related normalization uncertainty totals 0.6%.
Oscillation analyses {#sec:Fit}
====================
In a live-time of 460.67 days with at least one reactor running, 17351 IBD candidates were observed. The prediction, including backgrounds, in case of no oscillation was $18290^{+370}_{-330}$. The deficit is understood as a consequence of neutrino oscillation. In addition, a live-time of 7.24 days with the two reactors off was collected [@DCOffOff], in which 7 IBD candidates were observed, whereas the prediction including the residual ${\overline{\nu}_{e}}$ was $12.9^{+3.1}_{-1.4}$. The reactor-off measurement allows to test the background model and constrain the total background rate in the oscillation analysis. It is a unique advantage of DC, which has only two reactors.
The normalization uncertainties of the signal and the background are summarized in table \[table:RateError\], showing also the improvement with respect to the previous analysis [@DC2ndPub].
Source Uncertainty (%) [@DC3rdPub]/[@DC2ndPub]
---------------------- ------------------ -------------------------
Reactor flux 1.7 1.0
Detection efficiency 0.6 0.6
$^{9}$Li/$^{8}$He $+1.1$ / $-0.4$ 0.5
Fast-n/stop-$\mu$ 0.1 0.2
Statistics 0.8 0.7
Total $+ 2.3$ / $-2.0$ 0.8
: Signal and background normalization uncertainties relative to the signal prediction. [@DC3rdPub]/[@DC2ndPub] shows the reduction of the uncertainty with respect to the previous publication [@DC2ndPub].[]{data-label="table:RateError"}
Reactor rate modulation analysis {#subsec:RRMFit}
--------------------------------
![Observed versus expected candidate daily rates for different reactor powers. The prediction under the null oscillation hypothesis (dotted line) and the best fit with the background rate constrained by its uncertainty (blue dashed line) are shown. The first point corresponds to the reactor-off data.[]{data-label="fig:RRMFit"}](./figures/RRMFit.pdf){width="45.00000%"}
From the linear correlation existing between the observed and the expected candidate rates at different reactor conditions, a fit to a straight line determines simultaneously ${\sin^{2}2\theta_{13}}$ (proportional to the slope) and the total background rate $B$ (intercept) [@DCRRMElsevier]. Including the prediction of the total background $B = 1.64^{+0.41}_{-0.17} \text{\,events/day}$, the best fit is found at ${\sin^{2}2\theta_{13}}= 0.090^{+0.034}_{-0.035}$ and $B = 1.56^{+0.18}_{-0.16} \text{\,events/day}$ (see figure \[fig:RRMFit\]).
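The core of the reactor rate modulation fit, a straight-line fit of observed versus expected rates whose intercept estimates the total background, can be sketched with invented data points (the real analysis is a $\chi^2$ fit with uncertainties, not the unweighted least squares shown here).

```python
# Toy RRM fit: observed rate vs expected no-oscillation rate.
# The slope deficit carries sin^2(2*theta13); the intercept is B.
# Data points are fabricated to lie on y = 1.5 + 0.94 * x.

def fit_line(xs, ys):
    """Ordinary least-squares straight line; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

expected = [0.0, 10.0, 20.0, 30.0, 40.0]   # events/day, no oscillation
observed = [1.5, 10.9, 20.3, 29.7, 39.1]   # toy observed rates

slope, B = fit_line(expected, observed)
print(f"slope = {slope:.3f}, B = {B:.2f} events/day")
# slope = 0.940, B = 1.50 events/day
```

The reactor-off point (expected rate near zero) is what pins down the intercept, which is why it constrains the background model so effectively.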
A background model independent measurement of ${\theta_{13}}$ is possible when the background constraint is removed and $B$ is treated as a free parameter. The best fit ($\chi^2_{min}/d.o.f. = 1.9/5$) corresponds to ${\sin^{2}2\theta_{13}}= 0.060 \pm 0.039$ and $B = 0.93^{+0.43}_{-0.36} \text{\,events/day}$, consistent with the background-constrained fit.
The impact of the reactor-off data is tested by removing the reactor-off point (with the background rate still unconstrained). In this case, the best fit ($\chi^2_{min}/d.o.f. = 1.3/4$) gives ${\sin^{2}2\theta_{13}}= 0.089 \pm 0.052$ and $B = 1.56 \pm 0.86 \text{\,events/day}$, which confirms the improvement granted by the reactor-off measurement.
Rate + shape analysis {#subsec:RSFit}
---------------------
This analysis measures ${\sin^{2}2\theta_{13}}$ by minimizing a ${\chi^{2}}$ in which the prompt energy spectrum of the observed IBD candidates and the prediction are compared. A covariance matrix accounts for the statistical and systematic (reactor flux, MC normalization, ${\rm{{}^{9}Li}}/\rm{{}^8He}$ spectrum shape, accidental statistical) uncertainties in each bin and the bin-to-bin correlations. A set of nuisance parameters accounts for the other uncertainty sources: $\Delta m^2_{31}$, the number of residual ${\overline{\nu}_{e}}$ when reactors are off ($1.57 \pm 0.47$ events), the ${\rm{{}^{9}Li}}/\rm{{}^8He}$ and fast neutron/stopping muon rates, the systematic component of the uncertainty on the accidental background rate, and the energy scale. The best fit ($\chi^2_{min}/d.o.f. = 52.2/40$) is found at ${\sin^{2}2\theta_{13}}= 0.090^{+0.032}_{-0.029}$ (see figures \[fig:BGRSFit\],\[fig:RSFit\]).
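The structure of such a rate + shape $\chi^{2}$ can be sketched as below: binned data are compared to a prediction through a covariance matrix, with a Gaussian pull for a nuisance parameter (here a single background-rate scale). The bin contents, covariance construction, and the averaged-deficit oscillation model are toy stand-ins, not the actual Double Chooz inputs.

```python
# Toy rate+shape chi^2: (d - m)^T C^-1 (d - m) + pull terms.
import numpy as np
from scipy.optimize import minimize

data = np.array([120., 98., 75., 50., 30.])
pred_no_osc = np.array([128., 104., 80., 52., 31.])
bkg_shape = np.array([1.0, 0.8, 0.5, 0.3, 0.2])   # toy background spectrum
# stat (diagonal) + fully correlated 2% normalization uncertainty
cov = np.diag(data) + 0.02**2 * np.outer(pred_no_osc, pred_no_osc)
cov_inv = np.linalg.inv(cov)

def chi2(params):
    s2t, b = params                                # sin^2(2theta13), bkg scale
    survival = 1.0 - 0.8 * s2t                     # toy averaged deficit
    model = survival * pred_no_osc + b * bkg_shape
    r = data - model
    return r @ cov_inv @ r + ((b - 1.0) / 0.2) ** 2   # pull on background

res = minimize(chi2, x0=[0.1, 1.0], bounds=[(0, 1), (0, 3)])
print("best fit:", res.x, "chi2_min:", res.fun)
```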
![Measured prompt energy spectrum (black points with statistical error bars), superimposed on the no-oscillation prediction (blue dashed line) and on the best fit (red solid line), with the stacked best-fit backgrounds added.[]{data-label="fig:BGRSFit"}](./figures/BackgroundsRSFit.pdf){width="50.00000%"}
![Top: Measured prompt energy spectrum with best-fit backgrounds subtracted (black points with statistical error bars) superimposed on the no-oscillation prediction (blue dashed line) and on the best fit (red solid line). Bottom: Ratio of data to the no-oscillation prediction (black points with statistical error bars) superimposed on the best fit ratio (red solid line). The gold band represents the systematic uncertainty on the best-fit prediction.[]{data-label="fig:RSFit"}](./figures/RSFit.pdf){width="50.00000%"}
In addition to the oscillation-induced deficit visible in the bottom panel of figure \[fig:RSFit\], a spectrum distortion is observed above 4 MeV. The excess has been found to be proportional to the reactor power, disfavoring a background origin. Considering only the IBD interaction, the structure is consistent with an unaccounted-for reactor ${\overline{\nu}_{e}}$ flux effect, which does not significantly affect the ${\theta_{13}}$ measurement, as demonstrated by the good agreement with the shape-independent reactor rate modulation result. The existence of this distortion has later been confirmed by the Daya Bay and RENO reactor experiments.
![Double Chooz projected sensitivity using the IBD neutrons captured in Gd. The previous analysis, [@DC2ndPub] with only the FD (black dashed line) and adding the ND (black solid line), and the current analysis, with only the FD (blue dashed line) and adding the ND (blue solid line), are shown. The shaded region represents the range of improvement expected by reducing the systematic uncertainty, bounded from below by considering only the reactor systematic uncertainty.[]{data-label="fig:sensitivity"}](./figures/sensitivity.pdf){width="50.00000%"}
Figure \[fig:sensitivity\] shows the projected sensitivity of the Rate + Shape analysis using the IBD neutrons captured in Gd. A $0.2\%$ relative detection efficiency uncertainty is assumed, the remnant expected after the cancellation of the correlated detection uncertainties thanks to the use of identical detectors. The portion of the reactor flux uncertainty uncorrelated between detectors is $0.1\%$ (thanks to the simple experimental setup with two reactors). Backgrounds in the Near Detector are scaled from the Far Detector accounting for the different muon flux. Comparing the curves from the previous [@DC2ndPub] and the current analysis [@DC3rdPub], the improvement gained with the new techniques is clear, and further gains are expected (e.g., the systematic uncertainty on the background rate is currently limited by statistics).
Conclusion {#sec:Conclusion}
==========
Double Chooz has presented improved measurements of ${\theta_{13}}$ corresponding to 467.90 days of live-time of a single detector using the neutrons captured in Gd. The most precise value is extracted from a fit to the observed positron energy spectrum: ${\sin^{2}2\theta_{13}}= 0.090^{+0.032}_{-0.029}$. A consistent result is found by a fit to the observed candidate rates at different reactor powers: ${\sin^{2}2\theta_{13}}= 0.090^{+0.034}_{-0.035}$. A distortion in the spectrum is observed above 4 MeV, with an excess correlated to the reactor power. It has no significant impact on the ${\theta_{13}}$ result.
As a result of the improved analysis techniques, Double Chooz will reach a 15% precision on ${\sin^{2}2\theta_{13}}$ in 3 years of data taking with two detectors, with the potential to improve to 10%.
---
abstract: 'The electrical resistivity of a single crystal of MnSi was measured across its ferromagnetic phase transition line at ambient and high pressures. Sharp peaks of the temperature coefficient of resistivity characterize the transition line. Analysis of these data shows that at pressures up to $\sim0.35$ GPa these peaks have fine structure, revealing a shoulder at $\sim0.5$ K above the peak. It is symptomatic that this structure disappears at pressures higher than $\sim0.35$ GPa, which was identified earlier as a tricritical point.'
author:
- 'Alla E. Petrova'
- Eric Bauer
- Vladimir Krasnorussky
- 'Sergei M. Stishov'
title: Peculiar behavior of the electrical resistivity of MnSi at the ferromagnetic phase transition
---
The intermetallic compound MnSi experiences a second order phase transition at a temperature $T_{c}$ slightly below 30 K, acquiring a helical magnetic structure and becoming a weak itinerant ferromagnet. On application of pressure the transition temperature $T_{c}$ decreases and tends to zero at a pressure of about 1.4 GPa [@1]. As was first noticed in ref. [@2] (see also [@3]), the $\lambda$-type singularity of the AC magnetic susceptibility $\chi_{AC}$ at the phase transition in MnSi deforms gradually with pressure and becomes a simple step at pressures above 1 GPa. This observation was the basis for claiming the existence of a tricritical point with the coordinates $\sim1.2$ GPa, $\sim12$ K [@2; @3]. This conclusion was partly disputed in ref. [@4], where new measurements of $\chi_{AC}$ of MnSi at high pressures, created by compressed helium, were reported. These authors [@4] confirmed the existence of a tricritical point on the phase transition line in MnSi but placed it at a much lower pressure and a significantly higher temperature ($P_{tr}\cong0.355$ GPa, $T_{tr}\cong25.2$ K).
To resolve this somewhat controversial issue we have carried out precise resistivity measurements of a MnSi single crystal across the phase transition line at ambient and at high pressures, using a compressed helium technique. The primary purpose was to study the behavior of the temperature coefficient of resistivity $d\rho/dT$ at the transition line. According to theoretical work [@5; @6; @7], the temperature coefficient of resistivity diverges at a second order magnetic phase transition and can be characterized by a static critical exponent. Contrary to our expectations, we found that the peaks in $d\rho/dT$ at $T_{c}$ are accompanied by a well-defined shoulder on their high temperature side, which vanishes on approaching a pressure of $\sim0.35$ GPa. This finding correlates nicely with corresponding features in ultrasound attenuation [@8], thermal expansion [@9], and heat capacity [@10], discovered in the critical region of MnSi at ambient pressure.
The single crystal of MnSi was grown from a tin flux by dissolving pre-alloyed Mn and Si in excess Sn. For the resistivity measurements, four Pt wires of 25-$\mu$m diameter were welded to the crystal, whose dimensions were about $0.5{\times}0.3{\times}0.3$ mm$^{3}$. The temperature of the magnetic phase transition $T_{c}$ and the resistivity ratio $R_{300}/R_{(T=2.1)}$, taken at ambient pressure, are equal to $29.25{\pm}0.02$ K and $\approx100$, respectively. The crystal was placed into a high pressure cell made of beryllium copper. Fluid and solid helium were used as the pressure medium. Temperature was measured by a calibrated Cernox sensor, embedded in the cell body, with an accuracy of about 0.05 K. A calibrated manganin gauge was used to measure pressure with an accuracy of about $10^{-3}$ GPa in the fluid helium domain. In the domain of solid helium, pressure was calculated on the basis of the measured helium-crystallization temperatures and data for the equation of state of helium. The accuracy of the pressure measurements in solid helium is estimated as $5{\times}10^{-3}$ GPa. The resistivity was measured by a four-terminal DC method. The experimental setup, including the high pressure gas installation and the cryostat, is described in [@4; @11].
The resistivity measurements of MnSi were carried out along 24 quasi-isobars [@Iso] in the pressure range from zero to 1.5 GPa. Selected experimental data are displayed in Fig.\[fig1\]. We have tried to describe the resistivity curves in the temperature range from zero to the phase transition region by various polynomials containing the potentially important $T^{2}$ and/or $T^{5}$ terms, which account for scattering by spin and density fluctuations (phonons) [@12; @13]. The overall results appeared quite satisfactory, though we observed small but systematic deviations of the experimental data points from the corresponding approximations at low temperatures. Replacing the $T^{2}$ term with $T^{n}$ improves the situation but does not correct it entirely, and always leads to a value of $n<2$. On the other hand, as is seen in Fig.\[fig1\], the pressure derivatives of the resistivity are positive below the Curie point and negative above it (see also [@3]). This implies a dominant role of order parameter fluctuations in the electron scattering in MnSi. Hence, any analysis of the resistivity behavior in MnSi should take this significant factor into account. We will discuss this issue elsewhere. However, it is important to emphasize here that the residual resistivity of MnSi, derived from reasonable extrapolations, decreases monotonically from 2.25 to 2.11 $\mu\Omega$cm on compression over the whole pressure range studied. This indicates that many cycles of pressure loading and unloading, cooling and warming do not introduce additional defects into the sample. The temperature-dependent resistivity of MnSi above the phase transition line shows clear signs of resistivity saturation as $T\rightarrow\infty$ [@14].
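The low-temperature fit described above can be sketched as follows, using synthetic data in place of the measured isobars (the generated exponent $n=1.8$ and all amplitudes are invented for illustration).

```python
# Sketch of a low-temperature resistivity fit rho(T) = rho0 + A*T**n,
# to be compared with the Fermi-liquid form n = 2. Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
T = np.linspace(2.0, 20.0, 60)                       # K
rho_true = 2.2 + 0.02 * T**1.8                       # micro-ohm cm (toy)
rho = rho_true + rng.normal(0, 0.005, T.size)

def model(T, rho0, A, n):
    return rho0 + A * T**n

popt, pcov = curve_fit(model, T, rho, p0=[2.0, 0.01, 2.0])
rho0, A, n = popt
print(f"rho0 = {rho0:.3f} micro-ohm cm, n = {n:.2f}")
```

A $T^{5}$ term for phonon scattering can be added to `model` in the same way when the data extend to higher temperatures.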
![\[fig1\] (Color online) Temperature dependence of the electrical resistivity ${\rho(T)}$ of MnSi at different pressures. The isobars correspond to pressures in GPa: 0, 0.2, 0.32, 0.43, 0.54, 0.63, 0.7, 0.885, 1.13, counting from the right to the left at the bottom of the figure. ](fig1.eps){width="80mm"}
Now we turn to an analysis of the temperature coefficient of resistivity $d\rho/dT$ in the vicinity of the phase transition boundary. Temperature derivatives of the resistivity $\rho$ were obtained by averaging the slopes between adjacent points of the raw experimental data. The outcome of this procedure is illustrated in Fig.\[fig2\], where the smoothing lines are also shown. As is seen from the figure, at ambient pressure the curve ${d\rho\over
dT}(T)$ has a distinct shoulder on the high temperature side of $T_{c}$ which disappears at high pressure. The evolution of the shape of the peaks of $d\rho/dT$ with applied pressure is shown in Fig.\[fig3\]. The shoulder in ${d\rho\over dT}(T)$ vanishes at a pressure of around 0.35 GPa, which was recognized earlier as the coordinate of the tricritical point [@4]. The overall trend is that at low pressure the structure in $d\rho/dT$ consists of two components, one sharp and one broad, separated by only half a degree or so. Because of the lack of *a priori* knowledge of the peak shapes and the uncertainty in the background subtraction, we could not separate these peaks in a reliable way. The obvious overlap of the peaks also makes unreliable any attempt to obtain a critical exponent from the behavior of $d\rho/dT$ [@5; @6; @7]. Nevertheless, we have found that an approximation of $d\rho/dT$ at $T<T_{c}$ with the expression $${d\rho \over dT}=a+bT+c(T_{c}-T)^{-m}$$ gives $m\approx0.25$ for the low pressure isobars, which is a reasonable value for an exponent characterizing the critical behavior of the heat capacity near helical spin ordering [@15; @16]. At pressures above 0.3-0.4 GPa, the fitting became unstable and did not lead to realistic values of the exponents.
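A minimal numerical sketch of the two steps above, the adjacent-point derivative and the critical fit, with noiseless synthetic data standing in for the measurements (the generated exponent and amplitudes are arbitrary):

```python
# (i) d(rho)/dT from slopes of adjacent data points;
# (ii) fit of d(rho)/dT = a + b*T + c*(Tc - T)**(-m) below Tc.
import numpy as np
from scipy.optimize import curve_fit

Tc = 29.25                                     # K, ambient-pressure value
T = np.linspace(20.0, 29.0, 200)
m_true = 0.25
drho = 0.1 + 0.002 * T + 0.05 * (Tc - T) ** (-m_true)   # toy d(rho)/dT

# build a toy rho(T) consistent with drho, then re-differentiate as in the text
rho = np.concatenate(([0.0],
                      np.cumsum(0.5 * (drho[1:] + drho[:-1]) * np.diff(T))))
drho_num = np.diff(rho) / np.diff(T)           # adjacent-point slopes
T_mid = 0.5 * (T[1:] + T[:-1])

def model(T, a, b, c, m):
    return a + b * T + c * (Tc - T) ** (-m)

popt, _ = curve_fit(model, T_mid, drho_num, p0=[0.1, 0.002, 0.05, 0.3])
print(f"critical exponent m = {popt[3]:.3f}")
```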
![\[fig2\] (Color online) Examples of temperature derivatives of resistivity $d\rho/dT$ at ambient and elevated pressures. The square dots are the temperature derivatives of resistivity, taken by averaging the slopes of two adjacent points of the raw experimental data. The solid lines are results of smoothing procedures. ](fig2.eps){width="80mm"}
![\[fig3\] (Color online) Evolution of temperature derivatives of resistivity $d\rho/dT$ with pressure. The pressures in GPa are shown at the left side of the figure. ](fig3.eps){width="80mm"}
![\[fig4\] (Color online) Pressure dependence of the Curie temperature of MnSi according to the current resistivity measurements and the AC susceptibility data [@4]. The inset shows that the average mismatch of the two sets of the data is less than 0.1K. ](fig4.eps){width="80mm"}
Summarizing, we point out that the reported experimental data demonstrate complicated behavior of the temperature coefficient of resistivity of MnSi in the vicinity of its phase transition. As is seen from Fig.\[fig3\], $d\rho/dT$ evolves from a highly asymmetric, not quite resolved doublet with one rather sharp component at ambient pressure to a single, fairly symmetric peak at the pressure corresponding to the tricritical point. As mentioned earlier, a doublet structure of related peaks was discovered in sound absorption [@8], thermal expansion [@9], and heat capacity [@10] at the phase transition in MnSi, which correlates with the current observations. Unfortunately, little is known about the origin of this structure, but what we do know is that the high temperature satellite does not reveal itself in magnetic susceptibility measurements [@3; @4]. The data comparison shows that the magnetic transition is associated with the sharp peak on the low temperature side of $d\rho/dT$ (Fig.\[fig4\]). Thus, the observed shoulder in $d\rho/dT$ could be connected with short-range spin order or with the spin texture [@18; @19]. However, it does not appear that the shoulder in $d\rho/dT$ marks any kind of conventional phase transition. Nevertheless, one cannot exclude that a topological phase transition takes place at a temperature above the magnetic transformation. In the latter case, instead of a tricritical point there would be a special kind of multicritical point in the phase diagram of MnSi. But if the scenario with a topological phase transition is not appropriate, then the shoulder in $d\rho/dT$ disappears, being absorbed by the volume instability gap, which opens at a tricritical point [@20].
Authors express their gratitude to Vladimir Sidorov for technical assistance and to J. D. Thompson for reading the manuscript and valuable remarks. A.E. Petrova, V. Krasnorussky and S.M. Stishov appreciate support of the Russian foundation for Basic Research (grant 06-02-16590), Program of the Physics Department of Russian Academy of Science on Strongly Correlated Systems and Program of the Presidium of Russian Academy of Science on Physics of Strongly Compressed Matter. Work at Los Alamos was performed under the auspices of the US Department of Energy, Office of Science.
[99]{}
J.D. Thompson, Z. Fisk, and G.G. Lonzarich, Physica B [**161**]{}, 317 (1989)

C. Pfleiderer, G.J. McMullan, G.G. Lonzarich, Physica B [**206-207**]{}, 847 (1995)

C. Pfleiderer, G.J. McMullan, S.R. Julian, G.G. Lonzarich, Phys. Rev. B [**55**]{}, 8330 (1997)

A.E. Petrova, V. Krasnorussky, John Sarrao and S.M. Stishov, Phys. Rev. B [**73**]{}, 052409 (2006)

V.M. Nabutovskii, A.Z. Patashinskii, Fizika Tverdogo Tela [**10**]{}, 3121 (1968)

M.E. Fisher and J.S. Langer, Phys. Rev. Lett. [**20**]{}, 665 (1968)

T.G. Richard and D.J.W. Geldart, Phys. Rev. Lett. [**30**]{}, 290 (1973)

S. Kusaka, K. Yamamoto, T. Komatsubara and Y. Ishikawa, Solid State Communications [**20**]{}, 925 (1976)

M. Matsunaga, Y. Ishikawa, T. Nakajima, J. Phys. Soc. Japan [**51**]{}, 1153 (1982)

C. Pfleiderer, J. Magnetism and Magnetic Materials [**226-230**]{}, 23 (2001)

A.E. Petrova, V.A. Sidorov and S.M. Stishov, Physica B [**359-361**]{}, 1463 (2005)

The normal experimental procedure starts with cooling the high pressure cell, containing a certain amount of compressed helium and the sample. After a while helium crystallizes, blocking the high pressure tubing \[4,11\] and making further cooling isochoric with respect to solid helium. Cooling and subsequently warming the sample of MnSi, whose compressibility is quite different from that of helium, are therefore neither isobaric nor isochoric.

T. Moriya, Spin Fluctuations in Itinerant Electron Magnetism (Springer-Verlag, Berlin-Heidelberg-New York-Tokyo, 1985)

Frank J. Blatt, Physics of Electronic Conduction in Solids (McGraw-Hill Book Company, 1968)

H. Wiesmann, M. Gurvitch, H. Lutz, A. Ghosh, B. Schwarz, Myron Strongin, P.B. Allen, J.W. Halley, Phys. Rev. Lett. [**38**]{}, 782 (1977)

H.T. Diep, Phys. Rev. B [**39**]{}, 397 (1989)

Using the hyperscaling relation $d{\nu}=2-{\alpha}$ and the ambient pressure value of ${\nu}=0.62$ for MnSi \[17\], one finds ${\alpha}=0.14$

S.V. Grigoriev, S.V. Maleyev, A.I. Okorokov, Yu.O. Chetverikov, R. Georgii, P. Böni, D. Lamago, H. Eckerlebe, and K. Pranzas, Phys. Rev. B [**72**]{}, 134420 (2005)

A.N. Bogdanov, U.K. Rö[ß]{}ler, C. Pfleiderer, Physica B [**359-361**]{}, 1162 (2005)

B. Binz, A. Vishwanath, and V. Aji, Phys. Rev. Lett. [**96**]{}, 207202 (2006)

At a first order phase transition, a volume change occurs since a homogeneous state of matter becomes unstable in a certain range of volumes, which may be called a volume gap. Specific features of the behavior around a second order phase transition could fall into the volume gap when the transition turns out to be first order, and therefore may not be observable.
---
abstract: 'We report the discovery of a transiting, $R_p = {4.347}\pm{0.099}R_\oplus$, circumbinary planet (CBP) orbiting the [*[[*Kepler*]{}]{}*]{} $K+M$ Eclipsing Binary (EB) system KIC 12351927 (Kepler-413) every $\sim66$ days on an eccentric orbit with $a_p = {0.355}\pm{0.002}AU$, $e_p = {0.118}\pm{0.002}$. The two stars, with $M_A={0.820}\pm{0.015}M_{\odot}, R_A={0.776}\pm{0.009}R_{\odot}$ and $M_B={0.542}\pm{0.008}M_{\odot}, R_B = {0.484}\pm{0.024}R_{\odot}$ respectively, revolve around each other every $10.11615\pm0.00001$ days on a nearly circular ($e_{EB} = 0.037\pm0.002$) orbit. The orbital plane of the EB is slightly inclined to the line of sight ($i_{EB}=87.33\pm0.06\arcdeg$) while that of the planet is inclined by $\sim2.5\arcdeg$ to the binary plane at the reference epoch. Orbital precession with a period of $\sim11$ years causes the inclination of the latter to the sky plane to continuously change. As a result, the planet often fails to transit the primary star at inferior conjunction, causing stretches of hundreds of days with no transits (corresponding to multiple planetary orbital periods). We predict that the next transit will not occur until 2020. The orbital configuration of the system places the planet slightly closer to its host stars than the inner edge of the extended habitable zone. Additionally, the orbital configuration of the system is such that the CBP may experience Cassini-States dynamics under the influence of the EB, in which the planet’s obliquity precesses with a rate comparable to its orbital precession. Depending on the angular precession frequency of the CBP, it could potentially undergo obliquity fluctuations of dozens of degrees (and complex seasonal cycles) on precession timescales.'
author:
- 'V. B. Kostov, P. R. McCullough, J. A. Carter, M. Deleuil, R. F. Díaz, D. C. Fabrycky, G. Hébrard, T. C. Hinse, T. Mazeh J. A. Orosz, Z. I. Tsvetanov, W. F. Welsh'
title: 'Kepler-413b: a slightly misaligned, Neptune-size transiting circumbinary planet'
---
Introduction {#sec:intro}
============
A mere two years ago, [@doyle2011] announced the discovery of the first transiting circumbinary planet (CBP), Kepler-16b. Six more transiting CBPs, including a multi-planet system, a CBP in the habitable zone, and a quadruple host stellar system, have been reported since [@Welsh2012; @Orosz2012; @Orosz2012b; @kostov2013; @Schwamb2013]. In comparison, the number of planetary candidates orbiting single stars is significantly larger $-$ three thousand and counting [@burke2013].
Extensive theoretical efforts spanning more than two decades have argued that planets can form around binary stars [@alex12; @paar12; @pn07; @pn08a; @pn08b; @pn08c; @pn13; @mart13; @marz13; @mes12a; @mes12b; @mes13; @raf13]. Simulations have shown that sub-Jupiter gas and ice giants should be common and, due to their formation and migration history, should be located near the edge of the CB protoplanetary disk cavity. Indeed, that is where most of the CBPs discovered by [*[[*Kepler*]{}]{}*]{} reside! Once formed, CBPs have been shown to have dynamically stable orbits beyond a certain critical distance [@holman99]. This distance depends on the binary mass fraction and eccentricity and is typically a few binary separations. All discovered CBPs are indeed close to the critical limit – their orbits are only a few tens of percent longer than the critical separation necessary for stability [@welsh14]. Additionally, models of terrestrial planet formation in close binary systems ($a_{bin} < 0.4AU$) indicate that accretion around eccentric binaries typically produces more diverse and less populated planetary systems compared to those around circular binaries [@quin06]. In contrast, the location of the ice line in CB protoplanetary disks is expected to be interior to the critical stability limit for 80% of wide, low-mass binary systems ($M_{bin} < 4M_{\odot}$) with $a_{bin} \sim 1AU$ [@clan13]. Thus, Clanton argues, the formation of rocky planets in such systems may be problematic. The theoretical framework of formation and evolution of planets in multiple stellar systems demands additional observational support, to which our latest CBP discovery [[Kepler-413 ]{}]{}contributes an important new insight.
The configurations of six of the confirmed CBPs are such that they currently transit their parent stars every planetary orbit. [@doyle2011] note, however, that the tertiary (planet transits across the primary star) of Kepler-16b will cease after 2018, and the quaternary (planet transits across the secondary star) after 2014. The last transit of Kepler-35b was at BJD 2455965 [@Welsh2012]; it will start transiting again in a decade. As pointed out by [@schneider1994], some CBP orbits may be sufficiently misaligned with respect to their host EB and hence precessing such that the above behavior may not be an exception. Additionally, [@fouc13] argue that circumbinary disks around sub-AU stellar binaries should be strongly aligned (mutual inclination $\theta \le 2\arcdeg$), in the absence of external perturbations by additional bodies (either during or after formation), whereas the disks and planets around wider binaries can potentially be misaligned ($\theta \ge 5\arcdeg$). [@fouc13] note that due to the turbulent environment of star formation, the rotational direction of the gas accreting onto the central proto-binary is in general not in the same direction as that of the central core. Their calculations show that the CB disk is twisted and warped under the gravitational influence of the binary. These features introduce a back-reaction torque onto the binary which, together with an additional torque from mass accretion, will likely align the CB protoplanetary disks and the host binary for close binaries but allow for misalignment in wider binaries.
The observational consequence of slightly misaligned CBPs is that they may often fail to transit their host stars, resulting in a light curve exhibiting one or more consecutive tertiary transits followed by prolonged periods of time where no transits occur. This effect can be further amplified if the size of the semi-minor axis of the transited star projected upon the plane of the sky is large compared to the star’s radius.
Such is the case of [[Kepler-413 ]{}]{}(KIC 12351927), a $10.116146$-day Eclipsing Binary (EB) system. Its [*Kepler*]{} light curve exhibits a set of three planetary transits (separated by $\sim66$ days) followed by $\sim800$ days with no transits, followed by another group of five transits (again $\sim66$ days apart). We do not detect additional events $\sim66$ days (or integer multiples thereof) after the last transit. Our analysis shows that this peculiar behavior is indeed caused by a small misalignment and precession of the planetary orbit with respect to that of the binary star.
Here we present our discovery and characterization of the CBP orbiting the EB [[Kepler-413 ]{}]{}. This paper is organized as an iterative set of steps that we followed for the complete description of the circumbinary system. In Section \[sec:kepler\] we describe our analysis of the [[*Kepler*]{}]{} data, followed by our observations in Section \[sec:followup\]. We present our analysis and results in Section \[sec:photodynamics\], discuss them in Section \[sec:discussion\] and draw conclusions in Section \[sec:conclusions\].
[*Kepler*]{} Data {#sec:kepler}
=================
[*Kepler*]{} Light Curve {#sec:lc}
------------------------
We extract the center times of the primary ($T_{prim}$) and secondary ($T_{sec}$) stellar eclipses, the normalized EB semi major axes ($a/R_A$), ($a/R_B$), the ratio of the stellar radii ($R_B/R_A$), and inclination ($i_b$) of the binary and the flux contribution of star B from the [[*Kepler*]{}]{} light curve. Throughout this work, we refer to the primary star with a subscript [*“A”*]{}, to the secondary with a subscript [*“B”*]{}, and to the planet with a subscript [*“p”*]{}. We model the EB light curve of [[Kepler-413 ]{}]{}with ELC [@Orosz2012].
The [[*Kepler*]{}]{} data analysis pipeline [@jen10] uses a cosmic-ray detection procedure which introduces artificial brightening near the middle of the stellar eclipses of [[Kepler-413 ]{}]{}(see also @Welsh2012). The procedure flags and corrects for positive and negative spikes in the light curves. The rapidly changing stellar brightness during the eclipse and the comparable width between the detrending window used by the pipeline and the duration of the stellar eclipse misleads the procedure into erroneously interpreting the mid-eclipse data points as negative spikes. This leads to the unnecessary application of the cosmic ray correction to the mid-eclipse data points prior to the extraction of the light curve. The target pixel files, however, contain a column that stores the fluxes, aperture positions and times of each flagged cosmic ray event. To account for the anomalous cosmic ray rejection introduced by the pipeline, we add this column back to the flux column using fv (downloaded from the [*Kepler Guest Observer*]{} website) and then re-extract the corrected light curve using the [*kepextract*]{} package from PyKE [^1] [@still_xxx; @kinem12]. We note that our custom light curve extraction from the target pixel files for Quarters 1 through 14 introduces a known timing error of $\sim67$ sec in the reported times which we account for.
Next, we detrend the normalized, raw [*Kepler*]{} data (SAPFLUX with a SAPQUALITY flag of 0) of [[Kepler-413 ]{}]{}by an iterative fit with a high-order (50+) Legendre polynomial on a Quarter-by-Quarter basis. A representative section of the light curve, spanning Quarter 15, is shown in Figure \[fig:raw\_lc\]. We use a simple $\sigma$-clipping criterion, where points that are 3-$\sigma$ above and below the fit are removed and the fit is recalculated. Next, the stellar eclipses are clipped out. We note that for our search for transiting CBPs we do this for the entire EB catalog listed in [@slawson11; @kirk14]. The order of execution of the two steps (detrending and removal of stellar eclipses) generally depends on the baseline variability of the particular target. For quiet stars (like [[Kepler-413 ]{}]{}) we first remove the eclipses and then detrend.
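The detrending step can be sketched as below: an iterative Legendre-polynomial fit with 3-$\sigma$ clipping (the polynomial order and threshold follow the text; the light curve, its variability, and the `detrend` helper are toy constructions, not the actual pipeline).

```python
# Iterative high-order Legendre detrend with sigma clipping (toy data).
import numpy as np

def detrend(time, flux, order=50, nsigma=3.0, niter=5):
    """Fit a Legendre polynomial, drop >nsigma outliers, refit; return
    the flux divided by the trend and the mask of kept points."""
    t = np.interp(time, (time.min(), time.max()), (-1.0, 1.0))  # map to [-1, 1]
    keep = np.ones(time.size, dtype=bool)
    for _ in range(niter):
        coeffs = np.polynomial.legendre.legfit(t[keep], flux[keep], order)
        model = np.polynomial.legendre.legval(t, coeffs)
        resid = flux - model
        sigma = np.std(resid[keep])
        keep = np.abs(resid) < nsigma * sigma
    return flux / model, keep

rng = np.random.default_rng(1)
time = np.linspace(0, 90, 2000)                       # ~one quarter, days
flux = 1.0 + 0.01 * np.sin(time / 7.0) + rng.normal(0, 1e-3, time.size)
flat, keep = detrend(time, flux)
print("scatter after detrending:", np.std(flat[keep]))
```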
Next, we phase-fold the light curve of [[Kepler-413 ]{}]{}on our best-fit binary star period of $P = 10.116146$ days. For fitting purposes, we allow the limb-darkening coefficients of the primary star to vary freely. We note that star B is not completely occulted during the secondary stellar eclipse, and its contribution to the total light during secondary eclipse needs to be taken into account. The best-fit models to the folded primary and secondary eclipses, based on the fast analytic mode of ELC (using @mand02) are shown in Figure \[fig:ELC\_lc\]. The best-fit parameters for the ELC model of the [*[[*Kepler*]{}]{}*]{} light curve of [[Kepler-413 ]{}]{}are listed in Table \[tab\_parameters\]. Including a “third-light” contamination of 8% due to the nearby star (see @Kostov2014b), we obtain $k = R_B/R_A = 0.5832\pm0.0695$, $a/R_A = 27.5438\pm0.0003$, $i_b = 87.3258\arcdeg\pm0.0987$, and $T_B/T_A = 0.7369\pm0.0153$.
We measure the stellar eclipse times using the methodology of [@Orosz2012]. For completeness, we briefly describe it here. We extract the data around each eclipse and detrend the light curve. Starting with the ephemeris given by our best-fit model, we phase-fold the light curve on the given period. Thus folded, the data are next used to create an eclipse template based on a cubic Hermite polynomial. Next, adjusting only the center time of the eclipse template, we iteratively fit it to each individual eclipse and measure the mid-eclipse times. To calculate eclipse time variations (ETVs), we fit a linear ephemeris to the measured primary and secondary eclipse times. The Observed minus Calculated ([*“O-C”*]{}) residuals, shown in Figure \[fig:etvs\], have r.m.s. amplitudes of $A_{prim}\sim0.57$ min and $A_{sec}\sim8.6$ min respectively. Primary eclipses near days (BJD-2455000) 63, 155, 185, 246, 276, 337, 559, 640, 802, 842, 903, 994, 1015, 1035, 1105, 1126, 1237, and 1247 have been removed due to bad (with a flag of SAPQUALITY$\ne0$) or missing data. $A_{sec}$ is much larger than $A_{prim}$ because the secondary eclipses are much shallower than the primary eclipses and are therefore much noisier.
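The final step, the linear ephemeris and O-C residuals, can be sketched as follows. The toy eclipse times use the period quoted in the text, with invented scatter at the measured primary-ETV r.m.s. level; the template-fitting step itself is not reproduced here.

```python
# Fit a linear ephemeris T_n = T0 + n*P to mid-eclipse times; form O-C.
import numpy as np

P_guess = 10.116146                                   # days (from the text)
n = np.arange(0, 40)
rng = np.random.default_rng(2)
t_meas = 5.0 + n * P_guess + rng.normal(0, 0.57 / 1440.0, n.size)  # toy times

# linear least squares for (T0, P)
A = np.vstack([np.ones_like(n, dtype=float), n]).T
(T0, P), *_ = np.linalg.lstsq(A, t_meas, rcond=None)

oc_minutes = (t_meas - (T0 + n * P)) * 1440.0         # O-C residuals, minutes
print(f"P = {P:.6f} d, O-C rms = {np.std(oc_minutes):.2f} min")
```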
The high precision of the measured primary ETVs allows us to constrain the mass of the CBP. The planet contributes to non-zero ETVs through the geometric light travel-time and the dynamical perturbations it exerts on the EB [@bork13]. A CBP of $10M_{Jup}$ and with the orbital configuration of [[Kepler-413 ]{}]{}would cause primary ETVs with amplitudes of $A_{geometric}\sim1.2$ sec and $A_{dynamic}\sim2.7$ min respectively. The latter is $\sim3\sigma$ larger than the measured amplitude of the primary ETVs, indicating an upper limit on the mass of the CBP of $\sim10M_{Jup}$ and thereby confirming its planetary nature.
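The quoted geometric (light travel-time) amplitude can be checked with a back-of-the-envelope calculation: the planet displaces the binary barycenter by $a_p M_p/(M_{bin}+M_p)$, and the ETV is that distance divided by $c$. Masses and $a_p$ are taken from the text; eccentricity and projection factors are ignored in this sketch.

```python
# Order-of-magnitude check of the geometric ETV amplitude for a 10 M_Jup CBP.
M_A, M_B = 0.820, 0.542          # solar masses (from the text)
M_p = 10 * 9.543e-4              # 10 Jupiter masses in solar masses
a_p = 0.355                      # AU (from the text)
AU_LIGHT_S = 499.005             # light travel time across 1 AU, seconds

a_bary = a_p * M_p / (M_A + M_B + M_p)     # barycentric wobble, AU
A_geom = a_bary * AU_LIGHT_S               # seconds
print(f"A_geometric ~ {A_geom:.1f} s")     # ~1.2 s, consistent with the text
```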
Discovering the transits of [[Kepler-413b ]{}]{}
------------------------------------------------
We discovered the planetary transits of [[Kepler-413b ]{}]{}using the method described in [@kostov2013]. For completeness, we briefly outline it here.
Due to the aperiodic nature of the transits of a CBP, traditional methods used to search for periodic signals are not adequate. The amplitude of the transit timing variations between consecutive transits of [[Kepler-413b ]{}]{}, for example, is up to two days ($\sim3\%$ of one orbital period), compared to an average transit duration of less than 0.5 days. To account for this, we developed an algorithm tailored for finding individual box-shaped features in a light curve [@kostov2013], based on the widely-used Box-fitting Least-Squares (BLS) method [@bls]. To distinguish between systematic effects and genuine transits, we incorporated the methodology of [@burke2006].
Our procedure is as follows. Each detrended light curve is segmented into smaller sections of equal lengths (dependent on the period of the EB and on the quality of the detrending). Next, each section is replicated N times (the number is arbitrary) to create a periodic photometric time-series. We apply BLS to each and search for the most significant positive (transit) and negative (anti-transit, in the inverted time-series flux) box-shaped features. We compare the goodness-of-fit of the two in terms of the $\Delta\chi^{2}$ difference between the box-shaped model and a straight-line model. Systematic effects (positive or negative) present in a particular segment will have similar values of $\Delta\chi^{2}_{transit}$ and $\Delta\chi^{2}_{anti-transit}$. On the contrary, a segment with a dominant transit (or anti-transit) feature will be clearly separated from the rest on a $\Delta\chi^{2}_{transit}$ versus $\Delta\chi^{2}_{anti-transit}$ diagram.
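A minimal sketch of the transit versus anti-transit comparison (in the spirit of [@burke2006], not the actual search code) is shown below; it fits a box-shaped dip at every trial position, records the $\chi^2$ improvement over a flat line, and repeats the search on the inverted flux. The light curve is synthetic:

```python
# Sketch: chi^2 improvement of the best box-shaped dip ("transit") versus
# the best dip in the inverted flux ("anti-transit"). Synthetic data.
import numpy as np

def best_box_dchi2(t, y, width):
    """Best chi^2 improvement of a box-shaped dip over a flat line."""
    chi2_line = np.sum((y - np.mean(y)) ** 2)
    best = 0.0
    for t0 in t:
        inbox = (t >= t0) & (t < t0 + width)
        if inbox.sum() < 3 or inbox.sum() == t.size:
            continue
        mi, mo = np.mean(y[inbox]), np.mean(y[~inbox])
        if mi >= mo:   # keep only dips; bumps belong to the inverted search
            continue
        model = np.where(inbox, mi, mo)
        best = max(best, chi2_line - np.sum((y - model) ** 2))
    return best

rng = np.random.default_rng(1)
t = np.arange(0.0, 20.0, 0.02)            # days, roughly long-cadence sampling
y = 1.0 + rng.normal(0.0, 4e-4, t.size)   # white noise at ~400 ppm
y[(t > 9.0) & (t < 9.4)] -= 3e-3          # one 0.4-day box-shaped transit

dchi2_transit = best_box_dchi2(t, y, 0.4)
dchi2_anti = best_box_dchi2(t, 2.0 - y, 0.4)   # same search, inverted flux
```

A genuine transit yields $\Delta\chi^{2}_{transit}/\Delta\chi^{2}_{anti-transit}$ well above the merit threshold of 2, while pure noise gives a ratio near unity.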
The (transit) – (anti-transit) diagram for [[Kepler-413 ]{}]{}is shown on Fig. \[fig:dipblip\]. The segments of the light curve where no preferable signal (transit or anti-transit) is detected form a well-defined cloud, symmetrically distributed along the $\Delta\chi^{2}_{transit}/\Delta\chi^{2}_{anti-transit}=1$ line. The segments containing the transits of the CBP, marked with red (or grey color) diamonds, along with a few other segments where systematic features dominate (black circles), exhibit a preferred $\Delta\chi^{2}_{transit}$ signal. The blue line represents the merit criterion adopted for this target, defined in terms of an iteratively chosen ratio of $\Delta\chi^{2}_{transit}/\Delta\chi^{2}_{anti-transit}=2$.
The signal for all but one (transit 7) of the [[Kepler-413b ]{}]{}transits is very strong. That transit 7 falls short of the criterion is not surprising. This event is the shortest and also the shallowest and can be easily missed even when scrutinized by eye. For [[Kepler-413 ]{}]{}we had a preliminary dynamical model of the system based on events 1 through 6, prior to the release of Quarter 14 data. The observed events 7 and 8 were very near the predicted times, providing additional constraints to our model.
Stellar Rotation {#sec:rotation}
----------------
Flux modulations of up to $\sim1\%$ on a timescale of $\sim13$ days are apparent in the light curve of [[Kepler-413 ]{}]{}. We assume the source of this variation is star spots carried along with the rotation of the stellar surface of the primary, the dominant flux contributor ($\sim85\%$) in the [[*Kepler*]{}]{} bandpass. To calculate the rotation period of star A, we compute Lomb-Scargle (L-S) periodograms and perform wavelet analysis [using a Morlet wavelet of order 6, @torr98] for each Quarter separately. No single period matches all Quarters because of spot evolution, as spots emerge and disappear (the most dramatic change, for example, occurring during Quarter 10). We estimate an average rotation period across all Quarters of $P_{rot,A}=13.1\pm0.3$ days and $P_{rot,A}=12.9\pm0.4$ days from the Lomb-Scargle and wavelet analyses respectively.
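The period search can be illustrated with a least-squares sine/cosine periodogram (equivalent in spirit to Lomb-Scargle, though not the implementation used here) applied to a synthetic spot-modulated light curve with an injected 13.1-day period:

```python
# Sketch: recover a ~13.1 d rotation period from a synthetic spot signal
# with a simple least-squares sinusoid periodogram.
import numpy as np

def sine_fit_power(t, y, periods):
    """Fraction of variance explained by a sinusoid at each trial period."""
    y0 = y - np.mean(y)
    tot = np.sum(y0 ** 2)
    power = []
    for p in periods:
        ph = 2.0 * np.pi * t / p
        A = np.column_stack([np.sin(ph), np.cos(ph)])
        coef, *_ = np.linalg.lstsq(A, y0, rcond=None)
        power.append(1.0 - np.sum((y0 - A @ coef) ** 2) / tot)
    return np.array(power)

rng = np.random.default_rng(2)
t = np.arange(0.0, 90.0, 0.0204)   # ~one Quarter at long-cadence sampling
y = 1.0 + 0.005 * np.sin(2 * np.pi * t / 13.1) + rng.normal(0, 1e-3, t.size)
periods = np.linspace(5.0, 30.0, 2000)
p_rot = periods[np.argmax(sine_fit_power(t, y, periods))]   # ~13.1 d
```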
In addition, we measured the degree of self-similarity of the light curve over a range of different time lags by performing an autocorrelation function (ACF) analysis. In the case of rotational modulation, repeated spot-crossing signatures produce ACF peaks at lags corresponding to the rotation period and its integer multiples (McQuillan, Aigrain and Mazeh 2013). Figure \[fig:acf\] depicts the autocorrelation function (ACF) of the cleaned and detrended light curve, after the primary and secondary eclipses were removed and replaced by the value of the mean light curve with a typical random noise. The autocorrelation reveals clear stable modulation with a period of about 13 days. To obtain a more precise value of the stellar rotation we measured the lags of the first 25 peaks of the autocorrelation and fitted them with a straight line, as shown in the lower panel of Figure \[fig:acf\] (McQuillan, Mazeh and Aigrain 2013). From the slope of the fitted line we derived a value of $P_{rot,A}=13.15\pm0.15$ days as our best value for the stellar rotation period, consistent with the rotation period derived from the L-S analysis.
We carefully inspected the light curve to verify the period and to ensure that it did not correspond to any harmonic of the spin period. A 13.1-day period matches the spot modulation well. Using the stellar rotation velocity measured from our spectral analysis we derive an upper limit to star A’s radius of $R_A\le1.29~R_{\odot}$. The surface gravity of star A, ${\mbox{$\log g$}}_A = 4.67$, provided by the NASA Exoplanet Archive[^2], in combination with the upper limit on $R_A$ indicate $M_A\le2.82~M_{\odot}$.
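The mass limit follows directly from $M = g R^2/G$; a consistency check with standard constants (a sketch, not the original computation) reproduces the quoted value:

```python
# Consistency check of the M_A upper limit from log g and the
# rotation-based radius limit: M = g R^2 / G.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
R_sun = 6.957e8        # m

logg_cgs = 4.67        # from the NASA Exoplanet Archive, as quoted
R_A_max = 1.29 * R_sun # rotation-based upper limit on R_A

g_si = 10.0 ** logg_cgs * 1e-2             # cm s^-2 -> m s^-2
M_A_max = g_si * R_A_max ** 2 / G / M_sun  # ~2.8 M_sun, matching the text
```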
Doppler Beaming {#sec:doppler}
---------------
A radiation source emitting isotropically and moving at nonrelativistic speed with respect to the observer is subject to a Doppler beaming effect [@ryb79]. The apparent brightness of the source increases or decreases as it moves towards or away from the observer. To calculate the Doppler beaming factor for star A, we approximate its spectrum as that of a blackbody with $T_{eff} =4700$K (see next Section) and the [[*Kepler*]{}]{} data as monochromatic observations centered at $\lambda=600$nm. Using Equations 2 and 3 from [@loeb2003], we estimate the boost factor $3-\alpha = 5.13$. For the value of $K_1 = 43.49$ km s$^{-1}$ derived from the radial velocity, we expect a Doppler beaming effect due to star A with an amplitude of $\sim750$ ppm, on par with the intrinsic r.m.s. of the individual [[*Kepler*]{}]{} measurements. The Doppler beaming contribution due to star B is much smaller (amplitude of $\sim50$ ppm) because of its small contribution to the total brightness of the system.
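For monochromatic observations of a blackbody, the boost factor of [@loeb2003] reduces to $3-\alpha = x e^x/(e^x-1)$ with $x = hc/(\lambda k T)$, and the photometric amplitude is $(3-\alpha)\,K_1/c$. The sketch below (not the original calculation) reproduces the numbers quoted above:

```python
# Sketch of the Doppler-beaming estimate: blackbody boost factor at a
# single wavelength, then the expected photometric amplitude.
import numpy as np

h = 6.626e-34; c = 2.998e8; k = 1.381e-23

T_eff = 4700.0     # K, from the spectral analysis
lam = 600e-9       # m, adopted effective Kepler wavelength
K1 = 43.49e3       # m/s, RV semi-amplitude of star A

x = h * c / (lam * k * T_eff)
boost = x * np.exp(x) / (np.exp(x) - 1.0)   # 3 - alpha, ~5.13
amp_ppm = boost * K1 / c * 1e6              # ~750 ppm
```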
To search for the signal due to star A, we apply a custom detrending of the [[*Kepler*]{}]{} light curve tailored to the rotational modulations. To each data point $t_i$, we fit one or more sine waves with the same mean and period (but different phases and amplitudes) centered on the $[-0.5P_{rot,A}+t_i,t_i+0.5P_{rot,A}]$ interval. The mean values of the best-fit sine waves at each point represent a rotation-free light curve. A few sections of the light curve are consistent with a single spot (or group of spots) rotating in and out of view, and can be modeled with one sinusoid; most need two or more. The continuously evolving spot pattern, the faintness of the source, and the fact that the binary period is close to the rotation period of the primary star make detection of the otherwise strong expected signal ($\sim750$ ppm) challenging. Despite the custom detrending, the modulations in the processed data are consistent with noise and we could not detect the Doppler beaming oscillations caused by the motion of star A. We note that we successfully detected the Doppler beaming effect for Kepler-64 [@kostov2013], where the amplitude is smaller but the target is brighter and the r.m.s. scatter per 30-min cadence is smaller.
Follow-up Observations {#sec:followup}
======================
SOPHIE {#SOPHIE}
------
[[Kepler-413 ]{}]{}was observed in September-October 2012 and in March-April 2013 with the SOPHIE spectrograph at the 1.93-m telescope of Haute-Provence Observatory, France. The goal was to detect the reflex motion of the primary star due to its secondary component through radial velocity variations. SOPHIE [@bouchy09] is a fiber-fed, cross-dispersed, environmentally stabilized échelle spectrograph dedicated to high-precision radial velocity measurements. The data were secured in High-Efficiency mode (resolving power $R=40\,000$) and slow read-out mode of the detector. The exposure times ranged between 1200 and 1800 sec, allowing a signal-to-noise ratio per pixel in the range $4-8$ to be reached at 550 nm. The particularly low signal-to-noise ratio is due to the faintness of the target ($K_p = 15.52$ mag).
The spectra were extracted from the detector images with the SOPHIE pipeline, which includes localization of the spectral orders on the 2D-images, optimal order extraction, cosmic-ray rejection, wavelength calibration and flat-field correction. Then we performed a cross-correlation of the extracted spectra with a G2-type numerical mask including more than 3500 lines. Significant cross-correlation functions (CCFs) were detected despite the low signal-to-noise ratio. Their Gaussian fits allow radial velocities to be measured, as well as the associated uncertainties, following the method described by [@baranne96] and [@pepe02]. The full width at half maximum (FWHM) of those Gaussians is $11 \pm 1$ km s$^{-1}$, and the contrast is $12 \pm 4$% of the continuum. One of the observations (BJD$\,= 2\,456\,195.40345$) was corrected for the $230\pm30$ m s$^{-1}$ blueshift due to moonlight pollution, measured thanks to the reference fiber pointed at the sky [e.g. @hebrard08]. The other exposures were not significantly polluted by sky background or by light from the Moon. The measured radial velocities are reported in Table \[tab\_RV\] and plotted in Figure \[fig\_orbits\]. The radial velocities show significant variations in phase with the [[*Kepler*]{}]{} ephemeris.
The radial velocities were fitted with a Keplerian model, taking into account the three constraints derived from the [[*Kepler*]{}]{} photometry: the orbital period $P$, and the mid-times of the primary and secondary stellar eclipses, $T_{prim}$ and $T_{sec}$ respectively. The fits were made using the `PASTIS` code [@diaz13], previously used e.g. by [@santerne11] and [@hebrard13]. Confidence intervals around the best solutions were determined by Monte Carlo simulations. The histograms of the obtained parameters have a single peak. We fitted them with Gaussians, whose centers and widths are the derived values and uncertainties reported in Table \[tab\_parameters\]. The best fits are over-plotted with the data in Figure \[fig\_orbits\]. The dispersion of the residuals of the fit is 106 m s$^{-1}$, in agreement with the error bars of the radial velocity measurements. We did not detect any significant drift of the radial velocities in addition to the reflex motion due to the binary. The small difference between the stellar eclipses, $T_{prim} - T_{sec}$, and $P/2$ measured from [[*Kepler*]{}]{} photometry indicates that the orbit is not circular. Together with the radial velocities, it allows the detection of a small but significant eccentricity $e=0.037\pm0.002$, and a longitude of periastron $\omega=279.54\pm0.86\arcdeg$. We note that our spectroscopic observations established [[Kepler-413 ]{}]{}as a single-lined spectroscopic binary, and allowed us to evaluate the binary mass function ${\it f(m)}$ from the derived radial velocity semi-amplitude of the primary star, $K_1 = 43.485\pm0.085$ km s$^{-1}$.
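For a single-lined binary the mass function is $f(m) = (1-e^2)^{3/2}\,P\,K_1^3/(2\pi G)$. The sketch below evaluates it with the $K_1$ and $e$ quoted above; the binary period used here is an assumed round number for illustration, not the fitted value:

```python
# Illustrative evaluation of the single-lined mass function
# f(m) = (1 - e^2)^{3/2} * P * K1^3 / (2 pi G).
import numpy as np

G = 6.674e-11; M_sun = 1.989e30
P_bin = 10.1 * 86400.0   # s; assumed binary period, for illustration only
K1 = 43.485e3            # m/s, from the RV fit
e = 0.037                # from the RV fit

f_m = (1.0 - e**2) ** 1.5 * P_bin * K1**3 / (2.0 * np.pi * G) / M_sun
```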
The signal-to-noise ratio of the final co-added spectrum is too low to allow a reliable spectral analysis of the star. The profile of the H-$\alpha$ line suggests an effective temperature $T_{\rm eff} \simeq 4700$ K. The width of the CCF implies $v \sin i_* =5\pm2\,$km s$^{-1}$.
Third-light companion {#photometry}
---------------------
The large [[*Kepler*]{}]{} pixel, $4\arcsec\times4\arcsec$ [@jen10b], is prone to photometric contamination due to background sources. Unaccounted extra light inside the target’s aperture can contribute to an erroneous interpretation of eclipse and transit depths, resulting in incorrect estimation of the relative sizes of the occulting objects. Proper characterization of such contamination is particularly important for the analysis of CBPs [e.g. @Schwamb2013; @kostov2013].
We note that there is a visible companion (“third light”) inside the central pixel of [[Kepler-413 ]{}]{}at a separation of $\sim1.6\arcsec$ from the target, with a magnitude difference of $\Delta K_p\sim2.8$ [@Kostov2014b]. The presence of the companion can be deduced from 2MASS (Skrutskie et al. 2006) and UKIRT [@lawrence07] images, and from the full frame [[*Kepler*]{}]{} image. A marked asymmetry in the target’s point spread function, exhibited as a side bump with a position angle of $\sim218\arcdeg$, hints at the presence of an object close to [[Kepler-413 ]{}]{}.
During our reconnaissance spectroscopy with the 3.5-m Apache Point Observatory telescope we noticed the companion as a clearly separated star $\sim1.6\arcsec$ away from [[Kepler-413 ]{}]{}. The companion was physically resolved using adaptive-optics-assisted photometry from Robo-AO [@baran13] and seeing-limited infrared photometry from WIYN/WHIRC [@meix10]. The measured flux contribution from the companion to the aperture of [[Kepler-413 ]{}]{}is $\sim8\%, \sim15\%,\sim19\%$ and $\sim21\%$ in the [*Kepler*]{}, J-, H- and Ks-bands respectively [@Kostov2014b]; we correct for the contamination in our analysis. A detailed discussion of the companion’s properties will be presented in future work [@Kostov2014b].
The presence of such contamination is not unusual: adaptive-optics observations of 90 [[*Kepler*]{}]{} planetary candidates show that $\sim20\%$ of them have one visual companion within $2\arcsec$ [@adams12]; lucky imaging observations by [@lillo12] find that $\sim17\%$ of 98 [[*Kepler*]{}]{} Objects of Interest have at least one visual companion within $3\arcsec$. As more than 40% of spectroscopic binaries with $P<10$ days are members of triple systems according to [@tok93], it is reasonable to consider the visible companion to be gravitationally bound to [[Kepler-413 ]{}]{}. Using Table 3 of [@gilli11], for a contaminating star of $K_p\le18.5$ mag (i.e. $\Delta K_p\le3$ mag), and interpolating for the galactic latitude of [[Kepler-413 ]{}]{}of $b=17.47\arcdeg$, we estimate the probability of a random alignment between a background source and [[Kepler-413 ]{}]{}within an area of radius $1.6\arcsec$ to be $\sim0.002$. The fact that a star is nonetheless present within this area indicates that the “third light” source is gravitationally bound to the EB, and could provide a natural mechanism for the observed misalignment of [[Kepler-413b ]{}]{}. Based on this statistical estimate, we argue that [[Kepler-413b ]{}]{}is a CBP in a triple stellar system.
Analysis of the system {#sec:photodynamics}
======================
A complete description of a CBP system requires 18 parameters – three masses ($M_{A}$, $M_{B}$ and $M_p$), three radii ($R_{A}$, $R_{B}$, $R_P$), six orbital elements for the binary system ($a_{bin}, e_{bin}, \omega_{bin}, {\it i_{bin}}$, $\Omega_{bin}$ and phase $\phi_{0,bin}$ ) and six osculating orbital elements for the CBP ($a_p, e_p, \omega_p, {\it i_p}, \Omega_p$ and $\phi_{0,p}$). As described in Sections \[sec:kepler\] and \[sec:followup\], some of these parameters can be evaluated from either the [[*Kepler*]{}]{} data, or from follow-up photometric and spectroscopic observations. Measurements of the stellar radial velocities provide $e_{bin}, \omega_{bin}, {\it i_{bin}}$ and the binary mass function ${\it f(m)}$ (but not the individual stellar masses, as we observed [[Kepler-413 ]{}]{}as a single-lined spectroscopic binary). The relative sizes of the two stars and the inclination of the binary system are derived from the [[*Kepler*]{}]{} light curve. Based on the measured ETVs, we approximate the planet as a test particle ($M_p = 0$) for our preliminary solution of the system, and solve for its mass with the comprehensive photodynamical model. The value of $\Omega_{bin}$ is undetermined (see @doyle2011 [@Welsh2012]), unimportant to our analysis, and is set equal to zero.
Here we derive the mass of the eclipsing binary (thus the masses of the primary and secondary stars) and the radius of the primary star from the planetary transits. Next, we produce a preliminary numerical solution of the system – a necessary input for the comprehensive photometric dynamical analysis we present in Section \[sec:pd\_model\]. We study the dynamical stability of [[Kepler-413b ]{}]{}in Section \[sec:stability\].
Initial Approach: Planetary transits and preliminary solutions {#sec:pl_transits}
--------------------------------------------------------------
The mid-transit times, durations and depths of consecutive transits of a CBP are neither constant nor easy to predict when the number of observed events is low. However, while strictly periodic transit signals can be mimicked by background contamination (either an EB or a planet), the variable behavior of CBP transits provides a unique signature without common false positives.
Different outcomes can be observed depending on the phase of the binary system. While the CBP travels in one direction on the celestial sphere when at inferior conjunction, the projected velocity of each of the two stars can be in either direction. When the star and the planet move in the same direction, the duration of the transit will be longer than when the star is moving in the opposite direction with respect to the planet. As shown by [@kostov2013], the transit durations as a function of binary phase can be used to constrain the a priori unknown mass of the binary and the radius of the primary star (both critical parameters for the photodynamical model described below), assuming the planet transits across the same chord on the star. Typically, the more transits observed and the wider their EB phase coverage, the better the constraints.
While useful for favorable conditions, the approximation of [@kostov2013] is not applicable in general, and we extend it here. Depending on the relative positions of the CBP and the star on the sky, the CBP will transit across different chord lengths with associated impact parameters, such that different transits will have different durations and depths. A particular situation may be favorable, such as the cases of Kepler-64b and Kepler-47b where the CBPs transit across approximately constant chords. While the chord lengths do change from one transit to another, the variations are small as the stellar radii are sufficiently large, the mutual inclination between the orbits of the CBP and the EB is small, and the approximation in [@kostov2013] applies. The situation for [[Kepler-413 ]{}]{}, however, is quite the opposite – due to the misalignment between the two orbits and the small stellar radius, the chord length changes so much from one transit to another that the impact parameter is often larger than $R_{A}+R_{p}$, i.e. the planet misses a transit. To properly account for this novel behavior of a CBP, we modify our analytic approach to allow for a variable impact parameter. Expanding on Equation (4) of [@kostov2013], we add another term ([*D*]{}) to the numerator:
$$t_{dur,i} = \frac{ABD_i}{1 + ACx_i}
\label{eq:durations}$$
$$\begin{split}
A = (M_{bin})^{-1/3} \\
B = 2R_c (\frac{P_p}{2 \pi G })^{1/3} \\
C = - f(m)^{1/3} (\frac{P_p}{P_{bin}})^{1/3} (1-e^{2})^{-1/2} \\
D_i = \sqrt{1 - {\it b_i}^2} \\
x_i = (e\sin\omega + \sin(\theta_i + \omega))
\end{split}
\label{eq:durations2}$$
[where $t_{dur,i}$, ${\it b_i}$ and $\theta_i$ are the duration, impact parameter and binary phase of the ${\it i_{th}}$ transit respectively, $M_{bin}$ is the sum of the masses of the two stars of the EB, $P_p$ is the average period of the CBP, $R_c = R_A + R_p$ is the transited chord length (where $R_A $ and $R_p$ are the radius of the primary star and the planet respectively), $f(m)$ is the binary mass function [Eqn. 2.53, @hilditch01], and [*e*]{} and $\omega$ are the binary eccentricity and argument of periastron respectively. Applying Equation \[eq:durations\] to transits with [*b>0*]{} results in smaller derived $M_{bin}$ compared to transits across a maximum chord, [*b=0*]{}.]{}
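A numerical sketch of Equation \[eq:durations\] illustrates the phase dependence: transits that occur while the star moves with the planet are several times longer than those against it. All input values below ($M_{bin}$, $R_c$, $f(m)$, the periods) are illustrative assumptions, not the fitted parameters:

```python
# Sketch: predicted transit duration versus binary phase and impact
# parameter, from t_dur = A*B*D / (1 + A*C*x). Inputs are assumed values.
import numpy as np

G = 6.674e-11; M_sun = 1.989e30; R_sun = 6.957e8
day = 86400.0

M_bin = 1.33 * M_sun             # assumed binary mass
R_c = 0.84 * R_sun               # assumed R_A + R_p
P_p, P_bin = 66.0 * day, 10.1 * day
f_m = 0.086 * M_sun              # assumed binary mass function
e, omega = 0.037, np.radians(279.5)

A = M_bin ** (-1.0 / 3.0)
B = 2.0 * R_c * (P_p / (2.0 * np.pi * G)) ** (1.0 / 3.0)
C = -f_m ** (1.0 / 3.0) * (P_p / P_bin) ** (1.0 / 3.0) / np.sqrt(1.0 - e**2)

def t_dur_hours(theta, b):
    D = np.sqrt(1.0 - b**2)
    x = e * np.sin(omega) + np.sin(theta + omega)
    return (A * B * D) / (1.0 + A * C * x) / 3600.0

d_fast = t_dur_hours(np.pi / 2.0 - omega - np.pi, 0.0)  # star opposes planet
d_slow = t_dur_hours(np.pi / 2.0 - omega, 0.0)          # star moves with planet
```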
The generally used method to derive [*b*]{} from the measured transit durations and depths for a planet orbiting a single star [@seager03] is not applicable for a CBP. The CBP impact parameter cannot be easily derived from the observables. From geometric considerations, [*b*]{} is:
$$b = \sqrt{(x_{s}-x_{p})^2 + (y_{s} - y_{p})^2}
\label{eq:b_CBP}$$
[where ($x_{s}$, $y_{s}$) and ($x_{p}$,$y_{p}$) are the sky [*x*]{} and [*y*]{} - coordinates of the star and the planet respectively. The former depend on the binary parameters only and can be calculated from [@hilditch01][^3]:]{}
$$\begin{split}
x_{s} = r_{s} \cos(\theta_{bin} + \omega_{bin}) \\
y_{s} = r_{s} \sin(\theta_{bin} + \omega_{bin}) \cos i_{bin}
\end{split}
\label{eq:xy_EB}$$
[where $r_{s}, \omega_{bin}, \theta_{bin}$ and $i_{bin}$ can be directly estimated from the radial velocity measurements and from the [[*Kepler*]{}]{} light curve. The CBP coordinates, however, depend on the unknown mass of the binary and on the instantaneous orbital elements of the CBP $\Omega_{p}, \theta_{p}$ and $i_{p}$. Assuming a circular orbit for the CBP:]{}
$$\begin{split}
x_{p}=a_{p} [\cos(\Omega_{p})\cos(\theta_{p}) - \sin(\Omega_{p})\sin(\theta_{p})\cos(i_{p})] \\
y_{p}=a_{p} [\sin(\Omega_{p})\cos(\theta_{p}) + \cos(\Omega_{p})\sin(\theta_{p})\cos(i_{p})]
\end{split}
\label{eq:xy_CBP}$$
[where $a_{p}$ is the semi-major axis of the CBP. For a mis-aligned CBP like [[Kepler-413b ]{}]{}, however, $\Omega_{p} \ne 0.0$ and equations \[eq:xy\_CBP\] cannot be simplified any further. In addition, due to 3-body dynamics, all three CBP orbital parameters vary with time. As a result, incorporating Equation \[eq:b\_CBP\] into Equation \[eq:durations\] will significantly complicate the solution.]{}
However, we note that Equation \[eq:durations\] uses only part of the information contained in the [[*Kepler*]{}]{} light curve, i.e. transit durations and centers; it does not capitalize on the depth or shape of each transit. To fully exploit the available data, we evaluate the impact parameters of the eight transits directly from the light curve by fitting a limb-darkened transit model [@mand02] to each transit individually. The procedure is as follows. First, we scale the CB system to a reference frame of a mock, stationary primary star with a mass equal to the total binary mass of [[Kepler-413 ]{}]{}. The scaling is done by adjusting for the relative velocities of the primary star [[Kepler-413 ]{}]{}A ($V_{x,A}$), and of the CBP ($V_{x,p}$). The impact parameters are not modified by the scaling, as it does not change the distance between the planet and the star or their mutual inclination during each transit. We approximate $V_{x,p}$ as a single value for all transits:
$$V_{x,p} = ({\frac{2\pi GM_{bin}}{P_{p}}})^{1/3} = {\it const}
\label{eq:circ_vel}$$
[A mock planet orbits the star on a circular, $P_{p}=66$ day orbit (the period of [[Kepler-413b ]{}]{}). The relative velocity of the observed CBP at the time of each transit $(V_{x,obs,i})$ is calculated as the absolute difference between the instantaneous $V_{x,p}$ and $V_{x,A}$:]{}
$$V_{x,obs,i} = |V_{x,p} - V_{x,A,i}|
\label{eq:scaled_vel}$$
[where $V_{x,A,i}$ can be calculated from the fit to the RV measurements. The scaled time of the $i_{th}$ mock transit $t_{mock,i}$, referred to the time of minimum light, is then:]{}
$$t_{mock,i} = \frac{ |V_{x,p} - V_{x,A,i}|}{V_{x,p}} t_{obs,i}
\label{eq:scaled_time}$$
[where $t_{obs,i}$ is the observed time during the $i_{th}$ transit. The mock transits are “stretched” with respect to the observed ones when $V_{x,A} < 0$ and “compressed” when $V_{x,A} > 0$. ]{}
While $V_{x,p}$ depends on the unknown binary mass, it does so only through its cube root (Equation \[eq:circ\_vel\]). For the low-mass binary we expect from the [[*Kepler*]{}]{} Input Catalog, $V_{x,p}$ varies by only $\sim26\%$ for $M_{bin}$ between 1.0$M_{\odot}$ and 2.0$M_{\odot}$. Thus, the dominant factor in Eqn. \[eq:scaled\_time\] is $V_{x,A,i}$.
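The cube-root insensitivity is easy to verify directly from Equation \[eq:circ\_vel\]:

```python
# V_x,p scales as M_bin^{1/3}, so doubling the binary mass changes it by
# only ~26%.
import numpy as np

G = 6.674e-11; M_sun = 1.989e30
P_p = 66.0 * 86400.0   # s, CBP period

def v_xp(m_bin_solar):
    return (2.0 * np.pi * G * m_bin_solar * M_sun / P_p) ** (1.0 / 3.0)

change_pct = (v_xp(2.0) / v_xp(1.0) - 1.0) * 100.0   # ~26%
```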
The eight scaled, mock transits are next fit individually, sharing the same binary mass $M_{bin}$, size of the primary star $R_A$, and of the CBP radius $R_p$. The normalized semi-major axis of the mock planet, $a_{mock}/R_A$, depends on the binary phase of each transit and is different for different transits – for fitting purposes it ranges from $(a_p - a_A)/R_A$ for transits near secondary stellar eclipse to $(a_p + a_A)/R_A$ for those near primary eclipse. Here $a_{p}$ is the mean semi-major axis of the CBP [[Kepler-413b ]{}]{}and $a_{A}$ is the semi-major axis of the primary star [[Kepler-413 ]{}]{}-A. For light curve modeling, we use the limb-darkening coefficients from Section \[sec:kepler\].
To estimate $R_{p}/R_{A}$, we first fit a limb-darkened light curve model to the scaled transit 8. The binary star is near quadrature during this transit, $|V_{x,A,i}|$ is near zero, $a_{mock}\approx a_{p}$, $M_{bin}$ does not significantly affect Equation \[eq:scaled\_time\] and the scaling is minimal ($t_{mock,i}\approx t_{obs,i}$). To confirm that the scaling is negligible, we fit transit 8 for all $M_{bin}$ between 1.0 and 2.0 $M_{\odot}$. The differences between the derived values for $R_{p,8}/R_{A}$ are indistinguishable – $R_{p,8}/R_{A} = 0.053$ for all $M_{bin}$, where $R_{p,8}$ is the radius of the planet deduced from the fit to scaled transit 8. We next use $R_{p,8}$ for light curve fitting of the other seven scaled transits. Also, the best-fit $a_{mock,8}$ from transit 8 is used in combination with $a_{A}$ to constrain the allowed range for $a_{mock,1-7}$ for the other seven transits, as described above. We note that while transit 1 also occurs near quadrature, its duration and depth are both much smaller than those of transit 8, making the latter a better scaling ruler. The derived impact parameters for transits 1 through 8 are $0.85, 0.71, 0.17, 0.61, 0.84, 0.67, 0.78$ and $0.05$ respectively. We note that these are used to estimate $M_{bin}$ in Equation \[eq:durations\] only, and not as exact inputs to the photodynamical analysis described below.
To evaluate the applicability of our approach, we test it on synthetic light curves designed to mimic [[Kepler-413b ]{}]{}(8 transits, 10-11 misses, CBP on a $\sim66$-day orbit). For a noise-less light curve, we recover the simulated impact parameters of the 8 transits to within $0.01$, the semi-major axis to within 1% and the size of the planet to within 10%. Allowing the (known) mass of the simulated binary star to vary by $\pm0.5 M_{\odot}$ modifies the derived impact parameters by not more than 0.02. For a simulated set of light curves with normally distributed random noise of $\sim700$ ppm r.m.s. per 30-min cadence (similar to that of [[Kepler-413 ]{}]{}) we recover the impact parameters to within $0.15$, and the semi-major axis and the size of the planet each to within 10%. The good agreement between the derived and simulated model values validates the method. The observed (black) and scaled (green, or light color) transits of [[Kepler-413b ]{}]{}and the best-fit models (red, or grey color) to the latter are shown on Figure \[fig:scaled\_transits\].
We note that there are secondary effects not taken into account by Equation \[eq:scaled\_time\]. $V_{x,A}$, assumed constant in the equation, in reality varies throughout the duration of a transit. In principle, the longer the CBP transit, the more the stellar velocity and acceleration deviate from constancy. Longer transits (like transit 6, see Figure \[fig:scaled\_transits\]) have asymmetric shapes, and the circular-orbit approximation for the CBP in Equation \[eq:scaled\_time\] is not optimal. Depending on the phase of the binary star at the time of transit, both the magnitude and the sign of $V_{x,A}$ may change – near quadrature, for example, the star changes direction.
Next, we apply Equation \[eq:durations\] to the eight transits of [[Kepler-413b ]{}]{}for constant and for variable chords and compare the results. The best-fit models for the two cases are shown on Figure \[fig:dur\_fit\] as the blue and red curves respectively. The derived values for $M_{bin}$ and $R_{A}$ are $1.41~M_{\odot}$ and $0.70~R_{\odot}$ for constant [*b*]{}, and $1.33~M_{\odot}$ and $0.91~R_{\odot}$ for varying [*b*]{}. Not accounting for the different impact parameters overestimates $M_{bin}$ and underestimates $R_{A}$.
We use the measured transit duration uncertainties to constrain the derived binary mass as follows. We simulate a set of 10,000 scrambled observations, each consisting of the eight measured transit durations individually perturbed by adding normally distributed noise with a standard deviation of $20$ min. Next, we apply Equation \[eq:durations\] to each realization. The distribution of the derived $M_{bin}$ for the entire set of scrambled observations is shown in Figure \[fig:scrambled\_durations\]. The blue histogram represents the solutions assuming a constant chord length, and the red histogram those allowing a variable chord length. The median values for the binary mass and their 1-sigma deviations are $1.41\pm0.19~M_{\odot}$ and $1.33\pm0.17~M_{\odot}$ for the former and latter case respectively. Based on these results, for our preliminary photodynamical search over the parameter space of the [[Kepler-413 ]{}]{}system (described next) we adopt the latter case, and allow the binary mass to vary from 1.16 to 1.5 $M_{\odot}$.
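The Monte Carlo machinery can be sketched as follows. Equation \[eq:durations\] inverts per transit to $M_{bin} = ((B D_i - t_i C x_i)/t_i)^3$; the sketch below generates durations consistent with an assumed mass, perturbs them with 20-min noise, and re-derives the mass for each realization (combining transits with a simple median, which is not the authors' fitting scheme; all numerical inputs are illustrative):

```python
# Sketch of the scrambled-durations Monte Carlo: perturb 8 durations with
# 20-min Gaussian noise and re-derive M_bin per realization.
import numpy as np

G = 6.674e-11; M_sun = 1.989e30; R_sun = 6.957e8; day = 86400.0

# Assumed system quantities (illustrative, not the fitted values):
R_c = 0.84 * R_sun
P_p, P_bin = 66.0 * day, 10.1 * day
f_m = 0.086 * M_sun
e, omega = 0.037, np.radians(279.5)

B = 2.0 * R_c * (P_p / (2.0 * np.pi * G)) ** (1.0 / 3.0)
C = -f_m ** (1.0 / 3.0) * (P_p / P_bin) ** (1.0 / 3.0) / np.sqrt(1.0 - e**2)

# Per-transit impact parameters (from the text) and assumed binary phases:
b = np.array([0.85, 0.71, 0.17, 0.61, 0.84, 0.67, 0.78, 0.05])
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
D = np.sqrt(1.0 - b**2)
x = e * np.sin(omega) + np.sin(theta + omega)

# Durations consistent with M_bin = 1.33 M_sun:
A_true = (1.33 * M_sun) ** (-1.0 / 3.0)
t_true = A_true * B * D / (1.0 + A_true * C * x)

rng = np.random.default_rng(3)
t_pert = t_true + rng.normal(0.0, 20.0 * 60.0, (10000, 8))
m_est = ((B * D - t_pert * C * x) / t_pert) ** 3 / M_sun  # per transit
trials = np.median(m_est, axis=1)                         # robust combine
m_med, m_sig = np.median(trials), np.std(trials)
```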
For our initial photodynamical solutions we use a numerical N-body integrator [described in @kostov2013] to solve the equations of motion. For completeness, we briefly outline it here and discuss the modifications we added for diagnosing [[Kepler-413 ]{}]{}. The integrator is an implementation of the SWIFT code[^4] adapted for IDL. Due to the particular behavior of the CBP transits of [[Kepler-413b ]{}]{}, we can fix neither the planetary inclination $i_{p}$ to 90 degrees, nor the ascending node $\Omega_p$ and the initial phase $\phi_{0,p}$ to zero. Unlike the case of Kepler-64b described in [@kostov2013], here we solve numerically for these three parameters. Furthermore, it is not optimal to choose the time of the first transit as the starting point of the numerical integration as [@kostov2013] did. Doing so would introduce an additional parameter – the impact parameter ${\it b_0}$ of the chosen transit; the estimated impact parameters of the individual transits indicated above are too coarse to be used in the photodynamical model. Instead, here we specify initial conditions with respect to the time when the planet is crossing the x-y plane ([$z_p=0$]{}), approximately 3/4 of a planetary period prior to transit 2, i.e. at $t_0 = 2,455,014.465430$ (BJD). This allows us to find the true anomaly of the planet ($\theta_p = 2 \pi - \omega_p$), and the planet’s eccentric and mean anomalies at the reference time. The number of free parameters we solve for is 9: \[$ M_{A}, a_p, e_p, \omega_p, {\it i_p}, \Omega_p, \phi_{0,p}, R_{A}$ and $R_{p}$\].
Restricting the binary mass to the $1\sigma$ range indicated by the scrambled durations, we fit preliminary photodynamical models to the eight transits of [[Kepler-413b ]{}]{}by performing a grid search over these parameters. The quality of the fit is defined as the chi-square value of the observed minus calculated (O-C) mid-transit times of all 8 events. Starting with an initial, coarse time step of 0.1 days, we select the models that reproduce the mid-transit times of each of the observed eight transits to within 0.05 days and also correctly “miss” all other events by more than $R_{A}+R_p$. Next, we refine the grid search by reducing the time step to 0.02 days, and minimize again. The best-fit model is further promoted for a detailed MCMC exploration as described in the next section.
Comprehensive photometric-dynamical analysis {#sec:pd_model}
--------------------------------------------
The [[*Kepler*]{}]{} light curve and radial velocity data for [[Kepler-413 ]{}]{}were further modeled using a comprehensive photometric-dynamical model. This model uses a dynamical simulation, assuming only Newton’s equations of motion and the finite speed of light, to predict the positions of the stars and planet at the observed times [e.g., @doyle2011; @Welsh2012]. The parameters of this simulation are functions of the initial conditions and masses of the three bodies, and are provided by the preliminary simulations described above. These positions are used as inputs – along with radii, limb darkening parameters, fluxes and “third-light” contamination – to a code [@josh11; @pal12] that produces the modeled total flux (appropriately integrated to the [*Kepler*]{} ‘long cadence’ exposure). This flux is compared directly to a subset of the full [*Kepler*]{} data. The radial velocity data of the larger star are compared to the velocities determined by the dynamical simulation at the observed times.
We isolate only the [*Kepler*]{} data within a day of the stellar eclipses or suspected planetary transit crossing events (data involving ‘missing’ events are included as well). Those data, excluding the eclipse features, are divided by a linear function in time in order to detrend the light curve for local astrophysical or systematic features that are unrelated to the eclipses.
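This detrending step amounts to dividing each segment by a straight line fitted to its out-of-event points; a minimal sketch (the function name is ours):

```python
import numpy as np

def detrend_segment(t, flux, in_event):
    """Divide a light-curve segment by a linear trend fitted to the
    points outside the eclipse/transit (mask `in_event` is True inside
    the event), normalizing the local baseline to unity."""
    coeff = np.polyfit(t[~in_event], flux[~in_event], 1)
    return flux / np.polyval(coeff, t)
```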
The model described in this section has 23 adjustable parameters. Three parameters are associated with the radial velocity data: the RV semi-amplitude of star A, $K_A$, the RV offset, $\gamma_A$, and a ‘jitter’ term, $\sigma_{\rm RV}$, that is added in quadrature to the individual RV errors, accounting for otherwise unmodeled sources of systematic error. The initial conditions are provided as instantaneous Keplerian elements of the stellar (subscript [*“bin”*]{}) and planetary (subscript [*“p”*]{}) orbits, defined in the Jacobian scheme: the periods, $P_{bin,p}$, the sky-plane inclinations $i_{bin,p}$, the vectorial eccentricities $e_{bin,p} \cos(\omega_{bin,p})$, $e_{bin,p} \sin(\omega_{bin,p})$, the relative nodal longitude $\Delta \Omega = \Omega_p -\Omega_{bin}$ and the times of barycenter passage $T_{bin,p}$. The latter parameters are more precisely constrained by the data than the mean anomalies; however, they may be related to the mean anomalies, $\eta_{bin,p}$, via $$\begin{aligned}
\frac{P_{bin,p}}{2 \pi} \eta_{bin,p} &=& t_0 - T_{bin,p}+ \frac{P_{bin,p}}{2 \pi} \left[E_{bin,p}-e_{bin,p}\sin(E_{bin,p})\right]\end{aligned}$$ where $E_{bin,p}$ are the eccentric anomalies at barycenter passage, defined by $$\begin{aligned}
\tan\left(\frac{E_{bin,p}}{2}\right) & = & \sqrt{\frac{1-e_{bin,p}}{1+e_{bin,p}}} \tan\left(\frac{\pi}{4}-\frac{\omega_{bin,p}}{2}\right)\end{aligned}$$ Two parameters are the mass ratios between stars and planet, $M_A/M_B$ and $M_p/M_A$. The remaining 7 parameters are related to the photometric model: the density of star A, $\rho_A$, the two radii ratios, $R_B/R_A$ and $R_p/R_A$, the [[*Kepler*]{}]{}-band flux ratio $F_B/F_A$, the linear limb darkening parameter of star A, $u_1$, and the additional flux from contaminating sources, $F_X/F_A$. A final parameter, $\sigma_{\rm LC}$, sets the width of the Gaussian distribution assumed for the photometric residuals.
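Combining the two relations above gives the mean anomaly at the reference epoch $t_0$ directly from the fitted time of barycenter passage; a small sketch, written with the half-angle form $\tan(\pi/4-\omega/2)$ of the eccentric-anomaly relation (angles in radians, times in days; names are ours):

```python
import numpy as np

def mean_anomaly_at_epoch(t0, T, P, e, omega):
    """Mean anomaly eta at epoch t0, given the time of barycenter
    passage T, period P, eccentricity e and argument of periapsis
    omega (Jacobian elements)."""
    # eccentric anomaly at barycenter passage
    E = 2.0 * np.arctan(np.sqrt((1.0 - e) / (1.0 + e))
                        * np.tan(np.pi / 4.0 - omega / 2.0))
    # invert (P / 2 pi) eta = t0 - T + (P / 2 pi) (E - e sin E)
    return (2.0 * np.pi / P) * (t0 - T) + E - e * np.sin(E)
```

For a circular orbit and $t_0 = T$ this reduces to $\eta = \pi/2 - \omega$, i.e. the mean anomaly equals the true anomaly at the plane crossing, as it should.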
We adopted uniform priors in all parameters except the vectorial eccentricities and $F_X/F_A$. For those parameters we enforced uniform priors in $e_{bin,p}$ and $\omega_{bin,p}$ and a Gaussian prior in $F_X/F_A$ with mean $0.08$ and variance $0.0001$. The likelihood of a given set of parameters was defined as $$\begin{aligned}
L & \propto & \prod^{N_{\rm LC}} \sigma_{\rm LC}^{-1} \exp\left[-\frac{\Delta F_i^2}{2 \sigma_{\rm LC}^2}\right] \\ \nonumber
& & \times \prod^{N_{\rm RV}} \left(\sigma_{\rm RV}^2+\sigma_i^2\right)^{-1/2} \exp\left[-\frac{\Delta {\rm RV}_i^2}{2\left( \sigma_{i}^2+\sigma_{\rm RV}^2\right)}\right]\end{aligned}$$ where $\Delta F_i$ is the residual of the $i$th photometric measurement and $\Delta {\rm RV}_i$ is the residual of the $i$th radial velocity measurement with formal error $\sigma_i$.
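In log form, and with the RV jitter added in quadrature, this likelihood can be sketched as (a minimal illustration; the variable names are ours):

```python
import numpy as np

def log_likelihood(res_lc, sig_lc, res_rv, sig_rv, jitter):
    """Log-likelihood combining photometric residuals res_lc (common
    width sig_lc) and RV residuals res_rv with formal errors sig_rv,
    inflated by a jitter term added in quadrature."""
    ll = -res_lc.size * np.log(sig_lc) - np.sum(res_lc**2) / (2.0 * sig_lc**2)
    var = sig_rv**2 + jitter**2
    ll += -0.5 * np.sum(np.log(var)) - np.sum(res_rv**2 / (2.0 * var))
    return ll
```

Note the normalization terms: a large jitter can always flatten the RV residuals, but it is penalized through the $\left(\sigma_{\rm RV}^2+\sigma_i^2\right)^{-1/2}$ prefactor.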
We explored the parameter space with a Differential Evolution Markov Chain Monte Carlo (DE-MCMC) algorithm [@terBraak08]. In detail, we generated a population of 60 chains and evolved it through approximately 100,000 generations. The initial parameter states of the 60 chains were randomly selected from an over-dispersed region in parameter space bounding the final posterior distribution. The first 10% of the links in each individual Markov chain were discarded as burn-in, and the resulting chains were concatenated to form a single Markov chain, after we had confirmed that each chain had converged according to standard criteria, including the Gelman-Rubin convergence statistic and the observation of a long effective chain length in each parameter (as determined from the chain autocorrelation).
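The core of the DE-MCMC proposal is simple: perturb one chain along the difference of two others. A sketch under standard assumptions (the scale factor $\gamma = 2.38/\sqrt{2d}$ is the usual ter Braak default; the names are ours):

```python
import numpy as np

rng = np.random.default_rng(42)

def de_proposal(chains, i, gamma=None, eps=1e-6):
    """Differential-evolution proposal for chain i: add a scaled
    difference of two other randomly chosen chains, plus a small
    random nudge to keep the proposal distribution non-degenerate."""
    n_chains, n_dim = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2.0 * n_dim)
    others = [j for j in range(n_chains) if j != i]
    r1, r2 = rng.choice(others, size=2, replace=False)
    return (chains[i] + gamma * (chains[r1] - chains[r2])
            + eps * rng.standard_normal(n_dim))
```

The proposal is then accepted or rejected with the usual Metropolis ratio; with 60 chains the population itself supplies the proposal covariance, which is what makes DE-MCMC attractive for a correlated 23-parameter posterior.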
The photodynamical fits to the 8 observed transits of the CBP are shown in Figure \[fig:photodyn\_hits\]. We note that our model predicts a ninth transit, very shallow and buried in the noise, labeled as “A” in Figure \[fig:photodyn\_hits\]. For clarity, we label the observed transits with a number, and those either missed or not detected with a letter. We tabulate the results of this analysis in Tables \[tab:pd\_in\] and \[tab:pd\_out\], reporting the median and 68% confidence interval for the marginalized distributions in the model parameters and some derived parameters of interest. The parameters we adopt throughout this paper are the “best-fit” values reported in Tables \[tab:pd\_in\] and \[tab:pd\_out\]. The orbital configuration of the system is shown in Figure \[fig:123\_orbit\]. The orbit of the CBP evolves continuously and, due to precession, is not closed. We note that our best-fit mass for the planet is large for its radius. The expected mass is $M_p\sim16M_\oplus$, using the mass-radius relation of [@weiss13] for $1M_\oplus<M<150M_\oplus$, whereas our model provides $M_p\sim67M_\oplus\pm21M_\oplus$. This suggests that either [[Kepler-413b ]{}]{}is a much denser planet (a mix of rock, metal and gas), or that the mass is even more uncertain than stated, and likely smaller by a factor of 2-3.
We note that the binary orbit reacts to the gravitational perturbation of the planet. As a result, the EB orbital parameters and eclipse times are not constant. The effect, however, is difficult to measure with the available data. Also, the planetary orbit does not complete one full precession period between transits 1 and 8. The precession period for our best-fit model is $\sim4000$ days, in line with the analytic estimate of $\sim4300$ days (for equal-mass stars) based on [@schneider1994]. After transit 8, the transits cease as the planetary orbit precesses away from the favorable transit configuration. The transits will reappear after BJD 2458999 (2020 May 29).
Orbital Stability {#sec:stability}
-----------------
The minimum critical semi-major axis ([@holman99], Eq. 3) for the best-fit parameters of the [[Kepler-413 ]{}]{}system is $a_{crit} = 2.55~a_{bin} = 0.26$ AU. With a semi-major axis that is $\approx37\%$ larger than the critical limit ($a_p = 0.3553$ AU), the orbit of the planet [[Kepler-413b ]{}]{}is in a gravitationally stable region. We note that due to the planet’s non-zero eccentricity, its closest approach to the binary is reduced by a factor of $(1-e_p)$, making the stability criterion tighter – $r_{p,min} = a_p\times(1-e_p) = 0.3168$ AU, closer to the binary than for a zero-eccentricity orbit but still beyond $a_{crit}$.
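The [@holman99] fitting formula behind this estimate is easy to evaluate; a sketch (the mass ratio $\mu \approx 0.40$ used below assumes stellar masses near the best-fit values, roughly $M_A \approx 0.82$ and $M_B \approx 0.54\,M_\odot$, and is our own illustrative input):

```python
def critical_semimajor_axis(mu, e_bin):
    """Holman & Wiegert (1999), Eq. 3: smallest stable circumbinary
    semi-major axis in units of the binary semi-major axis, where
    mu = M_B / (M_A + M_B) and e_bin is the binary eccentricity."""
    return (1.60 + 5.10 * e_bin - 2.22 * e_bin**2 + 4.12 * mu
            - 4.27 * e_bin * mu - 5.09 * mu**2 + 4.61 * e_bin**2 * mu**2)

ratio = critical_semimajor_axis(0.398, 0.0372)   # ~2.55, matching the text
```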
Three-body systems are notorious for exhibiting complex dynamical behavior spurred by mean-motion resonances (MMR). To explore the long-term stability of the [[Kepler-413 ]{}]{}system we have studied its dynamical behavior by utilizing the MEGNO[^5] factor $\langle Y\rangle$ [@cincottasimo2000a; @cincottasimo2000b; @cincottaetal2003], a widely used method for dynamical analysis of multiplanet systems [@Gozdziewski2008; @Hinse2010]. We note that by a stable orbit here we mean an orbit that is stable only up to the duration of the numerical integration, i.e. a quasi-periodic orbit. The time scale we use is sufficient to detect the most important mean-motion resonances. However, the dynamical behavior of the system past the last integration time-step is unknown.
We utilized the MECHANIC software[^6] [@Slonina2012a; @Slonina2012b; @Slonina2014] to calculate MEGNO maps for [[Kepler-413 ]{}]{}, applying the latest MEGNO implementation [@Gozdziewski2001; @Gozdziewski2003; @Gozdziewski2008]. The maps have a resolution of $350 \times 500$ initial conditions in planetary semi-major axis $(a_p)$ and eccentricity $(e_p)$ space, each integrated for 200,000 days (corresponding to $\sim20,000$ binary periods). Quasi-periodic orbits are defined as $|\langle Y\rangle - 2.0| \simeq 0.001$; for chaotic orbits $\langle Y\rangle \rightarrow \infty$ as $t \rightarrow \infty$. The MEGNO map computed for the best-fit parameters of Table \[tab:pd\_out\] is shown in Figure \[fig:megno\]. The cross-hair mark represents the instantaneous osculating Jacobian coordinates of [[Kepler-413b ]{}]{}. Purple (or dark) color indicates a region of quasi-periodic orbits, whereas yellow (or light) color denotes chaotic (and possibly unstable) orbits. The CBP sits comfortably in the quasi-periodic (purple) region of $(a,e)$-space between the 6:1 and 7:1 MMR [not unlike Kepler-64b, see @kostov2013], confirming the plausibility of our solution from a dynamical perspective.
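The MEGNO indicator itself is straightforward to implement. As a self-contained toy (not the MECHANIC setup used here), the sketch below applies the discrete-map form of $\langle Y\rangle$ [@cincottaetal2003] to the Chirikov standard map; quasi-periodic orbits give $\langle Y\rangle \rightarrow 2$, while chaotic ones diverge:

```python
import numpy as np

def megno_standard_map(x, y, K, n_iter):
    """Mean MEGNO <Y> for the standard map
    y' = y + K sin x,  x' = x + y'  (mod 2 pi),
    propagating a tangent vector alongside the orbit."""
    delta = np.array([1.0, 0.0])
    Y_sum = 0.0       # running sum:  sum_k k ln(|delta_k| / |delta_{k-1}|)
    mY = 0.0          # running sum of Y(k) for the time average
    for k in range(1, n_iter + 1):
        c = K * np.cos(x)
        y = y + K * np.sin(x)
        x = (x + y) % (2.0 * np.pi)
        # tangent map: Jacobian of (x, y) -> (x + y + K sin x, y + K sin x)
        delta = np.array([(1.0 + c) * delta[0] + delta[1],
                          c * delta[0] + delta[1]])
        growth = np.linalg.norm(delta)
        Y_sum += k * np.log(growth)
        delta /= growth               # renormalize to avoid overflow
        mY += 2.0 * Y_sum / k         # accumulate Y(k)
    return mY / n_iter
```

A librating orbit around the elliptic fixed point (K = 0.5) settles near $\langle Y\rangle \simeq 2$, while a strongly kicked orbit (K = 7) yields a value growing with the iteration count; this is the same dichotomy used to color the $(a_p, e_p)$ map in Figure \[fig:megno\].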
Discussion {#sec:discussion}
==========
“[*Why Does Nature Form Exoplanets Easily?*]{}”, ponders [@heng12]. Both planetary formation scenarios, core accretion and gravitational collapse, require complex processes at work, even more so in the violent environments of CBPs. Yet the plethora of discovered planets [@burke2013] indicates that planetary formation is ubiquitous. [@mart13] argue that it may in fact be easier to form planetary systems around close binary stars than around single stars, if there is a quiescent, low-turbulence layer present in the mid plane of the CB disks. Unlike disks around single stars, the surface density in a CB disk peaks in such a “dead zone” and, being close to the snow line, provides an ideal site for planetary formation. In addition, [@alex12] has shown that circumbinary disks around binary stars with $a_{bin} < 1$ AU persist longer than disks around single stars, suggesting that formation of CBPs should be commonplace.
The $\Delta i \sim2.5\arcdeg$ misalignment of [[Kepler-413b ]{}]{}is notably larger than that of the other [*Kepler*]{}-discovered CBPs (with an average of $\sim0.3\arcdeg$). It is, however, comparable to the mutual inclination between Kepler-64b and its host EB, the only known quadruple stellar system with a CBP. It is also comparable to the mutual orbital inclinations of $1\arcdeg$ – $2.3\arcdeg$ reported for the [*Kepler*]{} and [*HARPS*]{} multiplanet systems orbiting single stars, and to the Solar System value of $2.1\arcdeg$ – $3.1\arcdeg$, including Mercury (Fabrycky et al. 2012; Fang and Margot, 2012; Figueira et al., 2012; Lissauer et al., 2011).
[@quill13] argue that one plausible scenario responsible for the excitation of planetary inclinations is collisions with embryos. The authors note that measured correlations between planetary mass and inclination can provide strong clues for this scenario. While planetary masses are difficult to measure, photodynamical models of slightly misaligned CBPs like [[Kepler-413b ]{}]{}can provide an important avenue to test this hypothesis by providing constraints on masses and inclinations. Additionally, according to [@rapp13] up to 20% of close binaries have a tertiary stellar companion, based on extrapolation from eclipse time variations (ETVs) measured for the entire [[*Kepler*]{}]{} EB catalog. [@eggl08] find that $\sim25\%$ of all multiple systems with a solar-type star are triple or of higher order. A tertiary companion on a wide orbit can be responsible for a complex dynamical history of the binary system involving Kozai cycles with tidal friction [@kozai62; @fab07b; @kis98; @eggl01; @pej13].
A robust correlation between the occurrence rate of planets and (single) host star metallicities has been established over the past 10 years [@mayor11; @howard13]. While small planets are detected around stars spanning a wide metallicity range, giant planets [$R>4R_{Earth}$, @howard13] are preferentially found in orbits around metal-rich stars. Such a dichotomy naturally originates from the core-accretion scenario for planet formation, with the caveat that in-situ formation may be more appropriate to describe the presence of low-mass planets close to their star [@howard13]. It is interesting to note that 7 of the [[*Kepler*]{}]{} CB planets are gas giants with $R\ge4.3R_{Earth}$ (the only exception being Kepler-47b), but all 7 host stellar systems are deficient in metals compared to the Sun.
Eclipsing binary systems have long been proposed as well-suited candidates for the discovery of transiting planets due to the favorable orbital orientation of the stellar system. However, EBs may not be as favorable as generally thought. Given the correct orientation, planets orbiting single stars will transit at every inferior conjunction. As we have shown here, and as also discussed by [@schneider1994], misaligned CBPs may either transit or fail to transit depending on their instantaneous orbital configuration. If the configuration is favorable, one can observe several consecutive transits. Otherwise there may be a few, widely-separated transits or even only a single transit. A trivial case is no transits at all during the course of the observations, where the planetary orbit has not yet precessed into the favorable transit geometry and the first “good hit” may be approaching; even a very misaligned system will occasionally transit. Thus, a non-detection of tertiary transits in the light curve of an EB does not rule out the possibility of observing a transiting CBP in the future. This statement is trivially obvious for planets with periods much longer than the duration of observations. However, as this work has illustrated, the statement also applies to short-period planetary orbits with non-zero mutual inclinations.
Such photodynamical effects may further affect the deduced occurrence rate of CBPs, even after accounting for detection efficiency, systematic effects, etc. Aligned systems benefit from a strong selection effect, but many systems (potentially a “silent majority” of CBPs) could be misaligned and precessing, and [[Kepler-413b ]{}]{}will be the prototype of that class of objects.
“[*...The existence of planets in these systems \[CBP\]...*]{}”, [@paar12] note, “[*...baffles planet formation theory...*]{}”. The facts that the confirmed CBPs orbit so close to the theoretical limit for dynamical stability, and that shorter-period EBs typically host longer-period CBPs (further away from the critical limit), hint at an interesting dynamical history that can be directly addressed by finding more CB systems. Future additions to the still small family of CBPs will add important new insight into our understanding of these remarkable objects. Or, perhaps more interestingly, the new discoveries will baffle the theoretical framework even further.
Stellar Insolation {#sec:HZ}
------------------
Our best-fit photodynamical model places [[Kepler-413b ]{}]{}on a ${0.355}$ AU orbit around two stars with effective temperatures of $T_A = 4700$K, estimated from SOPHIE, and $T_B = 3460$K, derived from the temperature ratio $T_B/T_A$ from ELC (see Table \[tab\_parameters\]). The combined incident flux $S_{tot}=S_A+S_B$ due to the two stars A and B at the orbital location of [[Kepler-413b ]{}]{}is shown in Figure \[fig:irradiance\]. It varies from a minimum of $\sim1.64~S_{\star}$ to a maximum of $\sim3.86~S_{\star}$ (where $S_{\star}$ is the mean Solar constant of 1368 W m$^{-2}$) on two different timescales (the stellar and planetary periods), with an average of $\sim2.42~S_{\star}$. Following [@kane13], we calculate the effective temperature of the EB, $T_{eff, AB}$, as that of a source with an energy flux similar to that of the two stars combined. From Wien’s displacement law, and using the combined blackbody radiation of the two stars, we estimate $T_{eff, AB}\sim4500$ K. Following the cloud-free models of [@kopp13], the inner edge of the habitable zone (“runaway greenhouse”) for the [[Kepler-413 ]{}]{}system is at an incident stellar flux $S_{inner}=0.91~S_{\star}$ (red, or grey line in Figure \[fig:irradiance\]); the outer edge (“maximum greenhouse”) is at $S_{outer}=0.28~S_{\star}$ (blue, or dark line in Figure \[fig:irradiance\]). [[Kepler-413b ]{}]{}is slightly closer to its host stars than the inner edge of the habitable zone. We note that the inner edge of the habitable zone of the [[Kepler-413 ]{}]{}system for dry desert planets is at $\sim0.32~AU$ (Equation 12, [@zsom13]), or $\sim2.71~S_{\star}$, for a surface albedo of 0.2 and 1% relative humidity. This limiting case places [[Kepler-413b ]{}]{}($a_p=0.3553~AU$) in the dry desert habitable zone for most of its orbit.
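As a rough cross-check of these numbers, the combined insolation can be estimated by treating both stars as blackbodies seen from the planet's orbital distance (a sketch; the stellar radii below are our own assumed inputs, close to but not taken from the fitted values):

```python
# Assumed inputs (solar units): radii are illustrative guesses;
# temperatures are those quoted in the text.
R_A, T_A = 0.78, 4700.0
R_B, T_B = 0.48, 3460.0
T_SUN = 5772.0   # nominal solar effective temperature

def insolation(a_au):
    """Combined bolometric flux from both stars at a_au AU from the
    barycenter, in units of the mean Solar constant S_star."""
    L_A = R_A**2 * (T_A / T_SUN)**4   # Stefan-Boltzmann, in L_sun
    L_B = R_B**2 * (T_B / T_SUN)**4
    return (L_A + L_B) / a_au**2
```

At $a_p = 0.3553$ AU this gives roughly $2.4~S_{\star}$, consistent with the orbit-averaged value quoted above; the factor-of-two swing in Figure \[fig:irradiance\] comes from the changing planet-star distances, which this static estimate ignores.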
The flux variations experienced by the CBP, coupled with the peculiar behavior of the planetary obliquity described next may result in very interesting and complex weather and climate patterns on [[Kepler-413b ]{}]{}and similar CBPs.
Cassini States {#sec:Cassini}
--------------
Next we shall discuss how the quick orbital precession, which is highly constrained by the transit fits, should affect the spin orientation of [[Kepler-413b ]{}]{}. Instantaneously, each of the stars causes a torque on the rotational bulge of the planet, but over one EB orbit, and even over one orbit of the CBP, this torque causes little reorientation of the planet. Over many orbits, however, the effect of this torque adds coherently. If we replace the stars with a point mass at their barycenter, the small-obliquity precession angular frequency of the planetary spin would be (e.g., @fab07a): $$\alpha = \frac{ k_{2,p} } { c_p } \frac{M_A+M_B}{M_p} (1-e_p^2)^{-3/2} (R_p / a_p)^3 S_p,$$ where $k_{2,p}$ is the apsidal motion constant (half the Love number) of the CBP, $c_p$ is the normalized moment of inertia, and $S_p$ is the spin angular frequency of the planet.
In the presence of quick orbital precession, the dynamics become much richer, as Cassini states appear [@ward04; @ham04; @fab07a; @levrard07]. These states are fixed points of the spin dynamics in which the spin and orbit precess at the same rate around the total angular momentum. The effect is thus a 1:1 secular resonance between the orbital precession and the spin precession. The orbital precession rate, $g$, is known from the best-fitting model: $g=0.57$ radians/year. Taking a 1-day rotation period (i.e. $S_p = 2\pi$ radians/day) for [[Kepler-413b ]{}]{}, $k_{2,p}=0.1$, $c_p=0.08$, and assuming $M_p=15 M_\oplus$, we have $\alpha = 1.0$ radians/year, very close to resonance with $g$. Even precession trajectories that are not in these states are affected by them, as they must navigate the resonant island. Thus when $\alpha \approx g$, the obliquity can vary by many degrees on a timescale somewhat longer than the precession timescale.
However, the value of $\alpha$ for the case of [[Kepler-413b ]{}]{}is very uncertain due to the poorly constrained parameters, particularly $M_p$ and $S_p$. For the best-fitting $M_p$ of $\sim60 M_\oplus$ (and other parameters as assumed above), the spin would travel around Cassini State 2 (see [@peale69] for the numbering), the state in which the spin responds to torques more slowly than the orbit precesses. In that case, the spin feels the precession-averaged effect of the EB orbit, and so its spin-precession pole is close to the orbit normal of the binary (the dominant angular momentum). This is the case of the Earth’s moon, which has faster orbital precession than spin precession, and is tidally damped to Cassini State 2. If [[Kepler-413b ]{}]{}instead has a low mass, $M_p<10 M_\oplus$ (i.e., is quite puffy), it could have a higher natural spin-precession frequency $\alpha$. In that case, it is possible for the planet to be in a tipped-over Cassini State 2, in which a high obliquity (near $90\arcdeg$) lessens the torque from the binary star, allowing the planetary precession to continue resonating with the orbital precession. However, it is more likely that it would travel around Cassini State 1, which is the normal precession around an orbit, slightly modified by the (relatively slow) precession of that orbit. Finally, for the Neptune-like mass of $15 M_\oplus$ assumed above, both Cassini State 1 and Cassini State 2 would be considerably displaced from the orbit normal, and either a large obliquity or large obliquity fluctuations ($\sim 30\arcdeg$) would result.
It is beyond the scope of this work to calculate the obliquity evolution of [[Kepler-413b ]{}]{}in detail. We expect, however, that it would give interesting boundary conditions for climate models [@lang07]. Another consideration is that the $\alpha$ value would have changed as the planet cooled, as that contraction would result in changes in $R_p$, $k_{2,p}$, $c_p$, and $S_p$; the scanning of $\alpha$ could cause trapping into a Cassini resonance [@winn05]. We expect that at the orbital distance of [[Kepler-413b ]{}]{}, tides would be too weak to cause spin alignment, but we note that in other systems such alignment would bring the planetary spin to a Cassini state rather than standard spin-orbit locking [@fab07a].
Finally, we suggest that the spin precession of a planet may actually be observable for CBP systems. [@carter10] pointed out that a precessing planet will display a time-varying surface area to a transit observer, due to the oblateness of the planet changing orientations. A Saturn-like oblateness with a $30\arcdeg$ obliquity results in a few-percent change in depth over the precession cycle. The radii ratios in some CBP systems are constrained by [*Kepler*]{} photometry at the $\sim1\%$ level; thus, variations at this level might be detectable. This is considerably more observable than the transit shape signature of oblique planets [@seager02; @barnes03].
Conclusions {#sec:conclusions}
===========
We report the discovery of a $R_p = {4.347}\pm{0.099}~R_\oplus$ planet transiting the primary star of [[Kepler-413 ]{}]{}. The system consists of two K+M stars that eclipse each other every 10.116146 days. Due to the small misalignment ($\Delta i\sim2.5\arcdeg$) between the binary and CBP orbital planes, the latter precesses and the planet often fails to transit the primary star. The CBP revolves around the EB every $\sim66$ days on an orbit with $a_{p} = {0.355}$ AU and $e={0.118}\pm{0.002}$. The orbital configuration of the system is such that we observe a set of three transits occurring $\sim66$ days apart, followed $\sim800$ days later by five more transits also separated by $\sim66$ days from each other. We note that, among the known transiting CBPs, [[Kepler-413b ]{}]{}is the only one whose orbital eccentricity exceeds that of its host binary.
Spectroscopic measurements established the target as a single-lined EB, and provided its mass function, eccentricity and argument of periastron. Photometric observations identified a nearby companion (“third light”) to [[Kepler-413 ]{}]{}inside the central [[*Kepler*]{}]{} pixel, and addressed its flux contamination of the target’s light curve [@Kostov2014b]. Based on statistical estimates, we propose that the companion star is gravitationally bound to the EB, making [[Kepler-413b ]{}]{}a CBP in a triple stellar system.
Our best-fit model places [[Kepler-413b ]{}]{}slightly closer to its host stars than the inner edge of the extended habitable zone, with the bolometric insolation at the location of the planet’s orbit varying between $\sim1.75~S_{\star}$ and $\sim3.9~S_{\star}$ on multiple timescales (where $S_{\star} = 1368$ W m$^{-2}$ is the mean Solar constant). The planet is, however, in the dry desert habitable zone for most of its orbit. Also, the peculiar orbital configuration of the system indicates that [[Kepler-413b ]{}]{}may be subject to Cassini-state dynamics. Depending on the angular precession frequency of the planet, its spin and orbital precession rates could be commensurate. This suggests that [[Kepler-413b ]{}]{}may experience obliquity fluctuations of dozens of degrees on precession timescales ($\sim11$ years) and complex seasonal cycles with interesting climate patterns.
The transits of a CBP provide precise measurements of the stellar and planetary sizes and of the masses of the host binary star. Our discovery adds to the growing knowledge about CBPs: their radii, masses, occurrence frequency around different types of stars, when they formed (first versus second generation) and even whether the concept of habitability can be extended beyond single-star planetary systems. The results reported here can be applied to studies of the formation and evolution of protoplanetary disks and planetary systems in multiple-stellar systems.
This research was performed in partial fulfillment of the requirements of the PhD of V.B.K. at Johns Hopkins University. The authors gratefully acknowledge everyone who has contributed to the [[*Kepler*]{}]{} Mission, which is funded by NASA’s Science Mission Directorate. We acknowledge conversations with Nicolas Crouzet, Holland Ford, K. Go[ź]{}dziewski, Nader Haghighipour, Amy McQuillan, Colin Norman, Rachel Osten, Neill Reid, Jean Schneider, M. S[ł]{}onina, and Martin Still. The authors thank the referee for the helpful comments and suggestions.
This research used observations made with the SOPHIE instrument on the 1.93-m telescope at Observatoire de Haute-Provence (CNRS), France, as part of programs 12B.PNP.MOUT and 13A.PNP.MOUT. This research made use of the SIMBAD database, operated at CDS, Strasbourg, France; data products from the Two Micron All Sky Survey (2MASS), the Digitized Sky Survey (DSS), and the NASA exoplanet archive NexSci[^7]; and source code for transit light curves (Mandel and Agol 2002). Numerical computations presented in this work were partly carried out using the SFI/HEA Irish Centre for High-End Computing (ICHEC, STOKES) and the PLUTO computing cluster at KASI. Astronomical research at Armagh Observatory is funded by the Department of Culture, Arts and Leisure (DCAL). V.B.K. and P.R.M. received funding from NASA Origins of Solar Systems grant NNX10AG30G, and NESSF grant NNX13AM33H. W.F.W. and J.A.O. gratefully acknowledge support from the National Science Foundation via grant AST-1109928, and from NASA’s [[*Kepler*]{}]{} Participating Scientist Program (NNX12AD23G) and Origins of Solar Systems Program (NNX13AI76G). T.C.H acknowledges support by the Korea Research Council for Science and Technology (KRCF) through the Young Scientist Research Fellowship Program grant number 2013-9-400-00. T.M. acknowledges support from the European Research Council under the EU’s Seventh Framework Programme (FP7/(2007-2013)/ ERC Grant Agreement No. 291352) and from the Israel Science Foundation (grant No. 1423/11).
Adams, E. R.; Ciardi, D. R.; Dupree, A. K. et al., 2012, , 144, 42 Alexander, R., 2012, , 757, 29 Baranne, A., Queloz, D., Mayor, M., et al. 1996, , 119, 373 Baranec, C., et al. 2013, J. Vis. Exp. (72), e50021, doi:10.3791/50021 Barnes, J. W. and Fortney, J. J., 2003, , 588, 545 Borkovits, T., Derekas, A., Kiss, L. L. et al. 2012, , 428,1656 Bouchy, F., H[é]{}brard, G., Udry, S., et al. 2009, , 505, 853 Burke, C. J., Gaudi, B. S., DePoy, D. L., and Pogge, R. W. 2006, , 132, 210 Burke, C. J.; Bryson, S.; Christiansen, J., et al. 2013, AAS, 221, 216 Carter, J. A.; Winn, J. N., 2010, , 716, 850 Carter, J. A., [*et al.*]{}, 2011, Science, 331, 562 Cincotta, P. M., Sim[ó]{}, C. 2000, , 147, 205 Cincotta, P. M., Sim[ó]{}, C. 2000, CMDA, 73, 195 Cincotta, P. M., Giordano, C. M., Sim[ó]{}, C. 2003, Physica D, 182, 151 Clanton, C., 2013, , 768, 15 Claret, A. and Bloemen, S., 2011, , 529, 75 Demarque, P., Woo, J-H, Kim, Y-C, and Yi, S. K. 2004, , 155, 667 Díaz, R. F., Damiani, D., Deleuil, M., et al. 2013, , 551, L9 Doyle, L. R.; Carter, J. A., Fabrycky, D. C., et al. 2011, Science, 333, 6049 Duquennoy, A. and Mayor, M., 1991, , 248, 485 Eggleton, P. P. and Kiseleva-Eggleton, L., 2001, , 562, 1012 Eggleton, P. P. and Tokovinin, A. A., 2008, , 389, 869 Fabrycky, D. C. and Tremaine, S., 2007, , 669, 1289 Fabrycky, D. C.,Johnson, E. T.; Goodman, J., 2007 , 665, 754 Foucart, F. and Lai, D., 2013, 764, 106 Gilliland, R. L.; Chaplin, W. J.; Dunham, E. W. et al., 2011, , 197, 6 Go[ź]{}dziewski, K.,Bois, E., Maciejewski, A. J., and Kiseleva-Eggleton, L. 2001, , 378, 569 Go[ź]{}dziewski, K., 2003, , 398, 315 Go[ź]{}dziewski, K., Breiter, S., Borczyk, W., 2008, , 383, 989 Hamilton, D. P. and Ward, W.R., 2004, , 128, 2510 H[é]{}brard, G., Bouchy, F., Pont, F., et al. 2008, , 488, 763 Hébrard, G., Amenara, J.-M, Santerne , A., et al. 2013, , 554, A114 Heng, K, 2012, eprint arXiv:1304.6104 Hilditch, R. W. 2001, An Introduction to Close Binary Stars, by R. W. Hilditch, pp. 392. 
ISBN 0521241065. Cambridge, UK: Cambridge University Press, March 2001. Hinse, T. C., Christou, A. A., Alvarellos, J. L. A., Go[ź]{}dziewski, K., 2010, MNRAS, 404, 837 Howard, A. W., 2013, Science, 340, 572 Holman, M. J.; Wiegert, P. A., 1999, , 117, 621 Jenkins, J. M., et al., 2010, , 713, 2 Jenkins, J. M., et al., 2010, , 713, 87 Kane, S. .R and Hinkel, N. R., 2013, , 762, 7 Kinemuchi, K.; Barclay, T.; Fanelli, M. et al. 2012, , 124, 963 Kirk, B. et al. 2014, in prep. Kiseleva, L. G.; Eggleton, P. P.; Mikkola, S., 1998, , 300, 292 Kopparapu, R. K.; Ramirez, R.; Kasting, J. F. et al. 2013, , 765, 2 Kostov, V. B.; McCullough, P. R.; Hinse, T. C., et al. 2013, , 770, 52 Kostov, V. B. et al., 2014, in prep. Kov[á]{}cs, G., Zucker, S., and Mazeh, T. 2002, , 391, 369 Kozai, Y. 1962, , 67, 591 Lagarde, N., Decressin, T., Charbonnel, C., et al. 2012, , 543, A108 Langton, J. and Laughlin, G., 2007, , 657, 113 Lawrence, A., et al.. 2007, , 379, 1599 Levrard, B.; Correia, A. C. M.; Chabrier, G. et al. 2007, , 462,5 Lillo-Box, J.; Barrado, D.; and Bouy, H., 2012, , 546, 10 Lissauer, J. J., et al. 2011, , 197, 8 Lissauer, J. J., Marcy, G. W., Rowe, J. F., et al. 2012, , 750, 112 Loeb, A., and Gaudi, B. S. 2003, , 588, L117 Mandel, K., and Agol, E. 2002, , 580, L171 Martin, R. G.; Armitage, P. J.; Alexander, R. D., 2013, , 773, 74 Marzari, F.; Thebault, P.; Scholl, H., et al., 2013, , 553, 71 Matijevic, G. et al., 2012, , 143,123 Mazeh, T., and Faigler, S. 2010, , 521, L59 Mayor, M. et al., 2011, eprint arXiv:1109.2497 McQuillan, A., Aigrain, S. and Mazeh, T. 2013, , 432, 1203 McQuillan, A., Mazeh, T. and Aigrain, S. 2013, , 775, 11 Meixner, M., Smee, S., Doering, R. L., et al., 2010, , 122, 890 Meschiari, S., 2012, , 752, 71 Meschiari, S., 2012, , 761, 7 Meschiari, S., 2013, eprint arXiv:1309.4679 Mikkola, S., Innanen, K., 1999, CMDA, 74, 59 Murray, C. D., and Correia, A. C. M. 2011, Exoplanets, edited by S. Seager. Tucson, AZ: University of Arizona Press, 2011, 526 pp. 
ISBN 978-0-8165-2945-2., p.15-23 Nutzman, P. A., Fabrycky, D. C., and Fortney, J. J. 2011, , 740, L10 Orosz, J. A., Welsh, W. F., Carter, J. A., et al. 2012, Science, 337, 1511 Orosz, J. A., Welsh, W. F., Carter, J. A., et al., 758, 87 Paardekooper, S.-J.; Leinhardt, Z. M.; Thébault, P.; Baruteau, C., 2012, , 754, 16 P[á]{}l, A., 2012, , 420, 1630 Peale, S. J., 1969, , 74, 483 Pejcha, O.; Antognini, J. M.; Shappee, B. J.; Thompson, T. A., 2013, , tmp.2074 Pepe, F., Mayor, M., Galland, F., et al. 2002, , 388, 632 Pickles, A. J. 1998, , 110, 863 Pr[š]{}a, A., Batalha, N., Slawson, R. W., et al. 2011, , 141, 83 Pierens, A. and Nelson, R. P., 2007, , 472, 993 Pierens, A. and Nelson, R. P., 2008, , 483, 633 Pierens, A. and Nelson, R. P., 2008, , 482, 333 Pierens, A. and Nelson, R. P., 2008, , 478, 939 Pierens, A. and Nelson, R. P., 2013, , 556, 134 Quillen, A. C.; Bodman, E.; Moore, A., 2013, , 2126 Quintana, E. V. and Lissauer, J. J., 2006, Icarus, 185, 1 Rafikov, R., 2013, , 764, 16 Raghavan, D.; McAlister, H. A.; Henry, T. J. et al., 2010, , 191, 1 Rappaport, S.; Deck, K.; Levine, A., et al. 2013, , 768, 33 Rybicki, G.B. and Lightman, A. P., 1985, [*John Wiley & Sons, ISBN-10: 0471827592*]{} Santerne, A., Bonomo, A., S., Hébrard, G., et al. 2011, , 536, A70 Schneider, J., and Chevreton, M. 1990, , 232, 251 Schneider, J., 1994, [*P&SS*]{}, 42, 539 Schwamb, M. E., [Orosz]{}, J. A., [Carter]{}, J. A. et al. 2013, , 768, 127 Seager, S. and Hui, L., 2002, , 574, 1004 Seager, S. and Mallén-Ornelas, G., 2003, , 585, 1038 Silva, A. V. R. 2003, Bulletin of the Astronomical Society of Brazil, 23, 15 Silva-Valio, A. 2008, , 683, L179 Slawson, R. W., [Pr[š]{}a]{}, A. [Welsh]{}, W. F. et al., 142, 160 S[ł]{}onina, M., Go[ź]{}dziewski, K., Migaszewski, C., Rozenkiewicz, A., 2012, ArXiv eprints: 2012arXiv1205.1341S, submitted to S[ł]{}onina M., Go[ź]{}dziewski K., Migaszewski C., 2012, in F. Arenou and D. 
Hestroffer ed., Orbital Couples: Pas de Deux in the Solar System and the Milky Way (arXiv:1202.6513v1,in print) Mechanic: a new numerical MPI framework for the dynamical astronomy S[ł]{}onina, M., Go[ź]{}dziewski, K., Migaszewski, C., Rozenkiewicz, A., 2014, submitted to [*New Astronomy*]{} Still, M. and Barclay, T. 2012, Astrophysics Source Code Library, record ascl:1208.004 ter Braak, C.J.F., and Vrugt, J.A., 2008, [*Statistics and Computing*]{} 16, 239 Tokovinin, A. A., 1993, Astronomy Letters, 23, 6 Torrence, C. and G.P. Compo, 1998, Bull. Amer. Met. Soc., 79, 61-78 van Kerkwijk, M. H., Rappaport, S. A., Breton, R. P., et al. 2010, , 715, 51 Ward, W.R. and Hamilton, D. P. 2004, , 128, 2501 Weiss, L. M.; Marcy, G. W.; Rowe, J. F. et al., 2013, , 768, 1 Welsh, W. F.; Orosz, J. A.; Carter, J. A., et al. 2012, Nature, 481, 475 Welsh, W. F. et al., 2014, in prep Winn, J. N. and Holman, M., 2005, , 628, 159 Wright, J. T., Fakhouri, O., Marcy, G. W., et al. 2011, , 123, 412 Zsom, A., Seager, S., de Wit, J., Stamenkovic, V. 2013, , 778, 2
[llllll]{} & & & & &\
Parameter & Symbol & Value & Uncertainty (1$\sigma$) & Unit & Note\
Orbital Period & $P_{bin}$ & 10.116146 & 0.000001 & d & [*Kepler*]{} photometry\
Epoch of primary eclipse & $T_{prim}$ & 2454972.981520 & - & BJD & Pr[š]{}a et al. (2011)\
Epoch of secondary eclipse & $T_{sec}$ & 2454977.999 & 0.001 & BJD & Pr[š]{}a et al. (2011)\
Epoch of Periastron passage & $T_{0}$ & 2454973.230 & 0.023 & BJD & SOPHIE\
Velocity semi-amplitude & $K_1$ & 43.489 & 0.085 & km s$^{-1}$ & SOPHIE\
Velocity offset & $\gamma$ & -27.78& 0.05 & km s$^{-1}$ & SOPHIE\
Argument of Periapsis & $\omega_{bin}$ & 279.54$^\dagger$$^\dagger$ & 0.86 & deg & SOPHIE\
Eccentricity & $e_{bin}$ & 0.0372 & 0.0017 & & SOPHIE\
Orbital Inclination & $i_{bin}$ & 87.3258 & 0.0987 & deg & [*Kepler*]{} photometry$^\dagger$\
Normalized Semimajor Axis & $a_{bin}/R_A$ & 27.5438 & 0.0003 & & [*Kepler*]{} photometry$^\dagger$\
Fractional Radius & $R_B/R_A$ & 0.5832 & 0.0695 & & [*Kepler*]{} photometry$^\dagger$\
Temperature of Star A & $T_A$ & 4700 & - & K & Spectroscopic\
Temperature ratio & $T_B/T_A$ & 0.7369 & 0.0153 & & [*Kepler*]{} photometry$^\dagger$\
Limb-Darkening Coeff. of Star A & $x_A$ & 0.3567 & 0.0615 & & [*Kepler*]{} photometry$^\dagger$\
$V \sin i$ of Star A & $V \sin i$ & 5 & 2 & km s$^{-1}$ & SOPHIE\
Fe/H of Star A & $[Fe/H]$ & -0.2 & - & & NexSci$^\dagger$$^\dagger$$^\dagger$\
Gravity of Star A & ${\mbox{$\log g$}}_A$ & 4.67 & - & & NexSci\
---------------------- -------------- ------------------
BJD$_{\rm UTC}$ RV $\pm$$1\,\sigma$
$-2\,400\,000$ (kms$^{-1}$) (kms$^{-1}$)
56180.42595 $-59.89$ 0.20
56184.39404 15.59 0.19
56186.44561 $-14.52$ 0.09
56187.47375 $-43.73$ 0.10
56192.42495 $-16.21$ 0.16
56195.40345$\dagger$ 9.99 0.25
56213.36121 $-0.37$ 0.17
56362.67907 $-56.59$ 0.16
56401.55880 $-71.26$ 0.14
56403.56461 $-47.72$ 0.27
56404.62680 $-21.87$ 0.19
---------------------- -------------- ------------------
: Measured radial velocities. \[tab\_RV\]
\
$\dagger$: measurement corrected for sky background pollution.
Index Parameter Name Best-fit 50% 15.8% 84.2%
------- ------------------------------------------------------------------ -------------------------------------------- --------------------------------------------- --------------------------------------------- ---------------------------------------------
[*Mass parameters*]{}
0 RV Semi-Amplitude Star A, $K_A$ (km s$^{-1}$) $ 43.42$ $ 43.49 $ $- 0.16$ $+ 0.19$
1 Mass ratio, Star B, $M_B/M_A$ $ 0.6611$ $ 0.6592 $ $- 0.0035$ $+ 0.0034$
2 Planetary mass ratio, $M_p/M_A$ ($\times 1000$) $ 0.245$ $ 0.186 $ $- 0.078$ $+ 0.078$
[*Stellar Orbit*]{}
3 Orbital Period, $P_{bin}$ (day) $ 10.1161114$ $ 10.1161185 $ $- 0.0000101$ $+ 0.0000099$
4 Time of Barycentric Passage, $t_{bin}-2455000$ (BJD) $ 8.34898$ $ 8.34902 $ $- 0.00024$ $+ 0.00024$
5 Eccentricity Parameter, ${e_{bin}} \sin(\omega_{bin})$ $ -0.0359$ $ -0.0360 $ $- 0.0023$ $+ 0.0022$
6 Eccentricity Parameter, ${e_{bin}} \cos(\omega_{bin})$ $ 0.006169$ $ 0.006166 $ $- 0.000037$ $+ 0.000038$
7 Orbital Inclination, $i_{bin}$ (deg) $ 87.332$ $ 87.301 $ $- 0.060$ $+ 0.050$
[*Planetary Orbit*]{}
8 Orbital Period, $P_p$ (day) $ 66.262$ $ 66.269 $ $- 0.021$ $+ 0.024$
9 Time of Barycentric Passage, $t_p-2455000$ (BJD) $ 96.64$ $ 96.57 $ $- 0.17$ $+ 0.16$
10 Eccentricity Parameter, $\sqrt{e_p} \sin(\omega_p)$ $ 0.3426$ $ 0.3435 $ $- 0.0033$ $+ 0.0031$
11 Eccentricity Parameter, $\sqrt{e_p} \cos(\omega_p)$ $ -0.027$ $ -0.022 $ $- 0.013$ $+ 0.014$
12 Orbital Inclination, $i_p$ (deg) $ 89.929$ $ 89.942 $ $- 0.016$ $+ 0.024$
13 Relative Nodal Longitude, $\Delta \Omega_p$ (deg) $ 3.139$ $ 3.169 $ $- 0.064$ $+ 0.080$
[*Radius/Light Parameters*]{}
14 Linear Limb Darkening Parameter, $u_A$ $ 0.599$ $ 0.643 $ $- 0.036$ $+ 0.036$
15 Density of Star A, $\rho_A$ (g cm$^{-3}$) $ 1.755$ $ 1.799 $ $- 0.049$ $+ 0.066$
16 Radius Ratio, Star B, $R_B/R_A$ $ 0.624$ $ 0.650 $ $- 0.032$ $+ 0.043$
17 Planetary Radius Ratio, $R_p/R_A$ $ 0.0514$ $ 0.0517 $ $- 0.0013$ $+ 0.0013$
18 Stellar Flux Ratio, $F_B/F_A$ ($\times 100$) $ 5.90$ $ 6.40 $ $- 0.76$ $+ 1.05$
[*Relative Contamination, $F_{\rm cont}/F_A$*]{} ($\times 100$)
19 All Seasons $ 7.6$ $ 8.0 $ $- 1.0$ $+ 1.0$
[*Noise Parameter*]{}
20 Long Cadence Relative Width, $\sigma_{\rm LC}$ ($\times 10^5$) $ 67.78$ $ 67.76 $ $- 0.53$ $+ 0.54$
[*Radial Velocity Parameters*]{}
21 RV Offset, $\gamma$ (km s$^{-1}$) $ -27.784$ $ -27.810 $ $- 0.113$ $+ 0.098$
22 RV Jitter, $\sigma_{\rm RV}$ (km s$^{-1}$) $ 0.01$ $ 0.17 $ $- 0.11$ $+ 0.20$
: Model parameters for the photometric-dynamical model. We adopt the “best-fit” values as the system’s parameters. The reference epoch is $t_0 =2,455,014.46543$ (BJD). \[tab:pd\_in\]
[|l|llll|]{} Parameter & Best-fit & 50% & 15.8% & 84.2%\
[*Bulk Properties*]{} & & & &\
Mass of Star A, $M_A$ ($M_\odot$) & $ 0.820$ & $ 0.830 $ & $- 0.014$ & $+ 0.015$\
Mass of Star B, $M_B$ ($M_\odot$) & $ 0.5423$ & $ 0.5472 $ & $- 0.0073$ & $+ 0.0081$\
Mass of Planet b, $M_p$ ($M_\oplus$) & $ 67.$ & $ 51. $ & $- 21.$ & $+ 22.$\
Radius of Star A, $R_A$ ($R_\odot$) & $ 0.7761$ & $ 0.7725 $ & $- 0.0096$ & $+ 0.0088$\
Radius of Star B, $R_B$ ($R_\odot$) & $ 0.484$ & $ 0.502 $ & $- 0.021$ & $+ 0.027$\
Radius of Planet p, $R_p$ ($R_\oplus$) & $ 4.347$ & $ 4.352 $ & $- 0.099$ & $+ 0.099$\
Density of Star A, $\rho_A$ (g cm$^{-3}$) & $ 1.755$ & $ 1.799 $ & $- 0.049$ & $+ 0.066$\
Density of Star B, $\rho_B$ (g cm$^{-3}$) & $ 4.77$ & $ 4.32 $ & $- 0.63$ & $+ 0.58$\
Density of Planet, $\rho_p$ (g cm$^{-3}$) & $ 3.2$ & $ 2.4 $ & $- 1.0$ & $+ 1.0$\
Gravity of Star A, $\log g_A$ (cgs) & $ 4.5721$ & $ 4.5811 $ & $- 0.0086$ & $+ 0.0108$\
Gravity of Star B, $\log g_B$ (cgs) & $ 4.802$ & $ 4.774 $ & $- 0.046$ & $+ 0.036$\
[*Orbital Properties*]{} & & & &\
Semimajor Axis of Stellar Orbit, $a_{bin}$ (AU) & $ 0.10148$ & $ 0.10185 $ & $- 0.00052$ & $+ 0.00057$\
Semimajor Axis of Planet, $a_p$ (AU) & $ 0.3553$ & $ 0.3566 $ & $- 0.0018$ & $+ 0.0020$\
Eccentricity of Stellar Orbit, $e_{bin}$ & $ 0.0365$ & $ 0.0366 $ & $- 0.0021$ & $+ 0.0023$\
Argument of Periapse Stellar Orbit, $\omega_{bin}$ (Degrees) & $ 279.74$ & $ 279.71 $ & $- 0.58$ & $+ 0.62$\
Eccentricity of Planetary Orbit , $e_p$ & $ 0.1181$ & $ 0.1185 $ & $- 0.0017$ & $+ 0.0018$\
Argument of Periapse Planet Orbit, $\omega_p$ (Degrees) & $ 94.6$ & $ 93.6 $ & $- 2.3$ & $+ 2.2$\
Mutual Orbital Inclination, $\Delta i$ (deg)$^\dagger$ & $ 4.073$ & $ 4.121 $ & $- 0.083$ & $+ 0.113$\
[lllllll||ll]{} Event \# & Center & $\sigma$ & Depth$^\dagger$ & $\sigma$ & Duration & $\sigma$ & Center & Duration\
& (Time-2455000 \[BJD\]) & (Center) & \[ppm\] & (Depth) & \[days\] & (Duration) & (Time-2455000 \[BJD\]) & \[days\]\
& & & & & & & [**Predicted**]{} &\
1 & -4.3799 & 0.0019 & 1557 & 668 & 0.1517 & 0.0113 & -4.38 & 0.14\
2 & 62.3363 & 0.0018 & 2134 & 537 & 0.18 & 0.0138 & 62.34 & 0.18\
3 & 125.0938 & 0.0033 & 2958 & 678 & 0.1549 & 0.0145 & 125.1 & 0.16\
– & – & – & – & – & – & – & 188.34$^\dagger$$^\dagger$ & 0.1\
4 & 963.1529 & 0.0045 & 2662 & 406 & 0.1551 & 0.0209 & 963.16 & 0.16\
5 & 1026.1544 & 0.0037 & 2376 & 381 & 0.1083 & 0.0062 & 1026.16 & 0.12\
6 & 1092.3978 & 0.0075 & 2759 & 322 & 0.3587 & 0.0199 & 1092.40 & 0.36\
7 & 1156.2889 & 0.0057 & 1892 & 394 & 0.0921 & 0.0144 & 1156.29 & 0.1\
8 & 1219.5674 & 0.0084 & 3282 & 432 & 0.2149 & 0.0236 & 1219.56 & 0.22\
& & & & & &\
9 & – & – & – & – & – & – & 3999.47 & 0.12\
[^1]: http://keplergo.arc.nasa.gov/ContributedSoftwarePyKEP.shtml
[^2]: http://exoplanetarchive.ipac.caltech.edu
[^3]: Generally, $\Omega_{bin}$ (the EB longitude of ascending node) is undetermined and assumed to be zero
[^4]: http://www.boulder.swri.edu/~hal/swift.html
[^5]: Mean Exponential Growth of Nearby Orbits
[^6]: https://github.com/mslonina/Mechanic
[^7]: http://exoplanetarchive.ipac.caltech.edu
---
abstract: 'Molecular dynamics simulations were conducted to study self-interstitial migration in zirconium. Defining the location of a self-interstitial atom (LSIA) as the crystal lattice point whose Wigner-Seitz cell contains more than one atom, three types of LSIA migration events were identified: jumps remaining in one $\langle11\overline{2}0\rangle$ direction (ILJ), jumps from one $\langle11\overline{2}0\rangle$ direction to another in the same basal plane (OLJ), and jumps from one basal plane to an adjacent basal plane (OPJ). The occurrence frequencies of the three types were calculated. ILJ was found to be the dominant event in the temperature range ($300 K$ to $1200 K$), but the occurrence frequencies of OLJ and OPJ increase with increasing temperature. Although the three types of jumps may not individually follow Brownian or Arrhenius behavior, on the whole the migration of LSIAs tends to be Brownian-like. Moreover, the migration trajectories of LSIAs in the hcp basal plane are not what would be observed if only conventional one- or two-dimensional migration existed; rather, they exhibit a feature we call fraction-dimensional: the trajectories are composed of line segments in $\langle11\overline{2}0\rangle$ directions, with the average segment length varying with temperature. Using Monte Carlo simulations, the potential kinetic impacts of fraction-dimensional migration, measured by the average number of lattice sites visited per jump event (denoted $n_{SPE}$), were analyzed. The significant differences between the $n_{SPE}$ value of fraction-dimensional migration and those of conventional one- and two-dimensional migration suggest that the conventional diffusion coefficient, which cannot reflect the fraction-dimensional character of the migration, does not give an accurate description of the underlying kinetics of SIAs in Zr. This conclusion may not be limited to SIA migration in Zr and could be more generally meaningful for situations in which low-dimensional migration of defects has been observed.'
author:
- Rui Zhong
- Chaoqiong Ma
- Baoqin Fu
- Jun Wang
- Qing Hou
bibliography:
- 'mybibfilea.bib'
title: 'On the fraction-dimension migration of self-interstitials in zirconium'
---
Introduction
============
Because zirconium (Zr) has a small neutron absorption cross section, its alloys have been commonly used as fuel cladding and structural materials in nuclear reactors. A major concern is the degradation of material properties induced by prolonged exposure to the irradiation environment. The macroscopically observed changes in material properties originate from various atomistic processes [@Was:2007]. To bridge the macroscopic observations with the microscopic processes, studies spanning multiple time and space scales are crucial [@Samaras:2009]. In the present paper, we focus on the migration of self-interstitial atoms (SIAs) in Zr. It is believed that the anisotropic migration of SIAs and vacancies, which can be produced by irradiation with energetic neutrons, ions, and electrons, plays an essential role in explaining the irradiation growth of Zr [@Wen:2012; @Arevalo:2007; @Barashev:2015; @Woo:2007; @Woo:1988; @Semenov:2006].
Simulation studies relevant to the migration behavior of SIAs in Zr have been reported by a number of groups using ab initio modeling or molecular dynamics [@Pasianot:2000; @Osetsky:2002; @Woo:2003; @Domain:2006; @Diego:2008; @Diego:2011; @Wen:2012; @Verite:2013; @Samolyuk:2014; @Woo:2007; @Varvenne:2013; @Christensen:2015]. The most recent ab initio modeling showed that, in contrast to the symmetric octahedral configuration predicted earlier [@Domain:2006; @Willaime:2003], the lowest-energy SIA configuration should be the low-symmetry basal octahedral (BO) configuration [@Peng:2012; @Samolyuk:2014; @Verite:2013; @Varvenne:2013]. In addition, it was found in [@Samolyuk:2014; @Verite:2013; @Varvenne:2013] that there may exist low-symmetry metastable SIA configurations that were not recognized previously [@Willaime:1991; @Peng:2012]. These new ab initio results reveal that the migration of SIAs in Zr should be anisotropic and involve complex atomistic processes. On the other hand, from the viewpoint of multiscale simulations, it is desirable to extract effective SIA diffusion coefficients that condense the effects of the atomistic processes and correlate more directly with simulation and analysis methods at larger time and space scales [@Christien:2005; @Arevalo:2007; @Barashev:2015; @Samolyuk:2014; @Peng:2012]. Using the nudged elastic band (NEB) method to obtain migration barriers between the SIA configurations identified in ab initio modeling, Samolyuk et al. estimated the SIA diffusion coefficients along the c-axis and in the basal plane of the hcp structure through event-driven kinetic Monte Carlo (KMC) simulations [@Samolyuk:2014]. As noted by the authors, the out-of-plane diffusion coefficient was possibly overestimated because the same attempt frequency was assigned to all considered jump events.
The jump frequencies can be calculated according to the transition state theory formalism of Vineyard [@Vineyard:1957], in which the potential surface is assumed to be harmonic in the neighborhood of the local minima and saddle points. At high temperatures, the anharmonicity of the potential surface may influence the calculated jump frequencies [@Wen:2012]. Molecular dynamics (MD) simulations thus provide a more direct approach for calculating the SIA diffusion coefficients or jump frequencies.
Using MD simulations, Pasianot et al. calculated the SIA diffusion coefficients in the basal plane [@Pasianot:2000]. Based on the highly non-Arrhenius behavior of the obtained diffusion coefficients, they suggested that the SIAs may perform one-dimensional movement along $\langle11\overline{2}0\rangle$ at low temperature. More detailed MD simulations conducted by Osetsky et al. [@Osetsky:2002] provided further evidence that the diffusion of SIAs is one-dimensional along the $\langle11\overline{2}0\rangle$ direction at low temperature and changes from one-dimensional to two-dimensional and then three-dimensional diffusion with increasing temperature. Shortly after Osetsky et al., Woo et al. [@Woo:2003] published their MD results for the SIA diffusion coefficient and also demonstrated the low-dimensional diffusion of SIAs. However, there is a notable difference between the basal diffusion coefficients given in these two works. The basal diffusion coefficients obtained in the former work exhibit good Arrhenius dependence on temperature at low temperatures, whereas those given in the latter work depend only weakly on temperature. Given that the same potential [@Ackland:1995] was used in both works, the reason for this difference is unclear. More recent MD studies of SIA diffusion in Zr were conducted by Mendelev and Bokstein [@Mendelev:2010] and Diego et al. [@Diego:2011]. In these two studies, no obvious anisotropic diffusion was observed because the potential used [@Mendelev:2007] yields a symmetric most stable SIA configuration (the octahedral configuration), a result that disagrees with the predictions of the newest ab initio modeling [@Peng:2012; @Samolyuk:2014; @Verite:2013; @Varvenne:2013].
In addition to the studies mentioned above, more detailed pictures of SIA migration are needed. In the present paper, based on MD and MC simulations, we will demonstrate that the migration of SIAs in Zr tends to be Brownian-like and fraction-dimensional; namely, the migration trajectories of SIAs consist of a sequence of line segments lying in $\{0001\}$ planes, with the average length of the line segments being strongly temperature dependent. We will also demonstrate that this migration feature may have significant kinetic impacts that cannot be captured by the diffusion coefficient usually used to characterize diffusion.
MD Simulation of SIA migration {#sec:MD simulation}
==============================
Methods {#subsec:Method}
-------
Our MD simulations were performed using the graphics processing unit (GPU)-based MD package MDPSCU [@Hou:2013]. We used the many-body semi-empirical potential of the Finnis-Sinclair type proposed by Ackland et al. [@Ackland:1995] as the Zr-Zr potential. This potential has been widely used to study the stable configurations and diffusion of SIAs in Zr [@Osetsky:2002; @Woo:2003; @Wen:2012; @Diego:2011]. Although the most stable SIA configuration predicted by the potential is the basal crowdion (BC), which deviates from the basal octahedral (BO) configuration predicted by the most recent ab initio modeling, the prediction that the basal split (BS) configuration has the second-lowest energy agrees with the ab initio modeling [@Peng:2012; @Samolyuk:2014; @Verite:2013; @Varvenne:2013]. Overall, the potential used in the present paper makes predictions closer to those of the newest ab initio modeling than does the potential developed later [@Mendelev:2007].
The initial configurations of the simulation boxes were prepared by introducing a Zr atom into an hcp Zr substrate at a randomly selected position. For statistical purposes and to make optimal use of GPU-accelerated MD simulations [@Hou:2013], we generated 1000 independent replicas of the simulation box, each containing 6481 Zr atoms. The x, y and z axes were set along $[\overline{1}2\overline{1}0]$, $[\overline{1}010]$ and $[0001]$, and periodic boundary conditions were applied in all three directions. The boxes were thermalized and relaxed at a given temperature ranging from $300 K$ to $1200 K$ until thermal equilibrium was reached and were then relaxed further for $30 ps$. The trajectories of atoms were usually recorded every $0.1 ps$; a finer recording time step was used only for observing the detailed transition path between SIA configurations. Because 1000 replicas of the simulation box were run in parallel, the simulation procedure is equivalent to running one simulation box for $30 ns$.
In addition to visually observing the atomic configurations, quantitative analysis was conducted. Methodologically, correct identification of SIAs is essential for calculating the SIA diffusion properties, especially at high temperatures. As will be shown in our results, the BS configuration is the SIA state appearing most frequently during SIA migration, and it is not meaningful to distinguish which atom of the pair in the BS configuration is the self-interstitial atom. Thus, in place of identifying the exact position of a self-interstitial atom, we identify the location of a self-interstitial atom (LSIA). An LSIA is defined as the hcp lattice point whose Wigner-Seitz cell contains more than one atom. A similar method was used by Osetsky et al. [@Osetsky:2002] in their calculations. This method was also used to identify interstitials and vacancies produced in collision cascades and was shown to be robust [@Nordlund:1998]. We also tested the robustness of the method for the MD simulations in the present paper: in a simulation box created as above, only one LSIA can be found, and an LSIA contains only two atoms at all temperatures considered. Another advantage of the method is that we can easily determine the moving direction of the LSIA, immune to the influence of temperature. Because of this, we can differentiate the types of events in SIA migration.
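As a concrete illustration of the LSIA bookkeeping described above, the following minimal Python sketch (our own illustration, not part of MDPSCU; the name `find_lsia` is ours) assigns each atom to the lattice site whose Wigner-Seitz cell contains it, i.e. its nearest lattice site, and reports multiply occupied sites as LSIAs and empty sites as vacancies:

```python
import numpy as np

def find_lsia(lattice, atoms):
    """Assign each atom to its nearest lattice site (the site whose
    Wigner-Seitz cell contains it) and return the indices of sites
    holding more than one atom (LSIAs) and of empty sites (vacancies)."""
    # distance matrix: atoms x sites (brute force; a k-d tree scales better)
    d2 = ((atoms[:, None, :] - lattice[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)          # Wigner-Seitz cell index per atom
    occ = np.bincount(nearest, minlength=len(lattice))
    lsia = np.flatnonzero(occ > 1)       # locations of self-interstitials
    vac = np.flatnonzero(occ == 0)       # unoccupied sites
    return lsia, vac

# toy demo: 4 collinear sites, 5 atoms -> one doubly occupied site
lattice = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
atoms = np.array([[0.1, 0, 0], [0.9, 0, 0], [1.2, 0, 0],
                  [2.05, 0, 0], [3.0, 0, 0]])
lsia, vac = find_lsia(lattice, atoms)
```

For the 6481-atom boxes used here, a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the brute-force distance matrix, and periodic boundaries would require minimum-image distances; both are omitted for brevity.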
Features of LSIA migration {#subsec:MD_LSIA}
--------------------------
{width="12cm"}
According to the simulation approach described above, MD simulations were performed at different substrate temperatures $T_{s}$. Fig. \[fig:SIA\_VISUAL\] displays typical trajectories of LSIAs for $T_{s}=300 K$, $600 K$ and $900 K$. The graphs were created by merging the snapshots at different time steps. Also shown are the interstitial atoms and their connections with the LSIAs at which they are located. The first point to observe is that the pair of interstitial atoms in an LSIA is in the BS or BC configuration most of the time (it is actually difficult to distinguish the BS and BC configurations at nonzero substrate temperatures). However, the atom-LSIA-atom connection is not necessarily a straight line in the $\langle11\overline{2}0\rangle$ direction. Fig. \[fig:JUMP\_SCHAME\] schematically displays the typical configurations of the interstitial atoms (denoted A and B). The interstitial atoms exhibit superposed thermal swinging and vibration around their LSIA. The atoms (denoted C1, C2 and C3) on the nearest lattice sites of the LSIA also thermally swing and vibrate around their lattice points. The nearest lattice sites of the LSIA are LSIA candidates. There are moments when the interstitial atoms and the atom on one of the LSIA candidates approximately line up. At these moments, an LSIA jump may occur, during which one of the interstitial atoms falls into the new LSIA and the other returns to the lattice point where the previous LSIA was located. When we quenched the simulation boxes to zero temperature, the SIA configuration denoted PS$'$ in reference [@Varvenne:2013], which leads to the migration of SIAs from one basal plane to another, was indeed found. However, the appearance of this state is infrequent; most of the time, the interstitial atoms tend to swing around their LSIAs at small angles. This leads to our second point.
From Fig. \[fig:SIA\_VISUAL\], it is seen that the LSIA exhibits one-dimensional migration at low temperature ($T_{s} = 300 K$): the LSIA 'jumps' forward and backward along a $\langle11\overline{2}0\rangle$ direction. With increasing temperature ($T_{s} = 600 K$), changes in the moving direction of LSIAs are observed. An LSIA may change its moving direction from one $\langle11\overline{2}0\rangle$ direction to another equivalent $\langle11\overline{2}0\rangle$ direction in the same basal plane. It may also jump into one of the two nearest $\{0001\}$ planes and then quickly resume movement along one $\langle11\overline{2}0\rangle$ direction. However, the probability of an LSIA changing its moving direction in-plane or jumping off-plane is significantly smaller than the probability of it continuing to move one-dimensionally. The trajectories of LSIAs are thus observed to consist of connected line segments of various lengths. With a further increase in temperature ($T_{s} = 900 K$), the probability of an LSIA changing its moving direction increases, and the average length of the line segments decreases accordingly. Even so, the events in which an LSIA jumps forward or backward along a $\langle11\overline{2}0\rangle$ direction still occur most frequently.
![(color online). Schematic graph of the movement of interstitial atoms and LSIAs. A and B denote the pair of interstitial atoms, which swing and vibrate around their LSIA. C1, C2 and C3 denote the atoms on three of the nearest lattice sites of the LSIA. These atoms on the LSIA candidates also swing and vibrate around their lattice points. When the interstitial atoms and the atom on one of the LSIA candidates approximately line up, a jump of the LSIA may occur, with one of the interstitial atoms falling into the new LSIA and the other returning to the lattice point where the previous LSIA was located. \[fig:JUMP\_SCHAME\] ](Visual2.eps){width="8cm"}
To conduct a more quantitative analysis of the migration features of Zr SIAs, we calculated the occurrence frequencies of events, $f_{E}^{(\alpha)}$, where the superscript $\alpha$ denotes the event type. The occurrence of an event is determined when an LSIA changes. If $N_{E}^{(\alpha)}$ is the number of times event $\alpha$ occurs in all 1000 simulation boxes in the simulation time of $30 ps$, then $f_{E}^{(\alpha)}$ in units of $s^{-1}$ is calculated as $(10^{8}/3)N_{E}^{(\alpha)}$. We differentiated three types of events: in-line jump (ILJ), off-line jump (OLJ), and off-plane jump (OPJ), as schematically displayed in Fig. \[fig:JUMP\_SCHAME\]. An off-plane jump is easily identified when an LSIA changes from one basal plane to another. A jump is identified as off-line if the LSIA jumps in the same basal plane but in a different direction from its immediately previous jump in a $\langle11\overline{2}0\rangle$ direction. Otherwise, a jump is an in-line jump. It should be mentioned that within a recording time step ($0.1 ps$) only LSIA jumps between two adjacent hcp lattice sites were observed. Fig. \[fig:JUMP\_FRE\] shows $f_{E}^{(\alpha)}$ as a function of the temperature, together with the sum of the occurrence frequencies of all types of events, $f_{E}^{(S)}$. It is seen that the in-line jump frequency $f_{E}^{(ILJ)}$ is significantly higher than the off-line jump frequency $f_{E}^{(OLJ)}$ and the off-plane jump frequency $f_{E}^{(OPJ)}$ over the considered range of temperatures. At room temperature ($T_{s} = 300 K$), $f_{E}^{(OLJ)}$ and $f_{E}^{(OPJ)}$ are very small compared with $f_{E}^{(ILJ)}$. With increasing temperature in the range $300 K \leq T_{s} \leq 600 K$, $f_{E}^{(ILJ)}$ increases, but at a decreasing rate. For $T_{s}\geq 600 K$, $f_{E}^{(ILJ)}$ tends to depend only weakly on the temperature. Conversely, $f_{E}^{(OLJ)}$ and $f_{E}^{(OPJ)}$ increase with increasing temperature at an increasing rate in the range $300 K \leq T_{s} \leq 600 K$.
For $T_{s} \geq 600 K$, $f_{E}^{(OLJ)}$ and $f_{E}^{(OPJ)}$ tend to increase linearly with increasing temperature. $f_{E}^{(OPJ)}$ increases slightly faster than $f_{E}^{(OLJ)}$ and slightly overtakes it for $T_{s} \geq 900 K$. The increase of $f_{E}^{(OLJ)}$ and $f_{E}^{(OPJ)}$ compensates for the slowing growth of $f_{E}^{(ILJ)}$, which leads to a good linear dependence of the total occurrence frequency of all events, $f_{E}^{(S)}$, on the temperature. Using the data of $f_{E}^{(S)}$ for $T_{s} \geq 600 K$, the dependence of $f_{E}^{(S)}$ on the temperature was very well fitted by the equation $f_{E}(T_{s}) = \mu T_{s}+\mu_{0}$ with $\mu=1.01906 K^{-1} ps^{-1}$ and $\mu_{0}=0.00133 ps^{-1}$. The equation $f_{E}(T_{s})$ can be applied even at lower temperatures, with a small deviation at $T_{s} = 300 K$. In contrast, the dependence of $f_{E}^{(S)}$ on the temperature cannot be well fitted by a single Arrhenius relation over the whole temperature range considered. This result suggests that the jump frequency of all events follows the Einstein-Smoluchowski relationship better than the Arrhenius relationship and that the SIA migration is thus more Brownian-like when the temperature is higher than room temperature, although the occurrence frequency of a specific type of event, for example $f_{E}^{(ILJ)}$, may not always follow the Einstein-Smoluchowski relationship in the considered temperature range.
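The event classification described above can be sketched in a few lines of Python (our illustration, not the authors' analysis code; treating an in-plane jump that has no preceding in-plane reference direction as off-line is our bookkeeping choice):

```python
import numpy as np

def classify_jumps(traj):
    """Label successive LSIA jumps as ILJ / OLJ / OPJ.
    traj: (N, 3) integer array of LSIA positions; the first two components
    are in-plane lattice coordinates, the third indexes the basal plane."""
    labels, prev = [], None        # prev = direction of last in-plane jump
    for a, b in zip(traj[:-1], traj[1:]):
        step = b - a
        if step[2] != 0:           # LSIA moved to an adjacent {0001} plane
            labels.append('OPJ')
            prev = None            # no reference <11-20> line in the new plane
            continue
        d = (step[0], step[1])
        # colinear with the previous in-plane jump (forward or backward)?
        if prev is not None and prev[0] * d[1] - prev[1] * d[0] == 0:
            labels.append('ILJ')
        else:
            labels.append('OLJ')
        prev = d
    return labels

# a short artificial trajectory: two jumps on one line, a turn, a plane change
traj = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [2, 1, 0], [2, 1, 1]])
labels = classify_jumps(traj)      # ['OLJ', 'ILJ', 'OLJ', 'OPJ']

# unit conversion used in the text: N events accumulated over
# 1000 boxes x 30 ps of physical time gives f = (1e8/3) * N in s^-1
n_events = 100
f_per_s = n_events / (1000 * 30e-12)
```

Counting an event as ILJ whenever the step is parallel or antiparallel to the previous in-plane step matches the definition that forward and backward jumps along the same $\langle11\overline{2}0\rangle$ line are both in-line.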
![(color online). The occurrence frequencies of event types, $f_{E}^{(\alpha)}$, as a function of temperature, where solid square denotes the total frequency of events $f_{E}^{(S)}$; solid circle denotes $f_{E}^{(ILJ)}$; hollow circle denotes $f_{E}^{(OLJ)}$; hollow square denotes $f_{E}^{(OPJ)}$; and solid star denotes $f_{E}^{(IPJ)}=f_{E}^{(ILJ)}+f_{E}^{(OLJ)}$. The red solid line is the linear fitting of $f_{E}^{(S)}$: $f_{E}(T_{s})=1.01906 T_{s}+0.00133$, with $T_{s}$ in $K$ and $f_{E}(T_{s})$ in unit $ps^{-1}$. \[fig:JUMP\_FRE\] ](Fig_fre.eps){width="8.cm"}
For the purpose of comparing our results with those of other authors, the occurrence frequency of in-plane jumps, $f_{E}^{(IPJ)}=f_{E}^{(ILJ)}+f_{E}^{(OLJ)}$, is also shown in Fig. \[fig:JUMP\_FRE\]. In contrast to $f_{E}^{(S)}$, the dependence of the in-plane jump frequency $f_{E}^{(IPJ)}$ on $T_{s}$ should be considered in two temperature regimes divided by $T_{s} = 600 K$. This is in agreement with what was observed in the work of Osetsky et al. [@Osetsky:2002], but for an unknown reason, the in-plane jump frequency displayed in reference [@Osetsky:2002] is systematically larger than ours by a factor of approximately two. Osetsky et al. fitted the in-plane jump frequency to the Arrhenius relationship in two temperature regimes ($T_{s} \leq 600 K$ and $T_{s} \geq 800 K$), with activation energies of $0.007 eV$ and $0.028 eV$, respectively. The small activation energy in the low temperature regime ($0.007 eV$), which is comparable to or even smaller than the thermal energy at the considered temperatures, is actually an indication that the SIA migration tends to be Brownian-like rather than thermally activated.
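The comparison between an Einstein-Smoluchowski-like linear law and an Arrhenius law amounts to two least-squares fits. The sketch below uses synthetic, exactly linear $(T, f)$ data in arbitrary units purely for illustration (the numbers are not the measured $f_{E}^{(S)}$ values):

```python
import numpy as np

# synthetic stand-in for f_E^(S)(T): exactly linear, arbitrary units
T = np.array([300.0, 600.0, 900.0, 1200.0])
f = 1.0 * T + 10.0

# linear (Einstein-Smoluchowski-like) fit: f = mu*T + mu0
mu, mu0 = np.polyfit(T, f, 1)

# Arrhenius fit: f = f0*exp(-Ea/(kB*T)), i.e. ln f = ln f0 + slope*(1/T)
kB = 8.617e-5                      # Boltzmann constant, eV/K
slope, lnf0 = np.polyfit(1.0 / T, np.log(f), 1)
Ea = -slope * kB                   # effective activation energy, eV

# residuals in f-space: the linear model reproduces the data exactly,
# while a single Arrhenius law cannot
rms_lin = np.sqrt(np.mean((f - (mu * T + mu0)) ** 2))
rms_arr = np.sqrt(np.mean((f - np.exp(lnf0 + slope / T)) ** 2))
```

Applied to the measured $f_{E}^{(S)}$ data, the same procedure yields the linear fit quoted in the text; for an individual $f_{E}^{(\alpha)}$, neither model need fit well over the whole temperature range.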
Fig. \[fig:JUMP\_PECENTAGE\] displays the percentages of the occurrence frequencies of the event types, defined by $p_{E}^{(\alpha)}=f_{E}^{(\alpha)}/f_{E}^{(S)}$. Again, $p_{E}^{(\alpha)}$ can be fitted by the linear equation $p^{(\alpha)}(T_{s})=A^{(\alpha)} T_{s} + B^{(\alpha)}$, especially for $T_{s} \geq 600 K$. The fitting parameters $A^{(\alpha)}$ and $B^{(\alpha)}$ obtained by fitting to the $p_{E}^{(\alpha)}$ data for $T_{s} \geq 600 K$ are given in Table \[tab:Fitting\_para\]. The percentage of the in-line jump frequency $p_{E}^{(ILJ)}$ is in good agreement with $p^{(ILJ)}$ even at low temperature. Some deviations from the linear equation can be observed for $p_{E}^{(OLJ)}$ and $p_{E}^{(OPJ)}$ in the low temperature regime. However, because $p_{E}^{(OLJ)}$ and $p_{E}^{(OPJ)}$ are small at low temperatures, these deviations are less important from the viewpoint of applications such as KMC, in which the occurrence frequency of event type $\alpha$ is calculated as $f_{E}(T_{s})\cdot p^{(\alpha)}(T_{s})$. Again, it is clearly seen that in-line jumps are the major events in LSIA migration even at $T_{s} = 1200 K$ or higher.
![(color online). The percentages of the occurrence frequencies of the event types, $p_{E}^{(\alpha)}=f_{E}^{(\alpha)}/f_{E}^{(S)}$, as a function of temperature. Solid circle denotes $p_{E}^{(ILJ)}$; hollow circle denotes $p_{E}^{(OLJ)}$; hollow square denotes $p_{E}^{(OPJ)}$; solid star denotes $p_{E}^{(IPJ)}$. The solid lines are the linear fittings of $p_{E}^{(\alpha)}$: $p^{(\alpha)}(T_{s})=A^{(\alpha)} T_{s}+B^{(\alpha)}$, with the fitting parameters given in Table \[tab:Fitting\_para\]. \[fig:JUMP\_PECENTAGE\] ](Fig_percent.eps){width="8.cm"}
Event type ILJ OLJ IPJ OPJ
------------------------- -------------- ------------ ------------- ------------- -- --
$A^{(\alpha)}(K^{-1}) $ $~~-0.0441$ $~0.0166$ $~-0.0275$ $~0.0275$
$B^{(\alpha)}$ $~~115.0209$ $~-3.3084$ $~111.7124$ $~-11.7124$
: \[tab:Fitting\_para\] The fitting parameters of fitting $p^{(\alpha)}(T_{s})=A^{(\alpha)} T_{s}+B^{(\alpha)}$ to the percentage of occurrence frequencies of event types in Fig.\[fig:JUMP\_PECENTAGE\]
For convenience of description, we call migration characterized by trajectories constructed from line segments of temperature-dependent average length in hcp basal planes fraction-dimensional migration. The kinetic effects of fraction-dimensional migration could differ from those that would occur if there were only conventional one-dimensional or two-dimensional migration in the basal plane. In the next section, based on Monte Carlo simulations, we quantify the kinetic effects that the fraction-dimensional character of the migration may produce.
Sites visited by SIAs {#sec:MC simulation}
=====================
As demonstrated in the previous section, the migration of LSIAs in the basal plane of Zr is fraction-dimensional. According to random walk theory [@Chandrasekhar:1943], a particle randomly walking on one-dimensional lattice sites can revisit the same sites many times. If off-line jumps occur, the walk branches, and sites aligned along other directions, which could not be visited while the particle walks in a single direction, can be visited. We propose using the average number of visited sites per event, $n_{SPE}$, to measure the potential kinetic effects. From the viewpoint of applications, the interaction rate between SIAs, or between SIAs and defects of other types (e.g., vacancies), should be proportional to $n_{SPE}$.
We performed Monte Carlo (MC) simulations to calculate $n_{SPE}$. Each MC simulation was started from a randomly selected hcp lattice site. The next site to be visited was randomly chosen from the candidate sites of an event, with the type of event determined by sampling according to the percentages of occurrence frequencies $p_{E}^{(\alpha)}$. The numbers of candidate sites were 2, 4 and 6 for ILJ, OLJ and OPJ, respectively. A site can contribute only one count to the number $N_{V}$ of visited sites. The simulation terminates after issuing an assumed number $N_{E}$ of events. $N_{E}$ can be considered as the average number of jumps an LSIA makes before its annihilation due to, for example, recombination between SIAs and vacancies. $N_{E}$ can also be converted to a migration time $t$ by $t=N_{E}/f_{E}^{(S)}$, where $f_{E}^{(S)}$ is the occurrence frequency of all event types. For a given $N_{E}$, many MC simulation runs were performed. $n_{SPE}$ was then calculated as $\langle N_{V}\rangle / N_{E}$, with $\langle \rangle$ denoting the average over the runs. The diffusion coefficient $D^{(2)}$ in the basal plane was calculated as $\langle x^{2}+y^{2}\rangle /4t$, and the three-dimensional diffusion coefficient $D^{(3)}$ as $\langle x^{2}+y^{2}+z^{2} \rangle/6t$, with $(x,y,z)$ denoting the displacement vector of an LSIA after $N_{E}$ ($\gg 1$) jumps.
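The visited-sites procedure described above can be sketched as follows. This is an illustrative reimplementation, not the original code: the hcp lattice is indexed with scaled integer coordinates, the jump percentages are free inputs, and the candidate-site counts (2 for ILJ, 4 for OLJ, 6 for OPJ) follow the text.

```python
import random

# Illustrative sketch of the visited-sites Monte Carlo described in the text
# (not the original code). hcp sites are indexed by scaled integer coordinates
# so that the (1/3, 1/3) in-plane offset between adjacent basal layers stays
# integral: A-layer sites have x % 3 == 0, B-layer sites x % 3 == 1.
IN_PLANE = [(3, 0), (-3, 0), (0, 3), (0, -3), (3, -3), (-3, 3)]  # 3 lines x 2 senses
UP_FROM_A = [(1, 1), (-2, 1), (1, -2)]    # in-plane offsets to the 3 sites in layer z +- 1
UP_FROM_B = [(-1, -1), (2, -1), (-1, 2)]

def n_spe(p_ilj, p_olj, p_opj, n_events, n_runs, rng):
    """Average number of newly visited sites per event, <N_V>/N_E
    (n_spe == 1 when every jump lands on a never-visited site)."""
    assert abs(p_ilj + p_olj + p_opj - 1.0) < 1e-9
    total = 0
    for _ in range(n_runs):
        x = y = z = 0
        line = 0                        # index of the current in-line direction
        visited = {(x, y, z)}
        for _ in range(n_events):
            u = rng.random()
            if u < p_ilj:               # in-line jump: 2 candidate sites
                dx, dy = IN_PLANE[2 * line + rng.randrange(2)]
            elif u < p_ilj + p_olj:     # off-line jump: 4 candidate sites
                line = (line + 1 + rng.randrange(2)) % 3
                dx, dy = IN_PLANE[2 * line + rng.randrange(2)]
            else:                       # out-of-plane jump: 6 candidate sites
                offs = UP_FROM_A if x % 3 == 0 else UP_FROM_B
                dx, dy = offs[rng.randrange(3)]
                z += rng.choice((1, -1))
            x, y = x + dx, y + dy
            visited.add((x, y, z))
        total += len(visited) - 1       # exclude the starting site
    return total / (n_runs * n_events)

rng = random.Random(42)
print(n_spe(1.00, 0.00, 0.00, 1000, 200, rng))   # full one-dimensional
print(n_spe(0.85, 0.05, 0.10, 1000, 200, rng))   # a fraction-dimensional mix
```

With $p^{(ILJ)}=1$ the walk is fully one-dimensional and $n_{SPE}$ is small because sites are repeatedly revisited; admitting off-line and out-of-plane jumps raises it, which is the branching effect discussed above.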
Fig. \[fig:NSPE\_NE\] displays $n_{SPE}$ as a function of $N_{E}$, obtained using the $p_{E}^{(\alpha)} (T_{s})$ given in subsection \[subsec:MD\_LSIA\] for different substrate temperatures. Also shown in Fig. \[fig:NSPE\_NE\] is $n_{SPE}^{(d)}$ for full three-, two- and one-dimensional migration of LSIAs on hcp lattices, where the superscript $d$ denotes the dimensionality. Here, 'full' three-dimensional migration denotes the case in which jumps from a site to all of its 12 nearest neighboring sites are equally probable, 'full' two-dimensional migration the case in which jumps from a site to its 6 nearest neighboring sites in the same plane are equally probable, and 'full' one-dimensional migration the case in which only in-line jumps exist. Obviously, two- and one-dimensional migration are special cases of fraction-dimensional migration. Keeping in mind that mathematically $n_{SPE}=1$ if $N_{E}=1$ in any case, all $n_{SPE}$ values are seen to decrease monotonically with increasing $N_{E}$, due to the spreading out of the visited lattice sites. The lower the migration dimensionality, the less the visited sites spread out. Thus, $n_{SPE}^{(1)}$ and $n_{SPE}^{(3)}$ provide the minimum and maximum boundaries, respectively, for the $n_{SPE}$ of migration phenomena in the hcp lattice. The difference between the minimum and maximum is $53.8\%$ at $N_{E}=20$ and increases with increasing $N_{E}$. This large difference suggests that $n_{SPE}$ is very sensitive to the migration dimensionality. Returning to the migration of LSIAs in Zr, $n_{SPE}$ varies from $30.6\%$ to $68.8\%$ at $N_{E}=20$ and from $9.1\%$ to $60.7\%$ at $N_{E}=10,000$ when the temperature $T_{s}$ increases from $300 K$ to $1200 K$. Another observation is that the increase of $p_{E}^{(OPJ)}$ causes $n_{SPE}$ to approach a constant asymptotically for large $N_{E}$.
For full two-dimensional and one-dimensional migration, the slope of $n_{SPE}$ as a function of $N_{E}$ remains large even at $N_{E}=1000$. The large slope is probably an indication that the kinetics of LSIAs could depend on the concentration of the sites with which the SIAs can interact. In such a case, the use of concentration-independent kinetic quantities, such as the diffusion coefficient originating from the theory of continuous diffusion, should be revalidated. We will address this issue more generally in the future. Here, we address another question concerning the use of diffusion coefficients when LSIA migration is fraction-dimensional.
![(color online). The average number of visited sites per event, $n_{SPE}$ , as a function of the number of jump events $N_{E}$. Also shown are $n_{SPE}$ for full three-, two- and one-dimensional migration. The definitions of full three-, two- and one-dimensional migration can be found in the text. \[fig:NSPE\_NE\] ](Nspe_NE.eps){width="8.5cm"}
Because an LSIA jump in the basal plane always contributes the same squared displacement $a_{0}^{2}$ to the mean square displacement (MSD), where $a_{0}$ is the basal lattice constant, whether the jump is in-line or off-line, one cannot judge solely from the diffusion coefficient whether the migration of SIAs in the basal plane is one-dimensional, two-dimensional or fraction-dimensional. The in-basal-plane migration of an LSIA in Zr is usually treated as either one-dimensional or two-dimensional [@Woo:2000; @Arevalo:2007; @Christien:2005; @Samolyuk:2014]. However, our present results reveal that the in-basal-plane migration of LSIAs in Zr is fraction-dimensional. To study the impacts of such treatments, we performed further MC simulations for three cases. The first case is the 'real' fraction-dimensional case, in which all $p_{E}^{(\alpha)}(T_{s})$ are those obtained by the MD simulations in subsection \[subsec:MD\_LSIA\]. For the other two cases, the same $p_{E}^{(OPJ)}(T_{s})$ was adopted. However, we set $p_{E}^{(ILJ)}(T_{s})=1-p_{E}^{(OPJ)}(T_{s})$ and $p_{E}^{(OLJ)}(T_{s})=0$ to treat the in-basal-plane migration of LSIAs as fully one-dimensional, and set $p_{E}^{(ILJ)}(T_{s})=2(1-p_{E}^{(OPJ)}(T_{s}))/6$ and $p_{E}^{(OLJ)}(T_{s})=4(1-p_{E}^{(OPJ)}(T_{s}))/6$ to treat it as fully two-dimensional. Fig. \[fig:NSPE\_D\] shows $n_{SPE}$ versus the diffusion coefficients $D^{(2)}(T_{s})$ and $D^{(3)}(T_{s})$, obtained with $N_{E}=10,000$ for all three cases, with $T_{s}$ ranging from $300 K$ to $1200 K$. As expected, at the same temperature the three cases have the same diffusion coefficients $D^{(2)}(T_{s})$ and $D^{(3)}(T_{s})$ within statistical accuracy, as denoted by the dashed line in Fig. \[fig:NSPE\_D\]. However, the $n_{SPE}$ of the 'real' case deviates significantly from the $n_{SPE}$ of the assumed one- and two-dimensional cases.
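The probability reassignments for the two assumed treatments can be written compactly. The sketch below (with a made-up out-of-plane share of 10%) only illustrates the bookkeeping described above; it is not taken from the original analysis.

```python
def case_probs(p_opj, treatment):
    """Event-type probabilities (p_ILJ, p_OLJ, p_OPJ) for the two assumed
    in-basal-plane treatments described in the text."""
    p_in = 1.0 - p_opj                    # probability left for in-plane jumps
    if treatment == "one-dimensional":    # all in-plane jumps forced in-line
        return p_in, 0.0, p_opj
    if treatment == "two-dimensional":    # 6 equally likely in-plane neighbors:
        return 2.0 * p_in / 6.0, 4.0 * p_in / 6.0, p_opj   # 2 in-line, 4 off-line
    raise ValueError("unknown treatment: " + treatment)

# Example with an assumed (made-up) out-of-plane share of 10 %:
for case in ("one-dimensional", "two-dimensional"):
    p_ilj, p_olj, p_opj = case_probs(0.10, case)
    assert abs(p_ilj + p_olj + p_opj - 1.0) < 1e-12
    print(case, round(p_ilj, 3), round(p_olj, 3), round(p_opj, 3))
```

Because both reassignments preserve the total in-plane probability, every case produces the same in-plane jump rate and hence, per the argument above, the same $D^{(2)}$ and $D^{(3)}$, while their $n_{SPE}$ differ.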
This result suggests that the conventional diffusion coefficients cannot reflect the real underlying kinetics of LSIA migration in Zr, in which the in-basal-plane migration of LSIAs is neither fully one-dimensional nor fully two-dimensional but rather fraction-dimensional. Thus, the models used in downstream applications, such as KMC or mean field rate theory, for modeling the microstructure evolution in Zr should account for the effects of the fraction-dimensional migration of SIAs to make accurate predictions.
![(color online). $n_{SPE}$ vs diffusion coefficients $D^{(2)}$(denoted by solid square) and $D^{(3)}$(denoted by solid circle) for: two-dimensional case (red); fraction-dimensional case (blue); and one-dimensional case (black) (see text for definitions of the cases). \[fig:NSPE\_D\] ](Nspe_D.eps){width="8.5cm"}
Conclusions {#sec:Conclusions}
===========
Based on MD simulations, we have analyzed the migration of LSIAs in Zr by extracting the occurrence frequencies of ILJ, OLJ and OPJ. On the whole, the migration of LSIAs tends to be Brownian-like, although the three types of jumps may not individually follow Brownian and Arrhenius behavior. Because the occurrence frequencies of ILJ, OLJ and OPJ are unequal, with ILJs being dominant, the trajectories of LSIAs in the hcp basal plane exhibit a fraction-dimensional character. Based on MC simulations, we have analyzed the potential kinetic impacts of fraction-dimensional migration, as measured by $n_{SPE}$, the average number of lattice sites visited per jump event. The $n_{SPE}$ of fraction-dimensional migration can be very different from that obtained if the migration in the hcp basal plane is assumed to be conventional one- or two-dimensional. This result suggests that the conventional diffusion coefficient calculated from the mean square displacements of particles cannot reflect the migration dimensionality and thus cannot provide an accurate description of the underlying kinetics of SIAs in Zr. This conclusion may not be limited to SIA migration in Zr and could be more generally meaningful for situations in which low-dimensional migration of defects in materials is observed.
Acknowledgments
===============
This work was partly supported by the National Natural Science Foundation of China (Contract No. 91126001) and the National Magnetic Confinement Fusion Program of China (Contract No. 2013GB109002). The authors thank Dr. Ziwen FU for his help during the preparation of the manuscript.
---
abstract: 'The study of exclusive $\pi^{\pm}$ electroproduction on the nucleon, including separation of the various structure functions, is of interest for a number of reasons. The ratio $R_L=\sigma_L^{\pi^-}/\sigma_L^{\pi^+}$ is sensitive to isoscalar contamination to the dominant isovector pion exchange amplitude, which is the basis for the determination of the charged pion form factor from electroproduction data. A change in the value of $R_T=\sigma_T^{\pi^-}/\sigma_T^{\pi^+}$ from unity at small $-t$, to 1/4 at large $-t$, would suggest a transition from coupling to a (virtual) pion to coupling to individual quarks. Furthermore, the mentioned ratios may show an earlier approach to pQCD than the individual cross sections. We have performed the first complete separation of the four unpolarized electromagnetic structure functions above the dominant resonances in forward, exclusive $\pi^{\pm}$ electroproduction on the deuteron at central $Q^2$ values of 0.6, 1.0, 1.6 GeV$^2$ at $W$=1.95 GeV, and $Q^2$=2.45 GeV$^2$ at $W$=2.22 GeV. Here, we present the $L$ and $T$ cross sections, with emphasis on $R_L$ and $R_T$, and compare them with theoretical calculations. Results for the separated ratio $R_L$ indicate dominance of the pion-pole diagram at low $-t$, while results for $R_T$ are consistent with a transition between pion knockout and quark knockout mechanisms.'
author:
- 'G.M. Huber'
- 'H.P. Blok'
- 'C. Butuceanu'
- 'D. Gaskell'
- 'T. Horn'
- 'D.J. Mack'
- 'D. Abbott'
- 'K. Aniol'
- 'H. Anklin'
- 'C. Armstrong'
- 'J. Arrington'
- 'K. Assamagan'
- 'S. Avery'
- 'O.K. Baker'
- 'B. Barrett'
- 'E.J. Beise'
- 'C. Bochna'
- 'W. Boeglin'
- 'E.J. Brash'
- 'H. Breuer'
- 'C.C. Chang'
- 'N. Chant'
- 'M.E. Christy'
- 'J. Dunne'
- 'T. Eden'
- 'R. Ent'
- 'H. Fenker'
- 'E.F. Gibson'
- 'R. Gilman'
- 'K. Gustafsson'
- 'W. Hinton'
- 'R.J. Holt'
- 'H. Jackson'
- 'S. Jin'
- 'M.K. Jones'
- 'C.E. Keppel'
- 'P.H. Kim'
- 'W. Kim'
- 'P.M. King'
- 'A. Klein'
- 'D. Koltenuk'
- 'V. Kovaltchouk'
- 'M. Liang'
- 'J. Liu'
- 'G.J. Lolos'
- 'A. Lung'
- 'D.J. Margaziotis'
- 'P. Markowitz'
- 'A. Matsumura'
- 'D. McKee'
- 'D. Meekins'
- 'J. Mitchell'
- 'T. Miyoshi'
- 'H. Mkrtchyan'
- 'B. Mueller'
- 'G. Niculescu'
- 'I. Niculescu'
- 'Y. Okayasu'
- 'L. Pentchev'
- 'C. Perdrisat'
- 'D. Pitz'
- 'D. Potterveld'
- 'V. Punjabi'
- 'L.M. Qin'
- 'P.E. Reimer'
- 'J. Reinhold'
- 'J. Roche'
- 'P.G. Roos'
- 'A. Sarty'
- 'I.K. Shin'
- 'G.R. Smith'
- 'S. Stepanyan'
- 'L.G. Tang'
- 'V. Tadevosyan'
- 'V. Tvaskis'
- 'R.L.J. van der Meer'
- 'K. Vansyoc'
- 'D. Van Westrum'
- 'S. Vidakovic'
- 'J. Volmer'
- 'W. Vulcan'
- 'G. Warren'
- 'S.A. Wood'
- 'C. Xu'
- 'C. Yan'
- 'W.-X. Zhao'
- 'X. Zheng'
- 'B. Zihlmann'
title: ' Separated Response Function Ratios in Exclusive, Forward $\bf\pi^{\pm}$ Electroproduction'
---
Measurements of exclusive meson production are a useful tool in the study of hadronic structure. Through these studies, one can discern the relevant degrees of freedom at different distance scales. In contrast to inclusive $(e,e')$ or photoproduction measurements, the transverse momentum (size) of a scattering constituent and the resolution at which it is probed can be varied independently. Exclusive [*forward pion*]{} electroproduction is especially interesting, because by detecting the charge of the pion, even the flavor of the interacting constituents can be tagged. Finally, [*ratios*]{} of separated response functions can be formed for which nonperturbative corrections may partially cancel, yielding insight into soft-hard factorization at the modest photon virtualities, $Q^2$, to which exclusive measurements will be limited for the foreseeable future.
The longitudinal response in exclusive charged pion electroproduction has several important applications. At low Mandelstam variable $-t$, it can be related to the charged pion form factor, $F_{\pi}(Q^2)$ [@Huber08], which is used to test non-perturbative models of this “positronium” of light quark QCD. In order to reliably extract $F_{\pi}$ from electroproduction data, the isovector $t$-pole process should be dominant in the kinematic region under study. This dominance can be studied experimentally through the ratio of longitudinal $\gamma^{*}_L n \to \pi^- p$ and $\gamma^*_L p \to \pi^+ n$ cross sections. If the photon possessed definite isospin, exclusive $\pi^-$ production on the neutron and $\pi^+$ production on the proton would be related to each other by a simple isospin rotation and the cross sections would be equal [@boyarski68]. A departure from $R_L\equiv\sigma_L^{\pi^-}/\sigma_L^{\pi^+}=\frac{|A_V-A_S|^2}{|A_V+A_S|^2}=1$, where $A_S$ and $A_V$ are the respective isoscalar and isovector photon amplitudes, would indicate the presence of isoscalar backgrounds arising from mechanisms such as $\rho$ meson exchange [@VGL1] or perturbative contributions due to transverse quark momentum [@Milana]. Such physics backgrounds may be expected to be larger at higher $-t$ (due to the drop-off of the pion pole) or in non-forward kinematics (due to angular momentum conservation). Because previous data are unseparated [@Brauel1], no firm conclusions about possible deviations of $R_L$ from unity were possible.
In the limit of small $-t$, where the photon is expected to couple to the charge of the pion, the transverse ratio $R_T\equiv\sigma_T^{\pi^-}/\sigma_T^{\pi^+}$ is expected to be near unity. With increasing $-t$, the photon starts to probe quarks rather than pions, and the charge of the produced pion acts as a tag on the flavor of the participating constituent. Applying isospin decomposition and charge symmetry invariance to $s$-channel knockout of valence quarks in the hard-scattering regime, Nachtmann [@nachtmann] predicted the exclusive electroproduction $\pi^-/\pi^+$ ratio at sufficiently large $-t$ to be $\frac{\gamma^*_T n\rightarrow\pi^-p}{\gamma^*_T p\rightarrow\pi^+n}=\Bigl(\frac{e_d}{e_u}\Bigr)^2=\frac{1}{4}$. Previous unseparated $\pi^-/\pi^+$ data [@Brauel1] trend to a ratio of 1/4 for $|t|>0.6$ GeV$^2$, but with relatively large uncertainties.
In the transition region between low $-t$ (where a description in terms of hadronic degrees of freedom and effective hadronic Lagrangians is valid) and large $-t$ (where the degrees of freedom are quarks and gluons), $t$-channel exchange of a few Regge trajectories permits an efficient description of the energy dependence and the forward angular distribution of many real- and virtual-photon-induced reactions. The VGL Regge model [@VGL; @van98] has provided a good and consistent description of a wide variety of $\pi^{\pm}$ photo- and electroproduction data above the resonance region. However, the model has consistently failed to provide a good description of $p(e,e'\pi^+)n$ data [@Blok08]. The VGL Regge model was recently extended [@kaskulov; @vrancx] by the addition of a hard deep inelastic scattering (DIS) process of virtual-photons off nucleons. The DIS process dominates the transverse response at moderate and high $Q^2$, providing a better description of ${\mbox{$\sigma_T$}}$.
Exclusive $\pi^{\pm}$ electroproduction has also been calculated in the handbag framework, where only one parton participates in the hard subprocess, and the soft physics is encoded in generalized parton distributions (GPDs). Pseudoscalar meson production, such as ${\mbox{$\sigma_T$}}$ in exclusive $\pi^{\pm}$ electroproduction which is not dominated by the pion pole term, has been identified as being especially sensitive to the chiral-odd transverse GPDs [@ahmad; @gk10]. The model of Refs. [@gk10; @gk13] uses a modified perturbative approach based on GPDs, incorporating the full pion electromagnetic form factor and substantial contributions from the twist-3 transversity GPD, $H_T$.
We have performed a complete $L$/$T$/$LT$/$TT$ separation in exclusive forward $\pi^{\pm}$ electroproduction from deuterium. Here, we present the $L$ and $T$ cross sections, with emphasis on $R_L$ and $R_T$ in order to better understand the dynamics of this fundamental inelastic process; the $LT$ and $TT$ interference cross sections will be presented in a future work. Because there are no practical free neutron targets, the $^2$H$(e,e'\pi^{\pm})NN_s$ reactions (where $N_s$ denotes the spectator nucleon) were used. In $\pi^-/\pi^+$ ratios, the corrections for nuclear binding and rescattering largely cancel.
The data were obtained in Hall C at the Thomas Jefferson National Accelerator Facility (JLab) as part of the two pion form factor experiments presented in detail in Ref. [@Blok08]. Except where noted, the experimental details and data analysis techniques are as presented in Ref. [@Blok08] for the $^1$H$(e,e'\pi^+)n$ data. Charged $\pi^{\pm}$ were detected in the High Momentum Spectrometer (HMS) while the scattered electrons were detected in the Short Orbit Spectrometer (SOS). Given the kinematic constraints imposed by the available electron beam energies and the properties of the HMS and SOS magnetic spectrometers, deuterium data were acquired in the first experiment at nominal $(Q^2$, $W$, $\Delta\epsilon)$ settings of $(0.60, 1.95, 0.37)$, $(1.00, 1.95, 0.32)$, $(1.60, 1.95, 0.36)$, and in the second experiment of $(2.45, 2.22, 0.27)$. The value $W$=1.95 GeV used in the first experiment is high enough to suppress most $s$-channel baryon resonance backgrounds, and this suppression should be even more effective in the second experiment. For each setting, the electron spectrometer angle and momentum, as well as the pion spectrometer momentum, were kept fixed. To attain full coverage in $\phi$, in most cases additional data were taken with the pion spectrometer at a slightly smaller and at a larger angle than the $\vec{q}$-vector direction for the high $\epsilon$ settings. At low $\epsilon$, only the larger angle setting was possible. The HMS magnetic polarity was reversed between $\pi^+$ and $\pi^-$ running, with the quadrupole and dipole magnets cycled according to a standard procedure. Kinematic offsets in spectrometer angle and momentum, as well as in beam energy, were previously determined using elastic $e^-p$ coincidence data taken during the same run, and the reproducibility of the optics was checked [@Blok08].
The potential contamination by electrons when the pion spectrometer is set to negative polarity, and by protons when it is set to positive polarity, introduces some differences in the $\pi^{\pm}$ data analyses which were carefully examined. For most negative HMS polarity runs, electrons were rejected at the trigger level by a gas Cerenkov detector containing ${\rm
C}_4{\rm F}_{10}$. The beam current was significantly reduced during $\pi^-$ running to minimize the inefficiency due to electrons passing through the gas Cerenkov within $\approx 100$ ns after a pion has traversed the detector, causing the pion to be misidentified as an electron. A Cerenkov blocking correction (1-15%) was applied to the $\pi^-$ data using the measured electron rates combined with the effective time window of the gas Cerenkov ADC, the latter determined from data where the Cerenkov was not in the trigger. A cut on particle speed ($v/c>0.95$), calculated from the time-of-flight difference between two scintillator planes in the HMS detector stack, was used to separate $\pi^+$ from protons. Additionally in the second experiment, an aerogel Cerenkov detector was used to separate protons and $\pi^+$ for central momenta above 3 GeV/$c$. A correction for the number of pions lost due to pion nuclear interactions and true absorption in the HMS exit window and detector stack of 4.5-6% was applied. For further details, see Ref. [@Blok08]. Because the $\pi^-$ data are typically taken at higher HMS detector rates than the $\pi^+$ data, a good understanding of rate-dependent efficiency corrections was required. An improved high rate tracking algorithm was implemented, resulting in high rate tracking inefficiencies of 2-9% for HMS rates up to 1.4 MHz. Liquid deuterium target boiling corrections of 4.7%/100 $\mu$A were determined for the horizontal-flow target used in the first experiment. The vertical-flow target and improved beam raster used in the second experiment resulted in a negligible boiling correction for those data. The experimental yields were also corrected for dead time (1-11%).
![(Color online) Missing mass of the undetected nucleon calculated as quasi-free pion electroproduction for a representative $\pi^+$ setting. The diamonds are experimental data, and the red line is the quasi-free Monte Carlo simulation. The vertical line indicates the $M_X$ cut upper limit.[]{data-label="fig:MMplot"}](mm_100_33_0000.ps){width="3.25in"}
Kinematic quantities such as $t$ and missing mass $M_X$ were reconstructed as quasi-free pion electroproduction, $\gamma^* N \rightarrow \pi^{\pm} N'$, where the virtual-photon interacts with a nucleon at rest. The former is calculated using $t=(p_{\rm target}-p_{\rm recoil})^2$, which can differ from $(p_{\gamma}-p_{\pi})^2$ due to Fermi motion and radiation. Missing mass cuts were then applied to select the exclusive final state (Fig. \[fig:MMplot\]). Because of Fermi motion in the deuteron, this cut is taken wider than for a hydrogen target. Real and random coincidences were isolated with a coincidence time cut of $\pm 1$ ns. Background from aluminum target cell walls (2-4% of the yield) and random coincidences ($\sim 1\%$) were subtracted from the charge-normalized yields on a bin by bin basis.
The virtual-photon cross section can be expressed in terms of contributions from transversely and longitudinally polarized photons, and interference terms, $$\begin{aligned}
\label{eqn:unsep}
2\pi \frac{d^2 \sigma}{dt d\phi} & = & \frac{d \sigma_T}{dt} +
\epsilon \frac{d \sigma_L}{dt} + \sqrt{2 \epsilon (1 + \epsilon)}
\frac{d \sigma_{LT}}{dt} \cos \phi \\ \nonumber & + & \epsilon \frac{d
\sigma_{TT}}{dt} \cos 2 \phi.\end{aligned}$$ Here, $\epsilon=\left(1+2\frac{|\vec{q}|^2}{Q^2}\tan^2\frac{\theta}{2}\right)^{-1}$ is the virtual-photon polarization, where $\vec{q}$ is the three-momentum transferred to the quasi-free nucleon, $\theta$ is the electron scattering angle, and $\phi$ is the azimuthal angle between the scattering and the reaction plane.
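For concreteness, the virtual-photon polarization defined above can be computed directly. The kinematic values in the sketch below are arbitrary illustrations, not the experiment's settings.

```python
import math

def epsilon(Q2, q, theta):
    """Virtual-photon polarization epsilon = 1 / (1 + 2 (|q|^2/Q^2) tan^2(theta/2)):
    Q2 in GeV^2, |q| (three-momentum transfer to the quasi-free nucleon) in GeV/c,
    electron scattering angle theta in radians."""
    return 1.0 / (1.0 + 2.0 * (q * q / Q2) * math.tan(theta / 2.0) ** 2)

# Smaller scattering angles (at fixed Q2 and |q|) give larger epsilon, which is
# why two different beam-energy settings are needed for an L/T separation:
print(epsilon(2.45, 3.0, 0.30), epsilon(2.45, 3.0, 0.90))
```

Since $0<\epsilon<1$, measuring the same $(Q^2, W)$ point at two well-separated $\epsilon$ values changes the weight of $d\sigma_L/dt$ relative to $d\sigma_T/dt$ and makes the two terms separable.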
![(Color online) Separated exclusive $\pi^{\pm}$ electroproduction cross sections from deuterium. Because the data were taken at different values of $\overline{W}$, all cross sections were scaled to a value of $W=2.0$ GeV according to $1/(W^2-M^2)$. The error bars indicate statistical and uncorrelated systematic uncertainties in both $\epsilon$ and $-t$, combined in quadrature. The shaded error bands indicate the model-dependence of ${\mbox{$\sigma_L$}}$. The ${\mbox{$\sigma_T$}}$ model-dependence (not shown) is smaller. \[fig:xsec\] ](dsig_sep_scaled.13sep23.ps){width="3.5in"}
For each charge state, the data for $d^2\sigma/dtd\phi$ were binned in $t$ and $\phi$ and the individual components in Eqn. \[eqn:unsep\] determined from a simultaneous fit to the $\phi$ dependence of the measured cross sections at two values of $\epsilon$. The separated cross sections are determined at fixed values of $W$, $Q^2$, common for both high and low values of $\epsilon$. Because the acceptance covers a range in $W$ and $Q^2$, the measured cross sections, and hence the separated response functions, represent an average over this range. They are determined at the average values (for both $\epsilon$ points together), $\overline{Q^2}$, $\overline W$, which are different for each $t$ bin. The experimental cross sections were calculated by comparing the experimental yields to a Monte Carlo simulation of the experiment. The simulation uses a quasi-free $N(e,e'\pi^{\pm})N'$ model, where the struck nucleon carries Fermi momentum, but the events are reconstructed in the same manner as the experimental data, i.e. assuming the target is a nucleon at rest. The Monte Carlo includes a detailed description of the spectrometers, multiple scattering, ionization energy loss, pion decay, and radiative processes.
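The simultaneous fit of Eqn. \[eqn:unsep\] at two $\epsilon$ values amounts to a small linear least-squares problem. The sketch below uses synthetic, noise-free cross sections with invented response-function values purely to illustrate the separation; it is not the experiment's fitting code.

```python
import numpy as np

def design(phi, eps):
    """One row per phi bin for the model
    2*pi d2sigma/dtdphi = sig_T + eps*sig_L
        + sqrt(2*eps*(1+eps))*sig_LT*cos(phi) + eps*sig_TT*cos(2*phi)."""
    return np.column_stack([
        np.ones_like(phi),                                # sig_T
        np.full_like(phi, eps),                           # sig_L
        np.sqrt(2.0 * eps * (1.0 + eps)) * np.cos(phi),   # sig_LT
        eps * np.cos(2.0 * phi),                          # sig_TT
    ])

truth = np.array([2.0, 1.5, -0.3, 0.2])   # invented (sig_T, sig_L, sig_LT, sig_TT)
phi = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
blocks, y = [], []
for eps in (0.33, 0.70):                  # a low and a high epsilon setting
    A = design(phi, eps)
    blocks.append(A)
    y.append(A @ truth)                   # noise-free synthetic cross sections
A = np.vstack(blocks)
y = np.concatenate(y)

# Simultaneous L/T/LT/TT separation: with two epsilon values the four columns
# are linearly independent, so least squares recovers all four responses.
fit, *_ = np.linalg.lstsq(A, y, rcond=None)
print(fit)
```

With data at only one $\epsilon$, the first two columns are proportional and $\sigma_T$ and $\sigma_L$ cannot be disentangled, which is why each setting was measured at two beam energies.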
The separated cross sections, and , are shown in Fig. \[fig:xsec\]. Even if $\pi^+$ production on $^2$H occurs only on the proton, the deuterium cross section cannot be directly connected to the free $^1$H cross section because the Monte Carlo cross-section model ignores off-shell effects and averages over the nucleon momentum distribution in $^2$H. The uncertainties in the separated cross sections have both statistical and systematic sources. The statistical uncertainty in $\sigma_T+\epsilon
\sigma_L$ is 5-10% for $\pi^-$ settings, and more uniformly near 5% for $\pi^+$ settings. Systematic uncertainties that are uncorrelated between high and low $\epsilon$ points are amplified by a factor of $1/\Delta \epsilon$ in the $L$/$T$ separation. This uncertainty ($\sim 1.3\%/\Delta\epsilon$) is dominated by uncertainties in the spectrometer acceptance, uncertainties in the efficiency corrections due to Cerenkov trigger blocking and analysis cuts, and the Monte Carlo model-dependence. Scale systematic uncertainties of $\sim$3% (not shown in the figure) propagate directly into the separated cross sections. They are dominated by uncertainties in the radiative corrections, pion decay and pion absorption corrections, and the tracking efficiencies. The systematic uncertainty due to the simulation model and the applied $M_X$ cut (model-dependence) was estimated by extracting new sets of $L$/$T$/$LT$/$TT$ cross sections with alternate models and tighter $M_X$ cuts.
In the ${\mbox{$\sigma_L$}}$ response of Fig. \[fig:xsec\], the pion pole is evident from the sharp rise at small $-t$. The $\pi^-$ and $\pi^+$ responses are similar, and the data at different $Q^2$ follow a nearly universal curve versus $t$, with only a weak $Q^2$-dependence. The $T$ responses are flatter versus $t$.
Finally, $\pi^-/\pi^+$ ratios of the separated cross sections were formed to cancel nuclear binding and rescattering effects. Many experimental normalization factors cancel to a high degree in the ratio (acceptance, target thickness, pion decay and absorption in the detectors, radiative corrections, etc.). The principal remaining uncorrelated systematic errors are in the tracking inefficiencies, target boiling corrections, and Cerenkov blocking corrections.
![(Color online) The ratios $R_L$ and $R_T$ versus $-t$ for the four settings. The error bars include statistical and uncorrelated systematic uncertainties. The model-dependences of the ratios are indicated by the shaded bands. The dotted black curves are predictions of the VGL Regge model using the values $\Lambda_{\pi}^2=0.394$, 0.411, 0.455, 0.491 GeV$^2$, as determined from fits to our $^1$H data, and the solid red curves are predictions by Goloskokov and Kroll, both models calculated at the same $\overline{W}$, $\overline{Q^2}$ as the data. The dashed green curves are predictions by Kaskulov and Mosel, and the dot-dashed blue curves are predictions by Vrancx and Ryckebusch, both models calculated at the nominal kinematics.[]{data-label="fig:Rlt_plot"}](vgl_ratioL_portrait.ps){width="8.5cm"}
Fig. \[fig:Rlt\_plot\] shows the first experimental determination of $R_L$. The ratio is approximately 0.8 near $-t_{\rm min}$ at each setting, as predicted in the large $N_c$ limit calculation of Ref. [@frankfurt]. The data are generally lower than the predictions of the pion-pole dominated models [@van98; @kaskulov; @vrancx]. Under the naive assumption that the isoscalar and isovector amplitudes are real, $R_L=0.8$ gives $A_S/A_V=0.06$. This is relevant for the extraction of the pion form factor from electroproduction data, which uses a model including some isoscalar background. This result is qualitatively in agreement with the findings of our pion form factor analyses [@Huber08; @volmer], which found evidence of a small additional contribution to ${\mbox{$\sigma_L$}}$ not taken into account by the VGL Regge Model in our $Q^2=0.6-1.6$ GeV$^2$ data at $W=1.95$ GeV, but little evidence for any additional contributions in our $Q^2=1.6-2.45$ GeV$^2$ data at $W=2.2$ GeV. The main conclusion to be drawn is that pion exchange dominates the forward longitudinal response even $\sim 10\ m_{\pi}^2$ away from the pion pole.
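Under the stated naive assumption of real amplitudes, the relation between $R_L$ and $A_S/A_V$ can be inverted in closed form; this short sketch just reproduces the arithmetic quoted above.

```python
import math

def isoscalar_fraction(R_L):
    """Invert R_L = |A_V - A_S|^2 / |A_V + A_S|^2 = ((1 - x) / (1 + x))^2
    for x = A_S / A_V, assuming both amplitudes are real and 0 <= x < 1."""
    r = math.sqrt(R_L)
    return (1.0 - r) / (1.0 + r)

print(round(isoscalar_fraction(0.8), 3))   # -> 0.056, i.e. A_S/A_V ~ 0.06
```

An exact $R_L=1$ maps to a vanishing isoscalar admixture, so the observed $\sim$20% departure from unity corresponds to only a few-percent amplitude-level contamination.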
Also shown in Fig. \[fig:Rlt\_plot\] are the first $R_T$ results in electroproduction. At $Q^2$=0.6, 1.0 GeV$^2$, $R_T$ drops rapidly, and given the small $t$-range covered, it is not apparent whether this drop is due to a $t$- or $Q^2$-dependence. However, the values at $Q^2$=1.6 and 2.45 GeV$^2$ overlap, suggesting that $R_T$ is primarily a function of $-t$, dropping from about 0.6 at $-t$=0.15 GeV$^2$ to about 0.3 at $-t$=0.3 GeV$^2$. Interestingly, photoproduction data in this $t$-range [@heide] give similar values. It is noteworthy that the unseparated data of Ref. [@Brauel1] reach a value of 0.3 only at a much higher value of $-t$. A value of $-t$=0.3 GeV$^2$ seems quite low for quark-charge scaling arguments to apply directly. This might indicate the partial cancellation of soft QCD corrections in the transverse $\pi^-/\pi^+$ ratios. Previous photoproduction measurements of $R_T$ have hinted at quark-partonic behavior, but such non-forward, $Q^2=0$ measurements are inherently more difficult to interpret due to sea quark and $u$-channel contributions. Indeed, the photoproduction measurements at sufficiently high $-t$ first dip down toward 1/4, then [*increase*]{} at backward angles [@Gao]. The models of Refs. [@VGL; @kaskulov; @vrancx] do not accurately predict $R_T$ at $-t_{\rm min}$, although Ref. [@vrancx] does much better at higher $-t$. The Goloskokov-Kroll GPD-based model is in reasonable agreement, but the parameters of this model are optimized for small skewness ($\xi<0.1$) and large $W>4$ GeV. The application of this model to the kinematics of our data requires a substantial extrapolation, and one should be cautious in this comparison. Indeed, although the model does a reasonable job of predicting the $\pi^-/\pi^+$ ratios, the agreement of the model with ${\mbox{$\sigma_T$}}$ is not good [@gk13]. Further theoretical work is clearly needed to investigate alternative explanations of the observed ratios.
To summarize, our data for $R_L$ trend toward unity at low $-t$, indicating the dominance of isovector processes in forward kinematics, which is relevant for the extraction of the pion form factor from electroproduction data [@Huber08; @volmer; @hornt]. The evolution of $R_T$ with $-t$ shows a rapid falloff consistent with $s$-channel quark knockout. Since $R_T$ is not dominated by the pion pole term, this observable is likely to play an important role in future transverse GPD programs. Further work is planned after the completion of the JLab 12 GeV upgrade, including complete separations at $Q^2$=5-10 GeV$^2$ over a larger range of $-t$ [@12gev].
The authors thank Drs. Goloskokov and Kroll for the unpublished model calculations at the kinematics of our experiment, and Drs. Guidal, Laget, and Vanderhaeghen for modifying their computer program for our needs. This work is supported by DOE and NSF (USA), NSERC (Canada), FOM (Netherlands), NATO, and NRF (Rep. of Korea). Additional support from Jefferson Science Associates and the University of Regina is gratefully acknowledged. At the time these data were taken, the Southeastern Universities Research Association (SURA) operated the Thomas Jefferson National Accelerator Facility for the United States Department of Energy under contract DE-AC05-84150.
[99]{} G.M. Huber, [*et al.*]{}, Phys. Rev. C [**78**]{} (2008) 045203. A.M. Boyarski, [*et al.*]{}, Phys. Rev. Lett. [**21**]{} (1968) 1767. M. Vanderhaeghen, M. Guidal, and J.-M. Laget, Phys. Rev. C [**57**]{} (1998) 1454. C.E. Carlson, J. Milana, Phys. Rev. Lett. [**65**]{} (1990) 1717. P. Brauel, [*et al.*]{}, Z. Physik [**C 3**]{} (1979) 101;\
M. Schaedlich, Dissertation des Doktorgrades, Universitaet Hamburg, 1976, DESY F22-76/02 November 1976. O. Nachtmann, Nucl. Phys. [**B115**]{} (1976) 61. M. Guidal, J.-M. Laget, M. Vanderhaeghen, Nucl. Phys. [**A 627**]{} (1997) 645. M. Vanderhaeghen, M. Guidal, J.-M. Laget, Phys. Rev. C [**57**]{} (1998) 1454. H.P. Blok, [*et al.*]{}, Phys. Rev. C [**78**]{} (2008) 045202. M.M. Kaskulov, U. Mosel, Phys. Rev. C [**81**]{} (2010) 045202. T. Vrancx, J. Ryckebusch, Phys. Rev. C [**89**]{} (2014) 025203. S. Ahmad, G.R. Goldstein, S. Liuti, Phys. Rev. D [**79**]{} (2009) 054014. S.V. Goloskokov, P. Kroll, Eur. Phys. J. C [**65**]{} (2010) 137. S.V. Goloskokov, P. Kroll, Eur. Phys. J. A [**47**]{} (2011) 112; and Private Communication, 2013. L.L. Frankfurt, M.V. Polyakov, M. Strikman, M. Vanderhaeghen, Phys. Rev. Lett. [**84**]{} (2000) 2589. J. Volmer, [*et al.*]{}, Phys. Rev. Lett. [**86**]{} (2001) 1713. P. Heide, [*et al.*]{}, Phys. Rev. Lett. [**21**]{} (1968) 248. L.Y. Zhu, [*et al.*]{}, Phys. Rev. Lett. [**91**]{} (2003) 022003; Phys. Rev. C [**71**]{} (2005) 044603. T. Horn, [*et al.*]{}, Phys. Rev. Lett. [**97**]{} (2006) 192001. G.M. Huber, D. Gaskell, [*et al.*]{}, Jefferson Lab Experiment E12-06-101; T. Horn, G.M. Huber, [*et al.*]{}, Jefferson Lab Experiment E12-07-105.
---
abstract: |
#### Background: {#background .unnumbered}
Chromatin immunoprecipitation combined with DNA microarrays (ChIP-chip) is an assay for DNA–protein binding or post-translational chromatin/histone modifications. As with all high-throughput technologies, it requires thorough bioinformatic processing of the data, for which there is no standard yet. The primary goal is the reliable identification and localization of genomic regions that bind a specific protein. The second step comprises the comparison of binding profiles of functionally related proteins, or of binding profiles of the same protein in different genetic backgrounds or environmental conditions. Ultimately, one would like to gain a mechanistic understanding of the effects of DNA-binding events on gene expression.
#### Results: {#results .unnumbered}
We present a free, open-source [**R**]{} package [[*Starr*]{} ]{}that, in combination with the package [[*Ringo*]{} ]{}, facilitates the comparative analysis of ChIP-chip data across experiments and across different microarray platforms. Core features are data import, quality assessment, normalization and visualization of the data, and the detection of ChIP-enriched genomic regions. The use of common Bioconductor classes ensures the compatibility with other [**R**]{} packages.
#### Conclusion: {#conclusion .unnumbered}
[[*Starr*]{} ]{}is an [**R**]{} package that enables flexible analysis of a wide range of ChIP-chip experiments, in particular for Affymetrix data. Most importantly, [[*Starr*]{} ]{}provides methods for integration of complementary genomics data, e.g., it enables systematic investigation of the relation between gene expression and DNA binding.
address: ' (1) Gene Center, Ludwig-Maximilians-University of Munich, Feodor-Lynen-Str. 25, D-81377 Munich, Germany '
author:
- Benedikt Zacher$^1$
- Achim Tresch$^1$
bibliography:
- 'bmc\_article.bib'
title: 'Starr: Simple Tiling ARRay analysis of Affymetrix ChIP-chip data'
---
Background {#background-1 .unnumbered}
==========
ChIP-chip is a technique for identifying protein–DNA interactions. For this purpose, the chromatin is immunoprecipitated with an antibody against the protein of interest, and the fragmented, protein-bound DNA is analyzed on tiling arrays [@chipchip]. Before the results can be interpreted, bioinformatic methods must be applied to ensure the quality of the experiments and to preprocess the data.\
Here we present the open-source software package [[*Starr*]{} ]{}, which is available as part of the open-source Bioconductor project [@bioconductor]. It is an extension package for the programming language and statistical environment R [@R]. [[*Starr*]{} ]{}facilitates the analysis of ChIP-chip data; in particular, it supports experiments that have been performed on the Affymetrix platform. Its functionality includes data acquisition, quality assessment and data visualization. [[*Starr*]{} ]{}provides new functions for high-level data analysis, e.g., association of ChIP signals with annotated features, gene filtering, and the combined analysis of ChIP signals and other data such as gene expression measurements. It uses the standard data structures for microarray analysis in Bioconductor, building on and fully exploiting the package [[*Ringo*]{} ]{}[@ringopackage]. The latter implements algorithms for smoothing and peak-finding, as well as low-level analysis functions for microarray platforms such as Nimblegen and Agilent.
Results and discussion {#results-and-discussion .unnumbered}
======================
We demonstrate the utility of [[*Starr*]{} ]{}by applying it to a yeast RNA-Polymerase II (PolII for short) ChIP experiment. We discuss the question whether constitutive mRNA expression is mainly determined by the PolII recruitment rate to the promoter.
Data acquisition, quality assessment and normalization {#data-acquisition-quality-assessment-and-normalization .unnumbered}
------------------------------------------------------
We made data import as simple as possible, since in our experience this is a major obstacle to the widespread use of R packages in the field of ChIP-chip analysis. The import of data from the microarray manufacturers Nimblegen and Agilent has already been implemented in [[*Ringo*]{} ]{}; the common array platform Affymetrix is covered by [[*Starr*]{} ]{}. Two kinds of files must be known to [[*Starr*]{} ]{}: the .bpmap file, which contains the mapping of the reporter sequences to their physical positions on the array, and the .cel files, which contain the actual measurement values. All data, no matter from which platform, are stored in the common Bioconductor object [*ExpressionSet*]{}, which makes them accessible to a number of algorithms operating on that data structure. An R script reproducing all results of this paper, together with the data stored as RData objects, can be found in the supplements. ChIP-chip data of yeast PolII binding was published by Venters and Pugh in 2009 [@chipchipdata] and is available on ArrayExpress under the accession number E-MEXP-1676. The gene expression data used here is available under accession number E-MEXP-2123. Transcription start and termination sites were obtained from David et al. [@transcripts].
The obligatory second step in the analysis protocol is quality control. The complex experimental procedures of a ChIP-chip assay make errors almost inevitable. A particular issue with Affymetrix oligonucleotide arrays is the bias caused by the GC-content of the oligomer probes [@sequenceDependent]. [[*Starr*]{} ]{}displays the average expression of probes as a function of their GC-content, and it calculates a position-specific bias for each nucleotide at each of the 25 positions within the probe (see Figure 1). Moreover, [[*Starr*]{} ]{}provides many other quality control plots, such as an in silico reconstruction of the physical array image to identify flawed regions on the array, or pairwise MA-plots, boxplots and heat-scatter plots to visualize pairwise dependencies within the dataset.
For the purpose of bias removal (normalization), [[*Starr*]{} ]{}interfaces with the packages [*limma*]{} and [*rMAT*]{}, the latter of which implements the MAT algorithm [@MAT]. It also contains its own normalization methods, such as the median-rank-percentile normalization originally proposed by Buck and Lieb in 2004 [@rankpercentile].
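The idea behind such rank-based normalization can be illustrated with a short sketch. The following Python fragment implements a generic rank/quantile normalization in the spirit of these methods; it is only an illustration (not Starr's actual median-rank-percentile implementation), and all names in it are our own.

```python
def quantile_normalize(arrays):
    """Rank-based normalization sketch: the k-th smallest value of each
    array is replaced by the mean of the k-th smallest values across all
    arrays, so that all arrays end up with identical value distributions.
    `arrays` is a list of equal-length lists of intensities."""
    n = len(arrays[0])
    # Sort each array's values; ref[k] is the mean of the k-th smallest values.
    sorted_rows = [sorted(a) for a in arrays]
    ref = [sum(row[k] for row in sorted_rows) / len(arrays) for k in range(n)]
    out = []
    for a in arrays:
        # Rank the original positions, then substitute the reference value
        # belonging to each rank back into the original position.
        order = sorted(range(n), key=lambda i: a[i])
        b = [0.0] * n
        for rank, i in enumerate(order):
            b[i] = ref[rank]
        out.append(b)
    return out
```

After normalization every array carries the same set of values, differing only in their ordering along the genome.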
Visualization and high-level analysis {#visualization-and-high-level-analysis .unnumbered}
-------------------------------------
[[*Starr*]{} ]{}provides functions for the visualization of a set of “profiles” (e.g. time series, signal levels along genomic positions). Figure 2 shows the ChIP profile of PolII around the transcription start sites of genes whose mRNA expression according to [@expressiondata] lies in the bottom 20% or the top 10% of all yeast genes (the cutoffs were chosen such that within both groups, the number of genes having an annotated transcription start site was roughly the same). The common way of looking at intensity profiles is to calculate and plot the mean intensity at each available position along the region of interest. Such an illustration, however, may hide more than it reveals, since it fails to capture the variability at each position. It is desirable to display this variability in order to assess whether a seemingly obvious alteration in DNA binding is significant or not. Accordingly, our [*profileplot*]{} function relates to the conventional mean-value plot as a box plot relates to an individual sample mean: Let the profiles be given as the rows of a samples $\times$ positions matrix that contains the respective signal of a sample at a given position. Instead of plotting a line for each profile (i.e., each row of the matrix), the q-quantiles for each position (i.e., each column of the matrix) are calculated, where q runs through a set of representative quantiles. Then, for each q, the profile line of the q-quantiles is plotted. Color coding of the quantile profiles aids the interpretation of the plot: there is a color gradient from the median profile to the 0 (= min) and 1 (= max) quantiles. Another useful high-level plot in [[*Starr*]{} ]{}is the [*correlationPlot*]{}, which displays the correlation of a gene-related binding signal with the corresponding gene expression. Figure 3 shows a plot in which the mean PolII occupancy in various transcript regions of 2526 genes is compared to the corresponding mRNA expression.
Each region is defined by its start and end positions relative to the transcription start site (start sites are taken from [@transcripts]). The regions are plotted in the lower panel of Figure 3. For each region, the correlation between the vector of mean occupancies and the vector of gene expression values is calculated and shown in the upper panel.
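The quantile-profile computation underlying [*profileplot*]{} can be sketched as follows. This Python fragment illustrates the described procedure only; it is not the package's R implementation, and all names in it are our own.

```python
def quantile(sorted_vals, q):
    """Linear-interpolation quantile of a pre-sorted list, q in [0, 1]."""
    idx = q * (len(sorted_vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def quantile_profiles(profiles, qs=(0.05, 0.25, 0.5, 0.75, 0.95)):
    """Given profiles as rows of a samples x positions matrix, compute,
    for each position (column), the requested quantiles q.  Returns a
    dict mapping q -> the profile line of q-quantiles; plotting these
    lines with a color gradient gives a profileplot-style display."""
    n_pos = len(profiles[0])
    columns = [sorted(row[j] for row in profiles) for j in range(n_pos)]
    return {q: [quantile(col, q) for col in columns] for q in qs}
```

The median line (q = 0.5) plays the role of the conventional mean profile, while the outer quantile lines display the per-position variability that a plain mean plot hides.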
Results interpretation {#results-interpretation .unnumbered}
----------------------
Figures 2 and 3 provide ambiguous evidence for the role of PolII recruitment in basal transcription: the profile plots suggest that a high PolII occupancy at the initiation region of a gene is a necessary prerequisite for a high mRNA expression level. As opposed to this, the correlation plot reveals that PolII occupancy at the transcription start is not a good predictor of mRNA expression, but the mean occupancy of PolII in the elongation phase (region 4 in Fig. 3) is. Nevertheless, a more detailed analysis of particular gene groups, and a comparison of PolII profiles under different environmental conditions, might yield valuable new insights.
Conclusion {#conclusion-1 .unnumbered}
==========
[[*Starr*]{} ]{}is a Bioconductor package for the analysis of ChIP-chip experiments, in particular of Affymetrix tiling arrays. It exploits the full functionality of [[*Ringo*]{} ]{}for the analysis of Affymetrix tiling arrays. These include functions like peak finding, smoothing or plotting genomic regions. [[*Starr*]{} ]{}adds new analysis and visualization methods, which can also be applied to two-color technologies. It utilizes standard Bioconductor object classes and can thus easily interface other Bioconductor packages. All functions and methods in the package are well documented in help pages and in a vignette, which illustrates a workflow by means of some example data. Support is provided by the bioconductor mailing list and the package maintainer.\
Altogether, [[*Starr*]{} ]{}in conjunction with [[*Ringo*]{} ]{}constitutes a powerful and comprehensive tool for the analysis of tiling arrays across established one- and two-color technologies such as Affymetrix, Agilent and Nimblegen.
Availability and requirements {#availability-and-requirements .unnumbered}
=============================
The R package [[*Starr*]{} ]{}is available from the Bioconductor web site at http://www.bioconductor.org and runs on Linux, Mac OS and MS Windows. It requires an installed version of R (version $>=$ 2.10.0), which is freely available from the Comprehensive R Archive Network (CRAN) at http://cran.r-project.org, as well as other Bioconductor packages, namely Ringo, affy, affxparser, rMAT and vsn, plus the CRAN packages pspline and MASS. The easiest way to obtain the most recent version of the software, with all its dependencies, is to follow the instructions at http://www.bioconductor.org/download. [[*Starr*]{} ]{}is distributed under the terms of the Artistic License 2.0.
Authors’ contributions {#authors-contributions .unnumbered}
======================
BZ implemented the [[*Starr*]{} ]{}package and did the analysis. AT initiated and supervised the project. Both authors wrote the manuscript and approved its final version.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank Michael Lidschreiber, Andreas Mayer, Matthias Siebert, Johannes Soeding and Kemal Akman for useful comments on the package, Joern Toedling for help on [[*Ringo*]{} ]{}, and Anna Ratcliffe for proofreading.
Figures {#figures .unnumbered}
=======
Figure 1 - Hybridization bias {#figure-1---hybridization-bias .unnumbered}
-----------------------------
Sequence-specific dependency of raw reporter intensities. (A) Boxplots of probe intensity distributions. Probes are grouped according to the G/C content of their sequence. The median intensity increases with rising G/C content. (B) Position-dependent mean probe intensity. Each letter corresponds to the mean intensity of all probes that contain the corresponding nucleotide at the respective position.
Figure 2 - PolII along the transcriptional start site {#figure-2---polii-along-the-transcriptional-start-site .unnumbered}
-----------------------------------------------------
Profiles of PolII occupancy for genes with low (bottom $20\%$) and high (top $10\%$) transcription rates (cluster 1 and cluster 2, respectively). The upper graphs show the mean occupancy at each position around the transcription start site. The lower plots illustrate the variance in the two clusters. The black line indicates the median profile of all features. The color gradient corresponds to quantiles (from 0.05 to 0.95), and the first and third quartiles are shown as grey lines. The light grey lines in the background show the profiles of individual “outlier” features.
Figure 3 - Correlation of PolII occupancy to gene expression {#figure-3---correlation-of-polii-occupancy-to-gene-expression .unnumbered}
------------------------------------------------------------
[[*Starr*]{} ]{}enables the systematic investigation of gene expression in relation to DNA binding. The figure shows the correlation of the mean PolII occupancy within different regions along the transcript with gene expression. The lower panel shows the regions of interest relative to the transcription start site (TSS) and the transcription termination site (TTS). The upper panel shows the correlation of PolII occupancy with gene expression for the corresponding regions.
---
abstract: 'We studied the collapse of rotating molecular cloud cores with inclined magnetic fields, based on three-dimensional numerical simulations. The numerical simulations start from a rotating Bonnor–Ebert isothermal cloud in a uniform magnetic field. The magnetic field is initially taken to be inclined from the rotation axis. As the cloud collapses, the magnetic field and rotation axis change their directions. When the rotation is slow and the magnetic field is relatively strong, the direction of the rotation axis changes to align with the magnetic field, as shown earlier by Matsumoto & Tomisaka. When the magnetic field is weak and the rotation is relatively fast, the magnetic field inclines to become perpendicular to the rotation axis. In other words, the evolution of the magnetic field and rotation axis depends on the relative strength of the rotation and magnetic field. Magnetic braking acts to align the rotation axis and magnetic field, while the rotation causes the magnetic field to incline through dynamo action. The latter effect dominates the former when the ratio of the angular velocity to the magnetic field is larger than a critical value $ \Omega _0/ B _0 \, > \, 0.39\, G^{1/2} \, c_s^{-1} $, where $ B _0 $, $ \Omega _0 $, $ G $, and $ c _s^{-1}$ denote the initial magnetic field, initial angular velocity, gravitational constant, and sound speed, respectively. When the rotation is relatively strong, the collapsing cloud forms a disk perpendicular to the rotation axis and the magnetic field becomes nearly parallel to the disk surface in the high density region. A spiral structure appears due to the rotation and the wound-up magnetic field in the disk.'
author:
- 'Masahiro N. Machida'
- Tomoaki Matsumoto
- Tomoyuki Hanawa
- Kohji Tomisaka
title: Evolution of Rotating Molecular Cloud Core with Oblique Magnetic Field
---
Introduction
============
Magnetic fields and rotation are believed to play important roles in the gravitational collapse of molecular cloud cores. For example, the outflow associated with a young star is believed to be related to the magnetic field and rotation of a protostar and its disk. The rotation of various molecular clouds has been studied by @caselli02, @goodman93, and @arquilla86, who found that the rotation energy is not negligible compared with the gravitational energy. @crutcher99 obtained magnetic field strengths for various molecular clouds from Zeeman splitting observations and also concluded that the magnetic energy of a molecular cloud is comparable to the gravitational energy. The direction of the magnetic field is also crucial for cloud evolution, because it controls the direction of the outflow and the orientation of the disk. Polarization observations of young stellar objects suggest that circumstellar dust disks around young stars are aligned perpendicular to the magnetic field [e.g., @moneti84; @tamura89]. However, the direction of the large-scale magnetic field of an ambient cloud and that of the small-scale magnetic field around a molecular core do not always coincide. For example, the Barnard 1 cloud in Perseus exhibits field directions different from the ambient field in three of its four cores [@matthews02]. Recently, high-resolution observations by @matthews05 have shown that the polarization angle measured in the OMC-3 region of the Orion A cloud changes systematically across the core. These findings indicate that the spatial configuration of magnetic field lines in a molecular core is not simple.
@dorfi82 [@dorfi89] and @matsumoto04 [hereafter MT04] numerically investigated the contraction of a molecular cloud in which the magnetic field lines are not parallel to the rotation axis (a non-aligned rotator). @dorfi82 [@dorfi89] showed that the disk that forms changes its shape into a bar or a ring, depending on the initial angle between the magnetic field and the rotation axis. However, the evolution was not followed to high densities. MT04 reinvestigated the collapse of rotating molecular cloud cores threaded by oblique magnetic fields using high-resolution numerical simulations. They found that strong magnetic braking associated with the outflow causes the direction of the angular momentum to converge to that of the local magnetic field, resulting in convergence of the local magnetic field, angular momentum, outflow, and disk orientation. However, their study was restricted to clouds with an intermediate rotation rate ($\Omega = 7.11 \times 10^{-7}$ yr$^{-1}$). The evolution is expected to be affected not only by the magnetic field strength but also by the rotation rate. In this paper, we extend the parameter range of the MT04 study and explore the evolution of rotating magnetized clouds more generally.
In the case of a cloud in which the magnetic field is parallel to the rotation axis (aligned rotator), four distinct evolutions are observed according to the magnetic field strength and rotation rate [@machida04; @machida05a; @machida05b hereafter Papers I, II, and III]. In the isothermal regime a contracting disk is formed perpendicular to the magnetic field and the rotation axis. In the disk, the magnetic field strength, angular rotation speed, and gas density are correlated with one another and satisfy the magnetic flux-spin relation: $$\frac{\Omega_{zc}^2}{(0.2)^2 \times 4 \pi G \rho_c} + \frac{B_{\rm zc}^2}{(0.36)^2 \times 8 \pi c_s^2 \rho_c} \equiv F (\Omega_{\rm zc}, B_{\rm zc}, \rho_c) \simeq 1,
\label{eq:UL}$$ where $\Omega_{zc}$, $B_{zc}$, ${\rho_{\rm c}}$, $c_s$, and $G$ are the angular velocity, magnetic flux density, gas density at the center, isothermal sound speed, and gravitational constant, respectively.
In the case of a weak magnetic field and a small rotation rate, $F < 1$, the cloud contracts spherically (spherical collapse). This increases $F$ and a self-similarly contracting disk forms when $F\simeq 1$ is reached. On the other hand, a cloud with a strong magnetic field and/or a fast rotation rate, $F > 1$, contracts only in the direction of the magnetic field and rotation axis (vertical collapse). This reduces $F$ and a self-similarly contracting disk forms when $F \simeq 1$. Hence, $F$ controls the mode of contraction (spherical or vertical collapse). This is understood as follows: for $F < 1$ the support forces are deficient (support-deficient model), while for $F > 1$ the cloud is supported laterally by rotation and/or the magnetic field (support-sufficient model).
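The classification by $F$ can be made concrete with a short numerical sketch (Python, cgs units; the helper names are our own and the value of $G$ is rounded):

```python
import math

G = 6.674e-8  # gravitational constant in cgs units (rounded)

def magnetic_flux_spin_F(omega_zc, b_zc, rho_c, c_s):
    """Evaluate F(Omega_zc, B_zc, rho_c) from the magnetic flux-spin
    relation: the rotational term plus the magnetic term."""
    rot = omega_zc**2 / (0.2**2 * 4.0 * math.pi * G * rho_c)
    mag = b_zc**2 / (0.36**2 * 8.0 * math.pi * c_s**2 * rho_c)
    return rot + mag

def collapse_mode(F):
    """F < 1: support-deficient -> spherical collapse;
    F > 1: support-sufficient -> vertical collapse along the
    magnetic field and rotation axis."""
    return "spherical" if F < 1.0 else "vertical"
```

Either mode drives $F$ toward unity, at which point the self-similarly contracting disk forms.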
The evolution can be further divided into two categories depending on whether the magnetic or centrifugal forces are more effective in forming the disk. A model with a spin-to-magnetic flux ratio $\Omega_0/B_0$ larger than a critical value $(\Omega/B)_{\rm crit} = 0.39 G^{1/2} c_s^{-1} $ forms a disk mainly supported by the centrifugal force, while that with a smaller ratio forms a disk by the Lorentz force.
In addition to the differences in the evolution of the pre-protostellar phase (isothermal phase), subsequent evolution is affected by the magnetic field and rotation. A first core consisting of adiabatic H$_2$ molecular gas experiences fragmentation only if the initial cloud is rotation dominated, $(\Omega_0/B_0) > (\Omega/B)_{\rm crit}$. In the support-deficient regime ($F<1$), fragmentation proceeds through a deformation forming a ring, while in the support-sufficient regime ($F > 1$) fragmentation from a bar appears as well as ring fragmentation.
Based upon our previous results (Papers II and III), all models of @dorfi82 [@dorfi89] and MT04 belong to the “support-sufficient” ($F > 1$) and “magnetic-force-dominant” ($\Omega_0/B_0 < (\Omega/B)_{\rm crit}$) type. In this paper, we investigate the evolution of a rotating isothermal cloud with an inclined magnetic field, not restricted to the above type, over a large parameter space, and also explore the three other types of models which have not been previously studied.
The plan of the paper is as follows. The numerical method of our computations and the framework of our models are given in §2 and the numerical results are presented in §3. We discuss the geometry of the collapse in §4, and compare our results with previous works and observations in §5.
Numerical Model
===============
Our initial settings are almost the same as those of MT04. To study the cloud evolution and disk formation, we use the three-dimensional magnetohydrodynamical (MHD) nested grid method. We assume ideal MHD equations including the self-gravity: $$\begin{aligned}
& {{\displaystyle \frac{\partial \rho}{\partial t}} } + \nabla \cdot (\rho {\mbox{\boldmath$v$}}) = 0, & \\
& \rho {{\displaystyle \frac{\partial {\mbox{\boldmath$v$}}}{\partial t}} }
+ \rho({\mbox{\boldmath$v$}} \cdot \nabla){\mbox{\boldmath$v$}} =
- \nabla P - {{\displaystyle \frac{1}{4 \pi}} } {\mbox{\boldmath$B$}} \times (\nabla \times {\mbox{\boldmath$B$}})
- \rho \nabla \phi, & \\
& {{\displaystyle \frac{\partial {\mbox{\boldmath$B$}}}{\partial t}} } =
\nabla \times ({\mbox{\boldmath$v$}} \times {\mbox{\boldmath$B$}}), & \\
& \nabla^2 \phi = 4 \pi G \rho, &\end{aligned}$$ where $\rho$, ${\mbox{\boldmath$v$}}$, $P$, ${\mbox{\boldmath$B$}} $, and $\phi$ denote the density, velocity, pressure, magnetic flux density, and gravitational potential, respectively. The gas pressure is assumed to be expressed by the gas density (barotropic gas) as $$P = c_s^2 \rho \left[
1 + \left( {{\displaystyle \frac{\rho}{\rho_{\rm cri}}} } \right)^{2/5}
\right],
\label{eq:eos}$$ where $c_s = 190\, {\rm m}\, {\rm s^{-1}}$ and $ \rho_{\rm cri} = 1.9205 \times 10^{-13} \, \rm{g} \, \cm$ ($n_{\rm cri} = 5\times 10^{10} \cm$ for an assumed mean molecular weight of 2.3). This equation of state implies that the gas is isothermal at $T = 10$ K for $n \ll n_{\rm cri}$ and is adiabatic for $n \gg n_{\rm cri} $ [@masunaga00]. For convenience, we define the core formation epoch as that for which the central density ($n_c$) exceeds $n_{\rm cri} $. We also call the period for which $n_c < n_{\rm cri}$ the isothermal phase, and the period for which $n_c \ge n_{\rm cri}$ the adiabatic phase.
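For reference, the barotropic equation of state above can be evaluated directly. The following Python sketch (our own illustration, cgs units) reproduces its two limits: isothermal, $P \simeq c_s^2 \rho$, for $\rho \ll \rho_{\rm cri}$, and adiabatic, $P \propto \rho^{7/5}$, for $\rho \gg \rho_{\rm cri}$.

```python
C_S = 1.9e4           # isothermal sound speed in cm/s (190 m/s at T = 10 K)
RHO_CRI = 1.9205e-13  # critical density in g/cm^3 (n_cri = 5e10 cm^-3)

def pressure(rho):
    """Barotropic equation of state P = c_s^2 rho [1 + (rho/rho_cri)^(2/5)].
    For rho << rho_cri the bracket is ~1 (isothermal gas); for
    rho >> rho_cri the pressure scales as rho^(7/5) (adiabatic gas)."""
    return C_S**2 * rho * (1.0 + (rho / RHO_CRI)**0.4)
```

At, say, $\rho = 10^{-20}$ g cm$^{-3}$ the correction term is of order $10^{-3}$, so the gas is effectively isothermal, while well above $\rho_{\rm cri}$ a tenfold increase in density raises the pressure by roughly $10^{7/5} \simeq 25$.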
In this paper, a spherical cloud with a critical Bonnor–Ebert [@ebert55; @bonnor56] density profile, $\rho_{\rm BE}$, is assumed as the initial condition, although a filamentary cloud is assumed in Papers I–III. The cloud rotates rigidly ($\Omega_0$) around the $z$-axis and has a uniform magnetic field ($B_0$). To promote contraction, we increase the density by a factor $f$ (density enhancement factor) as $$\begin{aligned}
\rho(r) = \left\{
\begin{array}{ll}
\rho_{\rm BE}(r) \, f & \mbox{for} \; \; r < R_{c}, \\
\rho_{\rm BE}(R_c)\, f & \mbox{for}\; \; r \ge R_{c}, \\
\end{array}
\right. \end{aligned}$$ where $r$ and $R_c$ denote the radius and the critical radius for a Bonnor–Ebert sphere, respectively. We assume $\rho_{\rm BE}(0) = 7.39 \times 10^{-19}\, {\rm g} \,\cm$, which corresponds to a central number density of $n_{c,0} = 5\times 10^4\cm$. Thus, the critical radius for a Bonnor–Ebert sphere $R_c = 6.45\, c_s [4\pi G \rho_{BE}(0)]^{-1/2}$ corresponds to $ R_c = 2.05 \times 10^4$ AU for our settings.
The initial model is characterized by three nondimensional parameters: $\alpha$, $\omega$, and $\theta_0$. The magnetic field strength and rotation rate are scaled using a central density $\rho_0 = \rho_{\rm BE}(0) f$ as $$\alpha = B_0^2 / (4\pi \, \rho_0 \, c_{s}^2),
\label{eq:alpha}$$ and $$\omega = \Omega_0/(4 \pi\, G \, \rho_0 )^{1/2}.
\label{eq:omega}$$ The parameter $\theta_0$ represents the angle between the magnetic field and the rotation axis ($z$-axis). Thus, the initial magnetic field is given by $$\begin{aligned}
\left(
\begin{array}{l}
B_x \\
B_y \\
B_z \\
\end{array}
\right)
= B_0
\left(
\begin{array}{l}
\mbox{sin}\, \theta_0 \\
0 \\
\mbox{cos}\, \theta_0 \\
\end{array}
\right)\end{aligned}$$ in Cartesian coordinates ($x$, $y$, $z$). The above definitions of $\alpha$ and $\omega$ are the same as those of Papers I–III. Although we added a finite non-axisymmetric perturbation in Papers I–III, we assume no explicit non-axisymmetric perturbation in this paper.
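The nondimensional parameters and the rotation-dominance criterion of §1 can be computed as follows (a Python sketch in cgs units; the function names are our own and $G$ is rounded). Note that combining the two definitions gives $\omega/\sqrt{\alpha} = \Omega_0 c_s / (G^{1/2} B_0)$, so the criterion $\Omega_0/B_0 > 0.39\, G^{1/2} c_s^{-1}$ is equivalent to $\omega/\sqrt{\alpha} > 0.39$.

```python
import math

G = 6.674e-8  # gravitational constant in cgs units (rounded)

def alpha(b0, rho0, c_s):
    """Dimensionless magnetic field strength: B_0^2 / (4 pi rho_0 c_s^2)."""
    return b0**2 / (4.0 * math.pi * rho0 * c_s**2)

def omega(omega0, rho0):
    """Dimensionless angular velocity: Omega_0 / (4 pi G rho_0)^(1/2)."""
    return omega0 / math.sqrt(4.0 * math.pi * G * rho0)

def rotation_dominant(omega0, b0, c_s):
    """True when Omega_0/B_0 exceeds the critical ratio 0.39 G^(1/2)/c_s,
    i.e. when the disk forms mainly by the centrifugal force."""
    return omega0 / b0 > 0.39 * math.sqrt(G) / c_s
```

The density $\rho_0$ drops out of the ratio $\omega/\sqrt{\alpha}$, which is why the criterion involves only $\Omega_0$, $B_0$, and $c_s$.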
We calculated 36 models, widely covering the parameter space. 20 typical models are listed in Table \[table:init\]. The model parameters ($\alpha$, $\omega$, and $\theta_0$); density enhancement factor ($f$); ratio of the thermal ($\alpha_0$), rotational ($\beta_0$), and magnetic ($\gamma_0$) energies to the gravitational energy;[^1] initial central number density ($n_{0}$), magnetic field strength ($B_0$); angular velocity ($\Omega_0$); and total mass inside the critical radius ($r<R_{\rm c}$) are summarized in this table. The models SF, MF, and WF in MT04 are also listed. The clouds in MT04 have stronger (models SF and MF) or equivalent (model WF) magnetic fields compared to our models and have an intermediate rotation rate.
We calculated cloud evolutions up to $n \simeq 10^{14} \cm$ using the ideal MHD approximation. The ideal MHD approximation is fairly good as long as the gas density is lower than $ \sim 10 ^{12} $ cm$^{-3}$ [@nakano76; @nakano02]. However, ohmic dissipation affects protostellar collapse, especially at high densities exceeding $ n \simeq 10^{12} \cm$ [@nakano02]. @nakano02 have shown that the magnetic field is coupled with the gas for $n \la 10^{12} \cm$, indicating that the ideal MHD assumption is valid in the isothermal phase. In the adiabatic phase, the number density in the adiabatic core exceeds $\sim 10^{12} \cm$, and the magnetic field begins to decouple as ohmic dissipation becomes effective. Our simulation may therefore overestimate the magnetic field, especially for a dense core ($n \gtrsim 10^{12} \cm$). Since we are interested in the directions of the magnetic field, rotation axis, and disk normal in the isothermal phase, we show the results of cloud evolution mainly for low-density cores ($n \lesssim 10^{12} \cm$), in which the ideal MHD approximation is valid. The effects of ohmic dissipation on cloud evolution, outflows, and jets in high-density cores will be investigated in a subsequent paper.
We adopt the nested grid method (for details, see Paper II). The nested grid consists of concentric hierarchical rectangular grids to give a high spatial resolution near the origin. Each level of the rectangular grid has the same number of cells ($128 \times 128 \times 128 $), but the cell width $h(l)$ depends on the grid level $l$. The cell width is reduced by a factor of 1/2 as the grid level increases by 1 ($l \rightarrow l+1$). We begin our calculations with 6 grid levels ($l=1$–6). The box size of the initial finest grid $l=6$ was chosen to be $2 R_{\rm c}$. The coarsest grid ($l=1$), therefore, has a box size equal to $2^6\, R_{\rm c}$. A boundary condition is imposed at $r=2^6\, R_{\rm c}$, where the magnetic field and ambient gas rotate at an angular velocity of $\Omega_0$ (for details see MT04). Owing to this large simulation box, it takes $t\simeq 40$ free-fall times until the Alfvén wave generated at the cloud center reaches the simulation boundary, even in the model with the strongest magnetic field. Hence, the boundary condition does not affect the central cloud, because our calculations end within $\simeq 10$ free-fall times. The highest level of grids changes dynamically: a new finer grid is generated whenever the minimum local Jeans length $ \lambda _{\rm J} $ becomes smaller than $ 8\, h (l_{\rm max}) $, where $h$ is the cell width. The maximum grid level was restricted to $l_{\rm max} = 20$ in typical models. Since the density is highest in the finest grid, the generation of a new grid ensures the Jeans condition of @truelove97 with a margin of a factor of 2. We adopted the hyperbolic divergence-${\mbox{\boldmath$B$}}$ cleaning method of @dedner02.
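A minimal sketch of this refinement logic (Python, cgs units), assuming the standard isothermal Jeans length $\lambda_{\rm J} = c_s \sqrt{\pi/(G\rho)}$; the function names are our own and $G$ is rounded:

```python
import math

G = 6.674e-8  # gravitational constant in cgs units (rounded)

def jeans_length(rho, c_s):
    """Jeans length of isothermal gas of density rho and sound speed c_s."""
    return c_s * math.sqrt(math.pi / (G * rho))

def cell_width(h1, level):
    """Cell width at grid level `level`: halves with each deeper level."""
    return h1 / 2.0**(level - 1)

def needs_refinement(rho_max, c_s, h_finest):
    """Trigger a new, finer grid level whenever the minimum local Jeans
    length drops below 8 finest-grid cell widths (the Truelove-style
    condition, with a margin of a factor of 2 over 4 cells)."""
    return jeans_length(rho_max, c_s) < 8.0 * h_finest
```

As the central density rises during the collapse, $\lambda_{\rm J} \propto \rho^{-1/2}$ shrinks, so ever finer grid levels are generated around the density maximum.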
Results
=======
Our models are characterized mainly by the strength of the magnetic field ($\alpha$) and the angular velocity ($\omega$). We calculated six groups of models, groups A–F, distinguished by the values of $\alpha$ and $\omega$ (see Table \[table:init\]). The models are designated by group (A, B, C, D, E, or F) and $\theta_0$ (0$\degr$, 30$\degr$, 45$\degr$, or 60$\degr$). Hence, model A00 has $\alpha = 0.01$, $\omega =0.01$, and $\theta_0 = 0\degr$, while model A45 has the same $\alpha$ and $\omega$, but $\theta_0$ = 45$\degr$. Models with $\theta_0 = 0 \degr$ are “aligned rotators” and those with $\theta_0 \ne 0 \degr$ are “non-aligned rotators.” The aligned rotator models of groups A, B, C, and D (A00, B00, C00, and D00) have the same $\alpha$ and $\omega$ as groups A, B, C, and D in Papers II and III and have the same magnetic field strength and angular velocity at the center, although the distributions of the density, magnetic field, and angular velocity are different. Groups E and F are newly added in this paper.
The evolution of the aligned rotator models can be divided into four types (§1) according to two criteria: $F \lessgtr 1$ and $\Omega_0/B_0 \lessgtr 0.39 G^{1/2} c_s^{-1}$. Groups A, B, and F are magnetic-force-dominant ($\Omega_0/B_0 < 0.39 G^{1/2} c_s^{-1}$ for $\theta_0 = 0$), while groups C, D, and E are rotation-dominant. The models are described in the following subsections (§3.1–§3.3).
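As a sketch, the two-criteria classification can be packaged as follows (a hypothetical helper of ours, with $F$ and $\Omega_0/B_0$ supplied directly, the latter in units with $G = c_s = 1$ so that the critical ratio is simply 0.39):

```python
CRIT = 0.39  # critical Omega_0/B_0 in units of G^{1/2} c_s^{-1}

def classify(F, omega0_over_b0):
    """Classify an initial cloud by the two criteria of Section 1.

    F <> 1 separates support-sufficient from support-deficient clouds;
    Omega_0/B_0 <> 0.39 (in units with G = c_s = 1) separates
    rotation-dominant from magnetic-force-dominant disk formation.
    """
    support = "support-sufficient" if F >= 1.0 else "support-deficient"
    dominant = "rotation" if omega0_over_b0 > CRIT else "magnetic-force"
    return support, dominant
```

For example, a cloud with $F < 1$ and $\Omega_0/B_0$ below the critical value falls in the same class as groups A and B.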
Magnetic-Force-Dominant Disks
-----------------------------
In this subsection, we show the evolution of groups A, B, and F. These groups are expected to form magnetic-force-dominant disks in the isothermal phase, since $\Omega_0/B_0 < 0.39 G^{1/2} c_s^{-1}$. Before disk formation, the cloud is expected to collapse spherically in groups A and B, since they are support-deficient models ($F \lesssim 1$). On the other hand, one-dimensional collapse along the magnetic field lines is expected for group F, since it is a support-sufficient model ($F \gtrsim 1$).
### Support-deficient models
In this section, we consider the evolution of groups A and B. Figures \[fig:1\] (a)–(d) show the evolution of model A45, which has the parameters $\alpha=0.01$, $\omega=0.01$, and $\theta_0=45\degr$. This cloud rotates slowly ($\omega = 0.01$) around the $z$-axis and has a weak magnetic field ($\alpha = 0.01$). Figure \[fig:1\] (a) shows the initial state of model A45. The cloud is threaded by a magnetic field running in the direction $\theta_0=45\degr$. Figure \[fig:1\] (b) shows the structure when the central density reaches $n_c = 5.9 \times 10^7\cm$; this panel ($l=9$) covers $(1/8)^3$ of the volume of Figure \[fig:1\] (a) ($l=6$). Figure \[fig:1\] (b) shows that the magnetic field lines run at an angle $\theta_B \simeq 45\degr$ from the $z$-axis, similar to the large-scale view ($l=6$) of Figure \[fig:1\] (a), though they are squeezed near the center. The green-colored disks in Figures \[fig:1\] (b), (c), and (d) indicate regions of $\rho> (1/100) \,{\rho_{\rm c}}$ on the mid-plane parallel to the disk-like structure (perpendicular to the disk normal). From the density contour lines in the $x$-$z$ plane of Figure \[fig:1\] (b), it can be seen that the high-density region is slightly flattened and the cloud collapses along the magnetic field lines. We derive the three principal axes ($a_1 \ge a_2 \ge a_3$) from the moment of inertia according to MT04. We define the shortest axis $a_3$ to be the disk normal axis and define $a_1$ and $a_2$ to be the long and short axes on the disk mid-plane. The oblateness and axis-ratio of the high-density region ($\rho \ge 0.1 \rho_c$) are defined as ${\varepsilon_{\rm ob}}\equiv (a_1 a_2)^{1/2}/a_3$ and ${\varepsilon_{\rm ar}}\equiv a_1/a_2-1$, respectively.
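As an illustration of these definitions, the following sketch recovers ${\varepsilon_{\rm ob}}$ and ${\varepsilon_{\rm ar}}$ from mass-weighted second moments of a point distribution (our simplified stand-in for the moment-of-inertia procedure of MT04; the eigenvalue-to-axis conversion assumes a uniform ellipsoid, and all names are illustrative):

```python
import numpy as np

def shape_measures(points, masses):
    """Oblateness and axis-ratio from mass-weighted second moments.

    The principal axes a1 >= a2 >= a3 are taken proportional to the
    square roots of the eigenvalues of the second-moment tensor; then
    eps_ob = sqrt(a1*a2)/a3 and eps_ar = a1/a2 - 1.
    """
    com = np.average(points, axis=0, weights=masses)
    d = points - com
    # mass-weighted second-moment (shape) tensor
    M = np.einsum("n,ni,nj->ij", masses, d, d) / masses.sum()
    a3, a2, a1 = np.sqrt(np.linalg.eigvalsh(M))  # eigenvalues ascend
    return np.sqrt(a1 * a2) / a3, a1 / a2 - 1.0

# uniform grid of points filling an ellipsoid with semi-axes 2 : 1 : 0.5
g = np.linspace(-1.0, 1.0, 41)
x, y, z = np.meshgrid(2 * g, g, 0.5 * g, indexing="ij")
pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
inside = (pts[:, 0] / 2) ** 2 + pts[:, 1] ** 2 + (pts[:, 2] / 0.5) ** 2 <= 1
eps_ob, eps_ar = shape_measures(pts[inside], np.ones(inside.sum()))
```

For this test ellipsoid the expected values are ${\varepsilon_{\rm ar}} = 2/1 - 1 = 1$ and ${\varepsilon_{\rm ob}} = \sqrt{2 \cdot 1}/0.5 \simeq 2.83$, which the discrete estimate reproduces to within a few percent.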
Figure \[fig:2\] plots the oblateness (upper panel) and axis-ratio (lower panel) against the central density for group A. For model A45, the oblateness reaches ${\varepsilon_{\rm ob}}\simeq 1.1$ at the epoch of Figure \[fig:1\] (b) ($n_c = 5.9 \times 10^7 \cm$), which means that the high-density region remains almost spherical at this epoch. The axis-ratio grows only to ${\varepsilon_{\rm ar}}\simeq 10^{-3}$ by the same epoch.
Figure \[fig:1\] (c) shows the high-density region at the core formation epoch ($n_c = 5 \times 10^{10}\cm$). The angle between the disk normal and the $z$-axis has increased to $\theta_p \simeq 41 \degr$, although that between the magnetic field lines and the $z$-axis is $\theta_B\simeq 45\degr$, similar to Figure \[fig:1\] (b). Figure \[fig:3\] shows the loci of the magnetic field ${\mbox{\boldmath$B$}}$ ($\theta_B, \phi_B$), the rotation axis ${\mbox{\boldmath$\Omega$}}$ ($\theta_{\Omega}, \phi_{\Omega}$), and the normal vector of the disk ${\mbox{\boldmath$P$}}$ ($\theta_{P}, \phi_{P}$), all averaged within $\rho > 0.1 {\rho_{\rm c}}$. Each vector is projected on the $z=0$ plane and the distance from the origin represents the angle between a given vector and the $z$-axis. For example, a vector parallel to the $y$-axis is plotted at (0$\degr$, 90$\degr$). The dotted line in Figure \[fig:3\] indicates that the magnetic field rotates around the $z$-axis from $\phi_B =0\degr$ to $125 \degr$ during the calculation, keeping the same angle with respect to the $z$-axis ($\theta_B \simeq 45 \degr$). It can also be seen that the disk normal (the solid line) stays parallel to the magnetic field after $n_c > 10^6 \cm$, showing that a disk structure is formed perpendicular to the magnetic field and rotates with the magnetic field. On the other hand, the rotation axis, represented by the broken line, points in the direction $\theta_\Omega \lesssim 15\degr$ before $n_c \lesssim 5 \times 10^{10} \cm$. Thus, the rotation axis maintains its initial direction in the isothermal phase. We summarize the angles of the magnetic field ($\theta_B$, $\phi_B$), rotation axis ($\theta_\Omega$, $\phi_\Omega$), and disk normal ($\theta_P$, $\phi_P$) at the end of the isothermal phase in Table \[table:results\]. 
The angles between the magnetic field and the rotation axis $\psi_{B\Omega}$, those between the magnetic field and disk normal $\psi_{BP}$, and those between the rotation axis and the disk normal $\psi_{\Omega P}$ are also listed in Table \[table:results\]. For example, the angle $\psi_{B\Omega}$ is calculated as $$\psi_{B \Omega} \equiv \mbox{sin}^{-1} \frac{|{\mbox{\boldmath$B$}} \times {\mbox{\boldmath$\Omega$}}| }{|{\mbox{\boldmath$B$}}|\;|{\mbox{\boldmath$\Omega$}}|}.$$ We can confirm from Table \[table:results\] that the magnetic field and disk normal have almost the same directions ($\psi_{B P} = 4 \degr$) and that the disk normal is inclined from the rotation axis at an angle of $\psi_{\Omega P}=42 \degr$ at the end of the isothermal phase. The oblateness reaches ${\varepsilon_{\rm ob}}=1.45$ at the core formation epoch (Figure \[fig:2\] upper panel). This oblateness is smaller than that of model AS (${\varepsilon_{\rm ob}}= 2.9$) of Paper II. The increase in the initial central density from $5 \times 10^2 \cm$ (Paper II) to $5 \times 10^4 \cm$ suppresses the growth of oblateness. The disk forms slowly compared with models with large $\alpha$ or $\omega$, similar to Paper II. The axis-ratio grows only to ${\varepsilon_{\rm ar}}= 7.1 \times 10^{-3}$ at the end of the isothermal phase.
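The angle formula can be transcribed directly (a sketch; note that the $\sin^{-1}$ form always returns an angle in $[0\degr, 90\degr]$, i.e. it does not distinguish parallel from anti-parallel vectors):

```python
import numpy as np

def mutual_angle(u, v):
    """psi = arcsin(|u x v| / (|u| |v|)) in degrees, in [0, 90]."""
    s = np.linalg.norm(np.cross(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arcsin(np.clip(s, 0.0, 1.0)))

# B inclined 45 degrees from the z-axis versus Omega along z
B = np.array([np.sin(np.radians(45.0)), 0.0, np.cos(np.radians(45.0))])
Omega = np.array([0.0, 0.0, 1.0])
```

Applied to the example vectors above, `mutual_angle(B, Omega)` returns $\psi_{B\Omega} = 45\degr$.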
Figure \[fig:1\] (d) shows the central region at 140 yr after the core formation epoch ($n_c = n_{\rm cri}$). At this stage, the magnetic field and disk normal are parallel to each other and co-rotate around the $z$-axis, keeping the angles $\theta_B, \theta_{P} \simeq 45 \degr$. The magnetic field and disk normal vector continue to rotate up to $\phi_B, \phi_P \simeq 125\degr$ by the end of the calculation period ($n_c \sim 10^{14} \cm$). The rotation axis begins to move away from the $z$-axis at the beginning of the adiabatic phase, and has an angle $\theta_\Omega \simeq 20 \degr$ with respect to the $z$-axis at the end of the calculation (asterisk). The three axes (${\mbox{\boldmath$B$}}$, ${\mbox{\boldmath$\Omega$}}$, and ${\mbox{\boldmath$P$}}$) are thus aligned in the adiabatic stage, as seen in MT04. Although the disk becomes thinner in the adiabatic phase, the oblateness is no more than ${\varepsilon_{\rm ob}}\simeq 2$, as shown in Figure \[fig:2\]. When the central density exceeds $n_c \simeq 10^{12} \cm$, the oblateness gradually decreases and the central region becomes spherical owing to the thermal pressure. Figure \[fig:2\] indicates that the oblateness depends only slightly on $\theta_0$. On the other hand, the evolution of the axis-ratio clearly depends on $\theta_0$: the axis-ratio grows faster in models with larger $\theta_0$. The growth of the axis-ratio must be due to non-axisymmetry in the plane perpendicular to the magnetic field in magnetic-force-dominated models. Such non-axisymmetry is induced by the centrifugal force associated with the component of ${\mbox{\boldmath$\Omega$}}$ in the disk plane, which vanishes in an aligned rotator ($\theta_0 =0$). Thus, the axis-ratio does not grow in model A00, for which the initial magnetic field lines are parallel to the rotation axis. This tendency is more marked in group C.
We will discuss the correlation between the axis-ratio and the initial angle $\theta_0$ in §3.2. Although the high-density region rotates only slightly around the rotation axis ${\mbox{\boldmath$\Omega$}}$ in the isothermal collapse phase \[Figures \[fig:1\] (a)–(c)\], it begins to rotate appreciably in the adiabatic phase \[Figures \[fig:1\] (c) and (d)\], when the gravitational collapse is suppressed by the thermal pressure. Comparing Figures \[fig:1\] (c) and (d), we can see that the magnetic field lines and the disk normal rotate around the $z$-axis.
Figure \[fig:4\] shows the evolution of the angles $\theta_B$, $\theta_\Omega$, and $\theta_P$ against the central density for models A00, A30, A45, and A60. The angles $\phi_B$, $\phi_\Omega$, and $\phi_P$ for model A45 are also plotted. In this figure, no differences can be seen among the angles $\theta_B$, $\theta_\Omega$, and $\theta_P$ of model A00, because these angles are all zero. The angles $\theta_B$ (broken lines) show that the direction of the magnetic field with respect to the $z$-axis hardly changes until the end of the calculation period. Thus, the magnetic field maintains its initial zenithal angle in group A. In contrast, the disk normal turns toward the direction of the magnetic field after the central density exceeds $n \gtrsim 10^6 - 10^7 \cm$ in all of models A00, A30, A45, and A60. Hence, the disk forms perpendicular to the magnetic field, irrespective of the initial angle $\theta_0$. On the other hand, the rotation axis maintains its initial direction (the $z$-axis) in the isothermal phase and then begins to move away from the $z$-axis in the adiabatic phase. The angles $\phi_B$ and $\phi_P$ increase slightly in the isothermal phase because the clouds rotate slowly. Figures \[fig:2\] and \[fig:4\] clearly show that the evolution of the oblateness and of the angles $\theta_B$, $\theta_\Omega$, and $\theta_P$ does not qualitatively depend on the initial angle $\theta_0$, although the direction finally attained by the disk normal does depend on $\theta_0$.
At the end of the calculation period for model A45, a disk structure is found in the region $r \lesssim 200$ AU, where the cloud has an oblateness of ${\varepsilon_{\rm ob}}\ge 1.5$. In this region, the direction of the disk normal varies with the disk scale. The disk normal is directed toward $\theta_P \sim 45\degr$, which is almost parallel to the magnetic field, for $r \lesssim 50$ AU. On the other hand, for the disk in the range $r \simeq 100$–$200$ AU, $\theta_P$ increases from $\sim 55\degr$ ($r\lesssim 100$ AU) to $\sim 60 \degr$ ($r\lesssim 200$ AU). Thus, the disk normal gradually becomes inclined and approaches the magnetic field direction moving towards the center. The magnetic field strength also depends on the scale: it increases as the center is approached and has a maximum at the center.
Next, we focus on group B ($\alpha=0.1$, $\omega = 0.01$), which has a magnetic field $10^{1/2}$-times stronger than group A. The left panel of Figure \[fig:5\] shows the same information as Figure \[fig:3\] for model B45 ($\theta_0 = 45 \degr$). The cloud structure, magnetic field lines, and velocity vectors for model B45 at the end of the isothermal phase are shown in the right panel, in which the contours, streamlines, isosurface, and other notations have the same meanings as in Figure \[fig:1\]. In group B, a disk forms perpendicular to the magnetic field, as in group A. The direction of the magnetic field rotates around the $z$-axis, keeping the initial angle $\theta_0$, as in model A45. The directions of the magnetic field and the disk normal also coincide for model B45. The angles $\phi_B$ and $\phi_P$, however, increase only slightly in the isothermal phase compared with model A45. Since the initial rotation speed $\omega = 0.01$ is common to groups A and B, this difference must be due to the growth rate of the angular velocity $\Omega$, which is smaller in group B than in group A, as shown in Paper II. The growth rates of the angular velocity ($\Omega$) and magnetic flux density ($B$) are large when the cloud collapses spherically ($\propto \rho^{2/3}$), while they are small when the cloud collapses laterally ($\propto \rho^{1/2}$) (Paper II). A cloud with a strong magnetic field (group B) forms a disk earlier than one with a weak magnetic field (group A), and thus the growth rate of the angular velocity in group B decreases from $\Omega \propto \rho^{2/3}$ to $\Omega \propto \rho^{1/2}$ at an earlier epoch than in group A. Further, magnetic braking is more effective in group B than in group A, since the initial magnetic field is stronger in group B. Model B45 has $\theta_\Omega = 7 \degr$ at the end of the isothermal phase, while model A45 has $\theta_\Omega = 2 \degr$ (Table \[table:results\]).
That is, the rotation axis is more inclined with respect to the $z$-axis in model B45 than in model A45. This inclination is caused by magnetic braking (MT04), which works more effectively on the component of the angular momentum perpendicular to the magnetic field than on the parallel component. The angular velocity in model B45 is smaller than that in model A45 for two reasons: prompt disk formation and effective magnetic braking. The direction of the rotation axis oscillates violently around the $z$-axis in the adiabatic phase (Figure \[fig:5\]) because magnetic braking is so effective, as noted in MT04 (model MF) and Paper II. In conclusion, groups A and B exhibit a disk perpendicular to the magnetic field lines irrespective of the initial angle $\theta_0$.
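The two growth regimes quoted above ($\propto \rho^{2/3}$ for spherical collapse and $\propto \rho^{1/2}$ for lateral collapse) imply different amplification factors of $B$ and $\Omega$ for a given density increase; a small illustration (the numerical values are examples only):

```python
def amplification(rho_ratio, exponent):
    """Growth factor of B or Omega when it scales as rho**exponent.

    exponent = 2/3 for spherical collapse, 1/2 for lateral (disk)
    collapse, as found in Paper II.
    """
    return rho_ratio ** exponent

# for a factor-100 increase in density:
spherical = amplification(100.0, 2.0 / 3.0)  # ~21.5
lateral = amplification(100.0, 0.5)          # exactly 10
```

The gap between the two factors widens with density, which is why an early-forming disk (group B) ends up with a slower-growing angular velocity than a spherically collapsing cloud (group A).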
### Support-sufficient models
Group F models are magnetic-force dominant, similar to groups A and B, but with a strong magnetic field. Group F has the parameters $\alpha = 1$ and $\omega = 0.05$. Since group F clouds have a stronger magnetic field and more rapid rotation at the initial stage, we set a larger density enhancement factor for group F ($f = 5$) than for groups A and B ($f= 1.68$) to promote cloud contraction. We can confirm that the density enhancement factor does not greatly affect cloud evolution (see models A45 and \[n\], or models B45 and \[o\] in Table \[table:results\]). The models in group F form disks through the magnetic force, as do models A45 and B45. In group F models, the cloud collapses along the magnetic field lines and contraction across the magnetic field lines is suppressed by the strong Lorentz force, while in models A45 and B45, the cloud collapses spherically. The locus of the disk normal (${\mbox{\boldmath$P$}}$) traces that of the magnetic field (${\mbox{\boldmath$B$}}$) in the left panel of Figure \[fig:6n\]. The rotation axis is temporarily inclined to $\theta_{\Omega} \simeq 70\degr$ in the isothermal phase, and then reverts to $\theta_{\Omega} = 10\degr$ by the end of the isothermal phase. The loci of ${\mbox{\boldmath$B$}}$, ${\mbox{\boldmath$P$}}$, and ${\mbox{\boldmath$\Omega$}}$ are similar to those of models MF70 and MF80 of MT04, which have parameters ($\alpha$, $\omega$) = (0.76, 0.14). Although the three vectors (${\mbox{\boldmath$B$}}$, ${\mbox{\boldmath$P$}}$, and ${\mbox{\boldmath$\Omega$}}$) do not completely align in models MF70 and MF80, they are roughly parallel, as seen in the left panel of Figure \[fig:6n\]. The right panel of Figure \[fig:6n\] shows the cloud structure, magnetic field lines, and velocity vectors at the end of the isothermal phase. This panel shows that a disk forms perpendicular to the magnetic field lines, as in groups A and B.
The axis-ratio grows to ${\varepsilon_{\rm ar}}= 0.4$ in model F45 at the end of the isothermal phase (Table \[table:results\]). The non-axisymmetric structure is caused by the centrifugal force associated with the rotation component perpendicular to the magnetic field in magnetic-force-dominant models. The axis-ratio begins to grow after a thin disk is formed. The axis-ratios in group F are larger than those of groups A and B (see Table \[table:results\]) because the disk formation epoch in group F is earlier than in groups A and B.
In groups A, B, and F (magnetic-force-dominant models), a disk is formed perpendicular to the local magnetic field. Our results agree qualitatively with those of MT04 (models SF, MF, and WF).
Rotation-Dominant Disks
-----------------------
In this subsection, we show the cloud evolution of groups C and E. These groups are expected to form rotation-dominant disks in the isothermal phase since $\Omega_0/B_0 > 0.39 G^{1/2} c_s^{-1}$. The cloud collapses along the rotation axis (vertical collapse) in group C (support-sufficient models; $ F \gtrsim 1$), while it collapses spherically in group E (support-deficient models; $F \lesssim 1$).
### Support-deficient models
The models of group E are rotation dominant ($\Omega_0/B_0 > 0.39 G^{1/2} c_s^{-1}$), although they have a slow rotation rate of $\omega = 0.05$. Compared with group A, group E has a magnetic field $10^{1/2}$-times smaller but a 5-times larger angular velocity. Model E45 has parameters $\alpha = 0.001$, $\omega=0.05$, and $\theta_0 = 45 \degr$.
The models of group E form a disk through the centrifugal force. The left panel of Figure \[fig:7n\] shows that the rotation axis maintains its initial direction $\theta_\Omega \simeq 0\degr$ and the disk normal is also parallel to the $z$-axis (the rotation axis) in the isothermal phase. Therefore, a disk forms perpendicular to the rotation axis, not perpendicular to the magnetic field. The angle $\theta_B$ increases gradually in the isothermal phase. This means that the magnetic field tends to be aligned along the direction [*perpendicular*]{} to the rotation axis. However, the isothermal phase ends before the magnetic field lines are completely directed in the perpendicular direction. Since the models in group E have initial states inside the [$B$-$\Omega$ ]{}relation line,[^2] the magnetic field gradually becomes inclined as the cloud collapses slowly in a spherically symmetric fashion. Although the direction of the magnetic field does not coincide with the directions of the rotation axis and the disk normal in the isothermal phase, the magnetic field, rotation axis, and disk normal begin to converge after the core formation epoch. As shown in MT04, magnetic braking works preferentially on the component of the angular momentum perpendicular to the magnetic field, which drives the alignment of ${\mbox{\boldmath$B$}}$ and ${\mbox{\boldmath$\Omega$}}$.
### Support-sufficient models
Figures \[fig:8n\] (a)–(d) show the cloud evolution of model C45 in views along the $y$-axis (edge-on view; upper panels) and along the $z$-axis (face-on view; lower panels). Model C45 has the parameters $\alpha=0.01$, $\omega=0.5$, and $\theta_0=45 \degr$. Compared with group A, group C has the same magnetic field but has a 50-times larger angular velocity.
Figure \[fig:8n\] (a) shows the cloud structure at $n_c = 6.3 \times 10^4 \cm$. The spherical cloud collapses along the rotation axis (i.e. the $z$-axis), and an oblate core forms at the center (upper panel). The magnetic field lines are slightly squeezed at the center (upper panel) and are rotated by $\phi_B \simeq 45 \degr$ from the initial stage $\phi_B=0\degr$ \[Figure \[fig:8n\] (a) lower panel\]. Figure \[fig:8n\] (b) shows the core shape at $n_c = 6 \times 10^5 \cm$: a thin disk has formed in the $x$-$y$ plane. In this model, an extremely thin disk forms in the isothermal phase (${\varepsilon_{\rm ob}}\simeq 10 $ at $n_c = 9 \times 10^6 \cm$, as seen in Figure \[fig:9n\]). In model CS of Paper II, a thin disk forms promptly in the early isothermal phase, because the lateral collapse is suppressed by a strong centrifugal force and hence the cloud collapses only vertically along the $z$-axis. In model C45, the cloud similarly collapses vertically and forms a disk promptly in the isothermal phase; the weak magnetic field in this model hardly affects the cloud evolution. Moreover, the magnetic field is compressed in the direction of the rotation axis and begins to run along the disk surface, as shown in Figure \[fig:8n\] (b). Outside the thin disk, the magnetic field lines run in the direction $\theta_B \simeq 45\degr$. That is, the magnetic field lines emerge from the lower-left side of the cloud and escape from the disk in the upper-right direction. This configuration of the magnetic field appears only in non-aligned rotators. In contrast, the disk is vertically threaded by the magnetic field along the rotation axis in model C00. The latter configuration is seen in all the models studied in Paper II, which was restricted to aligned rotators.
Figure \[fig:10n\] shows the direction of ${\mbox{\boldmath$B$}}$, ${\mbox{\boldmath$P$}}$, and ${\mbox{\boldmath$\Omega$}}$ for model C45. The inset at the lower-left corner is an enlarged view of the center. This shows that the direction of the magnetic field gradually moves away from the $z$-axis and towards $\theta_B \simeq 90 \degr$. Then, the direction of the magnetic field rotates around the $z$-axis, keeping an angle of $\theta_B \simeq 90 \degr$. On the other hand, the rotation axis hardly changes its direction from the initial state and remains directed along the $z$-axis. The disk normal is also oriented along the $z$-axis (i.e. the rotation axis). From Figure \[fig:8n\] (b), it can be seen that a disk forms by the effect of the rotation and the disk normal direction coincides with the rotation axis.
Figure \[fig:8n\] (c) shows the central region at the core formation epoch ($n_c = 2.3 \times 10^{11} \cm$). It can be seen from this figure that a non-axisymmetric structure has formed and the central core has changed its shape from a circular disk \[lower panels of Figures \[fig:8n\] (a) and (b)\] to a bar \[Figure \[fig:8n\] (c) lower panel\]. The magnetic field lines run laterally, i.e. $|B_r|$, $|B_\phi| \gg |B_z|$, in the adiabatic phase \[Figures \[fig:8n\] (c) and (d)\]. Figure \[fig:8n\] (d) shows an adiabatic core when the central density has reached $n_c = 6.9 \times 10^{14} \cm$. A spiral structure is seen in this figure, which indicates that a non-axisymmetric pattern has formed even though no explicit non-axisymmetric perturbation is imposed at the initial stage. (Although non-axisymmetric patterns also appear in some models of Papers I–III, it should be noted that those patterns are due to a non-axisymmetric perturbation added to the density and magnetic field at the initial stage.) The magnetic field lines are considerably twisted in Figure \[fig:8n\] (d). It should be noted that in this model, the inclined magnetic field induces non-axisymmetric perturbations in place of an explicit initial perturbation.
Figure \[fig:11n\] shows the magnetic field lines, the shape of the core, and the velocity vectors on the $z=0$ plane in the adiabatic phase for model C00. This figure shows that a ring is formed, as found in Paper III, without any growth of a non-axisymmetric pattern. In Papers I–III, we assumed a cylindrical cloud in hydrostatic equilibrium, in which the magnetic field and angular velocity are functions of the radius $r$ in cylindrical coordinates. On the other hand, the cloud is assumed to be spherical with a uniform magnetic field and angular velocity at the initial stage in model C00. In spite of these differences, a similar ring structure appears in both model C00 and model CS of Paper II. The lower panel of Figure \[fig:9n\] plots the evolution of the axis-ratio against the central density for group C. The axis-ratios for models C30, C45, and C60 begin to grow after a thin disk is formed ($n_c \gtrsim 5 \times 10^6 \cm$) and reach ${\varepsilon_{\rm ar}}\simeq 0.5$ at the core formation epoch. The axis-ratio grows to ${\varepsilon_{\rm ar}}\simeq 1$ at $n_c = 10^{12} \cm$ in models C30, C45, and C60, while no non-axisymmetric pattern appears in model C00. This shows that the non-axisymmetric pattern arises from the anisotropy of the Lorentz force around the rotation axis. A bar structure is formed by the non-axisymmetric force exerted by the inclined magnetic field, as shown in Figures \[fig:8n\] (b)–(d). This is confirmed by the fact that the short axis of the density distribution on the $z=0$ plane (the disk mid-plane) and the bar pattern rotate together with the magnetic field lines. The axis-ratio (i.e., the degree of non-axisymmetry) grows in proportion to $\rho^{1/6}$ ($10^7 \lesssim n_c \lesssim 10^{10} \cm$ in the lower panel of Figure \[fig:9n\]), as found by @hanawa99.
Since the lateral component of the magnetic field, $|{\mbox{\boldmath$B$}}|\, \sin \theta_0$, is larger for larger $\theta_0$, the axis-ratio grows more in models with large $\theta_0$ (Figure \[fig:9n\], lower panel).
The evolution of the angles $\theta_B$, $\theta_\Omega$, $\theta_P$, and $\phi_B$ for group C are plotted against the central density in Figure \[fig:12n\]. The angle between the magnetic field and $z$-axis becomes $\theta_B \simeq 90\degr$ even in the early phase of isothermal collapse for all the models C30, C45, and C60. The rotation axis and the disk normal maintain their angles $\theta_\Omega, \ \theta_P \simeq 0\degr$. Figures \[fig:4\] and \[fig:12n\] show that in both magnetic- and rotation-dominant models the directions of the magnetic field, rotation axis, and disk normal are qualitatively the same for models with the same $\alpha$ and $\omega$, irrespective of $\theta_0$ in the range $30\degr \le \theta_0 \le 60\degr$.
Figure \[fig:13n\] shows the magnetic field lines, velocity vectors, and density distribution for the epoch $t=1.52\times 10^6$ yr ($n_c = 1.5 \times 10^9 \cm$) for model C30. Note that the box scale and level of grid are different for each panel. The spatial scale of each successive panel is different by a factor of four and thus the scale between panels (a) and (d) is different by a factor of 64. The magnetic field has an angle $\theta_B' \sim 30\degr$ in panel (a), where $\theta_B'$ is defined as the angle between the volume average magnetic field in the grid and the $z$-axis. Although the magnetic field lines in the thin disk run parallel to the disk surface ($\theta_B \simeq 90\degr$; Figure \[fig:13n\] [*b*]{}), the magnetic field lines outside the disk preserve the ambient direction of $\theta_B' \simeq 30\degr$. Closer to the cloud center, the magnetic field lines are twisted near the disk surface in the azimuthal direction, as seen in Figures \[fig:13n\] (c) and (d). As a result, the directions of the magnetic field are considerably different for different scales or densities even in the same cloud.
Disk Formation Affected Both by Magnetic Field and Rotation
-----------------------------------------------------------
In this subsection, we detail the evolution of group D, in which both the magnetic field and angular velocity are crucial for cloud evolution and disk formation. Thus, group D is located near the border between the magnetic-force- and rotation-dominant models, $\Omega_0/B_0 \simeq 0.39 G^{1/2} c_s^{-1}$, and has the parameters $\alpha = 1$ and $\omega=0.5$. Group D has the same angular velocity as group C, but has a 10-times larger magnetic field. The clouds in this group have both a strong magnetic field and rapid rotation.
The left panel of Figure \[fig:14n\] shows the directions of ${\mbox{\boldmath$B$}}$, ${\mbox{\boldmath$\Omega$}}$, and ${\mbox{\boldmath$P$}}$ for model D45. It can be seen that the direction of the magnetic field oscillates in the range $\theta_B \simeq 45 \degr$ – $70 \degr$. Although the direction of the disk normal approaches the magnetic field direction, the two are not completely aligned, in contrast to groups A and B. The direction of the rotation axis also oscillates considerably in the isothermal phase and is at $\theta_\Omega = 63 \degr$ at the core formation epoch. This value is almost the same as the angles of the magnetic field ($\theta_B = 51 \degr$) and the disk normal ($\theta_P = 49 \degr$), but the azimuthal coordinate $\phi_\Omega =39 \degr$ is very different from those of the magnetic field ($\phi_B = 283\degr$) and the disk normal ($\phi_P = 275\degr$). Thus, the direction of the rotation axis differs greatly from those of the magnetic field and disk normal. The disk normal is nearer to the magnetic field ($\psi_{BP} = 6\degr$) than to the rotation axis ($\psi_{\Omega P} = 85\degr$) at the core formation epoch, as shown in Table \[table:results\]. Thus, the disk normal appears to be parallel to the magnetic field in model D45 (Figure \[fig:14n\] right panel). On the other hand, the disk normal is nearer to the rotation axis ($\psi_{\Omega P} = 6 \degr$) than to the magnetic field ($\psi_{B P} = 72 \degr$) in model D30. In model D60, the disk normal is close to both the magnetic field ($\psi_{B P} = 29 \degr$) and the rotation axis ($\psi_{\Omega P} = 33 \degr$). In these clouds, the directions of the magnetic field, rotation axis, and disk normal oscillate in the isothermal phase. Whether a disk is aligned perpendicular to the magnetic field or to the rotation axis depends on $\theta_0$. The cloud evolution is influenced by both the magnetic field and the centrifugal force in group D.
Therefore, we cannot clearly classify models in group D into either magnetic-force- or rotation-dominant models.
Magnetic Flux–Spin Relation
===========================
Amplification of the Magnetic Field and Angular Velocity
--------------------------------------------------------
The magnetic field strength and angular rotation speed increase as a cloud collapses. We have found in Paper II that the magnetic field strength normalized by the gas pressure and the angular velocity normalized by the free-fall timescale satisfy Equation (\[eq:UL\]) after a contracting disk forms in the isothermal phase for an aligned rotator model. In this subsection, we investigate the above relation for the case when the magnetic field is not necessarily parallel to the rotation axis (non-aligned rotator models). The evolution loci of the cores are plotted in Figure \[fig:15n\], where the horizontal and vertical axes were calculated using the central values of ${\rho_{\rm c}}$, $B_c$, and $\Omega_c$. In Figure \[fig:15n\] a thick band indicating the equation $$\frac{\Omega_c^2}{(0.2)^2 \; 4 \pi G \rho_c} +
\frac{B_{\rm c}^2}{(0.36)^2 \; 8 \pi c_s^2 \rho_c} =1
\label{eq:UL2}$$ is also drawn, where the numerators of the left-hand side are defined as $B_c = (B_{x,c}^2 + B_{y,c}^2 + B_{z,c}^2)^{1/2}$ and $\Omega_c = (\Omega_{x,c}^2 + \Omega_{y,c}^2 + \Omega_{z,c}^2)^{1/2}$, where the suffix $c$ indicates the values at the center.
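In terms of the two normalized quantities plotted in Figure \[fig:15n\], the left-hand side of Equation (\[eq:UL2\]) can be evaluated directly; collapsing clouds converge to a value of unity (a sketch with an illustrative function name):

```python
def b_omega_lhs(omega_norm, b_norm):
    """Left-hand side of Eq. (UL2) in the normalized variables
    omega_norm = Omega_c (4 pi G rho_c)^{-1/2} and
    b_norm     = B_c (8 pi c_s^2 rho_c)^{-1/2}.
    Collapsing clouds converge to b_omega_lhs(...) = 1.
    """
    return (omega_norm / 0.2) ** 2 + (b_norm / 0.36) ** 2
```

The two intercepts of the relation line are recovered at (0.2, 0) for a purely rotating cloud and (0, 0.36) for a purely magnetized one.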
First, we consider the aligned rotator models (models with $\theta_0 = 0$) and compare them with those of Paper II. Comparing the solid lines ($\theta_0 = 0 \degr$ models) of Figure \[fig:15n\] with those of Figure 12 in Paper II, we can see that the evolution locus is almost the same. That is, (1) the points located inside the [$B$-$\Omega$ ]{}relation line move from the lower-left to the upper-right, whereas (2) those distributed outside the line move from the upper-right to the lower-left; (3) the slope of each evolution path is $ d\log \Omega _{\rm c} / d \log B_{\rm c} \simeq 1 $; and (4) the evolution paths finally converge to Equation (\[eq:UL2\]) in the isothermal phase. However, the evolution paths for models C00 and D00, which are located outside the [$B$-$\Omega$ ]{}relation line, have slightly smaller angular velocities \[$\Omega_c \, (4 \pi G {\rho_{\rm c}})^{-1/2} \simeq 0.1$–$0.15 $\] than those of models C and D of Paper II \[$\Omega_c \, (4 \pi G {\rho_{\rm c}})^{-1/2} \simeq 0.2$\]. This seems to be due to the fact that the initial cloud considered in this paper is more unstable against gravity than that of Paper II; the ratio of thermal energy to gravitational energy is $\alpha_0 = 0.168$ in the present paper, while $\alpha_0 \simeq 0.6$–$0.7$ in Paper II. Although the clouds collapse vertically in models C, D, C00, and D00 until the magnetic field and rotation satisfy the [$B$-$\Omega$ ]{}relation, the vertical collapse overshoots the [$B$-$\Omega$ ]{}relation line for the initially more unstable clouds of models C00 and D00. This difference can also be seen by comparing the density increase rates. The increase of the central density can be approximated by $\rho_c \simeq 5.1 / [4 \pi G (t-t_f)^2]$ in model C00, where $t_f$ is the time at which the central density becomes infinite in the isothermal phase.
This density increase rate is 1.75-times slower than that of the similarity solution ($\rho_c = 1.667/[4 \pi G (t-t_f)^2]$; Larson 1969; Penston 1969). On the other hand, the density increase can be approximated by $\rho_c \simeq 6.2 / [4 \pi G (t-t_f)^2]$ for model C of Paper II, implying that the density increases $(6.2/5.1)^{1/2} \simeq 1.1 $-times faster in model C00 than in model C. This is a natural consequence of the lower $\alpha_0$ of model C00. The difference is also seen in the evolution of the oblateness. For example, the oblateness reaches ${\varepsilon_{\rm ob}}\simeq 10$ in model C00, while it reaches only ${\varepsilon_{\rm ob}}\simeq 4$ in model C. Thus, a thinner disk is formed in model C00, which has a more unstable initial state.
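The quoted factors can be checked directly: since $\rho_c = C/[4\pi G(t-t_f)^2]$, the time needed to reach a given central density scales as $C^{1/2}$, so coefficient ratios enter as square roots. A minimal numerical check (our own, using the coefficients quoted above):

```python
import math

# rho_c = C / [4*pi*G*(t - t_f)**2]  =>  collapse timescale scales as C**0.5
C_LP  = 1.667  # Larson-Penston similarity solution
C_C00 = 5.1    # model C00 (this paper)
C_C   = 6.2    # model C (Paper II)

slowdown_vs_LP = math.sqrt(C_C00 / C_LP)  # C00 relative to the similarity solution
speedup_vs_C   = math.sqrt(C_C / C_C00)   # C00 relative to model C of Paper II
print(round(slowdown_vs_LP, 2), round(speedup_vs_C, 2))  # prints: 1.75 1.1
```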
The evolution locus of model F00 moves towards the upper-right in the period $n_c \lesssim 10^6 \cm$ in Figure \[fig:15n\]. This indicates that the cloud collapses spherically in this period, as for models A00 and B00, because the cloud is more unstable against gravity than those of groups C and D. Group F has the same thermal energy as groups C and D, but has smaller rotational and magnetic energies, as shown in Table \[table:init\]. Thus, in model F00, the cloud collapses spherically in the early phase of the isothermal collapse. However, the collapse becomes anisotropic in the late phase of the isothermal collapse because the magnetic force becomes effective. Although there are a few differences between the models, the convergence to the [$B$-$\Omega$ ]{}relation curve is evident, irrespective of the initial cloud shape and the distributions of the density, magnetic field, and angular velocity, when the magnetic field is parallel to the rotation axis. This is natural because the [$B$-$\Omega$ ]{}relation is satisfied for the central part of the cloud, and information on the outer part of the cloud is lost as the cloud collapses in the isothermal phase, as noted by @larson69.
Next, we consider the non-aligned rotator models ($\theta_0 \ne 0$). In Figure \[fig:15n\], the dotted, broken, and dash-dotted lines show the evolution paths of models with $\theta_0$ = 30$\degr$, 45$\degr$, and 60$\degr$, respectively. The points located inside the [$B$-$\Omega$ ]{}relation line (non-aligned rotator models in groups A, B, and E) move from the lower-left to the upper-right, as for the aligned rotator models. These models have almost the same loci as the aligned rotator models and converge to the [$B$-$\Omega$ ]{}relation line. On the other hand, models located outside the [$B$-$\Omega$ ]{}relation line (non-aligned rotator models in groups C, D, and F) evolve differently from the aligned rotators. The non-aligned rotator models C30, C45, and C60 evolve towards the [*lower-right*]{}, then reverse their direction after they reach the [$B$-$\Omega$ ]{}relation line. Thus, the evolution of the angular velocity in the non-aligned rotator models C30, C45, and C60 is the same as that of the aligned rotator model C00, while the evolution of the magnetic field is different: the magnetic field strength normalized by the gas pressure increases in the non-aligned rotator models, but decreases in the aligned rotator models. The evolution paths of the non-aligned rotator models of group F (F30, F45, and F60) are toward the upper-left. Thus, the magnetic field strength normalized by the thermal pressure approaches the [$B$-$\Omega$ ]{}relation line in these models, while the angular velocity normalized by the free-fall timescale continues to increase in the isothermal collapse phase. The evolution paths for models D30, D45, and D60 oscillate in the $B_c\, (8\pi c_s^2 {\rho_{\rm c}})^{-1/2}$–$\Omega_c \, (4 \pi G {\rho_{\rm c}})^{-1/2}$ plane, moving away from the [$B$-$\Omega$ ]{}relation line.
Generalized Magnetic Flux–Spin Relation
---------------------------------------
The growth of the magnetic field strength and angular velocity depends on the geometry of the collapse (vertical collapse to form a disk, spherical collapse, or lateral collapse in a disk). Figure \[fig:16n\] is similar to Figure \[fig:15n\], but uses the components of the magnetic field strength and the angular velocity parallel to the disk normal. That is, $B_{cp}$ on the abscissa and $\Omega_{cp}$ on the ordinate are defined as $$B_{cp} = {\mbox{\boldmath$B$}} \cdot {\mbox{\boldmath$p$}},
\label{eq:bp}$$ $$\Omega_{cp} = {\mbox{\boldmath$\Omega$}} \cdot {\mbox{\boldmath$p$}},
\label{eq:wp}$$ where ${\mbox{\boldmath$B$}}$, ${\mbox{\boldmath$\Omega$}}$, and ${\mbox{\boldmath$p$}}$ represent the magnetic flux density vector, the angular velocity vector, and the unit vector of the disk normal, respectively. The starting points of each locus are different even for models with the same $\alpha$ and $\omega$ but different $\theta_0$ (cf. models A00 and A45), since the angle of the magnetic field at the initial stage and the formation epoch of the disk structure depend on $\theta_0$. We plotted the loci according to Equations (\[eq:bp\]) and (\[eq:wp\]) subsequent to the formation of a disk structure. The figure shows that all the evolution paths of models with the same $\alpha$ and $\omega$ (e.g. A00 and A45) move in the same direction irrespective of $\theta_0$, even though the starting points are different.
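Equations (\[eq:bp\]) and (\[eq:wp\]) are plain scalar projections onto the unit disk normal. A minimal sketch (our own illustration; the function name is hypothetical):

```python
import math

def project_onto_normal(B, Omega, p):
    """Return (B_cp, Omega_cp) = (B . p, Omega . p) for disk normal p.

    B, Omega, p : 3-component sequences; p is normalized here so the
    result is a true component even if the caller passes a non-unit vector.
    """
    norm = math.sqrt(sum(c * c for c in p))
    p = tuple(c / norm for c in p)
    B_cp = sum(b * c for b, c in zip(B, p))
    Omega_cp = sum(o * c for o, c in zip(Omega, p))
    return B_cp, Omega_cp
```

For an aligned rotator with ${\mbox{\boldmath$p$}} = \hat z$, these reduce to the $z$-components, recovering the quantities of Figure \[fig:15n\].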
Clouds for the models inside the [$B$-$\Omega$ ]{}relation line (support-deficient models: groups A, B, and E) evolve towards the upper-right, regardless of the initial angle $\theta_0$ in Figure \[fig:16n\]. Clouds having parameters inside the [$B$-$\Omega$ ]{}relation line collapse spherically until the magnetic field strength and angular velocity reach the [$B$-$\Omega$ ]{}relation line, as shown in §4.1. The clouds evolve isotropically (spherically), because the anisotropy caused by the magnetic or centrifugal force is not induced before the clouds reach the [$B$-$\Omega$ ]{}relation line for small initial magnetic and rotational energies. Thus, the evolution direction of the non-aligned rotator models A45, B45, and E45 is the same as that of the non-aligned rotator models A00, B00, and E00 in Figure \[fig:15n\], though the angles between the magnetic field and rotation axis are different. This is because the anisotropy grows only slightly during spherical collapse.
For the support-sufficient models, the evolution paths of the non-aligned rotator models C45, D45, and F45 are not the same as those of the aligned rotator models C00, D00, and F00 in Figure \[fig:15n\]. The cloud collapses vertically (vertical collapse) for groups C, D, and F, as shown in §3.2 and Paper II. Consider a rapidly rotating, weakly magnetized cloud in which the magnetic field is not parallel to the rotation axis and does not affect the cloud evolution, as in model C45: the cloud collapses along the rotation axis, and the lateral collapse is suppressed by the centrifugal force. The cloud then forms a disk perpendicular to the rotation axis. The magnetic field and angular velocity parallel to the disk normal increase only slightly during this collapse ($\Omega_{cp}$ and $B_{cp} \approx$ constant). Thus, as the collapse proceeds, the evolution path is toward the lower-left, as shown in Figure \[fig:16n\] and also seen for model C00. The magnetic field perpendicular to the disk normal (parallel to the disk) is amplified with the cloud collapse when the cloud has a magnetic field that is not parallel to the disk normal at the initial stage. Including this component of the magnetic field, the evolution path is toward the lower-right, as shown in Figure \[fig:15n\] \[the numerator of the second term of Equation (12) increases\]. This is why the normalized magnetic field strength in model C45 of Figure \[fig:15n\] increases in the isothermal collapse phase. The case for group F is similar to that of group C; however, the cloud evolution is mainly controlled by the magnetic field for group F, not by the centrifugal force as in group C. Thus, the roles of the magnetic field and angular velocity are reversed. In group F, clouds collapse along the magnetic field lines and disks are formed perpendicular to the magnetic field.
The angular velocity [*perpendicular*]{} to the magnetic field or disk normal is then amplified and the angular velocity parallel to the magnetic field increases slightly.
In Figure \[fig:16n\], the ordinate $\Omega_{cp} \, (4 \pi G {\rho_{\rm c}})^{-1/2}$ indicates the ratio of the rotation and gravitational energies, while the abscissa $B_{cp}\, (8\pi c_s^2 {\rho_{\rm c}})^{-1/2}$ indicates the ratio of the magnetic and thermal energies. Since these ratios decrease in the aligned rotator models in proportion to ${\rho_{\rm c}}^{-1/2}$ for the vertical collapse phase, the thermal and gravitational energies catch up with the magnetic and rotational energies eventually. The cloud then reaches the [$B$-$\Omega$ ]{}relation line and the geometry of the collapse changes from vertical to lateral in the disk. Thus, a balance between the magnetic, rotational, thermal, and gravitational forces is achieved in a collapsing cloud. This type of evolution occurs in groups C, D, and F.
When the evolution loci for groups A and B approach the [$B$-$\Omega$ ]{}relation line, the evolution depends on $\theta_0$ even for the same $\alpha$ and $\omega$. For groups A and B, both ratios $\Omega_c \, (4 \pi G {\rho_{\rm c}})^{-1/2}$ and $B_c\, (8\pi c_s^2 {\rho_{\rm c}})^{-1/2}$ increase in proportion to ${\rho_{\rm c}}^{1/6}$ for a spherical collapse in which the cloud has small magnetic and rotational energies. Thus, the magnetic and rotational energies become comparable to the gravitational and thermal energies during the contraction. The cloud then reaches the [$B$-$\Omega$ ]{}relation line and the geometry of the collapse changes from spherical to lateral. As a result, the anisotropy in $\theta$ appears as the cloud reaches the [$B$-$\Omega$ ]{}relation line. Groups A and B form magnetic-force-dominant disks, in which the direction of the disk normal is controlled by the magnetic field. The magnetic field strength normalized by gas pressure does not increase or decrease after a cloud reaches the [$B$-$\Omega$ ]{}relation line in Figure \[fig:15n\]. The angular velocity, however, can increase even after the cloud has reached the [$B$-$\Omega$ ]{}relation line in Figure \[fig:15n\], because the rotation axis is not parallel to the disk normal. Differences in the position of the end point in models A00, A30, A45, and A60 are caused by this mechanism. Magnetic braking is also effective in these models. For these reasons, the evolution paths begin to diverge as they approach the [$B$-$\Omega$ ]{}relation line.
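The two scaling laws quoted above follow from flux freezing and angular momentum conservation. For spherical collapse, $r \propto \rho^{-1/3}$, so $B \propto r^{-2} \propto \rho^{2/3}$ and $\Omega \propto r^{-2} \propto \rho^{2/3}$, giving normalized ratios $\propto \rho^{2/3 - 1/2} = \rho^{1/6}$; for vertical collapse at fixed disk radius, $B_{cp}$ and $\Omega_{cp}$ stay nearly constant, so the ratios fall as $\rho^{-1/2}$. A numerical check of the exponents (our own sketch, with the scalings above taken as assumptions):

```python
import math

def log_slope(f, rho_a, rho_b):
    """d log f / d log rho estimated between two densities."""
    return math.log(f(rho_b) / f(rho_a)) / math.log(rho_b / rho_a)

rho1, rho2 = 1.0, 1.0e6  # arbitrary density increase; constants drop out

# spherical collapse: B ~ rho^(2/3), normalized ratio ~ rho^(2/3) * rho^(-1/2)
spherical = log_slope(lambda rho: rho**(2.0 / 3.0) * rho**-0.5, rho1, rho2)
# vertical collapse: B_cp ~ const, normalized ratio ~ rho^(-1/2)
vertical = log_slope(lambda rho: 1.0 * rho**-0.5, rho1, rho2)
print(round(spherical, 3), round(vertical, 3))  # prints: 0.167 -0.5
```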
We have plotted the normalized magnetic field strength and angular velocity at the initial stage for models WF, MF, and SF of MT04 as diamonds in Figure \[fig:16n\]. MT04 showed that disks are formed perpendicular to the local magnetic field in all of their non-aligned rotator models. This is natural because models WF, MF, and SF are distributed in the magnetic-force-dominant region in Figure \[fig:16n\].
In summary, the geometry of the collapse determines the amplification of the magnetic field and angular velocity. The gas cloud in the support-deficient region amplifies the magnetic field strength and angular velocity during the contraction, regardless of the initial angle $\theta_0$. In these models, aligned and non-aligned rotators evolve similarly. On the other hand, for the support-sufficient models, aligned and non-aligned rotators evolve differently. In the rotation-dominated models, the magnetic field perpendicular to the rotation axis is amplified. This plays a role as a non-axisymmetric perturbation in forming a bar or spiral structure. Even in non-aligned rotator models, the generalized magnetic flux–spin relation holds in contracting disks formed in the isothermal regime.
Disk Formation by Magnetic Field or Rotation
--------------------------------------------
We have shown that a cloud forms either a magnetic-force-dominant or a rotation-dominant disk according to its initial conditions and that cloud evolution can be well understood and classified using the generalized [$B$-$\Omega$ ]{}relation. In this subsection, we show how to specify the parameter regions in which a disk is formed under the influence of either the magnetic field or rotation. The magnetic field, rotation axis, and disk normal have different directions when the magnetic field is not parallel to the rotation axis at the initial stage, although they are identical for an aligned rotator. We can assess the dominant force for disk formation using the evolution loci of the directions of the magnetic field, rotation axis, and disk normal. When a disk is formed perpendicular to the magnetic field, the disk normal moves in association with the small-scale magnetic field in magnetic-force-dominant models (Figure \[fig:3\]). A similar evolution is seen in rotation-dominated models. We compare the evolution of 16 models with different $\alpha$ and $\omega$ but the same $\theta_0$. We choose the initial angle between the magnetic field and rotation axis as $\theta_0 = 45\degr$, because the cloud evolution does not depend on $\theta_0$, as shown in §3.1 and §3.2. The angles ($\theta_B$, $\theta_\Omega$, $\theta_P$), ($\phi_B$, $\phi_\Omega$, $\phi_P$), and ($\psi_{B \Omega}$, $\psi_{B P}$, $\psi_{\Omega P}$); the dominant force for forming a disk ($B$ or $\Omega$); and the axis-ratio (${\varepsilon_{\rm ar}}$) at the end of the isothermal phase are all listed in Table \[table:results\]. The dominant force for disk formation is determined from the loci of the magnetic field, rotation axis, and disk normal in the isothermal phase. For example, the locus of the disk normal moves together with that of the magnetic field in magnetic-force-dominant models.
The shapes of the clouds at the core formation epoch are shown in Figure \[fig:17n\]. In this figure, each panel is positioned based on the initial magnetic field strength and angular velocity. The figure shows that the disk normals are almost parallel to the $z$-axis in the upper-left region, while they are parallel to the magnetic field in the lower-right region. This is natural since the cloud is initially rotating rapidly and magnetized weakly in the upper-left region, while it rotates slowly and is magnetized strongly in the lower-right region.
In order to compare the cloud evolutions with different initial angular velocities, we focus on four models with $\alpha$ = 0.01 and different $\omega$ = 0.3, 0.1, 0.03, and 0.01 \[models (b), (f), (j), and (n)\], shown in Figure \[fig:17n\]. These models are arranged in the second column of Figure \[fig:17n\]. The disk normal is oriented along the $z$-direction, and the magnetic field lines are inclined from the $z$-axis in the models with large $\omega$ \[(b) and (f)\]. Each model has a small angle between the rotation axis and the $z$-axis \[$\theta_\Omega = 2 \degr$ (b), $3 \degr$ (f), 2$\degr$ (j), and 5$\degr$ (n)\]. Thus, the cloud evolves while maintaining the direction of the initial rotation axis. On the other hand, the angle between the magnetic field and the $z$-axis increases with increasing initial angular velocity $\omega$ \[$\theta_B$ = $ 86 \degr$ (b), $83 \degr$ (f), 54$\degr$ (j), and 46$\degr$ (n)\]. The magnetic field in models (b) and (f) is almost perpendicular to the rotation axis and the disk normal ($\psi_{B \Omega}$, $\psi_{B P} \simeq 90 \degr$). It appears that the disk is formed by the effect of rotation in models (b) and (f), because the angle between the rotation axis and disk normal is small \[$\psi_{\Omega P} = 2\degr$ (b) and $6\degr$ (f)\]. On the other hand, the disk seems to be formed by the magnetic force in models (j) and (n), because the angles $\psi_{B P}$ \[$\psi_{B P} = 25\degr$ (j) and $4\degr$ (n)\] are smaller than those of $\psi_{\Omega P}$ \[$\psi_{\Omega P} = 28\degr$ (j) and $39\degr$ (n)\]. The axis-ratio increases with increasing $\omega$ \[${\varepsilon_{\rm ar}}= 0.51$ (b), 0.18 (f), $0.91\times 10^{-2}$ (j), and $6\times 10^{-3}$ (n)\], because the disk forms earlier in a model with larger $\omega$.
Next, we focus on four models with the same $\omega = 0.1$ but different $\alpha$ = 0.001, 0.01, 0.1, and 1 \[models (e), (f), (g), and (h)\], in order to compare cloud evolution with different initial magnetic field strengths. These models are arranged in the second row of Figure \[fig:17n\]. The disk normals are considerably inclined from the $z$-axis in the models with strong magnetic fields, (g) and (h). This inclination indicates that the disk is formed by the effect of the magnetic force. The disk is perpendicular to the rotation axis in model (e) \[($\psi_{BP}$, $\psi_{\Omega P}$) = (83$\degr$, 1$\degr$)\], while the disk is perpendicular to the local magnetic field rather than the rotation axis in model (h) \[($\psi_{BP}$, $\psi_{\Omega P}$) = (2$\degr$, 11$\degr$)\].
The angle between the rotation axis and the disk normal ($\psi_{\Omega P}$) is smaller in model (h) ($\psi_{\Omega P} = 11 \degr$) than in model (g) ($\psi_{\Omega P} = 48 \degr$). This seems to be due to the fact that the angular momentum perpendicular to the magnetic field in models with $\alpha > 0.1$ is effectively removed by the magnetic braking process, so that the rotation axis tends to align with the magnetic field and the disk normal. As a result, the rotation axis is considerably inclined from the initial direction in models with a strong magnetic field \[$\theta_{\Omega}$ = $0\degr$ (e), $3\degr$ (f), $16\degr$ (g), and $32\degr$ (h)\]. The magnetic field is parallel to the disk in models with small $\alpha$ \[e.g. $\psi_{BP} = 83\degr$ (e)\], while the magnetic field maintains its initial direction in models with large $\alpha$ \[e.g. $\theta_B = 43\degr$ (h)\]. The azimuthal directions of the magnetic field ($\phi_B$) and disk normal ($\phi_P$) coincide in models (e), (f), (g), and (h). In these models, the gas contracts along the magnetic field lines onto the disk mid-plane, and then a non-axisymmetric (i.e. bar) structure is formed perpendicular to the magnetic field, as discussed in §3.2. The non-axisymmetry tends to increase with the initial magnetic field strength \[see models (e), (f), and (g)\], except for model (h).
The role of the magnetic field in magnetic-force-dominant models is similar to that of the centrifugal force in rotation-dominant models. The disk orientation is essentially determined by the direction of the dominant force. However, there is at least one quantitatively different point between the two types of models. Namely, the angular momentum is transferred by the magnetic braking in the magnetic-dominant models. The dominant force ($B$ or $\Omega$) for disk formation is summarized in Table \[table:results\] and Figure \[fig:17n\]. The shadowed region in the lower-right part of Figure \[fig:17n\] indicates disks formed by the Lorentz force, while the upper-left region indicates disks formed by the centrifugal force. A broken line between these two indicates the border between the magnetic-force-dominant and rotation-dominant disks, which is well fitted by $$\begin{aligned}
\frac{\Omega_0 }{B_0 } \simeq
0.39 \, G^{1/2}{ c _{\rm s}}^{-1},
\label{eq:UL3}
\end{aligned}$$ similar to the result for aligned rotators.
Cloud evolution can be classified into four patterns using the generalized [$B$-$\Omega$ ]{}relation curve $$\frac{\Omega_{cp}^2}{(0.2)^2 \; 4 \pi G \rho_c} +
\frac{B_{cp}^2}{(0.36)^2 \; 8 \pi c_s^2 \rho_c} =1,
\label{eq:UL3b}$$ and Equation (\[eq:UL3\]): (i) support-deficient, rotation-dominant models \[inside the [$B$-$\Omega$ ]{}relation line and above Equation (\[eq:UL3\])\], (ii) support-deficient, magnetic-force-dominant models \[inside the [$B$-$\Omega$ ]{}relation line and below Equation (\[eq:UL3\])\], (iii) support-sufficient, rotation-dominant models \[outside the [$B$-$\Omega$ ]{}relation line and above Equation (\[eq:UL3\])\], and (iv) support-sufficient, magnetic-force-dominant models \[outside the [$B$-$\Omega$ ]{}relation line and below Equation (\[eq:UL3\])\]. In the models of class (i) \[models (e), (f), (g), and (i)\], the cloud collapses slowly, maintaining spherical symmetry, and then a disk forms due to the rotation. In this type of evolution, the magnetic field hardly changes its initial direction. On the other hand, in the models of class (iii) \[models (a), (b), and (c)\], the cloud collapses along the rotation axis owing to the strong centrifugal force, and then a thin disk is formed in the early isothermal collapse phase. The magnetic field lines run along the disk plane, because the magnetic field lines are compressed together with the cloud in these models. In the rotation-dominant models of classes (i) and (iii), the rotation axis maintains its initial direction because the magnetic braking is not so effective. On the other hand, in the magnetic-force-dominant models of classes (ii) and (iv), the rotation axis is inclined from the initial direction because clouds with strong magnetic fields experience effective magnetic braking. The inclination of the rotation axis, $\theta_\Omega$, in class (iv) is greater than that in class (ii). The direction of the magnetic field, however, tends to maintain its initial orientation relative to the $z$-axis in classes (ii) and (iv) owing to the strong magnetic tension force.
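The two criteria above can be encoded directly. The following sketch (our own illustration; function name and CGS unit choices are assumptions) evaluates the generalized [$B$-$\Omega$ ]{}relation, Equation (\[eq:UL3b\]), and the border line, Equation (\[eq:UL3\]), to assign a cloud to one of the four classes:

```python
import math

G = 6.672e-8  # gravitational constant [cm^3 g^-1 s^-2], CGS

def classify(B_cp, Omega_cp, rho_c, c_s, B0, Omega0):
    """Classify a cloud into the four evolution patterns.

    Support: Omega_cp^2/(0.2^2 4 pi G rho_c) + B_cp^2/(0.36^2 8 pi c_s^2 rho_c)
             compared with 1 (Eq. UL3b).
    Dominance: Omega_0/B_0 compared with 0.39 G^(1/2)/c_s (Eq. UL3).
    Returns ('sufficient'|'deficient', 'rotation'|'magnetic').
    """
    lhs = (Omega_cp**2 / (0.2**2 * 4.0 * math.pi * G * rho_c)
           + B_cp**2 / (0.36**2 * 8.0 * math.pi * c_s**2 * rho_c))
    support = "sufficient" if lhs > 1.0 else "deficient"
    border = 0.39 * math.sqrt(G) / c_s
    dominant = "rotation" if Omega0 / B0 > border else "magnetic"
    return support, dominant
```

For example, under the assumed CGS inputs $\rho_c = 10^{-18}\,{\rm g\,cm^{-3}}$ and $c_s = 1.9\times10^4\,{\rm cm\,s^{-1}}$, a slowly rotating, weakly magnetized state falls in class (i) or (ii) depending on which side of the border $\Omega_0/B_0$ lies.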
Discussion
==========
Fragmentation of a Magnetized Rotating Cloud
--------------------------------------------
Fragmentation is considered to be one of the mechanisms producing binary and multiple stars. We investigated fragmentation of a rotating magnetized cloud in Paper III for the case of ${\mbox{\boldmath$B_0$}} \parallel {\mbox{\boldmath$\Omega_0$}}$. We found fragmentation only in rotation-dominant clouds. This indicates that fragmentation is suppressed by the magnetic field, similar to the findings of @hosking04 and @ziegler05. Fragmentation occurs via a global bar or ring mode and depends on the initial amplitude of the non-axisymmetric perturbation in support-sufficient rotation-dominant clouds. That is, when the non-axisymmetric structure barely grows in the isothermal phase and the rotation rate reaches $\omega \gtrsim 0.2$ at the core formation epoch, an adiabatic core fragments via a ring (ring fragmentation). On the other hand, when the core is deformed to an elongated bar at the core formation epoch, the bar fragments into several cores (bar fragmentation). The parameter study in Paper III shows that ring fragmentation is seen in rotation-dominant models in the adiabatic phase \[classes (i) and (iii)\] and bar fragmentation is observed only in support-sufficient rotation-dominant models \[class (iii)\]. Support-deficient and support-sufficient magnetic-dominant models \[classes (ii) and (iv)\] evolve into a single dense core without fragmentation owing to effective magnetic braking and a slow rotation.
The results obtained in Paper III show that fragmentation patterns are dependent on the growth of a non-axisymmetric structure. The non-axisymmetric perturbation begins to grow after a thin disk is formed. The amplitude of the non-axisymmetric mode barely grows in support-deficient rotation-dominant models \[class (i)\], because the disk forms slowly, as shown in §3.1. On the other hand, the non-axisymmetric structure grows sufficiently in support-sufficient rotation-dominant models \[class (iii)\], because the disk forms promptly. In class (iii), the patterns of fragmentation are dependent on the initial amplitude of the non-axisymmetric perturbation. Thus, a cloud fragments through a ring when the cloud has a small amount of initial non-axisymmetric perturbations, whereas the cloud fragments through a bar when it has a sufficient amount of initial non-axisymmetric perturbations. These results apply to aligned rotator clouds. Below we extend the study to the evolution of non-aligned rotator models.
We did not add any explicit non-axisymmetric perturbations to the initial state of the non-aligned rotator models, and we did not find any rings in the rotation-dominant models of non-aligned rotators. In the rotation-dominant models, non-axisymmetry arises from the magnetic force in the case of ${\mbox{\boldmath$B_0$}} \nparallel {\mbox{\boldmath$\Omega_0$}}$. Since this non-axisymmetric perturbation from the magnetic force grows sufficiently in the isothermal phase, bar fragmentation is expected to occur in non-aligned rotator models.
As detailed previously, the axis-ratio (${\varepsilon_{\rm ar}}$), listed in Table \[table:results\], arises from an anisotropic force due to the magnetic field or rotation. Model (c) has the greatest axis-ratio $\simeq 2.2$ of all models. The initial state of this cloud is outside the [$B$-$\Omega$ ]{}relation line, and it has the strongest magnetic field among the rotation-dominant models (Figure \[fig:17n\]). In this cloud, a disk is formed perpendicular to the rotation axis and the magnetic field parallel to the disk surface is greatly amplified. This induces a bar structure along the magnetic field line on the disk mid-plane, as shown in §3.2 (e.g. model C45). Although this bar does not fragment in this study, such a bar structure suggests the possibility of fragmentation in the adiabatic phase (Paper III) if the calculation is continued. Even if the bar does not fragment when no explicit non-axisymmetric perturbation is present, we expect bar fragmentation if an explicit perturbation is added initially. We confirmed that bar fragmentation occurs in model (c) when we added a 10% non-axisymmetric density perturbation to the initial state. As a result, the anisotropy arising from magnetic and centrifugal forces in non-aligned rotator models promotes bar fragmentation and suppresses ring fragmentation.
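A 10% non-axisymmetric density perturbation of the kind mentioned above is conventionally imposed as an $m=2$ (bar) mode in azimuth. The sketch below is our own illustration of that standard form; the functional form and names are assumptions, not code from this work:

```python
import math

def perturbed_density(rho0, x, y, amplitude=0.1, m=2):
    """Apply an m=2 (bar-mode) density perturbation of given amplitude.

    rho(x, y) = rho0 * (1 + amplitude * cos(m * phi)), phi = atan2(y, x).
    amplitude=0.1 corresponds to the 10% perturbation quoted in the text.
    """
    phi = math.atan2(y, x)
    return rho0 * (1.0 + amplitude * math.cos(m * phi))
```

The perturbation is maximal along one axis of the disk (phi = 0, pi) and minimal along the perpendicular axis, seeding a bar aligned with the chosen direction.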
Comparison with Previous Works
------------------------------
Several studies have examined the gravitational collapse of molecular cloud cores and star formation using three-dimensional MHD calculations [@dorfi82; @dorfi89; @boss02; @hosking04; @ziegler05; @banerjee06 MT04; Papers I, II and III]. Except for those of @dorfi82 [@dorfi89] and MT04, these studies assume that the magnetic field lines are initially parallel to the rotation axis. The evolutions of non-aligned rotator models are almost the same as those of aligned rotator models for support-deficient clouds, while the evolutions are completely different for support-sufficient clouds. For example, the direction of the magnetic field continues to move away from the rotation axis in a rapidly rotating cloud; the directions of the magnetic field and rotation are completely different after disk formation in this case. A comparison of magnetic fields parallel and perpendicular to the rotation vector shows that a perpendicular magnetic field transfers angular momentum more effectively than a parallel field [@mouschovias79]. In other words, magnetic braking is more effective in non-aligned rotator models than in aligned rotator models. This strong magnetic braking may be the solution to the angular momentum problem, in which the specific angular momentum of a parent cloud is much larger than that of a newborn star.
Comparison with Observation
---------------------------
The magnetic field strengths and directions have been observed for many clouds. It is believed that there is no correlation between the direction of the magnetic field and the large-scale cloud shape [@goodman93; @tamura95; @word00]. Recently, the directions of the magnetic field have been observed on both large (cloud) and small (prestellar core) scales in the same target. The small-scale polarization pattern of the W51 molecular cloud observed by BIMA [Berkeley-Illinois-Maryland Association; @lai01] coincides with the large-scale polarization pattern observed by SCUBA [@chrysostomou02]. Also, the directions of the magnetic field were found to be the same in both large- and small-scale observations of the DR21 cloud [@lai03]. However, some observations have given contrasting findings. Although the average polarization angle in the MMS6 core in the OMC-3 region of the Orion A cloud [@matthews05] coincides with the large-scale polarization angle observed by SCUBA [@houde04], the polarization angle changes systematically across the core. An observation of the Barnard 1 cloud in Perseus reveals that three of the four cores exhibit mean field directions different from that of the ambient cloud [@matthews02; @matthews05]. These trends in OMC-3 and the Barnard 1 cloud agree well with the results for the non-aligned rotator models of group C in §3.1. The direction of a small-scale magnetic field can differ from the large-scale field in models C30, C45, and C60 (support-sufficient, rotation-dominant models). As shown in Figure \[fig:13n\] for model C30, although the magnetic field maintains its initial direction $\theta_{B} \simeq 30 \degr$ outside the high-density core, inside the high-density region the magnetic field lines are perpendicular to the rotation ($z$-) axis ($\theta_{B} \simeq 90 \degr$). Thus, the direction of the magnetic field varies for different spatial scales.
In this model, the angle between the large-scale and small-scale magnetic fields is $85 \degr$. On the other hand, the magnetic field lines have the same direction in both large- and small-scales in W51 and DR21. These clouds correspond to groups A, B, and E, in which the magnetic field hardly changes its direction in the isothermal phase. These clouds are expected to have a slow rotation rate. These findings show that the direction of the magnetic field can change only in support-sufficient rotation-dominant clouds.
Recently, an hour-glass-shaped magnetic field has been found in a dynamically contracting core around the binary protostellar system NGC 1333 IRAS 4A [@girart06]. Two outflows were also observed in this region, each associated with one protostar of the protobinary system [@choi05]. However, the direction of the magnetic field does not coincide with the outflow axis [@girart06]. Our previous study showed that fragmentation (or binary formation) appears only in rotation-dominated clouds (Paper III). In these clouds, the magnetic field tends to be aligned along the direction perpendicular to the rotation axis, as shown in §3.2.1, and therefore the direction of the magnetic field in a dense core does not coincide with that of the large-scale field. Thus, the observed misaligned outflow indicates that a binary is being formed from a rotation-dominated cloud, because the outflows are driven along the local magnetic field (MT04).
Our numerical calculations were carried out with a Fujitsu VPP5000 at the Astronomical Data Analysis Center of the National Astronomical Observatory of Japan. This work was supported partially by Grants-in-Aid from MEXT (16077202 \[MM\], 17340059 \[TM, KT\], 15340062, 14540233 \[KT\], 16740115 \[TM\]).
Arquilla, R., & Goldsmith, P. F. 1986, , 303, 356
Banerjee, R., & Pudritz, R. E. 2006, accepted
Bonnor, W. B. 1956, , 116, 351
Boss, A. P., 2002, , 568, 743
Caselli, P., Benson, P. J., Myers, P. C., & Tafalla, M. 2002a, , 572, 238
Choi, M. 2005, , 630, 976
Chrysostomou, A., Aitken, D. K., Jenness, T., Davis, C. J., Hough, J. H., Curran, R., & Tamura, M. 2002, , 385, 1014
Crutcher, R. M., Roberts, D. A., Troland, T. H., & Goss, W. M. 1999a, , 515, 275
Dedner, A., Kemm, F., Kröner, D., Munz, C.-D., Schnitzer, T., & Wesenberg, M. 2002, J. Comp. Phys., 175, 645
Dorfi, E. 1982, , 114, 151
Dorfi, E. 1989, , 225, 507
Ebert, R. 1955, Z. Astrophys., 37, 222
Girart, submitted.
Goodman, A. A., Benson, P. J., Fuller, G. A., & Myers, P. C. 1993, , 406, 528
Hanawa, T & Matsumoto, T. 1999, , 521, 703
Hosking, J. G., & Whitworth, A. P. 2004, , 347, 1001
Houde, M., Dowell, C. D., Hildebrand, R. H., Dotson, J. L., Vaillancourt, J. E., Phillips, T. G., Peng, R., & Bastien, P. 2004, , 604, 717
Lai, S.-P., Crutcher, R. M., Girart, J. M., & Rao, R. 2001, , 561, 864
Lai, S.-P., Girart, J. M., & Crutcher, R. M. 2003, , 598, 392
Larson R. B., 1969, , 145, 271
Machida, M. N., Tomisaka, K., & Matsumoto, T. 2004, , 348, L1 (Paper I)
Machida, M. N., Matsumoto, T., Tomisaka, K., & Hanawa, T. 2005, , 362, 369 (Paper II)
Machida, M. N., Matsumoto, T., Hanawa, T., & Tomisaka, K. 2005, , 362, 382 (Paper III)
Masunaga, H., & Inutsuka, S. 2000, , 531, 350
Matthews, B. C., & Wilson, C. D. 2002, , 574, 822
Matthews, B. C., Lai, S.-P., Crutcher, R.M., & Wilson, C. D. 2005, , 626, 959
Matsumoto, T., & Tomisaka, K. 2004, , 616, 266 (MT04)
Moneti, A., Pipher, J. L., Helfer, H. L., McMillan, R. S., & Perry, M. L. 1984, , 282, 508
Mouschovias, T. C., & Paleologou, E. V. 1979, , 230, 204
Nakano, T. 1976, PASJ, 28, 355
Nakano, T., Nishi, R., & Umebayashi, T. 2002, , 573, 199
Penston M. V., 1969, , 144, 425
Tamura, M., Hough, J. H., & Hayashi, S. S. 1995, , 448, 346
Tamura, M., & Sato, S. 1989, , 98, 1368
Truelove J, K., Klein R. I., McKee C. F., Holliman J. H., Howell L. H., & Greenough J. A., 1997, , 489, L179
Ward-Thompson, D., Kirk, J. M., Crutcher, R. M., Greaves, J. S., Holland, W. S., & André, P. 2000, , 537, L135
Ziegler, U., 2005, , 435, 385
Group Model $\alpha$ $\omega$ $\theta_0$ $f$ $\alpha_0$ $\beta_0$ $\gamma_0$ $n_{0}$$^a$ $B_0$$^b$ $\Omega_0$$^c$ $M^d$
------- ------- ------------------------ ----------- ------------ ------------------------------------------------- ------------ ----------- --------------------- ---------------------- ----------- ---------------- --------- ------ -- -- -- -- --
A [A00, A30, A45, A60]{} 0.01 0.01 [(0$\degr$, 30$\degr$, 45$\degr$, 60$\degr$)]{} 1.05 0.70 3.29$\times10^{-4}$ 1.34$\times10^{-2}$ 5.25 3.23 1.38 6.41
B [B00, B30, B45, B60]{} 0.1 0.01 [(0$\degr$, 30$\degr$, 45$\degr$, 60$\degr$)]{} 1.05 0.70 3.29$\times10^{-4}$ 0.134 5.25 10.2 1.38 6.41
C [C00, C30, C45, C60]{} 0.01 0.5 [(0$\degr$, 30$\degr$, 45$\degr$, 60$\degr$)]{} 5.0 0.168 0.823 3.22$\times10^{-3}$ 25 6.59 141 26.7
D [D00, D30, D45, D60]{} 1 0.5 [(0$\degr$, 30$\degr$, 45$\degr$, 60$\degr$)]{} 5.0 0.168 0.823 0.32 25 65.9 141 26.7
E [E00, E30, E45, E60]{} $10^{-3}$ 0.05 [(0$\degr$, 30$\degr$, 45$\degr$, 60$\degr$)]{} 5.0 0.168 0.82$\times10^{-3}$ 3.22$\times 10^{-4}$ 25 2.08 14.1 26.7
F [F00, F30, F45, F60]{} 1 0.05 [(0$\degr$, 30$\degr$, 45$\degr$, 60$\degr$)]{} 5.0 0.168 0.82$\times10^{-3}$ 0.32 25 65.9 14.1 26.7
SF$^e$ 3.04 0.14 [(00$\degr$, 45$\degr$, 90$\degr$) ]{} 1.68 0.5 0.02 2.88 2.61 37.1 7.11 6.13
MF$^e$ 0.76 0.14 [(45$\degr$, 70$\degr$, 80$\degr$)]{} 1.68 0.5 0.02 0.72 2.61 18.6 7.11 6.13
WF$^e$ 0.12 0.14 [(00$\degr$, 45$\degr$, 90$\degr$) ]{} 1.68 0.5 0.02 0.12 2.61 7.42 7.11 6.13
: Parameters and initial conditions for typical models[]{data-label="table:init"}
\
[$^a$ $n_{0} (10^4 \times \cm)$, $^b$ $B_0$ ($\mu \rm{G}$), $^c$ $\Omega_0$ ($10^{-7}$ yr$^{-1}$), $^d$ $M$ (${\thinspace M_\odot}$) , $^e$ models calculated by [@matsumoto04] ]{}
Model $\alpha$ $\omega$ $\theta_0$ $f$ [($\theta_B$, $\theta_\Omega$, $\theta_P$) ]{} [($\phi_B$, $\phi_\Omega$, $\phi_P$)]{} [($\psi_{B \Omega}$, $\psi_{B P}$, $\psi_{\Omega P}$)]{} [$B$/$\Omega$]{} ${\varepsilon_{\rm ar}}$
------- ---------- ---------- ------------ ------ ------------------------------------------------ ----------------------------------------- ---------------------------------------------------------- ------------------ -------------------------- -- -- -- -- -- -- -- -- --
A00 0.01 0.01 00 1.05 (0, 0, 0) (0, 0, 0) (0, 0, 0) — 0
A30 0.01 0.01 30 1.05 (31, 1, 30) (24, 22, 20) (30, 2, 29) $B$ $1.2 \times 10^{-3}$
A45 0.01 0.01 45 1.05 (46, 2, 44) (24, 23, 20) (44, 4, 42) $B$ $7.1 \times 10^{-3}$
A60 0.01 0.01 60 1.05 (61, 2, 58) (24, 11, 20) (59, 4, 56) $B$ $1.5 \times 10^{-2}$
B00 0.1 0.01 00 1.05 (0, 0, 0) (0, 0, 0) (0, 0, 0) — 0
B30 0.1 0.01 30 1.05 (33, 32, 33) (22, 66, 14) (24, 4, 27) $B$ $8.0 \times 10^{-3}$
B45 0.1 0.01 45 1.05 (46, 7, 45) (23, 14, 15) (39, 5, 38) $B$ $2.5 \times 10^{-3}$
B60 0.1 0.01 60 1.05 (58, 52, 57) (24, 298, 15) (68, 8, 61) $B$ $1.3 \times 10^{-2}$
C00 0.01 0.5 00 5 (0, 0, 0) (0, 0, 0) (0, 0, 0) — 0
C30 0.01 0.5 30 5 (88, 1, 2) (302, 273, 96) (88, 90, 3) $\Omega$ 0.43
C45 0.01 0.5 45 5 (90, 1, 2) (302, 263, 86) (88, 89, 3) $\Omega$ 0.61
C60 0.01 0.5 60 5 (89, 1, 2) (302, 257, 84) (89, 89, 3) $\Omega$ 0.68
D00 1 0.5 00 5 (0, 0, 0) (0, 0, 0) (0, 0, 0) — 0
D30 1 0.5 30 5 (60, 13, 14) (243, 8, 34) (67, 72, 6) $\Omega$ 0.19
D45 1 0.5 45 5 (51, 63, 49) (283, 39, 275) (88, 6, 85) $B$ 0.21
D60 1 0.5 60 5 (35, 26, 51) (337, 51, 13) (36, 29, 33) $B$ 0.32
E00 0.001 0.05 00 5 (0, 0, 0) (0, 0, 0) (0, 0, 0) — 0
E30 0.001 0.05 30 5 (43, 1, 1) (50, 80, 38) (43, 42, 1) $\Omega$ $3.4\times 10^{-3}$
E45 0.001 0.05 45 5 (59, 1, 1) (50, 110, 47) (58, 58, 1) $\Omega$ $6.8\times 10^{-3}$
E60 0.001 0.05 60 5 (71, 1, 1) (90, 142, 49) (71, 70, 1) $\Omega$ $1.3\times10^{-2}$
F00 1 0.05 00 5 (0, 0, 0) (0, 0, 0) (0, 0, 0) — 0
F30 1 0.05 30 5 (35, 9, 30) (51, 13, 24) (28, 15, 21) $B$ 0.21
F45 1 0.05 45 5 (51, 10, 44) (51, 354, 24) (46, 20, 35) $B$ 0.40
F60 1 0.05 60 5 (65, 12, 58) (51, 336, 24) (62, 24, 50) $B$ 0.40
(a) 0.001 0.3 45 1.68 (88, 0, 0) (189, 308, 224) (88, 88, 0) $\Omega$ $5.9\times 10^{-2}$
(b) 0.01 0.3 45 1.68 (86, 2, 0) (294, 269, 87) (85, 87, 2) $\Omega$ 0.51
(c) 0.1 0.3 45 1.68 (82, 15, 8) (226, 356, 167) (89, 78, 23) $\Omega$ 2.2
(d) 1 0.3 45 1.68 (48, 46, 42) (256, 32, 231) (85, 18, 87) $B$ 0.17
(e) 0.001 0.1 45 1.68 (83, 0, 0) (120, 42, 127) (83, 83, 1) $\Omega$ $1.7\times 10^{-2}$
(f) 0.01 0.1 45 1.68 (83, 3, 4) (125, 4, 126) (85, 79, 6) $\Omega$ 0.18
(g) 0.1 0.1 45 1.68 (78, 16, 37) (146, 347, 109) (87, 50, 48) $\Omega$ 0.44
(h) 1 0.1 45 1.68 (43, 32, 42) (63, 68, 60) (12, 2, 11) $B$ $5.2\times 10^{-2}$
(i) 0.001 0.03 45 1.68 (53, 0, 4) (51, 81, 49) (53, 49, 4) $\Omega$ $2.1\times 10^{-2}$
(j) 0.01 0.03 45 1.68 (54, 2, 30) (53, 25, 45) (52, 25, 28) $B$ $9.1\times 10^{-2}$
(k) 0.1 0.03 45 1.68 (58, 5, 45) (70, 342, 35) (58, 30, 40) $B$ $6.0\times 10^{-2}$
(l) 1 0.03 45 1.68 (45, 26, 45) (20, 16, 19) (19, 1, 19) $B$ $5.3\times 10^{-2}$
(m) 0.001 0.01 45 1.68 (46, 0, 29) (18, 2, 16) (46, 17, 29) $B$ $1.1\times 10^{-2}$
(n) 0.01 0.01 45 1.68 (46, 5, 43) (19, 21, 16) (41, 4, 39) $B$ $6.0\times 10^{-3}$
(o) 0.1 0.01 45 1.68 (45, 8, 45) (29, 4, 12) (38, 12, 37) $B$ $5.8\times 10^{-3}$
(p) 1 0.01 45 1.68 (45, 29, 45) (6, 5, 6) (16, 0, 16) $B$ $2.8\times 10^{-2}$
: Calculation results at the core formation epoch[]{data-label="table:results"}
\[table:2\]
[^1]: Denoting the thermal, rotational, magnetic, and gravitational energies as $U$, $K$, $M$, and $W$, the relative factors against the gravitational energy are defined as $\alpha_0 = U/|W|$, $\beta_0 = K/|W|$, and $\gamma_0 = M/|W|$.
[^2]: The cloud collapses slowly in a spherically symmetric fashion inside the [$B$-$\Omega$ ]{}relation line. Otherwise, when the model is distributed outside the [$B$-$\Omega$ ]{}relation line, the cloud rapidly collapses along the vertical axis.
|
---
abstract: 'The simplest models of inflation based on slow roll produce nearly scale invariant primordial power spectra (PPS). But there are also numerous models that predict PPS with radically broken scale invariance. In particular, markedly cuspy dips in the PPS correspond to nulls where the perturbation amplitude, and hence the PPS, goes through a zero at a specific wavenumber. Near this wavenumber, the true quantum nature of the generation mechanism of the primordial fluctuations may be revealed. Naively, these features may appear to arise from fine-tuned initial conditions. However, we show that this behavior arises under a fairly generic set of conditions involving super-Hubble scale evolution of perturbation modes during inflation. We illustrate this with the well-studied examples of punctuated inflation and the Starobinsky-break model.'
address: 'IUCAA, Post Bag 4, Ganeshkhind, Pune-411007, India'
author:
- 'Gaurav Goswami[^1] and Tarun Souradeep[^2]'
title: 'Power spectrum nulls due to non-standard inflationary evolution'
---
The paradigm of cosmological inflation explains not only a set of peculiarities, such as the flatness problem, the horizon problem, etc., in the hot big bang model, but also the origin of the initial metric perturbations that led to formation of the large scale structure in the distribution of matter in the universe [@inflation; @fluctuations; @infrev]. The simplest models of inflation achieve this by assuming that at high enough energy scales, the dynamics of the universe is as if it was dominated by a single scalar (inflaton) field. The inevitable quantum fluctuations seed the primordial metric perturbations.
The primordial power spectrum (PPS) is connected to the observed angular power spectrum ($C_l$s) of the temperature fluctuations in the CMB sky through the radiative transport kernel. Alternatively, for a given cosmology (which determines the transport kernel), the primordial power spectrum can be deconvolved from the observed $C_l$s [@ppsfeatures]. These results seem to indicate that the PPS may have features, e.g., a sharp infrared cutoff on the horizon scale, a bump (i.e., a localized excess just above the cutoff) and a ringing (i.e., a damped oscillatory feature after the infrared break). While the statistical significance of such features is still being assessed [@armantarundec09], this has led to a lot of activity in building models of inflation that can give large and peculiar features in the primordial power spectrum (see [@PI1] and references therein, along with [@Hodges1990; @double-inf; @Kof-linde; @Staro1992; @cusps; @nonTevol; @nonTevol2; @Leach1]). Many such models tend to assume very special initial conditions at the beginning of inflation. In contrast, others postulate quite fine-tuned values of the parameters of the inflaton Lagrangian at tree level to produce features in the scalar PPS [@PI1; @cusps; @Hodges1990]. In many such scenarios, the scalar PPS has cuspy dips [@PI1; @cusps; @Leach1; @nonTevol2] (also, see Fig. \[PI\_PPS\]) that actually correspond to a null in the PPS, i.e., precisely zero scalar power at some wavenumber. Also, for a range of modes near such a feature, the tensor power overtakes the scalar power [@PI2]. Scalar PPS with cuspy dips turn up in many models of inflation with different forms of the potential (such as the false vacuum inflation model with a quartic potential [@Leach1], the double-well potential [@nonTevol2], the Coleman-Weinberg potential [@nonTevol2], etc.) and also in other settings (such as in dissipative models of inflation, see [@HP]).
It is seen that when one tries to produce enhanced power on some scale (such as in models which try to enhance the production of primordial black holes), it is accompanied by a sharp drop in power, leading to cusps in the scalar PPS. Similarly, models that tend to produce low power in the low multipoles of the CMB anisotropies end up having sharp cusps. Thus, cusps in the scalar PPS have been reported in the literature but their origin has not been satisfactorily understood. An exact null in the scalar PPS can have interesting consequences, such as on the processed non-linear power spectrum. On the other hand, this null in the power spectrum is found by doing a classical computation; thus, for a small range of modes near the one having a zero, quantum effects cannot be neglected [@quantum], and hence the null may not be present when the quantum effects are taken into account, exposing the truly quantum nature of the generation mechanism of primordial perturbations. This motivates us to study the origin of, and the conditions required for, cuspy dips in the scalar PPS.
In this paper we take a fresh look at the evolution of mode functions of cosmological perturbations during inflation, a subject that is well studied (see [@pascal] for a recent treatment). We point out that there are some key properties that the mode functions follow as they evolve in the complex plane. We also realize that it is possible to cause a particular kind of non standard evolution of modes. This kind of non standard evolution is connected closely to the existence of sharp cuspy dips in the scalar PPS.
We study the complex-plane trajectory traced out by the Fourier mode functions of perturbation variables such as the curvature perturbation on hypersurfaces orthogonal to comoving worldlines, $R_k$, and the Mukhanov-Sasaki variable $v_k=z R_k$ (where $z=a \dot{\phi} /H$). The evolution of $v_k$ is given by
$$\label {MSE}
{v_k}'' + \left( k^2 - \frac{z''}{z} \right)v_k = 0 \, .$$
The above equation shows that the mode function of $v$ goes along a circle of radius $1/\sqrt{2 k}$ in the clockwise sense (Bunch-Davies vacuum [^3]) in the complex plane when the mode is well inside the Hubble radius ($k\gg aH$) and goes radially outwards (with $v_k\propto z$) when the mode is well outside the Hubble radius ($k\ll aH$), see e.g. [@nonTevol]. Correspondingly, $R_k$ just spirals in (along the curve with polar equation $r \sim \theta ^{\nu -1/2}$ for power law inflation) in the extreme sub-Hubble regime, while it just freezes to some value when the mode is in the super-Hubble regime. It is important to note that just prior to freezing, the tangent vector to the trajectory of $R_k$ points radially inward since $z$ is negative, see the thin trajectory in Fig. \[stdR\]. We will refer to this as the standard evolution of the mode function of $R_k$. It is easy to confirm that in the simplest case of power law inflation, the trajectory of $R_k$ in the complex plane never crosses the origin.
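This standard behavior is easy to verify numerically. The sketch below is a minimal illustration, not taken from the paper: the de Sitter limit $z''/z = 2/\eta^2$, the wavenumber, the integration range, and the step count are all assumed for the demonstration. It integrates Eq. (\[MSE\]) with an RK4 stepper from Bunch-Davies initial data and compares the final amplitude against the exact de Sitter mode function $v_k = e^{-ik\eta}(1-i/(k\eta))/\sqrt{2k}$:

```python
import math, cmath

def exact_mode(k, eta):
    """Exact de Sitter (Bunch-Davies) solution of v'' + (k^2 - 2/eta^2) v = 0."""
    return cmath.exp(-1j * k * eta) * (1 - 1j / (k * eta)) / math.sqrt(2 * k)

def exact_mode_deriv(k, eta):
    """Conformal-time derivative of the exact mode."""
    return cmath.exp(-1j * k * eta) * (-1j * k - 1 / eta + 1j / (k * eta**2)) \
        / math.sqrt(2 * k)

def evolve_mode(k, eta0=-100.0, eta1=-0.01, n=100000):
    """RK4 integration of v'' = (2/eta^2 - k^2) v from deep inside the horizon."""
    h = (eta1 - eta0) / n
    v, dv = exact_mode(k, eta0), exact_mode_deriv(k, eta0)
    acc = lambda eta, v: (2.0 / eta**2 - k**2) * v
    eta = eta0
    for _ in range(n):
        k1v, k1a = dv, acc(eta, v)
        k2v, k2a = dv + 0.5 * h * k1a, acc(eta + 0.5 * h, v + 0.5 * h * k1v)
        k3v, k3a = dv + 0.5 * h * k2a, acc(eta + 0.5 * h, v + 0.5 * h * k2v)
        k4v, k4a = dv + h * k3a, acc(eta + h, v + h * k3v)
        v += (h / 6) * (k1v + 2 * k2v + 2 * k3v + k4v)
        dv += (h / 6) * (k1a + 2 * k2a + 2 * k3a + k4a)
        eta += h
    return v

k = 1.0
v_num = evolve_mode(k)            # sub-Hubble: circle of radius 1/sqrt(2k);
v_exact = exact_mode(k, -0.01)    # super-Hubble: |v_k| grows as 1/(k|eta|)
print(abs(v_num), abs(v_exact))
```

Inside the Hubble radius $|v_k|$ stays on the circle of radius $1/\sqrt{2k}$; once $k|\eta|\ll 1$ the amplitude grows radially, and the numerically integrated and exact amplitudes agree.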
It is a well-known fact that in a universe dominated by a single scalar field, $R_k$ always freezes on super-Hubble scales. This means that the amplitude as well as the phase of $R_k$ freeze. Let us use the phrase super-Hubble evolution to mean any evolution after $R_k$ has completely frozen once. *In this super-Hubble limit, it is impossible that the amplitude gets frozen while the phase does not; however, it is possible that the phase freezes but the amplitude does not*. Writing $v_k = r e^{i \theta}$, Eq. (\[MSE\]) implies that $$\label{theta}
\theta '' + 2 \left( \frac{r'}{r}\right) \theta' = 0 \,.$$ Notice that, once the phase of $v_k$ (and hence $R_k$) is frozen, it cannot unfreeze. Also, the rate of change of $\theta'$ is directly proportional to $\theta'$ itself. Hence, the nearer we are to the epoch of phase freezing for a given mode, the less the phase gets affected by any background evolution. Thus, once the mode goes out of the Hubble radius and the phase freezes, in ordinary scenarios the amplitude will also freeze. However, if it is arranged to unfreeze $R_k$ (by briefly decreasing $z''/z$), even then only the amplitude will unfreeze and not the phase. Thus, *if there is any super-Hubble evolution of the mode, it can only lead to a radial trajectory in the complex plane!* (see Fig. \[stdR\]). Thus, provided that the phase of the mode function is already frozen, if such an evolution is sufficiently large, the mode must cross the origin in the complex plane. This is connected to a cuspy dip in the scalar PPS.
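The asymmetry between amplitude and phase freezing follows directly from Eq. (\[MSE\]): writing $v_k = r e^{i\theta}$ and separating real and imaginary parts (a short derivation consistent with Eqs. (\[MSE\]) and (\[theta\])) gives

```latex
% Substitute v_k = r e^{i\theta} into Eq. (MSE) and separate parts:
$$\begin{aligned}
\text{Re:}\quad & r'' - r\,{\theta'}^{2}
  + \left(k^{2} - \frac{z''}{z}\right) r = 0\,,\\
\text{Im:}\quad & r\,\theta'' + 2\,r'\,\theta' = 0
  \quad\Longleftrightarrow\quad
  \left(r^{2}\,\theta'\right)' = 0\,.
\end{aligned}$$
```

The imaginary part is Eq. (\[theta\]) multiplied by $r$, and its first integral, $r^{2}\theta' = \mathrm{const}$ (the conserved Wronskian), captures both claims at once: as $r$ grows on super-Hubble scales, $\theta' \propto 1/r^{2} \to 0$ and the phase freezes, while a phase that is frozen ($\theta' = 0$) stays frozen no matter how $r$ subsequently evolves. Setting $\theta' = 0$ in the real part reproduces the purely radial evolution equation used later.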
In the rest of the paper, we put forth the conditions that lead to the nulling in the PPS leading to cuspy dip features. Recall from Eq. (\[MSE\]) that it is the peculiarities in the dynamics of the quantity $z''/z$ that can lead to super-Hubble evolution.
We seek an evolution of $z''/z$ which is such that the mode function of $R_k$ for at least some wavenumber, $k$, crosses the origin in the complex plane. Then a simple continuity argument ensures that for at least one mode, the mode function of $R_k$ would freeze exactly at the origin.
For clarity, let us recall that in the simplest case of power law inflation, $z$ goes as ${\eta}^{1+ \gamma}$ (where $\eta$ is the conformal time and $\gamma$ is a constant). In such a case, we shall have $\frac{z''}{z} = \frac{a''}{a} = \frac{\gamma (\gamma +1)}{\eta^2}\,.$
Note that the first equality implies that the evolution of scalar and tensor modes is identical in these models; we will use this observation later. First, it is important to note the monotonically increasing form of $\frac{z''}{z}$. In this case, a mode which has once become super-Hubble ($k^2 < z''/z$) would continue to stay in that regime and the perturbation $R_k$ will remain frozen. To unfreeze the mode, it is important for $z''/z$ to decrease, and hence have a non-monotonic, dip-like feature (see Fig. \[sandwich\]).
Hence, we consider a class of approximate models that we refer to as sandwich models. These have three distinct stages of evolution during inflation. The quantity $z''/z$ follows the monotonic power law inflation evolution in stage I. In stage II, there is a specific deviation from power law inflation leading to a desired dip feature in $z''/z$ (Fig. \[sandwich\]). Stage III reverts to power law inflation. In what follows, we shall make a series of statements that hold for any such model. We will illustrate our arguments with the help of (i) the punctuated inflation (PI) model [@PI1; @PI2] (see Fig. \[PI\_PPS\]), and (ii) the Starobinsky-break model [@Staro1992], which have been well studied in the literature.
The origin of cuspy dips in the power spectrum can now be understood in terms of the following salient features which are summarized below and elaborated afterwards:
- **Radial trajectory:** Super-Hubble evolution (any evolution after $R_k$ for the mode freezes in stage I) always leads to only radial trajectory in the complex plane.
- **Inward motion:** The super-Hubble evolution involves a radially inward motion (see Eq. (\[Delr’\])).
- **Amount of super-Hubble evolution:** The amount of super-Hubble evolution in stage II or III is determined by the depth of the dip in $z''/z$ in stage II.
- **Continuity:** If a mode $k_1$ crosses the origin in the complex plane on a radial trajectory, there should exist a mode $k_*$ (with $k_* < k_1$) that ends up right at the origin.
Thus, it is clear that one can easily construct sandwich models which will offer cuspy dips in scalar PPS under fairly general conditions.
For a mode whose phase is frozen in stage I, it is absolutely necessary that, in stage II, $v_k$ should turn back and go radially inward if it is to undergo origin crossing. This is a necessary but insufficient condition for origin crossing, as is illustrated for the Starobinsky-break model (in which stage II is just a Dirac delta function) in Fig. \[starof\]. For a given $k$, the dip in $z''/z$ can be made sufficiently deep to cause origin crossing. Both the quantities $|v_k|$ and $|v_k|'$ are positive in stage I of a sandwich model, so we are in the upper half of Fig. \[starof\]. The desired model will have a stage II which brings us into the lower half of Fig. \[starof\], which shows that if $r'$ is sufficiently negative, origin crossing definitely takes place. Thus, if after stage II both $|v_k|$ and $\frac{d}{d\eta} |v_k|$ are sufficiently small, origin crossing must take place.
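The effect of such a stage II kick on a frozen mode can be illustrated with a small, self-contained numerical sketch (not from the paper: the de Sitter background $z''/z = 2/\eta^2$, the kick position $\eta_d$, the strength $A$, and the wavenumber are all illustrative assumptions). A Dirac-delta "dip" in $z''/z$ of weight $A$ leaves $v_k$ continuous and shifts its derivative by $A\,v_k$; matching the exact de Sitter solutions across the kick then shows the amplitude changing at essentially frozen phase, and a tuned kick driving the amplitude toward zero:

```python
import math, cmath

def f(k, eta):
    """Exact Bunch-Davies mode of v'' + (k^2 - 2/eta^2) v = 0 (de Sitter)."""
    return cmath.exp(-1j * k * eta) * (1 - 1j / (k * eta)) / math.sqrt(2 * k)

def fp(k, eta):
    """Conformal-time derivative of f."""
    return cmath.exp(-1j * k * eta) * (-1j * k - 1 / eta + 1j / (k * eta**2)) \
        / math.sqrt(2 * k)

def after_kick(k, eta_d, eta, A):
    """v_k(eta) after a delta 'dip' in z''/z at eta_d: v continuous, v' -> v' + A v.
    The later solution a*f + b*conj(f) is fixed by matching (Cramer's rule)."""
    f0, df0 = f(k, eta_d), fp(k, eta_d)
    g0, dg0 = f0.conjugate(), df0.conjugate()
    v0, dv0 = f0, df0 + A * f0
    W = f0 * dg0 - df0 * g0                  # constant Wronskian of the basis
    a = (v0 * dg0 - dv0 * g0) / W
    b = (dv0 * f0 - v0 * df0) / W
    return a * f(k, eta) + b * f(k, eta).conjugate()

k, eta_d, eta_end = 0.1, -1.0, -1e-3         # mode is frozen well before the kick
v_free = f(k, eta_end)                       # no-kick reference
v_kick = after_kick(k, eta_d, eta_end, A=-2.0)
print(abs(v_kick) / abs(v_free))             # amplitude changed by the kick ...
print(cmath.phase(v_kick) - cmath.phase(v_free))  # ... at (almost) frozen phase

# Scanning the kick strength, one value drives |v_k| essentially to zero:
# the mode approaches the origin and the spectrum develops its (near-)null.
ratio_min = min(abs(after_kick(k, eta_d, eta_end, -5 + i / 100)) / abs(v_free)
                for i in range(1001))
print(ratio_min)
```

With these numbers the $A=-2$ kick suppresses the super-Hubble amplitude by roughly a factor of three while leaving the phase essentially unchanged (radially inward motion, frozen phase), and near one tuned value of $A$ the final amplitude all but vanishes. The small residual at the minimum is set by the not-quite-frozen phase at $\eta_d$, consistent with the delta kick being necessary but not by itself sufficient for an exact origin crossing.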
From Eq. (\[MSE\]), we can find the equation for the super-Hubble evolution of amplitude, $r$, of $v_k$. The condition for radial evolution means that ${\theta}'$ vanishes. The equation for $r$ in this approximation is,
$$\label{r2}
\frac{r''}{r} + \left[ k^2 - \frac{z''}{z} \right] = 0 \,.$$
One can estimate the super-Hubble evolution of $r$ in stage II. We see from Eq. (\[r2\]) above that the change in the derivative of the amplitude of $v_k$ is:
$$\label{Delr'}
\Delta {r}' = - \int \left( k^2 - \frac{z''}{z} \right) r ~ d {\eta} \,.$$
The obvious conclusion that can be easily drawn from Eq. (\[Delr’\]) is that for a fixed $k$, unless $z''/z$ is such that $k^2 > z''/z$, the change in $r'$ will be positive, so that at the end of stage II, instead of having $v_k$ turn back in the complex plane, we shall have $v_k$ going radially out at a quicker rate and origin crossing will most definitely not happen. This means that to cause origin crossing, for a given mode, we need a sufficiently deep dip such that $k^2 > z''/z$.
Let us fix $z''/z$ and consider two modes ($k_1$ and $k_2$, with $k_1 > k_2$) with sufficiently small $k$ values such that the $R_k$ corresponding to both have frozen in stage I ($v_k$ radially outgoing). Stage I is power law inflation, which gives a red-tilted spectrum, so that $P_s(k_2) > P_s(k_1)$, which means (since $k_1 > k_2$) that $R_{k_2} > R_{k_1}$. Thus $v_{k_2} > v_{k_1}$. This means that the amplitude of $v_k$ in the complex plane at the end of stage I is smaller for a mode having a larger $k$ value. Also, since the $R_k$’s have already frozen,
$$\label{v'}
\frac{v_{2}'}{v_{1}'} = \frac{v_{2}}{v_{1}} > 1$$
Thus, the speed with which $|v_k|$ increases in the complex plane at the end of stage I is also smaller for a mode having a larger $k$ value. From Eq. (\[Delr’\]), we also know that $\Delta ({r}')|_1 > \Delta ({r}')|_2$. Thus, we conclude the following: for a fixed $z''/z$, even if origin crossing does not happen for a given $k$ mode, the chances that a larger $k$ mode will undergo origin crossing are much larger (provided that in stage I this larger $k$ mode has become super-Hubble, i.e., frozen once).
But increasing just the $k$ value for a given model can potentially lead us to modes that have not become super-Hubble in stage I. In such a case, one could (i) delay the location of the dip (in $z''/z$) so that it occurs a little later, by which time $v_k$ for that mode has become radial, or (ii) increase $z''/z$ in stage I (such as the dashed curve in Fig. \[sandwich\]). In either case, it is easy to see that stage I can be conveniently modified to cause origin crossing in stage III due to a given dip in stage II.
A deep dip in $z''/z$ will cause $z$ to change quickly; as a result, $|z|$ will actually decrease briefly. Since in an expanding universe the scale factor cannot decrease, the said evolution can never happen for tensor modes, whose evolution is governed by $a''/a$ (see also [@Leach1]). If the scalar power corresponding to a mode undergoes super-Hubble suppression while the tensor power does not, it is not a surprise that the tensor-to-scalar ratio can become greater than one (see Fig. \[PI\_PPS\] and ref. [@PI2]).
Thus, it is important to pay attention to the evolution of the mode functions (of various quantities related to the perturbations of interest) in the complex plane. We learn that though the amplitude of $R_k$ can be reawakened once it is frozen, a frozen phase cannot unfreeze, i.e., super-Hubble evolution leads to a radial trajectory in the complex plane. In sandwich models, for modes that have taken up such a radial trajectory in the complex plane, the amount of super-Hubble evolution is determined by the depth of the dip in $z''/z$ in stage II. If a given dip fails to bring the mode to the origin, we can easily modify stage I such that it does so. This explains the origin of (i) nulls in the scalar PPS and (ii) tensor power overtaking scalar power, observed in the literature. There is no a priori reason that such zeros in the PPS should survive when higher order corrections are taken into account. Usually, quantum corrections to the PPS are too small compared to the classical result (see e.g. [@quantum-recent]); here, since the classical contribution is zero, the truly quantum nature of the primordial perturbations may get revealed.
[**Acknowledgment:**]{} The authors would like to thank L. Sriramkumar for comments and discussion at various stages of the work. We would also like to thank Alexei Starobinsky for illuminating comments and suggestions. GG thanks Council of Scientific and Industrial Research (CSIR), India, for the research grant award No. 10-2(5)/2006(ii)-EU II. TS acknowledges support from the Swarnajayanti Fellowship, DST, India.
A.A. Starobinsky, Phys. Lett. B [**91**]{}, 99 (1980); D. Kazanas, Ap. J. [**241**]{}, L59 (1980); A. H. Guth, Phys. Rev. D [**23**]{}, 347 (1981); A. D. Linde, Phys. Lett. [**B108**]{}, 389 (1982); A. Albrecht and P. J. Steinhardt, Phys. Rev. Lett. [**48**]{}, 1220 (1982).
A.A. Starobinsky, JETP Lett. 30, 682 (1979); Mukhanov V. F., Chibisov G. V., 1981, ZhETF Pis ma Redaktsiiu, 33, 549; Hawking S. W., 1982, Physics Letters B, 115, 295; A.A. Starobinsky, Phys. Lett. B 117, 175 (1982); Guth A. H., Pi S.Y., 1982, Physical Review Letters, 49, 1110.
A. Linde, arXiv: hep-th/ 0503203; D. H. Lyth and A. R. Liddle, *The Primordial Density Perturbation*, Cambridge University Press, 2009; D. Baumann, arXiv:astro-ph/0907.5424v1; D. Langlois, arXiv:astro-ph/1001.5259v1; L. Sriramkumar, arXiv:astro-ph/0904.4584v1.
A. Shafieloo and T. Souradeep Phy. Rev. D [**70**]{}, 043523 (2004); R. Sinha and T. Souradeep Phy. Rev. D [**74**]{}, 043518 (2006); A. Shafieloo, T. Souradeep, P. Manimaran, P.K. Panigrahi and R. Rangarajan Phy. Rev. D [**75**]{}, 123502 (2007); A. Shafieloo and T. Souradeep Phy. Rev. D [**78**]{}, 023511 (2008); Tocchini-Valentini, D., Hoffman, Y. and Silk, J. MNRAS, 367: 1095-1102, 2006.
J. Hamann, A. Shafieloo and T. Souradeep JCAP 10 04: 010,2010.
Rajeev Kumar Jain, Pravabati Chingangbam, Jinn-Ouk Gong, L. Sriramkumar, Tarun Souradeep, JCAP 09 01: 009, 2009 \[arXiv:astro-ph/0809.3915\].
Rajeev Kumar Jain, Pravabati Chingangbam, L. Sriramkumar, Tarun Souradeep Phy. Rev. D [**82**]{} 023509 (2010) \[arXiv:astro-ph/0904.2518\].
Hardy M. Hodges, George R. Blumenthal, Lev A. Kofman, Joel R. Primack, Nuclear Physics B 335 (1990) 197-220.
D. Polarski and A.A. Starobinsky, Nucl. Phys. B 385, 623 (1992).
L. A. Kofman and A. D. Linde, Nucl. Phys. B 282, 555 (1987).
Ryo Saito, Jun’ichi Yokoyama, Ryo Nagata JCAP06(2008)024; Rajeev Kumar Jain, Pravabati Chingangbam, L. Sriramkumar JCAP 07 10: 003, 2007. \[arXiv:astro-ph/0703762\]; Sirichai Chongchitnan, George Efstathiou JCAP 07 01: 011, 2007.
Edgar Bugaev, Peter Klimai Phys. Rev. D 78, 063515 (2008).
Lisa M H Hall and Hiranya V Peiris JCAP 01 027 2008.
Samuel M. Leach and Andrew R. Liddle Phys. Rev. D 63, 043508 (2001) \[arXiv:astro-ph/0010082\].
Jan Hamann, Laura Covi, Alessandro Melchiorri, Anze Slosar Phys. Rev. D 76, 023503 (2007).
Pascal M. Vaudrevange et al JCAP 04 031 2010.
A. A. Starobinsky Pis’ma Zh. Éksp. Teor. Fiz. 55, 477 (1992) \[JETP Lett. 55, 489 (1992)\].
Samuel M. Leach, Misao Sasaki, David Wands, and Andrew R. Liddle Phys. Rev. D 64, 023512 (2001) \[arXiv:astro-ph/0101406\].
D. Polarski and A.A. Starobinsky, Phys. Lett. B 356, 196 (1995). \[arxiv:astro-ph/9505125\]; D. Polarski and A. A. Starobinsky, 1996 Class. Quantum Grav. 13 377.
S. Weinberg, Phys. Rev. D 72, 043514 (2005); S. Weinberg Phys. Rev. D 74, 023508 (2006).
[^1]: [email protected]
[^2]: [email protected]
[^3]: It is important to note that except for the exact shape of the trajectory, the general arguments that we have given do not depend on the choice of vacuum.
|
---
abstract: |
Smart contracts are an innovation built on top of the blockchain technology. They provide a platform for automatically executing contracts in an anonymous, distributed, and trusted way, which has the potential to revolutionize many industries. The most popular programming language for creating smart contracts is called Solidity, which is supported by Ethereum. Like ordinary programs, Solidity programs may contain vulnerabilities, which potentially lead to attacks. The problem is magnified by the fact that smart contracts, unlike ordinary programs, cannot be patched easily once deployed. It is thus important that smart contracts are checked against potential vulnerabilities.
Existing approaches tackle the problem by developing methods which aim to automatically analyze or verify smart contracts. Such approaches often result in false alarms or poor scalability, fundamentally because Solidity is Turing-complete. In this work, we propose an alternative approach to automatically identify critical program paths (with multiple function calls including *inter-contract* function calls) in a smart contract, rank the paths according to their criticalness, discard them if they are infeasible or otherwise present them with user friendly warnings for user inspection. We identify paths which involve monetary transactions as critical paths, and prioritize those which potentially violate important properties. For scalability, symbolic execution techniques are only applied to top ranked critical paths. Our approach has been implemented in a tool called [sCompile]{}, which has been applied to 36,099 smart contracts. The experimental results show that [sCompile]{} is efficient, i.e., 5 seconds on average for one smart contract. Furthermore, we show that many known vulnerabilities can be captured if the user inspects as few as 10 program paths generated by [sCompile]{}. Lastly, [sCompile]{} discovered 224 unknown vulnerabilities with a false positive rate of 15.4% before user inspection.
author:
-
bibliography:
- 'contractanalysis.bib'
title: '[sCompile]{}: Critical Path Identification and Analysis for Smart Contracts'
---
Introduction {#sub:introduction}
============
Built on top of cryptographic algorithms [@Diffie:2006:NDC:2263321.2269104; @Diffie:1976:MCT:1499799.1499815; @jorstad1997cryptographic] and the blockchain technology [@haber1990time; @brito2013bitcoin; @narayanan2016bitcoin], cryptocurrencies like Bitcoin have been developing rapidly in recent years. Many believe that they have the potential to revolutionize the banking industry by allowing monetary transactions in an anonymous, distributed, and trusted way. Smart contracts take this one step further by providing a framework which allows any contract (not only monetary transactions) to be executed in an autonomous, distributed, and trusted way. Smart contracts thus may revolutionize many industries. Ethereum [@wood2014ethereum], an open-source, blockchain-based cryptocurrency, is the first to integrate the functionality of smart contracts. Due to its enormous potential, its market cap reached \$45.13 billion as of Nov 28th, 2017 [@BitcoinE61:online].
In essence, smart contracts are computer programs which are automatically executed on a distributed blockchain infrastructure. Popular applications of smart contracts include crowd fund raising and online gambling, which often involve monetary transactions as part of the contract. Majority of smart contracts in Ethereum are written in a programming language called Solidity. Like ordinary programs, Solidity programs may contain vulnerabilities, which potentially lead to attacks. The problem is magnified by the fact that smart contracts, unlike ordinary programs, cannot be patched easily once they are deployed on the blockchain.
In recent years, there have been an increasing number of news reports on attacks targeting smart contracts. These attacks exploit security vulnerabilities in Ethereum smart contracts and often result in monetary loss. One notorious example is the DAO attack, i.e., an attacker stole more than 3.5 million Ether (equivalent to about \$45 million USD at the time) from the DAO contract on June 17, 2016. This attack was carried out through a bug in the DAO contract. Since then, the correctness, systematic analysis, and verification of smart contracts have been pursued with urgency.
There have been multiple attempts on building tools which aim to analyze smart contracts fully automatically. For instance, Oyente [@luu2016making] applies symbolic execution techniques to find potential security vulnerabilities in Solidity smart contracts. Oyente has reportedly been applied to 19,366 Ethereum contracts and 45.6% of them are flagged as vulnerable. Another example is Zeus [@kalra2018zeus], which applies abstract interpretation to analyze smart contracts and claims that 94.6% of the contracts are vulnerable. In addition, there are approaches on applying theorem proving techniques to verify smart contracts which requires considerable manual effort [@Fstar].
The problem of analyzing and verifying smart contracts is far from being solved. Some believe that it will never be, just as the verification problem of traditional programs. Solidity is designed to be Turing-complete, which intuitively means that it is very expressive and flexible. The price to pay is that almost all interesting problems associated with checking whether a smart contract is vulnerable are undecidable [@turing1937computable]. Consequently, tools which aim to analyze smart contracts *automatically* are either not scalable or produce many false alarms. For instance, Oyente [@luu2016making] is designed to check whether a program path leads to a vulnerability or not using a constraint solver to check whether the path is feasible or not. Due to the limitation of constraint solving techniques, if Oyente is unable to determine whether the path is feasible or not, the choice is either to ignore the path (which may result in a false negative, i.e., a vulnerability is missed) or to report an alarm (which may result in a false alarm).
In this work, we develop an alternative approach for analyzing smart contracts. On one hand, we believe that manual inspection is unavoidable given the expressiveness of Solidity. On the other hand, given that smart contracts often encode many behaviors (which manifest through different program paths), manually inspecting every program path is simply overwhelming. Thus, our goal is to reduce the manual effort by identifying a small number of critical program paths and presenting them to the user with easy-to-digest information. Towards this goal, we make the following contributions in this work.
1. We develop a tool called [sCompile]{}. Given a smart contract, [sCompile]{} constructs a control flow graph (CFG) which captures all possible control flows, including those due to *inter-contract* function calls. Based on the CFG, we can systematically generate program paths which are constituted by a bounded sequence of function calls.
2. As the number of program paths is often huge, [sCompile]{} then statically identifies paths which are ‘critical’. In this work, we define paths which involve monetary transactions as critical paths. Focusing on such paths allows us to “follow the money”, which is often sufficient in capturing vulnerabilities in smart contracts.
3. Afterwards, to prioritize the program paths, [sCompile]{} analyzes each path to check whether it potentially violates certain critical properties. We define a set of (configurable) money-related properties based on existing vulnerabilities. After the analysis, [sCompile]{} ranks the paths by computing a criticalness score for each path. The criticalness score is calculated using a formula which takes into account which properties the path potentially violates and its length (so that a shorter path is more likely to be presented for user inspection).
4. Next, for each program path which has a criticalness score larger than a threshold, [sCompile]{} automatically checks whether it is feasible using symbolic execution techniques. The idea is to automatically filter those infeasible ones (if possible) to reduce user effort.
5. Lastly, the remaining critical paths are presented to the user for inspection through an interactive user interface.
[sCompile]{} is implemented in C++ and has been applied systematically to 36,099 smart contracts which are gathered from EtherScan [@Ethereum27:online]. Our experiment results show that [sCompile]{} can efficiently analyze smart contracts, i.e., it spends 5 seconds on average to analyze a smart contract (with a bound of $3$ on the number of function calls, including inter-contract function calls). This is mainly because [sCompile]{} is designed to rank the program paths based on static analysis and only applies symbolic execution to critical paths, which significantly reduces the number of times symbolic execution is applied. Furthermore, we show that [sCompile]{} effectively prioritizes program paths which reveal vulnerabilities in the smart contracts, i.e., it is often sufficient to capture the vulnerability by inspecting the reported $10$ or fewer critical program paths. Lastly, using [sCompile]{}, we identify 224 vulnerabilities. The false positive rate of the identified property-violating paths (before they are presented to the user for inspection) is kept to an acceptable 15.4%. We further conduct a user study which shows that with [sCompile]{}’s help, users are more likely to identify vulnerabilities in smart contracts.
The rest of the paper is organized as follows. Section \[examples\] illustrates how [sCompile]{} works through a few simple examples. Section \[approach\] presents the details of our approach step-by-step. Section \[experiment\] shows evaluation results on [sCompile]{}. Section \[related\] reviews related work and lastly Section \[conclusion\] concludes with a discussion on future work.
Illustrative Examples {#examples}
=====================
In this section, we present multiple examples to illustrate vulnerabilities in smart contracts and how [sCompile]{} helps to reveal them. The contracts are shown in Fig. \[contract:Illustrative\_contracts\].\
contract EnjinBuyer {
address public developer =
0x0639C169D9265Ca4B4DEce693764CdA8ea5F3882;
address public sale =
0xc4740f71323129669424d1Ae06c42AEE99da30e;
uint256 public contract_eth_value;
function purchase_tokens() {
require(msg.sender == developer);
contract_eth_value = this.balance;
require(sale.call.value(contract_eth_value)());
require(this.balance==0);
}
}
contract toyDAO{
address owner;
mapping (address => uint) credit;
function toyDAO() payable public {
owner = msg.sender;
}
function donate() payable public{
credit[msg.sender] = 100;
}
function withdraw() public {
0 uint256 value = 20;
1 if (msg.sender.call.value(value)()) {
2 credit[msg.sender] = credit[msg.sender] - value;
}
}
}
contract Bitway is ERC20 {
function () public payable {
createTokens();
}
function createTokens() public payable {
require(msg.value > 300);
...
}
...
}
**Example 1:** Contract *EnjinBuyer* is a token managing contract. It has two hard-coded addresses, *developer* and *sale*. In function *purchase\_tokens()*, the balance is sent to the sale address. There is a mistake in the sale address and as a result the balance is sent to a non-existing address and is lost forever. Note that any hexadecimal string of length not greater than 40 is considered a valid (well-formed) address in Ethereum, and thus no error occurs when function *purchase\_tokens()* is executed.
Given this contract, the most critical program path reported by [sCompile]{} is one which invokes function *purchase\_tokens()*. The program path is labeled with a message stating that the address does not exist on the Ethereum mainnet. With this information, the user captures the vulnerability.\
**Example 2:** Contract *toyDAO* is a simple contract which has the same problem as the DAO contract. Mapping *credit* is a map which records each user’s credit amount. Function *donate()* allows a user to top up its credit with $100$ wei (which is a unit of Ether). Function *withdraw()* by design sends $20$ wei to the message sender (at line 1) and then updates *credit*. However, when line 1 is executed, the message sender could call function *withdraw()* again through its fallback function, before line 2 is executed. Line 1 is then executed again and another $20$ wei is sent to the message sender. Eventually, all Ether in the wallet of this contract is sent to the message sender. In [sCompile]{}, inspired by common practice in the banking industry, users are allowed to set a limit on the amount transferred out of the wallet of the contract. Assume that the user sets the limit to be 30. Given the contract, a critical program path reported by [sCompile]{} is one which executes line 0, 1, 0, and 1. The program path is associated with a warning message stating that the accumulated amount transferred along the path is more than the limit. With this information, the user is able to capture the vulnerability. We remark that existing approaches often check such vulnerability through a property called reentrancy, which often results in false alarms [@luu2016making; @kalra2018zeus].\
**Example 3:** Contract *Bitway* is another token management contract. It receives Ether (i.e., cryptocurrency in Ethereum) through function *createTokens()*. Note that this is possible because function *createTokens()* is declared as *payable*. However, there is no function in the contract which can send Ether out. Given this contract, [sCompile]{} identifies a list of critical program paths for user inspection. The most critical one is a program path where function *createTokens()* is invoked. Furthermore, it is labeled with a warning message stating that the smart contract appears to be a “black hole” contract, as there is no program path for sending Ether out, whereas this program path allows one to transfer Ether into the wallet of the contract. By inspecting this program path and the warning message, the user can capture the vulnerability. In comparison, existing tools like Oyente [@luu2016making] and MAIAN [@nikolic2018finding] report no vulnerability given the contract. We remark that even though MAIAN is designed to check a similar vulnerability, it checks whether a contract can receive Ether through testing[^1] and thus results in a false negative in this case.
Approach
========
In this section, we present the details of our approach step-by-step. Fig. \[overallworkflow\] shows the overall workflow of [sCompile]{}. There are six main steps. Firstly, given a smart contract, [sCompile]{} constructs a control flow graph (CFG) [@allen1970control], based on which we can systematically enumerate all program paths. Secondly, we identify the monetary paths based on the CFG, up to a user-defined bound on the number of function calls. Thirdly, we analyze each program path in order to check whether it potentially violates any of the pre-defined monetary properties. Next, we compute a criticalness score for each program path and rank the paths accordingly. Afterwards, we apply symbolic execution to filter infeasible critical program paths. Lastly, we present the results along with the associated program paths to the user for inspection.
Constructing CFG
----------------
Given a smart contract, the first step of [sCompile]{} is to construct the CFG. The CFG must capture all possible program paths. [sCompile]{} constructs the CFG based on the compiled EVM opcode. Note that in the compiled opcode, there is a unique location for the first instruction to be executed and there is a unique location for every function in the contract. Formally, a CFG is a tuple $(N, root, E)$ such that
- $N$ is a set of nodes. Each node represents a basic block of opcodes (i.e., a sequence of opcode instructions which do not branch).
- $root \in N$ is the first basic block of opcodes.
- $E \subseteq N \times N$ is a set of edges. An edge $(n, n')$ is in $E$ if and only if there exists a control flow from $n$ to $n'$.
For simplicity, we skip the details on how $E$ is precisely defined and refer the readers to the formal semantics of EVM in [@wood2014ethereum]. In order to support inter-contract function calls, when a CALL instruction calls a foreign function defined in an unknown third-party contract, we assume that the foreign function may in turn call any function defined in the current contract[^2]. Note that this assumption covers the case of calling the fallback function of the third-party contract.
For instance, Fig. \[toyDAOCFG\] shows the CFG of the contract *toyDAO* shown in Fig. \[contract:Illustrative\_contracts\]. Each node in Fig. \[toyDAOCFG\] represents a basic block with a name in the form $Node\_m\_n$, where $m$ is the index of the first opcode of the basic block and $n$ is the index of the last. In Fig. \[toyDAOCFG\], the red diamond node at the top is the $root$ node; the blue rectangle nodes represent the first node of each function. For example, $Node\_102\_109$ is the first node of function *donate()*. A black oval represents a node which contains a CALL instruction, and thus the node can be redirected to the root due to inter-contract function calls. We use different line styles to represent different kinds of edges in Fig. \[toyDAOCFG\]. The black solid edges represent the normal control flow. The red dashed edges represent control flow due to a new function call, e.g., the edge from $Node\_88\_91$ to $Node\_0\_12$. That is, for every node $n$ which ends with a terminating opcode instruction (e.g., STOP or RETURN), we introduce an edge from $n$ to the $root$. The red dotted edges represent control flow due to inter-contract function calls. That is, for every node which ends with a CALL instruction to an external function, an edge is added from the node to the $root$, e.g., the edge from $Node\_112\_162$ to $Node\_0\_12$.
![Control flow graph of the contract *toyDAO*[]{data-label="toyDAOCFG"}](res/toyDAOCFG.pdf){width="7.8cm"}
Given a bound $b$ on the number of function calls, we can systematically unfold the CFG so as to obtain all program paths during which at most $b$ functions are called. For instance, with a bound of 2, the set of program paths includes all of those which visit $Node\_81\_87$ or $Node\_102\_109$ no more than twice. Statically constructing the CFG is non-trivial due to *indirect jumps* in the bytecode generated by the Solidity compiler. For instance, part of the bytecode for contract *toyDAO* is shown as follows.
The Solidity compiler applies templates to translate Solidity program statements to EVM bytecode and often introduces indirect jumps. In the above example, the JUMP at line $99$ is a direct jump because its target is pushed as a constant value ($0x70$) by the PUSH instruction at line $96$. The JUMP instruction at line $307$ is an indirect jump because its target is the top entry of the stack when execution reaches line $307$. The content of the stack, however, cannot be determined simply by scanning the preceding instructions. In fact, the target address is pushed onto the stack by the PUSH instruction at line $93$.
We thus use the following steps to construct the CFG from EVM opcode:
1. Disassemble the bytecode to a sequence of opcode instructions.
2. Construct basic blocks from the opcode instructions (such that each basic block is a node in the CFG).
3. Connect basic blocks with edges (including, but not limited to, direct jumps) which can be statically decided from the opcode instructions.
4. Use stack simulation to complete the CFG with edges for indirect jumps.
In step 1, we use the disassembly utility provided by the Solidity compiler to convert the bytecode to a human-readable sequence of opcode instructions. In step 2, we break the opcode instructions into basic blocks such that all instructions inside a basic block execute sequentially (e.g., the basic block $Node\_92\_99$). The boundaries between basic blocks are determined by the following instructions: the branching instructions JUMP and JUMPI; JUMPDEST, which denotes the start of a basic block (the entry basic block starts at address $0$, which is not a JUMPDEST instruction); CALL, whose next instruction denotes the start of a new basic block; and terminal instructions such as STOP, RETURN and REVERT, which denote the end of a terminating basic block (e.g., the basic block $Node\_100\_101$). The terminal instructions do not have a successor block, whereas a basic block which ends with a CALL has two successors. One is the basic block whose first instruction is the next instruction after the CALL in the instruction sequence. The other is the entry basic block, because of the assumption about CALL instructions above. One successor of instruction JUMPI is the basic block which starts with the instruction following the JUMPI.
The target address of a JUMP, and of the other successor of a JUMPI, is stored in the top stack entry when execution reaches the JUMP or JUMPI. In most cases, the top stack entry is pushed as a constant value by the PUSH instruction immediately preceding it (a.k.a. a direct jump). Thus the successor block can be determined statically by checking the constant value of the PUSH instruction.
For indirect jumps, the target of a JUMP may be pushed by an instruction far away from the JUMP instruction and thus cannot be decided by checking the preceding instructions. Thus, after step 3, the basic blocks which end with an indirect jump have missing edges to their successors; we call these basic blocks *dangling blocks* (e.g., the basic block $Node\_305\_307$). Some basic blocks may not be reachable from the entry basic block due to the dangling blocks (e.g., the basic block $Node\_100\_101$).
We use *stack simulation* to find the successors of dangling basic blocks. Stack simulation is similar to define-use analysis, except that dangling blocks which are reachable from the entry basic block are processed first. First, we find all the paths from the entry block to the dangling blocks (e.g., there are two paths from entry block $Node\_0\_12$ to the dangling block $Node\_305\_307$) and simulate the instructions along each path, following the semantics of each instruction on the stack. Note that a dangling block which ends with a JUMPI may have multiple successors in the CFG. When we reach the JUMP or JUMPI in the dangling block, the content of the top stack entry has been determined, and we connect the dangling block with the block which starts at the address held in the top stack entry. For instance, for the dangling block $Node\_305\_307$, there is only one successor, $Node\_100\_101$, in both paths, whose address is pushed by the instruction at address $093$. We repeat this step until all dangling blocks are processed.
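The block-splitting rules of step 2 can be sketched as follows. This is a simplified Python illustration, not [sCompile]{}’s actual C++ implementation; the input format (a list of `(address, mnemonic)` pairs from the disassembler) is an assumption made for the sketch.

```python
# Opcodes that end a basic block, per the boundary rules described above.
TERMINALS = {"STOP", "RETURN", "REVERT", "SELFDESTRUCT"}
BLOCK_ENDERS = {"JUMP", "JUMPI", "CALL", "CALLCODE", "DELEGATECALL"} | TERMINALS

def split_basic_blocks(ops):
    """ops: list of (address, mnemonic) pairs.
    Returns a list of basic blocks, each a list of (address, mnemonic)."""
    blocks, current = [], []
    for addr, op in ops:
        # JUMPDEST starts a new block (unless we are already at a boundary).
        if op == "JUMPDEST" and current:
            blocks.append(current)
            current = []
        current.append((addr, op))
        # Branching, call and terminal instructions end the current block.
        if op in BLOCK_ENDERS:
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    return blocks
```

For example, the sequence `PUSH1; JUMPI; PUSH1; JUMP; JUMPDEST; STOP` splits into three blocks, the last starting at the JUMPDEST.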
Identifying Monetary Paths
--------------------------
Given a bound $b$ on the call depth (i.e., the number of function calls) and a bound on loop iterations, there could still be many paths in the CFG to be analyzed. For instance, there are $6$ program paths in the *toyDAO* contract with a call depth bound of 1 (and a loop bound of 5) and 1296 with a call depth bound of 4. This is known as the path explosion problem [@anand2008demand]. Examining every one of them, either automatically or manually, is likely infeasible. Thus, it is important that we focus on the important ones. In this work, we focus mostly on the program paths which are money-related. The reason is that although there is a variety of vulnerabilities [@atzei2017survey], almost all of them are ‘money’-related, as attackers often target vulnerabilities in smart contracts for monetary gain.
To systematically identify money-related program paths, we label the nodes in the CFG with a flag indicating whether each is money-related or not. A node is money-related if and only if its basic block contains any of the following opcode instructions: CALL, CALLCODE, DELEGATECALL or SELFDESTRUCT. In general, one of these opcode instructions must be used whenever Ether is transferred from one account to another. A program path which traverses through a money-related node is considered money-related. Note that each opcode instruction in EVM is associated with some gas consumption, which technically makes every instruction money-related. However, gas consumption alone in most cases does not constitute a vulnerability and therefore we do not consider it money-related.
For instance, given the CFG of *toyDAO* shown in Fig. \[toyDAOCFG\], $Node\_112\_162$ contains a CALL instruction, implementing the statement *msg.sender.call.value(value)()*, and thus is money-related. Any path that traverses through $Node\_112\_162$ is a money-related path. In Fig. \[toyDAOCFG\], we visualize money-related nodes with a black background.
Focusing on money-related paths allows us to reduce the number of paths to analyze. For instance, the number of paths is reduced from 6 to 2 with a bound of $1$, and from 1296 to 116 with a bound of $4$.
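The bounded unfolding and money-related filtering can be sketched as follows. This is a simplified illustration (not the tool’s implementation): the CFG is assumed to be given as an adjacency dict, and the call-depth bound is approximated by bounding re-entries into the root node, since every function call re-enters the CFG at the root. A loop bound, which the tool also applies, is omitted here for brevity.

```python
def enumerate_paths(root, edges, terminal, bound):
    """Yield all paths (lists of node ids) from root to a terminal node
    that enter the root at most `bound` times (approximating call depth)."""
    stack = [([root], 1)]          # (path so far, root entries used)
    while stack:
        path, calls = stack.pop()
        node = path[-1]
        if node in terminal:
            yield path
            continue
        for succ in edges.get(node, []):
            c = calls + 1 if succ == root else calls
            if c <= bound:
                stack.append((path + [succ], c))

def money_paths(paths, money_nodes):
    # Keep only paths that traverse a money-related node.
    return [p for p in paths if any(n in money_nodes for n in p)]
```

On a toy CFG with root `R`, blocks `A` and `B` (only `B` money-related) and exit `E`, the unfolding with bound 1 yields two paths, of which one survives the money filter.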
Identifying Property-Violating Paths {#property}
------------------------------------
After the previous step, we are left with a set of important program paths. To prioritize the program paths for user inspection, we proceed to analyze these paths in order to check whether critical properties are potentially violated. The objective is to prioritize those program paths which may trigger a violation of a critical property. In the following, we introduce some of the properties that we focus on in detail and discuss the rest briefly. We remark that these properties are designed based on previously known vulnerabilities. Furthermore, the properties can be configured and extended in [sCompile]{}.\
*Property: Respect the Limit* In [sCompile]{}, we allow users to set a limit on the amount of Ether transferred out of the contract’s wallet. This is inspired by common practice in banking systems. For each program path, we statically check whether Ether is transferred out of the wallet and whether the transferred amount potentially exceeds the limit. That is, for each program path which transfers Ether, we use a symbolic variable to track the remaining limit, initialized to the limit itself. Each time an amount is transferred out, we decrease the variable by that amount. Afterwards, we check whether the remaining limit (i.e., a symbolic expression) can be less than zero. If it can, we conclude that the program path potentially violates the property. Note that if we are unable to determine the exact amount to be transferred (e.g., it may depend on user input), we conservatively assume the limit may be broken.
For instance, assume that the limit is set to 30 wei for the *toyDAO* contract shown in Fig. \[contract:Illustrative\_contracts\]; the following path is reported to exceed the transfer limit: $Node\_0\_12\rightarrow
\cdots\rightarrow Node\_112\_162\rightarrow
Node\_0\_12\rightarrow\cdots\rightarrow Node\_112\_162$. Initially, the remaining limit has value 30 (i.e., the assumed user-set limit). Each time $Node\_112\_162$ is executed, its value is reduced by 20. Thus, its value becomes negative after the second transfer.\
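The bookkeeping behind this check can be sketched as follows. [sCompile]{} tracks a symbolic remaining limit; the sketch below uses a simplified model in which each transfer amount along a path is either a concrete number or `None` (statically unknown), in which case the limit is conservatively assumed broken.

```python
def violates_limit(transfer_amounts, limit):
    """transfer_amounts: amounts (in wei) transferred out along one path,
    with None standing for a statically unknown amount."""
    remaining = limit
    for amount in transfer_amounts:
        if amount is None:
            # Unknown amount: conservatively assume the limit may be broken.
            return True
        remaining -= amount
        if remaining < 0:
            return True
    return False
```

For the *toyDAO* path above, two transfers of 20 wei against a 30 wei limit are flagged, while a single transfer is not.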
*Property: Avoid Non-Existing Addresses* Any hexadecimal string of length no greater than 40 is considered a valid (well-formed) address in Ethereum. If a non-existing address is used as the receiver of a transfer, the Solidity compiler does not generate any warning and the contract can be deployed on Ethereum successfully. If a transfer to a non-existing address is executed, Ethereum automatically registers a new address (after padding 0s in front of the address so that its length becomes 160 bits). Because this address is owned by nobody, no one holds its private key and thus no one can withdraw the Ether sent to it.
For every program path which contains a CALL or SELFDESTRUCT instruction, [sCompile]{} checks whether the address in the instruction exists or not. This is done with the help of EtherScan, which is a block explorer, search, API and analytics platform for Ethereum [@Ethereum27:online]. Given an address, EtherScan makes use of the public ledger of Ethereum and returns true if the address is registered (i.e., the address has come into effect, at a cost of 25,000 gas for the account, and there is at least one transaction history record); otherwise, it returns false. A program path which sends Ether to a non-existing address is considered to violate the property. There are two types of transactions which register an address in Ethereum: external transactions, which are initiated by an external account, and internal transactions, which are initiated by other contracts through function calls to the address. Most addresses are registered by external transactions. To minimize the number of requests to EtherScan, we only query external transactions, which may lead to false positives when an address has only internal transactions.
For instance, in the contract *EnjinBuyer* shown in Fig. \[contract:Illustrative\_contracts\], address *sale* is less than $160$ bits long (the last $4$ bits are missing). [sCompile]{} checks the validity of the address *sale* on a program path which calls function *purchase\_tokens()* and warns the user that it is not an existing address. As a result, the user can capture such mistakes.\
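The well-formedness and padding rules described above can be sketched as follows. The existence check (the EtherScan query) is not shown, since it requires network access; only the purely syntactic part is illustrated.

```python
import string

def is_wellformed(addr_hex):
    # Any hex string of length <= 40 is a well-formed address in Ethereum.
    return len(addr_hex) <= 40 and all(c in string.hexdigits for c in addr_hex)

def pad_to_160_bits(addr_hex):
    # Ethereum left-pads short addresses with 0s: 40 hex digits = 160 bits.
    return addr_hex.rjust(40, "0")
```

The broken *sale* address in *EnjinBuyer* has only 39 hex digits, yet it is still well-formed, which is why the compiler accepts it; after padding it denotes a (most likely unowned) 160-bit address.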
*Property: Guard Suicide* [sCompile]{} checks whether a program path would result in destructing the contract without constraints on the date or block number, or on the contract ownership. A contract may be designed to “suicide” after a certain date or after a certain number of blocks is reached, often transferring the Ether in the contract wallet to the owner. If, however, a program path which executes the opcode instruction SELFDESTRUCT can be executed without constraints on the date or block number, or on the contract ownership, the contract can be destructed arbitrarily and the Ether in the wallet can be transferred to anyone. A famous example is the Parity Wallet [@anyoneca35:online], which resulted in an estimated loss of tokens worth \$155 million [@AnotherP81:online].
We thus check whether there exists a program path which executes SELFDESTRUCT and whether its path condition is constituted of constraints on the date or block number and on the contract owner address. While checking the former is straightforward, checking the latter is achieved by checking whether the path condition contains constraints on instruction TIMESTAMP or NUMBER, and whether it compares the variable representing the contract owner address with other addresses. A program path which executes SELFDESTRUCT without such constraints is considered to violate the property.
contract StandardToken is Token {
1 function destroycontract(address _to) {
2 require(now > start + 10 days);
3 require(msg.sender != 0);
4 selfdestruct(_to);
5 }
6 ...
7 }
8 contract Problematic is StandardToken { ... }
One example of such a vulnerability is the *Problematic* contract[^3] shown in Fig. \[contract:GuardlessSuicide\]. Contract *Problematic* inherits from contract *StandardToken*, which provides the basic functionality of a standard token. One of the functions in *StandardToken* is *destroycontract()*, which allows one to destruct the contract: although the path condition constrains the date (line 2), the check at line 3 does not constrain the contract ownership. [sCompile]{} reports that a program path which executes line 4 potentially violates the property.\
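The guard-suicide check can be sketched as follows. The tagged representation of a path (the set of opcodes it executes plus a set of tags describing what its path condition constrains) is an assumption made for this sketch; [sCompile]{} derives this information from the symbolic path condition.

```python
def violates_guard_suicide(opcodes, condition_tags):
    """opcodes: opcodes executed along the path.
    condition_tags: e.g. {"TIMESTAMP", "NUMBER", "OWNER_CHECK"} describing
    what the path condition constrains (assumed abstraction)."""
    if "SELFDESTRUCT" not in opcodes:
        return False
    guards_time = "TIMESTAMP" in condition_tags or "NUMBER" in condition_tags
    guards_owner = "OWNER_CHECK" in condition_tags
    # Violation unless both the date/block number and ownership are guarded.
    return not (guards_time and guards_owner)
```

For *destroycontract()*, the path condition constrains `now` (TIMESTAMP) but `msg.sender != 0` is not an ownership comparison, so the path is flagged.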
*Property: Be No Black Hole* In a few cases, [sCompile]{} analyzes program paths which contain none of CALL, CALLCODE, DELEGATECALL or SELFDESTRUCT. For instance, if a contract has no money-related paths (i.e., it never sends any Ether out), [sCompile]{} then checks whether there exists a program path which allows the contract to receive Ether. The idea is to check whether the contract acts like a black hole for Ether. If it does, it is considered a vulnerability.
To check whether the contract can receive Ether, we check whether there is a *payable* function. Since Solidity version 0.4.x, a contract is allowed to receive Ether only if one of its public functions is declared with the keyword *payable*. When the Solidity compiler compiles a non-payable function, the following sequence of opcode instructions is inserted before the function body:
1 CALLVALUE
2 ISZERO
3 PUSH XX
4 JUMPI
5 PUSH1 0x00
6 DUP1
7 REVERT
At line 1, the instruction CALLVALUE retrieves the message value (the amount to be received). Instruction ISZERO then checks whether the value is zero. If it is zero, execution jumps (through the JUMPI instruction at line 4) to the address which is pushed onto the stack by the instruction at line 3; otherwise, it falls through to the block starting at line 5, which reverts the transaction (via the REVERT instruction at line 7). Thus, to check whether the contract is allowed to receive Ether, we go through every program path and check whether it contains the above-mentioned sequence of instructions. If all of them do, we conclude that the contract is not allowed to receive Ether; otherwise, it is. If the contract can receive Ether but cannot send any out, we identify the program paths for receiving Ether as potentially violating the property and label them with a warning message stating that the contract is a black hole.
For instance, given contract *Bitway* shown in Fig. \[contract:Illustrative\_contracts\], the program path corresponding to a call of function *approve()* (inherited from ERC20) contains the following sequence of instructions.
0305 JUMPDEST
0306 CALLVALUE //get the msg.value
0307 ISZERO
0308 PUSH2 013c //if msg.value is 0, go to line 316
0311 JUMPI
0312 PUSH1 00
0314 DUP1
0315 REVERT
0316 JUMPDEST //start of main block
As a comparison, the program path corresponding to a call of function *createTokens()* does not contain this sequence of instructions. At the same time, there is no instruction such as CALL, CALLCODE, DELEGATECALL or SELFDESTRUCT in its EVM code to send Ether out, so the contract *Bitway* is a contract which receives Ether but never sends any out.\
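The pattern scan behind this check can be sketched as follows (an illustration, assuming paths given as mnemonic sequences with `PUSHx`/`DUPx` normalized to `PUSH`/`DUP`):

```python
# The compiler-inserted guard that makes a function non-payable.
GUARD = ["CALLVALUE", "ISZERO", "PUSH", "JUMPI", "PUSH", "DUP", "REVERT"]

def has_nonpayable_guard(mnemonics):
    """True iff the path contains the non-payable prologue pattern."""
    n = len(GUARD)
    return any(mnemonics[i:i + n] == GUARD for i in range(len(mnemonics) - n + 1))

def contract_is_black_hole(paths, money_opcodes_seen):
    # Black hole: some path can receive Ether (no guard on it),
    # yet no path contains an Ether-sending opcode.
    can_receive = any(not has_nonpayable_guard(p) for p in paths)
    return can_receive and not money_opcodes_seen
```

For *Bitway*, the *approve()* path contains the guard while the *createTokens()* path does not, and no money-related opcode appears anywhere, so the contract is flagged as a black hole.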
We have presented above a few built-in properties supported by [sCompile]{}. These properties are designed based on reported vulnerabilities. [sCompile]{} is designed to be extensible, i.e., new properties can be easily supported by providing a function which takes a program path as input and reports whether the property is violated or not.
To further help users understand the program paths of a smart contract, [sCompile]{} supports additional analyses. For instance, [sCompile]{} provides an analysis of the gas consumption of program paths. Gas is the price for executing any part of a contract; it helps to defend against network abuse, as the execution of every EVM bytecode instruction consumes a certain amount of gas. To execute a transaction successfully, enough gas must be provided; otherwise the transaction fails and the consumed gas is forfeited. For every transaction, Ethereum estimates the amount of gas to be consumed based on the concrete transaction inputs provided by the user. However, without trying all possible inputs, users of the contract may not be aware of the existence of certain particularly gas-consuming program paths. Given a contract, [sCompile]{} statically estimates the gas consumption of every program path found by symbolic execution, based on each opcode instruction along the path, and then outputs the maximum gas consumption together with the corresponding path(s).
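The static estimate can be sketched as a per-opcode cost sum over each path. The three-entry cost table below is illustrative only: real EVM gas costs are defined in the Yellow Paper and can depend on operands and state, which a static estimate must approximate.

```python
# Illustrative per-opcode costs (NOT the full EVM gas schedule).
ILLUSTRATIVE_GAS = {"PUSH": 3, "SSTORE": 20000, "CALL": 700}

def estimate_path_gas(mnemonics, table=ILLUSTRATIVE_GAS, default=3):
    """Statically sum a gas-cost table over the opcodes of one path."""
    return sum(table.get(op, default) for op in mnemonics)

def max_gas(paths):
    # Report the worst-case (most gas-consuming) path estimate.
    return max(estimate_path_gas(p) for p in paths)
```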
Ranking Program Paths {#Prioritizing}
---------------------
So far we have identified a number of program paths, some of which potentially violate certain properties. To allow users to focus on the most critical program paths, as well as to save the effort of applying heavy analysis techniques like symbolic execution to all of them, we prioritize the program paths according to the likelihood that they reveal a critical vulnerability in the contract. For each program path, we calculate a criticalness score and then rank the program paths according to the scores. The criticalness score is calculated using the following formula: let $pa$ be a program path and $V$ be the set of properties which $pa$ violates. $$\begin{aligned}
criticalness(pa) = \frac{\Sigma_{pr \in V} \alpha_{pr}}{\epsilon * length(pa)}\end{aligned}$$ where $\alpha_{pr}$ is a constant which denotes the criticalness of violating property $pr$, $length(pa)$ is the length of path $pa$ (i.e., the number of function calls) and $\epsilon$ is a positive constant. Intuitively, the criticalness is designed such that the more critical the properties a program path violates, and the more properties it violates, the larger the score. Furthermore, it penalizes long program paths so that short program paths are presented first for user inspection. Note that a program path may violate multiple properties. For instance, a path which transfers all Ether to a non-existing account before destructing the contract violates the non-existing-address property as well as the guard-suicide property.
                  transfer limit   non-existing addr.   suicide   black hole
  --------------- ---------------- -------------------- --------- ------------
  Likelihood      1                1                    2         3
  Severity        2                3                    3         2
  Difficulty      2                2                    3         2
  $\alpha_{pr}$   4                6                    18        12

  : Definition of $\alpha_{pr}$[]{data-label="table:pr-definition"}
To assess the criticalness of each property, we use failure mode and effects analysis (FMEA [@stamatis2003fmea]), a risk management tool widely used in a variety of industries. FMEA evaluates each property on 3 factors, i.e., *Likelihood*, *Severity* and *Difficulty*. Each factor is rated from 1 to 3: 3 for *Likelihood* means the most likely, 3 for *Severity* means the most severe, and 3 for *Difficulty* means the most difficult to detect. The criticalness $\alpha_{pr}$ is then the product of the three factors. After ranking the program paths according to their criticalness scores, only program paths whose criticalness score exceeds a certain threshold are subject to further analysis. This allows us to reduce the number of program paths significantly. To identify the threshold, we adapt the idea of k-fold cross-validation [@devijver1982pattern; @kohavi1995study] from statistics. We collected a large set of smart contracts and split them into a training data set (10,452 contracts) and a test data set (25,678 contracts). The training data set is used to tune the parameters required for computing the criticalness, e.g., the value of $\epsilon$ and the threshold for the criticalness score. We repeated the experiments 20 times, which took more than 5,700 machine hours in total, and optimized the parameters (based on the number of vulnerabilities discovered and the false positive rate of each property). The parameters adopted for each property are shown in Table \[table:pr-definition\]; $\epsilon$ is set to 1 and the threshold for criticalness is set to 10.
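The scoring rule can be sketched as follows; the factor values reproduce Table \[table:pr-definition\] ($\alpha_{pr}$ is the product of the three factors), with $\epsilon = 1$ and threshold 10 as stated above.

```python
FACTORS = {  # property: (likelihood, severity, difficulty), each rated 1..3
    "transfer_limit":       (1, 2, 2),
    "non_existing_address": (1, 3, 2),
    "suicide":              (2, 3, 3),
    "black_hole":           (3, 2, 2),
}
# alpha_pr is the product of the three FMEA factors.
ALPHA = {p: l * s * d for p, (l, s, d) in FACTORS.items()}
EPSILON, THRESHOLD = 1, 10

def criticalness(violated, length):
    """Sum of alpha over violated properties, divided by epsilon * path length."""
    return sum(ALPHA[p] for p in violated) / (EPSILON * length)
```

A length-1 path which executes an unguarded SELFDESTRUCT scores 18 and passes the threshold; a length-2 path that only breaks the transfer limit scores 2 and is filtered out.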
Feasibility Checking
--------------------
After the last step, we have a ranked list of highly critical program paths which potentially reveal vulnerabilities in the smart contract. Not all of these program paths are feasible, however. To avoid false alarms, in this step we filter out infeasible program paths through symbolic execution.
Symbolic execution [@king1976symbolic; @howden1977symbolic] is a well-established method for program analysis and has been applied to solve a number of software engineering tasks. The basic idea is to symbolically execute a given program, i.e., to use symbolic variables instead of concrete values to represent the program inputs, to maintain the constraints that a program path must satisfy in order to be traversed, and lastly to solve the accumulated constraints using a constraint solver in order to check whether the program path is feasible or not. Symbolic execution has previously been applied to Solidity programs in Oyente [@luu2016making] and MAIAN [@nikolic2018finding]. In this work, we apply symbolic execution to reduce the set of program paths which are presented for the user’s inspection. A program path is removed only if symbolic execution proves it infeasible. In comparison, both Oyente and MAIAN aim to analyze smart contracts fully automatically, and thus when a program path cannot be decided by symbolic execution, the result may be a false positive or a false negative.
contract GigsToken {
1 function createTokens() payable {
2 require(msg.value > 0);
3 uint256 tokens = msg.value.mul(RATE);
4 balances[msg.sender] = balances[msg.sender].add(tokens);
5 owner.transfer(msg.value);
6 }
7 ...
}
For instance, Fig. \[contract:FalseGreedy\] shows a contract which is capable of both receiving Ether (since the function is *payable*) and sending Ether (due to *owner.transfer(msg.value)* at line 5), and thus [sCompile]{} does not flag it as a black hole contract. MAIAN however claims that it is. A closer investigation reveals that MAIAN has trouble solving the path condition for reaching line 5 and thus mistakenly assumes that the path is infeasible. As a result, it believes that there is no way Ethers can be sent out and concludes that the contract is a black hole.
User Inspection
---------------
The last step of our approach is to present the analysis results for user inspection. For the user’s convenience, we implemented a graphical user interface (GUI) in sCompile. The GUI is not limited to displaying the final analysis results. As a first step, the user can open a smart contract in the GUI by either opening a file or copying/pasting the source code. The user has the option to customize various parameters used in the analysis, i.e., the bound on the call depth, the transfer limit, the bound on loop iterations, the threshold for criticalness and the criticalness of the various properties. After the analysis, the output consists of two main parts: statistical data and detailed path data. For the statistical data, a report is displayed which shows the total execution time, the number of symbolically analyzed paths and the number of warnings for each property discussed in Section \[property\]. For the path data, the top ranked critical paths (those with a criticalness above the threshold which are not proved infeasible by symbolic execution) are shown to the user in the form of function call sequences. That is, for each critical program path, we map it back to a sequence of function calls by identifying the basic blocks in the sequence which mark the start of a function. Furthermore, if the constraint solver is able to solve the path condition, concrete function parameters are provided. Each critical path is associated with a warning message which explains why the program path should be inspected by the user, e.g., a potential violation of a critical property or being particularly gas-consuming. The user can click on a specific path, and the parts of the source code associated with the path are highlighted.
Implementation and Evaluation {#experiment}
=============================
Implementation
--------------
[sCompile]{} is implemented in C++ with about 8K lines of code. The source code is available online[^4]. The symbolic execution engine in [sCompile]{} is built based on the Z3 SMT solver [@de2008z3]. Note that we also symbolically execute the constructor in the contract and use the resultant symbolic states as the initial states for symbolic execution of all other functions in the contract.
----------- ------------ ------------ ------------ -------- ----------- ----------- -----------
             sCompile     sCompile     sCompile     Oyente    MAIAN       MAIAN       MAIAN
             (depth 1)    (depth 2)    (depth 3)              (prop. 1)   (prop. 2)   (prop. 3)
median       3.106        8.717        5.267        18.015    19.053      23.472      19.397
\#timeout    1145         1737         2597         2223      1561        6186        1081
----------- ------------ ------------ ------------ -------- ----------- ----------- -----------

: Median execution time (in seconds) and number of timeouts[]{data-label="exe_time_table1"}
Experiment {#sub:experiment}
----------
In the following, we evaluate [sCompile]{} to answer research questions (RQs) regarding [sCompile]{}’s efficiency, effectiveness and usefulness in practice. Our test subjects are all 36,099 contracts (including both the training set and the test set) with Solidity source code which are downloaded from EtherScan. Although [sCompile]{} can also take EVM code as input, we apply [sCompile]{} to Solidity source code so that we can manually inspect the experiment results.
All experiment results reported below are obtained on a machine running on an Amazon EC2 C3 xlarge instance with Ubuntu 16.04 and gcc version 5.4.0. The detailed hardware configuration is: 2.8 GHz Intel Xeon E5-2680 v2 processor, 7.5 GB RAM, 2 x 40 GB SSD. The timeouts for [sCompile]{} are set as follows: the global wall time is 60 seconds and the Z3 solver timeout is 100 milliseconds. Furthermore, the limit on the maximum number of blocks for a single path is set to 60, and the limit on the maximum number of loop iterations is set to 5, i.e., each loop is unfolded at most five times.\
*RQ1: Is [sCompile]{} efficient enough for practical usage?* [sCompile]{} is designed to be an add-on toolkit for the Solidity compiler, and thus it is important that [sCompile]{} provides timely feedback to users when a smart contract is implemented and compiled. In this experiment, we evaluate [sCompile]{} in terms of its execution time. We systematically apply [sCompile]{} to all the benchmark programs in the training set (which includes all the contracts in EtherScan as of January 2018) and measure the execution time (including all steps in our approach).
The results are summarized in Table \[exe\_time\_table1\] and Fig. \[exe\_time\]. In Table \[exe\_time\_table1\], the second, third and fourth columns show the execution time of [sCompile]{} with call depth bounds 1, 2, and 3 respectively, so that we can observe the effect of different call depth bounds. For baseline comparison, the fifth column shows the execution time of Oyente (the latest version 0.2.7) with the same timeout. We remark that the comparison should be taken with a grain of salt. Oyente does not consider sequences of function calls, i.e., its bound on function calls is 1. Furthermore, it does not consider initialization of variables in the constructor (or in the contract itself). The remaining columns show the execution time of MAIAN (the latest commit version as of Mar 19). Although MAIAN is designed to analyze program paths with multiple (by default, 3) function calls, it does not consider the possibility of a third-party contract calling any function in the contract through inter-contract function calls and thus often explores far fewer program paths than [sCompile]{}. Furthermore, MAIAN checks only one of the three properties (i.e., suicidal, prodigal and greedy) at a time, so we must run MAIAN three times to check all three properties. The different bounds used in all three tools are summarized in Table \[exe\_bound\_table1\].
Tool       call bound              loop bound   timeout   other bound
---------- ----------------------- ------------ --------- --------------
sCompile   3                       5            60 s      60 cfg nodes
Oyente     1                       10           60 s      N.A.
MAIAN      3 (no inter-contract)   N.A.         60 s      60 cfg nodes

: Bound settings of the three tools[]{data-label="exe_bound_table1"}
In Table \[exe\_time\_table1\], the second row shows the median execution time and the third row shows the number of times the execution time exceeds the global wall time ($60$ seconds). We observe that [sCompile]{} almost always finishes its analysis within $10$ seconds. Furthermore, the execution time remains similar across different call depth bounds. This is largely due to [sCompile]{}’s strategy of applying symbolic execution only to a small number of top ranked critical program paths. We do however observe that the number of timeouts increases with an increased call depth bound. A closer investigation shows that this is mainly because the number of program paths extracted from the CFG is much larger and it takes more time to extract all paths for ranking. In comparison, although Oyente has a call depth bound of 1, it times out on more contracts and spends more time on average. MAIAN spends more time on each property than the total execution time of [sCompile]{}. For some properties (such as *Greedy*), MAIAN times out less often, mainly because it does not consider inter-contract function calls and thus works with a smaller CFG.
Fig. \[exe\_time\] visualizes the distribution of the execution times of the tools. The horizontal axis represents the execution time (in seconds). Each row is a box plot constructed from five numbers: the minimum, the lower quartile, the median, the upper quartile and the maximum. The leftmost and rightmost vertical lines represent the minimum and the maximum respectively; the left and right edges of each box represent the lower quartile (i.e., the median of the lower half of the data) and the upper quartile (i.e., the median of the upper half); the vertical line inside the box is the median. Based on the data, we conclude that [sCompile]{} is efficient.
![Execution time of [sCompile]{} vs. Oyente vs. MAIAN[]{data-label="exe_time"}](res/exe_time.pdf){width="8.3cm"}
------------------------------ --------- --------------- ---------------- --------- --------------- ----------------
                                                 sCompile                                   MAIAN
                                alarmed   true positive   false positive   alarmed   true positive   false positive
*Avoid non-existing address*    37        32              5                N.A.      N.A.            N.A.
*Be no black hole*              57        57              0                141       56              85
*Guard suicide*                 42        38              4                66        30              36
------------------------------ --------- --------------- ---------------- --------- --------------- ----------------

: Comparison between [sCompile]{} and MAIAN on the training set[]{data-label="table:comparison on property2&3&4"}
We conjecture that the main reason [sCompile]{} can analyze smart contracts efficiently is that heavyweight techniques like symbolic execution are applied only to the most critical program paths. To validate the conjecture, we count the average number of program paths which are analyzed through symbolic execution in [sCompile]{}. Table \[numbers\] shows the results. The second column shows the estimated total number of program paths on average for each successfully analyzed smart contract. Note that the estimation is based on the CFG and thus may count program paths which are infeasible. This is part of the reason it is often greater than the numbers reported by alternative methods like [@luu2016making; @nikolic2018finding]; the other part is that our CFG is more complete. The third column shows the average number of paths analyzed with symbolic execution. It can be observed that only a small fraction of the program paths are symbolically analyzed. Furthermore, the number of symbolically executed paths remains small even when the call depth bound is increased, because only the top ranked critical program paths are analyzed by symbolic execution. If multiple program paths potentially violate the same property, [sCompile]{} prioritizes the shorter one and often avoids symbolically executing the longer one. The results confirm our conjecture.
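The selection strategy described above can be sketched as follows. The dictionary-based path representation and the default threshold of 10 (from the parameter tuning described earlier) are illustrative; they are not [sCompile]{}’s internal data model:

```python
def select_for_symbolic_execution(paths, threshold=10):
    """Among paths violating the same property keep only the shortest,
    then return those whose criticalness score exceeds the threshold,
    highest score first."""
    best = {}
    for p in paths:
        for pr in p["violated"]:
            if pr not in best or len(p["calls"]) < len(best[pr]["calls"]):
                best[pr] = p
    # A path may be the best witness for several properties; deduplicate.
    unique = list({id(p): p for p in best.values()}.values())
    survivors = [p for p in unique if p["score"] > threshold]
    return sorted(survivors, key=lambda p: p["score"], reverse=True)
```

For example, given two paths violating the suicide property with scores 18 and 9, plus a transfer-limit path with score 4, only the score-18 path survives for symbolic execution.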
                in total   symbolically executed   to user
-------------- ---------- ----------------------- ---------
call depth 1   48.92      37.51                   1.49
call depth 2   6177.21    144.24                  12.46
call depth 3   31346.62   121.23                  12.62

: Average number of program paths[]{data-label="numbers"}
*RQ2: Is [sCompile]{} effective?* In the second experiment, we investigate the effectiveness of [sCompile]{}. We apply [sCompile]{} to all 36,099 contracts and manually inspect the critical paths reported by [sCompile]{} to check whether a program path, together with the associated warning message, reveals a true vulnerability in the contract. Note that not all properties checked by [sCompile]{} readily signal a vulnerability. For instance, given a user-set transfer limit, [sCompile]{} may report that a program path violates the transfer limit. Although such information is often useful, depending on the transfer limit set by the user, the program path may or may not signal a vulnerability. For instance, a gambling contract may allow a user to place a bet with a certain amount and transfer some amount back to the user when the betting result is revealed; in such a case, the transfer limit is likely exceeded if a large bet is placed. For another instance, [sCompile]{} automatically reports the most gas-consuming program path. Such information is useful for the user (e.g., to set the right ‘price’ for the transaction), but does not necessarily signal a vulnerability (although it may signal program bugs). We thus focus in the following on those results produced by [sCompile]{} which are directly related to vulnerabilities, i.e., program paths deemed to violate the properties “avoid non-existing addresses”, “be no black hole” and “guard suicide”. Note that the latter two properties analyzed by [sCompile]{} are supported by MAIAN as well, so we can compare [sCompile]{}’s performance with that of MAIAN for these two properties. The results are shown in Table \[table:comparison on property2&3&4\]. In the following, we discuss the detailed findings[^5].\
For *Property: Be no Black Hole*, 57 contracts in the training set are marked vulnerable by [sCompile]{}. We manually checked all of them and confirm that they are all true positives. In comparison, MAIAN identified 141 black hole contracts, 56 of which are true positives; 43 of these overlap with [sCompile]{}’s results. We then investigated why [sCompile]{} missed the remaining 13 contracts identified by MAIAN. We discovered that analyzing each of them takes more than 60 seconds and thus [sCompile]{} timed out before finishing its analysis. When we set the timeout to 200s, [sCompile]{} identifies 3 more as black hole contracts. The other 85 contracts identified by MAIAN are false positives. Our investigation reveals that 62 of them are library contracts; because MAIAN does not differentiate library contracts from normal contracts, it marks all library contracts as vulnerable. We randomly chose 5 of the remaining contracts for further investigation and found that Z3 could not finish solving the path conditions in time, so MAIAN conservatively marks the contracts as vulnerable. After extending the time limit for Z3 and the total timeout, 4 of the 5 false positives are still reported. The reason is that these contracts can only send Ether out after a certain period; MAIAN could not find a feasible path to send Ether out in such cases and mistakenly flags the contracts as black holes.\
For *Property: Guard Suicide*, [sCompile]{} reports a program path if it leads to *selfdestruct* without a constraint on the ownership of the contract, the date or the block number, i.e., without a guard to prevent unauthorized users from killing the contract. Among the analyzed contracts, [sCompile]{} identified 42 contracts which contain at least one program path violating the property. Many of the identified contracts violate the property due to contract inheritance, as shown in Fig. \[contract:GuardlessSuicide\].
contract ViewTokenMintage{
1 modifier auth {
2 require(isAuthorized(msg.sender, msg.sig));
3 _;
4 }
5 function isAuthorized(address src, bytes4 sig)
internal view returns (bool) {
6 if (src == address(this)) {
7 return true;
8 } else if (src == owner) {
9 return true;
10 } else if (authority == DSAuthority(0)) {
11 return false;
12 } else {
13 return authority.canCall(src, this, sig);
14 }
15 }
16 function destruct(address addr) public auth {
17 selfdestruct(addr);
18 }
}
The remaining 4 cases reported by [sCompile]{} are false positives. We manually investigated them one by one. In one case, the contract is set up such that only the sender of the original transaction can trigger *selfdestruct*, which is a rather uncommon way of coding. The other 3 false alarms are from the same contract *ViewTokenMintage* shown in Fig. \[contract:FalseGuardless\]. The guard of *selfdestruct* depends on the return value of function *isAuthorized()*. The path going through line 6 returns true only if *msg.sender* is the same as the current contract. [sCompile]{} mistakenly reports an alarm because the result of the *ADDRESS* instruction is modeled as a symbolic constant.
Different from [sCompile]{}, MAIAN only checks whether a contract can be destructed without any constraints other than an ownership constraint. MAIAN identified 66 contracts violating the property. 30 of them are true positives, 13 of which are also identified by [sCompile]{}. The other 36 are false positives. The contract *MiCarsToken* shown in Fig. \[contract:AmbiguousCase\] is a typical false alarm. There are 2 conditions guarding *selfdestruct* in the contract. [sCompile]{} considers such a contract safe because there is a guard of *msg.sender == owner* (or the other condition), whereas MAIAN reports a vulnerability since the contract can also be killed if *msg.sender* is not the owner but the second condition is satisfied.
contract MiCarsToken {
function killContract () payable external {
if (msg.sender==owner ||
msg.value >=howManyEtherInWeiToKillContract)
selfdestruct(owner);
}
...
}
We further analyzed the $17$ cases which were missed by [sCompile]{}. 6 of them are flagged due to an owner change, as exemplified in Fig. \[contract:ownerChange\]. In this contract, *selfdestruct* is well guarded, but the developer made a mistake in naming the constructor (*mortal* instead of *Mortal*), so that it becomes a normal function; anyone can invoke *mortal()* to make himself the owner of the contract and then kill it.\
contract Mortal {
address public owner;
function mortal() { owner = msg.sender; }
function kill() {
if (msg.sender == owner) suicide(owner); }
}
For *Property: Avoid Non-existing Address*, as demonstrated by the contract EnjinBuyer in Fig. \[contract:Illustrative\_contracts\], it is a problem if a wrong address is used. For the contracts in the training set, all addresses identified are 160 bits long. However, 37 contracts are identified as using non-existing addresses (i.e., addresses not registered on the Ethereum mainnet). These non-existing addresses may be used for different reasons. For example, in contract *AmbrosusSale*, the address of TREASURY does not exist before the function `specialPurchase()` or `processPurchase()` is invoked. As a result, it costs more gas for the first user who invokes these two functions, because account registration costs at least an additional $25,000$ gas, of which the user may not be aware. There are 5 addresses from 5 contracts which are registered by internal transactions.\
We further analyzed 25,647 contracts newly uploaded to EtherScan from February 2018 to July 2018. For *“Be no Black Hole”*, [sCompile]{} generated 139 alarms, of which 109 are true vulnerabilities. Applying MAIAN to these contracts, 84 of them are marked vulnerable; 77 are true vulnerabilities which overlap with those found by [sCompile]{}, while 7 library contracts are mistakenly marked vulnerable. Among the 139 contracts, 25 vulnerable ones are missed by MAIAN according to our manual check. For *“Guard Suicide”*, [sCompile]{} generated 114 alarms, of which 83 are true vulnerabilities. Applying MAIAN to these contracts, 42 are marked vulnerable, all of which overlap with those found by [sCompile]{}. For *“Avoid Non-existing Addresses”*, [sCompile]{} generated 87 alarms, of which 80 are true vulnerabilities; the 7 false alarms are due to internal transactions.
In total, [sCompile]{} identifies 224 new vulnerabilities from the 36,099 contracts consisting of 46 *Black Hole* vulnerabilities, 66 *Guardless Suicide* vulnerabilities and 112 *Non-existing Address* vulnerabilities.
*RQ3: Is [sCompile]{} useful to contract users?* Different from other tools which aim to fully automatically analyze smart contracts, [sCompile]{} is designed to facilitate human users. We thus conduct a user study to see whether [sCompile]{} is helpful for users to detect vulnerabilities.
The user study takes the form of an online test. Once a user starts the test, he or she is first briefed with the necessary background on smart contract vulnerabilities (with examples). Then, 6 smart contracts (selected at random each time from a pool of contracts) are displayed one by one. For each contract, the source code is shown first. Afterwards, the user is asked to analyze the contract and answer two questions. The first question asks what vulnerability the contract has. The second question asks the user to identify the most gas-consuming path in the contract (with one function call).
For the first three contracts, the outputs from [sCompile]{} are shown alongside the contract source code as a hint to the user. For the remaining 3 contracts, the hints are not shown. The contracts are randomized so that the same contracts are not always the ones displayed with hints. The goal is to check whether users can identify the vulnerabilities correctly and more efficiently with [sCompile]{}’s results.
We distributed the test through social networks and online professional forums, as well as through personal contacts who we know have some experience with Solidity smart contracts. In three weeks we collected 48 valid responses (i.e., without junk answers)[^6]. Table \[table:survey\] summarizes the results. Recall that [sCompile]{}’s results are presented for the first three contracts only. Columns LOC and \#paths show the number of lines and program paths in each contract. Note that in order to keep the test manageable, we are limited to relatively small contracts in this study. Columns Q1 and Q2 show the number of correct responses (the numerator) out of the number of valid responses (the denominator). Column Time shows the time (in seconds) taken by the users to answer all the questions for each contract. At the end of the survey we asked each user to score (on a scale of 1 to 7; the higher the score, the more useful our tool is) how useful the hints were in helping them answer the questions. Since every respondent saw hints for half of the contracts, column Usefulness reports a single average score over all responses.
Contract LOC \#paths Q1 Q2 Time Usefulness
---------- ----- --------- ----- ----- ------ ------------
C1 (w) 33 8 7/8 3/8 119
C2 (w) 52 16 7/8 2/8 98
C3 (w) 67 38 7/8 2/8 233
C4 (w/o) 87 59 2/8 1/8 414
C5 (w/o) 103 13 3/8 1/8 397
C6 (w/o) 107 27 4/8 1/8 420
: Statistics and results of surveyed contracts[]{data-label="table:survey"}
The results show that for the first three contracts, for which [sCompile]{}’s analysis results are shown, almost all users are able to answer Q1 correctly in less time. For the last three contracts, without the hints, most users cannot identify the vulnerability correctly and it takes them more time to answer the question. For identifying the most gas-consuming path, even with the hints on which function takes the most gas, most users find it difficult to answer the question, although with [sCompile]{}’s help more users are able to answer correctly. The results show that gas consumption is not a well-understood problem and highlight the necessity of reporting the condition under which maximum gas consumption occurs. All the users consider our tool useful (the average score is $5/7$) in helping them identify the problems.
Related work {#related}
============
[sCompile]{} is related to work on identifying vulnerabilities in smart contracts. Existing work can be roughly categorized into 3 groups according to the level at which the vulnerability resides at: Solidity-level, EVM-level, and blockchain-level [@atzei2017survey; @li2017survey]. In addition, existing work can be categorized according to the techniques they employ to find vulnerabilities: symbolic execution [@luu2016making; @nikolic2018finding; @mythril; @manticore; @jiang2018contractfuzzer], static-analysis based approaches [@securify] and formal verification [@kalra2018zeus; @Fstar]. Our approach works at the EVM-level and is based on static analysis and symbolic execution, and is thus closely related to the following work.
Oyente [@luu2016making] is the first tool to apply symbolic execution to find potential security bugs in smart contracts. Oyente formulates the security bugs as intra-procedural properties and uses symbolic execution to check them. Among 19,366 existing Ethereum contracts, Oyente flags $8,833$ of them as vulnerable, including the vulnerability responsible for the DAO attack. However, Oyente does not perform inter-procedural analyses to check inter-procedural or trace properties as is done in [sCompile]{}.
MAIAN [@nikolic2018finding] was recently developed to find three types of problematic contracts in the wild: prodigal, greedy and suicidal. It formulates the three types of problems as inter-procedural properties and performs bounded inter-procedural symbolic execution. It also builds a private testnet to validate whether the contracts it finds are true positives, by executing the contracts with data generated by symbolic execution. At a high level, both MAIAN and [sCompile]{} perform inter-procedural symbolic analyses and check for suicidal and greedy contracts. However, [sCompile]{} differs from MAIAN in the following aspects. First, [sCompile]{} makes a much more conservative assumption about calls to third-party contracts, which we assume can call back any function in the current contract. Second, [sCompile]{} is designed to reduce user effort rather than to analyze smart contracts fully automatically; it therefore focuses on ranking program paths in terms of their criticalness and only applies symbolic execution to a select few critical program paths. Third, [sCompile]{} supports more properties than MAIAN. Lastly, [sCompile]{} checks properties in ways which are different from MAIAN. For instance, to check for black hole contracts, MAIAN checks whether a contract can receive Ether through testing (e.g., by sending Ether to the contract). As shown in Section \[examples\], the result is that there may be false negatives. Other symbolic-execution-based tools [@mythril; @manticore] perform intra-procedural symbolic analysis directly on EVM bytecode, as Oyente does.
The tool Securify [@securify] analyzes contracts based on static analysis. It infers semantic information about control dependencies and data dependencies from the CFG of an intermediate language for EVM bytecode, and specifies both compliance and violation patterns for each property. The vulnerability detection problem is then reduced to searching for these patterns in the inferred dependency information. The use of compliance patterns reduces the number of false positives in the reported warnings. Our approach does not infer semantic information from the CFG; instead, the ranking algorithm relies on syntactic information to reduce the number of paths for further symbolic analysis, which improves performance. We analyze the extracted paths with symbolic execution, which is more precise than the purely static analysis adopted by Securify.
In addition, static-analysis-based tools such as those provided in the Solidity compiler and the Remix IDE [@RemixSol56:online] can perform checks on Solidity source code to find common programming anti-patterns, but cannot check the properties proposed in this work.
Besides symbolic execution, there are attempts on formal verification of smart contracts using either model-checking techniques [@kalra2018zeus] or theorem-proving approaches [@Fstar]. These approaches in theory can check arbitrary properties specified manually in a form accepted by the model checker or the theorem prover. It is known that model checking has limited scalability whereas theorem proving requires an overwhelming amount of user effort.
Conclusion
==========
In this work, we introduce an approach to reveal “money-related” vulnerabilities in smart contracts by identifying a small number of critical paths for user inspection. The critical paths are identified and ranked so that the effort required for symbolic execution and user inspection is minimized. We implemented the approach in the tool [sCompile]{} and show that [sCompile]{} can analyze smart contracts effectively and efficiently. In addition, with [sCompile]{}, we found $224$ new vulnerabilities. All of the new vulnerabilities fall under the properties defined in our approach and are presented to the user as well-organized information within a reasonable time frame. In the future, we plan to further develop [sCompile]{} to improve its efficiency and effectiveness (with techniques like loop-invariant synthesis).
[^1]: MAIAN sends a value of 256 wei to the contract deployed in the private blockchain network.
[^2]: The return value from a call to a foreign function is marked as symbolic during symbolic execution.
[^3]: We hide the names of the contracts as some of them are yet to be fixed.
[^4]: The link is removed for anonymity.
[^5]: We have informed all developers whose contact info is available about the vulnerabilities in their contracts, and several have confirmed the vulnerabilities and deployed new contracts to substitute the vulnerable ones. Some are yet to respond, although the balances in their contracts are typically small.
[^6]: There are about 80 people who tried the test. Most of the respondents however leave the test after the first question, which perhaps evidences the difficulty in analyzing smart contracts.
---
abstract: 'Computer representations of real numbers are necessarily discrete, with some finite resolution, discreteness, quantization, or minimum representable difference. We perform astrometric and photometric measurements on stars and co-add multiple observations of faint sources to demonstrate that essentially all of the scientific information in an optical astronomical image can be preserved or transmitted when the minimum representable difference is a factor of two finer than the root-variance of the per-pixel noise. Adopting a representation this coarse reduces bandwidth for data acquisition, transmission, or storage, or permits better use of the system dynamic range, without sacrificing any information for down-stream data analysis, including information on sources fainter than the minimum representable difference itself.'
author:
- 'Adrian M. Price-Whelan, David W. Hogg'
bibliography:
- 'apj-jour.bib'
- 'refs.bib'
title: 'What bandwidth do I need for my image?'
---
Introduction
============
Computers operate on bits and collections of bits; the numbers stored by a computer are necessarily discrete; finite in both range and resolution. Computer-mediated measurements or quantitative observations of the world are therefore only approximately real-valued. This means that choices must be made, in the design of a computer instrument or a computational representation of data, about the range and resolution of represented numbers.
In astronomy this limitation is keenly felt at the present day in optical imaging systems, where the analog-to-digital conversion of CCD or equivalent detector read-out happens in real time and is severely limited in bandwidth; often there are only eight bits per readout pixel. This is even more constrained in space missions, where it is not just the bandwidth of real-time electronics but the bandwidth of telemetry of data from space to ground that is limited. If the “gain” of the system is set too far in one direction, too much of the dynamic range is spent on noise, and bright sources saturate the representation too frequently. If the gain is set too far in the other direction, information is lost about faint sources.
Fortunately, the information content of any astronomical image is limited *naturally* by the fact that the image contains *noise*. That is, tiny differences between pixel values—differences much smaller than the amplitude of any additive noise—do not carry very much astronomical information. For this reason, the discreteness of computer representations of pixel values does not have to limit the scientific information content in a computer-recorded image. All that is required is that the noise in the image be *resolved* by the representation. What this means, quantitatively, for the design of imaging systems is the subject of this [*Article*]{}; we are asking this question: “What bandwidth is required to deliver the scientific information content of a computer-recorded image?”
This question has been asked before, using information theory, in the context of telemetry [@Gaztanaga] or image compression [@Watson], treating the pixels (or linear combinations of them) as independent. Here we ask this question, in some sense, *experimentally*, and for the properties of imaging on which optical astronomy depends, where groups of contiguous pixels are used in concert to detect and centroid faint sources. We perform experiments with artificial data, varying the bandwidth of the representation—the size of the smallest representable difference $\Delta$ in pixel values—and measuring properties of scientific interest in the image. We go beyond previous experiments of this kind (@WhiteGreenfield, and @PenceInPress) by measuring the centroids and brightnesses of compact sources, and sources fainter than the detection limit. The higher the bandwidth, the better these measurements become, in precision and in accuracy. We find, in agreement with previous experiments and information-theory-based results, that the smallest representable difference $\Delta$ should be on the order of the root-variance $\sigma$ of the noise in the image. More specifically, we find that *the minimum representable difference should be about half the per-pixel noise sigma* or that about two bits should span the FWHM of the noise distribution if the computer representation is to deliver the information content of the image.
Of course, tiny mean differences in pixel values, even differences much smaller than the noise amplitude, *do* contain *extremely valuable* information, as is clear when many short exposures (for example) of one patch of the sky are co-added or analyzed simultaneously. “Blank” or noise-dominated parts of the individual images become signal-dominated in the co-added image. In what follows, we explicitly include this “below-the-noise” information as part of the information content of the image. Perhaps surprisingly, *all* of the information can be preserved, even about sources fainter than the discreteness of the computer representation, provided that the discreteness is finer than the amplitude of the noise. This result has important implications for image compression, but our main interest here is in the design and configuration of systems that efficiently take or store raw data, using as much of the necessarily limited dynamic range on signal as possible.
Our results have some relationship to the study of *stochastic resonance*, where it has been shown that signals of low dynamic range can be better detected in the presence of noise than in the absence of noise (see @stochres for a review). These studies show that if a signal is below the minimum representable difference $\Delta$, it is visible in the data only when the digitization of the signal is noisy. A crude summary of this literature is that the optimal noise amplitude is comparable to the minimum representable difference $\Delta$. We turn the stochastic resonance problem on its head: The counterintuitive result (in the stochastic resonance context) that weak signals become detectable only when the digitization is noisy becomes the relatively obvious result (in our context) that so long as the minimum representable difference $\Delta$ is comparable to or smaller than the noise, signals are transmitted at the maximum fidelity possible in the data set.
What we call here the “minimum representable difference” has also been called by other authors the “discretization” [@Gaztanaga] or “quantization” [@Watson; @WhiteGreenfield; @Pence].
Method
======
The artificial images we use for the experiments that follow are all made with the same basic parameters and processes. The images are square $16\times 16$ pixel images, to which we have added Gaussian noise to simulate sky (plus read) noise. A random number generator chooses a mean sky level $\nu$ for this Gaussian noise within the range 0 to 100, but the variance $\sigma^2$ is fixed for all experiments at $\sigma^2 = 1$. For most experiments we also add a randomly placed “star” with a Gaussian point-spread function at a location $(x_0,y_0)$ within a few pixel radius of the center of the image. The intensity of the star is given by a circular 2-dimensional Gaussian function. The FWHM of the star point-spread function is set to 2.35 pix for convenience, and the total flux of the star—total counts above background after integration over the array—is a variable. In the experiments to follow we set this total flux $S$ to 2.0, 64.0, and 2048.0. Given the FWHM setting of 2.35 pix and the sky noise setting of $\sigma^2=1$, the peak intensities corresponding to these fluxes are $0.32\,\sigma$, $10\,\sigma$, and $320\,\sigma$.
When we add the star, we do not add any Poisson or star-induced noise contribution to the images. That is, the images are “sky-dominated” in the sense that the per-pixel noise is the same in the center of the star as it is far from the star. In the context of setting the minimum representable difference, this choice is conservative, but it is also slightly unrealistic.
The method for setting the minimum representable difference $\Delta$ for the artificial images is used extensively in the experiments to follow. We define an array of factors $\Delta$ ranging from $2^{4}\,\sigma$ to $2^{-8}\,\sigma$, which we use to scale the data. We divide the image data by each value of $\Delta$, round all pixels to their nearest-integer values, and then multiply back in by $\Delta$. For convenience, we will call this “scale, snap to integer, un-scale” procedure “SNIP”. Figure \[fig:twelvepanel\] shows identical images SNIPped at different multiplicative factors; each panel shows the same original image data but with a different minimum representable difference $\Delta$.
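In code, the SNIP procedure is just a scale, round, and un-scale (a minimal sketch; the function and variable names are ours):

```python
import numpy as np

def snip(image, delta):
    """Scale, snap to integer, un-scale: quantize `image` so that the
    minimum representable difference between pixel values is `delta`."""
    return np.round(image / delta) * delta

# the factors used in the experiments: 2^4 sigma down to 2^-8 sigma
sigma = 1.0
deltas = 2.0 ** np.arange(4, -9, -1) * sigma
```

Note that `np.round` rounds halves to even; any consistent rounding rule serves equally well here.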
In the first experiments we determine the effect of the minimum representable difference on the measurement of the variance of the noise in the image, which we generated as pure Gaussian noise. For this experiment we created empty images, added Gaussian noise with mean $\nu$ (a real value selected in the range 0 to 100) and variance $\sigma^2 = 1$, and applied the SNIP procedure. Figure \[fig:variance\] shows the dependence of the measured variance on the minimum representable difference $[\Delta/\sigma]$. As expected, the measured variance increases in accuracy as the minimum representable difference decreases; the accuracy is good when $\Delta < 0.5\,\sigma$.
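This first experiment can be reproduced along the following lines (a sketch with our own seed and trial count; the original uses 1024 trials per factor):

```python
import numpy as np

rng = np.random.default_rng(0)

def snip(image, delta):
    """Quantize to a minimum representable difference `delta`."""
    return np.round(image / delta) * delta

sigma = 1.0
measured = {}
for delta in [4.0, 1.0, 0.5, 0.125]:
    variances = []
    for _ in range(256):
        nu = rng.uniform(0.0, 100.0)                 # random mean sky level
        img = rng.normal(nu, sigma, size=(16, 16))   # sky-only image
        variances.append(snip(img, delta * sigma).var(ddof=1))
    measured[delta] = float(np.median(variances))
```

For small $\Delta$ the measured variance converges on the true value (with a small Sheppard-type quantization bias of order $\Delta^2/12$); for $\Delta \gtrsim \sigma$ it depends strongly on where the mean level falls relative to the quantization grid.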
In the second experiments we add a star to each image and ask how well we can measure its centroid. We added the randomly placed “star” before applying the SNIP procedure. The star is given a set integrated flux, a FWHM of 2.35 pix, and a randomly selected true centroid $(x_0,y_0)$ within the central few pixels of the image. Our technique for measuring the centroid of the star involves fitting a quadratic surface to a $3\times 3$ section of the image data with the center of this array set on a first-guess value for the star position. It re-centers the $3\times 3$ array around the highest-value pixel in the neighborhood of the first-guess value. We perform a simple least-squares fit to these data, using $I(x,y) = a + b x + c y
+ d x^2 + e x y + f y^2$ as our surface model, where $x$ and $y$ are pixel coordinates in the $3\times 3$ grid, and $a$, $b$, $c$, $d$, $e$, and $f$ are parameters. Our centroid measurement $(x_s,y_s)$ is the stationary point of the fitted quadratic, computed from the best-fit parameters by $$(x_s, y_s) = \left(\frac{c e - 2 b f}{4 d f - e^{2}},
\frac{b e - 2 c d}{4 d f - e^{2}}\right)$$ The offset of this measurement from the true value (in pix) is then $\sqrt{(x_s - x_0)^{2} + (y_s - y_0)^{2}}$. For sources strongly affected by noise, this fitting method sometimes returns large offsets; we artificially cap all offsets at 2 pix.
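The centroiding procedure can be sketched as follows (the helper names are ours; we solve the $2\times 2$ stationary-point system directly rather than writing out the closed form):

```python
import numpy as np

def make_star_image(flux, x0, y0, nu=50.0, size=16, psf_sigma=1.0):
    """Sky level plus a sampled circular Gaussian star (FWHM = 2.35 * psf_sigma)."""
    y, x = np.mgrid[0:size, 0:size]
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return nu + flux / (2 * np.pi * psf_sigma**2) * np.exp(-r2 / (2 * psf_sigma**2))

def centroid(img):
    """Quadratic-surface centroid on the 3x3 patch around the brightest pixel."""
    py, px = np.unravel_index(np.argmax(img), img.shape)
    py = int(np.clip(py, 1, img.shape[0] - 2))   # keep the 3x3 patch in bounds
    px = int(np.clip(px, 1, img.shape[1] - 2))
    patch = img[py - 1:py + 2, px - 1:px + 2].ravel()
    dy, dx = np.mgrid[-1:2, -1:2]
    dx, dy = dx.ravel(), dy.ravel()
    X = np.stack([np.ones(9), dx, dy, dx**2, dx * dy, dy**2], axis=1)
    a, b, c, d, e, f = np.linalg.lstsq(X, patch, rcond=None)[0]
    # grad(a + bx + cy + dx^2 + exy + fy^2) = 0:
    xs, ys = np.linalg.solve([[2 * d, e], [e, 2 * f]], [-b, -c])
    return px + xs, py + ys
```

For a noiseless bright star the quadratic fit lands within a small fraction of a pixel of the true center; the residual offset is the intrinsic bias of fitting a parabola to a Gaussian profile.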
In the third experiments we consider the effect of quantization on the photometric properties of the star. Having centroided the star, we use the position found in the above paragraph, along with the known variance $\sigma^2$, to do a Gaussian fit to the point-spread function of the star. In fitting the star we allow only the height $A$ to vary; it is related to the total flux $S$ of the star by $S = A \times 2\pi\sigma^2$. The fit is therefore just a linear fit, with the model $Ae^{\frac{-(x-x_s)^2-(y-y_s)^2}{2\sigma^2}} + \mu_{sky}$, where $\mu_{sky}$ is the “sky level.”
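Because the model is linear in $A$ and $\mu_{sky}$, the photometric fit reduces to ordinary least squares. A minimal sketch on a noiseless synthetic star (positions, flux, and names are our own illustrative choices):

```python
import numpy as np

size, psf_sigma = 16, 1.0          # FWHM = 2.35 * psf_sigma
xs, ys = 7.3, 7.6                  # measured (here: true) centroid
true_flux, sky = 64.0, 50.0

y, x = np.mgrid[0:size, 0:size]
g = np.exp(-((x - xs) ** 2 + (y - ys) ** 2) / (2 * psf_sigma**2))
img = sky + true_flux / (2 * np.pi * psf_sigma**2) * g   # noiseless image

# linear least squares for the height A and the sky level mu_sky
X = np.stack([g.ravel(), np.ones(g.size)], axis=1)
A, mu_sky = np.linalg.lstsq(X, img.ravel(), rcond=None)[0]
flux_hat = A * 2 * np.pi * psf_sigma**2   # S = A * 2 pi sigma^2
```

With noise and quantization added, the same two-parameter fit returns the flux estimates whose errors are plotted in the figures.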
Figure \[fig:bitsoffset64\] shows the dependence of the measured centroids and brightnesses of the stars relative to the known values as a function of minimum representable difference $\Delta$. We find (not surprisingly) that the accuracy with which we measure the centroid and brightness increases as $\Delta$ decreases; the accuracy saturates at $\Delta < 0.5\,\sigma$. The expectation is that a star of high flux compared to the noise level will be very accurately measured, even at the highest minimum representable difference $\Delta
= 16\,\sigma$. At lower fluxes the offset is expected to be larger. Figures \[fig:bitsoffset64\] and \[fig:bitsoffset2048\] confirm these intuitions. In each of these figures, the experiment is performed on 1024 independent trials—each trial image has a unique sky level, noise sample, and star position—and each trial image has been SNIPped at each value of $\Delta$.
Tiny variations in mean pixel values, even those smaller than the noise amplitude, do contain valuable information. In the fourth experiments we investigate this by coadding noise-dominated images of the same region of the sky to reveal sources too faint to be detected in any individual image. The test we perform is—for each trial—to take 1024 images, add a faint source (fainter than any detection limit) to each image at a common location $(x_0,y_0)$, generate an independent sky level $\nu$ and sky noise for each image, apply the SNIP method for the same range of minimum representable differences $\Delta$, coadd the images, and measure the star offsets in the SNIPped, coadded images. The star position remains constant, but each image has independent sky properties. Just to reiterate, we coadd *after* applying the SNIP procedure; even so, given enough images we can measure astrometric and photometric properties of the extremely faint star with good accuracy. The coadding procedure is illustrated in Figure \[fig:provecoadd\].
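The coadd experiment, in outline (a sketch with our own seed and a fixed, hypothetical star position; here we quantize each image at $\Delta = 0.5\,\sigma$ before coadding):

```python
import numpy as np

rng = np.random.default_rng(1)

def snip(image, delta):
    """Quantize to a minimum representable difference `delta`."""
    return np.round(image / delta) * delta

size, sigma, psf_sigma = 16, 1.0, 1.0
x0, y0 = 7.3, 7.6                      # common star position (our choice)
flux = 2.0                             # peak ~0.32 sigma: undetectable alone
y, x = np.mgrid[0:size, 0:size]
star = flux / (2 * np.pi * psf_sigma**2) * np.exp(
    -((x - x0) ** 2 + (y - y0) ** 2) / (2 * psf_sigma**2))

coadd = np.zeros((size, size))
for _ in range(1024):
    nu = rng.uniform(0.0, 100.0)       # independent sky level per image
    img = star + rng.normal(nu, sigma, size=(size, size))
    coadd += snip(img, 0.5 * sigma)    # SNIP *before* coadding

# coadded per-pixel noise is ~sqrt(1024)*sigma = 32 sigma, while the summed
# star peak is a few hundred counts, so the star appears only in the coadd
signif = (coadd.max() - np.median(coadd)) / (np.sqrt(1024) * sigma)
peak = np.unravel_index(np.argmax(coadd), coadd.shape)
```

The star, invisible in every input image, is recovered at high significance in the quantized coadd.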
Figure \[fig:bitsoffsetcoadd1\] shows that for a total star flux of $2.0$, if we coadd 1024 images with independent sky properties, we can centroid and photometer the source with quality similar to the “single exposure” of Figure \[fig:bitsoffset64\], which had a total star flux of 64.0. This is expected when the minimum representable difference $\Delta$ is small. What may not be expected is that even sources for which every individual-image pixel value is fainter than the minimum representable difference $\Delta$ can be detected, centroided, and photometered accurately by coadding images together; that is, the imposition of a large individual-image minimum representable difference does not distort information about exceedingly faint sources. Figure \[fig:bitsoffsetcoadd2\] shows the same for total flux 64.0; the trend is similar to that in Figure \[fig:bitsoffset2048\].
In the coadd experiments, we have made the optimistic assumption that the sky level will be independent in every image that contributes to each coadd trial. To test the importance of varying the sky among the coadded exposures, we made a version in which we did *not* vary the sky. That is, we made each individual image not just with a fixed star flux and location but also with a fixed sky level—different for each trial, but the same for each coadded exposure within each trial. The differences between Figures \[fig:bitsoffsetcoadd1\] and \[fig:bitsoffsetcoadd1\_samesky\] are substantial when the minimum representable difference $\Delta$ is significantly larger than the per-pixel noise level $\sigma$.
Discussion
==========
Because of finite noise, the information content in astronomical images is finite, and can be captured by a finite numerical resolution. In the above, we scaled and snapped-to-integer real-valued images by a SNIP procedure such that in the SNIPped image, the minimum representable difference $\Delta$ between pixel values was set to a definite fraction of the Gaussian noise root-variance (sigma) $\sigma$. We found with direct numerical experiments that the SNIP procedure introduces no significant error in estimating the variance of the image, or in centroiding or photometering stars in the image, when the minimum representable difference is set to any value $\Delta \leq 0.5\,\sigma$. In addition, we showed that all the information about sources fainter than the per-pixel noise level is preserved by the quantization (SNIP) procedure, again provided that $\Delta \leq 0.5\,\sigma$. This is somewhat remarkable because at $\Delta = 0.5\,\sigma$ the faintest sources in our experiments were fainter than the minimum representable difference.
Although it is somewhat counterintuitive that integer quantization of the data does not remove information about sources fainter than the quantization level, it is perhaps even more counterintuitive how well photometric measurements perform in our coadd tests. For example, in Figures \[fig:bitsoffsetcoadd1\] and \[fig:bitsoffsetcoadd2\], the photometric measurements are relatively accurate even when the data are quantized at minimum representable difference $\Delta = 16\,\sigma$! The quality of the measurements can be understood in part by noting that the coadded images have a per-pixel noise $\sigma_{\mathrm{coadd}} = \sqrt{1024}\,\sigma = 32\,\sigma$, which is once again larger than the minimum representable difference, and in part by noting that each image has a different sky level, so each individual image is differently “wrong” in its photometry; many of these differences average out in the coadd. When the sky level is held fixed across coadded images, photometric measurements become inaccurate again—as seen in Figure \[fig:bitsoffsetcoadd1\_samesky\]—because individual-image biases caused by the coarse quantization no longer “average out”.
Our fundamental conclusion is that all of the scientifically relevant information in an astronomical image is preserved as long as the minimum representable difference $\Delta$ in pixel values is smaller than or equal to half the per-pixel root-variance (sigma) $\sigma$ in the image noise. This confirms previous results based on information-theory arguments (for example, @Gaztanaga), and extends previous experiments on bright-source photometry [@WhiteGreenfield; @PenceInPress] to astrometry and to sources fainter than the noise.
Our experiments were performed on images with pure Gaussian noise; of course many images contain significant non-Gaussianity in their per-pixel noise, so the empirical variance will depart significantly from the true noise variance [@WhiteGreenfield]. The conservative approach for such images is to use for $\sigma^2$ not the straightforwardly measured variance but rather something like the minimum of that variance and a central variance estimate, such as a sigma-clipped variance estimate, an estimate based on the curvature of the central part of the noise value frequency distribution function, or the median absolute difference of nearby pixels [@Pence]. With this re-definition of the root variance $\sigma$, the condition $\Delta \leq 0.5\,\sigma$ represents a conservative setting of the minimum representable difference.
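One such central estimate, the median absolute difference of adjacent pixels, is easy to implement (a sketch; the conversion factor assumes Gaussian noise, for which pixel differences are $N(0, 2\sigma^2)$ and the median of $|N(0, s^2)|$ is $0.6745\,s$):

```python
import numpy as np

def robust_sigma(img):
    """Per-pixel noise sigma from the median absolute difference of
    horizontally adjacent pixels; robust to bright sources and outliers."""
    diffs = np.abs(np.diff(img, axis=1))
    return np.median(diffs) / (0.6745 * np.sqrt(2.0))
```

Because the median ignores the tails, a scattering of bright source pixels barely perturbs the estimate.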
The fact that a $\Delta = 0.5\,\sigma$ representation preserves information on the faint sources—even those fainter than $\Delta$ itself—has implications for the design of data-taking systems, which are necessarily limited in bandwidth. If the system is set with $\Delta$ substantially smaller than $0.5\,\sigma$, then bright sources will saturate the representation more frequently than necessary, while no additional information is being carried about the faintest sources. Any increase in $\Delta$ pays off directly in putting more of the necessarily limited system dynamic range onto bright sources, so it behooves system designers to push as close to the $\Delta =
0.5\,\sigma$ limit as possible.
To put this in the context of a real data system, we looked at a “DARK” calibration image from the Hubble Space Telescope Advanced Camera for Surveys (ACS). The dark image should have the lowest per-pixel noise of any ACS image, because it has only dark and read noise. We chose image set `jbanbea2q`, and measured the median noise level in the raw DARK image with the median absolute difference between values of nearby pixels (for robustness). The ACS data system is operating with a minimum representable difference $0.25\,\sigma <
\Delta < 0.33\,\sigma$, comfortably within the information-preserving range and close to the minimum-bandwidth limit of $\Delta =
0.5\,\sigma$. Of course this is for a dark frame; sky exposures (especially long ones) could have been profitably taken with a larger $\Delta$ (because $\sigma$ will be greater); this would have preserved more of the system dynamic range for bright sources. If the ACS took almost exclusively long exposures, the output would contain more scientific information with a larger setting of the minimum representable difference.
In some sense, the results of this paper recommend a “lossy” image compression technique, in which data are scaled by a factor and snapped to integer values such that the minimum representable difference $\Delta$ is made equal to or smaller than $0.5\,\sigma$. Indeed, when typical real-valued astronomical images are converted to integers at this resolution, the integer versions compress far better with subsequent standard file compression techniques (such as gzip) than do the floating-point originals [@Gaztanaga; @Watson; @WhiteGreenfield; @Pence; @bernstein]. In the $\Delta = 0.5\,\sigma$ representation, after lossless compression, storage and transmission of the image “costs” only a few bits per noise-dominated pixel. Because the snap-to-integer step changes the data, this overall procedure is technically lossy, but we have shown here that none of the *scientific* information in the image has been lost.
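The compression claim is easy to check with, for example, `zlib` (our stand-in for gzip) on a pure-noise image (a sketch with our own sizes and seed):

```python
import zlib
import numpy as np

rng = np.random.default_rng(7)
sigma = 1.0
img = rng.normal(50.0, sigma, size=(512, 512))

# floating-point original: the noisy mantissa bits barely compress
raw = zlib.compress(img.astype('<f8').tobytes(), 9)

# SNIP at Delta = 0.5 sigma and store the scaled integers themselves
ints = np.round(img / (0.5 * sigma)).astype('<i2')
quant = zlib.compress(ints.tobytes(), 9)

bits_per_pixel = 8 * len(quant) / img.size
```

The quantized integers compress to a few bits per noise-dominated pixel, while the floating-point original stays near its uncompressed 64 bits per pixel.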
We thank Mike Blanton, Doug Finkbeiner, and Dustin Lang for useful discussions, and the anonymous referee for constructive suggestions. Support for this work was provided by NASA (NNX08AJ48G), the NSF (AST-0908357), and the Alexander von Humboldt Foundation.
![Starting from top left and moving to bottom right we show $16\times 16$ images of increasing bit depth. The original images are identical but snapped to integer as described in the text. The images are labeled by the ratio $[\sigma/\Delta]$ of noise root-variance $\sigma$ to the minimum representable difference $\Delta$. At ratios $[\sigma/\Delta] > 2^{0}$, the images become virtually indistinguishable from the high bandwidth images.\[fig:twelvepanel\]](twelve-panel.png){width="\textwidth"}
![Measurement of image noise variance as a function of bit depth or minimum representable difference $[\Delta/\sigma]$ for images with a randomly chosen mean level and Gaussian noise with true variance $\sigma^{2} = 1.0$. Each data point has been dithered a small amount horizontally to make the distribution visible. Black circles show medians (of samples of 1024) for each value of the multiplicative factor. The variance is well measured as long as the noise root-variance $\sigma$ is at least twice the minimum representable difference $\Delta$. \[fig:variance\]](1024ImsNoStarINT_Variance.png){width="\textwidth"}
![For both plots, the black circles show the median values. The points come from 1024 images with noise variance $\sigma^{2} = 1.0$ and a Gaussian star randomly placed with a total flux of 64.0 and a FWHM of 2.35 pix. The peak per-pixel intensity of the star is $10\,\sigma$. *Top:* Plot of measured star offset (astrometric error in pixels; see text for centroiding procedure and offset calculation) as a function of bit depth or minimum representable difference $[\Delta/\sigma]$. *Bottom:* Plot of $\log_2$ of the absolute value of the difference between the measured magnitude and the true magnitude of the star (photometric error; logarithm of the logarithm!) as a function of bit depth or minimum representable difference $[\Delta/\sigma]$.\[fig:bitsoffset64\]](BitsvsOffset_1024ims_flux64.png){width="\textwidth"}
![Same as \[fig:bitsoffset64\] except the total flux of the star was set to 2048.0. The peak per-pixel intensity of the star is now $320\,\sigma$.\[fig:bitsoffset2048\]](BitsvsOffset_1024ims_flux2048.png){width="\textwidth"}
![Four $16\times 16$ pixel images that demonstrate the coadding procedure. The top left image shows a single image with noise variance $\sigma^{2}$ = 1.0 and an (extremely faint) Gaussian star with a total flux of 2.0 and FWHM of 2.35 pix. The top right image is the same as the top left, but with the pixel values snapped to finite resolution $[\sigma/\Delta]=2$ or minimum representable difference $\Delta = 0.5\,\sigma$. The bottom left image shows the result of coadding 1024 images without snapping to finite resolution. The bottom right image is the same but coadding *after* snapping each individual image data to $[\sigma/\Delta]=2$. The similarities of the images indicate that information has been preserved. The peak per-pixel intensity of the star is $0.32\,\sigma$; this star is not visible in any of the individual images, but appears in the coadded images.\[fig:provecoadd\]](prove-coadd.png){width="\textwidth"}
![Same as \[fig:bitsoffset64\] except with a star of total flux 2.0, and coadding sets of 1024 exposures after snap-to-integer to make the extremely faint source detectable. In this experiment we give each of the coadded images a different sky level (see text). \[fig:bitsoffsetcoadd1\]](BitsvsOffset_1024ims_flux2_Coadded.png){width="\textwidth"}
![Same as \[fig:bitsoffsetcoadd1\] except with a star of total flux 64.0. \[fig:bitsoffsetcoadd2\]](BitsvsOffset_1024ims_flux64_Coadded.png){width="\textwidth"}
![Same as \[fig:bitsoffsetcoadd1\] except that in this experiment we give each of the coadded images the same sky level; the sky level is different for each of the coadd trials, but the same for all the images within each coadd trial. \[fig:bitsoffsetcoadd1\_samesky\]](BitsvsOffset_1024ims_flux2_Coadded_False.png){width="\textwidth"}
---
abstract: 'We show that the covariant derivative of a spinor for a general affine connection, not restricted to be metric compatible, is given by the Fock–Ivanenko coefficients with the antisymmetric part of the Lorentz connection. The projective invariance of the spinor connection allows us to introduce gauge fields interacting with spinors. We also derive the relation between the curvature spinor and the curvature tensor for a general connection.'
author:
- 'Nikodem J. Popławski'
title: Covariant differentiation of spinors for a general affine connection
---
Introduction
============
Einstein’s relativistic theory of gravitation (general relativity) gives a geometrical interpretation of the gravitational field. The geometry of general relativity is that of a four-dimensional Riemannian manifold, i.e. equipped with a symmetric metric-tensor field and an affine connection that is torsionless and metric compatible. In the Einstein–Palatini formulation of gravitation [@Pal], which is dynamically equivalent to the standard Einstein–Hilbert formulation [@FK], field equations are derived by varying a total action for the gravitational field and matter with respect to the metric tensor and the connection, regarded as independent variables. The corresponding Lagrangian density for the gravitational field is linear in the symmetric part of the Ricci tensor of the connection.
The postulates of symmetry and metric compatibility of the affine connection determine the connection in terms of the metric. Relaxing the postulate of symmetry leads to relativistic theories of gravitation with torsion [@Car; @Hehl]. Relaxing the postulate of metric compatibility leads to [*metric–affine*]{} formulations of gravity. Theories of gravity that incorporate spinor fields use a tetrad as a dynamical variable instead of the metric. Accordingly, the variation with respect to the affine connection can be replaced by the variation with respect to the Lorentz (spin, anholonomic) connection. Dynamical variables in relativistic theories of gravitation with relaxed constraints are: metric and torsion (Einstein–Cartan theory) [@Car; @Hehl; @SG], metric and asymmetric connection [@HK; @HLS], metric, torsion and nonmetricity [@Smal], tetrad and torsion [@HD], and tetrad and spin connection (Einstein–Cartan–Kibble–Sciama theory) [@KS].
Fock and Ivanenko generalized the relativistic Dirac equation for the electron by introducing the covariant derivative of a spinor in a Riemannian spacetime [@FoIv]. In order to incorporate spinor fields into a metric–affine theory of gravitation we must construct a covariant differentiation of spinors for a general affine connection. An example of a physical theory with such a connection is the generalized Einstein–Maxwell theory with the electromagnetic field tensor represented by the second Ricci tensor (the homothetic curvature tensor) [@spinor1]. In this paper we present a derivation of the covariant derivative of a spinor for a general connection. We show how the projective invariance of the spinor connection allows us to introduce gauge fields interacting with spinors in curved spacetime. We also derive the formula for the curvature spinor in the presence of a general connection.
Tetrads
=======
In order to construct a generally covariant Dirac equation, we must regard the components of a spinor as invariant under coordinate transformations [@Schr]. In addition to the coordinate systems, at each spacetime point we set up four orthogonal vectors (a [*tetrad*]{}) and the spinor gives a representation for the Lorentz transformations that rotate the tetrad [@Lord; @BJ]. Any vector $V$ can be specified by its components $V^\mu$ with respect to the coordinate system or by the coordinate-invariant projections $V^a$ of the vector onto the tetrad field $e^a_\mu$: $$V^a=e^a_\mu V^\mu,\,\,\,\,V^\mu=e^\mu_a V^a,$$ where the tetrad field $e^\mu_a$ is the inverse of $e^a_\mu$: $$e^\mu_a e^a_\nu=\delta^\mu_\nu,\,\,\,\,e^\mu_a e^b_\mu=\delta^b_a.$$ The [*metric tensor*]{} $g_{\mu\nu}$ of general relativity is related to the coordinate-invariant metric tensor of special relativity $\eta_{ab}=\mbox{diag}(1,-1,-1,-1)$ through the tetrad: $$g_{\mu\nu}=e^a_\mu e^b_\nu \eta_{ab}.
\label{metric}$$ Accordingly, the determinant $\textgoth{g}$ of the metric tensor $g_{\mu\nu}$ is related to the determinant $\textgoth{e}$ of the tetrad $e^a_\mu$ by $$\sqrt{-\textgoth{g}}=\textgoth{e}.$$ We can use $g_{\mu\nu}$ and its inverse $g^{\mu\nu}$ to lower and raise coordinate-based indices, and $\eta_{ab}$ and its inverse $\eta^{ab}$ to lower and raise coordinate-invariant (Lorentz) indices.
Eq. (\[metric\]) imposes 10 constraints on the 16 components of the tetrad, leaving 6 components arbitrary. If we change from one tetrad $e^\mu_a$ to another, $\tilde{e}^\mu_b$, then the vectors of the new tetrad are linear combinations of the vectors of the old tetrad: $$\tilde{e}^\mu_a=\Lambda^b_{\phantom{b}a}e^\mu_b.
\label{rotation}$$ Eq. (\[metric\]) applied to the tetrad field $\tilde{e}^\mu_b$ imposes on the matrix $\Lambda$ the orthogonality condition: $$\Lambda^c_{\phantom{c}a}\Lambda^d_{\phantom{d}b}\eta_{cd}=\eta_{ab},
\label{ortho}$$ so $\Lambda$ is a Lorentz matrix. Consequently, the Lorentz group can be regarded as the group of [*tetrad rotations*]{} in general relativity [@Hehl; @Lord].
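As a concrete numerical check (a sketch; we take a pure boost in Minkowski space, so the tetrad is just the Lorentz matrix itself):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # eta_ab = diag(1,-1,-1,-1)

phi = 0.7                                 # arbitrary rapidity
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = np.cosh(phi)
Lam[0, 1] = Lam[1, 0] = np.sinh(phi)

# orthogonality condition: Lambda^c_a Lambda^d_b eta_cd = eta_ab
ortho = Lam.T @ eta @ Lam

# a boosted tetrad e^a_mu = Lambda^a_mu reproduces the flat metric:
# g_mu_nu = e^a_mu e^b_nu eta_ab
e = Lam
g = np.einsum('am,bn,ab->mn', e, e, eta)
```

The same check also verifies the determinant relation $\sqrt{-\textgoth{g}}=\textgoth{e}$ for this tetrad.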
Spinors
=======
Let $\gamma^a$ be the coordinate-invariant [*Dirac matrices*]{}: $$\gamma^a\gamma^b+\gamma^b\gamma^a=2\eta^{ab}.
\label{anticom1}$$ Accordingly, the spacetime-dependent Dirac matrices, $\gamma^\mu=e^\mu_a \gamma^a$, satisfy $$\gamma^\mu\gamma^\nu+\gamma^\nu\gamma^\mu=2g^{\mu\nu}.
\label{anticom2}$$ Let $L$ be the spinor representation of a tetrad rotation (\[rotation\]): $$\tilde{\gamma}^a=\Lambda^a_{\phantom{a}b}L\gamma^b L^{-1}.
\label{spinor}$$ Since the Dirac matrices $\gamma^a$ are constant in some chosen representation, the condition $\tilde{\gamma}^a=\gamma^a$ gives the matrix $L$ as a function of $\Lambda^a_{\phantom{a}b}$ [@Lord; @BJ]. For infinitesimal Lorentz transformations: $$\Lambda^a_{\phantom{a}b}=\delta^a_b+\epsilon^a_{\phantom{a}b},$$ where the antisymmetry of the 6 infinitesimal Lorentz coefficients, $\epsilon_{ab}=-\epsilon_{ba}$, follows from Eq. (\[ortho\]), the solution for $L$ is: $$L=1+\frac{1}{2}\epsilon_{ab}G^{ab},\,\,\,\,L^{-1}=1-\frac{1}{2}\epsilon_{ab}G^{ab},$$ where $G^{ab}$ are the [*generators*]{} of the spinor representation of the Lorentz group: $$G^{ab}=\frac{1}{4}(\gamma^a\gamma^b-\gamma^b\gamma^a).
\label{gen}$$
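These relations are easy to verify numerically in, say, the Dirac representation (our choice; any representation satisfying Eq. (\[anticom1\]) works):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),     # Pauli matrices
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac representation of the constant gamma^a
gamma = [np.block([[I2, Z2], [Z2, -I2]])]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sig]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def G(a, b):
    """Generators G^ab = (1/4)(gamma^a gamma^b - gamma^b gamma^a)."""
    return 0.25 * (gamma[a] @ gamma[b] - gamma[b] @ gamma[a])
```

The commutator identity checked below, $[G^{ab},\gamma^c]=\eta^{bc}\gamma^a-\eta^{ac}\gamma^b$, is the infinitesimal form of the tetrad-rotation condition with $L=1+\frac{1}{2}\epsilon_{ab}G^{ab}$.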
A [*spinor*]{} $\psi$ is defined to be a quantity that, under tetrad rotations, transforms according to [@BJ] $$\tilde{\psi}=L\psi.$$ An [*adjoint spinor*]{} $\bar{\psi}$ is defined to be a quantity that transforms according to $$\tilde{\bar{\psi}}=\bar{\psi}L^{-1}.$$ Consequently, the Dirac matrices $\gamma^a$ can be regarded as quantities that have, in addition to the invariant index $a$, one spinor index and one adjoint-spinor index. The derivative of a spinor does not transform like a spinor since $$\tilde{\psi}_{,\mu}=L\psi_{,\mu}+L_{,\mu}\psi.$$ If we introduce the [*spinor connection*]{} $\Gamma_\mu$ that transforms according to $$\tilde{\Gamma}_\mu=L\Gamma_\mu L^{-1}+L_{,\mu}L^{-1},
\label{trans}$$ then the [*covariant derivative*]{} of a spinor [@Lord]: $$\psi_{:\mu}=\psi_{,\mu}-\Gamma_\mu \psi,
\label{covsp}$$ is a spinor: $$\tilde{\psi}_{:\mu}=L\psi_{:\mu}.$$ Similarly, one can show that the spinor-covariant derivative of the Dirac matrices $\gamma^a$ is $$\gamma^a_{\phantom{a}:\mu}=-[\Gamma_\mu,\gamma^a]
\label{covar}$$ since $\tilde{\gamma}^\mu=L\gamma^\mu L^{-1}$ due to Eq. (\[spinor\]) and $\gamma^a_{\phantom{a},\mu}=0$.
Lorentz connection
==================
Covariant differentiation of a contravariant vector $V^\mu$ and a covariant vector $W_\mu$ in a relativistic theory of gravitation introduces the [*affine connection*]{} $\Gamma^{\,\,\rho}_{\mu\,\nu}$: $$V^\mu_{\phantom{\mu};\nu}=V^\mu_{\phantom{\mu},\nu}+\Gamma^{\,\,\mu}_{\rho\,\nu}V^\rho,\,\,\,\,W_{\mu;\nu}=W_{\mu,\nu}-\Gamma^{\,\,\rho}_{\mu\,\nu}W_\rho,
\label{covder}$$ where the semicolon denotes the [*covariant derivative*]{} with respect to coordinate indices.[^1] The affine connection in general relativity is constrained to be symmetric, $\Gamma^{\,\,\rho}_{\mu\,\nu}=\Gamma^{\,\,\rho}_{\nu\,\mu}$, and metric compatible, $g_{\mu\nu;\rho}=0$. For a general spacetime we do not impose these constraints. As a result, raising and lowering of coordinate indices does not commute with covariant differentiation with respect to $\Gamma^{\,\,\rho}_{\mu\,\nu}$.
Let us define: $$\omega^\mu_{\phantom{\mu}a\nu}=e^\mu_{a;\nu}=e^\mu_{a,\nu}+\Gamma^{\,\,\mu}_{\rho\,\nu}e^\rho_a.
\label{omega}$$ The quantities $\omega^{ab}_{\phantom{ab}\mu}=e^a_\rho \eta^{bc} \omega^\rho_{\phantom{\rho}c\mu}$ transform like tensors under coordinate transformations. We can extend the notion of covariant differentiation to quantities with Lorentz coordinate-invariant indices by regarding $\omega^{ab}_{\phantom{ab}\mu}$ as a connection [@Hehl; @Lord]: $$V^a_{\phantom{a}|\mu}=V^a_{\phantom{a},\mu}+\omega^{a}_{\phantom{a}b\mu}V^b.$$ The covariant derivative of a scalar $V^a W_a$ coincides with its ordinary derivative: $$(V^a W_a)_{|\mu}=(V^a W_a)_{,\mu},$$ which gives $$W_{a|\mu}=W_{a,\mu}-\omega^{b}_{\phantom{b}a\mu}W_b.$$ We can also assume that the [*covariant derivative*]{} $|$ recognizes coordinate and spinor indices, acting on them like $;$ and $:$, respectively. Accordingly, the covariant derivative of the Dirac matrices $\gamma^a$ is [@LR] $$\gamma^a_{\phantom{a}|\mu}=\omega^{a}_{\phantom{a}b\mu}\gamma^b-[\Gamma_\mu,\gamma^a].$$ The definition (\[omega\]) can be written as [@Lord] $$e^\mu_{a|\nu}=e^\mu_{a,\nu}+\Gamma^{\,\,\mu}_{\rho\,\nu}e^\rho_a-\omega^b_{\phantom{b}a\nu}e^\mu_b=0.
\label{zero}$$
Eq. (\[zero\]) implies that the total covariant differentiation commutes with converting between coordinate and Lorentz indices. This equation also determines the [*Lorentz connection*]{} $\omega^{ab}_{\phantom{ab}\mu}$, also called the [*spin connection*]{}, in terms of the affine connection, tetrad and its derivatives. Conversely, the affine connection is determined by the Lorentz connection, tetrad and its derivatives [@spinor5]: $$\Gamma^{\,\,\rho}_{\mu\,\nu}=\omega^\rho_{\phantom{\rho}\mu\nu}+e^a_{\mu,\nu}e^\rho_a.
\label{affine}$$ The Cartan [*torsion tensor*]{} $S^\rho_{\phantom{\rho}\mu\nu}=\Gamma^{\,\,\,\rho}_{[\mu\,\nu]}$ is then $$S^\rho_{\phantom{\rho}\mu\nu}=\omega^\rho_{\phantom{\rho}[\mu\nu]}+e^a_{[\mu,\nu]}e^\rho_a,$$ from which we obtain the [*torsion vector*]{} $S_\mu=S^\nu_{\phantom{\nu}\mu\nu}$: $$S_\mu=\omega^\nu_{\phantom{\nu}[\mu\nu]}+e^a_{[\mu,\nu]}e^\nu_a.$$
Spinor connection
=================
We now derive the spinor connection $\Gamma_\mu$ for a general affine connection. Deviations of such a connection from the Levi-Civita metric-compatible connection are characterized by the [*nonmetricity tensor*]{}: $$N_{\mu\nu\rho}=g_{\mu\nu;\rho}.
\label{nonm0}$$ Eq. (\[nonm0\]) yields: $$\eta_{ab|\rho}=N_{ab\rho},\,\,\,\,\eta^{ab}_{\phantom{ab}|\rho}=-N^{ab}_{\phantom{ab}\rho},
\label{nonm1}$$ from which it follows that $$\omega_{(ab)\mu}=-\frac{1}{2}N_{ab\mu},
\label{nonm2}$$ i.e., the Lorentz connection is antisymmetric in its first two indices [*only*]{} for a metric-compatible affine connection [@Hehl].[^2] In the presence of nonmetricity (lack of metric compatibility), the covariant derivative of the Dirac matrices deviates from zero. From Eqs. (\[anticom1\]) and (\[nonm1\]) it follows that [@spinor2] $$\gamma^a_{\phantom{a}|\mu}=-\frac{1}{2}N^a_{\phantom{a}b\mu}\gamma^b.
\label{linear}$$ Multiplying both sides of this equation by $\gamma_a$ (from the left) yields $$\omega_{ab\mu}\gamma^a \gamma^b-\gamma_a \Gamma_\mu \gamma^a+4\Gamma_\mu=-\frac{1}{2}N^c_{\phantom{c}c\mu}.
\label{FI1}$$
We seek the solution of Eq. (\[FI1\]) in the form: $$\Gamma_\mu=-\frac{1}{4}\omega_{[ab]\mu}\gamma^a \gamma^b-A_\mu,
\label{FI2}$$ where $A_\mu$ is a spinor quantity with one vector index. Substituting Eq. (\[FI2\]) into (\[FI1\]) and using the identity $\gamma_c \gamma^a \gamma^b \gamma^c=4\eta^{ab}$ gives $$-\gamma_a A_\mu \gamma^a+4A_\mu=\frac{1}{2}N^c_{\phantom{c}c\mu}+\omega^c_{\phantom{c}c\mu}.
\label{FI3}$$ The right-hand side of Eq. (\[FI3\]) vanishes because of Eq. (\[nonm2\]), so $A_\mu$ is simply an arbitrary vector multiple of the unit matrix [@Lord; @BC]. We can write Eq. (\[FI2\]) as $$\Gamma_\mu=-\frac{1}{4}\omega^{(A)}_{ab\mu}\gamma^a \gamma^b,$$ where $\omega^{(A)}_{ab\mu}$ is related to $\omega_{[ab]\mu}$ by a [*projective transformation*]{}:[^3] $$\omega^{(A)}_{ab\mu}=\omega_{[ab]\mu}+\eta_{ab}A_\mu.
\label{project}$$ If we assume that $A_\mu$ represents some non-gravitational field then we can, for spinors in a purely gravitational field, set $A_\mu=0$.[^4] Therefore the spinor connection $\Gamma_\mu$ for a general affine connection has the form of the [*Fock–Ivanenko coefficients*]{} of general relativity [@HD; @FoIv; @Lord; @spinor2; @Uti; @FI]: $$\Gamma_\mu=-\frac{1}{4}\omega_{[ab]\mu}\gamma^a \gamma^b,
\label{FI4}$$ with the [*antisymmetric*]{} part of the Lorentz connection.[^5] Using the definition (\[omega\]), we can also write Eq. (\[FI4\]) as $$\Gamma_\mu=-\frac{1}{8}e^\nu_{c;\mu}[\gamma_\nu,\gamma^c]=\frac{1}{8}[\gamma^\nu_{\phantom{\nu};\mu},\gamma_\nu].$$
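As a quick numerical consistency check (not part of the original derivation), the four-dimensional identity $\gamma_c \gamma^a \gamma^b \gamma^c=4\eta^{ab}$ used above can be verified with an explicit Dirac representation; the sketch below is ours and assumes the signature $\eta=\mathrm{diag}(+,-,-,-)$:

```python
import numpy as np

# Pauli matrices and 2x2 blocks
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Dirac representation, signature eta = diag(+,-,-,-)
gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, si], [-si, Z2]]) for si in s]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def contract(a, b):
    """Return gamma_c gamma^a gamma^b gamma^c (summed over c, index lowered with eta)."""
    return sum(eta[c, c] * gamma[c] @ gamma[a] @ gamma[b] @ gamma[c]
               for c in range(4))

# gamma_c gamma^a gamma^b gamma^c = 4 eta^{ab} I  for all a, b
for a in range(4):
    for b in range(4):
        assert np.allclose(contract(a, b), 4 * eta[a, b] * np.eye(4))
```

Since the identity is representation independent, the same check passes in any basis related to this one by a similarity transformation.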
Curvature spinor
================
The commutator of the covariant derivatives of a vector with respect to the affine connection defines the [*curvature tensor*]{} $R^\rho_{\phantom{\rho}\sigma\mu\nu}=\Gamma^{\,\,\rho}_{\sigma\,\nu,\mu}-\Gamma^{\,\,\rho}_{\sigma\,\mu,\nu}+\Gamma^{\,\,\kappa}_{\sigma\,\nu}\Gamma^{\,\,\rho}_{\kappa\,\mu}-\Gamma^{\,\,\kappa}_{\sigma\,\mu}\Gamma^{\,\,\rho}_{\kappa\,\nu}$:[^6] $$V^\rho_{\phantom{\rho};\nu\mu}-V^\rho_{\phantom{\rho};\mu\nu}=R^\rho_{\phantom{\rho}\sigma\mu\nu}V^\sigma+2S^\sigma_{\phantom{\sigma}\mu\nu}V^\rho_{\phantom{\rho};\sigma},\,\,\,\,V_{\rho;\nu\mu}-V_{\rho;\mu\nu}=-R^\sigma_{\phantom{\sigma}\rho\mu\nu}V_\sigma+2S^\sigma_{\phantom{\sigma}\mu\nu}V_{\rho;\sigma}.$$ In analogous fashion, the commutator of the total covariant derivatives of a spinor: $$\psi_{|\nu\mu}-\psi_{|\mu\nu}=K_{\mu\nu}\psi+2S^\rho_{\phantom{\rho}\mu\nu}\psi_{|\rho},$$ defines the antisymmetric [*curvature spinor*]{} $K_{\mu\nu}$ [@Lord]: $$K_{\mu\nu}=\Gamma_{\mu,\nu}-\Gamma_{\nu,\mu}+[\Gamma_\mu,\Gamma_\nu].$$ From Eq. (\[trans\]) it follows that $K_{\mu\nu}$ transforms under tetrad rotations like the Dirac matrices $\gamma^a$ (with one spinor index and one adjoint-spinor index): $$\tilde{K}_{\mu\nu}=LK_{\mu\nu}L^{-1}.$$
Eq. (\[linear\]) is equivalent to $$\gamma^\rho_{\phantom{\rho}|\mu}=-\frac{1}{2}N^\rho_{\phantom{\rho}\sigma\mu}\gamma^\sigma.$$ The commutator of the covariant derivatives of the spacetime-dependent Dirac matrices with respect to the affine connection is then: $$2\gamma^\rho_{\phantom{\rho}|[\nu\mu]}=-(N^\rho_{\phantom{\rho}\sigma[\nu}\gamma^\sigma)_{|\mu]}.$$ Multiplying both sides of this equation by $\gamma_\rho$ (from the left) and using $$2\gamma^\rho_{\phantom{\rho}|[\nu\mu]}=R^\rho_{\phantom{\rho}\sigma\mu\nu}\gamma^\sigma+2S^\sigma_{\phantom{\sigma}\mu\nu}\gamma^\rho_{\phantom{\rho}|\sigma}+[K_{\mu\nu},\gamma^\rho]$$ and Eq. (\[anticom2\]) yield $$\begin{aligned}
& & R_{\rho\sigma\mu\nu}\gamma^\rho \gamma^\sigma-S^\sigma_{\phantom{\sigma}\mu\nu}N^\rho_{\phantom{\rho}\rho\sigma}+\gamma_\rho K_{\mu\nu}\gamma^\rho-4K_{\mu\nu}=-\gamma_\rho(N^\rho_{\phantom{\rho}\sigma[\nu}\gamma^\sigma)_{|\mu]} \nonumber \\
& & =-N^\rho_{\phantom{\rho}\rho[\nu;\mu]}+\frac{1}{2}N_{\rho\sigma[\nu}N^{\lambda\rho}_{\phantom{\sigma\rho}\mu]}\gamma_\lambda \gamma^\sigma=-N^\rho_{\phantom{\rho}\rho[\nu,\mu]}-S^\sigma_{\phantom{\sigma}\mu\nu}N^\rho_{\phantom{\rho}\rho\sigma}+\frac{1}{2}N_{\rho\sigma[\nu}N^{\rho\lambda}_{\phantom{\rho\lambda}\mu]}\gamma_\lambda \gamma^\sigma.
\label{cs1}\end{aligned}$$
We seek the solution of Eq. (\[cs1\]) in the form: $$K_{\mu\nu}=\frac{1}{4}R_{[\rho\sigma]\mu\nu}\gamma^\rho \gamma^\sigma-\frac{1}{8}N_{\rho\sigma[\nu}N^{\rho\lambda}_{\phantom{\rho\lambda}\mu]}\gamma_\lambda \gamma^\sigma+B_{\mu\nu},
\label{cs2}$$ where $B_{\mu\nu}$ is a spinor quantity with two vector indices. Substituting Eq. (\[cs2\]) into (\[cs1\]) gives $$\gamma_\rho B_{\mu\nu}\gamma^\rho-4B_{\mu\nu}=-Q_{\mu\nu}-N^\rho_{\phantom{\rho}\rho[\nu,\mu]},
\label{cs3}$$ where $$Q_{\mu\nu}=R_{\rho\sigma\mu\nu}g^{\rho\sigma}=\Gamma^{\,\,\rho}_{\rho\,\nu,\mu}-\Gamma^{\,\,\rho}_{\rho\,\mu,\nu}
\label{second}$$ is the [*second Ricci tensor*]{}, also called the tensor of [*homothetic curvature*]{} [@spinor1] or the [*segmental curvature tensor*]{} [@spinor2]. The right-hand side of Eq. (\[cs3\]) vanishes because of Eq. (\[nonm0\]), so $B_{\mu\nu}$ is simply an antisymmetric-tensor multiple of the unit matrix. The tensor $B_{\mu\nu}$ is related to the vector $A_\mu$ in Eq. (\[FI2\]) by[^7] $$B_{\mu\nu}=A_{\nu,\mu}-A_{\mu,\nu}+[A_\mu,A_\nu].
\label{gauge}$$ Setting $A_\mu=0$, which corresponds to the absence of non-gravitational fields, yields $B_{\mu\nu}=0$.[^8] Therefore the curvature spinor for a general affine connection is:[^9] $$K_{\mu\nu}=\frac{1}{4}R_{[\rho\sigma]\mu\nu}\gamma^\rho \gamma^\sigma-\frac{1}{8}N_{\rho\lambda[\mu}N^{\rho\sigma}_{\phantom{\rho\sigma}\nu]}\gamma^\lambda \gamma_\sigma.
\label{cs4}$$
Curvature and Ricci tensors
===========================
Finally, we examine the curvature tensor for a general affine connection. The commutator of the covariant derivatives of a tetrad with respect to the affine connection is $$2e^\rho_{a;[\nu\mu]}=R^\rho_{\phantom{\rho}\sigma\mu\nu}e^\sigma_a+2S^\sigma_{\phantom{\sigma}\mu\nu}e^\rho_{a;\sigma}.$$ This commutator can also be expressed in terms of the Lorentz connection: $$e^\rho_{a;[\nu\mu]}=\omega^\rho_{\phantom{\rho}a[\nu;\mu]}=(e^\rho_b \omega^b_{\phantom{b}a[\nu})_{;\mu]}=\omega_{ba[\nu}\omega^{\rho b}_{\phantom{\rho b}\mu]}+\omega^b_{\phantom{b}a[\nu;\mu]}e^\rho_b=\omega_{ba[\nu}\omega^{\rho b}_{\phantom{\rho b}\mu]}+\omega^b_{\phantom{b}a[\nu,\mu]}e^\rho_b+S^\sigma_{\phantom{\sigma}\mu\nu}\omega^\rho_{\phantom{\rho}a\sigma}.$$ Consequently, the curvature tensor with two Lorentz and two coordinate indices depends only on the Lorentz connection and its first derivatives [@KS; @Lord; @Uti]: $$R^{ab}_{\phantom{ab}\mu\nu}=\omega^{ab}_{\phantom{ab}\nu,\mu}-\omega^{ab}_{\phantom{ab}\mu,\nu}+\omega^a_{\phantom{a}c\mu}\omega^{cb}_{\phantom{cb}\nu}-\omega^a_{\phantom{a}c\nu}\omega^{cb}_{\phantom{cb}\mu}.
\label{curva}$$
The contraction of the tensor (\[curva\]) with a tetrad leads to two [*Lorentz-connection Ricci tensors*]{}: $$\begin{aligned}
& & R^a_\mu=R^{[ab]}_{\phantom{[ab]}\mu\nu}e^\nu_b, \\
& & P^a_\mu=R^{(ab)}_{\phantom{(ab)}\mu\nu}e^\nu_b,\end{aligned}$$ that are related to the sum and difference, respectively, of the [*Ricci tensor*]{}, $R_{\mu\nu}=R^\rho_{\phantom{\rho}\mu\rho\nu}$, and the tensor $C_{\mu\nu}=R_{\mu\rho\nu\sigma}g^{\rho\sigma}$. The contraction of the tensor $R^a_\mu$ with a tetrad gives the [*Ricci scalar*]{}, $$R=R^a_\mu e^\mu_a=R^{ab}_{\phantom{ab}\mu\nu}e^\mu_a e^\nu_b,$$ while the contraction of $P^a_\mu$ gives zero. The contraction of the curvature tensor (\[curva\]) with the Lorentz metric tensor $\eta_{ab}$ gives the second Ricci tensor (\[second\]),[^10] $$Q_{\mu\nu}=\omega^c_{\phantom{c}c\nu,\mu}-\omega^c_{\phantom{c}c\mu,\nu}.$$
Under the projective transformation (\[project\]) the tensor (\[curva\]) changes according to $$R^{ab}_{\phantom{ab}\mu\nu}\rightarrow R^{ab}_{\phantom{ab}\mu\nu}+\eta^{ab}B_{\mu\nu},$$ where $B_{\mu\nu}$ is given by Eq. (\[gauge\]), and so does $R^{(ab)}_{\phantom{(ab)}\mu\nu}$. Consequently, the tensor $P^a_\mu$ changes according to $$P^a_\mu\rightarrow P^a_\mu+e^{a\nu}B_{\mu\nu}.
\label{proj1}$$ The tensor $R^{[ab]}_{\phantom{[ab]}\mu\nu}$ is projectively invariant and so are $R^a_\mu$ and $R$.[^11] The second Ricci tensor changes according to $$Q_{\mu\nu}\rightarrow Q_{\mu\nu}+8A_{[\nu,\mu]},
\label{proj2}$$ so it is invariant only under [*special projective transformations*]{} [@Ein] with $A_\mu=\lambda_{,\mu}$, where $\lambda$ is a scalar. If $[A_\mu,A_\nu]=0$ then Eqs. (\[proj1\]) and (\[proj2\]) yield the projective invariance of the tensor: $$I^a_\mu=4P^a_\mu-Q_{\mu\nu}e^{a\nu}.$$
A. Palatini, [*Rend. Circ. Mat.*]{} (Palermo) [**43**]{}, 203 (1919); A. Einstein, [*Sitzungsber. Preuss. Akad. Wiss.*]{} (Berlin), 32 (1923); M. Ferraris, M. Francaviglia and C. Reina, [*Gen. Relativ. Gravit.*]{} [**14**]{}, 243 (1982). M. Ferraris and J. Kijowski, [*Gen. Relativ. Gravit.*]{} [**14**]{}, 165 (1982). É. Cartan, [*Compt. Rend. Acad. Sci.*]{} (Paris) [**174**]{}, 593 (1922). F. W. Hehl, P. von der Heyde, G. D. Kerlick and J. M. Nester, [*Rev. Mod. Phys.*]{} [**48**]{}, 393 (1976). V. de Sabbata and M. Gasperini, [*Introduction to Gravitation*]{} (World Scientific, Singapore, 1986). F. W. Hehl and G. D. Kerlick, [*Gen. Relativ. Gravit.*]{} [**9**]{}, 691 (1978). F. W. Hehl, E. A. Lord and L. L. Smalley, [*Gen. Relativ. Gravit.*]{} [**13**]{}, 1037 (1981). L. L. Smalley, [*Phys. Lett. A*]{} [**61**]{}, 436 (1977). F. W. Hehl and B. K. Datta, [*J. Math. Phys.*]{} [**12**]{}, 1334 (1971). T. W. B. Kibble, [*J. Math. Phys.*]{} [**2**]{}, 212 (1961); D. W. Sciama, [*Rev. Mod. Phys.*]{} [**36**]{}, 463 (1964). V. Fock and D. Ivanenko, [*C. R. Acad. Sci.*]{} (Paris) [**188**]{}, 1470 (1929); V. Fock, [*Z. Phys.*]{} [**57**]{}, 261 (1929). V. N. Ponomariov and Ju. Obuchov, [*Gen. Relativ. Gravit.*]{} [**14**]{}, 309 (1982). E. Schrödinger, [*Sitzungsber. Preuss. Akad. Wiss.*]{} (Berlin), 105 (1932). E. A. Lord, [*Tensors, Relativity and Cosmology*]{} (McGraw-Hill, New Delhi, 1976). W. L. Bade and H. Jehle, [*Rev. Mod. Phys.*]{} [**25**]{}, 714 (1953). C. P. Luehr and M. Rosenbaum, [*J. Math. Phys.*]{} [**15**]{}, 1120 (1974). L. Mangiarotti and G. Sardanashvily, [*Connections in Classical and Quantum Field Theory*]{} (World Scientific, Singapore, 2000); G. A. Sardanashvily, [*Theor. Math. Phys.*]{} [**132**]{}, 1161 (2002). A. K. Aringazin and A. L. Mikhailov, [*Class. Quantum Grav.*]{} [**8**]{}, 1685 (1991). D. R. Brill and J. M. Cohen, [*J. Math. Phys.*]{} [**7**]{}, 238 (1966). A. Einstein, [*The Meaning of Relativity*]{} (Princeton, New Jersey, 1953).
R. Utiyama, [*Phys. Rev.*]{} [**101**]{}, 1597 (1956). G. A. Sardanashvily, [*J. Math. Phys.*]{} [**39**]{}, 4874 (1998); F. Reifler and R. Morris, [*Int. J. Theor. Phys.*]{} [**44**]{}, 1307 (2005). J. A. Schouten, [*Ricci-Calculus*]{} (Springer–Verlag, New York, 1954).
[^1]: The definitions (\[covder\]) are related to one another via the condition $(V^\mu W_\mu)_{;\nu}=(V^\mu W_\mu)_{,\nu}$.
[^2]: As a result, raising and lowering of Lorentz indices does not commute with covariant differentiation; it commutes only with ordinary differentiation.
[^3]: The transformation (\[project\]) is related, because of Eq. (\[affine\]), to a projective transformation of the affine connection: $\Gamma^{\,\,\rho}_{\mu\,\nu}\rightarrow\Gamma^{\,\,\rho}_{\mu\,\nu}+\delta^\rho_\mu A_\nu$ [@Ein].
[^4]: The invariance of Eq. (\[FI1\]) under the addition of a vector multiple $A_\mu$ of the unit matrix to the spinor connection makes it possible to introduce gauge fields interacting with spinors [@BC]. Eq. (\[covsp\]) can be written as $\psi_{:\mu}=(\partial_\mu+A_\mu)\psi+\frac{1}{4}\omega_{[ab]\mu}\gamma^a \gamma^b \psi$, from which it follows that if the vector $A_\mu$ is imaginary then we can treat it as a gauge field.
[^5]: The Fock–Ivanenko coefficients (\[FI4\]) can also be written in terms of the generators of the spinor representation of the Lorentz group (\[gen\]): $\Gamma_\mu=-\frac{1}{2}\omega_{ab\mu}G^{ab}$.
[^6]: The curvature tensor can also be defined through a parallel displacement $\delta V=(V_{,\mu}-V_{;\mu})dx^\mu$ of a vector $V$ along the boundary of an infinitesimal surface element $\Delta f^{\mu\nu}$ [@Scho]: $\oint\delta V^\rho=-\frac{1}{2}R^\rho_{\phantom{\rho}\sigma\mu\nu}V^\sigma\Delta f^{\mu\nu}$ and $\oint\delta V_\rho=\frac{1}{2}R^\sigma_{\phantom{\sigma}\rho\mu\nu}V_\sigma\Delta f^{\mu\nu}$. These equations are related to one another since the parallel displacement of a scalar along a closed curve vanishes.
[^7]: If $\psi$ is a pure spinor, i.e. has no non-spinor indices, then $A_\mu$ is a pure vector, like the electromagnetic potential [@BC], and $[A_\mu,A_\nu]=0$. If $\psi$ has non-spinor indices corresponding to some symmetries, e.g., the electron–neutrino symmetry, then $A_\mu$ is also assigned these indices and $[A_\mu,A_\nu]$ can be different from zero.
[^8]: Eq. (\[gauge\]) resembles the definition of a field strength in terms of a field potential for a non-commutative gauge field. The invariance of Eq. (\[cs1\]) under the addition of an antisymmetric-tensor multiple $B_{\mu\nu}$ of the unit matrix to the curvature spinor is related to the invariance of Eq. (\[FI1\]) under the addition of a vector multiple $A_\mu$ of the unit matrix to the spinor connection.
[^9]: The curvature spinor (\[cs4\]) can also be written in terms of the generators of the spinor representation of the Lorentz group (\[gen\]): $K_{\mu\nu}=\frac{1}{2}R_{\rho\sigma\mu\nu}G^{\rho\sigma}-\frac{1}{4}N_{\rho\lambda\mu}N^{\rho\sigma}_{\phantom{\rho\sigma}\nu}G^\lambda_{\phantom{\lambda}\sigma}$.
[^10]: If the Lorentz connection is antisymmetric, the curvature tensor (\[curva\]) is antisymmetric in the Lorentz indices and the tensors $P^a_\mu$ and $Q_{\mu\nu}$ vanish.
[^11]: The Einstein–Cartan–Kibble–Sciama formulation of gravitation [@Hehl; @KS], where the tetrad and Lorentz connection are dynamical variables, is based on the Lagrangian density $\textgoth{L}=\textgoth{e}R$.
---
abstract: 'In non-linear dynamics there are several model systems to study oscillations. One iconic example is the “Brusselator”, which describes the dynamics of the concentration of two chemical species in the non-equilibrium phase. In this work we study the Brusselator dynamics as a stochastic chemical reaction without diffusion analysing the corresponding stochastic differential equations with thermal or multiplicative noise. In both stochastic scenarios we investigate numerically how the Hopf bifurcation of the non-stochastic system is modified. Furthermore, we derive analytical expressions for the noise average orbits and variance of general stochastic dynamics, a general diffusion relationship in the thermal noise framework, and an asymptotic expression for the noise average quadratic deviations. Hence, besides the impact of these results on the noisy Brusselator’s dynamics, our findings are also relevant for general stochastic systems.'
author:
- 'Nicol[á]{}s Rubido'
title: Stochastic dynamics and the noisy Brusselator behaviour
---
Introduction
============
There are many instances in which physical systems spontaneously develop emergent order [@Strogatz; @Murray; @Prigogine]. Even more spectacular is the order created by chemical systems; the most dramatic example being the order associated with life. However, not all chemical reactions generate order. The reactions most closely associated with order creation are the auto-catalytic reactions. These are chemical reactions in which at least one of the reactants is also a product; hence, the equations are fundamentally non-linear due to this feedback effect.
Simple auto-catalytic reactions are known to exhibit sustained oscillations [@Prigogine; @Broomhead; @Osipov], thus creating temporal order. Other reactions can generate a separation of chemical species, creating spatial order, e.g., the Belousov-Zhabotinsky reaction [@Agladze]. More complex reactions are involved in metabolic pathways and networks in biological systems [@Petsko; @Etay; @Hilborn; @Ciandrini]. The transition to order as the distance from equilibrium increases is not usually continuous: order typically appears abruptly. The threshold between the disorder of chemical equilibrium and order is crossed as a phase transition. The conditions for such a phase transition to occur are determined with the mathematical machinery of non-equilibrium statistical mechanics [@Reichl].
A paradigmatic example of an auto-catalytic reaction, which exhibits out of equilibrium oscillations, is the “*Brusselator*”. It describes the dynamics of the concentration of two chemical species, where the evolution of each component is obtained from the following dimensionless differential equations [@Prigogine; @Murray; @Broomhead; @Osipov] $$\left\lbrace \begin{array}{lcl}
d\,u/d\,\tau & = & 1 - \left( b + 1 \right)\,u + a\,u^2\,v = f\left(u,\,v
\right)\,,\\
d\,v/d\,\tau & = & b\,u - a\,u^2\,v = g\left(u,\,v\right)\,,
\end{array} \right.
\label{eq_oscillator}$$ where $a,\,b > 0$ are constants, $u$ and $v$ correspond to the concentrations of the two species, and $\tau$ is the dimensionless time. These differential equations show, for most initial conditions, self-sustained oscillations when $b > 1 + a$. The transition from the chemical equilibrium state to this oscillatory behaviour happens via a Hopf bifurcation [@Guckenheimer].
Classical and recent work has emphasized the importance of fluctuations to better model the internal evolution of macroscopic systems, which are only accounted for by stochastic models [@Broomhead; @Osipov; @Agladze; @Petsko; @Traulsen; @Jan; @Melbinger; @Beta; @Abbott]. In particular, chemical oscillations may be enhanced and/or disturbed by stochastic effects and random drift. For instance, models of diffusion-driven pattern formation that rely on the Turing mechanism are commonly used in science. Nevertheless, many such models suffer from the defect of requiring fine tuning of parameters in order to predict the formation of spatial patterns. The limited range of parameters for which patterns are seen could be attributed to the simplicity of the models chosen to describe the process; however, for systems with an underlying molecular basis another explanation has recently been put forward [@Biancalani; @Biancalani2; @Butler]. Those authors have observed that Turing-like patterns exist for a much greater range of parameter values if the discrete nature of the molecules comprising the system is taken into account. Systems within this class may be analysed using the theory of stochastic processes. For the Brusselator, the inclusion of noise affects the concentrations of the relevant chemical species and accounts for the molecular character of the reaction compounds.
In this work, we study the Brusselator system without diffusion as a stochastic process due to the inclusion of thermal or multiplicative noise. We discuss the non-stochastic dynamics of the Brusselator analytically (Sec. \[sec\_brusselator\]) and derive a general expression for the noise average linear and quadratic deviations of the thermal noise stochastic system (Sec. \[sec\_stochastic\]) from generic Stochastic Differential Equations (SDEs). Consequently, we obtain a general Einstein diffusion relationship (Sec. \[sec\_diffusion\]) and derive a general expression for the asymptotic behaviour of the quadratic deviations. These expressions are derived in terms of the eigenvalues and eigenvectors of the Jacobian matrix of the non-stochastic equations. Furthermore, we analyse how the Brusselator Hopf bifurcation is perturbed by thermal or multiplicative noise via numerical experiments (Sec. \[sec\_Hopf\]). Our results show that, in both frameworks, the bifurcation transition is preserved on average for small noise intensities ($\Gamma \lesssim 0.1$). Our findings are relevant, not only for the analysis of the noisy Brusselator chemical reaction, but also for general stochastic systems.
Model
=====
Dynamical features of the Brusselator {#sec_brusselator}
-------------------------------------
The equilibrium phase of the Brusselator chemical reaction is given by the fixed point (FP) of Eq. (\[eq\_oscillator\]), namely, $\left(u_0,\,v_0\right) = \left(1,\,b/a\right)$. The Jacobian of Eq. (\[eq\_oscillator\]) evaluated at the FP is given by $$D\vec{F}_{(u_0,\,v_0)} = \left( \begin{array}{cc}
b - 1 & a \\
-b & -a
\end{array} \right),
\label{eq_jacobian}$$ where each entry of the matrix corresponds to the partial derivatives given by $\left(
\partial\,f_i/\partial\,x_j\right)_{(u_0,\,v_0)}$, $f_i$ being $f_1 = f$ or $f_2 = g$, with $x_1 = u$ and $x_2 = v$. The eigenvalues $\lambda_\pm$ of Eq. (\[eq\_jacobian\]) are the solutions of the characteristic polynomial $\chi\left(\lambda\right) = \det\left(
D\vec{F}_{u_0,\,v_0} - \lambda\,\mathbf{I}\right) = 0$, i.e., $$\lambda_{\pm} = \frac{\mathsf{Tr}}{2} \pm \sqrt{ \frac{\mathsf{Tr}^2}{4} - \Delta }\,,
\label{eq_eigenvalues}$$ and determine the local stability of the FP. As $\Delta = a > 0$, $\Delta$ being the determinant of Eq. (\[eq\_jacobian\]), a saddle node FP is impossible [@Guckenheimer]. Hence, analysing the sign of the trace \[$\mathsf{Tr} = b - \left(1 + a\right)$\] of Eq. (\[eq\_jacobian\]), we have that the FP is *unstable* if $\mathsf{Tr} > 0$ and that the FP is *stable* if $\mathsf{Tr} < 0$. The *critical point* is then $b_c
\equiv 1 + a$.
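These stability conditions can be checked numerically. The short sketch below (ours, not from the original) evaluates the eigenvalues of Eq. (\[eq\_jacobian\]) and confirms that the trace and determinant equal $b-(1+a)$ and $a$, respectively:

```python
import numpy as np

def fp_eigenvalues(a, b):
    """Eigenvalues of the Brusselator Jacobian evaluated at the fixed point (1, b/a)."""
    J = np.array([[b - 1.0, a], [-b, -a]])
    return np.linalg.eigvals(J)

# the trace is Tr = b - (1 + a) and the determinant is Delta = a
lam = fp_eigenvalues(1.0, 1.5)
assert np.isclose(lam.sum().real, 1.5 - 2.0) and np.isclose(np.prod(lam).real, 1.0)

# stable FP below b_c = 1 + a, unstable above
assert all(l.real < 0 for l in fp_eigenvalues(1.0, 1.5))
assert all(l.real > 0 for l in fp_eigenvalues(1.0, 2.5))
```

Sweeping $(a,b)$ over a grid with this function reproduces the stability and spiral/ordinary regions shown in Fig. \[fig\_eigenvalues\].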
![(Color online) Real (left column) and imaginary (right column) parts of the eigenvalue solutions ($\lambda_\pm$ in colour code) for the Brusselator Jacobian matrix at the fixed point (FP). The top row panels show $\lambda_{+}$ and the bottom row panels $\lambda_{-}$ \[positive and negative solutions of Eq. (\[eq\_eigenvalues\]), respectively\]. The dashed lines on the left column panels indicate the critical border $b_c = 1 + a$. The FP is stable for values beneath the dashed line, otherwise, is unstable. The dashed lines on the right column panels indicate the border between ordinary (outside the dashed area) and spiral (inside the dashed area) FP.[]{data-label="fig_eigenvalues"}](eigenvalues.pdf){width="0.95\columnwidth"}
The square root in Eq. (\[eq\_eigenvalues\]) determines if the FP is ordinary ($\lambda_{
\pm}\in\mathbb{R}$) or spiral ($\lambda_{\pm}\in\mathbb{C}$). Thus, two two-fold distinctions appear, as shown in Fig. \[fig\_eigenvalues\] (colour code). From the left column panels in Fig. \[fig\_eigenvalues\], it is seen that the line $b_c = 1 + a$ (diagonal dashed line) divides the parameter space into an upper positive region ($\mathsf{Re}\{
\lambda_{\pm}\} > 0$) and a lower negative region ($\mathsf{Re}\{\lambda_{\pm}\} < 0$). These two regions account for the unstable and stable FP situations, respectively. On the right column of the figure, the spiral and ordinary character of the FP eigenvalues is discriminated by the critical set $\mathsf{Tr}^2 = 4\Delta$ (curved dashed line). In the outer region of this set the imaginary part vanishes, namely, $\mathsf{Im}\{\lambda_{\pm}\} = 0$; thus, the FP is classified as ordinary. In the inner region, the FP is spiral (unstable if it is above the critical level, and stable if it is below).
However, the main feature for self-sustained oscillations to occur is to have an unstable FP ($b > 1 + a$). In that case, solutions are attracted to a limit-cycle [@Guckenheimer; @Jan]. A limit-cycle on a plane (or a two-dimensional manifold) is a closed trajectory in phase space having the property that at least one other trajectory spirals into it either as time approaches infinity or as time approaches negative infinity [@Guckenheimer]. This behaviour is shown in Fig. \[fig\_bifurcation\] for the Brusselator. For $b < b_c$ the FP is stable and there is no self-sustained oscillation. Such a steady state solution becomes unstable under a Hopf bifurcation as $b$ is increased. A stable oscillation appears for $b > b_c$, which corresponds to the non-equilibrium phase of the system.
![(Color online) Brusselator’s bifurcation diagram as $b$ is increased for $a = 1$. Red dots correspond to the system’s asymptotic orbits. The critical point $b_c = 1 + a =
2$ is signalled by straight (blue online) lines. The dashed (black online) line corresponds to the steady state solution $\left(1,\,b/a\right)$, which becomes unstable after the critical point as a Hopf bifurcation.[]{data-label="fig_bifurcation"}](bifurcation.pdf){width="1.0\columnwidth"}
Stochastic Dynamics
-------------------
A general coupled set of first order Stochastic Differential Equations (SDEs) is given by $$\dot{\vec{x}}_\eta(t) = \vec{F}\left(\vec{x}_\eta(t),\,t\right) +
\mathbf{H}\left(\vec{x}_\eta(t),\,t\right)\,
\vec{\eta}(t)\,,
\label{eq_SDEs}$$ where we name $\vec{x}_\eta = \{ x_i:\;i = 1,\,\ldots,\,N\}$ as the set of state variables, $\vec{F}$ as the deterministic part of the SDE (namely, a field vector $\vec{F}$ with components $f_i:\mathbb{R}^N\times \mathbb{R} \to \mathbb{R}$ known as the *drift coefficients*), $\mathbf{H}$ as the $N \times N$ coupling matrix with function entries $h_{ij}:\mathbb{R}^N\times\mathbb{R} \to \mathbb{R}$ \[i.e., $h_{ij}\left(\vec{x}_\eta(t),\,
t\right)$\], known as the *diffusion coefficients*, and $\vec{\eta}$ as the vector of random fluctuations. The noise is assumed to be uncorrelated and with zero mean for all coordinates, namely, $$\begin{aligned}
\nonumber
\left\langle \eta_i(t)\,\eta_j(s) \right\rangle = \delta_{ij}\,\delta\left(t - s\right)\,,
\\ \text{and}\;\;
\left\langle \eta_i(t) \right\rangle = 0\,,\;\forall\,i = 1,\,\ldots,\,N\,,
\label{eq_noise}\end{aligned}$$ where, as in the following, we denote $\left\langle \cdots \right\rangle$ to be the average over various noise realisations (in other words, a mean value over an ensemble of possibly different fluctuations), $\delta_{ij}$ to be the Kronecker delta, and $\delta(t-s)$ to be the Dirac delta function.
The system is said to be subject to *additive or thermal noise* if the diffusion coefficients ($h_{ij}$) are constants; otherwise, it is said to be subject to *multiplicative noise*. For the Brusselator, taking into account random fluctuations in the equations of motion modifies the concentrations of the relevant chemical species into stochastic variables. The inclusion of noise is intended to account for the molecular character of the chemical reaction. Hence, the *Stochastic Brusselator Equations* (SBEs) are $$\left[\! \begin{array}{c}
\dot{u} \\ \dot{v}
\end{array} \!\right] = \left[\! \begin{array}{c}
f\left(u,\,v\right) \\ g\left(u,\,v\right)
\end{array} \!\right] + \mathbf{H}\left(u,\,v\right)
\left[\! \begin{array}{c}
\eta_{u} \\ \eta_{v}
\end{array} \!\right]\,,
\label{eq_BLEs}$$ with $$\mathbf{H}\left(u,\,v\right) = \left[\! \begin{array}{cc}
h_{11}\!\left(u,\,v\right) & h_{12}\!\left(u,\,v\right) \\
h_{21}\!\left(u,\,v\right) & h_{22}\!\left(u,\,v\right)
\end{array} \!\right]\,,
\label{eq_coupling}$$ where the deterministic drift coefficients come from Eq. (\[eq\_oscillator\]) and the chemical concentrations $u$ and $v$ are now stochastic variables.
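A minimal realisation of the SBEs with thermal noise, $\mathbf{H}=\Gamma\,\mathbf{I}$, can be sketched with an Euler-Maruyama scheme (the scheme, seed and parameter values are our illustrative choices, not prescribed by the text):

```python
import numpy as np

def sbe_euler_maruyama(a, b, gamma, x0, dt=1e-3, steps=40000, seed=0):
    """Euler-Maruyama integration of the SBEs with thermal noise H = gamma * I."""
    rng = np.random.default_rng(seed)
    u, v = float(x0[0]), float(x0[1])
    traj = np.empty((steps, 2))
    sdt = np.sqrt(dt)  # Wiener increments scale as sqrt(dt)
    for n in range(steps):
        fu = 1 - (b + 1) * u + a * u**2 * v
        fv = b * u - a * u**2 * v
        u += fu * dt + gamma * sdt * rng.standard_normal()
        v += fv * dt + gamma * sdt * rng.standard_normal()
        traj[n] = u, v
    return traj

# sub-critical case (b < 1 + a): the noisy orbit fluctuates around the FP (1, b/a)
traj = sbe_euler_maruyama(1.0, 1.5, 0.05, [1.0, 1.5])
assert np.allclose(traj[10000:].mean(axis=0), [1.0, 1.5], atol=0.15)
```

A multiplicative-noise variant would only replace the constant `gamma` by state-dependent entries $h_{ij}(u,v)$.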
In particular, the analysis of how the stability of the FP (sub-critical parameter values) and the limit-cycle (above criticality) of the deterministic Brusselator changes due to the inclusion of noise is carried out by linearising the SBEs around the particular stable solution. We tackle this analysis for the general SDE case \[Eq. (\[eq\_SDEs\])\]. We start by analysing the effect of the noise over the stability of the FP and *find that the resulting average solutions are mainly affected by the eigenvalues and eigenvectors of the Jacobian of the field vector $\vec{F}$ and the diffusion coefficients*.
Let $\delta\vec{x}_{\eta} \equiv\vec{x}_{\eta} - \vec{x}_{eq}$ be the *deviation* vector, where $\vec{x}_{eq}$ is the steady equilibrium solution (FP). Assuming that this deviation is small for all times and any noise realisation, then, $$\begin{aligned}
\nonumber
\delta\dot{\vec{x}}_{\eta} = \left[D\vec{F}_{\vec{x}_{eq}} + \left( \nabla\left[
\mathbf{H}\left(\vec{x}_\eta(t),\,t\right) \,\vec{\eta}(t)\right] \right)_{\vec{x}_{eq}}
\right]\times \\ \times\delta\vec{x}_{\eta} +
\mathbf{H}_{\vec{x}_{eq}}\,\vec{\eta}(t)\,,
\label{eq_linear-BLEs}\end{aligned}$$ $D\vec{F}_{\vec{x}_{eq}}$ ($\mathbf{H}_{\vec{x}_{eq}}$) being the Jacobian (coupling) matrix evaluated at the FP and $\nabla$ the gradient operator. Unless the noise is additive, the matrices inside the square brackets have a noise dependence. Thus, we restrict ourselves to the case of thermal noise for the mathematical derivations.
For *additive noise*, $\mathbf{H}(\vec{x}_\eta(t),t)$ is a constant matrix, $\mathbf{H}$, independent of the system’s state vector and time, hence, direct integration of Eq. (\[eq\_linear-BLEs\]) is possible. Denoting $D\vec{F}_{\vec{x}_{eq}} = \mathbf{J}$, the deviation vector evolves according to $$\delta\vec{x}_{\eta}(t) = e^{\mathbf{J}\,t}\delta\vec{x}(0) + \int_0^t ds\;e^{\mathbf{J}
\,(t - s)}\,\mathbf{H}\,\vec{\eta}(s)\,.
\label{eq_BLEs-linear_sol}$$
We note that the exponentials of the Jacobian matrix in the former expressions are understood as matrix exponentials; thus, they are computed from a power series expansion using the spectral decomposition \[$\mathbf{J} = \mathbf{P}\mathbf{\Lambda}\mathbf{P}^{-1}$, where $\left(\mathbf{P}\right)_{ij} = \left(\vec{v}_j\right)_i$ is the $i$-th coordinate of the $j$-th eigenvector of $\mathbf{J}$ and $\left(\mathbf{\Lambda}\right)_{ij} = \delta_{ij}
\lambda_j$, with $\lambda_j$ being the $j$-th eigenvalue of $\mathbf{J}$\]. Specifically, $$e^{\mathbf{J}\,t} \equiv \sum_{n = 0}^\infty \frac{1}{n!}\left(\mathbf{J}\,t\right)^n =
\mathbf{P}\,e^{\mathbf{\Lambda}\,t}\,\mathbf{P}^{-1}\,.
\label{eq_spectral}$$
Results
=======
Thermal noise ensemble first and second moments {#sec_stochastic}
-----------------------------------------------
Due to the stochastic character of Eq. (\[eq\_BLEs-linear\_sol\]), we focus on the analytical derivation of the first and second moments of the thermal noise ensemble distribution. In other words, we derive an expression for the noise average value of the deviations with respect to the FP and the noise average value of the quadratic deviations with respect to the FP. The noise average value of the deviations, $\vec{m}(t) \equiv
\left\langle \delta\vec{x}_{\eta}(t) \right\rangle$, is $$\vec{m}(t) = e^{\mathbf{J}\,t}\,\delta\vec{x}(0) + \int_0^t ds\;e^{\mathbf{J}\,( t - s )}
\,\left\langle \mathbf{H}\,\vec{\eta}(s) \right\rangle\,,
\label{eq_mean_BLE-sol}$$ and the noise average value of the quadratic deviations, $\rho^2(t) \equiv \left\langle
\left[\delta\vec{x}_{\eta}(t)\right]^2 \right\rangle$, is $$\begin{aligned}
\nonumber
\rho^2(t) = e^{\mathbf{J}\,t}\,\delta\vec{x}(0) \cdot e^{\mathbf{J}\,t}\,\delta\vec{x}(0)
+ \\
\nonumber
2e^{\mathbf{J}\,t}\,\delta\vec{x}(0) \cdot \int_0^t ds\;e^{\mathbf{J}\,( t - s )}
\,\left\langle \mathbf{H}\,\vec{\eta}(s) \right\rangle + \\
\int_0^t\!ds\!\int_0^s\!ds' \left\langle e^{\mathbf{J}( t - s)}\,\mathbf{H}\,\vec{\eta}(s)
\cdot e^{\mathbf{J}( t - s' )}\,\mathbf{H}\,\vec{\eta}(s') \right\rangle,
\label{eq_std_BLE-sol}\end{aligned}$$ where “ $\cdot$ ” is the inner product between vectors. In particular, for the Jacobian eigenvectors we have that $\vec{v}_n \cdot \vec{v}_m = \sum_{i=1}^N \left(\vec{v}_n\right)_i
\left(\vec{v}_m\right)_i^{\star} = \delta_{nm}$, “ $^\star$ ” being the complex conjugate operation.
Since the random fluctuations are additive, $\mathbf{H}$ is independent of the noise realisation, hence $\left\langle\mathbf{H}\eta\right\rangle = \mathbf{H} \left\langle
\eta\right\rangle = 0$. Consequently, the *noise average value of the deviations* evolves as $$\vec{m}(t) = \mathbf{P}\,e^{\mathbf{\Lambda}\,t}\,\mathbf{P}^{-1}\,\delta\vec{x}(0)\,,
\label{eq_mean}$$ which is the first main result of this work and constitutes the first moment of the thermal noise ensemble orbits. Equation (\[eq\_mean\]) shows that the average of the deviation variables tends to zero for $t \to \infty$ if and only if the eigenvalues correspond to a stable FP, namely, when $\mathsf{Re}\{{\lambda_i}\} < 0 \;\forall\,i$. Hence, *the stochastic system returns to the steady state (on average) after being perturbed*, as long as the noise intensities are small (i.e., $\left\|\delta\vec{x}_{\eta}(t)\right\| \ll 1$ $\forall\,t$).
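A minimal Monte-Carlo sketch of this statement, assuming the Jacobian $J = [[0, 1], [-1, -1]]$ of the sub-critical $a = b = 1$ case and illustrative values for the noise strength, step size and ensemble size, averages Euler-Maruyama realisations of the linearised SDE and compares them with the noiseless decay (for a linear SDE, the exact ensemble mean of the Euler-Maruyama scheme equals the deterministic Euler orbit):

```python
import random

random.seed(1)
J = [[0.0, 1.0], [-1.0, -1.0]]     # assumed Jacobian at the FP for a = b = 1
GAMMA = 0.1                        # thermal noise, h_ij = delta_ij * GAMMA
DT, STEPS, NREAL = 0.01, 300, 400  # illustrative integration/ensemble parameters
X0 = (0.3, -0.3)                   # initial deviation from the FP

def drift(x):
    return (J[0][0] * x[0] + J[0][1] * x[1], J[1][0] * x[0] + J[1][1] * x[1])

# Deterministic (noiseless) Euler orbit: the expected value of the EM scheme
det = list(X0)
for _ in range(STEPS):
    fu, fv = drift(det)
    det = [det[0] + DT * fu, det[1] + DT * fv]

# Ensemble of Euler-Maruyama realisations with additive (thermal) noise
sdt = DT ** 0.5
mean = [0.0, 0.0]
for _ in range(NREAL):
    x = list(X0)
    for _ in range(STEPS):
        fu, fv = drift(x)
        x = [x[0] + DT * fu + GAMMA * sdt * random.gauss(0.0, 1.0),
             x[1] + DT * fv + GAMMA * sdt * random.gauss(0.0, 1.0)]
    mean[0] += x[0] / NREAL
    mean[1] += x[1] / NREAL
```

The ensemble mean tracks the deterministic decay to the FP up to statistical fluctuations of order $\Gamma/\sqrt{N_{\rm real}}$.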
In particular, for the Brusselator, the analytical result of Eq. (\[eq\_mean\]) predicts how the noisy chemical concentrations converge to the FP when averaged over various thermal noise realisations, as demonstrated in Fig. \[fig\_BLE\_orbits\]. Both panels show that the analytical prediction (filled squares) of Eq. (\[eq\_mean\]) for $\vec{m}(t) =
\left( \left\langle \delta u_\eta(t) \right\rangle,\,\left\langle \delta v_\eta(t)
\right\rangle \right)$ is in remarkable agreement with the numerical experiments, even though the initial perturbation is large \[the initial condition for every stochastic orbit is $\left(\delta u_\eta(0),\, \delta v_\eta(0) \right) = \left(0.9,\,-0.9\right)$\].
**(a)**\
![(Color online) Panel [**(a)**]{} \[Panel [**(b)**]{}\] shows $100$ stochastic orbits of the Brusselator’s chemical concentration deviations from the fixed point $\delta u_\eta$ \[$\delta v_\eta$\] in continuous light (grey online) curves. The diffusion coefficients are constant (thermal noise scenario) and given by $h_{ij} = \delta_{ij} \Gamma$, with $\Gamma
= 10^{-1}$. The noise average orbit deviations in these panels are represented by dark (black online) continuous curves. The analytical predictions \[Eq. (\[eq\_mean\])\] for these averages are shown by filled (red online) squares. Both panels correspond to the sub-critical regime, with $a = b = 1$.[]{data-label="fig_BLE_orbits"}](additive_g=1E-1_BLEs_a=1_b=1_orb_u.pdf "fig:"){width="10pc"}
**(b)**\
![(Color online) Panel [**(a)**]{} \[Panel [**(b)**]{}\] shows $100$ stochastic orbits of the Brusselator’s chemical concentration deviations from the fixed point $\delta u_\eta$ \[$\delta v_\eta$\] in continuous light (grey online) curves. The diffusion coefficients are constant (thermal noise scenario) and given by $h_{ij} = \delta_{ij} \Gamma$, with $\Gamma
= 10^{-1}$. The noise average orbit deviations in these panels are represented by dark (black online) continuous curves. The analytical predictions \[Eq. (\[eq\_mean\])\] for these averages are shown by filled (red online) squares. Both panels correspond to the sub-critical regime, with $a = b = 1$.[]{data-label="fig_BLE_orbits"}](additive_g=1E-1_BLEs_a=1_b=1_orb_v.pdf "fig:"){width="10pc"}
On the other hand, the *noise average value of the quadratic deviations* of the SDE orbits around the steady equilibrium state evolves as $$\begin{aligned}
\nonumber
\rho^2(t) = \delta\vec{x}(0)\cdot \mathbf{P}\,e^{2\mathsf{Re}\{\mathbf{\Lambda}\}\,t}\,
\mathbf{P}^{-1} \delta\vec{x}(0) + \\
\sum_{i,j,k,n}^N h_{ik}\left(\vec{v}_n\right)_i \left[ \frac{e^{2\,\mathsf{Re}\{\lambda_n
\}\,t} - 1}{ 2\,\mathsf{Re}\{\lambda_n\} }\right]\left(\vec{v}_n\right)_j^\star h_{jk}\,,
\label{eq_std}\end{aligned}$$ where $\left(\vec{v}_n\right)_{i}$ is the $i$-th coordinate of the eigenvector associated with the $\lambda_n$ eigenvalue of the Jacobian matrix at the particular steady state solution, and all summations run from $1$ to $N$.
Equation (\[eq\_std\]) is the second main result of this work and constitutes the second moment of the ensemble of thermal noise realisations. It is derived by direct integration of Eq. (\[eq\_std\_BLE-sol\]), using the noise properties defined in Eq. (\[eq\_noise\]) and the spectral decomposition of the Jacobian matrix \[Eq. (\[eq\_spectral\])\]. It expresses how the noise and the deterministic part of the SDE produce divergence (or convergence) of the trajectories of neighbouring initial conditions over the ensemble of thermal noise realisations close to the FP solution. The first right-hand side term in Eq. (\[eq\_std\]) diverges (converges) if the FP is unstable (stable): neighbouring initial conditions are driven apart (together) at a rate given by the real part of the exponents of the system, i.e., by $2\mathsf{Re}\{\lambda_n\}$. The second term on the right-hand side of Eq. (\[eq\_std\]) accounts for the divergence (convergence) due to the stochasticity in the system.
In order to find the *variance* $\sigma^2(t)$ of the SDE, it is enough to discard the first term on the right-hand side of Eq. (\[eq\_std\]). Specifically, $$\sigma^2(t) = \rho^2(t) - \vec{m}(t)\cdot\vec{m}(t) = \left\langle \left[\delta\vec{x}_\eta(t)
\right]^2 \right\rangle - \left[ \left\langle \delta\vec{x}_\eta(t) \right\rangle \right]^2\,,$$ $$\sigma^2(t) = \sum_{i,j,k,n}^N{\left(\vec{v}_n\right)_i h_{ik}\! \left[ \frac{e^{2\,
\mathsf{Re}\{\lambda_n\}\,t} - 1}{ 2\,\mathsf{Re}\{\lambda_n\} }\right]\!
\left(\vec{v}_n\right)_j^\star h_{jk} }.
\label{eq_variance}$$
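As a direct numerical check of Eq. (\[eq\_variance\]), the sketch below evaluates the quadruple sum for an assumed *normal* $2\times2$ Jacobian, $J = [[-0.5, 1], [-1, -0.5]]$, whose eigenvectors are orthonormal (as the inner-product convention above requires), with thermal noise $h_{ij} = \delta_{ij}\Gamma$; in this particular case the sum collapses to $2\Gamma^2(1 - e^{-t})$:

```python
import math

GAMMA, T = 0.1, 2.0
# Eigen-data of the assumed normal Jacobian J = [[-0.5, 1], [-1, -0.5]]
lam = [complex(-0.5, 1.0), complex(-0.5, -1.0)]          # eigenvalues
s = 2.0 ** -0.5
v = [[s, 1j * s], [s, -1j * s]]                          # orthonormal eigenvectors
h = [[GAMMA, 0.0], [0.0, GAMMA]]                         # h_ij = delta_ij * GAMMA

# Quadruple sum of Eq. (eq_variance)
sigma2 = 0.0
for n in range(2):
    g = (math.exp(2.0 * lam[n].real * T) - 1.0) / (2.0 * lam[n].real)
    for i in range(2):
        for j in range(2):
            for k in range(2):
                sigma2 += (v[n][i] * h[i][k] * g
                           * v[n][j].conjugate() * h[j][k]).real

closed = 2.0 * GAMMA ** 2 * (1.0 - math.exp(-T))         # expected collapsed form
```

Since $J + J^T = -\mathbf{I}$ here, the same value also follows from the integral form of $\sigma^2(t)$, which provides the cross-check.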
Thermal noise diffusion relationship and the variance asymptotic behaviour {#sec_diffusion}
--------------------------------------------------------------------------
Our general diffusion relationship is derived from Eq. (\[eq\_variance\]) by considering small time-scales. In this transient window, a first-order power-series expansion yields the following *Einstein diffusion relationship*, $\sigma^2(t)
\simeq D\,t$, $$D \equiv \sum_{i,j}^N h_{ij}^2 = \mu\,k_B T\,,
\label{eq_Einstein_rel}$$ where $\mu$ is the mobility coefficient, $k_B$ is Boltzmann’s constant, and $T$ is the temperature.
It is worth mentioning that Eq. (\[eq\_Einstein\_rel\]) corresponds to the rate at which the variance of the perturbations grows in time averaged over the various random fluctuation realisations. Hence, it may not be directly relatable to the regular Brownian Motion (BM) solution under an external potential in the over-damped regime, i.e., the known Einstein diffusion relationship [@Reichl].
On the one hand, the diffusion relationship for BM concerns the particle's position variance and relies on the external force being derived from a potential. The Brusselator field vector $\vec{F}$, however, cannot be derived from a two-dimensional potential (which would require $\nabla\times\vec{F} = 0$), since gradient systems admit no periodic solutions such as the limit-cycle state. On the other hand, the variance relationship for BM, which underlies Einstein's diffusion relationship, describes how much the particle diffuses, as if performing random walks, as a function of the temperature, i.e., the parameter that regulates the “strength” of the fluctuations. For the Brusselator, the relationship given by Eq. (\[eq\_Einstein\_rel\]) predicts a similar behaviour for the variance of the perturbations around the out-of-equilibrium state when subject to additive noise, but no particle movement is involved: the “movement” corresponds to the chemical concentration fluctuations. Consequently, the general diffusion relationship we find is only a qualitative analogue of the Einstein diffusion relationship, sharing its mathematical formulation with the one for BM.
For the Brusselator, in the case where the thermal noise only affects each chemical concentration independently, namely, $h_{ij} = \delta_{ij}\Gamma$, then $\sum_{i,j}^2
h_{ij}^2 = 2\Gamma^2$, and Eq. (\[eq\_Einstein\_rel\]) results in $$D = 2\Gamma^2 = \mu\,k_B T\,,
\label{eq_Einstein_rel_ex}$$ where we can continue the analogy with the BM and say that the $2$ appearing in the relationship corresponds to the $2$ degrees of freedom of the Brusselator.
For a stable FP, the variance \[Eq. (\[eq\_variance\])\] saturates at a finite value, meaning that initially neighbouring orbits remain, on average, within a fixed distance of the FP. In other words, for a stable FP, the variance of the thermal noise SDE converges to $$\begin{aligned}
\lim_{t\to\infty} \sigma^2(t) =
-\sum_{i,j,k,n}^N \left(\vec{v}_n\right)_i h_{ik} \frac{1}{ 2\,\mathsf{Re}\{\lambda_n\} }
\left(\vec{v}_n\right)_j^\star h_{jk}\,,
\label{eq_std_asympt}\end{aligned}$$ with $\mathsf{Re}\{\lambda_n\} < 0\;\forall\,n$. This constitutes the final analytical result of this work.
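Both regimes, the transient Einstein-like growth $\sigma^2(t) \simeq D\,t$ and the saturation of Eq. (\[eq\_std\_asympt\]), can be verified by Monte-Carlo in the one-dimensional reduction of the problem, an Ornstein-Uhlenbeck process $dx = \lambda x\,dt + \Gamma\,dW$, for which $\sigma^2(t) = \Gamma^2(e^{2\lambda t} - 1)/(2\lambda)$; all numerical values below are illustrative:

```python
import random

random.seed(2)
LAM, GAMMA = -1.0, 0.5            # stable 1-D "Jacobian" and noise strength
DT, NREAL = 0.005, 2000           # illustrative step and ensemble sizes
T_SHORT, T_LONG = 0.05, 4.0       # transient window and quasi-asymptotic time

def variance_at(t):
    """Ensemble variance of Euler-Maruyama realisations of the OU process."""
    steps = int(round(t / DT))
    sdt = DT ** 0.5
    xs = []
    for _ in range(NREAL):
        x = 0.0
        for _ in range(steps):
            x += LAM * x * DT + GAMMA * sdt * random.gauss(0.0, 1.0)
        xs.append(x)
    m = sum(xs) / NREAL
    return sum((x - m) ** 2 for x in xs) / NREAL

v_short = variance_at(T_SHORT)    # grows like D*t with D = GAMMA**2 (N = 1)
v_long = variance_at(T_LONG)      # saturates at -GAMMA**2 / (2*LAM)
```

Within statistical error, the short-time variance matches $D\,t$ and the long-time variance matches the asymptote $-\Gamma^2/(2\lambda)$.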
**(a)**\
![(Color online) The panels show the sub-critical ($a = b = 1$) noise average orbits variance, namely, $\sigma^2 = \left\langle [\vec{x}_\eta(t)]^2 \right\rangle -
[\left\langle \vec{x}_\eta(t) \right\rangle]^2$, by a continuous dark (black online) curve for two noise strength values: $\Gamma = 10^{-2}$ \[panel [**(a)**]{}\] and $\Gamma
= 10^{-1}$ \[panel [**(b)**]{}\], with constant diffusion coefficients (thermal noise) given by $h_{ij} = \delta_{ij}\,\Gamma$. The vertical \[horizontal\] dashed (red online \[blue online\]) curves represent the transient \[asymptotic\] analytical prediction of Eq. (\[eq\_Einstein\_rel\]) \[Eq. (\[eq\_std\_asympt\])\] for the behaviour of $\sigma^2$.[]{data-label="fig_BLE_orbits_std"}](additive_g=1E-2_BLEs_a=1_b=1_std.pdf "fig:"){width="10pc"}
**(b)**\
![(Color online) The panels show the sub-critical ($a = b = 1$) noise average orbits variance, namely, $\sigma^2 = \left\langle [\vec{x}_\eta(t)]^2 \right\rangle -
[\left\langle \vec{x}_\eta(t) \right\rangle]^2$, by a continuous dark (black online) curve for two noise strength values: $\Gamma = 10^{-2}$ \[panel [**(a)**]{}\] and $\Gamma
= 10^{-1}$ \[panel [**(b)**]{}\], with constant diffusion coefficients (thermal noise) given by $h_{ij} = \delta_{ij}\,\Gamma$. The vertical \[horizontal\] dashed (red online \[blue online\]) curves represent the transient \[asymptotic\] analytical prediction of Eq. (\[eq\_Einstein\_rel\]) \[Eq. (\[eq\_std\_asympt\])\] for the behaviour of $\sigma^2$.[]{data-label="fig_BLE_orbits_std"}](additive_g=1E-1_BLEs_a=1_b=1_std.pdf "fig:"){width="10pc"}
For the Brusselator, the asymptotic behaviour of the variance implies that for sub-critical values of the parameter ($b < b_c$), any small deviation from the FP in the presence of moderate additive noise preserves, on average, the same asymptotic state. This is also valid for the limit-cycle situation when parameters are above the critical point ($b > b_c$) if, in the former equations, we use the Floquet exponents and corresponding time-dependent eigenvectors [@Guckenheimer].
These results \[Eqs. (\[eq\_Einstein\_rel\]) and (\[eq\_std\_asympt\])\] are shown in Fig. \[fig\_BLE\_orbits\_std\] for the particular case of Eq. (\[eq\_Einstein\_rel\_ex\]) with $\Gamma = 10^{-2}$ \[Fig. \[fig\_BLE\_orbits\_std\][**(a)**]{}\] and $\Gamma = 10^{-1}$ \[Fig. \[fig\_BLE\_orbits\_std\][**(b)**]{}\]. Both analytical predictions show good agreement with the numerical experiments. Moreover, as $\Gamma$ is increased (even up to values close to $\Gamma\sim10^0$), the asymptotic *diffusion*, $\sigma^2/\Gamma^2$, remains constant and equal to the number of degrees of freedom of the system, matching the transient growth rate $D/\Gamma^2 = 2$.
Stochastic Brusselator’s Hopf bifurcation {#sec_Hopf}
-----------------------------------------
In order to analyse how the Hopf bifurcation that the Brusselator exhibits in the non-stochastic scenario is modified by the presence of additive or multiplicative noise, we compute the *orbits quadratic difference*, $\Delta^2$, $$\Delta^2 = \frac{1}{T}\sum_{t = t^\star}^T \left[ \vec{x}(t) - \left\langle\vec{x}_\eta(t)
\right\rangle \right]^2\,,
\label{eq_quad_diff}$$ where $\vec{x}(t) = \left( u(t),\,v(t) \right)$ is the deterministic orbit and $\left\langle
\vec{x}_\eta(t)\right\rangle$ is the noise average orbit for the stochastic scenario. This measure quantifies the distance between the deterministic and the noise average orbit for each control parameter.
For our numerical experiments, we generate the deterministic orbit and each realisation of the stochastic orbits from identical initial conditions. Then, a transient of $t^\star =
10^3$ iterations is removed from the orbits to compute the orbit quadratic difference of Eq. (\[eq\_quad\_diff\]).
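A minimal version of this numerical experiment can be sketched as follows, assuming the standard Brusselator rate equations $\dot{u} = a - (b + 1)u + u^2 v$, $\dot{v} = bu - u^2 v$ in the sub-critical regime ($a = b = 1$) and illustrative step sizes; the quadratic difference is averaged over the window retained after the transient:

```python
import random

random.seed(3)
A, B, GAMMA = 1.0, 1.0, 1e-2            # sub-critical regime, mild thermal noise
DT, STEPS, TSTAR, NREAL = 0.01, 2000, 1000, 100
U0, V0 = A + 0.5, B / A - 0.5           # common initial condition for all orbits

def f(u, v):
    """Standard Brusselator vector field (assumed form)."""
    return A - (B + 1.0) * u + u * u * v, B * u - u * u * v

# Deterministic orbit (plain Euler)
det, (u, v) = [], (U0, V0)
for _ in range(STEPS):
    fu, fv = f(u, v)
    u, v = u + DT * fu, v + DT * fv
    det.append((u, v))

# Noise-averaged orbit over NREAL Euler-Maruyama realisations (additive noise)
avg = [[0.0, 0.0] for _ in range(STEPS)]
sdt = DT ** 0.5
for _ in range(NREAL):
    u, v = U0, V0
    for t in range(STEPS):
        fu, fv = f(u, v)
        u += DT * fu + GAMMA * sdt * random.gauss(0.0, 1.0)
        v += DT * fv + GAMMA * sdt * random.gauss(0.0, 1.0)
        avg[t][0] += u / NREAL
        avg[t][1] += v / NREAL

# Orbits quadratic difference, Eq. (eq_quad_diff), after discarding the transient
delta2 = sum((det[t][0] - avg[t][0]) ** 2 + (det[t][1] - avg[t][1]) ** 2
             for t in range(TSTAR, STEPS)) / (STEPS - TSTAR)
```

In the sub-critical regime both orbits relax to the fixed point $(a,\,b/a)$, so $\Delta^2$ stays small; increasing $\Gamma$ raises it.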
**(a)**\
![(Color online) The left (right) panel shows the Hopf bifurcation that the stochastic Brusselator averaged orbits exhibit for thermal (multiplicative) noise with constant (linear) diffusion coefficients, $h_{ij}$, as a function of the control parameter $b$ for constant $a = 1$. Specifically, the diffusion coefficients are given by $h_{ij} =
\delta_{ij} \Gamma$ ($h_{ij} = \delta_{ij} \,x_j\,\Gamma$, with $x_1 = u_\eta$ and $x_2 =
v_\eta$) with $\Gamma = 10^{-2}$. The light (red online) curves correspond to the deterministic orbits and the dark (black online) curves correspond to the noise averaged orbit in each stochastic scenario \[additive noise in panel [**(a)**]{} and multiplicative noise in panel [**(b)**]{}\] for $100$ noise realisations. For this $\Gamma$, the deterministic and stochastic orbits are very similar, especially for the additive noise scenario.[]{data-label="fig_SBLE_bifurcation"}](bifurcation_diag_a=1_additive_G=1E-2.pdf "fig:"){width="10pc"}
**(b)**\
![(Color online) The left (right) panel shows the Hopf bifurcation that the stochastic Brusselator averaged orbits exhibit for thermal (multiplicative) noise with constant (linear) diffusion coefficients, $h_{ij}$, as a function of the control parameter $b$ for constant $a = 1$. Specifically, the diffusion coefficients are given by $h_{ij} =
\delta_{ij} \Gamma$ ($h_{ij} = \delta_{ij} \,x_j\,\Gamma$, with $x_1 = u_\eta$ and $x_2 =
v_\eta$) with $\Gamma = 10^{-2}$. The light (red online) curves correspond to the deterministic orbits and the dark (black online) curves correspond to the noise averaged orbit in each stochastic scenario \[additive noise in panel [**(a)**]{} and multiplicative noise in panel [**(b)**]{}\] for $100$ noise realisations. For this $\Gamma$, the deterministic and stochastic orbits are very similar, especially for the additive noise scenario.[]{data-label="fig_SBLE_bifurcation"}](bifurcation_diag_a=1_L_multiplica_G=1E-2.pdf "fig:"){width="10pc"}
As seen in Fig. \[fig\_SBLE\_bifurcation\], the Hopf bifurcation of the Brusselator is preserved in the additive \[Fig. \[fig\_SBLE\_bifurcation\][**(a)**]{}\] and multiplicative \[Fig. \[fig\_SBLE\_bifurcation\][**(b)**]{}\] cases for mild noise intensities ($\Gamma = 10^{-2}$). In general, *we observe that the effect of increasing the noise strength is to reduce the amplitude of the limit-cycle oscillations in the super-critical regime* ($b > b_c$), hence gradually destroying the Hopf bifurcation. Moreover, the multiplicative noise realisations generate an even greater decrease in amplitude. Nevertheless, as seen in Fig. \[fig\_SBLE\_bif\_distan\], both stochastic scenarios maintain the bifurcation type up to noise strengths of $10^{-1}$, where the bifurcation is finally lost.
Besides determining the critical noise strength at which the Hopf bifurcation is lost, Fig. \[fig\_SBLE\_bif\_distan\] also reveals a somewhat universal behaviour of the stochastic system with respect to the deterministic case. In the sub-critical regime, and for parameter values far from the critical point, the orbits quadratic difference scales linearly with the noise intensity ($\Delta^2/\Gamma^2 \sim 10^{-1}$). On the other hand, in the super-critical regime, the orbits quadratic difference collapses onto a common curve as a function of the control parameter. To the best of our knowledge, such behaviour has not been reported in previous works.
**(a)**\
![(Color online) The orbits quadratic difference, i.e., the time average quadratic difference between the deterministic and average stochastic orbit ($\Delta^2 = \frac{1}{T}
\sum_{t} \left[ \vec{x}(t) - \left\langle\vec{x}_\eta(t) \right\rangle \right]^2$), for the additive (left panel) and multiplicative (right panel) noisy Brusselator as the control parameter, $b$, is increased for various noise strengths, $\Gamma$. The diffusion coefficients are given by $h_{ij} = \delta_{ij}\Gamma$ for panel [**(a)**]{} \[$h_{ij} =
\delta_{ij} \,x_j\,\Gamma$, with $x_1 = u_\eta$ and $x_2 = v_\eta$, for panel [**(b)**]{}\]. The curves correspond to noise intensities of $\Gamma = 10^{-1}$ (filled –black online– squares), $10^{-2}$ (filled –blue online– circles), $10^{-3}$ (filled –red online– triangles), and $10^{-4}$ (filled –green online– diamonds).[]{data-label="fig_SBLE_bif_distan"}](bif_orbit_distance_a=1_additive.pdf "fig:"){width="10pc"}
**(b)**\
![(Color online) The orbits quadratic difference, i.e., the time average quadratic difference between the deterministic and average stochastic orbit ($\Delta^2 = \frac{1}{T}
\sum_{t} \left[ \vec{x}(t) - \left\langle\vec{x}_\eta(t) \right\rangle \right]^2$), for the additive (left panel) and multiplicative (right panel) noisy Brusselator as the control parameter, $b$, is increased for various noise strengths, $\Gamma$. The diffusion coefficients are given by $h_{ij} = \delta_{ij}\Gamma$ for panel [**(a)**]{} \[$h_{ij} =
\delta_{ij} \,x_j\,\Gamma$, with $x_1 = u_\eta$ and $x_2 = v_\eta$, for panel [**(b)**]{}\]. The curves correspond to noise intensities of $\Gamma = 10^{-1}$ (filled –black online– squares), $10^{-2}$ (filled –blue online– circles), $10^{-3}$ (filled –red online– triangles), and $10^{-4}$ (filled –green online– diamonds).[]{data-label="fig_SBLE_bif_distan"}](bif_orbit_distance_a=1_L_multiplica.pdf "fig:"){width="10pc"}
Discussion {#sec_conclusions}
==========
In this work we study the Brusselator's dynamical behaviour in the absence and presence of random fluctuations and derive general expressions for generic stochastic systems.
In the non-stochastic dynamics, all main physical properties of the Brusselator, such as the spectral values for the equilibrium states, are found and discussed. The inclusion of thermal and multiplicative noise to the system is first analysed analytically via generic Stochastic Differential Equations. For the thermal noise scenario, expressions for the noise average deviation \[Eq. (\[eq\_mean\])\], noise average quadratic deviations \[Eq. (\[eq\_std\])\], variance rate growth \[Eq. (\[eq\_Einstein\_rel\])\], and variance asymptotic behaviour \[Eq. (\[eq\_std\_asympt\])\] are derived.
From our numerical experiments, we conclude that the transition the Brusselator exhibits from one parameter region, where the chemical concentrations are in a time-independent equilibrium state, to another, where they oscillate in time, is maintained for moderate values of the noise strength ($< 0.1$) in both stochastic scenarios (additive or multiplicative noise). The character of this transition, which is Hopf-like for the deterministic evolution, is still observed in the noise-averaged evolutions of the chemical concentrations in our numerical experiments. Moreover, the analytical expressions for the noise-averaged orbit \[Eq. (\[eq\_mean\])\] and the variance \[Eq. (\[eq\_std\])\] in the case of additive noise, which we derive in a general framework, support these findings.
The expression for the rate at which the variance grows initially \[Eq. (\[eq\_Einstein\_rel\])\] is discussed in terms of the Brusselator's diffusion process and related to the regular random walk; hence, it is regarded as an Einstein diffusion relationship for the Brusselator. This analogy is further explored through the derivation of a general expression for the asymptotic value of the variance of the noisy orbits around the fixed-point state \[Eq. (\[eq\_std\_asympt\])\]. Consequently, the stochastic character included in the deterministic Brusselator evolution of the chemical concentrations accounts for the molecular random fluctuations that the real chemical species involved in the reaction exhibit.
Acknowledgements {#acknowledgements .unnumbered}
================
The author acknowledges the support of the Scottish University Physics Alliance (SUPA). The author is indebted to Murilo S. Baptista and Davide Marenduzzo for illuminating discussions and helpful comments.
[10]{}
S. Strogatz, [*Sync: The emerging science of spontaneous order*]{} (Hyperion, New York, 2003).
J. D. Murray, [*Mathematical Biology*]{} (Springer, Berlin, 2002).
G. Nicolis and I. Prigogine, [*Self-Organization in Non-Equilibrium Systems*]{} (Wiley, New York, 1977).
D. Broomhead, G. McCreadie, and G. Rowlands, Phys. Lett. [**84**]{}A, 229-231 (1981).
V.V. Osipov and E.V. Ponizovskaya, Phys. Rev. E [**61**]{}(4), 4603 (2000).
K.I. Agladze, V.I. Krinsky, and A.M. Pertsov, Nature [**308**]{}, 834-835 (1984).
M. Karplus and G.A. Petsko, Nature [**347**]{}, 631-639 (1990).
E. Ziv, I. Nemenman, and H.Ch. Wiggins, PLoS ONE [**2**]{}(10), e1077 (2007).
R.C. Hilborn, B. Brookshire, J. Mattingly, A. Purushotham, and A. Sharma, PLoS ONE [**7**]{}(4), e34536 (2012).
L. Ciandrini, I. Stansfield, and M.C. Romano, PLoS Comput Biol [**9**]{}(1), e1002866 (2013).
L.E. Reichl, [*A Modern Course in Statistical Physics*]{} (John Wiley & Sons, 2nd Ed., Canada, 1998).
J. Guckenheimer and P. Holmes, [*Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields*]{} (Springer-Verlag, New York, 1983).
A. Traulsen, J.Ch. Claussen, and Ch. Hauert, Phys. Rev. Lett. [**95**]{}, 238701 (2005).
J.H. Schleimer and M. Stemmler, Phys. Rev. Lett. [**103**]{}, 248105 (2009).
A. Melbinger, J. Cremer, and E. Frey, Phys. Rev. Lett. [**105**]{}, 178101 (2010).
G. Amselem, M. Theves, A. Bae, E. Bodenschatz, and C. Beta, PLoS ONE [**7**]{}(5), e37213 (2012).
F. Duan, F. Chapeau-Blondeau, and D. Abbott, PLoS ONE [**9**]{}(3), e91345 (2014).
T. Biancalani, D. Fanelli, and F. Patti, Phys. Rev. E [**81**]{}, 046215 (2010).
T. Biancalani, T. Galla, and A.J. McKane, Phys. Rev. E [**84**]{}, 026201 (2011).
T. Butler and N. Goldenfeld, Phys. Rev. E [**84**]{}, 011112 (2011).
W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery, [*Numerical Recipes in Fortran 77: The Art of Scientific Computing*]{} (Cambridge University Press, 2nd Ed., 1992).
D.E. Knuth, [*The Art of Computer Programming: Seminumerical Algorithms*]{} (Addison-Wesley, Reading, 1969).
K. Burrage, I. Lenane, and G. Lythe, SIAM J. Sci. Comput. [**29**]{}, 245-264 (2007).
---
abstract: 'Replacing independent single quantum wells inside a strongly-coupled semiconductor microcavity with double quantum wells produces a special type of polariton. Using asymmetric double quantum wells in devices processed into mesas allows the alignment of the electron levels to be voltage-tuned. At the resonant electronic tunnelling condition, we demonstrate that ‘oriented polaritons’ are formed, which possess greatly enhanced dipole moments. Since the polariton-polariton scattering rate depends on this dipole moment, such devices could reach polariton lasing, condensation and optical nonlinearities at much lower threshold powers.'
author:
- Gabriel Christmann
- Alexis Askitopoulos
- George Deligeorgis
- Zacharias Hatzopoulos
- 'Simeon I. Tsintzos'
- 'Pavlos G. Savvidis'
- 'Jeremy J. Baumberg'
title: 'Oriented polaritons in strongly-coupled asymmetric double quantum well microcavities'
---
[accepted in APL (2011)]{}
Semiconductor microcavities (MCs) in the strong coupling regime (SCR) have shown great potential for the realization of optoelectronic devices. In this context, Imamoğlu and coworkers proposed in 1996 the concept of the polariton laser, in which an MC in the SCR can emit coherent directional light, just like a vertical cavity surface emitting laser (VCSEL), but below the inversion threshold.[@Imamoglu96] Working devices were subsequently realized with several semiconductor systems: II-tellurides[@Dang98] and III-arsenides[@Bajoni08] at cryogenic temperatures, and even at room temperature with III-nitrides.[@Christopoulos07; @Christmann08] Fabrication of fully functional devices is becoming even more realistic since electrically-injected structures exhibiting polariton light emission have been realized.[@Tsintzos08; @Bajoni08led; @Khalifa08] The enormous polariton parametric amplification enables controllable nonlinear optical elements,[@Savvidis00; @Christmann10] while, more recently, Bose-Einstein condensation,[@Kasprzak06; @Balili07] which is currently being pushed towards room temperature with III-nitrides,[@Baumberg08; @Levrat10] opens perspectives for the realization of coherent superfluid devices.[@Lagoudakis10]
Nevertheless, the threshold of these devices is currently limited by the rate of polariton-polariton scattering. This is illustrated by the large number of quantum wells (QWs) generally inserted in the MCs for polariton lasing/condensation,[@Dang98; @Bajoni08; @Christmann08] compared to equivalent VCSEL structures.[@Jewell91; @Kao08] This design is needed to allow high enough polariton densities for polariton-polariton scattering to become faster than polariton decay. Reducing the decay rate by using high-$Q$ cavities has led to polariton localisation in the photonic disorder potential. The desire to reduce the minimum threshold for polariton lasing/condensation has thus led to various proposals to enhance polariton relaxation. One suggestion has been to use scattering by free electrons,[@Malpuech02; @Lagoudakis03] however to date this has not been effective.
In this letter we demonstrate an innovative MC design in which the active region is composed of asymmetric double quantum wells (ADQWs). By applying an electric field to the structure it is possible to bring the electron levels of neighboring quantum wells into resonance. In this resonant condition the spatially direct and indirect excitons become coupled so they share the strong oscillator strength of the direct constituent and the strong dipole moment in the growth direction of the indirect one.[@Ciuti98] This configuration should favour polariton-polariton interactions and is therefore very promising for threshold reduction of nonlinear effects in strong-coupling MCs. In addition, the ability to tune the polaritons by applied voltage offers a simple way to make polariton waveguides and devices. This concept thus combines ideas of controlling indirect excitons in bare QWs[@Butovpapers] with traditional strong coupling microcavities.
The sample used to demonstrate [*oriented polaritons*]{} is a strongly-coupled MC made of a $5\lambda/2$ undoped cavity containing four sets of In$_{0.1}$Ga$_{0.9}$As/GaAs/In$_{0.08}$Ga$_{0.92}$As (10 nm/4 nm/10 nm) ADQWs, placed at the antinodes of the electric field \[Fig. 1(a)\]. The cavity is formed from top (17-pair, $p$-doped) and bottom (21-pair $n$-doped) GaAs/AlAs distributed Bragg reflectors (DBRs), thus forming a $p-i-n$ junction. A Rabi splitting of $\Omega_{VRS}\sim 5.6$ meV is measured at $10$ K in this structure. Polariton LEDs are processed into 400 $\mu$m diameter mesas with a ring-shaped Ti/Pt electrode deposited after a second etch step to contact the lower $p$-layers, improving the series resistance (details in [@Tsintzos08; @Tsintzos09]).
In the ADQW, the lower energy excitons in the left quantum well (LQW, 10% In) couple to the cavity mode while the higher energy exciton in the right quantum well (RQW, 8% In) is blue shifted well out of resonance. Applying reverse bias decreases the electron levels of the RQW, producing a tunnelling resonance when it matches the LQW $n$=1 electron energy. At resonance, the electron wavefunctions will be spread between both wells \[Fig. 1(b)\] whereas the LQW hole remains strongly confined, thus creating a significant dipole moment.
![(color online) (a) ADQW polariton mesa structure. (b) Conduction bands of one ADQW period (black line at resonance, dashed at flat-band), and corresponding electron wavefunctions \[red (dark gray) lines\]. (c) Calculated transition energies for the ADQW structure \[red (dark gray) lines\], and for corresponding single QWs (black lines).](Figure1.pdf){width="\columnwidth"}
To understand the polaritons we compare energies of the direct (DX) and indirect (IX) excitons of the LQW coupled to the cavity mode. The indirect transition excites electrons from the LQW heavy-hole level to the RQW electron level. Solving the Schrödinger equation using complex Airy functions[@Ahn86] and suitable material parameters[@Vurgaftman01] yields complex eigenvalues where the real part is the quasiconfined energy level and the imaginary part is related to the carrier escape time. Compared to weak field tuning[@Skolnickpaperaround1998] of single QWs used in previous microcavities (black lines), a new level structure is revealed \[Fig. 1(c)\]. The tunnelling resonance at $12.5$ kV/cm between direct and indirect excitons leads to an anticrossing with tunnelling-induced splitting, $\hbar J$=5 meV. We concentrate here on the lowest states corresponding to the lower polariton branch only, although we also observe strong coupling with RQW excitons. The lifetime of the levels (not shown) is also obtained from the model. When these times become smaller than the Rabi period $T_{VRS}=2\pi/\Omega_{VRS}\sim 740$ fs, the strong coupling is lost. From our simulations this only occurs for $F>38$ kV/cm, hence the tunnelling resonance does not destroy the strong coupling.
![(color online) (a) Calculated polariton dispersion curves *vs* electric field \[red (dark gray) lines\]. Line thickness is proportional to the photon fraction. Black lines are uncoupled modes (cavity mode $C$ and two excitons $DX$ and $IX$). Inset: dipole orientation for normal MC and for oriented polarions (OPs). (b) Surface plot of the polariton modes *vs* electric field and incident angle. Surface color gives photon fraction. Black lines are guide to the eye. (c) Calculated dipole moment corresponding to $DX$ and $IX$, for single electron-hole pair.](Figure2.pdf){width="\columnwidth"}
From the corresponding wavefunctions produced by the model, the overlaps between electron and hole wavefunctions, which are proportional to the oscillator strength, are calculated for each level as a function of the applied field (not shown).[@noteWF] These coupling strengths are inserted in a standard $3\times 3$ Hamiltonian model to describe the SCR between the two excitons (direct and indirect) of the LQW and the cavity mode. The three resulting dispersion curves, lower, middle and upper polariton branches (LPB, MPB and UPB), are shown in Fig. 2(a), where the thickness of the lines is proportional to the photon fraction (and hence the out-coupling to free space). The effect of the tunnelling resonance is clearly observed on the LPB as an extra anticrossing. Surprisingly, by controlling the cavity detuning, the electric field of this anticrossing shifts relative to the bare electron tunnelling resonance. The important consequence of this observation is that the field at which this oriented polariton anticrossing occurs will be angle dependent \[Fig. 2(b), dashed line\]. This point opens interesting potential for the study of angle-resonant parametric scattering,[@Savvidis00] as pump, signal and idler will meet this anticrossing at different electric fields.
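A sketch of such a $3\times3$ coupled-mode diagonalisation is given below. The closed-form eigensolver for real symmetric $3\times3$ matrices is standard; the energies and cavity couplings are hypothetical placeholders, with only the tunnelling splitting $\hbar J \simeq 5$ meV taken from the text:

```python
import math

def eigvals_sym3(A):
    """Eigenvalues of a real symmetric 3x3 matrix (trigonometric closed form)."""
    p1 = A[0][1] ** 2 + A[0][2] ** 2 + A[1][2] ** 2
    q = (A[0][0] + A[1][1] + A[2][2]) / 3.0
    if p1 == 0.0:                                    # already diagonal
        return sorted([A[0][0], A[1][1], A[2][2]])
    p2 = sum((A[i][i] - q) ** 2 for i in range(3)) + 2.0 * p1
    p = math.sqrt(p2 / 6.0)
    B = [[(A[i][j] - (q if i == j else 0.0)) / p for j in range(3)]
         for i in range(3)]
    detB = (B[0][0] * (B[1][1] * B[2][2] - B[1][2] * B[2][1])
            - B[0][1] * (B[1][0] * B[2][2] - B[1][2] * B[2][0])
            + B[0][2] * (B[1][0] * B[2][1] - B[1][1] * B[2][0]))
    r = max(-1.0, min(1.0, detB / 2.0))
    phi = math.acos(r) / 3.0
    e1 = q + 2.0 * p * math.cos(phi)                 # largest eigenvalue
    e3 = q + 2.0 * p * math.cos(phi + 2.0 * math.pi / 3.0)  # smallest
    return sorted([e3, 3.0 * q - e1 - e3, e1])

# Hypothetical 3x3 coupled-mode Hamiltonian (meV), basis {C, DX, IX}:
# cavity-exciton couplings g_DX, g_IX and the DX-IX tunnelling coupling hJ2.
E_C, E_DX, E_IX = 0.0, 0.0, 0.0      # all modes at the double resonance
g_DX, g_IX, hJ2 = 2.8, 0.5, 2.5      # hJ2 = hbar*J/2, hbar*J ~ 5 meV from the text
H = [[E_C, g_DX, g_IX],
     [g_DX, E_DX, hJ2],
     [g_IX, hJ2, E_IX]]
lpb, mpb, upb = eigvals_sym3(H)      # lower/middle/upper polariton branches
```

Setting $g_{IX} = \hbar J/2 = 0$ decouples the indirect exciton and recovers the usual two-mode polariton splitting $\pm g_{DX}$ about the bare energies.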
Of key significance is the net polariton dipole moment at the resonance \[Fig. 2(c)\] induced by the electron spreading between QWs. Since polariton scattering depends on the exciton dipole-dipole coupling, this will now be greatly amplified compared to normal MCs \[Fig. 2(a), inset\]. The extent of the alignment depends on temperature, disappearing only for $T > \hbar J/k_B \simeq 60$ K here. Since the tunnelling rate is controlled exponentially by the barrier thickness, higher temperature operation is possible. The oriented polaritons, $$\left| OP \right\rangle_{\pm} = a \left\{ \left| DX \right\rangle \pm \left| IX \right\rangle \right\} + c \left| C \right\rangle \nonumber$$ both have the same dipole moment, although the upper level has less penetration into the barrier \[Fig. 1(b)\].
Evidence for oriented polaritons is indeed observed in these devices. Broadband 150 fs pulses of a Ti:S laser are used to record 40 K reflectivity spectra while changing the bias from $1.5$ V to $-2$ V \[Fig. 3(a)\]. The lower polariton (LP) mode is tracked as its reflectivity dip redshifts, revealing a clear anticrossing. At large negative bias the features become weak, and beyond this a non-dispersing broad reflectivity dip is seen at a slightly higher energy than the original LP mode. To compare these data to simulations, the applied bias is converted into internal electric field, $F=\left(V_g-V_b\right)/L_{i}$, with built-in potential $V_g=1.52$ V and undoped $i$-region thickness $L_i=670$ nm. In practice the effective field shifts due to series resistances of the DBRs and to compensating electric fields created by the non-zero excitonic dipole moment under illumination.[@Bajoni08bistable; @Christmann10] We thus perform measurements at low incident power, with internal power density inside the cavity below 100 mW/cm$^2$, where such effects are expected to be reasonably small. However, for measurements at higher power (as in the study of polariton nonlinearities) these issues must be taken into account.
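The bias-to-field conversion quoted above is a one-liner; a minimal sketch using the stated $V_g=1.52$ V and $L_i=670$ nm:

```python
# Sketch of the bias-to-field conversion from the text:
# F = (V_g - V_b) / L_i, with V_g = 1.52 V and L_i = 670 nm.
def internal_field_kV_per_cm(V_bias, V_g=1.52, L_i_nm=670.0):
    """Internal electric field in kV/cm for an applied bias V_bias (V)."""
    field_V_per_cm = (V_g - V_bias) / (L_i_nm * 1e-7)  # 1 nm = 1e-7 cm
    return field_V_per_cm * 1e-3

# Reverse bias of -2 V gives (1.52 + 2) V over 670 nm, i.e. ~52.5 kV/cm.
```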
{width="\columnwidth"}
The extracted dispersion curves *vs* electric field \[Fig. 3(b)\] show the oriented polariton anticrossing occurring at $14$ kV/cm, in good agreement with the simulations \[Fig. 2(a)\], which predict $17$ kV/cm for this detuning. The simulations also explain why the anticrossing appears as a rapid jump: the photon fraction is very quickly transferred from one branch to the other, making the indirect exciton-like polariton branch very weak. At large electric fields, above 37 kV/cm, the features no longer track the predicted polariton dispersion. However, the simulation shows that at $38$ kV/cm the lifetime of the electron level becomes equal to the Rabi period, which means that strong coupling is lost. The non-dispersing mode is therefore the bare cavity mode. It is much broader than at low fields because the QW absorption from the electron-hole continuum has also Stark red-shifted to the same energy at high bias.
The bias-dependent photocurrent corresponding to this tunnelling resonance \[Fig. 3(c)\] shows a strong peak at 14 kV/cm and a plateau at higher bias. All photocurrent is created by absorption into the polariton modes, as seen by the negligible dark current. Polaritons tunnel out from the RQW with a probability which increases with bias. This probability is increased when direct and indirect polaritons in the LQW first tunnel into the RQW at the oriented polariton resonance. Note that both symmetric and asymmetric tunnelling levels $\left| OP \right\rangle_{\pm}$ are excited by the spectrally broad laser pulses, and the tunnelling rate thus shows a single peak. At higher bias, tunnelling becomes faster than the Rabi period and the photocurrent saturates as all carriers tunnel out before radiative recombination.
The energy splitting of the oriented polaritons produced by the tunnelling allows lateral polariton confinement. Thus gated structures with indirect excitons allow trapping of polaritons in electrostatic traps,[@Hammack06] to form waveguides and optical transistors.
In conclusion, we have fabricated and studied an ADQW MC in the strong coupling regime, observing both symmetric- and asymmetric-oriented polaritons. The electron QW level coupling is clearly shown experimentally within the LPB and is in good agreement with simulations. This effect can be efficiently controlled by reverse biasing the sample, which tunes the levels into tunnelling resonance. We also show that in such structures a strong dipole moment is created at the oriented polariton resonance, which can enhance polariton-polariton scattering, making these structures very promising for studies of nonlinearities. Finally, from a wider perspective, such tunnelling microcavities enable many types of opto-electronic devices.
The authors acknowledge assistance from N. T. Pelekanos and support from UK EPSRC EP/C511786/1, EP/F011393 and EU CLERMONT4.
[bib]{}
A. Imamoğlu, R. J. Ram, S. Pau, and Y. Yamamoto, Phys. Rev. A [**53**]{}, 4250 (1996).
Le Si Dang, D. Heger, R. André, F. Bœuf, and R. Romestain, Phys. Rev. Lett. [**81**]{}, 3920 (1998).
D. Bajoni, P. Senellart, E. Wertz, I. Sagnes, A. Miard, A. Lemaître, and J. Bloch, Phys. Rev. Lett. [**100**]{}, 047401 (2008).
S. Christopoulos, G. Baldassarri Höger von Högersthal, A. J. D. Grundy, P. G. Lagoudakis, A. V. Kavokin, J. J. Baumberg, G. Christmann, R. Butté, E. Feltin, J.-F. Carlin, and N. Grandjean, Phys. Rev. Lett. [**98**]{}, 126405 (2007).
G. Christmann, R. Butté, E. Feltin, J.-F. Carlin, and N. Grandjean, Appl. Phys. Lett. [**93**]{}, 051102 (2008).
S. I. Tsintzos, N. T. Pelekanos, G. Konstantinidis, Z. Hatzopoulos, and P. G. Savvidis, Nature [**453**]{}, 372 (2008).
D. Bajoni, E. Semenova, A. Lemaître, S. Bouchoule, E. Wertz, P. Senellart, and J. Bloch, Phys. Rev. B [**77**]{}, 113303 (2008).
A. A. Khalifa, A. P. D. Love, D. N. Krizhanovskii, M. S. Skolnick, and J. S. Roberts, Appl. Phys. Lett. [**92**]{}, 061107 (2008).
P. G. Savvidis, J. J. Baumberg, R. M. Stevenson, M. S. Skolnick, D. M. Whittaker, and J. S. Roberts, Phys. Rev. Lett. [**84**]{}, 1547 (2000).
G. Christmann, C. Coulson, J. J. Baumberg, N. T. Pelekanos, Z. Hatzopoulos, S. I. Tsintzos, and P. G. Savvidis, Phys. Rev. B [**82**]{}, 113308 (2010)
J. Kasprzak, M. Richard, S. Kundermann, A. Baas, P. Jeambrun, J. M. J. Keeling, F. M. Marchetti, M. H. Szymaska, R. André, J. L. Staehli, V. Savona, P. B. Littlewood, B. Deveaud, and Le Si Dang, Nature, [**443**]{}, 409 (2006).
R. Balili, V. Hartwell, D. Snoke, L. Pfeiffer, K. West, Science [**316**]{}, 1007 (2007).
J. J. Baumberg, A. V. Kavokin, S. Christopoulos, A. J. D. Grundy, R. Butté, G. Christmann, D. D. Solnyshkov, G. Malpuech, G. Baldassarri Höger von Högersthal, E. Feltin, J.-F. Carlin, and N. Grandjean, Phys. Rev. Lett. [**101**]{}, 136409 (2008).
J. Levrat, R. Butté, T. Christian, M. Glauser, E. Feltin, J.-F. Carlin, N. Grandjean, D. Read, A. V. Kavokin, and Y. G. Rubo, Phys. Rev. Lett. [**104**]{}, 166402 (2010).
K. G. Lagoudakis, B. Pietka, M. Wouters, R. André, and B. Deveaud-Plédran, Phys. Rev. Lett. [**105**]{}, 120403 (2010).
J. L. Jewell, J. P. Harbison, A. Scherer, Y. H. Lee, and L. T. Florez, IEEE J. Quantum Electron. [**27**]{}, 1332 (1991).
T.-C. Lu, C.-C. Kao, H.-C. Kuo, G.-S. Huang, and S.-C. Wang, Appl. Phys. Lett. [**92**]{}, 141102 (2008).
G. Malpuech, A. Kavokin, A. Di Carlo, J. J. Baumberg, Phys. Rev. B [**65**]{}, 153310 (2002).
P. G. Lagoudakis, M. D. Martin, J. J. Baumberg, A. Qarry, E. Cohen, and L. N. Pfeiffer, Phys. Rev. Lett. [**90**]{}, 206401 (2003).
C. Ciuti, and G. C. La Rocca, Phys. Rev. B [**58**]{}, 4599 (1998).
L. V. Butov, C. W. Lai, A. L. Ivanov, A. C. Gossard, and D. S. Chemla, Nature [**417**]{}, 47 (2002).
S.I. Tsintzos, P.G. Savvidis, G. Deligeorgis, Z. Hatzopoulos, N.T. Pelekanos, Appl. Phys. Lett. [**94**]{}, 071109 (2009).
D. Ahn and S. L. Chuang, Phys. Rev. B [**34**]{}, 9034–9037 (1986)
T. A. Fisher, A. M. Afshar, D. M. Whittaker, M. S. Skolnick, J. S. Roberts, G. Hill, and M. A. Pate, Phys. Rev. B [**51**]{}, 2600 (1995)
I. Vurgaftman, J. R. Meyer, and L. R. Ram-Mohan, J. Appl. Phys. [**89**]{}, 5815 (2001)
Since the levels are only quasiconfined, the wavefunctions are not normalizable. As we are interested in the levels confined in the QWs, the integration is stopped where the energy level crosses the barrier.
D. Bajoni, E. Semenova, A. Lemaître, S. Bouchoule, E. Wertz, P. Senellart, S. Barbay, R. Kuszelewicz, and J. Bloch, Phys. Rev. Lett. [**101**]{}, 266402 (2008)
A. T. Hammack, N. A. Gippius, Sen Yang, G. O. Andreev, L. V. Butov, M. Hanson, and A. C. Gossard, J. Appl. Phys. [**99**]{}, 066104 (2006)
---
author:
- |
Rasmus Kyng\
Yale University [^1]
- |
Jakub Pachocki\
Harvard University [^2]
- |
Richard Peng\
Georgia Tech [^3]
- |
Sushant Sachdeva\
Google [^4]
bibliography:
- 'papers.bib'
title: A Framework for Analyzing Resparsification Algorithms
---
[^1]: [email protected]
[^2]: [email protected]
[^3]: [email protected]
[^4]: [email protected]
---
abstract: 'Dense emulsions, colloidal gels, microgels, and foams all display a solid-like behavior at rest characterized by a yield stress, above which the material flows like a liquid. Such a fluidization transition often consists of long-lasting transient flows that involve shear-banded velocity profiles. The characteristic time for full fluidization, $\tau_\text{f}$, has been reported to decay as a power-law of the shear rate $\dot \gamma$ and of the shear stress $\sigma$ with respective exponents $\alpha$ and $\beta$. Strikingly, the ratio of these exponents was empirically observed to coincide with the exponent of the Herschel-Bulkley law that describes the steady-state flow behavior of these complex fluids. Here we introduce a continuum model, based on the minimization of a [“free energy”]{}, that captures quantitatively all the salient features associated with such *transient* shear-banding. More generally, our results provide a unified theoretical framework for describing the yielding transition and the steady-state flow properties of yield stress fluids.'
author:
- Roberto Benzi
- Thibaut Divoux
- Catherine Barentin
- Sébastien Manneville
- Mauro Sbragaglia
- Federico Toschi
title: Unified theoretical and experimental view on transient shear banding
---
*Introduction.-* Amorphous soft materials, such as dense emulsions, foams and microgels, display solid-like properties at rest, while they flow like liquids for large enough stresses [@Barnes:1999; @Balmforth:2014; @Coussot:2015; @Bonn:2017]. These yield stress fluids are characterized by a steady-state flow behavior that is well described by the Herschel-Bulkley (HB) model, where the shear stress $\sigma$ is linked to the shear rate $\dot \gamma$ through $\sigma=\sigma_\text{c}+A\dot \gamma^n$, with $\sigma_\text{c}$ the yield stress of the fluid, $A$ the consistency index and $n$ a phenomenological exponent that ranges between 0.3 and 0.7, and is often equal to $1/2$ [@Herschel:1926; @Barnes:2001; @Katgert:2008; @Cohen:2014]. However, steady-state flow is never reached instantly and the yielding transition may involve transient regimes much longer than the natural timescale $\dot \gamma^{-1}$ [@Sprakel:2011; @Siebenburger:2012a; @Grenard:2014; @Fielding:2014; @Divoux:2016; @Bonn:2017].
As demonstrated experimentally in Refs. [@Divoux:2010; @Divoux:2011b; @Divoux:2012], long-lasting heterogeneous flows develop from the initial solid-like state, involving shear-banded velocity profiles before reaching a homogeneous steady-state flow. Depending on the imposed variable, $\dot \gamma$ or $\sigma$, the characteristic time $\tau_{\rm f}$ to reach a fully fluidized state was reported to scale respectively as $\tau_{\rm f} \propto 1/\dot \gamma^\alpha$ or as $\tau_{\rm f} \propto 1/(\sigma-\sigma_\text{c})^\beta$, where $\alpha$ and $\beta$ are fluidization exponents that only depend on the material properties (see Fig. \[fig1\]). Interestingly, these two power laws naturally lead to a constitutive relation $\sigma$ [*vs*]{} $\dot \gamma$ given by the steady-state HB equation with an exponent $n=\alpha/\beta$ [@Divoux:2011b].
The above experimental findings have triggered a wealth of theoretical contributions aiming at reproducing long-lasting heterogeneous flows, some of which have successfully produced transient shear-banded flows together with non-trivial scaling laws for fluidization times [@Illa:2013; @Moorcroft:2011; @Moorcroft:2013; @Hinkle:2016; @Vasisht:2017; @Liu:2018; @Jain:2018]. While these contributions offer potential explanations for long-lasting transients, which appear to be age-dependent and related to structural heterogeneities [@Moorcroft:2011; @Hinkle:2016; @Vasisht:2017; @Liu:2018; @Liu:2018b], none of these numerical studies captures the link between the exponents governing the transient regimes and that of the steady-state HB behavior.
From a more general perspective, shear banding has often been discussed as a first-order dynamical phase transition [@Dhont:1999; @Lu:2000; @Bocquet:2009; @Chikkadi:2014; @Divoux:2016]. In that framework, *transient* shear banding can be interpreted as the coarsening of the fluid phase, which nucleates within the solid region and whose size $\delta$ can be seen as the growing length scale that characterizes the coarsening dynamics. In this letter, we show that the yielding transition and the corresponding transient shear-banding behavior can be described by a field theory based on a [“free energy”]{}, whose order parameter is the fluidity, i.e., the ratio between the shear rate and the shear stress. In such a theory, as first introduced by Bocquet *et al.* [@Bocquet:2009] and later analyzed in Ref. [@Benzi:2016], shear-banded flows can be obtained as a minimum of a [ “free energy”]{} that depends on the fluidity and on the non-local, [i.e., spatially-dependent [@Dhont:1999; @Lu:2000]]{}, rheological properties of the system. [A link between the fluidity order parameter and the physics of elasto-plasticity at the mesoscale has been explored in Ref. [@Nicolas:2013] based on Eshelby elastic response functions [@Eshelby:1957; @Zaccone:2017; @Dasgupta:2012]. Here we build upon the fluidity approach and extend it]{}, leading to analytical expressions for the scaling exponents $\alpha$ and $\beta$ that are in quantitative agreement with experiments and that provide a clear-cut explanation for the link between these exponents and the HB exponent $n$. Our findings demonstrate that non-local effects are key to understand transient shear banding in amorphous soft solids.
![(color online) Stress-induced fluidization time $\tau_\text{f}$ vs reduced shear stress $\sigma-\sigma_\text{c}$ for carbopol microgels at various weight concentrations: 0.5% (), 0.7% (), 1% () and 3% (). Solid lines correspond to the best power-law fits of the various data sets $\tau_\text{f}\sim(\sigma-\sigma_\text{c})^{-\beta}$ with exponent $\beta$ ranging from 2.8 to 6.2. Experimental conditions are listed in Supplemental Table S1 together with values of $\sigma_\text{c}$ and $\beta$. \[fig1\]](fig1.eps){width="0.8\linewidth"}
*Fluidity model.-* We start by considering that the bulk rheology of the system is governed by the dimensionless HB model, $\Sigma=1+\dot{\Gamma}^{n}$, where $\Sigma=\sigma/\sigma_\text{c}$ is the shear stress normalized by the yield stress and $\dot{\Gamma}=\dot\gamma/(\sigma_\text{c}/A)^{1/n}$ is the shear rate normalized by the characteristic frequency for the HB law. Given the spatial coordinate $y$ along the velocity gradient direction and the system size $L$, we next assume that the flow properties of the yield stress fluid are controlled by a [“free energy”]{} functional, $F[f] = \int_0^L \Phi(f,m,\xi)\, {\rm d}y$, where [@Bocquet:2009; @Benzi_note1] $$\label{eq:bocquet}
\Phi(f,m,\xi) \equiv \left[ \frac{1}{2} \xi^2 (\nabla f)^2 - \frac{1}{2} m f^2 + \frac{2}{5} f^{5/2} \right]\,.$$ The quantity $f=f(y)$ is the *local* (dimensionless) fluidity defined by $f(y)=\dot{\Gamma}(y)/\Sigma(y)$ and represents the order parameter in the model. Following Refs. [@Bocquet:2009; @Benzi:2016], $m^2$ is defined as: $$\label{eq:m}
m^2(\Sigma) \equiv\frac{(\Sigma-1)^{1/n}}{\Sigma}\, \hspace{.1in}\hbox{\rm for}\hspace{.1in} \Sigma\ge 1$$ and $m^2=0$ for $\Sigma<1$. This formulation implies that, for $f(y)=m^2$ independently of $y$, the system flows homogeneously and follows the dimensionless HB model. Finally, the length scale $\xi$ is usually referred to as the “cooperative” scale and is of the order of a few times the size of the elementary microstructural constituents [@Bocquet:2009; @Goyon:2008; @Goyon:2010; @Geraud:2013; @Geraud:2017]. In steady-state, the flowing properties of the system can then be derived from the variational equation $\delta F/ \delta f = 0$. This equation predicts heterogeneous flow profiles as induced by wall effects but it cannot account for stable shear banding [@Benzi:2016]. Moreover, transient flow properties require that some temporal dynamics be specified for $f$. To overcome these limitations, we now generalize a recent theoretical proposal introduced in Ref. [@Benzi:2016] and apply it to describe transient flows.
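Eq. (\[eq:m\]) can be transcribed directly; a minimal sketch (the default $n=0.5$ is the commonly reported HB exponent mentioned in the introduction):

```python
# Sketch of Eq. (eq:m) in the text: m^2(Sigma) = (Sigma - 1)^(1/n) / Sigma
# for Sigma >= 1, and m^2 = 0 below the yield stress (Sigma < 1).
def m_squared(Sigma, n=0.5):
    """Dimensionless m^2 whose homogeneous fixed point f = m^2 gives HB flow."""
    if Sigma < 1.0:
        return 0.0
    return (Sigma - 1.0) ** (1.0 / n) / Sigma
```

By construction, the homogeneous solution $f=m^2$ reproduces the dimensionless HB law: with $\dot\Gamma = f\,\Sigma = (\Sigma-1)^{1/n}$ one recovers $\Sigma = 1 + \dot\Gamma^n$.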
{width="0.7\linewidth"}
*Stress-induced fluidization dynamics.-* Let us first focus on the yielding transition under an imposed shear stress $\sigma$ for which $m$ is a constant. We note that introducing $\tilde f = f / m^2 $ and $\tilde y = m^{1/2} y/\xi$ allows us to rescale homogeneously the functional $\Phi$ to $\Phi(f,m,\xi) = m^5 \tilde \Phi (\tilde f) $, where [@Benzi_note2] $$\label{eq:bocquet_normalized}
\tilde \Phi (\tilde f) =\left[ \frac{1}{2} (\tilde\nabla \tilde{f})^2 - \frac{1}{2} \tilde{f}^2 + \frac{2}{5} \tilde{f}^{5/2} \right]\,.$$ The advantage of using $\tilde f $ and $\tilde y$ is that we can now formulate the dynamical equation independently of both the strength of external forcing $m$ and $\xi$. We further assume that the system reaches a stable equilibrium configuration corresponding to a minimum of $F[\tilde f]$ and that such dynamics is governed by a “mobility” $k(\tilde{f})$, for which the most general dynamical equation reads [@Benzi_note1] $$\label{new1}
\begin{split}
\frac{\partial \tilde f}{\partial t} & = - m^5 k(\tilde{f}) \frac{\delta F[\tilde f]}{\delta \tilde f} \\
& = m^5 k( \tilde f) \left[ \tilde \Delta \tilde f + \tilde f - \tilde f ^ {3/2}\right]\,.
\end{split}$$ If the mobility $k(\tilde f)$ is an analytic function of $\tilde f$ and $k(0)= 0$, then Eq. (\[new1\]) can account for a shear-banding solution in the general form $ \tilde f(\tilde y) = 0$ (solid branch) for $ \tilde y \in [0, \tilde L-\tilde \delta]$ and $\tilde f(\tilde y)$ solution of $\tilde \Delta \tilde f + \tilde f - \tilde f ^ {3/2} = 0$ (fluidized branch) for $\tilde y \in [\tilde L-\tilde\delta,\tilde L]$, where $\tilde\delta$ is the rescaled size of the fluidized region. Furthermore, transient shear banding corresponds to the case where the solid branch $\tilde f=0$ is an unstable solution. To explore this latter case, we next consider the time dynamics in Eq. (\[new1\]) with $k(\tilde f) = \tilde f$ and fixed initial conditions. Note that the initial conditions influence mainly the early-time response of the fluid. [A detailed discussion on the choice of $k(\tilde f)$ and on initial conditions is given in the Supplemental Material.]{} Equation (\[new1\]) is solved numerically for $\Sigma=1.1$ and $\xi/L=0.01$ in Figs. \[fig2\](a)-(b), assuming $\tilde f (\tilde y,0) = \tilde f_0 \ll 1$ for the initial solid-like state and $\tilde f(\tilde{L},t) = 1$ and $\partial_{ \tilde y} \tilde f (0, t)=0$ for boundary conditions at the two different walls. Such a choice will be addressed below in the discussion section. As seen in the velocity profiles $v(y)$ \[insets in Fig. \[fig2\](a)\], the system forms a shear band near $y=L$ at time $ t>0$. The shear band grows in time and the system eventually reaches the stable equilibrium configuration $\tilde f(\tilde y,t)=1$ within a well-defined fluidization time $T_\text{f}$. This phenomenology is in remarkable agreement with experimental observations in Figs. \[fig2\](c) and (d) for a carbopol microgel. In particular, the band size $\delta(t)$ follows a very similar growth whatever the applied stress (see Supplemental Fig. S1).
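For concreteness, Eq. (\[new1\]) with $k(\tilde f)=\tilde f$ can be integrated with a simple explicit scheme. The sketch below is ours: grid size, time step, initial fluidity and stopping threshold are illustrative choices made for numerical stability, not the parameters used for Fig. \[fig2\]:

```python
# Illustrative explicit Euler integration of Eq. (new1) with k(f) = f:
#   df/dt = m^5 * f * (Lap f + f - f^(3/2)),
# with f = 1 at the fluidized wall and zero gradient at y = 0.
import numpy as np

def fluidization_time(m=1.0, N=24, dt=2.5e-4, f0=0.1, tol=0.05,
                      max_steps=2_000_000):
    """Return the time at which min(f) exceeds 1 - tol (full fluidization)."""
    dy = 1.0 / (N - 1)
    f = np.full(N, f0)
    f[-1] = 1.0                                   # Dirichlet: fluidized wall
    t = 0.0
    for _ in range(max_steps):
        lap = np.zeros_like(f)
        lap[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dy ** 2
        lap[0] = 2.0 * (f[1] - f[0]) / dy ** 2    # Neumann at y = 0
        f[:-1] += dt * m ** 5 * f[:-1] * (lap[:-1] + f[:-1] - f[:-1] ** 1.5)
        np.clip(f, 0.0, None, out=f)              # guard against overshoot
        t += dt
        if f.min() > 1.0 - tol:
            return t
    raise RuntimeError("no fluidization within max_steps")
```

Because $m^5$ enters only as a prefactor of the rescaled dynamics, the computed time scales as $m^{-5}$ at fixed $\tilde L$, consistent with the rescaling argument of the text.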
Using Eq. (\[new1\]), we may predict the scaling behavior of the fluidization time $T_\text{f}$ as a function of $m$. Upon rescaling the time as $\tilde t = m^5 t$, we observe that Eq. (\[new1\]) no longer depends on $m$. Regardless of the specific function $k(\tilde f)$, we expect that the shear band expands with some characteristic velocity $\tilde v_\text{f}$ independent of $m$. Therefore, the rescaled fluidization time should be proportional to $ \tilde L / \tilde v_\text{f}$. It follows that the fluidization time should exhibit the scaling $T_\text{f} \sim \tilde L / (m^5 \tilde v_\text{f}) \sim 1/ (\xi m^{9/2})$ [independently of the specific functional form of $k(\tilde f)$]{}. The numerical integration of Eq. (\[new1\]) for various values of $m$ leads to the fluidization times $T_\text{f}$ shown in Fig. \[fig3\](a), which nicely follow the predicted $m^{-9/2}$ power-law decay. Such a scaling is also in excellent agreement with the experimental data of Fig. \[fig1\] when rescaled and plotted in terms of $m(\Sigma)$ based on the experimental steady state HB parameters \[see Fig. \[fig3\](b) and discussion below\].
![(color online) Stress-induced fluidization time as a function of $m(\Sigma)$ defined by Eq. . (a) Theoretical predictions $T_\text{f}$. (b) Experiments from Fig. \[fig1\] where each data set for $\tau_\text{f}$ was rescaled by the time $\tau_0$ shown in the inset as a function of the microgel concentration $C$ (see also Supplemental Table S1). Red lines show the predicted power law with exponent $-9/2$. The best power-law fits of the whole data sets yield exponents $-4.46\pm0.10$ and $-4.69\pm 0.33$ respectively for theory and experiments. The gray line in the inset is $\tau_0\sim C^4$. \[fig3\]](fig3.eps){width="0.8\columnwidth"}
*Strain-induced fluidization.-* We now proceed to show that the same approach allows us to rationalize the yielding transition under an imposed shear rate $\dot\Gamma$. In that case, we must supplement the theory by the fluidity equation $\dot \Sigma = \dot \Gamma - f \Sigma$, which corresponds to a single Maxwell mode for the evolution of the stress [@Moorcroft:2011]. Moreover, $m$ being a function of time, we can no longer use the rescaling $\tilde f = f / m^2$. Since $\dot \Gamma$ is a constant, we rather introduce the rescaled variable $\tilde f = f /\dot \Gamma$. Upon rescaling the spatial variable as $\tilde y = \dot \Gamma^{1/4} y/ \xi$, the analog of Eq. (\[new1\]) reads $$\label{15}
\frac{\partial \tilde f}{\partial t} = \dot \Gamma^{5/2 }k(\tilde f) \left[ \tilde \Delta \tilde f + \tilde m \tilde f - \tilde f^{3/2} \right]\,,$$ where $\tilde m = m/\dot \Gamma^{1/2}$. Under the assumption that $\tilde m$ remains roughly constant during the shear band evolution, rescaling time as $\tilde t =\dot\Gamma^{5/2} t$ leads to $T_\text{f} \sim \tilde L /( \dot \Gamma^{5/2}\tilde v_\text{f}) \sim 1/(\xi\dot \Gamma^{9/4})$. The inset of Fig. \[fig4\] shows the actual $T_\text{f}$ computed numerically from Eq. (\[15\]) with $k(\tilde f) = \tilde f$ for different shear rates $\dot \Gamma$. The results are very well fitted by a power-law decay of exponent $2.15\pm 0.10$, quite close to the theoretical exponent $\alpha=9/4$, and in good agreement with experiments on a 1% wt. carbopol microgel for various geometries and boundary conditions that lead to an exponent of $2.45\pm 0.23$ (see Fig. \[fig4\] and Supplemental Table S2).
*Discussion.-* Let us now compare the theoretical findings against experimental data. Coming back to the case of an imposed shear stress and to the definition of $m$ in Eq. (\[eq:m\]), we note that $T_\text{f}\sim m^{-9/2}$ corresponds to the scaling $T_\text{f}\sim (\Sigma-1)^{-9/(4n)}$ in terms of the reduced viscous stress $\Sigma-1$. This corresponds to a fluidization exponent $\beta=9/(4n)$. To illustrate such a scaling, numerical results are plotted in Supplemental Fig. S2 for different values of $n$ covering the range reported in experiments ($n\simeq 0.30$–0.57). The spread of the exponents $\beta\simeq 3$–8 nicely corresponds to that observed experimentally ($\beta\simeq 2.8$–6.2). More specifically, these theoretical predictions prompt us to revisit the experimental data shown in Fig. \[fig1\] by computing estimates of $m(\Sigma)$ using Eq. (\[eq:m\]) with $\Sigma=\sigma/\sigma_\text{c}$ and the HB parameters $\sigma_\text{c}$ and $n$ determined at steady state [@Divoux:2011b]. When plotted as a function of $m(\Sigma)$, the experimental fluidization times remarkably collapse onto the predicted scaling $\tau_\text{f}\sim m(\Sigma)^{-9/2}$, provided $\tau_\text{f}$ is rescaled by a characteristic time $\tau_0$ independent of the applied stress \[see Fig. \[fig3\](b)\]. Although a clear physical interpretation of $\tau_0$ is still lacking [@Benzi_note3], the collapse of the experimental data seen in Fig. \[fig3\](b) is a strong signature of the predictive power of the theory.
![(color online) Strain-induced fluidization time $\tau_\text{f}$ vs shear rate $\dot\gamma$ for a 1% wt. carbopol microgel under the various experimental conditions listed in Supplemental Table S2. Inset: theoretical prediction for $T_\text{f}$ vs $\dot\Gamma$. Red lines show the predicted power law with exponent $-9/4$. The best power-law fits of the whole data sets yield exponents $-2.15\pm 0.10$ and $-2.45\pm0.23$ respectively for theory and experiments. \[fig4\]](fig4.eps){width="0.8\linewidth"}
Another key outcome of the proposed approach is that, assuming an underlying HB rheology, it provides the first theoretical analytical expressions for both fluidization exponents $\alpha$ and $\beta$, in quantitative agreement with experimental results. Moreover, the ratio of these exponents, $\alpha / \beta = (9/4)/(9/4n)= n$, coincides with the Herschel-Bulkley exponent exactly as in experiments [@Divoux:2011b; @Divoux:2012]. Therefore, the present theory provides a natural framework for justifying the empirical connection between transient and steady-state flow behaviors.
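These exponent relations are simple enough to transcribe directly; a minimal sketch:

```python
# Sketch of the predicted fluidization exponents: alpha = 9/4 under imposed
# shear rate and beta = 9/(4n) under imposed stress, so alpha/beta = n.
ALPHA = 9.0 / 4.0

def beta_from_n(n):
    """Stress-controlled fluidization exponent for HB exponent n."""
    return 9.0 / (4.0 * n)

def hb_exponent(alpha, beta):
    """Recover the HB exponent from the two fluidization exponents."""
    return alpha / beta
```

For the experimental range $n=0.30$–0.57, `beta_from_n` spans from 7.5 down to about 3.9, consistent with the measured spread of $\beta$.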
Furthermore, the scaling found here for $\tau_\text{f}$ is extremely robust and depends only weakly on the initial conditions. As illustrated in Supplemental Figs. S3 and S4 for two different initial values of the fluidity in the gap, the shear rate either shows a monotonic increase up to complete fluidization or displays a decreasing trend with a well-defined minimum before increasing towards steady state. Yet, the fluidization time remains comparable in both cases. Note also that, at an early stage, $\dot \Gamma$ shows a power-law decrease in time that is strongly reminiscent of the primary creep regime reported in amorphous soft materials [@Bauer:2006; @Divoux:2011b; @Grenard:2014; @Leocmach:2014; @Helal:2016; @Lidon:2017; @Aime:2018]. In the present model, the power-law exponent may take any value between $-2/3$ and $0$ depending on the choice of $k(\tilde f)$, thus providing an explanation for the diversity of exponents reported in the literature.
To conclude, our results show that the [“free energy”]{} approach originally introduced to account for non-local effects in steady-state flows of complex fluids [@Bocquet:2009] also captures long-lasting transient heterogeneous flows: thanks to cooperative effects, a fluidized band nucleates and grows until complete yielding, which quantitatively matches the experimental phenomenology. In this framework, transient shear banding appears as the dynamical signature of the unstable nature of the solid branch at $\dot\gamma=0$ in the flow curve [@Varnik:2003; @Varnik:2004; @Bonn:2017]. More generally, as explored in Ref. [@Benzi:2016], the present model also accounts for steady-state shear banding when cooperative effects are hindered, e.g., by mechanical noise that prevents the shear band from growing through cascading plastic events. Such a connection between transient and steady-state behaviors in terms of cooperativity-induced stability of the shear band offers for the first time a unified framework for describing the local scenario associated with the yielding dynamics of soft glassy materials.
The authors thank David Tamarii for help with the experiments as well as Emanuela Del Gado and Suzanne Fielding for fruitful discussions. This research was supported in part by the National Science Foundation under Grant No. NSF PHY 17-48958 through the KITP program on the Physics of Dense Suspensions.
[**Unified theoretical and experimental view on transient shear banding.**]{}
[**Supplementary information**]{}
Experimental parameters
=======================
| Symbol | $C$ ($\%$) | Geometry | BC | $L$ (mm) | $\sigma_\text{c}$ (Pa) | $n$ | $A$ (Pa.s$^n$) | $\beta$ | $\tau_0$ (s) |
|--------|------------|----------|----|----------|------------------------|-----|----------------|---------|--------------|
|        | 0.5 | parallel plate | rough | 1 | 21.8 | 0.57 | 9.1 | 2.8 | 2.5 |
|        | 0.7 | parallel plate | rough | 1 | 32.9 | 0.54 | 12.3 | 3.3 | 2.0 |
|        | 1 | cone & plate | smooth | - | 30.0 | 0.50 | 10.6 | 4.2 | 0.25 |
|        | 1 | concentric cylinders | rough | 1.1 | 27.8 | 0.53 | 11.3 | 4.2 | 0.06 |
|        | 1 | concentric cylinders | smooth | 1 | 30.4 | 0.53 | 10.3 | 4.9 | 0.04 |
|        | 1 | parallel plate | smooth | 1 | 40.2 | 0.43 | 20.8 | 4.5 | 0.2 |
|        | 1 | parallel plate | rough | 1 | 47.4 | 0.50 | 18.7 | 4.5 | 0.4 |
|        | 1 | parallel plate | rough | 3 | 47.4 | 0.50 | 18.7 | 5.9 | 0.35 |
|        | 3 | parallel plate | rough | 1 | 115.5 | 0.30 | 99.7 | 6.2 | $3.3\times 10^{-3}$ |
: Experimental parameters for stress-induced fluidization of carbopol microgels of weight concentration $C$ in different shearing geometries with different boundary conditions (BC) and gap widths $L$. The yield stress $\sigma_\text{c}$, the shear-thinning exponent $n$ and the consistency index $A$ are inferred from Herschel-Bulkley fits of the steady-state $\sigma$ vs $\dot\gamma$ data. $\beta$ is the exponent of the best power-law fit of the stress-induced fluidization time $\tau_\text{f}$ vs $\sigma-\sigma_\text{c}$ shown in Fig. 1. $\tau_0$ is the characteristic time used to rescale $\tau_\text{f}$ in Fig. 3(c). For a fixed weight concentration of 1 %, it varies by one order of magnitude depending on the batch sample, on the geometry and on boundary conditions. This suggests a subtle dependence of $\tau_0$ on the microscopic details of the system and its interaction with the shearing walls, standing out as an open issue. The symbols in the first column are those used in Fig. 1 and Fig. 3(c) in the main text.[]{data-label="table1"}
| Symbol | Geometry | BC | $L$ (mm) | $\alpha$ |
|--------|----------|----|----------|----------|
|        | concentric cylinders | smooth | 0.5 | 2.6 |
|        | concentric cylinders | rough | 1.1 | 2.3 |
|        | concentric cylinders | smooth | 1.5 | 2.5 |
|        | concentric cylinders | smooth | 3 | 2.0 |
|        | cone & plate | smooth | - | 2.3 |
: Experimental parameters for strain-induced fluidization of a 1% wt. carbopol microgel in different shearing geometries with different boundary conditions (BC) and gap widths $L$. $\alpha$ is the exponent of the best power-law fit of the strain-induced fluidization time $\tau_\text{f}$ vs $\dot\gamma$ found for each individual data set. The symbols in the first column are those used in Fig. 4 in the main text.[]{data-label="table2"}
The experimental conditions leading to the results shown in Fig. 1, Fig. 2(c) and (d), Fig. 3(b) and Fig. 4 in the main text are gathered in Tables \[table1\] and \[table2\]. In all cases, carbopol microgels were prepared at a weight concentration $C$ following the protocol described in Ref. [@Divoux:2011b]. As explored in Refs. [@Baudonnet:2002; @Baudonnet:2004; @Lee:2011; @Geraud:2013; @Geraud:2017], the details of the preparation protocol, especially the carbopol type, the final pH and the mixing procedure, have a strong impact on the microstructure of the resulting microgels and on their rheological properties. In particular, carbopol microgels prepared with a procedure similar to that of the present samples [@Geraud:2013; @Geraud:2017] were shown to consist of jammed, polydisperse swollen polymer particles of typical size $6~\mu$m. The cooperative length $\xi$ was estimated to be about 2 to 5 times the particle size by means of local rheological measurements in microchannels [@Geraud:2013; @Geraud:2017].
The samples are loaded in a shearing cell attached to a standard rheometer (Anton Paar MCR301). Experiments listed in Tables \[table1\] and \[table2\] performed in parallel-plate and in concentric-cylinder geometries with gaps larger than 0.5 mm have already been described at length in Refs. [@Divoux:2010; @Divoux:2011b; @Divoux:2012]. The present work also includes new data sets obtained in a smooth cone-and-plate geometry (steel cone of diameter 50 mm, angle 2$^\circ$, truncation 55 $\mu$m) and in a smooth concentric-cylinder geometry of gap 0.5 mm (Plexiglas cylinders, outer diameter 50 mm, height 30 mm). Note that the HB parameters $\sigma_\text{c}$, $A$ and $n$ for measurements in parallel-plate geometries were extracted from the steady-state rheological data, which explains the differences in the yield stress (and thus in the exponent $\beta$) indicated in Table \[table1\] and in Ref. [@Divoux:2011b] where $\sigma_\text{c}$ was directly extracted from the $\tau_\text{f}$ vs $\sigma$ data.
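As a rough illustration of how Herschel-Bulkley parameters such as those of Table \[table1\] can be extracted from steady-state flow-curve data, here is a minimal pure-Python sketch on synthetic, noise-free data (the parameter values are borrowed from the 1 % rough parallel-plate row; the grid search over the yield stress and the function name are our own illustrative choices, not the authors' fitting procedure):

```python
import math

def fit_herschel_bulkley(gammadot, sigma, sigma_c_grid):
    """Fit sigma = sigma_c + A * gammadot**n by ordinary least squares on
    log(sigma - sigma_c) = log A + n log gammadot, scanning a grid of
    candidate yield stresses and keeping the smallest residual."""
    best = None
    for sc in sigma_c_grid:
        if any(s <= sc for s in sigma):
            continue                      # sigma - sigma_c must stay positive
        x = [math.log(g) for g in gammadot]
        y = [math.log(s - sc) for s in sigma]
        N = len(x)
        xbar, ybar = sum(x)/N, sum(y)/N
        sxx = sum((xi - xbar)**2 for xi in x)
        sxy = sum((xi - xbar)*(yi - ybar) for xi, yi in zip(x, y))
        n = sxy/sxx                       # shear-thinning exponent
        logA = ybar - n*xbar
        rss = sum((yi - (logA + n*xi))**2 for xi, yi in zip(x, y))
        if best is None or rss < best[0]:
            best = (rss, sc, n, math.exp(logA))
    _, sc, n, A = best
    return sc, n, A

# synthetic steady-state flow curve mimicking the 1% rough parallel-plate sample
true_sc, true_n, true_A = 47.4, 0.50, 18.7
rates = [10**(j/4) for j in range(-8, 9)]              # 0.01 ... 100 1/s
stresses = [true_sc + true_A*g**true_n for g in rates]
sc, n, A = fit_herschel_bulkley(rates, stresses,
                                [true_sc + 0.1*j for j in range(-20, 20)])
```

On clean data the fit recovers the input parameters; on real flow curves a nonlinear least-squares routine over all three parameters at once would be the more robust choice.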
Under an imposed shear stress, the fluidization time $\tau_\text{f}$ was shown to correspond to the last inflection point of the shear rate response $\dot\gamma(t)$ [@Divoux:2011b]. This allows us to measure $\tau_\text{f}(\sigma)$ in the absence of simultaneous velocity measurements, e.g., in cone-and-plate and in parallel plate geometries. As for experiments performed under an imposed shear rate, the end of the transient shear-banding regime is associated with a significant drop in the stress response $\sigma(t)$ [@Divoux:2010; @Divoux:2012] that is used to estimate $\tau_\text{f}(\dot\gamma)$ in the cone-and-plate geometry.
In the case of concentric cylinders, rheological measurements are supplemented by time-resolved local velocity measurements. The technique is based on the scattering of ultrasound by hollow glass microspheres (Potters, Sphericel, mean diameter 6 $\mu$m, density 1.1) suspended at a volume fraction of 0.5 % within the carbopol microgel. It was previously shown that such seeding of the microgel samples does not affect their fluidization dynamics [@Divoux:2011b]. Full details on ultrasonic velocimetry coupled to rheometry can be found in Ref. [@Manneville:2004a]. This technique outputs the tangential velocity $v(y,t)$ as a function of the distance $y$ to the fixed wall and as a function of time $t$. The outer fixed cylinder is thus located at $y=0$ and the inner rotating cylinder at $y=L$, where $L$ is the width of the gap between the two cylinders. Fig. 2(c) in the main text shows a few velocity profiles $v(y,t)/v_0(t)$ vs $y/L$ where the velocity is normalized by the current velocity $v_0(t)$ of the moving wall deduced from the shear rate response $\dot\gamma(t)$. Each velocity profile is itself an average over 10 to 1000 successive velocity measurements, which corresponds typically to an average over 8 s to 140 s. The typical standard deviation of these measurements is about the symbol size. Note that these data, obtained in a smooth geometry, show significant wall slip, as opposed to those shown in Ref. [@Divoux:2011b] for rough boundary conditions. Finally, each individual velocity profile is fitted by linear functions over $y$-intervals extending respectively within the solid-like region and within the fluidized band (when present). The intersection of the two fits yields the width $\delta$ of the fluidized band as shown in Fig. 2(c) and as plotted as a function of time in Figs. 2(d) and \[suppfig1\](b).
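The extraction of the fluidized-band width $\delta$ from the intersection of two linear fits, as described above, can be sketched as follows. This is a minimal illustration on a synthetic piecewise-linear profile; the choice of fitting windows (30 % of the gap on either side) is an assumption for illustration, not the authors' stated procedure:

```python
def linfit(ys, vs):
    # least-squares line v = a*y + b
    N = len(ys)
    ybar, vbar = sum(ys)/N, sum(vs)/N
    a = (sum((y - ybar)*(v - vbar) for y, v in zip(ys, vs))
         / sum((y - ybar)**2 for y in ys))
    return a, vbar - a*ybar

def band_width(ys, vs, L, frac=0.3):
    """Width of the fluidized band from the intersection of linear fits to
    the solid-like region (near the fixed wall, y ~ 0) and to the fluidized
    band (near the rotor, y ~ L)."""
    solid = [(y, v) for y, v in zip(ys, vs) if y < frac*L]
    fluid = [(y, v) for y, v in zip(ys, vs) if y > (1 - frac)*L]
    a1, b1 = linfit(*zip(*solid))
    a2, b2 = linfit(*zip(*fluid))
    y_star = (b2 - b1)/(a1 - a2)   # intersection of the two fitted lines
    return L - y_star              # the band extends from y_star to the rotor

# synthetic profile: nearly arrested below y* = 0.6, sheared band above
L = 1.0
ys = [i/50 for i in range(51)]
vs = [0.05*y if y < 0.6 else 0.03 + 2.0*(y - 0.6) for y in ys]
delta = band_width(ys, vs, L)      # recovers the band width 0.4
```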
Theoretical considerations
==========================
[In this section, we examine in more detail some theoretical aspects concerning the fluidity model used in the main text in order to justify our choice of function $k(\tilde{f})$. We specifically address the basic differences between the general case $k(\tilde f) = \tilde f^p$ with $p>0$ \[hereafter referred to as case (I)\] and the particular case $k(\tilde f) = \textrm{const}$ \[hereafter referred to as case (II)\]. As already outlined in Ref. [@Benzi:2016], case (I) admits stationary solutions with the coexistence of two rheological branches: the solid branch where $\tilde f = \tilde f_s = 0$ and a fluid branch $\tilde f = \tilde f_b > 0$. In other words, case (I) admits a shear-banded profile as a stationary solution, whereas this is not possible in case (II). Such a difference matters because these two fluidization mechanisms yield different time scales. Indeed,]{} assuming that the initial condition $\tilde f(0)$ is homogeneous and neglecting the term $\tilde \Delta \tilde f$ in Eq. (4), we obtain $$\label{new22}
\frac{\partial \tilde f}{\partial t} = m^5 k(\tilde f) \left[ \ \tilde f - \tilde f ^ {3/2} \right ] \,.$$ [We further consider the short time behavior of the instability by neglecting the term $\tilde f^{3/2}$ in Eq. (\[new22\]). It is enough to compare the two cases for the choice $p = 1$. For case (I), we obtain: $$\label{new81}
\tilde f(t) = \frac{\tilde f(0)}{1-m^5\tilde f(0) t} \,,$$ while for case (II) we get $$\label{new82}
\tilde f(t) = \tilde f(0)\exp(m^5 t) \,.$$ Upon comparing Eqs. (\[new81\]) and (\[new82\]), it is clear that the characteristic time for the instability depends on the initial condition $\tilde f(0)$ for case (I), while it is independent of the initial condition for case (II). This dependence on $\tilde f(0)$ for case (I) probably explains the small yet detectable dependence of the fluidization time $T_f$ on the initial condition as reported in Fig. \[figvel\]. There, assuming two different initial conditions, we show that $$\label{new83}
\frac{T_{\text{f},1}}{T_{\text{f},2}} = C_1-C_2 \log \left[m(\Sigma)\right]\,,$$ where $T_{\text{f},i}$ is the fluidization time computed for initial condition $i$ and $C_1$ and $C_2$ are positive constants. This is not observed for case (II), whose fluidization time is independent of the initial condition since Eq. (4) for case (II) is essentially a reaction-diffusion equation [@Crank:1979; @Murray:2003].]{} Finally, we discuss how cases (I) and (II) differ in the decay rate of the fluidity. [Indeed, for a sufficiently large initial fluidity, the term $\tilde f ^ {3/2}$ is dominant in Eq. (\[new22\]) so that the fluidity decreases.]{} The relaxation equation thus takes the following form $$\label{new24}
\frac{\partial \tilde f}{\partial t} {= -m^5} k( \tilde f) \tilde f ^ {3/2} = -{m^5}\tilde f ^ {p+3/2}\,,$$ [with $p>0$ for case (I) and $p=0$ for case (II).]{} The solution of Eq. (\[new24\]) reads $$\label{new26}
\tilde f(t) = \frac{A}{ (1 + B t)^b}\,,$$ where $b = 2/(1+2p)$ and $A$ and $B$ are suitable constants. For $p=1$, one has $b = 2/3$ as already discussed in the main text. [This corresponds to the scaling observed experimentally for the shear rate (or fluidity) response under a constant stress in Ref. [@Divoux:2011b], which motivates our choice of $p=1$]{}. Note that for case (II) we obtain an exponent $b = 2$, far from any experimental finding [[@Bauer:2006; @Siebenburger:2012a; @Leocmach:2014; @Grenard:2014; @Helal:2016; @Lidon:2017; @Aime:2018]]{}.
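The power-law decay (\[new26\]) with $b = 2/(1+2p)$ can be checked numerically by integrating the relaxation equation (\[new24\]); a minimal sketch, with $m = 1$ for simplicity and a fourth-order Runge-Kutta integrator of our own choosing:

```python
import math

def rhs(f, p, m5=1.0):
    # relaxation part of the fluidity equation: df/dt = -m^5 f^(p + 3/2)
    return -m5 * f**(p + 1.5)

def rk4(f0, p, dt, nsteps):
    # classical 4th-order Runge-Kutta integration of df/dt = rhs(f)
    f = f0
    for _ in range(nsteps):
        k1 = rhs(f, p)
        k2 = rhs(f + 0.5*dt*k1, p)
        k3 = rhs(f + 0.5*dt*k2, p)
        k4 = rhs(f + dt*k3, p)
        f += dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
    return f

p = 1                                  # case (I) with k(f) = f
f1 = rk4(10.0, p, 0.01, 10_000)        # fluidity at t = 100
f2 = rk4(10.0, p, 0.01, 100_000)       # fluidity at t = 1000
slope = (math.log(f2) - math.log(f1))/(math.log(1000.0) - math.log(100.0))
# the log-log slope approaches -2/(1+2p) = -2/3 at late times
```

For $p=0$ the same integration gives a late-time slope close to $-2$, in line with the discussion above.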
[The above discussion around Eq. (\[new22\]) leads to two interesting conclusions. First, the growth of the instability depends on the initial conditions for case (I) but not for case (II). The weak dependence of fluidization times on initial conditions for case (I) could also be linked to the logarithmic dependence of $T_\text{f}$ on the waiting time spent at rest as reported in Ref. [@Benzi:2016] although a thorough comparison of aging effects in theory and experiments is left for future work. Second, the decay of the fluidity is an indication of the functional form of the mobility function $k$ and points to a linear behavior of $k(\tilde f)$.]{}
[In summary, complex materials such as the one considered in this Letter show a broad spectrum of relaxation time scales, which cannot be reduced to a simple diffusion constant. This simple argument allows us to rule out case (II), where $k(\tilde f) = \textrm{const}$ would correspond to a single relaxation time. Indeed, although case (II) predicts the same scaling behavior for the fluidization time as case (I), it fails to reproduce several key features of the experimental results on carbopol microgels. This is why we chose to use $k(\tilde f)= \tilde f^p$ with $p=1$ in the main text.]{}
Supplemental figures
====================
![Theoretical predictions for the stress-induced fluidization time $T_\text{f}$ as function of the reduced stress $\Sigma-1$ for three different values of the Herschel-Bulkley exponent ($n=0.3, 0.45$ and $0.6$). Solid lines show power laws with exponents -3.5 and -7.5. \[suppfig2\]](suppfig2.eps){width="0.4\columnwidth"}
![Ratio of the fluidization times $T_{f,1}/T_{f,2}$ (symbols) predicted theoretically for the two different initial conditions used in Fig. \[suppfig3\] [with $k(\tilde{f})=\tilde{f}$]{}. $T_{f,1}$ ($T_{f,2}$ resp.) refers to a system with the initial conditions used for the red (blue resp.) line in Fig. \[suppfig3\](a). Upon changing the applied stress $\Sigma$, the ratio of the two fluidization times shows a weak dependence on $m(\Sigma)$ that is well fitted by a logarithmic dependence with slope $\simeq -0.014$ (red line).[]{data-label="figvel"}](suppfig4.eps){width="0.4\columnwidth"}
---
abstract: 'Modelling excesses over a high threshold using the Pareto or generalized Pareto distribution (PD/GPD) is the most popular approach in extreme value statistics. This method typically requires high thresholds in order for the (G)PD to fit well and in such a case applies only to a small upper fraction of the data. The extension of the (G)PD proposed in this paper is able to describe the excess distribution for lower thresholds in case of heavy tailed distributions. This yields a statistical model that can be fitted to a larger portion of the data. Moreover, estimates of tail parameters display stability for a larger range of thresholds. Our findings are supported by asymptotic results, simulations and a case study.'
address:
- |
Department of Mathematics and Leuven Statistics Research Centre\
Katholieke Universiteit Leuven, Celestijnenlaan 200b, B-3001 Heverlee, Belgium
- |
Joint Research Centre, European Commission\
Via Fermi 2749, 21027 Ispra (VA), Italy
- |
Institut de statistique, Université catholique de Louvain\
Voie du Roman Pays 20, B-1348 Louvain-la-Neuve, Belgium
author:
- Jan Beirlant
- Elisabeth Joossens
- Johan Segers
bibliography:
- 'Biblio.bib'
title: 'Second-Order Refined Peaks-Over-Threshold Modelling for Heavy-Tailed Distributions'
---
bias reduction, Hill estimator, extended Pareto distribution, extreme value index, heavy tails, regular variation, tail empirical process, tail probability, Weissman probability estimator
Introduction
============
It is well known that a distribution is in the max-domain of attraction of an extreme value distribution if and only if the distribution of excesses over high thresholds is asymptotically generalized Pareto (GP) [@BalkemadeHaan74; @Pickands75]. This result gave rise to the peaks-over-threshold methodology introduced in @DavisonSmith90; see also @Coles01. The method consists of two components: modelling of clusters of high-threshold exceedances with a Poisson process and modelling of excesses associated to the cluster peaks with a GPD. In practice, a way to verify the validity of the model is to check whether the estimates of the GP shape parameter are stable when the model is fitted to excesses over a range of thresholds. The question then arises how to proceed if this threshold stability is not visible for a given data set. From a theoretical point of view, absence of the stability property can be explained by a slow rate of convergence in the Pickands–Balkema–de Haan theorem. In case of heavy-tailed distributions, the same issue arises when fitting a Pareto distribution (PD) to the relative excesses over high, positive thresholds.
A possible solution is to build a more flexible model capable of capturing the deviation between the true excess distribution and the asymptotic model. For heavy-tailed distributions, this deviation can be parametrized using a power series expansion of the tail function [@Hall82], or more generally via second-order regular variation [@GdH87; @BGT].
The aim of this paper is to propose such an extension, called the extended Pareto or extended generalized Pareto distribution (EPD/EGPD). A key distinction with other approaches is that, whereas in previous papers the second-order approximation is used only to adjust the inference of the tail index while inference on the tail itself remains based on the GPD, in our approach the EP(G)D is fitted directly to the high-threshold excesses. Indeed, as we will show later, even if the (G)PD parameters are estimated in an unbiased way, tail probability estimators may still exhibit asymptotic bias if based upon the (G)PD approximation.
The main advantages of the new model are a reduction of the bias of estimators of tail parameters and a good fit to excesses over a larger range of thresholds. In an actuarial context, the relevance of using more elaborate models has already been discussed for instance in @FHR02 and @CoorayAnanda05.
In case of heavy-tailed distributions, it is more convenient to work with relative excesses $X/u$ rather than absolute excesses $X-u$. Under the domain of attraction condition the limit distribution of $X/u$ given $X > u$ for $u \to \infty$ is the PD. The EPD and EGPD presented here are related through the same affine transformation that links these relative and absolute excesses. Building on the theory of generalized regular variation of second order in @dHS96, it is also possible to construct an extension of the GPD with comparable merits applicable to distributions in all max-domains of attraction. However, parameter estimation in this more general setting is numerically quite involved [@BJS02]: the model contains one additional parameter and the upper endpoint of the distribution depends in a complicated way on the parameters, which complicates both theory and computations.
Bias-reduction methods have already been proposed in, amongst others, @FeuervergerHall99, @GMN00, @BDGM99, @DBGS02, @GM02, and @GM04. These methods focus on the distribution of log-spacings of high order statistics. Moreover, *ad hoc* construction methods for asymptotically unbiased estimators of the extreme value index were introduced in @Peng98, @Drees96 and @Segers05. In contrast, next to providing bias-reduced tail index estimators, our model can be fitted directly to the excesses over a high threshold. The fitted model can then be used to estimate any tail-related risk measure, such as tail probabilities, tail quantiles (or value-at-risk), etc.
In the same spirit as in this paper, a mixture model with two Pareto components was proposed in @PengQi04. The advantage of our model is that it also incorporates the popular GPD. From our experience, this connection can assist in judging the quality of the GPD fit; see for instance the case study in Example \[Ex:secura\].
The paper is structured as follows. The next section provides the definition of the E(G)PD, which is shown to yield a more accurate approximation to the distribution of absolute and relative excesses for a wide class of heavy-tailed distributions. Estimators of the EPD parameters are derived in Section \[S:par\] using the linearized score equations, and their asymptotic normality is formally stated. In Section \[S:compar\], we compare the asymptotic distribution and the finite-sample behavior of the estimators of the extreme value index following from PD, GPD and EPD modelling. To illustrate how to apply the methodology to the estimation of general tail-related risk measures, we elaborate in Section \[S:prob\] on tail probability estimation with theoretical results and a practical case. The appendices, finally, contain the statement and proof of an auxiliary result on a certain tail empirical process followed by the proofs of the main theorems.
The Extended (Generalized) Pareto Distribution {#S:EGPD}
==============================================
\[D:EGPD\] The *Extended Pareto Distribution (EPD)* with parameter vector $(\gamma, \delta, \tau)$ in the range $\tau < 0 < \gamma$ and $\delta > \max(-1, 1/\tau)$ is defined by its distribution function $$G_{\gamma,\delta,\tau}(y) =
\begin{cases}
1 - \{y(1 + \delta - \delta y^\tau)\}^{-1/\gamma}, & \text{if $y > 1$}, \\
0, & \text{if $y {\leqslant}1$.}
\end{cases}$$ The *Extended Generalized Pareto Distribution (EGPD)* is defined by its distribution function $$H_{\gamma,\delta,\tau}(x) = G_{\gamma,\delta,\tau}(1+x), \qquad x \in {\mathbb{R}}.$$
The ordinary Pareto Distribution (PD) with shape parameter $\alpha
> 0$ is a member of the EPD family: take $\gamma = 1/\alpha$ and $\delta = 0$ (arbitrary $\tau$). The Generalized Pareto Distribution (GPD) with positive shape parameter $\gamma > 0$ and scale parameter $\sigma > 0$ is a member of the EGPD family: take $\tau = -1$ and $\delta = \gamma / \sigma - 1$. Finally, the distribution of the random variable $Y$ is EPD($\gamma,\delta,\tau$) if and only if the distribution of $Y - 1$ is EGPD($\gamma,\delta,\tau$).
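The embedding of the GPD in the EGPD family (take $\tau = -1$ and $\delta = \gamma/\sigma - 1$) is easy to verify numerically; a minimal sketch, with arbitrary parameter values $\gamma = 0.5$ and $\sigma = 2$:

```python
def epd_sf(y, gamma, delta, tau):
    # survival function of the EPD, defined for y >= 1
    return (y*(1.0 + delta - delta*y**tau))**(-1.0/gamma)

def gpd_sf(x, gamma, sigma):
    # survival function of the GPD with positive shape, x >= 0
    return (1.0 + gamma*x/sigma)**(-1.0/gamma)

gamma, sigma = 0.5, 2.0
delta = gamma/sigma - 1.0              # the tau = -1 embedding of the GPD
err = max(abs(epd_sf(1.0 + 0.1*i, gamma, delta, -1.0)
              - gpd_sf(0.1*i, gamma, sigma)) for i in range(1, 200))
# err stays at floating-point round-off level
```

Algebraically, $(1+x)\{1+\delta-\delta(1+x)^{-1}\} = 1 + (1+\delta)x = 1 + \gamma x/\sigma$, so the two survival functions coincide exactly.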
We will use the E(G)PD to model tails of heavy-tailed distributions that satisfy a certain second-order condition, to be described next. For a distribution function $F$, write ${\overline{F}}= 1-F$. Recall that a positive, measurable function $f$ defined in some right neighborhood of infinity is *regularly varying* with index $\beta \in {\mathbb{R}}$ if $\lim_{u \to \infty} f(ux)/f(u) = x^\beta$ for all $x \in (0,
\infty)$; notation $f \in {{\cal R}}_\beta$. The following definition describes a subset of the class of distribution functions $F$ for which ${\overline{F}}\in
{{\cal R}}_{-1/\gamma}$, $\gamma > 0$. Note that the latter is precisely the class of distributions in the max-domain of attraction of the Fréchet distribution with shape parameter $1/\gamma$.
\[C:2nd\] Let $\gamma > 0$ and $\tau < 0$ be constants. A distribution function $F$ is said to belong to the class ${\cal F}(\gamma, \tau)$ if $x^{1/\gamma} {\overline{F}}(x) \to C \in (0, \infty)$ as $x \to \infty$ and if the function $\delta$ defined via $$\label{E:delta}
{\overline{F}}(x) = C x^{-1/\gamma} \{ 1 + \gamma^{-1} \delta(x) \}$$ is eventually nonzero and of constant sign and such that $|\delta| \in {{\cal R}}_\tau$.
Note that $|\delta| \in {{\cal R}}_\tau$ with $\tau < 0$ implies $\delta(x)
\to 0$ as $x \to \infty$. In many examples, the function $\delta$ in Definition \[C:2nd\] is actually of the form $\delta(x) \sim D
x^\tau$ as $x \to \infty$ for some nonzero constant $D$, a class of distributions which was first considered in @Hall82. See Table \[tab:parameters\] for examples; for later use, we also list $\rho = \gamma \tau$ (see Lemma \[L:rho\] below).
  distribution                                                   distribution function                                                 $\gamma$     $\tau$           $\rho = \gamma \tau$
  -------------------------------------------------------------- --------------------------------------------------------------------- ------------ ---------------- ----------------------
  Burr($\gamma,\rho,\beta$), $\gamma>0$, $\rho<0$, $\beta>0$     $1 - (1+x^{-\rho/\gamma}/\beta)^{1/\rho}$                             $\gamma$     $\rho/\gamma$    $\rho$
  Fréchet($\alpha$), $\alpha>0$                                  $\exp(-x^{-\alpha})$                                                  $1/\alpha$   $-\alpha$        $-1$
  GPD($\gamma,\sigma$), $\gamma>0$, $\sigma>0$                   $1 - (1 + \gamma x/\sigma)^{-1/\gamma}$                               $\gamma$     $-1$             $-\gamma$
  Student-*t*$_{\nu}$, $\nu>0$                                   $C(\nu) \int_{-\infty}^x (1+y^2/\nu)^{-(\nu+1)/2} {\, \mathrm{d}}y$   $1/\nu$      $-2$             $-2/\nu$

  : Examples of distributions in the class ${\cal F}(\gamma, \tau)$ of Definition \[C:2nd\], with their first-order parameter $\gamma$ and second-order parameters $\tau$ and $\rho = \gamma \tau$.[]{data-label="tab:parameters"}
Let $X$ be a random variable with distribution function $F$ and let $u > 0$ be such that $F(u) < 1$. The conditional distributions of relative and absolute excesses of $X$ over $u$ are given by $$\Pr(X/u > y \mid X > u) = \frac{{\overline{F}}(uy)}{{\overline{F}}(u)} \quad \text{and} \quad
\Pr(X-u > x \mid X > u) = \frac{{\overline{F}}(u+x)}{{\overline{F}}(u)}$$ for $x {\geqslant}0$ and $y {\geqslant}1$. The next proposition shows that for $F \in {\cal F}(\gamma, \tau)$, the EPD and the EGPD improve the PD and GPD approximations to these excess distributions with an order of magnitude.
\[P:EGPD\] If $F \in {\cal F}(\gamma, \tau)$, then as $u \to \infty$, $$\begin{aligned}
\label{E:P:EPD}
\sup_{y {\geqslant}1}
\biggl| \frac{{\overline{F}}(uy)}{{\overline{F}}(u)} - {\overline{G}}_{\gamma,\delta(u),\tau}(y) \biggr| &= o\{|\delta(u)|\}, \\
\label{E:P:EGPD}
\sup_{x {\geqslant}0}
\biggl| \frac{{\overline{F}}(u+x)}{{\overline{F}}(u)} - {\overline{H}}_{\gamma,\delta(u),\tau}(x/u) \biggr| &= o\{|\delta(u)|\}.\end{aligned}$$
Equation (\[E:P:EGPD\]) follows directly from (\[E:P:EPD\]) by writing $u + x = uy$ or $y = 1 + x/u$ and exploiting the link between the EPD and the EGPD. So let us show (\[E:P:EPD\]). On the one hand, we have $$\frac{{\overline{F}}(uy)}{{\overline{F}}(u)}
= y^{-1/\gamma} \frac{1 + \gamma^{-1} \delta(uy)}{1 + \gamma^{-1} \delta(u)}
= y^{-1/\gamma} \left( 1 - \gamma^{-1} \delta(u) \frac{1 - \frac{\delta(uy)}{\delta(u)}}{1 + \gamma^{-1} \delta(u)} \right).$$ On the other hand, since $0 {\leqslant}1 - y^\tau {\leqslant}1$ for $y {\geqslant}1$ and since $\delta(u) \to 0$, $$\begin{gathered}
[y \{ 1 + \delta(u) - \delta(u) y^\tau \}]^{-1/\gamma} \\
= y^{-1/\gamma} \{1 - \gamma^{-1} \delta(u) (1 - y^\tau)\} + o\{|\delta(u)|\}, \qquad u \to \infty,\end{gathered}$$ uniformly in $y {\geqslant}1$. As a consequence, $$\begin{gathered}
\frac{{\overline{F}}(uy)}{{\overline{F}}(u)}
- [y \{ 1 + \delta(u) - \delta(u) y^\tau \}]^{-1/\gamma} \\
= - \gamma^{-1} y^{-1/\gamma} \delta(u)
\left( \frac{1 - \frac{\delta(uy)}{\delta(u)}}{1 + \gamma^{-1} \delta(u)} - (1 - y^\tau) \right)
+ o\{|\delta(u)|\}, \qquad u \to \infty,\end{gathered}$$ uniformly in $y {\geqslant}1$. The asymptotic relation (\[E:P:EPD\]) now follows from the uniform convergence theorem for regularly varying functions with negative index [@BGT Theorem 1.5.2].
If in (\[E:P:EPD\]) we were to replace the EPD tail function ${\overline{G}}_{\gamma,\delta(u),\tau}(y)$ by the PD tail function $y^{-1/\gamma}$, the rate of convergence would be $O\{|\delta(u)|\}$ only. Similarly, if in (\[E:P:EGPD\]) we were to replace the EGPD tail function ${\overline{H}}_{\gamma,\delta(u),\tau}(x/u)$ by the GPD tail function $(1 + \gamma x / \sigma)^{-1/\gamma}$ for some $\sigma = \sigma(u)$, then, provided $\tau \neq -1$, the rate of convergence would again be $O\{|\delta(u)|\}$ only. If $\tau = -1$, the EGPD is just a reparametrization of the GPD, so that in that case, the GPD approximation is already of the order $o\{|\delta(u)|\}$.
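The faster convergence of the EPD approximation can be observed numerically. The following sketch uses the Burr distribution with $\gamma = 1/2$, $\rho = -1$, $\beta = 1$, for which ${\overline{F}}(x) = 1/(1+x^2)$, $C = 1$, $\tau = -2$ and $\delta(u) = -\tfrac{1}{2}(1+u^2)^{-1}$ follow by direct computation; the threshold values are arbitrary:

```python
def burr_sf(x):
    # Burr tail with gamma = 1/2, rho = -1, beta = 1: survival 1/(1 + x^2)
    return 1.0/(1.0 + x*x)

def epd_sf(y, gamma, delta, tau):
    # EPD survival function, y >= 1
    return (y*(1.0 + delta - delta*y**tau))**(-1.0/gamma)

gamma, tau = 0.5, -2.0                    # tau = rho/gamma with rho = -1
ys = [1.0 + 0.05*i for i in range(200)]
pd_errs, epd_errs, deltas = [], [], []
for u in (5.0, 50.0, 500.0):
    delta_u = -0.5/(1.0 + u*u)            # exact second-order function delta(u)
    ratio = [burr_sf(u*y)/burr_sf(u) for y in ys]
    pd_errs.append(max(abs(r - y**(-1.0/gamma)) for r, y in zip(ratio, ys)))
    epd_errs.append(max(abs(r - epd_sf(y, gamma, delta_u, tau))
                        for r, y in zip(ratio, ys)))
    deltas.append(abs(delta_u))
# pd_errs[i]/deltas[i] stays of order one, epd_errs[i]/deltas[i] tends to zero
```

The Pareto error divided by $|\delta(u)|$ settles near a positive constant, while the EPD error divided by $|\delta(u)|$ keeps shrinking as the threshold grows, in line with the $O$ versus $o$ rates stated above.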
It will be useful to rephrase our second-order assumption on $F$ in terms of the tail quantile function $U$ defined by $$\label{E:U}
U(y) = Q(1 - 1/y) \quad \text{ with } \quad Q(p) = \inf \{ x \in \mathbb{R} : F(x) {\geqslant}p \},$$ where $y \in (1, \infty)$ and $p \in (0, 1)$. Note that $U$ is a (generalized) inverse of $1/{\overline{F}}$.
\[L:rho\] If $F \in {\cal F}(\gamma, \tau)$ with $\lim_{x \to \infty} x^{1/\gamma} {\overline{F}}(x) = C \in (0, \infty)$, then $\lim_{y \to \infty} y^{-\gamma} U(y) = C^\gamma$, and the function $a$ defined implicitly by $$\label{E:a}
U(y) = C^\gamma y^\gamma \{1 + a(y)\}$$ satisfies $a(y) = \delta(U(y)) \{1 + o(1)\} = \delta(C^\gamma y^\gamma) \{1 + o(1)\}$ as $y \to \infty$, with $\delta$ as in .
In particular, $a$ is eventually nonzero and of constant sign and $|a| \in {{\cal R}}_\rho$ with $\rho = \gamma \tau < 0$. In addition, even if $F$ is not continuous, then still $y {\overline{F}}(U(y)) = 1 + o\{|a(y)|\}$ as $y \to \infty$.
Parameter Estimation {#S:par}
====================
Our aim is to make inference on the distribution function $F$ on the region to the right of some high, positive threshold $u$. To this end, we assume $F \in {\cal F}(\gamma, \tau)$ and rewrite (\[E:P:EPD\]) as follows: as $u \to \infty$ and uniformly in $y {\geqslant}1$, $$\label{E:approx}
{\overline{F}}(uy) = {\overline{F}}(u) {\overline{G}}_{\gamma,\delta(u),\tau}(y) + o\{ {\overline{F}}(u) |\delta(u)| \}.$$ Omitting the remainder term leads to an approximation of ${\overline{F}}(x)$ for $x {\geqslant}u$ in terms of ${\overline{F}}(u)$ and the EPD parameters $(\gamma,
\delta(u), \tau)$. Replacing these unknown quantities by estimates then yields our estimate for ${\overline{F}}(x)$.
The purpose of this section is to construct estimators of the E(G)PD parameters $(\gamma, \delta(u), \tau)$. As usual in extreme value statistics, the threshold exceedance probability ${\overline{F}}(u)$ will be estimated nonparametrically. Although the arguments leading to the estimators will be of a heuristic nature only, the asymptotic behaviour of the estimators will be stated and proved rigorously.
Let $X_1, \ldots, X_n$ be a random sample from $F$. In view of (\[E:approx\]), the estimates of the EPD parameters will be based on the relative excesses $X_i / u$ over $u$, for those $i \in \{1, \ldots, n\}$ such that $X_i > u$. In an extreme value asymptotic setting, the threshold $u$ needs to tend to infinity to make the approximation valid; at the same time, in a statistical context, the number of excesses over $u$ must be sufficiently large to make inference feasible. Denoting the order statistics by $X_{1:n} {\leqslant}\cdots {\leqslant}X_{n:n}$, we can ensure that both criteria are met by choosing a data-adaptive threshold $u = u_n = X_{n-k:n}$ where $k = k_n \in \{1, \ldots, n-1\}$ is an intermediate sequence of integers, that is, $k \to \infty$ and $k/n \to 0$ as $n \to \infty$. For convenience, assume $F(0) = 0$, so that all $X_i$ are positive with probability one.
Recall the tail quantile function $U$ in (\[E:U\]) and the auxiliary function $a$ in Lemma \[L:rho\]. In addition to $k$ being an intermediate sequence, we will assume that $$\label{E:k}
\sqrt{k} a(n/k) \to \lambda \in \mathbb{R}, \qquad n \to \infty.$$ Writing $\delta_n = \delta(u_n) = \delta(X_{n-k:n})$, we will show later that (\[E:k\]) implies $$\label{E:kdelta}
\sqrt{k} \delta_n = \lambda + o_p(1), \qquad n \to \infty.$$ Since in the definition of the EPD the term $x^{\tau}$ is multiplied by $\delta$, the previous display implies that the asymptotic distribution of tail estimators based on (\[E:approx\]) will not depend on the asymptotic distribution of the estimator of $\tau$, not even on its rate of convergence. Therefore, we will assume for the moment that $\tau$ (or $\rho$) is known. In the end, the unknown second-order parameters will be replaced by consistent estimators, a substitution which will be shown not to affect the asymptotic distributions of the other estimators. Note that under the regime $\sqrt{k} |a(n/k)| \to \infty$ as $n \to \infty$, which will not be considered in this paper, the asymptotic distribution of the estimator of the second-order parameter does play a role.
The estimators of $\gamma$ and $\delta_n$ will be found by maximizing an approximation to the EPD likelihood given the sample of $k$ relative excesses $X_{n-k+i:n}/X_{n-k:n}$, $i \in \{1, \ldots, k\}$, over the random threshold $X_{n-k:n}$. The density function of the EPD is given by $$g_{\gamma,\delta,\tau}(x)
= \frac{1}{\gamma} x^{-1/\gamma-1} \{1 + \delta(1-x^\tau)\}^{-1/\gamma-1} [1 + \delta\{1 - (1+\tau)x^\tau\}].$$ The score functions admit the following expansions in $\delta \to 0$: $$\begin{aligned}
\frac{\partial}{\partial \gamma} \log g_{\gamma,\delta,\tau}(x)
&= - \frac{1}{\gamma} + \frac{1}{\gamma^2} \log x + \frac{\delta}{\gamma^2} (1 - x^\tau) + O(\delta^2), \\
\frac{\partial}{\partial \delta} \log g_{\gamma,\delta,\tau}(x)
&= \frac{1}{\gamma} \{ (1 - \gamma \tau) x^\tau - 1 \} \\
& \qquad \mbox{} + \{ 1 - 2(1 - \gamma \tau)x^\tau
+ (1 - 2 \gamma \tau - \gamma \tau^2) x^{2\tau} \} \frac{\delta}{\gamma} + O(\delta^2).\end{aligned}$$ Define $$\begin{aligned}
\label{E:Hill}
H_{k,n} &= \frac{1}{k} \sum_{i=1}^k \log (X_{n-k+i:n} / X_{n-k:n}), \\
\label{E:Ekn}
E_{k,n}(s) &= \frac{1}{k} \sum_{i=1}^k (X_{n-k+i:n} / X_{n-k:n})^s, \qquad s {\leqslant}0.\end{aligned}$$ Note that $H_{k,n}$ is the Hill estimator [@Hill75]. Assume for the moment that $\tau$ is known. Given the sample of excesses $X_{n-k+i:n}/X_{n-k:n}$, $i = 1, \ldots, k$, solving the linearized score equations yields the following equations for the pseudo-maximum likelihood estimators for $\gamma$ and $\delta$: $$\begin{aligned}
\label{E:score:1}
\hat{\gamma}_{k,n}
&= H_{k,n} + \hat{\delta}_{k,n} \{ 1 - E_{k,n}(\tau) \}, \\
\label{E:score:2}
(\hat{\gamma}_{k,n} \tau - 1) E_{k,n}(\tau) + 1
&=
\{1 - 2(1 - \hat{\gamma}_{k,n} \tau)E_{k,n}(\tau) \nonumber \\
& \qquad \mbox{}
+ (1 - 2 \hat{\gamma}_{k,n} \tau - \hat{\gamma}_{k,n} \tau^2) E_{k,n}(2\tau)\} \hat{\delta}_{k,n}.\end{aligned}$$ Substitute the expression for $\hat{\gamma}_{k,n}$ in into the left-hand side of and solve for $\hat{\delta}_{k,n}$ to get $$\hat{\delta}_{k,n}
= \frac{(H_{k,n} \tau - 1) E_{k,n}(\tau) + 1}{D_{k,n}}
= \frac{H_{k,n} \tau - 1}{D_{k,n}} \left( E_{k,n}(\tau) - \frac{1}{1 - H_{k,n} \tau} \right),$$ the denominator being $$\begin{gathered}
D_{k,n} = 1 - 2(1 - \hat{\gamma}_{k,n} \tau)E_{k,n}(\tau)
+ (1 - 2 \hat{\gamma}_{k,n} \tau - \hat{\gamma}_{k,n} \tau^2) E_{k,n}(2\tau) \\
- \tau \{ 1 - E_{k,n}(\tau) \} E_{k,n}(\tau).\end{gathered}$$
By (\[E:kdelta\]), $\hat{\delta}_{k,n}$ can be expected to be of the order $O_p(k^{-1/2})$ as $n \to \infty$. This justifies the following simplifications. Since the distribution of relative excesses over a large threshold is approximately Pareto with shape parameter $1/\gamma$, for $s {\leqslant}0$, $$E_{k,n}(s) = \frac{1}{1 - \gamma s} + o_p(1),
\qquad n \to \infty;$$ see Theorem \[T:simpler\]. Hence, writing $\rho = \gamma \tau$, we have $E_{k,n}(\tau) = (1 - \rho)^{-1} + o_p(1)$ and $E_{k,n}(2\tau) = (1 - 2\rho)^{-1} + o_p(1)$ as $n \to \infty$, so that $$D_{k,n} = - \frac{\rho^4}{\gamma (1 - 2 \rho) (1 - \rho)^2} + o_p(1), \qquad n \to \infty.$$ This leads to the following simplified estimators: $$\begin{aligned}
\hat{\delta}_{k,n}
&= H_{k,n} (1 - 2 \rho) (1 - \rho)^3 \rho^{-4}
\left( E_{k,n}(\tau) - \frac{1}{1 - H_{k,n} \tau} \right), \\
\hat{\gamma}_{k,n}
&= H_{k,n} - \hat{\delta}_{k,n} \frac{\rho}{1 - \rho}.\end{aligned}$$
Up to now we have assumed that $\rho$ is known. Let $\hat{\rho}_n$ be a weakly consistent estimator sequence of $\rho = \gamma \tau$; see for instance @FAdHL03, @FAGdH03, and @PengQi04. Replace $\tau$, which is unknown, by $\hat{\tau}_{k,n} = \hat{\rho}_n / H_{k,n}$, to finally get $$\begin{aligned}
\label{E:deltakn}
\hat{\delta}_{k,n} &=
H_{k,n} (1 - 2 \hat{\rho}_n) (1 - \hat{\rho}_n)^3 \hat{\rho}_n^{-4}
\left( E_{k,n}(\hat{\rho}_n / H_{k,n}) - \frac{1}{1 - \hat{\rho}_n} \right), \\
\label{E:gammakn}
\hat{\gamma}_{k,n} &=
H_{k,n} - \hat{\delta}_{k,n} \frac{\hat{\rho}_n}{1 - \hat{\rho}_n}.\end{aligned}$$ Further, put $$\label{E:Zkn}
Z_{k,n} = \sqrt{k} \{ n {\overline{F}}(X_{n-k:n}) / k - 1 \}.$$ The joint asymptotics of $Z_{k,n}$ with $(\hat{\gamma}_{k,n}, \hat{\delta}_{k,n})$ will become relevant in Section \[S:prob\] when estimating tail probabilities on the basis of with $u = X_{n-k:n}$. Let the arrow ${\rightsquigarrow}$ denote convergence in distribution.
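For concreteness, the estimators $\hat{\delta}_{k,n}$ and $\hat{\gamma}_{k,n}$ above can be sketched in a few lines of Python (illustrative only, not part of the derivation); the second-order estimate $\hat{\rho}_n$ is supplied externally, and the sketch is run on a strict Pareto sample, for which $\gamma = 1$ and $\delta = 0$.

```python
import math
import random

def epd_estimators(sample, k, rho_hat):
    """EPD estimates (gamma_hat, delta_hat) from the k largest order
    statistics, given an external (consistent) estimate rho_hat < 0."""
    xs = sorted(sample)
    threshold = xs[-k - 1]                        # X_{n-k:n}
    ratios = [x / threshold for x in xs[-k:]]     # X_{n-k+i:n} / X_{n-k:n}
    hill = sum(math.log(r) for r in ratios) / k   # H_{k,n}
    tau_hat = rho_hat / hill                      # tau_hat = rho_hat / H_{k,n}
    e_tau = sum(r ** tau_hat for r in ratios) / k # E_{k,n}(tau_hat)
    delta_hat = (hill * (1 - 2 * rho_hat) * (1 - rho_hat) ** 3
                 / rho_hat ** 4 * (e_tau - 1 / (1 - rho_hat)))
    gamma_hat = hill - delta_hat * rho_hat / (1 - rho_hat)
    return gamma_hat, delta_hat

# Strict Pareto(1) data (gamma = 1, delta = 0): the estimates should be
# close to (1, 0) for moderate k; the seed and tuning values are arbitrary.
random.seed(7)
data = [1 / random.random() for _ in range(2000)]
g_hat, d_hat = epd_estimators(data, k=200, rho_hat=-1.0)
```

In practice $\hat{\rho}_n$ would come from one of the second-order estimators cited above.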
\[T:estim\] Let $F \in {\cal F}(\gamma, \tau)$ and let $X_1, \ldots, X_n$ be independent random variables with common distribution function $F$. Let $k = k_n$ be an intermediate sequence satisfying . Recall $\delta_n = \delta(X_{n-k:n})$ and $Z_{k,n}$ in . If $\hat{\rho}_n = \rho + o_p(1)$ as $n \to \infty$, with $\rho = \gamma \tau$, then $\sqrt{k} \delta_n = \lambda + o_p(1)$ as $n \to \infty$ and $$\label{E:estim}
\Bigl( \sqrt{k} (\hat{\gamma}_{k,n} - \gamma), \sqrt{k} (\hat{\delta}_{k,n} - \delta_n), Z_{k,n} \Bigr)
{\rightsquigarrow}N_3(\boldsymbol{0}, \Sigma), \qquad n \to \infty,$$ a trivariate normal distribution with mean vector zero and covariance matrix $$\label{E:Sigma}
\Sigma =
\left(
\begin{array}{llc}
\phantom{-}\gamma^2 \frac{(1-\rho)^2}{\rho^2} & -\gamma^2 \frac{(1-2\rho)(1-\rho)}{\rho^3} & 0 \\[1ex]
-\gamma^2 \frac{(1-2\rho)(1-\rho)}{\rho^3} & \phantom{-}\gamma^2 \frac{(1-2\rho)(1-\rho)^2}{\rho^4} & 0 \\[1ex]
\phantom{-}0 & \phantom{-}0 & 1
\end{array}
\right).$$
An asymptotic confidence interval for $\gamma$ of nominal level $1 - \alpha$ is given by $$\label{E:CI:gamma}
\biggl[
\hat{\gamma}_{k,n} \biggl( 1 + \frac{1 - \hat{\rho}_n}{\hat{\rho}_n} \frac{z_{\alpha/2}}{\sqrt{k}} \biggr), \;
\hat{\gamma}_{k,n} \biggl( 1 - \frac{1 - \hat{\rho}_n}{\hat{\rho}_n} \frac{z_{\alpha/2}}{\sqrt{k}} \biggr)
\biggr],$$ with $z_{\alpha/2}$ the $1 - \alpha/2$ quantile of the standard normal distribution.
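The interval translates directly into code; the following sketch (illustrative values, standard library only) exploits that $(1 - \hat{\rho}_n)/\hat{\rho}_n < 0$, so the endpoint with the "$+$" sign is the lower one.

```python
import math
from statistics import NormalDist

def gamma_ci(gamma_hat, rho_hat, k, alpha=0.10):
    """Asymptotic (1 - alpha) confidence interval for gamma based on
    the EPD estimator; rho_hat < 0 is assumed."""
    z = NormalDist().inv_cdf(1 - alpha / 2)            # z_{alpha/2}
    half = (1 - rho_hat) / rho_hat * z / math.sqrt(k)  # negative
    return gamma_hat * (1 + half), gamma_hat * (1 - half)

lo, hi = gamma_ci(0.5, -0.5, k=100)   # illustrative values
```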
The proof of Theorem \[T:estim\] is given in Appendix \[A:estim\]. It is based on a functional central limit theorem for a certain tail empirical process, stated and proved in Appendix \[A:simpler\]. Note that the asymptotic distribution of $\hat{\rho}_n$ is unimportant; the only requirement is that the estimator is consistent for $\rho$.
The fact that the limit distribution in is centered for any $\lambda$ is important for two reasons:
- It makes it possible to use larger $k$, and thus lower thresholds, than when the mean is proportional to $\lambda$. In this way, the model can be fitted to a larger fraction of the data, reducing the asymptotic variances and hence the asymptotic mean squared errors of the parameter estimates.
- Sample paths of the estimates as a function of $k$ will exhibit larger regions of stability around the true value. As a consequence, the choice of $k$ becomes easier.
These issues will be illustrated in the simulations in Section \[S:compar\] and in the case study in Example \[Ex:secura\].
Comparison of Extreme Value Index Estimators {#S:compar}
============================================
Under the conditions of Theorem \[T:estim\], we have $$\label{E:AN:EPD}
\sqrt{k} (\hat{\gamma}_{k,n} - \gamma)
{\rightsquigarrow}N \left( 0, \gamma^2 \frac{(1-\rho)^2}{\rho^2} \right),
\qquad n \to \infty.$$ According to @Drees98, this asymptotic variance is the minimal one attainable by scale-invariant, asymptotically unbiased estimators of $\gamma$ of a certain form. The limit distribution in coincides with that of the estimators in @BDGM99, @FeuervergerHall99 and @GM02.
The maximum likelihood estimator for $\gamma$ arises from fitting the GPD to the excesses $X_{n-k+i:n} - X_{n-k:n}$, $i = 1, \ldots,
k$. Its asymptotics have been studied in @Smith87, @DFdH04 and @dHF [Theorem 3.4.2]. From the latter theorem, it follows that under the conditions of our Theorem \[T:estim\], we have $$\label{E:AN:MLE}
\sqrt{k} \left( \hat{\gamma}_{k,n}^{\text{GPD}} - \gamma \right)
{\rightsquigarrow}N \left( \lambda b(\gamma, \rho), (1 + \gamma)^2 \right),
\qquad n \to \infty,$$ where $$b(\gamma, \rho) = \frac{\rho (1 + \gamma) (\gamma + \rho)}{\gamma (1 - \rho) (1 + \gamma - \rho)}.$$ Comparing and , we see that if $\tau = -1$ and thus $\rho = - \gamma$, the asymptotic distributions of $\hat{\gamma}_{k,n}$ and $\hat{\gamma}_{k,n}^{\text{GPD}}$ coincide. This is consistent with the fact that the EGPD with $\tau = -1$ is a reparametrization of the GPD and the fact that the EPD estimators were obtained by solving the linearized score equations.
Finally, under the conditions of Theorem \[T:estim\], the asymptotic distribution of the Hill estimator is $$\label{E:AN:Hill}
\sqrt{k} (H_{k,n} - \gamma)
{\rightsquigarrow}N \left( \lambda \frac{\rho}{1-\rho}, \gamma^2 \right),
\qquad n \to \infty;$$ see for instance Theorem \[T:simpler\] below. Of the three estimators considered, the Hill estimator has the smallest asymptotic variance. Unless $\lambda = 0$, however, its asymptotic bias is nonzero. The asymptotic distribution of the Hill estimator and its optimal variance property are of course well known; see for instance @Reiss89 [Section 9.4], @Drees98 and @BBW06.
To illustrate the behavior of the three estimators, we generated samples from four different distributions. For each distribution, we generated $10,000$ samples of size $n = 1,000$ and computed the three extreme value index estimators for $k$ up to $500$. For the EPD estimator, we estimated the second-order parameter $\rho$ using the estimator in @FAGdH03. For each distribution and each estimator, we computed Monte Carlo estimates of the bias, variance and mean squared error by averaging over the $10,000$ samples.
Comparing the asymptotic results to the graphs in Figures \[F:gamma:1\]–\[F:gamma:2\], we learn the following:

- *First distribution* ($\alpha = 1$). We have $\gamma = 1/\alpha = 1$, $\tau = -\alpha = -1$, and $\rho = \gamma \tau = -1$. From and , it follows that the asymptotic distributions of the EPD and the GPD estimators coincide, with zero asymptotic bias and an asymptotic variance of $4/k$. The Hill estimator has an asymptotic variance of $1/k$ only, but its asymptotic bias is nonzero.

- *Second distribution* ($\nu = 4$). We have $\gamma = 1/\nu = 1/4$, $\tau = -2$, and $\rho = \gamma \tau = -1/2$. The asymptotic variances of the three estimators are $\sigma^2 / k$ with $\sigma^2 = \gamma^2 = 1/16$ for the Hill estimator, $\sigma^2 = \gamma^2 (1 - \rho)^2 / \rho^2 = 9/16$ for the EPD estimator, and $\sigma^2 = (1 + \gamma)^2 = 25/16$ for the GPD estimator. Of the three estimators, the EPD estimator is the only one which is asymptotically unbiased.

- *Third distribution*, defined by ${\overline{F}}(x) = (1+c)^{-1} x^{-\alpha} (1 + c x^{-\alpha})$, $x {\geqslant}1$, with shape parameter $\alpha = 2$ and mixing parameter $c = 2$. We have $\gamma = 1/\alpha = 1/2$, $\tau = -\alpha = -2$, and $\rho = \gamma \tau = -1$. The weight of the second-order component is equal to $c = 2$ times the weight of the first-order component, inducing a severe bias to the Hill and GPD estimators; the EPD estimator is much less affected by this. The asymptotic variances of the three estimators are $\sigma^2 / k$ with $\sigma^2 = \gamma^2 = 1/4$ for the Hill estimator, $\sigma^2 = \gamma^2 (1 - \rho)^2 / \rho^2 = 1$ for the EPD estimator, and $\sigma^2 = (1 + \gamma)^2 = 9/4$ for the GPD estimator.

- *Fourth distribution*, with shape parameter $\alpha = 4$ and scale parameter $\beta = 2$. Although this distribution has positive extreme-value index $\gamma = 1/\beta$, it is not in any of the classes ${\cal F}(\gamma, \tau)$, since ${\overline{F}}(x) \sim \text{constant} \times x^{-1/\beta} (\log x)^{\alpha - 1}$. Nevertheless, the EPD estimator performs reasonably well when compared to the Hill and GPD estimators.
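As a quick numerical cross-check of the asymptotic variances quoted above (a sketch; the common factor $1/k$ is omitted):

```python
def asy_variances(gamma, rho):
    """k times the asymptotic variances of the Hill, EPD and GPD (ML)
    estimators of gamma, as quoted in the text."""
    hill = gamma ** 2
    epd = gamma ** 2 * (1 - rho) ** 2 / rho ** 2
    gpd = (1 + gamma) ** 2
    return hill, epd, gpd

# Example with gamma = 1/4, rho = -1/2, as in the second distribution above
h, e, g = asy_variances(0.25, -0.5)   # (1/16, 9/16, 25/16)
```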
Tail Probability Estimation {#S:prob}
===========================
Let us return to the tail estimation problem raised in the beginning of Section \[S:par\]. Given the order statistics $X_{1:n} {\leqslant}\cdots {\leqslant}X_{n:n}$ of an independent sample from an unknown distribution function $F \in {\cal F}(\gamma, \tau)$, we want to estimate the tail probability $p_n = {\overline{F}}(x_n)$, where $x_n \to
\infty$ and thus $p_n \to 0$ as $n \to \infty$. As before, let $k =
k_n \in \{1, \ldots, n-1\}$ be an intermediate integer sequence, that is, $k \to \infty$ and $k/n \to 0$. Assume that $p_n =
{\overline{F}}(x_n)$ satisfies $$\label{E:pn}
n p_n / k \to q \in [0, 1), \qquad n \to \infty.$$ Let $\hat{\gamma}_n$, $\hat{\delta}_n$, and $\hat{\tau}_n$ denote general estimator sequences and put $\delta_n = \delta(X_{n-k:n})$ as well as $$\label{E:GammaDelta}
\Gamma_{k,n} = \sqrt{k} (\hat{\gamma}_n - \gamma)
\qquad \text{and} \qquad
\Delta_{k,n} = \sqrt{k} (\hat{\delta}_n - \delta_n).$$ Recall $Z_{k,n}$ in and assume that $$\label{E:joint}
\hat{\tau}_n = \tau + o_p(1)
\quad \text{and} \quad
(\Gamma_{k,n}, \Delta_{k,n}, Z_{k,n}) {\rightsquigarrow}(\Gamma, \Delta, Z), \qquad n \to \infty,$$ a trivariate random vector. A possible choice for the estimators of $\gamma$ and $\delta_n$ is the pair studied in Theorem \[T:estim\]. However, we will formulate our results so as to allow for general estimator sequences satisfying . For the estimator of $\tau$, one can for instance take $\hat{\tau}_n = \hat{\rho}_n /
\hat{\gamma}_n$, where $\hat{\rho}_n$ is an estimator of $\rho =
\gamma \tau$, see for instance @FAGdH03. As in Theorem \[T:estim\], the asymptotic distribution of $\hat{\tau}_{n}$ plays no role.
Omitting the remainder term in and replacing the unknown quantities ${\overline{F}}(u)$ and $(\gamma, \delta(u), \tau)$ at the random threshold $u = X_{n-k:n}$ by $k/n$ and $(\hat{\gamma}_n,
\hat{\delta}_n, \hat{\tau}_n)$, respectively, yields the estimator $$\hat{p}_{k,n} = \hat{{\overline{F}}}_n(x_n) = \frac{k}{n} {\overline{G}}_{\hat{\gamma}_n, \hat{\delta}_n, \hat{\tau}_n}(x_n / X_{n-k:n}).$$ In the same way, one can construct estimators for other tail quantities: return levels, expected shortfall, etc. For brevity, we focus here on tail probabilities.
In order to describe the asymptotics of $\hat{p}_{k,n}$, we need to make a distinction between the case $0 < q < 1$ in and $q = 0$. The proofs of the following two theorems are to be found in Appendix \[A:tail\]. Results for tail probability estimators based on the PD and GPD approximations can be found in @dHF [Section 4.4].
\[T:tail\] Let $F \in {\cal F}(\gamma, \tau)$, let $k_n$ be an intermediate sequence satisfying and let $p_n$ be such that holds for some $0 < q < 1$. If , then $$\label{E:tail}
\sqrt{k} \left( \frac{\hat{p}_{k,n}}{p_n} - 1 \right)
{\rightsquigarrow}- \gamma^{-1} \Gamma \log q - \gamma^{-1} \Delta (1 - q^{-\rho}) - Z, \qquad n \to \infty.$$
\[T:tail:extreme\] In Theorem \[T:tail\], if is replaced by $$np_n / k \to 0 \qquad \text{and} \qquad \log (np_n) / \sqrt{k} \to 0, \qquad n \to \infty,$$ then $$\frac{\sqrt{k}}{\log \{ k / (np_n) \}} \left( \frac{\hat{p}_{k,n}}{p_n} - 1 \right)
{\rightsquigarrow}\gamma^{-1} \Gamma, \qquad n \to \infty.$$
For the EPD estimators $\hat{\gamma}_n = \hat{\gamma}_{k,n}$ and $\hat{\delta}_n = \hat{\delta}_{k,n}$, Theorems \[T:estim\] and \[T:tail\] lead to $$\label{E:pn:EPD}
\sqrt{k} \left( \frac{\hat{p}_n}{p_n} - 1 \right)
{\rightsquigarrow}N \left(0, \sigma^2(q, \rho) \right), \qquad n \to \infty,$$ with asymptotic variance given by $$\begin{gathered}
\sigma^2(q, \rho)
= (\log q)^2 \frac{(1-\rho)^2}{\rho^2} + \left( \frac{1-q^{-\rho}}{\rho} \right)^2 \frac{(1-2\rho)(1-\rho)^2}{\rho^2} \\
- 2 \log (q) \frac{1 - q^{-\rho}}{\rho} \frac{(1-2\rho)(1-\rho)}{\rho^2} + 1.\end{gathered}$$ The importance of the fact that the limit distribution in has mean zero was already discussed after Theorem \[T:estim\]. An asymptotic confidence interval of nominal level $1 - \alpha$ is given by $$\label{E:CI:p}
\biggl[
\hat{p}_n \biggl( 1 - \sigma(\hat{q}_n, \hat{\rho}_n) \frac{z_{\alpha/2}}{\sqrt{k}} \biggr), \;
\hat{p}_n \biggl( 1 + \sigma(\hat{q}_n, \hat{\rho}_n) \frac{z_{\alpha/2}}{\sqrt{k}} \biggr)
\biggr]$$ where $\hat{q}_n = n \hat{p}_n / k_n$ and $z_{\alpha/2}$ is the $1-\alpha/2$ quantile of the standard normal distribution.
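The variance formula and the interval translate directly into code (a sketch with illustrative input values; note that $\sigma^2(q, \rho) \to 1$ as $q \to 1$, since both $\log q$ and $1 - q^{-\rho}$ vanish):

```python
import math
from statistics import NormalDist

def sigma2(q, rho):
    """Asymptotic variance sigma^2(q, rho) of sqrt(k)(p_hat/p - 1),
    transcribed from the display above."""
    lq = math.log(q)
    f = (1 - q ** (-rho)) / rho
    return (lq ** 2 * (1 - rho) ** 2 / rho ** 2
            + f ** 2 * (1 - 2 * rho) * (1 - rho) ** 2 / rho ** 2
            - 2 * lq * f * (1 - 2 * rho) * (1 - rho) / rho ** 2
            + 1.0)

def p_ci(p_hat, q_hat, rho_hat, k, alpha=0.10):
    """Asymptotic (1 - alpha) confidence interval for the tail probability."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = math.sqrt(sigma2(q_hat, rho_hat)) * z / math.sqrt(k)
    return p_hat * (1 - half), p_hat * (1 + half)

lo, hi = p_ci(0.0075, 0.3, -0.5, k=100)   # illustrative values
```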
If we simply define $\hat{\delta}_n = 0$, then $\Delta_{k,n} = - \sqrt{k} \delta_n$ in and thus $\Delta = -\lambda$ in . The tail probability estimator $\hat{p}_n$ then reduces to the Weissman estimator [@Weissman78] $$\label{E:Weissman}
\hat{p}_n^{\mathrm{W}} = \frac{k}{n} \left( \frac{x_n}{X_{n-k:n}} \right)^{-1/\hat{\gamma}_n}.$$ Theorem \[T:tail\] then implies $$\label{E:AN:Weissman}
\sqrt{k} \left( \frac{\hat{p}_n^{\mathrm{W}}}{p_n} - 1 \right)
{\rightsquigarrow}- \gamma^{-1} \Gamma \log q + \gamma^{-1} \lambda (1 - q^{-\gamma \tau}) - Z,
\qquad n \to \infty.$$ For instance, if we estimate $\gamma$ by the Hill estimator, then in view of Theorem \[T:simpler\], $$\begin{gathered}
\sqrt{k} \left( \frac{k}{np_n} \left( \frac{x_n}{X_{n-k:n}} \right)^{-1/H_{k,n}} - 1 \right) \\
{\rightsquigarrow}N \left( -\lambda \frac{\rho}{\gamma} \left( \frac{q^{-\rho}-1}{\rho} + \frac{\log q}{1 - \rho}\right), 1 + (\log q)^2 \right),
\qquad n \to \infty.\end{gathered}$$
Even if the extreme value index estimator $\hat{\gamma}_n$ is such that the asymptotic distribution of $\Gamma_n$ has mean zero, the asymptotic distribution of the Weissman estimator will still have a mean proportional to $\lambda$. In other words, unbiased tail estimation requires more than unbiased estimation of the extreme value index alone.
From Theorem \[T:tail:extreme\] and its proof, we learn that for estimation of tail probabilities $p_n$ of smaller order than $k/n$, the difference between the Pareto approximation and the EPD approximation does not matter asymptotically. Still, for $\hat{p}_n$ to be an asymptotically unbiased estimator of $p_n$, the estimator $\hat{\gamma}_n$ needs to be asymptotically unbiased for $\gamma$. For instance, if we use the EPD estimator $\hat{\gamma}_{k,n}$, then $$\frac{\sqrt{k}}{\log \{ k / (np_n) \}} \left( \frac{\hat{p}_n}{p_n} - 1 \right)
{\rightsquigarrow}N \left( 0, \frac{(1 - \rho)^2}{\rho^2} \right),
\qquad n \to \infty.$$
\[Ex:secura\] The Secura Belgian Re data in @BGST [Section 1.3.3] comprise 371 automobile claims not smaller than €1.2 million. The data span the period 1988–2001 and have been gathered from several European insurance companies. Figure \[F:secura\] shows the estimates of $\gamma$ (left) and of the probability that a claim exceeds €7 million (right). Nominal 90% confidence intervals for the EPD estimates are added as well, see and . In the data set, there were actually 3 exceedances over €7 million, yielding a nonparametric estimate of $3/371 = 0.81 \%$. In comparison to the Weissman (Hill) and POT (GPD) estimates, the trajectories of the EPD estimates are relatively stable, with $\hat{\gamma}$ around $0.3$ and $\hat{p}$ around $0.75 \%$. By way of comparison, in @BGST [Section 6.2.4] it is suggested to model the complete distribution by a mixture of two components, an exponential and a Pareto distribution, with the knot at about €2.6 million, which corresponds to the order statistic $X_{n-k:n}$ with $k = 95$. Although this knot is detected by the EPD estimator, it does not cause the tail parameter estimates to change dramatically.
Tail Empirical Processes {#A:simpler}
========================
Recall $H_{k,n}$, $E_{k,n}(s)$ and $Z_{k,n}$ from equations , and , respectively, and define $$\begin{aligned}
\label{E:Gammakn}
\Gamma_{k,n} &= \sqrt{k} (H_{k,n} - \gamma), \\
\label{E:EEkn}
\mathbb{E}_{k,n}(s) &= \sqrt{k} \left( E_{k,n}(s) - \frac{1}{1 - s \gamma} \right), \qquad s {\leqslant}0.\end{aligned}$$ Our proof of Theorem \[T:estim\] will be based on the fact that $(\Gamma_{k,n}, \mathbb{E}_{k,n}, Z_{k,n})$ converges weakly in the space ${\mathbb{R}}\times {\cal C}[s_0, 0] \times
{\mathbb{R}}$; here $s_0 < 0$ and ${\cal C}[a, b]$ is the Banach space of continuous functions $f : [a, b] \to {\mathbb{R}}$ equipped with the topology of uniform convergence. Of course, the asymptotic distribution of the normalized Hill estimator $\Gamma_{k,n}$ has been established in numerous other papers; in the following theorem, it is the joint convergence which is our main concern.
\[T:simpler\] Let $F \in {\cal F}(\gamma, \tau)$. If $k = k_n$ is an intermediate integer sequence satisfying , then for every $s_0 < 0$, in ${\mathbb{R}}\times {\cal C}[s_0, 0] \times {\mathbb{R}}$, $$(\Gamma_{k,n}, {\mathbb{E}}_{k,n}, Z_{k,n}) {\rightsquigarrow}(\Gamma, {\mathbb{E}}, Z), \qquad n \to \infty,$$ a Gaussian process with the following distribution: $Z$ is standard normal and is independent of $(\Gamma, {\mathbb{E}})$, and for $s, s_1, s_2 \in [s_0, 0]$, $$\begin{aligned}
\operatorname{E}[ {\mathbb{E}}(s) ] &= \lambda \frac{s \rho}{(1 - s \gamma - \rho)(1 - s \gamma)},
& \operatorname{E}[ \Gamma ] &= \lambda \frac{\rho}{1-\rho}, \\
\operatorname{cov}\{ {\mathbb{E}}(s_1), {\mathbb{E}}(s_2) \} &= \frac{s_1 s_2 \gamma^2}{(1 - s_1 \gamma - s_2 \gamma)(1 - s_1 \gamma)(1 - s_2 \gamma)},
& \operatorname{var}( \Gamma ) &= \gamma^2, \\
\operatorname{cov}\{ \Gamma, {\mathbb{E}}(s) \} &= \frac{s \gamma^2}{(1 - s \gamma)^2}.\end{aligned}$$
Let $Y_1, {\check{Y}}_1, Y_2, {\check{Y}}_2, \ldots$ be independent Pareto(1) random variables. For positive integer $k$, denote the order statistics of $Y_1, \ldots, Y_k$ by $Y_{1:k} < \cdots < Y_{k:k}$; also, let $Y_{0:k} = 1$. Similarly, denote the order statistics of ${\check{Y}}_1,
\ldots, {\check{Y}}_n$ by ${\check{Y}}_{1:n} < \cdots < {\check{Y}}_{n:n}$. Then the following three vectors are equal in distribution: $$\begin{aligned}
( X_{n-k+i:n} : i = 0, \ldots, k)
&{\stackrel{d}{=}}( U({\check{Y}}_{n-k+i:n}) : i = 0, \ldots, k ) \\
&{\stackrel{d}{=}}( U(Y_{i:k} {{\check{Y}}_{n-k:n}}) : i = 0, \ldots, k ).\end{aligned}$$ Since we are only interested in the asymptotic distribution of $(\Gamma_{k,n}, {\mathbb{E}}_{k,n}, Z_{k,n})$, we may without loss of generality assume that actually $$( X_{n-k+i:n} : i = 0, \ldots, k)
= ( U(Y_{i:k} {{\check{Y}}_{n-k:n}}) : i = 0, \ldots, k ).$$ The following property is well-known: if $k$ is an intermediate sequence, then $$\label{E:Ynk}
\sqrt{k} \{ (n/k) {{\check{Y}}_{n-k:n}}^{-1} - 1 \} {\rightsquigarrow}N(0, 1), \qquad n \to \infty.$$ \[A quick proof is to employ the distributional representation ${{\check{Y}}_{n-k:n}}{\stackrel{d}{=}}(E_1 + \cdots + E_{n+1}) / (E_1 + \cdots + E_{k+1})$, with $E_1, \ldots, E_{n+1}$ independent standard exponential random variables.\] As a consequence, we have ${{\check{Y}}_{n-k:n}}= (n/k) \{1 + o_p(1)\}$ as $n \to \infty$, and therefore, by and the Uniform Convergence Theorem for ${{\cal R}}_\rho$ [@BGT Theorem 1.5.2], $$\label{E:aYnk}
\sqrt{k} a({{\check{Y}}_{n-k:n}})
= \sqrt{k} a(n/k) \frac{a({{\check{Y}}_{n-k:n}})}{a(n/k)}
= \lambda + o_p(1), \qquad n \to \infty.$$ Since $a(y) \sim \delta(U(y))$ as $y \to \infty$, this also shows that $\sqrt{k} \delta(X_{n-k:n}) = \lambda + o_p(1)$ as $n \to \infty$.
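As an aside, the normal limit for the intermediate order statistic can be checked by a small simulation (illustrative only; $n$, $k$, the seed and the number of replications are arbitrary):

```python
import math
import random

# Monte Carlo check: sqrt(k){(n/k) / Y_{n-k:n} - 1} for a Pareto(1)
# sample should be approximately standard normal.
random.seed(3)
n, k, reps = 2000, 200, 1000
values = []
for _ in range(reps):
    y = sorted(1 / random.random() for _ in range(n))
    y_nk = y[n - k - 1]                       # Y_{n-k:n}, (n-k)-th smallest
    values.append(math.sqrt(k) * ((n / k) / y_nk - 1))
mean = sum(values) / reps
var = sum((v - mean) ** 2 for v in values) / reps
```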
In the next three paragraphs, we will analyse the components $\Gamma_{k,n}$, ${\mathbb{E}}_{k,n}$ and $Z_{k,n}$ separately. In the fourth and final paragraph, these analyses will be combined.
*1. The component $\Gamma_{k,n}$.* Let the function $a$ be as in and define $\eta(y) = \log \{1 + a(y) \}$. Since $\lim_{y \to \infty} a(y) = 0$, we have $\eta(y) = a(y) \{1 + o(1)\}$ as $y \to \infty$, and hence $$\label{E:etaYnk}
\sqrt{k} \eta({{\check{Y}}_{n-k:n}})
= \lambda + o_p(1), \qquad n \to \infty.$$ In particular, $\eta$ is eventually nonzero and of constant sign, and $|\eta| \in {{\cal R}}_\rho$. We have $$\begin{aligned}
H_{k,n}
&= \frac{1}{k} \sum_{i=1}^k \log X_{n-k+i:n} - \log X_{n-k:n} \\
&= \frac{1}{k} \sum_{i=1}^k \log U(Y_i {{\check{Y}}_{n-k:n}}) - \log U({{\check{Y}}_{n-k:n}}) \\
&= \frac{1}{k} \sum_{i=1}^k \{ \gamma \log Y_i + \eta(Y_i {{\check{Y}}_{n-k:n}}) - \eta({{\check{Y}}_{n-k:n}}) \}.\end{aligned}$$ As a consequence, $$\begin{aligned}
\Gamma_{k,n}
&= \sqrt{k} (H_{k,n} - \gamma) \\
&= \frac{\gamma}{\sqrt{k}} \sum_{i=1}^k (\log Y_i - 1)
+ \sqrt{k} \eta({{\check{Y}}_{n-k:n}}) \frac{1}{k} \sum_{i=1}^k \left( \frac{\eta(Y_i {{\check{Y}}_{n-k:n}})}{\eta({{\check{Y}}_{n-k:n}})} - 1 \right).\end{aligned}$$ By the Uniform Convergence Theorem for ${{\cal R}}_\rho$, for every $x_0 > 0$, $$\lim_{y \to \infty} \sup_{x {\geqslant}x_0} \left| \frac{\eta(xy)}{\eta(y)} - x^\rho \right| = 0.$$ By the last two displays and in view of , $$\label{E:epsin}
\max_{i=1, \ldots, k} \left| \frac{\eta(Y_i {{\check{Y}}_{n-k:n}})}{\eta({{\check{Y}}_{n-k:n}})} - Y_i^\rho \right| = o_p(1), \qquad n \to \infty.$$ By and since $k^{-1} \sum_{i=1}^k Y_i^\rho = (1 - \rho)^{-1} + o_p(1)$ as $k \to \infty$, we find $$\label{E:emp:G}
\Gamma_{k,n}
= \frac{\gamma}{\sqrt{k}} \sum_{i=1}^k (\log Y_i - 1)
+ \lambda \left( \frac{1}{1 - \rho} - 1 \right) + o_p(1), \qquad n \to \infty.$$
*2. The component ${\mathbb{E}}_{k,n}$.* Recall the notation $\eta(y) = \log \{1 + a(y)\}$, so that $U(y) = C^\gamma y^\gamma \exp\{ \eta(y) \}$. We have $$\begin{aligned}
E_{k,n}(s)
&= \frac{1}{k} \sum_{i=1}^k \left( \frac{X_{n-k+i:n}}{X_{n-k:n}} \right)^s
= \frac{1}{k} \sum_{i=1}^k \left( \frac{U(Y_i{{\check{Y}}_{n-k:n}})}{U({{\check{Y}}_{n-k:n}})} \right)^s \\
&= \frac{1}{k} \sum_{i=1}^k Y_i^{\gamma s} \exp [ s \{ \eta(Y_i {{\check{Y}}_{n-k:n}}) - \eta({{\check{Y}}_{n-k:n}}) \} ].\end{aligned}$$ Writing ${\varepsilon}_{i,n} = \eta(Y_i {{\check{Y}}_{n-k:n}}) / \eta({{\check{Y}}_{n-k:n}}) - Y_i^\rho$, we find $$E_{k,n}(s) = \frac{1}{k} \sum_{i=1}^k Y_i^{\gamma s} \exp \{ s \eta({{\check{Y}}_{n-k:n}}) (Y_i^\rho - 1 + {\varepsilon}_{i,n})\}.$$ Recall the elementary inequality $|e^z - 1 - z| {\leqslant}(z^2/2) \max(e^z, 1)$, $z \in {\mathbb{R}}$. Since $0 < Y_i^{\gamma s} {\leqslant}1$, $0 < Y_i^\rho {\leqslant}1$ and $\max_{i=1, \ldots, k} |{\varepsilon}_{i,n}| = o_p(1)$ \[see \], we get by , $$\begin{gathered}
\sup_{s \in [s_0, 0]}
\left| E_{k,n}(s) - \frac{1}{k} \sum_{i=1}^k Y_i^{\gamma s} \{ 1 + s \eta({{\check{Y}}_{n-k:n}}) (Y_i^\rho - 1) \} \right| \\
= o_p\{|\eta({{\check{Y}}_{n-k:n}})|\} = o_p(k^{-1/2}), \qquad n \to \infty.\end{gathered}$$ For $\theta_0 < 0$, the class of functions $\{f_\theta : \theta \in [\theta_0, 0]\}$ from $[1, \infty)$ to $(0, 1]$ defined by $f_\theta(y) = y^\theta$, $y {\geqslant}1$, satisfies the Glivenko-Cantelli property $$\sup_{\theta \in [\theta_0, 0]} \left| \frac{1}{k} \sum_{i=1}^k Y_i^\theta - \frac{1}{1-\theta} \right|
= o_p(1), \qquad k \to \infty;$$ see for instance Example 19.8 in @vdV98 or just use the monotonicity and continuity of $y^\theta$ in $\theta$. In view of , we obtain $$\begin{gathered}
\sup_{s \in [s_0, 0]}
\left| E_{k,n}(s) - \frac{1}{k} \sum_{i=1}^k Y_i^{\gamma s}
- s \eta({{\check{Y}}_{n-k:n}}) \left(\frac{1}{1 - \gamma s - \rho} - \frac{1}{1 - \gamma s}\right) \right| \\
= o_p\{|\eta({{\check{Y}}_{n-k:n}})|\} = o_p(k^{-1/2}), \qquad n \to \infty.\end{gathered}$$ Using again, we find $$\begin{aligned}
{\mathbb{E}}_{k,n}(s)
&= \sqrt{k} \left( E_{k,n}(s) - \frac{1}{1 - \gamma s} \right) \\
&= \frac{1}{\sqrt{k}} \sum_{i=1}^k \left( Y_i^{\gamma s} - \frac{1}{1 - \gamma s} \right)
+ s \lambda \left(\frac{1}{1 - \gamma s - \rho} - \frac{1}{1 - \gamma s}\right)
+ {\varepsilon}_n(s),\end{aligned}$$ with $$\label{E:emp:E}
\sup_{s \in [s_0, 0]} |{\varepsilon}_n(s)| = o_p(1), \qquad n \to \infty.$$
*3. The component $Z_{k,n}$.* By and , we find $$y {\overline{F}}(U(y)) = 1 + o\{|a(y)|\}, \qquad y \to \infty.$$ As a consequence, $$\begin{aligned}
{\overline{F}}(X_{n-k:n})
&= {\overline{F}}(U({{\check{Y}}_{n-k:n}})) \\
&= {{\check{Y}}_{n-k:n}}^{-1} [ 1 + o_p\{|a({{\check{Y}}_{n-k:n}})|\} ] \\
&= {{\check{Y}}_{n-k:n}}^{-1} \{ 1 + o_p(k^{-1/2}) \}, \qquad n \to \infty,\end{aligned}$$ where we used in the last step. We obtain $$\begin{aligned}
\label{E:emp:Z}
Z_{k,n}
&= \sqrt{k} \{ (n/k) {\overline{F}}(X_{n-k:n}) - 1 \} \nonumber \\
&= \sqrt{k} \{ (n/k) {{\check{Y}}_{n-k:n}}^{-1} - 1 \} + o_p(1), \qquad n \to \infty.\end{aligned}$$
*4. Joint convergence.* Define $$\begin{aligned}
\tilde{\Gamma}_k &= \frac{\gamma}{\sqrt{k}} \sum_{i=1}^k (\log Y_i - 1), \\
\tilde{{\mathbb{E}}}_k(s) &= \frac{1}{\sqrt{k}} \sum_{i=1}^k \left( Y_i^{\gamma s} - \frac{1}{1 - \gamma s} \right), \qquad s \in [s_0, 0].\end{aligned}$$ For $\theta_0 < 0$, the class of functions $\{f_\theta : \theta \in [\theta_0, 0]\}$ defined by $f_\theta(y) = y^\theta$, $y {\geqslant}1$, is Donsker with respect to the Pareto(1) distribution; this follows from Example 19.7 in @vdV98 upon noting that $|y^{\theta_1} - y^{\theta_2}| {\leqslant}|\theta_1 - \theta_2| \log y$ for $\theta_1 {\leqslant}0$, $\theta_2 {\leqslant}0$ and $y {\geqslant}1$. As a consequence, in ${\mathbb{R}}\times {\cal C}[s_0, 0]$, $$\label{E:tilde}
(\tilde{\Gamma}_k, \tilde{{\mathbb{E}}}_k) {\rightsquigarrow}(\tilde{\Gamma}, \tilde{{\mathbb{E}}}), \qquad k \to \infty,$$ a centered Gaussian process with covariance function $$\begin{aligned}
\operatorname{var}\tilde{\Gamma}
&= \operatorname{var}(\gamma \log Y_1) = \gamma^2, \\
\operatorname{cov}\{ \tilde{{\mathbb{E}}}(s_1), \tilde{{\mathbb{E}}}(s_2) \}
&= \operatorname{cov}( Y_1^{\gamma s_1}, Y_1^{\gamma s_2} )
= \frac{1}{1 - s_1 \gamma - s_2 \gamma} - \frac{1}{(1 - s_1 \gamma)(1 - s_2 \gamma)}, \\
\operatorname{cov}\{ \tilde{\Gamma}, \tilde{{\mathbb{E}}}(s) \}
&= \operatorname{cov}( \gamma \log Y_1, Y_1^{\gamma s} )
= \frac{s \gamma^2}{(1 - s \gamma)^2}.\end{aligned}$$ By and , it follows that in ${\mathbb{R}}\times {\cal C}[s_0, 0]$, $$(\Gamma_{k,n}, {\mathbb{E}}_{k,n}) {\rightsquigarrow}(\Gamma, {\mathbb{E}}), \qquad n \to \infty.$$ Finally, from and it follows that $(\Gamma_{k,n}, {\mathbb{E}}_{k,n}, Z_{k,n}) {\rightsquigarrow}(\Gamma, {\mathbb{E}}, Z)$ as $n \to \infty$, where $Z$ is standard normally distributed and is independent of $(\Gamma, {\mathbb{E}})$.
Proof of Theorem \[T:estim\] {#A:estim}
============================
The fact that $\sqrt{k} \delta(X_{n-k:n}) = \lambda + o_p(1)$ as $n \to \infty$ has already been shown in the proof of Theorem \[T:simpler\]; in particular, see . Recall $\Gamma_{k,n}$ and ${\mathbb{E}}_{k,n}(s)$ in equations and , respectively, and write $\hat{\tau}_{k,n} = \hat{\rho}_n / H_{k,n}$. We have $$\begin{aligned}
\lefteqn{
\sqrt{k} \left( E_{k,n}(\hat{\tau}_{k,n}) - \frac{1}{1 - \hat{\rho}_n} \right)
} \\
&= \sqrt{k} \left( E_{k,n}(\hat{\tau}_{k,n}) - \frac{1}{1 - \gamma \hat{\tau}_{k,n}} \right)
+ \sqrt{k} \left( \frac{1}{1 - \gamma \hat{\rho}_n / H_{k,n}} - \frac{1}{1 - \hat{\rho}_n} \right) \\
&= {\mathbb{E}}_{k,n}(\hat{\tau}_{k,n})
- \frac{1}{H_{k,n}} \frac{\hat{\rho}_n}{(1 - \gamma \hat{\rho}_n / H_{k,n} )(1 - \hat{\rho}_n)} \Gamma_{k,n}.\end{aligned}$$ By Theorem \[T:simpler\], $H_{k,n} = \gamma + k^{-1/2} \Gamma_{k,n} = \gamma + o_p(1)$ and thus $\hat{\tau}_{k,n} = \tau + o_p(1)$ as $n \to \infty$. It follows that $$\sqrt{k} \left( E_{k,n}(\hat{\tau}_{k,n}) - \frac{1}{1 - \hat{\rho}_n} \right)
= {\mathbb{E}}_{k,n}(\hat{\tau}_{k,n}) - \frac{\rho}{\gamma(1-\rho)^2} \Gamma_{k,n} + o_p(1), \qquad n \to \infty.$$ Substituting this into the definition of $\hat{\delta}_{k,n}$ yields $$\begin{gathered}
\label{E:estim:delta}
\sqrt{k} \hat{\delta}_{k,n}
= \gamma (1 - 2\rho) (1 - \rho)^3 \rho^{-4}
\left( {\mathbb{E}}_{k,n}(\hat{\tau}_{k,n}) - \frac{\rho}{\gamma(1-\rho)^2} \Gamma_{k,n} \right) + o_p(1), \\
n \to \infty,\end{gathered}$$ as well as $$\begin{aligned}
\label{E:estim:gamma}
\sqrt{k} (\hat{\gamma}_{k,n} - \gamma)
&= \sqrt{k} \left( H_{k,n} - \hat{\delta}_{k,n} \frac{\hat{\rho}_n}{1 - \hat{\rho}_n} - \gamma \right) \nonumber \\
&= \Gamma_{k,n} - \sqrt{k} \hat{\delta}_{k,n} \frac{\rho}{1 - \rho} + o_p(1) \nonumber \\
&= \frac{(1-\rho)^2}{\rho^2} \left( \Gamma_{k,n} - \gamma \frac{1 - 2\rho}{\rho} {\mathbb{E}}_{k,n}(\hat{\tau}_{k,n}) \right) + o_p(1),
\qquad n \to \infty.\end{aligned}$$ From $\hat{\tau}_{k,n} = \tau + o_p(1)$ and Theorem \[T:simpler\], it follows that in ${\mathbb{R}}\times {\cal C}[s_0, 0] \times {\mathbb{R}}\times {\mathbb{R}}$, $$(\Gamma_{k,n}, {\mathbb{E}}_{k,n}, Z_{k,n}, \hat{\tau}_{k,n})
{\rightsquigarrow}(\Gamma, {\mathbb{E}}, Z, \tau), \qquad n \to \infty.$$ For $s_0 < \tau$, we have $\Pr(s_0 {\leqslant}\hat{\tau}_{k,n} {\leqslant}0) \to 1$ as $n \to \infty$, and thus, by the previous display and the continuous mapping theorem, $$(\Gamma_{k,n}, {\mathbb{E}}_{k,n}(\hat{\tau}_{k,n}), Z_{k,n})
{\rightsquigarrow}(\Gamma, {\mathbb{E}}(\tau), Z), \qquad n \to \infty.$$ In view of and , as $n \to \infty$, $$\begin{gathered}
\label{E:estim:10}
\left(
\sqrt{k} (\hat{\gamma}_{k,n} - \gamma),
\sqrt{k} \hat{\delta}_{k,n},
Z_{k,n}
\right) \\
\shoveleft{\qquad {\rightsquigarrow}\Biggl(
\frac{(1-\rho)^2}{\rho^2} \left( \Gamma - \gamma \frac{1 - 2\rho}{\rho} {\mathbb{E}}(\tau) \right),} \\
\frac{(1-2\rho)(1-\rho)}{\rho^3} \left( - \Gamma + \gamma \frac{(1-\rho)^2}{\rho} {\mathbb{E}}(\tau) \right),
Z
\Biggr).\end{gathered}$$ The vector $(\Gamma, {\mathbb{E}}(\tau), Z)$ is trivariate normal, with $Z$ standard normal and independent of $(\Gamma, {\mathbb{E}}(\tau))$, with $\Gamma$ as in Theorem \[T:simpler\], and with $$\begin{aligned}
\operatorname{E}[ {\mathbb{E}}(\tau) ]
&= \lambda \frac{\rho^2}{\gamma(1-2\rho)(1-\rho)}, &
\operatorname{var}\{ {\mathbb{E}}(\tau) \}
&= \frac{\rho^2}{(1-2\rho)(1-\rho)^2}, \\
\operatorname{cov}\{ \Gamma, {\mathbb{E}}(\tau) \}
&= \gamma \frac{\rho}{(1-\rho)^2}.\end{aligned}$$ As a consequence, the distribution of the limit vector in is trivariate normal with mean vector $(0, \lambda, 0)'$ and covariance matrix $\Sigma$ as in .
Proofs for Section \[S:prob\] {#A:tail}
=============================
#### Proof of Theorem \[T:tail\] {#proof-of-theoremttail .unnumbered}
Put $y_n = x_n / X_{n-k:n}$, recall $\delta_n = \delta(X_{n-k:n})$, and define $$\tilde{p}_n = {\overline{F}}(X_{n-k:n}) {\overline{G}}_{\gamma, \delta_n, \tau}(y_n).$$ Since $k \to \infty$ and $p_n \to 0$ as $n \to \infty$, it is sufficient to prove with $\hat{p}_n/p_n - 1$ replaced by $\log \hat{p}_n - \log p_n$. Let us write $$\sqrt{k} (\log \hat{p}_n - \log p_n)
= \sqrt{k} (\log \hat{p}_n - \log \tilde{p}_n) + \sqrt{k} (\log \tilde{p}_n - \log p_n)$$ and treat the two terms on the right-hand side separately.
*1. The term $\sqrt{k} (\log \tilde{p}_n - \log p_n)$.* We have $$\log \tilde{p}_n - \log p_n
= \log {\overline{G}}_{\gamma, \delta_n, \tau}(y_n)
- \log \frac{{\overline{F}}(y_n X_{n-k:n})}{{\overline{F}}(X_{n-k:n})}.$$ Since $(n/k_n) {\overline{F}}(y_n X_{n-k:n}) \to q$ and $(n/k_n) {\overline{F}}(X_{n-k:n}) = 1 + o_p(1)$ as $n \to \infty$, $$\frac{{\overline{F}}(y_n X_{n-k:n})}{{\overline{F}}(X_{n-k:n})}
= \frac{{\overline{F}}(x_n)}{{\overline{F}}(X_{n-k:n})}
= q + o_p(1), \qquad n \to \infty.$$ Since moreover ${\overline{F}}$ is monotone and regularly varying of index $-1/\gamma$, this forces $y_n \to y$ as $n \to \infty$ with $y^{-1/\gamma} = q$, or $y = q^{-\gamma} \in (1, \infty)$. By Proposition \[P:EGPD\], we find $$\frac{\log \tilde{p}_n - \log p_n}{\delta_n} = o_p(1), \qquad n \to \infty.$$ Finally, from $\sqrt{k} \delta_n = \lambda + o_p(1)$ as $n \to \infty$, we can conclude that $$\label{E:tail:7}
\sqrt{k} (\log \tilde{p}_n - \log p_n)
= \sqrt{k} \delta_n \frac{\log \tilde{p}_n - \log p_n}{\delta_n}
= o_p(1), \qquad n \to \infty.$$
*2. The term $\sqrt{k} (\log \hat{p}_n - \log \tilde{p}_n)$.* We have $$\begin{gathered}
\label{E:tail:10}
\log \hat{p}_n - \log \tilde{p}_n
= \{\log (k/n) - \log {\overline{F}}(X_{n-k:n})\} \\
+ \{\log {\overline{G}}_{\hat{\gamma}_n, \hat{\delta}_n, \hat{\tau}_n}(y_n) - \log {\overline{G}}_{\gamma, \delta_n, \tau}(y_n)\}.\end{gathered}$$ The first term on the right-hand side is $$\begin{aligned}
\log (k/n) - \log {\overline{F}}(X_{n-k:n})
&= - \log \{ n {\overline{F}}(X_{n-k:n}) / k \} \\
&= - \log (1 + k^{-1/2} Z_n) \\
&= - k^{-1/2} Z_n + o_p(k^{-1/2}), \qquad n \to \infty.\end{aligned}$$ For the second term on the right-hand side in , we proceed as follows. Since $y_n = y + o_p(1)$ as $n \to \infty$ and $y > 1$, it is sufficient to work on the event $y_n > 1$. Then $$\begin{aligned}
\label{E:tail:20}
\lefteqn{
\log {\overline{G}}_{\hat{\gamma}_n, \hat{\delta}_n, \hat{\tau}_n}(y_n) - \log {\overline{G}}_{\gamma, \delta_n, \tau}(y_n)
} \nonumber \\
&= \log [ \{ y_n (1 + \hat{\delta}_n - \hat{\delta}_n y_n^{\hat{\tau}_n} ) \}^{-1/\hat{\gamma}_n} ]
- \log [ \{ y_n (1 + \delta_n - \delta_n y_n^\tau ) \}^{-1/\gamma} ] \nonumber \\
&= \left((-\frac{1}{\hat{\gamma}_n}) - (-\frac{1}{\gamma}) \right) \log y_n \nonumber \\
& \qquad \mbox{} + (-\frac{1}{\hat{\gamma}_n}) \{ \log (1 + \hat{\delta}_n - \hat{\delta}_n y_n^{\hat{\tau}_n})
- \log (1 + \delta_n - \delta_n y_n^\tau) \} \nonumber \\
& \qquad \mbox{} + \left((-\frac{1}{\hat{\gamma}_n}) - (-\frac{1}{\gamma}) \right) \log (1 + \delta_n - \delta_n y_n^\tau).\end{aligned}$$ We treat the three terms on the right-hand side of (\[E:tail:20\]) in turn. First, $$\begin{aligned}
(-\frac{1}{\hat{\gamma}_n}) - (-\frac{1}{\gamma})
&= \frac{\hat{\gamma}_n - \gamma}{\hat{\gamma}_n \gamma} \\
&= k^{-1/2} \gamma^{-2} \Gamma_n + O_p(k^{-1}), \qquad n \to \infty.\end{aligned}$$ Second, $\delta_n = O_p(k^{-1/2})$ and therefore also $\hat{\delta}_n = O_p(k^{-1/2})$ as $n \to \infty$. Hence the second term on the right-hand side of (\[E:tail:20\]) is $$\begin{aligned}
\lefteqn{
-\hat{\gamma}_n^{-1} \{ \log (1 + \hat{\delta}_n - \hat{\delta}_n y_n^{\hat{\tau}_n})
- \log (1 + \delta_n - \delta_n y_n^\tau) \}
} \\
&= \{- \gamma^{-1} + O_p(k^{-1/2}) \}
\{ \hat{\delta}_n - \hat{\delta}_n y_n^{\hat{\tau}_n} - \delta_n + \delta_n y_n^\tau + O_p(k^{-1}) \} \\
&= - k^{-1/2} \gamma^{-1} \Delta_n (1 - y_n^\tau) + o_p(k^{-1/2}), \qquad n \to \infty.\end{aligned}$$ The third term on the right-hand side of (\[E:tail:20\]) is $O_p(k^{-1/2}) O_p(k^{-1/2}) = O_p(k^{-1})$. All in all, we find $$\label{E:tail:30}
\sqrt{k} (\log \hat{p}_n - \log \tilde{p}_n)
= - Z_n + \gamma^{-2} \Gamma_n \log y - \gamma^{-1} \Delta_n (1 - y^\tau) + o_p(1)$$ as $n \to \infty$. Combine (\[E:tail:7\]) and (\[E:tail:30\]) and recall $y = q^{-\gamma}$ and $\rho = \gamma \tau$ to find the result. $\Box$
#### Proof of Theorem \[T:tail:extreme\] {#proof-of-theoremttailextreme .unnumbered}
Recall the Weissman estimator $\hat{p}_n^{\mathrm{W}}$ and put $d_n = k / (np_n)$. From Theorem 4.4.7 in @dHF, it follows that $$\frac{\sqrt{k}}{\log d_n} \left( \frac{\hat{p}_n^{\mathrm{W}}}{p_n} - 1 \right)
{\rightsquigarrow}\gamma^{-1} \Gamma, \qquad n \to \infty.$$ Moreover, writing $y_n = x_n / X_{n-k:n}$, $$\frac{\hat{p}_n}{\hat{p}_n^{\mathrm{W}}}
= \{ 1 + \hat{\delta}_n - \hat{\delta}_n y_n^{\hat{\tau}_n} \}^{-1/\hat{\gamma}_n}
= 1 + O_p(k^{-1/2}), \qquad n \to \infty.$$ As $\log d_n \to \infty$, we find that $\hat{p}_n$ and $\hat{p}_n^{\mathrm{W}}$ have the same asymptotic distribution. $\Box$
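As a numerical illustration of the Weissman-type tail estimate compared above, the following sketch simulates heavy-tailed data and estimates a small exceedance probability. It uses the Hill estimator as a stand-in for $\hat{\gamma}_n$ and exact Pareto data — both simplifying assumptions, not the estimator studied in the paper.

```python
import math
import random

def hill_estimator(xs_desc, k):
    """Hill estimate of the extreme value index gamma from the k largest points."""
    return sum(math.log(xs_desc[i] / xs_desc[k]) for i in range(k)) / k

def weissman_tail(data, k, x):
    """Weissman-type estimate of p = P(X > x): (k/n) * (x / X_{n-k:n})^(-1/gamma_hat)."""
    n = len(data)
    xs = sorted(data, reverse=True)
    gamma_hat = hill_estimator(xs, k)
    return (k / n) * (x / xs[k]) ** (-1.0 / gamma_hat)

random.seed(1)
sample = [random.paretovariate(2.0) for _ in range(20000)]  # P(X > x) = x^(-2)
p_hat = weissman_tail(sample, 500, 10.0)  # true tail probability is 0.01
```

With exact Pareto data the Hill estimate is consistent, so `p_hat` lands close to the true value $10^{-2}$ even though $x = 10$ lies far beyond the $k$-th upper order statistic.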
Acknowledgments {#acknowledgments .unnumbered}
===============
We are grateful to two referees for their speedy reports containing thoughtful and constructive remarks.
---
abstract: |
In this paper, we consider three transmit strategies for the fading three-node, two-way relay network (TWRN) – physical-layer network coding (PNC), digital network coding (DNC) and codeword superposition (CW-Sup). The aim is to minimize the total average energy needed to deliver a given pair of required average rates. Full channel state information is assumed to be available at all transmitters and receivers. The optimization problems corresponding to the various strategies in fading channels are formulated, solved and compared. For the DNC-based strategies, a simple time sharing of transmission of the network-coded message and the remaining bits of the larger message (DNC-TS) is considered first. We extend this approach to include a superposition strategy (DNC-Sup), in which the network-coded message and the remainder of the longer source message are superimposed before transmission. It is demonstrated theoretically that DNC-Sup outperforms DNC-TS and CW-Sup in terms of total average energy usage. More importantly, it is shown in simulation that DNC-Sup performs better than PNC if the required rate is low and worse otherwise. Finally, an algorithm to select the optimal strategy in terms of energy usage subject to different rate pair requirements is presented.
author:
- 'Zhi Chen, Teng Joon Lim, and Mehul Motani [^1]'
title: 'Fading Two-Way Relay Channels: Physical-Layer Versus Digital Network Coding'
---
Two-way, fading, PNC, DNC, energy usage
Introduction
============
Network coding, introduced in [@ahlswede2000network], has proven to be an important tool for improving network throughput. As one of the simplest applications of network coding, the two-way relay network (TWRN) has been extensively studied in the literature. A TWRN usually comprises two source nodes ($S_1$ and $S_2$) and one relay node ($R$), where $S_1$ and $S_2$ have information to exchange with each other. A direct link between $S_1$ and $S_2$ is unavailable.
In the literature, several transmission methods have been proposed for the TWRN, such as digital network coding (DNC) in [@liu2008network]–[@Chen2012two-way6], codeword superposition (CW-Sup) in [@Rankov] and [@Boche], physical-layer network coding (PNC) in [@Nam]–[@popovski2007physical] and analog network coding (ANC) in [@avestimehr2010capacity]–[@Katti]. Among them, DNC requires the relay node to jointly decode individual messages from both $S_1$ and $S_2$ on the uplink and then combine them together as a new message for delivery on the downlink. The codeword superposition strategy superimposes the two individual codewords received on the uplink and forwards the result to both sources. PNC requires the relay node to decode a function of the two messages instead of the two individual messages and forward this function message to both sources. Another method, ANC, requires the relay node to simply amplify the mixed signals received over the uplink without decoding, and forward this amplified signal to the source nodes.
In all cases, since each source node has perfect knowledge of the message originating from itself, it can subtract its own transmitted message and obtain the intended message from the other source upon receipt of the relay’s network-coded message. Note that ANC performs worse than PNC as noise at the relay is amplified when transmitting on the downlink, thus degrading the achievable rate, as was shown in [@nazer2011physical]. Hence in this work we shall only consider PNC, DNC and CW-Sup strategies over fading channels. For clarity, a graphical description of the strategy considered is depicted in Fig. \[fig:Comparison\_NC\], [where the message transmitted by $S_1$ is denoted by $a$, and the one from $S_2$ is $b$. Without loss of generality, we assume that the message $b$ is longer than $a$. It is composed of two parts: $b_1$ and $b_2$, where $b_1$ is assumed to have the same length as the message $a$. With the DNC based strategy or CW-Sup, the decoded messages at $R$ are $\hat{a}$ and $\hat{b}$. With PNC-Sup, the relay decodes the function message $\widehat{a \oplus b_1}$ and the remaining bits $\hat{b_2}$.]{} It should be noted that PNC is a family of techniques, and the PNC strategy considered in this work is a specific one from [@wilson2010joint]. In addition, in some works in the literature, network coding schemes with a conventional multi-access uplink for TWRNs were also referred to as PNC schemes. In this work, however, to distinguish a conventional multi-access uplink from the idea of decoding a function message over the uplink, we shall refer to the former as a DNC scheme.
In a TWRN, an achievable rate region with DNC was first investigated in [@liu2008network] for a three-slot protocol. In [@fong2011practical], an achievable rate region with only time-resource allocation was investigated for the case of static channels. In [@Boche], a codeword superposition technique was discussed and only the optimal time division between the uplink and the downlink phases was investigated. In [@Nam] and [@wilson2010joint], the capacity region employing PNC was derived for AWGN channels without any discussion of resource allocation. In [@Tarokh], the optimized constellation for TWRNs with PNC was investigated for the symmetric traffic scenario, by adjusting the PNC map to the channel gains. In [@Katti], analog network coding is studied as a way to utilize interference in wireless networks. Furthermore, PNC was demonstrated in [@Liew2006] to outperform DNC with a time-sharing uplink in terms of achievable throughput as long as the SNR is higher than $-5$ dB.
In [@Chen2012two-way], [@Chen2012two-way5] and [@Chen2012two-way6], we investigated the minimization of total energy usage for a three-node TWRN subject to stability constraints, with various allowed transmit modes based on DNC. Only an orthogonal, time-sharing uplink, as well as time sharing of the digital network coded bits with the remaining bits in the downlink were considered in [@Chen2012two-way]. In [@Chen2012two-way5], joint decoding on the uplink was allowed, ensuring that all points in the multi-access channel rate region are achievable. The non-fading (static) case was considered in [@Chen2012two-way5]. The fading scenario was investigated in the preliminary version of this work in [@Chen2012two-way6]. However, no mention of PNC and CW-Sup was made in [@Chen2012two-way]–[@Chen2012two-way6]. It is also noted that the strategies considered in this work are different from the traditional three-phase DNC and two-phase PNC. The DNC schemes considered in this paper utilize a multi-access uplink, while the three-phase DNC adopts the time sharing uplink transmission. In addition, the traditional DNC or PNC usually considers the symmetric traffic scenario, whereas in this work we assume asymmetric traffic.
Although in [@Liew2006] PNC was shown to perform better than DNC with a time-sharing uplink, its performance relative to DNC with a multi-access uplink is still unknown. In this work, we will focus on comparing the performance and complexity of the optimized PNC, DNC and CW-Sup strategies, with a multi-access uplink assumed for the latter two. It will be shown that the proposed PNC strategy outperforms other strategies with relatively high data rate requirements. In the regime of low data rate requirements, DNC-Sup performs better than the proposed PNC and CW-Sup strategies. To summarize, our main contributions are:
- We find the resource allocation that minimizes the total average energy required to support a given rate requirement in a three-node TWRN in a fading channel.
- We prove that DNC-Sup outperforms DNC-TS and CW-Sup under all channel conditions.
- We show that the proposed PNC-Sup strategy outperforms DNC-Sup in applications with high data rate requirements on both sides. However, if data rate requirements on both sides are low, DNC-Sup is preferred in terms of total average transmit energy usage.
- Based on the analysis of different schemes, we present an optimal algorithm to select the best strategy with different rate pair requirements in terms of energy usage.
The rest of this paper is organized as follows. In Section II, we describe the three-node TWRN and our setup. In Section III-A, we briefly describe the strategies considered. From Section III-B to Section III-E, we discuss respectively the PNC strategy, the orthogonal time-sharing of the DNC message and the excess bits of the longer message, the superposition of the DNC bits and the excess bits of the longer message, and the superposition of the source messages. In Section III-F, an optimal algorithm to always select the best strategy is presented. Numerical results are presented in Section IV and Section V concludes this work.
System Description
==================
We consider a three-node, two-way relay network consisting of two sources $S_1$, $S_2$ and one relay node $R$. $S_1$ and $S_2$ exchange information through the relay $R$ without a direct link between them. A block flat-fading channel model is assumed for all links, i.e., the channel is a constant over one slot, defined as a cycle through all defined transmission modes in a strategy. The instantaneous channel power gains, corresponding to links $S_1$-$R$, $S_2$-$R$, $R$-$S_1$ and $R$-$S_2$, are defined as $g_{1r}$, $g_{2r}$, $g_{r1}$ and $g_{r2}$, respectively, their averages by $\bar{g}_{ij}$ ($i,j=1,2,r$), and their probability density functions by $p(g_{ij})$. We also assume ergodicity in the channel processes. Noise at every node is modeled by an i.i.d. Gaussian random variable with zero mean and unit variance. Each node is equipped with one antenna and works in half-duplex mode, i.e., it cannot transmit and receive simultaneously. It is assumed that all transmitters incur some energy overhead due to the energy needed to operate the transmitter hardware, which is a constant independent of transmit power. This energy overhead is defined as $P_Z$ for all three nodes together.
For each transmission strategy, the following optimization is performed. Given instantaneous channel state information (CSI) as well as their probability distributions, find
1. the optimal time fraction to allocate to each mode in the transmission strategy over all time, i.e., the designed values are applied regardless of instantaneous CSI, and
2. the channel-dependent optimal power and rate allocations for each node, in each mode.
Here, the optimal solution is the one that minimizes the total energy used. The constraints that must be met are that the [*average*]{} rate from $S_1$ to $S_2$ is at least $\lambda_1$, and that from $S_2$ to $S_1$ is at least $\lambda_2$.
Transmission Strategies
=======================
Brief Description
-----------------
In this work, we consider four strategies: PNC-Sup, DNC-TS, DNC-Sup and CW-Sup, as shown in Fig. \[fig:Comparison\_NC\]. Without loss of generality, we let $\lambda_1 \leq \lambda_2$ in the following analysis.
In the PNC-Sup strategy, there are a total of three transmission modes. The first mode is when $S_1$ and $S_2$ simultaneously transmit two messages ($a$ and $b_1$ respectively) of the same length at the same rate, encoded so that $R$ is able to decode the sum $a \oplus b_1$ (see [@wilson2010joint] for details). In the second mode, with $\lambda_1 \leq \lambda_2$, $S_2$ transmits to $R$ the bits that were not transmitted in the first mode ($b_2$), at a rate that will be found through solving the optimization problem below. Finally, in the third mode, the relay node superimposes the sum message ($a \oplus b_1$) on $b_2$, and broadcasts the combination to $S_1$ and $S_2$.
In DNC-TS, we consider a multi-access uplink and time sharing of the digital network-coded message (denoted as $a\oplus b_1$ in Fig. \[fig:Comparison\_NC\]) and the remaining bits of the longer message ($b_2$) on the downlink. On the other hand, in DNC-Sup, a multi-access uplink and the superposition of the digital network coded bits and the remaining bits of the longer message on the downlink are considered.
The last strategy considered is taken from [@Rankov], CW-Sup, and comprises a multi-access uplink and a codeword superposition of the two original messages from $S_1$ and $S_2$ at $R$ for transmission on the downlink. No explicit network coding technique is applied in this strategy.
Note that from queueing theory, for a queueing system with the service rate close to the arrival rate, there are packets buffered at all nodes with high probability. To make the problem tractable for PNC-Sup (inherently required by the second mode of PNC-Sup), we further introduce the constraint that if $\lambda_1 \le \lambda_2$, then the message to be transmitted in each slot from $S_1$ is shorter than the one from $S_2$, irrespective of the instantaneous channel gains, and vice versa for the case $\lambda_1 > \lambda_2$. Note also that for all strategies, if $\lambda_1 \le \lambda_2$, with the service rate close to the arrival rate, the number of bits from $S_1$ buffered at $R$ is smaller than that from $S_2$ with high probability. Hence we assume that $R$ can always transmit more data to $S_1$ in each slot, irrespective of the instantaneous channel gains, and vice versa for the case $\lambda_1 > \lambda_2$.
Physical Layer Network Coding Strategy (PNC-Sup)
------------------------------------------------
In this strategy, there are three transmission modes. In mode one, $S_1$ and $S_2$ transmit $a$ and $b_1$ of the same length to $R$, respectively, and $R$ decodes $a \oplus b_1$. In mode two, $S_2$ transmits $b_2$ to $R$. In mode three, the relay broadcasts to $S_1$ and $S_2$ the superposition of $\widehat{a \oplus b_1}$ and $\hat{b_2}$.
For the first mode, it was demonstrated in [@wilson2010joint] that the achievable symmetric transmit rate with PNC is $\log_2(1/2+SNR)$ over the AWGN channel[^2], where $SNR$ is the signal-to-noise ratio at the receiver from both $S_1$ and $S_2$. The details of how to transmit at rates close to this theoretical capacity in practice are given in [@wilson2010joint]. [The $\log_2(1/2 + SNR)$ expression implies that transmit powers at $S_1$ and $S_2$ are adjusted at each channel use so that the instantaneous SNRs in the $S_1$-$R$ and $S_2$-$R$ links are identical. While this is sub-optimal, no other simple capacity expression exists for PNC and therefore we use this scheme in this paper. Note that the PNC map is matched to channel fading coefficients in \[10\], but no rate expression was derived.]{} The power required to transmit a message at rate $R_{1i}^{\mathrm{PNC}}$ from $S_i$ in the first mode is therefore given by, $$\begin{aligned}
P_{1i}^{\mathrm{PNC}}(g_{ir})=\frac{2^{R_{1i}^{\mathrm{PNC}}}-\frac{1}{2}}{g_{ir}}. \label{eqn:pnc1}\end{aligned}$$ The total power required in the first mode then is, $$\begin{aligned}
P_{1}^{\mathrm{PNC}}(g_{1r},g_{2r})=\sum_{i=1}^2 \frac{2^{R_{1i}^{\mathrm{PNC}}}-\frac{1}{2}}{g_{ir}}.\label{eqn:pnc2}\end{aligned}$$ The associated transmit rate $R_1^{\mathrm{PNC}}=R_{1i}^{\mathrm{PNC}}$ is given by, $$\begin{aligned}
R_1^{\mathrm{PNC}}=\log_2 \left(\frac{1}{2}+P_{11}^{\mathrm{PNC}}g_{1r} \right)=\log_2 \left(\frac{1}{2}+P_{12}^{\mathrm{PNC}}g_{2r} \right),
\end{aligned}$$ where the channel inversion equality $P_{11}^{\mathrm{PNC}}g_{1r}=P_{12}^{\mathrm{PNC}}g_{2r}$ is required \[17\] for the validity of the PNC rate expressions.
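As a numerical check of the channel-inversion equality above, the following sketch (illustrative only; unit-variance noise as in the system model, gains are arbitrary example values) computes the per-source first-mode powers that equalize the two received SNRs and confirms the PNC rate $\log_2(1/2+SNR)$.

```python
import math

def pnc_uplink_powers(rate, g1r, g2r):
    """Powers at S1 and S2 so that both links see the same received SNR
    (the channel-inversion equality P11*g1r = P12*g2r) and the relay can
    decode a XOR b1 at rate = log2(1/2 + SNR)."""
    snr = 2.0 ** rate - 0.5  # common received SNR required for the target rate
    return snr / g1r, snr / g2r

p11, p12 = pnc_uplink_powers(2.0, 0.8, 1.6)  # equal received SNR of 3.5
```

By construction `p11 * g1r == p12 * g2r`, which is exactly constraint (\[eqn:pnccon5\]).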
In the second mode, the remaining bits of $S_2$ will be delivered to the relay node. The minimum power needed by $S_2$ to transmit its remaining bits at the rate $R_{22}^{\mathrm{PNC}}$ is $$\begin{aligned}
P_{22}^{\mathrm{PNC}}(g_{2r})=\frac{2^{R_{22}^{\mathrm{PNC}}}-1}{g_{2r}},
\end{aligned}$$ which comes from the Shannon channel capacity for point-to-point Gaussian channels, i.e., $R_{22}^{\mathrm{PNC}}=\log_2\left(1+P_{22}^{\mathrm{PNC}}(g_{2r}) g_{2r}\right)$. The total transmit power then is $P_{2}^{\mathrm{PNC}}=P_{22}^{\mathrm{PNC}}$ and the total transmit rate $R_{2}^{\mathrm{PNC}}=R_{22}^{\mathrm{PNC}}$. The power required in the downlink phase (the third mode) consists of two parts. One is to transmit the common network-coded message at rate $R_{3,c}^{\mathrm{PNC}}(g_{r1},g_{r2})$ and the other is to transmit the remaining bits of the message from $S_2$ to $S_1$ at rate $R_{3,p}^{\mathrm{PNC}}(g_{r1},g_{r2})$. The decoding method at $S_1$ and $S_2$ is as follows. First, both source nodes decode the common message in the presence of the interference from the private message. Hence $S_1$ can subtract its own message from the network-coded message and obtain part of the message from $S_2$. $S_2$ can also subtract its own message from the network-coded message and obtain the full message from $S_1$. $S_1$, however, needs to decode the remaining bits of the larger message from $S_2$, and then combine them with the message embedded in the network-coded message to obtain the full message from $S_2$. Hence, from the Shannon capacity formula, $$\begin{aligned}
R_{3,c}^{\mathrm{PNC}}&=\min_i \log_2\left(1+\frac{P_{3,c}^{\mathrm{PNC}}(g_{r1},g_{r2}) g_{ri}}{1+P_{3,p}^{\mathrm{PNC}}(g_{r1},g_{r2})g_{ri}}\right) \label{eqn:bc1}\\
&=\log_2\left(1+\frac{P_{3,c}^{\mathrm{PNC}}(g_{r1},g_{r2}) \min( g_{r1},g_{r2})}{1+P_{3,p}^{\mathrm{PNC}}(g_{r1},g_{r2})\min(g_{r1},g_{r2})}\right),\label{eqn:bc2}\end{aligned}$$ where $P_{3,c}^{\mathrm{PNC}}(g_{r1},g_{r2})$ is the power required for transmission of the network-coded bits and $P_{3,p}^{\mathrm{PNC}}(g_{r1},g_{r2})$ is that for the remaining bits of the larger message. (\[eqn:bc2\]) comes from the fact that $P_{3,c}^{\mathrm{PNC}} g/(1+P_{3,p}^{\mathrm{PNC}}g)$ is a monotonically increasing function of $g$ if $g>0$. The power required to transmit the network-coded message then is given by, $$\begin{aligned}
P_{3,c}^{\mathrm{PNC}}(g_{r1},g_{r2})= \frac{\left(2^{R_{3,c}^{\mathrm{PNC}}}-1\right)\left( 1+P_{3,p}^{\mathrm{PNC}}\min(g_{r1},g_{r2}) \right) }{\min (g_{r1},g_{r2})}.\end{aligned}$$
For the remaining bits after subtracting the network-coded message, the achievable rate is given by $$R_{3,p}^{\mathrm{PNC}}=\log_2\left(1+P_{3,p}^{\mathrm{PNC}}(g_{r1},g_{r2})g_{r1}\right),$$ and the power required by the relay to transmit at this rate is $$\begin{aligned}
P_{3,p}^{\mathrm{PNC}}(g_{r1})= \frac{\left(2^{R_{3,p}^{\mathrm{PNC}}}-1\right)}{g_{r1}}.\end{aligned}$$ The total power required in the third mode is given by, $$\begin{aligned}
{P}_3^{\mathrm{PNC}}=P_{3,c}^{\mathrm{PNC}}+P_{3,p}^{\mathrm{PNC}}.\end{aligned}$$ We define $\bar{P}_i^{\mathrm{PNC}}$ and $\bar{R}_i^{\mathrm{PNC}}$ as the average transmit power required and the associated average transmit rate for the $i$th phase, respectively, where the expectation is over the associated channel distributions. In addition, $f_i^{\mathrm{PNC}}$ ($i=1,2,3$) denotes the time fraction assigned to the $i$th phase. $f_i^{\mathrm{PNC}}\bar{P}_i^{\mathrm{PNC}}$ is proportional to the average transmit energy consumed in the $i$th phase.
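The downlink power expressions for the third mode can be checked numerically as follows. This is a toy sketch, not part of the paper's optimization; the rates and gains below are arbitrary example values.

```python
import math

def pnc_downlink_powers(r_c, r_p, gr1, gr2):
    """Superposition powers at the relay: the common (network-coded) message
    must be decodable by both sources with the private part as interference;
    the private bits b2 are decoded at S1 after cancelling the common part."""
    g_min = min(gr1, gr2)
    p_p = (2.0 ** r_p - 1.0) / gr1                          # private bits, S1 only
    p_c = (2.0 ** r_c - 1.0) * (1.0 + p_p * g_min) / g_min  # common message
    return p_c, p_p

p_c, p_p = pnc_downlink_powers(1.5, 1.0, 2.0, 0.5)
```

Plugging the returned powers back into the rate expressions recovers the target rates, confirming that the common-message rate is limited by the weaker of the two downlink gains.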
Hence, assuming that $\lambda_1 \le \lambda_2$, minimizing average energy-usage with average rate constraints and PNC at the relay, is formulated as follows. We call this problem [**P1**]{}. $$\begin{aligned}
\min_{f_i,R_i^{\mathrm{PNC}}(g_{ij})} \quad \sum_{i=1}^3 f_i^{\mathrm{PNC}}\bar{P}_i^{\mathrm{PNC}}+P_Z \label{eqn:pncobj}\end{aligned}$$ subject to: $$\begin{aligned}
f_1^{\mathrm{PNC}}\bar{R}_1^{\mathrm{PNC}} &\ge \lambda_1 \label{eqn:pnccon1}\\
f_2^{\mathrm{PNC}}\bar{R}_{22}^{\mathrm{PNC}} &\ge \lambda_2-\lambda_1 \label{eqn:pnccon2}\\
f_3^{\mathrm{PNC}}\bar{R}_{3,c}^{\mathrm{PNC}} &\ge \lambda_1 \label{eqn:pnccon3}\\
f_3^{\mathrm{PNC}}\bar{R}_{3,p}^{\mathrm{PNC}} &\ge \lambda_2-\lambda_1 \label{eqn:pnccon4}\\
P_{11}^{\mathrm{PNC}}g_{1r}&=P_{12}^{\mathrm{PNC}}g_{2r} \label{eqn:pnccon5}\\
\sum_{i=1}^3 f_i^{\mathrm{PNC}} &\leq 1 \label{eqn:pnccon6}\end{aligned}$$ where the target function in (\[eqn:pncobj\]) is the average energy consumed per slot. $\lambda_1$ (assuming $\lambda_1 \leq \lambda_2$) in (\[eqn:pnccon1\]) comes from the fact that only part of the larger message is simultaneously transmitted with the smaller message such that the two messages have the same length. $\lambda_2-\lambda_1$ in (\[eqn:pnccon2\]) is for unicast transmission of the remaining bits of the larger message. (\[eqn:pnccon5\]) comes from the channel-inversion equality to alleviate intrinsic interference. Note that the overhead energy $P_Z$ is a constant and its value does not affect the optimal solution to [**P1**]{}. Hence we simply assume $P_Z=0$ in the rest of this work. Note that the objective function is a convex function of the transmit rates and a linear function of the time fraction of each mode. In addition, the constraints are also linear functions of time fractions and/or of the transmit rates. Therefore, [**P1**]{} is a standard convex optimization problem and it can be solved by the Lagrange multiplier method [@boyd2004convex]. By taking the first-order derivative of the Lagrangian with respect to the related parameters, the associated KKT conditions are given by, $$\begin{aligned}
\bar{P}^{\mathrm{PNC}}_i-\beta_i\bar{R}^{\mathrm{PNC}}_i&=\gamma, \quad i=1,2 \label{eqn:pnckkt1} \\
\bar{P}^{\mathrm{PNC}}_3-\beta_{3,c}\bar{R}^{\mathrm{PNC}}_{3,c}-\beta_{3,p}\bar{R}^{\mathrm{PNC}}_{3,p} &=\gamma, \label{eqn:pnckkt2}\\
2^{R_1^{\mathrm{PNC}}}\ln2 \left(\frac{1}{g_{1r}}+\frac{1}{g_{2r}}\right)-\beta_1 &= 0, \label{eqn:pnckkt3}\\
\frac{2^{R_2^{\mathrm{PNC}}}}{g_{2r}} \ln2 -\beta_2 &= 0, \label{eqn:pnckkt4}\end{aligned}$$ and the constraints in (\[eqn:pnccon1\])-(\[eqn:pnccon4\]) and (\[eqn:pnccon6\]) are satisfied with equality. The Lagrange multiplier $\beta_i$ is associated with the rate requirement in the $i$th phase, and $\gamma$ with the physical constraint in (\[eqn:pnccon6\]). From these KKT conditions, the associated optimal power allocation strategy is given by $$\begin{aligned}
P_{1}^{\mathrm{PNC}}(g_{1r},g_{2r})&=\left[\beta_1^* \log_2 e - \frac{1}{2} \left( \frac{1}{g_{1r}}+\frac{1}{g_{2r}} \right)\right]^+ , \label{eqn:pncopt1}\\
P_{1i}^{\mathrm{PNC}}(g_{1r},g_{2r})&=\left[\beta_1^* \log_2 e \frac{g_{3-i,r}}{g_{1r}+g_{2r}}-\frac{1}{2g_{ir}}\right]^+, \label{eqn:pncopt2}\\
P_{2}^{\mathrm{PNC}}(g_{2r})&=\left[\beta_2^*\log_2 e - \frac{1}{g_{2r}}\right]^+ ,\label{eqn:pncopt3}
\end{aligned}$$ where the asterisks denote optimality. $[\cdot]^+=\max(\cdot, 0)$ and (\[eqn:pncopt2\]) comes from (\[eqn:pnc1\]), (\[eqn:pnc2\]) and (\[eqn:pncopt1\]). (\[eqn:pncopt3\]) comes from the assumption that $\lambda_1 \leq \lambda_2$. It is observed that the total power allocation in the uplink, i.e., $P_{1}^{\mathrm{PNC}}(g_{1r},g_{2r})$, has the water-filling structure in (\[eqn:pncopt1\]), with the instantaneous channel gains $g_{1r}$ as well as $g_{2r}$ taken into account. By taking the first-order derivative of (\[eqn:pnckkt2\]) (similar to power allocation for broadcast channels in [@David]), the optimal power allocation in the downlink is summarized in Lemma \[downlink1\].
\[downlink1\] If $g_{r1}>g_{r2}$, the power allocation in the downlink is
- if $\beta_{3,p}^*g_{r1} \leq \beta_{3,c}^*g_{r2}$, then $$\begin{aligned}
\left\{
\begin{array}{ll}
P_{3,c}^{\mathrm{PNC}}=[\beta_{3,c}^*\log_2e-\frac{1}{g_{r2}}]^+\\
P_{3,p}^{\mathrm{PNC}}=0\\
\end{array}
\right.\end{aligned}$$
- if $\beta_{3,p}^*g_{r1}>\beta_{3,c}^*g_{r2}$ and $(\beta_{3,c}^*-\beta_{3,p}^*)\log_2e \leq \frac{g_{r1}-g_{r2}}{g_{r1}g_{r2}}$, then $$\begin{aligned}
\left\{
\begin{array}{ll}
P_{3,c}^{\mathrm{PNC}}=0\\
P_{3,p}^{\mathrm{PNC}}=[\beta_{3,p}^*\log_2e-\frac{1}{g_{r1}}]^+\\
\end{array}
\right.\end{aligned}$$
- if $\beta_{3,p}^*g_{r1}>\beta_{3,c}^*g_{r2}$ and $(\beta_{3,c}^*-\beta_{3,p}^*)\log_2e > \frac{g_{r1}-g_{r2}}{g_{r1}g_{r2}}$, then $$\begin{aligned}
\left\{
\begin{array}{ll}
P_{3,c}^{\mathrm{PNC}}=[\beta_{3,c}^*\log_2 e-\frac{(\beta_{3,p}^*g_{r1}-\beta_{3,c}^*g_{r2})}{(\beta_{3,c}^*-\beta_{3,p}^*)g_{r1}g_{r2}}]^+\\
P_{3,p}^{\mathrm{PNC}}=[\frac{(\beta_{3,p}^*g_{r1}-\beta_{3,c}^*g_{r2})}{(\beta_{3,c}^*-\beta_{3,p}^*)g_{r1}g_{r2}}]^+\\
\end{array}
\right.\end{aligned}$$
For the case that $g_{r1}<g_{r2}$, the power allocation for the third mode in the downlink can be derived in a similar manner and is omitted for brevity.
A multi-bisection method is used to numerically find the optimal solution to [**P1**]{}. The procedure is as follows.
1. For a given $f_1^{\mathrm{PNC}}$, we can obtain $\bar{R}_1^{\mathrm{PNC}}$ from (\[eqn:pnccon1\]) and then find the appropriate $\beta_{1}$ and $\bar{P}_1^{\mathrm{PNC}}$ from (\[eqn:pncopt1\]) and (\[eqn:pncopt2\]). We then compute $\gamma$ from (\[eqn:pnckkt1\]).
2. With the computed $\gamma$, we can then iteratively find the appropriate $\beta_2$, $\beta_{3,c}$ and $\beta_{3,p}$ for the second mode and the third mode satisfying (\[eqn:pnckkt1\]) and (\[eqn:pnckkt2\]) subject to the pre-defined accuracy, and obtain the associated $\bar{R}_i^{\mathrm{PNC}}$, $\bar{P}_i^{\mathrm{PNC}}$ and $f_i^{\mathrm{PNC}}$ from (\[eqn:pnccon3\])-(\[eqn:pnccon5\]), (\[eqn:pncopt3\]) and Lemma \[downlink1\]. For instance, the pre-defined accuracy for Mode 2 can be $|\bar{P}^{\mathrm{PNC}}_2-\beta_2\bar{R}^{\mathrm{PNC}}_2-\gamma|<\epsilon_1$ where $\epsilon_1$ is a very small number.
3. We then update $f_1^{\mathrm{PNC}}$ based on all the computed $f_i^{\mathrm{PNC}}$ and repeat the procedure in 1) and 2) until we arrive at the optimal solution subject to the pre-defined accuracy, e.g., the convergence constraint can be $1-\epsilon_2<\sum_{i=1}^3 f_i^{\mathrm{PNC}}<1$, where $\epsilon_2$ is also a very small positive number.
Note that $\epsilon_1<\epsilon_2$ must be satisfied since the inner iteration should have higher accuracy than the outer iteration in numerical computation; e.g., we can set $\epsilon_1=10^{-6}$ and $\epsilon_2=10^{-3}$. This algorithm is used to obtain the optimal Lagrange multipliers. In simulation, it converges to the global optimal solution in only tens of iterations.
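The inner iteration of step 2) — tuning a multiplier so that a water-filling allocation meets an average-rate target — can be illustrated with a simple single-link bisection. This is a simplified stand-in for the multi-bisection above, not the full algorithm; the exponential gain distribution and the target rate are arbitrary choices.

```python
import math
import random

LOG2E = math.log2(math.e)

def wf_power(beta, g):
    """Water-filling power [beta*log2(e) - 1/g]^+ for a channel of gain g."""
    return max(beta * LOG2E - 1.0 / g, 0.0)

def avg_rate(beta, gains):
    """Average rate achieved by the water-filling allocation over sampled gains."""
    return sum(math.log2(1.0 + wf_power(beta, g) * g) for g in gains) / len(gains)

def solve_beta(target, gains, lo=0.0, hi=1e6, iters=100):
    """Bisection on beta: avg_rate is nondecreasing in beta, so we bracket
    the multiplier that meets the average-rate constraint with equality."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if avg_rate(mid, gains) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(7)
gains = [random.expovariate(1.0) for _ in range(2000)]
beta = solve_beta(1.0, gains)  # multiplier meeting an average rate of 1 bit/use
```

Since the average rate is monotone in the multiplier, the bisection converges geometrically, which is why the inner loop of the multi-bisection needs only tens of iterations.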
Digital Network Coding and Time Sharing in the Downlink (DNC-TS) {#time sharing}
----------------------------------------------------------------
In this section, we consider the use of digital network coding in place of PNC, and seek similarly to minimize total average energy usage. The downlink time-shares the network-coded message and a message consisting of the remaining bits of the longer source message, which explains why we call this strategy DNC-TS. The four modes in this strategy are described as follows.
- [*Mode 1:*]{} $S_1$ and $S_2$ simultaneously transmit to $R$ at average rates $\bar{R}_{11}$ and rate $\bar{R}_{12}$ with average powers $\bar{P}_{11}$ and $\bar{P}_{12}$ in the multi-access uplink respectively.
- [*Mode 2:*]{} $R$ broadcasts to $S_1$ and $S_2$ at an average rate $\bar{R}_{2}$ with an average power $\bar{P}_{2}$ using digital network coding.
- [*Mode 3:*]{} $R$ transmits only to $S_1$ at an average rate $\bar{R}_{3}$ with an average power $\bar{P}_{3}$.
- [*Mode 4:*]{} $R$ transmits only to $S_2$ at an average rate $\bar{R}_{4}$ with an average power $\bar{P}_4$.
In order for the subscripts to match the transmission modes, we introduce the new definitions $g_{11} = g_{1r}$, $g_{12} = g_{2r}$, $g_2 = \min(g_{r1},g_{r2})$, $g_3 = g_{r1}$ and $g_4 = g_{r2}$ for each mode. In DNC-TS, Mode 3 (if $S_2$ transmits the longer message) and Mode 4 (if $S_1$ transmits the longer message) are useful for transmitting bits that cannot be network-coded due to the asymmetric message sizes. Without them, we would have to zero-pad the shorter message in order to apply network coding in Mode 2. However, since the network-coded message must be decoded by [*both*]{} sources, the rate in Mode 2 is constrained by the smaller of $g_{r1}$ and $g_{r2}$. In contrast, the message in Mode 3 is only for $S_1$, and therefore its rate is limited only by $g_{r1}$, and similarly for Mode 4. It is thus always beneficial, in the asymmetric case of interest here, to use Mode 3 or 4, in addition to the network-coded Mode 2.
Let $P_{1i}(g_{11}, g_{12})$ ($i = 1, 2$) be the transmit power of $S_i$ for a given gain pair ($g_{11}$,$g_{12}$). For the multi-access uplink transmit mode, i.e., Mode 1, we have $\bar{P}_1=\sum_{i=1}^2\bar{P}_{1i}$ and $$\begin{aligned}
\bar{P}_{1i}= E \{P_{1i}(g_{11},g_{12})\},\end{aligned}$$ where the expectation is taken over the distribution of channel gains. As stated in [@David], the optimal decoding order for a multi-access channel is to first decode the data from the stronger user, i.e., the user with the better uplink channel gain. This result follows from MAC-BC duality. Hence, with the assumption that $g_{11}<g_{12}$, we have $$\begin{aligned}
R_{11}& =\log_2(1+P_{11}g_{11}), \label{eqn:MA1} \\
R_{12}&=\log_2 \left(1+\frac{P_{12}g_{12}}{1+P_{11}g_{11}} \right). \label{eqn:MA2}
$$ From (\[eqn:MA1\]) and (\[eqn:MA2\]), the transmit powers of $S_1$ and $S_2$ for a given channel gain pair and transmit rate pair are derived in [@Chen2012two-way6] and are omitted here for brevity. In addition, for Modes 2 to 4, we have $$\begin{aligned}
\bar{R}_i = E\{R_i(g_i)\}=E\{ \log_2(1+P_i(g_i)g_i) \}, \label{eq:rate}$$ $$\begin{aligned}
\bar{P}_i = E\{P_i(g_i)\}= E\left\{ \frac{2^{R_i(g_i)}-1}{g_i} \right\}, \label{eq:power}$$ where $\bar{P}_i$ and $\bar{R}_i$ are averaged over the channel gain distribution. As in the PNC based strategy, a fraction $f_i$ ($i = 1,\cdots,4$) of time is allocated to Mode $i$. Therefore $f_i \bar{P}_i$ is proportional to the average energy used for transmission in Mode $i$. The optimal DNC-TS strategy is found by solving problem [**P2**]{}: $$\begin{aligned}
\min_{f_i,R_i(g_i)} && \sum_{i=1}^{4}f_i \bar{P}_i \label{opt}\end{aligned}$$ subject to $$\begin{aligned}
f_{1}\bar{R}_{1i} &\ge \lambda_i \quad\quad i=1,2 \label{lopt_1}\\
f_2\bar{R}_2 + f_4\bar{R}_4 &\ge \lambda_1 \label{lopt_2}\\
f_2\bar{R}_2 + f_3\bar{R}_3 &\ge \lambda_2 \label{lopt_3}\\
\sum_{i=1}^4 f_i &\leq 1 \label{lopt_4}\end{aligned}$$
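The rate–power inversion behind the averaged power expression (\[eq:power\]) above is elementary; as a minimal sketch (the function name is ours, not from the paper):

```python
def power_for_rate(R, g):
    """Invert R = log2(1 + P*g): transmit power needed to sustain
    rate R over a channel with (power) gain g."""
    return (2**R - 1) / g
```

This is the per-realization quantity whose expectation over the channel gain distribution gives $\bar{P}_i$.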
In [@Chen2012two-way], it was observed that Mode 3 and Mode 4 are never simultaneously active, due to the network coding gain. Hence, assuming that $\lambda_1 \le \lambda_2$, Mode 4 can be dropped and only Modes 1 to 3 will be used. In addition, it was observed in [@Chen2012two-way] that the rate constraints in (\[lopt\_2\]) and (\[lopt\_3\]) are met with equality. Hence we have $\lambda_1 = f_2^* \bar{R}^*_2$ and $\lambda_2-\lambda_1 = f_3^* \bar{R}^*_3$. Since [**P2**]{} is a convex optimization problem, its optimal solution can be derived from the KKT conditions. The details are presented in [@Chen2012two-way6] and are omitted here for brevity. For Modes 2 and 3, the optimal power allocations have a water-filling structure: $$P_i^*(g_i) = \left[\beta^*_i \log_2 e -\frac{1}{g_i}\right]^+ \quad i = 2,3. \label{eqn:kkt3}$$
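As a small numerical sketch of the water-filling rule above (assuming a generic gain $g$ and water level $\beta$; the function name is illustrative):

```python
import math

def waterfill_power(g, beta):
    """Water-filling rule P*(g) = [beta*log2(e) - 1/g]^+:
    transmit only when the channel gain exceeds the water level."""
    return max(beta * math.log2(math.e) - 1.0 / g, 0.0)
```

Poor channels (large $1/g$) are allotted zero power; good channels receive power growing with $\beta$.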
For Mode 1, resource allocation is, however, somewhat more complicated, and the result is summarized in the following lemma.
\[uplink\] If $g_{11}>g_{12}$ and $\lambda_1 \le \lambda_2$, the power allocation for the multi-access uplink transmission mode can be described as follows.
1. if $\beta_{12}^* \leq \beta_{11}^*$, then $$\begin{aligned}
\left\{
\begin{array}{ll}
P_{11}^*=[\beta_{11}^*\log_2e-\frac{1}{g_{11}}]^+ \\
P_{12}^*=0 \\
\end{array}
\right.\end{aligned}$$
2. if $\beta_{12}^* > \beta_{11}^*$, then
- if $\beta_{11}^*g_{11} \leq \beta_{12}^*g_{12}$, then $$\begin{aligned}
\left\{
\begin{array}{ll}
P_{11}^*=0\\
P_{12}^*=[\beta_{12}^*\log_2e-\frac{1}{g_{12}}]^+\\
\end{array}
\right.\end{aligned}$$
- if $\beta_{11}^*g_{11}>\beta_{12}^*g_{12}$ and $\beta_{12}^*-\beta_{11}^* \leq \frac{g_{11}-g_{12}}{g_{11}g_{12}}$, then $$\begin{aligned}
\left\{
\begin{array}{ll}
P_{11}^*=[\beta_{11}^*\log_2e-\frac{1}{g_{11}}]^+\\
P_{12}^*=0\\
\end{array}
\right.\end{aligned}$$
- if $\beta_{11}^*g_{11}>\beta_{12}^*g_{12}$ and $\beta_{12}^*-\beta_{11}^* > \frac{g_{11}-g_{12}}{g_{11}g_{12}}$, then $$\begin{aligned}
\left\{
\begin{array}{ll}
P_{11}^*=[\frac{(\beta_{11}^*g_{11}-\beta_{12}^*g_{12})\log_2e}{g_{11}-g_{12}}]^+\\
P_{12}^*=[\frac{(\beta_{12}^*-\beta_{11}^*)g_{11}\log_2e}{g_{11}-g_{12}}-\frac{1}{g_{12}}]^+\\
\end{array}
\right.\end{aligned}$$
For the case $g_{11}<g_{12}$, the power allocation for Mode 1 can be derived in a similar manner.
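The case structure of the lemma can be sketched as follows; this is a minimal illustration for the $g_{11}>g_{12}$ case, with illustrative names ($b_{11}$, $b_{12}$ standing for $\beta_{11}^*$, $\beta_{12}^*$), not an implementation from the paper:

```python
import math

LOG2E = math.log2(math.e)

def wf(beta, g):
    # single-user water-filling component [beta*log2(e) - 1/g]^+
    return max(beta * LOG2E - 1.0 / g, 0.0)

def mode1_powers(g11, g12, b11, b12):
    """Power pair (P11*, P12*) following the lemma's cases, for g11 > g12."""
    if b12 <= b11:
        return wf(b11, g11), 0.0
    if b11 * g11 <= b12 * g12:
        return 0.0, wf(b12, g12)
    if b12 - b11 <= (g11 - g12) / (g11 * g12):
        return wf(b11, g11), 0.0
    p11 = max((b11 * g11 - b12 * g12) * LOG2E / (g11 - g12), 0.0)
    p12 = max((b12 - b11) * g11 * LOG2E / (g11 - g12) - 1.0 / g12, 0.0)
    return p11, p12
```

In the first three cases only one source transmits; both transmit only in the final case.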
Digital Network Coding and Superposition in the Downlink (DNC-Sup) {#superposition}
------------------------------------------------------------------
The downlink channel from $R$ to $S_1$ and $S_2$ can be considered as a degraded broadcast channel, except that each receiver has full knowledge of the message being transmitted to the other. In Section II.C, the possibility of superposition coding at $R$ and successive interference cancellation (SIC) decoding at $S_1$ and $S_2$ was not considered. In this section, we allow the superposition of the network-coded message with the remainder of the longer message in a new Mode 5, described as follows for the case $\lambda_1 \le \lambda_2$, in which the message from $S_2$ is always the longer one.
- [*Mode 5:*]{} $R$ broadcasts to $S_1$ and $S_2$ at the average rate pair ($\bar{R}_{51}, \bar{R}_{52}$) on the downlink. Each node decodes the network coded message first and $S_1$ decodes the remaining bits of the larger message after subtracting the network coded message. $\bar{R}_{52}$ is the rate of the network coded message to both users and $\bar{R}_{51}$ that of the remaining bits of the longer message to $S_1$.
We now define $g_{51} = g_{r1}$, and $g_{52} = g_{r2}$, to be consistent with the notation of previous sections. If $g_{51}>g_{52}$, the required energy pair ($P_{51}$,$P_{52}$), for a given rate pair ($R_{51}$,$R_{52}$), is given by, $$\begin{aligned}
\left\{
\begin{array}{ll}
P_{51}(R_{51},R_{52})= \frac{2^{R_{51}}-1}{g_{51}} \\ P_{52}(R_{51},R_{52})= \frac{2^{R_{52}}-1}{g_{52}}\left( 1+ \frac{g_{52}(2^{R_{51}}-1)}{g_{51}}\right) \\\end{array} \label{eq:bc2}
\right.\end{aligned}$$ Similarly, if $g_{51}<g_{52}$, the required energy pair ($P_{51}$,$P_{52}$), for a given rate pair ($R_{51}$,$R_{52}$) is given by, $$\begin{aligned}
\left\{
\begin{array}{ll}
P_{51}(R_{51},R_{52})= \frac{2^{R_{51}}-1}{g_{51}} \\ P_{52}(R_{51},R_{52})= \frac{2^{R_{51}}(2^{R_{52}}-1)}{g_{51}} \\\end{array} \label{eq:bc3}
\right.\end{aligned}$$
For a given gain pair $(g_{51},g_{52}) $, the total energy usage in Mode 5 then is defined as $P_5=P_{51}+P_{52}$.
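A minimal numerical sketch of the two power-pair expressions above, covering both gain orderings (the function name is ours, not from the paper):

```python
def mode5_powers(R51, R52, g51, g52):
    """Power pair (P51, P52) for superposition in Mode 5."""
    P51 = (2**R51 - 1) / g51
    if g51 > g52:
        # network-coded layer must survive interference from the excess layer
        P52 = (2**R52 - 1) / g52 * (1 + g52 * (2**R51 - 1) / g51)
    else:
        P52 = 2**R51 * (2**R52 - 1) / g51
    return P51, P52
```

The sum `P51 + P52` gives the Mode 5 energy per unit time for a gain pair, as defined in the text.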
Note that in Section \[time sharing\] Mode 5 was not allowed. Introducing Mode 5 turns problem [**P2**]{} into $$\begin{aligned}
\min_{f_i,R_i(g_i)} && \sum_{i=1}^{5}f_i \bar{P}_i \label{P3_opt}\end{aligned}$$ subject to $$\begin{aligned}
f_{1}\bar{R}_{1i} &\ge \lambda_i \quad\quad i=1,2 \label{P3_lopt_1}\\
f_2\bar{R}_2 + f_5\bar{R}_{52} &\ge \lambda_1 \label{P3_lopt_2}\\
f_3\bar{R}_3 + f_5\bar{R}_{51} &\ge \lambda_2-\lambda_1 \label{P3_lopt_3}\\
\sum_{i=1}^5 f_i &\leq 1 \label{P3_lopt_4}\end{aligned}$$
This new problem is equivalent to a simpler optimization problem with only Mode 1 and Mode 5 used. The conclusion is summarized below and the proof is given thereafter.
\[lemma:equivalence\] The solution to (\[P3\_opt\]) must necessarily have only Mode 1 and Mode 5 active, i.e., $$\begin{aligned}
f_2^*=f_3^*=f_4^*=0,\end{aligned}$$ where $f_i^*$ is the optimal value of $f_i$.
Note that from [@Chen2012two-way], at most one of Mode 3 and Mode 4 will be active, and this also applies to [**P2**]{}. With the assumption that $\lambda_1<\lambda_2$, we conclude that $f_4^*=0$. This follows from the rate gain of network coding, which should therefore be exploited to the fullest extent in our strategy.
Below we prove $f_2^*=f_3^*=0$ by contradiction. Suppose in the optimal solution, we have both $f_2^*$ and $f_3^*$ positive. For a given gain pair in the downlink, ($g_{51}$,$g_{52}$), the total energy consumed in Mode 2 and Mode 3 is given by, $$\begin{aligned}
E_1(g_{51},g_{52})=f_2^*\frac{2^{R_2^*}-1}{\min(g_{51},g_{52})}+f_3^*\frac{2^{R_3^*}-1}
{g_{51}}. \label{E1}\end{aligned}$$
Now consider replacing Modes 2 and 3 with Mode 5. Since the bit rates in Modes 2 and 3 were $f_2^* R_2^*/(f_2^* + f_3^*)$ and $f_3^*R_3^*/(f_2^* + f_3^*)$ respectively, when Modes 2 and 3 are replaced by Mode 5, we must have broadcast-channel rates of $$\begin{aligned}
R_{51}^{'}(g_{51},g_{52})=\frac{f_3^*R_3^*}{f_2^*+f_3^*},\\
R_{52}^{'}(g_{51},g_{52})=\frac{f_2^*R_2^*}{f_2^*+f_3^*}.\end{aligned}$$ The corresponding total consumed energy for this gain pair per unit time is given by $$\begin{aligned}
E_5^{'}=\left\{
\begin{array}{ll}
(f_2^*+f_3^* )\left(\frac{2^{R_{52}^{'}+R_{51}^{'}}-2^{R_{52}^{'}}}{g_{51}}+\frac{2^{R_{52}^{'}}-1}{g_{52}}\right) \hspace{0.05cm} \mbox{if $g_{51}\ge g_{52}$,} \\
(f_2^*+f_3^*)\frac{2^{R_{51}^{'}+R_{52}^{'}}-1}{g_{51}} \quad \quad \mbox{Otherwise.}\\
\end{array}
\right.\end{aligned}$$
We can now compare the downlink energy consumption of Mode 5 with that of time sharing of Mode 2 and Mode 3. When $g_{51} > g_{52}$, we have the comparison in (\[eqn:1\])–(\[eqn:4\]), which shows that Mode 5 uses less average energy than time sharing of Mode 2 and Mode 3.
$$\begin{aligned}
E_5^{'}-E_1&=\left(f_2^*+f_3^* \right)\left(\frac{2^{R_{52}^{'}+R_{51}^{'}}}{g_{51}}-\frac{1}{g_{52}}-2^{R_{52}^{'}}\left(\frac{1}{g_{51}}-\frac{1}{g_{52}}\right) \right)-\left( f_2^*\frac{2^{R_2^*}-1}{g_{52}}+f_3^*\frac{2^{R_3^*}-1}
{g_{51}} \right) \label{eqn:1}\\
&=\left(f_2^*+f_3^* \right)\left(\frac{2^{\frac{f_2^*R_{2}^{*}+f_3^*R_{3}^{*}}{f_2^*+f_3^*}}}{g_{51}}-\frac{1}{g_{52}}-2^{R_{52}^{'}}\left(\frac{1}{g_{51}}-\frac{1}{g_{52}}\right) \right)-\left( f_2^*\frac{2^{R_2^*}-1}{g_{52}}+f_3^*\frac{2^{R_3^*}-1}
{g_{51}} \right) \label{eqn:11}\\
&<f_2^*\frac{2^{R_2^*}}{g_{51}}+f_3^*\frac{2^{R_3^*}}
{g_{51}}-\left( f_2^*\frac{2^{R_2^*}}{g_{52}}+f_3^*\frac{2^{R_3^*}}
{g_{51}} \right)-(f_2^*+f_3^*)2^{R_{52}^{'}}\left(\frac{1}{g_{51}}-\frac{1}{g_{52}}\right)
+f_3^*\left(\frac{1}{g_{51}}-\frac{1}{g_{52}}\right)\label{eqn:2} \\
&=\left(f_2^*2^{R_2^*}-(f_2^*+f_3^*)2^{R_{52}^{'}}\right)\left(\frac{1}{g_{51}}-\frac{1}{g_{52}}\right)+f_3^*\left(\frac{1}{g_{51}}-\frac{1}{g_{52}}\right) \label{eqn:3} \\
&=\left(f_2^*(2^{R_2^*}-1)-(f_2^*+f_3^*)(2^{R_{52}^{'}}-1)\right)\left(\frac{1}{g_{51}}-\frac{1}{g_{52}}\right)<0. \label{eqn:4}\end{aligned}$$
Note that (\[eqn:2\]) follows from convexity and (\[eqn:4\]) follows from the fact that $f(t)=t(2^{\frac{a}{t}}-1)$ ($a>0,t>0$) is a strictly decreasing function of $t$ and $g_{51}>g_{52}$. Hence Mode 2 and Mode 3 should be replaced by Mode 5 in this case to minimize energy usage.
On the other hand, if $g_{51}<g_{52}$, we have $$\begin{aligned}
&E_5^{'}-E_1\\
=&\left(f_2^*+f_3^* \right)\frac{2^{R_{51}^{'}+R_{52}^{'}}-1}{g_{51}}
-f_2^*\frac{2^{R_2^*}-1}{g_{51}}-f_3^*\frac{2^{R_3^*}-1}
{g_{51}} \label{eqn:21}\\
=&\frac{\left(f_2^*+f_3^* \right)2^{\frac{f_2^*R_{2}^{*}+f_3^*R_{3}^{*}}{f_2^*+f_3^*}}-f_2^*2^{R_2^*}-f_3^*2^{R_3^*}}{g_{51}}<0 \label{eqn:22}
$$ where (\[eqn:22\]) follows from convexity.
Hence we have proved that for any channel gain pair we can replace Mode 2 and Mode 3 with Mode 5 to reduce energy consumption while delivering the same rates. Averaging over all possible channel gain pairs, it follows directly that the average energy consumed by employing Mode 5 is less than that with Mode 2 and Mode 3. The assumed solution with positive $f_2^*$ and $f_3^*$ thus could not be optimal.
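The inequality $E_5^{'} \le E_1$ established above can be sanity-checked numerically. The sketch below (not from the paper; draws are illustrative and not feasibility-checked) evaluates both energies over random rate, fraction, and gain draws:

```python
import random

def e_timeshare(f2, f3, R2, R3, g51, g52):
    # E1: energy of time sharing Modes 2 and 3 at the given rates
    return f2 * (2**R2 - 1) / min(g51, g52) + f3 * (2**R3 - 1) / g51

def e_mode5(f2, f3, R2, R3, g51, g52):
    # E5': energy of the replacing Mode 5 with the equivalent rate split
    f = f2 + f3
    R51, R52 = f3 * R3 / f, f2 * R2 / f
    if g51 >= g52:
        return f * ((2**(R51 + R52) - 2**R52) / g51 + (2**R52 - 1) / g52)
    return f * (2**(R51 + R52) - 1) / g51

# spot-check E5' <= E1 over random draws (a sanity check, not a proof)
random.seed(0)
ok = all(
    e_mode5(f2, f3, R2, R3, g51, g52)
    <= e_timeshare(f2, f3, R2, R3, g51, g52) + 1e-9
    for f2, f3, R2, R3, g51, g52 in (
        tuple(random.uniform(0.1, 2.0) for _ in range(6)) for _ in range(1000)
    )
)
```

Every draw satisfies the inequality, consistent with the convexity argument in the proof.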
*Remarks*: Intuitively, with SIC decoding allowed at each receiver, DNC-Sup is able to achieve all points in the capacity region of the broadcast channel while DNC-TS cannot. Therefore DNC-Sup outperforms DNC-TS.
Hence, an equivalent optimization problem [**P2**]{}’, which uses only Mode 1 and Mode 5 and aims to minimize the total average energy usage, is formulated as follows. $$\begin{aligned}
\min_{f_i,R_i(g_i)} f_1\bar{P}_1+f_5\bar{P}_5\end{aligned}$$ subject to $$\begin{aligned}
f_1\bar{R}_{1i} &\geq \lambda_i \quad i=1,2\\
f_5\bar{R}_{51} &\geq \lambda_2-\lambda_1 \label{eq:excess}\\
f_5\bar{R}_{52} &\geq \lambda_1 \label{eq:common}\\
f_1 + f_5 &\leq 1\end{aligned}$$
The optimal solution of [**P2’**]{} is straightforward to obtain and was presented in \[8\]. Note that the optimal power allocations in both modes are identical to those of Mode 1 and Mode 3 in the PNC-based strategy.
It is noted that [**P2**]{} and [**P2**]{}’ differ only in the downlink transmission. The former time shares the digital network-coded bits and the remaining bits of the larger message in the downlink and the latter superimposes the network-coded bits and the remaining bits of the larger message in the downlink.
Codeword Superposition in the Downlink (CW-Sup) {#other}
-----------------------------------------------
In [@Rankov] and [@Boche], the authors discussed a codeword superposition of the two original messages from $S_1$ and $S_2$ at $R$ for transmission in the downlink. As each source has the knowledge of the message transmitted by itself, it can subtract this message and decode the intended message without interference. Hence the broadcast channel in the downlink is equivalent to two interference-free AWGN channels.
Note that in this codeword superposition strategy, the relay node simply superimposes the two messages from the two sources and no network coding technique is applied. The transmit power of the relay node is then the sum of the transmit powers for the transmission of each of the two messages to its intended source. For convenience, we refer to this downlink transmission strategy as Mode 6; we shall analyze Mode 6 in our setup and minimize the total energy usage. Note that in [@Rankov; @Boche], the authors assumed that each node is subject to an individual power constraint and focused on the throughput region of this network. In our setup, however, we are interested in the total energy usage of the system rather than in deriving the throughput region.
- [*Mode 6:*]{} On average, $R$ broadcasts to $S_1$ and $S_2$ at rate pair ($\bar{R}_{61}, \bar{R}_{62}$) in the downlink. Each node subtracts the message originating from itself and then decodes the intended message from the other source.
For a given rate pair (${R}_{61}, {R}_{62}$) and gain pair ($g_{61}$,$g_{62}$), the energy required is given by $$\begin{aligned}
P_{6i}(g_{6i})&=\frac{2^{R_{6i}}-1}{g_{6i}} \quad i=1,2, \label{Boche1}
\end{aligned}$$ where each source enjoys an interference-free Gaussian channel with SIC within one slot. Hence the capacity on either side in the downlink is given by the Shannon formula $C=\log_2(1+P_{6i}g_{6i})$ ($i=1,2$) and the total power required is $P_6=P_{61}+P_{62}$. In this section, we are interested in deriving the minimal energy usage with Mode 1 and Mode 6. We are further interested in comparing energy usage of the designed transmission strategy in the previous section with the one consisting of Mode 1 and Mode 6.
The optimization problem to minimize energy usage with Mode 1 and Mode 6 can be formulated as [**P3**]{}, $$\begin{aligned}
\min_{f_i,R_i(g_i)} f_1\bar{P}_1+f_6\bar{P}_6\end{aligned}$$ subject to $$\begin{aligned}
f_1\bar{R}_{1i} &\geq \lambda_i \quad i=1,2 \label{eq:up6}\\
f_6\bar{R}_{6i} &\geq \lambda_{3-i} \quad i=1,2 \label{eq:down6}\\
f_1 + f_6 &\leq 1\end{aligned}$$
Since [**P3**]{} is a convex optimization problem, we can derive the KKT conditions and obtain the optimal solution. Hence, the optimal solution to [**P3**]{} has $$\begin{aligned}
&P_{6i}^*(g_{6i}) = \left[ \beta^*_{6i} \log_2 e -\frac{1}{g_{6i}} \right]^+, \quad i=1,2 \label{eqn:6kkt3}
$$ where $\beta^*_{6j}$ ($j=1,2$) are Lagrangian multipliers. It is observed in (\[eqn:6kkt3\]) that the optimal power allocation again has a water-filling structure. Note that the optimal power allocation for the uplink is identical to Lemma \[uplink\] and is hence omitted.
Comparing DNC-Sup and CW-Sup, we can prove the following lemma.
\[codeword\] For ergodic fading channels, DNC-Sup performs no worse than CW-Sup in terms of energy usage, which indicates the superiority of our proposed transmission strategy in the downlink.
Suppose in the optimal solution to [**P3**]{}, the optimal rate pair for a given gain pair ($g_{61}$,$g_{62}$) in the downlink is $R_{61}^*$ and $R_{62}^*$ and the energy required is given in (\[Boche1\]).
We construct another solution which employs Mode 5 with the optimal assigned time fraction $f_6^*$.
Comparing the link gains $g_{61}$ and $g_{62}$ and the associated transmit rates $R_{61}^*$ and $R_{62}^*$, there are four cases to be investigated.
- [Case i)]{}: $g_{61}>g_{62} $ and $R_{61}^*>R_{62}^*$.
In Case i), by employing Mode 5, we have $$\begin{aligned}
R_{51}^{'}(g_{51},g_{52})&= R_{61}^*-R_{62}^*, \\
R_{52}^{'}(g_{51},g_{52})&= R_{62}^*.\end{aligned}$$ The energy required is given by, $$\begin{aligned}
E_5^{'}&=\frac{2^{R_{52}^{'}+R_{51}^{'}}}{g_{51}}-\frac{1}{g_{52}}-2^{R_{52}^{'}}\left(\frac{1}{g_{51}}-\frac{1}{g_{52}}\right)\\
&=\frac{2^{R_{61}^*}}{g_{61}}-\frac{1}{g_{62}}
-2^{R_{62}^*}\left(\frac{1}{g_{61}}-\frac{1}{g_{62}}\right).\end{aligned}$$
Hence we have $$\begin{aligned}
E_5^{'}-E_6^*&=\frac{1}{g_{61}}-2^{R_{62}^*}\frac{1}{g_{61}} \le 0\end{aligned}$$ since $2^{R_{62}^*} \ge 1$.
- [Case ii)]{}: $g_{61}>g_{62}$ and $R_{61}^*<R_{62}^*$.
In this case, we can simply transmit a network coded message at rate $R_{61}^*$, in other words, we have $$\begin{aligned}
R_{51}^{'}(g_{51},g_{52})&= 0, \\
R_{52}^{'}(g_{51},g_{52})&= R_{61}^*.\end{aligned}$$ Both sources can obtain $R_{61}^*$ bits of message, hence $S_2$ can obtain more information than in Mode 6. The energy required in Mode 5, is given by $$\begin{aligned}
E_5^{'}=\frac{2^{R_{61}^*}-1}{g_{62}}<
\frac{2^{R_{62}^*}-1}{g_{62}}+\frac{2^{R_{61}^*}-1}{g_{61}}=E_6^*.\end{aligned}$$
- [Case iii)]{}. $g_{61}<g_{62}$ and $R_{61}^*>R_{62}^*$.
In Case iii), by employing Mode 5, we have $$\begin{aligned}
R_{51}^{'}(g_{51},g_{52})&= R_{61}^*-R_{62}^*, \\
R_{52}^{'}(g_{51},g_{52})&= R_{62}^*.\end{aligned}$$ The energy required is given by, $$\begin{aligned}
E_5^{'}=\frac{2^{R_{51}^{'}+R_{52}^{'}}-1}{g_{61}}=\frac{2^{R_{61}^*}-1}{g_{61}}.\end{aligned}$$
Hence we have $$\begin{aligned}
E_5^{'}=\frac{2^{R_{61}^*}-1}{g_{61}}<
\frac{2^{R_{62}^*}-1}{g_{62}}+\frac{2^{R_{61}^*}-1}{g_{61}}=E_6^*.\end{aligned}$$
- [Case iv)]{}. $g_{61}<g_{62}$ and $R_{61}^*<R_{62}^*$.
In this case, we can also simply transmit a network coded message at rate $R_{62}^*$, in other words, we have $$\begin{aligned}
R_{51}^{'}(g_{51},g_{52})&= 0, \\
R_{52}^{'}(g_{51},g_{52})&= R_{62}^*.\end{aligned}$$ Both sources can obtain $R_{62}^*$ bits of message, hence $S_1$ can obtain more information than in Mode 6. The energy required in Mode 5, is given by $$\begin{aligned}
E_5^{'}=\frac{2^{R_{62}^*}-1}{g_{61}}<
\frac{2^{R_{62}^*}-1}{g_{62}}+\frac{2^{R_{61}^*}-1}{g_{61}}=E_6^*.\end{aligned}$$
Hence we have verified that in all cases Mode 5 performs no worse than Mode 6 in terms of energy usage.
The Optimal Scheme
------------------
We have so far analyzed four different strategies for TWRNs. Here we design an algorithm to obtain the optimal solution among these strategies for TWRNs with arbitrary rate pair requirements. Note that we have shown that DNC-Sup outperforms DNC-TS and CW-Sup. Hence only PNC-Sup and DNC-Sup are considered in the optimal solution in terms of energy usage, which is given as follows.
1. For a given rate pair requirement, we solve [**P1**]{} for PNC-Sup and [**P2’**]{} for DNC-Sup to obtain their total energy usage.
2. We compare the total energy usage of PNC-Sup and DNC-Sup. If PNC-Sup uses less energy, it is selected as the optimal transmit strategy. Otherwise, DNC-Sup is selected.
In this way, we always select the optimal strategy and the minimum total average energy usage is then achieved for arbitrary rate pair requirements. For reference, we call it [**Popt**]{}.
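The [**Popt**]{} selection rule above reduces to picking the strategy with the smaller solved energy; a trivial sketch (names are illustrative):

```python
def select_strategy(energy_by_strategy):
    """Popt rule: pick the candidate strategy with the smallest
    total average energy usage, e.g. {"PNC-Sup": E1, "DNC-Sup": E2}."""
    return min(energy_by_strategy, key=energy_by_strategy.get)
```

Ties are broken arbitrarily here; the paper's rule prefers PNC-Sup when the two energies are equal.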
Numerical Results
=================
We now present numerical results to verify our findings. Noise at each node is assumed to be Gaussian with zero mean and unit variance and all links are assumed to be Rayleigh fading channels. The associated instantaneous channel state information at the transmitter (CSIT), as well as the channel statistics, are assumed to be known to the corresponding transmit nodes. Moreover, coding and decoding algorithms are not employed directly; rather, we use the theoretical achievable rate over each link as the transmit rate. It is also noted that the unit of the rate requirement on either side is frames per slot. In addition, the constant overhead energy usage (for circuit operation) of a TWRN is assumed to be zero, as it does not affect the transmit energy usage of the different strategies.
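The averages $\bar{P}_i$ in the optimization problems are expectations over the Rayleigh-fading gain distribution; a minimal Monte-Carlo sketch of such an average (parameters and names are illustrative, not the paper's simulation code):

```python
import math
import random

def avg_waterfill_power(beta, mean_gain=1.0, n=50000, seed=1):
    """Estimate E{[beta*log2(e) - 1/g]^+} when the power gain g = |h|^2
    is exponentially distributed (Rayleigh-fading amplitude)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        g = rng.expovariate(1.0 / mean_gain)
        if g > 0.0:  # guard against a zero draw
            total += max(beta * math.log2(math.e) - 1.0 / g, 0.0)
    return total / n
```

Deep fades contribute zero power, so the average lies strictly below the no-fading value $\beta \log_2 e$.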
For comparison with [**P1**]{}, we also present the achievable minimal average energy used for a TWRN by PNC with zero padding (PNC-ZP). Specifically, when the messages from the two sources are not of equal length, we zero pad the shorter message to make it equal in length to the longer message. The associated optimization problem is referred to as [**P0**]{}: $$\begin{aligned}
\min_{f_i^{\mathrm{PNC}},R_i^{\mathrm{PNC}}} \quad \sum_{i=1}^2 f_i^{\mathrm{PNC}}\bar{P}_i^{\mathrm{PNC}} \label{eqn:pnczpobj}\end{aligned}$$ subject to the following constraints, $$\begin{aligned}
&f_1^{\mathrm{PNC}}\bar{R}_1^{\mathrm{PNC}} \ge \max(\lambda_1,\lambda_2) \label{eqn:pnczpcon1}\\
&f_2^{\mathrm{PNC}}\bar{R}_2^{\mathrm{PNC}} \ge \max(\lambda_1,\lambda_2) \label{eqn:pnczpcon2}\\
&P_{11}^{\mathrm{PNC}}g_{1r}=P_{12}^{\mathrm{PNC}}g_{2r} \label{eqn:pnczpcon3}\\
&f_1^{\mathrm{PNC}} + f_2^{\mathrm{PNC}} \leq 1 \label{eqn:pnczpcon4}\end{aligned}$$ where (\[eqn:pnczpcon1\]) and (\[eqn:pnczpcon2\]) follow from the fact that the two transmitted messages have the same length after zero-padding. $f_1^{\mathrm{PNC}}$ is the time fraction assigned to transmission on the uplink and $f_2^{\mathrm{PNC}}$ to transmission over the downlink. This is a standard convex optimization problem and the solution to [**P0**]{} is given by, $$\begin{aligned}
P_{1i}^{\mathrm{PNC}}(g_{1r},g_{2r})&=\left[\beta_1^* \log_2 e \frac{g_{3-i,r}}{g_{1r}+g_{2r}}-\frac{1}{2g_{ir}}\right]^+, \label{eqn:pnzpcopt2}\\
P_2^{\mathrm{PNC}}(g_{r1},g_{r2})&=\left[\beta_2^*\log_2 e - \frac{1}{\min(g_{r1},g_{r2})}\right]^+ ,\label{eqn:pnzpopt3}
$$where $\beta_1^*$ and $\beta_2^*$ are Lagrangian multipliers for (\[eqn:pnczpcon1\]) and (\[eqn:pnczpcon2\]) respectively.
For clarity, Table \[Tab1\] lists the different optimization problems with the corresponding transmit strategies.
[|c|c|m[3.7cm]{}|]{} Problem Index & Strategy Employed & Detailed Description\
[**P0**]{} & PNC-ZP & Physical layer network coding with zero padding for the shorter message in the downlink\
[**P1**]{} & PNC-Sup & Physical layer network coding with superposition in the downlink\
[**P2**]{} & DNC-TS & Time sharing of the DNC message and the remaining bits of the larger message in the downlink\
[**P2**]{}’ & DNC-Sup & Superposition of the DNC message and the remaining bits of the larger message in the downlink\
[**P3**]{} & CW-Sup & Superposition of the two original codeword messages in the downlink\

\[Tab1\]
In Fig. \[fig:energy\_usage\_strategy1\], the minimal average transmit energy usage per slot of the different strategies is compared under symmetric rate requirements ($\lambda_1=\lambda_2$). Under symmetric traffic, DNC-TS and DNC-Sup are identical, hence only the solution to DNC-Sup is plotted. PNC-ZP likewise performs identically to PNC-Sup, as the two coincide in the symmetric traffic case. It is observed that PNC-Sup outperforms DNC-Sup and CW-Sup at relatively high data rate requirements, i.e., $\lambda_i>1.2$ frames/slot, while at low data rate requirements DNC-Sup and CW-Sup use less total energy than PNC-Sup. This is because, with low data rate requirements, interference in joint decoding in the multi-access uplink plays a negligible role in degrading performance, whereas with high data rate requirements it dominates the uplink performance of DNC-Sup and CW-Sup. PNC-Sup, on the other hand, performs better in the high-rate regime because the relay node only decodes a function of the individual messages in the uplink instead of jointly decoding two messages. It is also interesting to note that DNC-Sup and CW-Sup are quite close to each other under symmetric traffic: intuitively, DNC-Sup is bounded by the instantaneous minimum gain of the two downlink channels, while CW-Sup must transmit more message bits in the downlink. As designed, [**Popt**]{} always selects the optimal strategy in terms of energy usage and performs best for all rate pair requirements.
We next discuss the performance of the different strategies for asymmetric traffic scenarios ($\lambda_1 \neq \lambda_2$).
In Fig. \[fig:energy\_usage\_bc\], we compare the optimal time-sharing solution in [@Chen2012two-way], PNC-ZP, PNC-Sup, DNC-TS and DNC-Sup. It is observed that with multi-access transmission, DNC-TS uses less energy than the solution in [@Chen2012two-way], as the latter only considers orthogonal, time-shared transmissions, a setting investigated in terms of the achievable throughput region in [@fong2011practical]. Hence our strategy also outperforms the strategy in [@fong2011practical] in terms of energy usage.
It is observed that with superposition coding in the downlink by DNC-Sup for asymmetric traffic, we can perform even better in terms of energy, which validates Lemma \[lemma:equivalence\]. It should also be noted that with $\lambda_1$ and $\lambda_2$ approaching each other, the energy benefit from the superposition coding on the downlink gradually decreases as fewer remaining bits can be superimposed on the network coded message. It is interesting to note that the minimal energy usage by PNC-ZP is a constant when $\lambda_1<\lambda_2$ and that it performs worse than PNC-Sup, which is due to the fact that with zero padding the virtual traffic is determined by $\max(\lambda_1,\lambda_2)$ and will incur unnecessary energy usage.
In Fig. \[fig:energy\_comparison\_boche\], we compare PNC-Sup, DNC-TS, DNC-Sup and CW-Sup. It is observed that with small rate pair requirements, PNC-Sup performs worse than the other strategies. It is also observed that CW-Sup is worse than DNC-Sup in terms of energy usage, which verifies our theoretical observation in Lemma \[codeword\]. However, it is noted that DNC-TS, which employs time sharing in the downlink, consumes more energy resources than CW-Sup. This intuitively follows from two facts. The first is that the network coded message enjoys a channel whose average link gain is less than that of either link in the downlink, while in CW-Sup, messages are sent over interference-free individual channels whose average link gains are unity. The second is that in CW-Sup, both messages can use more time resources for transmission in the downlink, whereas in DNC-TS, the network-coded message and the excess bits of the larger message compete for time-resource allocation.
In Fig. \[fig:unequal\_gain\], we compare DNC-Sup and CW-Sup for different average channel gain pairs. It can be observed that the total average energy used per slot in the case of $\bar{g}_{r1}=1,\bar{g}_{r2}=2$ is less than that in the case of $\bar{g}_{r1}=2,\bar{g}_{r2}=1$ for both DNC-Sup and CW-Sup strategies. For DNC-Sup, this is because the energy consumption for the remaining bits of the larger message is determined by $\bar{g}_{51}$, i.e., $\bar{g}_{r1}$ and the case with $\bar{g}_{r1}=2$ obviously performs better than the case with $\bar{g}_{r1}=1$. For CW-Sup, it is because the larger message is transferred over the link $R-S_2$ on the downlink. Hence, the performance of CW-Sup improves with higher $\bar{g}_{2r}$.
Conclusion
==========
In this work, the problem of minimizing energy usage in a TWRN over a fading channel was formulated and solved for various transmit strategies, and the strategies were compared. Three transmission strategies were considered: physical-layer network coding (PNC), digital network coding (DNC), and codeword superposition (CW-Sup). For the DNC downlink, a simple time-sharing strategy of the digital network-coded message and the remaining bits of the larger message (DNC-TS) was first considered and then extended to a superposition strategy which superimposes these two messages (DNC-Sup). Between DNC and CW-Sup, the superiority of the superposition of the network-coded message and the excess bits of the larger message (DNC-Sup) was demonstrated theoretically in terms of energy usage. More importantly, it was shown that, in terms of total energy usage, the specific PNC scheme performs better than DNC in the regime of relatively high data rate requirements, and worse in the regime of relatively low data rate requirements. This provides some insight on when to select PNC or DNC for TWRNs. Finally, an optimal algorithm to always select the best strategy in terms of energy usage was presented.
[99]{}
R. Ahlswede, N. Cai, R. Li and R. W. Yeung, “Network information flow," *IEEE Trans. Inf. Theory*, vol. 46, no. 4, pp. 1204–1216, 2000.

C. H. Liu and F. Xue, “Network coding for two-way relaying: rate region, sum rate and opportunistic scheduling," in *Proc. IEEE Int. Conf. Communications (ICC’08)*, May 2008, pp. 1044–1049.

S. L. Fong, M. Fan and R. W. Yeung, “Practical network coding on three-node point-to-point relay networks," in *Proc. IEEE Int. Symp. Information Theory (ISIT’11)*, Aug. 2011, pp. 2055–2059.

Z. Chen, T. J. Lim and M. Motani, “Digital network coding aided two-way relaying: energy minimization and queue analysis," *IEEE Trans. Wireless Commun.*, vol. 12, no. 4, pp. 1947–1957, Apr. 2013.

Z. Chen, T. J. Lim and M. Motani, “Energy optimization for stable two-way relaying with a multi-access uplink," in *Proc. IEEE Wireless Communications and Networking Conf. (WCNC’13)*, Apr. 2013.

Z. Chen, T. J. Lim and M. Motani, “Two-way relay networks optimized for Rayleigh fading channels," in *Proc. IEEE Global Communications Conf. (Globecom’13)*, 2013.

B. Rankov and A. Wittneben, “Spectral efficient signaling for half-duplex relay channels," in *Proc. Asilomar Conf. Signals, Systems and Computers (ACSSC’05)*, Oct. 2005, pp. 1066–1071.

T. Oechtering and H. Boche, “Stability region of an optimized bidirectional regenerative half-duplex relaying protocol," *IEEE Trans. Commun.*, vol. 56, no. 9, pp. 1519–1529, 2008.

W. Nam, S.-Y. Chung and Y. H. Lee, “Capacity of the Gaussian two-way relay channel to within 1/2 bit," *IEEE Trans. Inf. Theory*, vol. 56, no. 11, pp. 5488–5494, 2010.

T. Koike-Akino, P. Popovski and V. Tarokh, “Optimized constellation for two-way wireless relaying with physical network coding," *IEEE J. Sel. Areas Commun.*, vol. 27, pp. 773–787, Jun. 2009.

S. Zhang, S. Liew and P. Lam, “Physical layer network coding," in *Proc. ACM Int. Conf. Mobile Computing and Networking (Mobicom’06)*, Sep. 2006, pp. 358–365.

R. Louie, Y. Li and B. Vucetic, “Practical physical layer network coding for two-way relay channels: performance analysis and comparison," *IEEE Trans. Wireless Commun.*, vol. 9, no. 2, pp. 764–777, 2010.

B. Nazer and M. Gastpar, “Reliable physical layer network coding," *Proceedings of the IEEE*, vol. 99, no. 3, pp. 438–460, 2011.

B. Nazer and M. Gastpar, “Compute-and-forward: harnessing interference through structured codes," *IEEE Trans. Inf. Theory*, vol. 57, no. 10, pp. 6463–6486, 2011.

P. Popovski and H. Yomo, “Physical network coding in two-way wireless relay channels," in *Proc. IEEE Int. Conf. Communications (ICC’07)*, Jun. 2007, pp. 707–712.

A. S. Avestimehr, A. Sezgin and D. Tse, “Capacity of the two-way relay channel within a constant gap," *European Trans. Telecommun.*, vol. 21, no. 4, pp. 363–374, 2010.

M. P. Wilson, K. Narayanan, H. Pfister and A. Sprintson, “Joint physical layer coding and network coding for bidirectional relaying," *IEEE Trans. Inf. Theory*, vol. 56, no. 11, pp. 5641–5654, 2010.

S. Katti, S. Gollakota and D. Katabi, “Embracing wireless interference: analog network coding," *ACM SIGCOMM Computer Communication Review*, vol. 37, no. 4, pp. 397–408, 2007.

S. P. Boyd and L. Vandenberghe, *Convex Optimization*. Cambridge University Press, 2004.

D. Tse and P. Viswanath, *Fundamentals of Wireless Communication*. Cambridge University Press, 2005.
[^1]: Part of this work was accepted by IEEE Globecom 2013. This work was partially funded by grant R-263-000-649-133 and R-263-000-579-112 from the Ministry of Education Academic Research Fund.
The authors are with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore, 117583. Tel: +65 6601 2055. Fax: +65 6779 1103. Emails: [email protected]; [email protected]; [email protected].
[^2]: [Note that research on the achievable rate of PNC is still an open topic. Although an ideal exchange rate of $1/2 \log_2 (1 + SNR)$ is suggested in \[15\], the achievable PNC rate exploits the recent result given in \[17\].]{}
---
abstract: 'It was recently pointed out that the distribution of times between solar flares (the flare waiting-time distribution) follows a power law, for long waiting times. Based on 25 years of soft X-ray flares observed by Geostationary Operational Environmental Satellite (GOES) instruments it is shown that 1. the waiting-time distribution of flares is consistent with a time-dependent Poisson process, and 2. the fraction of time the Sun spends with different flaring rates approximately follows an exponential distribution. The second result is a new phenomenological law for flares. It is shown analytically how the observed power-law behavior of the waiting times originates in the exponential distribution of flaring rates. These results are argued to be consistent with a non-stationary avalanche model for flares.'
author:
- 'M.S. Wheatland'
title: 'THE ORIGIN OF THE SOLAR FLARE WAITING-TIME DISTRIBUTION'
---
Introduction
============
The distribution of times between flares (“waiting times”) gives information about whether flares occur as independent events, and also provides a test for models of flare statistics. For example, the avalanche model for flares (Lu & Hamilton 1991; Lu et al. 1993) is a model designed to reproduce the observed power-law distributions of flare energy and duration. Flares are described as redistribution events in a cellular automaton (CA) that is driven to a self-organized critical state. Because the system is driven at a constant (mean) rate and flares occur as independent events, the model makes the specific prediction that the flare waiting-time distribution (WTD) is a simple exponential, consistent with a Poisson process.
Observational determinations of the flare WTD have given varying results. Determinations based on hard X-ray observations have focused on the distribution of short waiting times (seconds to hours). Biesecker (1994) found the WTD for hard X-ray bursts observed by the Burst and Transient Source Experiment on the Compton Gamma Ray Observatory to be consistent with a time-dependent Poisson process, i.e. one in which the mean flaring rate is time-varying. This result is consistent with a non-stationary avalanche model for flares (an avalanche model driven at a non-constant rate). However, Wheatland et al. (1998) found an overabundance of short waiting times (by comparison with a time-dependent Poisson process) in hard X-ray bursts observed by the International Cometary Explorer (ICE) spacecraft. (For other determinations of the WTD based on hard X-rays, see Pearce et al. 1993 and Crosby 1996.)
Recently, the distribution of times between soft X-ray flares observed by the Geostationary Operational Environmental Satellite sensors (GOES) between 1976 and 1996 was examined by Boffeta et al. (1999). The advantage of the GOES data is that it provides a long sequence of data with few gaps, and so the flare WTD can be examined for long waiting times. Boffeta et al. found that the distribution follows a power law for waiting times greater than a few hours. They argued that this result is inconsistent with the avalanche model, and that the appearance of a power law suggests a turbulence model for the origin of flares.
In this paper the GOES data is re-examined. It is shown that the observed, power-law like WTD is consistent with a piecewise-constant Poisson process, and hence with the non-stationary avalanche model. Further, it is shown that the time distribution of rates of the GOES flares averaged over several solar cycles is approximately exponential. This is a new phenomenological law for flaring. Finally, it is shown analytically how a piecewise-constant Poisson process with an exponential distribution of rates has a WTD that is power-law distributed for long waiting times, consistent with the observations.
Data analysis
=============
The data examined here is the catalog of flares observed during 1975–1999 by the 1–8 Å GOES sensors (see [@gar94] for details of the GOES instrument). The chosen period of time covers three solar cycles (21, 22 and 23). Because the soft X-ray background rises with the solar cycle, flares are undercounted near solar maximum, by comparison with solar minimum. Hence only those flares with a peak flux greater than a threshold value ($10^{-6}\,{\rm W m}^{-2}$, corresponding to a GOES C1.0 class flare) are included in the study. This leaves a total of 32,563 flares.
Figure 1 shows the WTD for the GOES events included in this study (histogram), constructed from differences between start times of flares. The figure agrees well with Figure 1 in Boffeta et al. (1999), and in particular shows the same power-law like behavior for waiting times greater than a few hours. The index of the power law is about $-2.16\pm 0.05$ (for waiting times greater than 10 hours), which may be compared with Boffeta et al.’s estimate of $-2.4\pm 0.1$. The difference in power-law indices is due to the restriction to flares greater than class C1.0 – Boffeta et al. included all flares in their determination of the waiting-time distribution. Error bars are plotted on the histogram in Figure 1, corresponding to the square root of the number of waiting times in each bin. The meaning of the solid and dashed curves in the figure is explained below.
To compare the observed occurrence of flares with that expected from a time-dependent Poisson process, it is necessary to determine the mean rate of flaring as a function of time. For this step a Bayesian procedure devised by Scargle (1998) was used. (The same procedure was used in Wheatland et al. 1998.) The method takes a sequence of times of events and determines a decomposition into intervals of time when the observed event occurrence is consistent with a (constant rate) Poisson process. These intervals are characterized by a duration $t_i$ and a rate $\lambda_i$, and are referred to as “Bayesian blocks.” The procedure has only one free parameter, a “prior odds ratio,” which disfavors further segmentation of intervals when the single-rate and dual-rate Poisson models are almost equally likely (see [@sca98] for further details). However, very similar results are achieved for different choices of this ratio. In the following analysis, the value ${\tt PRIOR\_ODDS=2}$ is used.
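The block description $(t_i,\lambda_i)$ can be made concrete with a minimal numerical sketch. The code below is not an implementation of Scargle’s Bayesian-blocks algorithm; it only simulates a process that is already segmented into blocks and recovers the per-block rates by direct counting. The block durations and rates are hypothetical, not the GOES values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_piecewise_poisson(durations, rates, rng):
    """Simulate event times for a piecewise-constant Poisson process:
    within block i (length durations[i], rate rates[i]) the number of
    events is Poisson(rates[i]*durations[i]) and the times are uniform."""
    times, t0 = [], 0.0
    for d, lam in zip(durations, rates):
        n = rng.poisson(lam * d)
        times.append(t0 + rng.uniform(0.0, d, size=n))
        t0 += d
    return np.sort(np.concatenate(times))

durations = np.array([200.0, 50.0, 100.0])  # hypothetical block lengths (hours)
rates = np.array([0.05, 0.5, 0.15])         # hypothetical block rates (flares/hour)
events = simulate_piecewise_poisson(durations, rates, rng)

# Recover the per-block rate estimates: events in block / block duration
edges = np.concatenate([[0.0], np.cumsum(durations)])
counts, _ = np.histogram(events, bins=edges)
print(counts / durations)  # ≈ rates, up to Poisson fluctuations
```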
Figure 2 shows the results of the application of the Bayesian procedure to the GOES data. The method has decomposed the 25 years of flaring into 390 Bayesian blocks. The rate of flaring is observed to vary with the solar cycle, as expected, and also exhibits short time-scale variations. Relatively long intervals with a constant rate are also observed.
A Poisson process with a constant rate $\lambda$ has a WTD given by $P(\Delta t)=\lambda\exp(-\lambda\Delta t)$, where $\Delta t$ describes a waiting time. The WTD distribution for a piecewise-constant Poisson process with rates $\lambda_i$ and intervals $t_i$ may be approximated by $$\label{eq:tdpp}
P(\Delta t)\approx\sum_i\varphi_i\lambda_i \exp(-\lambda_i\Delta t),$$ where $$\varphi_i=\frac{\lambda_it_i}{\sum_j \lambda_j t_j}$$ is the fraction of events associated with a given rate $\lambda_i$.
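Equation (\[eq:tdpp\]) is straightforward to evaluate numerically. The sketch below builds the model WTD from a set of blocks (hypothetical rates and durations, not the GOES blocks) and checks that it is normalized, which follows because the $\varphi_i$ sum to one and each exponential term integrates to one.

```python
import numpy as np

def model_wtd(dt, rates, durations):
    """Model WTD of a piecewise-constant Poisson process, eq. (tdpp):
    P(dt) = sum_i phi_i * lam_i * exp(-lam_i*dt), with
    phi_i = lam_i*t_i / sum_j lam_j*t_j the fraction of events in block i."""
    lam = np.asarray(rates, dtype=float)
    t = np.asarray(durations, dtype=float)
    phi = lam * t / np.sum(lam * t)
    dt = np.atleast_1d(np.asarray(dt, dtype=float))
    return np.sum(phi * lam * np.exp(-lam * dt[:, None]), axis=1)

# Hypothetical Bayesian blocks: rates in flares/hour, durations in hours
rates = [0.05, 0.5, 0.15]
durations = [200.0, 50.0, 100.0]

dt = np.linspace(0.0, 400.0, 100_001)
P = model_wtd(dt, rates, durations)
integral = np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(dt))  # trapezoid rule
print(integral)  # ≈ 1, since the phi_i sum to 1
```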
The rates and intervals shown in Figure 2 were used to construct a model WTD, from (\[eq:tdpp\]). The result is plotted in Figure 1 as the solid curve. It is clear that there is good qualitative agreement between the observed and model WTDs. In particular, the model distribution reproduces power-law like behavior for long waiting times, and is relatively constant for short waiting times. There is some discrepancy between the curves, e.g. there are too few observed short waiting times, and too many observed waiting times near the rollover in the distribution. However, it is likely that there are errors in the observational determination of the WTD. For example, short waiting times are likely to be missed due to the overlap of flares close in time. Also, the Bayesian method for determining rates from the data may produce some erroneous rates and intervals, and the expression (\[eq:tdpp\]) and the decomposition into a piecewise constant Poisson process involve approximations that are difficult to precisely quantify. The good qualitative agreement of the model and observed distributions is taken as strong evidence that the GOES flares occur as a time-varying Poisson process.
The origin of the power-law behavior
====================================
The Bayesian procedure decomposed the GOES time series into a large number of Bayesian blocks. For a piecewise-constant Poisson process involving a large number of rates, the summation in (\[eq:tdpp\]) may be replaced by an integral: $$\label{eq:lap}
P(\Delta t)=\frac{1}{\lambda_0}\int_{0}^{\infty}f(\lambda)\lambda^2
e^{-\lambda\Delta t}\,d\lambda,$$ where $f(\lambda)\,d\lambda$ is the fraction of time that the flaring rate is in the range $(\lambda,\lambda+d\lambda)$, and $$\lambda_0=\int_{0}^{\infty}\lambda f(\lambda)\,d\lambda$$ is the mean rate of flaring.
The expression (\[eq:lap\]) for the WTD of a piecewise-constant Poisson process depends only on the time distribution of the rates of flaring, $f(\lambda)$. Figure 3 shows this distribution (the histogram), constructed from the rates and intervals shown in Figure 2. This figure reveals the remarkable fact that the rate of flaring – effectively averaged over several solar cycles – follows an exponential distribution, a result that does not appear to have been noted in the literature before. The observed distribution may be approximated by $$\label{eq:exp}
f(\lambda)=\lambda_0^{-1}\exp(-\lambda/\lambda_0),$$ where $\lambda_0\approx 0.15\,{\rm hour}^{-1}$ is obtained from the total number of flares divided by the total observing time. Equation (\[eq:exp\]) is shown by the straight line in Figure 3. The observed distribution does not agree exactly with the exponential form. The cumulative probability distribution corresponding to Figure 3 (this distribution is preferable to the differential distribution because it involves no binning) was compared with the model distribution corresponding to (\[eq:exp\]) using the Kolmogorov-Smirnov test (). This test excludes the possibility that the two distributions are the same at a high level of significance. However, the observationally-inferred distribution of rates is somewhat uncertain, and the exponential model clearly provides a good first approximation to the observed distribution.
Substituting (\[eq:exp\]) into (\[eq:lap\]), the integral may be evaluated to give $$\label{eq:result}
P(\Delta t)=\frac{2\lambda_0}{(1+\lambda_0\Delta t)^3}.$$ Equation (\[eq:result\]) is plotted in Figure 1 as the dashed curve. For short waiting times ($\Delta t\ll \lambda_0^{-1}$), equation (\[eq:result\]) approaches the value $P(\Delta t)=2\lambda_0$. For long waiting times ($\Delta t\gg \lambda_0^{-1}$), the distribution has the power-law form $P(\Delta t)\sim 2\lambda_0^{-2}(\Delta t)^{-3}$. Hence equation (\[eq:result\]) accounts for the qualitative behavior of the WTD, in particular the power-law behavior for large waiting times, the location of the rollover to power-law form ($\Delta t\approx \lambda_0^{-1}$), and the approximate index of the power-law tail. The behavior of the observed WTD is seen to originate from a time-dependent Poisson process with an approximately exponential distribution of rates.
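Equation (\[eq:result\]) can also be checked by direct simulation. Integrating it gives the survival function $P(\Delta t > x) = (1+\lambda_0 x)^{-2}$. In a piecewise-constant Poisson process, a waiting time is generated by first selecting a rate $\lambda$ with probability proportional to $\lambda f(\lambda)$ (a Gamma distribution of shape 2 and scale $\lambda_0$ when $f$ is the exponential of eq. (\[eq:exp\])), then drawing an exponential waiting time with that rate. The sample size and seed below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
lam0 = 0.15  # mean flaring rate in flares/hour, as quoted in the text

# Rate governing each waiting time: density ∝ lam * f(lam), i.e.
# Gamma(shape=2, scale=lam0) for exponential f; then an exponential
# waiting time given that rate.
lam = rng.gamma(shape=2.0, scale=lam0, size=200_000)
dt = rng.exponential(1.0 / lam)

# Compare the sample with the survival function 1/(1 + lam0*x)^2
for x in (1.0, 10.0, 100.0):
    print(x, np.mean(dt > x), 1.0 / (1.0 + lam0 * x) ** 2)
```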
Discussion
==========
In this paper the waiting-time distribution for 25 years of GOES soft X-ray flares (of greater than C1.0 class) has been investigated. The observed WTD is found to be qualitatively consistent with a piecewise-constant Poisson process, with a time history of rates determined from the data using a Bayesian procedure. This result indicates that the GOES flares are independent, random events. There does not appear to be good evidence in the GOES events for flare sympathy, or for long-term correlations in the times of flare occurrence.
The GOES WTD displays a power-law tail for long waiting times, as pointed out by Boffeta et al. (1999), and confirmed here. In this paper the power-law behavior is demonstrated to originate from two basic assumptions that are well supported by the data: 1. that the flare process is Poisson, and 2. that the distribution of flaring rates follows an approximate exponential. Subject only to these assumptions, the theoretical WTD is equation (\[eq:result\]), which reproduces the qualitative features of the observed WTD, including the power-law tail. There is some discrepancy between equation (\[eq:result\]) and the observationally determined WTD. For example, the observational determination of the power-law index of the tail of the distribution is around $-2.2$ (Boffeta et al. found $-2.4\pm 0.1$), whereas equation (\[eq:result\]) predicts an index of $-3$. This difference is most likely due to the departure of the observed distribution of flaring rates from a simple exponential form, particularly for low flaring rates (which influence the behavior of the WTD for long waiting times).
This paper presents the new result that the probability of flare occurrence per unit time, when averaged over the solar cycle, follows an approximate exponential distribution (see Figure 3). This is a new phenomenological law for flaring, that must be explained by any theory for the origin of flare energy. The rate of flare occurrence reflects the total rate of energy release in flaring, which must match the rate of energy supply to the system. Hence it follows that the rate at which energy is supplied to the corona for flaring also follows an exponential distribution. It is clear from Figure 2 that this new law does not hold instantaneously – e.g. at times of maxima of the cycle, there are few low flaring rates. For certain periods of time during each solar cycle the flaring rate is approximately constant. From these points it also follows that the observed flare WTD is time-dependent, and may have a different form depending upon the interval of observation. If the WTD is constructed for a short period of observation, during which time the rate of flaring is approximately constant, then the distribution will resemble an exponential. The power-law tail of the WTD appears in the GOES data taken over several solar cycles, during which time there is wide variation in the flaring rate. For shorter periods of observation the power-law form might not appear, depending on whether there is sufficient variation in the flaring rate. The time-dependence and cycle-dependence of the rate and waiting-time distributions will be investigated in more detail in future work.
In this paper, waiting times between flares from all active regions present on the Sun have been considered, so that the Sun is treated as a single flaring system. Boffeta et al. (1999) also considered flares in individual active regions, as identified (in the GOES catalog) from H$\alpha$ events. The distribution for waiting times in individual active regions was found to be similar to that from all active regions. In future work the WTD in individual active regions will be considered in more detail.
The results presented in this paper are consistent with the avalanche model for flares. Although avalanche cellular automata produce an exponential WTD when driven with a constant rate, if the rate of driving is varied so that the distribution of rates is exponential, then the resulting model (referred to here as a non-stationary avalanche model) should reproduce the qualitative features of the observed WTD. There is no need to consider models that produce a power-law WTD through long-term correlations between events (e.g. models of MHD turbulence, cf. Boffeta et al. 1999), because the WTD is seen to be a simple consequence of the statistics of independent flare events together with an exponential distribution of flaring rates.
The author acknowledges the support of a U2000 Post-doctoral Fellowship at the University of Sydney.
Babu, G.J. & Feigelson, E.D. 1996, Astrostatistics (London: Chapman & Hall), p. 65

Biesecker, D. 1994, “On the Occurrence of Solar Flares Observed with the Burst and Transient Source Experiment,” PhD Thesis, University of New Hampshire

Boffeta, G., Carbone, V., Giuliani, P., Veltri, P. & Vulpiani, A. 1999, 83, 4662

Crosby, N. 1996, “Contribution à l’étude des Phénomènes Éruptifs du Soleil en Rayons X à partir des Observations de l’Expérience WATCH sur le Satellite GRANAT,” PhD Thesis, University Paris VII

Garcia, H.A. 1994, 154, 275

Lu, E.T. & Hamilton, R.J. 1991, 380, L89

Lu, E.T., Hamilton, R.J., McTiernan, J.M., & Bromund, K.R. 1993, 412, 841

Pearce, G., Rowe, A., & Yeung, J. 1993, 208, 99

Scargle, J. 1998, 504, 405

Wheatland, M.S., Sturrock, P.A. & McTiernan, J.M. 1998, 509, 448
---
abstract: 'We study the well-posedness of the Dirac-Klein-Gordon system in one space dimension with initial data that have an analytic extension to a strip around the real axis. It is proved that the radius of analyticity $\sigma(t)$ of the solutions at time $t$ cannot decay faster than $1/t^4$ as ${\vert t \vert} \to \infty$.'
address:
- 'Department of Mathematics, University of Bergen, PO Box 7803, N-5020 Bergen, Norway'
- 'Universität Bielefeld, Fakultät für Mathematik, Postfach 10 01 31, D-33501 Bielefeld, Germany'
author:
- Sigmund Selberg
- Achenef Tesfahun
bibliography:
- 'DKGbibliography.bib'
title: 'On the radius of spatial analyticity for the 1d Dirac-Klein-Gordon equations'
---
Introduction {#Intro}
============
Consider the Dirac-Klein-Gordon equations (DKG) on ${\mathbb{R}}^{1+1}$, $$\label{DKG}
\left\{
\begin{aligned}
\left( -i \gamma^0 \partial_t - i \gamma^1 \partial_x + M \right) \psi &= \phi \psi,
\\
\left( \partial_t^2 - \partial_x^2 + m^2 \right) \phi &= \psi^* \gamma^0 \psi,
\end{aligned}
\right.
\qquad (t,x \in {\mathbb{R}})$$ with initial condition $$\label{Data}
\psi(0,x) = \psi_0(x),
\quad
\phi(0,x) = \phi_0(x), \quad \partial_t \phi(0,x) = \phi_1(x).$$ Here the unknowns are $\phi \colon {\mathbb{R}}^{1+1} \to {\mathbb{R}}$ and $\psi \colon {\mathbb{R}}^{1+1} \to {\mathbb{C}}^2$, the latter regarded as a column vector with conjugate transpose $\psi^*$. The masses $M,m \ge 0$ are given constants. The $2 \times 2$ Dirac matrices $\gamma^0,\gamma^1$ should satisfy $\gamma^0\gamma^1 + \gamma^1\gamma^0 = 0$, $(\gamma^0)^2 = I$, $(\gamma^1)^2 = -I$, $(\gamma^0)^* = \gamma^0$ and $(\gamma^1)^* = - \gamma^1$; we will work with the representation $$\gamma^0 = \left( \begin{matrix}
0 & 1 \\
1 & 0 \\
\end{matrix} \right),
\quad
\gamma^1 = \left( \begin{matrix}
0 & -1 \\
1 & 0 \\
\end{matrix} \right).$$
The well-posedness of this Cauchy problem with data in the family of Sobolev spaces $H^s = (1-\partial_x^2)^{-s/2}L^2({\mathbb{R}})$, $s \in {\mathbb{R}}$, has been intensively studied; see [@Chadam:1973; @Bournaveas:2000; @Bachelot:2006; @Bournaveas:2006; @Pecher:2006; @Machihara:2007; @Selberg:2008; @Selberg:2007; @Pecher:2008; @Tesfahun:2009; @Machihara:2010; @Selberg:2010; @Candy:2012]. Local well-posedness holds for data $$\label{SobolevData}
(\psi_0,\phi_0,\phi_1) \in H^s({\mathbb{R}};{\mathbb{C}}^2) \times H^r({\mathbb{R}};{\mathbb{R}}) \times H^{r-1}({\mathbb{R}};{\mathbb{R}})$$ with $s > -1/2$ and ${\vert s \vert} \le r \le s+1$; see [@Machihara:2010], where it is also proved that this is the optimal result, in the sense that for other $(r,s)$ one either has ill-posedness or the solution map (if it exists) is not regular. Moreover, when $s \ge 0$ there is conservation of charge, $${\left\Vert \psi(t) \right\Vert}_{L^2} = {\left\Vert \psi_0 \right\Vert}_{L^2},$$ implying that the solutions extend globally when $0 \le s \le r \le s+1$, and by propagation of higher regularity the global solution is $C^\infty$ if the data are $C^\infty$.
While the well-posedness in Sobolev spaces is well-understood, much less is known concerning spatial analyticity of the solutions to the above Cauchy problem, and this is what motivates the present paper.
On the one hand, local propagation of analyticity for nonlinear hyperbolic systems has been studied by Alinhac and Métivier [@Alinhac1984] and Jannelli [@Jannelli:1986], and in particular this general theory implies that if the data (with $s,r$ sufficiently large) are analytic on the real line, then the same is true of the solution $(\psi,\phi,\partial_t \phi)(t)$ to for all times $t$. The local theory does not give any information about the radius of analyticity, however.
On the other hand, one can consider the situation where a uniform radius of analyticity on the real line is assumed for the initial data, so there is a holomorphic extension to a strip $\{ x+iy \colon {\vert y \vert} < \sigma_0\}$ for some $\sigma_0 > 0$. One may then ask whether this property persists for all later times $t$, but with a possibly smaller and shrinking radius of analyticity $\sigma(t) > 0$. This type of question was introduced in an abstract setting of nonlinear evolutionary PDE by Kato and Masuda [@Kato:1986], who showed in particular that for the Korteweg-de Vries equation (KdV) the radius of analyticity $\sigma(t)$ can decay to zero at most at a super-exponential rate. A similar rate of decay for semilinear symmetric hyperbolic systems has been proved recently by Cappiello, D’Ancona and Nicola [@Cappiello:2014]. An algebraic rate of decay for KdV was shown by Bona and Kalisch [@Bona2005]. Panizzi [@Panizzi2012] has obtained an algebraic rate for nonlinear Klein-Gordon equations. In this paper our aim is to obtain an algebraic rate for the DKG system.
We use the following spaces of Gevrey type. For $\sigma \ge 0$ and $s \in {\mathbb{R}}$, let $G^{\sigma,s}$ be the Banach space with norm $${\left\Vert f \right\Vert}_{G^{\sigma,s}} = {\bigl\Vert e^{\sigma{\vert \xi \vert}} {\langle \xi \rangle}^s \widehat f(\xi) \bigr\Vert}_{L^2_\xi},$$ where $\widehat f(\xi) = \int_{{\mathbb{R}}} e^{- i x\xi} f(x) \, dx$ is the Fourier transform and ${\langle \xi \rangle} = (1+{\vert \xi \vert}^2)^{1/2}$. So for $\sigma > 0$ we have $G^{\sigma,s} = \{ f \in L^2 \colon e^{\sigma{\vert \cdot \vert}} {\langle \cdot \rangle}^s \widehat f \in L^2 \}$ and for $\sigma = 0$ we recover the Sobolev space $H^s = G^{0,s}$ with norm ${\left\Vert f \right\Vert}_{H^s} = {\bigl\Vert {\langle \xi \rangle}^s \widehat f(\xi) \bigr\Vert}_{L^2_\xi}$.
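As a concrete illustration of the norm, one can compute ${\left\Vert f \right\Vert}_{G^{\sigma,s}}$ for a Gaussian by quadrature on the Fourier side; since the Gaussian extends to an entire function, the norm is finite for every $\sigma$, and $\sigma = 0$ recovers the $H^s$ norm. The test function, grid parameters and the closed-form value used as a cross-check below are our own illustrative choices, not part of the paper’s analysis.

```python
import numpy as np
from math import erf, sqrt, pi, exp

def gevrey_norm_gaussian(sigma, s, xi_max=60.0, n=200_001):
    """||f||_{G^{sigma,s}} for f(x) = exp(-x^2/2), whose transform under
    the convention fhat(xi) = int e^{-i x xi} f(x) dx is
    sqrt(2 pi) exp(-xi^2/2); the xi-integral uses the trapezoid rule."""
    xi = np.linspace(-xi_max, xi_max, n)
    fhat_sq = 2.0 * pi * np.exp(-xi**2)  # |fhat(xi)|^2
    integrand = np.exp(2.0 * sigma * np.abs(xi)) * (1.0 + xi**2) ** s * fhat_sq
    h = xi[1] - xi[0]
    return sqrt(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * h)

# Finite for every sigma (the Gaussian is entire), increasing in sigma
for sigma in (0.0, 0.5, 1.0):
    print(sigma, gevrey_norm_gaussian(sigma, s=0))
```

For this $f$ with $s=0$ the squared norm evaluates in closed form to $2\pi^{3/2} e^{\sigma^2}(1+\operatorname{erf}\sigma)$, which the quadrature reproduces.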
Observe the embeddings $$\begin{aligned}
{2}
\label{Inclusion1}
G^{\sigma,s} &\subset H^{s'}& \quad &\text{for $0 < \sigma$ and $s,s' \in {\mathbb{R}}$},
\\
\label{Inclusion2}
G^{\sigma,s} &\subset G^{\sigma',s'}& \quad &\text{for $0 < \sigma' < \sigma$ and $s,s' \in {\mathbb{R}}$}.\end{aligned}$$ Every function in $G^{\sigma,s}$ with $\sigma > 0$ has an analytic extension to the strip $$S_\sigma = \left\{ x+iy \colon x,y \in {\mathbb{R}}, \;{\vert y \vert} < \sigma \right\}.$$
Let $\sigma > 0$, $s \in {\mathbb{R}}$. The following are equivalent:
1. $f \in G^{\sigma,s}$.
2. $f$ is the restriction to the real line of a function $F$ which is holomorphic in the strip $S_\sigma$ and satisfies $\sup_{{\vert y \vert} < \sigma} {\left\Vert F(x+iy) \right\Vert}_{H^s_x} < \infty$.
The proof given for $s=0$ in [@Katznelson1976 p. 209] applies also for $s \in {\mathbb{R}}$ with some obvious modifications.
Main result
===========
Consider the Cauchy problem , with data $$\label{GevreyData}
(\psi_0,\phi_0,\phi_1) \in G^{\sigma_0,s}({\mathbb{R}};{\mathbb{C}}^2) \times G^{\sigma_0,r}({\mathbb{R}};{\mathbb{R}}) \times G^{\sigma_0,r-1}({\mathbb{R}};{\mathbb{R}}),$$ where $\sigma_0 > 0$ and $(r,s) \in {\mathbb{R}}^2$. By the embedding and the existing well-posedness theory we know that this problem has a unique, smooth solution for all time, regardless of the values of $r$ and $s$. Our main result gives an algebraic lower bound on the radius of analyticity $\sigma(t)$ of the solution as the time $t$ tends to infinity.
\[thm2\] Let $\sigma_0 > 0$ and $(r,s) \in {\mathbb{R}}^2$. Then for any data the solution of the Cauchy problem , satisfies $$(\psi,\phi,\partial_t \phi)(t) \in G^{\sigma(t),s} \times G^{\sigma(t),r} \times G^{\sigma(t),r-1}
\quad \text{for all $t \in {\mathbb{R}}$},$$ where the radius of analyticity $\sigma(t) > 0$ satisfies an asymptotic lower bound $$\sigma(t) \ge \frac{c}{t^4}
\qquad \text{as ${\vert t \vert} \to \infty$},$$ with a constant $c > 0$ depending on $m$, $M$, $\sigma_0$, $r$, $s$, and the norm of the data .
Observe that by the embedding , it suffices to prove Theorem \[thm2\] for a single choice of $(r,s) \in {\mathbb{R}}^2$, and we choose $(r,s) = (1,0)$; global well-posedness in the Sobolev data space at this regularity was first proved by Bournaveas [@Bournaveas:2000]. By time reversal it suffices to prove the theorem for $t > 0$, which we assume henceforth.
The first step in the proof is to show that in a short time interval $0 \le t \le \delta$, where $\delta > 0$ depends on the norm of the initial data, the radius of analyticity remains constant. This is proved by a contraction argument involving energy estimates, Sobolev embedding and a null form estimate which is somewhat similar to the one proved by Bournaveas in [@Bournaveas:2000]. Here we take care to optimize the dependence of $\delta$ on the data norms, since the local result will be iterated.
The next step is to improve the control of the growth of the solution in the time interval $[0,\delta]$, measured in the data norm . To achieve this we show that, although the conservation of charge does not hold exactly in the Gevrey space $G^{\sigma,0}$, it does hold in an approximate sense. Iterating the local result we then obtain Theorem \[thm2\].
Analogous results for the KdV equation were proved by Bona, Grujić and Kalisch in [@Bona2005], but the method used there is quite different: They estimate in Gevrey-modified Bourgain spaces directly on any large time interval $[0,T]$ while we iterate a precise local result. While we have been able to adapt the method from [@Bona2005] to the DKG equations, this only gave us a rate $1/t^{8+}$, while our method gives $1/t^4$. The reason for this is twofold: (i) the contraction norms involve integration in time, which is a disadvantage when working on large time intervals, and (ii) it is not clear how to get any kind of approximate charge conservation when working directly on large time intervals. Our short-time iterative approach is more inspired by the ideas developed by Colliander, Holmer and Tzirakis in [@Colliander2008] in the context of global well-posedness in the standard Sobolev spaces[^1], and the idea of almost conservation laws introduced in [@Colliander2002].
Although our method gives a significantly better result for DKG than the method from [@Bona2005], we do not know whether the result in Theorem \[thm2\] is optimal. We do expect that the ideas introduced here can be applied also to other equations than DKG to obtain algebraic lower bounds on the radius of analyticity.
We now turn to the proofs. Leaving the case $m=0$ until the very end of the paper, we will assume $m > 0$ for now. By a rescaling we may assume $m=1$.
Reformulation of the system
===========================
It will be convenient to rewrite the DKG system as follows. Write $\psi= (\psi_+,\psi_-)^T$ and $\phi = \phi_+ + \phi_-$ with $\phi_\pm = \frac12 \left( \phi \pm i {\langle D_x \rangle}^{-1} \partial_t \phi \right)$, where $D_x = -i\partial_x$, hence $D_x$ and ${\langle D_x \rangle}$ are Fourier multipliers with symbols $\xi$ and ${\langle \xi \rangle} = (1+{\vert \xi \vert}^2)^{1/2}$ respectively. For later use we note that $e^{\sigma {\vert D_x \vert}}$ has symbol $e^{\sigma{\vert \xi \vert}}$, while $e^{\pm \sigma D_x}$ has symbol $e^{\pm \sigma\xi}$.
Writing also $D_t = -i\partial_t$, the Cauchy problem , with $m=1$ is then equivalent to $$\label{DKGsplit}
\left\{
\begin{alignedat}{2}
\left( D_t + D_x \right) \psi_+ &= -M\psi_- + \phi \psi_-,&
\quad
\psi_+(0) &= f_+ \in G^{\sigma_0,s},
\\
\left( D_t - D_x \right) \psi_- &= -M\psi_+ + \phi \psi_+,&
\quad
\psi_-(0) &= f_- \in G^{\sigma_0,s},
\\
\left( D_t + {\langle D_x \rangle} \right) \phi_+ &= - {\langle D_x \rangle}^{-1} \operatorname{Re}\left( \overline{\psi_+} \psi_- \right),&
\quad
\phi_+(0) &= g_+ \in G^{\sigma_0, r},
\\
\left( D_t - {\langle D_x \rangle} \right) \phi_- &= + {\langle D_x \rangle}^{-1} \operatorname{Re}\left( \overline{\psi_+} \psi_- \right),&
\quad
\phi_-(0) &= g_- \in G^{\sigma_0, r},
\end{alignedat}
\right.$$ where $\psi_0 = (f_+,f_-)^T$ and $g_\pm = \frac12\left( \phi_0 \pm i {\langle D_x \rangle}^{-1} \phi_1 \right)$. We remark that $\overline{\phi_+} = \phi_-$, since $\phi$ is real-valued.
Energy estimate
===============
Each line in is of the schematic form $$\left( D_t + h(D_x) \right) u = F(t,x), \quad u(0,x) = f(x),$$ with $h(\xi) = \pm \xi$ or $\pm{\langle \xi \rangle}$. Then for sufficiently regular $f$ and $F$ one has, by Duhamel’s formula, $$u(t) = W_{h(\xi)}(t) f + i \int_0^t W_{h(\xi)}(t-s) F(s) \, ds,$$ where $W_{h(\xi)}(t) = e^{-ith(D_x)}$ is the solution group; it is the Fourier multiplier with symbol $e^{-ith(\xi)}$. From this one obtains immediately the following energy inequality, for any $\sigma \ge 0$ and $a \in {\mathbb{R}}$: $$\label{EnergyInequality}
{\left\Vert u(t) \right\Vert}_{G^{\sigma,a}} \le {\left\Vert f \right\Vert}_{G^{\sigma,a}} + \int_0^t {\left\Vert F(s) \right\Vert}_{G^{\sigma,a}} \, ds \qquad (t \ge 0).$$
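The inequality rests on the fact that the free group $W_{h(\xi)}(t)$ is an isometry of every $G^{\sigma,a}$: its symbol $e^{-ith(\xi)}$ has modulus one, so the weighted $L^2$ norm of the Fourier transform is unchanged; Duhamel’s formula and the triangle inequality then give the bound. A discrete check with $h(\xi) = {\langle \xi \rangle}$ and a Gaussian datum (the grid parameters, $t$, $\sigma$, $a$ are illustrative):

```python
import numpy as np

# Check that multiplying fhat by the unimodular symbol e^{-i t <xi>}
# leaves the weighted (Gevrey) norm of the Fourier transform unchanged.
n, L, t, sigma, a = 1024, 40.0, 2.7, 0.1, 1.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
xi = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
weight = np.exp(sigma * np.abs(xi)) * (1.0 + xi**2) ** (a / 2)

fhat = np.fft.fft(np.exp(-x**2 / 2))
uhat = np.exp(-1j * t * np.sqrt(1.0 + xi**2)) * fhat  # W(t)f with h(xi) = <xi>

print(np.linalg.norm(weight * fhat), np.linalg.norm(weight * uhat))  # equal
```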
Bilinear estimates
==================
We shall need the following null form estimate.
\[NullLemma\] The solutions $u$ and $v$ of the Cauchy problems $$\begin{aligned}
{2}
(D_t+D_x) u &= F(t,x),& \qquad u(0,x) &= f(x),
\\
(D_t-D_x) v &= G(t,x),& \qquad v(0,x) &= g(x),\end{aligned}$$ satisfy $${\left\Vert uv \right\Vert}_{L^2([0,T] \times {\mathbb{R}})}
\le C
\left( {\left\Vert f \right\Vert}_{L^2} + \int_0^T {\left\Vert F(t) \right\Vert}_{L^2} \, dt \right)
\left( {\left\Vert g \right\Vert}_{L^2} + \int_0^T {\left\Vert G(t) \right\Vert}_{L^2} \, dt \right)$$ for all $T > 0$. Moreover, the same holds for $\overline u v$.
By Duhamel’s formula, as in the proof of Theorem 2.2 in [@Klainerman1993], one can reduce to the case $F=G=0$, and this case is easily proved by changing to characteristic coordinates or by using Plancherel’s theorem as in [@Selberg:2008 Lemma 2]. Replacing $u$ by its complex conjugate $\overline u$ does not affect the argument, since $u(t)=e^{-itD_x}f$ implies $\overline u = e^{-itD_x} \overline f$, as one can check on the Fourier transform side.
\[NullCorollary\] Let $\sigma \ge 0$. With notation as in Lemma \[NullLemma\] we have $$\begin{gathered}
{\left\Vert uv \right\Vert}_{L^2_t([0,T];G^{\sigma,0})}
\le C
\left( {\left\Vert f \right\Vert}_{G^{\sigma,0}} + \int_0^T {\left\Vert F(t) \right\Vert}_{G^{\sigma,0}} \, dt \right)
\\
\times\left( {\left\Vert g \right\Vert}_{G^{\sigma,0}} + \int_0^T {\left\Vert G(t) \right\Vert}_{G^{\sigma,0}} \, dt \right)\end{gathered}$$ for all $T > 0$. Moreover, the same holds for $\overline u v$.
It suffices to prove that ${\left\Vert e^{\pm \sigma D_x}(uv) \right\Vert}_{L^2_t([0,T];L^2)}$ is bounded by the right-hand side. But this follows from Lemma \[NullLemma\], since $e^{\pm \sigma D_x} (uv) = (e^{\pm \sigma D_x}u) (e^{\pm \sigma D_x}v)$, as is obvious on the Fourier transform side.
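The multiplicative identity used in this proof can be checked numerically: on nice functions the multiplier $e^{\sigma D_x}$ (symbol $e^{\sigma\xi}$) acts as analytic continuation, $f(x) \mapsto f(x - i\sigma)$, and evaluation at a shifted point is manifestly multiplicative. The Gaussian test function, the value of $\sigma$ and the grid below are illustrative choices.

```python
import numpy as np

# Verify e^{sigma D_x}(uv) = (e^{sigma D_x}u)(e^{sigma D_x}v) via the FFT,
# using u = v = f a Gaussian and comparing with f(x - i*sigma).
sigma = 0.1
n, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
xi = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)

f = np.exp(-x**2 / 2)
g = np.fft.ifft(np.exp(sigma * xi) * np.fft.fft(f))      # e^{sigma D_x} f
g_exact = np.exp(-(x - 1j * sigma) ** 2 / 2)             # f(x - i*sigma)

h = np.fft.ifft(np.exp(sigma * xi) * np.fft.fft(f * f))  # e^{sigma D_x}(f^2)
print(np.max(np.abs(g - g_exact)), np.max(np.abs(h - g_exact**2)))  # both small
```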
We will also need the Sobolev product estimate $$\label{Sobolev}
{\left\Vert fg \right\Vert}_{G^{\sigma,0}} \le C {\left\Vert f \right\Vert}_{G^{\sigma,1}} {\left\Vert g \right\Vert}_{G^{\sigma,0}},$$ where $C > 1$ is an absolute constant. For $\sigma=0$ this reduces to Hölder’s inequality and the Sobolev embedding $H^1({\mathbb{R}}) \subset L^\infty({\mathbb{R}})$, and the case $\sigma > 0$ is then deduced as in the proof of the corollary above.
A local result {#LWP}
==============
Here we prove the following local existence result:
\[FirstStep\] Let $\sigma_0 > 0$. For any $$(\psi_0,\phi_0,\phi_1) \in X_0 := G^{\sigma_0,0}({\mathbb{R}};{\mathbb{C}}^2) \times G^{\sigma_0,1}({\mathbb{R}};{\mathbb{R}}) \times G^{\sigma_0,0}({\mathbb{R}};{\mathbb{R}})$$ there exists a time $\delta > 0$ such that the solution of the Cauchy problem , satisfies $(\psi,\phi,\partial_t \phi) \in C\left([0,\delta]; X_0 \right)$. Moreover, $$\label{delta}
\delta = \frac{c_0}{1 + a_0^2 + b_0},$$ where $a_0 = {\left\Vert f_+ \right\Vert}_{G^{\sigma_0,0}} + {\left\Vert f_- \right\Vert}_{G^{\sigma_0,0}}$, $b_0 = {\left\Vert g_+ \right\Vert}_{G^{\sigma_0,1}} + {\left\Vert g_- \right\Vert}_{G^{\sigma_0,1}}$, and $c_0 > 0$ is a constant depending on the Dirac mass $M$.
Consider the iterates $\psi_\pm^{(n)}$, $\phi_\pm^{(n)}$ given inductively by $$\begin{gathered}
\psi_\pm^{(0)}(t) = W_{\pm\xi}(t) f_\pm,
\qquad
\phi_\pm^{(0)}(t) = W_{\pm{\langle \xi \rangle}}(t) g_\pm,
\\
\begin{aligned}
\psi_\pm^{(n+1)}(t) &= \psi_\pm^{(0)}(t)
+ i \int_0^t W_{\pm\xi}(t-s)\left( -M\psi_\mp^{(n)} + \phi^{(n)} \psi_\mp^{(n)} \right)(s) \, ds,
\\
\phi_\pm^{(n+1)}(t) &= \phi_\pm^{(0)}(t)
\mp i \int_0^t W_{\pm{\langle \xi \rangle}}(t-s) {\langle D_x \rangle}^{-1}
\operatorname{Re}\left( \overline{\psi_+^{(n)}} \psi_-^{(n)} \right)(s) \, ds
\end{aligned}\end{gathered}$$ for $n \in \mathbb N_0$. Here $\phi^{(n)} = \phi_+^{(n)} + \phi_-^{(n)}$. Now set $$\begin{aligned}
A_n(\delta) &= \sum_\pm \left( {\left\Vert f_\pm \right\Vert}_{G^{\sigma_0,0}} + \int_0^\delta {\left\Vert (D_t \pm D_x)\psi_\pm^{(n)}(t) \right\Vert}_{G^{\sigma_0,0}} \, dt \right),
\\
B_n(\delta) &= \sum_\pm {\left\Vert \phi_\pm^{(n)} \right\Vert}_{L_t^\infty([0,\delta];G^{\sigma_0,1})}.\end{aligned}$$ We claim that, for $n \in \mathbb N_0$, $$\begin{aligned}
{2}
\label{IterationEstimate1}
A_0(\delta) &= a_0,& \qquad A_{n+1}(\delta) &\le a_0 + C \delta A_n(\delta) \bigl(M + B_n(\delta)\bigr),
\\
\label{IterationEstimate2}
B_0(\delta) &= b_0,& \qquad B_{n+1}(\delta) &\le b_0 + C \delta^{1/2} A_n(\delta)^2,\end{aligned}$$ where $C > 1$ is an absolute constant. By the energy inequality , this reduces to $$\begin{aligned}
\int_0^\delta {\left\Vert -M\psi_\mp^{(n)}(t) \right\Vert}_{G^{\sigma_0,0}} \, dt
&\le
C\delta M A_n(\delta),
\\
\int_0^\delta {\left\Vert \phi^{(n)}(t) \psi_\mp^{(n)}(t) \right\Vert}_{G^{\sigma_0,0}} \, dt
&\le
C \delta A_n(\delta) B_n(\delta),
\\
\int_0^\delta {\left\Vert \overline{\psi_+^{(n)}(t)} \psi_-^{(n)}(t) \right\Vert}_{G^{\sigma_0,0}} \, dt
&\le
C \delta^{1/2} A_n(\delta)^2.\end{aligned}$$ The first estimate follows from the energy inequality , the second from , and the third from Corollary \[NullCorollary\], after an application of Hölder’s inequality in time.
For convenience we replace $b_0$ on the right-hand side of by $a_0+b_0$. Then by induction we get $A_n(\delta) \le 2a_0$ and $B_n(\delta) \le 2(a_0+b_0)$ for all $n$, provided $\delta > 0$ is so small that $C \delta \, 2a_0 (M+2a_0+2b_0) \le a_0$ and $C \delta^{1/2} (2a_0)^2 \le a_0+b_0$; replacing the latter by the more restrictive condition $C \delta^{1/2} \, 2a_0 \cdot 2(a_0+b_0) \le a_0+b_0$, we arrive at .
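Indeed, granting the bounds at step $n$, the inductive step closes: $$A_{n+1}(\delta) \le a_0 + C\delta\, 2a_0 \bigl(M+2a_0+2b_0\bigr) \le a_0 + a_0 = 2a_0,$$ $$B_{n+1}(\delta) \le (a_0+b_0) + C\delta^{1/2} (2a_0)^2 \le (a_0+b_0) + C\delta^{1/2}\, 2a_0 \cdot 2(a_0+b_0) \le 2(a_0+b_0),$$ where in the second line we used $2a_0 \le 2(a_0+b_0)$; both smallness conditions on $\delta$ hold simultaneously for $\delta = c_0/(1+a_0^2+b_0)$ with a suitable $c_0 = c_0(M) > 0$.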
Applying the same estimates to $$\begin{aligned}
\mathfrak A_n(\delta) &= \sum_\pm \int_0^\delta {\left\Vert (D_t \pm D_x)(\psi_\pm^{(n)}-\psi_\pm^{(n-1)})(t) \right\Vert}_{G^{\sigma_0,0}} \, dt,
\\
\mathfrak B_n(\delta) &= \sum_\pm {\left\Vert \phi_\pm^{(n)}-\phi_\pm^{(n-1)} \right\Vert}_{L_t^\infty([0,\delta];G^{\sigma_0,1})},\end{aligned}$$ one finds that $$\begin{aligned}
\label{IterationEstimate3}
\mathfrak A_{n+1}(\delta) &\le C \delta \mathfrak A_n(\delta) \bigl(M + B_n(\delta)\bigr)
+ C \delta A_n(\delta) \mathfrak B_n(\delta),
\\
\label{IterationEstimate4}
\mathfrak B_{n+1}(\delta) &\le 2C \delta^{1/2} A_n(\delta)\mathfrak A_n(\delta),\end{aligned}$$ so taking $\delta$ even smaller, but still satisfying , then $\mathfrak A_{n+1}(\delta) \le \frac14 \mathfrak A_n(\delta) + \frac14 \mathfrak B_n(\delta)$ and $\mathfrak B_{n+1}(\delta) \le \frac14 \mathfrak A_n(\delta)$, hence $\mathfrak A_{n+1}(\delta) + \mathfrak B_{n+1}(\delta) \le \frac12 \left( \mathfrak A_n(\delta) + \mathfrak B_n(\delta) \right)$. Therefore the iterates converge, and this concludes the proof of Theorem \[FirstStep\].
Growth estimate and almost conservation of charge {#SecondStepSection}
=================================================
Next we estimate the growth in time of $$\begin{aligned}
\mathfrak M_\sigma(t) &= \sum_{\epsilon \in \{-1,+1\}} \left( {\left\Vert e^{\epsilon\sigma D_x}\psi_+(t) \right\Vert}_{L^2}^2 + {\left\Vert e^{\epsilon\sigma D_x}\psi_-(t) \right\Vert}_{L^2}^2 \right),
\\
\mathfrak N_\sigma(t) &= {\left\Vert \phi_+(t) \right\Vert}_{G^{\sigma,1}} + {\left\Vert \phi_-(t) \right\Vert}_{G^{\sigma,1}},\end{aligned}$$ where $\sigma \in (0,\sigma_0]$ is considered a parameter. Note that $\mathfrak M_\sigma(t)$ is comparable to $$\mathfrak M_\sigma'(t) = {\left\Vert \psi_+(t) \right\Vert}_{G^{\sigma,0}}^2 + {\left\Vert \psi_-(t) \right\Vert}_{G^{\sigma,0}}^2.$$ By the local theory developed so far we know that $\mathfrak M_\sigma(t)$ and $\mathfrak N_\sigma(t)$ remain finite for times $t \in [0,\delta]$, where $$\label{Time}
\delta = \delta(\sigma) = \frac{c_0}{1+\mathfrak M_\sigma(0) + \mathfrak N_{\sigma}(0)}.$$ Moreover, from the proof of Theorem \[FirstStep\] we know that $$\begin{gathered}
\label{IterationBound1}
{\left\Vert f_\pm \right\Vert}_{G^{\sigma,0}} + \int_0^\delta {\left\Vert (D_t \pm D_x)\psi_\pm(t) \right\Vert}_{G^{\sigma,0}} \, dt
\le C\mathfrak M_\sigma(0)^{1/2},
\\
\label{IterationBound2}
{\left\Vert \phi_\pm \right\Vert}_{L_t^\infty([0,\delta];G^{\sigma,1})}
\le C \left( \mathfrak M_\sigma(0)^{1/2} + \mathfrak N_\sigma(0) \right).\end{gathered}$$
We will prove the following.
\[SecondStep\] With hypotheses as in Theorem \[FirstStep\], then for $\sigma \in (0,\sigma_0]$ and $\delta=\delta(\sigma)$ as in , we have $$\begin{aligned}
\label{Mest}
\sup_{t \in [0,\delta]} \mathfrak M_\sigma(t)
&\le
\mathfrak M_\sigma(0) + C\sigma \delta^{1/2} \mathfrak M_\sigma(0) \left( \mathfrak M_\sigma(0)^{1/2} + \mathfrak N_\sigma(0) \right),
\\
\label{Nest}
\sup_{t \in [0,\delta]} \mathfrak N_\sigma(t)
&\le
\mathfrak N_\sigma(0) + C \delta^{1/2} \mathfrak M_\sigma(0),\end{aligned}$$ where $C > 1$ is an absolute constant.
It suffices to prove these estimates at the endpoint $t=\delta$.
The estimate for $\mathfrak N_\sigma$ follows from the energy inequality and Corollary \[NullCorollary\] as in the proof of , and taking into account .
To estimate $\mathfrak M_\sigma$ we proceed as in the proof of conservation of charge. Let $\epsilon \in \{ -1, +1\}$ and write $$\mathfrak M_{\sigma,\epsilon}(t) = {\left\Vert e^{\epsilon\sigma D_x}\psi_+(t) \right\Vert}_{L^2}^2 + {\left\Vert e^{\epsilon\sigma D_x}\psi_-(t) \right\Vert}_{L^2}^2,$$ so that $\mathfrak M_\sigma = \mathfrak M_{\sigma,-1} + \mathfrak M_{\sigma,+1}$. Applying $e^{\epsilon\sigma D_x}$ to each side of the Dirac equations in gives[^2] $$(D_t \pm D_x)\Psi_\pm = (\phi-M) \Psi_\mp + F_\mp,$$ where $$\Psi_\pm = e^{\epsilon\sigma D_x}\psi_\pm,
\qquad
F_\pm = (e^{\epsilon\sigma D_x}\phi-\phi) \Psi_\pm.$$ Since $\phi$ and $M$ are real-valued we then get $$\begin{aligned}
\frac{d}{dt} \mathfrak M_{\sigma,\epsilon}(t)
&= 2\operatorname{Re}\int \partial_t\Psi_+(t,x) \overline{\Psi_+(t,x)} + \partial_t\Psi_-(t,x) \overline{\Psi_-(t,x)} \, dx
\\
&= \int \partial_x\left(-\Psi_+(t,x) \overline{\Psi_+(t,x)} + \Psi_-(t,x) \overline{\Psi_-(t,x)}\right) \, dx
\\
&\qquad+ 2\operatorname{Re}\int i(\phi(t,x)-M) \left( \Psi_-(t,x) \overline{\Psi_+(t,x)} + \Psi_+(t,x) \overline{\Psi_-(t,x)} \right) \, dx
\\
&\qquad+ 2\operatorname{Re}\int iF_-(t,x) \overline{\Psi_+(t,x)} + iF_+(t,x) \overline{\Psi_-(t,x)} \, dx
\\
&= - 2\operatorname{Im}\int F_-(t,x) \overline{\Psi_+(t,x)} + F_+(t,x) \overline{\Psi_-(t,x)} \, dx.\end{aligned}$$ In the last step we used the fact that we may assume that $\Psi_\pm(t,x)$ decays to zero at spatial infinity. Indeed, if we want to prove the estimates in Theorem \[SecondStep\] for a given $\sigma$, then by the monotone convergence theorem it suffices to prove it for all $\sigma' < \sigma$, and then we get the decay by the Riemann-Lebesgue lemma.
Integration in time yields $$\begin{aligned}
\mathfrak M_{\sigma,\epsilon}(\delta)
&=
\mathfrak M_{\sigma,\epsilon}(0) - 2\operatorname{Im}\int_0^\delta \int \left( F_-(t,x)\overline{\Psi_+(t,x)}
+ F_+(t,x)\overline{\Psi_-(t,x)} \right) \, dx \, dt
\\
&\le \mathfrak M_{\sigma,\epsilon}(0)
+ 4 \delta^{1/2} {\left\Vert (e^{\epsilon\sigma D_x}-1)\phi \right\Vert}_{L_t^\infty([0,\delta];L^2)}
{\left\Vert \overline{\Psi_+}\Psi_- \right\Vert}_{L^2([0,\delta] \times {\mathbb{R}})}
\\
&\le \mathfrak M_{\sigma,\epsilon}(0)
+ 4 \delta^{1/2} \sigma {\left\Vert {\vert D_x \vert}\phi \right\Vert}_{L_t^\infty([0,\delta];G^{\sigma,0})} {\left\Vert \overline{\Psi_+}\Psi_- \right\Vert}_{L^2([0,\delta] \times {\mathbb{R}})}\end{aligned}$$ where we applied Hölder’s inequality and the symbol estimate $${\vert e^{\epsilon\sigma \xi} - 1 \vert} \le \sigma{\vert \xi \vert} e^{\sigma{\vert \xi \vert}}.$$ Applying Lemma \[NullLemma\] and taking into account the bounds and we then obtain , and this concludes the proof of Theorem \[SecondStep\].
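For the reader's convenience, the symbol estimate invoked above is immediate from the fundamental theorem of calculus: writing $$e^{\epsilon\sigma\xi} - 1 = \epsilon\sigma\xi \int_0^1 e^{\epsilon\sigma\xi t} \, dt$$ and bounding the integrand by $e^{\sigma{\vert \xi \vert}}$ gives ${\vert e^{\epsilon\sigma \xi} - 1 \vert} \le \sigma{\vert \xi \vert} e^{\sigma{\vert \xi \vert}}$.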
Conclusion of the proof
=======================
All the tools required to complete the proof of the main result, Theorem \[thm2\], are now at hand.
We are given $\sigma_0 > 0$ and data such that $\mathfrak M_{\sigma_0}(0)$ and $\mathfrak N_{\sigma_0}(0)$ are finite. Our task is to prove that for all large $T$, the solution has a positive radius of analyticity $$\label{Goal}
\sigma(t) \ge \frac{c}{T^4} \quad \text{for all $t \in [0,T]$,}$$ where $c > 0$ is a constant depending on the Dirac mass $M$ and the data norms $\mathfrak M_{\sigma_0}(0)$ and $\mathfrak N_{\sigma_0}(0)$.
Since we are interested in the behaviour as $T \to \infty$, we may certainly assume $$\label{Tlarge}
\mathfrak M_{\sigma_0}(0)+\mathfrak N_{\sigma_0}(0) \le T^2.$$ Now fix such a $T$ and let $\sigma \in (0,\sigma_0]$ be a parameter to be chosen; then of course holds also with $\sigma_0$ replaced by $\sigma$.
Let $A \gg 1$ denote a constant which may depend on $M$, $\mathfrak M_{\sigma_0}(0)$ and $\mathfrak N_{\sigma_0}(0)$; the choice of $A$ will be made explicit below.
As long as $\mathfrak M_\sigma(t) \le 2\mathfrak M_\sigma(0)$ and $\mathfrak N_\sigma(t) \le 2AT^2$ we can now apply the local results, Theorems \[FirstStep\] and \[SecondStep\], with a uniform time step $$\label{TimeStep}
\delta = \frac{c_0}{A T^2},$$ where $c_0 > 0$ depends only on $M$. We can choose $c_0$ so that $T/\delta$ is an integer. Proceeding inductively we cover intervals $[(n-1)\delta,n\delta]$ for $n=1,2,\dots$, obtaining $$\begin{aligned}
\mathfrak M_\sigma(n\delta)
&\le
\mathfrak M_\sigma(0) + nC\sigma \delta^{1/2} (2\mathfrak M_\sigma(0)) (4AT^2),
\\
\mathfrak N_\sigma(n\delta)
&\le
\mathfrak N_\sigma(0) + nC\delta^{1/2} 2\mathfrak M_\sigma(0),\end{aligned}$$ and in order to reach the prescribed time $T=n\delta$ we require that $$\begin{aligned}
\label{final1}
nC\sigma \delta^{1/2} (2\mathfrak M_\sigma(0)) (4AT^2) &\le \mathfrak M_\sigma(0),
\\
\label{final2}
nC\delta^{1/2} 2\mathfrak M_\sigma(0) &\le AT^2.\end{aligned}$$ From $T=n\delta$ and we get $n\delta^{1/2} = T \delta^{-1/2} = c_2 A^{1/2} T^2$, where $c_2$ depends on $M$. Therefore and reduce to $$\begin{aligned}
\label{final3}
C\sigma c_2 A^{1/2} T^2 2(4AT^2) &\le 1,
\\
\label{final4}
C c_2 A^{1/2} T^2 2\mathfrak M_\sigma(0) &\le AT^2.\end{aligned}$$ To satisfy the latter we choose $A$ so large that $$Cc_2 2\mathfrak M_\sigma(0) \le A^{1/2}.$$ Finally, is satisfied if we take $\sigma$ equal to the right-hand side of . This concludes the proof of our main result, Theorem \[thm2\], in the case $m > 0$.
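For the record, the condition on $\sigma$ amounts to $$8 C c_2 \, \sigma A^{3/2} T^4 \le 1,$$ so the admissible choice is $\sigma = \bigl( 8 C c_2 A^{3/2} T^4 \bigr)^{-1}$, which has the required form $c/T^4$ with $c > 0$ depending only on $M$, $\mathfrak M_{\sigma_0}(0)$ and $\mathfrak N_{\sigma_0}(0)$ (through $c_2$ and $A$).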
The case $m=0$
==============
Set $m=0$. The conclusion in Theorem \[FirstStep\] remains true. To see this, we add $\phi$ to each side of the second equation in , hence the last two equations in are replaced by $$\left( D_t \pm {\langle D_x \rangle} \right) \phi_\pm = \mp {\langle D_x \rangle}^{-1} \left( \phi + \operatorname{Re}\left(\overline{\psi_+} \psi_- \right)\right),$$ where as before $\phi = \phi_+ + \phi_-$ with $\phi_\pm = \frac12 \left( \phi \pm i {\langle D_x \rangle}^{-1} \partial_t \phi \right)$. Then the proof of Theorem \[FirstStep\] goes through with some obvious changes: in there appears an extra term $\delta B_n(\delta)$ on the right-hand side, and in a term $\delta \mathfrak B_n(\delta)$.
Next we consider the changes that need to be made in the argument from Section \[SecondStepSection\]. Since Theorem \[FirstStep\] remains unchanged, then so does –. Now observe that the norm $$\mathfrak N_\sigma(t) = {\left\Vert \phi_+(t) \right\Vert}_{G^{\sigma,1}} + {\left\Vert \phi_-(t) \right\Vert}_{G^{\sigma,1}}$$ is equivalent to $${\left\Vert {\langle D_x \rangle}\phi(t) \right\Vert}_{G^{\sigma,0}} + {\left\Vert \partial_t \phi(t) \right\Vert}_{G^{\sigma,0}},$$ which in turn is equivalent to $${\left\Vert \phi(t) \right\Vert}_{L^2} + {\left\Vert {\vert D_x \vert}\phi(t) \right\Vert}_{G^{\sigma,0}} + {\left\Vert \partial_t \phi(t) \right\Vert}_{G^{\sigma,0}}.$$ But if we write $\phi = \Phi_+ + \Phi_-$ with $\Phi_\pm = \frac12 \left( \phi \pm i {\vert D_x \vert}^{-1} \partial_t \phi \right)$, then the norm $${\left\Vert {\vert D_x \vert}\phi(t) \right\Vert}_{G^{\sigma,0}} + {\left\Vert \partial_t \phi(t) \right\Vert}_{G^{\sigma,0}}$$ is equivalent to $$\mathfrak N_\sigma'(t) = {\left\Vert {\vert D_x \vert}\Phi_+(t) \right\Vert}_{G^{\sigma,0}} + {\left\Vert {\vert D_x \vert}\Phi_-(t) \right\Vert}_{G^{\sigma,0}}.$$ Thus we have the norm equivalence $$\label{NormEquivalence}
\mathfrak N_\sigma(t) \sim {\left\Vert \phi(t) \right\Vert}_{L^2} + \mathfrak N_\sigma'(t),$$ where the implicit constants are independent of $t$, of course. In particular, this means that can be replaced by $$\label{NewTime}
\delta = \frac{c_0}{1 + \mathfrak M_{\sigma}(0) + {\left\Vert \phi(0) \right\Vert}_{L^2} + \mathfrak N_\sigma'(0)}.$$
The first term on the right-hand side of is a priori under control. Indeed, from the energy inequality for the wave operator $\square = -\partial_t^2+\partial_x^2$ and by conservation of charge we have $$\label{Growth}
{\left\Vert \phi(t) \right\Vert}_{L^2} = O(t^2)$$ as $t \to \infty$, where the implicit constant depends on ${\left\Vert (\phi_0,\phi_1) \right\Vert}_{L^2 \times H^{-1}} + {\left\Vert \psi_0 \right\Vert}_{L^2}^2$.
Noting that $$\left( D_t \pm {\vert D_x \vert} \right) \Phi_\pm = \mp {\vert D_x \vert}^{-1} \operatorname{Re}\left(\overline{\psi_+} \psi_- \right),$$ the argument used to prove Theorem \[SecondStep\] now gives $$\begin{aligned}
\sup_{t \in [0,\delta]} \mathfrak M_\sigma(t)
&\le
\mathfrak M_\sigma(0) + C\sigma \delta^{1/2} \mathfrak M_\sigma(0) \left( \mathfrak M_\sigma(0)^{1/2} + {\left\Vert \phi(0) \right\Vert}_{L^2} + \mathfrak N_\sigma'(0) \right),
\\
\sup_{t \in [0,\delta]} \mathfrak N_\sigma'(t)
&\le
\mathfrak N_\sigma'(0) + C \delta^{1/2} \mathfrak M_\sigma(0).\end{aligned}$$ Using these as well as and , the argument from the previous section goes through to show that $\sigma(t) \ge c/t^4$ as $t \to \infty$. This concludes the case $m=0$.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors are indebted to Hartmut Pecher and to an anonymous referee for helpful comments on an earlier version of this article. Sigmund Selberg was supported by the Research Council of Norway, grant no. 213474/F20. Achenef Tesfahun acknowledges support from the German Research Foundation, Collaborative Research Center 701.
[^1]: In particular, our argument also provides an alternative proof of the result of Bournaveas [@Bournaveas:2000] concerning global well-posedness for data $(\psi_0,\phi_0,\phi_1) \in L^2 \times H^1 \times L^2$, by setting $\sigma=0$ throughout.
[^2]: Observe that $e^{\epsilon\sigma D_x}(fg) = (e^{\epsilon\sigma D_x}f)(e^{\epsilon\sigma D_x}g)$, as is obvious on the Fourier transform side. This is the reason for using the norm $\mathfrak M_\sigma$ instead of the simpler $\mathfrak M_\sigma'$.
---
abstract: 'Hybrid cloud is an integrated cloud computing environment utilizing a mix of public cloud, private cloud, and on-premise traditional IT infrastructures. Workload awareness, defined as a detailed, full-range understanding of each individual workload, is essential in implementing the hybrid cloud. While it is critical to perform an accurate analysis to determine which workloads are appropriate for on-premise deployment versus which workloads can be migrated to a cloud off-premise, the assessment is mainly performed by rule- or policy-based approaches. In this paper, we introduce StackInsights, a novel cognitive system to automatically analyze and predict the cloud readiness of workloads for an enterprise. Our system harnesses the critical metrics across the entire stack: 1) infrastructure metrics, 2) data relevance metrics, and 3) application taxonomy, to identify workloads that have characteristics of a) low sensitivity with respect to business security, criticality and compliance, and b) low response time requirements and access patterns. Since the capture of the data relevance metrics involves an intrusive and in-depth scanning of the content of storage objects, a machine learning model is applied to perform the business relevance classification by learning from the meta-level metrics harnessed across the stack. In contrast to traditional methods, StackInsights significantly reduces the total time for hybrid cloud readiness assessment by orders of magnitude.'
author:
-
bibliography:
- 'IEEEabrv.bib'
- 'stack\_insight\_paper.bib'
title: |
StackInsights: Cognitive Learning for Hybrid\
Cloud Readiness
---
Introduction
============
Hybrid cloud, which utilizes a mix of public cloud, private cloud, and on-premise infrastructure, has become the dominant cloud deployment architecture for enterprises. Public cloud offers a multi-tenant environment, where physical resources, such as computing, storage and network devices, are shared and accessible over a public network, whereas private cloud is operated solely for a single organization with dedicated resources. Hybrid cloud inherits the advantages of these two cloud models and allows workloads to move between them according to the change of business needs and cost, therefore resulting in greater deployment flexibility. The global hybrid cloud market is estimated to grow from USD 33.28 Billion in 2016 to USD 91.74 Billion in 2021 [@mandm2016].
Business sensitivity is one of the main factors that enterprises consider when deciding which cloud model to deploy. For example, an enterprise can deploy public clouds for test and development workloads, where security and compliance are not an issue. However, it is hard to meet PCI (Payment Card Industry) or SOX (Sarbanes-Oxley) compliance in public clouds due to the nature of multi-tenancy. On the other hand, because private clouds are dedicated to a single organization, the architecture can be designed to assure high-level security and stringent compliance, such as HIPAA (Health Insurance Portability and Accountability Act). Therefore, private clouds are usually deployed for business-sensitive and critical workloads. Infrastructure is another important factor to consider when choosing between public and private clouds. Since private cloud is a single-tenant environment where resources can be specified and highly customized, it is ideal to host data which are frequently accessed and require fast response time. For example, high-end storage systems can be used in private cloud to deliver IOPS (input/output operations per second) within a guaranteed response time.
Moreover, business sensitivity and infrastructure are traditionally considered in two separate schools of work. However, not all data is created equal, and neither is the infrastructure. In this paper, we introduce StackInsights, a novel cognitive learning system to automatically analyze and predict the cloud readiness of workloads for an enterprise by considering both business sensitivity and infrastructure. StackInsights classifies the entire data into several subspaces, as shown in Figure \[fig:intro\], where the $X$-axis indicates the infrastructure heat map (e.g., storage access intensity) and the $Y$-axis represents the business sensitivity. A threshold on the $X$-axis is set to determine if the data is “*cold*" or “*hot*" with respect to infrastructure-related performance metrics, and on the $Y$-axis, the data is classified into three categories: “*sensitive*", “*non-sensitive*", or “*non-classifiable*”. Formally, we define sensitive data as the data owned by the enterprise, which if lost or compromised, bears financial, integrity, and compliance damage. There are many different forms of sensitive data, such as sensitive personal information (SPI), personal health information (PHI), confidential business information, client data, intellectual property, and other domain-specific sensitive information. The category of “*non-classifiable*” includes structured data, such as databases, the sensitivity of which can be analyzed using domain knowledge. For example, the databases storing employment information in the HR department should be highly sensitive. All the data which are cold and non-sensitive can be migrated to public clouds while the rest should reside in private clouds. The thresholds on the $X$-axis and $Y$-axis can also be adjusted by users. The areas of the subspaces indicate the size of data migrating to different clouds, thereby serving as a cloud sizing tool. 
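As a concrete illustration, the quadrant logic of Figure \[fig:intro\] can be sketched as a simple decision rule. The thresholds, field names, and the conservative handling of “non-classifiable” data below are our own illustrative assumptions, not the system's actual interface:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Workload:
    name: str
    io_density: float             # X-axis: e.g., IOPS per GB (infrastructure heat)
    sensitivity: Optional[float]  # Y-axis: fraction of sensitive files; None if non-classifiable

def placement(w: Workload, hot_threshold: float = 1.0,
              sensitive_threshold: float = 0.1) -> str:
    """Map a workload to a deployment target according to the subspaces of Figure 1."""
    if w.sensitivity is None:
        # "Non-classifiable" data (e.g., databases) needs domain-knowledge review;
        # conservatively keep it on the private cloud until then.
        return "private"
    cold = w.io_density < hot_threshold
    non_sensitive = w.sensitivity < sensitive_threshold
    # Only data that is both cold and non-sensitive is a public-cloud candidate.
    return "public" if cold and non_sensitive else "private"
```

Summing the sizes of the workloads landing in each bucket then yields the cloud-sizing estimate mentioned above.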
The hotness of data on the $X$-axis can be obtained by measuring infrastructure performance metrics. The key issue therefore lies in how to determine the business sensitivity of data on the $Y$-axis.
To the best of our knowledge, StackInsights is the first cognitive system that uses machine learning to understand data sensitivity based on metadata, as it correlates application, data, and infrastructure metrics for hybrid cloud readiness assessment. In contrast to traditional methods which require content scanning for sensitivity analysis, StackInsights significantly reduces the total running time through machine learning. It advises users what data are appropriate to be stored on premises, or to be migrated to the cloud, and the specific cloud deployment model, by integrating both data sensitivity and hotness in terms of infrastructure performance.
The rest of the paper is organized as follows. We describe our motivation and contribution in Section \[motivation\]. The relevant work is reviewed in Section \[related\]. In Section \[framework\], we introduce the framework of StackInsights as well as the cognitive learning components. Section \[exp\] is on the experiments and results. Finally, we conclude in Section \[conclusion\].
![Hybrid cloud migration overview[]{data-label="fig:intro"}](intro_fig.png){width="77.00000%"}
Motivation and Contribution {#motivation}
===========================
The classification of data sensitivity belongs to the general domain of data classification, which allows organizations to categorize data by business relevance and sensitivity in order to maintain the confidentiality and integrity of their data. Data classification is a very costly activity. In large organizations, data is usually stored and secured by many repositories or systems in different geo locations, which may have different data privacy and regulatory compliances. Various security access approvals have to be obtained in order to get access to data. In addition, traditional sensitivity assessment approaches require an intrusive and in-depth content scanning of the objects, which is not scalable in this big data era, where numerous structured and unstructured data are generated in real-time. To solve this issue, we develop a machine learning model in StackInsights, which can perform a business sensitivity classification by learning from file metadata, which is much easier and more cost-efficient to collect. By using metadata alone, we can already obtain a sensitivity prediction model with high accuracy. Therefore, we do not have to perform a detailed content analysis on all the files. Instead, intensive content analysis can be conducted on the predicted non-sensitive files for further screening if needed. Our model-based approach significantly reduces the total sensitivity assessment time.
When migrating workloads among private, public, and hybrid clouds, one of the biggest challenges is the storage layer. An enterprise’s infrastructure might consist of a mixture of file, block, and object storage, which have different properties and offer their own advantages. Enterprises big or small tend to manage large and heterogeneous environments. For example, one of our IT environments supports around 100 business accounts, spread over several geo locations, amassing a total of 200 PB of block storages alone. Similarly, our file storage fabric is also massive, where file shares (mapped to volumes/q-trees) may be in the TB, or even PB scale.
Given such a large storage infrastructure with a number of volumes, we need to determine their cloud migration priority, i.e., which volumes should be migrated first. Data sensitivity is one of the most important factors in cloud migration. Volumes of low sensitivity can be sanitized first and then migrated to the public cloud. From a cloud migration service admin point of view, it is not critical to know the exact sensitivity of the volumes, but rather the sensitivity “level” of the volumes, so that a priority can be given to a large number of volumes. Traditional sensitivity assessment approaches, which require content scanning, are very expensive. It is impractical and not necessary to perform a full content scanning on all the volumes in order to obtain the priority. Machine learning can help predict the sensitivity of files based on the easily collected file meta data, and then obtain the migration priority within a much shorter time.
As it is expensive to determine the business sensitivity of each storage volume, we develop a clustering component in StackInsights, which identifies groups of volumes that share similar characteristics. Specifically, the volumes are clustered based on their meta level information, which are obtained by aggregating the file metadata at the volume level. The sensitivity of a representative volume in each cluster is used as the representative sensitivity of all the volumes in the same cluster. To obtain the sensitivity of a representative volume, we apply the previously introduced machine learning model to predict the sensitivity of each single file on that volume. The sensitivity of a volume is defined as the number of sensitive files divided by the total number the files.
Similarly, we also obtain the IOPS of each volume and compute its IO density, which is defined as IOPS per GB. All the volumes with both low business sensitivity and IO density can be candidates to be migrated to public clouds while the other volumes should remain on premise or be migrated to private clouds.
Related Work {#related}
============
In the marketplace of enterprise software, there are tools developed for data classification in regard to data governance or life cycle management. For example, [@symantec] [@varonis] provide data classification services for managing and retaining various types of data such as emails and other unstructured data through pre-determined rules or dictionary searches. Data privacy and security have become the most pressing concerns for organizations. To embrace the newly announced General Data Protection Regulation (GDPR) by the European Union, enterprises are making great efforts in addressing key data protection requirements as well as automating the compliance process. For example, IBM Security Guardium solutions [@ibm] help clients secure their sensitive data across a full range of environments. Data classification, as the first step to security, has become extremely important. Only when we understand which data are sensitive through classification can we design better security products to protect them. On the other hand, [@gravitant] assesses the cloud migration readiness by providing a questionnaire to the owner of the infrastructure. Many existing tools lack cognitive aspects, and even when such aspects exist, the data preparation step requires scanning file content, which is not scalable and limits the types of files to which the tool can be applied.
Besides the rule-based approaches to classify data, there are previous attempts leveraging a predictive model. Model-based approaches are much more systematic and scalable because there is no need to generate a rule to classify files manually. Data classification based on the proof-of-concept system that crawls all files in order to analyze data sensitivity was studied in [@park:2011]. A nearest neighbor algorithm was proposed in [@Ali2015] to attribute the confidentiality of files. A general-purpose method to automatically detect sensitive information from textual documents was presented in [@Sanchez2012] by adapting information theory and a large corpus of documents. The application to data loss prevention by using automatic text classification algorithms for classifying enterprise documents as either sensitive or non-sensitive was introduced in [@Hart2011]. However, most of these previous works require an exhaustive process to crawl the contents of data, which is impossible in many applications due to privacy, governance, or regulations. [@Mesnier2004] proposed to use the decision tree classifier for finding associations between a file’s properties and metadata. There are preliminary works in the field of hybrid cloud migrations whose components include a data classification method. The tool for migrating enterprise services into hybrid cloud-based deployment was introduced in [@Hajjat2010]. The complexity of enterprise applications is highlighted and the model, accommodating the complexity and evaluating the benefits of a hybrid cloud migration, is developed. Though the work sheds insight on data security, the tool does not assess the sensitivity of applications at the file level. A rule-based decision support tool is deployed in [@Khajeh2011] for the purpose of providing a modeling tool to compare various infrastructures for IT architects. 
In many practical cases, however, IT architects are blind to the contents of data and thus it is not straightforward to model their applications, data, and infrastructure requirements without understanding the nature of data such as sensitivity. In addition, [@Menzel2012] developed a framework which automates the migration of web servers to the cloud.
As observed in previous works, data classification and hybrid cloud migration are explored separately although the two components are tightly related. In contrast to the existing works, our proposed framework covers the whole process, including classifying data, assessing the readiness of cloud migration, and finally providing decision support for hybrid cloud migration. Furthermore, we develop an efficient and scalable method to determine cloud readiness by considering data sensitivity and infrastructure performance through a cognitive learning process.
StackInsights Framework {#framework}
=======================
We show the high-level framework of StackInsights in Figure \[fig:arch\]. In order to gain insights into an existing IT environment, we scan various layers across the entire stack: 1) the application layer, 2) the data layer, and 3) the infrastructure layer. The application layer tells us the types of the running workloads, what components they depend on, and the specific requirements. The data layer provides file metadata as well as content. Finally, the infrastructure layer provides performance metrics, such as, how often the data is accessed, and where it is stored.
{width="6.0in"}
Workload Scan
-------------
Before scanning the infrastructure and the data itself, we first need to understand what types of workloads are running within the enterprise. This can be done through the use of tools such as IBM’s Tivoli Application Dependency Discovery Manager (TADDM) [@taddm], which provides automated discovery and application mappings. Additionally, the IT staff within each enterprise are also good sources as they are often the subject matter experts and can give a high-level overview of their workloads. This step is critical as we need to understand the workloads or applications before starting any sort of scan. For example, file content scanning could be invasive to latency-sensitive workloads and interfere with their operation.
Infrastructure Scan {#scan}
-------------------
In order to understand an IT environment, we need to get a picture of where the data is, how it can be accessed, and what types of storage the infrastructure consists of. Our framework is composed of a set of pluggable modules that can be adapted to scan different infrastructures. This is important as infrastructures tend to be heterogeneous in nature. For example, a storage infrastructure may have a mixture of different storage types and management technologies developed by multiple providers, as well as different ways of accessing data (e.g., via block, file, or object stores). For example, if the workload scan tells us that the storage layer consists mostly of storage filers, we can assume that at this layer, most data is reached through protocols such as Network File System (NFS) and Common Internet File System (CIFS), as well as manufacturer-specific APIs such as NetApp’s ONTAP [@ontap]. Once a scan of the infrastructure has been completed, we can then build a location map of all the data. This step is critical as we need to be able to identify the storage volumes or data shares that are most critical for scanning. Similarly, we need to identify volumes/file shares that may host critical data or data that administrators do not wish to migrate. For the rest of the paper, we will use $volume$ and $file~share$ interchangeably as NetApp filers have the notion of Q-trees/Volumes as mapping points for file shares (which are exposed to users through protocols such as CIFS); the root volume/q-tree for each share is mounted on our virtual machines as a read-only directory. Similarly, with block storage, we care mostly about volume granularity, as most migration utilities operate at this level.
Volume Clustering
-----------------
Given a large heterogeneous IT infrastructure, we first apply a clustering method to identify groups of volumes that share similar characteristics. The volumes are clustered based on their meta-level information, which is obtained by aggregating the metadata of all the files on the same volume, so that each volume is represented by a feature vector. Note that more meta-level information about the volumes can be collected and included as additional features. We apply the K-means algorithm to obtain the volume clusters. Given a set of data points $(x_1, x_2, ..., x_n)$, where each data point is represented by a $d$-dimensional feature vector, K-means aims to partition the $n$ data points into $k$ sets $S = (s_1, s_2, ..., s_k), k \leq n$, so that the total within-cluster distance is minimized, i.e., its objective function is $\operatorname*{arg\,min}_{S} \sum_{i=1}^{k}\sum_{x \in s_i} || x - \mu_{i} || ^2$, where $\mu_i$ is the mean of the data points in $s_i$.
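As a sketch, the volume clustering step might look like the following; the feature vectors here are hypothetical aggregates (total size in TB, fraction of files not accessed in a year, fraction not modified in a year), not the actual metadata schema:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical volume feature vectors:
# [total size (TB), frac. not accessed in 1 yr, frac. not modified in 1 yr]
volume_features = np.array([
    [13.66, 0.55, 0.98],
    [12.32, 0.60, 0.95],
    [6.06,  0.10, 0.20],
    [1.14,  0.12, 0.25],
    [0.66,  0.90, 0.91],
])

# K-means minimizes the total within-cluster squared distance.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(volume_features)
labels = kmeans.labels_  # cluster assignment per volume
```

With these illustrative values, the two large volumes land in one cluster and the three smaller ones in the other.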
After all the volumes are clustered, we select a representative volume from each cluster, which is defined as the one with minimum total distance to all the other volumes in the same cluster. We further analyze the business sensitivity of every representative volume. We assume that volumes in the same cluster share a similar sensitivity score.
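The representative-volume rule (the member with minimum total distance to its cluster peers) can be sketched as:

```python
import numpy as np

def representative_index(points):
    """Return the index of the point with minimum total
    Euclidean distance to all other points in the cluster."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return int(dists.sum(axis=1).argmin())

# For three collinear points, the middle one is representative.
cluster = np.array([[0.0], [1.0], [5.0]])
```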
Cognitive Learning
------------------
Business sensitivity is a critical factor in hybrid cloud migration. We need to analyze the sensitivity of the representative volume of each volume cluster, defined as the number of sensitive files divided by the total number of files on that volume. The current approach to detecting sensitive files requires in-depth content scanning, which is intrusive and expensive: a single storage volume may contain millions of files, at TB or even PB scale, and a full content scan to determine the sensitivity of all the files would take a tremendous amount of time. In StackInsights, we develop a cognitive learning component that predicts the sensitivity of files based on easily obtained metadata, which significantly reduces the total running time.
### File Content Crawling {#datamap}
We randomly sample a subset of files from the selected representative storage volume, crawl their content, and apply the traditional rule based approach to determine the sensitivity. Files are identified as sensitive or non-sensitive by matching against a list of regular expressions and keywords predefined by users. Table \[tb:sendic\] shows a sample dictionary of sensitive information.
  Sensitive information    Detection method
  ------------------------ ------------------
  Email address            regex
  Phone number             regex
  Social Security Number   regex
  Credit card number       regex
  Keywords                 list of tokens

  : Dictionary of sensitive information[]{data-label="tb:sendic"}
The definition of sensitive files shown above can be modified or extended with the domain knowledge of specific industry verticals. For instance, financial, healthcare, or retail industries may have their own guidelines for defining sensitive files, but we have tried to come up with a reasonable criterion for singling them out. The crawling output contains attributes such as file name, file path, a flag for whether the content was crawled, the number of total tokens excluding stop words, the number of matching keywords, email addresses, phone numbers, social security numbers, and credit card numbers. Users can define the file sensitivity labeling rule. For example, a file can be labelled as sensitive if it contains any sensitive information in the dictionary; users can also specify a more stringent rule, such as labeling a file sensitive only if the percentage of sensitive information is above a certain threshold. After the content crawling process, each sampled file is labeled as one of three classes: sensitive, non-sensitive, or unknown (in case it cannot be scanned, e.g., unsupported file formats or encrypted files). The sensitivity labels of these files are correlated with their metadata, which together compose the training data for building our sensitivity prediction model.
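A minimal sketch of the rule-based labeling: a file is marked sensitive if its content matches any regex or keyword in the dictionary. The patterns and keywords below are illustrative placeholders, not the production dictionary:

```python
import re

# Illustrative placeholders; a real dictionary would be richer.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),   # email address
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # SSN-style number
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),     # credit-card-style digits
]
SENSITIVE_KEYWORDS = {"confidential", "salary", "password"}

def label_content(text):
    """Label text 'sensitive' if any keyword or regex matches."""
    if SENSITIVE_KEYWORDS & set(text.lower().split()):
        return "sensitive"
    if any(p.search(text) for p in SENSITIVE_PATTERNS):
        return "sensitive"
    return "non-sensitive"
```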
### Intelligent File Sampling {#sampling}
After the training data is prepared, we can build a binary classification model and apply it to predict the sensitivity of the remaining files on the volume based on their metadata. However, it is well known that the quality of the training data has a significant impact on the performance of a machine learning model. In Section \[datamap\], to prepare the training data, we randomly sample a subset of files, crawl their content, and determine their sensitivity. One question remains: how should we conduct the random sampling in order to obtain “good” training data? In StackInsights, we develop a clustering-based progressive sampling method to solve this problem.
All the files on a storage volume are first clustered using their metadata, for example, via K-means. We then compute the percentage of data points assigned to each cluster, and perform a random sampling on each cluster to select data points proportionally with respect to these percentages. For example, suppose the total number of files on the volume is 160,000, a particular cluster holds 20% of them, and we want to randomly sample 3% of the entire data as training data. The final number of files sampled from that cluster is then $160,000 \times 20\% \times 3\% = 960$. We crawl the content of the selected files and determine their sensitivity using the approach introduced in Section \[datamap\].
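The cluster-proportional sampling can be sketched as follows (the function name and seed handling are illustrative):

```python
import random

def stratified_sample(file_ids, cluster_labels, rate, seed=0):
    """Sample `rate` of the files from each cluster, so the overall
    sample stays proportional to the cluster sizes."""
    rng = random.Random(seed)
    by_cluster = {}
    for fid, c in zip(file_ids, cluster_labels):
        by_cluster.setdefault(c, []).append(fid)
    sampled = []
    for members in by_cluster.values():
        sampled.extend(rng.sample(members, round(len(members) * rate)))
    return sampled
```

With the example in the text, a cluster holding 20% of 160,000 files sampled at 3% contributes round(32,000 × 0.03) = 960 files.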
A progressive sampling method is applied to determine the final total sampling size. We start from a relatively small sampling percentage and apply the aforementioned clustering-based sampling to obtain a set of training data. A machine learning model is trained on this data set, and we obtain its classification accuracy on a held-out test dataset or via K-fold cross validation on the training data. We compare this with the classification accuracy from the previous run (the accuracy is set to 0 for the first run): if the accuracy improves, we do an incremental sampling on all the clusters with a user-defined incremental sampling size. If the change in classification accuracy is within a predetermined threshold, or the total sampling size reaches an upper bound, the sampling process stops.
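The stopping rule can be sketched as follows, where `evaluate` stands in for training a model on a sample of the given size and returning its cross-validated accuracy; the names and the threshold default are illustrative:

```python
def progressive_sample_size(evaluate, start, step, max_size, tol=0.005):
    """Grow the sample size while accuracy still improves by more
    than `tol`, stopping at the cap `max_size`."""
    size, prev_acc = start, 0.0  # accuracy is 0 for the first run
    while True:
        acc = evaluate(size)
        if abs(acc - prev_acc) < tol or size >= max_size:
            return size
        prev_acc = acc
        size += step
```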
### Metadata-based Sensitivity Prediction {#modeling}
We use the newly sampled dataset from Section \[sampling\] to train a binary classification machine learning model. Each file is represented as a feature vector, derived from the file metadata, consisting of features such as tokens in file names, file extensions, paths, and sizes. The output is the classification label “sensitive” or “non-sensitive”. Once the machine learning model is built, we can then apply it to classify the remaining files on the volume based on the available metadata obtained from the infrastructure scan in Section \[scan\].
Experiment {#exp}
==========
Our environment consists of roughly 100 different accounts with a wide variety of storage requirements; each account has a mixture of file, block, and object storage. For our experiments, we choose a mid-size account whose storage infrastructure is predominantly file storage. This account has two sites (data centers), each with a set of NetApp filers in clustered mode (roughly 33.8 TB). We install one secure virtual machine inside each site, preloaded with our StackInsights scanning codebase, and use these virtual machines to scan the environment and extract the necessary information for further analysis. Figure \[fig:scan\] shows a high-level diagram of our sample environment. The storage filer gives us a map of what the file storage infrastructure looks like. We first build a tree of the infrastructure by starting from the cluster and descending to the file level (i.e., $cluster~\Rightarrow~aggregate~\Rightarrow~volume~\Rightarrow~q-tree~\Rightarrow~file system~\Rightarrow~directory~\Rightarrow~file$). A collected file metadata example is shown in Table \[filerecord\]. In parallel, we poll the storage filers for the IOPS of each storage volume. This allows us to measure the IO density for each system and thereby determine how volatile a given volume or file share is.
![Infrastructure, File Metadata and File Content Scan[]{data-label="fig:scan"}](scan.png){width="3.4in"}
We use IBM’s TADDM tool to get a picture of the different workloads running in the environment. The system administrators also provide valuable feedback, as they are able to help us narrow our scanning scope. Because the filers in the environment are all NetApp filers, we use the ONTAP APIs to extract file metadata, as well as performance metrics, from the filers. All machine learning algorithms are implemented based on the Python scikit-learn library.
IOPS and file metadata collection {#iops_and_metadata}
---------------------------------
The IOPS for each volume is collected over a four-week time window. We then compute the hourly-average IO density (IO per second per GB) for each volume. Table \[iops\] shows the total size of each volume as well as its corresponding IO density range. As we can see, except for V5, the IO density of all the volumes is between 0.0 and 0.01, which is relatively cold. Based on this storage performance metric, we may recommend the tier 3 storage type for V5 and the nearline or inactive storage type (e.g., archiving) for the other volumes. In Figure \[fig:iops\], all the volumes are first aligned along the $X$-axis according to their IO density. Note that since the highest IO density is only around 0.05, we indicate the corresponding hotness as “warm” on the $X$-axis.
![Volumes are aligned according to IO density[]{data-label="fig:iops"}](aligniops.png){width="3.2in"}
Volume name Total size (TB) I/O density range
------------- ----------------- -------------------
V1 13.66 0 - 0.01
V2 12.32 0 - 0.01
V3 6.06 0 - 0.01
V4 1.14 0 - 0.01
V5 0.66 0.01 - 0.1
V6 0.01 0 - 0.01
V7 0.01 0 - 0.01
: Volume IO density[]{data-label="iops"}
We leverage the NetApp ONTAP APIs to extract the metadata of all the files on these volumes, in total more than 13 million files. One example is shown in Table \[filerecord\], where “Last accessed time” is the time stamp at which the file was last accessed, “Creation time” is the time stamp at which the file was created, “Changed time” is the time stamp at which the file metadata (e.g., file name) was changed, “Last modified time” is the time stamp at which the file itself (not its metadata) was last modified, “File size” is the size allocated to the file on disk, and “Bytes used” is the number of bytes actually written to the file. The volume-level metadata is then obtained by aggregating the metadata of all the files on the same volume; one example is shown in Table \[volumerecord\]. The “Top3ExtensionbySize” attribute lists the top three file extensions ranked by their total sizes: in Table \[volumerecord\], “.nsf”, “.zip”, and “.xls” are the top three file extensions on that volume in terms of total size. “NotModifiedin1YearCount” is the percentage of files on that volume, in terms of file counts, that have not been modified in the past year. Similarly, “notAccessedin1YearSize” is the percentage of files on that volume, in terms of file sizes, that have not been accessed in the past year. The other attributes are defined likewise. The volume metadata can provide further insights: for one volume, we discover that about half of the files have not been accessed in the past year, yet they occupy about 98% of the storage capacity.
Volumes clustering
------------------
After the volume-level metadata is obtained, we apply K-means to cluster all the volumes. Due to the relatively small number of volumes, we empirically set $K=3$. The optimal value of $K$ can be determined by the elbow method, which considers the percentage of variance explained by the clusters against the total number of clusters: the first clusters explain much of the variance, and the optimal number of clusters is chosen at the point where the marginal gain drops. Figure \[fig:vcluster\] shows the clustering results when $K=3$. Note that we have not considered the data sensitivity yet, so all the volumes are still aligned along the $X$-axis.
![Volume clustering[]{data-label="fig:vcluster"}](volumeclusters.png){width="3.2in"}
We select a representative volume from each cluster, which is defined as the one with the minimum total distance to the other volumes in the same cluster. A sensitivity analysis is performed on each representative volume. The volumes in the same cluster are assumed to share similar sensitivity scores.
Metadata-based Sensitivity Prediction {#metadata-based-sensitivity-prediction}
-------------------------------------
We present in this section the experimental results of the metadata-based sensitivity prediction.
### Training Data
For each representative volume, we build a machine learning model to learn the sensitivity of all the files. One of the selected volumes, from the first data center, contains 3.9 million files totaling 2.64 TB in size. Through the infrastructure scan introduced in Section \[scan\], we obtain the metadata of all the files. We randomly select a subset of the files, crawl their content, and determine their sensitivity using the dictionary based approach introduced in Section \[datamap\]; specifically, we use Apache Tika to scan the file content, and a file is considered sensitive if it contains any sensitive information listed in Table \[tb:sendic\]. We finally obtain a set of 114,854 files with both metadata and sensitivity labels, of which 66,221 (57.65%) are labelled as sensitive and the rest as non-sensitive. Similarly, we obtain another set of 39,571 files with both metadata and sensitivity labels on a representative volume from the second data center; this data set includes 21,284 sensitive files (53.79%). In the following, we refer to these two data sets as dataset I and dataset II. Unless noted otherwise, the percentages of sensitive files in these two datasets remain 57.65% and 53.79%, respectively.
### Feature Engineering
Given the training data, we derive features from file metadata for the classification model. Specifically, the features are divided into several categories: file name, file extension, file path, file size related, and time related. We will briefly introduce each feature category.
**Name Features:** We extract the name of the file in plain text. The file name can contain textual information that indicates the file sensitivity; for example, a file named “patent disclosure review\_Feb2\_2015.docx” probably contains intellectual property information and should be considered sensitive. In order to exploit this textual information, we model each file name using the bag-of-words approach and represent it as a vector $v = [v_1, ..., v_n]$, where $n$ is the size of the vocabulary and $v_i$ is the frequency of word $i$ in the file name. Before computing the feature vectors, we clean the file names by removing all numeric characters, punctuation marks, and stop words. Finally, we obtain the textual feature vector, whose vocabulary size is around 28,000 for dataset I and 15,000 for dataset II.
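As a sketch, the name-feature construction might look like the following; the file names are illustrative, and scikit-learn's built-in English stop-word list stands in for the paper's stop-word removal:

```python
import re
from sklearn.feature_extraction.text import CountVectorizer

names = ["patent disclosure review_Feb2_2015.docx", "team lunch menu.txt"]
# Remove digits, underscores, and dots before tokenizing.
cleaned = [re.sub(r"[\d_.]+", " ", n) for n in names]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(cleaned)  # one word-count row per file name
```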
**Path Features:** The file paths are extracted from the file system, indicating where the files are stored. In order to exploit this feature, we transform the file path into a binary vector. We choose a parameter $d$ that controls the depth of the folders we explore and extract all the folders that are $d$ levels away from the root. Assuming that we have found $m$ folders, we store them in a list $l=[l_1,...,l_m]$. Then, for a given file $f$, we represent it as a feature vector $v=[v_1,...,v_m]$ of length $m$, where $v_i = 1$ if $f$ belongs to folder $l_i$, and 0 otherwise.
**Extension Features:** We expect that files with certain extensions are more likely to be sensitive than others. In order to use this feature, we apply a similar procedure as for the file paths and encode the extensions as a binary vector. We collect all the extensions that appear in our training set and store them in a list $e=[e_1,...,e_m]$, where $m$ is the number of extensions collected. For a given file $f$, we represent it as a feature vector $v = [v_1, ..., v_m]$, where $v_i = 1$ if $f$ has extension $e_i$, and 0 otherwise.
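A minimal sketch of both binary encodings; the folder and extension lists are hypothetical examples of what a scan would collect:

```python
import os

def path_features(path, folders, depth=2):
    """Binary vector: 1 if the file lies under the given depth-d folder."""
    prefix = "/".join(path.strip("/").split("/")[:depth])
    return [1 if prefix == f else 0 for f in folders]

def extension_features(path, extensions):
    """Binary vector over the extensions collected from the training set."""
    ext = os.path.splitext(path)[1]
    return [1 if ext == e else 0 for e in extensions]

folders = ["home/alice", "home/bob"]     # hypothetical depth-2 folders
extensions = [".docx", ".txt", ".zip"]   # hypothetical extension list
```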
\[tb:feature\_size\]
Feature category dataset I dataset II
-------------------- ----------- ------------
File name 29736 14861
File path 788 315
File extensions 1170 316
File size related 2 2
Time related 3 3
Total feature size 31699 15497
: Feature summary
**Size Related Features:** We have access to two size features: the file size, which is the size allocated to the file on disk, and the bytes used, which is the number of bytes actually written.
**Time Related Features:** We include three time related features for each file: the difference between the last accessed time and the creation time, the difference between the changed time and the creation time, and the difference between the last modified time and the creation time, all measured in days. Please refer to Section \[iops\_and\_metadata\] for the detailed meaning of these time stamps.
**Feature Summary:** After the features in each category are collected, we concatenate them into a larger feature vector to represent each file. Because the size of the file path feature grows exponentially with the depth $d$, we choose $d=2$ empirically in our experiments. All the features are normalized into the range $[0,1]$. Table \[tb:feature\_size\] summarizes the total feature size for each category and the overall size of the feature vector.
### Feature Selection
We use feature selection to get a more in-depth view of the features that matter to the machine learning model; in particular, we investigate which individual features and which types of features are the most significant. We apply mutual information [@mi2005] to select the top features. Table \[tb:mi\] shows the top 10 selected features.
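A toy sketch of mutual-information ranking with scikit-learn, where one feature tracks the label exactly and the other is noise:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# Feature 0 equals the label; feature 1 is uninformative.
X = np.array([[1, 0], [1, 1], [0, 0], [0, 1], [1, 0], [0, 1]])
y = np.array([1, 1, 0, 0, 1, 0])

scores = mutual_info_classif(X, y, discrete_features=True, random_state=0)
# The informative feature receives the higher score.
```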
\[tb:mi\]
  Feature Type              dataset I     dataset II
  ------------------------- ------------- ------------
  Extension                 .url          .xls
  Extension                 .properties   .txt
  Extension                 .mdm          .html
  Extension                 .pas          .net
  File Size                 -             -
  Bytes Used                -             -
  Last access time diff     -             -
  Change time diff          -             -
  Last modified time diff   -             -
  Text                      “feature”     “username”

  : Top ten features
Among the top 10 features, file extension features account for the largest share. In addition, the two file size related and three time related features are also significant, and the text tokens “feature” and “username” are among the top 10 (we use “username” in place of an actual username for privacy reasons). We do not find any file path features in the list, which may indicate that the location of a file in the filesystem carries less significance for the prediction than its size, time, name, or extension; for example, one particular folder may include both sensitive and non-sensitive files.
Table \[feature\_category\] shows the number of selected features in each category when we vary the total number of top selected features. Again we notice that file extensions and text tokens in file names are significant features, while the file paths do not appear in the top list.
Feature Type Top 100 Top 500
------------------- --------- ---------
Extension 27 79
File size related 2 2
Time related 3 3
Text 68 416
: Top feature categories[]{data-label="feature_category"}
### Prediction Models
After all the features are extracted, we build machine learning models on our training data and apply them to predict file sensitivity based on metadata. Specifically, we compare the performance of several well-known classification models: Naive Bayes, Logistic Regression, Support Vector Machines (SVM), and Random Forest.
All the experiments are conducted using 10-fold cross validation. Since, among the large number of features, only the file size related and time related features are numerical (five in total), we apply multinomial Naive Bayes, which models the feature distributions as multinomial rather than Gaussian. Naive Bayes has the advantage of having no parameters to tune. Logistic Regression has only one parameter to tune: the regularization parameter $C$. To select the optimal value of $C$, we run a grid search using 10-fold cross validation over multiple values of $C$ and find that the best value is $C=0.9$. For SVM, the linear kernel is selected; in practice, we find that the RBF kernel takes a very long time to converge. The optimal regularization parameter is selected following the same procedure as for Logistic Regression, yielding $C=0.8$ for linear SVM. We use the default parameter setting for Random Forest: the number of trees in the forest is 10, there is no maximum tree depth constraint, and the samples are drawn with replacement. Table \[performance\] shows the performance of each model in terms of overall accuracy, precision, recall, and F1 score. In our classification problem, the positive class is “sensitive” and the negative class is “non-sensitive”. The precision is defined as the ratio $tp / (tp + fp)$, where $tp$ is the number of true positives and $fp$ the number of false positives. The recall is the ratio $tp / (tp + fn)$, where $fn$ is the number of false negatives. Accuracy is the ratio of correctly classified samples to the total number of samples. The F1 score, a weighted average of precision and recall, is computed as $2 \times (precision \times recall) / (precision + recall)$.
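The metric definitions above can be collected into a small helper, taking the four confusion-matrix counts as input:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```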
In Table \[performance\], the percentages of sensitive files in datasets I and II are 57.65% and 53.79%, respectively, so the classes in the training data are roughly balanced. As we can see, Random Forest has the best performance among all the models on all the metrics; its precision and recall on dataset I are above 90%. In contrast to the other models, Random Forest, as an ensemble method, combines the predictions of several base estimators, i.e., decision trees. Each tree in the ensemble is built from a sample drawn with replacement from the training set, and when splitting a node during the construction of the tree, the split is chosen as the best split among a random subset of the features. Since both the feature size and the sample size are large in our classification, this randomness reduces the variance of the forest through averaging, yielding an overall better model.
In practice, the percentage of sensitive files in the training data depends on the specific domain and sensitivity labeling. In some domains, the sensitivity labelling may be stringent, resulting in a relatively small percentage of sensitive files. We also design experiments to test the performance of the machine learning models in such a case. Specifically, we only use the phone regular expression in Table \[tb:sendic\] to do the labelling, discarding the other sensitive information, which yields 25,256 (21.99%) sensitive files for dataset I. We apply the four machine learning models and report the results for imbalanced classes in Table \[performance2\]. The “balanced” classification mode is used for Logistic Regression, SVM, and Random Forest, where the values of the prediction target $y$ are used to automatically adjust weights inversely proportional to class frequencies in the training data. As shown in Table \[performance2\], Random Forest has the best performance among all the models on all the metrics except recall. Due to space limits, we only show the results on dataset I.
  Model (dataset I)     Accuracy       Precision      Recall         $F1$
  --------------------- -------------- -------------- -------------- --------------
  Naive Bayes           0.8044         0.8348         0.8238         0.8293
  Logistic Regression   0.8115         0.8664         0.7959         0.8296
  SVM                   0.8309         0.8730         0.8269         0.8493
  Random Forest         **0.9014**     **0.9250**     **0.9022**     **0.9135**

  : Models on datasets of balanced classes[]{data-label="performance"}
  Model (dataset II)    Accuracy       Precision      Recall         $F1$
  --------------------- -------------- -------------- -------------- --------------
  Naive Bayes           0.7780         0.7855         0.8081         0.7966
  Logistic Regression   0.7923         0.8350         0.7652         0.7985
  SVM                   0.8055         0.8493         0.7762         0.8111
  Random Forest         **0.8739**     **0.8926**     **0.8703**     **0.8813**

  : Models on datasets of balanced classes[]{data-label="performance"}
  Model (dataset I)     Accuracy       Precision      Recall         $F1$
  --------------------- -------------- -------------- -------------- --------------
  Naive Bayes           0.8813         0.8122         0.5988         0.6894
  Logistic Regression   0.8538         0.6270         **0.8274**     0.7134
  SVM                   0.8774         0.6836         0.8237         0.7471
  Random Forest         **0.9361**     **0.9176**     0.7795         **0.8429**

  : Models on dataset of imbalanced classes[]{data-label="performance2"}
As we can see, Random Forest is very robust and consistently outperforms the other methods in both the balanced and imbalanced classifications. Previously, we used 10-fold cross validation to test the prediction performance, which uses 90% of the data for training and the rest for testing. We now vary the training/testing split to check how Random Forest performs when a relatively small percentage of the data is used for training. Table \[performance3\] shows the performance of Random Forest with two-fold cross validation on both datasets I and II, where 50% of the data is used for training and 50% for testing. Again, Random Forest shows good performance on all the metrics, even better than what the other models achieve with 90% of the data used for training.
Accuracy Precision Recall $F1$
---------------------------- ---------- ----------- -------- --------
Random Forest (dataset I) 0.8790 0.9072 0.8802 0.8935
Random Forest (dataset II) 0.8583 0.8820 0.8503 0.8658
: Performance with two-fold cross validation[]{data-label="performance3"}
To have a detailed analysis of the classification results, we show the confusion matrices of Random Forest (one fold in a two-fold cross validation) in Table \[nb\_cm\], which allows us to see how well the model performs on the classification of each class. Overall, the error ratios on false positives and false negatives are balanced on both datasets.
Random Forest (dataset I) Non-Sensitive Sensitive
--------------------------- --------------- --------------
Non-Sensitive 21493 (0.88) 2824 (0.12)
Sensitive 3999 (0.12) 29112 (0.88)
: Model confusion matrices[]{data-label="nb_cm"}
Random Forest (dataset II) Non-Sensitive Sensitive
---------------------------- --------------- -------------
Non-Sensitive 7897 (0.86) 1247 (0.14)
Sensitive 1535 (0.14) 9107 (0.86)
: Model confusion matrices[]{data-label="nb_cm"}
**Prediction Model Usage**: Note that we do not intend to use the above prediction model to completely replace traditional content scanning methods, such as the dictionary based method. The prediction model is based on metadata and cannot achieve 100% accuracy, and in data governance and security, the misclassification of sensitive data can be catastrophic for an organization: for example, leaks of critical IP documents or data compliance violations can lead to serious legal consequences. Therefore, a thorough sensitivity screening can be performed to make sure all the sensitive information is identified. All the files that the machine learning model predicts as sensitive are labelled as sensitive data, and we can then perform the intensive content scanning method on all the files that are predicted to be non-sensitive. For example, after applying Random Forest on dataset I, 25,492 files are predicted as non-sensitive. The content scanning based method is then applied to these files, so that the 3,999 mis-classified sensitive files can be identified. In contrast to content scanning all the 57,428 files, we now only need to scan 25,492 files (44.38%), a significantly smaller number. There are certainly non-sensitive files mis-classified as sensitive: 2,824 such files are mis-classified and, as a result, “over-protected”. However, they account for only 4.91% of the total files. As a simple comparison baseline, given the percentage of sensitive files in the training data for dataset I (i.e., 57.65%), a user could randomly select 57.65% of the data, label it as sensitive and the remainder as non-sensitive, without using the prediction model, and then perform content scanning on the files labelled non-sensitive in order to identify any sensitive information.
Note that in this baseline, among the 57.65% of files labelled as sensitive, 42.35% are actually non-sensitive (matching the percentage of non-sensitive files in the training data); therefore $57.65\% \times 42.35\% = 24.41\%$ of all files are non-sensitive yet misclassified as sensitive, i.e., “over-protected”, in contrast to the 4.91% misclassified by the prediction model.
### Prediction Ranking and Running Time
After the machine learning model is trained, we apply it to predict the sensitivity of the remaining files on the same volume. For dataset I, we apply Random Forest to predict the sensitivity of the remaining 3.9 million files and measure its running time. On a local machine with 2.5 GHz Intel Core i7 CPU and 16GB of RAM, the total running time is 112 minutes. The model classifies 1,804,798 (46.33%) files as sensitive and 2,089,913 (53.67%) files as non-sensitive. As a comparison, the content scanning based approach took more than 30 hours to process 228,000 files, which is only 5.85% of the 3.9 million files. StackInsights reduces the total running time by orders of magnitude.
We also apply the trained learning model to predict the sensitivity of files on other volumes in the same data center. Table \[summary\] shows the prediction results: the predicted sensitive files \#, volume sensitivity, and running time in seconds. As we can see, V1 and V4 have sensitivity close to 1.00. They also belong to the same cluster in Figure \[fig:vcluster\]. V2, V3, and V5 have sensitivity between 0.45 and 0.70, which are in the same cluster. V6 and V7 are in the same cluster, with predicted sensitivity 0.5758 and 0.1728, respectively.
Total file \# Sensitive file \# Sensitivity Running time (sec)
---- --------------- ------------------- ------------- -------------------- --
V1 400415 385369 0.9624 286.28
V2 7798224 5501763 0.7055 5993.20
V3 3894711 1804798 0.4633 6720
V4 170808 170808 1.0000 134.01
V5 1481322 902804 0.6095 1133.58
V6 686 395 0.5758 0.51
V7 81 14 0.1728 0.06
: Prediction results on all the volumes[]{data-label="summary"}
### Migration Insights
After the sensitivity of all the files on a storage volume is predicted, we can compute sensitivity scores at different levels. The sensitivity score is defined as the number of sensitive files divided by the total number of files at a targeted level. We give two examples: the volume level and the user level. Similarly, we can also obtain data hotness (e.g., storage performance metrics) at different levels. From the infrastructure scan, we compute the IO density for volumes. For the user level, we use the percentage of files that have not been accessed in the past year as the data hotness of a particular user folder. By correlating data sensitivity and hotness, StackInsights can provide hybrid cloud migration insights. In addition, StackInsights can be integrated with data movement tools, such as DataDynamics [@datadynamics], in order to automate the entire migration process from recommendation to action.
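Since the sensitivity score is a ratio of predicted-sensitive files to total files within a group, it can be computed in one pass over the file records. A minimal sketch (the record fields are hypothetical, not StackInsights' actual schema):

```python
from collections import defaultdict

def sensitivity_scores(files, level):
    """Sensitivity score per group: sensitive files / total files,
    where `level` extracts the grouping key (volume, user folder, ...)."""
    total = defaultdict(int)
    sensitive = defaultdict(int)
    for f in files:
        key = level(f)
        total[key] += 1
        sensitive[key] += f["sensitive"]
    return {k: sensitive[k] / total[k] for k in total}

# Hypothetical predicted records.
files = [
    {"volume": "V1", "user": "alice", "sensitive": 1},
    {"volume": "V1", "user": "alice", "sensitive": 1},
    {"volume": "V1", "user": "bob",   "sensitive": 0},
    {"volume": "V2", "user": "carol", "sensitive": 1},
]
scores = sensitivity_scores(files, lambda f: f["volume"])
print(scores)  # volume-level scores: V1 → 2/3, V2 → 1.0
```

The same function with `lambda f: f["user"]` yields the user-level scores used in the hotness maps below.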
**Volume Sensitivity and Hotness Map:** We get the sensitivity score for each volume from the prediction results. As shown in Figure \[fig:vpredict\], all the volumes are ordered by their sensitivity level and hotness. All the volumes that are cold and of low sensitivity can be migrated to the public cloud; the remaining ones should be migrated to the private cloud or remain on premise.
![Volume sensitivity and IO density map[]{data-label="fig:vpredict"}](volumeprediction.png){width="3.2in"}
**User Sensitivity and Hotness Map:** Similar to the volume-level analysis, we can also obtain the data sensitivity and hotness map at the user level. The data hotness of a particular user folder is computed as the percentage of associated files that have not been accessed in the past year. Specifically, we find 1,060 user folders on volume V3. We compute the sensitivity score for each user folder based on the predicted file sensitivity, as well as the data hotness using the file metadata collected from the infrastructure scan. The user sensitivity and hotness map is shown in Figure \[fig:usermap\]. As we can see, many user folders have not been accessed in the past year, although the percentage of sensitive files under these folders varies. User folders at the bottom right (cold and of low sensitivity) may be eligible for migration to the public cloud, while those at the top left (hot and highly sensitive) should be migrated to the private cloud or remain on premise.
![User sensitivity and data hotness map[]{data-label="fig:usermap"}](userlevel.png){width="2.7in"}
Conclusions and Future Work {#conclusion}
===========================
We have introduced StackInsights, a cognitive learning system which automatically analyzes and predicts the cloud readiness of workloads. StackInsights correlates metrics from the application, data, and infrastructure layers to identify the business sensitivity of data as well as its hotness in terms of infrastructure performance, and provides insights into hybrid cloud migration. Given the scale of data and infrastructure, a machine learning model is developed in StackInsights to predict file sensitivity based on metadata. In contrast to the traditional approach, which requires intrusive and expensive content scanning, StackInsights reduces the total running time for sensitivity classification by orders of magnitude and is therefore scalable enough to be deployed in large-scale IT environments. As more and more enterprises commit to a hybrid cloud architecture, StackInsights can help accelerate this digital transformation in their organizations.
Our current system is mainly focused on understanding the sensitivity of textual files. Many other types of data can contain sensitive information, such as images, videos, and audio. In the future, we can leverage IBM Watson services [@watson] to analyze the sensitive content of such multimedia data and, similarly, predict their sensitivity based on meta-level information. Last but not least, the cognitive learning capabilities of StackInsights can be greatly enhanced by collecting more metadata across the stack: the more metadata we collect, the more accurate the prediction model will be.
|
---
author:
- 'Hsiao-Fan Liu[^1]'
title: Geometric Algorithm of Schrödinger Flow on a Sphere
---
[^1]: Department of Mathematics, National Tsing Hua University, Taiwan ().
|
---
abstract: 'Experimental realization of efficient graphene-based absorbers is a challenging task due to the low carrier mobility in processed graphene. In this paper, we circumvent this problem by placing uniform graphene sheets on metallic metasurfaces designed for improving the absorption properties of low-mobility graphene. Complete absorption can be achieved for different frequencies with the proper metasurface design. In the THz band, we observe strong tunability of the absorption frequencies and magnitudes when modulating graphene Fermi level.'
author:
-
title: 'Graphene-based Perfect Absorbers: Systematic Design and High Tunability'
---
Introduction
============
In the literature, most theoretical works predict perfect absorption in graphene under the assumptions of high mobility ($\mu>10000$ ${\rm cm}^{2}{\rm V}^{-1}{\rm s}^{-1}$) and high Fermi level [@thongrattanasiri2012complete]. However, the mobility typically achievable in experiment is around $\mu=1000$ ${\rm cm}^{2}{\rm V}^{-1}{\rm s}^{-1}$ [@kim2017electronically], which makes these theoretical models impractical to implement. Despite the large efforts devoted to improving the absorption properties of graphene, it has been shown that a monolayer of graphene behaves as a poor absorber at almost all frequencies. From the circuit-theory perspective, the main reason for the weak absorption in graphene is the severe impedance mismatch between graphene and its surrounding materials (the sheet impedance of undoped graphene is usually several thousand ohms per square or more).
In this talk, we will present and discuss the use of metasurfaces for reducing the effective impedance of graphene, i.e., for improving the absorption efficiency in graphene. With this method, even in very poor quality graphene samples ($\mu=300$ ${\rm cm}^{2}{\rm V}^{-1}{\rm s}^{-1}$), the realization of perfect absorption is possible. In addition, we will show how the proposed structures are highly tunable for different frequencies or levels of absorption in the THz band. Readers can get more detailed information in [@wang2017tunable].
Theory {#section:Theory}
=======
At THz frequencies or below, the surface conductivity of graphene is described by the Drude model, $\sigma_{\rm g}={e^2E_{\rm F}}/{\pi\hbar^2(j\omega+\gamma)}$, where $\omega$ is the angular frequency, $E_{\rm F}$ is the Fermi level, and $\gamma={e{v_{\rm F}}^2}/({\mu E_{\rm F}})$ is the scattering rate (related to carrier mobility $\mu$ and Fermi level $E_{\rm F}$). Using an equivalent circuit model, the graphene sheet can be considered as a complex-impedance sheet, with the sheet impedance $Z_{\rm g}={1}/{\sigma_{\rm g}}=R_{\rm g}+jX_{\rm g}$.
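To make the orders of magnitude concrete, the Drude sheet impedance can be evaluated numerically. The sketch below uses standard constants and a typical Fermi velocity $v_{\rm F}\approx 10^6$ m/s; the specific numbers are illustrative, not taken from the paper:

```python
import numpy as np

e    = 1.602e-19   # elementary charge, C
hbar = 1.055e-34   # reduced Planck constant, J s
vF   = 1.0e6       # graphene Fermi velocity, m/s (typical value)

def sheet_impedance(f_THz, EF_eV, mu_cm2):
    """Graphene sheet impedance Z_g = 1/sigma_g from the Drude model,
    engineering time convention exp(+j w t) as in the text."""
    omega = 2 * np.pi * f_THz * 1e12
    EF = EF_eV * e                  # Fermi level in joules
    mu = mu_cm2 * 1e-4              # mobility in m^2 V^-1 s^-1
    gamma = e * vF**2 / (mu * EF)   # scattering rate, 1/s
    sigma = e**2 * EF / (np.pi * hbar**2 * (1j * omega + gamma))
    return 1 / sigma                # Z_g = R_g + j X_g, ohm/sq

Z = sheet_impedance(3.8, 0.1, 1000)
# Low-mobility graphene: kilo-ohm sheet resistance, inductive (X_g > 0).
print(round(Z.real), round(Z.imag))
```

The kilo-ohm resistance confirms the severe mismatch with the 377 $\Omega$ free-space impedance noted in the introduction.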
Let us first discuss the absorption characteristics of a graphene Salisbury screen where the graphene layer is supported by a grounded substrate, as depicted in Fig. \[fig:Salisbury\_circuit\]. The equivalent circuit of this structure is described as a parallel $RLC$ resonant circuit. Instead of expressing the graphene sheet impedance as a series connection of resistance and reactance, here we interpret graphene as a resistor $R_{\rm g}^\prime$ shunt-connected with an inductor $jX_{\rm g}^\prime$ ($Z_{\rm g}=R_{\rm g}^\prime\parallel jX_{\rm g}^\prime$), where $$R_{\rm g}^\prime=R_{\rm g}+\frac{X_{\rm g}^2}{R_{\rm g}},~\text{and} \quad jX_{\rm g}^\prime=j\left(X_{\rm g}+\frac{R_{\rm g}^2}{X_{\rm g}}\right).
\label{shunt_Rg}$$ Since the graphene sheet is inductive in the THz band, the substrate thickness $d$ should be between $\lambda_{\rm d}/4$ and $\lambda_{\rm d}/2$ to ensure that its impedance $jX_{\rm d}=j\eta_{\rm d}\tan(k_{\rm d}d)$ is capacitive, thus creating a high-impedance surface at the resonant frequency.
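The series-to-shunt conversion is an exact algebraic identity; a short numerical check (with an arbitrary illustrative impedance) confirms that the parallel combination reproduces the original series impedance:

```python
def series_to_shunt(Zg):
    """Convert series R_g + j X_g into the equivalent shunt pair
    (R'_g, X'_g) of the parallel RLC circuit model."""
    Rg, Xg = Zg.real, Zg.imag
    Rp = Rg + Xg**2 / Rg
    Xp = Xg + Rg**2 / Xg
    return Rp, Xp

Zg = 8500 + 2000j            # illustrative graphene sheet impedance, ohm/sq
Rp, Xp = series_to_shunt(Zg)
Z_back = (Rp * 1j * Xp) / (Rp + 1j * Xp)   # R'_g in parallel with jX'_g
print(abs(Z_back - Zg) < 1e-6)  # → True
```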
The use of this equivalent circuit model allows systematic design of perfect absorbers: the parallel resistance controls the absorption efficiency, while the inductance together with the capacitive grounded substrate determine the resonant frequency. In order to achieve perfect absorption at a resonant frequency, $R_{\rm g}^\prime$ should be equal to free space impedance $\eta_0$. We assume the graphene Fermi level is $E_{\rm F}=0.1$ eV in its natural state (no intentional chemical or electrical doping). For this value of $E_{\rm F}$, Fig. \[fig:Rg\_p\_EF\_0\_1\] shows the calculated shunt resistance with respect to frequency and graphene mobility, where one can see that $R_{\rm g}^\prime$ is huge compared to the free space impedance when $\mu<5000$ ${\rm cm}^{2}{\rm V}^{-1}{\rm s}^{-1}$. This is the main reason why perfect absorption in a complete graphene layer is quite difficult to obtain in experiment.
To reduce the shunt resistance of graphene, we replace the substrate with a reflective-type metasurface, as shown in Fig. \[fig:proposed\_structure\]. The metasurface consists of the grounded substrate and periodically arranged square patches with period $D$ and gap size $g$. The graphene sheet is directly transferred onto this metallic metasurface. In the areas where graphene and metal are in contact, the graphene is effectively shorted by the metallic patches. Therefore, the continuous graphene sheet is effectively patterned into mesh-type strips with effective impedance $Z_{\rm eff}=Z_{\rm g}/p$, where $p=(D-g)/g$ is called the scaling factor. The grid reactance of the periodic patches is expressed as $jX_{\rm m}={1}/{j\omega C_{\rm m}}$ with $C_{\rm m}={\frac{(\epsilon_{\rm r}+1)\epsilon_0D}{\pi}\ln\left(\csc\frac{\pi}{2(p+1)}\right)}$. In principle, by choosing a suitable scaling factor $p$, $R_{\rm g}^\prime$ can be matched to the free-space impedance $\eta_0$. By properly choosing the substrate thickness and permittivity or the period $D$, the resonant frequency can be adjusted to the desired value.
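Under these definitions the scaling factor and grid capacitance are straightforward to evaluate. The sketch below checks them for the dimensions used later in Fig. \[fig:SU8\_frequency\_tunable\] ($D=9.5~\mu$m, $p=20$, $\epsilon_{\rm r}=2.8$):

```python
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m

def grid_parameters(D, g, eps_r):
    """Scaling factor p = (D-g)/g and grid capacitance
    C_m = (eps_r+1) eps0 D / pi * ln(csc(pi / (2(p+1))))."""
    p = (D - g) / g
    Cm = (eps_r + 1) * eps0 * D / math.pi \
         * math.log(1.0 / math.sin(math.pi / (2 * (p + 1))))
    return p, Cm

# A scaling factor p = 20 corresponds to a gap of g = D/(p+1).
D = 9.5e-6
p, Cm = grid_parameters(D, D / 21, eps_r=2.8)
print(round(p), Cm > 0)  # → 20 True
```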
Electrical tunability
=====================
![Simulated absorption intensity in terms of the frequency when varying the Fermi level from 0.1 eV to 0.9 eV. The graphene mobility is set to $\mu=1500~{\rm cm}^{2}{\rm V}^{-1}{\rm s}^{-1}$. Here, $D=9.5~\mu$m, $d=4.75~\mu$m, $p=20$ and $\epsilon_{\rm r}=2.8$. []{data-label="fig:SU8_frequency_tunable"}](SU8_frequency_tunable.pdf){width="0.73\linewidth"}
In this section, we present two tunable scenarios, for the absorption frequency and the absorption amplitude, with different graphene quality. The assumed carrier mobilities in these two cases are typical experimental values. In the first case, we use graphene with $\mu=1500$ ${\rm cm}^{2}{\rm V}^{-1}{\rm s}^{-1}$. Perfect absorption is designed at 3.8 THz ($E_{\rm F}=0.1$ eV) according to the design rules in Section \[section:Theory\]. By increasing the Fermi level of graphene from 0.1 eV to 0.9 eV, a large shift of the peak frequency is observed: the center absorption frequency is tuned from 3.8 THz to 12.3 THz, as shown in Fig. \[fig:SU8\_frequency\_tunable\].
The other scenario is strong tunability of the absorption amplitude at a specified frequency. We assume very low-quality graphene with $\mu=300$ ${\rm cm}^{2}{\rm V}^{-1}{\rm s}^{-1}$. For perfect absorption at 3.3 THz, the scaling factor is calculated as $p=91$. If the period $D$ is 5 $\mu$m, the gap size of the patches is as narrow as $g=55$ nm. Such a nano-scale channel not only brings fabrication challenges, but also results in a Fermi-level pinning effect in the graphene channels. We therefore replace the straight gap with a meandered slot, as shown in the inset of Fig. \[fig:SU8\_intensity\_tunable\]. The interdigitated fingers greatly increase the length of the graphene channel in one unit cell, thus decreasing the effective resistance of graphene. We can see in Fig. \[fig:SU8\_intensity\_tunable\] that the absorption reaches unity at low Fermi levels and drops to almost zero at high Fermi levels, so the structure acts as a switchable absorber.
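The quoted gap size follows directly from the scaling-factor definition: inverting $p=(D-g)/g$ gives $g=D/(p+1)$, which for $p=91$ and $D=5~\mu$m lands near the 55 nm figure stated above:

```python
# g = D/(p+1), obtained by inverting p = (D - g)/g.
D = 5e-6       # patch period, m
p = 91         # scaling factor for perfect absorption at 3.3 THz
g = D / (p + 1)
print(g * 1e9)   # gap size in nanometres, approximately 54-55 nm
```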
![Simulated absorption in terms of the frequency when varying the Fermi level from 0.1 eV to 0.9 eV. Here, $D=5~\mu$m, $d=4~\mu$m, $g=300$ nm, $l_{\rm f}=3~\mu$m, $w_{\rm f}=533$ nm and $\epsilon_{\rm r}=2.8$. []{data-label="fig:SU8_intensity_tunable"}](SU8_intensity_tunable.pdf){width="0.73\linewidth"}
Conclusions
===========
In this paper, we use a simple transmission-line model to explain the weak wave absorption in graphene, and propose metallic metasurface substrates to obtain complete and tunable absorption in the THz band. This method is effective even for graphene samples of poor quality.
Acknowledgment {#acknowledgment .unnumbered}
==============
This project has received funding from the European Union’s Horizon 2020 research and innovation programme-Future Emerging Topics (FETOPEN) under grant agreement No 736876.
[1]{}
S. Thongrattanasiri, F. H. Koppens, and F. J. G. De Abajo, “Complete optical absorption in periodically patterned graphene,” *Physical Review Letters*, vol. 108, no. 4, p. 047401, 2012.
S. Kim, M. S. Jang, V. W. Brar, K. W. Mauser, and H. A. Atwater, “Electronically tunable perfect absorption in graphene,” *arXiv preprint arXiv:1703.03579*, 2017.
X.-C. Wang and S. A. Tretyakov, “Tunable perfect absorption in continuous graphene sheets on metasurface substrates,” *arXiv preprint arXiv:1712.01708*, 2017.
|
---
abstract: |
Let $V(I)$ be a polarized projective variety or a subvariety of a product of projective spaces and let $A$ be its (multi-)homogeneous coordinate ring. Given a full-rank valuation $\val$ on $A$ we associate weights to the coordinates of the projective space, respectively, the product of projective spaces. Let $w_\val$ be the vector whose entries are these weights. Our main result is that the value semi-group of $\val$ is generated by the images of the generators of $A$ if and only if the initial ideal of $I$ with respect to $w_\val$ is prime. We further show that $w_\val$ always lies in the tropicalization of $I$.
Applying our result to string valuations for flag varieties, we solve a conjecture by [@BLMM] connecting the Minkowski property of string cones with the tropical flag variety. For Rietsch-Williams’ valuation for Grassmannians our results give a criterion for when the Plücker coordinates form a Khovanskii basis. Further, as a corollary we obtain that the weight vectors defined in [@BFFHL] lie in the tropical Grassmannian.
author:
- 'Lara Bossinger[^1]'
bibliography:
- 'Trop.bib'
title: Full rank valuations and toric initial ideals
---
Introduction
============
In the context of toric degenerations[^2] of projective varieties, the study of full-rank valuations on homogeneous coordinate rings (see Definition \[def: valuation\]) is very popular. This goes back to a result of Anderson [@An13]: if the semi-group given by the image of the valuation (called the *value semi-group*) is finitely generated, then the valuation defines a toric degeneration. Therefore, a hard and central question is whether a given valuation has a finitely generated value semi-group or not.
For example, the *valuations from birational sequences* in [@FFL15] are of full rank, and constructed to define toric degenerations of flag and spherical varieties. But for a general valuation arising in this setting it remains unknown if its value semi-group is finitely generated. In the recent paper [@B-birat] a new class of valuations from birational sequences for Grassmannians is constructed. The author applies results of this paper to identify those giving toric degenerations.
To gain more control over the valuation it is moreover desirable to identify algebra generators of $A$, whose valuation images generate the value semi-group. Such generators are called a *Khovanskii basis*, introduced by Kaveh-Manon in [@KM16]. They construct full rank valuations with finite Khovanskii bases from maximal prime cones of the tropicalization of a polarized projective variety. In this paper, we complement their work by taking the opposite approach: starting from a full rank valuation we give a criterion for when it comes from a maximal prime cone. Our main tool is reformulating the problem in terms of initial ideals in Gröbner theory.
Throughout the paper for $n\in \mathbb Z_{>0}$ let $[n]$ denote $\{1,\dots,n\}$. Let $X$ be a subvariety of the product of projective spaces $\mathbb P^{k_1-1}\times \dots \times \mathbb P^{k_s-1}$. In particular, if $s=1$ then $X$ is a polarized projective variety. Its (multi-)homogeneous coordinate ring $A$ is given by $\mathbb C[x_{ij}\vert i\in[s],j\in[k_i]]/I$. Here $I$ is a prime ideal in $S:=\mathbb C[x_{ij}\vert i\in[s],j\in[k_i]]$, the total coordinate ring of $\mathbb P^{k_1-1}\times \dots \times \mathbb P^{k_s-1}$. Further, $I$ is homogeneous with respect to the $\mathbb Z_{\ge 0}^s$-grading on $S$. By $\bar x_{ij}\in A$ we denote the cosets of variables $x_{ij}$. Let $d$ be the Krull-dimension of $A$.
A valuation $\val:A\setminus \{0\}\to \mathbb Z^d$ has *full rank* if its image (the value semi-group $S(A,\val)$) spans a sublattice of full rank in $\mathbb Z^d$. It is *homogeneous* if it respects the grading on $A$. From now on we only consider full-rank valuations. In Definition \[def: wt matrix from valuation\] we define the *weighting matrix* of $\val$ as $M_\val:=(\val(\bar x_{ij}))_{ij}\in \mathbb Z^{d\times (k_1+\dots +k_s)}$.
By means of higher Gröbner theory (see for example, [@KM16 §3.1]), we consider the *initial ideal* $\init_{M_\val}(I)\subset S$ of $I$ with respect to $M_\val$ (see Definition \[def: init wrt M\]). Our main result is the following theorem. It is formulated in greater detail in Theorem \[thm: val and quasi val with wt matrix\] below.
Let $\val:A\setminus\{0\}\to \mathbb Z^d$ be a full-rank valuation with $M_\val\in\mathbb Z^{d\times (k_1+\dots+k_s)}$ the weighting matrix of $\val$. Then, $S(A,\val)$ is generated by $\{\val(\bar x_{ij})\}_{i\in[s],j\in[k_i]}$ if and only if $\init_{M_\val}(I)$ is toric[^3].
The theorem has some very interesting implications in view of toric degenerations and Newton–Okounkov bodies. Consider a valuation of the form $\val:A\setminus\{0\}\to \mathbb Z_{\ge 0}^{s}\times \mathbb Z^{d-s}$ with $\val(f)=(\deg f,\cdot)$ for all $f\in A$. By [@IW18 Remark 2.6] we may assume without loss of generality that any full-rank homogeneous valuation is of this form. The Newton–Okounkov cone $C(A, \val)\subset \mathbb R^{s}\times \mathbb R^{d-s}$ is the cone over its image. The *Newton–Okounkov body* $\Delta(A,\val)$ is then the intersection of $C(A,\val)$ with the subspace $\{(1,\dots,1)\}\times \mathbb R^{d-s}$; for details see Definition \[def: val sg NO\]. Newton–Okounkov bodies are in general pretty wild objects: they are convex bodies, but need be neither polyhedral nor finite. Anderson showed for $s=1$ that they are rational polytopes if $S(A,\val)$ is finitely generated. Our theorem implies the following result, which allows one to compute Newton–Okounkov polytopes explicitly in the nicest imaginable case.
With assumptions as in the theorem above, if $\init_{M_\val}(I)$ is prime then $\val$ is homogeneous and
(i) $\{\bar x_{ij}\}_{i\in[s],j\in[k_i]}$ is a Khovanskii basis for $(A,\val)$, and
(ii) the Newton–Okounkov polytope is the Minkowski sum[^4]: $$\Delta(A,\val)= \conv(\val(\bar x_{1j}))_{ j\in[k_1]}+\dots+ \conv(\val(\bar x_{sj}))_{j\in[k_s]}.$$
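When the corollary applies, the Newton–Okounkov polytope can be assembled combinatorially: the Minkowski sum of convex hulls is the convex hull of all sums of one generator image per factor. A toy sketch in the plane (the point sets are illustrative, not valuation images from the paper):

```python
from itertools import product

def minkowski_sum_points(*point_sets):
    """All sums of one point from each set; their convex hull equals
    the Minkowski sum of the convex hulls of the individual sets."""
    return {tuple(map(sum, zip(*combo))) for combo in product(*point_sets)}

segment  = [(0, 0), (1, 0)]
triangle = [(0, 0), (0, 1), (1, 1)]
pts = minkowski_sum_points(segment, triangle)
print(sorted(pts))  # → [(0, 0), (0, 1), (1, 0), (1, 1), (2, 1)]
```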
We continue our study by considering monomial maps as they appear, for example, in [@MoSh]. Let $\phi_\val:S\to \mathbb C[y_1,\dots,y_d]$ be the homomorphism defined by sending a generator $x_{ij}$ to the monomial in the $y_k$’s with exponent vector $\val(\bar x_{ij})$. Its kernel $\ker(\phi_\val)\subset S$ is a toric ideal, see .
Further analyzing our weighting matrices, we associate to each a *weight vector* following [@Cal02 Lemma 3.2]. Let $w_\val\in \mathbb Z^{k_1+\dots+k_s}$ be the weight vector associated to $M_\val$ as in Definition \[def: wt vector for wt matrix\]. It satisfies $\init_{M_\val}(I)=\init_{w_\val}(I)$. The following lemma reveals the relation between $w_\val$ and the toric ideal $\ker(\phi_\val)$, see also Lemma \[lem: wt matrix val in trop\].
For every full-rank valuation $\val:A\setminus\{0\}\to \mathbb Z^d$ we have $\init_{w_\val}(I)\subset \ker(\phi_\val)$. In particular, $\init_{w_\val}(I)$ is monomial-free and $w_\val$ is contained in the tropicalization of $I$ (in the sense of [@M-S], see ).
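The Gröbner-theoretic computation behind the lemma can be illustrated on the smallest interesting case, the Plücker ideal of $\Gr_2(\mathbb C^4)$, generated by the single relation $p_{12}p_{34}-p_{13}p_{24}+p_{14}p_{23}$. The sketch below (a toy computation, not code from the paper) takes the initial form with respect to a weight vector in the min convention and checks that it is binomial, hence monomial-free:

```python
def initial_form(poly, w):
    """Terms of minimal w-weight (min convention).
    `poly` maps exponent tuples to coefficients."""
    wt = lambda expo: sum(wi * ei for wi, ei in zip(w, expo))
    m = min(wt(e) for e in poly)
    return {e: c for e, c in poly.items() if wt(e) == m}

# Pluecker relation p12*p34 - p13*p24 + p14*p23 in the variable order
# (p12, p13, p14, p23, p24, p34).
relation = {(1, 0, 0, 0, 0, 1):  1,
            (0, 1, 0, 0, 1, 0): -1,
            (0, 0, 1, 1, 0, 0):  1}

# Weight 1 on p12 and p34, 0 elsewhere: the minimum weight 0 is
# attained by the last two terms, so the initial form is a binomial.
init = initial_form(relation, (1, 0, 0, 0, 0, 1))
print(len(init))  # → 2
```

A binomial initial form has no monomial generators, matching the tropicality criterion of the lemma for this weight vector.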
We apply our results to two classes of valuations.
#### Grassmannians and valuations from plabic graphs.
We consider a class of valuations on the homogeneous coordinate rings of Grassmannians defined in [@RW17]. In the context of cluster algebras and cluster duality for Grassmannians, they associate a full-rank valuation $\val_\G$ to every plabic graph $\mathcal G$ with certain properties [@Pos06].
For $k<n$ denote by $\binom{[n]}{k}$ the set of $k$-element subsets of $[n]$. Consider the Grassmannian with its Plücker embedding $\Gr_k(\mathbb C^n)\hookrightarrow \mathbb P^{\binom{n}{k}-1}$. We obtain its homogeneous coordinate ring $A_{k,n}:=\mathbb C[p_J\vert J\in \binom{[n]}{k}]/I_{k,n}$. In particular, $I_{k,n}$ is the Plücker ideal defining the Grassmannian. The elements $\bar p_J\in A_{k,n}$ are Plücker coordinates. Applying our main theorem we obtain a criterion for when the Plücker coordinates form a Khovanskii basis for $\val_\G$ (see Theorem \[thm:main plabic\]).
Related to [@RW17], in [@BFFHL] the authors associate *plabic weight vectors* ${\bf w}_\G$ for $\mathbb C[p_J]_J$ to the same plabic graphs $\G$ (see Definition \[def: plabic deg\]). They study these weight vectors for $\Gr_2(\mathbb C^n)$ and $\Gr_3(\mathbb C^6)$ and show that, in these cases, they lie in the *tropical Grassmannian* [@SS04], i.e. the tropicalization of $I_{k,n}$. With our methods, we can prove this in general and obtain the following result (see also Proposition \[prop: plabic lin form\]). Let $M_\G$ denote the weighting matrix of the valuation $\val_\G$.
For every plabic weight vector $\bw_\G$ we have $\init_{M_\G}(I_{k,n})=\init_{\bw_\G}(I_{k,n})$. In particular, this implies that $\bw_\G$ lies in the tropical Grassmannian $\trop(\Gr_k(\mathbb C^n))$.
#### Flag varieties and string valuations.
We consider string valuations [@Kav15; @FFL15] on the homogeneous coordinate ring of the full flag variety $\Flag_n$[^5]. They were defined to realize string parametrizations [@Lit98; @BZ01] of Lusztig’s dual canonical basis in terms of Newton–Okounkov cones and polytopes.
Consider the algebraic group $SL_n$ and its Lie algebra $\lie{sl}_n$ over $\mathbb C$. Fix a Cartan decomposition and take $\Lambda\cong \mathbb Z^{n-1}$ to be the weight lattice. It has a basis of fundamental weights $\omega_1,\dots,\omega_{n-1}$ and every dominant integral weight, i.e. $\lambda \in \mathbb Z^{n-1}_{\ge0}$ yields an irreducible highest weight representation $V(\lambda)$ of $\lie{sl}_n$. The Weyl group of $\lie{sl}_n$ is the symmetric group $S_n$. By $w_0\in S_n$ we denote its longest element.
For every reduced expression $\w_0$ of $w_0\in S_n$ and every dominant integral weight $\lambda \in \mathbb Z^{n-1}_{\ge0}$, there exists a *string polytope* $Q_{\w_0}(\lambda)\subset \mathbb R^{\frac{n(n-1)}{2}}$. Its lattice points parametrize a basis for $V(\lambda)$. The string polytope for the weight $\rho=\omega_1+\dots +\omega_{n-1}$ is the Newton–Okounkov body for the string valuation $\val_{\w_0}$ on the homogeneous coordinate ring of $\Flag_n$.
We embed $\Flag_n$ into a product of projective spaces as follows: first, consider the embedding into the product of Grassmannians $\Gr_1(\mathbb C^n)\times\dots\times\Gr_{n-1}(\mathbb C^n)$. Then concatenate with the Plücker embeddings $\Gr_k(\mathbb C^n)\hookrightarrow \mathbb P^{\binom{n}{k}-1}$ for every $1\le k\le n-1$. This yields the (multi-)homogeneous coordinate ring $A_n$ of $\Flag_n$ as $\mathbb C[p_J\vert J\subset [n]]/I_n$. Our main result applied to string valuations yields the following, for more details see Theorem \[thm: quasival for string\].
Let $\w_0$ be a reduced expression of $w_0\in S_n$ and consider the string valuation $\val_{\w_0}$. If $\init_{M_{\val_{\w_0}}}(I_n)$ is prime, then $$Q_{\w_0}(\rho)= \conv (\val_{\w_0}(\bar p_J))_{J\in\binom{[n]}{1} } + \dots + \conv(\val_{\w_0}(\bar p_J))_{ J\in\binom{[n]}{n-1}}.$$
A central question concerning string polytopes is the following: fix a reduced decomposition $\w_0$ and let $\lambda=a_1\omega_1+\dots+a_{n-1}\omega_{n-1}$ with $a_i\in\mathbb Z_{\ge 0}$. *Is the string polytope $Q_{\w_0}(\lambda)$ equal to the Minkowski sum $a_1Q_{\w_0}(\omega_1)+\dots +a_{n-1} Q_{\w_0}(\omega_{n-1})$ of fundamental string polytopes?*
If equality holds for all $\lambda$, we say $\w_0$ has the *Minkowski property*. In [@BLMM] the authors define weight vectors ${\bf w}_{\w_0}$ for every $\w_0$, which turn out to coincide with $w_{\val_{\w_0}}$ from above. They conjecture a relation between the Minkowski property of $\w_0$ and the weight vector ${\bf w}_{\w_0}$ lying in a special part of the *tropical flag variety*, i.e. the tropicalization of $I_n$. A corollary of our main theorem proves an even stronger version of their conjecture. It can be summarized as follows (for details see Corollary \[cor: prime implies MP\]):
Let $\w_0$ be a reduced expression of $w_0\in S_n$ and consider the weight vector ${\bf w}_{\w_0}$. Then $\init_{{\bf w}_{{\w_0}}}(I_n)$ is prime if and only if $\w_0$ has the Minkowski property.
The paper is structured as follows. We recall preliminaries on valuations, toric degenerations and Newton–Okounkov bodies in §\[sec:notation\]. We then turn to quasi-valuations and weighting matrices in §\[sec:val and quasival\] and prove our main result Theorem \[thm: val and quasi val with wt matrix\]. In the subsection §\[sec:wt matrix wt vector\] we make the connection to weight vectors and tropicalization and prove the above mentioned result Lemma \[lem: wt matrix val in trop\]. In §\[sec:exp trop flag\] we apply our results to string valuations for flag varieties and in §\[sec: exp plabic\] to valuations from plabic graphs for Grassmannians. The Appendix contains background information on plabic graphs and the associated valuations.
[**Acknowledgements.**]{} The results of this paper were mostly obtained during my PhD at the University of Cologne under Peter Littelmann’s supervision. I am deeply grateful for his support and guidance throughout my PhD. I would like to further thank Xin Fang, Kiumars Kaveh, Fatemeh Mohammadi and Bea Schumann for inspiring discussions. Moreover, I am grateful to Alfredo Nájera Chávez for helpful comments during the preparation of this manuscript.
<span style="font-variant:small-caps;">Instituto de Matemáticas UNAM Unidad Oaxaca, Antonio de León 2, altos, Col. Centro, Oaxaca de Juárez, CP. 68000, Oaxaca, México</span>
*E-mail address:* `[email protected]`
[^1]: Supported by “Programa de Becas Posdoctorales en la UNAM 2018” Instituto de Matemáticas, UNAM, and Max Planck Institute for Mathematics in the Sciences, Leipzig.
[^2]: A *toric degeneration* of a projective variety $X$ is a flat morphism $\pi:\mathcal X\to \mathbb A^m$ with generic fiber $\pi^{-1}(t)$ for $t\not=0$ isomorphic to $X$ and $\pi^{-1}(0)$ a projective toric variety.
[^3]: An ideal is *toric* if it is binomial, i.e. generated by binomials, and prime.
[^4]: For two polytopes $A,B\subset \mathbb R^d$ their *Minkowski sum* is defined as $A+B:=\{a+b\mid a\in A,b\in B\}\subset \mathbb R^d.$
[^5]: We consider the full flag variety of type $\mathtt A$, i.e. $\Flag_n:=\{ \{0\}\subset V_1\subset \dots\subset V_{n-1}\subset \mathbb C^n\mid \dim V_i=i\}$. It can also be realized as $SL_n/B$, where $B$ are upper triangular matrices in $SL_n$.
|