---
address: |
$^1$ Physics Department, Boston University, Boston, MA 02215, USA\
$^2$ Laboratoire de Physique Théorique de l’École Normale Supérieure, Paris, France\
$^3$ Laboratoire de Physique Théorique et Hautes Energies, Jussieu, Paris, France\
$^4$ Department of Physics, Princeton University, Princeton, NJ 08544, USA
author:
- 'Horacio E. Castillo$^1$, Claudio Chamon$^1$, Leticia F. Cugliandolo$^{2,3}$, Malcolm P. Kennett$^4$'
date: 'February 16, 2002'
title: Heterogeneous aging in spin glasses
---
Very slow equilibration and sluggish dynamics are characteristics shared by disordered spin systems and by other glassy systems such as structural and polymeric glasses. The origin of this dynamic arrest near and below the glass transition is currently poorly understood. Studies of the time evolution of many quantities, such as the remanent magnetization, the dielectric constant, or the incoherent correlation function, have shown that below the glass transition the system falls out of equilibrium [@out-of-equil; @Eric]. This is evidenced by the presence of aging, [*i.e.*]{} the dependence of physical properties on the time since the quench into the glassy state, and also by the breakdown of the equilibrium relations dictated by the fluctuation dissipation theorem ([fdt]{}) [@mean-field-dyn; @review].
Most analytical progress in understanding non-equilibrium glassy dynamics has been achieved in mean-field fully connected spin models [@mean-field-dyn], while numerical simulations have addressed both structural glasses [@aging-struct] and short-range spin glass models [@Picco]. Until recently, however, experimental, numerical, and analytical studies have mainly focused on global quantities, such as global correlations and responses, which do not directly probe local relaxation mechanisms. Local regions that behave differently from the bulk, or dynamic heterogeneities, could be crucial to understand the full temporal evolution, and have received considerable experimental [@heterogeneities; @confocal; @Nathan] and numerical [@numerical] attention lately. However, no clear theoretical picture has yet emerged to describe the local nonequilibrium dynamics of the glassy phase.
Here we introduce such a theoretical framework, and test its predictions via numerical simulations of a short-range spin glass model. We show that local correlations and responses are linked, and we find scaling properties for the heterogeneities that connect the evolution of the system at different times. This universality may provide a general basis for a realistic physical understanding of glassy dynamics in a wide range of systems.
The framework that we propose is motivated by an analogy [@reparam1] between aging dynamics and the well-known statics of Heisenberg magnets. For concreteness, we test its predictions against Monte Carlo simulations on the prototypical spin glass model, the three dimensional Edwards-Anderson (3DEA) model, $H = \sum_{\langle ij \rangle} J_{ij} s_i s_j$, where $s_i = \pm 1$ and the nearest-neighbor couplings are $J_{ij}=\pm 1$ with equal probability. We argue that two dynamical [*local*]{} quantities, the coarse-grained local correlation $ C_r(t,t_w) \equiv \frac1{V} \sum_{i\in V_r}
\overline s_i(t) \overline s_i(t_w) $ and integrated response $\chi_r(t,t_w)
\equiv \frac1{N_f} \sum_{k=1}^{N_f} \frac1{V} \sum_{i\in V_r}
{\overline s_i(t)|_{h^{(k)}}-\overline s_i(t) \over h^{(k)}_i} $ are essential to understand the mechanisms controlling the dynamics of glassy systems. The spins are represented by $s_i$ in the absence of an applied field and by $s_i|_{h^{(k)}}$ in the presence of one. $\overline s_i(t) \equiv \frac{1}{\tau}
\sum_{t'=t-\tau}^{t'=t-1} s_i(t')$ is the result of coarse-graining the spin over a small time-window \[typically, $\tau=1000$ Monte Carlo steps (MCs)\]. $V_r$ is a cubic box with volume $V$ centered at the point $r$. By taking $V$ to be the volume of the whole system, the bulk or global correlation $C(t,t_w)$ and response $\chi(t,t_w)$ are recovered. Two generic times after preparation are represented by $t_w$ and $t$, with $t_w\leq t$. When the system is not in equilibrium, time dependences [*do not*]{} reduce to a dependence on the time difference $t-t_w$. We measure a staggered local integrated linear response by applying a bimodal random field on each site $h^{(k)}_i=\pm h$ during the time interval $[t_w, t]$. Linear response holds for the values of $h$ that we use. The index $k=1,\dots,N_f$ labels different realizations of the perturbing field. We use random initial conditions. The thermal histories, i.e. the sequences of spins and random numbers used in the MC test, are the same with and without a perturbing field.
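As a concrete illustration of these definitions, the sketch below (our own, not the authors' code; NumPy assumed, and the lattice is tiled with non-overlapping boxes rather than centering a box $V_r$ on every site, purely for brevity) computes the coarse-grained spins and the local correlation $C_r(t,t_w)$ from a stored history of spin configurations.

```python
import numpy as np

def coarse_grained_spin(history, t, tau=1000):
    """Time-average the spins over the window [t - tau, t - 1].

    history: array of shape (T, L, L, L) with entries s_i(t') = +/-1,
    one L x L x L configuration per Monte Carlo step (hypothetical layout).
    """
    return history[t - tau:t].mean(axis=0)

def local_correlation(history, t, tw, box=13, tau=1000):
    """Coarse-grained local correlation C_r(t, t_w) on cubic boxes of volume box**3.

    The average of the returned map reproduces the global correlation C(t, t_w).
    """
    sbar_t = coarse_grained_spin(history, t, tau)
    sbar_tw = coarse_grained_spin(history, tw, tau)
    prod = sbar_t * sbar_tw                    # site-by-site product sbar_i(t) sbar_i(t_w)
    L = prod.shape[0]
    nb = L // box                              # number of boxes per direction
    blocks = prod[:nb * box, :nb * box, :nb * box].reshape(nb, box, nb, box, nb, box)
    return blocks.mean(axis=(1, 3, 5))         # one value of C_r per coarse-graining box
```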
In a disordered spin model, the coarse-grained local magnetization typically vanishes, but the local correlation is non-trivial. Averaged over disorder and the thermal history, this correlator defines the Edwards-Anderson parameter $q_{\sc ea}$ when $t_w\to \infty$, and $t-t_w\to\infty$ subsequently. Can we detect the growth of local order [@Fihu] by analyzing the evolution of the local correlator, as one easily can for a system undergoing ferromagnetic domain growth? In Fig. \[fig1\] we show the local correlation for fixed $t_w$ and $t$ on a 2D cut of the 3DEA model. Regions with large values of $C_r$ are intertwined with regions with a small value of $C_r$ as shown by the contour levels. This behavior persists for all $t_w$ and $t$ that we can reach with the simulation and a more sophisticated analysis is necessary to identify a growing order in this system.
It is clear from Fig. \[fig1\] that different sites have distinct dynamics. Analysis of the local correlation for fixed $t_w$ as a function of $t$ shows that in general the relaxation is non-exponential – this is often ascribed to the presence of heterogeneous dynamics. How can one characterize the heterogeneous dynamics and determine its origin? We argue that relevant quantities are the probability distribution ([pdf]{}) of the local correlation, $\rho(C_r(t,t_w))$, the [pdf]{} of the local integrated response, $\rho(\chi_r(t,t_w))$, and the joint [pdf]{} $\rho(C_r(t,t_w),\chi_r(t,t_w))$ and we start by discussing the latter.
The [fdt]{} relates the correlation of spontaneous fluctuations to the integrated linear response of a chosen observable ([*e.g.*]{} $C_r(t,t_w)$ and $\chi_r(t,t_w)$ averaged over thermal histories) at equilibrium. Glassy systems modify the [fdt]{} in a particular way first obtained analytically for mean-field models [@mean-field-dyn], later verified numerically in a number of realistic models [@numerics-fdt; @Ludo], and more recently tested experimentally [@exp-fdt]. A parametric plot of the bulk integrated response, $\chi(t,t_w)$, against the bulk correlation, $C(t,t_w)$, for fixed and long waiting-time $t_w$ and using $t$ as a parameter, approaches a non-trivial limit, $\chi(C)$, represented by the crosses in Fig. \[fig3\](b). The curve has a straight section, for which $t-t_w\ll t_w$ and the correlation decays from $1$ to $q_{\sc ea}$ with a slope $-1/T$ as found with the equilibrium [fdt]{}. Beyond this point, as $t$ increases towards infinity, the curve separates from the [fdt]{} line. Now, consider each lattice site for fixed times $t_w$ and $t$. If we plot points for the pairs $(C_r(t,t_w),
\chi_r(t,t_w))$, where will they lie?
When $t_w\to\infty$ and $t-t_w\ll t_w$, all local correlators satisfy the [fdt]{} strictly once averaged over thermal histories, since the magnitude of local deviations from the [fdt]{} has an upper bound [@Cudeku]. We have checked that $C_r(t,t_w)$ and $\chi_r(t,t_w)$ obey the [fdt]{} for an individual thermal history apart from small fluctuations \[see Fig. \[fig3\](b)\].
For the regime of widely separated times we propose an analysis similar in spirit to the one that applies to the low energy excitations of the Heisenberg model. There, the free energy for the coarse grained magnetization ${\vec m(\vec r)}$ is $
F = \int d^d r [ ({\vec \nabla}_{\vec r} {\vec m}({\vec r}))^2
+ V(|{\vec m}({\vec r})|) ]
$. A spontaneous symmetry breaking signals the transition into the ordered phase $\langle \vec m \rangle = \vec m_0 \neq 0$, in which the order parameter has both a uniform length (the radius of the bottom of the effective potential $V(|\vec m|)$), and a uniform direction. $F$ is invariant under uniform rotations ${\vec m(\vec r)} \rightarrow {\cal R}
{\vec m(\vec r)}$. The lowest energy excitations (spin waves) are obtained from the ground state by leaving the length of the vector invariant and applying a slowly varying rotation to it: $\vec m(\vec r) = {\cal R}(\vec r)
\vec m_0 $. These are massless transverse fluctuations (Goldstone modes). In contrast, longitudinal fluctuations, which change the magnitude of the magnetization vector, are massive and energetically costly.
Let us now apply the same kind of analysis to the dynamics of the spin glass. Here, the relevant fluctuating quantities are the coarse grained local correlations $C_r$ and their associated local integrated responses $\chi_r$. In Ref. [@reparam1] we derived an effective action for these functions that becomes invariant under a global time-reparametrization $t \to h(t)$ in the aging regime. This symmetry leaves the bulk relation, $\chi(C)$, invariant. A uniform reparametrization is analogous to a global rotation in the Heisenberg magnet, and the curve $\chi(C)$ is analogous to the surface where $V(|\vec m|)$ is minimized. Hence, we expect that for fixed long times $t_w$ and $t$ in the aging regime, the local fluctuations in $C_r$ and $\chi_r$ should be given by smooth spatial variations in the time reparametrization, $h_r(t)$, i.e. $C_r(t,t_w) = C_{\sc sp}(h_r(t),h_r(t_w))
\approx C(h_r(t),h_r(t_w))$ where $C_{\sc sp}$ is the global correlation at the saddle-point level that in the numerical studies we approximate by the actual global correlation $C$, and similarly for $\chi_r$. These transverse fluctuations are soft Goldstone modes. Longitudinal fluctuations, which move away from the $\chi(C)$ curve, are massive and penalized. This implies the first testable prediction of our theoretical framework: the pairs $(C_r,\chi_r)$ should follow the curve $\chi(C)$ for the bulk integrated response against the bulk correlation.
In Fig. \[fig3\], we test this prediction by plotting the distribution of pairs $(C_r,\chi_r)$. We find, as expected, that for long times the dispersion in the longitudinal direction (i.e. away from the bulk $\chi(C)$ curve) is much weaker than in the transverse direction (i.e. along the bulk $\chi(C)$ curve). In the coarse grained aging limit we expect the former to disappear while the latter should remain. (This limit corresponds to the way actual measurements are performed: the thermodynamic limit is taken first to eliminate finite size effects and undesired equilibration; then the large $t_w$ limit is taken to reach the asymptotic regime; finally, the limit $V\to\infty$ serves to eliminate fluctuations through the coarse graining process; in the figure we used a large volume $V=13^3$ to approach the latter limit though we found a similar qualitative behavior for smaller $V$.) Figure \[fig3\](a) displays the joint [pdf]{} $\rho(C_r, \chi_r)$ for a pair of times $(t_w,t)$ that are far away from each other. Figure \[fig3\](b) shows the projection of a set of contour levels for $t_w$ fixed and six values of $t$. Even though the data for each contour corresponds to a single pair of times $(t_w,t)$, the fluctuations span a range of values that, for the bulk quantities, would require a whole family of pairs $(t_w,t)$. This reveals that the aging process is non-uniform across a finite-range model.
We now turn our attention to a more detailed examination of the time dependences in the local dynamics. A good $t/t_w$ scaling that breaks down only for very large values of the subsequent time $t-t_w$ has been obtained for the bulk thermoremanent magnetization experimentally [@Eric] and for the bulk correlation numerically [@Picco] once the stationary ($ t-t_w \ll t_w$) part of the relaxation is subtracted, as suggested by the solution to mean-field models [@mean-field-dyn]. For systems that display this particular dependence on $t/t_w$ for the bulk correlator, a second prediction can be extracted from our theoretical framework: the distribution $\rho(C_r(t,t_w))$ should only depend on the ratio $t/t_w$. Even further, if the bulk correlator has a simple power law form $C_{\sc sp}(t,t_w) \sim q_{\sc ea} (t/t_w)^{-\rho}$, an approximate treatment of fluctuations leads to a rescaling and collapse of $\rho(C_r(t,t_w))$ even for pairs of times with [*different*]{} ratios $t/t_w$.
Since we are dealing with ratios of times, it is convenient to define $h_r(t)=e^{\varphi_r(t)}$, so that $C_r(t,t_w)=C_{\sc
sp}(h_r(t)/h_r(t_w))=C_{\sc sp}(e^{\;\varphi_r(t)-\varphi_r(t_w)})$. Therefore the statistics of local correlations are determined from the statistical distribution of distances between two “surfaces”, $\varphi_r(t)-\varphi_r(t_w)$. In this form, a dynamic theory of short-range spin glasses is not different from a theory of fluctuating geometries or elasticity. We propose a simple reparametrization invariant effective action for $\varphi_r(t) =
\ln t + \delta\varphi_r(t)$, expanding around $\delta\varphi_r(t) = 0$, with no zeroth or first order term in $\delta\varphi_r(t)$. We ensure that the effective action is reparametrization invariant by taking one time derivative for each time variable. Thus [@preparation] $$S=\frac{q_{\sc ea} \rho}{2}\! \int \!\! d^{d}r \!
\int_0^\infty \!\!\!\!\!\! dt
\int_0^\infty \!\!\!\!\!\! dt^\prime
\;\nabla \dot{\varphi}_r(t)
\;\nabla \dot{\varphi}_r(t^\prime)
\; e^{-\rho|\varphi_r(t) - \varphi_r(t^\prime)|},$$ where the last factor penalizes fast time variations of $\varphi_r$ and the gradients ensure that spatial variations are smooth. Expanding to lowest order in $\delta\varphi_r(t)$ yields $\varphi_r(t) -
\varphi_r(t_w) = \ln(t/t_w) + \delta\varphi_r(t) - \delta\varphi_r(t_w)
\simeq \ln(t/t_w) + (a + b\ln(t/t_w))^\alpha X_r(t,t_w),$ where $a$ and $b$ are determined by the magnitude of the fluctuations, and $X_r(t,t_w)$ is a random variable drawn from a time-independent [pdf]{} that governs the fluctuations of the surfaces. In our approximation, which describes uncorrelated drift between two surfaces (i.e. a random walk), $\alpha = 1/2$.
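This approximate form lends itself to a quick numerical illustration. The sketch below is ours and purely illustrative: the parameter values and the Gaussian choice for the distribution of $X_r$ are assumptions, not fits, and serve only to show how the random-walk form produces a broad, $t/t_w$-dependent spread of local correlations.

```python
import numpy as np

def sample_local_correlations(t, tw, q_ea=0.7, rho=0.1, a=0.5, b=0.2,
                              alpha=0.5, n_boxes=10_000, seed=0):
    """Sample C_r(t, t_w) from the random-walk form of phi_r(t) - phi_r(t_w).

    Uses C_sp(t/t_w) ~ q_ea (t/t_w)**(-rho) together with
    phi_r(t) - phi_r(t_w) ~ ln(t/t_w) + (a + b ln(t/t_w))**alpha * X_r,
    with alpha = 1/2 and X_r drawn here from a standard Gaussian (illustrative choice).
    """
    rng = np.random.default_rng(seed)
    lam = np.log(t / tw)
    dphi = lam + (a + b * lam) ** alpha * rng.standard_normal(n_boxes)
    C = q_ea * np.exp(-rho * dphi)      # C_sp evaluated at h_r(t)/h_r(t_w) = exp(dphi)
    return np.minimum(C, 1.0)           # correlations cannot exceed 1

# Histogramming sample_local_correlations(t, tw) for pairs of times with equal
# t/tw gives (approximately) collapsing distributions rho(C_r(t, t_w)).
```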
Figure \[fig2\] displays $\rho(C_r(t,t_w))$ for several choices of the ratio $t/t_w$. Interestingly enough, all the curves have a noticeable peak at a value of $C_r$ that is independent of $t$ and $t_w$, with a height that decreases significantly with increasing ratio $t/t_w$. The form derived above for $\varphi_r(t) -
\varphi_r(t_w)$ explains the approximate collapse of $\rho(C_r(t,t_w))$ for a fixed ratio $t/t_w$, as shown in Fig. \[fig2\] for a small value of $V$. (Due to mixing with the stationary part, the $t/t_w$ scaling worsens when $V$ increases.) Barely noticeable in Fig. \[fig2\] is a slow drift of the curves for increasing values of $t_w$ at fixed ratio $t/t_w$ such that the height of the peak decreases while the area below the tail at lower values of $C_r$ increases. This trend leads to the “sub-aging” scaling observed for bulk quantities [@Eric; @Picco].
Furthermore, the above expression for $\varphi_r(t)-\varphi_r(t_w)$ implies that the [pdf]{}s for all of the $28$ pairs of times $(t,t_w)$ should collapse by rescaling with two parameters: $\ln C_{typ}$ and $s$, corresponding respectively to the nonrandom part in $\varphi_r(t)-\varphi_r(t_w)$ and to the width of the random part (see Fig. \[fig5\]). The scaling curve itself gives the [pdf]{} for $X_r(t,t_w)$. The rather good collapse of the curves should be improved by further knowledge of $C_{\sc sp}$.
A local separation of time-scales leads to a reparametrization invariant action [@reparam1] with a soft mode that controls the aging dynamics. Our framework, based on the analogy with Heisenberg magnets, predicts the existence of a local relationship between $C_r$ and $\chi_r$, expressed by $\rho(C_r(t,t_w),\chi_r(t,t_w))$ being sharply concentrated along the global $\chi(C)$ in the $(C,\chi)$ plane. Under additional assumptions, we obtained the scaling behavior of $\rho(C_r(t,t_w))$ for all $(t,t_w)$. Our simulations both confirm these predictions and uncover striking regularities in the geometry of local fluctuations [@preparation]. These results open the way to a systematic study of local dynamic fluctuations in glassy systems and they suggest a number of exciting avenues for future research. On the theoretical side, this framework could be applied to a large variety of glassy models, including those without explicit disorder. More interesting still are the experimental tests suggested by this work. For instance, the local correlations of colloidal glasses are accessible experimentally with the confocal microscopy technique [@confocal]. Similarly, cantilever measurements of noise spectra [@Nathan] allow probing of local fluctuations in the glassy phase of polymer melts. These are just two examples: any experiment that measures local fluctuations in glassy systems is a potential candidate for testing our ideas.
We thank D. Huse and J. Kurchan for useful discussions. Supported in part by the NSF (grant DMR-98-76208) and the Alfred P. Sloan Foundation. Supercomputing time was allocated by the Boston University SCF.
[99]{}
L. C. E. Struick, [*Physical aging in amorphous polymers and other materials*]{} (Elsevier, 1978).
E. Vincent [*et al.*]{}, in [*Proceedings of the Sitges conference*]{} (E. Rubi ed., Springer-Verlag, 1997).
L. F. Cugliandolo and J. Kurchan, Phys. Rev. Lett. [**71**]{}, 173 (1993); L. F. Cugliandolo and J. Kurchan, J. Phys. A [**27**]{}, 5749 (1994).
J-P Bouchaud [*et al*]{}, in [*Spin glasses and random fields*]{} A. P. Young ed (World Scientific, 1998).
J-L Barrat and W. Kob, Eur. Phys. J. B [**13**]{}, 319 (2000).
M. Picco, F. Ricci-Tersenghi, F. Ritort, Eur. Phys. J. B [**21**]{}, 211 (2001).
M. D. Ediger, Annu. Rev. Phys. Chem. [**51**]{}, 99 (2000).
A. van Blaaderen and P. Wiltzius, Science [**270**]{}, 1177 (1995); E. R. Weeks [*et al.*]{}, Science [**287**]{}, 627 (2000); W. K. Kegel and A. van Blaaderen, Science [**287**]{}, 290 (2000).
E. Vidal-Russell and N. E. Israeloff, Nature [**408**]{}, 695 (2000).
P. H. Poole [*et al*]{} Phys. Rev. Lett. [**78**]{}, 3394 (1997); A. Barrat and R. Zecchina, Phys. Rev. E [**59**]{} R1299 (1999); F. Ricci-Tersenghi and R. Zecchina, Phys. Rev. E [**62**]{}, R7567 (2000); C. Bennemann [*et al*]{} Nature, [**399**]{}, 246 (1999); W. Kob [*et al.*]{}, Phys. Rev. Lett. [**79**]{}, 2827 (1997).
C. Chamon [*et al*]{}, cond-mat/0109150.
D. S. Fisher and D. A. Huse, Phys. Rev. Lett. [**56**]{}, 1601 (1986).
S. Franz and H. Rieger, J. Stat. Phys. [**79**]{}, 749 (1995). E. Marinari [*et al*]{} J. Phys. A [**33**]{}, 2373 (2000); W. Kob and J-L. Barrat, Eur. Phys. J. B [**13**]{}, 319 (2000); A. Barrat [*et al*]{}, Phys. Rev. Lett. [**85**]{}, 5034 (2000); H. Makse and J. Kurchan, Nature [**415**]{}, 614 (2002); J-L. Barrat and L. Berthier, cond-mat/0110257; Phys. Rev. E [**57**]{}, 3629 (1998).
A. Barrat and L. Berthier, Phys. Rev. Lett. [**87**]{}, 087204 (2001).
T. S. Grigera and N. E. Israeloff, Phys. Rev. Lett. [**83**]{}, 5038 (2000). L. Bellon, S. Ciliberto, C. Laroche, Europhys. Lett. [**53**]{}, 511 (2001). D. Herisson and M. Ocio, cond-mat/0112378.
L. F. Cugliandolo, D. S. Dean, J. Kurchan, Phys. Rev. Lett. [**79**]{}, 2168 (1997).
H. E. Castillo, C. Chamon, L. F. Cugliandolo, M. P. Kennett, in preparation.
---
abstract: 'It is well known that every positive integer can be expressed as a sum of nonconsecutive Fibonacci numbers provided the Fibonacci numbers satisfy $F_n =F_{n-1}+F_{n-2}$ for $n\geq 3$, $F_1 =1$ and $F_2 =2$. In this paper, for any $n,m\in\mathbb{N}$ we create a sequence called the $(n,m)$-bin sequence with which we can define a notion of a legal decomposition for every positive integer. These sequences are not always positive linear recurrences, which have been studied in the literature, yet we prove, that like positive linear recurrences, these decompositions exist and are unique. Moreover, our main result proves that the distribution of the number of summands used in the $(n,m)$-bin legal decompositions displays Gaussian behavior.'
address:
- 'Department of Mathematics, Saint Peter’s University'
- 'Department of Mathematics and Statistics, Williams College, United States'
- 'Department of Mathematics, Saint Peter’s University'
- 'Department of Mathematical Sciences, United States Military Academy'
- 'Department of Mathematical Sciences, United States Military Academy'
author:
- Daniel Gotshall
- 'Pamela E. Harris'
- Dawn Nelson
- 'Maria D. Vega'
- Cameron Voigt
title: Bin Decompositions
---
Introduction
============
In 1972 Edouard Zeckendorf proved that any positive integer can be uniquely decomposed as a sum of non-consecutive Fibonacci numbers provided we use the recurrence $F_1=1$, $F_2=2$, and $F_n=F_{n-1}+F_{n-2}$ for $n\geq 3$ [@Ze]. Since then numerous researchers have generalized Zeckendorf’s theorem to other recurrence relations [@miller; @CFHMN1; @DDKMMV; @DDKMV; @KKMW; @lengyel]. Most work involved recurrence relations with positive leading terms, called positive linear recurrences (PLRs). That was until Catral, Ford, Harris, Miller, and Nelson generalized these results to the $(s,b)$-Generacci sequences and to the Fibonacci Quilt sequence, which are defined by non-positive linear recurrences [@CFHMN1; @CFHMN2; @newbehavior], and Dorward, Ford, Fourakis, Harris, Miller, Palsson, and Paugh to the $m$-gonal sequences, which arise from a geometric construction via inscribed $m$-gons [@mgonpaper; @individualgaps]. The main results in these studies involved determining the uniqueness of the decompositions of nonnegative integers using the numbers in these new sequences, determining whether the behavior arising from the average number of summands in these decompositions is Gaussian, and other related results.
A way to interpret the creation of the $(s,b)$-Generacci sequences is to imagine an infinite number of bins, each containing $b$ distinct positive integers. Given a number $\ell\in\mathbb{N}$, we decompose it as a sum of elements of the sequence such that 1) no two summands used in the decomposition appear in the same bin, and 2) no summands are taken from the $s$ bins to the left or to the right of any bin containing a summand used in the decomposition of $\ell$. If such a decomposition of $\ell$ exists using the numbers in the sequence, we then say that $\ell$ has a legal decomposition. If every positive integer $\ell$ has a legal decomposition, then we call the sequence of numbers satisfying this property the $(s,b)$-Generacci sequence. Note that the $(1,1)$-Generacci sequence gives rise to the Fibonacci sequence, as we have bins with only one integer each and we cannot use terms from consecutive bins in any decomposition.
Motivated by the bin construction used in the $(s,b)$-Generacci sequences, we create the $(n,m)$-bin sequences. These sequences are defined by nonpositive linear recurrences and depend on the positive integer parameters $s,b$ for Generacci sequences and $n,m$ for bin sequences. The terms of an $(n,m)$-bin sequence $\{a_x\}_{x=0}^\infty$ can be pictured via $$\underbracket{ \underbracket{a_0,\ldots,a_{n-1}}_{n}\ ,\underbracket{a_{n},\ldots,a_{n+m-1}}_{m}}_{\mathcal{B}_0}
\ ,\ldots,\
\underbracket{\underbracket{a_{(n+m)k},\ldots,a_{(n+m)k+n-1}}_{n}\ ,\underbracket{a_{(n+m)k+n},\ldots,a_{(n+m)k+n+m-1}}_{m}}_{\mathcal{B}_{k}}\ ,\ldots.$$ Note that the first term in the sequence is indexed by 0. Notice also that there are $n$ terms in the first bin and $m$ terms in the next. The number of terms in each subsequent bin alternates between $n$ and $m$. We use the notation $\mathcal{B}_k$ to indicate a pair of bins of size $n$ and $m$, in that order. Given a term in the sequence, $a_x$, we can determine which $\mathcal{B}_k$ contains $a_x$ and whether $a_x$ is in the $n$ or $m$ sized bin by using the division algorithm to write $x=(n+m)k+i$. If $0\leq i\leq n-1$ then $a_x$ is in the $n$ sized bin. If $n\leq i\leq m+n-1$ then $a_x$ is in the $m$ sized bin. For example, consider the (2,3)-bin sequence and term $a_{44}$. Since $44 = (2+3)8+4$, $a_{44}\in \mathcal{B}_8$ and since $i=4\geq 2=n$, $a_{44}$ is the third term in the $m=3$ size bin.
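This bookkeeping is short enough to write down explicitly; the helper below (our own illustrative code, not part of the paper) returns the pair index $k$ and the sub-bin holding $a_x$, and reproduces the $a_{44}$ example.

```python
def locate(x, n, m):
    """Locate a_x in the (n, m)-bin sequence: return (k, sub_bin, position),

    where k indexes the pair B_k, sub_bin is 'n' or 'm', and position is the
    1-based place of a_x inside that sub-bin.
    """
    k, i = divmod(x, n + m)          # division algorithm: x = (n + m) k + i
    if i <= n - 1:
        return k, 'n', i + 1
    return k, 'm', i - n + 1

# Example from the text: in the (2, 3)-bin sequence, a_44 is the third
# term of the bin of size 3 in B_8.
assert locate(44, 2, 3) == (8, 'm', 3)
```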
Before defining how we construct the sequences, we need to establish the notion of a legal decomposition.
Let an increasing sequence of integers $\{a_i\}_{i=0}^\infty$, divided into bins of sizes $n$ and $m$, be given. For any $n,m\in\mathbb{N}$, a [*$(n,m)$-bin legal decomposition*]{} of an integer using summands from this sequence is a decomposition in which no two summands are from the same or adjacent bins.
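Equivalently, legality can be checked by mapping each summand's index to the (linearly ordered) sub-bin containing it and requiring the sub-bin indices to differ by at least two. A small illustrative checker (our own sketch):

```python
def sub_bin(x, n, m):
    """Index (0, 1, 2, ...) of the sub-bin containing a_x, in linear order."""
    k, i = divmod(x, n + m)
    return 2 * k + (0 if i <= n - 1 else 1)

def is_legal(indices, n, m):
    """True iff no two chosen indices lie in the same or in adjacent sub-bins."""
    bins = sorted(sub_bin(x, n, m) for x in indices)
    return all(b2 - b1 >= 2 for b1, b2 in zip(bins, bins[1:]))

# (2, 3)-bin case: a_0 and a_7 sit two sub-bins apart (allowed),
# while a_0 and a_2 sit in adjacent sub-bins (not allowed).
assert is_legal([0, 7], 2, 3)
assert not is_legal([0, 2], 2, 3)
```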
As described in [@DDKMMV], this notion of legal decompositions is an $f$-decomposition defined by the function $f:{{\mathbb N}}_0\rightarrow{{\mathbb N}}_0$ with $$\begin{aligned}
f(j)=\begin{cases}
m+i & \mbox{if } j\equiv i\!\!\!\mod m+n \quad\mbox{ and
} 0\leq i\leq n-1\\
i & \mbox{if } j\equiv i\!\!\!\mod m+n \quad\mbox{ and } n\leq i\leq m+n-1.
\end{cases}\label{eq:fdef}\end{aligned}$$ In other words, if $a_j$ is a summand in a $(n,m)$-bin legal decomposition, then none of the previous $f(j)$ terms ($a_{j-f(j)},a_{j-f(j)+1}, \ldots, a_{j-1}$) are in the decomposition. Consider the (2,3)-bin legal decompositions. Then $f:{{\mathbb N}}_0\rightarrow{{\mathbb N}}_0$ is the periodic function $$\{f(j)\}=\{ 3, 4, 2, 3, 4, 3, 4, 2, 3, 4,\ldots \}.$$ Note $f(44)=4$, so if $a_{44}$ is a term in an $(n,m)$-bin legal decomposition, then $a_{40},a_{41},a_{42}, a_{43}$ are not in the decomposition. Notice that $a_{42}, a_{43}$ are other terms in the 3-bin (denoting the bin of size 3) that contains $a_{44}$ and that $a_{40},a_{41}$ are the two terms in the previous 2-bin (denoting the bin of size 2).
Through an immediate application of Theorems 1.2 and 1.3 from [@DDKMMV] we can establish that for any $n,m\in\mathbb{N}$, $(n,m)$-bin legal decompositions are unique and we get Proposition \[1.1\].
\[1.1\] For each pair of $n,m\in\mathbb{N}$ there is a unique sequence such that every positive integer has a unique $(n,m)$-bin legal decomposition.
With this result at hand, we can now formally define an ($n,m$)-bin sequence.
For each pair of $n,m\in\mathbb{N}$, an [*($n,m$)-bin sequence*]{} is the unique sequence such that every positive integer has a unique $(n,m)$-bin legal decomposition.
Using this definition one can verify that the $(2,3)$-bin sequence begins: $$\underbracket{1, 2},\underbracket{ 3, 4, 5}, \underbracket{6, 9}, \underbracket{12, 18, 24}, \underbracket{30, 42}, \underbracket{54, 84, 114}, \underbracket{144, 198}, \underbracket{252,
396, 540},\underbracket{684, 936}, \underbracket{1188,
1872, 2556}, \ldots$$ and that the $(2,3)$-bin legal decomposition of 2018 is $2018=1872+144+2$. We also note that we can once again recover the Fibonacci sequence, which in this case is given by the $(1,1)$-bin sequence.
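These objects are easy to generate and test numerically. The sketch below (ours) builds the sequence from the term-by-term recurrence $a_x=a_{x-1}+a_{x-1-f(x-1)}$ of [@DDKMMV] quoted in Section \[sec:recurrences\], taking the first $n+m$ terms to be $1,\dots,n+m$ as in the examples above, and computes decompositions greedily (largest term not exceeding the remainder first); that the greedy choice reproduces the unique legal decomposition is an assumption of the sketch, analogous to the Zeckendorf case.

```python
def bin_sequence(n, m, length):
    """First `length` terms of the (n, m)-bin sequence.

    Assumes the first n + m terms are 1, 2, ..., n + m (as in the examples
    above); afterwards a_x = a_{x-1} + a_{x-1-f(x-1)}, with f as in the
    previous sketch.
    """
    a = list(range(1, min(length, n + m) + 1))
    while len(a) < length:
        x = len(a)
        a.append(a[x - 1] + a[x - 1 - f(x - 1, n, m)])
    return a

def legal_decomposition(z, n, m):
    """Greedy (n, m)-bin decomposition of z >= 0, largest summand first."""
    a = bin_sequence(n, m, n + m)
    while a[-1] <= z:                          # extend the sequence until it covers z
        a = bin_sequence(n, m, 2 * len(a))
    parts = []
    while z > 0:
        j = max(i for i, v in enumerate(a) if v <= z)
        parts.append(a[j])
        z -= a[j]
    return parts

assert bin_sequence(2, 3, 10) == [1, 2, 3, 4, 5, 6, 9, 12, 18, 24]
assert legal_decomposition(2018, 2, 3) == [1872, 144, 2]
```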
In Section \[sec:recurrences\] we establish a recurrence for the $(n,m)$-bin sequences.
\[thm:single recurrence\] Assume $\{a_x\}_{x=0}^\infty$ is an $(n,m)$-bin sequence. Then for all $n,m\geq 1$ and $x\geq 2(m+n)$, $$\begin{aligned}
a_x&=(m+n+1)a_{x-(m+n)}-mna_{x-2(m+n)}.\label{eq:single recurrence}\end{aligned}$$
We note that the recurrence above is sometimes a PLR and sometimes it is not. For example, as noted previously, the $(1,1)$-bin legal decompositions are exactly the Zeckendorf decompositions, and use the Fibonacci numbers, which are defined via a PLR. However, when $n=2$ and $m=1$ the recurrence above is not a PLR and we show this in Appendix \[appendix\]. This provides further motivation to study sequences that are more broadly defined and do not necessarily fall under (or out of) the PLR definition.
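As a quick sanity check (not a proof), the single recurrence can be verified numerically with the generator from the previous sketch:

```python
def check_single_recurrence(n, m, length=60):
    """Check a_x = (m+n+1) a_{x-(m+n)} - m n a_{x-2(m+n)} for 2(m+n) <= x < length,
    reusing bin_sequence from the sketch above."""
    a = bin_sequence(n, m, length)
    s = m + n
    return all(a[x] == (s + 1) * a[x - s] - m * n * a[x - 2 * s]
               for x in range(2 * s, length))

assert all(check_single_recurrence(n, m) for n in range(1, 5) for m in range(1, 5))
```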
Our main result establishes that the number of summands used in $(n,m)$-bin legal decompositions of the natural numbers follows a Gaussian distribution.
\[thm:gaussian\] Let the random variable $Y_k$ denote the number of summands in the (unique) $(n,m)$-bin legal decomposition of an integer chosen uniformly at random from $[0, a_{(n+m)k})$. Normalize $Y_k$ to $Y_k' = (Y_k - \mu_k)/\sigma_k$, where $\mu_k$ and $\sigma_k$ are the mean and variance of $Y_k$ respectively. Then $$\begin{aligned}
\label{muConstantA} \mu_k \ = \ Ck+O(1), \ \ \ \ \sigma_k^2 \ = \ C'k+O(1),
\end{aligned}$$ for some positive constants $$C=\frac{{{\sqrt{(1+m+n)^2-4mn}}}-1}{{{\sqrt{(1+m+n)^2-4mn}}}},\qquad C'=\frac{(m+n)(1+m+n)-4mn}{{{\sqrt{(1+m+n)^2-4mn}}}^3}.$$ Moreover, $Y_k'$ converges in distribution to the standard normal distribution as $k \rightarrow \infty$.
As we noted earlier, the $(1,1)$-bin sequence is simply the Fibonacci sequence. In this case, the formulas for the mean and the variance given in (\[muConstantA\]) simplify to the known formulas obtained by Lekkerkerker [@Lek] and Koloğlu et al. [@KKMW]. Lekkerkerker computed that for $x\in[F_n,F_{n+1})$ the average number of summands in a Zeckendorf decomposition is $\frac{n}{\phi^2+1}+O(1)$, where $\phi = \frac{1+\sqrt{5}}{2}$. The result is the same when the interval is extended to $x\in[0,F_n)$. In [@KKMW], the authors show that for $x\in[F_n,F_{n+1})$ the variance of the number of summands in a Zeckendorf decomposition is $\frac{\phi n}{5(\phi+2)}+O(1)$. Again the result is the same when the interval is extended to $x\in[0,F_n)$.
\[cor:m=n=1\] Consider the $(1,1)$-bin sequence. For $x\in [0,a_{2k})$ the average and variance of the number of summands in a $(1,1)$-bin legal decomposition is $$\mu_k = \frac{\sqrt{5}-1}{\sqrt{5}}k+O(1)= \frac{1}{\phi^2+1}2k+O(1) \hspace{.25in}\hbox{and}\hspace{.25in}
\sigma^2_k = \frac{2}{5\sqrt{5}}k+O(1)= \frac{\phi }{5(\phi+2)}2k+O(1).$$
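Both the general constants and this special case can be checked with a few lines of arithmetic. The snippet below (ours) evaluates $C$ and $C'$ from Theorem \[thm:gaussian\], confirms the golden-ratio expressions of Corollary \[cor:m=n=1\], and reproduces the $(2,3)$ values quoted later in Table \[table:gaussian\] for $k=10000$.

```python
from math import isclose, sqrt

def gaussian_constants(n, m):
    """Constants C and C' of Theorem thm:gaussian."""
    beta = sqrt((1 + m + n) ** 2 - 4 * m * n)
    return (beta - 1) / beta, ((m + n) * (1 + m + n) - 4 * m * n) / beta ** 3

phi = (1 + sqrt(5)) / 2
C11, Cp11 = gaussian_constants(1, 1)
assert isclose(C11, 2 / (phi ** 2 + 1))             # mean coefficient, (1,1)-bin case
assert isclose(Cp11, 2 * phi / (5 * (phi + 2)))     # variance coefficient, (1,1)-bin case

C23, Cp23 = gaussian_constants(2, 3)
assert isclose(10_000 * C23, 7113.248654, abs_tol=1e-5)   # predicted mean in Table
assert isclose(10_000 * Cp23, 1443.375673, abs_tol=1e-5)  # predicted variance in Table
```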
The paper is organized as follows. Section \[sec:recurrences\] establishes needed recurrence relations and proves Theorem \[thm:single recurrence\], Section \[sec:generating\] develops helpful generating functions, and Section \[sec:gaussian\] pulls these ideas together and contains the proof of Theorem \[thm:gaussian\]. We end with some directions for future research.
Recurrence relations {#sec:recurrences}
====================
In this section we establish recurrence relations for $(n,m)$-bin sequences. We will establish Theorem \[thm:single recurrence\] via the following two technical results. Lemma \[rec\_reln\_a\] provides a family of recurrence relations. For example, equation (\[a0\_rec\]) computes the first term in the $n$-bin, equation (\[ai\_rec\]) computes the remaining terms in the $n$-bin and the first term in the $m$-bin, and equation (\[aj\_rec\]) computes the remaining terms in the $m$-bin. In contrast, Theorem \[thm:single recurrence\] provides a single recurrence relation that can be used to compute any term regardless of its position in the bins.
\[rec\_reln\_a\] If $n,m\in\mathbb{N}$, then for $k\geq1$ $$\begin{aligned}
a_{(m+n)(k+1)} & = a_{(m+n)k+m+n-1} +a_{(m+n)k}\label{a0_rec}\\
a_{(m+n)(k+1)+i} & = a_{(m+n)(k+1)+(i-1)} +a_{(m+n)k+n} \qquad \mbox{for } 1\leq i\leq n\label{ai_rec}\\
a_{(m+n)(k+1)+j} & = a_{(m+n)(k+1)+j-1} +a_{(m+n)(k+1)}\qquad \mbox{for } n+1\leq j\leq m+n-1\label{aj_rec}\end{aligned}$$
By Theorems 1.2 and 1.3 in [@DDKMMV], $a_x=a_{x-1}+a_{x-1-f(x-1)}$. When $x=(m+n)(k+1)$, we have $x-1 = (m+n)k+m+n-1$ and $f((m+n)k+m+n-1)=m+n-1$. Hence Equation (\[a0\_rec\]) is immediate. The other equations follow from a similar argument.
Lemma \[lem\_same\] interweaves the family of recurrence relations to show that if the single recurrence relation (of Theorem \[thm:single recurrence\]) is true for $x\equiv 0\,(\mbox{mod }m+n)$, then it is true for all $x$.
\[lem\_same\] Assume $n,m\geq 1$. If $$\label{lem_rec}
a_x=(m+n+1)a_{x-(m+n)}-mna_{x-2(m+n)}$$ for $x\geq 2(m+n)$ and $x\equiv 0\,(\mbox{mod }m+n)$, then Equation (\[lem\_rec\]) is true for all $x\geq 2(m+n)$.
By hypothesis, $$a_{(m+n)k}=(m+n+1)a_{(m+n)k-(m+n)}-mna_{(m+n)k-2(m+n)}.$$ In other words, $$a_{(m+n)k}=(m+n+1)a_{(m+n)(k-1)}-mna_{(m+n)(k-2)}.$$ So applying Equation (\[a0\_rec\]), we have $$\begin{aligned}
a_{(m+n)(k-1)+m+n-1} +a_{(m+n)(k-1)}=&(m+n+1)[a_{(m+n)(k-2)+m+n-1} +a_{(m+n)(k-2)}]\\
&-mn[a_{(m+n)(k-3)+m+n-1} +a_{(m+n)(k-3)}]. \end{aligned}$$ Thus $$\begin{aligned}
a_{(m+n)(k-1)+m+n-1} -[(m+n+1)a_{(m+n)(k-2)+m+n-1}-mna_{(m+n)(k-3)+m+n-1}]\\
= - a_{(m+n)(k-1)}+[(m+n+1)a_{(m+n)(k-2)}-mna_{(m+n)(k-3)}]. \end{aligned}$$ By hypothesis, the right hand side of this equation is 0. Hence so is the left side and thus Equation (\[lem\_rec\]) is true for $x\equiv m+n-1\,(\mbox{mod }m+n)$.
Repeating a similar argument several more times shows that Equation (\[lem\_rec\]) is true for all $x$.
It remains to prove that Equation (\[lem\_rec\]) is true for $x\equiv 0\,(\mbox{mod }m+n)$. We do this in the following proof and thus establish Theorem \[thm:single recurrence\].
Assume $\{a_x\}_{x=0}^\infty$ is an $(n,m)$-bin sequence. As explained in Section 1, this sequence is an $f$-sequence defined by the function $f(j)$ given in Equation (\[eq:fdef\]). Note that the period of $f(j)$ is $m+n$ and $m+n\geq f(j)+1$ for all $j$.
By Theorem 1.5 in [@DDKMMV], since $f(j)$ is periodic, we know that there is a single recurrence relation for our sequence. The proof of Theorem 1.5 in [@DDKMMV] gives us an algorithm for computing the single recurrence relation.
Consider $m+n$ subsequences of $\{a_x\}_{x=0}^\infty$ given by terms whose indices are all in the same residue class mod $m+n$. We will begin by finding a recurrence relation for each subsequence: $$\label{coeff}
a_x=\sum_{i=1}^{m+n+1}c_i a_{x-(m+n)i}.$$ A priori, these relations may be different for each residue class, but Lemma \[lem\_same\] tells us that all relations are in fact the same. Thus we focus on the subsequence corresponding to the 0 residue class.
It remains to solve for the constants $c_i$ in (\[coeff\]). To solve for these constants we will use linear algebra techniques; in particular, we use matrices and vectors to represent systems of equations. Each of the equations in Lemma \[rec\_reln\_a\] can be rewritten as a vector. (The starred columns, beginning with 0, are those that are indexed by multiples of $m+n$, and the columns marked with $\circ$ are indices congruent to $m$ modulo $m+n$): $$\begin{array}{rrrrrrrrrrrrrrrr}
&&\star\,&&&&\circ&&&&\star&&&&\circ\,\\
\vec{v}_0 &=& [1,&-1, &0, & & & & \ldots,&0,&-1,&0,&\ldots]\\
\vec{v}_1 &=& [0,& 1, &-1,&0,& & &\ldots,&0,&-1,&0,&\ldots]\\
\vdots \\
\vec{v}_{m-1} &=& [0,&\ldots,& 0,& 1,&-1,&0,&\ldots,&0,&-1,&0,&\ldots]\\
\vec{v}_{m} &=& [0,&\ldots,&&0,& 1,&-1,&0,&\ldots,&0,&\ldots,&&0,&-1]\\
\vdots \\
\vec{v}_{m+n-1} &=& [0,&\ldots,& &&&&0,& 1,&-1,&0,&\ldots,&0,&-1]
\end{array}$$ Vector $\vec{v}_0$ corresponds to the recurrence relation in (\[a0\_rec\]), $\vec{v}_1$ to $\vec{v}_{m-1}$ correspond to the recurrence relations in (\[aj\_rec\]), $\vec{v}_m$ to $\vec{v}_{m+n-1}$ correspond to the recurrence relations in (\[ai\_rec\]). For all $\vec{v}_j$ the number of leading 0’s is $j$ and the number of middle 0’s is $f
(m+n-j)-1$.
Define $T$ to be the transformation that shifts all coordinates to the right by $(m+n)$ places.
According to the algorithm in [@DDKMMV] the goal is to zero out the coordinates that are not indexed by multiples of $m+n$ (the period). Note the first column is indexed by 0. Our first step in this process is to define $\vec{w}_1$, a linear combination of the $\vec{v}_j$. We have: $$\vec{w}_1 =\vec{v}_0+\cdots+\vec{v}_{m+n-1}=[1,\; 0,\; \ldots,\; 0,\; -m-1,\;0 ,\; \ldots,\; 0,\;-n, \;0],$$ where there are $(m+n-1)$ 0’s in the first set and $(m-1)$ 0’s in the second set. We continue and use $T$ to define $\vec{w}_2$: $$\begin{aligned}
\vec{w}_2 &=\vec{w}_1+n\sum_{j=m}^{m+n-1}T\vec{v}_j\\
&=[1,\; 0,\; \ldots,\; 0,\; -m-1,\;0 ,\; \ldots,\; 0, \;-n,\;0 ,\; \ldots,\; 0,\;-n^2 ], \end{aligned}$$ where there are $(m+n-1)$ 0’s in the first and second sets and $(m-1)$ 0’s in the last set.
Note that in $\vec{w}_0=\vec{v}_0$, $\vec{w}_1$, and $\vec{w}_2$ the bad coordinates (the coordinates that are not 0 and not indexed by multiples of $(m+n)$) are given by $$\begin{array}{rrlrrrrr}
\vec{u}_0 &= &[-1,&0 & \ldots, &0]\\
\vec{u}_1 &= &[0, & \ldots,&0, &-n]\\
\vec{u}_2 &= &[0, & \ldots,&0, &-n^2]
\end{array}.$$ We simplify by removing the common strings of 0’s: $$\begin{array}{rrlrrrr}
\vec{u}_0 &= &[-1, &0]\\
\vec{u}_1 &= &[0, &-n]\\
\vec{u}_2 &= &[0, &-n^2]
\end{array}.$$ There exists a non-trivial solution to $\sum_{j=0}^2\lambda_j\vec{u}_j=0$, namely $\lambda_0 =0,\lambda_1 =-n, \lambda_2 =1$. Using these values, we can write a linear combination of the $\vec{w}_j$ in which we have succeeded in zeroing out the coordinates that are not multiples of $m+n$: $$\sum_{j=0}^2 \lambda_jT^{2-j}\vec{w}_j = [1,\, 0,\, \ldots,\, 0,\, -(m+n+1),\,0 ,\, \ldots,\, 0, \,mn,\,0 ,\, \ldots ].$$ Thus Relation (\[coeff\]) becomes $a_x = (m+n+1)a_{x-(m+n)}-mna_{x-2(m+n)}$. Note that a priori this is only the recurrence relation for the subsequence given by the terms whose indices are congruent to $0\, (\mbox{mod } m+n)$. Fortunately, applying Lemma \[lem\_same\], we see that this recurrence relation is the single relation for the entire sequence.
Counting summands with generating functions {#sec:generating}
===========================================
In this section we provide generating functions for counting integers with a fixed number of summands in their $(n,m)$-bin legal decomposition. We continue to assume throughout that $\{a_x\}_{x=0}^\infty$ is an $(n,m)$-bin sequence.
Let $p_{k,c}$ denote the number of integer $z\in [0,a_{(n+m)k})$ whose legal decomposition contains exactly $c$ summands, where $c\geq 0$. Then by definition
$$p_{0,c}=\begin{cases}
1 &c=0 \\
0 & c>0
\end{cases}$$
$$p_{1,c}=\begin{cases}
1 &c=0 \\
n+m & c=1\\
0 & c>1
\end{cases}$$
Note that for all $k\geq 0$, $p_{k,0}=1$ and $p_{k,1}=k(n+m)$. Moreover, for all $c>k\geq 0$, $p_{k,c}=0$.
We also have the following recurrence relation for the values of $p_{k,c}$.
\[prop:rec for pkc\] If $k\geq 2$ and $c\geq 0$, then $$p_{k,c}= p_{k-1,c} +(m+n)p_{k-1,c-1} -nmp_{k-2,c-2}.$$
The decomposition of an integer $z\in [0,a_{(n+m)k})$ either has a summand from the bin $\mathcal{B}_{k-1}$ or it doesn’t. If it doesn’t then the number of integers with $c$ summands is $p_{k-1,c}$.
If $z$ has a summand in the bin $\mathcal{B}_{k-1}$, then there are two possibilities: either the summand lies in the bin of size $m$ or in the bin of size $n$. In what follows we need to recall that the first sub-bin of $\mathcal{B}_{k-1}$ has size $n$ and the second has size $m$. If the largest summand appearing in the decomposition of $z$ is in the sub-bin of size $m$ then there are $m$ ways to choose it, and since the next largest legal summand is less than $a_{(n+m)(k-1)}$, there are $p_{k-1,c-1}$ ways to choose the remaining $c-1$ summands. Hence there are $mp_{k-1,c-1}$ integers with $c$ summands and largest summand from the $m$ sub-bin of $\mathcal{B}_{k-1}$. On the other hand, if the largest summand in the decomposition of $z$ is in the sub-bin of size $n$, the quantity $np_{k-1,c-1}$ overcounts by $nmp_{k-2,c-2}$, because a decomposition with a summand from the sub-bin of size $n$ of $\mathcal{B}_{k-1}$ and a summand from the sub-bin of size $m$ of $\mathcal{B}_{k-2}$ does not give rise to a $(n,m)$-bin legal decomposition. Hence $p_{k,c}= p_{k-1,c} +(m+n)p_{k-1,c-1}-nmp_{k-2,c-2}$.
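The recurrence is easy to check against a direct enumeration. The sketch below (ours; it reuses `bin_sequence` and the greedy `legal_decomposition` from the earlier sketches, and therefore inherits the assumption that greedy choices give the legal decomposition) tabulates $p_{k,c}$ both ways for small $k$.

```python
from collections import Counter

def p_table(n, m, K):
    """Rows p[k][c] for 0 <= k, c <= K from the recurrence of Proposition prop:rec for pkc."""
    p = [[1] + [0] * K, [1, n + m] + [0] * (K - 1)]
    for k in range(2, K + 1):
        p.append([p[k - 1][c]
                  + ((m + n) * p[k - 1][c - 1] if c >= 1 else 0)
                  - (n * m * p[k - 2][c - 2] if c >= 2 else 0)
                  for c in range(K + 1)])
    return p

def p_bruteforce(n, m, k):
    """Count the integers in [0, a_{(n+m)k}) by their number of summands."""
    a = bin_sequence(n, m, (n + m) * k + 1)
    return Counter(len(legal_decomposition(z, n, m)) for z in range(a[(n + m) * k]))

rows = p_table(2, 3, 3)
assert rows[2][:3] == [1, 10, 19]       # e.g. 19 integers in [0, 30) use exactly two summands
brute = p_bruteforce(2, 3, 3)           # direct count over [0, 144)
assert all(rows[3][c] == brute[c] for c in range(4))
```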
\[prop:Fxy\] Let $F(x,y)=\sum_{k\geq 0}\sum_{c\geq 0}p_{k,c}x^ky^c$ be the generating function of the $p_{k,c}$’s arising from the $(n,m)$-bin legal decompositions. Then $$\begin{aligned}
F(x,y)&=\frac{1}{1-x-(m+n)xy+mnx^2y^2}.\label{eq:Fxy}\end{aligned}$$
Noting that $p_{k,c}=0$ if either $k < 0$ or $c < 0$, using explicit values of $p_{k,c}$ and the recurrence relation from Proposition \[prop:rec for pkc\], after some straightforward algebra we obtain $$F(x,y)=xF(x,y)+(m+n)xyF(x,y)-mnx^2y^2F(x,y)+1$$ from which (\[eq:Fxy\]) follows.
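The generating function can also be checked directly by expanding the geometric series $F=\sum_{j\geq0}\big(x+(m+n)xy-mnx^2y^2\big)^j$ to finite order and comparing coefficients with the table built from the recurrence. The sketch below assumes SymPy is available and reuses `p_table` from the previous sketch.

```python
import sympy as sp

x, y = sp.symbols('x y')
n_, m_, K = 2, 3, 4
g = x + (m_ + n_) * x * y - m_ * n_ * x**2 * y**2
# Every term of g has x-degree >= 1, so powers g**j with j > K cannot
# contribute to the coefficient of x**k for any k <= K.
F_trunc = sp.expand(sum(g**j for j in range(K + 1)))
rows = p_table(n_, m_, K)
for k in range(K + 1):
    for c in range(K + 1):
        assert F_trunc.coeff(x, k).coeff(y, c) == rows[k][c]
```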
Gaussian behavior {#sec:gaussian}
=================
To motivate the main result of this section, we point the reader to the following experimental observations. Taking samples of 100,000 integers from the intervals $[0, a_{10000(m+n)})$, in Figure \[gaussiangraphs\] we provide a histogram for the distribution of the number of summands in the $(n,m)$-bin decomposition of these integers, when $(n,m)=(1,2)$, $(n,m)=(2,1)$, $(n,m)=(2,3)$, and $(n,m)=(3,2)$ respectively. In these figures we also provide the Gaussian curve computed using each sample’s mean and variance. Furthermore, Table \[table:gaussian\] gives the values of the predicted means and variances as computed using Theorem \[thm:gaussian\], as well as the sample means and variances, for each of the samples considered.
\
Figure $(n,m)$ Predicted Mean Sample Mean Predicted Variance Sample Variance
------------------------- --------- ---------------- --------------- -------------------- -----------------
\[gaussian(n,m)=(1,2)\] $(1,2)$ $6464.466094$ $6465.205230$ $1767.766953$ $1770.751318$
\[gaussian(n,m)=(2,1)\] $(2,1)$ $6464.466094$ $6465.418910$ $1767.766953$ $1774.385128$
\[gaussian(n,m)=(2,3)\] $(2,3)$ $7113.248654$ $7114.140920$ $1443.375673$ $1450.656668$
\[gaussian(n,m)=(3,2)\] $(3,2)$ $7113.248654$ $7114.202700$ $1443.375673$ $1437.312966$
: Predicted means and variances versus sample means and variances for simulations from Figure \[gaussiangraphs\].[]{data-label="table:gaussian"}
From these observations one might speculate that for any pair of integers $n,m\in\mathbb{N}$ the distribution of the number of summands in the $(n,m)$-bin legal decompositions of integers in the interval $[0,a_{(n+m)k})$ displays Gaussian behavior. This is in fact the statement of Theorem \[thm:gaussian\].
To prove Theorem \[thm:gaussian\] we first need the following technical results.
For all $m,n,y>0$, the following inequalities hold: $$\begin{aligned}
(1+(m+n)y)^2-4mny^2&>1+(m+n)y\label{lem:betay}\\
{{\sqrt{(1+(m+n)y)^2-4mny^2}}}&>1\label{lem:discr}\\
1+(m+n)y+{{\sqrt{(1+(m+n)y)^2-4mny^2}}}&>1+(m+n)y-{{\sqrt{(1+(m+n)y)^2-4mny^2}}}> 0\label{lem:denom}\end{aligned}$$
To establish (\[lem:betay\]) and (\[lem:discr\]) we note that $$\begin{aligned}
(1+(m+n)y)^2-4mny^2=&1+2(m+n)y+(m-n)^2y^2>1+(m+n)y>1.\end{aligned}$$
The first inequality in (\[lem:denom\]) is clear, while the second is true because $$\begin{aligned}
(1+(m+n)y)^2>&(1+(m+n)y)^2-4mny^2>1.\end{aligned}$$ Hence $1+(m+n)y>{{\sqrt{(1+(m+n)y)^2-4mny^2}}}$.
Let $g_k(y):=\sum_{c=0}^{k}p_{k,c}y^c$ denote the coefficient of $x^k$ in $F(x,y)$. Then $$\begin{aligned}
g_k(y)&=\frac{1}{\sqrt{(1+(m+n)y)^2-4mny^2}}\left[\left(\frac{2mny^2}{(1+(m+n)y)
-\sqrt{(1+(m+n)y)^2-4mny^2}}\right)^{k+1}\right.\\
&\hspace{5mm}\left.-\left(\frac{2mny^2}{(1+(m+n)y)+\sqrt{(1+(m+n)y)^2-4mny^2}}\right)^{k+1}\right].\end{aligned}$$
From Proposition \[prop:Fxy\] we know that $$F(x,y)=\frac{1}{1-x-(m+n)xy+mnx^2y^2}=\frac{1}{mny^2}\cdot\frac{1}{x^2-\frac{1+(m+n)y}{mny^2}x+\frac{1}{mny^2}}.$$ In order to expand $F(x,y)$ into a power series we will use partial fraction decomposition, but first we must factor $x^2-\frac{1+(m+n)y}{mny^2}x+\frac{1}{mny^2}$ into two linear factors. Using the quadratic formula yields $$x^2-\frac{1+(m+n)y}{mny^2}x+\frac{1}{mny^2}=(x-\lambda_1)(x-\lambda_2)$$ where $$\begin{aligned}
\lambda_1=\lambda_1(y)&=\frac{(1+(m+n)y)-\sqrt{(1+(m+n)y)^2-4mny^2}}{2mny^2}\\
\lambda_2=\lambda_2(y)&=\frac{(1+(m+n)y)+\sqrt{(1+(m+n)y)^2-4mny^2}}{2mny^2}.\end{aligned}$$
Since the discriminant is positive, by Equation (\[lem:betay\]), we can use partial fraction decomposition $$F(x,y)=\frac{1}{mny^2}\cdot\frac{1}{x^2-\frac{1+(m+n)y}{mny^2}x+\frac{1}{mny^2}}=\frac{1}{mny^2}\cdot\left(\frac{A_1}{x-\lambda_1}+\frac{A_2}{x-\lambda_2}\right).$$
Solving for $A_1, A_2$: $$1=A_1(x-\lambda_2)+A_2(x-\lambda_1)$$ If $x=\lambda_1$, then $1=A_1(\lambda_1-\lambda_2)$. Hence $A_1 =\frac{1}{\lambda_1-\lambda_2}$ and $$\begin{aligned}
\lambda_1-\lambda_2=&\left( \frac{(1+(m+n)y)-\sqrt{(1+(m+n)y)^2-4mny^2}}{2mny^2}\right)\\
&-\left( \frac{(1+(m+n)y)+\sqrt{(1+(m+n)y)^2-4mny^2}}{2mny^2}\right)\\
=& -\frac{\sqrt{(1+(m+n)y)^2-4mny^2}}{mny^2}\end{aligned}$$ Thus $ A_1= \frac{-mny^2}{\sqrt{(1+(m+n)y)^2-4mny^2}}$. Similarly, if $x=\lambda_2$, then $1=A_2(\lambda_2-\lambda_1)$. So $A_2 =\frac{1}{\lambda_2-\lambda_1}=-A_1.$
Thus $$\begin{aligned}
F(x,y)=&\frac{1}{mny^2}\cdot\left(\frac{-A_1}{\lambda_1-x}-\frac{A_2}{\lambda_2-x}\right)=
\frac{1}{mny^2}\cdot\left( \frac{-A_1}{\lambda_1}\sum_{i=0}^{\infty}\left(\frac{x}{\lambda_1}\right)^i-\frac{A_2}{\lambda_2}\sum_{i=0}^{\infty}\left(\frac{x}{\lambda_2}\right)^i\right).\label{eqinprevious}\end{aligned}$$ If $g_k(y)$ denotes the coefficient of $x^k$ in $F(x,y)$, then using Equation (\[eqinprevious\]) we have that $$\begin{aligned}
g_k(y)=&\frac{1}{mny^2}\cdot\left(\frac{-A_1}{\lambda_1}\left(\frac{1}{\lambda_1}\right)^k-\frac{A_2}{\lambda_2}\left(\frac{1}{\lambda_2}\right)^k\right)\\
=& \frac{1}{\lambda_1 \sqrt{(1+(m+n)y)^2-4mny^2}} \left(\frac{2(mny^2)}{(1+(m+n)y)-\sqrt{(1+(m+n)y)^2-4(nmy^2)}}\right)^k\\
& +\frac{-1}{\lambda_2 \sqrt{(1+(m+n)y)^2-4mny^2}} \left(\frac{2(mny^2)}{(1+(m+n)y)+\sqrt{(1+(m+n)y)^2-4(nmy^2)}}\right)^k.\qedhere\end{aligned}$$
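As a numerical sanity check (not part of the proof), the closed form can be compared with the polynomial $\sum_c p_{k,c}y^c$ obtained from the recurrence, reusing `p_table` from the earlier sketches.

```python
from math import isclose, sqrt

def g_closed(k, yv, n, m):
    """Closed form of g_k(y) from the lemma above, evaluated at y = yv."""
    s, p = m + n, m * n
    disc = sqrt((1 + s * yv) ** 2 - 4 * p * yv ** 2)
    lo = 2 * p * yv ** 2 / (1 + s * yv - disc)
    hi = 2 * p * yv ** 2 / (1 + s * yv + disc)
    return (lo ** (k + 1) - hi ** (k + 1)) / disc

for k in range(1, 6):
    poly = sum(c * 0.7 ** j for j, c in enumerate(p_table(2, 3, k)[k]))
    assert isclose(g_closed(k, 0.7, 2, 3), poly, rel_tol=1e-9)
```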
To complete the proof of Theorem \[thm:gaussian\] we make use of the following result from [@DDKMV].
\[thm:DDKMV\][@DDKMV]\*[Theorem 1.8]{} Let $\kappa$ be a fixed positive integer. For each $n$, let a discrete random variable $Y_n$ in $I_{n}=\{1,2,\ldots,n\}$ have $$\begin{aligned}
{\rm Prob}(Y_n=j)\ = \ \begin{cases}p_{j,n}/\sum_{j=1}^np_{j,n}&\text{{\rm if} $j\in I_n$}\\ 0&\text{{\rm otherwise}}\end{cases}\end{aligned}$$ for some positive real numbers $p_{1,n}, p_{2,n}, \ldots, p_{n,n}$. Let $g_n(y):=\sum_j p_{j,n}y^j$.
If $g_n$ has the form $g_n(y) = \sum_{i=1}^\kappa q_i(y)\alpha_i^n(y)$ where
1. for each $i \in \{1, \ldots, \kappa\}, q_i, \alpha_i: \mathbb{R} \to \mathbb{R}$ are three times differentiable functions which do not depend on $n$;
2. there exists some small positive $\epsilon$ and some positive constant $\lambda < 1$ such that for all $y \in I_{\epsilon} = [1-\epsilon, 1 + \epsilon], |\alpha_1(y)| > 1$ and $|\frac{\alpha_i(y)}{\alpha_1(y)}| < \lambda < 1$ for all $i=2, \ldots, \kappa$;
then the mean $\mu_n$ and variance $\sigma_n^2$ of $Y_n$ both grow linearly with $n$. Specifically, $$\begin{aligned}
\mu_n \ =\ Cn + d + o(1), \ \ \ \ \sigma_n^2 \ =\ C^\prime n + d^\prime + o(1)\end{aligned}$$ where $$\begin{aligned}
C&\ = \ \frac{\alpha_1'(1)}{\alpha_1(1)}, \ d \ = \ \frac{q_1'(1)}{q_1(1)} \nonumber\\
C^\prime &\ = \ \frac{d}{dy}\left. \left(\frac{y\alpha_1'(y)}{\alpha_1(y)} \right)\right\vert_{y=1} \ = \ \frac{\alpha_1(1)[\alpha_1'(1)+ \alpha_1''(1)]-\alpha_1'(1)^2}{\alpha_1(1)^2}\nonumber\\
d^\prime &\ = \ \frac{d}{dy} \left. \left(\frac{yq_1'(y)}{q_1(y)} \right) \right\vert_{y=1} \ = \ \frac{q_1(1)[q_1'(1)+ q_1''(1)]-q_1'(1)^2}{q_1(1)^2}.\end{aligned}$$ Moreover, if
1. $\alpha_1'(1) \neq 0$ and $\frac{d}{dy}\left[ \frac{y\alpha_1'(y)}{\alpha_1(y)}\right]|_{y=1} \neq 0$, i.e., $C,C'>0$,
then as $n \to \infty$, $Y_n$ converges in distribution to a normal distribution.
Throughout the following proof we will simplify some calculations with the substitutions: $$s=m+n,\quad p=mn, \quad \mbox{and}\quad \beta = {{\sqrt{(1+m+n)^2-4mn}}}.$$
To prove Gaussian behavior we need only show that $g_k(y)$ satisfies the hypothesis of Theorem \[thm:DDKMV\]. Note that $$g_k(y)=q_1(y)\alpha_1^k(y) + q_2(y)\alpha_2^k(y),$$ where $$q_i(y)=
\frac{(-1)^{i+1}2mny^2}{\left(1+(m+n)y+(-1)^i \sqrt{(1+(m+n)y)^2-4mny^2}\right)\sqrt{(1+(m+n)y)^2-4mny^2}}$$ and $$\alpha_i(y)=
\frac{2mny^2}{1+(m+n)y+(-1)^i \sqrt{(1+(m+n)y)^2-4mny^2}}.$$
- Condition (i): For each $i =1,2$, the functions $q_i(y)$ and $\alpha_i(y)$ are three times differentiable.
- Condition (ii): Let $\epsilon$ be some small positive constant and assume $y \in I_{\epsilon} = [1-\epsilon, 1 + \epsilon]$.
By Equation (\[lem:denom\]), we see that $0<\alpha_2(y)<\alpha_1(y)$. Since the ratio $\alpha_2(y)/\alpha_1(y)$ is continuous and strictly less than $1$ on the compact interval $I_\epsilon$, there is a positive constant $\lambda$ with $|\frac{\alpha_2(y)}{\alpha_1(y)}| < \lambda < 1$. Next we show that $\alpha_1(y)>1$. We begin by noting that $py^2 >0$ and $\sqrt{(1+sy)^2-4py^2}>1$ (by Equation (\[lem:discr\])). Hence $$\begin{aligned}
0<& 4py^2(py^2+\sqrt{(1+sy)^2-4py^2}-1)\\
(1+sy)^2<& 4py^2(py^2+\sqrt{(1+sy)^2-4py^2}-1)+(1+sy)^2\\
(1+sy)^2<& 4p^2y^4+4py^2\sqrt{(1+sy)^2-4py^2}+(1+sy)^2 - 4py^2\\
(1+sy)^2<& (2py^2+\sqrt{(1+sy)^2-4py^2})^2\\
1+sy<& 2py^2+\sqrt{(1+sy)^2-4py^2}\\
1<&\frac{2py^2}{1+sy-\sqrt{(1+sy)^2-4py^2}}.\end{aligned}$$
- Condition (iii): First we compute $C=\frac{\alpha_1'(1)}{\alpha_1(1)}$ and prove that it is not 0. Use $$\alpha_1(y) = \frac{2py^2}{1+sy-\sqrt{(1+sy)^2-4py^2}}$$ and compute $$\begin{aligned}
\alpha_1'(y) =&\frac{4py}{1+sy-\sqrt{(1+sy)^2-4py^2}}-\frac{2py^2\left[s-\tfrac{1}{2}\left( (1+sy)^2-4py^2 \right)^{-1/2}(2s(1+sy)-8py) \right]}{(1+sy-\sqrt{(1+sy)^2-4py^2})^2}.\end{aligned}$$ Substitute $y=1$, use a common denominator to add fractions, and the numerator of $\alpha_1'(1)$ simplifies to $$\begin{aligned}
4p(1+s-\beta)-2p\left[s-\frac{2s(1+s)-8p}{2\beta} \right]
=&2p\left( 2(1+s-\beta) -s+\frac{s(1+s)-4p}{\beta} \right)\\
=&\frac{2p}{\beta}(1+s-\beta)(\beta-1).\end{aligned}$$ Hence $$C=\frac{\alpha_1'(1)}{\alpha_1(1)}=\frac{\frac{2p(1+s-\beta)(\beta-1)}{\beta(1+s-\beta)^2}}
{\frac{2p}{1+s-\beta}}=\frac{\beta-1}{\beta}=\frac{{{\sqrt{(1+m+n)^2-4mn}}}-1}{{{\sqrt{(1+m+n)^2-4mn}}}}.$$ Note that this final value is positive, and in particular not zero (see Equation (\[lem:discr\])).
Second we compute $C'=\frac{\alpha_1'(1)+\alpha_1''(1)}{\alpha_1(1)}-\left(\frac{\alpha_1'(1)}{\alpha_1(1)}\right)^2$ and prove that it is not 0. Note $$\begin{aligned}
\alpha_1''(1)=&\frac{4p\left(s+\frac{4p-s(1+s)}{\beta}\right)^2}{(1+s-\beta)^3}-\frac{8p\left(s+\frac{4p-s(1+s)}{\beta}\right)}{(1+s-\beta)^2}+\frac{4p}{1+s-\beta}
-\frac{2p\left(\frac{(-4p+s(1+s))^2}{\beta^3}+\frac{4p-s^2}{\beta}\right)}{(1+s-\beta)^2}\\
=&\frac{4p}{1+s-\beta}\left( \frac{4p-s-s^2-\beta-4p+1+2s+s^2}{\beta(1+s-\beta)} \right)^2
-\frac{2p}{(1+s-\beta)^2}\frac{4p}{\beta^3}\\
=&\frac{4p}{(1+s-\beta)\beta^2}-\frac{8p^2}{(1+s-\beta)^2\beta^3}\end{aligned}$$ and using this we find that $$\begin{aligned}
\frac{\alpha_1'(1)+\alpha_1''(1)}{\alpha_1(1)}=&\frac{\frac{2p(\beta-1)}{\beta(1+s-\beta)}+\frac{4p}{(1+s-\beta)\beta^2}-\frac{8p^2}{(1+s-\beta)^2\beta^3}}
{\frac{2p}{1+s-\beta}}
=\frac{\beta-1}{\beta}+\frac{\beta-1-s}{\beta^3}.\end{aligned}$$ Finally $$\begin{aligned}
C'=\frac{\alpha_1'(1)+\alpha_1''(1)}{\alpha_1(1)}-\left(\frac{\alpha_1'(1)}{\alpha_1(1)}\right)^2=&\frac{\beta-1}{\beta}+\frac{\beta-1-s}{\beta^3}-\left(\frac{\beta-1}{\beta}\right)^2\\
=&\frac{\beta^2-1-s}{\beta^3}\label{eqn:g0}\\
=&\frac{s(1+s)-4p}{\beta^3}.\end{aligned}$$ By considering Equation (\[eqn:g0\]) together with (\[lem:betay\]) we see that $C'>0$.
Therefore, by satisfying the conditions of Theorem \[thm:DDKMV\], we have completed our proof.
Directions for future research
==============================
In this paper we considered the construction of $(n,m)$-bin sequences. For $d\in\mathbb{Z}_+$, one natural extension is to consider ${\bf{N}}=(n_1,n_2,\ldots,n_d)\in\mathbb{Z}_+^d$ and define ${\bf{N}}$-bin sequences in an analogous way to that of $(n,m)$-bin sequences. One could then study the ${\bf{N}}$-bin decompositions of positive integers. Namely, do these decompositions exist and are they unique? What is the behavior of the average number of summands used in the ${\bf{N}}$-bin legal decompositions, i.e. is it Gaussian?
Another further generalization would be to consider introducing a new parameter $s\in\mathbb{N}$ which accounts for the number of bins which must be skipped between summands used in a legal ${\bf{N}}$-bin decomposition. We call such decompositions the $(s,{\bf{N}})$-bin with skip decompositions. Note that when $s=1$ and ${\bf{N}}=(n,m)$, the $(s,{\bf{N}})$-bin with skip decompositions are exactly the $(n,m)$-bin decompositions and when $s\in\mathbb{Z}_+$ and ${\bf{N}}=b\in\mathbb{Z}_+$, the $(s,{\bf{N}})$-bin with skip decompositions are exactly the $(s,b)$-Generacci decompositions. Therefore the study of the $(s,{\bf{N}})$-bin with skip decompositions provides natural ways to generalize prior results in this area.
Negative Coefficient in Linear Recurrence {#appendix}
=========================================
The $(2,1)$-bin sequence is not a Positive Linear Recurrence Sequence (PLRS).
By Equation (\[eq:single recurrence\]) the recurrence relation for the $(2,1)$-bin sequence is $a_x=4a_{x-3}-2a_{x-6}$. This has characteristic equation $y^6-4y^3+2$. By Eisenstein’s criterion the polynomial $y^6-4y^3+2$ is irreducible in $\mathbb{Q}[y]$, since there exists a prime $p=2$ that divides all non-leading coefficients of the polynomial, does not divide the leading coefficient, and whose square does not divide the constant term. Thus the polynomial $y^6-4y^3+2$ cannot be factored into a product of non-constant polynomials with rational coefficients. Moreover, since this equation is irreducible in $\mathbb{Q}[y]$, our recurrence relation is minimal. By applying Lemma B.1 in [@DDKMMV], it is enough to show that no multiple of the characteristic equation can have the form $$y^{k+6} - \sum_{i=0}^{k+5} c_iy^i$$ with all $c_i>0$.
Consider the multiple of the characteristic equation (with $p_k\neq 0$): $$\begin{aligned}
\sum_{i=0}^{k+6}c_iy^i &= \ \left(\sum_{j=0}^k p_j y^j\right)\left(y^6-4y^3+2\right)\\
&=\ \sum_{i=0}^{k+6}\left( p_{i-6}-4p_{i-3}+2p_i \right)y^i\end{aligned}$$ Thus $c_i =p_{i-6}-4p_{i-3}+2p_i $. Note that $p_i = 0$ when $i<0$ and when $i>k$.
We will proceed by contradiction. Hence we assume $c_{k+6}>0$, and $c_i\leq 0$ whenever $i<k+6$. Let $t$ be the smallest non-negative integer such that $p_t\neq 0$. Note that $0\leq t\leq k$.
We claim that for all integers $j\geq 0$ with $t+3j<k+6$, $p_{t+3j}<p_{t+3j-3}$ and $p_{t+3j}<0$. In other words the coefficients become increasingly negative. The proof of this claim is by induction.
[**Base case $j=0$:**]{} By definition of $t$, $c_t =p_{t-6}-4p_{t-3}+2p_t=2p_t$. Hence $2p_t=c_t<0$, because $p_t\neq0$ and $t<k+6$. Thus $p_t<0=p_{t-3}$.
[**Base case $j=1$:**]{} We have $$\begin{aligned}
c_{t+3} = p_{t-3} -4p_t +2p_{t+3} & \leq \ 0\\
2p_{t+3} &\leq \ 4p_t\\
p_{t+3} &\leq \ 2p_t <p_t\end{aligned}$$ where the last inequality is true because $p_t<0$.
[**Inductive Step:**]{} We have $$\begin{aligned}
c_{t+3j} = p_{t+3j-6} -4p_{t+3j-3} +2p_{t+3j} & \leq \ 0\label{eq:contr_assume}\\
2p_{t+3j} & \leq \ 4p_{t+3j-3} - p_{t+3j-6}\\
2p_{t+3j} & \leq \ 4p_{t+3j-3} - p_{t+3j-3}\label{eq:ind_assum}\\
p_{t+3j} & \leq \ 1.5p_{t+3j-3} \\
p_{t+3j} & < \ p_{t+3j-3} \label{eq:zero}\end{aligned}$$
Step (\[eq:contr\_assume\]) is true because $t+3j<k+6$. Step (\[eq:ind\_assum\]) is true by the inductive assumption. Finally, step (\[eq:zero\]) is true because $p_{t+3j-3}<0$.
To establish our contradiction, choose $j^*$ such that $k<t+3j^*<k+6$. Thus we have $$\begin{aligned}
c_{t+3j^*} = p_{t+3j^*-6} -4p_{t+3j^*-3} +2p_{t+3j^*} & \leq \ 0\label{eq:contr_assume2}\\
p_{t+3j^*-6} & \leq \ 4p_{t+3j^*-3} \label{eq:t}\\
p_{t+3j^*-6} & \leq \ p_{t+3j^*-3} \label{eq:zero2}\end{aligned}$$ Step (\[eq:contr\_assume2\]) is true because $t+3j^*<k+6$. Step (\[eq:t\]) is true because $p_i=0$ when $i>k$. Step (\[eq:zero2\]) is true because $p_{t+3j^*-3}<0$. But this last line contradicts the claim we just proved above.
---
abstract: 'Smart use of mixers is a relevant issue in radio engineering and in instrumentation design, and of paramount importance in phase noise metrology. However simple the mixer seems, every time I try to explain to a colleague what it does, something goes wrong. One difficulty is that actual mixers operate in a wide range of power (150 dB or more) and frequency (up to 3 decades). Another difficulty is that the mixer works as a multiplier in the time-domain, which is necessary to convert frequencies. A further difficulty is the interaction with external circuits, the input sources and the load. Yet by far the biggest difficulty is that designing with mixers requires a deep comprehension of the whole circuit at the *system* level and at the *component* level. As the electronic-component approach is well explained in a number of references, this tutorial emphasizes the system approach, aiming to provide *wisdom* and *insight* on mixers.'
author:
- |
Enrico Rubiola\
web page `http://rubiola.org`\
FEMTO-ST Institute\
CNRS and Université de Franche Comté, Besançon, France\
bibliography:
- '\\bibfile{ref-short}.bib'
- '\\bibfile{references}.bib'
- '\\bibfile{rubiola}.bib'
title: Tutorial on the double balanced mixer
---
| Symbol | Meaning |
|---|---|
| $A(t)$ | slow-varying (baseband) amplitude |
| $h_{lp}$, $h_{bp}$ | impulse response of lowpass and bandpass filters |
| $h$, $k$, $n$, $p$, $q$ | integer numbers |
| $i(t)$, $I$ | current |
| I (goes with Q) | in-phase in/out (of a two-phase mixer/modulator) |
| IF | intermediate frequency |
| $j$ | imaginary unit, $j^2=-1$ |
| $\ell$ | mixer voltage loss, $1/\ell^2=P_o/P_i$ |
| LO | local oscillator |
| $P$ | power |
| $P_i$, $P_o$ | input and output power |
| $P_p$, $P_S$ | LO (pump) power and internal LO saturation power |
| Q (goes with I) | quadrature in/out (of a two-phase mixer/modulator) |
| $R$ | resistance |
| $R_0$ | characteristic resistance (by default, $R_0=50~\Omega$) |
| $R_G$ | source resistance (Thévenin or Norton model) |
| $U$ | dimensional constant, $U=1$ V |
| $v(t)$, $V$ | voltage |
| $v'$, $v''$ | real and imaginary, or in-phase and quadrature part |
| $v_i(t)$, $v_o(t)$ | input (RF) voltage, and output (IF) voltage |
| $v_p(t)$ | LO (pump) signal |
| $v_l(t)$, $V_L$ | internal LO signal |
| $V_O$ | saturated output voltage |
| $V_S$ | saturated level of the internal LO signal $v_l(t)$ |
| $x(t)$ | real (in-phase) part of a RF signal |
| $y(t)$ | imaginary (quadrature) part of a RF signal |
| $\varphi$, $\varphi(t)$ | static (or quasistatic) phase |
| $\phi(t)$ | random phase |
| $\omega$, $f$ | angular frequency, frequency |
| $\omega_i$, $\omega_l$ | input (RF) and pump (LO) angular frequency |
| $\omega_b$, $\omega_s$ | beat and sideband angular frequency |

| Subscript | Meaning |
|---|---|
| $b$ | beat, as in $\lvert\omega_s-\omega_i\rvert=\omega_b$ |
| $i$, $I$ | input |
| $l$, $L$ | local oscillator (internal signal) |
| $o$, $O$ | output |
| $p$, $P$ | pump, local oscillator (at the input port) |
| $s$ | sideband, as in $\lvert\omega_s-\omega_i\rvert=\omega_b$ |
| $S$ | saturated |

Basics
======
It is first to be understood that the mixer is *mainly intended*, and *mainly documented*, as the frequency converter of the superheterodyne receiver (Fig. \[fig:mix-superhet\]). The port names, LO (local oscillator, or *pump*), RF (radio-frequency), and IF (intermediate frequency) are clearly inspired by this application.
The basic scheme of a mixer is shown in Fig. \[fig:mix-dbm\]. At microwave frequencies a star configuration is often used instead of the diode ring.
Under the basic assumptions that $v_p(t)$ is large as compared to the diode threshold, and that $v_i(t)$ is small, the ring acts as a switch. During the positive half-period of $v_p(t)$ two diodes are reverse biased and the other two diodes are forward biased to saturation. During the negative half-period the roles are interchanged. For the small RF signal, the diodes are open circuits when reverse biased, and small resistances when forward biased. As a result, the IF signal $v_o(t)$ switches between $+v_i(t)$ and $-v_i(t)$ depending on the sign of $v_p(t)$. This is equivalent to multiplying $v_i(t)$ by a square wave of amplitude $\pm1$ that takes its sign from $v_p(t)$. In most practical cases, it is sufficient to describe the frequency conversion mechanism as the product between $v_i(t)$ and the first term of the Fourier expansion of the square wave. More accurate models account for the higher-order Fourier terms, and for the dynamic resistance and capacitance of the diodes.
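As a minimal numerical sketch of this switching model (assuming `numpy`; the sample rate, frequencies, and levels below are arbitrary illustration choices, not values from the text), the snippet multiplies a small RF tone by the sign of the LO and lists the resulting output lines at $|kf_l\pm f_i|$ for odd $k$.

```python
import numpy as np

fs = 1.0e6                                  # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)               # 0.1 s record
f_l, f_i = 10e3, 12e3                       # LO and RF frequencies, Hz
v_p = np.cos(2 * np.pi * f_l * t)           # LO (pump)
v_i = 0.1 * np.cos(2 * np.pi * f_i * t)     # small RF signal
v_o = v_i * np.sign(v_p)                    # ideal diode-ring switch

spectrum = 2 * np.abs(np.fft.rfft(v_o)) / len(t)     # tone amplitudes
freqs = np.fft.rfftfreq(len(t), 1 / fs)
lines = freqs[spectrum > 0.005] / 1e3
print(lines)   # kHz: |k f_l +/- f_i| for odd k, i.e. 2, 18, 22, 38, 42, 62, ...
```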
At the RF and LO sides, a balun is necessary in order to convert the unbalanced inputs into the balanced signals required for the ring to operate as a switch. Conversely, no adapter is needed at the IF output, which is already unbalanced. In low-frequency mixers (from a few kHz to 2–3 GHz) the baluns are implemented with powdered-iron toroidal transformers. At higher frequencies, up to some tens of GHz, transformers are not available, so microstrip networks are the preferred baluns. The typical LO power is 5–10 mW (7–10 dBm), whereas in some cases a power of up to 1 W (30 dBm) is used for highest linearity. The RF power should be at least 10 dB lower than the LO power. The diodes are of the Schottky type, because of the low forward threshold and the fast switching capability. The characteristic impedance to which all ports should be terminated is $R_0=50~\Omega$, with rare exceptions.
The mixer can be used in a variety of modes, each with its “personality” and peculiarities, listed in Table \[tab:mix:modes\], and detailed in the next Sections. In short summary, the mixer is (almost) always used with the LO input saturated at the nominal power. Then, the main parameters governing the behavior are:
Input power.
: The input (RF) power is usually well below the saturation level, as in Figures \[fig:mix-superhet\]–\[fig:mix-dbm\]. Yet, the input can be intentionally saturated.
Frequency degeneracy.
: When the input (RF) and LO frequencies overlap, the conversion products also overlap.
Interchanging the RF and IF ports.
: The difference is that the RF port is coupled in ac, while the IF port is often coupled in dc.
Additionally, the mixer is sometimes used in a **strange mode**, with both LO and RF inputs not saturated.
|  | Mode | Frequency | $P$ or $I$ | Note |
|---|---|---|---|---|
| Normal modes | LC | $\nu_i\ne \nu_l$ | $P_i\ll P_S$ | |
| | SD | $\nu_i=\nu_l$ | $P_i\ll P_S$ | |
| | SC | $\nu_i\ne\nu_l$ | $P_i\ge P_S$ | |
| | DC | | | |
| | PD | $\nu_i=\nu_l$ | $P_i\ge P_S$ | |
| Reverse modes | LM | $\nu_i\approx0$ | $I_i\ll I_S$ | |
| | RLC | $\nu_i\gg0$ | $P_i\ll P_S$ | |
| | DM | $\nu_i\approx0$ | $P_i\ge P_S$ | |
| | RSC | $\nu_i\gg0$ | $P_i\ge P_S$ | |
| | RDC | | | |
| Strange | AD | $\nu_i=\nu_l$ | | |
Golden rules
------------
1. First and foremost, check upon saturation at the LO port and identify the operating mode (Table \[tab:mix:modes\]).
2. Generally, all ports should be reasonably impedance matched, otherwise reflected waves result in unpredictable behavior.
3. When reflected waves can be tolerated, for example at low frequencies or because of some external circuit, impedance plays another role. In fact, the appropriate current flow is necessary for the diodes to switch.
4. In all cases, read carefully Sections \[ssec:mix:lc-mode\] to \[ssec:mix:linearity\].
Avoid damage {#sec:mix:safe-op}
------------
However trivial it may seem, avoiding damage deserves a few words, because the device can be pushed into a variety of non-standard operating modes, which increases the risk.
1. Damage results from excessive *power*. Some confusion between maximum power for linear operation and the absolute maximum power to prevent damage is common in data sheets.
2. The nominal LO power (or range) refers to best performance in the linear conversion mode. This value can be exceeded, while the absolute maximum power can not.
3. The maximum RF power is specified as the maximum power for linear operation. When linearity is not needed this value can be exceeded, while the absolute maximum power can not.
4. Voltage driving may result in the destruction of the mixer for two reasons. The diode $i=i(v)$ characteristic is exponential in $v$, so the current tends to exceed the maximum when the diode is driven by a voltage source. Moreover, the thin wires of the miniature transformers tend to blow like a fuse if the current is excessive.
5. In the absence of more detailed information, the absolute maximum power specified for the LO port can be used as the *total dissipated power*, regardless of where power enters.
6. The absolute maximum LO power can also be used to guess the *maximum current through one diode*. This may be useful in dc or degenerated modes, where power is not equally split between the diodes.
Better than general rules, an unfortunate case that occurred to me suggests being careful about subtle details. A \$3000 mixer used as a phase detector died unexpectedly, without being overloaded with microwave power. Further analysis showed that one rail of a dc supply had failed, and because of this the bipolar operational amplifier (LT-1028) connected to the IF port sank a current from the input (20 mA?).
Signal representations {#sec:mix:multiplication}
======================
The simple sinusoidal signal takes the form $$\begin{aligned}
\label{eqn:mix:simple-sinusoid}
v(t)=A_0\cos(\omega_0t+\varphi)~.\end{aligned}$$ This signal has rms value $A_0/\sqrt2$ and phase $\varphi$. An alternate form often encountered is $$\begin{aligned}
\label{eqn:mix:cos-sin-sinusoid1}
v(t) &=V_\text{rms}\sqrt2\cos(\omega_0t+\varphi) \\
\label{eqn:mix:cos-sin-sinusoid2}
&= V'\sqrt2\cos(\omega_0t) - V''\sqrt2\sin(\omega_0t)~,\end{aligned}$$ with $$\begin{aligned}
V' &= V_\text{rms}\cos\varphi\\
V'' &= V_\text{rms}\sin\varphi\\
V_\text{rms} &=\sqrt{(V')^2+(V'')^2}\\
\varphi&=\arctan(V''/V')~.\end{aligned}$$ The cosine–sine form above relates to the *phasor* representation[^1]$$\begin{aligned}
\label{eqn:mix:phasor-sinusoid}
V=V'+jV''=|V|e^{j\varphi}~,\end{aligned}$$ which is obtained by freezing the $\omega_0$ oscillation, and by turning the amplitude into a complex quantity of modulus $$\begin{aligned}
|V|=\sqrt{(V')^2+(V'')^2} = V_\text{rms}\end{aligned}$$ equal to the rms value of the time-domain sinusoid, and of argument $$\begin{aligned}
\varphi=\arctan\frac{V''}{V'}\end{aligned}$$ equal to the phase $\varphi$ of the time-domain sinusoid. The “$\sin\omega_0t$” term in the cosine–sine form carries a “$-$” sign for consistency with the phasor representation.
Another form frequently used is the *analytic* (complex) signal $$\begin{aligned}
\label{eqn:mix:analytic-signal}
v(t) = Ve^{j\omega_0t}~,\end{aligned}$$ where the complex voltage $V=V'+jV''$ is consistent with the phasor representation. The analytic signal has zero energy at negative frequencies, and double energy at positive frequencies.
The product of two signals can only be described in the time domain, through the identities below. In fact, the phasor representation is useless, and the analytic signal hides the down-conversion mechanism. This occurs because $e^{j\omega_at}e^{j\omega_bt}=e^{j(\omega_a+\omega_b)t}$, while the product of two sinusoids is governed by $$\begin{aligned}
\label{eqn:mix:product-1}
\cos(\omega_at) \cos(\omega_bt)
&= \frac12\cos\bigl(\omega_a-\omega_b\bigr)t+\frac12\cos\bigl(\omega_a+\omega_b\bigr)t\\
\label{eqn:mix:product-2}
\sin(\omega_at) \cos(\omega_bt)
&= \frac12\sin\bigl(\omega_a-\omega_b\bigr)t+\frac12\sin\bigl(\omega_a+\omega_b\bigr)t\\
\label{eqn:mix:product-3}
\sin(\omega_at) \sin(\omega_bt)
&= \frac12\cos\bigl(\omega_a-\omega_b\bigr)t-\frac12\cos\bigl(\omega_a+\omega_b\bigr)t~.\end{aligned}$$ Thus, the product of two sinusoids yields the sum and the difference of the two input frequencies (Fig. \[fig:mix-up-down-conversion\]).
A pure sinusoidal signal is represented as a pair of Dirac delta functions $\delta(\omega-\omega_0)$ and $\delta(\omega+\omega_0)$ in the spectrum, or as a single $\delta(\omega-\omega_0)$ in the case of the analytic signal. All the forms introduced above are also suitable to represent (slow-varying) modulated signals. A modulated signal can be represented[^2] as $$\begin{aligned}
\label{eqn:mix:modulated-sinusoid}
v(t)=A'(t)\cos(\omega_0t)-A''(t)\sin(\omega_0t)~.\end{aligned}$$ $A'(t)$ and $A''(t)$ are the low-pass signals that contain the information. They may include a dc term, which accounts for the carrier, as in classical AM and PM. Strictly, it is not necessary that $A'(t)$ and $A''(t)$ are narrow-band. The time-dependence of $A'(t)$ and $A''(t)$ spreads the power around $\omega_0$. The spectrum of the modulated signal is a copy of the two-sided spectrum of $A'(t)$ and $A''(t)$ translated to $\pm\omega_0$. Thus, the bandwidth of the modulated signal is twice the bandwidth of $A'(t)$ and $A''(t)$. Not knowing the real shape, the spectrum can be conventionally represented as a rectangle centered at the carrier frequency, which occupies the bandwidth of $A'$ and $A''$ on each side of $\pm\omega_0$ (Fig. \[fig:mix-lc-conv\]).
Of course, the product identities above also apply to modulated signals, with their time-dependent coefficients $A'(t)$ and $A''(t)$. Using mixers, we often encounter the product of a pure sinusoid multiplied by a modulated signal. The spectrum of such a product consists of two replicas of the modulated input, translated to the frequency sum and to the frequency difference (IF signal in Fig. \[fig:mix-lc-conv\]).
Linear modes {#sec:mix:lin-modes}
============
For the mixer to operate in any of the linear modes, it is necessary that
- the LO port is saturated by a suitable sinusoidal signal,
- a small (narrowband) signal is present at the RF input.
The reader should refer to Sec. \[ssec:mix:linearity\] for more details about linearity.
Linear frequency converter (LC) mode {#ssec:mix:lc-mode}
------------------------------------
The additional condition for the mixer to operate as a linear frequency converter is that the LO and the RF signals are separated in the frequency domain (Fig. \[fig:mix-lc-conv\]).
It is often convenient to describe the mixer as a system (Fig. \[fig:mix-lc-model\]), in which the behavior is modeled with functional blocks.
The clipper at the LO input limits the signal to the saturation level $V_S$, while the clipper at the RF port is idle because this port is not saturated. The overall effect is that the internal LO voltage $v_l(t)$ is approximately a trapezoidal waveform that switches between the saturated levels ${\pm}V_S$. The value of $V_S$ is a characteristic parameter of the specific mixer. The effect of higher LO power is to shrink the fraction of period taken by the slanted edges, rather than increasing $V_S$. The asymptotic expression of $v_l(t)$ for strong saturation is $$\begin{aligned}
v_l(t)&=\frac{4}{\pi}V_S\sum_{\mathrm{odd}~k\ge1} \Big(\!-1\Big)^{\textstyle\frac{k-1}{2}}~~ \frac{1}{k}\;\cos(k\omega_lt)
\label{eqn:mix:multih-lo}\\
&=\frac{4}{\pi}V_S\left[\cos\omega_lt-\frac{1}{3}\cos3\omega_lt
+\frac{1}{5}\cos5\omega_lt-\ldots+\ldots\;\right]
\nonumber\end{aligned}$$ The filters account for the bandwidth limitations of the actual mixer. The IF output is often coupled in dc. As an example, Table \[tab:mix-example\] gives the main characteristics of two typical mixers.
| Port | HF-UHF mixer | Microwave mixer |
|---|---|---|
| LO | 1–500 MHz | 8.4–18 GHz |
|  | 7 dBm $\pm1$ dB | 8–11 dBm |
|  | SWR $<1.8$ | SWR $<2$ |
| RF | 1–500 MHz | 8.4–18 GHz |
|  | 0 dBm max | 0 dBm max |
|  | SWR $<1.5$ | SWR $<2$ |
| IF | dc – 500 MHz | dc – 2 GHz |
|  | 0 dBm max | 0 dBm max |
|  | SWR $<1.5$ | SWR $<2$ |
|  | SSB loss 5.5 dB max | SSB loss 7.5 dB max |

A simplified description of the mixer is obtained by approximating the internal LO waveform $v_l(t)$ with the first term of its Fourier expansion $$v_l(t) = V_L\cos(\omega_lt)~~.
\label{eqn:mix:lo}$$ The input signal takes the form $$v_i(t) = A_i(t)\cos\left[\omega_it+\varphi_i(t)\right]~~,$$ where $A_i(t)$ and $\varphi_i(t)$ are the slow-varying signals in which information is coded. They may contain a dc term. The output signal is $$\begin{aligned}
v_o(t) & = \frac{1}{U}~v_i(t)\,v_l(t) \\
& = \frac{1}{U}~A_i(t)\cos\bigl[\omega_it+\varphi_i(t)\bigr] ~~V_L\cos(\omega_lt) \\
& = \frac{1}{2U}\,V_LA_i(t)\:\Bigl\{
\cos\bigl[(\omega_l-\omega_i)t-\varphi_i(t)\bigr]+
\cos\bigl[(\omega_l+\omega_i)t+\varphi_i(t)\bigr]\Bigr\}~~.
\label{eqn:mix:lc-vo}\end{aligned}$$ The trivial term $U=1$ V is introduced for the result to have the physical dimension of voltage.
An optional bandpass filter, not shown in Fig. \[fig:mix-lc-model\], may select the upper sideband (USB) or the lower sideband (LSB). If it is present, the output signal is $$\begin{aligned}
v_o(t) &= \frac{1}{2U}\,V_LA_i(t)\,
\cos\bigl[(\omega_l-\omega_i)t-\varphi_i(t)\bigr]
\qquad\mbox{LSB} \\
v_o(t) &= \frac{1}{2U}\,V_LA_i(t)\,
\cos\bigl[(\omega_l+\omega_i)t+\varphi_i(t)\bigr]
\qquad\mbox{USB}~~.\end{aligned}$$
#### Image frequency.
Let us now consider the inverse problem, that is, the identification of the input signal by observing the output of a mixer followed by a band-pass filter (Fig. \[fig:mix-lc-image\] top). In a typical case, the output is a band-pass signal $$v_o(t) = A_o(t)\cos\big[\omega_bt+\varphi_o(t)\big]~~,
\label{eqn:mix:image}$$ centered at $\omega_b$, close to the filter center frequency. It is easily proved that there exist two input signals $$\begin{aligned}
v_L(t) &= A_L(t)\cos\bigl[(\omega_l-\omega_b)t+\varphi_L(t)\bigr]&&\text{LSB}
\label{eqn:mix:image-1}\\
v_U(t) &= A_U(t)\cos\bigl[(\omega_l+\omega_b)t+\varphi_U(t)\bigr]&&\text{USB}~~,
\label{eqn:mix:image-2}\end{aligned}$$ that produce a signal passing through the output filter, and thus contribute to $v_o(t)$. It is therefore impossible to ascribe a given $v_o(t)$ to $v_L(t)$ or to its *image* $v_U(t)$ if no a-priori information is given. Fig. \[fig:mix-lc-image\] (middle) gives the explanation in terms of spectra. The USB and the LSB are images of one another with respect to $\omega_l$. In most practical cases, one wants to detect one signal, so the presence of some energy around the image frequency is a nuisance. In the case of the superheterodyne receiver, there results an ambiguity in the frequency to which the receiver is tuned. Even worse, a signal at the image frequency interferes with the desired signal. The obvious cure is a preselector filter preceding the mixer input.
More generally, the input signal can be written as $$v_i(t) = \sum_n A_n'(t)\cos(n\omega_0t)-A_n''(t)\sin(n\omega_0t)~~,
\label{eqn:mix:image-3}$$ which is a series of contiguous bandpass processes of bandwidth $\omega_0$, centered around $n\omega_0$, and spaced by $\omega_0$. The output is $$v_o(t) = \frac{1}{U}\big[v_l(t)\,v_i(t)\big]*h_{bp}(t)~~,$$ where “$*$” is the convolution operator, and $h_{bp}(t)$ the impulse response of the bandpass IF filter. The convolution ${}*h_{bp}(t)$ defines the pass-band filtering. Accordingly, the terms of $v_i(t)$ for which $|n\omega_0-\omega_l|$ is in the pass-band of the filter contribute to the output signal $v_o(t)$. Fig. \[fig:mix-lc-image\] (bottom) shows the complete conversion process.
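A short sketch of the image ambiguity (assuming `numpy`; the frequencies are arbitrary illustration values): a tone at $\omega_l-\omega_b$ and its image at $\omega_l+\omega_b$ produce beats of identical magnitude at $\omega_b$ after the IF filter.

```python
import numpy as np

fs = 1.0e6
t = np.arange(0, 0.01, 1 / fs)
f_l, f_b = 100e3, 10e3
v_l = np.cos(2 * np.pi * f_l * t)

def if_beat(f_in):
    """Down-convert a unit tone at f_in and return its amplitude at the beat f_b."""
    v_o = np.cos(2 * np.pi * f_in * t) * v_l
    spec = np.fft.rfft(v_o) / len(t)
    k = np.argmin(np.abs(np.fft.rfftfreq(len(t), 1 / fs) - f_b))
    return 2 * abs(spec[k])

print(round(if_beat(f_l - f_b), 3), round(if_beat(f_l + f_b), 3))   # 0.5 and 0.5
```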
#### Multi-harmonic conversion.
In usual conditions, the LO port is well saturated. Hence it makes sense to account for several terms of the Fourier expansion of the LO signal. Each term of this expansion is a sinusoid of frequency $k\omega_l$ that converts the portions of spectrum centered at $|k\omega_l+\omega_b|$ and $|k\omega_l-\omega_b|$ into $\omega_b$ (Fig. \[fig:mix-lc-harm\]), thus $$\begin{aligned}
v_o(t) & = \frac{1}{U}~v_i(t)\,v_l(t) \\[1ex]
& = \frac{1}{U}~A_i(t)\cos\bigl[\omega_it+\varphi_i(t)\bigr] ~~\frac{4}{\pi}V_S\!\!\!\sum_{\mathrm{odd}~k\ge1} \!\!\Big(-1\Big)^{\textstyle\frac{k-1}{2}} \;\frac{1}{k}\,\cos(k\omega_lt) \\[1ex]
& = \frac{1}{2U}\,\frac{4}{\pi}V_S\,A_i(t) \!\!\!\sum_{\mathrm{odd}~k\ge1} \!\!\!\Big(-1\Big)^{\textstyle\frac{k-1}{2}} \;\frac{1}{k}\Bigl\{
\cos\bigl[(k\omega_l-\omega_i)t-\varphi_i(t)\bigr]+{}
\Bigr.\nonumber\\
&\hspace{32ex}\Bigl.
{}+\cos\bigl[(k\omega_l+\omega_i)t+\varphi_i(t)\bigr]\Bigr\}~~.
\label{eqn:mix:lc-vo-multih}\end{aligned}$$ With $k=1$, one term can be regarded as the signal to be detected, and the other one as the image. All the terms with $k>1$, thus at $3\omega_l$, $5\omega_l$, etc., count as stray signals taken in because of distortion. Of course, the mixer can be intentionally used to convert some frequency slot through multiplication by one harmonic of the LO, at the cost of a lower conversion efficiency. A bandpass filter at the RF input is often necessary to stop unwanted signals. Sampling mixers are designed for this specific operation. Yet their internal structure differs from that of the common double-balanced mixer.
In real mixers the Fourier series expansion of $v_l(t)$ can be written as $$v_l(t)=\sum_{\mathrm{odd}~k\ge1} \Big(-1\Big)^{\textstyle\frac{k-1}{2}} V_{L,k}\cos(k\omega_lt+\varphi_k)~~,
\label{eqn:mix:multih-lo-1}$$ so that the multi-harmonic output becomes $$\begin{aligned}
v_o(t) & = \frac{1}{2U}\,A_i(t) \!\!\!\sum_{\mathrm{odd}~k\ge1} \!\!\!\Big(-1\Big)^{\textstyle\frac{k-1}{2}} V_{L,k}\,\Bigl\{
\cos\bigl[(k\omega_l-\omega_i)t-\varphi_i(t)\bigr]+{}
\Bigr.\nonumber\\
&\hspace{32ex}\Bigl.
{}+\cos\bigl[(k\omega_l+\omega_i)t+\varphi_i(t)\bigr]\Bigr\}~~.
\label{eqn:mix:lc-vo-multihreal}\end{aligned}$$ The first term of this expansion is equivalent to the single-harmonic case, thus $V_{L,1}=V_L$. The expression differs from the ideal square-wave case in the presence of the phase terms $\varphi_k$, and in that the coefficients $V_{L,k}$ decrease more rapidly than $1/k$. This is due to imperfect saturation and to bandwidth limitations. In weak saturation conditions the coefficients $V_{L,k}$ decrease even faster.
Looking at the expansion above, one should recall that frequency multiplication results in phase noise multiplication. If the LO signal contains a (random) phase $\phi(t)$, the phase $k\phi(t)$ is present in the $k$-th term.
For a more accurate analysis, the diode can no longer be modeled as a switch. The diode forward current $i_F$ is governed by the exponential law $$i_F=I_s\left(e^{\textstyle\frac{v_F}{\eta V_T}}-1\right)$$ where $v_F$ is the forward voltage, $I_s$ the inverse saturation current, $\eta\in[1\ldots2]$ a technical parameter of the junction, and $V_T=kT/q$ the thermal voltage at the junction temperature. At room temperature, $V_T=kT/q\simeq25.6$ mV. The term “$-1$” is negligible in our case. In the presence of a sinusoidal pump signal, the exponential diode current can be expanded using the identity $$e^{z\cos\varphi} = I_0(z)
+2\sum_{k=1}^{\infty}I_k(z)\cos(k\varphi)~~,$$ where $I_k(\cdot)$ is the modified Bessel function of order $k$. As a consequence of the mixer symmetry, the even harmonics are canceled and the odd harmonics reinforced. Ogawa [@ogawa80mtt] gives an expression of the IF output current $$i_o(t)=4I_s\frac{V\p{rf}}{\eta V_T}
\sum_{\text{odd}~k\ge1}
I_k\left(\frac{V\p{lo}}{\eta V_T}\right)
\Big[\cos(k\omega_l+\omega_i)t+\cos(k\omega_l-\omega_i)t\Big]~~.
\label{eqn:mix:ogawa}$$ This equation is valuable for design purposes. Yet, it is of limited usefulness in analysis because some parameters, like $I_s$ and $\eta$, are hardly available. In addition, it holds in quasistatic conditions and does not account for a number of known effects, like stray inductances and capacitances, the varactor effect in the diodes, the bulk resistance of the semiconductors, and other losses. Nonetheless, it provides insight into the nature of the coefficients $V_{L,k}$.
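For a rough feel of how these coefficients scale, the sketch below evaluates the modified Bessel functions of the expansion above (assuming `scipy` is available; the drive level $V\p{lo}/\eta V_T$ is an illustrative guess, not a measured value).

```python
import numpy as np
from scipy.special import iv   # modified Bessel function of the first kind

z = 0.5 / (1.5 * 25.6e-3)      # e.g. V_lo = 0.5 V, eta = 1.5, V_T = 25.6 mV (assumed values)
odd_k = np.arange(1, 10, 2)    # odd harmonics only; even ones cancel by symmetry
coeff = iv(odd_k, z)
print(np.round(coeff / coeff[0], 3))   # harmonic weights normalized to k = 1
```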
#### Rules for the load impedance at the IF port. {#par:mix-if-load-impedance}
The product of two sinusoids at frequencies $\omega_i$ and $\omega_l$ inherently contains the frequencies $|\omega_i\pm\omega_l|$. At the IF port, current flow must be allowed at both these frequencies, otherwise the diodes cannot switch. The problem arises when the IF selection filter shows a high impedance in the stop band. Conversely, a low impedance $Z\ll R_0$ is usually allowed.
Figure \[fig:mix-if-filter\] shows three typical cases in which a filter is used to select the $|\omega_i-\omega_l|$ signal at the IF output, and to reject the image at the frequency $|\omega_i+\omega_l|$. The scheme A is correct because the image-frequency current can flow through the diodes (low impedance). The scheme B will not work because the filter is nearly open circuit at the image frequency. The scheme C is a patched version of B, in which an additional $RC$ cell provides the current path for the image frequency. The efficient use of a mixer as a multi-harmonic converter may require a specific analysis of the filter.
In microwave mixers, the problem of providing a current path to the image frequency may not be visible, having been fixed inside the mixer. This may be necessary when the image frequency is out of the bandwidth, for the external load can not provide the appropriate impedance.
Rules are different in the case of the *phase detector* because the current path is necessary at the $2\omega_l$ frequency, not at dc.
#### Can the LO and RF ports be interchanged?
With an ideal mixer, yes; in practice, it is often better not to. Looking at Fig. \[fig:mix-dbm\], the center point of the LO transformer is grounded, which helps isolation. In the design of microwave mixers, where the transformers are replaced with microstrip baluns, optimization may favor isolation from the LO pump, and low loss in the RF circuit. This is implied in the general rule that the mixer is designed and documented for the superheterodyne receiver. Nonetheless, interchanging RF and LO can be useful in some cases, for example to benefit from the difference in the input bandwidth.
Linear Synchronous Detector (SD) Mode {#ssec:mix:sd-mode}
-------------------------------------
The general conditions for the linear modes are that the LO port is saturated by a suitable sinusoidal signal, and that a small (narrowband) signal is present at the RF input. The additional conditions for the mixer to operate in the SD mode are: (1) the LO frequency $\omega_l$ is tuned at the center of the spectrum of the (narrowband) RF signal, and (2) the IF output is low-passed.
The basic mixer operation is the same as in the frequency conversion mode, with the diode ring used as a switch that inverts or not the input polarity depending on the sign of the LO. The model of Fig. \[fig:mix-lc-model\] is also suitable for the SD mode. Yet, the frequency conversion mechanism is slightly different. Figure \[fig:mix-sd-conv\] shows the SD mode in the frequency domain, making use of two-sided spectra. Using one-sided spectra, the conversion products of negative frequency are folded onto positive frequencies.
Of course, the multi-harmonic frequency conversion mechanism, due to the harmonics at multiples of the LO frequency, still works (Figure \[fig:mix-ld-harm\]).
The simplest way to understand the synchronous conversion is to represent the input and the internal LO signal $v'_l(t)=V_L\cos(\omega_0t+\varphi_L)$ in Cartesian coordinates[^3] $$\begin{aligned}
\label{eqn:mix:x-y-signal}
v_i(t)&=x(t)\cos\omega_0t-y(t)\sin\omega_0t\\
v'_l(t)&=V_L\left[\cos\varphi_L\cos\omega_0t-\sin\varphi_L\sin\omega_0t\right]\end{aligned}$$ The signal at the output of the low-pass filter is[^4] (Fig. \[fig:mix-lc-scalar\]) $$\begin{aligned}
X(t)
& = \frac{1}{U}\,v_i(t)\,v'_l(t) * h_{lp}\\
& = \frac{1}{U}\,\bigl[x\cos\omega_0t-y\sin\omega_0t\bigr]
\:V_L\bigl[\cos\varphi_L\cos\omega_0t-\sin\varphi_L\sin\omega_0t\bigr]*h_{lp}\\
& = \frac{1}{2U}\,V_L
\Big[x\cos\varphi_L+y\sin\varphi_L + \text{($2\omega$ terms)}\Big]*h_{lp}~,\\
\intertext{thus,}
X(t)
&= \frac{1}{2U}\,V_L
\Big[x(t)\cos\varphi_L+y(t)\sin\varphi_L\Big]~~.
\label{eqn:mix:scalar-product-x}\end{aligned}$$ This result can be interpreted as the scalar product $$\begin{aligned}
X = \frac{1}{2U}\,V_L\, (x,y)\cdot(\cos\varphi_L,\sin\varphi_L)~,\end{aligned}$$ where the trivial factor $\frac{1}{2U}V_L$ accounts for losses.
Let us now replace the LO signal $v'_l(t)$ with $$\begin{aligned}
v''_l(t)=-V_L\sin(\omega_0t+\varphi_L) =
-V_L\left[\sin\varphi_L\cos\omega_0t-\cos\varphi_L\sin\omega_0t\right]~.\end{aligned}$$ In these conditions, the output signal is $$\begin{aligned}
Y(t)
& = \frac{1}{U}\,v_i(t)\,v''_l(t) * h_{lp}\\
& = \frac{1}{U}\,\bigl[x\cos\omega_0t-y\sin\omega_0t\bigr]
\:V_L\bigl[-\sin\varphi_L\cos\omega_0t-\cos\varphi_L\sin\omega_0t\bigr]*h_{lp}\\
& = \frac{1}{2U}\,V_L
\Big[-x\sin\varphi_L+y\cos\varphi_L + \text{($2\omega$ terms)}\Big]*h_{lp}~,\\
\intertext{thus,}
Y(t) &= \frac{1}{2U}\,V_L\Big[-x(t)\sin\varphi_L+y(t)\cos\varphi_L\Big]~~.
\label{eqn:mix:scalar-product-y}\end{aligned}$$
Finally, by joining the two results, we find $$\begin{aligned}
\label{eqn:mix:iq-frame-rotation}
\left[\begin{array}{c}X(t)\\Y(t)\end{array}\right] & =
\frac{1}{2U}\,V_L
\left[\begin{array}{cc}\cos\varphi_L&\sin\varphi_L\\-\sin\varphi_L&\cos\varphi_L\end{array}\right]
\left[\begin{array}{c}x(t)\\y(t)\end{array}\right]~.\end{aligned}$$ This is the common form of a frame rotation by the angle $\varphi_L$ in Cartesian coordinates (Fig. \[fig:mix-frame-rotation\]).
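A compact numerical sketch of this frame rotation (assuming `numpy`; all amplitudes, frequencies, and the LO phase are illustration values): two products with quadrature LO signals, followed by a crude low-pass, reproduce the rotation formula.

```python
import numpy as np

fs, f0 = 1.0e6, 10e3
t = np.arange(0, 0.01, 1 / fs)
x = 0.3 + 0.05 * np.cos(2 * np.pi * 200 * t)      # slow in-phase content
y = -0.1 + 0.05 * np.sin(2 * np.pi * 300 * t)     # slow quadrature content
v_i = x * np.cos(2 * np.pi * f0 * t) - y * np.sin(2 * np.pi * f0 * t)

phi_L, V_L, U = 0.4, 1.0, 1.0
v_l1 = V_L * np.cos(2 * np.pi * f0 * t + phi_L)
v_l2 = -V_L * np.sin(2 * np.pi * f0 * t + phi_L)
lp = lambda s: np.convolve(s, np.ones(100) / 100, mode="same")   # crude moving-average low-pass
X = lp(v_i * v_l1 / U)
Y = lp(v_i * v_l2 / U)

# Compare with the frame-rotation prediction at mid-record
m = len(t) // 2
X_pred = 0.5 * V_L / U * (x[m] * np.cos(phi_L) + y[m] * np.sin(phi_L))
Y_pred = 0.5 * V_L / U * (-x[m] * np.sin(phi_L) + y[m] * np.cos(phi_L))
print(abs(X[m] - X_pred) < 0.01, abs(Y[m] - Y_pred) < 0.01)      # True True
```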
The simultaneous detection of the input signal with two mixers pumped in quadrature is common in telecommunications, where QAM modulations are widely used[^5]. The theory of coherent communication is analyzed in [@viterbi:communication]. Devices like that of Fig. \[fig:mix-lc-scalar-iq\], known as I-Q detectors, are commercially available from numerous manufacturers. Section \[sec:mix:specials-iqs\] provides more details on these devices.
Linearity {#ssec:mix:linearity}
---------
A function $f(\cdot)$ is said to be linear [@rudin:mathematical-analysis] if it has the following two properties $$\begin{gathered}
f(ax)=af(x)\\
f(x+y)=f(x)+f(y)~.\end{gathered}$$ The same definition applies to operators. When a sinusoidal signal of appropriate power and frequency is sent to the LO port, *the mixer is linear*, that is, *the output signal $v_o(t)$ is a linear function of the input $v_i(t)$*. This can be easily proved for the case of simple conversion \[Eq. \] $$\begin{aligned}
v_o(t) & = \frac{1}{2U}\,V_LA_i(t)\:\Bigl\{
\cos\bigl[(\omega_l-\omega_i)t-\varphi_i(t)\bigr]+
\cos\bigl[(\omega_l+\omega_i)t+\varphi_i(t)\bigr]\Bigr\}
\nonumber\end{aligned}$$ The linearity of $v_o(t)$ vs. $v_i(t)$ can also be demonstrated in the case of the multi-harmonic conversion, either by taking a square wave as the internal LO signal, or by using the internal LO signal of real mixers. In fact, the Fourier series is a linear superposition of sinusoids, each of which is treated as above. In practice, the double balanced mixer can be used over a wide frequency range (spanning up to $10^4$), where it is linear over a wide power range, which may exceed $10^{16}$ (160 dB).
In large-signal conditions, the mixer output signal can be expanded as the polynomial $$v_o(v_i) = a_0 + a_1v_i + a_2v_i^2 + a_3v_i^3 + \ldots~~.
\label{eqn:mix-nonlinear-polynomial}$$ The symmetric topology cancels the even powers of $v_i$, so the above polynomial cannot be truncated at the second order. Yet, the coefficient $a_2$ is nonzero because of the residual asymmetry of the diodes and of the baluns. Another reason to keep the third-order term is adjacent-channel interference. In principle, transformer nonlinearity should also be analyzed. In practice, this problem is absent in microwave mixers, and a minor concern with ferrite cores. The coefficient $a_1$ is the inverse of the loss, $1/\ell$. The coefficients $a_2$ and $a_3$ are never given explicitly. Instead, the intercept power (IP2 and IP3) is given, that is, the power at which the nonlinear term ($a_2v_i^2$ or $a_3v_i^3$) equals the linear term.
Mixer loss {#sec:mix:loss}
==========
The conversion efficiency of the mixer is operationally defined via the two-tone measurement shown in Fig. \[fig:mix-loss\]. This is the case of a superheterodyne receiver in which the incoming signal is an unmodulated sinusoid $v_i(t)=V_i\cos\omega_it$, well below saturation. The LO sinusoid is set to the nominal saturation power. In this condition, and neglecting the harmonic terms higher than the first, the output signal consists of a pair of sinusoids of frequency $\omega_o=|\omega_l\pm\omega_i|$. One of these sinusoids, usually $|\omega_l-\omega_i|$ is selected. The SSB power loss $\ell^2$ of the mixer is defined[^6] as $$\frac{1}{\ell^2}=\frac{P_o}{P_i}
\qquad\text{SSB loss $\ell$}
\label{eqn:mix-ssb-loss-def}$$ where $P_i$ is the power of the RF input, and $P_o$ is the power of the IF output at the selected freqency. The specifications of virtually all mixes resort to this definition.
The loss is about constant in a wide range of power and frequency. The upper limit of the RF power range is the saturation power, specified as the compression power $P_{1\unit{dB}}$ at which the loss increases by 1 dB.
#### Intrinsic SSB loss.
The lowest loss refers to the ideal case of the zero-threshold diode, free from resistive dissipation. The LO power is entirely wasted in switching the diodes. Under these assumptions, the ring of Figure \[fig:mix-dbm\] works as a loss-free switch that inverts or not the polarity of the RF, $v_o(t)=\pm v_i(t)$, according to the sign of $v_l(t)$. Of course, the instantaneous power is conserved $$\begin{aligned}
\frac{1}{R_0}\:v_i^2(t) &= \frac{1}{R_0}\:v_o^2(t)~~. \end{aligned}$$ Nonetheless, the mixer splits the input power into the conversion products at the frequencies $|\omega_i\pm\omega_l|$ and higher harmonics, so only a fraction of the input power is converted to the desired frequency. There results a loss *inherent* in the frequency conversion process, obtained from the definition of $\ell$ above.
In the described conditions, the internal LO signal is a unit square wave ($V_S=1$ V), whose Fourier series expansion is $$v_l(t) = \frac{4}{\pi}\left[
\cos\omega_lt-\frac{1}{3}\cos3\omega_lt
+\frac{1}{5}\cos5\omega_lt -\ldots+\ldots\right]~~.$$ Only the first term of the above contributes to the down-converted signal at the frequency $\omega_b=|\omega_i-\omega_l|$. The peak amplitude of this term is $V_L=\frac{4}{\pi}$ V. Hence, $$\begin{aligned}
v_o(t)
&= \frac{1}{U}\,v_l(t)\,v_i(t) \\
&= \frac{4}{\pi}\cos(\omega_lt) \; V_i\cos(\omega_it)\\
&= \frac{4}{\pi} V_i \; \frac{1}{2}\Bigl\{
\cos[(\omega_i-\omega_l)t]+\cos[(\omega_i+\omega_l)t]\Bigr\}\\
&= \frac{2}{\pi}V_i \; \cos[\omega_bt] \qquad\mbox{discarding
the USB}\end{aligned}$$ The RF and IF powers are $$P_i=\frac{V_i^2}{2R_0} \qquad\mbox{and}\qquad
P_o=\frac{1}{2R_0}\,\frac{4V_i^2}{\pi^2}$$ from which the minimum loss $\ell=\sqrt{P_i/P_o}$ is $$\ell=\frac{\pi}{2}~~\simeq1.57~~\text{(3.92 dB)}
\qquad\text{minimum SSB loss}.$$
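The same figure can be checked numerically from the switching model (a sketch assuming `numpy`; frequencies and levels are arbitrary illustration values).

```python
import numpy as np

fs = 1.0e6
t = np.arange(0, 0.1, 1 / fs)
f_l, f_i = 10e3, 12e3
f_b = abs(f_i - f_l)                                    # 2 kHz beat
v_i = 0.1 * np.cos(2 * np.pi * f_i * t)
v_o = v_i * np.sign(np.cos(2 * np.pi * f_l * t))        # ideal loss-free switch

spec = np.fft.rfft(v_o) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
a_b = 2 * np.abs(spec[np.argmin(np.abs(freqs - f_b))])  # amplitude at the beat frequency
P_i = 0.1**2 / 2                                        # R0 drops out of the ratio
P_o = a_b**2 / 2
print(round(10 * np.log10(P_i / P_o), 2))               # ~3.92 dB, i.e. 20*log10(pi/2)
```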
#### SSB loss of actual mixers.
The loss of microwave mixers is usually between 6 dB for 1-octave devices and 9 dB for 3-octave units. The difference is due to the microstrip baluns that match the nonlinear impedance of the diodes to the 50 $\Omega$ input over the device bandwidth. In the case of a narrow-band mixer optimized for conversion efficiency, the SSB loss can be as low as 4.5 dB [@ogawa80mtt]. The loss of most HF/UHF mixers is about 5–6 dB in a band of up to three decades. This is due to the low loss and to the large bandwidth of the transmission-line transformers. Generally, the LO saturation power is between 5 and 10 mW (7–10 dBm). Some mixers, optimized for best linearity, make use of two or three diodes in series, or of two diode rings (see Fig. \[fig:mix-multidiode\]), and need larger LO power (up to 1 W). The advantage of these mixers is a high intercept power, at the cost of a larger loss (2–3 dB more). When frequencies at multiples of the LO frequency are exploited to convert the input signal, it may be necessary to measure the conversion loss. A scheme is proposed in Fig. \[fig:mix-loss-harm\].
#### Derivation of the internal LO voltage from the loss.
For analytical calculations, the amplitude $V_L$ of the internal LO signal is often needed. With real (lossy) mixers, it holds that $V_L<\frac{4}{\pi}$ V. $V_L$ can be derived by equating the output power $P_i/\ell^2$ to the power of the output product. The usefulness of this approach is that $\ell$ is always specified. Let $$v_i(t)=V_i\cos\left(\omega_it+\varphi_i\right)$$ be the RF input, and select the lower[^7] output frequency $\omega_b=|\omega_i-\omega_l|$. The internal LO signal is $$v_l(t)=V_L\cos(\omega_lt+\varphi_l)~~.$$ Since only the output power is measured, we can drop the phases $\varphi_i$ and $\varphi_l$. Hence, the output signal is $$\begin{aligned}
v_o(t)&=\frac{1}{U}V_iV_L
\cos(\omega_it)\,\cos(\omega_lt)*h_{bp}(t)\\
&=\frac{1}{2U}V_iV_L \cos(\omega_i-\omega_l)t\end{aligned}$$ The output power is $$P_o=\frac{1}{2R_0} \; \frac{1}{4U^2}V_i^2V_L^2~~
\label{eqn:mix:po-product}$$ when the input power is $$P_i=\frac{1}{2R_0}\,V_i^2~~.$$ Combining the two equations above with the definition of $\ell$, we obtain $$\frac{1}{\ell^2} \: \frac{1}{2R_0}V_i^2 =
\frac{1}{2R_0} \; \frac{1}{4U^2}V_i^2V_L^2~~,$$ hence $$V_L = \frac{2U}{\ell}\qquad\text{Internal LO peak amplitude}.
\label{eqn:mix:equiv-lo-v}$$ Interestingly, the loss of most mixers is close to 6 dB, hence $V_L\simeq1$ V, while the intrinsic loss $\ell=\pi/2$ yields $V_L=4/\pi\simeq1.27$ V.
#### What if the LO power differs from the nominal power?
When the LO input is saturated, the LO power has little or no effect on the output signal. This fact is often referred to as *power desensitization* (also LO desensitization, or pump desensitization). In a narrow power range, say ${\pm}2$ dB from the nominal power, the conversion loss changes slightly, and noise also varies. The internal Schottky diodes exhibit exponential $i=i(v)$ characteristics, hence a lower LO power fails to saturate the diodes, and the ring is unable to switch. The conversion efficiency $1/\ell$ is reduced, and drops abruptly some 10 dB below the nominal LO power. As a side effect of loss, white noise increases. Figure \[fig:mix-if-vs-lo-power\] shows an example of output power as a function of the RF power, for various LO power levels. Below the nominal LO power, flicker noise increases. While this phenomenon is still unclear, we guess that it is due to the increased fraction of the period in which the diodes are neither open circuit nor saturated, and that up-conversion of the near-dc flickering of the junction takes place during this transition time.
Insufficient LO power may also impair symmetry, and in turn the cancellation of even harmonics. The physical explanation is that the saturated current is limited by the diode bulk resistance, which is more reproducible than the exponential law of the forward current. Increasing the fraction of time in which the exponential law dominates emphasizes the asymmetry of the diodes.
Excessive LO power may increase noise, and damage the mixer. Special care is recommended with high-level mixers, in which the nominal LO power is 50 mW or more, and with miniaturized mixers, where the small size limits heat dissipation.
According to the model of Fig. \[fig:mix-lc-model\], the LO clipper limits the internal voltage to ${\pm}V_S$, which turns the input sinusoid into a trapezoidal waveform. Hence, the input power affects the duration of the wavefronts, and in turn the harmonic content. As a result, a circuit may be sensitive to the LO power if stray input signals are not filtered out properly.
Finally, changing the LO power affects the dc voltage at the IF output. This can be a serious problem when the mixer is used as a synchronous converter or as a phase detector.
Saturated Modes {#sec:mix:saturated-modes}
===============
When both RF and LO inputs are saturated, the mixer behavior changes radically. The mixer can no longer be described as a simple switch that inverts or not the RF signal, depending on the LO sign. Instead, at each instant the largest signal controls the switch, and sets the polarity of the other one. Of course, the roles are interchanged continuously. Strong odd-order harmonics of the two input frequencies are present, while even-order harmonics are attenuated or cancelled by symmetry. Saturation means that amplitude has little effect on the output, hence saturated modes are useful in phase detectors or in frequency synthesis, where amplitudes are constant. A further consequence of saturation is phase noise multiplication, which is inherent in harmonic generation. In the case of saturated modes, phase noise multiplication takes place in both LO and RF.
In saturated modes the specified maximum power at the RF port is always exceeded. When this maximum power is exceeded, the mixer leaves the “normal” linear operation, still remaining in a safe operating range until the “absolute maximum ratings” are approached. See Section \[sec:mix:safe-op\].
The model of Fig. \[fig:mix-lc-model\] describes some characteristics, as it emphasizes the internally clipped waveforms, and the cancellation of even harmonics. Yet, the model fails in predicting amplitude because the ring is no longer a multiplier. The output amplitude is lower than expected.
Saturated Frequency Converter (SC) Mode {#sec:mix:sc-mode}
---------------------------------------
The conditions for the mixer to operate in SC mode are
- the LO and the RF ports are saturated by sinusoidal signals,
- the input frequencies are not equal, and the ratio $\omega_l/\omega_i$ is not too close to the ratio of two small integers (say, 5–7),
- the output is band-passed.
Let the input signals be $$\begin{aligned}
v_i(t) & = V'_P\cos\omega_it\\
v_p(t) & = V''_P\cos\omega_lt~~.\end{aligned}$$ If possible, the saturated amplitudes $V'_P$ and $V''_P$ should be equal. The main output signal consists of the pair of sinusoids $$v_o(t)=V_O\cos(\omega_l-\omega_i)t+V_O\cos(\omega_l+\omega_i)t
\label{eqn:mix:sfc-vo}$$ that derives from the product $v_i(t)\,v_l(t)$. Yet, the output amplitude $V_O$ is chiefly due to the internal structure of the mixer, and only partially influenced by $V'_P$ and $V''_P$. A bandpass filter selects the upper or the lower frequency of .
The unsuitability of the model of Fig. \[fig:mix-lc-model\] to predict amplitude can be seen in the following example.
Replacing $V'_P$ and $V''_P$ with $V_L$ yields $V_O=\frac{1}{2U}V_L^2$. Let us consider a typical mixer that has a loss of 6 dB when the LO has the nominal power of 5 mW (7 dBm). From the loss we get $V_L\simeq1$ V, thus we expect $V_O=500$ mV, and an output power $V_O^2/2R_0=2.5$ mW ($+4$ dBm) with $R_0=50~\Omega$. Yet, the actual power is hardly higher than 1.25 mW ($+1$ dBm).
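Restating the arithmetic of this example (a sketch using only the figures quoted above, with $V_O=V_L^2/2U$ as if the ring were a true multiplier):

```python
import math

U, R0 = 1.0, 50.0                 # V, ohm
V_L = 1.0                         # V, from a 6 dB loss via V_L = 2U/ell
V_O = V_L**2 / (2 * U)            # expected sideband amplitude
P_O = V_O**2 / (2 * R0)           # expected output power
print(V_O, 1e3 * P_O, round(10 * math.log10(P_O / 1e-3), 1))   # 0.5 V, 2.5 mW, ~+4 dBm
```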
Accounting for the harmonics, the output signal is $$v_o(t)=\sum_{\text{odd}~h,k}V_{hk}\cos(h\omega_l+k\omega_i)t
\qquad\begin{array}{c}
\text{\small positive frequencies}\\[-0.5ex]
\omega_{hk}=h\omega_l+k\omega_i>0
\end{array}~~,
\label{eqn:mix:sfc-vo-harm}$$ where the sum is extended to the positive output frequencies, i.e., $h\omega_l+k\omega_i>0$. $V_{hk}$ decreases rapidly as the product $|hk|$ increases, and drops abruptly outside the bandwidth. Figure \[fig:mix-sc-harm\] shows an example of spectra involving harmonics.
The condition on the ratio $\omega_l/\omega_i$ ensures that two output frequencies $\omega_{h'k'}$ and $\omega_{h''k''}$ do not degenerate into a single spectral line, at least for small $h$ and $k$. This problem is explained in Section \[sec:mix:dc-mode\].
Other authors write the output frequencies as $|{\pm}h\omega_l{\pm}k\omega_i|$, with positive $h$ and $k$. We recommend keeping the sign of $h$ and $k$. One reason is that the positive and negative subscripts of $V_{hk}$ make the spectrum measurements unambiguously identifiable. Another reason is that input phase fluctuations are multiplied by $h$ and $k$, and wrong results may be obtained by discarding the sign.
Degenerated Frequency Converter (DC) Mode {#sec:mix:dc-mode}
-----------------------------------------
The conditions for the mixer to operate in DC mode are the following
- the LO and the RF ports are saturated by sinusoidal signals,
- the input frequencies are not equal, and the ratio $\omega_l/\omega_i$ is equal or close to the ratio of two small integers (say, 5–7 max.),
- the output is band-passed.
When $\omega_l$ and $\omega_i$ are multiple of a common frequency $\omega_0$, thus $$\omega_l=p\omega_0\quad\text{and}\quad\omega_i=q\omega_0
\qquad\text{integer}~p{>}0,~q{>}0,~p{\neq}q~~,
\label{eqn:mix:dfc-cond}$$ the sum degenerates, and groups of terms collapse into fewer terms of frequency $n\omega_0$, integer $n$. The combined effect of saturation and symmetry produces strong odd-order harmonics $h\omega_l$ and $k\omega_i$ $$\begin{aligned}
&\omega_l\,{:} &v_{l1}&=V_1\cos(p\omega_0t+\varphi_l)&\qquad
&\omega_i\,{:} &v_{i1}&=V_1\cos(q\omega_0t+\varphi_i)\\
&3\omega_l\,{:}&v_{l3}&=V_3\cos(3p\omega_0t+3\varphi_l)&\qquad
&3\omega_i\,{:}&v_{i3}&=V_3\cos(3q\omega_0t+3\varphi_i)\\&\cdots &\cdots &\qquad\cdots&\qquad
&\cdots &\cdots &\qquad\cdots\\[-1ex]
&h\omega_l\,{:}&v_{lh}&=V_h\cos(hp\omega_0t+h\varphi_l)&\qquad
&k\omega_i\,{:}&v_{ik}&=V_k\cos(kq\omega_0t+k\varphi_i)\\&\cdots &\cdots &\qquad\cdots&\qquad
&\cdots &\cdots &\qquad\cdots\end{aligned}$$ inside the mixer. After time-domain multiplication, all the cross products appear, with amplitude $V_{hk}$, frequency $(hp+kq)\omega_0$, and phase $h\varphi_l+k\varphi_i$. The generic output term of frequency $n\omega_0$ derives from the vector sum of all the terms for which $$hp+kq=n~~,$$ thus $$v_n(t)=\sum_{\substack{h,k~\text{pair\,:}\\hp+kq=n}}
V_{hk}\cos(n\omega_0t+h\varphi_l+k\varphi_i)
\label{eqn:mix:dfc-vn}$$ Reality is even more complex than this because
- some asymmetry is always present, thus even-order harmonics,
- each term of the sum may contain an additional constant phase $\varphi_{hk}$,
- for a given $\omega_l$–$\omega_i$ pair, several output frequencies $n\omega_0$ exist, each one described by a sum of this kind. Due to nonlinearity, the $v_n(t)$ interact with one another.
Fortunately, the amplitudes $V_{hk}$ decrease rapidly with $|hk|$, therefore the sum can be accurately estimated from a small number of terms, while almost all the difficulty resides in parameter measurement. For this reason, there is no point in developing a sophisticated theory, and the few cases of interest can be analyzed individually. The following example is representative of real situations.
The input frequencies are $f_l=5$ MHz and $f_i=10$ MHz, and we select the output frequency $f_o=5$ MHz with an appropriate band-pass filter. Thus $f_0=5$ MHz, $p=1$, $q=2$, and $n=1$. The output signal results from the following terms $$\begin{array}{cc|ccc|cl}
hf_l+kf_i=nf_0 &&&hp+kq=n&&&v_n(t)\\\hline
-1{\times}5+1{\times}10=5&&&-1{\times}1{+1}{\times}2=1&&&V_{-1\,1}\cos(\omega_0t{-}\varphi_l{+}\varphi_i)\\
+3{\times}5-1{\times}10=5&&&+3{\times}1{-1}{\times}2=1&&&V_{3\,-1}\cos(\omega_0t{+}3\varphi_l{-}\varphi_i)\\
-5{\times}5+3{\times}10=5&&&-5{\times}1{+3}{\times}2=1&&&V_{-5\,3}\cos(\omega_0t{-}5\varphi_l{+}3\varphi_i)\\
+7{\times}5-3{\times}10=5&&&+7{\times}1{-3}{\times}2=1&&&V_{7\,-3}\cos(\omega_0t{+}7\varphi_l{-}3\varphi_i)\\
-9{\times}5+5{\times}10=5&&&-9{\times}1{+5}{\times}2=1&&&V_{-9\,5}\cos(\omega_0t{-}9\varphi_l{+}5\varphi_i)\\
\cdots &&&\cdots &&&\cdots
\end{array}$$
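The $(h,k)$ pairs of such a table can be enumerated mechanically; the sketch below (plain Python, with an arbitrary truncation of the search range) reproduces the rows above for $p=1$, $q=2$, $n=1$.

```python
from itertools import product

# Odd (h, k) pairs contributing to the n*omega_0 output of the degenerated converter.
p, q, n = 1, 2, 1
pairs = [(h, k) for h, k in product(range(-11, 12, 2), repeat=2) if h * p + k * q == n]
for h, k in sorted(pairs, key=lambda hk: abs(hk[0] * hk[1])):
    print(f"h={h:+d}, k={k:+d}:  {h}*f_l {k:+d}*f_i -> phase {h:+d}*phi_l {k:+d}*phi_i")
```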
Phase Amplification Mechanism {#sec:mix:phase-ampli}
-----------------------------
Introducing the phasor (Fresnel vector) representation,[^8] the sum above becomes $\mathbf{V_n}=\sum\mathbf{V_{hk}}$, thus $$\frac{1}{\sqrt{2}}\,V_n\,e^{j\varphi_n}=
\sum_{\substack{h,k~\text{pair\,:}\\hp+kq=n}}
\frac{1}{\sqrt{2}}\,V_{hk}\,e^{j\varphi_{hk}}
\qquad\text{with}\;\varphi_{hk}=h\varphi_l+k\varphi_i~~.
\label{eqn:mix:dfc-vn-vec}$$ Both $V_n$ and $\varphi_n$ are functions of $\varphi_l$ and $\varphi_i$, thus of the phase relationship between the two inputs. Let $\phi$ be the fluctuation of the static phase $\varphi$. The output phase fluctuation is $$\phi_n=
\frac{\partial\varphi_n}{\partial\varphi_l}\,\phi_l +
\frac{\partial\varphi_n}{\partial\varphi_i}\,\phi_i~~,
\label{eqn:mix:dfc-phasegain}$$ where the derivatives are evaluated at the static working point. It follows that the input phase fluctuations $\phi_l$ and $\phi_i$ are amplified or attenuated (gain lower than one) by the mixer. The phase gain/attenuation mechanism is a consequence of degeneracy. The effect on phase noise was discovered while studying regenerative frequency dividers [@rubiola92im].
Figure \[fig:mix-phasors\] shows a simplified example in which a 5 MHz signal is obtained by mixing a 5 MHz and a 10 MHz signal, accounting only for two modes ($10-5$ and $3{\times}5-10$). For $\varphi_l=0$, the vectors are in phase, and the amplitude is at its maximum. A small negative $\varphi_n$ results from $\mathbf{V_{-1\,1}}$ and $\mathbf{V_{3\,{-1}}}$ pulling in opposite directions. A phase fluctuation is therefore attenuated. For $\varphi_l=\pi/4\simeq0.785$, the vectors are opposite, and the amplitude is at its minimum. The combined effect of $\mathbf{V_{-1\,1}}$ and $\mathbf{V_{3\,{-1}}}$ yields a large negative $\varphi_n$. With $V_{3\,{-1}}/V_{-1\,1}=0.2$ ($-14$ dB), the phase gain $\partial\varphi_n/\partial\varphi_l$ spans from $-0.33$ to $-2$, while it would be $-1$ (constant) if only the $-1,1$ mode were present.
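A numerical sketch of this two-mode example (assuming `numpy`; $\varphi_i$ is held fixed at zero and the amplitude ratio 0.2 is the one quoted above) recovers the two extreme values of the phase gain.

```python
import numpy as np

r = 0.2                                                  # V_{3,-1} / V_{-1,1}
phi_l = np.linspace(0, np.pi / 2, 2001)
phasor = np.exp(-1j * phi_l) + r * np.exp(3j * phi_l)    # modes (-1,1) and (3,-1)
phi_n = np.unwrap(np.angle(phasor))
gain = np.gradient(phi_n, phi_l)                         # d(phi_n)/d(phi_l), numerically
print(round(gain[0], 2), round(gain.min(), 2))           # about -0.33 at phi_l = 0, -2 at pi/4
```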
The experimentalist not aware of degeneracy may obtain disappointing results when low-order harmonics are present, as in the above example. The deliberate exploitation of degeneracy to manage phase noise is one of the most exotic uses of the mixer.
#### Parameter Measurement.
There are two simple ways to measure the parameters of a degenerated frequency converter (Fig. \[fig:mix-dfc-meas\]).
The first method is the separate measurement of the coefficients $V_{hk}$ of Eq. by means of a spectrum analyzer. One input signal is set at a frequency $\delta$ off the nominal frequency $\omega_l$ (or $\omega_i$). In this condition degeneracy is broken, and all the terms of Eq. are visible as separate frequencies. The offset $\delta$ must be large enough to enable the accurate measurement of all the spectral lines with a spectrum analyzer, but small enough not to affect the mixer operation. Values of 10–50 kHz are useful in the HF/UHF bands, and up to 1 MHz at higher frequencies. Figure \[fig:mix-phase-gain-spectrum\] provides an example. This method is simple and provides insight. On the other hand, it is not very accurate because it hides the phase errors $\varphi_{hk}$ that may be present in each term.
The second method consists of the direct measurement of $\mathbf{V_n}$ as a function of the input phase, $\varphi_l$ or $\varphi_i$, by means of a vector voltmeter. This gives amplitude and phase, from which the phase gain is derived. For the measurement to be possible, the three signals must be converted to the same frequency $\omega_0$ with appropriate dividers. Of course, the mixer must be measured in the same conditions (RF and LO power) as in the final application. While one vector voltmeter is sufficient, it is better to use two vector voltmeters because the measurement then accounts for the reflected waves in the specific circuit. In some cases good results are obtained with resistive power splitters located close to the mixer because these splitters are not directional. Interestingly, most frequency synthesizers can be adjusted in phase even if this feature is not explicitly provided. The trick consists of misaligning the internal quartz oscillator when the instrument is locked to an external frequency reference. If the internal phase-locked loop does not contain an integrator, the misalignment turns into a phase shift, to be determined a posteriori. The drawback of the direct measurement method is that it requires up to two vector voltmeters, two frequency synthesizers and three frequency dividers. In the general case, the dividers can not be replaced with commercial synthesizers because a synthesizer generally accepts only a small set of round input frequencies (5 MHz or 10 MHz). Figure \[fig:mix-phase-gain\] shows an example of direct measurement, compared to the calculated values, based on the first method.
Phase Detector (PD) Mode {#sec:mix:pd-mode}
------------------------
The mixer works as a phase detector in the following conditions
- the LO and the RF ports are saturated by sinusoidal signals of the same frequency $\omega_0$, approximately in quadrature,
- the output is low-passed.
The product of such input signals is $$\cos\Bigl(\omega_0t+\varphi\Bigr)\,
\cos\Bigl(\omega_0t-\frac{\pi}{2}\Bigr) =
\frac{1}{2}\,\sin\Bigl(2\omega_0t+\varphi\Bigr) -
\frac{1}{2}\,\sin\varphi~~,
\label{eqn:mix:pd-base}$$ from which one obtains a sinusoid of frequency $2\omega_0$, and a dc term $-\frac{1}{2}\sin\varphi$ that is equal to $-\frac{1}{2}\varphi$ for small $\varphi$. The output signal of an actual mixer is a distorted sinusoid of frequency $2\omega_0$ plus a dc term, which can be approximated by $$v_o(t)=V_2\sin\bigl(2\omega_0t+\varphi\bigr) - V_0\sin\varphi~~.
\label{eqn:mix:pd-sat}$$ $V_2$ and $V_0$ are experimental parameters that depend on the specific mixer and on power. Due to saturation, the maximum of $|v_o(t)|$ is approximately independent of $\varphi$, hence $V_2$ decreases as the absolute value of the dc term increases.
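A minimal check of the dc term $-\frac{1}{2}\sin\varphi$ (assuming `numpy`; unit amplitudes and arbitrary frequencies, so a real mixer scales the result by $V_0$):

```python
import numpy as np

fs, f0 = 1.0e6, 10e3
t = np.arange(0, 0.01, 1 / fs)                 # integer number of periods of f0
for phi in (0.0, 0.05, 0.10, -0.10):
    v_rf = np.cos(2 * np.pi * f0 * t + phi)    # input, phase phi
    v_lo = np.cos(2 * np.pi * f0 * t - np.pi / 2)   # LO in quadrature
    dc = np.mean(v_rf * v_lo)                  # low-pass: average over many periods
    print(phi, round(dc, 4), round(-0.5 * np.sin(phi), 4))   # measured vs -sin(phi)/2
```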
Using the $2\omega_0$ output signal to double the input frequency is a poor choice because (i) the quadrature condition can only be obtained in a limited bandwidth, (ii) the IF circuit is usually designed for frequencies lower than the RF and LO. A better choice is to use a reversed mode.
When the PD mode is used close to the quadrature conditions, the deviation of dc response from $\sin\varphi$ can be ignored. After low-pass filtering, the output signal is[^9] $$v_o= - k_\varphi\varphi + V\p{os}~~,
\label{eqn:mix:pd-real}$$ where $k_\varphi$ is the phase-to-voltage gain (the same as $V_0$ above), and $V\p{os}$ is the dc offset that derives from asymmetry. Figure \[fig:mix-vphi\] shows an example of phase detector characteristics. The IF output can be loaded with a high resistance in order to increase the gain $k_\varphi$.
It is often convenient to set the input phase for zero dc output, which compensates for $V\p{os}$. This condition occurs at some random, yet constant, phase a few degrees off quadrature, in a range where the mixer characteristics are virtually unaffected.
Due to diode asymmetry, the input power affects $V\p{os}$. Exploiting the asymmetry of the entire $v(i)$ law of the diodes, it is often possible to null the output response to the fluctuation of the input power, therefore to make the mixer insensitive to amplitude modulation. This involves setting the phase between the inputs to an appropriate value, to be determined experimentally. In our experience, the major problem is that there are distinct AM sensitivities $$\frac{dv_o}{dP_l},\qquad
\frac{dv_o}{dP_i},\qquad
\frac{dv_o}{d(P_l+P_i)}~~,$$ and that nulling one of them is not beneficial to the other two. In some cases the nulls occur within some 5° of quadrature, in other cases farther away, where the side effects of the offset are detrimental.
Reversed Modes {#sec:mix:reversed-modes}
==============
The mixer can be reversed taking the IF port as the input and the RF port as the output (Fig. \[fig:mix-modul\]). The LO signal makes the diodes switch, exactly as in the normal modes. The major difference versus the normal modes is the coupling bandwidth: the output is now ac-coupled via the RF balun, while the input is in most cases dc-coupled. When impedance-matching is not needed, the IF input can be driven with a current source.
Linear Modulator (LM) {#sec:mix:lm-mode}
---------------------
The mixer works as a LM in the following conditions
- the LO port is saturated by a sinusoidal signal,
- a near-dc signal is present at the IF input,
- the IF input current is lower than the saturation current[^10] $I_S$.
As usual, the LO pump forces the diodes to switch. At zero input current, due to symmetry, no signal is present at the RF output. When a positive current $i_i$ is present, the resistance of D2 and D4 averaged over the period decreases, and the conduction angle of D2 and D4 increases. The average resistance of D1 and D3 increases, and their conduction angle decreases. Therefore, a small voltage $v_o(t)$ appears at the RF output, of amplitude proportional to $i_i$, in phase with $v_p(t)$. Similarly, a negative $i_i$ produces an output voltage proportional to $i_i$, of phase opposite to $v_p(t)$. The mixer can be represented as the system of Fig. \[fig:mix-rev-model\], which is similar to the LC model (Fig. \[fig:mix-lc-model\]) but for the input-output filters.
[ ]{}
The internal saturated LO signal can be approximated with a sinusoid $v_l(t)=V_L\cos\omega_lt$, \[Eq. \], or expanded as Eq. . Strictly, $V_L$ cannot be derived from the reverse loss, which is not documented, and reciprocity should not be taken for granted. Nonetheless, measuring some mixers we found that the ‘conventional’ (forward) SSB loss $\ell$ and Eq.  provide a useful approximation of the reverse behavior. Thus, the mixer operates as a linear modulator described by $$\begin{aligned}
v_o(t) &= \frac1U\,v_i(t)\,v_l(t)\\[1ex]
&= \frac1U\,v_i(t)\,V_L\cos\omega_lt~~.
\label{eqn:mix:rev-mod-dc}\end{aligned}$$
The LO signal of a mixer (Mini-Circuits ZFM-2) is a sinusoid of frequency $f_l=100$ MHz and power $P=5$ mW (7 dBm). In such conditions the nominal SSB loss is $\ell=2$ (6 dB). By virtue of Eq. , $V_L=1$ V. When the input current is $i_i=2$ mA dc, the input voltage is $v_i=R_0i_i=100$ mV with $R_0=50~\Omega$. After Eq. , we expect an output signal of 100 mV peak, thus 71 mV rms. This is close to the measured value of 75 mV, obtained by fitting the low-current experimental data of Fig. \[fig:mix-iq-mod-gain\]. Beyond $i_i=3$ mA, the mixer gradually leaves the linear behavior and saturates at some 230 mV rms of output signal when $i_i\approx12$ mA dc. Similar results were obtained testing other mixers.
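The numbers of this example can be reproduced with a few lines of code. The sketch below assumes the linear-modulator law $v_o=\frac1U\,v_i\,V_L\cos\omega_lt$ with $V_L=1$ V as quoted above; the value $U=1$ V is an assumption introduced here only so that the small-signal gain matches the 100 mV peak estimate.

```python
import numpy as np

# Back-of-the-envelope check of the ZFM-2 example.  V_L = 1 V is the value
# quoted in the text; U = 1 V is assumed so that the gain matches the estimate.
R0, U, V_L = 50.0, 1.0, 1.0          # ohm, V, V
i_i = 2e-3                           # A, dc input current
v_i = R0 * i_i                       # 0.1 V input voltage
v_o_peak = v_i * V_L / U             # 0.1 V peak at the RF output
v_o_rms = v_o_peak / np.sqrt(2)      # ~71 mV rms, vs ~75 mV measured
print(f"{v_o_peak*1e3:.0f} mV peak, {v_o_rms*1e3:.0f} mV rms")
```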
[ ]{}
Reverse Linear Converter (RLC) {#sec:mix:rlc-mode}
------------------------------
The mixer works as a RLC in the following conditions
- the LO port is saturated by a sinusoidal signal,
- a small narrowband signal is present at the IF input, which is not saturated,
- the LO and the IF signals are separated in the frequency domain,
- an optional filter selects one of the beat products.
This mode is similar to the LM mode. Letting $v_i(t)=A_i(t)\cos[\omega_it+\varphi_i(t)]$ be the input, the output signal is $$\begin{aligned}
v_o(t) & = \frac{1}{U}~v_i(t)\,v_l(t) \\[0.5ex]
& = \frac{1}{U}~A_i(t)\cos\bigl[\omega_it+\varphi_i(t)\bigr] ~~V_L\cos(\omega_lt) \\[0.5ex]
& = \frac{1}{2U}\,V_LA_i(t)\:\Bigl\{
\cos\bigl[(\omega_l-\omega_i)t-\varphi_i(t)\bigr]+
\cos\bigl[(\omega_l+\omega_i)t+\varphi_i(t)\bigr]\Bigr\}~~.
\label{eqn:mix:rev-mod-ac}\end{aligned}$$ The model of Fig. \[fig:mix-rev-model\] still holds, and the internal LO amplitude $V_L$ can be estimated using Eq. and the ‘conventional’ SSB loss $\ell$.
If an external bandpass filter, not shown in Fig. \[fig:mix-rev-model\], is present, the output signal is $$\begin{aligned}
v_o(t) &= \frac{1}{2U}\,V_LA_i(t)\,
\cos\bigl[(\omega_l-\omega_i)t-\varphi_i(t)\bigr]
\qquad\text{LSB,}\qquad\qquad\text{or}\\[1ex]
v_o(t) &= \frac{1}{2U}\,V_LA_i(t)\,
\cos\bigl[(\omega_l+\omega_i)t+\varphi_i(t)\bigr]
\qquad\text{USB}~~,\end{aligned}$$ under the obvious condition that the signal bandwidth fits into the filter passband.
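The two beat products and their amplitude $\frac{1}{2U}V_LA_i$ can be checked numerically by multiplying a small IF tone by the internal LO and inspecting the amplitude spectrum. In the sketch below all numerical values (sample rate, frequencies, amplitudes) are arbitrary illustration choices.

```python
import numpy as np

# Numerical sketch of the reverse linear converter: a small IF tone multiplied
# by the internal LO yields two beat products at f_l - f_i and f_l + f_i, each
# of amplitude V_L*A_i/(2U).
fs, N = 1e6, 10000                    # sample rate (Hz) and record length
t = np.arange(N) / fs
f_l, f_i = 100e3, 10e3                # LO and IF frequencies (exact FFT bins)
U, V_L, A_i = 1.0, 1.0, 0.1

v_o = (1/U) * A_i*np.cos(2*np.pi*f_i*t) * V_L*np.cos(2*np.pi*f_l*t)

freqs = np.fft.rfftfreq(N, 1/fs)
spec = 2*np.abs(np.fft.rfft(v_o))/N   # single-sided amplitude spectrum
for f in (f_l - f_i, f_l + f_i):
    k = np.argmin(np.abs(freqs - f))
    print(f"{freqs[k]/1e3:5.1f} kHz: {spec[k]:.3f} V  (expected {V_L*A_i/(2*U):.3f} V)")
```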
Digital Modulator (DM) Mode {#sec:mix:dm-mode}
---------------------------
The mixer works as a DM in the following conditions
- the LO port is saturated by a sinusoidal signal,
- a large near-dc current is present at the IF input, which is saturated,
- the RF output is bandpassed.
Let $v_p=V_P\cos\omega_lt$ be the LO input signal, $i_i={\pm}I_i$ the IF input current, and $V_O$ the saturated output amplitude. The output signal is $$v_o(t) = \mbox{sgn}(i_i)\:V_O\cos\omega_lt~~,
\label{eqn:mix:dm-out}$$ where $\mbox{sgn}(\cdot)$ is the signum function. Equation represents a BPSK (binary phase shift keying) modulation driven by the input current $i_i$.
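A minimal sketch of this behavior follows; the bit rate, carrier frequency and saturated amplitude $V_O$ are arbitrary illustration values.

```python
import numpy as np

# Sketch of the digital-modulator (BPSK) output v_o = sgn(i_i) * V_O * cos(w_l*t):
# the sign of the saturating IF current selects the carrier phase, 0 or 180 deg.
f_l, V_O = 100e3, 0.3                          # carrier (Hz) and saturated amplitude (V)
t = np.arange(0, 1e-3, 1e-7)                   # 1 ms sampled at 10 MS/s

bits = np.array([+1, -1, -1, +1, -1])          # data stream driving i_i = +/- I_i
i_i = bits[np.minimum((t*5e3).astype(int), bits.size - 1)]   # 5 kbit/s NRZ drive

v_o = np.sign(i_i) * V_O * np.cos(2*np.pi*f_l*t)
print("distinct carrier polarities:", np.unique(np.sign(i_i)))
```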
Reverse Saturated Converter (RSC) Mode {#sec:mix:rsc-mode}
--------------------------------------
The mixer works in the RSC mode under the following conditions
- the LO and the IF ports are saturated by sinusoidal signals,
- the input frequencies are not equal, and the ratio $\omega_l/\omega_i$ is not too close to the ratio of two small integers (say, no more than 5–7),
- the output is band-passed.
The RSC mode is similar to the SC mode, for the explanations given in Section \[sec:mix:sc-mode\] also apply to the RSC mode. The only difference between SC and RSC is the input and output bandwidth, because IF and RF are interchanged.
Reverse Degenerated Converter (RDC) Mode {#sec:mix:rdc-mode}
----------------------------------------
The mixer works in the RDC mode when
- the LO and the IF ports are saturated by sinusoidal signals,
- the input frequencies are equal, or the ratio $\omega_l/\omega_i$ is equal or close to the ratio of two small integers (say, no more than 5–7),
- the output is band-passed.
The RDC mode is similar to the DC mode (Section \[sec:mix:sc-mode\]) but for the trivial difference in the input and output bandwidth, as the roles of IF and RF are interchanged. The output signal results from the vector addition of several beat signals, each one with its own phase and amplitude.
It should be made clear that, when two equal input frequencies ($\omega_i=\omega_l=\omega_0$) are applied, the reverse mode differs significantly from the normal mode. In the DC mode, this condition would turn the degenerated converter mode into the phase-detector mode. In the reversed modes, instead, no dc output is possible because the RF port is ac-coupled. Of course, a large $2\omega_0$ signal is always present at the RF output, resulting from the vector addition of several signals, which makes the RDC mode an efficient frequency doubler.
The input frequencies are $f_l=f_i=5$ MHz, and we select the output $f_o=10$ MHz. Thus $f_0=5$ MHz, $p=1$, $q=1$, and $n=2$. The output signal \[Eq. \] results from the following terms $$\begin{array}{cc|ccc|cl}
hf_l+kf_i=nf_0 &&&hp+kq=n&&&v_n(t)\\\hline
+1{\times}5+1{\times}5=10&&&+1{\times}1{+1}{\times}1=2&&&V_{1\,1}\cos(\omega_0t{+}\varphi_l{+}\varphi_i)\\
+3{\times}5-1{\times}5=10&&&+3{\times}1{-1}{\times}1=2&&&V_{3\,{-1}}\cos(\omega_0t{+}3\varphi_l{-}\varphi_i)\\
-1{\times}5+3{\times}5=10&&&-1{\times}1{+3}{\times}1=2&&&V_{-1\,3}\cos(\omega_0t{-}\varphi_l{+}3\varphi_i)\\
+5{\times}5-3{\times}5=10&&&+5{\times}1{-3}{\times}1=2&&&V_{5\,-{3}}\cos(\omega_0t{+}5\varphi_l{-}3\varphi_i)\\
-3{\times}5+5{\times}5=10&&&-3{\times}1{+5}{\times}1=2&&&V_{-3\,5}\cos(\omega_0t{-}3\varphi_l{+}5\varphi_i)\\
\cdots &&&\cdots &&&\cdots
\end{array}$$
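The table can also be generated programmatically. The sketch below enumerates the integer pairs $(h,k)$ satisfying $hf_l+kf_i=nf_0$, keeping only odd $h$ and $k$ as in the table; the search bound $|h|,|k|\le7$ is an arbitrary cut-off.

```python
# Enumeration of the beat products contributing to the 2*f_0 output of the
# frequency doubler (f_l = f_i = 5 MHz, n = 2).  Only odd (h, k) pairs are kept,
# as in the table above; the bound |h|, |k| <= 7 is an arbitrary cut-off.
f_l = f_i = 5          # MHz
f_out = 10             # MHz, the selected output
pairs = [(h, k) for h in range(-7, 8) for k in range(-7, 8)
         if h % 2 and k % 2 and h*f_l + k*f_i == f_out]
for h, k in sorted(pairs, key=lambda hk: abs(hk[0]) + abs(hk[1])):
    print(f"{h:+d} x {f_l} {k:+d} x {f_i} = {f_out}   ->   "
          f"V_({h},{k}) cos(w0*t {h:+d}*phi_l {k:+d}*phi_i)")
```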
Special Mixers and I-Q Mixers {#sec:mix:specials-iqs}
=============================
[ ]{}
#### Phase Detector.
Some mixers are explicitly designed to operate in the phase-detector mode. In some cases such devices are actually general-purpose mixers *documented* for phase-detector operation. Often the IF output impedance is larger than 50 $\Omega$, typically 500 $\Omega$. The main advantage of this higher impedance is a lower residual white noise of the system. In fact, the output preamplifier can hardly be noise-matched to an input resistance lower than a few hundred ohms. The IF bandwidth reduction that results from the increased output impedance is not relevant in practice. The residual flicker, which is the most relevant parameter for a number of measurements, is usually not documented[^11].
#### Analog Modulator / Variable Attenuator.
A mixer can be designed and *documented* to be used in a reverse mode as an analog modulator (See Sec. \[sec:mix:lm-mode\]). The fancy name “variable attenuator” is sometimes used. Yet, the mixer operation is more general than that of a simple attenuator because the mixer input current can be either positive or negative, and the output signal changes sign when the input current is negative.
#### BPSK Modulator.
The BPSK modulator differs from the analog modulator in that the IF input is saturated (See Sec. \[sec:mix:dm-mode\]). Once again, the device may differ from a general-purpose mixer mostly in the documentation.
#### High Linearity Mixers.
In some cases low intermodulation distortion must be achieved at any cost. Special mixers are used, based on a ring in which the diodes are replaced with the more complex elements shown in Fig. \[fig:mix-multidiode\] (classes I–III). High linearity is achieved by forcing the diodes to switch abruptly in the presence of a large pump signal. These mixers, as compared to those built with single diodes, need large LO power, up to 1 W, and show higher loss.
[ ]{}
#### Improved Impedance-Matching Mixers.
The $90^\circ$ hybrid junction, used as a power splitter, has the useful property that the input (output) is always impedance matched when the isolation port is correctly terminated and the two outputs (inputs) are loaded with equal impedances. This property is exploited by joining two equal double-balanced mixers to form the improved mixer of Fig. \[fig:mix-improved-z-match\] (Class IV mixer). Other schemes are possible, based on the same idea.
#### Double-Double-Balanced Mixers.
[ ]{}
The double-double-balanced mixer (Figure \[fig:mix-ddbm\]) features a high 1 dB compression point, hence high dynamic range and low distortion, together with high isolation. This device is sometimes called a *triple balanced mixer* because it is balanced at all three ports. Other schemes are possible.
#### Image-Rejection Mixer.
[ ]{}
Let us go back to the frequency conversion system of Fig. \[fig:mix-lc-image\], in which the LSB and the USB are converted into the same IF frequency $\omega_b$. The scheme of Fig. \[fig:mix-img-rej\] separates the IF components, enabling the selection of either the LSB or the USB input (RF) signal.
Let us write for short $a=\omega_it$ and $b=\omega_lt$ for the instantaneous phases of the RF and LO signals. The converted signals at the IF outputs of the mixers are $$\begin{aligned}
v_1&=\frac{1}{\sqrt{2}U} V_IV_L\, \sin a \, \cos b\\[0.5ex]
v_2&=\frac{1}{\sqrt{2}U} V_IV_L\, \cos a \, \cos b~~,\end{aligned}$$ thus $$\begin{aligned}
v_1&=\frac{1}{2\sqrt{2}U} V_IV_L \Bigl[\sin(a-b)+\sin(a+b)\Bigr] \\[0.5ex]
v_2&=\frac{1}{2\sqrt{2}U} V_IV_L \Bigl[\cos(a-b)+\cos(a+b)\Bigr]~~.\end{aligned}$$ The path of the hybrid junction labeled ‘$-90^\circ$’ turns the phase of the positive-frequency signals by $-90^\circ$, and the phase of the negative-frequency signals by $+90^\circ$. The rotated signals are $$\begin{aligned}
v''_1 &=\begin{cases}
\frac{1}{4U} V_IV_L\bigl[-\cos(a-b)-\cos(a+b)\bigr] &a{>}b\\[0.5ex]
\frac{1}{4U} V_IV_L\bigl[+\cos(a-b)+\cos(a+b)\bigr] &a{<}b
\end{cases}\\[2ex]
v''_2 &=\begin{cases}
\frac{1}{4U} V_IV_L \bigl[+\sin(a-b)+\sin(a+b)\bigr] &a{>}b\\[0.5ex]
\frac{1}{4U} V_IV_L \bigl[-\sin(a-b)-\sin(a+b)\bigr] &a{<}b
\end{cases}\end{aligned}$$ which also account for a factor $1/\sqrt{2}$ due to energy conservation. The non-rotated signals are $$\begin{aligned}
v'_1&=\frac{1}{4U} V_IV_L \bigl[\sin(a-b)+\sin(a+b)\bigr] \\[0.5ex]
v'_2&=\frac{1}{4U} V_IV_L \bigl[\cos(a-b)+\cos(a+b)\bigr]~~.\end{aligned}$$ The output signals are $$\begin{aligned}
v_\text{USB}=v''_1+v'_2&=\begin{cases}
\frac{1}{4U} V_IV_L \bigl[\sin(a-b)+\sin(a+b)\bigr]
& a{>}b~~\text{\footnotesize (USB taken in)}\\[0.5ex]
0
& a{<}b~~\text{\footnotesize (LSB rejected)}
\end{cases} \\[2ex]
v_\text{LSB}=v'_1+v''_2&=\begin{cases}
0
&a{>}b~~\text{\footnotesize (USB rejected)}\\[0.5ex]
\frac{1}{4U} V_IV_L \bigl[\cos(a-b)+\cos(a+b)\bigr]
&a{<}b~~\text{\footnotesize (LSB taken in)}
\end{cases}\end{aligned}$$
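The sideband-separation mechanism can be verified numerically. In the sketch below the $-90^\circ$ path is implemented as a Hilbert transform (positive frequencies rotated by $-90^\circ$, negative ones by $+90^\circ$) and the two output combinations are formed as above; one combination vanishes when the RF is above the LO, the other when it is below. Which combination corresponds to the USB port and which to the LSB port depends on the orientation of the hybrids, so the labels may be interchanged with respect to the figure. Frequencies and amplitudes are arbitrary illustration values.

```python
import numpy as np
from scipy.signal import hilbert

# Numerical check of the image-rejection principle.  The -90 deg path is a
# Hilbert transform, so v'' = H{v}/sqrt(2) and v' = v/sqrt(2); the port naming
# depends on the hybrid orientation.
fs, N = 1e6, 100000
t = np.arange(N) / fs
f_l = 100e3
U = V_I = V_L = 1.0

def ports(f_i):
    a, b = 2*np.pi*f_i*t, 2*np.pi*f_l*t
    v1 = V_I*V_L/(np.sqrt(2)*U) * np.sin(a) * np.cos(b)
    v2 = V_I*V_L/(np.sqrt(2)*U) * np.cos(a) * np.cos(b)
    v1r, v2r = np.imag(hilbert(v1))/np.sqrt(2), np.imag(hilbert(v2))/np.sqrt(2)
    v1n, v2n = v1/np.sqrt(2), v2/np.sqrt(2)
    return np.std(v1r + v2n), np.std(v1n + v2r)   # the two output combinations

for label, f_i in (("RF above LO", 110e3), ("RF below LO", 90e3)):
    p1, p2 = ports(f_i)
    print(f"{label}: {p1:.3f} V rms, {p2:.3f} V rms")  # one of the two is ~0
```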
The unwanted sideband is never cancelled completely. A rejection of 20 dB is common in practice. The main reason to prefer the image-rejection mixer to a (simple) mixer is noise. Let us assume that the LO frequency $\omega_l$ and the IF center frequency $\omega_\text{IF}$ are given. The mixer converts both $|\omega_l-\omega_\text{IF}|$ and $|\omega_l+\omega_\text{IF}|$ to $\omega_\text{IF}$, while the image-rejection mixer converts only one of these channels. Yet, the noise of the electronic circuits is present at both frequencies.
The IF filter of an FM receiver has a bandwidth of 300 kHz centered at 10.7 MHz. In order to receive a channel at 91 MHz, we tune the local oscillator to 101.7 MHz ($101.7-10.7=91$). A simple mixer down-converts two channels to the IF, the desired one (91 MHz) and the image frequency at 112.4 MHz ($101.7+10.7=112.4$). In the best case, only noise is present at the image frequency (112.4 MHz), which is taken in by the mixer, yet not by the image-rejection mixer.
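The arithmetic of this example is summarized below (high-side LO injection, as in the text).

```python
# Image-frequency arithmetic of the FM-receiver example (high-side LO injection).
f_if = 10.7                 # MHz, IF center frequency
f_rf = 91.0                 # MHz, wanted channel
f_lo = f_rf + f_if          # 101.7 MHz
f_image = f_lo + f_if       # 112.4 MHz, also converted to the IF by a simple mixer
print(f"LO = {f_lo:.1f} MHz, image = {f_image:.1f} MHz")
```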
#### SSB Modulator.
[ ]{}
[ ]{}
The SSB modulator (Fig. \[fig:mix-ssb-mod\]) is a different arrangement of the same blocks used in the image-rejection mixer. The main purpose of this device is to modulate a carrier by adding only one sideband, either the LSB or the USB. All explanations are given in the scheme of Fig. \[fig:mix-ssb-mod\].
#### I-Q Detectors and Modulators.
[ ]{}
The two-axis synchronous detector introduced in Section \[ssec:mix:sd-mode\] is commercially available in (at least) two practical implementations, shown in Fig. \[fig:mix-iq-detectors\]. Of course, the conversion loss is increased by the loss of the input power splitter, which is 3–4 dB. For the same reason, the required LO power is increased by 3–4 dB. The I-Q mixer can be reversed, operating as a modulator, as the simple mixer does (Sec. \[sec:mix:lm-mode\]). A number of I-Q modulators are available off the shelf, as shown in Fig. \[fig:mix-iq-modulators\]. Other configurations of I-Q detector/modulator are possible, with similar characteristics.
[ ]{}
The Type-2 detector seems to work better than the Type-1 because the $180^\circ$ junction exhibits higher symmetry and lower loss than the $90^\circ$ junction. Some power loss and asymmetry are better tolerated at the LO port, which is saturated. Figure \[fig:mix-sd-v6p9fig\] gives an idea of the actual loss asymmetry. In addition, there can be a phase error, that is, a deviation from quadrature, of a few degrees.
[ ]{}
Finally, it is worth pointing out that the phase relationships shown in Figures \[fig:mix-iq-detectors\]–\[fig:mix-iq-modulators\] result from a technical choice, and should not be taken for granted. Letting the phase of the LO be arbitrary, there are two possible choices, Q leads I or Q lags I. The experimentalist may come across unclear or ambiguous documentation, hence inspection is recommended. Figure \[fig:mix-iq-identify\] shows a possible method. The FFT analyzer is used to measure the phase of the signal Q versus the reference signal I. I have some preference for $\omega_s>\omega_l$, and for a beat note $\frac{1}{2\pi}\omega_b=\frac{1}{2\pi}|\omega_s-\omega_l|$ of some 1–5 kHz. A phase meter, a vector voltmeter, or a lock-in amplifier can be used instead of the dual-channel FFT analyzer.
Non-ideal behavior
==================
Most of the issues discussed here belong to the general background on radio-frequency and microwave engineering, and they are listed only briefly, for the sake of completeness. The book [@razavi:rf-microelectronics] is a good reference.
[ ]{}
Impedance matching.
: The inputs and the output of the mixer only approximate the nominal impedance, since reflections are present in the circuit. In practice, the impedance mismatch depends on frequency and power.
Isolation and crosstalk.
: A fraction of the input power leaks to the output, and to the other input as well. Isolation of the LO port is often the main concern because of its high power.
1 dB compression point.
: At high input power, about 10 dB below the LO power, the mixer starts saturating, hence the SSB loss increases. The 1 dB compression point is defined as the input power at which the loss increases by 1 dB (Figure \[fig:mix-loss\]).
Non-linearity.
: The mixer behavior deviates from the ideal linear model of Section \[ssec:mix:linearity\], for the input-output relationship is of the form $v_o(v_i) = a_0 + a_1v_i + a_2v_i^2 + a_3v_i^3 + \ldots~~$ \[Eq. , here repeated\]. In radio engineering the cubic term, $a_3v_i^3$, is often the main concern. This is due to the fact that, when two strong adjacent-channel signals are present at $\Delta\omega$ and $2\Delta\omega$ off the received frequency $\omega_i$, an intermodulation product falls exactly at $\omega_i$, which causes interference. Since $\Delta\omega\ll\omega_i$, a preselector filter cannot fix the problem.
Offset.
: In the ‘synchronous detector’ mode, the output differs from the expected value by a dc offset, which depends on the LO power and on frequency. The same problem is present in the ‘phase detector’ mode, where the RF power also affects the offset. This occurs because of saturation.
Internal phase shift.
: The presence of a small phase lag at each port inside the mixer has no effect in most applications. Of course, in the case of I-Q devices the quadrature accuracy is relevant.
Mixer Noise
===========
Mixer noise has been studied since the early days of radar [@radlab-v15-torrey:crystal-rectifiers; @bergmann68iretmtt]. Significantly lower noise was later obtained with the Schottky diode [@barber67mtt; @gewartowski71mtt], and afterwards with the double-balanced mixer. More recent and complete analyses of mixer noise are available in [@held78mtt-1; @held78mtt-2; @kerr78mtt-0; @kerr78mtt-1; @kerr78mtt-2]. Nonetheless, in the design of electronics, and even of *low-noise* electronics, the mixer noise is often a second-order issue because:
1. Nowadays mixers exhibit low noise figure, of the order of 1 dB.
2. The mixer is almost always preceded by an amplifier.
3. The mixer picks up noise from a number of frequency slots, which are sometimes difficult to predict.
Noise pick-up from various frequency slots is probably the major practical issue. The presence of the USB/LSB pair makes the image-rejection mixer (Fig. \[fig:mix-img-rej\], p. ) appealing. Two phenomena deserve attention. The first one is the multi-harmonic frequency conversion (Fig. \[fig:mix-lc-harm\] p. and Fig. \[fig:mix-ld-harm\] p. ), by which noise is converted to the IF band from the sidebands of frequencies multiple of the LO frequency. The second phenomenon is a step in the output noise spectrum at the LO frequency, in the presence of white noise at the RF port (Fig. \[fig:mix-noise-step\]). Only a graphical proof is given here. The output slots IF1, IF2, and IF3 are down-converted from the input slots RF3+RF4, RF2+RF5, and RF1+RF6, respectively. Thus, the conversion power loss is $\ell^2/2$. At higher frequencies, the output slots IF4, IF5, …, come from RF7, RF8, …, hence the loss is $\ell^2$. The analytical proof follows exactly the graphical proof, after increasing to infinity the number of frequency slots so that their width becomes $d\omega$.
[ ]{}
Flicker ($1/f$) noise is generally not documented. All the references found about mixer noise are limited to classical white noise, that is, thermal and shot noise, while flicker noise is not considered. The flicker behavior of the mixer may depend on the operating mode, as listed in Table \[tab:mix:modes\] (p. ). Yet, the general rule is that flicker noise is a near-dc phenomenon, powered by the LO pump. The near-dc flicker is then either up-converted by non-linearity and brought to the output, or directly available at the output in the ‘synchronous detector’ mode (Sec. \[ssec:mix:sd-mode\]) and in the ‘phase detector’ mode (Sec. \[sec:mix:pd-mode\]), where the dc signal is taken at the output.
Where to learn more
===================
Our approach, which consists of identifying and analyzing the modes of Table \[tab:mix:modes\], is original. Thus, there are no specific references.
A lot can be learned from the data sheets of commercial mixers and from the accompanying application notes. Unfortunately, learning in this way requires patience because manufacturers tend to use their own notation, and because of the commercially oriented approach. Another problem is that the analysis is often too simplified, which makes it difficult to fit technical information into theory. Watkins Johnson[^12] application notes [@wj:mixers-1; @wj:mixers-2] provide a useful general description and an invaluable understanding of intermodulation [@wj:selecting-mixers]. We also found useful the Anzac [@anzac:mixers; @anzac:modulators], Macom [@macom:mixers] and Mini-Circuits [@minicircuits:understanding-mixers; @minicircuits:mixer-terms] application notes.
Reading books and book chapters on mixers, one may be surprised by the difference between standpoints. A book edited by E. L. Kollberg [@kollberg:mixers] collects a series of articles, most of which were published in the IEEE Transactions on Microwave Theory and Techniques and in other IEEE journals. This collection covers virtually all relevant topics. The non-specialist may be interested at least in the first part, about basic mixer theory. The classical book written by S. A. Maas [@maas:mixers] is a must on the subject.
A few books about radio engineering contain a chapter on mixers. We found useful chapter 3 (*Mixers*) of McClaning & al. [@mcclaning:receivers pp. 261–344], chapter 7 (*Mixers*) of Krauss & al. [@krauss:radio-engineering pp. 188–220], chapter 6 (*Mixers*) of Rohde & al. [@rohde:communications-receivers pp. 277–318], and chapter 7 (*Microwave Mixer Design*) of Vendelin & al. [@vendelin:microwave-circuit-design].
Some radio amateur handbooks provide experiment-oriented information of great value, hard to find elsewhere. Transmission-line transformers and baluns are described in [@sevick:transmission-line]. Recent editions of the ARRL Handbook [@straw:arrl-handbook-99] contain a chapter on mixers (chapter 15 in the 1999 edition), written by D. Newkirk and R. Karlquist, full of practical information and common sense.
[^1]: This is also known as the *complex* representation, or as the *Fresnel vector* representation.
[^2]: The factor $\sqrt2$ is dropped, for $A$ is a peak amplitude. Thus, $A'(t)$ and $A''(t)$ are the time-varying counterparts of $V'\sqrt2$ and $V''\sqrt2$.
[^3]: In this Section we use $x$ and $y$ in order to emphasize some properties of synchronous detection tightly connected to the Cartesian-coordinate representation. Here, $x$ and $y$ are the same as $A'$ and $A''$ of Eq. .
[^4]: Once again, we emphasize the properties connected with the Cartesian-coordinate representation. $X(t)$ is the same as $v_o(t)$ of other sections.
[^5]: For example, the well known wireless standard 802.11g (WiFi) uses 64-QAM. The transmitted signal is of the form , with $x$ and $y$ quantized in 8 levels (3 bits) each.
[^6]: In our previous articles we took $\ell=P_i/P_o$ instead of $\ell^2=P_i/P_o$. The practical use is unchanged because $\ell$ is always given in dB.
[^7]: Some experimental advantages arise from taking $\omega_b=|\omega_i-\omega_l|$ instead of $\omega_b=|\omega_i+\omega_l|$.
[^8]: In this section we use uppercase boldface for phase vectors, as in $\mathbf{V}=Ve^{j\varphi}$. $V$ is the rms voltage.
[^9]: The phase-to-voltage gain is also written as $k_\phi$ (with the alternate shape of $\phi$) because it is used with the small fluctuations $\phi$.
[^10]: The mixer saturation current, which can be of some mA, should not be mistaken for the diode reverse saturation current. The latter is extremely small, possibly as low as $10^{-15}$ A.
[^11]: I have never come across a phase detector whose residual flicker is documented.
[^12]: http://www.wj.com/technotes/
|
---
abstract: 'We give a survey on classical and recent applications of dynamical systems to number theoretic problems. In particular, we focus on normal numbers, also including computational aspects. The main result is a sufficient condition for establishing multidimensional van der Corput sets. This condition is applied to various examples.'
address:
- ' 1. Université de Lorraine, Institut Elie Cartan de Lorraine, UMR 7502, Vandoeuvre-lès-Nancy, F-54506, France;2. CNRS, Institut Elie Cartan de Lorraine, UMR 7502, Vandoeuvre-lès-Nancy, F-54506, France'
- |
Department for Analysis and Computational Number Theory\
Graz University of Technology\
A-8010 Graz, Austria
author:
- 'Manfred G. Madritsch'
- 'Robert F. Tichy'
title: Dynamical systems and uniform distribution of sequences
---
Dynamical systems in number theory
==================================
In the last decades dynamical systems have become very important for the development of modern number theory. The present paper focuses on Furstenberg’s refinements of Poincaré’s recurrence theorem and applications of these ideas to Diophantine problems.
A (measure-theoretic) dynamical system is formally given as a quadruple $(X,\mathfrak{B}, \mu, T)$, where $(X,\mathfrak{B},\mu)$ is a probability space with $\sigma$-algebra $\mathfrak{B}$ of measurable sets and $\mu$ a probability measure; $T\colon
X\rightarrow X$ is a measure-preserving transformation on this space, *i.e.* $\mu(T^{-1}A)= \mu(A)$ for all measurable sets $A\in\mathfrak{B}$. In the theory of dynamical systems, properties of the iterations of the transformation $T$ are of particular interest. For this purpose we only consider invertible transformations and call such dynamical systems invertible.
The first property we consider originates from Poincaré’s famous recurrence theorem (see Theorem 1.4 of [@walters1982:introduction_to_ergodic] or Theorem 2.11 of [@einsiedler_ward2011:ergodic_theory_with]), saying that starting from a set $A$ of positive measure $\mu(A)>0$ and iterating $T$ yields infinitely many returns to $A$. More generally, we call a subset $\mathcal{R}\subset\mathbb{N}$ of the positive integers a set of recurrence if for all invertible dynamical systems and all measurable sets $A$ of positive measure $\mu (A)>0$ there exists $n\in\mathcal{R}$ such that $\mu(A\cap
T^{-n}A)>0$. Then Poincaré’s recurrence theorem means that ${\mathbb{N}}$ is a set of recurrence.
A second important theorem for dynamical systems is Birkhoff’s ergodic theorem (see Theorem 1.14 of [@walters1982:introduction_to_ergodic] or Theorem 2.30 of [@einsiedler_ward2011:ergodic_theory_with]). We call $T$ ergodic if the only invariant sets under $T$ are sets of measure $0$ or of measure $1$, *i.e.* $T^{-1}A=A$ implies $\mu(A)=0$ or $\mu(A)=1$. Then Birkhoff’s ergodic theorem connects average in time with average in space, *i.e.* $$\lim_{N\rightarrow\infty}\frac{1}{N}\sum^{N-1}_{n=0}f \circ
T^{n}(x)= \int_{X}f(x)d\mu(x)$$ for all $f\in L^{1}(X,\mu)$ and $\mu$-almost all $x \in X$.
Let us explain an important application of this theorem to number theory. For $q\geq2$ a positive integer, consider $T\colon[0,1)\rightarrow [0,1)$ defined by $T(x)=\{qx\},$ where $\{t\}=t-\lfloor t\rfloor$ denotes the fractional part of $t$. If $x\in{\mathbb{R}}$ is given by its $q$-ary digit expansion $x=\lfloor x\rfloor+ \sum^{\infty}_{j=1}a_{j}(x)q^{-j}$, then the digits $a_{j}(x)$ can be computed by iterating this transformation $T$: $a_{j}(x)=i$ if $T^{j-1}x\in\left[\tfrac iq,\frac{i+1}q\right)$ with $i\in\{0,1,\ldots,q-1\}$. Moreover, since $a_j(Tx)=a_{j+1}(x)$ for $j\geq1$ the transformation $T$ can be seen as a left shift of the expansion.
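For illustration, the digit map can be coded directly; the sketch below uses exact rational arithmetic to avoid floating-point drift while iterating $T$.

```python
from fractions import Fraction

# Sketch of the shift map T(x) = {q x}: iterating T reads off the base-q digits,
# since a_j(x) = i exactly when T^{j-1} x lies in [i/q, (i+1)/q).
def digits(x: Fraction, q: int, n: int):
    """First n base-q digits of x in [0, 1), obtained via the shift map."""
    out = []
    for _ in range(n):
        qx = q * x
        d = int(qx)        # the digit a_j(x) = floor(q * T^{j-1} x)
        out.append(d)
        x = qx - d         # T(x) = {q x}: drop the digit just read
    return out

print(digits(Fraction(1, 7), 10, 12))   # 1/7 = 0.142857 142857 ...
```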
Now we call a real number $x$ simply normal in base $q$ if $$\lim_{N\rightarrow\infty}\frac{1}{N}\#\{j\leq N\colon a_{j}=d\}= \frac{1}{q}$$ for all $d=0, \ldots, q-1$, *i.e.* all digits $d$ appear asymptotically with equal frequency $1/q.$ A number $x$ is called $q$-normal if it is simply normal with respect to all bases $q, q^{2},q^{3},\ldots$. This is equivalent to the fact that the sequence $(\{q^{n}x\})_{n\in\mathbb{N}}$ is uniformly distributed modulo $1$ (for short: u.d. mod $1$), which also means that all blocks $d_{1},d_{2},\ldots, d_{L}$ of subsequent digits appear in the expansion of $x$ asymptotically with the same frequency $q^{-L}$ (*cf.* [@bugeaud2012:distribution_modulo_one; @drmota_tichy1997:sequences_discrepancies_and; @kuipers_niederreiter1974:uniform_distribution_sequences]). For completeness, let us give here one possible definition of u.d. sequences $(x_{n})$: a sequence of real numbers $x_{n}$ is called u.d. mod $1$ if for all continuous functions $f: [0,1]\rightarrow
\mathbb{R}$
$$\label{ud}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum^{N}_{n=1}f(x_{n})=
\int^{1}_{0}f(x)dx.$$
Note that by Weyl’s criterion the class of continuous functions can be replaced by the trigonometric functions $e(hx)=e^{2\pi ihx}$, $h\in \mathbb{N}$, or by the characteristic functions $1_{I}(x)$ of intervals $I=[a,b)$. Applying Birkhoff’s ergodic theorem shows that Lebesgue-almost all real numbers are $q$-normal in any base $q\geq 2$. Defining a real number to be absolutely normal if it is $q$-normal for all bases $q\geq 2$, this immediately yields that almost all real numbers are absolutely normal.
In particular, this shows the existence of absolutely normal numbers. However, it is a different story to find constructions of (absolutely) normal numbers. It is a well-known difficult open problem to show that important numbers like $\sqrt{2}$, $\ln 2$, $e$, $\pi$ etc. are simply normal with respect to some given base $q\geq 2$. A much easier task is to give constructions of $q$-normal numbers for fixed base $q$. Champernowne [@champernowne1933:construction_decimals_normal] proved that
$$0.1\,2\,3\,4\,5\,6\,7\,8\,9\,10\,11\,12\ldots$$
is normal to base $10$, and later this type of construction was analysed in detail. So, for instance, for arbitrary base $q\geq 2$
$$0.\langle\lfloor g(1)\rfloor\rangle_{q}\; \langle\lfloor g(2)\rfloor\rangle_{q}\ldots$$
is $q$-normal, where $g(x)$ is a non-constant polynomial with real coefficients and the $q$-normal number is constructed by concatenating the $q$-ary digit expansions $\langle\lfloor g(n)\rfloor\rangle_{q}$ of the integer parts of the values $g(n)$ for $n=1,2,\ldots$. These constructions were extended to more general classes of functions $g$ (replacing the polynomials) (see [@nakai_shiokawa1992:discrepancy_estimates_class; @nakai_shiokawa1990:class_normal_numbers; @madritsch_thuswaldner_tichy2008:normality_numbers_generated; @madritsch_tichy2013:construction_normal_numbers; @davenport_erdoes1952:note_on_normal; @schiffer1986:discrepancy_normal_numbers]) and to the concatenation of $\langle[g(p)]\rangle_{q}$ along prime numbers instead of the positive integers (see [@nakai_shiokawa1997:normality_numbers_generated; @madritsch2014:construction_normal_numbers; @copeland_erdoes1946:note_on_normal; @madritsch_tichy2013:construction_normal_numbers]).
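As a small computational illustration (and not a proof), one can concatenate the base-$q$ expansions of $\lfloor g(n)\rfloor$ over a finite range of $n$ and inspect the digit frequencies, which should be close to $1/q$; the choice $g(n)=n$ (Champernowne's case) and the cut-off used below are arbitrary.

```python
from collections import Counter

# Finite-sample illustration of the Champernowne-type construction: concatenate
# the base-q expansions of floor(g(n)) and look at single-digit frequencies.
# This only probes simple normality empirically.
def to_base(m: int, q: int) -> str:
    s = ""
    while m:
        s = str(m % q) + s      # digits written as characters (fine for q <= 10)
        m //= q
    return s or "0"

q = 10
g = lambda n: n                 # Champernowne's choice; try n**2 as well
expansion = "".join(to_base(int(g(n)), q) for n in range(1, 200001))
freq = Counter(expansion)
print({d: round(freq[d]/len(expansion), 4) for d in sorted(freq)})
```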
All such constructions depend on the choice of the base number $q\geq 2$, and thus they are not suitable for constructing absolutely normal numbers. A first attempt to construct absolutely normal numbers is due to Sierpinski [@sierpinski1917:demonstration_elementaire_du]. However, Turing [@turing1992:note_on_normal] observed that Sierpinski’s “construction” does not yield a computable number, since it is not based on a recursive algorithm. Furthermore, Turing gave an algorithm for the construction of an absolutely normal number. This algorithm is very slow and, in particular, does not run in polynomial time. It is very remarkable that Becher, Heiber and Slaman [@becher_heiber_slaman2013:polynomial_time_algorithm] established a polynomial-time algorithm for the construction of absolutely normal numbers. However, there remain various questions concerning the analysis of these algorithms. The discrepancy of the corresponding sequences has not been studied, and the order of convergence of the expansion is very slow and should be investigated in detail. Furthermore, digital expansions with respect to linear recurring base sequences seem appropriate to be included in the study of absolute normality from a computational point of view.
Let us now return to Poincaré’s recurrence theorem which shortly states that the set $\mathbb{N}$ of positive integers is a recurrence set. In the 1960s various stronger concepts were introduced:
(i) $\mathcal{R}\subseteq \mathbb{N}$ is called a nice recurrence set if for all invertible dynamical systems, all measurable sets $A$ of positive measure $\mu(A)>0$ and all $\varepsilon > 0$, there exist infinitely many $n\in \mathcal{R}$ such that $$\mu(A\cap
T^{-n}A)>\mu(A)^{2}- \varepsilon.$$
(ii) $\mathcal{H}\subseteq\mathbb{N}$ is called a van der Corput set (for short: vdC set) if the following implication holds: $$(x_{n+h}-x_{n})_{n\in\mathbb{N}}\; \text{is u.d.~mod $1$ for all $h\in\mathcal{H}$}\Longrightarrow
(x_{n})_{n\in\mathbb{N}}\; \text{is u.d.~mod $1$.}$$
Clearly, any nice recurrence set is a recurrence set. By van der Corput’s difference theorem (see [@kuipers_niederreiter1974:uniform_distribution_sequences; @drmota_tichy1997:sequences_discrepancies_and]) the set $\mathcal{H}= {\mathbb{N}}$ of positive integers is a vdC set. Kamae and Mendès-France [@kamae_mendes1978:van_der_corputs] proved that any vdC set is a nice recurrence set. Ruzsa [@ruzsa1984:connections_between_uniform] conjectured that any recurrence set is also vdC. An important tool in the analysis of recurrence sets is their equivalence with intersective (or difference) sets established by Bertrand-Mathis [@bertrand-mathis1986:ensembles_intersectifs_et]. We call a set $\mathcal{I}$ intersective if for each subset $E\subseteq \mathbb{N}$ of positive (upper) density, there exists $n\in\mathcal{I}$ such that $n=x-y$ for some $x,y\in E$. Here the upper density of $E$ is defined as usual by $$\overline{d}(E)=\limsup_{N\to\infty}\frac{\#(E\cap[1,N])}{N}.$$ Bourgain [@bourgain1987:ruzsas_problem_on] gave an example of an intersective set which is not a vdC set, hence contradicting the above mentioned conjecture of Ruzsa.
Furstenberg [@furstenberg1977:ergodic_behavior_diagonal] proved that the values $g(n)$ of a polynomial $g\in \mathbb{Z}[x]$ with $g(0)=0$ form an intersective set and later it was shown by Kamae and Mendès-France [@kamae_mendes1978:van_der_corputs] that this is a vdC set, too. It is also known, that for fixed $h\in \mathbb{Z}$ the set of shifted primes $\{p\pm h\colon
p\text{ prime}\}$ is a vdC set if and only if $h=\pm 1.$ ([@montgomery1994:ten_lectures_on Corollary 10]). This leads to interesting applications to additive number theory, for instance to new proofs and variants of theorems of Sárkőzy [@sarkozy1978:difference_sets_sequences1; @sarkoezy1978:difference_sets_sequences3; @sarkoezy1978:difference_sets_sequences2]. A general result concerning intersective sets related to polynomials along primes is due to Nair [@nair1992:certain_solutions_diophantine].
In the present paper we want to extend the concept of recurrence sets, nice recurrence sets and vdC sets to subsets of ${\mathbb{Z}}^{k},$ following the program of Bergelson and Lesigne [@bergelson_lesigne2008:van_der_corput] and our earlier paper [@bergelson_kolesnik_madritsch+2014:uniform_distribution_prime]. In section 2 we summarize basic facts concerning these concepts, including general relations between them and counter examples. Section 3 is devoted to a sufficient condition for establishing the vdC property. In the final section 4 we collect various examples and give some new applications.
Van der Corput sets
===================
In this section we provide various equivalent definitions of van der Corput sets in ${\mathbb{Z}}^k$. In particular, we give four different definitions, which are $k$-dimensional variants of the one dimensional definitions, whose equivalence is due to Ruzsa [@ruzsa1984:connections_between_uniform]. These generalizations were established by Bergelson and Lesigne [@bergelson_lesigne2008:van_der_corput]. Then we present a set, which is not a vdC set in order to give some insight into the structure of vdC sets. Finally, we define the higher-dimensional variant of nice recurrence sets.
Characterization via uniform distribution
-----------------------------------------
Similarly to above we first define a van der Corput set (vdC set for short) in ${\mathbb{Z}}^k$ via uniform distribution.
A subset $\mathcal{H}\subset{\mathbb{Z}}^k\setminus\{0\}$ is a vdC set if any family $(x_{\mathbf{n}})_{\mathbf{n}\in{\mathbb{N}}^k}$ of real numbers is u.d. mod $1$ provided that it has the property that for all $\mathbf{h}\in \mathcal{H}$ the family $(x_{\mathbf{n}+\mathbf{h}}-x_{\mathbf{n}})_{\mathbf{n}\in{\mathbb{N}}^k}$ is u.d. mod $1$.
Here the property of u.d. mod $1$ for the multi-indexed family $(x_{\mathbf{n}})_{\mathbf{n}\in{\mathbb{N}}^k}$ is defined via a natural extension of \[ud\]:
$$\label{ud1}
\lim_{N_1,N_2,\ldots,N_k\to+\infty}\frac1{N_1N_2\cdots N_k}
\sum_{0\leq\mathbf{n}<(N_1,N_2,\ldots,N_k)}f(x_{\mathbf n})=
\int^{1}_{0}f(x)dx$$
for all continuous functions $f:[0,1]\rightarrow\mathbb{R}.$ Here in the limit $N_1, N_2,\ldots, N_k$ are tending to infinity independently and $<$ is defined componentwise.
Using the $k$-dimensional variant of van der Corput’s inequality we could equivalently define a vdC set as follows:
A subset $\mathcal{H}\subset{\mathbb{Z}}^k\setminus\{0\}$ is a van der Corput set if for any family $(u_\mathbf{n})_{\mathbf{n}\in{\mathbb{Z}}^k}$ of complex numbers of modulus $1$ such that $$\forall\mathbf{h}\in \mathcal{H},\quad
\lim_{N_1,N_2,\ldots,N_k\to+\infty}\frac1{N_1N_2\cdots N_k}
\sum_{0\leq\mathbf{n}<(N_1,N_2,\ldots,N_k)}u_{\mathbf{n}+\mathbf{h}}\overline{u_{\mathbf{n}}}=0$$ the relation $$\lim_{N_1,N_2,\ldots,N_k\to+\infty}\frac1{N_1N_2\cdots
N_k}
\sum_{0\leq\mathbf{n}<(N_1,N_2,\ldots,N_k)}u_{\mathbf{n}}=0$$ holds.
Trigonometric polynomials and spectral characterization
-------------------------------------------------------
The first two definitions are not very useful for proving or disproving that a set $\mathcal{H}$ is a vdC set. As in the one-dimensional case, the following spectral characterization involving trigonometric polynomials is a better tool.
A subset $\mathcal{H}\subset{\mathbb{Z}}^k\setminus\{0\}$ is a van der Corput set if and only if for all $\varepsilon>0$, there exists a real trigonometric polynomial $P$ on the $k$-torus ${\mathbb{T}}^k$ whose spectrum is contained in $\mathcal{H}$ and which satisfies $P(0)=1$, $P\geq-\varepsilon$.
The set of polynomials fulfilling the last theorem for a given $\varepsilon$ forms a convex set. Moreover, the conditions may be interpreted as an infimum. Therefore we may expect a dual problem, which is indeed provided by the following theorem. For details see Bergelson and Lesigne [@bergelson_lesigne2008:van_der_corput] or Montgomery [@montgomery1994:ten_lectures_on].
Let $\mathcal{H}\subset{\mathbb{Z}}^k\setminus\{0\}$. Then $\mathcal{H}$ is a van der Corput set if and only if every positive measure $\sigma$ on the $k$-torus ${\mathbb{T}}^k$ satisfying $\widehat{\sigma}(\mathbf{h})=0$ for all $\mathbf{h}\in \mathcal{H}$ also satisfies $\sigma(\{(0,0,\ldots,0)\})=0$.
Examples
--------
The structure of vdC sets is better understood by first giving a counterexample. The following lemma proves to be very useful in the construction of counterexamples.
\[mt:infinite\_intersection\] Let $\mathcal{H}\subset{\mathbb{N}}$. If there exists $q\in{\mathbb{N}}$ such that the set $\mathcal{H}\cap q{\mathbb{N}}$ is finite, then the set $\mathcal{H}$ is not a vdC set.
The proof is a combination of the following two observations of Ruzsa [@ruzsa1984:connections_between_uniform] (see Theorem 2 and Corollary 3 of [@montgomery1994:ten_lectures_on]):
1. Let $m\in{\mathbb{N}}$. The sets $\{1,\ldots,m\}$ and $\{n\in{\mathbb{N}}\colon
m\nmid n\}$ are both not vdC sets.
2. Let $\mathcal{H}=\mathcal{H}_1\cup\mathcal{H}_2\subset{\mathbb{N}}$. If $\mathcal{H}$ is a vdC set, then $\mathcal{H}_1$ or $\mathcal{H}_2$ also has to be a vdC set.
Suppose there exists a $q\in{\mathbb{N}}$ such that $\mathcal{H}\cap q{\mathbb{N}}$ is finite. Then we may split $\mathcal{H}$ into the sets $\mathcal{H}\cap q{\mathbb{N}}$ and $\mathcal{H}\setminus q{\mathbb{N}}$. The first one is finite and the second one contains no multiples of $q$. Therefore both are not vdC sets and hence $\mathcal{H}$ is not a vdC set.
The first counter example deals with arithmetic progressions.
\[lem:arithmetic\_prog\] Let $a,b\in{\mathbb{N}}$. If the set $\{an+b\colon n\in{\mathbb{N}}\}$ is a vdC set, then $a\mid b$.
Let $b\in{\mathbb{N}}$ and $\mathcal{H}=\{an+b\colon n\in{\mathbb{N}}\}$ be a vdC set. Then by Lemma \[mt:infinite\_intersection\] we must have $$an+b\equiv b\equiv 0\bmod a\quad\text{infinitely often.}$$ This implies that $a\mid b$.
The sufficiency (and also the necessity) of the requirement $a\mid b$ follows from the following result of Kamae and Mendès-France [@kamae_mendes1978:van_der_corputs] (*cf.* Corollary 9 of [@montgomery1994:ten_lectures_on]).
Let $P(z)\in{\mathbb{Z}}[z]$ and suppose that $P(z)\to+\infty$ as $z\to+\infty$. Then $\mathcal{H}=\{P(n)>0\colon n\in{\mathbb{N}}\}$ is a vdC set if and only if for every positive integer $q$ the congruence $P(z)\equiv 0\pmod q$ has a root.
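For concreteness, the divisibility condition of this criterion can be checked mechanically for small moduli; the sketch below provides numerical evidence only, since a genuine proof has to cover every $q$.

```python
# Sketch of the Kamae-Mendes-France criterion: {P(n) > 0} is a vdC set iff
# P(z) = 0 (mod q) is solvable for every positive integer q.  Checking a finite
# range of q gives evidence only, not a proof.
def solvable_mod(P, q):
    return any(P(z) % q == 0 for z in range(q))

P1 = lambda z: z*(z + 1)        # root z = 0 mod every q  ->  vdC set
P2 = lambda z: z**2 + 1         # no root mod 3           ->  not a vdC set
for name, P in (("z(z+1)", P1), ("z^2+1", P2)):
    bad = [q for q in range(1, 50) if not solvable_mod(P, q)]
    print(name, "-> first moduli with no root:", bad[:3] or "none up to 49")
```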
Now we want to establish a similar result for sets of the form $\{ap+b\colon p\text{ prime}\}$. In this case the following result is due to Bergelson and Lesigne [@bergelson_lesigne2008:van_der_corput] which is a generalization of the case $f(x)=x$ due to Kamae and Mendès-France [@kamae_mendes1978:van_der_corputs].
\[bl:prop1.22\] Let $f$ be a (non zero) polynomial with integer coefficients and zero constant term. Then the sets $\{f(p-1)\colon
p\in\mathbb{P}\}$ and $\{f(p+1)\colon p\in\mathbb{P}\}$ are vdC sets in ${\mathbb{Z}}$.
We show the converse direction.
Let $a$ and $b$ be non-zero integers. Then the set $\{ap+b\colon
p\in\mathbb{P}\}$ is a vdC set if and only if ${\left\vert}a{\right\vert}={\left\vert}b{\right\vert}$, *i.e.* $ap+b=a(p\pm 1)$.
It is clear from Lemma \[bl:prop1.22\] that $\{ap+b\colon
p\in\mathbb{P}\}$ is a vdC set if ${\left\vert}a{\right\vert}={\left\vert}b{\right\vert}$.
Conversely, a combination of Lemma \[mt:infinite\_intersection\] and Lemma \[lem:arithmetic\_prog\] yields that $a\mid b$. Now we consider the sequence modulo $b$. Then by Lemma \[mt:infinite\_intersection\] we get that $$ap+b\equiv ap\equiv 0\bmod b\quad\text{infinitely often.}$$ Since $(p,b)>1$ holds only for finitely many primes $p$, we must have $b\mid a$. Combining these two requirements yields ${\left\vert}a{\right\vert}={\left\vert}b{\right\vert}$.
A sufficient condition
======================
In this section we want to formulate a general sufficient condition which provides us with a tool to show for plenty of different examples that they generate a vdC set. This is a generalization of the conditions of Kamae and Mendès-France [@kamae_mendes1978:van_der_corputs] and Bergelson and Lesigne [@bergelson_lesigne2008:van_der_corput]. Before stating the condition we need an auxiliary lemma.
\[bl:lem\_linear\_algebra\] Let $d$ and $e$ be positive integers, and let $L$ be a linear transformation from ${\mathbb{Z}}^d$ into ${\mathbb{Z}}^e$ (represented by an $e\times
d$ matrix with integer entries). Then the following assertions hold:
1. If $D$ is a vdC set in ${\mathbb{Z}}^d$ and if $0\not\in L(D)$, then $L(D)$ is a vdC set in ${\mathbb{Z}}^e$.
2. Let $D\in{\mathbb{Z}}^d$. If the linear map $L$ is one-to-one, and if $L(D)$ is a vdC set in ${\mathbb{Z}}^e$, then $D$ is a vdC set in ${\mathbb{Z}}^d$.
Our main tool is the following general result. Applications are given in the next section.
\[mt:sufficient\_condition\] Let $g_1,\ldots,g_k\colon{\mathbb{N}}\to{\mathbb{Z}}$ be arithmetic functions. Suppose that $g_{i_1},\ldots,g_{i_m}$ is a basis of the ${\mathbb{Q}}$-vector space $\mathrm{span}(g_1,\ldots,g_k)$. For each $q\in{\mathbb{N}}$, we introduce $$D_q:=\left\{(g_{i_1}(n),\ldots,g_{i_m}(n))\colon n\in{\mathbb{N}}\text{ and $q!\mid
g_{i_j}(n)$ for all $j=1,\ldots,m$}\right\}.$$ Suppose further that, for every $q$, there exists a sequence $(h^{(q)}_n)_{n\in{\mathbb{N}}}$ in $D_q$ such that, for all $\mathbf{x}=(x_1,\ldots,x_m)\in{\mathbb{R}}^m\setminus{\mathbb{Q}}^m$, the sequence $(h^{(q)}_n\cdot\mathbf{x})_{n\in{\mathbb{N}}}$ is uniformly distributed mod $1$. Then $$\widetilde{D}:=\{(g_1(n),\ldots,g_k(n))\colon n\in{\mathbb{N}}\}\subset{\mathbb{Z}}^k$$ is a vdC set.
We first show that the set $$D:=\{(g_{i_1}(n),\ldots,g_{i_m}(n))\colon n\in{\mathbb{N}}\}$$ is a vdC set in ${\mathbb{Z}}^m$. For $q,N\in{\mathbb{N}}$ we define a family of trigonometric polynomials $$P_{q,N}:=\frac1N\sum_{n=1}^Ne\left(h_n^{(q)}\cdot\mathbf{x}\right).$$ By hypothesis, $\lim_{N\to\infty}P_{q,N}(x)=0$ for $x\not\in{\mathbb{Q}}^m$. For fixed $q$ there exists a subsequence $(P_{q,N'})$ which converges pointwise to a function $g_q$. Since $g_q(x)=1$ (for $x\in{\mathbb{Q}}^m$ and $q$ sufficiently large) and $g_q(x)=0$ (for $x\not\in{\mathbb{Q}}^m$), the sequence $(g_q)$ converges pointwise to the indicator function of ${\mathbb{Q}}^m$. For a positive measure $\sigma$ on the $m$-dimensional torus with vanishing Fourier transform $\widehat{\sigma}$ on $D$, we have $\int P_{q,N}\mathrm{d}\sigma=0$ for all $q,N$. Thus $\sigma({\mathbb{Q}}^m)=0$ follows from the dominated convergence theorem; in particular $\sigma(\{(0,0,\ldots,0)\})=0$, and thus $D$ is a vdC set.
In order to prove that $\widetilde{D}$ is a vdC set we apply Lemma \[bl:lem\_linear\_algebra\] twice. Since $g_{i_1},\ldots,g_{i_m}$ is a basis of $\mathrm{span}(g_1,\ldots,g_k)$, we can write each $g_j$ as a linear combination (with rational coefficients) of $g_{i_1},\ldots,g_{i_m}$. Multiplying by the common denominator of the coefficients yields $$a_jg_j=b_{j,1}g_{i_1}+\cdots+b_{j,m}g_{i_m}$$ for $j=1,\ldots,k$ and certain $a_j,b_{j,\ell}\in{\mathbb{Z}}$. Considering the transformation $L\colon{\mathbb{Z}}^m\to{\mathbb{Z}}^k$ given by the matrix $(b_{j,\ell})$ and applying part (1) of Lemma \[bl:lem\_linear\_algebra\] shows that $$\{(a_1g_1(n),\ldots,a_kg_k(n))\colon n\in{\mathbb{N}}\}$$ is a vdC set for certain integers $a_1,\ldots,a_k$.
Now consider the transformation $\widetilde{L}\colon{\mathbb{Z}}^k\to{\mathbb{Z}}^k$ given by the $k\times k$ diagonal matrix with entries $a_1,\ldots,a_k$ in the diagonal. Then by part (2) of Lemma \[bl:lem\_linear\_algebra\] also $\widetilde{D}$ is a vdC set and the proposition is proved.
Various examples and applications to additive problems
======================================================
In this section we consider multidimensional variants of prime powers, entire functions and $x^\alpha\log^\beta x$ sequences.
Prime powers
------------
In a recent paper the authors together with Bergelson, Kolesnik and Son [@bergelson_kolesnik_madritsch+2014:uniform_distribution_prime] consider sets of the form $$\{(\alpha_1(p_n\pm1)^{\theta_1},\ldots,\alpha_k(p_n\pm1)^{\theta_k})\colon
n\in{\mathbb{N}}\},$$ where $\alpha_i,\theta_i\in{\mathbb{R}}$ and $p_n\in\mathcal{P}$ runs over all prime numbers. These sets are vdC; however, we missed the treatment of a special case in the proof. In particular, if for some $i\neq j$ the exponents satisfy $\theta_i=\theta_j=:\theta$, then the vector $(p_n^{\theta},p_n^{\theta})$ is not uniformly distributed mod 1.
Here we close this gap.
If $\alpha_i$ are positive integers and $\beta_i$ are positive and non-integers, then $$D_1 = \{ \left( (p-1)^{\alpha_1}, \cdots , (p-1)^{\alpha_k}, [(p-1)^{\beta_1}], \cdots , [(p-1)^{\beta_\ell}] \right) | \, p \in\mathcal{P} \},$$ and $$D_2 = \{ \left( (p+1)^{\alpha_1}, \cdots , (p+1)^{\alpha_k}, [(p+1)^{\beta_1}], \cdots , [(p+1)^{\beta_\ell}] \right) | \, p \in \mathcal{P} \}$$ are vdC sets in ${\mathbb{Z}}^{k+\ell}$.
Since $x^{\theta_1}$ and $x^{\theta_2}$ are ${\mathbb{Q}}$-linearly dependent for all $x\in{\mathbb{Z}}$ if and only if $\theta_1=\theta_2$, an application of Proposition \[mt:sufficient\_condition\] yields that it suffices to consider the case where all exponents are different. This case follows by the same arguments as in the proof of Theorem 4.1 in [@bergelson_kolesnik_madritsch+2014:uniform_distribution_prime].
Entire functions
----------------
In this section we consider entire functions of bounded logarithmic order. We fix a transcendental entire function $f$ and denote by $S(r):=\max_{{\left\vert}z{\right\vert}\leq r}{\left\vert}f(z){\right\vert}$. Then we call $\lambda$ the logarithmic order of $f$ if $$\limsup_{r\to\infty}\frac{\log S(r)}{\log r}=\lambda.$$
The central tool is the following result of Baker [@baker1984:entire_functions_and].
\[baker:uniform\_distribution\_of\_entire\_functions\] Let $f$ be a transcendental entire function of logarithmic order $1<\lambda<\frac43$. Then the sequence $$\left(f(p_n)\right)_{n\geq1}$$ is uniformly distributed mod $1$.
Our second example of a class of vdC sets is the following.
\[thm:entire\_functions\_vdC\] Let $f_1,\ldots,f_k$ be entire functions with distinct logarithmic orders $1<\lambda_1,\lambda_2,\ldots,\lambda_k<\frac43$, respectively. Then the set $$D:=\{(\lfloor f_1(p_n)\rfloor,\ldots,\lfloor f_k(p_n)\rfloor)\colon
n\in{\mathbb{N}}\}$$ is a vdC set.
We enumerate $D=(\mathbf{d}_n)_{n\geq1}$, where $$\mathbf{d}_n:=\left(\lfloor f_1(p_n)\rfloor,\ldots,\lfloor f_k(p_n)\rfloor\right).$$
First we show that for every $q\in{\mathbb{N}}$ the set $$D^{(q)}:=\{(d_1,\ldots,d_k)\in D\colon q\mid d_i\text{ for }1\leq i\leq k\}$$ has positive relative density in $D$. We note that if $0\leq\left\{\frac{f_i(p_n)}{q}\right\}<\frac1q$ for $1\leq i\leq k$, then $\mathbf{d}_n\in D^{(q)}$. By Theorem \[baker:uniform\_distribution\_of\_entire\_functions\] the sequence $$\left(\left(\frac{f_1(p_n)}{q},\ldots,\frac{f_k(p_n)}q\right)\right)_{n\geq1}$$ is uniformly distributed and thus $D^{(q)}$ has positive density in $D$.
For each $q\in{\mathbb{N}}$ we enumerate the elements of $D^{(q!)}=(\mathbf{d}^{(q!)}_n)_{n\geq1}$, such that ${\left\vert}\mathbf{d}^{(q!)}_n{\right\vert}$ is increasing. Since the logarithmic orders are distinct we immediately get that the functions $f_i$ are ${\mathbb{Q}}$-linearly independent. Thus by Proposition \[mt:sufficient\_condition\] it is sufficient to show that for all $q\in{\mathbb{N}}$ and all $\mathbf{x}=(x_1,\ldots,x_k)\in{\mathbb{R}}^k\setminus{\mathbb{Q}}^k$ the sequence $(\mathbf{d}^{(q!)}_n\cdot\mathbf{x})_{n\geq1}$ is u.d. mod 1.
Using the orthogonality relations for additive characters we get for any non-zero integer $h$, that $$\begin{gathered}
\frac1{{\left\vert}\{n\leq N\colon\mathbf{d}_n\in D^{(q!)}\}{\right\vert}}
\sum_{n\leq N}e\left(h\left(d^{(q!)}_n\cdot\mathbf{x}\right)\right)\\
=\frac1{{\left\vert}\{n\leq N\colon\mathbf{d}_n\in D^{(q!)}\}{\right\vert}}
\frac1{(q!)^k}\sum_{j_1=1}^{q!}\cdots\sum_{j_k=1}^{q!}
\frac1N\sum_{n\leq
N}e\left(d_n\cdot\left(h\mathbf{x}+\left(\frac{j_1}{q!},\ldots,\frac{j_k}{q!}\right)\right)\right).
\end{gathered}$$ The innermost sum is of the form $$\sum_{n\leq N}e(g(p_n)),$$ with $g(x)=\sum_{i=1}^k\alpha_i\lfloor f_i(x)\rfloor$ for a certain $(\alpha_1,\ldots,\alpha_k)\in{\mathbb{R}}^k\setminus{\mathbb{Q}}^k$.
By relabeling the terms we may suppose that there exists an $\ell$ such that $\alpha_1,\ldots,\alpha_\ell\not\in{\mathbb{Q}}$ and $\alpha_{\ell+1},\ldots,\alpha_k\in{\mathbb{Q}}$. Furthermore we may write $\alpha_j=\frac{a_j}q$ for $\ell+1\leq j\leq k$. Then $$e(g(p_n))=e\left(\sum_{i=1}^k\alpha_i\lfloor
f_i(p_n)\rfloor\right)= \prod_{j=1}^\ell
s_j(\alpha_jf_j(p_n),f_j(p_n))\prod_{j=\ell+1}^kt_j(\lfloor
f_j(p_n)\rfloor),$$ where $s_j(x,y)=e(x-\{y\}\alpha_j)$ ($1\leq
j\leq\ell$) and $t_j(z)=e\left(a_j\frac zq\right)$ ($\ell+1\leq
j\leq k$).
Since $s_j(x,y)$ is Riemann-integrable on ${\mathbb{T}}^2$ for $j=1,\ldots,\ell$ and $t_j(z)$ is continuous on ${\mathbb{Z}}_q={\mathbb{Z}}/q{\mathbb{Z}}$, the function $\prod_{j=1}^\ell s_j\prod_{j=\ell+1}^kt_j$ is Riemann-integrable on ${\mathbb{T}}^{2\ell}\times{\mathbb{Z}}_q^{k-\ell}$.
Now an application of Theorem \[baker:uniform\_distribution\_of\_entire\_functions\] yields that for any $u\in{\mathbb{N}}$ the sequence $$\left(\alpha_1f_1(p_n),f_1(p_n),\ldots,\alpha_\ell
f_\ell(p_n),f_\ell(p_n),\frac{f_{\ell+1}(p_n)}u,\ldots,\frac{f_k(p_n)}u\right)_{n\geq1}$$ is u.d. in ${\mathbb{T}}^{2\ell}\times{\mathbb{T}}^{k-\ell}$. Since $\lfloor
x\rfloor\equiv a\pmod q$ is equivalent to $\frac xq\in[\frac
aq,\frac{a+1}q]$, we deduce that $$\left(\alpha_1f_1(p_n),f_1(p_n),\ldots,\alpha_\ell
f_\ell(p_n),f_\ell(p_n),\lfloor
f_{\ell+1}(p_n)\rfloor,\ldots,\lfloor
f_k(p_n)\rfloor\right)_{n\geq1}$$ is u.d. in ${\mathbb{T}}^{2\ell}\times{\mathbb{Z}}_q^{k-\ell}$, and Weyl’s criterion implies that $$\lim_{N\to\infty}\frac1N\sum_{n\leq
N}e\left(\sum_{i=1}^k\alpha_i\lfloor f_i(p_n)\rfloor\right)=0,$$ proving the theorem.
Functions of the form $x^\alpha\log^\beta x$
--------------------------------------------
In the one-dimensional case Boshernitzan *et al.* [@boshernitzan_kolesnik_quas+2005:ergodic_averaging_sequences] showed, among other things, that these sets are vdC sets. Our aim is to show an extended result for the $k$-dimensional case. To this end we use the following general criterion, which is a combination of Fejér’s theorem and van der Corput’s difference theorem.
Let $f(x)$ be a function defined for $x > 1$ that is $k$-times differentiable for $x > x_0$. If $f^{(k)}(x)$ tends monotonically to $0$ as $x\to\infty$ and if $\lim_{x\to\infty}x{\left\vert}f^{(k)}(x){\right\vert}=\infty$, then the sequence $(f(n))_{n\geq1}$ is u.d. mod 1.
Applying this theorem we get the following
\[cor:uniform\_distribution\_of\_n\_log\_powers\] Let $\alpha\neq0$ and
- either $\sigma>0$ not an integer and $\tau\in{\mathbb{R}}$ arbitrary
- or $\sigma>0$ an integer and $\tau\in{\mathbb{R}}\setminus[0,1]$.
Then the sequence $(\alpha n^\sigma\log^\tau n)_{n\geq2}$ is u.d. mod 1.
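A quick finite-$N$ illustration via Weyl's criterion: for admissible parameters (chosen arbitrarily below), the normalized exponential sums should be small. This is numerical evidence only, and floating-point round-off limits the attainable accuracy for very large $N$.

```python
import numpy as np

# Finite-N check of Weyl's criterion for x_n = alpha * n^sigma * log(n)^tau:
# the sums (1/N) * sum_n e(h * x_n) should be small for every non-zero integer h.
alpha, sigma, tau = 1.0, 1.5, 2.0        # sigma not an integer, tau arbitrary
N = 10**6
n = np.arange(2, N + 1, dtype=np.float64)
x = alpha * n**sigma * np.log(n)**tau
for h in (1, 2, 3):
    S = abs(np.mean(np.exp(2j*np.pi*h*x)))
    print(f"h = {h}: |Weyl sum| = {S:.4f}")
```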
Our third example is the following class of vdC sets.
Let $\alpha_1,\ldots,\alpha_k>0$ and $\beta_1,\ldots,\beta_k\in{\mathbb{R}}$, such that $\beta_i\not\in[0,1]$ whenever $\alpha_i\in{\mathbb{Z}}$ for $i=1,\ldots,k$. Then the set $$D:=\{(\lfloor n^{\alpha_1}\log^{\beta_1}n\rfloor,\ldots,\lfloor
n^{\alpha_k}\log^{\beta_k}n\rfloor)\colon n\in{\mathbb{N}}\}$$ is a vdC set.
Following the same arguments as in the proof of Theorem \[thm:entire\_functions\_vdC\], and replacing the uniform distribution result for entire functions (Theorem \[baker:uniform\_distribution\_of\_entire\_functions\]) by the corresponding result for $n^\alpha\log^\beta n$ sequences (Corollary \[cor:uniform\_distribution\_of\_n\_log\_powers\]), yields the proof.
Acknowledgment {#acknowledgment .unnumbered}
==============
This research work was done when the first author was a visiting lecturer at the Department of Analysis and Computational Number Theory at Graz University of Technology. The author thanks the institution for its hospitality.
The second author acknowledges support of the project F 5510-N26 within the special research area “Quasi Monte-Carlo Methods and Applications” funded by the Austrian Science Fund.
|
VLBL Study Group-H2B-4\
AMES-HET-01-09\
AS-ITP-2001-022\
[Probing neutrino oscillations jointly\
in long and very long baseline experiments]{}
Y. F. Wang$^a$, K. Whisnant$^b$, Zhaohua Xiong$^c$, Jin Min Yang$^c$, Bing-Lin Young$^b$
[*$^a$ Institute of High Energy Physics, Academia Sinica, Beijing 100039, China\
$^b$ Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA\
$^c$ Institute of Theoretical Physics, Academia Sinica, Beijing 100080, China*]{}
ABSTRACT
We examine the prospects of making a joint analysis of neutrino oscillation at two baselines with neutrino superbeams. Assuming narrow band superbeams and a 100 kt water Cerenkov calorimeter, we calculate the event rates and sensitivities to the matter effect, the signs of the neutrino mass differences, the CP phase and the mixing angle $\theta_{13}$. Taking into account all possible experimental errors under general consideration, we explore the optimal choices of narrow band beam for measuring the matter effect and the CP violation effect at all baselines up to 3000 km. We then focus on two specific baselines, a long baseline of 300 km and a very long baseline of 2100 km, and analyze their joint capabilities. We find that the joint analysis can offer extra leverage to resolve some of the ambiguities that are associated with a measurement at a single baseline.
Introduction {#sec1}
============
Although the existing data from the Super-Kamiokande experiment [@superK] and various other corroborating experiments offer very strong indications of neutrino oscillations, the appearance experiment, i.e., the appearance of a flavor different from the original one, has not been convincingly performed. If neutrinos indeed oscillate, the oscillation parameters, including the leptonic CP phase, have to be determined with sufficient accuracy. Furthermore, the well-known MSW matter effect [@MSW] has to be tested by experiments. In spite of the various ongoing and planned neutrino oscillation experiments, additional experiments with very long baseline are needed, at least for the test of the matter effect. The recently approved superbeam facility [@HIPA], which will be available towards the later part of this decade, offers the possibility of a very long baseline (VLBL) experiment which, in conjunction with other oscillation experiments, can test thoroughly all properties of neutrino oscillations.
Among all neutrino oscillation experiments, the long baseline (LBL) experiments are particularly attractive. Since the neutrino beams are produced in an accelerator according to definite physics criteria with the detector site chosen accordingly, the experiment can be conducted in a more controlled fashion to maximize the physics output. Hence the LBL experiments will allow us to make detailed analyses of the oscillation parameters so as to provide a complete picture of the physics of neutrino oscillation. As one example of such experiments, a project called H2B is under discussion [@H2B; @Japanesegroup; @FoM]. The neutrino super-beam for H2B would be from the newly approved high intensity 50 GeV proton synchrotron in Japan called HIPA [@HIPA] and the detector, tentatively called the Beijing Astrophysics and Neutrino Detector (BAND), is envisioned to be a 100 kt water Cerenkov calorimeter (WCC) with resistive plate chambers (RPC) [@wang] located in Beijing, China. The distance from HIPA to Beijing is about 2100 km. Such a very long baseline experiment would be complementary to the recently proposed J2K experiment [@J2K] which will also use the neutrino beam from HIPA but with the Super-Kamiokande detector or its update. The distance from HIPA to Super-Kamiokande is about 300 km.
In this article, we will examine the prospects of investigating neutrino oscillations at H2B in conjunction with J2K so that the joint data at the two widely different baselines can be used in a complementary way to provide strong leverage to eliminate some of the ambiguities in the determination of oscillation parameters. The joint analysis can expand the capability of the parameter search beyond what is attainable by either experiment alone. The two baselines can work at their respective favorable energy ranges. The present work is to demonstrate this possibility, but we have not searched for the best narrow band beam energies for the two baselines. Assuming a narrow band meson beam and the above mentioned 100 kt WCC with RPC, we simulate the event rates for 5-year operation. The sensitivity of the event rates to the various oscillation parameters will be explored. The present work can be regarded partly as a continuation of the study of H2B in Refs. [@H2B; @Japanesegroup; @FoM] and an initial exploration of the idea of joint analyses of two detectors, which we think is appropriate for oscillation physics. In Sec. \[sec2\], we discuss some of the fundamentals of neutrino oscillation and LBL experiments. In Sec. \[sec3\], we present some of our numerical results. We present the joint analyses of the data of two detectors in Sec. 4. Finally, in Sec. 5, we present our conclusions.
Fundamentals of neutrino oscillation and LBL experiments {#sec2}
========================================================
If we accept all current data, there will be three distinctive mass scales provided by the five categories of experiments: long baseline, short baseline accelerator experiments such as LSND, atmospheric, solar, and reactor. If the LSND data are excluded, the three SM neutrino flavors are sufficient and no extension of the number of neutrinos beyond that of the standard model is necessary. In view of the uncertainty of the LSND data, our discussion will be restricted to the 3-flavor scenario.
The oscillation of the 3-flavor neutrinos is a system with a limited number of degrees of freedom. The system consists of two mass-squared differences (MSD), three mixing angles, and one measurable CP phase. These parameters together with the matter effect determine the various survival and appearance probabilities [@BDWY]. The unitary mixing matrix in vacuum is generally parameterized as $$\begin{aligned}
U & = & \left( \begin{array}{ccc}
c_{12}c_{13} & c_{13}s_{12} & \hat{s}^*_{13} \\
-c_{23}s_{12} - c_{12}\hat{s}_{13}s_{23} &
c_{12}c_{23} -s_{12}\hat{s}_{13}s_{23} & c_{13}s_{23} \\
s_{12}s_{23} - c_{12}c_{23}\hat{s}_{13} &
-c_{12}s_{23} -c_{23}s_{12}\hat{s}_{13} & c_{13}c_{23}
\end{array} \right)\end{aligned}$$ where $s_{jk}=\sin(\theta_{jk})$, $c_{jk}=\cos(\theta_{jk})$, and $\hat{s}_{jk}=\sin(\theta_{jk})e^{i\delta}$, $\theta_{jk}$ defined for $j<k$ is the mixing angle of mass eigenstates $\nu_j$ and $\nu_k$, and $\delta$ is the CP phase angle. The three mass eigenvalues are denoted as $m_1$, $m_2$, and $m_3$. The two independent MSD are $\Delta{\rm m}^2_{21}\equiv {\rm m}^2_2 - {\rm m}^2_1$ and $\Delta{\rm m}^2_{32}\equiv {\rm m}^2_3 - {\rm m}^2_2$.
In LBL experiments the neutrino beam has to go through matter, which gives rise to the well-known MSW effect [@MSW]. A widely used model for the Earth, called the Preliminary Reference Earth Model (PREM), is given in [@earth], and the Earth density profile can be found in [@profile]. Since for a VLBL experiment the matter density can vary significantly along the path of the neutrino beam, in our calculation we perform a numerical integration of the Schrödinger equation for a realistic treatment of the distance-dependent matter density.
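For concreteness, the following minimal sketch (ours, not the code used for the analysis) integrates the flavor-basis evolution $i\,d\psi/dL = H(L)\psi$ with the mixing matrix parameterized as above. The constant density of 3 g/cm$^3$, the electron fraction $Y_e=0.5$, and the rounded numerical constants are illustrative assumptions; a realistic calculation would use the PREM profile along the chord.

```python
# Sketch of the flavor-basis Schroedinger equation i dpsi/dL = H(L) psi for 3 flavors,
# assuming a constant density; the PREM profile would replace `density` in practice.
import numpy as np
from scipy.integrate import solve_ivp

# oscillation parameters used in the text (MSD sign I, delta = 0)
s2_12, s2_23, s2_13 = 0.8, 1.0, 0.05            # sin^2(2*theta_jk)
dm21, dm32 = 5e-5, 3e-3                         # eV^2
delta = 0.0
th12, th23, th13 = [0.5 * np.arcsin(np.sqrt(s)) for s in (s2_12, s2_23, s2_13)]

def pmns(t12, t23, t13, d):
    """Mixing matrix in the parameterization displayed above."""
    s12, c12, s23, c23, s13, c13 = (np.sin(t12), np.cos(t12), np.sin(t23),
                                    np.cos(t23), np.sin(t13), np.cos(t13))
    sh = s13 * np.exp(1j * d)                    # \hat{s}_{13}
    return np.array([
        [c12 * c13, c13 * s12, np.conj(sh)],
        [-c23 * s12 - c12 * sh * s23, c12 * c23 - s12 * sh * s23, c13 * s23],
        [s12 * s23 - c12 * c23 * sh, -c12 * s23 - c23 * s12 * sh, c13 * c23],
    ])

EV_TO_INV_KM = 5.068e9     # conversion: an energy of 1 eV corresponds to ~5.068e9 km^-1
V_PER_DENSITY = 7.63e-14   # sqrt(2) G_F n_e in eV per unit of rho*Y_e [g/cm^3] (approximate)

def hamiltonian(E_GeV, rho, Ye=0.5):
    """Flavor-basis Hamiltonian in units of km^-1 for neutrinos of energy E_GeV."""
    U = pmns(th12, th23, th13, delta)
    m2 = np.diag([0.0, dm21, dm21 + dm32])                    # (0, Dm21^2, Dm31^2) in eV^2
    H = U @ m2 @ U.conj().T / (2.0 * E_GeV * 1e9)             # vacuum part, in eV
    H = H + np.diag([V_PER_DENSITY * rho * Ye, 0.0, 0.0])     # MSW potential, in eV
    return H * EV_TO_INV_KM

def prob_mu_to_e(E_GeV, L_km, density=lambda x: 3.0):
    """P(nu_mu -> nu_e) after a baseline L_km through matter of density(x) g/cm^3."""
    rhs = lambda x, psi: -1j * hamiltonian(E_GeV, density(x)) @ psi
    sol = solve_ivp(rhs, (0.0, L_km), np.array([0, 1, 0], dtype=complex),
                    rtol=1e-8, atol=1e-10)
    return abs(sol.y[0, -1]) ** 2

print(prob_mu_to_e(4.0, 2100.0))   # e.g. the H2B baseline with a 4 GeV beam
```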
The detection of a given neutrino flavor is through its accompanying charged lepton produced by the charged-current interaction of the neutrino with the nucleons in the detector mass. For a neutrino energy $E_\nu$, which is small compared to the mass of the W and Z bosons but large enough so that the quasi-elastic effect is small, the charged-current cross sections are given by $\sigma_{\nu N} = 0.67\times 10^{-38}{\rm cm}^2 E_\nu(\rm GeV)$ for electron and muon neutrinos, and $\sigma_{\bar{\nu} N} = 0.34\times 10^{-38}{\rm cm}^2 E_\nu(\rm GeV)$ for electron and muon anti-neutrinos. For the tau neutrino, the above expression is subject to a threshold suppression. The threshold for the production of the tau is $E_T = m_\tau + {m_\tau^2\over 2m_N}= 3.46~{\rm GeV}$. A fit of the $\nu_\tau$ to $\nu_\mu$ cross section ratio as a function of the neutrino energy in terms of the ratio of two quadratic polynomials can be found in Ref. [@H2B]. The number of signal events of flavor $\beta$, i.e., the number of charged leptons of flavor $\beta$, from a neutrino beam of flavor $\alpha$, to be observed at a baseline L is given by $$\begin{aligned}
N_s = \int^{E_{\rm Max}}_{E_{\rm min}} \Phi(E_\nu,L)\sigma(E_\nu)
P_{\alpha\to\beta}(E_\nu, L) d E_\nu, \end{aligned}$$ where $\Phi(E_\nu,L)$ is the total neutrino flux spectrum including the detector size and running time period, $P_{\alpha\to\beta}(E_\nu, L)$ is the oscillation probability, $\sigma(E_\nu)$ the neutrino charged-current cross section, and $E_{\rm Max}$ and $E_{\rm min}$ are the maximum and the minimum energies of the beam.
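A short quadrature sketch of this integral is given below. The Gaussian `toy_flux` is a hypothetical stand-in for a narrow band beam profile (the realistic profiles of [@Japanesegroup; @beam] would replace it), the normalization is absorbed into `norm`, and the oscillation probability is passed in as a function such as the one sketched above.

```python
# Sketch of the signal-rate integral N_s = \int Phi(E) sigma(E) P_{alpha->beta}(E, L) dE.
import numpy as np

def sigma_cc(E_GeV, antineutrino=False):
    """Charged-current cross section per nucleon in cm^2, as quoted in the text."""
    return (0.34e-38 if antineutrino else 0.67e-38) * E_GeV

def toy_flux(E_GeV, E_peak=4.0, width=0.5):
    """Hypothetical narrow band profile peaked at E_peak (arbitrary normalization)."""
    return np.exp(-0.5 * ((E_GeV - E_peak) / width) ** 2)

def signal_events(prob, L_km, E_min=0.5, E_max=8.0, norm=1.0, n_points=400):
    """N_s for an oscillation probability prob(E_GeV, L_km), e.g. P(nu_mu -> nu_e)."""
    E = np.linspace(E_min, E_max, n_points)
    P = np.array([prob(e, L_km) for e in E])
    return norm * np.trapz(toy_flux(E) * sigma_cc(E) * P, E)

# total charged-current rate without oscillation, used for the background estimate below
def cc_events_no_osc(L_km, **kw):
    return signal_events(lambda E, L: 1.0, L_km, **kw)
```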
In a narrow band beam the neutrino flux is concentrated below a given energy $E_{\rm peak}$: the intensity peaks at $E_{\rm peak}$ and decreases rapidly below it. A wide band beam contains neutrinos with energies spread over a significant range. In our calculation we will use the realistic beam energies and profiles provided in [@Japanesegroup; @beam]. Some of the narrow band beams together with the wide band beam are plotted in Fig. 1. Here $dN_{cc}/dE_{\nu} \equiv \Phi(E_\nu, L)\sigma (E_\nu)$ is the energy distribution of the charged-current events $N_{cc}$ for one year of operation of a 100 kt detector at L=2100 km.
In oscillation experiments, especially in the case of electron neutrino appearance, the statistics are generally not large. Therefore the error is an important factor in the physics extraction. We use the approach of Ref. [@FoM] to estimate the possible statistical and systematic errors and to gain a sense of the goodness of the fit. For the electron counting experiments the errors and uncertainties arise from the following sources:
\(i) The statistical error in the measurement of the charged lepton of flavor $\beta$, which is as usual $\sqrt{N_s+N_b}$. Here $N_b$ is the number of measured background events and can be expressed as $$\begin{aligned}
N_b=f_\beta \int^{E_{\rm Max}}_{E_{\rm min}}
\Phi(E_\nu,L)\sigma(E_\nu) d E_\nu .
\end{aligned}$$
\(ii) The systematic uncertainty in the calculation of the number of background events, which can be denoted as $r_\beta N_b$.
\(iii) The systematic uncertainty in the beam flux and the cross section, which we denote as $g_\beta N_s$.
The total error is the sum in quadrature of all these uncertainties. In our calculation we will take $r_\beta=0.1$, $g_\beta=0.05$, and $f_{\beta}=0.01$.
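A minimal sketch of this error model, with the three contributions above combined in quadrature (the event numbers are whatever the rate calculation supplies):

```python
# Total error on the event count: statistical error sqrt(N_s + N_b) combined in
# quadrature with the systematics r_beta*N_b and g_beta*N_s, using the values in the text.
import math

def total_error(N_s, N_cc_no_osc, r_beta=0.1, g_beta=0.05, f_beta=0.01):
    N_b = f_beta * N_cc_no_osc               # background, as in the expression for N_b above
    statistical = math.sqrt(N_s + N_b)       # source (i)
    sys_background = r_beta * N_b            # source (ii)
    sys_flux_xsec = g_beta * N_s             # source (iii)
    return math.sqrt(statistical**2 + sys_background**2 + sys_flux_xsec**2)
```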
Numerical results for individual baselines {#sec3}
==========================================
Presently there are sizable errors in all the oscillation parameters. However, we envisage that at the H2B time, $\Delta{\rm m}^2_{32}$, $\Delta{\rm m}^2_{21}$, $\theta_{23}$, and $\theta_{12}$ will be fairly accurately determined. So we will not assign any specific errors to them. We focus our investigations on the following parameters and effects: matter, MSD sign, CP violation, and $\theta_{13}$.
Inputs
------
We present numerical results of a 5-year operation with a water Cerenkov detector. The detector size is assumed to be 100 kt for all baselines. Sizes other than 100 kt will be labeled whenever used.
The inputs of the mixing angles and MSD’s are from solar, atmospheric and CHOOZ experiments. For definiteness we take $\sin^2(2\theta_{12})=0.8$ and $\sin^2(2\theta_{23})=1.0$. In most of our results we use $\sin^2(2\theta_{13})=0.05$ for illustration and effects of larger and smaller values of $\theta_{13}$, $0.01 \leq \sin^2(2\theta_{13})\leq 0.1$, will be investigated. The inputs of MSD $\Delta{\rm m}^2_{21}$ and $\Delta{\rm m}^2_{32}$ are respectively given by $\Delta{\rm m}^2_{\rm sol}=5 \times 10^{-5}$ eV$^2$ and $\Delta{\rm m}^2_{\rm atm}=3 \times 10^{-3}$ eV$^2$.
Presently the signs of the MSD’s are unknown, so there are 4 possibilities: $$\begin{tabular}{lllll} \\ \hline\hline
& ~~~I & ~~~II & ~~~III & ~~~IV \\ \hline
$\Delta{\rm m}^2_{32}$ & ~~~+ & ~~~+ & ~~~~- & ~~~~- \\ \hline
$\Delta{\rm m}^2_{21}$ & ~~~+ & ~~~ - & ~~~+ & ~~~~- \\
\hline\hline
\end{tabular}$$
After showing the effects of all four sign combinations on the electron event numbers we will choose sign I for illustration.
Matter effects
--------------
In Tables 1 and 2 we show the event rates with and without matter effects at both baselines, for a narrow band beam with $E_{\rm peak}=4$ GeV and for a wide band beam, respectively. It is clear that for both narrow band and wide band beams the matter effect on the electron event number is significant at L=2100 km, but negligible at L=300 km. As expected, the $\nu_\mu$ and $\nu_\tau$ events show very little matter effect at either distance. The event rates at both baselines can be increased if different narrow band beams are used. For example, for L=2100 km the $E_{\rm peak}$=6 GeV beam has twice as many electron events as the $E_{\rm peak}=4$ GeV beam.
In order to look for the optimum beam energy to measure matter effects at a given baseline, we have examined the following ratio, which is approximately the statistical significance of the matter effect and is referred to in Ref. [@FoM] as the figure of merit, $$R_{\rm matter}=\frac{N_e\vert_{\rm with~ matter} - N_e\vert_{\rm without ~matter}}
{ \Delta N_e} .$$ Here $\Delta N_e$ is the total error of the electron event number, as discussed at the end of Sec. 2, without the matter effect. Figure 2 shows $R_{\rm matter}$ versus the baseline up to 3000 km for several narrow band beams for the four MSD signs combinations. We see that for L=2100 km the optimal narrow band beams for the matter effect are with peak energies in the range of $4\sim 6$ GeV. For example, as shown in Fig. 2 for the MSD sign I, the optimal narrow band beam has the peak energy around $E_{\rm peak}=4$ GeV. For L=300 km, as expected, there is very little statistical sensitivity to the matter effect at all available energies.
Given a narrow band beam with $E_{\rm peak}=4$ GeV for L=2100 km and $E_{\rm peak}=0.7$ GeV for L=300 km, Fig. 3 shows the electron event rate versus the CP phase with and without the matter effect. We see that if $\theta_{13}$ has a fixed value or a small range of uncertainty, the matter effect is experimentally measurable for L=2100 km but hardly observable for L=300 km. However, over the currently allowed range of $\theta_{13}$, $\sin^2(2\theta_{13})\leq 0.1$, it is difficult even for the 2100 km baseline to distinguish the matter effect from the vacuum case, for the following reason: since the electron event rate is proportional to $\sin^2(2\theta_{13})$, the event rate for $\sin^2(2\theta_{13})=0.03$ with the matter effect coincides with that for $\sin^2(2\theta_{13})=0.1$ in vacuum, as can be inferred from Fig. 3, so the two cases cannot be distinguished. This ambiguity is reinforced when the error is not negligible.
MSD sign effects
----------------
The sensitivity of the event rate to the sign of MSD for $\sin^2(2\theta_{13})=0.05$ is also shown in Tables 1 and 2 for $E_{\rm peak}=4$ GeV and $\delta=0$ for both baselines, and in Fig. 4 for different energies for the two baselines as functions of the CP phase. Tables 1 and 2 show that the electron event rates are sensitive to the sign of MSD at the 2100 km baseline. It is also interesting to note that for L=300 km there is sensitivity in distinguishing signs I and IV in which both MSD are positive or negative from signs II and III in which one is positive and the other negative. This general feature is valid for other values of $\theta_{13}$ once it is determined.
Fig. 4, in which we take $\sin^2(2\theta_{13})=0.05$, shows clearly that for L=2100 km signs I and II are well separated from III and IV for all values of the CP phase. Hence the sign of $\Delta{\rm m}^2_{32}$ should be readily determined with a moderate amount of electron neutrino appearance data. However, the separation of I from II depends on the value of the CP phase. In the regions of small, intermediate and large CP phase, the sign of $\Delta{\rm m}^2_{\rm sol}$ can be determined, but around $\delta = 130^\circ$ and $\delta = 280^\circ$ I and II are not distinguishable. The signs III and IV are almost inseparable over the whole range of $\delta$. Hence the sign of $\Delta{\rm m}^2_{21}$ will be very hard to determine if $\Delta{\rm m}^2_{32} < 0$; then the anti-neutrino beam is needed for the determination. For L=300 km, Fig. 4 shows that it is difficult to distinguish I, II, III and IV except at very special values of the CP phase.
Unfortunately, the above result holds only if $\theta_{13}$ is already known. As in the situation discussed at the end of the preceding subsection, the significant uncertainty in $\sin^2(2\theta_{13})$ muddies the water. As $\sin^2(2\theta_{13})$ decreases the electron event rate is also reduced. Therefore, it is difficult to distinguish signs I and II with a small $\theta_{13}$ from signs III and IV with a larger $\theta_{13}$. We demonstrate the decrease of the electron event rate with $\sin^2(2\theta_{13})$ in Fig. 4. Hence when the full range of the current uncertainty of $\theta_{13}$ is included, i.e., $\sin^2(2\theta_{13})<0.1$, the sensitivity for distinguishing the MSD sign is lost for both baselines.
CP violation effects
--------------------
Figures 3 and 4 show the electron event number versus the CP phase, modulo the matter effect. The typical total errors are also shown. The dominant error is found to be statistical, i.e., from source (i) described at the end of Sec. 2. We see that although the event rate varies significantly with the CP phase, the electron event rate is not a single-valued function of the CP phase, so the determination of $\delta$ from the electron event number is ambiguous even for a fixed value of $\theta_{13}$. The uncertainty in $\theta_{13}$ discussed in the two previous subsections makes the ambiguity even more serious.
The sensitivity of the electron event rate to the CP phase depends on the beam energy as shown in Fig. 5. At some of the beam energies, e.g., 2 and 10 GeV for $L$=2100 km and 0.7 GeV for $L$=300 km, the curves are quite flat, indicating a poor sensitivity to the CP phase at such beam energies. Furthermore, at almost no energy can one determine a unique CP phase from the electron event number at either 300 km or 2100 km.
To investigate the sensitivity we define two ratios involving the two CP conserving phases: $\delta=0^\circ$ and $\delta=180^\circ$: $$\begin{aligned}
R^{(0^\circ)}_{\rm CP}(\delta) &\equiv&
{N_e(\delta)-N_e(0^\circ) \over \Delta{N}_e(0^\circ)}, \\
R^{(180^\circ)}_{\rm CP}(\delta) &\equiv&
{N_e(\delta)-N_e(180^\circ) \over \Delta{N}_e(180^\circ)}, \end{aligned}$$ where $N_e(\delta)$, $N_e(0^\circ)$, and $N_e(180^\circ)$ are respectively the electron event numbers for CP phases $\delta$, 0$^\circ$ and 180$^\circ$, and $\Delta N_e(0^\circ)$ and $\Delta N_e(180^\circ)$ are the total errors at $\delta=0^\circ$ and $\delta=180^\circ$. We can now define the figure of merit [@FoM], i.e., the goodness of the fit, for the CP violation measurement as the smaller in magnitude of the two ratios: $$F_{CP}\equiv \left[R^{(0^\circ)}_{\rm CP}(\delta),~
R^{(180^\circ)}_{\rm CP}(\delta)\right]_{\rm min} .$$ In Fig. 6 we plot $F_{\rm CP}(\delta)$ versus the peak energy of the narrow band beam, separately for $L$=2100 and 300 km. We show six values of $\delta$=0$^\circ$, 30$^\circ$, 60$^\circ$, 90$^\circ$, 120$^\circ$, and 150$^\circ$. The curves satisfy approximately the relation $F_{CP}(180^\circ +\delta)\approx -F_{CP}(\delta)$. Hence the curves for $\delta=$180$^\circ$, 210$^\circ$, 240$^\circ$, 270$^\circ$, 300$^\circ$, and 330$^\circ$ can be inferred as the negatives of the above corresponding curves of $\delta$ less than 180$^\circ$. The left panel is for the 100 kt detector and the right panel shows the results for a 1000 kt detector. We see that for the 100 kt detector at both baselines the effects of the finite CP phases are within 1$\sigma$ of each other, including the CP conserving case. If we increase the detector size to 1000 kt, the CP violation effects can reach the $2\sigma$ level for the beams around $E_{\rm peak}\simeq$ 3-4 GeV and 6-7 GeV for $\delta=60^\circ$-$120^\circ$ and $240^\circ$-$300^\circ$ at L=2100 km, and around $E_{\rm peak} \simeq 0.7$ GeV for similar $\delta$ ranges at L=300 km.
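For reference, a direct transcription of this figure of merit (a sketch; the event numbers and total errors are supplied by whatever rate and error estimates are used, for instance those sketched earlier):

```python
# CP figure of merit: compare N_e(delta) with both CP-conserving phases and keep the
# ratio that is smaller in magnitude.
def F_CP(N_e_delta, N_e_0, N_e_180, err_0, err_180):
    r_0 = (N_e_delta - N_e_0) / err_0          # R^(0)_CP(delta)
    r_180 = (N_e_delta - N_e_180) / err_180    # R^(180)_CP(delta)
    return r_0 if abs(r_0) <= abs(r_180) else r_180
```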
Effects of the uncertainty of $\sin^2(2\theta_{13})$
----------------------------------------------------
In all the above results we have used $\sin^2(2\theta_{13})=0.05$. Since the $\nu_\mu \to \nu_e$ oscillation probability is proportional to $\sin^2(2\theta_{13})$, the latter is a sensitive parameter for the electron event number. Accordingly, counting electron events may provide a good measurement of the value of $\sin^2(2\theta_{13})$.
In Fig. 7 we present the electron event number versus the CP phase for different $\sin^2(2\theta_{13})$ values. The error bars indicate the size of the estimated total errors. From the total errors, we see how precisely the $\sin^2(2\theta_{13})$ value can be measured. For example, for L=2100 km the curve of $\sin^2(2\theta_{13})=0.08 (0.06)$ lies about $1.5\sigma$ ($3\sigma$) away from that of $\sin^2(2\theta_{13})=0.1$. Hence it is difficult to distinguish 0.1 from 0.08 along the entire curves. Furthermore, without knowing the CP phase, it may be difficult to distinguish 0.1 at one CP phase from 0.06 at another CP phase. This ambiguity is even more serious for L=300 km because there is more variation of the event number as a function of the CP phase.
Joint analysis of baselines 2100 and 300 km
===========================================
We imagine that the major goals of very long baseline experiments such as H2B are the confirmation of the matter effect and the determination of the MSD signs, the CP phase, and $\theta_{13}$. However, there exist difficulties in finding unique solutions for them, given the measured electron event rate, as demonstrated in the preceding section. We have discussed repeatedly in the previous section the ambiguities caused by the current wide range of uncertainty in $\theta_{13}$. There are other ambiguities which are caused by the multi-valuedness of the oscillation probability as a function of the oscillation parameters and the possibility of overlapping parameter regions. To illustrate the latter ambiguity let us consider Fig. 4. For simplicity of argument, let us ignore any possible errors. Suppose the measured electron event rate is 60 at the 300 km baseline for a narrow band beam with peak energy 0.7 GeV. Then the CP phase can be either around 0$^\circ$ or 150$^\circ$ for $\sin^2(2\theta_{13})=0.05$. Similarly, suppose a measurement at the 2100 km baseline with a 4 GeV beam gives an electron event rate of, say, 40. Then the CP phase can be either 150$^\circ$ or 300$^\circ$ for $\sin^2(2\theta_{13})=0.05$. Further, since the value of $\sin^2(2\theta_{13})$ is unknown, we in fact obtain a curve in the $\delta-\sin^2(2\theta_{13})$ plane for a given electron event number, as shown in Fig. 8. Hence the measurement from only one experiment, either at L=300 km or at L=2100 km, is not enough to determine the CP phase or the value of $\sin^2(2\theta_{13})$.
To illustrate the advantage of the joint analysis of two widely different baselines, we plot in Fig. 8 $\sin^2(2\theta_{13})$ vs $\delta$ for measured electron event rates at the 300 km and 2100 km baselines of 60 and 40 events, respectively, for MSD sign I. In the absence of any errors, the intersection of the curves gives unique values of both $\sin^2(2\theta_{13})$ and $\delta$. In reality the situation will be more complicated due to the presence of measurement errors, and hence the intersection of the two curves will cover a sizable area of the $\sin^2(2\theta_{13})$ vs $\delta$ plane. However, this example shows the extra leverage one can gain with two different baselines.
In this section we present some of our analyses of such joint measurements, taking advantage of superbeams like HIPA, which can offer multiple narrow band beams of different energies. We use different energies at the two baselines. We will plot the electron event number at the 2100 km baseline versus that at the 300 km baseline, looking simultaneously at two different parameters.
$\sin^2(2\theta_{13})$ and the CP phase $\delta$
------------------------------------------------
In Fig. 9 we show the electron event number at L=2100 km versus that at L=300 km for fixed MSD sign I. Each curve has a fixed value of $\sin^2(2\theta_{13})$, with the CP phase $\delta$ varying over the full range from $0^\circ$ to $360^\circ$. The $\delta=0^\circ$ point is marked by a solid dot and the $\delta=180^\circ$ point by a cross. The direction of increasing $\delta$ is indicated by the arrow on the curve. The curves are generally ellipses, and the eccentricity of each ellipse is determined by the specific beam energies of the two baselines.
We fix 0.7 GeV for the 300 km baseline and allow the energy at 2100 km to change. The upper diagram of Fig. 9 is at 4 GeV for 2100 km. When $\sin^2(2\theta_{13})$ increases the ellipse moves towards the upper right, i.e., increasing the electron event rate for both baselines. This is expected from the fact that the oscillation probability $\nu_\mu\rightarrow \nu_e$ is proportional to $\sin^2(2\theta_{13})$. Since the ellipses of neighboring values of $\sin^2(2\theta_{13})$ overlap significantly, the values of $\delta$ and $\sin^2(2\theta_{13})$ cannot be determined uniquely, reflecting again the ambiguities discussed in the preceding section. However, there are energies at which the overlap of the ellipses is minimized. The lower diagram of Fig. 9 shows that the ellipses of constant $\theta_{13}$ collapse into lines when the beam energy of the 2100 km baseline is 6.3 GeV. So in principle the joint measurement allows us to narrow down the allowed range of $\sin^2(2\theta_{13})$. For the lines each measurement still allows two values of $\delta$. But the two values of $\delta$ which fall on top of one another on the line segment will be separated when the line becomes an ellipse. So measurements at both 6.3 and 4 GeV will offer a better possibility to determine the values of $\sin^2(2\theta_{13})$ and $\delta$ simultaneously.
In Table 3 we present, for the case of MSD sign I, some $E_{\rm peak}$ values in GeV of narrow band beams where the ellipses of $N_e(300)$ versus $N_e(2100)$, as the CP phase varies from $0^\circ$ to $360^\circ$, collapse into lines. At these energies the curves for MSD sign II are ellipses of high eccentricity which approximate lines. For MSD signs III and IV, and in the absence of the matter effect, the curves are ellipses of very high eccentricity. At these energies the combined measurements of electron events at L=2100 km and L=300 km can provide a better measurement of $\sin^2(2\theta_{13})$.
MSD sign and the CP phase $\delta$
----------------------------------
In Fig. 10 we present similar results, but for different MSD signs with fixed $\sin^2(2\theta_{13})=0.05$. The results without the matter effect are also plotted, with the dotted curves denoting MSD sign II or III and the dashed ones I or IV. In the absence of the matter effect MSD signs I and IV give the same results, as do MSD signs II and III, as already shown in Tables 1 and 2. For the almost overlapping curves of MSD signs III and IV with matter effects, the solid ones denote III and the dotted ones IV.
It is clear from Fig. 10 that in the lower diagram, i.e., 6.3 GeV for the 2100 km baseline, it is quite easy to differentiate MSD signs I and II from III and IV, and from the case without the matter effect. To make better measurements it is again advantageous to combine measurements at an energy where the curves collapse into lines with measurements at an energy where they remain ellipses.
Conclusion {#sec4}
==========
In the above study of the event rates and their sensitivity to the various oscillation parameters, we found the following:
At the distance L=2100 km, a narrow band beam with peak energy of about 6 GeV is optimum for measuring CP violation effects and about 5 GeV for measuring matter effects.
To measure the CP violation effect at a shorter distance such as L=300 km, a narrow band beam with lower peak energy ($\sim 0.7$ GeV) is preferable. But the matter effect is hardly observable at such a shorter baseline.
The two baselines, 300 km and 2100 km, are complementary to each other. Through the joint analysis of the two baselines, some of the ambiguities associated with the measurement at either baselines may be resolved.
With the optimum narrow band beam, a 5-year operation of a 100 kt water Cerenkov detector at a very long distance such as L=2100 km has the following physics prospects:
The matter effects can be observed.
The sign of $\Delta{\rm m}^2_{32}$ may be determined.
The sign of $\Delta{\rm m}^2_{21}$ may be determined only in favorable situations.
Evidence exceeding 2-$\sigma$ of a CP violating phase may be seen in favorable cases for a detector size of 1000 kt or with a much longer running time.
Combined with the analyses of L=300 km, the parameter $\sin^2(2\theta_{13})$ may be measured and the matter effects are more clearly determined.
In this article we have focused exclusively on the $\nu_\mu\rightarrow \nu_e$ channel. The investigation of $\tau$ appearance and the inclusion of the $\bar{\nu}_\mu$ beam option in the analysis, which is needed in the cases of MSD signs III and IV, i.e., $\Delta{\rm m}^2_{32} < 0$, are left for a future investigation. There we will also make a more complete search for the best energies of the two baselines for the various parameters.
We finally note that the statistics are generally low in all the cases discussed. Running with a higher energy narrow band beam will increase the statistics. However, that may be disfavored by the figure of merit (signal to error ratio). Another way to increase the statistics is to increase the detector mass. It has been pointed out, however, that there is a saturation problem [@saturation] caused by the systematic errors, which are of the form of the errors of types (ii) and (iii) discussed at the end of Sec. 2. These errors increase linearly with the number of events, rather than with the square root of the number of events as in the case of the statistical error. Hence, when the mass of the detector is increased so that the number of events becomes sufficiently large, the systematic error becomes dominant. Beyond that point, a further increase of the detector size may no longer be beneficial. In Fig. 11 we show the ratio of $\Delta{N}_e$ to $N_e$ as a function of the detector mass. We see that according to our general error estimate the best $\Delta{N}_e$ to $N_e$ ratio that can be attained is 6%. When the detector reaches 1000 kt, the benefit of further increasing the detector size is no longer significant.
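The saturation can be seen with a few lines of arithmetic, as sketched below; the 100 kt event numbers are hypothetical placeholders and only the scaling with detector mass matters, so the asymptotic ratio obtained here is illustrative rather than the 6% quoted above.

```python
# Scaling toy 100 kt event numbers linearly with the detector mass: the statistical term
# grows like sqrt(mass) while the type (ii) and (iii) systematics grow like mass, so the
# relative error Delta(N_e)/N_e levels off instead of improving indefinitely.
import math

r_beta, g_beta, f_beta = 0.1, 0.05, 0.01
N_s_100kt, N_cc_100kt = 40.0, 4000.0             # hypothetical 100 kt signal and CC totals
for mass_kt in (100, 300, 1000, 3000):
    scale = mass_kt / 100.0
    N_s, N_b = N_s_100kt * scale, f_beta * N_cc_100kt * scale
    err = math.sqrt(N_s + N_b + (r_beta * N_b) ** 2 + (g_beta * N_s) ** 2)
    print(f"{mass_kt:>5} kt  Delta(N_e)/N_e = {err / N_s:.3f}")
```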
Acknowledgment {#acknowledgment .unnumbered}
==============
We thank K. Hagiwara and N. Okamura for discussions. We also thank our colleagues of the H2B collaboration [@H2B] for support. This work is supported in part by DOE Grant No. DE-FG02-G4ER40817.
[99]{}
Y. Fukuda [*et al.*]{}, Phys. Rev. Lett. 81 (1998) 1562.
L. Wolfenstein, Phys. Rev. [**D17**]{}, 2367 (1978); [**D20**]{}, 2634 (1979); S.P. Mikheyev and A. Yu. Smirnov, Yad. Fiz. [**42**]{}, 1441 (1986); Nuovo Cim. [**9C**]{}, 17 (1986).
HIPA: A multipurpose high intensity proton synchrotron at both 50 GeV and 3 GeV to be constructed at the Jaeri Tokai Campus, Japan, approved in December 2000 by the Japanese funding agency. The long baseline neutrino oscillation experiment is one of the projects of the particle physics program of the facility. More about HIPA can be found at the website: “http://jkj.tokai.jaeri.go.jp”.
H. Chen, et al., [*Study Report: H2B, Prospect of a very Long Baseline Neutrino Oscillation Experiment, HIPA to Beijing*]{}, hep-ph/0104266.
M. Aoki, K. Hagiwara, U. Hayato, T. Kobayashi, T. Nakaya, K. Nishikawa and N. Okamura, hep-ph/0104220.
Y.-F. Wang, K. Whisnant and Bing-Lin Young, hep-ph/0109053, to appear in Phys. Rev. D.
Y.-F. Wang, hep-ex/0010081, talk given at “[*NEW Initiatives in Lepton Flavor Violation and Neutrino Oscillations with Very Long Intense Muon Neutrino Sources*]{}”, Oct. 2-6, 2000, Hawaii, USA.
J2K: Y. Ito, et al., [*Letter of Intent: A Long Baseline Neutrino Oscillation Experiment using the JHF 50 GeV Proton-Synchrotron and the Super-Kamiokande Detector*]{}, JHF Neutrino Working Group, Feb. 3, 2000. The JHF is renamed as HIPA.
For a detailed discussion of the parameter counting, see V. Barger, Y.-B. Dai, K. Whisnant, Bing-Lin Young, Phys. Rev. [**D59**]{}, 113010 (1999).
A. M. Dziewonski and D. L. Anderson, Phys. Earth Planet. Inter. 25, 297 (1981).
F. D. Stacey, [*Physics of the Earth*]{} (John Wiley & Sons, 1977); D. J. Anderson, [*Theory of the Earth*]{} (Blackwell Scientific Pub., 1989).
The HIPA superbeam profiles are available at http://neutrino.kek.jp/JHF-VLBL.
V. Barger, S. Geer, R. Raja, and K. Whisnant, Phys. Rev. [**D63**]{}, 113011 (2001), hep-ph/0012017.
----------- ---------- ------------- ----------------- ----------
Baseline    MSD sign   electron \#   muon \#           tau \#
L=2100 km   I          34 (10)       430 (435)         10 (11)
            II         46 (16)       405 (415)         11 (11)
            III        3 (16)        413 (415)         12 (11)
            IV         3 (10)        427 (435)         11 (11)
L=300 km    I          159 (157)     39408 (39407)     72 (72)
            II         119 (116)     39535 (39535)     71 (71)
            III        114 (116)     39535 (39535)     71 (71)
            IV         154 (157)     39408 (39407)     72 (72)
----------- ---------- ------------- ----------------- ----------
: Event rates of 5-year operation with (without) matter effects for different MSD sign choices for a narrow band beam of $E_{\rm peak}=4$ GeV. The CP-phase is taken to be zero. []{data-label="table_1"}
----------- ---------- ------------- ------------------- -----------
Baseline    MSD sign   electron \#   muon \#             tau \#
L=2100 km   I          151 (96)      2313 (2311)         448 (453)
            II         151 (90)      2326 (2333)         443 (449)
            III        39 (90)       2335 (2333)         454 (449)
            IV         49 (96)       2308 (2311)         458 (453)
L=300 km    I          453 (443)     271536 (271535)     731 (731)
            II         359 (348)     271842 (271842)     718 (718)
            III        337 (348)     271843 (271842)     718 (718)
            IV         431 (443)     271535 (271535)     731 (731)
----------- ---------- ------------- ------------------- -----------
: Same as Table 1, but for a wide band beam.[]{data-label="table_2"}
--------------------- ------------------------------------------
$E_{\rm peak}(300)$   $E_{\rm peak}(2100)$
0.70                  0.750   1.215   1.85   2.30   6.30
0.80                  0.820   1.10    1.98   2.25   7.60
0.85                  0.820   1.20    2.05   2.19   8.30
--------------------- ------------------------------------------
: Some $E_{\rm peak}$ values (GeV) of narrow band beams where the ellipses of Ne(300) versus Ne(2100) as CP Phase varies from 0$^\circ$ to 360$^\circ$ collapse into line segments. The MSD sign is assumed to be case I. []{data-label="table_3"}
|
---
abstract: 'Let $R=S/I$ where $S=k[T_1, \ldots, T_n]$ and $I$ is a homogeneous ideal in $S$. The acyclic closure $R\langle Y \rangle$ of $k$ over $R$ is a DG algebra resolution obtained by means of Tate’s process of adjoining variables to kill cycles. In a similar way one can obtain the minimal model $S[X]$, a DG algebra resolution of $R$ over $S$. By a theorem of Avramov there is a tight connection between these two resolutions. In this paper we study these two resolutions when $I$ is the edge ideal of a path or a cycle. We determine the behavior of the deviations $\varepsilon_i(R)$, which are the number of variables in $R\langle Y \rangle$ in homological degree $i$. We apply our results to the study of the $k$-algebra structure of the Koszul homology of $R$.'
address:
- |
Adam Boocher\
School of Mathematics\
University of Edinburgh\
James Clerk Maxwell Building, Mayfield Road\
Edinburgh EH9 3JZ, Scotland
- |
Alessio D’Alì\
Dipartimento di Matematica\
Università degli Studi di Genova\
Via Dodecaneso 35\
16146 Genova, Italy
- |
Eloísa Grifo\
Department of Mathematics\
University of Virginia\
141 Cabell Drive, Kerchof Hall\
Charlottesville, VA 22904, USA
- |
Jonathan Montaño\
Department of Mathematics\
Purdue University\
150 North University Street\
West Lafayette, IN 47907, USA
- |
Alessio Sammartano\
Department of Mathematics\
Purdue University\
150 North University Street\
West Lafayette, IN 47907, USA
author:
- Adam Boocher
- 'Alessio D’Alì'
- Eloísa Grifo
- Jonathan Montaño
- Alessio Sammartano
title: Edge ideals and DG algebra resolutions
---
Introduction
============
Let $S= k[T_1, \ldots, T_n]$, $I\subseteq (T_1,\ldots, T_n)^2$ be a homogeneous ideal and $R = S/I$. Endowing free resolutions over $R$ with multiplicative structures can be a powerful technique in studying homological properties of the ring. The idea of multiplicative free resolution is made precise by the notion of a [**Differential Graded (DG) algebra resolution**]{} (cf. [@PeevaGradedSyzygies Ch. 31]). Several interesting resolutions admit a DG algebra structure: examples include the Koszul complex, the Taylor resolution of monomial ideals, the Eliahou-Kervaire resolution (cf. [@0Borel]), the minimal free resolution of $k$ (cf. [@Gulliksen], [@Schoeller]), and free resolutions of length at most 3 (cf. [@BuchEis]). In general, though, minimality and DG algebra structure are incompatible conditions on resolutions of an $R$-algebra: obstructions were discovered and used in [@AvramovObstructions] to produce perfect ideals $\fa\subseteq R$ with prescribed grade $\gs 4$ such that the minimal free $R$-resolution of $R/\fa$ admits no DG algebra structure.
Nevertheless, it is always possible to obtain DG algebra resolutions of a factor ring $R/\fa$ by a recursive process that mimics the construction of the minimal free resolution of a module; we refer to [@Avramov6Lectures] for more details and background. Let $\{a_1, \ldots, a_r\}$ be a minimal generating set of $\fa$ and start with the Koszul complex on $a_1, \ldots, a_r$. Apply inductively Tate’s process of adjoining variables in homological degree $i+1$ to kill cycles in homological degree $i$ whose classes generate the $i$-th homology minimally (cf. [@Tate]). Using exterior variables to kill cycles of even degrees and polynomial variables to kill cycles of odd degrees we obtain a DG algebra resolution of $R/\fa$, called a [**minimal model**]{} of $R/\fa$ over $R$ and denoted by $R[X]$, where $X$ is the collection of all the variables adjoined during the process (cf. [@Avramov6Lectures 7.2]). Using divided power variables instead of polynomial variables we obtain another DG algebra resolution of $R/\fa$, called an [**acyclic closure**]{} of $R/\fa$ over $R$ and denoted by $R \langle Y \rangle$; similarly, $Y$ is the collection of all the variables adjoined (cf. [@Avramov6Lectures 6.3]). Both objects are uniquely determined up to isomorphisms of DG algebras. The minimal model and the acyclic closure are isomorphic if $R$ is a complete intersection or if $\mathbb{Q}\subseteq R$, but they differ in general. The set $X_i$ (resp. $Y_i$) of variables adjoined to $R[X]$ (resp. $R\langle Y \rangle$) in homological degree $i$ has finite cardinality.
A result of Avramov relates the minimal model $S[X]$ of $R$ over the polynomial ring $S$ to the acyclic closure $R\langle Y \rangle$ of the residue field $k$ over $R$: the equality $\Card(X_i)=\Card(Y_{i+1})$ holds for all $i\gs 1$ (cf. [@Avramov6Lectures 7.2.6]). We remark that such resolutions are considerably hard to describe explicitly. The growth of $S[X]$ and $R\langle Y \rangle$ is determined by the integers $\varepsilon_i(R) = \Card (Y_{i})$, known as the [**deviations**]{} of $R$ (because they measure how much $R$ deviates from being regular or a complete intersection, cf. [@AvramovCI], [@Avramov6Lectures Section 7.3]). The deviations are related to the Poincaré series $P^R_k(z) = \sum_{i\gs 0} \dim_k \Tor^R_i(k,k)z^i$ by the following formula (cf. [@Avramov6Lectures 7.1.1]) $$P^R_k(z) = {\frac{ \prod_{i \in 2\mathbb{N}+1} (1+z^i)^{\varepsilon_i(R)} }{ \prod_{i \in 2\mathbb{N}} (1-z^i)^{\varepsilon_i(R)} } }.$$
In this paper we study the minimal model $S[X]$ of $R$ over $S$ and the acyclic closure $R\langle Y \rangle$ of $k$ over $R$ when $R$ is a [**Koszul algebra**]{}, i.e. when $k$ has a linear resolution over $R$. It is well known that for Koszul algebras $R$ the Poincaré series is related to the Hilbert series by the equation $$\label{EquationHilbertPoincare}
P^R_k(z) {\operatorname{HS}_R(-z)} = 1.$$ Furthermore, $R$ is Koszul if $I$ is a quadratic monomial ideal, in particular if $I$ is the edge ideal of a graph. See [@PeevaGradedSyzygies Ch. 34] and the references therein for details.
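As a computational aside (ours, not part of the paper), Equation \[EquationHilbertPoincare\] together with the product expansion of the Poincaré series allows one to recover the deviations of a Koszul algebra from its Hilbert series alone. The sketch below does this for $R=S/I(\mathcal{P}_n)$, using the standard fact that the Hilbert series of a Stanley–Reisner ring is $\sum_F (t/(1-t))^{|F|}$, so that $\operatorname{HS}_R(t)$ is the independence polynomial of the path evaluated at $t/(1-t)$; all series are truncated, and the final loop peels off one factor of the product expansion at a time.

```python
# Computational sketch: recover the deviations eps_i of R = S/I(P_n) from its Hilbert
# series via P^R_k(z) = 1/HS_R(-z) (R is Koszul) and the product expansion of the
# Poincare series.  All power series are truncated at degree N.

N = 8           # truncation order: we recover eps_1, ..., eps_N
n = 7           # number of vertices of the path P_n (assumed >= 3)

def mul(a, b):
    """Product of two truncated power series (coefficient lists of length N+1)."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                c[i + j] += ai * b[j]
    return c

def inv(a):
    """Inverse of a truncated power series with a[0] == 1."""
    b = [0] * (N + 1)
    b[0] = 1
    for k in range(1, N + 1):
        b[k] = -sum(a[j] * b[k - j] for j in range(1, k + 1))
    return b

def pow_series(a, e):
    r = [1] + [0] * N
    for _ in range(e):
        r = mul(r, a)
    return r

# independence polynomial of the path: i_k(x) = i_{k-1}(x) + x * i_{k-2}(x)
i_prev2, i_prev = [1] + [0] * N, [1, 1] + [0] * (N - 1)        # i_0 and i_1
for _ in range(2, n + 1):
    i_new = i_prev[:]
    for j in range(1, N + 1):
        i_new[j] += i_prev2[j - 1]
    i_prev2, i_prev = i_prev, i_new
indep = i_prev                                                  # coefficients of i_n(x)

# HS_R(-z): substitute x = -z/(1+z) into the independence polynomial
subst = mul([0, -1] + [0] * (N - 1), inv([1, 1] + [0] * (N - 1)))
hs_minus_z, term = [0] * (N + 1), [1] + [0] * N
for c in indep:
    hs_minus_z = [h + c * t for h, t in zip(hs_minus_z, term)]
    term = mul(term, subst)

poincare = inv(hs_minus_z)      # P^R_k(z), valid since R is Koszul

# peel off one factor (1+z^i)^{eps_i} (i odd) or (1-z^i)^{-eps_i} (i even) at a time
eps, Q = {}, poincare
for i in range(1, N + 1):
    eps[i] = Q[i]
    factor = [0] * (N + 1)
    factor[0], factor[i] = 1, (1 if i % 2 else -1)
    Q = mul(Q, pow_series(inv(factor) if i % 2 else factor, eps[i]))

print(eps)      # expect eps[1] = n (the variables) and eps[2] = n - 1 (the edges)
```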
In Section \[SectionEdgeIdeals\] we study the deviations of $R$ when $I$ is the edge ideal of a cycle or a path. In order to do so, we exploit the multigraded structure of $R\langle Y \rangle$. In Theorem \[TheoremExistenceSequences\] we determine the deviations $\varepsilon_i(R)$ for $i=1, \ldots, n$; these values are determined by two sequences $\{\alpha_s\}$ and $\{\gamma_s\}$, that are independent of the number of vertices $n$.
In Section \[SectionKoszulHomology\] we use the minimal model $S[X]$ to investigate the Koszul homology $H^R= \Tor^S(R,k)$ of $R$. Its $k$-algebra structure encodes interesting homological information on $R$: for instance, $R$ is a complete intersection if and only if $H^R$ is an exterior algebra on $H^R_1$ (cf. [@Tate]), and $R$ is Gorenstein if and only if $H^R$ is a Poincaré algebra (cf. [@AvramovGolod]). When $R$ is a Golod ring $H^R$ has trivial multiplication (cf. [@Golod]). It is not clear how the Koszul property of $R$ is reflected in the $k$-algebra structure of $H^R$. Results in this direction have been obtained by Avramov, Conca, and Iyengar in [@AvramovConcaIyengar] and [@AvramovConcaIyengar2]. We extend their theorem [@AvramovConcaIyengar2 5.1] to show that if $R$ is Koszul then the components of $H^R$ of bidegrees $(i, 2i-1)$ are generated in bidegrees $(1,2)$ and $(2,3)$, see Theorem \[KoszulSecondDiag\]. While these theorems tell us that a part of the $k$-algebra $H^R$ is generated in the linear strand if $R$ is Koszul, in general there may be minimal algebra generators in other positions (see Remark \[RemarkKoszulHomologyLinearStrand\]). In fact, in Theorem \[KoszulHomologyPolygons\] we give a complete description of the $k$-algebra generators of the Koszul homology of the algebras considered in Section \[SectionEdgeIdeals\]: for edge ideals of cycles, the property of being generated in the linear strand depends on the residue of the number of vertices modulo 3.
Deviations of edge ideals of paths and cycles {#SectionEdgeIdeals}
=============================================
Throughout this section we consider $S=k[T_1, \ldots, T_n]$ as an $\NN^n$-graded algebra by assigning to each monomial of $S$ the multidegree $\operatorname{mdeg}(T_1^{v_1}\cdots T_n^{v_n})=\fv =(v_1, \ldots, v_n)$. If $I \subseteq S$ is a monomial ideal, then $R=S/I$ inherits the multigrading from $S$. Let $t_i$ be the image of $T_i$ in $R$ and denote by ${\beta_{i,\mathbf{v}}}^R(k) = \dim_k \Tor_i^R(k,k)_{\mathbf{v}}$ the multigraded Betti numbers of $k$ over $R$, and by $P_k^R(\zi,\fxi)=\sum_{i,\fv}{\beta_{i,\mathbf{v}}}^R(k)\zi^i\fxi^\fv$ the multigraded Poincaré series of $R$, where $\fxi^\fv=x_1^{v_1}\cdots x_n^{v_n}$. There are uniquely determined nonnegative integers $\ee_{i,\fv}=\ee_{i,\fv}(R)$ satisfying the infinite product expansion (cf. [@Berglund Remark 1]) $$\label{MultigradedExpansion}
P_k^R(\zi,\fxi)=\prod_{i\gs 1, \fv \in \NN^n}\frac{(1+\zi^{2i-1}\fxi^{\fv})^{\ee_{2i-1,\fv}}}{(1-\zi^{2i}\fxi^{\fv})^{\ee_{2i,\fv}}}.$$ The numbers $\ee_{i,\fv}$ are known as the [**multigraded deviations**]{} of $R$. They refine the usual deviations in the sense that $\ee_{i} =\sum_{\fv \in \NN^n}\ee_{i,\fv}$. We can repeat the constructions in the Introduction respecting the multigrading. In particular, we can construct an acyclic closure $R\langle Y \rangle$ of $k$ over $R$ and hence $$\ee_{i,\fv}(R)=\Card(Y_{i,\fv}),$$ where $Y_{i,\fv}$ denotes the set of variables in homological degree $i$ and internal multidegree $\fv$. Similarly, we can construct a minimal model $S[X]$ of $R$ over $S$ and denote by $X_{i,\fv}$ the variables in homological degree $i$ and internal multidegree $\fv$; the multigraded version of [@Avramov6Lectures 7.2.6] holds, see [@Berglund Lemma 5].
Let $\operatorname{HS}_R(\fxi)=\sum_{\fv\in\NN^n}\dim_k(R_{\fv})\fxi^{\fv}$ be the multigraded Hilbert series of $R$. The following fact is folklore. We include here its proof for the reader’s convenience.
\[MultPoncareHilbert\] Let $S=k[T_1,\ldots, T_n]$ and $I$ be a monomial ideal of $S$. Then $$P^R_k(-1,\fxi)\operatorname{HS}_R(\fxi)=1.$$
Let $\FF$ be the augmented minimal free resolution of $k$ over $R$ and fix $\fv\in\NN^n$. Let $\FF_{\fv}$ be the strand of $\FF$ in multidegree $\fv$: $$\FF_{\fv}:\cdots\rightarrow F_{2,\fv}\rightarrow F_{1,\fv}\rightarrow F_{0,\fv}=R_\fv \rightarrow k_{\fv}\rightarrow 0.$$ Since $\FF_{\fv}$ is an exact complex of $k$-vector spaces, we have $\sum_{i\gs 0}(-1)^i\dim_k F_{i,\fv}=1$ if $\fv=(0,\ldots,0)$ and $0$ otherwise. On the other hand, it is easy to see that this alternating sum is equal to the coefficient of $\fxi^{\fv}$ in $P^R_k(-1,\fxi)\operatorname{HS}_R(\fxi)$ and the conclusion follows.
Let $\mathcal{G}$ be a graph with vertices $\{1, \ldots, n\}$. The [**edge ideal**]{} of $\mathcal{G}$ is the ideal $I(\mathcal{G})\subseteq S$ generated by the monomials $T_iT_j$ such that $\{i,j\}$ is an edge of $\mathcal{G}$. We denote the [**$n$-path**]{} by $\P_n$ and the [**$n$-cycle**]{} by $\C_n$, the graphs whose edges are respectively $\big\{\{1,2\},\,\ldots,\, \{n-1,n\}\big\}$ and $\big\{\{1,2\},\,\ldots,\, \{n-1,n\}, \{n,1\}\big\}$ (see Figure \[pictures\]).
[Figure \[pictures\]: the path $\mathcal{P}_5$ on vertices $1,\ldots,5$ and the cycle $\mathcal{C}_6$ on vertices $1,\ldots,6$.]
Given a vector $\fv = (v_1, \ldots, v_n)\in \mathbb{N}^n$, denote by $\|\fv\|=\sum_i v_i$ the [**1-norm**]{} of $\fv$. If $R$ is a Koszul algebra, $\beta_{i,\fv}^R(k) \ne 0$ only if $i =\|\fv\|$, and thus a deviation $\ee_{i,\fv} $ is nonzero only if $i =\|\fv\|$; for this reason we denote $\ee_{\|\fv\|,\fv}$ simply by $\ee_{\fv}$ for the rest of the section. The [**support**]{} of a vector $\fv =(v_1, \ldots, v_n)$ is $\Supp(\fv)=\{ i \, : \, v_i\ne 0\}$. The set $\Supp(\fv)$ is said to be an [**interval**]{} if it is of the form $\{a, a+1, \ldots, a+b\}$ for some $a$ and some $b\gs 0$, while it is said to be a [**cyclic interval**]{} if it is an interval or a subset of the form $\{1, 2, \ldots, a, b, b+1, \ldots, n\}$ for some $a<b$. These definitions are motivated by Lemma \[LemmaConsecutiveDeviations\], which plays an important role in the rest of the paper as it narrows down the possible multidegrees of nonzero deviations.
The support of $(1,2,1,0,0)$ is an interval.
The support of $(1,0,0,1,2)$ is a cyclic interval but not an interval.
The support of $(1,0,2,0,1)$ is not a cyclic interval.
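These support conditions are easy to test mechanically; the small helper below (ours, for illustration only) checks the three examples above.

```python
# Helpers (for illustration) deciding whether the support of a vector in N^n is an
# interval or a cyclic interval, in the sense defined above.
def support(v):
    return {i for i, vi in enumerate(v, start=1) if vi != 0}

def is_interval(v):
    s = support(v)
    return bool(s) and s == set(range(min(s), max(s) + 1))

def is_cyclic_interval(v):
    n, s = len(v), support(v)
    if is_interval(v):
        return True
    # otherwise the complement must be a nonempty interval not touching positions 1 or n
    comp = set(range(1, n + 1)) - s
    return (bool(s) and comp == set(range(min(comp), max(comp) + 1))
            and 1 not in comp and n not in comp)

print(is_interval((1, 2, 1, 0, 0)))          # True
print(is_cyclic_interval((1, 0, 0, 1, 2)))   # True, although it is not an interval
print(is_cyclic_interval((1, 0, 2, 0, 1)))   # False
```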
\[LemmaConsecutiveDeviations\] Let $S=k[T_1,\ldots,T_n]$ with $n\gs 3$ and $\fv\in \NN^n$.
1. If $\varepsilon_{\mathbf{v}}(S/{I(\mathcal{P}_{n})})>0$, then $\Supp(\fv)$ is an interval.
2. If $\varepsilon_{\mathbf{v}}(S/{I(\mathcal{C}_{n})})>0$, then $\Supp(\fv)$ is a cyclic interval.
We only prove (a) as the proof of (b) is the same with straightforward modifications. Let $R\langle Y\rangle $ be an acyclic closure of $k$ over $R$. We regard the elements of $R\langle Y\rangle $ as polynomials in the variables $Y$ and ${t_1, \ldots, t_n}$. If $V \subseteq R\langle Y \rangle$ is a graded vector subspace, we denote by $V_{i,\fv}$ the graded component of $V$ of homological degree $i$ and internal multidegree $\fv$. We show by induction on $i$ that the support of the multidegree of each variable in $Y_i$ is an interval. For $i=1$, the statement is clear, since the multidegrees are just the basis vectors of $\NN^n$. The case $i=2$ follows from [@Berglund Lemma 5], because the multidegrees of the variables in $Y_2$ are the same as those of the generators of ${I(\mathcal{P}_{n})}$.
Now let $i> 2$, $y\in Y_i$ and $\fv=\operatorname{mdeg}(y)$, so that $\|\fv\|=i$. Assume by contradiction that $\Supp(\fv)$ is not an interval. Then we can write $\fv=\fv_1+\fv_2$ for two nonzero vectors $\fv_1$ and $\fv_2$ such that $\Supp(\fv_1)$ and $\Supp(\fv_2)$ are disjoint and do not contain two adjacent indices. By construction of $R\langle Y\rangle$, the variable $y$ is adjoined to kill a cycle $z$ in $R\langle Y_{\ls i-1}\rangle$ whose homology class is part of a minimal generating set of $H_{i-1}(R\langle Y_{\ls i-1}\rangle)$. We will derive a contradiction by showing that $z$ is a boundary in $R\langle Y_{\ls i-1}\rangle$. Note that $z \in R\langle Y_{\ls i-1}\rangle_{\|\fv\|-1,\fv}$, and by induction the variables in $Y_{\ls i-1}$ have multidegrees whose supports are intervals, thus we can write $$z=\sum_j \left( A_j p_j + B_j q_j \right) ,$$ where $A_j \in R\langle Y_{\ls i-1} \rangle_{\|\fv_1\|,\fv_1}$ and $B_j \in R\langle Y_{\ls i-1} \rangle_{\|\fv_2\|,\fv_2}$ are distinct monomials and $p_j \in R\langle Y_{\ls i-1} \rangle_{\|\fv_2\|-1,\fv_2}$ and $q_j \in R\langle Y_{\ls i-1} \rangle_{\|\fv_1\|-1,\fv_1}$ are homogeneous polynomials. Since $z$ is a cycle, the Leibniz rule yields $$\begin{aligned}
\begin{split}
0&=\partial(z)\\
&=\sum \partial(A_j)p_j + (-1)^{\|\fv_1\|} \sum A_j\partial(p_j)+ \sum \partial(B_j)q_j + (-1)^{\|\fv_2\|} \sum B_j\partial(q_j).
\end{split}\end{aligned}$$ In the sum above, each monomial $A_j$ only appears in $A_j\partial(p_j)$, therefore $\partial(p_j)=0$. However, by construction of $R\langle Y \rangle$ the homology of the DG algebra $R\langle Y_{\ls i-1}\rangle$ vanishes in the homological degree $\|\fv_2\|-1<\|\fv\|-1=i-1$ hence $p_j$ is a boundary in $R\langle Y_{\ls i-1}\rangle$. Likewise, $q_j$ is a boundary.
Let $P_j, Q_j$ be homogeneous polynomials such that $\partial(P_j)=p_j$, $\partial(Q_j)=q_j$, so that $$z=\sum A_j\partial(P_j)+ \sum B_j\partial(Q_j).$$ Since $\partial(B_jQ_j)=\partial(B_j)Q_j+(-1)^{\|\fv_2\|}B_j\partial(Q_j),$ we have that $z$ is a boundary if and only if the cycle $
\sum A_j\partial(P_j)- (-1)^{\|\fv_2\|} \sum \partial(B_j)Q_j
$ is a boundary. In other words, by grouping together the terms in the two sums, we may assume without loss of generality that the original cycle has the form $$\label{EquationFormCycle}
z= \sum p_jq_j
\qquad
\mbox{ with }
\quad
p_j \in R\langle Y_{\ls i-1}\rangle_{\|\fv_1\|, \fv_1}
\quad
\mbox{ and }
\quad
q_j \in \partial \left( R\langle Y_{\ls i-1}\rangle_{\|\fv_2\|, \fv_2}\right)$$ with the $q_j$ linearly independent over $k$.
Let $\{e_h\}$ be a $k$-basis of $\partial(R\langle Y_{\ls i-1}\rangle_{\|\fv_1\|, \fv_1})$ and write $\partial(p_j) = \sum \lambda_{j,h} e_h$ with $\lambda_{j,h}\in k$. Then $$\label{EquationLinearDependence}
0=\partial(z)=\sum \partial(p_jq_j)=\sum \partial(p_j)q_j= \sum \lambda_{j,h} e_h q_j.$$
The boundaries form a homogeneous two-sided ideal in the subring of cycles, and thus by Equation \[EquationFormCycle\] in order to show that $z$ is a boundary it suffices to prove that the $p_j$ are cycles. This follows from Equation \[EquationLinearDependence\] once we know that the set $\{e_hq_j\}$ is linearly independent. To see this, observe that we have an embedding of graded $k$-vector spaces $$R\langle Y_{\ls i-1}\rangle_{\|\fv_1\|-1, \fv_1}
\otimes_k
R\langle Y_{\ls i-1}\rangle_{\|\fv_2\|-1, \fv_2}
\hookrightarrow
R\langle Y_{\ls i-1}\rangle_{\|\fv_1+\fv_2\|-2,\fv_1+\fv_2},$$ as the tensor product of the two monomial $k$-bases in the LHS is mapped injectively into the monomial $k$-basis of the RHS, because no $t_lt_{l+1}$ arises in the products by the assumption on $\fv_1$ and $\fv_2$. This completes the proof.
Given ${\bf v}=(v_1,\ldots,v_n)\in \mathbb{N}^n$, we denote the vector $(v_1,\ldots,v_n, 0)\in \mathbb{N}^{n+1}$ by $\fv^a$. We denote by $S[T_{n+1}]=k[T_1,\ldots,T_{n+1}]$ the polynomial ring in $n+1$ variables over $k$.
\[LemmaConstantMultigradedDeviations\] Let $S=k[T_1,\ldots,T_n]$ with $n\gs 3$ and ${\bf v}=(v_1,\ldots,v_n)\in \mathbb{N}^n$, then
1. $\varepsilon_{\mathbf{v}}(S/{I(\mathcal{P}_{n})}) =\varepsilon_{\mathbf{v}^a}(S[T_{n+1}]/{I(\mathcal{P}_{n+1})})$.
Moreover, if either $v_1=0$ or $v_n=0$ then
1. $\varepsilon_{\mathbf{v}}(S/{I(\mathcal{C}_{n})}) =\varepsilon_{\mathbf{v}^a}(S[T_{n+1}]/{I(\mathcal{C}_{n+1})})$,
2. $\varepsilon_{\mathbf{v}}(S/{I(\mathcal{P}_{n})}) =\varepsilon_{\mathbf{v}}(S/{I(\mathcal{C}_{n})})$.
Let $R=S/{I(\mathcal{P}_{n})}$ and $R'=S[T_{n+1}]/{I(\mathcal{P}_{n+1})}$. From Proposition \[MultPoncareHilbert\] and Equation \[MultigradedExpansion\] we have $$\label{EquationMultigradedPoincare}
\prod_{\|\mathbf{v}\|\, odd} (1-\fxi^\mathbf{v})^{{\varepsilon}_\mathbf{v}(R)} \sum_{\fv\in\NN^n} c_\mathbf{v}(R)
\fxi^\mathbf{v}=
\prod_{\|\mathbf{v}\|\, even} (1-\fxi^\mathbf{v})^{{\varepsilon}_\mathbf{v}(R)},$$ where $c_\mathbf{v}(R)$ is the coefficient of $\fxi^\fv$ in $\operatorname{HS}_R(\fxi)$, namely $c_\mathbf{v}(R)=0$ if $\mathbf{v}$ has two consecutive positive components and $c_\mathbf{v}(R)=1$ otherwise.
\(a) We proceed by induction on $\|\mathbf{v}\|$. If $\|\mathbf{v}\|=1$ then $\varepsilon_{\fv} (R)=\varepsilon_{\fv^a}(R')=1$, as these deviations correspond to the elements $t_i$ with $1\ls i\ls n$. Assume now that $\|\fv\|>1$. We reduce Equation \[EquationMultigradedPoincare\] modulo the ideal of $\mathbb{Z}[[\xi_1,\ldots,\xi_n]]$ generated by the monomials $\fxi^\mathbf{w}\nmid\fxi^\fv$. Every surviving multidegree $\mathbf{u}$ other than $\mathbf{v}$ satisfies $\|\mathbf{u}\|<\|\fv\|$, hence by induction $\varepsilon_{\mathbf{u}}(R) =\varepsilon_{\mathbf{u}^a}(R')$. Furthermore, it is clear that $c_\mathbf{u}(R)=c_{\mathbf{u}^a}(R')$. Hence after reducing the corresponding Equation \[EquationMultigradedPoincare\] for $R'$ modulo the ideal of $\mathbb{Z}[[\xi_1,\ldots,\xi_{n+1}]]$ generated by the monomials $\fxi^\fw\nmid\fxi^{\fv^a}$ and solving the two equations for $\varepsilon_{\fv}(R)$ and $\varepsilon_{\mathbf{v}^a}(R')$ respectively, we obtain $\varepsilon_{\mathbf{v}}(R) =\varepsilon_{\mathbf{v}^a}(R')$ as desired.
\(b) The same argument as above works, however we need to assume that either $v_1=0$ or $v_n=0$ to guarantee that $c_\mathbf{u}(S/{I(\mathcal{C}_{n})})=c_{\mathbf{u}^a}(S[T_{n+1}]/{I(\mathcal{C}_{n+1})})$.
\(c) It follows by induction, since the support of a vector in the set $\{\fw\,:\,\fxi^\fw\mid \fxi^\fv\}$ is an interval if and only if it is a cyclic interval, provided that either $v_1=0$ or $v_n=0$.
We say that a vector is [**squarefree**]{} if its components are either 0 or 1. In the following proposition we determine $\varepsilon_{\fv}$ for squarefree vectors $\fv$; this result will also be useful in Section \[SectionKoszulHomology\]. We denote by $\f1_n$ the vector $(1,\ldots,1)\in\NN^n$.
\[PropositionSquarefreeDeviations\] Let $S=k[T_1,\ldots,T_n]$ with $n\gs 3$ and ${\bf v}\in \mathbb{N}^n$ be a squarefree vector.
1. If $R=S/{I(\mathcal{P}_{n})}$, then $\varepsilon_{\mathbf{v}}(R)=1$ if $\Supp(\fv)$ is an interval and $\varepsilon_{\mathbf{v}}(R)=0$ otherwise.
2. If $R=S/{I(\mathcal{C}_{n})}$ and $\fv\neq\f1_n$, then $\varepsilon_{\mathbf{v}}(R)=1$ if $\Supp(\fv)$ is a cyclic interval and $\varepsilon_{\mathbf{v}}(R)=0$ otherwise. Furthermore, $\varepsilon_{\f1_n}(R)=n-1$.
We are going to apply [@Berglund Theorem 2] and we follow the notation therein.
\(a) By Lemma \[LemmaConsecutiveDeviations\], if $\varepsilon_{\mathbf{v}}(R)\neq 0$ then $\Supp(\fv)$ is an interval. Let $p=\|\fv\|$; the statement is clear for $p\ls 2$ by [@Berglund Lemma 5], hence we may assume $p\gs 3$. We have that $M_{\fv}=\{T_aT_{a+1},\ldots, T_{a+p-2}T_{a+p-1}\}$ for some $a$. Notice that any subset $S\subset M_{\fv}$ of $p-2$ elements is either disconnected or satisfies $m_S\neq m_{M_{\fv}}$; hence $\Delta'_{M_{\fv}}$ is the $(p-3)$-skeleton of a $(p-2)$-simplex. Let $S^d$ be the unit sphere in $\RR^{d+1}$. Then [@Berglund Theorem 2] yields $$\varepsilon_{\fv}(R)=\dim_k \widetilde\Ho_{p-3}(\Delta'_{M_{\fv}}; k)=\dim_k \widetilde\Ho_{p-3}(S^{p-3}; k)=1.$$
\(b) The same argument as above works for $\fv\neq \f1_n$. If $\fv=\f1_n$, then $M_{\f1_n}=\{T_1T_2,\ldots, T_{n}T_{1}\}$, and $\Delta'_{M_{\f1_n}}$ is the $(n-3)$-skeleton of an $(n-1)$-simplex, and by [@Berglund Theorem 2] we have $$\varepsilon_{ \f1_n}(R)=\dim_k \widetilde\Ho_{n-3}(\Delta'_{M_{\f1_n}}; k)=
\dim_k \widetilde\Ho_{n-3}\left(\bigvee^{n-1}S^{n-3}; k\right)=n-1.$$
The next theorem determines the first $n$ deviations of the $n$-cycle and the first $n+1$ deviations of the $n$-path.
\[TheoremExistenceSequences\] Let $S=k[T_1,\ldots,T_n]$. There exist two sequences of natural numbers $\{\gamma_s\}_{s\gs 1}$ and $\{\alpha_s\}_{s\gs 1}$ such that for every $n\gs 3$
1. $\varepsilon_s(S/{I(\mathcal{P}_{n})}) = \gamma_sn-\alpha_s$ for $s\ls n+1$;
2. $\varepsilon_s(S/{I(\mathcal{C}_{n})}) = \gamma_sn$ for $s< n$ and $\varepsilon_n(S/{I(\mathcal{C}_{n})}) = \gamma_nn-1$.
\(b) Fix $s\gs 1$ and for every $n\gs s$ define the set $$\mathcal{E}(s, n)=\big\{\mathbf{v}\in \NN^n \, : \, \|\mathbf{v}\|=s,\, \varepsilon_{\mathbf{v}}(S/{I(\mathcal{C}_{n})})>0 \big\}.$$ Let $R=S/{I(\mathcal{C}_{n})}$ and $R'=S[T_{n+1}]/{I(\mathcal{C}_{n+1})}$. Assume first that $s<n$. The group $\ZZ_n:=\ZZ/n\ZZ$ acts on $\mathcal{E}{(s,n)}$ by permuting the components of a vector cyclically and by the symmetry of ${I(\mathcal{C}_{n})}$ the deviations are constant in every orbit. By Lemma \[LemmaConsecutiveDeviations\] the support of each $\fv \in \mathcal{E}{(s,n)}$ is a cyclic interval and since some component of $\fv$ is 0 we conclude that each orbit contains exactly $n$ elements. Similarly, every orbit in the action of $\ZZ_{n+1}$ on $\mathcal{E}{(s,\,n+1)}$ has $n+1$ elements. We denote orbits by $[\cdot]$ and for a given $\mathbf{v}\in\mathcal{E}{(s,\,n)}$ we denote by $\bar{\fv}=(\overline{v_1},\ldots, \overline{v_n})$ the only vector in $[\fv]$ such that $\overline{v_1}\neq 0$ and $\overline{v_n}=0$. The map $\phi\colon\,\, \mathcal{E}{(s,\,n)}/\ZZ_n \to \mathcal{E}{(s,\,n+1)}/\ZZ_{n+1}$ defined via $[\bar{\mathbf{v}}] \xmapsto{\phantom{\ZZ_n}} [\bar{\mathbf{v}}^a]$ is well-defined and bijective by Lemmas \[LemmaConsecutiveDeviations\] and \[LemmaConstantMultigradedDeviations\], and moreover $\ee_{[\fv]}(R)=\ee_{\phi([\fv])}(R')$. Since the multigraded deviations refine the deviations we obtain
$$\label{EquationDeviationsOrbits}
\frac{\varepsilon_s(R)}{n}=
\sum_{[\fv]\in\mathcal{E}(s,\,n)/\ZZ_n}{\varepsilon}_{[\fv]}(R)
=\sum_{[\fv]\in\mathcal{E}(s,\,n)/\ZZ_n}{\varepsilon}_{\phi([\fv])}(R')=
\frac{\varepsilon_s(R')}{n+1}.$$
It follows that $\varepsilon_s(S/{I(\mathcal{C}_{n})}) = \gamma_s n $ for every $n> s$, for some natural number $\gamma_s$. Now consider the case $s=n$. By Proposition \[PropositionSquarefreeDeviations\] we have ${\varepsilon}_{\f1_n}(R)=n-1$ and ${\varepsilon}_{\f1_n^a}(R')=1$. The orbit $[\f1_n]$ consists of 1 element, while the orbit $[\f1_n^a]$ consists of $n+1$ elements, thus in this case we modify Equation \[EquationDeviationsOrbits\] to obtain $$\frac{{\varepsilon}_n(R)-(n-1)}{n}=
\frac{{\varepsilon}_n(R') - (n+1)}{n+1}.$$ Hence $\frac{{\varepsilon}_n(R)-(n-1)}{n}= \gamma_{n}-1$ and the conclusion follows.
\(a) Fix $s\gs 1$, and similarly define the set $$\mathcal{E}(s, n)=\big\{\mathbf{v}\in \NN^n \, : \, \|\mathbf{v}\|=s,\, \varepsilon_{\mathbf{v}}(S/{I(\mathcal{P}_{n})})>0 \big\}.$$ Let $R=S/{I(\mathcal{P}_{n})}$, $R'=S[T_{n+1}]/{I(\mathcal{P}_{n+1})}$, and assume $s\ls n$. By Lemma \[LemmaConsecutiveDeviations\], if $\fu=(u_1,\ldots,u_{n+1})\in\mathcal{E}(s,\,n+1)$, then either $u_1=0$ or $u_{n+1}=0$. The map $\psi\colon\,\, \mathcal{E}{(s,\,n)} \to \mathcal{E}{(s,\,n+1)}$ defined via $\mathbf{v} \xmapsto{\phantom{\ZZ_n}} \mathbf{v}^a $ is injective. By Lemma \[LemmaConstantMultigradedDeviations\] $\Im(\psi)=\big\{\mathbf{u}=(u_1,\ldots,u_{n+1})\in\mathcal{E}(s,\,n+1)\,:\, u_{n+1}=0\big\}$ and if $\mathbf{u}\in\Im(\psi)$ then $\varepsilon_{\psi^{-1}(\fu)}(R)=\varepsilon_{\fu}(R')$. We conclude that $$\varepsilon_s(R')-\varepsilon_s(R)= \sum_{\fu\in\mathcal{E}(s,\,n+1)\setminus \Im(\psi)}{\varepsilon}_{\fu}(R').$$ Since $$\mathcal{E}(s,\,n+1)\setminus \Im(\psi)=\big\{\mathbf{u}=(u_1,\ldots,u_{n+1})\in\mathcal{E}(s,\,n+1)\,:\, u_1=0,\,u_{n+1}\neq 0\big\},$$ then it follows that $\varepsilon_s(R')-\varepsilon_s(R)=\gamma_s$, by the proof of part (b) and Lemma \[LemmaConstantMultigradedDeviations\] (c).
Finally, let $s=n+1$. In this case, the difference $\varepsilon_{n+1}(R')-\varepsilon_{n+1}(R)$ is equal to the sum of $\varepsilon_{\f1_{n+1}}(R')$ and all $\varepsilon_{\mathbf{u}}(R')$ where $\mathbf{u}=(u_1,\ldots,u_{n+1}) \in \mathcal{E}(n+1,\,n+1)$ with $u_1=0, u_{n+1}\neq 0$. By Proposition \[PropositionSquarefreeDeviations\], $\varepsilon_{\f1_{n+1}}(R')=1$, and thus $\varepsilon_{n+1}(R')-\varepsilon_{n+1}(R)=\gamma_{n+1}$.
We have proved the existence of the sequence of integers $\{\alpha_s\}_{s\gs 1}$; they are non-negative as by Lemma \[LemmaConstantMultigradedDeviations\] (c) we have $\ee_s(S/{I(\mathcal{C}_{n})})\gs\ee_s(S/{I(\mathcal{P}_{n})})$ for every $n> s$.
\[FurtherDeviations\] Explicit formulas for the graded Betti numbers of ${I(\mathcal{P}_{n})}$ and ${I(\mathcal{C}_{n})}$ were found in [@Jacques] using Hochster’s formula; combining these formulas and Equation \[EquationHilbertPoincare\] one can deduce a recursion for the deviations. Through this recursion we noticed that some of the higher deviations also seem to be determined by the sequences $\{\alpha_s\}_{s\gs 1}$ and $\{\gamma_s\}_{s\gs 1}$. We observed the following patterns for cycles $$\begin{aligned}
\ee_{n+1}(S/{I(\mathcal{C}_{n})})&=&\gamma_{n+1}n-n,\\
\ee_{n+2}(S/{I(\mathcal{C}_{n})})&=&\gamma_{n+2}n-\binom{n+2}{2}+1,\\
\ee_{n+3}(S/{I(\mathcal{C}_{n})})&=&\gamma_{n+3}n-\binom{n+3}{3}-\binom{n+1}{2}+1.\\
\mbox{ and the following patterns for paths:}\\
\ee_{n+2}(S/{I(\mathcal{P}_{n})})&=&\gamma_{n+2}n-\alpha_{n+2}+1,\\
\ee_{n+3}(S/{I(\mathcal{P}_{n})})&=&\gamma_{n+3}n-\alpha_{n+3}+n+2,\\
\ee_{n+4}(S/{I(\mathcal{P}_{n})})&=&\gamma_{n+4}n-\alpha_{n+4}+\binom{n+4}{2}-1,\\
\ee_{n+5}(S/{I(\mathcal{P}_{n})})&=&\gamma_{n+5}n-\alpha_{n+5}+\binom{n+5}{3}-\binom{n+3}{2}-1.\end{aligned}$$ Verifying these formulas with the method in the proof of Lemma \[LemmaConstantMultigradedDeviations\] would require the explicit computation of multigraded deviations for vectors that are not squarefree. One possible approach is to use Equation \[EquationMultigradedPoincare\] and proceed by induction for each multidegree; this is an elementary but rather intricate argument. Using this method we were able to verify the above identities for $\ee_{n+2}(S/{I(\mathcal{P}_{n})})$ and $\ee_{n+1}(S/{I(\mathcal{C}_{n})})$.
[l l l l l]{} $s$ & $\gamma_s$ & $\alpha_s$ & $\approx \gamma_s/\gamma_{s-1}$ & $\approx \alpha_s/\alpha_{s-1}$\
\[0.5ex\]
1 & 1 & 0 & &\
2 & 1 & 1 & 1&\
3 & 1 & 2 & 1&2\
4 & 2 & 5 & 2&2.5\
5 & 5 & 14 & 2.5&2.8\
6 & 12 & 38 & 2.4&2.71\
7 & 28 & 100 & 2.33 &2.62\
8 & 68 & 269 & 2.43 &2.69\
9 & 174 & 744 & 2.56&2.77\
10 & 450 & 2064 & 2.59&2.77\
11 & 1166 & 5720 & 2.59&2.77\
12 & 3068 & 15974 & 2.63& 2.79\
13 & 8190 & 44940 & 2.67&2.81\
14 & 22022 & 126854 & 2.69&2.82\
15 & 59585 & 359118 & 2.71&2.83\
16 & 162360 & 1020285 & 2.72&2.84\
17 & 445145 & 2907950 & 2.74&2.85\
18 & 1226550 & 8309106 & 2.76&2.86\
19 & 3394654 & 23796520 & 2.77&2.86\
20 & 9434260 & 68299612 & 2.78&2.87\
21 & 26317865 & 196420246 & 2.79&2.88\
22 & 73662754 & 565884418 & 2.8&2.88\
23 & 206809307 & 1632972230 & 2.81&2.89\
24 & 582255448 & 4719426574 & 2.82&2.89\
25 & 1643536725 & 13658698734 & 2.82&2.89\
\[table\]
Using the recursive formula for deviations mentioned in Remark \[FurtherDeviations\], we compute with Macaulay2 [@Macaulay2] some values of the sequences $\{\alpha_s\}_{s\gs 1}$ and $\{\gamma_s\}_{s\gs 1}$, cf. Table \[table\]. From these values we observe that the sequences $\{\alpha_s\}_{s\gs 1}$ and $\{\gamma_s\}_{s\gs 1}$ appear to grow exponentially, with consecutive ratios approaching 3. This observation is consistent with Theorem \[TheoremExistenceSequences\] and the asymptotic growth of $\ee_i(S/{I(\mathcal{P}_{n})})$ and $\ee_i(S/{I(\mathcal{C}_{n})})$ described in [@BDGMS 4.7].
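For convenience, the consecutive ratios reported in the last two columns of Table \[table\] can be recomputed directly from the listed values of $\gamma_s$ and $\alpha_s$. The short Python sketch below does this for the final entries; it is only an illustration of the arithmetic (the variable names are ours) and is not part of the Macaulay2 computation.

```python
# Last few values of gamma_s and alpha_s taken from Table [table] (s = 20,...,25).
gammas = [9434260, 26317865, 73662754, 206809307, 582255448, 1643536725]
alphas = [68299612, 196420246, 565884418, 1632972230, 4719426574, 13658698734]

# Consecutive ratios gamma_s/gamma_{s-1} and alpha_s/alpha_{s-1}.
for name, seq in (("gamma", gammas), ("alpha", alphas)):
    ratios = [seq[i] / seq[i - 1] for i in range(1, len(seq))]
    print(name, [round(r, 2) for r in ratios])
# gamma: [2.79, 2.8, 2.81, 2.82, 2.82]   -- matches the table, slowly increasing
# alpha: [2.88, 2.88, 2.89, 2.89, 2.89]  -- towards the conjectured limit 3
```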
Koszul homology of Koszul algebras {#SectionKoszulHomology}
==================================
Let $S[X]$ be a minimal model of $R$ over $S$ and let $K^S$ denote the Koszul complex of $S$ with respect to $\fn$. Denoting by $\epsilon^{S[X]}: S[X]\rightarrow R$ and $\epsilon^{K^S}: K^S\rightarrow k$ the augmentation maps, the following homogeneous DG algebra morphisms
$$\xymatrix@C=20mm{k[X]\cong S[X]\otimes_S k & S[X] \otimes_S K^S \ar[l]_-{\id_{S[X]}\otimes_S \epsilon^{K^S}} \ar[r]^-{\epsilon^{S[X]}\otimes_S \id_{K^S}} & R\otimes_S K^S \cong K^R }$$ are quasi-isomorphisms, i.e., they induce $k$-algebra isomorphisms on homology $$\Tor^S(R,k)=H( k[X])\cong H(S[X] \otimes_S K^S)\cong H( K^R)=H^R,$$ see [@Avramov6Lectures 2.3.2]. Thus we have $\dim_k H^R_{i,j}= \dim_k\Tor_i^S(R,k)_{j}=\beta_{i,j}^S(R)$ for every $i,j\gs 0$, where $\beta_{i,j}^S(R)$ denote the graded Betti numbers of $R$ over $S$. If $I$ is a monomial ideal, then $H^R$ inherits the $\NN^n$ grading from $K^R$ and for every $\fv\in \NN^n$ we have $\dim_k H^R_{i,\fv}=\dim_k\Tor_i^S(R,k)_{\fv}=\beta_{i,\fv}^S(R)$. In other words, the graded vector space structure of $H^R$ is completely determined by the Betti table of $R$ over $S$.
It was proved in [@AvramovConcaIyengar2 5.1] that for a Koszul algebra $R$, if $H^R_{i,j}\neq 0$ then $j\ls 2i$ and $H^R_{i,2i}=\big(H^R_{1,2}\big)^i$. We extend this result to the next diagonal in the Betti table.
\[KoszulSecondDiag\] Let $R$ be a Koszul algebra. Then for every $i\gs 2$ we have $$H^R_{i,2i-1}=\big(H^R_{1,2}\big)^{i-2}H^R_{2,3}.$$
We show first that $
k[X]_{i,2i-1}\subseteq \big(k[X]_{1,2}\big)^{i-2}k[X]_{2,3}$. By the Koszul property and [@Avramov6Lectures 7.2.6] we have $\deg(x)=|x|+1$ for every variable $x\in X$. For every monomial $x_{1}\cdots x_{p}\in k[X]_{i,2i-1}$, we have $$2i-1=\deg(x_{1}\cdots x_{p}) = \deg(x_{1})+\cdots + \deg(x_{p})$$ and $$i=|x_{1}\cdots x_{p}| = |x_{1}|+\cdots + |x_{p}|=\deg(x_{1})+\cdots + \deg(x_{p})-p$$ hence $p=i-1$. Assume without loss of generality that $|x_j|\ls|x_{j+1}|$ for every $1\ls j\ls i-2$, then $|x_{1}|=\cdots=|x_{i-2}|=1$ and $|x_{i-1}|=2$, so the claim follows.
Since the model $S[X]$ is minimal, we must have $\partial(X_{1,2})\subseteq \mathfrak{m}S[X]$ and also $\partial(X_{2,3})\subseteq \mathfrak{m}S[X]$, because there cannot be a quadratic part in the differential for these low degrees (cf. [@Avramov6Lectures 7.2.2]). Hence $X_{1,2} \cup X_{2,3}\subseteq Z(k[X])$, the subalgebra of cycles of $k[X]$, therefore we have inclusions $$\begin{aligned}
\big(k[X]_{1,2}\big)^{i-2}k[X]_{2,3}&=& \big(Z(k[X])_{1,2}\big)^{i-2}Z(k[X])_{2,3}\subseteq Z(k[X])_{i,2i-1} \\
& \subseteq & k[X]_{i,2i-1} \subseteq \big(k[X]_{1,2}\big)^{i-2}k[X]_{2,3}.\end{aligned}$$ We conclude $\big(Z(k[X])_{1,2}\big)^{i-2}Z(k[X])_{2,3}= Z(k[X])_{i,2i-1}$ and the desired statement follows after going modulo $B(k[X])$, the ideal of boundaries of $Z(k[X])$.
\[RemarkKoszulHomologyLinearStrand\] By Theorem \[KoszulSecondDiag\] and [@AvramovConcaIyengar2 5.1] if $R$ is a Koszul algebra then the components $H^R_{i,j}$ of the Koszul homology such that $j \gs 2i-1$ are generated by the components of bidegrees $(1,2)$ and $(2,3)$. It is natural then to ask whether for Koszul algebras the minimal $k$-algebra generators of the Koszul homology have bidegrees $(i,i+1)$, corresponding to the linear strand of the Betti table of $R$; observe that these components are necessarily minimal generators. This question was raised by Avramov and the answer turns out to be negative: the first example was discovered computationally by Eisenbud and Caviglia using Macaulay2. By manipulating this example, Conca and Iyengar were led to consider edge ideals of $n$-cycles. A family of rings for which this fails is $R=S/I(\mathcal{C}_{3k+1})$ with $k\gs 2$, whose Koszul homology has a minimal algebra generator in bidegree $(2k+1, 3k+1)$ (cf. Theorem \[KoszulHomologyPolygons\]).
Now we turn our attention to the Koszul algebras studied in Section \[SectionEdgeIdeals\]. We begin with a well-known fact about resolutions of monomial ideals.
Let $I$ be a monomial ideal, $\mathbb{F}$ a multigraded free resolution of $R=S/I$, and $\mathbf{a}\in \mathbb{N}^n$. Denote by $\mathbb{F}_{\ls \mathbf{a}}$ the subcomplex of $\mathbb{F}$ generated by the standard basis elements of multidegrees $\fv \ls \mathbf{a}$. Then $\mathbb{F}_{\ls \mathbf{a}}$ is a free resolution of $S/I_{\ls \mathbf{a}}$, where $I_{\ls \mathbf{a}}$ is the ideal of $S$ generated by the elements of $I$ with multidegrees $\fv\ls \mathbf{a}$. In particular, if $I$ is squarefree then $\beta_{i,\fv}^S(R)\ne 0$ only for squarefree multidegrees $\fv$.
Next we introduce some notation for the decomposition of squarefree vectors into intervals and cyclic intervals (cf. Section \[SectionEdgeIdeals\]).
\[blocks\] Let $\fv\in \mathbb{N}^n$ be a squarefree vector. There exists a unique minimal (with respect to cardinality) set of vectors $\{\fv_1,\ldots,\fv_{\tau(\fv)}\}\subset \NN^n$ such that $\Supp(\fv_j)$ is an interval for each $j$ and $\fv=\sum_{j=1}^{\tau(\fv)}\fv_j$. By minimality $\Supp(\fv_i+\fv_j)$ is not an interval if $i\neq j$. Define further $\iota(\fv)=\sum_{j=1}^{\tau(\fv)}\left \lfloor\frac{2\|\fv_j\|}{3}\right\rfloor$. For example, let $\fv = (1,1,0,0,0,1,1,0,1)$ then the set is $$\left\{(1,1,0,0,0,0,0,0,0),(0,0,0,0,0,1,1,0,0),(0,0,0,0,0,0,0,0,1)\right\}$$ and we have $\tau(\fv)=3$, $\iota(\fv)=1+1+0=2$.
Likewise, given a squarefree vector $\fw\in \mathbb{N}^n$ there exists a unique minimal set of vectors $\{\fw_1,\ldots,\fw_{\tilde{\tau}(\fw)}\}$ such that $\Supp(\fw_j)$ is a cyclic interval for each $j$ and $\fw=\sum_{j=0}^{\tilde{\tau}(\fw)}\fw_j$; it follows that $\Supp(\fw_i+\fw_j)$ is not a cyclic interval if $i\neq j$. Set $\tilde{\iota}(\fw)=\sum_{j=1}^{\tilde{\tau}(\fw)} \left\lfloor\frac{2\|\fw_j\|}{3}\right\rfloor$. For example, let $\fw = (1,1,0,0,0,1,1,0,1)$ then the set is $$\left\{(1,1,0,0,0,0,0,0,1), (0,0,0,0,0,1,1,0,0)\right\}$$ and we have $\tilde{\tau}(\fw)=2$, $\tilde{\iota}(\fw)=2+1=3$.
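The block decompositions of Definition \[blocks\] are straightforward to compute. The following Python sketch (the helper names are ours and only restate the definition) extracts the interval and cyclic-interval blocks of a squarefree vector and evaluates $\tau,\iota$ and $\tilde\tau,\tilde\iota$ on the two examples above.

```python
from math import floor

def interval_blocks(v):
    # Maximal runs of consecutive 1's in the squarefree vector v (linear order).
    blocks, current = [], []
    for i, vi in enumerate(v):
        if vi == 1:
            current.append(i)
        elif current:
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    return blocks

def cyclic_interval_blocks(v):
    # Maximal runs of 1's, where positions 0 and n-1 are considered adjacent.
    blocks = interval_blocks(v)
    n = len(v)
    if len(blocks) > 1 and blocks[0][0] == 0 and blocks[-1][-1] == n - 1:
        blocks[0] = blocks.pop() + blocks[0]   # merge the wrap-around block
    return blocks

def tau_iota(blocks):
    return len(blocks), sum(floor(2 * len(b) / 3) for b in blocks)

v = (1, 1, 0, 0, 0, 1, 1, 0, 1)
print(tau_iota(interval_blocks(v)))         # (3, 2): tau(v) = 3, iota(v) = 2
print(tau_iota(cyclic_interval_blocks(v)))  # (2, 3): tilde tau(v) = 2, tilde iota(v) = 3
```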
Using Jacques’ results [@Jacques], these definitions allow us to describe completely the multigraded Betti numbers of $S/{I(\mathcal{P}_{n})}$ and $S/{I(\mathcal{C}_{n})}$.
\[bettionlyones\] Let $\fv$ and $\fw$ be squarefree vectors. Following Definition \[blocks\], we have:
1. $\beta_{i,\fv}^S(S/{I(\mathcal{P}_{n})})=1$ if $\|\fv_j\|\not\equiv 1\pmod{3}$ for $1\ls j\ls \tau(\fv)$ and $i=\iota(\fv)$; $\beta_{i,\fv}^S(S/{I(\mathcal{P}_{n})})=0$ otherwise.
2. Assume $\fw\ne \f1_n$. Then $\beta_{i,\fw}^S(S/{I(\mathcal{C}_{n})})=1$ if $\|\fw_j\|\not\equiv 1\pmod{3}$ for $1\ls j\ls \tilde{\tau}(\fw)$ and $i=\tilde{\iota}(\fw)$; $\beta_{i,\fw}^S(S/{I(\mathcal{C}_{n})})=0$ otherwise.
3. $\beta^S_{i,\f1_n}(S/{I(\mathcal{C}_{n})})=1$ if either $n \equiv 1 \pmod{3}$ and $i = \lceil \frac{2n}{3} \rceil$, or $n \equiv 2 \pmod{3}$ and $i = \tilde{\iota}({\f1_n})$; $\beta^S_{i,\f1_n}(S/{I(\mathcal{C}_{n})})=2$ if $n \equiv 0 \pmod{3}$ and $i = \tilde{\iota}({\f1_n})$; $\beta^S_{i,\f1_n}(S/{I(\mathcal{C}_{n})})=0$ otherwise.
Let $\FF$ be a minimal multigraded free resolution of $S/{I(\mathcal{P}_{n})}$. We prove by induction on $j$ that $$\FF_{\ls\sum_{i=1}^{j}\fv_i}\cong \bigotimes_{i=1}^{j}\FF_{\ls\fv_i}.$$ If $j=1$, this is trivial. Assume $j\gs 2$ and let $J_1={I(\mathcal{P}_{n})}_{\ls \fv_j}$ and $J_2=\displaystyle\sum_{i=1}^{j-1}{I(\mathcal{P}_{n})}_{\ls \fv_i}$. Since $\Supp(\fv_j)\cap \bigcup_{i=1}^{j-1}\Supp(\fv_i)=\emptyset$ and $J_1$ and $J_2$ are monomial ideals, we have $0=J_1\cap J_2/J_1J_2\cong \Tor_1^S(S/J_1,S/J_2)$. By rigidity of Tor and induction hypothesis, we conclude $$\begin{aligned}
S/\sum_{i=1}^{j} {I(\mathcal{P}_{n})}_{\ls \fv_i}&=&S/(J_1+J_2)\cong \Tor^S(S/J_1,S/J_2)=H\Big(\FF_{\ls \fv_j}\otimes_S \bigotimes_{i=1}^{j-1}\FF_{\ls\fv_i}\Big)\\
&=&H\Big(\bigotimes_{i=1}^{j}\FF_{\ls\fv_i}\Big)\end{aligned}$$ which proves the claim, since minimal free resolutions are unique up to isomorphism of complexes (and the entries of the differential maps in $\FF_{\ls\sum_{i=1}^{j}\fv_i}$ and $\bigotimes_{i=1}^{j}\FF_{\ls\fv_i}$ lie in the maximal ideal of $S$ by construction). In particular, $\FF_{\ls\fv}\cong\bigotimes_{i=1}^{\tau(\fv)}\FF_{\ls\fv_i}$. Since $\FF_{\ls\fv_i}$ is a free resolution of $S/{I(\mathcal{P}_{n})}_{\ls \fv_i}\cong S/{I(\mathcal{P}_{\|\fv_i\|})}$ for every $i$, part (a) follows by [@Jacques 7.7.34, 7.7.35]. The proof of part (b) is analogous and part (c) is a direct consequence of [@Jacques 7.6.28, 7.7.34].
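As an illustration of Proposition \[bettionlyones\] (a) and (b), the Python sketch below (hypothetical helper names; the special multidegree $\f1_n$ of part (c) is excluded) evaluates $\beta^S_{i,\fv}$ for a squarefree multidegree directly from the block decomposition of Definition \[blocks\].

```python
from math import floor

def blocks(v, cyclic=False):
    # Maximal runs of 1's in the squarefree vector v; with cyclic=True the
    # positions 0 and n-1 are considered adjacent (cf. Definition [blocks]).
    runs, cur = [], []
    for i, vi in enumerate(v):
        if vi == 1:
            cur.append(i)
        else:
            if cur:
                runs.append(cur)
            cur = []
    if cur:
        runs.append(cur)
    if cyclic and len(runs) > 1 and runs[0][0] == 0 and runs[-1][-1] == len(v) - 1:
        runs[0] = runs.pop() + runs[0]
    return runs

def betti(i, v, cyclic=False):
    # beta_{i,v} of S/I(P_n) (cyclic=False) or of S/I(C_n) (cyclic=True, v != 1_n),
    # following Proposition [bettionlyones] (a) and (b).
    bl = blocks(v, cyclic)
    if any(len(b) % 3 == 1 for b in bl):
        return 0
    return 1 if i == sum(floor(2 * len(b) / 3) for b in bl) else 0

v = (1, 1, 0, 0, 0, 1, 1, 0, 1)   # blocks of sizes 2, 2, 1
print(betti(2, v))                # 0, because the last block has size 1 = 1 (mod 3)
w = (1, 1, 1, 0, 1, 1, 0, 0, 0)   # blocks of sizes 3 and 2, iota(w) = 2 + 1 = 3
print(betti(3, w), betti(2, w))   # 1 0
```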
Since the variables of the models $S[X]$ and $S[\widetilde{X}]$ with squarefree multidegrees play a crucial role in this section, we introduce here a suitable notation for them.
\[variables\] Let $S[X]$ be a minimal model of $S/{I(\mathcal{P}_{n})}$. Proposition \[PropositionSquarefreeDeviations\] and [@Berglund Lemma 5] determine the subset of $X$ consisting of variables of squarefree multidegrees: for every $\fv \ls \f1_n $ we have $X_{i,\fv}\ne\emptyset$ if and only if $i=\|\fv\|-1$ and $\Supp(\fv)$ is an interval, and in this case $\Card(X_{\|\fv\|-1,\fv})=1$. For this reason, given distinct $p,q\in \{1, \ldots,n\}$ with $p<q$, let $x_{p,q}$ denote the only element of $X_{\|\fv_{p,q}\|-1,\fv_{p,q}}$ where $\fv_{p,q}$ is the squarefree vector with $\Supp(\fv_{p,q})=\{p,p+1,\ldots, q\}$. Notice $|x_{p,q}|=q-p$. We also denote by $x_{i,i}$ the variable $T_i$ for each $i=1, \ldots, n$.
Let $S[\widetilde{X}]$ be a minimal model of $S/{I(\mathcal{C}_{n})}$. Likewise, for every $\fv \ls \f1_n $, we have $\widetilde{X}_{i,\fv}\ne\emptyset$ if and only if $i=\|\fv\|-1$ and $\Supp(\fv)$ is a cyclic interval. Moreover, $\Card(\widetilde{X}_{\|\fv\|-1,\fv})=1$ if $\fv\ne \f1_n$ and $\Card(\widetilde{X}_{n-1,\f1_n})=n-1$. Given distinct $p,q\in \{1, \ldots,n\}$ with $p\not\equiv q+1\pmod{n}$, let $\wx_{p,q}$ denote the only element of $\widetilde{X}_{\|\fv_{p,q}\|-1,\fv_{p,q}}$, where $\fv_{p,q}$ denotes the squarefree vector with $\Supp(\fv_{p,q})=\{p,p+1,\ldots, q\}$ if $p<q$ and $\Supp(\fv_{p,q})=\{1,2,\ldots, q,p,p+1,\ldots,n\}$ if $q<p$. Notice that $|\wx_{p,q}|=q-p$ if $p<q$ and $|\wx_{p,q}|=n-(p-q)$ if $p>q$. Similarly, for each $i=1, \ldots, n$ denote by $\wx_{i,i}$ the variable $T_i$.
With an abuse of notation, we denote also by $x_{p,q}$ the image of $x_{p,q} \in S[X]$ in $k[X]= k \otimes_S S[X]$, and similarly by $\tilde{x}_{p,q}$ the image of $\tilde{x}_{p,q} \in S[\tilde{X}]$ in $k[\tilde{X}]$.
Let $n=7$. According to the notation just introduced we have $$\{T_1,T_2,\ldots,T_7\}=\{x_{1,1},x_{2,2},\ldots, x_{7,7}\}=\{\wx_{1,1},\wx_{2,2},\ldots, \wx_{7,7}\},$$ $$X_{3, (0,0,1,1,1,1,0)}=\{x_{3,6}\}, \,\,\, \widetilde{X}_{3, (0,0,1,1,1,1,0)}=\{ \wx_{3,6}\}, \,\,\, \widetilde{X}_{4,(1,1,1,0,0,1,1)}=\{ \wx_{6,3} \}.$$
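The bookkeeping in Definition \[variables\] is easy to automate; the sketch below (a hypothetical helper, written only to illustrate the conventions) returns the support of $\fv_{p,q}$ and the homological degree $|\wx_{p,q}|$, and reproduces the $n=7$ example above.

```python
def wx_support_and_degree(p, q, n):
    # Support of v_{p,q} and homological degree |wx_{p,q}| (Definition [variables]):
    # an ordinary interval when p < q, a wrap-around cyclic interval when q < p.
    if p < q:
        supp = list(range(p, q + 1))
    else:
        supp = list(range(1, q + 1)) + list(range(p, n + 1))
    return supp, len(supp) - 1

print(wx_support_and_degree(3, 6, 7))  # ([3, 4, 5, 6], 3)       -> wx_{3,6}
print(wx_support_and_degree(6, 3, 7))  # ([1, 2, 3, 6, 7], 4)    -> wx_{6,3}
```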
In the next proposition we find formulas for the differential of variables with squarefree multidegree. Note that, once a multidegree $\boldsymbol\alpha$ is fixed, one can run a partial Tate process killing only cycles with multidegree bounded by $\boldsymbol\alpha$. The DG algebra $S[X^{\ls \boldsymbol\alpha}]$ obtained this way can be extended to a minimal model $S[X]$ of $R$ such that $X_{i, \boldsymbol\beta} = X^{\ls \boldsymbol\alpha}_{i, \boldsymbol\beta}$ for all $i > 0$, $\boldsymbol\beta \ls \boldsymbol\alpha$ componentwise. We apply the above strategy to compute variables with squarefree multidegree. We will sometimes denote the set of these variables by $X^{\text{sf}}$.
\[PropositionDifferentialSquarefreeVariable\] Following Definition \[variables\] we have:
1. There exists a minimal model $S[X]$ of $S/{I(\mathcal{P}_{n})}$ such that for every $p<q$ $$\partial(x_{p,q}) = \sum_{r\in\Supp(\fv_{p,q})\setminus\{q\}} (-1)^{|x_{p,r}|} x_{p,r}x_{r+1,q}.$$
2. There exists a minimal model $S[\widetilde{X}]$ of $S/{I(\mathcal{C}_{n})}$ such that for every $p\not\equiv q+1\pmod{n}$ $$\partial(\wx_{p,q}) = \sum_{r\in\Supp(\fv_{p,q})\setminus\{q\}} (-1)^{|\wx_{p,r}|} \wx_{p,r}\wx_{r+1,q}$$ where $\wx_{n+1,q}:=\wx_{1,q}$, and $\widetilde{X}_{n-1, \f1_n} = \{w_1, \ldots, w_{n-1}\}$ with $$\partial(w_i) = \sum_{r=i}^{n+i-2}(-1)^{r-i}\wx_{i,r}\wx_{r+1,n+i-1}$$ where $\wx_{p, q}:=\wx_{p',q'}$ if $p \equiv p' \pmod n$, $q \equiv q' \pmod n$ and $1 \ls p', q' \ls n$.
If $n=7$ then we have the following differentials $$\begin{aligned}
\partial(x_{1,1}) & =&\partial(\wx_{1,1})= 0,\\
\partial(x_{1,2}) & =&\partial(\wx_{1,2})= T_1 T_2,\\
\partial(x_{1,4}) & =& T_1 x_{2,4} - x_{1,2}x_{3,4} + x_{1,3}T_4,\\
\partial(\wx_{1,4})&=& T_1 \wx_{2,4} - \wx_{1,2}\wx_{3,4} + \wx_{1,3}T_4,\\
\partial(\wx_{4,1}) & = &T_4 \wx_{5,1} - \wx_{4,5}\wx_{6,1} + \wx_{4,6}\wx_{7,1}-\wx_{4,7}T_1.\\
\partial( w_1) & = & T_1 \wx_{2,7} - \wx_{1,2}\wx_{3,7} + \wx_{1,3}\wx_{4,7}-\wx_{1,4}\wx_{5,7}+\wx_{1,5}\wx_{6,7}-\wx_{1,6}T_7.\end{aligned}$$
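The formula in part (a) of Proposition \[PropositionDifferentialSquarefreeVariable\] can be expanded mechanically. The sketch below (with a hypothetical string encoding of the variables, and writing $x_{i,i}=T_i$ as in Definition \[variables\]) reproduces the first differentials listed above.

```python
def differential(p, q):
    # Expand d(x_{p,q}) = sum_{r=p}^{q-1} (-1)^{r-p} x_{p,r} x_{r+1,q}
    # (Proposition [PropositionDifferentialSquarefreeVariable](a)), with x_{i,i} = T_i.
    def name(a, b):
        return f"T_{a}" if a == b else f"x_{{{a},{b}}}"
    terms = []
    for r in range(p, q):
        sign = "+" if (r - p) % 2 == 0 else "-"
        terms.append(f"{sign} {name(p, r)}{name(r + 1, q)}")
    return " ".join(terms)

print(differential(1, 2))  # + T_1T_2
print(differential(1, 4))  # + T_1x_{2,4} - x_{1,2}x_{3,4} + x_{1,3}T_4
```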
$\,$\
(a) We proceed by induction on $|x_{p,q}|=q-p$.
If $q-p=1$, since $|\partial(x_{p,p+1})|=0$ and $\operatorname{mdeg}(\partial(x_{p,p+1}))=\fv_{p,p+1}$, we can assume $\partial(x_{p,p+1})=T_pT_{p+1}$. Now assume $q-p>1$; since $|\partial(x_{p,q})|=q-p-1$ and $\operatorname{mdeg}(\partial(x_{p,q}))=\fv_{p,q}$, we have $$\partial(x_{p,q}) = \sum_{r=p}^{q-1}\lambda_r x_{p,r}x_{r+1,q}$$ for some $\lambda_r \in k$, and then by the Leibniz rule $$\label{diffs}
0=\partial^2(x_{p,q}) = \sum_{r=p}^{q-1}\lambda_r \big[
\partial(x_{p,r})x_{r+1,q}+
(-1)^{r-p}
x_{p,r}\partial(x_{r+1,q})
\big].$$ By induction hypothesis we can rewrite the RHS of Equation \[diffs\] as $$\begin{aligned}
\sum_{r=p}^{q-1}\lambda_r \left[
\Big(
\sum_{s=p}^{r-1} (-1)^{s-p} x_{p,s}x_{s+1,r}
\Big)
x_{r+1,q}
+
(-1)^{r-p}
x_{p,r}
\Big(
\sum_{t=r+1}^{q-1} (-1)^{t-r-1} x_{r+1,t}x_{t+1,q}
\Big)
\right].\end{aligned}$$ Since the set of monomials in a given multidegree is linearly independent, the coefficient of each monomial must be 0. For fixed $p<u<q$, the monomial $x_{p,p}x_{p+1,u}x_{u+1,q}$ appears in the first sum when $r=u, s=p$ and in the second sum when $r=p, t=u$, hence $$\lambda_u(-1)^{p-p}+ \lambda_p (-1)^{p-p}(-1)^{u-p-1} =0.$$ Therefore, $\lambda_u =(-1)^{u-p} \lambda_p$, for every $p\ls u\ls q-1$. Since $S[X]$ is a minimal model we have $\lambda_p\ne0$, thus we can assume $\lambda_p=1$ and the conclusion follows.
\(b) By the proof of part (a) we know there exists a DG algebra satisfying the first part of the claim: let $S[\widetilde{X}^{\text{sf}}_{\ls n-2}]$ be this DG algebra, which is obtained by adding to $S$ all the variables $\wx_{p, q}$ where $p\not\equiv q+1\pmod{n}$. We now show a possible choice of the variables in homological degree $n-1$ and internal multidegree $\f1_n$. For $i \in \{1, \ldots, n-1\}$ let $$z_i = \sum_{r=i}^{n+i-2}(-1)^{r-i}\wx_{i,r}\wx_{r+1,n+i-1}.$$ One checks easily that each $z_i$ is a cycle in $S[\widetilde{X}^{\text{sf}}_{\ls n-2}]$. Moreover, the $z_i$’s are linearly independent over $k$, since the monomial $\wx_{j, n-1}\wx_{n, j-1}$ appears only in $z_j$ for any $j \in \{1, \ldots, n-1\}$. Note that there exists no nonzero boundary of $S[\widetilde{X}^{\text{sf}}_{\ls n-2}]$ having homological degree $n-2$ and internal multidegree $\f1_n$: if $b$ were such a boundary, then there would exist $w$ in $S[\widetilde{X}^{\text{sf}}_{\ls n-2}]$ having homological degree $n-1$, internal multidegree $\f1_n$ and such that $\partial(w) = b$. Since $\operatorname{mdeg}(\wx_{p, q})=\fv_{p, q}$ and $|\wx_{p, q}| = \Card(\text{Supp}(\fv_{p, q}))-1$, no such $w$ can be obtained as a linear combination of products of some $\wx_{p, q}$’s (the objects obtained that way and having multidegree $\f1_n$ must have homological degree at most $n-2$). Let $cls(z_i)$ be the homology class of $z_i$. We now claim that $$H_{n-1, \f1_n}(S[\widetilde{X}^{\text{sf}}_{\ls n-2}]) = \langle cls(z_1), \ldots, cls(z_{n-1})\rangle.$$ Since $\ee_{n-1, \f1_n} = n-1$ by Proposition \[PropositionSquarefreeDeviations\], it suffices to prove that the homology classes of the $z_i$’s are minimal generators of $\langle cls(z_1), \ldots, cls(z_{n-1})\rangle$. Suppose $z_i - \sum_{j \neq i}\mu_jz_j$ equals a boundary $b$ for some $\mu_j \in S[\widetilde{X}^{\text{sf}}_{\ls n-2}]$: since all $z_i$’s have homological degree $n-2$ and multidegree $\f1_n$, we can suppose the $\mu_j$’s all lie in $k$ and $b$ is homogeneous of multidegree $\f1_n$. Since such a boundary is forced to be zero and the $z_i$’s are $k$-linearly independent, we get a contradiction.
Next we introduce a compact way to denote monomials of $S[X]$ and $S[\widetilde{X}]$ with squarefree multidegrees. As for the variables $x_{p,q}$ and $\wx_{p,q}$, we use the same symbol to denote monomials in $S[X]$ (resp. $ S[\widetilde{X}]$) and their images in $k[X]$ (resp. $k[\widetilde{X}]$).
\[monomials\] Let $S[X]$ and $S[\widetilde{X}]$ be as in Proposition \[PropositionDifferentialSquarefreeVariable\]. Given $N\in \NN$ and a pair of sequences of natural numbers $P=\{p_i\}_{i=1}^N$ and $Q=\{q_i\}_{i=1}^N$ such that $1\ls p_1<q_1\ls n$ and $q_i<p_{i+1}<q_{i+1}\ls n$ for each $i\ls N-1$, we consider the monomial of $k[X]$ $$\mathcal{B}_{P,Q}=\prod_{i=1}^{N}x_{p_i,q_i}.$$ Similarly, given $P=\{p_i\}_{i=1}^N$ and $Q=\{q_i\}_{i=1}^N$ such that $1\ls p_1<q_1<p_1+n-1<2n$ and $q_i<p_{i+1}<q_{i+1}< p_1+n$ for each $i\ls N-1$, we consider the monomial of $k[\widetilde{X}]$ $$\widetilde{\mathcal{B}}_{P,Q}=\prod_{i=1}^{N}\wx_{p_i,q_i},$$ where if $p_i>n$ or $q_i>n$ we set $\wx_{p_i,q_i}:=\wx_{p'_i,q'_i}$ with $p'_i\equiv p_i, q'_i \equiv q_i \pmod{n}$ and $1 \ls p'_i, q'_i \ls n$.
For each pair of sequences $(P,Q)$ as above define $$\Gamma_{P,Q}=\{i>1\,:\,p_i=q_{i-1}+1\}$$ and for each $i\in \Gamma_{P,Q}$ denote by $P(i)$ and $Q(i)$ the sequences of $N-1$ elements obtained by deleting $p_i$ from $P$ and $q_{i-1}$ from $Q$, respectively.
\[sequences\] Let $n=16$, $N=5$, $P=\{1,4, 7, 11,14\}$ and $Q=\{3,5,9,13,15\}$. Then $$\mathcal{B}_{P,Q}=x_{1,3}x_{4,5}x_{7,9}x_{11,13}x_{14,15}.$$ In this case, $\Gamma_{P,Q}=\{2,5\}$, $P(2)=\{1, 7, 11,14\}$, $Q(2)=\{5,9,13,15\}$, $P(5)=\{1,4, 7, 11\}$, $Q(5)=\{3,5,9,15\}$.
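The combinatorics of $\Gamma_{P,Q}$ and of the contracted sequences $P(i),Q(i)$ is illustrated by the following sketch (hypothetical function names; sequences are encoded as Python lists), which reproduces Example \[sequences\].

```python
def gamma(P, Q):
    # Indices i > 1 (1-based, as in the text) with p_i = q_{i-1} + 1.
    return [i for i in range(2, len(P) + 1) if P[i - 1] == Q[i - 2] + 1]

def contract(P, Q, i):
    # P(i), Q(i): delete p_i from P and q_{i-1} from Q (i in Gamma_{P,Q}).
    return P[:i - 1] + P[i:], Q[:i - 2] + Q[i - 1:]

P = [1, 4, 7, 11, 14]
Q = [3, 5, 9, 13, 15]
print(gamma(P, Q))        # [2, 5]
print(contract(P, Q, 2))  # ([1, 7, 11, 14], [5, 9, 13, 15])
print(contract(P, Q, 5))  # ([1, 4, 7, 11], [3, 5, 9, 15])
```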
\[coeffs\] Notice that for each $j \in \Gamma_{P,Q}$ we have $$\operatorname{mdeg}(\mathcal{B}_{P(j),Q(j)})=\operatorname{mdeg}(\mathcal{B}_{P,Q})
\qquad
\text{and}
\qquad
|\mathcal{B}_{P(j),Q(j)}|=|\mathcal{B}_{P,Q}|+1,$$ $$\operatorname{mdeg}(\widetilde{\mathcal{B}}_{P(j),Q(j)})=\operatorname{mdeg}(\widetilde{\mathcal{B}}_{P,Q})
\qquad
\text{and}
\qquad
|\widetilde{\mathcal{B}}_{P(j),Q(j)}|=|\widetilde{\mathcal{B}}_{P,Q}|+1.$$ Now suppose one of the following holds:
- $\operatorname{mdeg}(\mathcal{B}_{P,Q}) < \f1_n$ (resp. $\operatorname{mdeg}(\widetilde{\mathcal{B}}_{P,Q}) < \f1_n$);
- $\operatorname{mdeg}(\mathcal{B}_{P,Q}) = \f1_n$ and $N > 1$;
- $\operatorname{mdeg}(\widetilde{\mathcal{B}}_{P,Q}) = \f1_n$ and $N > 2$.
Then, if the coefficient of $\mathcal{B}_{P,Q}$ (resp. $\widetilde{\mathcal{B}}_{P,Q}$) in the differential of another monomial of $k[X]$ is nonzero, one has that this monomial must be $\mathcal{B}_{P(i),Q(i)}$ (resp. $\widetilde{\mathcal{B}}_{P(i),Q(i)}$) for some $i\in \Gamma_{P,Q}$. Consider $\partial(\sum_{i\in\Gamma_{P,Q}} \lambda_{P(i),Q(i)}\mathcal{B}_{P(i),Q(i)})$ for some $\lambda_{P(i),Q(i)}\in k$, then by Proposition \[PropositionDifferentialSquarefreeVariable\] the coefficient of $\mathcal{B}_{P,Q}$ in this expression is $$\sum_{i\in\Gamma_{P,Q}}(-1)^{\sum_{j=1}^{i-1}(q_j-p_j)}\lambda_{P(i),Q(i)}.$$ Similarly, the coefficient of $\widetilde{\mathcal{B}}_{P,Q}$ in $\partial(\sum_{i\in\Gamma_{P,Q}} \lambda_{P(i),Q(i)}\widetilde{B}_{P(i),Q(i)})$ is $$\sum_{i\in\Gamma_{P,Q}}(-1)^{\sum_{j=1}^{i-1}(q_j-p_j)}\lambda_{P(i),Q(i)}.$$
In the following lemma we show that the homology classes of some of the monomials introduced in Definition \[monomials\] are nonzero.
\[notboundary\] Let $n > 3$ and let $P=\{p_i\}_{i=1}^N$ and $Q=\{q_i\}_{i=1}^N$ be two sequences of natural numbers as in Definition \[monomials\]. Assume the following conditions $(\star)$ are satisfied:
- $q_i-p_i\in \{1,2\}$ for every $i$,
- if $q_i-p_i=1$ then either $i=N$ or $q_i<p_{i+1}-1.$
Then $\mathcal{B}_{P,Q}$ (resp. $\widetilde{\mathcal{B}}_{P,Q}$) is a cycle but not a boundary in $k[X]$ (resp. $k[\widetilde{X}]$).
Assume first we are in the hypotheses of Remark \[coeffs\]. We give the proof for $k[X]$, and the one for $k[\widetilde{X}]$ is analogous. From Proposition \[PropositionDifferentialSquarefreeVariable\] we get that for every $p$, the variables $x_{p,p+2}$ and $x_{p,p+1}$ are cycles, so ${\mathcal{B}}_{P,Q}$ is a cycle as well. Now we show that it is not equal to the differential of any linear combinations of monomials of $k[X]$. Consider $$\label{coefficient}
\partial\left(\sum\lambda_{P',Q'}{\mathcal{B}}_{P',Q'}\right)$$ for some $\lambda_{P',Q'}\in k$ with the sum ranging over all the monomials in the same multidegree of ${\mathcal{B}}_{P,Q}$ and homological degree one higher, i.e., one variable less. For every subset $\psi\subseteq \Gamma_{P,Q}$, let $P^{\psi}=\{p_i^{\psi}\}_{i=1,\ldots, N}$ and $Q^{\psi}=\{q_i^{\psi}\}_{i=1,\ldots, N}$ where $p^{\psi}_i=p_i-1$ if $i\in\psi$ and $p^{\psi}_i=p_i$ otherwise; and $q^{\psi}_{i}=q_{i}-1$ if $i+1\in\psi$ and $q^{\psi}_{i}=q_{i}$ otherwise (see Example \[shiftings\]). By Remark \[coeffs\], the coefficient of ${\mathcal{B}}_{P^{\psi},Q^{\psi}}$ in Equation \[coefficient\] is
$$\label{lambda}
\sum_{i\in\Gamma_{P^{\psi},Q^{\psi}}}(-1)^{\sum_{j=1}^{i-1}(q_j^{\psi}-p_j^{\psi})}\lambda_{P^{\psi}(i),Q^{\psi}(i)}=\sum_{i\in\Gamma_{P,Q}}(-1)^{\sum_{j=1}^{i-1}(q_j^{\psi}-p_j^{\psi})}\lambda_{P^{\psi}(i),Q^{\psi}(i)}.$$
For each $i\in\Gamma_{P,Q}$, the coefficient of $\lambda_{P^{\psi}(i),Q^{\psi}(i)}$ in Equation \[lambda\] is $$\begin{aligned}
&(-1)^{\sum_{j=1}^{i-1}(q_j-p_j)} &\text{ if } i\not\in \psi
\\
&(-1)^{\sum_{j=1}^{i-1}(q_j-p_j)-1} &\text{ if } i\in \psi. \end{aligned}$$
We claim that the sum of the coefficients of the monomials ${\mathcal{B}}_{P^{\psi},Q^{\psi}}$ in Equation \[coefficient\], considering all possible subsets $\psi\subseteq \Gamma_{P,Q}$, is equal to zero. This holds because if $i\in\Gamma_{P,Q}\setminus \psi$ then $P^{\psi}(i)=P^{\psi\cup\{i\}}(i)$ and $Q^{\psi}(i)=Q^{\psi\cup\{i\}}(i)$ and hence each coefficient $\lambda_{P^{\psi}(i),Q^{\psi}(i)}$ appears twice with opposite signs (see Example \[shiftings\]). In particular, Equation \[coefficient\] will never be equal to ${\mathcal{B}}_{P,Q}={\mathcal{B}}_{P^{\emptyset},Q^{\emptyset}}$, finishing the proof.
Assume now that the hypotheses of Remark \[coeffs\] are not satisfied. If $N = 1$ and $\operatorname{mdeg}(\mathcal{B}_{P, Q})$ equals $\f1_n$, then $n$ equals either $2$ or $3$, contradicting our assumption.
If $\operatorname{mdeg}(\widetilde{\mathcal{B}}_{P,Q}) = \f1_n$ and $N=2$, then the conditions $(\star)$ imply that $n$ is either $5$ or $6$. Then, knowing by Proposition \[PropositionDifferentialSquarefreeVariable\] (b) the differential of $w_1, \ldots, w_{n-1}$, one can check the claim by hand by slightly modifying the idea of the main case.
\[shiftings\] For the sequences in Example \[sequences\], the possible sets $\psi$ are $\emptyset$, $\psi_1=\{2\}$, $\psi_2=\{5\}$, and $\psi_3=\{2,5\}$. Notice that $${\mathcal{B}}_{P^{\psi_1},Q^{\psi_1}}=x_{1,2}x_{3,5}x_{7,9}x_{11,13}x_{14,15},$$ $${\mathcal{B}}_{P^{\psi_2},Q^{\psi_2}}=x_{1,3}x_{4,5}x_{7,9}x_{11,12}x_{13,15},$$ $${\mathcal{B}}_{P^{\psi_3},Q^{\psi_3}}=x_{1,2}x_{3,5}x_{7,9}x_{11,12}x_{13,15}.$$ Therefore, $${\mathcal{B}}_{P^{\psi_1}(2),Q^{\psi_1}(2)}=x_{1,5}x_{7,9}x_{11,13}x_{14,15}={\mathcal{B}}_{P^{\emptyset}(2),Q^{\emptyset}(2)},$$ $${\mathcal{B}}_{P^{\psi_2}(5),Q^{\psi_2}(5)}=x_{1,3}x_{4,5}x_{7,9}x_{11,15}={\mathcal{B}}_{P^{\emptyset}(5),Q^{\emptyset}(5)},$$ $${\mathcal{B}}_{P^{\psi_1}(5),Q^{\psi_1}(5)}=x_{1,2}x_{3,5}x_{7,9}x_{11,15}={\mathcal{B}}_{P^{\psi_3}(5),Q^{\psi_3}(5)},$$ $${\mathcal{B}}_{P^{\psi_2}(2),Q^{\psi_2}(2)}=x_{1,5}x_{7,9}x_{11,12}x_{13,15}={\mathcal{B}}_{P^{\psi_3}(2),Q^{\psi_3}(2)}.$$
We are now ready to present the main theorem of this section.
\[KoszulHomologyPolygons\] Let $S=k[T_1,\ldots,T_n]$ with $n\gs 3$.
1. If $R =S/{I(\mathcal{P}_{n})}$, then the $k$-algebra $H^R$ is generated by $H^R_{1,2}$ and $H^R_{2,3}$.
2. If $R = S/{I(\mathcal{C}_{n})}$, then the $k$-algebra $H^R$ is generated by $H^R_{1,2}$ and $ H^R_{2,3}$ if and only if $n\not \equiv 1\pmod{3}$. If $n \equiv 1\pmod{3}$ then for any $0\ne z\in H^R_{\lceil\frac{2n}{3}\rceil, n}$ the $k$-algebra $H^R$ is generated by $H^R_{1,2}$, $H^R_{2,3}$, and $z$.
\(b) Let $\f1_n\ne\fw\in \NN^n$ be such that $\beta^S_{\tilde{\iota}(\fw),\fw}(S/{I(\mathcal{C}_{n})})\ne 0$. Following Definition \[blocks\], assume without loss of generality that the vectors $\fw_j$ are ordered increasingly according to $\min(\Supp(\fw_j))$.
By Proposition \[bettionlyones\] (b), we get $\beta^S_{\tilde{\iota}(\fw),\fw}(S/{I(\mathcal{C}_{n})})=1$, and furthermore there exists a unique pair of sequences $P$ and $Q$ satisfying the hypothesis of Lemma \[notboundary\], with $p_1=\min(\Supp(\fw_1))$ and $\operatorname{mdeg}(\widetilde{B}_{P,Q})=\fw$. Notice $|\widetilde{B}_{P,Q}|=\tilde{\iota}(\fw)$ and by Lemma \[notboundary\] the image of $\widetilde{B}_{P,Q}$ in $H^R$ is nonzero, hence it is a $k$-basis of $H^R_{\widetilde{\iota}(\fw),\fw}$. By construction the homology class of $\widetilde{B}_{P,Q}$ is generated by $H^R_{1,2}$ and $H^R_{2,3}$, hence it only remains to consider the case $\fw=\f1_n$.
[**Case 1: $n\equiv 2\pmod{3}$**]{}
If $n\equiv 2\pmod{3}$, by Proposition \[bettionlyones\] (c) we have $\beta^S_{\tilde{\iota}(\f1_n),\f1_n}(S/{I(\mathcal{C}_{n})})=1$. Defining $P$ and $Q$ as above, we conclude that $H^R_{\tilde{\iota}(\f1_n),\f1_n}$ is generated by $H^R_{1,2}$ and $H^R_{2,3}$.
[**Case 2: $n\equiv 0\pmod{3}$**]{}
If $n\equiv 0\pmod{3}$ then $\beta^S_{\tilde{\iota}(\f1_n),\f1_n}(S/{I(\mathcal{C}_{n})})=2$. If $n = 3$ the claim is trivial, since in this case the only $\mathbb{N}$-graded nonzero Betti numbers of $S/{I(\mathcal{C}_{n})}$ are $\beta^S_{0,0}$, $\beta^S_{1,2}$ and $\beta^S_{2,3}$. If $n > 3$, we define the sequences $P=\{1,\,4,\,\ldots,\, n-2\}$, $Q=\{3,\,6,\,\ldots,\, n\}$, $P'=\{2,\,5,\,\ldots,\, n-1\}$, and $Q'=\{4,\,7,\,\ldots,\, n+1\}$. From Lemma \[notboundary\] we know that $\widetilde{B}_{P,Q}$ and $\widetilde{B}_{P',Q'}$ are cycles of $k[\widetilde{X}]$. Suppose that a linear combination $\lambda_{P,Q}\widetilde{B}_{P,Q}+\lambda_{P',Q'}\widetilde{B}_{P',Q'}$ is a boundary; proceeding exactly as in the proof of Lemma \[notboundary\], we conclude $\lambda_{P,Q}=0$. Therefore $\lambda_{P',Q'}\widetilde{B}_{P',Q'}$ is a boundary, which forces $\lambda_{P',Q'}=0$, again by Lemma \[notboundary\]. Hence the homology classes of $\widetilde{B}_{P,Q}$ and $\widetilde{B}_{P',Q'}$ are linearly independent. This shows that $H^R_{\tilde{\iota}(\f1_n),\f1_n}$ is generated by $H^R_{1,2}$ and $H^R_{2,3}$.
[**Case 3: $n\equiv 1 \pmod{3}$**]{}
If $n\equiv 1 \pmod{3}$, then $\beta^S_{\lceil\frac{2n}{3}\rceil,\f1_n}(S/{I(\mathcal{C}_{n})})=1$. Suppose that $H^R_{\lceil\frac{2n}{3}\rceil,\f1_n}$ is a product of elements of smaller homological degrees. Then there exists a set $\{\fu_1,\ldots,\fu_p\}\subset\NN^n$ such that $\f1_n=\sum_{i=1}^p\fu_i$ with $\Supp(\fu_i)$ being a cyclic interval for every $i$ and $\lceil\frac{2n}{3}\rceil=\sum_{i=1}^{p}\lfloor\frac{2\|\fu_i\|}{3}\rfloor$. However, since $\sum_{i=1}^{p} \|\fu_i\|=n\not\equiv 0\pmod{3}$, we have $\sum_{i=1}^{p}\lfloor\frac{2\|\fu_i\|}{3}\rfloor\ls\frac{2n}{3}<\lceil\frac{2n}{3}\rceil$, a contradiction. Hence, $H^R_{\lceil\frac{2n}{3}\rceil,\f1_n}$ contains minimal algebra generators of $H^R$. The conclusion follows.
The above proof works also for (a), if $n > 3$. Case 1 follows likewise via Proposition \[bettionlyones\] (a). Case 2 is simpler since $\beta^S_{\iota(\f1_n),\f1_n}(S/{I(\mathcal{P}_{n})})=1$ for $n\equiv 0 \pmod{3}$ and Case 3 is trivial since $\beta^S_{\iota(\f1_n),\f1_n}(S/{I(\mathcal{P}_{n})})=0$ for $n\equiv 1 \pmod{3}$. Finally, for $n =3$ the claim is trivial, since in this case the only $\mathbb{N}$-graded nonzero Betti numbers of $S/{I(\mathcal{P}_{n})}$ are $\beta^S_{0,0}$, $\beta^S_{1,2}$ and $\beta^S_{2,3}$.
By Theorem \[KoszulHomologyPolygons\] we can determine, more generally, the $k$-algebra generators of $H^R$ when $R=S/I(\mathcal{G})$ and $\mathcal{G}$ is a graph whose vertices have degree at most 2. Such graphs are disjoint unions of paths and cycles, hence it follows that $R$ is of the form $$R \cong S_1/{I(\mathcal{C}_{n_1})} \otimes_k \cdots \otimes_k S_a/{I(\mathcal{C}_{n_a})} \otimes_k S_{a+1}/{I(\mathcal{P}_{n_{a+1}})} \otimes_k \cdots \otimes_k S_b/{I(\mathcal{P}_{n_b})}$$ where each $S_i$ is a polynomial ring in $n_i$ variables over $k$, yielding an isomorphism of $k$-algebras $$H^R \cong H^{ S_1/{I(\mathcal{C}_{n_1})}} \otimes_k \cdots \otimes_k H^{S_a/{I(\mathcal{C}_{n_a})}} \otimes_k H^{S_{a+1}/{I(\mathcal{P}_{n_{a+1}})}} \otimes_k \cdots \otimes_k H^{S_b/{I(\mathcal{P}_{n_b})}}.$$ Notice that the ideals considered here are not prime. In fact, we know no examples of domains $R$ for which the question in Remark \[RemarkKoszulHomologyLinearStrand\] has a negative answer, therefore we conclude the paper with the following:
Is there a Koszul algebra $R$ which is a domain and whose Koszul homology $H^R$ is not generated as a $k$-algebra in the linear strand?
Acknowledgements {#acknowledgements .unnumbered}
================
This project originated during the workshop Pragmatic 2014 in Catania. The authors would like to express their sincere gratitude to the organizers Alfio Ragusa, Francesco Russo, and Giuseppe Zappalà and to the lecturers Aldo Conca, Srikanth Iyengar, and Anurag Singh. The authors are especially grateful to the first two lecturers for suggesting this topic and for several helpful discussions.
[99]{}
L. L. Avramov, *Homology of local flat extensions and complete intersection defects*, Math. Ann. [**228**]{} (1977), 27–37.
L. L. Avramov, *Obstructions to the existence of multiplicative structures on minimal free resolutions*, Amer. J. Math. [**103**]{} (1981), 1–31.
L. L. Avramov, *Infinite Free Resolutions*, Six Lectures on Commutative Algebra (Bellaterra, 1996), 1–118, Progr. Math. [**166**]{}, Birkhäuser, Basel, 1998.
L. L. Avramov, A. Conca, and S. B. Iyengar, *Free resolutions over commutative Koszul algebras*, Math. Res. Lett. [**17**]{} (2010), 197–210.
L. L. Avramov, A. Conca, and S. B. Iyengar, *Subadditivity of syzygies of Koszul algebras*, Math. Ann. [**361**]{} (2015), 511–534.
L. L. Avramov and E. S. Golod, *Homology algebra of the Koszul complex of a local Gorenstein ring*, Math. Notes [**9**]{} (1971), 30–32.
A. Berglund, *Poincaré series of monomial rings*, J. Algebra [**295**]{} (2006), 211–230.
A. Boocher, A. D’Alì, E. Grifo, J. Montaño, and A. Sammartano, *On the growth of deviations*, preprint (2015), `arXiv:1504.01066`.
D. A. Buchsbaum and D. Eisenbud, *Algebra structures for finite free resolutions, and some structure theorems for ideals of codimension 3*, Amer. J. Math. [**99**]{} (1977), 447–485.
E. S. Golod, *On the homologies of certain local rings*, Soviet Math. Dokl. [**3**]{} (1962), 745–748.
D. R. Grayson and M. E. Stillman, *Macaulay2, a software system for research in algebraic geometry*, available at `http://www.math.uiuc.edu/Macaulay2/`.
T. H. Gulliksen, *A proof of the existence of minimal R-algebra resolutions*, Acta Math. [**120**]{} (1968), 53–58.
S. Jacques, *The Betti numbers of graph ideals*, PhD Thesis, The University of Sheffield (2004), `arXiv.math.AC/0410107`.
I. Peeva, *0-Borel fixed ideals*, J. Algebra [**184**]{} (1996), 945–984.
I. Peeva, *Graded Syzygies*, Vol. 14. Springer Science & Business Media (2010).
C. Schoeller, *Homologie des anneaux locaux noethériens*, C. R. Acad. Sci. Paris Sér. A [**265**]{} (1967), 768–771.
J. Tate, *Homology of Noetherian rings and local rings*, Illinois J. Math. [**1**]{} (1957), 14–27.
|
---
abstract: 'Gaussian Graphical Models (GGMs) are popular tools for studying network structures. However, many modern applications such as gene network discovery and social interactions analysis often involve high-dimensional noisy data with outliers or heavier tails than the Gaussian distribution. In this paper, we propose the Trimmed Graphical Lasso for robust estimation of sparse GGMs. Our method guards against outliers by an implicit trimming mechanism akin to the popular Least Trimmed Squares method used for linear regression. We provide a rigorous statistical analysis of our estimator in the high-dimensional setting. In contrast, existing approaches for robust sparse GGMs estimation lack statistical guarantees. Our theoretical results are complemented by experiments on simulated and real gene expression data which further demonstrate the value of our approach.'
author:
- |
Eunho Yang and Aurélie C. Lozano\
[IBM T.J. Watson Research Center]{}
bibliography:
- 'RobustGGM.bib'
- 'sml.bib'
- 'robggm.bib'
title: Robust Gaussian Graphical Modeling with the Trimmed Graphical Lasso
---
Introduction
============
Problem Setup and Robust Gaussian Graphical Models
==================================================
Statistical Guarantees of Trimmed Graphical Lasso
=================================================
Experiments
===========
Appendix {#appendix .unnumbered}
========
|
---
abstract: |
We determine the full mass and $q^2$ dependence of the heavy quark vacuum polarization function $\Pi(q^2)$ and its contribution to the total $e^+e^-$ cross section at ${\cal O}(\alpha_s^2)$ and ${\cal O}(\alpha_s^3)$ in perturbative QCD. We use known results for the expansions of $\Pi(q^2)$ at high energies, in the threshold region and around $q^2=0$, conformal mapping and the Padé approximation method. From our results for $\Pi(q^2)$ we determine numerically at ${\cal
O}(\alpha_s^3)$ the previously unknown non-logarithmic contributions in the high-energy expansion at order $(m^2/q^2)^i$ for $i=0,1$ and the coefficients in the expansion around $q^2=0$ at order $q^{2n}$ with $n\ge 2$. We also determine at ${\cal O}(\alpha_s^2)$ the previously unknown ${\cal O}(v^0)$ constant term in the expansion of $\Pi(q^2)$ in the threshold region, where $v$ is the quark velocity. Our method allows for a quantitative estimate of uncertainties and can be systematically improved once more information in the three kinematic regions becomes available by future multi-loop computations. For the contributions to the total $e^+e^-$ cross section at ${\cal
O}(\alpha_s^2)$ we confirm results obtained earlier by Chetyrkin, Kühn and Steinhauser.\
\
author:
- 'André H. Hoang[^1]'
- 'Vicent Mateu[^2]'
- 'S. Mohammad Zebarjad[^3]'
bibliography:
- 'vacpolpaper.bib'
title: 'Heavy Quark Vacuum Polarization Function at ${\cal O}(\alpha_s^2)$ and ${\cal O}(\alpha_s^3)$\'
---
Introduction {#sectionintroduction}
============
The vacuum polarization function $\Pi(q^2)$ defined by the correlator of two electromagnetic currents $j^\mu(x)=\bar\psi(x)\gamma^\mu\psi(x)$, $$\begin{aligned}
\label{pidef}
\left(g_{\mu\nu}q^2-q_\mu q_\nu\right)\, \Pi(q^2)
\, = \, \,
- \,i
\int\mathrm{d}x\, e^{iqx}\left\langle \,0\left|T\, j_\mu(x)j_\nu(0)\right|0\,
\right\rangle
\,,\end{aligned}$$ where $q^\mu$ is the four-momentum of the quark pair produced or annihilated by $j^\mu$, represents an important quantity for theoretical studies as well as for many practical phenomenological applications. Relevant applications for the case of massive quarks include predictions of the hadronic cross section $R\sim \mbox{Im}[\Pi]$, or sum rules for the determination of the heavy quark masses [@Novikov:1977dq; @Reinders:1984sr]. These sum rules are based on moments of the cross section for heavy quark pair production $$\begin{aligned}
M_n \, = \int_{4m^2}^\infty\dfrac{\mathrm{d}s}{s^{n+1}}\,R(s)
\,,\end{aligned}$$ which in fixed-order perturbation theory are related to the expansion coefficients of $\Pi(q^2)$ around $q^2= 0$, $$\begin{aligned}
\Pi(q^2\approx 0,m^2) \, = \, \dfrac{1}{12\,\pi^2\,Q_q^2}\sum_{n=1}^\infty
M_n\, q^{2n}
\,,\end{aligned}$$ where $Q_q$ is the heavy quark electric charge. In general the knowledge of the full dependence of the vacuum polarization function $\Pi(q^2)$ on $q^2$ and the quark mass $m$ is desirable to avoid having to rely on approximations that are only valid in certain kinematic regimes.
At ${\cal O}(\alpha_s)$ the full mass and $q^2$ dependence of the vacuum polarization function is known from analytic computations carried out in Ref. [@Kallen:1955fb]. At ${\cal O}(\alpha_s^2)$ analogous analytic results exist for the contributions that originate from inserting the massive [@Kniehl:1989kz; @Hoang:1994it] and massless [@Hoang:1995ex] fermion loops into ${\cal O}(\alpha_s)$ one-gluon exchange diagrams. For the other ${\cal O}(\alpha_s^2)$ contributions results for the expansions of $\Pi(q^2)$ in the high-energy limit, $|q^2|\to\infty$, the nonrelativistic threshold regime, $q^2\approx 4m^2$, and in the Euclidean region around $q^2=0$ were used to reconstruct an accurate approximation [@Chetyrkin:1995ii; @Chetyrkin:1996cf]. The method is based on the definition of subtraction functions which account for all logarithmic terms that arise for the expansions in the high-energy limit and in the threshold region. Using a conformal transformation to a new variable $\omega$ the full $q^2$ and mass dependence in the complex plane of the remaining contributions can be mapped into the unit circle rendering those contributions to an analytic function in the variable $\omega$. The latter can then be successfully approximated by Padé approximants using the remaining expansion coefficients that are not related to logarithmic terms. With a large number of expansion coefficients for the three kinematic limits the full mass and $q^2$ dependence of the ${\cal O}(\alpha_s^2)$ vacuum polarization function can be determined with small numerical uncertainties.
For the ${\cal O}(\alpha_s^3)$ vacuum polarization there is also no fully analytic result available in the literature. A numerical study of the full ${\cal O}(\alpha_s^3 n_f^2)$ double fermionic contributions to the vacuum polarization function can be found in Ref. [@Czakon:2007qi]. In the high-energy expansion its contributions to the total cross section up to order $(m^2/q^2)^2$ are known [@Chetyrkin:2000zk]. A comprehensive review of these results can be found in Ref. [@Harlander:2002ur]. Moreover, in the threshold region, where an expansion in the small quark velocity $v$ can be carried out, the ${\cal O}(\alpha_s^3)$ contributions to the total cross section at order $1/v^i$ for $i=2,1,0$ are available from a factorization theorem for the heavy quark-antiquark pair production cross section in nonrelativistic QCD (NRQCD) at next-to-next-to-leading order (NNLO) [@Hoang:1997sj; @Hoang:1998xf]. More recently also the moments $M_1$ [@Chetyrkin:2006xg; @Boughezal:2006px] and $M_2$ [@Maier:2008he] at ${\cal O}(\alpha_s^3)$ have become available using elaborate high-power computer algebra tools.
In this work we use the presently available information on the ${\cal O}(\alpha_s^3)$ corrections to the vacuum polarization function $\Pi(q^2)$ in the high-energy limit, in the threshold region and the small $q^2$ domain to reconstruct the full $q^2$ and mass dependence of the vacuum polarization function at ${\cal O}(\alpha_s^3)$. The method we use is similar to the approach of Refs. [@Chetyrkin:1995ii; @Chetyrkin:1996cf] employed previously for the ${\cal O}(\alpha_s^2)$ corrections of the vacuum polarization function (see also Refs. [@Baikov:1995ui; @Fleischer:1994ef; @Broadhurst:1993mw]), but also accommodates a few notable differences which are motivated by the fact that less information is known on the vacuum polarization function at ${\cal O}(\alpha_s^3)$. While in Refs. [@Chetyrkin:1995ii; @Chetyrkin:1996cf] information from the high-energy expansion up to order $(m^2/q^2)$ and from the threshold expansion up to next-to-leading order (NLO) was incorporated for the reconstruction, we account for the expansions up to order $(m^2/q^2)^2$ at high energies and up to NNLO in the threshold region. While in Refs. [@Chetyrkin:1995ii; @Chetyrkin:1996cf] the full set of terms in the high-energy expansion of $\Pi$ was used for the construction, in this work we only rely on the terms that carry an absorptive part and that contribute to the cross section above threshold. We show that our method allows us to determine previously unknown non-logarithmic terms of the vacuum polarization at ${\cal O}(\alpha_s^3)$ in the high-energy expansion with very small errors. Moreover, while in Refs. [@Chetyrkin:1995ii; @Chetyrkin:1996cf] and later in Ref. [@Chetyrkin:1997mb] the coefficients of the small-$q^2$ expansion were included up to order $q^{14}$ and $q^{16}$, respectively, we only rely on the presently available ${\cal O}(\alpha_s^3)$ coefficients up to order $q^4$. Our method allows us to determine the expansion coefficients at order $q^{2n}$ with $n\ge 3$. The results allow us to compute the corresponding moments $M_n$ in the fixed-order expansion at ${\cal O}(\alpha_s^3)$. For phenomenologically relevant values of $n$ the error in the $M_n$ due to the uncertainties in these coefficients is an order of magnitude smaller than the remaining scale-uncertainties of the $M_n$ at ${\cal O}(\alpha_s^3)$. We demonstrate the reliability of the results by using the same approach for determining the corresponding coefficients for the vacuum polarization function at ${\cal O}(\alpha_s^2)$ where their values are well known analytically from the computations of Feynman diagrams. Another noteworthy difference between our approach and that of Refs. [@Chetyrkin:1995ii; @Chetyrkin:1996cf] is that we implement a continuous set of subtraction functions to have a more reliable estimation of the uncertainty inherent in the method. Our approach can systematically incorporate new information from the expansions in the three kinematic regions, once it becomes available.
One important application of the vacuum polarization function at ${\cal O}(\alpha_s^3)$ obtained in this work is an analysis of low-$n$ moments of the $e^+e^-\to c\bar c$ cross section to determine the $\overline{\mbox{MS}}$ charm quark mass $\overline m_c$ and to investigate the uncertainty in $\overline m_c$ that arises from the difference of using fixed-order and contour-improved perturbation theory. For using contour-improved perturbation theory, which involves integrations of $\Pi(q^2)$ in the complex $q^2$-plane, it is essential to have the full mass and $q^2$ dependence of the vacuum polarization function. Such an analysis was carried out in Ref. [@Hoang:2004xm] at ${\cal O}(\alpha_s^2)$. Determinations of the charm quark mass $\overline m_c$ using the vacuum polarization function at ${\cal O}(\alpha_s^3)$ in the fixed-order expansion alone were carried out recently in Refs. [@Kuhn:2007vp; @Boughezal:2006px]. In this paper we discuss in detail the reconstruction of the full $q^2$ and mass dependence of the ${\cal O}(\alpha_s^2)$ and ${\cal O}(\alpha_s^3)$ corrections to the vacuum polarization function as outlined above. The thorough analysis of uncertainties in the charm and bottom quark $\overline{\mbox{MS}}$ masses obtained from low-$n$ moments of the $e^+e^-$ cross section will be given in a subsequent publication.
The program of this paper is as follows: In Sec. \[sectionnotation\] we set up our notation and in Sec. \[sectionmethod\] we present the basic features of our method for reconstructing the vacuum polarization function. In Sec. \[sectionPilog\] we explain details about how logarithmic contributions in the expansions in the threshold region and for high energies are incorporated and in Sec. \[sectionPireg\] we present how the remaining non-logarithmic terms are treated. Some of the solutions we obtain have unphysical properties. Criteria that allow to identify and discard such solutions are discussed in Sec. \[sectionunphysical\]. Numerical analyses for the ${\cal O}(\alpha_s^2)$ and ${\cal O}(\alpha_s^3)$ contributions of the vacuum polarization function are given in Secs. \[sectionOas2\] and \[sectionOas3\]. Our conclusions are given in Sec. \[sectionconclusions\].
Notation {#sectionnotation}
========
The relation between the normalized $e^+e^-$ cross section $R$ and the vacuum polarization function $\Pi$ reads $$\begin{aligned}
\label{eq:spectral-function}
R(q^2) \, = \, 12\pi\,Q_q^2\,\mbox{Im}\,\Pi(q^{2}+i0,m^2)\,,\end{aligned}$$where $Q_q$ is the heavy quark electric charge. The perturbative fixed-order expansion of $\Pi(q^2,m^2)$ has the form $$\begin{aligned}
\label{eq:General-Pi}
\Pi(q^{2},m^2) \, = \,&\,
\Pi^{(0)}(q^{2},m^2)
\, + \,\left(\frac{C_F\,\alpha_{s}(\mu^{2})}{\pi}\right)\,
\Pi^{(1)}(q^{2},m^2)
{\nonumber}\\[2mm] &
\, + \,
\left(\frac{\alpha_{s}(\mu^{2})}{\pi}\right)^{2}\,
\Pi^{(2)}(q^{2},m^2,\mu^2)
\, + \,
\left(\frac{\alpha_{s}(\mu^{2})}{\pi}\right)^{3}\Pi^{(3)}(q^{2},m^2,\mu^2)
\,+\,\cdots\,,\end{aligned}$$ with the color factor $C_F=4/3$. We use the on-shell normalization of the vacuum polarization function, where $$\begin{aligned}
\label{Thomson}
\Pi(0,m^2) \, = \,0\,.\end{aligned}$$ We exclude the so-called singlet contributions where the vacuum polarization function contains a three-gluon cut. Note that in this work we do not distinguish between the contributions in $\Pi^{(2)}$ and $\Pi^{(3)}$ proportional to the different SU(3) group theory color factors since there is no compelling technical reason that would make such a distinction mandatory. This approach neglects the existence of the multi-particle cuts from diagrams with the insertion of massive fermion loops. Their contribution is strongly phase-space-suppressed and can be safely ignored for the level of accuracy intended in this work. We emphasize, however, that our approach can be applied to the individual color contributions as well.
For the reconstruction of the vacuum polarization function accomplished in this work we use exclusively the pole mass scheme, $m=m_{\rm
pole}$, since it allows for the most transparent treatment of the information from the quark pair production threshold. Moreover we use the choice $\mu=m=m_{\rm pole}$ for the renormalization scale and generally suppress the $\mu$-dependence of the functions $\Pi^{(i)}$. To simplify the presentation we frequently use the variable $$\begin{aligned}
z \, \equiv \, \frac{q^2}{4 m^2}
\,.\end{aligned}$$ For the strong coupling we use $n_f=n_\ell+1$ active running flavors, where quarks that are heavier than those produced by the current $j^\mu$ are integrated out and where all $n_\ell$ light flavors are treated as massless.
The analytic expression for the vacuum polarization functions at ${\cal O}(\alpha_s)$ [@Kallen:1955fb] is an important ingredient of our analysis. The corresponding contributions using the notation of Eq. (\[eq:General-Pi\]) have the form $$\begin{aligned}
\label{eq:Pi01}
\Pi^{(0)} & \, = \,
\frac{3}{16\pi^{2}}\left[\frac{20}{9}+\frac{4}{3z}-\frac{4(1-z)(1+2z)}{3z}G(z)\right],
{\nonumber}\\[2mm]
\Pi^{(1)} & \, = \,
\frac{3}{16\pi^{2}}\left[\frac{5}{6}+\frac{13}{6z}-\frac{(1-z)(3+2z)}{z}G(z)+
\frac{(1-z)(1-16z)}{6z}G^{2}(z)\right.
-\,\left.\frac{(1+2z)}{6z}\left(1+2z(1-z)\frac{d}{dz}\right)\frac{I(z)}{z}\right],\end{aligned}$$ where $$\begin{aligned}
\label{eq:Gz}
I(z) & \, = \,
6\Big[\zeta_{3}+4\,\mbox{Li}_{3}(-u)+2\,\mbox{Li}_{3}(u)\Big]-
8\Big[2\,\mbox{Li}_{2}(-u)+\mbox{Li}_{2}(u)\Big]\ln u
-2\Big[2\,\ln(1+u)+\ln(1-u)\Big]\ln^{2}u\,, {\nonumber}\\[2mm]
G(z) & \, = \, \frac{2\, u\,\ln u}{u^{2}-1}\,,
\quad
\mbox{with}\quad
u \, \equiv \, \frac{\sqrt{1-1/z}-1}{\sqrt{1-1/z}+1}
\,.\end{aligned}$$ An important application is the determination of the moments $M_n$ in the fixed-order expansion. Here the pole mass scheme is strongly disfavored since it contains an ${\cal O}(\Lambda_{\rm QCD})$ renormalon ambiguity that leads to a quite bad perturbative expansion of the moments. For small values of $n$ this problem can be avoided conveniently by using the $\overline{\mbox{MS}}$ mass scheme. For the complications that arise for large values of $n$ see e.g. Refs. [@Hoang:1998uv; @Hoang:1999ye]. In this paper we use the $\overline{\mbox{MS}}$ running mass $\overline m$ with $n_f=n_\ell+1$ running flavors for discussions of the moments $M_n$. Using a common renormalization scale $\mu$ for the mass and the strong coupling, the fixed-order perturbative expansion of the moments $M_n$ can be written in the form \[$l_{m\mu}=\ln(\bar m^2(\mu)/\mu^2)$\] $$\begin{aligned}
\label{Mnexpanded}
M_n \, = \, & \,\frac{9}{4}\,\frac{Q_q^2}{(4\bar m^2(\mu))^n}\,
\bigg[ \bar{C}_n^{(0)}
+ \frac{\alpha_s(\mu)}{\pi}
\left( \bar{C}_n^{(10)} + \bar{C}_n^{(11)}l_{m\mu} \right)
+\left(\frac{\alpha_s(\mu)}{\pi}\right)^2
\left( \bar{C}_n^{(20)} + \bar{C}_n^{(21)}l_{m\mu}
+ \bar{C}_n^{(22)}l_{m\mu}^2 \right)
{\nonumber}\\[2mm] & \hspace{2.5cm}
+\,\left(\frac{\alpha_s(\mu)}{\pi}\right)^3
\left( \bar{C}_n^{(30)} + \bar{C}_n^{(31)}l_{m\mu} + \bar{C}_n^{(32)}l_{m\mu}^2 +
\bar{C}_n^{(33)}l_{m\mu}^3 \right)\bigg]
\,,\end{aligned}$$ adopting the notation of Refs. [@Chetyrkin:1995ii; @Chetyrkin:1996cf].
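As a simple cross-check of the expressions collected above, one can verify numerically that the imaginary part of $\Pi^{(0)}$ in Eq. (\[eq:Pi01\]), evaluated with the $q^2+i0$ prescription, reproduces the well-known Born cross section $R^{(0)}=\tfrac{3}{2}\,Q_q^2\,v(3-v^2)$ with $v=\sqrt{1-1/z}$ via Eq. (\[eq:spectral-function\]). The following minimal Python sketch (an illustration only, not part of the original analysis; the charge factor $Q_q^2$ is stripped) shows this:

```python
import cmath

def G(z):
    # G(z) of Eq. (eq:Gz); z carries a small positive imaginary part (q^2 + i0)
    s = cmath.sqrt(1 - 1/z)
    u = (s - 1)/(s + 1)
    return 2*u*cmath.log(u)/(u*u - 1)

def Pi0(z):
    # leading-order vacuum polarization of Eq. (eq:Pi01)
    return 3/(16*cmath.pi**2)*(20/9 + 4/(3*z) - 4*(1 - z)*(1 + 2*z)/(3*z)*G(z))

for v in (0.2, 0.5, 0.9):
    z = 1/(1 - v**2) + 1e-12j          # point just above the cut, z > 1
    print(v, 12*cmath.pi*Pi0(z).imag, 1.5*v*(3 - v**2))
```

The two printed numbers agree to the numerical precision of the $+i0$ regulator, which also serves as a check of the branch conventions used for $G(z)$ below.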
The Method {#sectionmethod}
==========
The expansions of $\Pi(z)$ in the threshold region $z\simeq 1$ and the high-energy limit $|z|\to\infty$ involve powers of $\log(1-z)$ and $\log(-4z)$, respectively. Above the production threshold, $z>1$, these logarithmic terms contribute to the absorptive part of $\Pi$ that constitutes the cross section according to Eq. (\[eq:spectral-function\]). On the other hand, the expansion around $z=0$, which is located in the Euclidean region, leads to fully analytic terms and admits an ordinary Taylor expansion. We want to reconstruct the full $q^2$ and mass dependence of $\Pi^{(3)}$ by building functions that incorporate all known properties of $\Pi^{(3)}$ in the threshold regime, the high-energy limit and the region around $z=0$. We carry out the same program also for $\Pi^{(2)}$, using only those coefficients in the expansions that are analogous to the information available for $\Pi^{(3)}$. From the reconstructed $\Pi^{(3)}$ we can determine previously unknown non-logarithmic coefficients in the high-energy and the nonrelativistic expansions as well as the ${\cal O}(\alpha_s^3)$ corrections to the moments $M_n$ for $n\ge 3$. Using the reconstructed $\Pi^{(2)}$ function we can test the reliability of these determinations and find that these coefficients and moments can be determined remarkably well.
Following the approach of Ref. [@Chetyrkin:1995ii], we split $\Pi^{(2,3)}(z)$ into two parts, $$\begin{aligned}
\label{Piseparated}
\Pi^{(2,3)}(z) \, = \, \Pi^{(2,3)}_{\rm reg}(z) \, + \, \Pi^{(2,3)}_{\rm log}(z)
\,,\end{aligned}$$ where $\Pi^{(2,3)}_{\rm log}(z)$ are designed such that they contain the logarithmic terms in the expansions around $z=1$ and for $|z|\to\infty$. They can be conveniently constructed from the functions $\Pi^{(1)}$ and $G(z)$ given in Eqs. (\[eq:Pi01\]) and (\[eq:Gz\]), since the latter readily provide the analytic structures needed to build the appropriate threshold and high-energy behavior into $\Pi^{(2,3)}_{\rm log}(z)$. Once $\Pi^{(2,3)}_{\rm log}(z)$ has been specified, the remaining task is to construct a Padé approximant for $\Pi^{(2,3)}_{\rm reg}$ that allows us to incorporate the remaining non-logarithmic constraints in the regions $z\simeq
1$, $|z|\to\infty$ and $z\simeq 0$. The general structure of a Padé approximant $P_{n,m}$ has the form $$\begin{aligned}
\label{Padedef}
P_{n,m}(x) \, = \,
\frac{\sum_{i=0}^{n}a_{i}x^{i}}{1+\sum_{j=1}^{m}b_{j}x^{j}}
\,,\end{aligned}$$ which means that there are $n+m+1$ coefficients that need to be specified. Note that the coefficients $a_i$ and $b_j$ are real numbers. Since $\Pi^{(2,3)}_{\rm reg}$ still has a physical cut for $z>1$ along the positive real $z$ axis, one cannot use the variable $z$ to formulate the Padé approximant. A convenient variable to automatically account for this cut is $\omega$ defined by (see e.g. Refs. [@Broadhurst:1993mw; @Fleischer:1994ef]) $$\begin{aligned}
\label{eq:omega}
\omega \, = \,
\frac{1-\sqrt{1-z}}{1+\sqrt{1-z}}
\,,\qquad \qquad
z \, = \, \frac{4\omega}{(1+\omega)^{2}}
\,.\end{aligned}$$ Here, the physical $z$-plane is mapped into the unit circle of the complex $\omega$-plane, where approaching the physical cut from the upper (lower) complex $z$-half-plane corresponds to approaching the upper (lower) unit semicircle in the complex $\omega$-plane. The three points $z=(0,1,\pm\infty)$ are conformally mapped onto $\omega=(0,1,-1)$. Expressed in terms of the variable $\omega$, $\Pi^{(2,3)}_{\rm reg}$ can therefore be approximated by rational functions involving the Padé approximant $P_{n,m}(\omega)$. All Padé approximants that turn out to have unphysical poles inside the unit circle have to be discarded. In practice some additional restrictive criteria have to be imposed to avoid an unphysical behavior of $\Pi$ and $R$ due to poles in the Padé approximant outside the unit circle that are either close to the unit circle or have a large residue. We discuss these restrictions in Sec. \[sectionunphysical\].
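The basic properties of this conformal map are easy to verify numerically. The short Python sketch below (an illustration only, not part of the construction itself) checks that $z=(0,1,\pm\infty)$ map onto $\omega=(0,1,-1)$, that a point just above the physical cut lands on the upper unit semicircle, and that Eq. (\[eq:omega\]) inverts correctly:

```python
import cmath

def omega(z):
    # conformal map of Eq. (eq:omega); the principal square root puts the
    # physical cut z > 1 onto the unit circle of the omega plane
    s = cmath.sqrt(1 - z)
    return (1 - s)/(1 + s)

def z_of_omega(w):
    return 4*w/(1 + w)**2

print(omega(0), omega(1), omega(-1e12), omega(1e12 + 1e-6j))  # -> 0, 1, ~-1, ~-1
w = omega(2.0 + 1e-9j)                 # a point just above the cut (z > 1)
print(abs(w), w.imag > 0)              # -> ~1, True
print(z_of_omega(omega(0.3 - 0.7j)))   # inverse map reproduces z
```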
The constructions of $\Pi^{(2,3)}_{\rm log}(z)$ and of the Padé approximant for $\Pi^{(2,3)}_{\rm reg}$ are of course not unique, and the resulting reconstructed $\Pi^{(2,3)}$ functions depend on the choices made in their construction. The ambiguity of the procedure therefore needs to be quantified by accounting for variations in the construction. While in Refs. [@Chetyrkin:1995ii; @Chetyrkin:1996cf] variations coming from different choices for $P_{n,m}$ were included in the error estimate, we additionally include continuous variations in the construction of $\Pi^{(2,3)}_{\rm log}$. We test the reliability of the method by determining properties of $\Pi^{(2)}$ that are known precisely from analytic computations, but that have not been used as input for the construction of the approximation for $\Pi^{(2)}$.
Designing $\Pi_{\rm log}$ {#sectionPilog}
=========================
To determine $\Pi_{\rm log}^{(2,3)}$ we need to account for the logarithmic terms that arise in $\Pi^{(2,3)}$ in the threshold region $z\to 1$ and in the high-energy limit $|z|\to\infty$. To facilitate the presentation it is convenient to write $$\begin{aligned}
\label{Pilogdef}
\Pi_{\rm log}^{(2,3)}(z) \, = \,
\Pi_{\rm thr}^{(2,3)}(z) \, + \,
\Pi_{\rm inf}^{(2,3)}(z) \, + \,
\Pi_{\rm zero}^{(2,3)}(z)
\,,\end{aligned}$$ where $\Pi_{\rm thr}^{(2,3)}$ and $\Pi_{\rm inf}^{(2,3)}$ are designed to account for the logarithmic terms at threshold and at high energies, respectively, and $\Pi_{\rm zero}^{(2,3)}$ incorporates subtractions that ensure a physical behavior at $z=0$.\
[*Threshold Logarithms.*]{} We start by presenting the expansions of $\Pi^{(1,2,3)}(z)$ and $G(z)$ in the threshold limit $z\to 1$ keeping terms up to NNLO in the expansion in $\sqrt{1-z}$: $$\begin{aligned}
\label{Pithresh}
\Pi^{(1)}(z) \, = \,& \,-0.1875\ln(1-z)-0.314871 +0.477465\sqrt{1-z}
{\nonumber}\\[1mm] &
+\Big(0.354325 + 0.125 \ln(1-z)\Big)(1-z)
+ {\cal O}\Big((1-z)^{3/2}\Big)
\,,
{\nonumber}\\[1mm]
\Pi^{(2)}(z) \, = \,&
\frac{1.72257}{\sqrt{1-z}}
+(0.34375-0.0208333 n_{\ell})\ln^{2}(1-z)+(0.0116822 n_{\ell}
+1.64058 )\ln(1-z)
+ K^{(2)}
{\nonumber}\\[1mm] &
+\Big(\!
-0.721213 - 0.0972614 n_\ell + 3.05433 \ln(1-z)\Big)\sqrt{1-z}
\, + \, {\cal O}\Big((1-z)\Big)
\,,
{\nonumber}\\[1mm]
\Pi^{(3)}(z) \, = \,&
\frac{2.63641}{1-z}+\frac{0.678207 n_{\ell}-27.2677}{\sqrt{1-z}}
+(0.57419 n_{\ell}-9.47414)\,\frac{\log(1-z)}{\sqrt{1-z}}
{\nonumber}\\[1mm] &
+(-0.00231481 n_{\ell}^{2}+0.0763889 n_{\ell}-0.630208)\log^{3}(1-z)
{\nonumber}\\[1mm] &
+(0.00194703 n_{\ell}^{2}+0.0312341 n_{\ell}+1.3171)\log^{2}(1-z)
{\nonumber}\\[1mm] &
+(-0.0690848 n_{\ell}^{2}+2.37068 n_{\ell}-17.6668)\log(1-z)
\, + \, K^{(3)}
+ {\cal O}\Big((1-z)^{1/2}\Big)
\,,
{\nonumber}\\[1mm]
G(z) \, = \,& \frac{\pi}{2\sqrt{1-z}} - 1
+ \frac{\pi\sqrt{1-z}}{4}
\, + \, {\cal O}\Big((1-z)\Big)
\,.\end{aligned}$$ To avoid clutter we give the various coefficients for $\Pi^{(1,2,3)}(z)$ in numerical form, but keep the number $n_\ell$ of light quark flavors as a variable. The expansions of $\Pi^{(1)}$ and $G$ are known from their exact expressions given in Eqs. (\[eq:Pi01\]) and (\[eq:Gz\]), while the expansion for $\Pi^{(2)}$ can be derived from the results for $R$ in the threshold region computed in Ref. [@Czarnecki:1997vz]. The expansion for $\Pi^{(3)}$ is obtained from the NNLO threshold cross section factorization formula for $R$ within NRQCD first derived in Refs. [@Hoang:1997sj; @Hoang:1998xf] (see also Ref. [@Hoang:2001mm]). The result was later confirmed by many other groups [@Hoang:2000yr]. Note that within NRQCD it is the standard convention that only the $n_\ell$ light quark species contribute to the running of the strong coupling. Switching to $n_f=n_\ell+1$ running flavors affects the coefficient of the term $\propto\ln(1-z)$ in $\Pi^{(3)}$. All other coefficients shown in Eq. (\[Pithresh\]) are unaffected. We also note that the singlet contributions to the vacuum polarization function only affect the threshold expansion at N${}^4$LO in the expansion in $\sqrt{1-z}$ and do not contribute at the order we consider here.[^4] The constant terms $K^{(2,3)}$ that appear in the nonrelativistic expansion of $\Pi^{(2,3)}$ have not yet been computed from Feynman diagrams. As we show in Secs. \[sectionOas2\] and \[sectionOas3\], they can be determined from the reconstructed vacuum polarization function based on the method described in Sec. \[sectionmethod\].
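As a sanity check of the expansions quoted above, the threshold behavior of $G(z)$ stated in Eq. (\[Pithresh\]) can be compared directly against its exact form in Eq. (\[eq:Gz\]). A minimal numerical sketch (for illustration only):

```python
import cmath

def G(z):
    # exact G(z) of Eq. (eq:Gz); invariant under the choice of square-root branch
    s = cmath.sqrt(1 - 1/z)
    u = (s - 1)/(s + 1)
    return 2*u*cmath.log(u)/(u*u - 1)

def G_threshold(z):
    # expansion of G(z) around z = 1 as quoted in Eq. (Pithresh)
    return cmath.pi/(2*cmath.sqrt(1 - z)) - 1 + cmath.pi*cmath.sqrt(1 - z)/4

for z in (0.9, 0.99, 0.999):
    print(z, G(z).real, G_threshold(z).real)
```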
$\ln^{m}(1-z)$ $[\Pi^{(1)}(z)]^m$
---------------------------- -------------------------------------
$(1-z)^{-n/2}\ln^{m}(1-z)$ $[G(z)]^n [\Pi^{(1)}(z)]^m$
$(1-z)^{n/2}\ln^{m}(1-z)$ $(1-z)^n [G(z)]^n [\Pi^{(1)}(z)]^m$
: First column: Logarithmic terms that arise in the expansion of $\Pi^{(2,3)}(z)$ close to threshold where $z\approx 1$. Second column: Corresponding functions used in the construction of $\Pi^{(2,3)}_{\rm
thr}(z)$. \[tab:thresh\]
To construct $\Pi_{\rm thr}^{(2,3)}$ we have to find appropriate functions that account for the different combinations of the logarithmic term $\ln(1-z)$ and powers of $\sqrt{1-z}$ that appear in Eqs. (\[Pithresh\]). A convenient choice is given in Tab. \[tab:thresh\], and leads to $$\begin{aligned}
\label{Pithreshansatz}
\Pi_{\mathrm{thr}}^{(2)}(z)
\,=\, &\,
A_{0}^{(2)}\frac{1+a_0^{(2)}\, z}{z}\left[\Pi^{(1)}(z)\right]^{2}
+ A_{1}^{(2)}\Pi^{(1)}(z)
+A_{2}^{(2)}(1-z)G(z)\Pi^{(1)}(z)
\,,
{\nonumber}\\[2mm]
\Pi_{\mathrm{thr}}^{(3)}(z)
\, =\, &\,
A_{0}^{(3)}\dfrac{1+a_0^{(3)}\, z}{z}\left[\Pi^{(1)}(z)\right]^{3}
+A_{1}^{(3)}\left[\Pi^{(1)}(z)\right]^{2}+A_{2}^{(3)} \dfrac{1+a_2^{(3)}\, z}{z}G(z)\Pi^{(1)}(z)
+A_{3}^{(3)}\Pi^{(1)}(z)
\,.\end{aligned}$$ The coefficients $A_i^{(2,3)}$ can be unambiguously determined from the expressions shown in Eqs. (\[Pithresh\]). Obviously the choices in Tab. \[tab:thresh\] are not unique. To have a quantitative way of accounting for this source of uncertainty, we have multiplied the functions related to the highest power of $\ln(1-z)$ at the different orders of the expansion in $\sqrt{1-z}$ by a factor $(1+a^{(2,3)}_i\,z)/z$, where the $a^{(2,3)}_i$ are free parameters. Since the construction becomes singular for $a^{(2,3)}_i=-1$, we exclude this value and use variations in the ranges $a^{(2,3)}_i \ge0$ and $a^{(2,3)}_i \le -2$. Note that for $|a^{(2,3)}_i|\to\infty$ the factor $(1+a^{(2,3)}_i\,z)/z$ becomes $z$-independent and the results for $\Pi^{(2,3)}_{\rm thr}$ become independent of the $a^{(2,3)}_i$. We note that, given the functions in Tab. \[tab:thresh\], it is straightforward to account for even higher terms in the threshold expansion in the construction of $\Pi_{\rm thr}^{(2,3)}$.\
[*High-Energy Logarithms.*]{} The expansions of $\Pi^{(1,2,3)}(z)$ and $G(z)$ in the high-energy limit $|z|\to\infty$ read $$\begin{aligned}
\label{Pihigh}
\Pi^{(1)}(z) \, = \,&
-\,0.018998\log(-4z)-0.075514- 0.056993 \frac{\ln(-4z)}{z}
{\nonumber}\\[1mm] &
+ \,\frac{0.023628}{z^{2}}- 0.014248 \frac{\ln^{2}(-4z)}{z^{2}}
- 0.011874 \frac{\ln(-4z)}{z^{2}}
+\,{\cal O}(z^{-3})
\,,
{\nonumber}\\[1mm]
\Pi^{(2)}(z) \, = \,& \,
(0.034829 - 0.0021109 n_f)\ln^{2}(-4z)
+(-0.050299 + 0.0029205 n_f)\ln(-4z)
+ H^{(2)}_0
{\nonumber}\\[1mm] &
+ (0.18048 - 0.0063326 n_f)\frac{\ln^{2}(-4z)}{z}
+ (-0.59843 + 0.027441 n_f)\frac{\ln(-4z)}{z}
+ \frac{H^{(2)}_1}{z}
{\nonumber}\\[1mm] &
+ (0.042745 - 0.0010554 n_f)\frac{\ln^3(-4z)}{z^2}
+ (-0.10132 + 0.0058049 n_f)\frac{\ln^2(-4z)}{z^2}
{\nonumber}\\[1mm] &
+ (-0.48134 + 0.032065 n_f)\frac{\ln(-4z)}{z^2}
+ \frac{H^{(2)}_2}{z^2}
+\,{\cal O}(z^{-3})
\,,
{\nonumber}\\[1mm]
\Pi^{(3)}(z) \, = \,& \,
(-0.063853 + 0.0077398 n_f-0.00023454\ n_f^2) \ln^{3}(-4z)
{\nonumber}\\[1mm] &
+ (0.21906 - 0.026441 n_f + 0.0004867 n_f^2) \ln^{2}(-4z)
{\nonumber}\\[1mm] &
+ (-0.46209 + 0.10679n_f - 0.0021837n_f^2)\ln(-4z)
+ H^{(3)}_0
{\nonumber}\\[1mm] &
+ (-0.45120+ 0.035885 n_f - 0.00070362 n_f^2)\frac{\ln^{3}(-4z)}{z}
+ (3.0848 - 0.26016 n_f + 0.0045735 n_f^2) \frac{\ln^{2}(-4z)}{z}
{\nonumber}\\[1mm] &
+ (-6.6516 + 0.78237 n_f - 0.0146587 n_f^2) \frac{\ln(-4z)}{z}
+ \frac{H^{(3)}_1}{z}
{\nonumber}\\[1mm] &
+ (-0.10152 + 0.0060687 n_f - 0.000087952 n_f^2) \frac{\ln^{4}(-4z)}{z^{2}}
{\nonumber}\\[1mm] &
+ (0.57013 - 0.044856 n_f + 0.0006743 n_f^2) \frac{\ln^{3}(-4z)}{z^{2}}
+ (0.17822 + 0.038525 n_f - 0.00088394 n_f^2) \frac{\ln^{2}(-4z)}{z^{2}}
{\nonumber}\\[1mm] &
+ (-8.8712 + 1.0393 n_f - 0.026019 n_f^2) \frac{\ln(-4z)}{z^{2}}
+ \frac{H^{(3)}_2}{z^2}
+\,{\cal O}(z^{-3})
\,,
{\nonumber}\\[1mm]
G(z) \, = \,&
-\frac{\log(-4z)}{2z}
+\frac{1-\log(-4z)}{4z^{2}}
+\,{\cal O}(z^{-3})
\,.\end{aligned}$$ The expansions for $\Pi^{(1)}$ and $G$ are known from the exact expressions given in Eqs. (\[eq:Pi01\]) and (\[eq:Gz\]), while the expansion for $\Pi^{(2)}$ was taken from Ref. [@Chetyrkin:1994ex]. Note that many orders in the high-energy expansion are known for $\Pi^{(2)}$ [@Chetyrkin:1997qi], but in this work we only consider terms up to order $1/z^2$, since our analysis for $\Pi^{(2)}$ mainly serves as a testing ground for the application to $\Pi^{(3)}$. The expansion for $\Pi^{(3)}$ was obtained in Refs. [@Chetyrkin:2000zk]. At ${\cal
O}(\alpha_s^2)$ the non-logarithmic coefficients $H^{(2)}_{0,1}$ are known analytically [@Chetyrkin:1996cf] and read $$\begin{aligned}
\label{HOas}
H^{(2)}_0 \, = \, & -0.73628 + 0.037645 n_f
\,,
{\nonumber}\\[2mm]
H^{(2)}_1 \, = \, & -0.30324 + 0.029002 n_f
\,.\end{aligned}$$ At ${\cal O}(\alpha_s^3)$ the non-logarithmic coefficients $H^{(3)}_{0,1}$ have not been computed from Feynman diagrams in the literature before. As we show in Secs. \[sectionOas2\] and \[sectionOas3\] they can be determined from the reconstructed vacuum polarization function $\Pi^{(3)}$.
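The assignments in Tab. \[tab:high\] below can be made plausible numerically: for instance, $(1-z)\,G(z)$ grows like $\tfrac{1}{2}\ln(-4z)$ in the high-energy limit, so products of such factors generate the required powers of $\ln(-4z)$. A small illustrative check (not part of the construction itself):

```python
import cmath

def G(z):
    s = cmath.sqrt(1 - 1/z)
    u = (s - 1)/(s + 1)
    return 2*u*cmath.log(u)/(u*u - 1)

# along the Euclidean axis z -> -infinity the ratio tends to 1
for z in (-1e2, -1e4, -1e6):
    print(z, (2*(1 - z)*G(z)/cmath.log(-4*z)).real)
```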
$\ln^{n}(-4z)$ $(1-z)^n\,[G(z)]^n$
----------------------------------- -------------------------------
$\frac{1}{z}\ln^{n}(-4z)\,,(n>1)$ $(1-z)^{n-1}\,[G(z)]^n$
$\frac{1}{z}\ln(-4z)$ $\frac{1-z}{z}\,G(z)$
$\frac{1}{z^2}\ln(-4z)$ $\frac{1-z}{z^2}\,G(z)$
$\frac{1}{z^2}\ln^2(-4z)$ $\frac{1-z}{z}\,[G(z)]^2$
$\frac{1}{z^2}\ln^3(-4z)$ $\frac{(1-z)^2}{z}\,[G(z)]^3$
$\frac{1}{z^2}\ln^4(-4z)$ $(1-z)^2\,[G(z)]^4$
: First column: Logarithmic terms that arise in the high-energy expansion of $\Pi^{(2,3)}(z)$ where $|z|\to\infty$. Second column: Corresponding functions used in the construction of $\Pi^{(2,3)}_{\rm inf}(z)$. \[tab:high\]
To construct $\Pi^{(2,3)}_{\rm inf}(z)$ we have to find functions that can account for the different combinations of powers of $\ln(-4z)$ and of powers of $1/z$ that arise in the expansions of Eq. (\[Pihigh\]). A convenient choice is given in Tab. \[tab:high\]. Our guideline for including the factors of $(1-z)^i$ is to ensure that the functions are constant or $\sim\sqrt{1-z}$ in the threshold limit $z\to 1$. This leads to $$\begin{aligned}
\label{Pihighansatz}
\Pi_{\rm{inf}}^{(2)}
\, = \,&
B_{0}^{(2)}\frac{1+b_{0}^{(2)}z}{z}(1-z)^{2}G(z)^{2}+\left(
B_{10}^{(2)}+\frac{B_{11}^{(2)}}{z}\right)
\frac{1+b_{1}^{(2)}z}{z}(1-z)G(z)^{2}
{\nonumber}\\[2mm] &
+ \left(B_{30}^{(2)}+\frac{B_{31}^{(2)}}{z}+\frac{B_{32}^{(2)}}{z^2}\right)(1-z)G(z)
+ B_{4}^{(2)}\frac{(1-z)^{2}}{z}G(z)^{3}
,
{\nonumber}\\[2mm]
\Pi_{\rm{inf}}^{(3)}
\, = \, &
\frac{1+b_{0}^{(3)}z}{z}\left(B_{00}^{(3)}
+\frac{B_{01}^{(3)}}{z}\right)(1-z)^{3}G(z)^{3}+B_{1}^{(3)}(1-z)^{2}G^{4}(z)
+\frac{1+b_{2}^{(3)}z}{z}\left(B_{20}^{(3)}
+\frac{B_{21}^{(3)}}{z}\right)(1-z)^{2}G(z)^{3}
{\nonumber}\\[2mm] &
+B_{30}^{(3)}(1-z)^{2}G(z)^{2}+\left(B_{40}^{(3)}+\frac{B_{41}^{(3)}}{z}\right)(1-z)G(z)^{2}
+\left(B_{50}^{(3)}+\frac{B_{51}^{(3)}}{z}+\frac{B_{52}^{(3)}}{z^{2}}\right)(1-z)G(z)
\,,\end{aligned}$$ where the coefficients $B^{(n)}_i$ can be determined unambiguously from the conditions in Eqs. (\[Pithresh\]) and (\[Pihigh\]). In analogy to $\Pi_{\rm thr}^{(2,3)}$ we have included modification factors $(1+b^{(2,3)}_i\,z)/z$ for the functions that are related to the highest-power logarithmic terms at each order in the $1/z$ expansion. In $\Pi^{(3)}_{\rm inf}$ we have a common modification factor for the functions related to the terms $\ln^3(-4 z)/z$ and $\ln^3(-4 z)/z^2$. For the parameters $b^{(2,3)}_i$ the choice $b^{(2,3)}_i=0$ is excluded because in this case the construction becomes singular. For our analysis we adopt variations in the ranges $|b^{(2,3)}_i|\ge 1$. Using functions along the lines of Tab. \[tab:high\] it is straightforward to account for even higher terms in the high-energy expansion in the construction of $\Pi_{\rm inf}^{(2,3)}$.
Note that for the determination of the coefficients $A_i^{(2,3)}$ and $B_i^{(2,3)}$ one first fixes the constants $a^{(2,3)}_i$ and $b^{(2,3)}_i$ in the modification functions and then solves a set of linear equations.\
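Schematically, this step is a small linear algebra problem: once the $a^{(2,3)}_i$ and $b^{(2,3)}_i$ are fixed, each basis function in Eqs. (\[Pithreshansatz\]) and (\[Pihighansatz\]) is expanded in the relevant limit, and the prefactors are obtained by matching to the known expansion coefficients. A generic sketch (with placeholder numbers only; the actual matching matrix has to be generated from the expansions given above) could look as follows:

```python
import numpy as np

def match_coefficients(basis_expansions, target_expansion):
    """Solve sum_i A_i * basis_expansions[i] = target_expansion for the A_i.

    Each row of basis_expansions holds the expansion coefficients of one basis
    function (e.g. the ln^m(1-z) terms kept in Eq. (Pithresh)); target_expansion
    holds the corresponding coefficients of Pi^(2,3)."""
    M = np.array(basis_expansions, dtype=float).T   # (conditions) x (unknowns)
    t = np.array(target_expansion, dtype=float)
    A, *_ = np.linalg.lstsq(M, t, rcond=None)       # exactly determined in practice
    return A

# illustrative placeholder numbers, not the real expansion coefficients
basis = [[1.0, 0.5, 0.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]]
target = [3.0, 2.5, 4.0]
print(match_coefficients(basis, target))   # -> [3., 1., 2.]
```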
[*Subtractions at $q^2=0$.*]{} There are singularities $\sim 1/z$ and $\sim 1/z^2$ in $\Pi_{\rm thr}^{(2,3)}(z)$ and $\Pi_{\rm inf}^{(2,3)}(z)$ that arise in the limit $z\to 0$. They are a consequence of the functions used to construct $\Pi_{\rm thr}^{(2,3)}(z)$ and $\Pi_{\rm inf}^{(2,3)}(z)$. These singularities lead to unphysical behavior and need to be subtracted. For this task we define the function $$\begin{aligned}
\label{Pizero}
\Pi_{\rm zero}^{(2,3)}(z) \, = \,
S_0^{(2,3)} \, + \, \frac{S_1^{(2,3)}}{z}
\, + \,\frac{S_2^{(2,3)}}{z^2}
\,.\end{aligned}$$ After the coefficients $A_i^{(2,3)}$ and $B_i^{(2,3)}$ have been computed, the coefficients $S_{0,1,2}^{(2,3)}$ are determined such that $$\begin{aligned}
\label{Thomson2}
\Pi^{(2,3)}_{\rm log}(0) \, = \, 0
\,.\end{aligned}$$ Note that it is not mandatory to fix $S_0^{(2,3)}$ in this way, and that our approach is independent of the choice for $S_0^{(2,3)}$. However, to satisfy Eq. (\[Thomson\]) it is convenient for the purpose of presentation to impose the condition (\[Thomson2\]) and also the relation $\Pi^{(2,3)}_{\rm reg}(0)=0$.
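The determination of the subtraction constants amounts to reading off the $1/z^2$, $1/z$ and constant parts of $\Pi^{(2,3)}_{\rm thr}+\Pi^{(2,3)}_{\rm inf}$ at $z=0$. Purely as an illustration (for a generic function with at most a double pole at $z=0$, not the exact analytic determination used in the text), this can also be done numerically by fitting $z^2 f(z)$ near the origin:

```python
import numpy as np

def subtraction_constants(f, eps=1e-3):
    # z^2 f(z) = S2 + S1 z + S0 z^2 + O(z^3); fit the first three coefficients
    # from sample points close to z = 0 (rough numerical sketch)
    zs = np.array([eps, 2*eps, 3*eps])
    vals = zs**2*np.array([f(z) for z in zs])
    V = np.vander(zs, 3, increasing=True)        # columns: 1, z, z^2
    S2, S1, S0 = np.linalg.solve(V, vals)
    return S0, S1, S2

# toy example: f(z) = 2/z**2 - 3/z + 5 + z  ->  (S0, S1, S2) ~ (5, -3, 2)
print(subtraction_constants(lambda z: 2/z**2 - 3/z + 5 + z))
```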
Designing $\Pi_{\rm reg}$ {#sectionPireg}
=========================
The terms $\Pi_{\rm reg}^{(2,3)}$ in Eq. (\[Piseparated\]) have to account for the non-logarithmic conditions in the expansion at the threshold and at high energies, and for the coefficients that arise in the expansion around $z=0$. We start by presenting the small-$z$ expansion of $\Pi^{(2)}$ and $\Pi^{(3)}$: $$\begin{aligned}
\label{Pismallz}
\Pi^{(2)} \, = \,&
(0.719976- 0.0296233 n_\ell) z + (0.698894-
0.0275334 n_\ell) z^2 + (0.637986- 0.0240088 n_\ell) z^3
{\nonumber}\\[1mm] &
+ (0.584109-
0.0211621 n_\ell) z^4 + (0.539450- 0.0189263 n_\ell) z^5
+ (0.502392- 0.0171420 n_\ell) z^6
{\nonumber}\\[1mm] &+ (0.471258- 0.0156884 n_\ell) z^7
+\,{\cal O}(z^8)
\,,
{\nonumber}\\[2mm]
\Pi^{(3)} \, = \,&
(10.6103- 1.30278 n_\ell + 0.0282783 n_\ell^2)z
+(10.4187- 1.12407 n_\ell + 0.0223706 n_\ell^2)z^2
\,+\,{\cal O}(z^3)
\,.\end{aligned}$$ The coefficients for $\Pi^{(2)}$ were computed in Refs. [@Chetyrkin:1995ii; @Chetyrkin:1996cf]. Recently the coefficients for $\Pi^{(2)}$ have even been determined up to order $z^{30}$ [@Maier:2007yn; @Boughezal:2006uu]. For $\Pi^{(3)}$ the coefficient of order $z$ was computed in Refs. [@Chetyrkin:2006xg; @Boughezal:2006px], and the coefficient of order $z^2$ was given in Ref. [@Maier:2008he]. The coefficients of order $z^n$ with $n\ge
3$ have not yet been computed from Feynman diagrams. However, they can be determined from the reconstructed function $\Pi^{(3)}$ as we show in Secs. \[sectionOas2\] and \[sectionOas3\].\
[*Designing $\Pi_{\rm reg}^{(2)}$.*]{} As an illustration we start with the construction of $\Pi_{\rm reg}^{(2)}$. Close to threshold $\Pi^{(2)}$ exhibits the Coulomb singularity $\sim 1/\sqrt{1-z}$, see Eq. (\[Pithresh\]). To avoid having the Padé approximant contain this singularity explicitly, we use two different methods:
- We relate the Padé approximant $P(\omega)$ to $f(z)\Pi_{\rm reg}^{(2)}$, where $f(z\approx 1)\sim \sqrt{1-z}$. The coefficient of the Coulomb singularity is implemented through a condition on $P(1)$.
- We use the relation $$\begin{aligned}
\frac{\pi^{2}}{9}\, G(z\approx 1)
\, =\,
\frac{\pi^{3}}{18\sqrt{1-z}} \, + \, \ldots\end{aligned}$$ and account for the Coulomb singularity by adding the function $\frac{\pi^{2}}{9}G(z)$ to $\Pi^{(2)}_{\rm log}$. The Padé approximant $P$ is not affected by the Coulomb singularity.
The numerical differences that result from these two methods of implementing the Coulomb singularity constitute another tool for quantifying the uncertainties inherent to our approach.
For method (i) the expression we use for the relation between the Padé approximant $P(\omega)$ and $\Pi^{(2)}_{\rm reg}$ reads $$\begin{aligned}
\label{Pa}
P(\omega) \, = \,
\frac{1-\omega}{(1+\omega)^{2}}
\left[\Pi_{\mathrm{reg}}^{(2)}(z)-\Pi_{\mathrm{reg}}^{(2)}(-\infty)\right]
\,,\end{aligned}$$ where $\frac{1-\omega}{(1+\omega)^2}\sim \sqrt{1-z}$ for $z\to 1$. A similar relation was also used in Ref. [@Chetyrkin:1996cf]. Since the prefactor grows linearly with $z$, $P(-1)$ is a finite number. Some comments are in order concerning the term $\Pi_{\rm reg}(-\infty)$ that appears in Eq. (\[Pa\]) and also in the analogous relations (\[Pb\]) and (\[Pc\]) that follow below. From the conditions $\Pi(0)=\Pi_{\rm log}(0)=\Pi_{\rm reg}(0)=0$ it is easy to see that $$\begin{aligned}
\label{PzeroPi}
P(0) \, = \, -\,\Pi_{\rm reg}(-\infty)
\,.\end{aligned}$$ Thus, if $\Pi_{\rm reg}(-\infty)$ is known and taken as an input, Eq. (\[PzeroPi\]) represents a condition that is imposed on the Padé approximant $P$. On the other hand, if $\Pi_{\rm reg}(-\infty)$ is unknown or not taken as an input, it can be determined from Eq. (\[PzeroPi\]) once the Padé approximant has been fixed from other conditions. We show in Secs. \[sectionOas2\] and \[sectionOas3\] that this allows us to determine the high-energy constants $H^{(2,3)}_0$ with small uncertainties. From Eqs. (\[Pa\]), (\[PzeroPi\]) and (\[Piseparated\]) the vacuum polarization function $\Pi^{(2)}$ is recovered from the relation $$\begin{aligned}
\label{Pia}
\Pi^{(2)}(z) \, = \,
\frac{(1+\omega)^{2}}{1-\omega}\,P(\omega) \,- \, P(0)
\, + \, \Pi_{\mathrm{log}}^{(2)}(z)\,. \end{aligned}$$ From Eq. (\[Pa\]) it is now straightforward to determine the conditions on the Padé approximant $P(\omega)$ from the non-logarithmic constraints on $\Pi(z)$ in the threshold and the high-energy regions and from the coefficients in the expansion around $z=0$. Additional constraints on $P$ arise from the fact that in the limit $|z|\to\infty$ the first term on the RHS of Eq. (\[Pia\]) can exhibit half-integer power terms $\sim 1/z^{(2n+1)/2}$ with $n=1,2,\ldots$, which do not exist in the high-energy expansion of the vacuum polarization function. It is reasonable to exclude such terms up to order $1/z^{(2n+1)/2}$ when the information from the high-energy expansion up to order $1/z^n$ is accounted for. For example, excluding terms $\sim 1/z^{3/2}$ in Eq. (\[Pia\]) leads to the constraint $P(-1)-2P^\prime(-1)=0$, where $P^\prime$ refers to the derivative of $P(\omega)$ with respect to $\omega$. Excluding also terms $\sim
1/z^{5/2}$ leads to the condition $3P(-1)-9P^{\prime\prime}(-1)+2P^{\prime\prime\prime}(-1)=0$. The various conditions on $P$ lead to a complicated non-linear set of equations for the coefficients of the Padé approximant in Eqs. (\[Padedef\]), which we do not present explicitly here. These equations frequently have multiple solutions and are most conveniently tackled numerically.
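The origin of these constraints can be checked symbolically: parametrizing the limit $z\to-\infty$ by $t=1/\sqrt{-z}$ (so that half-integer powers of $1/z$ correspond to odd powers of $t$) and expanding the first term on the RHS of Eq. (\[Pia\]), the coefficient of $t^3$ vanishes precisely when $P(-1)-2P^\prime(-1)=0$. A small sympy sketch of this check (illustrative only):

```python
import sympy as sp

t = sp.symbols('t', positive=True)           # t = 1/sqrt(-z); z -> -infinity is t -> 0+
omega = (t - sp.sqrt(1 + t**2))/(t + sp.sqrt(1 + t**2))

# represent the Pade approximant through its derivatives at omega = -1
P0, P1, P2, P3 = sp.symbols('P0 P1 P2 P3')   # P(-1), P'(-1), P''(-1), P'''(-1)
d = 1 + omega
P = P0 + P1*d + sp.Rational(1, 2)*P2*d**2 + sp.Rational(1, 6)*P3*d**3

expr = (1 + omega)**2/(1 - omega)*P          # first term on the RHS of Eq. (Pia)
ser = sp.series(expr, t, 0, 6).removeO()

print(sp.simplify(ser.coeff(t, 3)))          # -> 4*P1 - 2*P0, i.e. zero iff P(-1) = 2 P'(-1)
print(sp.simplify(ser.coeff(t, 5)))          # analogous combination for the 1/z^{5/2} term
```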
For method (ii), where the Coulomb singularity is treated in $\Pi_{\rm log}^{(2)}$, the relation between the Padé approximant $P(\omega)$ and $\Pi_{\rm reg}^{(2)}(z)$ reads $$\begin{aligned}
\label{Pb}
P(\omega) \, = \,
\frac{1}{(1+\omega)^{2}}
\left[\Pi_{\mathrm{reg}}^{(2)}(z)-\Pi_{\mathrm{reg}}^{(2)}(-\infty)\right]
\,.\end{aligned}$$ A similar relation was also used in Ref. [@Chetyrkin:1996cf]. Here, excluding terms of order $1/z^{3/2}$ and $1/z^{5/2}$ for $|z|\to\infty$ corresponds to the conditions $P(-1)-P^\prime(-1)=0$ and $6P(-1)-6P^{\prime\prime}(-1)+P^{\prime\prime\prime}(-1)=0$, respectively. The vacuum polarization function is then recovered from the relation $$\begin{aligned}
\label{Pib}
\Pi^{(2)}(z) \, = \,
(1+\omega)^{2}\,P(\omega) \, - \, P(0)
\, + \, \Pi_{\mathrm{log}}^{(2)}(z)\,.\end{aligned}$$\
[*Designing $\Pi_{\rm reg}^{(3)}$.*]{} The construction of $\Pi_{\rm reg}^{(3)}$ proceeds in a similar way. At ${\cal
O}(\alpha_s^3)$ the vacuum polarization function has a Coulomb singularity $\sim 1/(1-z)$. For method (i) this singularity is incorporated in $\Pi_{\rm reg}^{(3)}$ and the relation between $P(\omega)$ and $\Pi_{\rm reg}^{(3)}$ reads $$\begin{aligned}
\label{Pc}
P(\omega) \, = \,
\left(\frac{1-\omega}{1+\omega}\right)^{2}
\left[\Pi_{\mathrm{reg}}^{(3)}(z)-\Pi_{\mathrm{reg}}^{(3)}(-\infty)\right]
\,,\end{aligned}$$ where $(1-\omega)^2\sim(1-z)$ for $z\to 1$. Excluding terms of order $1/z^{3/2}$ and $1/z^{5/2}$ for $|z|\to\infty$ corresponds to the conditions $P^\prime(-1)=0$ and $3P^{\prime\prime}(-1)-P^{\prime\prime\prime}(-1)=0$, respectively. The vacuum polarization function is recovered from the relation $$\begin{aligned}
\label{Pic}
\Pi^{(3)}(z) \, = \,
\left(\frac{1+\omega}{1-\omega}\right)^{2}\,P(\omega)
\, - \, P(0) \, + \, \Pi_{\mathrm{log}}^{(3)}(z)\,.\end{aligned}$$
For method (ii) we add the function $8\zeta_3[G(z)]^2/9$ to $\Pi^{(3)}_{\rm log}$. Since $8\zeta_3[G(z)]^2/9\to 2\,\pi^2\zeta_3/[9(1-z)]$ for $z\to 1$, the Coulomb singularity is then accounted for in $\Pi^{(3)}_{\rm log}$. The relation between the Padé approximant and $\Pi^{(3)}_{\rm reg}$ then has the same form as Eq. (\[Pa\]) with $\Pi^{(2)}_{\rm reg}$ replaced by $\Pi^{(3)}_{\rm reg}$. The relation for $\Pi^{(3)}(z)$ has the same form as Eq. (\[Pia\]) with $\Pi^{(2)}_{\rm log}$ replaced by $\Pi^{(3)}_{\rm log}$. The relations imposed on $P$ to exclude terms of order $1/z^{3/2}$ and $1/z^{5/2}$ are then also $P(-1)-2P^\prime(-1)=0$ and $3P(-1)-9P^{\prime\prime}(-1)+2P^{\prime\prime\prime}(-1)=0$, respectively.
Discarding Unphysical Solutions {#sectionunphysical}
===============================
For the reconstructed functions $\Pi^{(2)}$ and $\Pi^{(3)}$ we have several types of variations that can be implemented into the construction and which we can use to quantify numerically the uncertainty of the results. Apart from the two ways to account for the Coulomb singularity described as methods (i) and (ii) in the previous section, we have also implemented modification factors in Eqs. (\[Pithreshansatz\]) and (\[Pihighansatz\]) that allow us to scan over a continuous set of functions within $\Pi^{(2,3)}_{\rm log}$. Once the modification functions are fixed there are in general several possible choices one can use for the Padé approximants $P_{m,n}$ with $n+m$ being fixed by the number of conditions one imposes on $\Pi^{(2,3)}_{\rm reg}$. The resulting solutions for the Padé approximants can, however, have properties that lead to an unphysical and pathological behavior for $\Pi$ and $R$. Such solutions need to be discarded for a meaningful phenomenological analysis [@Chetyrkin:1996cf].
An obvious restriction concerns solutions for $P_{m,n}(\omega)$ that lead to poles in $\Pi^{(2,3)}$ in the complex $\omega$-plane inside the unit circle.[^5] These solutions are unacceptable and we discard them right away because such poles lead to an unphysical analytic structure. A more subtle situation arises for solutions with poles in $\Pi^{(2,3)}$ in the upper complex $\omega$-half-plane that are outside the unit circle, but are either close to the unit circle or have a large residue. Although the analytic structure of such solutions is not a priori wrong, we still discard such solutions if they lead to an unphysical resonance-like structure in the cross section $R$. To have a quantitative criterion that can easily be automated, we compute for every pole of $\Pi_{\rm reg}$ in the upper complex $\omega$-half-plane the so-called [*pole factor*]{} $$\begin{aligned}
\label{polefactor}
\rho \, = \, \frac{|\mbox{Res}_{\Pi}(\omega_{\rm pole})|}{|\omega_{\rm pole}|-1}
\,,\end{aligned}$$ where $\omega_{\rm pole}$ is the location of the pole in the complex $\omega$-plane and $\mbox{Res}_{\Pi}(\omega_{\rm pole})$ the residue of $\Pi^{(2,3)}$ at $\omega_{\rm pole}$. If $|\omega_{\rm pole}|$ is close to unity or if the residue is large, the pole factor becomes large and a resonance-like structure can arise in $R$. We discard solutions when $\rho > \rho_{0}$. For our analysis we found that the choice $\rho_0=2.8$ represents a reasonable restriction for $\Pi^{(2)}$, while for $\Pi^{(3)}$ we use $\rho_0=30$. For the vacuum polarization function $\Pi^{(3)}$ a larger value for $\rho_0$ is used since such poles arise predominantly in the threshold region close to $\omega=1$. Here $\mbox{Im}[\Pi^{(3)}]$ is substantially larger than $\mbox{Im}[\Pi^{(2)}]$ due to the larger size of its Coulomb singularity, see Eq. (\[Pithresh\]). Given the set of solutions for $\Pi^{(2)}$ that pass the restrictions described above, we can analyze how well these solutions reproduce other well-known properties of $\Pi^{(2)}$.
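In practice this criterion is easy to automate. The sketch below is a simplified illustration: it uses the poles and residues of the Padé approximant itself as a proxy for those of the full $\Pi^{(2,3)}$, which in the actual analysis also involves the prefactors of Eqs. (\[Pia\]) and (\[Pic\]); the toy coefficients are placeholders only.

```python
import numpy as np

def poles_and_residues(a, b):
    """P(w) = sum_i a[i] w^i / (1 + sum_j b[j] w^(j+1)); simple poles assumed."""
    num = np.polynomial.Polynomial(a)
    den = np.polynomial.Polynomial([1.0] + list(b))
    poles = den.roots()
    residues = num(poles)/den.deriv()(poles)
    return poles, residues

def keep_solution(a, b, rho0):
    poles, res = poles_and_residues(a, b)
    if np.any(np.abs(poles) < 1.0):                  # unphysical pole inside the unit circle
        return False
    up = poles.imag > 0                              # poles in the upper omega half-plane
    rho = np.abs(res[up])/(np.abs(poles[up]) - 1.0)  # pole factor of Eq. (polefactor)
    return bool(np.all(rho <= rho0))

# toy Pade approximant P_{1,2}(w) = (1 + 2 w)/(1 - 0.1 w + 0.3 w^2)
print(keep_solution([1.0, 2.0], [-0.1, 0.3], rho0=2.8))  # -> False (rho ~ 4.3 > rho0)
```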
Analysis for the Vacuum Polarization at ${\cal O}(\alpha_s^2)$ {#sectionOas2}
==============================================================
The purpose of this section is two-fold. First we demonstrate the reliability of our approach for its application to $\Pi^{(3)}$ by testing it with the rather well-known ${\cal O}(\alpha_s^2)$ vacuum polarization function $\Pi^{(2)}$ and, second, we determine the previously unknown constant $K^{(2)}$ that appears in the nonrelativistic expansion of $\Pi^{(2)}$ close to the threshold, see Eq. (\[Pithresh\]).
To demonstrate the reliability of our approach let us reconstruct $\Pi^{(2)}$ using only information from the different expansions that is analogous to the available information in $\Pi^{(3)}$. Thus we account for the expansions in the threshold region up to NNLO, in the high-energy region up to order $1/z^2$ and up to order $z^2$ for the expansion around $z=0$. For the construction of $\Pi^{(2)}_{\rm reg}$ this entails that we account for the first two coefficients of the expansion around $z=0$, the non-logarithmic term $\propto\sqrt{1-z}$ in the threshold region and the constraints that terms $\sim 1/z^{3/2}$ and $\sim 1/z^{5/2}$ are absent for $|z|\to\infty$. We do not implement the known constants $H^{(2)}_{0,1}$, but we determine them from the reconstructed $\Pi^{(2)}$. This amounts to 6 constraints on the Padé approximant for method (i), where the Coulomb singularity is accounted for in $\Pi^{(2)}_{\rm reg}$, and to 5 constraints on the Padé approximant for method (ii), where the Coulomb singularity is accounted for in $\Pi^{(2)}_{\rm log}$. Thus we have $n+m=5$ for the Padé approximants $P_{m,n}$ for method (i) and $n+m=4$ for the Padé approximants for method (ii).
Given the analytic form for the reconstructed $\Pi^{(2)}$ functions we can expand them in the threshold region, the high-energy limit and around $q^2=0$. In Fig. \[fig:2momOas2\] the results for the coefficients $\bar C^{(20)}_k$ for $k=3,4,5,6,7$, the high-energy constants $H^{(2)}_{0,1}$ and the threshold constant $K^{(2)}$ are displayed, as an example, for $n_f=n_\ell+1=4$, relevant for the production of charm quarks. The labels $[m,n]$ which have been added to the upper left panel for $\bar C^{(20)}_3$ refer to the Padé approximant used for the respective $\Pi^{(2)}_{\rm reg}$ functions, and their order is representative for all panels. The error bars represent the range of values covered by the variations of the modification factors as described in Sec. \[sectionPilog\]. The blue dashed lines indicate the range covered by all individual results and the red solid lines show the exact result obtained from Feynman diagrams. We see that for all cases the exact values are well within the range covered by the reconstructed $\Pi^{(2)}$ functions. Particularly precise determinations are obtained for $\bar C^{(20)}_3$ and the leading high-energy coefficient $H^{(2)}_0$. We also obtain a very precise determination of the threshold constant $K^{(2)}$.
-------------------------------------------------------------------------------------------------------------------------------------------------
                      approx. A             approx. B             approx. C             approx. D               approx. E                 exact
------------------- --------------------- --------------------- --------------------- ----------------------- ------------------------- -------------
$\bar C^{(20)}_1$                                                                                                                        $2.49671$
$\bar C^{(20)}_2$                                                                                                                        $2.77702$
$\bar C^{(20)}_3$   $1.365\pm 0.425$      $1.609\pm 0.266$      $1.611\pm 0.048$                                                        $1.63882$
$\bar C^{(20)}_4$   $0.283\pm 0.799$      $0.770\pm 0.441$      $0.750\pm 0.085$                                                        $0.79555$
$\bar C^{(20)}_5$   $-0.389\pm 1.057$     $0.271\pm 0.521$      $0.225\pm 0.106$      $0.278\pm 0.001$                                  $0.27814$
$\bar C^{(20)}_6$   $-0.744\pm 1.213$     $0.021\pm 0.541$      $-0.047\pm 0.115$     $0.007\pm 0.002$                                  $0.0070080$
$\bar C^{(20)}_7$   $-0.871\pm 1.296$     $-0.054\pm 0.528$     $-0.136\pm 0.117$     $-0.086\pm 0.003$       $-0.08594\pm 0.00003$     $-0.085963$
$H^{(2)}_0$         $0.159\pm 0.770$      $-0.561\pm 0.063$     $-0.580\pm 0.008$     $-0.5854\pm 0.0004$     $-0.5857\pm 0.0001$       $-0.58570$
$H^{(2)}_1$         $0.007\pm 0.574$      $0.338\pm 0.871$      $-0.132\pm 0.071$     $-0.180\pm 0.008$       $-0.185\pm 0.004$         $-0.18723$
$K^{(2)}$           $-6.64\pm 10.12$      $3.933\pm 0.303$      $3.795\pm 0.079$      $3.809\pm 0.032$        $3.805\pm 0.020$
-------------------------------------------------------------------------------------------------------------------------------------------------
: Results for the coefficients $\bar C^{(20)}_{3,4,5,6,7}$, $H^{(2)}_{0,1}$ and $K^{(2)}$ from the reconstructed $\Pi^{(2)}$ functions using various different types of approximations for $\Pi^{(2)}$. Empty entries for coefficients $\bar C^{(20)}_k$ indicate that they are exact in that particular approximation. The exact analytic form for $K^{(2)}$ is unknown. All results are for $n_f=n_\ell+1=4$ running flavors relevant for charm production. \[tab:CHKOas2\]
To demonstrate that our approach is systematic we need to show that the results become more accurate once more information is included for the reconstruction of $\Pi^{(2)}$. In Tab. \[tab:CHKOas2\] the results of Fig. \[fig:2momOas2\] for $\bar C^{(20)}_k$ with $k=3,4,5,6,7$, $H^{(2)}_{0,1}$ and $K^{(2)}$ are displayed in the column labeled “approx. C”. For comparison we also show the results when all the information $\sim 1/z^2$ in the high-energy expansion is neglected for the reconstruction of $\Pi^{(2)}$ (“approx. B”) and when, in addition, all NNLO threshold information is neglected (“approx. A”). The results show that the properties of the vacuum polarization can be determined more accurately once more information is used for its reconstruction with our approach. Moreover, we find that the variations of the reconstructed vacuum polarization function due to the different choices for the modification factors and the Padé approximants represent a reliable tool to estimate the uncertainties.
It is an astounding and amusing fact that our approach allows for a determination of the coefficients $\bar C^{(20)}_k$ for large values of $k$ with practically negligible uncertainties. This is demonstrated in Fig. \[fig:Cnhigh\], where the difference between the results we obtain for the coefficients $\bar C^{(20)}_k$ and the exact values from Refs. [@Maier:2007yn; @Boughezal:2006uu], $\bar C^{(20)}_k-\bar C^{(20)}_{k,{\rm exact}}$, is shown up to $k=30$. The range of values between the green triangular-shaped symbols (green light shaded region) is obtained from approximation C using solutions with Taylor-like Padé approximants. For $k>(10,15,20)$ the maximal relative discrepancy with respect to the exact values is below $(14,3,1)$%. The discrepancies become even smaller when more of the coefficients in the expansion around $z=0$ of Eq. (\[Pismallz\]) are accounted for in the reconstruction of $\Pi^{(2)}$. Including the coefficients up to order $z^4$ (approximation D) we obtain the range of values between the red squared symbols (red shaded region). In this case we obtain for $k>(10,15,20)$ a maximal relative discrepancy below $(3,0.8,0.5)$%. Including the coefficients up to order $z^6$ (approximation E) we obtain the range of values between the blue diamond-shaped symbols (blue dark shaded region). Here we obtain for $k>10$ a maximal relative discrepancy below $0.2$%. The results for the high-energy coefficients $H^{(2)}_{0,1}$ for approximations D and E are also shown in Tab. \[tab:CHKOas2\]. Again we find agreement with the exact results, with decreasing uncertainties once more information is included for the reconstruction of $\Pi^{(2)}$. Given the excellent quality of the results we consider our approach a reliable method to determine the previously unknown threshold constant $K^{(2)}$. As our final result for $K^{(2)}$ we adopt $$\begin{aligned}
K^{(2)} \, = \, 3.81 \, \pm \, 0.02
\,.\end{aligned}$$
To conclude this section let us analyze the ${\cal O}(\alpha_s^2)$ corrections to the $e^+e^-$ cross section obtained from the reconstructed $\Pi^{(2)}$. In Fig. \[fig:ROas2\] we have plotted $12 \pi v \mbox{Im}[\Pi^{(2)}(q^2+i 0)]$ for $n_f=4$ relevant for charm quark production in the pole mass scheme as a function of the quark velocity $v=\sqrt{1-1/z}$. We have included the factor of $v$ to suppress the Coulomb $1/v$-singularity that arises in the cross section for small values of $v$ and to have a finite value in the limit $v\to
0$. In the left panel the result for approximation C is shown. The red band is the area covered by all solutions for $\Pi^{(2)}$ that pass the criteria discussed in Sec. \[sectionunphysical\] and represents the uncertainty. The size of the uncertainty corresponds to the envelope of the individual error bars shown in Fig. \[fig:2momOas2\] where approximation C has been used as well. For comparison we have also displayed the expansions in the threshold region for $v\to 0$ (dotted lines) and in the high-energy limit for $v\to 1$ (dashed lines), where the short lines refer to leading order, the medium-length lines to next-to-leading order and the longest lines to next-to-next-to-leading order. The uncertainties are reduced substantially when additional coefficients for the expansion around $z=0$ are included for the reconstruction of $\Pi^{(2)}$. This is demonstrated in the right panel, where the coefficients up to order $z^6$ are included for the reconstruction of $\Pi^{(2)}$. Again the red band is the area covered by all solutions for $\Pi^{(2)}$ that pass the criteria discussed in Sec. \[sectionunphysical\]. For method (i), where the Coulomb singularity is accounted for in $\Pi^{(2)}_{\rm reg}$, we found solutions based on the Padé approximants \[9,0\], \[8,1\], \[7,2\], \[6,3\], \[5,4\], \[3,6\], \[1,8\], and for method (ii) we found solutions based on the Padé approximants \[8,0\], \[7,1\], \[6,2\], \[5,3\], \[4,4\], \[3,5\], \[1,7\]. The width of the band is already smaller than the width of the solid lines used to draw the boundaries of the band. For $v=(0.2,0.4,0.6,0.8)$ the relative uncertainty is $\pm(0.09,0.4,2.0,2.5)\%$ and thus negligible for all conceivable practical applications. The approximation formulae for the ${\cal O}(\alpha_s^2 C_F^2)$ and ${\cal O}(\alpha_s^2 C_A C_F)$ contributions given in Refs. [@Chetyrkin:1996cf; @Chetyrkin:1997mb] (using Eqs. (65) and (66) of Ref. [@Chetyrkin:1996cf]) together with the analytically known fermionic corrections agree with our result within 1-2%. We thus confirm the results for the cross section given in Refs. [@Chetyrkin:1996cf; @Chetyrkin:1997mb].
Analysis for the Vacuum Polarization at ${\cal O}(\alpha_s^3)$ {#sectionOas3}
==============================================================
For the reconstruction of $\Pi^{(3)}$ we use all available information from the expansions in the threshold region, Eqs. (\[Pithresh\]), the high-energy region, Eqs. (\[Pihigh\]) and around $z=0$ in Eqs. (\[Pismallz\]). For the construction of $\Pi^{(3)}_{\rm reg}$ we account for the first two coefficients in the expansion around $z=0$, the non-logarithmic term $\propto\sqrt{1-z}$ in the threshold limit and the two constraints from the absence of terms $\sim 1/z^{3/2}$ and $\sim 1/z^{5/2}$ for $|z|\to\infty$. This amounts to 6 constraints on the Padé approximants for method (i), where the Coulomb singularity $\propto 1/(1-z)$ is accounted for in $\Pi^{(3)}_{\rm
reg}$, and 5 constraints on the Padé approximants for method (ii), where this Coulomb singularity is accounted for in $\Pi^{(3)}_{\rm log}$. Thus we have $n+m=5$ for the Padé approximants $P_{m,n}$ for method (i) and $n+m=4$ for the Padé approximants $P_{m,n}$ for method (ii).
In Fig. \[fig:2momOas3\] the results for the coefficients $\bar C^{(30)}_k$ for $k=3,4,5,6,7$, the high-energy constants $H^{(3)}_{0,1}$ and the threshold constant $K^{(3)}$ are displayed for $n_f=n_\ell+1=4$ relevant for charm quark production. The different labels $[m,n]$ which have been added to the upper left panel for $\bar C^{(30)}_3$ refer to the Padé approximant used for the respective $\Pi^{(3)}_{\rm reg}$ function and their order is representative for all panels. The error bars represent the range of values covered by the variations of the modification factors as described in Sec. \[sectionPilog\], and the blue dashed lines indicate the range covered by all individual results. We adopt this range as the uncertainty in our determination of these coefficients, and the results are summarized together with the corresponding results for $n_f=n_\ell+1=5$ in Tab. \[tab:CHKOas3\]. For the determination of the high-energy coefficients $H^{(3)}_0$ and $H^{(3)}_1$ we find uncertainties of about 1% and 10%, respectively. This compares well with the corresponding results for $H^{(2)}_0$ and $H^{(2)}_1$ we have obtained at ${\cal O}(\alpha_s^2)$ for approximation C, see Fig. \[fig:2momOas2\] and Tab. \[tab:CHKOas2\]. For the coefficients $\bar C^{(30)}_k$ with $k\ge 3$ we find somewhat larger relative uncertainties than for the $\bar C^{(20)}_k$ in approximation C. This is, however, not unexpected since the cancellations that arise when the pole mass results for these coefficients are transferred to the $\overline{\mbox{MS}}$ mass scheme are substantially larger at ${\cal O}(\alpha_s^3)$. The result for $K^{(3)}$ has a particularly large error and can merely serve as a rough constraint on its true value. Concerning the precision of the determination of $K^{(3)}$, we believe that a substantial improvement can be achieved once the full set of NNNLO terms $\propto \sqrt{1-z}$ in the expansion for $R$ at the threshold and the exact values for $\bar C^{(30)}_k$ with $k\ge 3$ become available.
$n_f=4$ $n_f=5$
------------------- -------------------- --------------------
$\bar C^{(30)}_1$ $-5.6404$ $-7.7624$
$\bar C^{(30)}_2$ $-3.4937$ $-2.6438$
$\bar C^{(30)}_3$ $-3.279 \pm 0.573$ $-1.457 \pm 0.579$
$\bar C^{(30)}_4$ $-4.238 \pm 1.171$ $-1.935 \pm 1.201$
$\bar C^{(30)}_5$ $-4.996 \pm 1.666$ $-2.507 \pm 1.732$
$\bar C^{(30)}_6$ $-5.280 \pm 2.045$ $-2.809 \pm 2.150$
$\bar C^{(30)}_7$ $-5.151 \pm 2.321$ $-2.847 \pm 2.467$
$H^{(3)}_0$ $-6.122 \pm 0.054$ $-4.989 \pm 0.053$
$H^{(3)}_1$ $-3.885 \pm 0.417$ $-3.180 \pm 0.405$
$K^{(3)}$ $-10.09 \pm 11.00$ $-5.97 \pm 10.09$
: Summary of the results for the coefficients $\bar C^{(30)}_{3,4,5,6,7}$, $H^{(3)}_{0,1}$ and $K^{(3)}$ obtained from the reconstructed $\Pi^{(3)}$ function for $n_f=n_\ell+1=4$ and $n_f=n_\ell+1=5$. The coefficients $\bar C^{(30)}_{1,2}$ are known exactly and are shown for completeness. \[tab:CHKOas3\]
One of the most important applications of the coefficients $\bar C^{(30)}_n$ is the determination of the $\overline{\mbox{MS}}$ charm and bottom quark masses from moments $M_n$ of the charm and bottom quark $e^+e^-$ cross section. For small values of $n$ one way to compute the moments is using fixed-order perturbation theory as shown in Eq. (\[Mnexpanded\]). Using the results from Tab. \[tab:CHKOas3\] we find for the fixed-order moments at ${\cal O}(\alpha_s^3)$ for charm quarks ($n_f=4$) $$\begin{aligned}
M_3\, =\,&(0.1348\pm 0.0044\pm 0.0005)\times 10^{-2}\,,
{\nonumber}\\[1mm]
M_4\, = \,& (0.153\pm 0.032\pm 0.002)\times 10^{-3}\,,
{\nonumber}\\[1mm]
M_5\, = \,& (0.199\pm 0.084\pm 0.008)\times 10^{-4}\,,
{\nonumber}\\[1mm]
M_6\, = \,& (0.084\pm 0.144\pm 0.036)\times 10^{-5}\,.\end{aligned}$$ Here we used $\overline m_c(\overline m_c)=1.27$ GeV for the $\overline{\mbox{MS}}$ charm mass and $\alpha_s^{(n_f=4)}(1.27~\mbox{GeV})=0.387637$ for the strong coupling as the input and four-loop renormalization group evolution. The first error arises from the variation of the renormalization scale between $1.27$ and $3.81$ GeV and the second error is due to the uncertainties in the ${\cal O}(\alpha_s^3)$ coefficients $C^{(30)}_k$ shown in Tab. \[tab:CHKOas3\]. For bottom quarks ($n_f=5$) with $\overline m_b(\overline m_b)=4.17$ GeV and $\alpha_s^{(n_f=5)}(4.17~\mbox{GeV})=0.224778$ as the input we find $$\begin{aligned}
M_3\, = \, & (2.350\pm 0.017\pm 0.002)\times 10^{-7}\,,
{\nonumber}\\[1mm]
M_4\, = \, &(2.167\pm 0.045\pm 0.005)\times 10^{-9} \,,
{\nonumber}\\[1mm]
M_5\, = \, &(2.126\pm 0.091\pm 0.011)\times 10^{-11}\,,
{\nonumber}\\[1mm]
M_6\, = \, &(2.160\pm 0.148\pm 0.022)\times 10^{-13}\,.\end{aligned}$$ The first error arises from the variation of the renormalization scale between $2.085$ and $8.34$ GeV and the second error is due to uncertainties in the ${\cal O}(\alpha_s^3)$ coefficients $C^{(30)}_k$ shown in Tab. \[tab:CHKOas3\]. The results show that the uncertainties in $M_{3,4,5}$ caused by the errors in the coefficients $C^{(30)}_{3,4,5}$ we have obtained in this work are an order of magnitude smaller than the overall uncertainties of the moments at ${\cal O}(\alpha_s^3)$ due to variations of the renormalization scale. For physically relevant values of $n$ they can be safely neglected.
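For reference, assembling a fixed-order moment from Eq. (\[Mnexpanded\]) is mechanical once the coefficients $\bar{C}_n^{(ij)}$ are known. The following sketch uses dummy placeholder coefficients only; the real values must be taken from Refs. [@Chetyrkin:1995ii; @Chetyrkin:1996cf], Tab. \[tab:CHKOas3\] and the references given there.

```python
import math

def moment(n, Qq, mbar, alphas, mu, C):
    """Fixed-order moment M_n of Eq. (Mnexpanded).

    C[(i, j)] are the coefficients Cbar_n^{(ij)} for this n (placeholders here;
    mbar is the MS-bar mass evaluated at the scale mu)."""
    a = alphas/math.pi
    l = math.log(mbar**2/mu**2)
    bracket = (C[(0, 0)]
               + a*(C[(1, 0)] + C[(1, 1)]*l)
               + a**2*(C[(2, 0)] + C[(2, 1)]*l + C[(2, 2)]*l**2)
               + a**3*(C[(3, 0)] + C[(3, 1)]*l + C[(3, 2)]*l**2 + C[(3, 3)]*l**3))
    return 9.0/4.0*Qq**2/(4*mbar**2)**n*bracket

# dummy coefficients, for illustration of the bookkeeping only
C_dummy = {(0, 0): 1.0, (1, 0): 0.5, (1, 1): 0.1, (2, 0): 0.2, (2, 1): 0.05,
           (2, 2): 0.01, (3, 0): 0.1, (3, 1): 0.02, (3, 2): 0.005, (3, 3): 0.001}
print(moment(3, Qq=2.0/3.0, mbar=1.27, alphas=0.3876, mu=1.27, C=C_dummy))
```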
Finally, let us analyze the ${\cal O}(\alpha_s^3)$ corrections to the $e^+e^-$ cross section obtained from the reconstructed $\Pi^{(3)}$. In Fig. \[fig:ROas3\] we have plotted the function $12 \pi v \mbox{Im}[\Pi^{(3)}(q^2+i 0)]$ for $n_f=4$ relevant for charm quark production in the pole mass scheme as a function of the quark velocity $v=\sqrt{1-1/z}$. As for the analysis in Fig. \[fig:ROas2\] we have included the factor $v$ to suppress the Coulomb singularity. The function still diverges logarithmically for $v\to 0$ because the ${\cal O}(\alpha_s^3)$ cross section has a singularity $\sim \ln(v)/v$ in the nonrelativistic limit. The red shaded band is the area covered by all solutions for $\Pi^{(3)}$ that pass the criteria discussed in Sec. \[sectionunphysical\] and represents the uncertainty. The relative uncertainty is about 10% at $v=0.2$ and $0.8$ and should be acceptable for most applications where ${\cal O}(\alpha_s^3)$ accuracy is required. For comparison we have also displayed the expansions in the threshold region for $v\to 0$ (dotted lines) at NLO (short line) and at NNLO (long line). Likewise the expansions in the high-energy limit for $v\to 1$ (dashed lines) are shown, where the short line refers to order $1/z^0$, the medium-length line to order $1/z$ and the longest line to order $1/z^2$. We strongly emphasize the importance of incorporating the NNLO contributions in the expansion close to the threshold and the $1/z^2$ terms at high energies for achieving our result. Once more information from the different kinematic regions becomes available, the uncertainties can be further reduced substantially.
Conclusions {#sectionconclusions}
===========
In this work we have determined the full mass and $q^2$ dependence of the ${\cal O}(\alpha_s^2)$ and ${\cal O}(\alpha_s^3)$ corrections to the heavy quark vacuum polarization function $\Pi(q^2,m^2)$ and its contribution to the $e^+e^-$ total cross section. Our approach uses known results for the expansions of $\Pi(q^2,m^2)$ at high energies, in the threshold region and around $q^2=0$, conformal mapping and the Padé approximation method. We have demonstrated for the vacuum polarization function at ${\cal O}(\alpha_s^2)$ that the approach allows for reliable determinations of other properties of $\Pi$ with small uncertainties, and that the uncertainties of the results can be systematically reduced if more information from the three different kinematic regions is accounted for. Our results for the cross section at ${\cal O}(\alpha_s^2)$ also confirm previous results by Chetyrkin, Kühn and Steinhauser from Refs. [@Chetyrkin:1996cf; @Chetyrkin:1997mb]. For the vacuum polarization function at ${\cal O}(\alpha_s^2)$ we have determined the previously unknown non-logarithmic constant term that arises at NLO in the expansion close to the threshold. For the ${\cal O}(\alpha_s^3)$ corrections to the vacuum polarization function we determined the previously unknown coefficients in the expansion around $q^2=0$ beyond order $q^4$ and the first two non-logarithmic coefficients in the high-energy expansion. The results for the coefficients in the expansion around $q^2=0$ allow for the determination of the moments $M_n$ of the $e^+e^-$ cross section for $n\ge 3$ at ${\cal O}(\alpha_s^3)$.
Acknowledgements {#sectionacknowledgements}
================
We thank K. Chetyrkin, H. Kühn and M. Steinhauser for useful conversations and comments on the manuscript. M. Zebarjad thanks the MPI for its hospitality while this work was accomplished and the MPI guest program for partial support. This work was supported in part by the EU network contract MRTN-CT-2006-035482 (FLAVIAnet).
[**Note added:**]{} After completion of this work K. Chetyrkin pointed out to us that analytic expressions for the constant $H^{(3)}_1$ can be derived from results given in Ref. [@Baikov:2004ku]. Evaluated numerically they give $H_1^{(3)}=-4.33306$ for $n_f=4$ and $H_1^{(3)}=-3.53165$ for $n_f=5$, which are in agreement with the results we have presented in Tab. \[tab:CHKOas3\].
[^1]: Electronic address: [email protected]
[^2]: Electronic address: [email protected]
[^3]: Electronic address: [email protected]
[^4]: Within NRQCD the dominant effect of the singlet contributions is associated to 4-quark operators with a Wilson coefficient that incorporates the hard effects of the 3-gluon annihilation. This operator leads to a momentum space potential $\propto
\alpha_s^3/m^2$.
[^5]: For Padé approximants of the form $P_{k,0}$ (Taylor-like) such poles do not exist and none of the solutions is discarded.
|
---
abstract: 'Given a compact topological dynamical system $(X, f)$ with positive entropy and upper semi-continuous entropy map, and any closed invariant subset $Y \subset X$ with positive entropy, we show that there exists a continuous roof function such that the set of measures of maximal entropy for the suspension semi-flow over $(X,f)$ consists precisely of the lifts of measures which maximize entropy on $Y$. This result has a number of implications for the possible size of the set of measures of maximal entropy for topological suspension flows. In particular, for a suspension flow on the full shift on a finite alphabet, the set of ergodic measures of maximal entropy may be countable, uncountable, or have any finite cardinality.'
address:
- 'Department of Mathematics, The City College of New York, New York, NY, 10031'
- 'Department of Mathematics, Ohio State University, Columbus, OH, 43210'
author:
- Tamara Kucherenko
- 'Daniel J. Thompson'
bibliography:
- 'nonuniqueMMEbiblio.bib'
title: 'Measures of maximal entropy on subsystems of topological suspension semi-flows'
---
Introduction
============
We investigate which measures can achieve maximal entropy in the class of topological suspension semi-flows. Our result is a kind of universality result for measures of maximal entropy (MME) for suspension semi-flows. We show in Theorem \[main\] that for *any* positive entropy compact topological dynamical system $(X, f)$ with upper semi-continuous entropy map, and *any* closed invariant subset $Y \subset X$ with positive entropy, we can find a suspension semi-flow over $(X, f)$ whose MME are exactly the lifts of the MME for the subsystem $(Y, f|_{Y})$. The assumption that the entropy map is upper semi-continuous is needed for existence of the MME and is thought of as a very weak expansivity condition. The condition that the subsystem has positive entropy is essential since any suspension flow over $X$ has positive entropy, so only positive entropy measures in the base can possibly lift to MME for the flow.
Universality and structure results for suspension flows are well known in the measure-theoretic category. For example, Rudolph [@dR76] proved the famous result that every measure-preserving flow is isomorphic to a suspension flow whose roof takes only two values. Quas and Soo [@QS] proved the following measure-theoretic universality result for Hölder suspension flows over the full shift: any measure of smaller entropy than the flow can be embedded isomorphically into the suspension flow. The main result of our paper can be interpreted as a universality phenomenon in the topological category.
After our main result is proved, it can be used as a machine for producing examples of suspension flows with various different prescribed behaviors through careful choices for the subsystem $Y$. When the base is the full shift $(\Sigma, \sigma)$, by choosing $Y \subset \Sigma$ appropriately, we can build topological suspension flows over the shift $(\Sigma, \sigma)$ with any of the following properties: the MME is unique but not fully supported; there are multiple MME with the same support; the MME (unique or not) are supported on a minimal subsystem; there are any prescribed finite number of ergodic MMEs; the set of ergodic MMEs is countably infinite; the set of ergodic MMEs is uncountable. These statements contrast sharply with the well-known situation of a suspension flow over the full shift on a finite alphabet with a Hölder continuous roof function, in which case the MME is unique and fully supported. Note that any two continuous suspension flows over a full shift with a fixed alphabet size are orbit equivalent. Thus, our results show that a continuous orbit equivalence does not preserve finiteness, or even countability, of the set of MME.
We discuss previous results in this direction and our approach. We extensively generalize our previous work [@KTh] in which we proved that for the full shift $(\Sigma, \sigma)$ and any positive entropy subshift of finite type $Y \subset \Sigma$, there exists a roof function such that the MMEs for the suspension flow over the full shift are exactly the lifts of the MMEs for the subshift. In the current paper, we remove the restriction that the subset $Y$ is a shift of finite type, and we remove the need for the ambient space to be symbolic, allowing any topological dynamical system with upper-semi-continuous entropy map. The proof in [@KTh] was based on an explicit combinatorial description of the roof function, and hands-on pressure estimates. That argument has the advantage of giving explicit and constructive examples for which uniqueness of the MME fails in the class of topological suspension flows. However, that argument relied heavily on the structure of subshifts of finite type, and did not provide examples with multiple MME on a single transitive component. Our explicit construction does not seem to carry over even to sofic subshifts of the full shift, much less to general subshifts or topological dynamical systems.
We note that Iommi and Velozo [@IV] have recently given an independent proof that suspension flows over the full shift on a finite alphabet can have an uncountable set of ergodic MME. They show that this phenomenon is dense in the sense that for any continuous roof function over a compact shift of finite type, there is a small continuous time change so that the resulting flow has an uncountable collection of ergodic MME.
The approach of this paper is a generalization and refinement of a result by Markley and Paul [@MP82], and is based on the theory of tangent functionals. The MMEs for the flow are described in terms of equilibrium states of the base system (see §\[s.flows\]). We show that for systems with upper semi-continuous entropy map, we can find a function whose equilibrium states are the MME on a given subsystem, and in addition that the function is non-positive and vanishes only on the subsystem. Markley and Paul proved a version of this result under the hypothesis that the ambient space is the full shift, and without obtaining the key conclusion needed for the application to suspension flows, which is that the function is non-positive. The paper is structured as follows. In §\[s.prelim\], we collect our preliminaries. In §\[s.mainresult\], we state and prove our main result. In §\[s.applications\], we apply our main result to questions about the set of possible MME for suspensions over the full shift.
Preliminaries {#s.prelim}
=============
Topological Dynamical Systems
-----------------------------
We consider a continuous function $f:X\to X$ on a compact metric space $X$. We call the pair $(X, f)$ a topological dynamical system. We denote by $C(X)$ the Banach space of all continuous real-valued functions on $X$ with the supremum norm. The dual space $C^\ast (X)$ consists of all Radon measures on $X$. Let $\M\subset C^\ast(X)$ denote the set of $f$-invariant probability measures and $\M_E\subset \M$ the subset of ergodic measures.
For $\phi\in C(X)$, the *topological pressure* can be defined by the Variational Principle to be the quantity $$\label{VarPr}
P(\phi)=\sup_{\mu\in\M}\left\{h_\mu(f)+\int \phi\, d\mu\right\},$$ where $h_\mu(f)$ is the measure-theoretic entropy of $\mu$. The topological pressure can also be defined using $(n, \epsilon)$-separated or spanning sets, see [@pW82] for a detailed treatment. The number $P(0)=\sup_{\mu\in\M}\{h_\mu(f)\}$ is called the *topological entropy* of $f$ and is denoted by $h_{\rm top}(f)$. If there exists a measure $\mu\in\M$ at which the supremum in (\[VarPr\]) is attained it is called an *equilibrium state* of $\phi$. The set of all equilibrium states of $\phi$ is a convex subset of $\M$ which is compact with respect to the weak$^\ast$-topology. However, in general it may be empty [@Gu]. If the entropy map $\mu\mapsto h_\mu(f)$ is upper semi-continuous on $\M$ then for any $\phi\in C(X)$ the topological pressure $P(\phi)< \infty$ and the set of equilibrium states is not empty. This condition will be a hypothesis of our main result.
The *pressure function* (with respect to $f$) is the map $P:C(X)\mapsto \R \cup\{\infty\}$ which sends $\phi$ to $P(\phi)$. The pressure function is continuous, convex and satisfies the following properties [@pW82]:
1. $P$ is Lipschitz, i.e. $|P(\phi)-P(\psi)|\le\|\phi-\psi\|$ for any $\phi,\psi\in C(X)$;
2. $P$ is increasing, i.e. $P(\phi)\le P(\psi)$ whenever $\phi\le \psi$;
3. $P(t+\phi+\psi\circ f-\psi)=t+P(\phi)$ for any $t\in\R$ and $\phi,\psi\in C(X)$.
If $f$ has finite topological entropy, then the function $P$ is real-valued. This is the case for us since we assume upper-semi-continuity of the entropy map. If $f$ has infinite topological entropy, then $P(\phi)=\infty$ for all $\phi\in C(X)$.
We will make use of the following facts from the theory of tangent functionals to convex functions. For any continuous convex functional $Q:C(X)\mapsto \R$ a measure $\nu\in C^\ast(X)$ is called *$Q$-bounded* if there is $C\in\R$ such that for any $\psi\in C(X)$ we have $$\label{Q-bnd}
\int\psi\, d\nu\le Q(\psi)+C$$ We say that $\nu\in C^\ast(X)$ is a *tangent* to $Q$ at a point $\phi\in C(X)$ if for any $\psi\in C(X)$ we have $$\label{Tangent_def}
\int\psi\, d\nu\le Q(\phi+\psi)-Q(\phi)$$ A simple application of the Hahn-Banach theorem shows that the set of tangents is non-empty at any point $\phi\in C(X)$. We can approximate any $Q$-bounded measures by tangents using the following result which is a special case of Israel’s theorem [@I]. For purposes of exposition and to make the paper self-contained, we include a short proof.
\[Special case of Israel’s Theorem\] \[Israel\] Let $\varepsilon>0$, $\mu\in C^\ast(X)$ be a $Q$-bounded measure, and $V\subset C(X)$ be a closed linear subspace. Then there exists a function $\phi\in V$ and a tangent $\nu$ to $Q$ at $\phi$ such that $\|\mu-\nu\|_{C^\ast(V)}<\varepsilon$.
Since $\mu$ is $Q$-bounded, there is $C\in \R$ such that $\int\xi\, d\mu\le Q(\xi)+C$ for all $\xi\in C(X)$. We may assume that both $\mu=0$ and $C=0$ by replacing the function $Q(\xi)$ with $Q(\xi)-\int\xi d\mu-C$. For $\psi\in V$ define $$\label{S(omega)_def}
S(\psi)=\{\xi\in V:Q(\psi)-Q(\xi)\ge \varepsilon\|\psi-\xi\|\}$$ Since $\psi\in S(\psi)$ and $Q$ is continuous, $S(\psi)$ is a nonempty closed subset of $V$. Moreover, it is easy to see that $S(\xi)\subset S(\psi)$ whenever $\xi\in S(\psi)$.
We pick any starting point $\psi_0\in V$ and build a sequence $(\psi_n)_{n\ge 1}$ such that $\psi_n\in S(\psi_{n-1})$ and $$\label{psi_n_def}
Q(\psi_n)<\inf_{\xi\in S(\psi_{n-1})}Q(\xi)+\frac{\varepsilon}{2^n}.$$ Note that $S(\psi_n)$ is a sequence of nested closed sets whose diameters tend to zero. Indeed, for $\xi\in S(\psi_n)$ we have $$\label{Size_S(psi_n)}
\varepsilon\|\psi_n-\xi\|\le Q(\psi_n)-Q(\xi)\le \inf_{ S(\psi_{n-1})}Q+\frac{\varepsilon}{2^n} -Q(\xi)\le \frac{\varepsilon}{2^n},$$ where the last inequality follows from the fact that $\xi$ is also in $S(\psi_{n-1})$. In particular, we obtain that $(\psi_n)$ is Cauchy and hence there exists $\phi=\lim \psi_n$. Furthermore, (\[Size\_S(psi\_n)\]) implies that $S(\phi)$ contains only one point $\phi$, since $S(\phi)\subset S(\psi_{n})$ for all $n$.
Now we define two subsets in $C(X)\times \R$ by $$A=\{(\xi,y):\xi\in V,\,\, y<Q(\phi)-\varepsilon\|\xi-\phi\|\}\quad\text{and}\quad B=\{(\xi,y): y>Q(\xi)\}.$$ It follows from the continuity and convexity of $Q$ that $A$ and $B$ are open and convex. If $(\xi,y)\in A\cap B$ then $\xi\in S(\phi)$ and we must have $\xi=\phi$; but then $y<Q(\phi)$ and $y>Q(\phi)$ simultaneously, which is impossible, so $A\cap B$ is empty. By the Hahn-Banach separation theorem there exists a linear functional $\lambda$ on $C(X)\times \R$ and $t\in\R$ such that $\lambda(a)<t<\lambda(b)$ for all $a\in A$ and $b\in B$. With a suitable choice of $t\in\R$ and $\nu\in C^*(X)$, we can write $\lambda(\xi,y)=y-\int\xi d\nu$. Then for any $\xi\in V$ we have $y-\int\xi d\nu< t$ as long as $y<Q(\phi)-\varepsilon\|\xi-\phi\|$. This gives $$\label{norm_estimate}
Q(\phi)-\varepsilon\|\xi-\phi\|-\int\xi d\nu\le t.$$ On the other hand, for any $\xi\in C(X)$ we have $t<y-\int\xi d\nu$ as long as $y>Q(\xi)$. In that case we arrive at $$\label{tangent_estimate}
t\le Q(\xi)-\int\xi d\nu.$$ Taking $\xi=\phi$ in both inequalities above we see that $t=Q(\phi)-\int\phi d\nu$. With $\psi=\xi-\phi$ this allows us to rewrite (\[norm\_estimate\]) and (\[tangent\_estimate\]) as $$\begin{aligned}
&\int \psi d\nu\le\varepsilon\|\psi\|,\,\text{for any}\,\psi\in V; \\
&\int\psi d\nu\le Q(\psi+\phi)-Q(\phi),\,\text{for any}\,\psi \in C(X).\end{aligned}$$ The first inequality provides the required norm estimate $\|\nu\|_{C^*(V)}\le\varepsilon$ and the second one shows that $\nu$ is a tangent to $Q$ at $\phi$.
Since the pressure function $P$ is continuous and convex, there exists a tangent to the pressure at every point $\phi\in C(X)$. From properties (1) and (3) of the pressure one can deduce that any tangent to $P$ must be a positive $f$-invariant probability measure on $X$. On the other hand, if $\mu\in\M$ is an equilibrium state of $\phi$ then by the Variational principle for any $\psi\in C(X)$ we have $$P(\phi+\psi)-P(\phi)\ge h_\mu(f)+\int(\phi+\psi)d\mu-h_\mu(f)-\int\phi d\mu=\int\psi d\mu,$$ and hence $\mu$ is tangent to $P$ at $\phi$. It follows that the set of equilibrium states for $\phi$ is a subset of tangents to $P$ at $\phi$. It was proved in [@pW92] that the opposite inclusion holds if and only if the entropy map $\mu\mapsto h_\mu(f)$ is upper semi-continuous on the set of tangents to $P$ at $\phi$.
Suspension Semiflows {#s.flows}
--------------------
We recall some basic facts about suspension semiflows. Given a topological dynamical system $(X, f)$ and a continuous function $\rho:X\to (0,\infty)$ consider the quotient space $$\label{SuspSpace}
X_\rho=\{(x,s)\in X\times[0,\infty):~ 0\le s\le \rho(x) \}\big/\sim$$ obtained by the equivalence relation $(x,\rho(x))\sim (f(x),0)$ for every $x\in X$. We refer to $X$ as the *base*, to $\rho$ as the *roof function* and to $X_\rho$ as the *suspension space relative to $\rho$*. The *suspension semiflow* $\Phi = (\varphi_t)_{t\ge 0}$ on $X_\rho$ associated to $(f,X,\rho)$ is defined locally by $\varphi_t(x,s)=(x, s+t)$. We extend this definition to all $t\in [0, \infty)$ by setting $$\label{semiflow} \varphi_t(x,s)=\left(f^n(x), t+s-\sum_{k=0}^{n-1} \rho(f^k(x))\right),$$ where $n\ge 0$ is the unique integer satisfying $\sum_{k=0}^{n-1} \rho(f^k(x))\le t+s< \sum_{k=0}^{n} \rho(f^k(x))$. We call a flow of this type a *topological suspension semiflow* to emphasize that we are working with topological dynamical systems and continuous roof functions.
Since $\rho$ is bounded away from zero there is a natural identification between the space $\M(\Phi)$ of $(\varphi_t)_t$-invariant probability measures on $X_\rho$ and the space $\M(f)$ of $f$-invariant probability measures on $X$. If $m$ denotes the Lebesgue measure in $\R$ then the map $$\label{lifting of measures}
\mu \mapsto \tilde \mu =\frac{(\mu\times m)|_{X_\rho}}{\int_X \rho\, d\mu}$$ is a bijection from $\M(f)$ to $\M(\Phi)$. Abramov [@lA59] established a relation between the entropies of corresponding measures $\mu$ and $\tilde\mu$, namely $$\label{Abramov}
h_{\tilde \mu}(\varphi_t)=\frac{t\cdot h_\mu(f)}{\int_X\rho\, d\mu}.$$ Therefore, the entropy $h_{\tilde\mu}(\Phi)$ of the measure $\tilde\mu$ with respect to the flow, which is defined as the entropy of the time-one map $\varphi_1$, satisfies $$\label{Abramov2}
h_{\tilde\mu}(\Phi)=\frac{h_\mu(f)}{\int_X\rho\, d\mu}.$$
We remark that if $f:X\to X$ is a homeomorphism then the expression (\[semiflow\]) with $n\in\Z$ defines a flow $(\varphi_t)_{t\in\R}$ on $X_\rho$. In this case the bijection between the measure spaces (\[lifting of measures\]) remains the same and Abramov’s formula (\[Abramov\]) is valid with $t$ replaced by $|t|$.
The measures of maximal entropy for the semiflow $(X_\rho,\Phi)$ can be described in terms of equilibrium states of a constant multiple of the roof function on the base space [@BR; @PP90]. Precisely, for $c\in \R$ consider the function $c \mapsto P(-c \rho)$. It follows from (\[VarPr\]) that this function is real-valued, continuous and strictly decreasing; since $P(0)=h_{\rm top}(f)\ge 0$ and $P(-c\rho)\le h_{\rm top}(f)-c\min\rho\to-\infty$ as $c\to\infty$, there exists a unique $c\ge 0$ with $P(-c \rho)=0$. Suppose $-c\rho$ has an equilibrium state $\mu$. Denote by $\tilde{\mu}$ the image of $\mu$ given by (\[lifting of measures\]). We claim that $\tilde{\mu}$ is a measure of maximal entropy for the flow $\Phi$. Indeed, let $\tilde{\nu}$ be any other $\Phi$-invariant measure on $X_\rho$ and $\nu$ be the corresponding $f$-invariant measure on the base. By the Variational Principle (\[VarPr\]) we have $$0=h_\mu(f) - c\int\rho\, d\mu \geq h_\nu(f) - c\int\rho\, d\nu$$ with equality if and only if $\nu$ is an equilibrium state for $-c\rho$. It follows from (\[Abramov2\]) that $$h_{\tilde{\mu}}(\Phi)=\frac{h_\mu(f)}{\int \rho d \mu} \geq \frac{h_{\nu}(f)}{\int \rho d \nu}=h_{\tilde{\nu}}(\Phi)$$ and $\tilde{\mu}$ is a measure of maximal entropy for the flow. Conversely, any measure of maximal entropy for $(X_\rho,\Phi)$ corresponds to an equilibrium state of $-c\rho$ on the base transformation $(X, f)$ with $c=h_{\rm top}(\Phi)$.
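For orientation, we record the simplest instance of this correspondence (a standard example, included purely as an illustration). Let $(X,f)$ be the full shift on $k$ symbols and $\rho\equiv 1$. Then $$P(-c\rho)=h_{\rm top}(f)-c=\log k-c,$$ which vanishes precisely at $c=\log k$. The unique equilibrium state of the constant function $-c\rho$ is the unique MME of the base, namely the uniform Bernoulli measure $\mu$, and (\[Abramov2\]) gives $h_{\tilde\mu}(\Phi)=h_\mu(f)/\int\rho\,d\mu=\log k=h_{\rm top}(\Phi)$.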
Main Result {#s.mainresult}
===========
We state and prove our main theorem.
\[main\] Let $(X, f)$ be a compact topological dynamical system such that $h_{\rm top}(f)>0$ and the entropy map $\mu\mapsto h_\mu(f)$ is upper semi-continuous. Let $Y\subset X$ be a closed $f$-invariant subset with $h_{\rm top}(f|_Y)>0$. There exists a continuous roof function $\rho:X\mapsto (0,\infty)$ so that the measures of maximal entropy for the suspension semi-flow $(X_{\rho},\Phi)$ (or suspension flow if $f$ is a homeomorphism) are exactly the lifts of the measures of maximal entropy for $Y$.
The assumption $h_{\rm top}(f|_Y)>0$ is essential. If $Y$ satisfies $h_{\rm top}(f|_Y)=0$, then by Abramov’s formula, the lift of a measure of maximal entropy on $
Y$ has entropy $0$. Since $h_{\rm top}(\Phi)>0$ for any continuous roof function $\rho$, such a measure can never be an MME for any topological suspension semi-flow over $(X,f)$.
We denote by $V$ the set of all continuous functions which vanish on $Y$, i.e. $$\label{Def_V}
V=\{\psi\in C(X): \psi(x)=0\text{ whenever }x\in Y\}.$$ Then $V$ is a closed linear subspace of $C(X)$. Consider $\mu\in\M$ such that $\mu(Y)=1$. The Variational Principle (\[VarPr\]) implies that any $f$-invariant probability measure is $P$-bounded. Hence, we can apply Proposition \[Israel\] to the subspace $V$ and measure $\mu$ with $\varepsilon=1/2$. We obtain the existence of $\phi\in V$ and a tangent $\nu$ to $P$ at $\phi$ such that for any $\psi\in V$ we have $$\label{IsraelThm}
\left|\int \psi\,d\nu-\int\psi\,d\mu\right|\le\frac12\|\psi\|_\infty.$$
Note that since $\mu$ is supported on $Y$ and $\psi$ vanishes on $Y$, the second integral in (\[IsraelThm\]) is zero. By [@pW92 Theorem 5], and our assumption that the entropy map $\mu\mapsto h_\mu(f)$ is upper semi-continuous, the measure $\nu$ is an equilibrium state for $\phi$.
First we show that $Y$ has positive $\nu$-measure. Assume to the contrary that $\nu(Y)=0$. Then the complement of $Y$ is open and has full $\nu$-measure. Since $\nu$ is regular, we can approximate the complement of $Y$ by closed sets from below. Let $F\subset X\setminus Y$ be a closed set such that $\nu(F)>1/2$. The sets $Y$ and $F$ are closed and disjoint, thus by Urysohn’s Lemma they can be separated by a continuous function. Precisely, there exists a continuous $\xi:X\to [0,1]$ such that $\xi(x)=0$ for any $x\in Y$ and $\xi(x)=1$ for any $x\in F$; for instance, one may take $\xi(x)=\frac{d(x,Y)}{d(x,Y)+d(x,F)}$, which in addition satisfies $\xi^{-1}(0)=Y$ and $\xi^{-1}(1)=F$. We obtain that $\xi\in V$ and $$\int\xi\,d\nu\ge\int_{F}\xi d\nu=\nu(F)>\frac12,$$ which contradicts (\[IsraelThm\]).
The set of all equilibrium states for $\phi$ is a compact and convex subset of $\M(f)$ whose extreme points are the ergodic measures. Therefore, the ergodic decomposition of $\nu$ must contain at least one ergodic equilibrium state $\nu_E$ with $\nu_E(Y)>0$. The ergodicity of $\nu_E$ and $f$-invariance of $Y$ imply that, in fact, $\nu_E(Y)=1$. Since $\phi|_Y\equiv 0$ we obtain $$P(\phi)=h_{\nu_E}(f)+\int \phi\,d\nu_E=h_{\nu_E}(f).$$ Moreover, for any other invariant measure $m$ supported on $Y$ we have $h_{m}(f)=h_m(f)+\int \phi\,d{m}\le P(\phi)=h_{\nu_E}(f)$. We conclude that $P(\phi)=h_{\rm top}(f|_Y)$ and hence every measure of maximal entropy of $f|_Y$ is an equilibrium state of $\phi$.
However, a priori the function $\phi$ might have some other equilibrium states which are not supported on $Y$. To eliminate this possibility, we will again make use of the continuous function $\xi: X \to [0,1]$ defined previously using Urysohn’s Lemma, which was chosen so that $\xi^{-1}(0)=Y$ and $\xi^{-1}(1)=F$. We define $\tau=\min\{0,\phi\}-\xi$. Then $\tau$ is continuous, $\tau(x)=0$ whenever $x\in Y$ and $\tau(x)<0$ whenever $x\notin Y$. In addition we have that $\tau\le\phi$, which implies that $P(\tau)\le P(\phi)$. For $m \in \M(f)$ which is an MME for $f|_Y$, we have $P(\tau) \geq h_m(f)+\int \tau\,d{m} = h_{m}(f) =h_{\rm top}(f|_Y) = P(\phi)$. We conclude that $P(\tau)= P(\phi)$. Thus, $P(\tau)=h_{\rm top}(f|_Y)$ and any MME for $f|_{Y}$ is an equilibrium state for $\tau$. A measure supported on $Y$ which is not an MME for $f|_{Y}$ is clearly not an equilibrium state for $\tau$. For any other measure $\tilde{\mu}$ with $\tilde{\mu}(X\setminus Y )>0$ we have $$\begin{aligned}
h_{\tilde{\mu}}(f)+\int \tau\,d{\tilde{\mu}} & =h_{\tilde{\mu}}(f)+\int_{X\setminus Y} \tau\,d{\tilde{\mu}} \\
& < h_{\tilde{\mu}}(f)+\int_{X\setminus Y} \phi\,d{\tilde{\mu}}\\
&\le P(\phi).\end{aligned}$$ Hence $\tilde{\mu}$ is not an equilibrium state for $\tau$. We conclude that the set of equilibrium states of $\tau$ is exactly the set of measures of maximal entropy for $Y$. We have constructed a continuous $\tau:X\to\R$ satisfying
1. $\tau(x)=0$ for any $x\in Y$ and $\tau(x)<0$ for any $x\notin Y$;
2. $P(\tau)=h_{\rm top}(f|_Y)$;
3. the set of equilibrium states of $\tau$ is exactly the set of measures of maximal entropy for $Y$.
We define $\rho: X \to (0, \infty)$ by $\rho=-\tau$. It follows from the discussion in §\[s.flows\] that the measures of maximal entropy for the suspension semi-flow with roof function $\rho$ are exactly the lifts of the measures of maximal entropy for $Y$.
MMEs for suspensions over the full shift {#s.applications}
=========================================
We now apply our main result, mainly in the case of suspension flows over the full shift on a finite alphabet.
Support for unique MME
----------------------
Theorem \[main\] allows much flexibility in specifying the support of a unique measure of maximal entropy. Recall that a system is *intrinsically ergodic* if it has a unique MME.
\[cor1\]Let $(X, f)$ be a compact topological dynamical system such that $h_{\rm top}(f)>0$ and the entropy map $\mu\mapsto h_\mu(f)$ is upper semi-continuous. Let $Y$ be a closed $f$-invariant subset so that $h_{\rm top}(f|_Y)>0$ and $f|_Y$ is intrinsically ergodic. There exists a continuous $\rho:X\to (0,\infty)$ so that the suspension semi-flow $(X_\rho,\varphi)$ is intrinsically ergodic and its unique measure of maximal entropy is supported on the lift of $Y$ to $X_\rho$.
This follows immediately from Theorem \[main\]. Examples of suitable $Y \subset X$ include expansive subsystems with specification [@rB74]. For example, $Y$ could be a transitive horseshoe inside a smooth non-uniformly hyperbolic system $(X,f)$. Corollary \[cor1\] can be applied to the full shift and a positive entropy uniquely ergodic subsystem to give an example of a unique MME supported on a minimal set. Explicit examples of such subsystems were provided by Grillenberger [@cG73]. Another example where the structure of $Y$ is more complex is to let $(X,f)$ be the full shift on $\{0, 1, \ldots, n\}$, and $Y=\Sigma_{\beta}$ be the $\beta$-shift for some $\beta \in (1, n)$. The $\beta$-shift is intrinsically ergodic, but typically does not have the specification property.
Two ergodic MMEs on a transitive component
------------------------------------------
We now turn to the phenomenon of non-uniqueness of MME. Examples of suspension flows over the full shift with non-unique MME were given in [@KTh], however each MME was supported on a different transitive component. Theorem \[main\] gives us the following corollary.
There are examples of suspension flows over the full shift on a finite alphabet which have two distinct ergodic measures of maximal entropy with the same support.
This can be achieved by letting $X$ be the full shift on four symbols, and taking the subshift $Y$ to be the Dyck shift. We recall the simple description of this shift, given in terms of parentheses and brackets. We split the alphabet of $X$ into two pairs of matching left and right symbols and denote them by (, ), \[, \]. The Dyck shift consists of all sequences where every opening parenthesis ( must be closed by ) and every opening bracket \[ must be closed by \]. Krieger [@Kr] showed that the Dyck shift has topological entropy $\log 3$ and admits exactly two ergodic measures of maximal entropy, both are fully supported and Bernoulli. An application of Theorem \[main\] gives a suspension flow over the full shift on four symbols with two measures of maximal entropy which have the same support and are products of Bernoulli measure with Lebesgue measure.
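As a rough numerical illustration of the entropy value $\log 3$ (this check is not part of the argument; it assumes the standard description of the Dyck language, in which a closing symbol whose matching opener lies to the left of the word is admissible), one can count admissible words of length $n$ with a simple recursion on the number of unmatched openers:

```python
import math

def dyck_word_count(n):
    """Count admissible words of length n in the Dyck shift on ( ) [ ]."""
    # f[d] = number of admissible continuations of the current length
    # starting from a stack of d unmatched opening symbols.
    f = [1] * (n + 2)
    for _ in range(n):
        g = [0] * (n + 2)
        # depth 0: two openers, and two closers matched outside the word
        g[0] = 2 * f[1] + 2 * f[0]
        # depth d > 0: two openers, and only the one matching closer
        for d in range(1, n + 1):
            g[d] = 2 * f[d + 1] + f[d - 1]
        f = g
    return f[0]

for n in (2, 10, 40, 160):
    print(n, math.log(dyck_word_count(n)) / n)
# the values approach log 3 = 1.0986... from above, although slowly
```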
There exist minimal subshifts with positive entropy which are not uniquely ergodic. These examples are commented on in, for example, [@DGS p.157], and [@kP86]. Explicit examples can be constructed by modifying Grillenberger’s arguments for uniquely ergodic positive entropy subshifts. These are more difficult to construct than the Dyck example. However, they allow us to strengthen the conclusion of the Corollary from ‘the same support’ to ‘both supported on the same minimal set’.
Cardinality for the set of MME
------------------------------
Krieger extends the Dyck shift construction described above to get examples of subshifts on $4L$ symbols with positive entropy and $2^L$ Bernoulli measures as maximal measures. Thus, we can get large numbers of measures of maximal entropy for a suspension flow over the shift, and we can ensure that they all have the same support and are products of Bernoulli measure with Lebesgue measure.
The first example of a transitive positive entropy subshift with any prescribed finite number of ergodic measures of maximal entropy was given by Shtilman in [@Sh]. His result was later extended by Haydn [@H], who showed that for any $L\in\mathbb{N}$ there is a topologically mixing subshift on $2L$ symbols with positive topological entropy and $L$ distinct ergodic entropy maximizing measures.
It is also possible for a transitive subshift on a finite alphabet to have infinitely many entropy maximizing measures. In [@jB05 Lemma 8], Buzzi constructed a subshift on three symbols with topological entropy $\log 2$ which supports countably many ergodic measures with entropy $\log 2$. In [@sW84 Example 4.6], Williams gave an example of a minimal subshift (in fact, a Toeplitz subshift) with positive entropy and uncountably many ergodic invariant probability measures. A simple explicit example of a transitive shift with uncountably many ergodic MME is also given in Buzzi [@jB05 Proof of Lemma 17]. Applying Theorem \[main\] to the full shift and the subshifts described above, we obtain the following result.
For suspension flows over the full shift on a finite alphabet, the set of ergodic measures of maximal entropy can have any finite cardinality, be countably infinite, or be uncountably infinite.
We remark that the existence of suspension flows in the above class for which the set of ergodic MMEs is uncountable has been independently obtained in a preprint by Iommi and Velozo [@IV].
---
abstract: 'We obtain the exact solution of the bond-percolation thresholds with inhomogeneous probabilities on the square lattice. Our method is based on the duality analysis with real-space renormalization, which is a profound technique developed in spin-glass theory. Our formulation is more straightforward than the very recent study of the same problem \[R. M. Ziff, [*et al.*]{}, J. Phys. A: Math. Theor. 45 (2012) 494005\]. The resultant generic formulas from our derivation can give several estimations of the bond-percolation thresholds on lattices other than the square lattice.'
address: 'Department of Systems Science, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto, 606-8501, Japan'
author:
- Masayuki Ohzeki
bibliography:
- 'paper.bib'
title: 'Duality with real-space renormalization and its application to bond percolation'
---
Introduction
============
A forest fire starts suddenly and spreads rapidly. In order to save the forest itself, the animals, and the human communities living there, it is important to resolve a naive question: how can we prevent the fire from spreading through the whole system? In the present study we consider an associated mathematical problem, namely [*percolation*]{} [@Stauffer1994]. Percolation is a very simple but ubiquitous problem, closely related to the formation of long-range connectivity in various systems, the spread of a forest fire being one example. For instance, it provides rich insight into numerous practical issues including conductivity in composite materials, infectious disease, flow through porous media, and polymerization. In the present study we restrict ourselves to the bond-percolation problem, where each bond connecting two neighboring sites of the system is occupied in a stochastic manner. Bond percolation is a typical instance of a cooperative phenomenon, which in general requires sophisticated techniques to analyze. Nevertheless, very simple formulas have been expected to hold for the bond-percolation thresholds at which giant clusters spanning the whole system appear. The key is a particular symmetry embedded in the system, namely the duality.
In classical spin models, the duality is known to be a hidden symmetry between the partition functions at low and high temperatures. This symmetry allows us to identify the locations of the critical points for various spin models such as the Ising and Potts models [@Kramers1941; @Wu1976]. In the present study we employ the duality in order to assess the bond-percolation threshold, since the $q$-state Potts model can be mapped to the bond-percolation problem in the limit of $q\to 1$ [@Wu1982; @Nishimori2011]. The special symmetry of the square lattice, namely self-duality, yields the exact solution of the bond-percolation threshold in the case with a homogeneous probability on each bond. Even for the case without self-duality, we can perform the duality analysis to obtain the bond-percolation thresholds in several cases in conjunction with another technique, namely the star-triangle transformation [@Wu1982].
In the present study, we generalize the star-triangle transformation to the case of the square lattice. We apply the generalized technique, namely the duality analysis with real-space renormalization, to the inhomogeneous case on the square lattice. The resultant equation for the bond-percolation thresholds coincides with that proposed by Wu [@Wu1979]. Very recent work by Ziff [*et al.*]{} has proved its validity by combining several profound results [@Ziff2012]. Our technique provides a more straightforward way to derive the exact formula for the critical manifolds of the bond-percolation thresholds without any additional ingredients to support the analysis. Moreover, we give explicit forms of several generic formulas depending on the structure of the unit cell forming the lattice. The basis of our technique comes from a different stream of research on random spin systems, in particular spin glasses. The straightforward rederivation of these existing equalities in a different context implies a close connection between two different realms, bond-percolation problems and spin glasses.
The paper is organized as follows. In the next section, we review the conventional duality and the star-triangle transformation for convenience. The third section applies the duality with real-space renormalization to the inhomogeneous case on the square lattice. In §4 we present the resultant generic formulas for the bond-percolation thresholds and compare our results to the very recent studies. In the last section, we conclude our study.
Conventional analysis
=====================
Our analysis is based on the duality [@Kramers1941; @Wu1976], which is the simplest way to estimate the bond-percolation thresholds. We consider the bond-percolation thresholds for the lattice consisting of repetition of the unit cell as in Fig. \[fig1\].
![The unit cell of the inhomogeneous bond-percolation problem on the square lattice. The unit cell is part of the square lattice, which is indicated by the dashed lines. The assigned values $p$, $r$, $s$ and $t$ are the probabilities that the two ends of each bond on the unit cell are connected.[]{data-label="fig1"}](fig1.eps){width="70mm"}
Let us define $p$, $r$, $s$, and $t$ as the inhomogeneous probabilities that the two ends of the corresponding bonds are connected. The conventional duality analysis can lead to the bond-percolation thresholds for the homogeneous case $p=r=s=t$ on the square lattice. In addition to the duality, the star-triangle transformation gives the bond-percolation thresholds for the inhomogeneous case on the triangular and hexagonal lattices. First, let us review the conventional duality for convenience.
Duality
-------
We consider the $q$-state Potts model with the following Hamiltonian, $$H = - \sum_{\langle ij \rangle} J_{ij} \delta\left(\phi_i - \phi_j\right),$$ where $J_{ij}$ is the strength of interactions and takes different values as $J_p$, $J_r$, $J_s$, and $J_t$, which will correspond to the probability assigned on the bonds. The summation is taken over all bonds, $\delta(x)$ is Kronecker’s delta, and $\phi_i$ stands for the spin direction taking $0,1,\cdots$, and $q-1$. Let us estimate the critical point of the $q$-state Potts model, since it corresponds to the bond-percolation threshold in the limit of $q \to 1$ [@Wu1982; @Nishimori2011].
We here assume the homogeneous case $J=J_p=J_r=J_s=J_t$. The duality exploits an inherent symmetry embedded in the partition function with the inverse temperature $\beta$ as $Z=\sum_{\phi_i}\prod_{\langle ij \rangle}\exp(\beta J \delta(\phi_i - \phi_j))=\sum_{\phi_i}\prod_{\langle ij \rangle}(1+ v\delta(\phi_i - \phi_j))$, where $v=\exp(\beta J)-1$ [@Kramers1941]. Two different approaches to evaluate the partition function, the low- and high-temperature expansions, can be related to each other by the $q$-component discrete Fourier transformation for the local part of the Boltzmann factor, namely the edge Boltzmann factor $x_k = 1+ v\delta(k)$ [@Wu1976]. Specifically, each term in the low-temperature expansion can be expressed by $x_k$, while the high-temperature one is written in the dual edge Boltzmann factor $x_l^*=\sum_{k} x_k \exp(i2\pi k l/q)/\sqrt{q}$. As a result, we obtain a double expression of the partition function by use of two different edge Boltzmann factors as $$Z(x_0,x_1,\cdots) = q^{N_S-\frac{N_B}{2}-1}Z^*(x^*_0,x^*_1,\cdots), \label{duality0}$$ where $Z^*$ is the partition function on a dual lattice. Here $N_S$ and $N_B$ denote the numbers of sites and bonds, respectively. The unity in the power of $q$ can be ignored in the following analysis. We obtain another system on the dual graph, on which each site on the original lattice exchanges with each plaquette on the dual one and vice versa, after the dual transformation through the $q$-component discrete Fourier transformation. When the dual lattice is the same as the original one, the system possesses self-duality. For instance, this is the case for the square lattice. Then we can regard $Z^*(x^*_0,x^*_1,\cdots)$ as $Z(x^*_0,x^*_1,\cdots)$ and can obtain the exact value of the critical point by the duality. We restrict ourselves to the case on the square lattice. Notice that $N_B/2=N_S$ on the square lattice. Let us extract the principal Boltzmann factors with edge spins parallel $x_0$ and $x_0^*$ from both sides of Eq. (\[duality0\]) as $$(x_0)^{N_B}z(u_1,u_2,\cdots) = (x^*_0)^{N_B}z(u^*_1,u^*_2,\cdots),$$ where $z$ is the normalized partition function $z(u_1,u_2,\cdots)=Z/(x_0)^{N_B}$ and $z(u^*_1,u^*_2,\cdots)=Z/(x^*_0)^{N_B}$. We here define the relative Boltzmann factors $u_k = x_k/x_0=1/(1+v)$ and $u_k^*= x^*_k/x^*_0=v/(q+v)$. The well-known duality relation can be obtained by rewriting $u^*_k$ in the same form as $u_k$ by use of $v^*$ as $v/(q+v) = 1/(1+v^*)$, namely $v^*=q/v$. Notice that the quantity $v^*$ corresponds to a coupling $K^*$ different from the original coupling $K=\beta J$, which implies a transformation of the temperature. We obtain the exact value of the critical temperature from the fixed point condition $v^2_c=q$ under the assumption that a unique transition occurs in the system. The limit $q \to 1$ can then give the bond-percolation threshold in the homogeneous case $p_c=p=r=s=t$ on the square lattice through $p_c = v_c/(1+v_c)$, namely $p_c=1/2$ [@Wu1982; @Nishimori2011]. We can also derive the critical point by the following simple equality $$x_0 = x_0^*.\label{MCP_duality}$$ Indeed this equality gives $v_c=1$, namely $p_c=1/2$.
For the case without self-duality, we can find an important relation from $v^*=q/v$. We can relate the probability assigned on the bond on the original lattice to that on the dual one as $p^* = 1-p$ in the limit $q \to 1$ [@Wu1982; @Kesten1982]. In other words, the probability $p$ that the two ends of a bond are connected on the original lattice equals the probability $1-p^*$ that the corresponding dual bond is disconnected, and vice versa. We can rewrite this fact in terms of the relationship of the connectivity as $$P(AB) = P(\bar{A}|\bar{B}),\label{PPdual}$$ where the quantity on the left-hand side expresses the probability that $A$ and $B$ are connected, and that on the right-hand side stands for the probability that $\bar{A}$ and $\bar{B}$ are disconnected. The end points $A$ and $B$ in Fig. \[fig2\] denote the sites on the original lattice.
![Duality relation of the bond-percolation problem. The dotted line denotes the disconnected bond. The bold line represents the connected bond. The white circles denote the original sites. The black circles represent the dual sites (original plaquettes). []{data-label="fig2"}](fig2.eps){width="70mm"}
On the other hand, $\bar{A}$ and $\bar{B}$ represent the sites on the dual lattice. Then the bond percolation threshold for the homogenous case on the square lattice can be represented by the following equality $$P(AB) = P(A|B).\label{PP0}$$
Star-triangle transformation
----------------------------
Let us consider the case on the triangular lattice. We here remove the homogeneous restriction that we imposed above. We deal with the bond-percolation problem with the inhomogeneous probabilities on the triangular lattice as depicted in Fig. \[fig3\].
![The inhomogeneous bond-percolation problem on the triangular and hexagonal lattices. The assigned values $p$, $r$ and $s$ are the connected probabilities assigned on each bond on the unit cell. On the hexagonal lattice, we put the dual probabilities $p^*$, $r^*$ and $s^*$, which are obtained after the dual transformation. The black circle on the hexagonal lattice represents the internal site we sum over in the star-triangle transformation. []{data-label="fig3"}](fig3.eps){width="70mm"}
The dual transformation changes the triangular lattice into the hexagonal lattice. Then we cannot perform the same analysis as that in the case on the square lattice. We employ another technique to relate the hexagonal lattice to the original triangular lattice. This can be achieved by the partial summation over internal spins at the down-pointing (up-pointing) star on the hexagonal lattice, namely star-triangle transformation [@Wu1982]. Then we can transform the partition function on the hexagonal lattice into that on another triangular lattice, namely $Z^*(x^*_0,x^*_1,\cdots)=Z(x^{*({\rm tr})}_0,x^{*({\rm tr})}_1,\cdots)$ in Eq. (\[duality0\]). We here use the renormalized-edge Boltzmann factor $x^{*({\rm tr})}_k$ defined as $$x_k^{*({\rm tr})} = \frac{1}{\sqrt{q}} \sum_{\phi_0} \prod_{i}\left\{\frac{v_i}{\sqrt{q}}\left(1 + \delta(\phi_i- \phi_0)\frac{q}{v_i}\right)\right\},\label{RBS}$$ where the product runs over $i=p,r$, and $s$ for the three bonds on the unit cell of the hexagonal lattice, namely the down-pointing (up-pointing) star. We here assume the inhomogeneous system with $v_i = \exp(\beta J_i)-1$. We take the summation over the internal spin $\phi_0$ denoted by the black circle on the unit cell as in Fig. \[fig3\]. The coefficient $1/\sqrt{q}$ comes from that in front of the partition function on the right-hand side of Eq. (\[duality0\]). Notice that $N_S$ is the same as the number of down-pointing (up-pointing) triangles $N_{\rm tr}$ on the triangular lattice, and $N_B=3N_S$. The subscript $k$ denotes the configuration of the edge spins $\{\phi_{l=p,r,s}\}$ on the unit cell. On the other hand, we rewrite the original partition function in terms of the product of the edge Boltzmann factors as $$x_k^{({\rm tr})} = \prod_{i}\left(1 + \delta(\phi_i- \phi_0)v_i\right).\label{RBT}$$ The double expression of the partition function can be written as $$Z(x^{({\rm tr})}_0,x^{({\rm tr})}_1,\cdots) = Z^*(x^{*({\rm tr})}_0,x^{*({\rm tr})}_1,\cdots) \label{duality_th}.$$ Similarly, let us extract the renormalized-principal Boltzmann factors with edge spins parallel $x^{({\rm tr})}_0$ and $x_0^{*({\rm tr})}$ from both sides of Eq. (\[duality\_th\]) as $$\begin{aligned}
\nonumber
&& \{x^{({\rm tr})}_0\}^{N_{\rm tr}}z^{({\rm tr})}(u^{({\rm tr})}_1,u^{({\rm tr})}_2,\cdots) \\
&&= \{x_0^{*({\rm tr})}\}^{N_{\rm tr}}z^{({\rm tr})}(u^{*({\rm tr})}_1,u^{*({\rm tr})}_2,\cdots).\end{aligned}$$ Notice that the number of the down-pointing (up-pointing) stars is the same as $N_{\rm tr}$. and $z^{({\rm tr})}$ is the normalized partition function $z^{({\rm tr})}(u^{({\rm tr})}_1,u^{({\rm tr})}_2,\cdots)=Z/(x^{({\rm tr})}_0)^{N_{\rm tr}}$ and $z^{({\rm tr})}(u^{*({\rm tr})}_1,u^{*({\rm tr})}_2,\cdots)=Z/(x^{*({\rm tr})}_0)^{N_{\rm tr}}$. We here define the renormalized-relative Boltzmann factors $u^{({\rm tr})}_k = x^{({\rm tr})}_k/x^{({\rm tr})}_0$ and $u_k^{*({\rm tr})}= x^{*({\rm tr})}_k/x^{*({\rm tr})}_0$. Similarly to the case on the square lattice, we put the simple equality as $$x^{({\rm tr})}_0 = x^{*({\rm tr})}_0.\label{MCP_th}$$ This equality yields the critical manifold of the $q$-state Potts model as detailed in Appendix \[AP0\]. By taking the limit $q \to 1$, we obtain the equality for the bond-percolation thresholds on the triangular lattice as $$T(p,r,s) = 0,\label{Teq0}$$ where $$T(p,r,s) = prs - p -r -s +1. \label{Teq}$$ If we perform the dual transformation on this equality, we find the solution for the bond-percolation thresholds on the hexagonal lattice as $$H(p^*,r^*,s^*) = 0, \label{Heq0}$$ where $$H(p,r,s) = prs - rp -rs-ps +1.\label{Heq}$$ As shown above we can obtain the exact solution of the bond-percolation thresholds for several cases through the duality and the technique in conjunction with the star-triangle transformation.
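As a quick numerical check (for illustration only), the homogeneous cases of Eqs. (\[Teq\]) and (\[Heq\]), $T(p,p,p)=p^3-3p+1=0$ and $H(p,p,p)=p^3-3p^2+1=0$, reproduce the well-known thresholds $p_c^{\rm tr}=2\sin(\pi/18)$ and $p_c^{\rm hex}=1-2\sin(\pi/18)$:

```python
import numpy as np

def root_in_unit_interval(coeffs):
    """Return the real root in (0, 1) of the polynomial given by coeffs."""
    return [r.real for r in np.roots(coeffs)
            if abs(r.imag) < 1e-12 and 0.0 < r.real < 1.0][0]

p_tri = root_in_unit_interval([1, 0, -3, 1])   # T(p,p,p) = p^3 - 3p + 1
p_hex = root_in_unit_interval([1, -3, 0, 1])   # H(p,p,p) = p^3 - 3p^2 + 1

print(p_tri, 2 * np.sin(np.pi / 18))           # both ~0.347296
print(p_hex, 1 - 2 * np.sin(np.pi / 18))       # both ~0.652704
```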
Duality with real space renormalization
=======================================
The duality with the star-triangle transformation, which is the partial summation over the internal spin of the unit cell on the hexagonal lattice, leads to the exact solution for the bond-percolation thresholds on the triangular and hexagonal lattices as in (\[Teq0\]) and (\[Heq0\]). Let us develop a similar analysis for the square lattice, following the successful case of the triangular and hexagonal lattices.
We start from Eq. (\[duality0\]) for the case on the square lattice. Notice that the edge Boltzmann factor is not enough to express the local properties of the inhomogeneous system. Thus we use a renormalized-edge Boltzmann factor inspired by the star-triangle transformation. We take the square unit cell consisting of four bonds from both of the original and dual square lattices as in Fig. \[fig4\].
![The unit cell with four bonds for the inhomogeneous case on the square lattice. The black circles denote the internal spins that we sum over, while the white ones are fixed as $\phi_i = 0$.[]{data-label="fig4"}](fig4.eps){width="90mm"}
Let us take the product of the edge Boltzmann factors and perform the summation over the internal spin. The resultant quantity is written as $$x_k^{({\rm sq})} = \sum_{\phi_0} \prod_{i=p,r,s,t}\left(1+v_i\delta(\phi_i- \phi_0)\right).\label{RB1}$$ Similarly, we obtain the dual renormalized-edge Boltzmann factor as $$x_k^{*({\rm sq})} = \sum_{\phi_0} \prod_{i=p,r,s,t}\left\{\frac{v_i}{\sqrt{q}}\left(1 + \delta(\phi_i- \phi_0)\frac{q}{v_i}\right)\right\}.\label{RB2}$$ We can then rewrite the relation obtained by the conventional duality (\[duality0\]) as $$Z(x^{({\rm sq})}_0,x^{({\rm sq})}_1,\cdots) = Z(x^{*({\rm sq})}_0,x^{*({\rm sq})}_1,\cdots) \label{duality1}.$$ We extract the renormalized-principal Boltzmann factors $x^{({\rm sq})}_0$ and $x_0^{*({\rm sq})}$ with all edge spins on the unit cell parallel as $$\begin{aligned}
\nonumber
&&(x^{({\rm sq})}_0)^{N_B/4}z^{({\rm sq})}(u^{({\rm sq})}_1,u^{({\rm sq})}_2,\cdots) \\
&& \quad = (x^{*({\rm sq})}_0)^{N_B/4}z^{({\rm sq})}(u^{*({\rm sq})}_1,u^{*({\rm sq})}_2,\cdots),\end{aligned}$$ where $z^{({\rm sq})}$ is the normalized partition function, $z^{({\rm sq})}(u^{({\rm sq})}_1,u^{({\rm sq})}_2,\cdots)=Z/(x^{({\rm sq})}_0)^{N_B/4}$ and $z^{({\rm sq})}(u^{*({\rm sq})}_1,u^{*({\rm sq})}_2,\cdots)=Z/(x^{*({\rm sq})}_0)^{N_B/4}$. We here define the renormalized-relative Boltzmann factors $u^{({\rm sq})}_k = x^{({\rm sq})}_k/x^{({\rm sq})}_0$ and $u_k^{*({\rm sq})}= x^{*({\rm sq})}_k/x^{*({\rm sq})}_0$. Then we impose the following equation to identify the location of the critical point $$x^{({\rm sq})}_0 = x^{*({\rm sq})}_0.\label{MCP1}$$ The direct evaluation of this equality to leading order in $\epsilon$, where $q = 1 + \epsilon$, gives the formula for the bond-percolation thresholds, as detailed in Appendix \[AP1\], $$\prod_{i} (1+v_i) C(p,r,s,t) = 0,\label{Per1}$$ where
\nonumber
& & C(p,r,s,t) = 1 -pr -ps -rs -pt -rt -st \\
& & \quad +prs +prt+rst+pst.\label{Ceq}\end{aligned}$$ It is reasonable that a unique transition occurs as we tune the temperature for the inhomogeneous interactions $J_p$, $J_r$, $J_s$, and $J_t$, which correspond to the probabilities of the bond-percolation problem in the limit $q \to 1$. Therefore the singularity of the free energy should be unique as the temperature is changed. The duality can then identify the location of the critical point. Therefore we conclude that $C(p,r,s,t) = 0$ gives the exact bond-percolation thresholds for the inhomogeneous case on the square lattice.
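For the homogeneous case $p=r=s=t$, Eq. (\[Ceq\]) reduces to $1-6p^2+4p^3=(2p-1)(2p^2-2p-1)$, so the only root in $(0,1)$ is $p_c=1/2$, in agreement with the self-dual result of the previous section. A minimal symbolic check of this reduction (for illustration only):

```python
import sympy as sp

p, r, s, t = sp.symbols('p r s t')
C = 1 - (p*r + p*s + p*t + r*s + r*t + s*t) + (p*r*s + p*r*t + r*s*t + p*s*t)

homogeneous = C.subs({r: p, s: p, t: p})
print(sp.factor(homogeneous))     # factors as (2*p - 1)*(2*p**2 - 2*p - 1)
print(sp.solve(homogeneous, p))   # contains p = 1/2; the other roots lie outside (0, 1)
```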
The equality $C(p,r,s,t) = 0$ was originally conjectured [@Wu1979], confirmed numerically with high precision and derived in a different way [@Scullard2008]. The validity of Eq. (\[Ceq\]) has been established very recently [@Ziff2012]. That proof is not simple, since it is based on an indirect analysis via considerations of the bond-percolation problem on different lattices. The present analysis is more straightforward. Without recourse to the duality, the real-space renormalization group analysis can give the exact bond-percolation thresholds for the homogeneous case but reduces to an approximation for the inhomogeneous case [@Raynolds1978; @Nakanishi1981]. By virtue of the duality, we can here find the exact answer for the critical point while standing on the fixed point of the renormalization group.
The duality with real-space renormalization as shown above is essentially the same as the profound technique used in the analysis of random spin systems, in particular spin glasses [@Ohzeki2008; @Ohzeki2009a]. For several models in the random spin system, the dual transformation cannot relate the original system to the same one with a different temperature, as it does for the $q$-state Potts model, despite the self-duality of the lattice. In these cases, we recover the self-duality of the random spin models via real-space renormalization over a larger range beyond the unit cell, namely summation over several internal spins in a larger cluster, in order to find the correct fixed point in a relatively wide space of parameters as well as the temperature. Then we impose the following condition to estimate the critical point, similarly to the conventional analysis by the duality (\[MCP\_duality\]), its combination with the star-triangle transformation (\[MCP\_th\]), and the above analysis (\[MCP1\]) [@Ohzeki2008; @Ohzeki2009a] $$x_0^{(b)}=x_0^{*(b)}, \label{MCP_cluster}$$ where $x_0^{(b)}$ and $x_0^{*(b)}$ are the renormalized principal and dual Boltzmann factors on the cluster. The size of the cluster is denoted by $b$ ($b=0$ means the simple duality without renormalization). It has been shown that, if $b$ is taken to be a large value, we can recover the self-duality following the concept of renormalization [@Raynolds1977; @Raynolds1980], and Eq. (\[MCP\_cluster\]) can give a precise estimation of the critical point [@Ohzeki2008]. When we deal with the random spin system on the square lattice, the estimation by setting $b=1$ (the four-bond cluster as depicted in Fig. \[fig5\]) often attains satisfactory precision.
![The cluster for the duality with real-space renormalization on the square lattice. The white circles denote the fixed spins for evaluation of the principal Boltzmann factors. The black circles express the spins we trace over (similarly to the star-triangle transformation) to obtain the principal Boltzmann factors.[]{data-label="fig5"}](fig5.eps){width="50mm"}
If one wishes to enhance the precision, a systematic improvement is achievable by increasing $b$ (e.g. the $b=2$, 16-bond cluster as in Fig. \[fig5\]). Indeed we can also estimate the bond-percolation thresholds from the critical points via an analysis of the bond-dilution Ising model, which is a typical model in the random spin system. In Appendix \[AP2\], we demonstrate the rederivation of the formula $C(p,r,s,t) = 0$ through the duality analysis with real-space renormalization for the bond-dilution Ising model with an inhomogeneous distribution. Equation (\[MCP\_cluster\]) yields the formula $C(p,r,s,t) = 0$ for the bond-percolation threshold for the inhomogeneous case on the square lattice. Below we examine the obtained result through the duality with real-space renormalization from several points of view. Beyond the case of the square lattice, we also apply the method to several similar systems.
Generic formulas on bond percolation thresholds
===============================================
When we analyze the homogeneous case on the square lattice, the formula for the bond-percolation threshold is a relation on the unit cell with the two-terminal structure (a single bond) as in Eq. (\[PP0\]). On the other hand, the unit cell where we perform the analysis consists of the three-terminal structure for the case on the triangular and hexagonal lattices as depicted in Fig. \[fig3\]. As in this case, for a lattice consisting of repetition of the three-terminal unit cell, the generic formula for the bond-percolation thresholds is known to be [@Ziff2006] $$P(ABC) = P(A|B|C), \label{PP1}$$ where the quantity on the left-hand side expresses the probability that the end points $A$, $B$, and $C$ on the three-terminal unit cell are all connected, and that on the right-hand side stands for the probability that none of $A$, $B$, and $C$ are connected.
We obtain the critical manifold (\[Teq0\]) for the bond-percolation thresholds on the triangular lattice from the above formula (\[PP1\]). We here demonstrate the reduction to Eq. (\[Teq0\]) from Eq. (\[PP1\]). In the case on the triangular lattice, let us write down all terms included in $P(ABC)$ as $$\begin{aligned}
\nonumber
P(ABC) &=& prs + pr(1-s) + p(1-r)s + (1-p)rs \\
&=& pr+sp+rs - 2 prs.\end{aligned}$$ On the other hand, the probability that none of the end points are connected is $$\begin{aligned}
\nonumber
P(A|B|C) &=& (1-p)(1-r)(1-s)\\ \nonumber
&=& 1-p-r-s+pr+rs+sp-prs.\\\end{aligned}$$ Thus we can obtain Eq. (\[Teq0\]) from Eq. (\[PP1\]).
In addition, on the hexagonal lattice, Eq. (\[PP1\]) can be reduced to Eq. (\[Heq0\]). On the hexagonal lattice, the left-hand side of Eq. (\[PP1\]) is written as $$\begin{aligned}
\nonumber
P(ABC) &=& prs.\end{aligned}$$ The right-hand side can be given as $$\begin{aligned}
\nonumber
P(A|B|C) &=& (1-p)(1-r)(1-s)+p(1-r)(1-s)\\ \nonumber
&&\quad +(1-p)r(1-s)+(1-p)(1-r)s \\
&=& 1-pr-rs-sp+2prs.\end{aligned}$$ Equation (\[PP1\]) reproduces Eq. (\[Heq0\]).
We here give a generic formula for several lattices consisting of repetition of the four-terminal unit cell as in Fig. \[fig6\] (left). The analysis as detailed in Appendix \[AP2\] provides the generic formula for the four-terminal unit cell as $$\begin{aligned}
\nonumber
&&P(ABCD) + P(BCD|A) +P(ACD|B) \\ \nonumber
&&\quad +P(ABD|C)+P(ABC|D) =
P(A|B|C|D), \\ \label{PP2}\end{aligned}$$ where $P(BCD|A)$ is the probability that $B$, $C$, and $D$ are all connected to each other while $A$ is disconnected, and the other quantities are defined in the same manner. We can reproduce Eq. (\[PP1\]) by reduction to the three-terminal unit cell (in particular the hexagonal lattice) by removing a single bond from the four bonds as in Fig. \[fig6\] (left). This means that all probabilities of connection to $D$ vanish, e.g. $P(ABCD)=0$, since $D$ cannot be connected, and we omit the dependence on $D$ in the probabilities where $D$ is disconnected, so that $P(ABC|D)=P(ABC)$.
![Transformations into the hexagonal and triangular lattices from the square lattice.[]{data-label="fig6"}](fig6.eps){width="90mm"}
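The following symbolic sketch (an illustration only; it uses the reading of Fig. \[fig4\] in which the terminals $A$, $B$, $C$, $D$ are joined to the central site by the bonds $p$, $r$, $s$, $t$) confirms that, for this unit cell, Eq. (\[PP2\]) is algebraically identical to the condition $C(p,r,s,t)=0$ of Eq. (\[Ceq\]):

```python
import sympy as sp

p, r, s, t = sp.symbols('p r s t')

# Star cell: two terminals are connected iff both of their bonds are present.
P_ABCD = p*r*s*t                                          # all four terminals connected
P_three_one = ((1-p)*r*s*t + p*(1-r)*s*t                  # three terminals connected,
               + p*r*(1-s)*t + p*r*s*(1-t))               # the remaining one isolated
P_all_isolated = ((1-p)*(1-r)*(1-s)*(1-t)                 # no two terminals connected,
                  + p*(1-r)*(1-s)*(1-t) + (1-p)*r*(1-s)*(1-t)
                  + (1-p)*(1-r)*s*(1-t) + (1-p)*(1-r)*(1-s)*t)  # i.e. at most one bond

C = 1 - (p*r + p*s + p*t + r*s + r*t + s*t) + (p*r*s + p*r*t + r*s*t + p*s*t)

# Eq. (PP2), P(ABCD) + sum P(three|one) = P(A|B|C|D), is equivalent to C = 0:
print(sp.expand(P_all_isolated - P_ABCD - P_three_one - C))   # 0
```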
For the case of the four-terminal unit cell, we can obtain another generic formula. By the conventional duality, we relate the bond-percolation problem on the original square lattice to that on the dual square lattice through the duality relation. The probabilities expressing connectivity of the edge sites are then changed as $P(ABCD) = P(\bar{A}|\bar{B}|\bar{C}|\bar{D})$ and $P(D|ABC)=P(\bar{D}\bar{C}|\bar{A}|\bar{B})$ similarly to Eq. (\[PPdual\]). Another generic formula for the four-terminal unit cell as in Fig. (\[fig6\]) (right) can be expressed as $$\begin{aligned}
\nonumber
&&P(\bar{A}|\bar{B}|\bar{C}|\bar{D}) + P(\bar{A}\bar{B}|\bar{C}|\bar{D})+P(\bar{B}\bar{C}|\bar{D}|\bar{A}) \\ \nonumber
&&\quad +P(\bar{C}\bar{D}|\bar{A}|\bar{B})+P(\bar{D}\bar{A}|\bar{B}|\bar{C}) =
P(\bar{A}\bar{B}\bar{C}\bar{D}), \\
\label{PP3}\end{aligned}$$ which is detailed in Appendix \[AP2\]. Here let us again reduce the above equality to the case of the three-terminal unit cell. This can be achieved by eliminating the terms associated with the disconnected probabilities with $\bar{D}$ since they are always connected as in Fig. \[fig6\] (right). In addition we omit the dependence on $D$. Then Eq. (\[PP3\]) recovers $P(\bar{A}\bar{B}\bar{C}) = P(\bar{A}|\bar{B}|\bar{C})$, namely Eq. (\[PP1\]). The difference between the four-terminal unit cells associated with Eqs. (\[PP2\]) and (\[PP3\]) comes from the tiling manner to cover the whole lattice. The former case is the full tiling of the unit cell. On the other hand, the latter case is the checker-board tiling as in Fig. \[fig7\].
![Covering by the two four-terminal unit cells. The shaded squares express the unit cells. The left panel denotes the original square lattice, and the right one depicts the dual lattice.[]{data-label="fig7"}](fig7.eps){width="80mm"}
In order to show the efficiency of the above generic formula, we take a fascinating instance of an application beyond the case of the square lattice. By the above generic formula (\[PP3\]), we can recover the equality for the bond-percolation thresholds on the bow-tie lattice, which was also dealt with in the proof of the validity of Eq. (\[Ceq\]) [@Ziff2012]. The four-terminal unit cell of the bow-tie lattice is shown in Fig. \[fig8\]. Equation (\[PP3\]) can then be reduced to $$\begin{aligned}
\nonumber
&&(1-u)C(p,r,s,t) - u\{prs(1-t)+pr(1-s)t+p(1-r)st+(1-p)rst\\ \nonumber
&&\quad + p(1-r)(1-s)t + p(1-r)s(1-t)\\ \nonumber
&&\qquad + (1-p)r(1-s)t + (1-p)rs(1-t) \\ \nonumber
&&\quad \qquad + prst\} \\
&&= C(p,r,s,t) - u (1-pr-st+prst) = 0.\end{aligned}$$ This equality has been given by combining the results for three-terminal unit cells, splitting the four-terminal unit cell into two triangles, as in Refs. [@Wierman1984; @Scullard2008; @Ziff2012]. The combination of the duality and the star-triangle transformation then yields the exact solution of the bond-percolation thresholds on the bow-tie lattice.
![The bow-tie lattice. The shaded squares express the four-terminal unit cells.[]{data-label="fig8"}](fig8.eps){width="50mm"}
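This reduction can be checked symbolically (an illustration only; it assumes the reading of Fig. \[fig8\] in which the bonds $p,r$ together with the diagonal $u$ form one triangle and $s,t$ together with $u$ the other). Conditioned on the diagonal $u$ being present, the bracket above equals $(p+r-pr)(s+t-st)$, the probability that both remaining terminals attach to the pair joined by $u$, and the homogeneous case reproduces the known bow-tie threshold $p_c\simeq 0.4045$:

```python
import sympy as sp

p, r, s, t, u = sp.symbols('p r s t u')
C = 1 - (p*r + p*s + p*t + r*s + r*t + s*t) + (p*r*s + p*r*t + r*s*t + p*s*t)

# With the diagonal u present, both remaining terminals must attach to the pair
# joined by u: the bracket equals (p or r present) x (s or t present).
B = (p + r - p*r) * (s + t - s*t)

lhs = (1 - u)*C - u*B
rhs = C - u*(1 - p*r - s*t + p*r*s*t)
print(sp.expand(lhs - rhs))                        # 0: the two forms agree

poly = sp.expand(lhs.subs({r: p, s: p, t: p, u: p}))
print(poly)                                        # 1 - p - 6p**2 + 6p**3 - p**5
print(sp.nsolve(poly, p, 0.4))                     # ~0.404518
```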
As another interesting, but ultimately only approximate, application, let us apply the generic formula (\[PP3\]) to the bond-percolation problem on the Kagomé lattice by considering the four-terminal unit cell with up-pointing and down-pointing triangles as in Fig. \[fig9\].
![The Kagomé lattice. The four-terminal unit cell consists of the up-pointing and down-pointing triangles.[]{data-label="fig9"}](fig9.eps){width="50mm"}
Here we take the homogeneous case for simplicity. We then obtain the following polynomial from the generic formula (\[PP3\]) as $$1 - 3p^2 - 6p^3 + 12 p^4 - 6p^5 + p^6 = 0.\label{KL}$$ The solution is $p_c = 0.524 429 71$, which is known to be an approximate estimation (Wu’s conjecture [@Wu1976]), while a numerical evaluation gives $p_c = 0.524 405 02(5)$ [@Feng2008]. The reason why the above equality yields an inexact value is that the Kagomé lattice is not solvable by the duality and the star-triangle transformation. Our generic formula (\[PP3\]) thus fails to give the exact answer, providing only an approximate estimation, when the system lacks this solvability.
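For reference (an illustrative numerical check only), the root of Eq. (\[KL\]) in $(0,1)$ indeed equals the conjectured value quoted above, slightly larger than the numerical estimate:

```python
import numpy as np

# 1 - 3p^2 - 6p^3 + 12p^4 - 6p^5 + p^6 = 0, coefficients from the highest degree down
roots = np.roots([1, -6, 12, -6, -3, 0, 1])
print([r.real for r in roots if abs(r.imag) < 1e-10 and 0 < r.real < 1])
# -> [0.5244297...], to be compared with the numerical estimate 0.52440502(5)
```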
However, our formulation by the duality with real-space renormalization can give a more precise value of the bond-percolation threshold on the Kagomé lattice. The above situation is the same as the one often encountered in studies of random spin systems [@Ohzeki2008; @Ohzeki2009a]. The generic formulas as in (\[PP2\]) and (\[PP3\]) can be given by Eq. (\[MCP\_cluster\]) for $b=1$ through analyses of the random spin systems as detailed in Appendix \[AP2\]. We can obtain a more precise value of the bond-percolation threshold by considering a larger cluster than the four-terminal unit cell, namely the $16$-bond cluster ($b=2$) and beyond. Although we stop the discussion of the bond-percolation problem on the Kagomé lattice here, since its threshold is beyond the scope of the present study, we remark on several recent developments on this issue. A similar analysis has been proposed in the context of the graph polynomial as demonstrated in Refs. [@Scullard2012a; @Jacobsen2012; @Scullard2012b]. The idea is essentially based on considering a large cluster beyond the four-terminal unit cell to estimate the bond-percolation thresholds on the Kagomé lattice. The method has indeed succeeded in giving very precise estimations of the bond-percolation threshold on the Kagomé lattice as $p_c = 0.524 405 00(1)$ [@Scullard2012b]. These facts suggest that the two independent methods, developed for random spin systems and for graph polynomials, are closely related to each other through the bond-percolation problem. We hope that future studies reveal a clearer relationship between these different realms.
Conclusion
==========
In the present study, we rederived the exact solution for the bond-percolation thresholds in the inhomogeneous case on the square lattice by use of the duality with real-space renormalization, which is a generalization of the star-triangle transformation. In addition, we obtained two different generic formulas depending on the manner in which the four-terminal unit cell tiles the whole lattice. Both equalities can be reduced to the known formula for the three-terminal unit cell, which covers the triangular and hexagonal lattices. The application of the generic formula reproduces the exact solution on the bow-tie lattice and the well-known approximate solution on the Kagomé lattice. Further analysis may give a more precise value of the bond-percolation threshold on the Kagomé lattice.
The duality analysis with real-space renormalization shown in the present study is essentially the same as the special technique that has been developed in the context of spin-glass theory. The method has been useful for describing precise phase boundaries [@Ohzeki2008; @Ohzeki2009a; @Ohzeki2012e]. The straightforward rederivation, in a different context, of existing results on the bond-percolation problem implies the existence of a fascinating theoretical connection between different realms, graph polynomials and the theory of spin glasses.
We emphasize the nontriviality of the results shown in the present study. Exact solutions for finite-dimensional many-body systems have been rare in spite of long years of effort. However, the situation has begun to change with the development of the duality analysis, which is found to be applicable to a relatively broad class of problems, namely spin glasses and inhomogeneous percolation problems. We hope that the duality analysis with real-space renormalization will play an essential role in understanding the nature of many-body systems, just as the conventional duality proposed by Kramers and Wannier contributed to the establishment of the Onsager solution [@Onsager1944].
The author thanks H. Nishimori, K. Fujii, and R. M. Ziff for fruitful discussions, and is grateful to J. L. Jacobsen, T. Obuchi, and T. Hasegawa for comments on the manuscript. This work was partially supported by MEXT in Japan, Grant-in-Aid for Young Scientists (B) No. 24740263.
Derivation of Eq. (\[Teq0\]) from Eq. (\[MCP\_th\]) {#AP0}
===================================================
We evaluate Eq. (\[MCP\_th\]) in this appendix. From the definition, we write Eq. (\[MCP\_th\]) as $$\prod_{i}(1+v_i) = \frac{1}{q^2}\left\{(q-1)\prod_{i}v_i + \prod_{i}(q+v_i)\right\}.$$ This is the critical manifold of the $q$-state Potts model on the triangular lattice. Let us take the leading term in $\epsilon$, with $q=1+\epsilon$, to obtain the bond-percolation thresholds on the triangular lattice. $$\begin{aligned}
\nonumber
&&x_0^{*({\rm tr})} - x_0^{({\rm tr})} \\ \nonumber
&&= \epsilon\prod_{i}(1+v_i)\left(-2 + \sum_i \frac{1}{1+v_i} + \prod_i \frac{v_i}{1+v_i} \right). \\ \label{T0}\end{aligned}$$ By rewriting each $v_i$ in terms of the probability assigned to each bond as $p = v_p/(1+v_p)$ etc., we reach $$\prod_{i}(1+v_i)T(p,r,s) = 0,$$ which reduces to Eq. (\[Teq0\]).
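The reduction above is easy to confirm symbolically. The following SymPy sketch (our own illustrative script; the variable names and the numerical starting point are arbitrary choices, not taken from the paper) expands the critical manifold to first order in $\epsilon$, substitutes $v_i = p_i/(1-p_i)$, and is expected to recover $T(p,r,s)=1-p-r-s+prs$; in the homogeneous case this reproduces the well-known triangular-lattice threshold $p_c = 2\sin(\pi/18)\simeq 0.3473$.

```python
# Illustrative SymPy check: first order in epsilon of the triangular-lattice
# Potts critical manifold gives the percolation condition 1 - p - r - s + p*r*s = 0.
import sympy as sp

eps, p, r, s = sp.symbols('epsilon p r s', positive=True)
q = 1 + eps
v = [x / (1 - x) for x in (p, r, s)]          # v_i = p_i / (1 - p_i)

lhs = sp.Mul(*[1 + vi for vi in v])
rhs = ((q - 1) * sp.Mul(*v) + sp.Mul(*[q + vi for vi in v])) / q**2
diff = rhs - lhs

# the zeroth-order term vanishes; the first-order coefficient carries the condition
assert sp.simplify(diff.subs(eps, 0)) == 0
coeff = sp.simplify(sp.diff(diff, eps).subs(eps, 0))

# strip the positive prefactor prod_i(1+v_i) = 1/((1-p)(1-r)(1-s))
T = sp.simplify(coeff * (1 - p) * (1 - r) * (1 - s))
print(sp.expand(T))            # expected: p*r*s - p - r - s + 1

# homogeneous case: 1 - 3p + p^3 = 0  ->  p_c = 2*sin(pi/18) ~ 0.3473
pc = sp.nsolve(T.subs({r: p, s: p}), p, 0.3)
print(pc, sp.N(2 * sp.sin(sp.pi / 18)))
```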
Derivation of Eq. (\[Ceq\]) {#AP1}
===========================
Here we demonstrate the detailed derivation of Eq. (\[Per1\]) from Eq. (\[MCP1\]). Using the definition of the renormalized-edge Boltzmann factors in Eqs. (\[RB1\]) and (\[RB2\]), we can write the difference between the left- and right-hand sides of Eq. (\[MCP1\]) as $$\begin{aligned}
\nonumber
&&x_0^{*({\rm sq})} -x_0^{({\rm sq})} \\ \nonumber
&&= \frac{\prod_i v_i}{q^2} \left\{q - 1 + \prod_{i}\left(1 + \frac{q}{v_i}\right)\right\}\\
&&\quad - \left\{ q - 1 + \prod_{i}\left(1 + v_i \right)\right\}.\label{App1}\end{aligned}$$ We take the leading term in $\epsilon$, with $q=1+\epsilon$. We first evaluate the following quantities $$\begin{aligned}
\nonumber
&&\frac{\prod_i v_i}{q^2}\left\{q - 1 + \prod_{i}\left(1 + \frac{q}{v_i}\right)\right\} \\ \nonumber
&&= \prod_i(1+v_i)\left( 1 - 2 \epsilon + \epsilon\sum_i \frac{1}{1+v_i} + \epsilon \prod_{i}\frac{v_i}{1+v_i}\right) \label{A1} \\ \end{aligned}$$ and $$\begin{aligned}
\nonumber
&&\left\{ q - 1 + \prod_{i}\left(1 + v_i \right)\right\} \\
&&= \prod_i(1+v_i)\left( 1 + \epsilon \prod_{i}\frac{1}{1+v_i}\right). \label{B1}\end{aligned}$$ Therefore, Eq. (\[App1\]) reduces to $$\begin{aligned}
\nonumber
&& \prod_i(1+v_i) \\ \nonumber
&& \quad \times\left\{- 2 + \sum_i \frac{1}{1+v_i} + \left(\prod_{i}v_i -1 \right)\prod_{i}\frac{1}{1+v_i} \right\} \\
&& = \prod_i(1+v_i) C(p,r,s,t).\end{aligned}$$ This reproduces Eq. (\[Ceq\]).
Alternative way to Eq. (\[Ceq\]) {#AP2}
================================
We show an alternative way to obtain Eq. (\[Ceq\]) by recourse to the bond-diluted Ising model on the square lattice. We consider the following Hamiltonian $$H = - \sum_{\langle ij \rangle} J_{ij} S_iS_j,$$ where $S_i$ stands for the Ising spin taking $\pm 1$, and $J_{ij}$ stands for the random coupling following the distribution function $$P_p(J_{ij}) = p \delta(J_{ij}-J) + (1-p)\delta(J_{ij}).$$ We similarly define $P_r$, $P_s$, and $P_t$ for the other bonds.
In random spin systems, we need to take the configurational average over $J_{ij}$ to evaluate the free energy. The replica method is often employed to perform this configurational average. Instead of the averaged logarithm of the partition function (the free energy), we analyze the averaged power by the well-known identity $$\left[ \log Z \right] = \lim_{n \to 0} \frac{\left[Z^n\right]-1}{n},$$ where $[\cdots]$ denotes the configurational average. Initially we deal with the replicated system by setting $n$ to a natural number. At the final step of the analysis, we take the limit $n \to 0$. We then regard the averaged power of the partition function $[Z^n]$ as an effective partition function, written as $Z_n$ (the replicated partition function).
Let us perform the duality analysis with real-space renormalization on the effective partition function. The effective partition function is constructed from the following edge Boltzmann factor, $$x_{\{S^{\alpha}_{i}\}} = \left[ \prod_{\alpha=1}^n\exp(K \tau_{ij}S^{\alpha}_{i}S^{\alpha}_j) \right],$$ where $K=\beta J$, and $\tau_{ij}$ takes the value $0$ or $1$ and expresses the absence or presence of the interaction. The superscript $\alpha$ runs from $1$ to $n$ and labels the replicas. On the other hand, the dual edge Boltzmann factor is defined as $$\begin{aligned}
\nonumber
&&x^*_{\{S^{\alpha}_{i}\}} \\ \nonumber
&&= \left(\frac{1}{\sqrt{2}}\right)^n\left[ \prod_{\alpha=1}^n\left({\rm e}^{K \tau_{ij}} + S^{\alpha}_{i}S_j^{\alpha}{\rm e}^{-K \tau_{ij}} \right) \right].\\\end{aligned}$$
Let us take the cluster with four bonds as in Fig. \[fig5\] to evaluate Eq. (\[MCP\_cluster\]) for $b=1$. In order to evaluate the renormalized-edge Boltzmann factors, we fix the edge spins to $S_i=1$ on the cluster and sum over the internal spin similarly to the star-triangle transformation. The renormalized-principal Boltzmann factor is written as $$x_0^{(1)} = \left[ \left\{ \sum_{S_0} \prod_{i}\exp(K \tau_{i}S_0) \right\}^n\right],$$ where the product runs over $i=p,r,s$ and $t$. The dual renormalized-principal Boltzmann factor is given by $$x_0^{*(1)} = \left[ \left\{ \left(\frac{1 }{4}\right)\sum_{S_0} \prod_{i}\left({\rm e}^{K \tau_{i}}+{\rm e}^{-K \tau_{i}}S_0 \right) \right\}^n\right].$$ Taking $n \to 0$ in Eq. (\[MCP\_cluster\]), we obtain the following formula $$\begin{aligned}
\nonumber
&&\left[ \log \left\{ \frac{ \prod_{i}2\cosh K \tau_{i}}{2\cosh \sum_i K \tau_i} \left(1 + \prod_i\tanh K \tau_i\right)\right\}\right] = 2 \log 2.\\\end{aligned}$$ In order to identify the location of the bond-percolation thresholds, we consider $K \to \infty$. We obtain $$\left[ \log \left(2^{4-\sum_{i}\tau_i-\prod_{i}(1-\tau_{i}) }\left( 1 + \prod_{i} \tau_i \right) \right)\right] = 2 \log 2.\label{Con0}$$
First, let us take the homogeneous case $p=r=s=t$. Equation (\[Con0\]) then becomes $$-p^4 - 4p^3(1-p) + 4p(1-p)^3 + (1-p)^4 = 0.$$ This implies $$p^4 + 4p^3(1-p) = 4p(1-p)^3 + (1-p)^4.$$
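As a quick numerical check (an illustrative snippet of our own, not part of the original analysis), the homogeneous condition above reduces to a cubic whose unique root in $(0,1)$ is $p=1/2$, consistent with the known bond-percolation threshold of the square lattice.

```python
# Illustrative SymPy check of the homogeneous square-lattice condition
# p^4 + 4p^3(1-p) = 4p(1-p)^3 + (1-p)^4.
import sympy as sp

p = sp.symbols('p', real=True)
f = sp.expand(p**4 + 4*p**3*(1 - p) - 4*p*(1 - p)**3 - (1 - p)**4)
print(f)                 # -4*p**3 + 6*p**2 - 1
print(sp.solve(f, p))    # roots are 1/2 and (1 +/- sqrt(3))/2; only 1/2 lies in (0, 1)
```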
Let us obtain the generic formula for the inhomogeneous case, Eq. (\[Ceq\]). From Eq. (\[Con0\]) we find
$$\begin{aligned}
\nonumber
&& prst +\left\{prs(1-t) + pr (1-s) t + p(1-r) st + (1-p) rst \right\}\\ \nonumber
&& \quad - \left\{p(1-r)(1-s)(1-t) + (1-p)r (1-s) (1-t) + (1-p)(1-r) s(1-t) + (1-p)(1-r)(1-s)t \right\} \\
&& \qquad - (1-p)(1-r)(1-s)(1-t) = 0.\label{PPbefore}\end{aligned}$$
By simplifying the above equality, we reproduce Eq. (\[Ceq\]).
We can find the general formula (\[PP2\]) as follows. The first term $prst$ in Eq. (\[PPbefore\]) corresponds to $P(ABCD)$ in Eq. (\[PP2\]). The following four terms in the first line are $P(ABC|D)$, $P(ACD|B)$, $P(ABD|C)$, and $P(BCD|A)$, respectively. The remaining terms combine into $-P(A|B|C|D)$. Therefore the above equality (\[PPbefore\]) yields the generic formula (\[PP2\]).
On the other hand, the simple duality relation $p^* = 1-p$ transforms Eq. (\[PPbefore\]) into $$\begin{aligned}
\nonumber
&& p^*r^*s^*t^*+\left\{p^*r^*s^*(1-t^*) + p^*r^* (1-s^*) t^* + p^*(1-r^*) s^*t^* + (1-p^*) r^*s^*t^* \right\}\\ \nonumber
&& \quad - \left\{p^*(1-r^*)(1-s^*)(1-t^*) + (1-p^*)r^* (1-s^*) (1-t^*) + (1-p^*)(1-r^*) s^*(1-t^*) + (1-p^*)(1-r^*)(1-s^*)t^* \right\} \\
&& \qquad - (1-p^*)(1-r^*)(1-s^*)(1-t^*) = 0.\label{PPbeforedual}\end{aligned}$$
Then the collection of all the terms in the first line becomes $P(\bar{A}\bar{B}\bar{C}\bar{D})$. Each term in the second line corresponds to $-P(\bar{A}\bar{B}|\bar{C}|\bar{D})$, $-P(\bar{B}\bar{C}|\bar{D}|\bar{A})$, $-P(\bar{D}\bar{A}|\bar{B}|\bar{C})$, and $-P(\bar{C}\bar{D}|\bar{A}|\bar{B})$, respectively. The last term is nothing but $-P(\bar{A}|\bar{B}|\bar{C}|\bar{D})$. We thus obtain another generic formula (\[PP3\]).
|
---
abstract: 'We report the experimental demonstration of two quantum networking protocols, namely quantum $1{\rightarrow}3$ telecloning and open-destination teleportation, implemented using a four-qubit register whose state is encoded in a high-quality two-photon hyperentangled Dicke state. The state resource is characterized using criteria based on multipartite entanglement witnesses. We explore the characteristic entanglement-sharing structure of a Dicke state by implementing high-fidelity projections of the four-qubit resource onto lower-dimensional states. Our work demonstrates for the first time the usefulness of Dicke states for quantum information processing.'
author:
- 'A. Chiuri'
- 'C. Greganti'
- 'M. Paternostro'
- 'G. Vallone'
- 'P. Mataloni'
title: 'Experimental Quantum Networking Protocols via Four-Qubit Hyperentangled Dicke States'
---
Networking offers the benefits of connectivity and sharing, often allowing for tasks that individuals are unable to accomplish on their own. This is well known for computing, where grids of processors outperform the computational power of single machines or allow the storage of much larger databases. It should thus be expected that similar advantages are transferred to the realm of quantum information. Quantum networking, where a given task is pursued by a lattice of local nodes sharing (possibly entangled) quantum channels, is emerging as a realistic scenario for the implementation of quantum protocols requiring medium/large registers. Key examples of such an approach are given by quantum repeaters [@repeaters], non-local gates [@distributed], schemes for light-mediated interactions of distant matter qubits [@kimble], and one-way quantum computation [@Briegelreview].
In this scenario, photonics is playing an important role: the high reconfigurability of photonic setups and outstanding technical improvements have facilitated the birth of a new generation of experiments (performed both in bulk optics and, recently, in integrated photonic circuits [@integrato]) that have demonstrated multi-photon quantum control towards high-fidelity computing with registers of a size inaccessible until only recently [@vall08prl; @lu07nap; @gao10prl; @vall10pra3; @bigge09prl; @gao10nap]. The design of complex interferometers and the exploitation of multiple degrees of freedom of a single photonic information carrier have enabled the production of interesting states, such as cluster/graph states, GHZ-like states and (phased) Dicke states [@kies07prl; @dickeexp; @dickeexpRome], among others [@bourennane2010; @bour06prl]. Dicke states have been successfully used to characterize multipartite entanglement close to fully symmetric states and its robustness to decoherence [@dickeexpRome]. They are potentially useful resource for the implementation of protocols for distributed quantum communication such as quantum secret sharing [@hillery], quantum telecloning (QTC) [@mura99pra], and open destination teleportation (ODT) [@bourennane; @panODT]. So far, such opportunities have only been examined theoretically and confirmed indirectly [@kies07prl; @dickeexp], leaving a full implementation of such protocols unaddressed.
In this Letter, we report the experimental demonstration of $1{\rightarrow{3}}$ QTC and ODT of logical states using a four-qubit symmetric Dicke state with two excitations realized using a high-quality hyperentangled (HE) photonic resource [@barbieri; @dickeexpRome]. The entanglement-sharing structure of the state has been characterized quantitatively using a structural entanglement witness for symmetric Dicke states [@structural; @toth] and fidelity-based entanglement witnesses for the three- and two-qubit states achieved upon subjecting the Dicke register to proper single-qubit projections [@dickeexp]. All such criteria have confirmed the theoretical expectations with a high degree of significance. As for the protocols themselves, the qubit state to teleclone/teleport is encoded in an extra degree of freedom of one of the physical information carriers entering such multipartite resource. This has been made possible by the use of a displaced Sagnac loop [@alme07sci] \[cf. Fig. \[setup\]\], which introduced unprecedented flexibility in the setting, allowing for the realization of high-quality entangling two-qubit gates on heterogeneous degrees of freedom of a photon [*within*]{} the Sagnac loop itself. The high fidelities achieved between the experiments and theory (as large as $96\%$, on average, for ODT) demonstrate the usefulness of Dicke states as resources for distributed quantum communication beyond the limitations of a “proof of principle". Our scheme is well suited for implementing $1{\rightarrow}N{>}3$ QTC of logical states or ODT with more than three receivers via the realization of larger HE resources, which is a realistic possibility.
![[**a)**]{} Scheme for the ${|\xi\rangle}\to{|D^{(2)}_4\rangle}$ conversion. The spatial qubits experience the Hadamard gates ${\sf H}_{c,d}$ implemented through a polarization insensitive beam splitter (${\rm BS}_1$). A controlled-NOT (controlled-PHASE) gate ${\sf CX}{=}{|{0}\rangle_{i}\langle{0}|}\otimes\openone_{j}+{|{1}\rangle_{i}\langle{1}|}\otimes\hat{\sigma}^x_j$ ($\overline{\sf CZ}{=}{|{1}\rangle_{i}\langle{1}|}\otimes\openone_{j}+{|{0}\rangle_{i}\langle{0}|}\otimes\hat{\sigma}^z_j$) is realized by a half-wave plate (HWP) with axis at $45^\circ$ ($0^\circ$) with respect to the vertical direction $(i{=c,d},~j{=}a,b$). The control (target) qubit of such gate is the path (polarization) degree of freedom (DOF). [**b)**]{} & [**c)**]{} Displaced Sagnac loop for the realization of the QTC/ODT protocol. Panel [**b)**]{} \[[**c)**]{}\] shows the path followed by the upper \[lower\] photon A \[B\]. The glass plates $\phi_{A,B,X}$ allow us to vary the relative phase between the different paths within the interferometer. [**d)**]{} Circuit for $1{\rightarrow}3$ QTC and ODT. Qubits $\{a,b,c,d\}$ are prepared in ${|D^{(2)}_4\rangle}$ while $X$ should be cloned/teleported. For QTC, the ${\sf CX}_{Xb}$ gate is complemented by the projection of $X$ ($b$) on the eigenstates of $\sigma^x$ ($\sigma^z$), so as to perform a BM. For QTC (ODT), operation O is a local Pauli gate ${\sf P}$ determined by the outcome of the BM according to the given table. For ODT (with, say, receiver qubit $c$), the operations in the dashed boxes should be removed.[]{data-label="setup"}](setup4small.eps){width="\linewidth"}
[*Resource production and state characterization.-*]{} The building block of our experiment is the source of two-photon four-qubit polarization-path HE states developed in [@barbieri; @ceccarelli] and used recently to test multi-partite entanglement, decoherence and general quantum correlations [@dickeexpRome; @discord2011; @chiur12njp]. Such apparatus has been modified as described in the Supplementary Information \[SI\] [@epaps] to produce the HE state ${|\xi\rangle}_{abcd}{=}[{|HH\rangle}_{ab}({|r \ell\rangle}-{|\ell r\rangle})_{cd}+2{|VV\rangle}_{ab}{|r \ell\rangle}]_{cd}/\sqrt6$. Here, we have used the encoding $\{{|H\rangle},{|V\rangle}\}{\equiv}\{{|0\rangle},{|1\rangle}\}$, with $H/V$ the horizontal/vertical polarization states of a single photon, and $\{{|r\rangle},{|\ell\rangle}\}{\equiv}\{{|0\rangle},{|1\rangle}\}$, where $r$ and $\ell$ are the path followed by the photons emerging from the HE stage [@epaps]. Qubits $a,c$ ($b,d$) are encoded in the polarization and momentum of photon A (B). State ${|\xi\rangle}$ is turned into a four-qubit two-excitation Dicke state ${|D^{(2)}_4\rangle}{=}(1/\sqrt6)\sum^6_{j=1}{|\Pi_j\rangle}$ (with ${|\Pi_j\rangle}$ the elements of the vector of states constructed by taking all the permutation of $0$’s and $1$’s in ${|0011\rangle}$) by means of unitaries arranged as specified in Ref. [@dickeexpRome] \[cf. Fig. \[setup\] [**a)**]{}\]. In the basis of the physical information carriers, the state reads ${|D^{(2)}_4\rangle}{=}[{|HH\ell\ell\rangle}+{|VVrr\rangle}+({|VH\rangle}+{|HV\rangle})({|r\ell\rangle}+{|\ell r\rangle})]/\sqrt6$. The fidelity of the protocols depends on the quality of this state, as will be clarified soon. We have thus tested the closeness of the experimental state to ${|D^{(2)}_4\rangle}$ and characterized its entanglement-sharing structure.
First, we have ascertained the genuine multipartite entangled nature of the state at hand by using tools designed to assess the properties of symmetric Dicke states [@structural; @toth; @campbell]. We have considered the multipartite entanglement witness $$\label{witness}
{\cal W}_m=[24\openone+\hat J^2_x\hat S_x+\hat J^2_y\hat S_y+\hat J^2_z(31\openone-7\hat J^2_z)]/12,$$ which is specific to ${|D^{(2)}_4\rangle}$ [@toth] and requires only three measurement settings. Here, $\hat S_{x,y,z}{=}(\hat J^2_{x,y,z}-\openone)/2$ with $\hat J_{x,y,z}{=}\sum_{i\in{\cal Q}}\hat\sigma^{x,y,z}_i/2$ the collective spin operators, ${\hat\sigma}^j~(j{=}x,y,z)$ the $j$-Pauli matrix and ${\cal Q}=\{a,b,c,d\}$. The expectation value of ${\cal W}_m$ is positive on any bi-separable four-qubit state, thus negativity implies multipartite entanglement. Its experimental implementation allows us to place a lower bound on the state fidelity with the ideal Dicke state as $F_{D^{(2)}_4}\ge(2-\langle{\cal W}_m\rangle)/3$. When calculated over the resource that we have created in the lab, we obtain $\langle{\cal W}_m\rangle{=}-0.341\pm0.015$, which leads to $F_{D^{(2)}_4}\ge(78\pm0.5)\%$. The genuine multipartite entangled nature of our state is corroborated by another significant test: we consider the witness testing bi-separability on multipartite symmetric, permutation invariant states like our ${|D^{(2)}_4\rangle}$ [@dickeexp; @campbell] $${\cal W}_{cs}(\gamma)=b_{4}(\gamma)\openone-(\hat{J}^2_x+\hat{J}^2_y+\gamma\hat{J}^2_z)~~~(\gamma{\in}\mathbb{R}).$$ Here $b_{4}(\gamma)$ is the maximum expectation value of the collective spin operator $\hat{J}^2_x{+}\hat{J}^2_y{+}\gamma\hat{J}^2_z$ over the class of bi-separable states of four qubits and can be calculated for any value of the parameter $\gamma$ [@campbell]. Finding $\langle{\cal W}_{cs}(\gamma)\rangle{<}0$ for some $\gamma$ implies genuine multipartite entanglement. The direct evaluation shows that already for $\gamma=-0.12$ the witness is negative by more than one standard deviation and by more than fifteen for $\gamma=-2.5$ (cf. SI [@epaps]).
These results, although indicative of the high quality of the resource produced, are not exhaustive and further evidence is needed. In order to provide an informed and experimentally undemanding analysis of the state being generated, we have decided to resort to indirect yet highly significant evidence on its properties. In particular, we have exploited the interesting entanglement structure that arises from ${|D^{(2)}_4\rangle}$ upon subjecting part of the qubit register to specific single-qubit projections. In fact, by projecting one of the qubits onto the logical ${|0\rangle}$ and ${|1\rangle}$ states, we maintain or lower the number of excitations in the resulting state without leaving the Dicke space, respectively. Indeed, we achieve ${|D^{(2)}_3\rangle}=({|011\rangle}+{|101\rangle}+{|110\rangle})/\sqrt3$ when projecting onto ${|0\rangle}$, while ${|D^1_3\rangle}{=}({|100\rangle}+{|010\rangle}+{|001\rangle})/\sqrt3$ is obtained when the projected qubit is found in ${|1\rangle}$. Needless to say, these are genuinely tripartite entangled states, as can be ascertained by using the entanglement witness formalism. For this task we have used the fidelity-based witness [@acin] ${\cal W}_{D^{(k)}_3}=({2}/{3})\openone{-}{|D^{(k)}_3\rangle}{\langle D^{(k)}_3|}$ $(k=1,2)$, whose mean is positive for any separable and biseparable three-qubit state, is $-1/3$ when evaluated over ${|D^k_3\rangle}$, and whose optimal decomposition (cf. SI [@epaps]) requires five local measurement settings [@acin; @gueh03ijtp]. We have implemented the witness for states obtained by projecting qubit $d$ ([*i.e.*]{} momentum of photon B), achieving $\langle{\cal W}^{exp}_{D^{(1)}_3}\rangle{=}-0.21{\pm}0.01$ and $\langle{\cal W}^{exp}_{D^{(2)}_3}\rangle{=}-0.24{\pm}0.01$ (the superscript indicates their experimental nature), corresponding to lower bounds for the fidelity with the desired state of $0.876{\pm}0.003$ and $0.908{\pm}0.003$, respectively.
Finally, by projecting two qubits onto elements of the computational basis, one can obtain elements of the Bell basis. Indeed, regardless of the projected pair of qubits, ${\langle ij|}D^{(2)}_4\rangle{=}{|\psi^+\rangle}$ with $\{{|\psi^\pm\rangle}{=}({|01\rangle}{\pm}{|10\rangle})/\sqrt2,~{|\phi^\pm\rangle}{=}({|00\rangle}{\pm}{|11\rangle})/\sqrt2\}$ the Bell basis and $i{\neq}{j}{=}{0,1}$. We have verified the quality of the reduced experimental states achieved by projecting the Dicke state onto ${|10\rangle}_{cd}$ and ${|01\rangle}_{cd}$ using two-qubit quantum state tomography (QST) [@jame01pra] on the remaining two qubits. By finding fidelities ${>}91\%$ regardless of the projection performed, we can claim to have a very good Dicke resource, which puts us in a position to experimentally implement the quantum protocols.
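For completeness, the projection structure described above can be checked with a few lines of linear algebra. The NumPy sketch below (our own illustrative script, not the analysis code of the experiment; helper names are ours) builds the ideal $|D^{(2)}_4\rangle$, projects the last qubit onto $|0\rangle$ and $|1\rangle$, and projects the last qubit pair onto $|01\rangle$ and $|10\rangle$, verifying the overlaps with $|D^{(2)}_3\rangle$, $|D^{(1)}_3\rangle$ and $|\psi^+\rangle$.

```python
# Minimal NumPy check of the projection structure of the ideal |D_4^(2)> state.
import itertools
import numpy as np

def dicke(n, k):
    """Symmetric n-qubit Dicke state with k excitations (qubit 0 = leftmost bit)."""
    psi = np.zeros(2**n)
    for excited in itertools.combinations(range(n), k):
        psi[sum(1 << (n - 1 - q) for q in excited)] = 1.0
    return psi / np.linalg.norm(psi)

def normalize(v):
    return v / np.linalg.norm(v)

d42 = dicke(4, 2)

# Projecting the last qubit onto |0> (|1>) keeps the amplitudes with last bit 0 (1).
proj0 = normalize(d42.reshape(8, 2)[:, 0])
proj1 = normalize(d42.reshape(8, 2)[:, 1])
print(abs(proj0 @ dicke(3, 2)))   # ~1.0: outcome |0> leaves |D_3^(2)>
print(abs(proj1 @ dicke(3, 1)))   # ~1.0: outcome |1> leaves |D_3^(1)>

# Projecting the last two qubits onto |01> or |10> leaves the Bell state |psi^+>.
psi_plus = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
for outcome in (0b01, 0b10):
    pair = normalize(d42.reshape(4, 4)[:, outcome])
    print(abs(pair @ psi_plus))   # ~1.0 in both cases
```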
[*$1{\rightarrow}3$ QTC and ODT.-*]{} Telecloning [@mura99pra] is a communication primitive that merges teleportation and cloning to deliver approximate copies of a quantum state to remote nodes of a network. Differently, ODT [@bourennane] enables the teleportation of a state to an arbitrary location of the network. Both require shared multipartite entanglement. A deterministic version of ODT makes use of GHZ entanglement [@panODT], while the optimal resources for QTC are symmetric states having the form of superpositions of Dicke states with $k$ excitations [@bourennane2010; @bour06prl; @mura99pra; @paternostro2010]. Continuous-variable QTC was demonstrated in [@koik06prl]. Although a symmetric Dicke state is known to be useful for such protocols (ODT being reformulated probabilistically) [@kies07prl], no experimental demonstration has yet been reported: in Ref. [@kies07prl], only an estimate of the efficiency of generation of a two-qubit Bell state between sender and receiver was given, based on data for ${|D^{(2)}_4\rangle}$. Differently, our setup allows us to perform both QTC and probabilistic ODT. We start by discussing the $1{\rightarrow}3$ QTC scheme based on ${|D^{(2)}_4\rangle}$, which is a variation of the protocol given in Ref. [@mura99pra]. We consider the qubit state to clone ${|\alpha\rangle}_X=\alpha{|0\rangle}_X+\beta{|1\rangle}_X~(|\alpha|^2+|\beta|^2{=}1)$, held by a [*client*]{} $X$. The agents of a [*server*]{} composed of qubits $\{a,b,c,d\}$ and sharing the Dicke resource agree on the identification of a [*port*]{} qubit $p$. The state of the pair $(X,p)$ undergoes a Bell measurement (BM) performed by implementing a controlled-NOT gate ${\sf CX}_{Xp}$ followed by a projection of $X$ ($p$) on the eigenstates of $\hat\sigma^x$ ($\hat\sigma^z$). They publicly announce the results of their measurement, which leaves us with $\bigotimes_{j{\in}{\cal S}_{tc}}{\sf P}_j(\alpha{|D^{(1)}_3\rangle}+\beta{|D^{(2)}_3\rangle})_{{\cal S}_{tc}}\otimes{|\psi_+\rangle}_{Xp}$, where ${\cal S}_{tc}{=}\{a,b,c,d\}\slash p$ is the set of server’s qubits minus $p$, ${|D^{(k)}_{3}\rangle}$ is a three-qubit Dicke state with $k{=}1,2$ excitations, and the gates ${\sf P}_j$ (identical for all the qubits in ${\cal S}_{tc}$) are determined by the outcome of the BM, as illustrated in Fig. \[setup\] [**d)**]{}. The protocol is now completed and the client’s qubit is cloned into the state of the elements of ${\cal S}_{tc}$. To see this, we trace out two of the elements of such a set and evaluate the state fidelity between the density matrix $\rho_{r}$ of the remaining qubit $r$ and the client’s state, which reads ${\cal F}(\theta){=}[9{-}\cos(2\theta)]/12$, where [$\alpha{=}\cos(\theta/2)$]{}. Clearly, the fidelity depends on the state to clone, achieving a maximum (minimum) of $5/6$ ($2/3$) at $\theta=\pi/2$ ($\theta=0,\pi$). This exceeds the value $7/9$ achieved by a universal symmetric $1\to3$ cloner due to the state-dependent nature of our protocol.
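The $\theta$-dependence of the telecloning fidelity can be reproduced numerically from the post-measurement state $\alpha{|D^{(1)}_3\rangle}+\beta{|D^{(2)}_3\rangle}$. The short NumPy sketch below (an illustrative check under the ideal-resource assumption, not the analysis code of the experiment) traces out two of the three qubits and compares the single-clone fidelity with ${\cal F}(\theta)=[9-\cos(2\theta)]/12$.

```python
# Illustrative check (ideal resource assumed): each clone of the post-measurement
# state alpha|D_3^(1)> + beta|D_3^(2)> has fidelity (9 - cos(2*theta))/12
# with the client state alpha|0> + beta|1>, where alpha = cos(theta/2).
import itertools
import numpy as np

def dicke(n, k):
    psi = np.zeros(2**n)
    for excited in itertools.combinations(range(n), k):
        psi[sum(1 << (n - 1 - q) for q in excited)] = 1.0
    return psi / np.linalg.norm(psi)

d31, d32 = dicke(3, 1), dicke(3, 2)

for theta in np.linspace(0.0, np.pi, 7):
    alpha, beta = np.cos(theta / 2), np.sin(theta / 2)
    client = np.array([alpha, beta])
    out = alpha * d31 + beta * d32        # state of the qubits in S_tc
    # reduced density matrix of one clone: trace out the other two qubits
    m = out.reshape(2, 4)                 # rows: first qubit, cols: remaining pair
    rho = m @ m.conj().T
    fid = client @ rho @ client
    print(f"theta={theta:.3f}  numeric={fid:.4f}  "
          f"analytic={(9 - np.cos(2 * theta)) / 12:.4f}")
```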
![(Color online) [**a)**]{} Experimental QTC: for an input state ${|1\rangle}_X$, after the BM on $(X,b)$ with outcome ${|\phi^+\rangle}_{Xb}$, the ideal output state is $(2{|{0}\rangle_{j}\langle{0}|}+{|{1}\rangle_{j}\langle{1}|})/3$, $\forall j{=}a,c,d$ \[left column of the panel\]. The state of qubit $j$ after the experimental QTC, has very large overlap with the theoretical state. The right column of the panel shows the experimental single-qubit density matrices. [**b)**]{} Theoretical QTC fidelity and experimental density matrices of the clone (qubit $a$) for various input states. We show the fidelities between the experimental input states and clones (associated uncertainties determined by considering Poissonian fluctuations of the coincidence counts). The dashed line shows the theoretical fidelity for pure input states of the client’s qubit. The dashed area encloses the values of the fidelity achieved for a mixed input state of $X$ and the use of an imperfect Dicke resource compatible with the states generated in our experiment (cf. SI [@epaps])[]{data-label="Teleclone"}](resultsTeleN6small.eps){width="9cm"}
We now introduce the ODT protocol. As for QTC, this is formulated as a game with a client and a server. The client holds qubit $X$, into which the state ${|\alpha\rangle}_X$ to teleport is encoded. The elements of the server share the ${|D^{(2)}_4\rangle}$ resource. The client decides which party $r$ of the server should receive the qubit to teleport ($r$ and $p$ can be any of $\{a,b,c,d\}$, and $r$ is chosen at the last step of the scheme). Unlike QTC, the client performs a ${\sf CX}_{Xp}$. At this stage the information on the qubit to teleport is [spread]{} across the server, and the client declares who will receive it. Depending on his choice, the members in ${\cal S}_{odt}{=}\{a,b,c,d\}\slash\{r,p\}$ project their qubits onto ${|01\rangle}_{{\cal S}_{odt}}$, getting $[\alpha({|001\rangle}+{|010\rangle})_{Xpr}+\beta({|111\rangle}+{|100\rangle})_{Xpr}]\otimes{|01\rangle}_{{\cal S}_{odt}}$. [The scheme is completed by a projection onto ${|+1\rangle}_{Xp}$ with ${|+\rangle}{=}({|0\rangle}+{|1\rangle})/\sqrt2$]{} [@note]. [*Experimental implementations of $1{\rightarrow}3$ QTC.-*]{} The setup in Fig. \[setup\] [**b)**]{} and [**c)**]{}, which represents a significant improvement over the scheme used in [@dickeexpRome], allows for the implementation of both the protocols. The shown displaced Sagnac loop and the use of the lower photon B allow us to add the client’s qubit to the computational register. This is encoded in the sense of circulation of the loop by such field: modes ${|r\rangle}$ and ${|\ell\rangle}$ of photon B impinge on different points of beam splitter ${\rm BS}_2$, so that the photon entering the Sagnac loop can follow the clockwise path, thus being in the ${|{\circlearrowright}\rangle}{\equiv}{|0\rangle}$ state, or the counterclockwise one, being in ${|{\circlearrowleft}\rangle}{\equiv}{|1\rangle}$ (photon A does not pass through ${\rm BS}_2$). The probability $|\alpha|^2$ of being in the former (latter) state relates to the transmittivity (reflectivity) of ${\rm BS}_2$. [This probability is varied using intensity attenuators intercepting the output modes of ${\rm BS}_2$]{}. At this stage, the state of the register is ${|D^{(2)}_4\rangle}_{abcd}\otimes (\alpha {|{{\circlearrowright}}\rangle} + e^{i \phi_x}\sqrt{1- |\alpha|^2} {|{{\circlearrowleft}}\rangle})_X$, where $\phi_x$ is changed by tilting the glass-plate in the loop. The ${\sf CX}_{Xp}$ gate has been implemented with qubit $X$ as the control, qubit $b$ ([*i.e.*]{} the polarization of photon B) as the port $p$ and taking a HWP rotated at $45^{\circ}$ with respect to the optical axes, placed only on the counterclockwise circulating modes of the Sagnac loop [@notedelay]. The second passage of the lower photon in ${\rm BS}_2$ allows to project qubit $X$ on the eigenstates of $\hat\sigma^x_X$. To complete the Bell measurement on qubits $(X,p)$ we have placed a HWP and a PBS before the detector in order to project qubit $p$ on the eigenstates of $\hat\sigma^z_p$. The remaining qubits ($a,c$ and $d$) embody three copies of the qubit $X$. Their quality has been tested by performing QST over the reduced states obtained by tracing over any two qubits. Pauli operators in the path DOF have been measured using the second passage of both photons through ${\rm BS}_1$. The glass plates $\phi_{A,B}$ allowed projections onto $\frac{1}{\sqrt{2}}({|r\rangle}+e^{i\phi_{A(B)}}{|\ell\rangle})_{c(d)}$. To perform QST on the polarization DOF we used an analyzer composed of HWP, QWP and PBS before the photo-detector. 
To trace over polarization, we removed the analyzer. To trace over the path, a delayer was placed on either ${|r\rangle}$ or ${|\ell\rangle}$ coming back to ${\rm BS}_1$, thus making them distinguishable and spoiling their interference.
In Fig. \[Teleclone\] [**a)**]{} we show the experimental results obtained for the input states ${|1\rangle}_{X}$, when $p{=}b$. QST on qubit $j{=}a,c,d$ shows an almost ideal fidelity with the theoretical state, uniformly with respect to label $j$, thus proving the symmetry of QTC. Our setup allows us to teleclone arbitrary input states. [To illustrate the working principles and efficiency of the telecloning machine, we have considered the logical states ${|0\rangle}_X$ and ${|+\rangle}_X$ and ${|1\rangle}_X$ (i.e. we took $\theta{\simeq}0,\pi/2$ and $\pi$) and measured the corresponding copies in qubit $a$ ([*i.e.*]{} the polarization of photon A). States ${|0\rangle}_X$ and ${|1\rangle}_X$ were generated by selecting the modes in the displaced Sagnac. In the first (second) case we considered only modes ${|{{\circlearrowright}}\rangle}$ (${|{{\circlearrowleft}}\rangle}$), while ${|+\rangle}_X$ was generated using both modes and adjusting the relative phase with the glass-plate $\phi_X$ (by varying this phase, we can explore the whole phase-covariant case). Although the experimental results are very close to the expectations for ${\cal F}(\theta)$ \[cf. Fig. \[Teleclone\] [**b)**]{}\], some discrepancies are found for $\theta=\pi/2$. In particular, the theory seems to underestimate (overestimate) the experimental fidelity of telecloning close to $\theta=\pi/2$ ($\theta=0,\pi$). These effects are due to the mixedness of the $X$ state entering the Sagnac loop as well as the suboptimal fidelity between the experimental resource and ${|D^{(2)}_4\rangle}$. In fact, the experimental input state corresponding to $\theta\simeq\pi/2$ has fidelity $0.91\pm0.02$ with the desired ${|+\rangle}_X$ due to depleted off-diagonal elements in its density matrix (cf. SI [@epaps]). We have thus modelled the telecloning of dephased client states based on the use of a mixed Dicke channel of sub-unit fidelity with ${|D^{(2)}_4\rangle}$. The details are presented in Ref. [@epaps]. Here we mention that, by including the uncertainty associated with the estimated $F_{D^{(2)}_4}$, we have determined a $\theta$-dependent region of telecloning fidelities into which the fidelity between the experimental state of the clones and the input client state falls. As shown in Fig. \[Teleclone\] [**b)**]{}, this provides a better agreement between theory and data. ]{}
[*Experimental implementations of ODT.-*]{} In ODT the client holds qubit $X$, which is added to the computational register using the Sagnac loop. The client’s qubit has been teleported to the server’s elements $a$ and $b$ ([*i.e.*]{} the polarization of photons A and B). The necessary ${\sf CX}_{Xp}$ gate has been implemented, as above, by taking $X$ as the control and $p{=}b$ as the target qubit. The server’s elements $\{c,d\}$ have been projected onto ${|01\rangle}_{cd}$ and ${|10\rangle}_{cd}$. Depending on the chosen receiver (either $a$ or $b$), the scheme is implemented by projecting onto ${|+1\rangle}_{Xa(b)}$ and performing QST of the teleported qubit $b(a)$. While the projection onto ${|+\rangle}_{X}$ has been realized using the second passage of the lower photon through ${\rm BS}_2$, a projection onto ${|1\rangle}_{a(b)}$ is achieved by projecting the physical qubit onto ${|V\rangle}_{a(b)}$. In Table \[ODT\] we report the experimental results obtained for several measurement configurations and teleportation channels. In the SI [@epaps] we provide the reconstructed density matrices of qubits $\{X,a,b\}$ for each configuration used.
Projection $\theta$ Fidelity Projection $\theta$ Fidelity
--------------------- ---------- ----------------------------------- --------------------- ---------- -----------------------------------
$_{cd}{\langle10|}$ $0$ $\mathcal{F}_{a}{=}0.93{\pm}0.01$ $_{cd}{\langle01|}$ $\pi$ $\mathcal{F}_{a}{=}0.98{\pm}0.01$
$_{cd}{\langle10|}$ $0$ $\mathcal{F}_{b}{=}0.95{\pm}0.01$ $_{cd}{\langle01|}$ $\pi$ $\mathcal{F}_{b}{=}0.97{\pm}0.01$
$_{cd}{\langle01|}$ $0$ $\mathcal{F}_{a}{=}0.97{\pm}0.01$ $_{cd}{\langle10|}$ $1.46$ $\mathcal{F}_{a}{=}0.92{\pm}0.02$
$_{cd}{\langle01|}$ $0$ $\mathcal{F}_{b}{=}0.97{\pm}0.01$ $_{cd}{\langle10|}$ $1.46$ $\mathcal{F}_{b}{=}0.98{\pm}0.01$
$_{cd}{\langle10|}$ $\pi$ $\mathcal{F}_{a}{=}0.96{\pm}0.01$ $_{cd}{\langle01|}$ $1.37$ $\mathcal{F}_{a}{=}0.97{\pm}0.02$
$_{cd}{\langle10|}$ $\pi$ $\mathcal{F}_{b}{=}0.98{\pm}0.01$ $_{cd}{\langle01|}$ $1.37$ $\mathcal{F}_{b}{=}0.96{\pm}0.02$
: Experimental fidelities between the teleported qubit ($a$ or $b$) and the state of qubit $X$ (determined by $\theta$). Uncertainties result from associating Poissonian fluctuations to the coincidence counts.[]{data-label="ODT"}
[*Conclusions and outlook.-*]{} We have implemented QTC and ODT of logical states using a four-qubit symmetric Dicke state. We have realized a novel setup based on the well-tested HE polarization-path states and complemented by a displaced Sagnac loop. This allowed the encoding of non-trivial input states in the computational register, and the performance of high-quality quantum gates and protocols. Our results go beyond state-of-the-art in the manipulation of experimental Dicke states and the realization of quantum networking.
[*Acknowledgments.–*]{} [We thank Valentina Rosati for the contribution given to the early stages of this work. This work was supported by EU-Project CHISTERA-QUASAR, PRIN 2009 and FIRB-Futuro in ricerca HYTEQ, and the UK EPSRC (EP/G004579/1).]{}
Supplementary Information on: Experimental Quantum Networking Protocols via Four-Qubit Hyperentangled Dicke States
==================================================================================================================
In this supplementary Information we provide further details on both the theoretical and experimental results and analysis reported in the main Letter.
Resource production and state characterization
==============================================
Here we describe the source of hyperentanglement that has been used as the building block of our experiment. As remarked in the text of the Letter, we use the encodings $\{{|H\rangle},{|V\rangle}\}{\equiv}\{{|0\rangle},{|1\rangle}\}$, with $H/V$ the horizontal/vertical polarization states of a single photon, and $\{{|r\rangle},{|\ell\rangle}\}{\equiv}\{{|0\rangle},{|1\rangle}\}$, where $r$ and $\ell$ are the path followed by the photons emerging from the HE stage introduced and exploited in [@barbieri; @dickeexpRome; @discord2011; @chiur12njp].
We modify such setup so to prepare the HE resource ${|\xi\rangle}_{abcd}{=}[{|HH\rangle}_{ab}({|r \ell\rangle}-{|\ell r\rangle})_{cd}+2{|VV\rangle}_{ab}{|r \ell\rangle}]_{cd}/\sqrt6$ introduced in the main Letter. A sketch of the apparatus is shown in Fig. \[setupEPAPS\]. A Type-I nonlinear $\beta$-barium borate crystal, pumped by a vertically polarized laser field (wavelength $\lambda_p$), generates a polarization-entangled state given by the superposition of the spontaneous parametric down conversion (SPDC) signals at degenerate wavelength produced by a double-pass scheme. The mask selects four spatial modes $\{{|r\rangle},{|\ell\rangle}\}_{A,B}$ (two for each photon), parallelized by lens ${\rm L}$. ${\rm QWP}_{1,2}$ are quarter-wave plates. The first pass produces $2{|VV\rangle}{|r\ell\rangle}$. The spatial modes are intercepted by two beam stoppers. ${\rm QWP}_{1}$ changes the polarization into ${|VV\rangle}$ after reflection by mirror ${\rm M}$. The latter also reflects the pump, which produces the second-pass SPDC contribution ${|HH\rangle}({|r\ell\rangle}-{|\ell r\rangle})$. The weight of this term in the final state ${|\xi\rangle}$ is determined by ${\rm QWP}_{2}$ [@dickeexpRome].
![Sketch of the experimental setup used to produce the HE resource state ${|\xi\rangle}_{abcd}$. The setup is discussed fully in the body of the text.[]{data-label="setupEPAPS"}](setupEPAPSsmall.eps){width="\linewidth"}
On entanglement witnesses for genuine multipartite entanglement
===============================================================
Collective-spin operators are useful tools for the investigation of genuine multipartite entanglement, particularly for symmetric, permutation invariant states. One can construct the witness operator [@boundToth] $$\label{standard}
{\cal W}_{cs}=b_{n}{\openone}-(\hat{J}^2_{x}+\hat{J}^2_{y}),$$ where $b_{n}$ is the maximum expectation value of $\hat{J}^2_{x}+\hat{J}^2_{y}$ over the class of bi-separable states of $n$ qubits. Finding $\langle{\cal W}_{cs}\rangle\!<\!0$ for a given state implies genuine multipartite entanglement. It can be the case that Eq. (\[standard\]) fails to reveal the multipartite nature of a state endowed with a lower degree of symmetry. More flexibility can nevertheless be introduced by means of a suitable generalization such as $$\label{betterone}
\hat{\cal W}_{cs}(\gamma)=b_{n}(\gamma)\openone-(\hat{J}^2_{x}+\hat{J}^2_{y}+\gamma\hat J^2_z)~~(\gamma\in\mathbb R).$$ Negativity of $\langle\hat{\cal W}_{cs}(\gamma)\rangle$ over a given state guarantees multipartite entanglement. The witness requires only three measurement settings and is thus experimentally very convenient. The bi-separability bound $b_{n}(\gamma)$ is now a function of parameter $\gamma$ and can be calculated numerically using the procedure described in Ref. [@campbell]. In general, $b_{n}(\gamma)\!<\!b_{n}(0)$ for $\gamma<0$. Consequently, we restrict ourselves to the case of negative $\gamma$.
In Table \[tavola\] we provide the experimental values of $\langle\hat J^2_{x,y,z}\rangle$ through which we have evaluated Eq. (\[betterone\]), which is plotted against $\gamma$ in Fig. \[witness\]. While $\langle\hat{\cal W}_{cs}(\gamma)\rangle$ soon becomes negative as $\gamma{<}-0.1$ is taken, the uncertainty associated with this expectation value, calculated by propagating errors in quadrature as $$\label{error}
\delta \langle{\cal W}^{exp}_{cs}(\gamma)\rangle=\sqrt{\sum_{j=x,y}(\delta\langle{\hat J}^2_j\rangle^{})^2+\gamma^2(\delta\langle{\hat J}^2_z\rangle^{})^2},$$ grows only very slowly with $\gamma$, therefore signaling an increasingly significant violation of bi-separability.
![Functional form of $\langle\hat{\cal W}_{cs}(\gamma)\rangle$ against $\gamma$, as determined by the measured expectation values of collective spin operators (cf. Table \[tavola\]). A negative value of $\langle\hat{\cal W}_{cs}(\gamma)\rangle$ signals genuine multipartite entanglement of the experimental state under scrutiny. The associated experimental uncertainty \[see Eq. (\[error\])\] increases only very slowly as $|\gamma|$ grows. []{data-label="witness"}](GME.eps){width="6cm"}
Expectation value (with uncertainty) Value
------------------------------------------------------------------- -----------------
$\langle\hat J^2_x\rangle^{}\pm\delta\langle\hat J^2_x\rangle^{}$ 2.568$\pm$0.015
$\langle\hat J^2_y\rangle^{}\pm\delta\langle\hat J^2_y\rangle^{}$ 2.617$\pm$0.011
$\langle\hat J^2_z\rangle^{}\pm\delta\langle\hat J^2_z\rangle^{}$ 0.039$\pm$0.028
: Experimentally measured expectation values of collective spin operators for the symmetric four-qubit Dicke state prepared in our experiment. The uncertainties are determined by associating Poissonian fluctuations to the coincidence counts.
\[tavola\]
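For reference, the ideal-state counterparts of the values in Table \[tavola\] can be computed directly. The NumPy sketch below (our own illustrative script, not the experimental analysis code) builds the collective spin operators $\hat J_{x,y,z}$ for four qubits and evaluates $\langle\hat J^2_{x,y,z}\rangle$ on the ideal $|D^{(2)}_4\rangle$, giving $3$, $3$ and $0$, to be compared with the measured $2.568$, $2.617$ and $0.039$.

```python
# Ideal-state values of <J_x^2>, <J_y^2>, <J_z^2> for |D_4^(2)>, for comparison
# with the measured values in Table [tavola] (illustrative sketch only).
import itertools
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
eye = np.eye(2)

def collective(op, n=4):
    """J = (1/2) * sum_i sigma_i acting on n qubits."""
    total = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        term = np.array([[1.0 + 0j]])
        for j in range(n):
            term = np.kron(term, op if j == i else eye)
        total += term
    return total / 2

def dicke(n, k):
    psi = np.zeros(2**n)
    for excited in itertools.combinations(range(n), k):
        psi[sum(1 << (n - 1 - q) for q in excited)] = 1.0
    return psi / np.linalg.norm(psi)

d42 = dicke(4, 2).astype(complex)
for name, s in (("Jx^2", sx), ("Jy^2", sy), ("Jz^2", sz)):
    J = collective(s)
    print(name, np.real(d42.conj() @ (J @ J) @ d42))
# expected output: 3.0, 3.0, 0.0
```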
Optimal decomposition of the entanglement witness for ${|D^1_3\rangle}$
=======================================================================
As discussed in the main body of the Letter, we have used a fidelity-based entanglement witness to characterize the genuine tripartite entanglement content of the state achieved upon projecting one of the qubits onto a state of the logical computational basis. Without affecting the generality of our discussion, here we concentrate on the case of a projection of qubit $d$ giving outcome ${|1\rangle}_d$, thus leaving us with the state ${|D^{(1)}_3\rangle}_{abc}$. The fidelity-based witness that we have implemented is given in the main Letter and is decomposed into five measurement settings as [@gueh03ijtp] $$\begin{aligned}
{\cal W}_{D^{(1)}_3}&=\frac{1}{24}\Big\{17\openone{+}7\hat\sigma^z_a\hat\sigma^z_b\hat\sigma^z_c{+}3\hat\Pi[\hat{\sigma}^z_a\openone_{bc}]{+}5\hat\Pi[\hat\sigma_a\hat\sigma_b\openone_c]\\
&-\sum_{l=x,y}\sum_{k=\pm}(\openone_a{+}\hat\sigma^z_a{+}k\hat\sigma^l_a)(\openone_b{+}\hat\sigma^z_b{+}k\hat\sigma^l_b)(\openone_c{+}\hat\sigma^z_c{+}k\hat\sigma^l_c)\Big\}
\end{aligned}$$ where $\hat\Pi[\cdot]$ performs the permutation of the indices of its argument. The decomposition is optimal in the sense that ${\cal W}_{D^{(1)}_3}$ cannot be decomposed with fewer measurement settings. Experimentally, we have used the following rearrangement of the previous expression $$\begin{aligned}
&{\cal W}_{D^{(1)}_3}{=}\frac{1}{24}\Big\{13\openone_{abc}{+}3\hat\sigma^z_a\hat\sigma^z_b\hat\sigma^z_c{-}\hat\Pi[\hat{\sigma}^z_a\openone_{bc}]{+}\hat\Pi[\hat\sigma^z_a\hat\sigma^z_b\openone_c]\\
&{-}2\hat\Pi[\hat\sigma^x_a\hat\sigma^x_b\openone_c]{-}2\hat\Pi[\hat\sigma^y_a\hat\sigma^y_b\openone_c]{-}2\hat\Pi[\hat\sigma^x_a\hat\sigma^x_b\hat\sigma^z_c]{-}2\hat\Pi[\hat\sigma^y_a\hat\sigma^y_b\hat\sigma^z_c]\Big\},
\end{aligned}$$ which was easier to implement with our setup.
{width="18cm"}
{width="14cm"}
On the experimental measurement of the client’s qubit for quantum telecloning
=============================================================================
A few remarks are in order on the way the client’s qubit $X$ is experimentally measured in the actual implementation of the quantum telecloning protocol.
Due to a slight imbalance at ${\rm BS}_2$ of Fig. 1 [**c)**]{} of the main Letter, the blue and yellow paths in the Sagnac loop used to encode qubit $X$ are not perfectly balanced. We have thus corrected for this asymmetry by first measuring the state of qubit $X$ generated entering the loop only with the ${|r\rangle}$ modes \[[*i.e.*]{} the blue path in Fig. 2 [**c)**]{} and [**d)**]{}\]. We have then done the same with the ${|\ell\rangle}$ modes (yellow paths). Finally, we have traced out the path degree of freedom embodied by $\{{|r\rangle},{|\ell\rangle}\}$ by summing up the corresponding counts measured for every single projection needed for the implementation of single-qubit quantum state tomography, thereby reinstating the symmetry.
Fidelity of quantum telecloning for mixed states of the client
==============================================================
Here we provide a model for the solid (red) line of Fig. 2 [**(b)**]{} accounting for the fidelity of quantum telecloning of a client’s mixed state. The evaluation of the theoretical fidelity of telecloning given in the main Letter takes into account neither the mixed nature of the client’s state nor the non-ideality of the experimental Dicke channel used for the scheme. As argued in the main Letter, these are the main sources of discrepancy between the experimental results and the theoretical predictions. Below we introduce a simple model that includes these imperfections and allows for a more faithful comparison between theoretical predictions and experimental data.
Our starting point is the observation that mixed input states of the client can correspond to telecloning fidelities larger than the theoretical values predicted by ${\cal F}(\theta)=[9-\cos(2\theta)]/12$. This can be straightforwardly seen by running the quantum telecloning protocol with a decohered state resulting from the application of a dephasing channel to a pure client’s state of the form $\alpha{|0\rangle}_X+\beta{|1\rangle}_X$ with $\alpha=\cos(\theta/2)$ as in the main Letter. This is illustrated in Fig. 2 of the main Letter. Quite intuitively, as the input client’s state loses its coherences, the fidelity of telecloning improves. The second observation we make is that the entangled channel used in our experiment, although of very good quality, has a non-unit overlap with an ideal Dicke resource. Taking into account the major sources of experimental imperfections, along the lines of the investigation in [@dickeexpRome], a reasonable description of the four-qubit resource produced in our experiment is the Werner-like state $$\label{wernerlike}
\rho_{D}=p{|D^{(2)}_4\rangle}{\langle D^{(2)}_4|}+(1-p){\openone}/16$$ with $0\le p\le1$. The entangled Dicke component in such a state is evaluated considering that our experimental estimate for the lower bound on the state fidelity is $F_{D^{(2)}_4}\ge(78\pm0.5)\%$. Moreover, we have checked that slight experimental imperfections in the determination of the populations of the input client’s states (within the range observed experimentally) do not affect the overall picture significantly. We have thus incorporated the effects of coherence-depleted input states of qubit $X$ into the protocol for $1\rightarrow3$ quantum telecloning performed using a mixed Dicke resource as in Eq. (\[wernerlike\]). The dephasing parameter used in the model for the mixed client’s state has been adjusted so that, at $\theta=\pi/2$, we recover the real part of the experimentally reconstructed off-diagonal elements of the density matrix of qubit $X$ (fixed relative phases between ${|0\rangle}_X$ and ${|1\rangle}_X$ do not modify our conclusions). The resulting state fidelity, shown in Fig. 2 [**b)**]{} of the main Letter, is in very good agreement with the experimental data.
Single-qubit quantum state tomography of receivers’ states in experimental QTC and ODT
======================================================================================
In Fig. \[tomoQTC\] (Fig. \[tomo\]) we give the single-qubit density matrix obtained through quantum state tomography of the receiver’s state in the QTC (ODT) protocol. The telecloned states reported in Fig. \[tomoQTC\] have been shown in Fig. 2 [**b)**]{} of the main letter. We have considered three different input client’s states. For each of them, we have measured the telecloned state on the qubit $a$.
The values of state fidelity included in Fig. \[tomo\] are those reported in Table I of the main Letter. We have considered four different input client’s states. For each of them, we have projected the server’s elements onto either ${|01\rangle}_{{\cal S}_{odt}}$ or ${|10\rangle}_{{\cal S}_{odt}}$ and taken qubit $a$ or $b$ as the receiver. The corresponding quantum state fidelities are evidently quite uniform and consistently above $90\%$ (mean fidelity $0.96\pm0.01$), thus demonstrating high-quality and receiver-oblivious ODT.
A. I. Lvovsky, B. C. Sanders, and W. Tittel, Nature Photon. [**3**]{}, 706 (2009) and references therein.
D. Gottesman and I. Chuang, Nature (London) [**402**]{}, 390 (1999); J. Eisert, [*et al.*]{}, Phys. Rev. A [**62**]{}, 052317 (2000); D. Collins, N. Linden, and S. Popescu, [*ibid.*]{} [**64**]{}, 032302 (2001); S. F. Huelga, M. B. Plenio, and J. A. Vaccaro, [*ibid.*]{} [**65**]{}, 042316 (2002); M. Paternostro, M. S. Kim, and G. M. Palma, J. Mod. Opt. [**50**]{}, 2075 (2003).
H. J. Kimble, Nature (London) [**453**]{}, 1023 (2008); B. B. Blinov, D. L. Moehring, L.-M. Duan, and C. Monroe, [*ibid.*]{} [**428**]{}, 153 (2004); S. Olmschenk, [*et al.*]{}, Science [**323**]{}, 486 (2009).
H. J. Briegel, [*et al.*]{}, Nature Phys. [**5**]{}, 19 (2009).
A. Politi [*et al.*]{}, Science [**320**]{}, 646 (2008); [*ibid.*]{} [**325**]{}, 1221 (2009); L. Sansoni, [*et al.*]{}, Phys. Rev. Lett. [**105**]{}, 200503 (2010); A. Crespi, [*et al.*]{}, Nature Comm. [**2**]{}, 566 (2011).
W.-B. Gao, [*et al.*]{}, Phys. Rev. Lett. [**104**]{}, 020501 (2010). G. Vallone, [*et al.*]{}, Phys. Rev. Lett. [**100**]{}, 160502 (2008). C.-Y. Lu, [*et al.*]{}, Nature Phys. [**3**]{}, 91 (2007). G. Vallone, [*et al.*]{}, Phys. Rev. A [**81**]{}, 050302(R) (2010). D. N. Biggerstaff, [*et al.*]{}, Phys. Rev. Lett. [**103**]{}, 240504 (2009). W. B. Gao, [*et al.*]{}, Nature Physics **6**, 331 – 335 (2010) N. Kiesel, [*et al.*]{}, Phys. Rev. Lett. [**98**]{}, 063604 (2007).
R. Prevedel, [*et al.*]{}, Phys. Rev. Lett. [**103**]{}, 020503 (2009); W. Wieczorek, [*et al.*]{}, Phys. Rev. Lett. [**103**]{}, 020504 (2009).
A. Chiuri, [*et al.*]{}, Phys. Rev. Lett. [**105**]{}, 250501 (2010).
M. Radmark, M. Żukowski, and M. Bourennane, Phys. Rev. Lett. [**103**]{}, 150501 (2009); New J. Phys. [**11**]{}, 103016 (2009).
M. Bourennane, [*et al.*]{}, Phys. Rev. Lett. [**96**]{}, 100502 (2006).
M. Hillery, V. Bužek and A. Berthiaume, Phys. Rev. A [**59**]{}, 1829 (1999).
M. Murao, [*et al.*]{}, Phys. Rev. A [**59**]{}, 156 (1999).
A. Karlsson and M. Bourennane, Phys. Rev. A [**58**]{}, 4394 (1998).
Z. Zhao [*et al.*]{}, Nature (London) [**430**]{}, 54 (2004).
M. Barbieri, [*et al.*]{}, Phys. Rev. A **72**, 052110 (2005).
P. Krammer, [*et al.*]{}, Phys. Rev. Lett. [**103**]{}, 100502 (2009).
G. Töth, [*et al.*]{}, New J. Phys. [**11**]{}, 083002 (2009).
M. P. Almeida, [*et al.*]{}, Science [**316**]{}, 579 (2007). R. Ceccarelli, [*et al.*]{}, Phys. Rev. Lett. [**103**]{}, 160401 (2009). A. Chiuri, [*et al.*]{}, Phys. Rev. A [**84**]{}, 020304(R) (2011).
A. Chiuri, [*et al.*]{}, New J. Phys. [**14**]{}, 085006 (2012).
S. Campbell, M. S. Tame, and M. Paternostro, New J. Phys. [**11**]{}, 073039 (2009).
See supplementary material at XXXX for an additional analysis on the properties of the system.
A. Acín, [*et al.*]{}, Phys. Rev. Lett. [**87**]{}, 040401 (2001).
O. Gühne, P. Hyllus, Int. J. Theor. Phys. [**42**]{}, 1001 (2003)
D. F. V. James, [*et al.*]{}, Phys. Rev. A **64**, 052312 (2001).
F. Ciccarello, [*et al.*]{}, Phys. Rev. A [**82**]{}, 030302(R) (2010).
Similar results hold for projections onto ${|10\rangle}_{{\cal S}_{odt}}$.
S. Koike, [*et al.*]{}, Phys. Rev. Lett. [**96**]{}, 060504 (2006).
V. Scarani, [*et al.*]{}, Rev. Mod. Phys. [**77**]{}, 1225 (2005).
The second HWP in Fig. \[setup\] [**d)**]{} compensates the temporal delay introduced by the first one.
G. Tóth, J. Opt. Soc. Am. B [**24**]{}, 275 (2007).
|
---
abstract: 'In this paper, we introduce Hierarchical Invertible Neural Transport (HINT), an algorithm that merges Invertible Neural Networks and optimal transport to sample from a posterior distribution in a Bayesian framework. This method exploits a hierarchical architecture to construct a Knothe-Rosenblatt transport map between an arbitrary density and the joint density of hidden variables and observations. After training the map, samples from the posterior can be immediately recovered for any contingent observation. Any underlying model evaluation can be performed fully offline from training without the need of a model-gradient. Furthermore, no analytical evaluation of the prior is necessary, which makes HINT an ideal candidate for sequential Bayesian inference. We demonstrate the efficacy of HINT on two numerical experiments.'
author:
- Gianluca Detommaso
- Jakob Kruse
- Lynton Ardizzone
- Carsten Rother
- 'Ullrich K[ö]{}the'
- Robert Scheichl
bibliography:
- 'INT\_biblio.bib'
nocite: '[@*]'
title: 'HINT: Hierarchical Invertible Neural Transport for General and Sequential Bayesian inference'
---
Introduction
============
Bayesian inference is a statistical inference framework where a *prior* probability distribution of some unknown variable gets updated as more observations become available. The task of sampling from such a *posterior* probability distribution can be extremely challenging, in particular when the unknown variable is either very high-dimensional, or the underlying model is very complex and expensive to evaluate (e.g. chaotic dynamical systems, SDEs, PDEs). The problem becomes even harder when observations arrive in a streaming form and Bayesian inference has to be performed *sequentially* (a.k.a. filtering). Standard approaches to these tasks include MCMC algorithms, SMC techniques, and variational inference approximations [@gilks1995markov; @doucet2009tutorial; @blei2017variational], and many remarkable improvements of these contributions have been proposed in the last decades.
A more recent approach entails the use of optimal transport [@villani2008optimal] in order to map a reference density (e.g. Gaussian) to a target density (e.g. posterior) [@marzouk2016sampling]. One can define a possible class of transport maps and, within this class, seek an optimal map such that the corresponding push-forward density of the reference approximates the target as closely as possible. Samples from the target density can then be recovered by applying the transport map to samples from the reference density. The class of transport maps that is chosen characterizes the method in use. For example, [@marzouk2016sampling] introduces polynomial maps within a Knothe-Rosenblatt rearrangement structure, whereas [@liu2016stein; @detommaso2018stein] exploit variational characterizations embedded in an RKHS.
In this paper, we propose different ways to use Invertible Neural Networks (INNs) within an optimal transport perspective. INNs have recently been introduced in [@dinh2016density] as a class of neural networks characterized by an invertible architecture. Crucially, every layer combines orthogonal transformations and triangular maps, which ensures that the determinant of the Jacobian of the overall map can be computed at essentially no extra cost during the forward pass. Here, we extend the results in [@dinh2016density; @ardizzone2018analyzing] and introduce and compare different transport constructions with the goal of sampling from a posterior distribution. Finally, we propose HINT, a *Hierarchical Invertible Neural Transport* method where a hierarchical INN architecture is exploited within a Knothe-Rosenblatt structure in order to densify the input-to-output dependence of the transport map and to allow sampling from the posterior. One of the advantages of HINT is that model evaluations can be performed fully offline and the gradient of the model is not required, which is very convenient when models and their gradients (e.g. PDEs and their adjoint operators) are very expensive to compute. In addition, analytical evaluations of the prior density are never required, which makes HINT ideal for sequential Bayesian inference.
The paper is organized as follows. Section \[sec:INNs\] gives a background on INNs, together with a possible extension in Section \[sec:nonlinQ\]. Section \[sec:transport\] provides a transport perspective, where new constructions are derived and studied, and a statistical error analysis of the transport estimator in Section \[sec:consistency\_CLT\]. Section \[sec:HINT\] introduces HINT, a novel hierarchical architecture, and how to use it to sample from the posterior. Section \[sec:seqBI\] describes HINT in a sequential Bayesian framework. Section \[sec:numerical\] studies the performance of the algorithm on two challenging inference problems. We finish with some conclusions in Section \[sec:conclusion\].
Invertible Neural Networks {#sec:INNs}
==========================
Invertible Neural Networks (INNs) have been introduced in [@dinh2016density] as a special case of deep neural networks whose architecture allows for trivial inversion of the map from input to output. One can describe an INN as an invertible map ${{\boldsymbol}T}:{\mathcal{X}}\to{\mathcal{X}}$ defined on some vector space ${\mathcal{X}}$, characterized by the composition of invertible layers ${{\boldsymbol}T}_\ell$, i.e. ${{\boldsymbol}T}({{\boldsymbol}u}) := {{\boldsymbol}T}_L\circ \cdots \circ {{\boldsymbol}T}_1 ({{\boldsymbol}u})$, for ${{\boldsymbol}u}\in{\mathcal{X}}$ and $\ell = 1,\dots,L$. The architecture of each layer ${{\boldsymbol}T}_\ell$ can be described as follows: $$\label{eq:forw_arch}
{{\boldsymbol}T}_\ell({{\boldsymbol}u}) \coloneqq \begin{bmatrix}
{{\boldsymbol}T}_\ell^1({\tilde{{\boldsymbol}u}}_1) \\
{{\boldsymbol}T}_\ell^2({\tilde{{\boldsymbol}u}}_1,{\tilde{{\boldsymbol}u}}_2)
\end{bmatrix}
\coloneqq \begin{bmatrix}
{\tilde{{\boldsymbol}u}}_1\\
{\tilde{{\boldsymbol}u}}_2 \odot {\boldsymbol}\exp({{\boldsymbol}s}_\ell({\tilde{{\boldsymbol}u}}_1)) + {{\boldsymbol}t}_\ell({\tilde{{\boldsymbol}u}}_1)
\end{bmatrix}\,,\quad\text{with } {\tilde{{\boldsymbol}u}}\coloneqq \begin{bmatrix}
{\tilde{{\boldsymbol}u}}_1\\
{\tilde{{\boldsymbol}u}}_2
\end{bmatrix} := Q_\ell{{\boldsymbol}u}\,,$$ where ${{\boldsymbol}T}_\ell=[{{\boldsymbol}T}_\ell^1, {{\boldsymbol}T}_\ell^2]$ is an arbitrary splitting, ${\boldsymbol}\exp(\cdot)$ denotes element-wise exponential operation, $Q_\ell$ are orthogonal matrices, i.e. $Q_\ell^\top Q_\ell=I$, and ${{\boldsymbol}s}_\ell, {{\boldsymbol}t}_\ell$ are arbitrarily complex neural networks. In practice, we will take $Q_\ell$ to be series of Householder reflections and ${{\boldsymbol}s}_\ell$, ${{\boldsymbol}t}_\ell$ as sequences of fully-connected layers with leaky ReLU activations. We denote by ${\boldsymbol}\theta\in{\mathbb{R}}^n$ all the parameters within ${{\boldsymbol}s}_\ell$ and ${{\boldsymbol}t}_\ell$ that we want to learn across all layers, which will in turn parametrize the map ${{\boldsymbol}T}({{\boldsymbol}u}) = {{\boldsymbol}T}({{\boldsymbol}u}; {\boldsymbol}\theta)$.
We emphasize that each layer ${{\boldsymbol}T}_\ell$ is a composition of an orthogonal transformation and a triangular map, where the latter is better known in the field of transport maps as *Knothe-Rosenblatt rearrangement* [@marzouk2016sampling]. This factorization can be seen as a non-linear generalization of the classic QR decomposition [@stoer2013introduction]. Whereas the triangular part provides the ability to represent non-linear transformations, the orthogonal part reshuffles the entries to foster dependence of the final output on every part of the input, thereby drastically increasing the representation power of the map ${{\boldsymbol}T}$.
Figure \[fig:scheme\_prior\_to\_evidence\_and\_latent\] displays the architecture introduced in corresponding to the map ${{\boldsymbol}T}_\ell$ and Figure \[fig:jacobian\_prior\_to\_evidence\_and\_latent\] the layout of the Jacobian matrix $\nabla_{{\tilde{{\boldsymbol}u}}} {{\boldsymbol}T}_\ell$. We have the following Lemma.
\[lemma:detJac\] Let us call ${{\boldsymbol}u}^0 \coloneqq {{\boldsymbol}u}\in{\mathcal{X}}$, ${{\boldsymbol}u}^\ell\coloneqq{{\boldsymbol}T}_\ell({{\boldsymbol}u}^{\ell-1})$ and ${\tilde{{\boldsymbol}u}}^{\ell-1} \coloneqq Q_\ell{{\boldsymbol}u}^{\ell-1}$. Then $$\label{eq:log_det_nablaT}
\log|\det\nabla {{\boldsymbol}T}({{\boldsymbol}u})| = \sum_{\ell=1}^L\textnormal{sum}\big( {{\boldsymbol}s}_\ell({\tilde{{\boldsymbol}u}}_1^{\ell-1})\big)\,,$$ where $\textnormal{sum}(\cdot)$ denotes the sum of the elements of a vector.
A proof is given in appendix \[sec:appendix\_proofs\]. Lemma \[lemma:detJac\] crucially shows that the determinant of the Jacobian of an INN can be calculated for free during the forward pass. The importance of this will become clear in Section \[sec:transport\].
We remark that the expressions in are trivially invertible: $$\label{eq:back_arch}
{{\boldsymbol}S}_\ell({{\boldsymbol}v}) \coloneqq \begin{bmatrix}
{{\boldsymbol}S}_\ell^1({{\boldsymbol}v}_1) \\
{{\boldsymbol}S}_\ell^2({{\boldsymbol}v}_1,{{\boldsymbol}v}_2)
\end{bmatrix}
\coloneqq Q_\ell^\top\begin{bmatrix}
{{\boldsymbol}v}_1\\
({{\boldsymbol}v}_2-{{\boldsymbol}t}_\ell({{\boldsymbol}v}_1)) \odot {\boldsymbol}\exp(-{{\boldsymbol}s}_\ell({{\boldsymbol}v}_1))
\end{bmatrix}\,,\quad\text{with } {{\boldsymbol}v}\coloneqq \begin{bmatrix}
{{\boldsymbol}v}_1\\
{{\boldsymbol}v}_2
\end{bmatrix}\in{\mathcal{X}}\,,$$ where ${{\boldsymbol}S}_\ell=[{{\boldsymbol}S}_\ell^1,{{\boldsymbol}S}_\ell^2]$ corresponds to the splitting of ${{\boldsymbol}T}_\ell$. Note that ${{\boldsymbol}s}_\ell$ and ${{\boldsymbol}t}_\ell$ do not need to be invertible.
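As a concrete illustration, the following minimal NumPy sketch implements one coupling layer of the form above, together with its inverse and the per-layer contribution to $\log|\det\nabla {{\boldsymbol}T}|$ from Lemma \[lemma:detJac\]. The networks ${{\boldsymbol}s}_\ell$, ${{\boldsymbol}t}_\ell$ are replaced by toy tanh maps and $Q_\ell$ by a random orthogonal matrix; these are illustrative stand-ins, not the configurations used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, k = 6, 3                        # input dimension and split point

# stand-ins for the networks s_l, t_l and the orthogonal matrix Q_l
W_s, W_t = 0.1 * rng.standard_normal((2, dim - k, k))
s = lambda u1: np.tanh(W_s @ u1)
t = lambda u1: np.tanh(W_t @ u1)
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))   # Q^T Q = I

def forward(u):
    """One layer T_l: rotate, then apply the affine coupling."""
    ut = Q @ u
    u1, u2 = ut[:k], ut[k:]
    v = np.concatenate([u1, u2 * np.exp(s(u1)) + t(u1)])
    log_det = np.sum(s(u1))          # this layer's contribution to log|det grad T|
    return v, log_det

def inverse(v):
    """Inverse layer S_l = T_l^{-1}; note that s and t need not be invertible."""
    v1, v2 = v[:k], v[k:]
    ut = np.concatenate([v1, (v2 - t(v1)) * np.exp(-s(v1))])
    return Q.T @ ut

u = rng.standard_normal(dim)
v, log_det = forward(u)
assert np.allclose(inverse(v), u)    # the layer is exactly invertible
```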
On the extension to non-linear $Q_\ell$ {#sec:nonlinQ}
---------------------------------------
As previously highlighted, every layer ${{\boldsymbol}T}_\ell({{\boldsymbol}u})$ in could be seen as a QR-type decomposition with constant $Q_\ell$. However, one could argue that in order to increase the representation power of the decomposition, also $Q_\ell$ should be taken to be a non-linear function of ${{\boldsymbol}u}$. In this section, we generalize ${{\boldsymbol}T}_\ell$ to ${{\boldsymbol}T}_\ell({{\boldsymbol}u}) \coloneqq {{\boldsymbol}R}_\ell({{\boldsymbol}Q}_\ell({{\boldsymbol}u}))$, where ${{\boldsymbol}R}_\ell$ is the triangular part of the map in , whereas the function ${{\boldsymbol}Q}_\ell$ should generalize the role of $Q_\ell$. If ${{\boldsymbol}Q}_\ell({{\boldsymbol}u}) = Q_\ell{{\boldsymbol}u}$, we recover the architecture in $\eqref{eq:forw_arch}$. In addition, we have $\nabla{{\boldsymbol}T}_\ell = \nabla{{\boldsymbol}R}_\ell\nabla{{\boldsymbol}Q}_\ell$. Then, we would like to choose ${{\boldsymbol}Q}_\ell:{\mathcal{X}}\to{\mathcal{X}}$ to be a smooth and easily invertible transformation such that $\nabla{{\boldsymbol}Q}_\ell$ is an orthogonal matrix, so that $\log\det|\nabla{{\boldsymbol}T}_\ell|$ can still be computed at no extra cost. Such transformations belong to the more general class of *conformal maps* [@ahlfors2010conformal]. In fact, any conformal map ${{\boldsymbol}Q}_\ell$ in ${\mathcal{X}}\equiv {\mathbb{R}}^{\text{dim}}$ is characterized by $\nabla {{\boldsymbol}Q}_\ell \nabla {{\boldsymbol}Q}_\ell^\top = |\det\nabla{{\boldsymbol}Q}_\ell|^{2/\text{dim}}I$, where $\nabla {{\boldsymbol}Q}_\ell$ is assumed to exist almost everywhere. Unfortunately, Liouville’s theorem [@revsetnjak1967liouville] shows that, for any $\text{dim}>2$, the only conformal maps are Möbius transformations of the form $${{\boldsymbol}Q}_\ell({{\boldsymbol}u}) \coloneqq {{\boldsymbol}b}+ \alpha \frac{Q_\ell({{\boldsymbol}u}- {{\boldsymbol}a})}{\|{{\boldsymbol}u}-{{\boldsymbol}a}\|^\gamma}, \quad \quad \text{for } \gamma\in\{0,2\}\,,$$ where ${{\boldsymbol}b},{{\boldsymbol}a}\in{\mathbb{R}}^{\text{dim}}$, $\alpha\in{\mathbb{R}}\setminus\{0\}$, and $Q_\ell$ is an orthogonal matrix. Note that, for both $\gamma = 0$ and $2$, $\det\nabla{{\boldsymbol}Q}_\ell = \alpha^{\text{dim}}$, and Lemma \[lemma:detJac\] can be easily generalized.
A transport perspective for general Bayesian inference {#sec:transport}
=======================================================
In [@ardizzone2018analyzing], INNs were used to sample from a posterior probability distribution. In this section, we will generalize the results in [@ardizzone2018analyzing] via the mathematical framework of transport maps. We will suggest different procedures and architectures which involve INNs and we will propose a new algorithm to perform sampling from a posterior distribution.
A general Bayesian framework
----------------------------
Let us denote by ${{\boldsymbol}x}\in{\mathbb{R}}^d$ some hidden parameters of interest and by ${{\boldsymbol}y}\in{\mathbb{R}}^m$ some observations. We respectively denote by $p_x({{\boldsymbol}x})$, $p_{y|x}({{\boldsymbol}y}|{{\boldsymbol}x})$, $p_{y,x}({{\boldsymbol}y},{{\boldsymbol}x})$ and $p_y({{\boldsymbol}y})$ the prior density, the likelihood function, the joint density and the evidence. By Bayes’ theorem, the posterior density can be expressed as $p_{x|y}({{\boldsymbol}x}|{{\boldsymbol}y}) = p_x({{\boldsymbol}x})\,p_{y|x}({{\boldsymbol}y}|{{\boldsymbol}x})\,/\,p_{y}({{\boldsymbol}y})$.
All the results we will present hold for any possible likelihood function $p_{y|x}$, in contrast with the results in [@ardizzone2018analyzing], which hold exclusively for the Dirac delta likelihood $p_{y|x}({{\boldsymbol}y}|{{\boldsymbol}x}) = \delta_{{{\boldsymbol}F}({{\boldsymbol}x})}({{\boldsymbol}y})$, where ${{\boldsymbol}F}:{\mathbb{R}}^d\to{\mathbb{R}}^m$ is some non-linear operator. However, for the sake of clarity, throughout the paper we will focus on additive Gaussian noise relations, i.e. ${{\boldsymbol}y}\coloneqq {{\boldsymbol}F}({{\boldsymbol}x}) + \sigma_y{\boldsymbol}\xi$, for some standard Gaussian noise ${{\boldsymbol}\xi}\sim {\mathcal{N}}({\boldsymbol}0, I)$ and standard deviation $\sigma_y$. It immediately follows that $p_{y|x}(\cdot|{{\boldsymbol}x}) = {\mathcal{N}}({{\boldsymbol}F}({{\boldsymbol}x}),\sigma_y^2 I)$.
Furthermore, we introduce a latent variable ${{\boldsymbol}z}$, whose dimension will be specified below. Although any tractable density can be chosen for ${{\boldsymbol}z}$, in practice we will use a standard Gaussian $p_z = {\mathcal{N}}({\boldsymbol}0, I)$.
A transport perspective
-----------------------
Suppose that we want to approximate a target density ${p_{\textnormal{target}}}$. A possible approach is to seek an invertible transport map ${{\boldsymbol}T}:{\mathcal{X}}\to{\mathcal{X}}$ such that the pushforward density ${{\boldsymbol}T}_\#{p_{\textnormal{ref}}}$ of some reference density ${p_{\textnormal{ref}}}$ approximates, in some sense, the target ${p_{\textnormal{target}}}$.[^1] We restrict attention to parametric families of transport maps ${{\boldsymbol}T}(\cdot) = {{\boldsymbol}T}(\cdot; {\boldsymbol}\theta)$, for ${\boldsymbol}\theta\in{\mathbb{R}}^n$. We define the function $$\label{eq:J_map}
J:{\mathbb{R}}^n\to [0,\infty):\ {\boldsymbol}\theta\mapsto {{\mathcal{D}}_{\textnormal{KL}}}({{\boldsymbol}T}_\# {p_{\textnormal{ref}}}\,||\,{p_{\textnormal{target}}})\,,$$ where ${{\mathcal{D}}_{\textnormal{KL}}}(\cdot\,||\cdot)$ is the Kullback-Leibler divergence between two probability densities. One would like to minimize the function $J$ over ${\boldsymbol}\theta$; if the family of maps ${{\boldsymbol}T}(\cdot,{\boldsymbol}\theta)$ is rich enough, the KL divergence will approach zero and ${{\boldsymbol}T}_\# {p_{\textnormal{ref}}}\approx {p_{\textnormal{target}}}$.
In general, the family of maps ${{\boldsymbol}T}(\cdot,{\boldsymbol}\theta)$ can contain any invertible function approximator that can be trained over ${\boldsymbol}\theta$ to push a tractable reference ${p_{\textnormal{ref}}}$ to the target ${p_{\textnormal{target}}}$. Different choices for such a family have been proposed in the literature, e.g. polynomial regressors [@marzouk2016sampling], Tensor-Train representations [@dolgov2018approximation] and variational approximations [@liu2016stein; @detommaso2018stein]. The accuracy of ${{\boldsymbol}T}_\#{p_{\textnormal{ref}}}$ will depend on how well ${{\boldsymbol}T}$ represents a map between the two densities. In this paper, we propose INNs as a suitable choice for ${{\boldsymbol}T}$ that, at the same time, retains the universal approximation properties of deep neural networks and makes the minimization tractable because of the triangular architecture within the layers ${{\boldsymbol}T}_\ell$. In fact, the main computational bottleneck in the minimization of the expression in is that, as we will see, it involves the evaluation of $\log|\det\nabla {{\boldsymbol}T}|$ at every input of the training set and at every training step of the algorithm, which quickly becomes infeasible in practice. Lemma \[lemma:detJac\] shows that, for INNs, this expression can be calculated essentially at no extra cost during the forward pass.
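To make the minimization concrete, the following toy sketch (not the actual training code of this paper) estimates the objective by Monte Carlo for a one-dimensional affine map: up to a ${\boldsymbol}\theta$-independent constant, $J({\boldsymbol}\theta)$ reduces to the sample average of $-\log {p_{\textnormal{target}}}({{\boldsymbol}T}({{\boldsymbol}u})) - \log|\det\nabla {{\boldsymbol}T}({{\boldsymbol}u})|$ over reference samples. The reference, target and parameter grid below are arbitrary choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy 1D setting: reference N(0,1), target N(2, 0.5^2), affine map T(u) = a*u + b
def log_p_target(v):
    return -0.5 * ((v - 2.0) / 0.5) ** 2 - np.log(0.5 * np.sqrt(2 * np.pi))

def mc_loss(a, b, u):
    """Monte Carlo estimate of E_{p_ref}[ -log p_target(T(u)) - log|det grad T(u)| ]."""
    v = a * u + b
    return np.mean(-log_p_target(v) - np.log(np.abs(a)))

u = rng.standard_normal(20000)                 # samples from p_ref
grid = [(a, b) for a in (0.25, 0.5, 1.0) for b in (0.0, 1.0, 2.0)]
best = min(grid, key=lambda ab: mc_loss(*ab, u))
print(best)                                    # (0.5, 2.0): T_# p_ref matches p_target
```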
Different algorithms can be constructed depending on the choices of ${p_{\textnormal{ref}}}$ and ${p_{\textnormal{target}}}$. Although the ultimate goal is to sample from the posterior $p_{x|y}$, for some specific combination of densities ${p_{\textnormal{target}}}$ and ${p_{\textnormal{ref}}}$ this can be done in two phases: first, train a transport map between the two densities; then, recover posterior samples via specific conditional procedures. We will study three cases, which are represented in Figure .
In both cases and , the family of maps ${{\boldsymbol}T}(\cdot,{\boldsymbol}\theta)$ is trained over ${\boldsymbol}\theta$ independently of the actual observation ${{\boldsymbol}y}$. After training is performed, samples from $p_{x|y}(\cdot|{{\boldsymbol}y})$ can be recovered for any given ${{\boldsymbol}y}$ by a conditional procedure. When ${{\boldsymbol}T}$ is taken to be an INN, case corresponds to an extension of [@ardizzone2018analyzing] to general likelihood functions $p_{y|x}$. Case is related to [@marzouk2016sampling], where families of polynomial maps ${{\boldsymbol}T}$ are employed, and will be studied further in section \[sec:HINT\]. On the other hand, case attempts to sample directly from the posterior $p_{x|y}(\cdot|{{\boldsymbol}y})$, and therefore training requires the actual observation ${{\boldsymbol}y}$. For a different observation, training must be repeated. Stein variational inference [@liu2016stein; @detommaso2018stein] closely relates to this case. In the following theorem, for each case we derive an explicit loss upon which a family of maps ${{\boldsymbol}T}(\cdot, {\boldsymbol}\theta)$ can be trained. Algorithms to sample from $p_{x|y}$ are given in appendix \[sec:appendix\_algs\].
\[thm:all\_results\] Given a parametric family of invertible maps ${{\boldsymbol}T}(\cdot,{\boldsymbol}\theta)$, minimizing the function $J({\boldsymbol}\theta)$ in over ${\boldsymbol}\theta$ is equivalent to minimizing the following losses $L({\boldsymbol}\theta)$:
1. Let ${\mathcal{X}}\equiv{\mathbb{R}}^d$, ${{\boldsymbol}z}\in{\mathbb{R}}^{d-m}$ and assume $m < d$. Let ${p_{\textnormal{ref}}}= p_x$ and ${p_{\textnormal{target}}}= p_{y|x}p_{z}$. Then $$\label{eq:KLloss1}
L({\boldsymbol}\theta) \coloneqq {\mathbb{E}}_{{{\boldsymbol}x}\sim p_x}\Big[\frac{1}{2\sigma_y^2}\|{{\boldsymbol}T}^y({{\boldsymbol}x})-{{\boldsymbol}F}({{\boldsymbol}x})\|_2^2 + \frac{1}{2}\|{{\boldsymbol}T}^z({{\boldsymbol}x})\|_2^2-\log|\det\nabla {{\boldsymbol}T}({{\boldsymbol}x})|\Big]\,,$$ where ${{\boldsymbol}T}({{\boldsymbol}x}) = [{{\boldsymbol}T}^y({{\boldsymbol}x}),{{\boldsymbol}T}^z({{\boldsymbol}x})]$, with ${{\boldsymbol}T}^y({{\boldsymbol}x})\in{\mathbb{R}}^m$ and ${{\boldsymbol}T}^z({{\boldsymbol}x})\in{\mathbb{R}}^{d-m}$.
2. Let ${\mathcal{X}}\equiv{\mathbb{R}}^d$, ${{\boldsymbol}z}\in{\mathbb{R}}^d$ and an observed ${{\boldsymbol}y}$. Let ${p_{\textnormal{ref}}}= p_z$ and ${p_{\textnormal{target}}}= p_{x|y}(\cdot|{{\boldsymbol}y})$. Then $$\label{eq:KLloss2}
L({\boldsymbol}\theta) \coloneqq {\mathbb{E}}_{{{\boldsymbol}z}\sim p_z}\Big[\frac{1}{2\sigma_y^2}\|{{\boldsymbol}y}- {{\boldsymbol}F}({{\boldsymbol}T}({{\boldsymbol}z}))\|_2^2 - \log p_x({{\boldsymbol}T}({{\boldsymbol}z}))- \log|\det\nabla {{\boldsymbol}T}({{\boldsymbol}z})|\Big]\,.$$
3. Let ${\mathcal{X}}\equiv{\mathbb{R}}^{m+d}$, ${{\boldsymbol}z}\in{\mathbb{R}}^{m+d}$, ${p_{\textnormal{ref}}}= p_{y,x}$ and ${p_{\textnormal{target}}}= p_{z}$. Then $$\label{eq:KLloss3}
L({\boldsymbol}\theta) \coloneqq {\mathbb{E}}_{{{\boldsymbol}w}\sim p_{y,x}}\Big[\frac{1}{2}\|{{\boldsymbol}T}({{\boldsymbol}w})\|_2^2 - \log|\det\nabla {{\boldsymbol}T}({{\boldsymbol}w})|\Big]\,.$$
A proof of Theorem \[thm:all\_results\] is given in appendix \[sec:appendix\_proofs\]. Here, we comment on the different losses. A summarized qualitative comparison can be found in Table \[table:comparison\] of appendix \[sec:appendix\_algs\].
#### Loss comparison.
In practice, expectations within the losses of Theorem \[thm:all\_results\] should be approximated. A sensible choice is to exploit Monte Carlo samples, which can be directly obtained in each of the three cases and will play the role of a training set for the family of maps ${{\boldsymbol}T}(\cdot,{\boldsymbol}\theta)$. For both cases and , these samples, together with the potentially expensive evaluations of the model ${{\boldsymbol}F}$, can be obtained *a priori*, before the training phase. In addition, no gradient of ${{\boldsymbol}F}$ is required during training, which allows ${{\boldsymbol}F}$ to be treated completely as a black box. Vice versa, the loss in case involves the term ${{\boldsymbol}F}({{\boldsymbol}T}({{\boldsymbol}z}))$, which does not allow offline evaluations of ${{\boldsymbol}F}$ and requires its gradient during training.
The minimization of the first term in the loss of case can be challenging. First, ${{\boldsymbol}T}^y$ has to be trained to be a surrogate of ${{\boldsymbol}F}$, which can be difficult if ${{\boldsymbol}F}$ is particularly complicated or exhibits an unstable (e.g. chaotic) behaviour. In addition, if either $\sigma_y^2$ is very small or $m$ is very large, the loss can become very steep and difficult to optimize. Similar issues are present in case . In contrast, the loss in case stands out for its simplicity. The first term in the loss pushes all samples towards the mode of the standard Gaussian density $p_z$, whereas the second term is a repulsive force that keeps them apart. The information about the model and the probabilistic relation between hidden space and observation space is completely contained within the training set sampled from $p_{y,x}$.
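For concreteness, a possible PyTorch sketch of the Monte Carlo estimate of the loss in case 3 is given below. It assumes an INN-like model whose forward pass returns both ${{\boldsymbol}T}({{\boldsymbol}w})$ and the accumulated $\log|\det\nabla {{\boldsymbol}T}({{\boldsymbol}w})|$; the function names and hyperparameters are illustrative, not taken from our implementation.

```python
import torch

def case3_loss(model, w_batch):
    """Monte Carlo estimate of E[ 0.5*||T(w)||^2 - log|det grad T(w)| ]
    over a batch of joint samples w = [y, x] ~ p_{y,x}."""
    z, log_det = model(w_batch)          # assumed interface: returns T(w) and its log-det
    return (0.5 * z.pow(2).sum(dim=1) - log_det).mean()

def train(model, joint_samples, epochs=50, lr=1e-3, batch_size=256):
    # joint_samples can be generated fully offline, since F never appears in the loss
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        perm = torch.randperm(joint_samples.shape[0])
        for idx in perm.split(batch_size):
            opt.zero_grad()
            loss = case3_loss(model, joint_samples[idx])
            loss.backward()
            opt.step()
    return model
```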
Furthermore, we observe that, unlike in case , cases and never require evaluations of the prior $p_x$, but only samples from it. This makes them very suitable for sequential Bayesian inference, as we will study further in section \[sec:seqBI\].
In defence of case , if we want to sample from a specific posterior $p_{x|y}(\cdot|{{\boldsymbol}y})$ and if none of the issues raised above constitutes a major difficulty, this case may prove faster than the other two for particular applications, because the family of maps ${{\boldsymbol}T}(\cdot,{\boldsymbol}\theta)$ is trained to map samples directly to the posterior. In addition, unlike and , case does not require an explicit expression for ${{\boldsymbol}T}^{-1}$.
Consistency and Central Limit Theorem of the transport map estimator {#sec:consistency_CLT}
--------------------------------------------------------------------
Here, we analyze asymptotic properties of the statistical error due to the Monte Carlo approximation of the losses $L({\boldsymbol}\theta)$ in Theorem \[thm:all\_results\].
#### Assumptions.
Let us take ${\mathcal{X}}\equiv {\mathbb{R}}^{s}$, and assume ${\boldsymbol}\theta\in\Theta$, where $\Theta$ is a compact set. Although this is a technical limitation, it is also fully reasonable in practice, as trainable parameters are often clamped within a defined range to avoid instability. Let us introduce ${\mathcal{L}}({{\boldsymbol}u};{\boldsymbol}\theta)$ to be the integrand in the loss $L({\boldsymbol}\theta)\coloneqq {\mathbb{E}}_{{{\boldsymbol}u}\sim p}[{\mathcal{L}}({{\boldsymbol}u};{\boldsymbol}\theta)]$, for some probability density $p$. We denote by $\hat{L}({\boldsymbol}\theta) = \frac{1}{N}\sum_{k=1}^N{\mathcal{L}}({{\boldsymbol}u}^{(k)};{\boldsymbol}\theta)$ the Monte Carlo estimator of $L({\boldsymbol}\theta)$, where $({{\boldsymbol}u}^{(k)})_{k=1}^N\sim p$ are independent random variables. Suppose there exist strict global minimizers ${\boldsymbol}\theta^* = \operatorname*{argmin}_{{\boldsymbol}\theta} L({\boldsymbol}\theta)$ and ${\hat{{\boldsymbol}\theta}}= \operatorname*{argmin}_{{\boldsymbol}\theta} \hat{L}({\boldsymbol}\theta)$ in the interior of $\Theta$. We remark that both $\hat{L}({\boldsymbol}\theta)$ and ${\hat{{\boldsymbol}\theta}}$ are random variables depending on $({{\boldsymbol}u}^{(k)})_{k=1}^N$. Let us assume that the transport map ${{\boldsymbol}T}$ is continuous and differentiable with respect to ${\boldsymbol}\theta$. Furthermore, we impose the regularity conditions ${\mathbb{E}}_{{{\boldsymbol}u}\sim p}[\sup_{{\boldsymbol}\theta}{\mathcal{L}}({{\boldsymbol}u};{\boldsymbol}\theta)]<+\infty$ and ${\mathbb{E}}_{{{\boldsymbol}u}\sim p}[\sup_{{\boldsymbol}\theta}\|\nabla_{{\boldsymbol}\theta}^2{\mathcal{L}}({{\boldsymbol}u};{\boldsymbol}\theta)\|]<+\infty$, which ensure the applicability of the Uniform Law of Large Numbers. We will respectively denote convergence in probability and distribution for $N\to+\infty$ by ${\stackrel{p}{\longrightarrow}}$ and ${\stackrel{d}{\longrightarrow}}$.
\[thm:consistency\_and\_CLT\] With the assumptions above, for any input ${{\boldsymbol}u}\in{\mathbb{R}}^s$, the transport map estimator ${{\boldsymbol}T}({{\boldsymbol}u}; {\hat{{\boldsymbol}\theta}})$ is consistent, i.e. ${{\boldsymbol}T}({{\boldsymbol}u}; {\hat{{\boldsymbol}\theta}}){\stackrel{p}{\longrightarrow}}{{\boldsymbol}T}({{\boldsymbol}u}; {\boldsymbol}\theta^*)$. Furthermore, ${{\boldsymbol}T}({{\boldsymbol}u}; {\hat{{\boldsymbol}\theta}})$ satisfies the following Central Limit Theorem: $$\sqrt{N} \Big({{\boldsymbol}T}({{\boldsymbol}u}; {\hat{{\boldsymbol}\theta}}) - {{\boldsymbol}T}({{\boldsymbol}u}; {\boldsymbol}\theta^*)\Big) {\stackrel{d}{\longrightarrow}}{\mathcal{N}}({\boldsymbol}0, C({\boldsymbol}\theta^*))\,,$$ where $$C({\boldsymbol}\theta^*) \coloneqq \nabla_{{\boldsymbol}\theta}{{\boldsymbol}T}({{\boldsymbol}u};{\boldsymbol}\theta^*) \Big(\nabla_{{\boldsymbol}\theta}^2 L({\boldsymbol}\theta^*)\Big)^{-1}{\mathbb{E}}_{{{\boldsymbol}u}\sim p}[\nabla_{{\boldsymbol}\theta}{\mathcal{L}}({{\boldsymbol}u};{\boldsymbol}\theta^*) \nabla_{{\boldsymbol}\theta}{\mathcal{L}}({{\boldsymbol}u};{\boldsymbol}\theta^*)^\top ]\Big(\nabla_{{\boldsymbol}\theta}^2 L({\boldsymbol}\theta^*)\Big)^{-1} \nabla_{{\boldsymbol}\theta} {{\boldsymbol}T}({{\boldsymbol}u};{\boldsymbol}\theta^*)^\top\,.$$
A proof of Theorem \[thm:consistency\_and\_CLT\] is given in appendix \[sec:appendix\_proofs\]. Note that, for any differentiable scalar function $f$, we have $$\lim_{N\to+\infty}{\mathbb{P}}\left(\Big|f({{\boldsymbol}T}({{\boldsymbol}u};{\hat{{\boldsymbol}\theta}}))-f({{\boldsymbol}T}({{\boldsymbol}u};{\boldsymbol}\theta^*))\Big| \le c\frac{\sigma_f({\boldsymbol}\theta^*)}{\sqrt{N}}\right) = \frac{1}{\sqrt{2\pi}}\int_{-c}^{c}e^{-\frac{z^2}{2}}\,dz\,,$$ for any $c>0$, where $\sigma_f({\boldsymbol}\theta^*)$ denotes the asymptotic standard deviation of $\sqrt{N}f({{\boldsymbol}T}({{\boldsymbol}u};{\hat{{\boldsymbol}\theta}}))$. This highlights how Theorem \[thm:consistency\_and\_CLT\] provides a probabilistic error bound with rate of convergence ${\mathcal{O}}(N^{-1/2})$.
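For instance, taking $c = 1.96$ (the standard normal 97.5% quantile, a generic choice rather than anything specific to the method) yields the familiar asymptotic 95% band:
$$\lim_{N\to+\infty}{\mathbb{P}}\left(\Big|f({{\boldsymbol}T}({{\boldsymbol}u};{\hat{{\boldsymbol}\theta}}))-f({{\boldsymbol}T}({{\boldsymbol}u};{\boldsymbol}\theta^*))\Big| \le 1.96\,\frac{\sigma_f({\boldsymbol}\theta^*)}{\sqrt{N}}\right) = \frac{1}{\sqrt{2\pi}}\int_{-1.96}^{1.96}e^{-\frac{z^2}{2}}\,dz \approx 0.95\,.$$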
HINT: Hierarchical Invertible Neural Transport {#sec:HINT}
==============================================
In this section, we propose an algorithm to sample from a posterior density $p_{x|y}$ via case , which, as discussed above, is appealing for several reasons.
From joint to posterior via Knothe-Rosenblatt transport maps {#sec:joint2posterior}
------------------------------------------------------------
Consider an invertible transport map ${{\boldsymbol}T}:{\mathbb{R}}^{m+d}\to{\mathbb{R}}^{m+d}$ and suppose that it can be rewritten as a Knothe-Rosenblatt rearrangement, so that ${{\boldsymbol}T}({{\boldsymbol}w}) \coloneqq [{{\boldsymbol}T}^y({{\boldsymbol}y}), {{\boldsymbol}T}^x({{\boldsymbol}y},{{\boldsymbol}x})]$ for ${{\boldsymbol}w}\coloneqq [{{\boldsymbol}y},{{\boldsymbol}x}]\in{\mathbb{R}}^{m+d}$, ${{\boldsymbol}T}^y({{\boldsymbol}y})\in{\mathbb{R}}^m$ and ${{\boldsymbol}T}^x({{\boldsymbol}y},{{\boldsymbol}x})\in{\mathbb{R}}^d$. Let us denote by ${{\boldsymbol}S}({{\boldsymbol}z}) \coloneqq [{{\boldsymbol}S}^y({{\boldsymbol}z}_y),{{\boldsymbol}S}^x({{\boldsymbol}z}_y,{{\boldsymbol}z}_x)]$ its inverse, i.e. ${{\boldsymbol}S}= {{\boldsymbol}T}^{-1}$ and ${{\boldsymbol}S}^y = ({{\boldsymbol}T}^y)^{-1}$, for ${{\boldsymbol}z}\coloneqq [{{\boldsymbol}z}_y,{{\boldsymbol}z}_x]\in{\mathbb{R}}^{m+d}$.
Observe that we can split the latent density as $p_z = p_{z_y}p_{z_x|z_y}$, where $p_{z_y}$ and $p_{z_x|z_y}$ respectively correspond to the marginal density of ${{\boldsymbol}z}_y$ and the conditional density of ${{\boldsymbol}z}_x$ given ${{\boldsymbol}z}_y$. Because we chose $p_z$ to be a standard Gaussian, we further have $p_{z_x}=p_{z_x|z_y}$. Finally, assume that ${{\boldsymbol}S}_\# p_z = p_{y,x}$ exactly, or equivalently ${{\boldsymbol}T}_\# p_{y,x} = p_z$. Then, it was shown in [@marzouk2016sampling] that ${{\boldsymbol}S}^y_\# p_{z_y} = p_y$ and ${{\boldsymbol}S}^x_\# p_{z_x} = p_{x|y}$.
The result above suggests that, given a map ${{\boldsymbol}T}$ satisfying the conditions above, a posterior sample ${{\boldsymbol}x}\sim p_{x|y}(\cdot|{{\boldsymbol}y})$ can be obtained simply by computing ${{\boldsymbol}x}= {{\boldsymbol}S}^x([{{\boldsymbol}T}^y({{\boldsymbol}y}),{{\boldsymbol}z}_x])$, for ${{\boldsymbol}z}_x\sim{\mathcal{N}}({\boldsymbol}0, I)$. Intuitively, whereas the simple application of the map ${{\boldsymbol}S}$ to a sample ${{\boldsymbol}z}=[{{\boldsymbol}z}_y, {{\boldsymbol}z}_x]\sim{\mathcal{N}}({\boldsymbol}0, I)$ would provide a sample from the joint $p_{y,x}$, fixing ${{\boldsymbol}z}_y$ to be the preimage of ${{\boldsymbol}y}$ under ${{\boldsymbol}T}^y$ ensures that the resulting sample ${{\boldsymbol}x}$ comes from the posterior $p_{x|y}$. A simplified visualization of this procedure is displayed in Figure \[fig:densities\].
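In code, the conditional sampling step could look like the following sketch, assuming a trained map whose components ${{\boldsymbol}T}^y$ and ${{\boldsymbol}S}^x$ are available as callables; the function names are placeholders, not a fixed API.

```python
import numpy as np

def sample_posterior(T_y, S_x, y_obs, n_samples, d, rng=np.random.default_rng()):
    """Draw x ~ p(x | y_obs) by fixing z_y = T^y(y_obs) and sampling z_x ~ N(0, I)."""
    z_y = T_y(y_obs)                      # preimage of the observation in latent space
    samples = []
    for _ in range(n_samples):
        z_x = rng.standard_normal(d)      # latent block associated with the hidden variables
        samples.append(S_x(z_y, z_x))     # x = S^x([z_y, z_x])
    return np.asarray(samples)

# by contrast, drawing z_y ~ N(0, I) as well and applying the full inverse S
# would produce samples from the joint p_{y,x} rather than from the posterior
```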
Hierarchical architecture {#sec:Harch}
-------------------------
The goal is to introduce an architecture which endows ${{\boldsymbol}T}$ with a Knothe-Rosenblatt structure in order to apply the sampling procedure described in section \[sec:joint2posterior\]. In fact, such a structure is not satisfied by a general INN architecture in because of the orthogonal transformations $Q_\ell$, which are essential for a large representation power of the network. In order to overcome this issue, we proceed as follows: first, we develop a hierarchical generalization of the architecture in to encourage a rich input-to-output dependence within each layer ${{\boldsymbol}T}_\ell$; then, we enforce the last level of each hierarchy to be triangular, so that the overall map ${{\boldsymbol}T}$ satisfies the desired structure.
Intuitively, we want to recursively nest INNs within each other in order to perform multiple coordinate splittings and therefore densify the architecture structure. In order to characterize this nesting procedure, for each layer $\ell$ we define a binary tree ${\mathcal{H}}_\ell$ of splittings. Each entry $h\in{\mathcal{H}}_\ell$ refers to a splitting coordinate and to the sub-tree of its children. We denote the tree root by $\tilde{h}\in{\mathcal{H}}_\ell$. Given $h$, let us denote by $h^-$ and $h^+$ its direct children. Also, we denote by $H \coloneqq |{\mathcal{H}}_\ell|$ the cardinality of the tree, i.e. the number of splittings. Let us define the following *hierarchical architecture*: $$\label{eq:hierarch_arch}
{{\boldsymbol}T}_{\ell,h}({{\boldsymbol}u})
\coloneqq \begin{bmatrix}
{{\boldsymbol}T}_{\ell,h^-}({\tilde{{\boldsymbol}u}}_1)\\
{{\boldsymbol}T}_{\ell,h^+}({\tilde{{\boldsymbol}u}}_2) \odot {\boldsymbol}\exp({{\boldsymbol}s}_{\ell,h}({\tilde{{\boldsymbol}u}}_1)) + {{\boldsymbol}t}_{\ell,h}({\tilde{{\boldsymbol}u}}_1)
\end{bmatrix}\,,\quad\text{with } {\tilde{{\boldsymbol}u}}\coloneqq \begin{bmatrix}
{\tilde{{\boldsymbol}u}}_1\\
{\tilde{{\boldsymbol}u}}_2
\end{bmatrix} := Q_{\ell,h}{{\boldsymbol}u}\,,$$ with the initial condition ${{\boldsymbol}T}_{\ell,h}({{\boldsymbol}u})\coloneqq{{\boldsymbol}u}$ for $h = \emptyset$. $Q_{\ell,h}$ is an arbitrary orthogonal matrix, whereas ${{\boldsymbol}s}_{\ell,h}$ and ${{\boldsymbol}t}_{\ell,h}$ are arbitrarily complex neural networks. Note that, for $H=1$, corresponds to . Figure \[fig:scheme\_latent\_to\_joint\] displays the architecture of a hierarchical layer ${{\boldsymbol}T}_{\ell, h}$, whereas Figure \[fig:jacobian\_latent\_to\_joint\] shows the layout of the Jacobian $\nabla{{\boldsymbol}T}_{\ell,\tilde{h}}({{\boldsymbol}u})$ for a tree ${\mathcal{H}}_\ell$ with $H=3$, with respect to the transformed inputs $Q_{\ell,\tilde{h}^-}{\tilde{{\boldsymbol}u}}_1$ and $Q_{\ell,\tilde{h}^+}{\tilde{{\boldsymbol}u}}_2$. Analogously to hierarchical matrices, the architecture in densifies the dependence between input and output of ${{\boldsymbol}T}_{\ell,\tilde{h}}$ by recursively nesting INNs layers within themselves.
Let us define ${{\boldsymbol}T}_\ell \coloneqq {{\boldsymbol}T}_{\ell,\tilde{h}}$ and the overall map ${{\boldsymbol}T}\coloneqq {{\boldsymbol}T}_L\circ \cdots \circ {{\boldsymbol}T}_1$. It is easy to check that $\log|\det\nabla {{\boldsymbol}T}|$ can be recursively decomposed into a structure similar to that in Lemma \[lemma:detJac\] and calculated essentially for free during the forward pass.
Given the hierarchical construction of ${{\boldsymbol}T}_\ell$ defined above, we can enforce the overall map ${{\boldsymbol}T}$ to retain the desired Knothe-Rosenblatt structure by, for each $\ell=1,\dots,L$: (a) defining $\tilde{h}$ to split between the variables ${{\boldsymbol}y}$ and ${{\boldsymbol}x}$; (b) taking $Q_{\ell,\tilde{h}}=I$. It immediately follows that, for any input ${{\boldsymbol}w}= [{{\boldsymbol}y},{{\boldsymbol}x}]$, we can split ${{\boldsymbol}T}({{\boldsymbol}w}) \coloneqq [{{\boldsymbol}T}^y({{\boldsymbol}y}),{{\boldsymbol}T}^x({{\boldsymbol}y},{{\boldsymbol}x})]$, with ${{\boldsymbol}T}^y({{\boldsymbol}y})\in{\mathbb{R}}^m$ and ${{\boldsymbol}T}^x({{\boldsymbol}y},{{\boldsymbol}x})\in{\mathbb{R}}^d$. Hence, after training is performed with the loss in , we can apply the procedure described in section \[sec:joint2posterior\] to sample from a posterior density $p_{x|y}$. We refer to the overall algorithm as *Hierarchical Invertible Neural Transport* (HINT). An implementation is given in appendix \[sec:appendix\_algs\].
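Independently of the implementation in the appendix, the recursive nesting in can be prototyped in a few lines. The sketch below uses toy tanh networks with fresh random parameters at each call and a simple depth counter in place of the tree ${\mathcal{H}}_\ell$, so it only illustrates the structure; passing `use_Q=False` at the root mimics the choice $Q_{\ell,\tilde h}=I$ required for the Knothe-Rosenblatt structure.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_orthogonal(n):
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def toy_net(in_dim, out_dim):
    # stand-in for the arbitrarily complex networks s_{l,h}, t_{l,h}
    W = 0.1 * rng.standard_normal((out_dim, in_dim))
    return lambda x: np.tanh(W @ x)

def hierarchical_layer(u, depth, use_Q=True):
    """Recursive nesting of coupling blocks, mimicking T_{l,h}."""
    if depth == 0 or u.size < 2:
        return u                                    # leaf: identity
    ut = (random_orthogonal(u.size) @ u) if use_Q else u
    k = u.size // 2
    u1, u2 = ut[:k], ut[k:]
    s, t = toy_net(k, u.size - k), toy_net(k, u.size - k)
    out1 = hierarchical_layer(u1, depth - 1)        # T_{l,h^-}
    out2 = hierarchical_layer(u2, depth - 1) * np.exp(s(u1)) + t(u1)
    return np.concatenate([out1, out2])

u = rng.standard_normal(8)
print(hierarchical_layer(u, depth=3, use_Q=False).shape)   # (8,)
```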
HINT for sequential Bayesian inference {#sec:seqBI}
======================================
Sequential Bayesian inference typically describes a dynamical framework where data arrives in a streaming form. We denote by ${{\boldsymbol}y}_{1:t}$ a sequence of data points ${{\boldsymbol}y}_1,\dots,{{\boldsymbol}y}_t$ at times $1,\dots, t$. Analogously, we assume that there is an underlying sequence of corresponding hidden states ${{\boldsymbol}x}_{1:t}$. Model dependencies are defined through the graphical model in Figure \[fig:SBI\_dependence\].
(Graphical model \[fig:SBI\_dependence\]: hidden states form a Markov chain ${{\boldsymbol}x}_0 \to {{\boldsymbol}x}_1 \to {{\boldsymbol}x}_2 \to {{\boldsymbol}x}_3 \to \cdots$, and each ${{\boldsymbol}x}_t$ emits an observation ${{\boldsymbol}y}_t$.)
By Bayes’ theorem and the assumed dependence structure, we have $$\begin{aligned}
{p_{x_t|y_{1:t}}}({{\boldsymbol}x}_t|{{\boldsymbol}y}_{1:t}) &\propto {p_{y_t|x_t}}({{\boldsymbol}y}_t|{{\boldsymbol}x}_t){p_{x_t|y_{1:t-1}}}({{\boldsymbol}x}_t|{{\boldsymbol}y}_{1:t-1})\,,\label{eq:assimilation_step} \\
{p_{x_t|y_{1:t-1}}}({{\boldsymbol}x}_t|{{\boldsymbol}y}_{1:t-1}) &= {\mathbb{E}}_{{{\boldsymbol}x}_{t-1}\sim {p_{x_{t-1}|y_{1:t-1}}}}[{p_{x_t|x_{t-1}}}({{\boldsymbol}x}_t|{{\boldsymbol}x}_{t-1})]\,. \label{eq:prediction_step}\end{aligned}$$ Equation is usually known as the prediction step, while equation is called the assimilation (or updating) step. Here, ${p_{x_t|y_{1:t}}}$ is the desired posterior density, ${p_{y_t|x_t}}$ is the likelihood function, whereas ${p_{x_t|y_{1:t-1}}}$ plays the role of prior density given the previous observations ${{\boldsymbol}y}_{1:t-1}$. The analytic expression of the prior is not given explicitly, but rather through an expectation over the previous posterior ${p_{x_{t-1}|y_{1:t-1}}}$. If we have samples from the previous posterior, we can estimate this expectation via Monte Carlo; however, the estimate can easily be very inaccurate if the transition density ${p_{x_t|x_{t-1}}}$ is complicated or very concentrated. In the limit case where the relation between ${{\boldsymbol}x}_{t-1}$ and ${{\boldsymbol}x}_t$ is deterministic, i.e. ${p_{x_t|x_{t-1}}}$ is a Dirac delta function, a Monte Carlo estimate is not even possible. In addition to this problem, in many applications of interest, every evaluation of the transition density requires the solution of a very expensive model, and the evaluation of the prior density through a Monte Carlo approximation would become too expensive for online prediction.
For those reasons, we would like to have an algorithm that is able to sequentially generate samples from the posterior ${p_{x_t|y_{1:t}}}$ but does not need to evaluate the prior density ${p_{x_t|y_{1:t-1}}}$. Theorem \[thm:HINTseq\] shows how to generalize the results for HINT (case ) in Theorem \[thm:all\_results\] to sequential Bayesian inference and how only samples from ${p_{x_t|y_{1:t-1}}}$ are required, but never an analytical evaluation of it. An analogous result can be derived for case . As before, although the results can be achieved for general probability densities, we focus on additive Gaussian noise relations ${{\boldsymbol}x}_t\coloneqq{{\boldsymbol}M}({{\boldsymbol}x}_{t-1}) + \sigma_x{\boldsymbol}\eta$ and ${{\boldsymbol}y}\coloneqq {{\boldsymbol}F}({{\boldsymbol}x}_t) + \sigma_y{\boldsymbol}\xi$, for some non-linear operators ${{\boldsymbol}M}:{\mathbb{R}}^d\to{\mathbb{R}}^d$, ${{\boldsymbol}F}:{\mathbb{R}}^d\to{\mathbb{R}}^m$, some standard Gaussian noises ${\boldsymbol}\eta$, ${\boldsymbol}\xi$ and standard deviations $\sigma_x$ and $\sigma_y$. This immediately implies ${p_{x_t|x_{t-1}}}(\cdot|{{\boldsymbol}x}_{t-1}) = {\mathcal{N}}({{\boldsymbol}M}({{\boldsymbol}x}_{t-1}),\sigma_x^2I)$ and ${p_{y_t|x_t}}(\cdot|{{\boldsymbol}x}_t) = {\mathcal{N}}({{\boldsymbol}F}({{\boldsymbol}x}_t),\sigma_y^2I)$.
\[thm:HINTseq\] Let ${{\boldsymbol}T}(\cdot;{\boldsymbol}\theta)$ be a parametric family of invertible transport maps from ${\mathbb{R}}^{m+d}$ to ${\mathbb{R}}^{m+d}$. Suppose we observed ${{\boldsymbol}y}_{1:t-1}$ and denote ${p_{y_t,x_t|y_{1:t-1}}}= {p_{y_t,x_t|y_{1:t-1}}}(\cdot|{{\boldsymbol}y}_{1:t-1})$. Let us define $$\label{eq:seqJ_map_case3}
J_t:{\mathbb{R}}^n\to [0,\infty):\ {\boldsymbol}\theta\mapsto {{\mathcal{D}}_{\textnormal{KL}}}({{\boldsymbol}T}_\# {p_{y_t,x_t|y_{1:t-1}}}\,||\,p_z)\,.$$ Then, minimizing $J_t({\boldsymbol}\theta)$ in over ${\boldsymbol}\theta$ is equivalent to minimizing the following loss: $$\label{eq:HINTseqloss}
L_t({\boldsymbol}\theta) \coloneqq {\mathbb{E}}_{{{\boldsymbol}w}_t\sim {p_{y_t,x_t|y_{1:t-1}}}}\Big[\frac{1}{2}\|{{\boldsymbol}T}({{\boldsymbol}w}_t)\|_2^2 - \log|\det\nabla {{\boldsymbol}T}({{\boldsymbol}w}_t)|\Big]\,.$$
A proof follows directly from the proof of Theorem \[thm:all\_results\] in appendix \[sec:appendix\_proofs\]. Importantly, the minimization of the loss in at time $t$ should be initialized at the optimal value of ${\boldsymbol}\theta$ at time $t-1$. In fact, if the geometry of the posterior does not change much, the new optimal value is going to be close to the previous one, and the training will be very short.
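A high-level sketch of the resulting sequential procedure is given below, with `train_hint`, `sample_posterior` and the operators `M`, `F` left as placeholders (they are not part of a fixed API); warm-starting is realized by passing the parameters learned at the previous time step.

```python
import numpy as np

def sequential_hint(x0_samples, y_stream, M, F, sigma_x, sigma_y,
                    train_hint, sample_posterior, rng=np.random.default_rng()):
    """Sketch of sequential Bayesian inference with HINT (placeholder components)."""
    post_samples, params = x0_samples, None
    for y_t in y_stream:
        # prediction: push previous posterior samples through the transition model
        x_prior = np.array([M(x) + sigma_x * rng.standard_normal(x.shape)
                            for x in post_samples])
        # simulate observations to obtain joint samples (y_t, x_t) given y_{1:t-1}
        y_sim = np.array([F(x) + sigma_y * rng.standard_normal(y_t.shape)
                          for x in x_prior])
        joint = np.concatenate([y_sim, x_prior], axis=1)
        # assimilation: retrain the map on the joint samples, warm-started at `params`
        params = train_hint(joint, init=params)
        post_samples = sample_posterior(params, y_t, n_samples=len(post_samples))
    return post_samples
```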
Numerical experiments {#sec:numerical}
=====================
In this section, we compare the performance of standard INN (case ) and HINT (case ) on two challenging numerical experiments in a Bayesian inference framework. In both cases, we use approximately $n=10^6$ parameters. We use HINT with a coarse hierarchical depth $H=3$ and show that this suffices to compare favorably to INN.
Competitive Lotka-Volterra and unobserved species prediction over time {#sec:CLV}
----------------------------------------------------------------------
The competitive Lotka-Volterra model is a generalization of the classic Lotka-Volterra model, describing the demographic interaction of $d$ species competing for common resources. It is given by $$\label{eq:complotkavolterra}
\frac{du_i}{dt} = r_iu_i\big(1 - \sum_{j=1}^d\alpha_{ij}u_j\big)\,$$ for $i=1,\dots,d$, where $u_i$ is the size of the $i$-th species at a given time, $r_i$ is its growth rate and $\alpha_{ij}$ describes the interaction with the other species. We take $d = 4$ and set parameters $r_i, \alpha_{ij} \sim {\mathcal{N}}(1,0.3^2)$. Observations are taken at times $t_j=j$, for $j = 1,\dots, 10$, and we set $t_0=0$. The solution of over $[t_{j-1},t_j]$ characterizes the transition model ${{\boldsymbol}M}$, and ${{\boldsymbol}x}(t_j) = {{\boldsymbol}M}({{\boldsymbol}x}(t_{j-1})) + \sigma_x{\boldsymbol}\eta$, where $\sigma_x = 10^{-2}$ and ${\boldsymbol}\eta\sim{\mathcal{N}}({\boldsymbol}0,I)$. We observe a noisy version of the first three species, ${{\boldsymbol}y}_{t_j} = {{\boldsymbol}F}({{\boldsymbol}x}^{\textnormal{true}}(t_j)) + \sigma_y{\boldsymbol}\xi_{t_j}$ with ${{\boldsymbol}F}({{\boldsymbol}x}(t)) \coloneqq {{\boldsymbol}x}_{1:3}(t)$ and $\sigma_y = 10^{-1}$, where ${{\boldsymbol}x}^{\textnormal{true}}(t)$ is a realization of the process ${{\boldsymbol}x}(t)$ with ${{\boldsymbol}x}^{\textnormal{true}}(0)\sim{\mathcal{N}}({\boldsymbol}1, 10^{-4}I)$. We set an initial prior ${{\boldsymbol}x}(0)\sim {\mathcal{N}}({\boldsymbol}1 ,10^{-2}I)$. The goal is to sequentially recover the posterior densities and to predict the unobserved component $x_4$.
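A possible implementation of the transition and observation setup for this experiment is sketched below; the integrator tolerances and random seed are illustrative choices, not those used to produce the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)
d = 4
r = rng.normal(1.0, 0.3, size=d)           # growth rates r_i ~ N(1, 0.3^2)
alpha = rng.normal(1.0, 0.3, size=(d, d))  # interactions alpha_ij ~ N(1, 0.3^2)

def clv_rhs(t, u):
    # du_i/dt = r_i u_i (1 - sum_j alpha_ij u_j)
    return r * u * (1.0 - alpha @ u)

def transition_M(x, dt=1.0):
    """Integrate the ODE over one inter-observation interval [0, dt]."""
    sol = solve_ivp(clv_rhs, (0.0, dt), x, rtol=1e-8, atol=1e-8)
    return sol.y[:, -1]

x = np.ones(d) + 0.01 * rng.standard_normal(d)
x_next = transition_M(x) + 1e-2 * rng.standard_normal(d)   # sigma_x = 10^{-2}
y_obs = x_next[:3] + 1e-1 * rng.standard_normal(3)         # observe first three species
```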
At $t_1=1$, we estimate the mean-squared-error (MSE) of the trace of the posterior covariance matrix while varying both the number of training samples and the number of training steps. Figure \[fig:CLV\] (top row) shows that both standard INN and HINT converge to very small values of the MSE. INN appears to struggle considerably for small training set sizes, but seems to perform slightly better after many training epochs.
![MSE comparisons and prediction of $x_4$ for INN and HINT []{data-label="fig:CLV"}](figures/MSE_by_N_CLV.pdf "fig:")![MSE comparisons and prediction of $x_4$ for INN and HINT []{data-label="fig:CLV"}](figures/MSE_by_epoch_CLV.pdf "fig:")\
![MSE comparisons and prediction of $x_4$ for INN and HINT []{data-label="fig:CLV"}](figures/CLV_case_1.pdf "fig:")![MSE comparisons and prediction of $x_4$ for INN and HINT []{data-label="fig:CLV"}](figures/CLV_case_3.pdf "fig:")
Figure \[fig:CLV\] (bottom row) describes the sequential prediction of $x_4$, which shows a substantially better performance of HINT over INN. Both methods were trained with 64000 samples for 50 epochs.
Lorenz96 transition and log-Rosenbrock observation models {#sec:lorenz96}
---------------------------------------------------------
Lorenz96 is a chaotic dynamical system characterized by $$\label{eq:lorenz96}
\frac{d u_i}{dt} = (u_{i+1} - u_{i-2})u_{i-1} - u_i + \alpha\,,$$ for $i=1,\dots,d$, where it is assumed that $u_{-1}=u_{d-1}$, $u_0 = u_d$ and $u_{d+1} = u_1$. We take $d = 40$ and set $\alpha = 8$ (chaotic regime). We start at $t_0=0$ and take an observation at time $t_1=\tfrac{1}{10}$. The solution of over $[0,t_1]$ characterizes the transition model ${{\boldsymbol}M}$, and ${{\boldsymbol}x}(t_1) = {{\boldsymbol}M}({{\boldsymbol}x}(0)) + \sigma_x{\boldsymbol}\eta$, where $\sigma_x = 10^{-1}$ and ${\boldsymbol}\eta\sim{\mathcal{N}}({\boldsymbol}0,I)$. To make the problem particularly challenging, we take ${{\boldsymbol}F}:{\mathbb{R}}^{d}\to{\mathbb{R}}^{39}$ to be a log-Rosenbrock function in each component, i.e. $F_i({{\boldsymbol}x}) = \log\big(100(x_{i+1} - x_i^2)^2 + (1-x_i)^2\big)$ for $i=1,\dots,39$, and observe ${{\boldsymbol}y}_{t_1} = {{\boldsymbol}F}({{\boldsymbol}x}^{\textnormal{true}}(t_1)) + \sigma_y{\boldsymbol}\xi_{t_1}$ with $\sigma_y=10^{-1}$, where ${{\boldsymbol}x}^{\textnormal{true}}(t)$ is a realization of the process ${{\boldsymbol}x}(t)$ with ${{\boldsymbol}x}^{\textnormal{true}}(0)\sim{\mathcal{N}}({\boldsymbol}1, 10^{-4}I)$. We set an initial prior ${{\boldsymbol}x}(0)\sim {\mathcal{N}}({\boldsymbol}1 ,I)$. The goal is to sample from the posterior at time $t_1$. For this hard experiment, INN failed to produce meaningful results. We remark that, with a better annealing rate for $\sigma_y$ and other network configurations, INN may still be able to converge, but this can be tedious and difficult. In contrast, HINT does not suffer from concentration issues, as highlighted in section \[sec:transport\]. Hence, it is much more robust and, importantly, requires much less parameter tweaking. Figure \[fig:lorenz\] shows its convergence at time $t_1$.
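The Lorenz96 transition model and the log-Rosenbrock observation operator of this experiment can be sketched as follows (again, the integrator settings are illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

d, alpha = 40, 8.0   # chaotic regime

def lorenz96_rhs(t, u):
    # du_i/dt = (u_{i+1} - u_{i-2}) u_{i-1} - u_i + alpha, with periodic indices
    return (np.roll(u, -1) - np.roll(u, 2)) * np.roll(u, 1) - u + alpha

def transition_M(x, dt=0.1):
    sol = solve_ivp(lorenz96_rhs, (0.0, dt), x, rtol=1e-8, atol=1e-8)
    return sol.y[:, -1]

def observe_F(x):
    # log-Rosenbrock observation in each of the 39 components
    return np.log(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

rng = np.random.default_rng(4)
x1 = transition_M(np.ones(d)) + 1e-1 * rng.standard_normal(d)   # sigma_x = 10^{-1}
y1 = observe_F(x1) + 1e-1 * rng.standard_normal(d - 1)          # sigma_y = 10^{-1}
```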
[![MSE of the trace of the posterior covariance over training epochs[]{data-label="fig:lorenz"}](figures/MSE_by_epoch_Lorenz96.pdf "fig:") ]{}
Conclusion {#sec:conclusion}
==========
In this work, we introduced HINT as an algorithm that combines INNs and optimal transport to sample from a posterior distribution in a general and a sequential Bayesian framework. We discussed how the use of HINT over INN can be advantageous for several reasons, and we performed numerical comparisons in two challenging test cases. Further research directions may include the use of Quasi-Monte Carlo training samples for better space-filling and generalization properties [@dick2013high], and multilevel techniques to reduce the computational cost of generating the training set [@kuo2017multilevel].
Appendix: Proofs {#sec:appendix_proofs}
================
With the notation above, we have ${\tilde{{\boldsymbol}u}}^{\ell-1} = Q_\ell {{\boldsymbol}u}^{\ell-1}$. Then, by the chain rule we have $$\begin{aligned}
\log|\det\nabla_{{{\boldsymbol}u}}{{\boldsymbol}T}({{\boldsymbol}u})| &= \log\left|\det\left(\prod_{\ell=1}^L\nabla_{{{\boldsymbol}u}^{\ell-1}}{{\boldsymbol}T}_\ell({{\boldsymbol}u}^{\ell-1})\right)\right|\\
&= \log\left|\det\left(\prod_{\ell=1}^L \nabla_{{\tilde{{\boldsymbol}u}}^{\ell-1}}{{\boldsymbol}T}_\ell({{\boldsymbol}u}^{\ell-1}) Q_\ell\right)\right|\\
&= \log\left|\prod_{\ell=1}^L\left( \det\nabla_{{\tilde{{\boldsymbol}u}}^{\ell-1}}{{\boldsymbol}T}_\ell({{\boldsymbol}u}^{\ell-1}) \det Q_\ell\right)\right|\\
&= \log\left(\prod_{\ell=1}^L \textnormal{prod}\left({\boldsymbol}\exp({{\boldsymbol}s}_\ell({\tilde{{\boldsymbol}u}}_1^{\ell-1}))\right)\right)\\
&= \sum_{\ell=1}^L \textnormal{sum}({{\boldsymbol}s}_\ell({\tilde{{\boldsymbol}u}}_1^{\ell-1}))\,,\end{aligned}$$ where $\textnormal{prod}(\cdot)$ denotes the product of the elements of a vector, and we used that $|\det Q_\ell|=1$ because the $Q_\ell$ are orthogonal matrices.
Let us take a family of invertible maps ${{\boldsymbol}T}(\cdot) \coloneqq {{\boldsymbol}T}(\cdot;{\boldsymbol}\theta)$. It is easy to check that $${{\mathcal{D}}_{\textnormal{KL}}}({{\boldsymbol}T}_\# {p_{\textnormal{ref}}}\,\|\, {p_{\textnormal{target}}}) = {{\mathcal{D}}_{\textnormal{KL}}}( {p_{\textnormal{ref}}}\,\|\, ({{\boldsymbol}T}^{-1})_\#{p_{\textnormal{target}}}) = - {\mathbb{E}}_{{p_{\textnormal{ref}}}}\Big[\log ({p_{\textnormal{target}}}\circ {{\boldsymbol}T}) + \log |\det\nabla {{\boldsymbol}T}|\Big] + \textnormal{const}\,,$$ for any densities ${p_{\textnormal{ref}}}$ and ${p_{\textnormal{target}}}$, with $\text{const}={\mathbb{E}}_{{p_{\textnormal{ref}}}}[\log{p_{\textnormal{ref}}}]$. We study the following three cases separately.
- Let ${\mathcal{X}}\equiv{\mathbb{R}}^d$, assume $m < d$ and take ${{\boldsymbol}z}\in{\mathbb{R}}^{d-m}$. Let ${p_{\textnormal{ref}}}= p_x$ and ${p_{\textnormal{target}}}= p_{y|x}p_{z}$. We observe that the decomposition of the target density as $p_{y|x}p_{z}$ is enforcing independence of $z$ from $y$ and $x$, in fact ${p_{\textnormal{target}}}= p_{y,z|x} = p_{y|x}p_{z|y,x} = p_{y|x}p_{z}$. Then $${p_{\textnormal{target}}}({{\boldsymbol}T}({{\boldsymbol}x})) = p_{y|x}({{\boldsymbol}T}^y({{\boldsymbol}x})) p_{z}({{\boldsymbol}T}^z({{\boldsymbol}x}))\,,$$ where ${{\boldsymbol}T}({{\boldsymbol}x}) = [{{\boldsymbol}T}^y({{\boldsymbol}x}),{{\boldsymbol}T}^z({{\boldsymbol}x})]$, with ${{\boldsymbol}T}^y({{\boldsymbol}x})\in{\mathbb{R}}^m$ and ${{\boldsymbol}T}^z({{\boldsymbol}x})\in{\mathbb{R}}^{d-m}$. By definitions of ${p_{\textnormal{ref}}}$, ${p_{\textnormal{target}}}$, $p_{y|x}$ and $p_z$, we have $$\begin{aligned}
-{\mathbb{E}}_{{p_{\textnormal{ref}}}}[\log{p_{\textnormal{target}}}\circ {{\boldsymbol}T}] &= -{\mathbb{E}}_{{{\boldsymbol}x}\sim p_x}[\log p_{y|x}({{\boldsymbol}T}^y({{\boldsymbol}x})|{{\boldsymbol}x}) + \log p_z({{\boldsymbol}T}^z({{\boldsymbol}x}))]\\
&= {\mathbb{E}}_{{{\boldsymbol}x}\sim p_x}\Big[\frac{1}{2\sigma_y^2}\|{{\boldsymbol}T}^y({{\boldsymbol}x})-{{\boldsymbol}F}({{\boldsymbol}x})\|_2^2 + \frac{1}{2}\|{{\boldsymbol}T}^z({{\boldsymbol}x})\|_2^2\Big] + \textnormal{const}\,.
\end{aligned}$$ Hence, minimizing ${{\mathcal{D}}_{\textnormal{KL}}}({{\boldsymbol}T}_\# p_x\,\|\, p_{y|x}p_z)$ with respect to ${\boldsymbol}\theta$ is equivalent to minimizing the following loss function: $$L({\boldsymbol}\theta) = {\mathbb{E}}_{{{\boldsymbol}x}\sim p_x}\Big[\frac{1}{2\sigma_y^2}\|{{\boldsymbol}T}^y({{\boldsymbol}x})-{{\boldsymbol}F}({{\boldsymbol}x})\|_2^2 + \frac{1}{2}\|{{\boldsymbol}T}^z({{\boldsymbol}x})\|_2^2-\log|\det\nabla {{\boldsymbol}T}({{\boldsymbol}x})|\Big]\,.$$
- Let ${\mathcal{X}}\equiv{\mathbb{R}}^d$, ${{\boldsymbol}z}\in{\mathbb{R}}^d$ and suppose we observed ${{\boldsymbol}y}$. Let ${p_{\textnormal{ref}}}= p_z$ and ${p_{\textnormal{target}}}= p_{x|y}(\cdot|{{\boldsymbol}y})$. By Bayes’ theorem and our likelihood definition, we can rewrite $$- \log p_{x|y}({{\boldsymbol}T}({{\boldsymbol}z})|{{\boldsymbol}y}) = \frac{1}{2\sigma_y^2}\|{{\boldsymbol}y}- {{\boldsymbol}F}({{\boldsymbol}T}({{\boldsymbol}z}))\|_2^2 - \log p_x({{\boldsymbol}T}({{\boldsymbol}z})) + \textnormal{const}\,.$$ Hence, minimizing ${{\mathcal{D}}_{\textnormal{KL}}}({{\boldsymbol}T}_\# p_z\,\|\, p_{x|y})$ with respect to ${\boldsymbol}\theta$ is equivalent to minimizing the following loss function: $$L({\boldsymbol}\theta) = {\mathbb{E}}_{{{\boldsymbol}z}\sim p_z}\Big[\frac{1}{2\sigma_y^2}\|{{\boldsymbol}y}- {{\boldsymbol}F}({{\boldsymbol}T}({{\boldsymbol}z}))\|_2^2 - \log p_x({{\boldsymbol}T}({{\boldsymbol}z})) -\log|\det\nabla {{\boldsymbol}T}({{\boldsymbol}z})|\Big]\,.$$
- Let ${\mathcal{X}}\equiv{\mathbb{R}}^{m+d}$, ${{\boldsymbol}z}\in{\mathbb{R}}^{m+d}$, ${p_{\textnormal{ref}}}= p_{y,x}$ and ${p_{\textnormal{target}}}= p_{z}$. Because we assumed $p_z$ to be a standard Gaussian density, we have $$-\log p_z({{\boldsymbol}T}({{\boldsymbol}w})) = \frac{1}{2}\|{{\boldsymbol}T}({{\boldsymbol}w})\|_2^2 + {\textnormal{const}}\,.$$ Hence, minimizing ${{\mathcal{D}}_{\textnormal{KL}}}({{\boldsymbol}T}_\# p_{y,x}\,\|\, p_z)$ with respect to ${\boldsymbol}\theta$ is equivalent to minimizing the following loss function: $$L({\boldsymbol}\theta) = {\mathbb{E}}_{{{\boldsymbol}w}\sim p_{y,x}}\Big[\frac{1}{2}\|{{\boldsymbol}T}({{\boldsymbol}w})\|_2^2 - \log|\det\nabla {{\boldsymbol}T}({{\boldsymbol}w})|\Big]\,.$$
We start by showing that $$\label{eq:cons_loss}
L({\hat{{\boldsymbol}\theta}}) {\stackrel{p}{\longrightarrow}}L({\boldsymbol}\theta^*)\,.$$ We have $$\begin{aligned}
0 \le L({\hat{{\boldsymbol}\theta}}) - L({\boldsymbol}\theta^*) &= L({\hat{{\boldsymbol}\theta}}) - \hat{L}({\hat{{\boldsymbol}\theta}}) + \underbrace{\hat{L}({\hat{{\boldsymbol}\theta}}) - \hat{L}({\boldsymbol}\theta^*)}_{\leq 0\text{ by def.~of } {\hat{{\boldsymbol}\theta}}} + \hat{L}({\boldsymbol}\theta^*) - L({\boldsymbol}\theta^*) \\
&\le |L({\hat{{\boldsymbol}\theta}}) - \hat{L}({\hat{{\boldsymbol}\theta}})| + |\hat{L}({\boldsymbol}\theta^*) - L({\boldsymbol}\theta^*)|\,. \end{aligned}$$ The first term converges in probability to zero by the Uniform Law of Large Numbers, which holds because of the regularity assumptions above (see [@newey1994large] for details). The second term converges in probability to zero by the Law of Large Numbers. Hence the result.
Second, we show that ${\hat{{\boldsymbol}\theta}}{\stackrel{p}{\longrightarrow}}{\boldsymbol}\theta^*$, i.e. ${\hat{{\boldsymbol}\theta}}$ is consistent. For any $\delta > 0$, we consider $\inf_{{\boldsymbol}\theta\in\Theta:\|{\boldsymbol}\theta-{\boldsymbol}\theta^*\|_2\ge \delta}L({\boldsymbol}\theta)$, which is obtained at some $\tilde{{\boldsymbol}\theta}\in\Theta$ since $L({\boldsymbol}\theta)$ is continuous and $\Theta$ is compact. Define $\varepsilon \coloneqq L(\tilde{{\boldsymbol}\theta}) - L({\boldsymbol}\theta^*) > 0$, which is positive because ${\boldsymbol}\theta^*$ is the strict global minimizer of $L({\boldsymbol}\theta)$. Because of the Uniform Law of Large Numbers, for any $\gamma >0$, there exists $N$ large enough such that $\sup_{{\boldsymbol}\theta\in\Theta}|\hat{L}({\boldsymbol}\theta) - L({\boldsymbol}\theta)|<\frac{\varepsilon}{2}$ with probability $1-\gamma$. Then, we have $$L({\hat{{\boldsymbol}\theta}}) < \hat{L}({\hat{{\boldsymbol}\theta}}) + \frac{\varepsilon}{2} \le \hat{L}({\boldsymbol}\theta^*) + \frac{\varepsilon}{2} < L({\boldsymbol}\theta^*) + \varepsilon = L(\tilde{{\boldsymbol}\theta}) = \inf_{{\boldsymbol}\theta\in\Theta:\|{\boldsymbol}\theta-{\boldsymbol}\theta^*\|_2\ge \delta}L({\boldsymbol}\theta)\,$$ with probability $1-\gamma$, which implies $\|{\hat{{\boldsymbol}\theta}}- {\boldsymbol}\theta^*\|_2 \le \delta$ with probability $1-\gamma$. We conclude that $\lim_{N\to+\infty}{\mathbb{P}}(\|{\hat{{\boldsymbol}\theta}}- {\boldsymbol}\theta^*\|_2 \le \delta) = 1$ for any $\delta > 0$, hence ${\hat{{\boldsymbol}\theta}}{\stackrel{p}{\longrightarrow}}{\boldsymbol}\theta^*$.
For fixed ${{\boldsymbol}u}\in{\mathbb{R}}^{s}$, ${{\boldsymbol}T}({{\boldsymbol}u};{\boldsymbol}\theta)$ is a continuous function of ${\boldsymbol}\theta$. Thus, by the Continuous Mapping Theorem, ${{\boldsymbol}T}({{\boldsymbol}u};{\hat{{\boldsymbol}\theta}})$ is also consistent, i.e. ${{\boldsymbol}T}({{\boldsymbol}u}; {\hat{{\boldsymbol}\theta}}){\stackrel{p}{\longrightarrow}}{{\boldsymbol}T}({{\boldsymbol}u}; {\boldsymbol}\theta^*)$.
We now prove that $\sqrt{N}\Big({\hat{{\boldsymbol}\theta}}-{\boldsymbol}\theta^*\Big)\sim {\mathcal{N}}({\boldsymbol}0, \tilde{C}({\boldsymbol}\theta^*))$, with $\tilde{C}({\boldsymbol}\theta^*)$ defined later. First, we remind that since ${\boldsymbol}\theta^*$ is a minimizer for $L({\boldsymbol}\theta)$ in the interior of $\Theta$, we have $\nabla_{{\boldsymbol}\theta}L({\boldsymbol}\theta^*) = 0$. Then, by the vector-valued Mean Value Theorem, there exist $\bar{{\boldsymbol}\theta}_i \coloneqq c_i {\hat{{\boldsymbol}\theta}}+ (1-c_i){\boldsymbol}\theta^*$, for $c_i\in (0,1)$ and $i=1,\dots,s$, such that $$0 = \partial_{\theta_i}\hat{L}({\hat{{\boldsymbol}\theta}}) = \partial_{\theta_i}\hat{L}({\boldsymbol}\theta^*) + \sum_{j=1}^s\partial_{\theta_i, \theta_j}^2 \hat{L}(\bar{{\boldsymbol}\theta}_i)(\hat{\theta}_j-\theta_j^*)\,.$$ By multiplying both sides of the previous equation by $\sqrt{N}$, we get $$0 = \sqrt{N}\partial_{\theta_i}\hat{L}({\boldsymbol}\theta^*) + \sum_{j=1}^s\partial_{\theta_i \theta_j}^2 \hat{L}(\bar{{\boldsymbol}\theta}_i)\sqrt{N}(\hat{\theta}_j-\theta_j^*)\,.$$ Because ${\hat{{\boldsymbol}\theta}}{\stackrel{p}{\longrightarrow}}{\boldsymbol}\theta^*$, by the Continuous Mapping Theorem we have $\bar{{\boldsymbol}\theta}_i{\stackrel{p}{\longrightarrow}}{\boldsymbol}\theta^*$ for each $i=1,\dots,s$. Furthermore, by proceeding analogously as above, we have $\nabla_{{\boldsymbol}\theta}^2 \hat{L}({\hat{{\boldsymbol}\theta}}){\stackrel{p}{\longrightarrow}}\nabla_{{\boldsymbol}\theta}^2 L({\boldsymbol}\theta^*)$ because of the Uniform Law of Large Numbers, which holds because of the regularity assumptions above (see [@newey1994large] for details). Since $\nabla_{{\boldsymbol}\theta}^2 L({\boldsymbol}\theta^*)$ is constant, we can rearrange the previous equation and apply Slutsky’s theorem to get $$\sqrt{N}({\hat{{\boldsymbol}\theta}}- {\boldsymbol}\theta^*) \stackrel{d}{\sim} -\Big(\nabla_{{\boldsymbol}\theta}^2 L({\boldsymbol}\theta^*)\Big)^{-1}\sqrt{N}\nabla_{{\boldsymbol}\theta}\hat{L}({\boldsymbol}\theta^*)\,,$$ where $\stackrel{d}{\sim}$ denotes asymptotic equivalence in distribution for $N\to+\infty$. We observe that $\Big(\nabla_{{\boldsymbol}\theta}^2 L({\boldsymbol}\theta^*)\Big)^{-1}$ exists because ${\boldsymbol}\theta^*$ is a strict minimizer in the interior of $\Theta$. In addition, by rewriting $\nabla_{{\boldsymbol}\theta}\hat{L}({\boldsymbol}\theta^*) = \frac{1}{N}\sum_{k=1}^N\nabla_{{\boldsymbol}\theta}{\mathcal{L}}({{\boldsymbol}u}^{(k)};{\boldsymbol}\theta^*)$, and since ${\mathbb{E}}_{{{\boldsymbol}u}\sim p}[\nabla_{{\boldsymbol}\theta}{\mathcal{L}}({{\boldsymbol}u};{\boldsymbol}\theta^*)] = {\boldsymbol}0$, by the Central Limit Theorem we have $$\sqrt{N}\nabla_{{\boldsymbol}\theta}\hat{L}({\boldsymbol}\theta^*) {\stackrel{d}{\longrightarrow}}{\mathcal{N}}({\boldsymbol}0, {\mathcal{I}}({\boldsymbol}\theta^*))\,,$$ where ${\mathcal{I}}({\boldsymbol}\theta^*) \coloneqq {\mathbb{E}}_{{{\boldsymbol}u}\sim p}[\nabla_{{\boldsymbol}\theta}{\mathcal{L}}({{\boldsymbol}u};{\boldsymbol}\theta^*)\nabla_{{\boldsymbol}\theta}{\mathcal{L}}({{\boldsymbol}u};{\boldsymbol}\theta^*)^\top]$. 
Hence, we have that $$\sqrt{N}({\hat{{\boldsymbol}\theta}}- {\boldsymbol}\theta^*) {\stackrel{d}{\longrightarrow}}{\mathcal{N}}({\boldsymbol}0, \tilde{C}({\boldsymbol}\theta^*))\,,$$ where $\tilde{C}({\boldsymbol}\theta^*) \coloneqq \Big(\nabla_{{\boldsymbol}\theta}^2 L({\boldsymbol}\theta^*)\Big)^{-1}{\mathcal{I}}({\boldsymbol}\theta^*)\Big(\nabla_{{\boldsymbol}\theta}^2 L({\boldsymbol}\theta^*)\Big)^{-1}$. Finally, for each ${{\boldsymbol}u}\in{\mathbb{R}}^s$, we can apply the multivariate Delta method to achieve $$\sqrt{N} \Big({{\boldsymbol}T}({{\boldsymbol}u}; {\hat{{\boldsymbol}\theta}}) - {{\boldsymbol}T}({{\boldsymbol}u}; {\boldsymbol}\theta^*)\Big) {\stackrel{d}{\longrightarrow}}{\mathcal{N}}({\boldsymbol}0, C({\boldsymbol}\theta^*))\,,$$ where $C({\boldsymbol}\theta^*) = \nabla_{{\boldsymbol}\theta}{{\boldsymbol}T}({{\boldsymbol}u};{\boldsymbol}\theta^*) \tilde{C}({\boldsymbol}\theta^*) \nabla_{{\boldsymbol}\theta} {{\boldsymbol}T}({{\boldsymbol}u};{\boldsymbol}\theta^*)^\top$.
Appendix: Algorithms {#sec:appendix_algs}
====================
                                            Case 1         Case 2         Case 3
----------------------------------------- -------------- -------------- ---------------
Sensitivity to $\sigma_y \ll 1$             Sensitive      Sensitive      Not sensitive
Sensitivity to $m \gg 1$                    Sensitive      Sensitive      Not sensitive
Observed data during training               Not required   Required       Not required
Model evaluations                           Offline        Online         Offline
Gradient of the model                       Not required   Required       Not required
Prior evaluations                           Not required   Required       Not required
Explicit inverse map                        Required       Not required   Required

: Qualitative summary of the comparison between cases 1, 2 and 3[]{data-label="table:comparison"}
1. Sample inputs $({{\boldsymbol}x}^{(k)})_{k=1}^{{N_{\textnormal{train}}}}\sim p_x$ and estimate the loss in (5).
2. Train an INN ${{\boldsymbol}T}(\cdot; {\boldsymbol}\theta)$ with the estimated loss and denote the minimizer by ${\boldsymbol}\theta^*$.
3. Sample $({{\boldsymbol}z}^{(k)})_{k=1}^{{N_{\textnormal{out}}}}\sim {\mathcal{N}}({\boldsymbol}0, I)$.
4. Get ${{\boldsymbol}x}^{(k)} = {{\boldsymbol}S}([{{\boldsymbol}y}, {{\boldsymbol}z}^{(k)}]; {\boldsymbol}\theta^*)\sim p_{x|y}$, for $k = 1,\dots,{N_{\textnormal{out}}}$, where ${{\boldsymbol}S}= {{\boldsymbol}T}^{-1}$.

1. Sample inputs $({{\boldsymbol}z}^{(k)})_{k=1}^N\sim {\mathcal{N}}({\boldsymbol}0, I)$ and estimate the loss in (6).
2. Train an INN ${{\boldsymbol}T}(\cdot; {\boldsymbol}\theta)$ with the estimated loss and denote the minimizer by ${\boldsymbol}\theta^*$.
3. Get ${{\boldsymbol}x}^{(k)} = {{\boldsymbol}T}({{\boldsymbol}z}^{(k)}; {\boldsymbol}\theta^*)\sim p_{x|y}$, for $k = 1,\dots,N$.

1. Sample inputs $({{\boldsymbol}w}^{(k)})_{k=1}^{{N_{\textnormal{train}}}}\sim p_{y,x}$ via Algorithm \[alg:sample\_joint\] and estimate the loss in (7).
2. Train an INN ${{\boldsymbol}T}(\cdot; {\boldsymbol}\theta)$ with the estimated loss and denote the minimizer by ${\boldsymbol}\theta^*$.
3. Get ${{\boldsymbol}z}_y = {{\boldsymbol}T}^y({{\boldsymbol}y})$.
4. Sample $({{\boldsymbol}z}_x^{(k)})_{k=1}^{{N_{\textnormal{out}}}}\sim {\mathcal{N}}({\boldsymbol}0, I)$.
5. Get ${{\boldsymbol}x}^{(k)} = {{\boldsymbol}S}([{{\boldsymbol}z}_y, {{\boldsymbol}z}_x^{(k)}]; {\boldsymbol}\theta^*)\sim p_{x|y}$, for $k = 1,\dots,{N_{\textnormal{out}}}$, where ${{\boldsymbol}S}= {{\boldsymbol}T}^{-1}$.

1. Sample ${{\boldsymbol}x}\sim p_x$.
2. Sample ${\boldsymbol}\xi\sim {\mathcal{N}}({\boldsymbol}0, I)$.
3. Set ${{\boldsymbol}y}= {{\boldsymbol}F}({{\boldsymbol}x}) + \sigma_y{\boldsymbol}\xi \sim p_{y|x}$.
4. Set ${{\boldsymbol}w}= [{{\boldsymbol}y},{{\boldsymbol}x}] \sim p_{y,x}$.
[^1]: For any invertible map ${{\boldsymbol}T}$ and probability density $p$, the pushforward density of $p$ is given by ${{\boldsymbol}T}_\#p({{\boldsymbol}u}) = p({{\boldsymbol}T}^{-1}({{\boldsymbol}u}))|\det \nabla_{{{\boldsymbol}u}} {{\boldsymbol}T}^{-1}({{\boldsymbol}u})|$.
---
abstract: 'A model of interdependent networks of networks (NoN) has been introduced recently in the context of brain activation to identify the neural collective influencers in the brain NoN. Here we develop a new approach to derive an exact expression for the random percolation transition in Erdös-Rényi NoN. Analytical calculations are in excellent agreement with numerical simulations and highlight the robustness of the NoN against random node failures. Interestingly, the phase diagram of the model unveils particular patterns of interconnectivity for which the NoN is most vulnerable. Our results help to understand the emergence of robustness in such interdependent architectures.'
author:
- Kevin Roth
- Flaviano Morone
- Byungjoon Min
- 'Hernán A. Makse'
title: Emergence of Robustness in Network of Networks
---
Many biological, social and technological systems are composed of multiple, if not vast numbers of, interacting elements. In a stylized representation each element is portrayed as a node and the interactions among nodes as mutual links, so as to form what is known as a network [@Newman2010:Book]. A finer description further isolates several sub-networks, called modules, each of them performing a different function. These modules are, in turn, integrated to form a larger aggregate referred to as a network of networks (NoN). A compelling problem is how to define the interdependencies between modules, specifically how the functioning of nodes in one module depends on the functioning of nodes in other modules [@Buldyrev2010:Cascade; @Gao2012:Interdependent; @Bianconi2015:MutualComponent; @ReducingCouplingStrength; @saulo].
Current models of such interdependent NoN, inspired by the power grid, represent dependencies across modules through very fragile couplings [@Buldyrev2010:Cascade; @Gao2012:Interdependent], such that the random failure of a few nodes gives rise to a catastrophic cascading collapse of the NoN. Many real-life systems, however, exhibit high resilience against malfunctioning. The prototypical example of such robust modular architectures is the brain, which therefore cannot be described by catastrophic NoN models [@saulo]. To cope with the fragility of current NoN models, we recently introduced a model of interdependencies in NoN [@preprint], inspired by the phenomenon of top-down control in brain activation [@gallos; @sigman], in order to study the impact of rare events, i.e. non-random [*optimal percolation*]{} [@MoMa], on the global communication of the brain with application to neurological disorders.
Here we investigate the robustness of this NoN model with respect to typical node failures, i.e. [*random percolation*]{}. More precisely, we develop a new approach to derive an analytical expression for the random percolation phase diagram in Erdös-Rényi (ER) NoN, which predicts the conditions responsible for the emergence of robustness and the absence of cascading effects.
{width="0.7em"} ${\sigma}_i=1$; $\bullet$ $n_i=1$, ${\sigma}_i=0$; $\circ$ $n_i=0$, ${\sigma}_i=0$.[]{data-label="fig:fig1"}](Fig1_600.png){width="0.95\columnwidth"}
[**Definition of control intra-modular links.—**]{} Consider $N$ nodes in a NoN composed of several interdependent modules [(Fig.\[fig:fig1\])]{}. We distinguish the roles of intra-module links, connecting nodes within a module, and inter-module dependency links (corresponding to control links in the brain [@gallos; @saulo]), connecting nodes across modules: the former (intra-links) only represent whether or not two nodes are connected, while the latter (inter-links) express mutual control. Every node $i$ has $k^{\rm in}_{i}$ intra-module links, referred to as node $i$’s in-degree, and $k^{\rm out}_{i}$ inter-module connections, referred to as $i$’s out-degree.
Each node can be present or removed, and, if present, it can be activated or inactivated. We introduce the binary occupation variable $n_i = 1, 0$ to specify whether node $i$ is present $(n_i =
1)$ or removed $(n_i = 0)$. By virtue of inter-module dependencies, the functioning of a node in one module depends on the functioning of nodes in other modules. In order to conceptualize this form of control, we introduce the activation state ${\sigma}_i$, taking values ${\sigma}_i = 1$ if node $i$ is activated and ${\sigma}_i = 0$ if not. A node $i$ with one or more inter-module dependency/control connections $(k^{\rm out}_{i} \geq 1)$ is activated $({\sigma}_i = 1)$ if and only if it is present $(n_i = 1)$ and at least one of its out-neighbors $j$ is also present $(n_j = 1)$, otherwise it is not activated $({\sigma}_i =
0)$. In other words, a node with one or several inter-module dependencies is inactivated when the last of its out-neighbors is removed.
The rationale for this control rule is that the activation (${\sigma}_i={\sigma}_j=1$) of two nodes connected by, for instance, one inter-link occurs only when both nodes are occupied, $n_i=n_j=1$. If just one of them is unoccupied, let’s say $n_j=0$, then both nodes become inactive. Thus, ${\sigma}_i=0$ even though $n_i=1$, and we say that $j$ exerts a control over $i$. This rule models the way neurons control the activation of other neurons in distant brain modules via control/dependency links (fibers through the white matter) in a process known as top-down influence in sensory processing [@sigman]. Mathematically, ${\sigma}_i$ is defined as $${\sigma}_i\ =\ n_i\bigg[1-\prod_{j \in \mathcal{F}(i)}(1-n_j)\bigg]\,,
\label{eq:sigma}$$ where $\mathcal{F}(i)$ denotes the set of nodes connected to $i$ via an inter-module link. Conceptually, the inter-links define a mapping from the configuration of occupation variables $\vec{n} \equiv
(n_1,...,n_N)$ to the configuration of activated states $\vec{{\sigma}}
\equiv({\sigma}_1,...,{\sigma}_N)$, as given by Eq.(\[eq:sigma\]).
Not all nodes participate in the control of other nodes via dependencies, i.e. a certain fraction of them does not establish inter-links. If a node does not have inter-module dependencies, it activates as long as it is present: $${\sigma}_{i} = n_{i}\,,\,\,\,\,\,\, \mbox{for $k^{\rm out}_{i}=0$ .}
\label{eq:sigma0}$$ Therefore, products over empty sets $\mathcal{F}({i})=\emptyset$ default to zero in Eq.(\[eq:sigma\]). This last property also guarantees that we recover the single network case for vanishing inter-module connections $(\langle k^{\rm
out}_{i}\rangle \rightarrow 0$), i.e. when considering the limiting case of one isolated module only.
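As an illustration, a minimal Python sketch of the activation rule of Eqs.(\[eq:sigma\]) and (\[eq:sigma0\]); the data structures and names are our own choices, not part of the model definition:

```python
# Minimal sketch of the activation rule: occupation n[i] in {0,1},
# inter_neighbors[i] lists the out-neighbors F(i) of node i.
def activation_states(n, inter_neighbors):
    sigma = []
    for i, n_i in enumerate(n):
        F_i = inter_neighbors[i]
        if not F_i:                        # k_out = 0: sigma_i = n_i
            sigma.append(n_i)
        else:                              # active iff present and >= 1 out-neighbor present
            prod = 1
            for j in F_i:
                prod *= 1 - n[j]
            sigma.append(n_i * (1 - prod))
    return sigma

# Two mutually dependent nodes: removing node 1 also deactivates node 0.
print(activation_states([1, 0], [[1], [0]]))   # -> [0, 0]
print(activation_states([1, 1], [[1], [0]]))   # -> [1, 1]
```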
When a fraction of nodes is removed, the NoN breaks into isolated components of activated nodes. In this work we focus on the *largest (giant) mutually connected activated component* $G$, which encodes global properties of the system. In contrast to previous NoN models [@Buldyrev2010:Cascade; @Gao2012:Interdependent], in our model a node can be activated even if it does not belong to $G$ (see Fig.\[fig:fig1\]). Indeed, the activation of a node, given by Eq.(\[eq:sigma\]), is not tied to its membership in the giant component. [Therefore, a node can be part of $G$ without being part of the largest connected activated component in its own module (consider for instance the top left node in Fig.1[**a**]{}).]{} As a consequence, controlling dependencies in the NoN do not lead to cascades of failures, which ultimately explains the robustness of our NoN model. [ In the model of Refs.[@Buldyrev2010:Cascade; @Gao2012:Interdependent], on the other hand, a node can be activated (therein termed “functional”) if and only if it belongs to the largest connected component of its own module and (for the case that it has inter-module dependency links) its out-neighbors also belong to the giant component within their module.]{} Indeed, in Refs.[@Buldyrev2010:Cascade; @Gao2012:Interdependent] the propagation of failures is not local as in Eq.(\[eq:sigma\]), implying that the failure of a single node may catastrophically destroy the NoN.
In order to quantify robustness, we measure the impact of node failures $n_i=0$ on the size of $G$ [@Buldyrev2010:Cascade; @Gao2012:Interdependent; @Bianconi2015:MutualComponent]. More precisely, we calculate $G$ under typical configurations $\vec{n}$, sampled from a flat distribution with a given fraction $q
\equiv 1 - \sum_{i = 1}^{N}n_i/N$ of removed nodes, and show that $G$ remains sizeable even for high values of $q$. In practice, starting from $q = 0$, we compute $G(q)$ while progressively increasing the fraction $q$ of randomly removed nodes. The robustness of the NoN is then formally characterized by the critical fraction $q_c$, the percolation threshold, at which the giant connected activated component collapses, $G(q_c) = 0$ [@Buldyrev2010:Cascade; @Gao2012:Interdependent]. Accordingly, NoN models with high $q_c$ (ideally close to 1) are robust, whereas those with low $q_c$ are considered fragile. [A plot of $G(q)$ for ER 2-NoN is shown in the inset of Fig.2.]{}
[**Message Passing.—**]{} The problem of calculating $G$ can be solved using a message passing approach [@Bianconi2015:MutualComponent; @MoMa; @Zdeborova2014:Percolation] which provides exact solutions on locally tree-like NoN, containing a small number of short loops [@Zdeborova2014:Percolation]. This includes the thermodynamic limit $(N\to\infty)$ of Erdös-Rényi and scale-free random graphs as well as the configuration model (the maximally random graphs generated from a given degree distribution), which contain loops whose typical length grows logarithmically with the system size [@Dorogovtsev2003].
In principle, it works like this: each node receives messages from its neighbors containing information about their membership in $G$. Based on what they receive, the nodes then send further messages until everyone eventually agrees on who belongs to $G$. In practice, we need to derive a self-consistent system of equations that specifies for each node how the message to be sent is computed from the incoming messages [[@Mezard2009]]{}. To this end, we introduce two types of messages: $\rho_{i\to j}$ running along an intra-module link and $\varphi_{i\to j}$ running along an inter-module link. Formally, we denote $\rho_{i\to j}\equiv$ *probability that node $i$ is connected to $G$ other than via in-neighbor $j$*, and $\varphi_{i\to j}\equiv$ *probability that node $i$ is connected to $G$ other than via out-neighbor $j$*. The binary nature of the occupation variables and the activation states constrains the messages to take values $\rho_{i\to j}, \varphi_{i\to j} \in \{0,1\}$.
A node can only send non-zero information if it is activated, hence the messages must be proportional to ${\sigma}_{i}$. Assuming node $i$ is activated, it can send a non-zero intra-module message $\rho_{i\to j}$ to node $j$ if and only if it receives a non-zero message by at least one of its in-neighbors other than $j$ *or* one of its out-neighbors. Similarly, we can consider the message $\varphi_{i\to
j}$ along an inter-module link. Thus, the self-consistent system of message passing equations is given by: $$\begin{aligned}
\rho_{i\to j} &= {\sigma}_{i} \Big[ 1 - \hspace{-.2cm}\prod_{k \in \mathcal{S}(i) \setminus j} \hspace{-.2cm} (1-\rho_{k\to i})\hspace{-.1cm} \prod_{k \in \mathcal{F}(i)} \hspace{-.1cm}( 1 - \varphi_{k\to i} ) \Big]\ ,\label{eq:messagePassingRho} \\
\varphi_{i\to j} &= {\sigma}_{i} \Big[ 1 - \hspace{-.1cm}\prod_{k \in \mathcal{S}(i)} \hspace{-.1cm} (1-\rho_{k\to i} )\hspace{-.2cm} \prod_{k \in \mathcal{F}(i)\setminus j} \hspace{-.2cm}( 1 - \varphi_{k\to i} ) \Big]\ ,\label{eq:messagePassingVarphi}\end{aligned}$$ where $\mathcal{S}(i)$ denotes the set of node $i$’s intra-module nearest neighbors and $\mathcal{F}(i)$ denotes the set of $i$’s inter-module nearest neighbors. Note that products over empty sets $\mathcal{S}(i)=\emptyset$ or $\mathcal{F}(i)=\emptyset$ default to one.
In practice, the message passing equations are solved iteratively. Starting from a random initial configuration $\rho_{i\to j},
\varphi_{i\to j} \in \{0,1\}$, the messages are updated until they finally converge. From the converged solutions for the messages we can then compute the marginal probability $\rho_{i} = 0,1$ for each node $i$ to belong to the giant connected activated component $G$: $$\rho_{i}\ =\ {\sigma}_{i} \Big[{\hspace{1pt}}1\ - \hspace{-.1cm}\prod_{k \in \mathcal{S}(i)} \hspace{-.1cm} (1-\rho_{k\to i} ) \hspace{-.1cm} \prod_{k \in \mathcal{F}(i)} \hspace{-.1cm}( 1 - \varphi_{k\to i} ){\hspace{1pt}}\Big]\ .
\label{eq:marginal}$$
The size of $G$, or rather the fraction of nodes belonging to $G$, can then simply be computed by summing the probability marginals $\rho_i$ and dividing by the system size: $G(\vec{n}) = \big(\sum_{i = 1}^{N} \rho_{i}\big)/N$.
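As an illustration, the following minimal Python sketch iterates Eqs.(\[eq:messagePassingRho\])–(\[eq:messagePassingVarphi\]) and evaluates Eq.(\[eq:marginal\]) to obtain $G$; the neighbor-list representation and the toy 2-NoN at the end are our own choices:

```python
# Minimal sketch: iterate the message passing equations and return the fraction of
# nodes in G. S_nb / F_nb: symmetric intra-/inter-module neighbor lists;
# sigma: activation states (e.g. from the activation_states sketch above).
def message_passing(S_nb, F_nb, sigma, n_iter=200):
    N = len(sigma)
    rho = {(i, j): 1.0 for i in range(N) for j in S_nb[i]}   # rho_{i->j}
    phi = {(i, j): 1.0 for i in range(N) for j in F_nb[i]}   # varphi_{i->j}
    for _ in range(n_iter):
        new_rho, new_phi = {}, {}
        for (i, j) in rho:
            prod = 1.0
            for k in S_nb[i]:
                if k != j:
                    prod *= 1.0 - rho[(k, i)]
            for k in F_nb[i]:
                prod *= 1.0 - phi[(k, i)]
            new_rho[(i, j)] = sigma[i] * (1.0 - prod)
        for (i, j) in phi:
            prod = 1.0
            for k in S_nb[i]:
                prod *= 1.0 - rho[(k, i)]
            for k in F_nb[i]:
                if k != j:
                    prod *= 1.0 - phi[(k, i)]
            new_phi[(i, j)] = sigma[i] * (1.0 - prod)
        rho, phi = new_rho, new_phi
    # marginals rho_i, Eq. for the size of G
    total = 0.0
    for i in range(N):
        prod = 1.0
        for k in S_nb[i]:
            prod *= 1.0 - rho[(k, i)]
        for k in F_nb[i]:
            prod *= 1.0 - phi[(k, i)]
        total += sigma[i] * (1.0 - prod)
    return total / N

# Toy 2-NoN: two 3-cycles coupled by a single inter-link between nodes 0 and 3.
S_nb = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
F_nb = {0: [3], 1: [], 2: [], 3: [0], 4: [], 5: []}
print(message_passing(S_nb, F_nb, sigma=[1] * 6))   # -> 1.0 (all nodes belong to G)
```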
[**Percolation Phase Diagram for ER NoN.—**]{} In what follows we derive an exact expression for the percolation threshold in Erdös-Rényi (ER) 2-NoN, defined as two randomly interconnected ER modules. Each module is an ER random graph with Poisson degree distribution, $\mathds{P}_z[ k^{\rm in}] = e^{-z}z^{k^{\rm in}}/k^{\rm in}!$ for $k^{\rm in}\in\mathbb{N}_0$, where $z \equiv {\langle}k^{\rm in}{\rangle}$ denotes the average in-degree. [Similarly, we consider the inter-module links to form a bipartite ER random graph with Poisson degree distribution]{}, $\mathds{P}_w[ k^{\rm out}] = e^{-w}w^{k^{\rm out}}/k^{\rm out}!$ for $k^{\rm out}\in\mathbb{N}_0$, where $w \equiv {\langle}k^{\rm out}{\rangle}$ denotes the average out-degree. The corresponding distributions for the in-/out-degree at the end of an intra-/inter-link are given by $\mathds{Q}_z[ k^{\rm in}] = (k^{\rm in}\mathds{P}_z[ k^{\rm in}] \mathds{1}_{\{k^{\rm in}>0\}})/z$ and $\mathds{Q}_w[ k^{\rm out}] = (k^{\rm out}\mathds{P}_w[ k^{\rm out}] \mathds{1}_{\{k^{\rm out}>0\}})/w$, for $k^{\rm in}, k^{\rm out}$ in $\mathbb{N}_0$, where $\mathds{1}_{\{\cdot\}}$ denotes the indicator function.
The random percolation process is then defined by removing each node in the NoN independently with probability $q$, which is equivalently formulated as taking the configurations at random from the binomial distribution, $\mathds{P}_p[\vec{n}] = \prod_{i = 1}^{N} p^{n_i}(1-p)^{1-n_i}$, where $p=1-q$ denotes the occupation probability.
The probability of a node to be activated when a randomly chosen fraction $p$ of nodes in the NoN is present, $\big\langle \sigma_i \big\rangle_{p} = p\mathds{1}_{\{k_i^{\rm out}=0\}} + p\big[1 - (1 - p)^{k^{\rm out}_i}\big]\mathds{1}_{\{k_i^{\rm out}>0\}}$, can straightforwardly be obtained by averaging $\sigma_i$, given by Eq.(\[eq:sigma\]), over $\mathds{P}_p[\vec{n}]$. The expected fraction of activated nodes $\big\langle \sigma_i \big\rangle_{p, w} = p\big[1+ e^{-w} - e^{-w p}\big]$ is then given by averaging $\big\langle \sigma_i \big\rangle_{p}$ over $\mathds{P}_w[k^{\rm out}_i]$. Unlike a node’s probability to be present $\langle n_i \rangle_{p} = p$, the probability to be activated $\langle \sigma_i \rangle_p$ is therefore highly dependent on the node’s out-degree $k^{\rm out}_i$. In other words, the deactivations are highly degree dependent, even if the fraction $q$ of nodes to be removed from the NoN is chosen randomly!
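A quick Monte Carlo sanity check of this closed form, with arbitrary test values $p=0.6$ and $w=2$ (not taken from the paper):

```python
# Sanity check of <sigma>_{p,w} = p[1 + e^{-w} - e^{-wp}]: sample a node with
# Poisson(w) out-degree, keep every node with probability p, and average its activation.
import numpy as np

def avg_activation_mc(p, w, samples=200_000, rng=np.random.default_rng(1)):
    k_out = rng.poisson(w, samples)                  # out-degree of the node
    n_i = rng.random(samples) < p                    # node itself present?
    any_out = 1 - (1 - p) ** k_out                   # prob. that >= 1 out-neighbor is present
    active = n_i * np.where(k_out == 0, 1.0, rng.random(samples) < any_out)
    return active.mean()

p, w = 0.6, 2.0
print(avg_activation_mc(p, w), p * (1 + np.exp(-w) - np.exp(-w * p)))  # should roughly agree
```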
To compute the expectation of messages within the ensemble of ER 2-NoN, we average the expressions for $\rho_{i\to j}$ and $\varphi_{i\to j}$, representing the converged solutions to the message passing equations, over all possible realizations of randomness inherent in the above distributions. In doing so, we must however make sure to properly account for the fact that, for nodes with inter-links ($k_i^{\rm out}\geq 1$), the binary occupation variable $n_i$ shows up more than once within the entire system of message passing equations, due to the activation rule for ${\sigma}_i$. Indeed, since the occupation variable is a binary number $n_i \in \{0,1\}$, its powers satisfy $n_i^{k} = n_i$ for every exponent $k\geq 1$, and therefore the self-consistency is not affected by the existence of multiple $n_i$ per node. Yet, when naively averaging with the distribution of configurations, we would incorrectly obtain $n_i^{k} \overset{\mathds{P}_p}{\longrightarrow} p^{k}$ instead of $n_i^{k} \overset{\mathds{P}_p}{\longrightarrow} p$, if we did not properly account for the binary nature of the occupation variable across the entire system of equations.
[Specifically, when inserting the expression for the message $\varphi_{k\to i}$, determined by Eq.(\[eq:messagePassingVarphi\]), into the expression for $\rho_{i\to j}$, given by Eq.(\[eq:messagePassingRho\]), the activation state ${\sigma}_k = n_k \big[1 - (1-n_i)\prod_{\ell \in \mathcal{F}(k)\setminus i}(1-n_\ell) \big]$ (within $\varphi_{k\to i}$) reduces to $n_k$, since $n_i(1-n_i)=0$ for binary variables. In other words, we need to replace ${\sigma}_k$ (${\sigma}_i$) with $n_k$ ($n_i$) within the expression for $\varphi_{k\to i}$ ($\varphi_{i\to j}$, Eq.(\[eq:messagePassingVarphi\])).
Thus, the modified message passing equations we need to average read: $$\begin{aligned}
\rho_{i\to j} &= {\sigma}_{i} \Big[ 1 - \prod_{k \in \mathcal{S}(i) \setminus j} (1-\rho_{k\to i}) \prod_{k \in \mathcal{F}(i)} ( 1 - \varphi_{k\to i} ) \Big]\ ,\\
\varphi_{i\to j} &= n_i \Big[ 1 - \prod_{k \in \mathcal{S}(i)} (1-\rho_{k\to i} ) \prod_{k \in \mathcal{F}(i)\setminus j} ( 1 - \varphi_{k\to i} ) \Big]\ .
\label{eq:modifiedMP}
\end{aligned}$$
In practice, we expand $\rho_{i \to j}$, given by Eq.(\[eq:modifiedMP\]), and perform the averaging separately for each term: $$\begin{aligned}
\rho_{i\to j} &= n_i \Big[1 - \prod_{k\in \mathcal{S}(i)\setminus j}(1-\rho_{k\to i}) \Big] \mathds{1}_{\{k^{\rm out}_i = 0\}}\\
&+ {\sigma}_i \Big[1 - \prod_{k\in \mathcal{S}(i)\setminus j}(1-\rho_{k\to i})\prod_{k\in \mathcal{F}(i)}(1-\varphi_{k\to i}) \Big] \mathds{1}_{\{k^{\rm out}_i > 0\}} \ . \nonumber
\label{eq:rho_ij}\end{aligned}$$ The only non-trivial average involves the following expression: $$\begin{aligned}
& \Big{\langle}{\sigma}_i \prod_{k\in \mathcal{S}(i)\setminus j}(1-\rho_{k\to i})\prod_{k\in \mathcal{F}(i)}(1-\varphi_{k\to i}) \,\mathds{1}_{\{k^{\rm out}_i > 0\}} \Big{\rangle}\\
& = \Big{\langle}n_i \prod_{k\in \mathcal{S}(i)\setminus j}(1-\rho_{k\to i}) \Big[ \prod_{k\in \mathcal{F}(i)}(1-\varphi_{k\to i}) - \prod_{k\in \mathcal{F}(i)}(1-n_k)(1-\varphi_{k\to i})\Big] \mathds{1}_{\{k^{\rm out}_i > 0\}} \Big{\rangle}\ ,
\end{aligned}$$ where we have to account for the fact that $(1-n_k)(1-\varphi_{k\to i}) = (1-n_k)$. The final expression for the average intra-module message $\rho$ reads: $$\rho = p \big[ 1 + e^{-w} - e^{-w p} - e^{-z\rho -w} + e^{-z\rho -w p} - e^{-z\rho -w \varphi} \big].$$
Averaging the modified inter-link message $\varphi_{i\to j}$, given by Eq.(\[eq:modifiedMP\]), over all possible realizations of randomness inherent in the percolation process yields: $$\varphi\ =\ p\, \big[\, 1 - e^{-z\,\rho\,-w\,\varphi}\, \big]\ .$$
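A minimal sketch solving these two averaged equations for $(\rho,\varphi)$ by direct iteration at given $(p,z,w)$; the test values are arbitrary:

```python
# Minimal sketch: fixed-point iteration of the averaged message equations.
import math

def averaged_messages(p, z, w, n_iter=2000):
    rho, phi = 1.0, 1.0
    for _ in range(n_iter):
        rho = p * (1 + math.exp(-w) - math.exp(-w * p)
                   - math.exp(-z * rho - w) + math.exp(-z * rho - w * p)
                   - math.exp(-z * rho - w * phi))
        phi = p * (1 - math.exp(-z * rho - w * phi))
    return rho, phi

print(averaged_messages(p=0.5, z=4.0, w=2.0))   # nonzero messages: a giant component exists
print(averaged_messages(p=0.1, z=4.0, w=2.0))   # messages decay to (essentially) zero
```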
The percolation threshold $p_c = 1 - q_c$ of the ER 2-NoN can now be found by evaluating the leading eigenvalue determining the stability of the fixed point solution $\{\rho=\varphi=0\}$ to the averaged modified message passing equations [@Zdeborova2014:Percolation]: $$\left. \left(\begin{array}{cc}
\frac{\partial \rho}{\partial \rho} & \frac{\partial \varphi}{\partial \rho}\\
\frac{\partial \rho}{\partial \varphi} & \frac{\partial \varphi}{\partial \varphi}
\end{array}\right) \right|_{\{\rho=\varphi=0\}} = \left(\begin{array}{cc}
pz\big[1+e^{-w} - e^{-wp}\big] & pz\\
pw & pw
\end{array} \right).$$ The corresponding eigenvalues can readily be obtained as $$\lambda_{\pm} = \frac{p}{2}\Big[ z[1+f]+w \pm \sqrt{ z^2[1+f]^2 + 2zw[1-f] + w^2} \Big]
\label{eq:leadingEigenvalue}$$ where we define $f(p) \equiv e^{-w} - e^{-w p}$. Formally, the fixed point solution $\{\rho=\varphi=0\}$ is stable if and only if $\lambda_{+} \leq 1$ [@MoMa; @Zdeborova2014:Percolation]. The implicit function theorem then allows us to obtain the percolation threshold $p_c=1-q_c$ by saturating the stability condition as follows: $$\lambda_{+}{\hspace{1pt}}({\hspace{1pt}}p,{\hspace{1pt}}z,{\hspace{1pt}}w{\hspace{1pt}})\ =\ 1 \hspace{.2cm}\rightarrow\,\,\ p_c{\hspace{1pt}}({\hspace{1pt}}z,{\hspace{1pt}}w{\hspace{1pt}})\ .
\label{eq:stabilityCondition}$$
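A minimal numerical sketch recovering $p_c(z,w)$ by bisection on $\lambda_+(p,z,w)=1$, using Eq.(\[eq:leadingEigenvalue\]) and the fact that $\lambda_+$ grows monotonically with $p$:

```python
# Minimal sketch: locate p_c (and q_c = 1 - p_c) from lambda_+(p, z, w) = 1 by bisection.
import math

def lambda_plus(p, z, w):
    f = math.exp(-w) - math.exp(-w * p)
    disc = (z * (1 + f)) ** 2 + 2 * z * w * (1 - f) + w ** 2
    return 0.5 * p * (z * (1 + f) + w + math.sqrt(disc))

def critical_occupation(z, w, tol=1e-10):
    lo, hi = 0.0, 1.0                       # lambda_+ increases with p, so bisection applies
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lambda_plus(mid, z, w) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

z, w = 4.0, 2.0
p_c = critical_occupation(z, w)
print(p_c, 1 - p_c)   # q_c comes out close to 0.79, consistent with the numerical peak quoted
                      # for <k_in> = 4, <k_out> = 2
```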
Results for $q_c(z, w) = 1-p_c(z, w)$ in ER 2-NoN are shown in Fig.\[fig:percolationThreshold\] and confirm the excellent agreement between direct simulations of the random percolation process on synthetic NoN and the theoretical percolation threshold calculated from Eq.(\[eq:stabilityCondition\]). The numerically measured percolation thresholds, $q_c^{\rm num}(z, w)$, were obtained at the peak of the second largest activated component [(Fig.\[fig:percolationThreshold\]Inset)]{}, measured as a function of the fraction of randomly removed nodes in synthetic ER 2-NoN. The analytical prediction of the percolation threshold, $q_c^{\rm analytic}(z, w)$, was obtained from the numerical solution of Eq.(\[eq:stabilityCondition\]).
The large values of $q_c$ in the percolation phase diagram confirm that the NoN is very robust with respect to random node failures. The results indicate, for instance, that a fraction of more than 70% of randomly chosen nodes in an ER 2-NoN with ${\langle}k^{\rm in}{\rangle}= 4$ can be damaged without destroying the giant connected activated component $G$. Moreover, the percolation transition, separating the phases $G > 0$ and $G = 0$, is of second order in the robust NoN [(Fig.\[fig:percolationThreshold\]Inset)]{}.
Interestingly, the phase diagram reveals that, for a given average in-degree $z$, the NoN exhibits maximal vulnerability $ q_c^{\rm min}(w^{*}, z) = 1 - p_c^{\rm max}(w^{*}, z)$ at a characteristic average out-degree $w^{*}(z)$, indicated by the dip in the percolation threshold $q_c$ in Fig.\[fig:percolationThreshold\]. [The equation determining $w^*(z)$ can straightforwardly be obtained via implicit differentiation, using $\partial p_c/\partial w\,|_{w^*}=0$, where $p_c(z,w)$ is given by the solution of Eq.(\[eq:stabilityCondition\]). The corresponding curve for $q_c^{\rm min}(w^{*}, z)$ is shown in Fig.\[fig:percolationThreshold\].]{} Conceptually, the dip in $q_c$ occurs as a consequence of the competition between dependency and redundancy effects in the NoN. Starting from vanishing inter-module connections, the critical fraction $q_c$, and therefore the robustness of the NoN, initially decreases slightly as the number of dependency links in the NoN is increased. However, upon further increasing the density of inter-module dependencies, the resilience of the NoN increases again with increasing redundancy among the dependency connections.
![(Color online) **Percolation phase diagram for ER 2-NoN.** Blue curves show our analytical prediction of the percolation threshold, $q_c^{\rm analytic}$, as a function of ${\langle}k^{\rm out}{\rangle}$ for different values of ${\langle}k^{\rm in}{\rangle}= 0,2,4,6$, obtained from Eq.(\[eq:stabilityCondition\]). Black dots show the measured numerical percolation threshold, $q_c^{\rm num}$, from direct simulation of the random percolation process, obtained at the peak of the second largest connected activated component. [The green dashed line indicates the maximal vulnerability $q_c^{\rm min}$]{}. The percolation threshold $q_c$ denotes the critical fraction of randomly removed nodes at which the giant connected activated component collapses, $G(q_c)=0$. Errors are s.e.m. over 10 NoN realizations of system size $N = 2\times10^6$. [Inset: Size of $G$ (black dots) and $200\times$ the size of the second largest connected activated component (red dots) as a function of $q$ for an ER 2-NoN with ${\langle}k^{\rm in}{\rangle}=4$, ${\langle}k^{\rm out}{\rangle}=2$ and $N = 2\times10^6$. The peak is at $q_c^{\rm num}=0.788$.]{} \[fig:percolationThreshold\]](Fig2_inset_900dpi.png){width="\columnwidth"}
The underlying mechanism responsible for the robustness of the NoN is best understood from the behaviour of the model in the limit ${\langle}k^{\rm in}{\rangle}\rightarrow 0$, which corresponds to a bipartite network equipped with our activation rule for ${\sigma}_i$, given by Eq.(\[eq:sigma\]). The corresponding message passing equations, $\varphi_{i\to j} = {\sigma}_i\big[1-\prod_{k\in\mathcal{F}(i)\setminus
j}(1-\varphi_{k\to i})\big]$, are straightforwardly obtainable from Eqs.(\[eq:messagePassingRho\])&(\[eq:messagePassingVarphi\]), and can be seen to coincide with the usual single network message passing equations by observing that the activation state ${\sigma}_i$ can actually be replaced with the occupation variable $n_i$ in this case (the reason is the following: assuming node $i$ is present $(n_i = 1)$, ${\sigma}_i = 0$ implies that none of $i$’s out-neighbors is present and so none of the incoming inter-module messages can be non-zero either). This property can of course directly be obtained also from Eq.(\[eq:leadingEigenvalue\]), which in the limit $z = 0$ implies $$\lambda_{\pm}^{z=0} = \frac{p}{2}\,\big\{\, w\,\pm\, \sqrt{\, w^2\,}
\,\big\}\ \hspace{.2cm}\rightarrow\,\,\ p_c^{z=0} = 1/w\ .$$ Therefore, the functioning of dependency links is well-defined even if they connect nodes that do not belong to the giant connected activated component within each module. In the model of Refs.[@Buldyrev2010:Cascade; @Gao2012:Interdependent], on the other hand, inter-module links only exist if they connect nodes that belong to the largest connected activated component in their own module. Hence, it is impossible to construct the NoN from below $p_c$ (or above $q_c$) using dependency links. In the present robust model, we can construct the links even if the nodes are not in $G$, allowing us to build the NoN from below $p_c$ using dependency connections. Thus, the transition is well-defined from above and below the percolation threshold.
In conclusion, we have seen that the robustness in NoN can be understood to emerge if dependency links do not need to be part of the giant connected activated component $G$ for their proper functioning. In contrast to previously existing models of interdependent networks [@Buldyrev2010:Cascade; @Gao2012:Interdependent], dependencies in the robust NoN do not lead to cascades of failures. The key point in our model is that a node can be activated even if it does not belong to $G$. An example of the structure of NoN where the model applies is that of the brain [@gallos; @saulo; @preprint; @sigman]. While in Ref.[@saulo] we have shown that the model of [@Buldyrev2010:Cascade] becomes robust when correlations in the dependencies are considered, here we show that a local activation rule, Eq.(\[eq:sigma\]), akin to brain control between modules, defines a novel model of NoN which is robust even without correlations. The effect of degree correlations on the robustness of the NoN remains to be investigated [@saulo]. The model generalizes straightforwardly to directed links and to dependency connections that are not restricted to run across modules but may also lie inside each module.
[**Acknowledgment.**]{} We acknowledge funding from NSF PHY-1305476, NIH-NIGMS 1R21GM107641, NSF-IIS 1515022 and Army Research Laboratory Cooperative Agreement Number W911NF-09-2-0053 (the ARL Network Science CTA).
M. E. J. Newman, [*Networks: An Introduction*]{} (Oxford University Press, USA, 2010).
S. V. Buldyrev, R. Parshani, G. Paul, H. E. Stanley, and S. Havlin, [Nature]{} [**464**]{}, 1025 (2010).
J. Gao, S. V. Buldyrev, H. E. Stanley, and S. Havlin, [Nature Phys.]{} [**8**]{}, 40 (2012).
G. Bianconi, S. N. Dorogovtsev, and J. F. F. Mendes, [Phys. Rev. E]{} [**91**]{}, 012804 (2015).
R. Parshani, S. V. Buldyrev, and S. Havlin, [Phys. Rev. Lett.]{} [**105**]{}, 048701 (2010).
S. D. S. Reis, Y. Hu, A. Babino, J. S. Andrade Jr, S. Canals, M. Sigman, and H. A. Makse, [Nature Phys.]{} [**10**]{}, 762 (2014).
F. Morone, K. Roth, B. Min, H. E. Stanley, and H. A. Makse, (submitted, 2016) http://bit.ly/1YuumcS
L. K. Gallos, H. A. Makse, and M. Sigman, [Proc. Natl. Acad. Sci. USA]{} [**109**]{} 2825 (2012).
C. D. Gilbert and M. Sigman, [*Neuron*]{} [**54,**]{} 677 (2007).
F. Morone, and H. A. Makse, [Nature]{} [ **524**]{}, 65 (2015).
B. Karrer, M. E. J. Newman, and L. Zdeborová, [Phys. Rev. Lett.]{} [**113**]{}, 208702 (2014).
S. N. Dorogovtsev, J. F. F. Mendes, and A. N. Samukhin, [Nucl. Phys. B]{} [**653**]{}, 307 (2003).
M. Mézard, and A. Montanari, [*Information, Physics, and Computation*]{} (Oxford University Press, USA, 2009).
|
---
abstract: |
Coalition forming is investigated among countries which are coupled with short range interactions, under the influence of external fields produced by the existence of global alliances. The model rests on the natural model of coalition forming inspired from Statistical Physics, in which instabilities are a consequence of the decentralized maximization of the individual benefits of actors within their long horizon of rationality, i.e. the ability to envision a path through intermediate losing states towards a better configuration. The effects of those external incentives on the interactions between countries and the eventual stabilization of coalitions are studied. The results shed new light on the understanding of the complex phenomena of stabilization and fragmentation in coalition dynamics and on the possibility to design stable coalitions. In addition to the formal implementation of the model, the phenomenon is illustrated through some historical cases of conflicts in Western Europe.
**Keywords:** Social Models, Statistical Physics, Coalition Forming, Coalition Stabilization, Political Instability.
author:
- |
[[**Galina Vinogradova**]{}]{}[^1]\
[[*CREA - Center of Research in Applied Epistemology, Ecole Polytechnique*]{}]{}\
[[*Palaiseau, France*]{}]{}\
- |
[[**Serge Galam**]{}]{}[^2]\
[[*CNRS - National Center of Scientific Research*]{}]{}\
[[*Paris, France*]{}]{}
title: The Stabilizing Role of Global Alliances in the Dynamics of Coalition Forming
---
Introduction
============
This work is devoted to the study of stabilization in coalition forming in a collective of individual actors under the influence of external fields. The model rests on the natural model of coalition forming [@GAVNM] inspired from the Statistical Physics’ model of Spin Glasses [@SGM], through which the system of countries is compared to a collection of interacting spins – tiny magnetic dipoles that interact with each other and align themselves in a way to attain the most “comfortable” position, the one that minimizes their energies. While the presentation addresses the coalition forming in an aggregate of countries, the discussion and the results can be applied to any type of political, social or economical collectives where the association of actors takes place based on their bilateral propensities.
This work subscribes to the growing field of modeling complex social situations using Statistical Physics [@CFLSF], which started over thirty years ago with [@SYY]. Later, a study of collective decision making combining Social Psychology hypotheses with recent concepts from Statistical Physics [@MG] set the frame for using spin Hamiltonians. Then, the coalition as a form of aggregation among a set of actors (countries, groups, individuals) has been studied using concepts from the theory of Spin Glasses [@Axel; @FVS; @GC; @SDO; @Flo; @APIM]. Various social applications of the model were suggested [@SPC; @TBISCF; @GAVNM]. The dynamical analogue of this model was introduced in [@GAVDP].
The model of coalition forming among countries leans on the existence of strong and static bilateral geographic-ethnic propensities linking the countries. Those propensities have emerged during the ongoing historical interactions between neighboring countries and appear to favor either cooperation or conflict. Their spontaneous and independent evolution has produced an intricate circuit of bilateral bonds which causes contradictory tendencies in the simultaneous individual searches for optimal coalitions. Due to stronger interactions with a common ally, conflicting countries may be brought to cooperate momentarily despite their natural tendency to conflict. Such a situation produces an endeavor of the concerned countries to escape from the unfavorable cooperation, leading to instabilities, which in turn produce a breakdown of the current coalitions, inducing the formation of new ones.
The origin of such instability is twofold: it either comes from spontaneous fluctuations or is directed by external attraction towards a global alliance. The extremely disordered dynamics of coalitions and fragmentation in Western Europe in past centuries belong to the first kind, while the building up of the Soviet and NATO global alliances is of the second kind.
In this work we aim to study the instability of coalition forming among countries, which, in contrast to physical entities, are rational actors that are able to maximize their individual benefits through a series of choices within a decentralized maximization process.
On this basis, coalitions are formed through the short range interactions between the countries – the attraction or repulsion based on the unalterable historical bonds between them. According to the principle that “ the enemy of an enemy is a friend”, the countries are assumed to ally to one of two competing coalitions.
Allying to the same coalition is unfavorable to the countries which went through historical rejection. As a result, such countries seek to affiliate with the opposite coalitions. Alternatively, allying to the opposite coalition is unfavorable to friendly countries. Countries which belong to the same coalition are expected to cooperate even if their natural propensity is to conflict. Such a contradiction results in a potential instability.
Our previous study [@GAVNM] focused on studying the effects of instabilities arising in the coalition forming among rational actors as a function of the bonds structure, the optimal and non-optimal stabilizations as well as the robustness of the stability.
In this work the model is extended to investigate the mechanisms by which the setting of a global alliance produces attraction in an aggregate of individual countries otherwise connected by their natural bilateral propensities. In particular, the focus is on how those new interactions can eventually stabilize the intrinsically unstable process of coalition forming while keeping the short range nature of the interactions. Global attraction ensues from a global external field set over the system of countries, which in turn polarizes the countries’ interests and produces incentives for unification under two opposing *global alliances*. The resulting coalitions are affected by the net bilateral balance between the new motivations and the traditional historical ones.
We focus on how the interactions produced by attraction to global alliances overwhelm the current instability among the countries. The results provide new theoretical tools that make it possible to measure the efficiency of a global attraction in forming stable alliances, as well as to theoretically design new, effective global attractions that can yield stability.
The study of stabilization of coalitions using global alliances was started in [@SPC]. The authors describe spontaneous formation of economic coalitions given a random distribution of propensity bonds, and illustrate new exchanges between the countries incited by the global alliances. Those exchanges, along with an additional parameter of economical and military pressure, are viewed as the ones that produce additional bilateral propensities yielding new stable coalitions.
In the current work, we develop further the research on coalition forming under a global external field. We address stabilization by a unique factor – such as an economical, political, social, or ecological one – as well as multi-factor stabilization, where the influence of several independent factors is equiprobable. Based on the new formulation, we investigate remarkable historical cases of conflicts in Western Europe.
The multi-factor stabilization is an innovative concept both in Political Sciences where it explains the complexity of coalition forming, and in Statistical Physics where it illustrates how a stable disorder arises from an anti-ferromagnetic coupling achieved by the interlocking of two opposite ferromagnetic states. Some forms of such mixed phases of ferromagnetism have been studied in [@CSGM].
Background – The Natural Model and Instability {#sec_model}
==============================================
The Spin Glass model in Statistical Physics is an idealized model of bulk magnetism represented by a collection of interacting spins – atoms acting as a tiny dipole magnet with a mixture of ferromagnetic and anti-ferromagnetic couplings. Those magnets interact with each other seeking to align themselves parallel or anti-parallel in order to minimize their energies. The collection of spins forms a disordered material in which the competing interactions cause high magnetic frustration – changes of spins at no energy cost, with a highly degenerate ground state.
The Ising model of a random bond magnetic system can be described as follows. The model consists of $N$ discrete variables ${\ensuremath{\{S_i\}}}_{1}^{N}$, called spins, that can be in one of two states, *up* or *down*. Figure (\[spin\_glass\]) shows schematically the case of $8$ spins with identical amplitude of the propensity bonds located on a lattice and interacting at most with their nearest neighbors. The spins for which a shift of the state costs no energy are defined as frustrated.
![Ising model of $8$-spins with mixed pair interactions. The pair propensity bonds are denoted by $+$ or $-$, and states of the spins are denoted by the arrows. Frustrated spins are marked by both up and down arrows. This Spin Glass phase yields an unstable disorder.[]{data-label="spin_glass"}](spin_glas){width="2.3in"}
The natural model of coalition forming is formally identical to the Ising model with pure or mixed anti-ferromagnetic couplings in a particular geometry of the lattice. The model considers a system of $N$ countries whose historical interactions have defined propensity bonds between them, which are either positive (ferromagnetic-like) or negative (anti-ferromagnetic-like). To each country, labeled with an index $i$ ranging from $1$ to $N$, is attached a discrete variable $S_i$ which can assume one of two state values, $S_i=+1$ or $S_i= -1$. The values correspond to the country’s choice between the two possible coalitions. The same choice allies two countries to the same coalition, while different choices separate them into opposite coalitions.
The combination of the states of all the countries, $S= {\ensuremath{\{S_1, S_2, S_3, \dots, S_N\}}}$, forms a state configuration that defines an allocation of coalitions. Here, by symmetry, both a configuration $S$ and its inverse $-S = {\ensuremath{\{-S_1, -S_2, -S_3, \dots, -S_N\}}}$ define the same coalitions.
Bilateral propensities $J_{i,j}$ have emerged from the respective mutual historical experience between the countries $i$ and $j$. The propensities measure the amplitudes and the directions of the exchange between two countries – cooperation or conflict. $J_{i,j}$, which is symmetric, is zero when there are no direct exchanges between the countries.
The product $J_{ij} S_iS_j$ measures the benefit or gain from the interaction between the two countries as a function of their choices. Aiming to maximize this measure, the countries seek to ally to the same coalition when $J_{ij}$ is positive and to opposing ones otherwise. Thus, depending on the direction of the primary propensity, conflict can be as beneficial as cooperation.
The sum of the benefits from all the interactions of country $i$ in the system makes up the net gain of the country: [$$\label{countr_gain} \begin{array}{l} H_i(S) = S_i\sum_{j\ne i} \hspace{0.1in}
J_{ij}S_j. \end{array}$$ ]{} Thus, a configuration $S$ that maximizes the gain function defines the country’s most beneficial coalition setting.
For the sake of visualization, we depict the system of countries through a connected weighted graph with the countries in the nodes and the bilateral propensities as the weights of the respective edges (see Figure (\[trian\_natural\_m\])). We take red (dark) color for the $+1$ choice and blue (light) color for the $-1$ choice.
![Triangle of three conflicting countries $1$, $2$, $3$ with negative mutual bonds. []{data-label="trian_natural_m"}](trian_natural_m){width="1.5in"}
The total gain of the system of countries is identical to the Hamiltonian of an Ising random bond magnetic system which represents the energy of the system. For a configuration $S$, we have for system’s gain : [$$\label{syst_gain} \begin{array}{l} \mathcal{H(S)} =
\frac{1}{2} \sum_i \hspace{0.1in} {H_i(S)}. \end{array}$$ ]{}
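For concreteness, a minimal Python sketch evaluating Eq.(\[countr\_gain\]) and Eq.(\[syst\_gain\]) for the triangle of conflict, taking $J_{ij}=-1$ on every edge (an illustrative assumption; the figure only specifies that the bonds are negative):

```python
# Minimal sketch: individual gains H_i(S) and system gain H(S) for a triangle
# of three conflicting countries with assumed bonds J_ij = -1.
import itertools

J = {(0, 1): -1, (0, 2): -1, (1, 2): -1}          # symmetric propensity bonds

def country_gain(i, S, J):
    g = 0
    for (a, b), Jab in J.items():
        if a == i:
            g += Jab * S[b]
        elif b == i:
            g += Jab * S[a]
    return S[i] * g

def system_gain(S, J):
    return 0.5 * sum(country_gain(i, S, J) for i in range(len(S)))

for S in itertools.product([+1, -1], repeat=3):
    print(S, [country_gain(i, S, J) for i in range(3)], system_gain(S, J))
# e.g. for (+1, -1, -1): country 1 gains 2 while countries 2 and 3 sit at 0,
# illustrating the frustration of the cyclic geometry.
```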
In physical systems, the Hamiltonian – the function that determines the physical properties of the spin system – is precisely concerned with the minimization of the system’s energy. This physical analogy allows us to address the bilateral propensities between the countries as the means of maximizing the countries’ individual gains (minimizing their energies) and as the principal guide in coalition forming.
A major difference between the model of spins and the model of rational countries is the long horizon rationality of the countries, in contrast to the spins, which are able to foresee only the immediate effect of their shifts. Countries have the ability to maximize their individual benefits through a series of planned changes while assuming possible losses in the intermediate phase.
The Ising model, indeed, can be represented through the natural model where the countries’ rationality is limited to the observation of an immediate gain, so that they only reach local maxima.
When the most beneficial coalition configurations of different countries do not coincide, the maximization of individual gains induces competition for the beneficial associations. Among countries with complete rationality, which are aware that a better configuration is attainable, those competing interactions cause endless instability in the system. However, the system may remain stable when some actors have limited rationality – being unaware of the possibility of attaining a better configuration, they are satisfied having reached a local maximum.
Figure (\[trian\_nat\_config\]) shows the triangle of conflict in a configuration that is stable when the rationality of countries $2$ and $3$ is limited to immediate improvements; any change causes a loss in their gain. The triangle is unstable when the countries are fully rational: their most beneficial configurations $S_1= {\ensuremath{\{{\ensuremath{\{+1, -1, -1\}}}, {\ensuremath{\{-1, +1, +1\}}}\}}}$, $S_2= {\ensuremath{\{{\ensuremath{\{-1, +1, -1\}}}, {\ensuremath{\{+1, -1, +1\}}}\}}}$, $S_3= {\ensuremath{\{{\ensuremath{\{-1, -1, +1\}}}, {\ensuremath{\{+1, +1, -1\}}}\}}}$ do not coincide. Here, in any coalition configuration, at least two of the countries improve their gain when the other changes. As a result, aiming at their best configurations and being able to forecast an improvement at any step, the countries may make changes that impair the gain in the immediate steps.
![The triangle of conflict. The triangle is stable when the rationality of countries $2$ and $3$ is limited to immediate improvements; any change causes a loss in their gain. The triangle is unstable when the countries are fully rational: their most beneficial configurations do not coincide and the countries, being able to forecast an improvement, make changes that impair the gain in the immediate steps. []{data-label="trian_nat_config"}](trian_nat_config){width="1.5in"}
It is interesting to note that for the case of equal propensities over the edges, the triangle of conflict is unstable even for actors with limited rationality, including the spins, due to the zero-value gain produced by the cyclic geometry, resulting in no-cost frustrations.
The system of countries is said to be unstable if in any configuration of the countries’ states there is a country which is able to forecast an improvement of its gain.
A negative product on a circle means an unpaired negative coupling, where two neighbors are found to be connected through both a positive and a negative branch of the circle. This creates an everlasting competition between the neighbors for the exclusive arrangement to ally with the positive branch. The countries thereby continuously shift their respective choices, producing the instability.
In Statistical physics the necessary condition of instabilities in Spin Glasses [@GT] reads that [$$\label{stab_spins} \begin{array}{l} $
\emph{the instability implies the existence of a closed circle
of spins connected with the bonds on which}$\\
$\emph{the product of total bonds is negative}.$ \end{array}$$ ]{} Indeed, the Spin Glasses’ instability is a result of frustrations, and a negative circle can appear to be stable as soon as a shift increases the spin energy preventing the spin flop.
In contrast to the Spin Glass model, where changes are limited to spontaneous no-cost fluctuations, in the natural model the instability is due to the rationality of actors and changes may impair the immediate gain. In the theoretical interpretation of the model where the complete rationality of all countries is assumed, the terms (\[stab\_spins\]) are also the sufficient condition of instability in the model – the condition of endless competitions among the countries for the beneficial configurations.
Formally, the theoretical terms of instability in the natural model are as follows. Denote a circle of countries by ${\mathcal{C}}$ and the countries composing the circle by $1,2, \ldots, k$. [$$\label{stab_countr} \begin{array}{l} $\emph{If there is a closed circle of
countries on which the product of total propensities is negative},$\\
\hspace{2in} \Pi_{i,j \in {\mathcal{C}}} \hspace{0.1in} p_{ij} < 0 \hspace{0.02in}\\
$\emph{then the system is unstable}.$ \end{array}$$ ]{}
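A minimal sketch of this instability criterion, computing the sign of the propensity product along a closed circle; the propensity values are again the illustrative $J_{ij}=-1$ of the triangle of conflict:

```python
# Minimal sketch: a system is unstable if some closed circle of countries carries
# a negative product of propensities.
def circle_product(circle, p):
    """circle: node sequence [c_1, ..., c_k]; p[(i, j)]: symmetric propensity."""
    prod = 1
    for a, b in zip(circle, circle[1:] + circle[:1]):   # consecutive pairs, closing the loop
        prod *= p.get((a, b), p.get((b, a)))
    return prod

p = {(0, 1): -1, (1, 2): -1, (0, 2): -1}                # triangle of conflict (assumed values)
print(circle_product([0, 1, 2], p))                     # -1 < 0: the triangle is unstable
```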
Let us remark that, in the theoretical interpretation of the model where the complete rationality of all countries is assumed, the instability is not value-dependent but is determined by the signs of the propensities – it occurs whenever the distribution of signs involves a negative circle. At the same time, any local maximum strictly depends on the propensities’ values.
Global Alliance Model Of Coalition Forming
==========================================
The global alliance model starts from a global principle which represents an external field polarizing the interests of the countries. This leads to the emergence of two opposing global alliances. The countries attach themselves to one or the other based on their pragmatic interests with respect to the global principle. The new interactions, while favoring either cooperation or conflict, stimulate contributions to the countries’ mutual propensities. The new prospects unify or separate the countries based on pragmatic motivations which, in combination with the historical concerns, allow other distributions of coalitions.
Here we address the role of the global alliances in the forming of stable coalitions among countries or other rational actors. Whether the system is unstable or possesses an optimal or local-maximum stable configuration, the new exchanges between the countries incited by the global alliances impact the stability. For the sake of simplicity of presentation, we assume the extensive rationality of the countries. While in such a theoretical interpretation the instability is not value-dependent, the effect of the globally generated additional propensities on the stability does depend on the values of the primary propensities. Therefore, in spite of the extensive rationality of the countries, we address the model with an arbitrary range of values.
Let us define the global alliance model formally. The global alliance unifies the countries that support the global principle, while its opponents are unified under the opposing global alliance. Denote the two alliances by $M$ and $C$. A country’s individual disposition to the alliances is determined by the countries’ cultural and historical experiences and is expressed through the parameter of *natural belonging*. The natural belonging parameter of country $i$ is $\epsilon_i = +1$ if the country has natural attraction to alliance $M$, $\epsilon_i = -1$ for $C$.
By making a choice among the two possible state values $S_i=+1$ and $S_i= -1$, country $i$ chooses to belong to either alliance $M$ or $C$. The choice of $+1$ allies the country to alliance $M$ and the choice of $-1$ allies it to alliance $C$. Any particular distribution of two countries among the alliances creates new interactions between the countries whose directions depend on the natural dispositions of the countries. Namely, if the countries are attracted to the opposing alliances, the exchange will be negative as soon as they ally to the same alliance.
Those new exchanges between any two countries $i$ and $j$ define additional propensity between the countries. The propensity is the amplitude of the exchange $G_{ij}$ in the direction $\epsilon_i
\epsilon_j$ that favors either cooperation or conflict. For the purpose of this presentation we assume that the exchange amplitudes are unchanged.
The overall propensities between the countries, involving both the historical inclinations and the propensities resulting from the new exchanges, are determined as follows [$$\label{add_prop} \begin{array}{l} p_{ij}= J_{ij} + \epsilon_i \epsilon_j
G_{ij}. \end{array}$$ ]{} Respectively, the net gain of country $i$ is [$$\label{glob_gain} \begin{array}{l} H_i = S_i\sum_{j \ne i} \hspace{0.1in}
{(J_{ij} + \epsilon_i \epsilon_j G_{ij})S_j}. \end{array}$$ ]{}
Thus, in the presence of external incentives of the global alliances, the couplings between the countries obtain new guidance. The countries adjust their states to the best benefit with regards to the new propensities. The new choice of coalition is determined by both spontaneous reactions and planned interactions, which enable coupling based on a planned profit.
Stabilization Of The System By Additional Factors
=================================================
Here we address the stabilization of coalition forming in systems where rational countries have no optimal configuration of coalitions and where, as a result, spontaneous stabilization cannot be attained. In such systems, interactions based on global alliances enable stable coalitions among the actors even though they remain of short-range nature. Such interactions, however, being a complex superposition of several factors of the countries’ objectives, must satisfy particular stability constraints.
The Uni-Factor Stabilization
----------------------------
Consider two opposing global alliances $M$ and $C$ in a system of $N$ countries. A particular factor of the countries’ interests produces specific dispositions to the global alliances which encourage new exchanges between the countries. The appropriate amplitudes of the exchanges enable the stabilization among the countries, the *uni-factor stabilization*.
With respect to a unique factor of stabilization, the necessary and sufficient condition of stability (reformulating terms (\[stab\_countr\])) is that [$$\label{glob_total_prop_eq} \begin{array}{l} $\emph{A system is stable if and only if for any circle $ {\mathcal{C}}$ in the system,} $ \\
\Pi_{i,j \in {\mathcal{C}}} \hspace{0.1in} (J_{ij} + \epsilon_i \epsilon_j
G_{ij}) \ge 0. \end{array}$$ ]{}
Now we state the existence of a stable coalition within the global alliance model.
The presence of global alliances, regardless of the global principle that produced them, enables a stable coalition among countries.
In order to prove this statement, let us first observe that the product of the additional propensities $p^{{\mathcal{G} \xspace}}_{ij}=
\epsilon_i \epsilon_j G_{ij}$ on any circle is always positive. Indeed, given circle ${\mathcal{C}}$, [$$\begin{array}{l} \Pi_{i,j \in {\mathcal{C}}}
\hspace{0.1in} G_{ij} \hspace{0.1in} \epsilon_i \epsilon_j =
\Pi_{i,j \in {\mathcal{C}}} \hspace{0.1in} G_{ij} ( \epsilon_1 \epsilon_2
\epsilon_3 \ldots \epsilon_{k}) ^2 = \Pi_{i,j \in {\mathcal{C}}}
\hspace{0.1in} G_{ij}. \end{array}$$ ]{} This implies that on any circle, the number of negative couplings produced by the global alliances is even. If the system is unstable, then there is at least one negative circle. We define the new interaction amplitudes as follows. For each couple $i,j$ with $\epsilon_i \epsilon_j < 0$ we take $G_{ij} = 0$ if the primary propensity is negative and $G_{ij} = 2 |J_{ij}|$ for a positive original coupling. When $\epsilon_i \epsilon_j
> 0$, we take $G_{ij} = 2 |J_{ij}|$.
Making the new propensities negative for the negative global couplings and positive for the positive ones, guarantees that there is an even number of negative couplings on the circle. This remains invariant for each circle in the system, which implies that the construction produces non-negative product on any circle in the system. The stability condition (\[glob\_total\_prop\_eq\]) holds true which concludes the proof of the statement.
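As an illustration, a minimal sketch of the constructive choice of amplitudes used in the proof, applied to the triangle of conflict with an arbitrary assignment of natural belongings:

```python
# Minimal sketch of the constructive choice: G_ij = 0 if eps_i*eps_j < 0 and J_ij < 0,
# and G_ij = 2|J_ij| otherwise; then p_ij = J_ij + eps_i*eps_j*G_ij.
def stabilizing_amplitudes(J, eps):
    G = {}
    for (i, j), Jij in J.items():
        if eps[i] * eps[j] < 0 and Jij < 0:
            G[(i, j)] = 0
        else:
            G[(i, j)] = 2 * abs(Jij)
    return G

def total_propensities(J, G, eps):
    return {(i, j): Jij + eps[i] * eps[j] * G[(i, j)] for (i, j), Jij in J.items()}

J = {(0, 1): -1, (1, 2): -1, (0, 2): -1}      # unstable triangle of conflict (assumed values)
eps = {0: +1, 1: -1, 2: -1}                   # arbitrary natural belongings
p = total_propensities(J, stabilizing_amplitudes(J, eps), eps)
print(p)   # {(0,1): -1, (1,2): 1, (0,2): -1}: the signs follow eps_i*eps_j, so the circle
           # carries an even number of negative bonds and its product is non-negative
```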
### A Case of the England-Spain-France Triangle
A typical example of the uni-factor stabilization is the stabilization of the triangle of England, Spain and France (\[engspfr\]) during the historical events of $1584$ [@JISP].
\[conf\_tri\_unistab\]
Against the background of a sequence of wars in old Europe, the countries attained stability when in $1584$ Catholic Spain and France formed an alliance against the Protestant forces, the most notable of which were settled in England.
In order to illustrate the historical example using the global alliance model, we describe the propensities between the countries as ranging from “negative” to “positive” through mixed ones, attaching to them numerical values with respect to their relative strength and taking “neutral” as $0$.
Accounting for the historical relationships between England, Spain and France, we take the propensities as “neutral-negative”, “negative” and “highly negative”. Their numerical interpretations, as shown in Figure (\[engspfr\]), are arbitrary values that aim to account for the relative strength of the interactions.
![Triangle of England ($E$), Spain ($S$) and France ($F$), the $ESF$-conflicting triangle. []{data-label="engspfr"}](engspfr){width="1.5in"}
By $M$ and $C$ we denote the two opposing global alliances – the countries in $M$ choose unification into a “European union” and those in $C$ are against the unification. With respect to the religious factor, Catholic Spain and France were naturally associated to $M$ ($\epsilon_S, \epsilon_F = 1$), while Protestant England was associated to $C$ ($\epsilon_E = -1$). Then, $\epsilon_S
\epsilon_F = 1$, $\epsilon_E \epsilon_S = \epsilon_E \epsilon_F =
-1$, and the overall propensities between the three countries are:\
$p_{SE}= -3 - G_{SE}$, $ p_{EF} = -1 - G_{EF}$ and $p_{SF}= -2 +
G_{SF}$.
Solving the inequality [$$\label{trian_stab_term} \begin{array}{l} (-3 - G_{SE})(-1 - G_{EF})(-2 + G_{SF}) \ge 0 \end{array}$$ ]{} yields the constraint that the new interaction amplitudes $G_{SE}, G_{EF}, G_{SF}$ must satisfy in order to stabilize the triangle. Since $G_{EF}$, $G_{SE}$ and $G_{SF} > 0$, the only root of the respective equality is $G_{SF} = 2$. The solution space, $G_{SF} \ge 2$, depicted in Figure (\[trian\_sol\]), represents a three-dimensional region of the independent additional propensities.
![Three-dimensional solution space of the independent additional propensities in the uni-factor stabilization of the $ESF$- triangle of conflict.[]{data-label="trian_sol"}](trian_sol){width="2in"}
In the historical example, the coalition of Spain and France against England implies that the amplitude of their new interaction belonged to the solution space. The respective stable configuration is $S = (+1, -1, -1)$, as shown in Figure (\[engspfr\_stab\]), where $G_{EF}$ and $G_{SE}$ are taken to be $0$ and $G_{SF}$ to be $3$, so that the corresponding total propensities become $-1$, $-3$ and $1$.
![The global alliances model of the $ESF$-triangle stabilized by the religious factor in configuration $~{S= (+1, -1, -1)}$. Here, $G_{EF} = G_{SE} =0$ , $G_{SF} = 3$, so that the respective resulting total propensities are $-1$,$-3$ and $1$. []{data-label="engspfr_stab"}](engspfr_stab){width="1.5in"}
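As a numerical cross-check of this uni-factor stabilization, the sketch below recomputes the total propensities with $G_{SE}=G_{EF}=0$ and $G_{SF}=3$ and enumerates the coalition configurations that maximize the system gain:

```python
# Minimal check of the uni-factor stabilization of the E-S-F triangle: the circle
# product turns positive and the best coalition splits {Spain, France} against {England}.
import itertools

names = ["E", "S", "F"]
J   = {("S", "E"): -3, ("E", "F"): -1, ("S", "F"): -2}
eps = {"E": -1, "S": +1, "F": +1}
G   = {("S", "E"): 0, ("E", "F"): 0, ("S", "F"): 3}

p = {e: J[e] + eps[e[0]] * eps[e[1]] * G[e] for e in J}     # total propensities
print(p)                                                    # values -3, -1, +1: positive circle

def system_gain(S):
    return sum(p[(a, b)] * S[a] * S[b] for (a, b) in p)

best = max(itertools.product([+1, -1], repeat=3),
           key=lambda s: system_gain(dict(zip(names, s))))
print(best)                                                 # e.g. (+1, -1, -1): England vs. Spain-France
```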
It is interesting to observe that:
Any system of countries in the global alliance model with a unique factor of interests is reducible to a stable system represented in the natural model.
Indeed, given a system in the global alliance model, let us define the new state variable to be $\tau_{i} = \epsilon_i S_i$. The variable takes values in ${\ensuremath{\{ +1,-1\}}}$. Then, the Hamiltonian $H_i$ of country $i$ can be written in terms of the new state variables as [$$\begin{array}{l} H_i = \sum_{i \ne j} \hspace{0.1in}(
J_{ij}S_iS_j + G_{ij}\epsilon_i \epsilon_jS_iS_j) = \sum_{i\ne j}
\hspace{0.1in} (J_{ij}\epsilon_i \epsilon_j + G_{ij})\tau_i \tau_j. \end{array}$$ ]{} Here, since $G_{ij}$ is positive, some choice of ${\ensuremath{\{G_{ij}\}}}_{i,j}$ produces the propensities that guarantee a stable system.
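A quick numerical verification of this change of variables on a small random instance; all values below are arbitrary test data:

```python
# Check that the gain computed with (J, G, eps, S) equals the one computed with the
# effective bonds J_ij*eps_i*eps_j + G_ij and the gauged states tau_i = eps_i*S_i.
import itertools, random

random.seed(0)
N = 4
pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
J   = {e: random.choice([-2, -1, 1, 2]) for e in pairs}
G   = {e: random.choice([0, 1, 2]) for e in pairs}
eps = {i: random.choice([-1, 1]) for i in range(N)}

for S in itertools.product([-1, 1], repeat=N):
    tau = [eps[i] * S[i] for i in range(N)]
    lhs = sum((J[e] + eps[e[0]] * eps[e[1]] * G[e]) * S[e[0]] * S[e[1]] for e in pairs)
    rhs = sum((J[e] * eps[e[0]] * eps[e[1]] + G[e]) * tau[e[0]] * tau[e[1]] for e in pairs)
    assert lhs == rhs
print("gauge check passed")
```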
The Multi-Factor Stabilization
------------------------------
Taking into account only one factor of the countries’ interests would be too restrictive – along with religious interests, the global principle may impact economical, ecological, moral, political or any other interests and concerns. Distinct interests simultaneously influence the interactions between the countries in different ways. They modify the countries’ propensities by aggregating the corresponding independent interactions – economical, political and others.
Let us define formally the multi-factor form of the global alliance model through two coexisting factors of interests, denoted by ${\mathcal{G} \xspace}$ and ${\mathcal{K} \xspace}$ respectively. Within each factor, a country has independent natural disposition to the global alliances. Therefore, each country has two independent natural belonging parameters associated with the factors. For country $i$, this is $\epsilon_i = +1$ if within factor ${\mathcal{G} \xspace}$ the country naturally belongs to $M$. Similarly, $\beta_i = +1$ within factor ${\mathcal{K} \xspace}$. For the global alliance $C$, $\epsilon_i = -1$ and $\beta_i = -1$ respectively.
We denote by $G_{ij}$ the amplitude of the exchanges between the countries $i$ and $j$ on factor ${\mathcal{G} \xspace}$, and by $K_{ij}$ the amplitude on ${\mathcal{K} \xspace}$. The new propensities between the countries $i$ and $j$ on the two factors are then $p^{{\mathcal{G} \xspace}}_{ij} = \epsilon_i \epsilon_j G_{ij}$ and $p^{{\mathcal{K} \xspace}}_{ij} = \beta_i \beta_j K_{ij}$. The two-factor form of the global alliance model superposes the spontaneous interactions of the natural model with the intended interactions based on the two-dimensional choice among the global alliances: $p_{ij}= J_{ij} + \epsilon_i \epsilon_j G_{ij} + \beta_i \beta_j K_{ij}$. The net gain of country $i$ is [$$\begin{array}{l} H_i = S_i
\sum_{j\ne i} \hspace{0.1in} S_j( J_{ij} + G_{ij}\epsilon_i
\epsilon_j + K_{ij} \beta_i \beta_j) . \end{array}$$ ]{}
In order to illustrate the multi-factor stabilization, we turn again to Example (\[conf\_tri\_unistab\]), the stabilization of the conflicting $ESF$-triangle.
### Multi-factor Stabilization of the England-Spain-France Triangle {#conf_tri_multistab}
We assume, in addition to the religious factor ${\mathcal{G} \xspace}$ in the conflicting $ESF$-triangle, that there is an economic factor ${\mathcal{K} \xspace}$. In its golden age, Spain had a pronounced disinclination to any economic unification with its old enemies, while England and France recognized the advantages of such a unification. Therefore, the respective parameters of natural belonging on the economic factor ${\mathcal{K} \xspace}$ are $\beta_S = -1$ and $\beta_E = \beta_F = 1$. Taking both factors into account, the overall propensities between the countries are: $p_{SE}= -3 - G_{SE} - K_{SE}$, $p_{EF}=
-1 - G_{EF} + K_{EF}$, and $ p_{SF}= -2 + G_{SF} - K_{SF}$.
Solutions of the inequality [$$\label{trian_multi_fact_eq} \begin{array}{l} \Pi_{i,j
\in {\mathcal{C}}} \hspace{0.1in} p_{ij}= (-3 - G_{SE} - K_{SE})(-1 - G_{EF}
+ K_{EF})(-2 + G_{SF} - K_{SF}) \ge 0 \end{array}$$ ]{} yield the exchange amplitudes that guarantee stability of the $ESF$-triangle in the multi-factor form. Since $p_{SE} = -3 - G_{SE} - K_{SE}$ is always negative, the product of the two remaining propensities must be non-positive: the solution must satisfy $-G_{EF} + K_{EF} \ge 1$ and $G_{SF} - K_{SF} \le 2$, or $- G_{EF} +
K_{EF} \le 1$ and $G_{SF} - K_{SF} \ge 2$ (see Figure (\[trian\_2\_sol\])).
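The following sketch (illustrative values only) builds the two-factor propensities from the belonging parameters, as in the superposition formula above, and scans a slice of the amplitude space, flagging which combinations satisfy the non-negative circle product of Eq. (\[trian\_multi\_fact\_eq\]); the religious amplitudes are fixed here to $G_{SE}=G_{EF}=0$, $G_{SF}=3$ for display.

```python
# Two-factor stabilization scan of the ESF triangle: propensities are built as
# p_ij = J_ij + eps_i*eps_j*G_ij + beta_i*beta_j*K_ij and the circle product
# is required to be non-negative. Economic amplitudes K_EF, K_SF are scanned
# with K_SE fixed to 0 (all numerical choices are illustrative).
J = {("S", "E"): -3, ("E", "F"): -1, ("S", "F"): -2}
eps = {"E": +1, "S": -1, "F": -1}        # religious factor
beta = {"E": +1, "S": -1, "F": +1}       # economic factor
G = {("S", "E"): 0, ("E", "F"): 0, ("S", "F"): 3}

def stable(K):
    circle = 1
    for (i, j) in J:
        circle *= J[i, j] + eps[i] * eps[j] * G[i, j] + beta[i] * beta[j] * K[i, j]
    return circle >= 0

for K_EF in range(5):
    row = []
    for K_SF in range(5):
        K = {("S", "E"): 0, ("E", "F"): K_EF, ("S", "F"): K_SF}
        row.append("stable" if stable(K) else "unstable")
    print(f"K_EF={K_EF}:", row)
```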
![Three-dimensional solution spaces of the independent additional propensities in the two-factor stabilization of the $ESF$-triangle.[]{data-label="trian_2_sol"}](trian_2_sol){width="3in"}
In the historical reality of this period, the economic factor ${\mathcal{K} \xspace}$ could not produce interactions as strong and significant as the exchanges on the religious factor. That is why $K_{EF} < 1 + G_{EF}$ and $K_{SF} < G_{SF} + 2$, which prevented the $ESF$-triangle from reaching stability until religion took a secondary place, conceding importance to economics. See Figure (\[engspfr\_multi\]), where $G_{SE}$ is taken to be $0$, $G_{EF}$ to be $2$ and $G_{SF}$ to be $3$, so that the respective total propensities become $1$, $-3$, $1$.
![The global alliances model of the $ESF$-triangle in the multi-factor case. Here, $G_{SE} = 0$, $G_{EF} = 2$ and $G_{SF} =
3$, so that the respective total propensities become $1$, $-3$, $1$. The system remained unstable because the global exchange amplitudes did not satisfy the terms of stability – the circle remained negative. []{data-label="engspfr_multi"}](engspfr_multi){width="1.5in"}
It is worth noticing that in the multi-factor form, a system in the global alliance model can no longer be interpreted as a system in the natural model as soon as the choices of at least two countries differ on at least two factors. Still, the general multi-factor case can be reduced to the two-factor form of the global alliance model: one of the factors unifies the amplitudes of all the positive couplings and the other unifies those of all the negative ones.
Therefore, without loss of generality, the multi-factor form of the global alliance model can be studied through the case of two coexisting factors. This also explains the fact that in the majority of cases, only two camps of opposing concerns play the crucial role in coalition forming.
Physical Interpretation of the Multi-Factor Stabilization
=========================================================
In the context of Statistical Physics, the multi-factor stabilization is equivalent to the superposition of the unstable disorder of a spin glass with two stable ferromagnetic orders (two factors) which split the spins into two directions (two alliances). Each spin’s absolute direction is the average of those ferromagnetic directions, as shown in Figure (\[spin\_glass\_two\_fact\]). Among the two opposite directions, either one of them dominates or the two eliminate each other, thus neutralizing the ferromagnetic states on the spin. In the figure, thick arrows indicate the absolute directions of the spins, and thin arrows show their ferromagnetic directions.
![An Ising model of $8$ spins with initially mixed negative and positive pair interactions (highlighted in grey) is stabilized by the mixing of two ferromagnetic states. Each spin’s absolute direction (marked by the thick arrows) is the average of those ferromagnetic directions. Among the two opposite directions, either one of them dominates or the two eliminate each other, thus neutralizing the ferromagnetic states on the spin. The Spin Glass phase yields a stable disorder.[]{data-label="spin_glass_two_fact"}](spin_glas_two_fact){width="2.3in"}
The multi-factor stabilization of coalition forming is an innovative concept both in Political Sciences and in Statistical Physics. In the former, it explains the multitude of elements influencing coalition forming. In the latter, it shows how, in a frustrated system, a stable disorder is achieved from the interlocking of two ferromagnetic states of opposite directions with antiferromagnetic coupling between them.
Multi-factor Stabilization in Western Europe
============================================
Here we attempt to illustrate the formation of the Italian state within the context of the global alliance model. It is known that, for a system taken from reality, it is hard to obtain exact numerical values of its propensities. Once such values are known, we can explain the transitions and predict the resulting configurations with arguable precision. Having no such values, we can still provide some analysis based on estimated values of the propensities extracted from the historical chronicles. Running the model with those values allows us to analyze and explain the transitions and the resulting configuration. This cannot be done based only on the canonical representations of historical events.
Let us illustrate the Italian unification in $1856$–$1858$, in which four countries were involved: Italy, France, Russia and Austria [@EPR; @UIT]. The period from the end of the 18th to the middle of the 19th century was marked by a series of European wars, including the French invasion of Italy, where Austrian and Sardinian forces had to face the French army in the War of the First Coalition, and the War of the Fifth Coalition of Austria against the French Empire.
In 1852, the new president of the Council of Ministers of the Italian region of Piedmont, Camillo di Cavour, had expansionist ambitions, one of which was to displace the Austrians from the Italian peninsula. An attempt to acquire British and French favor was, however, unsuccessful.
Then Napoleon III, who originally belonged to an Italian family, decided to make a significant gesture for Italy. In the summer of 1858, Cavour and Napoleon III agreed to cooperate on a war against Austria. According to the agreement, Piedmont would be rewarded with the Austrian territories in Italy (Lombardy and Venice), as well as the Duchies of Parma and Modena, while France would gain Piedmont’s transalpine territories of Savoy and Nice.
Despite Russian help in crushing the Hungarian Revolt of 1849, Austria failed to support Russia in the Crimean War of the mid-1850s. Therefore, Austria could not count on Russian help in Italy and Germany. Alexander II agreed to support France in a fight with Austria for the liberation of the Italians, though only by displaying his army on the border with Austria. It appeared to be enough to force the Austrians to withdraw behind the borders of Venice.
However, the conquest of Venice would have required a long and bloody campaign, which might have caused revolts and threatened Napoleon III’s position in France. In a private meeting, Napoleon III and Franz Joseph agreed on the principles of a settlement of the conflict, according to which the Austrians would cede Lombardy to the French, yet would retain Venice. The Russians were indignant at this turn by France.
Let us reproduce the historical chronicle presented above with the help of our model. The initial states of the countries with their primary propensities are shown in Figure (\[fiar\]).
The values of the propensities, indicating the relative strength of the primary interactions between the countries, range from “negative” to “positive” through mixed ones, with “neutral” taken as $0$; they are shown in Figure \[fiar\]. Thus, the historical relationship between the two absolutist monarchies Russia and Austria is estimated as “neutral-positive” with $J_{RA} = 1$. Italy and Russia, having no noticeable political relationship, are “neutral” to each other, $J_{RI} = 0$. The Franco-Russian relationship, built up during the French Revolutionary and Napoleonic Wars, is rather “neutral-negative” with $J_{FR} = -1$, as are the interactions of France with Italy and Austria, which had experienced a series of military conflicts, $J_{FI} = J_{FA} = -1$. The opposition between Italy and Austria, tied to the mutual territorial claims, is estimated to be “significantly negative” with $J_{IA} = -2$.
![The unstable system of France, Russia, Italy, Austria with their relative primary propensities, 1856-1858.[]{data-label="fiar"}](fiar){width="2.3in"}
Figure (\[fiar\]) shows the system of the countries in its natural model. The model has two negative circles and so is unstable, which shows up through the historical changes before the rise of the Italian question. The instability originates from the fact that France gets identical benefits from an alliance with Russia and Italy as from the opposing alliance with Austria.
An external field in the model results from the principle of an independent Italian state. The respective opposing global alliances are $M$, which associates the countries that support the independence of Italy, and $C$, which unifies the countries opposing it.
Here, two respective factors influencing the historical series of events must be distinguished: external politics with the military goals, and internal politics involving the social concerns of the countries (their governing classes). Denote the two factors by ${\mathcal{G} \xspace}$ and ${\mathcal{K} \xspace}$ respectively.
With respect to their external goals, Italy and France, as well as Russia, agree on the relevance of an independent Italian state. Yet, in their social concerns, the governing classes of France, Russia and Austria agree in their rejection of the socialist ideas spreading across Italy. Therefore, the respective parameters of the countries’ natural belonging to the alliances are distributed as follows. With the natural belonging parameter $\epsilon$ referring to the external goals and $\beta$ to the internal social politics, for France $\epsilon_F = +1 $ and $\beta_F= -1$, for Italy $\epsilon_I = +1$ and $\beta_I= +1$, for Russia $\epsilon_R = +1 $ and $\beta_R= -1$, and for Austria $\epsilon_A = -1 $ and $\beta_A=
-1$.
The propensities motivated by the global alliances are given in the following chart:
  Propensity                   F-I          I-A           F-R         F-A          R-A
  ---------------------------- ------------ ------------- ----------- ------------ ------------
  Primary                      -1           -2            -1          -1           1
  On ${\mathcal{G} \xspace}$   $G_{FI}$     $- G_{IA}$    $G_{FR}$    $-G_{FA}$    $- G_{RA}$
  On ${\mathcal{K} \xspace}$   $- K_{FI}$   $- K_{IA}$    $K_{FR}$    $K_{FA}$     $K_{RA}$
The historical chronicle of the four countries unfolds in three phases: a phase with no global alliances, or the natural-model phase, and two phases of global alliances that arose due to the Italian question, where in the first one the external and military concerns come into the picture and in the second one the internal social concerns rise over the countries.
As we have seen in Figure (\[fiar\]), the system in its natural model is unstable, with France fluctuating between Russia and Austria.
Let us evaluate the amplitudes of the military exchanges between the countries through numerical values providing the relative magnitudes of the interactions. Russia has an equally “moderate” interest in military cooperation with both France and Austria, with $G_{FR} =
2$ and $G_{RA} = 2$. The interactions of Italy with both France and Austria are “strong”, since Italian land is at stake, with $G_{FI} = 4$ and $G_{IA}
= 4$, while the interest between Austria and France is “moderately strong” with $G_{FA} = 3$. Russian sympathy for an Italian state shows up as a “basic” interest, $G_{RI} = 1$. The new propensities between the countries with respect to the external politics interests are shown in the following table:
Propensity F-I I-A F-R F-A R-A R-I
---------------------------- ----- ----- ----- ----- ----- -----
Primary -1 -2 -1 -1 1 0
On ${\mathcal{G} \xspace}$ 4 -4 2 -3 -2 1
Total 3 -6 1 -4 -1 1
As a result of the interactions, the system obtains a new shape, shown in Figure (\[fiar1\]). Here, the absence of negative circles allows a perfectly stable coalition of France, Italy and Russia against Austria.
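A small Python sketch that checks this claim (the propensity values are copied from the “Total” row of the table above, and stability is read, as in the text, as the absence of circles with a negative propensity product):

```python
from itertools import permutations

# Total military-factor propensities between France (F), Italy (I),
# Russia (R) and Austria (A), taken from the table above.
p = {frozenset("FI"): 3, frozenset("IA"): -6, frozenset("FR"): 1,
     frozenset("FA"): -4, frozenset("RA"): -1, frozenset("RI"): 1}
countries = "FIRA"

def product_around(cycle):
    prod = 1
    for a, b in zip(cycle, cycle[1:] + (cycle[0],)):
        prod *= p[frozenset(a + b)]
    return prod

# Enumerate every triangle and quadrilateral (rotations and reflections
# repeat the same circle, which does not affect the test).
has_negative = any(product_around(c) < 0
                   for n in (3, 4)
                   for c in permutations(countries, n))
print("negative circle found:", has_negative)   # False -> the system is stable
```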
![France, Russia, Italy and Austria, 1856-1858, with the new military propensities. It forms a stable system with the coalition of France, Italy and Russia against Austria.[]{data-label="fiar1"}](fiar1){width="2.3in"}
However, the social aspect of the internal politics of the countries dramatically interferes with this stability. The relative amplitudes of the consequent exchanges can be estimated as follows. Due to the political insularity of Russia, where serfdom still prevailed over a large part of the country, the amplitudes of all its exchanges on the social aspect are “negligible”, $K_{FR} = 0$ and $K_{RA} = 0$. France and Austria had a “strong” involvement in the subject, with $K_{FA} = K_{FI} = K_{IA} = 4$. The new propensities between the countries are shown in the table.
Propensity F-I I-A F-R F-A R-A R-I
---------------------------- ----- ----- ----- ----- ----- -----
Primary -1 -2 -1 -1 1 0
On ${\mathcal{G} \xspace}$ 4 -4 2 -3 -2 1
On ${\mathcal{K} \xspace}$ -4 -4 0 4 0 0
Total -1 -10 1 0 -1 1
The resulting system, with the French change in favor of cooperation with Austria, is shown in Figure (\[fiar2\]). As we can see, the modified system again includes negative circles. The change of France put Russia in an unfavorable position, moving it away from its most beneficial coalition configuration. At the same time, Italy and Austria found themselves in a satisfactory state.
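For completeness, a short sketch that rebuilds the “Total” rows of the two tables above from the primary propensities, the factor amplitudes and the belonging parameters (the sign conventions are those of the two-factor model; $K_{RI}$, not quoted in the text, is read off the table as $0$):

```python
# Rebuild the "Total" rows of the two tables above.
pairs = ["FI", "IA", "FR", "FA", "RA", "RI"]
J    = {"FI": -1, "IA": -2, "FR": -1, "FA": -1, "RA": 1, "RI": 0}
G    = {"FI": 4, "IA": 4, "FR": 2, "FA": 3, "RA": 2, "RI": 1}
K    = {"FI": 4, "IA": 4, "FR": 0, "FA": 4, "RA": 0, "RI": 0}
eps  = {"F": +1, "I": +1, "R": +1, "A": -1}     # external (military) factor
beta = {"F": -1, "I": +1, "R": -1, "A": -1}     # internal (social) factor

def total(pair, with_social):
    i, j = pair
    p = J[pair] + eps[i] * eps[j] * G[pair]
    if with_social:
        p += beta[i] * beta[j] * K[pair]
    return p

print("military factor only:", {p: total(p, False) for p in pairs})
# -> 3, -6, 1, -4, -1, 1
print("both factors        :", {p: total(p, True) for p in pairs})
# -> -1, -10, 1, 0, -1, 1
```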
![France, Russia, Italy, Austria, 1856-1858, with the new propensities on both the military and social factors. The result of the war for the liberation of Italy is instability in a new shape. []{data-label="fiar2"}](fiar2.eps){width="2.3in"}
Conclusions
===========
Coalitions in a collective of individual rational actors such as countries, when formed spontaneously, rarely stabilize: the probability that the system becomes stable vanishes exponentially with the size of the system. In reality, stabilization among countries as rational actors is more likely to happen under the external incentive of global alliances, and is more practical. The impact of the global principle on the economic, political, social or any other factor of the countries’ interests produces new, intended, interactions between the countries. In contrast to the spontaneous primary interactions, those interactions are intended in the sense that they are based on a directed view of the countries’ needs and interests. Superposed with the spontaneous ones, these interactions guarantee stabilization once their amplitudes satisfy the constraints of positive circuits of propensities.
One of the interesting directions for further research in the context of the global alliance model is to study the general effect of the global attractions, that is, of the general interaction amplitudes. While some global attractions represent efficient mediators, others may be less successful or even harmful with respect to the system’s stability. Because they become obsolete, provide insufficient motivation, or act with harmful intentions, the global alliances may fail to stabilize an unstable system, and may even destabilize a stable one. It is interesting to study those effects from the general perspective of the system’s total gain (energy), which can be reduced or augmented by the global alliances. Such a study should help shed new light on conflicts in post-colonial Africa and the Middle East which, being under the influence of external fields, continuously cycle through series of contentions.
Vinogradova G. & Galam S. (2012). Rational Instability in the Natural Coalition Forming, *Physica A: Statistical Mechanics and its Applications*, 392 (2013) 6025–6040.
Binder, K. & Young, A.P. (1986). Spin-Glasses: experimental facts, theoretical concepts, and open questions, *Review of Modern Physics*, 58, 801–911
C. Castellano & S. Fortunato & V. Loreto (2009). Statistical physics of social dynamics, *Reviews of Modern Physics*, 81, 591–646.
Galam S. & Gefen Y. & Shapir Y. (1982). Sociophysics: A new approach of sociological collective behavior, *British Journal Political Sciences*, 9, 1–13.
Galam, S. & Moscovici, S. (1991). Towards a Theory of Collective Phenomena: Consensus and Attitude Changes in Groups, *European Journal of Social Psychology*, 21, 49–74
Axelrod, R. & Bennett, D.S. (1993). A landscape theory of aggregation, *British Journal Political Sciences*, 23, 211–233
Galam S. (1996). Fragmentation Versus Stability In Bimodal Coalitions, *Physica*, A, 230, 174–188
Galam, S. (1998). Comment on A landscape theory of aggregation, *British Journal Political Sciences*, 28, 411–412
Matthews R. (2000). A Spin Glass model of decisions in organizations, *Business Research Yearbook, G. Biberman, A. Alkhafaji (eds), Saline, Michigan: McNaughton and Gunn*, 7, 6
Florian, R. & Galam, S. (2000). Optimizing conflicts in the formation of strategic alliances, *Eur. Phys. J. B* , 16, 189–194
Tim Hatamian G. (2005). On alliance prediction by energy minimization, neutrality and separation of players, *arxiv.org/pdf/physics/0507017*
Galam S. (2002). Spontaneous Coalition Forming. Why Some Are Stable?, *Springer-Verlag Berlin Heidelberg 2002, S. Bandini, B. Chopard, and M. Tomassini (Eds.):ACRI 2002, LNCS 2493*, 1–9
Gerardo G. N. & Samaniego-Steta F. & del Castillo-Mussot M. & G.J. Vazquez (2007). Three-body interactions in sociophysics and their role in coalition forming, *Physica*, A, 379, 226–234
Vinogradova G. (2012). Correction of Dynamical Network’s Viability by Decentralization by Price, *Journal of Complex Systems*, 20, 1, 37–55
Van Hemmen J.L. (1982). Classical Spin-Glass Model, *Physical Review Letters*, 49, 6
Toulouse, G. (1977). *Theory of the frustration effect in Spin Glasses: I*, Comm. on Physics, 2
Israel J. I. (1997). Conflicts of Empires: Spain, the Low Countries, and the Struggle for World Supremacy, 1585–1713.
Hales E.E.Y. (1954). A Study in European Politics and Religion in the Nineteenth Century, *P.J. Kenedy*.
Beales D. & Biagini E. (2003). The Risorgimento and the Unification of Italy, *Longman*, 2nd ed.
[^1]: [email protected]
[^2]: [email protected]
|
---
author:
- The CMS Collaboration
bibliography:
- 'auto\_generated.bib'
title: 'Search for a standard model-like Higgs boson in the $\Pgmp\Pgmm$ and $\Pep\Pem$ decay channels at the LHC'
---
Introduction
============
After the discovery of a particle with a mass near 125 GeV [@ATLASDiscovery; @CMSDiscovery; @CMSDiscoveryLong] and properties in agreement, within current experimental uncertainties, with those expected of the standard model (SM) Higgs boson, the next critical question is to understand in greater detail the nature of the newly discovered particle. Answering this question with reasonable confidence requires measurements of its properties and production rates into final states both allowed and disallowed by the SM. Beyond the standard model (BSM) scenarios may contain additional Higgs bosons, so searches for these additional states constitute another test of the SM [@2HDM]. For a Higgs boson mass, [$m_\PH$]{}, of 125 GeV, the SM prediction for the Higgs to [$\Pgmp\Pgmm$]{}branching fraction, ${\ensuremath{\mathcal{B}}\xspace}({\ensuremath{\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}}\xspace})$, is among the smallest accessible at the CERN LHC, $2.2\times 10^{-4}$ [@Denner_2011mq], while the SM prediction for [$\mathcal{B}$]{}([$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}) of approximately $5\times10^{-9}$ is inaccessible at the LHC. Experimentally, however, [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}and [$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}are the cleanest of the fermionic decays. The clean final states allow a better sensitivity, in terms of cross section, $\sigma$, times branching fraction, [$\mathcal{B}$]{}, than [$\PH\to{\ensuremath{\tau^{+}\tau^{-}}\xspace}$]{}. This means that searches for [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}and [$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}, combined with recent strong evidence for decays of the new boson to [$\tau^{+}\tau^{-}$]{} [@cmsHtautau; @atlasHtautau], may be used to test if the coupling of the new boson to leptons is flavour-universal or proportional to the lepton mass, as predicted by the SM [@Weinberg:1967tq]. In addition, a measurement of the [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}decay probes the Yukawa coupling of the Higgs boson to second-generation fermions, an important input in understanding the mechanism of electroweak symmetry breaking in the SM [@Plehn:2001qg; @Han:2002gp]. Deviations from the SM expectation could also be a sign of BSM physics [@Vignaroli:2009vt; @newRatio]. A previous LHC search for SM [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}has been performed by the ATLAS Collaboration and placed a 95% confidence level (CL) upper limit of 7.0 times the rate expected from the SM at 125.5 GeV [@atlasSM]. The ATLAS Collaboration has also performed a search for BSM [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}decays within the context of the minimal supersymmetric standard model [@atlasMSSM].
This paper reports on a search for a SM-like Higgs boson decaying to either a pair of muons or electrons ([$\PH\to{\ensuremath{\ell^+\ell^-}\xspace}$]{}) in proton-proton collisions recorded by the CMS experiment at the LHC. The [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}search is performed on data corresponding to integrated luminosities of $5.0\pm0.1$ fb$^{-1}$ at a centre-of-mass energy of 7 TeV and $19.7\pm0.5$ fb$^{-1}$ at 8 TeV, while the [$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}search is only performed on the 8 TeV data. Results are presented for Higgs boson masses between 120 and 150 GeV. For ${\ensuremath{m_\PH}\xspace}=125\GeV$, the SM predicts 19 (95) [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}events at 7 (8) TeV, and ${\approx}2\times10^{-3}$ [$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}events at 8 TeV [@deFlorian_2012yg; @LHCHXSWG1; @LHCHXSWG2; @LHCHXSWG3].
The [$\PH\to{\ensuremath{\ell^+\ell^-}\xspace}$]{}resonance is sought as a peak in the dilepton mass spectrum, [$m_{\ell\ell}$]{}, on top of a smoothly falling background dominated by contributions from Drell–Yan production, $\ttbar$ production, and vector boson pair-production processes. Signal acceptance and selection efficiency are estimated using Monte Carlo (MC) simulations, while the background is estimated by fitting the observed [$m_{\ell\ell}$]{}spectrum in data, assuming a smooth functional form.
Near ${\ensuremath{m_\PH}\xspace}=125\GeV$, the SM predicts a Higgs boson decay width much narrower than the dilepton invariant mass resolution of the CMS experiment. For ${\ensuremath{m_\PH}\xspace}=125\GeV$, the SM predicts the Higgs boson decay width to be 4.2 MeV [@LHCHXSWG1], and experimental results indirectly constrain the width to be ${<}22\MeV$ at the 95% CL, subject to various assumptions [@cmsHiggsWidthIndirect; @Caola:2013yja]. The experimental resolution depends on the angle of each reconstructed lepton relative to the beam axis. For dimuons, the full width at half maximum (FWHM) of the signal peak ranges from 3.9 to 6.2 GeV (for muons with $\abs{\eta}<2.1$), while for electrons it ranges from 4.0 to 7.2 GeV (for electrons with $\abs{\eta}<1.44$ or $1.57<\abs{\eta}<2.5$).
The sensitivity of this analysis is increased through an extensive categorization of the events, using kinematic variables to isolate regions with a large signal over background (S/B) ratio from regions with smaller S/B ratios. Separate categories are optimized for the dominant Higgs boson production mode, gluon-fusion (GF), and the sub-dominant production mode, vector boson fusion (VBF). Higgs boson production in association with a vector boson (VH), while not optimized for, is taken into account in the [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}analysis. The SM predicts Higgs boson production to be 87.2% GF, 7.1% VBF, and 5.1% VH for ${\ensuremath{m_\PH}\xspace}=125\GeV$ at 8 [@LHCHXSWG3]. In addition to [$m_{\ell\ell}$]{}, the most powerful variables for discriminating between the Higgs boson signal and the Drell–Yan and $\ttbar$ backgrounds are the jet multiplicity, the dilepton transverse-momentum ([$\pt^{\ell\ell}$]{}), and the invariant mass of the two largest transverse-momentum jets ([$m_{\mathrm{jj}}$]{}). The gluon-gluon initial state of GF production tends to lead to more jet radiation than the quark-antiquark initial state of Drell–Yan production, leading to larger [$\pt^{\ell\ell}$]{}and jet multiplicity. Similarly, VBF production involves a pair of forward-backward jets with a large [$m_{\mathrm{jj}}$]{}compared to Drell–Yan plus two-jet or $\ttbar$ production. Events are further categorized by their [$m_{\ell\ell}$]{}resolution and the kinematics of the jets and leptons.
This paper is organized as follows. Section \[sec:cmsDet\] introduces the CMS detector and event reconstruction, Section \[sec:evtSel\] describes the [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}event selection, Section \[sec:selEff\] the [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}selection efficiency, Section \[sec:systUnc\] details the systematic uncertainties included in the [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}analysis, Section \[sec:results\] presents the results of the [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}search, Section \[sec:hee\] describes the [$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}search, and Section \[sec:summary\] provides a summary.
CMS detector and event reconstruction {#sec:cmsDet}
=====================================
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the superconducting solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass/scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. Extensive forward calorimetry complements the coverage provided by the barrel and endcap detectors.
The first level of the CMS trigger system, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most interesting events in a fixed time interval of less than 4 $\mu$s. The high level trigger processor farm further decreases the event rate from at most 100 kHz to less than 1 kHz, before data storage. A more detailed description of the detector as well as the definition of the coordinate system and relevant kinematic variables can be found in Ref. [@cmsDet].
The CMS offline event reconstruction creates a global event description by combining information from all subdetectors. This combined information then leads to a list of particle-flow (PF) objects [@CMS-PAS-PFT-09-001; @CMS-PAS-PFT-10-001]: candidate muons, electrons, photons, and hadrons. By combining information from all subdetectors, particle identification and energy estimation performance are improved. In addition, double counting subdetector energy deposits when reconstructing different particle types is eliminated.
Due to the high instantaneous luminosity of the LHC, many proton-proton interactions occur in each bunch crossing. An average of 9 and 21 interactions occur in each bunch crossing for the 7 and 8 TeV data samples, respectively. Most interactions produce particles with relatively low transverse momentum ($\pt$), compared to the particles produced in an [$\PH\to{\ensuremath{\ell^+\ell^-}\xspace}$]{}signal event. These interactions are termed “pileup”, and can interfere with the reconstruction of the high-$\pt$ interaction, whose vertex is identified as the vertex with the largest scalar sum of the squared transverse momenta of the tracks associated with it. All charged PF objects with tracks coming from another vertex are then removed.
Hadronic jets are clustered from reconstructed PF objects with the infrared- and collinear-safe anti-$k_{\mathrm{T}}$ algorithm [@antikt; @fastjet], operated with a size parameter of 0.5. The jet momentum is determined as the vectorial sum of the momenta of all PF objects in the jet, and is found in the simulation to be within 5% to 10% of the true momentum over the whole spectrum of interest and detector acceptance. An offset correction is applied to take into account the extra neutral energy clustered in jets due to pileup. Jet energy corrections are derived from the simulation, and are confirmed by in-situ measurements of the energy balance in dijet, photon plus jet, and Z plus jet (where the Z-boson decays to $\Pgmp\Pgmm$ or $\Pep\Pem$) events [@Chatrchyan:2011ds]. The jet energy resolution is 15% at 10 GeV, 8% at 100 GeV, and 4% at 1 TeV [@CMS-PAS-JME-10-003]. Additional selection criteria are applied to each event to remove spurious jet-like objects originating from isolated noise patterns in certain HCAL regions.
Matching muons to tracks measured in the silicon tracker results in a relative $\pt$ resolution for muons with $20 <\pt < 100\GeV$ of 1.3–2.0% in the barrel and better than 6% in the endcaps. The resolution in the barrel is better than 10% for muons with $\pt$ up to 1 TeV [@cmsMuons]. The mass resolution for $\cPZ\to\Pgm\Pgm$ decays is between 1.1% and 1.9% depending on the pseudorapidity of each muon, for $\abs{\eta}<2.1$. The mass resolution for $\cPZ \to \Pe \Pe$ decays when both electrons are in the ECAL barrel (endcaps) is 1.6% (2.6%) [@Chatrchyan:2013dga].
H->mu+mu- event selection {#sec:evtSel}
============================
Online collection of events is performed with a trigger that requires at least one isolated muon candidate with $\pt$ above 24 GeV in the pseudorapidity range $\abs{\eta} \le 2.1$. In the offline selection, muon candidates are required to pass the “Tight muon selection” [@cmsMuons] and each muon trajectory is required to have an impact parameter with respect to the primary vertex smaller than 5 mm and 2 mm in the longitudinal and transverse directions, respectively. They must also have $\pt>15\GeV$ and $\abs{\eta} \le 2.1$.
For each muon candidate, an isolation variable is constructed using the scalar sum of the transverse momenta of particles, reconstructed as PF objects, within a cone centered on the muon. The boundary of the cone is $\Delta R=\sqrt{\smash[b]{(\Delta\eta)^2+(\Delta\phi)^2}}=0.4$ away from the muon, and the $\pt$ of the muon is not included in the sum. While only charged particles associated with the primary vertex are taken into account, a correction must be applied for contamination from neutral particles coming from pileup interactions. On average, in inelastic proton-proton collisions, neutral pileup particles deposit half as much energy as charged pileup particles. The amount of energy coming from charged pileup particles is estimated as the sum of the transverse momenta of charged tracks originating from vertices other than the primary vertex, but still entering the isolation cone. The neutral pileup energy in the isolation cone is then estimated to be 50% of this value and subtracted from the muon isolation variable. A muon candidate is accepted if the corrected isolation variable is less than 12% of the muon $\pt$.
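A simplified Python sketch of this isolation calculation (the candidate representation and field names are illustrative assumptions, not the CMS implementation):

```python
from math import hypot, pi

# Each particle-flow candidate is assumed to be a dict with pt, eta, phi,
# a "charged" flag, and a "from_pv" flag (track associated with the primary
# vertex); these field names are hypothetical.
def delta_r(a, b):
    dphi = abs(a["phi"] - b["phi"])
    if dphi > pi:
        dphi = 2.0 * pi - dphi
    return hypot(a["eta"] - b["eta"], dphi)

def corrected_isolation(muon, pf_candidates, cone=0.4):
    charged_pv = neutral = charged_pu = 0.0
    for cand in pf_candidates:
        if cand is muon or delta_r(cand, muon) >= cone:
            continue                      # keep only candidates inside the cone
        if not cand["charged"]:
            neutral += cand["pt"]
        elif cand["from_pv"]:
            charged_pv += cand["pt"]
        else:
            charged_pu += cand["pt"]      # charged candidates from pileup vertices
    # Neutral pileup contamination is estimated as 50% of the charged pileup
    # sum in the cone and subtracted from the isolation variable.
    return charged_pv + neutral - 0.5 * charged_pu

def passes_isolation(muon, pf_candidates):
    return corrected_isolation(muon, pf_candidates) < 0.12 * muon["pt"]
```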
To pass the offline selection, events must contain a pair of opposite-sign muon candidates passing the above selection, and the muon which triggered the event is required to have $\pt > 25\GeV$. All combinations of opposite-sign pairs, where one of the muons triggers the event, are considered as dimuon candidates in the dimuon invariant mass distribution analysis. Each pair is effectively treated as a separate event, and referred to as such for the remainder of this paper. Less than 0.1% of the SM Higgs boson events and 0.005% of the background events in each category contain more than one pair of muons.
After selecting events with a pair of isolated opposite-sign muons, events are categorized according to the properties of jets. Jets reconstructed from PF objects are only considered if their $\pt$ is greater than 30 GeV and $\abs{\eta}<4.7$. A multivariate analysis (MVA) technique is used to discriminate between jets originating from hard interactions and jets originating from pileup [@JME-13-005].
Dimuon events are classified into two general categories: a 2-jet category and a 0,1-jet category. The 2-jet category requires at least two jets, with $\pt>40\GeV$ for the leading jet and $\pt>30\GeV$ for the subleading jet. A 2-jet event must also have ${\ensuremath{\pt^{\text{miss}}}}<40\GeV$, where ${\ensuremath{\pt^{\text{miss}}}}$ is the magnitude of the vector sum of the transverse momenta of the dimuon and dijet systems. The ${\ensuremath{\pt^{\text{miss}}}}$ requirement reduces the $\ttbar$ contamination in the 2-jet category, since $\ttbar$ decays also include missing transverse momentum due to neutrinos. All dimuon events not selected for the 2-jet category are placed into the 0,1-jet category where the signal is produced dominantly by GF.
The 2-jet category is further divided into VBF Tight, GF Tight, and Loose subcategories. The VBF Tight category has a large S/B ratio for VBF produced events. It requires ${\ensuremath{m_{\mathrm{jj}}}\xspace}>650\GeV$ and $\abs{\Delta{\ensuremath{\eta(\mathrm{jj})}\xspace}}>3.5$, where $\abs{\Delta{\ensuremath{\eta(\mathrm{jj})}\xspace}}$ is the absolute value of the difference in pseudorapidity between the two leading jets. For a SM Higgs boson with ${\ensuremath{m_\PH}\xspace}=125\GeV$, 79% of the signal events in this category are from VBF production. Signal events in the 2-jet category that do not pass the VBF Tight criteria mainly arise from GF events, which contain two jets from initial-state radiation. The GF Tight category captures these events by requiring the dimuon transverse momentum ([$\pt^{\Pgm\Pgm}$]{}) to be greater than 50 GeV and ${\ensuremath{m_{\mathrm{jj}}}\xspace}>250\GeV$. To further increase the sensitivity of this search, 2-jet events that fail the VBF Tight and GF Tight criteria are still retained in a third subcategory called 2-jet Loose.
In the 0,1-jet category, events are split into two subcategories based on the value of [$\pt^{\Pgm\Pgm}$]{}. The most sensitive subcategory is 0,1-jet Tight which requires [$\pt^{\Pgm\Pgm}$]{}greater than 10 GeV, while the events with [$\pt^{\Pgm\Pgm}$]{}less than 10 GeV are placed in the 0,1-jet Loose subcategory. The S/B ratio is further improved by categorizing events based on the dimuon invariant mass resolution as follows. Given the narrow Higgs boson decay width, the mass resolution fully determines the shape of the signal peak. The dimuon mass resolution is dominated by the muon resolution, which worsens with increasing $\abs{\eta}$ [@cmsMuons]. Hence, events are further sorted into subcategories based on the $\abs{\eta}$ of each muon and are labeled as “barrel” muons (B) for $\abs{\eta}<0.8$, “overlap” muons (O) for $0.8\leq\abs{\eta}<1.6$, and “endcap” muons (E) for $1.6\leq\abs{\eta}<2.1$. The 0,1-jet dimuon events are then assigned, within the corresponding Tight and Loose categories, to all possible dimuon $\abs{\eta}$ combinations. The dimuon mass resolution for each category is shown in Table \[tab:nEvts\]. Due to the limited size of the data samples, the 2-jet subcategories are not split into further subcategories according to the muon resolution. This leads to a total of fifteen subcategories: three 2-jet subcategories, six 0,1-jet Tight subcategories, and six 0,1-jet Loose subcategories.
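A schematic sketch of this categorization logic (inputs and field names are hypothetical; jets are assumed to be ordered by decreasing $\pt$, thresholds are in GeV, and $\pt^{\text{miss}}$ is the quantity defined above):

```python
def muon_region(eta):
    a = abs(eta)
    return "B" if a < 0.8 else ("O" if a < 1.6 else "E")

def categorize(jets, mu1_eta, mu2_eta, pt_mumu, pt_miss, m_jj, deta_jj):
    # Jets are pt-ordered dicts with "pt" and "eta".
    jets = [j for j in jets if j["pt"] > 30.0 and abs(j["eta"]) < 4.7]
    two_jet = (len(jets) >= 2 and jets[0]["pt"] > 40.0 and jets[1]["pt"] > 30.0
               and pt_miss < 40.0)
    if two_jet:
        if m_jj > 650.0 and abs(deta_jj) > 3.5:
            return "2-jet VBF Tight"
        if pt_mumu > 50.0 and m_jj > 250.0:
            return "2-jet GF Tight"
        return "2-jet Loose"
    # 0,1-jet events: split by dimuon pt, then by the eta region of each muon.
    tight = "Tight" if pt_mumu > 10.0 else "Loose"
    order = {"B": 0, "O": 1, "E": 2}
    r1, r2 = sorted([muon_region(mu1_eta), muon_region(mu2_eta)], key=order.get)
    return "0,1-jet {} {}{}".format(tight, r1, r2)

# Example: a 1-jet event with both muons in the barrel and pt(mumu) = 25 GeV
print(categorize([{"pt": 35.0, "eta": 1.0}], 0.2, -0.5, 25.0, 12.0, 0.0, 0.0))
# -> "0,1-jet Tight BB"
```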
[ld[1.1]{}d[1.1]{}d[1.1]{}d[1.1]{}d[2.2]{}d[5.1]{}d[5.1]{}d[2.2]{}d[2.0]{}]{} & & & & & & &\
Category & & & & & & & & & [\[%\]]{}\
0,1-jet Tight BB & 3.4 & 9.7 & 8.1 & 8.9 & 1.83 & 226.4 & 245 & 22.5 & 101\
0,1-jet Tight BO & 4.0 & 14.0 & 11.0 & 13.0 & 2.56 & 470.3 & 459 & 42.4 & 121\
0,1-jet Tight BE & 4.4 & 4.9 & 3.8 & 4.8 & 0.92 & 234.8 & 235 & 16.6 & 65\
0,1-jet Tight OO & 4.8 & 5.2 & 3.9 & 4.9 & 0.97 & 226.5 & 236 & 11.5 & 52\
0,1-jet Tight OE & 5.3 & 4.0 & 3.0 & 4.2 & 0.75 & 237.5 & 228 & 26.5 & 106\
0,1-jet Tight EE & 5.9 & 0.9 & 0.7 & 1.0 & 0.17 & 71.4 & 57 & 11.4 & 97\
0,1-jet Loose BB & 3.2 & 2.2 & 0.1 & 0.1 & 0.38 & 151.4 & 127 & 17.2 & 95\
0,1-jet Loose BO & 3.9 & 3.1 & 0.2 & 0.2 & 0.52 & 307.0 & 291 & 18.9 & 71\
0,1-jet Loose BE & 4.2 & 1.2 & 0.1 & 0.1 & 0.20 & 148.7 & 178 & 19.1 & 102\
0,1-jet Loose OO & 4.5 & 1.2 & 0.1 & 0.1 & 0.20 & 144.7 & 143 & 19.1 & 113\
0,1-jet Loose OE & 5.1 & 1.0 & 0.1 & 0.1 & 0.16 & 160.1 & 159 & 16.1 & 75\
0,1-jet Loose EE & 5.8 & 0.2 & 0.0 & 0.0 & 0.03 & 41.6 & 39 & 5.6 & 51\
2-jet VBF Tight & 4.4 & 0.1 & 8.7 & 0.0 & 0.14 & 1.3 & 2 & 0.5 & 24\
2-jet GF Tight & 4.5 & 0.5 & 7.9 & 0.5 & 0.20 & 12.9 & 16 & 1.7 & 27\
2-jet Loose & 4.3 & 2.1 & 6.2 & 10.2 & 0.53 & 66.2 & 78 & 8.4 & 64\
Sum of categories&& 50.3 & 53.9 & 48.1 & 9.56 & 2500.8& 2493 & &\
0,1-jet Tight BB & 3.9 & 9.6 & 7.1 & 8.5 & 8.87 & 1208.0 & 1311 & 40.8 & 73\
0,1-jet Tight BO & 4.4 & 13.0 & 10.0 & 13.0 &12.45 & 2425.3 & 2474 & 102.2 & 127\
0,1-jet Tight BE & 4.7 & 4.9 & 3.4 & 4.6 & 4.53 & 1204.8 & 1212 & 63.8 & 111\
0,1-jet Tight OO & 5.0 & 5.3 & 3.6 & 5.0 & 4.90 & 1112.7 & 1108 & 39.0 & 71\
0,1-jet Tight OE & 5.5 & 4.1 & 2.8 & 4.2 & 3.85 & 1162.1 & 1201 & 151.1 & 251\
0,1-jet Tight EE & 6.4 & 0.9 & 0.6 & 1.1 & 0.85 & 350.8 & 323 & 34.2 & 107\
0,1-jet Loose BB & 3.7 & 2.1 & 0.1 & 0.1 & 1.73 & 715.4 & 697 & 40.2 & 94\
0,1-jet Loose BO & 4.3 & 2.9 & 0.2 & 0.2 & 2.41 & 1436.4 & 1432 & 85.5 & 158\
0,1-jet Loose BE & 4.5 & 1.1 & 0.1 & 0.1 & 0.90 & 725.9 & 782 & 74.9 & 166\
0,1-jet Loose OO & 4.9 & 1.1 & 0.1 & 0.1 & 0.96 & 727.4 & 686 & 33.2 & 74\
0,1-jet Loose OE & 5.5 & 0.9 & 0.1 & 0.1 & 0.76 & 791.8 & 832 & 78.2 & 158\
0,1-jet Loose EE & 6.2 & 0.2 & 0.0 & 0.0 & 0.18 & 218.5 & 209 & 18.9 & 87\
2-jet VBF Tight & 5.0 & 0.2 & 11.0 & 0.0 & 0.95 & 10.6 & 8 & 1.6 & 35\
2-jet GF Tight & 5.1 & 0.7 & 8.4 & 0.6 & 1.14 & 74.8 & 76 & 11.8 & 88\
2-jet Loose & 4.7 & 2.4 & 6.3 & 10.4 & 2.90 & 431.7 & 387 & 25.3 & 73\
Sum of categories&& 49.4 & 53.8 & 48.0 &47.38 &12596.2 &12738 & &\
H->mu+mu- event selection efficiency {#sec:selEff}
=======================================
While the background shape and normalization are obtained from data, the selection efficiency for signal events has to be determined using MC simulation. For the GF and VBF production modes, signal samples are produced using the <span style="font-variant:small-caps;">powheg–box</span> next-to-leading-order (NLO) generator [@powheg; @powhegGF; @powhegVBF] interfaced with <span style="font-variant:small-caps;">pythia</span> 6.4.26 [@pythia] for parton showering. VH samples are produced using <span style="font-variant:small-caps;">herwig++</span> [@herwigpp] and its integrated implementation of the NLO POWHEG method.
These samples are then passed through a simulation of the CMS detector, based on <span style="font-variant:small-caps;">Geant4</span> [@geant4], that has been extensively validated on both 7 and 8 TeV data. This validation includes a comparison of data with MC simulations of the Drell–Yan plus jets and $\ttbar$ plus jets backgrounds produced using <span style="font-variant:small-caps;">MadGraph</span> [@madgraph] interfaced with <span style="font-variant:small-caps;">pythia</span> 6.4.26 for parton showering. In all categories, the simulated ${\ensuremath{m_{\Pgm\Pgm}}\xspace}$ spectra agree well with the data, for $110 < {\ensuremath{m_{\Pgm\Pgm}}\xspace}< 160\GeV$. Scale factors related to muon identification, isolation, and trigger efficiency are applied to each simulated signal sample to correct for discrepancies between the detector simulation and data. These scale factors are estimated using the “tag-and-probe” technique [@cmsMuons]. The detector simulation and data typically agree to within 1% on the muon identification efficiency, to within 2% on the muon isolation efficiency, and to within 5% on the muon trigger efficiency.
The overall acceptance times selection efficiency for the [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}signal depends on the mass of the Higgs boson. For a Higgs boson mass of 125 GeV, the acceptance times selection efficiencies are shown in Table \[tab:nEvts\].
H->mu+mu- systematic uncertainties {#sec:systUnc}
=====================================
Since the statistical analysis is performed on the dimuon invariant mass spectrum, it is necessary to categorize the sources of systematic uncertainties into “shape” uncertainties that change the shape of the dimuon invariant mass distribution, and “rate” uncertainties that affect the overall signal yield in each category.
The only relevant shape uncertainties for the signal are related to the knowledge of the muon momentum scale and resolution and they affect the width of the signal peak by 3%. The signal shape is parameterized by a double-Gaussian (see Section \[sec:results\]) and this uncertainty is applied by constraining the width of the narrower Gaussian. The probability density function used to constrain this nuisance parameter in the limit setting procedure is itself a Gaussian with its mean set to the nominal value and its width set to 3% of the nominal value.
Rate uncertainties in the signal yield are evaluated separately for each Higgs boson production process and each centre-of-mass energy. These uncertainties are applied using log-normal probability density functions as described in Ref. [@HiggsStats]. Table \[tab:hmmSyst\] shows the relative systematic uncertainties in the signal yield for ${\ensuremath{m_\PH}\xspace}{}=125\GeV$, with more detail given below.
Source GF \[%\] VBF \[%\]
------------------------------------------------------------------------------ ---------- -----------
Higher-order corrections [@LHCHXSWG3] 1–25 1–7
PDF [@LHCHXSWG3] 11 5
PS/UE 6–60 2–15
[$\mathcal{B}$]{}([$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}) [@LHCHXSWG3] 6 6
Integrated luminosity [@lumi_2011; @lumi_2013] 2.2–2.6 2.2–2.6
MC statistics 1–8 1–8
Muon efficiency 1.6 1.6
Pileup $<1$–5 $<1$–2
Jet energy resolution 1–3 1–2
Jet energy scale 1–8 2–6
Pileup jet rejection 1–4 1–4
To estimate the theoretical uncertainty in the signal production processes due to neglected higher-order quantum corrections, the renormalization and factorization scales are varied simultaneously by a factor of two up and down from their nominal values. This leads to an uncertainty in the cross section and acceptance times efficiency which depends on the mass of the Higgs boson. The uncertainty is largest in the 2-jet VBF Tight and GF Tight categories, and smallest in the 0,1-jet Tight categories.
Uncertainty in the knowledge of the parton distribution functions (PDFs) also leads to uncertainty in the signal production process. This uncertainty is estimated using the PDF4LHC prescription [@Alekhin:2011sk; @Botje:2011sn] and the CT10 [@Lai:2010vv], MSTW2008 [@Martin:2009iq], and NNPDF 2.3 [@Ball:2010de] PDF sets provided by the <span style="font-variant:small-caps;">lhapdf</span> package version 5.8.9 [@LHAPDF]. The value of the uncertainty depends on the mass of the Higgs boson, while the dependence on the category is small.
Uncertainty in the modeling of the parton showers and underlying event activity (PS/UE) may affect the kinematics of selected jets. This uncertainty is estimated by comparing various tunes of the relevant parameters. The D6T [@tune_z2], P0 [@tune_P0], ProPT0, and ProQ20 [@tune_proq20] tunes are compared with the Z2\* [@tune_z2] tune, which is the nominal choice. The uncertainty is larger in the 2-jet categories than in the 0,1-jet categories. Large uncertainties in the 2-jet categories are expected for the GF production mode, since two-jet events are simulated solely by parton showering in the <span style="font-variant:small-caps;">powheg</span> NLO samples.
Misidentification of “hard jets” (jets originating from the hard interaction) as “pileup jets” (jets originating from pileup interactions) can lead to migration of signal events from the 2-jet category to the 0,1-jet category. Events containing a Z-boson, tagged by its dilepton decay, recoiling against a jet provide a pure source of hard jets similar to the Higgs boson signal. Data events may then be used to estimate the misidentification rate of the MVA technique used to discriminate between hard jets and pileup jets [@JME-13-005]. A pure source of hard jets is found by selecting events with $\pt^{\cPZ}>30\GeV$ and jets where $\abs{\Delta \phi(Z,\mathrm{j})} > 2.5$ and $0.5<\pt^{\mathrm{j}}/\pt^{\cPZ}<1.5$. The misidentification rate of these jets as pileup jets is compared in data and simulation, and the difference taken as a systematic uncertainty.
There are several additional uncertainties. The theoretical uncertainty in the branching fraction to $\Pgmp\Pgmm$ is taken from Ref. [@LHCHXSWG3], and depends on the Higgs boson mass. The uncertainty in the luminosity is directly applied to the signal yield in all categories. The signal yield uncertainty due to the limited size of the simulated event samples depends on the category, and is listed as “MC statistics” in Table \[tab:hmmSyst\]. There is a small uncertainty associated with the “tag-and-probe” technique used to determine the data to simulation muon efficiency scale factors [@cmsMuons]. This uncertainty is labeled “Muon efficiency” in Table \[tab:hmmSyst\]. A systematic uncertainty in the knowledge of the pileup multiplicity is evaluated by varying the total cross section for inelastic proton-proton collisions. The acceptance and selection efficiency of the jet-based selections are affected by uncertainty in the jet energy resolution and absolute jet energy scale calibration [@Chatrchyan:2011ds].
For VH production, only rate uncertainties in the production cross section due to quantum corrections and PDFs are considered. They are 3% or less [@LHCHXSWG3].
When estimating each of the signal yield uncertainties, attention is paid to the sign of the yield variation in each category. Categories that vary in the same direction are considered fully correlated while categories that vary in opposite directions are considered anticorrelated. These correlations are considered between all categories at both beam energies for all of the signal yield uncertainties except for the luminosity uncertainty and the uncertainty caused by the limited size of the simulated event samples. The luminosity uncertainty is considered fully correlated between all categories, but uncorrelated between the two centre-of-mass energies. The MC simulation statistical uncertainty is considered uncorrelated between all categories and both centre-of-mass energies.
To account for the possibility that the nominal background parameterization may imperfectly describe the true background shape, an additional systematic uncertainty is included. This uncertainty is implemented as a floating additive contribution to the number of signal events, constrained by a Gaussian probability density function with mean set to zero and width set to the systematic uncertainty. This systematic uncertainty is estimated by checking the bias in terms of the number of signal events that are found when fitting the signal plus nominal background model (see Section \[sec:results\]) to pseudo-data generated from various alternative background models, including polynomials, that were fit to data. Bias estimates are performed for Higgs boson mass points from 120 to 150 GeV. The uncertainty estimate is then taken as the maximum absolute value of the bias of all of the mass points and all of the alternative background models. It is then applied uniformly to all Higgs boson masses. The estimates of the uncertainty in the parameterization of the background ($N_\text{P}$) are shown in Table \[tab:nEvts\] for each category. The effect of this systematic uncertainty is larger than all of the others. The expected limit (see Section \[sec:results\]) would be 20% lower at ${\ensuremath{m_\PH}\xspace}= 125\GeV$ without the systematic uncertainty in the parameterization of the background.
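The following toy sketch illustrates the spirit of this procedure (all shapes, ranges and numbers are illustrative stand-ins, not the analysis models): pseudo-data are generated from an alternative background shape, fitted with a signal plus a nominal background shape, and the fitted signal yield, which should be compatible with zero, is recorded as the bias.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
edges = np.linspace(110.0, 160.0, 101)
centres = 0.5 * (edges[:-1] + edges[1:])

def signal(m, s, mh=125.0, sigma=2.2):          # narrow Gaussian signal peak
    return s * np.exp(-0.5 * ((m - mh) / sigma) ** 2)

def nominal_bkg(m, n, lam):                     # stand-in "nominal" shape
    return n * np.exp(lam * (m - 110.0))

def model(m, s, n, lam):
    return signal(m, s) + nominal_bkg(m, n, lam)

# Stand-in "alternative" shape used to generate signal-free pseudo-data.
alternative = 200.0 - 1.0 * centres + 0.002 * centres**2

biases = []
for _ in range(200):
    data = rng.poisson(alternative)
    popt, _ = curve_fit(model, centres, data,
                        p0=[0.0, alternative[0], -0.02], maxfev=20000)
    biases.append(signal(centres, popt[0]).sum())   # fitted signal yield
print("maximum absolute bias (toy):", max(abs(b) for b in biases))
```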
H->mu+mu- results {#sec:results}
====================
To estimate the signal rate, the dimuon invariant mass ([$m_{\Pgm\Pgm}$]{}) spectrum is fit with the sum of parameterized signal and background shapes. This fit is performed simultaneously in all of the categories. Since in the mass range of interest the natural width of the Higgs boson is narrower than the detector resolution, the [$m_{\Pgm\Pgm}$]{}shape is only dependent on the detector resolution and QED final state radiation. A double-Gaussian function is chosen to parameterize the shape of the signal. The parameters that specify the signal shape are estimated by fitting the double-Gaussian function to simulated signal samples. A separate set of signal shape parameters is used for each category. The background shape, dominated by the Drell–Yan process, is modeled by a function, $f({\ensuremath{m_{\Pgm\Pgm}}\xspace})$, that is the sum of a Breit–Wigner function and a 1/${\ensuremath{m_{\Pgm\Pgm}}\xspace}^2$ term, to model the Z-boson and photon contributions, both multiplied by an exponential function to approximate the effect of the PDF on the [$m_{\Pgm\Pgm}$]{}distribution. This function involves the parameters $\lambda$, $\beta$, ${\ensuremath{m_\cPZ}\xspace}$, and $\Gamma$. The coefficients $C_1$ and $C_2$ are set to ensure the integral of each of the two terms is normalized to unity in the [$m_{\Pgm\Pgm}$]{}fit range, 110 to 160 GeV. Each category uses a different set of background parameters. Before results are extracted, the mass and width of the Z-boson peak, ${\ensuremath{m_\cPZ}\xspace}$ and $\Gamma$, are estimated by fitting a Breit–Wigner function to the Z-boson mass peak region (88–94 GeV) in each category. The other parameters, $\lambda$ and $\beta$, are fit simultaneously with the amount of signal in the signal plus background fit. Besides the Drell–Yan process, most of the remaining background events come from $\ttbar$ production. The background parameterization has been shown to fit the dimuon mass spectrum well, even when it includes a large $\ttbar$ fraction. Fits of the background model to data (assuming no signal contribution) are presented in Fig. \[fig:bakShapeData8TeV\_pas\] for the most sensitive categories: the 0,1-jet Tight category with both muons reconstructed in the barrel region and the 2-jet VBF Tight category.
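As an illustration of a background model of this type (not the exact published parameterization; the pairing of the exponential slopes with the two terms and their relative weighting are assumptions of this sketch, as are the default Z-peak values), the following code builds a shape from a Breit–Wigner term and a $1/m^2$ term, each multiplied by an exponential and each normalized to unity over the 110–160 GeV fit window:

```python
import numpy as np

M_LOW, M_HIGH = 110.0, 160.0
grid = np.linspace(M_LOW, M_HIGH, 2001)

def bw_term(m, m_z, gamma, lam):
    # Breit-Wigner tail of the Z-boson, multiplied by an exponential.
    return np.exp(lam * m) * gamma / ((m - m_z) ** 2 + 0.25 * gamma ** 2)

def photon_term(m, beta):
    # 1/m^2 photon-exchange tail, multiplied by an exponential.
    return np.exp(beta * m) / m ** 2

def background(m, frac, lam, beta, m_z=91.2, gamma=2.5):
    # Each term is normalized to unit integral over the fit window; "frac"
    # sets their relative weight (an assumption made for this sketch).
    c1 = 1.0 / np.trapz(bw_term(grid, m_z, gamma, lam), grid)
    c2 = 1.0 / np.trapz(photon_term(grid, beta), grid)
    return (frac * c1 * bw_term(m, m_z, gamma, lam)
            + (1.0 - frac) * c2 * photon_term(m, beta))

# The normalized shape integrates to one over 110-160 GeV by construction.
shape = background(grid, frac=0.7, lam=-0.02, beta=-0.02)
print("integral over the fit window:", np.trapz(shape, grid))   # ~1.0
```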
![ The dimuon invariant mass at 8 TeV and the background model are shown for the 0,1-jet Tight category when both muons are reconstructed in the barrel (left) and the 2-jet VBF Tight category (right). A best fit of the background model (see text) is shown by a solid line, while its fit uncertainty is represented by a lighter band. The dotted line illustrates the expected SM Higgs boson signal enhanced by a factor of 20, for ${\ensuremath{m_\PH}\xspace}=125\GeV$. The lower histograms show the residual for each bin (Data-Fit) normalized by the Poisson statistical uncertainty of the background model ($\sigma_\mathrm{Fit}$). Also given are the sum of squares of the normalized residuals ($\chi^2$) divided by the number of degrees of freedom (NDF) and the corresponding $p$-value assuming the sum follows the $\chi^2$ distribution.[]{data-label="fig:bakShapeData8TeV_pas"}](Jets01PassPtG10BB_8TeV_125_Jets01PassPtG10BB "fig:"){width="49.00000%"} ![ The dimuon invariant mass at 8 TeV and the background model are shown for the 0,1-jet Tight category when both muons are reconstructed in the barrel (left) and the 2-jet VBF Tight category (right). A best fit of the background model (see text) is shown by a solid line, while its fit uncertainty is represented by a lighter band. The dotted line illustrates the expected SM Higgs boson signal enhanced by a factor of 20, for ${\ensuremath{m_\PH}\xspace}=125\GeV$. The lower histograms show the residual for each bin (Data-Fit) normalized by the Poisson statistical uncertainty of the background model ($\sigma_\mathrm{Fit}$). Also given are the sum of squares of the normalized residuals ($\chi^2$) divided by the number of degrees of freedom (NDF) and the corresponding $p$-value assuming the sum follows the $\chi^2$ distribution.[]{data-label="fig:bakShapeData8TeV_pas"}](Jet2CutsVBFPass_8TeV_125_Jet2CutsVBFPass "fig:"){width="49.00000%"}
Results are presented in terms of the signal strength, which is the ratio of the observed (or expected) $\sigma{\ensuremath{\mathcal{B}}\xspace}$, to that predicted in the SM for the [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}process. Results are also presented, for ${\ensuremath{m_\PH}\xspace}=125\GeV$, in terms of $\sigma{\ensuremath{\mathcal{B}}\xspace}$, and [$\mathcal{B}$]{}. No significant excess is observed. Upper limits at the 95% CL are presented using the $\mathrm{CL_s}$ criterion [@CLS1; @CLS2]. They are calculated using an asymptotic profile likelihood ratio method [@cmsCombineTool; @HiggsStats; @AsymptoticLimits] involving dimuon mass shapes for each signal process and for background. Systematic uncertainties are incorporated as nuisance parameters and treated according to the frequentist paradigm [@HiggsStats].
Exclusion limits for Higgs boson masses from 120 to 150 GeV are shown in Fig. \[fig:expectedLimitsMassScan\_pas\]. The observed 95% CL upper limits on the signal strength at 125 GeV are 22.4 using the 7 TeV data and 7.0 using the 8 TeV data. The corresponding background-only expected limits are $16.6^{+7.3}_{-4.9}$ using the 7 TeV data and $7.2^{+3.2}_{-2.1}$ using the 8 TeV data. Accordingly, the combined observed limit for 7 and 8 TeV is 7.4, while the background-only expected limit is $6.5^{+2.8}_{-1.9}$. This corresponds to an observed upper limit on ${\ensuremath{\mathcal{B}}\xspace}({\ensuremath{\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}}\xspace})$ of 0.0016, assuming the SM cross section. The best fit value of the signal strength for a Higgs boson mass of 125 GeV is $0.8^{+3.5}_{-3.4}$. We did not restrict the fit to positive values, to preserve the generality of the result.
Exclusion limits in terms of $\sigma(\text{8\TeV}){\ensuremath{\mathcal{B}}\xspace}$ using only 8 TeV data are shown in Fig. \[fig:xsbrLimits\] (left). The relative contributions of GF, VBF, and VH are assumed to be as predicted in the SM, and theoretical uncertainties on the cross sections and branching fractions are omitted. At 125 GeV, the observed 95% CL upper limit on $\sigma(\text{7\TeV}) {\ensuremath{\mathcal{B}}\xspace}$ using only 7 TeV data is 0.084 pb, while the background-only expected limit is 0.062$^{+0.026}_{-0.018}$ pb. Using only 8 TeV data, the observed limit on $\sigma(\text{8\TeV}) {\ensuremath{\mathcal{B}}\xspace}$ is 0.033 pb, while the background-only expected limit is 0.034$^{+0.014}_{-0.010}$ pb.
![ Mass scan for the background-only expected and observed combined exclusion limits.[]{data-label="fig:expectedLimitsMassScan_pas"}](CombSplitAll_7P8TeV){width="49.00000%"}
![ \[fig:xsbrLimits\] Exclusion limits on $\sigma {\ensuremath{\mathcal{B}}\xspace}$ are shown for [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}(left), and for [$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}(right), both for 8 TeV. Theoretical uncertainties on the cross sections and branching fraction are omitted, and the relative contributions of GF, VBF, and VH are as predicted in the SM. ](xsbr_CombSplitAll_8TeV "fig:"){width="49.00000%"} ![ \[fig:xsbrLimits\] Exclusion limits on $\sigma {\ensuremath{\mathcal{B}}\xspace}$ are shown for [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}(left), and for [$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}(right), both for 8 TeV. Theoretical uncertainties on the cross sections and branching fraction are omitted, and the relative contributions of GF, VBF, and VH are as predicted in the SM. ](xsbr_EE_8TeV "fig:"){width="49.00000%"}
Exclusion limits on individual production modes may also be useful to constrain BSM models that predict [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}production dominated by a single mode. Limits are presented on the signal strength using a combination of the 7 and 8 TeV data and on $\sigma(8\TeV) {\ensuremath{\mathcal{B}}\xspace}$ using only the 8 TeV data. The observed 95% CL upper limit on the GF signal strength, assuming the VBF and VH rates are zero, is 13.2, while the background-only expected limit is $9.8^{+4.4}_{-2.9}$. Similarly, the observed upper limit on the VBF signal strength, assuming the GF and VH rates are zero, is 11.2, while the background-only expected limit is $13.4^{+6.6}_{-4.2}$. The observed upper limit on $\sigma_\mathrm{GF}(8\TeV) {\ensuremath{\mathcal{B}}\xspace}$ is 0.056 pb and the expected limit is $0.045^{+0.019}_{-0.013}$ pb, using only the 8 TeV data. Similarly, the observed upper limit on $\sigma_\mathrm{VBF}(8\TeV) {\ensuremath{\mathcal{B}}\xspace}$ is 0.0036 pb and the expected limit is $0.0050^{+0.0024}_{-0.0015}$ pb, using only the 8 TeV data.
For ${\ensuremath{m_\PH}\xspace}=125\GeV$, an alternative [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}analysis was performed to check the results of the main analysis. It uses an alternative muon isolation variable based only on tracker information, an alternative jet reconstruction algorithm (the jet-plus-track algorithm [@JME-09-002]), and an alternative event categorization. The event categorization contains 2-jet categories similar to those of the main analysis, while separate categories are utilized for 0-jet and 1-jet events. Dimuon mass resolution-based categories are not used, but the 0-jet category does contain two subcategories separated by ${\ensuremath{\pt^{\Pgm\Pgm}}\xspace}$. As in the main analysis, results are extracted by fitting signal and background shapes to the [$m_{\Pgm\Pgm}$]{}spectra in each category, but unlike the main analysis, $f({\ensuremath{m_{\Pgm\Pgm}}\xspace})=\exp(p_1{\ensuremath{m_{\Pgm\Pgm}}\xspace})/({\ensuremath{m_{\Pgm\Pgm}}\xspace}-p_2)^2$ is used as the background shape. The systematic uncertainty on the parameterization of the background is estimated and applied in the same way as in the main analysis. For the alternative analysis, the observed (expected) 95% CL upper limit on the signal strength is 7.8 (6.5$^{+2.8}_{-1.9}$) for the combination of the 7 and 8 TeV data and ${\ensuremath{m_\PH}\xspace}=125\GeV$. The observed limits of both the main and alternative analyses are within one standard deviation of their respective background-only expected limits, for ${\ensuremath{m_\PH}\xspace}=125\GeV$.
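For illustration only, the alternative background parameterization can be coded as a simple function of the dimuon mass and fitted to a binned spectrum. The sketch below uses a hypothetical overall normalization parameter, hypothetical bin contents, and `scipy.optimize.curve_fit` rather than the likelihood machinery of the actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def alt_background(m, norm, p1, p2):
    """Alternative background shape norm * exp(p1*m) / (m - p2)^2.
    The parameter norm is added here purely as an illustrative normalization."""
    return norm * np.exp(p1 * m) / (m - p2) ** 2

# Hypothetical binned dimuon-mass spectrum between 110 and 160 GeV.
rng = np.random.default_rng(1)
m_centres = np.linspace(110.5, 159.5, 50)
truth = 4.0e6 * np.exp(-0.03 * m_centres) / (m_centres - 90.0) ** 2
counts = rng.poisson(truth).astype(float)

popt, pcov = curve_fit(alt_background, m_centres, counts,
                       p0=[4.0e6, -0.03, 90.0],
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
```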
Search for Higgs boson decays to e+e- {#sec:hee}
=====================================
In the SM, the branching fraction of the Higgs boson into [$\Pep\Pem$]{}is tiny, because the fermionic decay width is proportional to the square of the fermion mass. This leads to poor sensitivity to SM production for this search when compared to the search for [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}. On the other hand, the sensitivity in terms of $\sigma{\ensuremath{\mathcal{B}}\xspace}$ is similar to that of [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}, because dielectrons and dimuons share similar invariant mass resolutions, selection efficiencies, and backgrounds. Since the sensitivity to the SM rate of [$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}is so poor, an observation of the newly discovered particle decaying to [$\Pep\Pem$]{}with the current integrated luminosity would be evidence of physics beyond the SM.
In a similar way to the [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}analysis, a search in the [$m_{\Pe\Pe}$]{}spectrum is performed for a narrow peak over a smoothly falling background. The irreducible background is dominated by Drell–Yan production, with smaller contributions from $\ttbar$ and diboson production. Misidentified electrons make up a reducible background that is highly suppressed by the electron identification criteria. The reducible [$\PH\to{\ensuremath{\gamma\gamma}}$]{}background is estimated from simulation to be negligible compared to other backgrounds, although large compared to the SM [$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}signal. The overall background shape and normalization are estimated by fitting the observed [$m_{\Pe\Pe}$]{}spectrum in data, assuming a smooth functional form, while the signal acceptance times selection efficiency is estimated from simulation. The analysis is performed only on proton-proton collision data collected at 8 TeV, corresponding to an integrated luminosity of $19.7\pm0.5$ fb$^{-1}$.
The trigger selection requires two electrons, one with transverse energy $E_{\mathrm{T}}$ greater than 17 GeV and the other with $E_{\mathrm{T}}$ greater than 8 GeV. These electrons are required to be isolated with respect to additional energy deposits in the ECAL, and to pass selections on the ECAL cluster shape. In the offline selection, electrons are required to be inside the ECAL fiducial region: $\abs{\eta} < 1.44$ (barrel) or $1.57 < \abs{\eta} < 2.5$ (endcaps). Their energy is estimated by the same multivariate regression technique used in the CMS [$\PH\to{\ensuremath{\cPZ\cPZ}\xspace}$]{}analysis [@HIG-13-002], and their $E_{\mathrm{T}}$ is required to be greater than 25 GeV. Electrons are also required to satisfy standard CMS identification and isolation requirements, which correspond to a single electron efficiency of around 90% in the barrel and 80% in the endcaps [@CMS-DP-2013-003].
To improve the sensitivity of the search we separate the sample into four distinct categories: two 0,1-jet categories and two for which a pair of jets is required. The two 2-jet categories are designed to select events produced via the VBF process. The two jets are required to have an invariant mass greater than 500 (250) GeV for the 2-jet Tight (Loose) category, $\pt > 30\,(20)\GeV$, $\abs{\Delta{\ensuremath{\eta(\mathrm{jj})}\xspace}} > 3.0$, $\abs{\Delta\phi(\mathrm{jj},\Pep\Pem)}>2.6$, and $\abs{z}=\abs{\eta(\Pep\Pem)-[\eta(\mathrm{j_1})+\eta(\mathrm{j_2})]/2}< 2.5$ [@zeppenfeld]. The cut on $z$ ensures that the dielectron is produced centrally in the dijet reference frame, which helps to enhance the VBF signal over the Drell–Yan background. More details on the selection can be found in Ref. [@HIG-13-001]. The rest of the events are classified into two 0,1-jet categories. To exploit the better energy resolution of electrons in the barrel region, these categories are defined as: both electrons in the ECAL barrel (0,1-jet BB) or at least one of them in the endcap (0,1-jet Not BB). For each category, the FWHM of the expected signal peak, expected number of SM signal events for ${\ensuremath{m_\PH}\xspace}=125\GeV$, acceptance times selection efficiency, number of background events near 125 GeV, and number of data events near 125 GeV are shown in Table \[tab:nEvtsHee\].
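Purely as an illustration of the category definitions above, the dijet requirements can be written as a small selection function; the argument names are hypothetical and this is not the analysis code (in particular, jet and electron reconstruction, identification, and the barrel/endcap split of the 0,1-jet events are not modelled here).

```python
def dielectron_category(m_jj, pt_j1, pt_j2, eta_j1, eta_j2, dphi_jj_ee, eta_ee):
    """Return '2-jet Tight', '2-jet Loose', or '0,1-jet'.
    Masses and momenta are in GeV; angles in radians."""
    z = eta_ee - 0.5 * (eta_j1 + eta_j2)          # dielectron centrality in the dijet frame
    common = (abs(eta_j1 - eta_j2) > 3.0
              and abs(dphi_jj_ee) > 2.6
              and abs(z) < 2.5)
    if common and m_jj > 500.0 and min(pt_j1, pt_j2) > 30.0:
        return "2-jet Tight"
    if common and m_jj > 250.0 and min(pt_j1, pt_j2) > 20.0:
        return "2-jet Loose"
    return "0,1-jet"
```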
Data have been compared to the simulated Drell–Yan and $\ttbar$ background samples described in Section \[sec:selEff\]. In all categories, the dielectron invariant mass spectra from 110 to 160 GeV agree well, and the normalizations agree within 4.5%. Using simulation, the reducible background of [$\PH\to{\ensuremath{\gamma\gamma}}$]{}events has also been estimated. For ${\ensuremath{m_\PH}\xspace}=125\GeV$, 0.23 SM [$\PH\to{\ensuremath{\gamma\gamma}}$]{}events are expected to pass the dielectron selection compared to about $10^{-3}$ events for the SM [$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}signal. While this background is much larger than the SM [$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}signal, it is negligible compared to the Drell–Yan and $\ttbar$ backgrounds in each category.
Results are extracted from the data for ${\ensuremath{m_\PH}\xspace}$ values between 120 and 150 GeV by fitting the mass spectra of the four categories in the range $110 < {\ensuremath{m_{\Pe\Pe}}\xspace}<160\GeV$. The parameterizations used for the signal and background are the same as those used in the [$\Pgmp\Pgmm$]{}search, a double-Gaussian function and Eq. (\[eqn:bkg\]), respectively. Background-only [$m_{\Pe\Pe}$]{}fits to data are shown in Fig. \[fig:eemassspectrum\] for the 0,1-jet BB and 2-jet Tight categories.
![ The dielectron invariant mass at 8 TeV and the background model are shown for the 0,1-jet BB () and 2-jet Tight () categories. A best fit of the background model (see Section \[sec:results\]) is shown by a solid line, while its fit uncertainty is represented by a lighter band. The dotted line illustrates the expected SM Higgs boson signal enhanced by a factor of $10^6$, for ${\ensuremath{m_\PH}\xspace}=125\GeV$. The lower histograms show the residual for each bin (Data-Fit) normalized by the Poisson statistical uncertainty of the background model ($\sigma_\mathrm{Fit}$). Also given are the sum of squares of the normalized residuals ($\chi^2$) divided by the number of degrees of freedom (NDF) and the corresponding $p$-value assuming the sum follows the $\chi^2$ distribution.[]{data-label="fig:eemassspectrum"}](massPlot_cat0 "fig:"){width="49.00000%"} ![ The dielectron invariant mass at 8 TeV and the background model are shown for the 0,1-jet BB () and 2-jet Tight () categories. A best fit of the background model (see Section \[sec:results\]) is shown by a solid line, while its fit uncertainty is represented by a lighter band. The dotted line illustrates the expected SM Higgs boson signal enhanced by a factor of $10^6$, for ${\ensuremath{m_\PH}\xspace}=125\GeV$. The lower histograms show the residual for each bin (Data-Fit) normalized by the Poisson statistical uncertainty of the background model ($\sigma_\mathrm{Fit}$). Also given are the sum of squares of the normalized residuals ($\chi^2$) divided by the number of degrees of freedom (NDF) and the corresponding $p$-value assuming the sum follows the $\chi^2$ distribution.[]{data-label="fig:eemassspectrum"}](massPlot_cat2 "fig:"){width="49.00000%"}
Systematic uncertainties are estimated and incorporated into the results using the same methods as in the [$\Pgmp\Pgmm$]{}search (see Section \[sec:systUnc\]). Table \[tab:heeSyst\] lists the systematic uncertainties in the signal yield. The pileup modeling, pileup jet rejection, and MC statistics systematic uncertainties are small and neglected for the [$\Pep\Pem$]{}search. The systematic uncertainties due to the jet energy resolution and absolute jet energy scale are combined and listed as “Jet energy scale” in Table \[tab:heeSyst\]. The uncertainty related to the choice of background parameterization in terms of the number of signal events ($N_\text{P}$) is shown in Table \[tab:nEvtsHee\]. This systematic uncertainty is larger than all of the others, and removing it would lower the expected limit by 28%, for ${\ensuremath{m_\PH}\xspace}=125\GeV$.
Source GF \[%\] VBF \[%\]
--------------------------------------- ---------- -----------
Higher-order corrections [@LHCHXSWG3] 8–18 1–7
PDF [@LHCHXSWG3] 11 5
PS/UE 6–42 3–10
Integrated luminosity [@lumi_2013] 2.6 2.6
Electron efficiency 2 2
Jet energy scale $<$1–11 2–3
No significant excess of events is observed. Upper limits on $\sigma(8\TeV) {\ensuremath{\mathcal{B}}\xspace}$ and ${\ensuremath{\mathcal{B}}\xspace}$ are reported. The observed 95% CL upper limit on $\sigma(8\TeV) {\ensuremath{\mathcal{B}}\xspace}$ at 125 GeV is 0.041 pb, while the background-only expected limit is $0.052^{+0.022}_{-0.015}$ pb. Assuming the SM production cross section, this corresponds to an observed upper limit on ${\ensuremath{\mathcal{B}}\xspace}({\ensuremath{\PH\to{\ensuremath{\Pep\Pem}\xspace}}\xspace})$ of 0.0019, which is approximately $3.7\times10^5$ times the SM prediction. Upper limits on $\sigma(8\TeV) {\ensuremath{\mathcal{B}}\xspace}$ are shown for Higgs boson masses from 120 to 150 GeV at the 95% CL in Fig. \[fig:xsbrLimits\] ( ).
Category & & & & & & & & [\[%\]]{}\
0,1-jet BB & 4.0 & 27.5 & 16.7 & 56.1 & 5208.9 & 5163 & 75.0 & 61\
0,1-jet Not BB & 7.1 & 17.0 & 9.7 & 34.6 & 8675.0 & 8748 & 308.7 & 174\
2-jet Tight & 3.8 & 0.5 & 10.7 & 2.6 & 17.7 & 22 & 19.5 & 71\
2-jet Loose & 4.7 & 1.0 & 7.3 & 3.1 & 79.5 & 84 & 43.2 & 88\
Sum of categories & & 46.0 & 44.4 & 96.4 & 13981.1 & 14017 & &\
Summary {#sec:summary}
=======
Results are presented from a search for a SM-like Higgs boson decaying to [$\Pgmp\Pgmm$]{}and for the first time to [$\Pep\Pem$]{}. For the search in [$\Pgmp\Pgmm$]{}, the analyzed CMS data correspond to integrated luminosities of $5.0\pm0.1$ fb$^{-1}$ collected at 7 TeV and $19.7\pm0.5$ fb$^{-1}$ collected at 8 TeV, while only the 8 TeV data are used for the search in the [$\Pep\Pem$]{}channel. The Higgs boson signal is sought as a narrow peak in the dilepton invariant mass spectrum on top of a smoothly falling background dominated by the Drell–Yan, $\ttbar$, and vector boson pair-production processes. Events are split into categories corresponding to different production topologies and dilepton invariant mass resolutions. The signal strength is then extracted using a simultaneous fit to the dilepton invariant mass spectra in all of the categories.
No significant [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}signal is observed. Upper limits are set on the signal strength at the 95% CL. Results are presented for Higgs boson masses between 120 and 150 GeV. The combined observed limit on the signal strength, for a Higgs boson with a mass of 125 GeV, is 7.4, while the expected limit is $6.5^{+2.8}_{-1.9}$. Assuming the SM production cross section, this corresponds to an upper limit of 0.0016 on ${\ensuremath{\mathcal{B}}\xspace}({\ensuremath{\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}}\xspace})$. For a Higgs boson mass of 125 GeV, the best fit signal strength is $0.8^{+3.5}_{-3.4}$.
In the [$\PH\to{\ensuremath{\Pep\Pem}\xspace}$]{}channel, SM Higgs boson decays are far too rare to detect, and no signal is observed. For a Higgs boson mass of 125 GeV, a 95% CL upper limit of 0.041 pb is set on $\sigma {\ensuremath{\mathcal{B}}\xspace}({\ensuremath{\PH\to{\ensuremath{\Pep\Pem}\xspace}}\xspace})$ at 8 TeV. Assuming the SM production cross section, this corresponds to an upper limit on ${\ensuremath{\mathcal{B}}\xspace}({\ensuremath{\PH\to{\ensuremath{\Pep\Pem}\xspace}}\xspace})$ of 0.0019, which is approximately $3.7\times10^5$ times the SM prediction. For comparison, the [$\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}$]{}observed 95% CL upper limit on $\sigma {\ensuremath{\mathcal{B}}\xspace}({\ensuremath{\PH\to{\ensuremath{\Pgmp\Pgmm}\xspace}}\xspace})$ is 0.033 pb (using only the 8 TeV data), which is 7.0 times the expected SM Higgs boson cross section.
These results, together with recent evidence for the 125 GeV boson’s coupling to $\tau$-leptons [@cmsHtautau] with a larger [$\mathcal{B}$]{} consistent with the SM value of $0.0632 \pm 0.0036$ [@Denner_2011mq], confirm the SM prediction that the leptonic couplings of the new boson are not flavour-universal.
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWFW and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); MoER, ERC IUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NIH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); NRF and WCU (Republic of Korea); LAS (Lithuania); MOE and UM (Malaysia); CINVESTAV, CONACYT, SEP, and UASLP-FAI (Mexico); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS and RFBR (Russia); MESTD (Serbia); SEIDI and CPAN (Spain); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA).
Individuals have received support from the Marie-Curie programme and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation à la Recherche dans l’Industrie et dans l’Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS programme of Foundation for Polish Science, cofinanced from European Union, Regional Development Fund; the Compagnia di San Paolo (Torino); the Consorzio per la Fisica (Trieste); MIUR project 20108T4XTM (Italy); the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF; and the National Priorities Research Program by Qatar National Research Fund; and the Russian Scientific Fund, grant N 14-12-00110.
The CMS Collaboration \[app:collab\]
====================================
|
---
abstract: 'The tempered fractional Laplacian is the generator of the tempered isotropic Lévy process \[W.H. Deng, B.Y. Li, W.Y. Tian, and P.W. Zhang, Multiscale Model. Simul., 16(1), 125-149, 2018\]. This paper provides a finite difference discretization for the two-dimensional tempered fractional Laplacian $(\Delta+\lambda)^{\frac{\beta}{2}}$. We then use it to solve the tempered fractional Poisson equation with Dirichlet boundary conditions and derive error estimates. Numerical experiments verify the convergence rates and the effectiveness of the schemes.'
author:
- 'Jing Sun$^{1}$, Daxin Nie$^{1}$, Weihua Deng$^{*,1}$'
date: 'Received: date / Accepted: date'
title: 'Algorithm implementation and numerical analysis for the two-dimensional tempered fractional Laplacian'
---
Introduction
============
Anomalous diffusion refers to the motion of particles whose mean squared displacement is a nonlinear function of the time $t$ [@metzler]; it is widely observed in the natural world [@Klafter2005] and has many applications in various fields, such as physical systems [@Hilfer2000], stochastic dynamics [@Bogdan2003], finance [@Mainardi], image processing [@Buades], and so on. The fractional Laplacian $\Delta^{\beta/2}$ is the fundamental non-local operator for modelling anomalous dynamics; it is the infinitesimal generator of a $\beta$-stable Lévy process [@Applebaum2009; @Gunzburger2013; @Pozrikidis2016] and the scaling limit of the Lévy flight. The extremely long jumps make the second and all higher-order moments of the Lévy flight diverge, so it sometimes fails to model practical physical processes well. To overcome this, a natural idea is to introduce a parameter $\lambda$ (a sufficiently small number) that exponentially tempers the isotropic power-law measure of the jump length; the resulting process generates the tempered fractional Laplacian $(\Delta+\lambda)^{\frac{\beta}{2}}$, physically introduced and mathematically defined in [@Deng:17] as $$\label{idef1}
(\Delta+\lambda)^{\frac{\beta}{2}}u(\mathbf{x})=-c_{n,\beta,\lambda}{\rm P.V.}\int_{\mathbb{R}^n}\frac{u(\mathbf{x})-u(\mathbf{y})}{e^{\lambda|\mathbf{x}-\mathbf{y}|}|\mathbf{x}-\mathbf{y}|^{n+\beta}}d\mathbf{y} ~~~~~{\rm for}~~\beta \in(0,2),$$ where $$c_{n,\beta,\lambda}=
\frac{\Gamma(\frac{n}{2})}{2\pi^{n/2}|\Gamma(-\beta)|},$$ P.V. denotes the principal value integral, and $\Gamma(t)=\int_{0}^{\infty}s^{t-1}e^{-s}ds$ is the Gamma function; its Fourier transform [@Deng:17] is given by $$\label{idef2}
\begin{split}
&\mathcal{F}\left((\Delta +\lambda)^{\beta/2}u(\mathbf{x})\right)\\
&=(-1)^{\lfloor\beta\rfloor}\left(\lambda^\beta-(\lambda^2+|\mathbf{k}|^2)^\frac{\beta}{2}~_2F_1\left(-\frac{\beta}{2},\frac{n+\beta-1}{2};\frac{n}{2};\frac{|\mathbf{k}|^2}{\lambda^2+|\mathbf{k}|^2}\right)\right)\mathcal{F}({u}\left(\mathbf{x})\right),
\end{split}$$ where $\beta\in (0,1)\bigcup(1,2)$, $\lfloor\beta\rfloor$ denotes the largest integer less than or equal to $\beta$, and $_2F_1$ is the Gauss hypergeometric function [@Abramowitz]. Evidently, when $\lambda=0$, the expression (\[idef1\]) reduces to the fractional Laplacian in its singular integral form [@Stein1970; @Kwa2015] $$\label{idef3}
(\Delta)^{\frac{\beta}{2}}u(\mathbf{x})=-c_{n,\beta} {\rm P.V.}\int_{\mathbb{R}^n}\frac{u(\mathbf{x})-u(\mathbf{y})}{|\mathbf{x}-\mathbf{y}|^{n+\beta}}d\mathbf{y} ~~~~~{\rm for} ~~\beta \in(0,2),$$ where $$c_{n,\beta}=\frac{\beta\Gamma(\frac{n+\beta}{2})}{2^{1-\beta}\pi^{n/2}\Gamma(1-\beta/2)}.$$
The main challenge in numerically solving (\[idef1\]) and (\[idef3\]) comes from their non-locality and weak singularity, especially in high dimensions. The fractional Laplacian is currently an active topic in both the analytical and numerical communities. For example, [@ACOSTA2017] introduces a finite element approximation for the $n$-dimensional homogeneous Dirichlet problem for the fractional Laplacian, and [@Acosta2017] presents the code employed for its implementation in two dimensions; [@Huang2014] provides a finite difference-quadrature approach and gives its convergence proof; [@Huang1611] proposes several finite difference discretizations and tackles the non-locality, singularity, and flat tails in practical implementations; [@Duo2017] provides a weighted trapezoidal rule for the fractional Laplacian in the singular integral form and gives additional insight into the convergence behaviour of the method through extensive numerical examples. For the tempered fractional Laplacian (\[idef1\]), the existing numerical methods have mainly been analyzed in one dimension. Among them, [@ZhangDeng2017] presents a Riesz basis Galerkin method for the tempered fractional Laplacian and gives the well-posedness proof of the Galerkin weak formulation together with a convergence analysis; [@Zhang2017] proposes a finite difference scheme and proves that the accuracy depends on the regularity of the exact solution on $\bar{\Omega}$ rather than on the whole line. So far, it seems that there is no numerical analysis or implementation discussion of (\[idef1\]) in two dimensions.
In this paper, we derive a finite difference scheme for the tempered fractional Laplacian (\[idef1\]) in two dimensions, based on the weighted trapezoidal rule combined with bilinear interpolation. Specifically, we first write (\[idef1\]) as the weighted integral of a weakly singular function by introducing the function $\phi_{\gamma}$ and reducing the integration over the whole plane to the first quadrant by symmetry; we then approximate the integral by the weighted trapezoidal rule in the neighborhood of any fixed point $(x,y)$ and by bilinear interpolation over the rest of the computational domain $\Omega$. It is worth mentioning that the present method also works well for the two-dimensional fractional Laplacian (\[idef3\]). Furthermore, we apply the discretization to solve the two-dimensional tempered fractional Poisson equation with Dirichlet boundary conditions [@Deng:17]
$$\label{defequ1}
\left\{
\begin{split}
-(\Delta+\lambda)^{\frac{\beta}{2}}u(\mathbf{x})&=f(\mathbf{x}) & {\rm for}~\mathbf{x}\in\Omega \\
u(\mathbf{x})&=0 & {\rm for}~\mathbf{x}\in\mathbb{R}^2\backslash \Omega.
\end{split}
\right.$$
The accuracy of the scheme is proved to be $O(h^{2-\beta})$ for $u\in C^2(\mathbb{R}^2)$.
As is well known, discretizing a non-local operator generally gives rise to a full matrix, so the design of an efficient iterative solver is important. When discretizing the two-dimensional tempered fractional Laplacian, we obtain a symmetric block Toeplitz matrix with Toeplitz blocks. Here, we use the structure of the matrix to design the solver for (\[defequ1\]): we solve (\[defequ1\]) with the conjugate gradient method, and in each iteration we evaluate $B\mathbf{U}$ ($B$ a symmetric block Toeplitz matrix with Toeplitz blocks and $\mathbf{U}$ a vector) by the fast Fourier transform [@Chen2005] to reduce the computational complexity. The resulting algorithm has a memory requirement of $O(N^2)$ and a computational cost of $O(N^2 \log N^2)$ per iteration, instead of a memory requirement of $O(N^4)$ and a computational cost of $O(N^6)$. Next, to verify the convergence rates of the presented scheme, numerical experiments are performed for an equation with known exact solution. When the source term is not known in closed form, we give an algorithm to approximate it, which changes the unbounded integration domain into a bounded one through a polar coordinate transformation in some special cases; for the details, see Appendix A. The key points of the code implementation are stated in Appendix B.
The paper is organized as follows. In Section 2, we propose a discretization scheme for the tempered fractional Laplacian through the weighted trapezoidal rule combined with the bilinear interpolation, and give its truncation error. In Section 3, we solve the tempered fractional Poisson equation with Dirichlet boundary conditions by the presented scheme and provide the error estimates. In the last Section, through numerical experiments for the equation with/without known solution, we verify the convergence rates and show the effectiveness of the schemes.
Numerical discretization of the tempered fractional Laplacian and its truncation error
======================================================================================
This section provides the discretization of the two-dimensional tempered fractional Laplacian by the weighted trapezoidal rule combined with bilinear interpolation on a bounded domain $\Omega=(-l,l)\times(-l,l)$ with extended homogeneous Dirichlet boundary conditions: $u(x,y)\equiv 0$ for $(x,y)\in\Omega^c$. Afterwards, we analyze the truncation error of the discretization.
Let us introduce the inner product and norms that will be used in the paper. Define the discrete $L_2$ inner product and $L_2$ norm as $$\begin{split}
&(\mathbf{V},\mathbf{W})=h\sum_{i=1}^{M}v_iw_i,\\
&\|\mathbf{V}\|=\sqrt{(\mathbf{V},\mathbf{V})};
\end{split}$$ denote $$\begin{split}
&\|v\|_{L_\infty(\Omega)}=\sup_{x\in \Omega}|v(x)|, \\
&\|\mathbf{V}\|_\infty=\max_{1\leq i\leq M}|v_i|,
\end{split}$$ as the continuous and discrete $L_\infty$ norm, where $\mathbf{V}, \mathbf{W} \in \mathbb{R}^M$.
Numerical scheme
----------------
According to (\[idef1\]), the definition of the tempered fractional Laplacian in two dimension is $$\label{def2}
\begin{split}
-(\Delta+\lambda)^{\frac{\beta}{2}}u(x,y)=-c_{2,\beta,\lambda}{\rm P.V.}\int\int_{\mathbb{R}^2}\frac{u(x+\xi,y+\eta)-u(x,y)}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{2+\beta}}}d\xi d\eta,
\end{split}$$ which can be symmetrized as $$\label{nonintegral}
\begin{split}
&-(\Delta+\lambda)^{\frac{\beta}{2}}u(x,y)\\
=&-\frac{c_{2,\beta,\lambda}}{2}\int\int_{\mathbb{R}^2}\frac{u(x+\xi,y+\eta)-2u(x,y)+u(x-\xi,y-\eta)}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{2+\beta}}}d\xi d\eta\\
=&-\frac{c_{2,\beta,\lambda}}{4}\int\int_{\mathbb{R}^2}\frac{g(x,y,\xi,\eta)}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{2+\beta}}}d\xi d\eta
\end{split}$$ with $$g(x,y,\xi,\eta)=u(x+\xi,y+\eta)+u(x-\xi,y+\eta)+u(x-\xi,y-\eta)+u(x+\xi,y-\eta)-4u(x,y).$$ By the symmetry of the integral domain and integrand, Eq. (\[nonintegral\]) can be rewritten as $$\begin{split}\label{decequa1}
&-(\Delta+\lambda)^{\frac{\beta}{2}}u(x,y)
=-c_{2,\beta,\lambda}\int_{0}^{\infty}\int_{0}^{\infty}\frac{g(x,y,\xi,\eta)}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{2+\beta}}}d\eta d\xi .\\
\end{split}$$ If we denote $$\phi_{\gamma}(\xi,\eta)=\frac{g(x,y,\xi,\eta)}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{\gamma}}},$$ where $\gamma\in(\beta,2]$, then (\[decequa1\]) becomes $$\begin{split}\label{decequa2}
&-(\Delta+\lambda)^{\frac{\beta}{2}}u(x,y)
=-c_{2,\beta,\lambda}\int_{0}^{\infty}\int_{0}^{\infty}\frac{\phi_{\gamma}(\xi,\eta)}{\left(\sqrt{\xi^2+\eta^2}\right)^{-\gamma+2+\beta}} d\eta d\xi.
\end{split}$$ Now, we just need to discretize the tempered fractional Laplacian in $[0,\infty)\times[0,\infty)$ instead of $\mathbb{R}\times\mathbb{R}$. Taking a constant $L=2l$, we have $u(x+\xi,y+\eta)=0$ for $(\xi,\eta)\notin (-L,L)\times(-L,L)$. Thus, $$\label{equdis28}
\begin{split}
-(\Delta+\lambda)^{\frac{\beta}{2}}u(x,y)=-c_{2,\beta,\lambda}&\Big(\int_{0}^{L}\int_{0}^{L}\frac{\phi_{\gamma}(\xi,\eta)}{\left(\sqrt{\xi^2+\eta^2}\right)^{-\gamma+2+\beta}} d\eta d\xi\\
&-4\int_{0}^{L}\int_{L}^{\infty}\frac{u(x,y)}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{2+\beta}}} d\eta d\xi\\
&-4\int_{L}^{\infty}\int_{0}^{L}\frac{u(x,y)}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{2+\beta}}} d\eta d\xi\\
&-4\int_{L}^{\infty}\int_{L}^{\infty}\frac{u(x,y)}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{2+\beta}}}d\eta d\xi \Big).
\end{split}$$ For convenience, we denote $$\begin{split}
G^{\infty}=&\int_{0}^{L}\int_{L}^{\infty}\frac{1}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{2+\beta}}}d\eta d\xi\\
&+\int_{L}^{\infty}\int_{0}^{L}\frac{1}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{2+\beta}}} d\eta d\xi\\
&+\int_{L}^{\infty}\int_{L}^{\infty}\frac{1}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{2+\beta}}}d\eta d\xi.\\
\end{split}$$
Let the mesh sizes be $h_1=L/N_{i}$ and $h_2=L/N_{j}$; denote the grid points $\xi_i=ih_1$, $\eta_j=jh_2$, for $0\leq i \leq N_{i}$, $0\leq j \leq N_{j}$; for convenience, we set $N_i=N_j$ and write $h=h_1=h_2$. Then, we can formulate the first integral in (\[equdis28\]) as $$\label{equtodis}
\int_{0}^{L}\int_{0}^{L}\phi_{\gamma}(\xi,\eta)(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}} d\eta d\xi =\sum_{i=0}^{N_i-1}\sum_{j=0}^{N_j-1}\int_{\xi_{i}}^{\xi_{i+1}}\int_{\eta_{j}}^{\eta_{j+1}}\phi_{\gamma}(\xi,\eta)(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}}d\eta d\xi.$$
For (\[equtodis\]), when $(i,j)=(0,0)$, it is easy to see that the integrand is weakly singular, so we approximate the integral by the weighted trapezoidal rule. For different $\gamma$, we use different quadrature nodes, namely, $$\label{equdis11}
\begin{split}
\int_{\xi_{0}}^{\xi_1}\int_{\eta_0}^{\eta_1}&\phi_{\gamma}(\xi,\eta)(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}} d\eta d\xi=\\&\left\{
\begin{split}
&\frac{1}{4}\left(\lim_{(\xi,\eta)\rightarrow(0,0)}\phi_{\gamma}(\xi,\eta)+\phi_{\gamma}(\xi_0,\eta_1)+\phi_{\gamma}(\xi_1,\eta_1)+\phi_{\gamma}(\xi_1,\eta_0)\right)G_{0,0},~~\gamma\in(\beta,2);\\
&\frac{1}{3}\left(\phi_{\gamma}(\xi_0,\eta_1)+\phi_{\gamma}(\xi_1,\eta_1)+\phi_{\gamma}(\xi_1,\eta_0)\right)G_{0,0},~~~~\gamma=2,
\end{split}
\right.
\end{split}$$ where $$\label{equdefG00}
G_{0,0}=\int_{\xi_{0}}^{\xi_{1}}\int_{\eta_{0}}^{\eta_{1}}(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}} d\eta d\xi.$$ Assuming $u$ is smooth enough, for $\gamma \in(\beta,2)$ we have $$\lim_{(\xi,\eta)\rightarrow(0,0)}\phi_{\gamma}(\xi,\eta)=0,$$ so we introduce the parameter $k_\gamma$: $$k_\gamma=\left\{
\begin{split}
1~~~~~~~~~~~~~~&\gamma\in(\beta,2);\\
\frac{4}{3}~~~~~~~~~~~~~~&\gamma=2.
\end{split}
\right.$$ Then, Eq. (\[equdis11\]) can be rewritten as $$\int_{\xi_{0}}^{\xi_1}\int_{\eta_0}^{\eta_1}\phi_{\gamma}(\xi,\eta)(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}}d\eta d\xi=\frac{k_\gamma}{4}\left(\phi_{\gamma}(\xi_0,\eta_1)+\phi_{\gamma}(\xi_1,\eta_1)+\phi_{\gamma}(\xi_1,\eta_0)\right)G_{0,0}.$$ For the remaining part of (\[equtodis\]), when $(i,j)\neq(0,0)$, we deal with the integration by bilinear interpolation. Before discretizing it, we define the following functions $$\label{equdefG}
\begin{split}
G_{i,j}&=\frac{1}{h^2}\int_{\xi_{i}}^{\xi_{i+1}}\int_{\eta_{j}}^{\eta_{j+1}}(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}} d\eta d\xi,\\
G^\xi_{i,j}&=\frac{1}{h^2}\int_{\xi_{i}}^{\xi_{i+1}}\int_{\eta_{j}}^{\eta_{j+1}}\xi(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}} d\eta d\xi,\\
G^\eta_{i,j}&=\frac{1}{h^2}\int_{\xi_{i}}^{\xi_{i+1}}\int_{\eta_{j}}^{\eta_{j+1}}\eta(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}}d\eta d\xi, \\
G^{\xi\eta}_{i,j}&=\frac{1}{h^2}\int_{\xi_{i}}^{\xi_{i+1}}\int_{\eta_{j}}^{\eta_{j+1}}\xi\eta(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}} d\eta d\xi,
\end{split}$$ where $G_{i,j}$, $G^\xi_{i,j}$, $G^\eta_{i,j}$, $G^{\xi\eta}_{i,j}$ can be obtained by numerical integration.
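As an illustration, the cell integrals in (\[equdefG\]) (and $G_{0,0}$ in (\[equdefG00\])) involve only the radial kernel and can be precomputed once per mesh; the following minimal sketch uses `scipy.integrate.dblquad` and is meant only to make the quadrature step concrete, not to reproduce the implementation used for the reported experiments.

```python
from scipy.integrate import dblquad

def cell_moments(i, j, h, beta, gamma):
    """Return (G_{i,j}, G^xi_{i,j}, G^eta_{i,j}, G^{xi eta}_{i,j}) on the cell
    [xi_i, xi_{i+1}] x [eta_j, eta_{j+1}], including the 1/h^2 prefactor of (equdefG).
    For the singular cell (i, j) = (0, 0), G_{0,0} of (equdefG00) is h^2 times the first
    returned value; the integrable singularity at the origin is handled by the adaptive
    quadrature, although a refined rule may be preferable there in practice."""
    xi0, xi1 = i * h, (i + 1) * h
    eta0, eta1 = j * h, (j + 1) * h
    kernel = lambda eta, xi: (xi**2 + eta**2) ** ((gamma - 2.0 - beta) / 2.0)
    weights = (lambda eta, xi: 1.0,        # G_{i,j}
               lambda eta, xi: xi,         # G^xi_{i,j}
               lambda eta, xi: eta,        # G^eta_{i,j}
               lambda eta, xi: xi * eta)   # G^{xi eta}_{i,j}
    moments = []
    for w in weights:
        val, _ = dblquad(lambda eta, xi, w=w: w(eta, xi) * kernel(eta, xi),
                         xi0, xi1, lambda xi: eta0, lambda xi: eta1)
        moments.append(val / h**2)
    return tuple(moments)
```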
Further denote $I_{i,j}$ as the interpolation integration in $[\xi_i,\xi_{i+1}]\times [\eta_j,\eta_{j+1}]$, i.e., $$\begin{split}
I_{i,j}=&\phi_\gamma(\xi_i,\eta_j)(G^{\xi\eta}_{i,j}-\xi_{i+1}G^\eta_{i,j}-\eta_{j+1}G^\xi_{i,j}+\xi_{i+1}\eta_{j+1}G_{i,j})\\
&-\phi_\gamma(\xi_{i+1},\eta_j)(G^{\xi\eta}_{i,j}-\xi_{i}G^\eta_{i,j}-\eta_{j+1}G^\xi_{i,j}+\xi_{i}\eta_{j+1}G_{i,j})\\
&-\phi_\gamma(\xi_i,\eta_{j+1})(G^{\xi\eta}_{i,j}-\xi_{i+1}G^\eta_{i,j}-\eta_{j}G^\xi_{i,j}+\xi_{i+1}\eta_{j}G_{i,j})\\
&+\phi_\gamma(\xi_{i+1},\eta_{j+1})(G^{\xi\eta}_{i,j}-\xi_{i}G^\eta_{i,j}-\eta_{j}G^\xi_{i,j}+\xi_{i}\eta_{j}G_{i,j});
\end{split}$$ and let $$\label{equdefW}
\begin{split}
W^1_{i,j}&=G^{\xi\eta}_{i,j}-\xi_{i+1}G^\eta_{i,j}-\eta_{j+1}G^\xi_{i,j}+\xi_{i+1}\eta_{j+1}G_{i,j},\\
W^2_{i,j}&=-\left(G^{\xi\eta}_{i-1,j}-\xi_{i-1}G^\eta_{i-1,j}-\eta_{j+1}G^\xi_{i-1,j}+\xi_{i-1}\eta_{j+1}G_{i-1,j}\right),\\
W^3_{i,j}&=-\left(G^{\xi\eta}_{i,j-1}-\xi_{i+1}G^\eta_{i,j-1}-\eta_{j-1}G^\xi_{i,j-1}+\xi_{i+1}\eta_{j-1}G_{i,j-1}\right),\\
W^4_{i,j}&=G^{\xi\eta}_{i-1,j-1}-\xi_{i-1}G^\eta_{i-1,j-1}-\eta_{j-1}G^\xi_{i-1,j-1}+\xi_{i-1}\eta_{j-1}G_{i-1,j-1}.
\end{split}$$ Then, $I_{i,j}$ can be rewritten as $$\label{equIij2}
\begin{split}
I_{i,j}=&\phi_\gamma(\xi_i,\eta_j)W^1_{i,j}+\phi_\gamma(\xi_{i+1},\eta_j)W^2_{i+1,j}+\phi_\gamma(\xi_i,\eta_{j+1})W^3_{i,j+1}
+\phi_\gamma(\xi_{i+1},\eta_{j+1})W^4_{i+1,j+1},
\end{split}$$ and Eq. (\[equtodis\]) becomes $$\label{equdiswithI}
\begin{split}
&\sum_{i=0}^{N_i-1}\sum_{j=0}^{N_j-1}\int_{\xi_{i}}^{\xi_{i+1}}\int_{\eta_{j}}^{\eta_{j+1}}\phi_{\gamma}(\xi,\eta)(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}}d\eta d\xi \\
=&\frac{k_\gamma}{4}\left(\phi_{\gamma}(\xi_0,\eta_1)+\phi_{\gamma}(\xi_1,\eta_1)+\phi_{\gamma}(\xi_1,\eta_0)\right)G_{0,0}\\
&+\sum^{i=N_i-1,j=N_j-1}_{
\begin{subarray}{c}
i,j=0;\\(i,j)\neq(0,0)
\end{subarray}}I_{i,j}.
\end{split}$$ Combining (\[equIij2\]) with (\[equdiswithI\]), we derive $$\begin{split}
&\sum_{i=0}^{N_i-1}\sum_{j=0}^{N_j-1}\int_{\xi_{i}}^{\xi_{i+1}}\int_{\eta_j}^{\eta_{j+1}}\phi_{\gamma}(\xi,\eta)(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}} d\eta d\xi \\
=&\left(\frac{k_\gamma}{4}G_{0,0}+W^1_{1,1}+W^2_{1,1}+W^3_{1,1}\right)\phi_{\gamma}(\xi_1,\eta_1)\\
&+\left(\frac{k_\gamma}{4}G_{0,0}+W^1_{1,0}\right)\phi_{\gamma}(\xi_1,\eta_0)+\left(\frac{k_\gamma}{4}G_{0,0}+W^1_{0,1}\right)\phi_{\gamma}(\xi_0,\eta_1)\\
&+\sum_{i=2}^{N_i-1}\left(W^1_{i,0}+W^2_{i,0}\right)\phi_{\gamma}(\xi_i,\eta_0)+\sum_{j=2}^{N_j-1}\left(W^1_{0,j}+W^3_{0,j}\right)\phi_{\gamma}(\xi_0,\eta_j)\\
&+\sum_{i=1}^{N_i-1}\left(W^3_{i,N_j}+W^4_{i,N_j}\right)\phi_{\gamma}(\xi_i,\eta_{N_j})+\sum_{j=1}^{N_j-1}\left(W^2_{N_i,j}+W^4_{N_i,j}\right)\phi_{\gamma}(\xi_{N_i},\eta_j)\\
&+W^3_{0,N_j}\phi_{\gamma}(\xi_0,\eta_{N_j})+W^2_{N_i,0}\phi_{\gamma}(\xi_{N_i},\eta_{0})+W^4_{N_i,N_j}\phi_{\gamma}(\xi_{N_i},\eta_{N_j})\\
&+\sum^{i=N_i-1,j=N_j-1}_{
\begin{subarray}{c}
i,j=1;\\(i,j)\neq(1,1)
\end{subarray}}\left(W^1_{i,j}+W^2_{i,j}+W^3_{i,j}+W^4_{i,j}\right)\phi_{\gamma}(\xi_i,\eta_j).
\end{split}$$ The second part of (\[equdis28\]), namely $G^\infty$, is obtained by numerical integration.
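The tail term $G^{\infty}$ can likewise be precomputed once. A minimal sketch is given below; it truncates the unbounded integrals at a large radius $R$ (an assumption made here for simplicity, justified by the exponential tempering for $\lambda>0$ and by the $r^{-(2+\beta)}$ decay for $\lambda=0$; $R$ should be enlarged until the value stabilizes).

```python
import numpy as np
from scipy.integrate import dblquad

def g_infinity(L, beta, lam, R=None):
    """Approximate G^infty, i.e. the integral of exp(-lam*r) / r^(2+beta) over the part
    of the first quadrant outside [0, L] x [0, L], truncated at xi, eta <= R."""
    if R is None:
        R = 1000.0 * L   # truncation radius; for lam = 0 the decay is only algebraic,
                         # so increase R (or add the analytic tail) to reach the target accuracy
    kern = lambda eta, xi: np.exp(-lam * np.hypot(xi, eta)) * np.hypot(xi, eta) ** (-(2.0 + beta))
    g1, _ = dblquad(kern, 0.0, L, lambda xi: L, lambda xi: R)   # xi in [0, L], eta in [L, R]
    g2, _ = dblquad(kern, L, R, lambda xi: 0.0, lambda xi: L)   # xi in [L, R], eta in [0, L]
    g3, _ = dblquad(kern, L, R, lambda xi: L, lambda xi: R)     # xi and eta both in [L, R]
    return g1 + g2 + g3
```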
Denote $u_{p,q}=u(-l+ph,-l+qh)$, $(p,q\in \mathbb{Z})$. Then we can get the discretization scheme $$\label{defdis}
-(\Delta+\lambda)_h^{\beta/2}u_{p,q}=\sum_{i=-N_i}^{i=N_i}\sum_{j=-N_j}^{j=N_j}w_{|i|,|j|}u_{p-i,q-j},$$ where $$\label{equweightoffl}\footnotesize
w_{i,j}=-c_{2,\beta,\lambda}\left\{
\begin{split}
&-4\left(\frac{\frac{k_\gamma}{4}G_{0,0}+W^1_{1,1}+W^2_{1,1}+W^3_{1,1}}{e^{\lambda\sqrt{\xi_1^2+\eta_1^2}}\left(\sqrt{\xi_1^2+\eta_1^2}\right)^\gamma}\right.&\\
&~~~~~~ +\frac{\frac{k_\gamma}{4}G_{0,0}+W^1_{1,0}}{e^{\lambda\sqrt{\xi_1^2+\eta_0^2}}\left(\sqrt{\xi_1^2+\eta_0^2}\right)^\gamma}+\frac{\frac{k_\gamma}{4}G_{0,0}+W^1_{0,1}}{e^{\lambda\sqrt{\xi_0^2+\eta_1^2}}\left(\sqrt{\xi_0^2+\eta_1^2}\right)^\gamma}&\\
&~~~~~~ +\sum_{i=2}^{N_i-1}\frac{W^1_{i,0}+W^2_{i,0}}{e^{\lambda\sqrt{\xi_i^2+\eta_0^2}}\left(\sqrt{\xi_i^2+\eta_0^2}\right)^\gamma}+\sum_{j=2}^{N_j-1}\frac{W^1_{0,j}+W^3_{0,j}}{e^{\lambda\sqrt{\xi_0^2+\eta_j^2}}\left(\sqrt{\xi_0^2+\eta_j^2}\right)^\gamma}&\\
&~~~~~~ +\sum_{i=2}^{N_i-1}\frac{W^3_{i,N_j}+W^4_{i,N_j}}{e^{\lambda\sqrt{\xi_i^2+\eta_{N_j}^2}}\left(\sqrt{\xi_i^2+\eta_{N_j}^2}\right)^\gamma}+\sum_{j=2}^{N_j-1}\frac{W^2_{N_i,j}+W^4_{N_i,j}}{e^{\lambda\sqrt{\xi_{N_i}^2+\eta_j^2}}\left(\sqrt{\xi_{N_i}^2+\eta_j^2}\right)^\gamma}&\\
&~~~~~~+\sum_{i=1,j=1,(i,j)\neq(1,1)}^{i=N_i-1,j=N_j-1}\frac{W^1_{i,j}+W^2_{i,j}+W^3_{i,j}+W^4_{i,j}}{e^{\lambda\sqrt{\xi_{i}^2+\eta_j^2}}\left(\sqrt{\xi_{i}^2+\eta_j^2}\right)^\gamma}&\\
&~~~~~~ +\frac{W^3_{0,N_j}}{e^{\lambda\sqrt{\xi_0^2+\eta_{N_j}^2}}\left(\sqrt{\xi_0^2
+\eta_{N_j}^2}\right)^\gamma}+\frac{W^2_{N_i,0}}{e^{\lambda\sqrt{\xi_{N_i}^2+\eta_0^2}}\left(\sqrt{\xi_{N_i}^2+\eta_0^2}\right)^\gamma}&\\
&~~~~~~\left.+\frac{W^4_{N_i,N_j}}{e^{\lambda\sqrt{\xi_{N_i}^2+\eta_{N_j}^2}}\left(\sqrt{\xi_{N_i}^2+\eta_{N_j}^2}\right)^\gamma}+G^\infty\right),&i=0,j=0\\
&\frac{\frac{k_\gamma}{4}G_{0,0}+W^1_{1,1}+W^2_{1,1}+W^3_{1,1}}{e^{\lambda\sqrt{\xi_1^2+\eta_1^2}}\left(\sqrt{\xi_1^2+\eta_1^2}\right)^\gamma},&i=1,j=1\\
&2\frac{\frac{k_\gamma}{4}G_{0,0}+W^1_{1,0}}{e^{\lambda\sqrt{\xi_1^2+\eta_0^2}}\left(\sqrt{\xi_1^2+\eta_0^2}\right)^\gamma},&i=1,j=0\\
&2\frac{\frac{k_\gamma}{4}G_{0,0}+W^1_{0,1}}{e^{\lambda\sqrt{\xi_0^2+\eta_1^2}}\left(\sqrt{\xi_0^2+\eta_1^2}\right)^\gamma},&i=0,j=1\\
&2\frac{W^1_{i,0}+W^2_{i,0}}{e^{\lambda\sqrt{\xi_i^2+\eta_0^2}}\left(\sqrt{\xi_i^2+\eta_0^2}\right)^\gamma,}&1<i<N_i,j=0\\
&2\frac{W^1_{0,j}+W^3_{0,j}}{e^{\lambda\sqrt{\xi_0^2+\eta_j^2}}\left(\sqrt{\xi_0^2+\eta_j^2}\right)^\gamma},&i=0, 1<j<N_j\\
&\frac{W^3_{i,N_j}+W^4_{i,N_j}}{e^{\lambda\sqrt{\xi_i^2+\eta_{N_j}^2}}\left(\sqrt{\xi_i^2+\eta_{N_j}^2}\right)^\gamma},&1<i<N_i,j=N_j\\
&\frac{W^2_{N_i,j}+W^4_{N_i,j}}{e^{\lambda\sqrt{\xi_{N_i}^2+\eta_j^2}}\left(\sqrt{\xi_{N_i}^2+\eta_j^2}\right)^\gamma},&i=N_i, 1<j<N_j\\
&2\frac{W^3_{0,N_j}}{e^{\lambda\sqrt{\xi_{0}^2+\eta_{N_j}^2}}\left(\sqrt{\xi_{0}^2+\eta_{N_j}^2}\right)^\gamma},& i=0,j=N_j\\
&2\frac{W^2_{N_i,0}}{e^{\lambda\sqrt{\xi_{N_i}^2+\eta_0^2}}\left(\sqrt{\xi_{N_i}^2+\eta_0^2}\right)^\gamma},& i=N_i,j=0\\
&\frac{W^4_{N_i,N_j}}{e^{\lambda\sqrt{\xi_{N_i}^2+\eta_{N_j}^2}}\left(\sqrt{\xi_{N_i}^2+\eta_{N_j}^2}\right)^\gamma},&i=N_i,j=N_j\\
&\frac{W^1_{i,j}+W^2_{i,j}+W^3_{i,j}+W^4_{i,j}}{e^{\lambda\sqrt{\xi_{i}^2+\eta_j^2}}\left(\sqrt{\xi_{i}^2+\eta_j^2}\right)^\gamma},&otherwise
\end{split}
\right.$$
For the sake of convenience, we write the matrix form of the scheme (\[defdis\]) as $$\label{desdif2}
-(\Delta+\lambda)_{h}^{\frac{\beta}{2}}\mathbf{U}=B\mathbf{U},$$ where $$\mathbf{U}=\left(u_{1,1},u_{1,2},\cdots,u_{1,N_j-1},u_{2,1}\cdots,u_{2,N_j-1},\cdots,u_{N_i-1,N_j-1}\right)^{T},$$ and $$B=\left[\begin{matrix}
w_{|1-1|,|1-1|}& w_{|1-1|,|2-1|} &\cdots & w_{|(N_i-1)-1|,|(N_j-1)-1|} \\
w_{|1-1|,|1-2|}& w_{|1-1|,|2-2|} &\cdots & w_{|(N_i-1)-1|,|(N_j-1)-2|} \\
\vdots& \vdots &\ddots & \vdots \\
w_{|1-(N_i-1)|,|1-(N_j-1)|}& w_{|1-(N_i-1)|,|2-(N_j-1)|} &\cdots & w_{|(N_i-1)-(N_i-1)|,|(N_j-1)-(N_j-1)|} \\
\end{matrix}\right],$$ is the matrix representation of the tempered fractional Laplacian.
Denote the numerical solution of Eq. (\[defequ1\]) at $(-l+ph,-l+qh)$ by $u^h_{p,q}$ and the value of the source term $f$ at $(-l+ph,-l+qh)$ by $f_{p,q}$, $(p,q\in \mathbb{Z})$. Then the discrete form of Eq. (\[defequ1\]) can be written as $$\label{matex1}
B\mathbf{U}_h=F,$$ where $$\mathbf{U}_h=\left(u^h_{1,1},u^h_{1,2},\cdots,u^h_{1,N_j-1},u^h_{2,1}\cdots,u^h_{2,N_j-1},\cdots,u^h_{N_i-1,N_j-1}\right)^{T},$$ and $$F=\left(f_{1,1},f_{1,2},\cdots,f_{1,N_j-1},f_{2,1}\cdots,f_{2,N_j-1},\cdots,f_{N_i-1,N_j-1}\right)^{T}.$$
Structure of the stiffness matrix $B$
-------------------------------------
[@Chen2005] The symmetric $N\times N$ matrix $T$ is called a symmetric Toeplitz matrix if its entries are constant along each diagonal, i.e., $$T=\left[
\begin{matrix}
t_0&t_1&\cdots&t_{N-2}&t_{N-1}\\
t_1&t_0&\cdots&~t_{N-3}&t_{N-2}\\
\vdots& \vdots &\ddots & \vdots& \vdots \\
t_{N-1}&t_{N-2}&\cdots&~t_1&t_0\\
\end{matrix}\right].$$ The symmetric $N^2\times N^2$ matrix $H$ is called a symmetric block Toeplitz matrix with Toeplitz blocks if it has the following structure $$H=\left[
\begin{matrix}
T_0&T_1&\cdots&T_{n-2}&T_{n-1}\\
T_1&T_0&\cdots&~T_{n-3}&T_{n-2}\\
\vdots& \vdots &\ddots & \vdots& \vdots \\
T_{n-1}&T_{n-2}&\cdots&~T_1&T_0\\
\end{matrix}\right],$$ where each $T_i$ is a symmetric Toeplitz matrix.
Since a symmetric Toeplitz matrix $T$ is determined by its first column and each block of $H$ is a symmetric Toeplitz matrix, we can store $H$ as an $N\times N$ matrix to reduce the memory requirement [@Chen2005]. In our scheme (\[desdif2\]), it is easy to verify from (\[equweightoffl\]) that the matrix $B$ is a symmetric block Toeplitz matrix with Toeplitz blocks, so we store $B$ as an $N\times N$ matrix and reduce the memory requirement to $O(N^2)$. When solving $B\mathbf{U}_h=F$, the fast Fourier transform can be used in the iteration process, and the computational cost of calculating $B\mathbf{U}$ ($\mathbf{U}\in \mathbb{R}^{N^2}$ a vector) can be reduced to $O(N^2 \log N^2)$; a sketch of this matrix-free product and of the resulting solver is given below.
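The following is a minimal sketch of the matrix-free approach, assuming the array `W` with `W[i, j]` $=w_{i,j}$ ($0\le i,j\le N-1$, with $N$ the number of interior grid points per direction) has already been assembled from (\[equweightoffl\]); it is a sketch of the idea, not the implementation used for the reported experiments.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def bttb_matvec(W, U):
    """Apply the symmetric BTTB matrix generated by W[i, j] = w_{i,j} (i, j >= 0) to the
    N x N grid array U, i.e. (B U)_{p,q} = sum_{i,j} w_{|p-i|,|q-j|} U_{i,j},
    via circulant embedding and two-dimensional FFTs."""
    N = W.shape[0]
    C = np.zeros((2 * N, 2 * N))
    C[:N, :N] = W
    C[N + 1:, :N] = W[:0:-1, :]            # mirror in the first index
    C[:N, N + 1:] = W[:, :0:-1]            # mirror in the second index
    C[N + 1:, N + 1:] = W[:0:-1, :0:-1]    # mirror in both indices
    Upad = np.zeros((2 * N, 2 * N))
    Upad[:N, :N] = U
    V = np.fft.ifft2(np.fft.fft2(C) * np.fft.fft2(Upad)).real
    return V[:N, :N]

def solve_poisson(W, F):
    """Solve B U_h = F by conjugate gradients without ever forming B explicitly."""
    N = F.shape[0]
    A = LinearOperator((N * N, N * N), dtype=float,
                       matvec=lambda u: bttb_matvec(W, u.reshape(N, N)).ravel())
    u, info = cg(A, F.ravel())             # info == 0 signals convergence
    return u.reshape(N, N), info
```

Since $B$ is symmetric positive definite (as shown in the next section), unpreconditioned conjugate gradients converge, and each iteration is dominated by three FFTs of size $2N\times 2N$, which gives the $O(N^2\log N^2)$ per-iteration cost quoted above.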
Truncation error
----------------
\[lemfunc2error\] Let $\beta\in(0,2)$, $\xi> 0$, and $\eta>0$. If $u(x,y)\in C^{2}(\mathbb{R}^2)$ and the derivative $D^{\alpha}\phi_{\gamma}$ ($\alpha$ a multi-index with $|\alpha|\leq2$) exists for any $\gamma\in (\beta,2]$, then for $(x,y)\in\Omega$ we have $$\label{eqphikz}
\begin{split}
&\left|\phi_{\gamma}\right|\leq C\left(\xi^2+\eta^2\right)^{1-\frac{\gamma}{2}},\\
&\left|\frac{\partial^2\phi_{\gamma}}{\partial \xi^2}\right|\leq C\left((\xi^2+\eta^2)^{-\frac{\gamma}{2}}+(\xi^2+\eta^2)^{\frac{1}{2}-\frac{\gamma}{2}}+(\xi^2+\eta^2)^{1-\frac{\gamma}{2}}\right),\\
&\left|\frac{\partial^2\phi_{\gamma}}{\partial \eta^2}\right|\leq C\left((\xi^2+\eta^2)^{-\frac{\gamma}{2}}+(\xi^2+\eta^2)^{\frac{1}{2}-\frac{\gamma}{2}}+(\xi^2+\eta^2)^{1-\frac{\gamma}{2}}\right)
\end{split}$$ with $C$ a positive constant.
Using Taylor’s formula, we obtain $$\begin{split}
&\left|\phi_{\gamma}(\xi,\eta)\right|\leq\left|\frac{g(x,y,\xi,\eta)}{(\xi^2+\eta^2)^\frac{\gamma}{2}}\right|\\
\leq &\left|\frac{(\xi\frac{\partial}{\partial x}+\eta\frac{\partial}{\partial y})^2u\left|_{(x^*_1,y^*_1)}\right.+(-\xi\frac{\partial}{\partial x}+\eta\frac{\partial}{\partial y})^2u\left|_{(x^*_2,y^*_2)}\right.}{2!(\xi^2+\eta^2)^\frac{\gamma}{2}}\right.\\
&+\left.\frac{(\xi\frac{\partial}{\partial x}-\eta\frac{\partial}{\partial y})^2u\left|_{(x^*_3,y^*_3)}\right.+(-\xi\frac{\partial}{\partial x}-\eta\frac{\partial}{\partial y})^2u\left|_{(x^*_4,y^*_4)}\right.}{2!(\xi^2+\eta^2)^\frac{\gamma}{2}}\right|\\
\leq& C\left|\frac{\xi^2+\eta^2}{(\xi^2+\eta^2)^\frac{\gamma}{2}}\right|\\
\leq&C(\xi^2+\eta^2)^{1-\frac{\gamma}{2}},
\end{split}$$ where $$\begin{split}
(x^*_1,y^*_1)&\in[x,x+\xi]\times[y,y+\eta],\\
(x^*_2,y^*_2)&\in[x-\xi,x]\times[y,y+\eta],\\
(x^*_3,y^*_3)&\in[x,x+\xi]\times[y-\eta,y],\\
(x^*_4,y^*_4)&\in[x-\xi,x]\times[y-\eta,y].\\
\end{split}$$ For $\left|\frac{\partial^2\phi_{\gamma}}{\partial \xi^2}\right|$, we have $$\begin{split}
\left|\frac{\partial^2\phi_{\gamma}}{\partial \xi^2}\right|&\leq\left|\frac{ g^{(2,0)}(x,y,\xi,\eta)}{(\xi^2+\eta^2)^\frac{\gamma}{2}}\right|
+C\left|\frac{g^{(1,0)}(x, y,\xi,\eta)}{(\xi^2+\eta^2)^{1+\frac{\gamma}{2}}}\xi\right|\\
&+C\left|\frac{g^{(1,0)}(x, y,\xi,\eta)}{(\xi^2+\eta^2)^{\frac{1}{2}+\frac{\gamma}{2}}}\xi\right|
+C\left|\frac{g(x,y,\xi,\eta)}{(\xi^2+\eta^2)^{1+\frac{\gamma}{2}}}\right|\\
&+C\left|\frac{g(x,y,\xi,\eta)}{(\xi^2+\eta^2)^{\frac{1}{2}+\frac{\gamma}{2}}}\right|
+C\left|\frac{g(x,y,\xi,\eta)}{(\xi^2+\eta^2)^{2+\frac{\gamma}{2}}}\xi^2\right|\\
&+C\left|\frac{g(x,y,\xi,\eta)}{(\xi^2+\eta^2)^{\frac{3}{2}+\frac{\gamma}{2}}}\xi^2\right|
+C\left|\frac{g(x,y,\xi,\eta)}{(\xi^2+\eta^2)^{1+\frac{\gamma}{2}}}\xi^2\right|.
\end{split}$$ Using Taylor’s formula again leads to $$\label{phi}
\begin{split}
\left|\frac{\partial^2\phi_{\gamma}}{\partial \xi^2}\right|\leq C\left((\xi^2+\eta^2)^{-\frac{\gamma}{2}}+(\xi^2+\eta^2)^{\frac{1}{2}-\frac{\gamma}{2}}+(\xi^2+\eta^2)^{1-\frac{\gamma}{2}}\right).
\end{split}$$ The estimate for $\left|\frac{\partial^2\phi_{\gamma}}{\partial \eta^2}\right|$ can be obtained in the same way as the one for $\left|\frac{\partial^2\phi_{\gamma}}{\partial \xi^2}\right|$. Then the desired inequalities (\[eqphikz\]) hold.
Next, we introduce a lemma about the error of the bilinear interpolation.
\[lemmaintepola\][@Mobner2009] Let $I_h$ denote the bilinear interpolant on the box $K=[0,h]\times[0,h]$. For $f\in W^{2,\infty}(K)$ ($W^{k,p}(K)$ denotes a Sobolev space), the error of the bilinear interpolant is bounded by $$\|f-I_hf\|_{L_\infty}\leq ch^2\left(\left\|\frac{\partial^2 f}{\partial x^2}\right\|_{L_\infty}+\left\|\frac{\partial^2 f}{\partial y^2}\right\|_{L_\infty}\right).$$
The proof can be completed by using the tensor-product polynomial approximation given in [@Brenner2008]. We omit the details here.
\[thmtrunct\] Let $(\Delta+\lambda)^{\frac{\beta}{2}}_{h}$ denote the finite difference approximation of the tempered fractional Laplacian $(\Delta+\lambda)^{\frac{\beta}{2}}$. Suppose that $u(x,y)\in C^{2}(\mathbb{R}^2)$ is supported in a bounded open set $\Omega\subset\mathbb{R}^2$. Then, for any $\gamma\in(\beta,2]$, $$\left\|(\Delta+\lambda)^{\frac{\beta}{2}}u(x,y)-(\Delta+\lambda)_{h}^{\frac{\beta}{2}}u(x,y)\right\|_{L_\infty(\Omega)}\leq Ch^{2-\beta},~~~~~{\rm for}~\beta\in (0,2),$$ with $C$ a positive constant depending on $\beta$ and $\gamma$.
From (\[decequa1\]), (\[equdis28\]), (\[equtodis\]) and (\[equdiswithI\]), we obtain the error function $$\label{error1}
\begin{split}
e^h_{\beta,\gamma}(x,y)=&(\Delta+\lambda)^{\frac{\beta}{2}}u(x,y)-(\Delta+\lambda)_{h}^{\frac{\beta}{2}}u(x,y)\\
=&\left(\int_{\xi_{0}}^{\xi_1}\int_{\eta_{0}}^{\eta_1}\phi_{\gamma}(\xi,\eta)(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}}d\eta d\xi\right.\\
&\left.-\int_{\xi_{0}}^{\xi_1}\int_{\eta_{0}}^{\eta_1}\frac{k_\gamma}{4}\left(\phi_{\gamma}(\xi_0,\eta_1)+\phi_{\gamma}(\xi_1,\eta_0)+\phi_{\gamma}(\xi_1,\eta_1)\right)(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}}d\eta d\xi\right) \\
&+\sum_{
\begin{subarray}
~i=0;j=0;\\(i,j)\neq (0,0)
\end{subarray}
}^{i=N_i-1;j=N_j-1}\left(\int_{\xi_{i}}^{\xi_{i+1}}\int_{\eta_{j}}^{\eta_{j+1}}\phi_{\gamma}(\xi,\eta)(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}}d\eta d\xi-I_{i,j}\right) \\
=&\uppercase\expandafter{\romannumeral1}+\uppercase\expandafter{\romannumeral2}.
\end{split}$$ For the first part of (\[error1\]), there exists $$\begin{split}
|\uppercase\expandafter{\romannumeral1}|&\leq\int_{\xi_{0}}^{\xi_1}\int_{\eta_{0}}^{\eta_1}\left(\left|\phi_{\gamma}(\xi,\eta)\right|+\frac{k_{\gamma}}{4}\left|\phi_{\gamma}(\xi_0,\eta_1)+\phi_{\gamma}(\xi_1,\eta_0)+\phi_{\gamma}(\xi_1,\eta_1)\right|\right)(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}} d\eta d\xi \\
&\leq\int_{\xi_{0}}^{\xi_1}\int_{\eta_{0}}^{\eta_1}\left(C(\xi^2+\eta^2)^{1-\frac{\gamma}{2}}+Ch^{2-\gamma}\right)(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}} d\eta d\xi .
\end{split}$$ Taking $\xi =ph$, $\eta =qh$, we have $$\begin{split}
|\uppercase\expandafter{\romannumeral1}|\leq& Ch^{2-\beta}\int_{{0}}^{1}\int_{{0}}^{1}(p^2+q^2)^{-\frac{\beta}{2}}dq dp \\
& +Ch^{2-\beta}\int_{{0}}^{1}\int_{{0}}^{1}(p^2+q^2)^{\frac{\gamma-2-\beta}{2}}dq dp .
\end{split}$$ Since $\beta<\gamma\leq2$, we obtain $-\beta>-2$ and $\gamma-2-\beta>-2$. Then it holds $$|\uppercase\expandafter{\romannumeral1}|\leq Ch^{2-\beta}.$$ For the second part of (\[error1\]), according to Lemma \[lemmaintepola\], we have $$\begin{split}
|\uppercase\expandafter{\romannumeral2}|\leq C&\sum_{
\begin{subarray}
~i=0;j=0;\\(i,j)\neq (0,0)
\end{subarray}
}^{i=N_i-1;j=N_j-1}\int_{\xi_{i}}^{\xi_{i+1}}\int_{\eta_{j}}^{\eta_{j+1}}\left(\left\|\frac{\partial^2\phi_{\gamma}}{\partial\xi^2}\right\|_{L_\infty}+\left\|\frac{\partial^2\phi_{\gamma}}{\partial\eta^2}\right\|_{L_\infty}\right)h^2(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}}d\eta d\xi .
\end{split}$$ Denote $\Omega_{i,j}=[\xi_{i},\xi_{i+1}]\times[\eta_{j},\eta_{j+1}]$. According to Lemma \[lemfunc2error\], we have $$\begin{split}
\left\|\frac{\partial^2\phi_{\gamma}}{\partial\xi^2}\right\|_{L_\infty(\Omega_{i,j})}&\leq C\sup_{(\xi,\eta)\in\Omega_{i,j}}\left((\xi^2+\eta^2)^{-\frac{\gamma}{2}}+(\xi^2+\eta^2)^{\frac{1}{2}-\frac{\gamma}{2}}+(\xi^2+\eta^2)^{1-\frac{\gamma}{2}}\right),\\
\left\|\frac{\partial^2\phi_{\gamma}}{\partial\eta^2}\right\|_{L_\infty(\Omega_{i,j})}&\leq C\sup_{(\xi,\eta)\in\Omega_{i,j}}\left((\xi^2+\eta^2)^{-\frac{\gamma}{2}}+(\xi^2+\eta^2)^{\frac{1}{2}-\frac{\gamma}{2}}+(\xi^2+\eta^2)^{1-\frac{\gamma}{2}}\right).
\end{split}$$ For any $(\xi,\eta)\in\Omega_{i,j}$ $(i,j\geq0,(i,j)\neq(0,0))$, there exists a constant $C$ satisfying $$\begin{split}
&\sup_{(\xi,\eta)\in\Omega_{i,j}}\left((\xi^2+\eta^2)^{-\frac{\gamma}{2}}\right)\leq C(\xi^2+\eta^2)^{-\frac{\gamma}{2}},\\
&\sup_{(\xi,\eta)\in\Omega_{i,j}}\left((\xi^2+\eta^2)^{\frac{1}{2}-\frac{\gamma}{2}}\right)\leq C(\xi^2+\eta^2)^{\frac{1}{2}-\frac{\gamma}{2}},\\
&\sup_{(\xi,\eta)\in\Omega_{i,j}}\left((\xi^2+\eta^2)^{1-\frac{\gamma}{2}}\right)\leq C(\xi^2+\eta^2)^{1-\frac{\gamma}{2}}.
\end{split}$$ Thus $$\begin{split}
|\uppercase\expandafter{\romannumeral2}|\leq &C\sum_{
\begin{subarray}
~i=0;j=0;\\(i,j)\neq (0,0)
\end{subarray}
}^{i=N_i-1;j=N_j-1}\int_{\xi_{i}}^{\xi_{i+1}}\int_{\eta_{j}}^{\eta_{j+1}}h^2\left((\xi^2+\eta^2)^{\frac{-2-\beta}{2}}+(\xi^2+\eta^2)^{\frac{-1-\beta}{2}}+(\xi^2+\eta^2)^{-\frac{\beta}{2}}\right)d\eta d\xi .\\
\leq& |\uppercase\expandafter{\romannumeral2}_1|+|\uppercase\expandafter{\romannumeral2}_2|+|\uppercase\expandafter{\romannumeral2}_3|.\\
\end{split}$$ Taking $\xi =ph$, $\eta =qh$, we have $$\begin{split}
|\uppercase\expandafter{\romannumeral2}_1|\leq& C\sum_{
\begin{subarray}
~i=0;j=0;\\(i,j)\neq (0,0)
\end{subarray}
}^{i=N_i-1;j=N_j-1}h^{2-\beta}\int_{{i}}^{i+1}\int_{{j}}^{j+1}(p^2+q^2)^{\frac{-2-\beta}{2}}dq dp .
\end{split}$$ And since $-2-\beta<-2$, it holds $$\label{eques2_1}
|\uppercase\expandafter{\romannumeral2}_1|\leq Ch^{2-\beta}.$$ Then, we have $$\begin{split}
|\uppercase\expandafter{\romannumeral2}_3|\leq Ch^2\sum_{
\begin{subarray}
~i=0;j=0;\\(i,j)\neq (0,0)
\end{subarray}
}^{i=N_i-1;j=N_j-1}\int_{\xi_{i}}^{\xi_{i+1}}\int_{\eta_{j}}^{\eta_{j+1}}(\xi^2+\eta^2)^{-\frac{\beta}{2}}d\eta d\xi .\\
\end{split}$$ Take $\xi=r\cos(\theta)$, $\eta=r\sin(\theta)$. Since $0<\beta<2$, there exists $$\label{eques2_3}
\begin{split}
|\uppercase\expandafter{\romannumeral2}_3|\leq& Ch^2\int_{0}^{\frac{\pi}{2}}\int_{h}^{\sqrt{2}L}r^{1-\beta}dr d\theta\\
\leq&Ch^2.
\end{split}$$ For $|\uppercase\expandafter{\romannumeral2}_2|$, proceeding similarly to $|\uppercase\expandafter{\romannumeral2}_1|$ and $|\uppercase\expandafter{\romannumeral2}_3|$, we have $$\label{eques2_2}
|\uppercase\expandafter{\romannumeral2}_2|\leq\left\{
\begin{split}
&Ch^2~~~~~~~\beta\in(0,1];\\
&Ch^{3-\beta}~~\beta\in(1,2).
\end{split}\right.$$ From (\[eques2\_1\]), (\[eques2\_3\]) and (\[eques2\_2\]), it can be obtained that $$\begin{split}
|\uppercase\expandafter{\romannumeral2}|\leq &C h^{2-\beta}.
\end{split}$$ So for $u(x,y)\in C^2(\mathbb{R}^2)$, we have $$\left\|e^h_{\beta,\gamma}(x,y)\right\|_{L_\infty}\leq Ch^{2-\beta}.$$ This completes the proof.
Error estimates
===============
Now, we turn to the convergence proof of the designed scheme for the tempered fractional Poisson problem with Dirichlet boundary conditions (\[defequ1\]).
\[lemmaGersgorin\][@Axelsson1996] The spectrum $\lambda(A)$ of the matrix $A=[a_{i,j}]$ is enclosed in the union of the discs $$C_i=\{z\in \mathbb{C};|z-a_{i,i}|\leq\sum_{i\neq j}|a_{i,j}|\},~1\leq i\leq n$$ and in the union of the discs $$C'_i=\{z\in \mathbb{C};|z-a_{i,i}|\leq\sum_{i\neq j}|a_{j,i}|\},~1\leq i\leq n.$$
Next, we give a proposition about the weights $w_{i,j}$. From (\[equweightoffl\]), it is easy to verify the following properties of the weights.
\[proweight\] The weights of the tempered fractional Laplacian satisfy $$\left\{
\begin{split}
&\sum_{i=-N_i}^{i=N_i}\sum_{j=-N_j}^{j=N_j}w_{|i|,|j|}>CG^\infty>0;\\
&w_{i,j}<0,~~~~~(i,j)~\neq (0,0).
\end{split}
\right.$$
According to (\[equweightoffl\]), we just need to prove that $W^1_{i,j}$, $W^2_{i,j}$, $W^3_{i,j}$, $W^4_{i,j}>0$. Combining (\[equdefG\]) and (\[equdefW\]), there exists $$\begin{split}
W^1_{i,j}&=\frac{1}{h^2}\int_{\xi_{i}}^{\xi_{i+1}}\int_{\eta_{j}}^{\eta_{j+1}}(\xi-\xi_{i+1})(\eta-\eta_{j+1})(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}}d\eta d\xi\\
&\geq 0.
\end{split}$$ The proof for $W^2_{i,j}$, $W^3_{i,j}$ and $W^4_{i,j}$ is similar to the one for $W^1_{i,j}$. Combining $G^\infty>0$ and $G_{0,0}>0$, one can get $\sum_{i=-N_i}^{i=N_i}\sum_{j=N_j}^{j=N_j}w_{|i|,|j|}>CG^\infty>0$ for some $C>0$. For $w_{i,j}<0\,((i,j)\neq (0,0))$, one can directly get from (\[equweightoffl\]).
According to Proposition \[proweight\] and Lemma \[lemmaGersgorin\], the minimum eigenvalue of $B$ satisfies $$\lambda_{min}(B)>CG^\infty>0.$$ So $B$ is a strictly diagonally dominant and symmetric positive definite matrix.
\[thmposerror\] Suppose that $u$ is the exact solution of the tempered fractional Poisson equation (\[defequ1\]) and $\mathbf{U}_h$ is the solution of the finite difference scheme (\[matex1\]). Then, there are $$\begin{split}
&\left\|\mathbf{U}-\mathbf{U}_h\right\|\leq C\left\|(\Delta+\lambda)_{h}^{\frac{\beta}{2}}\mathbf{U}-((\Delta+\lambda)_{h}^{\frac{\beta}{2}}\mathbf{U}_h)\right\|,\\
& \left\|\mathbf{U}-\mathbf{U}_h\right\|_{\infty}\leq C\left\|(\Delta+\lambda)_{h}^{\frac{\beta}{2}}\mathbf{U}-((\Delta+\lambda)_{h}^{\frac{\beta}{2}}\mathbf{U}_h)\right\|_{\infty}.
\end{split}$$
According to the definition of $G^{\infty}$, taking an inner product of (\[matex1\]) with $\mathbf{U}_h$ and using the Cauchy-Schwarz inequality, we have $$CG^{\infty}\left\|\mathbf{U}_h\right\|^2\leq(B\mathbf{U}_h,\mathbf{U}_h)\leq \left\|F\right\|\left\|\mathbf{U}_h\right\|,$$ which leads to $$\label{equL2error10}
\|\mathbf{U}_h\|^2\leq\frac{1}{CG^{\infty}}\|F\|\|\mathbf{U}_h\|.$$ Thus $$\label{equL2error1}
\|\mathbf{U}_h\|\leq\frac{1}{CG^{\infty}}\|F\|.$$ Assuming $\|\mathbf{U}_h\|_\infty=|u^h_{p,q}|$, according to (\[equweightoffl\]), we obtain that $$\begin{split}
&u^h_{p,q}\left(\sum_{i=-N_i}^{i=N_i}\sum_{j=-N_j}^{j=N_j}w_{|i|,|j|}u^h_{p-i,q-j}-4c_{2,\beta,\lambda}G^\infty u^h_{p,q}\right)\\
=&u^h_{p,q}\left(\sum_{
\begin{subarray}
~i=-N_i;j=-N_j;\\(i,j)\neq (0,0)
\end{subarray}
}^{i=N_i;j=N_j}w_{|i|,|j|}u^h_{p-i,q-j}+(w_{0,0}-4c_{2,\beta,\lambda}G^\infty) u^h_{p,q}\right)\\
\geq&\sum_{
\begin{subarray}
~i=-N_i;j=-N_j;\\(i,j)\neq (0,0)
\end{subarray}
}^{i=N_i;j=N_j}-w_{|i|,|j|}((u^h_{p,q})^2-u^h_{p,q}u^h_{p-i,q-j})\\
\geq&0,
\end{split}$$ which implies $$CG^\infty \left\|\mathbf{U}_{h}\right\|_{\infty}\leq\left|F_{p,q}\right|.$$ So we have $$\label{equinferror1}
CG^\infty \left\|\mathbf{U}_{h}\right\|_{\infty}\leq\|F\|_\infty.$$ In addition, from (\[desdif2\]) $$\label{equdifdif}
\mathbf{B}(\mathbf{U}-\mathbf{U}_h)=(-(\Delta+\lambda)_{h}^{\frac{\beta}{2}}\mathbf{U})-(-(\Delta+\lambda)_{h}^{\frac{\beta}{2}}\mathbf{U}_h).$$ Applying (\[equL2error1\]) and (\[equinferror1\]) to (\[equdifdif\]), the desired results are obtained.
Suppose $u\in C^2({\mathbb{R}^2})$ is the exact solution of (\[defequ1\]), and $\mathbf{U}_h$ is the solution of the difference scheme (\[matex1\]). Then $$\left\|\mathbf{U}-\mathbf{U}_h\right\|\leq Ch^{2-\beta},~~\left\|\mathbf{U}-\mathbf{U}_h\right\|_{\infty}\leq Ch^{2-\beta}.$$
Combining Theorem \[thmtrunct\] and Theorem \[thmposerror\] shows that, for $u\in C^2(\mathbb{R}^2)$, $$\left\|\mathbf{U}-\mathbf{U}_h\right\|\leq Ch^{2-\beta},~~\left\|\mathbf{U}-\mathbf{U}_h\right\|_{\infty}\leq Ch^{2-\beta}.$$
Numerical experiments
=====================
In this section, extensive numerical experiments are performed: we verify the theoretical convergence rates and show the effectiveness of the scheme by simulating (\[defequ1\]) when no exact solution is known. The convergence results for $\lambda=0$ are also reported. Without loss of generality, we consider the domain $\Omega=(-1,1)\times(-1,1)$.
The truncation error of the tempered fractional Laplacian
---------------------------------------------------------
This subsection shows the truncation errors and convergence rates of discretizing the tempered fractional Laplacian. The $L_{\infty}$ norm and $L_{2}$ norm are used to measure the truncation errors here.
Compute $(\Delta+\lambda)^{\beta/2}u(x,y)$ with $u(x,y)=(1-x^2)^3(1-y^2)^3$ ($u(x,y)\in C^2(\mathbb{R}^2)$).
Table \[tab:dC2r0g1a05\] shows the accuracy of computing $(\Delta+\lambda)^{\beta/2}u(x,y)$ with $\lambda=0$ and $\gamma=1+\frac{\beta}{2}$, which verifies the numerical discretization of the fractional Laplacian. Table \[tab:dC2r05g1a05\] shows the accuracy of computing $(\Delta+\lambda)^{\beta/2}u(x,y)$ with $\lambda=0.5$ and $\gamma=1+\frac{\beta}{2}$. From Tables \[tab:dC2r0g1a05\] and \[tab:dC2r05g1a05\] we find that, for a fixed mesh size $h$, the numerical errors become larger as the parameter $\beta$ increases, and that the truncation error is $O(h^{2-\beta})$ for any $\beta\in(0,2)$. These results are consistent with the theoretical predictions.
Comparing Table \[tab:dC2r0g1a05\] with Table \[tab:dC2r05g1a05\], it is easy to see that the convergence rates are independent of $\lambda$, while for fixed $h$ and $\beta$ the numerical errors become smaller as $\lambda$ increases.
Convergence rates for solving the tempered fractional Poisson equation
----------------------------------------------------------------------
We solve (\[defequ1\]) with different $\beta$ and $\gamma$, and the exact solution is taken as $u(x,y)=(1-x^2)^3(1-y^2)^3$, where $u\in C^2(\mathbb{R}^2)$. The source term $f(x,y)$ is obtained numerically by the algorithm in Appendix A. Table \[tab:sC2r0g1a05\] shows that the convergence rate is $O(h^{2-\beta})$ when $\lambda=0$ and $\gamma=1+\frac{\beta}{2}$. Table \[tab:sC2r05g1a05\] shows that the convergence rate is also $O(h^{2-\beta})$ when $\lambda=0.5$ and $\gamma=1+\frac{\beta}{2}$. The results show that $\lambda$ has no effect on the convergence rates when $\gamma=1+\frac{\beta}{2}$.
However, when choosing $\lambda=0$ and $\gamma=2$, the convergence rates shown in Table \[tab:sC2r0g2\] are higher than the theoretical ones; for any $\beta\in (0,2)$, the convergence rate is $O(h^2)$. For $\lambda>0$, the convergence rates shown in Table \[tab:sC2r05g2\] depend on $\beta$: when $\beta<1$, the convergence rate is $O(h^2)$, and when $\beta>1$ it is $O(h^{3-\beta})$. This phenomenon indicates that the provided scheme works very well for equation (\[defequ1\]) when $\gamma=2$.
Next, Figures \[fig:a05r0\] and \[fig:a05r05\] show the influence of different $\gamma$ on the convergence rates. Figure \[fig:a05r0\] shows that, for $\beta=0.5$ and $\lambda=0$, the convergence rate is almost $O(h^{2-\beta})$ except for $\gamma=2$; for the same mesh size $h$, the numerical errors become smaller as the parameter $\gamma$ increases. The same conclusions follow from Figure \[fig:a05r05\] for $\beta=0.5$ and $\lambda=0.5$. Comparing Figure \[fig:a05r0\] with Figure \[fig:a05r05\], it is easy to note that $\gamma$ has the same influence on the convergence rates for any $\lambda$.
![$L_2$ errors and convergence orders for the system with different $\gamma$ when $\beta=0.5$ and $\lambda=0$[]{data-label="fig:a05r0"}](pic/13.eps){width="13.66cm" height="6cm"}
![$L_2$ errors and convergence orders for the system with different $\gamma$ when $\beta=0.5$ and $\lambda=0.5$[]{data-label="fig:a05r05"}](pic/2.eps){width="13.66cm" height="6cm"}
Afterwards, we solve (\[defequ1\]) with the exact solution $u=(1-x^2)^2(1-y^2)^2$, which has lower regularity than the one in Example 2.
Taking the exact solution $u=(1-x^2)^2(1-y^2)^2$, the convergence rates for different $\beta$ and $\lambda$ are shown in Tables \[tab:sC1r0g1a05\] and \[tab:sC1r05g1a05\]. It is easy to check that $u$ and $Du$ are continuous in $\mathbb{R}^2$, but $\frac{\partial^2 u}{\partial x^2}$ and $\frac{\partial^2 u}{\partial y^2}$ are discontinuous at the boundary of $\Omega$, so $u\in C^1(\mathbb{R}^2)$ with bounded second derivatives. It can be noted that the provided scheme achieves the same convergence rates for $u\in C^2(\mathbb{R}^2)$ and for $u\in C^1(\mathbb{R}^2)$ with bounded second derivatives.
Finally, we use the provided scheme to solve (\[defequ1\]) with a smooth right-hand side.
\[example4\] We consider the model (\[defequ1\]) in $\Omega$ with the source term $f=1$. Since the exact solution is unknown, $${\rm rate}=\frac{\ln(e_{2h}/e_h)}{\ln(2)}$$ is used to measure the convergence rates, where $u_h$ denotes the numerical solution with mesh size $h$ and $e_h=\|u_{2h}-u_{h}\|$.
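For reference, the following is a minimal Python sketch of how such observed rates are computed from the errors on successively halved meshes; the error values below are purely illustrative and do not reproduce the ones reported in the tables.

```python
# Minimal sketch: observed convergence rate from errors on successive meshes,
# rate = ln(e_{2h}/e_h) / ln(2).  The error list is illustrative only.
import math

def observed_rates(errors):
    """errors[i] is the error on mesh size h_0 / 2**i (each refinement halves h)."""
    return [math.log(errors[i - 1] / errors[i]) / math.log(2.0)
            for i in range(1, len(errors))]

errors = [1.2e-2, 3.1e-3, 7.9e-4, 2.0e-4]   # hypothetical values
print(observed_rates(errors))                # approximately [1.95, 1.97, 1.98]
```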
Tables $\ref{tab:F1r0g1a05}$ and $\ref{tab:F1r05g2}$ show the numerical errors and the convergence rates with $\lambda=0$, $\gamma=1+\frac{\beta}{2}$ and $\lambda=0.5$, $\gamma=2$, respectively. The convergence rates are lower than the desired ones because of the limited regularity of the exact solution $u$. These results are similar to the ones in one dimension [@Zhang2017].
In statistical physics [@Deng:17-2], the solution $u$ of Example \[example4\] represents the mean first exit time of a particle starting at $(x,y)$ from the given domain $\Omega$. Figure $\ref{figDepen}$ shows the behavior for $\lambda=0,~0.5$ and $\beta=0.5,~0.8,~1.2,~1.5$: for any $\lambda$ and $\beta$, the mean first exit times of particles starting near the center are longer than those of particles starting near the boundary of $\Omega$; for any fixed $\lambda$, the mean first exit time becomes shorter as $\beta$ increases; and when the isotropic power-law measure of the jump length is exponentially tempered, the mean first exit time from any fixed starting point is longer than in the untempered case.
Conclusion
==========
This paper provides finite difference schemes for the two dimensional tempered fractional Laplacian, which was physically introduced and mathematically defined in [@Deng:17]. The operator is written as a weighted integral of a weakly singular function by introducing the auxiliary function $\phi_{\gamma}$. The weighted trapezoidal rule is used to approximate the integral over the weakly singular part, and bilinear interpolation is used for the rest of the integration domain. Detailed error estimates are presented for the designed numerical schemes for the tempered fractional Poisson equation. Extensive numerical experiments are performed to verify the convergence rates and show the effectiveness of the scheme, and the mean first exit time, a quantity from statistical physics, is simulated. The schemes and their numerical analysis also work for the case $\lambda=0$, i.e., the fractional Laplacian; the corresponding numerical experiments are given as well.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported by the National Natural Science Foundation of China under Grant No. 11671182, and the Fundamental Research Funds for the Central Universities under Grant No. lzujbky-2017-ot10.
Appendix {#appendix .unnumbered}
========
Numerically calculating $(\Delta+\lambda)^{\frac{\beta}{2}}$ applied to a given function
==========================================================================================
According to the equation $-(\Delta+\lambda)^{\frac{\beta}{2}} u(x,y)=f(x,y)$, we can compute $-(\Delta+\lambda)^{\frac{\beta}{2}} u(x,y)$ to get the source term $f(x,y)$. Because of the singularity and non-locality of $-(\Delta+\lambda)^{\frac{\beta}{2}} u(x,y)$, one cannot directly approximate it by the trapezoidal rule. We now provide the technique to calculate it. For fixed $(x,y)$, we denote $$\begin{split}
&r_1=\sup_{\begin{subarray}{c}
(\xi,\eta)\in\partial \Omega
\end{subarray}}\max(|x-\xi|,|y-\eta|)\\
&r_2=\inf_{\begin{subarray}{c}
(\xi,\eta)\in\partial \Omega
\end{subarray}}\sqrt{(x-\xi)^2+(y-\eta)^2}.\\
\end{split}$$ Without loss of generality, we set $\Omega=(-1,1)\times(-1,1)$. For any $(x,y)\in\Omega$, we denote by $A_1$ the square with side length $2r_1$ centered at $(x,y)$ and by $A_2$ the square with side length $2r_2$ centered at $(x,y)$. To compute the source term $f(x,y)$, we divide the domain into four parts, i.e., $R\times R=(R\times R)/A_1\bigcup (A_1/\Omega)\bigcup(\Omega/A_2)\bigcup A_2$, as shown in Figure \[Integral\_region\].
![Division of the integral region for fixed $(x,y)$[]{data-label="Integral_region"}](pic/5.eps){width="6cm" height="6cm"}
For the term $$\label{equimpleR1}
\int\int_{(R\times R)/(A_1)}\frac{u(\xi,\eta)-u(x,y)}{e^{\lambda\sqrt{(x-\xi)^2+(y-\eta)^2}}\left(\sqrt{(x-\xi)^2+(y-\eta)^2}\right)^{{2+\beta}}}d\xi d\eta,$$ since $\textbf{supp}~u(x,y)\in\Omega$, Eq. (\[equimpleR1\]) can be rewritten as $$\label{equimpleR2}
-u(x,y)\int\int_{(R\times R)/(A_1)}\frac{1}{e^{\lambda\sqrt{(x-\xi)^2+(y-\eta)^2}}\left(\sqrt{(x-\xi)^2+(y-\eta)^2}\right)^{{2+\beta}}}d\xi d\eta.$$ Next, we establish polar coordinates at $(x,y)$ and let $x-\xi=r\cos(\theta)$, $y-\eta=r\sin(\theta)$. Then, by simple calculation, we can obtain $$\label{equappa1}
\begin{split}
&\int\int_{(R\times R)/(A_1)}\frac{1}{e^{\lambda\sqrt{(x-\xi)^2+(y-\eta)^2}}\left(\sqrt{(x-\xi)^2+(y-\eta)^2}\right)^{{2+\beta}}}d\xi d\eta\\
=&\int_{0}^{\frac{\pi}{4}}\int_{\frac{r_1}{\cos(\theta)}}^{\infty}\frac{1}{r^{1+\beta}e^{\lambda r}}dr d\theta+\int_{\frac{\pi}{4}}^{\frac{2\pi}{4}}\int_{\frac{r_1}{\cos(\frac{\pi}{2}-\theta)}}^{\infty}\frac{1}{r^{1+\beta}e^{\lambda r}}dr d\theta\\
&+\int_{\frac{2\pi}{4}}^{\frac{3\pi}{4}}\int_{\frac{r_1}{\cos(\theta-\frac{\pi}{2})}}^{\infty}\frac{1}{r^{1+\beta}e^{\lambda r}}dr d\theta+\int_{\frac{3\pi}{4}}^{\frac{4\pi}{4}}\int_{\frac{r_1}{\cos(\pi-\theta)}}^{\infty}\frac{1}{r^{1+\beta}e^{\lambda r}}dr d\theta\\
&+\int_{\frac{4\pi}{4}}^{\frac{5\pi}{4}}\int_{\frac{r_1}{\cos(\theta-\pi)}}^{\infty}\frac{1}{r^{1+\beta}e^{\lambda r}}dr d\theta+\int_{\frac{5\pi}{4}}^{\frac{6\pi}{4}}\int_{\frac{r_1}{\cos(\frac{3\pi}{2}-\theta)}}^{\infty}\frac{1}{r^{1+\beta}e^{\lambda r}}dr d\theta\\
&+\int_{\frac{6\pi}{4}}^{\frac{7\pi}{4}}\int_{\frac{r_1}{\cos(\theta-\frac{3\pi}{2})}}^{\infty}\frac{1}{r^{1+\beta}e^{\lambda r}}dr d\theta+\int_{\frac{7\pi}{4}}^{\frac{8\pi}{4}}\int_{\frac{r_1}{\cos(2\pi-\theta)}}^{\infty}\frac{1}{r^{1+\beta}e^{\lambda r}}dr d\theta\\
=&8\int_{0}^{\frac{\pi}{4}}\int_{\frac{r_1}{\cos(\theta)}}^{\infty}\frac{1}{r^{1+\beta}e^{\lambda r}}dr d\theta.
\end{split}$$ When $\lambda=0$, we have $$\begin{split}
&\int\int_{(R\times R)/(A_1)}\frac{1}{e^{\lambda\sqrt{(x-\xi)^2+(y-\eta)^2}}\left(\sqrt{(x-\xi)^2+(y-\eta)^2}\right)^{{2+\beta}}}d\xi d\eta\\
=&8\int_{0}^{\frac{\pi}{4}}\int_{\frac{r_1}{\cos(\theta)}}^{\infty}\frac{1}{r^{1+\beta}}dr d\theta\\
=&\frac{8}{\beta}\int_{0}^{\frac{\pi}{4}}\left(\frac{r_1}{\cos(\theta)}\right)^{-\beta}d\theta.
\end{split}$$ This one-dimensional integral over $\theta$ is then approximated by the trapezoidal rule. When $\lambda\neq 0$, we use the trapezoidal rule to approximate (\[equappa1\]) after a suitable truncation of the radial integral.
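As an illustration, the following is a minimal Python sketch (not the authors' code) of this far-field contribution: for $\lambda=0$ it evaluates $\frac{8}{\beta}\int_0^{\pi/4}(r_1/\cos\theta)^{-\beta}d\theta$ by the trapezoidal rule, and for $\lambda\neq0$ it applies the trapezoidal rule to (\[equappa1\]) after truncating the radial integral at an illustrative radius `R_trunc`. The factor $-u(x,y)$ from (\[equimpleR2\]) is applied separately.

```python
# Minimal sketch of the far-field integral over (R x R)/A_1, by the trapezoidal rule.
import numpy as np

def far_field_integral(r1, beta, lam, n_theta=2001, n_r=4001, R_trunc=60.0):
    theta = np.linspace(0.0, np.pi / 4.0, n_theta)
    if lam == 0.0:
        # analytic radial integral, one-dimensional trapezoidal rule in theta
        vals = (r1 / np.cos(theta)) ** (-beta)
        return 8.0 / beta * np.trapz(vals, theta)
    # lambda > 0: truncate the radial integral (the exponential tail decays fast)
    outer = np.empty_like(theta)
    for k, th in enumerate(theta):
        r = np.linspace(r1 / np.cos(th), R_trunc, n_r)
        outer[k] = np.trapz(r ** (-1.0 - beta) * np.exp(-lam * r), r)
    return 8.0 * np.trapz(outer, theta)
```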
For the term $$\label{equimpleA21}
\int\int_{A_2}\frac{u(\xi,\eta)-u(x,y)}{e^{\lambda\sqrt{(x-\xi)^2+(y-\eta)^2}}\left(\sqrt{(x-\xi)^2+(y-\eta)^2}\right)^{{2+\beta}}}d\xi d\eta,$$ using its symmetry leads to $$\label{equimpleA22}
\begin{split}
&\int\int_{A_2}\frac{u(\xi,\eta)-u(x,y)}{e^{\lambda\sqrt{(x-\xi)^2+(y-\eta)^2}}\left(\sqrt{(x-\xi)^2+(y-\eta)^2}\right)^{{2+\beta}}}d\xi d\eta\\
=&\int_{-r_2}^{r_2}\int_{-r_2}^{r_2}\frac{u(x+\xi,y+\eta)-u(x,y)}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{2+\beta}}}d\xi d\eta\\
=&\int_{0}^{r_2}\int_{0}^{r_2}\frac{u(x+\xi,y+\eta)+u(x-\xi,y-\eta)+u(x+\xi,y-\eta)+u(x-\xi,y+\eta)-4u(x,y)}{e^{\lambda\sqrt{\xi^2+\eta^2}}\left(\sqrt{\xi^2+\eta^2}\right)^{{2+\beta}}}d\xi d\eta.\\
\end{split}$$
Because of the weak singularity, we try to compute it in polar coordinates. Let $\xi=r\cos(\theta)$, $\eta=r\sin(\theta)$. Then Eq. (\[equimpleA22\]) can be rewritten as $$\label{equimpleA23}
\begin{split}
&\int\int_{A_2}\frac{u(\xi,\eta)-u(x,y)}{e^{\lambda\sqrt{(x-\xi)^2+(y-\eta)^2}}\left(\sqrt{(x-\xi)^2+(y-\eta)^2}\right)^{{2+\beta}}}d\xi d\eta\\
=&\int_{0}^{\frac{\pi}{4}}\int_{0}^{\frac{r_2}{\cos(\theta)}}\left(u(x+r\cos(\theta),y+r\sin(\theta))+u(x-r\cos(\theta),y+r\sin(\theta))\right.\\
&\left.+u(x+r\cos(\theta),y-r\sin(\theta))+u(x-r\cos(\theta),y-r\sin(\theta))-4u(x,y)\right)r^{-1-\beta}e^{-\lambda r}dr d\theta\\
&+\int_{\frac{\pi}{4}}^{\frac{2\pi}{4}}\int_{0}^{\frac{r_2}{\cos(\frac{\pi}{2}-\theta)}}\left(u(x+r\cos(\theta),y+r\sin(\theta))+u(x-r\cos(\theta),y+r\sin(\theta))\right.\\
&\left.+u(x+r\cos(\theta),y-r\sin(\theta))+u(x-r\cos(\theta),y-r\sin(\theta))-4u(x,y)\right)r^{-1-\beta}e^{-\lambda r}dr d\theta.\\
\end{split}$$ In (\[equimpleA23\]), for some special function, such as $u(x,y)=(1-x^2)^2(1-y^2)^2$, we can expand it as $$\begin{split}
&\left(u(x+r\cos(\theta),y+r\sin(\theta))+u(x-r\cos(\theta),y+r\sin(\theta))\right.\\
&\left.+u(x+r\cos(\theta),y-r\sin(\theta))+u(x-r\cos(\theta),y-r\sin(\theta))-4u(x,y)\right)r^{-1-\beta}e^{-\lambda r}\\
=&4 r^{1-\beta}e^{-\lambda r} \left(r^6 \sin ^4(\theta) \cos ^4(\theta)+6 r^4 x^2 \sin ^4(\theta) \cos ^2(\theta)+6 r^4 y^2 \sin ^2(\theta) \cos^4(\theta)-2 r^4 \sin ^2(\theta) \cos ^4(\theta)\right.\\
&-2 r^4 \sin ^4(\theta) \cos ^2(\theta)+r^2 x^4 \sin ^4(\theta)+36 r^2 x^2 y^2 \sin ^2(\theta) \cos ^2(\theta)-2 r^2 x^2 \sin ^4(\theta)\\
&-12 r^2 x^2 \sin ^2(\theta) \cos ^2(\theta)+r^2 y^4 \cos ^4(\theta)-2 r^2 y^2 \cos ^4(\theta)-12 r^2 y^2 \sin ^2(\theta) \cos ^2(\theta)\\
&+r^2 \sin ^4(\theta)+r^2 \cos ^4(\theta)+4 r^2 \sin ^2(\theta) \cos ^2(\theta)+6 x^4 y^2 \sin ^2(\theta)-2 x^4 \sin ^2(\theta)+6 x^2 y^4 \cos ^2(\theta)\\
&-12 x^2 y^2 \sin ^2(\theta)-12 x^2 y^2 \cos ^2(\theta)+4 x^2 \sin ^2(\theta)+6 x^2 \cos ^2(\theta)-2 y^4 \cos ^2(\theta)+6 y^2 \sin ^2(\theta)\\
&\left.+4 y^2 \cos ^2(\theta)-2 \sin ^2(\theta)-2 \cos ^2(\theta)\right).
\end{split}$$ When $\lambda=0$, the inner integral with respect to $r$ can be calculated analytically, so we only need to approximate the outer integral with respect to $\theta$ by the trapezoidal rule. When $\lambda\neq0$, we can transform the inner integral with respect to $r$ into a nonsingular numerical integral through integration by parts.
The remaining contribution, over the two regions $A_1/\Omega$ and $\Omega/A_2$, $$\label{equimpleR1l}
\int\int_{(A_1/\Omega)\bigcup(\Omega/A_2)}\frac{u(\xi,\eta)-u(x,y)}{e^{\lambda\sqrt{(x-\xi)^2+(y-\eta)^2}}\left(\sqrt{(x-\xi)^2+(y-\eta)^2}\right)^{2+\beta}}d\xi d\eta,$$ can be integrated by the trapezoidal rule directly.
Key points of the code implementation
=================================
When solving the tempered fractional Poisson problem with Dirichlet boundary conditions in two dimensions, the computational complexity needs to be carefully considered.
Firstly, since the weights $w_{i,j}$ cannot be obtained analytically, we need to calculate them numerically. In order to get the weights (\[equweightoffl\]), we need to calculate $G_{i,j}$, $G^\xi_{i,j}$, $G^\eta_{i,j}$, and $G^{\xi\eta}_{i,j}$. It is easy to see that $G_{i,j}$ depends on the mesh size $h$. To get $G_{i,j}$ for different $h$ conveniently, we rewrite (\[equdefG\]) as $$G_{i,j}=\frac{1}{h^{2+\beta-\gamma}}g_{i,j}~~~~~~~~i,j\in N~{\rm and} ~(i,j)\neq(0,0),$$ where $$g_{i,j}=\int_i^{i+1}\int_j^{j+1}\left(\sqrt{p^2+q^2}\right)^{\gamma-2-\beta}dp dq~~~~~~~i,j\in N~{\rm and} ~(i,j)\neq(0,0).$$ We can calculate $g_{i,j}$ by the trapezoidal rule, and the same technique can be used to calculate $G^\xi_{i,j}$, $G^\eta_{i,j}$, and $G^{\xi\eta}_{i,j}$ when $(i,j)\neq(0,0)$.
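A minimal Python sketch of this computation of $g_{i,j}$ is given below; it assumes a plain tensor-product trapezoidal rule with a fixed number of sub-points per cell, while any graded or adaptive rule could equally be substituted.

```python
# Minimal sketch: the mesh-independent factors
#   g_{i,j} = int_i^{i+1} int_j^{j+1} (p^2 + q^2)^{(gamma-2-beta)/2} dp dq,  (i,j) != (0,0),
# so that G_{i,j} = g_{i,j} / h^(2+beta-gamma).
import numpy as np

def g_weight(i, j, beta, gamma, n=64):
    if i == 0 and j == 0:
        raise ValueError("g_{0,0} is treated separately (polar coordinates).")
    p = np.linspace(i, i + 1, n)
    q = np.linspace(j, j + 1, n)
    P, Q = np.meshgrid(p, q, indexing="ij")
    vals = (P ** 2 + Q ** 2) ** ((gamma - 2.0 - beta) / 2.0)
    # tensor-product trapezoidal rule: integrate over q, then over p
    return np.trapz(np.trapz(vals, q, axis=1), p)

# G_{i,j} for a given mesh size h is then g_weight(i, j, beta, gamma) / h**(2 + beta - gamma).
```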
Secondly, for $G_{0,0}$, we can integrate it in polar coordinates to deal with the singularity, that is $$\label{equapb1}
\begin{split}
G_{0,0}=&\int_{0}^{\frac{\pi}{2}}\int_{0}^{h}r^{\gamma-1-\beta}dr d\theta+\int_{\xi_0}^{\xi_1}\int_{\sqrt{h^2-\xi^2}}^{\eta_1}(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}}d\eta d\xi\\
=&\frac{\pi}{2(\gamma-\beta)}h^{\gamma-\beta}+\int_{\xi_0}^{\xi_1}\int_{\sqrt{h^2-\xi^2}}^{\eta_1}(\xi^2+\eta^2)^{\frac{\gamma-2-\beta}{2}}d\eta d\xi.
\end{split}$$ In this way, we only need to use the trapezoidal rule to calculate the second term in (\[equapb1\]).
Thirdly, when using polar coordinates to calculate $G^\infty$, the two-dimensional integral can be reduced to a bounded one-dimensional integral when $\lambda=0$; when $\lambda\neq 0$, $G^\infty$ can be calculated effectively after a suitable truncation. Lastly, when solving the linear system $B\mathbf{U}_h=F$, the computational cost is high if we solve it directly. By exploiting the structure of the symmetric block Toeplitz matrix with Toeplitz blocks, the memory requirement can be reduced from $O(N^4)$ to $O(N^2)$, and the fast Fourier transform is used to reduce the computational cost from $O(N^6)$ to $O(N^2 \log N^2)$.
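The following is a minimal Python sketch (not the authors' implementation) of this last point: a matrix-free matrix–vector product with the symmetric block Toeplitz matrix with Toeplitz blocks via circulant embedding and the two dimensional FFT, combined with a conjugate gradient solve of $B\mathbf{U}_h=F$. It assumes the stencil weights are stored in an $(N,N)$ array `w` with `w[|i|,|j|]` the coefficient of the offset $(i,j)$ on an $N\times N$ grid of unknowns; any sign or scaling conventions of (\[equweightoffl\]) are assumed to be absorbed into `w`.

```python
# Minimal sketch: BTTB matrix-vector product via 2D FFT and a CG solve of B U_h = F.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def bttb_matvec(w, x2d):
    """Apply (B x)[p,q] = sum_{p',q'} w[|p-p'|,|q-q'|] x[p',q'] by circulant embedding."""
    N = x2d.shape[0]
    M = 2 * N
    kernel = np.zeros((M, M))
    idx = np.minimum(np.arange(M), M - np.arange(M))   # |offset| after wrap-around
    mask = idx <= N - 1                                 # offsets that actually occur
    I, J = np.meshgrid(idx, idx, indexing="ij")
    valid = mask[:, None] & mask[None, :]
    kernel[valid] = w[I[valid], J[valid]]
    xp = np.zeros((M, M))
    xp[:N, :N] = x2d
    y = np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(xp)).real
    return y[:N, :N]

def solve(w, F2d):
    N = F2d.shape[0]
    A = LinearOperator((N * N, N * N),
                       matvec=lambda v: bttb_matvec(w, v.reshape(N, N)).ravel())
    u, info = cg(A, F2d.ravel())
    return u.reshape(N, N)
```

In practice the FFT of the embedded kernel would be precomputed once and reused in every CG iteration; it is rebuilt here only to keep the sketch short.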
[99]{}
Abramowitz, M., Stegun, I.A.: Handbook of mathematical functions with formulas, graphs, and mathematical tables. New York: Dover Publications Inc., (1992).
Acosta, G., Bersetche, F.M., Borthagaray, J.P.: A short FE implementation for a 2D homogeneous Dirichlet problem of a Fractional Laplacian. Comput. Math. Appl. [**74**]{}, 784–816 (2017).
Acosta, G., Borthagaray, J.P.: A fractional Laplace equation: regularity of solutions and Finite Element approximations. SIAM J. Numer. Anal. [**55**]{}, 472–495 (2017).
Applebaum, D.: L$\acute{e}$vy Processes and Stochastic Calculus. UK Cambridge: Cambridge University Press, (2009).
Axelsson, O.: Iterative Solution Methods. UK Cambridge: Cambridge University Press, (1996).
Bogdan, K., Burdzy, K., Chen, Z.Q.: Censored stable process. Probab. Theory Rel. [**127**]{}, 89–152 (2003).
Brenner, S.C., Scott, L.R.: The mathematical theory of finite element methods. Texts in Applied Mathematics, (2008).
Buades, A., Coll,B., Morel, J.M.: Image denoising methods. A new nonlocal principle. SIAM Rev. [**52**]{}, 113–147 (2010).
Chen, K.: Matrix Preconditioning Techniques and Applications. UK Cambridge: Cambridge University Press, (2005).
Deng, W.H., Li, B.Y., Tian, W.Y., Zhang, P.W.: Boundary problems for the fractional and tempered fractional operators. Multiscale Model. Simul. [**16**]{}, 125–149 (2018).
Deng, W.H., Wu, X.C., Wang, W.L.: Mean exit time and escape probability for the anomalous processes with the tempered power-law waiting times. EPL [**117**]{}, 10009 (2017).
Duo, S.W., Wyk, H.W.V., Zhang Y.Z.: A novel and accurate finite difference method for the fractional Laplacian and the fractional Poisson problem. J. Comput. Phys. [**355**]{}, 233–252 (2018).
D’Elia, M., Gunzburger, M.: The fractional Laplacian operator on bounded domains as a special case of the nonlocal diffusion operator. Comput. Math. Appl. [**66**]{}, 1245–1260 (2013).
Hilfer, R.: Applications of fractional calculus in physics. New Jersey: World Scientific Publishing Co., (2000).
Huang, Y.H., Oberman, A.: Finite difference methods for fractional Laplacians, in press (arXiv:1611.00164v1 \[math.NA\]).
Huang, Y.H., Oberman, A.: Numerical Methods for the Fractional Laplacian: a Finite Difference-quadrature Approach. SIAM J. Numer. Anal. [**52**]{}, 3056–3084 (2014).
Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Amsterdam: Elsevier, (2006).
Klafter, J., Sokolov, I.M.: Anomalous diffusion spreads its wings. Physics world. [**18**]{}, 29–32 (2005).
Kwa$\acute{s}$nicki, M.: Ten equivalent definitions of the fractional Laplace operator. Fract. Calc. Appl. Anal. [**20**]{}, 7–51 (2015).
Mainardi, F., Raberto, M., Gorenflo, R., Scalas, E.: Fractional calculus and continuous-time finance II: the waiting-time distribution. Phys. A. [**287**]{}, 468–481 (2000).
Metzler, R., Klafter, J.: The random walk’s guide to anomalous diffusion: a fractional dynamics approach. Phys.Rep. [**339**]{}, 1–77 (2000).
Mö[ß]{}ner, B., Reif, U.: Error Bounds for Polynomial Tensor Product Interpolation. Computing. [**89**]{}, 185–197 (2009).
Pozrikidis, C.: The Fractional Laplacian. London: CRC Press, (2016).
Stein, E.M.: Singular Integrals and Differentiability Properties of Functions. New Jersey Princeton: Princeton University Press, (1970).
Zhang, Z.Z., Deng, W.H., Fan, H.T.: Finite difference schemes for the tempered fractional Laplacian, in press (arXiv:1711.05056v1 \[math.NA\]).
Zhang, Z.Z., Deng, W.H., Karniadakis, G.E.: A Riesz basis Galerkin method for the tempered fractional Laplacian., in press (arXiv:1709.10415. \[math.NA\]).
|
---
abstract: 'We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present an algorithm to compute this graph invariant and closed formulas for some infinite classes. We have to leave the computational complexity of the likelihood as an open problem.'
author:
- Christopher Banerji
- Toufik Mansour
- Simone Severini
title: A notion of graph likelihood and an infinite monkey theorem
---
Introduction
============
The *infinite monkey theorem* is part of the popular culture [@wi]. A monkey sits in front of a typewriter hitting random keys. The probability that the monkey will type any given text tends to one, as the amount of time the monkey spends on the typewriter tends to infinity. The usual example is Shakespeare’s Hamlet. Of course, the term monkey can refer to some abstract device producing random strings of symbols (*e.g.*, zeros and ones).
In this note, we consider an infinite monkey theorem, but for graphs rather than strings. Our setting involves a device which performs non-preferential attachment [@js12]. At time step $t+1$, a new vertex is added to a graph $G_{t}$ – the process starts from the single vertex graph, $G_{1}$. The degree and the neighbours of the newly added vertex at step $t+1$ are both chosen at random. The degree of the vertex is then $k\in\{0,1,...,t\}$ and its neighbours are $k$ random vertices in $G_{t}$. The (in fact obvious) analogue of the infinite monkey theorem is that every graph can be constructed in this way: the probability that the monkey will construct a given graph tends to one, as the amount of time the monkey spends on the graphwriter tends to infinity. Notice that the monkey makes two random choices, but these can be seen as a single one. Also, notice that the theorem is indeed a corollary of the usual infinite monkey theorem since we could encode a graph in a string (for example, by vectorizing the adjacency matrix). Below is a monkey enjoying the construction of the Petersen graph:
The construction is basically an excuse to discuss a graph invariant which we call *(graph) likelihood*. This is the probability that a given graph on $t$ vertices is obtained by the construction after exactly $t$ steps. In other words, this is the probability that a monkey constructs a given graph on $t$ vertices in exactly $t$ seconds, assuming that the monkey adds a new vertex each second. For a string, this would correspond to the probability that the monkey types a given text in a time equal to the length of the string produced.
The likelihood is a plausible measure to quantify how difficult it is to construct a graph in the way we propose. Intuitively, graphs with more symmetries generally have smaller likelihood. We will show, as expected, that bounds on the likelihood can be given in terms of the automorphism group. Specifically, the likelihood cannot be larger than the reciprocal of the number of automorphisms. Graphs with trivial automorphism group are then potentially the ones admitting the highest likelihood. We will describe an algorithm to compute the likelihood of a given graph. The algorithm uses a rooted tree decomposition which takes into account all possible ways to construct the graph by adding one vertex at a time. The algorithm suggests a closed formula for the likelihood.
The remainder of the paper is organized as follows. In Section II, we define the likelihood. In Section III, we give closed formulas for complete graphs, star graphs, paths, and cycles. In Section IV, we describe the algorithm. Section V lists some open problems. In particular, we could not determine the complexity of computing the likelihood. The paper is practically self-contained.
Graph likelihood
================
As usual, $G=(V,E)$ denotes a *graph*: $V(G)=\{v_{1},v_{2},...,v_{t}\}$ is a set whose elements are called *vertices* and $E(G)\subseteq
V(G)\times V(G)-\{\{v_{i},v_{i}\}:v_{i}\in V(G)\}$ is a set whose elements are called *edges*. The graph with a single vertex and no edges is denoted by $K_{1}$. Our main object of study will be the construction given in the following definition. This is a special case of a construction already presented in [@js12].
\[Construction\]\[def1\]We construct a graph $G_{t}=(V,E)$, starting from $G_{1}=K_{1}$. The construction involves an iteration with discrete steps. At the $t$-th step of the iteration, the graph $G_{t-1}$ is transformed into the graph $G_{t}$. The $t$-th step of the iteration is divided into three substeps:
1. We select a number $k\in\{0,1,...,t-1\}$ with equal probability.
Assume that we have selected $k$.
2. We select $k$ vertices of $G_{t-1}$ with equal probability.
Assume that we have selected the vertices $v_{1},v_{2},...,v_{k}\in
V(G_{t-1})$.
3. We add a new vertex $t$ to $G_{t-1}$ and the edges $\{v_{1},t\},\{v_{2},t\},...,\{v_{k},t\}\in E(G_{t})$.
On the basis of the construction, the following definition is natural:
\[Graph likelihood\]Let $G$ be a graph on $t$ vertices. The (*graph*) *likelihood* of $G$, denoted by $\mathcal{L}(G)$, is defined as the probability that $G_{t}=G$, where $G_{t}$ is the graph given by the construction in Definition \[def1\]: $$\mathcal{L}(G):=\emph{Pr}[G_{t}=G].$$
To clarify this notion, in the next section we write closed formulas for the likelihood of graphs in some infinite families. We use rather uninteresting proof techniques, but these serve the purpose, at least for very simple graphs.
Examples
========
The *complete graph* $K_{t}$ is defined as the graph on $t$ vertices containing all $t(t-1)/2$ possible edges.
Let $K_{t}$ be the complete graph on $t$ vertices. Then, $\mathcal{L}(K_{t})=1/t!$.
For $K_{t}$, the only significant step of the construction is the first one (*i.e.*, the selection of a number $k\in\{0,1,...,t-1\}$ with equal probability). Therefore, $\mathcal{L}(K_{t})=\prod_{i=1}^{t}\frac{1}{i}$. This equals $1/t!$ by definition.
The *star graph* $K_{1,t-1}$ is defined as the graph on $t$ vertices, $v_{1},v_{2},...,v_{t}$, with the edges $\{v_{1},v_{2}\},\{v_{1},v_{3}\},...,\{v_{1},v_{t}\}$. In a graph $G=(V,E)$, the *degree* of a vertex $i\in V(G)$ is defined and denoted by $d(i)=\left\vert \{j:\{i,j\}\in E(G)\}\right\vert $.
Let $K_{1,t-1}$ be the star graph on $t$ vertices. Then, $\mathcal{L}(K_{1,t-1})=\frac{t}{(t!)^{2}}\sum_{i=0}^{t-1}i!$.
The star graph $K_{1,t-1}$ has $1$ vertex of degree $t-1$ and $t-1$ vertices of degree $1$. There are three cases relevant to the construction of $K_{1,t-1}$ such that $G_{t}=K_{1,t-1}$:
1. Suppose we add $t-1$ vertices, $1,2,...,t-1$, of degree $0$. At time $t$, we add a vertex, $t$, of degree $t-1$. Since $\Pr[d(i)=0]=1/i$, for $i=1,2,\ldots,t-1$, and $\Pr[d(t)=t-1]=1/t$, $\Pr[G_{t}=K_{1,t-1}$ by (1)$]=\prod_{i=1}^{t-1}\Pr[d(i)=0]\cdot\Pr[d(t)=t-1]=\left( \prod_{i=1}^{t-1}\frac{1}{i}\right) \cdot\frac{1}{t}=\frac{1}{t!}$.
2. Suppose there is an edge $\{1,2\}\in G_{2}$. Since $\Pr[d(3)=1]=1/3$, we distinguish two cases:
1. $\Pr[\{1,3\}\in E(G_{3})]=1/2$: $\Pr[d(2)=1]\cdot\Pr[d(3)=1]\cdot
\Pr[\{1,3\}\in E(G_{3})]=\frac{1}{2}\cdot\frac{1}{3}\cdot\frac{1}{2}=\frac
{1}{12}$. If $\{1,3\}\in E(G_{3})$ then every further edge of $G_{t}$, with $t\geq4$, must be of the form $\{1,4\},\ldots,\{1,t\}$, and $\Pr[\{1,t\}\in E(G_{t})]=\frac{1}{t}\cdot\frac{1}{t-1}$.
2. $\Pr[\{2,3\}\in E(G_{3})]=1/2$: $\Pr[d(2)=1]\cdot\Pr[d(3)=1]\cdot
\Pr[\{2,3\}\in E(G_{3})]=\frac{1}{2}\cdot\frac{1}{3}\cdot\frac{1}{2}=\frac
{1}{12}$. If $\{2,3\}\in E(G_{3})$ then the situation is analogous to the previous case.
By combining together (a) and (b), it follows that $$\Pr[G_{t}=K_{1,t-1}\text{ by (2)}]=2\prod_{i=2}^{t}\left( \frac{1}{i}\cdot\frac{1}{i-1}\right) =\frac{2}{t!(t-1)!}.$$
3. Suppose we add $k-1$ vertices, $1,2,\ldots,k-1$, of degree $0$, where $k\geq3$. At time $k$, we add a vertex, $k$, of degree $k-1$. Since $\Pr[d(i)=0]=1/i$, for $i=1,2,\ldots,k-1$, and $\Pr[d(k)=k-1]=1/k$, $$\Pr[G_{k}=K_{1,k-1}\text{ by (3)}]=\prod_{i=1}^{k-1}\Pr[d(i)=0]\cdot
\Pr[d(k)=k-1]=\left( \prod_{i=1}^{k-1}\frac{1}{i}\right) \cdot\frac{1}{k}=\frac{1}{k!}.$$ The remaining $t-k$ vertices, $k+1,k+2,\ldots,t$, must each be joined to vertex $k$ by a single edge, $\{k,k+1\},\{k,k+2\},\ldots,\{k,t\}$, and $$\Pr[\{k,k+j\}\in E(G_{k+j})]=\frac{1}{k+j}\cdot\frac{1}{k+j-1},$$ for each $j=1,2,\ldots,t-k$. Hence, $$\Pr[G_{t}=K_{1,t-1}\text{ by (3)}]=\sum_{k=3}^{t-1}\frac{1}{k!}\prod
_{i=k+1}^{t}\left( \frac{1}{i}\cdot\frac{1}{i-1}\right) .$$
The analysis carried out with the three cases above is sufficient to obtain the following formula: $$\begin{aligned}
\mathcal{L}(K_{1,t-1}) & =\Pr[G_{t}=K_{1,t-1}\text{ by (1)}]+\Pr[G_{t}=K_{1,t-1}\text{ by (2)}]+\Pr[G_{t}=K_{1,t-1}\text{ by (3)}]\\
& =\frac{1}{t!}+\frac{2}{t!(t-1)!}+\sum_{k=3}^{t-1}\frac{1}{k!}\prod
_{i=k+1}^{t}\frac{1}{i(i-1)}=\frac{t}{(t!)^{2}}\sum_{i=0}^{t-1}i!.\end{aligned}$$
Computation of the likelihood
=============================
Is the likelihood defined for any graph? The answer is yes, as demonstrated by the next statement. This is a plausible graph-theoretic analogue of the infinite monkey theorem:
\[procon\]Any graph can be obtained with the construction in Definition \[def1\].
An *orientation* of $G$ is a function $\alpha:E(G)\longrightarrow
E^{+}(G)$, where $E^{+}(G)$ is a set whose elements, called *arcs*, are ordered pairs of vertices such that either $\alpha(\{i,j\})=(i,j)$ or $\alpha(\{i,j\})=(j,i)$, for each $\{i,j\}\in E(G)$. An orientation is *acyclic* if it does not contain any directed cycles, *i.e.*, distinct vertices $v_{1},...,v_{k}$ such that $(v_{1},v_{2}),(v_{2},v_{3}),...,(v_{k-1},v_{k}),(v_{k},v_{1})$ are arcs. Clearly, every graph has an acyclic orientation. Every acyclic orientation determines at least one linear ordering $v_{1}<v_{2}<\cdots<v_{n}$ of the vertices such that, for each edge $\{v_{i},v_{j}\}$, we have $\alpha(\{v_{i},v_{j}\})=(v_{i},v_{j})$ if and only if $v_{i}<v_{j}$. This is also called a *topological ordering* of the vertices relative to the orientation. For a graph $G$, let $V(G)=\{w_{1},w_{2},...,w_{t}\}$ and let $w_{1}<w_{2}<\cdots<w_{t}$ realize a topological ordering. We can always obtain $G_{t}=G$, if in the iteration we have $v_{1}=w_{1},v_{2}=w_{2},...,v_{t}=w_{t}$.
And, of course:
Every graph on $n$ vertices has a positive likelihood. (More formally, $\mathcal{L}(G)>0$ for any graph $G$.)
Proposition \[procon\] suggests a natural computational problem:
\[Likelihood computation\]\[pro1\]**Given:** A graph $G$. **Task:** Compute $\mathcal{L}(G)$.
There are surely many ways to approach this problem. We consider an algorithm based on a tree whose vertices represent all intermediate graphs obtained during the construction.
\[Identity representation\]Let $G=(V,E)$ be a graph on the set of vertices $V(G)=\{v_{1},v_{2},...,v_{t}\}$. Let us fix an arbitrary labeling of the vertices of $G$ by a bijection $f:V(G)\longrightarrow\{1,2,...,t\}$. Once the bijection is fixed, let us label the first row (resp. column) of the adjacency matrix of $G$, $A(G)$, by the number $t$, the second one by $t-1$,..., the last one by $1$. The bijection $f$ can then be represented by the ordered set $(1,2,...,t)$. We can then define an acyclic orientation of the edges such that $\alpha(\{i,j\})=(i,j)$ if and only if $i<j$, with $i,j=1,2,...,t$. The topological ordering relative to the orientation defines $G_{1}=(\{1\},\emptyset)$, $G_{2}=(\{1,2\},E(G_{2}))$,...,$G_{t}=(\{1,2,...,t\},E(G_{t}))=G$. The pair $(A(G),(1,2,...,t))$ given by the adjacency matrix $A(G)$ together with the ordered set *id* $:=(1,2,...,t)$ is said to be the *identity representation* of $G$.
The identity representation is arbitrary, since it entirely depends on the bijection $f$.
A *permutation of length* $t$ is a bijection $p:\{1,2,...,t\}\longrightarrow\{1,2,...,t\}$. Hence, each permutation $p$ corresponds to an ordered set $(p(1),p(2),...,p(t))$. The set of all permutations of length $t$ is denoted by $S_{t}$. A *permutation matrix* $P$ induced by a permutation $p$ of length $t$ is a $t\times t$ matrix such that $[P]_{i,j}=1$ if $p(i)=j$ and $[P]_{i,j}=0$ otherwise. Lower case letters denote permutations; upper case letters their induced matrices.
\[(Generic) Representation\]Let $G=(V,E)$ be a graph on $t$ vertices. Let $(A(G),$ *id*$)$ be the identity representation of $G$. The pair $(PA(G)P^{T},p)$, where $P$ is a permutation matrix induced by the permutation $p$ is said to be a *representation* of $G$. A representation $(PA(G)P^{T},p)$ is also denoted by $A_{p}(G)$.
An *automorphism* of a graph $G=(V,E)$ is a permutation $p:V(G)\longrightarrow V(G)$ such that $\{v_{i},v_{j}\}\in E(G)$ if and only if $\{p(v_{i}),p(v_{j})\}\in E(G)$. The set of all automorphisms of $G$, with the operation of composition of permutations $\circ
$, is a permutation group denoted by Aut$(G)$. Such a group is the *full automorphism group* of $G$. The permutation matrices $P$, induced by the elements of Aut$(G)$, are precisely the matrices such that $PA(G)P^{T}=A(G)$, *i.e.*, $PA(G)=A(G)P$.
Let $G=(V,E)$ be a graph on $t$ vertices. The total number of different representations of $G$ is $t!/\left\vert \text{\emph{Aut}}(G)\right\vert $.
Let $A_{\text{id}}(G)$ be an identity representation of $G$. By the definition of full automorphism group, for each permutation $p\in$ Aut$(G)$, we have $A_{p}(G)=PA_{\text{id}}(G)P^{T}=A_{\text{id}}(G)$. Let $q\in S_{t}-$ Aut$(G)$. Then, there is a unique permutation $r\in S_{t}-$ Aut$(G)$ such that $q=p\circ r$. It follows that $QA_{\text{id}}(G)Q^{T}=PRA_{\text{id}}(G)R^{T}P^{T}=PA_{r}(G)P^{T}=A_{r}(G)$. This indicates that each representation of $G$ belongs to an equivalence class of representations. Since $\left\vert S_{t}\right\vert =t!$, the total number of different representations of $G$, *i.e.*, the total number of equivalence classes of representations, is $t!/\left\vert \text{Aut}(G)\right\vert $.
In the language of elementary group theory, the equivalence classes are the (left) cosets of the subgroup *Aut*$(G)$ in $S_{t}$.
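As a small illustration (assuming the networkx library, which is not used anywhere in the paper), the number of representations $t!/\left\vert\text{Aut}(G)\right\vert$ can be computed for a small graph by counting the isomorphisms of $G$ with itself:

```python
# Minimal sketch: count |Aut(G)| and hence the number t!/|Aut(G)| of representations.
from math import factorial
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def num_representations(G):
    n_aut = sum(1 for _ in GraphMatcher(G, G).isomorphisms_iter())  # |Aut(G)|
    return factorial(G.number_of_nodes()) // n_aut

print(num_representations(nx.path_graph(3)))      # 3!/2 = 3
print(num_representations(nx.complete_graph(4)))  # 4!/4! = 1
```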
In order to design an algorithm for $\mathcal{L}(G)$, we need some further definitions. A *subgraph* $H=(V^{\prime},E^{\prime})$ of a graph $G=(V,E)$ is a graph such that $V^{\prime}\subseteq V$ and $E^{\prime
}\subseteq E$. We say that a graph $G$ *contains* a graph $H$ if there is a subgraph of $G$ isomorphic to $H$.
Let $G$ be any nonempty graph with $t$ vertices. A *path construction* of $G$ is a sequence $(H_{1},H_{2},\ldots,H_{t})$ of $t$ graphs such that $H_{i}$ has $i$ vertices, $i=1,2,\ldots,t$, and $H_{i}\subset H_{i+1}$, for each $i=1,2,\ldots,t-1$; moreover, $H_{t}\cong G$. We denote the set of all path constructions of a graph $G$ by Path$(G)$.
It is clear that each path construction corresponds to an equivalence class of representations.
The set of path constructions can be represented as a rooted tree $T_{G}$ as follows:
- The root of $T_{G}$ is $T_{1}$. This is the empty graph with a single vertex.
- Assume we already have all the vertices at level $i$ (the level of the root is taken to be $1$) in the tree $T_{G}$. Let $(T_{1},T_{2},\ldots,T_{i})$ be a path in $T_{G}$, if there exists a path construction $L=(T_{1},T_{2},\ldots,T_{i},H_{i+1},\ldots,H_{t})\in$ Path$(G)$ then we define $H_{i+1}$ to be one of the children of the node $T_{i}$ in $T_{G}$.
\[ext1\]The rooted tree $T_{P_{3}}$ is given by
[Figure: the rooted tree $T_{P_{3}}$. The root $T_{1}$ is the single-vertex graph; its children are $T_{21}$ (two isolated vertices) and $T_{22}$ (a single edge); the leaves $T_{31}$, $T_{32}$, $T_{33}$ are the three ways of completing a path on the vertices $v_{1},v_{2},v_{3}$.]
The above figure shows that the set of path constructions of $P_{3}$ is given by $$\emph{Path}(P_{3})=\{(T_{1},T_{21},T_{31}),(T_{1},T_{22},T_{32}),(T_{1},T_{22},T_{33})\}.$$
Let $P=(H_{1},H_{2},\ldots,H_{t})\in$ Path$(G)$ be any path construction of $G$. Fix $i$, then $H_{i+1}$ is obtained by adding a vertex $v_{i+1}$ of degree $d_{i+1}(P)$ to the graph $H_{i}$. Hence $$\Pr[G_{t}=G,\,P\mbox{ is a path construction of }G]=\prod_{i=1}^{t}\frac
{1}{i\binom{i-1}{d_{i}(P)}}=\frac{1}{t!\prod_{i=1}^{t}\binom{i-1}{d_{i}(P)}}.$$
From this, we obtain a relation between $\mathcal{L}(C_{n})$ and $\mathcal{L}(P_{n-1})$ as follows. Recall that $C_{n}$ is the cycle on $n$ vertices and $P_{n}$ is the path on $n$ vertices.
For all $n\geq3$, $\mathcal{L}(C_{n})=\mathcal{L}(P_{n-1})/n\binom{n-1}{2}$.
An algorithm for computing $\mathcal{L}(G)$ can be based on the following theorem:
\[thflg\] Let $G$ be a graph on $t$ vertices. Then $$\mathcal{L}(G)=\sum_{P\in\text{ \emph{Path}}(G)}\frac{1}{t!\prod_{i=1}^{t}\binom{i-1}{d_{i}(P)}}.$$
A simple example is useful:
Let $P_{3}$ be the path graph on $3$ vertices. By Example \[ext1\], we find that $$\begin{aligned}
\Pr[G_{t} & =G,\,(T_{1},T_{21},T_{31})\mbox{ is a path construction of }G]=1\cdot\frac{1}{2}\cdot\frac{1}{3}=\frac{1}{6},\\
\Pr[G_{t} & =G,\,(T_{1},T_{22},T_{32})\mbox{ is a path construction of }G]=1\cdot\frac{1}{2}\cdot\frac{1}{3\cdot
2}=\frac{1}{12},\\
\Pr[G_{t} & =G,\,(T_{1},T_{22},T_{33})\mbox{ is a path construction of }G]=1\cdot\frac{1}{2}\cdot\frac{1}{3\cdot
2}=\frac{1}{12}.\end{aligned}$$ Then, $\mathcal{L}(G)=\frac{1}{6}+\frac{1}{12}+\frac{1}{12}=\frac{1}{3}$.
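To make the computation concrete, the following brute-force Python sketch (not an efficient algorithm, and assuming the networkx library for the isomorphism test) enumerates every run of the construction in Definition \[def1\] and sums the probabilities of the runs whose final graph is isomorphic to $G$; it reproduces $\mathcal{L}(P_{3})=1/3$ and $\mathcal{L}(K_{4})=1/4!$.

```python
# Brute-force sketch of the likelihood of a small graph G.
from itertools import combinations
from fractions import Fraction
import networkx as nx

def likelihood(G):
    t = G.number_of_nodes()
    total = Fraction(0)

    def grow(H, prob):
        nonlocal total
        s = H.number_of_nodes()
        if s == t:
            if nx.is_isomorphic(H, G):
                total += prob
            return
        new = s + 1                       # the vertex added at step s + 1
        for k in range(s + 1):            # substep 1: degree k, probability 1/(s+1)
            subsets = list(combinations(H.nodes, k))
            for nbrs in subsets:          # substep 2: one of the C(s, k) neighbour sets
                H2 = H.copy()
                H2.add_node(new)
                H2.add_edges_from((v, new) for v in nbrs)
                grow(H2, prob * Fraction(1, s + 1) * Fraction(1, len(subsets)))

    G1 = nx.Graph()
    G1.add_node(1)                        # the construction starts from K_1
    grow(G1, Fraction(1))
    return total

print(likelihood(nx.path_graph(3)))       # 1/3, as computed above
print(likelihood(nx.complete_graph(4)))   # 1/24 = 1/4!
```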
By Theorem \[thflg\] and the fact that $|$Path$(G)|$ is exactly equal to the number of representations of $G$, *i.e.* $|$Path$(G)|=t!/\left\vert
\text{Aut}(G)\right\vert $, we obtain the following bounds:
Let $G$ be any nonempty graph on $t$ vertices. Then $$\frac{1}{|\text{\emph{Aut}}(G)|\prod_{i=1}^{t}\binom{i-1}{\lfloor
(i-1)/2\rfloor}}\leq\mathcal{L}(G)\leq\frac{1}{|\emph{Aut}(G)|}.$$
We give two general examples:
Let $G$ be a graph on $t$ vertices with exactly $s$ edges incident with $2s$ distinct vertices (a matching with $s$ edges). Any path $P$ of Path$(G)$ can be seen as a path from a single vertex to the graph $G$. At levels $i_{1},i_{2},\ldots,i_{s}$, we have added an edge between the new vertex and a vertex of degree zero. In all other levels we just added an isolated vertex. Therefore, $$\mathcal{L}(G)=\frac{1}{t!}\sum_{2\leq i_{1}<i_{2}<\cdots<i_{s}\leq t}\prod_{j=1}^{s}\frac{i_{j}+1-2j}{i_{j}-1}.$$
Let $G$ be a graph on $t$ vertices with exactly one edge, then $$\mathcal{L}(G)=\frac{1}{t!}\sum_{i=2}^{t}1=\frac{t-1}{t!}.$$
Let $G$ be a graph on $t$ vertices with exactly two edges incident on four vertices (a matching with two edges), then $$\mathcal{L}(G)=\frac{1}{t!}\sum_{i=2}^{t}\left( \frac{i-2}{i}+\frac{i-1}{i+1}+\cdots+\frac{t-3}{t-1}\right) .$$
[Figure: all graphs on at most four vertices, together with their likelihoods: $1$; $\frac{1}{2}$, $\frac{1}{2}$; $\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{3}$, $\frac{1}{6}$; $\frac{1}{24}$, $\frac{1}{8}$, $\frac{1}{36}$, $\frac{13}{72}$, $\frac{1}{9}$, $\frac{1}{36}$, $\frac{5}{72}$, $\frac{5}{72}$, $\frac{13}{72}$, $\frac{1}{8}$, $\frac{1}{24}$.]
By making use of Theorem \[thflg\], we can prove in a straightforward way that a graph and its complement have equal likelihood. The *complement* of a graph $G=(V,E)$, denoted by $\overline{G}$, is the graph such that $V(\overline{G})=V(G)$ and $E(\overline{G})=V(G)\times V(G)-\{\{v_{i},v_{i}\}:v_{i}\in V(G)\}-E(G)$.
Let $G$ be any graph. Then $\mathcal{L}(G)=\mathcal{L}(\overline{G})$.
Conclusions
===========
We have used a model of graph growth to introduce a notion of graph likelihood, the probability that a graph is grown with the model, and we have then discussed some of its basic aspects. We have proposed an algorithm for the computation of the likelihood, and we have bounded this graph invariant in terms of the automorphism group. We conclude with two natural open problems:
How hard is it to compute the likelihood?
Which graphs are extremal with respect to the likelihood?
*Acknowledgments.* We would like to thank Ginestra Bianconi, Sebi Cioaba, Chris Godsil, Anastasia Koroto, Matt DeVos, and Svante Janson.
[9]{}
C. Godsil, G. Royle, *Algebraic Graph Theory*, Springer-Verlag, 2004.
S. Janson, S. Severini, An example of graph limits of growing sequences of random graphs, June 2012. `arXiv:1206.4586v1 [math.CO]`
Wikipedia contributors, Infinite monkey theorem *Wikipedia, The Free Encyclopedia,*
`http://en.wikipedia.org/wiki/Infinite_monkey_theorem` (accessed February 21, 2013).
|
---
author:
- 'V. Joergens'
date: 'Received 12 April 2005; accepted 14 September 2005'
title: |
Improved kinematics for brown dwarfs\
and very low-mass stars in ChaI\
and a discussion of brown dwarf formation [^1]
---
Introduction
============
The formation of objects below or close to the hydrogen burning limit is one of the main open issues in the field of the origins of solar systems. The almost complete absence of brown dwarfs in close ($<$3AU) orbits around solarlike stars (‘brown dwarf desert’) found in ongoing high-precision RV surveys compared to the detection of more than 150 extrasolar planets in this separation range (e.g. Moutou et al. 2005; Marcy et al. 2005) suggests that brown dwarfs generally do not form like planets by dust condensation in a circumstellar disk. On the other hand, a starlike formation from direct gravitational collapse and fragmentation of molecular clouds requires the existence of cloud cores that are cold and dense enough to become Jeans-unstable for brown dwarf masses, which have not yet been found. On theoretical grounds, the opacity limit for the fragmentation (Low & Lynden-Bell 1976) might prevent the formation of (lower mass) brown dwarfs by direct collapse.
An alternative scenario was proposed in recent years, namely the formation of brown dwarfs by direct collapse of unstable cloud cores of stellar masses that would have become stars if the accretion was not stopped at an early stage by an external process before the object had accreted to stellar mass. It was proposed that such an external process could be the ejection of the protostar out of the dense gaseous environment due to dynamical interactions (Reipurth & Clarke 2001), analogous to the formation of so-called run-away T Tauri stars (Sterzik & Durisen 1995, 1998; Durisen et al. 2001). It is known that the dynamical evolution of gravitationally interacting systems of three or more bodies leads to frequent, close two-body encounters and to the formation of close binary pairs out of the most massive objects in the system, as well as to the ejection of the lighter bodies into extended orbits or out of the system with escape velocity (e.g. Valtonen & Mikkola 1991). The escape of the lightest body is an expected outcome, since the escape probability scales approximately as the inverse third power of the mass. The suggestion of this embryo-ejection model as the formation mechanism for brown dwarfs has stimulated in past years hydrodynamical collapse calculations (Bate et al. 2002, 2003; Bate & Bonnell 2005) and numerical N-body simulations (Sterzik & Durisen 2003; Delgado-Donate et al. 2003, 2004; Umbreit et al. 2005) which predict observable properties of brown dwarfs formed in this way. There are significant differences in the theoretical approaches and predictions of these models, which will be discussed later (see also Joergens 2005a for a recent review of brown dwarf formation).
An external process that prevents the stellar embryo from further accretion and growth in mass can also be photoevaporation by a strong UV wind from a nearby hot O or B star (Kroupa & Bouvier 2003; Whitworth & Zinnecker 2004). Since we focus on brown dwarfs in ChaI, where there is no such hot star, we will not discuss this scenario further.
The ejection process might leave an observable imprint in the kinematics of members ejected from a cluster in comparison to that of non-ejected members. In order to test this scenario, Joergens & Guenther (2001) carried out a precise kinematic analysis of brown dwarfs in ChaI and compared it to that of T Tauri stars in the same field based on mean radial velocities (RVs) measured from high-resolution spectra taken with the UV-Visual Echelle Spectrograph (UVES) at the Very Large Telescope (VLT). In this paper, we now present an improved analysis of this study based on additional RV measurements with UVES of several of the targets in 2002 and 2004, based on a revised data analysis and on a cleaned and updated T Tauri star sample as comparison. Furthermore, the dispersion measured in terms of full width at half maximum (fwhm) in Joergens & Guenther (2001) was misinterpreted in the literature as standard deviation. There is more than a factor of two difference between both quantities. In this paper, we also give the dispersion in terms of the standard deviation and then discuss the implications. We provide an empirical constraint for the kinematic properties of the studied group of very young brown dwarfs in ChaI and discuss the results in the context of current ideas about the formation of brown dwarfs and, in particular, the theoretical predictions of the embryo-ejection model.
It is noted that the UVES spectra were taken within the framework of an ongoing RV survey for planetary and brown dwarf companions to young brown dwarfs and very low-mass stars in ChaI, which was published elsewhere (Joergens 2003, 2005b).
The paper is organized as follows: Sect.\[sect:spec\] contains information about the UVES spectroscopy, data analysis, and the sample. In the next two sections, the kinematic study of brown dwarfs (Sect.\[sect:bds\]) and of T Tauri stars in ChaI (Sect.\[sect:tts\]) is presented. Section \[sect:discussion\] contains a discussion of the results and a comparison with theoretical predictions, and Sect.\[sect:concl\] conclusions and a summary.
UVES spectroscopy, RV determination, and sample {#sect:spec}
===============================================
High-resolution spectra were taken for twelve brown dwarfs and (very) low-mass stars in ChaI between the years 2000 and 2004 with the echelle spectrograph UVES (Dekker et al. 2000) attached to the 8.2m Kueyen telescope of the VLT operated by the European Southern Observatory at Paranal, Chile. Details of the data acquisition, analysis, and RV determination are given in Joergens (2005b). However, we point out here that the errors given in Table 2 of Joergens & Guenther (2001) refer solely to *relative* errors. Additionally, an error of about 400ms$^{-1}$ due to the uncertainty in the zero offset of the template has to be taken into account for the absolute RV. The targets of these observations are brown dwarfs and (very) low-mass stars with an age of a few million years situated in the center of the nearby ($\sim$160pc) ChaI star-forming cloud (Comerón et al. 1999, 2000; Neuhäuser & Comerón 1998, 1999). Membership in the ChaI cluster, and therefore the youth of the objects, is well established based on H$\alpha$ emission, Lithium absorption, spectral types, and RVs (references above; Joergens & Guenther 2001; this work). The measured mean RVs are listed in Table\[tab:bds\] for the nine M6–M8 type brown dwarfs and very low-mass stars ChaH$\alpha$1–8 and ChaH$\alpha$12, while the mean RVs for B34 (M5), CHXR74 (M4.5) and Sz23 (M2.5) are included in Table\[tab:tts\] in the T Tauri star list.
Deviations of RVs measured with UVES in this paper compared to RVs presented in Joergens & Guenther (2001) are attributed to a spectroscopic companion for ChaH$\alpha$8 (Joergens 2005b) and to improved data reduction for the other objects. For the latter, these deviations lie in the range of 3–80ms$^{-1}$ with the exception of ChaH$\alpha$7 where the deviation was on the order of 3kms$^{-1}$, a situation that can be attributed to both the very low S/N of the spectra of this object and a high sensitivity of the resulting RV to the extraction algorithm for very low S/N spectra.
Kinematics of brown dwarfs in ChaI {#sect:bds}
==================================
The determined mean RVs for the nine M6 to M8-type brown dwarfs and very low-mass stars ChaH$\alpha$1–8 and ChaH$\alpha$12 are given in Table\[tab:bds\]. They range between 14.5 and 17.1kms$^{-1}$ with an arithmetic mean of 15.71$\pm$0.31kms$^{-1}$. The RV dispersion measured in terms of standard deviation of a population sample is 0.92kms$^{-1}$ with an uncertainty of $\pm$0.32kms$^{-1}$ derived from error propagation. The RV dispersion measured in terms of fwhm is 2.15kms$^{-1}$. While the new mean RV is larger than the value of 14.9kms$^{-1}$ given by Joergens & Guenther (2001), their measurements of the total RV range (2.4kms$^{-1}$) and of the fwhm dispersion (2.0kms$^{-1}$) differ only marginally from our results[^2].
The borderline between brown dwarfs and stars lies at about spectral type M7; i.e. the sample in Table\[tab:bds\] contains at least two objects, ChaH$\alpha$4 and 5, which are most certainly of stellar nature. The substellar border defined by the hydrogen-burning mass is a crucial dividing line with respect to further evolution of an object, but there is no obvious reason it should be of significance for the formation mechanism by which this object was produced. Thus, by whichever process brown dwarfs are formed, it is expected to work continuously into the regime of very low-mass stars. For an observational test to compare properties of brown dwarfs with those of stars, it is hence not a priori clear where to set the dividing line in mass of the two samples.
Therefore, we also consider the following samples: a) a subsample of Table\[tab:bds\] containing only brown dwarfs and brown dwarf candidates, i.e. all objects with spectral types M6.5 to M8 (7 objects); as well as two larger samples that also include T Tauri stars from Table\[tab:tts\], namely, b) a sample of brown dwarfs and (very) low-mass stars with spectral types M4.5 to M8 (11 objects); and c) a sample composed of all M-type (sub)stellar objects (M0–M8, 17 objects). The mean RVs of these samples (15.87kms$^{-1}$, 15.77kms$^{-1}$, 15.36kms$^{-1}$) and their RV dispersions (standard deviations: 0.97kms$^{-1}$, 0.83kms$^{-1}$, 1.20kms$^{-1}$) do not differ significantly from the values derived for the brown dwarf sample (M6–M8) that was initially chosen. Also based on a Kolmogorov-Smirnov test, the RV distributions of the samples a) to c) are consistent with the one of the M6-M8 sample with a significance level of $\geq$ 99.98% for a) and b) and 98.50% for c).
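For reference, the quoted values for the M6–M8 sample follow directly from the nine mean RVs of Table\[tab:bds\]; a minimal Python sketch is given below, assuming the Gaussian conversion fwhm $=2\sqrt{2\ln 2}\,\sigma$ and taking the uncertainty of the mean as $\sigma/\sqrt{N}$ (the quoted $\pm$0.32kms$^{-1}$ uncertainty on the dispersion comes from propagating the individual RV errors and is not reproduced here).

```python
# Minimal sketch: mean RV, sample standard deviation, error of the mean, and fwhm
# for the nine M6-M8 objects of Table 1 (values in km/s).
import numpy as np

rv = np.array([16.35, 16.13, 14.56, 14.82, 15.47, 16.37, 17.09, 16.08, 14.50])

mean = rv.mean()
sigma = rv.std(ddof=1)               # standard deviation of a population sample
sem = sigma / np.sqrt(rv.size)       # uncertainty of the mean
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma   # Gaussian conversion

print(f"mean = {mean:.2f} +- {sem:.2f} km/s, sigma = {sigma:.2f} km/s, fwhm = {fwhm:.2f} km/s")
# -> mean = 15.71 +- 0.31 km/s, sigma = 0.92 km/s, fwhm = 2.15 km/s
```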
![ \[hist\] ](3421fig1.ps){height=".45\textwidth"}
![ \[cdf\] ](3421fig2.ps){height=".45\textwidth"}
![ \[fig:cdf\_tts\] ](3421fig3.ps){height=".45\textwidth"}
![ \[cdf\_taurus\] ](3421fig4.ps){height=".45\textwidth"}
Kinematics of T Tauri stars in ChaI {#sect:tts}
===================================
In order to compare the kinematics of the brown dwarfs in ChaI with higher-mass stellar objects in this cluster, we compiled all T Tauri stars that are confined to the same region for which RVs have been measured with a precision of 2kms$^{-1}$ or better from the literature (Walter 1992; Dubath et al. 1996; Covino et al. 1997; Neuhäuser & Comerón 1999), from Guenther et al. (in prep., see Joergens & Guenther 2001), and from our own measurements based on UVES spectra for mid- to late M-type ChaI members. The result is a sample of 25 T Tauri stars (spectral types M5–G2) as listed in Table\[tab:tts\]. For nine of them, RV measurements are available from more than one author and for two objects, CHX18N (Walter et al. 1992; Covino et al. 1997) and CHXR74 (Joergens 2005b), RV measurements at different epochs are significantly discrepant, which hints at long-period spectroscopic companions. Table\[tab:tts\] gives the RVs measured by the different authors as well as the derived mean RV that was adopted for this study in the last column.
Compared to Joergens & Guenther (2001), the T Tauri sample was revised by identifying double entries in their Table 2, by taking additional RVs from the literature into account, by rejecting two foreground objects in the previous sample, by an improved data reduction for UVES-based RVs, and by additional UVES measurements. Details are given in the appendix Sect.\[sect:app\].
We found that these 25 T Tauri stars have an arithmetic mean RV of 14.73$\pm$0.25kms$^{-1}$, and an RV dispersion in terms of a standard deviation of 1.26$\pm$0.31kms$^{-1}$ and in terms of an fwhm of 2.96kms$^{-1}$. Compared to the values given by Joergens & Guenther (2001), namely a mean RV of 14.9kms$^{-1}$, a standard deviation of 1.5kms$^{-1}$, and an fwhm of 3.6kms$^{-1}$, the new values are slightly lower. Interestingly, the difference in the dispersion can be solely attributed to the previously unresolved binarity of CHX18N, and the difference in the mean RV can be partly attributed to this fact (cf. Sect.\[sect:app\]).
As for the brown dwarf case, we also calculated the kinematics for subsamples in order to account for the possibility that, if brown dwarfs are formed by a different mechanism than stars, (very) low-mass stars might form in a brown-dwarf-like manner rather than a star-like one. No significant differences were found between the original T Tauri sample, a sample of only those stars with masses larger than about 0.2M$_{\odot}$ (23 stars, M3.25–G2, mean RV: 14.61kms$^{-1}$, RV standard deviation: 1.24kms$^{-1}$), and a sample of only K and G type T Tauri stars (17 stars, K8–G2, mean RV: 14.62kms$^{-1}$, RV standard deviation: 1.21kms$^{-1}$). A Kolmogorov-Smirnov test also showed that the RV distributions of the two subsamples are consistent with the one of the original T Tauri sample, with significance levels above 99.52%.
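As an illustration of the kind of two-sample comparison quoted above, a minimal sketch using scipy is given below; the arrays are placeholders rather than the published samples, and the significance levels quoted in the text were not necessarily derived with this routine.

```python
# Minimal sketch: two-sample Kolmogorov-Smirnov comparison of two RV samples (km/s).
import numpy as np
from scipy.stats import ks_2samp

rv_full   = np.array([14.0, 13.5, 15.2, 14.9, 15.1, 14.7, 16.0, 12.7])  # hypothetical
rv_subset = np.array([15.2, 14.9, 14.7, 16.0])                          # hypothetical

stat, pvalue = ks_2samp(rv_full, rv_subset)
print(f"D = {stat:.3f}, p = {pvalue:.4f}")   # a large p gives no evidence the distributions differ
```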
A kinematic difference between single and binary/multiple stars was suggested by Sterzik & Durisen (2003) and Delgado-Donate et al. (2003) in the sense that the velocities of multiples are less dispersed. Therefore, we investigate the kinematics of the T Tauri stars with respect to their multiplicity status. A significant fraction of the T Tauri stars in our sample has been resolved into visual binaries (Sz19, Sz20, Sz41, F34, CVCha, SXCha, VWCha, CHX22) by Reipurth & Zinnecker (1993), Brandner et al. (1996), Brandner & Zinnecker (1997), and Ghez et al. (1997). Furthermore, as mentioned above, there are indications of spectroscopic companions around CHX18N (Walter 1992; Covino et al. 1997) and CHXR74 (Joergens 2005b). Thus, among the sample of 25 T Tauri stars, at least about 10 are in binary or multiple systems. We calculated a mean RV of 14.68kms$^{-1}$ and an RV dispersion (standard deviation) of 1.02kms$^{-1}$ for the sample of the 10 T Tauri stars in binary or multiple systems and a mean RV of 14.76kms$^{-1}$ and an RV dispersion (standard deviation) of 1.42kms$^{-1}$ for the remaining stars of the sample, which might, for the most part, be single. Thus, the RVs of the sample of ‘single’ T Tauri stars in ChaI are slightly more dispersed than those of the T Tauri binary stars, which are mainly visual and, hence, wide systems, as further discussed in the next section.
[lcc]{}
------------------------------------------------------------------------
object & SpT & RV$_\mathrm{UVES}$\
& & \[kms$^{-1}$\]\
------------------------------------------------------------------------
ChaH$\alpha$1 & M7.5 & 16.35 $\pm$ 0.63\
ChaH$\alpha$2 & M6.5 & 16.13 $\pm$ 0.53\
ChaH$\alpha$3 & M7 & 14.56 $\pm$ 0.60\
ChaH$\alpha$4 & M6 & 14.82 $\pm$ 0.40\
ChaH$\alpha$5 & M6 & 15.47 $\pm$ 0.43\
ChaH$\alpha$6 & M7 & 16.37 $\pm$ 0.68\
ChaH$\alpha$7 & M8 & 17.09 $\pm$ 0.98\
ChaH$\alpha$8 & M6.5 & 16.08 $\pm$ 1.62\
ChaH$\alpha$12 & M7 & 14.50 $\pm$ 0.96\
[llcccccccc]{}
------------------------------------------------------------------------
object & other names & SpT &RV$_{\rm W92}$&RV$_{\rm D96}$&RV$_{\rm C97}$&RV$_{\rm N99}$&RV$_{\rm G}$ & RV$_{\rm UVES}$ &RV$_{\rm mean}$\
& & &\[kms$^{-1}$\]&\[kms$^{-1}$\]&\[kms$^{-1}$\]&\[kms$^{-1}$\]&\[kms$^{-1}$\]& \[kms$^{-1}$\] & \[kms$^{-1}$\]\
------------------------------------------------------------------------
Sz19 & & G2 & 14$\pm$2 & 13.5$\pm$0.6 & & & & & 13.5$\pm$0.1\
CHX7 & & G5 & 17$\pm$2 & & & & & & 17$\pm$2\
CHX22 & & G8 & 14$\pm$2 & & & & & & 14$\pm$2\
CV Cha & & G9 & & 15.1$\pm$0.3 & & & 15.6$\pm$0.9 & & 15.2$\pm$0.2\
Sz6 & & K2 & & 14.9$\pm$0.8 & & & & & 14.9$\pm$0.8\
F34 & & K3 & & & 14.0$\pm$2.0 & & & & 14.0$\pm$2.0\
Sz41 & & K3.5 & & 13.9$\pm$0.4 & 16$\pm$2 & & & & 14.0$\pm$0.4\
CHX20E & & K4.5 & 13$\pm$2 & & & & & & 13$\pm$2\
CT Cha & & K5 & & 15.1$\pm$0.5 & & & 15.5$\pm$1.4 & & 15.1$\pm$0.1\
CHX18N & & K6 & 13$\pm$2 & & 19.0$\pm$2.0 & & & & 16.0$\pm$3.0\
CHX10a & & K6 & 16$\pm$2 & & & & & & 16$\pm$2\
VZ Cha & & K6 & & & & & 14.7$\pm$0.8 & & 14.7$\pm$0.8\
CS Cha & & K6 & & 14.7$\pm$0.3 & & & 14.9$\pm$0.8 & & 14.7$\pm$0.1\
CHXR37 & & K7 & & & 13.1$\pm$2.0 & & & & 13.1$\pm$2.0\
TW Cha & & K8 & & & & & 15.7$\pm$1.2 & & 15.7$\pm$1.2\
VW Cha & & K8 & & & & & 15.1$\pm$0.1 & & 15.1$\pm$0.1\
WY Cha & & K8 & & 12.9$\pm$0.9 & & & 12.1$\pm$0.8 & & 12.5$\pm$0.4\
SX Cha & & M0.5 & & & & & 13.4$\pm$0.9 & & 13.4$\pm$0.9\
SY Cha & & M0.5 & & & & & 12.7$\pm$0.1 & & 12.7$\pm$0.1\
CHX21a & & M1 & 14$\pm$2 & & & & & & 14$\pm$2\
Sz20 & & M1 & & 15.4$\pm$1.3 & & & & & 15.4$\pm$1.3\
Sz23 & & M2.5 & & & & & &15.57$\pm$0.55 & 15.57$\pm$0.55\
Sz4 & & M3.25 & & 16.5$\pm$1.3 & & & & & 16.5$\pm$1.3\
CHXR74 & & M4.5 & & & & 16.5$\pm$1.0 & &14.58$\pm$0.62 &\
& & & & & & & &17.42$\pm$0.44 & 16.2$\pm$0.8\
B34 & & M5 & & & & 17.6$\pm$1.7 & &15.75$\pm$0.42 & 15.9$\pm$0.4\
Discussion {#sect:discussion}
==========
Observational results for ChaI
------------------------------
The mean RV of the M6–M8 type brown dwarfs and very low-mass stars (15.7$\pm$0.3kms$^{-1}$) is consistent with that of the surrounding molecular gas (15.4kms$^{-1}$, Mizuno et al. 1999), in agreement with membership of this substellar population in the ChaI star-forming cloud. In Fig.\[hist\] the RVs determined for the brown dwarfs in ChaI are compared to those of T Tauri stars in the same region in the form of a histogram. It can be seen that the RVs of the brown dwarfs are on average larger, by 1.8 times the errors, than those for the T Tauri stars (14.7$\pm$0.3kms$^{-1}$). The dispersion of the RVs of the brown dwarfs measured in terms of standard deviation (0.9$\pm$0.3kms$^{-1}$) is slightly lower than that of the T Tauri stars (1.3$\pm$0.3kms$^{-1}$), and both are (slightly) larger than that of the surrounding molecular gas (the fwhm of 0.9kms$^{-1}$ given by Mizuno et al. (1999) for the average cloud core translates to a standard deviation of 0.4kms$^{-1}$). While the differences in the dispersion values lie well within the error range, the data make it unlikely that the RVs of the brown dwarfs are significantly more dispersed than those of the T Tauri stars.
Since the velocity distribution arising from the dynamical decay of N-body clusters might be significantly non-Gaussian (Sterzik & Durisen 1998), the determined RVs of the brown dwarfs and T Tauri stars in ChaI are also analyzed in the form of cumulative distributions. In Fig.\[cdf\], the fraction of objects with a relative RV smaller or equal to a given relative RV is plotted. The relative RV is given by the absolute value of the deviation from the mean of the whole group. Since the measured mean RVs for the brown dwarfs and T Tauri stars deviate slightly but significantly, these two different mean RVs are used. It can be seen that the cumulative RV distributions for brown dwarfs and T Tauri stars in ChaI diverge for higher velocities. While the cumulative RV distribution of the brown dwarfs displays a linear and steeper increase, with none of them having an RV deviating from the mean by more than 1.4kms$^{-1}$, the T Tauri stars show a tail of higher velocities: 28% have an RV deviating from the mean by more than 1.4kms$^{-1}$. An examination of the RV distributions of the studied brown dwarf and T Tauri star sample in ChaI based on Kolmogorov-Smirnov statistics also indicates a significant difference. The significance level for the consistency of the distributions of the absolute RVs is only 11.70%, while it is 74.30% for the relative RVs considered in the same manner as for Fig.\[cdf\].
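For concreteness, the construction of the cumulative relative-RV distributions of Fig.\[cdf\] can be sketched as follows; this is an illustration rather than the original code, and `bd_rvs` and `tts_rvs` are hypothetical arrays holding the RVs of Tables\[tab:bds\] and \[tab:tts\].

```python
import numpy as np
from scipy import stats

# Relative RV of each object: absolute deviation from the mean of its own group.
def relative_rvs(rvs):
    rvs = np.asarray(rvs, dtype=float)
    return np.abs(rvs - rvs.mean())

# Cumulative fraction of objects with relative RV <= v, as plotted in Fig. [cdf].
def cumulative_fraction(values):
    v = np.sort(values)
    frac = np.arange(1, len(v) + 1) / len(v)
    return v, frac

# Hypothetical usage; the KS significance levels quoted in the text follow
# from stats.ks_2samp applied to the absolute or the relative RVs:
# v_bd, f_bd = cumulative_fraction(relative_rvs(bd_rvs))
# v_tts, f_tts = cumulative_fraction(relative_rvs(tts_rvs))
# stat, p = stats.ks_2samp(relative_rvs(bd_rvs), relative_rvs(tts_rvs))
```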
That the improved value for the RV dispersion in terms of standard deviation of the T Tauri stars is slightly smaller than the value found by Joergens & Guenther (2001) can be attributed solely to the unresolved binarity of one of the T Tauri stars in the sample. This demonstrates that the orbital motion of unresolved spectroscopic binaries is a source of error. An additional possible error source is RV variability induced by surface spots. Given the recent indications that RV noise caused by surface activity is very small for brown dwarfs in ChaI (Joergens 2005b), the slightly higher value for the RV dispersion of T Tauri stars compared to brown dwarfs might be attributed to such systematic errors being more pronounced in the stellar than in the substellar regime. While the RVs of the T Tauri stars have to a large degree been collected from the literature and obtained with a variety of instruments and wavelength resolutions, there is no hint that this introduces a systematic error. First, for seven out of the nine T Tauri stars for which RVs have been measured by more than one author, consistent RVs were found despite different instrumentation. Secondly, RV dispersions and mean values of the T Tauri subsamples in Table\[tab:tts\] sorted by author, i.e. obtained in a uniform way, agree very well with the kinematic properties derived here for the total sample of compiled T Tauri stars. For example, the T Tauri stars observed by Guenther et al. have an RV dispersion of 1.3kms$^{-1}$ and a mean value of 14.4kms$^{-1}$; the ones observed by Dubath et al. (1996) have 1.1kms$^{-1}$ and 14.7kms$^{-1}$; and the ones observed by Walter (1992) have 1.5kms$^{-1}$ and 14.4kms$^{-1}$.
An investigation of the RVs of the T Tauri stars in ChaI taking their multiplicity status into account showed that the sample of 10 stars for which indications of visual or spectroscopic binaries were detected has less dispersed RVs (standard deviation 1.02$\pm$0.57kms$^{-1}$) compared to the remaining 15 ‘single’ stars of the sample (1.42$\pm$0.41kms$^{-1}$), while these values are still consistent within the errors. In Fig.\[fig:cdf\_tts\], cumulative RV distributions for T Tauri binaries and ‘singles’ are plotted. A trend of less dispersed velocities for the binaries can be seen, with 60% of the T Tauri ‘singles’ having v$\leq$1.5kms$^{-1}$ but 100% of T Tauri binaries having v$<$1.5kms$^{-1}$. A Kolmogorov-Smirnov test also indicates that the RV distributions of these samples might be different (71.59%). A similar study for the brown dwarfs in ChaI is not possible yet since among the brown dwarf sample, so far only one has shown any indication of binarity (ChaH$\alpha$8, Joergens 2005b). The fact that this object has one of the smallest deviations from the mean of the whole group, namely 0.4kms$^{-1}$, is in line with observations of the T Tauri binaries. However, several of the (sub)stellar objects regarded as ‘single’ may still be resolved into multiple systems in the future. Since nine out of these ten T Tauri binaries are *visual*, hence, wide binaries, we can assume that predominantly spectroscopic or, at least, close systems have still not been resolved. Thus, this kinematic difference might translate into a kinematic difference between the group of wide binaries and the group of single stars and close binaries.
Comparing ChaI and Taurus observations {#sect:chaTau}
--------------------------------------
Comparing our kinematic study in ChaI with RV measurements for the Taurus star-forming region shows that, in agreement with our finding for ChaI, the RV dispersion for brown dwarfs in Taurus does not seem to deviate significantly from that of T Tauri stars in this cloud. The RV dispersion measured in terms of the standard deviation of six brown dwarfs and very low-mass stars in Taurus (M6–M7.5) is 1.9kms$^{-1}$ (White & Basri 2003) and that for 38 T Tauri stars in the same cloud is 2.1kms$^{-1}$ (Hartmann et al. 1986).
Nevertheless, there are two differences compared to the situation in ChaI. First, in Fig.\[cdf\_taurus\], the cumulative RV distributions are plotted for both Taurus samples based on the RVs published by the authors. Both follow the same distribution in this diagram, in contrast to the case for ChaI (Fig.\[cdf\]), where the cumulative RV distributions for brown dwarfs and stars diverge for higher velocities.
Secondly, the RV dispersions for Taurus brown dwarfs and stars are significantly higher than the ones for ChaI brown dwarfs (0.9kms$^{-1}$) and stars (1.3kms$^{-1}$). We measured a global RV dispersion of 1.24$\pm$0.24kms$^{-1}$ for all ChaI brown dwarfs and stars in Tables\[tab:bds\] and \[tab:tts\]. For comparison, the velocity dispersion of the molecular gas in the ChaI cloud cores is on average 0.4kms$^{-1}$ in terms of the standard deviation (Mizuno et al. 1999). For Taurus, we found[^3] a global RV dispersion of 2.04$\pm$0.30kms$^{-1}$ for the sample of brown dwarfs and (very) low-mass stars in White & Basri (2003) combined with the sample of T Tauri stars in Hartmann et al. (1986). Again for comparison, the velocity dispersion of the molecular gas in the Taurus cloud cores is on average 0.3kms$^{-1}$ in terms of standard deviation (Onishi et al. 1996). We conclude that the RV dispersions of the ChaI and Taurus (sub)stellar members deviate significantly from each other. The RV dispersion for Taurus is about a factor of two higher than for ChaI, while Taurus has a much lower stellar density and star-formation efficiency (Oasa et al. 1999; Tachihara et al. 2002). Thus, a fundamental increase in velocity dispersion with stellar density of the star-forming region, as suggested by Bate & Bonnell (2005), is not established observationally (see also Sect.\[sect:sph\]).
Comparison with theoretical models
----------------------------------
### Overview of models
Enormous theoretical efforts have been undertaken in recent years to model the formation of brown dwarfs by the embryo-ejection mechanism and to simulate the dynamical evolution in star-forming regions. Hydrodynamical calculations of the collapse of a 50M$_{\odot}$ cloud (Bate et al. 2002, 2003; Bate & Bonnell 2005) have demonstrated that brown dwarfs are formed in these models as stellar seeds that are ejected early. While predictions of (sub)stellar parameters by these models are based on small numbers, this can be overcome by the combination of collapse calculations with N-body simulations of further dynamical evolution (Delgado-Donate et al. 2003, 2004). However, the predicted properties of current hydrodynamical models have been questioned because of the lack of feedback processes (Kroupa & Bouvier 2003a).
On the other hand, simulations of the dynamical evolution of small N-body clusters (Sterzik & Durisen 2003) made statistically robust predictions of the properties of very young brown dwarfs and stars possible. There have been recent efforts to incorporate further details of the star-formation process, in particular ongoing accretion during the dynamical interactions (Umbreit et al. 2005). However, the current N-body simulations do not consider the gravitational potential of the cluster, which might significantly influence, e.g., the predicted velocities.
An approach to estimating the gravitational potential of existing star-forming regions has been suggested by Kroupa & Bouvier (2003b) for Taurus-Auriga and the Orion Nebula Cluster (ONC). However, the assumed cloud core properties are not straightforward to understand. While the cluster mass (stars and gas) of 9000M$_{\odot}$ adopted by the authors for the ONC is plausible for the embedded gas of the whole ONC at birth time (Wilson et al. 2005), the value of 50M$_{\odot}$ for Taurus-Auriga corresponds to about twice the mass of *one* C18O cloud core in Taurus (Onishi et al. 1996). Furthermore, since ChaI cloud cores have a similar average mass and radius (Mizuno et al. 1999) to Taurus, we would find a similar estimate for the gravitational potential when following along these lines, while there is a large difference in stellar density and star-formation efficiency between these regions (Oasa et al. 1999; Tachihara et al. 2002).
### RVs and 3D velocities in ChaI
When considering the dynamical evolution, the transformation from RVs to 3D velocities is not straightforward, but instead depends on details of the gravitational potential of the cluster and on the number of objects with very small velocities, i.e. the details of the simulations, as explained in the following. The brown dwarfs studied in ChaI are situated in a relatively densely populated region of ChaI at the periphery of one of its six cloud cores, the so-called ‘YSO condensation B’ (Mizuno et al. 1999). They occupy a field of less than 0.2$\times 0.2 \deg$ at a distance of 160pc. Given an age of about 2Myr (Comerón et al. 2000), brown dwarfs born within this field and ejected during their formation in directions with a significant component perpendicular to the line-of-sight would have already left this field for velocities of 0.4kms$^{-1}$ or larger (Joergens et al. 2003a). With a velocity of $\sim$0.8kms$^{-1}$, an object can even cross the whole extent of the YSO condensation B ($\sim$0.2$\times$0.5$\deg$). In the case of a significant fraction of brown dwarfs leaving the survey area, the remaining observable objects will be of two sorts: those with very small velocities (too small to travel the extent of the region ($<$0.4kms$^{-1}$) and/or too small to overcome the binding energy of the cluster) and those with larger velocities but moving predominantly in a radial direction. Due to this selection of fast-moving objects predominantly in a radial direction, the observed RV dispersion of such a group in such a limited survey area would be larger than the calculated 1D velocity dispersion that considers all objects regardless of their distance from the birth place. On the other hand, the observed RV dispersion would still be smaller than the calculated 3D velocities given the majority of bound objects with small velocities.
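The distances quoted in this argument follow from simple geometry; the short sketch below (ours, not from the paper) converts a transverse velocity and a travel time into an angular displacement at the distance of ChaI.

```python
import numpy as np

# Rough consistency check of the ejection-distance argument: an object
# moving with transverse velocity v for a time t at distance d covers an
# angle of approximately v*t/d on the sky.
PC_KM = 3.086e13     # km per parsec
MYR_S = 3.156e13     # seconds per Myr

def angular_displacement_deg(v_kms, t_myr, d_pc=160.0):
    path_pc = v_kms * t_myr * MYR_S / PC_KM
    return np.degrees(path_pc / d_pc)

# 0.4 km/s over ~2 Myr corresponds to about 0.3 deg, i.e. more than the
# ~0.2 deg survey field; ~0.8 km/s covers roughly the 0.5 deg extent of
# the YSO condensation B.
print(angular_displacement_deg(0.4, 2.0))   # ~0.29 deg
print(angular_displacement_deg(0.8, 2.0))   # ~0.59 deg
```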
### Comparison with hydrodynamical calculations {#sect:sph}
The brown dwarfs and stars formed in the hydrodynamical model by Bate et al. (2003) share the same kinematic properties. For a stellar density of 1.8$\times$10$^3$ stars/pc$^3$, they find an RMS velocity dispersion of 2.1kms$^{-1}$ in 3D. Identical calculations for a denser star-forming region (2.6$\times$10$^4$ stars/pc$^3$, Bate & Bonnell 2005) yield an RMS velocity dispersion of 4.3kms$^{-1}$. The combined hydrodynamic/N-body model by Delgado-Donate et al. (2004) also predicts no, or at most slight, kinematic differences between ejected and non-ejected members of a cluster. The 3D velocity dispersion of the produced objects is 2–3kms$^{-1}$ (Delgado-Donate, pers.comm.) for modeled stellar densities of 2.5–3.6$\times$10$^4$ stars/pc$^{3}$ (Delgado-Donate et al. 2004).
The theoretical finding that there is no kinematic difference between stars and brown dwarfs is consistent with our measurement of no significant difference between the RV dispersion of brown dwarfs (0.9$\pm$0.3kms$^{-1}$) and stars (1.3$\pm$0.3kms$^{-1}$) in ChaI. However, the RV dispersions observed in ChaI for both brown dwarfs and stars are much smaller than predicted by the models. Since the stellar densities in these calculations (on the order of 10$^{3}$ to 10$^{4}$ stars/pc$^3$) are much higher than in the ChaI cloud (on the order of 10$^{2}$ stars/pc$^3$, Oasa et al. 1999), one could argue that extrapolation towards smaller densities might explain the discrepancy. An extrapolation of the trend seen in the models by Bate et al. (2003) and Bate & Bonnell (2005) towards the stellar densities in ChaI, for instance, would yield a velocity dispersion of about 1kms$^{-1}$, which is consistent with the RV dispersion measured for ChaI. However, the same extrapolation yields a value that is highly inconsistent with the observations for Taurus, which has a much smaller observed stellar density than ChaI (see Sect.\[sect:chaTau\]). Furthermore, there are apparent inconsistencies between different theoretical models: the model of Delgado-Donate et al. (2004) produces stellar densities similar to those of calculation 2 of Bate & Bonnell (2005), yet the two predict very different velocity dispersions of 2–3kms$^{-1}$ (Delgado-Donate, pers.comm.) and 4.3kms$^{-1}$ (Bate & Bonnell 2005), respectively. We therefore conclude that the dependence of the velocity dispersion on the stellar densities has not yet been established and an extrapolation is not advisable.
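To make the extrapolation argument concrete, one can interpolate a power law through the two hydrodynamical calculations; the sketch below is our illustration of this step, not a fit performed in the cited papers.

```python
import numpy as np

# Power law through the two hydrodynamical calculations quoted above:
# (1.8e3 stars/pc^3, 2.1 km/s) from Bate et al. (2003) and
# (2.6e4 stars/pc^3, 4.3 km/s) from Bate & Bonnell (2005),
# evaluated at a ChaI-like density of ~2e2 stars/pc^3.
n1, s1 = 1.8e3, 2.1
n2, s2 = 2.6e4, 4.3
slope = np.log(s2 / s1) / np.log(n2 / n1)   # ~0.27

def sigma_extrapolated(n_stars_per_pc3):
    return s1 * (n_stars_per_pc3 / n1) ** slope

print(sigma_extrapolated(2e2))   # ~1.2 km/s, of order the ChaI dispersion
```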
### Comparison with N-body simulations
The decay models of Sterzik & Durisen (2003) predict that 25% of the brown dwarf singles have a velocity that is smaller than 1kms$^{-1}$. This is a smaller percentage than found by our observations, where 67% of the brown dwarfs in ChaI have RVs smaller than 1kms$^{-1}$. Furthermore, Sterzik & Durisen (2003) find a high-velocity tail with 40% of single brown dwarfs having v$>$1.4kms$^{-1}$ and 10% having v$>$5kms$^{-1}$. This is also not seen in our data, where none has an RV deviating from the mean RV of the group by more than 1.4kms$^{-1}$. Admittedly, the relatively small size of our brown dwarf sample does not exclude the possibility that we have missed the 40% of objects moving faster than 1.4kms$^{-1}$.
In the formation phase where dynamical interactions become important, gas accretion might still be a significant factor. N-body simulations by Umbreit et al. (2005) find higher ejection velocities for models taking ongoing accretion during the dynamical encounters into account. They predict that between 60% and almost 80% of single brown dwarfs have velocities larger than 1kms$^{-1}$ depending on the accretion rates. That is much larger than found by our observations, where only about 30% of the brown dwarfs have velocities $>$ 1kms$^{-1}$.
Sterzik & Durisen (2003) furthermore find different kinematics for singles and binaries: 90% of their stellar binaries have velocities smaller than 1kms$^{-1}$, while only 50% of stellar singles can be found in that velocity range. This agrees with our tentative finding that the subgroup of binaries among the studied sample of T Tauri stars in ChaI has a lower RV dispersion (1.0kms$^{-1}$) and no high-velocity tail (Fig.\[fig:cdf\_tts\]) compared to the remaining ‘single’ stars (1.42kms$^{-1}$). However, the T Tauri ‘single’ star sample might be contaminated by unresolved binaries.
Conclusions and summary {#sect:concl}
=======================
In order to pave the way to an understanding of the still unknown origins of brown dwarfs, we explored the kinematic properties of extremely young brown dwarfs in the star-forming cloud ChaI based on precise RVs measured from high-resolution UVES/VLT spectra. This kinematic study is an improved version of the one by Joergens & Guenther (2001), which provided the first observational constraints for the velocity distribution of a group of very young brown dwarfs.
We found that nine brown dwarfs and very low-mass stars (M6–M8, M$\la$0.1M$_{\odot}$) in ChaI kinematically form a very homogeneous group. They have very similar absolute RVs with a mean value of 15.7kms$^{-1}$, an RV dispersion in terms of standard deviation of 0.9kms$^{-1}$, and a total covered RV range of 2.6kms$^{-1}$.
We conducted a comparison of the kinematic properties of these brown dwarfs with those of 25 T Tauri stars confined to the same field based on our UVES measurements, as well as on RVs from the literature. For the T Tauri stars, we determined a mean RV of 14.7kms$^{-1}$, an RV dispersion of 1.3kms$^{-1}$, and a total range of 4.5kms$^{-1}$.
The mean RVs of the brown dwarfs are larger than that of the T Tauri stars by less than two times the errors; however, both values are consistent with the velocity of the molecular gas of the surroundings. The RV dispersion measured for the brown dwarfs is slightly smaller than the one for the T Tauri stars, but this difference lies within the errors. We found that the cumulative RV distributions for the brown dwarfs and for the T Tauri stars diverge for RVs higher than about 1kms$^{-1}$, with the brown dwarfs displaying no tail with high velocities in contrast to the T Tauri stars. This could be an intrinsic feature or might be attributed to more pronounced systematic RV errors in the stellar than in the substellar mass domain.
The finding of consistent RV dispersions for brown dwarfs and stars in ChaI (Joergens & Guenther 2001; this paper) is also seen in RV data for brown dwarfs and stars in the Taurus star-forming region (Hartmann et al. 1986; White & Basri 2003). We calculated global RV dispersions for all brown dwarfs and stars in ChaI (1.24kms$^{-1}$) and Taurus (2.0kms$^{-1}$) and found that the value for Taurus is significantly higher than the one for ChaI, by about a factor of two. Given that the stellar density of Taurus is much smaller than that of ChaI (Oasa et al. 1999), we conclude that a fundamental increase of velocity dispersion with stellar density of the star-forming region, as suggested by Bate & Bonnell (2005), is not established observationally.
We compared the kinematic study in ChaI with theoretical hydrodynamical or N-body calculations of the embryo-ejection scenario for the formation of brown dwarfs. That there is no significant difference between the RV dispersion of brown dwarfs and T Tauri stars in ChaI and that the differences found in the cumulative RV distributions for both groups might be explained by systematic errors are both consistent with the finding of no mass dependence of the velocities in the models by Bate et al. (2003) and Bate & Bonnell (2005) and with the finding of only a small mass dependence in the model by Delgado-Donate et al. (2004). However, the observed RV dispersions in ChaI for both the brown dwarfs and the stars are much smaller than predicted by these models. There is a difference of about a factor of ten to one hundred in stellar density between ChaI and the simulated star-forming regions. While an extrapolation of the predictions of Bate & Bonnell (2005) to these smaller densities might be consistent with the empirical value measured by us for ChaI, we show that such an extrapolation is not advisable, among other reasons because it does not at the same time yield consistent results for the less dense Taurus region.
Sterzik & Durisen (2003) and Umbreit et al. (2005) provide cumulative distributions of their results, which we can compare directly to our observed cumulative RV distributions. The high-velocity tail seen by Sterzik & Durisen (2003) is even more pronounced in the models by Umbreit et al. (2005), who consider ongoing accretion during the dynamical encounters. However, it is not seen in the observed RV distribution of brown dwarfs in ChaI. We suggest that the brown dwarfs in ChaI show no high-velocity tail, but we cannot exclude the possibility that we have missed the 40–50% of fast-moving brown dwarfs in our relatively small sample comprising nine objects.
We found that a subsample of known predominantly wide binaries among the T Tauri stars studied in ChaI has (i) a smaller RV dispersion (1.0kms$^{-1}$) and (ii) no high-velocity tail compared to the remaining ‘single’ T Tauri stars (RV dispersion 1.4kms$^{-1}$). This observational hint of a kinematic difference between singles and binaries is in line with theoretical predictions by Sterzik & Durisen (2003).
The comparison of observations in ChaI with theoretical calculations had to deal with the difficulty that the current models do not predict uniform quantities to describe the velocity distribution. Furthermore, at the moment, their predictive power is limited by simplifications adopted therein, e.g. the lack of gravitational potential in N-body simulations and the lack of feedback processes in hydrodynamical calculations, as well as the fact that the latter are performed for much higher densities than found in intensively observed clouds like ChaI or Taurus.
The observational constraint for the velocity distribution of a homogeneous group of closely confined very young brown dwarfs provided by the high-resolution spectroscopic study here is a first *empirical* upper limit for ejection velocities. It would be valuable to extend these observations to objects not yet observed and/or to newly detected young brown dwarfs in ChaI (e.g. López Martí et al. 2004) and in other star-forming regions, in order to put the results on an improved statistical basis.
I am grateful to the referee, Fernando Comerón, for very helpful comments which significantly improved the paper. I would also like to thank Kengo Tachihara, Pavel Kroupa, Matthew Bate, Eduardo Delgado-Donate, and Stefan Umbreit for interesting discussions and Michael C. Liu for hinting at double entries in a table in a previous publication. It is a pleasure to acknowledge the excellent work of the ESO staff at Paranal, who took all the UVES observations the present work is based on in service mode. Furthermore, I acknowledge support by a Marie Curie Fellowship of the European Community program ‘Structuring the European Research Area’ under contract number FP6-501875. This research made use of the SIMBAD database, operated at CDS, Strasbourg, France.
Armitage, P. J., & Clarke, C. J. 1997, MNRAS, 285, 540
Bate, M. R., Bonnell, I. A., & Bromm, V. 2002, MNRAS 332, L65
Bate, M. R., Bonnell, I. A., & Bromm, V. 2003, MNRAS 339, 577
Bate, M. R., & Bonnell, I. A. 2005, MNRAS 356, 1201
Brandner, W., Alcalá, J. M., Kunkel, M., Moneti, A., & Zinnecker, H. 1996, A&A 307, 121
Brandner, W., & Zinnecker, H. 1997, A&A 321, 220
Comerón, F., Rieke, G. H., & Neuhäuser, R. 1999, A&A, 343, 477
Comerón, F., Neuhäuser, R., & Kaas, A. A. 2000, A&A, 359, 269
Covino, E., Alcalá, J. M, Allain, S. et al. 1997, A&A 328, 187
Dekker, H., D’Odorico, S., Kaufer, A., Delabre, B., & Kotzlowski, H. 2000, In: SPIE Vol. 4008, p. 534, ed.by M. Iye, & A. Moorwood
Delgado-Donate, E. J., Clarke, C. J., & Bate, M. R. 2003, MNRAS 342, 926
Delgado-Donate, E. J., Clarke, C. J., & Bate, M. R. 2004, MNRAS 347, 759
Dubath, P., Reipurth, B., & Mayor, M. 1996, A&A 308, 107
Durisen, R. H., Sterzik, M. F., & Pickett, B. K. 2001, A&A 371, 952
Ghez, A. M., McCarthy, D. W., Patience, J. L., & Beck, T. L. 1997, ApJ 481, 378
Hartmann, L., Hewett, R., Stahler, S., & Mathieu, R. D. 1986, ApJ 309, 275
Joergens, V., & Guenther, E. 2001, A&A, 379, L9
Joergens, V. 2003, PhD thesis, Ludwigs-Maximilians Universität München
Joergens, V., Neuhäuser, R., Guenther, E. W., Fernández, M., & Comerón F., In: IAU Symposium No. 211, Brown Dwarfs, ed. by E. L. Martín, Astronomical Society of the Pacific, San Francisco, 2003, p.233
Joergens, V. 2005a, In: Reviews in Modern Astronomy, Vol. 18, ed. by S. Roeser, Wiley, Weinheim, p. 216-239 astro-ph/0501220
Joergens, V. 2005b, A&A, in press, astro-ph/0509134
Kroupa, P., & Bouvier, J. 2003a, MNRAS 346, 343
Kroupa, P., & Bouvier, J. 2003b, MNRAS 346, 369
López Martí, B., Eislöffel, J., Scholz, A., & Mundt, R. 2004, A&A 416, 555
Luhman, K. 2004, ApJ 602, 816
Marcy, G., Butler, P., Vogt, S., Fischer, D., Henry, G., Laughlin, G., Wright, J. & Johnson, J. 2005, ApJ, 619, 570
Moutou, C., Mayor, M., Bouchy, F. et al. 2005, A&A 439, 367
Mizuno, A., Hayakawa, T., Tachihara, K. et al. 1999, PASJ 51, 859
Neuhäuser, R., & Comerón, F. 1998, Science, 282, 83
Neuhäuser, R., & Comerón, F. 1999, A&A, 350, 612
Onishi, T., Mizuno, A., Kawamura, A., Ogawa, H., & Fukui, Y. 1996, ApJ 465, 815
Reipurth, B., & Zinnecker, H. 1993, A&A 278, 81
Reipurth, B., & Clarke C. 2001, ApJ 122, 432
Sterzik, M. F., & Durisen, R. H. 1995, A&A 304, L9
Sterzik, M. F., & Durisen, R. H. 1998, A&A 339, 95
Sterzik, M. F., & Durisen, R. H. 2003, A&A, 400, 1031
Tachihara, K., Onishi, T., Mizuno, A., & Fukui, Y. 2002, A&A 385, 909
Umbreit, S., Burkert, A., Henning, T., Mikkola, S., & Spurzem, R. 2005, ApJ, 623, 940
Valtonen, M., & Mikkola, S. 1991, ARA&A 29, 9
Walter, F. M. 1992, AJ 104, 758
White, R. J., & Basri, G. 2003, ApJ 582, 1109
Details on the T Tauri star sample {#sect:app}
==================================
The sample of T Tauri stars was revised in the present work compared to Joergens & Guenther (2001) by identifying five double entries under different names that correspond to the very same objects in their Table2 (Sz9$\equiv$CS Cha; Sz11$\equiv$CT Cha; Sz36$\equiv$WY Cha; ; Sz42$\equiv$CV Cha). Furthermore, two stars (Sz15/T19 and B33/CHXR25) of the previous T Tauri star sample were revealed as foreground stars by Luhman (2004) and were rejected. Moreover, RV measurements by Walter (1992) of several T Tauri stars in ChaI were not considered for the previous kinematic study and were taken into account for the revised version presented here. For CHX18N, the RV measured by Walter (1992) is significantly discrepant with the measurement of Covino et al. (1997), thus hinting at a long-period spectroscopic binary. For the earlier publication, only the measurement by Covino et al. (1997) of 19.0$\pm$2.0kms$^{-1}$ for this star was taken into account, which made it an outlier in the T Tauri star sample, while the present paper also takes the previously overlooked RV determination of 13$\pm2$kms$^{-1}$ by Walter (1992) into account. Therefore, the new mean RV for this object results in a narrower RV dispersion of the whole sample.
The kinematic study of T Tauri stars was also revised by an improved analysis of the UVES spectra for B34, CHXR74, and Sz23 and, in addition to this, by taking into account new UVES-based RVs for CHXR74 and Sz23 obtained by us in 2002 and 2004. For Sz23, the change in RV from 2000 to 2004 is marginal, whereas for CHXR74, the discrepancy between the mean RV for 2000 and for 2004 is more than 2kms$^{-1}$, hinting at a spectroscopic companion (cf. Joergens 2003, 2005b).
[^1]: Based on observations at the Very Large Telescope of the European Southern Observatory at Paranal, Chile in program 65.L-0629, 65.I-0011, 268.D-5746, 72.C-0653.
[^2]: The fwhm is related to the standard deviation $\sigma$ of a Gaussian distribution by fwhm=$ \sigma \sqrt{8 \ln 2}$; however, as pointed out by Sterzik & Durisen (1998), the escape velocity distribution might be significantly non-Gaussian.
[^3]: For the calculation of the error of the dispersion, the individual errors of the RV values are necessary, which are not always given by Hartmann et al. (1986). In the cases of absent individual errors, we used an average error derived from the given individual errors.
Since the experimental achievement of Bose-Einstein condensation (BEC) in confined alkali gases [@Cornell; @Ketterle; @Hulet; @Others], the possibility of generating vortices in confined weakly-interacting dilute Bose gases has been intensively studied [@McCann; @Adams; @Dum; @Marshall; @Davies; @Marzlin; @Holland; @Cornell2]. While theoretical investigations of stability have generally been restricted to the case of a single vortex [@Dalfovo; @Fetter1; @Muller; @Pu; @Feder1], the proposed experimental techniques may induce several vortices simultaneously [@Rokhsar]. Under appropriate stabilizing conditions, such as a continuously applied torque, these vortices would form an array akin to those obtained in rotating superfluid helium [@Packard1; @Tkachenko; @Campbell].
A standard approach used to ‘spin-up’ superfluid helium is to rotate the container at an angular frequency $\Omega$. Aside from significant hysteresis effects [@Packard2; @Jones], vortices tend to first appear at a frequency $\Omega_{\nu}$, whose value is comparable to the critical frequency $\Omega_c$ at which the presence of vortices lowers the free energy of the interacting system [@Fetter2]. Energy minimization arguments have also yielded vortex arrays that are very similar to those observed experimentally [@Tkachenko; @Campbell]. Despite these successes, the mechanisms for vortex [*nucleation*]{} by rotation remain poorly understood; important factors are thought to include presence of a normal fluid, impurities, and surface roughness [@Jones; @Donnely; @Aranson].
It has been suggested that vortices may be similarly generated in the dilute Bose gases by rotating the trap about its center [@Fetter1; @Rokhsar]. Evidently, a harmonic potential can transfer angular momentum to the gas only if it is anisotropic in the plane of rotation. While vortices in such a system at zero temperature have been shown to become energetically stable for $\Omega>\Omega_c$ [@Dalfovo; @Fetter1; @Feder1], the particle flow could remain irrotational at these angular frequencies since there exists an energy barrier to vortex formation [@Fetter1]. Suppression of this barrier could be induced by application of a perturbing potential near the edge of the confined gas, as has been simulated in the low-density limit [@Rokhsar]. One of the primary motivations for the present work, however, is to determine if there exists any intrinsic mechanism for vortex nucleation in a dilute quantum fluid that is free of impurities, surface effects, and thermal atoms. We find that vortices can indeed be generated by rotating Bose condensates confined in an anisotropic harmonic trap. The value of $\Omega_{\nu}$ at which vortices are spontaneously nucleated is somewhat larger than $\Omega_c$. For $\Omega>\Omega_{\nu}$ multiple vortices appear simultaneously, in patterns that depend upon the geometry of the trap.
The dynamics of a dilute Bose condensate at zero temperature are governed by the time-dependent Gross-Pitaevskii (GP) equation [@GP]. Previous simulations of the GP equation have demonstrated that vortex-antivortex pairs or vortex half-rings can be generated by superflow around a stationary obstacle [@McCann; @Adams; @Davies; @Frisch] or through a small aperture [@Burkhart]. In these simulations, the vortex pairs form when the magnitude of the superfluid velocity exceeds a critical value which is proportional to the local sound velocity; recent experimental results support this conclusion [@Ketterle2]. To our knowledge, no numerical investigation of vortex nucleation in three-dimensional inhomogeneous rotating superfluids has hitherto been attempted.
The numerical calculations presented here model the experimental apparatus of Kozuma [*et al.*]{} [@Phillips], where $^{23}$Na atoms are confined in a completely anisotropic three-dimensional harmonic oscillator potential. In the presence of a constant external torque, the condensate obeys the time-dependent GP equation in the rotating reference frame:
$$i\partial_{\tau}\psi({\bf r},\tau)
=\left[T+V_{\rm trap}+V_{\rm H}-\Omega L_z\right]\psi({\bf r},\tau),
\label{gp}$$
where the kinetic energy is $T=-{\case1/2}\vec{\nabla}^2$, the trap potential is $V_{\rm trap}={\case1/2}\left(x^2+\alpha^2y^2+\beta^2z^2\right)$, and the Hartree term is $V_{\rm H}=4\pi\eta|\psi|^2$. The angular momentum operator $L_z=i\left(y\partial_x-x\partial_y\right)$ rotates the system about the $z$-axis at the trap center at a constant angular frequency $\Omega$. The trapping frequencies are $(\omega_x,\omega_y,\omega_z)=\omega_x(1,\alpha,\beta)$ with $\omega_x=2\pi\times 26.87$ rad/s, $\alpha=\sqrt{2}$, and $\beta=1/\sqrt{2}$. Normalizing the condensate $\int d{\bf r}|\psi({\bf r},\tau)|^2=1$ yields the scaling parameter $\eta=N_0a/d_x$, where $a=2.75$ nm is the s-wave scattering length for Na and $N_0$ is the number of condensate atoms. Unless explicitly written, energy, length, and time are given throughout in scaled harmonic oscillator units $\hbar\omega_x$, $d_x=\sqrt{\hbar/M\omega_x}\approx 4.0~\mu$m, and ${\rm T}=\omega_x^{-1}\approx 6$ ms, respectively.
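The conversion to scaled harmonic-oscillator units quoted above is easy to reproduce; the following short sketch (ours, not from the paper, using standard values for $\hbar$ and the $^{23}$Na mass) evaluates $d_x$, the time unit, and the interaction parameter $\eta=N_0a/d_x$.

```python
import numpy as np

# Sketch of the unit conversions for the trap parameters quoted in the text.
hbar = 1.0546e-34            # J s
m_na = 23 * 1.6605e-27       # kg, approximate mass of 23Na
omega_x = 2 * np.pi * 26.87  # rad/s
a_scatt = 2.75e-9            # m, s-wave scattering length of Na

d_x = np.sqrt(hbar / (m_na * omega_x))   # ~4.0e-6 m
t_unit = 1.0 / omega_x                   # ~6e-3 s

def eta(n0):
    """Dimensionless interaction parameter eta = N0 a / d_x."""
    return n0 * a_scatt / d_x

print(d_x, t_unit, eta(1e6))   # ~4.0 um, ~6 ms, eta of order 7e2
```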
The stationary ground-state solution of the GP equation, defined as that which minimizes the value of the chemical potential, is found by norm-preserving imaginary time propagation (the method of steepest descents) using an adaptive stepsize Runge-Kutta integrator. The complex condensate wavefunction is expressed within a discrete-variable representation (DVR) [@Feder2] based on Gauss-Hermite quadrature, and is assumed to be even under inversion of $z$. The numerical techniques are described in greater detail elsewhere [@Feder1; @Feder2]. The initial state (at zero imaginary time $\tilde{\tau}\equiv i\tau=0$) is taken to be the vortex-free Thomas-Fermi (TF) wavefunction $\psi_{\rm TF}=\sqrt{(\mu_{\rm TF}-V_{\rm trap})/4\pi\eta}$, which is the time-independent solution of Eq. (\[gp\]), neglecting $T$ and $L_z$, with chemical potential $\mu_{\rm TF}={\case1/2}(15\alpha\beta\eta)^{2/5}$. The GP equation for a given value of $\Omega$ and $N_0$ is propagated in imaginary time until the fluctuations in both the chemical potential and the norm become smaller than $10^{-11}$. It should be emphasized that the equilibrium configuration is found not to depend on the choice of purely real initial state. Since the final state is unconstrained except for $z$-parity, the lowest-lying eigenfunction of the GP equation corresponds to a local minimum of the free energy functional.
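The norm-preserving imaginary-time (steepest-descent) iteration described above can be illustrated with a deliberately simplified one-dimensional analogue; the paper itself uses a 3D DVR basis and an adaptive Runge-Kutta integrator, and the grid, step size, and interaction strength `g` below are illustrative choices, not the paper's parameters.

```python
import numpy as np

# Minimal 1D sketch of norm-preserving imaginary-time propagation of a
# GP-like equation with a harmonic trap (illustrative parameters only).
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
g = 50.0                       # assumed dimensionless 1D interaction strength
v_trap = 0.5 * x**2

psi = np.exp(-x**2 / 2)        # arbitrary real initial state
psi /= np.sqrt(np.sum(psi**2) * dx)

dt = 1e-4                      # imaginary-time step
for _ in range(20000):
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    h_psi = -0.5 * lap + (v_trap + g * psi**2) * psi
    psi -= dt * h_psi                            # steepest-descent step
    psi /= np.sqrt(np.sum(psi**2) * dx)          # restore the norm

lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
mu = np.sum(psi * (-0.5 * lap + (v_trap + g * psi**2) * psi)) * dx
print("chemical potential estimate:", mu)
```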
In Fig. \[irrot\] are depicted the condensate density, which is stationary in the rotating frame, as well as the condensate phase and the velocity field in the laboratory and rotating frames, for $\Omega=0.45\omega_x$ and $N_0=10^6$. The density profile at this angular frequency contains no vortices but is slightly extended from that of a non-rotating condensate due to the centrifugal forces. The velocity field in the laboratory frame is given by ${\bf v}_s^l\equiv\vec{\nabla}\varphi$ in units of $\omega_xd_x$, where $\varphi$ is the condensate phase. In the rotating frame, ${\bf v}_s^r={\bf v}_s^l-\Omega\hat{z}\times{\bf r}$. There are no closed velocity streamlines found in Fig. \[irrot\](a). Such an irrotational flow $\vec{\nabla}\times{\bf v}_s=0$ is characteristic of a superfluid, distinct from the related properties of vortex quantization and stability. The only solution of the GP equation satisfying irrotational flow in a cylindrically-symmetric trap is ${\bf v}_s=0$: rotating the trap is equivalent to doing nothing. The irrotational velocity field for an anisotropic trap is nontrivial, however. Since the density profile is independent of orientation, mass flow must accompany the rotation even though the superfluid prefers to remain at rest [@Fetter2].
The condensate is found to remain vortex-free for angular velocities significantly larger than the expected critical frequency for the stability of a single vortex $\Omega_c^{(1)}$ [@Dalfovo; @Fetter1; @Feder1]. In order to determine if irrotational configurations correspond to the global free energy minima of the system, vortex states are investigated by artificially imposing total circulation $n\kappa$ on the condensate wavefunction. By winding the phase at $\tilde{\tau}=0$ by $2\pi n$ about the trap center, imaginary-time propagation of the GP equation yields the minimum energy configuration with $n$ vortices if such a solution is stationary or metastable [@Feder1]. The results for $N_0=10^6$ and $\Omega=0.45\omega_x$ are summarized in Table \[tablerot\]. At this angular frequency, states with $n=1,2,3$ are all energetically favored over the vortex-free solution. The vortices in these cases are predominantly oriented along the ($\hat{z}$) axis of rotation, and are located symmetrically about the origin on the (loose) $x$-axis. The frequency chosen is too low to support the four-vortex case ($\Omega<\Omega_c^{(4)}$), but is larger than the frequency (which may correspond to metastability) at which the chemical potentials for $n=3$ and $n=4$ cross.
As shown in Fig. \[rot\], vortices with the same circulation $\kappa$ (as opposed to vortex-antivortex pairs) begin to penetrate the cloud above a critical angular velocity for vortex nucleation $\Omega_{\nu}$. The value of $\Omega_{\nu}$ is found to not depend strongly on trap geometry and to decrease very slowly with $N_0$; for $N_0=10^q$ with $q=\{5,6,7\}$, we obtain $\Omega_{\nu}=\{0.65,\,0.50,\,0.36\}\omega_x\pm 0.01\omega_x$, respectively. In contrast, the critical frequency for the stabilization of a single vortex in an anisotropic trap is approximately given by the TF expression $\Omega_c^{(1)}\approx(5\alpha/2R^2)\ln(R/\xi)\omega_x$, where $R=\sqrt{2\mu_{\rm TF}}$ and $\xi=\sqrt{\alpha}/R$ are the dimensionless condensate radius along $\hat{x}$ and the healing length, respectively [@Fetter1; @Feder1]. For the parameters considered here, the values are predicted to be $\Omega_c^{(1)}=\{0.61,\, 0.33,\,0.16\}\omega_x$ and are found numerically to be $\Omega_c^{(1)}=\{0.54,\, 0.29,\,0.14\}\omega_x$. The number of vortices $n_{\nu}$ present just above $\Omega_{\nu}$ is found to increase with $N_0$; $n_{\nu}=4$ and $8$ for $N_0=10^6$ and $10^7$, respectively. The value of $\Omega_{\nu}$ may be interpreted as the critical frequency $\Omega_c^{(n_{\nu})}$ for the stabilization of $n_{\nu}$ vortices. If $n_{\nu}=n$ for all $N_0$, then $\Omega_{\nu}\sim\Omega_c^{(n)}\sim N_0^{-2/5}$. That $\Omega_{\nu}$ decreases more slowly with $N_0$ implies that $n_{\nu}$ must increase with $N_0$. The small difference between $\Omega_c^{(1)}$ and $\Omega_{\nu}$ for $N_0=10^5$ reflects the instability of vortex arrays in the low-density limit. As $N_0$ decreases, the spacing between successive $\Omega_c^{(n)}$ diminishes, and vanishes for $N_0=0$ in cylindrically-symmetric traps; for very large $N_0$, the spacing approaches a constant as the vortex-vortex interactions become negligible.
The numerical results for $\Omega_{\nu}$ suggest that the criteria for vortex stabilization and nucleation are different. Superflow through microapertures, or the motion of an object or ion through a superfluid, can give rise to vortex half-ring [@Burkhart; @Packard3] or vortex-pair [@McCann; @Adams; @Davies; @Frisch; @Ketterle2] production through the accumulation of phase-slip. One might expect similar excitations in a rotating condensate [@Jones; @Kusmartsev]: vortex half-rings would be nucleated at the condensate surface when the local tangential velocity exceeds a critical value. Indeed, the distinction between a half-ring and vortex becomes blurred in a trapped gas with curved surfaces, as discussed further below.
A crude estimate of $\Omega_{\nu}$ may be obtained by invoking the Landau criterion for the critical velocity $v_{\rm cr}={\rm min}(\omega_q/q)$, where $\omega_q$ is the frequency of the mode at wavevector $q$. Such a minimum corresponds to values of $q_c$ at which the hydrodynamic description of the collective excitations begins to fail [@Dalfovo]. For a spherical trapped Bose gas, the crossover to a single-particle behavior occurs in a boundary-layer region at the cloud surface whose thickness is several $\delta=(2R)^{-1/3}d_x$ [@Pethick; @Feder3]. Minimizing $\omega_q/q$ using the dispersion relation for the planar surface modes [@AlKhawaja] of such a system $\omega_q^2\approx\omega_x^2[qR-d_x^4q^4\left(\ln(q\delta)-0.15\right)]$, one obtains $q_c=(R/0.3)^{1/3}d_x^{-1}\approx\delta^{-1}$ and $\Omega_{\nu}=v_{\rm cr}/R\sim R^{-2/3}$. Since $R\sim N_0^{1/5}$, the critical frequency $\Omega_{\nu}\sim N_0^{-2/15}$ decreases far more slowly than does the TF estimate for $\Omega_c\sim N_0^{-2/5}$. The number-dependence of $\Omega_{\nu}$ is in reasonable agreement with the numerical data. Real-time simulations further confirm that high-frequency oscillations of the condensate are required for vortex production at the same $\Omega_{\nu}$ found using the imaginary-time approach.
The above analysis does not clearly identify the instability of the surface modes with the penetration of vortices into the cloud, however. Further insight may be gained by considering the free energy $F$ of a single vortex in a cylindrical trap, relative to that of the vortex-free state, as a function of the vortex displacement $\rho$ from the trap center [@Fetter1]. In the TF limit, $F$ vanishes for $\rho^2=R^2$ and $\rho^2=R^2-(5/2\Omega)\ln(R/\xi)$, corresponding to the right and left roots of the free energy barrier to vortex generation, respectively. As $\Omega$ increases, the energy barrier at the surface narrows but remains finite. Yet, as discussed above, the hydrodynamic excitations begin to break down at a radius $\tilde{\rho}\approx R-\delta$. Vortices will therefore spontaneously penetrate the cloud when the angular frequency exceeds $$\Omega_{\nu}={5{\root 3\of 2}\over 4R^{2/3}}\ln\left({R\over\xi}\right)\omega_x,$$ since the barrier effectively disappears when $\tilde{\rho}\leq\rho$. Thus, the frequencies of nucleation and penetration have the same number-dependence and are defined by a single critical wavelength. Once the condensate contains vortices at a given $\Omega>\Omega_{\nu}$, the functional $F$ will again include a barrier to vortex penetration from the surface, reflecting the hydrodynamic stability of the vortex state. One may thus envisage a succession of multiple-vortex nucleation events at well-defined angular frequencies.
The stationary configurations of vortex arrays are shown as a function of applied rotation in Fig. \[rot\]. The condensate density is shown integrated down the axis of rotation $\hat{z}$, in order to mimic an [*in situ*]{} image of the cloud. While the vortices near the origin appear to have virtually isotropic cores, those in the vicinity of the surface are generally wider and are noticeably distorted. The anisotropy is due in part to the divergence of the coherence length as the density decreases, but is mostly the result of vortex curvature. Off-center vortices are not fully aligned with the axis of rotation $\hat{z}$, since they terminate at normals to the ellipsoidal condensate surface. Far from the origin, the vortex structure approaches that of a half-ring pinned to the condensate surface.
The symmetries of the confining potential impose constraints on the vortex arrays that may be produced by rotating anisotropic traps. Stationary configurations are found to always have the inversion symmetry $(x,y,z)\to(-x,-y,z)$. As shown in Fig. \[rot\], the number of vortices is at least four and is even for each array; real-time simulations demonstrate that vortices with the same circulation are nucleated in pairs at inversion-related points on the surface. No vortex is found at the origin, since the odd number of remaining vortices cannot be distributed symmetrically. At low angular velocities, therefore, the array tends to approximate a regular tetragonal lattice. As the total number of vortices increases with $\Omega$, however, a different pattern begins to emerge. While a triangular array is inconsistent with the twofold trap symmetries, it is more efficient for close packing; this geometry is favored for vortices near the rotation axis of rapidly rotating vessels of superfluid helium [@Packard1; @Tkachenko; @Campbell]. If vortices in trapped condensates could be made sufficiently numerous, they would likely form a near-regular triangular array but with the central vortex absent.
In summary, the critical frequencies for the zero-temperature nucleation of vortices $\Omega_{\nu}$ in rotating anisotropic traps are obtained numerically, and are found to be larger than the vortex stability frequencies $\Omega_c$. The number-dependence of $\Omega_{\nu}$ is consistent with a critical-velocity mechanism for vortex production. The structures of vortex arrays are strongly affected by trap geometry, but approach triangular at large densities.
The authors are grateful to A. L. Fetter, R. L. Pego, S. L. Rolston, J. Simsarian, and S. Stringari for numerous fruitful discussions, and to P. Ketcham for assistance in generating the figures. This work was supported by the U.S. office of Naval Research.
M. H. Anderson [*et al.*]{}, Science [**269**]{}, 198 (1995); D. S. Jin [*et al.*]{}, Phys. Rev. Lett. [**77**]{} 420 (1996).
K. B. Davis [*et al.*]{}, Phys. Rev. Lett. [**75**]{}, 3969 (1995).
C. C. Bradley, C. A. Sackett, R. G. Hulet, Phys. Rev. Lett. [**78**]{}, 985 (1997).
See http://amo.phy.gasou.edu/bec.html for information on recent experiments.
B. Jackson, J. F. McCann, and C. S. Adams, Phys. Rev. Lett. [**80**]{}, 3903 (1998).
T. Winiecki, J. F. McCann, and C. S. Adams, Phys. Rev. Lett. [**82**]{}, 5186 (1999); ibid, e-print: cond-mat/9907224.
R. Dum, J. I. Cirac, M. Lewenstein, and P. Zoller, Phys. Rev. Lett. [**80**]{}, 2972 (1998).
R. J. Marshall, G. H. C. New, K. Burnett, and S. Choi, Phys. Rev. A [**59**]{}, 2085 (1999).
B. M. Caradoc-Davies, R. J. Ballagh, and K. Burnett, Phys. Rev. Lett. [**83**]{}, 895 (1999).
K.-P. Marzlin, W. Zhang, and E. M. Wright, Phys. Rev. Lett. [**79**]{}, 4728 (1997); K.-P. Marzlin and W. Zhang, Phys. Rev. A [**57**]{}, 3801 (1998); ibid, 4761 (1998).
M. Holland and J. Williams, Nature [**401**]{}, 568 (1999).
M. R. Matthews [*et al.*]{}, Phys. Rev. Lett. [**83**]{}, 2498 (1999).
F. Dalfovo [*et al.*]{}, Phys. Rev. A [**56**]{}, 3840 (1997).
A. L. Fetter, Int. J. Mod. Phys. B [**13**]{}, 643 (1999); ibid., J. Low Temp. Phys. [**113**]{}, 189 (1998); A. A. Svidzinsky and A. L. Fetter, Phys. Rev. A [**58**]{}, 3168 (1998); ibid., e-print: cond-mat/9811348 (July 30, 1999); M. Linn and A. L. Fetter, e-print: cond-mat/9907045 (July 2, 1999); ibid., e-print: cond-mat/9906139 (June 9, 1999).
E. J. Muller, P. M. Goldbart, and Y. Lyanda-Geller, Phys. Rev. A [**57**]{}, R1505 (1998).
H. Pu, C. K. Law, J. H. Eberly, and N. P. Bigelow, Phys. Rev. A [**59**]{} 1533 (1999).
D. L. Feder, C. W. Clark, and B. I. Schneider, Phys. Rev. Lett. [**82**]{}, 4956 (1999).
D. A. Butts and D. S. Rokhsar, Nature [**397**]{}, 327 (1999).
G. A. Williams and R. E. Packard, Phys. Rev. Lett. [**33**]{}, 280 (1974); E. J. Yarmchuck, M. J. V. Gordon, and R. E. Packard, ibid [**43**]{}, 214 (1979); E. J. Yarmchuck and R. E. Packard, J. Low. Temp. Phys. [**46**]{}, 479 (1982).
V. K. Tkachenko, Sov. Phys. JETP [**22**]{}, 1282 (1966).
L. J. Campbell and R. M. Ziff, Phys. Rev. B [**20**]{}, 1886 (1979).
R. E. Packard and T. M. Sanders, Phys. Rev. A [**6**]{}, 799 (1972).
C. A. Jones, K. B. Khan, C. F. Barenghi, and K. L. Henderson, Phys. Rev. B [**51**]{}, 16174 (1995).
A. L. Fetter, J. Low Temp. Phys. [**16**]{}, 533 (1974).
R. J. Donnely, [*Quantized Vortices in Helium II*]{} (Cambridge University Press, Cambridge, 1991).
I. Aranson and V. Steinberg, Phys. Rev. B [**54**]{}, 13072 (1996).
E. P. Gross, Nuovo Cimento [**20**]{}, 454 (1961); L. P. Pitaevskii, Zh. Eksp. Teor. Fiz. [**40**]{}, 646 (1961) \[Sov. Phys. JETP [**13**]{}, 451 (1961)\].
T. Frisch, Y. Pomeau, and S. Rica, Phys. Rev. Lett. [**69**]{}, 1644 (1992).
S. Burkhart, M. Bernard, O. Avenel, and E. Varoquaux, Phys. Rev. Lett. [**72**]{}, 380 (1994).
C. Raman [*et al.*]{}, Phys. Rev. Lett. [**83**]{}, 2502 (1999).
M. Kozuma [*et al.*]{}, Phys. Rev. Lett. [**82**]{}, 871 (1999).
B. I. Schneider and D. L. Feder, Phys. Rev. A [**59**]{}, 2232 (1999).
R. E. Packard, Rev. Mod. Phys. [**70**]{}, 641 (1998).
F. V. Kusmartsev, Phys. Rev. Lett. [**76**]{}, 1880 (1996).
E. Lundh, C. Pethick, and H. Smith, Phys. Rev. A [**55**]{}, 2126 (1997).
A. L. Fetter and D. L. Feder, Phys. Rev. A [**58**]{}, 3185 (1998).
U. Al Khawaja, C. J. Pethick, and H. Smith, Phys. Rev. A [**60**]{}, 1507 (1999).
[lccc]{}
$n$ & $\mu$ ($\hbar\omega_x$) & $E$ ($\hbar\omega_x$) & $\langle L_z\rangle$ ($\hbar$)\
$0$ & $19.874$ & $14.339$ & $0.779$\
$1$ & $19.758$ & $14.196$ & $1.611$\
$2$ & $19.624$ & $14.139$ & $2.355$\
$3$ & $19.553$ & $14.130$ & $2.864$\
$4^*$ & $19.517$ & $14.134$ & $3.157$\
[**Black hole entropy calculations based on symmetries\
**]{}
**Olaf Dreyer[^1], Amit Ghosh[^2] and Jacek Wiśniewski[^3]**
Center for Gravitational Physics and Geometry, Department of Physics,\
The Pennsylvania State University, University Park, PA 16802-6300, USA\
Introduction
============
A microscopic derivation of black hole entropy has been one of the greatest theoretical challenges for any candidate quantum theory of gravity. String theory in case of some extremal and [*near*]{}-extremal black holes [@Strom] and canonical quantum gravity in case of general non-rotating black holes [@Abhay] have produced very interesting results in this direction.
As an alternative to both these approaches, a set of very attractive ideas was suggested by Strominger and Carlip [@Strom2; @Carlip1; @Carlip2] over the last few years. Motivated by some earlier works [@BH; @BH1; @Regge] on the relation between symmetries and Hamiltonians, these authors argued that states of a quantum black hole should belong to a multiplet of a representation of a suitable Lie algebra. Counting the number of states in the multiplet would then provide the black hole entropy. The Virasoro algebra has been proposed as a natural candidate for symmetries in this context.
An attractive feature of these alternative approaches is that they are not tied to the details of any specific model of quantum gravity. Even more strikingly, the central objects in this construction, namely the Virasoro algebra, central charge etc. appear already at the [*classical*]{} level through the Poisson bracket algebra. The Planck length in the expression of the entropy arises only from replacing Poisson brackets by appropriate quantum commutators. Being essentially classical, the scheme is quite robust and in principle applicable to black holes in any space-time dimension.
The first work [@Strom2] applies this idea to 2+1 dimensions in the context of the BTZ black hole [@BTZ]. The symmetries, however, are taken from a previous analysis [@BH1] which is tailored to *asymptotic infinity* rather than black hole horizon. Therefore, it is not apparent why these symmetries are relevant for the black hole in the space-time interior. For example, in asymptotically flat, 4-dimensional space-times, the symmetry group at (null) infinity is always the Bondi-Metzner-Sachs group, irrespective of the interior structure of the space-time. Thus, the results of [@Strom2] are equally applicable to a star that has similar asymptotic behavior as that of the black hole. Subsequently, Carlip improved on this idea significantly by making the symmetry analysis in the near-horizon region. Conceptually this approach is much more satisfactory in that the black hole geometry is now at the forefront. However, at the technical level, this work [@Carlip2] appears to have some important limitations. The purpose of the present paper is to elucidate and discuss these technical problems in some detail, and then to present a consistent calculation which correctly implements the general ideas of [@Strom2; @Carlip2] by a careful treatment of all the relevant technical issues.
The organization of the paper is as follows. In section 2 we discuss the technical framework set up in [@Carlip2] and point to the difficulties that arise in the implementation of the ideas mentioned above. In section 3 we investigate two different sets of symmetries. In 3.1 we consider symmetries that are defined intrinsically on the horizon and see if a central charge can be obtained. (To find these symmetries, as suggested in [@Carlip2], we use the isolated horizon framework [@letter].) We find that the answer is in the negative. In 3.2, we then consider potential symmetry vector fields defined in a neighborhood of the horizon as in [@Carlip2]. In 3.3, we find the corresponding Hamiltonians, and calculate the corresponding Poisson brackets. From these we read off the central charge. We conclude in 3.4 with a calculation of the entropy. The discussion of the last three sub-sections can be regarded as a careful reworking of the ideas introduced by Strominger and Carlip. We find that the entropy is indeed proportional to the area but the proportionality factor differs from the one of Bekenstein and Hawking by a factor of $\sqrt{2}$. Perhaps more importantly, although the Poisson brackets between the Hamiltonians are well-defined, the symmetry vector fields underlying this calculation fail to admit a well-defined limit to the horizon. These issues are discussed in section 4. Some technical details relevant to section 3.1 are given in the appendix.
For concreteness, we work in 2+1 dimensions. However, the framework should admit a straightforward generalization to arbitrary space-time dimensions.
Re-examination of the symmetry based calculations in 3 dimensions
=================================================================
In the standard conventions (with $8G =1$), the line-element of the BTZ black hole in the [*Eddington-Finkelstein*]{} coordinates is given by $$ds^2=-N^2dv^2+2\,dv\,dr+r^2(d\phi+N^\phi dv)^2,\qquad N^2=-M+\frac{r^2}{\ell^2}+\frac{J^2}{4r^2},\quad N^\phi=-\frac{J}{2r^2}.\label{metric}$$ Here, $J$ and $M$ are two real parameters and $\ell$ is related to the negative cosmological constant as $\Lambda\ell^2=-1$. The black hole has a Killing-horizon at $r=r_{+}$ defined by $N^2(r_+)=0$, or $$r_+^2=\frac{M\ell^2}{2}\left[1+\left(1-\left(\frac{J}{M\ell}\right)^2\right)^{1/2}\right],\qquad |J|\le M\ell.$$ For the purpose of calculations it is convenient to introduce a Newman-Penrose like basis in 2+1 dimensions which has two null vectors $l^a$ and $n^a$ and a space-like vector $m^a$ (all real). They satisfy the relations $$l\cdot l=n\cdot n=l\cdot m=n\cdot m=0,\qquad -l\cdot n=m\cdot m=1.\label{basis}$$ The 2+1 dimensional metric can be expressed in such a basis as $g_{ab}=-2l_{(a}n_{b)}+m_am_b$. The corresponding inverse metric is $g^{ab}=-2l^{(a}n^{b)}+m^am^b$. In the rest of the paper we will assume that we have chosen the triad $l, n$, and $m$ in such a way that the vectors $l$ and $m$ are tangent to the horizon at the horizon. For the metric (\[metric\]) a convenient choice of the basis vector fields is $$l=\partial_v+\frac{N^2}{2}\,\partial_r-N^\phi\partial_\phi,\qquad n=-\partial_r,\qquad m=\frac{1}{r}\,\partial_\phi,\label{vectors}$$ and the corresponding one-forms that span the dual-basis are $$l=-\frac{N^2}{2}\,dv+dr,\qquad n=-dv,\qquad m=rN^\phi dv+r\,d\phi. \label{1form}$$ The covariant derivatives of the one-forms, like $\nabla_al_b$, can be expressed solely in terms of the one-forms and the so-called Newman-Penrose coefficients (see e.g. [@Stewart]; an exposition of the formalism in 2+1 dimensions can be found in the appendix of [@ihtpo])
$$\nonumber
\nabla_al_b=-\epsilon n_al_b+\tilde\kappa n_am_b-\gamma l_al_b
+\tau l_am_b+\alpha m_al_b-\rho m_am_b$$
$$\label{diffid2}
\nabla_an_b=\epsilon n_an_b-\pi n_am_b+\gamma l_an_b-\nu l_am_b
-\alpha m_an_b+\mu m_am_b$$
$$\nonumber
\nabla_am_b=\tilde\kappa n_an_b-\pi n_al_b+\tau l_an_b -\nu
l_al_b-\rho m_an_b+\mu m_al_b$$
where, for the metric (\[metric\]) and the tetrad (\[vectors\]) the coefficients are given by $$\nonumber
\epsilon={r\over\ell^2}-r(N^\phi)^2,\ \rho = -{1\over 2r}N^2,\ \mu=-{1\over r}$$ $$\label{coeff}
\alpha\ =\ \tau\ =\ \pi = N^\phi$$ $$\nonumber
\tilde\kappa\ =\ \nu\ =\ \gamma = 0.$$ At the horizon $\epsilon(r_+)=\kappa$, where $\kappa$ is the surface gravity of the black hole.
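For readers who wish to verify these expressions, the horizon data quoted above can be checked symbolically: $N^2$ vanishes at $r_+$, the coefficient $\epsilon$ coincides with $\tfrac12\partial_rN^2$, and hence at the horizon with the surface gravity $\kappa=(r_+^2-r_-^2)/(\ell^2 r_+)$, where $r_-$ is the inner horizon radius. A minimal sympy sketch (the variable names and the numerical values used for the final check are ours):

```python
import sympy as sp

r, M, J, ell = sp.symbols('r M J ell', positive=True)

N2   = -M + r**2/ell**2 + J**2/(4*r**2)          # lapse squared, Eq. (metric)
Nphi = -J/(2*r**2)                               # shift, Eq. (metric)
disc = sp.sqrt(1 - (J/(M*ell))**2)
rp   = sp.sqrt(M*ell**2/2*(1 + disc))            # outer horizon radius r_+
rm   = sp.sqrt(M*ell**2/2*(1 - disc))            # inner horizon radius r_-

eps   = r/ell**2 - r*Nphi**2                     # Newman-Penrose coefficient epsilon(r)
kappa = sp.Rational(1, 2)*sp.diff(N2, r)         # (1/2) dN^2/dr, the surface gravity at r_+

vals = {M: 3, J: 1, ell: 2}                      # any values with |J| < M*ell will do
checks = [N2.subs(r, rp),                        # horizon condition
          eps - kappa,                           # epsilon(r) equals (1/2) dN^2/dr identically
          kappa.subs(r, rp) - (rp**2 - rm**2)/(ell**2*rp)]
for expr in checks:
    print(sp.simplify(expr.subs(vals)))          # each line prints 0
```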
With these preliminaries out of the way, let us now apply the general ideas of [@Carlip2] to this 3-dimensional black hole. The BTZ space-time admits a global Killing vector $$\chi=\partial_v-\Omega\,\partial_\phi,\qquad \Omega=N^\phi(r_+). \label{killing}$$ As in [@Carlip2] we now define another vector field $\rho^a$ which is given by $$\nabla_a\chi^2=-2\kappa\,\rho_a,\qquad \rho^a=\frac{r}{r_+}\left( \partial_v+N^2\partial_r-N^\phi\partial_\phi\right).\label{rho}$$ It follows that $\chi\cdot\rho=0$ and ${\cal L}_\chi\rho^a=0$ everywhere. For convenience we express both vector fields $\chi,\rho$ in the Newman-Penrose basis up to order $(r-r_+)^2$ terms $$\begin{aligned}
&&\chi^a=l^a+(r-r_+)\left(\kappa\,n^a-2\Omega\,m^a\right)+{\cal O}(r-r_+)^2,\nonumber\\
&&\rho^a=\frac{r}{r_+}\,l^a-\kappa\,(r-r_+)\,n^a+{\cal O}(r-r_+)^2. \label{chirho}\end{aligned}$$ Clearly, at the horizon $\chi\heq\rho\heq l$. Two other useful identities are $\nabla_a\rho_b=\nabla_b\rho_a$ and $\chi^a\nabla_a\chi_b=\kappa\rho_b$, which follow from the definition (\[rho\]) of $\rho$ and the fact that $\chi$ is a Killing vector.
As in [@Carlip2], the classical phase-space can be taken to be the space of solutions of Einstein’s equations. Each space-time configuration, i.e. each point in the phase-space, contains an inner as well as an outer boundary. Moreover, all space-time configurations in the neighborhood of the inner boundary are BTZ-like. To achieve this, [@Carlip2] uses a set of boundary conditions which ensure that all space-times admit a Killing vector $\chi$ in a neighborhood of the inner boundary and possess the same ‘near-horizon geometry’. More precisely, it requires $$\chi^a\chi^b\,\delta g_{ab}\heq 0,\qquad \chi^at^b\,\delta g_{ab}\heq 0,\label{bc}$$ where $t^a$ is any space-like vector tangent to the inner boundary ($t\cdot\chi=0$). The hat over the equality sign here means that the above equation holds on the horizon. Clearly the vector field $\xi$ which preserves these boundary conditions (\[bc\]) under diffeomorphisms has to be tangent to the horizon. Keeping the same notation as in [@Carlip2], let us take the vector field to be $$\xi^a=T\chi^a+R\rho^a,\label{vectorf}$$ where $R$ and $T$ are arbitrary functions. By demanding that (\[vectorf\]) preserves (\[bc\]) under diffeomorphisms one puts restrictions on $R$ and $T$. These are derived in [@Carlip2] (cf. eq. (4.8)), $$R=\frac{1}{\kappa}\,\frac{\chi^2}{\rho^2}\,DT,\qquad D\equiv\chi^a\nabla_a.\label{rt}$$ A vector field satisfying (\[rt\]) can then be said to generate symmetries in the precise sense of (\[bc\]).
Let us now check the closure of the Lie-algebra of these vector fields. It is at this point that the analysis of [@Carlip2] appears to be flawed. The errors arise at three levels:\
a) As noted in [@Carlip2] the requirement that the Lie bracket of symmetry vector fields should close imposes a new condition $${\cal L}_\rho T\,\heq\, 0.\label{newcond}$$ In [@Carlip2] this condition was imposed [*at the horizon*]{}. However, at the horizon $\rho^a\heq \chi^a\heq l$ and hence, (\[newcond\]) reads $DT\heq 0$. Then the main steps in the calculations of [@Carlip2] fail to go through. In particular, the central charge is expressed in terms of $DT$ at the horizon and therefore vanishes identically. This in turn implies that the entropy also vanishes identically. While the restriction on $DT$ has been noted explicitly in [@Carlip2], its (obvious) consequences for the value of the central charge and the entropy are overlooked.\
b) Furthermore, it is [*not*]{} sufficient to impose (\[newcond\]) only [*at*]{} the horizon; closure will fail unless it holds in a neighborhood.\
c) Later, for explicit calculations, a specific function $T$ is chosen in [@Carlip2] (cf. eq. (5.6)). Unfortunately, this function does not satisfy the condition (\[newcond\]) required in the earlier part of the analysis in [@Carlip2].
In other words, although the boundary conditions (\[bc\]) and (\[newcond\]) are reasonable, their technical implementation, as presented in [@Carlip2], is incorrect. In the next section we will propose an implementation of the boundary conditions that does not suffer from these problems.
New Sets of Symmetries
======================
The purpose of this section is to present a systematic analysis which is free of the technical flaws discussed above. However, before embarking on this discussion, in section 3.1 we first investigate a separate issue. In Carlip’s analysis, the symmetry vector fields are defined in a neighborhood of the horizon. From general, classical considerations, one might expect that it should be possible to focus just on the horizon structure and consider symmetry vector fields defined intrinsically on the horizon. We consider this possibility in Section 3.1 and show that in this case the central charge vanishes. Thus the Carlip-type analysis can *not* be carried out with symmetries defined intrinsically on the horizon. This result suggests that, although the analysis appears to be classical, the origin of the central charge —and hence entropy— can not be captured in a classical analysis. In the remainder of the section, we consider symmetry vector fields more closely related to those of [@Carlip2] and improve on that analysis.
Geometrical symmetries
----------------------
A new framework that is naturally suited for the analysis of symmetries on the horizon is now available – the so-called ‘isolated horizons’. This is a notion that captures the minimum structure intrinsic to the horizon to describe an equilibrium state of a black hole. It allows, however, for matter and radiation in an arbitrary neighborhood of the black hole, as long as none crosses the horizon. As suggested in [@Carlip2], this framework is well-suited for the Carlip approach to entropy. A comprehensive description of the isolated horizons framework is given in [@letter; @afk]. Isolated horizons in 2+1 dimensions are discussed in detail in [@ihtpo].
It is natural to define symmetries as maps which preserve the basic horizon structure, by which we mean the induced metric and a class of null generators. More details are given in the Appendix. Here let us just state that a vector field which preserves that structure must be tangent to the horizon, i.e. of the form $$\xi^a\heq A\,l^a+B\,m^a,\label{xi}$$ where the functions $A$ and $B$ are restricted to be $$\begin{aligned}
A & = & C(v_{-})+{\mbox{const}}. \cdot v\\
B & = & {\mbox{const}}.\end{aligned}$$ The coordinates $v$ and $v_{-}$ are defined by the relations $n=-{\rm d}v$, $m=\frac{1}{r_{+}} \frac{\partial}{\partial \phi}$, and $v_{\pm} = v \mp \phi/\Omega$. It is easy to see that the algebra of these vector fields (\[xi\]) closes. Now the boundary conditions at the isolated horizon induce a natural symplectic structure in the phase space, where the phase space consists of all possible space-time configurations which admit a fixed isolated horizon. The symplectic structure can be used to evaluate the Poisson brackets between any two phase space functionals.
It is not difficult to check that the vector field (\[xi\]) is Hamiltonian. For details see the Appendix. The Poisson brackets of the corresponding Hamiltonians close on-shell, $$\{H_{\xi_1},H_{\xi_2}\}\heq H_{[\xi_1,\xi_2]}.\label{close}$$ Hence, the central charge is zero. This result is not unexpected since our analysis is entirely classical and typically the central charge arises from the failure of the classical symmetries to be represented in the quantized theory. This shows that, in general, for symmetries represented by smooth vector fields on the horizon, the ideas of [@Strom2; @Carlip2] do not go through. If one wishes to use smooth fields —as is most natural at least in the classical theory— the central charge can arise only from quantization and the analysis would be sensitive to the details of the quantum theory, such as the regularization scheme used, etc. If the original intent of the ideas of [@Strom2; @Carlip2] is to be preserved, one must consider symmetries represented by vector fields which do not admit smooth limits to the horizon; in a consistent treatment, the use of “stretched horizons” [@Carlip2] is not optional but a necessity. Perhaps this is the price one has to pay to transform an essentially quantum analysis into the language of classical Hamiltonian theory.
Finally, note that any reasonable local definition of a horizon should lead to the above conclusions since we have made very weak assumptions in this sub-section.
Extended notion of symmetries
-----------------------------
Let us now return to the discussion of Section 2 and consider symmetry vector fields defined in a neighborhood of the horizon. Thus, we will now use the stronger set of conditions: (\[bc\]) together with the requirement that the closure condition (\[newcond\]) be satisfied [*everywhere*]{} [^4]. This guarantees that the Lie-algebra of the vector fields (\[vectorf\]) closes, $$[\xi_{T_1},\xi_{T_2}]={\cal L}_{\xi_{T_1}}\xi_{T_2} =\xi_{T_1DT_2-T_2DT_1},\qquad \xi_T= R\rho+T\chi,$$ where $R$ is determined in terms of $T$ as in (\[rt\]). One is to make use of the facts that ${\cal L}_\chi\rho={\cal L}_\rho T={\cal L}_\rho R=0$. The condition (\[newcond\]), however, restricts the choice of the vector fields everywhere. To solve for the vector fields we consider a ‘stretched’ horizon at $r=r_++\varepsilon$ as the inner boundary. The solutions of the form $$T_n \sim f_n(r)\, \exp(in\Omega v_{+}) \label{ansatz}$$ are especially interesting because they furnish a Diff($S^1$), provided $f_nf_m\sim f_{n+m}$. However, the condition (\[newcond\]) is to be imposed carefully because of the $(r-r_+)$ terms in the vector field $\rho$ (\[rho\]), $$\rho^a\nabla_a T\sim\left(\partial_{v_+}+N^2\partial_r\right)T=0.$$ Clearly, the radial derivative of $T$ blows up at the horizon. With the ansatz (\[ansatz\]) there is a unique solution for $T_n$ in the neighborhood of the horizon, $$T_n^\varepsilon=\frac{1}{2\Omega}\,\exp\Bigl(-\frac{in\Omega}{\kappa}\,\ln(r-r_+)+in\Omega v_+\Bigr).\label{solved}$$ The normalization of $T$ is so chosen that the vector fields $\xi$ form a Diff($S^1$) algebra $$[\xi^\varepsilon_{T_n},\xi^\varepsilon_{T_m}]=i(n-m)\,\xi^\varepsilon_{T_{m+n}} \label{diffs1}$$ in the neighborhood of the horizon.
Notice that because of (\[solved\]) the vector fields $\xi$ [*do not have a well defined limit at the horizon*]{}. They are defined only at the stretched horizon and oscillate wildly in the limit $r\to r_+$. Also the radial derivative of $\xi$ blows up, as expected from the condition (\[newcond\]). So one has to take great care in evaluating the Poisson brackets and Hamiltonians – now [*one cannot ignore terms which are of order*]{} ${\cal O}(r-r_+)$, especially in the presence of radial derivatives in the Poisson brackets. In fact, more terms contribute to the Poisson bracket, and a thorough examination of the entire calculation is needed.
Hamiltonian and Poisson bracket algebra
---------------------------------------
The existence of the Hamiltonian under the boundary conditions (\[bc\]) is shown in [@Carlip2]. The surface Hamiltonian is (the bulk Hamiltonian is zero by constraints) H\_[\_n]{}\^=[12]{}\_[S\_]{}\_[abc]{} \^b\_n\^[c]{}.\[hamilt\]
The phase space described in section 2 is associated with a conserved symplectic current [@Wald]. The corresponding symplectic structure may be used to evaluate the Poisson brackets between any two functionals in the phase-space. On shell, the symplectic structure can be written as a sum of boundary terms only. However, one may choose appropriate fall-off conditions of the fields at asymptotic infinity such that the contribution from the outer boundary vanishes. In the present example the fields approach their asymptotic AdS values ‘strongly’. In that case, given two Hamiltonian vector fields $\xi_1$ and $\xi_2$, the Poisson bracket between the two corresponding Hamiltonian functionals is given solely by the terms at the inner boundary [@Wald], $$\{H_{\xi_1},H_{\xi_2}\}=\oint_{S_\varepsilon}\Bigl(\xi_2\cdot\Theta[g,{\cal L}_{\xi_1}g]-\xi_1\cdot\Theta[g,{\cal L}_{\xi_2}g]-\xi_2\cdot( \xi_1\cdot L)\Bigr),$$ where $2\pi\Theta_{a}[g,\delta
g]=\epsilon_{ab}[g^{bc}\nabla_c(g_{de} \delta
g^{de})-\nabla_c\delta g^{bc}]$ is the one-form symplectic potential and $L$ is the three-form Lagrangian density. Making use of Einstein’s equations $R_{ab}=2\Lambda g_{ab}$ we can express the Poisson bracket explicitly in terms of the vector fields {H\_[\_1]{},H\_[\_2]{}}=[12]{}\_[S\_]{}\_[abc]{} .\[pb\]
Our purpose is to find the terms proportional to $n^3$ in the Poisson bracket (\[pb\]) which give rise to a non-trivial central extension to the Poisson bracket algebra. The Hamiltonian (\[hamilt\]) contains terms only linear in $n$. The central charge can then be read off from the $n^3$ terms with appropriate normalizations. After a long calculation we arrive at the following expression \_[0]{}\^3\_[m+n]{}[a\_2]{} + .\[n3\] Notice that although the vector fields (\[solved\]) do not have a smooth limit as $r\to r_+$ the Hamiltonian and the Poisson bracket have well defined limits.
Entropy arguments
-----------------
According to the standard normalization (up to linear order terms in $n$), $$\lim_{\varepsilon\to 0}\{H^\varepsilon_{\xi_n},H^\varepsilon_{\xi_m}\}\Big|_{n^3}=i\,\frac{c}{12}\,n^3\,\delta_{n+m,0},\label{virasoro}$$ the central charge can be read off from the $n^3$-term in the Poisson bracket (\[pb\]), $$c=24\,\frac{a_\Delta\Omega}{\pi\kappa}.\label{cent}$$ The zero mode of the Hamiltonian too can be read off from (\[hamilt\]) and is given by $$\lim_{\varepsilon\to 0}H^\varepsilon_{\xi_0}= \frac{a_\Delta\kappa}{2\pi\Omega}.\label{zero}$$ Hence, by the Cardy formula [@Cardy], the entropy is $$S=2\pi\sqrt{\frac{c\,H_{\xi_0}}{6}}=2\sqrt{2}\,a_\Delta,$$ which agrees with the Bekenstein-Hawking entropy (in units $8G=\hbar=1$) up to a factor of $\sqrt 2$.
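A quick numerical cross-check of the $\sqrt 2$ factor, assuming the expressions (\[cent\]) and (\[zero\]) above together with the Cardy formula $S=2\pi\sqrt{c\,H_{\xi_0}/6}$ (the numerical values of $a_\Delta$, $\kappa$ and $\Omega$ below are arbitrary placeholders and drop out of the ratio):

```python
import numpy as np

a_Delta, kappa, Omega = 7.3, 0.41, 0.18          # arbitrary placeholder values

c    = 24 * a_Delta * Omega / (np.pi * kappa)    # central charge, Eq. (cent)
H0   = a_Delta * kappa / (2 * np.pi * Omega)     # classical zero mode, Eq. (zero)
S    = 2 * np.pi * np.sqrt(c * H0 / 6)           # Cardy formula
S_BH = 2 * a_Delta                               # Bekenstein-Hawking value for 8G = hbar = 1

print(S / S_BH, np.sqrt(2))                      # the classical result exceeds S_BH by sqrt(2)

# with the 'quantum' expectation value a_Delta*kappa/(4*pi*Omega) discussed in section 4,
# the same formula reproduces the Bekenstein-Hawking value exactly:
H0_quantum = a_Delta * kappa / (4 * np.pi * Omega)
print(2 * np.pi * np.sqrt(c * H0_quantum / 6) / S_BH)   # -> 1.0
```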
It is worth noting here that Carlip’s central extension (see formula 5.10 of [@Carlip2]) and zeroth-mode Hamiltonian have the same numerical factor as ours. Nevertheless, he argues that one should use a different, so-called effective central extension, and obtains the right numerical factor for the entropy. In our case this strategy fails since we have an extra factor of $\Omega/\kappa$ or its inverse in front of our expressions. It should be stressed, however, that this factor is rigidly fixed by the requirements that the symmetry algebra closes, that it gives a Diff$(S^1)$, and that the symmetry vector fields are periodic in the coordinate $\phi$ with period $2\pi$. Moreover, following the arguments of [@Kutasov; @Carlip1], since within a classical framework it is impossible to determine the value of the Hamiltonian in the ground state of the corresponding quantum theory, the value of the central charge that is to be used in the Cardy formula is not determined classically.
Discussion
==========
The entropy calculation of [@Strom2] faces certain conceptual limitations because the asymptotic symmetries may be completely different from the horizon symmetries. Both the central charge (\[cent\]) and the Hamiltonian (\[hamilt\]) are quite different from the ones found in [@BH1] at asymptotic infinity. Thus, one needs an analysis restricted to the neighborhood of the horizon. In [@Carlip2], Carlip recognized this limitation and carried out a Hamiltonian analysis using symmetries defined near the horizon. However, as we saw in section 2, the resulting analysis has certain technical flaws. In particular, the vector fields which correctly incorporate the ideas laid out at the beginning of that paper are quite different from the ones used in the detailed analysis later on.
In section 3 we made a proposal to overcome those technical problems and obtained a consistent formulation which implements the previous ideas. However, now the symmetry vector fields (\[vectorf\]) do not have a well-defined limit at the horizon. Nonetheless both the Hamiltonians and their Poisson brackets are well-defined. Furthermore, there is a central charge which, following the reasoning of [@Strom2; @Carlip2], implies that the entropy is proportional to the area. While the argument has attractive features, its significance is not entirely clear because the vector fields generating the relevant symmetries fail to admit well-defined limits to the horizon. Presumably, this awkward feature is an indication that, in a fully coherent and systematic treatment, the central charge would really be quantum mechanical in origin and could be sensitive to certain details of quantization, such as the regularization scheme used. Indeed, in the detailed analysis, we had to first evaluate the Poisson bracket and then take the limit $\epsilon \rightarrow 0$ (see expressions (\[zero\]) and also (\[virasoro\])), a step typical of quantum mechanical regularization schemes. Thus, it could well be that the awkwardness stems from the fact that, following [@Strom2; @Carlip2], we have attempted to give an essentially classical argument for a phenomenon that is inherently quantum mechanical.
This viewpoint is supported by our analysis of section 3.1 of symmetries corresponding to smooth vector fields. If one requires that vector fields generating symmetries be smooth at the horizon —a most natural condition in a fully classical setting— we found that the central charge would be *zero*! Thus, the fact that the vector fields do not admit a smooth limit to the horizon is essential to the Carlip-type analysis. The fact that one has to ‘push’ the analysis an $\epsilon$ away from the horizon indicates that the procedure may be a ‘short-cut’ for a more complete quantum mechanical regularization[^5].
This, however, raises some questions about the method in general: a) How satisfactory is the classical analysis and how seriously should one consider such vector fields? In particular, the role of such vector fields in terms of space-time geometry is far from obvious, since they are not even defined on the horizon. b) Why should this particular algebra be the focus of attention? c) Does the whole analysis suggest a rather transparent quantum mechanical regularization scheme and hence systematically constrain the quantum theory?
The fact that our final expression for the entropy differs from the standard Hawking-Bekenstein formula by a factor of $\sqrt 2$ also provides a test for quantum gravity theories. The value of $H_{\xi_0}$ appearing in Cardy’s formula is of a quantum mechanical nature. A classical calculation may not give the right numerical value for it. It then follows that a quantum theory of gravity will give the correct value for the entropy provided (a) it has classical general relativity as its low-energy limit, and (b) the expectation value of $H_{\xi_0}$ is $a_\Delta\kappa/4\pi\Omega$ (assuming $H_{\xi_{0}}$ is well defined in the quantum theory).
In spite of the limitations of this calculation, the final result *is* of considerable interest because it is not a priori obvious that all the relevant subtleties of the full quantum mechanical analysis can be compressed into a classical calculation simply by stretching the physical horizon a distance $\epsilon$ away, evaluating all the Poisson brackets and then taking the limit $\epsilon \rightarrow 0$ in the *final* expressions. Note, however, that a careful treatment of technical issues that were overlooked in [@Carlip2] was necessary to bring out these features. Indeed, our analysis provides the precise sense in which the original intention in [@Strom2; @Carlip2] of reducing the problem to a classical calculation is borne out in a technically consistent fashion.
Appendix
========
In this appendix we define what is called a weakly isolated horizon. It is a more general object than an isolated horizon; however, it is sufficient for our purpose of finding the symmetries of the horizon.
Let $\Delta$ be a null hypersurface and $l$ a future pointing null normal vector field on $\Delta$. We will denote by $[l]$ the equivalence class of null normals which differ from $l$ only by a multiplicative constant. Let us also introduce a one-form $\omega_a$ defined intrinsically on $\Delta$ by: $$\nabla_{\underleftarrow{a}} l_{b} = \omega_{a} l_{b}.$$ The arrow in the above equation denotes the pull-back to $\Delta$.
We call a pair $(\Delta, [l])$ a weakly isolated horizon if and only if:\
1. $\Delta$ is topologically $S^{1} \times \mathbf{R}$.\
2. The expansion $\Theta_{(l)}$ of $l$ vanishes.\
3. The equations of motion hold on $\Delta$. The stress-energy tensor $T_{ab}$ is such that $-T^{a}_{b}l^{b}$ is future directed and causal.\
4. $\mathcal{L}_{l} \omega = 0$, where $\omega$ is the one-form given by the equivalence class $[l]$.\
We will say that a vector field $\xi$ generates a symmetry of the horizon if the flow generated by $\xi$ on the phase-space preserves the basic structure of the horizon, namely $[l]$ and $q$. Here, $q_{ab} \equiv g_{\underleftarrow{ab}}$. Thus we impose, $$\begin{aligned}
\mathcal{L}_{\xi} l \in [l], \\
\mathcal{L}_{\xi} q_{ab} = 0.
\end{aligned}$$ It is not difficult to check that any vector field $\xi$ satisfying the above conditions can be written as $$\xi^{a} = A l^{a} + B m^{a},$$ where $A=C(v_{-}) + {\mbox{const}}. \cdot v, B={\mbox{const}}$, and $v_{\pm}, v$ are defined by the relations $n=-{\rm d}v$, $m=\frac{1}{r_{+}}
\frac{\partial}{\partial \phi}$, and $v_{\pm} = v \mp
\frac{\phi}{\Omega}$. As in the main text we assume that the vector field $m^a$ is tangent to the horizon. Note that $C(v_{-})$ must be a periodic function; therefore one can perform a Fourier analysis and find a set of modes $\xi_{n}$.
Now, using Hamiltonian considerations, one can find the symplectic structure and Hamiltonians in the phase-space of isolated horizons. For details see [@ihtpo]. The symplectic structure on-shell is equal to $$\Omega(\delta_{\xi},\delta) = -\frac{1}{\pi} \oint_{S_{\Delta}}
\left[ (\xi \cdot A_{I}) \delta e^{I} + (\xi \cdot e^{I}) \delta
A_{I} \right] + \tilde{\Omega}(\delta_{\xi}, \delta),$$ where $\tilde{\Omega}$ is a gauge term which is not important for the present analysis. $A$ and $e$ are the connection one-form and the orthonormal triad, respectively. Using this expression one can find the Hamiltonian corresponding to $\xi$ as well as the Poisson bracket of two Hamiltonians. The corresponding expressions are $$\begin{aligned}
H_{\xi} & = & -\frac{1}{\pi} \oint_{S_{\Delta}} (\xi \cdot A_{I}) e^{I} + C_{\Delta}, \\
\{H_{\xi_{1}},H_{\xi_{2}}\}& = & -\frac{1}{\pi} \oint_{S_{\Delta}} \left[ (\xi_{1}
\cdot A_{I}) \mathcal{L}_{\xi_{2}} e^{I} + (\xi_{1} \cdot e^{I})
\mathcal{L}_{\xi_{2}} A_{I} \right],
\end{aligned}$$ where $C_{\Delta}$ is zero except when $\xi$ contains a constant multiple of $l$. Then we have $C_{\Delta}[cl]=c(M+2r_{+}\kappa+J\Omega)$.
Subsequently, one can check that for any such symmetry vector fields $$\{ H_{\xi_{1}}, H_{\xi_{2}} \} \heq H_{[\xi_{1},\xi_{2}]},$$ and therefore there is no central extension of the corresponding algebra of conserved charges.
Acknowledgments {#acknowledgments .unnumbered}
---------------
We gratefully thank Abhay Ashtekar for discussions and various important suggestions. Also, we would like to thank Steve Carlip for valuable correspondence. The work of AG was supported by the National Science Foundation grant PHY95-14240 and the Eberly Research Funds of Penn State.
[99]{} A. Strominger and C. Vafa, Phys. Lett. B379 (1996) 99; J. Maldacena, PhD Thesis, hep-th/9607235. A. Ashtekar, J. Baez, A. Corichi and K. Krasnov, Phys. Rev. Lett. 80 (1998) 904; A. Ashtekar, J. Baez and K. Krasnov, gr-qc/0005126. A. Strominger, JHEP 9802 (1998) 009. S. Carlip, Phys. Rev. Lett. 82 (1999) 2828. S. Carlip, Class. Quant. Grav. 16 (1999) 3327. C. Teitelboim, Ann. Phys. (NY) 79 (1973) 542; 207; M. Henneaux and C. Teitelboim, Comm. Math. Phys. 98 (1985) 391. J. Brown and M. Henneaux, Comm. Math. Phys. 104 (1986) 207. T. Regge and C. Teitelboim, Ann. Phys. 88 (1974) 286. M. Bañados, C. Teitelboim and J. Zanelli, Phys. Rev. D 48 (1993) 1506. A. Ashtekar, C. Beetle, O. Dreyer, S. Fairhurst, B. Krishnan, J. Lewandowski and J. Wiśniewski, Phys. Rev. Lett. 85 (2000) 3564. A. Ashtekar, S. Fairhurst and B. Krishnan, Phys. Rev. D 62 (2000) 104025. J. Stewart, Advanced general relativity, Cambridge University Press, 1991. A. Ashtekar, O. Dreyer and J. Wiśniewski, Isolated Horizons in 2+1 Dimensions, in preparation. V. Iyer and R. M. Wald, Phys. Rev. D 50 (1994) 846. J. L. Cardy, Nucl. Phys. B 270 (1986) 186. D. Kutasov, N. Seiberg, Nucl. Phys. B 358 (1991) 600.
[^1]: [email protected]
[^2]: [email protected]
[^3]: [email protected]
[^4]: Strictly speaking, we only consider a neighborhood of the horizon where the vector field $\chi$ is Killing. In the BTZ example, however, it is globally Killing.
[^5]: Sometimes it is argued that only a classical central charge can give rise to the standard expression $a_\Delta/4G\hbar$ for the entropy, and that a central charge induced by a quantum anomaly can only give corrections to this expression. This, however, need not be the case. A central charge of truly quantum origin must be a dimensionless number and the only such possibility is $c\sim a_\Delta/G\hbar$. This would appear in the quantum Virasoro algebra as $[\hat L_n,\hat L_m]=(n-m)\hat L_{n+m}+\frac{c}{12}(n^3-n)\,\delta_{m+n,0}$, where the $\hat L_n$’s are now quantum operators. The eigenvalue of $\hat L_0$ should also be dimensionless ($\sim a_\Delta/G\hbar$). Thus, the correct semiclassical expression for the entropy can be reproduced even when the central charge comes from the quantum theory.
[**Cooling phonon modes of a Bose condensate with uniform few body losses** ]{}
I. Bouchoule^1\*^, M. Schemmer^1^, C. Henkel^2^
[**1**]{} Laboratoire Charles Fabry, Institut d’Optique, CNRS, Université Paris Sud 11,\
2 Avenue Augustin Fresnel, 91127 Palaiseau Cedex, France\
[**2**]{} Institute of Physics and Astronomy, University of Potsdam,\
Karl-Liebknecht-Str. 24/25, 14476 Potsdam, Germany\
\* [email protected]
Abstract {#abstract .unnumbered}
========
[**We present a general analysis of the cooling produced by losses on condensates or quasi-condensates. We study how the occupations of the collective phonon modes evolve in time, assuming that the loss process is slow enough so that each mode adiabatically follows the decrease of the mean density. The theory is valid for any loss process whose rate is proportional to the $j$th power of the density, but otherwise spatially uniform. We cover both homogeneous gases and systems confined in a smooth potential. For a low-dimensional gas, we can take into account the modified equation of state due to the broadening of the cloud width along the tightly confined directions, which occurs for large interactions. We find that at large times, the temperature decreases proportionally to the energy scale $mc^2$, where $m$ is the mass of the particles and $c$ the sound velocity. We compute the asymptotic ratio of these two quantities for different limiting cases: a homogeneous gas in any dimension and a one-dimensional gas in a harmonic trap.**]{}
Introduction {#sec:intro}
============
Despite their extensive use as quantum simulators or for quantum sensing, the temperatures reached in ultracold gases are not fully understood. Careful analyses of the cooling mechanisms have a long tradition in the cold atoms community, and the corresponding temperature limits constitute important benchmarks. The role of atom losses, however, is not yet elucidated, although such processes often play a role in quantum gas experiments. Different loss processes may occur. One-body processes are always present, their origin could be for instance a collision with a hot atom from the residual vapour. The familiar method of evaporative cooling involves losses that depend on the particle energy, a case we exclude in this paper. For clouds trapped in an internal state which is not the lowest energy state, such as low-field seekers in a magnetic trap, two-body (spin flip) collisions may provide significant loss. Finally, three-body processes where atoms recombine into strongly bound dimers are always present and are often the dominant loss mechanism. The effect of one-body losses for an ideal Bose gas was investigated in [@schmidutz_quantum_2014]. Loss processes involving more than one body are a source of heating for trapped thermal clouds, since they remove preferentially atoms in dense regions where the potential energy is low [@weber_three-body_2003]. Here we are interested in the effect of losses in Bose condensates or quasi-condensates, and we focus on low energy collective modes, whose physics is governed by interactions between atoms.
One-body losses have recently been investigated for one-dimensional (1D) quasi-condensates [@rauer_cooling_2016; @grisins_degenerate_2016; @johnson_long-lived_2017; @schemmer_monte_2017]. Quasi-condensates characterise weakly interacting 1D Bose gases at low enough temperature: repulsive interactions prevent large density fluctuations such that the gas resembles locally a Bose Einstein condensate (BEC), although it does not sustain true long-range order [@Petrov_2000; @AlKhawaja_2003c]. The above studies have focussed on low-energy excitations in the gas, the phonon modes. These correspond to hydrodynamic waves propagating in the condensate, where long-wavelength phase (or velocity) modulations are coupled to density modulations. On the one hand, losses reduce density fluctuations and thus remove interaction energy from each phonon mode. This decrease in energy, and thus of quasiparticle occupation, amounts to a cooling of the modes. On the other hand, the shot noise due to the discrete nature of losses feeds additional density fluctuations into the gas. This increases the energy per mode and amounts to heating. Theoretical studies [@grisins_degenerate_2016; @johnson_long-lived_2017; @schemmer_monte_2017], valid for one-body losses in 1D homogeneous gases, predict that as a net result of these competing processes, the system is cooling down in such a way that the ratio between temperature $k_B T$ and the chemical potential $\mu$ becomes asymptotically a constant (equal to 1). Many questions remain open. For instance, the role of longitudinal confinement has not been elucidated. Moreover, theoretical predictions for higher-body loss processes are lacking, although cooling by three-body losses was recently demonstrated experimentally [@schemmer_cooling_2018].
In this paper, we generalise the theoretical results for one-body losses in homogeneous 1D gases and extend the analysis to a BEC or a quasicondensate in any dimension, for any $j$-body loss process, and for homogeneous gases as well as clouds confined in a smoothly varying trapping potential. We concentrate on phonon modes, and the loss rate is assumed to be small enough to ensure adiabatic following of each mode. Low-dimensional systems are realised experimentally by freezing the transverse degrees of freedom with a strong transverse confinement. However, in many experiments the interaction energy is not negligible compared to the transverse excitation frequencies, so that the freezing is not perfect. The interactions then broaden the wave function in the transverse directions, and longitudinal phonon modes are associated with transverse breathing [@Stringari_1998; @salasnich_effective_2002; @fuchs_hydrodynamic_2003]. Our theory can take this into account with a modified equation of state: the quantities $\mu$ and $m c^2$ (where $m$ is the atomic mass and $c$ the sound velocity), which coincide for a strong transverse confinement, are then no longer equal. We find that the evolution produced by losses is better described by a constant ratio $k_BT/(mc^2)$ instead of $k_B T/\mu$. The asymptotic ratio $k_B T / (m c^2)$ is computed for a few examples. Predictions from this paper have been tested successfully against recent experimental results obtained at Laboratoire Charles Fabry on the effect of three-body losses in a harmonically confined 1D Bose gas [@schemmer_cooling_2018].
Model
=====
We consider a condensate, or quasi-condensate, in dimension $d=1,2$ or $3$. The gas is either homogeneous or trapped in a smoothly varying potential $V({\bf r})$. We assume it is subject to a $j$-body loss process of rate constant $\kappa_j$: the number of atoms lost per unit time and unit volume is $\kappa_j n^{j}$ where $n$ is the density. This density includes fluctuations of quantum and thermal nature, and its average profile is denoted $n_0({\bf r}, t)$. Instead of using powerful but involved theoretical techniques such as the truncated Wigner approach [@norrie_three-body_2006; @drummond_functional_2013], we compute the effect of losses in this paper with a spatially coarse-grained approach in which the approximations are made transparent. For the same pedagogical reason, we explicitly construct the phase-density representation of the collective excitations of the gas, in a way similar to what is done, for instance, in [@mora_extension_2003].
Stochastic dynamics of the particle density {#s:dN-per-pixel}
-------------------------------------------
Let us first consider the sole effect of losses and fix a cell of the gas of volume $\Delta$, small enough so that the density of the (quasi)condensate is approximately homogeneous in this volume, but large enough to accommodate many atoms. The atom number in the cell is $N = N_0 + \delta N$ where $N_0 = n_0\Delta$ and $\delta N\ll N_0$ since the gas lies in the (quasi)condensate regime. (We drop the position dependence $n_0 = n_0( {\bf r} )$ for the moment.) Since typical values of $\delta N$ are much smaller than $N_0$, one can assume without consequence that $\delta N$ is a variable that takes discrete values between $-\infty$ and $\infty$. Hence, one can define a phase operator $\theta$, whose eigenvalues span the interval $[0,2\pi[$ and that is canonically conjugate to $\delta N$. Losses will affect both the density fluctuations and the phase fluctuations.
We first concentrate on the effect of losses on density fluctuations. Consider a time step $dt$, small enough that the change $dN$ in atom number is much smaller than $N$, but large enough such that $dN$ is much larger than $1$. After the time step, we have $$dN = - K_j N^{j} dt + d\xi
\label{eq:stoch-dN}$$ where $K_j = \kappa_j / \Delta^{j-1}$. Here, $d\xi$ is a random number with vanishing mean value that translates the shot noise associated with the statistical nature of losses. The number of loss events during the small step $dt$ is Poisson distributed so that the variance of $d\xi$ relates to the mean number of lost atoms by $$\langle d\xi^2\rangle =
j K_j N^{j} dt \simeq j K_j N_0^{j} dt
\,,
\label{eq:dxi2}$$ the factor $j$ coming from the fact that at each event, $j$ atoms are lost. The evolution of fluctuations in the atom number is obtained from $d\delta N = dN - dN_0$, where $dN_0$ is the change of the mean number, equal to $dN_0 = -K_j N_0^{j} dt$ in the lowest order in $\delta N$. Expanding $N^{j}$ in Eq.(\[eq:stoch-dN\]) to first order in $\delta N$, we obtain the following evolution for the density fluctuation $\delta n = \delta N/\Delta$: $$d\delta n = -j \kappa_j n_0^{j-1}\delta n\, dt +d\eta
\label{eq.ddeltan}$$ where $d\eta=d\xi/\Delta$ is a random variable of variance $\langle d\eta^2\rangle = j \kappa_j n_0^{j} dt / \Delta$. The first term on the r.h.s., the drift term, decreases the density fluctuations. It thus reduces the interaction energy associated with fluctuations in the gas and produces cooling. The second term, on the other hand, increases the density fluctuations in the gas, which leads to heating.
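The stochastic rule of Eq. (\[eq:stoch-dN\]) with the shot-noise variance (\[eq:dxi2\]) is easy to simulate for a single cell, drawing at each time step a Poisson number of loss events, each of which removes $j$ atoms. A minimal sketch for three-body losses follows; all parameter values are illustrative and chosen such that $1\ll |dN| \ll N$ over a time step, as assumed above.

```python
import numpy as np

rng = np.random.default_rng(0)
j, Kj = 3, 1e-7                    # three-body losses, illustrative rate constant K_j
N0, dt, nsteps, ntraj = 1.0e4, 1e-3, 400, 2000

N = np.full(ntraj, N0)
for _ in range(nsteps):
    events = rng.poisson(Kj * N**j / j * dt)   # event rate K_j N^j / j, j atoms per event
    N -= j * events                            # mean loss per step is K_j N^j dt

t = nsteps * dt
N_det = N0 / np.sqrt(1 + 2 * Kj * N0**2 * t)   # solution of dN/dt = -K_j N^j for j = 3
print("simulated <N>:", N.mean(), "  deterministic N(t):", N_det)
print("spread of N due to the loss shot noise:", N.std())
```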
Shot noise and phase broadening
-------------------------------
We now compute the effect of losses on the phase fluctuations, following an approach similar to Ref.[@korotkov_continuous_1999]. For this purpose, one imagines that one records the number of lost atoms during $dt$. This measurement increases the knowledge about $N$, and thus $\delta N$. To quantify this increase of knowledge, we use the Bayes formula $${\rm P}(\delta N|N_l)=\frac{{\rm P}(\delta N)}{\int d(\delta N') {\rm P}(N_l|\delta N')} {\rm P}(N_l|\delta N),
\label{eq.bayes}$$ where ${\rm P}(\delta N)$ is the initial probability of having an atom number $N = N_0 + \delta N$, and ${\rm P}(N_l|\delta N)$ is the probability that a number $N_l$ of atoms will be lost, given that the initial atom number was $N_0 + \delta N$. Finally, ${\rm P}(\delta N|N_l)$ is the probability that the final number is $N_0 -N_l+ \delta N$, knowing that $N_l$ atoms have been lost. As argued above, the Poissonian nature of the loss process and the assumption that the number of lost atoms is large compared to one imply the Gaussian distribution $${\rm P}(N_l|\delta N)
\simeq
\frac{ 1 }{ \sqrt{2\pi}\sigma_l }
e^{-(N_l - K_j N^{j} dt)^2 / (2\sigma_l^2)}
\,,
\label{eq:Pnl}$$ where $N=N_0+\delta N$ and $\sigma_l^2 = j K_j N_0^{j} dt$. Expanding $N^{j}$ around $N_0^{j}$ and introducing $\overline{\delta N} = N_l/(j K_j N_0^{j-1} dt) - N_0/j$, one has $$\begin{aligned}
\frac{ (N_l - K_j N^{j} dt)^2 }{ \sigma_l^2 } &\simeq&
\frac{ (\overline{\delta N} - \delta N)^2 }{ \sigma_{\delta N}^2 }
\end{aligned}$$ where $$\sigma_{\delta N}^2 = \frac{ N_0 }{ j K_j N_0^{j-1} dt }
\,.$$ Thus, according to Eq.(\[eq.bayes\]), the width of the distribution in $\delta N$ is multiplied by a function of rms width $\sigma_{\delta N}$ after recording the number of lost atoms. This narrows the number distribution and must be associated with a broadening in the conjugate variable, $\theta$, lest the uncertainty relations are violated. The phase broadening must be equal to $$\langle d\theta^2\rangle =
\frac{ 1 }{ 4 \sigma_{\delta N}^2 }
=
\frac{ j \kappa_j n_0^{j-1} }{ 4 n_0 \Delta }
dt
\,.
\label{eq.phasespread}$$ This spreading of the phase results from the shot noise in the loss process.
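The Gaussian bookkeeping behind Eqs. (\[eq.bayes\])–(\[eq.phasespread\]) can be made concrete numerically: multiplying a Gaussian prior for $\delta N$ by the likelihood (\[eq:Pnl\]), expanded to first order in $\delta N$, yields a posterior whose variance is $(\sigma_0^{-2}+\sigma_{\delta N}^{-2})^{-1}$, i.e. the number distribution is indeed narrowed by a Gaussian of rms width $\sigma_{\delta N}$. The widths below are arbitrary illustrative numbers.

```python
import numpy as np

sigma0, sigma_dN, dN_bar = 40.0, 60.0, 25.0        # prior width, likelihood width, shift
dN = np.linspace(-400, 400, 8001)

prior      = np.exp(-dN**2 / (2 * sigma0**2))
likelihood = np.exp(-(dN_bar - dN)**2 / (2 * sigma_dN**2))   # cf. Eq. (eq:Pnl), expanded
posterior  = prior * likelihood
posterior /= np.trapz(posterior, dN)

mean = np.trapz(dN * posterior, dN)
var  = np.trapz((dN - mean)**2 * posterior, dN)
print(np.sqrt(var), (sigma0**-2 + sigma_dN**-2)**-0.5)   # the two widths agree
```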
In the following, keeping in mind that only length scales larger than the interparticle distance have to be considered, we go to the continuous limit. The factors $1/\Delta$ in the variance for $d\eta$ in Eq.(\[eq.ddeltan\]) and in the phase diffusion of Eq.(\[eq.phasespread\]) then turn into $$\begin{aligned}
\langle d\eta({\bf r})d\eta({\bf r'})\rangle &=&
j \kappa_j n_0^{j} \delta({\bf r-r'}) dt
\\
\langle d\theta({\bf r}) d\theta({\bf r'}) \rangle &=&
\frac{ j }{ 4 } \kappa_j n_0^{j-2}\delta({\bf r-r'}) dt
\label{eq:density-phase-noise}\end{aligned}$$ Both diffusion terms are due to the quantised nature of the bosonic field, namely the discreteness of atoms. Their effects become negligible compared to the drift term in Eq. (\[eq.ddeltan\]) in the classical field limit, i.e. $n_0\rightarrow \infty$ at fixed typical density fluctuations $\delta n/n_0$. Note finally that these results could also have been obtained using a truncated Wigner approach [@drummond_functional_2013; @norrie_three-body_2006], using approximations based on the relation $\delta n\ll n_0$.
Before going on, let us make a remark concerning gases in reduced dimension. An effective 1D (resp. 2D) gas is obtained using a strong transverse confinement in order to freeze the transverse degree of freedom: the atoms are in the transverse ground state of the confining potential, of wave function $\psi( x_\perp )$. In the case of $j$-body losses with $j>1$, the loss process a priori modifies the transverse shape of the cloud since it occurs preferentially at the center, where the density is the highest. In other words, it introduces couplings towards transverse excitations. We assume here the loss rate to be much smaller than the frequency gap $\omega_\perp$ between the transverse ground and first excited states. Then the coupling to transverse excitations has negligible effects, and the above analysis of the effect of losses also holds for the effective 1D (resp. 2D) gas, provided $\kappa_j = \kappa_j^{3D}\!\int\!d^2x_\perp
|\psi(x_\perp)|^{2j}$ (resp. $\kappa_j=\kappa_j^{3D}\!\int\! dx_\perp |\psi(x_\perp)|^{2j}$), where $\kappa_j^{3D}$ is the rate constant coefficient for the 3D gas.
Collective excitations {#s:hydrodynamic-modes}
----------------------
Let us now take into account the dynamics of the gas. Under the effect of losses the profile $n_0({\bf r}, t)$ evolves in time and, except for a homogeneous system, a mean velocity field appears, generated by a spatially dependent phase $\theta_0({\bf r},t)$. Here we assume the loss rate is small enough so that, at any time, $n_0({\bf r})$ is close to the equilibrium profile. We moreover assume the potential varies sufficiently smoothly such that the equilibrium profile is obtained with the local density approximation. Then, at any time, $n_0({\bf r})$ fulfills $$\mu(n_0({\bf r}))=\mu_p - V({\bf r})$$ where $\mu(n)$ is the chemical potential of a homogeneous gas of density $n$ and $\mu_p$ is the peak chemical potential, which fixes the total atom number [^1]. In most cases $\mu=g n$ where $g$ is the coupling constant. In 3D condensates, $g=4\pi\hbar^2a/m$ where $a$ is the scattering length describing low-energy collisions. In situations where two (resp. one) degrees of freedom are strongly confined by a transverse potential of frequency $\omega_\perp$, $\mu$ depends on $a$, on the linear (resp. surface) density $n$, and on $\omega_\perp$. As long as $\hbar\omega_\perp \gg \mu$, the transverse cloud shape is close to that of the transverse ground state [^2], and one recovers the expression $\mu=gn$ where the effective 1D (resp. 2D) coupling constant $g$ depends only on $a$ and on $\omega_\perp$ [@petrov_bose-einstein_2000; @olshanii_atomic_1998]. At large densities, $\hbar \omega_\perp \sim \mu$, the transverse degrees of freedom are no longer completely frozen: interactions broaden the transverse wave function, and $\mu$ is no longer linear in $n$ [@salasnich_effective_2002; @fuchs_hydrodynamic_2003]. We discuss one example in Sec.\[s:homogeneous\].
To treat the dynamics around the average density $n_0( {\bf r}, t )$, a Bogoliubov approximation is valid since the gas is in the (quasi)condensate regime: one can linearise the equations of motion in the density and phase fluctuations $\delta n({\bf r})$ and $\varphi({\bf r})=\theta-\theta_0$ [@PitaevskiiStringari; @mora_extension_2003]. These equations involve the mean velocity field $\hbar \nabla \theta_0/m$. Here we assume the loss rate is small enough so that such terms are negligible. We moreover consider only length scales much larger than the healing length. Then, as detailed in Appendix \[a:lowD-hdyn\], the dynamics of $\delta n({\bf r})$ and $\varphi({\bf r})$ is governed by the hydrodynamic Hamiltonian $$H_{\rm{hdyn}} = \frac{\hbar^2}{2m}\int d^{d}{\bf r}\, n_0 \left( \nabla \varphi \right)^2
+\frac{m}{2}\int d^{d}{\bf r} \frac{ c^2 }{ n_0 } \delta n^2.
\label{eq.Hydro}$$ Here the speed of sound $c = c( {\bf r} )$ is related to the local compressibility, $m c^2 = n_0 \partial_n \mu$, evaluated at $n_0({\bf r})$. At a given time, $H_{\rm{hdyn}}$ can be recast as a collection of independent collective modes. The collective modes are described by the eigenfrequencies $\omega_\nu$ and the real functions $g_\nu$ \[details in Appendix \[SM\]\]. They obey $$\nabla \cdot \big( n_0 \nabla (\frac{ c^2 }{ n_0 } g_\nu) \big)
= - \omega_\nu^2g_\nu,
\label{eq:wave-eqn-gnu}$$ and are normalised according to $$\delta_{\nu,\nu'} = \frac{ m }{ \hbar \omega_\nu }
\int d^{d}{\bf r} \frac{ c^2 }{ n_0 }
g_\nu({\bf r}) g_{\nu'}({\bf r})
\,.
\label{eq:g-nu-normalisation}$$ Then $H_{\rm{hdyn}}=\sum_\nu H_\nu$ where $$H_\nu = \frac{ \hbar\omega_\nu }{ 2 } ({x_\nu^2} + {p_\nu^2}).$$ The dimensionless canonically conjugate quadratures $x_\nu$ and $p_\nu$ are related to $\delta n$ and $\varphi$ respectively. More precisely, $$\left \{\begin{array}{l}
\delta n({\bf r}) = \sum_\nu x_\nu g_\nu({\bf r})
\\[1ex]
\displaystyle
\varphi({\bf r}) = \frac{ m c^2 }{ n_0 }
\sum_\nu p_\nu \frac{ g_\nu({\bf r}) }{ \hbar\omega_\nu }
\end{array}
\right .
\label{deltanvsxnu}$$ which inverts into $$\left\{
\begin{array}{l}
\displaystyle
x_\nu = \frac{m}{\hbar\omega_\nu}\int d^{d}{\bf r}
\frac{ c^2 }{ n_0 } \delta n({\bf r}) g_\nu({\bf r})
\\[2ex]
p_\nu = \int d^d{\bf r}\, \varphi({\bf r}) g_\nu({\bf r})
\end{array}
\right.
\label{eq.xnu}$$ At thermal equilibrium, the energy in the mode $\nu$ is equally shared between both quadratures and, for temperatures $T\gg \hbar \omega_\nu$, one has $\langle H_\nu \rangle =T$.
Cooling dynamics
================
Evolution of the excitations
----------------------------
Let us consider the effect of losses on the collective modes. The loss process modifies in time the mean density profile and thus the two functions of ${\bf r}$, $n_0$ and $c$, that enter into the Hamiltonian Eq. (\[eq.Hydro\]). We however assume the loss rate is very low compared to the mode frequencies and their differences $\omega_\nu - \omega_{\nu'}$, so that the system follows adiabatically the effect of these modifications. As a consequence, equipartition of the energy holds at all times for any collective mode $\nu$, and the adiabatic invariant $A_\nu=\langle H_\nu\rangle /(\hbar \omega_\nu)$ is unaffected by the slow evolution of $n_0$. The dynamics of $A_\nu$ is then only due to the modifications of $\delta n( {\bf r} )$ and $\varphi( {\bf r} )$ induced by the loss process (subscript $l$), namely $$\frac{ d A_\nu }{ dt } =
\frac{1}{2} \Big(
\frac{ d\langle x_\nu^2\rangle_l }{ dt }
+ \frac{ d\langle p_\nu^2\rangle_l }{ dt }
\Big)
\label{eq.dEtilde}$$ Injecting Eq. (\[eq.ddeltan\]) into Eq. (\[eq.xnu\]), we obtain for the ‘density quadrature’ $$ (d x_\nu)_l = \frac{m}{\hbar\omega_\nu}\int d^d{\bf r} \,
\frac{ c^2 }{ n_0 } g_\nu({\bf r})
\left(
-j \kappa_j n_0^{j-1}\delta n({\bf r}) dt + d\eta({\bf r})
\right).
\label{eq.dxnul}$$Using the mode expansion (\[deltanvsxnu\]) for $\delta n( {\bf r} )$ in the first term, we observe the appearance of couplings between modes. In the adiabatic limit (loss rate small compared to mode spacing), the effect of these couplings is however negligible. Then, Eq. (\[eq.dxnul\]) leads to $$\frac{d \langle x_\nu^2 \rangle_l}{dt} =
- \frac{ 2j \kappa_j m }{ \hbar\omega_\nu }
\langle x_\nu^2 \rangle
\int\!d^d{\bf r}\, c^2 n_0^{j-2} g_\nu^2
+ \frac{ j \kappa_j m^2 }{ (\hbar \omega_\nu)^2 }
\int\!d^d{\bf r} \, c^4 n_0^{j-2} g_\nu^2
\,.
\label{eq.dxnu2l}$$Let us now turn to the phase diffusion associated with losses. It modifies the width of the conjugate quadrature $p_\nu$, according to $$\frac{d \langle p_\nu^2 \rangle_l}{dt}
=
\frac{ j \kappa_j }{ 4 }
\int\!d^d{\bf r} \, n_0^{j-2} g_\nu^2
\,.
\label{eq.dpnu}$$ The hydrodynamic modes are characterised by low energies, $\hbar\omega_\nu \ll m c^2$, when the speed of sound is evaluated in the bulk of the (quasi)condensate. Then $d \langle p_\nu^2 \rangle_l/dt $ gives a contribution that scales with the small factor $(\hbar\omega_\nu / m c^2)^2$ compared to the second term of Eq. (\[eq.dxnu2l\]). In other words, one expects that the phase diffusion associated with the loss process gives a negligible contribution to the evolution of $A_\nu$ \[Eq.(\[eq.dEtilde\])\] for phonon modes [^3].
We see from Eq.(\[eq.dxnu2l\]) that the adiabatic invariant $A_\nu$ is actually changed by $j$-body losses. We now show that the decrease in the energy per mode $\langle H_\nu\rangle$ is better captured by the energy scale associated with the speed of sound, as their ratio will converge towards a constant during the loss process. More precisely, we introduce $$y_\nu = \frac{ \langle H_\nu\rangle }{ m c_p^2 }
\simeq \frac{ k_B T_\nu }{ m c_p^2 }
$$ where $c_p$ is the speed of sound evaluated at the peak density $n_p$. The second expression is valid as long as the phonon modes stay in the classical regime, $\langle H_\nu\rangle\gg \hbar \omega_\nu$. From Eq. (\[eq.dEtilde\]) and (\[eq.dxnu2l\]), neglecting the contribution of Eq.(\[eq.dpnu\]), we immediately obtain $$\frac{d}{dt} y_\nu =
\kappa_j n_p^{j-1}
\left[
- (j{\cal A} - {\cal C}) y_\nu
+ j{\cal B}
\right]
\label{eq.ydot}$$ where the dimensionless parameters ${\cal A},{\cal B}$ and ${\cal C}$ are $$\begin{aligned}
{\cal A} &=& \frac{m}{\hbar \omega_\nu}
\int\!d^{d}{\bf r}
\frac{ c^2 n_0^{j-2} }{ n_p^{j-1} } g_\nu^2({\bf r})
\,,
\label{eq.calA}
\\
{\cal B} &=& \frac{m}{2\hbar\omega_\nu}
\int\!d^{d}{\bf r} \frac{ c^4 n_0^{j-2} }{ c_p^2 n_p^{j-1} } g_\nu^2({\bf r})
\,,
\label{eq.calB}
\\
{\cal C} &=&
\frac{ d\ln (m c_p^2 / \hbar\omega_\nu ) }{ dN_{\rm tot} }
\int\!d^d{\bf r} \frac{ n_0^{j} }{ n_p^{j-1} }
\,.
\label{eq.calC}\end{aligned}$$ In general, all of them depend on $\nu$ but we omit the index $\nu$ for compactness. The term ${\cal A}$ is the rate of decrease of $y_\nu$ induced by the reduction of the density fluctuations under the loss process, normalised to $\kappa_j n_p^{j-1}$. The term ${\cal B}$ originates from the additional density fluctuations induced by the stochastic nature of the losses. The term ${\cal C}$ arises from the time dependence of the ratio $m c_p^2 / \hbar\omega_\nu$. It is computed using the dependence of $m c_p^2 / \hbar\omega_\nu$ on the total atom number, the latter evolving according to $$\frac{ dN_{\rm tot} }{ dt } = - \int\!d^d{\bf r}\, \kappa_j n_0^{j}
\,.$$ Eqs. (\[eq.ydot\]–\[eq.calC\]) constitute the main results of this paper. They have been solved numerically for the experimental parameters corresponding to the data of [@schemmer_cooling_2018] ($j=3$ and anisotropic harmonic confinement) and their predictions compare very well with experimental results.
We would like at this stage to make a few comments about these equations. First, the factor $\hbar$, although it appears explicitly in the equations, is not relevant since it is canceled by the $\hbar$ contained in the normalisation (\[eq:g-nu-normalisation\]) of the mode functions $g_\nu$. Second, we note that ${\cal A}$, ${\cal B}$ and ${\cal C}$ are intensive parameters: they are invariant by a scaling transformation $V({\bf r})\rightarrow V(\lambda {\bf r})$ and depend only on the peak density $n_p$ and on the shape of the potential. Finally, Eqs. (\[eq.ydot\]–\[eq.calC\]) depend on $\nu$ and it is possible that the lossy (quasi-)condensate evolves into a non-thermal state where different modes acquire different temperatures. Such a non-thermal state of the gas is permitted within the linearised approach where modes are decoupled. In the examples studied below, however, it turns out that all hydrodynamic modes share about the same temperature[^4]. In the following, we investigate the consequences of Eq. (\[eq.ydot\]-\[eq.calC\]), considering different situations.
Example: homogeneous gas {#s:homogeneous}
------------------------
In this case, density $n_0$ and speed of sound $c$ are spatially constant. The collective modes are sinusoidal functions, labelled by $\nu$ and of wave vector ${\bf k_\nu}$ [^5]. The frequencies are given by the acoustic dispersion relation $\omega_\nu = c |{\bf k_\nu}|$ and the mode functions $g_{\nu c,s}({\bf r})$ are normalised to $$\int d^d{\bf r}\, g_{\nu}^2({\bf r}) =
\frac{ \hbar \omega_\nu }{ m c^2 } n_0$$ Then Eqs.(\[eq.ydot\]-\[eq.calC\]) reduce to $$\frac{d}{dt} y = \kappa_j n_0^{j-1}
\left[ - y \left(
j - \frac{ \partial \log c }{ \partial \log n_0 }
\right) + j/2 \right]
\label{eq.dydthomo}$$ which is the same for all modes $\nu$. Let us consider the limit $\mu=gn_0$, valid in 3D gases, or in low-dimensional gases with strong transverse confinement (negligible broadening of the transverse wave function). Then $c \propto n_0^{1/2}$ and Eq. (\[eq.dydthomo\]) shows that $y$ tends at long times towards the asymptotic value $$y_{\infty} = \frac{1}{2-1/j}
\,,$$ independent of the mode energy. For one-body losses, one recovers the result $y_{\infty}=1$ [@grisins_degenerate_2016; @johnson_long-lived_2017]. In the case of 3-body losses, one finds $y_{\infty} = 3/5$.
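The approach to this asymptotic value can be checked by integrating Eq. (\[eq.dydthomo\]) for $\mu=gn_0$ (so that $\partial\log c/\partial\log n_0=1/2$) together with the mean-density evolution $dn_0/dt=-\kappa_j n_0^{j}$. A brief sketch with purely illustrative parameter values:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z, j, kappa):
    n0, y = z
    dn0 = -kappa * n0**j                                   # mean density loss
    dy  = kappa * n0**(j - 1) * (-y * (j - 0.5) + j / 2)   # Eq. (eq.dydthomo), dlog c/dlog n0 = 1/2
    return [dn0, dy]

kappa, n0_init, y_init = 1e-2, 50.0, 2.0                   # illustrative values
for j in (1, 2, 3):
    sol = solve_ivp(rhs, (0.0, 1e4), [n0_init, y_init], args=(j, kappa), rtol=1e-8)
    print(j, round(sol.y[1, -1], 4), "->", 1 / (2 - 1 / j))   # y approaches 1/(2-1/j)
```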
Let us now consider a quasi-low-dimensional gas, where transverse broadening of the wave function cannot be neglected. The logarithmic derivative in Eq.(\[eq.dydthomo\]) is then no longer constant. We will focus on the case of a quasi-1D gas, as realised experimentally for instance in [@schemmer_cooling_2018]. The effect of the transverse broadening is well captured by the heuristic equation of state [@salasnich_effective_2002; @fuchs_hydrodynamic_2003] $$\mu = \hbar\omega_\perp\left( \sqrt{1+4 n_0 a}-1 \right),
\label{eq:eff-mu-swelling}$$ where $\omega_\perp$ is the frequency of the transverse confinement and $a$ the 3D scattering length. Inserting into Eq. (\[eq.dydthomo\]), one can compute the evolution of $y$. The transverse broadening also modifies the rate coefficient $\kappa_j$, making it density-dependent. However, re-scaling the time according to $u=\int_0^t \kappa_j(\tau)n_p^{j-1}d\tau=\ln(n_0(0)/n_0(t))$, Eq. (\[eq.dydthomo\]) transforms into $${\frac{dy}{du} =
- y \left( j - 1/2 + \frac{ n_0(0) a \, e^{-u} }{ 1 + 4 n_0(0) a \, e^{-u}} \right) + j/2}$$ and no longer depends on $\kappa_j$. Fig.\[fig.effectbroadening\] shows the solution of this differential equation in the case of 3-body losses, and for a few initial situations, namely different values of $y$ and $n_0 a$ (right plot). The asymptotic value $y=y_\infty$ is always reached at long times since the transverse broadening then becomes negligible. Note that, in contrast to pure 1D gases, the effect of transverse broadening allows the system to reach transiently lower scaled temperatures $y < y_\infty $, even when starting at values of $y$ larger than $y_\infty$. More precisely, let us denote $y_{\rm min}(n_0) = (j/2)/(j-1/2+an_0/(1+4an_0))$. When starting with $y>y_{\rm min}$, the lowest value of $y$ is reached for some (non-vanishing) density, and it falls on the curve $y_{\rm min}$. For $j=3$, one finds that $y_{\rm min}$ varies between $y_\infty=0.6$ and $6/11\simeq 0.55$. Thus, the coldest temperatures in the course of the loss process never deviate by more than 10% from the asymptotic value 0.6: the impact of transverse swelling is relatively small. Note that, if one considered the scaled temperature $T/\mu$ rather than $y$, much larger deviations would appear.
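The rescaled equation above is likewise easy to integrate numerically; the sketch below does so for $j=3$ and a few arbitrary initial values of $y\ge y_\infty$ and of $n_0(0)a$, showing the transient dip below $y_\infty=0.6$ and the final approach to it.

```python
import numpy as np
from scipy.integrate import solve_ivp

j = 3
def rhs(u, y, na0):
    s = na0 * np.exp(-u)                       # n0(u) a = n0(0) a exp(-u)
    return -y * (j - 0.5 + s / (1 + 4 * s)) + j / 2

for na0 in (0.5, 2.0, 10.0):
    for y0 in (0.7, 1.5):
        sol = solve_ivp(rhs, (0.0, 12.0), [y0], args=(na0,), max_step=0.05)
        y = sol.y[0]
        print(f"n0(0)a={na0:5.1f}  y(0)={y0:3.1f}  min y={y.min():.3f}  final y={y[-1]:.3f}")
# trajectories started above y_min dip at most ~10% below 0.6 before relaxing back to y_inf
```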
Example: 1D harmonic trap {#s:harmonic}
-------------------------
We consider a 1D gas confined in a harmonic potential of trapping frequency $\omega$. We assume for simplicity a pure 1D situation with $\mu = g n = m c^2$. In the Thomas-Fermi approximation, the mean density profile is $$n_0(z)=n_p(1-(z/{R})^2)
\,, \quad
|z| \le R
\label{eq.noharm}$$ where $n_p$ is the peak density and ${R}= \sqrt{2gn_p/(m \omega^2)}$ is the axial radius of the quasicondensate. From Eq.(\[eq:wave-eqn-gnu\]), we recover the known result that the hydrodynamic modes are described by the Legendre polynomials $P_\nu$, and the eigenfrequencies are $\omega_\nu=\omega \sqrt{\nu(\nu+1)/2}$ [@Ho_1999; @Petrov_2000]. A trivial calculation using $N_{\rm tot} = \frac{4}{3} n_p R
\propto c_p^{3}$ and the substitution $z = {R}\cos\alpha$ gives ${\cal C} = \int_0^{\pi/2}\!d\alpha \, \sin^{2j+1}\alpha
= 2/3, 8/15, 16/35$ for $j = 1, 2, 3$. To compute ${\cal A}$ and ${\cal B}$, one needs the exact expression of $g_\nu$, which according to the normalisation (\[eq:g-nu-normalisation\]) can be written $$g_\nu(z) =
\sqrt{\frac{ \hbar \omega_\nu }{ 2 g R } }
\sqrt{2\nu+1}
P_\nu(z/R)
\,.
\label{eq.gnuharm1D}$$ Inserting this expression, together with Eq. (\[eq.noharm\]), into the integrals (\[eq.calA\]) and (\[eq.calB\]), we find that ${\cal A}$, ${\cal B}$, and ${\cal C}$ are time-independent. Thus $y$ tends at long times towards the asymptotic value $y_\infty=j{\cal B}/(j{\cal A}-{\cal C})$. For large $\nu$, one can use the asymptotic expansion [@DLMF] $$P_{\nu}\left(\cos\alpha\right)
\simeq
\left(\frac{2}{\pi(\nu + \frac12)\sin\alpha}\right)^{1/2}\!
\cos\phi_{\nu}
,
\label{eq:Legendre-asymptote}$$ with $
\phi_{\nu} = (\nu + \tfrac{1}{2}) \alpha - \tfrac{1}{4} \pi
$. Moreover the fast oscillations of $P_\nu(\cos\alpha)$ can be averaged out in the calculation of the coefficients ${\cal A}$ and ${\cal B}$. Then ${\cal A}$ and ${\cal B}$ no longer depend on $\nu$, so that $y_\infty$ is identical for all modes, and we find $$y_\infty \simeq
\frac{\frac{j}{\pi}\int_0^{\pi/2}\!d\alpha \,\sin^{2j}\alpha}
{\frac{2j}{\pi}\int_0^{\pi/2}\!d\alpha\, \sin^{2j-2}\alpha
- \int_0^{\pi/2}\!d\alpha \,\sin^{2j+1}\alpha}
\label{eq.yinftyharmnugrand}$$ For one- and three-body losses, this gives $y_\infty = 3/4 = 0.75$ and $y_\infty = 525/748 \simeq 0.701$, respectively. This asymptotic result is compared to calculations using the expression Eq. (\[eq.gnuharm1D\]) in Fig. \[fig.yinftyharm\]. We find very good agreement as soon as the mode index is larger than 5.
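The values quoted above can be checked symbolically; the following minimal sketch evaluates the Wallis-type integrals entering ${\cal C}$ and Eq. (\[eq.yinftyharmnugrand\]) for $j=1,2,3$.

```python
# Symbolic check of C and of Eq. (eq.yinftyharmnugrand) for j = 1, 2, 3.
import sympy as sp

a = sp.symbols('alpha')
for j in (1, 2, 3):
    C = sp.integrate(sp.sin(a)**(2*j + 1), (a, 0, sp.pi/2))
    num = (j / sp.pi) * sp.integrate(sp.sin(a)**(2*j), (a, 0, sp.pi/2))
    den = (2*j / sp.pi) * sp.integrate(sp.sin(a)**(2*j - 2), (a, 0, sp.pi/2)) - C
    print(j, C, sp.simplify(num / den))
# output: C = 2/3, 8/15, 16/35 and y_inf = 3/4, 45/56, 525/748
```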
![ Asymptotic ratio $y_\infty=k_BT/mc^2$ for hydrodynamic collective modes of a 1D quasi-condensate confined in a harmonic trap, for 1-body (red), 2-body (blue) and 3-body (green) losses. The modes are labeled by their eigenfrequencies $\omega_\nu = \omega \sqrt{\nu(\nu+1)/2}$ and we only consider $\nu \ge 2$. Symbols: calculation based on the Legendre polynomials of Eq.(\[eq.gnuharm1D\]), inserted into Eqs. (\[eq.calA\], \[eq.calB\]). Solid lines: large-$\nu$ approximation given by Eq. (\[eq.yinftyharmnugrand\]) with values $y_\infty = 3/4, 45/56, 525/748$ for $j = 1, 2, 3$.[]{data-label="fig.yinftyharm"}](Tlim_harm1D){width="0.65\columnwidth"}
To conclude this example, we come back to the diffusive dynamics of the ‘phase quadratures’ $p_\nu$ we neglected so far. In the case of one-body losses, however, it happens that the integral (\[eq.dpnu\]) does not converge: while the mode function $g_\nu(z)$ \[Eq.(\[eq.gnuharm1D\])\] remains finite at the condensate border $z \to \pm R$, the integrand $n_0^{j-2}(z) g_\nu^2(z)$ is not integrable for $j = 1$. This is actually an artefact of the hydrodynamic approximation, which breaks down at the border of the condensate.
We have performed numerical calculations of the collective excitations by solving the Bogoliubov equations [^6]. The mode functions $g_\nu( z )$ are defined according to Eq.(\[eq:def-fpm-numerics\]): they extend smoothly beyond the Thomas-Fermi radius and match well with the Legendre polynomials (\[eq.gnuharm1D\]) within the bulk of the gas. The resulting values for the parameter ${\cal B}$ \[Eq.(\[eq.calB\])\] are shown in Fig.\[fig.noise-density-phase\]: they depend very weakly on the mode index $\nu$ and are well described by the approximate calculation based on the Legendre modes mentioned after Eq.(\[eq:Legendre-asymptote\]) (solid lines). In the lower part of the figure, the corresponding values for the diffusion coefficient originating from phase noise are shown, namely the parameter $${\cal B}_\varphi =
\frac{\hbar\omega_\nu }{ 8 m c_p^2 }
\int\!d^{d}{\bf r} \frac{ n_0^{j-2} }{ n_p^{j-1} } g_\nu^2({\bf r})
\,.
\label{eq.Bphi}$$ They remain at least one order of magnitude below the corresponding density-quadrature coefficients ${\cal B}$. For losses involving more than one particle, the approximation under which the functions $g_\nu$ are given by the Legendre polynomials gives a convergent integral in Eq.(\[eq.Bphi\]). The result is shown as solid lines for two- and three-body losses, where we made the additional approximation Eq. (\[eq:Legendre-asymptote\]) on the Legendre functions and averaged out the oscillating part. We find that the Legendre approximation performs better for three-body losses than for two-body losses, which is expected since a stronger weight is given to the bulk rather than to the edge of the condensate. To conclude this numerical study, we verified the validity of the assumption that, for phonon modes, the phase diffusion term gives a negligible contribution to the evolution of $y$. This term becomes noticeable when one leaves the phonon regime $\hbar\omega_\nu \ll mc^2$; then, one should go beyond the hydrodynamic Hamiltonian Eq.(\[eq.Hydro\]) to properly compute the mode dynamics.[^7]
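As a rough numerical illustration of these two regimes, the sketch below evaluates Eq. (\[eq.Bphi\]) within the Legendre approximation, i.e. with the modes of Eq. (\[eq.gnuharm1D\]) and the Thomas-Fermi profile (\[eq.noharm\]). The reduction to a dimensionless integral uses $mc_p^2 = gn_p$ and is our own rewriting; the peak chemical potential $\mu_p = 100\,\hbar\omega$ and the mode index are illustrative choices, and the logarithmic divergence for $j=1$ is exhibited with an explicit cutoff.

```python
# Evaluation of Eq. (eq.Bphi) in the Legendre approximation. After inserting
# Eq. (eq.gnuharm1D), the profile (eq.noharm) and m c_p^2 = g n_p (our reduction),
#   B_phi = (hbar w_nu)^2 (2 nu + 1) / (16 mu_p^2) * int_{-1}^{1} (1 - x^2)^{j-2} P_nu(x)^2 dx ,
# in units hbar = m = omega = 1 with the assumed mu_p = 100.
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

mu_p = 100.0

def B_phi(j, nu, cutoff=0.0):
    w_nu_sq = nu * (nu + 1) / 2.0                          # (hbar w_nu)^2
    f = lambda x: (1.0 - x**2)**(j - 2) * eval_legendre(nu, x)**2
    val, _ = quad(f, -1.0 + cutoff, 1.0 - cutoff, limit=200)
    return w_nu_sq * (2 * nu + 1) / (16.0 * mu_p**2) * val

for j in (2, 3):
    print(f"j = {j}:  j * B_phi(nu=10) = {j * B_phi(j, 10):.2e}")               # small and finite
for eps in (1e-2, 1e-4, 1e-6):
    print(f"j = 1, cutoff {eps:.0e}:  B_phi(nu=10) = {B_phi(1, 10, eps):.3f}")  # log divergence
```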
![Diffusion of density and phase quadratures associated with many-body loss in a one-dimensional gas trapped in a harmonic potential. We plot the dimensionless coefficients $j {\cal B}$ \[Eq.(\[eq.calB\])\] and $j {\cal B}_\varphi$ \[Eq.(\[eq.Bphi\])\] that are proportional to the shot noise projected onto the corresponding quadratures. Symbols: numerically computed mode functions, improving upon the hydrodynamic approximation. Solid lines: approximate results based on the Legendre modes (\[eq.gnuharm1D\]). Dashed lines: guide to the eye. Parameters: strictly 1D equation of state $\mu = g n$, peak chemical potential $\mu_p \approx g n_p = 100\, \hbar\omega$. []{data-label="fig.noise-density-phase"}](dens-phase-noise_4.eps){width="0.55\columnwidth"}
Conclusion
==========
In this paper, we construct a stochastic model to describe the effect of losses on the hydrodynamic collective modes of condensates or quasicondensates. Explicit formulas for the cooling and diffusion of the density and phase quadratures are derived. They give the time evolution of the mode temperature $T$. We show that $T$ becomes proportional to the energy scale $m c^2$, where $c$ is the hydrodynamic speed of sound. The asymptotic ratio $k_B T/(mc^2)$ is computed explicitly in different situations and for different $j$-body processes. These results are in good agreement with recent experiments in our group [@schemmer_cooling_2018], where three-body losses provided the dominant loss channel.
This work raises several further questions and remarks. First, it is instructive to investigate the evolution of the ratio $D = \hbar^2 n^{2/d}/(m k_B T)$, where $d$ is the gas dimension, since $D$ quantifies the quantum degeneracy of the gas.[^8] Let us focus for simplicity on a homogeneous system and use $mc^2 = gn$. Once the ratio $k_B T / (mc^2)$ has become stationary, we find that $D$ increases in time for three-dimensional gases, while it decreases for one-dimensional gases. Starting with a 1D Bose gas in the quasi-condensate regime, losses let the quantity $D\gamma$ reach a stationary value of order one, but increase the dimensionless interaction parameter $\gamma=mg/(\hbar^2 n)$. When $\gamma$, starting from values much smaller than 1, approaches 1, the gas lies at the crossover between four regimes: the quasi-condensate ($\gamma\ll 1$, $D \sqrt{\gamma}\gg 1$), the quantum-degenerate ideal Bose gas ($D \sqrt{\gamma}\ll 1$, $D \gg 1$), the non-degenerate ideal Bose gas ($D {\gamma}^2 \ll 1$, $D \ll 1$) and the Tonks-Girardeau regime ($\gamma \gg 1$, $D {\gamma}^2 \gg 1$). At later times, one expects the cloud to leave the quasi-condensate regime and we believe it becomes a non-degenerate ideal Bose gas. Second, the effect of losses on high-frequency modes, not described by our hydrodynamic model, leads a priori to higher temperatures; this was investigated for 1D gases subject to one-body losses [@johnson_long-lived_2017]. The gas is then described by a generalised Gibbs ensemble where different collective modes experience different temperatures. This non-thermal state is moreover long-lived in 1D quasicondensates [@johnson_long-lived_2017]. While the calculations presented here are formally valid for higher dimensions, efficient coupling between modes may reduce their relevance, since such coupling favours a common temperature. It is an open question whether our methods could be extended to the case of evaporative cooling, where the one-body loss rate is energy- or position-dependent. This mechanism may play a role in experiments where temperatures as low as $k_B T \approx 0.3\,m c^2$ have been observed, lower than the predicted temperatures for uniform losses [@rauer_cooling_2016]. Finally, it would be interesting to extend this work to different regimes of the gas. For instance, one may ask how the effect of losses transforms as one goes from a quasi-condensate to the ideal gas regime. The approximation of weak density fluctuations then clearly becomes invalid. One could also investigate losses at even lower densities, where the 1D gas enters the fermionised (or Tonks-Girardeau) regime. Here, one expects the losses to act in a similar way as in a non-interacting Fermi gas. One-body losses, for example, should then produce heating, since the temperature increases as the degeneracy of an ideal Fermi gas decreases. Lastly, it would be interesting to investigate whether the results presented here may also cover interacting Fermi gases in the superfluid regime.
Acknowledgements {#acknowledgements .unnumbered}
================
M. S. gratefully acknowledges support by the *Studienstiftung des deutschen Volkes*. This work was supported by Région Île de France (DIM NanoK, Atocirc project). The work of C. H. is supported by the *Deutsche Forschungsgemeinschaft* (grant nos. Schm 1049/7-1 and Fo 703/2-1).
Reduction to low-dimensional hydrodynamics {#a:lowD-hdyn}
==========================================
As mentioned in the main text, we assume the loss process is slow enough so that, first, the mean profile at each time is very close to the equilibrium profile with the same atom number, and second, we can safely neglect any mean velocity field when computing the time evolution of the fluctuating fields $\delta n$, $\varphi$. The evolution equations $\partial\delta n/\partial t$ and $\partial \varphi/\partial t$ are thus, at a given time, equal to those for a time-independent quasi-condensate. In the purely 3D, 2D and 1D cases, for contact interactions, we can use the well known results based on Bogoliubov theory. We then find that the equation of state takes the form $\mu=gn$ and $\partial\delta n/\partial t$ and $\partial \varphi/\partial t$ derive from Eq. (\[eq.Hydro\]) for the long-wavelength modes.
Let us now consider the case where the gas is confined strongly enough in 1 or 2 dimensions, such that the relevant low-lying excitations are of planar or axial nature. We allow, however, for a transverse broadening of the wave function under the effect of interactions. We show below that the equations of motion for the slow phononic modes, for which the transverse shape adiabatically follows the density oscillations, also derive from Eq. (\[eq.Hydro\]). The proof given here is complementary to Refs.[@Stringari_1998; @salasnich_effective_2002] because it does not need an explicit model for the shape of the transverse wave function. In order to simplify the notation, we restrict ourselves to the quasi-1D situation. The derivation can be easily translated to quasi-2D situations.
We thus consider a gas confined in a separable potential consisting of a strong transverse confinement and a smooth longitudinal confinement. The equilibrium density distribution of the quasi-condensate is $|\phi_0(x,y,z)|^2$ where the real function $\phi_0(x,y,z)$ obeys the stationary Gross-Pitaevskii equation $$\left(-\frac{\hbar^2}{2m}\partial_z^2 -\frac{\hbar^2}{2m}\Delta_\perp + V_\perp(x,y)+V(z) + g |\phi_0|^2 -\mu_p \right)\phi_0=0.
\label{eq:GPE}$$ Here $g=4\pi\hbar^2a/m$ is the 3D coupling constant with $a$ the zero-energy scattering length. Within the Bogoliubov theory, the evolution of excitations is governed by the equations [@PitaevskiiStringari] $$\left \{
\begin{array}{rcl}
i\hbar\partial_t \tilde{f}^+&=&
\displaystyle
\left(-\frac{\hbar^2}{2m}\partial_z^2 -\frac{\hbar^2}{2m}\Delta_\perp + V_\perp(x,y)+V(z) + g |\phi_0|^2 -\mu_p \right) \tilde{f}^-
\\[0.5ex]
i\hbar \partial_t \tilde{f}^-&=&
\displaystyle
\left(-\frac{\hbar^2}{2m}\partial_z^2 -\frac{\hbar^2}{2m}\Delta_\perp + V_\perp(x,y)+V(z) + 3g |\phi_0|^2 -\mu_p \right) \tilde{f}^+
\end{array}
\right .
\label{eq.evolf+f-}$$ The field operators $\tilde{f}^{\pm}$ are the half sum and half difference of the fluctuating field operators $\delta\psi$ and $\delta\psi^\dag$: $\tilde{f}^+$ is linked to density fluctuations and $\tilde{f}^-$ to phase fluctuations.
Since we assume that the axial variation is slow compared to the transverse one, the solution $\phi_0$ can be approximated by a function $\psi$ that depends on the axial coordinate $z$ only via a local chemical potential $$\phi_0( x, y, z ) \simeq \psi(x, y; \mu)
\,,\qquad
\mu = \mu_p - V(z)
\label{eq:phi0-separation}$$ Here, $\psi$ solves the Gross-Pitaevskii equation for an axially homogeneous system: $$\left(-\frac{\hbar^2}{2m}\Delta_\perp + V_\perp(x,y)
+ g |\psi|^2 - \mu \right)\psi = 0.
\label{eq.psimu}$$ This procedure is consistent, e.g., with making the Thomas-Fermi approximation in the axial direction. Solving this equation yields the local chemical potential as a function of the axial (average) density $\mu = \mu( n_0 )$ with $$n_0( z ) =
\int\!dx dy\, |\phi_0(x, y, z)|^2
\simeq
\int\!dx dy\, |\psi(x,y; \mu)|^2
\label{eq:def-densite-1D}$$ This motivates the following separation *Ansatz* for the Bogoliubov functions in Eq.(\[eq.evolf+f-\]):
$$\left \{
\begin{array}{l}
\tilde{f}^+ = \partial_\mu \psi \partial_n \mu {F^+}\\[0.5ex]
\tilde{f}^- = i\phi_0{F^-}\end{array}
\right .
\label{eq.ansatzf+f-}$$
where the functions ${F^+}$ and ${F^-}$ depend only on $z$ and the derivative $\partial_n \mu$ is evaluated at the local density $n_0$. Inserting this into the second line of Eq.(\[eq.evolf+f-\]), we find $$-\phi_0\hbar\partial_t{F^-}=
\left(-\frac{\hbar^2}{2m}\partial_z^2 -\frac{\hbar^2}{2m}\Delta_\perp + V_\perp(x,y)+V(z) + 3g |\phi_0|^2 -\mu_p \right)
\left( \partial_\mu \psi \partial_n \mu {F^+}\right)
\label{eq.partialF-}$$ The action of this operator on $\partial_\mu \psi$ can be worked out by differentiating Eq. (\[eq.psimu\]) with respect to $\mu$: this gives $$\left( -\frac{\hbar^2}{2m}\Delta_\perp + V_\perp(x,y)+ 3g |\psi|^2 -\mu \right)\partial_\mu \psi = \psi \simeq \phi_0$$ Eq.(\[eq.partialF-\]) thus simplifies into $$-\phi_0\hbar\partial_t{F^-}=
- \frac{ \hbar^2 }{ 2 m } \partial_z^2\left( \partial_\mu \psi \partial_n \mu {F^+}\right)
+ \phi_0\partial_n \mu {F^+}\label{eq:partialF-etape-2}$$ To find a closed equation for the axial dynamics, we multiply with $\psi(x, y; \mu )$ and integrate over the transverse coordinates. Using Eq.(\[eq:def-densite-1D\]) and its derivatives with respect to $\mu$ and $z$, we find the identities $$\int\!dxdy\, \phi_0 \partial_\mu \psi = \frac12 \partial_\mu n_0
= \frac{ 1 }{ 2 \partial_n \mu }
\,,
\qquad
\int\!dxdy\, \phi_0 \partial_z \phi_0 = \frac12 \partial_z n_0
\,.
\label{eq:astuces-projection}$$ Using the first one, Eq.(\[eq:partialF-etape-2\]) becomes: $$- \hbar\partial_t{F^-}= - \frac{ \hbar^2 }{ 4 m n_0 } \partial_z^2 {F^+}+ \partial_n\mu {F^+}\simeq
\partial_n\mu {F^+}\label{eq.F-ronde}$$ where, in the second step, we took the long-wavelength limit and neglected the $\partial_z^2$ term.
Let us now insert the *Ansatz* (\[eq.ansatzf+f-\]) into the first line of Eq.(\[eq.evolf+f-\]): $$\hbar\partial_\mu\psi \partial_n\mu \partial_t{F^+}=
\left(-\frac{\hbar^2}{2m}\partial_z^2 -\frac{\hbar^2}{2m}\Delta_\perp + V_\perp(x,y)+V(z) + g |\phi_0|^2 -\mu_p \right)
\left( \phi_0{F^-}\right)$$ The action of the operator in parentheses on $\phi_0$ simply vanishes by virtue of the Gross-Pitaevskii equation (\[eq:GPE\]). Since ${F^-}$ depends only on the axial coordinate, we are left with: $$\partial_\mu\psi\partial_n\mu \partial_t {F^+}=
-\frac{\hbar}{m}
\left(\partial_z\phi_0\right)\partial_z {F^-}- \frac{\hbar}{2m}\phi_0\partial_z^2 {F^-}$$ We again project out the transverse coordinates and use the identities (\[eq:astuces-projection\]). Combining the axial derivatives, we then have $$\partial_t {F^+}=
-\frac{\hbar}{m}\partial_z ( n_0 \partial_z {F^-})
\label{eq.F+ronde}$$ These calculations illustrate that the *Ansatz* of Eq.(\[eq.ansatzf+f-\]) captures well the axial and transverse dependence of the collective excitations in the low-dimensional gas. Note in particular how the density fluctuations ($\tilde{f}^+$) are accompanied by density-dependent changes in the transverse wave function.
To make contact with the hydrodynamic Hamiltonian (\[eq.Hydro\]), we need to relate $F^+$ and $F^-$ to the low-dimensional density and phase fields, $\delta n$ and $\varphi$. Bogoliubov theory tells us that three-dimensional density fluctuations are linked to $\tilde{f}^+$ via $\delta \rho = 2\phi_0 \tilde{f}^+$. Integrating $\delta \rho$ over the transverse plane, replacing $\tilde{f}^+$ by its Ansatz (\[eq.ansatzf+f-\]) and using Eq. (\[eq:astuces-projection\]), we obtain $$F^+ = \delta n =n-n_0.$$ Phase fluctuations on the other hand are linked to $\tilde{f}^-$ according to $\tilde{f}^-=i\phi_0\varphi$. \[Recall that the ansatz (\[eq.ansatzf+f-\]) assumes a uniform phase in the $x,y$ plane.\] Comparison with Eq. (\[eq.ansatzf+f-\]) gives immediately $$F^- = \varphi.$$ Then Eq.(\[eq.F-ronde\]) and Eq.(\[eq.F+ronde\]) are precisely the evolution equations derived from the Hamiltonian (\[eq.Hydro\]).
Hydrodynamic Bogoliubov modes {#SM}
=============================
Here we consider low-energy modes of either a three-dimensional or a low-dimensional gas, whose dynamics is described by the hydrodynamic approximation. More precisely, we diagonalize the Hamiltonian (\[eq.Hydro\]) for a given, time-independent, equilibrium profile $n_0({\bf r})$. From Eq.(\[eq.Hydro\]) we derive the evolution equations $$\frac{\partial}{\partial t}
\left(
\begin{array}{c}
\delta n/\sqrt{n_0}\\
\sqrt{n_0}\varphi
\end{array}
\right)
=
{\cal L}
\left(
\begin{array}{c}
\delta n/\sqrt{n_0}\\
\sqrt{n_0}\varphi
\end{array}
\right)
\label{eq.evolhydro}$$ where $${\cal L}=
\left(
\begin{array}{cc}
0 & - \frac{\hbar}{m\sqrt{n_0}}\nabla \cdot \left( n_0
\nabla \left( \frac{1}{\sqrt{n_0}} \cdot \right)\right)\\
- m c^2 / \hbar & 0
\end{array}
\right)$$ The factors $\sqrt{n_0}$ are convenient to give the two components the same dimension and to symmetrize the differential operator that appears in ${\cal L}$. The two equations derived from Eq.(\[eq.evolhydro\]) correspond to the hydrodynamic equations provided we identify $\hbar \nabla \varphi/m$ with the velocity: the first one is the continuity equation, the second one gives the Euler equation.
We build the mode expansion on pairs of real functions that form right eigenvectors of ${\cal L}$: $${\cal L} \left( \begin{array}{c}
f^+_\nu \\ i f^-_\nu
\end{array} \right)
=
i \omega_\nu
\left( \begin{array}{c}
f^+_\nu \\ i f^-_\nu
\end{array} \right)
\label{eq:eigenmode-problem}$$ Due to symmetry properties of ${\cal L}$, Eq.(\[eq:eigenmode-problem\]) entails the following properties: ([*a*]{}) $(f_\nu^+,-if_\nu^-)$ is a right eigenvector of ${\cal L}$ of eigenvalue $-i\omega_\nu$; ([*b*]{}) $(if_\nu^-,f_\nu^+)$ is a left eigenvector of same eigenvalue; and ([*c*]{}) different right eigenvectors of ${\cal L}$ verify $\int\!d^{d}{\bf r}\, f_{\nu}^-f_{\nu'}^+=0$. It is convenient to consider those eigenvectors of ${\cal L}$ which are normalized according to $\int\!d^{d}{\bf r}\, f_{\nu}^-f_{\nu}^+=1$. This yields the expansions $$\left(
\begin{array}{c}
\delta n/\sqrt{n_0}\\
\sqrt{n_0}\varphi
\end{array}
\right)
= \frac{ 1 }{ \sqrt{2} }
\sum_\nu \left \{
a_\nu \left(
\begin{array}{c}
f_\nu^+\\
-i f_\nu^-
\end{array}
\right)
+ a_\nu^+ \left(
\begin{array}{c}
f_\nu^+\\
if_\nu^-
\end{array}
\right)
\right \}
\label{eq.deltantheta}$$ which invert into $$a_\nu = \frac{ 1 }{ \sqrt{2} }
\int\!d^d{\bf r} \left(
\frac{ \delta n({\bf r}) }{\sqrt{n_0}} f_\nu^-({\bf r}) + i\sqrt{n_0}\,\varphi({\bf r}) f_\nu^+({\bf r})
\right)
\,.
\label{eq.anu}$$ The normalisation of the eigenvectors and the relation $[\delta n(z),\varphi(z')] = i\delta(z-z')$ ensure $[a_{\nu'},a_\nu^\dag] = \delta_{\nu',\nu}$.
We introduce the function $$g_\nu = \sqrt{n_0} \,f_\nu^+
,
\label{eq-B:def-gnu}$$ and use the relation $f_\nu^- = m c^2 f_\nu^+/(\hbar\omega_\nu)$ that follows from the eigenvalue problem (\[eq:eigenmode-problem\]). Then the normalisation of $g_\nu$ \[Eq.(\[eq:g-nu-normalisation\])\] follows from that of $(f_\nu^+,if_\nu^-)$. Defining the quadratures $x_\nu = (a_\nu+a_\nu^\dag)/\sqrt{2}$ and $p_\nu = - i(a_\nu-a_\nu^\dag)/\sqrt{2}$, the expansions (\[eq.deltantheta\]) give Eqs.(\[deltanvsxnu\]) of the main text.
Numerical calculation
=====================
For the numerical results shown in Fig.\[fig.noise-density-phase\], we have solved the Gross-Pitaevskii equation in a 1D harmonic trap by minimising the corresponding energy functional: this gives a smooth density profile $n_0( z )$. The Bogoliubov equations are solved with a finite-difference scheme on a non-uniform grid. We get a frequency spectrum that coincides to better than one percent with the Legendre spectrum for all modes with $\hbar \omega_\nu \lesssim 0.1\,g n_p$ ($n_p$ is the peak density). The traditional Bogoliubov modes $u_\nu$ and $v_\nu$ are related to the eigenfunctions of Eq.(\[eq:eigenmode-problem\]) by $$\begin{aligned}
f^+_\nu &=& \sqrt{2}\,( u_\nu + v_\nu )
\\
f^-_\nu &=& (u_\nu - v_\nu )/\sqrt{2}
\label{eq:def-fpm-numerics}\end{aligned}$$ Inserting this into Eq.(\[eq-B:def-gnu\]) gives the modes $g_\nu$. We have checked, for phonon excitations with frequencies $\hbar\omega_\nu \ll g n_p$, that the proportionality between $f^+$ and $f^-$ \[see after Eq.(\[eq-B:def-gnu\])\] is an excellent approximation in the bulk of the condensate.
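A minimal, self-contained sketch of such a calculation is given below (our own rewrite, not the code behind the figures): the Gross-Pitaevskii ground state is obtained by a simple gradient flow at fixed chemical potential, and the Bogoliubov problem is diagonalised in the $f^{\pm}$ variables defined above; the grid, the coupling constant and $\mu_p = 100\,\hbar\omega$ are assumptions chosen to mimic the parameters quoted in the main text.

```python
# Minimal sketch (our own rewrite, not the code behind the figures).
# Units: hbar = m = omega = 1; assumed parameters: mu_p = 100, g = 1, N = 500 grid points.
import numpy as np

N, L = 500, 36.0
z = np.linspace(-L / 2, L / 2, N)
dz = z[1] - z[0]
V = 0.5 * z**2
mu_p, g = 100.0, 1.0

# kinetic energy, finite differences with Dirichlet boundaries
K = -0.5 * (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
            + np.diag(np.ones(N - 1), -1)) / dz**2

# Gross-Pitaevskii ground state: explicit gradient flow of E - mu*N,
# started from the Thomas-Fermi profile (step chosen below the stability limit)
phi = np.sqrt(np.maximum(mu_p - V, 0.0) / g)
tau = 0.3 * dz**2
for _ in range(60000):
    phi -= tau * (K @ phi + (V + g * phi**2 - mu_p) * phi)
n0 = phi**2

# Bogoliubov problem in the f^{+/-} variables of Eq. (eq:def-fpm-numerics):
#   omega f^+ = 2 A f^- ,  2 omega f^- = B f^+   =>   omega^2 f^+ = A B f^+
A = K + np.diag(V - mu_p + g * n0)
B = K + np.diag(V - mu_p + 3 * g * n0)
w2, fp = np.linalg.eig(A @ B)
order = np.argsort(w2.real)
w = np.sqrt(np.clip(w2.real[order], 0.0, None))
fp = fp.real[:, order]

print(" nu   omega (numerics)   omega (Legendre)")
for nu in range(1, 7):
    fplus = fp[:, nu]
    fminus = B @ fplus / (2 * w[nu])
    norm = np.sum(fminus * fplus) * dz     # normalisation  int f^- f^+ dz = 1
    fplus /= np.sqrt(abs(norm))
    g_nu = np.sqrt(n0) * fplus             # Eq. (eq-B:def-gnu); usable in Eqs. (eq.calA), (eq.calB)
    print(f"{nu:3d}   {w[nu]:15.4f}   {np.sqrt(nu * (nu + 1) / 2):15.4f}")
```

The lowest printed mode ($\nu=1$) should reproduce the dipole (Kohn) frequency $\omega$, and the higher modes approach the Legendre spectrum $\omega\sqrt{\nu(\nu+1)/2}$ quoted above.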
[^1]: The peak density is reached at the position ${\bf r}_p$ where $V$ reaches its minimum value. We impose $V({\bf r}_p)=0$.
[^2]: We assume here that the transverse width of the cloud fulfills $l_\perp\gg a$ such that the effect of interactions is well captured treating the gas as a 3D gas.
[^3]: At the border of the [(quasi)]{}condensate, where the density becomes small, the condition $\hbar \omega_\nu \ll m c^2$ breaks down, however. The effect of phase diffusion is more carefully evaluated in Sec.\[s:harmonic\].
[^4]: \[footnote:nonthermal\] In the case of one-body losses, theories that go beyond the hydrodynamic approximation predict non-thermal states to appear, where the high-frequency modes reach higher temperatures than the phonon modes [@grisins_degenerate_2016; @johnson_long-lived_2017].
[^5]: For 1D gases, $\nu=(p,\sigma)$ where $p$ is a positive integer and $\sigma=c$ or $s$ depending on whether we consider cosine or sine modes. The wave-vector is $k_\nu=2p\pi/L$ where $L$ is the length of the box, assuming periodic boundary conditions. This generalises to higher dimensions with $\nu=(p_1,\sigma_1,p_2,\sigma_2,p_3,\sigma_3)$ in 3D for instance.
[^6]: For the condensate wave-function, we also went beyond the Thomas-Fermi approximation by allowing for a ‘spill-over’ of the condensate density beyond the inverted parabola.
[^7]: A full treatment going beyond the hydrodynamic approximation has been performed for one-body losses in homogeneous 1D quasi-condensates [@johnson_long-lived_2017; @grisins_degenerate_2016].
[^8]: Note however that the temperature used in the definition of $D$ refers to the phononic modes only.
---
abstract: 'We prove the Polyakov conjecture on the supertorus $(ST_2)$: we determine an iterative solution of the superconformal Ward identity at any order and we show that this solution is resummed by the Wess-Zumino-Polyakov (WZP) action that describes the $(1,0)$ $2D$ supergravity. The resolution of the superBeltrami equation for the Wess-Zumino (WZ) field is done by using, on the one hand, the Cauchy kernel defined on $ST_2$ in \[12\] and, on the other hand, the formalism developed in \[11\] to get the general solution on the supercomplex plane. Hence, we determine the $n$-point Green functions from the (WZP) action expressed in terms of the (WZ) field.'
address: |
$^{\rm 1}$Département de Mathématiques, F.S.T.S.,B.P. 577,\
Université Hassan 1$^{\rm er}$, Settat, Morocco[^1]\
and\
UFR-HEP-Rabat, Université MV, Faculté des Sciences,\
Département de Physiques, B.P. 1014, Rabat, Morocco\
$^{\rm 2}$ Université MV, Faculté des Sciences,\
Département de Physiques, UFR-Physiques théoriques-Rabat,\
B.P. 1014, Rabat, Morocco
author:
- 'M. Kachkachi$^{\rm 1}$ and M. Nazah$^{\rm 2}$'
date: '10/08/1997'
title: Polyakov conjecture on the supertorus
---
Introduction
=============
A consistent framework for studying $N=1$ supergravity is provided by the covariant RNS model of the superstring theory, where Lorentz invariance is manifest but space-time supersymmetry is not \[1\]. In this model, superstring theory is formulated in terms of the superfield $\Phi^\mu (x,\theta) = X^\mu (z) + \theta \Psi^\mu (z)$, where $X^\mu$ determines the position of the bosonic string and $\Psi^\mu$ its supersymmetric partner, coupled to the superzweibein which defines the geometry of the corresponding supergravity theory. However, any supergravity geometry in two dimensions is locally flat, which means that there exist local coordinates in which the superzweibein becomes flat. These local complex coordinates, together with superconformal transformations, define a compact superRiemann surface (SRS), which we denote by ${\hat \Sigma}$. Then, when we consider interactions at a given loop order $g$, the world sheet of the superstring is ${\hat \Sigma}$. The corresponding action has a large gauge symmetry: it is invariant under superdiffeomorphisms of ${\hat \Sigma}$ and under the local supersymmetry whose gauge field is the gravitino; it is also invariant under superWeyl transformations as well as local Lorentz transformations of the superzweibein.
In the Polyakov formalism \[2\], which is geometric and can thus treat global objects, superstring quantization involves a functional integration over the superfield $\Phi^\mu$, which is Gaussian, and one over the superzweibein, which is non-trivial and leads to two different settings depending on the gauge we choose. In the superconformal gauge, obtained by transforming the superzweibein into a flat one by superdiffeomorphisms and superWeyl rescalings, the functional integration leads to the superLiouville theory \[3\], which represents the degree of freedom of the 2-dimensional supergravity.\
One can choose the chiral gauge, which has a single non-vanishing metric mode, the superBeltrami differential that represents the graviton-gravitino multiplet, and recast the theory in a local form by introducing the (WZ) field defined by the superBeltrami equation. This field is the projective coordinate that represents the structure parametrized by the superBeltrami differential. Indeed, let us consider a SRS ${\hat \Sigma}$ (without boundary) of genus $g$, with a reference conformal structure $\{(z,\theta)\}$ together with an isothermal structure $\{({\hat Z},{\hat \Theta})\}$. The latter is obtained from the reference one by a quasisuperconformal transformation, i.e. a transformation that changes a circle into an ellipse. This transformation is parametrized in general by three superBeltrami differentials, of which only two are linearly independent. There is a formalism in which one of the independent differentials is set to zero, as it contains only non-physical degrees of freedom; one thus ends up with only one superBeltrami differential, and this implies the existence of a superconformal structure on the SRS, which is necessary for defining the Cauchy-Riemann operator. This formalism is used in \[4,5\]. However, it is more natural from a geometrical point of view to work in another formalism that also reduces the number of superBeltrami differentials to one, by eliminating the ${\bar \theta}$-dependence in the coordinates $({\hat Z},{\hat \Theta })$ \[6,7\] in addition to the superconformal structure condition of the previous formalism. The superconformal structure thus defined is parametrized by a single superBeltrami differential ${\hat \mu}$. More importantly, this gauge decouples the superBeltrami equations satisfied by ${\hat Z}$ and by the (WZ) field ${\hat \Theta}$, which are then more easily solved using the techniques of the Cauchy kernel. The solutions thus obtained enable us to write the action as a functional of the superBeltrami differential ${\hat \mu}$, from which we compute the Green functions and the energy-momentum tensor whose external source is ${\hat \mu}$.\
In this parametrization the superWeyl invariant effective action splits into two terms, i.e. $$\Gamma[{\hat \mu},{\bar {\hat\mu}}; {\hat R_0},{\bar {\hat R_0}}] =
\Gamma_{WZP}[{\hat \mu}; {\hat R_0}] + {\bar \Gamma_{WZP}}[{\bar {\hat \mu}};
{\bar {\hat R_0}}],$$ where ${\hat R_0}$ is a holomorphic background superprojective connection in the superconformal structure $\{(z,\theta)\}$, i.e. ${\bar
D_\theta }{\hat R_0} = 0$, which is introduced to ensure a proper gluing of the anomaly on ${\hat \Sigma }$. $ \Gamma_{WZP}$ is the Wess-Zumino-Polyakov action which describes the 2D induced quantum supergravity in the light-cone gauge, i.e. $ds^2 = (dz+ {\hat \mu}d{\bar z} + \theta d\theta)d{\bar
z}$. It depends on the background conformal geometry parametrized by the pair $({\hat \mu}, {\hat R_0})$ and satisfies the superconformal Ward identity \[8,9\] $$({\bar \partial} - {\hat \mu} \partial -{3\over 2} \partial {\hat \mu}
-{1\over 2}D{\hat \mu}D) {\delta \Gamma_{WZP} \over \delta {\hat \mu}} =
k\partial^2 D{\hat \mu},$$ where k is the central charge of the model which is the remnant of the matter system after functional integration. It measures the strength of the superdiffeomorphisms anomaly.\
Solving eq.(1.2) on a superRiemann surface of genus $g$ is the starting point for studying 2-dimensional superconformal models thereon. A solution to this superconformal Ward identity was found by Grundberg and Nakayama in \[10\] on the supercomplex plane. Then, the Polyakov conjecture on the supercomplex plane, which tells us that the iterative solution of eq.(1.2) is resummed by the (WZP) action, was proved in \[11\]. The generalization of this solution, at third order in the perturbative series in ${\hat \mu}$, to the supertorus was given in \[12\], and that to a $g$-SRS was performed in \[5\]. The subject of this work is to prove the Polyakov conjecture on the supertorus $ST_2$ at any order of the perturbative series and then to compute the $n$-point Green function for generic $n$ starting from the (WZP) action on $ST_2$. To do this, we consider on the one hand the superquasielliptic Weierstrass ${\hat \zeta}$-function (the supersymmetric extension of the Weierstrass $\zeta$-function) constructed in \[12\] as the ${\bar \partial}$-Cauchy kernel on $ST_2$ to solve the superBeltrami equations (SBE). On the other hand, we adopt here the formalism developed in \[11\] to get the perturbative series solution on the supercomplex plane.
Resolution of (SBE) on $ST_2$
==============================
The superBeltrami equation in terms of the (WZ) field ${\hat \Theta}$ can be written as \[11,12\]: $${\bar \partial}\Lambda = {1\over 2}\partial{\hat \mu} + BD\Lambda,$$ where $B = {\hat \mu}D + {1\over 2}D{\hat \mu}$, $\Lambda = \ln D{\hat
\Theta}$ and $D = \partial_\theta + \theta \partial_z$. Then, using the generalized Cauchy formula introduced in \[12\] that is, $$({\bar \partial}^{-1} F)(z_1,\theta_1) = \int_{ST_2} d\tau_2 {\hat
\zeta_{1,2}} F(z_2,\theta_2),$$ where $${\hat \zeta_{1,2}} \equiv (\theta_2 -\theta_1)\zeta(z_1 - z_2),$$ $$d\tau_2 \equiv {dz_2 \wedge d{\bar z_2}\over 2\pi i}d\theta_2,$$ and $$\int_{ST_2} d\tau_2 \delta^3 (a_1 - a_2) F(a_2) = F(a_1)$$ with $$\begin{aligned}
a_i \equiv (z_i,{\bar z_i},\theta_i),\nonumber\\\end{aligned}$$ we get the solution of eq.(2.1) as a formal series: $$\Lambda = \sum_{n=1}^{\infty} {\bar \partial }^{-1} \lambda_{n} (z,\theta),$$ with\
$\lambda_{1} = {1\over 2}\partial {\hat \mu}$ and $\lambda_{n} =
BD{\bar \partial}^{-1} \lambda_{n-1}$.\
For $n \geq 2$ we find, for the $n$-term of the series (2.6) the expression $${\bar \partial }^{-1}\lambda_{n} = (-1)^{{n(n-1)\over 2}}\int_{ST_{2}}
\prod_{j=2}^{n+1} d\tau_{j} \prod_{i=1}^{n-1} ({\hat \zeta_{i,i+1}}
B_{i+1}D_{i+1}) {\hat \zeta_{n,n+1}} \lambda_{1} (a_{n+1}).$$ $B_{i}$ means that $B$ is evaluated at the point $a_{i}$. The sign in front of the integral arises from the commutation of the Cauchy kernel ${\hat \zeta}$ with the product of measures $\prod_{j=2}^{n+1}d\tau_{j}$. Here we have adopted the convention $\prod_{i=1}^{0} ({\hat \zeta_{i,i+1}}B_{i}D_{i+1}) \equiv 1$. One should note that the formula (2.7) contains powers of the superBeltrami differential ${\hat \mu}$ and of its derivatives. In order to express this equation in powers of ${\hat \mu}$ only, we rewrite eq.(2.7) as follows: $$\begin{aligned}
{\bar \partial}^{-1} \lambda_{n} (a_{1}) = (-1)^{{n(n-1) \over
2}} \int_{ST_{2}} \prod_{j=2}^{n+1} d\tau_{j} f_{1,k-1} {\hat
\zeta_{k,k+1}} \partial_{k+1} f_{k+1,n-1}.g +
\end{aligned}$$ $$\begin{aligned}
(-1)^{{n(n-1)\over 2}}
\int_{ST_{2}} \prod_{j=2}^{n+1} d\tau_{j} f_{1,k-1} {\hat
\zeta_{k,k+1}} (D_{k+1} {\hat \mu} (a_{k+1}))D_{k+1}f_{k+1,n-1}.g,\end{aligned}$$ where $f_{l,m} = \prod_{i=l}^{m} ({\hat \zeta_{i,i+1}}B_{i+1}D_{i+1})$, $g = {\hat \zeta_{n,n+1}} \lambda_{1} (a_{n+1})$ and where the $k$-th term of the product $({\hat \zeta_{k,k+1}} B_{k+1} D_{k+1})$ was expanded.\
The integration by parts of the second term on the r.h.s. of eq.(2.8) yields $${\bar \partial}^{-1} \lambda_{n}(a_{1}) = {(-1)^{{n(n-1)\over 2}} \over
2^{n}} \int_{ST_{2}} \prod_{j=2}^{n+1} d\tau_{j}
[ \prod_{i=1}^{n-1}({\hat \zeta_{i,i+1}} \partial_{i+1} -D_{i}{\hat
\zeta_{i,i+1}} D_{i+1}) \partial_{n} {\hat \zeta_{n,n+1}}
\prod_{l=2}^{n+1} {\hat \mu}(a_{l})]$$ and then, the summation over the index $n$ gives the superfield $\Lambda$.\
For example, one can verify that $\Lambda$ is given at the second order in ${\hat \mu}$ by the relation $$\Lambda (a_{1}) = {1\over 2} \int_{ST_{2}} d\tau_{2} \partial_{1} {\hat
\zeta_{1,2}} {\hat \mu}(a_{2}) - {1\over 4} \int_{ST_{2}} d\tau_{23}
[({\hat \zeta_{1,2}} \partial_{2} - D_{1} {\hat \zeta_{1,2}}
D_{2}) \partial_{2} {\hat \zeta_{2,3}}] {\hat \mu }(a_{2}) {\hat
\mu}(a_{3}),$$ where $d\tau_{23} \equiv d\tau_{2} d\tau_{3}$, that agrees with the solution given in \[12\].\
Hence, we have obtained the perturbative expression for the superprojective coordinates $({\hat Z},{\hat \Theta})$ in terms of the reference complex structure $(z,{\bar z}, \theta)$ on the supertorus.
The $n$-point Green function from the WZP action on $ST_{2}$
==============================================================
The WZP action on the supertorus $ST_{2}$ introduced in \[5\] is expressed as$$\Gamma_{WZP} [{\hat \mu},R_{0}] = k \int_{ST_{2}} d\tau_{1} \{2 (R_{0} -
R) {\hat \mu} + (\chi - \chi_{0} ) \Delta_{\chi} {\hat \mu} \}(a_{1}),$$ where $ \chi = - D\ln D{\hat \Theta}$ is a superaffine connection, $\Delta_{\chi} {\hat \mu} \equiv (\partial - 2 D\chi + \chi D) {\hat
\mu}$. $R_{0}$ is the background superprojective connection introduced to guarantee the global definition of the anomaly on $ST_{2}$. The superaffine connection $\chi_{0}$ appears in the action (3.1) to make it globally defined. However, this does not enter the superdiffeomorphism anomaly because it is not a fundamental parameter in the theory and does not contribute to the stress-energy tensor whose exterior source is ${\hat \mu}$: $$T(a_{1}) = 2k (R_0 -R).$$ $R = -\partial \chi - \chi D \chi$ is the superprojective connection. After some manipulation using the anti-commuting property of $\chi$, the action (3.1) reduces to the expression $$\Gamma_{WZP} [{\hat \mu},R_{0}] = F[\chi_{0} , \partial \Lambda ,{\hat
\mu}, D{\hat \mu},R_{0}] + k \int_{{ST_2}} d \tau_{1} \partial_{1} D_{1}
\Lambda (a_{1}) {\hat \mu}(a_{1}),$$ where $F$ is some functional that does not contribute to the $n$-point Green function for $n \geq 2$.\
Now, eqs.(2.6) and (2.9) enable us to express the action (3.3) in the following form: $$\begin{aligned}
\Gamma_{WZP} = F + k \pi \sum_{n=1}^{\infty} {(-1)^{{n(n+1) \over 2}}
\over 2^{n}} \int_{ST_{2}} \prod_{j=1}^{n+1} d \tau_{j} [ \partial_{1}
D_{1} \prod_{i=1}^{n-1} ({\hat \zeta_{i,i+1}} \partial_{i+1}\end{aligned}$$ $$- D_{i}{\hat \zeta_{i,i+1}} D_{i+1}) \partial_{n} {\hat \zeta_{n,n+1}}]
\prod_{l=1}^{n+1} {\hat \mu} (l).$$ Then, from this action, we derive the $n$-point Green function as follows:
$$< T(1)...T(n) > \equiv (-1)^{n} {\delta ^{n} \Gamma_{WZP} \over \delta
{\hat \mu}(1)...\delta {\hat \mu}(n)}|_{{\hat \mu}(n) = 0}$$
$$= k{(-1)^{{n(n-1)\over 2}} \over (2\pi)^{n-1}}{\sum_{perm (p\neq 1)}} (-1)^{p}
\partial_{1} D_{1} \prod_{i=1}^{n-2} ({\hat \zeta_{i,i+1}}
\partial_{i+1} - D_{i} {\hat \zeta_{i,i+1}} D_{i+1}) \partial_{n-1}
{\hat \zeta_{n-1,n}}. \eqno(3.5)$$ The sum over all possible permutations with $p \neq 1$ is understood, and $(-1)^p$ stands for the sign of the permutation. Furthermore, after some algebraic calculations, we get the final expression for the $n$-point Green function of the induced $(1,0)$-supergravity on the supertorus:
$$<T(1)...T(n)> = {k(-1)^{{n(n+1)\over 2}}\over 2(2\pi)^{n-1}}
{\sum_{perm (p\neq 1)}}(-1)^{p} [\prod_{i=1}^{n-2}(2{\hat
\zeta_{i,i+1}}\partial_{i+1} +D_{i}{\hat \zeta_{i,i+1}}D_{i+1} -$$
$$3\partial_{i}{\hat \zeta_{i,i+1}})D_{n-1}\partial_{n-1}^2{\hat
\zeta_{n-1,n}}]. \eqno(3.6)$$ Then, we derive the Ward identity corresponding to the $n$-point Green function by applying the Cauchy operator, defined say at a point $a_{1}$, to the l.h.s. of eq.(3.6):
$${\bar \partial}_{1}<T(1)...T(n)> = {k(-1)^{{n(n+1)\over 2}}\over
2^{n}}{\sum_{perm(p\neq 1)}} (-1)^{p}(2\delta^3 (a_{1}-a_{2})\partial_{2}
+D_{1}\delta^3 (a_{1}-a_{2}) D_{2}-$$ $$3\partial_{1} \delta^3 (a_{1}-a_{2}))\prod_{i=2}^{n-2} (2{\hat
\zeta_{i,i+1}} \partial_{i+1} +D_{i}{\hat
\zeta_{i,i+1}}D_{i+1}-3\partial_{i} {\hat \zeta_{i,i+1}})
D_{n-1}\partial^2_{n-1} {\hat \zeta_{n-1,n}}. \eqno(3.7)$$ For example, putting $n = 3$ in eqs.(3.6) and (3.7), we recover the results established in \[12\] for the $3$-point function and its associated Ward identity, which are, respectively,
$$<T(1)T(2)T(3)> = {k\over 2(2\pi)^{2}} {\sum_{perm(p\neq 1)}}
(-1)^{p}(2{\hat \zeta_{1,2}} \partial_{2} + D_{1}{\hat \zeta_{1,2}}
D_{2} - 3\partial_{1} {\hat \zeta_{1,2}}) D_{2}\partial^{2}_{2} {\hat
\zeta_{2,3}}, \eqno(3.8)$$
$${\bar \partial}_{1} <T(1)T(2)T(3)> = {k\over 2(2\pi)^2} \{[2\delta^{3}
(a_{1} -a_{2}) \partial_{2} +$$
$$D_{1}\delta^3 (a_{1} -a_{2}) D_{2} -
3\partial_{1} \delta^3 (a_{1} -a_{2})]D_{2}\partial^{2}_{2} {\hat
\zeta_{2,3}} - (2\leftrightarrow 3) \}. \eqno(3.9)$$
This shows that the formalism developed in \[11\] is general and applicable to any SRS of genus $g$.
Solution of the superconformal Ward identity on $ST_{2}$
========================================================
Now, let us rewrite the superconformal Ward identity (1.2) in the form: $${\bar \partial} \left ({{\delta \Gamma_{WZP}} \over {\delta {\hat \mu}}} \right ) = p_{1} +
K {\delta \Gamma_{WZP} \over \delta {\hat \mu}},$$ with $p_{1} = k\partial^{2} D{\hat \mu}$ and $K = {\hat \mu}\partial +
{3\over 2}\partial {\hat \mu} + {1\over 2}D{\hat \mu}D$.\
Then, using the iterative method given in section 2 we get $${\delta \Gamma _{wzp} \over \delta {\hat \mu}} =\sum_{n=1}^{\infty}
{\bar \partial}^{-1} p_{n},$$ where $p_{n} = K{\bar \partial}^{-1}p_{n-1}$\
and
$${\bar \partial}^{-1}p_{n} = {(-1)^{n(n-1)\over 2}\over
2^{n-2}}k\int_{ST{_2}} \prod_{j=2}^{n+1} d\tau_{j} {\hat \zeta_{1,2}}
\prod_{i=2}^{n-1} [{\hat \mu} (i) {\hat \zeta_{i,i+1}} \partial_{i+1} +
{3\over2}\partial_{i} {\hat \mu}(i) {\hat \zeta_{i,i+1}}$$ $$+{1\over 2}D_{i+1}{\hat \mu}(i)D_{i}{\hat \zeta_{i,i+1}} \partial^2_{n+1}
D_{n+1} {\hat \mu}(n+1)].$$ Furthermore, to express eq.(4.3) in terms of ${\hat \mu}$ only, the integration by parts must be considered and then, we obtain $${\bar \partial}^{-1}p_{n} = {(-1)^{{n(n+1)\over 2}} \over
2^{n-1}}k \int_{{ST_2}} \prod_{j=2}^{n+1} d\tau_{j}
\prod_{i=1}^{n-1}[(2{\hat \zeta_{i,i+1}} \partial_{i+1}$$ $$+D_{i}{\hat \zeta_{i,i+1}} D_{i+1} -3\partial_{i} {\hat
\zeta_{i,i+1}})D_{n}\partial_{n}^{2} {\hat
\zeta_{n,n+1}} \prod_{l=2}^{n+1} {\hat \mu}(l)].$$ Hence, by using eq.(4.2) we obtain $\delta \Gamma_{WZP} \over \delta
{\hat \mu}(1)$, and the integration of the latter gives the $n$-point Green function, which coincides with eq.(3.6). Furthermore, this result means that the Polyakov action resums the perturbative series solution of the superconformal Ward identity, which proves the Polyakov conjecture on the supertorus.
Conclusion and Open problems
============================
In this paper we have proved the Polyakov conjecture on the supertorus by using, on the one hand, the solution of the (SBE) established with the help of the superWeierstrass ${\hat \zeta}$-function introduced in \[12\] and, on the other hand, the material developed in \[11\] to get the general ($n$-point) Green function on the supercomplex plane.\
However, one can express the superLiouville theory in the framework of the formalism developed here and in refs.\[11,12\] by considering the superconformal gauge. This can be done by expressing the Liouville field in terms of the Beltrami field $\mu$. This Liouville field can be seen to satisfy the classical Liouville equation \[13\] $$-\Delta \Psi = R - \exp {(-2\Psi)} R_{0}$$ by taking a metric conformally equivalent to a given metric of constant curvature $(i.e. g = \exp {(2\Psi)} g_{0})$. Then, the supersymmetric extension of this development can be easily established.
Acknowledgment
=============
One of the authors (M.K.) would like to thank Professor M. Virasoro for his hospitality at ICTP where this work was partially done.
References {#references .unnumbered}
==========
\[1\] M. Kaku, Introduction to superstrings, Berlin, Heidelberg, New York, Springer [**1988**]{}.
\[2\] A. M. Polyakov, Phys. Lett. [**103 B**]{} (1981) 207; Phys. Lett. [**103 B**]{} (1981) 211.
\[3\] J. Distler and H. Kawai, Nucl. Phys. [**B321**]{} (1989) 509.
\[4\] J. -P. Ader and H. Kachkachi, Class. Quantum Grav. [**10**]{} (1993) 417.
\[5\] J. -P. Ader and H.Kachkachi, Class. Quantum Grav. [**11**]{} (1994) 767.
\[6\] L. Crane and J. Rabin, Commun. Math. Phys. [**113**]{} (1988) 601.
\[7\] M. Takama, Commun. Math. Phys. [**143**]{} (1991) 149.
\[8\] M. T. Grisaru and R. -M. Xu, Phys. Lett. [**B205**]{} (1993) 1.
\[9\] F. Delduc and F. Gieres, Class. Quantum Grav. [**7**]{} (1990) 1907.
\[10\] J. Grundberg and R. Nakayama, Mod. Phys. Lett. [**A4**]{} (1989) 55.
\[11\] M. Kachkachi and S. Kouadik, J. Math. Phys. [**38(7)**]{} (1997).
\[12\] H. Kachkachi and M. Kachkachi, Class. Quantum Grav. [**11**]{} (1994) 493.
\[13\] C. Itzykson and J. -M. Drouffe, Statistical field theory: 2,
Cambridge University Press, Cambridge [**1989**]{}.
[^1]: Permanent Address
---
abstract: 'We analyze theoretically the many-body dynamics of a dissipative Ising model in a transverse field using a variational approach. We find that the steady state phase diagram is substantially modified compared to its equilibrium counterpart, including the appearance of a multicritical point belonging to a different universality class. Building on our variational analysis, we establish a field-theoretical treatment corresponding to a dissipative variant of a Ginzburg-Landau theory, which allows us to compute the upper critical dimension of the system. Finally, we present a possible experimental realization of the dissipative Ising model using ultracold Rydberg gases.'
author:
- 'Vincent R. Overbeck'
- 'Mohammad F. Maghrebi'
- 'Alexey V. Gorshkov'
- Hendrik Weimer
bibliography:
- '../../Papers/bib/bib.bib'
title: Multicritical behavior in dissipative Ising models
---
The continuous transition between a paramagnetic and a ferromagnetic phase within the Ising model in a transverse field is one of the most important examples of a quantum phase transition. At finite temperature, thermal fluctuations dominate while the phase transition between the two phases remains continuous [@Sachdev1999]. Here, we show that adding dissipation to the model strongly modifies the phase diagram and gives rise to a multicritical point belonging to a different universality class.
Rapid experimental progress in the control of tailored dissipation channels [@Syassen2008; @Baumann2010; @Barreiro2011; @Krauter2011; @Barontini2013], combined with prospects to use dissipation for the preparation of interesting many-body states [@Diehl2008; @Verstraete2009; @Weimer2010], has put dissipative quantum many-body systems at the forefront of ultracold atomic physics, quantum optics, and solid state physics. In particular, systems driven to highly excited Rydberg atoms have emerged as one of the most promising routes [@Raitzsch2009; @Carr2013; @Malossi2014; @Schempp2014; @Urvoy2015; @Weber2015; @Goldschmidt2016; @Lee2011; @Honer2011; @Glatzle2012; @Ates2012a; @Lemeshko2013a; @Hu2013; @Honing2013; @Otterbach2014; @Sanders2014; @Hoening2014; @Marcuzzi2014; @Marcuzzi2016], as the dissipation and interaction properties of Rydberg gases can be very widely tuned [@Low2012]. These crucial experimental advances have led to the investigation of driven-dissipative models in a wide range of theoretical works [@Glatzle2012; @Goldstein2015; @Goldschmidt2016; @Hu2013; @Sieberer2013; @Wilson2016; @LeBoite2013; @Joshi2013; @Tomadin2010; @Tomadin2011; @Jin2014; @Marino2016; @Torre2010; @Mascarenhas2015; @Cui2015]. However, the theoretical understanding of dissipative quantum many-body systems is still in its infancy, as many of the concepts and methods from equilibrium many-body systems cannot be applied. As a consequence, little is known even about the most basic dissipative models.
![Phase diagram of the three-dimensional dissipative Ising model according to the variational principle based on product states. The system can undergo phase transitions between ferromagnetic (FM) and paramagnetic (PM) phases, which can be either continuous or first order. The continuous and first order transition lines meet at a tricritical point.[]{data-label="fig:phasediagram"}](PhaseDiagram3d.pdf){width="\linewidth"}
In this Letter, we perform a variational analysis of the steady state of dissipative Ising models using a recently introduced variational method. In contrast to the equilibrium case, we find that the continuous transition is replaced by a first order transition if the dissipation is sufficiently stronger than the transverse field, see Fig. 1. Strikingly, we find that the model gives rise to a multicritical behavior, as the two types of transitions are connected by a tricritical point. This deviation from the equilibrium situation underlines the fact that dissipative many-body systems constitute an independent class of dynamical systems that go beyond the presence of a finite effective temperature. Furthermore, we establish a field-theoretical treatment of dissipative many-body systems corresponding to a Ginzburg-Landau theory, which allows us to identify the upper critical dimension of the tricritical point. Finally, we give a concrete example of a possible experimental realization of dissipative Ising models based on Rydberg-dressed atoms in optical lattices, showing that the observation of the tricritical point is within reach in present experimental setups.
Dissipative systems are no longer governed by the unitary Schrödinger equation, but have to be described in terms of a quantum master equation instead. Here, we consider the case of a Markovian master equation for the density operator $\rho$, given in the Lindblad form as $$\begin{aligned}
\frac{d}{dt}\rho=-i[H,\rho]+ \sum\limits_{i} \left(c_i\rho
c_i^{\dagger}-\frac{1}{2}\{c_i^\dagger c_i,\rho\} \right).
\label{eq:mastereq}\end{aligned}$$ Importantly, dissipative quantum systems generically relax towards one or more steady states, which can be found by solving the equation $d \rho /dt = 0$.
For the dissipative Ising model, the Hamiltonian is of the form $$\begin{aligned}
H={\Delta}\sum_{i}\sigma_z^{(i)}-J\sum_{\langle i j \rangle}\sigma_x^{(i)}\sigma_x^{(j)}
\label{eq:Hamiltonian},\end{aligned}$$ where $\Delta$ denotes the strength of the transverse field and $J$ indicates the strength of the ferromagnetic Ising interaction. The quantum jump operators $c_i = \sqrt{\gamma} \sigma_-^{(i)}$ describe dissipative spin flips occurring with a rate $\gamma$. Consequently, this dissipative Ising model is a straightforward generalization of the transverse-field Ising model to include Lindblad dynamics. Note that the present model is unrelated to a series of similarly named models, where a strong coupling to the bath is present [@Werner2004; @Werner2005] or where explicit time-dependent driving is considered [@Goldstein2015]. We would also like to stress that in contrast to previous studies of dissipative Rydberg gases [@Lee2011; @Ates2012a; @Hu2013; @Hoening2014; @Weimer2015; @Weimer2015a; @Overbeck2016], the present model exhibits a global $Z_2$ symmetry. Since the dissipation acts in the eigenbasis of the transverse field, the master equation is invariant under applying a $\sigma_z$ transformation to all the spins. Different driven-dissipative models with $Z_2$ symmetry have been investigated previously [@Lee2013; @Torre2013]. Crucially, this $Z_2$ symmetry can be spontaneously broken by the steady state of the dynamics, constituting a continuous dissipative phase transition. Interestingly, recent results obtained within the Keldysh formalism show that this continuous transition can break down for sufficiently strong dissipation [@Maghrebi2016], hinting that the dissipative phase diagram is much richer than its equilibrium counterpart.
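For orientation, the steady state of Eq. (\[eq:mastereq\]) with the Hamiltonian (\[eq:Hamiltonian\]) can be obtained by brute force for a very small system; the sketch below (three sites with periodic couplings and arbitrary parameter values) simply finds the null vector of the vectorised Liouvillian and is meant only as a reference point, not as the method used in this work.

```python
# Brute-force reference: steady state of the master equation for a 3-site
# periodic chain (illustrative parameters; not the variational method of the text).
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)     # sigma_-
id2 = np.eye(2, dtype=complex)

def op(single, site, n):
    """Embed a single-site operator at position `site` of an n-site chain."""
    return reduce(np.kron, [single if k == site else id2 for k in range(n)])

n, J, Delta, gamma = 3, 1.0, 0.5, 1.0
H = sum(Delta * op(sz, i, n) for i in range(n)) \
    - sum(J * op(sx, i, n) @ op(sx, (i + 1) % n, n) for i in range(n))
jumps = [np.sqrt(gamma) * op(sm, i, n) for i in range(n)]

d = 2**n
I = np.eye(d)
# Liouvillian on column-stacked density matrices: vec(A rho B) = (B^T kron A) vec(rho)
Lsup = -1j * (np.kron(I, H) - np.kron(H.T, I))
for c in jumps:
    cdc = c.conj().T @ c
    Lsup += np.kron(c.conj(), c) - 0.5 * (np.kron(I, cdc) + np.kron(cdc.T, I))

# steady state = eigenvector of the Liouvillian with eigenvalue (closest to) zero
w, v = np.linalg.eig(Lsup)
rho = v[:, np.argmin(np.abs(w))].reshape(d, d, order='F')
rho /= np.trace(rho)
print("<sigma_z>          =", np.real(np.trace(op(sz, 0, n) @ rho)))
print("<sigma_x sigma_x>  =", np.real(np.trace(op(sx, 0, n) @ op(sx, 1, n) @ rho)))
```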
Here, we will calculate the properties of this steady state using a recently established variational principle [@Weimer2015]. In a spirit similar to equilibrium thermodynamics, where a free energy functional has to be minimized, we consider a functional for dissipative systems that becomes nonanalytic at a dissipative phase transition. To be specific, we will choose our variational manifold as product states of the form $$\rho = \prod\limits_i \rho_i,\;\;\;\;\;
\rho_{i} =\frac{1}{2} \left( 1+ \sum\limits_{\mu\in \{x,y,z\}} \alpha_\mu^{\phantom \dagger} \sigma_\mu^{(i)} \right).
\label{eq:varstate}$$ Later on, we will investigate in detail the validity of this approach by explicitly considering fluctuations around product states. In the case of product states, the variational principle is based on the minimization of $ D= \sum_{\langle ij \rangle }||\dot \rho_{ij}||$ [@Weimer2015]. Here, the norm $||\dot \rho_{ij}||$ is given by the trace norm, $||x||=\text{Tr}\{|x|\}$, and $\dot \rho_{ij}$ is the reduced two-site operator obtained after taking the partial trace of the time derivative $d \rho /dt $, according to $\dot
\rho_{ij}=\text{Tr}_{\not \phantom i i \not j}\{d \rho /dt \}$. As our model is translationally invariant, it is sufficient to consider the variational norm of a single bond $||\dot \rho_{ij}||$. Then, the steady state is approximated by the variational minimization procedure $||\dot \rho_{ij}|| \rightarrow \text{min}$. We would like to stress that although our ansatz according to Eq. (\[eq:varstate\]) is a product state, the variational principle differs from a pure mean-field decoupling as $\dot \rho_{ij}$ includes the time derivative of correlation functions [@Weimer2015].
Next, we perform an expansion of the variational norm $||\dot \rho_{ij}||$ in the order parameter $\phi \equiv \langle \sigma_x \rangle$, in close analogy to Landau theory for equilibrium phase transitions. The degree of non-analyticity of the order parameter can be used to classify the phase transition: a discontinuous jump indicates a first order transition, while a diverging derivative corresponds to a second order transition. Within our product state approach, we choose the variational parameters according to $$\alpha=\left (\langle \sigma_x \rangle,\langle \sigma_y \rangle , \langle \sigma_z \rangle \right )=(\phi, c\phi , \lambda).
\label{eq:alpha}$$ Separating the order parameter $\phi$ in the $\langle \sigma_y \rangle $ expression has the advantage that $c$ becomes an analytic function. In the following, we will choose $\lambda$ such that we always have a pure state satisfying $|\alpha|^2 = 1$. Taking $\lambda$ as an independent variational parameter does not lead to a significant difference in our results, i.e., solutions close to phase boundaries exhibit high purity.
Expanding $||\dot \rho_{ij}||$ up to the sixth order in $\phi$ leads to $$||\dot \rho_{ij}||= u_0+u_2 \phi^2+u_4 \phi^4+u_6 \phi^6
\label{eq:normexp}$$ as odd powers in $\phi$ vanish because of the $Z_2$ symmetry. From the exact diagonalization of the $4\times 4$ matrix $\dot{\rho}_{ij}$, we can readily calculate the expansion coefficients $u_n$ as functions of the coupling constants $J$, $\Delta$, and $\gamma$, as well as the coordination number $z$ and the variational parameter $c$.
As the next step, we determine the variational solution for the parameter $c$. According to our ansatz of Eq. (\[eq:alpha\]), the non-analytic behavior is contained in $\phi$, whereas $c$ is a smooth function. Therefore the value of $c$ close to phase boundaries is fixed by its behavior far away from phase transitions. In the latter regime, $\phi^2$ is the leading order of the variational functional Eq. (\[eq:normexp\]), which allows us to find the variational minimum by minimizing only $u_2$. Doing so with respect to $c$ leads to $$\begin{aligned}
c=\frac{J \gamma z}{(\gamma/2)^2+4\Delta^2}.
\label{eq: c}\end{aligned}$$ Using that expression for $c$, there is only the order parameter $\phi$ left as an independent variational parameter. Consequently, we have successfully constructed the equivalent of Landau theory for dissipative phase transitions and determined all expansion parameters from the microscopic quantum master equation [^1].
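A numerical sketch of how this functional can be evaluated is given below; the reduction of $\dot\rho_{ij}$ to a two-site operator with a mean-field coupling to the remaining $z-1$ bonds follows our reading of the definitions above, and the values of $J$, $\Delta$, $\gamma$ and $z$ are illustrative.

```python
# Sketch of the product-state variational norm ||rho_dot_ij|| defined above.
# The two-site reduction with a mean-field coupling to the remaining z-1 bonds
# is our reading of the construction; J, Delta, gamma, z are illustrative values.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def bond_norm(phi, J, Delta, gamma, z):
    c = J * gamma * z / ((gamma / 2)**2 + 4 * Delta**2)    # expression for c derived above
    lam = -np.sqrt(max(1.0 - (1 + c**2) * phi**2, 0.0))    # pure state, |alpha|^2 = 1
    rho1 = 0.5 * (id2 + phi * sx + c * phi * sy + lam * sz)
    rho = np.kron(rho1, rho1)
    # two-site Hamiltonian, with the z-1 remaining bonds treated at mean-field level
    H = Delta * (np.kron(sz, id2) + np.kron(id2, sz)) \
        - J * np.kron(sx, sx) \
        - J * (z - 1) * phi * (np.kron(sx, id2) + np.kron(id2, sx))
    drho = -1j * (H @ rho - rho @ H)
    for C in (np.sqrt(gamma) * np.kron(sm, id2), np.sqrt(gamma) * np.kron(id2, sm)):
        drho += C @ rho @ C.conj().T - 0.5 * (C.conj().T @ C @ rho + rho @ C.conj().T @ C)
    return np.sum(np.abs(np.linalg.eigvalsh(drho)))        # trace norm of the Hermitian drho

z, gamma = 6, 1.0
for Delta, J in [(0.5, 0.3), (0.5, 0.6)]:                  # illustrative parameter points
    c = J * gamma * z / ((gamma / 2)**2 + 4 * Delta**2)
    phis = np.linspace(0.0, 0.999 / np.sqrt(1 + c**2), 300)
    norms = [bond_norm(p, J, Delta, gamma, z) for p in phis]
    kmin = int(np.argmin(norms))
    print(f"J={J}, Delta={Delta}, gamma={gamma}: variational minimum at phi = {phis[kmin]:.2f}")
```

As a sanity check, at $\phi=0$ (the state $\alpha=(0,0,-1)$) this construction returns $||\dot\rho_{ij}||=2J$, as quoted further below in the text.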
![Expansion in the variational norm $||\dot \rho_{ij}||$ to the sixth order in $\phi$ close to the second order transition (left) and close to the first order transition (right). In the ferromagnetic phase, the minimal variational norm is found at $\phi \ne 0$ (solid lines). In the paramagnetic phase, the global minimum is located at $\phi=0$ (dashed lines). []{data-label="fig:varnorm"}](VarNormexp.eps){width="1\linewidth"}
The dissipative functional of Eq. (\[eq:normexp\]) is mathematically equivalent to the free energy functional of a $\phi^6$ theory, whose possible phases are known [@Chaikin1995]. For $u_4>0$, the $\phi^6$ term is irrelevant, and there is a continuous Ising transition between a paramagnetic phase ($u_2 > 0$) and a ferromagnetic phase ($u_2 < 0$). Close to the transition, the order parameter behaves as $\phi = \pm (|u_2|/2u_4)^{1/2}$. In the equilibrium Ising model, $u_4$ is always positive, but here we find that this is not the case when adding dissipation. If the dissipation rate $\gamma$ is sufficiently larger than the transverse field $\Delta$, $u_4$ will become negative, which substantially alters the phase diagram of the model. In order to find a stable variational solution, it is then necessary to also consider the $\phi^6$ term of the series expansion. We find that the variational norm has three different minima, which transforms the transition between the paramagnetic and the ferromagnetic phase into a first order transition, see Fig. \[fig:varnorm\]. Remarkably, the $\phi^6$ theory exhibits a tricritical point at $u_2 = u_4 = 0$, which belongs to a universality class different from that of the Ising transition. This change of the universality class can be seen from the scaling of the order parameter along the $u_4=0$ line, $\phi = \pm
(|u_2|/3u_6)^{1/4}$, which exhibits a different critical exponent [^2]. We have also confirmed the validity of our series expansion in $\phi$ by comparison to a numerical minimization of the variational norm including all orders. The full phase diagram of the dissipative Ising model is shown in Fig. 1.
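The two order-parameter scalings quoted above follow from minimising the $\phi^6$ functional; a quick symbolic check (with $u_2<0$ written explicitly as $-u_2$, $u_2>0$):

```python
# Symbolic check of the order-parameter scalings obtained from Eq. (eq:normexp)
# in the ordered phase (u2 < 0 is written explicitly as -u2 with u2 > 0).
import sympy as sp

phi, u2, u4, u6 = sp.symbols('phi u2 u4 u6', positive=True)
# Ising side (u4 > 0, u6 negligible): phi = (|u2| / 2 u4)^(1/2)
print(sp.solve(sp.diff(-u2 * phi**2 + u4 * phi**4, phi), phi))
# tricritical line (u4 = 0): phi = (|u2| / 3 u6)^(1/4)
print(sp.solve(sp.diff(-u2 * phi**2 + u6 * phi**6, phi), phi))
```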
*Fluctuations.—* So far, we have neglected the fact that the true steady state of the system is not a product state. In reality, there will be fluctuations in the system that lead to deviations from the variational solution of the series expansion of Eq. (\[eq:normexp\]). Importantly, the strength of these fluctuations is inherently determined by the value of the variational norm at the variational minimum. In close analogy to equilibrium transitions, we can analyze at which point fluctuations lead to a breakdown of the product state ansatz. To take these fluctuations into account, it is first necessary to introduce spatial inhomogeneities of the order parameter. Then, fluctuations generate such spatial inhomogeneities in the same manner as in equilibrium systems. To this end, we will take long-wavelength inhomogeneities into account by performing a gradient expansion of the variational norm. Then we can evaluate the equivalent of the Ginzburg criterion [@Kleinert2001] to determine the range of validity of our effective theory.
We first allow for spatial variations within our product state ansatz. Then the variational functional can be written as [^3] $$\begin{aligned}
D= \sum\limits_{\langle ij \rangle }||\dot \rho_{ij}|| = & \sum_{\langle ij \rangle } z \left[\frac{J}{2}\left(1-\frac{1}{z}\right)+\frac{J'}{z} \right](\phi_i-\phi_j)^2 \notag
\\ + & \sum_{ i } z \left [ u_0+u_2 \phi_i^2+u_4 \phi_i^4+u_6 \phi_i^6 \right ],
\label{eq:ineqgrad}\end{aligned}$$ where $\phi_i= \langle \sigma_x^{(i)} \rangle$ is the value of the order parameter field at site $i$ and the coupling constant $J'$ is given by $$\begin{aligned}
J'=-\frac{J}{4}+\frac{\left (\frac{\gamma}{4} \right )^2 + \Delta^2}{4 J}+\frac{J \gamma^2}{\gamma^2+16 \Delta^2}. \end{aligned}$$ The first term in Eq. (\[eq:ineqgrad\]) describes spatial variations of the order parameter to lowest order, while the other terms correspond to the original series expansion of Eq. (\[eq:normexp\]). For a finite value of $\phi_i$, the eigenbasis of $\rho_i$ is rotated away from the eigenbasis of $\sigma_z$. Consequently, the coupling constant $J'$ also depends on $\gamma$ and $\Delta$. Taking the continuum limit, we arrive at a Ginzburg-Landau-like functional for the variational norm, $$D[\Phi]= z\int d^{ d} x ~ u_0 + v_2 (\nabla \Phi)^2 + u_2 \Phi^2 + u_4 \Phi^4 + u_6 \Phi^6,$$ where the order parameter $\phi$ follows from spatial averaging of the fluctuating field $\Phi(x)$. The gradient term $v_2$ can then be readily identified as $$v_2 = \left[ \frac{J'}{z}+ \frac{J}{2}\left(1-\frac{1}{z}\right) \right]a^2,$$ where $a$ is the lattice spacing, which we set to unity in the following.
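For reference, a minimal sketch (illustrative only, in the same spirit as above) evaluating the coupling constant $J'$ and the gradient coefficient $v_2$ from the expressions just given:

```python
def J_prime(J, gamma, Delta):
    # Coupling constant J' entering the gradient term
    return (-J / 4
            + ((gamma / 4) ** 2 + Delta ** 2) / (4 * J)
            + J * gamma ** 2 / (gamma ** 2 + 16 * Delta ** 2))

def v2(J, gamma, Delta, z, a=1.0):
    # Gradient coefficient v_2 of the Ginzburg-Landau-like functional (a: lattice spacing)
    return (J_prime(J, gamma, Delta) / z + (J / 2) * (1 - 1 / z)) * a ** 2

print(v2(J=1.0, gamma=0.5, Delta=0.3, z=6))
```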
Owing to the existence of a dynamical symmetry [@Sieberer2013], fluctuations in the system will exhibit thermal statistics at long wavelengths [@Maghrebi2016]. Hence, we can characterize the strength of these fluctuations by an effective temperature $T_\text{eff}$. Crucially, the strength of fluctuations is determined by the value of the variational norm, as it measures how much the exact steady state deviates from the product state solution. After rescaling the variational norm to obtain an intensive quantity, we find that the effective temperature is connected to the variational norm according to $T_{\text{eff}}=\frac{z}{2} || \dot \rho_{ij}||$, where the variational norm $|| \dot \rho_{ij}||$ is to be evaluated in the absence of spatial inhomogeneities, i.e., the choice of $i$ and $j$ does not matter. In the paramagnetic phase, minimization of Eq. (\[eq:normexp\]) yields the variational solution $\alpha=(0,0,-1)$, which corresponds to a variational norm of $||\dot
\rho_{ij}||=2J$. Remarkably, the resulting effective temperature on the Ising transition line is given by $T_\text{eff} = zJ$, which matches exactly the result found within the Keldysh formalism [@Maghrebi2016].
![First order jump $\delta \phi$ versus $\frac{\Delta_{TC}-\Delta}{Jz}$ along the first order line for $z=4,6,20$ and $200$. Inset: Logarithm of $b$, which is obtained from the fit of $\delta \phi$ according to Eq. (\[formula:deltaphi\]), versus $\ln(z)$ and the corresponding fit (solid line).[]{data-label="fig:deltaepsloglog"}](deltaphiz.eps){width="1.0\linewidth"}
Using this effective temperature, we can now evaluate the strength of fluctuations around the homogeneous solution. Considering Gaussian fluctuations, we find for the mean squared fluctuations $$\begin{aligned}
\langle [\phi-\Phi]^2 \rangle=\frac{T_{\text{eff}}}{2 v_2}\xi^{2-d} w^{d},
\label{formula:sqdeviation}\end{aligned}$$ where $\xi^2=v_2/(2|u_2|)$ is the square of the correlation length, $d$ is the number of spatial dimensions, and $w = 0.0952$ is a numerical constant [@Kleinert2001]. Following the Ginzburg criterion, we compare these mean squared fluctuations to the square of the order parameter close to the multicritical point, which results in $$\begin{aligned}
\frac{ \langle [\phi-\Phi]^2\rangle}{\phi^2} = \frac{\sqrt{3}}{4} w^{d} v_2^{-d/2}\, u_0 \sqrt{u_6}\, u_2^{(d-3)/2}.\end{aligned}$$ The self-consistency of our effective theory is determined by the exponent of the $u_2$ term. For $d>3$, the exponent is positive and the relative strength of fluctuations decreases upon approaching the multicritical point, i.e., our effective theory becomes self-consistent. For $d<3$, the exponent is negative and fluctuations diverge close to the multicritical point. Hence, $d=3$ is the upper critical dimension of the multicritical point, above which the critical exponents derived within the Landau theory of Eq. (\[eq:normexp\]) become exact. In the experimentally accessible case of $d=3$, one can expect merely logarithmic corrections to the Landau theory exponents [@Kenna2004]. We would like to point out that the same result can also be obtained from a renormalization group calculation, which also allows one to evaluate corrections to the position of the tricritical point in a systematic way. While the position of the tricritical point is shifted significantly upon including the renormalization group corrections in three dimensions, we find that the strength of the shift decreases exponentially with increasing spatial dimension [^4].
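The dimension dependence expressed by Eq. (\[formula:sqdeviation\]) and the ratio above can be checked directly; in the sketch below $u_0$, $u_2$, $u_6$, and $v_2$ are treated as given inputs (they can be computed from the expressions above), with $w=0.0952$ as quoted.

```python
import numpy as np

W = 0.0952  # numerical constant quoted in the text

def fluctuation_ratio(u0, u2, u6, v2, d):
    # Relative strength of Gaussian fluctuations near the multicritical point
    return (np.sqrt(3) / 4) * W ** d * v2 ** (-d / 2) * u0 * np.sqrt(u6) * abs(u2) ** ((d - 3) / 2)

# Approaching the multicritical point (u_2 -> 0): the ratio vanishes for d > 3,
# stays constant for d = 3, and diverges for d < 3.
for d in (2, 3, 4):
    print(d, [round(fluctuation_ratio(1.0, u2, 1.0, 0.5, d), 4) for u2 in (1e-1, 1e-2, 1e-3)])
```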
*Comparison to mean field results.—* In contrast to the equilibrium case, mean-field theory does not describe the correct physics at the upper critical dimension in our open system, as it misses the first order transition and the tricritical point [@Maghrebi2016]. Still, mean-field theory becomes exact as $d \rightarrow \infty$, where the variational approach and mean-field theory agree. We now investigate in detail how the variational solution behaves as the dimensionality is increased. Specifically, we consider the value of the jump of the order parameter at the first order transition, $\delta\phi$, which is given by $$\begin{aligned}
\delta \phi=b\left (\frac{\Delta_{TC}-\Delta}{Jz} \right )^{1/2},
\label{formula:deltaphi}\end{aligned}$$ where $\Delta_{TC}$ is the value of $\Delta$ at the tricritical point. Remarkably, the tricritical point remains at a finite value of $\Delta$ even when the dimensionality of the system diverges, asymptotically approaching $(\Delta/zJ,\gamma/zJ)_{TC}=(0.22,1.66)$ in the limit of infinite spatial dimensions. Consequently, the mean-field result is not recovered in a way that leads to a disappearance of the tricritical point. Instead, the prefactor $b$ decreases according to $b\sim 1/\sqrt{d}$ as the dimensionality of the system is increased, see Fig. \[fig:deltaepsloglog\]. Hence, for any finite dimension, the tricritical point can be observed and the mean-field prediction is incorrect. Therefore, our results present further evidence (see also [@Weimer2015a; @Maghrebi2016]) that, for dissipative systems, mean-field theory can be qualitatively incorrect even above the upper critical dimension. Instead, it appears that only the variational principle is capable of correctly describing this regime of high dimensionality. Finally, we find that, according to our variational analysis, the location of the first order transition at the $\Delta=0$ line approaches the value $\gamma =0$ with increasing dimension. This behavior is consistent with analytic arguments showing that there is no ferromagnetic phase at $\Delta=0$ in any dimension [^5].
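The prefactor $b$ in Fig. \[fig:deltaepsloglog\] follows from fitting Eq. (\[formula:deltaphi\]); the sketch below illustrates such a fit on synthetic placeholder data (not the actual variational results), extracting $b$ and the exponent from a log-log linear fit.

```python
import numpy as np

# Synthetic placeholder data: delta_phi = b * x**0.5 with b = 0.7 and a little noise,
# where x = (Delta_TC - Delta) / (J z).
rng = np.random.default_rng(0)
x = np.logspace(-3, -1, 10)
delta_phi = 0.7 * np.sqrt(x) * (1 + 0.02 * rng.standard_normal(x.size))

# Fit ln(delta_phi) = ln(b) + exponent * ln(x); the intercept yields ln(b).
exponent, ln_b = np.polyfit(np.log(x), np.log(delta_phi), 1)
print(f"fitted exponent = {exponent:.3f} (expected 0.5), b = {np.exp(ln_b):.3f}")
```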
![4-level scheme with the ground states corresponding to the two spin configurations $| \! \uparrow \rangle$ and $| \! \downarrow \rangle$, and the dressed Rydberg state $|r\rangle$. The dissipation is realized via the $|e\rangle$-state.[]{data-label="fig:levelscheme"}](twogroundst.eps){width="0.8\linewidth"}
*Experimental realization.—* For an experimental implementation of the dissipative Ising model, we turn to a scenario where a Rydberg state is weakly admixed to the electronic ground state manifold [@Henkel2010; @Pupillo2010; @Honer2010; @Glaetzle2015; @vanBijnen2015]. Such Rydberg dressing of ground state atoms has recently been observed in several experiments [@Jau2015; @Zeiher2016; @Helmrich2016]. Here, we consider the dressing performed within a Raman scheme, where two ground states are coupled to the same Rydberg state, see Fig. \[fig:levelscheme\]. We obtain the effective Hamiltonian for the dressed system based on fourth order degenerate perturbation theory [@Lindgren1974; @Fresard2012] in $\Omega_r/\delta_r$, which generically has the form $$H = \Delta \sum\limits_i \sigma_z^{(i)} + \Omega' \sum\limits_i \sigma_x^{(i)} - \sum\limits_{ij} J_{ij} \sigma_x^{(i)}\sigma_x^{(j)}~+~\text{const}.$$ This Hamiltonian is not yet in a $Z_2$ symmetric form as the $\Omega'$ term breaks the symmetry. Crucially, this symmetry-breaking term can be canceled by including a direct coupling $\Omega$ between the two ground states into our perturbative analysis, see Fig. \[fig:levelscheme\]. Choosing $\Omega_r = \delta_r/10$ and $\Delta\sim |J_{\langle
ij\rangle}| \sim \Omega_r^4/\delta_r^3$ allows one to suppress the strength of all $Z_2$ symmetry-breaking terms by several orders of magnitude. Tuning the Rydberg interaction strength $V$ such that $V=3\,\delta_r$ ensures that the effective interaction potential can be cut off beyond nearest neighbors.
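As a rough consistency check of this separation of scales (the absolute detuning below is an arbitrary illustrative number, not a parameter fixed by the text):

```python
import math

delta_r = 2 * math.pi * 10e6         # assumed Raman detuning (rad/s), purely illustrative
Omega_r = delta_r / 10               # dressing Rabi frequency, Omega_r = delta_r / 10 as quoted
J_eff = Omega_r ** 4 / delta_r ** 3  # fourth-order coupling scale ~ Omega_r^4 / delta_r^3
V = 3 * delta_r                      # Rydberg interaction tuned to V = 3 delta_r

print(f"J_eff/delta_r = {J_eff / delta_r:.0e}")  # 1e-4: dressed energy scale far below the detuning
print(f"V/J_eff = {V / J_eff:.0e}")              # interaction scale vs. effective coupling
```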
Finally, we realize the dissipative terms by performing optical pumping from the spin-up into the spin-down state. In the case of $^{87}\text{Rb}$, this can be realized by choosing $\mathopen{|}\uparrow\mathclose{\rangle} = | 5S_{1/2}, F=2, m_F=2
\rangle$, $\mathopen{|}\downarrow\mathclose{\rangle} = |5S_{1/2}, F=2,
m_F=1 \rangle$, and $|e\rangle = |5P_{3/2}, F=3, m_F=2 \rangle$. Note that this will result in an additional dephasing term described by the jump operator $P_{\uparrow} =
\mathopen{|}\uparrow\mathclose{\rangle}\hspace{-0.25em}\mathopen{\langle}
\uparrow\mathclose{|}$; however, this term preserves the $Z_2$ symmetry and is also weaker than the dissipative spin flip [@Steck2001a]. We also mention that the dissipative Ising model can be realized within the experimental implementation suggested in [@Lee2013], at the expense of requiring additional laser fields.
We acknowledge fruitful discussions with T. Vekua. This work was funded by the Volkswagen Foundation and the DFG within RTG 1729 and SFB 1227 (DQ-mat). M.F.M. and A.V.G acknowledge funding from NSF QIS, AFOSR, ARL CDQI, NSF PFC at JQI, ARO, and ARO MURI.
Appendix A: Expansion coefficients of the variational norm
==========================================================
The expansion coefficients of the variational norm according to $$||\dot \rho_{ij}||= u_0+u_2 \phi^2+u_4 \phi^4+u_6 \phi^6
\label{eq:normexp}$$ are given by
$$\begin{aligned}
u_0 &= 2J, ~~~
u_2 = \frac{\frac{\gamma ^2}{16}+\Delta ^2}{J}+J \left(\frac{16 \Delta ^2 z^2}{\gamma ^2+16 \Delta ^2}-1\right)-2 \Delta z ,
\\
u_4 &= -\frac{1}{512 J^3 \left(\gamma ^2+16 \Delta ^2\right)^4} \biggl[\left(\gamma ^2+16 \Delta ^2\right)^6+8192 \gamma ^5 J^7 z^4+131072 \gamma ^4 \Delta ^2 J^6 z^4
\\ \notag & -1024 \gamma ^2 J^5 z^2 \left(\gamma ^2+16 \Delta ^2\right)^2 (8 \Delta z-\gamma )
+16384
\Delta ^2 J^4 z^2 \left(\gamma ^2+16 \Delta ^2\right)^2 \left(\gamma ^2+4 \Delta ^2 z^2\right)
\\ \notag &
+32 J^3 \left(\gamma ^2+16 \Delta ^2\right)^3 \left(\gamma ^3+16 \gamma \Delta ^2+256 \Delta ^3 z
\left(1-2 z^2\right)+16 \gamma ^2 \Delta z\right)
\\ \notag &
-64 J^2 \left(\gamma ^2+16 \Delta ^2\right)^4 \left(\gamma ^2+8 \Delta ^2 \left(1-3 z^2\right)\right)-64 \Delta J z \left(\gamma ^2+16 \Delta
^2\right)^5 ],
\\
u_6 = & -\frac{1}{24576
J^5 (\gamma ^2+16 \Delta ^2 )^6} \biggl[- (\gamma ^2+16 \Delta ^2 )^9+1048576 \gamma ^7 J^{11} z^6
-524288 \gamma ^6 J^{10} z^6 (\gamma ^2-16 \Delta ^2 )
\\ \notag &
-65536 \gamma ^4 J^9 z^4
(\gamma ^2+16 \Delta ^2 ) (-3 \gamma ^3+16 \gamma \Delta ^2 (2 z^2-3 )+8 \gamma ^2 \Delta z+128 \Delta ^3 z )
\\ \notag &
-131072 \gamma ^4 J^8 z^4 (\gamma ^2+16 \Delta
^2 ) (\gamma ^4-8 \gamma ^2 \Delta ^2+128 \Delta ^4 (2 z^2-3 )-2 \gamma ^3 \Delta z-32 \gamma \Delta ^3 z )
\\ \notag &
+4096 \gamma ^2 J^7 z^2 (\gamma ^2+16 \Delta ^2 )^2
(\gamma ^5 (3-2 z^2 )-96 \gamma ^3 \Delta ^2 (z^2-1 )+1536 \gamma ^2 \Delta ^3 z (z^2-1 )\\ \notag &
+256 \gamma \Delta ^4 (3-4 z^2 )+4096 \Delta ^5 z (2
z^2-3 )-48 \gamma ^4 \Delta z )
-2048 J^6 z^2 (\gamma ^2+16 \Delta ^2 )^3 (5 \gamma ^6+64 \gamma ^4 \Delta ^2 (3 z^2-1 ) \\ \notag & +256 \gamma ^2 \Delta ^4 (14
z^2-9 )
+8192 \Delta ^6 z^2 (z^2-1 )-16 \gamma ^5 \Delta z-256 \gamma ^3 \Delta ^3 z ) \\ \notag & -256 J^5 (\gamma ^2+16 \Delta ^2 )^4 (\gamma ^5 (4 z^2-1 )-8
\gamma ^4 \Delta z (4 z^2+3 )+32 \gamma ^3 \Delta ^2 (3 z^2-1 )
\\ \notag &
-256 \gamma ^2 \Delta ^3 z (4 z^2+3 )+256 \gamma \Delta ^4 (2 z^2-1 )-6144 \Delta ^5 z
(1-2 z^2 )^2 ) \\ \notag & -256 J^4 (\gamma ^2+16 \Delta ^2 )^5 (5 \gamma ^4+16 \gamma ^2 \Delta ^2 (7-10 z^2 )
+256 \Delta ^4 (15 z^4-12 z^2+2 )\\ \notag &
-4 \gamma ^3 \Delta z
-64 \gamma \Delta ^3 z )
-32 J^3 (\gamma ^2+16 \Delta ^2 )^6 (\gamma ^3+16 \gamma \Delta ^2+1280 \Delta ^3 z (1-2 z^2 )+112 \gamma ^2 \Delta z )
\\ \notag &
+16 J^2 (\gamma ^2+16 \Delta ^2 )^7 (5 \gamma ^2+48 \Delta ^2 (1-5 z^2 ) )+96 \Delta J z (\gamma ^2+16 \Delta ^2 )^8 \biggr].
\end{aligned}$$
Appendix B: Derivation of the variational norm including spatial fluctuations
=============================================================================
In this section, we derive Eq. (9) in the main text.
In order to evaluate the consistency of our product state ansatz, we allow for spatial inhomogeneities of the order parameter field. Consequently, the variational parameter $\phi_{i}=\langle \sigma^{(i)}_x \rangle$ has a different value at each site $i$ in our ansatz for the product state density matrix $\rho$.
Expanding the variational norm $||\dot \rho_{ij}||$ with respect to the order parameter, we get terms of the form $u_{2n} \phi_i^{2n}$, which correspond to the terms of the expansion in the homogeneous case. Due to the fluctuations of the order parameter field, we additionally get gradient terms of the form $v_2 (\phi_k-\phi_l)^2$ at the lowest (quadratic) order, where $k$ and $l$ are neighbouring sites. Here, the variational norm $||\dot\rho_{ij}||$ receives a contribution from the difference between sites $i$ and $j$, as well as from the gradients between the sites $i$ and $j$ and their respective nearest surrounding sites $k$ and $l$.
The variational functional can then be written as
$$\begin{aligned}
D=\sum_{\langle ij \rangle }||\dot \rho_{ij}|| = \sum_{\langle ijkl \rangle } \frac{J}{2}\left(z-1\right) (\phi_i-\phi_k)^2+\frac{J}{2}\left(z-1\right) (\phi_j-\phi_l)^2+J'(\phi_i-\phi_j)^2 + \sum_{\langle ij \rangle } \left [ u_0+u_2 \phi_i^2+u_4 \phi_i^4+u_6 \phi_i^6 \right ],\end{aligned}$$
with $$\begin{aligned}
J'=-\frac{J}{4}+\frac{\left (\frac{\gamma}{4} \right )^2 + \Delta^2}{4 J}+\frac{J \gamma^2}{\gamma^2+16 \Delta^2}.\end{aligned}$$ In the long wavelength limit, we have $\phi_i-\phi_k = \phi_j-\phi_l = \phi_i-\phi_j$, and after factoring out the coordination number $z$ we arrive at $$\begin{aligned}
D= & \sum_{\langle ij \rangle } z \left[\frac{J}{2}\left(1-\frac{1}{z}\right)+\frac{J'}{z} \right](\phi_i-\phi_j)^2 \\ & \notag +
\sum_{ i } z \left [ u_0+u_2 \phi_i^2+u_4 \phi_i^4+u_6 \phi_i^6 \right ].\end{aligned}$$
Appendix C: Renormalization group correction of the tricritical point
=====================================================================
In the following, we will calculate the shift of the tricritical point when renormalization group corrections of the $u_4$ term are included. Starting with the Ginzburg-Landau functional $$D[\Phi]= z\int d^{ d} x ~ u_0 + v_2 (\nabla \Phi)^2 + u_2 \Phi^2 + u_4 \Phi^4 + u_6 \Phi^6,
\label{glfunc}$$ a perturbative momentum space renormalization group analysis leads to the linear flow equations [@Wilson1974] $$\begin{aligned}
\frac{du_2}{dl} & =2 u_2+c_1 u_4+c_2 u_6 \\
\frac{du_4}{dl} & =(4-d)u_4+c_3 u_6 \\
\frac{du_6}{dl} & =(3-d) u_6.
\label{eq:flow}\end{aligned}$$ Here, the $c_i$ are constants that follow from the one-loop expansion of the interaction terms. In particular, the $c_3 u_6$ term stems from the one-loop diagram shown in Fig. \[fig:loopdiag\]. The coefficient $c_3$ is given by $$\begin{aligned}
c_3=\frac{2^{-d} 15 S_d}{\pi^2 v_2},\end{aligned}$$ where $S_d$ is the surface area of the $d$-dimensional unit sphere. Here, we have imposed a cutoff on the momentum space integral at $\Lambda=\pi/a$, where $a=1$ is the lattice spacing.
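For completeness, a small sketch evaluating $c_3$; here $S_d$ is taken as $2\pi^{d/2}/\Gamma(d/2)$, the surface area of the unit sphere embedded in $d$ dimensions (an assumption about the normalization convention), and $v_2$ is treated as an input.

```python
import math

def sphere_surface(d):
    # Surface area of the unit sphere in d dimensions: 2 pi^(d/2) / Gamma(d/2) (assumed convention)
    return 2 * math.pi ** (d / 2) / math.gamma(d / 2)

def c3(d, v2):
    # One-loop coefficient c_3 = 2^(-d) * 15 * S_d / (pi^2 * v_2)
    return 2 ** (-d) * 15 * sphere_surface(d) / (math.pi ** 2 * v2)

for d in (3, 4, 5):
    print(d, c3(d, v2=0.5))
```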
![Diagrammatic visualization of the one-loop correction of the $u_4$-term. The branches represent the order parameter field in fourth order, the circle stands for the contracted part of the momentum space integral. []{data-label="fig:loopdiag"}](oneloopdiag.eps){width="0.75\linewidth"}
In the following, we choose $u_2 (0)$ such that we arrive at the fixed point $u^{\ast}_2$ corresponding to the Ising critical line. Then, the solution to the second equation will tell us about the nature of the transition [@Goldenfeld1992]. For $u^\ast_4 = \infty$, we have the conventional Ising transition, as the renormalized $u_4$ is positive. For $u^\ast_4 = -\infty$, we get the first order transition, while $u^\ast_4 = 0$ is the tricritical point (in $d \ge
3$). Depending on the initial values $u_4 (0)$ and $u_6 (0)$, we may end up in any of these fixed points, allowing us to relate the microscopic coupling constants $u_4 (0)$ and $u_6 (0)$ to the nature of the transition and hence to the position of the tricritical point. Using $\epsilon=3-d$, we arrive at the solutions $$\begin{aligned}
u_4(l) & =u_4(0) e^{(\epsilon +1)l}+c_3 u_6(0) \left[e^{(\epsilon +1)l}-e^{\epsilon l} \right ] \\
u_6(l) & =u_6(0) e^{\epsilon l}.\end{aligned}$$ From the first equation, we can immediately see that the sign of the fixed point depends on the sign of $u_4(0) + c_3 u_6(0)$. Hence, the position of the tricritical point is shifted from $u_4 = 0$ in Landau theory to $u_4 = - c_3 u_6$ by the one-loop correction. For $d=3$, we find that the shifted tricritical point is located at $(\Delta/Jz,\gamma/Jz)_{\text{TC}}=(0.023,
0.35)$. In higher dimensions, the deviation from the variational solution of the tricritical point decreases exponentially with the number of spatial dimensions.
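The closed-form solution of the linearized flow can be cross-checked by integrating the flow equations directly; in the sketch below, $c_3$ and the initial conditions are illustrative numbers only.

```python
import numpy as np
from scipy.integrate import solve_ivp

d = 3.5
eps = 3 - d
c3 = 0.8                 # illustrative value of the one-loop coefficient
u4_0, u6_0 = -0.1, 0.2   # illustrative initial conditions

def flow(l, y):
    # Linearized RG flow for (u_4, u_6); the u_2 equation decouples from this check
    u4, u6 = y
    return [(4 - d) * u4 + c3 * u6, (3 - d) * u6]

sol = solve_ivp(flow, (0.0, 2.0), [u4_0, u6_0], dense_output=True, rtol=1e-10, atol=1e-12)

l = 2.0
u4_exact = u4_0 * np.exp((eps + 1) * l) + c3 * u6_0 * (np.exp((eps + 1) * l) - np.exp(eps * l))
print(sol.sol(l)[0], u4_exact)  # numerical integration agrees with the closed-form solution
# The sign of u_4(0) + c_3 u_6(0) (> 0 here) decides which fixed point the flow reaches.
```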
[^1]: See Appendix A for the full expression of the expansion coefficients.
[^2]: We would like to stress that this situation is different from other cases of multicritical behavior in dissipative systems, i.e., where already the equilibrium model has a tricritical point [@Keeling2014], or where there is no transition at all in the equilibrium model [@Marcuzzi2016].
[^3]: See Appendix B for a detailed derivation.
[^4]: See Appendix C for a one-loop calculation based on the perturbative renormalization group.
[^5]: A. V. Gorshkov et al., in preparation
|
---
abstract: 'We describe the representation theory of finitely generated indecomposable modules over artin algebras which do not lie on cycles of indecomposable modules involving homomorphisms from the infinite Jacobson radical of the module category.'
address: |
Faculty of Mathematics and Computer Science\
Nicolaus Copernicus University\
Chopina 12/18, 87-100 Toruń, Poland
author:
- Piotr Malicki and Andrzej Skowroński
title: 'Cycle-finite modules over artin algebras'
---
Introduction {#sect1}
============
Throughout the paper, by an algebra is meant an artin algebra over a fixed commutative artin ring $K$, which we shall assume (without loss of generality) to be basic and indecomposable. For an algebra $A$, we denote by $\operatorname{mod}A$ the category of finitely generated right $A$-modules and by $\operatorname{ind}A$ the full subcategory of $\operatorname{mod}A$ formed by the indecomposable modules. The Jacobson radical $\operatorname{rad}_A$ of $\operatorname{mod}A$ is the ideal generated by all nonisomorphisms between modules in $\operatorname{ind}A$, and the infinite radical $\operatorname{rad}^{\infty}_A$ of $\operatorname{mod}A$ is the intersection of all powers $\operatorname{rad}^i_A$, $i\geq 1$, of $\operatorname{rad}_A$. By a result of M. Auslander [@Au], $\operatorname{rad}_A^{\infty}=0$ if and only if $A$ is of finite representation type, that is, $\operatorname{ind}A$ admits only a finite number of pairwise nonisomorphic modules (see also [@KS0] for an alternative proof of this result). On the other hand, if $A$ is of infinite representation type then $(\operatorname{rad}_A^{\infty})^2\neq 0$, by a result proved in [@CMMS].
An important combinatorial and homological invariant of the module category $\operatorname{mod}A$ of an algebra $A$ is its Auslander-Reiten quiver $\Gamma_A$. Recall that $\Gamma_A$ is a valued translation quiver whose vertices are the isomorphism classes $\{X\}$ of modules $X$ in $\operatorname{ind}A$, the arrows correspond to irreducible homomorphisms between modules in $\operatorname{ind}A$, and the translation is the Auslander-Reiten translation $\tau_A=D\operatorname{Tr}$. We shall not distinguish between a module $X$ in $\operatorname{ind}A$ and the corresponding vertex $\{X\}$ of $\Gamma_A$. If $A$ is an algebra of finite representation type, then every nonzero nonisomorphism in $\operatorname{ind}A$ is a finite sum of compositions of irreducible homomorphisms between modules in $\operatorname{ind}A$, and hence we may recover $\operatorname{mod}A$ from the translation quiver $\Gamma_A$. In general, $\Gamma_A$ describes only the quotient category $\operatorname{mod}A/\operatorname{rad}^{\infty}_A$.
A prominent role in the representation theory of algebras is played by cycles of indecomposable modules (see [@MPS1], [@MS4], [@Ri1], [@Sk7]). Recall that a *cycle* in $\operatorname{ind}A$ is a sequence $$M_0 \buildrel {f_1}\over {\hbox to 6mm{\rightarrowfill}} M_1 \to \cdots \to M_{r-1} \buildrel {f_r}\over {\hbox to 6mm{\rightarrowfill}} M_r=M_0$$ of nonzero nonisomorphisms in $\operatorname{ind}A$ [@Ri1], and such a cycle is said to be *finite* if the homomorphisms $f_1,\ldots, f_r$ do not belong to $\operatorname{rad}_A^{\infty}$ (see [@AS1], [@AS2]). A module $M$ in $\operatorname{ind}A$ is said to be *cycle-finite* if every cycle in $\operatorname{ind}A$ passing through $M$ is finite. We note that this definition is more general than the one presented in [@MPS3]. Namely, every module $M$ in $\operatorname{ind}A$ which does not lie on a cycle (*directing module* in the sense of [@Ri1]) is cycle-finite.
If $A$ is an algebra of finite representation type, then all cycles in $\operatorname{ind}A$ are finite, and many of their properties are visible in the combinatorial structure of the finite Auslander-Reiten quiver $\Gamma_A$ of $A$. On the other hand, if $A$ is an indecomposable algebra of infinite representation type, then cycle-finite modules in $\operatorname{ind}A$ lie in infinite components of the Auslander-Reiten quiver $\Gamma_A$ and usually belong to cycles containing an arbitrarily large number of pairwise nonisomorphic indecomposable modules. Hence the study of cycle-finite modules over algebras of infinite representation type is a rather complicated problem.
The aim of this article is to present a rather complete representation theory of cycle-finite indecomposable modules over artin algebras.
For general results on the relevant representation theory we refer to the books [@ASS], [@ARS], [@Ri1], [@SS1], [@SS2], [@SY3], [@SY4] and the survey articles [@CB2], [@MPS1], [@MS4], [@Ri0], [@Ri-86], [@Sk7].
Preliminaries {#sect2}
=============
Let $A$ be an algebra and $M$ a module in $\operatorname{ind}A$. Important information concerning the structure of $M$ is encoded in the structure and properties of its support algebra $\operatorname{Supp}(M)$ defined as follows. Consider a decomposition $A=P_M\oplus Q_M$ of $A$ in $\operatorname{mod}A$ such that the simple summands of the semisimple module $P_M/\operatorname{rad}P_M$ are exactly the simple composition factors of $M$. Then $\operatorname{Supp}(M)=A/t_A(M)$, where $t_A(M)$ is the ideal in $A$ generated by the images of all homomorphisms from $Q_{M}$ to $A$ in $\operatorname{mod}A$. We note that $M$ is an indecomposable module over $\operatorname{Supp}(M)$. Clearly, we may realistically hope to describe the structure of $\operatorname{Supp}(M)$ only for modules $M$ having some distinguished properties. For example, if $M$ is a directing module in $\operatorname{ind}A$, then the support algebra $\operatorname{Supp}(M)$ of $M$ over an algebra $A$ is a tilted algebra $\operatorname{End}_H(T)$, for a hereditary algebra $H$ and a tilting module $T$ in $\operatorname{mod}H$, and $M$ is isomorphic to the image $\operatorname{Hom}_H(T,I)$ of an indecomposable injective module $I$ in $\operatorname{mod}H$ via the functor $\operatorname{Hom}_H(T,-): \operatorname{mod}H \to \operatorname{mod}\operatorname{End}_H(T)$ (see [@Ri1] and [@JMS1], [@JMS2] for the corresponding result over an arbitrary artin algebra).
Let $A$ be an algebra and $\Gamma_A$ be the Auslander-Reiten quiver of $A$. By a component of $\Gamma_A$ we mean a connected component of the quiver $\Gamma_A$. For a component ${\mathscr{C}}$ of $\Gamma_A$, we denote by $\operatorname{ann}_A({\mathscr{C}})$ the annihilator of ${\mathscr{C}}$ in $A$, that is, the intersection of the annihilators $\{a\in A\mid Ma=0\}$ of all modules $M$ in ${\mathscr{C}}$, and by $B({\mathscr{C}})$ the quotient algebra $A/\operatorname{ann}_A({\mathscr{C}})$, called the *faithful algebra* of ${\mathscr{C}}$. We note that ${\mathscr{C}}$ is a faithful component of $\Gamma_{B({\mathscr{C}})}$. For a module $M$ in $\Gamma_A$, the $\tau_A$-*orbit* of $M$ is the set $\mathcal{O}(M)$ of all possible vertices in $\Gamma_A$ of the form $\tau_A^nM$, $n\in\mathbb{Z}$. The $\tau_A$-orbit $\mathcal{O}(M)$ of $M$ (respectively, the module $M$) is said to be *periodic* if $M\cong\tau_A^nM$ for some $n\geq 1$. A module $M$ in $\Gamma_A$ is said to be an *acyclic module* if $M$ does not lie on an oriented cycle in $\Gamma_A$, and otherwise a *cyclic module*. A component ${\mathscr{C}}$ of $\Gamma_A$ without oriented cycles is said to be *acyclic*. Dually, a component ${\mathscr{C}}$ of $\Gamma_A$ is said to be *cyclic* if any module in ${\mathscr{C}}$ lies on an oriented cycle of ${\mathscr{C}}$. Following [@MS1], we denote by $_c{\mathscr{C}}$ the full translation subquiver of ${\mathscr{C}}$ obtained by removing all acyclic modules and the arrows attached to them, and call it the *cyclic part* of ${\mathscr{C}}$. The connected translation subquivers of $_c{\mathscr{C}}$ are said to be *cyclic components* of ${\mathscr{C}}$. It was shown in [@MS1 Proposition 5.1] that two modules $M$ and $N$ in $_c{\mathscr{C}}$ belong to the same cyclic component of ${\mathscr{C}}$ if there is an oriented cycle in ${\mathscr{C}}$ passing through $M$ and $N$. All the modules in a cyclic component ${\mathscr{C}}$ of $\Gamma_A$ containing a cycle-finite module are cycle-finite, and such a cyclic component ${\mathscr{C}}$ is said to be a *cycle-finite cyclic component*. For a subquiver ${\mathscr{C}}$ of $\Gamma_A$, we consider a decomposition $A=P_{{\mathscr{C}}}\oplus Q_{{\mathscr{C}}}$ of $A$ in $\operatorname{mod}A$ such that the simple summands of the semisimple module $P_{{\mathscr{C}}}/\operatorname{rad}P_{{\mathscr{C}}}$ are exactly the simple composition factors of indecomposable modules in ${\mathscr{C}}$, denote by $t_A({\mathscr{C}})$ the ideal in $A$ generated by the images of all homomorphisms from $Q_{{\mathscr{C}}}$ to $A$ in $\operatorname{mod}A$, and call the quotient algebra $\operatorname{Supp}({\mathscr{C}})=A/t_A({\mathscr{C}})$ the *support algebra* of ${\mathscr{C}}$. Let $M$ be a nondirecting cycle-finite module in $\operatorname{ind}A$. Observe that $M$ belongs to a unique cyclic component ${\mathscr{C}}(M)$ of $\Gamma_A$ consisting entirely of cycle-finite indecomposable modules, and the support algebra $\operatorname{Supp}(M)$ of $M$ is a quotient algebra of the support algebra $\operatorname{Supp}({\mathscr{C}}(M))$ of ${\mathscr{C}}(M)$. Moreover, by a result stated in [@MPS3 Corollary 1.3], the support algebra $\operatorname{Supp}({\mathscr{C}})$ of a cycle-finite cyclic component ${\mathscr{C}}$ of $\Gamma_A$ is isomorphic to an algebra of the form $e_{{\mathscr{C}}}Ae_{{\mathscr{C}}}$ for an idempotent $e_{{\mathscr{C}}}$ of $A$ whose primitive summands correspond to the vertices of a convex subquiver of the valued quiver $Q_A$ of $A$.
On the other hand, the support algebra $\operatorname{Supp}(M)$ of a cycle-finite module $M$ in $\operatorname{ind}A$ is not necessarily an algebra of the form $eAe$ for an idempotent $e$ of $A$ (see [@MPS3 Section 6]). A component ${\mathscr{C}}$ of $\Gamma_A$ is called *regular* if ${\mathscr{C}}$ contains neither a projective module nor an injective module, and *semiregular* if ${\mathscr{C}}$ does not contain both a projective and an injective module. It has been shown in [@Li1] and [@Zh] that a regular component ${\mathscr{C}}$ of $\Gamma_A$ contains an oriented cycle if and only if ${\mathscr{C}}$ is a *stable tube* (that is, of the form ${\Bbb Z}{\Bbb A}_{\infty}/(\tau^{r})$, for a positive integer $r$). Moreover, S. Liu proved in [@Li2] that a semiregular component ${\mathscr{C}}$ of $\Gamma_A$ contains an oriented cycle if and only if ${\mathscr{C}}$ is a *ray tube* (obtained from a stable tube by a finite number (possibly zero) of ray insertions) or a *coray tube* (obtained from a stable tube by a finite number (possibly zero) of coray insertions). A component ${\mathscr{C}}$ of $\Gamma_A$ is called *postprojective* if ${\mathscr{C}}$ is acyclic and every module in ${\mathscr{C}}$ belongs to the $\tau_A$-orbit of a projective module. Dually, a component ${\mathscr{C}}$ of $\Gamma_A$ is called *preinjective* if ${\mathscr{C}}$ is acyclic and every module in ${\mathscr{C}}$ belongs to the $\tau_A$-orbit of an injective module. An indecomposable module $X$ in a component ${\mathscr{C}}$ of $\Gamma_A$ is said to be *right coherent* if there is in ${\mathscr{C}}$ an infinite sectional path $$X = X_1 \longrightarrow X_2 \longrightarrow \cdots \longrightarrow X_i\longrightarrow X_{i+1} \longrightarrow X_{i+2} \longrightarrow \cdots$$ Dually, an indecomposable module $Y$ in ${\mathscr{C}}$ is said to be *left coherent* if there is in ${\mathscr{C}}$ an infinite sectional path $$\cdots \longrightarrow Y_{j+2} \longrightarrow Y_{j+1} \longrightarrow Y_j \longrightarrow \cdots \longrightarrow Y_2 \longrightarrow Y_1 = Y.$$ A module $Z$ in ${\mathscr{C}}$ is said to be *coherent* if $Z$ is left and right coherent. A component ${\mathscr{C}}$ of $\Gamma_A$ is said to be *coherent* [@MS1] (see also [@DR]) if every projective module in ${\mathscr{C}}$ is right coherent and every injective module in ${\mathscr{C}}$ is left coherent. Further, a component ${\mathscr{C}}$ of $\Gamma_A$ is said to be *almost cyclic* if its cyclic part $_c{{\mathscr{C}}}$ is a cofinite subquiver of ${\mathscr{C}}$. We note that the stable tubes, ray tubes and coray tubes of $\Gamma_A$ are special types of almost cyclic coherent components. In general, it has been proved in [@MS1] that a component ${\mathscr{C}}$ of $\Gamma_A$ is almost cyclic and coherent if and only if ${\mathscr{C}}$ is a *generalized multicoil*, obtained from a finite family of stable tubes by a sequence of admissible operations (ad 1)-(ad 5) and their duals (ad 1$^*$)-(ad 5$^*$). We refer to [@MS1 Section 2] for a detailed description of these admissible operations and generalized multicoils. In particular, one knows that all arrows of a generalized multicoil have trivial valuation. On the other hand, a component ${\mathscr{C}}$ of $\Gamma_A$ is said to be *almost acyclic* if all but finitely many modules of ${\mathscr{C}}$ are acyclic. It has been proved by I. Reiten and the second named author in [@RS2] that a component ${\mathscr{C}}$ of $\Gamma_A$ is almost acyclic if and only if ${\mathscr{C}}$ admits a multisection $\Delta$.
Moreover, for an almost acyclic component ${\mathscr{C}}$ of $\Gamma_A$, there exists a finite convex subquiver $c({\mathscr{C}})$ of ${\mathscr{C}}$ (possibly empty), called the *core* of ${\mathscr{C}}$, containing all modules lying on oriented cycles in ${\mathscr{C}}$ (see [@RS2] for details).
Let $A$ be an algebra. A family ${{\mathscr{C}}}$ = $({{\mathscr{C}}}_{i})_{i \in I}$ of components of $\Gamma_A$ is said to be *generalized standard* if $\operatorname{rad}_A^{\infty}(X,Y)=0$ for all modules $X$ and $Y$ in ${\mathscr{C}}$ [@Sk3], and *sincere* if every simple module in $\operatorname{mod}A$ occurs as a composition factor of a module in ${\mathscr{C}}$. Two components ${\mathscr{C}}$ and ${\mathscr{D}}$ of an Auslander-Reiten quiver $\Gamma_A$ are said to be *orthogonal* if $\operatorname{Hom}_A(X,Y) = 0$ and $\operatorname{Hom}_A(Y,X) = 0$ for all modules $X \in {\mathscr{C}}$ and $Y \in {\mathscr{D}}$. We also note that if ${\mathscr{C}}$ and ${\mathscr{D}}$ are distinct components of $\Gamma_A$ then $\operatorname{Hom}_A(X,Y) = \operatorname{rad}_A^{\infty}(X,Y)$ for all modules $X \in {\mathscr{C}}$ and $Y \in {\mathscr{D}}$. Observe that a family ${{\mathscr{C}}}$ = $({{\mathscr{C}}}_{i})_{i \in I}$ of components of $\Gamma_A$ is generalized standard if and only if the components ${\mathscr{C}}_i$, $i\in I$, are generalized standard and pairwise orthogonal. A prominent role in the representation theory of algebras is played by the algebras with separating families of Auslander-Reiten components. A concept of a separating family of tubes has been introduced by C. M. Ringel in [@Ri0], [@Ri1] who proved that they occur in the Auslander-Reiten quivers of hereditary algebras of Euclidean type, tubular algebras, and canonical algebras. More generally, following I. Assem, A. Skowroński and B. Tomé [@AST], a family ${{\mathscr{C}}}$ = $({{\mathscr{C}}}_{i})_{i \in I}$ in $\Gamma_A$ is said to be *separating* in $\operatorname{mod}A$ if the components of $\Gamma_A$ split into three disjoint families ${{\mathscr{P}}}^A$, ${{\mathscr{C}}}^A={{\mathscr{C}}}$ and ${{\mathscr{Q}}}^A$ such that the following conditions are satisfied:
1. ${{\mathscr{C}}}^A$ is a sincere generalized standard family of components;
2. $\operatorname{Hom}_{A}({{\mathscr{Q}}}^A,{{\mathscr{P}}}^A) = 0$, $\operatorname{Hom}_{A}({{\mathscr{Q}}}^A,{{\mathscr{C}}}^A)=0$, $\operatorname{Hom}_{A}({{\mathscr{C}}}^A,{{\mathscr{P}}}^A) = 0$;
3. any homomorphism from ${{\mathscr{P}}}^A$ to ${{\mathscr{Q}}}^A$ in $\operatorname{mod}A$ factors through the additive category $\operatorname{add}({{\mathscr{C}}}^A)$ of ${{\mathscr{C}}}^A$.
Then we say that ${{\mathscr{C}}}^A$ separates ${{\mathscr{P}}}^A$ from ${{\mathscr{Q}}}^A$ and write $$\Gamma_A={{\mathscr{P}}}^A \cup {{\mathscr{C}}}^A \cup {{\mathscr{Q}}}^A.$$ We note that then ${{\mathscr{P}}}^A$ and ${{\mathscr{Q}}}^A$ are uniquely determined by ${{\mathscr{C}}}^A$ (see [@AST (2.1)] or [@Ri1 (3.1)]). Moreover, we have $\operatorname{ann}_A({{\mathscr{C}}}^A)=0$, so ${{\mathscr{C}}}^A$ is a faithful family of components of $\Gamma_A$. We note that if $A$ is an algebra of finite representation type, then ${{\mathscr{C}}}^A = \Gamma_A$ is trivially a unique separating component of $\Gamma_A$, with ${{\mathscr{P}}}^A$ and ${{\mathscr{Q}}}^A$ being empty.
Quasitilted algebras {#sect-qua}
====================
In the representation theory of algebras an important role is played by the canonical algebras introduced by C. M. Ringel in [@Ri1] and [@Ri2]. We refer to [@Ri2 Appendix] for an elementary definition of the canonical algebras proposed by W. Crawley-Boevey. Every canonical algebra $\Lambda$ is of global dimension at most two. Moreover, the following theorem due to C. M. Ringel [@Ri2] describes the general shape of the Auslander-Reiten quiver of a canonical algebra.
\[t65\]Let $\Lambda$ be a canonical algebra. Then the general shape of the Auslander-Reiten quiver $\Gamma_{\Lambda}$ of $\Lambda$ is as follows $$\xymatrix@M=0pc@L=0pc@C=.8pc@R=1.6pc{
\ar@/^5ex/@{-}[dddd] &&&& && & && & && &&&& \ar@/_5ex/@{-}[dddd] \\
&&&&
\ar@{-}[dd] & {}\save[] *{\xycircle<.8pc,.4pc>{}} \restore & \ar@{-}[dd] &
\ar@{-}[dd] & {}\save[] *{\xycircle<.8pc,.4pc>{}} \restore & \ar@{-}[dd] &
\ar@{-}[dd] & {}\save[] *{\xycircle<.8pc,.4pc>{}} \restore & \ar@{-}[dd] \\
{}\save[] *{{\mathscr{P}}^{\Lambda}} \restore &&&& && & && & && &&&&
{}\save[] *{{\mathscr{Q}}^{\Lambda}} \restore \\
&&&& && & && & &&&& \\
&&&& && & & {}\save[] +<0mm,1mm> *{{\mathscr{T}}^{\Lambda}} \restore & && & &&&& \\
}$$ where ${\mathscr{P}}^{\Lambda}$ is a family of components containing a unique postprojective component ${\mathscr{P}}(\Lambda)$ and all indecomposable projective $\Lambda$-modules, ${\mathscr{Q}}^{\Lambda}$ is a family of components containing a unique preinjective component ${\mathscr{Q}}(\Lambda)$ and all indecomposable injective $\Lambda$-modules, and ${\mathscr{T}}^{\Lambda}$ is an infinite family of faithful generalized standard stable tubes separating ${\mathscr{P}}^{\Lambda}$ from ${\mathscr{Q}}^{\Lambda}$. In particular, we have $\operatorname{pd}_{\Lambda} X \leq 1$ for all modules $X$ in ${\mathscr{P}}^{\Lambda} \cup {\mathscr{T}}^{\Lambda}$ and $\operatorname{id}_{\Lambda} Y \leq 1$ for all modules $Y$ in ${\mathscr{T}}^{\Lambda} \cup {\mathscr{Q}}^{\Lambda}$.
Let $\Lambda$ be a canonical algebra. An algebra $C$ of the form $\operatorname{End}_{\Lambda}(T)$, where $T$ is a tilting $\Lambda$-module from the additive category $\operatorname{add}({\mathscr{P}}^{\Lambda})$ of ${\mathscr{P}}^{\Lambda}$ is called a *concealed canonical algebra of type $\Lambda$*. Then we have the following theorem describing the general shape of the Auslander-Reiten quiver of a concealed canonical algebra, which is a consequence of Theorem \[t65\] and the tilting theory.
\[t66\]Let $\Lambda$ be a canonical algebra, $T$ a tilting module in $\operatorname{add}({\mathscr{P}}^{\Lambda})$, and $C=\operatorname{End}_{\Lambda}(T)$ the associated concealed canonical algebra. Then the general shape of the Auslander-Reiten quiver $\Gamma_C$ of $C$ is as follows $$\xymatrix@M=0pc@L=0pc@C=.8pc@R=1.6pc{
\ar@/^5ex/@{-}[dddd] &&&& && & && & && &&&& \ar@/_5ex/@{-}[dddd] \\
&&&&
\ar@{-}[dd] & {}\save[] *{\xycircle<.8pc,.4pc>{}} \restore & \ar@{-}[dd] &
\ar@{-}[dd] & {}\save[] *{\xycircle<.8pc,.4pc>{}} \restore & \ar@{-}[dd] &
\ar@{-}[dd] & {}\save[] *{\xycircle<.8pc,.4pc>{}} \restore & \ar@{-}[dd] \\
{}\save[] *{{\mathscr{P}}^{C}} \restore &&&& && & && & && &&&&
{}\save[] *{{\mathscr{Q}}^{C}} \restore \\
&&&& && & && & &&&& \\
&&&& && & & {}\save[] +<0mm,1mm> *{{\mathscr{T}}^{C}} \restore & && & &&&& \\
}$$ where ${\mathscr{P}}^{C}$ is a family of components containing a unique postprojective component ${\mathscr{P}}(C)$ and all indecomposable projective $C$-modules, ${\mathscr{Q}}^{C}$ is a family of components containing a unique preinjective component ${\mathscr{Q}}(C)$ and all indecomposable injective $C$-modules, ${\mathscr{T}}^{C} = \operatorname{Hom}_{\Lambda}(T, {\mathscr{T}}^{\Lambda})$ is an infinite family of faithful pairwise orthogonal generalized standard stable tubes separating ${\mathscr{P}}^{C}$ from ${\mathscr{Q}}^{C}$. In particular, we have $\operatorname{pd}_{C} X \leq 1$ for all modules $X$ in ${\mathscr{P}}^{C} \cup {\mathscr{T}}^{C}$, $\operatorname{id}_{C} Y \leq 1$ for all modules $Y$ in ${\mathscr{T}}^{C} \cup {\mathscr{Q}}^{C}$, and $\operatorname{gl.dim}C \leq 2$.
The following characterization of concealed canonical algebras has been established by J. A. de la Peña and H. Lenzing in [@LP2].
Let $A$ be an algebra. The following statements are equivalent.
1. $A$ is a concealed canonical algebra.
2. $\Gamma_A$ admits a separating family ${\mathscr{T}}^A$ of stable tubes.
The concealed canonical algebras form a distinguished class of *quasitilted algebras*, which are the endomorphism algebras $\operatorname{End}_{\mathscr H}(T)$ of tilting objects $T$ in abelian hereditary $K$-categories ${\mathscr H}$ [@HRS]. The following characterization of quasitilted algebras has been established by D. Happel, I. Reiten and S. O. Smal[ø]{} in [@HRS].
\[th:3.7\]Let $A$ be an algebra. The following statements are equivalent.
1. $A$ is a quasitilted algebra.
2. $\operatorname{gl.dim}A \leq 2$ and every module $X$ in $\operatorname{ind}A$ satisfies $\operatorname{pd}_AX\leq 1$ or $\operatorname{id}_AX\leq 1$.
Recall briefly that the *tilted algebras* are the endomorphism algebras $\operatorname{End}_{H}(T)$ of tilting modules $T$ over hereditary algebras $H$ and the *quasitilted algebras of canonical type* are the endomorphism algebras $\operatorname{End}_{\mathscr H}(T)$ of tilting objects $T$ in abelian hereditary categories ${\mathscr H}$ whose derived category $D^b({\mathscr H})$ is equivalent to the derived category $D^b(\operatorname{mod}\Lambda)$ of the module category $\operatorname{mod}\Lambda$ of a canonical algebra $\Lambda$. The next result proved in [@HRe] shows that there are only two classes of quasitilted algebras.
\[t616\]Let $A$ be a quasitilted algebra. Then $A$ is either a tilted algebra or a quasitilted algebra of canonical type.
The structure of representation-infinite quasitilted algebras of canonical type has been described by H. Lenzing and A. Skowroński in [@LS1] (see also [@S7]). In [@LS1] the concept of a semiregular branch enlargement of a concealed canonical algebra has been introduced and the following characterization of quasitilted algebras of canonical type established, which extends characterizations of concealed canonical algebras proved in [@LP2], [@RS] and [@S6].
Let $A$ be an algebra. The following statements are equivalent.
1. $A$ is a representation-infinite quasitilted algebra of canonical type.
2. $A$ is a semiregular branch enlargement of a concealed canonical algebra.
3. $\Gamma_A$ admits a separating family ${\mathscr{T}}^A$ of semiregular tubes.
Here, by a semiregular tube we mean a ray tube or a coray tube.
We may visualise the shape of the Auslander-Reiten quiver $\Gamma_A$ of a representation-infinite quasitilted algebra $A$ of canonical type as follows $$\xy
0;/r1pc/:0
,{\ellipse(1,.4),=:a(180){-}}
,{\ellipse(1,.2)_,=:a(90){-}}
,{\ellipse(1,.7)__,=:a(90){-}}
*\dir{}="a",;p+(0,-.2)*\dir{}="b",**\dir{},"b"
;p+(.35,-.25)*\dir{}="b",**\dir{-},"b"
;p+(-.35,-.25)*\dir{}="b",**\dir{-},"b"
;p+(-1,.7)
;p+(0,-4)*\dir{}="b",**\dir{-},"b"
;p+(2,0)
;p+(0,4)*\dir{}="b",**\dir{-},"b"
;p+(2,0)
,{\ellipse(1,.4){-}}
;p+(-1,0)
;p+(0,-4)*\dir{}="b",**\dir{-},"b"
;p+(2,0)
;p+(0,4)*\dir{}="b",**\dir{-},"b"
;p+(2,0)
,{\ellipse(1,.4),=:a(180){-}}
,{\ellipse(1,.2)__,=:a(90){-}}
,{\ellipse(1,.7)_,=:a(90){-}}
*\dir{}="a",;p+(0,-.2)*\dir{}="b",**\dir{},"b"
;p+(-.35,-.25)*\dir{}="b",**\dir{-},"b"
;p+(.35,-.25)*\dir{}="b",**\dir{-},"b"
;p+(-1,.7)
;p+(0,-4)*\dir{}="b",**\dir{-},"b"
;p+(2,0)
;p+(0,4)*\dir{}="b",**\dir{-},"b"
;p+(6,-2)
,{\ellipse(5,4):a(120),,=:a(120){-}}
*\dir{}="a",;p+(-20,0)*\dir{}="b",**\dir{},"b"
,{\ellipse(5,4):a(210),,=:a(120){-}}
;p+(3,0)*+!{{\mathscr{P}}^A}
;p+(14,0)*+!{{\mathscr{Q}}^A}
;p+(-7,-3.5)*+!{{\mathscr{T}}^A}
\endxy$$ where ${\mathscr{T}}^A$ is a faithful infinite family of pairwise orthogonal, generalized standard ray or coray tubes, separating ${\mathscr{P}}^A$ from ${\mathscr{Q}}^A$. Then every indecomposable projective $A$-module lies in ${\mathscr{P}}^A \cup {\mathscr{T}}^A$ while every indecomposable injective $A$-module lies in ${\mathscr{T}}^A \cup {\mathscr{Q}}^A$.
The following example from [@MS4] illustrates the above theorem.
Let $K$ be an algebraically closed field, $Q$ the quiver $$\xymatrix@C=0.7pc@R=1.2pc{
&& 4 & (1,1) \ar[l]_{\sigma} \ar[llld]_{\alpha_1} \\
0 && (2,1) \ar[ll]^{\beta_1} && (2,2) \ar[ll]^{\beta_2} &&
\omega \ar[lld]^{\gamma_3} \ar[ll]^{\beta_3} \ar[lllu]_{\alpha_2} \\
&& (3,1) \ar[llu]^{\gamma_1} && (3,2) \ar[ll]^{\gamma_2} \\
&& 5 \ar[u]^{\xi} \ar[r]^{\eta} & 6 &
7 \ar[u]^{\delta} \ar[r]^{\varrho} & 9 \\
&&&& 8 \ar[u]_{\nu}
}$$ $I$ the ideal of $K Q$ generated by the elements $\alpha_2 \alpha_1 + \beta_3 \beta_2 \beta_1 + \gamma_3 \gamma_2 \gamma_1$, $\alpha_2\sigma$, $\xi \gamma_1$, $\delta \gamma_2$, $\nu \varrho$, and $A=KQ/I$ the associated bound quiver algebra. Then $A$ is a representation-infinite quasitilted algebra of canonical type. Moreover, $A$ is a semiregular branch enlargement of the canonical algebra $C = K\Delta / J$, where $\Delta$ is the full subquiver of $Q$ given by the vertices $0$, $\omega$, $(1,1)$, $(2,1)$, $(2,2)$, $(3,1)$, $(3,2)$, and $J$ is the ideal of $K \Delta$ generated by $\alpha_2 \alpha_1 + \beta_3 \beta_2 \beta_1 + \gamma_3 \gamma_2 \gamma_1$. The Auslander-Reiten quiver $\Gamma_A$ of $A$ has a disjoint union form $$\Gamma_A = {\mathscr{P}}^A \cup {\mathscr{T}}^A \cup {\mathscr{Q}}^A$$ where ${\mathscr{T}}^A$ is a family of semiregular tubes separating ${\mathscr{P}}^A$ from ${\mathscr{Q}}^A$. Moreover, ${\mathscr{T}}^A$ is a ${\Bbb P}_1(K)$-family $({\mathscr{T}}^A_{\lambda})_{\lambda\in {\Bbb P}_1(K)}$ of semiregular tubes consisting of:
- a coray tube ${\mathscr{T}}_{\infty}^A$ obtained from the stable tube ${\mathscr{T}}_{\infty}^C$ of $\Gamma_C$ of rank $2$ by one coray insertion,
- a stable tube ${\mathscr{T}}_0^A = {\mathscr{T}}_0^C$ of rank $3$ formed by indecomposable $C$-modules,
- a ray tube ${\mathscr{T}}_1^A$ obtained from the stable tube ${\mathscr{T}}_{1}^C$ of $\Gamma_C$ of rank $3$ by $5$ ray insertions,
- the infinite family ${\mathscr{T}}^A_{\lambda}, \lambda\in {\Bbb P}_1(K)\setminus\{\infty,0,1\}$, of stable tubes of rank $1$, consisting of indecomposable $C$-modules.
We refer to [@MS4 Example 6.19] for a detailed description of the families ${\mathscr{P}}^A, {\mathscr{T}}^A$ and ${\mathscr{Q}}^A$.
Cycle-finite cyclic Auslander-Reiten components {#sect4}
===============================================
In this section we describe the support algebras of cycle-finite cyclic components of the Auslander-Reiten quivers of artin algebras. The description splits into two cases. In the case when a cycle-finite cyclic component ${\mathscr{C}}$ of $\Gamma_A$ is infinite, the support algebra $\operatorname{Supp}({\mathscr{C}})$ is a suitable gluing of finitely many generalized multicoil algebras (introduced by the authors in [@MS2]) and algebras of finite representation type, and ${\mathscr{C}}$ is the corresponding gluing of the associated cyclic generalized multicoils via finite translation quivers (Theorem \[thm-1-1\]). In the second case when a cycle-finite cyclic component ${\mathscr{C}}$ is finite, the support algebra $\operatorname{Supp}({\mathscr{C}})$ is a generalized double tilted algebra (in the sense of I. Reiten and A. Skowroński [@RS2]) and ${\mathscr{C}}$ is the core of the connecting component of this algebra (Theorem \[thm-1-2\]).
In order to present the first case, we need the class of generalized multicoil algebras. Recall that following [@MS2], an algebra $A$ is called a *generalized multicoil algebra*, if $A$ is a generalized multicoil enlargement of a product $C=C_1\times\ldots\times C_m$ of concealed canonical algebras $C_1, \ldots, C_m$ using modules from the separating families ${{\mathscr{T}}}^{C_1}, \ldots, {{\mathscr{T}}}^{C_m}$ of stable tubes of $\Gamma_{C_1}, \ldots, \Gamma_{C_m}$ and a sequence of admissible operations of types (ad 1)-(ad 5) and their duals (ad 1$^*$)-(ad 5$^*$). The following result has been established in [@MS2 Theorem A].
\[thm-41\] Let $A$ be an algebra. The following statements are equivalent.
1. $A$ is a generalized multicoil algebra.
2. $\Gamma_A$ admits a separating family of almost cyclic coherent components.
The following consequence of [@MS2 Theorem C] describes the structure of the Auslander-Reiten quivers of generalized multicoil algebras.
\[thm-43\] Let $A$ be a generalized multicoil algebra obtained from a family $C_1, \ldots, C_m$ of concealed canonical algebras. Then there are unique quotient algebras $A^{(l)}$ and $A^{(r)}$ of $A$ such that the following statements hold:
1. $A^{(l)}$ is a product of quasitilted algebras of canonical type having separating families of coray tubes.
2. $A^{(r)}$ is a product of quasitilted algebras of canonical type having separating families of ray tubes.
3. The Auslander-Reiten quiver $\Gamma_A$ has a disjoint union decomposition $$\Gamma_A = {\mathscr{P}}^A \cup {\mathscr{C}}^A \cup {\mathscr{Q}}^A,$$ where
1. ${{\mathscr{P}}}^A$ is the left part ${{\mathscr{P}}}^{A^{(l)}}$ in a decomposition $\Gamma_{A^{(l)}}={{\mathscr{P}}}^{A^{(l)}}\cup {{\mathscr{T}}}^{A^{(l)}}\cup {{\mathscr{Q}}}^{A^{(l)}}$ of the Auslander-Reiten quiver $\Gamma_{A^{(l)}}$ of the algebra $A^{(l)}$, with ${{\mathscr{T}}}^{A^{(l)}}$ a family of coray tubes separating ${{\mathscr{P}}}^{A^{(l)}}$ from ${{\mathscr{Q}}}^{A^{(l)}}$;
2. ${{\mathscr{Q}}}^A$ is the right part ${{\mathscr{Q}}}^{A^{(r)}}$ in a decomposition $\Gamma_{A^{(r)}}={{\mathscr{P}}}^{A^{(r)}}\cup {{\mathscr{T}}}^{A^{(r)}}\cup {{\mathscr{Q}}}^{A^{(r)}}$ of the Auslander-Reiten quiver $\Gamma_{A^{(r)}}$ of the algebra $A^{(r)}$, with ${{\mathscr{T}}}^{A^{(r)}}$ a family of ray tubes separating ${{\mathscr{P}}}^{A^{(r)}}$ from ${{\mathscr{Q}}}^{A^{(r)}}$;
3. ${{\mathscr{C}}}^A$ is a family of generalized multicoils separating ${{\mathscr{P}}}^A$ from ${{\mathscr{Q}}}^A$, obtained from stable tubes in the separating families ${{\mathscr{T}}}^{C_1}, \ldots, {{\mathscr{T}}}^{C_m}$ of stable tubes of the Auslander-Reiten quivers $\Gamma_{C_1}, \ldots, \Gamma_{C_m}$ of the concealed canonical algebras $C_1, \ldots, C_m$ by a sequence of admissible operations of types (ad 1)-(ad 5) and their duals (ad 1$^*$)-(ad 5$^*$), corresponding to the admissible operations leading from $C=C_1\times\ldots\times C_m$ to $A$;
4. ${{\mathscr{C}}}^A$ consists of cycle-finite modules and contains all indecomposable modules of ${{\mathscr{T}}}^{A^{(l)}}$ and ${{\mathscr{T}}}^{A^{(r)}}$;
5. ${{\mathscr{P}}}^A$ contains all indecomposable modules of ${{\mathscr{P}}}^{A^{(r)}}$;
6. ${{\mathscr{Q}}}^A$ contains all indecomposable modules of ${{\mathscr{Q}}}^{A^{(l)}}$.
Moreover, in the above notation, we have the following consequences of [@MS2 Theorem E] describing the homological properties of modules over generalized multicoil algebras:
- $\operatorname{gl.dim}A\leq 3$;
- $\operatorname{pd}_AX\leq 1$ for any indecomposable module $X$ in ${{\mathscr{P}}}^A$;
- $\operatorname{id}_AY\leq 1$ for any indecomposable module $Y$ in ${{\mathscr{Q}}}^A$;
- $\operatorname{pd}_AM\leq 2$ and $\operatorname{id}_AM\leq 2$ for any indecomposable module $M$ in ${{\mathscr{C}}}^A$.
The algebra $A^{(l)}$ is said to be the *left quasitilted algebra* of $A$ and the algebra $A^{(r)}$ is said to be the *right quasitilted algebra* of $A$. The following theorem from [@MPS3 Theorem 1.1] describes the support algebras of infinite cycle-finite cyclic components of the Auslander-Reiten quivers of artin algebras.
\[thm-1-1\] Let $A$ be an algebra and ${\mathscr{C}}$ be an infinite cycle-finite component of $_c\Gamma_A$. Then there exist infinite full translation subquivers ${\mathscr{C}}_1, \ldots, {\mathscr{C}}_r$ of ${\mathscr{C}}$ such that the following statements hold.
1. For each $i\in\{1,\ldots,r\}$, ${\mathscr{C}}_i$ is a cyclic coherent full translation subquiver of $\Gamma_A$.
2. For each $i\in\{1,\ldots,r\}$, $\operatorname{Supp}({\mathscr{C}}_i)=B({\mathscr{C}}_i)$ and is a generalized multicoil algebra.
3. ${\mathscr{C}}_1, \ldots, {\mathscr{C}}_r$ are pairwise disjoint full translation subquivers of ${\mathscr{C}}$ and ${\mathscr{C}}^{cc}={\mathscr{C}}_1\cup\ldots\cup{\mathscr{C}}_r$ is a maximal cyclic coherent and cofinite full translation subquiver of ${\mathscr{C}}$.
4. $B({\mathscr{C}}\setminus{\mathscr{C}}^{cc})$ is of finite representation type.
5. $\operatorname{Supp}({\mathscr{C}})=B({\mathscr{C}})$.
It follows from the above theorem that all but finitely many modules lying in an infinite cycle-finite component ${\mathscr{C}}$ of $_c\Gamma_A$ can be obtained from indecomposable modules in stable tubes of concealed canonical algebras by a finite sequence of admissible operations of types (ad 1)-(ad 5) and their duals (ad 1$^*$)-(ad 5$^*$) (see [@MS2 Section 3] for details).
We would like to stress that the cycle-finiteness assumption imposed on the infinite component ${\mathscr{C}}$ of $_c\Gamma_A$ is essential for the validity of the above theorem. Namely, it has been proved in [@Sk11], [@Sk12] that, for an arbitrary finite dimensional algebra $B$ over a field $K$, a module $M$ in $\operatorname{mod}B$, and a positive integer $r$, there exists a finite dimensional algebra $A$ over $K$ such that $B$ is a quotient algebra of $A$, $\Gamma_A$ admits a faithful generalized standard stable tube ${\mathscr{T}}$ of rank $r$, ${\mathscr{T}}$ is not cycle-finite, and $M$ is a subfactor of all but finitely many indecomposable modules in ${\mathscr{T}}$. This shows that in general the problem of describing the support algebras of infinite cyclic components (even stable tubes) of Auslander-Reiten quivers is difficult.
In order to present the second case (when a cycle-finite cyclic component ${\mathscr{C}}$ is finite), we need the class of generalized double tilted algebras introduced by I. Reiten and A. Skowroński in [@RS2] (see also [@AC], [@CL] and [@RS1]). A *generalized double tilted algebra* is an algebra $B$ for which $\Gamma_B$ admits a separating almost acyclic component ${\mathscr{C}}$.
The following consequence of [@RS2 Section 3] describes the structure of the Auslander-Reiten quivers of generalized double tilted algebras.
\[thm-45\] Let $B$ be a generalized double tilted algebra. Then the Auslander-Reiten quiver $\Gamma_B$ has a disjoint union decomposition $\Gamma_B={{\mathscr{P}}}^B \cup {{\mathscr{C}}}^B \cup {{\mathscr{Q}}}^B$, where
1. ${{\mathscr{C}}}^B$ is an almost acyclic component separating ${{\mathscr{P}}}^B$ from ${{\mathscr{Q}}}^B$;
2. For each $i\in\{1,\ldots, m\}$, there exist a hereditary algebra $H_i^{(l)}$ and a tilting module $T_i^{(l)}\in\operatorname{mod}H_i^{(l)}$ such that the tilted algebra $B_i^{(l)}$ $=$ $\operatorname{End}_{H_i^{(l)}}(T_i^{(l)})$ is a quotient algebra of $B$ and ${{\mathscr{P}}}^B$ is the disjoint union of all components of $\Gamma_{B_i^{(l)}}$ contained entirely in the torsion-free part ${\mathscr Y}(T_i^{(l)})$ of $\operatorname{mod}B_i^{(l)}$ determined by $T_i^{(l)}$;
3. For each $j\in\{1,\ldots, n\}$, there exist a hereditary algebra $H_j^{(r)}$ and a tilting module $T_j^{(r)}\in\operatorname{mod}H_j^{(r)}$ such that the tilted algebra $B_j^{(r)}$ $=$ $\operatorname{End}_{H_j^{(r)}}(T_j^{(r)})$ is a quotient algebra of $B$ and ${{\mathscr{Q}}}^B$ is the disjoint union of all components of $\Gamma_{B_j^{(r)}}$ contained entirely in the torsion part ${\mathscr X}(T_j^{(r)})$ of $\operatorname{mod}B_j^{(r)}$ determined by $T_j^{(r)}$;
4. Every indecomposable module in ${{\mathscr{C}}}^B$ not lying in the core $c({{\mathscr{C}}}^B)$ of ${{\mathscr{C}}}^B$ is an indecomposable module over one of the tilted algebras $B_1^{(l)}, \ldots, B_m^{(l)}$, $B_1^{(r)}, \ldots,$ $B_n^{(r)}$;
5. Every nondirecting indecomposable module in ${{\mathscr{C}}}^B$ is cycle-finite and lies in $c({{\mathscr{C}}}^B)$;
6. $\operatorname{pd}_BX\leq 1$ for all indecomposable modules $X$ in ${{\mathscr{P}}}^B$;
7. $\operatorname{id}_BY\leq 1$ for all indecomposable modules $Y$ in ${{\mathscr{Q}}}^B$;
8. For all but finitely many indecomposable modules $M$ in ${{\mathscr{C}}}^B$, we have $\operatorname{pd}_BM\leq 1$ or $\operatorname{id}_BM\leq 1$.
Then ${{\mathscr{C}}}^B$ is called a *connecting component* of $\Gamma_B$, $B^{(l)}=B_1^{(l)}\times\ldots\times B_m^{(l)}$ is called the *left tilted algebra* of $B$, and $B^{(r)}=B_1^{(r)}\times\ldots\times B_n^{(r)}$ is called the *right tilted algebra* of $B$. Further, the almost acyclic component ${{\mathscr{C}}}^B$ of $B$ admits a multisection. Recall that, following [@RS2 Section 2], a full connected subquiver $\Delta$ of ${\mathscr{C}}$ is called a *multisection* if the following conditions are satisfied:
1. $\Delta$ is almost acyclic.
2. $\Delta$ is convex in ${\mathscr{C}}$.
3. For each $\tau_B$-orbit $\mathcal O$ in ${\mathscr{C}}$, we have $1 \leq | \Delta \cap {\mathcal O} | < \infty$.
4. $| \Delta \cap {\mathcal O} | = 1$ for all but finitely many $\tau_B$-orbits $\mathcal O$ in ${\mathscr{C}}$.
5. No proper full convex subquiver of $\Delta$ satisfies [(1)]{}–[(4)]{}.
Moreover, for a multisection $\Delta$ of a component ${\mathscr{C}}$, the following full subquivers of ${\mathscr{C}}$ were defined in [@RS2]: $$\! \Delta^{\prime}_{l} = \{ X\! \in \!\Delta; \text{there is a nonsectional path in ${\mathscr{C}}$ from $X$ to a projective module $P$}\},$$ $$\! \Delta^{\prime}_{r} = \{ X\! \in \!\Delta; \text{there is a nonsectional path in ${\mathscr{C}}$ from an injective module $I$ to $X$}\},$$ $$\Delta^{\prime\prime}_{l} =
\{ X \in \Delta^{\prime}_{l};
\tau_B^{-1} X \notin \Delta^{\prime}_{l}
\} , \qquad
\Delta^{\prime\prime}_{r} =
\{ X \in \Delta^{\prime}_{r};
\tau_B X \notin \Delta^{\prime}_{r}
\} ,$$ $$\Delta_l =
(\Delta \setminus \Delta^{\prime}_r)
\cup \tau_B \Delta^{\prime\prime}_r
, \quad
\Delta_c = \Delta^{\prime}_l \cap \Delta^{\prime}_r
, \quad
\Delta_r = (\Delta \setminus \Delta^{\prime}_l)
\cup \tau_B^{-1} \Delta^{\prime\prime}_l
.$$
Then $\Delta_l$ is called the [*left part*]{} of $\Delta$, $\Delta_r$ the [*right part*]{} of $\Delta$, and $\Delta_c$ the [*core*]{} of $\Delta$. The following basic properties of $\Delta$ have been established in [@RS2 Proposition 2.4]:
- Every cycle of ${\mathscr{C}}$ lies in $\Delta_c$.
- $\Delta_c$ is finite.
- Every indecomposable module $X$ in ${\mathscr{C}}$ is in $\Delta_c$, or a predecessor of $\Delta_l$ or a successor of $\Delta_r$ in ${\mathscr{C}}$.
The class of algebras of finite representation type coincides with the class of generalized double tilted algebras $B$ with $\Gamma_B$ being the connecting component ${{\mathscr{C}}}^B$ (equivalently, with the tilted algebras $B^{(l)}$ and $B^{(r)}$ being of finite representation type (possibly empty)).
The following theorem from [@MPS3 Theorem 1.2] describes the support algebras of finite cycle-finite cyclic components of the Auslander-Reiten quivers of artin algebras.
\[thm-1-2\] Let $A$ be an algebra and ${\mathscr{C}}$ be a finite cycle-finite component of $_c{\Gamma_A}$. Then the following statements hold.
1. $\operatorname{Supp}({\mathscr{C}})$ is a generalized double tilted algebra.
2. ${\mathscr{C}}$ is the core $c({{\mathscr{C}}}^{B({\mathscr{C}})})$ of the unique almost acyclic connecting component ${{\mathscr{C}}}^{B({\mathscr{C}})}$ of $\Gamma_{B({\mathscr{C}})}$.
3. $\operatorname{Supp}({\mathscr{C}})=B({\mathscr{C}})$.
It follows from [@MPS3 Corollary 2.7] that every finite cyclic component ${\mathscr{C}}$ of an Auslander-Reiten quiver $\Gamma_A$ contains both a projective module and an injective module, and hence $\Gamma_A$ admits at most finitely many finite cyclic components.
We may summarize the description of the support algebras of cycle-finite cyclic components of the Auslander-Reiten quivers of artin algebras as follows: $$\xymatrix@C=-50pt{
&*+[F-,]{\mbox{$A$-algebra, ${\mathscr{C}}$-cycle-finite component of $_c\Gamma_A$}}\ar@{~>}[ld]\ar@{~>}[rd] \\
*+[F]{\mbox{${\mathscr{C}}$-infinite}}\ar@{=>}[d]&&*+[F]{\mbox{${\mathscr{C}}$-finite}}\ar@{=>}[d]\\
*+[F]{\shortstack[l]{\footnotesize$\operatorname{Supp}({\mathscr{C}})$ - gluing of finitely many \\
\footnotesize generalized multicoil algebras and \\
\footnotesize algebras of finite representation type \\
\footnotesize ${\mathscr{C}}$ - corresponding gluing of the \\
\footnotesize associated cyclic generalized multicoils \\
\footnotesize via finite translation quivers}}&&
*+[F]{\shortstack[l]{\footnotesize$\operatorname{Supp}({\mathscr{C}})$ - generalized \\
\footnotesize double tilted algebra \\
\footnotesize ${\mathscr{C}}$ - core of the connecting \\
\footnotesize component of this algebra}} \\
}$$
Recall that an idempotent $e$ of an algebra $A$ is called *convex* provided $e$ is a sum of pairwise orthogonal primitive idempotents of $A$ corresponding to the vertices of a convex valued subquiver of the quiver $Q_A$ of $A$. The following direct consequence of Theorems \[thm-1-1\], \[thm-1-2\] and [@MPS3 Propositions 2.3, 2.4] provides a handy description of the faithful algebra of a cycle-finite component of $_c\Gamma_A$.
\[cor-50\] Let $A$ be an algebra and ${\mathscr{C}}$ be a cycle-finite component of $_c\Gamma_A$. Then there exists a convex idempotent $e_{{\mathscr{C}}}$ of $A$ such that $\operatorname{Supp}({\mathscr{C}})$ is isomorphic to the algebra $e_{{\mathscr{C}}}Ae_{{\mathscr{C}}}$.
Cycle-finite indecomposable modules {#sect5}
===================================
The aim of this section is to present some results describing homological properties of indecomposable cycle-finite modules over artin algebras.
Let $A$ be an algebra and $M$ a module in $\operatorname{mod}A$. We denote by $|M|$ the length of $M$ over the commutative artin ring $K$. The following theorem is a consequence of Theorems \[thm-1-1\], \[thm-1-2\], the results established in [@MS3 Theorem 1.3] and the properties of directing modules described in [@Ri1 2.4(8)].
\[thm-51\] Let $A$ be an algebra. Then, for all but finitely many isomorphism classes of cycle-finite modules $M$ in $\operatorname{ind}A$, the following statements hold.
1. $|\operatorname{Ext}_A^1(M,M)|\leq |\operatorname{End}_A(M)|$ and $\operatorname{Ext}_A^r(M,M)=0$ for $r\geq 2$.
2. $|\operatorname{Ext}_A^1(M,M)| = |\operatorname{End}_A(M)|$ if and only if there is a quotient concealed canonical algebra $C$ of $A$ and a stable tube ${\mathscr{T}}$ of $\Gamma_C$ such that $M$ is an indecomposable $C$-module in ${\mathscr{T}}$ of quasi-length divisible by the rank of ${\mathscr{T}}$.
In particular, the above theorem shows that, for all but finitely many isomorphism classes of cycle-finite modules $M$ in $\operatorname{ind}A$, the Euler characteristic $$\chi_A(M)=\sum_{i=0}^{\infty}(-1)^i|\operatorname{Ext}_A^i(M,M)|$$ of $M$ is well defined and nonnegative.
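Explicitly, for a cycle-finite module $M$ satisfying statement (1) of Theorem \[thm-51\], only the terms with $i=0$ and $i=1$ contribute to this sum, so $$\chi_A(M)=|\operatorname{End}_A(M)|-|\operatorname{Ext}_A^1(M,M)|\geq 0,$$ and, by statement (2), equality holds exactly when $M$ lies in a stable tube of a quotient concealed canonical algebra of $A$ and has quasi-length divisible by the rank of this tube.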
Let $A$ be an algebra and $K_0(A)$ the Grothendieck group of $A$. For a module $M$ in $\operatorname{mod}A$, we denote by $[M]$ the image of $M$ in $K_0(A)$. Then $K_0(A)$ is a free abelian group with a ${\Bbb Z}$-basis given by $[S_1], \ldots, [S_n]$ for a complete family $S_1, \ldots, S_n$ of pairwise nonisomorphic simple modules in $\operatorname{mod}A$. Thus, for modules $M$ and $N$ in $\operatorname{mod}A$, we have $[M]=[N]$ if and only if the modules $M$ and $N$ have the same composition factors including the multiplicities. In particular, it would be interesting to find sufficient conditions for a module $M$ in $\operatorname{ind}A$ to be uniquely determined (up to isomorphism) by its composition factors (see [@RSS] for a general result in this direction).
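For example, if $A=KQ$ is the path algebra over a field $K$ of the quiver $Q: 1\rightarrow 2$, then $K_0(A)\cong{\Bbb Z}^2$ with ${\Bbb Z}$-basis $[S_1]$, $[S_2]$, the class $[M]$ is the dimension vector of $M$, and the three modules in $\operatorname{ind}A$ are pairwise distinguished by their composition factors: $$[S_1]=(1,0),\qquad [S_2]=(0,1),\qquad [N]=[S_1]+[S_2]=(1,1),$$ where $N$ denotes the unique indecomposable $A$-module of length two.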
The next theorem provides information on the composition factors of cycle-finite modules, and is a direct consequence of Theorems \[thm-1-1\], \[thm-1-2\], \[thm-51\] and the results established in [@Ma Theorems A and B].
\[thm-55\] Let $A$ be an algebra. The following statements hold.
1. There is a positive integer $m$ such that, for any cycle-finite module $M$ in $\operatorname{ind}A$ with $|\operatorname{End}_A(M)| \neq |\operatorname{Ext}_A^1(M,M)|$, the number of isomorphism classes of modules $X$ in $\operatorname{ind}A$ with $[X]=[M]$ is bounded by $m$.
2. For all but finitely many isomorphism classes of cycle-finite modules $M$ in $\operatorname{ind}A$ with $|\operatorname{End}_A(M)| = |\operatorname{Ext}_A^1(M,M)|$, there are infinitely many pairwise nonisomorphic modules $X$ in $\operatorname{ind}A$ with $[X]=[M]$.
Following M. Auslander and I. Reiten [@AR], one associates with each nonprojective module $X$ in $\operatorname{ind}A$ the number $\alpha(X)$ of indecomposable direct summands in the middle term $$0\to \tau_AX\to Y\to X\to 0$$ of an almost split sequence with the right term $X$. It has been proved by R. Bautista and S. Brenner [@BaBr] that, if $A$ is an algebra of finite representation type and $X$ is a nonprojective module in $\operatorname{ind}A$, then $\alpha(X)\leq 4$, and if $\alpha(X)=4$ then $Y$ admits a projective-injective indecomposable direct summand $P$, and hence $X=P/{\mathrm{soc}}(P)$. In [@Li4] S. Liu proved that the same is true for any indecomposable nonprojective module $X$ lying on an oriented cycle of the Auslander-Reiten quiver $\Gamma_A$ of any algebra $A$, and consequently for any nonprojective and nondirecting cycle-finite module in $\operatorname{ind}A$.
The following theorem is a direct consequence of Theorems \[thm-1-1\], \[thm-1-2\], and [@MS1 Corollary B], and provides more information on almost split sequences of nondirecting cycle-finite modules.
\[thm-57\] Let $A$ be an algebra. Then, for all but finitely many isomorphism classes of nonprojective and nondirecting cycle-finite modules $M$ in $\operatorname{ind}A$, we have $\alpha(M)\leq 2$.
Cycle-finite artin algebras {#sect6}
===========================
Following I. Assem and A. Skowroński [@AS1], [@AS2], an algebra $A$ is said to be *cycle-finite* if all cycles in $\operatorname{ind}A$ are finite. A generalized multicoil algebra $B$ is called *tame* if the quasitilted algebras $B^{(l)}$ and $B^{(r)}$ (see Section \[sect4\] for definitions) are products of tilted algebras of Euclidean types or tubular algebras. Moreover, a generalized double tilted algebra $C$ is called *tame* if the tilted algebras $C^{(l)}$ and $C^{(r)}$ (see Section \[sect4\] for definitions) are generically tame (see Section \[sect7\] for the definition) in the sense of W. Crawley-Boevey [@CB1], [@CB2]. We note that every tame generalized multicoil algebra and every tame generalized double tilted algebra is a cycle-finite algebra. The following theorem describes the structure of the category $\operatorname{ind}A$ of an arbitrary cycle-finite algebra $A$, and is a direct consequence of Theorems \[thm-1-1\] and \[thm-1-2\] (see also [@MPS1 Theorems 7.1, 7.2 and 7.3]).
\[thm7\] Let $A$ be a cycle-finite algebra. Then there exist tame generalized multicoil algebras $B_1, \ldots, B_p$ and tame generalized double tilted algebras $B_{p+1}, \ldots, B_q$ which are quotient algebras of $A$ and the following statements hold.
1. $\operatorname{ind}A = \bigcup_{i=1}^q\operatorname{ind}B_i$.
2. All but finitely many isomorphism classes of modules in $\operatorname{ind}A$ belong to $\bigcup_{i=1}^p\operatorname{ind}B_i$.
3. All but finitely many isomorphism classes of nondirecting modules in $\operatorname{ind}A$ belong to generalized multicoils of $\Gamma_{B_1}, \ldots, \Gamma_{B_p}$.
The next theorem extends the homological characterization of strongly simply connected algebras of polynomial growth established in [@PS1] to arbitrary cycle-finite algebras, and is a direct consequence of Theorem \[thm-51\] and the properties of directing modules described in [@Ri1 2.4(8)].
\[thm-62\] Let $A$ be a cycle-finite algebra. Then, for all but finitely many isomorphism classes of modules $M$ in $\operatorname{ind}A$, we have $|\operatorname{Ext}_A^1(M,M)|\leq |\operatorname{End}_A(M)|$ and $\operatorname{Ext}_A^r(M,M)$ $=0$ for $r\geq 2$.
In connection with Theorem \[thm-57\], we present the following theorem proved by J. A. de la Peña and the authors in [@MPS2 Main Theorem].
\[thm-69\] Let $A$ be a cycle-finite algebra, $M$ a nonprojective module in $\operatorname{ind}A$, and $$0\to \tau_AM\to N\to M\to 0$$ the associated almost split sequence in $\operatorname{mod}A$. Then the following statements hold.
1. $\alpha(M)\leq 5$.
2. If $\alpha(M)=5$ then $N$ admits an indecomposable projective-injective direct summand $P$, and hence $M\simeq P/{\mathrm{soc}}(P)$.
The following example of a cycle-finite algebra $A$, taken from [@Ma11 Example 5.17], illustrates the above theorem.
\[ex-5-cf\] Let $K$ be a field and $A=KQ/I$ the bound quiver algebra over $K$, where $Q$ is the quiver of the form $$\xymatrix@R=8pt@C=28pt{
&2\ar[ldd]_{\varepsilon}\\
&3\ar[ld]^{\eta}\\
1&&6\ar[luu]_{\alpha}\ar[lu]^{\beta}\ar[ld]_{\gamma}\ar[ldd]^{\delta}\\
&4\ar[lu]_{\mu}\\
&5\ar[luu]^{\omega}\\
and $I$ the ideal of the path algebra $KQ$ of $Q$ over $K$ generated by the elements $\beta\eta - \alpha\varepsilon$, $\gamma\mu - \alpha\varepsilon$, $\delta\omega - \alpha\varepsilon$. Denote by $B$ the hereditary algebra given by the full subquiver of $Q$ with the vertices $1, 2, 3, 4, 5$, and by $C$ the hereditary algebra given by the full subquiver of $Q$ with the vertices $2, 3, 4, 5, 6$. Note that $P_6 = I_1$ is a projective-injective $A$-module. Therefore, applying [@ASS Proposition IV.3.11], we conclude that there is in $\operatorname{mod}A$ an almost split sequence of the form $$0 \longrightarrow \operatorname{rad}P_6 \longrightarrow S_2\oplus S_3\oplus S_4\oplus S_5\oplus P_6 \longrightarrow P_6/S_1 \longrightarrow 0,$$ where $S_2\oplus S_3\oplus S_4\oplus S_5 \cong \operatorname{rad}P_6/S_1$. Moreover, $\operatorname{rad}P_6$ is the indecomposable injective $B$-module $I_1^B$, whereas $P_6/S_1$ is the indecomposable projective $C$-module $P_6^C$. The component of $\Gamma_A$ containing $P_6 = I_1$ is the following gluing of the preinjective component of $\Gamma_B$ with the postprojective component of $\Gamma_C$ (see details in [@ASS Example VIII.5.7(e)]) $$\xymatrix@R=10pt@C=14pt{
\ldots{\phantom{P}}\ar[rdd]&&\tau_BS_2\ar[rdd]&&S_2\ar[rdd]&&\tau_C^{-1}S_2\ar[rdd]&&{\phantom{P}}\ldots \\
\ldots{\phantom{P}}\ar[rd]&&\tau_BS_3\ar[rd]&&S_3\ar[rd]&&\tau_C^{-1}S_3\ar[rd]&&{\phantom{P}}\ldots \\
&\tau_BI_1^B\ar[ruu]\ar[ru]\ar[rd]\ar[rdd]&&I_1^B\ar[r]\ar[ruu]\ar[ru]\ar[rd]\ar[rdd]&P_6\ar[r]&P_6^C\ar[ruu]\ar[ru]\ar[rd]\ar[rdd]&&
\tau_C^{-1}P_6^C\ar[ruu]\ar[ru]\ar[rd]\ar[rdd] \\
\cdots{\phantom{P}}\ar[ru]&&\tau_BS_4\ar[ru]&&S_4\ar[ru]&&\tau_C^{-1}S_4\ar[ru]&&{\phantom{P}}\cdots \\
\cdots{\phantom{P}}\ar[ruu]&&\tau_BS_5\ar[ruu]&&S_5\ar[ruu]&&\tau_C^{-1}S_5\ar[ruu]&&{\phantom{P}}\cdots \\
}$$ where $\tau_B$ and $\tau_C$ denote the Auslander-Reiten translations in $\operatorname{mod}B$ and $\operatorname{mod}C$, respectively.
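In particular, the middle term of the displayed almost split sequence has exactly five indecomposable direct summands, namely $S_2$, $S_3$, $S_4$, $S_5$ and the projective-injective module $P_6$, so $$\alpha(P_6/S_1)=5 \quad \text{and} \quad P_6/S_1 = P_6/{\mathrm{soc}}(P_6),$$ in accordance with Theorem \[thm-69\].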
Let $A$ be an algebra. Recall that, following C. M. Ringel [@Ri1], a module $M$ in $\operatorname{ind}A$ which does not lie on a cycle in $\operatorname{ind}A$ is called *directing*. We note that if all modules in $\operatorname{ind}A$ are directing, then $A$ is of finite representation type [@Ri1] (see also [@HL] for the corresponding result over an arbitrary artin algebra). If $A$ is a cycle-finite algebra and $M$ is a module in $\operatorname{ind}A$, then $M$ is directing if and only if $M$ is an acyclic vertex of $\Gamma_A$. The following result from [@BiSk] provides a solution to the open problem concerning the existence of infinitely many directing modules for cycle-finite algebras of infinite representation type.
\[thm-65\] Let $A$ be a cycle-finite algebra of infinite representation type. Then $\operatorname{ind}A$ contains infinitely many directing modules.
In connection with the above theorem, the following question arises naturally: does every cycle-finite algebra of infinite representation type contain at least one directing projective module or directing injective module?
In general, the answer is negative. Namely, in joint work with J. A. de la Peña [@MPS4] we constructed a family of cycle-finite algebras of infinite representation type with all indecomposable projective modules and indecomposable injective modules nondirecting. Moreover, it has been shown (see [@MPS4 Theorem]) that there are such algebras with an arbitrarily large number of almost acyclic Auslander-Reiten components having finite cyclic multisections.
The following example from [@MPS4 Section 9] illustrates the situation. We refer to [@MPS4] for the general construction of algebras of this type and more examples.
\[ex-cm2017\] Let $K$ be an algebraically closed field and $A=K\Sigma/J$ the bound quiver algebra over $K$, where $\Sigma$ is the quiver of the form $$\xymatrix@R=14pt@C=14pt{
3\ar[d]_{\beta}&&4\ar[ll]_{\gamma}\ar[d]^{\alpha}&5\ar[l]_{\sigma}&6\ar[l]_{\delta}&7\ar[l]_{\xi}&&8\ar[ll]_{\eta}\cr
2\ar[rd]_{\rho}&&1\ar[ld]^{\theta}&&&10\ar[u]^{\omega}&&9\ar[u]_{\mu}\cr
&f\ar[rd]^{\psi}&&&&&c\ar[lu]^{\chi}\ar[ru]_{\lambda}\ar[rd]^{\pi}\cr
e\ar[ru]^{\nu}&&a\ar[ld]^{\varphi}&&&b\ar[ru]^{\phi}&&h\ar[ld]_{\iota}\ar[d]^{\epsilon}\cr
g\ar[u]^{\vartheta}&d\ar[lu]_{\varepsilon}\ar[l]^{\kappa}&&&&&j\ar[lu]^{\tau}&i\ar[l]^{\zeta}\cr
}$$ and $J$ is the ideal in $K\Sigma$ generated by the elements $\omega\xi\delta\sigma\alpha$, $\alpha\theta-\gamma\beta\rho$, $\chi\omega-\lambda\mu\eta$, $\rho\psi$, $\theta\psi$, $\nu\psi$, $\varphi\varepsilon$, $\varphi\kappa$, $\kappa\vartheta$, $\vartheta\nu$, $\phi\chi$, $\phi\lambda$, $\phi\pi$, $\pi\epsilon$, $\epsilon\zeta$, $\zeta\tau$, $\iota\tau$. Denote by $B=KQ/I$ the bound quiver algebra given by the subquiver $Q$ of $\Sigma$ with the vertices $1, 2, \ldots, 10$ and the ideal $I$ in $KQ$ generated by the element $\omega\xi\delta\sigma\alpha$. Then $B$ is a tubular algebra of type $(2,3,6)$ in the sense of Ringel [@Ri1], and, following [@MPS4 Section 7], the algebra $B$ is called an *exceptional tubular algebra of type $(2,3,6)$*. Moreover, we denote by $H_0 = KQ^{(0)}$ the hereditary algebra of Euclidean type $\widetilde{{\Bbb E}}_8$ given by the full subquiver $Q^{(0)}$ of $\Sigma$ with the vertices $1, 2, 3, 4, 5, 6, 7, 8, 9$, and by $H_1 = KQ^{(1)}$ the hereditary algebra of Euclidean type $\widetilde{{\Bbb E}}_8$ given by the full subquiver $Q^{(1)}$ of $\Sigma$ with the vertices $2, 3, 4, 5, 6, 7, 8, 9, 10$.
Then it follows from [@MPS4 Theorems 5.1, 5.2 and 7.1] that the Auslander-Reiten quiver $\Gamma_A$ of $A$ has a decomposition $$\Gamma_A = {{\mathscr{C}}}_0 \cup {{\mathscr{T}}}^B_0 \cup \bigg(\bigcup_{q\in{\Bbb Q}_1^0}{{\mathscr{T}}}^B_q\bigg) \cup {{\mathscr{T}}}^B_{1} \cup {{\mathscr{C}}}_1,$$ where
- ${{\mathscr{C}}}_0$ is an almost acyclic component of the form $\Delta^{(0)}\cup{{\mathscr{P}}}^B$ with a cyclic multisection $\Delta^{(0)}$ such that the left part $\Delta^{(0)}_l$ of $\Delta^{(0)}$ is empty and the right part $\Delta^{(0)}_r$ of $\Delta^{(0)}$ is given by the indecomposable projective modules of the postprojective component ${{\mathscr{P}}}^B$ of $\Gamma_B$, which coincides with the postprojective component ${{\mathscr{P}}}^{H_0}$ of $\Gamma_{H_0}$,
- ${{\mathscr{T}}}^B_0$ is the ${\Bbb P}_1(K)$-family $({{\mathscr{T}}}^B_{0,\lambda})_{\lambda\in{\Bbb P}_1(K)}$ of ray tubes obtained from the ${\Bbb P}_1(K)$-family ${{\mathscr{T}}}^{H_0}$ of stable tubes of tubular type $(2,3,5)$ by inserting in the unique stable tube of rank $5$, say ${{\mathscr{T}}}^{H_0}_{0,0}$, one ray as follows $$\xymatrix@C=13pt@R=13pt{
&&&&&\circ\ar[rd]&\hspace{-7mm}\scr{P_B(10)}&\circ\ar[rd]&&\circ\ar[rd]&&\circ\ar@{--}[dd]\cr
\circ\ar@{--}[dd]\ar[rd]&&\circ\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]\cr
&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar@{--}[ddd]\cr
\circ\ar[ru]\ar[rd]\ar@{--}[dd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]\cr
{\phantom{\circ}}&{\phantom{\circ}}\ar[ru]\ar@{.}[d]&&{\phantom{\circ}}\ar[ru]\ar@{.}[d]&&{\phantom{\circ}}\ar[ru]\ar@{.}[d]&&{\phantom{\circ}}\ar[ru]\ar@{.}[d]&&{\phantom{\circ}}\ar[ru]\ar@{.}[d]&&{\phantom{\circ}}\cr
{\phantom{\circ}}&{\phantom{\circ}}&&{\phantom{\circ}}&&{\phantom{\circ}}&&{\phantom{\circ}}&&{\phantom{\circ}}&&{\phantom{\circ}}\cr
}$$ where the vertices along the dashed vertical lines have to be identified,
- For each $q\in{\Bbb Q}_1^0$, ${{\mathscr{T}}}^B_q$ is a ${\Bbb P}_1(K)$-family $({{\mathscr{T}}}^B_{q,\lambda})_{\lambda\in{\Bbb P}_1(K)}$ of stable tubes of tubular type $(2,3,6)$, where ${\Bbb Q}_1^0 = {\Bbb Q}\cap (0,1)$,
- ${{\mathscr{T}}}^B_{1}$ is the ${\Bbb P}_1(K)$-family $({{\mathscr{T}}}^B_{1,\lambda})_{\lambda\in{\Bbb P}_1(K)}$ of coray tubes obtained from the ${\Bbb P}_1(K)$-family ${{\mathscr{T}}}^{H_{1}}$ of stable tubes of tubular type $(2,3,5)$ by inserting in the unique stable tube of rank $5$, say ${{\mathscr{T}}}^{H_{1}}_{1,0}$, one coray as follows $$\xymatrix@C=13pt@R=13pt{
\circ\ar@{--}[dd]\ar[rd]&&\circ\ar[rd]&&\circ\ar[rd]&&\circ\ar[rd]&\hspace{-7mm}\scr{I_B(1)}\cr
&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[rd]&&\circ\ar[rd]&&\circ\ar@{--}[ddd]\cr
\circ\ar[ru]\ar[rd]\ar@{--}[dd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]&&\circ\ar[ru]\ar[rd]\cr
{\phantom{\circ}}&{\phantom{\circ}}\ar[ru]\ar@{.}[d]&&{\phantom{\circ}}\ar[ru]\ar@{.}[d]&&{\phantom{\circ}}\ar[ru]\ar@{.}[d]&&{\phantom{\circ}}\ar[ru]\ar@{.}[d]&&{\phantom{\circ}}\ar[ru]\ar@{.}[d]&&{\phantom{\circ}}\cr
{\phantom{\circ}}&{\phantom{\circ}}&&{\phantom{\circ}}&&{\phantom{\circ}}&&{\phantom{\circ}}&&{\phantom{\circ}}&&{\phantom{\circ}}\cr
}$$ where the vertices along the dashed vertical lines have to be identified,
- ${{\mathscr{C}}}_1$ is an almost acyclic component of the form ${{\mathscr{Q}}}^B\cup\Delta^{(1)}$ with a cyclic multisection $\Delta^{(1)}$ such that the right part $\Delta^{(1)}_r$ of $\Delta^{(1)}$ is empty and the left part $\Delta^{(1)}_l$ of $\Delta^{(1)}$ is given by the indecomposable injective modules of the preinjective component ${{\mathscr{Q}}}^B$ of $\Gamma_B$, which coincides with the preinjective component ${{\mathscr{Q}}}^{H_1}$ of $\Gamma_{H_1}$.
Therefore, all indecomposable projective modules and indecomposable injective modules in $\operatorname{mod}A$ are nondirecting.
Let $A$ be an algebra. We denote by ${\mathcal{F}}(A)$ the category of all finitely presented contravariant functors from $\operatorname{mod}A$ to the category ${{\mathcal{A}}b}$ of abelian groups. The category ${\mathcal{F}}(A)$ has been intensively studied over the last 40 years and is considered to be one of the important topics of the modern representation theory of algebras. It is a hard problem to describe the category ${\mathcal{F}}(A)$ even when the category $\operatorname{mod}A$ is well understood. A natural approach to the study of the structure of ${\mathcal{F}}(A)$ is via the associated Krull-Gabriel filtration $$0 = {\mathcal{F}}(A)_{-1}
\subseteq {\mathcal{F}}(A)_0
\subseteq {\mathcal{F}}(A)_1
\subseteq \dots
\subseteq {\mathcal{F}}(A)_{n-1}
\subseteq {\mathcal{F}}(A)_n
\subseteq \cdots$$ of ${\mathcal{F}}(A)$ by Serre subcategories, where, for each $n \in {\mathbb{N}}$, ${\mathcal{F}}(A)_n$ is the subcategory of all functors $F$ in ${\mathcal{F}}(A)$ which become of finite length in the quotient category ${\mathcal{F}}(A)/{\mathcal{F}}(A)_{n-1}$ [@F], [@Po]. Following W. Geigle [@Ge1], we define $KG(A) = \min \{ n \in {\mathbb{N}}\,|\, {\mathcal{F}}(A)_n = {\mathcal{F}}(A) \}$ if such a minimum exists, and set $KG(A) = \infty$ if it is not the case. Then $KG(A)$ is called the *Krull-Gabriel dimension* of $A$. The interest in the Krull-Gabriel dimension $KG(A)$ is motivated by the fact that the above filtration of ${\mathcal{F}}(A)$ leads to a hierarchy of exact sequences in $\operatorname{mod}A$, where the almost split sequences form the lowest level (see [@Ge1]).
The following characterization of cycle-finite algebras with finite Krull-Gabriel dimension has been established by the second named author in [@Sk-20 Theorem 1.2].
\[thm-KG\] Let $A$ be a cycle-finite algebra of infinite representation type. The following statements are equivalent.
1. $KG(A) < \infty$.
2. $KG(A) = 2$.
3. $\bigcap_{m \geq 1} (\operatorname{rad}_A^{\infty})^m = 0$.
4. $\operatorname{rad}_A^{\infty}$ is nilpotent.
5. All but finitely many components of $\Gamma_A$ are stable tubes of rank one.
6. $A$ does not admit a tubular quotient algebra.
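For example, let $A$ be the Kronecker algebra over a field $K$, that is, the path algebra of the quiver with two vertices and two parallel arrows. Then $A$ is a cycle-finite algebra of infinite representation type, all but two components of $\Gamma_A$ are stable tubes of rank one (the remaining two being the postprojective and the preinjective component), and $A$ admits no tubular quotient algebra, so the above theorem gives $KG(A)=2$.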
We end this section with the related open problem (see [@MPS4 Question 1]).
*Let $A$ be a cycle-finite algebra of infinite representation type and finite Krull-Gabriel dimension. Is it true that $\operatorname{ind}A$ admits a directing projective module or a directing injective module?*
Artin algebras with separating Auslander-Reiten components {#sect7}
==========================================================
In this section we discuss the structure of artin algebras $A$ having separating families of components in $\Gamma_A$.
Let $A$ be an algebra. A component ${\mathscr{C}}$ of $\Gamma_A$ is called *cycle-finite* if all modules in ${\mathscr{C}}$ are cycle-finite.
We have the following proposition.
\[pro-71\] Let $A$ be an algebra with a separating family ${{\mathscr{C}}}^A$ of components in $\Gamma_A$, and $\Gamma_A$=${{\mathscr{P}}}^A \cup {{\mathscr{C}}}^A \cup {{\mathscr{Q}}}^A$ the associated decomposition of $\Gamma_A$. Then ${{\mathscr{C}}}^A$ is a family of cycle-finite components.
The next theorem from [@MS9 Theorem 1.5] provides a solution to the problem, initiated by Ringel [@Ri0], [@Ri1], [@Ri-86], of describing the structure of artin algebras admitting a separating family of Auslander-Reiten components.
\[thm-75\] Let $A$ be an algebra with a separating family ${{\mathscr{C}}}^A$ of components in $\Gamma_A$, and $\Gamma_A$=${{\mathscr{P}}}^A \cup {{\mathscr{C}}}^A \cup {{\mathscr{Q}}}^A$ the associated decomposition of $\Gamma_A$. Then there exist quotient algebras $A^{(l)}$ and $A^{(r)}$ of $A$ such that the following statements hold.
1. $A^{(l)}=A^{(l)}_1 \times\cdots\times A^{(l)}_m \times A^{(l)}_{m+1} \times\cdots\times A^{(l)}_{m+p},$ where
1. For each $i\in\{1,\ldots,m\}$, $A^{(l)}_i$ is a tilted algebra of the form $\operatorname{End}_{H^{(l)}_i}(T^{(l)}_i)$ for a hereditary algebra $H^{(l)}_i$ and a tilting module $T^{(l)}_i$ in $\operatorname{mod}H^{(l)}_i$ without indecomposable preinjective direct summands.
2. For each $i\in\{m+1,\ldots,m+p\}$, $A^{(l)}_i$ is a quasitilted algebra of canonical type with a separating family of coray tubes in $\Gamma_{A^{(l)}_i}$.
2. $A^{(r)}=A^{(r)}_1 \times\cdots\times A^{(r)}_n \times A^{(r)}_{n+1} \times\cdots\times A^{(r)}_{n+q},$ where
1. For each $j\in\{1,\ldots,n\}$, $A^{(r)}_j$ is a tilted algebra of the form $\operatorname{End}_{H^{(r)}_j}(T^{(r)}_j)$ for a hereditary algebra $H^{(r)}_j$ and a tilting module $T^{(r)}_j$ in $\operatorname{mod}H^{(r)}_j$ without indecomposable postprojective direct summands.
2. For each $j\in\{n+1,\ldots,n+q\}$, $A^{(r)}_j$ is a quasitilted algebra of canonical type with a separating family of ray tubes in $\Gamma_{A^{(r)}_j}$.
3. ${\mathscr{P}}^A = \bigcup_{i=1}^{m+p} {\mathscr{P}}^{A^{(l)}_i}$ and every component in ${\mathscr{P}}^A$ is either a postprojective component, a ray tube, or obtained from a component of the form ${\Bbb Z}{\Bbb A}_{\infty}$ by a finite number (possibly zero) of ray insertions.
4. ${\mathscr{Q}}^A = \bigcup_{j=1}^{n+q} {\mathscr{Q}}^{A^{(r)}_j}$ and every component in ${\mathscr{Q}}^A$ is either a preinjective component, a coray tube, or obtained from a component of the form ${\Bbb Z}{\Bbb A}_{\infty}$ by a finite number (possibly zero) of coray insertions.
In [@CB1], [@CB2] Crawley-Boevey introduced the concept of a generically tame algebra. An indecomposable right $A$-module $M$ over an algebra $A$ is called a *generic module* if $M$ is of infinite length over $A$ but of finite length over $\operatorname{End}_A(M)$; the length of $M$ over $\operatorname{End}_A(M)$ is called the *endolength* of $M$. Then an algebra $A$ is called *generically tame* if, for any positive integer $d$, there are only finitely many isomorphism classes of generic right $A$-modules of endolength $d$. An algebra $A$ is called *generically finite* if there are at most finitely many pairwise non-isomorphic generic right $A$-modules. Further, $A$ is called *generically of polynomial growth* if there is a positive integer $m$ such that for any positive integer $d$ the number of isomorphism classes of generic right $A$-modules of endolength $d$ is at most $d^m$. We note that every algebra $A$ of finite representation type is generically trivial, that is, there is no generic right $A$-module. We also stress that by a theorem of Crawley-Boevey [@CB1 Theorem 4.4], if $A$ is an algebra over an algebraically closed field $K$, then $A$ is generically tame if and only if $A$ is tame in the sense of Drozd [@Dro] (see also [@CB], [@SS2]).
Recall also that following [@Sk7] the *component quiver* $\Sigma_A$ of an algebra $A$ has the components of $\Gamma_A$ as vertices and there is an arrow ${\mathscr{C}}\to{\mathscr{D}}$ in $\Sigma_A$ if $\operatorname{rad}_A^{\infty}(X,Y)\neq 0$, for some modules $X$ in ${\mathscr{C}}$ and $Y$ in ${\mathscr{D}}$. In particular, a component ${\mathscr{C}}$ of $\Gamma_A$ is generalized standard if and only if there is no loop at ${\mathscr{C}}$ in $\Sigma_A$.
The final theorem is a consequence of [@MS9 Theorem 1.10], and characterizes the cycle-finite algebras with separating families of Auslander-Reiten components.
\[thm-77\] Let $A$ be an algebra with a separating family of components in $\Gamma_A$. The following statements are equivalent:
1. $A$ is cycle-finite.
2. $A$ is generically tame.
3. $A$ is generically of polynomial growth.
4. $A^{(l)}$ and $A^{(r)}$ are products of tilted algebras of Euclidean type or tubular algebras.
5. $\Gamma_A$ is almost periodic.
6. $\Sigma_A$ is acyclic.
[A]{} I. Assem and F. U. Coelho, *Two-sided gluings of tilted algebras*, J. Algebra **269** (2003), 456–479. I. Assem, D. Simson and A. Skowroński, *Elements of the Representation Theory of Associative Algebras 1: Techniques of Representation Theory*, London Mathematical Society Student Texts, vol. 65, Cambridge University Press, Cambridge, 2006. I. Assem and A. Skowroński, *Algebras with cycle-finite derived categories*, Math. Ann. **280** (1988), 441–463. I. Assem and A. Skowroński, *Minimal representation-infinite coil algebras*, Manuscr. Math. **67** (1990), 305–331. I. Assem, A. Skowroński and B. Tomé, *Coil enlargements of algebras*, Tsukuba J. Math. **19** (1995), 453–479. M. Auslander, *Representation theory of artin algebras II*, Comm. Algebra **1** (1974), 269–310. M. Auslander and I. Reiten, *Uniserial functors*, in: Representation Theory II, in: Lecture Notes in Math., vol. 832, Springer–Verlag, Berlin–Heidelberg, 1980, pp. 1–47. M. Auslander, I. Reiten and S. O. Smalø, *Representation Theory of Artin Algebras*, Cambridge Stud. in Adv. Math., vol. 36, Cambridge University Press, Cambridge, 1995. R. Bautista and S. Brenner, *On the number of terms in the middle of an almost split sequence*, in: Representations of Algebras, in: Lecture Notes in Math., vol. 903, Springer–Verlag, Berlin–Heidelberg, 1981, pp. 1–8. J. Bia[ł]{}kowski and A. Skowroński *Cycles of modules and finite representation type*, Bull. London Math. Soc. **48** (2016), 589–600. F. U. Coelho and M. Lanzilotta, *Algebras with small homological dimensions*, Manuscripta Math. **100** (1999), 1–11. F. U. Coelho, E. M. Marcos, H. A. Merklen and A. Skowroński, *Module categories with infinite radical square zero are of finite type*, Comm. Algebra **22** (1994), 4511–4517. W. Crawley-Boevey, *On tame algebras and bocses*, Proc. London Math. Soc. **56** (1988), 451–483. W. Crawley-Boevey, *Tame algebras and generic modules*, Proc. London Math. Soc. **63** (1991), 241–265. W. Crawley-Boevey, *Modules of finite length over their endomorphism rings*, in: Representations of Algebras and Related Topics, in: London Math. Soc. Lecture Note Series, vol. 168, Cambridge University Press, Cambridge, 1992, pp. 127–184. G. D’Este and C. M. Ringel, *Coherent tubes*, J. Algebra **87** (1984), 150–201. Y. A. Drozd, *Tame and wild matrix problems*, in: Representation Theory II, in: Lecture Notes in Math., vol. 832, Springer–Verlag, Berlin–Heidelberg, 1980, 242–258. C. Faith, *Algebra: Rings, Modules and Categories I*, Springer–Verlag, Berlin–Heidelberg, 1973. W. Geigle, *The Krull-Gabriel dimension of the representation theory of a tame hereditary Artin algebra and applications to the structure of exact sequences*, Manuscripta Math. **54** (1985), 83–106. D. Happel and S. Liu, *Module categories without short cycles are of finite type*, Proc. Amer. Math. Soc. **120** (1994), 371–375. D. Happel and I. Reiten, *Hereditary abelian categories with tilting object over arbitrary base fields*, J. Algebra **256** (2002), 414–432. D. Happel, I. Reiten and S. O. Smalø, *Tilting in abelian categories and quasitilted algebras*, Memoirs Amer. Math. Soc. **120** no. 575 (1996). A. Jaworska, P. Malicki and A. Skowroński, *Tilted algebras and short chains of modules*, Math. Z. **273** (2013), 19–27. A. Jaworska, P. Malicki and A. Skowroński, *Modules not being the middle of short chains*, Quart. J. Math. **64** (2013), 1141–1160. O. Kerner and A. Skowroński, *On module categories with nilpotent infinite radical*, Compositio Math. **77** (1991), 313-333. 
H. Lenzing and J. A. de la Peña, *Concealed-canonical algebras and separating tubular families*, Proc. London Math. Soc. **78** (1999), 513–540. H. Lenzing and A. Skowroński, *Quasi-tilted algebras of canonical type*, Colloq. Math. **71** (1996), 161–181. S. Liu, *Degrees of irreducible maps and the shapes of the Auslander-Reiten quivers*, J. London Math. Soc. **45** (1992), 32–54. S. Liu, *Semi-stable components of an Auslander-Reiten quiver*, J. London Math. Soc. **47** (1993), 405–416. S. Liu, *Almost split sequences for non-regular modules*, Fund. Math. **143** (1993), 183–190. P. Malicki, *On the composition factors of indecomposable modules in almost cyclic coherent Auslander-Reiten components*, J. Pure Appl. Algebra **207** (2006), 469–490. P. Malicki, *Auslander-Reiten theory for finite-dimensional algebras*, in: Homological Methods, Representation Theory, and Cluster Algebras, CRM Short Courses, Springer, Cham, 2018, pp. 21–63. P. Malicki, J. A. de la Peña and A. Skowroński, *Cycle-finite module categories*, Algebras, Quivers and Representations - Abel Symposium 2011. Abel Symposia vol. 8, Springer–Verlag, 2013, pp. 209–252. P. Malicki, J. A. de la Peña and A. Skowroński, *On the number of terms in the middle of almost split sequences over cycle-finite artin algebras*, Centr. Eur. J. Math. **12** (2014), 39–45. P. Malicki, J. A. de la Peña and A. Skowroński, *Finite cycles of indecomposable modules*, J. Pure Appl. Algebra **219** (2015), 1761–1799. P. Malicki, J. A. de la Peña and A. Skowroński, *Existence of cycle-finite algebras of infinite representation type without directing projective or injective modules*, Colloq. Math. **148** (2017), 165–190. P. Malicki and A. Skowroński, *Almost cyclic coherent components of an Auslander-Reiten quiver*, J. Algebra **229** (2000), 695–749. P. Malicki and A. Skowroński, *Algebras with separating almost cyclic coherent Auslander-Reiten components*, J. Algebra **291** (2005), 208–237. P. Malicki and A. Skowroński, *On the indecomposable modules in almost cyclic coherent Auslander-Reiten components*, J. Math. Soc. Japan **63** (2011), 1121–1154. P. Malicki and A. Skowroński, *Algebras with separating Auslander-Reiten components*, in: Representations of Algebras and Related Topics, European Math. Soc. Series Congress Reports, European Math. Soc. Publ. House, Zürich, 2011, pp. 251–353. P. Malicki and A. Skowroński, *The structure and homological properties of generalized standard Auslander-Reiten components*, J. Algebra **518** (2019), 1–39. J. A. de la Peña and A. Skowroński, *Geometric and homological characterizations of polynomial growth strongly simply connected algebras*, Invent. Math. **126** (1996), 287–296. N. Popescu, *Abelian Categories with Applications to Rings and Modules*, London Mathematical Society Monographs, No. 3, Academic Press, London–New York, 1973. I. Reiten and A. Skowroński, *Sincere stable tubes*, J. Algebra **232** (2000), 64–75. I. Reiten and A. Skowroński, *Characterizations of algebras with small homological dimensions*, Advances Math. **179** (2003), 122–154. I. Reiten and A. Skowroński, *Generalized double tilted algebras*, J. Math. Soc. Japan **56** (2004), 269–288. I. Reiten, A. Skowroński and S.O. Smal[ø]{}, *Short chains and short cycles of modules*, Proc. Amer. Math. Soc. **117** (1993), 343-354. C. M. Ringel, *Separating tubular series*, in: Séminare d’Algébre Paul Dubreil et Marie-Paul Malliavin, Lecture Notes in Math., vol. 1029, Springer–Verlag, Berlin–Heidelberg, 1983, pp. 134–158. C. M. 
Ringel, *Tame Algebras and Integral Quadratic Forms*, Lecture Notes in Math., vol. 1099, Springer–Verlag, Berlin–Heidelberg, 1984. C. M. Ringel, *Representation theory of finite dimensional algebras*, in: Representations of Algebras, London Mathematical Society Lecture Notes Series vol. 116, Cambridge University Press, Cambridge, 1986, 7–79. C. M. Ringel, *The canonical algebras*, with an appendix by W. Crawley-Boevey in: Topics in Algebra, Part 1: Rings and Representations of Algebras, Banach Center Publ. 26, PWN, Warsaw, 1990, pp. 407–432. D. Simson and A. Skowroński, *Elements of the Representation Theory of Associative Algebras 2: Tubes and Concealed Algebras of Euclidean Type*, London Mathematical Society Student Texts vol. 71, Cambridge University Press, Cambridge, 2007. D. Simson and A. Skowroński, *Elements of the Representation Theory of Associative Algebras 3: Representation-Infinite Tilted Algebras*, London Mathematical Society Student Texts vol. 72, Cambridge University Press, Cambridge, 2007. A. Skowroński, *Generalized standard Auslander-Reiten components*, J. Math. Soc. Japan **46** (1994), 517–543. A. Skowroński, *Cycles in module categories*, in: Finite Dimensional Algebras and Related Topics, NATO ASI Series, Series C: Math. and Phys. Sciences 424, Kluwer Acad. Publ., Dordrecht, 1994, pp. 309–345. A. Skowroński, *On omnipresent tubular families of modules*, in: Representation Theory of Algebras, Canad. Math. Soc. Conf. Proc. 18, Amer. Math. Soc., Providence, RI, 1996, pp. 641–657. A. Skowroński, *Tame quasi-tilted algebras*, J. Algebra **203** (1998), 470–490. A. Skowroński, *Generalized canonical algebras and standard stable tubes*, Colloq. Math. **90** (2001), 77–93. A. Skowroński, *A construction of complex syzygy periodic modules over symmetric algebras*, Colloq. Math. **103** (2005), 61–69. A. Skowroński, *The Krull-Gabriel dimension of cycle-finite artin algebras*, Algebr. Represent. Theory **19** (2016), 215–233. A. Skowroński and K. Yamagata, *Frobenius Algebras I. Basic Representation Theory*, European Mathematical Society, European Math. Soc. Publ. House, Zürich, 2011. A. Skowroński and K. Yamagata, *Frobenius Algebras II. Tilted and Hochschild extension algebras*, European Mathematical Society, European Math. Soc. Publ. House, Zürich, 2017. Y. Zhang, *The structure of stable components*, Canad. J. Math. **43** (1991), 652–672.
---
abstract: 'We prove constructive versions of various usual results related to the Gelfand duality. Namely, that the constructive Gelfand duality extends to a duality between commutative nonunital $C^{*}$-algebras and locally compact completely regular locales, that the ideals of a commutative $C^{*}$-algebra are in order preserving bijection with the open sublocales of its spectrum, and a purely constructive result saying that a commutative $C^{*}$-algebra has a continuous norm if and only if its spectrum is open. We also extend all these results to the case of localic $C^{*}$-algebras. In order to do so we develop the notions of the one point compactification of a locally compact regular locale and of the unitarization of a $C^{*}$-algebra in a constructive framework.'
author:
- Simon Henry
bibliography:
- 'Biblio.bib'
title: 'Constructive Gelfand duality for non-unital commutative $C^{*}$-algebras'
---
Notations and preliminaries
==========================
This paper has been written to provide two technical tools which were needed in the proof of the main theorems of [@henry2015toward]: the non-unital Gelfand duality, including the characterization of the spectrum given by proposition \[spec\_class\_char\], and (one direction of) the “positivity" theorem \[continuity=openess\]. We took the opportunity to prove (constructively) some other results in this spirit that might be useful for future work, like for example theorem \[openeqideal\] and the results of section \[secLocalic\].
Throughout this paper we work in the internal logic of an elementary topos with a natural number object ${\mathbb{N}}$. The subobject classifier is denoted by $\Omega$, and $\top$ and $\bot$ denote its top and bottom elements, i.e. the propositions true and false.
A frame is a complete Heyting algebra, and a frame homomorphism is an order preserving map commuting with arbitrary supremums and finite infimums. The category of locales is defined as the opposite of the category of frames; if $X$ is a locale, the corresponding frame is denoted by ${\mathcal{O}}(X)$. If $f$ is a morphism of locales, the corresponding frame homomorphism is denoted $f^{*}$.
Elements of ${\mathcal{O}}(X)$ are called open sublocales of $X$. The top element of ${\mathcal{O}}(X)$ is denoted by $X$, the bottom element by $\emptyset$. When talking about open sublocales, “$V$ is bigger than $U$" or “$U$ is smaller than $V$" always means $U \leqslant V$. For more information on the theory of locales, the reader can consult [@picado2012frames] (which is unfortunately non constructive) or [@sketches C1].
Supremums and finite infimums in ${\mathcal{O}}(X)$ are called unions and intersections and are denoted by the symbols $\cup$ and $\cap$.
If $U$ is an open sublocale of $X$, then $\neg U$ denotes the open sublocale $U \Rightarrow \emptyset$ and $U^{c}$ denotes the closed complement of $U$, i.e. the locale such that ${\mathcal{O}}(U^{c}) = \{ V \in {\mathcal{O}}(X) | U \leqslant V \}$. In particular, $\neg U$ is the interior of $U^{c}$.
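For a classical point-set illustration, let $X$ be the locale of real numbers and $U$ the open interval $]0,1[$. Then $U^{c}$ is the closed sublocale corresponding to ${\mathbb{R}}\setminus\,]0,1[$, while $$\neg U = \,]-\infty,0[\, \cup \,]1,+\infty[$$ is its interior; in particular $U \cup \neg U \neq X$.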
An (increasing) net of open sublocales of $X$ is an *inhabited* family $(U_i)_{i \in I}$ of open sublocales of $X$ such that for each $i,j \in I$ there exists $k \in I$ such that $U_k$ is bigger than $U_i$ and $U_j$.
If $U$ and $V$ are two open sublocales of a locale $X$, we say that:
- $U \ll V $ ($U$ is way below $V$) if for each increasing net $(U_i)_{i \in I}$ whose supremum is bigger than $V$ there exists $i \in I$ such that $U \leqslant U_i$.
- $ U \triangleleft V$ ($U$ is rather below $V$) if there exists $W \in {\mathcal{O}}(X)$ such that $V \cup W =X$ and $U \cap W = \emptyset$. Or equivalently if $\neg U \cup V = X$.
- $U \triangleleft_{CR} V$ ($U$ is completely below $V$) if there exists a “scale" $(U_q)_{q \in [0,1] \cap {\mathbb{Q}}}$ such that $U_0=U$, $U_1=V$ and for each $q'<q$ one has $U_{q'} \triangleleft U_q$. This is also equivalent to the existence of a function $f$ from $X$ to the locale[^1] $[0,1]$ of real numbers between $0$ and $1$ such that $f^{*}( ]0,1]) \subset V$ (i.e. $f$ restricted to $V^{c}$ is $0$) and $f$ restricted to $U$ is constant equal to $1$. (see [@picado2012frames V.5.7 and XIV.6.2]).
One also says that $X$ is locally compact (resp. regular, resp. completely regular) if any open sublocale $V$ of $X$ can be written as a supremum of open sublocales $U$ such that $U \ll V$ (resp. $U \triangleleft V$, resp. $U \triangleleft_{CR} V$). A locale $X$ is said to be compact if $X \ll X$.
One has the following properties:
1. Each of the three relations $\ll$, $\triangleleft$ and $\triangleleft_{CR}$ satisfies the properties: if $a\leqslant b$, $b \ll c$ and $c \leqslant d$ then $a \ll d$; and if $a \ll b$ and $c \ll d$ then $a \cup c \ll b \cup d$.
2. In a regular (resp. completely regular) locale $U \ll V$ implies $U \triangleleft V$ (resp. $U \triangleleft_{CR} V$ ).
3. In a locally compact locale $X$, if $a \ll b$ then there exists $c$ such that $a \ll c \ll b$.
4. In a compact locale $a \triangleleft b$ implies $a \ll b$; more generally, in any locale $X$, if $a \triangleleft b$ and $a \ll X$ then $a \ll b$.
5. In particular, a compact regular locale is locally compact.
6. Also, in a compact regular locale, $\triangleleft$ and $\ll$ are equivalent, and in a compact completely regular locale all three relations are equivalent.
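For a classical point-set illustration of these relations, consider bounded open intervals of the real line with $a<b$: one then has $$]a,b[\; \ll\; ]c,d[ \;\Longleftrightarrow\; ]a,b[\; \triangleleft\; ]c,d[ \;\Longleftrightarrow\; ]a,b[\; \triangleleft_{CR}\; ]c,d[ \;\Longleftrightarrow\; [a,b] \subseteq\, ]c,d[,$$ i.e. all three relations express that the closure of the smaller interval is a compact subset of the bigger one.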
If $f:{\mathcal{T}}\rightarrow {\mathcal{E}}$ is a geometric morphism between two toposes, and $X$ a locale in the internal logic of ${\mathcal{E}}$, we denote by $f^{\sharp}(X)$ the pullback of $X$ along $f$; in particular $f^{*}({\mathcal{O}}(X))$ is different from ${\mathcal{O}}(f^{\sharp}(X))$ but is still a basis[^2] of the topology of $f^{\sharp}(X)$.
We will frequently use expressions of the form $\bigcup_{u} a$ where $u$ is a proposition and $a$ is an element of a frame, which might seem strange to a reader unfamiliar with this convention. This expression makes sense because, as a proposition, $u$ is a subset of the singleton and $a$ can be seen as a family of elements indexed by the singleton (and hence also by its subset $u$). This is of course the same as $a \cap p^{*}(u)$, where $p^{*}$ denotes the canonical frame homomorphism from the initial frame $\Omega$. But the expression with a union emphasizes the fact that this is indeed a union, and it might also happen that the expression defining “$a$" only makes sense when $u$ holds, in which case only the first expression makes sense.
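Concretely, one has $$\bigcup_{\top} a = a, \qquad \bigcup_{\bot} a = \emptyset, \qquad \bigcup_{u} a = a \cap p^{*}(u),$$ the middle case being a union over the empty family.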
We conclude these preliminaries by the definition of real numbers. We will need two spaces of real numbers.
The first one is the set of non-negative upper semi-continuous real numbers, where the norm function of $C^{*}$-algebras will take values. An upper semi-continuous real number is a subset $x$ of the set ${\mathbb{Q}}$ of rational numbers such that:
- $\exists q \in x$
- If $q \in x$ and $q<q'$ then $q' \in x$
- For all $q \in x $ there exists a $q'<q$ such that $q' \in x$.
It is said to be non-negative if it is included in the set ${\mathbb{Q}}_+^{*}$ of positive rational numbers. Of course $q \in x$ has to be interpreted as $x<q$ (and will be denoted this way). Upper semi-continuous real numbers have good order properties (every bounded set has a supremum) but poor algebraic properties: even if one removes the positivity assumption, there is no opposite of an arbitrary element (the opposite of an upper semi-continuous real number would be a lower semi-continuous number) and we can only multiply positive elements.
The second is the set ${\mathbb{R}}$ of continuous real numbers, which will play the role of the “field of scalars" for $C^{*}$-algebras. A continuous real number is a pair $x=(L,U)$ of subsets of the set ${\mathbb{Q}}$ of rational numbers such that:
- $U$ is an upper semi-continuous real number, and $L$ is a lower semi-continuous real number (i.e. satisfies the same three axioms but for the reverse order relation).
- $L \cap U = \emptyset$.
- for all $q<q'$ either $q \in L$ or $q' \in U$.
Of course $q \in L$ means $q<x$ and $q \in U$ means $x<q$. The continuous real numbers have good algebraic properties (they form a local ring) and topological properties (they are complete, in fact they are exactly the completion of ${\mathbb{Q}}$ by Cauchy filters) but no longer have supremums in general. The complex numbers are defined as ${\mathbb{R}}\times {\mathbb{R}}$ endowed with their usual product.
Finally, the map $(L,U) \mapsto U$ induces an injection of the continuous real numbers into the upper semi-continuous real numbers; in particular it makes sense to ask whether a given upper semi-continuous real number is continuous or not.
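As a basic example, a rational number $r$ determines the continuous real number $x_r=(L_r,U_r)$ with $$L_r=\{q\in{\mathbb{Q}}\,|\,q<r\}, \qquad U_r=\{q\in{\mathbb{Q}}\,|\,r<q\},$$ and the injection above sends $x_r$ to the upper semi-continuous real number $U_r$, which is non-negative (i.e. included in ${\mathbb{Q}}_+^{*}$) precisely when $0\leqslant r$.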
One point compactification of locales
=====================================
In this section we will define a constructive and pointfree version of the process of one point compactification of a locally compact separated topological space.
Let $X$ be a locally compact regular locale, and $U \in {\mathcal{O}}(X)$. We will denote by $\omega(U)$ the proposition:
$$\omega(U) := ``\exists W \in {\mathcal{O}}(X) \text{ such that } W \ll X \text{ and } U \cup W =X "$$
i.e. $\omega(U)$ is the proposition “$U$ has a compact complement". The underlying idea is that in the one point compactification, the neighbourhoods of $\infty$ are exactly the open subspaces whose complement is compact, i.e. the $U$ such that $\omega(U)$.
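For a classical illustration, take for $X$ the locale of real numbers: the open sublocale $U$ corresponding to ${\mathbb{R}}\setminus[-1,1]$ satisfies $\omega(U)$, as witnessed by $W=\,]-2,2[\,$, which is way below $X$ and satisfies $U \cup W = X$; on the other hand $\omega(V)$ fails for $V=\,]0,+\infty[\,$, whose closed complement is not compact.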
The main result of this section is:
{#OnepointCptmain}
**Theorem :**
*Let $X$ be a locally compact regular locale. Then there exists a unique (up to unique isomorphism) compact regular locale $X^{\infty}$ with a (closed) point $\{ \infty \} \subset X^{\infty}$ and an isomorphism between $X$ and the open complement of $\{ \infty \}$.*
Moreover: $${\mathcal{O}}(X^{\infty}) \simeq \{(U,p) \in {\mathcal{O}}(X) \times \Omega | p \Rightarrow \omega(U) \}$$
And the two projections from ${\mathcal{O}}(X^{\infty})$ to ${\mathcal{O}}(X)$ and $\Omega$ are the frame homomorphisms corresponding to the injections of $X$ and $\{\infty\}$ into $X^{\infty}$.
The proof will be completed in \[OnepointCptproof\]. One can also note that this is a special case of the Artin gluing[^3] of a closed point to $X$. This will be especially apparent in \[defofXinfty\] and in \[OnepointCptproof\].
For the rest of this section, we fix a locally compact regular locale $X$.
{#section-1}
**Proposition :** [*The function $\omega : {\mathcal{O}}(X) \rightarrow \Omega = {\mathcal{O}}(\{ \infty \})$ is cartesian, i.e. order preserving and satisfies $\omega(X)=\top$ and $\omega(a \cap b) = \omega(a) \wedge \omega(b)$.*]{}
**Proof :**
[If $A \leqslant B$ and $\omega(A)$, then $\omega(B)$ also holds with the same $W$. One has $\omega(X)$ with $W=\emptyset$. As $\omega$ is order preserving one has $\omega(A \cap B) \leqslant \omega(A) \wedge \omega(B)$. For the converse inequality, if one has $\omega(A)$ and $\omega(B)$ then there are $W$ and $W'$ such that $W,W' \ll X$ and $W \cup A = X$, $W' \cup B =X$. Taking $W''=W \cup W'$ one has $W'' \ll X$ and $W'' \cup (A \cap B) = X$ which proves $\omega(A \cap B)$. ]{}
{#defofXinfty}
**Corollary :**
*There is a locale $X^{\infty}$ such that*
$${\mathcal{O}}(X^{\infty}) = \{(U,p) \in {\mathcal{O}}(X) \times \Omega | p \Rightarrow \omega(U) \}$$
as in theorem \[OnepointCptmain\]. Moreover the two projections from ${\mathcal{O}}(X^{\infty})$ to ${\mathcal{O}}(X)$ and $\Omega = {\mathcal{O}}(\{\infty \})$ corresponds to an open inclusion of $X$ into $X^{\infty}$ and the complementary closed inclusion.
**Proof :**
[From the fact that $\omega$ is cartesian one deduces[^4] that $\{(U,p) \in {\mathcal{O}}(X) \times \Omega | p \Rightarrow \omega(U) \}$ is stable under arbitrary joins and finite meets in ${\mathcal{O}}(X)\times \Omega$, hence it is a frame and the two projections are frame homomorphisms. Consider the element $X_0 :=(X,\bot) \in {\mathcal{O}}(X^{\infty})$; then the elements of ${\mathcal{O}}(X^{\infty})$ smaller than $X_0$ are the $(U,\bot)$ for $U \in {\mathcal{O}}(X)$, hence the open sublocale $X_0$ is isomorphic to $X$. Conversely, the elements of ${\mathcal{O}}(X^{\infty})$ bigger than $X_0$ are exactly the $(X,p)$, hence the closed complement of $X_0$ is just a point, denoted $\infty$ and corresponding to $\infty^{*}(X,p)=p$. ]{}
From now on, the open sublocale $X_0$ of $X^{\infty}$ will be identified with $X$ (and in particular denoted $X$).
{#section-2}
**Lemma :** [*$X^{\infty}$ is compact.*]{}
**Proof :**
[Let $(X_i)_{i\in I} = (U_i,p_i)_{i \in I}$ be a covering net of open sublocales of $X^{\infty}$. As supremums in ${\mathcal{O}}(X^{\infty})$ are computed componentwise, one has in particular $\bigvee p_i = \top$, i.e. there exists $i_0 \in I$ such that $p_{i_0}$ holds. As $ p_i \Rightarrow \omega(U_i)$ one also has $\omega(U_{i_0})$, i.e. there exists a $W$ such that $W \ll X$ and $W \cup U_{i_0} =X$. As the $U_i$ form a covering net of $X$ one also has an $i_1$ such that $W \leqslant U_{i_1}$ and an $i_2$ bigger than $i_1$ and $i_0$. Hence $U_{i_2} \geqslant W \cup U_{i_0}=X$ and $p_{i_2}\geqslant p_{i_0} = \top$, hence $X_{i_2} = X^{\infty}$ which concludes the proof. ]{}
{#OnepointCptRegular}
**Lemma :** [*$X^{\infty}$ is regular.*]{}
**Proof :**
Let $A=(U,p)$ be any open sublocale of $X^{\infty}$. We can first see that:
$$A = (U, \bot) \cup \bigcup_{p \atop W \ll X, U \cup W = X} (\neg W, \top)$$
The term in the union makes sense because if $W \ll X$ then there exists a $W'$ such that $W \ll W' \ll X$ hence $W \triangleleft W'$ and $\neg W \cup W' = X$ hence $\omega(\neg W)$. $A$ is bigger than this union because when $W \cup U = X$ one has $\neg W \leqslant U$ (and when $p$ holds then $\top \leqslant p$). Conversely, using the fact that unions in ${\mathcal{O}}(X^{\infty})$ are computed componentwise one easily checks that the right hand side union is indeed smaller than $(U,p) = A$.
Now, for any $V \ll U$ in $X$ one has $\omega(\neg V)= \top$ (using a $W$ such that $V \ll W \ll X$). Hence $(\neg V, \top) \in {\mathcal{O}}(X^{\infty})$ is an open such that $(V, \bot) \cap (\neg V, \top)=\emptyset$ and $(\neg V, \top) \cup (U,\bot) = (X, \top)$ hence $(V, \bot) \triangleleft (U,\bot)$ and $$(U,\bot) = \bigcup_{V \ll U} (V,\bot)$$ because $X$ is locally compact.
Moreover, if we assume $p$, then for any $W$ such that $W \ll X$ and $U \cup W = X$ one has $(\neg W,\top) \triangleleft (U, \top)=(U,p)=A$ as attested by $(W,\bot)$. Hence, if one writes:
$$A = \left( \bigcup_{V \ll U} (V,\bot) \right) \cup \left( \bigcup_{p \atop W \ll X, U \cup W = X} (\neg W, \top) \right),$$
then all the terms of the union are rather below $A$ ($\triangleleft A$) which proves that $X^{\infty}$ is regular.
$\square$
{#OnepointCptproof}
At this point, the existence part of theorem \[OnepointCptmain\] and the additional properties of $X^{\infty}$ stated in \[OnepointCptmain\] are proved, all that remains to do is to prove the uniqueness, and that is what we will do now:
**Proof :**
Let $Y$ be a compact regular locale with a (closed) point denoted $\infty$ such that $Y-\{\infty\}$ is identified with $X$. Let $i$ be the inclusion of $X$ into $Y$, open sublocales of $X$ will be identified with the corresponding open sublocales of $Y$ included in $X$. We will first show that for any $U \in {\mathcal{O}}(X)$ one has $\infty \in i_*(U)$ if and only if $\omega(U)$ (i.e. $\omega$ is the unique “gluing function" giving rise to a compact regular Artin gluing).
Indeed, assume $\omega(U)$, i.e. that there exists a $W \in {\mathcal{O}}(X)$ such that $W \ll X$ and $W \cup U =X$.
Now as $W \ll X$ in ${\mathcal{O}}(X)$ one also has $W \ll X$ in $Y$, hence (as $Y$ is regular) there exists $W' \in {\mathcal{O}}(Y)$ such that $X \cup W' = Y$ (i.e. $\infty \in W'$) and $W \cap W' = \emptyset$. In particular $i^{*}(W') \cap W= i^{*}(W') \cap i^{*}(W) = \emptyset$, hence, as $W \cup U =X$, one has $i^{*}(W') \subset U$, hence $W' \subset i_* U$, which proves that $\infty \in i_* U$.
Conversely, assume that $\infty \in i_*(U)$; in particular, $i_*(U) \cup X = Y$, hence by local compactness of $X$:
$$Y = \bigcup_{V \ll X} V \cup i_*(U)$$
as $Y$ is compact, there exists $V \ll X$ such that $V \cup i_*(U) = Y$ in particular
$$X =i^{*}(Y)=i^{*}(V \cup i_*(U)) = V \cup i^{*}i_*(U) \leqslant V \cup U$$ which proves $\omega(U)$.
The end of the proof is then a general fact about Artin gluing: consider the natural map $ p : X \coprod \{ \infty \} \rightarrow Y$. This is a surjection because $X$ is the open complement of $\{ \infty \}$, hence ${\mathcal{O}}(Y)$ can be identified with the set of open sublocales $A$ of $ X \coprod \{ \infty \} $ such that $p^{*}p_* A = A$.
An open sublocale of $X \coprod \{ \infty \}$ is exactly a pair $(U \in {\mathcal{O}}(X), m \in {\mathcal{O}}(\{ \infty \}) = \Omega)$ and from the first part of the proof one can deduce that $p^{*}p_*(U,m)= (U, m \cap \omega(U))$, which shows that ${\mathcal{O}}(Y)$ is canonically identified with ${\mathcal{O}}(X^{\infty})$ and there is a unique identification which is compatible with the inclusions of $X$ and $\{\infty \}$.
$\square$
{#section-3}
In general, a map $f$ from $X^{\infty}$ to any locale $Y$ is the same thing as a map $f_0$ from $X$ to $Y$ and a point $f(\infty) \in Y$ such that for any open sublocale $U \subset Y$ which contains $f(\infty)$ one has $\omega(f^{*}(U))$. Indeed this follows directly from the decomposition of $f^{*} : {\mathcal{O}}(Y) \rightarrow {\mathcal{O}}(X^{\infty})$ in the expression of ${\mathcal{O}}(X^{\infty})$ as a subset of ${\mathcal{O}}(X) \times \Omega$.
This allows us to define:
**Definition :** [*Let ${\mathcal{C}}_{0}(X)$ be the set of functions $f$ from $X$ to the locale ${\mathbb{C}}$ such that for any positive $\epsilon \in {\mathbb{Q}}$ one has $\omega(f^{*}(B_{\epsilon} 0))$, where $B_{\epsilon} 0$ denotes the ball of radius $\epsilon$ and center $0$, or, equivalently, the set of functions from $X^{\infty}$ to ${\mathbb{C}}$ which send $\infty$ to $0$.* ]{}
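Classically, when $X$ comes from a locally compact Hausdorff space, the condition $\omega(f^{*}(B_{\epsilon} 0))$ for all positive rational $\epsilon$ says exactly that the closed set $\{x \,|\, |f(x)|\geqslant \epsilon\}$ is compact for every such $\epsilon$, so that ${\mathcal{C}}_{0}(X)$ is the usual space of continuous functions vanishing at infinity.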
{#section-4}
**Proposition :** [*$X^{\infty}$ is completely regular if and only if $X$ is.*]{}
**Proof :**
If $X^{\infty}$ is completely regular, then any of its sublocales, in particular $X$, is completely regular.
Conversely, assume that $X$ is completely regular. Let $A=(U,p) \in {\mathcal{O}}(X^{\infty})$. Consider first a $V \ll U$; then there exists a $W$ such that $V \triangleleft_{CR} W \ll U \subset X$, and any function on $X$ which is zero outside of $W$ can be extended by $f(\infty)=0$, hence one has $(V,\bot) \triangleleft_{CR} A$.
Assume now $p$; then one also has $\omega(U)$, hence there exists a $W$ such that $W \ll X$ and $U \cup W =X$. Consider any $W'$ such that $W \ll W' \ll X$, and (as $X$ is completely regular) a function $f$ from $X$ to the locale $[0,1]$ such that $f$ is zero on $W$ and $1$ outside of $W'$. As $W' \ll X$, this function extends to a function from $X^{\infty}$ to $[0,1]$ which satisfies $f^{*}(]0,1]) \subset (U,\top)=(U,p)$ and $f(\infty)=1$. Let $F_{\infty}$ be the set of such functions; one can write that:
$$A= \left( \bigcup_{V \ll U} (V, \bot) \right) \cup \left( \bigcup_{p,\atop f \in F_{\infty}} f^{*}(]1/2,1]) \right)$$
which concludes the proof, as, assuming $p$, one has $f^{*}(]1/2,1]) \triangleleft_{CR} A$ for any $f \in F_{\infty}$.
$\square$
{#properfunctoriality}
Finally we need to understand the functoriality of the relation between $X$ and $X^{\infty}$:
**Proposition :**
*Let $X$ and $Y$ be two regular locally compact locales and $X^{\infty}$ and $Y^{\infty}$ their one point compactifications. Let $f :X \rightarrow Y$ be a map of locales. The following conditions are equivalent:*
1. For any $U \in {\mathcal{O}}(Y)$ such that $U \ll Y$ one has $f^{*}(U) \ll X$
2. There is an extension $f^{\infty}:X^{\infty} \rightarrow Y^{\infty}$ of $f$ such that $(f^{\infty})^{*}(Y)=X$.
3. $f$ is proper (see [@sketches C.3.2.5]).
Moreover, in this case the extension $f^{\infty}$ is unique, and this induces a bijection between proper maps from $X$ to $Y$ and maps $g$ from $X^{\infty}$ to $Y^{\infty}$ such that $g^{*}(Y)=X$.
**Proof :**
- We define $f^{\infty}$ by:
$$(f^{\infty})^{*}(U,p) = (f^{*}(U),p).$$
Assuming $1.$, if $\omega(U)$ holds for $U \in {\mathcal{O}}(Y)$ then there exists a $W$ such that $W \ll Y$ and $W \cup U = Y$, and hence $f^{*}(W) \ll X$ and $f^{*}(U) \cup f^{*}(W)=X$, hence $\omega(f^{*}(U))$. This proves that if $(U,p) \in {\mathcal{O}}(Y^{\infty})$, i.e. if $p \Rightarrow \omega(U)$, then one also has $p \Rightarrow \omega(f^{*}(U))$, hence $(f^{*}(U),p) \in {\mathcal{O}}(X^{\infty})$. Moreover, as intersections and unions in ${\mathcal{O}}(X^{\infty})$ are computed componentwise, $f^{\infty}$ is indeed a morphism of locales. As $X$ and $Y$ correspond to the elements $(X, \bot)$ and $(Y,\bot)$ of ${\mathcal{O}}(X^{\infty})$ and ${\mathcal{O}}(Y^{\infty})$, one also has $(f^{\infty})^{*}(Y)=X$.
- In the situation of $2.$, the map $f$ from $X$ to $Y$ is a pullback of the map $f^{\infty}$ along the open inclusion of $Y$ into $Y^{\infty}$. But any map between two compact regular locales is proper (see [@sketches C.3.2.10 (i) and (ii)]) and a pullback of a proper map is again a proper map (see [@sketches C.3.2.6]).
- If $f$ is proper then $f_{*}$ commutes with directed joins. So assume that $U \ll Y$; if $X$ is covered by a directed net $V_i$ then $$Y \leqslant f_* \left( \bigcup_i V_i \right) = \bigcup_i f_*(V_i)$$
hence there exists a $j$ such that $U \leqslant f_*(V_j)$, and hence $f^{*}(U) \leqslant V_j$, which proves that $f^{*}(U) \ll X$ and concludes the proof of the equivalence.
The uniqueness of the extension is immediate because $f^{\infty}$ is determined both on $X$ and on its closed complement, and hence this indeed induces a bijection as stated in the proposition.
$\square$
{#partialfunctoriality}
One can be slightly more general:
**Definition :** [*Let $X$ and $Y$ be two locally compact regular locales. A partial proper map from $X$ to $Y$ is the data of an open sublocale ${\text{Dom}}(f) \subset X$ and a proper map (denoted $f$) from ${\text{Dom}}(f)$ to $Y$.*]{}
Partial proper maps can be composed (by restricting the domain of definition as much as necessary), and as a pullback of a proper map is proper, the composite of two partial proper maps is again a partial proper map; hence one has a category of partial proper maps.
**Proposition :** [*The category of pointed compact (completely) regular locales is equivalent to the category of locally compact (completely) regular locales and partial proper maps between them.*]{}
**Proof :**
The functors are the same as those of proposition \[properfunctoriality\], they just apply to a larger category: to a map $f:X^{\infty} \rightarrow Y^{\infty}$ of pointed compact regular locales one associates the partial map $f':X \rightarrow Y$ whose domain is $f^{*}(Y)$, and $f'$ is proper because it is the pullback of $f$ along the inclusion of $Y$ into $Y^{\infty}$ (see the proof of $2. \Rightarrow 3.$ in \[properfunctoriality\]). Conversely, if $f$ is a partial proper map from $X$ to $Y$ with domain $U={\text{Dom}}(f)$, then it extends into a map from $U^{\infty}$ to $Y^{\infty}$ by \[properfunctoriality\]; composing it with the map $r_U$ of the next lemma yields the desired map from $X^{\infty}$ to $Y^{\infty}$, and these two constructions are clearly inverses of each other.
$\square$
{#mapru}
**Lemma :** [*Let $X$ be a locally compact regular locale, and $U \subset X$ an open sublocale of $X$; then there exists a (unique) map $r_U : X^{\infty} \rightarrow U^{\infty}$ such that $r_U$ is the identity on $U$ and $(r_U)^{*}(U)=U$.*]{}
**Proof :**
$r_U$ is defined on $U \subset X^{\infty}$ and on its closed complement (as the constant map equal to $\infty$). Hence one has a map $f_U$ from $U \coprod U^{c}$ to $U^{\infty}$. It is a general fact that the canonical map $U \coprod U^{c} \rightarrow X^{\infty}$ is a surjection of locales (hence corresponds to an injection of frames). So all we have to do to prove that $f_U$ factors into a map $r_U$ on $X^{\infty}$ is to check that for any open sublocale $(V,p) \in {\mathcal{O}}(U^{\infty})$, the open sublocale $(f_U)^{*}(V,p)$ of $U \coprod U^{c}$ comes from an open sublocale of $X^{\infty}$.
By definition, $(f_U)^{*}(V,p)$ is $V$ on the $U$ part and $s^{*}(p)$ on the $U^{c}$ part (where $s$ is the canonical map $U^{c} \rightarrow {*}$). If we assume $p$ then there exists a $W \ll U$ such that $W \cup V = U$, and as $W \ll U$, there exists a $D \subset X^{\infty}$ such that $W \cap D = \emptyset$ and $U \cup D = X^{\infty}$. We define:
$$V' = V \cup \coprod_{p \atop D} D$$
where the coproduct is over the set of $D$ such that $p$ holds and $D$ satisfies the properties just described. If $g$ denotes the canonical map $U \coprod U^{c} \rightarrow X^{\infty}$ then $g^{*}(V') = (V' \cap U, V' \cap U^{c})$. On one hand $V' \cap U = V$ because, as $D \cap W = \emptyset$ and $W \cup V =U$, one has $D \cap U \subset V$; on the other hand, $U^{c} \cap V' = U^{c} \cap \coprod_{p,D} D $, and as $D \cup U = X^{\infty}$, one has $U^{c} \subset D $, hence $U^{c} \cap V' = s^{*}(p)$, and this concludes the proof.
$\square$
Unitarization of $C^{*}$-algebras {#secUnitarization}
=================================
{#section-5}
We follow the same definition of $C^{*}$-algebras as, for example, in [@banaschewski2000spectral]. In particular the norm of an element is only assumed to be an upper semi-continuous real number (the definition using rational balls of [@banaschewski2000spectral] is equivalent to a norm function with values in the non-negative upper semi-continuous real numbers). Of course, contrary to [@banaschewski2000spectral], we do not assume the algebras to be unital.
{#section-6}
If $C$ is a $C^{*}$-algebra we define $C^{+}$ as the set of pairs $(c,z)$ with $c \in C$ and $z \in {\mathbb{C}}$. We endow $C^{+}$ with the componentwise addition and the multiplication $(c,z)(c',z')=(cc'+cz'+zc',zz')$. It is a unital algebra, with unit $(0,1)$. One also endows $C^{+}$ with the anti-linear involution $(c,z)^{*}=(c^{*},\overline{z})$.
If $(c,z) \in C^{+}$ we define:
$$\Vert (c ,z) \Vert = \max( |z|, \sup_{c' \in C_{\leqslant 1} } \Vert c'c+c'z \Vert )$$
where $C_{\leqslant 1} $ denotes the set of elements of $C$ of norm $\leqslant 1$, and the supremum is to be taken in the set of upper semicontinuous real numbers.
Remark: Classically, it is usual to define the norm on $C^{+}$ to be simply $\sup_{c'} \Vert c'c+c'z \Vert$. This works perfectly when $C$ is indeed non-unital, but when $C$ is unital this gives norm $0$ to the element $(1,-1)$, and hence (after taking the quotient by the ideal of norm zero elements) with this definition $C^{+}$ would be isomorphic to $C$ when $C$ is unital. This is not what we want, because if $X$ is a compact locale then its one point compactification $X^{\infty}$ is not $X$ itself but $X \coprod \{ \infty \}$. Classically this difference is harmless, but in intuitionist logic the question of being compact/unital might not be decidable, and hence it is important to have a uniform treatment on both sides.
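As a purely classical sanity check of this definition (so only an illustration, assuming $X$ is a locally compact Hausdorff space and $C={\mathcal{C}}_0(X)$; it is not part of the constructive development), the supremum term computes the sup norm of the multiplier $c+z$ acting on ${\mathcal{C}}_0(X)$, so that:
$$\Vert (c ,z) \Vert = \max\left( |z|,\ \sup_{x \in X} |c(x)+z| \right) = \sup_{x \in X^{\infty}} |\tilde{c}(x)|, \qquad \tilde{c}|_{X} = c+z, \quad \tilde{c}(\infty)=z,$$
i.e. exactly the sup norm of the corresponding function on $X^{\infty}$; the extra term $|z|$ only matters when $X$ is compact (for example for the element $(-z \cdot 1, z)$).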
{#unitarizationLemma1}
Before proving that $C^{+}$ is indeed a $C^{*}$-algebra we need a few lemmas which are immediate in classical mathematics but require slightly more care in intuitionist mathematics.
**Lemma :** [*Let $x$ be a nonnegative upper semicontinuous real number and $q$ a nonnegative rational number; if $ x^{2} \leqslant q x $ then $x \leqslant q $.*]{}
**Proof :**
Let $e$ be a rational number such that $x \leqslant q+e$.
One has:
$$x^{2} \leqslant q x \leqslant q(q+e) = q^{2} +q e \leqslant \left(q+\frac{e}{2}\right)^{2}$$
hence $ x \leqslant q+\frac{e}{2}$.
by induction one obtains that for all $k \geqslant 0$ $$x \leqslant q+\frac{e}{2^{k}}$$
and hence that $x \leqslant q$ which concludes the proof.
$\square$
{#unitarizationLemma2}
**Lemma :**
*Let $C$ be a $C^{*}$-algebra and $c\in C$ then:*
$$\Vert c \Vert = \sup_{b \in C_{\leqslant 1}} \Vert bc \Vert = \sup_{b \in C_{\leqslant 1}} \Vert cb \Vert$$
**Proof :**
We start with the first equality. It is immediate that $\Vert bc \Vert \leqslant \Vert c \Vert$ for any $b$ of norm $\leqslant 1$, hence
$$\sup_{b \in C_{\leqslant 1}} \Vert bc \Vert \leqslant \Vert c \Vert$$
We only need to prove the reverse inequality. Let $q$ be a rational number such that $ \sup_{b \in C_{\leqslant 1}} \Vert bc \Vert < q$, i.e. such that there exists a $q' <q$ with $\Vert bc \Vert <q'$ for all $b$.
Let $\alpha$ be a rational number such that $\Vert c \Vert < \alpha$. One has $\Vert c^{*}/ \alpha \Vert < 1$ hence:
$$\frac{1}{\alpha} \Vert c^{*}c \Vert <q'$$
$$\frac{1}{q'} \Vert c^{*}c \Vert <\alpha,$$
and as this holds for any $\alpha$ such that $\Vert c \Vert < \alpha$ this proves that:
$$\frac{1}{q'} \Vert c^{*}c \Vert \leqslant \Vert c \Vert$$
Using $\Vert c^{*}c \Vert = \Vert c \Vert^{2}$ one obtains $\Vert c \Vert ^{2} \leqslant q' \Vert c \Vert$ and hence by lemma \[unitarizationLemma1\] this proves that $\Vert c \Vert \leqslant q' <q$ which concludes the proof of the first equality.
The second equality follows either by exactly the same proof, or by applying the first equality using $\Vert c ^{*} \Vert = \Vert c \Vert$ and $(cb)^{*}=b^{*} c^{*}$.
$\square$
{#unitarizationLemma3}
**Lemma :**
*For any $x = (c,z) \in C^{+}$ one has:*
$$\sup_{b \in C_{\leqslant 1}} \Vert cb+bz \Vert = \sup_{b \in C_{\leqslant 1}} \Vert bc+bz \Vert,$$
and, $\Vert x \Vert = \Vert x^{*} \Vert$.
**Proof :**
By lemma \[unitarizationLemma2\] one has:
$$\sup_{b \in C_{\leqslant 1}} \Vert cb+bz \Vert = \sup_{b \in C_{\leqslant 1}} \sup_{b' \in C_{\leqslant 1}} \Vert b'cb+b'bz \Vert$$
But the two suprema can be exchanged, and as:
$$\sup_{b \in C_{\leqslant 1}} \Vert bc+bz \Vert = \sup_{b \in C_{\leqslant 1}} \sup_{b' \in C_{\leqslant 1}} \Vert bcb'+zbb' \Vert$$
this proves the first equality (after exchanging the roles of $b$ and $b'$).
The fact that $\Vert x \Vert = \Vert x^{*} \Vert$ follows immediately:
$$\Vert x^{*} \Vert = \max ( |z|, \sup_{b \in C_{\leqslant 1}} \Vert bc^{*}+b\overline{z} \Vert )$$
and:
$$\sup_{b \in C_{\leqslant 1}} \Vert bc^{*}+b\overline{z} \Vert = \sup_{b \in C_{\leqslant 1}} \Vert c b^{*}+z b^{*} \Vert = \sup_{b \in C_{\leqslant 1}} \Vert c b+z b \Vert$$
which concludes the proof.
$\square$
{#unitarizationIsaCstarAlg}
**Proposition :** [*$C^{+}$ is a $C^{*}$-algebra.*]{}
**Proof :**
The fact that $\Vert . \Vert$ is an algebra norm (i.e. such that $\Vert x y \Vert \leqslant \Vert x \Vert \Vert y \Vert $) is easy and proved exactly as in the classical case. Thanks to the term $ |z|$ in the definition it is an actual norm and not just a semi-norm.
$C^{+}$ is complete for this norm because it is complete for the norm $|z|+\Vert c \Vert$ as a product of two Banach spaces, and these two norms are equivalent. Indeed, in one direction one has: $$\Vert (c,z) \Vert \leqslant | z| + \Vert c \Vert$$ and in the other direction one clearly has: $$|z | \leqslant \Vert (c,z) \Vert$$ while, by lemma \[unitarizationLemma2\], $\Vert c \Vert = \sup_{b \in C_{\leqslant 1}} \Vert bc \Vert \leqslant \sup_{b \in C_{\leqslant 1}} \Vert bc+bz \Vert + |z|$, so that: $$\Vert c \Vert -|z| \leqslant \Vert (c,z) \Vert$$ hence: $$|z | + \Vert c \Vert \leqslant 3 \Vert (c,z)\Vert$$
All we have to do to conclude is to prove the $C^{*}$-equality $\Vert x^{*} x \Vert = \Vert x \Vert^{2}$. Let $x = (c,z)$ be an element of $C^{+}$; then:
$$x^{*} x = (c^{*}c+zc^{*}+\overline{z}c,|z|^{2})$$
Hence, by lemma \[unitarizationLemma3\], $$\Vert x^{*} x \Vert =\max \left(|z|^{2}, \sup_{b \in C_{\leqslant 1}} \Vert c^{*}cb+zc^{*}b+\overline{z}cb+|z|^{2}b \Vert \right)$$
and as, $$\begin{gathered}
\Vert c^{*}cb+zc^{*}b+\overline{z}cb+|z|^{2}b \Vert \geqslant \Vert b^{*}c^{*}cb+zb^{*}c^{*}b + \overline{z}b^{*}cb + |z|^{2}b^{*}b \Vert \\ = \Vert (cb+z b)^{*} (cb+zb) \Vert =\Vert cb+zb \Vert ^{2} \end{gathered}$$
one obtains that $\Vert x^{*} x \Vert \geqslant \Vert x \Vert^{2}$. The other inequality follows from lemma \[unitarizationLemma3\] together with the fact that $\Vert x^{*} x \Vert \leqslant \Vert x^{*} \Vert \Vert x \Vert$.
$\square$
{#chiinfty}
We will identify $C$ with the two-sided ideal of $C^{+}$ of elements of the form $(c,0)$. We will denote by $\chi_{\infty}$ the character of $C^{+}$ defined by $\chi_{\infty}(c,z)=z$.
{#unitarizationUniversalProp}
**Proposition :** [*Let $C$ be a (possibly non unital) $C^{*}$-algebra and $B$ be a unital $C^{*}$-algebra. Any morphism from $C$ to $B$ extends uniquely into a unital morphism of $C^{*}$-algebras from $C^{+}$ to $B$.* ]{}
**Proof :**
The extension is necessarily defined by $f(c,z)= f(c)+z$, and it is clearly a morphism of $*$-algebras. It is continuous because:
$$\Vert f(c,z) \Vert \leqslant \Vert f(c) \Vert +|z| \leqslant \Vert f \Vert \Vert c \Vert + |z|$$ and, as we observed in the proof of proposition \[unitarizationIsaCstarAlg\], the norm $\Vert (c,z) \Vert$ is equivalent to the norm $\Vert c \Vert+ |z|$; this proves that the extension is continuous.
$\square$
{#section-7}
**Proposition :** [*Let $X$ be a locally compact completely regular locale, then ${\mathcal{C}}(X^{\infty}) \simeq ({\mathcal{C}}_{0}(X))^{+}$* ]{}
**Proof :**
[${\mathcal{C}}_{0}(X)$ is identified with the set of functions on $X^{\infty}$ which send $\infty \in X^{\infty}$ to $0$, hence this induces a morphism from $({\mathcal{C}}_{0}(X))^{+}$ to ${\mathcal{C}}(X^{\infty})$ by proposition \[unitarizationUniversalProp\]. A function $f \in {\mathcal{C}}(X^{\infty})$ can be written in a unique way as $h+c$ with $h \in {\mathcal{C}}_{0}(X)$ and $c$ a constant: $c$ has to be $f(\infty)$ and $h = f - f(\infty)$, hence the map from $({\mathcal{C}}_{0}(X))^{+}$ to ${\mathcal{C}}(X^{\infty})$ is a bijection. One easily checks that it is isometric, either by general theorems on $C^{*}$-algebras (which should of course be proved constructively first) or directly: if $f$ is a function on $X^{\infty}$ then $\Vert f \Vert <q$ if and only if both $|f(\infty)|<q$ (which implies that $|f| < q$ on some neighbourhood of $\infty$) and, for each $U \triangleleft_{CR} X$, the function $|f|$ is strictly smaller than some $q'<q$ on $U$, which is equivalent to the fact that $\Vert fh \Vert <q'<q$ for every $h \in {\mathcal{C}}_0(X)$ of norm $\leqslant 1$. And these two conditions are equivalent to the fact that $\Vert (f-f(\infty),f(\infty) ) \Vert <q$. ]{} $\square$
{#section-8}
We conclude this section by discussing the compatibility of unitarization to pullback along geometric morphisms.
If ${\mathcal{C}}$ is a $C^{*}$-algebra in a topos ${\mathcal{E}}$ and $f:{\mathcal{T}}\rightarrow {\mathcal{E}}$ is a geometric morphism, then $f^{*}({\mathcal{C}})$ is in general not a $C^{*}$-algebra: it still satisfies all the “algebraic" axioms, but it might not be complete and separated (in the sense that $\Vert x \Vert =0 \Rightarrow x=0$). Fortunately, the separated completion of $f^{*}{\mathcal{C}}$ is complete and separated, and hence is a $C^{*}$-algebra, which we denote[^5] by $f^{\sharp}({\mathcal{C}})$.
**Proposition :**
*For any $C^{*}$-algebra ${\mathcal{C}}$ one has a natural isomorphism:*
$$f^{\sharp}( {\mathcal{C}}^{+}) \simeq f^{\sharp}({\mathcal{C}})^{+}$$
**Proof :**
[In ${\mathcal{T}}$, the canonical morphisms of pre-$C^{*}$-algebras $f^{*} {\mathcal{C}}\rightarrow f^{\sharp}({\mathcal{C}})$ and $f^{*}{\mathbb{C}}\rightarrow {\mathbb{C}}$ extend into a map $f^{*}{\mathbb{C}}\times f^{*}{\mathcal{C}}\rightarrow f^{\sharp}({\mathcal{C}})^{+}$. One can check that the semi-norm and the pre-$C^{*}$-algebra structure induced on $f^{*}{\mathbb{C}}\times f^{*} {\mathcal{C}}$ by this map are exactly those of $f^{*}({\mathcal{C}}^{+})$, hence $f^{\sharp}({\mathcal{C}}^{+})$ is exactly the closure of $f^{*}({\mathcal{C}}^{+})$ in $f^{\sharp}({\mathcal{C}})^{+}$. But $f^{*}({\mathcal{C}}^{+})$ is clearly dense (because each component is dense), hence this concludes the proof. ]{} $\square$
The non-unital Gelfand duality
==============================
{#section-9}
**Definition :** [*If ${\mathcal{C}}$ is a $C^{*}$-algebra we denote by ${\text{Spec}^{\infty}}{\mathcal{C}}$ the spectrum of the unital $C^{*}$-algebra ${\mathcal{C}}^{+}$ and by ${\text{Spec }}{\mathcal{C}}$ the locally compact completely regular locale obtained by removing the point $\infty$ of ${\text{Spec}^{\infty}}{\mathcal{C}}$.*]{}
Of course by the uniqueness property in theorem \[OnepointCptmain\], ${\text{Spec}^{\infty}}{\mathcal{C}}$ is the one point compactification of ${\text{Spec }}{\mathcal{C}}$. Also, if ${\mathcal{C}}$ is unital then ${\mathcal{C}}^{+}$ is isomorphic to ${\mathcal{C}}\times {\mathbb{C}}$ hence ${\text{Spec}^{\infty}}{\mathcal{C}}$ is isomorphic to ${\text{Spec }}{\mathcal{C}}\coprod \{ \infty \}$ and the two definitions of ${\text{Spec }}{\mathcal{C}}$ (by considering ${\mathcal{C}}$ as a unital or general $C^{*}$-algebra) agree and there is no possible confusion.
At this point, the following theorem is immediate:
{#mainResult}
**Theorem :** [*The category of commutative $C^{*}$-algebras and arbitrary morphisms between them is anti-equivalent to the category of locally compact completely regular locales and partial proper maps between them. The equivalence is given on objects by the constructions ${\mathcal{C}}_0$ and ${\text{Spec }}$.*]{}
**Proof :**
[The process of unitarization produces an equivalence between the category of commutative $C^{*}$-algebras and arbitrary morphisms, and the category of unital $C^{*}$-algebras endowed with a character $\chi_{\infty}$ and unital morphisms compatible with the character. Applying the Gelfand duality for unital $C^{*}$-algebras, this category is in turn anti-equivalent to the category of pointed compact completely regular locales, which by proposition \[partialfunctoriality\] is equivalent to the category of locally compact completely regular locales and partial proper maps between them. Under these composed equivalences, a commutative $C^{*}$-algebra ${\mathcal{C}}$ is associated to the spectrum of ${\mathcal{C}}^{+}$ minus the point at infinity, i.e. exactly ${\text{Spec }}{\mathcal{C}}$, and a locally compact completely regular locale $X$ is associated to the algebra of functions on $X^{\infty}$ which vanish at $\infty$, which is ${\mathcal{C}}_0(X)$. ]{} $\square$
In the rest of this section, we will give an interpretation of ${\text{Spec }}$ and ${\text{Spec}^{\infty}}$ in terms of classifying spaces of characters (proposition \[spec\_class\_char\]), we will show that the open sublocales of ${\text{Spec }}{\mathcal{C}}$ correspond to the closed ideals of ${\mathcal{C}}$ (theorem \[openeqideal\]), and that total proper maps of locales correspond to non-degenerate morphisms of $C^{*}$-algebras (theorem \[non-degen=propermap\]).
{#section-10}
We recall that when ${\mathcal{C}}$ is a unital commutative $C^{*}$-algebra, ${\text{Spec }}{\mathcal{C}}$ denotes the classifying space of the theory of characters of ${\mathcal{C}}$. A precise geometric formulation of this theory can be found in [@banaschewski2000spectral] or in [@coquand2009constructive], but this can also be interpreted as the fact that, for any locale $Y$ with $p:Y \rightarrow \{*\}$ the canonical map, functions from $Y$ to ${\text{Spec }}{\mathcal{C}}$ correspond to morphisms from $p^{\sharp}({\mathcal{C}})$ (or equivalently from $p^{*}({\mathcal{C}})$) to ${\mathbb{C}}$ internally in ${\textsf{Sh}}(Y)$.
{#spec_class_char}
**Proposition :**
*The locale ${\text{Spec}^{\infty}}{\mathcal{C}}$ classifies “nonunital characters" of ${\mathcal{C}}$, i.e. possibly nonunital morphisms of $C^{*}$-algebras from ${\mathcal{C}}$ to ${\mathbb{C}}$.*
The locale ${\text{Spec }}{\mathcal{C}}$ classifies “nonzero characters" of ${\mathcal{C}}$, i.e. characters which satisfy the additional axiom $\exists c \in {\mathcal{C}}, |\chi(c)|>0$
**Proof :**
The important observation is that the process of unitarization of $C^{*}$-algebras commutes with pullback along geometric morphisms. Hence points of ${\text{Spec}^{\infty}}{\mathcal{C}}$ over any locale ${\mathcal{L}}$ (with $p$ its canonical morphism to the point) are the characters of $p^{\sharp}({\mathcal{C}})^{+}$, which are in bijection with the (possibly nonunital) morphisms from $p^{\sharp}({\mathcal{C}})$ to ${\mathbb{C}}$, which proves the first part of the result.
For the second part, let us denote by $D(f)$ the (biggest) open sublocale of ${\text{Spec}^{\infty}}{\mathcal{C}}$ on which $|f|>0$, where $f$ is an element of ${\mathcal{C}}$. Using the complete regularity of ${\text{Spec}^{\infty}}{\mathcal{C}}$ one sees that ${\text{Spec }}{\mathcal{C}}$ is the union of the $D(f)$ for $f \in {\mathcal{C}}$. For each $f \in {\mathcal{C}}$, the open sublocale $D(f)$ classifies characters of ${\mathcal{C}}$ such that $|\chi(f)|>0$. Hence points of ${\text{Spec }}{\mathcal{C}}$ are the characters such that $\exists c \in p^{*}({\mathcal{C}}) $ with $|\chi(c)|>0$.
The formulation “$\exists c \in {\mathcal{C}}$" in the statement of the proposition is unambiguous because, as $p^{*}({\mathcal{C}})$ is dense in $p^{\sharp}({\mathcal{C}})$, the conditions $\exists c \in p^{*}({\mathcal{C}}), |\chi(c)|>0$ and $\exists c \in p^{\sharp}({\mathcal{C}}), |\chi(c)|>0$ are equivalent; this concludes the proof.
$\square$
{#lemmaC0U}
**Lemma :** [*Let $X$ be a locally compact completely regular locale and $U \subset X$ an open sublocale; then the restrictions to $U$ of the functions in ${\mathcal{C}}_0(X)$ which vanish outside of $U$ are exactly the functions in ${\mathcal{C}}_0(U)$.*]{}
**Proof :**
Let $f$ be a function in ${\mathcal{C}}_0(X)$ which vanishes outside of $U$; we will show that the restriction of $f$ to $U$ is in ${\mathcal{C}}_0(U)$. Let $\epsilon$ be any positive rational number and let $V$ be the open sublocale of $X$ on which $|f|>\epsilon$. As $f \in {\mathcal{C}}_0(X)$ one has $V\ll X$, and as $f$ vanishes outside of $U$ one has $V \triangleleft_{CR} U$, and in particular $V \triangleleft U$. As mentioned in the preliminaries, these two properties together imply $V \ll U$. This being true for any $\epsilon$, it proves that $f \in {\mathcal{C}}_0(U)$.
Conversely, assume that $f\in {\mathcal{C}}_0(U)$; then, using the map $r_U: X^{\infty} \rightarrow U^{\infty}$ constructed in \[mapru\], one can define a map $f\circ r_U$ on $X ^{\infty}$ which vanishes at infinity and outside of $U$ and which coincides with $f$ on $U$.
$\square$
{#lemmaextcharac}
**Lemma :** [*Let $I \subset {\mathcal{C}}$ be an ideal of a commutative $C^{*}$-algebra, and let $\chi$ be a non-zero character of $I$ (in the sense that $\exists i \in I, |\chi(i)|>0 $). Then $\chi$ admits a unique extension as a character of ${\mathcal{C}}$.*]{}
**Proof :**
The proof is exactly as in the classical case[^6]: any extension of $\chi$ to ${\mathcal{C}}$ has to satisfy $\chi(c)=\chi(c i)/\chi(i)$ for any $i \in I$ such that $|\chi(i)|>0$, hence the extension is unique. Conversely, if $f,g$ are two elements of $I$ such that $|\chi(f)|>0$ and $|\chi(g)|>0$, and $c$ is any element of ${\mathcal{C}}$, then, as:
$$\chi(gfc) = \chi(g) \chi(fc) = \chi(f) \chi(gc)$$ one has: $$\frac{\chi(fc)}{\chi(f)} = \frac{\chi(gc)}{\chi(g)}.$$
This proves that we can define $\chi(c)=\chi(fc)/\chi(f)$ for any $f \in I$ such that $\chi(f)$ is invertible and as $\chi(c)\chi(c')=\chi(fc)\chi(fc')/\chi(f)^{2}= \chi(f^{2} cc')/\chi(f^{2}) = \chi(cc')$, the extension of $\chi$ is a character of ${\mathcal{C}}$.
$\square$
{#openeqideal}
**Theorem :**
*The constructions ${\mathcal{C}}_0$ and ${\text{Spec }}$ induce, for any commutative $C^{*}$-algebra ${\mathcal{C}}$, an order preserving bijection between the open sublocales of ${\text{Spec }}{\mathcal{C}}$ and the closed ideals of ${\mathcal{C}}$.*
Moreover, if $f:{\mathcal{C}}\rightarrow {\mathcal{C}}'$ is a morphism between commutative $C^{*}$-algebras, the pull-back of open sublocales along the corresponding (partial) continuous map corresponds under this bijection to the map which sends an ideal $I$ of ${\mathcal{C}}$ to the closure of the ideal spanned by $f(I)$.
**Proof :**
Because ${\mathcal{C}}$ is an ideal of ${\mathcal{C}}^{+}$ it suffices to prove these results for unital algebras and a unital morphism.
If $U \subset {\text{Spec }}{\mathcal{C}}$ is an open sublocale, then by lemma \[lemmaC0U\] one can identify ${\mathcal{C}}_0(U)$ with an ideal of ${\mathcal{C}}$ whose spectrum is $U$. Conversely, if $I \subset {\mathcal{C}}$ is an ideal, then an application of lemma \[lemmaextcharac\] internally in ${\text{Spec }}I$ gives rise to a map from ${\text{Spec }}I$ to ${\text{Spec }}{\mathcal{C}}$, and a map from an arbitrary locale ${\mathcal{L}}$ to ${\text{Spec }}{\mathcal{C}}$ factors through ${\text{Spec }}I$ if and only if the corresponding character of ${\mathcal{C}}$ in the logic of ${\mathcal{L}}$ satisfies $\exists i\in I, |\chi(i)|>0$ (this is again an application of \[lemmaextcharac\] internally to ${\mathcal{L}}$). Hence ${\text{Spec }}I$ is identified precisely with the open sublocale of ${\text{Spec }}{\mathcal{C}}$ defined by $\bigcup_{i \in I} D(i)$. And as ${\mathcal{C}}_0({\text{Spec }}I) = I$, this proves that these two constructions are inverses of each other, and they clearly preserve the order.
For the second part, if $ f:{\mathcal{C}}\rightarrow {\mathcal{C}}'$ is a unital morphism of $C^{*}$-algebras, $g$ the corresponding continuous map ${\text{Spec }}{\mathcal{C}}' \rightarrow {\text{Spec }}{\mathcal{C}}$, and if $I \subset {\mathcal{C}}$ and $I' \subset {\mathcal{C}}'$ are two closed ideals, then $f(I) \subset I'$ if and only if for any function $h$ on ${\text{Spec }}{\mathcal{C}}$ which vanishes outside of ${\text{Spec }}I$ its composite with $g$ vanishes outside of ${\text{Spec }}I'$. This will be the case if and only if $g^{*}({\text{Spec }}I) \subset {\text{Spec }}I'$. Hence the ideal corresponding to $g^{*}({\text{Spec }}I)$ is indeed the smallest closed ideal containing $f(I)$.
$\square$
{#non-degen=propermap}
The following theorem is in fact just a corollary of theorem \[openeqideal\]. We recall that a morphism of $C^{*}$-algebras is said to be non-degenerate if its image spans a dense ideal. A morphism between unital $C^{*}$-algebras is non-degenerate if and only if it is unital.
**Theorem :** [*The equivalence of categories of theorem \[mainResult\] restricts to a (contravariant) equivalence between the category of commutative $C^{*}$-algebras and non-degenerate morphisms and the category of locally compact completely regular locales and proper maps between them.*]{}
**Proof :**
[A morphism $f:{\mathcal{C}}\rightarrow {\mathcal{C}}'$ corresponds to a total map on the spectrum if and only if the corresponding partial proper map $g$ satisfies $g^{*}({\text{Spec }}{\mathcal{C}})= {\text{Spec }}{\mathcal{C}}'$, i.e., applying the previous theorem, if and only if $f$ is non-degenerate.]{} $\square$
Local positivity and continuity of the norm
===========================================
{#section-11}
We recall that a locale $X$ is said to be positive if whenever $X = \bigcup_{i \in I} U_i$ the index set $I$ is inhabited. This is a “positive" way of saying that $X$ is non-zero. We will say that a locale $X$ is locally positive if every open sublocale of $X$ can be written as a union of positive open sublocales. Assuming the law of excluded middle, a locale is positive if and only if it is non-zero and any locale is locally positive, but in an intuitionist framework local positivity is an extremely important property: a locale $X$ is locally positive if and only if the map from $X$ to the terminal locale is an open map (see [@sketches C3.1.17]). For this reason locally positive locales are also called open locales (although this may cause confusion with open sublocales) or sometimes overt locales.
{#continuity=openess}
**Theorem :**
*Let ${\mathcal{C}}$ be a commutative $C^{*}$-algebra, then the following conditions are equivalent:*
- For any $c \in {\mathcal{C}}$, the norm of $c$ is a continuous real number.
- There is a dense family of elements of ${\mathcal{C}}$ whose norms are continuous real numbers.
- ${\text{Spec }}{\mathcal{C}}$ is locally positive. (i.e. is open or overt).
It appears that this result was already known for unital algebras and is due to T. Coquand in [@coquand2005stone section 5]. We did need the result for non-unital algebras in [@henry2015toward], but one could also deduce the non-unital case from the unital one using the unitarization process developed in section \[secUnitarization\]. This being said, we were not aware of Coquand’s paper at the time the first version of this paper was written, and as the following proof is more complete than the original one we decided to keep it here.
**Proof :**
The first two conditions are clearly equivalent because a semi-continuous real number which can be approximated arbitrarily closely by continuous real numbers is itself continuous.
Assume first that the first two conditions hold.
We recall that if $f \in {\mathcal{C}}$ then $D(f)$ denotes the largest open sublocale of ${\text{Spec }}{\mathcal{C}}$ on which $|f|>0$. Let also $p$ denote the canonical map from ${\text{Spec }}{\mathcal{C}}$ to the terminal locale.
We will first show that:
$$D(f) \subset p^{*}(``\Vert f \Vert>0").$$ Indeed, in the logic of ${\text{Spec }}{\mathcal{C}}$, $D(f)$ is the proposition $\chi(|f|)>0$, which implies that $\exists \epsilon, \chi(|f|)>\epsilon >0$. But, as $\Vert f \Vert $ is continuous, one has (still internally in ${\text{Spec }}{\mathcal{C}}$) $\Vert f \Vert <\epsilon$ or $\Vert f \Vert >0$. $\Vert f \Vert <\epsilon$ is in contradiction with $\chi(|f|)>\epsilon$, hence $\Vert f \Vert >0$.
The $(D(f))_{f \in {\mathcal{C}}}$ form a basis of the topology of ${\text{Spec }}{\mathcal{C}}$, hence as: $$D(f) = D(f) \cap p^{*}(``\Vert f \Vert>0") = \bigcup_{\Vert f \Vert >0} D(f)$$ the $D(f)$ for $\Vert f \Vert >0$ also form a basis of the topology of ${\text{Spec }}{\mathcal{C}}$. We will now prove that the $D(f)$ for $\Vert f \Vert >0$ are positive and this will conclude the proof of this implication.
If $\Vert f \Vert >0$ then there exists a rational $\epsilon>0$ such that $\Vert f \Vert >\epsilon$. Let $(U_i)_{i \in I}$ be a family of open sublocales of $D(f)$ such that:
$$D(f) = \bigcup_{i \in I} U_i$$
Let $W$ be the open sublocale on which $|f|$ is greater than $\epsilon/2$. One has $W \triangleleft_{CR} D(f)$ by definition and $W \ll {\text{Spec }}{\mathcal{C}}$ because $f \in {\mathcal{C}}_0({\text{Spec }}{\mathcal{C}})$; hence, as mentioned in the preliminaries, one has $W \ll D(f)$, hence there exists a finite subset $J \subset I$ such that:
$$W \subset \bigcup_{j \in J} U_j$$
As $J$ is finite, it is either empty or inhabited, but if $J$ is empty then $W$ is empty, hence $|f|$ is smaller than $\epsilon/2$ everywhere on ${\text{Spec }}{\mathcal{C}}$ and hence $\Vert f \Vert < \epsilon$, which yields a contradiction. This shows that $J$ is inhabited and hence that $I$ is inhabited, which concludes the proof of the first implication.
We now assume that ${\text{Spec }}{\mathcal{C}}$ is a locally positive locale. For any $h \in {\mathcal{C}}$, we denote by $(|h|>q)$ the biggest open sublocale of ${\text{Spec }}{\mathcal{C}}$ on which $|h|>q$ holds (where $h$ is seen as a function on ${\text{Spec }}{\mathcal{C}}$). We fix an element $h \in {\mathcal{C}}$ and we will prove that $\Vert h \Vert$ is a continuous real number. We define:
$$L = \{q \in {\mathbb{Q}}| q<0 \text{ or } (|h|>q) \text{ is positive } \}$$
one has:
- If $q\in L$ and $q' < q$ then $q' \in L$
- $L$ is inhabited (it contains all the negative rational numbers).
- If $q \in L$ then there exists $q' \in L$ such that $q<q'$. Indeed, if $q<0$ it is clear, and if $(|h|>q)$ is positive then it is the union over $q'>q$ of the $(|h|>q')$; as ${\text{Spec }}{\mathcal{C}}$ is assumed to be locally positive, $(|h|>q)$ is also the union of those $(|h|>q')$ which are positive, and hence there exists a $q'>q$ such that $(|h|>q')$ is positive.
This shows that $L$ is a lower semi-continuous real number. We will show that $(L, \Vert h \Vert )$ forms a continuous real number, which means that $\Vert h \Vert$ is a continuous real number.
- Let $q$ be such that $q \in L$ and $\Vert h \Vert <q$; this means that $|h|$ is both smaller than $q$ everywhere and bigger than $q$ on some positive open sublocale, which is impossible. Hence $L \cap \Vert h \Vert = \emptyset$.
- Let $q<q'$ be two rational numbers. Internally in ${\text{Spec }}{\mathcal{C}}$ one has $|h|<q'$ or $q<|h|$. Hence ${\text{Spec }}{\mathcal{C}}$ is the union of the open sublocales $(|h|<q')$ and $(q<|h|)$; moreover, as $h\in {\mathcal{C}}_0({\text{Spec }}{\mathcal{C}})$, one has $(q<|h|) \ll {\text{Spec }}{\mathcal{C}}$. By local positivity of ${\text{Spec }}{\mathcal{C}}$, the open sublocale $(q<|h|)$ can be written as a union of positive open sublocales $(u_i)$ for $i \in I$. In particular:
$${\text{Spec }}{\mathcal{C}}= (|h|<q') \cup \bigcup_{i \in I} u_i$$
hence, as $(q<|h|) \ll {\text{Spec }}{\mathcal{C}}$, there exists a finite subset $J \subset I$ such that: $$(q<|h|) \subset (|h|<q') \cup \bigcup_{j \in J} u_j$$
As $J$ is finite, it is either empty or inhabited. If $J$ is empty, then $(q<|h|) \subset (|h|<q')$, hence $(|h|<q') = (|h|<q') \cup (q<|h|) = {\text{Spec }}{\mathcal{C}}$, hence $\Vert h \Vert < q'$. On the other hand, if $J$ is inhabited then $(q < |h|)$ contains a positive open sublocale, hence it is positive and hence $q \in L$. This proves that either $\Vert h \Vert<q'$ or $q \in L$.
These two conditions together show that $(L,\Vert h \Vert)$ forms a continuous real number, and this concludes the proof.
$\square$
Extension of the results to localic $C^{*}$-algebras {#secLocalic}
====================================================
{#section-12}
In [@henry2014localic] we have defined a notion of “localic $C^{*}$-algebras" and proved (as previously conjectured by C.J. Mulvey and B. Banaschewski in [@banaschewski2006globalisation]) that the (constructive) Gelfand duality can be extended into a duality between compact regular locales and localic commutative unital $C^{*}$-algebras. The goal of this last section is to explain how the methods developed in [@henry2014localic] allow us to extend the results of the present paper to the localic framework (we have in mind theorems \[mainResult\], \[openeqideal\] and \[continuity=openess\] and proposition \[spec\_class\_char\]). In particular, this section is not meant to be read independently of [@henry2014localic].
{#section-13}
Let us start with the construction of the spectrum of a localic $C^{*}$-algebra.
**Proposition :** [*If ${\mathcal{C}}$ is a (possibly non-unital) commutative $C^{*}$-locale then there exist locales ${\text{Spec}^{\infty}}{\mathcal{C}}$ and ${\text{Spec }}{\mathcal{C}}$ such that ${\text{Spec}^{\infty}}({\mathcal{C}})$ classifies the morphisms $\chi : {\mathcal{C}}\rightarrow {\mathbb{C}}$ of $C^{*}$-locales, and ${\text{Spec }}{\mathcal{C}}$ classifies those which satisfy additionally “$\chi^{-1}({\mathbb{C}}-\{0\})$ is positive". Moreover ${\text{Spec }}{\mathcal{C}}$ is a locally compact regular locale and ${\text{Spec}^{\infty}}({\mathcal{C}})$ is its one point compactification. Finally, these constructions are compatible with pullback along geometric morphisms.* ]{}
**Proof :**
In section $3.5$ of [@henry2014localic] we proved that there is a classifying space for metric maps from ${\mathcal{C}}$ to ${\mathbb{C}}$, denoted $[ {\mathcal{C}},{\mathbb{C}}]_1$; one can then construct ${\text{Spec}^{\infty}}{\mathcal{C}}$ as a sublocale of $[{\mathcal{C}},{\mathbb{C}}]_1$ using the same kind of co-equalizer as in $4.2.3$ of [@henry2014localic]. Moreover, “$\chi^{-1}({\mathbb{C}}-\{0\})$ is positive" is an open subspace of $[{\mathcal{C}},{\mathbb{C}}]_1$ (it is even one of the basic open subspaces), hence ${\text{Spec }}{\mathcal{C}}$ will be an open subspace of ${\text{Spec}^{\infty}}{\mathcal{C}}$.
The compatibility with pullback along geometric morphisms follows immediately from this definition as classifying space.
For the rest of the proposition we can use descent theory: by proposition $2.3.17$ of [@henry2014localic] there exists a locale ${\mathcal{L}}$, with $p:{\mathcal{L}}\rightarrow \{*\}$ the canonical map, such that $p^{\sharp}({\mathcal{C}})$ is weakly spatial, hence is the localic completion of an ordinary $C^{*}$-algebra. In particular, as characters of a $C^{*}$-algebra and of its localic completion are the same, $p^{\sharp}({\text{Spec }}{\mathcal{C}})$ and $p^{\sharp}({\text{Spec}^{\infty}}{\mathcal{C}})$ are the spectra of an ordinary $C^{*}$-algebra, hence they are respectively locally compact completely regular and compact completely regular, and the second is isomorphic to the one point compactification of the first. (Complete) regularity alone is not a property that descends well along open surjections, but it is proved in [@sketches Lemma C3.2.10] that for compact locales regularity is equivalent to being Hausdorff, and [@sketches C5.1.7] proves that, as $p^{\sharp}({\text{Spec}^{\infty}}{\mathcal{C}})$ is compact and separated, ${\text{Spec}^{\infty}}{\mathcal{C}}$ is also compact and separated, hence compact regular. As ${\text{Spec }}{\mathcal{C}}$ is an open subspace of ${\text{Spec}^{\infty}}{\mathcal{C}}$ it is locally compact and regular. Finally, as one point compactification is also compatible with pullback along geometric morphisms, the isomorphism between $p^{\sharp}({\text{Spec}^{\infty}}{\mathcal{C}})$ and the one point compactification of $p^{\sharp}({\text{Spec }}{\mathcal{C}})$ descends into an isomorphism between ${\text{Spec}^{\infty}}{\mathcal{C}}$ and the one point compactification of ${\text{Spec }}{\mathcal{C}}$ (which of course is compatible with the natural inclusion of ${\text{Spec }}{\mathcal{C}}$).
$\square$
{#section-14}
**Theorem :** [*There is an anti-equivalence of categories between the category of commutative (possibly non-unital) $C^{*}$-locales and the category of locally compact regular locales and partial proper maps between them.* ]{}
**Proof :**
The proof given in [@henry2014localic 4.2.5] that the ordinary Gelfand duality extends to the localic Gelfand duality applies to the non-unital case almost without any change: if $X$ is a locally compact regular locale one can define the $C^{*}$-locale ${\mathcal{C}}_0(X)$ as the kernel of the evaluation at infinity on the $C^{*}$-locale ${\mathcal{C}}(X^{\infty})$. ${\mathcal{C}}_0(X)$ is a $C^{*}$-locale: the only non-trivial point to check is that it is locally positive, but this follows from the fact that the map ${\mathcal{C}}(X^{\infty}) \rightarrow {\mathcal{C}}_0(X)$ which sends $f$ to $f-f(\infty)$ is a surjection. There is a canonical map from $X$ to ${\text{Spec }}{\mathcal{C}}_{0}(X)$. By [@henry2014localic 2.3.17 and 2.6] there exists a positive locally positive locale ${\mathcal{L}}$ such that $p^{\sharp}(X^{\infty})$ is completely regular in the internal logic of ${\mathcal{L}}$ (and hence also $p^{\sharp}(X)$); in particular the canonical map $p^{\sharp}(X) \rightarrow p^{\sharp}({\text{Spec }}{\mathcal{C}}_{0}(X)) \simeq {\text{Spec }}{\mathcal{C}}_{0}(p^{\sharp}(X))$ is an isomorphism because of the ordinary non-unital Gelfand duality applied to the $C^{*}$-algebra of points of ${\mathcal{C}}_0(X)$, and because open surjections are effective descent morphisms (and $p$ is an open surjection) this implies that $X \simeq {\text{Spec }}{\mathcal{C}}_{0}(X)$.
The exact same argument also shows that for any commutative ${\mathcal{C}}^{*}$-locale ${\mathcal{C}}$ the canonical map ${\mathcal{C}}\rightarrow {\mathcal{C}}_0({\text{Spec }}{\mathcal{C}})$ is an isomorphism, and the correspondence of morphisms is also obtained in exactly the same way (because partial maps descend well: first apply descent to their domain and then to the map itself).
$\square$
{#section-15}
**Theorem :** [*Let ${\mathcal{C}}$ be a commutative $C^{*}$-locale, then there is an order preserving bijection between open sublocales of ${\text{Spec }}{\mathcal{C}}$ and locally positive fiberwise closed ideals of ${\mathcal{C}}$.*]{}
One might be surprised to obtain “fiberwise" closed ideals, and not just closed ideals, in this duality. But there is no reason to be surprised: one should just notice that the usual notions of closedness and density that we use in the constructive theory of Banach spaces do not correspond to closedness and density but to fiberwise closedness and fiberwise density.
Indeed, a point $x$ is in the closure of a subset $S$ of a Banach space $B$ if for all $\epsilon >0$ there exists an $s \in S$ such that $\Vert x - s \Vert < \epsilon$, i.e. such that for every neighbourhood $V$ of $x$ there is a point in $V \cap S$, i.e., in localic terms, $V \cap S$ is positive, which exactly says that $x$ is in the fiberwise closure of $S$.
**Proof :**
[Let ${\mathcal{C}}$ be a commutative $C^{*}$-locale, and let $I$ be a locally positive fiberwise closed ideal of ${\mathcal{C}}$. In particular $I$ is a $C^{*}$-locale and hence has a spectrum $ U = {\text{Spec }}I$. Let ${\mathcal{L}}$ be a positive locally positive locale such that $p^{\sharp}(I)$ and $p^{\sharp}({\mathcal{C}})$ are weakly spatial; by \[openeqideal\], $p^{\sharp}(U)$ identifies with an open sublocale of $p^{\sharp}({\text{Spec }}{\mathcal{C}})$. This open injection is canonical, hence compatible with the descent data, and hence comes from a map from $U$ to ${\text{Spec }}{\mathcal{C}}$ which also has to be an open injection (because its pullback along $p$ is an open injection). Conversely, if $U$ is an open sublocale of ${\text{Spec }}{\mathcal{C}}$ then ${\mathcal{C}}_{0}(U)$ is a $C^{*}$-locale and identifies, by the same descent argument, with a locally positive fiberwise closed ideal of ${\mathcal{C}}$, and these two constructions are inverses of each other essentially because of the localic Gelfand duality we just proved. ]{} $\square$
And finally:
{#section-16}
**Theorem :**
*Let ${\mathcal{C}}$ be a commutative $C^{*}$-locale, then the following conditions are equivalent:*
- ${\text{Spec }}{\mathcal{C}}$ is locally positive.
- The norm map from ${\mathcal{C}}$ to the locale of upper semi-continuous real numbers factors through the natural map from the locale of continuous real numbers to the locale of upper semi-continuous real numbers.
**Proof :**
Let ${\mathcal{L}}$ be a positive locally positive locale and $p$ the canonical map $p:{\mathcal{L}}\rightarrow \{* \}$ such that $p^{\sharp}({\mathcal{C}})$ is weakly spatial.
Assume the first condition; then, in the logic of ${\mathcal{L}}$, the locale ${\text{Spec }}p^{\sharp}{\mathcal{C}}\simeq p^{\sharp} {\text{Spec }}{\mathcal{C}}$ is still locally positive, hence by theorem \[continuity=openess\] each point of $p^{\sharp}{\mathcal{C}}$ has a continuous norm. The subalgebra of points is fiberwise dense and endowed with a map to the locale of continuous real numbers which factors the norm map. This map from the locale of points to the locale of real numbers is clearly a metric map and hence extends by completion to a map from all of $p^{\sharp}{\mathcal{C}}$ to the locale ${\mathbb{R}}$ which also factors the norm (by [@henry2014localic 3.3 and 3.2.5]). By uniqueness of such a factorisation, it is compatible with the descent data on $p^{\sharp}{\mathcal{C}}$ and $p^{\sharp}{\mathbb{R}}$ and hence induces a map from ${\mathcal{C}}$ to ${\mathbb{R}}$ which also factors the norm, and this concludes the proof of the first implication.
We now assume the second condition. This factorisation of the norm implies that every point of $p^{\sharp}({\mathcal{C}})$ has a continuous norm, hence ${\text{Spec }}p^{\sharp}({\mathcal{C}}) \simeq p^{\sharp}({\text{Spec }}{\mathcal{C}})$ is locally positive by \[continuity=openess\] and hence ${\text{Spec }}{\mathcal{C}}$ is also locally positive by [@sketches C5.1.7].
$\square$
[^1]: We mean the formal locale of real numbers, which might be non-spatial and hence different from the topological space of real numbers in the absence of the law of excluded middle.
[^2]: This means that $f^{*}({\mathcal{O}}(X))$ generates ${\mathcal{O}}(f^{\sharp}(X))$ under arbitrary joins.
[^3]: We mean by that the localic form of construction like [@sketches A2.1.12, A4.1.12 and A4.5.6]
[^4]: This is exactly the general construction of an Artin gluing.
[^5]: This also corresponds to the pullback of the localic completion of ${\mathcal{C}}$, hence this is essentially compatible with the notation for pullback of locales
[^6]: Except that we need to be more careful about the “non-zero" hypothesis, which is inessential in the classical case.
---
abstract: 'Five fields located close to the center of the globular cluster NGC 104=47 Tuc were surveyed in a search for variable stars. We present $V$-band light curves for 42 variables. This sample includes 13 RR Lyr stars – 12 of them belong to the Small Magellanic Cloud (SMC) and 1 is a background object from the galactic halo. Twelve eclipsing binaries were identified – 9 contact systems and 3 detached/semi-detached systems. Seven eclipsing binaries are located in the blue straggler region on the cluster color-magnitude diagram (CMD) and four binaries can be considered main-sequence systems. One binary is probably a member of the SMC. Eight contact binaries are likely members of the cluster and one is most probably a foreground star. We show that for the surveyed region of 47 Tuc, the relative frequency of contact binaries is very low as compared with other recently surveyed globular clusters. The sample of identified variables also includes 15 red variables with periods ranging from about 2 days to several weeks. A large fraction of these 15 variables probably belong to the SMC, but a few stars are likely to be red giants in 47 Tuc. $VI$ photometry for about 50 000 stars from the cluster fields was obtained as a by-product of our survey. [^1]'
author:
- 'J. Kaluzny'
- 'M. Kubiak'
- 'M. Szyma[ń]{}ski'
- 'A. Udalski'
- 'W. Krzemi[ń]{}ski'
- Mario Mateo
- 'K.Z. Stanek'
date: 'Received…, accepted…'
title: ' The Optical Gravitational Lensing Experiment. Variable stars in globular clusters -IV. Fields 104A-E in 47 Tuc [^2] '
---
Introduction
============
The Optical Gravitational Lensing Experiment (OGLE) is a long term project with the main goal of searching for dark matter in our Galaxy by identifying microlensing events toward the galactic bulge (Udalski et al. 1992, 1994). At times when the bulge is unobservable we conduct other long-term photometric programs. A complete list of side-projects attempted by the OGLE team can be found in Paczy[ń]{}ski et al. (1995). In particular, during the observing seasons 1993, 1994 and 1995 we monitored the globular clusters NGC 104=47 Tuc and NGC 5139=$\omega$ Cen in a search for variable stars of various types. Of primary interest was the detection of detached eclipsing binaries. In Papers I, II & III (Kaluzny et al. 1996, 1997a, 1997b) we presented results for $\omega$ Cen. Here we report on variables discovered in the field of 47 Tuc.
Observations and data reduction
===============================
The OGLE[^3] project was conducted using the 1-m Swope telescope at Las Campanas Observatory. A single $2048\times 2048$ pixel Loral CCD chip, giving a scale of 0.435 arcsec/pixel, was used as the detector. The initial processing of the raw frames was done automatically in near-real time. Details of the standard OGLE processing techniques were described by Udalski et al. (1992).
In 1993 we monitored fields 104A and 104B, located west and east of the cluster center, respectively. In 1994 we monitored field 104C, located north of the cluster center. In 1995 we monitored fields 104D and 104E, covering the southern part of the cluster. A condensed summary of the data used in this paper is given in Table 1. Detailed logs of the observations can be found in Udalski et al. (1993, 1995, 1997). The equatorial coordinates of the centers of fields 104A-E are given in Table 2. A schematic chart with marked locations of all of the monitored fields is shown in Fig. 1. Most of the monitoring was performed through the Johnson $V$ filter. Some exposures in the Kron-Cousins $I$ band were also obtained. Most of the observations in the $V$ band were collected with an exposure time ranging from 300 to 600 seconds (420 seconds was the most common value). The $I$-band exposures lasted 300 seconds. For the majority of the analyzed frames the seeing was better than 1.6 arcsec. The reduction techniques as well as the algorithms used for selecting potential variables are described in Paper I. Profile photometry was extracted with the help of DoPHOT (Schechter et al. 1993). The total number of stars contained in the data bases with $V$-band photometry ranged from 18397 to 33014. Table 3 gives condensed information about the numbers of stars analyzed for variability and about the quality of the derived photometry. Useful data were obtained for stars with $14.0<V<20.25$.
Variable stars
==============
In this paper we present results for 42 variables identified in the five observed fields. All except two are new discoveries and were assigned the names OGLEGC212-255. The names OGLEGC217 and OGLEGC224 were given to the previously known variables V9 and V3 (e.g. Hogg 1973). The photometry obtained for these two stars was poor because their images were badly overexposed on most of the analyzed frames. Therefore, we decided to drop OGLEGC217=V9 and OGLEGC224=V3 from our list of variables.
The rectangular and equatorial coordinates of the 42 newly identified variables are listed in Table 4[^4]. The rectangular coordinates correspond to the positions of the variables on the $V$-band “template” images. These images allow easy identification of all objects listed in Table 4. The name of the field in which a given variable can be identified is given in the 6th column. All frames collected by the OGLE team were deposited at the NASA NSS Data Center [^5]. Frames mr5228, mr5227, mr7890, mr14597 and mr14595 were used as templates for fields 104A, 104B, 104C, 104D and 104E, respectively. The transformation from rectangular to equatorial coordinates was derived from the positions of stars which could be matched with objects from the astrometric list kindly provided by Kyle Cudworth. The number of “transformation stars” identified in a given field ranged from 55 to 100. The adopted frame solutions reproduce the equatorial coordinates of these stars with residuals rarely exceeding 0.5 arcsec. According to Cudworth the absolute accuracy of the equatorial coordinates for stars from his table is not worse than $2\arcsec$. Our sample of variables includes 13 RR Lyr stars. Table 5 lists basic characteristics of the light curves of these stars. The mean $V$ magnitudes were calculated by numerically integrating the phased light curves after converting them into an intensity scale. Photometric data for the remaining variables are given in Table 6. The $V-I$ colors listed in Tables 5 and 6 were measured at random phases. For each of the fields we used a single exposure in the $I$ band bracketed by two exposures in the $V$ band. To determine the periods of the identified variables we used an [*aov*]{} statistic (Schwarzenberg-Czerny 1989, 1991). This statistic allows – in particular – reliable determination of periods for variables with non-sinusoidal light curves (e.g. eclipsing binaries); a schematic illustration of this kind of period search is sketched below. Phased light curves of the RR Lyr stars are shown in Figs. 2 & 3 while Fig. 4 presents phased light curves for the remaining variables with determined periods. Time-domain light curves for those variables for which we were unable to determine periods are shown in Fig. 5.
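For readers who wish to experiment with this kind of period search, the sketch below implements a simple phase-binning analysis-of-variance statistic in Python. It is only an illustration of the idea behind the [*aov*]{} method and not the Schwarzenberg-Czerny implementation used here; the bin count, the trial-period grid and all function names are our own choices.

```python
import numpy as np

def aov_statistic(t, mag, period, n_bins=10):
    """Analysis-of-variance statistic for one trial period: the ratio of the
    between-bin to the within-bin variance of the phase-folded light curve
    (assumes at least two non-empty phase bins)."""
    phase = np.mod(t / period, 1.0)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    grand_mean = mag.mean()
    s1, s2, r = 0.0, 0.0, 0      # between-bin SS, within-bin SS, non-empty bins
    for b in range(n_bins):
        m = mag[bins == b]
        if m.size == 0:
            continue
        r += 1
        s1 += m.size * (m.mean() - grand_mean) ** 2
        s2 += ((m - m.mean()) ** 2).sum()
    n = mag.size
    return (s1 / (r - 1)) / (s2 / (n - r))

def best_period(t, mag, p_min=0.2, p_max=1.0, n_trials=20000):
    """Scan a grid of trial periods and return the one maximizing the statistic."""
    periods = np.linspace(p_min, p_max, n_trials)
    stats = [aov_statistic(t, mag, p) for p in periods]
    return periods[int(np.argmax(stats))]
```

In practice one would evaluate the statistic on a sufficiently dense grid of trial periods and then inspect the light curve folded at the best candidate period.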
Figure 6 shows the location of all variables with known colors on the cluster color-magnitude diagram (CMD). For the RR Lyr stars the marked positions correspond to the intensity-averaged magnitudes. For the remaining variables we marked positions corresponding to the magnitude at maximum light. All but one of the RR Lyr stars are grouped around $V\approx 19.5$, indicating that they belong to the SMC. The RR Lyr variable OGLEGC223 is a background object in the galactic halo.
There are 12 certain eclipsing binaries in our sample of variables. This group of stars is dominated by contact binaries with EW-type light curves and periods shorter than 0.4 day. The only 3 stars whose light curves indicate a detached or semi-detached configuration are OGLEGC228, OGLEGC240 and OGLEGC253. OGLEGC240 is a detached binary with an EA-type light curve. The light curve of this variable is relatively noisy due to the faintness of the object. Nonetheless, examination of the individual frames leaves no doubt about the reality of the observed changes. The blue color and apparent magnitude of OGLEGC240 indicate that it is an A spectral type binary in the SMC.\
OGLEGC228 shows a light curve typical of semi-detached binaries. This star is located among the candidate blue stragglers on the cluster CMD. OGLEGC253 is also a potential blue straggler. Its light curve shows two minima of very different depths, but we cannot exclude the possibility that the components of this binary are in geometrical contact. Several systems with light curves similar to that of OGLEGC253 were analyzed during the last decade (e.g. Hilditch, King & McFarlane 1989). Although most of the detected binaries are candidate blue stragglers, there are four contact systems located slightly to the red of the cluster main sequence. These four binaries are potential main sequence systems belonging to 47 Tuc. We return below to the question of the membership of the identified contact binaries.
Variables which could not be classified as either RR Lyr stars or eclipsing binaries are generally red stars with periods ranging from 2 days to several weeks. Six red variables which are located on or near the subgiant branch of 47 Tuc can be considered candidates for cluster members. Recently, Edmonds & Gilliland (1996) reported the discovery of low-amplitude variability among a large fraction of K giants in 47 Tuc. Using data collected with the HST they estimated that most of the variable giants have periods between 2 and 4 days and $V$ amplitudes in the range 5–25 mmag. Edmonds & Gilliland (1996) argue that the observed variability of K giants in 47 Tuc is caused by low-overtone pulsations. The variable K giants from our sample have periods ranging from 2 to 36 days and show full amplitudes in the $V$ band ranging from 0.08 to 0.18 mag. Based on the quality of our data we estimate conservatively that we should be able to detect any periodic variables among cluster giants with periods up to 2 weeks and full amplitudes exceeding 0.05 mag. We note that the six candidates for variable K giants identified by us can easily be studied spectroscopically. Such observations would answer the question about the mechanism of the observed photometric variability, since the observed light variations are sufficiently large to imply detectable changes of $V_{rad}$ if the variability is indeed due to pulsations.
Variables with $V-I>1.1$ and $V>15.5$ are likely to be evolved stars on the AGB in the SMC. We note that SMC stars can easily be distinguished from 47 Tuc members based on their radial velocities (the heliocentric radial velocities of the SMC and 47 Tuc are $+175$ km/s and $-18.7$ km/s, respectively).
We consider some of our period determinations as preliminary. In particular, for OGLEGC229 we adopted $P=8.38$ $d$ because the light curve seems to show two distinct minima. However, we cannot exclude the possibility that the correct period is in fact half this value. Also, the period of OGLEGC240 may be half the adopted value of $P=4.32$ $d$. For $P=2.16$ $d$ our light curve of OGLEGC240 would show just one detectable eclipse.
Cluster membership of the contact binaries
------------------------------------------
The 47 Tuc cluster is located at a high galactic latitude of $b=-45$ deg. However, we cannot assume that all eclipsing binaries listed in Table 6 are cluster members. In particular, faint contact binaries with $V>16$ are known to occur at high galactic latitudes (e.g. Saha 1984). We have applied the absolute brightness calibration established by Rucinski (1995) to calculate $M_{\rm V}$ for the newly discovered contact binaries. Rucinski’s calibration gives $M_{\rm V}$ as a function of period, unreddened color $(V-I)_{0}$ and metallicity: $$\begin{aligned}
M_{\rm V}^{cal}=-4.43\log(P)+3.63(V-I)_{0}
-0.31-0.12\times [{\rm Fe/H}].\end{aligned}$$ We adopted $[{\rm Fe/H}]=-0.76$ and $E(V-I)=0.05$ for all systems (Harris 1996). Figure 7 shows the period versus apparent distance modulus diagram for the contact binaries identified in fields 104A-E. The apparent distance modulus was calculated for each system as the difference between its $V_{max}$ magnitude and $M_{\rm V}^{cal}$. The apparent distance modulus of 47 Tuc is estimated at $(m-M)_{\rm V}=13.21$ (Harris 1996). The only system with a significantly deviating value of $(m-M)_{\rm V}$ is OGLEGC245. This binary is most probably a foreground variable. The remaining 8 systems plotted in Fig. 7 are likely members of the cluster.
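The membership test described above reduces to a one-line computation per system. The following Python snippet is a minimal sketch of it; the example period, color and $V_{max}$ are invented for illustration and do not refer to any entry of Table 6.

```python
import math

def m_v_rucinski(period_days, v_i_observed, feh=-0.76, e_v_i=0.05):
    """Rucinski (1995) calibration:
    M_V = -4.43 log P + 3.63 (V-I)_0 - 0.31 - 0.12 [Fe/H]."""
    v_i_0 = v_i_observed - e_v_i   # deredden the observed color with E(V-I)
    return -4.43 * math.log10(period_days) + 3.63 * v_i_0 - 0.31 - 0.12 * feh

# Hypothetical contact binary: P = 0.30 d, observed V-I = 0.75, V_max = 17.8
m_v = m_v_rucinski(0.30, 0.75)
apparent_modulus = 17.8 - m_v      # to be compared with (m-M)_V = 13.21 for 47 Tuc
print(round(m_v, 2), round(apparent_modulus, 2))
```

A system whose apparent distance modulus computed in this way lies far from 13.21 mag would, like OGLEGC245, be regarded as a probable non-member.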
Completeness of the survey for contact binaries
-----------------------------------------------
Our survey resulted in the identification of 8 contact binaries which are likely members of the cluster and 2 detached/semi-detached binaries which are possible blue stragglers belonging to the cluster. Only 4 contact systems were identified below the cluster turnoff. These numbers are surprisingly small considering that we analyzed the light curves of 76119 stars with average magnitudes $V<19.5$, mostly main sequence stars belonging to the cluster. For cluster members the limiting magnitude $V=19.5$ corresponds to $M_{\rm V}=6.1$; here we adopted $(m-M)_{\rm V}=13.4$ for the apparent distance modulus of 47 Tuc (Hesser et al. 1987). The quality and quantity of the photometry were sufficient to allow the detection of potential eclipsing binaries with periods shorter than 1 day and eclipses deeper than about 0.3 mag (see Tables 1 & 3).
A hint that our survey is quite complete with respect to faint short-period variables comes from the fact that we detected 12 RR Lyr stars from the SMC. Graham (1975) searched for variables in a field covering an area of $1\deg \times 1.3 \deg$. His field was centered north of 47 Tuc and included a small part of the cluster. Graham identified 76 RR Lyr stars, corresponding to a surface density of 0.016 variables per arcmin$^{2}$. The effective area covered by our survey was 935 arcmin$^{2}$, yielding a surface density of RR Lyr stars of about 0.013 variables per arcmin$^2$. Apparently our survey did not miss many of the SMC RR Lyr stars.
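The quoted surface densities follow directly from the counts and field areas; a two-line check (Graham's $1\deg\times1.3\deg$ field converted to arcmin$^2$) is:

```python
graham_area = (1 * 60) * (1.3 * 60)      # arcmin^2
print(76 / graham_area)                  # ~0.016 RR Lyr per arcmin^2 (Graham 1975)
print(12 / 935.0)                        # ~0.013 RR Lyr per arcmin^2 (this survey)
```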
The relative frequency of occurrence of $detectable$ contact binaries in our sample is $f_{c}=8/76119\approx 1.0E-4$. This frequency is more than an order of magnitude lower than the binary frequency observed for fields containing galactic open clusters (Kaluzny & Rucinski 1993; Mazur, Krzeminski & Kaluzny 1995) and for fields located near the galactic center which were monitored by OGLE (Rucinski 1997). Recent surveys of the globular clusters M71 (Yan & Mateo 1994) and M5 (Yan & Reid 1996) gave $f_{c}=4/5300\approx 7.5E-4$ and $f_{c}=5/3600\approx 1.4E-3$, respectively.
To get a quantitative estimate of the completeness of our sample we performed tests with artificial variables for fields 104B and 104E. The results of the test for field 104B should also apply to fields 104A and 104C, because all three fields contain similar numbers of measurable stars and were observed with comparable frequency. Similarly, the results for field 104E should apply to field 104D. For both fields we selected 5 samples of objects from the sets of stars whose light curves were examined for variability. The brightest sample included stars with $16.0<V<17.0$ and the faintest sample included stars with $19.0<V<19.5$. A total of 100 stars were selected at random from each sample. The observed light curves of these stars were then interlaced with the synthetic light curves of model contact binaries. The synthetic light curves were generated using a simple prescription given by Rucinski (1993). Two separate cases were considered: Case I – a contact binary with inclination $i=60\deg$ and mass ratio $q=0.10$; Case II – a contact binary with inclination $i=70\deg$ and mass ratio $q=0.30$. In both cases the so-called “fill-out parameter” was set to $f=0.5$. The light curves corresponding to Case I and Case II show primary eclipses with depths of 0.15 and 0.32 mag, respectively. For each artificial light curve the period was drawn at random from the range 0.2–0.45 d, and the phase of the first data point was also selected at random. The simulated light curves were then analysed in the same manner as the observed light curves; specifically, we applied a procedure based on the $\chi^{2}$ test. The number of artificial variables which were “recovered” for Cases I–II and the 5 magnitude ranges is given in Table 7. It may be concluded that the completeness of our sample of contact binaries is better than 88% for systems with $V<19.5$ and eclipse depths larger than 0.32 mag. For systems with full amplitudes as small as 0.15 mag the completeness is higher than 73% for $V<19.0$.
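For readers who wish to reproduce the spirit of this test, the Python sketch below outlines the injection and recovery loop. It is a schematic stand-in only: it assumes a toy double-cosine eclipse model rather than the Rucinski (1993) prescription actually used, and an arbitrary reduced-$\chi^{2}$ threshold for flagging variability.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_eclipse(t, period, phase0, depth):
    # Toy contact-binary light curve (a stand-in for the Rucinski 1993
    # prescription): a double-humped cosine whose minima have the given
    # depth in magnitudes.
    phase = (t / period + phase0) % 1.0
    return 0.5 * depth * (1.0 - np.cos(4.0 * np.pi * phase))

def recovered(t, mag, err, depth, chi2_threshold=3.0):
    # Inject one artificial variable (random period in 0.2-0.45 d, random
    # phase) and flag it with a reduced chi^2 test against constant light.
    period = rng.uniform(0.2, 0.45)
    phase0 = rng.uniform(0.0, 1.0)
    m = mag + synthetic_eclipse(t, period, phase0, depth)
    mean = np.average(m, weights=1.0 / err**2)
    chi2_dof = np.sum(((m - mean) / err) ** 2) / (m.size - 1)
    return chi2_dof > chi2_threshold

# Illustrative run: ~270 epochs over one observing season, 0.03 mag noise.
t = np.sort(rng.uniform(0.0, 80.0, 270))
mag = rng.normal(0.0, 0.03, t.size)
err = np.full(t.size, 0.03)
rate = np.mean([recovered(t, mag, err, depth=0.32) for _ in range(100)])
print("recovered fraction for 0.32 mag eclipses:", rate)
```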
It has been noted by Kaluzny et al. (1997c) that the frequency of occurrence of contact binaries in 47 Tuc is very low in comparison with open clusters and with several globular clusters which have recently been surveyed for eclipsing binaries by various groups. However, the results presented here are based on a larger sample of stars than that analyzed by Kaluzny et al. (1997c). A more extended discussion of this topic is given in Kaluzny et al. (1997c). It is appropriate to note at this point that the low frequency of occurrence of contact binaries among 47 Tuc stars was first suggested by Shara et al. (1988).
The color-magnitude diagrams
============================
As a by-product of our survey we obtained $V$ vs. $V-I$ CMDs for all 5 monitored fields. In Fig. 8 we show the CMDs for fields 104A and 104E. For each field the final photometry was obtained by merging measurements extracted from “long” and “short” exposures. The photometry obtained for fields 104A-B extends to brighter magnitudes than that obtained for fields 104C-E. The frames used for the construction of the CMDs of the monitored fields are listed in Table 8. Any detailed analysis of these data is beyond the scope of this paper. We note only that our data can be used to select candidates for cluster blue stragglers.
All photometry presented in this section was submitted in tabular form to the editors of A&A and is available in electronic form to all interested readers (see Appendix A). Potential users of this photometry should be aware of the possibility of some systematic errors, which are most likely to be significant for relatively faint stars. The CCD chip used for the OGLE observations suffers from some nonlinearity. More details on this subject can be found in Paper I.
Summary
=======
The main result of our survey is the identification of 8 contact binaries which are likely members of 47 Tuc and 2 detached/semi-detached binaries which are possible blue stragglers. Particularly interesting is the bright binary OGLEGC228. By combining radial velocity curves with photometry one would be able to determine an accurate distance to this system. That would in turn give the distance to the cluster if the binary is indeed a member of 47 Tuc. We failed to identify any detached eclipsing systems among cluster turnoff stars. Three such systems with periods ranging from 1.5 to 4.6 days were identified in our survey of $\omega$ Cen (Papers I & II).
We identified 6 variables which are likely to be red giants belonging to the cluster. These stars exhibit modulation of their luminosity with periods ranging from 2 to 36 days and full amplitudes in the $V$ band ranging from 0.08 to 0.18 mag. They may represent high-amplitude counterparts of the low-amplitude variable K giants identified in the central region of 47 Tuc by Edmonds & Gilliland (1996).
This project was supported by NSF grants AST-9530478 and AST-9528096 to Bohdan Paczynski. JK was supported also by the Polish KBN grant 2P03D-011-12. We are indebted to Kyle Cudworth for sending us the astrometric data on 47 Tuc. We thank Ian Thompson for his detailed remarks on the draft version of this paper.
Appendix A
==========
Tables containing the light curves of all variables discussed in this paper, as well as tables with the $VI$ photometry for the surveyed fields, are published by A&A at the Centre de Données de Strasbourg, where they are available in electronic form: see the Editorial in A&A 1993, Vol. 280, page E1.
Edmonds, P.D., & Gilliland, R.L. 1996, ApJ 464, L157
Graham, J.A. 1975, PASP 87, 641
Harris, W.E. 1996, AJ 112, 1487
Hesser, J.E., Harris, W.E., VandenBerg, D.A., Allwright, J.W.B., Shott, P., & Stetson, P.B. 1987, PASP 99, 739
Hilditch, R.W., King, D.J., & McFarlane, T.M. 1989, MNRAS 237, 447
Hogg, H.S. 1973, Publ. DDO 6, No. 3, p. 1
Kaluzny, J., & Rucinski, S.M. 1993, in [*Blue Stragglers*]{}, ed. R.A. Saffer (San Francisco, ASP), ASP Conf. Ser. Vol. 53, 164
Kaluzny, J., Kubiak, M., Szymański, M., Udalski, A., Krzeminski, W., & Mateo, M. 1996, A&AS 120, 139 (Paper I)
Kaluzny, J., Kubiak, M., Szymański, M., Udalski, A., Krzeminski, W., Mateo, M., & Stanek, K.Z. 1997a, A&AS in press (Paper II)
Kaluzny, J., Kubiak, M., Szymański, M., Udalski, A., Krzeminski, W., Mateo, M., & Stanek, K.Z. 1997b, A&AS in press (Paper III)
Kaluzny, J., Krzeminski, W., Mazur, B., Stepien, K., & Wysocka, A. 1997c, Acta Astron., submitted
Mazur, B., Krzeminski, W., & Kaluzny, J. 1995, MNRAS 273, 59
Paczynski, B. et al. 1995, IAU Symp. 169: Unsolved Problems of the Milky Way, ed. L. Blitz, p. 133
Rucinski, S.M. 1993, PASP 105, 1433
Rucinski, S.M. 1995, PASP 107, 648
Rucinski, S.M. 1997, in Variable Stars and the Astrophysical Returns of Microlensing Surveys, ed. R. Ferlet (IAP: Paris), in press
Saha, A. 1984, ApJ 283, 580
Schechter, P., Mateo, M., & Saha, A. 1993, PASP 105, 1342
Schwarzenberg-Czerny, A. 1989, MNRAS 241, 153
Schwarzenberg-Czerny, A. 1991, MNRAS 253, 198
Shara, M.M., Kaluzny, J., Potter, M., & Moffat, A.F.J. 1988, ApJ 328, 594
Udalski, A., Szymański, M., Kaluzny, J., Kubiak, M., & Mateo, M. 1992, Acta Astron. 42, 253
Udalski, A., Szymański, M., Kaluzny, J., Kubiak, M., Mateo, M., & Krzemiński, W. 1993, Acta Astron. 44, 1
Udalski, A., Szymański, M., Kaluzny, J., Kubiak, M., Mateo, M., & Krzemiński, W. 1994, ApJ 426, L69
Udalski, A., Szymański, M., Kaluzny, J., Kubiak, M., Mateo, M., & Krzemiński, W. 1995, Acta Astron. 45, 237
Udalski, A., Szymański, M., Kaluzny, J., Kubiak, M., Mateo, M., Krzemiński, W., & Stanek, K.Z. 1997, Acta Astron. 47, 1
Yan, L., & Mateo, M. 1994, AJ 108, 1810
Yan, L., & Reid, N. 1996, MNRAS 279, 751
[lll]{} Field & $N_{\rm V}$ & Dates of\
& & observations\
104A & 286 & Jun 17 - Sep 7, 1993\
104B & 270 & Jun 17 - Sep 7, 1993\
104C & 288 & Jun 16 - Sep 15, 1994\
104D & 125 & Jun 8 - Aug 22, 1995\
104E & 120 & Jun 8 - Aug 22, 1995\
[lll]{} Field & RA(1950) & DEC(1950)\
& h:m:s & deg:$\arcmin$ : $\arcsec$\
104A & 0:19:52.7 & -72:22:45\
104B & 0:23:53.1 & -72:21:01\
104C & 0:21:47.9 & -72:10:35\
104D & 0:20:14.7 & -72:31:14\
104E & 0:23:10.4 & -72:31:22\
[ccrcrcrcrcr]{} &104A & & 104B & & 104C & & 104D & & 104E &\
V &$<rms>$& N &$<rms>$& N &$<rms>$& N &$<rms>$& N &$<rms>$&N\
14.25 & 0.014& 214 & 0.014& 196 & 0.021& 100 &0.016& 112&0.012& 86\
14.75 & 0.017& 118 & 0.015& 112 & 0.014& 65 &0.012& 54&0.011& 69\
15.25 & 0.016& 109 & 0.015& 115 & 0.014& 62 &0.012& 63&0.013& 57\
15.75 & 0.015& 153 & 0.016& 165 & 0.012& 117 &0.017& 93&0.012& 67\
16.25 & 0.018& 240 & 0.019& 244 & 0.016& 152 &0.018& 122&0.017& 111\
16.75 & 0.026& 434 & 0.023& 499 & 0.018& 261 &0.019& 222&0.020& 213\
17.25 & 0.027& 2057 & 0.027& 2181& 0.022& 1273&0.023& 1064&0.022& 1017\
17.75 & 0.032& 3149 & 0.032& 3248& 0.025& 1922&0.026& 1734&0.026& 1584\
18.25 & 0.041& 3973 & 0.039& 4089& 0.031& 2486&0.034& 2292&0.033& 2179\
18.75 & 0.052& 4574 & 0.054& 4884& 0.042& 3193&0.044& 2697&0.044& 2622\
19.25 & 0.072& 4515 & 0.071& 5000& 0.055& 3746&0.058& 3029&0.061& 2984\
19.75 & 0.102& 3803 & 0.100& 4248& 0.076& 3727&0.081& 2537&0.086& 2930\
20.25 & 0.148& 2986 & 0.144& 3029& 0.114& 3456&0.119& 2118&0.127& 2579\
[lrrrrrr]{} Name & X & Y & RA(1950) & Dec(1950) & Field\
& & & h:m:s & deg:$\arcmin$ : $\arcsec$ &\
OGLEGC212 & 220.9 & 254.7 & 0:18:41.83 &-72:28:44.7& A\
OGLEGC213 & 226.8 &1488.2 & 0:18:33.61 &-72:19:49.3& A\
OGLEGC214 & 993.9 &1581.3 & 0:19:45.95 &-72:18:42.8& A\
OGLEGC215 & 1338.7 & 541.9 & 0:20:26.80 &-72:26:01.4& A\
OGLEGC216 & 1364.1 &1905.1 & 0:20:18.63 &-72:16:09.3& A\
OGLEGC218 & 1706.3 &1622.4 & 0:20:53.31 &-72:17:59.4& A\
OGLEGC219 & 344.5 & 244.3 & 0:22:54.32 &-72:27:00.8& B\
OGLEGC220 & 206.5 &1776.8 & 0:22:30.21 &-72:16:00.3& B\
OGLEGC221 & 390.8 &1673.4 & 0:22:48.44 &-72:16:39.2& B\
OGLEGC222 & 643.2 & 280.3 & 0:23:22.66 &-72:26:35.0& B\
OGLEGC223 & 768.9 & 816.9 & 0:23:30.69 &-72:22:37.9& B\
OGLEGC225 & 709.6 &1398.1 & 0:23:20.72 &-72:18:27.9& B\
OGLEGC226 & 931.0 &1853.5 & 0:23:38.38 &-72:15:02.5& B\
OGLEGC227 & 1014.1 & 90.7 & 0:23:59.63 &-72:27:44.3& B\
OGLEGC228 & 1041.4 & 657.1 & 0:23:57.90 &-72:23:37.6& B\
OGLEGC229 & 1023.4 &1208.5 & 0:23:52.00 &-72:19:39.2& B\
OGLEGC230 & 1131.5 &1315.2 & 0:24:01.46 &-72:18:49.1& B\
OGLEGC231 & 1070.9 &1853.6 & 0:23:51.64 &-72:14:57.5& B\
OGLEGC232 & 1512.2 & 176.1 & 0:24:46.64 &-72:26:49.3& B\
OGLEGC233 & 1781.4 & 591.1 & 0:25:09.02 &-72:23:39.6& B\
OGLEGC234 & 163.2 & 618.6 & 0:20:29.83 &-72:13:58.7& C\
OGLEGC235 & 223.1 &1394.6 & 0:20:30.00 &-72:08:20.0& C\
OGLEGC236 & 800.0 & 337.9 & 0:21:32.23 &-72:15:38.6& C\
OGLEGC237 & 1403.4 &1232.5 & 0:22:22.42 &-72:08:48.9& C\
OGLEGC238 & 1800.1 & 661.7 & 0:23:04.39 &-72:12:41.6& C\
OGLEGC239 & 1359.7 & 528.5 & 0:22:23.80 &-72:13:55.8& C\
OGLEGC240 & 1552.8 &1853.8 & 0:22:31.56 &-72:04:13.9& C\
OGLEGC241 & 1649.5 & 992.6 & 0:22:47.51 &-72:10:23.8& C\
OGLEGC242 & 130.6 & 969.0 & 0:18:49.43 &-72:32:03.6& D\
OGLEGC243 & 328.9 & 931.3 & 0:19:08.78 &-72:32:14.1& D\
OGLEGC244 & 1467.4 & 389.3 & 0:21:02.32 &-72:35:33.2& D\
OGLEGC245 & 1227.3 &1183.7 & 0:20:33.52 &-72:29:56.4& D\
OGLEGC246 & 1074.1 &1119.8 & 0:20:19.25 &-72:30:29.1& D\
OGLEGC247 & 1559.2 & 285.6 & 0:21:11.95 &-72:36:15.1& D\
OGLEGC248 & 1604.9 & 520.4 & 0:21:14.64 &-72:34:31.7& D\
OGLEGC249 & 698.3 & 12.0 & 0:22:46.26 &-72:38:49.3& E\
OGLEGC250 & 930.6 &1371.6 & 0:22:59.38 &-72:28:51.8& E\
OGLEGC251 & 542.1 &1778.9 & 0:22:19.39 &-72:26:07.2& E\
OGLEGC252 & 1171.0 & 909.6 & 0:23:25.73 &-72:32:04.6& E\
OGLEGC253 & 1863.2 & 873.3 & 0:24:32.62 &-72:31:57.2& E\
OGLEGC254 & 1629.1 &1072.4 & 0:24:08.63 &-72:30:38.8& E\
OGLEGC255 & 1540.6 &1294.4 & 0:23:58.50 &-72:29:05.4& E\
[cclll]{} Name & P &$V-I$& $V$ &$A_{\rm V}$\
OGLEGC& day & & mean &\
212 & 0.6946& 0.63& 19.5 & 0.8\
213 & 0.6329& 0.56& 19.8 & 0.4\
216 & 0.3617& 0.47& 19.9 & 0.4\
223 & 0.2971& 0.33& 17.6 & 0.45\
226 & 0.6474& ? & 19.4 & 0.45\
232 & 0.3635& 0.53& 19.5 & 0.5\
234 & 0.6159& 0.79& 19.55& 0.6\
235 & 0.5317& 0.42& 19.8 & 0.6\
236 & 0.5083& 0.77& 19.8 & 0.3\
243 & 0.6255& 0.56& 19.8 & 0.55\
246 & 0.5719& 0.80& 19.65& 0.8\
247 & 0.5115& 0.51& 19.9 & 0.65\
255 & 0.5251& 0.50& 19.8 & 1.0\
[rllrrr]{} Name & Type & Period& $V-I$ & $V{\rm max}$ & $V{\rm min}$\
OGLEGC& & days & & &\
214& Ecl& 0.2737& 0.82& 17.96 & 18.34\
215& & 8.666 & 1.14& 16.56 & 16.68\
218& & ? & 1.69& 15.80 & 16.17\
219& K & 36.05 & 1.08& 15.28 & 15.46\
220& K & 10.69 & 1.03& 16.265& 16.34\
221& Ecl& 0.3135& 0.79& 17.78 & 18.22\
222& K & 18.93 & 0.95& 16.62 & 16.80\
225& Ecl& 0.2346& 1.04& 19.47 & 20.0\
227& Ecl& 0.3788& 0.52& 16.49 & 16.77\
228& Ecl& 1.1504& 0.34& 15.90 & 16.30\
229& K & 8.378 & 1.06& 14.92 & 15.05\
230& & 4.814 & 1.23& 17.51 & 17.71\
231& K & 6.498 & 0.93& 14.225& 14.325\
233& & 28.69 & 1.45& 16.55 & 16.72\
237& K & 18.80 & 0.85& 16.87 & 16.95\
238& Ecl& 0.2506& 0.77& 18.46 & 18.80\
239& & ? & 1.53& 16.58 & 16.67\
240& Ecl& 4.3158 & 0.00& 19.93 & 20.65\
241& & ? & 1.67& 16.72 & 16.83\
242& & ? & 2.48& 16.55 & 17.42\
244& Ecl& 0.3837& 0.51& 16.16 & 16.38\
245& Ecl& 0.2789& 0.69& 15.49 & 15.87\
248& & 1.9967?& 1.26& 17.55:& ?\
249& Ecl& 0.3226& 0.64& 17.33 & 17.66\
250& Ecl& 0.3514& 0.43& 16.34 & 16.56\
251& & 3.4629 & 1.12& 16.56 & 16.87\
252& & ? & 2.90& 17.04 & 16.68\
253& Ecl& 0.4462& 0.57& 16.77 & 17.12\
254& & ? & 1.81& 16.47 & 16.62\
[crrrr]{} Range & Field 104B & Field 104B & Field 104E & Field 104E\
of $V$ & Case-I & Case-II & Case-I & Case-II\
16.0-17.0& 89 & 90 & 99 & 99\
17.0-18.0& 88 & 94 & 90 & 94\
18.0-18.5& 81 & 96 & 89 & 94\
18.5-19.0& 74 & 89 & 73 & 92\
19.0-19.5& 52 & 88 & 35 & 88\
[rrrcr]{} Frame & Field &$T_{\rm exp}$ & Filter & FWHM\
& & sec & & arcsec\
mr5228 & 104A& 420 &V & 1.1\
mr5176 & 104A& 120 &V & 1.2\
mr8181 & 104A& 60 &V & 1.05\
mr5382 & 104A& 400 &I & 1.2\
mr5381 & 104A& 120 &I & 1.25\
mr8182 & 104A& 10 &I & 1.35\
mr5227 & 104B & 420 & V& 1.0\
mr5177 & 104B & 120 & V& 1.4\
mr8184 & 104B& 60 & V& 1.0\
mr5385 & 104B& 400 & I& 1.3\
mr5386 & 104B& 120 & I& 1.45\
mr8183 & 104B& 10 & V& 1.0\
mr7889 & 104C& 500 & V& 1.0\
mr7902 & 104C& 50 & V& 1.0\
mr7903 & 104C & 500 & I& 1.0\
mr7904 & 104C& 50 & I& 1.05\
mr14597& 104D & 420 & V& 1.05\
mr14589& 104D& 61 & V& 1.05\
mr14592& 104D& 420 & I& 1.20\
mr14591& 104D& 60 & I& 1.15\
mr14595& 104E& 420 & V& 1.1\
mr14596& 104E& 60 & V& 1.0\
mr14593& 104E& 420 & I& 1.1\
mr14594& 104E& 60 & I& 1.1\
Fig. 1 – A schematic chart showing the location of fields 104A-E. The cluster center is marked with a cross. Each field covers $14.7\times 14.7$ arcmin$^{2}$. North is up and east is to the left.
Fig. 2 – Phased $V$ light curves for RR Lyr stars from the SMC. Inserted labels give the names of the variables.
Fig. 3 – Phased $V$ light curve for the halo RR Lyr star OGLEGC223.
Fig. 4 – Phased $V$ light curves for the variables listed in Table 6. Inserted labels give the names of the variables and their periods in days.
Fig. 4a – Phased $V$ light curves for the variables listed in Table 6. Inserted labels give the names of the variables and their periods in days.
Fig. 5 – Time domain light curves for variables with unknown periods. Light curves for the 1993 and 1994 seasons are shown for OGLEGC218.
Fig. 6 – A schematic CMD for 47 Tuc with the positions of the variables from fields A-E marked. The triangles represent certain eclipsing binaries, the asterisks RR Lyr stars, and the open circles the remaining variables. Positions of stars from Table 6 are labeled.
Fig. 7 – Period vs. apparent distance modulus diagram for contact binaries from the field of 47 Tuc. A horizontal line at $(m-M)_{\rm V}=13.21$ corresponds to the distance modulus of the cluster. Error bars correspond to the formal uncertainty in the absolute magnitudes derived using Rucinski’s (1995) calibration.
Fig. 8 – The CMDs for fields 104A (left) and 104E (right).
[^1]: The photometric data presented in this paper are available in electronic form at the CDS, via ftp 130.79.128.5
[^2]: Based on observations collected at the Las Campanas Observatory of the Carnegie Institution of Washington.
[^3]: The OGLE project is currently conducted, under the name OGLE-2, using a dedicated 1.3-m telescope located at Las Campanas Observatory
[^4]: In fact variability of OGLEGC212, 213, 214, 216, 243, 245 and 246 was reported recently by Kaluzny et al. (1997c). These authors surveyed a western part of the cluster covering a region overlapping with fields 104A and 104C. It is encouraging that all but one variable from Kaluzny et al. (1997c) were recovered in the current study. We missed variable V8 which is very faint and was not included on the list of template stars for the field 104A.
[^5]: The OGLE data (FITS images) are accessible to astronomical community from the NASA NSS Data Center. Send e-mail to: [email protected] with the subject line: REQUEST OGLE ALL and put requested frame numbers (in the form MR00NNNN where NNNN stands for frame number according to OGLE notation), one per line, in the body of the message. Requested frames will be available using an “anonymous ftp” service from nssdc.gfc.nasa.gov host in location shown in the return e-mail message from [email protected]
---
abstract: 'A Helson matrix (also known as a multiplicative Hankel matrix) is an infinite matrix with entries $\{a(jk)\}$ for $j,k\geq1$, so that the $(j,k)$’th term depends only on the product $jk$. We study the self-adjoint Helson matrix corresponding to the particular sequence $a(j)=(\sqrt{j}\log j(\log\log j)^\alpha)^{-1}$, $j\geq 3$, where $\alpha>0$, and prove that it is compact and that its eigenvalues obey the asymptotics $\lambda_n\sim\varkappa(\alpha)/n^\alpha$ as $n\to\infty$, with an explicit constant $\varkappa(\alpha)$. We also establish some intermediate results (of independent interest) which give a connection between the spectral properties of a Helson matrix and those of its continuous analogue, which we call the integral Helson operator.'
address: 'Department of Mathematics, King’s College London, Strand, London WC2R 2LS, United Kingdom'
author:
- Nazar Miheisi
- Alexander Pushnitski
title: A Helson matrix with explicit eigenvalue asymptotics
---
Introduction {#sec.a}
============
Background: Hankel matrices
---------------------------
We start our discussion by recalling relevant facts from the theory of Hankel matrices. Let $\{b(j)\}_{j=0}^\infty$ be a sequence of complex numbers. A *Hankel matrix* is an infinite matrix of the form $$H(b)=\{b(j+k)\}_{j,k=0}^\infty,$$ considered as a linear operator in $\ell^2({{\mathbb Z}}_+)$, ${{\mathbb Z}}_+=\{0,1,2,\dots\}$. One of the key examples of Hankel matrices is the *Hilbert matrix*, which corresponds to the choice $b(j)=1/(j+1)$. It is well known that the Hilbert matrix is bounded (but not compact). From the boundedness of the Hilbert matrix by a simple argument one obtains $$b(j)=o(1/j), \quad j\to\infty \quad \Rightarrow \quad H(b) \text{ is compact.}$$ A natural family of compact self-adjoint Hankel operators of this class was considered in [@PY1]. To state this result, we need some notation. For a compact self-adjoint operator $A$, let us denote by $\{\lambda_n^+(A)\}_{n=1}^\infty$ the non-increasing sequence of positive eigenvalues (enumerated with multiplicities taken into account), and let $\lambda_n^-(A)=\lambda_n^+(-A)$.
[@PY1] Let $b(j)$ be a sequence of real numbers defined by $$b(j)=1/(j(\log j)^\alpha),\quad j\geq2;$$ the choice of $b(0)$ and $b(1)$ (or of any finite number of $b(j)$) is not important. Then the eigenvalues of the Hankel matrix $H(b)$ have the asymptotic behaviour $$\lambda_n^+(H(b))=\frac{\varkappa(\alpha)}{n^\alpha}+o(n^{-\alpha}), \quad
\lambda_n^-(H(b))=O(n^{-\alpha-1}),
\quad n\to\infty,
\label{a0}$$ where $\varkappa(\alpha)$ is an explicit coefficient: $$\varkappa(\alpha)=2^{-\alpha}\pi^{1-2\alpha}B(\tfrac1{2\alpha},\tfrac12)^\alpha,
\label{a1}$$ and $B(\cdot,\cdot)$ is the standard Beta function.
For negative eigenvalues, this result is stated in a slightly weaker form in [@PY1]: $\lambda_n^-(H(b))=o(n^{-\alpha})$. However, following the logic of the proof of our main result below, it is easy to see that in fact the estimate $O(n^{-\alpha-1})$ is valid in Theorem A.
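As an informal numerical illustration (not part of the proof), one can form a finite $N\times N$ section of $H(b)$ and compare the scaled eigenvalues $n^\alpha\lambda_n^+$ with $\varkappa(\alpha)$; note that $\varkappa(1)=1/2$, since $B(\tfrac12,\tfrac12)=\pi$. Because of the logarithmic kernel and the truncation, the agreement at accessible matrix sizes is only rough. A minimal Python sketch, with $b(0)$ and $b(1)$ set to zero (an arbitrary choice that does not affect the asymptotics), is:

```python
import numpy as np
from scipy.special import beta

def kappa(alpha):
    # varkappa(alpha) = 2^{-alpha} * pi^{1 - 2 alpha} * B(1/(2 alpha), 1/2)^alpha
    return 2.0**(-alpha) * np.pi**(1.0 - 2.0 * alpha) * beta(1.0 / (2.0 * alpha), 0.5)**alpha

def truncated_hankel(alpha, N):
    # N x N section of H(b) with b(j) = 1/(j (log j)^alpha) for j >= 2
    b = np.zeros(2 * N - 1)                  # b(0) = b(1) = 0, an arbitrary choice
    j = np.arange(2, 2 * N - 1, dtype=float)
    b[2:] = 1.0 / (j * np.log(j)**alpha)
    return b[np.add.outer(np.arange(N), np.arange(N))]

alpha, N = 1.0, 2000
lam = np.sort(np.linalg.eigvalsh(truncated_hankel(alpha, N)))[::-1]
n = np.arange(1, 21)
print("kappa(alpha) =", kappa(alpha))
print("n^alpha * lambda_n^+ :", n**alpha * lam[:20])
```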
Helson matrices
---------------
In this paper, we consider an analogous question in the class of Helson matrices (also known as multiplicative Hankel matrices). These are infinite matrices of the form $$M(a)=\{a(jk)\}_{j,k=1}^\infty,$$ considered as linear operators in $\ell^2({{\mathbb N}})$. Here the $(j,k)$’th entry depends on the product of indices $jk$ rather than on the sum $j+k$. Helson matrices are a natural object in the theory of Hardy spaces of Dirichlet series, in the same way as Hankel matrices are naturally related to the theory of classical Hardy spaces. The study of Helson matrices was initiated in the pioneering paper [@Helson]; see also the book [@QQ] and a recent survey [@PerPu].
The *multiplicative Hilbert matrix* is a Helson matrix corresponding to the sequence $$a(j)=1/(\sqrt{j}\log j), \quad j\geq2$$ (there are variants of this definition, see [@PerPu2]; the notion has not yet become standardised). It is bounded but not compact, and its spectral properties are fully analogous to those of the classical Hilbert matrix, see [@BPSSV; @PerPu2]. Similarly to the Hankel case, it is not difficult to see that $$a(j)=o(1/(\sqrt{j}\log j)), \quad j\to\infty \quad \Rightarrow \quad M(a) \text{ is compact.}$$ In this paper, we consider a family of compact modifications of the multiplicative Hilbert matrix. Our main result is
\[thm.a1\] Let $\alpha>0$, and let $a(j)$ be the sequence of real numbers given by $$a(j)=1/(\sqrt{j}\log j(\log\log j)^\alpha)$$ for all sufficiently large $j$ (the choice of finitely many values $a(j)$ is not important). Then the Helson matrix $M(a)$ is compact and its sequence of eigenvalues obeys the asymptotics $$\lambda_n^+(M(a))=\frac{\varkappa(\alpha)}{n^\alpha}+o(n^{-\alpha}),
\quad
\lambda_n^-(M(a))=O(n^{-\alpha-1}),
\quad n\to\infty,
\label{a2}$$ where $\varkappa(\alpha)$ is given by .
Thus, we have a natural family of Helson matrices $M(a^{(\alpha)})$, parameterised by $\alpha$, such that $M(a^{(\alpha)})\in{\mathbf{S}}_p$ if and only if $p>1/\alpha$. Here ${\mathbf{S}}_p$ is the standard Schatten class, see Section \[sec.a6\] below.
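The same kind of heuristic check can be made for the Helson matrix of Theorem \[thm.a1\]: build the $N\times N$ section with entries $a(jk)$, where $a(j)=(\sqrt{j}\log j(\log\log j)^\alpha)^{-1}$ for $j\geq3$ and $a(1)=a(2)=0$, and inspect $n^\alpha\lambda_n^+$. For $\alpha=1$ these values should drift, slowly, towards $\varkappa(1)=1/2$; the Python sketch below (plain dense linear algebra, modest $N$) is illustrative only and makes no claim of quantitative accuracy.

```python
import numpy as np

def truncated_helson(alpha, N):
    # N x N section of M(a) with a(j) = 1/(sqrt(j) log(j) (log log j)^alpha), j >= 3
    a = np.zeros(N * N)                      # a(1) = a(2) = 0, an arbitrary choice
    j = np.arange(3, N * N + 1, dtype=float)
    a[2:] = 1.0 / (np.sqrt(j) * np.log(j) * np.log(np.log(j))**alpha)
    prod = np.outer(np.arange(1, N + 1), np.arange(1, N + 1))   # the products jk
    return a[prod - 1]

alpha, N = 1.0, 1000
lam = np.sort(np.linalg.eigvalsh(truncated_helson(alpha, N)))[::-1]
n = np.arange(1, 11)
print("n^alpha * lambda_n^+(M(a)) :", n**alpha * lam[:10])
```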
Below we describe the key ideas of the proof of Theorem \[thm.a1\]; some of them may be of independent interest. In order to do this, we need some definitions.
Integral Hankel and Helson operators
------------------------------------
First we recall the definition of a classical object: integral Hankel operators. For a complex valued *kernel function*, or more generally a distribution, ${{\mathbf b}}$ on ${{\mathbb R}}_+$, we denote by ${{\mathbf H}}({{\mathbf b}})$ the integral Hankel operator in $L^2({{\mathbb R}}_+)$, formally defined by $${{\mathbf H}}({{\mathbf b}}): f\mapsto \int_0^\infty {{\mathbf b}}(x+y)f(y)dy.$$ Clearly, integral Hankel operators are continuous analogues of Hankel matrices. Below we only consider bounded and compact Hankel operators. We use boldface font to denote integral operators (and their kernels).
Next, for a complex valued function or distribution ${{\mathbf a}}$ on $(1,\infty)$, let us consider an integral operator in $L^2(1,\infty)$, defined by $${{\mathbf M}}({{\mathbf a}}): f\mapsto \int_1^\infty {{\mathbf a}}(ts)f(s)ds, \quad t\geq1.$$ It will be convenient to call ${{\mathbf M}}({{\mathbf a}})$ an *integral Helson operator* (this is not a standard term). We regard ${{\mathbf M}}({{\mathbf a}})$ as a continuous analogue of the Helson matrix $M(a)$.
Observe that by an exponential change of variables, ${{\mathbf M}}({{\mathbf a}})$ reduces to an integral Hankel operator. More precisely, let $V$ be the unitary operator $$V: L^2({{\mathbb R}}_+)\to L^2(1,\infty),
\quad
(Vf)(t)=\frac1{\sqrt{t}}f(\log t),
\quad t>1,
\label{d0}$$ then $$V^*{{\mathbf M}}({{\mathbf a}})V={{\mathbf H}}({{\mathbf b}}), \quad {{\mathbf b}}(x)={{\mathbf a}}(e^x)e^{x/2}, \quad x>0.
\label{aa9}$$ Spectral theory of integral Hankel operators is very well developed, and below we will use some available results for eigenvalue estimates and asymptotics of such operators to deduce the corresponding statements for integral Helson operators.
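For the reader's convenience, we spell out the short computation behind this reduction. Using $(V^*g)(x)=e^{x/2}g(e^x)$ and substituting $s=e^y$ in the definition of ${{\mathbf M}}({{\mathbf a}})$, we find $$\begin{aligned}
(V^*{{\mathbf M}}({{\mathbf a}})Vf)(x)
&=e^{x/2}\int_1^\infty {{\mathbf a}}(e^x s)\,s^{-1/2}f(\log s)\,ds
=e^{x/2}\int_0^\infty {{\mathbf a}}(e^{x+y})e^{y/2}f(y)\,dy
\\
&=\int_0^\infty {{\mathbf a}}(e^{x+y})e^{(x+y)/2}f(y)\,dy
=({{\mathbf H}}({{\mathbf b}})f)(x),
\qquad {{\mathbf b}}(x)={{\mathbf a}}(e^x)e^{x/2}.\end{aligned}$$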
Note that although ${{\mathbf M}}({{\mathbf a}})$ can be reduced to an integral Hankel operator through the exponential change of variable $t=e^x$, no such “change of variable" exists on integers, and therefore in general there is no simple reduction of Helson matrices to Hankel matrices.
The strategy of the proof
-------------------------
Consider the integral Helson operator ${{\mathbf M}}({{\mathbf a}})$ with the kernel function ${{\mathbf a}}\in C^\infty([1,\infty))$ which satisfies $${{\mathbf a}}(t)=t^{-1/2}(\log t)^{-1}(\log \log t)^{-\alpha}, \quad t\geq t_0>e.
\label{a9}$$ Clearly, the sequence $a$ of Theorem \[thm.a1\] is the restriction of the function ${{\mathbf a}}$ onto ${{\mathbb N}}$ (up to finitely many terms). It will be convenient to have some notation for the operation of restriction onto integers. If ${{\mathbf a}}$ is a continuous function on $(1,\infty)$, let $r({{\mathbf a}})$ denote the sequence $$r({{\mathbf a}})(j)=
\begin{cases}
0,\quad j=1, \\
{{\mathbf a}}(j), \quad j\ge 2.
\end{cases}
\label{a12}$$
*We will prove that the operators ${{\mathbf M}}({{\mathbf a}})$ and $M(r({{\mathbf a}}))$ have the same leading order asymptotics of both positive and negative eigenvalues.* This reduces the question to the spectral analysis of ${{\mathbf M}}({{\mathbf a}})$. Further, as already discussed, relation reduces the spectral analysis of ${{\mathbf M}}({{\mathbf a}})$ to that of the integral Hankel operator ${{\mathbf H}}({{\mathbf b}})$ with $${{\mathbf b}}(x)=e^{x/2}{{\mathbf a}}(e^x)=x^{-1}(\log x)^{-\alpha}, \quad x\geq x_0>1.
\label{a8}$$ This two step reduction procedure can be illustrated by the diagram $$M(r({{\mathbf a}}))\quad\to\quad{{\mathbf M}}({{\mathbf a}})\quad\to\quad{{\mathbf H}}({{\mathbf b}}).
\label{a7}$$ The integral operator ${{\mathbf H}}({{\mathbf b}})$ is a continuous analogue of the Hankel matrix in Theorem A. The eigenvalues of ${{\mathbf H}}({{\mathbf b}})$ satisfy the same asymptotic relation as , i.e. $$\lambda_n^+({{\mathbf H}}({{\mathbf b}}))=\frac{\varkappa(\alpha)}{n^\alpha}+o(n^{-\alpha}), \quad
\lambda_n^-({{\mathbf H}}({{\mathbf b}}))=O(n^{-\alpha-1}),
\quad n\to\infty,
\label{a11}$$ where $\varkappa(\alpha)$ is the same as in ; this is again a result of [@PY1]. Thus, reduction together with will yield a proof of Theorem \[thm.a1\].
Further details and the structure of the paper {#sec.a5}
----------------------------------------------
While the second reduction in is straightforward, the first reduction is technically a little more involved; we proceed to explain it. We split the sequence $a$ into two terms $$a(j)=a_0(j)+a_1(j).$$ Here $a_0$ is a sequence which has the same asymptotics as $a$, but is given by a convenient integral representation; $a_1$ is the error term. More precisely, let us describe the choice of $a_0$.
We use the fact that (see [@Erdelyi]) for any $0<c<1$, one has the Laplace transform asymptotics $$\int_0^c {\lvert\log \lambda\rvert}^{-\alpha}e^{-x\lambda}d\lambda=x^{-1}(\log x)^{-\alpha}
\bigl(1+O((\log x)^{-1})\bigr), \quad x\to\infty.$$ Substituting $x=\log t$ and multiplying by $t^{-1/2}$, we obtain $$\int_0^c {\lvert\log \lambda\rvert}^{-\alpha}t^{-\frac12-\lambda}d\lambda
=
t^{-1/2}(\log t)^{-1}(\log \log t)^{-\alpha}\bigl(1+O((\log\log t)^{-1})\bigr),
\quad t\to\infty.$$
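As a quick numerical sanity check of this asymptotic relation, one can evaluate the left-hand side by quadrature and compare it with the right-hand side. The Python sketch below (with $\alpha=1$ and an arbitrary cut-off $c=1/2$) is purely illustrative; since the relative error is only $O((\log\log t)^{-1})$, the ratio approaches $1$ very slowly.

```python
import numpy as np
from scipy.integrate import quad

alpha, c = 1.0, 0.5                      # any fixed 0 < c < 1

def lhs(t):
    # int_0^c |log(lam)|^{-alpha} * t^{-1/2 - lam} d(lam)
    val, _ = quad(lambda lam: abs(np.log(lam))**(-alpha) * t**(-0.5 - lam), 0.0, c)
    return val

def rhs(t):
    return t**(-0.5) / (np.log(t) * np.log(np.log(t))**alpha)

for t in [1e3, 1e6, 1e12, 1e24]:
    print("t = %.0e   lhs/rhs = %.3f" % (t, lhs(t) / rhs(t)))
```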
Now let $w(\lambda)={\lvert\log\lambda\rvert}^{-\alpha}\chi(\lambda)$, where $\chi\in C^\infty({{\mathbb R}}_+)$ is a non-negative function such that $\chi(\lambda)=1$ for all sufficiently small $\lambda>0$ and $\chi(\lambda)=0$ for $\lambda\geq1$. We set $${{\mathbf a}}_0(t)=\int_0^\infty t^{-\frac12-\lambda}w(\lambda)d\lambda, \quad
{{\mathbf a}}_1(t)={{\mathbf a}}(t)-{{\mathbf a}}_0(t), \quad
t>1,
\label{aa8}$$ where the function ${{\mathbf a}}$ is given by . Then, by the above calculation, $${{\mathbf a}}_1(t)=O(t^{-1/2}(\log t)^{-1}(\log \log t)^{-\alpha-1}), \quad t\to\infty.$$ Further, with the notation , we set $a_0=r({{\mathbf a}}_0)$ and $a_1=r({{\mathbf a}}_1)$.
In Section \[sec.c\], we will prove that $M(a_0)$ is unitarily equivalent to ${{\mathbf M}}({{\mathbf a}}_0)$, up to a negligible term, and as a consequence, the spectral asymptotics of these two operators coincide to all orders. In fact, we will prove a more general statement (see Theorem \[thm.cc1\]): if ${{\mathbf a}}_0$ is given by the integral representation then, for a fairly general class of weights $w$, the Helson integral operator ${{\mathbf M}}({{\mathbf a}}_0)$ is unitarily equivalent to the Helson matrix $M(r({{\mathbf a}}_0))$, up to a negligible term.
In Section \[sec.d\], we will reduce the spectral estimates for $M(a_1)$ to those for ${{\mathbf M}}({{\mathbf a}}_1)$. More precisely, in Theorem \[thm.a2\] *we prove that the linear operator ${{\mathbf M}}({{\mathbf a}})\mapsto M(r({{\mathbf a}}))$ is bounded in Schatten classes ${\mathbf{S}}_p$ for $0<p\leq1$, i.e. one has the estimate* $${\lVertM(r({{\mathbf a}}))\rVert}_{{\mathbf{S}}_p}\leq C_p{\lVert{{\mathbf M}}({{\mathbf a}})\rVert}_{{\mathbf{S}}_p}, \quad 0<p\leq 1.$$ This statement might be of an independent interest. By using real interpolation, we obtain the implication $$s_n({{\mathbf M}}({{\mathbf a}}))=O(n^{-\alpha-1}), \quad n\to\infty
\quad\Rightarrow\quad
s_n(M(r({{\mathbf a}})))=O(n^{-\alpha-1}), \quad n\to\infty,$$ for any $\alpha>0$, where $s_n$ are singular values (see Section \[sec.a6\] below).
Thus, using somewhat different technical tools, we reduce the analysis of both Helson matrices $M(a_0)$ and $M(a_1)$ to the corresponding integral Helson operators ${{\mathbf M}}({{\mathbf a}}_0)$ and ${{\mathbf M}}({{\mathbf a}}_1)$. Next, we set, as in , $${{\mathbf b}}_i(x)=e^{x/2}{{\mathbf a}}_i(e^x), \quad i=0,1,
\label{aa10}$$ and use the available results from [@PY1; @PY2] which give $$\begin{aligned}
\lambda_n^+({{\mathbf H}}({{\mathbf b}}_0))
&=
\varkappa(\alpha)n^{-\alpha}+o(n^{-\alpha}), \quad n\to\infty,
\\
s_n({{\mathbf H}}({{\mathbf b}}_1))
&=
O(n^{-\alpha-1}), \quad n\to\infty\end{aligned}$$ (we also have ${{\mathbf H}}({{\mathbf b}}_0)\geq0$ and so $\lambda_n^-({{\mathbf H}}({{\mathbf b}}_0))=0$ for all $n$). Finally, in Section \[sec.e\], we use standard spectral stability results to combine these two relations to complete the proof of Theorem \[thm.a1\].
We represent this refined explanation of our proof by the following diagram: $$\begin{gathered}
M(a)=M(a_0)+M(a_1);
\\
{\begin{split}
M(a_0)\to {{\mathbf M}}({{\mathbf a}}_0)\to {{\mathbf H}}({{\mathbf b}}_0)&\to \text{\cite{PY1}: asymptotics}
\\
M(a_1)\to {{\mathbf M}}({{\mathbf a}}_1)\to {{\mathbf H}}({{\mathbf b}}_1)&\to \text{\cite{PY2}: estimates}
\end{split}} \biggr\}\text{(stability)}
\Rightarrow \text{Theorem~\ref{thm.a1}.}\end{gathered}$$
Notation: Schatten classes {#sec.a6}
--------------------------
We denote by $\{s_n(A)\}_{n=1}^\infty$ the non-increasing sequence of the singular values of a compact operator $A$, i.e. $s_n(A)=\lambda_n^+(\sqrt{A^*A})$. Recall that for $0<p<\infty$, the Schatten class ${\mathbf{S}}_p$ consists of all compact operators $A$ such that $${\lVertA\rVert}_{{\mathbf{S}}_p}:=\left(\sum_{n=1}^\infty s_n(A)^p\right)^\frac{1}{p}
<\infty.$$ We will write ${\mathbf{S}}_\infty$ to denote the class of compact operators. Observe that ${\lVertA\rVert}_{{\mathbf{S}}_p}$ is a norm for $p\geq1$ and a quasi-norm for $0<p<1$. For $0<p<1$, the usual triangle inequality fails in ${\mathbf{S}}_p$ but the following “modified triangle inequality" holds: $${\lVertA+B\rVert}_{{\mathbf{S}}_p}^p\leq {\lVertA\rVert}_{{\mathbf{S}}_p}^p+{\lVertB\rVert}_{{\mathbf{S}}_p}^p, \quad 0<p<1, \quad A,B\in {\mathbf{S}}_p.
\label{a10}$$
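In numerical experiments, these quasi-norms are conveniently computed from the singular values; the following minimal Python sketch (dense SVD, with random matrices chosen purely for illustration) also exhibits the modified triangle inequality above for $p=1/2$.

```python
import numpy as np

def schatten_norm(A, p):
    # ||A||_{S_p} = (sum_n s_n(A)^p)^(1/p), computed from the singular values
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s**p) ** (1.0 / p)

rng = np.random.default_rng(1)
A, B = rng.normal(size=(50, 50)), rng.normal(size=(50, 50))
p = 0.5
lhs = schatten_norm(A + B, p)**p
rhs = schatten_norm(A, p)**p + schatten_norm(B, p)**p
print(lhs <= rhs)                        # the modified triangle inequality holds
```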
For $0<p<\infty$ and $0<q\le \infty$, the Schatten-Lorentz class ${\mathbf{S}}_{p,q}$ consists of all compact operators $A$ such that $${\lVertA\rVert}_{{\mathbf{S}}_{p,q}}:=
\begin{dcases}
\left(\sum_{n=1}^\infty s_n(A)^q(1+n)^{q/p -1}\right)^{\frac{1}{q}}
<\infty, \quad q<\infty, \\
\sup_{n\in{{\mathbb N}}} (1+n)^{1/p}s_n(A)<\infty, \quad q=\infty.
\end{dcases}$$ It is evident that ${\mathbf{S}}_{p,p} = {\mathbf{S}}_p$ for every $0<p<\infty$. The classes ${\mathbf{S}}_{p,\infty}$ are known as weak Schatten classes and have the property that $A\in {\mathbf{S}}_{p,\infty}$ if and only if $s_n(A)=O(n^{-1/p})$, $n\to\infty$.
We denote ${\mathbf{S}}_0=\cap_{p>0} {\mathbf{S}}_p$. This is the class of all operators $A$ such that $s_n(A)=O(n^{-c})$ as $n\to\infty$ for any $c>0$.
Notation: unitary equivalence modulo kernels
--------------------------------------------
If $A_j$ is a bounded operator in a Hilbert space ${{\mathcal H}}_j$ for $j=1,2$, we will say that $A_1$ and $A_2$ are unitarily equivalent modulo kernels and write $A_1\approx A_2$, if the operators $$A_1|_{(\operatorname{Ker}A_1)^\perp}
\quad\text{ and }\quad
A_2|_{(\operatorname{Ker}A_2)^\perp}$$ are unitarily equivalent. It is well known that for any bounded operator $A$ (acting from a Hilbert space to a possibly different Hilbert space), one has $$A^*A\approx AA^*.
\label{b8}$$ We will frequently use this relation in the following situation: if $A$ is compact, then implies that $s_n(A^*A)=s_n(AA^*)$ for all $n$.
Acknowledgements
----------------
We are grateful to K. Seip and H. Queffélec for stimulating discussions, and to J. Partington for help with the relevant literature.
${{\mathbf M}}({{\mathbf a}})\approx M(r({{\mathbf a}}))$ up to error term {#sec.c}
==========================================================================
Overview
--------
In this section, we prove
\[thm.cc1\] Let $w$ be a non-negative bounded function on ${{\mathbb R}}_+$ with bounded support. Let $${{\mathbf a}}(t)=\int_0^\infty t^{-\frac12-\lambda}w(\lambda)d\lambda, \quad t>1,
\label{cc1}$$ and let $a(j)={{\mathbf a}}(j)$, $j\in{{\mathbb N}}$. Then we have $M(a)\geq0$ and ${{\mathbf M}}({{\mathbf a}})\geq0$. Further, there exist self-adjoint operators $A$ and $B$ in $L^2({{\mathbb R}}_+)$ such that $$M(a)\approx A, \quad {{\mathbf M}}({{\mathbf a}})\approx B, \quad A-B\in {\mathbf{S}}_0.$$
In combination with standard results on the stability of spectral asymptotics, Theorem \[thm.cc1\] shows that if ${{\mathbf M}}({{\mathbf a}})$ and $M(a)$ are compact, then the eigenvalue asymptotics of these operators coincide to all orders. This is precisely what we need in our setting — see Section \[sec.e\].
Although our primary interest in this paper is in compact Helson matrices, Theorem \[thm.cc1\] can be used in the non-compact context as well. Indeed, in combination with the Weyl theorem on the invariance of the essential spectrum under compact perturbations, this result shows that the non-zero parts of the essential spectra of ${{\mathbf M}}({{\mathbf a}})$ and $M(a)$ coincide. Similarly, in combination with the Kato-Rosenblum theorem, it shows that the absolutely continuous parts of ${{\mathbf M}}({{\mathbf a}})$ and $M(a)$ are unitarily equivalent. Variants of this reasoning have been used in [@BPSSV; @PerPu2] in order to analyse the multiplicative Hilbert matrix.
Reduction to weighted integral Hankel operator
----------------------------------------------
We start by recalling a theorem from [@PerPu] which establishes a unitary equivalence modulo kernels between a Helson matrix $M(a)$, where $a$ has an integral representation of the type , and a weighted integral Hankel type operator $w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}(\cdot+1))w^{1/2}$ with the integral kernel $$w(x)^{1/2}{{\bm{\zeta}}}(x+y+1)w(y)^{1/2}, \quad x,y>0$$ in $L^2({{\mathbb R}}_+)$, where ${{\bm{\zeta}}}$ is the Riemann zeta function.
\[lma.c1\] Let $w\in L^\infty({{\mathbb R}})\cap L^1({{\mathbb R}})$ be a non-negative function, and let $$a(j)=\int_0^\infty j^{-\frac12-\lambda}w(\lambda)d\lambda, \quad j\geq1.$$ Then the Helson matrix $M(a)$ is a bounded non-negative operator on $\ell^2({{\mathbb N}})$. Let ${{\bm{\zeta}}}_1(x)={{\bm{\zeta}}}(x+1)$. Then $w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}$ is bounded on $L^2({{\mathbb R}})$ and $$M(a)\approx w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}.$$
This was proven in [@PerPu], but for completeness we repeat the proof.
First let us check that $w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}$ is bounded. Recall that the Carleman operator ${{\mathbf H}}(1/x)$, i.e. the integral Hankel operator with the kernel function ${{\mathbf b}}(x)=1/x$, is bounded on $L^2({{\mathbb R}}_+)$ and has norm $\pi$. Next, we have an elementary estimate $$0\leq {{\bm{\zeta}}}(x+1)-1=\sum_{j=2}^\infty j^{-x-1}\leq \int_1^\infty \frac{dt}{t^{x+1}}=\frac1x,$$ and so for ${{\mathbf b}}(x)={{\bm{\zeta}}}(x+1)-1$, we obtain the estimate ${\lVert{{\mathbf H}}({{\mathbf b}})\rVert}\leq \pi$. Further, we have $$w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}=w^{1/2}{{\mathbf H}}({{\mathbf b}})w^{1/2}+(\cdot,w^{1/2})w^{1/2},$$ where the second term denotes the rank one operator in $L^2({{\mathbb R}}_+)$ with the integral kernel $w(x)^{1/2}w(y)^{1/2}$. Since $w\in L^1({{\mathbb R}})\cap L^\infty({{\mathbb R}})$, both terms here are bounded: $${\lVertw^{1/2}{{\mathbf H}}({{\mathbf b}})w^{1/2}\rVert}\leq \pi{\lVertw^{1/2}\rVert}_{L^\infty}^2=\pi{\lVertw\rVert}_{L^\infty},
\quad
{\lVert(\cdot,w^{1/2})w^{1/2}\rVert}={\lVertw^{1/2}\rVert}_{L^2}^2={\lVertw\rVert}_{L^1}.$$ We obtain that $w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}$ is bounded.
Next, consider the operator $${\mathcal{N}}: L^2({{\mathbb R}}_+)\to \ell^2({{\mathbb N}}),
\quad
f\mapsto\biggl\{\int_0^\infty j^{-x-\frac12}w(x)^{1/2}f(x)dx\biggr\}_{j=1}^\infty,$$ defined initially on the dense set of functions $f\in L^2({{\mathbb R}}_+)$ with support separated away from zero. We claim that ${\mathcal{N}}$ is bounded and $w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}={\mathcal{N}}^*{\mathcal{N}}$. This is a direct calculation: $$\begin{aligned}
({\mathcal{N}}f_1,{\mathcal{N}}f_2)_{\ell^2({{\mathbb N}})}
&=
\sum_{j=1}^\infty
\int_0^\infty \int_0^\infty j^{-1-x-y}w(x)^{1/2}w(y)^{1/2}f_1(x)\overline{f_2(y)}dx \, dy
\\
&=
\int_0^\infty {{\bm{\zeta}}}(x+y+1)w(x)^{1/2}w(y)^{1/2}f_1(x)\overline{f_2(y)}dx\, dy
\\
&=
(w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2} f_1,f_2)_{L^2({{\mathbb R}}_+)},\end{aligned}$$ which proves our claim.
Further, let us compute the adjoint ${\mathcal{N}}^*$: $${\mathcal{N}}^*: \ell^2({{\mathbb N}})\to L^2({{\mathbb R}}_+), \quad
u=\{u_j\}_{j=1}^\infty \mapsto w(x)^{1/2}\sum_{j=1}^\infty u_j j^{-\frac12-x},
\quad
x>0.$$ Then for $u,v\in \ell^2({{\mathbb N}})$ we have $$\begin{gathered}
({\mathcal{N}}^* u,{\mathcal{N}}^*v)_{L^2({{\mathbb R}}_+)}
=
\int_0^\infty w(x) \biggl(\sum_{j,k=1}^\infty (jk)^{-\frac12-x}u_j\overline{v_k}\biggr)dx
\\
=
\sum_{j,k=1}^\infty a(jk) u_j\overline{v_k}
=
(M(a)u,v)_{\ell^2({{\mathbb N}})}.\end{gathered}$$ This calculation proves that $M(a)$ is bounded and $M(a)={\mathcal{N}}{\mathcal{N}}^*$.
To summarise: for a bounded operator ${\mathcal{N}}$, we have proven the identities $$w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}={\mathcal{N}}^*{\mathcal{N}}, \quad M(a)={\mathcal{N}}{\mathcal{N}}^*.$$ This shows that $w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}\approx M(a)$, as required.
Reduction to a weighted Carleman operator
-----------------------------------------
\[lma.cc4\] Let $w$ be as in Theorem \[thm.cc1\]. Then $$w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}
-
w^{1/2}{{\mathbf H}}(1/x)w^{1/2}\in{\mathbf{S}}_0.$$
We will need one well-known statement: if ${{\mathbf b}}$ is a restriction of a Schwartz class function onto ${{\mathbb R}}_+$, then the integral Hankel operator ${{\mathbf H}}({{\mathbf b}})$ is in ${\mathbf{S}}_0$. This fact follows easily from Theorem \[thm.b3\] below.
*Step 1:* First we would like to replace ${{\bm{\zeta}}}(1+x)$ in the integral kernel of $w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}$ by a simpler function ${{\mathbf{h}}}$ with the same singularity at $x=0$. Let $\beta >0$ be sufficiently large so that $\operatorname{supp}w\subset[0,\beta]$; we choose $${{\mathbf{h}}}(x)=e^{-\beta x}/x, \quad x>0.$$ Our aim at this step is to prove that the error term arising through this replacement is negligible, i.e. $$w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}-w^{1/2}{{\mathbf H}}({{\mathbf{h}}})w^{1/2}\in {\mathbf{S}}_0.$$ Since the zeta function ${{\bm{\zeta}}}(z)$ has a simple pole at $z=1$ with residue one and converges to $1$ as $O(2^{-z})$ when $z\to+\infty$, we conclude that the function $${\widetilde}{{\mathbf{h}}}(x)={{\bm{\zeta}}}_1(x)-{{\mathbf{h}}}(x)-1, \quad x>0,$$ is a restriction of a Schwartz class function onto ${{\mathbb R}}_+$. It follows that ${{\mathbf H}}({\widetilde}{{\mathbf{h}}})\in{\mathbf{S}}_0$. Thus, $$w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}-w^{1/2}{{\mathbf H}}({{\mathbf{h}}})w^{1/2}
=
w^{1/2}{{\mathbf H}}({\widetilde}{{\mathbf{h}}})w^{1/2}+(\cdot,w^{1/2})w^{1/2}\in{\mathbf{S}}_0;$$ here the last term is the rank one operator with the integral kernel $w(x)^{1/2}w(y)^{1/2}$.
*Step 2:* Now it remains to prove that $$w^{1/2}{{\mathbf H}}(1/x) w^{1/2}
-
w^{1/2}{{\mathbf H}}({{\mathbf{h}}}) w^{1/2}
\in{\mathbf{S}}_0.
\label{cc3}$$ Let ${\mathcal{L}}$ be the Laplace transform in $L^2({{\mathbb R}}_+)$, $${\mathcal{L}}[f](x)=\int_0^\infty e^{-x\lambda}f(\lambda)d\lambda.$$ Observe that $1/x-{{\mathbf{h}}}(x)={\mathcal{L}}[{\mathbbm{1}}_{(0,\beta)}](x)$, where ${\mathbbm{1}}_{(0,\beta)}$ is the characteristic function of the interval $(0,\beta)$. Thus, the operator in can be written as $w^{1/2}{\mathcal{L}}{\mathbbm{1}}_{(0,\beta)}{\mathcal{L}}w^{1/2}$.
Since $w$ is bounded and $w^{1/2}=w^{1/2}{\mathbbm{1}}_{(0,\beta)}$, it suffices to prove that ${\mathbbm{1}}_{(0,\beta)}{\mathcal{L}}{\mathbbm{1}}_{(0,\beta)}\in{\mathbf{S}}_0$. Let $U$ be the unitary operator $$U: L^2(0,\beta)\to L^2({{\mathbb R}}_+), \quad
Uf(x)=\sqrt{\beta}e^{-x/2}f(\beta e^{-x}), \quad x>0.$$ A straightforward calculation shows that $$U{\mathbbm{1}}_{(0,\beta)}{\mathcal{L}}{\mathbbm{1}}_{(0,\beta)} U^*={{\mathbf H}}({{\mathbf k}}),$$ where the kernel function ${{\mathbf k}}$ is given by $${{\mathbf k}}(x)=\beta e^{-x/2}\exp(-\beta^2e^{-x}), \quad x>0.$$ Clearly, ${{\mathbf k}}$ is a Schwartz class function (to be precise, a restriction of a Schwartz class function onto the positive half-axis). Thus, ${{\mathbf H}}({{\mathbf k}})\in{\mathbf{S}}_0$ and so, by unitary equivalence, we obtain ${\mathbbm{1}}_{(0,\beta)}{\mathcal{L}}{\mathbbm{1}}_{(0,\beta)}\in{\mathbf{S}}_0$.
Reduction to ${{\mathbf M}}({{\mathbf a}})$ and completing the proof
--------------------------------------------------------------------
\[lma.cc5\] Let $w\in L^\infty\cap L^1$. Then $w^{1/2}{{\mathbf H}}(1/x)w^{1/2}\geq0$ and $$w^{1/2}{{\mathbf H}}(1/x)w^{1/2}\approx {{\mathbf M}}({{\mathbf a}}),$$ with ${{\mathbf a}}$ as in .
This argument is well known in the context of integral Hankel operators. We have $$w^{1/2}{{\mathbf H}}(1/x)w^{1/2}=w^{1/2}{\mathcal{L}}{\mathcal{L}}w^{1/2}=(w^{1/2}{\mathcal{L}})(w^{1/2}{\mathcal{L}})^*
\approx
(w^{1/2}{\mathcal{L}})^*(w^{1/2}{\mathcal{L}})
={{\mathbf H}}({{\mathbf b}}),$$ where ${{\mathbf b}}={\mathcal{L}}[w]$. Now it remains to observe that, with $V$ as in , we have, by , $$V{{\mathbf H}}({{\mathbf b}})V^*={{\mathbf M}}({{\mathbf a}}),$$ with $${{\mathbf a}}(t)=t^{-1/2}{{\mathbf b}}(\log t)
=
t^{-1/2}\int_0^\infty e^{-\lambda\log t}w(\lambda) d\lambda
=
\int_0^\infty t^{-1/2-\lambda}w(\lambda)d\lambda,$$ as required.
Combining Lemmas \[lma.c1\], \[lma.cc4\] and \[lma.cc5\], we obtain the required statement with $A=w^{1/2}{{\mathbf H}}({{\bm{\zeta}}}_1)w^{1/2}$ and $B=w^{1/2}{{\mathbf H}}(1/x)w^{1/2}$.
The map ${{\mathbf M}}({{\mathbf a}})\mapsto M(r({{\mathbf a}}))$ is bounded in ${\mathbf{S}}_p$ for $p\leq1$ {#sec.d}
=============================================================================================================
Overview
--------
Below for ${{\mathbf M}}({{\mathbf a}})\in{\mathbf{S}}_p$, $0<p\leq1$, we will associate with ${{\mathbf a}}$ its restriction $r({{\mathbf a}})$. In order for this restriction to make sense, we need a preliminary statement, the proof of which is given in Section \[sec.d3\]:
\[lma.d0\] If ${{\mathbf M}}({{\mathbf a}})\in{\mathbf{S}}_1$, then the kernel function ${{\mathbf a}}(t)$ is continuous in $t>1$.
Before continuing, let us fix some notation: throughout the remainder of the paper, $C_p$ (or occasionally $C'_p$) will denote a constant which depends only on $p$ but whose precise value may change from line to line.
Our main result in this section is:
\[thm.a2\]
1. Assume that ${{\mathbf M}}({{\mathbf a}})$ is bounded and belongs to the Schatten class ${\mathbf{S}}_p$ with $0<p\leq1$. Then $M(r({{\mathbf a}}))$ is also in ${\mathbf{S}}_p$, with the norm bound $${\lVertM(r({{\mathbf a}}))\rVert}_{{\mathbf{S}}_p}{\le C}_p {\lVert{{\mathbf M}}({{\mathbf a}})\rVert}_{{\mathbf{S}}_p}.
\label{d00}$$
2. Assume that ${{\mathbf M}}({{\mathbf a}})$ is bounded and belongs to the Schatten-Lorentz class ${\mathbf{S}}_{p,q}$ with $0<p<1$ and $1\leq q\le\infty$. Then $M(r({{\mathbf a}}))$ is also in ${\mathbf{S}}_{p,q}$, with the norm bound $${\lVertM(r({{\mathbf a}}))\rVert}_{{\mathbf{S}}_{p,q}}{\le C}_p {\lVert{{\mathbf M}}({{\mathbf a}})\rVert}_{{\mathbf{S}}_{p,q}}.$$
We will only need the case $q=\infty$ of part (ii) of the theorem.
In Sections \[SP estimate\]–\[sec.c4\] after some preliminaries, we prove part (i) of the theorem. The proof uses V. Peller’s description of Hankel operators of the class ${\mathbf{S}}_p$, $0<p\leq1$. In Sections \[sec.c5\]–\[sec.c6\] we use real interpolation to deduce part (ii) of the theorem.
Eigenvalue estimates for integral Hankel operators {#SP estimate}
--------------------------------------------------
For ${{\mathbf{f}}}\in L^1({{\mathbb R}})+L^2({{\mathbb R}})$, its Fourier transform is defined as usual by $${\widehat}{{\mathbf{f}}}(\xi):=\int_{-\infty}^\infty
{{\mathbf{f}}}(x)e^{-2\pi ix\xi}\,dx, \quad \xi\in{{\mathbb R}}.$$ Throughout this section, we let ${{\mathbf w}}\in C^\infty_0({{\mathbb R}})$ be a function with the properties ${{\mathbf w}}\geq0$, $\operatorname{supp}{{\mathbf w}}=[1/2,2]$ and $$\sum_{n=-\infty}^\infty {{\mathbf w}}(x/2^n)=1, \quad \text{for all } x>0.$$ For $n\in{{\mathbb Z}}$, let ${{\mathbf w}}_n(x)={{\mathbf w}}(x/2^n)$ and for a function ${{\mathbf b}}\in L^1_{\mathrm{loc}}({{\mathbb R}}_+)$ set $${{\mathbf b}}_n(x):={{\mathbf b}}(x){{\mathbf w}}_n(x), \quad x\in{{\mathbb R}},
\label{d11}$$ so that $${\widehat}{{{\mathbf b}}}_n(\xi) = ({\widehat}{{\mathbf b}}\ast{\widehat}{{\mathbf w}}_n)(\xi), \quad \xi\in{{\mathbb R}},$$ where $*$ denotes convolution. Clearly, we have $${{\mathbf b}}(x)=\sum_{n=-\infty}^\infty{{\mathbf b}}_n(x), \quad x>0,
\label{c3a}$$ where for every $x>0$, at most two terms of the series are non-zero.
Let us recall the necessary and sufficient conditions for the Schatten class inclusion ${{\mathbf H}}({{\mathbf b}})\in {\mathbf{S}}_p$.
[@Peller Theorem 6.7.4]\[thm.b3\] Let ${{\mathbf b}}\in L^1_{\rm loc} ({{\mathbb R}}_{+})$ and let $p>0$. The estimate $$C_p{\lVert{{\mathbf H}}({{\mathbf b}})\rVert}_{{\mathbf{S}}_p}^p
\le
\sum_{n=-\infty}^\infty 2^n {\lVert{\widehat}{{\mathbf b}}_n\rVert}^p_{L^p({{\mathbb R}})}
\le C'_p
{\lVert{{\mathbf H}}({{\mathbf b}})\rVert}_{{\mathbf{S}}_p}^p
\label{cb8}$$ holds, so that ${{\mathbf H}}({{\mathbf b}})\in {\mathbf{S}}_p$ if and only if the series in converges.
The convergence of the series in means that ${\widehat}{{\mathbf b}}$ belongs to the homogenous Besov class $B^{1/p}_{p,p}({{\mathbb R}})$.
Preliminary statements {#sec.d3}
----------------------
Using the unitary equivalence reduces the question to the following one: if ${{\mathbf H}}({{\mathbf b}})\in{\mathbf{S}}_1$, then the kernel function ${{\mathbf b}}(x)$ is continuous in $x>0$. This statement is known, and the proof is evident: if ${{\mathbf b}}_n$ is as in , then by Theorem \[thm.b3\], we have ${\widehat}{{\mathbf b}}_n\in L^1({{\mathbb R}})$ for all $n$, and so in the series each function ${{\mathbf b}}_n$ is continuous.
A key ingredient of the proof of Theorem \[thm.a2\] is a (scaled) classical inequality of Plancherel and Polya [@Plan-Pol; @Eoff] which states that if ${{\mathbf{f}}}\in L^p({{\mathbb R}})$ for $p>0$ and $\operatorname{supp}{\widehat}{{\mathbf{f}}}\subset[0,N]$, $N>0$, then $$\sum_{m=-\infty}^\infty{\lvert{{\mathbf{f}}}(m/N)\rvert}^p {\le C}_p N{\lVert{{\mathbf{f}}}\rVert}^p_{L^p({{\mathbb R}})}.$$
\[lma.d5\] Let $v\in L^1({{\mathbb R}})\cap L^p({{\mathbb R}})$, $0<p\leq1$, and assume that the function $${{\mathbf a}}(t)=\int_{-\infty}^\infty v(\xi)t^{-\frac12+2\pi i \xi}d\xi,\quad t>0,
\label{d3}$$ satisfies the condition $\operatorname{supp}{{\mathbf a}}\subset[1,e^N]$ for some $N\in{{\mathbb N}}$. Then for $a=r({{\mathbf a}})$, the Helson matrix $M(a)$ satisfies the estimate $${\lVertM(a)\rVert}_{{\mathbf{S}}_p}^p\leq C_pN{\lVertv\rVert}_{L^p({{\mathbb R}})}^p.$$
Condition $\operatorname{supp}{{\mathbf a}}\subset[1,e^N]$ means that we may regard $M(a)$ as an $[e^N]\times [e^N]$ matrix ($[e^N]$ is the integer part of $e^N$); we will use this throughout the proof.
1\) Let $p=1$. Equation implies that $$a(jk)=\int_{-\infty}^\infty v(\xi) (jk)^{-\frac12+2\pi i \xi}d\xi.
\label{d5}$$ This can be interpreted as an integral representation for $M(a)$ in terms of rank one $[e^N]\times [e^N]$ matrices $\{(jk)^{-\frac12+2\pi i \xi}\}_{j,k=1}^{[e^N]}$. The trace norm of these rank one matrices is easy to compute: $${\left\lVert\{(jk)^{-\frac12+2\pi i \xi}\}_{j,k=1}^{[e^N]}\right\rVert}_{{\mathbf{S}}_1}
=
\sum_{j=1}^{[e^N]}{\lvertj^{-\frac12+2\pi i \xi}\rvert}^2
=
\sum_{j=1}^{[e^N]}\frac1j\leq 1+N.$$ Substituting this estimate into , we get $${\lVertM(a)\rVert}_{{\mathbf{S}}_1}
\leq
\int_{-\infty}^\infty {\lvertv(\xi)\rvert}{\left\lVert\{(jk)^{-\frac12+2\pi i \xi}\}_{j,k=1}^{[e^N]}\right\rVert}_{{\mathbf{S}}_1}d\xi
\leq
(N+1){\lVertv\rVert}_{L^1({{\mathbb R}})}.$$
2\) Let $0<p<1$. Since the triangle inequality in ${\mathbf{S}}_p$ is no longer valid in this case, we have to use the modified triangle inequality . This forces us to use sums instead of integrals in estimates. In particular, we need a series representation substitute for . We claim that ${{\mathbf a}}$ can be represented as $${{\mathbf a}}(t)=\frac1N\sum_{m=-\infty}^\infty v(m/N)t^{-\frac12+2\pi i\frac{m}N}, \quad t>1,
\label{d6}$$ where the series converges absolutely and satisfies $$\sum_{m=-\infty}^\infty {\lvertv(m/N)\rvert}^p\leq C_pN{\lVertv\rVert}_{L^p({{\mathbb R}})}^p.
\label{d7}$$ In order to justify this, we set ${{\mathbf b}}(x)=e^{x/2}{{\mathbf a}}(e^{x})$; then means that ${\widehat}{{\mathbf b}}=v$. Since $\operatorname{supp}{{\mathbf b}}\subset[0,N]$, we can expand ${{\mathbf b}}$ in the orthonormal basis $$N^{-1/2}e^{2\pi ix \frac{m}N}, \quad m\in{{\mathbb Z}}$$ in $L^2(0,N)$. This yields $${{\mathbf b}}(x)=\frac1N\sum_{m=-\infty}^\infty e^{2\pi ix \frac{m}N}\int_0^N {{\mathbf b}}(y)e^{-2\pi i y\frac{m}N}dy
=\frac1N\sum_{m=-\infty}^\infty e^{2\pi i x\frac{m}N}v(m/N).$$ Changing the variable $x=\log t$ and coming back to ${{\mathbf a}}(t)$, we obtain . Since $\operatorname{supp}{{\mathbf b}}\subset[0,N]$, we can apply the Plancherel-Polya inequality, which gives . The same inequality with $p=1$ ensures the absolute convergence of the series in and justifies the above calculation.
3\) The representation yields $$a(jk)=\frac1N\sum_{m=-\infty}^\infty v(m/N)(jk)^{-\frac12+2\pi i\frac{m}{N}}, \quad j,k\in{{\mathbb N}}.$$ This is an expansion of $M(a)$ in a series of rank one operators. As on step 1 of the proof, we have $${\left\lVert\{(jk)^{-\frac12+2\pi i\frac{m}{N}}\}_{j,k=1}^N\right\rVert}_{{\mathbf{S}}_p}\leq(N+1).$$ Applying the modified triangle inequality for ${\mathbf{S}}_p$ and using , we get $${\lVertM(a)\rVert}_{{\mathbf{S}}_p}^p
\leq
N^{-p}\sum_{m=-\infty}^\infty{\lvertv(m/N)\rvert}^p(N+1)^p\leq C_p N{\lVertv\rVert}_{L^p({{\mathbb R}})}^p,$$ as required.
Proof of Theorem \[thm.a2\](i) {#sec.c4}
------------------------------
Let ${{\mathbf b}}(x)=e^{x/2}{{\mathbf a}}(e^x)$, ${{\mathbf b}}_n(x)={{\mathbf b}}(x) {{\mathbf w}}_n(x)$ and ${{\mathbf a}}_n(t)=t^{-1/2}{{\mathbf b}}_n(\log t)$, $n\in{{\mathbb Z}}$, where ${{\mathbf w}}_n$ are the functions defined in Section \[SP estimate\]. Clearly, we have $${{\mathbf a}}(t)=\sum_{n=-\infty}^\infty {{\mathbf a}}_n(t), \quad t>1.
\label{d9}$$ From the unitary equivalence , we see that ${\lVert{{\mathbf M}}({{\mathbf a}})\rVert}_{{\mathbf{S}}_p}={\lVert{{\mathbf H}}({{\mathbf b}})\rVert}_{{\mathbf{S}}_p}$. Hence by Theorem \[thm.b3\] we have $$\sum_{n=-\infty}^\infty 2^n{\lVert{\widehat}{{\mathbf b}}_n\rVert}^p_{L^p({{\mathbb R}})}
{\le C}_p {\lVert{{\mathbf M}}({{\mathbf a}})\rVert}_{{\mathbf{S}}_p}^p.
\label{estimate1}$$
Fix $n\in{{\mathbb N}}$. We have $\operatorname{supp}{{\mathbf a}}_n\subset[\exp(2^{n-1}),\exp(2^{n+1})]\subset[1,\exp(2^{n+1})]$, and $${{\mathbf a}}_n(t)
=t^{-1/2}{{\mathbf b}}_n(\log t)
=t^{-1/2}\int_{-\infty}^\infty {\widehat}{{\mathbf b}}_n(\xi)e^{i2\pi \xi\log t}d\xi
=\int_{-\infty}^\infty {\widehat}{{\mathbf b}}_n(\xi)t^{-\frac12+i2\pi \xi}d\xi.$$ Also, by with $p=1$, we have ${\widehat}{{\mathbf b}}_n\in L^1({{\mathbb R}})$. Thus, we can apply Lemma \[lma.d5\] with $N=2^{n+1}$, which yields $${\lVertM(r({{\mathbf a}}_n))\rVert}_{{\mathbf{S}}_p}^p {\le C}_p
2^n{\lVert{\widehat}{{\mathbf b}}_n\rVert}^p_{L^p({{\mathbb R}})}.$$ Now from we have $$M(r({{\mathbf a}}))=\sum_{n=-\infty}^\infty M(r({{\mathbf a}}_n));$$ applying the modified triangle inequality for ${\mathbf{S}}_p$, we obtain $${\lVertM(r({{\mathbf a}}))\rVert}_{{\mathbf{S}}_p}^p
\leq
\sum_{n=-\infty}^\infty
{\lVertM(r({{\mathbf a}}_n))\rVert}_{{\mathbf{S}}_p}^p
\leq
C_p
\sum_{n=-\infty}^\infty
2^n
{\lVert{\widehat}{{\mathbf b}}_n\rVert}^p_{L^p({{\mathbb R}})}.$$ Combining this with , we obtain the required estimate .
Real interpolation {#sec.c5}
------------------
We now wish to show that the restriction map ${{\mathbf M}}({{\mathbf a}})\mapsto M(r({{\mathbf a}}))$ is bounded between the Schatten-Lorentz classes ${\mathbf{S}}_{p,q}$, when $p<1$ and $1\leq q\le\infty$. To arrive at this we will use the real interpolation method (the “$K$-method”). We will quickly review this, but refer the reader to [@Ber-Lof §3.1] for the details.
A pair of quasi-Banach spaces $(X_0,X_1)$ is called compatible if both are continuously included in the same Hausdorff topological vector space. Real interpolation between a compatible pair of quasi-Banach spaces $X_0$ and $X_1$ produces, for each $0<\theta<1$ and $1\le q\le\infty$, an intermediate quasi-Banach space which is denoted $(X_0,X_1)_{\theta,q}$ and which satisfies $X_0\cap X_1\subseteq (X_0,X_1)_{\theta,q} \subseteq X_0 + X_1$, with continuous inclusions. In addition, if $(X_0,X_1)$ and $(Y_0,Y_1)$ are two pairs of compatible quasi-Banach spaces and $A$ is a bounded linear map from $X_0$ to $Y_0$ and from $X_1$ to $Y_1$, then $A$ is bounded from $(X_0,X_1)_{\theta,q}$ to $(Y_0,Y_1)_{\theta,q}$ for each $0<\theta<1$ and $1\le q\le\infty$.
An important result that we will make use of is the *reiteration theorem*: if $(X_0, X_1)$ is a compatible pair of quasi-Banach spaces, then for $0\le\theta_0<\theta_1\le 1$ and $0<q_0,q_1<\infty$ $$\left( (X_0, X_1)_{\theta_0,q_0}, (X_0, X_1)_{\theta_1,q_1}
\right)_{\theta,q}
= (X_0, X_1)_{(1-\theta)\theta_0 + \theta\theta_1,q},
\label{reiteration}$$ where we interpret $(X_0,X_1)_{0,q}$ and $(X_0,X_1)_{1,q}$ to be $X_0$ and $X_1$ respectively.
Of particular relevance to us are the following interpolation spaces: for $0< p_0<p_1\le\infty$ and $0<q\le\infty$ $$({\mathbf{S}}_{p_0},{\mathbf{S}}_{p_1})_{\theta,q} = {\mathbf{S}}_{p,q},
\quad \frac{1}{p}=\frac{1-\theta}{p_0} + \frac{\theta}{p_1}.
\label{Sp interpolation}$$
Interpolation spaces of Hankel and Helson operators {#sec.c6}
---------------------------------------------------
Let ${{\mathbf H}}{\mathbf{S}}_{p,q}$ denote the set of integral Hankel operators of class ${\mathbf{S}}_{p,q}$, and let us write ${{\mathbf H}}{\mathbf{S}}_p$ for ${{\mathbf H}}{\mathbf{S}}_{p,p}$. We claim that ${{\mathbf H}}{\mathbf{S}}_{p,q}$ is a closed subspace of ${\mathbf{S}}_{p,q}$ for all $0<p,q\le\infty$. This is a straightforward consequence of the following characterisation of integral Hankel operators [@Nikolski Part B, Section 4.8, page 273]. For $\lambda>0$, let $S_\lambda$ denote the right shift by $\lambda$ on $L^2({{\mathbb R}}_+)$ — that is, $$S_\lambda:L^2({{\mathbb R}}_+)\to L^2({{\mathbb R}}_+), \quad
S_\lambda f(x) =
\begin{cases}
f(x-\lambda), \quad x\ge\lambda, \\
0, \quad x<\lambda.
\end{cases}$$ Then, for a bounded operator $A$ on $L^2({{\mathbb R}}_+)$, one has $A={{\mathbf H}}({{\mathbf b}})$ for some distribution ${{\mathbf b}}$ on $(0,\infty)$ if and only if $$AS_\lambda = S_\lambda^*A \quad \text{for all}\quad \lambda>0.$$ One has a description for the interpolation spaces $({{\mathbf H}}{\mathbf{S}}_{p_0},{{\mathbf H}}{\mathbf{S}}_\infty)_{\theta,q}$, see [@Peller Theorem 6.4.1]: $$({{\mathbf H}}{\mathbf{S}}_{p_0},{{\mathbf H}}{\mathbf{S}}_\infty)_{\theta,q} = {{\mathbf H}}{\mathbf{S}}_{p,q},
\quad p=\frac{p_0}{1-\theta}.
\label{HSp interpolation}$$ It is worth noting that although [@Peller Theorem 6.4.1] is stated for Hankel matrices, the same argument also works for integral Hankel operators.
Similarly, let us write ${{\mathbf M}}{\mathbf{S}}_{p,q}$ and ${{\mathbf M}}{\mathbf{S}}_p$ to denote the set of integral Helson operators of class ${\mathbf{S}}_{p,q}$ and ${\mathbf{S}}_p$ respectively. Since the unitary equivalence provides an isomorphism between ${{\mathbf H}}{\mathbf{S}}_{p,q}$ and ${{\mathbf M}}{\mathbf{S}}_{p,q}$ it immediately follows from that $$({{\mathbf M}}{\mathbf{S}}_{p_0},{{\mathbf M}}{\mathbf{S}}_\infty)_{\theta,q} = {{\mathbf M}}{\mathbf{S}}_{p,q},
\quad p=\frac{p_0}{1-\theta}.
\label{MSp interpolation}$$
We are now in a position to conclude the proof of Theorem \[thm.a2\].
Fix $0<p_0<1$. Then by , for any $p_1>p_0$, we can write ${{\mathbf M}}{\mathbf{S}}_{p_1} = ({{\mathbf M}}{\mathbf{S}}_{p_0},{{\mathbf M}}{\mathbf{S}}_\infty)_{\theta_1,p_1}$ for some $0<\theta_1<1$. It then follows from the reiteration theorem that $$({{\mathbf M}}{\mathbf{S}}_{p_0},{{\mathbf M}}{\mathbf{S}}_{p_1})_{\theta,q}
= ({{\mathbf M}}{\mathbf{S}}_{p_0},{{\mathbf M}}{\mathbf{S}}_\infty)_{\theta\theta_1,q}
= {{\mathbf M}}{\mathbf{S}}_{p,q},
\label{MSp interpolation 2}$$ where $p$ is given by .
By , the linear map ${{\mathbf M}}({{\mathbf a}})\mapsto M(r({{\mathbf a}}))$ is bounded from ${{\mathbf M}}{\mathbf{S}}_{p_0}$ to ${\mathbf{S}}_{p_0}$ and from ${{\mathbf M}}{\mathbf{S}}_1$ to ${\mathbf{S}}_1$. It then follows from and that it is also bounded from ${{\mathbf M}}{\mathbf{S}}_{p,q}$ to ${\mathbf{S}}_{p,q}$ for every $p_0<p<1$ and $1\leq q\le\infty$. This completes the proof.
Proof of Theorem \[thm.a1\] {#sec.e}
===========================
Preliminaries
-------------
Here we collect three results from other sources that will be needed below for the proof. The first one is the stability of the eigenvalue asymptotic coefficient, which is standard in spectral perturbation theory.
\[lma.b1\][@BSbook §11.6] Let $A$ and $B$ be compact self-adjoint operators and let $\gamma>0$. Suppose that $s_n(A-B)=o(n^{-\gamma})$ as $n\to\infty$. Then $$\begin{aligned}
\limsup_{n\to\infty}n^\gamma\lambda_n^+(A)
&=
\limsup_{n\to\infty}n^\gamma\lambda_n^+(B),
\\
\liminf_{n\to\infty}n^\gamma\lambda_n^+(A)
&=
\liminf_{n\to\infty}n^\gamma\lambda_n^+(B).\end{aligned}$$
Of course, similar relations hold true for negative eigenvalues $\lambda_n^-$.
Next, we need a result from [@PY1] on the spectral asymptotics of integral Hankel operators. Roughly speaking, we need the eigenvalue asymptotics for integral Hankel operators ${{\mathbf H}}({{\mathbf b}})$ with the kernel as in — this is one of the main results of [@PY1]. However, at the technical level, we need this result not for the kernel function ${{\mathbf b}}$ of , but for the kernel function ${{\mathbf b}}_0$ of , which has the same asymptotics as ${{\mathbf b}}$ but is given by a suitable integral representation. This happens to be one of the intermediate results of [@PY1], and it fits our purpose.
[@PY1 Lemma 3.2]\[lma.bb4\] Let $w(\lambda)={\lvert\log\lambda\rvert}^{-\alpha}\chi(\lambda)$, where $\chi\in C^\infty({{\mathbb R}}_+)$ is a non-negative function such that $\chi(\lambda)=1$ near $\lambda=0$ and $\chi(\lambda)=0$ for $\lambda\geq1$. Consider the kernel function $${{\mathbf b}}_0(x)=\int_0^\infty w(\lambda)e^{-x\lambda}d\lambda,\quad x>0.$$ Then the corresponding integral Hankel operator ${{\mathbf H}}({{\mathbf b}}_0)$ is non-negative, compact and has the spectral asymptotics $$\lambda_n^+({{\mathbf H}}({{\mathbf b}}_0))=\varkappa(\alpha)n^{-\alpha}+o(n^{-\alpha}), \quad n\to\infty.$$
Finally, we will need a result from [@PY2] (which ultimately relies on Theorem \[thm.b3\]), which gives estimates on singular values for integral Hankel operators with kernels that behave, roughly speaking, as $O(x^{-1}(\log x)^{-\gamma})$. For $\gamma>0$, denote $$m(\gamma)=
\begin{cases}
[\gamma]+1& \text{ if } \gamma\geq1/2,
\\
0, & \text{ if } \gamma<1/2.
\end{cases}
\label{b7}$$
\[thm.b4\][@PY2 Theorem 2.7] Let $\gamma>0$ and let $m=m(\gamma)$ be the integer given by . Let ${{\mathbf b}}$ be a complex valued function, ${{\mathbf b}}\in L^\infty_{\mathrm{loc}}({{\mathbb R}}_+)$; if $\gamma\geq1/2$, suppose also that ${{\mathbf b}}\in C^m({{\mathbb R}}_+)$. Assume that ${{\mathbf b}}$ satisfies $${{\mathbf b}}^{(\ell)}(x)=O(x^{-1-\ell}{\lvert\log x\rvert}^{-\gamma})
\quad
\text{ as $x\to0$ and as $x\to\infty$,}
\label{b9}$$ for all $\ell=0,\dots,m(\gamma)$. Then $$s_n({{\mathbf H}}({{\mathbf b}}))=O(n^{-\gamma}), \quad n\to\infty.$$
Proof of Theorem \[thm.a1\] {#proof-of-theoremthm.a1}
---------------------------
We use the notation of Section \[sec.a5\]. More precisely, ${{\mathbf a}}(t)$ is a smooth function that satisfies for large $t$ and ${{\mathbf b}}(x)$ is the corresponding Hankel kernel function . Further, $\chi\in C^\infty({{\mathbb R}}_+)$ is a non-negative function such that $\chi(\lambda)=1$ for all sufficiently small $\lambda>0$ and $\chi(\lambda)=0$ for $\lambda\geq1$, and $w(\lambda)={\lvert\log\lambda\rvert}^{-\alpha}\chi(\lambda)$. The kernel functions ${{\mathbf a}}_0$ and ${{\mathbf a}}_1$ are given by and the corresponding Hankel kernels ${{\mathbf b}}_0$, ${{\mathbf b}}_1$ are given by ; finally, $a_0=r({{\mathbf a}}_0)$ and $a_1=r({{\mathbf a}}_1)$. Recall that we have $$M(a)=M(a_0)+M(a_1).$$
1\) Let us prove that the Helson matrix $M(a_0)$ is compact, non-negative and has the spectral asymptotics $$\lambda_n^+(M(a_0))=\varkappa(\alpha)n^{-\alpha}+o(n^{-\alpha}), \quad n\to\infty.
\label{dd0}$$ Lemma \[lma.bb4\] provides the asymptotics of the required type for ${{\mathbf H}}({{\mathbf b}}_0)$. By the unitary equivalence between ${{\mathbf M}}({{\mathbf a}}_0)$ and ${{\mathbf H}}({{\mathbf b}}_0)$, we have $$\lambda_n^+({{\mathbf M}}({{\mathbf a}}_0))=\lambda_n^+({{\mathbf H}}({{\mathbf b}}_0))$$ for all $n$, and so ${{\mathbf M}}({{\mathbf a}}_0)$ obeys the same spectral asymptotics. Finally, we use Theorem \[thm.cc1\] with $a=a_0$ and ${{\mathbf a}}={{\mathbf a}}_0$. By the unitary equivalence modulo kernels, we have $$\lambda_n^+(M(a_0))=\lambda_n^+(A),
\quad
\lambda_n^+({{\mathbf M}}({{\mathbf a}}_0))=\lambda_n^+(B),$$ for all $n$, where $A$ and $B$ are the operators in the statement of Theorem \[thm.cc1\]. Now since $A-B\in{\mathbf{S}}_0$, by Lemma \[lma.b1\], we have $$\limsup_{n\to\infty}n^\alpha\lambda_n^+(A)
=
\limsup_{n\to\infty}n^\alpha\lambda_n^+(B)$$ for all $\alpha>0$, and similarly for $\liminf$. This gives the required asymptotics for $\lambda_n^+(A)$ and so for $\lambda_n^+(M(a_0))$. The non-negativity $M(a_0)\geq0$ is given again by Theorem \[thm.cc1\].
2\) Let us prove that the Helson matrix $M(a_1)$ is compact and satisfies the spectral estimates $$s_n(M(a_1))=O(n^{-\alpha-1}), \quad n\to\infty.
\label{dd1}$$ By the choice of ${{\mathbf a}}_1$, we have that ${{\mathbf b}}_1$ is smooth on $[0,\infty)$ and $${{\mathbf b}}_1(x)=x^{-1}(\log x)^{-\alpha}-\int_0^\infty e^{-\lambda x}w(\lambda)d\lambda
\label{d1}$$ for all sufficiently large $x$. Let us check that ${{\mathbf b}}_1$ satisfies the hypothesis of Theorem \[thm.b4\] with $\gamma=\alpha+1$. Since ${{\mathbf b}}_1$ is smooth near $x=0$, we only need to check for $x\to\infty$.
We use the following well-known fact [@Erdelyi]. Let $0<c<1$, $\ell\in{{\mathbb Z}}_+$, and $$I_\ell(x)= \int_{0}^c {\lvert\log \lambda\rvert}^{-\alpha}\lambda^{\ell} e^{-\lambda x}d\lambda, \quad x>0.$$ Then $$I_\ell(x)=\ell!\, x^{-1-\ell} (\log x)^{-\alpha} \bigl(1+O((\log x)^{-1})\bigr),
\quad x\to\infty.$$ Now differentiating $\ell$ times, we obtain $$\begin{gathered}
{{\mathbf b}}_1^{(\ell)}(x)
=
(-1)^\ell
\ell!\, x^{-1-\ell} (\log x)^{-\alpha}
+O(x^{-1-\ell}(\log x)^{-\alpha-1})
\\
-
(-1)^\ell
\int_0^\infty e^{-\lambda x}\lambda^{\ell}w(\lambda)d\lambda
=
O(x^{-1-\ell}(\log x)^{-\alpha-1}),
\quad
x\to\infty,\end{gathered}$$ for all $\ell\geq0$, which gives the required estimate with $\gamma=\alpha+1$.
Thus, Theorem \[thm.b4\] yields the inclusion ${{\mathbf H}}({{\mathbf b}}_1)\in{\mathbf{S}}_{p,\infty}$ with $p=1/(\alpha+1)$. By the unitary equivalence , it follows that ${{\mathbf M}}({{\mathbf a}}_1)\in{\mathbf{S}}_{p,\infty}$. Applying Theorem \[thm.a2\](ii), we obtain $M(a_1)\in{\mathbf{S}}_{p,\infty}$, which is equivalent to the required estimate .
3\) Now we can conclude the proof of the theorem. Let us apply the asymptotic stability result, Lemma \[lma.b1\], with $A=M(a)$ and $B=M(a_0)$: by , the difference $A-B=M(a_1)$ satisfies $s_n(A-B)=o(n^{-\alpha})$, and this gives the asymptotics for the positive eigenvalues of $M(a)$.
Let us discuss the estimate for negative eigenvalues. By , we have $$\lambda_n^-(M(a_1))=O(n^{-\alpha-1}),\quad n\to\infty.
\label{e1}$$ Since $M(a_0)\geq0$, by the variational principle (see e.g. [@BSbook Theorem 9.3.7]), we obtain $$\#\{n: \lambda_n^-(M(a))>\lambda\}
\leq
\#\{n: \lambda_n^-(M(a_1))>\lambda\}$$ for any $\lambda>0$, which implies $$\lambda_n^-(M(a))\leq \lambda_n^-(M(a_1))$$ for any $n$. From here and we obtain the estimate for negative eigenvalues.
[14]{}
*Interpolation spaces: an introduction,* Springer-Verlag, 1976.
*Spectral theory of selfadjoint operators in Hilbert space,* D. Reidel, Dordrecht, 1987.
*The multiplicative Hilbert matrix,* Adv. Math. **302** (2016), 410–432.
*The discrete nature of Paley-Wiener spaces,* Proc. Amer. Math. Soc. **123** no. 2 (1995), 505–512.
*General asymptotic expansions of Laplace integrals,* Arch. Rational Mech. Anal. **7** no. 1 (1961), 1–20.
*Hankel forms,* Studia Math. **198** (2010), 79–83.
*Operators, Functions and Systems: An Easy Reading. Volume 1.* AMS, 2002.
*Hankel operators and their applications,* Springer, 2003.
*On Helson matrices: moment problems, non-negativity, boundedness, and finite rank,* to appear in Proc. London Math. Soc.; DOI: 10.1112/plms.12068
*On the spectrum of the multiplicative Hilbert matrix,* to appear in Arkiv för Matematik; arXiv:1705.01959.
*Fonctions entières et intégrales de Fourier multiples,* Comment. Math. Helv. **10** (1937), 110–163.
*Asymptotic behavior of eigenvalues of Hankel operators,* Int. Math. Res. Notices **2015**, no. 22 (2015), 11861–11886.
*Sharp estimates for singular values of Hankel operators,* Integr. Equ. Oper. Theory, **83** no. 3 (2015), 393–411.
*Diophantine approximation and Dirichlet series.* Hindustan Book Agency, New Delhi, 2013.
|
---
abstract: 'We prove several inequalities using lowest-order effective field theory for nucleons which give an upper bound on the pressure of asymmetric nuclear matter and neutron matter. We prove two types of inequalities, one based on convexity and another derived from shifting an auxiliary field.'
author:
- Dean Lee
title: 'Pressure inequalities for nuclear and neutron matter\'
---
Introduction
============
In the effective field theory description of low-energy nuclear matter, nucleons are treated as point particles rather than composite objects. While much of the work in the community has focused on few-body systems, there has also been recent interest in lattice simulations of bulk nuclear matter using effective field theory [@Muller:1999cp; @Chen:2003vy; @Abe:2003fz; @Lee:2004si; @Wingate:2004wm; @Lee:2004qd]. In parallel with this computational effort, effective field theory was also recently used to prove inequalities for the correlation function of two-nucleon operators in low-energy symmetric nuclear matter [@Lee:2004ze]. It was shown that the $S=1$, $I=0$ channel must have the lowest energy and longest correlation length in the two-nucleon sector. These results were shown to be valid at nonzero density and temperature and could be checked in effective field theory lattice simulations. The proof relied on positivity of the Euclidean functional integral measure and is similar in spirit to several quantum chromodynamics (QCD) inequalities proved using quark-gluon degrees of freedom [@Weingarten:1983uj; @Vafa:1983tf; @Vafa:1984xg; @Vafa:1984xh; @Witten:1983ut; @Nussinov:1983vh; @Nussinov:1999sx; @Nussinov:2003uj; @Cohen:2003ut].
In this work we prove several new inequalities using effective field theory which give an upper bound on the pressure of asymmetric nuclear matter and neutron matter. We prove two types of inequalities, one based on convexity and one derived from shifting an auxiliary Hubbard-Stratonovich field. We consider two general types of systems, one with two fermion species and an $SU(2)$ symmetry and another with four fermion species and an $SU(2)\times
SU(2)$ symmetry. The results we prove are quite general. In addition to nuclear and neutron matter, our inequalities apply to systems of cold, dilute gases of fermionic atoms [@O'Hara:2002; @Gupta:2002; @Regal:2003; @Bourdel:2003; @Gehm:2003] which can be described by the same lowest-order effective field theory.
Lower bound
===========
Before deriving pressure upper bounds, we first state a general lower bound for the pressure. The result is simple and perhaps obvious, but the derivation is useful to help set our notation. Consider any system in thermodynamic equilibrium that is invariant under a symmetry group $S$. Let $\mu$ be a symmetric chemical potential which preserves the group $S$. Let $\mu_{3}$ be an asymmetric chemical potential which breaks $S$ and flips sign, $\mu_{3}\rightarrow-\mu_{3}$, under some element of $S$. This means that the pressure $P$ is an even function of $\mu_{3}$.
Our condition of thermodynamic equilibrium requires that the system is stable and does not separate further into regions with more widely different values of $\mu_{3}$. This implies the convexity condition,$$\tfrac{\partial^{2}P(\mu,\mu_{3})}{\partial\mu_{3}^{2}}\geq0.$$ Combining this with the fact that $P$ is even in $\mu_{3}$, we derive the lower bound$$P(\mu,\mu_{3})\geq P(\mu,0)\text{.} \label{lower bound}$$ This lower bound holds for all the systems we consider here.
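As a simple sanity check of the convexity argument, consider the non-interacting two-species Fermi gas, for which $P(\mu,\mu_{3})=P_{1}(\mu+\mu_{3})+P_{1}(\mu-\mu_{3})$ with $P_{1}$ the single-species pressure. The short Python sketch below (our own illustration, with arbitrary units and parameter values) verifies that $P$ is convex in $\mu_{3}$ and obeys the lower bound (\[lower bound\]); in this free case the convexity upper bound derived below is saturated as an equality.

```python
import numpy as np

# Non-interacting two-species Fermi gas (hbar = k_B = 1, illustrative units only):
# P(mu, mu3) = P1(mu + mu3) + P1(mu - mu3) is even and convex in mu3,
# hence P(mu, mu3) >= P(mu, 0).
m, T = 1.0, 0.5

def P1(mu, pmax=30.0, n=4000):
    """Pressure of a single free fermion species in 3D."""
    p = np.linspace(0.0, pmax, n)
    integrand = p**2 * np.log1p(np.exp((mu - p**2 / (2.0 * m)) / T))
    return T * np.sum(integrand) * (p[1] - p[0]) / (2.0 * np.pi**2)

mu = 1.0
mu3 = np.linspace(0.0, 0.8, 9)
P = np.array([P1(mu + d) + P1(mu - d) for d in mu3])
print(np.all(P >= P[0]))               # lower bound P(mu, mu3) >= P(mu, 0)
print(np.all(np.diff(P, 2) >= -1e-9))  # discrete convexity in mu3
```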
Two fermion states - $SU(2)$
============================
We consider an effective theory with two species of interacting fermion fields and an $SU(2)$ symmetry. Let $n$ be a doublet of fermion fields which we can regard as neutron spin states,$$n=\left[
\begin{array}
[c]{c}\uparrow\\
\downarrow
\end{array}
\right] .$$ We can write the lowest-order Lagrange density in Euclidean space in two equivalent forms, $$\mathcal{L}_{E}=-\bar{n}[\partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0}-\mu-\mu_{3}\sigma_{3})]n-\tfrac{1}{2}C\bar{n}n\bar{n}n,
\label{first neutron}$$ and$$\mathcal{L}_{E}=-\bar{n}[\partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0\prime}-\mu-\mu_{3}\sigma_{3})]n-\tfrac{1}{2}C^{\prime}\bar{n}\vec{\sigma}n\cdot\bar{n}\vec{\sigma}n, \label{second neutron}$$ where$$C^{\prime}=-\tfrac{1}{3}C.$$ We use $\vec{\sigma}$ to represent Pauli matrices acting in spin space. $\mu$ is the symmetric chemical potential while $\mu_{3}$ is the asymmetric chemical potential. We assume the interaction is attractive so that $$C<0\text{, }C^{\prime}>0\text{.}$$
Two-body operator coefficients
------------------------------
We can calculate $C$ using a lattice regulator for various lattice spacings, which we denote as $a_{lattice}$. For simplicity we take the temporal lattice spacing to be zero. We must sum all two-particle scattering bubble diagrams, as shown in Fig. \[scattering\], and locate the pole in the scattering amplitude.
![Two-particle scattering bubble diagrams.[]{data-label="scattering"}](scattering.ps)
We then use Lüscher’s formula for energy levels in a finite periodic box [@Luscher:1986pf; @Beane:2003da; @Lee:2004qd] and tune the coefficients to give the physically measured scattering lengths. From Lüscher’s formula there should be a pole in the two-particle scattering amplitude with energy$$E_{pole}=\frac{4\pi a_{scatt}}{m_{N}L^{3}}+\cdots\text{,}$$ where $a_{scatt}$ is the scattering length. We can write the sum over bubble diagrams as a geometric series. In order to produce a pole at this energy we must have $$\frac{1}{m_{N}C}=\frac{1}{4\pi a_{scatt}}-\lim_{L\rightarrow\infty}\frac
{1}{a_{lattice}L^{3}}\sum_{\vec{k}\neq0}\frac{1}{6-2\cos\frac{2\pi k_{1}}{L}-2\cos\frac{2\pi k_{2}}{L}-2\cos\frac{2\pi k_{3}}{L}}, \label{pole}$$ where $a_{lattice}$ is the lattice spacing, and the sum is over integer values $k_{1},k_{2},k_{3}$ from $0$ to $L-1$. Solving for $C$ gives$$C\simeq\frac{1}{m_{N}\left( \frac{1}{4\pi a_{scatt}}-\frac{0.253}{a_{lattice}}\right) }.$$
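The constant $0.253$ is the large-$L$ limit of the lattice sum appearing in (\[pole\]); a direct numerical evaluation (a Python sketch added here for illustration) shows the convergence:

```python
import numpy as np

def lattice_sum(L):
    """(1/L^3) * sum over k != 0 of 1 / (6 - 2cos(2*pi*k1/L) - 2cos(2*pi*k2/L) - 2cos(2*pi*k3/L))."""
    c = 2.0 * np.cos(2.0 * np.pi * np.arange(L) / L)
    denom = 6.0 - (c[:, None, None] + c[None, :, None] + c[None, None, :])
    denom[0, 0, 0] = np.inf            # exclude the zero mode
    return np.sum(1.0 / denom) / L**3

for L in (8, 16, 32, 64):
    print(L, lattice_sum(L))           # approaches ~0.253 as L grows
```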
For any chosen temperature and neutron density there is a corresponding maximum value for the lattice spacing, $a_{lattice}.$ The requirements are that the kinetic energy for the highest momentum mode must exceed the temperature, and the lattice spacing must be less than the interparticle spacing. We therefore have $$a_{lattice}^{-1}\gg(a_{lattice}^{-1})_{\min}=\max\left[ \pi^{-1}\sqrt
{2m_{N}T},\rho^{1/3}\right] .$$ This sets an upper bound for the absolute value for the scale-dependent coupling $C$,$$\left\vert C\right\vert \ll\left\vert C\right\vert _{\max}\equiv\frac{1}{m_{N}\left\vert \frac{1}{4\pi a_{scatt}}-0.253(a_{lattice}^{-1})_{\min
}\right\vert }\text{.}\label{Cmax}$$ This result will be useful for the shifted-field inequalities derived later.
Convexity inequality
--------------------
The grand canonical partition function is given by$$Z_{G}(\mu,\mu_{3})=\int DnD\bar{n}\exp\left( -S_{E}\right) =\int DnD\bar
{n}\exp\left( \int d^{4}x\,\mathcal{L}_{E}\right) ,$$ where we use the expression (\[first neutron\]) for $\mathcal{L}_{E}$, $$\mathcal{L}_{E}=-\bar{n}[\partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0}-\mu-\mu_{3}\sigma_{3})]n-\tfrac{1}{2}C\bar{n}n\bar{n}n.$$ Using a Hubbard-Stratonovich transformation [@Stratonovich:1958; @Hubbard:1959ub], we can rewrite $Z_{G}$ as$$Z_{G}\propto\int DnD\bar{n}Df\exp\left( \int d^{4}x\,\mathcal{L}_{E}^{f}\right) ,$$ where$$\mathcal{L}_{E}^{f}=-\bar{n}[\partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0}-\mu-\mu_{3}\sigma_{3})]n+Cf\bar{n}n+\tfrac{1}{2}Cf^{2}.$$
Let us define $\mathbf{M}$ as the matrix for the part of $\mathcal{L}_{E}^{f}$ bilinear in the neutron field,$$\mathbf{M}=-\left[ \partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0}-\mu-\mu_{3}\sigma_{3})\right] +Cf.$$ We observe that $\mathbf{M}$ has the block diagonal form,$$\mathbf{M}=\left[
\begin{array}
[c]{cc}M(\mu+\mu_{3}) & 0\\
0 & M(\mu-\mu_{3})
\end{array}
\right] ,$$ where$$M(\mu)=-\left[ \partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0}-\mu)\right] +Cf\text{.}$$ Since $M$ is real valued, $\det M$ must also be real.
Integrating over the fermion fields gives us$$\begin{aligned}
Z_{G}(\mu,\mu_{3}) & \propto\int DnD\bar{n}Df\exp\left( \int d^{4}x\,\mathcal{L}_{E}^{f}\right) \nonumber\\
& =\int D\Theta\det\mathbf{M=}\int D\Theta\det M(\mu+\mu_{3})\det M(\mu
-\mu_{3}),\end{aligned}$$ where $D\Theta$ is the positive measure$$D\Theta=Df\exp\left( \tfrac{1}{2}C\int d^{4}x\,f^{2}\right) .$$ Using the Cauchy-Schwarz inequality we find$$\begin{aligned}
\left\vert \int D\Theta\det M(\mu+\mu_{3})\det M(\mu-\mu_{3})\right\vert &
\leq\int D\Theta\left\vert \det M(\mu+\mu_{3})\det M(\mu-\mu_{3})\right\vert
\nonumber\\
& \leq\sqrt{\int D\Theta\left[ \det M(\mu+\mu_{3})\right] ^{2}}\sqrt{\int
D\Theta\left[ \det M(\mu-\mu_{3})\right] ^{2}}.\end{aligned}$$ We can now compare the asymmetric partition function to the symmetric partition function at chemical potentials $\mu+\mu_{3}$ and $\mu-\mu_{3}$,$$Z_{G}(\mu,\mu_{3})\leq\sqrt{Z_{G}(\mu+\mu_{3},0)}\sqrt{Z_{G}(\mu-\mu_{3},0)}\text{.}$$
We now use the thermodynamic relation,$$\ln Z_{G}=\tfrac{PV}{k_{B}T}, \label{thermodynamic}$$ where $P$ is the pressure, $V$ is the volume, and $T$ is the temperature. We find the upper bound$$P(\mu,\mu_{3})\leq\frac{1}{2}\left[ P(\mu+\mu_{3},0)+P(\mu-\mu_{3},0)\right]
.$$
Shifted-field inequality
------------------------
We start again with the grand canonical partition function$$Z_{G}(\mu,\mu_{3})=\int DnD\bar{n}\exp\left( -S_{E}\right) =\int DnD\bar
{n}\exp\left( \int d^{4}x\,\mathcal{L}_{E}\right) .$$ This time we use the other expression (\[second neutron\]) for $\mathcal{L}_{E}$, $$\mathcal{L}_{E}=-\bar{n}[\partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0\prime}-\mu-\mu_{3}\sigma_{3})]n-\tfrac{1}{2}C^{\prime}\bar{n}\vec{\sigma}n\cdot\bar{n}\vec{\sigma}n.$$ We can rewrite the grand canonical partition function using three Hubbard-Stratonovich fields, $$Z_{G}\propto\int DnD\bar{n}D\vec{\phi}\exp\left( \int d^{4}x\,\mathcal{L}_{E}^{\vec{\phi}}\right) ,$$ where $$\mathcal{L}_{E}^{\vec{\phi}}=-\bar{n}[\partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0\prime}-\mu-\mu_{3}\sigma_{3})]n+iC^{\prime}\vec{\phi}\cdot\bar{n}\vec{\sigma}n-\tfrac{1}{2}C^{\prime}\vec{\phi}\cdot\vec{\phi}.$$ Let $\mathbf{M}_{0}$ be the neutron matrix without the $\mu_{3}\sigma_{3}$ term$,$$$\mathbf{M}_{0}=-\left[ \partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0\prime}-\mu)\right] +iC^{\prime}\vec{\phi}\cdot\vec{\sigma}.$$ We note that$$\sigma_{2}\mathbf{M}_{0}\sigma_{2}=\mathbf{M}_{0}^{\ast},$$ where $\mathbf{M}_{0}^{\ast}$ is the complex conjugate of $\mathbf{M}_{0}$. This means that $\mathbf{M}_{0}$ is either singular, in which case $\det\mathbf{M}_{0}=0$, or has the same eigenvalues as $\mathbf{M}_{0}^{\ast}$. In all cases $\det\mathbf{M}_{0}$ is real. Furthermore the fact that $\sigma_{2}$ is antisymmetric means that the real eigenvalues of $\mathbf{M}_{0}$ are doubly degenerate, and so $\det\mathbf{M}_{0}\geq0$ [@Hands:2000ei].
We now concentrate on the part of $\mathcal{L}_{E}^{\vec{\phi}}$ that contains $\mu_{3}$ and $\phi_{3}$,$$-\tfrac{1}{2}C^{\prime}\phi_{3}^{2}+iC^{\prime}\phi_{3}\bar{n}\sigma_{3}n+\mu_{3}\bar{n}\sigma_{3}n.$$ We can rewrite this as$$-\tfrac{1}{2}C^{\prime}\phi_{3}^{\prime2}-i\mu_{3}\phi_{3}^{\prime}+iC^{\prime}\phi_{3}^{\prime}\bar{n}\sigma_{3}n+\tfrac{1}{2}\tfrac{\mu_{3}^{2}}{C^{\prime}}$$ where$$\phi_{3}^{\prime}=\phi_{3}-i\tfrac{\mu_{3}}{C^{\prime}}.$$ The original contour of integration for $\phi_{3}^{\prime}$ is off the real axis, but we can deform the contour onto the real axis. For notational convenience we now drop the prime on $\phi_{3}^{\prime}$ and have$$\mathcal{L}_{E}^{\vec{\phi}}=-\bar{n}[\partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0\prime}-\mu)]n+iC^{\prime}\vec{\phi}\cdot\bar{n}\vec
{\sigma}n-\tfrac{1}{2}C^{\prime}\vec{\phi}\cdot\vec{\phi}-i\mu_{3}\phi
_{3}+\tfrac{1}{2}\tfrac{\mu_{3}^{2}}{C^{\prime}}.$$ The neutron matrix is now $\mathbf{M}_{0}$, which we have shown has a non-negative determinant. The complex phase is contained entirely in the local expression $-i\mu_{3}\phi_{3}$.
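The field shift above can be checked symbolically; in the sympy sketch below (our own illustration), the symbol $S$ stands for the bilinear $\bar{n}\sigma_{3}n$, which is Grassmann even and is therefore treated as an ordinary commuting symbol:

```python
import sympy as sp

phi3, mu3, Cp, S = sp.symbols('phi3 mu3 Cprime S')   # S stands for the bilinear nbar*sigma3*n
lhs = -sp.Rational(1, 2) * Cp * phi3**2 + sp.I * Cp * phi3 * S + mu3 * S
phi3p = phi3 - sp.I * mu3 / Cp                       # the shifted field phi3'
rhs = (-sp.Rational(1, 2) * Cp * phi3p**2 - sp.I * mu3 * phi3p
       + sp.I * Cp * phi3p * S + sp.Rational(1, 2) * mu3**2 / Cp)
print(sp.expand(lhs - rhs))                          # prints 0
```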
We now have$$\begin{aligned}
Z_{G} & \propto\int D\Theta\exp\left\{ \int d^{4}x\left[ -i\mu_{3}\phi
_{3}+\tfrac{1}{2}\tfrac{\mu_{3}^{2}}{C^{\prime}}\right] \right\} \nonumber\\
& =\exp(\tfrac{V\mu_{3}^{2}}{2C^{\prime}k_{B}T})\int D\Theta\exp\left(
-i\mu_{3}\int d^{4}x\;\phi_{3}\right) ,\end{aligned}$$ where $D\Theta$ is the normalized positive measure $$D\Theta=\frac{D\vec{\phi}\;\det\mathbf{M}_{0}\exp\left( -\int d^{4}x\,\mathcal{V}(\vec{\phi})\right) }{\int D\vec{\phi}\;\det\mathbf{M}_{0}\exp\left( -\int d^{4}x\,\mathcal{V}(\vec{\phi})\right) }$$ with $$-\,\mathcal{V}(\vec{\phi})=-\tfrac{1}{2}C^{\prime}\vec{\phi}\cdot\vec{\phi}.$$
Using (\[thermodynamic\]) we find$$\begin{aligned}
P(\mu,\mu_{3})-P(\mu,0) & =\tfrac{k_{B}T}{V}\ln\left[ \exp(\tfrac{V\mu
_{3}^{2}}{2C^{\prime}k_{B}T})\int D\Theta\exp\left( -i\mu_{3}\int
d^{4}x\;\phi_{3}\right) \right] \nonumber\\
& =\tfrac{\mu_{3}^{2}}{2C^{\prime}}+\tfrac{k_{B}T}{V}\ln\left[ \int
D\Theta\exp\left( -i\mu_{3}\int d^{4}x\;\phi_{3}\right) \right] .\end{aligned}$$ So we conclude that$$P(\mu,\mu_{3})\leq P(\mu,0)+\tfrac{\mu_{3}^{2}}{2C^{\prime}}.\label{cprime neutron}$$ This upper bound is unusual in that it relates physical observables independent of the cutoff scale to the scale-dependent coupling $C^{\prime}$. By taking the lattice spacing as large as possible, we have $$C^{\prime}=\tfrac{1}{3}\left\vert C\right\vert _{\max},$$ where $\left\vert C\right\vert _{\max}$ was defined in (\[Cmax\]), and therefore$$P(\mu,\mu_{3})\leq P(\mu,0)+\tfrac{3\mu_{3}^{2}}{2\left\vert C\right\vert
_{\max}}.\label{shifted neutron}$$ As a rough estimate of the quantities involved, we note that for $\rho
\sim0.1\rho_{N}$ and $T<10$ MeV, $\left\vert C\right\vert _{\max}$ is about $3$ fm$^{2}$.
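This order of magnitude is easy to reproduce from (\[Cmax\]); in the Python sketch below, the neutron-neutron scattering length $a_{scatt}\simeq-18.5$ fm, $m_{N}\simeq939$ MeV and $\rho_{N}=0.16$ fm$^{-3}$ are assumed inputs that are not specified in the text:

```python
import numpy as np

hbarc = 197.327                       # MeV fm
mN = 939.0 / hbarc                    # nucleon mass in fm^-1
a_scatt = -18.5                       # assumed neutron-neutron scattering length in fm
T = 10.0 / hbarc                      # temperature in fm^-1
rho = 0.1 * 0.16                      # 0.1 rho_N in fm^-3

inv_a_lat_min = max(np.sqrt(2.0 * mN * T) / np.pi, rho ** (1.0 / 3.0))
C_max = 1.0 / (mN * abs(1.0 / (4.0 * np.pi * a_scatt) - 0.253 * inv_a_lat_min))
print(C_max)                          # ~3 fm^2, consistent with the rough estimate above
```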
As $C^{\prime}$ decreases, the upper bound in (\[cprime neutron\]) increases. But at the same time the bound becomes less tight, as complex phase oscillations due to the term$$\exp\left[ \int d^{4}x\left( -\tfrac{1}{2}C^{\prime}\phi_{3}^{2}-i\mu
_{3}\phi_{3}\right) \right]$$ become more significant. The average phase for our functional integral is given by$$\begin{aligned}
\left\langle \text{phase}\right\rangle & =\int D\Theta\exp\left( -i\mu
_{3}\int d^{4}x\;\phi_{3}\right) \nonumber\label{local}\\
& =\exp\left[ \tfrac{V}{k_{B}T}\left( P(\mu,\mu_{3})-P(\mu,0)-\tfrac
{\mu_{3}^{2}}{2C^{\prime}}\right) \right] .\end{aligned}$$
Given an estimate of the pressure difference, this relation can be used to predict the feasibility of a numerical simulation using this representation of the functional integral. In cases where the phase problem is not too severe we can use hybrid Monte Carlo to generate Hubbard-Stratonovich field configurations according to the relative probability weight $\det
\mathbf{M}_{0}$. The phase of the configuration can then be included as an observable using the local expression $-i\mu_{3}\phi_{3}$. This local expression for the phase could increase algorithmic speed by several orders of magnitude. The only known way to compute the phase of matrix determinants is LU decomposition, an algorithm which writes a matrix as a product of lower and upper triangular matrices. The number of operations for LU decomposition scales as $N^{3}$, where $N$ is the dimension of the matrix. For an $L^{4}$ lattice the scaling is thus $L^{12}$.
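As a rough illustration of this cost comparison (a sketch with arbitrary sizes, not a physical simulation), the phase of a complex fermion determinant is obtained from an LU decomposition at $O(N^{3})$ cost, for instance through numpy's slogdet, whereas the local expression $-i\mu_{3}\phi_{3}$ requires only an $O(V)$ sum over the field configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Phase of det M0 for a generic complex N x N matrix standing in for the fermion
# matrix: slogdet performs an LU decomposition internally, so the cost grows as N^3.
N = 400
M0 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
phase_det, _ = np.linalg.slogdet(M0)          # complex number of unit modulus

# Local alternative: exp(-i mu3 sum_x phi3(x)) is a single O(V) reduction
# over the Hubbard-Stratonovich field configuration.
mu3, V = 0.1, 10**4
phi3 = rng.normal(size=V)
phase_local = np.exp(-1j * mu3 * np.sum(phi3))
print(abs(phase_det), abs(phase_local))        # both lie on the unit circle
```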
Four fermion states - $SU(2)\times SU(2)$
=========================================
We now consider an effective theory with four species of interacting fermions and an $SU(2)\times SU(2)$ symmetry. Let $N$ be a quartet of fermion states, which we can regard as nucleon fields,$$N=\left[
\begin{array}
[c]{c}p\\
n
\end{array}
\right] \otimes\left[
\begin{array}
[c]{c}\uparrow\\
\downarrow
\end{array}
\right] .$$ We use $p$($n$) to represent protons(neutrons) and $\uparrow$($\downarrow$) to represent up(down) spins. We use $\vec{\tau}$ to represent Pauli matrices acting in isospin space and $\vec{\sigma}$ to represent Pauli matrices acting in spin space. We assume exact isospin and spin symmetry in the absence of symmetry-breaking chemical potentials, and so the symmetry group is $SU(2)_{I}\times SU(2)_{S}$.
In the non-relativistic limit and below the threshold for pion production, we can write the lowest-order terms in the effective Lagrangian in two equivalent ways,$$\begin{aligned}
\mathcal{L}_{E} & =-\bar{N}[\partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0}-\mu)]N-\tfrac{1}{2}C_{S}(\bar{N}N)^{2}-\tfrac{1}{2}C_{T}\bar
{N}\vec{\sigma}N\cdot\bar{N}\vec{\sigma}N\nonumber\\
& -\tfrac{1}{3!}C_{3}(\bar{N}N)^{3}-\tfrac{1}{4!}C_{4}(\bar{N}N)^{4},
\label{C_T}\end{aligned}$$ or$$\begin{aligned}
\mathcal{L}_{E} & =-\bar{N}[\partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0\prime}-\mu)]N-\tfrac{1}{2}C_{S}^{\prime}(\bar{N}N)^{2}-\tfrac
{1}{2}C_{U}^{\prime}\bar{N}\vec{\tau}N\cdot\bar{N}\vec{\tau}N\nonumber\\
& -\tfrac{1}{3!}C_{3}(\bar{N}N)^{3}-\tfrac{1}{4!}C_{4}(\bar{N}N)^{4}.
\label{C_U}\end{aligned}$$ We will introduce symmetry-breaking chemical potentials later. We have included both three-body and four-body forces. The $SU(4)$-symmetric three-nucleon force is needed for consistent renormalization and has been shown to be the dominant three-body force contribution [@Mehen:1999qs; @Bedaque:1998kg; @Bedaque:1999ve].
With four distinct fermion species there are two irreducible representations of $SU(2)_{I}\times SU(2)_{S}$ for two fermions in an s-wave, a spin-singlet isospin-triplet $(S=0)$ or an isospin-singlet spin-triplet $(I=0)$. One can show that [@Lee:2004ze]$$C_{U}^{\prime}=-C_{T},\quad C_{S}^{\prime}=C_{S}-2C_{T}.$$ In the case of nucleons, one finds that both of the s-wave channels are attractive, with the $I=0$ channel being more strongly attractive,$$\frac{1}{a_{scatt}^{I=0}}>\frac{1}{a_{scatt}^{S=0}}.$$ This implies that [@Lee:2004ze]$$\begin{aligned}
C_{S} & <3C_{T},\quad C_{T}<0,\\
C_{S}^{\prime} & <-C_{U}^{\prime},\quad C_{U}^{\prime}>0.\end{aligned}$$ For a more general system with four fermion states and an $SU(2)\times SU(2)$ symmetry, we can interchange the isospin and spin labels so that, without loss of generality,$$\frac{1}{a_{scatt}^{I=0}}\geq\frac{1}{a_{scatt}^{S=0}}.$$ In the special case when the scattering lengths are equal, the symmetry group is the full Wigner $SU(4)$ symmetry [@Wigner:1939a], and the isospin and spin labels can be interchanged.
Two-body operator coefficients
------------------------------
We determine the two-body operator coefficients in the same manner as before. The only difference is that there are now two s-wave channels. The coefficient $C$ in (\[pole\]) is replaced by $C^{S=0}$ and $C^{I=0}$ where$$\begin{aligned}
C^{S=0} & =C_{S}^{\prime}+C_{U}^{\prime},\\
C^{I=0} & =C_{S}^{\prime}-3C_{U}^{\prime}\text{.}\end{aligned}$$ We then find$$\begin{aligned}
C_{S}^{\prime} & \simeq\frac{3}{4m_{N}\left( \frac{1}{4\pi a_{scatt}^{S=0}}-\frac{0.253}{a_{lattice}}\right) }+\frac{1}{4m_{N}\left( \frac{1}{4\pi
a_{scatt}^{I=0}}-\frac{0.253}{a_{lattice}}\right) },\\
C_{U}^{\prime} & \simeq\frac{1}{4m_{N}\left( \frac{1}{4\pi a_{scatt}^{S=0}}-\frac{0.253}{a_{lattice}}\right) }-\frac{1}{4m_{N}\left( \frac{1}{4\pi
a_{scatt}^{I=0}}-\frac{0.253}{a_{lattice}}\right) }\text{.}\end{aligned}$$ For any chosen temperature and nucleon density there is again a corresponding maximum value for the lattice spacing,$$a_{lattice}^{-1}\gg(a_{lattice}^{-1})_{\min}=\max\left[ \pi^{-1}\sqrt
{2m_{N}T},\rho^{1/3}\right] .$$ This sets a maximum value for the absolute value of the coupling $C_{U}^{\prime}$,$$\left\vert C_{U}^{\prime}\right\vert \ll\left\vert C_{U}^{\prime}\right\vert
_{\max}\equiv\frac{\left\vert \frac{1}{4\pi a_{scatt}^{I=0}}-\frac{1}{4\pi
a_{scatt}^{S=0}}\right\vert }{4m_{N}\left\vert \left( \frac{1}{4\pi
a_{scatt}^{S=0}}-0.253(a_{lattice}^{-1})_{\min}\right) \left( \frac{1}{4\pi
a_{scatt}^{I=0}}-0.253(a_{lattice}^{-1})_{\min}\right) \right\vert
}.\label{CUmax}$$ A similar bound for $C_{S}^{\prime}$ can be made but is not needed in our analysis.
Convexity inequality for $\mu_{3}^{S}$
--------------------------------------
We first consider the case when an asymmetric chemical potential $\mu_{3}^{S}$ is coupled to the nucleon spins. The grand canonical partition function is given by$$Z_{G}=\int DND\bar{N}\exp\left( -S_{E}\right) =\int DND\bar{N}\exp\left(
\int d^{4}x\,\mathcal{L}_{E}\right) ,$$ where we take the form of $\mathcal{L}_{E}$ given in (\[C\_U\]) with an asymmetric spin chemical potential,$$\begin{aligned}
\mathcal{L}_{E} & =-\bar{N}[\partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0\prime}-\mu-\mu_{3}^{S}\sigma_{3})]N-\tfrac{1}{2}C_{S}^{\prime
}(\bar{N}N)^{2}-\tfrac{1}{2}C_{U}^{\prime}\bar{N}\vec{\tau}N\cdot\bar{N}\vec{\tau}N\nonumber\\
& -\tfrac{1}{3!}C_{3}(\bar{N}N)^{3}-\tfrac{1}{4!}C_{4}(\bar{N}N)^{4}.\end{aligned}$$ Using Hubbard-Stratonovich transformations we can rewrite $Z_{G}$ as$$Z_{G}\propto\int DND\bar{N}DfD\vec{\phi}\exp\left( \int d^{4}x\,\mathcal{L}_{E}^{f,\vec{\phi}}\right) ,$$ where$$\begin{aligned}
\mathcal{L}_{E}^{f,\vec{\phi}} & =-\bar{N}[\partial_{4}-\tfrac{\vec{\nabla
}^{2}}{2m_{N}}+(m_{N}^{0\prime}-\mu-\mu_{3}^{S}\sigma_{3})]N+f\bar{N}N+iC_{U}^{\prime}\vec{\phi}\cdot\bar{N}\vec{\tau}N\nonumber\\
& +g(f)-\tfrac{1}{2}C_{U}^{\prime}\vec{\phi}\cdot\vec{\phi}.\end{aligned}$$ In [@Chen:2004rq] it was shown that three-body and four-body forces can be introduced without spoiling positivity of the functional integral measure. The only requirements are that the three-body force is not too strong and the four-body force is not too repulsive. Estimates of the three- and four-body forces suggest that these conditions are satisfied. For our analysis here we assume that to be the case, and the function $g(f)$ is a real-valued function which produces the two-, three-, and four-body force terms involving $\bar{N}N$.
The nucleon matrix $\mathbf{M}$ has the block diagonal structure$$\mathbf{M}=\left[
\begin{array}
[c]{cc}M(\mu+\mu_{3}^{S}) & 0\\
0 & M(\mu-\mu_{3}^{S})
\end{array}
\right] ,$$ where the upper block is for up spins and the lower block is for down spins. $\ M$ is a matrix in isospin space, $$M(\mu)=-\left[ \partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0\prime
}-\mu)\right] +f+iC_{U}^{\prime}\vec{\phi}\cdot\vec{\tau}.$$ We note that $$\tau_{2}M\tau_{2}=M^{\ast},$$ and so $\det M\geq0$.
Integrating over the fermion fields gives us$$\begin{aligned}
Z_{G}(\mu,\mu_{3}^{S}) & \propto\int DND\bar{N}DfD\vec{\phi}\exp\left( \int
d^{4}x\,\mathcal{L}_{E}^{f,\vec{\phi}}\right) \nonumber\\
& =\int D\Theta\det\mathbf{M=}\int D\Theta\det M(\mu+\mu_{3}^{S})\det
M(\mu-\mu_{3}^{S}),\end{aligned}$$ where$$D\Theta=DfD\vec{\phi}\exp\left( -\int d^{4}x\,\mathcal{V}(f,\vec{\phi
})\right)$$ with $$-\,\mathcal{V}(f,\vec{\phi})=g(f)-\tfrac{1}{2}C_{U}^{\prime}\vec{\phi}\cdot\vec{\phi}.$$ From the Cauchy-Schwarz inequality we get$$Z_{G}(\mu,\mu_{3}^{S})\leq\sqrt{Z_{G}(\mu+\mu_{3}^{S},0)}\sqrt{Z_{G}(\mu-\mu
_{3}^{S},0)}\text{.}$$ We therefore find an upper bound for the pressure,$$P(\mu,\mu_{3}^{S})\leq\frac{1}{2}\left[ P(\mu+\mu_{3}^{S},0)+P(\mu-\mu
_{3}^{S},0)\right] .$$
Shifted-field inequality for $\mu_{3}^{I}$
------------------------------------------
We now consider the case with an isospin chemical potential $\mu_{3}^{I}$. We start with the Lagrange density in terms of the Hubbard-Stratonovich fields, $$\begin{aligned}
\mathcal{L}_{E}^{f,\vec{\phi}} & =-\bar{N}[\partial_{4}-\tfrac{\vec{\nabla
}^{2}}{2m_{N}}+(m_{N}^{0\prime}-\mu-\mu_{3}^{I}\tau_{3})]N+f\bar{N}N+iC_{U}^{\prime}\vec{\phi}\cdot\bar{N}\vec{\tau}N\nonumber\\
& +g(f)-\tfrac{1}{2}C_{U}^{\prime}\vec{\phi}\cdot\vec{\phi}.\end{aligned}$$ Let $\mathbf{M}_{0}$ be the nucleon matrix without the $\mu_{3}^{I}\tau_{3}$ term$,$$$\mathbf{M}_{0}=-\left[ \partial_{4}-\tfrac{\vec{\nabla}^{2}}{2m_{N}}+(m_{N}^{0\prime}-\mu)\right] +f+iC_{U}^{\prime}\vec{\phi}\cdot\vec{\tau}.$$ We note that$$\tau_{2}\mathbf{M}_{0}\tau_{2}=\mathbf{M}_{0}^{\ast}, \label{tau_2}$$ and so $\det\mathbf{M}_{0}\geq0.$
As we did for the two fermion case, we now shift the $\phi_{3}$ field and find the inequality$$P(\mu,\mu_{3}^{I})\leq P(\mu,0)+\tfrac{(\mu_{3}^{I})^{2}}{2C_{U}^{\prime}}.\label{mainresult}$$ If we take the lattice spacing as large as possible then$$P(\mu,\mu_{3}^{I})\leq P(\mu,0)+\tfrac{(\mu_{3}^{I})^{2}}{2\left\vert
C_{U}^{\prime}\right\vert _{\max}},$$ where $\left\vert C_{U}^{\prime}\right\vert _{\max}$ was defined in (\[CUmax\]). As a rough estimate of the quantities involved, we note that for $\rho\sim0.1\rho_{N}$ and $T<10$ MeV, $\left\vert C_{U}^{\prime
}\right\vert _{\max}$ is about $0.2$ fm$^{2}$. In this case however the situation is complicated by nuclear saturation, and it is not clear that the pionless effective theory is applicable.
Summary and discussion
======================
The main results we have shown are as follows. We first considered the two fermion system with an attractive interaction and $SU(2)$ symmetry. If $\mu$ is the symmetric chemical potential and $\mu_{3}$ is the asymmetric chemical potential, we proved both the convexity inequality$$P(\mu,0)\leq P(\mu,\mu_{3})\leq\frac{1}{2}\left[ P(\mu+\mu_{3},0)+P(\mu
-\mu_{3},0)\right] , \label{1}$$ and the shifted-field inequality$$P(\mu,0)\leq P(\mu,\mu_{3})\leq P(\mu,0)+\tfrac{3\mu_{3}^{2}}{2\left\vert
C\right\vert _{\max}}. \label{2}$$
We then analyzed the four fermion system with an $SU(2)_{I}\times SU(2)_{S}$ symmetry. We considered the case when both s-wave channels are attractive and without loss of generality assumed the $I=0$ channel to be more strongly attractive. With $\mu$ as the symmetric chemical potential and $\mu_{3}^{S}$ as the asymmetric spin chemical potential we proved the convexity inequality$$P(\mu,0)\leq P(\mu,\mu_{3}^{S})\leq\frac{1}{2}\left[ P(\mu+\mu_{3}^{S},0)+P(\mu-\mu_{3}^{S},0)\right] . \label{3}$$ For non-zero asymmetric isospin chemical potential $\mu_{3}^{I}$ we proved the shifted-field inequality $$P(\mu,0)\leq P(\mu,\mu_{3}^{I})\leq P(\mu,0)+\tfrac{(\mu_{3}^{I})^{2}}{2\left\vert C_{U}^{\prime}\right\vert _{\max}}. \label{4}$$ In the Wigner $SU(4)$ symmetry limit, we note that the shifted-field inequality (\[4\]) becomes meaningless since $\left\vert C_{U}^{\prime}\right\vert
_{\max}\rightarrow0$. However in this limit we also have the convexity inequality for $\mu_{3}^{I}$,$$P(\mu,0)\leq P(\mu,\mu_{3}^{I})\leq\frac{1}{2}\left[ P(\mu+\mu_{3}^{I},0)+P(\mu-\mu_{3}^{I},0)\right] . \label{Wigner}$$
The equation of state for nuclear matter with small isospin asymmetries can be measured indirectly in the laboratory by studying nuclear multifragmentation. Of the inequalities presented here, the simplest and perhaps most interesting to check is the isospin convexity inequality (\[Wigner\]) in the Wigner symmetry limit. Since much is still unknown about asymmetric nuclear matter, this Wigner pressure inequality may be a useful consistency check for proposed phenomenological models for asymmetric nuclear matter.
While some of the inequalities are difficult to test in nuclear physics experiments, each of our results could be tested in the cold Fermi gas system, where parameters in the effective Lagrangian can be tuned. Such experiments can in principle test the inequalities over a range of physical parameters and probe universal results in the limit of infinite scattering length and zero range. Although cold gases with four fermion species have not yet been produced, they may be possible in the near future.
On the computational side, the inequalities can also be checked by non-perturbative lattice simulations. There have been several recent simulations of effective theories on the lattice [@Muller:1999cp; @Lee:2004si; @Wingate:2004wm; @Lee:2004qd]. It will be particularly interesting to look at symmetric and asymmetric nuclear matter in the Wigner symmetry limit, which can be simulated without any sign problem.
It remains to be seen how well many-body nucleon systems can be described without explicit pions. Results from [@Lee:2004qd] for dilute neutron matter suggest that lowest-order effective field theory without pions works very well in describing the neutron equation of state. The situation for nearly symmetric nuclear matter, however, is less clear due to the effect of saturation which requires higher densities.
With pions included the effective theory action can in general become negative. This would in principle invalidate any inequality based on positivity of the action. However, it has been shown that this sign problem goes away in the static limit [@Chandrasekharan:2003wy]. Furthermore, the sign problem has been numerically observed to be small [@Lee:2004si] in simulations with neutrons and neutral pions for temperatures above 10 MeV and densities at or below normal nuclear matter density. If one neglects these sign changes, then the sign-quenched results for the effective theory with pions will also satisfy each of the inequalities proven here.
The author thanks Jiunn-Wei Chen and Thomas Schaefer for several helpful discussions. This work was supported by Department of Energy grant DE-FG02-04ER41335.
|
---
abstract: 'A bosonized nonlinear (polynomial) supersymmetry is revealed as a hidden symmetry of the finite-gap Lamé equation. This gives a natural explanation for peculiar properties of the periodic quantum system underlying diverse models and mechanisms in field theory, nonlinear wave physics, cosmology and condensed matter physics.'
author:
- |
\
[*${}^1$Departamento de Física, Universidad de Santiago de Chile, Casilla 307, Santiago 2, Chile ${}^2$Departamento de Física Teórica, Atómica y Óptica, Universidad de Valladolid, 47071, Valladolid, Spain\
E-mails: [email protected], [email protected], [email protected]*]{}
title: ' **Hidden nonlinear supersymmetry of finite-gap Lamé equation**'
---
Supersymmetry [@SUSY], as a fundamental symmetry providing a natural mechanism for the unification of gravity with the electromagnetic, strong and weak interactions, still awaits experimental confirmation. On the other hand, in nuclear physics supersymmetry was predicted theoretically [@IachelloNucl] and has been confirmed experimentally [@NuclSUSY] as a *dynamic* symmetry linking properties of some bosonic and fermionic nuclei. It would be interesting to look for physical systems whose special properties could be explained by a hidden *ordinary* (not a dynamic) supersymmetry.
In the present Letter we show that the quantum system described by the finite-gap Lamé equation possesses a hidden supersymmetry. A very unusual nature of the revealed supersymmetry is that it manifests as a *nonlinear* symmetry of a *bosonic* system without fermion (spin) degrees of freedom. This means that we find here a kind of a *bosonized* supersymmetry giving a natural explanation for peculiar properties of the periodic quantum problem underlying many physical systems.
The Lamé equation first arose in the solution of the Laplace equation by separation of variables in ellipsoidal coordinates [@WW], and one of its early applications was in the quantum Euler top problem [@KramItWang]. It nowadays plays a prominent role in physics, appearing in such diverse contexts as crystal models in solid state physics [@Suth; @AGI], exactly and quasi-exactly solvable quantum systems [@Turb; @FGR], integrable systems and solitons [@PerOl; @Solitons], supersymmetric quantum mechanics [@SUSYper], BPS monopoles [@BPS], instantons and sphalerons [@Dunne; @Sphalerons], classical Ginzburg-Landau field theory [@MaiSte], Josephson junctions [@Josj], magnetostatic problems [@DobRit], inhomogeneous cosmologies [@RedSh], Kaluza-Klein theories [@KK], chaos [@chaos], and modern theories of preheating after inflation [@PreHeat]. Most often, the Lamé equation appears in the physics literature in the Jacobian form of a one-dimensional Schrödinger equation with a doubly periodic potential, $$\label{Lame}
H_j\Psi=E\Psi,\quad H_j=-\frac{d^2}{dx^2}+j(j+1) k^2 \sn^2(x,
k),$$ where $\sn (x,k)\equiv \sn\, x$ is the odd Jacobi elliptic function with modulus $k$ ($0<k<1$) and real and imaginary periods $4K$ and $2iK'$; here $K=K(k)$ is the complete elliptic integral of the first kind, $K'=K(k')$, and $k'^2=1-k^2$ [@WW; @AbS]. A remarkable property of this equation is that at integer values of the parameter $j=n$, its energy spectrum has exactly $n$ gaps, which separate the $n+1$ allowed energy bands. The $2n+1$ eigenfunctions associated with the boundaries $E_i(n)$, $i=0,1,\ldots, 2n$, of the allowed energy bands $[E_{0},E_{1}]$, $[E_{2},E_{3}],\ldots, [E_{2n},\infty)$ are given by polynomials (‘Lamé polynomials’) of degree $n$ in the elliptic functions $\sn\, x$, $\cn\, x$ and $\dn\, x$. These polynomials have real periods $4K$ or $2K$, and the boundary energy levels $E_i(n)$ are *non-degenerate*. The states in the interior of the allowed zones are described by quasi-periodic Bloch-Floquet wave functions (which can be expressed in terms of theta functions [@WW]) of quasi-momentum $\kappa(E)$, $$\Psi^\pm_E(x+2K)=\exp (\pm
i\kappa(E))\Psi^\pm_E(x).$$ Every such interior energy level is *doubly degenerate*. For any non-integer value of the parameter $j$, Eq. (\[Lame\]) has an *infinite* number of allowed and prohibited zones.
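The band structure described above is easy to verify numerically in the simplest nontrivial case. The Python sketch below (our own illustration; the modulus $k=0.8$, the grid size and the elementary second-order finite-difference discretization are arbitrary choices) diagonalizes $H_1$ over one period $2K$ with periodic and antiperiodic boundary conditions and recovers the three band edges $k^2$, $1$ and $1+k^2$ of the $j=1$ problem discussed below.

```python
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.8
m = k ** 2                               # scipy's parameter m = k^2
K = ellipk(m)
L = 2 * K                                # period of sn^2, hence of the j = 1 potential
N = 800
h = L / N
x = np.arange(N) * h
sn = ellipj(x, m)[0]
V = 2 * m * sn ** 2                      # j(j+1) k^2 sn^2(x,k) with j = 1

def lowest_levels(bloch):
    """Eigenvalues of -d^2/dx^2 + V with psi(x + L) = bloch * psi(x), bloch = +1 or -1."""
    H = np.diag(2.0 / h ** 2 + V)
    H -= np.diag(np.ones(N - 1), 1) / h ** 2
    H -= np.diag(np.ones(N - 1), -1) / h ** 2
    H[0, -1] -= bloch / h ** 2
    H[-1, 0] -= bloch / h ** 2
    return np.sort(np.linalg.eigvalsh(H))[:2]

print("periodic    :", lowest_levels(+1))   # lowest value ~ E_0 = k^2
print("antiperiodic:", lowest_levels(-1))   # lowest two ~ E_1 = 1, E_2 = 1 + k^2
```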
The double degeneracy of the energy levels is typical for a quantum mechanical system with $N=2$ supersymmetry. But the presence of $2n+1$ band-edge singlet states in the $n$-gap Lamé equation indicates an unusual, nonlinear character [@AISP; @PCM] of a possible hidden supersymmetry. To reveal it, one notes that in the limiting case $k=1$ we have $K=\infty$, $K'=\frac{\pi}{2}$, and system (\[Lame\]) reduces to the Pöschl-Teller quantum system given by the potential $$U(x)=-j(j+1)\sech^2 x+ j(j+1).$$ The latter, as was shown recently in [@FP], possesses at $j=n$ a hidden polynomial supersymmetry [@AISP; @PCM] of order $2n+1$ generated by the supercharges $Q_n$ and $\tilde{Q}_n=iRQ_n$, $$\begin{aligned}
\label{Supern}
&[Q_n,H_n]=[\tilde{Q}_n,H_n]=0,
\quad \{Q_n,\tilde{Q}_n\}=0,&\\
&Q_n^2=\tilde{Q}_n^2= P_{2n+1}(H_n),&\label{QPH}\end{aligned}$$ where $P_{2n+1}(H_n)$ is a polynomial of degree $2n+1$ in the Hamiltonian $H_n$, $R$ is a reflection, $R\Psi(x)=\Psi(-x)$, identified as the grading operator, $$\label{R}
[R,H_n]=0,\quad \{R,Q_n\}=\{R,\tilde{Q}_n\}=0,\quad R^2=1,$$ and $Q_n$ is a self-conjugate local differential operator of degree $2n+1$. Based on this observation, we first note that in the trivial case of the free particle system with $j=0$, characterized by the single allowed (‘conduction’) band $[E_0(0),\infty)$, $E_0(0)=0$, the odd first-order differential operator $Q_0=-iD$, $D=\frac{d}{dx}$, is identified as the supercharge. For the one-gap Lamé system (\[Lame\]) with $j=1$, let us look for the self-conjugate integral of motion $Q_1$, $[Q_1,H_1]=0$, in the form of a third-order differential operator. A direct check shows that $$\label{P1}
iQ_1=D^3+fD+\frac{1}{2}f',$$ is the odd integral, $\{R,Q_1\}=0$. Here $$\label{f}
f:= 1+ k^2-3k^2\sn^2x,$$ $f'=Df$. The double-periodic elliptic function $f$ with periods $2K$ and $2iK'$ satisfies the elliptic curve equation $$\begin{aligned}
\label{f'}
&(f')^2=\frac{4}{3}(a_1-f)(f-a_2)(f-a_3),&\end{aligned}$$ whose characteristic roots are $$\begin{aligned}
&a_1=f(0)=1+k^2,\quad a_2=f(K)=1-2k^2,&\nonumber\\
&a_3=f(K+iK')=k^2-2,& \label{roots}\end{aligned}$$ $a_1+a_2+a_3=0$. Differentiation of Eq. (\[f'\]) gives the identities $$\label{fn}
f''+2f^2=2b^2,\quad D^l(D^2+2f)f=0,$$ where $b^2=-\frac{1}{3}(a_1a_2+a_1a_3+a_2a_3)= k^4-k^2+1$, $l=1,2,\ldots$. Using these relations, one finds that $Q^2_1=P_3(H_1)$, $$P_3(H_1)=(H_1-E_0(1))(H_1-E_1(1))(H_1-E_2(1)).$$ The energies $$\label{Ej1}
E_0(1)=k^2,\quad E_1(1)=1,\quad E_2(1)=1+k^2$$ correspond here to the eigenfunctions $\Psi^{(1)}_0=\dn\,x$, $\Psi^{(1)}_1=\cn\,x$, $\Psi^{(1)}_2=\sn\,x$, which form a zero-mode subspace (kernel) of the supercharge $Q_1$. The states in the interior of the two allowed zones are described by the quasi-periodic eigenfunctions $$\label{j1quasi}
\Psi^\pm_E=\frac{H(x\pm\alpha)}{\Theta(x)}\exp(\mp xZ(\alpha)),$$ where $H(x)$, $\Theta(x)$ and $Z(x)$ are the Jacobi Eta, Theta and Zeta functions, while the parameter $\alpha$ is related to the energy eigenvalue $E$ via the equation $E=\dn^2\alpha+k^2$, see Ref. [@WW]. They are also the eigenstates of the supercharge, $$\label{QB1}
Q_1\Psi^\pm_E=\pm \sqrt{P_3(E)}\, \Psi^\pm_E.$$
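These relations can also be checked on a grid; the Python sketch below (our own consistency check, with FFT spectral derivatives over the period $4K$ and an arbitrary modulus) confirms that $Q_1$ annihilates $\dn\,x$, $\cn\,x$ and $\sn\,x$, commutes with $H_1$, and squares to $P_3(H_1)$ up to discretization error.

```python
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.7
m = k ** 2
K = ellipk(m)
P = 4 * K                                      # common period of sn, cn, dn
M = 512
x = np.arange(M) * P / M
sn, cn, dn, _ = ellipj(x, m)

ik = 2j * np.pi * np.fft.fftfreq(M, d=P / M)   # spectral derivative symbol

def D(psi, order=1):
    return np.fft.ifft(ik ** order * np.fft.fft(psi))

f = 1 + m - 3 * m * sn ** 2
fp = D(f).real

def H1(psi):
    return -D(psi, 2) + 2 * m * sn ** 2 * psi

def Q1(psi):
    return -1j * (D(psi, 3) + f * D(psi) + 0.5 * fp * psi)

# the Lame polynomials dn, cn, sn span the kernel of Q_1
for name, edge in (("dn", dn), ("cn", cn), ("sn", sn)):
    print(name, np.max(np.abs(Q1(edge + 0j))))              # ~ 0 up to grid error

# [Q_1, H_1] = 0 and Q_1^2 = (H_1 - k^2)(H_1 - 1)(H_1 - 1 - k^2) on a smooth test function
psi = np.exp(np.cos(2 * np.pi * x / P)) + 0j
print(np.max(np.abs(Q1(H1(psi)) - H1(Q1(psi)))))            # commutator ~ 0
rhs = psi.copy()
for E in (m, 1.0, 1.0 + m):
    rhs = H1(rhs) - E * rhs
print(np.max(np.abs(Q1(Q1(psi)) - rhs)))                    # ~ 0
```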
Assuming that the $j=2$ Lamé polynomials $\Psi^{(2)}_0=f+b$, $\Psi^{(2)}_1=\cn\,x\dn\,x$, $\Psi^{(2)}_2=\sn\,x\dn\,x$, $\Psi^{(2)}_3=\sn\,x\cn\,x$, $\Psi^{(2)}_4=f-b$ [@FGR] form a zero-mode subspace of the fifth order integral $Q_2$, one finds $$\begin{aligned}
\label{Q2}
&iQ_2=D^5+5fD^3+\frac{15}{2}f'D^2+\left(\frac{9}{2}f''+4f^2\right)D.&\end{aligned}$$ In the same way, for $j=3$ and $j=4$ a tedious calculation gives the supercharges $Q_3$ and $Q_4$. We do not display their explicit form here, but instead describe the general structure of the supercharges corresponding to $j=0,1,2,3,4$. First, one notes that if the derivative is assigned the homogeneity degree $d_h(D)=1$, then in accordance with Eq. (\[f'\]) the function $f$ can be assigned $d_h(f)=2$, so that $d_h(H_j)=2$ and $d_h(Q_j)=2j+1$. Every supercharge has the leading term $D^{2j+1}$, the next term is of the form $fD^{2j-1}$, and each subsequent term lowers the order of the derivative on the right by one unit. The supercharges corresponding to the even cases $j=0,2,4$ have a last term proportional to $D$. With this structure of the supercharges for the first cases $j=0,\ldots, 4$ at hand, we can now fix the form of the supercharges in the generic case $j=n$. Let us present the Hamiltonian operator in terms of the function $f$, $$\label{Hj}
H_j=-D^2-h_j\left(f(x)-f(0)\right),\quad h_j=\frac{1}{3}j(j+1),$$ and look for the supercharge in the form $$\begin{aligned}
&iQ_j=D^{2j+1}+\alpha_jfD^{2j-1}+\beta_jf'D^{2j-2}
+(\gamma_jb^2+&\nonumber\\
&+\delta_jf^2)D^{2j-3}+\lambda_jf'''D^{2j-4}+...,&
\label{Qjgen}\end{aligned}$$ where in coefficients associated to the factors $D^l$, $l\geq 0$, it is necessary to include all the independent structures of homogeneity degree $d_h=2j+1-l$ given in terms of $f$ and its derivatives modulo identities (\[fn\]). Requiring $[Q_j,H_j]=0$, we can fix the first coefficients, $$\begin{aligned}
&\alpha_j=h_j\left(j+\frac{1}{2}\right),\,
\beta_j=\alpha_j\left(j-\frac{1}{2}\right),\,
\gamma_j=\frac{6}{5}\beta_j(j-1),&\nonumber\\
&\delta_j=\frac{5}{36}\gamma_j(j-6),\quad
\lambda_j=-\frac{5}{72}\gamma_j\left(j-\frac{3}{2}
\right)(j-2).&
\label{abg}\end{aligned}$$ These coefficients allow us to find a recurrence relation for the supercharges. We note that the kernel $K_{2n}$ of the supercharge $Q_j$ with $j=2n$ is spanned by the $4n+1$ functions $$\label{Kerodd}
\varphi_a\cdot (1,f,\ldots,f^{n-1}),\,\,\,a=1,\ldots,4;\,\quad f^n,$$ $\varphi_1=\sn\, x\cn\, x$, $\varphi_2=\sn\, x\dn\,x$, $\varphi_3=\cn\, x\dn\,x $, $\varphi_4=1$, which are linear combinations of the Lamé polynomials of the degree $2n$. For $j=2n+1$, the kernel $K_{2n+1}$ is formed by the $4n+3$ functions $$\label{Kereven}
\chi_a\cdot (1,f,\ldots,f^{n-1}),\, a=1,...,4;\,\,\, \chi_a f^n,\, a=1,2,3,$$ with $\chi_1=\dn\, x$, $\chi_2=\cn\, x$, $\chi_3=\sn\, x$, $\chi_4=\sn\, x\cn\, x\dn\, x$, where for $n=0$ the states proportional to $\chi_4$ are absent. In comparison with the kernel $K_{j-2}$ of the supercharge $Q_{j-2}$, the kernel $K_j$ of the supercharge $Q_j$ includes four additional states. These are $\varphi_af^{n-1}$, $a=1,2,3$, and $f^n$ for $j=2n$, and $\chi_af^{n-1}$, $a=1,2,3$, and $\chi_4f^{n-1}$ for $j=2n+1$. Therefore, there should exist a relation $$\label{LamQ}
Q_j=\Lambda_jQ_{j-2},$$ where $\Lambda_j$ is a fourth order differential operator of homogeneity degree $d_h(\Lambda)=4$ of the form (modulo the first identity from (\[fn\])) $$\label{Lamj}
\Lambda_j=D^4+\tilde{\alpha}_jfD^2+\tilde{\beta}_jf'D
+\tilde{\gamma}_jb^2+\tilde{\tau}_jf''.$$ Using Eqs. (\[Qjgen\]), (\[abg\]), one finds the numerical coefficients of the operator $\Lambda_j$ $$\begin{aligned}
&\tilde{\alpha}_j=2j(j-1)+1\, ,\quad
\tilde{\beta}_j=\frac{1}{3}j(4j^2-7)+\frac{3}{2}\, ,&\nonumber\\
&\tilde{\gamma}_j=j^2(j-1)^2\, ,\quad \tilde{\tau}_j=\frac{1}{6}
(j+3)(j+1)(j-1)^2.&
\label{tildecoef}\end{aligned}$$ The operator $\Lambda_j$ is not symmetric, and in the representation (\[LamQ\]) it serves to annihilate the additional zero modes of $Q_j$ after application to them of the operator $Q_{j-2}$. There is also an alternative recurrence representation of the supercharge, $
Q_j=Q_{j-2}\Lambda^\dagger_j.
$ The Hermitian conjugate operator $\Lambda^\dagger_j$ acts invariantly on the kernel of $Q_{j-2}$, $\Lambda^\dagger_j: K_{j-2}\rightarrow K_{j-2}$, and transforms the four additional states of the kernel of $Q_j$ into some linear combinations of the states of $K_{j-2}$. Relation (\[LamQ\]) (or, equivalently, $Q_j=Q_{j-2}\Lambda^\dagger_j$) allows one to calculate $Q_j$ for arbitrary even and odd values of $j$ starting from the explicitly displayed supercharges $Q_2$ (or $Q_0$) and $Q_1$.
With the fixed form of the integral $Q_j$, let us discuss the general structure of a hidden polynomial supersymmetry. Since $Q_j$ is a self-conjugate odd local differential operator, one can introduce another, nonlocal supercharge, $\tilde{Q}_j=iRQ_j$. $H_j$ and $R$ are commuting self-conjugate operators. Let $\Psi_E^\pm$ be their common eigenstates, $H_j\Psi_E^\pm=E\Psi_E^\pm$, $R\Psi_E^\pm=\pm\Psi_E^\pm$. Since $Q_j$ commutes with $H_j$ and anticommutes with $R$, there exist some linearly independent combinations $\Psi_{E,q}$ and $R\Psi_{E,q}$ of $\Psi_E^+$ and $\Psi_E^-$ such that $$\label{Qeq}
Q_j\Psi_{E,q}=q(E)\Psi_{E,q},\quad
Q_jR\Psi_{E,q}=-q(E)R\Psi_{E,q}.$$ Then the states $\Psi_{E,q}$ and $R\Psi_{E,q}$ are eigenstates of $Q^2_j$ and $\tilde{Q}_j^2$ with the same eigenvalue $q^2(E)$, and, hence, the same is valid for the states $\Psi^\pm_E$, $Q^2_j\Psi^\pm_E=\tilde{Q}_j^2\Psi^\pm_E= q^2(E)\Psi^\pm_E$. All the states $\Psi^\pm_E$ corresponding to the allowed zones constitute a basis in the class of Bloch functions (including the periodic and antiperiodic ones). Hence, we get the operator equality $Q_j^2=\tilde{Q}_j^2=q^2(H_j)$. The operator $Q_j^2$ is a local differential operator of degree $4j+2$. This means that the operator $q^2(H_j)$ is a polynomial of degree $2j+1$ in its argument, i.e. $q^2(H_j)=C(H_j-c_0)(H_j-c_1)\ldots(H_j-c_{2j})$, where $C$ is a real constant, while the $c_i$ are real, or some pairs of them could be mutually complex conjugate. Comparing the coefficients of the operator $D^{4j+2}$ in $Q_j^2$ and in $q^2(H_j)$, we find that $C=1$. Then, applying the operator $q^2(H_j)$ to the $2j+1$ (anti)periodic eigenstates of the Hamiltonian $H_j$ corresponding to the boundaries of the allowed zones $E_i(n)$, and remembering that the same states constitute the kernel $K_j$ of the supercharge $Q_j$, we find that the set of constants $c_i$ coincides with the set of boundary eigenvalues $E_i(n)$, $i=0,\ldots, 2n$. Thus, we have shown that $Q_n^2=\tilde{Q}_n^2=P_{2n+1}(H_n)$, where $$\begin{aligned}
\label{QHPL}
&{P}_{2n+1}(E):=\prod_{i=0}^{2n}(E-E_i(n))&\end{aligned}$$ is the Lamé spectral polynomial. The nontrivial odd integrals generate the order $2n+1$ *polynomial superalgebra* being the hidden symmetry of the *bosonic* system (\[Lame\]).
The nonlinear character of the local supercharges and of the supersymmetry of the system (\[Lame\]) is reminiscent of the nonlinear symmetry of a particle in a Coulomb potential generated by the Laplace-Runge-Lenz vector integral, and of that of an anisotropic oscillator with commensurable frequencies [@Boer]. Let us clarify the dynamical picture underlying the hidden nonlinear supersymmetry, keeping in mind the analogy with the anisotropic oscillator. Consider the one-gap case. The Hamiltonian $H_1$ can be factorized in three possible ways: $$\begin{aligned}
\label{H1fac}
&H_1=A_{d}^\dagger A_{d}+k^2=A_c^\dagger A_c+1=A_s^\dagger
A_s+1+k^2,&\end{aligned}$$ where $A_d=D-(\ln \dn\, x)'$, $A_d\dn\, x=0$, and $A_c$ and $A_s$ have a similar structure in terms of the $\cn\, x$ and $\sn\, x$. Write the Heisenberg equations of motion of $A_d$ and $A_s^\dagger$, $$\label{AdAs}
i\dot{A}_d=\omega_d(x)A_d,\quad
i\dot{A}_s^\dagger=-A_s^\dagger\,
\omega_s(x),$$ $\omega_d(x)=-2(\ln \dn\, x)''$, $\omega_s(x)=-2(\ln
\sn\, x)''$. Define the operator $A_{s/d}=D-(\ln \sn\, x)'+(\ln\dn\,
x)'$, for which $$\label{Asd}
i\dot{A}_{s/d}=\omega_s(x)A_{s/d}-A_{s/d}\omega_d(x).$$ Then the relation $$\label{Qfact}
iA_s^\dagger A_{s/d}A_d=Q_1$$ gives us one of the six possible factorizations of the supercharge (\[P1\]). Note that the operators $A_c$, $A_s$, $A_{s/d}$ and the associated instantaneous frequencies have singularities on the real line, which cancel in $H_1$ and $Q_1$. For $j=n>1$ the same dynamical mechanism underlies the supercharge structure and its possible factorizations.
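As a concrete numerical illustration of the factorizations (\[H1fac\]) — not part of the original analysis, with an arbitrarily chosen modulus, and taking the one-gap Hamiltonian in the standard form $H_1=-D^2+2k^2\sn^2(x,k)$ — the following Python sketch checks that $\dn\, x$, $\cn\, x$ and $\sn\, x$ are zero modes of $A_d$, $A_c$, $A_s$ and hence eigenstates of $H_1$ with eigenvalues $k^2$, $1$ and $1+k^2$.

```python
# Finite-difference check that dn x, cn x, sn x are band-edge eigenstates of
# H_1 = -d^2/dx^2 + 2 k^2 sn^2(x,k) with eigenvalues k^2, 1 and 1 + k^2,
# as implied by the three factorizations of H_1 given above (illustrative sketch).
import numpy as np
from scipy.special import ellipj

k = 0.8                          # elliptic modulus (assumed test value)
m = k**2                         # SciPy uses the parameter m = k^2
x = np.linspace(-3.0, 3.0, 4001)
h = x[1] - x[0]
sn, cn, dn, _ = ellipj(x, m)

def apply_H1(psi):
    """Second-order finite-difference action of H_1 on the interior grid points."""
    d2 = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / h**2
    return -d2 + 2.0 * m * sn[1:-1]**2 * psi[1:-1]

for psi, E, name in [(dn, m, "dn"), (cn, 1.0, "cn"), (sn, 1.0 + m, "sn")]:
    residual = np.max(np.abs(apply_H1(psi) - E * psi[1:-1]))
    print(f"H_1 {name} - {E:.2f} {name}: max residual {residual:.1e}")
```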
We conclude that the physical systems associated with the $n$-gap Lamé equation possess a hidden bosonized nonlinear supersymmetry. It underlies the double degeneracy of the energy levels in the interior of the allowed bands and the singlet character of the $2n+1$ band-edge states. The latter form a zero-mode subspace of the local supercharge $Q_j$ (as well as of the nonlocal one, $\tilde{Q}_j$), which is a differential operator of degree $2n+1$. Taking into account the parity of the states (\[Kerodd\]) and (\[Kereven\]), one finds that the system (\[Lame\]) with any $j=n$ is characterized by a Witten index [@Witten] equal to $1$. Information on the transfer matrix can also be extracted from the structure of the hidden supersymmetry; a detailed analysis of this aspect will be presented in a separate publication.
In the limit $k=1$, $\sn\,x=\tanh\, x$, $\cn\,x=\dn\,x=\sech\, x$, the valence bands $[E_0,E_1],\ldots, [E_{2n-2},E_{2n-1}]$ shrink, and the two boundary states of each valence band transform into one bound state of the related Pöschl-Teller system. As a result, the kernel of the supercharges of the latter system is constituted not only by the bound eigenstates and the lowest eigenstate of the continuous part of its spectrum, but should also include $n$ unbounded states. A discussion of this Pöschl-Teller limit of the Lamé equation, and of its relation to the bound-state Aharonov-Bohm and Dirac delta potential systems from the viewpoint of the hidden supersymmetry, will be presented elsewhere.
A simple shift of the argument in the Lamé equation by half of the real period of the Hamiltonian $H_j$, $x\rightarrow x+ K$, gives the isospectral doubly-periodic system $$\begin{aligned}
\label{Iso}
\tilde{H}_j=-D^2+j(j+1)(1-k'{}^2\dn^{-2}(x,k)).\end{aligned}$$ At $j=n$ it has a hidden polynomial supersymmetry generated by the supercharges $Q_j$ and $\tilde{Q}_j$, whose explicit structure can be obtained by applying the Jacobi function identities $\sn\,(x+K)=\cn\, x/\dn\, x$, $\cn\,(x+K)=-k'\sn\, x/\dn\, x$, $\dn\, (x+K)=k'/\dn\, x$ to the supercharges of the system (\[Lame\]). It would be interesting to look for the hidden polynomial supersymmetry in other finite-zone doubly periodic quantum systems.
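These shift identities are straightforward to verify numerically; the short sketch below (purely illustrative, with an arbitrarily chosen modulus) checks them with SciPy's Jacobi elliptic functions.

```python
# Numerical check of the half-period shift identities quoted above:
# sn(x+K) = cn x / dn x,  cn(x+K) = -k' sn x / dn x,  dn(x+K) = k' / dn x.
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.7                                   # assumed modulus
m, kp = k**2, np.sqrt(1.0 - k**2)         # parameter m = k^2 and complementary modulus k'
K = ellipk(m)                             # complete elliptic integral K(k)
x = np.linspace(-2.0, 2.0, 101)
sn, cn, dn, _ = ellipj(x, m)
snK, cnK, dnK, _ = ellipj(x + K, m)

print(np.max(np.abs(snK - cn / dn)),      # all residuals should be near machine precision
      np.max(np.abs(cnK + kp * sn / dn)),
      np.max(np.abs(dnK - kp / dn)))
```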
Finally, it would be interesting to clarify the role played by the revealed hidden nonlinear supersymmetry of the finite-gap Lamé equation in the theory of the periodic Korteweg-de Vries equation [@Solitons] and in periodic relativistic field theories [@Dunne; @Sphalerons; @MaiSte], to which the system (\[Lame\]) is intimately related.
The work has been partially supported by the FONDECYT Project 1050001 (MP), the CONICYT PhD Program Fellowship (FC), the Spanish Ministerio de Educación y Ciencia (Project MTM2005-09183) and the Junta de Castilla y León (Excellence Project VA013C05) (LMN). LMN also thanks the Mecesup Project USA0108 for making possible his visit to the University of Santiago de Chile, and the Department of Physics of this University for hospitality.
[99]{}
Y. A. Golfand and E. P. Likhtman, JETP Lett. [**13**]{}, 323 (1971); P. Ramond, Phys. Rev. D [**3**]{}, 2415 (1971); A. Neveu and J. H. Schwarz, Nucl. Phys. B [**31**]{}, 86 (1971); D. V. Volkov and V. P. Akulov, Phys. Lett. B [**46**]{}, 109 (1973); J. Wess and B. Zumino, Nucl. Phys. B [**70**]{}, 39 (1974); *ibid.* B [**78**]{}, 1 (1974). F. Iachello, Phys. Rev. Lett. [**44**]{}, 772 (1980); A. B. Balantekin, I. Bars and F. Iachello, *ibid.* [**47**]{}, 19 (1981); Nucl. Phys. A [**370**]{}, 284 (1981); F. Iachello, Phys. Rev. Lett. [**95**]{}, 052503 (2005).
A. Metz *et al.*, Phys. Rev. Lett. [**83**]{}, 1542 (1999); Phys. Rev. C [**61**]{}, 064313 (2000); J. Gröger *et al.*, Phys. Rev. C [**62**]{}, 064304 (2000).
E. T. Whittaker and G. N. Watson, *Course of Modern Analysis* (Cambridge Univ. Press, Cambridge, 1980).
H. A. Kramers and G. P. Ittmann, Z. Physik [**53**]{}, 553 (1929); *ibid.* [**58**]{}, 217 (1929); S. C. Wang, Phys. Rev. [**33**]{}, 123 (1929); *ibid.* [**34**]{}, 243-252 (1929).
B. Sutherland, Phys. Rev. A [**8**]{}, 2514 (1973).
Y. Alhassid, F. Gursey and F. Iachello, Phys. Rev. Lett. [**50**]{}, 873 (1983); H. Li and D. Kusnezov, *ibid.* [**83**]{}, 1283 (1999); H. Li, D. Kusnezov, and F. Iachello, J. Phys. A [**33**]{}, 6413 (2000).
A. V. Turbiner, Comm. Math. Phys. [**118**]{} (1988) 467; J. Phys. A [**22**]{}, L1 (1989).
F. Finkel, A. Gonzalez-Lopez, M. A. Rodriguez, J. Phys. A [**33**]{}, 1519 (2000).
M. A. Olshanetsky and A. M. Perelomov, Phys. Rep. [**94**]{}, 313 (1983). S. Novikov, S. V. Manakov, L. P. Pitaevskii and V. E. Zakharov, *Theory of Solitons* (Plenum, New York, 1984).
G. V. Dunne and J. Feinberg, Phys. Rev. D [**57**]{}, 1271 (1998); D. J. Fernandez, J. Negro and L. M. Nieto, Phys. Lett. A [**275**]{}, 338 (2000). R. S. Ward, J. Phys. A [**20**]{}, 2679 (1987); P. M. Sutcliffe, *ibid.* [**29**]{} (1996) 5187. G. V. Dunne and K. Rao, JHEP [**0001**]{}, 019 (2000).
N. S. Manton and T. M. Samols, Phys. Lett. B [**207**]{}, 179 (1988); J. Q. Liang, H. J. W. Muller-Kirsten and D. H. Tchrakian, *ibid.* [**282**]{}, 105 (1992); Y. Brihaye, S. Giller, P. Kosinski and J. Kunz, *ibid.* [**293**]{}, 383 (1992). R. S. Maier and D. L. Stein, Phys. Rev. Lett. [**87**]{}, 270601 (2001). J.-G. Caputo, N. Flytzanis, Y. Gaididei, N. Stefanakis and E. Vavalis, Supercond. Sci. Technol. [**13**]{}, 423 (2000). H.-J. Dobner and S. Ritter, Math. Comput. Modelling [**27**]{}, 1 (1998).
R. Kantowski and R. C. Thomas, Astrophys. J. [**561**]{}, 491 (2001). S. K. Nam, JHEP [**0004**]{}, 002 (2000).
M. Brack, M. Mehta and K. Tanaka, J. Phys. A [**34**]{}, 8199 (2001). D. Boyanovsky, H. J. de Vega, R. Holman and J. F. J. Salgado, Phys. Rev. D [**54**]{}, 7570 (1996); P. B. Greene, L. Kofman, A. D. Linde and A. A. Starobinsky, *ibid.* [**56**]{}, 6175 (1997); D. I. Kaiser, *ibid.* [**57**]{}, 702 (1998); F. Finkel, A. Gonzalez-Lopez, A. L. Maroto and M. A. Rodriguez, *ibid.* [**62**]{}, 103515 (2000); P. Ivanov, J. Phys. A [**34**]{}, 8145 (2001). M. Abramowitz and I. Stegun (Eds.), *Handbook of Mathematical Functions*, (Dover, New York, 1990).
A. A. Andrianov, M. V. Ioffe and V. P. Spiridonov, Phys. Lett. A [**174**]{}, 273 (1993). M. Plyushchay, Int. J. Mod. Phys. A **15**, 3679 (2000); F. Correa, M. A. del Olmo and M. S. Plyushchay, Phys. Lett. B [**628**]{}, 157 (2005). F. Correa and M. S. Plyushchay, hep-th/0605104.
J. de Boer, F. Harmsze and T. Tjin, Phys. Rept. [**272**]{}, 139 (1996). E. Witten, Nucl. Phys. B [**188**]{}, 513 (1981).
|
---
abstract: 'We investigate the reaction path followed by Heavy Ion Collisions with exotic nuclear beams at low energies. We will focus on the interplay between reaction mechanisms, fusion vs. break-up (fast-fission, deep-inelastic), which in exotic systems is expected to be influenced by the symmetry energy term at densities around the normal value. The evolution of the system is described by a Stochastic Mean Field transport equation (SMF), where two parametrizations for the density dependence of the symmetry energy (Asysoft and Asystiff) are implemented, allowing one to explore the sensitivity of the results to this ingredient of the nuclear interaction. The method described here, based on the event-by-event evolution of phase-space quadrupole collective modes, allows one to extract the fusion probability at relatively early times, when the transport results are reliable. Fusion probabilities for reactions induced by $^{132}$Sn on $^{64,58}$Ni targets at 10 AMeV are evaluated. We obtain larger fusion cross sections for the more n-rich composite system, and, for a given reaction, in the Asysoft choice. Finally, a collective charge equilibration mechanism (the Dynamical Dipole) is revealed in both fusion and break-up events, with a strength depending on the stiffness of the symmetry term just below saturation.'
author:
- 'C.Rizzo $^{a,b}$,V.Baran$^{c}$, M. Colonna$^{a}$, A. Corsi$^{d}$, M.Di Toro$^{a,b,*}$'
title: Symmetry Energy Effects on Fusion Cross Sections
---
Introduction
============
Production of exotic nuclei has opened the way to explore, in laboratory conditions, new aspects of nuclear structure and dynamics up to extreme ratios of neutron (N) to proton numbers (Z). An important issue addressed is the density dependence of the symmetry energy term in the nuclear Equation of State (EOS), of interest also for the properties of astrophysical objects [@bao01; @ste05; @bar05a; @bao08]. By employing Heavy Ion Collisions (HIC), at appropriate beam energy and centrality, the isospin dynamics at different densities of nuclear matter can be investigated [@bar05a; @bao08; @tsa_epj; @tsa09; @bar04; @Kel10; @fil05].
In this work we will focus on the interplay of fusion vs. deep-inelastic mechanisms for dissipative HIC with exotic nuclear beams at low energies, just above the Coulomb Barrier (between $5$ and $20$ AMeV), where unstable ion beams with large asymmetry will soon be available. We will show that the competition between reaction mechanisms can be used to study properties of the symmetry energy term in a density range around the normal value. Dissipative collisions at low energy are characterized by interaction times that are quite long and by a large coupling among various mean field modes that may eventually lead to the break-up of the system. Hence the idea is to probe how the symmetry energy will influence such couplings in neutron-rich systems, with direct consequences for the fusion probability. We will show that, within our approach, the reaction path is fully characterized by the fluctuations, at suitable time instants, of phase space quadrupole collective modes that lead the composite system either to fusion or to break-up.
Moreover, it is now well established that in the same energy range, for dissipative reactions between nuclei with different $N/Z$ ratios, the charge equilibration process has a collective character resembling a large amplitude Giant Dipole Resonance (GDR), see the recent [@bardip09] and refs. therein. The gamma yield resulting from the decay of such a pre-equilibrium isovector mode can encode information about the early stage of the reaction [@cho93; @bar96; @sim01; @bar01b; @bar01]. This collective response appears in the intermediate neck region, while the system is still in a highly deformed dinuclear configuration with large surface contributions, and so it will be sensitive to the density dependence of the symmetry energy below saturation [@bardip09]. Here we will show that this mode is present also in break-up events, provided that a large dissipation is involved. In fact we see that the strength of such fast dipole emission is not much reduced passing from fusion to very deep-inelastic mechanisms. This can be expected from the fact that such an excitation is related to an entrance channel collective oscillation. Thus we suggest the interest of studying the prompt gamma radiation, with its characteristic angular anisotropy [@bardip09], even in deep-inelastic collisions with radioactive beams.
The paper is organized as follows. In Sect.II we present our transport approach to the low energy HIC dynamics with description of the used symmetry effective potentials. Sect.III is devoted to the analysis of $^{132}Sn$ induced reactions with details about the procedure to select fusion vs. break-up events. In Sect.IV we discuss symmetry energy effects on the competition between fusion and break-up (Fast-fission, Deep-inelastic, Ternary/Quaternary-fragmentation) mechanisms. The dependence on symmetry energy of the yield and angular distribution of the Prompt Dipole Radiation, expected for entrance channels with large charge asymmetries, is presented in Sect.V. Finally in Sect.VI we summarize the main results and we suggest some experiments to be performed at the new high intensity Radioactive Ion Beam (RIB) facilities in this low energy range.
Reaction Dynamics
=================
The reaction dynamics is described by a Stochastic Mean-Field (SMF) approach, an extension of the microscopic Boltzmann-Nordheim-Vlasov transport equation [@bar05a], where the time evolution of the semi-classical one-body distribution function $f({\bf r},{\bf p},t)$ follows a Boltzmann-Langevin dynamics (see [@rizzo_npa] and refs. therein): $$\frac{\partial f}{\partial t}+\frac{\bf p}{m}\cdot\frac{\partial f}{\partial {\bf r}}-
\frac{\partial U}{\partial {\bf r}}\cdot\frac{\partial f}{\partial {\bf p}}=I_{coll}[f]+\delta I[f].$$ In the SMF model the fluctuating term $\delta I[f]$ is implemented in an approximate way, through stochastic spatial density fluctuations [@Mac_npa]. Stochasticity is essential to obtain a distribution of dynamical trajectories, as well as to allow the growth of dynamical instabilities. In order to map the particle occupation at each time step, Gaussian phase-space wave packets (test particles) are considered. In the simulations 100 test particles per nucleon have been employed for an accurate description of the mean-field dynamics. In the collision integral, $I_{coll}$, an in-medium nucleon-nucleon cross section, depending on the local density, is employed [@li93]. The cross section is set equal to zero for nucleon-nucleon collisions below 50 MeV of relative energy. In this way we avoid spurious effects that may dominate in this energy range when the calculation time becomes too large. In spite of that, for low energy collisions, the simulations cannot be trusted on the time scale of compound nucleus formation, mainly because of the increasing numerical noise. As will be explained in Section III.B, the nice feature of the procedure described here to evaluate the fusion probability is that, on the basis of a shape analysis in phase space, we can separate fusion and break-up trajectories at rather early times, of the order of 200-300 fm/c, when the calculation is still fully reliable.
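For orientation only, a highly schematic sketch of the test-particle propagation underlying the Vlasov sector of Eq. (1) is shown below; `force` stands for the (here unspecified) gradient of the Skyrme mean field introduced next, and the collision integral $I_{coll}$ and the fluctuating term $\delta I$ of the SMF approach are not included.

```python
# Schematic leapfrog step for the test particles representing f(r, p, t) in the
# Vlasov (mean-field) sector of Eq. (1); collisions and stochastic fluctuations omitted.
import numpy as np

def vlasov_step(r, p, force, dt, mass=938.9):
    """Advance (N, 3) arrays of positions r (fm) and momenta p (MeV/c) by dt (fm/c).
    `force` is a placeholder returning -dU/dr in MeV/fm for each test particle."""
    p_half = p + 0.5 * dt * force(r)                  # half kick by the mean-field force
    r_new = r + dt * p_half / mass                    # drift (non-relativistic, c = 1)
    return r_new, p_half + 0.5 * dt * force(r_new)    # second half kick
```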
The mean field is built from Skyrme forces: $$\begin{aligned}
U_{n,p}&=&A\frac{\rho}{\rho_0}+B(\frac{\rho}{\rho_0})^{\alpha+1}
+C(\rho)
\frac{\rho_n-\rho_p}{\rho_0}\tau_q+ \nonumber \\
&+&\frac{1}{2} \frac{\partial C}{\partial \rho} \frac{(\rho_n-\rho_p)^2}
{\rho_0}\end{aligned}$$ where $q=n,p$ and $\tau_n=1, \tau_p=-1$. The coefficients $A,B$ and the exponent $\alpha$, characterizing the isoscalar part of the mean-field, are fixed by requiring that the saturation properties of symmetric nuclear matter ($\rho_0=0.145~fm^{-3}$, $E/A=-16~MeV$), with a compressibility modulus around [**$200~MeV$**]{}, are reproduced. The function $C(\rho)$ gives the potential part of the symmetry energy: $$\frac{E_{sym}}{A}(\rho,T=0) =
\frac{E_{sym}}{A}(kin)+\frac{E_{sym}}{A}(pot)\equiv
\frac{\epsilon_F}{3} + \frac{C(\rho)}{2\rho_0}\rho$$
![Density dependence of the symmetry energy for the two parametrizations. Solid line: Asysoft. Dashed line: Asystiff[]{data-label="esym"}](fusfig1.eps)
For the density dependence of the symmetry energy, we have considered two different parametrizations [@col98; @bar02], which are presented in Fig.\[esym\]. In the Asysoft EOS choice, $\frac{C(\rho)}{\rho_0}=482-1638 \rho$, the symmetry energy has a weak density dependence close to saturation, being almost flat around $\rho_0$. For the Asystiff case, $\frac{C(\rho)}{\rho_0}=\frac{32}{\rho_0}\frac{2 \rho}{\rho+\rho_0}$, the symmetry energy is quickly decreasing for densities below normal density. The aim of this work is to show that fusion probabilities, fragment properties in break-up events, as well as properties of prompt collective modes, in collisions induced by neutron-rich exotic beams, are sensitive to the different slopes of the symmetry term around saturation.
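For illustration, the two parametrizations can be evaluated with the sketch below, which uses the kinetic term $\epsilon_F/3$ of the expression above; the nucleon mass, $\hbar c$ and the chosen densities are our own assumptions, not values taken from the simulations.

```python
# Evaluate the symmetry energy per nucleon for the Asysoft and Asystiff
# parametrizations quoted in the text (rho in fm^-3, energies in MeV).
import numpy as np

hbarc, m_N = 197.327, 938.9                   # MeV fm and nucleon mass in MeV (assumed)
rho0 = 0.145                                  # saturation density used in the text

def esym_kin(rho):
    """Kinetic part eps_F/3 for symmetric matter, kF = (3 pi^2 rho / 2)^(1/3)."""
    kF = (1.5 * np.pi**2 * rho) ** (1.0 / 3.0)
    return (hbarc * kF) ** 2 / (2.0 * m_N) / 3.0

def esym(rho, kind):
    """E_sym/A = eps_F/3 + C(rho) * rho / (2 * rho0)."""
    if kind == "asysoft":
        c_over_rho0 = 482.0 - 1638.0 * rho
    else:                                     # "asystiff"
        c_over_rho0 = (32.0 / rho0) * 2.0 * rho / (rho + rho0)
    return esym_kin(rho) + 0.5 * c_over_rho0 * rho

for rho in (0.5 * rho0, rho0, 1.5 * rho0):
    print(f"rho = {rho:.3f}: Asysoft {esym(rho, 'asysoft'):5.1f} MeV,"
          f" Asystiff {esym(rho, 'asystiff'):5.1f} MeV")
```

Consistently with Fig.\[esym\], the Asysoft form comes out larger below $\rho_0$ and smaller above it.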
Fusion dynamics for $^{132}Sn$ induced reactions
================================================
In order to study isospin and symmetry energy effects on the competition between fusion and break-up (deep-inelastic) we consider the reactions $^{132}Sn~+~^{64,58}Ni$ at 10 AMeV, having in mind that $^{132}Sn$ beams with good intensities in this energy range will soon be available at future Radioactive Ion Beam facilities. In particular, we have performed collision simulations for semi-peripheral impact parameters (from b = 4.5 fm to b = 8.0 fm, with $\Delta$b = 0.5 fm), to explore the region of the transition from fusion to break-up dominance. The transport equations clearly give fusion events at central impact parameters and break-up events for peripheral collisions, but there are some problems when we consider semi-peripheral impact parameters at such low energies, since the time scales for break-up are not compatible with the transport treatment, as already noted. It is then not trivial to extract the fusion probability from the early dynamics of the system and test the sensitivity to the asy-EOS. Therefore we have tried to find a reliable criterion that can indicate when the reaction mechanism is changing, from fusion to deep-inelastic dominance. This will also allow us to evaluate the corresponding absolute cross sections.
The new method is based on a phase space analysis of quadrupole collective modes. The information on the final reaction path is deduced by investigating the fluctuations of the system at early times (200-300 fm/c), when the formation of composite elongated configurations is observed and phenomena associated with surface metastability and/or instability may take place. At later times, when the SMF dynamics is not reliable, the evolution of the most relevant degrees of freedom could be followed within a more macroscopic description, where the system is characterized in terms of global observables, for which the full treatment of fluctuations in phase space is numerically affordable [@Leonid]. However, we will show that a consistent picture of the fusion vs. break-up probabilities can be obtained already from a simpler analysis of phase space fluctuations in the time interval indicated above.
We start by considering the time evolution, in each event, of the quadrupole moment in coordinate space, which is given by: $$Q(t)=<2z^2(t)-x^2(t)-y^2(t)>,$$ averaged over the space distribution in the composite system. At the same time-steps we construct also the quadrupole moment in momentum space: $$QK(t)=<2p_z^2(t)-p_x^2(t)-p_y^2(t)>,$$ in a spatial region around the center of mass. The z-axis is along the rotating projectile-like/target-like direction, and the x-axis lies in the reaction plane.
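A minimal sketch (in our own notation, not that of the SMF code) of how these two shape observables can be computed from the test-particle coordinates and momenta is given below; the 3 fm cell around the center of mass used for $QK$ anticipates the prescription described in the next subsection.

```python
# Event-wise shape observables: Q(t) from all test particles, QK(t) restricted to a
# sphere of radius 3 fm around the center of mass (r, p are (N, 3) arrays in fm, MeV/c;
# the coordinates are assumed to be already rotated so that z is the symmetry axis).
import numpy as np

def quadrupole_moments(r, p, com, radius=3.0):
    """Return Q = <2 z^2 - x^2 - y^2> and QK = <2 pz^2 - px^2 - py^2> (central cell)."""
    x, y, z = (r - com).T
    Q = np.mean(2.0 * z**2 - x**2 - y**2)
    inside = np.linalg.norm(r - com, axis=1) < radius
    px, py, pz = p[inside].T
    QK = np.mean(2.0 * pz**2 - px**2 - py**2)
    return Q, QK
```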
Average dynamics of shape observables
-------------------------------------
We run 200 events for each set of macroscopic initial conditions and we take the average over this ensemble.
![Time evolution of the space quadrupole moments for different centralities and for the two systems. Solid line: Asysoft. Dashed line: Asystiff.[]{data-label="squad"}](fusfig2.eps)
![Like Fig.\[squad\] but more detailed in the angular momentum transition region, between b=5.0 and 7.0 fm. Solid line: Asysoft. Dashed line: Asystiff.[]{data-label="squadlarge"}](fusfig3.eps)
In Figs.\[squad\], \[squadlarge\] we present the time evolution of the mean space quadrupole moment at various centralities for the two reactions and for the two choices of the symmetry term. We notice the difference in Q(t) between the behavior corresponding to more peripheral impact parameters and that obtained for b=5-6 fm, where a small oscillation is still present in the time interval between 100 and 300 fm/c, a good indication of a fusion contribution.
We can interpret these observations assuming that, starting from about b = 5 fm, we have a transition from fusion to a break-up mechanism, like deep-inelastic. Positive values of the Q(t)-slope should be associated with a quadrupole deformation velocity of the dinuclear system that is evolving toward a break-up exit channel. We notice a slight systematic difference, especially in the most neutron-rich system, with a larger deformation velocity in the Asystiff case, see the more detailed picture of Fig.\[squadlarge\]. Hence, just from this simple analysis of the average space quadrupole “trajectories” we can already appreciate that the Asysoft choice seems to lead to larger fusion cross sections, at least for less peripheral impact parameters, between b=5.0 fm and b=6.5 fm.
![Time evolution of the space density distributions for the reaction $^{132}Sn+^{64}Ni$ (n-rich systems), 10 AMeV beam energy, for semicentral collisions, b=6.5 fm impact parameter (average over 20 events). Upper Panel: Asystiff. Lower Panel: Asysoft. []{data-label="dens"}](fusfig4.eps)
The latter point can also be qualitatively seen from the time evolution of the space density distributions projected on the reaction plane, as shown in Fig.\[dens\]. The formation of a more compact configuration in the Asysoft case can be related to a larger fusion probability.
It is very instructive to look also at the time evolution of the quadrupole deformations in momentum space. For each event we perform the calculation in a spherical cell of radius 3 fm around the system center of mass. In Fig.\[pquad\] we present the time evolution of the average p-quadrupole moments at various centralities for the two systems and the two choices of the symmetry term. We notice a difference between the plots corresponding to peripheral and central collisions. With increasing impact parameter the quadrupole QK(t) becomes more negative in the time interval between 100 and 300 fm/c: the components perpendicular to the symmetry axis, which rotates in the reaction plane, are clearly increasing. We can interpret this effect as due to the presence, in the considered region, of Coriolis forces that are enhanced when the angular momentum is larger. These forces help to break up the deformed dinuclear system. Then the break-up probability will be larger if the quadrupole moment in p-space is more negative.
![Time evolution of the momentum quadrupole moments, in a sphere of radius 3 fm around the c.o.m., for different centralities and for the two systems. Solid line: Asysoft. Dashed line: Asystiff.[]{data-label="pquad"}](fusfig5.eps)
From Figs. 3 and 5 one can see that there is a region of impact parameter (b = 5-6.5 fm) where the derivative of the quadrupole moment in coordinate space, $Q'$, and the quadrupole moment in momentum space, $QK$, are both rather close to zero. This is the region where we expect that fluctuations of these quantities should play an important role in determining the fate of the reaction, and an event-by-event analysis is essential to estimate fusion vs. break-up probabilities.
Analysis of fluctuations and fusion probabilities for $^{132}Sn$ induced reactions
----------------------------------------------------------------------------------
To define a quantitative procedure to fix the event-by-event fusion vs. break-up probabilities, we undertake an analysis of the correlation between the two quadrupole moments introduced in the previous Section, in the time interval defined before (100-300 fm/c). A further motivation to look at correlations comes from the very weak isospin and symmetry energy effects seen in the separate time evolutions of the two quadrupole moments, as we can see from Figs.\[squad\],\[squadlarge\] and Fig.\[pquad\].
![$^{132}Sn$ + $^{64}Ni$ system. Mean value and variance of QK vs Q’, averaged over the 100-300 fm/c time interval, at various centralities in the transition region. The box limited by dotted lines represents the break-up region. Upper panel: Asystiff. Bottom Panel: Asysoft.[]{data-label="corr64"}](fusfig6.eps)
![Like in Fig.\[corr64\] but for the $^{132}Sn$ + $^{58}Ni$ system.[]{data-label="corr58"}](fusfig7.eps)
Negative $QK$ values denote the presence of velocity components orthogonal to the symmetry axis, due to angular momentum effects, which help the system to separate into two pieces. At the same time, the observation of a velocity component along the symmetry axis indicates that the Coulomb repulsion is dominating over surface effects (that would try to recompact the system), also pushing the system in the direction of break-up. Hence, in order to get the fusion probability from the early evolution of the system we assume positive Q’ and negative $QK$ for break-up events. In other words, we suppose that, in the impact parameter range where the average value of the two quantities is close to zero, the system evolution is decided just by the amplitude of shape fluctuations, taken at the moment when the formation of a deformed composite system is observed along the SMF dynamics (t = 200-300 fm/c, see the contour plots of Fig.4). Within our prescription, the fusion probability is automatically equal to one for central impact parameters, where the system goes back to the spherical shape and Q’ is negative, while it is zero for peripheral reactions, where Q’ is always positive and $QK$ always negative.
The correlation plots for the two systems studied and the two asy-EOS are represented in Figs.\[corr64\] and \[corr58\], respectively. From the quantities displayed in the Figures, the mean value and variance of the two extracted phase-space observables, we can evaluate the normal (Gaussian) curves and the corresponding areas for each impact parameter in order to select the events: break-up events will be located in the regions with both positive slope of Q(t) and negative $QK$. In this way, for each impact parameter we can evaluate the number of fusion events as the difference between the total number of events and the number of break-up events. Finally the fusion cross section is obtained (in absolute value) by $$\frac{d \sigma}{dl}=\frac{2 \pi}{k^2} l \frac{N_f}{N_{tot}},$$ where $l$ is the angular momentum calculated in the semiclassical approximation, $k$ is the relative momentum of the collision, $N_f$ the number of fusion events and $N_{tot}$ the total number of events in the angular momentum bin.
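A small numerical sketch of the cross-section formula above is given below; the angular-momentum bins and fusion fractions are made-up placeholders (not the simulation output discussed here), and the kinematic quantities are rough estimates for $^{132}$Sn + $^{64}$Ni at 10 AMeV.

```python
# Fusion spin distribution and total cross section from event counting.
import numpy as np

hbarc = 197.327                                    # MeV fm
mu = 931.5 * 132.0 * 64.0 / (132.0 + 64.0)         # reduced mass, MeV/c^2 (approximate)
e_cm = 10.0 * 132.0 * 64.0 / (132.0 + 64.0)        # c.m. energy, MeV (non-relativistic)
k = np.sqrt(2.0 * mu * e_cm) / hbarc               # relative momentum, fm^-1

dl = 10
l = np.arange(0, 200, dl)                          # angular momentum bins (hbar)
frac_fus = np.clip(1.0 - (l - 130.0) / 50.0, 0.0, 1.0)   # hypothetical N_f / N_tot
dsigma_dl = (2.0 * np.pi / k**2) * l * frac_fus    # fm^2 per unit l
sigma_mb = 10.0 * np.sum(dsigma_dl) * dl           # 1 fm^2 = 10 mb
print(f"k = {k:.1f} fm^-1, toy total fusion cross section ~ {sigma_mb:.0f} mb")
```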
![Angular momentum distributions of the fusion cross sections (mb) for the two reactions and the two choices of the symmetry term. For the $^{132}Sn$ + $^{64}Ni$ system (left panel), the results of PACE4 calculations are also reported, for different l-diffuseness.[]{data-label="sigmafus"}](fusfig8.eps)
In Fig.\[sigmafus\] we present the fusion spin distribution plots. We note that just in the centrality transition region there is a difference between the fusion cross sections corresponding to the two different asy-EOS, with larger values for Asysoft.
In fact, the total cross sections are very similar: the difference in the area is about 4-5 $\%$ in the neutron-rich system, $1128~ mb$ (Asysoft) vs. $1078~ mb$ (Asystiff), and even smaller, $1020~ mb$ vs. $1009~ mb$, for the $^{58}$Ni target. However, through a selection in angular momentum, $130 \leq l \leq 180$ ($\hbar$), we find that the Asysoft curve is significantly above the Asystiff one, and so in this centrality bin the fusion cross section difference can reach 10$\%$ in the case of the more neutron-rich system. It can then be compared to experimental data as evidence of sensitivity to the density dependence of the symmetry energy.
From the comparison of the total areas for the two systems we can also estimate isospin effects on the total fusion cross section, with a larger value in the more neutron-rich case, as also recently observed in fusion reactions with Ar + Ni [@indrafus] and Ca + Ca isotopes [@Chimerafus]. We note that this effect is also slightly dependent on the symmetry term: the total fusion cross section for the more neutron-rich system is $10\%$ larger in the Asysoft calculation and about $7\%$ larger in the Asystiff case.
Finally we would like to note that for the neutron-rich case, $^{132}Sn+^{64}Ni$, our absolute value of the total fusion cross section is in good agreement with recent data, at lower energy (around 5 AMeV), taken at ORNL [@liang07].
In Fig.\[sigmafus\], for the same system (left panel), we also show the results obtained with the macroscopic fusion probability evaluation code $PACE4$ [@gavron79; @tarasov03] for different l-diffuseness parameters, fixing, as input parameters, our total fusion cross section and maximum angular momentum. We see that in order to have a shape more similar to our $\sigma(l)$ distribution we have to choose rather large diffuseness values, while the suggested standard choice for stable systems is around $\Delta l$=4. This seems to be nice evidence of the neutron skin effect.
Our main conclusion is that we can extract significant signals of the event-by-event reaction mechanism from the fluctuations of the quadrupole moments in phase space, evaluated in a time region well within the interval where the transport results are reliable.
Analysis of symmetry energy effects
===================================
The larger fusion probability obtained with the Asysoft choice, especially in the more n-rich system, seems to indicate that the reaction mechanism is regulated by the symmetry term at suprasaturation density, where the Asysoft choice is less repulsive for the neutrons [@bar05a; @col98]. In order to check this point we have performed a detailed study of the density evolution in the region of overlap of the two nuclei, named $neck$ in the following.
![Reaction $^{132}Sn+^{64}Ni$ semiperipheral. Time evolution of the total density in the “neck” region[]{data-label="rhoneck"}](fusfig9.eps)
We present results obtained for the system $^{132}Sn+^{64}Ni$ at impact parameter b = 6.5 fm. To account for the system mass asymmetry, this “neck” region is identified by a sphere of radius 3 fm centered on the symmetry axis, at a distance from the projectile center of mass equal to $d(t)*R_1/(R_1+R_2)$, where $R_1$ and $R_2$ are the radii of projectile and target, and $d(t)$ is the distance between the centers of mass of the two colliding nuclei. In fact, in the time interval of interest for the fusion/break-up dynamics it will almost coincide with the system center of mass, see also the contour plots of Fig.4.
The time evolution of the total density in this “neck” region is reported in Fig.\[rhoneck\] for the two choices of the symmetry energy. We note that in the time interval of interest we have densities above or around the normal density, and so a less repulsive symmetry term within the Asysoft choice, corresponding to larger fusion probabilities.
This also explains why larger fusion cross sections are seen for the neutron-rich system, mainly in the Asysoft case. In fact, the neutron excess pushes the formed hot compound nucleus closer to the stability valley, especially when the symmetry energy is smaller. Other nice features are: i) the density values found in the Asysoft case are always above the Asystiff ones, confirming the expectation of a smaller equilibrium density for a stiffer symmetry term [@bar05a]; ii) collective monopole oscillations are present after 100 fm/c, showing that also at these low energies we can have some compression energy.
![Reaction $^{132}Sn+^{64}Ni$ semiperipheral. Left panel: time evolution of the neutron/proton ratio in the “neck” region. The dotted line corresponds to the initial isospin asymmetry of the composite system. Right panel: time evolution of the neutron and proton densities.[]{data-label="nzneck"}](fusfig10.eps)
It is also instructive to look at the evolution of the isospin content, the N/Z ratio, in this “neck” region, plotted in Fig.\[nzneck\]. As a reference we show with a dotted line the initial average isospin asymmetry. We see that in the Asysoft choice a systematically larger isospin content appears (Left Panel). This is consistent with the presence of a less repulsive neutron potential at densities just above saturation, probed in the first $100fm/c$, when the fast nucleon emission takes place (Figs.\[rhoneck\] and \[nzneck\], Left Panel). All of this is confirmed by the separate behavior of the neutron and proton densities shown in the Right Panel of Fig.\[nzneck\].
Finally, the appearance of N/Z oscillations after 100 fm/c is very interesting. This can be related to the excitation of isovector density modes in the composite system during the path to fusion or break-up. Since initially a charge asymmetry is present in the system (N/Z=1.64 for $^{132}$Sn and 1.28 for $^{64}$Ni) we expect the presence of collective isovector oscillations during the charge equilibration dynamics for $ALL$ dissipative collisions, regardless of the final exit channel. The features of this isovector mode, the Dynamical Dipole already observed in fusion reactions with stable beams [@bardip09], will be further discussed in Section V.
Break-up Events {#break-up-events .unnumbered}
---------------
Within the same transport approach, a first analysis of symmetry energy effects on break-up events in semiperipheral collisions of $^{132}Sn+^{64}Ni$ at $10~AMeV$ has been reported in ref.[@NN06]. Consistently with the more accurate study presented here, smaller break-up probabilities have been seen in the Asysoft choice. Moreover the neck dynamics on the way to separation is found to be influenced also by the symmetry energy below saturation. This can be observed in the different deformation pattern of the Projectile-Like and Target-Like Fragments (PLF/TLF), as shown in Fig.1 of [@NN06]. Except for the most peripheral selections, larger deformations are seen in the Asystiff case, corresponding to a smaller symmetry repulsion at the low densities probed in the separation region. The neutron-rich neck connecting the two partners can then survive for a longer time, producing very deformed primary PLF/TLF. Small clusters can even be dynamically emitted, leading to ternary/quaternary fragmentation events [@skwira08; @wilcz10].
In conclusion, not only the break-up probability, but also a detailed study of fragment deformations in deep-inelastic (and fast-fission) processes, as well as of the yield of 3-4 body events, will give independent information on the symmetry term around saturation.
The Prompt Dipole Mode in Fusion and Break-up Events
====================================================
![Reaction $^{132}Sn+^{58}Ni$ semiperipheral. Prompt Dipole oscillations in the composite system for break-up (solid lines) and fusion (dashed lines) events. Left Panel: Asystiff. Right Panel: Asysoft.[]{data-label="dipboth"}](fusfig11.eps)
![Reaction $^{132}Sn+^{58}Ni$ semiperipheral. Prompt Dipole strengths (in $c^2$ units), see text, for break-up (solid lines) and fusion (dashed lines) events. Left Panel: Asystiff. Right Panel: Asysoft.[]{data-label="strengths"}](fusfig12.eps)
![Reaction $^{132}Sn+^{58}Ni$ semiperipheral to peripheral. Prompt Dipole oscillations in the composite system for break-up event selections at each impact parameter. Left Panel: Asystiff. Right Panel: Asysoft.[]{data-label="dipbreak"}](fusfig13.eps)
From the time evolution of the nucleon phase space occupation, see Eq.(1), it is possible to extract at each time step the isovector dipole moment of the composite system. This is given by $D(t)=\frac{NZ}{A}X(t)$, where $A=N+Z$, and $N=N_1+N_2$, $Z=Z_1+Z_2$, are the total number of participating nucleons, while $X(t)$ is the distance between the centers of mass of protons and neutrons. It has been clearly shown, in theory as well as in experiments, that at these beam energies the charge equilibration in fusion reactions proceeds through such a prompt collective mode. In our study we have focused attention on the system with the larger initial charge asymmetry, the $^{132}$Sn on $^{58}$Ni case.
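A one-line extraction of $D(t)$ from the test-particle representation might look as follows (a sketch in our own notation; the choice of projecting $X(t)$ onto the rotating symmetry axis is ours).

```python
# Isovector dipole moment D(t) = (N Z / A) X(t), with X(t) the proton-neutron
# center-of-mass separation projected on a unit reference axis (e.g. the symmetry axis).
import numpy as np

def dipole_moment(r_p, r_n, Z, N, axis):
    """r_p, r_n: (N_p, 3) and (N_n, 3) arrays of proton/neutron test-particle positions."""
    X = np.dot(r_p.mean(axis=0) - r_n.mean(axis=0), axis)
    return N * Z / (N + Z) * X
```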
In Fig.\[dipboth\] we present the prompt dipole oscillations obtained for semicentral impact parameters, in the transition zone. We nicely see that in both classes of events, ending in fusion or deep-inelastic channels, the dipole mode is present with almost the same strength. We note that such fast dipole radiation was actually observed even in the most dissipative deep-inelastic events in stable ion collisions [@PierrouEPJA16; @PierrouPRC71; @Amorini2004].
The corresponding emission rates can be evaluated, through a “bremsstrahlung” mechanism, in a consistent transport approach to the reaction dynamics, which can account for the whole contribution along the dissipative non-equilibrium path, in fusion or deep-inelastic processes [@bar01].
In fact from the dipole evolution $D(t)$ we can directly estimate the photon emission probability ($E_{\gamma}= \hbar \omega$): $$\frac{dP}{dE_{\gamma}}= \frac{2 e^2}{3\pi \hbar c^3 E_{\gamma}}
|D''(\omega)|^{2} \label{brems},$$ where $D''(\omega)$ is the Fourier transform of the dipole acceleration $D''(t)$. We remark that in this way it is possible to evaluate, in [*absolute*]{} values, the corresponding pre-equilibrium photon emission yields.
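A compact sketch of the estimate in Eq.(\[brems\]) is shown below; the damped-oscillator $D(t)$ is a toy stand-in for the transport signal (its amplitude, frequency and damping time are invented), so the resulting multiplicity is only illustrative.

```python
# Photon emission probability, Eq. (brems), from a dipole signal D(t)
# (t in fm/c, D in fm, E_gamma in MeV; units with c = 1 and e^2 = alpha * hbar c).
import numpy as np

hbarc, alpha_em = 197.327, 1.0 / 137.036
t = np.arange(0.0, 600.0, 0.5)
D = 1.5 * np.exp(-t / 150.0) * np.cos(0.06 * t)     # toy D(t), NOT a transport result

D2 = np.gradient(np.gradient(D, t), t)              # dipole acceleration D''(t)
dt = t[1] - t[0]
D2w = np.fft.rfft(D2) * dt                          # Fourier transform of D''(t)
E_gamma = hbarc * 2.0 * np.pi * np.fft.rfftfreq(len(t), dt)

dPdE = 2.0 * alpha_em / (3.0 * np.pi) * np.abs(D2w[1:]) ** 2 / E_gamma[1:]
multiplicity = np.sum(dPdE) * (E_gamma[1] - E_gamma[0])
print(f"toy integrated gamma multiplicity: {multiplicity:.1e}")
```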
In Fig.\[strengths\] we report the prompt dipole strengths $|D''(\omega)|^{2}$ for the same event selections of Fig.\[dipboth\].
The dipole strength distributions are very similar in the fusion and break-up selections in this centrality region where we have a strong competition between the two mechanisms. In any case there is a smaller strength in the less central collisions (b=6.0fm), with a centroid slightly shifted to lower values, corresponding to more deformed shapes of the dinuclear composite system.
In the Asysoft choice we have a systematic increase of the yields (roughly given by the area of the strength distribution) of about $40\%$ with respect to the Asystiff case, for both centralities and selections. In fact from Eq.(\[brems\]) we can directly evaluate the total $\gamma$-multiplicities, integrated over the dynamical dipole region. For centrality b=5.5fm we get $2.3\times 10^{-3}$ ($1.6\times 10^{-3}$) in the Asysoft (Asystiff) choice, and for b=6.0fm respectively $1.9\times 10^{-3}$ ($1.3\times 10^{-3}$), with almost no difference between fusion and break-up events.
From Fig.\[esym\] we see that Asysoft corresponds to a larger symmetry energy below saturation. Since the symmetry term gives the restoring force of the dipole mode, our result is a good indication that the prompt dipole oscillation is taking place in a deformed dinuclear composite system, where low density surface contributions are important, as already observed in ref.[@bardip09].
In the previous Sections we have shown that the Asysoft choice leads to a larger fusion probability since it gives a smaller repulsion at the suprasaturation densities of the first stage of the reaction. Here we see that for the dipole oscillation it gives a larger restoring force, corresponding to mean densities below saturation. This apparently contradictory conclusion can be easily understood by comparing Figs.\[rhoneck\] and \[dipboth\]. We note that the onset of the collective dipole mode is delayed with respect to the first high density stage of the neck region since the composite system needs some time to develop a collective response of the dinuclear mean field.
In this way fusion and dynamical dipole data can be directly used to probe the isovector part of the in-medium effective interaction [*below and above*]{} saturation density.
Another interesting piece of information is derived from Fig.\[dipbreak\], where we show the prompt dipole oscillations only for break-up events at centralities covering the range from semicentral to peripheral. We nicely see that the collective mode for charge equilibration, due to the action of the mean field of the dinuclear system, disappears for the faster, less dissipative break-up collisions.
Anisotropy {#anisotropy .unnumbered}
----------
Aside from the total gamma spectrum, the corresponding angular distribution can be a sensitive probe of the properties of the pre-equilibrium dipole mode and of the early stages of the fusion dynamics. In fact a clear anisotropy with respect to the beam axis has recently been observed [@martin08]. For a dipole oscillation just along the beam axis we expect an angular distribution of the emitted photons like $W(\theta)\sim \sin^2 \theta
\sim 1+a_2P_2(\cos \theta)$ with $a_2=-1$, where $\theta$ is the polar angle between the photon direction and the beam axis. Such an extreme anisotropy will never be observed since in the collision the prompt dipole axis rotates during the radiative emission. In fact the deviation from the $\sin^2 \theta$ behavior gives a measure of the time interval of the fast dipole emission. In the case of a large rotation one can even observe a minimum at 90 degrees.
![Reaction $^{132}Sn+^{58}Ni$ semiperipheral. Upper left panel: rotation angle. Bottom left panel: emission probabilities. Right panel: weighted angular distribution. []{data-label="anisotropy"}](fusfig14.eps)
Let us denote by $\phi_i$ and $\phi_f$ the initial and final angles of the symmetry axis (which is also the oscillation axis) with respect to the beam axis, associated respectively with the excitation and the complete damping of the dipole mode. Then $\Delta \phi=\phi_f-\phi_i$ is the rotation angle during the collective oscillations. We can get the angular distribution in this case by averaging only over the angle $\Delta \phi$, obtaining
$$W(\theta) \sim 1-\left(\frac{1}{4}+\frac{3}{4}x\right)P_2(\cos \theta)
\label{angdis}$$

where $x=\cos (\phi_f+\phi_i)\,\frac{\sin (\phi_f-\phi_i)}{\phi_f-\phi_i}$.
The point is that, in the meantime, the emission itself is being damped, so the rotation has to be weighted by the time-dependent emission probability.
Within the bremsstrahlung approach we can perform an accurate evaluation of the prompt dipole angular distribution using a weighted form, where the time variation of the radiation emission probability is accounted for: $$W(\theta)=\sum_{i} \beta_i W(\theta,\Phi_i)
\label{wweighted}$$ We divide the dipole emission time into intervals $\Delta t_i$, with corresponding mean rotation angles $\Phi_i$ and related radiation emission probabilities $\beta_i=P(t_i)-P(t_{i-1})$, where $P(t)=\int_{t_0}^{t} \mid D''(t') \mid^2 dt' / P_{tot}$ and $P_{tot}=\int_{t_0}^{t_{max}} \mid D''(t') \mid^2 dt'$ is the total emission strength, $t_{max}$ being the time at which the dynamical dipole is finally damped.
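The sketch below implements the weighted average of Eq.(\[wweighted\]), using the per-bin kernel $W(\theta,\Phi_i)=1-P_2(\cos\Phi_i)P_2(\cos\theta)$, which is Eq.(\[angdis\]) in the no-rotation limit $\phi_i=\phi_f=\Phi_i$; the rotation history and emission profile used here are toy inputs, not transport output.

```python
# Weighted prompt-dipole angular distribution, Eq. (wweighted).
import numpy as np

def P2(c):
    return 0.5 * (3.0 * c**2 - 1.0)

def weighted_W(theta, Phi_bins, beta):
    """W(theta) = sum_i beta_i [1 - P2(cos Phi_i) P2(cos theta)]."""
    W = np.zeros_like(theta)
    for Phi, b in zip(Phi_bins, beta):
        W += b * (1.0 - P2(np.cos(Phi)) * P2(np.cos(theta)))
    return W

# toy inputs: uniform rotation from 0 to 40 degrees while the emission strength decays
t = np.linspace(80.0, 300.0, 50)                    # fm/c
Phi = np.radians(np.linspace(0.0, 40.0, t.size))    # assumed rotation history
strength = np.exp(-(t - 80.0) / 60.0)               # assumed |D''(t)|^2 profile
beta = strength / strength.sum()                    # normalized emission weights
theta = np.radians(np.linspace(0.0, 180.0, 19))
print(np.round(weighted_W(theta, Phi, beta), 3))
```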
In Fig.\[anisotropy\], upper left panel, we plot the time dependence of the rotation angle, for the $^{132}$Sn + $^{58}$Ni system, extracted from all the events, fusion and break-up, at two semiperipheral impact parameters, for the two symmetry terms. We note that essentially the same curves are obtained with the two Iso-EoS choices: the overall rotation is mostly ruled by the dominant isoscalar interaction.
Symmetry energy effects will be induced by the different time evolution of the emission probabilities, as shown in the bottom left panel.
We clearly see that the dominant emission region is the initial one, just after the onset of the collective mode between 80 and 150 fm/c, while the emitting dinuclear system has a large rotation. Another interesting point is the dependence on the symmetry energy. With a weaker symmetry term at low densities (Asystiff case), the $P(t)$ is a little delayed and presents a smoother behavior. As a consequence, according to Eq.(7), we can expect possible symmetry energy effects even on the angular distributions.
This is shown in the right panel of Fig.\[anisotropy\], where we have the weighted distributions (Eq.(\[wweighted\])), for the two impact parameters and the two choices of the symmetry energies. We see some sensitivity to the stiffness of the symmetry term. Hence, from accurate measurements of the angular distribution of the emitted $\gamma$’s, in the range of impact parameters where the system rotation is significant, one can extract independent information on the density behavior of the symmetry energy.
Conclusions and perspectives
============================
We have undertaken an analysis of the reaction path followed in collisions involving exotic systems at beam energies around 10 AMeV. In this energy regime, the main reaction mechanisms range from fusion to dissipative binary processes, together with the excitation of collective modes of the nuclear shape. In reactions with exotic systems, these mechanisms are expected to be sensitive to the isovector part of the nuclear interaction, yielding information on the density dependence of the symmetry energy. Moreover, in charge asymmetric systems, isovector dipole oscillations can be excited at the early dynamical stage, also sensitive to the behavior of the symmetry energy. We have shown that, in neutron-rich systems, fusion vs. break-up probabilities are influenced by the neutron repulsion during the approaching phase, where densities just above the normal value are observed. Hence larger fusion cross sections are obtained in the Asysoft case, associated with a smaller value of the symmetry energy at supra-saturation densities. On the other hand, the isovector collective response, that takes place in the deformed dinuclear configuration with large surface contributions, is sensitive to the symmetry energy below saturation.
The relevant point of our analysis is that it is based just on the study of the fluctuations that develop during the early dynamics, when the transport calculations are reliable. Fluctuations of the quadrupole moments, in phase space, essentially determine the final reaction path. It should be noticed that the fluctuations discussed here are essentially of a thermal nature. It would be interesting to include also the contribution of quantal (zero-point) fluctuations of surface modes and angular momentum. Indeed the frequencies of the associated collective motions are comparable to the temperature ($T\approx 4~ MeV$) reached in our reactions [@Lan_book]. This would increase the overall amplitude of surface oscillations, inducing larger fluctuations in the system configuration and a larger break-up probability. Such a quantum effect has recently been shown to be rather important for fusion probabilities at near- and sub-barrier energies [@ayik10]. The agreement of our semiclassical procedure with present data above the barrier could be an indication of a dominance of thermal fluctuations at higher excitation energy. In any case this point should be studied more carefully.
Finally, we would like to stress that, according to our analysis, considerable isospin effects are revealed just by selecting the impact parameter window corresponding to semi-peripheral reactions. Interesting perspectives are opening up for new experiments on low-energy collisions with exotic beams, focused on the study of the symmetry term below and above saturation density. We suggest some sensitive observables:
i\) Fusion vs. Break-up probabilities in the centrality transition region;
ii\) Fragment deformations in break-up processes and the probability of ternary/quaternary events;
iii\) $\gamma$-multiplicity and anisotropy of the Prompt Dipole Radiation, for dissipative collisions in charge asymmetric entrance channels.
[**Acknowledgements**]{}
We warmly thank Alessia Di Pietro for the discussions about the use of PACE4 fusion simulations. One of the authors, V. B., thanks the Laboratori Nazionali del Sud, INFN, for its hospitality. This work was supported in part by the Romanian Ministry for Education and Research under the contracts PNII, No. ID-946/2007 and ID-1038/2008.
[00]{}
Isospin Physics in Heavy Heavy Ion Collisions at Intermediate Energies, Eds. Bao-An Li and W. Udo Schroder, Nova Science Publishers, Inc, New York, 2001.
A.W. Steiner,M. Prakash,J.M. Lattimer, P.J. Ellis, Phys. Rep. 411, 325 (2005)
V. Baran, M. Colonna, V.Greco, M. Di Toro, Phys. Rep. 410, 335 (2005)
Bao-An Li, Lie-Wen Chen, Che Ming Ko Phys. Rep. 464, 113 (2008)
M. Colonna and M.B. Tsang, Eur. Phys. Jou. A30, 165 (2006) M.B. Tsang et al., Phys. Rev. Lett. 102, 122701 (2009) A.L. Kelsis et al., Phys. Rev. C81, 054602 (2010)
V. Baran, M. Colonna, M. Di Toro, Nucl. Phys. A730, 329 (2004)
E. De Filippo et al., Phys. Rev. C71, 044602 (2005)
V.Baran, C.Rizzo, M.Colonna, M.Di Toro, D.Pierroutsakou, Phys.Rev. C79, 021603(R) (2009)
P. Chomaz, M. Di Toro, A. Smerzi, Nucl. Phys. A563, 509 (1993)
V. Baran et al., Nucl. Phys. A600, 111 (1996)
C. Simenel, P. Chomaz, G.de France, Phys. Rev.Lett. 86, 2971 (2001); Phys. Rev. C76, 024609 (2007)
V. Baran et al., Nucl. Phys. A679, 373 (2001)
V. Baran, D.M. Brink, M. Colonna, M. Di Toro, Phys. Rev. Lett. 87, 182501 (2001)
J. Rizzo, Ph. Chomaz, M. Colonna, Nucl. Phys. A806, 40 (2008)
M. Colonna et al., Nucl. Phys. A642, 449 (1998)
G.Q. Li, R. Machleidt, Phys. Rev. C48, 1702 (1993); Phys. Rev. C49, 566 (1994)
M. Colonna et al., Phys. Rev. C57, 1410 (1998)
V. Baran et al., Nucl. Phys. A703, 603 (2002)
L. Shvedov, M. Colonna, M. Di Toro, Phys. Rev. C81, 054605 (2010)
P. Marini, B. Borderie, A. Chbihi, N. Le Neindre, M.-F. Rivet, J.P. Wieleczko, M. Zoric et al. (Indra-Vamos Collab.), [*IWM2009 Int.Workshop*]{}, Eds.J.D.Frankland et al., SIF Conf.Proceedings Vol.101, pp.189-196, Bologna 2010.
F.Amorini et al., Phys. Rev. Lett. 102, 112701 (2009)
J.F. Liang et al., Phys. Rev. C75, 054607 (2007)
A. Gavron, Phys. Rev. C21, 230 (1980)
O.B. Tarasov, D. Bazin, Nucl. Inst. Methods, B204, 174 (2003)
M. Di Toro et al., Nucl. Phys. A787, 585c (2007)
I. Skwira-Chalot et al. (Chimera Collab.), Phys. Rev. Lett. 101, 262701 (2008)
J. Wilczynski et al. (Chimera Collab.), Phys. Rev. C81, 024605 (2010)
D. Pierroutsakou et al., Eur. Phys. Jour. A 16 (2003) 423, Nucl. Phys. A687, 245c (2003)
D. Pierroutsakou et al., Phys. Rev. C71, 054605 (2005)
F. Amorini et al., Phys. Rev. C69, 014608 (2004)
B. Martin, D. Pierroutsakou et al. (Medea Collab.), Phys. Lett. B664, 47 (2008)
L.D. Landau, E.M. Lifshitz, [*Statistical Physics*]{} Part.1, Vol.5 (3rd ed.), Butterworth-Heinemann, 1980
S. Ayik, B. Yilmaz, D. Lacroix, Phys. Rev. C81, 034605 (2010)
|
---
abstract: 'We prove that it is consistent with ZFC that every (unital) endomorphism of the Calkin algebra ${\mathcal{Q}}(H)$ is unitarily equivalent to an endomorphism of ${\mathcal{Q}}(H)$ which is liftable to a (unital) endomorphism of ${\mathcal{B}}(H)$. We use this result to classify all unital endomorphisms of ${\mathcal{Q}}(H)$ up to unitary equivalence by the Fredholm index of the image of the unilateral shift. Finally, we show that it is consistent with ZFC that the class of [$\mathrm{C}^\ast$]{}-algebras that embed into ${\mathcal{Q}}(H)$ is closed under neither countable inductive limits nor tensor products.'
address: 'Department of Mathematics, Ben-Gurion University of the Negev, P.O.B. 653, Be’er Sheva 84105, Israel'
author:
- Andrea Vaccaro
bibliography:
- 'Bibliography.bib'
title: Trivial Endomorphisms of the Calkin Algebra
---
Introduction
============
Let $H$ be a separable, infinite-dimensional, complex Hilbert space. The Calkin algebra ${\mathcal{Q}}(H)$ is the quotient of ${\mathcal{B}}(H)$, the algebra of all linear, bounded operators on $H$, over the ideal of compact operators ${\mathcal{K}}(H)$. Let $q: {\mathcal{B}}(H) \to {\mathcal{Q}}(H)$ be the quotient map.
Over the last 15 years, the study of the automorphisms of the Calkin algebra has been the setting for some of the most significant applications of set theory to [$\mathrm{C}^\ast$]{}-algebras. The original motivation for these investigations is of [$\mathrm{C}^\ast$]{}-algebraic nature, and it dates back to the seminal paper [@bdf]. One of the most prominent questions asked in [@bdf] is whether there exists a K-theory reverting automorphism of ${\mathcal{Q}}(H)$ or, more concretely, an automorphism of ${\mathcal{Q}}(H)$ sending the unilateral shift to its adjoint. Since all inner automorphisms act trivially on the K-theory of a [$\mathrm{C}^\ast$]{}-algebra, a preliminary question posed in [@bdf] is whether the Calkin algebra has outer automorphisms. The answer turned out to be depending on set theoretic axioms. Phillips and Weaver show in [@outer] that outer automorphisms of ${\mathcal{Q}}(H)$ exist if the *Continuum Hypothesis* CH is assumed, while in [@inner] Farah proves that the *Open Coloring Axiom* OCA (see definition \[oca\]) implies that all the automorphisms of ${\mathcal{Q}}(H)$ are inner. It is still unknown whether it is consistent with ZFC that there exists an automorphism of ${\mathcal{Q}}(H)$ sending the unilateral shift to its adjoint, since the automorphisms built in [@outer] act like inner automorphisms on every separable subalgebra of ${\mathcal{Q}}(H)$.
In this note we investigate the effects of OCA on the endomorphisms of the Calkin algebra. The main consequence of OCA that we show is a complete classification of the endomorphisms of ${\mathcal{Q}}(H)$, up to unitary equivalence, essentially by the Fredholm index of the image of the unilateral shift. Two endomorphisms $ {\varphi}_1, {\varphi}_2 : {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ are *unitarily equivalent* if there is a unitary $v \in {\mathcal{Q}}(H)$ such that $\text{Ad}(v) \circ {\varphi}_1 = {\varphi}_2$. Let $\text{End}({\mathcal{Q}}(H))$ be the set of all endomorphisms of ${\mathcal{Q}}(H)$ modulo unitary equivalence. Let $\text{End}_u({\mathcal{Q}}(H))$ be the set of all the classes in $\text{End}({\mathcal{Q}}(H))$ corresponding to unital endomorphisms. The operation of direct sum $\oplus$, as well as the operation of composition $\circ$, naturally induces a semigroup structure on both $\text{End}({\mathcal{Q}}(H))$ and $\text{End}_u({\mathcal{Q}}(H))$. We fix an orthonormal basis ${\{\xi_k\}}_{k \in {\mathbb{N}}}$ of $H$ and we let $S$ be the unilateral shift sending $\xi_k$ to $\xi_{k+1}$.
\[mt2\] Assume OCA. Two endomorphisms $ {\varphi}_1, {\varphi}_2: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ are unitarily equivalent if and only if the following two conditions are satisfied:
1. \[mta\] There is a unitary $w \in {\mathcal{Q}}(H)$ such that $w {\varphi}_1(1) w^* = {\varphi}_2(1)$.
2. \[mtb\] The (finite) Fredholm indices of ${\varphi}_1(q(S)) + (1 - {\varphi}_1(1))$ and ${\varphi}_2(q(S)) + (1- {\varphi}_2(1))$ are equal.
Moreover, the map sending ${\varphi}\in \text{End}_u({\mathcal{Q}}(H))$ to $- \text{ind}({\varphi}(q(S)))$ is a semigroup isomorphism between $(\text{End}_u({\mathcal{Q}}(H)), \oplus)$ and $({\mathbb{N}}\setminus {\{0\}}, +)$, as well as between $(\text{End}_u({\mathcal{Q}}(H)), \circ)$ and $({\mathbb{N}}\setminus {\{0\}}, \cdot)$.
An explicit description of $(\text{End}({\mathcal{Q}}(H)), \oplus)$ and $(\text{End}({\mathcal{Q}}(H)), \circ)$ under OCA is given in remark \[nonu\].
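As an orienting example — standard, not taken from this paper, but consistent with the classification above — consider, for $n \geq 1$, the amplification endomorphisms $$\Phi_n \colon {\mathcal{B}}(H) \to {\mathcal{B}}(H \otimes {\mathbb{C}}^n) \cong {\mathcal{B}}(H), \qquad \Phi_n(T) = T \otimes 1_n ,$$ and let ${\varphi}_n$ be the induced unital endomorphism of ${\mathcal{Q}}(H)$. Then ${\varphi}_n(q(S)) = q(S \otimes 1_n)$ and $- \text{ind}(S \otimes 1_n) = n$, while, up to unitary equivalence, $\Phi_n \oplus \Phi_m$ and $\Phi_n \circ \Phi_m$ agree with $\Phi_{n+m}$ and $\Phi_{nm}$, matching the semigroup isomorphisms with $({\mathbb{N}}\setminus {\{0\}}, +)$ and $({\mathbb{N}}\setminus {\{0\}}, \cdot)$ stated in theorem \[mt2\].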
The second consequence of OCA we prove in this paper is related to the recent works [@fhv], [@fkv] and [@phd Chapter 2], where methods from set theory are employed in the study of nonseparable subalgebras of ${\mathcal{Q}}(H)$. Let $\mathbb{E}$ be the class of all [$\mathrm{C}^\ast$]{}-algebras that embed into ${\mathcal{Q}}(H)$. Under CH the Calkin algebra has density character $\aleph_1$ (the first uncountable cardinal), hence a [$\mathrm{C}^\ast$]{}-algebra belongs to $\mathbb{E}$ if and only if its density character is at most $\aleph_1$ (see [@fhv]). Therefore, when CH holds, the class $\mathbb{E}$ is closed under all operations whose output, when starting from [$\mathrm{C}^\ast$]{}-algebras of density character at most $\aleph_1$, is a [$\mathrm{C}^\ast$]{}-algebra whose density character is at most $\aleph_1$, such as minimal/maximal tensor product and countable inductive limit. It is not clear whether CH is necessary to prove these closure properties, but we prove that they may fail if CH is not assumed, answering [@fkv Question 5.3].
\[closure\] Assume OCA.
1. \[oca1\] The class $\mathbb{E}$ is not closed under minimal/maximal tensor product. Moreover, there exists ${\mathcal{A}}\in \mathbb{E}$ such that ${\mathcal{A}}\otimes_{\gamma} {\mathcal{B}}\notin \mathbb{E}$ for every infinite-dimensional, unital ${\mathcal{B}}\in \mathbb{E}$ and for every tensor norm $\gamma$.
2. \[oca2\] The class $\mathbb{E}$ is not closed under countable inductive limits.
In particular, both \[oca1\] and \[oca2\] are independent from ZFC.
Theorems \[mt2\] and \[closure\] are proved in §\[Sclass\] using theorem \[thrm:main\], for which we need to introduce a definition.
We say that an endomorphism ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ is *trivial* if there is a unitary $v \in {\mathcal{Q}}(H)$ and a strongly continuous (i.e. strong-strong continuous) endomorphism $\Phi: {\mathcal{B}}(H) \to {\mathcal{B}}(H)$ such that the following diagram commutes. $$\begin{tikzcd}
{\mathcal{B}}(H) \arrow{r}{\Phi} \arrow[swap]{d}{q} & {\mathcal{B}}(H) \arrow{d}{q} \\{\mathcal{Q}}(H) \arrow{r}{\text{Ad}(v) \circ {\varphi}}& {\mathcal{Q}}(H)
\end{tikzcd}$$ With this terminology, the main theorem of [@inner] says that under OCA all automorphisms of ${\mathcal{Q}}(H)$ are trivial (indeed, up to unitary equivalence, they lift to the identity). We extend this result to all endomorphisms of ${\mathcal{Q}}(H)$.
\[thrm:main\] Assume OCA. All endomorphisms of the Calkin algebra are trivial.
The proof of theorem \[thrm:main\] occupies both §\[S3\] and §\[sctn:main\]. Similarly to theorem \[closure\], theorems \[mt2\] and \[thrm:main\] cannot be proved in ZFC alone; in fact, they fail under CH. In §\[Sclass\] we give two examples of non-trivial endomorphisms of ${\mathcal{Q}}(H)$ that exist when CH is assumed (examples \[ex1\] and \[ex2\]), which highlight different levels of failure of the classification of $\text{End}({\mathcal{Q}}(H))$ in theorem \[mt2\]. In particular, it is possible to find uncountably many inequivalent automorphisms of ${\mathcal{Q}}(H)$ which send the unilateral shift to a unitary of index $-1$ (see example \[ex1\]). Examples \[ex1\] and \[ex2\], along with theorem \[thrm:main\], entail the following corollary.
\[indep\] The existence of non-trivial endomorphisms of ${\mathcal{Q}}(H)$ is independent from ZFC.
We remark that, unlike the results concerning automorphisms of ${\mathcal{Q}}(H)$ in [@inner], the commutative analogue of theorem \[thrm:main\] for $\mathcal{P}({\mathbb{N}}) / \text{Fin}$ does not hold. In [@dow] it is proved that there are non-trivial endomorphisms of $\mathcal{P}({\mathbb{N}}) / \text{Fin}$ in ZFC. In this scenario, the best one can hope for is the so-called *weak Extension Principle*, a consequence of $\text{OCA} + \text{MA}$ introduced and discussed in [@analy Chapter 4]. A crucial difference in the noncommutative context is the presence of partial isometries, which allow one to compress/decompress operators into/from infinite-dimensional subspaces of $H$. These objects play a key role in the proof of theorem \[thrm:lctr\].
The observations and results presented in this note are in line with the numerous studies investigating the strong rigidity properties induced by the *Proper Forcing Axiom* (of which OCA is a consequence) on the Calkin algebra and on other nonseparable quotient algebras ([@inner], [@mckvignati], [@rig], [@roecoronas]; see also [@analy], [@veloca]). The Continuum Hypothesis, on the other hand, has the opposite effect, making it possible to prove the existence of too many maps on these quotients for all of them to be ‘trivial’ ([@outer], [@cosfar], [@fms]). Woodin’s $\Sigma^2_1$-absoluteness theorem gives a deeper metamathematical motivation for the efficacy of CH in solving these problems (see [@sigma12]).
The paper is structured as follows. Section \[S2\] contains preliminaries and definitions. Sections \[S3\] and \[sctn:main\] are devoted to the proof of theorem \[thrm:main\]. In section \[S3\] we show that locally trivial (see definition \[trivial\]) endomorphisms of ${\mathcal{Q}}(H)$ are, up to unitary transformation, locally liftable with ‘nice’ unitaries in ${\mathcal{B}}(H)$. In section \[sctn:main\], adapting the main arguments from [@inner] (see also [@ilijasbook §18]), we prove that under OCA all endomorphisms of ${\mathcal{Q}}(H)$ are locally trivial, and that all locally trivial endomorphisms are trivial. We remark that OCA is only needed in §\[sctn:main\], while the results in §\[S3\] require no additional set-theoretic axiom. Finally, §\[Sclass\] contains the proof of theorem \[mt2\] and of theorem \[closure\], some observations about what happens to $\text{End}({\mathcal{Q}}(H))$ under CH and some open questions.
Notation and Preliminaries {#S2}
==========================
The only extra set-theoretic assumption required for our proofs is the Open Coloring Axiom OCA, which is defined as follows.
\[oca\] Given a set $X$, let $[X]^2$ be the set of all unordered pairs of elements of $X$. For a topological space $X$, a *coloring* $[X]^2 = K_0 \sqcup K_1$ is *open* if the set $K_0$, when naturally identified with a symmetric subset of $X \times X$, is open in the product topology. For a $K \subseteq [X]^2$, a subset $Y$ of $X$ is $K$-homogeneous if $[Y]^2 \subseteq K$.
Let $X$ be a separable, metric space, and let $[X]^2 = K_0 \sqcup K_1$ be an open coloring. Then either $X$ has an uncountable $K_0$-homogeneous set, or it can be covered by countably many $K_1$-homogeneous sets.
This statement, which contradicts CH, is independent from ZFC and it was introduced by Todorcevic in [@parti].
For the rest of the paper, fix an orthonormal basis ${\{\xi_k\}}_{k \in {\mathbb{N}}}$ of $H$ and identify $\ell_\infty$, the [$\mathrm{C}^\ast$]{}-algebra of all bounded sequences of complex numbers, with the atomic masa of all operators in ${\mathcal{B}}(H)$ diagonalized by this basis. With this identification, the algebra $c_0 = \ell_\infty \cap {\mathcal{K}}(H)$ is the set of all sequences converging to zero. Given a set $M \subseteq {\mathbb{N}}$, $P_M$ denotes the orthogonal projection onto the closure of $\text{span}{\{\xi_k : k \in M\}}$. If $M = {\{k\}}$, we simply write $P_k$. Throughout this paper, all the partitions $\vec{E}$ of ${\mathbb{N}}$ are implicitly assumed to be composed of consecutive finite intervals. Given such a partition $\vec{E} = {\{E_n\}}_{n \in {\mathbb{N}}}$, ${\mathcal{D}}[\vec{E}]$ is the von Neumann algebra of all operators in ${\mathcal{B}}(H)$ for which each $\text{span}{\{\xi_k : k \in E_n\}}$ is invariant. Equivalently, ${\mathcal{D}}[\vec{E}]$ is the set of all operators which commute with $P_{E_n}$ for every $n \in {\mathbb{N}}$. Given a subset $X \subseteq {\mathbb{N}}$, ${\mathcal{D}}_X[\vec{E}]$ denotes the [$\mathrm{C}^\ast$]{}-algebra of all the operators in ${\mathcal{D}}[\vec{E}]$ which act as zero on $\text{span}{\{\xi_k : k \in E_n\}}$ for every $n \notin X$.
Given a unital $*$-homomorphism $\Phi: {\mathcal{D}}[\vec{E}] \to {\mathcal{B}}(H)$ such that $\Phi[{\mathcal{D}}[\vec{E}] \cap \mathcal{K}(H)] \subseteq \mathcal{K}(H)$, $\vec{n}_\Phi$ denotes the sequence ${\{\text{rk}(\Phi(P_k))\}}_{k \in {\mathbb{N}}}$ of the (finite) ranks of the projections $\Phi(P_k)$. Notice that $\vec{n}_\Phi$ only depends on how $\Phi$ acts on $\ell_\infty$ and that $n_i = n_j$ whenever $i, j \in E_n$ for some $n \in {\mathbb{N}}$. For a partition $\vec{E} = {\{E_n\}}_{n \in {\mathbb{N}}}$ of ${\mathbb{N}}$, $\vec{E}^{\text{even}}$ is the partition ${\{E_{2n} \cup E_{2n+1}\}}_{n \in {\mathbb{N}}}$ and $\vec{E}^{\text{odd}}$ is the partition ${\{E_0\}} \cup {\{E_{2n+1} \cup E_{2n+2}\}}_{n \in {\mathbb{N}}}$.
The *strong topology* on ${\mathcal{B}}(H)$ (and on any subalgebra of ${\mathcal{B}}(H)$) is the one induced by the pointwise norm convergence on $H$, hence a sequence ${\{T_n\}}_{n \in {\mathbb{N}}}$ of operators in ${\mathcal{B}}(H)$ *strongly converges* to $T$ iff $T_n \xi \to T \xi$ in norm for every $\xi \in H$.
For every partial isometry $v$ in a [$\mathrm{C}^\ast$]{}-algebra ${\mathcal{A}}$, $\text{Ad}(v)$ is the endomorphism sending $a$ to $vav^*$ for every $a \in {\mathcal{A}}$.
For every subalgebra ${\mathcal{A}}$ of ${\mathcal{B}}(H)$, let ${\mathcal{A}}_{\mathcal{Q}}$ be the quotient ${\mathcal{A}}/(\mathcal{K}(H)\cap {\mathcal{A}})$. Given a map ${\varphi}: {\mathcal{A}}_{\mathcal{Q}}\to {\mathcal{Q}}(H)$, the function $\Phi: {\mathcal{A}}\to {\mathcal{B}}(H)$ *lifts* (or *is a lift of*) ${\varphi}$ if the following diagram commutes: $$\begin{tikzcd}
{\mathcal{A}}\arrow{r}{\Phi} \arrow[swap]{d}{q} & {\mathcal{B}}(H) \arrow{d}{q} \\{\mathcal{A}}_{\mathcal{Q}}\arrow{r}{{\varphi}}& {\mathcal{Q}}(H)
\end{tikzcd}$$
\[trivial\] Given a [$\mathrm{C}^\ast$]{}-algebra ${\mathcal{A}}\subseteq {\mathcal{B}}(H)$, we say that an embedding (i.e. an injective $*$-homomorphism) ${\varphi}:{\mathcal{A}}_{\mathcal{Q}}\to {\mathcal{Q}}(H)$ is *trivial* if there exists a unitary $v \in {\mathcal{Q}}(H)$ and a strongly continuous (i.e. strong-strong continuous), $*$-homomorphism $\Phi: {\mathcal{A}}\to {\mathcal{B}}(H)$ such that $\Phi$ lifts $\text{Ad}(v) \circ {\varphi}$. An endomorphism ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ is *locally trivial* if, for every partition $\vec{E}$, the restriction ${\varphi}\restriction {\mathcal{D}}[\vec{E}]_{\mathcal{Q}}$ is trivial.
Given two operators $T, S \in {\mathcal{B}}(H)$, we use the notation $T \sim_{{\mathcal{K}}(H)} S$ to abbreviate $T - S \in {\mathcal{K}}(H)$. Analogously, for a [$\mathrm{C}^\ast$]{}-algebra ${\mathcal{A}}$ and two functions $\Phi_1, \Phi_2: {\mathcal{A}}\to {\mathcal{B}}(H)$, $\Phi_1 \sim_{{\mathcal{K}}(H)} \Phi_2$ abbreviates $\Phi_1(a) \sim_{{\mathcal{K}}(H)}
\Phi_2(a)$ for all $a \in {\mathcal{A}}$.
Given a [$\mathrm{C}^\ast$]{}-algebra ${\mathcal{A}}\subseteq {\mathcal{B}}(H)$ (${\mathcal{A}}\subseteq {\mathcal{Q}}(H)$), the *commutant* ${\mathcal{A}}' \cap {\mathcal{B}}(H)$ (${\mathcal{A}}' \cap {\mathcal{Q}}(H)$) is the set of all the operators in ${\mathcal{B}}(H)$ (${\mathcal{Q}}(H)$) commuting with all elements of ${\mathcal{A}}$.
The (*Fredholm*) *index* of an operator $T \in {\mathcal{B}}(H)$ is the integer $\text{dim}(\text{ker}(T)) - \text{codim}(T[H])$. An element $a \in {\mathcal{Q}}(H)$ is invertible if and only if it can be lifted to an operator of finite index ([@murphy Theorem 1.4.16]), which can be assumed to be a partial isometry if $a$ is a unitary.
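For example, the unilateral shift $S$ fixed above is injective and its range has codimension $1$, so $\text{ind}(S) = -1$; more generally, $\text{ind}(S^k) = -k$ and $\text{ind}((S^*)^k) = k$ for every $k \in {\mathbb{N}}$. These elementary computations are used repeatedly below when evaluating the invariant appearing in theorem \[mt2\].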
\[remark:sc\] Strongly continuous, unital endomorphisms of ${\mathcal{B}}(H)$ have an extremely rigid structure. Indeed, strong continuity implies that such maps are uniquely determined by how they behave on the projections whose range is 1-dimensional. Since all these projections are Murray-von Neumann equivalent, the same is true for their images, which therefore all have the same rank. For every $m \in {\mathbb{N}}$, let $\Phi_m: {\mathcal{B}}(H) \to {\mathcal{B}}(H \otimes {\mathbb{C}}^m)$ be the map sending $T$ to $T \otimes 1_m$. As ${\mathcal{B}}(H)$ and ${\mathcal{B}}(H \otimes {\mathbb{C}}^m)$ are isomorphic, with an abuse of notation we consider $\Phi_m$ as a map from ${\mathcal{B}}(H)$ into ${\mathcal{B}}(H)$. If $\Phi: {\mathcal{B}}(H) \to {\mathcal{B}}(H)$ is a unital, strongly continuous endomorphism sending compact operators into compact operators, by the previous observation it is possible to find an $m \in {\mathbb{N}}\setminus {\{0\}}$ and a unitary $U \in {\mathcal{B}}(H)$ such that $\Phi = \text{Ad}(U) \circ \Phi_m$. Our classification of $\text{End}({\mathcal{Q}}(H))$ and $\text{End}_u({\mathcal{Q}}(H))$ in theorem \[mt2\] will be based on this simple observation. Since the commutant of the image of $\ell_\infty$ via $\Phi_m$ is isomorphic to $\ell_\infty(M_m({\mathbb{C}}))$ (the [$\mathrm{C}^\ast$]{}-algebra of all norm-bounded sequences of $m \times m$ matrices with complex entries), the same is true for the commutant of the image of $\ell_\infty$ via $\Phi$.
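To spell out the last assertion: an operator on $H \otimes {\mathbb{C}}^m$ commutes with every $\Phi_m(P_k) = P_k \otimes 1_m$ exactly when it is of the block-diagonal form $\sum_{k \in {\mathbb{N}}} P_k \otimes a_k$ for a norm-bounded sequence ${\{a_k\}}_{k \in {\mathbb{N}}}$ in $M_m({\mathbb{C}})$, and every operator of this form commutes with all of $\Phi_m[\ell_\infty]$; this is the identification of $\Phi_m[\ell_\infty]' \cap {\mathcal{B}}(H)$ with $\ell_\infty(M_m({\mathbb{C}}))$ used throughout the paper.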
Every unital endomorphism ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ is uniquely determined by its restrictions ${\{{\varphi}\restriction {\mathcal{D}}[\vec{E}]_{\mathcal{Q}}\}}$ as $\vec{E}$ varies among all partitions of ${\mathbb{N}}$. This is a consequence of the following standard fact (see [@inner Lemma 1.2] or the proof of [@matroid Theorem 3.1]).
\[prop:2qd\] For every countable set ${\{T_n\}}_{n \in {\mathbb{N}}}$ in ${\mathcal{B}}(H)$ there exists a partition $\vec{E}$ of ${\mathbb{N}}$ such that for every $n \in {\mathbb{N}}$ there are $T^0_n \in {\mathcal{D}}[\vec{E}^{\text{even}}]$ and $T^1_n \in {\mathcal{D}}[\vec{E}^{\text{odd}}]$ such that $T_n \sim_{{\mathcal{K}}(H)} T_n^0 + T_n^1$.
Locally Trivial Endomorphisms {#S3}
=============================
Given a unital, locally trivial endomorphism ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$, throughout this section we fix, for every partition $\vec{E}$ of ${\mathbb{N}}$, a partial isometry of finite index $v_{\vec{E}}$ and a strongly continuous, $*$-homomorphism $\Phi_{\vec{E}} : {\mathcal{D}}[\vec{E}] \to {\mathcal{B}}(H)$ such that $\text{Ad}(v_{\vec{E}})
\circ \Phi_{\vec{E}}$ lifts the restriction of ${\varphi}$ to ${\mathcal{D}}[\vec{E}]_{{\mathcal{Q}}}$. In this part we show that, up to considering $\text{Ad}(v) \circ {\varphi}$ for some unitary $v \in {\mathcal{Q}}(H)$, we can assume that $\Phi_{\vec{E}}$ is $\Phi_m$[^1] (as defined in remark \[remark:sc\]) and $v_{\vec{E}}$ is a unitary in the commutant of $\Phi_m[\ell_\infty]$, for every partition $\vec{E}$. We remark that no extra set-theoretic axiom is required in the present section.
\[remark:uni\] Notice that for a unital, locally trivial endomorphism ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$, for each partition $\vec{E}$ the projection $\Phi_{\vec{E}}(1)$ is a compact perturbation of the identity, hence its range has finite codimension $r$. Therefore, by multiplying $v_{\vec{E}}$ by a suitable partial isometry of index $-r$, we can always assume that $\Phi_{\vec{E}}$ is unital, and we will always implicitly do so.
\[lemma:seq\] Let $\Phi_1 : \ell_\infty \to {\mathcal{B}}(H)$ and $\Phi_2 : \ell_\infty \to {\mathcal{B}}(H)$ be two strongly continuous, unital $*$-homomorphisms such that $\Phi_1[c_0], \Phi_2[c_0] \subseteq \mathcal{K}(H)$. Suppose there exist two partial isometries of finite index $v_1, v_2$ such that $\text{Ad}(v_1) \circ \Phi_1 \sim_{{\mathcal{K}}(H)} \text{Ad}(v_2) \circ \Phi_2$. Then the sequences $\vec{n}_{\Phi_1} = {\{\text{rk}(\Phi_1(P_k))\}}_{k \in {\mathbb{N}}}$ and $\vec{n}_{\Phi_2} = {\{\text{rk}(\Phi_2(P_k))\}}_{k \in {\mathbb{N}}}$ are eventually equal.
Since $q(v_1)$ and $q(v_2)$ are unitaries in ${\mathcal{Q}}(H)$, we can assume that $v_1$ is the identity, and we denote $v_2$ by $v$. Let $\vec{n}_{\Phi_1} = {\{n_k\}}_{k \in {\mathbb{N}}}$, $\vec{n}_{\Phi_2} = {\{m_k\}}_{k \in {\mathbb{N}}}$ and suppose there is an infinite $X \subseteq {\mathbb{N}}$ such that $n_k > m_k$ for every $k \in X$. We inductively define a set $Y \subseteq X$ as follows.
Let $y_0$ be the minimum of $X$ and let $Y_0 = {\{y_0\}}$. Since $\text{rk}(\Phi_1(P_{y_{0}})) > \text{rk}(\Phi_2(P_{y_{0}}))\ge
\text{rk}(\text{Ad}(v) (\Phi_2(P_{y_{0}})))$, there is a norm-one vector $\xi_{0}$ in the image of $\Phi_1(P_{y_{0}})$ which also belongs to the kernel of $\text{Ad}(v) (\Phi_2(P_{y_{0}}))$. This is the case since the codimension of $\text{ker}
(\text{Ad}(v) ( \Phi_2(P_{y_{0}})))$ is strictly smaller than $\text{rk}(\Phi_1(P_{y_{0}}))$.
Suppose $Y_k = {\{y_0 < \dots < y_k\}}\subseteq X$ and that, for every $h \le k$, there is a norm-one vector $\xi_h$ such that $\Phi_1(P_{y_h})\xi_h =
\xi_h$ and $\lVert \text{Ad}(v) (\Phi_2(P_{Y_k}))
\xi_h \rVert < 1/2$. Let $y_{k+1}$ be the smallest element in $X$ greater than $y_k$ such that
1. \[itema\] $\lVert \text{Ad}(v) ( \Phi_2(P_{Y_k})) \Phi_1(P_{y_{k+1}}) \rVert < 1/2$,
2. \[itemb\] $\lVert \text{Ad}(v) (\Phi_2(P_{Y_k \cup {\{y_{k+1}\}}})) \xi_h \rVert < 1/2$ for every $h \le k$.
Such a number $y_{k+1}$ exists since both $\text{Ad}(v) (\Phi_2(P_{Y_k}))$ and the projection onto $\text{span}{\{v^*\xi_h : h \le k\}}$ have finite rank and, by strong continuity, the sequences ${\{\Phi_1(P_k)\}}_{k \in {\mathbb{N}}}$ and ${\{\Phi_2(P_k)\}}_{k \in {\mathbb{N}}}$ strongly converge to zero. Define $Y_{k+1} = Y_k \cup {\{y_{k+1}\}}$. We have to verify that $Y_{k+1}$ satisfies the inductive hypothesis, namely that for every $h \le k+1$ there is a norm-one vector $\xi_h$ such that $\Phi_1(P_{y_h})\xi_h =
\xi_h$ and $\lVert \text{Ad}(v) (\Phi_2(P_{Y_{k+1}}))\xi_h
\rVert < 1/2$. For $h \le k$, pick the $\xi_h$ given by the inductive hypothesis, and the inequality follows by item . Since $y_{k+1} \in X$, it follows that $\text{rk}(\Phi_1(P_{y_{k+1}})) > \text{rk}(\Phi_2(P_{y_{k+1}}))\ge
\text{rk}(\text{Ad}(v)( \Phi_2(P_{y_{k+1}})))$. There exists thus a norm-one vector $\xi_{k+1}$ in the image of $\Phi_1(P_{y_{k+1}})$ which also belongs to the kernel of $ \text{Ad}(v)( \Phi_2(P_{y_{k+1}}))$. This is the case since the codimension of $\text{ker}
(\text{Ad}(v) ( \Phi_2(P_{y_{k+1}})))$ is strictly smaller than $\text{rk}(\Phi_1(P_{y_{k+1}}))$. Because of this and item : $$\begin{gathered}
\lVert \text{Ad}(v) ( \Phi_2(P_{Y_{k+1}}) )\xi_{k+1} \rVert =
\lVert \text{Ad}(v) (\Phi_2(P_{Y_k})) \xi_{k+1} \rVert \le \\ \le \lVert \text{Ad}(v)
(\Phi_2(P_{Y_k})) \Phi_1(P_{y_{k+1}}) \rVert < 1/2.\end{gathered}$$ Let $Y = \cup_{k \in {\mathbb{N}}} Y_k$. We show that, for every $k \in {\mathbb{N}}$, the following holds $$\lVert (\Phi_1 (P_Y) - \text{Ad}(v) (\Phi_2 (P_Y))) \xi_k \rVert \ge 1/2,$$ which contradicts $\Phi_1(P_Y) \sim_{\mathcal{K}(H)} \text{Ad}(v)( \Phi_2(P_Y))$. The previous inequality follows since, for every $k \in {\mathbb{N}}$, by strong continuity of $\Phi_1$ we have that $\Phi_1 (P_Y) \xi_k = \xi_k$, and by strong continuity of $\Phi_2$ we have that $$\lVert \text{Ad}(v) ( \Phi_2(P_Y)) \xi_k \rVert \le 1/2.$$
\[prop:const\] Let ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ be a unital, locally trivial endomorphism. There exists $m \in {\mathbb{N}}$ such that, for every $\vec{E}$, the sequence $\vec{n}_{\Phi_{\vec{E}}} = {\{\text{rk}(\Phi_{\vec{E}}
(P_k))\}}_{k \in {\mathbb{N}}}$ is, up to a finite number of entries, constantly equal to $m$.
By lemma \[lemma:seq\], it is enough to show that there exists a partition $\vec{E}$ such that $\vec{n}_{\Phi_{\vec{E}}}$ is eventually constant. Let $\vec{E}_1$ be the partition composed of the intervals ${\{2k,2k+1\}}$, as $k$ varies in ${\mathbb{N}}$, and let $\vec{E}_2$ be the partition composed of the singleton ${\{0\}}$ together with the intervals ${\{2k+1, 2k+2\}}$. Let $\vec{n}_{\Phi_{\vec{E_1}}} = {\{n_k\}}_{k \in {\mathbb{N}}}$ and $\vec{n}_{\Phi_{\vec{E_2}}} = {\{m_k\}}_{k \in {\mathbb{N}}}$. On the one hand we have that $n_{2k} = n_{2k+1}$ and $m_{2k+1} = m_{2k+2}$ for all $k \in {\mathbb{N}}$, since each such pair of indices lies in a single interval of $\vec{E}_1$ and of $\vec{E}_2$ respectively. On the other hand, by lemma \[lemma:seq\], there is $j \in {\mathbb{N}}$ such that $n_i = m_i$ for all $i \ge j$. Combining these two facts we obtain $n_i = n_{i+1}$ for all $i \ge j$, hence there is $m \in {\mathbb{N}}$ such that $n_i = m_i = m$ for all $i \ge j$.
Let $\Phi: {\mathcal{D}}[\vec{E}] \to {\mathcal{B}}(H)$ be a strongly continuous, unital $*$-homomorphism such that $\vec{n}_\Phi$ is eventually constant with value $m$. This is not enough to infer that, even up to a unitary transformation of $H$, $\Phi$ is a compact perturbation of $\Phi_m$. For instance, the map $\Psi: \ell_\infty \to {\mathcal{B}}(H)$ sending $(a_0, a_1, a_2, \dots) \mapsto (a_0, a_0, a_1, a_2, \dots)$ is not a compact perturbation of the identity (consider the image of the sequence $(1,0,1,0,\dots)$), but it is a compact perturbation of $\text{Ad}(S)$, where $S$ is the unilateral shift. Nevertheless, by suitably ‘shifting’ $\Phi$ it is possible to obtain a compact perturbation of $\Phi_m$, as shown in the following lemma.
\[lemma:shift\] Let $\vec{E}$ be a partition of ${\mathbb{N}}$ and let $\Phi: {\mathcal{D}}[\vec{E}] \to {\mathcal{B}}(H)$ be a strongly continuous, unital $*$-homomorphism such that $\Phi[{\mathcal{D}}[\vec{E}] \cap {\mathcal{K}}(H)] \subseteq {\mathcal{K}}(H)$ and such that $\vec{n}_\Phi$ is eventually constant with value $m \in {\mathbb{N}}\setminus {\{0\}}$. There exists a partial isometry $w$ of finite index such that $\text{Ad}(w) \circ \Phi \sim_{\mathcal{K}(H)} \Phi_m$.
If two strongly continuous, unital embeddings $\Phi_1, \Phi_2: {\mathcal{D}}[\vec{E}] \to {\mathcal{B}}(H)$ are such that $\vec{n}_{\Phi_1} = \vec{n}_{\Phi_2}$, then there is a unitary $u \in {\mathcal{B}}(H)$ sending $\Phi_2(P_k)H$ to $\Phi_1(P_k)H$ for every $k \in {\mathbb{N}}$ such that $\text{Ad}(u) \circ \Phi_1 = \Phi_2$. Thus, it is enough to show that there is a partial isometry $w$ of finite index and a strongly continuous, unital $*$-homomorphism $\tilde{\Phi}: {\mathcal{D}}[\vec{E}] \to {\mathcal{B}}(H)$ such that $\tilde{\Phi} \sim_{{\mathcal{K}}(H)}
\text{Ad}w \circ \Phi$ and such that $\vec{n}_{ \tilde{\Phi}}$ is constantly equal to $m$. Let $\vec{n}_\Phi = {\{n_k\}}_{k \in {\mathbb{N}}}$ and let $h \in {\mathbb{N}}$ be such that $n_k = m$ for all $k \in E_j$ and all $j \ge h$, which exists by lemma \[prop:const\]. Let $\overline{k}$ be the minimum of $E_h$, let $n = \sum_{ k < \overline{k}} n_k$ and let $r = \overline{k}m - n$. Denote the projection $\sum_{k < \overline{k}} P_k$ by $Q$ and fix ${\{\zeta_i\}}_{i \in {\mathbb{N}}}$ an orthonormal basis of $H$ such that ${\{\zeta_i\}}_{i < n}$ is an orthonormal basis of $\Phi(Q)H$. We remark that $Q$ commutes with every element in ${\mathcal{D}}[\vec{E}]$. Let $S$ be the unilateral shift sending $\zeta_i$ to $\zeta_{i+1}$. The range of the projection $P = \text{Ad}(S^r) ( \Phi(Q)) + (1 - S^r S^{-r})$ has dimension $\overline{k}m$, since it is the space spanned by ${\{\zeta_i\}}_{i < n + r = \overline{k}m}$. Moreover $S^{-r}S^r \ge \Phi(1-Q)$, since the former is either the identity (if $r \ge 0$) or the orthogonal projection onto ${\{\zeta_i\}}_{i \ge -r}$ (if $r \le 0$) and $-r \le n$. Let $\tilde{\Phi} := (\Psi \circ \text{Ad}(Q)) \oplus (\text{Ad}(S^{r} \Phi (1-Q)) \circ \Phi)$, where $\Psi$ is a unital embedding between $Q {\mathcal{B}}(H) Q$($\cong M_{\overline{k}}({\mathbb{C}})$) and $P{\mathcal{B}}(H)P$($\cong M_{\overline{k}m}({\mathbb{C}})$). The map $\tilde{\Phi}$ is a strongly continuous, unital $*$-homomorphism (multiplicativity follows since $S^{-r}S^r \ge \Phi(1 - Q)$ and $Q$ commutes with every element in ${\mathcal{D}}[\vec{E}]$) with $\vec{n}_{\tilde{\Phi}}$ constantly equal to $m$ and such that $\tilde{\Phi} \sim_{{\mathcal{K}}(H)} \text{Ad}(S^r) \circ \Phi$.
The following lemma, an analogue of [@inner Lemma 1.4], shows that unital, locally trivial embeddings which lift to $\Phi_m$ on $\ell_\infty/c_0$ have nice and regular lifts also on the other ${\mathcal{D}}[\vec{E}]_{\mathcal{Q}}$’s.
\[lemma:uni\] Let ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ be a unital, locally trivial endomorphism such that $\Phi_m$ lifts ${\varphi}$ on $\ell_\infty/ c_0$. Then, for every partition $\vec{E}$, there exists a unitary $u_{\vec{E}}$ in $\Phi_m[\ell_\infty]' \cap {\mathcal{B}}(H) \cong \ell_\infty(M_m({\mathbb{C}}))$ such that $\text{Ad}(u_{\vec{E}}) \circ \Phi_m$ lifts ${\varphi}_{\vec{E}}$ on ${\mathcal{D}}[\vec{E}]_{\mathcal{Q}}$.
Fix a partition $\vec{E}$, let $v_{\vec{E}}$ be a partial isometry of finite index and let $\Phi_{\vec{E}}:
{\mathcal{D}}[\vec{E}] \to {\mathcal{B}}(H)$ be a strongly continuous, unital $*$-homomorphism such that $\text{Ad}(v_{\vec{E}}) \circ \Phi_{\vec{E}}$ lifts ${\varphi}$ on ${\mathcal{D}}[\vec{E}]$. By assumption we have that $\text{Ad}(v_{\vec{E}}) \circ \Phi_{\vec{E}} \sim_{\mathcal{K}(H)} \Phi_m$ on $\ell_\infty$. By lemmas \[prop:const\] and \[lemma:shift\] there is a finite index isometry $w$ such that $\text{Ad}(v_{\vec{E}})
\circ \Phi_{\vec{E}} \sim_{\mathcal{K}(H)}
\text{Ad}(w) \circ\Phi_m$ on ${\mathcal{D}}[\vec{E}]$, hence the latter also lifts ${\varphi}$ on ${\mathcal{D}}[\vec{E}]_{\mathcal{Q}}$. We have therefore that $\text{Ad}(w)
\circ \Phi_m \sim_{\mathcal{K}(H)} \Phi_m$ on $\ell_\infty$, which entails that $w$ commutes, up to compact operators, with the elements in $\Phi_m[\ell_\infty]$. The commutant of $\Phi_m[\ell_\infty]$ is (isomorphic to) $\ell_\infty(M_m({\mathbb{C}}))$ and by [@jp Theorem 2.1] we have that $w$ is a compact perturbation of an element $u$ in $\ell_\infty(M_m({\mathbb{C}}))$. Being an element of $\ell_\infty(M_m({\mathbb{C}}))$, the operator $u$ has Fredholm index zero, moreover $u \sim_{{\mathcal{K}}(H)} w$ entails that its class $q(u)$ in ${\mathcal{Q}}(H)$ is a unitary. Therefore, the polar decomposition of $u$ in $\ell_\infty(M_m({\mathbb{C}}))$ provides a unitary $u_{\vec{E}}$ in the commutant of $\Phi_m[\ell_\infty]$ such that $u_{\vec{E}} \sim_{{\mathcal{K}}(H)} w$.
All Endomorphisms are Trivial {#sctn:main}
=============================
We split the proof of theorem \[thrm:main\] into two steps. We first prove that all unital, locally trivial endomorphisms of ${\mathcal{Q}}(H)$ are trivial, then we show that all unital endomorphisms of ${\mathcal{Q}}(H)$ are locally trivial. We use OCA in both proofs. The non-unital case follows from the unital one, since every endomorphism ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ can be thought of as a unital endomorphism with codomain ${\mathcal{Q}}({\varphi}(1)H)$.
Locally trivial endomorphisms are trivial {#4bis}
-----------------------------------------
\[thrm:main2\] Assume OCA. Every unital, locally trivial endomorphism ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ is trivial.
Fix a unital, locally trivial endomorphism ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$. The endomorphism ${\varphi}$ is trivial if and only if $\text{Ad}(v) \circ {\varphi}$ is trivial for some unitary $v \in {\mathcal{Q}}(H)$. Thus, by lemmas \[lemma:shift\] and \[lemma:uni\], we can assume that there is $m \in {\mathbb{N}}$ such that ${\varphi}$ lifts to $\Phi_m$ when restricted to $\ell_\infty /c_0$, and that for every partition $\vec{E}$ there is a unitary $u_{\vec{E}}$ in $\ell_\infty(M_m({\mathbb{C}}))$ such that $\text{Ad}(u_{\vec{E}}) \circ \Phi_m$ lifts ${\varphi}$ on ${\mathcal{D}}[\vec{E}]_{\mathcal{Q}}$. Given a unitary $u \in \ell_\infty(M_m({\mathbb{C}}))$ we denote $\text{Ad}(u) \circ \Phi_m$ by $\Phi^u$ (the endomorphism ${\varphi}$, and therefore the integer $m$, will be fixed throughout this section, hence we omit $m$ in this notation).
The proof of theorem \[thrm:main2\] is inspired by [@inner Section 3], where theorem \[thrm:main2\] is proved for automorphisms, hence in the case $m = 1$. The idea is to glue together the various $u_{\vec{E}}$ in a coherent way in order to define a unitary $u \in \ell_\infty(M_m({\mathbb{C}}))$ such that $\Phi^u$ lifts ${\varphi}$ globally. We identify the unitaries in $\ell_\infty(M_m({\mathbb{C}}))$ with elements of $(\mathcal{U}(M_m({\mathbb{C}})))^{\mathbb{N}}$, where $\mathcal{U}(M_m({\mathbb{C}}))$ denotes the unitary group of $M_m({\mathbb{C}})$.
For $u =(u(i))_{i \in {\mathbb{N}}}, v= (v(i))_{i \in {\mathbb{N}}} \in \mathcal{U}(\ell_\infty(M_m({\mathbb{C}})))$ and $I \subseteq {\mathbb{N}}$, define $$\Delta_I(u,v) : = \sup_{i,j \in I} \lVert u(i) u^*(j) - v(i) v^*(j) \rVert.$$
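In the case $m = 1$, which is the setting of [@inner], the unitaries $u(i)$ are complex numbers of modulus one and $\Delta_I(u,v) = \sup_{i,j \in I} \lvert u(i)\overline{u(j)} - v(i)\overline{v(j)} \rvert$; by item \[item:delta5\] of lemma \[lemma:delta\] below, $\Delta_I(u,v)$ is small precisely when, on $I$, $u$ and $v$ differ (approximately) by right multiplication by a single fixed unitary.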
\[lemma:delta\] For all $I \subseteq {\mathbb{N}}$ and $u, v \in \mathcal{U}(\ell_\infty(M_m({\mathbb{C}})))$:
1. \[item:delta1\] $\Delta_I(u,v) \le 2 \sup_{i \in I} \lVert u(i) - v(i) \rVert$.
2. \[item:delta2\] $\Delta_I(u,v) \ge \sup_{j \in I} \lVert u(j)- v(j) \rVert - \inf_{i \in I} \lVert u(i) - v(i)\rVert$. In particular, if $u(k) = v(k)$ for some $k \in I$, then $\Delta_I(u,v) \ge \sup_{j \in I} \lVert u(j) - v(j) \rVert$.
3. \[item:delta3\] If $w \in \mathcal{U}(M_m({\mathbb{C}}))$ then $\Delta_I(u,v) = \Delta_I(u,vw)$.
4. \[item:delta4\] If $I \cap J \not = \emptyset$, then $\Delta_{I \cup J}(u,v) \le \Delta_I(u,v) + \Delta_J(u,v)$.
5. \[item:delta5\] $\inf_{w \in \mathcal{U}(M_m({\mathbb{C}}))} \sup_{i \in I} \lVert u(i) - v(i) w \rVert \le \Delta_I(u,v) \le 2 \inf_{w \in \mathcal{U}(M_m({\mathbb{C}}))} \sup_{i \in I} \lVert u(i) - v(i) w \rVert$.
The proof of this lemma can be easily inferred from the proof of [@inner Lemma 1.5]. We have that $$\begin{aligned}
\lVert u(i) u^*(j) - v(i) v^*(j) \rVert &= \lVert v^*(i)(u(i) u^*(j) - v(i) v^*(j)) u(j) \rVert
\\ &=
\lVert v^*(i) u(i) - 1 + 1- v^*(j) u(j) \rVert \\
&= \lVert v^*(i)(u(i) - v(i)) - v^*(j)(u(j) - v(j)) \rVert.\end{aligned}$$ This entails $$\begin{gathered}
\lvert \lVert u(i) - v(i) \rVert - \lVert u(j) - v(j) \rVert \rvert \le \lVert u(i)u^*(j) - v(i)v^*(j) \rVert \le \\
\le \lVert u(i) - v(i) \rVert + \lVert u(j) - v(j) \rVert,\end{gathered}$$ from which both item and follow. Item is straightforward to check since $v(i) ww^* v^*(j) = v(i) v^*(j)$. Notice that, unlike the 1-dimensional case, it is important to consider $vw$ rather than $wv$. Item follows by the triangle inequality, since $\lVert u(i) u^*(j) - v(i) v^*(j) \rVert$ is equal to $\lVert v^*(i) u(i) - v^*(j) u(j) \rVert$. Item follows by item plus items and .
\[lemma:cmpct\] Let $u = (u(i))_{i \in {\mathbb{N}}},v= (v(i))_{i \in {\mathbb{N}}} \in \mathcal{U}(\ell_\infty(M_m({\mathbb{C}})))$.
1. \[item:cmpct1\] If $\lim_{i \to \infty} \lVert u(i) - v(i) \rVert = 0$ then $\Phi^u \sim_{\mathcal{K}(H)} \Phi^v$.
2. \[item:cmpct2\] $\Phi^u \sim_{\mathcal{K}(H)} \Phi^v$ on ${\mathcal{D}}[\vec{E}]$ if and only if $\lim \sup_n \Delta_{E_n} (u,v) = 0$.
This lemma (and its proof) is an adapted version of [@inner Lemma 1.6] for endomorphisms. If $\lim_{i \to \infty} \lVert u(i) - v(i) \rVert$ is zero, it means that $u \sim_{\mathcal{K}(H)} v$, hence $\Phi^u \sim_{\mathcal{K}(H)} \Phi^v$.
In order to prove item , suppose first that $\lim \sup_n \Delta_{E_n} (u,v) = 0$. For every $n \in {\mathbb{N}}$, let $k_n$ be $\min(E_n)$. Let $w =(w(i))_{i \in {\mathbb{N}}} \in \ell_{\infty}(M_m({\mathbb{C}}))$ be the unitary defined, for $i \in E_n$, as $$w(i):= v(i) v^*(k_n) u(k_n).$$ The unitary $\sum P_{E_n} \otimes v^*(k_n) u(k_n) $ belongs to the commutant of $\Phi_m[{\mathcal{D}}[\vec{E}]]$, hence $\Phi^w = \Phi^v$ on ${\mathcal{D}}[\vec{E}]$. On the other hand, by items - of lemma \[lemma:delta\] we have that, for $i \in E_n$, $\lVert w(i) - u(i) \rVert
\le \Delta_{E_n}(u,w) = \Delta_{E_n}(u,v)$. Thus $\lim_{i \to \infty} \lVert w(i) - u(i) \rVert = 0$ and, by item of this lemma, $\Phi^u \sim_{\mathcal{K}(H)} \Phi^w = \Phi^v$. To prove the other direction, suppose there is $\epsilon > 0$ and a subsequence ${\{n_k\}}_{k \in {\mathbb{N}}}$ such that $\Delta_{E_{n_k}}(u,v) > \epsilon$. Fix two sequences $i_k,j_k \in E_{n_k}$ such that $\lVert u(j_k)u^*(i_k) - v(j_k)v^*(i_k) \rVert > \epsilon$ for every $k \in {\mathbb{N}}$. Let $\eta_k \in {\mathbb{C}}^m$ be a norm-one vector witnessing the previous inequality. Let $V$ be the partial isometry in ${\mathcal{D}}[\vec{E}]$ moving $\xi_{i_k}$ to $\xi_{j_k}$ (from the orthonormal basis of $H$ we fixed at the beginning of §\[S2\]) for every $k \in {\mathbb{N}}$ and sending all other vectors in ${\{\xi_n\}}_{n \in {\mathbb{N}}}$ to zero. We have that, if $\zeta \in \Phi_m(P_{i_k})$, $$\Phi^u(V) (\zeta) = u\Phi_m(V) u^*(i_k) (\zeta) = u(j_k) u^*(i_k)(\zeta),$$ $$\Phi^v(V) (\zeta) = v\Phi_m(V) v^*(i_k) (\zeta) = v(j_k) v^*(i_k)(\zeta).$$ Thus, for the vector $\eta_k$ we fixed before (or rather for $0 \oplus \dots \oplus 0 \oplus \eta_k \oplus 0 \dots$, where the non-zero coordinate appears in the $i_k$-th position), we have $$\lVert (\Phi^u(V) - \Phi^v(V)) (\eta_k) \rVert = \lVert (u(j_k)u^*(i_k) - v(j_k)v^*(i_k))(\eta_k) \rVert > \epsilon.$$ Since this holds for every $k \in {\mathbb{N}}$, it follows that the difference $\Phi^u(V) - \Phi^v(V)$ is not compact.
Given a function $f \in {\mathbb{N}}^{\mathbb{N}}$ we define $$E^f_n:= [f(n), f(n+1)),$$ $$F^f_n:=[f^+(n), f^+(n+1)),$$ $$E^{f, \text{even}}_n := [f(2n), f(2n+2)),$$ $$E^{f, \text{odd}}_n := [f(2n+1), f(2n+3)).$$
The corresponding partitions are $\vec{E}^f$, $\vec{F}^f$, $\vec{E}^{f, \text{even}}$ and $\vec{E}^{f, \text{odd}}$ respectively. We shall denote $\vec{E}^{f^+, \text{even}}$ and $\vec{E}^{f^+, \text{odd}}$ by $\vec{F}^{f, \text{even}}$ and $\vec{F}^{f, \text{odd}}$.
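For instance, taking $f(n) = 2n$ gives $E^f_n = [2n, 2n+2)$, $E^{f, \text{even}}_n = [4n, 4n+4) = E^f_{2n} \cup E^f_{2n+1}$ and $E^{f, \text{odd}}_n = [4n+2, 4n+6) = E^f_{2n+1} \cup E^f_{2n+2}$. In general, the blocks of $\vec{E}^{f, \text{even}}$ and of $\vec{E}^{f, \text{odd}}$ are obtained by gluing consecutive blocks of $\vec{E}^f$ in two staggered ways, so that ${\mathcal{D}}[\vec{E}^f] \subseteq {\mathcal{D}}[\vec{E}^{f, \text{even}}] \cap {\mathcal{D}}[\vec{E}^{f, \text{odd}}]$.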
\[lemma:double\] Let ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ be a unital, locally trivial endomorphism which can be lifted to $\Phi_m$ on $\ell_{\infty}/c_0$ for some $m \in {\mathbb{N}}$. For every $f \in {\mathbb{N}}^{{\mathbb{N}}}$ there is a unitary $w \in \ell_\infty(M_m({\mathbb{C}}))$ such that $\Phi^w$ lifts ${\varphi}$ on both ${\mathcal{D}}[\vec{E}^{f, \text{even}}]$ and ${\mathcal{D}}[\vec{E}^{f, \text{odd}}]$.
This proof follows the one of [@inner Lemma 3.5]. By assumption there are two unitaries $u, v \in \ell_\infty(M_m({\mathbb{C}}))$ such that $\Phi^u$ and $\Phi^v$ lift ${\varphi}$ on ${\mathcal{D}}[\vec{E}^{f, \text{even}}]$ and ${\mathcal{D}}[\vec{E}^{f, \text{odd}}]$ respectively. We define inductively two unitaries $u', v' \in \ell_\infty(M_m({\mathbb{C}}))$ as follows. For $i \in [f(0), f(2))$, let $u'(i) = u(i)$. If $u'(i)$ has been defined for $i < f(2n)$, for $i \in [f(2n - 1), f(2n + 1))$ let $$v'(i) = v(i) v^*(f(2n - 1)) u'(f(2n-1)).$$ If $v'(i)$ has been defined for $i < f(2n +1)$, for $i \in [f(2n), f(2n + 2))$ let $$u'(i) = u(i) u^*(f(2n)) v'(f(2n)).$$ We have that $v'(f(n)) = u'(f(n))$, that $\Phi^u= \Phi^{u'}$ on ${\mathcal{D}}[\vec{E}^{f, \text{even}}]$ and that $\Phi^v= \Phi^{v'}$ on ${\mathcal{D}}[\vec{E}^{f, \text{odd}}]$. This implies that, by item of lemma \[lemma:delta\], $$\sup_{i \in E^f_n} \lVert u'(i) - v'(i) \rVert \le \Delta_{E^f_n}(u', v').$$ On the other hand we have that $\Phi^{u'} = \Phi^u \sim_{\mathcal{K}(H)} \Phi^v = \Phi^{v'}$ on ${\mathcal{D}}[\vec{E}^f]$ by hypothesis (remember that ${\mathcal{D}}[\vec{E}^f] \subseteq {\mathcal{D}}[\vec{E}^{f, \text{even}}] \cap {\mathcal{D}}[\vec{E}^{f, \text{odd}}]$), therefore by item of lemma \[lemma:cmpct\] it follows that $\lim_{n \to \infty} \Delta_{E^f_n}(u', v') = 0$. By item of lemma \[lemma:cmpct\], we infer that $\Phi^{u'}$ and $\Phi^{v'}$ agree on ${\mathcal{B}}(H)$ up to compact operators. In conclusion, $\Phi^{u'}$ lifts ${\varphi}$ on both ${\mathcal{D}}[\vec{E}^{f, \text{even}}]$ and ${\mathcal{D}}[\vec{E}^{f, \text{odd}}]$.
Given $f, g \in {\mathbb{N}}^{\mathbb{N}}$, we write $g \le^* f$ if $g(n) \le f(n)$ for all but finitely many $n \in {\mathbb{N}}$. A subset $\mathcal{F} \subseteq {\mathbb{N}}^{\mathbb{N}}$ is *$\le^*$-cofinal* if for every $g \in {\mathbb{N}}^{\mathbb{N}}$ there is $f \in \mathcal{F}$ such that $g\le^* f$.
\[lemma:cof0\] Assume $\mathcal{F} \subseteq {\mathbb{N}}^{\mathbb{N}}$ is $\le^*$-cofinal.
1. \[item:cof01\] If $\mathcal{F}$ is partitioned into countably many pieces, then at least one is $\le^*$-cofinal.
2. \[item:cof02\] $(\exists^\infty n) (\exists i) (\forall k \ge n)(\exists f \in \mathcal{F})
(f(i) \le n \text{ and } f(i+1) \ge k)$.
3. \[item:cof03\] ${\{f^+ : f \in \mathcal{F}\}}$ is $\le^*$-cofinal.
\[lemma:cof\] Let $f,g \in {\mathbb{N}}^{\mathbb{N}}$ be such that $g \le^* f$. For all but finitely many $n \in {\mathbb{N}}$ there is $i$ such that $f^+(i) \le g(n) < g(n+1) \le f^{+}(i+2)$. If $f(m) \ge g(m)$ for all $m \in {\mathbb{N}}$, then the previous statement holds for every $n \in {\mathbb{N}}$.
Lemma \[lemma:cof\] entails that if $g \le^* f$ then for all but finitely many $n \in {\mathbb{N}}$ there is $i_n \in {\mathbb{N}}$ such that $E^g_n \subseteq F^f_{i_n} \cup F^f_{i_n+1}$. In particular, if $f(m) \ge g(m)$ for all $m \in {\mathbb{N}}$, then ${\mathcal{D}}[\vec{E}^g]$ is contained in the algebra generated by ${\mathcal{D}}[\vec{F}^{f, \text{even}}] \cup {\mathcal{D}}[\vec{F}^{f, \text{odd}}]$.
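A sketch of the second assertion: under this assumption every block $E^g_n$ is contained in a single block of $\vec{F}^{f, \text{even}}$ (when $i_n$ is even, since $F^f_{i_n} \cup F^f_{i_n+1}$ is then a block of $\vec{F}^{f, \text{even}}$) or in a single block of $\vec{F}^{f, \text{odd}}$ (when $i_n$ is odd). Splitting the block-diagonal decomposition of any $T \in {\mathcal{D}}[\vec{E}^g]$ according to the parity of $i_n$ therefore writes $T$ as the sum of an element of ${\mathcal{D}}[\vec{F}^{f, \text{even}}]$ and an element of ${\mathcal{D}}[\vec{F}^{f, \text{odd}}]$.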
We can assume that ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ is locally represented on ${\mathcal{D}}[\vec{E}]$ by $\Phi^{u_{\vec{E}}}$, where $u_{\vec{E}}$ is a unitary in $\ell_\infty(M_m({\mathbb{C}}))$ (see the paragraph after the statement of theorem \[thrm:main2\]). Let $\mathcal{X}
\subset {\mathbb{N}}^{\mathbb{N}}\times \mathcal{U}(M_m({\mathbb{C}}))^{\mathbb{N}}$ be the set of all pairs $(f, u)$ such that $\Phi^u$ lifts ${\varphi}$ on both ${\mathcal{D}}[\vec{F}^{f, \text{even}}]$ and ${\mathcal{D}}[\vec{F}^{f, \text{odd}}]$. By lemma \[lemma:double\], for every $f \in {\mathbb{N}}^{\mathbb{N}}$ there is $u$ such that $(f,u) \in \mathcal{X}$.
Fix $\epsilon > 0$ and consider the coloring $[\mathcal{X}]^2 = K_0^\epsilon \sqcup K_1^\epsilon$, where the pair ${\{(f, u), (g, v)\}}$ has color $K_0^\epsilon$ if there are $m,n \in {\mathbb{N}}$ such that $\Delta_{F^f_n \cap F^g_m}(u,v) > \epsilon$. We consider ${\mathbb{N}}^{\mathbb{N}}$ with the Baire space topology, induced by the metric $$d(f,g) = 2^{-\min {\{n: f(n) \not=g(n)\}}}.$$ This is a complete, separable metric. We consider $\mathcal{U}(M_m({\mathbb{C}}))^{\mathbb{N}}$ with the product of the strong operator topology on $\mathcal{U}(M_m({\mathbb{C}}))$, and $\mathcal{X}$ with the product topology. In this setting, it is straightforward to check that $K_0^\epsilon$ is open.
\[main:claim1\] Assume OCA. For every $\epsilon > 0$ there are no uncountable $K_0^\epsilon$-homogeneous subsets of $\mathcal{X}$.
Fix $\epsilon > 0$ and let $\mathcal{H}$ be an uncountable $K_0^\epsilon$-homogeneous subset of $\mathcal{X}$. Let $$\mathcal{F} = {\{g^+: \exists u (g,u) \in \mathcal{H}\}}.$$ We can assume that $\mathcal{H}$, and thus $\mathcal{F}$, has size $\aleph_1$. By OCA and [@parti Theorems 3.4 and 8.5] there is $f \in {\mathbb{N}}^{\mathbb{N}}$ which is an upper bound for $\mathcal{F}$. Using the pigeonhole principle, we can assume that there is $\overline{n} \in {\mathbb{N}}$ such that $f(m) \ge g^+(m)$ for all $g^+ \in \mathcal{F}$ and all $m \ge \overline{n}$, and moreover that $g^+(i) = h^+(i)$ for all $g^+, h^+ \in \mathcal{F}$ and all $i \le \overline{n}$. By increasing $f$ by $f(\overline{n})$ we can also assume that $f(m) \ge g^+(m)$ for all $g^+ \in \mathcal{F}$ and $m \in {\mathbb{N}}$. By lemma \[lemma:cof\] this entails that for every $n \in {\mathbb{N}}$ there is $i \in {\mathbb{N}}$ such that $E^{g^+}_n = F^g_{n}
\subseteq F^f_i \cup F^f_{i+1}$. Let $u$ be a unitary in $\ell_{\infty}(M_m({\mathbb{C}}))$ such that $\Phi^u$ lifts ${\varphi}$ on ${\mathcal{D}}[\vec{F}^{f, \text{even}}]$ and ${\mathcal{D}}[\vec{F}^{f, \text{odd}}]$. Since, by the previous observations, for every $g^+ \in \mathcal{F}$ we have that ${\mathcal{D}}[\vec{F}^g]$ is contained in the algebra generated by ${\mathcal{D}}[\vec{F}^{f, \text{even}}] \cup {\mathcal{D}}[\vec{F}^{f, \text{odd}}]$, it follows that $\Phi^u$ lifts ${\varphi}$ also on ${\mathcal{D}}[\vec{F}^g]$ and therefore, by item of lemma \[lemma:cmpct\], we have that $\lim_{n \to \infty}
\Delta_{F^g_n} (u,v) = 0$ for every $(g,v) \in \mathcal{H}$. By taking an uncountable subset of $\mathcal{H}$ if necessary, we can assume that there is $\overline{k}$ such that $\Delta_{F^g_m}(u,v) < \epsilon/2$ for all $m \ge \overline{k}$ and all $(g, v) \in \mathcal{H}$. By separability of $\mathcal{U}(M_m({\mathbb{C}}))^{\mathbb{N}}$ there are $(g,v)$, $(h,w) \in \mathcal{H}$ such that $g^+(i) = h^+(i)$ for all $i \le \overline{k}$ and $\lVert w(i) - v(i) \rVert < \epsilon /2$ for all $i \le g^+(\overline{k})$. This entails that if $n,m \in {\mathbb{N}}$ are such that $F_n^g \cap F_m^h \not = \emptyset$, then either both $m,n \le \overline{k}$ or $m,n \ge \overline{k}$. In the former case it follows that $\Delta_{F_n^g \cap F_m^h}(v,w) < \epsilon$ by item of lemma \[lemma:delta\]. If $m,n \ge \overline{k}$ then we have $$\Delta_{F_n^g \cap F_m^h}(w,v) \le \Delta_{F_n^g \cap F_m^h}(u,v) + \Delta_{F_n^g \cap F_m^h}(w,u)
< \epsilon.$$ This is a contradiction since $(g,v)$, $(h,w) \in \mathcal{H}$.
By OCA, for every $\epsilon > 0$ there is a partition of $\mathcal{X}$ into countably many $K^\epsilon_1$-homogeneous sets. Let $\epsilon_n = 2^{-n}$. Repeatedly using item of lemma \[lemma:cof0\], find sequences $\mathcal{X} \supseteq \mathcal{X}_0 \supseteq \dots \supseteq \mathcal{X}_n
\supseteq \dots$ and $0 = m(0) < m(1) < \dots < m(n) < \dots$ such that $\mathcal{X}_n$ is $K^{\epsilon_n}_1$-homogeneous and such that the set ${\{f : (\exists u) (f,u) \in \mathcal{X}_n\}}$ is $\le^*$-cofinal. Let $m(n)$ be the natural number given by item of lemma \[lemma:cof0\] for $\mathcal{X}_n$. For each $n \in {\mathbb{N}}$ fix a sequence ${\{(f_{n,i}, u_{n,i})\}}_{i \in {\mathbb{N}}}$ in $\mathcal{X}_n$ such that, for some $j_i \in {\mathbb{N}}$ $$\label{ineq}
f_{n,i}^+(j_i) \le m(n) < m(n+i) \le f_{n,i}^+(j_i +1).$$ By compactness of $\mathcal{U}(M_m({\mathbb{C}}))^{\mathbb{N}}$, we can assume that each sequence ${\{u_{n,i}\}}_{
i \in {\mathbb{N}}}$ converges to some $u_n$.
\[main:claim2\] There is a subsequence ${\{u_{n_k}\}}_{k \in {\mathbb{N}}}$ such that $$\sup_{i \in [m(n_h), \infty)} \lVert
u_{n_k}(i) - u_{n_h}(i) \rVert \le \epsilon_k$$ for all $ h \ge k$.
We start by showing that $\Delta_{[m(n), \infty)} (u_h, u_n) \le \epsilon_h$ for all $h < n$. Suppose this is not the case and let $m(n) \le i_1 < i_2$ be such that $\lVert u_h(i_1)u^*_h(i_2) - u_n(i_1)u^*_n(i_2)
\rVert > \epsilon_h$. There is $j \in {\mathbb{N}}$ such that $$\lVert u_{h,j}(i_1)u^*_{h,j}(i_2) -
u_{n,j}(i_1)u^*_{n,j}(i_2) \rVert > \epsilon_h$$ and, by \eqref{ineq}, there are $k_1, k_2 \in {\mathbb{N}}$ such that $$f^+_{n,j}(k_1) \le m(n) < i_2 < f^+_{n,j}(k_1+1),$$ $$f^+_{h,j}(k_2) \le m(h) < m(n) < i_2 < f^+_{h,j}(k_2+1).$$ In particular, this entails that $\Delta_{F^{f_{n,j}}_{k_1} \cap F^{f_{h,j}}_{k_2}} (u_{h,j}, u_{n,j}) > \epsilon_h$, which is a contradiction since $(f_{h,j}, u_{h,j})$ and $(f_{n,j}, u_{n,j})$ both belong to $\mathcal{X}_h$, which is $K_1^{\epsilon_h}$-homogeneous.
By item of lemma \[lemma:delta\], for every $h < n$ there is $w_{h,n} \in \mathcal{U}(M_m({\mathbb{C}}))$ such that $$\sup_{i \ge m(n)} \lVert u_n(i) - u_h(i)w_{h,n} \rVert \le \epsilon_h.$$ The unitary $w_{h,n}$ exists by compactness of $\mathcal{U}(M_m({\mathbb{C}}))$. Given $h < n < k \in {\mathbb{N}}$ we have that, for $i \ge
m(k)$ $$u_h(i) w_{h,n} \approx_{\epsilon_h} u_n(i) \approx_{\epsilon_h} u_k(i) w^*_{n,k} \approx_{\epsilon_h}
u_h(i) w_{h,k} w^*_{n,k},$$ hence $$\label{unit}
\lVert w_{h,n} - w_{h,k} w^*_{n,k} \rVert \le 3 \epsilon_h.$$ This can be used to show that there is an infinite $Y = {\{n_k\}}_{k \in {\mathbb{N}}} \subseteq {\mathbb{N}}$ such that $\lVert 1 - w_{n_i,n_j} \rVert \le 4 \epsilon_{n_{i-1}}$ for all $1 \le i < j$. To see this, define a coloring $M_0 \sqcup M_1$ on the triples of elements in ${\mathbb{N}}$, by saying that the triple $i < j < k$ is in $M_0$ if and only if $$\lVert 1- w_{j,k} \rVert \le 4 \epsilon_i.$$ Suppose there is an infinite $M_1$-homogeneous set $Y$. Let $h$ be the minimum of $Y$. By compactness of the unit ball of $M_m({\mathbb{C}})$ there is $n \in Y$ big enough so that, for some $j < k < n$ all in $Y$ we have that $\lVert w_{j,n} - w_{k,n} \rVert < \epsilon_h$. It follows that $$\lVert w_{j,k} - 1 \rVert \le \lVert w_{j,k} - w_{j,n}w^*_{k,n} \rVert + \lVert 1 - w_{j,n}w^*_{k,n} \rVert
\stackrel{\mathclap{\eqref{unit}}}{\le} 4\epsilon_h,$$ which is a contradiction, since the triple $(h,j,k)$ is supposed to be in $M_1$. By Ramsey’s theorem there is an infinite $M_0$-homogeneous set $Y = {\{n_k\}}_{k \in {\mathbb{N}}}$. We have therefore, for $j > i \ge 1$ $$\lVert 1 - w_{n_i, n_j} \rVert \le 4 \epsilon_{n_{i-1}}.$$ Without loss of generality we can assume that $n_0 \ge 4$, hence that, for every $i \ge 1$ the following holds $$4 \epsilon_{n_{i-1}} \le \epsilon_{i-1}/4 = \epsilon_{i+1}.$$ Summarizing, we have that for every $k < h \in {\mathbb{N}}$ $$\begin{aligned}
\sup_{i \ge m(n_h)} \lVert u_{n_k}(i) - u_{n_h}(i) \rVert &\le \sup_{i \ge m(n_h)} \lVert u_{n_k}(i) w_{n_k,n_h}
- u_{n_h}(i) \rVert + \lVert 1 - w_{n_k,n_h} \rVert \\ &\le \epsilon_{n_k} + \epsilon_{k+1} \le \epsilon_k.\end{aligned}$$
Let ${\{u_{n_k}\}}_{k \in {\mathbb{N}}}$ be the subsequence given by the previous claim and let $v \in
\mathcal{U}(M_m({\mathbb{C}}))^{\mathbb{N}}$ be defined as $v(i) = u_{n_k}(i)$ for all $i \in [m(n_k), m(n_{k+1}))$ and $v(i) = u_{n_0}(i)$ for all $i \le m(n_0)$. It follows then that $\lVert v(i) - u_{n_k}(i) \rVert
< \epsilon_k$ for all $i \ge m(n_k)$. Given $j \in {\mathbb{N}}$ and $(g,w) \in \mathcal{X}_{n_j}$, we claim that for all $i \in {\mathbb{N}}$ we have $\Delta_{F^g_i \setminus m(n_j)} (v,w) \le 3\epsilon_j$. This is the case since for every $i \in {\mathbb{N}}$ there is $h\in {\mathbb{N}}$ such that $$[f_{n_j,h}^+(j_h), f_{n_j,h}^+(j_h + 1)) \supseteq F^g_i \setminus m(n_j).$$ Hence, since both $(g,w)$ and $(f_{n_j,h}, u_{n_j,h})$ belong to $\mathcal{X}_{n_j}$, we have that $\Delta_{F^g_i \setminus m(n_j)} (w, u_{n_j,h}) \le \epsilon_{n_j}$. By continuity, we also have $\Delta_{F^g_i \setminus m(n_j)} (w, u_{n_j}) \le \epsilon_{n_j}$. Thus, in conclusion $$\begin{aligned}
\label{eq1}
\begin{split}
\Delta_{F^g_i \setminus m(n_j)} (v,w) &\le \Delta_{F^g_i \setminus m(n_j)} (v,u_{n_j}) + \Delta_{F^g_i \setminus m(n_j)} (u_{n_j},w) \\ &\le 2 \sup_{h \in F^g_i \setminus m(n_j)} \lVert v(h) - u_{n_j}(h) \rVert
+ \epsilon_{n_j} \\ &\le 3 \epsilon_j.
\end{split}\end{aligned}$$ We conclude by showing that $\Phi^v$ lifts ${\varphi}_{\vec{E}}$ for every partition $\vec{E}$. Let $g \in {\mathbb{N}}^{\mathbb{N}}$ be such that $\vec{E} = \vec{E}^g$ and find $u \in \mathcal{U}(M_m({\mathbb{C}}))^{\mathbb{N}}$ such that $\Phi^u$ lifts ${\varphi}$ on ${\mathcal{D}}[\vec{E}^g]$. By item of lemma \[lemma:cmpct\], it is enough to show that $\lim_{n \to \infty} \Delta_{E^g_n} (u,v) = 0$. Fix $k \in {\mathbb{N}}$, and let $(f,w) \in \mathcal{X}_{n_k}$ be such that $f \ge^* g$. For all but finitely many $n\in {\mathbb{N}}$ there is $i_n \in {\mathbb{N}}$ such that $E^g_n \subseteq F^f_{i_n} \cup F^f_{i_n+1}$. This implies that $\lim_{n \to \infty} \Delta_{E^g_n}(w,u) = 0$, which, by item of lemma \[lemma:delta\], in turn entails $$\begin{aligned}
\lim_{n \to \infty} \Delta_{E^g_n} (u,v) &\le \lim_{n \to \infty} \Delta_{E^g_n} (u,w) + \Delta_{E^g_n} (w,v) =
\lim_{n \to \infty} \Delta_{E^g_n} (w,v) \\ &\le \lim_{n \to \infty} \Delta_{F^f_{i_n} \cup F^f_{i_n+1}} (w,v)
\\ &\le \lim_{n \to \infty} \Delta_{F^f_{i_n} } (w,v) + \Delta_{F^f_{i_n+1}} (w,v) \\ &\stackrel{\mathclap{\eqref{eq1}}}{\le} 6 \epsilon_k.\end{aligned}$$ The inequality above holds for every $k \in {\mathbb{N}}$, thus $\lim_{n \to \infty} \Delta_{E^g_n} (u,v)$ is zero.
All endomorphisms are locally trivial
-------------------------------------
\[thrm:lctr\] Assume OCA. Every unital endomorphism ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ is locally trivial.
Given a partition $\vec{E}$, we want to find a partial isometry of finite index $v_{\vec{E}}$ and a strongly continuous, unital $*$-homomorphism $\Phi_{\vec{E}}: {\mathcal{D}}[\vec{E}] \to {\mathcal{B}}(H)$ such that $\text{Ad}(v_{\vec{E}}) \circ \Phi_{\vec{E}}$ lifts ${\varphi}$ on ${\mathcal{D}}[\vec{E}]_{\mathcal{Q}}$. Without loss of generality, we take a partition $\vec{E}$ which is composed of intervals whose length is strictly increasing. First we need a fact following from OCA which is proved in [@inner §6, §7] (see also [@ilijasbook §18]). In that paper it is shown that there is a strongly continuous $*$-homomorphism $\Psi: {\mathcal{D}}[\vec{E}] \to {\mathcal{B}}(H)$ which lifts ${\varphi}$ on ${\mathcal{D}}_X[\vec{E}]$, for some infinite $X \subseteq {\mathbb{N}}$. The paper [@inner] focuses on automorphisms of ${\mathcal{Q}}(H)$, but these proofs also work for unital endomorphisms. More specifically, [@inner Lemma 7.2] and [@inner Proposition 7.7] can be used, as shown in the proof of [@inner Proposition 7.1], to find an infinite $Y \subseteq {\mathbb{N}}$ such that ${\varphi}$ has a $C$-measurable (see [@inner §2.1] for a definition) lift $\Psi'$ on ${\mathcal{D}}_Y[\vec{E}]$. The proof of [@inner Theorem 6.3] (which does not require OCA) shows how to find an infinite $X \subseteq Y$ and refine $\Psi'$ to a strongly continuous $*$-homomorphism $\Psi: {\mathcal{D}}[\vec{E}] \to {\mathcal{B}}(H)$ such that $\Psi$ lifts ${\varphi}$ on ${\mathcal{D}}_X[\vec{E}]$. Alternatively, it is possible to use [@mckvignati Theorem 8.4] to directly obtain $X$ and $\Psi$. This result, however, being a more general statement about corona algebras rather than only about the Calkin algebra, requires the stronger assumption $\text{OCA}_\infty + \text{MA}_{\aleph_1}$ (see [@mckvignati §2.2]). Given such a lift $\Psi$ and such an $X \subseteq {\mathbb{N}}$, the idea now is to exploit the abundance of partial isometries in ${\mathcal{Q}}(H)$ to obtain a global unital lift on ${\mathcal{D}}[\vec{E}]$. Concretely, we ‘compress’ elements of ${\mathcal{D}}[\vec{E}]$ into ${\mathcal{D}}_X[\vec{E}]$, apply $\Psi$, and finally ‘decompress’ their images in ${\mathcal{B}}(H)$ (see also [@inner Lemma 4.1]).
Let $v \in {\mathcal{B}}(H)$ be a partial isometry such that $v^*v = 1$, $P := vv^* \le P_X$ belongs to ${\mathcal{D}}_X[\vec{E}]$ and such that $v {\mathcal{D}}[\vec{E}] v^* \subseteq {\mathcal{D}}_X[\vec{E}]$. Such a partial isometry exists since, by assumption, the length of the intervals in $\vec{E}$ is strictly increasing. Let $Q$ be the image of $P$ via $\Psi$. Since $P \in {\mathcal{D}}_X[\vec{E}]$ we have that $$\label{QP}
q(Q) = {\varphi}(q(P)).$$ Let $w$ be a partial isometry lifting ${\varphi}(q(v))$, hence we have $$\label{w}
q(w) = {\varphi}(q(v)), \
1 \sim_{{\mathcal{K}}(H)} w^*w \text{ and } Q
\sim_{{\mathcal{K}}(H)} ww^*.$$
\[cc\] Up to a compact perturbation, we can assume that $w$ satisfies the following properties.
1. \[ic1\] $ww^* \le Q$.
2. \[ic2\] $w^*w \ge Q$.
We start by proving item . Let $w'$ be a lift of ${\varphi}(q(v^*))$. Then $w'Q$ is also a lift of ${\varphi}(q(v^*))$, since $$q(w'Q) = {\varphi}(q(v^*)) {\varphi}(q(P)) = {\varphi}(q(v^*P)) = {\varphi}(q(v^*)).$$ Let $w'Q = u \lvert w'Q \rvert$ be the polar decomposition of $w'Q$. Since $\lvert w'Q \rvert$ is a compact perturbation of $Q$ and the kernel of $u$ is equal to $\text{ker}(w'Q) \supseteq \text{ker}(Q)$, the partial isometry $u$ is also a lift of ${\varphi}(q(v^*))$ and satisfies $u^* u \le Q$. Let $w$ be $u^*$. Summarizing, we can assume that both $1 - w^*w$ and $Q - ww^*$ are finite rank projections.
In order to prove item , first notice that the space $K: = Q^\perp H \cap w^*w H$ is infinite dimensional, as the former is infinite dimensional and the latter has finite codimension. Let $n$ be the rank of $1 - w^*w$ and fix an orthonormal set ${\{\zeta_k\}}_{k < n}$ in $K$. Let ${\{\eta_k\}}_{k < n}$ be an orthonormal basis of $(1 - w^*w) H$ and modify $w$ to be the operator sending all vectors in ${\{\zeta_k\}}_{k < n}$ to zero, sending $\eta_k$ to $w(\zeta_k)$ for every $k < n$, and acting as $w$ everywhere else. With these (compact) modifications, we have that $w^*w \ge Q$.
Let ${\{\zeta_k\}}_{k < n}$ be an orthonormal basis of $(1 - w^*w) H$, and let ${\{\zeta_k\}}_{k \in {\mathbb{N}}}$ be an orthonormal basis of $H$ extending it. Denote the shift operator sending $\zeta_k$ to $\zeta_{k+1}$ for all $k \in {\mathbb{N}}$ by $S$ and let $r \in \mathbb{Z}$ be the difference $$r: = \text{rk}(Q - ww^*) - \text{rk}(1 - w^*w).$$ The operator $S^{-r} S^r$ is either the identity (if $r$ is zero or positive) or a projection greater than $w^*w$, therefore, by claim \[cc\] $$\label{great}
S^{-r} S^r \ge w^*w \ge Q \ge ww^*.$$ Consequently, $\tilde{w} := S^r w S^{-r}$ is a partial isometry such that $\tilde{w}^* \tilde{w} = S^r w^* w S^{-r}$ and $\tilde{w} \tilde{w}^* = S^r ww^* S^{-r}$. Moreover $\text{rk}(S^r(Q - ww^*)S^{-r}) = \text{rk}(Q - ww^*)$, since $S^r$ acts as an isometry on $(Q - ww^*)H$ and $S^{-r}$ acts as an isometry on $S^r(Q - ww^*)H$, which is the case since $S^{-r}S^r \ge Q \ge Q - ww^*$. The operator $1 - S^r w^*w S^{-r}$ is the orthogonal projection onto the span of ${\{\zeta_k\}}_{k < r+n}$, therefore $\text{rk}(1 - S^rw^*wS^{-r}) = \text{rk}(1 - w^*w) + r$, thus $$\text{rk}(S^r(Q - ww^*)S^{-r}) = \text{rk}(1 - S^rw^*wS^{-r}).$$ Because of this, there is a partial isometry $\overline{w}$ such that $$\label{w1}
\overline{w}^*\overline{w} = 1, \ \overline{w}\overline{w}^* = S^r Q S^{-r}$$ and $$\label{w2}
\overline{w} \sim_{{\mathcal{K}}(H)} S^r w S^{-r}.$$ We claim that the map $$\begin{aligned}
\Phi_{\vec{E}} : {\mathcal{D}}[\vec{E}] &\to {\mathcal{B}}(H) \\
a &\mapsto \overline{w}^*S^r \Psi(vav^*)S^{-r} \overline{w}\end{aligned}$$ is a strongly continuous, unital $*$-homomorphism lifting $\text{Ad}(q(S^r)) \circ {\varphi}$ on ${\mathcal{D}}[\vec{E}]_{\mathcal{Q}}$. Strong continuity, linearity and preservation of the adjoint operation follow since $\Psi$ has these properties. Unitality is a consequence of the definition of $\overline{w}$: $$\overline{w}^*S^r \Psi(vv^*)S^{-r} \overline{w} = \overline{w}^*S^r QS^{-r} \overline{w}
\stackrel{\eqref{w1}}{=}
\overline{w}^* \overline{w} \overline{w}^* \overline{w} \stackrel{\eqref{w1}}{=} 1.$$ Given $a,b \in {\mathcal{D}}[\vec{E}]$ we have that $$\begin{aligned}
\overline{w}^*S^r \Psi(vabv^*)S^{-r} \overline{w} \ &=
\ \overline{w}^*S^r \Psi(vav^*Pvbv^*)S^{-r} \overline{w} \\ &=
\ \overline{w}^*S^r \Psi(vav^*) Q \Psi(vbv^*)S^{-r} \overline{w} \\ &\stackrel{\mathclap{\eqref{great}}}{=}
\ \overline{w}^*S^r \Psi(vav^*) S^{-r} S^r Q S^{-r} S^r \Psi(vbv^*)S^{-r} \overline{w} \\ & \stackrel{\mathclap{\eqref{w1}}}{=}
\ \overline{w}^*S^r \Psi(vav^*) S^{-r} \overline{w}\overline{w}^* S^r \Psi(vbv^*)S^{-r} \overline{w}.\end{aligned}$$ Finally, for $a \in {\mathcal{D}}[\vec{E}]$, the following holds $$\begin{aligned}
q(\overline{w}^*S^r \Psi(vav^*)S^{-r} \overline{w}) \ & \stackrel{\mathclap{\eqref{w2}}}{=}
\ q(S^r w^* S^{-r}S^r){\varphi}(q(vav^*))q(S^{-r}S^r w S^{-r}) \\ & \stackrel{\mathclap{\eqref{QP}}}{=}
\ q(S^r w^*) q( S^{-r}S^r Q){\varphi}(q(vav^*))q(Q S^{-r}S^r) q( w S^{-r}) \\ &\stackrel{\mathclap{\eqref{great}}}{=}
\ q(S^r w^*) q( Q){\varphi}(q(vav^*))q(Q ) q( w S^{-r}) \\ &\stackrel{\mathclap{\eqref{QP}}}{=}
\ q(S^r w^*) {\varphi}(q(vav^*))q( w S^{-r}) \\ & \stackrel{\mathclap{\eqref{w}}}{=}
\ q(S^r) {\varphi}(q(a)) q(S^{-r}).\end{aligned}$$
Classification and Closure Properties {#Sclass}
=====================================
We are finally ready to prove theorems \[mt2\] and \[closure\].
We start by showing the following claim, which does not require OCA.
\[ind\] The index of the image of the unilateral shift $S$ via a trivial, unital endomorphism ${\varphi}$ is finite and negative.
By definition of trivial endomorphism, there is a unitary $u \in {\mathcal{Q}}(H)$ such that $\text{Ad}(u) \circ {\varphi}$ is induced by a strongly continuous endomorphism $\Phi$ of ${\mathcal{B}}(H)$, which can be assumed to be unital (see remark \[remark:uni\]). This means that there exists $m \in {\mathbb{N}}\setminus {\{0\}}$ such that, up to unitary transformation, ${\varphi}$ lifts to the map $\Phi_m: {\mathcal{B}}(H) \to {\mathcal{B}}(H \otimes {\mathbb{C}}^m)$ sending $T$ to $T \otimes 1_m$ (see remark \[remark:sc\]). In particular, the index of ${\varphi}(q(S))$ is $-m$.
The forward direction of the equivalence is straightforward. Suppose thus that ${\varphi}_1$, ${\varphi}_2$ are two endomorphisms of ${\mathcal{Q}}(H)$ that satisfy conditions and of the statement. By condition we can assume that ${\varphi}_1(1) = {\varphi}_2(1) = p$. If there is a unitary $v \in {\mathcal{Q}}(H)$ such that $\text{Ad}(v) \circ {\varphi}_1 = {\varphi}_2$, then $vpv^* = p$, hence $v$ and $p$ commute, which means that $v$ is a direct sum of a unitary in $p{\mathcal{Q}}(H) p$ and a unitary in $(1-p) {\mathcal{Q}}(H) (1-p)$. The only part of $v$ acting non-trivially on ${\varphi}_1[{\mathcal{Q}}(H)]$ is the one in $p {\mathcal{Q}}(H) p$. Hence, without loss of generality, we can assume that $p= {\varphi}_1(1) = {\varphi}_2(1) = 1$. By theorem \[thrm:main\] both ${\varphi}_1$ and ${\varphi}_2$ are trivial. Therefore, by condition and claim \[ind\], we have that ${\varphi}_1$ and ${\varphi}_2$, modulo unitary equivalence, lift to the same endomorphism on ${\mathcal{B}}(H)$, hence they are equal.
The final sentence of the theorem follows from claim \[ind\], since $\Phi_m \oplus \Phi_n = \Phi_{m+n}$ and $\Phi_m \circ \Phi_n = \Phi_{mn}$ for every $m, n \in {\mathbb{N}}\setminus {\{0\}}$.
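Indeed, under the natural identifications $(H \otimes {\mathbb{C}}^n) \otimes {\mathbb{C}}^m \cong H \otimes {\mathbb{C}}^{nm}$ and $(H \otimes {\mathbb{C}}^m) \oplus (H \otimes {\mathbb{C}}^n) \cong H \otimes {\mathbb{C}}^{m+n}$, one computes, for every $T \in {\mathcal{B}}(H)$, $$(\Phi_m \circ \Phi_n)(T) = (T \otimes 1_n) \otimes 1_m = T \otimes 1_{mn} = \Phi_{mn}(T), \qquad (\Phi_m \oplus \Phi_n)(T) = (T \otimes 1_m) \oplus (T \otimes 1_n) = T \otimes 1_{m+n} = \Phi_{m+n}(T).$$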
\[nonu\] Every non-unital ${\varphi}\in \text{End}({\mathcal{Q}}(H))$ can be written as a direct sum ${\varphi}_1 \oplus 0$ where ${\varphi}_1$ is unital and $0$ is the zero endomorphism of ${\mathcal{Q}}(H)$. Therefore, by theorem \[thrm:main\], every non-unital endomorphism ${\varphi}$ is, up to unitary equivalence, equal to $\Phi_m \oplus 0$ for some $m \in {\mathbb{N}}$. Consider the set $\mathcal{N} := ({\mathbb{N}}\times {\{0,1\}}) \setminus {\{(0,1)\}}$ and, for ${\varphi}\in \text{End}({\mathcal{Q}}(H))$, define $\text{ind}({\varphi}) := -\text{ind}({\varphi}(q(S)) + (1 - {\varphi}(1)))$, where $S$ is the unilateral shift. Consider the map: $$\begin{aligned}
\Theta: \text{End}({\mathcal{Q}}(H)) &\to \mathcal{N} \\
{\varphi}&\mapsto
\begin{cases}
(0,0) \text{ if } {\varphi}(1) = 0 \\
(\text{ind}({\varphi}),1) \text{ if } {\varphi}(1) = 1 \\
(\text{ind}({\varphi}),0) \text{ if } 0 < {\varphi}(1) < 1
\end{cases}\end{aligned}$$ By theorem \[thrm:main\] and the previous observation, the map $\Theta$ is a bijection, since all projections $p \in {\mathcal{Q}}(H)$ such that $0 < p < 1$ are unitarily equivalent in ${\mathcal{Q}}(H)$. For ${\varphi}_1, {\varphi}_2 \in \text{End}({\mathcal{Q}}(H))$ we have that ${\varphi}_1 \oplus {\varphi}_2$ (and ${\varphi}_1 \circ {\varphi}_2$) is non-unital if and only if at least one of ${\varphi}_1$ and ${\varphi}_2$ is non-unital. Therefore, the map $\Theta$ is a semigroup isomorphism between $(\text{End}({\mathcal{Q}}(H)), \oplus)$ and $(\mathcal{N}, +)$, where the addition on $\mathcal{N}$ is defined as $(n,i) + (m,j) = (n+m,i\cdot j)$. Analogously, $\Theta$ is an isomorphism between $(\text{End}({\mathcal{Q}}(H)), \circ)$ and $(\mathcal{N}, \cdot)$, where $(n,i) \cdot (m,j) = (n\cdot m,i\cdot j)$.
Let ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ be a trivial, unital endomorphism. Up to unitary transformation there is $m \in {\mathbb{N}}$ such that ${\varphi}$ is induced by the map $\Phi_m: {\mathcal{B}}(H) \to {\mathcal{B}}(H \otimes {\mathbb{C}}^m)$ sending $T$ to $T \otimes 1_m$ (see remark \[remark:sc\]). The commutant of $\Phi_m[{\mathcal{B}}(H)]$ in ${\mathcal{B}}(H \otimes {\mathbb{C}}^m)$ is isomorphic to $M_m({\mathbb{C}})$. By [@jp Lemma 3.2] also the commutant of ${\varphi}[{\mathcal{Q}}(H)]$ in the codomain ${\mathcal{Q}}(H)$ is isomorphic to $M_m({\mathbb{C}})$. Thus, the commutant of the image of a trivial, unital endomorphism of ${\mathcal{Q}}(H)$ is always finite dimensional. Let ${\mathcal{B}}\in \mathbb{E}$ be unital and infinite-dimensional. Consider the algebraic tensor product ${\mathcal{Q}}(H) \otimes_{\text{alg}} {\mathcal{B}}$ and suppose $\psi: {\mathcal{Q}}(H) \otimes_{\text{alg}} {\mathcal{B}}\to {\mathcal{Q}}(H)$ is an embedding. The element $\psi(1)$ is a non-zero projection. Since $\psi(1) {\mathcal{Q}}(H) \psi(1)
\cong {\mathcal{Q}}(H)$, we can assume that $\psi$ is unital. On the one hand, by theorem \[thrm:main\] the restriction of $\psi$ to ${\mathcal{Q}}(H)$ is unital and trivial, on the other hand $\psi$ sends ${\mathcal{B}}$ into the commutant of $\psi[{\mathcal{Q}}(H)]$, which is a contradiction.
Let ${\{{\mathcal{A}}_n\}}_{n \in {\mathbb{N}}}$ be an increasing sequence of finite-dimensional, unital [$\mathrm{C}^\ast$]{}-algebras such that ${\mathcal{A}}:= \overline{\bigcup_{n \in {\mathbb{N}}} {\mathcal{A}}_n}$ is an infinite-dimensional, unital AF-algebra. We have that ${\mathcal{Q}}(H) \otimes {\mathcal{A}}_n \in \mathbb{E}$ for all $n \in {\mathbb{N}}$, but, by the proof of item , ${\mathcal{Q}}(H)
\otimes {\mathcal{A}}\notin \mathbb{E}$.
By the results in [@inner] we know that it is consistent with ZFC that there is no automorphism of ${\mathcal{Q}}(H)$ sending the unilateral shift $S$ to its adjoint. Combining claim \[ind\] with theorem \[thrm:main\] and with the equality $\text{ind}(q(S^*)) = 1$, we can generalize this statement to endomorphisms of ${\mathcal{Q}}(H)$.
Assume OCA. There is no endomorphism ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ sending the unilateral shift to its adjoint or to any unitary of index zero.
The following two examples witness the failure of theorems \[mt2\] and \[thrm:main\] when CH holds, suggesting a rather complicated picture of $\text{End}({\mathcal{Q}}(H))$ in that case. In particular, while $\text{End}({\mathcal{Q}}(H))$ is countable under OCA (theorem \[mt2\]), CH implies that $\text{End}({\mathcal{Q}}(H))$ has size $2^{\aleph_1}$, as shown in the following example.
\[ex1\] An automorphism is trivial if and only if it is inner. Hence all the inequivalent $2^{\aleph_1}$ outer automorphisms produced in [@outer] are examples of non-trivial endomorphisms of ${\mathcal{Q}}(H)$ (outer automorphisms of ${\mathcal{Q}}(H)$ can also be built using the *weak Continuum Hypothesis*, see [@outer2] and [@ilijasbook §18.1]). All the automorphisms built in [@outer] locally behave like an inner automorphism, in particular they behave like an inner automorphism on the unilateral shift. Therefore, under CH it is possible to find $2^{\aleph_1}$ inequivalent automorphisms which all send the unilateral shift to an element of index -1. Hence this invariant is not enough to classify $\text{End}_u({\mathcal{Q}}(H))$, and theorem \[mt2\] fails under CH. It would be interesting to investigate whether CH implies that for every endomorphism ${\varphi}$ of ${\mathcal{Q}}(H)$ there are $2^{\aleph_1}$ inequivalent endomorphisms of ${\mathcal{Q}}(H)$ with the same invariant as ${\varphi}$.
\[ex2\] In [@fhv] it is proved that all [$\mathrm{C}^\ast$]{}-algebras of density character $\aleph_1$ can be embedded into ${\mathcal{Q}}(H)$ with a map whose restriction to the separable subalgebras is a trivial extension[^2]. Under CH the density character of ${\mathcal{Q}}(H)$ is $\aleph_1$, thus there is a unital endomorphism ${\varphi}: {\mathcal{Q}}(H) \to {\mathcal{Q}}(H)$ that sends the unilateral shift $S$ to a unitary in ${\mathcal{Q}}(H)$ which lifts to a unitary in ${\mathcal{B}}(H)$, which has therefore index zero. By claim \[ind\], this map cannot be a trivial endomorphism. With this example we see that without OCA the range of values assumed by $\text{ind}({\varphi}(S))$, as ${\varphi}$ varies in $\text{End}_u({\mathcal{Q}}(H))$, can be strictly larger than the negative numbers. Notice that the existence of an automorphism sending $S$ to $S^*$ would give an example where the index of the image of the shift is positive.
In [@inner2] the author extends his results in [@inner] to all Calkin algebras on nonseparable Hilbert spaces, showing that the Proper Forcing Axiom implies that all automorphisms of the Calkin algebra on a nonseparable Hilbert space are inner. It would be interesting to know whether those techniques can be generalized to study the semigroup of endomorphisms of the Calkin algebra on a nonseparable Hilbert space.
In conclusion, we remark that an interesting consequence of the simple structure of $\text{End}_u({\mathcal{Q}}(H))$ under OCA is that the monoid $(\text{End}_u({\mathcal{Q}}(H)), \circ)$ is commutative (theorem \[mt2\]). Surprisingly, to our knowledge it is not known whether commutativity can fail when OCA is not assumed.
Is it relatively consistent with ZFC that $(\text{End}_u({\mathcal{Q}}(H)), \circ)$ is noncommutative?
Notice that, by Woodin’s $\Sigma^2_1$-absoluteness theorem ([@sigma12]), this is essentially asking whether commutativity holds under CH.
I wish to thank Ilijas Farah for his suggestions concerning these problems and for his useful remarks on the early drafts of this paper. I also would like to thank Alessandro Vignati for the valuable conversations we had about these topics.
[^1]: We denote the restriction of $\Phi_m$ to ${\mathcal{D}}[\vec{E}]$ by $\Phi_m$.
[^2]: Given a [$\mathrm{C}^\ast$]{}-algebra ${\mathcal{A}}$ and an embedding $\theta: {\mathcal{A}}\to {\mathcal{Q}}(H)$, the map $\theta$ is a *trivial extension of ${\mathcal{A}}$* iff there is a $*$-homomorphism $\Theta:{\mathcal{A}}\to {\mathcal{B}}(H)$ such that $\theta = q \circ \Theta$. This notion of trivial maps, albeit more common, is different from the one we used throughout this paper in definition \[trivial\].
---
abstract: 'We construct a family of bipartite states of arbitrary dimension whose eigenvalues of the partially transposed matrix can be inferred directly from the block structure of the global density matrix. We identify from this several subfamilies in which the PPT criterion is both necessary and sufficient. A sufficient criterion of separability is obtained, which is fundamental for the discussion. We show how several examples of states known to be classifiable by the PPT criterion indeed belong to this general set. Possible uses of these states in numerical analysis of entanglement and in the search of PPT bound entangled states are briefly discussed.'
author:
- 'F. E. S. Steinhoff'
- 'M. C. de Oliveira'
title: Families of bipartite states classifiable by the positive partial transposition criterion
---
Introduction
============
Quantum entanglement has a major role in current discussions about quantum information processing due to its potential application in protocols [@nielsen]. Despite its importance, entanglement characterization has been recognized as a difficult task. Such difficulties motivate the search for alternative ways of detecting the separability of a given state. Operational separability criteria based on positive, but not completely positive, maps have appeared [@peres; @horodecki1; @horodecki2; @rudolph], with relative success. In the bipartite case, the most important of these criteria is the Positivity under Partial Transposition (PPT) criterion, due to Peres [@peres]. It asserts that if a state is separable, then its partial transposition will be a positive semidefinite operator. By partial transposition we mean the operation of transposing the matrix elements of only one of the subsystems. It is sometimes referred to as a partial specular reflection operation or a local time reversal operation [@sanpera]. A very illustrative way of seeing the partial transposition operation is by considering the density matrix of the state in the basis $\{|0,0\rangle,|0,1\rangle,\ldots, |0,d_B-1\rangle, |1,0\rangle, |1,1\rangle, \ldots, |d_A-1,d_B-1\rangle\}$ - called here the Standard Computational Basis (SCB) - where $d_A$ and $d_B$ are the dimensions of Alice and Bob's subsystems, respectively, $$\begin{aligned}
\rho = \left(\begin{array}{c c c}{A_{00}}&{\ldots}&{A_{0,d_A-1}}\\{\vdots}&
{\ddots}&{\vdots}\\{A_{0,d_A-1}^{\dagger}}&{\ldots}&{A_{d_A-1,d_A-1}}\end{array}\right),
\label{state1}\end{aligned}$$ with $A_{ij}$ being $d_B\times d_B$ submatrices. The partial transposition of the state (\[state1\]) is simply $$\begin{aligned}
\rho^{\Gamma} = \left(\begin{array}{c c c}{(A_{00})^T}&{\ldots}&{(A_{0,d_A-1})^T}\\{\vdots}&
{\ddots}&{\vdots}\\{(A_{0,d_A-1}^{\dagger})^T}&{\ldots}&{(A_{d_A-1,d_A-1})^T}\end{array}\right),\label{nice}\end{aligned}$$ where we remark the importance of the ordering of the basis: in a different ordering the partially transposed matrix would take a different form. If one has a separable state $\rho$, then the PPT criterion assures that $\rho^{\Gamma}$ will be a positive semidefinite operator. But the converse is not generally true, making the PPT criterion only a necessary condition. In fact, the PPT criterion has been shown to be necessary and sufficient for some special classes of states: two-qubit and qubit-qutrit states [@horodecki1], Werner [@werner] and isotropic states [@horodecki2], low-rank states [@horodecki3] and, in a different way, Gaussian states in the continuous-variable context [@simon]. States which are positive under partial transposition but are known, by other means, to be entangled belong to the class of bound entangled states [@horodecki2]. Those states have no distillable entanglement, i.e., no pure entangled states can be extracted from them through local operations and classical communication.
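As a concrete illustration (not part of the original discussion), the block transposition taking (\[state1\]) to (\[nice\]) can be implemented in a few lines of numpy by reshaping the density matrix according to the SCB ordering; the following is a minimal sketch:

```python
# Partial transposition on Bob's subsystem in the SCB: transpose each d_B x d_B
# block A_ij, as in going from (state1) to (nice). Minimal sketch using numpy.
import numpy as np

def partial_transpose_B(rho, dA, dB):
    """rho is (dA*dB) x (dA*dB), with SCB ordering |a,b> -> a*dB + b."""
    return rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

# sanity check on a random Hermitian matrix: applying the operation twice gives rho back
rng = np.random.default_rng(0)
m = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
rho = (m + m.conj().T) / 2
assert np.allclose(partial_transpose_B(partial_transpose_B(rho, 2, 3), 2, 3), rho)
```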
Among the open problems in quantum information theory, an extremely important one is to find general positive but not completely positive maps that would detect PPT entangled states for systems with arbitrary Hilbert space dimension. Another important direction to follow is to search for classes of states for which the PPT criterion is sufficient when $d_A d_B > 6$. Recent progress was made in [@kossakowski] and references therein, where classes of PPT states were obtained. The aim of this paper is to present novel families of bipartite states of arbitrary dimension whose eigenvalues of $\rho^{\Gamma}$ are easily inferred from the block structure of the state, allowing one to tell if the state is PPT and to compute its negativity straightforwardly. For several subfamilies we will see that the PPT criterion is both necessary and sufficient. These subfamilies include several of the examples cited above, which are known to be classifiable by the PPT criterion. With this extension of the set of states classifiable through the PPT criterion we expect some advantages for the numerical analysis of entanglement as well as for discussions about bound entanglement. The paper is divided as follows: in Sec. II we present an illustrative example, in order to motivate the discussion. In Sec. III we present the family of states for arbitrary $d_A$ and $d_B$. In Sec. IV we prove a simple sufficient separability condition, identifying with it some important subfamilies in which positivity under partial transposition is equivalent to separability; in further subsections, we obtain nontrivial decompositions of the state space into direct sums, which raise interesting questions, and we also obtain yet another PPT-classifiable set of states. Finally, in Sec. V we present our conclusions, closing the paper.
First example: $d_A=d_B=4$
==========================
We start our discussion by presenting an illustrative example having all the characteristics we want to generalize. We first analyze a situation in which Alice and Bob's subsystems are four dimensional. Consider then the following matrix in the SCB,
[$$\begin{aligned}
\rho = \left(\begin{array}{c c c c | c c c c | c c c c | c c c c}
{x_{00}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{a_1}&{a_4}&{a_6}&{x_{01}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{a_4^*}&{a_2}&{a_5}&{0}&{0}&{0}&{0}&{x_{02}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{a_6^*}&{a_5^*}&{a_3}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{x_{03}}&{0}&{0}&{0}
\\ \hline {0}&{x_{01}^*}&{0}&{0}&{b_1}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{x_{11}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{b_2}&{b_4}&{0}&{x_{12}}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{b_4^*}&{b_3}&{0}&{0}&{0}&{0}&{0}&{x_{13}}&{0}&{0}
\\ \hline {0}&{0}&{x_{02}^*}&{0}&{0}&{0}&{0}&{0}&{c_1}&{c_4}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{x_{12}^*}&{0}&{c_4^*}&{c_2}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{x_{22}}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{c_3}&{0}&{0}&{x_{23}}&{0}
\\ \hline {0}&{0}&{0}&{x_{03}^*}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{d_1}&{d_4}&{d_6}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{x_{13}^*}&{0}&{0}&{0}&{0}&{d_4^*}&{d_2}&{d_5}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{x_{23}^*}&{d_6^*}&{d_5^*}&{d_3}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{x_{33}}
\end{array}\right),\label{m1}\end{aligned}$$]{} which under partial transposition on Bob’s subsystem writes as $$\begin{aligned}
\rho^{\Gamma} = \left(\begin{array}{c c c c | c c c c | c c c c | c c c c}
{x_{00}}&{0}&{0}&{0}&{0}&{x_{01}}&{0}&{0}&{0}&{0}&{x_{02}}&{0}&{0}&{0}&{0}&{x_{03}}
\\ {0}&{a_1}&{a_4^*}&{a_6^*}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{a_4}&{a_2}&{a_5^*}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{a_6}&{a_5}&{a_3}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ \hline {0}&{0}&{0}&{0}&{b_1}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {x_{01}^*}&{0}&{0}&{0}&{0}&{x_{11}}&{0}&{0}&{0}&{0}&{x_{12}}&{0}&{0}&{0}&{0}&{x_{13}}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{b_2}&{b_4^*}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{b_4}&{b_3}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ \hline {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{c_1}&{c_4^*}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{c_4}&{c_2}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {x_{02}^*}&{0}&{0}&{0}&{0}&{x_{12}^*}&{0}&{0}&{0}&{0}&{x_{22}}&{0}&{0}&{0}&{0}&{x_{23}}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{c_3}&{0}&{0}&{0}&{0}
\\ \hline {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{d_1}&{d_4^*}&{d_6^*}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{d_4}&{d_2}&{d_5^*}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{d_6}&{d_5}&{d_3}&{0}
\\ {x_{03}^*}&{0}&{0}&{0}&{0}&{x_{13}^*}&{0}&{0}&{0}&{0}&{x_{23}^*}&{0}&{0}&{0}&{0}&{x_{33}}
\end{array}\right). \label{m2}\end{aligned}$$
It is evident that for the matrix (\[m1\]) to represent a valid state, it must be normalised with $x_{00}+\sum_{i=1}^3\left(x_{ii}+a_i+b_i+c_i+d_i\right)=1$ and $\rho$ must be positive semidefinite. It is important here to note that this latter condition implies that the submatrices $$\begin{aligned}
A = \left(\begin{array} {c c c} {a_1}&{a_4}&{a_6} \\ {a_4^*}&{a_{2}}&{a_5} \\ {a_6^*}&{a_5^*}&{a_3} \end{array}\right); \ \ B = \left(\begin{array} {c c} {b_2}&{b_4} \\ {b_4^*}&{b_3} \end{array}\right);\nonumber\\ C = \left(\begin{array} {c c} {c_1}&{c_4} \\ {c_4^*}&{c_2} \end{array}\right); \ \ D = \left(\begin{array} {c c c} {d_1}&{d_4}&{d_6} \\ {d_4^*}&{d_{2}}&{d_5} \\ {d_6^*}&{d_5^*}&{d_3} \end{array}\right), \label{set1}\end{aligned}$$ are also positive semidefinite, since they are principal submatrices of $\rho$ [@horn]. The alternating sizes of these blocks are also a fundamental property for the extension for arbitrary dimension.
We claim that the operator defined by (\[m2\]) has a direct-sum structure $$\begin{aligned}
\rho^{\Gamma}=X\oplus A^T\oplus b_1\oplus B^T\oplus C^T\oplus c_3\oplus D^T, \label{decomposition}\end{aligned}$$ where $$\begin{aligned}
X=\left(\begin{array}{c c c c}{x_{00}}&{x_{01}}&{x_{02}}&{x_{03}}\\{x_{01}^*}&{x_{11}}&{x_{12}}&{x_{13}}\\{x_{02}^*}&{x_{12}^*}&{x_{22}}&{x_{23}}\\{x_{03}^*}&{x_{13}^*}&{x_{23}^*}&{x_{33}}\end{array}\right).\label{set2}\end{aligned}$$ To see this, note that we can decompose the total Hilbert space $\mathcal{H}$ as a direct sum: $$\begin{aligned}
\mathcal{H}=\mathcal{H}_X\oplus\mathcal{H}_A\oplus\mathcal{H}_{b_1}\oplus\mathcal{H}_B\oplus\mathcal{H}_C\oplus\mathcal{H}_{c_3}\oplus\mathcal{H}_D\end{aligned}$$ with support in the orthogonal subspaces $$\begin{aligned}
\mathcal{H}_X &=& span\{|00\rangle,|11\rangle,|22\rangle,|33\rangle\}; \label{sb1} \\ \mathcal{H}_A &=& span\{|01\rangle,|02\rangle,|03\rangle\}; \\ \mathcal{H}_{b_1} &=& span\{|10\rangle\}; \\ \mathcal{H}_B &=& span\{|12\rangle,|13\rangle\}; \\ \mathcal{H}_C &=& span\{|20\rangle, |21\rangle\}; \\ \mathcal{H}_{c_3} &=& span\{|23\rangle\}; \\ \mathcal{H}_D &=& span\{|30\rangle,|31\rangle,|32\rangle\}. \label{subspaces}\end{aligned}$$ Since the matrices (\[set1\]) and (\[set2\]) have support only in the subspaces with respective indexes in (\[sb1\])-(\[subspaces\]), then we have the decomposition (\[decomposition\]) for partial transposition. In the Appendix A we consider this in detail working out an example. We note that similar decompositions are given in [@kossakowski], where the authors construct several classes of PPT states decomposing the total Hilbert space $\mathcal{H}$ into well-suited direct sums, which can be changed through circular property of the supports.
The eigenvalues of $\rho^{\Gamma}$ are thus $b_1$, $c_3$ and the eigenvalues of the matrices $X$, $A^T$, $B^T$, $C^T$ and $D^T$. Since $A$, $B$, $C$ and $D$ are positive semidefinite - this is implied by the positive semidefiniteness of $\rho$ - so are $A^T$, $B^T$, $C^T$ and $D^T$. Of course $b_1,c_3\geq 0$, and then we conclude that the only negative eigenvalues of $\rho^{\Gamma}$, if any, are the negative eigenvalues of the matrix $X$. So, if the $X$ matrix has any negative eigenvalue, we can say that the state is entangled, by the PPT criterion. Also, its Negativity [@vidal] will simply be the modulus of the sum of these negative eigenvalues. However, if $\rho^{\Gamma}$ is positive semidefinite, in general we cannot say if $\rho$ in Eq. (\[m1\]) is separable or not. But by adding some constraints we can restrict $\rho$ to be separable. In the next section, devoted to the general case, we see how this works.
Generalization to arbitrary $d_A$, $d_B$
========================================
The generalization of the previous example to arbitrary dimensions is not a difficult task. We assume here that $d_A\leq d_B$. An arbitrary member of the family, expressed in terms of the SCB $\{|0,0\rangle,|0,1\rangle,\ldots, |0,d_B-1\rangle, |1,0\rangle, |1,1\rangle, \ldots, |d_A-1,d_B-1\rangle\}$, is given by $$\begin{aligned}
\rho &=& \sum_{m,n=0}^{d_A-1}x_{mn}|mn\rangle\langle nm|+\sum_{k=0}^{d_A-1}|k\rangle\langle k|\label{final}\\
&&\otimes\left(\sum_{i,j=0}^{k-1}(M_{k})_{ij}|i\rangle\langle j|
+\sum_{i',j'=k+1}^{d_B-1}(N_{k})_{i'j'}|i'\rangle\langle j'|\right)\nonumber. \end{aligned}$$ We impose on this operator the following conditions:
1. $(M_{k})_{ji}=(M_{k})_{ij}^*$, $(N_{k})_{ji}=(N_{k})_{ij}^*$, $x_{nm}=x_{mn}^*$;
2. $\sum_{k=0}^{d_A-1}\left(\sum_{i=0}^{k-1}(M_{k})_{ii}+\sum_{i=k+1}^{d_B-1}(N_{k})_{ii}\right)\\+\sum_{m=0}^{d_A-1} x_{mm}=1$;
3. $\rho$ is positive semidefinite.
These are the usual conditions to be fulfilled for $\rho$ to represent a valid state, that is, hermiticity, unit trace and positive semidefiniteness, respectively. This is easier to see if one considers the block structure of $\rho$ in the SCB. Any operator can be expressed in this basis as $$\begin{aligned}
\rho = \left(\begin{array}{c c c}{A_{00}}&{\ldots}&{A_{0,d_A-1}}\\{\vdots}&
{\ddots}&{\vdots}\\{A_{0,d_A-1}^{\dagger}}&{\ldots}&{A_{d_A-1,d_A-1}}\end{array}\right),\end{aligned}$$ where $A_{ij}$ are $d_B\times d_B$ submatrices. The states just constructed have diagonal submatrices given by $$\begin{aligned}
A_{kk} = \left(\begin{array} {c | c | c} {(M_k)_{k\times k}}&{}&{} \\ \hline {}&{x_{kk}}&{} \\ \hline {}&{}&{(N_k)_{(d_B-1-k)\times (d_B-1-k)}}\end{array}\right),\end{aligned}$$ with $k=0,1,\ldots,d_A-1$, $M_k$ and $N_k$ being blocks on the diagonal with dimensions given by the respective subindexes, and $x_{kk}$ an arbitrary real number. The off-diagonal submatrices are simply $A_{ij}=x_{ij}|j\rangle\langle i|$, with $i\neq j$. In this formulation, the conditions on the elements are simply that $M_k$, $N_k$ and the matrix with elements $x_{ij}$ are Hermitian, that their traces sum up to unity, and that the global matrix is positive semidefinite.
Let $\rho^{\Gamma}$ be the operator obtained from $\rho$ through the partial transposition of Bob's subsystem: $$\begin{aligned}
\rho^{\Gamma} &=& \sum_{m,n=0}^{d_A-1}x_{mn}|mm\rangle\langle nn|+\sum_{k=0}^{d_A-1}|k\rangle\langle k|\\
&&\otimes\left(\sum_{i,j=0}^{k-1}(M_{k})_{ij}|j\rangle\langle i|
+\sum_{i',j'=k+1}^{d_B-1}(N_{k})_{i'j'}|j'\rangle\langle i'|\right).\nonumber\end{aligned}$$ The first term corresponds to a $d_A\times d_A$ matrix - called $X$ here - acting in the subspace spanned by $\{|00\rangle, |11\rangle, \ldots, |d_A-1, d_A-1\rangle\}$ only. The matrices $M_k^T$ act in the subspace spanned by $\{|k,0\rangle,|k,1\rangle,\ldots,|k,k-1\rangle\}$ only, while the matrices $N_k^T$ act in the subspace spanned by $\{|k,k+1\rangle,|k,k+2\rangle,\ldots,|k,d_B-1\rangle\}$ only. As the intersection between any two of these subspaces is the null vector, the total Hilbert space $\mathcal{H}$ can be decomposed as a direct sum of them and, by the reasoning given, the operator $\rho^{\Gamma}$ has a direct sum structure, which can be compactly stated as $$\begin{aligned}
\rho^{\Gamma} = X\bigoplus_{k=0}^{d_A-1}\left(M_k^T\oplus N_k^T\right). \label{magic}\end{aligned}$$ The eigenvalues of the transposed matrix are thus the eigenvalues of the various matrices $M_k$, $N_k$ and $X$. However, a negative eigenvalue of $\rho^{\Gamma}$, if any, will be due only to a negative eigenvalue of the $X$ matrix. The reason is that $M_k$ and $N_k$ are principal submatrices of the original density matrix $\rho$. By elementary linear algebra [@horn], these matrices are already positive semidefinite, since $\rho$ is positive semidefinite. So, the only negative eigenvalues of $\rho^{\Gamma}$ will be the negative eigenvalues of the $X$ matrix.
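The direct-sum structure (\[magic\]) can also be verified numerically. The following minimal sketch (with randomly generated Hermitian test blocks, which are hypothetical and not taken from the text) assembles a matrix of the form (\[final\]) and checks that the spectrum of $\rho^{\Gamma}$ is the union of the spectra of $X$, the $M_k$ and the $N_k$; for a physical state one would additionally impose positive semidefiniteness and unit trace, which do not affect this structure:

```python
# Numerical check of Eq. (magic): spec(rho^Gamma) = spec(X) U spec(M_k) U spec(N_k).
# The blocks below are arbitrary Hermitian test data (hypothetical values).
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 4, 5                                   # assumed dimensions, dA <= dB

def herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

X  = herm(dA)                                   # the d_A x d_A matrix X
Ms = [herm(k) for k in range(dA)]               # M_k is k x k
Ns = [herm(dB - 1 - k) for k in range(dA)]      # N_k is (d_B-1-k) x (d_B-1-k)

idx = lambda a, b: a * dB + b                   # SCB index of |a,b>
rho = np.zeros((dA * dB, dA * dB), dtype=complex)
for m in range(dA):
    for n in range(dA):
        rho[idx(m, n), idx(n, m)] = X[m, n]     # x_{mn} |m n><n m|
for k in range(dA):
    for i in range(k):
        for j in range(k):
            rho[idx(k, i), idx(k, j)] = Ms[k][i, j]
    for i in range(k + 1, dB):
        for j in range(k + 1, dB):
            rho[idx(k, i), idx(k, j)] = Ns[k][i - k - 1, j - k - 1]

# partial transposition on Bob's subsystem (SCB ordering)
rhoG = rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

lhs = np.sort(np.linalg.eigvalsh(rhoG))
rhs = np.sort(np.concatenate([np.linalg.eigvalsh(X)]
                             + [np.linalg.eigvalsh(M) for M in Ms if M.size]
                             + [np.linalg.eigvalsh(N) for N in Ns if N.size]))
print(np.allclose(lhs, rhs))                    # True
```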
The $X$ matrix thus carries a large part of the information about the entanglement of the state. If the states just constructed have any experimental use, detecting and quantifying the entanglement of $\rho$ will be simpler than full state reconstruction. The $X$ matrix is a $d_A\times d_A$ matrix, and even if one has to reconstruct this matrix [^1], this task will be much simpler than reconstructing the $(d_A d_B)\times(d_A d_B)$ global matrix representing $\rho$. In practice, one should know that the state is of this form, by the preparation procedure or by some characteristic - and yet to be discovered - test. Also, from another point of view, we believe that these states would bring advantages in numerical studies of entanglement, since one has to deal with only one matrix. It is easy to construct entangled states of the form (\[final\]) and, as will be shown in the next section, it is also easy to construct families of PPT-classifiable states.
However, if the measured eigenvalues of $X$ are all positive, then the PPT criterion alone will not be sufficient to detect entanglement, since in the general bipartite case this criterion is only a necessary condition for separability. As the state is PPT, if it is shown to be entangled by other means, it will exhibit bound entanglement [@horodecki2].
Subfamilies classifiable through PPT criterion
==============================================
There are subfamilies inside the broad family of bipartite states presented in the last section in which the PPT criterion is necessary and sufficient. We do not intend to present here all such situations, but instead to discuss how they include some very important examples. For that, we would like first to prove a simple and relevant result:
If a state $\rho$ expressed in the standard computational basis has the block diagonal form $$\begin{aligned}
\rho_{ss} = \left(\begin{array}{c | c | c | c}
{A_{0}}&{}&{}&{} \\ \hline {}&{A_{1}}&{}&{} \\ \hline {}&{}&{\ddots}&{} \\ \hline{}&{}&{}&{A_{d-1}}\end{array}\right)\label{ss},\end{aligned}$$ where each $A_i$ is a $d_B\times d_B$ matrix, then the state is separable.
*Proof:* We have to prove that a matrix in the form $\rho_{ss}$ above has a decomposition $$\begin{aligned}
\rho_s = \sum_i p_i \rho_A^i\otimes\rho_B^i,\end{aligned}$$ with $\sum_i p_i =1$, $p_i\geq 0$ and $\rho_A^i$, $\rho_B^i$ being states in Alice and Bob's subsystems, respectively. Indeed, we have $$\begin{aligned}
\rho_{ss} &=& |0\rangle\langle 0|\otimes A_0 + |1\rangle\langle 1|\otimes A_1 + \ldots \nonumber\\
&&+ |d-1\rangle\langle d-1|\otimes A_{d-1} = \sum_{i=0}^{d-1} |i\rangle\langle i|\otimes A_i,\end{aligned}$$ which thus can be written as $$\begin{aligned}
\rho_{ss} = \sum_{i=0}^{d-1} \underbrace{(trA_i)}_{p_i}\underbrace{|i\rangle\langle i|}_{\rho_A^i}\otimes\underbrace{\frac{A_i}{trA_i}}_{\rho_B^i} = \sum_i p_i \rho_A^i\otimes\rho_B^i,\end{aligned}$$ and since $\sum_i p_i = \sum_i trA_i = 1$ and $p_i\geq 0$, a state in the form $\rho_{ss}$ is separable.
------------------------------------------------------------------------
\
We will use this result as a probe to construct subfamilies of states classifiable through the PPT criterion. We will call states that can be written in the block diagonal form (\[ss\]) *simply separable states*. It is straightforward then that every state written in the SCB can be decomposed as $\rho=\rho_{ss}+M$, i.e., a simply separable state $\rho_{ss}$ plus a matrix $M$ that does not represent a state and that contains the correlations associated with entanglement. We show that important examples, such as the Werner states, are included in this subset.
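For concreteness, the proof of Proposition 1 is constructive; a minimal sketch (with arbitrary, hypothetical positive blocks $A_i$) that builds a simply separable state and rebuilds it from the explicit decomposition $p_i=\text{tr}A_i$, $\rho_A^i=|i\rangle\langle i|$, $\rho_B^i=A_i/\text{tr}A_i$ reads:

```python
# Sketch of the constructive decomposition in Proposition 1, with hypothetical
# positive blocks A_i generated at random and normalised to unit total trace.
import numpy as np

rng = np.random.default_rng(1)
dA, dB = 3, 4

def random_psd(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return a @ a.conj().T                        # positive semidefinite by construction

blocks = [random_psd(dB) for _ in range(dA)]
norm = sum(np.trace(b).real for b in blocks)
blocks = [b / norm for b in blocks]              # now tr(rho_ss) = 1

rho_ss = np.zeros((dA * dB, dA * dB), dtype=complex)
for i, A in enumerate(blocks):
    rho_ss[i * dB:(i + 1) * dB, i * dB:(i + 1) * dB] = A

# explicit separable decomposition: sum_i p_i |i><i| (x) A_i / tr(A_i)
rebuilt = sum(np.trace(A).real *
              np.kron(np.outer(np.eye(dA)[i], np.eye(dA)[i]), A / np.trace(A).real)
              for i, A in enumerate(blocks))
print(np.allclose(rho_ss, rebuilt))              # True
```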
Restrictions on the $X$ matrix
------------------------------
We shall construct examples of states in which PPT implies separability by restricting the form of the $X$ matrix appearing in the direct-sum decomposition of $\rho^{\Gamma}$. If, for example, the eigenvalues of the $X$ matrix are all of the form $\{-x_i\}_{i=0}^{d_A-1}$, with $x_i\geq 0$, then the state will have a positive partial transposition only if all the $x_i$ are zero. But in this case the $X$ matrix can only be the null matrix, which implies that the matrix $\rho$ is block diagonal with $d_B\times d_B$ blocks, that is, the state is simply separable (Proposition 1). In this special case, positivity under partial transposition implies separability, i.e., we obtain a subfamily in which states are separable if and only if they are PPT [^2]. Another example can be given for an $X$ matrix of the form ($d_A$ even) $$\begin{aligned}
X=\bigoplus_{i=0}^{d_A/2-1}\left(\begin{array}{c c}{0}&{x_i}\\{x_i^*}&{0}\end{array}\right).\end{aligned}$$ The eigenvalues of this matrix are obviously $\{\pm |x_i|\}_{i=0}^{d_A/2-1}$. The matrix $\rho^{\Gamma}$ will be positive semidefinite if and only if all the $x_i$ are zero. This implies that $\rho$ is simply separable, i. e., we have again equivalence between separable and PPT states in this case. Indeed, whenever positivity under partial transposition implies that the $X$ matrix is the null matrix we will have this equivalence.
We obtain another such subfamily whenever the $X$ matrix is itself diagonal. In this case it is clear that $\rho$ will be simply separable. Combining this with the previous reasoning, whenever positivity under partial transposition implies that the $X$ matrix is diagonal, we will have equivalence between separable and PPT states. In fact, the previous case can be trivially seen as a special case of this one. Consider, for example, an $X$ matrix of the form $$\begin{aligned}
X=\bigoplus_{i=0}^{d_A/2-1}\left(\begin{array}{c c}{0}&{x_i}\\{x_i^*}&{y_i}\end{array}\right).\end{aligned}$$ This matrix will be positive semidefinite if and only if all the $x_i$ are zero; in this case, $X$ will be diagonal and the state will be simply separable. We have thus shown several examples of subfamilies classifiable through the PPT criterion.
Werner and isotropic states
---------------------------
For a $d\otimes d$ system, consider the state $$\begin{aligned}
\rho_W = (1-\epsilon)\frac{I}{d^2} + \epsilon\frac{F}{d} \label{wstate},\end{aligned}$$ where $I$ is the identity operator and $F$ is the usual flip operator, defined by $F|\phi\rangle\otimes |\varphi\rangle=|\varphi\rangle\otimes |\phi\rangle$. We call the family of states defined by (\[wstate\]) Werner states [@werner]; the connection with Werner's original notation is $\epsilon=-\frac{1-d\Phi}{d^2 -1}$, where $\Phi=\langle F\rangle$. If we impose that the $M_k$, $N_k$ matrices are of the form $\frac{(1-\epsilon)}{d^2}I$ and the $X$ matrix elements are $x_{kk}=(1+\epsilon(d-1))/d^2$ and $x_{jk}=\epsilon/d$, for $j\neq k$, then we see that Werner states are also a subfamily of the broader family. A Werner state is separable if and only if it is PPT, as is well known.
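As a quick numerical illustration (not part of the original analysis), one can scan $\epsilon$ and monitor both the smallest eigenvalue of $\rho_W$, which tells whether (\[wstate\]) is a valid state, and the smallest eigenvalue of $\rho_W^{\Gamma}$, which implements the PPT test; the range of $\epsilon$ below is arbitrary:

```python
# Numerical scan of the Werner family (wstate): F is the flip operator defined in
# the text, and both rho_W >= 0 and the PPT condition are checked eigenvalue-wise.
import numpy as np

d = 3
I = np.eye(d * d)
F = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        F[i * d + j, j * d + i] = 1.0            # F |i j> = |j i>

def partial_transpose_B(rho, dA, dB):
    return rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

for eps in np.linspace(-0.5, 0.5, 11):           # hypothetical range, for illustration only
    rho_W = (1 - eps) * I / d**2 + eps * F / d
    min_state = np.linalg.eigvalsh(rho_W).min()                           # >= 0: valid state
    min_ppt = np.linalg.eigvalsh(partial_transpose_B(rho_W, d, d)).min()  # >= 0: PPT
    print(f"eps = {eps:+.2f}  min eig(rho_W) = {min_state:+.4f}  min eig(rho_W^Gamma) = {min_ppt:+.4f}")
```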
The partial transposition establishes a nice connection between Werner states and the so-called isotropic states [@horodecki2] given by $$\begin{aligned}
\rho_I = (1-\epsilon)\frac{I_A\otimes I_B}{d^2} +\epsilon P_+ \label{isotr}\end{aligned}$$ where $P_+=|\phi_d^+\rangle\langle\phi_d^+|$ and $|\phi_d^+\rangle = \frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|ii\rangle$. Since the partial transposition of the operator $P_+$ is simply $F/d$, we have that an isotropic state will be PPT only if its partial transposition $\rho_I^{\Gamma}$ represents a Werner state, and the same statement holds for a Werner state. However, the isotropic states do not belong to the family defined by (\[final\]). But we can easily define a new family which contains the isotropic states, constituted of matrices of the form (\[magic\]) with the submatrices $M_k$ and $N_k$ restricted to be diagonal. Making $M_k$ and $N_k$ equal to $\frac{(1-\epsilon)}{d^2}I$ and $x_{kk}=(1+\epsilon(d-1))/d^2$, $x_{jk}=\epsilon/d$, for $j\neq k$, we see that the isotropic states are contained in this new family. We then get an analogous connection between the two broader subfamilies through the partial transposition operation.
Qubit-qubit, qubit-qutrit and qubit-qudit cases
-----------------------------------------------
The PPT criterion is necessary and sufficient for $d_A=d_B=2$ and $d_A=2, d_B=3$ [@horodecki1]. The density matrix of a two-qubit state of the family in the SCB reads $$\begin{aligned}
\rho = \left(\begin{array}{c c | c c} {x_{00}}&{0}&{0}&{0}\\{0}&{a}&{x_{01}}&{0}\\ \hline {0}&{x_{01}^*}&{b}&{0}\\{0}&{0}&{0}&{x_{11}}\end{array}\right),\end{aligned}$$ and its partial transposition is $$\begin{aligned}
\rho^{\Gamma} = \left(\begin{array}{c c | c c} {x_{00}}&{0}&{0}&{x_{01}}\\{0}&{a}&{0}&{0}\\ \hline {0}&{0}&{b}&{0}\\{x_{01}^*}&{0}&{0}&{x_{11}}\end{array}\right).\end{aligned}$$ The eigenvalues of $\rho^{\Gamma}$ are $a$, $b$ and the eigenvalues of the matrix $$\begin{aligned}
X= \left(\begin{array}{c c} {x_{00}}&{x_{01}} \\ {x_{01}^*}&{x_{11}} \end{array}\right),\end{aligned}$$ which are simply $\frac{1}{2}\left(x_{00}+x_{11}\pm\sqrt{(x_{00}-x_{11})^2 + 4|x_{01}|^2}\right)$. The state will be PPT and hence separable for $x_{00}x_{11}\geq |x_{01}|^2$. Otherwise, the state will be entangled and its negativity will be given by $N(\rho)=\frac{1}{2} max\{0,\sqrt{(x_{00}-x_{11})^2 + 4|x_{01}|^2}-(x_{00}+x_{11})\}$.
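A direct numerical cross-check of this expression, with hypothetical entries chosen so that $\rho$ is a valid and slightly entangled state, is straightforward:

```python
# Cross-check of the two-qubit negativity formula against a brute-force computation.
# The numerical values below are hypothetical, chosen so that rho is a valid state.
import numpy as np

x00, x11, a, b, x01 = 0.3, 0.2, 0.25, 0.25, 0.248    # x01 taken real here
rho = np.array([[x00, 0.0,  0.0,  0.0],
                [0.0, a,    x01,  0.0],
                [0.0, x01,  b,    0.0],
                [0.0, 0.0,  0.0,  x11]])
rhoG = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)   # partial transpose

neg_numeric = -np.clip(np.linalg.eigvalsh(rhoG), None, 0.0).sum()
neg_formula = 0.5 * max(0.0, np.sqrt((x00 - x11)**2 + 4 * abs(x01)**2) - (x00 + x11))
print(neg_numeric, neg_formula)                      # the two values agree
```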
In case we are dealing with a qubit and a qutrit, the density matrix of the family reads $$\begin{aligned}
\rho = \left(\begin{array}{c c c | c c c}
{x_{00}}&{0}&{0}&{0}&{0}&{0} \\
{0}&{a}&{c}&{x_{01}}&{0}&{0} \\
{0}&{c^*}&{b}&{0}&{0}&{0} \\ \hline
{0}&{x_{01}^*}&{0}&{d}&{0}&{0} \\
{0}&{0}&{0}&{0}&{x_{11}}&{0} \\
{0}&{0}&{0}&{0}&{0}&{e} \end{array}\right),\end{aligned}$$ and it is easy to see that the same results apply to this case. In fact, from a more general density matrix $$\begin{aligned}
\rho = \left(\begin{array}{c c c | c c c}
{x_{00}}&{0}&{0}&{0}&{0}&{0} \\
{0}&{a}&{c}&{0}&{0}&{0} \\
{0}&{c^*}&{b}&{x_{01}}&{0}&{0} \\ \hline
{0}&{0}&{x_{01}^*}&{d}&{f}&{0} \\
{0}&{0}&{0}&{f^*}&{e}&{0} \\
{0}&{0}&{0}&{0}&{0}&{x_{11}} \end{array}\right),\end{aligned}$$ we see that the negative eigenvalues of $\rho^{\Gamma}$ are the same as in the cases analyzed above. We are thus led to propose another family of states for the qubit-qudit case, given by $$\begin{aligned}
\rho &=& x_{00}|00\rangle\langle 00| + x_{11}|d_B-1,d_B-1\rangle\langle d_B-1,d_B-1| \nonumber\\&&+ x_{01}|0,d_B-1\rangle\langle 10| + x_{01}^*|10\rangle\langle 0,d_B-1| \nonumber\\
&&+ \sum_{i,j=1}^{d_B-1}A_{ij}|0i\rangle\langle 0j|+ \sum_{i',j'=0}^{d_B-2}B_{i'j'}|1i'\rangle\langle 1j'|, \label{2xN}\end{aligned}$$ and it is easy to see that the negative eigenvalues of the transposed matrix will be the same. However, in this case it is not assured that positivity under partial transposition implies separability. One can conjecture here that this is indeed true, given the resemblance to the qubit-qubit and qubit-qutrit cases. We make here a brief digression on this subject, which we hope can be useful in the search for bound entangled states. The partial transposes of (\[final\]) and (\[2xN\]) are of the form $$\begin{aligned}
\rho^{\Gamma} = X\oplus\tilde{\rho}_{ss} \label{ptrans}\end{aligned}$$ where $\tilde{\rho}_{ss}$ is an unnormalised simply separable density operator. As we are looking for bound entangled states, we assume $\rho^{\Gamma}$ is positive semidefinite and in this case this matrix represents a state. It is obvious that the original $\rho$ will be separable if $\rho^{\Gamma}$ is, so we will focus on the partially transposed matrix, due to its direct-sum decomposition.
The total Hilbert space $\mathcal{H}$ has a direct-sum decomposition $\mathcal{H}=\mathcal{H}_1\oplus \mathcal{H}_2$, where $\mathcal{H}_1$ and $\mathcal{H}_2$ are the supporting subspaces of $X$ and $\tilde{\rho}_{ss}$, respectively. Recalling Horodecki's result [@horodecki1], $\rho^{\Gamma}$ is separable if and only if $I\otimes\Lambda(\rho^{\Gamma})$ is positive for any Positive but not Completely Positive (PNCP) map $\Lambda$. But as these maps are linear and since the state space is decomposed as $\mathcal{H}_1\oplus \mathcal{H}_2$, the PNCP maps in this case will all be of the form $\Lambda=\Lambda_1\oplus\Lambda_2$, where $\Lambda_i$ is a PNCP map acting in $\mathcal{H}_i$. It is straightforward that $I_{A'}\otimes\Lambda_2[\tilde{\rho}_{ss}]$ is a positive operator for any PNCP $\Lambda_2$, because $\tilde{\rho}_{ss}$ is separable.
The curious feature here is that $\mathcal{H}_1$ is in general *not* a tensor product space and so it is difficult to talk about complete positivity, since the meaning of such a concept may be obscure in this situation. However, for a subspace $\mathcal{H}_1$ of reasonably low dimension one could say that all PNCP maps are of the form $\Lambda_1=\Lambda^{CP}_a+\Lambda^{CP}_bT$, given that for a tensor product space with dimension not larger than $6$ all PNCP maps are of this form [@horodecki1; @stormer]. By reasonably low dimension we mean that there is a tensor product space with dimension not larger than $6$ which contains $\mathcal{H}_1$ as a subspace. In this case, $\Lambda_1$ would be seen as the restriction of PNCP maps $\Lambda^{CP}_a+\Lambda^{CP}_bT$ - all PNCP maps are of this form in this context - to the subspace $\mathcal{H}_1$. Now, as $(\rho^{\Gamma})^{\Gamma}=\rho$ is positive, we have that $I\otimes\Lambda_1(\rho^{\Gamma})$ is positive as well, for all PNCP maps $\Lambda_1$, which implies that $\rho^{\Gamma}$ is separable and hence $\rho$ is separable, i.e., $\rho$ is separable if and only if it is PPT.
With this reasoning, we can construct several more subfamilies of PPT-classifiable states. For example, take the qubit-quatrit state in the SCB given by $$\begin{aligned}
\rho = \left(\begin{array}{c c c c | c c c c}{x_{00}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}\\{0}&{a}&{c}&{0}&{0}&{0}&{0}&{0}\\{0}&{c^*}&{b}&{0}&{0}&{0}&{0}&{0}\\ {0}&{0}&{0}&{y}&{x_{01}}&{0}&{0}&{0}\\ \hline {0}&{0}&{0}&{x_{01}^*}&{z}&{0}&{0}&{0}\\{0}&{0}&{0}&{0}&{0}&{d}&{f}&{0}\\{0}&{0}&{0}&{0}&{0}&{f^*}&{e}&{0}\\{0}&{0}&{0}&{0}&{0}&{0}&{0}&{x_{11}}\end{array}\right)\end{aligned}$$ and so the partial tranposed matrix is $$\begin{aligned}
\rho^{\Gamma} &=& \left(\begin{array}{c c c c | c c c c}{x_{00}}&{0}&{0}&{0}&{0}&{0}&{0}&{x_{01}}\\{0}&{a}&{c}&{0}&{0}&{0}&{0}&{0}\\{0}&{c^*}&{b}&{0}&{0}&{0}&{0}&{0}\\ {0}&{0}&{0}&{y}&{0}&{0}&{0}&{0}\\ \hline {0}&{0}&{0}&{0}&{z}&{0}&{0}&{0}\\{0}&{0}&{0}&{0}&{0}&{d}&{f}&{0}\\{0}&{0}&{0}&{0}&{0}&{f^*}&{e}&{0}\\{x_{01}^*}&{0}&{0}&{0}&{0}&{0}&{0}&{x_{11}}\end{array}\right) \\ &=& \left(\begin{array}{c c c c}{x_{00}}&{0}&{0}&{x_{01}}\\{0}&{y}&{0}&{0}\\{0}&{0}&{z}&{0}\\{x_{01}^*}&{0}&{0}&{x_{11}}\end{array}\right)\oplus\left(\begin{array}{c c c c}{a}&{c}&{0}&{0}\\{c^*}&{b}&{0}&{0}\\{0}&{0}&{d}&{f}\\{0}&{0}&{f^*}&{e}\end{array}\right)\end{aligned}$$ The first matrix support is the subspace $\mathcal{H}_{A'}\otimes\mathcal{H}_{B'}$, with $\mathcal{H}_{A'}= span\{|0\rangle,|1\rangle\}$ and $\mathcal{H}_{B'}= span\{|0\rangle,|3\rangle\}$. The second matrix represents a simply separable (unnormalized) state. As the subspace $\mathcal{H}_{A'}\otimes\mathcal{H}_{B'}$ is four-dimensional, all PNCP maps are of the form $\Lambda^{CP}_a+\Lambda^{CP}_bT$ and by the above reasoning, the state is separable if and only if PPT. The extension to higher dimensions should be clear.
One can also apply known methods to find bound entangled PPT states by restricting the search to the subspace where $X$ acts: if one proves that (\[ptrans\]), assumed positive, is entangled, then $\rho$ will be as well and we will have a bound entangled state.
Conclusions
===========
We constructed novel families of bipartite states for arbitrary Hilbert space dimensions whose negative eigenvalues of the partially transposed density matrix are the negative eigenvalues of a $d_A\times d_A$ submatrix. Using this property, we presented subfamilies in which the PPT criterion is both necessary and sufficient, using the result of Proposition 1 as a major step in the derivations. We also proposed a novel qubit-qudit family whose negative eigenvalues of the partially transposed density matrix are the same as in the two-qubit and qubit-qutrit cases, irrespective of the growing dimensions of the “core” blocks. Some nontrivial decompositions of the total Hilbert space into direct sums appeared naturally in the discussion, where the meaning of complete positivity becomes obscure. A full mathematical and physical understanding of such a situation is highly desirable. The resemblance to the Ansatz states used in [@verstraete] is very curious and we hope that some of the families proposed could be used in the same way in numerical analysis of entanglement. If any practical implementation of some of these families is carried out in experiments in the future, it is immediate from our discussion that the number of resources required to detect and quantify entanglement (by Negativity) is much smaller than the one required for full state reconstruction. We see that even for the states considered, theoretical discussions about the partial transposition operation are not trivial, and we hope the results presented shed some light on the issue. We are then led to the important question: what is, in general, the set of states classifiable by the PPT criterion? We believe that the structure presented here, combined with the reasoning of [@kossakowski], may bring some important results in that direction.
Acknowledgments
===============
We are grateful to D. Chrúscínski for bringing his work on circulant states with positive partial transposition to our attention. We thank CAPES, CNPq and FAPESP, through the INCT-IQ program, for financial support.
Appendix A - Direct sums and block diagonal representation of operators {#appendix-a---direct-sums-and-block-diagonal-representation-of-operators .unnumbered}
=======================================================================
Direct sums of matrices representing operators are usually understood as block diagonal matrices. However, a different ordering of the basis of the vector space where the matrix acts gives rise to block structures differing from the usual block diagonal form. But the direct sum structure of the *operator* defined by the matrix is not affected by a basis reordering. To start with a simple example, let us consider a block diagonal $4\times 4$ matrix expressed in an ordered basis $\{e_1,e_2,e_3,e_4\}$, the basis of the vector space where it acts, which we call here $\mathcal{V}$: $$\begin{aligned}
M=\left(\begin{array}{c c c c}
{a}&{b}&{0}&{0}\\
{c}&{d}&{0}&{0}\\
{0}&{0}&{e}&{f}\\
{0}&{0}&{g}&{h}
\end{array}\right)\end{aligned}$$ Calling $$\begin{aligned}
M_a=\left(\begin{array}{c c c c}
{a}&{b}\\
{c}&{d}
\end{array}\right); \ \ \ \ M_b=\left(\begin{array}{c c c c}
{e}&{f}\\
{g}&{h}
\end{array}\right)\end{aligned}$$ then we can write $$\begin{aligned}
M=M_a\oplus M_b\label{trivial}\end{aligned}$$ However, the direct sum symbol $\oplus$ means that the operators defined by the matrices $M_a$ and $M_b$ act respectively only in the subspaces $\mathcal{V}_a=span\{e_1,e_2\}$ and $\mathcal{V}_b=span\{e_3,e_4\}$ of the vector space $\mathcal{V}$. More precisely, the vector space can be decomposed as a direct sum $\mathcal{V}=\mathcal{V}_a\oplus\mathcal{V}_b$ and what (\[trivial\]) says is that the subspaces $\mathcal{V}_a$ and $\mathcal{V}_b$ are invariant under the action of the operator $M$; due to this we have the induced decomposition $M=M_a\oplus M_b$.
It is clear that the invariance of $\mathcal{V}_a$ and $\mathcal{V}_b$ under $M$ is independent of the basis ordering. If we adopt a different ordering, for example, $\{e_1,e_3,e_2,e_4\}$, the operator $M$ still has a direct sum structure $M=M_a\oplus M_b$. But in this new ordering, the matrix which represents $M$ no longer has a block diagonal form, but instead reads: $$\begin{aligned}
M=\left(\begin{array}{c c c c}
{a}&{0}&{b}&{0}\\
{0}&{e}&{0}&{f}\\
{c}&{0}&{d}&{0}\\
{0}&{g}&{0}&{h}
\end{array}\right)\end{aligned}$$ In general, if a vector space $\mathcal{V}$ has a direct sum decomposition $\mathcal{V}=\bigoplus_i\mathcal{V}_i$, an operator $M$ that leaves the subspaces $\mathcal{V}_i$ invariant will have the direct sum structure $M=\bigoplus_i M_i$, where the operators $M_i$ act only in the respective subspaces $\mathcal{V}_i$. If the basis is ordered according to the subspaces $\mathcal{V}_i$, that is $\{\mathcal{B}_1, \mathcal{B}_2,\ldots\}$, then the matrix that represents $M$ will have a block diagonal structure. As many different orderings are possible, clearly the matrix will not be block diagonal in general. But the decomposition $M=\bigoplus_i M_i$ is independent of this, of course.
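This remark is easy to check mechanically; a minimal sketch with placeholder entries reads:

```python
# Basis reordering as a permutation: the matrix representation changes, but
# conjugating back with the permutation recovers the block diagonal form.
import numpy as np

a, b, c, d, e, f, g, h = range(1, 9)             # placeholder entries
M = np.array([[a, b, 0, 0],
              [c, d, 0, 0],
              [0, 0, e, f],
              [0, 0, g, h]], dtype=float)        # representation in {e1, e2, e3, e4}

perm = [0, 2, 1, 3]                              # the ordering {e1, e3, e2, e4}
P = np.eye(4)[perm]                              # permutation matrix
M_reordered = P @ M @ P.T                        # no longer block diagonal
print(M_reordered)
print(np.allclose(P.T @ M_reordered @ P, M))     # True: same operator, same direct sum
```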
Considering these remarks, we will now see how the first example of state presented in the paper, (\[m1\]), is affected by a basis reordering. Adopting the following ordering of the basis,
$$\begin{aligned}
\{|00\rangle,|11\rangle,|22\rangle,|33\rangle,|01\rangle,|02\rangle,|03\rangle,|10\rangle,|12\rangle,|13\rangle,|20\rangle,|21\rangle,|23\rangle,|30\rangle,|31\rangle,|32\rangle\}, \end{aligned}$$
the state (\[m1\]) is expressed as $$\begin{aligned}
\rho = \left(\begin{array}{c c c c | c c c c | c c c c | c c c c}
{x_{00}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{x_{11}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{x_{22}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{x_{33}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ \hline {0}&{0}&{0}&{0}&{a_1}&{a_4}&{a_6}&{x_{01}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{a_4^*}&{a_2}&{a_5}&{0}&{0}&{0}&{x_{02}}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{a_6^*}&{a_5^*}&{a_3}&{0}&{0}&{0}&{0}&{0}&{0}&{x_{03}}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{x_{01}^*}&{0}&{0}&{b_1}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ \hline {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{b_2}&{b_4}&{0}&{x_{12}}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{b_4^*}&{b_3}&{0}&{0}&{0}&{0}&{x_{13}}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{x_{02}^*}&{0}&{0}&{0}&{0}&{c_1}&{c_4}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{x_{12}^*}&{0}&{c_4^*}&{c_2}&{0}&{0}&{0}&{0}
\\ \hline {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{c_3}&{0}&{0}&{x_{23}}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{x_{03}^*}&{0}&{0}&{0}&{0}&{0}&{0}&{d_1}&{d_4}&{d_6}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{x_{13}^*}&{0}&{0}&{0}&{d_4^*}&{d_2}&{d_5}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{x_{23}^*}&{d_6^*}&{d_5^*}&{d_3}
\end{array}\right)\end{aligned}$$ and the partially transposed matrix in this basis reads: $$\begin{aligned}
\rho^{\Gamma} = \left(\begin{array}{c c c c | c c c c | c c c c | c c c c}
{x_{00}}&{x_{01}}&{x_{02}}&{x_{03}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {x_{01}^*}&{x_{11}}&{x_{12}}&{x_{13}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {x_{02}^*}&{x_{12}^*}&{x_{22}}&{x_{23}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {x_{03}^*}&{x_{13}^*}&{x_{23}^*}&{x_{33}}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ \hline {0}&{0}&{0}&{0}&{a_1}&{a_4^*}&{a_6^*}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{a_4}&{a_2}&{a_5^*}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{a_6}&{a_5}&{a_3}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{b_1}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}
\\ \hline {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{b_2}&{b_4^*}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{b_4}&{b_3}&{0}&{0}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{c_1}&{c_4^*}&{0}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{c_4}&{c_2}&{0}&{0}&{0}&{0}
\\ \hline {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{c_3}&{0}&{0}&{0}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{d_1}&{d_4^*}&{d_6^*}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{d_4}&{d_2}&{d_5^*}
\\ {0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{0}&{d_6}&{d_5}&{d_3}
\end{array}\right)\end{aligned}$$ having a block diagonal structure, as expected.
[99]{}
M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information*, Cambridge University Press (2000).
A. Peres, Phys. Rev. Lett. **77**, 1413 (1996).
M. Horodecki, P. Horodecki, R. Horodecki, Phys. Lett. A **223**, 1 (1996).
M. Horodecki, P. Horodecki, Phys. Rev. A **59**, 4206 (1999).
A. Sanpera, R. Tarrasch, G. Vidal, quant-ph/9707041 (1997).
O. Rudolph, Phys. Rev. A **67**, 032312 (2003); K. Chen and L. A. Wu, Quant. Inf. Comput. **3**, 193 (2003).
R. F. Werner, Phys. Rev. A **40**, 4277 (1989).
P. Horodecki, M. Lewenstein, G. Vidal, I. Cirac, Phys. Rev. A **62**, 032310 (2000).
R. Simon, Phys. Rev. Lett. **84**, 2726 (2000).
R. A. Horn, C. R. Johnson, *Matrix Analysis*, Cambridge University Press (1985).
G. Vidal and R. F. Werner, Phys. Rev. A **65**, 032314 (2002).
E. Størmer, Acta. Math. **110** (1963), 233; S. L. Woronowicz, Rep. Math. Phys., **10** (1976) 165.
T. Wei, K. Nemoto, P. M. Goldbart, P. G. Kwiat, W. J. Munro, F. Verstraete, Phys. Rev. A, **67**, 022110 (2003).
D. Chrúscínski, A. Kossakowski, Phys. Rev. A **76**, 032308 (2007).
[^1]: It would be highly desirable that the eigenvalues of $X$ could be inferred from simple experimental schemes.
[^2]: That separability implies positivity under partial transposition is already true, by the PPT criterion.
---
abstract: 'Basic, local kinetic theory of the ion temperature gradient driven (ITG) mode, with adiabatic electrons, is reconsidered. Standard unstable, purely oscillating, as well as damped solutions of the local dispersion relation are obtained using a bracketing technique based on the argument principle. This method requires computing the plasma dielectric function and its derivatives, which are implemented here using modified plasma dispersion functions with curvature and their derivatives, and allows bracketing/following the zeros of the plasma dielectric function, which correspond to different roots of the ITG dispersion relation. We provide an open source implementation of the derivatives of modified plasma dispersion functions with curvature, which are used in this formulation. Studying the local ITG dispersion, we find that near the threshold of instability the unstable branch is rather asymmetric, with oscillating solutions towards lower wave numbers (i.e. drift waves) and damped solutions towards higher wave numbers. This suggests that a process akin to an inverse cascade, by coupling to the oscillating branch towards lower wave numbers, may play a role in the nonlinear evolution of the ITG near the instability threshold. Also, using the algorithm, the linear wave diffusion is estimated for the marginally stable ITG mode.'
author:
- 'Ö. Gültekin$^{1}$, Ö. D. Gürcan$^{2,3}$'
title: Stable and unstable roots of ion temperature gradient driven mode using curvature modified plasma dispersion functions
---
Introduction
============
Background
----------
The ion temperature gradient driven (ITG) mode has been studied in great detail over the years in light of its relevance for transport in magnetized fusion devices[@coppi:67; @horton:81; @lee:86]. A basic formulation of the kinetic ITG that has been studied in the past is a local, electrostatic description based on the gyrokinetic equation[@catto:78; @frieman:82; @hahm:88] for ions, with adiabatic electrons, where the linear problem boils down to finding the roots of the plasma dielectric function $\varepsilon\left(\omega,\mathbf{k}\right)$ numerically.
Kinetic waves in electrostatic plasmas can, in general, be described using the so-called plasma dispersion function [@fried:1961]. The cylindrical ITG mode, for instance, can be formulated completely in terms of plasma dispersion functions[@mattor:89]. The advantage of such a formulation is that the plasma dispersion function is linked to the complex error function and there exist efficient methods for its computation[@gautschi:70].
Recently, a similar, numerically efficient reformulation of local ITG in terms of curvature modified plasma dispersion functions was proposed [@gurcan:14], which is equivalent to the formulation in Refs. [@kim:94; @kuroda:98]. These functions, dubbed $I_{nm}\left(\zeta_{\alpha},\zeta_{\beta},b\right)$ and defined for $Im\left[\zeta_{\alpha}\right]>0$ as $$\begin{aligned}
& I_{nm}\left(\zeta_{\alpha},\zeta_{\beta},b\right)\equiv\nonumber \\
& \frac{2}{\sqrt{\pi}}\int_{0}^{\infty}dx_{\perp}\int_{-\infty}^{\infty}dx_{\parallel}\frac{x_{\perp}^{n}x_{\parallel}^{m}J_{0}^{2}\left(\sqrt{2b}x_{\perp}\right)e^{-x^{2}}}{\left(x_{\parallel}^{2}+\frac{x_{\perp}^{2}}{2}+\zeta_{\alpha}-\zeta_{\beta}x_{\parallel}\right)}\;\text{,}\label{eq:db_int}\end{aligned}$$ can be written as a 1D integral of a combination of plasma dispersion functions, instead of the two-dimensional integral shown above. Note that, since these functions have been formulated with built-in analytical continuation, a dispersion relation written with these functions can be used to describe oscillating and damped solutions as well as unstable ones.
In this paper, we extend the space of curvature modified plasma dispersion functions by including their derivatives [\[]{}i.e. $J_{nm}\left(\zeta_{\alpha},\zeta_{\beta},b\right)\equiv-\frac{\partial}{\partial\zeta_{\alpha}}I_{nm}\left(\zeta_{\alpha},\zeta_{\beta},b\right)$[\]]{}, defined as $$\begin{aligned}
& J_{nm}\left(\zeta_{\alpha},\zeta_{\beta},b\right)\equiv\nonumber \\
& \frac{2}{\sqrt{\pi}}\int_{0}^{\infty}dx_{\perp}\int_{-\infty}^{\infty}dx_{\parallel}\frac{x_{\perp}^{n}x_{\parallel}^{m}J_{0}^{2}\left(\sqrt{2b}x_{\perp}\right)e^{-x^{2}}}{\left(x_{\parallel}^{2}+\frac{x_{\perp}^{2}}{2}+\zeta_{\alpha}-\zeta_{\beta}x_{\parallel}\right)^{2}}\;\text{,}\label{eq:jnm1}\end{aligned}$$ and use these functions in order to compute the derivatives of the plasma dielectric function $\varepsilon\left(\omega,\mathbf{k}\right)$ with respect to the angular frequency $\omega$. This enables the use of a root finding algorithm based on the argument principle, as discussed for instance in Ref. [@johnson:09] (similar to the method used in the quasi-linear solver QualiKiz[@bourdelle:07] and detailed in Ref. [@davies:86]), which allows us to obtain both unstable and stable roots.
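To make the idea concrete, a minimal, generic sketch of such a bracketing scheme is given below; this is not the implementation used in this work, and the function names, sampling resolution and tolerances are arbitrary choices. The winding number of an analytic function along the boundary of a rectangle counts the enclosed zeros, rectangles containing no zeros are discarded, and the others are subdivided; for the ITG problem the function would be $\varepsilon\left(\omega,\mathbf{k}\right)$ at fixed $\mathbf{k}$.

```python
# Generic rectangle bracketing based on the argument principle (a sketch, not the
# implementation used here): the number of zeros of an analytic f inside a
# rectangle equals the winding number of f along its (counterclockwise) boundary.
import numpy as np

def winding_number(f, z0, z1, npts=400):
    """Zeros of f inside the rectangle with opposite corners z0, z1 (none on the boundary)."""
    x0, y0, x1, y1 = z0.real, z0.imag, z1.real, z1.imag
    t = np.linspace(0.0, 1.0, npts, endpoint=False)
    path = np.concatenate([x0 + (x1 - x0) * t + 1j * y0,       # bottom
                           x1 + 1j * (y0 + (y1 - y0) * t),     # right
                           x1 - (x1 - x0) * t + 1j * y1,       # top
                           x0 + 1j * (y1 - (y1 - y0) * t)])    # left
    dphi = np.diff(np.angle(f(np.append(path, path[:1]))))
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi                # principal value increments
    return int(np.rint(dphi.sum() / (2 * np.pi)))

def bracket(f, z0, z1, tol=1e-3):
    """Recursively subdivide, keeping only rectangles that contain zeros of f."""
    n = winding_number(f, z0, z1)
    if n == 0:
        return []
    if abs(z1 - z0) < tol:
        return [(z0, z1, n)]
    zm = 0.5 * (z0 + z1)
    if (z1 - z0).real > (z1 - z0).imag:                        # split the longer side
        halves = [(z0, zm.real + 1j * z1.imag), (zm.real + 1j * z0.imag, z1)]
    else:
        halves = [(z0, z1.real + 1j * zm.imag), (z0.real + 1j * zm.imag, z1)]
    return [r for (za, zb) in halves for r in bracket(f, za, zb, tol)]

# demonstration on a function with known zeros at 1.1 + 0.7j and -0.6 + 1.9j
f = lambda z: (z - (1.1 + 0.7j)) * (z - (-0.6 + 1.9j))
print(bracket(f, -2.0 - 1.0j, 3.0 + 3.0j))
```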
Notice that in most cases, the stable roots are considered to have a negligible effect on transport and are therefore ignored. However, from a simple quasi-linear theory (QLT) point of view, this is clearly not permissible, since as the nonlinear interactions appear, so does the transfer of energy to stable or damped modes. However, from a renormalized QLT perspective -*à la* Balescu [@balescu:book:anom], which is actually how the use of QLT to estimate transport is really justified- it is unclear whether one may use a single dominant (but renormalized) mode, or one still has to consider a coupling of a number of stable and unstable modes (even if each one of those modes are modified due to nonlinear effects, via mechanisms such as eddy damping). This may be crucial in particular if after renormalization, the most unstable (or the least damped) mode for a given wave-vector becomes subdominant to a previously subdominant mode.
The rest of the paper is organized as follows. In Section II, the local ITG dispersion relation is recalled using curvature modified plasma dispersion functions, the $I_{nm}$’s. Then, in subsection b), the derivatives of the curvature modified dispersion functions are defined as $J_{nm}$’s, and the derivative of the plasma dielectric function is written in terms of the $J_{nm}$’s. In Section III, on methods and examples, an efficient and accurate method for finding and tracing the roots of the dispersion relation is first introduced in subsection a), and then an example of linear wave diffusion of an unstable mode into the linearly stable region is considered and the diffusion coefficient is estimated. Section IV presents the results and conclusions.
Formulation
===========
Linear Dispersion Using $I_{nm}$’s:
-----------------------------------
A basic description of local kinetic ITG in the electrostatic limit, with adiabatic electrons is based on the gyrokinetic equation [@frieman:82; @lee:83; @hahm:88] for the non-adiabatic part of the fluctuating distribution function for the ions: $$\begin{aligned}
\frac{\partial}{\partial t}\delta g+ & \left[v_{\parallel}\frac{\mathbf{B}^{*}}{B}+\frac{\mu}{eB}\hat{\mathbf{b}}\times\nabla B\right]\cdot\nabla\delta g=\frac{e}{T_{i}}F_{0}\frac{\partial}{\partial t}\left\langle \delta\Phi\right\rangle -F_{0}\frac{\hat{\mathbf{b}}}{B}\times\nabla\left\langle \delta\Phi\right\rangle \cdot\left[\frac{1}{n}\nabla n+\left(\frac{E}{T}-\frac{3}{2}\right)\frac{1}{T}\nabla T\right]\;\text{.}\label{eq:deltageqn-1}\end{aligned}$$ This is then complemented by the quasi-neutrality relation ($n_{e}=n_{i}$), with adiabatic electrons: $$\frac{e}{T_{e}}\Phi=-\frac{e\Phi}{T_{i}}+\int J_{0}\delta gd^{3}v\;\text{.}\label{eq:qn}$$ Taking the Laplace-Fourier transform of (\[eq:deltageqn-1\]) in the form $\delta g_{\mathbf{k},\omega}\left(\mathbf{v}\right)=\int e^{-i\omega t+i\mathbf{k}\cdot\mathbf{x}}\delta g\left(\mathbf{x},\mathbf{v},t\right)$ and solving for $\delta g_{\mathbf{k},\omega}$ and substituting the result into (\[eq:qn\]), we obtain the dispersion relation in the form:
$$\varepsilon\left(\omega,\mathbf{k}\right)\equiv1+\frac{1}{\tau}-\left[\frac{1}{\sqrt{2\pi}v_{ti}^{3}}\int\frac{\left(\omega-\omega_{*Ti}\left(v\right)\right)J_{0}\left(\frac{v_{\perp}k_{\perp}}{\Omega_{i}}\right)^{2}}{\left(\omega-v_{\parallel}k_{\parallel}-\omega_{Di}\frac{1}{2}\left(\frac{v_{\parallel}^{2}}{v_{ti}^{2}}+\frac{v_{\perp}^{2}}{2v_{ti}^{2}}\right)\right)}e^{-\frac{v^{2}}{2v_{ti}^{2}}}v_{\perp}dv_{\perp}dv_{\parallel}\right]=0\;\text{,}\label{eq:drel-1}$$
where $\varepsilon\left(\omega,\mathbf{k}\right)$ is the plasma dielectric function, $$\omega_{*Ti}\left(v\right)\equiv\omega_{*i}\left[1+\left(\frac{v^{2}}{2v_{ti}^{2}}-\frac{3}{2}\right)\eta_{i}\right]\;\mbox{,}$$ and $\omega_{Di}=2\frac{L_{n}}{R}\omega_{*i}$. Using $\omega/\left|k_{y}\right|\rightarrow\omega$, and $\omega_{D}/\left|k_{y}\right|\rightarrow\omega_{D}$, the dispersion relation (\[eq:drel-1\]) can be written as:
$$\begin{aligned}
\varepsilon\left(\omega,\mathbf{k}\right)\equiv & 1+\frac{1}{\tau}+\frac{1}{\omega_{Di}}\bigg(I_{10}\left[\omega+\left(1-\frac{3}{2}\eta_{i}\right)\right]\nonumber \\
& +\left(I_{30}+I_{12}\right)\eta_{i}\bigg)=0\label{eq:eps_pdf}\end{aligned}$$
where $I_{nm}\equiv I_{nm}\left(-\frac{\omega}{\omega_{Di}},-\frac{\sqrt{2}k_{\parallel}}{\omega_{Di}k_{y}},b\right)$. The advantage of this particular form is that the explicit scaling of $\omega$ with $k_{y}$ is removed, so that we can define a region in $\omega$ space to search for roots, and do not need to scale it with $k_{y}$.
As discussed in detail in Ref. [@gurcan:14], the $I_{nm}$’s can be written as a single integral:
$$\begin{aligned}
& I_{nm}\left(\zeta_{\alpha},\zeta_{\beta},b\right)=\\
& \int_{0}^{\infty}s^{\frac{n-1}{2}}G_{m}\left(z_{1}\left(s\right),z_{2}\left(s\right)\right)J_{0}\left(\sqrt{2bs}\right)^{2}e^{-s}ds\;\mbox{,}\\
& \quad(Im\left[\zeta_{\alpha}\right]>0)\end{aligned}$$
using the straightforward multi-variable generalization of the standard plasma dispersion function: $$G_{m}\left(z_{1},z_{2},\cdots,z_{n}\right)\equiv\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}\frac{x^{m}e^{-x^{2}}}{\prod_{i=1}^{n}\left(x-z_{i}\right)}dx\label{eq:def_gen}$$ with $$z_{1,2}\left(s\right)=\frac{1}{2}\left(\zeta_{\beta}\pm\sqrt{\zeta_{\beta}^{2}-2\left(s+2\zeta_{\alpha}\right)}\right)\;\mbox{.}\label{eq:z12}$$ Note that, using Eqns. 4 and 6 of Ref. [@gurcan:14], we can write the $G_{m}\left(z_{1},z_{2}\right)$ in terms of the standard plasma dispersion function as:
$$\begin{aligned}
G_{m}\left(z_{1},z_{2}\right) & =\frac{1}{\sqrt{\pi}\left(z_{1}-z_{2}\right)}\bigg[z_{1}^{m}Z_{0}\left(z_{1}\right)-z_{2}^{m}Z_{0}\left(z_{2}\right)\nonumber \\
& +\sum_{k=2}^{m}\left(z_{1}^{k-1}-z_{2}^{k-1}\right)\Gamma\left(\frac{m-k+1}{2}\right)\bigg]\;,\label{eq:gm}\end{aligned}$$
which was then implemented using a 16-coefficient Weideman method [@weideman:94], in the form of an open-source Fortran library \[<http://github.com/gurcani/zpdgen>\] with a Python interface.
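As an illustration of how these functions can be evaluated, the following minimal Python sketch computes $G_{m}$ directly from its integral definition and assembles the single-integral form of $I_{nm}$ for $Im\left[\zeta_{\alpha}\right]>0$. It is not the zpdgen library itself (whose Fortran/Python interface is not reproduced here), and the function names are illustrative; the production implementation instead evaluates $Z_{0}$ through the Weideman rational approximation for speed.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def G(zs, m):
    """G_m(z_1,...,z_n) = pi^{-1/2} int x^m e^{-x^2} / prod_i (x - z_i) dx,
    evaluated by direct quadrature (no pole on the real axis if Im(z_i) != 0)."""
    zs = np.atleast_1d(np.asarray(zs, dtype=complex))
    def f(x):
        return x**m * np.exp(-x**2) / np.prod(x - zs)
    re, _ = quad(lambda x: f(x).real, -np.inf, np.inf)
    im, _ = quad(lambda x: f(x).imag, -np.inf, np.inf)
    return (re + 1j * im) / np.sqrt(np.pi)

def z12(s, za, zb):
    """The two poles z_{1,2}(s) entering the single-integral representation."""
    rt = np.sqrt(zb**2 - 2.0 * (s + 2.0 * za) + 0j)
    return 0.5 * (zb + rt), 0.5 * (zb - rt)

def I_nm(n, m, za, zb, b):
    """I_nm(zeta_alpha, zeta_beta, b) for Im(zeta_alpha) > 0 (no residue term);
    slow nested quadrature, but sufficient for spot checks."""
    def f(s):
        z1, z2 = z12(s, za, zb)
        return (s**((n - 1) / 2.0) * G([z1, z2], m)
                * j0(np.sqrt(2.0 * b * s))**2 * np.exp(-s))
    re, _ = quad(lambda s: f(s).real, 0.0, np.inf)
    im, _ = quad(lambda s: f(s).imag, 0.0, np.inf)
    return re + 1j * im
```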
As discussed in Refs. [@kim:94] and [@kuroda:98], in addition to the integral in (\[eq:eps\_pdf\]), the analytical continuation requires adding a residue contribution, which can be computed as
$$\begin{aligned}
\Delta I_{nm}\left(\zeta_{\alpha},\zeta_{\beta},b\right)=-i\sqrt{\pi}2^{\frac{\left(n+3\right)}{2}}w^{\frac{n}{2}}\int_{-1}^{1}d\mu & \left(1-\mu^{2}\right)^{\frac{\left(n-1\right)}{2}}\left(\mu\sqrt{w}+\frac{\zeta_{\beta}}{2}\right)^{m}\\
& J_{0}^{2}\left(2\sqrt{b\left(1-\mu^{2}\right)w}\right)e^{-2\left(1-\mu^{2}\right)w-\left(\mu\sqrt{w}+\frac{\zeta_{\beta}}{2}\right)^{2}}\times\begin{cases}
0 & \zeta_{\alpha i}>0\quad\mbox{or }w_{r}<0\\
\frac{1}{2} & \zeta_{\alpha i}=0\quad\mbox{and }w_{r}>0\\
1 & \zeta_{\alpha i}<0\quad\mbox{and }w_{r}>0
\end{cases}\end{aligned}$$
where $w=\frac{\zeta_{\beta}^{2}}{4}-\zeta_{\alpha}$, $\zeta_{\alpha i}=\text{Im}\left(\zeta_{\alpha}\right)$ and $w_{r}=\text{Re}\left[w\right]$. With this, $I_{nm}=I_{nm}^{'}+\Delta I_{nm}$ \[where $I_{nm}^{'}$ is the single-integral representation given above\] is defined everywhere on the complex plane.
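The residue correction can be evaluated in the same spirit; the sketch below integrates the $\mu$ integral above by quadrature, with the branch conventions for $\sqrt{w}$ and $w^{n/2}$ taken as printed. The names are again illustrative and do not correspond to the actual library routines.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def delta_I_nm(n, m, za, zb, b):
    """Residue contribution Delta I_nm(zeta_alpha, zeta_beta, b), by quadrature
    over mu in [-1, 1]; scipy's jv accepts the complex Bessel argument."""
    za, zb = complex(za), complex(zb)
    w = zb**2 / 4.0 - za
    if za.imag > 0 or w.real < 0:
        return 0.0 + 0.0j                   # first case: no residue is picked up
    fac = 0.5 if za.imag == 0 else 1.0      # half weight on the real zeta_alpha axis
    def f(mu):
        t = mu * np.sqrt(w) + zb / 2.0
        arg = 2.0 * np.sqrt(b * (1.0 - mu**2) * w)
        return ((1.0 - mu**2)**((n - 1) / 2.0) * t**m * jv(0, arg)**2
                * np.exp(-2.0 * (1.0 - mu**2) * w - t**2))
    re, _ = quad(lambda mu: f(mu).real, -1.0, 1.0)
    im, _ = quad(lambda mu: f(mu).imag, -1.0, 1.0)
    return (-1j * np.sqrt(np.pi) * 2**((n + 3) / 2.0)
            * w**(n / 2.0) * fac * (re + 1j * im))
```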
Derivatives of $I_{nm}$’s
-------------------------
Similarly, the derivatives, defined by the relation $$J_{nm}\left(\zeta_{\alpha},\zeta_{\beta},b\right)\equiv-\frac{\partial}{\partial\zeta_{\alpha}}I_{nm}\left(\zeta_{\alpha},\zeta_{\beta},b\right)$$ can be written as $$\begin{aligned}
& J_{nm}\left(\zeta_{\alpha},\zeta_{\beta},b\right)=\nonumber \\
& \int_{0}^{\infty}ds\left[s^{\frac{n-1}{2}}G_{m}\left(z_{1},z_{2},z_{1},z_{2}\right)J_{0}\left(\sqrt{2bs}\right)^{2}e^{-s}\right]\;\mbox{,}\nonumber \\
& \quad(Im\left[\zeta_{\alpha}\right]>0)\label{eq:jnm2}\end{aligned}$$ with repeated variables $z_{1}=z_{1}\left(s\right)$ and $z_{2}=z_{2}\left(s\right)$ as given in (\[eq:z12\]). Since the $\zeta_{\alpha}$ dependence is through $z_{1}$ and $z_{2}$, we can use (\[eq:z12\]) to compute the derivatives, acting with $\frac{d}{d\zeta_{\alpha}}=\frac{1}{\left(z_{1}-z_{2}\right)}\left(\frac{d}{dz_{2}}-\frac{d}{dz_{1}}\right)$ on (\[eq:gm\]), in order to obtain:
![\[fig:rects\]How the bracketing algorithm isolates the roots of the plasma dielectric function (\[eq:eps\_pdf\]) down to a desired rectangle size. Shaded rectangles contain no roots and so are immediately abandoned. The roots, which are depicted by x's, are found using a least-squares optimization, where the midpoint of the final rectangle is used as the initial guess and the rectangle itself is used as a boundary. The case shown here is $k_{y}=0.8$, which is usually used as the reference $k_{y}$.](gultekin_jnm17_fig1){width="0.98\columnwidth"}
$$\begin{aligned}
G_{m}\left(z_{1},z_{2},z_{1},z_{2}\right) & =-\frac{d}{d\zeta_{\alpha}}G_{m}\left(z_{1},z_{2}\right)=\frac{1}{\left(z_{1}-z_{2}\right)^{2}}\bigg\{\frac{1}{\sqrt{\pi}}\sum_{k=2}^{m}\left[\left(k-1\right)\left(z_{1}^{k-2}+z_{2}^{k-2}\right)-\frac{2\left(z_{1}^{k-1}-z_{2}^{k-1}\right)}{\left(z_{1}-z_{2}\right)}\right]\Gamma\left(\frac{m-k+1}{2}\right)\nonumber \\
& -2\left(z_{1}^{m}+z_{2}^{m}\right)+Z_{0}\left(z_{1}\right)z_{1}^{m-1}\left(m-2z_{1}^{2}-\frac{2z_{1}}{\left(z_{1}-z_{2}\right)}\right)+Z_{0}\left(z_{2}\right)z_{2}^{m-1}\left(m-2z_{2}^{2}+\frac{2z_{2}}{\left(z_{1}-z_{2}\right)}\right)\bigg\}\text{\;.}\label{eq:gm2}\end{aligned}$$
We also have to compute the derivatives of the residue contribution $\Delta I_{nm}$, which we dub $\Delta J_{nm}$ (note that this is the derivative of the residue and not the residue of the derivative), and which can be written as: $$\begin{aligned}
\Delta J_{nm}= & -i\sqrt{\pi}2^{\frac{n+3}{2}}\int_{-1}^{1}\bigg\{\left(\frac{n}{2w}+\frac{\mu m}{2\mu w+\sqrt{w}\zeta_{\beta}}-2+\mu^{2}-\frac{\zeta_{\beta}}{2}\frac{\mu}{\sqrt{w}}\right)J_{0}^{2}\left(2\sqrt{b\left(1-\mu^{2}\right)w}\right)\nonumber \\
& -2\sqrt{\frac{b\left(1-\mu^{2}\right)}{w}}J_{0}\left(2\sqrt{b\left(1-\mu^{2}\right)w}\right)J_{1}\left(2\sqrt{b\left(1-\mu^{2}\right)w}\right)\bigg\}\nonumber \\
& w^{n/2}\left(1-\mu^{2}\right)^{\frac{n-1}{2}}\left(\mu\sqrt{w}+\frac{\zeta_{\beta}}{2}\right)^{m}e^{-2\left(1-\mu^{2}\right)w-\left(\mu\sqrt{w}+\frac{\zeta_{\beta}}{2}\right)^{2}}d\mu\times\begin{cases}
0 & \zeta_{\alpha i}>0\quad\mbox{or }w_{r}<0\\
\frac{1}{2} & \zeta_{\alpha i}=0\quad\mbox{and }w_{r}>0\\
1 & \zeta_{\alpha i}<0\quad\mbox{and }w_{r}>0
\end{cases}\label{eq:djnm}\end{aligned}$$
where we used the definition $\Delta J_{nm}=-\frac{d}{d\zeta_{\alpha}}\Delta I_{nm}=\frac{d}{dw}\Delta I_{nm}$. Finally, $J_{nm}=J_{nm}^{'}+\Delta J_{nm}$ where $J_{nm}^{'}$ is the integral given in (\[eq:jnm2\]) with (\[eq:gm2\]).
Using these $J_{nm}$ functions, which are the derivatives of the curvature-modified plasma dispersion functions with respect to their first argument, the derivative of the plasma dielectric function can be written as: $$\begin{aligned}
\frac{\partial}{\partial\omega}\varepsilon\left(\omega,\mathbf{k}\right)\equiv\frac{1}{\omega_{Di}}I_{10} & +\frac{1}{\omega_{D}^{2}}\bigg(J_{10}\left[\omega+\left(1-\frac{3}{2}\eta_{i}\right)\right]\nonumber \\
& +\left(J_{30}+J_{12}\right)\eta_{i}\bigg)\;\text{.}\label{eq:deps}\end{aligned}$$ where $J_{nm}\equiv J_{nm}\left(-\frac{\omega}{\omega_{Di}},-\frac{\sqrt{2}k_{\parallel}}{\omega_{Di}k_{y}},b\right)$, and $\omega$ and $\omega_{D}$ are normalized to $\left|k_{y}\right|$ for convenience.
Methods and Examples
====================
Finding and tracking stable and unstable solutions
--------------------------------------------------
Fixing the values of plasma parameters such as $\eta_{i}$, $R/L_{n}$ and $\tau$, we can solve (\[eq:eps\_pdf\]) for $\omega$, for a given $\mathbf{k}$. In practice we fix $k_{\parallel}$ and $k_{x}$ and consider $\omega$ as a function of $k_{y}$. While there are many different ways of achieving this numerically, we have developed a simple algorithm for bracketing, solving for, and then tracing each root. Generally we pick a reference $k_{y}$ value at which we expect the roots to be reasonably distinct (choosing this reference $k_{y}$ may require trial and error). Then we use an algorithm very similar to the one outlined in Ref. [@johnson:09] in order to bracket each solution as shown in Fig. \[fig:rects\], with an initial rectangle that covers only the $\omega_{r}<0$ part of the complex plane, avoiding the line $\omega_{r}=0$, where there is a branch cut. A desired number of roots $N_{r}$ is specified, and the algorithm repeats itself with larger and larger rectangles (always avoiding the branch cut) until the desired number of roots falls within the rectangle. This gives us $N_{r}$ rectangles, each containing one root. Then a basic least-squares optimization is used to locate the exact root within each rectangle. Note that a small buffer is added around the boundary of the rectangle in order to handle cases where the root falls exactly on the boundary (e.g. the third root from the top in Fig. \[fig:rects\]).
When $k_{y}$ is varied, a new rectangle is defined around each of the $N_{r}$ roots, using the solutions from the nearest previously computed $k_{y}$ as centers and a predefined rectangle size (if the $k_{y}$ resolution is high enough, the rectangles can be very small and virtually never intersect), and the least-squares optimization is used again to find the new solution in each rectangle. This allows us to trace curves $\omega=\omega\left(k_{y}\right)$, which help distinguish different roots. Note that tracking $\omega$ as a function of $k_{y}$, instead of repeating the bracketing step each time, saves a huge amount of computation time. Such an approach would also be useful in quasilinear transport modelling geared towards speed [@bourdelle:07].
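A minimal sketch of this tracking step is given below. It assumes a callable `eps(omega, ky)` implementing the dispersion relation and the list of roots obtained at the nearest previously solved $k_{y}$; the function name and the rectangle half-width are illustrative choices, not the values used in the actual implementation.

```python
from scipy.optimize import least_squares

def track_roots(eps, roots_prev, ky, half_width=0.05):
    """Re-converge each previously found root at a nearby ky, constrained to a
    small rectangle centred on its old value so distinct branches stay separated."""
    roots_new = []
    for w0 in roots_prev:
        def resid(v):
            val = eps(v[0] + 1j * v[1], ky)
            return [val.real, val.imag]
        lo = [w0.real - half_width, w0.imag - half_width]
        hi = [w0.real + half_width, w0.imag + half_width]
        sol = least_squares(resid, x0=[w0.real, w0.imag], bounds=(lo, hi))
        roots_new.append(sol.x[0] + 1j * sol.x[1])
    return roots_new
```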
![\[fig:gamma\]Growth rates $\gamma$ (solid lines) and frequencies $\omega$ (dashed lines) as functions of $k_{y}$, for the first four roots of the local kinetic ITG dispersion relation as defined in (\[eq:eps\_pdf\]), where each color denotes a separate root. Note that around $k_{y}=2.5$, the second root becomes less damped than the unstable branch.](gultekin_jnm17_fig2){width="0.98\columnwidth"}
In any case, bracketing is necessary in order to isolate the different roots of (\[eq:eps\_pdf\]). Since the algorithm relies on the argument principle $$\oint_{C}\frac{\frac{\partial}{\partial\omega}\varepsilon\left(\omega,\mathbf{k}\right)}{\varepsilon\left(\omega,\mathbf{k}\right)}d\omega=2\pi i\left(N-P\right)$$ where $N$ and $P$ are the numbers of zeros and poles, respectively, inside the closed contour defined by $C$, we use (\[eq:deps\]) in order to compute the derivative of the plasma dielectric function analytically.
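The following hedged sketch shows how the bracketing step can be organized around this contour integral, assuming callables `eps(omega)` and `deps(omega)` for the dielectric function and its derivative at fixed $\mathbf{k}$; the expansion of the initial rectangle, the buffer around the boundaries and the branch-cut avoidance of the actual implementation are omitted for brevity.

```python
import numpy as np
from scipy.optimize import least_squares

def count_zeros(eps, deps, x0, x1, y0, y1, npts=400):
    """Number of zeros of eps inside the rectangle [x0,x1]x[y0,y1] (assuming no
    poles inside and no zeros on the boundary), from the winding number
    (2*pi*i)^{-1} times the contour integral of deps/eps."""
    corners = [x0 + 1j*y0, x1 + 1j*y0, x1 + 1j*y1, x0 + 1j*y1, x0 + 1j*y0]
    total = 0.0 + 0.0j
    for a, b in zip(corners[:-1], corners[1:]):
        dz = (b - a) / (npts - 1)
        vals = np.array([deps(a + k*dz) / eps(a + k*dz) for k in range(npts)])
        total += dz * (vals.sum() - 0.5 * (vals[0] + vals[-1]))   # trapezoid rule
    return int(round((total / (2j * np.pi)).real))

def bracket(eps, deps, rect, min_size=1e-2):
    """Recursively subdivide rect=(x0,x1,y0,y1), abandoning rectangles that
    contain no roots (shaded in Fig. 1), until the remaining ones are small."""
    x0, x1, y0, y1 = rect
    if count_zeros(eps, deps, x0, x1, y0, y1) == 0:
        return []
    if max(x1 - x0, y1 - y0) < min_size:
        return [rect]
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    quads = [(x0, xm, y0, ym), (xm, x1, y0, ym), (x0, xm, ym, y1), (xm, x1, ym, y1)]
    return [r for q in quads for r in bracket(eps, deps, q, min_size)]

def polish(eps, rect):
    """Least-squares refinement: rectangle midpoint as initial guess, the
    rectangle itself as bounds (the real code also adds a small buffer)."""
    xa, xb, ya, yb = rect
    res = least_squares(lambda v: [eps(v[0] + 1j*v[1]).real, eps(v[0] + 1j*v[1]).imag],
                        x0=[0.5*(xa + xb), 0.5*(ya + yb)], bounds=([xa, ya], [xb, yb]))
    return res.x[0] + 1j * res.x[1]
```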
![\[fig:gamma2\]Growth rate $\gamma$ (solid line) and frequency $\omega$ (dashed line) as functions of $k_{y}$, for the dominant root of the ITG dispersion relation near the threshold of instability (i.e. $\eta_{i}=0.68$). Note that the damped modes in this case are strongly damped (i.e. $\gamma_{d}<-0.2$) compared to the unstable mode.](gultekin_jnm17_fig3){width="0.98\columnwidth"}
Using this method, the growth (and damping) rates as well as frequencies for the reference shot studied in Ref. [@kim:94] (i.e. $\eta_{i}=2.5$, $L_{n}/R=0.2$, $\tau=1.0$ and $k_{\parallel}=0.01$) are shown in Fig. \[fig:gamma\]. It is remarkable that for these parameters, around $k_{y}=2.5$, the second root becomes less damped than the unstable branch. Since trapped electron physics is ignored, the second root never actually becomes unstable.
Another interesting observation about the nature of the roots in this particular limit of the gyrokinetic equation is that near the instability threshold (slightly above or slightly below it), one observes a region to the left (in $k_{y}$ space) of the linearly most unstable (or least damped) mode, where the solution becomes a propagating wave in the electron diamagnetic direction (i.e. $\omega>0$), as seen in Fig. \[fig:gamma2\]. This is the drift wave (DW) branch, which is modified by the weak ion temperature gradient. This is not a surprise, since the equations considered in this paper should recover the drift wave limit as the ITG drive disappears.
Example: Linear diffusion near marginality
------------------------------------------
For a given set of plasma parameters, $\eta_{i}$ determines the stability of the ITG mode. Considering $\eta_{i}$ as a function of $x$, for instance of the form $\eta_{i}\left(x\right)=\eta_{ic}+\delta\eta_{i}\left[\frac{x-x_{0}}{x_{1}-x_{0}}\right]$, we can define a reasonable description of an unstable region next to a stable region. The issue of turbulence spreading into the stable region is a complex one and is beyond the scope of the current paper. Here we discuss how a monochromatic wave, propagating mainly in the $y$ direction, can diffuse in the radial direction due to $\partial^{2}\gamma_{k}/\partial k_{x}^{2}$ being finite and negative. Consider the evolution of the amplitude of the ITG mode near its stability boundary. Close enough to marginal stability, only a single mode will be linearly unstable. We can write the general two-scale evolution equation for the amplitude of that mode in the form $$\left(\partial_{t}+v_{gi}\partial_{i}\right)I_{k}-2\gamma_{k_{x},k_{y}}I_{k}-D_{ij}\partial_{ij}I_{k}+\gamma_{n\ell}I_{k}^{2}=0\label{eq:landau}$$ Here $I=\left|\Phi_{k}\right|^{2}$ is the intensity of the most unstable mode, $v_{gi}=\partial\omega_{k}/\partial k_{i}$, $D_{ij}=-\partial^{2}\gamma_{k_{x},k_{y}}/\partial k_{i}\partial k_{j}$ and $\gamma_{n\ell}$ is the nonlinear damping via mode coupling or coupling to large-scale flows, whose origin is, again, beyond the scope of this paper. Nonetheless, the local mixing length estimate would suggest $\gamma_{n\ell}\sim2k_{\perp}^{2}$. Note that the most unstable mode has $\frac{\partial}{\partial k_{x}}\gamma=\frac{\partial}{\partial k_{y}}\gamma=0$ by definition, and $\omega_{k}\approx0$ for the ITG mode near marginality. In addition, $\partial\omega/\partial k_{x}\ll\partial\omega/\partial k_{y}$ near the stability boundary.
Eqn. (\[eq:landau\]) is a linear Fisher-Kolmogorov equation [@fisher:37] similar to the one discussed in the study of the formation of subcritical turbulence fronts [@pomeau:86]. Moving to the group velocity frame in the $y$ direction, and considering mainly the diffusion in the $x$ direction, we get: $$\partial_{t}I-2\gamma_{k}\left(x\right)I-D_{xx}\partial_{xx}I+\gamma_{n\ell}I^{2}=0\label{eq:landau2}$$ Using $\theta=\frac{k_{x}}{\hat{s}k_{y}}$, with $\omega_{D}\rightarrow\omega_{D}\left(\cos\theta+\hat{s}\theta\sin\theta\right)$ and $b=k_{\perp}^{2}=k_{y}^{2}+k_{x}^{2}$ in (\[eq:drel-1\]) in order to capture the $k_{x}$ dependence of the growth rate, we can obtain the growth rate and frequency as functions of $k_{x}$, $x$ and $k_{y}$, as shown in Fig. \[fig:Profiles\]. This allows us to compute a linear diffusion coefficient via $D_{xx}=-\partial^{2}\gamma/\partial k_{x}^{2}$, which can be estimated to be around $D_{xx}\approx0.1$ near the marginal point. More generally, the methodology that we have developed above allows us to determine all of the coefficients of (\[eq:landau2\]) except $\gamma_{n\ell}$.
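As a simple illustration of how these coefficients can be used, the sketch below estimates $D_{xx}$ by central finite differences around the most unstable $k_{x}$ (taken here at $k_{x}=0$) and advances (\[eq:landau2\]) by one explicit time step; `gamma_of_kx` and the grid values of $\gamma(x)$ are assumed to come from the root solver above, and the periodic boundaries are a simplification, not part of the paper's implementation.

```python
import numpy as np

def estimate_Dxx(gamma_of_kx, kx0=0.0, h=0.05):
    """Central-difference estimate of D_xx = -gamma''(k_x) at k_x = kx0."""
    g_m, g_0, g_p = gamma_of_kx(kx0 - h), gamma_of_kx(kx0), gamma_of_kx(kx0 + h)
    return -(g_m - 2.0 * g_0 + g_p) / h**2

def step_intensity(I, gamma_x, Dxx, gnl, dx, dt):
    """One forward-Euler step of dI/dt = 2*gamma(x)*I + Dxx*I_xx - gnl*I^2
    (periodic boundaries, purely for illustration)."""
    Ixx = (np.roll(I, -1) - 2.0 * I + np.roll(I, 1)) / dx**2
    return I + dt * (2.0 * gamma_x * I + Dxx * Ixx - gnl * I**2)
```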
Results and Conclusion
======================
The method outlined in this paper allows us to solve the local linear gyrokinetic equation with adiabatic electrons and background density and ion temperature gradients, as in (\[eq:deltageqn-1\]-\[eq:qn\]), for stable and unstable roots using generalized plasma dispersion functions, as seen in Fig. \[fig:gamma\]. It can be used to study the behaviour of the ITG mode near the instability threshold $\eta_{i}=\eta_{ic}$ (i.e. $\eta_{ic}=2/3$ for small enough $R/L_{n}$), where the subcritical solution becomes a propagating wave in the electron diamagnetic direction (i.e. $\omega>0$), as seen in Fig. \[fig:gamma2\]. This is the drift wave (DW) branch, which is modified by the weak ion temperature gradient.
The existence of a drift wave with $\gamma=0$ has important implications for subcritical turbulence, especially when one considers a stable region next to an ITG unstable region. In such a scenario, the ITG fluctuations that are generated in the unstable region at higher wavenumbers (say around $k_{y}\sim0.3-0.5$) can couple to drift waves in the stable region, which have the nice property of having $\gamma=0$ (rather than negative) even when $\eta_{i}$ is below critical. In this case the wave diffusion (as discussed above, due to $d^{2}\gamma/dk_{x}^{2}$) or nonlinear spreading due to turbulent diffusion of broadband turbulence [@gurcan:05] is easier, since the subcritical region does not act as a sink.
It is also worth discussing the possibility of asymmetry in three-wave couplings near the threshold of the ITG instability. In the standard picture of triadic interactions, the middle wavenumber of a triad gives its energy to larger and smaller wavenumbers, which normally contributes equal amounts to the forward and backward cascades. However, since the higher $k_{y}$'s are damped but the lower $k_{y}$'s are not, the energy would naturally travel towards the drift wave branch. Notice that near marginal stability, the frequencies are such that it is easy to satisfy the resonance conditions with a positive frequency for the drift wave, a negative frequency for the damped higher-$k_{y}$ ITG mode, and $\omega\approx0$ for the pump. This may explain how the free energy can be transferred to low $k_{y}$ in a process similar to, but intrinsically different from, the inverse cascade.
The approach developed in this paper can be extended to a renormalized version of the plasma dielectric function [@dupree:67]. However, since both the careful implementation and the detailed analysis of the physics results of such a formulation require dedicated effort, we leave this to future studies.
|
---
author:
- '<span style="font-variant:small-caps;">Hugues AUVRAY[^1] and Xiaonan MA[^2] and George MARINESCU[^3]</span>'
title: Quotient of Bergman kernels on punctured Riemann surfaces
---
Introduction
============
In this paper we study the asymptotics of Bergman kernels of high tensor powers of a singular Hermitian line bundle over a Riemann surface under the assumption that the curvature has singularities of Poincaré type at a finite set. Namely, we show that the *quotient* of these Bergman kernels and the Bergman kernel of the Poincaré model near the singularity tends to one, up to arbitrary negative powers of the tensor power. In our previous paper [@bkp] (see also [@bkp0]) we obtained a weighted estimate in the $C^m$-norm near the punctures for the *difference* of the global Bergman kernel and of the Bergman kernel of the Poincaré model near the singularity, uniformly in the tensor powers of the given bundle. Our method is inspired by the analytic localization technique of Bismut-Lebeau [@BL].
There exists a well-known expansion of the Bergman kernel on general compact manifolds [@bou; @Ca99; @DLM06; @Hs10; @mm; @MM08; @Ti90; @Z98] with important applications to the existence and uniqueness of constant scalar curvature Kähler metrics [@Don; @Ti90] as part of the Tian-Yau-Donaldson’s program. Coming to our context, a central problem is the relation between the existence of special complete/singular metrics and the stability of the pair $(X, D)$ where $D$ is a smooth divisor of a compact Kähler manifold $X$; see e.g. the suggestions of [@sze §3.1.2] for the case of asymptotically hyperbolic Kähler metrics, which naturally generalize to higher dimensions the complete metrics $\omega_{\Sigma}$ studied here. Moreover, the technique developed here can be extended to the higher dimensional situation in the case of Poincaré type Kähler metrics with reasonably fine asymptotics on complement of divisors, see the construction of [@auv §1.1] and [@auv2 Theorem 4].
The Bergman kernel function of a singular polarization is of particular interest in arithmetic situations [@BBK07; @BKK05; @bf]. In [@bkp] we applied the precise asymptotics of the Bergman kernel near the punctures in order to obtain optimal uniform estimates for the supremum of the Bergman kernel, relevant in arithmetic geometry [@AbUll95; @jk04; @fjk]. There are also applications to partial Bergman kernels, see [@CM15].
We place ourselves in the setting of [@bkp] which we describe now. Let $\overline\Sigma$ be a compact Riemann surface and let $D=\{a_1,\ldots,a_N\}\subset\overline\Sigma$ be a finite set. We consider the punctured Riemann surface $\Sigma = \overline{\Sigma}\smallsetminus D$ and a Hermitian form $\omega_{\Sigma}$ on $\Sigma$. Let $L$ be a holomorphic line bundle on $\overline{\Sigma}$, and let $h$ be a singular Hermitian metric on $L$ such that:
- $h$ is smooth over $\Sigma$, and for all $j=1,\ldots,N$, there is a trivialization of $L$ in the complex neighborhood $\overline{V_j}$ of $a_j$ in $\overline{\Sigma}$, with associated coordinate $z_j$ such that $|1|_{h}^2(z_{j})= \big|\!\log(|z_j|^2)\big|$.
- There exists $\varepsilon>0$ such that the (smooth) curvature $R^L$ of $h$ satisfies $iR^L\geq\varepsilon\omega_{\Sigma}$ over $\Sigma$ and moreover, $iR^L=\omega_{\Sigma}$ on $V_j:=\overline{V_j}\smallsetminus\{a_j\}$; in particular, $\omega_{\Sigma} = \omega_{{\mathbb{D}}^*}$ in the local coordinate $z_j$ on $V_j$ and $(\Sigma, \omega_{\Sigma})$ is complete.
Here $\omega_{{\mathbb{D}}^*}$ denotes the Poincaré metric on the punctured unit disc ${\mathbb{D}}^*$, normalized as follows: $$\label{eqn_omegaPcr}
\omega_{{\mathbb{D}}^*} := \frac{idz\wedge d\overline{z}}
{|z|^2\log^2(|z|^2)}\,\cdot$$ For $p\geq1$, let $h^p:=h^{\otimes p}$ be the metric induced by $h$ on $L^p\vert_{\Sigma}$, where $L^p:=L^{\otimes p}$. We denote by $H^0_{(2)}(\Sigma,L^p)$ the space of ${{\boldsymbol{L}}}^2$-holomorphic sections of $L^p$ relative to the metrics $h^p$ and $\omega_\Sigma$, $$\label{e:bs}
H^0_{(2)}(\Sigma,L^p)=\left\{S\in H^0(\Sigma,L^p):\,
\|S\|_{{{\boldsymbol{L}}}^2}^2:=\int_{\Sigma}|S|^2_{h^p}\,
\omega_\Sigma<\infty\right\},$$ endowed with the obvious inner product. The sections from $H^0_{(2)}(\Sigma,L^p)$ extend to holomorphic sections of $L^p$ over $\overline\Sigma$, i.e., (see [@mm (6.2.17)]) $$\label{e:bs1}
H^0_{(2)}(\Sigma,L^p)\subset
H^0\big(\overline\Sigma,L^p\big).$$ In particular, the dimension $d_p$ of $H^0_{(2)}(\Sigma,L^p)$ is finite.
We denote by $B_p({\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}},{\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}})$ and by $B_p({\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}})$ the (Schwartz-)Bergman kernel and the Bergman kernel function of the orthogonal projection $B_{p}$ from the space of ${\boldsymbol{L}}^{2}$-sections of $L^{p}$ over $\Sigma$ onto $H^0_{(2)}(\Sigma,L^p)$. They are defined as follows: if $\{S_\ell^p\}_{\ell=1}^{d_p}$ is an orthonormal basis of $H^0_{(2)}(\Sigma,L^p)$, then $$\label{e:BFS1}
B_p(x,y):=\sum_{\ell=1}^{d_p}S^p_\ell(x)\otimes(S^p_\ell(y))^{*}
\quad\text{and}\quad
B_p(x):=\sum_{\ell=1}^{d_p}|S^p_\ell(x)|_{h^p}^2\,.$$ Note that these are independent of the choice of basis (see [@mm (6.1.10)] or [@CM11 Lemma 3.1]). Similarly, let $B_p^{{\mathbb{D}}^*}(x,y)$ and $B_p^{{\mathbb{D}}^*}(x)$ be the Bergman kernel and Bergman kernel function of $\big({\mathbb{D}}^*, \omega_{{\mathbb{D}}^*},
{\mathbb{C}},\big|\!\log(|z|^2)\big|^p\, h_{0})$ with $h_{0}$ the flat Hermitian metric on the trivial line bundle ${\mathbb{C}}$.
Note that for $k\in {\mathbb{N}}$, the $C^{k}$-norm at $x\in \Sigma$ is defined for $\sigma\in C^\infty(\Sigma, L^p)$ as $$\label{eq:2.13c}\begin{split}
&|\sigma |_{C^k(h^p)}(x)= \big( |\sigma|_{h^p}
+\big|\nabla^{p,\Sigma}\sigma
\big|_{h^p,\omega_{\Sigma}}+\ldots+\big|(\nabla^{p,\Sigma})^k
\sigma\big|_{h^p,\omega_{\Sigma}}\big)(x),
\end{split}$$ where $\nabla^{p,\Sigma}$ is the connection on $(T\Sigma)^{\otimes\ell}\otimes L^p$ induced by the Levi-Civita connection on $(T\Sigma, \omega_{\Sigma})$ and the Chern connection on $(L^{p},h^p)$, and the pointwise norm $|\,{\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}}\,|_{h^p,\omega_{\Sigma}}$ is induced by $\omega_{\Sigma}$ and $h^{p}$. In the same way we define the $C^{k}$-norm $|f|_{C^{k}}(x)$ at $x\in \Sigma$ of a smooth function $f\in{C}^\infty(\Sigma,{\mathbb{C}})$ by using the Levi-Civita connection on $(T\Sigma, \omega_{\Sigma})$.
We fix a point $a\in D$ and work in coordinates centered at $a$. Let $\mathfrak{e}_{L}$ be the holomorphic frame of $L$ near $a$ corresponding to the trivialization in the condition ($\alpha$). By assumptions ($\alpha$) and ($\beta$) we have the following identification of the geometric data in the coordinate $z$ on the punctured disc ${\mathbb{D}}^*_{4r}$ of radius $4r$ centered at $a$, via the trivialization $\mathfrak{e}_{L}$ of $L$, $$\begin{aligned}
\label{eq:1.6a}
\big(\Sigma,\omega_{\Sigma}, L,h\big)\big|_{{\mathbb{D}}^*_{4r}}
=\big({\mathbb{D}}^*,\omega_{{\mathbb{D}}^*}, {\mathbb{C}}, h_{{\mathbb{D}}^*}
= \big|\!\log(|z|^2)\big|\cdot h_{0}\big)\big|_{{\mathbb{D}}^*_{4r}}\,,
\quad \text{ with } 0<r<(4e)^{-1}.\end{aligned}$$ In [@bkp Theorem 1.2] we proved the following weighted diagonal expansion of the Bergman kernel:
\[thm\_MainThm\] Assume that $(\Sigma, \omega_{\Sigma}, L, h)$ fulfill conditions ($\alpha$) and ($\beta$). Then the following estimate holds: for any $\ell, k\in {\mathbb{N}}$, and every $\delta>0$, there exists $C=C(\ell, k, \delta)>0$ such that for all $p\in{\mathbb{N}}^*$, and $z\in V_1\cup\cdots\cup V_N$ with the local coordinate $z_{j}$, $$\label{eqn_MainThm}
\Big | B_p - B_p^{{\mathbb{D}}^*}\Big |_{C^k} (z_{j})\leq Cp^{-\ell}
\, \big|\!\log(|z_{j}|^2)\big|^{-\delta},$$ with norms computed with help of $\omega_{\Sigma}$ and the associated Levi-Civita connection on ${\mathbb{D}}_{4r}^*$.
Note that in [@bkp Theorem 1.1] we also established the off-diagonal expansion of the Bergman kernel $B_p({\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}},{\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}})$. The main result of the present paper is the following estimate of the quotient of the Bergman kernels from :
\[thm\_apdx\] If $(\Sigma, \omega_\Sigma, L, h)$ fulfill conditions $(\alpha)$ and $(\beta)$, then $$\label{e:apdx}
\sup_{z\in V_1\cup\ldots\cup V_N}
\bigg|\frac{B_p}{B_p^{{\mathbb{D}}^*}}(z) - 1 \bigg| =
\mathcal{O}(p^{-\infty})\,,
$$ i.e., for any $\ell>0$ there exists $C>0$ such that for any $p\in{\mathbb{N}}^{*}$ we have $$\label{e:apdx1}
\sup_{z\in V_1\cup\ldots\cup V_N}
\bigg|\frac{B_p}{B_p^{{\mathbb{D}}^*}}(z) - 1 \bigg| \leq C p^{-\ell}.$$
Theorem \[thm\_apdx\] is related to estimates in exponentially small neighborhoods of the punctures obtained in [@S Theorem 1.6] and [@SS Lemma 3.3].
For each fixed $p\geq 2$, the function $(|z|^{2}\big|\!\log(|z|^2)\big|^{p})^{-1} B_{p}^{{\mathbb{D}}^*}(z)$ is smooth and strictly positive on ${\mathbb{D}}_{4r}$, as follows from . By [@bkp Remark 3.2], any holomorphic ${\boldsymbol{L}}^{2}$-section of $L^{p}$ over $\Sigma$ extends to a holomorphic section on $\overline{\Sigma}$ (see the inclusion ) vanishing at $0$ in ${\mathbb{D}}_{4r}$. Thus by the formula for $B_p$ we see that the quotient $\frac{B_p}{B_p^{{\mathbb{D}}^*}}$ is a smooth function on ${\mathbb{D}}_{4r}$ for each $p\geq 2$.
\[thm:diffquot\] For all $k\geq 1$ and $D_1,\ldots,D_k\in\Big\{\frac{\partial\,}{\partial z}\,,
\frac{\partial\,}{\partial \overline{z}}\Big\}$ we have $$\label{e:diffquot}
\sup_{z\in\overline{V_1}\cup\ldots\cup \overline{V_N}}
\Big|(D_1\cdots D_k)\frac{B_p}{B_p^{{\mathbb{D}}^*}}(z)\Big|
= \mathcal{O}(p^{-\infty}).$$
[Theorem \[thm\_MainThm\] admits a generalization to orbifold Riemann surfaces. Indeed, assume that $\overline\Sigma$ is a compact orbifold Riemann surface such that the finite set $D\subset\overline\Sigma$ does not meet the (orbifold) singular set of $\overline\Sigma$. Then by the same argument as in [@bkp Remark 1.3] (using [@DLM06; @DLM12]) we see that Theorems \[thm\_apdx\] and \[thm:diffquot\] still hold in this context.]{}
Note that the $C^k$-norm used in is induced by $\omega_{{\mathbb{D}}^*}$, roughly the sup-norm with respect to the derivatives defined by the vector fields $z\log(|z|^2) \frac{\partial\,}{\partial z}$ and $\overline{z}\log(|z|^2) \frac{\partial\,}{\partial \overline{z}}$, which vanish at $z=0$. Hence the norm in is stronger than the $C^k$-norm used in , because the norm in is defined by using derivatives along the vector fields $\frac{\partial\,}{\partial z}$ and $\frac{\partial\,}{\partial \overline{z}}$.
Let us mention at this stage that even if the results above follow from our work [@bkp], relying more precisely on [@bkp Theorem 1.2], the proofs are by no means an obvious rewriting of [@bkp Theorem 1.2] (for instance), since $B_p^{{\mathbb{D}}^*}({\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}})$ takes extremely small values arbitrarily near the origin. This can be seen in [@bkp §3.2] and it is specific to the non-compact framework. What estimate says is that $B_p({\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}})$ follows such a behaviour very closely in the corresponding regions of $\Sigma$ via the chosen coordinates.
Here is a general strategy of our approach for Theorems \[thm\_apdx\] and \[thm:diffquot\]. We choose a special orthonormal basis $\{\sigma^{(p)}_{\ell}\}_{\ell=1}^{d_{p}}$ of $H^{0}_{(2)}(\Sigma, L^{p})$ starting from $z^{l}$ on ${\mathbb{D}}^{*}_{4r}$ for $1\leq l\leq \delta_{p}$ with $0<\alpha<\delta_{p}/p <\alpha_{1}<1$. Our choice of $\sigma^{(p)}_{\ell}$ implies that the coefficients of the expansion $$\sigma^{(p)}_{\ell}(z)=\sum_{j=1}^{\infty} a^{(p)}_{j\ell}z^{j}$$ of $\sigma^{(p)}_{\ell}$ on ${\mathbb{D}}_{4r}^{*}$ satisfy $a^{(p)}_{j\ell}=0$ if $j<\delta_{p}$ and $j<l\leq d_{p}$ (cf. ). Now we separate the contributions of $\sigma^{(p)}_{\ell}$, $c^{(p)}_{\ell}$ (cf. ), $a^{(p)}_{j\ell}$ in $B_{p}, B_{p}^{{\mathbb{D}}^{*}}$ into two groups: $1\leq j, \ell\leq \delta_{p}$; $\max\{j, \ell\}\geq \delta_{p}+1$. The contribution corresponding to $1\leq j, \ell\leq \delta_{p}$ will be controlled by using Lemma \[lem\_phi0phisigma\] (or \[prop:rfnd\_estmt\]). The contribution corresponding to $\max\{j, \ell\}\geq \delta_{p}+1$ will be handled by a direct application of the Cauchy inequalities . It turns out that by suitably choosing $c,A>0$ this contribution has uniformly the relative size $2^{-\alpha p}$ compared to $B_{p}^{{\mathbb{D}}^{*}}$ on $|z|\leq cp^{-A}$.
This paper is organized as follows. In Section \[eq:s2\], we establish Theorem \[thm\_apdx\] based on the off-diagonal expansion of the Bergman kernel from [@bkp §6]. In Section \[eq:s3\], we establish Theorem \[thm:diffquot\] by refining the argument from Section \[eq:s2\]. In Section \[eq:s4\] we give some applications of the main results. Notation: we denote by $\lfloor x\rfloor$ the integer part of $x\in {\mathbb{R}}$.
***Acknowledgments.*** We would like to thank Professor Jean-Michel Bismut for helpful discussions. In particular, Theorem \[thm:diffquot\] answers a question raised by him at CIRM in October 2018.
$C^{0}$-estimate for the quotient of Bergman kernels {#eq:s2}
====================================================
This section is organized as follows. In Section \[eq:s2.1\], we obtain the $C^{0}$-estimate for the quotient of Bergman kernels, Theorem \[thm\_apdx\], admitting first an integral estimate, Lemma \[lem\_phi0phisigma\]. In Section \[eq:s2.2\], we deduce Lemma \[lem\_phi0phisigma\] from the *two-variable* Poincaré type Bergman kernel estimate of [@bkp Theorem 1.1 and Corollary 6.1].
Proof of Theorem \[thm\_apdx\] {#eq:s2.1}
------------------------------
We recall first some basic facts. For $\sigma\in C_{0}^{\infty}(\Sigma, L^{p})$, the space of smooth and compactly supported sections of $L^{p}$ over $\Sigma$, set $$\begin{aligned}
\label{eq:2.1a}
\|\sigma\|_{{\boldsymbol{L}}^2_p(\Sigma)}^{2}
:= \int_{\Sigma}|\sigma|^{2}_{h^{p}}\, \omega_{\Sigma}.\end{aligned}$$ Let ${\boldsymbol{L}}^2_p(\Sigma)$ be the $\|\cdot\|_{{\boldsymbol{L}}^{2}_p(\Sigma)}$-completion of $C_{0}^{\infty}(\Sigma, L^{p})$.
By [@bkp Remark 3.2] the inclusion identifies the space $H^{0}_{(2)}(\Sigma, L^{p})$ of ${\boldsymbol{L}}^2$-holomorphic sections of $L^p$ over $\Sigma$ to the subspace of $H^0(\overline\Sigma,L^p)$ consisting of sections vanishing at the punctures, so it induces an isomorphism of vector spaces $$\label{e:bs2}
H^0_{(2)}(\Sigma,L^p)\cong
H^{0}(\overline{\Sigma},L^{p}\otimes
\mathscr{O}_{\overline{\Sigma}}(-D)),$$ where $\mathscr{O}_{\overline{\Sigma}}(-D)$ is the holomorphic line bundle on $\overline{\Sigma}$ defined by the divisor $- D$. By the Riemann-Roch theorem we have for all $p$ with $p\deg(L)-N>2g-2$, $$\begin{aligned}
\label{eq:2.4a}
d_p: = \dim H^{0}_{(2)}(\Sigma, L^{p})
= \dim H^{0}(\overline{\Sigma},
L^{p}\otimes \mathscr{O}_{\overline{\Sigma}}(-D))
= \deg (L)\, \, p +1- g -N, \end{aligned}$$ where $\deg(L)$ is the degree of $L$ over $\overline{\Sigma}$, and $g$ is the genus of $\overline{\Sigma}$.
The Bergman kernel function satisfies the following variational characterization, see e.g. [@CM11 Lemma 3.1], $$\begin{aligned}
\label{eq:2.5a}
B_{p}(z) = \sup_{0\neq \sigma\in H^{0}_{(2)}(\Sigma, L^{p})}
\frac{|\sigma(z)|^{2}_{h^{p}}}{\|\sigma\|_{{\boldsymbol{L}}^2_p(\Sigma)}^{2}}\,,
\quad \text{ for } z\in \Sigma.\end{aligned}$$ By the expansion of the Bergman kernel on a complete manifold [@mm Theorem 6.1.1] (cf. also [@bkp Theorem 2.1, Corollary 2.4]), there exist coefficients ${\boldsymbol{b}}_i\in C^\infty(\Sigma)$, $i\in{\mathbb{N}}$, such that for any $k,m\in {\mathbb{N}}$, any compact set $K\subset \Sigma$, we have in the $C^{m}$-topology on $K$, $$\begin{aligned}
\label{eq:2.6a}
B_p(x)=\sum^k_{i=0}{\boldsymbol{b}}_i(x)p^{1-i}+\mathcal{O}(p^{-k})\,,
\quad \text{ as } p\to \infty,\end{aligned}$$ with ${\boldsymbol{b}}_0=- {\boldsymbol{b}}_1= \frac{1}{2\pi}$ on each $V_{j}$.
Consider now for $p\geq2$ the space $H_{(2)}^p({\mathbb{D}}^*)$ of holomorphic ${{\boldsymbol{L}}}^2$-functions on ${\mathbb{D}}^*$ with respect to the weight $\|1\|^{2}(z)=\big|\!\log(|z|^2)\big|^p$ (corresponding to a metric on the trivial line bundle ${\mathbb{C}}$) and volume form $\omega_{{\mathbb{D}}^*}$ on ${\mathbb{D}}^*$. An orthonormal basis of $H_{(2)}^p({\mathbb{D}}^*)$ is given by (cf. [@bkp Theorem 3.1]), $$\label{eq:2.7a}
c^{(p)}_\ell z^\ell \text{ with } \ell\in{\mathbb{N}},\,\ell\geq 1 \text{ and }
c^{(p)}_\ell = \left(\dfrac{\ell^{p-1}}{2\pi (p-2)!}\right)^{1/2}
= \|z^{\ell}\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)}^{-1}\,,$$ and hence $$\label{eq:2.8a}
B_p^{{\mathbb{D}}^*}(z) = \big|\!\log(|z|^2)\big|^{p}
\sum_{\ell=1}^{\infty} (c^{(p)}_\ell)^2 |z|^{2\ell}\,,
\qquad \text{ for } z\in {\mathbb{D}}^{*}.$$ For any $m\in {\mathbb{N}}$, $0<b<1$ and $0<\gamma<\frac{1}{2}$ there exists by [@bkp Proposition 3.3] $\epsilon = \epsilon(b,\gamma)>0$ such that $$\begin{aligned}
\label{eq:2.12a}
\Big\|B_p^{{\mathbb{D}}^*}(z) - \frac{p-1}{2\pi}
\Big\|_{C^m(\{b e^{-p^\gamma}\leq |z|<1\}, \omega_{{\mathbb{D}}^*})}
= \mathcal{O}\big(e^{-\epsilon p^{1-2\gamma}}\big)
\:\: \text{ as } p\to +\infty\, .
\end{aligned}$$ Taking into account Theorem \[thm\_MainThm\] and we see that in order to prove Theorem \[thm\_apdx\] it suffices, after reducing to some $V_j$ and identifying the geometric data on ${\mathbb{D}}^*_{4r}$ and $\Sigma$ via , to show that for some (small) $c>0$ and (large) $A>0$, and for all $l\geq 0$ there exists $C=C(c,A,l)>0$ such that for all $p\geq 2$, $$\label{e:apdx2}
\sup_{0<|z|\leq cp^{-A}}
\bigg|\frac{B_p}{B_p^{{\mathbb{D}}^*}}(z) - 1 \bigg| \leq Cp^{-l}.$$ We now start to establish . In the whole paper we use the following conventions. $$\begin{aligned}
\label{eq:2.13a}\begin{split}
&\text{We fix $0<r<(4e)^{-1}$ as in \eqref{eq:1.6a},
and $0<\beta<1$ such that $r^{\beta}<2r$.} \\
&\text{We fix a (non-increasing) smooth cut-off function
$\chi:[0,1]\to {\mathbb{R}}$, }\\
&\text{satisfying $\chi(u)=1$ if $u\leq r^{\beta}$
and $\chi(u)=0$
if $u\geq 2r$.} \\
& \text{We set}\;\;\delta_p = \left\lfloor \frac{p-2}{2|\log r|}\right\rfloor
\text{ for } p\in {\mathbb{N}}, p\geq 2.
\end{split} \end{aligned}$$ The choice of $\delta_p$ will become clear in , and , for example. By there exists $\alpha>0$ such that $$\label{e:alphabeta}
\alpha p\leq \delta_p \quad \text{ and } \quad
\delta_p +1 \leq \frac{1}{2}\, p \quad \text{ for }
p \geq 2 + 2 |\log r|.$$ To establish we proceed along the following lines:
1. For $\ell\in\{1,\ldots,\delta_p\}$, we set $$\label{e:dfphi0}
\phi^{(p)}_{\ell,0} = c^{(p)}_\ell\chi(|z|)z^\ell.$$
2. Using the trivialization, that is, identifying $\phi^{(p)}_{\ell,0}$ with $\phi^{(p)}_{\ell,0}\mathfrak{e}_L^p$ when we work on $\Sigma$, we see the $\phi^{(p)}_{\ell,0}$ as (smooth) ${\boldsymbol{L}}^2$ sections of $L^p$ over $\Sigma$, that we correct into *holomorphic* ${\boldsymbol{L}}^2$ sections $\phi^{(p)}_{\ell}$ of $L^p$, by orthogonal ${\boldsymbol{L}}^2_p(\Sigma)$-projection.
3. Next we correct the family $(\phi^{(p)}_{\ell})_{1\leq\ell\leq\delta_p}$ into an *orthonormal* family $(\sigma^{(p)}_{\ell})_{1\leq\ell\leq\delta_p}$ by the Gram-Schmidt procedure, and we further complete $(\sigma^{(p)}_{\ell})_{1\leq\ell\leq\delta_p}$ into an orthonormal basis $(\sigma^{(p)}_{\ell})_{1\leq\ell\leq d_p}$ of $H^{0}_{(2)}(\Sigma, L^{p})$. In particular, for any $1\leq j \leq \delta_{p}$, $$\label{eq:2.14a}
{\rm Span}\big\{\phi^{(p)}_{1,0},\cdots, \phi^{(p)}_{j,0}\big\}
= {\rm Span}\big\{\phi^{(p)}_{1},\cdots, \phi^{(p)}_{j}\big\}
= {\rm Span}\big\{\sigma^{(p)}_{1},\cdots, \sigma^{(p)}_{j}\big\}.$$
4. Finally, we carefully compare $B^{{\mathbb{D}}^{*}}_{p}$ with $B_{p}$ using the three steps of the above construction to get estimate ; of particular importance are the following intermediate estimates which will be deduced from [@bkp §6]:
\[lem\_phi0phisigma\] With the notations above, for all $m\in {\mathbb{N}}$, there exists $C=C(m)>0$ such that for all $p\in {\mathbb{N}}^{*}$, $p\geq 2$, and all $j,\ell\in\{1,\ldots,\delta_p\}$, $$\label{e:phi0}
\begin{split}
1 - Cp^{-m}
\leq \big\|\phi^{(p)}_{\ell,0}\big\|_{{\boldsymbol{L}}^2_p(\Sigma)}^2&=
\big(c^{(p)}_\ell\big)^2 \int_{{\mathbb{D}}^*_{2r}}
\chi^{2}(|z|) |z|^{2\ell}\big|\!\log(|z|^2)\big|^{p}\, \omega_ {{\mathbb{D}}^*}\\
&\leq \big(c^{(p)}_\ell\big)^2 \int_{{\mathbb{D}}^*_{2r}} \chi(|z|)|z|^{2\ell}\big|
\!\log(|z|^2)\big|^{p}\, \omega_ {{\mathbb{D}}^*}
\leq 1\,,
\end{split}$$ and moreover, $$\begin{aligned}
\label{e:phi0phi} \begin{split}
&\big\|\sigma^{(p)}_{\ell}- \phi^{(p)}_{\ell,0}\big\|_{{\boldsymbol{L}}^2_p(\Sigma)}
\leq C p^{-m},\\
& \big|\big\langle \phi^{(p)}_{j}, \sigma^{(p)}_{\ell}
\big\rangle_{{\boldsymbol{L}}^{2}_p(\Sigma)} - \delta_{j\ell}\big|
\leq C p^{-m}.
\end{split} \end{aligned}$$
The proof of Lemma \[lem\_phi0phisigma\] is postponed to Section \[eq:s2.2\].
Notice that we take care of stating estimates uniform in $j,\ell\in \{1,\ldots,\delta_p\}$. Observe moreover that , are *integral* estimates, whereas we want to establish *pointwise* estimates in the end, hence we need an extra effort to convert these (among others) into . Let us see now how to build on to get the desired .
First, by , , , and the construction of $\phi_{\ell,0}^{(p)}$ and $\sigma^{(p)}_\ell$ we have for $z \in {\mathbb{D}}^*_r$, $$\begin{aligned}
\label{eq:2.16a}\begin{split}
B_p^{{\mathbb{D}}^*}(z) &
\, = \sum_{\ell=1}^{\delta_p} \big| \phi^{(p)}_{\ell,0}
\big|_{h^p,z}^2
+ \big|\!\log(|z|^2)\big|^{p}\sum_{\ell=\delta_p+1}^{\infty}
(c^{(p)}_\ell)^2 |z|^{2\ell} \\
\, &= B_p(z) - \sum_{\ell=\delta_p+1}^{d_p}
\big| \sigma^{(p)}_{\ell}\big|_{h^p,z}^2
+2{\rm Re}\Big[\sum_{\ell=1}^{\delta_p}
\big\langle \sigma^{(p)}_{\ell},
\phi^{(p)}_{\ell,0} -\sigma^{(p)}_{\ell}\big\rangle_{h^p,z}\Big] \\
& \quad +\sum_{\ell=1}^{\delta_p} \big| \phi^{(p)}_{\ell,0}
-\sigma^{(p)}_{\ell}\big|_{h^p,z}^2
+\big|\!\log(|z|^2)\big|^{p}\sum_{\ell=\delta_p+1}^{\infty}
(c^{(p)}_\ell)^2 |z|^{2\ell}.
\end{split} \end{aligned}$$ We deal with the summands of the last three terms separately; we start by claiming that up to a judicious choice of $c>0$ and $A>0$ we have for $0<|z|\leq cp^{-A}$: $$\label{e:btail}
\big|\!\log(|z|^2)\big|^{p}\sum_{\ell=\delta_p+1}^{\infty}
(c^{(p)}_\ell)^2 |z|^{2\ell}
= \mathcal{O}(p^{-\infty})\cdot B_p^{{\mathbb{D}}^*}(z)\,,
\qquad\text{as }p\to \infty,$$ that is, $$\sup_{0<|z|\leq cp^{-A}}
\big[ B_p^{{\mathbb{D}}^*}(z)^{-1}|\!\log(|z|^2)|^{p}
\sum_{\ell=\delta_p+1}^{\infty} (c^{(p)}_\ell)^2 |z|^{2\ell}\big]
= \mathcal{O}(p^{-\infty}).$$ Indeed, we have $\frac{\ell+\delta_p}{\ell} \leq \delta_p+1$ for all $\ell\geq 1$, so by we have for $z\in {\mathbb{D}}^*$, $$\label{eq:2.18a}
\begin{split}
\big|\!\log(|z|^2)\big|^{p}\sum_{\ell=\delta_p+1}^{\infty}
(c^{(p)}_\ell)^2 |z|^{2\ell}
&= \big|\!\log(|z|^2)\big|^{p}\frac{|z|^{2\delta_p}}{2\pi (p-2)!}
\sum_{\ell=1}^{\infty} \Big(\frac{\ell+\delta_p}{\ell}\Big)^{p-1}
\ell^{p-1}|z|^{2\ell} \\
&\leq (\delta_p+1)^{p-1}\frac{|z|^{2\delta_p}}{2\pi (p-2)!}
\big|\!\log(|z|^2)\big|^{p}\sum_{\ell=1}^{\infty}
\ell^{p-1}|z|^{2\ell} \\
&=(\delta_p+1\big)^{p-1}|z|^{2\delta_p}B_p^{{\mathbb{D}}^*}(z).
\end{split}$$ From follows $$\label{e:implies}
( \delta_p+ 1) |z|^{2\delta_p/(p-1)}
\leq \frac{1}{2} p |z|^{2\alpha} \leq \frac{1}{2}\,,
\quad \text{ for all } |z|\leq p^{-1/(2\alpha)}.$$ From and we get with $c=r $ and $A = \frac{1}{2\alpha}\,\cdot$
In a similar vein, we now show the following.
\[eq:t2.2\] For $c=r$ and $A = \frac{1}{2\alpha}$, with $\alpha$ satisfying , we have uniformly in $z\in {\mathbb{D}}^*_{cp^{-A}}$, $$\label{e:sumsigma}
\sum_{\ell=\delta_p+1}^{d_p} \big| \sigma^{(p)}_{\ell}\big|_{h^p,z}^2
= \mathcal{O}(p^{-\infty})\cdot B_p^{{\mathbb{D}}^*}(z).$$ We have uniformly in $z\in {\mathbb{D}}^*_{cp^{-A}}$, and $\ell\in \{1,\ldots,\delta_p\}$, $$\label{e:sigmaminusphi0}
\big|\sigma^{(p)}_\ell - \phi^{(p)}_{\ell,0}\big|_{h^p,z}
= \mathcal{O}(p^{-\infty})\cdot B_p^{{\mathbb{D}}^*}(z)^{1/2} .$$
Let $p\geq 2$, $\ell\in \{1,\cdots,d_p\}$. By [@bkp Remark 3.2] and we know that $\sigma^{(p)}_{\ell}$ is a holomorphic section of $L^p$ over $\overline{\Sigma}$ vanishing at $D$. We use the trivialization to set $$\begin{aligned}
\label{eq:2.23a}
\sigma^{(p)}_{\ell} = \Big(\sum_{j=1}^{\infty}a^{(p)}_{j \ell} z^j\Big)
\mathfrak{e}_L^p =: s^{(p)}_{\ell}\mathfrak{e}_L^p
\qquad \text{ on } {\mathbb{D}}^*_{4r}.
\end{aligned}$$ We have for $j\geq 1$ by , , , and Cauchy inequalities, $$\label{eq:2.27a}
\begin{split}
|a^{(p)}_{j \ell}| &\leq (2r)^{-j} \sup_{|z|=2r}\big|s^{(p)}_{\ell}(z)\big| \\
&= (2r)^{-j}\big|\!\log(|2r|^2)\big|^{-p/2}
\sup_{|z|=2r}\big|\sigma^{(p)}_{\ell}(z)\big|_{h^p} \\
&\leq (2r)^{-j}\big|\!\log(|2r|^2)\big|^{-p/2}
\sup_{|z|=2r}B_p(z)^{1/2} \\
&\leq Cp^{1/2}(2r)^{-j}\big|\!\log(|2r|^2)\big|^{-p/2}.
\end{split}$$ Thus by and we have for $z\in {\mathbb{D}}^*_{r}$, $$\label{eq:2.28a}
\begin{split}
\Big| \sum_{j=\delta_p+1}^{\infty} a^{(p)}_{j \ell}z^j\Big|_{h^p}
&\leq C \big|\!\log(|z|^2)\big|^{p/2} \sum_{j=\delta_p+1}^{\infty} p^{1/2}
\Big|\!\log(|2r|^2)\Big|^{-p/2} \left(\frac{|z|}{2r}\right)^j \\
&= Cp^{1/2}\left(\frac{|\!\log(|z|^2)|}{|\!\log(|2r|^2)|}\right)^{p/2}
\left(1-\frac{|z|}{2r}\right)^{-1} \left(\frac{|z|}{2r}\right)^{\delta_p +1}.
\end{split}$$ By and we have $$\begin{aligned}
\label{eq:2.29a}
\big|\!\log(|z|^2)\big|^{p/2}|z|
= |z|_{h^p_{{\mathbb{D}}^*}}\leq \|z\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)}B_p^{{\mathbb{D}}^*}(z)^{1/2}
= (2\pi(p-2)!)^{1/2}B_p^{{\mathbb{D}}^*}(z)^{1/2}.
\end{aligned}$$ We deduce from and that there exists $C>0$ such that the following estimate holds uniformly in $\ell\in \{1,\ldots,d_p\}$, $|z|\in {\mathbb{D}}^*_{r}$, $$\label{eq:2.30a}
\Big|\sum_{j=\delta_p+1}^{\infty} a^{(p)}_{j \ell} z^j\Big|_{h^p}
\leq Cp^{-1/2} \left( \left(\frac{|z|}{2r}\right)^{2\delta_p/p}
\frac{(p!)^{1/p}}{|\!\log(|2r|^2)|}\right)^{p/2} B_p^{{\mathbb{D}}^*}(z)^{1/2}.$$ By we have for $A=\frac{1}{2\alpha}$, $c_0= r e^{1/(2\alpha)}|\!\log(|2r|^2)|^{1/(2\alpha)}> r$, and $p\gg 1$, $$\begin{aligned}
\label{eq:2.31a}
\left(\frac{|z|}{2r}\right)^{2\delta_p/p}
\frac{1}{|\!\log(|2r|^2)|} \leq
\left(\frac{|z|}{2r}\right)^{2\alpha} \frac{1}{|\!\log(|2r|^2)|}
\leq 2^{- 2\alpha} \frac{e}{p}\,,\quad \text{ for } |z|\leq c_0 p^{-A}.
\end{aligned}$$ Recall that the Stirling formula states $$\begin{aligned}
\label{eq:2.34a}
\frac{p^p}{p!}= (2\pi p)^{-1/2} e^p \Big(1+\mathcal{O}(p^{-1})\Big)
\quad \text{ as } p\to +\infty.
\end{aligned}$$ We infer from , and , that there exists $C>0$ such that the following estimate holds uniformly in $|z|\leq r \, p^{-A}$ and $\ell\in \{1,\ldots,d_p\}$, $$\begin{aligned}
\label{eq:2.32a}
\Big|\sum_{j=\delta_p+1}^{\infty} a^{(p)}_{j \ell} z^j\Big|_{h^p}
\leq C\, 2^{-p\alpha}\, B_p^{{\mathbb{D}}^*}(z)^{1/2}.
\end{aligned}$$ Note that $ \phi_{j}^{(p)} - \phi_{j,0}^{(p)}$ is orthogonal to $H^0_{(2)}(\Sigma, L^p)$. By , , , and since $\sigma^{(p)}_{\ell}$ are holomorphic, we have for $j\in\{1,\cdots,\delta_p\}$, $\ell\in \{1,\cdots,d_p\}$, $$\begin{aligned}
\label{e:aj}
\big\langle \sigma^{(p)}_{\ell},\phi_{j}^{(p)}
\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)}
= \big\langle \sigma^{(p)}_{\ell},\phi_{j,0}^{(p)}
\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)}
= c^{(p)}_j a^{(p)}_{j \ell}
\int_{{\mathbb{D}}^*_{2r}} \chi(|z|)\big|z^j\big|^2 |\log(|z|^2)|^p \,
\omega_{{\mathbb{D}}^*}.
\end{aligned}$$ By we have $$\begin{aligned}
\label{eq:2.25a}
\big\langle \sigma^{(p)}_{\ell},\phi_{j}^{(p)}
\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)} =0 \:
\text{ for } j\in\{1,\cdots,\delta_p\}, j<\ell.
\end{aligned}$$ From and we get $$\begin{aligned}
\label{eq:2.26a}
a^{(p)}_{j \ell} =0\qquad \text{ for }\,
j\in\{1,\cdots,\delta_p\},\, \ell\in \{\delta_p+1,\cdots,d_p\} .
\end{aligned}$$ By , and , we get .
Fixing $\ell\in \{1,\ldots,\delta_p\}$, we have on ${\mathbb{D}}^*_{r}$ by , , , $$\label{e:sigmaphi0}
\big(\sigma^{(p)}_\ell - \phi^{(p)}_{\ell,0}\mathfrak{e}_L^p\big)(z)
= \Big(\big( a^{(p)}_{\ell \ell} - c^{(p)}_{\ell}\big)z^{\ell}
+ \sum_{\substack{j=1\\j\neq \ell}}^{\infty}
a^{(p)}_{j \ell} z^j \Big)\mathfrak{e}_L^p.$$ From Lemma \[lem\_phi0phisigma\] and we have uniformly for $j, \ell\in \{1,\ldots,\delta_p\}$, $$\label{eq:2.36a}
\begin{split}
a^{(p)}_{j \ell} &= c^{(p)}_{j} \Big((c^{(p)}_{j})^{2}
\int_{{\mathbb{D}}^*_{2r}} \chi(|z|)\big|z^j\big|^2 |\log(|z|^2)|^p \,
\omega_{{\mathbb{D}}^*}\Big)^{-1}
\big\langle \sigma^{(p)}_\ell,
\phi^{(p)}_{j}\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)}\\
&= (\delta_{j\ell} +\mathcal{O}(p^{-\infty})) c^{(p)}_{j}.
\end{split}$$ Thus from , we have on ${\mathbb{D}}^*_{r}$ uniformly in $\ell\in \{1,\ldots,\delta_p\}$, $$\begin{gathered}
\label{e:headsigma}
\bigg|\Big( \big( a^{(p)}_{\ell \ell} - c^{(p)}_{\ell}\big)z^{\ell} +
\sum_{\substack{j=1\\j\neq \ell}}^{\delta_p}
a^{(p)}_{j \ell} z^j\Big)
\mathfrak{e}_L^p\bigg|_{h^p}^2
= \big|\!\log(|z|^2)\big|^p\bigg|
\sum_{j=1}^{\delta_p}
\Big( a^{(p)}_{j \ell}- \delta_{j\ell} c^{(p)}_{j}\Big)
z^j \bigg|^2 \\
\leq \mathcal{O}(p^{-\infty})\big|\!\log(|z|^2)\big|^p
\delta_p \sum_{j=1}^{\delta_p}
(c^{(p)}_{j})^2|z|^{2j}
\leq \delta_p\mathcal{O}(p^{-\infty}) B_p^{{\mathbb{D}}^*}(z),
\end{gathered}$$ Now $\delta_p$ can be absorbed in the factor $\mathcal{O}(p^{-\infty})$, since $\delta_p=\mathcal{O}(p)$ by . Combining with we conclude that holds uniformly in $\ell\in \{1,\ldots, \delta_p\}$.
Since $\delta_p=\mathcal{O}(p)$ and $|\sigma_\ell^{(p)}|_{h^p,z}\leq B_p(z)^{1/2}$, also yields $$\label{e:dblprod}
\bigg|\sum_{\ell=1}^{\delta_p}
\big\langle \sigma^{(p)}_{\ell},
\phi^{(p)}_{\ell,0} -\sigma^{(p)}_{\ell}\big\rangle_{h^p,z}\bigg|
= \mathcal{O}(p^{-\infty})\cdot B_p^{{\mathbb{D}}^*}(z)^{1/2}B_p(z)^{1/2}
\quad\text{on ${\mathbb{D}}^*_{cp^{-A}}$}.$$ This way, putting together , , , and , we obtain $$\label{eq2.41a}
\big(1+\mathcal{O}(p^{-\infty})\big)\cdot B_p^{{\mathbb{D}}^*}(z)
= B_p(z)
+ \mathcal{O}(p^{-\infty})\cdot B_p^{{\mathbb{D}}^*}(z)^{1/2}B_p(z)^{1/2}
\quad\text{on ${\mathbb{D}}^*_{cp^{-A}}$},$$ and this implies . The proof of Theorem \[thm\_apdx\] is completed.
Proof of Lemma \[lem\_phi0phisigma\] {#eq:s2.2}
------------------------------------
At first, as $0\leq \chi\leq1$ and ${\rm supp}(\chi) \subset {\mathbb{D}}^*_{2r}$, we get from , and , $$\begin{gathered}
\label{eq:3.1a}
\|\phi_{\ell,0}^{(p)}\|_{{\boldsymbol{L}}^2_p(\Sigma)}^{2}
=\|\phi_{\ell,0}^{(p)}\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^{*})}^{2}
\leq (c^{(p)}_{\ell})^2
\int_{{\mathbb{D}}^*}\chi(|z|) \big|\!\log(|z|^2)\big|^p |z|^{2\ell}\,\omega_{{\mathbb{D}}^*}\\
\leq (c^{(p)}_{\ell})^2
\int_{{\mathbb{D}}^*} \big|\!\log(|z|^2)\big|^p |z|^{2\ell}\,\omega_{{\mathbb{D}}^*}
=\|c_\ell^{(p)}z^\ell\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)}^{2}=1.\end{gathered}$$ This implies the inequalities of the right-hand side of .
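For the reader's convenience, we recall the computation behind the last equality above (that is, behind the value of $c^{(p)}_{\ell}$): passing to polar coordinates $z=te^{i\theta}$ and substituting $u=-2\ell\log t$, we get $$\int_{{\mathbb{D}}^*} |z|^{2\ell}\big|\!\log(|z|^2)\big|^{p}\, \omega_{{\mathbb{D}}^*}
=2\pi\int_{0}^{1} t^{2\ell}\big|\!\log(t^2)\big|^{p-2}\,\frac{2\,dt}{t}
=\frac{2\pi}{\ell^{p-1}}\int_{0}^{\infty} u^{p-2}e^{-u}\,du
=\frac{2\pi (p-2)!}{\ell^{p-1}}
=\big(c^{(p)}_{\ell}\big)^{-2}.$$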
We establish now the lower bound of . For $\ell\in \{1,\ldots,\delta_p\}$ we have by , , and , $$\label{e:phi01}
\begin{aligned}
1- \|\phi^{(p)}_{\ell,0}\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)}^2
&= \big(c^{(p)}_{\ell}\big)^2
\int_{{\mathbb{D}}^*} \big|\!\log(|z|^2)\big|^p
\big\{1-\chi^{2}(|z|)\big\} |z|^{2\ell}\,\omega_{{\mathbb{D}}^*} \\
&= \dfrac{\ell^{p-1}}{ (p-2)!} \int_{r^\beta}^1
\big|\!\log(t^2)\big|^{p} t^{2\ell} (1-\chi^{2}(t))
\frac{2 t dt}{t^2 \big|\!\log(t^2)\big|^{2}}\\
&\overset{u=-2\ell \log t}{=}\dfrac{1}{ (p-2)!} \int_0^{2\ell \beta |\log r|}
u^{p-2} e^{-u} \Big(1-\chi^{2}(e^{-u/(2\ell)})\Big) du\\
&\leq \dfrac{1}{ (p-2)!} \int_0^{2\delta_p \beta |\log r|}
u^{p-2} e^{-u} du.
\end{aligned}$$ The function $u \mapsto \log u - u$ is strictly increasing on $(0,1]$ and equals $-1$ at $u=1$, hence $$\label{eq:3.7a}
\log \beta -\beta <-1.$$ As $ u^{p-2} e^{-u}$ is strictly increasing on $[0, p-2]$, and $2\delta_p |\log r| \leq p-2$ (by ), so and imply $$\label{eq:3.6a}
\begin{split}
\dfrac{1}{ (p-2)!} \int_0^{2\delta_p \beta |\log r|}
&u^{p-2} e^{-u} du
\leq \dfrac{1}{ (p-2)!} \int_0^{(p-2) \beta }
u^{p-2} e^{-u} du\\
&\leq \frac{(p-2)^{p-2}}{(p-2)!}
e^{(p-2)(\log \beta-\beta)} (p-2)\beta\\
&= \Big(\dfrac{p-2}{2\pi} \Big)^{1/2} \beta
\Big(1+ \mathcal{O}(p^{-1})\Big) e^{(p-2)( \log \beta -\beta +1)}\\
&= \mathcal{O}(p^{-\infty}).
\end{split}$$ Combining and we obtain that the first inequality of holds uniformly in $\ell\in \{1,\ldots,\delta_p\}$.
We move on to and we first estimate $\|\phi^{(p)}_{\ell} - \phi^{(p)}_{\ell,0}\|_{{\boldsymbol{L}}^2({\mathbb{D}}^*_{3r})}$. Using the identification as in [@bkp (6.1)] we denote for $x,y\in {\mathbb{D}}_{4r}^*$, $$\label{eq:3.10a}\begin{split}
& {B}_p(x,y)= \big|\!\log(|y|^2)\big|^{p} \beta^{\Sigma}_p(x,y),\\
& {B}^{{\mathbb{D}}^*}_p(x,y)
= \big|\!\log(|y|^2)\big|^{p} \beta^{{\mathbb{D}}^*}_p(x,y)
\text{ with } \beta^{{\mathbb{D}}^*}_p(x,y) = \frac{1}{2\pi(p-2)!}
\sum_{\ell=1}^{\infty} \ell^{p-1} x^{\ell}\overline{y}^{\ell}.
\end{split}$$ For $\ell\in \{1,\ldots,\delta_p\}$ set $$\label{eq:3.11a}\begin{split}
& I^{(p)}_{1,\ell}(x)= \int_{y\in{\mathbb{D}}^*_{2r}} \big|\!\log(|y|^2)\big|^p
\big\{\beta_p^{\Sigma}(x,y) - \beta_p^{{\mathbb{D}}^*}(x,y)\big\}
\chi(|y|) y^\ell\, \omega_{{\mathbb{D}}^*}(y) , \\
& I^{(p)}_{2,\ell}(x)= \int_{y\in{\mathbb{D}}^*} \big|\!\log(|y|^2)\big|^p
\beta_p^{{\mathbb{D}}^*}(x,y)\big\{\chi(|y|)-1\big\} y^\ell\, \omega_{{\mathbb{D}}^*}(y),\\
& I^{(p)}_{3,\ell}(x)= \int_{y\in{\mathbb{D}}^*} \big|\!\log(|y|^2)\big|^p
\beta_p^{{\mathbb{D}}^*}(x,y) y^\ell\, \omega_{{\mathbb{D}}^*}(y) = x^\ell,
\end{split}$$ where the last equality is a consequence of the reproducing property of the Bergman kernel ${B}^{{\mathbb{D}}^*}_p({\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}},{\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}})$. By the construction of $\phi^{(p)}_{\ell}$, , and the reproducing property of $B_p({\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}},{\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}})$ we have for $x\in {\mathbb{D}}^*_{4r}$, $$\label{e:phi0dcmp}
\begin{aligned}
\phi^{(p)}_{\ell}(x) &=(B_p \phi^{(p)}_{\ell,0})(x)
= \int_{y\in \Sigma} B_p(x,y) \phi^{(p)}_{\ell,0}(y)
\,\omega_{\Sigma}(y) \\
&= c_\ell^{(p)} \int_{y\in{\mathbb{D}}^*_{2r}} \big|\!\log(|y|^2)\big|^p
\beta_p^{\Sigma}(x,y)\chi(|y|) y^\ell\, \omega_{{\mathbb{D}}^*}(y) \\
& = c_\ell^{(p)} \Big( I^{(p)}_{1,\ell}(x)+ I^{(p)}_{2,\ell}(x)
+ I^{(p)}_{3,\ell}(x)\Big).
\end{aligned}$$ Now [@bkp Theorem 1.1 or (6.23)] and yield for fixed $\nu>0$ and $m>0$ and for any $x\in {\mathbb{D}}^*_{4r}$, $p\geq 2$, $$\begin{aligned}
\label{eq:3.14a}\begin{split}
\Big|I^{(p)}_{1,\ell} (x)\Big|
&\leq C(m,\nu) p^{-m}\big|\!\log(|x|^2)\big|^{-\nu-p/2}
\int_{y\in {\mathbb{D}}^*_{2r}} \big|\!\log(|y|^2)\big|^{-\nu+p/2}
\chi(|y|)|y|^{\ell}\, \omega_{{\mathbb{D}}^*}(y) \\
&\leq C(m,\nu) p^{-m}\big|\!\log(|x|^2)\big|^{-\nu-p/2} \\
& \quad
\cdot\Big(\int_{{\mathbb{D}}^*}\big|\!\log(|y|^2)\big|^{p}|y|^{2\ell}\,
\omega_{{\mathbb{D}}^*}(y)\Big)^{1/2}
\Big(\int_{{\mathbb{D}}^*}\big|\!\log(|y|^2)\big|^{-2\nu}\chi^{2}(|y|)\,
\omega_{{\mathbb{D}}^*}(y)\Big)^{1/2} \\
& = C'(m,\nu) p^{-m}\big|\!\log(|x|^2)\big|^{-\nu-p/2}
(c^{(p)}_\ell)^{-1}.
\end{split} \end{aligned}$$ Keeping $\nu$ fixed and varying $m$ in we obtain the following uniform estimate in $\ell\in \{1,\ldots,\delta_p\}$, $$\label{e:phi0phi2}
\Big\| c_\ell^{(p)}I^{(p)}_{1,\ell} \Big\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*_{3r})}
= \mathcal{O}(p^{-\infty}).$$ By circle symmetry first and , , and we obtain, $$\begin{aligned}
\label{eq:3.12a}\begin{split}
I^{(p)}_{2,\ell}(x) &= (c_{\ell}^{(p)})^{2}
\bigg[\int_{y\in{\mathbb{D}}^*} \big|\!\log(|y|^2)\big|^p
\big\{\chi(|y|)-1\big\}|y|^{2\ell}\,\omega_{{\mathbb{D}}^*}(y)\bigg]x^\ell
= \mathcal{O}(p^{-\infty})\cdot x^\ell,
\end{split} \end{aligned}$$ uniformly in $\ell\in \{1,\ldots,\delta_p\}$. Since $\|c^{(p)}_\ell x^\ell\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*_{3r})}
\leq \|c^{(p)}_\ell x^\ell\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)}=1$, this tells us already that $$\label{e:phi0phi1}
\Big\| c_\ell^{(p)}I^{(p)}_{2,\ell} \Big\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*_{3r})}
= \mathcal{O}(p^{-\infty}).$$
Since $0\leq 1-\chi\leq 1$ and $1-\chi(t)=0$ for $t\leq r^\beta$, we get by , and , as in , that for $\ell\in \{1,\ldots,\delta_p\}$ the following holds, $$\begin{gathered}
\label{e:phi0phi3}
\Big\|c_\ell^{(p)} I^{(p)}_{3,\ell} (x)
- \phi^{(p)}_{\ell,0}(x)\Big\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*_{3r})}^2 \leq
\Big\|c_\ell^{(p)}\big(1-\chi(|x|)\big)x^\ell\Big\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)}^2\\
= \dfrac{\ell^{p-1}}{ (p-2)!} \int_{r^\beta}^1
\big|\!\log(t^2)\big|^{p} t^{2\ell} (1-\chi(t))^2
\frac{2 t dt}{t^2 \big|\!\log(t^2)\big|^{2}}\\
\overset{u=-2\ell \log t}{=}
\dfrac{1}{ (p-2)!} \int_0^{2\ell \beta |\log r|}
u^{p-2} e^{-u} \Big(1-\chi(e^{-u/(2\ell)})\Big)^2 du\\
\leq \dfrac{1}{ (p-2)!} \int_0^{2\delta_p \beta |\log r|}
u^{p-2} e^{-u} du = \mathcal{O}(p^{-\infty}).
\end{gathered}$$ By , , and we get the following estimate uniformly in $\ell\in \{1,\ldots,\delta_p\}$, $$\label{e:phi0phi4}
\|\phi^{(p)}_{\ell,0} - \phi^{(p)}_{\ell}\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*_{3r})}
= \mathcal{O}(p^{-\infty}).$$ A weak form of [@bkp Corollary 6.1] tells us that for any $k\in {\mathbb{N}}$, $\varepsilon >0$, there exists $C>0$ such that $$\begin{aligned}
\label{eq:3.26a}
|B_p(x,y)|\leq C p^{-k} \quad \text{for }d(x,y)>\varepsilon,\:p\geq 2.\end{aligned}$$ By , and , $$\label{eq:3.27a}
\big\|\phi^{(p)}_{\ell}\big\|^2_{{\boldsymbol{L}}^2_p(\Sigma\smallsetminus{\mathbb{D}}^*_{3r})}
\leq C p^{-2k} \int_{\Sigma\setminus {\mathbb{D}}^{*}_{3r}}\omega_{\Sigma}
\int_{{\mathbb{D}}^*_{2r}}| \phi_{\ell,0}^{(p)} (y)|_{h^p}^2 \omega_{\Sigma}(y)
\leq C p^{-2k} \int_{\Sigma}\omega_{\Sigma}.$$ From and we have uniformly in $\ell\in\{1,\ldots,\delta_p\}$, $$\label{eq:3.28a}
\big\|\phi^{(p)}_{\ell}-\phi^{(p)}_{\ell,0}\big\|_{{\boldsymbol{L}}^2_p(\Sigma)}^2
= \|\phi^{(p)}_{\ell}-\phi^{(p)}_{\ell,0}\|^2_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*_{3r})}
+ \big\|\phi^{(p)}_{\ell}
\big\|^2_{{\boldsymbol{L}}^2_p(\Sigma\smallsetminus{\mathbb{D}}^*_{3r})}
= \mathcal{O}(p^{-\infty}).$$ By and , as $\phi^{(p)}_{j}- \phi^{(p)}_{j,0}$ is orthogonal to $H^{0}_{(2)}(\Sigma, L^{p})$, we have uniformly in $j,\ell\in \{1,\ldots,\delta_p\}$, $$\begin{aligned}
\label{eq:3.29a}\begin{split}
\big\langle \phi^{(p)}_{j}, \phi^{(p)}_{\ell}\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)}
&=\big\langle \phi^{(p)}_{j,0}, \phi^{(p)}_{\ell}
\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)} \\
&= \big\langle \phi^{(p)}_{j,0}, \phi^{(p)}_{\ell,0}
\big\rangle_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*_{2r})}
+ \big\langle \phi^{(p)}_{j,0}, \phi^{(p)}_{\ell}- \phi^{(p)}_{\ell,0}
\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)} \\
&= \delta_{j\ell} + \mathcal{O}(p^{-\infty}).
\end{split} \end{aligned}$$ Note that the circle symmetry and imply that $\big\langle \phi^{(p)}_{j,0}, \phi^{(p)}_{\ell,0}
\big\rangle_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*_{2r})}=0$ if $j\neq \ell$. We now observe that the Gram-Schmidt orthonormalization $(\sigma_{\ell}^{(p)})_{1\leq \ell \leq \delta_p}$ of the “almost-orthonormal” family $(\phi_{\ell}^{(p)})_{1\leq \ell \leq \delta_p}$ is the normalization of $$\label{eq:3.32a}
{\sigma'}_{\ell}^{(p)}= \phi^{(p)}_{\ell}
- \sum_{k=1}^{\ell-1} \frac{\big\langle \phi^{(p)}_{\ell},
\phi^{(p)}_{k}\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)}}{
\big\langle \phi^{(p)}_{k},
\phi^{(p)}_{k}\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)}} \phi^{(p)}_{k}.$$ Now , , and yield . This completes the proof of Lemma \[lem\_phi0phisigma\].
$C^{k}$-estimate of the quotient of Bergman kernels {#eq:s3}
===================================================
The proof of Theorem \[thm:diffquot\] follows the same strategy as in Section \[eq:s2\] (use of the orthonormal basis $(\sigma_j^{(p)})_{1\leq j\leq d_p}$), but with some play on the parameters (in particular, the truncation floor $\delta_p$ of Step 1. in the outline of the proof of Theorem \[thm\_apdx\]). Some further precision on this basis is also needed: we will see that, with suitable choices along the construction, the head terms $\sigma_{\ell}^{(p)}$, $1\leq \ell\leq \delta_p$, are much closer to their counterparts $c_{\ell}^{(p)}z^{\ell}$ on ${\mathbb{D}}^*$ than sketched above.
This section is organized as follows. In Section \[eq:s3.1\], we establish a refinement of the integral estimate Lemma \[lem\_phi0phisigma\] which is again deduced from [@bkp]. In Section \[eq:s3.2\], we establish Theorem \[thm:diffquot\] by using Lemma \[prop:rfnd\_estmt\].
A refined integral estimate {#eq:s3.1}
---------------------------
To establish Theorem \[thm:diffquot\], we follow Steps 1. to 4. in the outline of the proof of Theorem \[thm\_apdx\] by modifying $\delta_{p}$, thus refining Lemma \[lem\_phi0phisigma\] to Lemma \[prop:rfnd\_estmt\] below.
Let $\kappa>0$ be fixed. We start by choosing $c(\kappa) \in (0,e^{-1})$ so that $$\label{e:ckappa}
\log (c(\kappa)) \leq -1-2\kappa.$$ Then we replace $\delta_{p}$ in by $$\label{eq:3.35a}
\delta_p'=\delta'_p(\kappa)
= \Big\lfloor\frac{(p-2)c(\kappa)}{2|\!\log r|}\Big\rfloor-2.$$
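In particular, since $\lfloor x\rfloor\leq x$, this definition immediately gives the bound used several times below (we spell it out here for convenience):
$$2(\delta'_p+2)\,|\log r| \;\leq\; 2\,\frac{(p-2)c(\kappa)}{2|\log r|}\,|\log r| \;=\; (p-2)\,c(\kappa).$$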
\[prop:rfnd\_estmt\] There exists $C = C(\kappa)>0$ such that for all $p\gg 1$ and $\ell \in \{1,\ldots,\delta'_p\}$, $$\label{e:rfnd_estmt}
\left\|\sigma_{\ell}^{(p)}
- c_{\ell}^{(p)} \chi(|z|)z^{\ell}\mathfrak{e}^p_L
\right\|_{{\boldsymbol{L}}^2_p(\Sigma)}
\leq Cp\, e^{-\kappa p}.$$ Moreover, $(\sigma_{\ell}^{(p)})_{1\leq \ell\leq d_p}$ is in echelon form up to rank $\delta'_p$, in the sense that if $\ell=1,\ldots,\delta'_p$, then $\sigma_{\ell}^{(p)}$ admits an expansion $$\label{e:echelon1}
\sigma_{\ell}^{(p)}
= \Big(\sum_{q=\ell}^{\infty}
a^{(p)}_{q\ell} z^q
\Big)\mathfrak{e}^p_L \quad \text{ on } {\mathbb{D}}_{4r}^{*},$$ and if $\ell= \delta_p'+1,\ldots,d_p$, then $\sigma_{\ell}^{(p)}$ admits an expansion $$\label{e:echelon2}
\sigma_{\ell}^{(p)}
= \Big(\sum_{q=\delta'_p+1}^{\infty}
a^{(p)}_{q\ell} z^q
\Big)\mathfrak{e}^p_L \quad \text{ on } {\mathbb{D}}_{4r}^{*}.$$
As will be seen, estimate is directly related to the play on $\delta_p'$, whereas the echelon property as such is not, and , are a direct consequence of and . Moreover, no estimate is given on the $\sigma_{\ell}^{(p)}$ for $\ell\geq \delta'_p+1$ in the above statement; as in the proof of Theorem \[thm\_apdx\], it turns out that we content ourselves with rather rough estimates on these tail sections.
\[Proof of Lemma \[prop:rfnd\_estmt\]\] Let $\overline\partial^{L^{p}*}$ be the formal adjoint of $\overline\partial^{L^{p}}$ on $(C^{\infty}_{0}(\Sigma, L^{p}),
\|\quad\|_{{\boldsymbol{L}}^2_p(\Sigma)})$. Then $\Box_{p}=\overline\partial^{L^{p}*}\overline\partial^{L^{p}}
: C^{\infty}_{0}(\Sigma, L^{p})\to C^{\infty}_{0}(\Sigma, L^{p})$ is the Kodaira Laplacian on $L^{p}$ and $$\begin{aligned}
\label{eq:3.68a}
\ker\square_p= H_{(2)}^{0}(\Sigma, L^{p}).
\end{aligned}$$
Observe that the construction of the $\phi_{\ell}^{(p)}$, $\ell = 1,\ldots,\delta'_p$, following Steps 1. to 4. of the proof of Theorem \[thm\_apdx\] can alternatively be carried out according to the following principle:
- with the cut-off function $\chi$ in , for $\ell = 1,\ldots,\delta'_p$, set $$\begin{aligned}
\label{eq:3.70a}
\phi_{0,\ell}^{(p)}:=\phi_{\ell,0}^{(p)}
= c^{(p)}_{\ell}\chi(|z|)z^{\ell}\mathfrak{e}_L^p .\end{aligned}$$
- give an explicit estimate of $\big\|\square_p\phi_{0,\ell}^{(p)}\big\|_{{\boldsymbol{L}}^2_p(\Sigma)}$.
- correct $\phi_{0,\ell}^{(p)}$ into a holomorphic ${\boldsymbol{L}}^{2}$-section $\phi_{\ell}^{(p)}$ of $L^{p}$ by orthogonal ${\boldsymbol{L}}^2_p(\Sigma)$-projection; we then use the spectral gap property [@bkp Cor.5.2] (itself a direct consequence of [@mm Theorem 6.1.1]) together with Step 2' to get .
*Step 1’.* We compute, by and , as in , for $\ell=1,\ldots,\delta'_p$, $$\label{eq:3.71a}
\begin{aligned}
0\leq 1 - \big\|c_{\ell}^{(p)}z^{\ell} \chi(|z|)\mathfrak{e}_L^p
\big\|_{{\boldsymbol{L}}^2_p(\Sigma)}^2
&=\dfrac{1}{ (p-2)!} \int_0^{2\ell \beta |\log r|}
u^{p-2} e^{-u} \Big(1-\chi^{2}(e^{-u/(2\ell)})\Big) du\\
&\leq \dfrac{1}{ (p-2)!} \int_0^{2\delta'_p \beta |\log r|}
u^{p-2} e^{-u} du.
\end{aligned}$$ As $ u^{p-2} e^{-u}$ is strictly increasing on $[0, p-2]$ and $\log \beta<0$, and from , $2(\delta'_p+2) |\log r| \leq (p-2)c(\kappa)$, by , , we get a refinement of (note that $\log(c(\kappa)\beta)-c(\kappa)\beta+1\leq\log c(\kappa)+1\leq-2\kappa$, since $\log\beta<0$, $c(\kappa)\beta>0$ and $\log c(\kappa)\leq -1-2\kappa$), $$\label{eq:3.73a}
\begin{split}
\dfrac{1}{ (p-2)!} \int_0^{2\delta'_p \beta |\log r|}
& u^{p-2} e^{-u} du
\leq \dfrac{1}{ (p-2)!} \int_0^{(p-2) c(\kappa)\beta }
u^{p-2} e^{-u} du\\
&\leq \frac{(p-2)^{p-2}}{(p-2)!}
e^{(p-2)(\log (c(\kappa)\beta)-c(\kappa)\beta )} (p-2)c(\kappa) \beta\\
&= \Big(\dfrac{p-2}{2\pi} \Big)^{1/2} c(\kappa)\beta
\Big(1+ \mathcal{O}(p^{-1})\Big)
e^{(p-2)( \log (c(\kappa)\beta) -c(\kappa)\beta +1)}\\
& = \mathcal{O}(e^{-2\kappa \, p}).
\end{split}$$ From and , uniformly in $\ell\in \{1,\ldots,\delta'_p\}$, $$\label{e:|phi0|}
\big\| \phi_{0,\ell}^{(p)}
\big\|_{{\boldsymbol{L}}^2_p(\Sigma)}^{2}
= \big\|c_{\ell}^{(p)}z^{\ell} \chi(|z|)\mathfrak{e}_L^p
\big\|_{{\boldsymbol{L}}^2_p(\Sigma)}^{2}
= 1 + \mathcal{O}( e^{-2\kappa p}).$$ *Step 2’.* Recall from [@bkp (4.14), (4.15) or (4.30)] that on ${\mathbb{D}}_{2r}^*$ (seen in $\Sigma$), $$\label{eq:3.74a}
\square_p({\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}}\,\mathfrak{e}_L^p)
= \Big(- |z|^2\log^2(|z|^2)\frac{\partial^2{\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}}}
{\partial z\partial\bar{z}}
- p\bar{z}\log(|z|^2)\frac{\partial{\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}}}{\partial\bar{z}}\Big)
\mathfrak{e}_L^p.$$ Hence we obtain from and , for $\ell=1,\ldots,\delta'_p$, $$\label{eq:3.75a}
\square_p\phi_{0,\ell}^{(p)}
= c^{(p)}_{\ell}
\Big(- |z|^2\log^2(|z|^2)\frac{\partial^2}{\partial z\partial\bar{z}}
\big(\chi(|z|)z^{\ell}\big)
- p\, \bar{z}\log(|z|^2)\frac{\partial}{\partial\bar{z}}
\big(\chi(|z|)z^{\ell}\big)
\Big)\mathfrak{e}_L^p.$$ Since $\frac{\partial}{\partial\bar{z}}[\chi(|z|)z^{\ell}]
= \big(\frac{\partial}{\partial\bar{z}}\chi(|z|)\big)z^{\ell}
= \frac{|z|}{2\bar{z}}\chi'(|z|)z^{\ell}$, we have $$\label{eq:3.76a}
\frac{\partial^2}{\partial
z\partial\bar{z}}\Big[\chi(|z|)z^{\ell}\Big]
= \frac{2\ell+1}{4|z|}z^{\ell}\chi'(|z|)
+ \frac14 z^{\ell}\chi''(|z|),$$ which yields $$\begin{gathered}
\label{eq:3.77a}
\square_p\phi_{0,\ell}^{(p)} = c^{(p)}_{\ell}
\Big(- \frac{2\ell+1}{4}|z|z^{\ell}\log^2(|z|^2) \chi'(|z|) \\
- \frac14|z|^2z^{\ell}\log^2(|z|^2) \chi''(|z|)
-\frac{p}{2}|z|z^{\ell}\log(|z|^2) \chi'(|z|)\Big)\mathfrak{e}_L^p
\end{gathered}$$ on ${\mathbb{D}}^*_{2r}$, and this readily extends to the whole $\Sigma$. Therefore, $$\label{e:Kodphi0}
\begin{aligned}
\big\|\square_p\phi_{0,\ell}^{(p)}&\big\|_{{\boldsymbol{L}}^2_p(\Sigma)}
\leq c^{(p)}_{\ell}
\Big(\frac{2\ell+1}{4}\big\||z|^{\ell+1}\log^2(|z|^2) \chi'(|z|)
\big\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)} \\
&+ \frac14\big\||z|^{\ell+2}\log^2(|z|^2) \chi''(|z|)\big\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)}
+ \frac{p}{2} \big\||z|^{\ell+1}\log(|z|^2) \chi'(|z|)\big\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)}
\Big).
\end{aligned}$$ Nonetheless, using arguments similar to those of Step 1' above, we claim that there exists $C>0$ such that for all $p\gg1$ and $\ell=1,\ldots,\delta'_p$, $$\label{e:|Kodphi0|}
\begin{aligned}
\big\||z|^{\ell+1}\log^2(|z|^2) \chi'(|z|)\big\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)}
&\leq C(c^{(p+4)}_{\ell+1})^{-1} e^{-\kappa p}, \\
\big\||z|^{\ell+2}\log^2(|z|^2) \chi''(|z|)\big\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)}
&\leq C(c^{(p+4)}_{\ell+2})^{-1} e^{-\kappa p}, \\
\big\||z|^{\ell+1}\log(|z|^2) \chi'(|z|)\big\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)}
&\leq C(c^{(p+2)}_{\ell+1})^{-1} e^{-\kappa p}.
\end{aligned}$$ Indeed, from and since $2(\delta'_p+2) |\log r| \leq (p-2)c(\kappa)$, applying for $p+4$ as in , we get with $C_{0}=\sup_{[0,1]}|\chi'|$, $$\begin{aligned}
\label{eq:3.80a}\begin{split}
\big\| c^{(p+4)}_{\ell+1}|z|^{\ell+1}& \log^2(|z|^2)
\chi'(|z|)\big\|_{{\boldsymbol{L}}^2_p({\mathbb{D}}^*)}^2 \\
&= (c^{(p+4)}_{\ell+1})^{2}
\int_{r^\beta}^1
\big|\!\log(t^2)\big|^{p+4} t^{2\ell+2} \chi'(t)^{2}
\frac{4\pi t dt}{t^2 \big|\!\log(t^2)\big|^{2}}\\
&\overset{u=-2(\ell +1)\log t}{=\joinrel=}
\dfrac{1}{ (p+2)!} \int_0^{2(\ell+1) \beta |\log r|}
u^{p+2} e^{-u} \Big(\chi'\Big(e^{-\frac{u}{2(\ell+1)}}\Big)\Big)^{2}du\\
&\leq \dfrac{C_{0}^{2}}{ (p+2)!} \int_0^{2(\delta'_p+1) \beta |\log r|}
u^{p+2} e^{-u} du\\
&\leq \dfrac{C_{0}^{2}}{ (p+2)!} \int_0^{(p+2) c(\kappa)\beta }
u^{p+2} e^{-u} du= \mathcal{O}(e^{-2\kappa \, p}).
\end{split} \end{aligned}$$ Consequently, by and , there exists $C>0$ such that for all $p\gg1$ and $\ell=1,\ldots,\delta'_p$, $$\label{e:|Kophi0|2}
\begin{split}
&\big\|\square_{p}\phi_{0,\ell}^{(p)}\big\|_{{\boldsymbol{L}}^{2}_{p}}
\leq Cc^{(p)}_{\ell}\Big(\ell(c^{(p+4)}_{\ell+1})^{-1}
+(c^{(p+4)}_{\ell+2})^{-1}
+p(c^{(p+2)}_{\ell+1})^{-1}\Big)e^{-\kappa p}\\
&= C\left(\frac{\ell^{p-1}}{(p-2)!}\right)^{\!\frac{1}{2}}
\left(\ell \left(\frac{(p+2)!}{(\ell+1)^{p+3}}\right)^{\!\frac{1}{2}}
+ \left(\frac{(p+2)!}{(\ell+2)^{p+3}}\right)^{\!\frac{1}{2}}
+ p \left(\frac{p!}{(\ell+1)^{p+1}}\right)^{\!\frac{1}{2}}\right)
e^{-\kappa p}\\
&\leq C p^{2} e^{-\kappa p}.
\end{split}$$ *Step 3’.* Recall that the spectral gap property [@bkp Corollary 5.2] tells us that there exists $C_{1}>0$ such that for all $p\gg 1$ we have $$\begin{aligned}
\label{eq:3.85a}
\text{ Spec}(\square_{p})\subset \{0\} \cup [C_{1} p, +\infty).
\end{aligned}$$ For $\ell\in\{1,\ldots,\delta'_p\}$, let $\psi_{0,\ell}^{(p)} \in {\boldsymbol{L}}^{2}_p(\Sigma)$ be such that $\psi_{0,\ell}^{(p)} \perp H^{0}_{(2)}(\Sigma, L^{p})$ and $\square_{p}\psi_{0,\ell}^{(p)}
=\square_{p}\phi_{0,\ell}^{(p)}$. Then by , $$\begin{aligned}
\label{eq:3.84a}
\phi_{\ell}^{(p)}=\phi_{0,\ell}^{(p)}-\psi_{0,\ell}^{(p)}.
\end{aligned}$$ By , and we get $$\label{e:|psi|}
\big\|\phi_{\ell}^{(p)} - \phi_{0,\ell}^{(p)}\|_{{\boldsymbol{L}}^{2}_{p}(\Sigma)}
= \big\|\psi_{0,\ell}^{(p)} \|_{{\boldsymbol{L}}^{2}_{p}(\Sigma)}
\leq (C_{1} p)^{-1} \|\square_{p}\phi_{0,\ell}^{(p)}
\|_{{\boldsymbol{L}}^{2}_{p}(\Sigma)}
\leq C C_{1}^{-1} p\, e^{-\kappa p},$$ uniformly in $\ell=1,\ldots,\delta'_p$. Note that can be reformulated as $$\label{eq:3.87a}
\big\langle \phi_{0,\ell}^{(p)}, \phi_{0,j}^{(p)}
\big\rangle_{{\boldsymbol{L}}^{2}_{p}(\Sigma)}
= \delta_{j\ell}\big(1+\mathcal{O}(e^{-2\kappa p})\big),$$ (the case $\ell\neq j$ provides 0 by circle symmetry). Thus , and entail $$\label{eq:3.88a}
\big\langle \phi_{\ell}^{(p)}, \phi_{j}^{(p)}
\big\rangle_{{\boldsymbol{L}}^{2}_{p}(\Sigma)}
= \big\langle \phi_{0,\ell}^{(p)}, \phi_{0,j}^{(p)}
\big\rangle_{{\boldsymbol{L}}^{2}_{p}(\Sigma)}
- \big\langle \psi_{0,\ell}^{(p)}, \psi_{0,j}^{(p)}
\big\rangle_{{\boldsymbol{L}}^{2}_{p}(\Sigma)}
= \delta_{j\ell} + \mathcal{O}(p^{2} e^{-2\kappa p}),$$ uniformly in $\ell, j=1,\ldots,\delta'_p$. Because $(\sigma_{\ell}^{(p)})_{1\leq \ell\leq\delta'_p}$ is obtained by the Gram-Schmidt orthonormalisation of $(\phi_{\ell}^{(p)})_{1\leq \ell\leq \delta'_p}$ (which is a $\delta'_p=\mathcal{O}(p)$ process) we infer from and that $$\label{eq:3.92a}
\big\|\sigma_{\ell}'^{(p)}
- \phi_{\ell}^{(p)}\|_{{\boldsymbol{L}}^{2}_p(\Sigma)}
=\mathcal{O}(p^{3} e^{-2\kappa p}),\qquad
\big\|\sigma_{\ell}'^{(p)} \|_{{\boldsymbol{L}}^{2}_p(\Sigma)}
= 1+ \mathcal{O}(p^{3} e^{-2\kappa p}).$$ Since $\sigma_{\ell}^{(p)}= \sigma_{\ell}'^{(p)} /
\big\|\sigma_{\ell}'^{(p)} \|_{{\boldsymbol{L}}^{2}_p(\Sigma)}$, we conclude from that there exists $C>0$ such that for $p\gg1$, $$\label{eq:3.89a}
\big\|\sigma_{\ell}^{(p)}
- \phi_{\ell}^{(p)}\|_{{\boldsymbol{L}}^{2}_p(\Sigma)}
\leq \Big | \big\|\sigma_{\ell}'^{(p)} \|_{{\boldsymbol{L}}^{2}_p(\Sigma)} -1\Big|
+ \big\|\sigma_{\ell}'^{(p)}
- \phi_{\ell}^{(p)}\|_{{\boldsymbol{L}}^{2}_p(\Sigma)}
\leq Cp^{3} e^{-2 \kappa p},$$ hence, by and , we have uniformly in $\ell=1,\ldots,\delta'_p$ for $p\gg1$, $$\label{eq:3.90a}
\big\|\sigma_{\ell}^{(p)}
- c_{\ell}^{(p)}\chi(|z|)z^{\ell}\mathfrak{e}_L^p
\big\|_{{\boldsymbol{L}}^{2}_p(\Sigma)}
= \big\|\sigma_{\ell}^{(p)} - \phi_{0,\ell}^{(p)}\|_{{\boldsymbol{L}}^{2}_p(\Sigma)}
\leq Cp \, e^{-\kappa p}.$$
*Echelon property. —* We use the expansion of $\sigma^p_{\ell}$ on ${\mathbb{D}}^*_{4r}$. By construction, $\phi_{j}^{(p)}\in {\rm Span}\{\sigma_1^{(p)},\ldots,\sigma_j^{(p)}\}$ for $1\leq j \leq \delta'_{p}$, so if $j<\ell$, then $\phi_{j}^{(p)}\perp_{{\boldsymbol{L}}^2_p(\Sigma)} \sigma^{(p)}_{\ell}$, as $\phi_j^{(p)}$ is the ${\boldsymbol{L}}^2_p(\Sigma)$-projection of $\phi_{0,j}^{(p)}$ on holomorphic sections. Hence we have as in that $$\label{e:<sigma,phi>}
\big\langle \sigma_{\ell}^{(p)}, \phi_{0,j}^{(p)}
\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)}
= \big\langle \sigma_{\ell}^{(p)}, \phi_{j}^{(p)}
\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)} =0\,, \quad \text{ if } j<\ell .$$ Now and entail $$\label{eq:3.98a}
a_{j\ell}^{(p)}=0\qquad \text{ if } j<\ell, \, j\in \{1,\ldots,\delta_p'\},
\, \ell\in \{1,\ldots,d_p\}.$$ From and we get and . The proof of Lemma \[prop:rfnd\_estmt\] is completed.
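As a purely numerical illustration (a sketch, not part of the argument), the incomplete-Gamma estimate behind Step 1' can be checked directly. The script below, with sample values of $c(\kappa)$ and $\kappa$ chosen by us so that $\log c(\kappa)\leq -1-2\kappa$, verifies that $\frac{1}{(p-2)!}\int_0^{(p-2)c(\kappa)}u^{p-2}e^{-u}\,du = \mathcal{O}(e^{-2\kappa p})$ numerically; this quantity dominates the integral appearing in Step 1', where the upper limit carries an extra factor $\beta<1$.

```python
# Numerical sanity check (a sketch, not the paper's argument) of the
# incomplete-Gamma estimate used in Step 1'.
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

c = np.exp(-3.0)   # sample value of c(kappa); log c = -3
kappa = 1.0        # then log c = -1 - 2*kappa, so the hypothesis holds
for p in (40, 80, 120, 160, 200):
    # P(p-1, x) = (1/(p-2)!) * int_0^x u^{p-2} e^{-u} du
    lhs = gammainc(p - 1, (p - 2) * c)
    print(p, lhs / np.exp(-2.0 * kappa * p))  # ratios stay below 1 and decrease with p
```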
The following consequence of Lemma \[prop:rfnd\_estmt\] that refines is very useful in our computations.
\[eq:t3.4\] We have uniformly for $j, \ell\in \{1,\ldots,\delta'_p\}$, $$\label{e:a_alphaell} \begin{split}
a_{j\ell}^{(p)} =
\left \{ \begin{array}{l}
0 \hspace{3cm} \qquad \text{ for } j<\ell; \\
c_{j}^{(p)} \,\big(\delta_{j\ell}+\mathcal{O}(pe^{-\kappa p})\big)
\quad \text{ for } j\geq \ell .
\end{array} \right. \end{split}$$
First note that by we have $$\label{e:<sigma,phi0>2}
\begin{aligned}
\big\langle \sigma_\ell^{(p)} , \phi_{j}^{(p)}
\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)}
= \big\langle \sigma_\ell^{(p)}, \phi_{0,j }^{(p)}
\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)}
= &\big\langle \sigma_\ell^{(p)} - \phi_{0,\ell}^{(p)} ,
\phi_{0,j }^{(p)} \big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)}
+ \big\langle \phi_{0,\ell}^{(p)},
\phi_{0,j}^{(p)}\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)} .
\end{aligned}$$ Further, , and imply $$\label{eq:3.55a}
\big\langle \sigma_\ell^{(p)} , \phi_{j}^{(p)}
\big\rangle_{{\boldsymbol{L}}^2_p(\Sigma)}
= \mathcal{O}(pe^{-\kappa p})
+ \delta_{j \ell}\, \big(1+\mathcal{O}(e^{-2\kappa p})\big).$$ By and we have uniformly on $j\in \{1,\ldots,\delta'_p\}$, $$\label{eq:3.53a}
(c_{j }^{(p)})^2
\int_{{\mathbb{D}}^*_{2r}}\big|\!\log(|z|^2)\big|^p |z|^{2j } \chi(|z|)\,
\omega_{{\mathbb{D}}^*}
= 1+ \mathcal{O}(e^{-2\kappa p}).$$ The first equality of , , and entail .
Proof of Theorem \[thm:diffquot\] {#eq:s3.2}
----------------------------------
We now show how to establish Theorem \[thm:diffquot\] by using Lemma \[prop:rfnd\_estmt\]. Note that while estimate is essential in establishing Theorem \[thm:diffquot\], the echelon property is not; it nonetheless helps clarify some of the upcoming computations.
The proof goes as follows: we start with explicit computations, then use Lemma \[prop:rfnd\_estmt\] to carry out a precise analysis of the head terms, i.e., the terms with all indices $\leq \delta'_{p}$, and recall some rough estimates for the tail terms, i.e., those with some index $\geq \delta'_{p}+1$. On a shrinking family of discs $\{|z|\leq c'p^{-A'}\}$, we will conclude from that the tail terms can be controlled by $2^{-\alpha' p} B^{{\mathbb{D}}^*}_{p}$, hence on some fixed trivialization disc, as for Theorem \[thm\_apdx\].
From , for $z\in {\mathbb{D}}^{*}_{4r}$, set $$\label{eq:4.1a}
\beta^\Sigma_{p}(z)= \beta^\Sigma_{p}(z,z), \qquad
\beta^{{\mathbb{D}}^*}_{p}(z)= \beta^{{\mathbb{D}}^*}_{p}(z,z).$$ By and , we have $$\begin{aligned}
\label{eq:4.3a}
\frac{B_p(z)}{B_p^{{\mathbb{D}}^*}(z)}
= \frac{\beta_p^{\Sigma}(z)}{\beta_p^{{\mathbb{D}}^*}(z)}
= 1 + \big(\beta_p^{\Sigma} - \beta_p^{{\mathbb{D}}^*}\big)(z)
( \beta_p^{{\mathbb{D}}^*}(z))^{-1}.
\end{aligned}$$ With the notations of Lemma \[prop:rfnd\_estmt\] we compute explicitly on ${\mathbb{D}}_{4r}^*$. By , , , and , we have $$\begin{aligned}
\label{eq:4.2a}\begin{split}
\beta_p^{\Sigma}(z)
=\sum_{q,s=1}^{\infty} \bigg(\sum_{\ell=1}^{d_p}
a_{q\ell}^{(p)}\overline{a_{s\ell}^{(p)}}\bigg) z^q \bar{z}^s ,
\qquad \beta_p^{{\mathbb{D}}^*}(z) =
\sum_{q=1}^{\infty} (c_q^{(p)})^2 |z|^{2q}.
\end{split} \end{aligned}$$ For $q,s\in {\mathbb{N}}^{*}$, set $$\begin{aligned}
\label{eq:4.7a}
\epsilon_{qs}= \sum_{\ell=1}^{d_p} \tfrac{a_{q\ell}^{(p)}}{c_q^{(p)}}
\tfrac{\overline{a_{s\ell}^{(p)}}}{c_s^{(p)}}
- \delta_{qs}.
\end{aligned}$$ From and , we get $$\begin{aligned}
\label{eq:4.5a}\begin{split}
\frac{d}{dz}& \big(
\beta_p^{\Sigma} - \beta_p^{{\mathbb{D}}^*} \big)(z)
\cdot \beta_p^{{\mathbb{D}}^*}(z) \\
&=\sum_{q,s=1}^{\infty}
q \bigg(\sum_{\ell=1}^{d_p} a_{q\ell}^{(p)}
\overline{a_{s\ell}^{(p)}} z^{q-1} \bar{z}^s
- \delta_{qs}(c_q^{(p)})^2 z^{q-1}
\bar{z}^s\bigg)
\cdot\sum_{m=1}^{\infty} (c_m^{(p)})^2 |z|^{2m} \\
&= \sum_{q,s,m=1}^{\infty}
q(c_m^{(p)})^2 c_q^{(p)}c_s^{(p)} \epsilon_{qs}
z^{q+m-1} \bar{z}^{s+m},
\end{split} \end{aligned}$$ and similarly, $$\begin{aligned}
\label{eq:4.6a}\begin{split}
\big(
\beta_p^{\Sigma} - \beta_p^{{\mathbb{D}}^*} \big)(z)
\cdot \frac{d}{dz}\beta_p^{{\mathbb{D}}^*}(z)
= \sum_{q,s,m=1}^{\infty}
m(c_m^{(p)})^2 c_q^{(p)}c_s^{(p)}\epsilon_{qs}
z^{q+m-1} \bar{z}^{s+m}.
\end{split} \end{aligned}$$ From , and , we get $$\label{eq:4.8a}
\frac{d}{dz}\frac{B_p}{B_p^{{\mathbb{D}}^*}} (z)
= (\beta_p^{{\mathbb{D}}^*}(z) )^{-2}
\sum_{q,s,m=1}^{\infty}
\bigg[
(q-m)(c_m^{(p)})^2c_q^{(p)}c_s^{(p)}
\epsilon_{qs}
\bigg]
z^{q+m-1} \bar{z}^{s+m}.$$ Observe that the coefficient inside $[\ldots]$ in the above sum vanishes if $q=m$. This allows us to separate the above sum into $$\sum_{q=1, s\geq1, m\geq2}\;\;
\text{and}\;\; \sum_{q\geq 2, s\geq1, m\geq1}.$$ We first tackle the sum over $q=1$, $s\geq1$ and $m\geq2$, focusing on the cases $s, m\leq \delta'_p$; then we deal with the sum over $q\geq2$, $s\geq1$ and $m\geq1$, focusing on $q, s,m\leq \delta'_p$, before we also address the cases of “large indices” ($\max\{q, s, m\}\geq\delta'_p+1$).
*Head terms. —* We first look at $$\label{e:hdsum}
I_{p,\delta'_p}(z) = \sum_{s=1}^{\delta'_p}
\sum_{m=2}^{\delta'_p}
\bigg[
(1-m)(c_m^{(p)})^2c_1^{(p)}c_s^{(p)} \epsilon_{1s}
\bigg]
z^{m} \bar{z}^{s+m}.$$ By , , and , uniformly for $q,s\in \{1,\ldots,\delta'_p\}$, $$\label{eq:4.17a}
\epsilon_{qs}
= \sum_{\ell=1}^{{\rm min}\{q, s\}} \frac{a_{q\ell}^{(p)}}{c_q^{(p)}}
\frac{\overline{a_{s\ell}^{(p)}}}{c_s^{(p)}}
- \delta_{qs}
= \mathcal{O}( \delta'_p p\, e^{-\kappa p}) .$$ For all $t\in\{2,\ldots,\delta'_p\}$, $j\in \{1,\ldots,\delta'_p\}$, by , $$\label{eq:4.18a}
\big| (t-j)c_t^{(p)}\big|
\leq \delta'_p \Big(\frac{t}{t-1}\Big)^{(p-1)/2}c_{t-1}^{(p)}
\leq \delta'_p 2^{(p-1)/2}c_{t-1}^{(p)}.$$ From , and , we get $$\begin{aligned}
\label{eq:4.13a}\begin{split}
\Big|I_{p,\delta'_p}(z) \Big| &\leq
\sum_{s=1}^{\delta'_p}
\sum_{m=2}^{\delta'_p}
(m-1)(c_m^{(p)})^2c_1^{(p)}c_s^{(p)} |\epsilon_{1s}| |z|^{s+2m} \\
&\leq \mathcal{O}\big((\delta'_p)^{2}
2^{p/2} p\, e^{-\kappa p}\big)\cdot
\sum_{s=1}^{\delta'_p}
\sum_{m=2}^{\delta'_p} c_m^{(p)}c_{m-1}^{(p)}
c_1^{(p)}c_s^{(p)} |z|^{s+2m} .
\end{split} \end{aligned}$$ But $$\begin{gathered}
\label{eq:4.14a}
\sum_{s=1}^{\delta'_p}
\sum_{m=2}^{\delta'_p} c_m^{(p)}c_{m-1}^{(p)}
c_1^{(p)}c_s^{(p)} |z|^{s+2m}
=
\bigg(\sum_{s=1}^{\delta'_p} c_1^{(p)}c_s^{(p)} |z|^{1+s}\bigg)
\bigg(\sum_{m=2}^{\delta'_p} c_m^{(p)}c_{m-1}^{(p)}
|z|^{2m-1}\bigg) \\
\leq \frac{1}{2}\bigg(\delta'_p (c_1^{(p)})^2|z|^2 +
\sum_{s=1}^{\delta'_p} (c_s^{(p)})^2 |z|^{2s}\bigg)
\bigg(\sum_{m=2}^{\delta'_p} (c_m^{(p)})^2|z|^{2m}\bigg)^{\tfrac12}
\bigg(\sum_{m=2}^{\delta'_p} (c_{m-1}^{(p)})^2|z|^{2(m-1)}
\bigg)^{\tfrac12} \\
\leq (\delta'_p+1)
\bigg(\sum_{j=1}^{\infty} (c_{j}^{(p)})^2|z|^{2j}\bigg)^2
= (\delta'_p+1)( \beta^{{\mathbb{D}}^*}_p (z))^2 .
\end{gathered}$$ We proceed similarly with the sum $$\begin{aligned}
\label{eq:4.16a}
II_{p,\delta'_p}(z) &= \sum_{q=2}^{\delta'_p} \sum_{s=1}^{\delta'_p}
\sum_{m=1}^{\delta'_p}
\Big[(q-m)(c_m^{(p)})^2c_q^{(p)}c_s^{(p)} \epsilon_{qs}
\Big]z^{q+m-1}\bar{z}^{s+m} .
\end{aligned}$$ We have analogously to , $$\label{eq:4.19a}
\begin{split}
\sum_{q=2}^{\delta'_p}
\sum_{s=1}^{\delta'_p}
\sum_{m=1}^{\delta'_p}&
(c_m^{(p)})^2 c_{q-1}^{(p)} c_s^{(p)}|z|^{q+s+2m-1} \\
&=
\bigg(\sum_{q=2}^{\delta'_p} c_{q-1}^{(p)}|z|^{q-1}\bigg)
\bigg(\sum_{s=1}^{\delta'_p} c_s^{(p)}|z|^{s} \bigg)
\bigg(\sum_{m=1}^{\delta'_p} (c_m^{(p)})^2 |z|^{2m}\bigg) \\
&\leq \delta'_p
\bigg(\sum_{q=2}^{\delta'_p} (c_{q-1}^{(p)})^2|z|^{2q-2}
\bigg)^{\tfrac12}
\bigg(\sum_{s=1}^{\delta'_p} (c_s^{(p)})^2|z|^{2s} \bigg)^{\tfrac12}
\bigg(\sum_{m=1}^{\delta'_p} (c_m^{(p)})^2 |z|^{2m}\bigg) \\
&\leq \delta'_p ( \beta^{{\mathbb{D}}^*}_p (z))^2 .
\end{split}$$ From , , and , we get $$\begin{aligned}
\label{eq:4.20a}\begin{split}
\bigg| II_{p,\delta'_p}(z) \bigg|
&\leq \sum_{q=2}^{\delta'_p}
\sum_{s=1}^{\delta'_p}
\sum_{m=1}^{\delta'_p}
(c_m^{(p)})^2\big|(q-m) c_q^{(p)}\big| c_s^{(p)}|\epsilon_{qs}|
|z|^{q+s+2m-1} \\
&\leq\mathcal{O}\big( (\delta'_p)^2 2^{p/2} p\,
e^{- \kappa p}\big)\cdot
\sum_{q=2}^{\delta'_p}
\sum_{s=1}^{\delta'_p}
\sum_{m=1}^{\delta'_p}
(c_m^{(p)})^2 c_{q-1}^{(p)} c_s^{(p)}|z|^{q+s+2m-1} \\
&\leq \mathcal{O}\big((\delta'_p)^{3} 2^{p/2} p\, e^{-\kappa p}\big)
\cdot ( \beta^{{\mathbb{D}}^*}_p (z))^2 .
\end{split} \end{aligned}$$
*Tail terms. —* Set $$\begin{aligned}
\label{eq:4.22a}\begin{split}
\mathcal{A}_p^1 =& \{(q,s,m)\in ({\mathbb{N}}^{*})^{3}\,: \,
q\geq \delta'_p+1;\,s,m\leq \delta'_p\}, \\
\mathcal{A}_p^2 =& \{(q,s,m)\in ({\mathbb{N}}^{*})^{3}\,: \,
s\geq \delta'_p+1;\,m\leq \delta'_p\} , \\
\mathcal{A}_p^3 =& \{(q,s,m)\in ({\mathbb{N}}^{*})^{3}\,: \,
m\geq \delta'_p+1\}.
\end{split} \end{aligned}$$ For $j=1,2,3$, set $$\label{e:tailsumA1}
I(\mathcal{A}_p^j)(z) = \sum_{(q,s,m)\in \mathcal{A}_p^j}
(q-m)(c_m^{(p)})^2c_q^{(p)}c_s^{(p)}\epsilon_{qs}
z^{q+m-1}\bar{z}^{s+m}.$$ By , , and , we have $$\begin{aligned}
\label{eq:4.25a}
\frac{d}{dz}\frac{B_p}{B_p^{{\mathbb{D}}^*}} (z)
= (\beta_p^{{\mathbb{D}}^*}(z) )^{-2}
\Big( I_{p,\delta'_{p}}(z) +II_{p,\delta'_{p}}(z) + I(\mathcal{A}_p^1)(z)
+ I(\mathcal{A}_p^2)(z)+ I(\mathcal{A}_p^3)(z) \Big).
\end{aligned}$$ We now look at the remaining terms of the sum in , i.e., $I(\mathcal{A}_p^j) (j=1,2,3)$.
First, for every triple $(q,s,m)$ in $\mathcal{A}^1_p$, as $q\geq \delta'_p+1 > \delta'_p \geq s$, by , , one has: $$\begin{aligned}
\label{eq:4.26a}
c_q^{(p)}c_s^{(p)}\epsilon_{qs}
= \sum_{\ell=1}^{d_p} a_{q\ell}^{(p)}
\overline{a_{s\ell}^{(p)}}
= \sum_{\ell=1}^{\delta'_p} a_{q\ell}^{(p)}
\overline{a_{s\ell}^{(p)}}.
\end{aligned}$$ From and , we have $$\begin{gathered}
\label{eq:4.28a}
| I(\mathcal{A}_p^1)(z)|
\leq C d_p
\sum_{(q,s,m)\in \mathcal{A}_p^1}
q \Big(\sup_{1\leq\ell\leq d_p}|a_{q\ell}^{(p)}|\Big)
\Big(\sup_{1\leq\ell\leq d_p}|a_{s\ell}^{(p)}|\Big)
(c_m^{(p)})^2 |z|^{q+s+2m-1} \\
= C d_p \bigg(\sum_{q=\delta'_p+1}^{\infty}
q \Big(\sup_{1\leq\ell\leq d_p}|a_{q\ell}^{(p)}|\Big) |z|^{q-1}
\bigg) \\
\times \bigg(\sum_{s=1}^{\delta'_p}
\Big(\sup_{1\leq\ell\leq d_p}|a_{s\ell}^{(p)}|\Big)|z|^{s}\bigg)
\bigg(\sum_{m=1}^{\delta'_p}(c_m^{(p)})^2|z|^{2m}\bigg).
\end{gathered}$$ By and we get uniformly in $j\in \{1,\cdots,\delta'_p\}$, $$\begin{aligned}
\label{eq:4.27a}
\sup_{1\leq\ell\leq d_p}|a_{j\ell}^{(p)}|
=\sup_{1\leq\ell\leq \delta'_p}|a_{j\ell}^{(p)}| \leq Cc_j^{(p)}.
\end{aligned}$$ By and , observe that for $z\in {\mathbb{D}}^{*}_{r}$, $$\begin{gathered}
\label{eq:4.29a}
\sum_{s=1}^{\delta'_p}
\Big(\sup_{1\leq\ell\leq d_p}\big|a_{s\ell}^{(p)}\big|\Big)|z|^{s}
\leq C \sum_{s=1}^{\delta'_p} c_s^{(p)}|z|^{s}\\
\leq C(\delta'_p)^{1/2}
\bigg(\sum_{s=1}^{\delta'_p}
(c_s^{(p)})^2|z|^{2s}\bigg)^{1/2}
\leq C(\delta'_p)^{1/2} (\beta_p^{{\mathbb{D}}^*}(z) )^{1/2}.\end{gathered}$$ Now we give an estimate via $\beta_p^{{\mathbb{D}}^*}(z)$ for the sum $\sum_{q\geq \delta'_p+1}$ in . Recall that for $\xi\in [0,1)$ and $N\geq 0$ we have that $$\label{eq:4.35a}
\sum_{q=N+1}^{\infty}q\xi^{q-1}
=\Big( \sum_{q=N+1}^{\infty}\xi^{q} \Big)'
= \frac{(N+1)\xi^{N}-N\xi^{N+1}}{(1-\xi)^2}
\leq \frac{(N+1)\xi^{N}}{(1-\xi)^2},$$ thus, if $|z|\leq r$, $$\label{eq:4.36a}
\sum_{q=\delta'_p+1}^{\infty}q\Big(\frac{|z|}{2r}\Big)^{q-1}
\leq (\delta'_p+1)\Big(\frac{|z|}{2r}\Big)^{\delta'_p}
\Big(1-\frac{|z|}{2r}\Big)^{- 2}
\leq 4(\delta'_p+1) \Big(\frac{|z|}{2r}\Big)^{\delta'_p}.$$
Taking now $$\label{eq:4.38a}
A' = \frac{1}{2\alpha'}, \quad \alpha'= \frac{c(\kappa)}{4 |\log r|}
\quad \text{ and } \quad
c' = r e^{1/2\alpha'}\big|\!
\log\big(|2r|^{2}\big)\big|^{1/2\alpha'},$$ we obtain from that for any $\tau \in {\mathbb{N}}$ fixed, $$\label{eq:4.39a}
\alpha' p\leq \delta'_{p} -\tau \quad \text{ for } \qquad p\gg1.$$ Thus, as in , we have by and for $\tau\in {\mathbb{N}}$ fixed, $$\label{eq:4.41a}
\Big(\frac{|z|}{2r}\Big)^{2(\delta'_p-\tau)/p}
\frac{1}{|\log(|2r|^{2})|}
\leq \Big(\frac{|z|}{2r}\Big)^{2\alpha'}
\frac{1}{|\log(|2r|^{2})|}
\leq 2^{- 2\alpha'}\frac{e}{p},$$ for $p\gg1$, $|z|\leq c'p^{-A'}$. To conclude, we estimate by , and for any $\tau\in {\mathbb{N}}$ fixed, $$\begin{gathered}
\label{eq:4.43a}
\Big|\!\log\Big(|2r|^{2}\Big)\Big|^{-p/2}
\left(\frac{|z|}{2r}\right)^{\delta'_p-\tau +1}\\[2pt]
= \frac{1}{2 r}
\bigg( \left(\frac{|z|}{2r}\right)^{2(\delta'_p-\tau)/p}
\frac{1}{ |\log(|2r|^{2})|}\bigg)^{p/2}
\Big(2\pi (p-2)!\Big)^{1/2} c_1^{(p)} |z|\\
\leq C p^{-1/2} 2^{- \alpha' p} \beta^{{\mathbb{D}}^{*}}_{p}(z)^{1/2} \end{gathered}$$ for all $p\gg 1$ and $|z|\leq c'p^{-A'}$. Thus by , and for $\tau=1$, we have for all $p\gg 1$ and $|z|\leq c'p^{-A'}$, $$\begin{gathered}
\label{eq:4.42a}
\sum_{q=\delta'_p+1}^{\infty}
q \Big(\sup_{1\leq\ell\leq d_p}|a_{q\ell}^{(p)}|\Big) |z|^{q-1}
\leq \frac{Cp^{1/2}}{2r} \Big|\!\log(|2r|^{2})\Big|^{-p/2}
\sum_{q=\delta'_p+1}^{\infty}q\Big(\frac{|z|}{2r}\Big)^{q-1}\\
\leq C \delta'_p 2^{- \alpha' p}\beta^{{\mathbb{D}}^{*}}_{p}(z)^{1/2}.\end{gathered}$$ By , , and we have for all $p\gg 1$ and $|z|\leq c'p^{-A'}$, $$\begin{aligned}
\label{eq:4.44a}
| I(\mathcal{A}_p^1)(z)| \leq C (\delta'_{p})^{3/2}
d_{p} 2^{-\alpha' p} (\beta_p^{{\mathbb{D}}^*}(z) )^{2}
\leq C p^{5/2} 2^{-\alpha' p} (\beta_p^{{\mathbb{D}}^*}(z) )^{2}.\end{aligned}$$ *Sums over $\mathcal{A}^2_p$ and $\mathcal{A}^3_p$. —* We continue to work on the estimates of the tail terms. We first deal with the sum over $\mathcal{A}^2_p$. By and , one has: $$\begin{aligned}
\label{eq:4.46a}\begin{split}
I(\mathcal{A}_p^2)(z) =& \sum_{(q,s,m)\in \mathcal{A}_p^2}
(q-m)(c_m^{(p)})^2 \bigg(\sum_{\ell=1}^{d_p}a_{q\ell}^{(p)}
\overline{a_{s\ell}^{(p)}}\bigg) z^{q+m-1}\bar{z}^{s+m} \\
&- \sum_{s=\delta'_p+1}^{\infty}\sum_{m=1}^{\delta'_p}
(s-m)(c_m^{(p)})^2(c_s^{(p)})^2 z^{s+m-1}\bar{z}^{s+m}. \\
=&\!: S_1 - S_2.
\end{split} \end{aligned}$$ Now, since $|q-m|\leq qm$ for all $(q,s,m)\in \mathcal{A}_p^2$, we obtain, $$\begin{gathered}
\label{e:S1}
|S_1| \leq d_p
\sum_{q=1}^{\infty} q\Big(\sup_{1\leq\ell\leq d_p}
|a_{q\ell}^{(p)}|\Big)|z|^q \cdot
\sum_{s=\delta'_p+1}^{\infty}
\Big(\sup_{1\leq\ell\leq d_p}|a_{s\ell}^{(p)}|\Big)|z|^{s-1} \cdot
\sum_{m=1}^{\delta'_p}
m(c_m^{(p)})^2|z|^{2m}.
\end{gathered}$$ By , and we get on $|z|\leq c'p^{-A'}$, $$\begin{gathered}
\label{eq:4.48a}
\sum_{q=1}^{\infty} q\Big(\sup_{1\leq\ell\leq d_p}
|a_{q\ell}^{(p)}|\Big)|z|^q
\leq C\delta'_p\sum_{q=1}^{\delta'_p} c_q^{(p)}|z|^q
+ C |z|\delta'_p 2^{- \alpha' p}\beta^{{\mathbb{D}}^{*}}_{p}(z)^{1/2} \\
= \mathcal{O}\big( (\delta'_{p})^{3/2} +
\delta'_{p} 2^{- \alpha' p}\big)
(\beta_p^{{\mathbb{D}}^*}(z) )^{1/2}.
\end{gathered}$$ From and for $\tau=1$ we infer that we have for $|z|\leq c'p^{-A'}$, $$\label{eq:4.49a}
\begin{split}
\sum_{s=\delta'_p+1}^{\infty} \Big(\sup_{1\leq\ell\leq d_p}
|a_{s\ell}^{(p)}&|\Big)|z|^{s-1}
\leq Cp^{1/2}
\Big|\!\log\Big(|2r|^{2}\Big)\Big|^{-p/2}
\sum_{s=\delta'_p+1}^{\infty}\Big(\frac{1}{2r}\Big)^{s}
|z|^{s-1} \\
&= \frac{C}{2r}\, p^{1/2} \Big|\!\log\Big(|2r|^{2}\Big)\Big|^{-p/2}
\Big(\frac{|z|}{2r}\Big)^{\delta'_{p}} \frac{1}{1-|z|/2r} \\
&\leq C 2^{- \alpha' p } (\beta_p^{{\mathbb{D}}^*}(z) )^{1/2}.
\end{split}$$ Obviously, $$\begin{aligned}
\label{eq:4.50a}
\sum_{m=1}^{\delta'_p} m(c_m^{(p)})^2|z|^{2m}
\leq \delta'_p\sum_{m=1}^{\delta'_p} (c_m^{(p)})^2|z|^{2m}
\leq \delta'_p\beta_p^{{\mathbb{D}}^*}(z).
\end{aligned}$$ Thus, using these three estimates together with , , we see that yields: $$\label{e:estS1}
|S_1| = \mathcal{O}(p\cdot p^{3/2}\cdot p)
2^{- \alpha' p} \beta_p^{{\mathbb{D}}^*}(z)^{2} \quad
\text{ for } |z|\leq c'p^{-A'}.$$ From , $$\begin{aligned}
\label{eq:4.52a}\begin{split}
|S_2| \leq& \sum_{s=\delta'_p+1}^{\infty}\sum_{m=1}^{\delta'_p}
|s-m|(c_m^{(p)})^2(c_s^{(p)})^2 |z|^{2s+2m-1} \\
\leq& \bigg(\sum_{s=\delta'_p+1}^{\infty}s(c_s^{(p)})^2
|z|^{2s-1}\bigg)
\bigg(\sum_{m=1}^{\delta'_p} (c_m^{(p)})^2|z|^{2m}\bigg).
\end{split}\end{aligned}$$ Note that by the argument in for ${\mathbb{D}}^{*}$ (or directly from , ), there exists $C>0$ such that for any $s\in {\mathbb{N}}^*$, $p\geq 2$, we have $$\begin{aligned}
\label{eq:4.54a}
|c_s^{(p)}|\leq
Cp^{1/2}\big(2r\big)^{-s}|\!\log(|2r|^{2})|^{-p/2}.
\end{aligned}$$ By , , for $\tau=1$, and , we get as in for all $p\gg 1$ and $|z|\leq c'p^{-A'}$, $$\begin{gathered}
\label{eq:4.55a}
\bigg(\sum_{s=\delta'_p+1}^{\infty}s(c_s^{(p)})^2
|z|^{2s-1}\bigg) \leq C\Big|\!\log\Big(|2r|^{2}\Big)\Big|^{-p}
\frac{|z|}{4r^{2}} p
\bigg(\sum_{s=\delta'_p+1}^{\infty} s \Big(\frac{|z|}{2 r}\Big)^{2s-2}
\bigg) \\
\leq Cp\, \Big|\!\log\Big(|2r|^{2}\Big)\Big|^{-p}|z|
\frac{(\delta'_p+1)}{(1-(|z|/2r)^{2})^2}
\Big(\frac{|z|}{2 r}\Big)^{2\delta'_p}
\leq Cp 2^{- 2\alpha' p} \beta_p^{{\mathbb{D}}^*}(z).
\end{gathered}$$ By , , and we obtain $$\label{eq:4.60a}
| I(\mathcal{A}_p^2)(z) |
= \mathcal{O}( p^{7/2} 2^{- \alpha' p}) \beta_p^{{\mathbb{D}}^*}(z)^{2}
\quad \text{ on } |z|\leq c'p^{-A'}.$$ We finally deal with the sum over $\mathcal{A}_p^3$, using the same principles[^4]. Write: $$\begin{aligned}
\label{eq:4.61a}\begin{split}
I(\mathcal{A}_p^3)(z)
=& \sum_{(q,s,m)\in \mathcal{A}_p^3} (q-m)(c_m^{(p)})^2
\bigg(\sum_{\ell=1}^{d_p}a_{q\ell}^{(p)}\overline{a_{s\ell}^{(p)}}\bigg)
z^{q+m-1}\bar{z}^{s+m} \\
&- \sum_{s=1}^{\infty}\sum_{m=\delta'_p+1}^{\infty}
(s-m)(c_m^{(p)})^2(c_s^{(p)})^2 z^{s+m-1}\bar{z}^{s+m}\\
=&\!: S_1' - S_2'.
\end{split} \end{aligned}$$ On the one hand, rather similarly to (though observe the precise exponents), $$\begin{gathered}
\label{e:S1'}
|S_1'| \leq d_p
\sum_{q=1}^{\infty} q\Big(\sup_{1\leq\ell\leq d_p}
|a_{q\ell}^{(p)}|\Big)|z|^q
\sum_{s=1}^{\infty}
\Big(\sup_{1\leq\ell\leq d_p}|a_{s\ell}^{(p)}|\Big)|z|^{s}
\sum_{m=\delta'_p+1}^{\infty} m(c_m^{(p)})^2|z|^{2m-1}.
\end{gathered}$$ Again, we deal separately with $\sum_{s=1}^{\delta'_p}$ and $\sum_{s=\delta'_p+1}^{+\infty}$ from , . In conclusion, by , , , , and , we have on $|z|\leq c'p^{-A'}$, $$\label{eq:4.64a}
|S'_1| \leq \mathcal{O}(p^{4} 2^{- 2 \alpha' p})
\beta_p^{{\mathbb{D}}^*}(z)^{2}.$$ On the other hand, we have by on the set $|z|\leq c'p^{-A'}$, $$\begin{aligned}
\label{eq:4.65a} \begin{split}
|S_2'| &\leq \sum_{q=1}^{\infty}\sum_{m=\delta'_p+1}^{\infty}
|q-m|(c_m^{(p)})^2(c_q^{(p)})^2 |z|^{2q+2m-1} \\
&\leq \bigg(\sum_{q=1}^{\infty}q(c_q^{(p)})^2|z|^{2q}\bigg)
\bigg(\sum_{m=\delta'_p+1}^{\infty}m(c_m^{(p)})^2|z|^{2m-1}\bigg)\\
&\leq C \Big( \delta'_p \sum_{q=1}^{\delta'_{p}}(c_q^{(p)})^2|z|^{2q}
+p 2^{- 2\alpha' p}
\beta_p^{{\mathbb{D}}^*}(z)\Big)p 2^{- 2\alpha' p}
\beta_p^{{\mathbb{D}}^*}(z)\\
&\leq C 2^{- 2\alpha' p} p^{2}\,
\beta_p^{{\mathbb{D}}^*}(z)^{2}.
\end{split} \end{aligned}$$ By , and we have on the set $|z|\leq c'p^{-A'}$, $$\label{eq:4.66a}
\bigg| I(\mathcal{A}_p^3)(z) \bigg|
= \mathcal{O}(p^{4} 2^{-2\alpha' p} )\beta_p^{{\mathbb{D}}^*}(z)^{2}.$$ *Conclusion. —* We sum up the estimates above (head terms , , , and tail terms , , ) in , with $\kappa$ any fixed number larger than $\frac12 \log2$, and obtain for some $\gamma>0$, $$\label{eq:4.91a}
\sup_{|z|\leq c'p^{-A'}}
\bigg|\frac{d}{dz}\bigg(\frac{B_p}{B_p^{{\mathbb{D}}^*}}\bigg)(z)\bigg|
= \mathcal{O}(e^{-\gamma p}).$$ Applying Theorem \[thm\_MainThm\] for $k=1,\delta=0$, we get $$\label{eq:4.92a}
\sup_{c'p^{-A'}\leq |z| \leq r}
|z|\left|\log(|z|^2)\right|
\left|\frac{d}{dz}(B_p-B_p^{{\mathbb{D}}^*})(z)\right|
=\mathcal{O}(p^{-\infty}),$$ which can be rephrased as follows: $$\label{eq:4.93a}
\sup_{c'p^{-A'}\leq |z| \leq r}
\left|\frac{d}{dz}(B_p -B_p^{{\mathbb{D}}^*})(z)\right|
= \mathcal{O}(p^{-\infty}).$$ Estimates , , yield for $k=1$. Higher $k$-order estimates are established along the same lines: (1) the sum over the set of indices in $\mathcal{A}^{j}_{p}$ where one of the indices is $\geq \delta'_{p}+1$ will be controlled by a polynomial in $p$ times $2^{-\alpha' p}\beta_p^{{\mathbb{D}}^*}(z)^{k}$; (2) to handle the sum over the set of indices $\leq \delta'_{p}$, we observe first that the contribution from the terms with sum of indices $<2k+2$ is zero, so we will increase $\kappa$ to absorb the exponential factor in the estimates. Thus the analogue of holds for $k>1$. We exemplify this for the second derivative $\frac{d^{2}}{dz^{2}}$ to show how the above argument works. From , we get $$\begin{gathered}
\label{eq:4.95a}
\frac{d^{2}}{dz^{2}}\frac{B_p}{B_p^{{\mathbb{D}}^*}} (z)
= (\beta_p^{{\mathbb{D}}^*}(z) )^{-3} \\
\times\sum_{q,s,t,m=1}^{\infty}
(q-m)(q+m-1-2t)(c_m^{(p)})^2(c_t^{(p)})^2c_q^{(p)}c_s^{(p)}
\epsilon_{qs}
z^{q+m-2+t} \bar{z}^{s+m+t}.
\end{gathered}$$ It is clear that the contribution of the indices with $q+m+t< 5$ is zero (for $q+m+t=3$ one has $q=m$, and for $q+m+t=4$ either $q=m$ or $q+m-1=2t$), so the trick works even in the presence of a $z^{-2}$-term in . [ $\square$ ]{}
Applications {#eq:s4}
============
Theorem \[thm:diffquot\] can be interpreted in terms of Kodaira embeddings. Following the seminal papers [@bou; @Ca99; @DLM06; @Don; @MM08; @Ti90; @Z98], one of the main applications of the expansion of the Bergman kernel is the convergence of the Fubini-Study metrics induced by the Kodaira maps. Let us consider the Kodaira map at level $p\geq2$ induced by $H^{0}_{(2)}(\Sigma, L^{p})$, which is a meromorphic map defined by $$\label{e:kod1}
\jmath_{p,(2)}:\Sigma\dashrightarrow
\mathbb{P}(H^{0}_{(2)}(\Sigma, L^{p})^{*})
\cong{\mathbb{C}}\mathbb{P}^{d_p-1}\,,\:\: x\longmapsto
\big\{\sigma\in H^{0}_{(2)}(\Sigma, L^{p}):\sigma(x)=0\big\}.$$ Recall that by [@bkp Remark 3.2] the sections of $H^0_{(2)}(\Sigma,L^p)$ extend to holomorphic sections of $L^p$ over $\overline\Sigma$ that vanish at the punctures and this gives an identification $$\label{e:bs22}
H^0_{(2)}(\Sigma,L^p)\cong\{\sigma\in H^0(\overline\Sigma,L^p):
\sigma|_D=0\}.$$ Let $\sigma_D$ be the canonical section of the bundle $\mathscr{O}_{\overline{\Sigma}}(D)$. The map $$\label{e:h2h0}
H^0(\overline\Sigma,L^p\otimes{\mathscr{O}}_{\overline{\Sigma}}(-D))
\to \{\sigma\in H^0(\overline\Sigma,L^p):
\sigma|_D=0\},\quad s\mapsto s\otimes\sigma_D\,,$$ is an isomorphism and we have an identification $H^0(\overline\Sigma,L^p\otimes{\mathscr{O}}_{\overline{\Sigma}}(-D))
\otimes\sigma_D\cong H^0_{(2)}(\Sigma,L^p)
\subset H^0(\overline\Sigma,L^p)$. Since the zero divisor of $\sigma_D$ is $D$ we have for $x\in\Sigma$, $$\label{e:hyp}
\big\{\sigma\in H^{0}_{(2)}(\Sigma, L^{p}):\sigma(x)=0\big\}=
\big\{s\in H^0(\overline\Sigma,L^p\otimes{\mathscr{O}}_{\overline{\Sigma}}(-D)):
s(x)=0\big\}\otimes\sigma_D.$$ Let $\jmath_p$ be the Kodaira map defined by $H^0(\overline\Sigma,L^p\otimes{\mathscr{O}}_{\overline{\Sigma}}(-D))$. We have by the commutative diagram $$\label{e:comkod}
\xymatrix@C=3pc {
{\,\,}\Sigma_{}
\ar[rr]^{\jmath_{p,(2)}\qquad\,\,}
\ar@{^{(}->}[d]
&& \mathbb{P}(H^{0}_{(2)}(\Sigma, L^{p})^{*})
\ar[d]^{\operatorname{Id}} \\
\overline\Sigma \ar[rr]_{\jmath_{p}\qquad\,\,}
&& \mathbb{P}(H^0(\overline\Sigma,L^p\otimes
{\mathscr{O}}_{\overline{\Sigma}}(-D))^{*})
}$$ It is well known that $\jmath_{p}$ is a holomorphic embedding for $p$ large enough, namely for all $p$ satisfying $p\deg (L)-N>2g$ (see [@GH p. 215]). Thus $\jmath_{p,(2)}$ is also an embedding for $p$ large enough, as the restriction of an embedding of $\overline\Sigma$.
The $L^2$-metric on $H^{0}_{(2)}(\Sigma, L^{p})$ induces a Fubini-Study Kähler metric $\omega_{{\rm FS},p}$ on the projective space $\mathbb{P}(H^{0}_{(2)}(\Sigma, L^{p})^{*})$ and a Fubini-Study Hermitian metric $h_{{\rm FS},p}$ on the hyperplane line bundle ${\mathscr{O}}(1)\to\mathbb{P}(H^{0}_{(2)}(\Sigma, L^{p})^{*})$. By [@mm Theorem 5.1.6] $\jmath_{p}$ and $\jmath_{p,(2)}$ induce canonical isomorphisms $$\label{e:cipk}
\jmath_{p}^*{\mathscr{O}}(1)\simeq L^p\otimes{\mathscr{O}}(-D)\,,\quad
\jmath_{p,(2)}^*{\mathscr{O}}(1)\simeq L^p\big|_{\Sigma}\,.$$ Let $\jmath_{p,(2)}^{*}h_{{\rm FS},p}$ be the Hermitian metric induced by $h_{{\rm FS},p}$ via the isomorphism on $L^p\big|_{\Sigma}$.
\[thm:kodaira1\] Let $(\Sigma,\omega_{\Sigma}, L, h)$ fulfill conditions $(\alpha)$ and $(\beta)$. Then as $p\to\infty$, $$\label{e:FS1}
\begin{split}
&\jmath_{p,(2)}^{*}h_{{\rm FS},p}=
\big(1+\mathcal{O}(p^{-\infty})\big)
(B_p^{{\mathbb{D}}^*})^{-1}h^p\,,\\[2pt]
&\frac1p\jmath_{p,(2)}^{*}\omega_{{\rm FS},p}=
\frac{1}{2\pi }\omega_{\Sigma}+\frac{i}{2\pi p}\partial\overline{\partial}
\log\big(B_p^{{\mathbb{D}}^*}\big)
+\mathcal{O}(p^{-\infty})\,,
\end{split}$$ uniformly on $V_1\cup V_2\cup\ldots\cup V_N$.
We have indeed by [@mm Theorem 5.1.6], $$\label{e:FS2}
\jmath_{p,(2)}^{*}h_{{\rm FS},p}=
(B_p)^{-1}h^p\,,\quad
\frac1p\jmath_{p,(2)}^{*}\omega_{{\rm FS},p}=
\frac{i}{2\pi } R^{L}+\frac{i}{2\pi p}\partial\overline{\partial}
\log(B_p)\,,$$ so follows from Theorems \[thm\_apdx\] and \[thm:diffquot\].
We next compare the Fubini-Study metrics induced by the Kodaira maps on $\Sigma$ and on ${\mathbb{D}}^*$, and show that they differ from each other (modulo the usual identification of ${\mathbb{D}}^*_{4r}$ in with the neighbourhood of a singularity of $\Sigma$) by a sequence of $(1,1)$-forms *which is $\mathcal{O}(p^{-\infty})$ (at every order) with respect to any smooth reference metric on ${\mathbb{D}}_r$*: the situation is just as good as in the smooth setting.
The infinite dimensional projective space ${\mathbb{C}}\mathbb{P}^{\infty}$ is a Hilbert manifold modeled on the space $\ell^2$ of square-summable sequences of complex numbers $(a_j)_{j\in{\mathbb{N}}}$ endowed with the norm $\|(a_j)\|=\big(\sum_{j\geq0}|a_j|^2\big)^{1/2}$. Then ${\mathbb{C}}\mathbb{P}^{\infty}=\ell^2\setminus\{0\}/{\mathbb{C}}^*$ and for $a\in\ell^2$ we denote by $[a]$ its class in ${\mathbb{C}}\mathbb{P}^{\infty}$. The affine charts are defined as usual by $U_j=\{[a]:a_j\neq0\}$. The Fubini-Study metric $\omega_{{\rm FS},\infty}$ is defined by $\omega_{{\rm FS},\infty}=
\frac{i}{2\pi}\partial\overline\partial\log\|a\|^2$ to the effect that for a holomorphic map $F:M\to{\mathbb{C}}\mathbb{P}^{\infty}$ from a complex manifold $M$ to ${\mathbb{C}}\mathbb{P}^{\infty}$ we have $F^*\omega_{{\rm FS},\infty}=
\frac{i}{2\pi}\partial\overline\partial\log\|F\|^2$. We define the Kodaira map of level $p$ associated with $({\mathbb{D}}^*, \omega_{{\mathbb{D}}^*}, {\mathbb{C}}, h_{{\mathbb{D}}^*})$ by using the orthonormal basis of $H_{(2)}^p({\mathbb{D}}^*)$, $$\label{e:kod2}
\imath_p:{\mathbb{D}}^* \to{\mathbb{C}}\mathbb{P}^{\infty}\,,\quad
\imath_p(z)=[c_1^{(p)}z,c_2^{(p)}z^2,\ldots,
c_\ell^{(p)}z^\ell,\ldots]\in{\mathbb{C}}\mathbb{P}^{\infty},\:\:
z\in{\mathbb{D}}^*.$$
\[thm:kodaira2\] Suppose that ${\mathbb{D}}^*_{4r}$ and $V\subset \Sigma$ are identified as in . On ${\mathbb{D}}^*_{4r}$ we set $$\label{e:etap}
\imath_p^*\omega_{{\rm FS},\infty}
- \jmath_{p,(2)}^*\omega_{{\rm FS},p} = \eta_p\,idz\wedge d\bar{z}.$$ Then $\eta_p$ extends smoothly to ${\mathbb{D}}_r$ and one has, for all $k\geq 0$, $\ell\geq 0$, $$\|\eta_p\|_{C^{k}({\mathbb{D}}_r)} \leq C_{k,\ell}\, p^{-\ell}\,,
\qquad\text{as }p\to\infty,$$ where $ \|\cdot\|_{C^{k}({\mathbb{D}}_r)}$ is the usual $C^{k}$-norm on ${\mathbb{D}}_{r}$.
We first observe that $\imath_p$ is an embedding, since already $z\mapsto[c_1^{(p)}z,c_2^{(p)}z^2]\in{\mathbb{C}}\mathbb{P}^{1}$ is an embedding. We have $$\label{e:FS3}
\frac{p}{2\pi }\, \omega_{{\mathbb{D}}^*} = \imath_p^*\omega_{{\rm FS},\infty}
- \frac{i}{2\pi}\partial\overline{\partial}\log\big(B_p^{{\mathbb{D}}^*}\big),$$ and consequently on ${\mathbb{D}}_{r}^{*}$, $$\label{e:FS4}
\imath_p^*\omega_{{\rm FS},\infty}-
\jmath_{p,(2)}^*\omega_{{\rm FS},p}=
\frac{i}{2\pi}\partial\overline{\partial}\log\big(B_p^{{\mathbb{D}}^*}/B_p\big)\,,$$ so the assertion follows from Theorem \[thm:diffquot\].
We finish with an application to random Kähler geometry, more precisely to the distribution of zeros of random holomorphic sections [@CM11; @DS06].
Let us endow the space $H^{0}_{(2)}(\Sigma, L^{p})$ with a Gaussian probability measure $\mu_p$ induced by the unitary map $H^{0}_{(2)}(\Sigma, L^{p})\cong{\mathbb{C}}^{d_p}$ given by the choice of an orthonormal basis $(S_j^p)_{j=1}^{d_p}$. Given a section $s\in H^{0}_{(2)}(\Sigma, L^{p})\subset
H^{0}(\overline\Sigma, L^{p})$ we denote by $[s=0]$ the zero distribution on $\overline\Sigma$ defined by the zero divisor of $s$ on $\overline\Sigma$. If the zero divisor of $s$ is given by $\sum m_jP_j$, where $m_j\in{\mathbb{N}}$ and $P_j\in\overline\Sigma$, then $[s=0]=\sum m_j\delta_{P_j}$, where $\delta_{P}$ is the delta distribution at $P\in\overline\Sigma$. We denote by $\langle{\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}},{\raisebox{-0.25ex}
{\scalebox{1.5}{$\cdot$}}}\rangle$ the duality between distributions and test functions. For a test function $\Phi\in C^\infty(\overline\Sigma)$ and $s$ as above we have $\langle[s=0],\Phi\rangle=\sum m_j\Phi(P_j)$.
The expectation distribution ${\mathbb{E}}[s_p=0]$ of the distribution-valued random variable $H^{0}_{(2)}(\Sigma, L^{p})\ni s_p\mapsto[s_p=0]$ is defined by $$\label{e:FS7}
\big\langle {\mathbb{E}}[s_p=0],\Phi\big\rangle=
\int\limits_{H^{0}_{(2)}(\Sigma, L^{p})}\big\langle[s_p=0],
\Phi\big\rangle\,d\mu_p(s_p),$$ where $\Phi$ is a test function on $\overline\Sigma$. We consider the product probability space $$(\mathcal{H},\mu)=
\left(\prod_{p=1}^\infty H^{0}_{(2)}(\Sigma, L^{p}),
\prod_{p=1}^\infty\mu_p\right).$$
\[thm:equi\] (i) The smooth $(1,1)$-form $\jmath_{p,(2)}^{*}\omega_{{\rm FS},p}$ extends to a closed positive $(1,1)$-current on $\overline\Sigma$ denoted $\gamma_p$ (called Fubini-Study current) and we have ${\mathbb{E}}[s_p=0]=\gamma_p$.\
(ii) We have $\frac1p\gamma_p\to \frac{i}{2\pi}R^L$ as $p\to\infty$, weakly in the sense of currents on $\overline\Sigma$, where $R^L$ is the curvature current of the singular holomorphic bundle $(L,h)$ on $\overline\Sigma$.\
(iii) For almost all sequences $(s_p)\in(\mathcal{H},\mu)$ we have $\frac1p[s_p=0]\to \frac{i}{2\pi}R^L$ as $p\to\infty$, weakly in the sense of currents on $\overline\Sigma$.
The convergence of the Fubini-Study currents $\gamma_p$ follows from . The rest of the assertions follow from the general arguments of [@CM11 Theorems 1.1, 4.3]. The conditions (A)-(C) in [@CM11 Theorems 1.1, 4.3] are implied by our hypotheses $(\alpha)$, $(\beta)$ and the required local uniform convergence $\frac1p\log B_p\to0$ as $p\to\infty$ on $\Sigma$ is a consequence of [@mm Theorem 6.1.1].
[22]{}
A. Abbes and E. Ullmo, *Comparaison des métriques d’Arakelov et de Poincaré sur $X_0(N)$*, Duke Math. J. **80** (1995), 295–307. H. Auvray, *The space of Poincaré type Kähler metrics*, J. Reine Angew. Math. **722** (2017), 1–64. H. Auvray, *Asymptotic properties of extremal Kähler metrics of Poincaré type*, Proc. Lond. Math. Soc. (3) **115** (2017), no. 4, 813–853. H. Auvray, X. Ma and G. Marinescu, *Bergman kernels on punctured Riemann surfaces*, C. R. Math. Acad. Sci. Paris **354** (2016), no. 10, 1018–1022. H. Auvray, X. Ma and G. Marinescu, *Bergman kernels on punctured Riemann surfaces*, Math. Ann., to appear; [DOI: 10.1007/s00208-020-01957-y](https://link.springer.com/article/10.1007%2Fs00208-020-01957-y).
R. Berman and G. Freixas i Montplet, *An arithmetic Hilbert-Samuel theorem for singular Hermitian line bundles and cusp forms*, Compos. Math. **150** (2014), no. 10, 1703–1728. J.-M. Bismut and G. Lebeau, *Complex immersions and [Q]{}uillen metrics*, Inst. Hautes [É]{}tudes Sci. Publ. Math. (1991), no. 74, ii+298 pp. (1992).
T. Bouche, *Convergence de la métrique de [F]{}ubini-[S]{}tudy d’un fibré linéaire positif*, Ann. Inst. Fourier (Grenoble), **40** (1990), no. 1, 117-130. J. H. Bruinier, J. I. Burgos Gil and U. Kühn, *Borcherds products and arithmetic intersection theory on Hilbert modular surfaces*, Duke Math. J. **139** (2007), 1–88.
J. I. Burgos Gil, J. Kramer and U. Kühn, *Arithmetic characteristic classes of automorphic vector bundles*, Doc. Math. **10** (2005), 619–716.
D. Catlin, *The Bergman kernel and a theorem of Tian*, in [*Analysis and geometry in several complex variables (Katata, 1997)*]{}, 1–23, Trends Math., Birkhäuser, Boston, 1999. D. Coman and G. Marinescu, *Equidistribution results for singular metrics on line bundles*, Ann. Sci. Éc. Norm. Supér. (4), **48** (2015), no. 3, 497–536. D. Coman and G. Marinescu, *On the first order asymptotics of partial Bergman kernels*, Ann. Fac. Sci. Éc. Toulouse Math. (6), **26** (2017), no. 5, 1193–1210. X. Dai, K. Liu, and X. Ma, *On the asymptotic expansion of [B]{}ergman kernel*, J. Differential Geom. **72** (2006), no. 1, 1–41. X. Dai, K. Liu, and X. Ma, *A remark on weighted [B]{}ergman kernels on orbifolds*, Math. Res. Lett. **19** (2012), no. 1, 143–148.
T.-C. Dinh and N. Sibony, *Distribution des valeurs de transformations méromorphes et applications*, Comment. Math. Helv. [**81**]{} (2006), 221–258.
S. K. Donaldson, *Scalar curvature and projective embeddings. I*, J. Differential Geom. **59** (2001), no. 3, 479–522.
J. S. Friedman, J. Jorgenson and J. Kramer, *Uniform sup-norm bounds on average for cusp forms of higher weights*, Arbeitstagung Bonn 2013, 127–154, Progr. Math., 319, Birkhäuser/Springer, 2016. P. Griffiths and J. Harris, *Principles of algebraic geometry*, Wiley-Interscience \[John Wiley & Sons\], New York, 1978, Pure and Applied Mathematics.
C.-Y. Hsiao, *Projections in several complex variables*, Mém. Soc. Math. Fr. (N.S.) No. 123 (2010), 131 pp.
J. Jorgenson and J. Kramer, *Bounding the sup-norm of automorphic forms*, Geom. Funct. Anal. **14** (2004), 1267–1277.
X. Ma and G. Marinescu, *Holomorphic Morse inequalities and Bergman kernels*, Progress in Mathematics, 254. Birkhäuser Verlag, Basel, 2007. X. Ma and G. Marinescu, *[Generalized Bergman kernels on symplectic manifolds]{}*, Adv. in Math. **217** (2008), no. 4, 1756–1815. J. Sun, *Estimations of the Bergman kernel of the punctured disc*, arXiv:1706.01018. J. Sun and S. Sun, *Projective embedding of log Riemann surfaces and K-stability*, arXiv:1605.01089. G. Székelyhidi, *Extremal metrics and K-stability* (PhD thesis), arXiv:0611002. G. Tian, *On a set of polarized Kähler metrics on algebraic manifolds*, J. Differential Geom. [**32**]{} (1990), 99–130.
S. Zelditch, *Szegő kernels and a theorem of Tian*, Internat. Math. Res. Notices [**1998**]{}, no. 6, 317–331.
[^1]: Laboratoire de Mathématiques d’Orsay, Université Paris-Sud, CNRS, Université Paris-Saclay, Département de Mathématiques, Bâtiment 307, 91405 Orsay, France. E-mail: [email protected]
[^2]: Université de Paris, CNRS, Institut de Mathématiques de Jussieu-Paris Rive Gauche, F-75013 Paris, France. E-mail: [email protected]
[^3]: Universit[ä]{}t zu K[ö]{}ln, Mathematisches Institut, Weyertal 86-90, 50931 K[ö]{}ln, Germany. E-mail: [email protected]
[^4]: Fine uniform control for small indices, rough control via Cauchy formula for large indices, sacrifice of a few powers of $|z|$ and restriction to $|z|\leq cp^{-A}$ for resulting sums.
|
---
abstract: 'In this contribution, results from CCD $vby$ Strömgren photometry of a statistically complete sample of red giants and stars in the main sequence turn-off region in $\omega$ Centauri are presented. From the location of stars in the $(b-y),m_1$ diagram, metallicities have been determined. We argue that the Strömgren metallicity has a different meaning in terms of element abundances than in other globular clusters. From a comparison with spectroscopic element abundances, we find the best correlation with the sum C+N. The high Strömgren metallicities, if interpreted as the signature of strong CN bands, result from progressively higher N and perhaps C abundances in comparison to iron. We see an enrichment already among the metal-poor population, which is difficult to explain by self-enrichment alone. An attractive speculation (made before) is that $\omega$ Cen was the nucleus of a dwarf galaxy. We propose a scenario in which $\omega$ Cen experienced mass inflow over a long period of time, until the gas content of its host galaxy was so low that star formation in $\omega$ Cen stopped, or alternatively the gas was stripped off during its infall into the Milky Way potential. This mass inflow could have occurred in a clumpy and discontinuous manner, explaining the second peak of metallicities, the abundance pattern, and the asymmetrical spatial distribution of the most metal-rich population.'
author:
- Michael Hilker
- Tom Richtler
title: 'The enrichment history of $\omega$ Centauri: what we can learn from Strömgren photometry'
---
Introduction
============
Many medium and high resolution spectroscopy investigations (e.g. Brown & Wallerstein 1993; Norris & Da Costa 1995; Smith et al. 2000) have shown that among the stars in $\omega$ Cen there exist strong variations of nearly all element abundances investigated so far. This is reflected by the intrinsic broad scatter of the red giant branch that cannot be explained by internal reddening only (e.g. Norris & Bessell 1975). Concerning the iron abundance, several authors have confirmed that there exists a main metal-poor population, with a peak at about \[Fe/H\] $=
-1.7\pm0.1$ dex, and a broad tail to higher metallicities with a peak at about \[Fe/H\] $= -1.2$ dex (Norris, Freeman, & Mighell 1996; Suntzeff & Kraft 1996). This high metallicity tail extends to values of \[Fe/H\] $\simeq -0.7$ dex as deduced from the detection of a very red giant branch (RGB) that is well separated from the bulk of the RGB stars (Lee et al. 1999; Pancino et al. 2000).
The abundance variations in $\omega$ Cen point to a more complicated star formation history than that of other globular clusters (GCs), which contain a homogeneous stellar population. Whereas the CNO variations might be explained by evolutionary mixing effects in the stellar atmosphere as well as by mixing in the protocloud (e.g. Bessell & Norris 1976), the iron abundance variations need another explanation (e.g. Vanture, Wallerstein, & Brown 1994; Norris & Da Costa 1995). An increasing number of groups working on $\omega$ Cen favour an extended period of star formation connected with self-enrichment as the interpretation of their data. Smith et al. (2000, and this volume) studied the abundances of s-process elements in RGB stars with high resolution spectroscopy. A strong increase of \[Ba/Fe\] and \[La/Fe\] for metal-poor stars with \[Fe/H\]$<-1.5$ dex followed by a flat relation for higher metallicities led the authors to conclude that low-mass AGB stars have contributed to the enrichment. These stars have an evolutionary time of at least 1 Gyr. Metallicity and age estimates from Strömgren $vby$ photometry in the main sequence turn-off region (Hughes & Wallerstein 1999, and this volume) also suggest an age spread of at least 3 Gyr for stars between $-2.0
<$ \[Fe/H\] $<-0.5$ dex (the metal-richest stars also being the youngest). An age spread also was confirmed in our analysis of Strömgren data (Hilker & Richtler 2000), but will not be the main topic in this contribution. Here, we present the analysis and interpretation of Strömgren metallicities for more than 1500 RGB stars in $\omega$ Cen.
Strömgren photometry in $\omega$ Centauri
=========================================
Strömgren photometry has been proven to be a very useful metallicity indicator for globular cluster giants and subgiants (e.g. Richter, Hilker, & Richtler 1999, Hilker 2000, Grebel & Richtler 1992, Richtler 1989). The location of late type stars in the Strömgren $(b-y),m_1$ diagram is correlated with their metallicities, especially with their iron and CN abundances. Whereas the color $(b-y)$ is not sensitive to metallicity, the Strömgren $v$ filter includes several iron absorption lines as well as the CN band at 4215Å, and therefore $m_1 = (v-b) - (b-y)$ is a metallicity sensitive index (e.g. Bell & Gustafsson 1978). Within a certain color range, $0.5 < (b-y)
< 1.1$ mag, the loci of constant iron abundance of giants and supergiants can be approximated by straight lines. This is valid for CN-“normal” ($=$ CN-weak) stars. CN-strong stars, due to their higher absorption in the $v$ filter, scatter to higher $m_1$ values and therefore mimic a higher Strömgren metallicity than their actual iron abundance would correspond to. This can be used to learn more about the CN variations in $\omega$ Cen. A recent calibration of the Strömgren metallicity for CN-“normal” stars is presented in Hilker (2000).
The observations of $\omega$ Centauri have been performed in two observing runs in 1993 and 1995 with the Danish 1.54m telescope at ESO/La Silla. The details of the observations, data reduction, calibration and photometry are presented in Hilker (2000) and Hilker & Richtler (2000). The positions of all observed fields are illustrated in Fig. 1.
Color magnitude diagram and two-color diagram
---------------------------------------------
In Fig. 2 the color magnitude diagrams (CMD) and two-color diagrams of $\omega$ Cen and M55 are plotted. In the CMD of $\omega$ Cen, all stars with a photometric error less than 0.05 mag (grey dots) and less than 0.03 mag ($\simeq$ 20620 stars, black dots) in $y$ and $b$ are shown. The colors in all plots have been corrected for reddening with a value of $E_{B-V} = 0.11$ mag for $\omega$ Cen (Zinn 1985; Webbink 1985; Reed, Hesser, & Shawl 1988; Gonzalez & Wallerstein 1994), and $E_{B-V} = 0.09$ mag for M55 (the mean value between Harris 1996 and Richter et al. 1999).
In both diagrams the difference between a single-age, single-metallicity cluster, such as M55, and the unusual cluster $\omega$ Cen can be seen very nicely. The broad red giant branch of $\omega$ Cen cannot be explained by photometric errors or internal reddening. Since the $(b-y)$ color is not affected by CN variations, the spread in the RGB must be due to a spread in the overall metallicity (iron abundance) and/or age.
The $(b-y),m_1$ diagram (Fig. 2, upper right panel) is indicative of the metallicity distribution and CN variations of the red giants in $\omega$ Cen. The 1500 red giants that have been selected from the CMD show a large scatter between $-2.0$ and 1.0 dex in their Strömgren metallicity. The Strömgren metallicity is defined as $${\rm [Fe/H]}_{\rm ph} = \frac{m_{1,0} - 1.277 \cdot (b-y)_0 + 0.331}{0.324
\cdot (b-y)_0 - 0.032}$$ following the calibration by Hilker (2000). There is a trend for stars on the blue side of the RGB to be mostly metal-poor, whereas the stars redwards of the “main” RGB populate the metal-rich regime.
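For illustration, a minimal Python sketch of the calibration quoted above is given below; the function name and the example values are ours, and the inputs are assumed to be already corrected for reddening, e.g. with $E_{B-V}=0.11$ mag as adopted for $\omega$ Cen.

```python
# Minimal sketch of the Stroemgren metallicity calibration of Hilker (2000)
# quoted above.  Inputs are assumed to be dereddened colors/indices.
import numpy as np

def stroemgren_feh(by0, m10):
    """[Fe/H]_ph from (b-y)_0 and m_{1,0} for CN-normal red giants."""
    by0 = np.asarray(by0, dtype=float)
    m10 = np.asarray(m10, dtype=float)
    feh = (m10 - 1.277 * by0 + 0.331) / (0.324 * by0 - 0.032)
    # the calibration is quoted for giants with 0.5 < (b-y)_0 < 1.1 mag
    return np.where((by0 > 0.5) & (by0 < 1.1), feh, np.nan)

print(stroemgren_feh(0.75, 0.25))   # a typical metal-poor giant: about -1.79 dex
```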
The metallicity distribution
----------------------------
In Fig. 3 the metallicity histogram of RGB stars with an accurate metallicity determination (stars redder than the dotted line in Fig. 2) is presented. \[Fe/H\]$_{\rm ph}$ denotes the Strömgren metallicity. A peak around $\simeq
-1.7$ dex with a sharp cutoff towards low metallicities at $-1.9$ dex represents the blue RGB stars. Also most of the AGB stars bluewards of the main RGB belong to this metal-poor population. Stars from the red side of the RGB have metallicities mainly in the range $-1.3$ to $-0.5$ dex, with a probable second peak at about $-0.9$ dex. Our metallicity distribution resembles fairly well the results of Norris et al. (1996). Stars with Strömgren metallicities higher than about $-0.8$ dex are supposed to be CN-rich stars of one of the two populations, since no stars with an iron abundance higher than that have been found in the cluster. Most of them are redder than the “main” RGB and thus belong to a more metal-rich population. When selecting the RGB stars by a cut in the CMD that corresponds to a cut in their mass function (more or less a luminosity cut), the proportion of metal-poor to metal-rich stars is about 3:1.
Strömgren metallicity versus Fe, C and N abundance
==================================================
The metallicity distribution found in our investigation is qualitatively very similar to that found by Norris et al. (1996) and Suntzeff & Kraft (1996) from their Calcium abundance measurements. In Fig. 4 we show the metallicity distribution of those stars that are in common in Suntzeff & Kraft’s and our samples. Their Calcium abundances have been transformed to iron abundances according to the relation given in their paper.
The behaviour of $\omega$ Cen regarding its relation between Fe abundance and Strömgren metallicity is remarkably different from that of other globular clusters (open circles in the lower panel of Fig. 4), including NGC 6334, NGC 3680, NGC 6397, Melotte 66, M22 and M55, taken from Hilker (2000).
The straight relation up to $-1$ dex (with large scatter towards higher metallicities) is in striking contrast, for example, to the situation in M22 (Richter et al. 1999), where there is a considerable scatter at a fixed iron abundance. This relation is already present among the metal-poor stars (see the small plot in the upper left of Fig. 4). No systematic effect that could cause this relation has been found. It is not dependent on a magnitude, color or error selection. So, what determines the Strömgren colors in $\omega$ Cen? An answer may come from a comparison of the available element abundances for 40 giants from Norris & Da Costa (1995). Fig. 5 shows in four panels (on the left) the Strömgren metallicity vs. \[Fe/H\]$_{\rm sp}$, \[C/H\], \[N/H\], and \[C+N/H\]. It is apparent that the correlation with \[Fe/H\]$_{\rm sp}$ and \[C/H\] is very poor. It is better for \[N/H\] (note the large error of 0.4 dex given for the N-abundance by Norris & Da Costa (1995)), and best for \[C+N/H\].
On the other hand, there is a close correlation of \[Fe/H\]$_{\rm sp}$ vs. \[C+N/H\] (Fig. 5, right panel) (which, by the way, is surprising, given the above large error of the N abundance). The two most deviating stars are ROA 139 and ROA 144. They have the highest N abundances in this sample and simultaneously low oxygen abundances, and hence are probably strongly affected by mixing effects. If we exclude them, a linear regression returns $0.64 \pm 0.07$ for the slope, indicating that the increase in C+N is faster than in \[Fe/H\].
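As an illustration of the kind of fit quoted above, the following Python sketch performs an ordinary least-squares regression with outlier exclusion and returns the slope with its standard error; the data arrays are synthetic stand-ins, not the Norris & Da Costa abundances.

```python
import numpy as np

def slope_with_error(x, y, exclude=()):
    """Least-squares slope of y versus x and its standard error,
    after dropping the indices listed in `exclude` (e.g. outliers)."""
    keep = [i for i in range(len(x)) if i not in set(exclude)]
    x, y = np.asarray(x)[keep], np.asarray(y)[keep]
    A = np.vstack([x, np.ones_like(x)]).T
    (slope, intercept), res, *_ = np.linalg.lstsq(A, y, rcond=None)
    sigma2 = res[0] / (len(x) - 2)          # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)   # parameter covariance matrix
    return slope, np.sqrt(cov[0, 0])

# Toy stand-ins for the abundances of ~40 giants (two indices excluded):
rng = np.random.default_rng(0)
x = rng.uniform(-2.2, -0.5, 40)
y = 0.64 * x + rng.normal(0.0, 0.1, 40)
print(slope_with_error(x, y, exclude=(3, 17)))
```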
What can we learn from the C+N variations in $\omega$ Cen? Can they be understood as a [**stellar evolutionary effect**]{}?
As Norris & Da Costa point out, C-depletion as a signature of the CNO cycle is present, and the increase of the C abundance with \[Fe/H\] in their Fig. 8a may be caused by the decreasing efficiency of the mixing-up of processed material with increasing metallicity (thus mimicking a C-enhancement), as is theoretically expected (e.g. see Kraft 1994). But then the increase in C+N is not easy to understand, since it is dominated by an increase of N, where we would expect a decrease; and, after all, the sum of C and N should be less sensitive to mixing effects.
So, is this a [**primordial effect**]{}? (We use the term “primordial” for pre-enriched material as the alternative to mixing effects).
A striking argument for this possibility is that in Fig. 4 the relation between \[Fe/H\] and Strömgren metallicity is [*already present among the old population*]{}, where it is hard to understand how such small differences in \[Fe/H\] could cause a regular pattern in the mixing effects. We thus propose that the gradual enrichment of C+N, indicated by the Strömgren colors, is to a large degree primordial.
The suggestion that a part of the proto-cluster material of $\omega$ Cen has undergone considerable C-enrichment has also been made by e.g. Cohen & Bell (1986) and Norris & Da Costa (1995) based on the unique presence of CO-strong stars. However, the \[C/Fe\] abundance in the metal-poor population is about $-$0.7 dex according to Norris & Da Costa, which is close to the $-$0.5 dex theoretically expected from the yield ratios in SNe II (Tsujimoto et al. 1995). Also a mean \[O/C\] value of approximately $-$1 dex, as one would read off from Norris & Da Costa, is close to the expected yield ratios.
If, say, $-$2.2 dex[^1] were the “starting” value of \[C/H\], one would require a considerable C-contribution from intermediate-age stars, which by itself is not easy to understand for the first stars formed in $\omega$ Cen. Therefore, if the increasing \[C+N/Fe\] among the metal-poor old population was, at least to a large part, primordial, one is driven to the conclusion that the star formation process which formed these stars did not take place in a single burst within a well-mixed environment, but must have been extended in time, allowing intermediate-age populations to contribute.
We shall attempt to combine these abundance patterns with other properties in a scenario later on.
Strömgren metallicity versus other properties in $\omega$ Centauri
==================================================================
As already mentioned in the introduction, the metallicity spread is accompanied by an age spread. Metallicity determinations and isochrone fitting of stars in the main sequence turn-off region (Hughes & Wallerstein 1999; Hilker & Richtler 2000) revealed that the more metal-rich stars tend to be younger. Whereas all stars of the main RGB with metallicities between $-2.0$ and $-1.4$ dex might be compatible with one age, the populations with metallicities around $-1.2$ and $-0.7$ dex are at least 2–4 and 3–5 Gyr younger, respectively. If the age-metallicity relation in $\omega$ Cen can be understood as a continuous enrichment process after an initial starburst with \[Fe/H\]$\simeq-1.7$ dex, the age spread of the enrichment lies between 3 and at most 5 Gyr.
Spatial distribution of sub-populations
---------------------------------------
To investigate the spatial distribution of sub-populations in $\omega$ Cen, the 1500 red giants with metallicity determinations have been divided into four sub-populations according to their age and metallicities: (1) an old metal-poor population from the blue side of the RGB, (2) a more metal-rich, mostly CN-rich, and younger population from the red side of the main RGB, (3) the youngest, very metal- and CN-rich population, and (4) the AGB stars of the old population.
Both the cumulative radial distributions and the angular distributions of the sub-populations have been examined. For the angular distribution, only stars within a radius of $10\arcmin$ from the cluster center have been included. The number counts have been normalised to the total number for each selection. The angle $\phi$ is defined as $0\deg$ towards the East, $+90\deg$ towards the North, and $-90\deg$ towards the South.
The population of the selected 83 AGB stars, which is very metal-poor, is distributed like the metal-poor population (1), as expected. Deviations in the angular distribution are statistically not significant.
When comparing population (1) with population (2), a difference in their radial distributions becomes evident in the sense that the ratio of metal-rich to metal-poor stars is higher in the cluster center than in its outskirts (see Fig. 6, left panel). This result is statistically significant. A KS test reveals a probability of less than 0.1% that the cumulative number counts of both populations follow the same radial distribution.
The radial distribution of the stars in pop (3) appears less concentrated than the average cluster population. Some of them might be solar metallicity foreground stars. However, others are confirmed cluster member stars. The angular distribution of pop (3) shows a concentration of stars towards the South and a slight depression in the West and North direction (see Fig. 6, right panel), which can explain the different radial distribution. We note that also Jurcsik (1998) reported a spatial metallicity asymmetry in $\omega$ Cen. She found that the most metal-rich stars with \[Fe/H\]$>-1.25$ dex are concentrated towards the South, whereas the most metal-poor stars with \[Fe/H\]$<-1.75$ are more concentrated in the North. As shown in Fig. 6, nearly 30% of our metal-rich sample is located in the Southern angular bin, but an asymmetrical distribution in the North-South direction of the most metal-poor stars cannot be confirmed. A KS test gives a probability of less than 7% that the angular distributions of the metal-poor stars in Jurcsik’s sample and ours agree, and likewise a probability of less than 7% that our metal-rich and metal-poor stars are distributed equally.
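The two-sample comparisons used here can be sketched with scipy’s Kolmogorov–Smirnov test; the radii below are synthetic placeholders rather than the $\omega$ Cen measurements.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Synthetic projected radii (arcmin) for a more and a less centrally
# concentrated sub-population (placeholders for pops (2) and (1)):
r_metal_rich = rng.gamma(shape=2.0, scale=2.0, size=300)
r_metal_poor = rng.gamma(shape=2.0, scale=2.6, size=900)

stat, p_value = ks_2samp(r_metal_rich, r_metal_poor)
# A p-value below ~0.001 would correspond to the <0.1% quoted above.
print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}")
```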
Metallicity and kinematics
--------------------------
Along with the abundance variations in $\omega$ Cen, different sub-populations show clearly different kinematical behaviour. A dynamical analysis of 400 stars in the Norris et al. (1996) sample of RGB stars with calcium abundance measurements revealed a rotation of the metal-poor component, whereas the metal-rich one is not rotating (Norris et al. 1997). Recent studies with larger radial velocity samples, as presented in these conference proceedings (Gebhardt et al.; Seitzer et al.; Cannon et al.), confirm this result.
A match of our data set with that of Xie, Pryor & Gebhardt (private communication) shows similar results. Within a 3$\arcmin$ radius we have more than 1000 RGB stars in common. Their radial velocity dispersion peaks in the center at about 19 km/sec, falling to 16 km/sec at a radius of 2$\arcmin$. When dividing the sample into 860 metal-poor (\[Fe/H\]$_{\rm ph}<-0.6$) and 150 metal-rich stars, one can clearly see differences in their rotation behaviour. Whereas the metal-poor stars show a strong sign of rotation around the semi-minor axis of the elliptical shape of $\omega$ Cen, no rotation is seen among the most metal-rich stars. At a 3$\arcmin$ radius from the center the rotation velocity of the metal-poor stars reaches 3.6 km/sec. Further detailed analysis of both data sets will show what more can be learned about the connection between chemical enrichment and kinematical behaviour.
A scenario
==========
Problems with self-enrichment within $\omega$ Centauri
------------------------------------------------------
Can the younger populations in $\omega$ Cen be enriched by the older one? In trying to demonstrate the problems with this picture we use oxygen as a tracer for the synthesized material. First we estimate the number of type II SNe having occurred in the old population. If we adopt the mass of $\omega$ Cen to be $4\times10^6$ solar masses (Pryor & Meylan 1993), the metal-poor population comprises about $2.8\times10^6$ M$_{\sun}$. We get 45000 SNe for all stars more massive than 10 M$_{\sun}$ when assuming a Salpeter mass function between 0.1 and 100 M$_{\sun}$. The total oxygen mass released by these SNe is about 50000 M$_{\sun}$ (based on Table 7.2 of Pagel 1997). On the other hand, following Norris & Da Costa (1995), a mean \[O/H\] value for the metal-rich population is $-$0.7 dex (adopting \[O/Fe\] = 0.5 dex and \[Fe/H\] = $-$1.2 dex), so we calculate the actual oxygen mass to be 2300 M$_{\sun}$, if the total mass of the young population is $1.2\times10^6$ M$_{\sun}$. Since Smith et al. (2000) see no signature of enrichment by SNe Ia up to \[Fe/H\] = $-$0.9 dex, it is reasonable to assume that the oxygen mass which was already present in the gas before the enrichment scales with the iron abundance. The ratio is about a factor of 3, so we have 1500 M$_{\sun}$ of newly synthesized oxygen in the metal-rich population. This means that only 3% (or less) of the released oxygen has been retained. If we do this exercise with the iron abundance, we have 1000 M$_{\sun}$ of iron released (Pagel, p. 158), and about 20 M$_{\sun}$ of newly synthesised iron present. Of course, the exact numbers should not be taken too literally, but the above consideration suggests that practically [*all*]{} material must have been blown out.
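The bookkeeping of the preceding paragraph can be redone in a few lines of Python. This is only a sketch with a plain Salpeter normalization, so the supernova count it returns agrees with the 45000 quoted above only to within a factor of a few (the result is sensitive to the adopted IMF limits and normalization), while the retained-oxygen fraction follows directly from the masses quoted in the text.

```python
import numpy as np
from scipy.integrate import quad

# --- Number of SNe II in the old population (Salpeter IMF assumed) ---
M_POP = 2.8e6                        # M_sun, metal-poor population (text)
ALPHA, M_LO, M_HI = 2.35, 0.1, 100.0 # Salpeter slope and IMF limits (text)
norm = M_POP / quad(lambda m: m ** (1.0 - ALPHA), M_LO, M_HI)[0]
n_sn = norm * quad(lambda m: m ** (-ALPHA), 10.0, M_HI)[0]
print(f"SNe II from stars above 10 M_sun: ~{n_sn:.0f}")

# --- Fraction of the released oxygen that was retained (masses from text) ---
O_released = 5.0e4        # M_sun, oxygen released by the SNe II
O_in_young = 2300.0       # M_sun, oxygen now present in the young population
O_pre_enriched = O_in_young / 3.0   # part already in the gas, scaled with [Fe/H]
O_new = O_in_young - O_pre_enriched
print(f"retained oxygen fraction: {O_new / O_released:.1%}")
```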
This is also plausible from the energy point of view. We have a release of kinetic energy of about $5\times10^{54}$ erg from the SNe (neglecting previous stellar winds and ionizing radiation), while the binding energy of the “proto-young population” is about $2\times10^{52}$ erg, if for simplicity we imagine that the gas was confined within a half-light radius of 7 pc (Djorgovski 1993).
Similar factors must apply to the overall fraction of retained gas, implying an unreasonably large protocluster mass (neglecting the problem of how a bound system could survive after such strong mass loss), if one wants to keep the hypothesis of a permanently retained large gas fraction within $\omega$ Cen.
But we have even more problems. Smith et al. (2000) only detect weak signatures of SNe Ia in the metal-rich population, expressed by the low \[Cu/Fe\] value of $-0.6$ dex. On the other hand, the age spread, the increase of s-process elements, and the interpretation of the Strömgren results as primordial enrichment of C and N argue for the contribution of an intermediate-age population. Within 2-3 Gyr, at least some SN Ia events should have occurred, making the problem with the low iron content even worse, had they provided iron to the young population. Why do we not see their debris?
Moreover, it is remarkable that we find the signature of intermediate-age populations already among the old population, indicated by the evidence that the same relation between \[Fe/H\] and Strömgren metallicity (Fig. 4), which connects the oldest and the younger population, appears already to be present at the lowest metallicities. In this respect, the general behaviour of the Strömgren metallicities resembles the well established enrichment of s-process elements relative to iron. How can these stars, in a regular pattern, be self-enriched simultaneously by SNe II and by intermediate-age stars?
It may be that one can construct a scenario in which these oddities can be explained by pure self-enrichment (i.e. Smith et al. 2000). However, we wish to point out an alternative, which seems to offer an easier way towards an understanding of $\omega$ Cen.
Just imagine ...
----------------
... $\omega$ Cen formed within a formerly much larger entity, outside the Milky Way, and at the central position of its former host object. We (speculatively) assume that this object was a dwarf galaxy with $\omega$ Cen as its nucleus. We additionally speculate that its star formation was triggered over a very extended period (perhaps more than 5 Gyr) by [*mass supply*]{} from the overall gas reservoir of its host galaxy. This scenario can explain all characteristic properties of $\omega$ Cen found so far.
This gas inflow, already enriched in the host galaxy to at least $-0.9$ dex, could have occurred in a non-spherical, clumpy and discontinuous manner, providing angular momentum to the first population in $\omega$ Cen and thus giving rise to the flattening of $\omega$ Cen. That no significant rotation is seen in the more metal-rich population might be due to the loss and transfer of angular momentum from the newly infalling gas to the very massive rotating dark matter halo of the first population (see ideas by Binney, Gerhard, & Silk 2001). We have no problems with the competition of gas removal and simultaneous enrichment. The intermediate-age population stars in $\omega$ Cen released their gas, for instance through planetary nebulae, in a much less violent fashion, and the infalling gas mixed with this C- and N-rich material, which also was rich in s-process elements, giving rise to a new star formation period. The large scatter in the Strömgren metallicities may thus be in part primordial, reflecting the incomplete mixing of the infalling gas with the C-N-rich material. Both SNe II,Ib and Ia would sweep up the gas almost completely, terminating star formation for a short while, until further mass infall becomes possible.
It also seems natural that younger and more metal-rich populations show other kinematic and spatial properties, including asymmetries in their spatial distribution, depending on the details of the infall process. The initially asymmetrical distribution of the most metal-rich population is probably still not relaxed due to the long relaxation time of $\omega$ Cen beyond the half-mass radius (Meylan 1987). We would then expect many periods of strong star formation alternating with periods of mass infall. The mass infall would finally cease after the gas content of the host galaxy has become sufficiently low or was perhaps removed by ram pressure stripping in the Galactic halo during its infall in the Milky Way (e.g. Blitz & Robishaw 2000).
The subsequent evolution can be sketched as follows: on its retrograde orbit the dwarf galaxy spiralled towards the Galactic center (Dinescu, Girard, & van Altena 1999). On its way it lost its outermost stellar populations by tidal stripping, including the likely former member globular cluster NGC 6779 (Dinescu et al. 1999). Finally, after its stellar population had dissolved completely, the nucleus $\omega$ Cen remained and now appears as the most massive cluster of our Milky Way.
Summary and concluding remarks
===============================
For about 1500 red giants in $\omega$ Centauri, Strömgren metallicities have been determined. Almost 2/3 of them turn out to be metal-poor, with a peak at \[Fe/H\]$_{\rm ph} = -$1.7 dex. Beyond this peak, the metallicity distribution shows a sharp cutoff towards lower metallicities, but a broad, long tail towards higher metallicities. Most of these stars are CN-rich and are 2-5 Gyr younger than the oldest population.
The comparison between \[Fe/H\] abundances derived from high-dispersion spectroscopy of Norris & Da Costa (1995) and Strömgren metallicities shows a behaviour distinctly different from that observed in other globular clusters. There is hardly a correlation with \[Fe/H\], but a close correlation with \[C+N/H\].
However, the comparison of Strömgren metallicities to the larger sample of Suntzeff & Kraft (1996) shows that there is a coupling to the iron abundance indicating that this has a primordial cause. It is already visible among the metal-poor population and we interpret it as another manifestation of the well established increasing contribution of intermediate-age populations with increasing iron abundance.
The comparison of the cumulative radial distribution of the two main populations in $\omega$ Cen exhibits a higher concentration of the metal-rich stars. The youngest, most metal-rich population has an asymmetrical distribution around the cluster center with a concentration towards the South.
When combining the kinematics of more than 1000 red giants with our Strömgren metallicities, a significant rotation of the metal-poor population was confirmed, whereas the most metal-rich stars do not rotate.
Our findings are consistent with a scenario in which enrichment of the cluster has taken place over a period of 3–6 Gyr. The conditions for such an enrichment can perhaps be found in the nuclei of dwarf galaxies. All characteristic properties of $\omega$ Cen (flattening, abundance pattern, age spread, kinematic and spatial differences between metal-poor and metal-rich stars) could be understood in the framework of a scenario where infall of previously enriched gas occurred in $\omega$ Cen over a long period of time. Only the enrichment of nitrogen, carbon, and s-process elements took place within $\omega$ Cen, where the infalling gas mixed with the matter expelled from AGB stars.
The capture and dissolution of a nucleated dwarf galaxy by our Milky Way and the survival of $\omega$ Cen as its nucleus would thus be an attractive explanation for this extraordinary object. Several contributions in these conference proceedings also support this idea and rule out other possibilities like a chemically diverse parent cloud or a merger of two clusters.
We thank Bingrong Xie for providing us with radial velocities from their Fabry-Perot sample (see also Gebhardt, this volume). This project was partly supported through ‘Proyecto FONDECYT 3980032’.
Bell, R. A., Gustafsson, B. 1978, A&AS, 34, 229
Bessell, M. S., Norris, J. 1976, ApJ, 208, 369
Binney, J., Gerhard, O., Silk, J. 2001, MNRAS, 321, 471
Blitz, L., Robishaw, T. 2000, ApJ, 541, 675
Brown, J. A., Wallerstein, G. 1993, AJ, 106, 133
Cohen, J. G., Bell, R. A. 1986, ApJ, 305, 698
Dinescu, D. I., Girard, T. M., van Altena, W. F. 1999, AJ, 117, 1792
Djorgovski, S. 1993, in ASP Conf. Ser. Vol. 50, Structure and Dynamics of Globular Clusters, ed. S. G. Djorgovski & G. Meylan, (San Francisco: ASP), 373
Gonzalez, G., Wallerstein, G. 1994, AJ, 108, 1325
Grebel, E. K., Richtler, T. 1992, A&A, 253, 359
Harris, W. E. 1996, AJ, 112, 1487
Hilker, M. 2000, A&A, 355, 994
Hilker, M., Richtler, T. 2000, A&A, 362, 895
Hughes, J. D., Wallerstein, G. 1999, AJ, 119, 1225
Jurcsik, J. 1998, ApJ, 506, L113
Kraft, R. P. 1994, PASP, 106, 553
Lee, Y.-W., Joo, J.-M., Sohn, Y.-J., Rey, S.-C., Lee, H.-C., Walker, A.R. 1999, Nature, 402, 55
Meylan, G. 1987, A&A, 184, 144
Norris, J., Bessell, M. S. 1975, ApJ, 201, L75 (erratum in 210, 618)
Norris, J. E., Da Costa, G. S. 1995, ApJ, 447, 680
Norris, J. E., Freeman, K. C., Mayor, M., Seitzer, P. 1997, ApJ, 487, L187
Norris, J. E., Freeman, K. C., Mighell, K. J. 1996, ApJ, 462, 241
Pagel, B. E. J. 1997, in Nucleosynthesis and Chemical Evolution of Galaxies, Cambridge University Press
Pancino, E., Ferraro, F. R., Bellazzini, M., Piotto, G., Zoccali, M. 2000, ApJ, 534, L83
Pryor, C., Meylan, G. 1993, in ASP Conf. Ser. Vol. 50, Structure and Dynamics of Globular Clusters, ed. S. G. Djorgovski & G. Meylan, (San Francisco: ASP), 357
Reed, B. C., Hesser, J. E., Shawl, S. J. 1988, PASP, 100, 545
Richter, P., Hilker, M., Richtler, T. 1999, A&A, 350, 476
Richtler, T. 1989, A&A, 211, 199
Smith, V. V., Suntzeff, N. B., Cunha, K., Gallino, R., Busso, M., Lambert, D. 2000, AJ, 119, 1239
Suntzeff, N. B., Kraft, R. P. 1996, AJ, 111, 1913
Tsujimoto, T., Nomoto, K., Yoshii, Y., Hashimoto, M., Yanagida, S., Thielemann, F.-K. 1995, MNRAS, 277, 945
Vanture, A. D., Wallerstein, G., Brown, J. A. 1994, PASP, 106, 835
Webbink R. F. 1985, in IAU Symp. 113, Dynamics of Star Clusters, ed. J. Goodman & P. Hut, (Dordrecht: Reidel), 541
Zinn, R. 1985, ApJ, 293, 424
[^1]: Note that in our paper (Hilker & Richtler 2000) the value $-$0.2 dex was erroneously given.
---
abstract: 'We develop a new multi-detector signal-based discriminator to improve the sensitivity of searches for gravitational waves from compact binary coalescences. The new statistic is the traditional $\chi^2$ computed on a null-stream synthesized from the gravitational-wave detector strain time-series of three detectors. This null-stream-$\chi^2$ statistic can be extended to networks involving more than three detectors as well. The null-stream itself was proposed as a discriminator between correlated unmodeled signals in multiple detectors, such as arising from a common astrophysical source, and uncorrelated noise transients. It can be useful even when the signal model is known, such as for compact binary coalescences. The traditional $\chi^2$, on the other hand, is an effective discriminator when the signal model is known and lends itself to the matched-filtering technique. The latter weakens in its effectiveness when a signal lacks enough cycles in band; this can happen for high-mass black hole binaries. The former weakens when there are concurrent noise transients in different detectors in the network or the detector sensitivities are substantially different. Using simulated binary black hole signals, noise transients and strain for Advanced LIGO (in Livingston and Hanford) and Advanced Virgo detectors, we compare the performance of the null-stream-$\chi^2$ statistic with that of the traditional $\chi^2$ statistic using receiver-operating characteristics. The new statistic may form the basis for better signal-noise discriminators in multi-detector searches in the future.'
author:
- William Dupree
- Sukanta Bose
title: 'Multi-detector null-stream-based $\chi^2$ statistic for compact binary coalescence searches'
---
Introduction
============
The last few years have witnessed major progress in gravitational wave (GW) astronomy [@Abbott:2016blz; @Abbott:2016nmj; @Abbott:2017vtc; @Abbott:2017oio; @TheLIGOScientific:2017qsa], with the LIGO and Virgo detectors [@ligo; @virgo] successfully observing numerous black hole-black hole (BBH) mergers as well as one neutron star-neutron star merger (BNS) [@LIGOScientific:2018mvr] – jointly called compact binary coalescences (CBCs). This progress has led to the growth of ground-based detection efforts, in signal processing as well as the planning of new detectors and sites. As detections become more common there is a growing need for statistical analysis that improves our ability to separate signals from spurious noise transients [@TheLIGOScientific:2017lwt; @Bose:2016sqv; @Bose:2016jeo; @Mukund:2016thr; @Zevin:2016qwy; @Nuttall:2018xhi; @Berger:2018ckp; @Cavaglia:2018xjq; @Walker:2017zfa]. In this study we focus on this need, but with an emphasis on utilizing three or more detectors’ data streams in unison with a network-wide statistic. There have been methods proposed in coherent searches [@Bose:1999pj; @Pai:2000zt; @Bose:2011km; @Harry:2010fr; @Talukder:2013ioa] in data from multiple detectors for improving the separation of false positives from GW signals. These coherent methods involve multiple statistical tests, some applied separately to individual detectors while others applied to the network data jointly. Two such methods are the $\chi^2$-distributed statistics [@Allen:2004gu; @Babak:2012zx; @Dhurandhar:2017aan] and the network null-stream [@Guersel:1989th].
It is well known how Gaussian random noise influences the construction of GW search statistics for modeled signals, and how $\chi^2$ distributed statistics can distinguish between signals and certain types of non-Gaussian noise transients, or “glitches" that are sometimes present in the detector data [@Dhurandhar:2017aan]. These $\chi^2$ tests can take several forms (since the sum of squared normally distributed random variables is $\chi^2$ distributed), but one of the more common tests suited to distinguishing glitches from signals is described in Allen [@Allen:2004gu], which relies on dividing the matched filter [@Helstrom] over a putative signal’s band into several sub-bands and checking for consistency between the distribution of its anticipated and observed values in them. On the other hand, for separating signals from glitches in a network of detectors the null stream [@Guersel:1989th; @Chatterji:2006nh; @Klimenko:2008fu] has been found of some use since it is an antenna-pattern weighted combination of data from detectors that has the GW signal strain eliminated from it for the correct sky position of the source. Here we propose a network-wide statistic for CBC searches that combines both. With signal and noise simulations we demonstrate that this statistic has the potential for being useful in CBC searches in LIGO and Virgo data.
Our paper is laid out as follows: In Sec. \[preliminaries\] we introduce the conventions and notations used in this work. Section \[traditionalchisquared\] discusses the traditional $\chi^2$ test for a single detector, followed by how the null stream can be used as a discriminator in Sec. \[nullstream\]. In Sec. \[idealnullchisquared\] we develop the combined null-stream-$\chi^2$ statistic for the simple case of a network that has all detectors with identical noise power-spectral density. We generalize this statistic to more realistic networks in Sec. \[fullnullchisquared\]. Testing of our new statistic in simulated data is presented in Sec. \[numericaltesting\].
Preliminaries
=============
Here we describe the waveforms we use to model the CBC signals and the noise transients or glitches. We also present the matched-filter based statistic that is at the heart of the CBC searches in GW detectors.
The signal {#cbcsignal}
----------
We consider non-spinning CBC signals in Advanced LIGO (aLIGO) and Advanced Virgo (AdV) like detectors in this work. The GW strain $h(t)$ in such detectors due to a CBC signal can be expressed in terms of the plus and cross polarization components $h_{+,\times}(t)$ and the corresponding antenna patterns $F_{+,\times}$ as follows: $$\label{strain}
h(t) = F_+ h_+(t) + F_\times h_\times(t),$$ where $$\begin{aligned}
\label{polorizestrain}
h_+(t) &=& H_+ (m_1, m_2, \epsilon, r)\, \cos\Psi(t)\,, \nonumber\\
h_\times(t) &=& H_\times (m_1, m_2, \epsilon, r)\, \sin\Psi(t)\,.\end{aligned}$$ In the above expressions $r$ is the luminosity distance to the binary, $\epsilon$ is the inclination of the binary’s orbit relative to the line of sight, and the signal phase $\Psi$ depends on the component masses $m_1$ and $m_2$, apart from auxiliary parameters such as the detector’s seismic cut-off frequency $f_s$ and the signal’s time of coalescence $t_c$. Moreover, $H_{+, \times}$ are the two polarization amplitudes.
Matched Filtering {#matchedfiltering}
-----------------
The basic construct used to search for CBC signals is matched filtering, which involves cross-correlating detector strain data with templates that are modeled after the waveforms described above. Detector data comprises noise, $n(t)$, and sometimes a GW strain signal, $h(t)$. The matched filtering process takes the data, $s(t) = h(t) + n(t)$, and compares it to a template, $Q(t)$, designed to match the GW strain. It is common to use a normalized complex template modelled after theoretical waveforms: $$\label{unitcomplexsignal}
Q_{(\alpha)}(t) = \mathcal{N}^Q_{(\alpha)}\, Q(m_1,m_2,f_s,t_c),$$ where $\alpha$ is the detector index and $\mathcal{N}^Q_{(\alpha)}$ is a (detector noise power spectral density dependent) normalization factor such that $\left<\tilde{Q}_{(\alpha)},\tilde{Q}_{(\alpha)}\right>_{(\alpha)} = 2$. The angular brackets denote an inner product, defined by $$\label{crosscorrelation}
\left<\tilde{a},\tilde{b}\right>_{(\alpha)} = 2 \int_{0}^{\infty} \frac{\tilde{a}^*(f)\,\tilde{b}(f)}{S_{h (\alpha)}(f)}\, df,$$ where the tilde above a symbol denotes its Fourier transform and $S_{h (\alpha)} (f)$ is the $\alpha$th detector’s two-sided noise power spectral density (PSD): $\overline{\tilde{n}^*(f)\tilde{n}(f')} = \delta(f-f')S_h (f)$, with the overbar symbolizing an average over multiple noise realizations. It is this inner product that will be the basis for our statistical analysis of GW signals. The matched filtering process is performed by using this inner product between data $s(t)$ and complex templates $Q(t)$, computed for various values of the signal parameters (such as $m_{1,2}$). Note that the choice of normalization for $\tilde{Q}_{(\alpha)}$ is consistent with our convention where both its real and imaginary parts are each normalized to unity, but is at variance with Ref. [@Allen:2004gu]. Nonetheless such a choice has no effect on the final result.
The matched-filter output is one of the primary ingredients of a decision statistic that allows one to assess if a feature in the GW data is consistent with a GW signal with a high enough significance. The decision statistic may also depend on other characteristics of the data, such as the traditional chi-square [@Usman:2015kfa; @Talukder:2013ioa]. When the decision statistic crosses a preset threshold value for a feature in the data, we will term that feature a trigger. Such features can be noise or signals and are characterized by the decision statistic (and other auxiliary statistics, such as the traditional chi-square), the properties of the template (e.g., $m_{1,2}$), the time of the trigger, etc. We next define the inner product between data and complex template as $$\label{zdefinition}
z \equiv \left<\tilde{Q},\tilde{s}\right>.$$ Then the SNR is just $$\label{SNR}
{\rm SNR} = \frac{|z|}{\sqrt{\left<\tilde{Q},\tilde{Q}\right>}}\,,$$ where the denominator is a normalization factor. We again assume the noise to be Gaussian, stationary, and, in the case of multiple detectors, independent of the noise in other detectors.
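As an illustration of the inner product and SNR defined above, the following self-contained Python sketch works with discretized positive-frequency arrays; the flat PSD, the toy template, and the noise normalization are all placeholder assumptions, not the search implementation.

```python
import numpy as np

def inner(a_f, b_f, psd, df):
    """Discrete stand-in for <a,b> = 2 * integral_0^inf df a*(f) b(f) / S_h(f)."""
    return 2.0 * np.sum(np.conj(a_f) * b_f / psd) * df

def snr(Q_f, s_f, psd, df):
    """|z| / sqrt(<Q,Q>), with z = <Q, s> the complex matched-filter output."""
    z = inner(Q_f, s_f, psd, df)
    return abs(z) / np.sqrt(inner(Q_f, Q_f, psd, df).real)

# Toy setup (all numbers are placeholders): flat PSD, a chirp-like complex
# template normalized so that <Q,Q> = 2, and data = signal + white noise.
rng = np.random.default_rng(7)
df, n = 0.25, 4096
f = np.arange(1, n + 1) * df
psd = np.full(n, 1e-2)
Q_f = f ** (-7.0 / 6.0) * np.exp(2j * np.pi * f ** 0.6)
Q_f = Q_f * np.sqrt(2.0 / inner(Q_f, Q_f, psd, df).real)
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(psd / (2 * df))
s_f = 6.0 * Q_f + noise
print(f"recovered SNR ~ {snr(Q_f, s_f, psd, df):.1f}")
```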
Sine-Gaussian Glitch {#sgglitch}
--------------------
In this study our limited objective is to improve the separation of false positives, caused by noise transients, from CBC signals in a space similar to that of the trigger SNR and trigger $\chi^2$. We use the following sine-Gaussian function to model noise transients [@Chatterji:2005thesis; @Bose:2016jeo]: $$\label{glitchstrain}
u(t) = u_0\, e^{-\left(\frac{2\pi f_0}{K}\right)^2 t^2}\, \sin\left(2\pi f_0 t\right)\,,$$ which has amplitude $u_0$ and quality factor $K$. The glitch is centered at frequency $f_0$. If the amplitude is large and the central frequency lies within the band of a CBC template, then their matched filter can return a SNR value large enough to create a false trigger.
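A time-domain sketch of this glitch model is given below; note that the exact placement of the quality factor in the Gaussian envelope follows our reading of the equation above and should be treated as an assumption.

```python
import numpy as np

def sine_gaussian(t, u0=1.0, f0=120.0, K=20.0, t0=0.0):
    """Sine-Gaussian glitch: a Gaussian envelope (width set by the quality
    factor K) times a sinusoid centred at frequency f0 [Hz]."""
    tau = t - t0
    envelope = np.exp(-((2.0 * np.pi * f0 / K) ** 2) * tau**2)
    return u0 * envelope * np.sin(2.0 * np.pi * f0 * tau)

# Example: 1 s of a K=20, 120 Hz glitch sampled at 4096 Hz.
t = np.arange(-0.5, 0.5, 1.0 / 4096.0)
g = sine_gaussian(t)
print(g.shape, g.max())
```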
Traditional $\chi^2$ Discriminator {#traditionalchisquared}
==================================
Noise transients can sometimes masquerade as signals and, thereby, show up as potential detections during matched filtering of GW detector data with CBC templates. To combat this issue in a single wide-band detector, such as LIGO Hanford (H), LIGO Livingston (L), or Virgo (V), the traditional $\chi^2$ discriminator was designed to improve the ability to distinguish between such noise transients and CBC signals, especially when the SNR of the noise triggers is sufficiently high. Let us assume that the noise in a given detector is Gaussian, stationary, and uncorrelated with that in the other detectors. Next one breaks up the matched filtering integral $z = \left<\tilde{Q},\tilde{s}\right>$ into $p$ smaller sub-bands, where each sub-band is integrated over the frequency interval $\Delta f_j$. Consequently, the matched-filter output can be expressed as $$\label{zfreqpartition}
\left<\tilde{Q},\tilde{s}\right> = 2 \int_0^{\infty} \frac{\tilde{Q}^*(f)\,\tilde{s}(f)}{S_h(f)}\, df
= 2 \sum_{j=1}^p \int_{\Delta f_j} \frac{\tilde{Q}^*(f)\,\tilde{s}(f)}{S_h(f)}\, df
= \sum_{j=1}^p \left<\tilde{Q},\tilde{s}\right>_j\,,$$ and one can define $z_j = \left<\tilde{Q},\tilde{s}\right>_j$ to be the matched filtered output over the range $\Delta f_j = [f_{j-1},f_j]$. The frequency partitions can be unequal in frequency range, meaning $\Delta f_1$ can be different in length from $\Delta f_p$. To handle this difference in size in the final statistic we require that the frequency spacing adheres to $$\label{qj}
q_j = \frac{1}{2} \left<\tilde{Q},\tilde{Q}\right>_j\,,$$ so that the normalization of $\tilde{Q}(f)$ ensures that the sum of the $q_j$’s is unity.
The usefulness of statistics with $\chi^2$ distributions partly arises from the property that their mean is equal to their number of degrees of freedom. Using the partitioned matched filtering band from above we can design such a statistic by comparing the smaller frequency sub-bands to the total matched filtering output. This is done by taking a difference of the sub-bands with a weighted value of the total, $$\label{deltaz}
\Delta z_j = z_j - q_j z\,,$$ so that the sum of $\Delta z_j$ over $j$ equals zero. Since these integrals return complex values, it is important that we take the modulus squared so that the statistic lies in the reals. The expected values of these objects lead to the statistic defined as [@Allen:2004gu] $$\label{traditionalstatistic}
\chi^2 = \sum_{j=1}^p \frac{|\Delta z_j|^2}{q_j}\,,$$ where the denominator comes from weighting to ensure that $\overline{\chi^2} = 2(p-1)$. The statistic is now complete; and Eq. (\[deltaz\]) implies that its mean equals the number of degrees of freedom. Using Eq. (\[traditionalstatistic\]) now distinguishes the glitches from the true signals by the difference in the value of this statistic. Glitches will typically return significantly higher values of $\chi^2$ compared to GW signals, for large enough SNRs.
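The construction just described can be sketched in a few lines of numpy; the choice of (nearly) equal-$q_j$ band edges and the discrete normalization are illustrative assumptions of this sketch rather than the conventions of any production pipeline.

```python
import numpy as np

def traditional_chisq(Q_f, s_f, psd, df, p=12):
    """Split the matched filter <Q,s> into p sub-bands with (nearly) equal
    weights q_j, form Delta z_j = z_j - q_j z, and return
    sum_j |Delta z_j|^2 / q_j -- the construction of the traditional chi^2
    (up to the overall normalization convention of this sketch)."""
    dz = 2.0 * np.conj(Q_f) * s_f / psd * df   # integrand of z = <Q, s>
    dq = np.abs(Q_f) ** 2 / psd                # integrand of q_j (up to a norm)
    cq = np.cumsum(dq) / dq.sum()              # cumulative weight in (0, 1]
    edges = np.searchsorted(cq, np.arange(1, p) / p)
    z_j = np.array([seg.sum() for seg in np.split(dz, edges)])
    q_j = np.array([seg.sum() for seg in np.split(dq, edges)]) / dq.sum()
    z = z_j.sum()
    return float(np.sum(np.abs(z_j - q_j * z) ** 2 / q_j))
```

Applied to arrays like those of the earlier toy matched-filter sketch, such a statistic stays small for a matching signal and grows for a transient whose power is distributed across the band differently from the template.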
The null-Stream Veto {#nullstream}
====================
In addition to single-detector $\chi^2$ discriminators, such as the traditional one, and multi-detector $\chi^2$ discriminators, the null-stream construction has also been used to distinguish CBC signals (or parts thereof) from noise transients [@Colaboration:2011np; @Allen:2004gu; @Pai:2000zt; @Bose:2011km; @Harry:2010fr; @Babak:2012zx; @Talukder:2013ioa; @Bose:2016jeo; @Dhurandhar:2017aan]. The null stream is constructed by creating a weighted linear combination of the different detector data streams $s(t)$ in such a way that the contribution from $h(t)$ is eliminated in that combination and all that remains is noise. We may write Eq. (\[strain\]) for a network of $D$ detectors as
$$\label{vectordata}
\begin{pmatrix}
\tilde{s}_1\\
\tilde{s}_2\\
\vdots\\
\tilde{s}_{D}
\end{pmatrix}
=
\begin{pmatrix}
F^+_1 & F^\times_1\\
F^+_2 & F^\times_2\\
\vdots & \vdots\\
F^+_{D} & F^\times_{D}
\end{pmatrix}
\begin{pmatrix}
\tilde{h}_+\\
\tilde{h}_\times
\end{pmatrix}
+
\begin{pmatrix}
\tilde{n}_1\\
\tilde{n}_2\\
\vdots\\
\tilde{n}_{D}
\end{pmatrix}$$
, where the data and noise are column vectors, while the signal contribution is formed by the matrix of antenna-pattern functions acting on the GW strain vector. The detector index then takes the values $\alpha \in [1...D]$. For the following discussion, we define $\textbf{F}^+$ to be a $D$-dimensional vector with ordered components $F^+_1,F^+_2,...,F^+_D$, and similarly for $\textbf{F}^\times$.
In the simple hypothetical case of three aligned detectors, with identical noise PSDs, the null stream [@Chatterji:2006nh] is just $$\label{simplenullstream}
N(t) = s_1(t) + s_2(t+\tau_2) - 2\, s_3(t+\tau_3),$$ which accounts for the correct signal time delays $\tau_{(\alpha)}$ relative to the first (reference) detector to ensure that the signal strain cancels out, leaving behind just noise. A more interesting expression for such a stream can be formed for the case of non-aligned detectors: $$\label{3detectornullstream}
N(t) = A_1 s_1(t) + A_2 s_2(t+\tau_2) + A_3 s_3(t+\tau_3),$$ where the components of $\textbf{A}$ can be found from the normalized cross product $\textbf{F}^+ \times \textbf{F}^{\times}/ ||\textbf{F}^+ \times \textbf{F}^{\times}|| $. For a network with $D>3$, $\textbf{A}$ takes a more general form in terms of $\textbf{F}^{+,\times}$, as given in Ref. [@Chatterji:2006nh]. In the case of detector data containing a signal embedded in noise, this linear combination of data vectors $N(t)$ for the right source sky position would contain only noise.
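For the three-detector case, the weights $\textbf{A}$ can be obtained directly from the antenna-pattern vectors, as in the following sketch; the antenna-pattern values are placeholders rather than those of an actual source sky position.

```python
import numpy as np

def null_weights(F_plus, F_cross):
    """Weights A for the 3-detector null stream: the normalized cross
    product of the antenna-pattern vectors F+ and Fx."""
    A = np.cross(F_plus, F_cross)
    return A / np.linalg.norm(A)

# Placeholder antenna patterns for three non-aligned detectors:
F_plus = np.array([0.30, -0.45, 0.10])
F_cross = np.array([0.55, 0.20, -0.35])
A = null_weights(F_plus, F_cross)

# With time-shifted strains s1, s2, s3 (arrays), the null stream would be
#   N = A[0]*s1 + A[1]*s2 + A[2]*s3,
# and a pure GW contribution h = F+ h_+ + Fx h_x cancels:
print(A @ F_plus, A @ F_cross)   # both ~0 up to rounding
```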
The time series of ${\sf n}$ data points that makes up the $\alpha^{\rm th}$ detector’s data stream, $s_{\alpha}(t)$, may be thought of as an ${\sf n}$-dimensional vector. From this we see that the right-hand side of Eq. (\[vectordata\]) is made up of $D$ row vectors, each of which resides in an ${\sf n}$-dimensional vector space. If the detectors are non-aligned, then these $D$ vectors will occupy a $D$ dimensional space. Gravitational wave signals will invariably occupy a two-dimensional sub-space, which is spanned by $\textbf{F}^{+}$ and $\textbf{F}^{\times}$. The remaining $D-2$ basis vectors are then orthogonal to them and define their null space. We group these $D-2$ basis vectors into a matrix $\textbf{A}$ [@Chatterji:2006nh] with elements $A_{L\alpha}$: $$\textbf{A} =
\begin{pmatrix}
A_{11} & A_{12} & \ldots & A_{1 D}\\
A_{21} & A_{22} & \ldots & A_{2 D}\\
\vdots & \vdots & \ddots & \vdots\\
A_{(D-2)1} & A_{(D-2)2} & \ldots & A_{(D-2)D}
\end{pmatrix}$$
, where the capital Roman index on $A_{L\alpha}$ is the null-stream index (ranging from 1 to $D-2$) and the Greek letter denotes the detector index (ranging from 1 to $D$). Each row is normalized to be of unit magnitude, so that $\sum_{\alpha=1}^D A_{L \alpha} A_{\alpha R}= \delta_{LR}$. If we apply $\textbf{A}$ to Eq. (\[vectordata\]) we create a linear combination of data streams that consists only of weighted noise from each detector and no signal. This is termed the null stream and arises because the projection of GW signals onto the aforementioned $(D-2)$-dimensional space is zero.
The null-stream can be used to devise statistics that can discriminate between GW signals and noise glitches. In Ref. [@Chatterji:2006nh], the authors construct a whitened null-stream to design a statistical test that compares the energy through cross-correlation and auto-correlation of the data in the discrete domain. In doing so they are able to distinguish between GW burst signals and noise glitches. The test is done by comparing the null energy $$\label{nullenergy}
E_{\rm null} = 2 \int_0^{\infty} |\tilde{N}(f)|^2\, df,$$ to $$\label{incoherentenergy}
E_{\rm inc} = 2 \int_0^{\infty} \sum_{\alpha = 1}^D B_{\alpha \alpha}\, |\tilde{s}_{\alpha}(f)|^2\, df,$$ the incoherent energy, with $B_{\alpha \beta} \equiv \sum_{L = 1}^{(D-2)} A_{\alpha L} A_{L \beta} $. Whereas the null energy is dependent on both auto-correlation and cross-correlation of data streams of the various detectors in the network, the incoherent energy depends only on the auto-correlation terms [@Chatterji:2006nh]. They then use the energy values to distinguish between event types. If a GW signal is present in the data its contribution to the null energy will be eliminated by its cancellation from the null stream construction. When this is the case the relationship between energies is $\overline{E_{null}} < \overline{E_{inc}} $. If there is only noise and glitches in the data then $\overline{E_{null}} \approx \overline{E_{inc}} $. By taking the ensemble average of different noise realizations for the two types of energies they are able to separate the different event types. As we will show further in our work, this is not the only way null stream construction can be used to create a statistic that is useful in distinguishing between true signal and glitch.
Idealized Null-Stream-$\chi^2$ {#idealnullchisquared}
==============================
We now wish to design a statistic that incorporates the strengths of both the traditional $\chi^2$ test and the null-stream veto. This test would still use the matched filtering process to compare the signal in smaller frequency sub-bands, but instead of filtering the data from a single detector it would filter the null stream constructed from the data of multiple detectors in a network. Done correctly, the statistic will return significantly smaller values for networks whose data contain a GW signal than for those containing glitches.
We initially consider a simple case in which one has a network of unaligned detectors that all have the same noise PSD. We generalize this to non-identical detector PSDs later. For now we focus on one null stream, which (after suppressing the null-stream index on $A_{L\alpha}$) takes the form $$\label{detectornullstream}
N = A_1 s_1 (t) + A_2 s_2 (t+\tau_2) +...+ A_{D} s_{D} (t+\tau_{D})\,,$$ so that in the case of a GW signal the weights $A_{\alpha}$ cancel out all but noise in the data. The goal is to manipulate Eq. (\[detectornullstream\]) into a form that takes advantage of the matched filtering. To this end, we multiply both sides of the null stream by the filtering function $\tilde{Q}(f)/S_h(f)$. Relabeling the result and integrating gives $$\label{singlePSDM}
M = A_1 z_1 + A_2 z_2 +...+A_{D} z_{D}\,,$$ where we have taken advantage of all detectors having the same PSD to create the matched filtering outputs $z_{\alpha}$.
It is important to understand the statistical properties of Eq. (\[singlePSDM\]) so that we may use them to mould the final statistic. We first look at the average of the square of $M$. This is simplified by knowing $$\label{zzaverage}
\overline{z_{\alpha} z^*_{\beta}} = \overline{z_{\alpha}}\; \overline{z^*_{\beta}} + \left<\tilde{Q},\tilde{Q}\right> \delta_{\alpha\beta}\,,$$ where the Greek letters denote the detector index. Having this average makes the work of finding $\overline{|M|^2}$ simple, with the only unseen complexity coming from the null stream construction. Because all terms of the integral share the same denominator, the null stream construction cancels the cross terms arising from the squaring process, so that $$\label{Maverage}
\overline{|M|^2} = \sum_{\alpha=1}^D A_{\alpha}^2 \left<\tilde{Q},\tilde{Q}\right> = 2\,,$$ due to the properties of $\textbf{A}$ and the normalization of the template.
Using the concepts of Allen [@Allen:2004gu] we now construct $$\label{deltaMj}
\Delta M_j = M_j - q_j M,$$ which is the difference between the contribution to $M$ arising from the $j^{th}$ sub-band (from a total of $p$ sub-bands) and a weighted total, much like how $\Delta z_j$ was constructed in Eq. (\[deltaz\]). Finding the statistical properties piece by piece, without exact specification of the frequency bands, we see that Eq. (\[zzaverage\]) can be split into $p$ pieces, for the $j$th and $k$th sub-bands, $$\label{zzjaverage}
\overline{z_{\alpha j}\, z^*_{\beta k}} = \overline{z_{\alpha j}}\; \overline{z^*_{\beta k}} + \delta_{\alpha\beta}\, \delta_{jk} \left<\tilde{Q},\tilde{Q}\right>_j\,,$$ where $j$ is the frequency sub-band index. (Note that there is no sum over $j$ in the last term on the right-hand side above.) From Eq. (\[Maverage\]) it is manifest that $$\label{Mjaverage}
\overline{|M_j|^2} = \sum_{\alpha=1}^D A_{\alpha}^2 \left<\tilde{Q},\tilde{Q}\right>_j = \left<\tilde{Q},\tilde{Q}\right>_j\,,$$ which implies that the average is still dependent on $j$. Again, since we have assumed the noise PSD is the same for all detectors we may define the $q_j$ terms as well as the frequency sub-bands from the same integral as in Eq. (\[qj\]).
We must now put these terms together to find the properties of $\overline{|\Delta M_j|^2}$. It will also be important to know $\overline{M_j M^*}$. As Allen does [@Allen:2004gu], we will use symmetry, and the fact that $$\label{symmetryM}
\sum_{j=1}^p \overline{M_j M^*} = \overline{|M|^2}\,,$$ to find that both cross terms of $\overline{|\Delta M_j|^2}$ are $q_j\, \overline{|M_j|^2}$. Combining these properties leads to $$\begin{aligned}
\label{a}
\overline{|\Delta M_j|^2} &=& \overline{|M_j - q_j M|^2} \nonumber\\
&=& \overline{|M_j|^2} - q_j\, \overline{M_j^* M} - q_j\, \overline{M_j M^*} + q_j^2\, \overline{|M|^2} \nonumber\\
&=& \left<\tilde{Q},\tilde{Q}\right>_j - 2 q_j^2\, \overline{|M|^2} + q_j^2\, \overline{|M|^2} \nonumber\\
&=& 2 q_j (1 - q_j)\,,\end{aligned}$$ dependent only on the weights defined by the number of frequency sub-bands chosen. Much like Allen [@Allen:2004gu] we take this result and define our statistic $$\label{rho1}
\rho = \sum_{j=1}^p \frac{|\Delta M_j|^2}{q_j}\,,$$ giving it an average of $2(p-1)$. Thus $\rho$ is $\chi^2$ distributed with its average being the same as the degrees of freedom. Lastly, to account for multiple null streams for cases when the network has four or more detectors we generalize $\rho$ by including their contributions. We do so by first reviving the null-stream index in Eq. (\[singlePSDM\]) such that $$\label{singlePSDMnullstreamL}
M_{L} \equiv A_{L1} z_1 + A_{L2} z_2 +...+A_{L D} z_{D}\,,$$ and define $\Delta M_{Lj}$ for each null stream similar to the one in Eq. (\[deltaMj\]). We finally generalize the $\rho$ in Eq. (\[rho1\]) by including the sum over the null streams to arrive at $$\label{rhoPSDnull}
\rho = \sum_{L=1}^{D-2} \sum_{j=1}^p \frac{|\Delta M_{Lj}|^2}{q_j}\,,$$ noting that the $q_j$ values are unaffected because the noise PSD is the same in all detectors. From Eq. (\[rhoPSDnull\]) it is clear that the degrees of freedom are accounted for in the average, which comes out to $2(D-2)(p-1)$.
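A schematic implementation of this idealized statistic for a network sharing a single PSD is sketched below; the band splitting reuses the same illustrative conventions as the traditional-$\chi^2$ sketch above, and the streams are assumed to have been time-shifted to a common reference in advance.

```python
import numpy as np

def banded_filter(Q_f, s_f, psd, df, p):
    """Per-band matched-filter pieces z_j and band weights q_j."""
    dz = 2.0 * np.conj(Q_f) * s_f / psd * df
    dq = np.abs(Q_f) ** 2 / psd
    cq = np.cumsum(dq) / dq.sum()
    edges = np.searchsorted(cq, np.arange(1, p) / p)
    z_j = np.array([seg.sum() for seg in np.split(dz, edges)])
    q_j = np.array([seg.sum() for seg in np.split(dq, edges)]) / dq.sum()
    return z_j, q_j

def null_stream_chisq_ideal(A, Q_f, streams_f, psd, df, p=12):
    """rho = sum_j |Delta M_j|^2 / q_j, with M_j = sum_alpha A_alpha z_{alpha j};
    all detectors share one PSD (single null stream), and streams_f are
    assumed to be already time-shifted to a common reference."""
    z_list = [banded_filter(Q_f, s_f, psd, df, p)[0] for s_f in streams_f]
    _, q_j = banded_filter(Q_f, streams_f[0], psd, df, p)
    M_j = sum(a * z_j for a, z_j in zip(A, z_list))
    dM = M_j - q_j * M_j.sum()
    return float(np.sum(np.abs(dM) ** 2 / q_j))
```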
Full Null-stream-$\chi^2$ {#fullnullchisquared}
=========================
Having explained the basic idea of the null-stream-$\chi^2$ statistic in Sec. \[idealnullchisquared\], we develop it further here so that the resulting statistic addresses some of the challenges associated with real data. One such challenge is that the noise PSD is very likely to vary from one detector to another. Another one is the fact that the detectors are oriented differently around the globe.
We begin by pointing out that Eq. (\[Maverage\]) was clean and concise because all detectors there were assumed to have the same noise PSD. To address the fact that these noise PSDs will be different in general, we first introduce the over-whitened data stream $\tilde{s}_{w\alpha} \equiv \tilde{s}_{\alpha} / S_{h \alpha}$. Similarly, the over-whitened antenna-pattern functions are used to construct the $\textbf{F}$ matrix: $$\label{whitenedFmatrix}
\begin{pmatrix}
F^+_1/S_{h1} & F^\times_1/S_{h1}\\
F^+_2/S_{h2} & F^\times_2/S_{h2}\\
\vdots & \vdots\\
F^+_{D}/S_{hD} & F^\times_{D}/S_{hD}
\end{pmatrix}$$
, where we have divided each $F^{+,\times}_\alpha$ by the noise PSD of the corresponding detector. The $A_{w \alpha}$ are obtained from the weighted antenna-pattern functions in the same way as the $A_{\alpha}$ are deduced from the unweighted ones above. In the case of a network with three non-aligned detectors the $A_{w \alpha}$ take the form $$\label{threedetnetAw}
A_{w \alpha} = \frac{A_{\alpha}\, S_{h \alpha}(f)}{\sqrt{\sum_{\beta=1}^{3} A_{\beta}^2\, S_{h \beta}^2(f)}}\,,$$ which shows their explicit dependence on the $A_{\alpha}$ and the detector noise PSDs. They are now frequency dependent, owing to the newly incorporated PSD factors. We also use the bracket operation to define the following inner product: $$\label{newfilter}
\left[a,b\right] = 2 \int_0^{\infty} \tilde{a}^*(f)\, \tilde{b}_w(f)\, df,$$ where $\tilde{b}_w(f)$ is obtained by over-whitening $\tilde{b}(f)$. Therefore, it is clear that $[Q,s_{w \alpha}] = z_{\alpha} $.
Up to this point the templates used have been constructed from two orthonormal pieces for each polarization. Due to the frequency dependence of the $A_{w\alpha}$ terms we choose to construct our statistic by filtering for each polarization separately; the motivation for this will become clear shortly. To handle the polarizations separately we note that the complex filter $Q$ can be decomposed into its real and imaginary parts such that $$\label{plusfilter}
\tilde{Q}_+ = \Re\left(\tilde{Q}\right)$$ and $$\label{crossfilter}
\tilde{Q}_\times = \Im\left(\tilde{Q}\right),$$ where we will use a network-wide plus and cross filter on the data. We will focus on the plus polarization first: $$\label{networkPSDM}
W_+ = \left[Q_+,N_w\right]\,,$$ where, similar to Sec. \[idealnullchisquared\], we have constructed a filtered null stream, with $N_w$ being the over-whitened null stream. With this network template we can now use the matched-filter output outlined in Eq. (\[newfilter\]) as the basis for constructing a network-wide statistic that will be $\chi^2$ distributed.
Proceeding in the same vein as Allen [@Allen:2004gu], we are interested in the mean of the square of Eq. (\[networkPSDM\]). Understanding the mean of $W_+^2$ is aided by defining $$\label{lambdameanfunction}
\Lambda \equiv \sum_{\alpha=1}^{D} A_{w \alpha}^2/S_{h \alpha}\,,$$ motivated by accounting for the possible differences in the detector noise PSDs as well as the construction of the $A_{w\alpha}$ given in Eq. (\[threedetnetAw\]). The mean takes the form $$\label{whiteMaverage}
\overline{W_+^2} = \left[Q_+, \Lambda\, Q_+ \right]\,,$$ where the null stream construction $N_w$ has any contribution from a GW signal removed. We use this result to complete our construction of the statistic. This is done by renormalizing our detector templates so that $[Q_+,\Lambda Q_+]=[Q_{\times},\Lambda Q_{\times} ]=1$. The mean of $W_+^2$ is then simply unity.
Turning our attention to breaking the frequency space into smaller bands, we define the bands by $$\label{geoqj}
q_{+j} = \left[Q_+, \Lambda\, Q_+ \right]_j\,,$$ similarly to the previous section. This ensures that our partitions $q_{+j}$ also sum to one. With our new constructs we may now build our new statistic. As Ref. [@Allen:2004gu] does, we take a difference so that $\Delta W_{+j} = W_{+j} - q_{+j} W_+$. Doing so leads to a statistic much like Eq. (\[rhoPSDnull\]), $$\label{plusrhonull}
\rho_{+} = \sum_{j=1}^p \frac{\left(\Delta W_{+j}\right)^2}{q_{+j}}\,,$$ that is $\chi^2$ distributed with a mean of $p-1$. While this statistic has the desired properties, it only filters for the plus polarization. A similar statistic can be constructed for the cross polarization; the only difference is to use $Q_{\times}$ to construct $W_{\times}$. In doing so we have $$\label{crossrhonull}
\rho_{\times} = \sum_{j=1}^p \frac{\left(\Delta W_{\times j}\right)^2}{q_{\times j}}\,,$$ which is also $\chi^2$ distributed with a mean of $p-1$. To complete our statistic we note that the sum of two $\chi^2$ statistics is also $\chi^2$ distributed, with a mean equal to the sum of the means of the original statistics. We then combine the cross and plus matched filter outputs to create the complete statistic: $$\label{fullrhonull}
\rho_{f} = \sum_{L=1}^{D-2} \sum_{j=1}^p \left[ \frac{\left(\Delta W_{+Lj}\right)^2}{q_{+j}} + \frac{\left(\Delta W_{\times Lj}\right)^2}{q_{\times j}} \right]\,,$$ where we have been careful to sum over the different null streams, whose number depends on the number of detectors in the network. We see that $\rho_f$ is $\chi^2$ distributed with a mean of $2(D-2)(p-1)$, constructed by applying null-stream-based matched filtering to the outputs of a network of detectors.
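A structural sketch of $\rho_f$ for one null stream of a three-detector network with distinct PSDs is given below; the per-frequency normalization of the $A_{w\alpha}$, the $\Lambda$ weighting, and the choice to keep the real part of the filtered null stream follow the construction described above and should be read as illustrative assumptions of this sketch, not as the production implementation.

```python
import numpy as np

def rho_f_one_null_stream(A, Q_f, streams_f, psds, df, p=12):
    """Sketch of the full null-stream chi^2 (rho_f) for one null stream of a
    three-detector network.  A: unweighted null weights, streams_f: frequency-
    domain data already time-shifted to a common reference, psds: per-detector
    PSDs on the same frequency grid as Q_f."""
    A = np.asarray(A, dtype=float)
    psds = np.asarray(psds)                       # shape (3, n_freq)
    data = np.asarray(streams_f)                  # shape (3, n_freq)

    # PSD-weighted null weights A_w(f), normalized bin by bin (assumption).
    Aw = A[:, None] * psds
    Aw = Aw / np.sqrt((Aw**2).sum(axis=0))
    Lam = (Aw**2 / psds).sum(axis=0)              # Lambda(f)
    Nw = (Aw * data / psds).sum(axis=0)           # over-whitened null stream

    rho = 0.0
    for Qpol in (Q_f.real, Q_f.imag):             # plus- and cross-phase filters
        Qpol = Qpol / np.sqrt(2.0 * np.sum(Lam * Qpol**2) * df)  # [Q, Lam Q] = 1
        dW = 2.0 * Qpol * Nw * df                 # integrand of W = [Q, N_w]
        dq = 2.0 * Lam * Qpol**2 * df             # integrand of the band weights
        cq = np.cumsum(dq) / dq.sum()
        edges = np.searchsorted(cq, np.arange(1, p) / p)
        W_j = np.array([seg.sum().real for seg in np.split(dW, edges)])
        q_j = np.array([seg.sum() for seg in np.split(dq, edges)]) / dq.sum()
        rho += float(np.sum((W_j - q_j * W_j.sum()) ** 2 / q_j))
    return rho
```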
This statistic takes full advantage of all the detectors in the network even when they have differing noise PSDs. The null stream construction allows for the elimination of the signal strain from our statistic when the correct sky location of the source is used. As we show below, $\rho_f$ can be utilized in the construction of a decision statistic (along with the SNR) that compares favorably with other alternative statistics in discriminating signal triggers from a class of noise-transient triggers.
Numerical Testing of The Null-stream-$\chi^2$ Discriminator Statistic {#numericaltesting}
=====================================================================
In this section we describe the signal and noise artifact simulations conducted to test the discriminatory power of the new statistic defined by Eq. (\[fullrhonull\]). For modeling noise transients we limit ourselves to sine-Gaussians, which have been shown to be a good (but not necessarily complete) basis for modeling glitches in real detector data [@Chatterji:2005thesis; @Bose:2016jeo; @Bose:2016sqv; @Nitz:2017lco]. We do this by examining how the traditional $\chi^2$ test in Eq. (\[traditionalstatistic\]) performs in separating distributions of signal and noise triggers: If a true GW signal is in the data then a band by band comparison in the frequency domain of the anticipated signal power and the actual power in the data, is expected to yield smaller $\chi^2$ values than when there is a noise transient instead. In our tests on simulated data we also explore the usefulness of increasing the number of degrees of freedom by partitioning the signal band into more sub-bands in computing the null-stream statistic. This facility in our null-stream construction may help in improving the discriminatory power of the null-stream statistics discussed by Chatterji *et al*. [@Chatterji:2006nh] and Harry *et al*. [@Harry:2010fr].
The BBH signals we simulated were based on the (frequency-domain) IMRPhenomD model [@Husa:2015iqa; @Khan:2015jqa]. [^1] The components were non-spinning and had masses chosen from the range $(10,30)~M_\odot$. The BBHs were distributed uniformly in volume between a luminosity distance of 1 Gpc and 3 Gpc. All noise transients were simulated to be sine-Gaussians (SGs), as introduced in Sec. \[preliminaries\], with quality factor $K\in (10,45)$ and central frequency $f_0 \in (80, 200)$ Hz. The strength of the SG glitches was taken to be such that the single detector matched-filter SNR was below $\approx 20$. We first present our results for the traditional $\chi^2$ test, given in Eq. (\[traditionalstatistic\]) (Ref. [@Allen:2004gu]), in Fig. \[fig:AllenNumericalSim\], where the reduced $\chi^2$ (i.e., $\chi^2$ per degree of freedom) values are plotted versus the SNR for three different types of triggers, namely, Gaussian noise, BBH signals, and SG glitches. As is expected of Gaussian noise and signals embedded in such noise, their $\chi^2$ per degree of freedom (DOF) values distribute with average around one. The SG glitch triggers carry higher values of $\chi^2$ with increasing SNR, which allows this test to distinguish the signal from such noise triggers better at higher SNRs.
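The injection population described here translates into a few lines of sampling code; the specific random-number calls and seed below are of course ours, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(2019)
n_inj = 500

# Non-spinning component masses, uniform in (10, 30) M_sun.
m1 = rng.uniform(10.0, 30.0, n_inj)
m2 = rng.uniform(10.0, 30.0, n_inj)

# Uniform in volume between 1 and 3 Gpc  <=>  p(r) proportional to r^2.
u = rng.uniform(0.0, 1.0, n_inj)
r_gpc = (1.0**3 + u * (3.0**3 - 1.0**3)) ** (1.0 / 3.0)

# Sine-Gaussian glitch parameters: K in (10, 45), f0 in (80, 200) Hz.
K = rng.uniform(10.0, 45.0, n_inj)
f0 = rng.uniform(80.0, 200.0, n_inj)

print(r_gpc.min(), r_gpc.max(), K.mean(), f0.mean())
```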
![ The traditional $\chi^2$, Eq. (\[traditionalstatistic\]), per degree of freedom (DOF) plotted against the signal-to-noise ratio (SNR) Eq. (\[SNR\]), for various kinds of triggers in our simulation studies, for $p=12$ $\chi^2$ sub-bands, in a single detector. The triggers were generated for Gaussian noise alone (blue stars), binary black hole signals (red triangles), and sine-Gaussian glitches (green circles). Gaussian noise triggers are expected to have an average $\chi^2$ per DOF of unity. The signal triggers have a similar average above, for various signal strengths (and SNRs). With increasing SNR, the $\chi^2$ distribution of glitches separates more and more from that of signals. (Here we took the signal parameters to match the template parameters exactly. A parameter mismatch will cause the signal trigger $\chi^2$ to rise with increasing SNR.) []{data-label="fig:AllenNumericalSim"}](AllenChiSquare_2bin_Oct016018.png)
Having shown that our simulations yield results along lines expected of the traditional $\chi^2$ test [@Allen:2004gu], we move onto testing the null-stream statistic in Eq. (\[fullrhonull\]) using similar simulated data. Since we are now modeling a network of detectors we first create the null stream from Eq. (\[detectornullstream\]) for the Hanford, Livingston, Virgo (HLV) network. For the network test, we limit ourselves here to the targeted search, where the sky position of the GW source is known in advance [@Harry:2010fr], e.g., from the location of a putative electromagnetic counterpart, such as a short-duration gamma-ray burst [@Loeb:2016fzn]. (A blind search requires more computational time or resources and may also incur some deterioration in the search performance. We will pursue that study in a subsequent work.) The sky position information is used to compute the antenna-pattern vectors ${\bf F}_{+,\times}$, the correct time delays for signals across the detector baselines, as well as the $A_{\alpha}$ factors for the null stream construction. When studying SG glitches, we consider two cases: (a) There is a SG glitch in one of the detectors but only Gaussian noise in the other two; (b) There are near concurrent SG glitches, with varied $K$ and $f_0$ values, in two of the detectors, and only Gaussian noise in the third. When we vary $K$ and $f_0$ values they are chosen from the previously described range of values, but the glitch characteristics are different in the two detectors, such that the difference in $K$ is 5 or more and that in $f_0$ is at least 10Hz. While the second case is expected to be much rarer than the first one in real data, it may assume importance in situations when we ascribe false-alarm rates to our detections at the level of one in several tens of thousands of years. For the multi-detector simulations, when comparing the performance of different $\chi^2$ statistics, the same simulated signals, glitches and noise are used.
![Like in Fig. \[fig:AllenNumericalSim\], here too we compare noise only, BBH signal, and SG glitch triggers, but in the HLV network. The other differences are: (a) on the vertical axis the null-stream-$\chi^2$ per DOF is plotted for $p=2$ sub-bands and (b) in black pluses triggers arising from concurrent sine-Gaussian glitches (with different parameters) in two of the three detectors have been included. The SNR on the horizontal axis is the combined SNR in the HLV network (defined following Eq. (\[newstat\])). The null-stream-$\chi^2$ statistic is the same as the one defined in Eq. (\[fullrhonull\]) for the HLV network. It shares many features with Fig. \[fig:AllenNumericalSim\], with noise only and GW signal triggers both having an average approximately equal to unity. Since the number of sub-bands is small, at $p=2$, the relative contribution of the null stream as a discriminator, vis á vis the traditional $\chi^2$-statistic, is large. When $p$ is increased to 12, the traditional $\chi^2$ aids in improving this discrimination, as shown in Fig. \[fig:NullStreamNumericalSim\_12bin\]. []{data-label="fig:NullStreamNumericalSim_2bin"}](nullchisquareoutput_2bin_02242019.png)
![Same as Fig. \[fig:NullStreamNumericalSim\_2bin\], but with $p=12$ sub-bands.[]{data-label="fig:NullStreamNumericalSim_12bin"}](nullchisquareoutput_12bin_02242019.png)
![Here we show a histogram comparison of $\rho_f$ values for noise only data and GW signal data for our null-stream-$\chi^2$ test as described by Eq. (\[fullrhonull\]) using the same 3 detector method described at the beginning of Sec. \[numericaltesting\]. These histograms are compared to the analytic probability density function (PDF) for a $\chi^2$ statistic of a given degree of freedom. We include $p=2$ for (A), $p=3$ in (B), and $p=4$ for (C). These each correspond to a degree of freedom of $2$,$4$, and $6$ respectively. This shows explicitly the $\chi^2$ dependence that $\rho_f$ carries.[]{data-label="fig:whitenedPDFcompare"}](WhitenedNull_chi_square_noise_signal_pdf_comparison_Oct3018.png)
![ We compare the performances of the network traditional $\chi^2$ (red) and the null-stream-$\chi^2$ (black) discriminators by plotting the receiver operating characteristics (ROCs) of $\zeta_T$ and $\zeta_N$, respectively, for $p=2$ (top panel) and $p=12$ (bottom panel) sub-bands. The ROCs for just the combined SNR, without the use of any of the aforementioned $\chi^2$ statistics, are shown in blue. Increasing the number of sub-bands from $p=2$ to 12 improves the performance not only of the network traditional $\chi^2$ (which is expected), but also of the null-stream-$\chi^2$; the blue curve, understandably, remains unchanged. While for $p=12$, in our limited study, the null-stream-$\chi^2$ happens to perform slightly better than the traditional $\chi^2$ for all FAP values that we were able to explore, we caution that this performance may further improve or worsen in real data, and can vary from one type of noise transient to another. []{data-label="fig:ROCcomparison"}](ROC_curve_2bin_compare_12bin_log_log.png)
For $p = 2$ sub-bands, qualitatively, Fig. \[fig:NullStreamNumericalSim\_2bin\] shows that while the null-stream test performs well in discriminating single SG glitches from BBH signals, it does much worse for double SG glitches. This is to be expected, given the small number of $\chi^2$ sub-bands ($p=2$) that are paired with the null stream construction. If there is a second glitch in the network, the null stream may share more overlap in the frequency bands used and thus not contribute as much to the $\chi^2$ value. This is where the ability to test the data in a larger number of sub-bands becomes useful, as seen in Fig. \[fig:NullStreamNumericalSim\_12bin\], which uses $p=12$ sub-bands. By increasing the number of sub-bands one is able to check more finely the time-frequency consistency of transient patterns across the detectors. Ideally, the value of $p$ would be chosen by comparing the performance of the null-stream-$\chi^2$ statistic for different values of $p$ in real data, with real noise transients. This “tuning” problem will be addressed in a subsequent work. The explicit $\chi^2$ distribution dependence of our statistic, $\rho_f$, can be seen in Fig. \[fig:whitenedPDFcompare\].
Finally, for an assessment of the power brought in by the null-stream-$\chi^2$, we constructed two multi-detector statistics, $$\label{newstat}
\zeta_{\rm T,N} \equiv \frac{\rm SNR_c}{\left( \chi^2_{\rm T,N}
\right)^{1/3}}\,,$$ where ${\rm SNR_c}$ denotes the detector network’s [*combined*]{} signal-to-noise ratio, which is defined as $[\sum_{\alpha=1}^D {\rm SNR}^2_\alpha]^{1/2}$, with ${\rm SNR}_\alpha$ being the signal-to-noise ratio of a trigger in the $\alpha^{\rm th}$ detector, $\chi^2_{\rm T}$ is the [*network*]{} traditional $\chi^2$ statistic per degree of freedom (DOF) and $\chi^2_{\rm N}$ is the null-stream-$\chi^2$ statistic ($\rho_f$) per DOF for the network. The network traditional $\chi^2$ statistic is the sum of the traditional $\chi^2$ statistics over all detectors in that network. Triggers with large values of $\chi^2_{\rm T,N}$ are penalized by the new statistics $\zeta_{\rm T,N}$. Constant $\zeta_{\rm T}$ curves represent constant false-alarm probability (FAP) curves on a $\chi^2_{\rm T}$ [*versus*]{} ${\rm SNR_c}$ plane in our HLV simulations. Thus, the fraction of all simulated signals with $\zeta_{\rm T}$ values above a threshold $\zeta_{{\rm T}0}$ denotes the detection probability for this statistic at a constant FAP. Constant $\zeta_{\rm N}$ curves, on the other hand, represent approximately constant false-alarm probability (FAP) curves on a $\chi^2_{\rm N}$ [*versus*]{} ${\rm SNR_c}$ plane for the same HLV simulations.
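Equation (\[newstat\]) and the combined SNR translate directly into code. The short sketch below is only meant to fix conventions; the function names are ours and it is not part of any analysis pipeline.

```python
import numpy as np

def combined_snr(snr_per_detector):
    """Network combined SNR: quadrature sum of the single-detector SNRs."""
    return np.sqrt(np.sum(np.asarray(snr_per_detector) ** 2))

def zeta(snr_c, chisq_per_dof):
    """The statistic of Eq. (newstat): triggers with a large chi^2 per DOF
    (traditional or null-stream) are penalized."""
    return snr_c / chisq_per_dof ** (1.0 / 3.0)

# e.g. a trigger seen in H, L and V, with a network chi^2 per DOF of 2.5
print(zeta(combined_snr([8.0, 6.5, 4.0]), 2.5))
```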
We use the $\zeta_{\rm T,N}$ statistics to construct receiver-operating characteristic (ROC) curves [@Helstrom] in Fig. \[fig:ROCcomparison\]. As shown there, increasing $p$ from 2 to 12 improves the performance of both the network traditional $\chi^2$-based statistic, $\zeta_{\rm T}$, and the null-stream-$\chi^2$-based one, $\zeta_{\rm N}$. The degree of improvement is greater for the latter, but the performance of the two statistics is comparable for $p=12$. This holds out hope that the null-stream-$\chi^2$ can be developed further to improve its ability to discriminate noise transients from signals (at least in certain sections of the signal parameter space) in real data. This proposition assumes significance now that, in addition to the existing LIGO and Virgo detectors, KAGRA (Japan) and LIGO-India are being constructed, and joint multi-detector analysis with three or more detectors is likely to be pursued in the resulting network, which will allow the application of statistics such as the null-stream-$\chi^2$.
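The ROC curves themselves are built empirically from the two trigger populations. A minimal sketch, assuming arrays of $\zeta$ values for noise/glitch triggers and for simulated-signal triggers, is given below; the function name is ours.

```python
import numpy as np

def roc_curve(zeta_noise, zeta_signal):
    """Empirical ROC: false-alarm and detection probabilities obtained by
    sweeping a threshold over the observed zeta values of the noise (or
    glitch) triggers and of the simulated-signal triggers."""
    zeta_noise = np.asarray(zeta_noise)
    zeta_signal = np.asarray(zeta_signal)
    thresholds = np.sort(np.concatenate([zeta_noise, zeta_signal]))[::-1]
    fap = np.array([(zeta_noise >= t).mean() for t in thresholds])
    det = np.array([(zeta_signal >= t).mean() for t in thresholds])
    return fap, det
```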
Further study and possible extensions of the null-stream-$\chi^2$ will involve, e.g., finding the optimal value of $p$ and identifying useful regions in the signal parameter space for its implementation. Since these factors depend on the nature of the glitches, we plan to carry out such studies in real data, beginning with LIGO and Virgo, in the future.
The unified construction of $\chi^2$ tests [@Dhurandhar:2017aan] has shown that some of the existing discriminators, such as the traditional $\chi^2$ studied here, target a relatively small part of the space occupied by detector data. Therefore, it is imaginable that, as the detectors become more sensitive, new noise transients may arise that do not lie in that subspace and yet have good overlap with CBC templates. The null-stream-$\chi^2$ may be useful against such artifacts. Moreover, the fact that the effectiveness of the null stream itself is less affected by mismatches between the CBC template model and the signal in the data can also prove useful in devising better extensions of the null-stream-$\chi^2$. However, it remains to be seen how useful it can be in blind searches. The hope is that the null-stream-$\chi^2$ will complement the existing discriminators in reducing the significance of certain glitches in some sections of the parameter space of CBC searches in multi-detector data.
Conclusion
==========
In this work we introduced a new multi-detector statistic – the null-stream-$\chi^2$ – that can be developed further for discriminating between CBC signals and noise transients in real data. We did so by implementing the traditional $\chi^2$ test on a noise-PSD weighted null stream. The new statistic follows a $\chi^2$ distribution by design. We studied its performance by applying it to multi-detector simulated data, some subsets of which had simulated BBH signals and sine-Gaussian glitches added separately. We constructed SNR vs $\chi^2$ plots and receiver-operating characteristics [@Helstrom] to demonstrate that the new statistic compares well with statistics devised in the past in distinguishing signals from noise transients.
Null stream vetoes can be effective when the exact model or parameters of the signal are not known, but they require careful construction when detector sensitivity levels vary [@Harry:2010fr; @Chatterji:2006nh]. As we showed here, even for modeled signals their extension, in the specific form of the null-stream-$\chi^2$ statistic, has the potential to be useful in multi-detector CBC searches, at least when the detectors’ sensitivities to the common BBH source (as quantified by their horizon distances to it) are not very different. This first demonstration, however, has been for targeted searches. Since it is not clear whether BBHs have electromagnetic counterparts, a targeted search with BBH templates may not appear to be of much use. This point may be debatable [@Loeb:2016fzn], and some may argue for employing higher-mass templates than just binary neutron star ones for targeted searches of gamma-ray bursts. In such an event, a $\zeta_{\rm N}$-like statistic can be useful. A more conservative scenario is one where a promising BBH candidate is found by other statistics in a blind search, and its parameters are then used, as in a targeted search, by $\zeta_{\rm N}$ (or its extended version for real data) as a follow-up, to improve the significance of that candidate. The real worth will instead be in demonstrating that $\zeta_{\rm N}$ can be developed further to work in blind searches more directly (i.e., not as a follow-up), and in exploring the limits of its performance when the relative sensitivities of the detectors in the network are varied. This is what we plan to do in the future.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Bhooshan Gadre for carefully reading the paper and making useful comments. This work is supported in part by the Navajbai Ratan Tata Trust and NSF grant PHY-1506497. This paper has the LIGO document number [ LIGO-DCC-P1800334]{}.
[9]{}
B. P. Abbott [*et al.*]{} \[LIGO Scientific and Virgo Collaborations\], Phys. Rev. Lett. [**116**]{}, no. 6, 061102 (2016) doi:10.1103/PhysRevLett.116.061102 \[arXiv:1602.03837 \[gr-qc\]\]. B. P. Abbott [*et al.*]{} \[LIGO Scientific and Virgo Collaborations\], Phys. Rev. Lett. [**116**]{}, no. 24, 241103 (2016) doi:10.1103/PhysRevLett.116.241103 \[arXiv:1606.04855 \[gr-qc\]\]. B. P. Abbott [*et al.*]{} \[LIGO Scientific and VIRGO Collaborations\], Phys. Rev. Lett. [**118**]{}, no. 22, 221101 (2017) Erratum: \[Phys. Rev. Lett. [**121**]{}, no. 12, 129901 (2018)\] doi:10.1103/PhysRevLett.118.221101, 10.1103/PhysRevLett.121.129901 \[arXiv:1706.01812 \[gr-qc\]\]. B. P. Abbott [*et al.*]{} \[LIGO Scientific and Virgo Collaborations\], Phys. Rev. Lett. [**119**]{}, no. 14, 141101 (2017) doi:10.1103/PhysRevLett.119.141101 \[arXiv:1709.09660 \[gr-qc\]\]. B. P. Abbott [*et al.*]{} \[LIGO Scientific and Virgo Collaborations\], Phys. Rev. Lett. [**119**]{}, no. 16, 161101 (2017) doi:10.1103/PhysRevLett.119.161101 \[arXiv:1710.05832 \[gr-qc\]\].
http://ligo.org
http://www.virgo-gw.eu
B. P. Abbott [*et al.*]{} \[LIGO Scientific and Virgo Collaborations\], arXiv:1811.12907 \[astro-ph.HE\]. B. P. Abbott [*et al.*]{} \[LIGO Scientific and Virgo Collaborations\], Class. Quant. Grav. [**35**]{}, no. 6, 065010 (2018) doi:10.1088/1361-6382/aaaafa \[arXiv:1710.02185 \[gr-qc\]\]. S. Bose, B. Hall, N. Mazumder, S. Dhurandhar, A. Gupta and A. Lundgren, J. Phys. Conf. Ser. [**716**]{}, no. 1, 012007 (2016) doi:10.1088/1742-6596/716/1/012007 \[arXiv:1602.02621 \[astro-ph.IM\]\]. S. Bose, S. Dhurandhar, A. Gupta and A. Lundgren, Phys. Rev. D [**94**]{}, no. 12, 122004 (2016) doi:10.1103/PhysRevD.94.122004 \[arXiv:1606.06096 \[gr-qc\]\]. N. Mukund, S. Abraham, S. Kandhasamy, S. Mitra and N. S. Philip, Phys. Rev. D [**95**]{}, no. 10, 104059 (2017) doi:10.1103/PhysRevD.95.104059 \[arXiv:1609.07259 \[astro-ph.IM\]\]. M. Zevin [*et al.*]{}, Class. Quant. Grav. [**34**]{}, no. 6, 064003 (2017) doi:10.1088/1361-6382/aa5cea \[arXiv:1611.04596 \[gr-qc\]\]. L. K. Nuttall, Phil. Trans. Roy. Soc. Lond. A [**376**]{}, no. 2120, 20170286 (2018) doi:10.1098/rsta.2017.0286 \[arXiv:1804.07592 \[astro-ph.IM\]\]. B. K. Berger \[LIGO Scientific Collaboration\], J. Phys. Conf. Ser. [**957**]{}, no. 1, 012004 (2018). doi:10.1088/1742-6596/957/1/012004 M. Cavaglia, K. Staats and T. Gill, Commun. Comput. Phys. [**25**]{}, 963 (2019) doi:10.4208/cicp.OA-2018-0092 \[arXiv:1812.05225 \[physics.data-an\]\]. M. Walker [*et al.*]{} \[LSC Instrument and LIGO Scientific Collaborations\], Rev. Sci. Instrum. [**88**]{}, no. 12, 124501 (2017) doi:10.1063/1.5000264 \[arXiv:1702.04701 \[astro-ph.IM\]\]. S. Bose, A. Pai and S. V. Dhurandhar, Int. J. Mod. Phys. D [**9**]{}, 325 (2000) doi:10.1142/S0218271800000360 \[gr-qc/0002010\]. A. Pai, S. Dhurandhar and S. Bose, Phys. Rev. D [**64**]{}, 042004 (2001) doi:10.1103/PhysRevD.64.042004 \[gr-qc/0009078\]. S. A. Usman [*et al.*]{}, Class. Quant. Grav. [**33**]{}, no. 21, 215004 (2016) doi:10.1088/0264-9381/33/21/215004 \[arXiv:1508.02357 \[gr-qc\]\].
D. Talukder, S. Bose, S. Caudill and P. T. Baker, Phys. Rev. D [**88**]{}, no. 12, 122002 (2013) doi:10.1103/PhysRevD.88.122002 \[arXiv:1310.2341 \[gr-qc\]\].
S. Bose, T. Dayanga, S. Ghosh and D. Talukder, Class. Quant. Grav. [**28**]{}, 134009 (2011) doi:10.1088/0264-9381/28/13/134009 \[arXiv:1104.2650 \[astro-ph.IM\]\]. I. W. Harry and S. Fairhurst, Phys. Rev. D [**83**]{}, 084002 (2011) doi:10.1103/PhysRevD.83.084002 \[arXiv:1012.4939 \[gr-qc\]\]. B. Allen, Phys. Rev. D [**71**]{}, 062001 (2005) doi:10.1103/PhysRevD.71.062001 \[gr-qc/0405045\]. S. Babak [*et al.*]{}, Phys. Rev. D [**87**]{}, no. 2, 024033 (2013) doi:10.1103/PhysRevD.87.024033 \[arXiv:1208.3491 \[gr-qc\]\]. S. Dhurandhar, A. Gupta, B. Gadre and S. Bose, Phys. Rev. D [**96**]{}, no. 10, 103018 (2017) doi:10.1103/PhysRevD.96.103018 \[arXiv:1708.03605 \[gr-qc\]\]. Y. Guersel and M. Tinto, Phys. Rev. D [**40**]{}, 3884 (1989). doi:10.1103/PhysRevD.40.3884 Carl W. Helstrom, “Statistical Theory of Signal Detection,” 2nd Edition, Pergamon (1968).
S. Chatterji, A. Lazzarini, L. Stein, P. J. Sutton, A. Searle and M. Tinto, Phys. Rev. D [**74**]{}, 082005 (2006) doi:10.1103/PhysRevD.74.082005 \[gr-qc/0605002\]. S. Klimenko, I. Yakushin, A. Mercer and G. Mitselmakher, Class. Quant. Grav. [**25**]{}, 114029 (2008) doi:10.1088/0264-9381/25/11/114029 \[arXiv:0802.3232 \[gr-qc\]\].
J. Abadie [*et al.*]{} \[LIGO Scientific and VIRGO Collaborations\], Phys. Rev. D [**85**]{}, 082002 (2012) doi:10.1103/PhysRevD.85.082002 \[arXiv:1111.7314 \[gr-qc\]\]. S. Chatterji, “The search for gravitational wave bursts in data from the second LIGO science run,” Ph.D. Thesis, Massachusetts Institute of Technology (2005).
A. H. Nitz, Class. Quant. Grav. [**35**]{}, no. 3, 035016 (2018) doi:10.1088/1361-6382/aaa13d \[arXiv:1709.08974 \[gr-qc\]\]. S. Husa, S. Khan, M. Hannam, M. Pürrer, F. Ohme, X. Jiménez Forteza and A. Bohé, Phys. Rev. D [**93**]{}, no. 4, 044006 (2016) doi:10.1103/PhysRevD.93.044006 \[arXiv:1508.07250 \[gr-qc\]\].
S. Khan, S. Husa, M. Hannam, F. Ohme, M. Pürrer, X. Jiménez Forteza and A. Bohé, Phys. Rev. D [**93**]{}, no. 4, 044007 (2016) doi:10.1103/PhysRevD.93.044007 \[arXiv:1508.07253 \[gr-qc\]\].
A. Loeb, Astrophys. J. [**819**]{}, no. 2, L21 (2016) doi:10.3847/2041-8205/819/2/L21 \[arXiv:1602.04735 \[astro-ph.HE\]\].
[^1]: Since the detector data is in the time domain, ideally one should simulate signals and add them to noise (simulated or real) in the same domain, even if the matched-filtering is implemented in the frequency domain. We aim to carry out such studies in the future.
|
---
abstract: 'We consider the flotation of deformable, non-wetting drops on a liquid interface, accounting for the deflection of both the liquid interface and the droplet itself in response to the buoyancy forces, the density difference and the various surface tensions within the system. Our results suggest new insight into a range of phenomena in which such drops occur, including Leidenfrost droplets and floating liquid marbles. In particular, we show that the floating state of liquid marbles is very sensitive to the tension of the particle-covered interface and suggest that this sensitivity may make such experiments a useful assay of the properties of these complex interfaces.'
author:
- |
\
[*$^\dagger$Mathematical Institute, University of Oxford, UK*]{}\
[*$^\ast$Univ. Lyon, ENS de Lyon, Univ. Claude Bernard, CNRS, Laboratoire de Physique,*]{}\
[*F-69342 Lyon, France*]{}
---
Introduction
============
Drops of one liquid at the surface of another are common at both large and small scales: oil slicks may break up into droplets [@Nissanka2017], while droplets of coffee on the surface of coffee are often observed, albeit briefly, when making a morning brew [@neitzel2002noncoalescence].
Such droplets can only float in equilibrium when a number of different force balances are satisfied: their weight or buoyancy must be balanced by the net force from the underlying liquid. As with rigid particles [@vella2015floating], the restoring force from surface tension can lead to the flotation of droplets on the surface of a less dense liquid: small drops of water may float on the surface of oil [@balchin1990capillary; @phan2012can; @phan2014stability]. Unlike rigid particles, however, such droplets may also deform greatly, forming either liquid lenses (when the droplet is partially wetting) or small (relatively non-wetting) droplets, close to spherical. Which of these possibilities occurs depends on the detailed balance between the three interfacial tensions at the contact line: the Neumann conditions [@Langmuir1933; @de2013capillarity].
The flotation of liquid lenses, particularly oil droplets floating on a bath of water, has attracted the most attention. To understand the shape of such lenses, previous work has focused on numerical approaches [@pujado1972sessile; @boucher1980capillary; @burton2010experimental], although analytical results are available for very large, flat lenses [@pujado1972sessile].
For relatively non-wetting droplets, some analytical progress is possible when the droplet may be approximated as two spherical caps [@princen1965shape]. In other circumstances, particularly for larger drops, numerical techniques are again used [@elcrat2005numerical] though some understanding may be obtained by modelling the drop as two halves of an oblate spheroid [@ooi2015deformation]. Mahadevan *et al.* [@Mahadevan2002] also presented a detailed study of the (low gravity) behaviour of a compound drop sitting on a rigid substrate: here four phases frequently meet at a single point.
![Images of non-wetting drops ‘floating’ on a liquid interface: (a) A dyed ethanol drop evaporating at the surface of a hot liquid pool (reproduced with permission from ref. [@maquet2016leidenfrost]. Copyright (2016) by the American Physical Society), (b) a static drop consisting of the same fluid as the bath, separated by a thin air film (reproduced with permission from ref. [@couder2005bouncing]. Copyright (2005) by the American Physical Society) and (c) a liquid marble floating on water (reprinted from ref. [@bormashenko2009water], with permission from Elsevier). []{data-label="Fig:Examples"}](Fig1.pdf){width="0.5\columnwidth"}
Recent experiments have begun to focus on a range of problems that involve droplets floating at a liquid interface in what is close to, if not precisely, a non-wetting state. For example, volatile drops levitate above a bath of warm liquid in the Leidenfrost state [@maquet2016leidenfrost], see fig. \[Fig:Examples\](*a*), while the reverse scenario (a warm droplet levitating on a bath of liquid nitrogen) has also been considered [@adda2016inverse]. A similar effect may be obtained by vibrating the liquid bath close to the Faraday threshold: the replenishing of the lubricating air film has the effect of maintaining the drops above a bath of the same liquid almost indefinitely [@couder2005bouncing; @bush2015], see fig. \[Fig:Examples\](*b*). Finally, one of the most compelling demonstrations of the stabilizing properties of a particle coating is the ability of a particle-coated droplet (a ‘liquid marble’ [@aussillous2001liquid]) to float on the surface of the same liquid almost indefinitely [@aussillous2001liquid; @gao2007ionic; @mchale2011liquid; @Bormashenko2017], see fig. \[Fig:Examples\](*c*), even when motion occurs due to Marangoni flows [@Bormashenko2015]. Despite the importance of liquid marbles generally, previous analysis of floating liquid marbles has been primarily qualitative [@gao2007ionic; @bormashenko2009mechanism]. Quantitative measurements on the dimensions of such drops have only been reported recently by Ooi *et al.* [@ooi2015deformation; @ooi2016floating]; these data have yet to be explained.
In this paper we seek to shed light on the manner in which a non-wetting (or very close to non-wetting) droplet floats at a liquid interface. In developing this insight we consider experimental data from a range of settings including Leidenfrost drops and liquid marbles and show that these can be understood as the appropriate limits of a single non-wetting drop model.
Theoretical formulation
=======================
We consider an isolated axisymmetric drop, of density $\rho_d$ at the interface between an infinite liquid bath (of density ${\rho_l}$) and vapour of negligible density, ${\rho_v}\ll{\rho_l}$ (see fig. \[fgr:diagram\] for a schematic of the setup). We assume that the three fluids are mutually immiscible and that there are three non-zero interfacial tensions as a result: the liquid–vapour, drop–liquid and drop–vapour interfacial tensions, which we denote by ${\gamma_{lv}}$, ${\gamma_{dl}}$ and ${\gamma_{dv}}$, respectively.
![Schematic diagram of a drop (blue) floating at a liquid–vapour interface. The three-phase contact line (TPCL) is the circle along which all three phases meet; the dotted horizontal line represents the projection of the TPCL onto the $(r,z)$ plane. []{data-label="fgr:diagram"}](Fig2.pdf){width=".75\columnwidth"}
Governing equations
-------------------
### Laplace–Young equation
When two immiscible fluids are in contact, the interfacial tension induces a discontinuity of pressure across the interface (see e.g. ref. [@de2013capillarity]): $$[p]_-^+=\gamma\kappa
\label{L-Y}$$ where the left-hand side denotes the pressure jump across the interface, $\gamma$ is the relevant interfacial tension and $\kappa$ is the total curvature of the interface. Though simple, this equation allows us to determine the shapes of the three fluid-fluid interfaces since in equilibrium the pressure within each liquid is everywhere hydrostatic. For a deformable floating drop, there are three interfaces of interest, each of which satisfies a slightly different version of the Laplace–Young equation. We denote each of these three interfaces by an index $i$ (with $i=1$ denoting the drop–vapour interface, $i=2$ the drop–liquid interface and $i=3$ the liquid–vapour interface). We use an intrinsic parametrization of each interface [@boucher1980capillary] using the arc-length $s$ and interfacial inclination $\phi_i(s)$; the interface shape may then be written $[r_i(s),z_i(s)]$. With this parametrization, simple trigonometry gives that $$\begin{aligned}
\frac{{\mathrm{d}}r_i}{{\mathrm{d}}s}&=\cos\phi_i, \label{rz1}\\
\frac{{\mathrm{d}}z_i}{{\mathrm{d}}s}&=\sin\phi_i, \label{rz2}\end{aligned}$$ while the total curvature of each interface is [@boucher1980capillary] $$\kappa_i=\frac{{\mathrm{d}}\phi_i}{{\mathrm{d}}s}+\frac{\sin \phi_i}{r_i}\text{.}$$ With this expression for $\kappa_i$, we may now equate the pressure change due to the interfacial tension, given by (\[L-Y\]), to the hydrostatic pressure within each liquid. This leads to three different versions of the Laplace–Young equation, which each express the balance between hydrostatic pressure and pressure jumps due to surface tension. The simplest, and most familiar, of these is for the liquid–vapour meniscus; here atmospheric pressure is taken as the pressure datum and we find that $$\rho_lgz_3=\gamma_{lv}\left(\frac{{\mathrm{d}}\phi_3}{{\mathrm{d}}s}+\frac{\sin\phi_3}{r_3}\right)\text{.}
\label{threedim}$$ Within the drop, there is no universal pressure datum; we therefore introduce a constant pressure $p_0=\rho_dgz_0$, with $z_0$ some (*a priori* unknown) vertical position. We find that the corresponding equations for $z_1$ and $z_2$ are $$\rho_d g(z_1-z_0)=\gamma_{dv}\left(\frac{{\mathrm{d}}\phi_1}{{\mathrm{d}}s}+\frac{\sin\phi_1}{r_1}\right)
\label{onedim}$$ and $$\rho_dg(z_0-z_2)+\rho_lgz_2=\gamma_{dl}\left(\frac{{\mathrm{d}}\phi_2}{{\mathrm{d}}s}+\frac{\sin\phi_2}{r_2}\right)\text{.}
\label{twodim}$$
Equations (\[threedim\])–(\[twodim\]), together with (\[rz1\]) and (\[rz2\]), must be solved with appropriate boundary conditions. For the outer meniscus, the conditions are that $z_3,\phi_3\to0$ as $s\to\infty$. Similarly, at the centre of the drop we have $\phi_1=\phi_2=r_1=r_2=0$. The three-phase contact line (TPCL) requires greater thought. A number of further conditions emerge from continuity at the TPCL, since $r_1=r_2=r_3$ and $z_1=z_2=z_3$ there. It is also clear from fig. \[fgr:diagram\] that $\phi_1=-\theta_1$, $\phi_2=\theta_2$ and $\phi_3=\theta_3$ at the contact line. To determine the angles $\theta_i$, however, we must consider in detail the equilibrium of the TPCL itself.
### Equilibrium at the contact line
For the drop to float in equilibrium, as illustrated in fig. \[fgr:diagram\], the interfacial tensions, denoted by $\gamma_{lv}$, $\gamma_{dv}$ and $\gamma_{dl}$, have to balance at the TPCL. The three angles $\theta_1$, $\theta_2$ and $\theta_3$ must therefore satisfy the Neumann relations [@Langmuir1933; @de2013capillarity] $$\begin{aligned}
\theta_1+\theta_2&=&\pi-\cos^{-1}\left[\frac{\gamma_{dv}^2+\gamma_{dl}^2-\gamma_{lv}^2}
{2\gamma_{dv}\gamma_{dl}}\right],\label{neumann1}\\
\theta_1+\theta_3&=&\cos^{-1}\left[\frac{\gamma_{lv}^2+\gamma_{dv}^2-\gamma_{dl}^2}
{2\gamma_{lv}\gamma_{dv}}\right]\text{.}
\label{neumann2}\end{aligned}$$
Notice that the Neumann conditions (\[neumann1\])–(\[neumann2\]) only have solutions for certain combinations of the interfacial tensions: when either of the arguments of the inverse cosines is greater than unity in magnitude, the droplet must spread to form a layer, rather than forming a floating drop or lens [@Langmuir1933]. Here, we shall most often be interested in situations where the droplet is close to being perfectly non-wetting, so that $\theta_1+\theta_2\approx\pi$ and $\theta_2\approx\theta_3$. However, we maintain the general notation for the time being.
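As a quick numerical check of the Neumann relations (a Python sketch; the function name and tolerance are ours), one can compute the two angle sums directly from the interfacial tensions and verify, for instance, that the perfectly non-wetting combination ${\gamma_{dl}}={\gamma_{lv}}+{\gamma_{dv}}$ returns $\theta_1+\theta_2=\theta_1+\theta_3=\pi$:

```python
import numpy as np

def neumann_angle_sums(gamma_lv, gamma_dv, gamma_dl, tol=1e-12):
    """Angle sums (theta1+theta2, theta1+theta3) from eqs. (neumann1)-(neumann2).
    Returns None when no floating-drop solution exists (the drop spreads)."""
    a = (gamma_dv**2 + gamma_dl**2 - gamma_lv**2) / (2 * gamma_dv * gamma_dl)
    b = (gamma_lv**2 + gamma_dv**2 - gamma_dl**2) / (2 * gamma_lv * gamma_dv)
    if abs(a) > 1 + tol or abs(b) > 1 + tol:
        return None
    a, b = np.clip(a, -1, 1), np.clip(b, -1, 1)
    return np.pi - np.arccos(a), np.arccos(b)

print(neumann_angle_sums(1.0, 0.1, 1.1))   # perfectly non-wetting: (pi, pi)
```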
Non-dimensionalization\[sec:nondim\]
------------------------------------
The balance between hydrostatic and capillary pressures, expressed in (\[threedim\]), causes interfacial deformations to decay over the *capillary length* $${\ell_c}=\left(\frac{{\gamma_{lv}}}{{\rho_l}g}\right)^{1/2}.$$ It is natural to use this length scale to non-dimensionalize lengths in this problem, introducing dimensionless variables $R_i=r_i/{\ell_c}$ etc. In performing this non-dimensionalization, three important dimensionless parameters emerge, namely $$D=\frac{{\rho_d}}{{\rho_l}},\hspace{1cm}{\Gamma_{dv}}=\frac{{\gamma_{dv}}}{{\gamma_{lv}}},\hspace{1cm}\text{and}\hspace{1cm}{\Gamma_{dl}}=\frac{{\gamma_{dl}}}{{\gamma_{lv}}},
\label{eqn:NonDim}$$ which represent the ratio of densities of the droplet (relative to the bath liquid), as well as the ratios of the two other interfacial tensions, ${\gamma_{dv}}$ and ${\gamma_{dl}}$, measured relative to the liquid–vapour interfacial tension, respectively. Our primary interest is in relatively heavy droplets, $D\gtrsim1$.
The appearance of the various dimensionless parameters in (\[eqn:NonDim\]) may be attributed to the appearance of several different effective capillary lengths in the problem: whereas a drop on a rigid surface has a single, well-defined capillary length, in this problem both the capillary length of the bare interface, ${\ell_c}$, and the capillary length of the drop itself, $${\ell_c^{D}}=\left(\frac{{\gamma_{dv}}}{{\rho_d}g}\right)^{1/2}=\left(\frac{{\Gamma_{dv}}}{D}\right)^{1/2}{\ell_c},$$ might also be of interest. We shall see that our numerical results may be interpreted physically using one or other of these two capillary lengths.
We also note that the two dimensionless interfacial tensions, ${\Gamma_{dv}}$ and ${\Gamma_{dl}}$, play a role in determining the contact angles, and hence the wettability of the droplet. For axisymmetric objects, this wettability plays a key role in determining an object’s flotability [@vella2015floating]. The combination of angles $\theta_1+\theta_2$ and $\theta_1+\theta_3$ are completely determined by the values of ${\Gamma_{dv}}$ and ${\Gamma_{dl}}$. Our particular focus here shall be on perfectly non-wetting drops, for which we have the simple relation $\Gamma_{dl}=1+\Gamma_{dv}$, corresponding to $\theta_1+\theta_2=\theta_1+\theta_3=\pi$. However, there is still some freedom in the values of the angle themselves. For simplicity, we shall isolate this freedom in the angle $\theta_3$ (so that $\theta_1$ and $\theta_2$ are completely determined once $\theta_3$ is determined).
The interfacial inclination of the outer meniscus at the TPCL, $\theta_3$, will depend on the precise conditions for the droplet floating at the interface. In particular, we expect $\theta_3$ to be a function of the droplet volume $V$ (since larger, heavier drops will sink lower into the liquid, increasing $\theta_3$). We shall therefore treat $\theta_3$ as a function of the droplet volume, $V$. (Indeed Ooi *et al.* [@ooi2016floating] plot experimental measurements of this relationship for floating liquid marbles.) The dimensionless droplet volume $V/{\ell_c}^3$ is therefore a key dimensionless parameter. However, it is more conventional to talk about the Bond number, ${\mbox{Bo}}=r_0^2/{\ell_c}^2$, where $r_0=(3V/4\pi)^{1/3}$ is the radius of the drop if spherical. We therefore write $${\mbox{Bo}}=\left(\frac{3V}{4\pi{\ell_c}^3}\right)^{2/3}$$ for the Bond number. We shall also find it helpful to discuss our results in terms of the Bond number using the droplet capillary length; we therefore introduce the droplet Bond number $${\mbox{Bo}_d}=\frac{r_0^2}{({\ell_c^{D}})^2}=\left[\frac{3V}{4\pi({\ell_c^{D}})^3}\right]^{2/3}.
\label{eqn:Bod}$$
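For orientation, the dimensionless groups are straightforward to evaluate. The sketch below computes both capillary lengths and both Bond numbers for a given drop volume; the function name and the numerical values in the example are purely illustrative and are not taken from any experiment discussed in the text.

```python
import numpy as np

g = 9.81  # m s^-2

def bond_numbers(V, gamma_lv, gamma_dv, rho_l, rho_d):
    """Bath and drop Bond numbers, Bo = (r0/lc)^2 and Bo_d = (r0/lc_D)^2,
    with r0 = (3V/4pi)^(1/3) the radius of the undeformed (spherical) drop."""
    lc = np.sqrt(gamma_lv / (rho_l * g))    # bath capillary length
    lc_D = np.sqrt(gamma_dv / (rho_d * g))  # drop capillary length
    r0 = (3.0 * V / (4.0 * np.pi)) ** (1.0 / 3.0)
    return (r0 / lc) ** 2, (r0 / lc_D) ** 2

# illustrative: a 5 microlitre aqueous drop on a lighter, lower-tension bath
print(bond_numbers(V=5e-9, gamma_lv=0.020, gamma_dv=0.072, rho_l=950.0, rho_d=1000.0))
```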
Numerical results
-----------------
We solve the dimensionless versions of the governing equations (\[rz1\])–(\[rz2\]) and (\[threedim\])–(\[twodim\]) numerically (subject to appropriate boundary conditions) using MATLAB and Mathematica. The full dimensionless problem, together with details of the numerical scheme, is given in Appendix A. From a computational point of view, it is simpler to impose a value of $\theta_3$ and calculate the droplet volume, or Bond number ${\mbox{Bo}}$, that would give rise to this particular value of $\theta_3$. Figure \[fgr:transition\](*a*) shows the droplet shapes obtained as $\theta_3$ varies; this increase of $\theta_3$ corresponds to an increasing Bond number. As expected, the drop flattens out as the Bond number grows, since gravity becomes more important.
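Our computations use the scheme described in Appendix A; purely to illustrate the structure of the problem, the sketch below integrates the dimensionless drop–vapour equation (\[onedim\]) outward from the drop apex using SciPy. The apex height and the pressure datum $Z_0$ are treated here as given numbers, whereas in a full scheme they are the kind of unknowns fixed by shooting onto the contact-line and far-field conditions; the function name and all parameter values in the example are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def drop_vapour_cap(z_apex, z0, D=2.0, Gamma_dv=1.0, s_max=3.0):
    """Integrate the dimensionless drop-vapour Laplace-Young equation,
    eq. (onedim), in arc-length form starting from the apex (R=0, phi=0)."""
    def rhs(s, y):
        R, Z, phi = y
        kappa = D * (Z - z0) / Gamma_dv          # total curvature from eq. (onedim)
        # at the apex sin(phi)/R -> dphi/ds, so the two curvature terms are equal
        curv = np.sin(phi) / R if R > 1e-8 else 0.5 * kappa
        return [np.cos(phi), np.sin(phi), kappa - curv]

    return solve_ivp(rhs, (0.0, s_max), [1e-8, z_apex, 0.0],
                     rtol=1e-8, dense_output=True)

sol = drop_vapour_cap(z_apex=0.3, z0=1.0)   # illustrative shooting values
```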
![Numerically determined droplet shapes for perfectly non-wetting drops, so that ${\Gamma_{dl}}=1+{\Gamma_{dv}}$, with density ratio $D=2$. (*a*) The effect of increasing the droplet volume with ${\Gamma_{dv}}=0.1$ held fixed throughout (so that ${\Gamma_{dl}}=1.1$). Here, the angle $\theta_3$ is imposed and the corresponding volume computed. Results are shown with $\theta_3$ increasing from $5^{\circ}$ (red) to $30^{\circ}$ (dark blue), in increments of $5^{\circ}$, and correspond to Bond numbers in the range $0.05\lesssim {\mbox{Bo}}\lesssim 1.05$. (*b*) The effect of increasing the droplet–vapour interfacial tension, ${\Gamma_{dv}}$, but maintaining a constant droplet volume, ${\mbox{Bo}}=1/4$. Profiles are shown for ${\Gamma_{dv}}=2^{i}$, $i=-5\ \text{(yellow)},-4,\dots,3 \ \text{(brown)}$, again with ${\Gamma_{dl}}=1+{\Gamma_{dv}}$.[]{data-label="fgr:transition"}](Fig3.pdf){width="0.9\columnwidth"}
The Bond number, ${\mbox{Bo}}$, does not tell the whole story, however: fixing the value of ${\mbox{Bo}}$ and altering the value of the droplet–vapour tension also changes the shape significantly, as shown in fig. \[fgr:transition\](*b*) for ${\mbox{Bo}}=1/4$. Here we see that for moderate and large values of ${\Gamma_{dv}}$ the droplets are essentially spherical (as should be expected since the Bond number is relatively small). However, decreasing ${\Gamma_{dv}}$, the droplet becomes highly deformable and adopts the pancake configuration usually associated with large droplets [@de2013capillarity].
Recalling that there are two capillary lengths in this problem, ${\ell_c}=({\gamma_{lv}}/{\rho_l}g)^{1/2}$ and ${\ell_c^{D}}=({\gamma_{dv}}/{\rho_d}g)^{1/2}$, there are therefore also two Bond numbers and it is natural to wonder whether the results shown in fig. \[fgr:transition\] can be understood solely in terms of the droplet Bond number ${\mbox{Bo}_d}$, defined in (\[eqn:Bod\]).
The question is: which of the two Bond numbers better describes how deformed a droplet is? Figure \[fgr:transition\] shows that the drops become more deformed as they grow larger (as expected) and that, at fixed volume but increasing ${\Gamma_{dv}}$, the drop becomes more spherical. This suggests that the appropriate Bond number may be the droplet Bond number, i.e. ${\mbox{Bo}_d}$ is the relevant measure of a drop’s deformability. Figure \[fgr:bond\] shows that this is also not the complete story: results with a fixed droplet Bond number, ${\mbox{Bo}_d}=1$, and a fixed droplet density, $D=1$, have an asymmetry that alters as the droplet–vapour tension changes. (Though, as expected, the drop does revert to being approximately spherical in the limit of very large ${\Gamma_{dv}}$.)
![Plot of asymmetry (thickness/width) vs ${\Gamma_{dv}}$ for perfectly non-wetting drops. The blue solid line indicates the situation where the droplet Bond number is fixed at ${\mbox{Bo}_d}=[3V/(4\pi{({\ell_c^{D}})}^3)]^{2/3}=1$, for $D=1$. The red dots correspond to results for the droplet profiles in fig. \[fgr:transition\](*b*). []{data-label="fgr:bond"}](Fig4.pdf){width="0.5\columnwidth"}
From these numerical investigations it appears that, neglecting the effect of the droplet volume, the droplet–vapour tension ${\Gamma_{dv}}$ gives a measure of the droplet’s deformability. To understand this better, we now consider some analytical models of this problem, beginning with the case of ‘small drops’, which we expect to be relatively undeformable.
Relatively undeformable drops\[sec:undeform\]
=============================================
In the preceding section, we saw that there are a number of factors that affect the deformation of a floating drop: both its size (measured by the droplet Bond number ${\mbox{Bo}_d}$) and the intrinsic value of ${\Gamma_{dv}}$ can play a role. As such we now move on to consider the role of the droplet size and deformability separately. We begin by considering the case of small drops or, more accurately, droplets that deform the underlying liquid surface only slightly.
Small non-wetting drops
-----------------------
We consider the limit of small substrate deformation, in the sense that $\theta_3 \ll 1$. In particular, for heavy drops $D\geq 1$, small deformations can only occur for drops with effective radius $r_0\ll {\ell_c}$. For the moment, we anticipate that small substrate deformations will correspond to small drop volumes but verify this *a posteriori*.
In this situation, the droplet is approximately spherical, with radius $r_0\approx(3V/4\pi)^{1/3}$. This radius is determined by the volume of the droplet, and it, in turn, determines the pressure within the drop, $p_d\approx 2{\gamma_{dv}}/r_0$. However, the interface between the droplet and the liquid has a different interfacial tension, ${\gamma_{dl}}={\gamma_{dv}}+{\gamma_{lv}}$, and, since the small size of the droplet ensures that the pressure within it is approximately constant, the drop–liquid interface must have radius of curvature $$\tilde{r}_0=\frac{1+{\Gamma_{dv}}}{{\Gamma_{dv}}}r_0.$$ (We emphasize here that to maintain a constant pressure within the drop, the drop must consist of two spherical caps of different radii joined together. Here our assumption is that the majority of the droplet volume is stored within the cap of radius $r_0$, with only a small perturbation from the cap that is in contact with the substrate.)
The quantities of real interest, however, are the radial position of the contact line, $r_c$, and the interfacial inclination, $\theta_3$. Elementary geometry gives that $r_c=\tilde{r}_0\sin\theta_2=\tilde{r}_0\sin\theta_3$ (since, for non-wetting drops, $\theta_2=\theta_3$). We therefore have that $$r_c=\frac{1+{\Gamma_{dv}}}{{\Gamma_{dv}}}r_0\sin\theta_3.
\label{rcro}$$
The final ingredient required to close the system, and determine the behaviour of $r_c$ and $\theta_3$ as functions of the droplet volume $V$, is the global force balance. The restoring force from the liquid is dominated by the force due to surface tension, since the object is small [@de2013capillarity; @vella2015floating], and so we find that the leading-order vertical force balance is $$\rho_d gV\approx 2\pi \gamma_{lv}r_c\sin \theta_3\approx2\pi \gamma_{lv}r_c\theta_3,
\label{stfb}$$ since $\theta_3\ll1$. Combining (\[stfb\]) with (\[rcro\]) we find that $$\theta_3\approx\sqrt{\frac{2}{3}}\left(\frac{D{\Gamma_{dv}}}{1+{\Gamma_{dv}}}\right)^{1/2}{\mbox{Bo}}^{1/2}\ \propto\ {\mbox{Bo}}^{1/2}
\label{theta3}$$ while (\[rcro\]) immediately gives $$R_c=\frac{r_c}{{\ell_c}}\approx \sqrt{\frac{2}{3}}\left[\frac{D(1+{\Gamma_{dv}})}{{\Gamma_{dv}}}\right]^{1/2}{\mbox{Bo}}\ \propto {\mbox{Bo}}.
\label{Rc}$$ The same theoretical prediction for the contact radius $r_c$ in this regime has been reported by Princen & Mason [@princen1965shape]. The scaling $r_c\sim {\mbox{Bo}}$ has also been reported for non-wetting droplets on solid substrates [@mahadevan1999rolling; @aussillous2001liquid; @aussillous2006properties] and floating rigid spheres;[@vella2006load] however, the dependences on the interfacial tension ${\Gamma_{dl}}$ and the density $D$ are different here since they result from the deflection of three phases. (For example, for a rigid non-wetting sphere, $r_c/{\ell_c}\approx(2D/3)^{1/2}{\mbox{Bo}}$.) The prediction for $\theta_3$ is, to our knowledge, novel.
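The scalings (\[theta3\]) and (\[Rc\]) are simple enough to evaluate directly; the short sketch below (function name ours) also makes explicit that the ${\Gamma_{dv}}$-dependence cancels in the product $R_c\theta_3$, which is fixed by the vertical force balance.

```python
import numpy as np

def small_nonwetting_drop(Bo, D=1.0, Gamma_dv=1.0):
    """Leading-order predictions (theta3) and (Rc) for small, perfectly
    non-wetting drops, valid while theta3 << 1."""
    theta3 = np.sqrt(2.0 / 3.0) * np.sqrt(D * Gamma_dv / (1.0 + Gamma_dv)) * np.sqrt(Bo)
    Rc = np.sqrt(2.0 / 3.0) * np.sqrt(D * (1.0 + Gamma_dv) / Gamma_dv) * Bo
    # note: Rc * theta3 = (2/3) * D * Bo**1.5, independent of Gamma_dv
    return theta3, Rc
```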
![The transition of the scaling behaviours of $\theta_3$ and $R_c$ with ${\mbox{Bo}}$ as a droplet changes from non-wetting to partially wetting. Shown is the case where $D=1$ and ${\Gamma_{dv}}=1$. The black solid lines indicate the scaling predictions for the non-wetting case; the grey dash-dotted lines and dotted lines indicate the scaling predictions for ${\Gamma_{dl}}=1.9$ and ${\Gamma_{dl}}=1.8$, respectively. Intermediate cases are plotted with squares ($\square$).[]{data-label="fgr:theta_Rc"}](Fig5.pdf){width="0.5\columnwidth"}
Small partially-wetting drops
-----------------------------
For perfectly non-wetting drops, the droplet angle satisfies $\theta_1+\theta_2=\pi$. In reality, however, many super-hydrophobic materials are not ‘perfectly’ non-wetting. For example, the droplet angle of liquid marbles has been reported to be close to, but not exactly, $\pi$ [@ooi2016floating]. Here, we consider how the scaling laws (\[theta3\]) and (\[Rc\]) are altered for hydrophobic droplets with ${\Gamma_{dl}}-{\Gamma_{dv}}<1$.
Numerical results for $\theta_3$ and $r_c/{\ell_c}$ as functions of ${\mbox{Bo}}$ are shown in fig. \[fgr:theta\_Rc\]. Surprisingly, these numerical results show a significant deviation from the scaling laws (\[theta3\]) and (\[Rc\]). Even with droplets that are very close to being perfectly hydrophobic, ${\Gamma_{dl}}-{\Gamma_{dv}}=0.999$, we observe significant deviations from (\[theta3\]) and (\[Rc\]) when the droplet Bond number becomes sufficiently small.
To understand this surprising behaviour, we reconsider the analysis of the last section. In particular, the vertical force balance, (\[stfb\]), remains valid for the partially wetting case; the key difference, however, lies in the deduction of the contact radius $r_c$ in terms of $r_0$ and the angle $\theta_3$. By equating capillary pressures, we have $$r_c=\frac{{\Gamma_{dl}}}{{\Gamma_{dv}}}r_0\sin\theta_2
\label{rcro-partial}$$ which may be used to rewrite the vertical force balance as $$2\pi\left(\frac{{\Gamma_{dl}}}{{\Gamma_{dv}}}r_0\sin\theta_2\right)\gamma_{lv}\sin\theta_3\approx\rho_d gV\text{.}
\label{stfb-partial}$$
However, we cannot make the approximation $\sin\theta_2=\sin\theta_3\approx\theta_3$ here. Instead we must consider the Neumann conditions (\[neumann1\])–(\[neumann2\]). We are particularly interested in the effect of small changes in ${\Gamma_{dl}}$ from the perfectly non-wetting value $1+{\Gamma_{dv}}$. It is therefore natural to let $${\Gamma_{dl}}=(1+{\Gamma_{dv}})(1-\varepsilon)$$ and consider the behaviour for $\varepsilon\ll1$. From the Neumann conditions (\[neumann1\])–(\[neumann2\]) we find that $$\begin{aligned}
\theta_1+\theta_2&\approx\pi-\sqrt{2}\left(\frac{\varepsilon}{{\Gamma_{dv}}}\right)^{1/2},\nonumber\\
\theta_1+\theta_3&\approx\pi-\sqrt{2}\left(\frac{\varepsilon}{{\Gamma_{dv}}}\right)^{1/2}(1+{\Gamma_{dv}})\nonumber\end{aligned}$$ and hence $$\theta_2\approx\theta_3+\sqrt{2}{\Gamma_{dv}}^{1/2}\varepsilon^{1/2}.
\label{eqn:Theta2Asy}$$
Equation (\[eqn:Theta2Asy\]) gives some insight into the cause of the different behaviour that is observed when $\varepsilon>0$ (i.e. when the droplet is slightly wetting): as the interfacial deformation $\theta_3$ decreases (corresponding to smaller and smaller droplets), the angle $\theta_2$ saturates at a finite value that is set by the surface tensions in the problem. This is different from the perfectly non-wetting case, when $\theta_2$ decreases with $\theta_3$ without bound. For $\sqrt{2}{\Gamma_{dv}}^{1/2}\varepsilon^{1/2}\ll\theta_3\ll1$, corresponding to moderately small droplets, we shall recover the scalings of the perfectly non-wetting case, (\[theta3\])–(\[Rc\]). However, if instead $\theta_3\ll\sqrt{2}{\Gamma_{dv}}^{1/2}\varepsilon^{1/2}\ll1$, corresponding to extremely small droplets, then we have the new scalings $$\theta_3\approx\frac{\sqrt{2}}{3}D\left[\frac{{\Gamma_{dv}}}{(1+{\Gamma_{dv}})(1+{\Gamma_{dv}}-{\Gamma_{dl}})}\right]^{1/2}{\mbox{Bo}}\propto{\mbox{Bo}}\label{theta3-partial}$$ and $$R_c\approx\sqrt{2}\left[\frac{(1+{\Gamma_{dv}})(1+{\Gamma_{dv}}-{\Gamma_{dl}})}{{\Gamma_{dv}}}\right]^{1/2}{\mbox{Bo}}^{1/2}\propto{\mbox{Bo}}^{1/2}.
\label{Rc-partial}$$ Note that in both the non-wetting and partially wetting cases, the product $R_c\theta_3$ is fixed by vertical force balance to take the value $R_c\theta_3\approx\tfrac{2}{3}D~{\mbox{Bo}}^{3/2}$, provided that $\theta_3\ll1$.
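The asymptotic relation (\[eqn:Theta2Asy\]) is easily checked against the exact Neumann relations; a short numerical sketch (function name ours) is:

```python
import numpy as np

def theta2_minus_theta3(Gamma_dv, eps):
    """Exact theta2 - theta3 from the Neumann relations with
    Gamma_dl = (1 + Gamma_dv)(1 - eps) and gamma_lv scaled to unity."""
    Gdl = (1.0 + Gamma_dv) * (1.0 - eps)
    t12 = np.pi - np.arccos((Gamma_dv**2 + Gdl**2 - 1.0) / (2.0 * Gamma_dv * Gdl))
    t13 = np.arccos((1.0 + Gamma_dv**2 - Gdl**2) / (2.0 * Gamma_dv))
    return t12 - t13

# compare with the asymptotic prediction sqrt(2) * Gamma_dv**0.5 * eps**0.5
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, theta2_minus_theta3(1.0, eps), np.sqrt(2.0 * eps))
```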
The new scaling behaviours ($\theta_3\sim{\mbox{Bo}}$, rather than $\theta_3\sim{\mbox{Bo}}^{1/2}$ in the non-wetting case and vice versa for $R_c$) are verified by comparison with numerical results in fig. \[fgr:theta\_Rc\]. We note that for a given value of $\varepsilon$ (i.e. fixed values of ${\Gamma_{dv}}$ and ${\Gamma_{dl}}$) there is a smooth transition between the two sets of scaling laws as the droplet size, measured via the Bond number ${\mbox{Bo}}$, varies. Furthermore, as $\varepsilon\to0$, i.e. as ${\Gamma_{dl}}\to1+{\Gamma_{dv}}$, the transition between scalings occurs for smaller and smaller droplets: as $\varepsilon$ decreases, (\[eqn:Theta2Asy\]) makes it clear that only the very smallest drops will be affected by partial wetting, while other drops will behave as non-wetting drops to all intents and purposes.
Floating liquid marbles
-----------------------
In recent years, a very striking demonstration of super-hydrophobicity has been the formation of so-called ‘liquid marbles’: droplets of aqueous liquid encapsulated by a super-hydrophobic powder [@aussillous2001liquid; @aussillous2006properties; @mchale2011liquid; @daisuke2015xray]. While liquid marbles are often encountered on solid, effectively rigid, surfaces, one of the most striking features is that simply by coating a droplet of water with hydrophobic grains, the droplet is able to sit on the surface of water without coalescing (see fig. \[Fig:Examples\](*c*)). This coalescence may even be delayed for several weeks [@gao2007ionic]. However, the effective surface tension coefficient of such marbles has been somewhat controversial. A variety of techniques have been proposed to measure this tension, including analysis of the shape of a droplet sitting on a rigid surface [@aussillous2006properties], as well as direct measurement of the capillary pressure within the droplet [@arbatan2011measurement].
In this section, we have seen that even small departures from being perfectly non-wetting can have surprisingly large effects on the behaviour of floating drops, as measured by the interfacial inclination and the radius of contact. Recent experiments [@ooi2016floating] on floating liquid marbles report measurements of both $\theta_3$ and $R_c$ as functions of the Bond number. These experimental results (as determined from digitization of fig. 5 of ref. [@ooi2016floating]) are shown in fig. \[fgr:Marbles\](*c*,*d*) and show that the droplets do not obey simple power-law scalings. It is then natural to ask whether these deviations encode useful information about the state in which liquid marbles float.
![Explaining the experimental data from ref. [@ooi2016floating] with the liquid bridging postulate. (*a*) Possible configuration of the drop-liquid interface of a liquid marble: the coating agglomerates (solid circles) are bridging between the drop and the substrate. Note that the thickness of the air gap and the size of the agglomerates have been exaggerated. (*b*) Mean relative error in data fitting of $\theta_3$ for different values of ${\Gamma_{dv}}$. (*c*)-(*d*) Results of numerical simulations for $\theta_3$ and $r_c$ as functions of the Bond number ${\mbox{Bo}}$ for different values of ${\Gamma_{dv}}$: ${\Gamma_{dv}}=0.971$ (dash-dotted), ${\Gamma_{dv}}=0.995$ (dashed), and ${\Gamma_{dv}}=1$ (solid). Asterisks are used to indicate the experimental data of ref. [@ooi2016floating]. []{data-label="fgr:Marbles"}](Fig6.pdf){width="0.8\columnwidth"}
### The effective interfacial tension
Ooi *et al.* [@ooi2016floating] also present experimental data showing that the marble contact angle, $\theta_1+\theta_2$, is slightly less than $\pi$ (in particular, they found that $\theta_1+\theta_2\approx170^\circ$). This shows that the droplet is not in a perfectly non-wetting state, and hence that ${\Gamma_{dl}}\neq1+{\Gamma_{dv}}$ in our notation. Instead of levitating, as would be required for the liquid marble to be truly non-wetting, we therefore assume that the hydrophobic grains ‘bridge’ both liquid–gas interfaces (as shown in fig. \[fgr:Marbles\](*a*)). This is consistent with the fact that it is energetically favourable for the particles to adsorb at any liquid interface [@mchale2011liquid; @binks2002particles]. This assumption also retains the symmetry that the two liquids are identical, though we note that the two liquid–vapour interfaces must remain out of contact for the liquid marble to sit stably at the interface. In this configuration, then, we expect that the effective tension of the drop–liquid interface is $${\Gamma_{dl}}={\Gamma_{dv}}+{\Gamma_{dv}}=2{\Gamma_{dv}}.$$
The value of ${\Gamma_{dv}}$ (corresponding to the tension of a single particle coated interface) has been reported to take several different values depending on the liquids used, as well as the type and size of grains used. [@Bormashenko2013] The first experiments, performed by Aussillous & Quéré [@aussillous2006properties], reported a value of ${\Gamma_{dv}}\approx1$ for water droplets coated with silica, while glycerol and water droplets coated with lycopodium gave values of ${\Gamma_{dv}}\approx0.70$ and ${\Gamma_{dv}}\approx0.71$, respectively. The experiments of Ooi *et al.* [@ooi2015deformation], using micron-scale polytetrafluoroethylene (PTFE) powder on water, gave, for a single interface, ${\Gamma_{dv}}=0.944$. An independent investigation [@arbatan2011measurement] reported similar results using the same material (${\Gamma_{dv}}=1$ and ${\Gamma_{dv}}=0.971\pm 0.008$, using two distinct methods of measurement). (We report dimensionless values here to avoid confusion from variations in ${\gamma_{lv}}$.) However, we note that despite small disagreements, and with the exception of lycopodium used by Aussillous & Quéré [@aussillous2006properties], all previous measurements of ${\Gamma_{dv}}$ suggest that the coating induces only a small reduction to the original (water-vapour) surface tension. We therefore consider the effect of a small correction from unity, i.e. we set ${\Gamma_{dv}}=1-\delta$ with $\delta\ll1$. Expanding the Neumann conditions for $\delta\ll1$ and using ${\Gamma_{dl}}=2{\Gamma_{dv}}$, one easily finds $$\theta_1+\theta_2\approx \pi-(1-{\Gamma_{dv}})^{1/2}$$ and hence $${\Gamma_{dv}}\approx1-(\pi-\theta_1-\theta_2)^2.
\label{eqn:GdvMarAng}$$
### Comparison with experimental results
Based on (\[eqn:GdvMarAng\]), it is tempting to try to infer the value of ${\Gamma_{dv}}$ from measurements of the marble angle $\theta_1+\theta_2$. Using the range of values measured by Ooi *et al.* [@ooi2016floating], namely $165^\circ\lesssim\theta_1+\theta_2\lesssim175^\circ$, we find that $0.931\lesssim{\Gamma_{dv}}\lesssim0.992$. While these values are consistent with previous measurements, they do not constrain the value of ${\Gamma_{dv}}$ particularly closely.
An alternative approach is to use the experimental measurements of $\theta_3$ as a function of ${\mbox{Bo}}$ (see fig. \[fgr:Marbles\](*c*)), using ${\Gamma_{dv}}$ as a single fitting parameter and constraining ${\Gamma_{dl}}=2{\Gamma_{dv}}$. Figure \[fgr:Marbles\](*b*) shows the mean relative error between experiments (measured for six different volumes by Ooi *et al.* [@ooi2016floating]) and our numerical results for different values of ${\Gamma_{dv}}$. In particular, since our numerical scheme computes the volume by fixing $\theta_3$, we define this relative error to be the relative discrepancy between the experimental volume and the computed volume, for a given $\theta_3$. In fig. \[fgr:Marbles\](*b*), we observe that the mean relative error varies continuously with ${\Gamma_{dv}}$ but has a minimum value at ${\Gamma_{dv}}\approx 0.995$.
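Schematically, the fit is a one-parameter minimization of this mean relative volume error. The sketch below shows the outer loop only; it assumes a function `marble_volume(theta3, Gamma_dv)` returning the dimensionless volume computed by the numerical scheme of Appendix A (not reproduced here), with ${\Gamma_{dl}}=2{\Gamma_{dv}}$ imposed inside that scheme, and all names are ours.

```python
import numpy as np

def mean_relative_error(Gamma_dv, theta3_exp, volume_exp, marble_volume):
    """Mean relative discrepancy between the experimental volumes and the
    volumes computed (at the same theta3) for a trial value of Gamma_dv."""
    v_exp = np.asarray(volume_exp, dtype=float)
    v_model = np.array([marble_volume(t3, Gamma_dv) for t3 in theta3_exp])
    return np.mean(np.abs(v_model - v_exp) / v_exp)

def fit_Gamma_dv(theta3_exp, volume_exp, marble_volume,
                 candidates=np.linspace(0.95, 1.0, 51)):
    """Brute-force scan over candidate Gamma_dv values."""
    errors = [mean_relative_error(G, theta3_exp, volume_exp, marble_volume)
              for G in candidates]
    return candidates[int(np.argmin(errors))]
```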
Using the fitted value ${\Gamma_{dv}}\approx0.995$ we may also compute the variation of $R_c$ with ${\mbox{Bo}}$ (see fig. \[fgr:Marbles\](*d*)). This gives an independent verification of our fitting (which was performed only using the interfacial inclination, $\theta_3$). We also note that $\theta_3({\mbox{Bo}})$ is a very sensitive function of ${\Gamma_{dv}}$: the theoretical results with ${\Gamma_{dv}}=1$ and ${\Gamma_{dv}}=0.971$, which correspond to $\theta_1+\theta_2=180^{\circ}$ and $\theta_1+\theta_2=170^{\circ}$, respectively, show a noticeable discrepancy when compared with the experimental data. Finally, we note that for ${\Gamma_{dv}}=0.995$, $\theta_1+\theta_2\approx 176^\circ$, which is just outside the range of values reported experimentally.
In summary, our investigation of the model adapted to liquid marbles suggests that measurements of $\theta_3$ and $R_c$ as functions of marble size, ${\mbox{Bo}}$, provide a relatively sensitive way of estimating ${\Gamma_{dv}}$. This sensitivity is associated with the difference in scalings of $\theta_3$ and $R_c$ with ${\mbox{Bo}}$ depending on whether ${\Gamma_{dv}}=1$ or not: the transition between the ${\mbox{Bo}}^{1/2}$ and ${\mbox{Bo}}$ scalings occurs gradually and encodes a good deal of information about the precise interfacial tensions involved. We also note that the value of ${\Gamma_{dv}}$ we obtain through this procedure is consistent with previous estimates but suggests that experimental errors in those fitting procedures give a relatively large uncertainty in the inferred value of ${\Gamma_{dv}}$.
We note that in the above calculation, we have assumed that the interface of liquid marbles behaves purely as that of a liquid drop. In fact, particle–coated interfaces have an elastic character [@Vella2004] consistent with a bending rigidity $B\sim{\gamma_{lv}}d^2$ where $d$ is the particle diameter. This elastic contribution can be neglected in comparison with the pressure due to surface tension, provided that $d/r_0\ll1$. Nevertheless, differences in particle packings may explain some of the variability in the effective surface tension coefficient ${\Gamma_{dv}}$ reported previously.
Deformable drops
================
The analysis of the previous section relied on the assumption that the drops are relatively undeformable. This may be realized with a combination of large ${\Gamma_{dv}}$ and/or small droplet Bond number. However, the alternative limit of relatively small ${\Gamma_{dv}}$ and/or large droplet Bond number is also of interest. For example, Leidenfrost droplets, which are rendered non-wetting by a layer of vapour, may ‘float’ at a liquid interface [@adda2016inverse; @maquet2016leidenfrost], while a vibrating bath may support a long-lived droplet of the same fluid at the interface [@couder2005bouncing; @bush2015] (see fig. \[Fig:Examples\](*b*)). In such scenarios, the surface tension of the liquid droplet is usually the same as (or similar to) that of the liquid bath, so that ${\Gamma_{dv}}\approx1$. (One exception to this is water drops undergoing the Leidenfrost effect on a bath of liquid nitrogen, for which ${\Gamma_{dv}}\approx8$.[@adda2016inverse]) We begin by considering recent experimental work on Leidenfrost droplets [@maquet2016leidenfrost], which requires numerical analysis. We then move on to consider the case of much larger droplets, which form ‘puddles’ analogous to those observed on solid substrates [@aussillous2006properties], and can be understood analytically.
Leidenfrost drops
-----------------
Leidenfrost drops levitate due to the evaporation of the droplet — the resulting vapour produces a thin cushioning layer that prevents the droplet from touching the substrate [@quere2013leidenfrost]. In general the temperature of the substrate must exceed the droplet’s boiling point by a great deal to allow for this levitation. However, drops may levitate on a heated liquid bath even when the temperature of the bath exceeds the droplet’s boiling point only slightly since the surface in such cases is very smooth [@maquet2016leidenfrost], as shown in fig. \[Fig:Examples\](*a*). Determining the shape of the levitating droplet in such circumstances is generally complicated since the thickness of the vapour layer varies spatially. Such a calculation has been performed for Leidenfrost droplets on a rigid substrate [@Snoeijer2009; @Maquet2015] but only for floating Leidenfrost droplets in the limit of small interfacial deformations [@maquet2016leidenfrost]. Here we show that the model of non-wetting droplets developed above is able to explain features of experimental results obtained with larger interfacial deformations.
Maquet *et al.* [@maquet2016leidenfrost] present data for ethanol drops levitating on a bath of heated silicone oil. In particular, they measure the depth of the base of the droplet below the undeformed interface ($z_{\mathrm{min}}$ in the notation of fig. \[fgr:diagram\]) as a function of the maximum radius of the droplet. These data are shown in fig. \[fgr:maquet\_theory\], together with the results of the detailed (but small-slope) lubrication model presented there. We note that for relatively large drops the discrepancy between the theory and experiment grows, which Maquet *et al.* [@maquet2016leidenfrost] attribute to the growing interface deflections as the drops become larger.
Here we seek to understand this discrepancy by using our models of non-wetting drops: we do not take into account the details of the vapour layer, but merely assume that the droplet is non-wetting. (This corresponds to assuming that the vapour layer is very thin in comparison to the size of the droplet.) Ethanol drops on a silicone oil pool have ${\Gamma_{dv}}\approx1.34$ (using measurements of the capillary length from fig. 3 of ref. [@maquet2016leidenfrost]), and hence ${\Gamma_{dl}}=1+{\Gamma_{dv}}\approx2.34$. Using these values and computing the value of $z_{\mathrm{min}}$ numerically, we find good agreement between experimental data and the simple theory (see fig. \[fgr:maquet\_theory\]). However, we note that it does over-estimate the depth of the drops at a particular value of ${r_{\mathrm{max}}}$. This discrepancy may be the result of rounding errors in inferring the value of ${\Gamma_{dv}}$ or, alternatively, a small shear from the Leidenfrost layer that acts to spread the drop further (and hence make it less deep for a given droplet volume). We conclude that Maquet *et al.* [@maquet2016leidenfrost] are correct that large interface deformations become important for large droplets; we note, further, that the non-wetting droplet model may be used to provide a reasonable approximation to the shape of such drops.
![Experimental results for the maximum vertical displacement of the liquid substrate for a Leidenfrost drop of maximum radius $r_\text{max}$ ($\times$) [@maquet2016leidenfrost]. The blue (dashed) curve gives the theoretical prediction by Maquet *et al.* [@maquet2016leidenfrost] and the red (solid) curve gives the theoretical prediction with the non-wetting drop model of §2.[]{data-label="fgr:maquet_theory"}](Fig7.pdf){width="0.6\columnwidth"}
Very deformable drops
---------------------
The results above are purely numerical. However, it is possible to obtain some analytical understanding of the behaviour of very deformable non-wetting droplets (so that ${\Gamma_{dl}}=1+{\Gamma_{dv}}$). We are not aware of experimental data in this limit, but note that droplets that bounce on a vibrating bath may be subject to large deformations (since the accelerations, and hence effective gravity, are large) [@couder2005bouncing; @Pucci2011].
The analysis of relatively undeformable drops in §\[sec:undeform\] exploited the knowledge that in this limit the drop should remain approximately spherical. However, the richness of this problem lies in the ability of all three phases to deform simultaneously. To obtain a more complete picture of our problem we therefore focus now on the opposite regime: that in which the drop is very deformable.
It is well known that large deformable drops form ‘puddles’ or ‘pancakes’ — drops flattened by gravity to have a thickness comparable to the capillary length [@aussillous2006properties]. In this work we are primarily interested in capillary effects and it therefore seems that one cannot have droplets that are highly deformable, but supported primarily by surface tension (since $D>1$). This apparent contradiction is resolved by recalling that the floating droplet problem contains multiple capillary lengths. In particular, for heavy drops, it is possible for gravity to dominate the drop shape (${\mbox{Bo}_d}\gg1$), whilst simultaneously surface tension provides sufficient restoring force for the drop to remain afloat, i.e. ${\mbox{Bo}}\lesssim1$.
Recalling the numerical findings presented in fig. \[fgr:transition\](*a*) we expect that as $r_0$ increases, gravity becomes more important, encouraging the drop to spread and deform the substrate to minimize its (gravitational) potential energy. This behaviour is reminiscent of the findings of Aussillous & Quéré [@aussillous2006properties] for non-wetting drops on rigid substrates: the upper portion of the drop is essentially flat. An important distinction from the rigid substrate case, however, is that a bulge is formed below the TPCL. This reduces the radial length scale of the drop (for a given volume) by acting as a reservoir, in which the drop volume can be stored. We shall analyse this shape quantitatively later in this section, seeking to determine the radial position of the TPCL, $r_c$.
### Rigid substrate approximation {#rigid}
To obtain a first approximation for $r_c$ we imagine that the substrate is rigid. Intuitively, we expect this approximation to be valid provided that ${\Gamma_{dv}}\ll 1$, since then the drop will prefer to minimize its gravitational potential energy by spreading horizontally instead of deforming the substrate vertically (see e.g. fig. \[fgr:transition\](*b*)).
We begin by following the scaling argument of Aussillous & Quéré [@aussillous2006properties] for deformable drops on rigid substrates. Such drops adopt a puddle shape, with a constant thickness $h$ over most of their length. To determine $h$, we balance the hydrostatic pressure along the central plane of the droplet, ${\rho_d}gh/2$, with the curvature-induced pressure from the rounded edges of the droplet, ${\gamma_{dv}}/(h/2)$. In this approximation, the height of the puddle satisfies $$h\approx 2\sqrt{\frac{{\Gamma_{dv}}}{D}}{\ell_c}=2{\ell_c}^D
\label{puddle_h}$$ where ${\ell_c}^D=\sqrt{{\Gamma_{dv}}/D}{\ell_c}$ is the capillary length of the drop. The position of the contact line, $r_c$, may then be estimated from the volume constraint, $V \approx \pi r_c^2 h$, to give $$R_c=\frac{r_c}{{\ell_c}}\approx\sqrt{\frac{2}{3}} \left(\frac{D}{{\Gamma_{dv}}}\right)^{1/4}{\mbox{Bo}}^{3/4}.
\label{puddle_rc}$$ We expect that this scaling may hold for ${\ell_c}^D\ll r_c \ll {\ell_c}$: the drop should be large enough to be a puddle, but small enough to remain afloat. We also note that the droplet radius given in (\[puddle\_rc\]) scales as ${\mbox{Bo}}^{3/4}$, which is significantly different from the $r_c\propto{\mbox{Bo}}^{1/2}$ and $r_c\propto{\mbox{Bo}}$ scalings that we saw for relatively undeformable droplets in §\[sec:undeform\].
Numerical results, presented in fig. \[fgr:deform\_theory\], show that the rigid substrate approximation, (\[puddle\_rc\]), provides an upper bound on $r_c$ for a given value of ${\mbox{Bo}}$. However, we also note that there is a significant discrepancy, particularly as ${\mbox{Bo}}$ increases: (\[puddle\_rc\]) over-estimates the lateral size of the droplet since a significant amount of volume is stored in the bulged, central part of the droplet (thereby decreasing $r_c$). To understand and account for this discrepancy, we construct a refined analytical model of the drop that takes this bulging into consideration.
### Refined analytical model {#refine}
![Schematic diagram of the refined analytical model.[]{data-label="fgr:deform_diagram"}](Fig8_Revised.pdf){width="0.8\columnwidth"}
We begin by modelling the bulge as a spherical cap with radius of curvature $\mathcal{R}$ (since the drop is small compared to the capillary length of the substrate, this seems reasonable). We also assume that the drop-vapour interface has negligible curvature and is level with the height of an undeformed substrate, so that ${z_{\mathrm{max}}}\approx0$. As in fig. \[fgr:deform\_diagram\] we specify two heights for the droplet: $h_1$ is the height between the top apex and the TPCL (so that the vertical position of the TPCL is $z\approx-h_1$) and $h_2$ is the maximal thickness of the drop.
At the base of the drop, the Laplace–Young equation for the drop–liquid interface gives $$({\rho_d}-{\rho_l})gh_2= \frac{2{\gamma_{dl}}}{\mathcal{R}}$$ and hence $$h_2= \frac{2{\Gamma_{dl}}}{D-1}\frac{{\ell_c}^2}{\mathcal{R}}\text{.}
\label{h2}$$ If we then assume that the bulge is relatively small, and may be approximated by a parabola, we may write $$h_2-h_1= \frac{r_c^2}{2\mathcal{R}}\text{.}
\label{refine_parabola}$$
The balance of pressures that led to (\[puddle\_h\]) applies equally to $h_1$, while (\[h2\]) determined $h_2$. We may therefore substitute these results into (\[refine\_parabola\]) to write $$\begin{aligned}
\mathcal{R}=\frac{{\Gamma_{dl}}}{D-1}\frac{{\ell_c}^2}{{\ell_c}^D}\left[1-\frac{D-1}{4{\Gamma_{dl}}}\left(\frac{r_c}{{\ell_c}}\right)^2\right]\text{.}
\label{R_expression1}\end{aligned}$$ We rescale $\mathcal{R}$ by ${\ell_c}$ and rewrite the right-hand side of (\[R\_expression1\]) in terms of ${\ell_c}$: $$\begin{aligned}
\frac{\mathcal{R}}{{\ell_c}}=\frac{{\Gamma_{dl}}}{D-1}\sqrt{\frac{D}{{\Gamma_{dv}}}}\left[1-\frac{D-1}{4{\Gamma_{dl}}}\left(\frac{r_c}{{\ell_c}}\right)^2\right]\text{.}
\label{R_expression2}
\end{aligned}$$ Since we are assuming that $r_c/{\ell_c}\ll1$ and ${\Gamma_{dl}}=1+{\Gamma_{dv}}$ (for non-wetting drops), this may be simplified to give $$\frac{\mathcal{R}}{{\ell_c}}\approx \frac{1}{D-1}\sqrt{\frac{D}{{\Gamma_{dv}}}}\text{.}
\label{R_expansion}$$ Now, in order to find a relationship between $r_c$ and $V$, we must consider the conservation of volume. Combining the cylindrical approximation for the volume of the flat cap with the volume of the paraboloid, we have $$\begin{aligned}
\frac{4\pi}{3}{\mbox{Bo}}^{3/2}{\ell_c}^3&\approx &\pi h_1r_c^2+\pi r_c^2 (h_2-h_1)/2\label{refine_volume_1}\\
&=&2\pi {\ell_c}^D r_c^2 +\frac{\pi}{4\mathcal{R}}r_c^4,
\label{refine_volume}\end{aligned}$$ using (\[puddle\_h\]) and (\[refine\_parabola\]). Taking the leading-order approximation of $\mathcal{R}$ from (\[R\_expansion\]), which is independent of $r_c$, we find a quadratic equation for $r_c^2$: $$\frac{1}{8}(D-1)\left(\frac{r_c}{{\ell_c}}\right)^4+\left(\frac{r_c}{{\ell_c}}\right)^2-\frac{2}{3}\frac{{\mbox{Bo}}^{3/2}{\ell_c}}{{\ell_c^{D}}}=0\text{,}$$ and hence $$\frac{r_c}{{\ell_c}}=R_c=\frac{2}{\sqrt{D-1}}\left\{\left[1+\frac{D-1}{3}\frac{{\mbox{Bo}}^{3/2}{\ell_c}}{{\ell_c}^D}\right]^{1/2}-1\right\}^{1/2}\text{.}
\label{refine_final}$$ At first sight, $r_c$ is now expressed in terms of both ${\ell_c}$ and ${\ell_c}^D$, which is different from the results we have obtained thus far. However, before we discuss the physical significance of this expression, we first establish its connection with the rigid substrate model presented in §\[rigid\]. This rigid substrate limit should be recovered in the limit ${\mbox{Bo}}\lll1$; a Taylor expansion of (\[refine\_final\]) shows that this expectation is correct.
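For the reader's convenience, we record this expansion explicitly; the shorthand $x$ is introduced only for this calculation: $$R_c=\frac{2}{\sqrt{D-1}}\left[(1+x)^{1/2}-1\right]^{1/2}\approx\sqrt{\frac{2x}{D-1}}\left(1-\frac{x}{8}\right)=\sqrt{\frac{2}{3}}\left(\frac{D}{{\Gamma_{dv}}}\right)^{1/4}{\mbox{Bo}}^{3/4}\left[1-\frac{D-1}{24}\frac{{\mbox{Bo}}^{3/2}{\ell_c}}{{\ell_c}^D}+O(x^{2})\right],\qquad x\equiv\frac{D-1}{3}\frac{{\mbox{Bo}}^{3/2}{\ell_c}}{{\ell_c}^D},$$ whose leading term is precisely the rigid substrate scaling (\[puddle\_rc\]).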
Continuing this Taylor expansion, we find that the next order correction is negative, corresponding to a decrease in the contact radius $r_c$ compared to the rigid substrate model. This therefore captures something of the ‘bulging’ of the droplet that is neglected by the rigid substrate model. Furthermore, the size of the correction also increases with ${\mbox{Bo}}$, consistent with the preliminary numerical results presented in fig. \[fgr:transition\](*a*).
![Scaling behaviour of the contact radius as a function of drop volume in the highly deformable limit. Crosses ($\times$) denote numerical results with ${\Gamma_{dv}}=0.1$ and $D$ varying, while bullets ($\bullet$) denote results with $D=2.5$ and ${\Gamma_{dv}}$ varying. The black solid curve indicates the theoretical prediction of the refined model, (\[refine\_final\]), while the dashed line indicates the scaling prediction for rigid substrates, (\[puddle\_rc\]), which is the generalization of a previous result [@aussillous2006properties]. The inset gives the same data with the nondimensionalization suggested by (\[puddle\_rc\]), highlighting that the collapse in the main figure is a considerable improvement.[]{data-label="fgr:deform_theory"}](Fig9.pdf){width="0.7\columnwidth"}
We compare the predictions of the two models (the rigid substrate and the refined models) with numerical simulations in fig. \[fgr:deform\_theory\]. We see that when plotting the data in the way suggested by the rigid substrate model (see inset of fig. \[fgr:deform\_theory\]) the agreement is qualitatively good, but that there is considerable scatter in the data. However, the rescaling suggested by the refined model, (\[refine\_final\]), shown in the main panel of fig. \[fgr:deform\_theory\], shows first that the data collapse significantly better when plotted in this way, and, second, that the trend of (\[refine\_final\]) gives a much better quantitative account of the numerical results. The discrepancy that remains is presumably a result of the approximation that ${z_{\mathrm{max}}}\approx0$ for drops with ${\mbox{Bo}}\lesssim1$. Finally, we note that in the limit of very small Bond numbers, even drops with ${\Gamma_{dv}}\ll1$ eventually become relatively undeformable and return to the limit considered in §\[sec:undeform\].
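To illustrate how the two predictions may be evaluated in practice, the following minimal sketch (included purely as an illustration, not the code used to generate fig. \[fgr:deform\_theory\]) computes the rigid-substrate estimate (\[puddle\_rc\]) and the refined estimate (\[refine\_final\]) of $R_c$; the parameter values are illustrative only.

```python
import numpy as np

def rc_rigid(Bo, D, Gamma_dv):
    """Rigid-substrate puddle scaling, eq. (puddle_rc)."""
    return np.sqrt(2.0/3.0) * (D/Gamma_dv)**0.25 * Bo**0.75

def rc_refined(Bo, D, Gamma_dv):
    """Refined model, eq. (refine_final); uses l_c^D / l_c = sqrt(Gamma_dv/D)."""
    lcD = np.sqrt(Gamma_dv/D)                       # drop capillary length in units of l_c
    inner = np.sqrt(1.0 + (D - 1.0)/3.0 * Bo**1.5/lcD)
    return 2.0/np.sqrt(D - 1.0) * np.sqrt(inner - 1.0)

# Illustrative parameters (D = 2.5, Gamma_dv = 0.1, as in the figure legend)
for Bo in (0.05, 0.2, 0.5, 1.0):
    print(Bo, rc_rigid(Bo, 2.5, 0.1), rc_refined(Bo, 2.5, 0.1))
```

As expected, the two estimates agree for small ${\mbox{Bo}}$, while the refined value falls increasingly below the rigid-substrate scaling as ${\mbox{Bo}}$ grows, reflecting the volume stored in the bulge.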
Conclusions
===========
We have considered the flotation of liquid drops on the surface of another liquid, focusing in particular on the case where the droplet is close to being perfectly non-wetting. This is a scenario that is common in a number of applications, including liquid marbles, Leidenfrost droplets and droplets on a vibrating bath of the same liquid.
We presented numerical and analytical results for a range of different parameters, but focussed in particular on two regimes of behaviour: relatively undeformable droplets and very deformable droplets. In the first case, we showed that the floating state can be understood using simple ideas of force balance. In the limit of small droplets, this force balance can be simplified to the extent that we are able to present analytical results for the state in which the droplet floats. In the second case (very deformable droplets), we found that the classic picture of a flat pancake or puddle is significantly modified because of the deformability of the liquid interface on which the droplet floats. We provided a refined analytical model that accounts for this deformability and provides improved agreement with numerical results. This model also emphasizes that the problem involves multiple capillary lengths, ${\ell_c}$ and ${\ell_c^{D}}$, and that it is a combination of the two that determines the behaviour of the system. We also demonstrated that some features of the quasi-static levitation of Leidenfrost drops are quantitatively captured by modelling such drops as static, perfectly non-wetting drops. This provides new insight into their floating state, without necessarily developing detailed hydrodynamic models of levitation.
A key finding of our analysis is that the properties of the floating state for relatively undeformable drops are surprisingly sensitive to the relative tensions of the drop–vapour and drop–liquid interfaces. In particular, the meniscus inclination, $\theta_3$, and contact line radius exhibit different size-dependent scalings depending on these relative tensions. We suggest that the measurement of the floating state of such droplets may be a sensitive, non-invasive assay for the measurement of interfacial tensions close to perfect non-wetting. This possibility seems particularly relevant for the particle-coated interfaces that occur in liquid marbles, but which are also used to stabilize emulsions [@Binks2006book].
Acknowledgements {#acknowledgements .unnumbered}
================
The research leading to these results has received funding from the European Research Council under the European Union’s Horizon 2020 Programme / ERC Grant Agreement no. 637334 (DV).
Appendix A: Dimensionless problem and numerical solution {#appendix-a-dimensionless-problem-and-numerical-solution .unnumbered}
========================================================
Dimensionless problem {#dimensionless-problem .unnumbered}
---------------------
The non-dimensionalization reported in §\[sec:nondim\] transforms the problem to: $$\begin{aligned}
\frac{{\mathrm{d}}R_i}{{\mathrm{d}}S}&=&\cos\phi_i, \label{one}\\
\frac{{\mathrm{d}}Z_i}{{\mathrm{d}}S}&=&\sin\phi_i, \label{two}\\
\frac{{\mathrm{d}}\phi_i}{{\mathrm{d}}S}&=&\begin{cases}\frac{D}{{\Gamma_{dv}}}Z_i-\frac{\sin{\phi_i}}{R_i}-\frac{D}{{\Gamma_{dv}}}Z_{0} & \mbox{for}\ i=1\\
\frac{1-D}{{\Gamma_{dl}}}Z_i-\frac{\sin{\phi_i}}{R_i}+\frac{D}{{\Gamma_{dl}}}Z_{0}& \mbox{for}\ i=2\\
Z_i-\frac{\sin{\phi_i}}{R_i}& \mbox{for}\ i=3\text{.}\end{cases} \label{three}\end{aligned}$$
Using $S=0$ to denote the apex of each drop interface, we have the symmetry boundary conditions $$R_{1,2}(0)=0,\quad \phi_{1,2}(0)=0.$$ The position of the TPCL is denoted by $S_{1,2}^{(c)}$ in arc-length coordinates. At this point the appropriate boundary conditions are continuity with the expressions for the outer meniscus. Denoting the coordinates of the TPCL by $(R_c,Z_c)$, we find that
$$R_{1,2}(S_{1,2}^{(c)})=R_c,\qquad Z_{1,2}(S_{1,2}^{(c)})=Z_c,\qquad \phi_{i}(S_i^{(c)})=(-1)^i\theta_i \quad (i=1,2),$$
$$R_3(0)=R_c,\qquad Z_3(0)=Z_c,\qquad \phi_3(0)=\theta_3.$$
Finally, we apply the decay condition for the outer meniscus $$Z_3(\infty)=0.$$ We therefore have a total of 14 boundary conditions for the nine differential equations (\[one\])–(\[three\]). This apparent discrepancy is explained by the system having five additional quantities $Z_{0}$, $R_c$, $\theta_3$, $S_1^{(c)}$ and $S_2^{(c)}$ that are not known *a priori*. (The contact angles $\theta_1$ and $\theta_2$ are determined in terms of $\theta_3$ and the interfacial energies ${\Gamma_{dv}}$ and ${\Gamma_{dl}}$ by the Neumann relations; similarly, $Z_c$ is determined by the outer meniscus, once $\theta_3$ and $R_c$ are given.)
Numerical scheme {#numerics .unnumbered}
----------------
Following ref. [@phan2012can], and for convenience, we impose $D$ and $\theta_3$ and compute the drop volume $V$ that would give this angle. However, the height and radial position of the TPCL are unknown. We therefore solve for the upper and lower drop surfaces using a shooting method that terminates when both interfaces meet at the TPCL (this determines the values of $S_{1,2}^{(c)}$ since $\phi_1(S_1^{(c)})=-\theta_1$ and $\phi_2(S_2^{(c)})=\theta_2$ are given). This stage of the numerics is completed via the EventLocator function in *Mathematica*. In practice, a change of variables in (\[one\])–(\[three\]) facilitates the solution by removing the coordinate singularity near the drop apices; we let $\phi_{1,2}(S)=S\psi_{1,2}(S)$ and $R_{1,2}(S)=S\eta_{1,2}(S)$.
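As an illustration of this shooting step, the following minimal sketch (written in Python with `scipy`, rather than the *Mathematica*/MATLAB workflow used here) integrates equations (\[one\])–(\[three\]) for the drop–vapour interface $i=1$ from the apex until the event $\phi_1=-\theta_1$ is detected. The apex singularity is regularised by its limiting value rather than by the change of variables above, and all numerical values are illustrative guesses.

```python
# Minimal shooting sketch for the drop-vapour interface (i = 1).
# All parameter values below are illustrative guesses, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

D, Gdv = 2.5, 0.5                      # assumed density and tension ratios
Z0, Z_apex, theta1 = 0.8, 0.3, 2.0     # assumed pressure constant, apex height, contact angle

def rhs(S, y):
    R, Z, phi = y
    # near the apex sin(phi)/R tends to dphi/dS, giving the regularised limit below
    curv = np.sin(phi)/R if R > 1e-8 else 0.5*(D/Gdv)*(Z - Z0)
    return [np.cos(phi), np.sin(phi), (D/Gdv)*(Z - Z0) - curv]

def reach_contact_angle(S, y):
    return y[2] + theta1               # zero when phi_1 = -theta_1 at the TPCL
reach_contact_angle.terminal = True

sol = solve_ivp(rhs, [0.0, 50.0], [1e-10, Z_apex, 0.0],
                events=reach_contact_angle, max_step=1e-2, rtol=1e-8)
S1c, Rc, Zc = sol.t[-1], sol.y[0, -1], sol.y[1, -1]   # candidate TPCL for this guess
print(S1c, Rc, Zc)
```

In the actual scheme the analogous integration is performed for the lower interface $i=2$, and the shooting parameters are adjusted until the two interfaces meet at a common TPCL.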
To solve for the outer meniscus, we use the MATLAB boundary value problem solver `bvp4c`. This outer meniscus problem is decoupled from the full problem once $R_c$ is determined from the first step. Finally, all three interfaces are matched. This hybrid method (using both Mathematica and MATLAB) is easily achieved using MATLink.
Verification of code {#verification-of-code .unnumbered}
--------------------
In the absence of previous analytical results with which to compare our numerical calculations, we verify numerically that the drop satisfies a global force balance condition: the generalized Archimedes’ Principle [@keller1998; @vella2015floating] shows that the restoring force on the drop equals the total weight of liquid displaced (including that displaced in the menisci). For practical purposes, it is often simpler to think of the weight of liquid that is displaced by the wetted portion of the drop, i.e. ${\rho_l}g\tilde{V}$, and the weight displaced in the menisci separately since the latter can be equated to the vertical force from surface tension acting along the contact line [@vella2015floating]. We therefore have that the global force balance reads $$\rho_d gV=\rho_lg\tilde{V}+2\pi\gamma_{lv} r_c \sin\theta_3.
\label{fb}$$ We check that this is satisfied to within $0.01\%$ in all of the numerical results presented here. The agreement between numerical and asymptotic results appropriate to small droplets gives a final check on the validity of the numerical results.
[10]{}
M. Adda-Bedia, S. Kumar, F. Lechenault, S. Moulinet, M. Schillaci, and D. Vella. Inverse Leidenfrost effect: Levitating drops on liquid nitrogen. , 32(17):4179–4188, 2016.
T. Arbatan and W. Shen. Measurement of the surface tension of liquid marbles. , 27(21):12923–12929, 2011.
P. Aussillous and D. Qu[é]{}r[é]{}. Liquid marbles. , 411(6840):924–927, 2001.
P. Aussillous and D. Qu[é]{}r[é]{}. Properties of liquid marbles. In [*Proc. R. Soc. A*]{}, volume 462, pages 973–999. The Royal Society, 2006.
I. H. Balchin, E. A. Boucher, and M. J. B. Evans. Capillary phenomena: 30. the properties of heavy oil lenses on water. , 134(2):312–319, 1990.
B. P. Binks. Particles as surfactants: similarities and differences. , 7(1):21–41, 2002.
B. P. Binks and T. S. Horozov. . Cambridge University Press, 2006.
E. Bormashenko. Liquid marbles, elastic nonstick droplets: From minireactors to self- propulsion. , 33:663–669, 2017.
E. Bormashenko, Y. Bormashenko, R. Grynyov, H. Aharoni, G. Whyman, and B. P. Binks. Self-propulsion of liquid marbles: [L]{}eidenfrost-like levitation driven by [M]{}arangoni flow. , 119:9910–9915, 2015.
E. Bormashenko, Y. Bormashenko, and A. Musin. Water rolling and floating upon water: marbles supported by a water/marble interface. , 333(1):419–421, 2009.
E. Bormashenko, Y. Bormashenko, A. Musin, and Z. Barkay. On the mechanism of floating and sliding of liquid marbles. , 10(4):654–656, 2009.
E. Bormashenko, A. Musina, G. Whyman, Z. Barkay, A. Starostinc, V. Valtsifer, and V. Strelnikov. Revisiting the surface tension of liquid marbles: [M]{}easurement of the effective surface tension of liquid marbles with the pendant marble method. , 425:15–23, 2013.
E. A. Boucher. Capillary phenomena: properties of systems with fluid/fluid interfaces. , 43(4):497, 1980.
J. C. Burton, F. M. Huisman, P. Alison, D. Rogerson, and P. Taborek. Experimental and numerical investigation of the equilibrium geometry of liquid lenses. , 26(19):15316–15324, 2010.
J. W. M. Bush. Pilot-wave hydrodynamics. , 47:269–292, 2015.
Y. Couder, E. Fort, C.-H. Gautier, and A. Boudaoud. From bouncing to floating: noncoalescence of drops on a fluid bath. , 94(17):177801, 2005.
A. Elcrat and R. Treinen. Numerical results for floating drops. , pages 241–249, 2005.
L. Gao and T. J. McCarthy. Ionic liquid marbles. , 23(21):10445–10447, 2007.
J. B. Keller. Surface tension force on a partly submerged body. , 10:3009–3010, 1998.
I. Langmuir. Oil lenses on water and the nature of monomolecular expanded films. , 1:756–776, 1933.
L. Mahadevan, M. Adda-Bedia, and Y. Pomeau. Four-phase merging in sessile compound drops. , 451:411–420, 2002.
L. Mahadevan and Y. Pomeau. Rolling droplets. , 11(9):2449–2453, 1999.
L. Maquet, M. Brandenbourger, B. Sobac, A.-L. Biance, P. Colinet, and S. Dorbolo. Leidenfrost drops: Effect of gravity. , 110:24001, 2015.
L. Maquet, B. Sobac, B. Darbois-Texier, A. Duchesne, M. Brandenbourger, A. Rednikov, P. Colinet, and S. Dorbolo. Leidenfrost drops on a heated liquid pool. , 1:053902, Sep 2016.
D. Matsukuma, H. Watanabe, A. Fujimoto, K. Uesugi, A. Takeuchi, Y. Suzuki, A. Jinnai, and A. Takahara. X-ray computerized tomography observation of the interfacial structure of liquid marbles. , 88(1):84–88, 2015.
G. McHale and M. I. Newton. Liquid marbles: principles and applications. , 7(12):5473–5481, 2011.
G. P. Neitzel and P. Dell’Aversana. Noncoalescence and nonwetting behavior of liquids. , 34(1):267–289, 2002.
I. D. Nissanka and P. D. Yapa. Oil slicks on water surface: Breakup, coalescence, and droplet formation under breaking waves. , 114:480–493, 2017.
C. H. Ooi, C. Plackowski, A. V. Nguyen, R. K. Vadivelu, J. A. [St John]{}, D. V. Dao, and N.-T. Nguyen. Floating mechanism of a small liquid marble. , 6:21777, 2016.
C. H. Ooi, R. K. Vadivelu, J. [St John]{}, D. V. Dao, and N.-T. Nguyen. Deformation of a floating liquid marble. , 11(23):4576–4583, 2015.
C. M. Phan. Stability of a floating water droplet on an oil surface. , 30(3):768–773, 2014.
C. M. Phan, B. Allen, L. B. Peters, T. N. Le, and M. O. Tade. Can water float on oil? , 28(10):4609–4613, 2012.
H. M. Princen and S. G. Mason. Shape of a fluid drop at a fluid-liquid interface. ii. theory for three-phase systems. , 20(3):246–266, 1965.
G. Pucci, E. Fort, M. [Ben Amar]{}, and Y. Couder. Mutual adaptation of a Faraday instability pattern with its flexible boundaries in floating fluid drops. , 106:024503, 2011.
P. R. Pujado and L. E. Scriven. Sessile lenticular configurations: translationally and rotationally symmetric lenses. , 40(1):82–98, 1972.
D. Qu[é]{}r[é]{}. Leidenfrost dynamics. , 45:197–215, 2013.
J. H. Snoeijer, P. Brunet, and J. Eggers. Maximum size of drops levitated by an air cushion. , 79:036307, 2009.
P.-G. de Gennes, F. Brochard-Wyart, and D. Qu[é]{}r[é]{}. . Springer Science & Business Media, 2013.
D. Vella. Floating versus sinking. , 47:115–135, 2015.
D. Vella, P. Aussillous, and L. Mahadevan. Elasticity of an interfacial particle raft. , 68:212–217, 2004.
D. Vella, D.-G. Lee, and H.-Y. Kim. The load supported by small floating objects. , 22(14):5979–5981, 2006.
|
---
abstract: 'Estimating human longevity and computing life expectancy are central to population dynamics. These aspects have been studied seriously by scientists since the fifteenth century, including the renowned astronomer Edmund Halley. From basic principles of population dynamics, we propose a method to compute life expectancy from incomplete data.'
title: Computation of life expectancy from incomplete data
---
**Arni S.R. Srinivasa Rao**[^1]
Augusta University
1120 15th Street
Augusta, GA 30912, USA
Email: [email protected]
**James R. Carey**
Department of Entomology
University of California,
Davis, CA 95616 USA
and
Center for the Economics and Demography of Aging
University of California, Berkeley, CA 94720
Email: [email protected]
**Introduction** {#introduction .unnumbered}
================
In 1570 the Italian mathematician Girolamo Cardano suggested that a man who took care of himself would have a certain life expectancy of $\alpha$ (so that at any age $x$ we could expect him to live $e(x)=\alpha-x$ more years) and then asked how many years would be squandered by imprudent lifestyles [@Smith=000026Keyfitz]. Cardano’s healthiest man might be born with the potential of living to 260 years but die at age 80, having wasted away years due to bad habits and other such ill-advised choices. In this work, Cardano was in good company; mathematicians such as Fibonacci, d’Alembert, Daniel Bernoulli, Euler, Halley, Lotka and many others contributed to our understanding of population dynamics through mathematical models. We can trace the notion of life expectancy in particular back to the seventeenth-century astronomer Edmund Halley who developed a method to compute life expectancy [@Halley]. His studies led him to observe “how unjustly we repine at the shortness of our Lives, and think our selves wronged if we attain not Old Age; for one half of those that are born are dead in Seventeen years time” and to urge readers that “instead of murmuring at what we call an untimely Death, we ought with Patience and unconcern to submit to that Dissolution which is the necessary Condition of our perishable Materials, and of our nice and frail Structure and Composition: And to account it as a Blessing that we have survived, perhaps by many Years, that Period of Life, whereat the one half of the whole Race of Mankind does not arrive” (postscript to [@Halley]). Besides his philosophical musings Halley’s essay contained many tables and detailed analyses.
Life expectancy at birth is defined as the number of years remaining to the average newborn. It is arguably the most important summary metric in the life table because it is based on, and thus reflects, the longevity outcome of the mortality experience of newborns throughout their life course. When life expectancy is referred to without qualification, the value at birth is normally assumed [@Preston]. Life expectancy is intuitive and thus easily understandable by lay persons; it is independent of population age structure, an indicator of health conditions in different societies, used in insurance annuity computations, and a baseline for estimating the impact on longevity of diseases (e.g. AIDS, cancer, diabetes) and lifestyle choices (e.g. smoking, alcohol consumption). The value of life expectancy at birth is identical to the average age in a life table population. The difference in life expectancies between men and women is known as the gender gap. The inverse of life expectancy equals both the per capita birth $(b)$ and per capita death $(d)$ rates in stationary populations ($b-d=0$). Since $b+d$ is a measure of the number of vital events in a population, double the inverse of life expectancy equals what is referred to as population metabolism in stationary populations. Life expectancy at birth is the most frequently used comparative metric in biological studies of plants and animals.
The first substantive demographic work in which life expectancy was estimated was the Bills of Mortality published in 1662 by John Graunt [@Graunt], who noted “From when it follows, that of the said 100 conceived there remains at six years 64, at thirty-six 26, at sixty-six 3, and at eighty 0”. Although Edmund Halley [@Halley] and Joshua Milne [@Milne] both introduced life table methods for computing life expectancy, George King [@King] is generally credited with introducing the life table and life expectancy in modern notation. It was not until 1947 that life tables in general and life expectancy in particular were introduced to the population biology literature for studying longevity in non-human species [@Deevey]. Although life expectancy is computed straightforwardly from life table survival data, complete information is often not available.
Therefore our objective in this paper is to describe a model that we derived for use in estimating life expectancy at birth from a limited amount of information. The information required to estimate life expectancy in a given year with our model includes the number of births, the number of infant deaths, and the number in the population at each age from 0 through the maximal age, $\omega$. Our computational concept for $\omega=2$ is based on the following logic: (1) the person-years lived by a newborn cohort during the first year equal the difference between the number born and the number of infants that died. *Person-years* is the sum of the number of years lived by all persons in a cohort or population. The number of person-years equals the life expectancy of this cohort if their maximal age is one year (i.e. $l(1)$ = maximal age); (2) the person-years lived by this cohort during their first two years of life equal the person-years lived up to age one plus the person-years that would be lived by those who have survived to age 1. The person-years lived by the newborn cohort during their second year of life are less than the person-years lived during the first year; (3) the hypothetical number of person-years lived by the newborn cohort during their third year of life (i.e. $l(3)$ = 0) equals the number in the birth cohort minus the person-years lost due to deaths during the third year. We use the number of newborns and the population at age 1 to compute the person-years to be lived by the newborn cohort during their first three years of life. This process continues through the oldest age, $\omega>2$.
Traditionally, the life expectancy of a population is computed through life table techniques. The life table of a population is a stationary-population mathematical model which primarily uses population and death numbers at all single-year ages for a year, or for a period of years, to produce life expectancy through the construction of several columns. The last column of the life table usually consists of the life expectancies at each single-year age, and the first value of this column is called the life expectancy of the corresponding population for the year for which the life table was constructed. See Figure \[US Life Table\] for the life table of the US population in 2010 [@Arias]. There are seven columns in this life table; the second column in Figure \[US Life Table\], which consists of the probabilities of dying between ages $x$ and $x+1$ for $x=0,1,...,100+$, is computed first from the raw data (see Figure \[LT-DE Fig\] for the data needed for a life table), and the other columns are derived from the second column using formulae, without any further raw data. The last column of the table in Figure \[US Life Table\] consists of the values of the expectation of life at age $x$ for $x=0,1,...,100+.$ The first value in this last column is 78.7, which means that the life expectancy of newborn babies in the US population during 2010 (boys and girls combined, aged 0-1 during 2010) is 78.7 years. The various steps involved in constructing Figure \[US Life Table\] are given in [@Arias].
For standard life table methods, see [@Keyfitz=000026Caswal]; for recent developments in computing life expectancy, see [@Bon=000026FeenPNAS]; for the astronomer Edmund Halley's life table constructed in the 17th century, see [@Smith=000026Keyfitz]. Recent advances in the theory of stationary population models [@Rao=000026Carey] are being used to compute life expectancies for populations in captive cohorts [@MathDigest]. We propose a very simple formula for computing the life expectancy of newly born babies within a time interval when age-specific death rates and life tables are not available. Age-specific death rates at age $a$ are traditionally defined as the ratio of the number of deaths at age $a$ to the population size at age $a$ [@Keyfitz=000026Caswal]. The method of calculating life expectancies in standard life tables uses age-specific death rates, which are computed from deaths and populations at each single-year age. Refer to Figure \[LT-DE Fig\] for the data needed in the traditional life table approach and for the newly proposed method.
![\[US Life Table\]United States life table for the year 2010. This life table was directly taken from National Vital Statistics Reports [@Arias] ](US-LifeTable-Figure.eps)
![\[LT-DE Fig\] (a) Data needed for the life table approach. (b) Data needed for computing life expectancy through the new approach. Green-bordered rectangles are populations and red-colored rectangles are death numbers at the respective ages for a year. The blue-bordered rectangle shows the birth numbers for a year. ](Figure-LT-LE.eps)
In this paper, we propose a formula for computing life expectancies that is comparable to the technique used to calculate life expectancies in standard life tables, but which can be applied when only limited data are available. The derived formula uses effective age-specific population sizes, the number of infant deaths, and the number of live births within a year. The number of infant deaths is usually defined as the number of deaths within the first year of life in human populations. If the study population consists of insects, the necessary data can be collected over any appropriate time interval. We tested our proposed simple formula on both small hypothetical populations and global human populations. When a sufficient amount of data on age-specific death rates is available, the life table-based life expectancy is still recommended.
**Life Expectancy of newly born babies** {#life-expectancy-of-newly-born-babies .unnumbered}
=========================================
In this section we derive a formula for the life expectancy from basic elements of population dynamics, namely, the population age structure at two time points and simple birth and infant death numbers observed over an interval of time. Suppose the global population at the beginning of the times $t_{0}$ and $t_{1}$ (for $t_{0}<t_{1}$) is known, and we are interested in finding the life expectancy of the people who are born during $[t_{0},t_{1}).$ We assume the following information to be known: i) $P(t_{0})$, the effective population size by single-year ages during $[t_{0},t_{1})$, which is indirectly computed as a weighted or ordinary average of the respective population sizes by single-year ages available at the beginning of $t_{0}$ and at the end of $[t_{0},t_{1})$; ii) the number of live births, $B(t_{0})$; and iii) the number of infant deaths, $D_{0}(t_{0})$, during the period $[t_{0},t_{1})$. These quantities are expressed as
$$\begin{aligned}
P(t_{0}) & = & \int_{0}^{\omega}P_{i}(t_{0})di=\int_{0}^{\omega}\left[\frac{a_{i}P_{t_{0}}(i)+b_{i}P_{t_{1}}(i)}{a_{i}+b_{i}}\right]di\\
B(t_{0}) & = & \int_{t_{0}}^{t_{1}}B(s)ds\\
D_{0}(t_{0}) & = & \int_{t_{0}}^{t_{1}}D_{0}(s)ds\end{aligned}$$
where $P_{i}(t_{0})$ is the effective population aged $[i,i+1)$ for $i=0,1,...,\omega$ during $[t_{0},t_{1})$, with $P_{\omega}(t_{0})=0,$ for an age $\omega$ which is the next integer larger than the age of the eldest surviving person in $P(t_{0})$. $P_{t_{0}}(i)$ and $P_{t_{1}}(i)$ are the observed populations in the age group $[i,i+1)$ at the beginning of $t_{0}$ and at the end of $[t_{0},t_{1})$; $a_{i}$ and $b_{i}$ are the population weights corresponding to $P_{t_{0}}(i)$ and $P_{t_{1}}(i)$, respectively. $B(s)$ is the number of births at a given time $s\in[t_{0},t_{1})$ and $D_{0}(s)$ is the number of infant deaths for $s\in[t_{0},t_{1})$.
We use standard life table notation to relate these quantities to the cohort life expectancy. Let $l(x)$ be the number of survivors of $B(t_{0})$ at age $x$ for $x=0,1,2,...,\omega.$ Clearly, $l(0)=B(t_{0})$, and $l(1)$ is approximated as $l(1)\approx B(t_{0})-D_{0}(t_{0})$. Suppose $l(2)=0$. This implicitly assumes that we have only observed the data for $P_{0}(t_{0}),$ $P_{1}(t_{0})$, $B(t_{0}),$ $D_{0}(t_{0})$ during $[t_{0},t_{1})$. We will now use the concept of person-years, which is a technical phrase in the life table model. The person-years of a cohort represent the total future lifetime to be lived by the cohort. The person-years lived by $B(t_{0})$ during their first year of life (after removing person-years lost due to deaths), and the person-years lived by the remaining individuals of $B(t_{0})$ who survive to age 1 (again removing deaths that occurred during the second year of their life), assuming the deaths are uniformly distributed over the age intervals $[0,1)$ and $[1,2)$, are:
$$\begin{aligned}
\int_{t_{0}}^{t_{1}}B(s)ds & - & \frac{1}{2}\int_{t_{0}}^{t_{1}}D_{0}(s)ds\label{eq:L0}\end{aligned}$$
and
$$\begin{aligned}
\frac{1}{2}\int_{t_{0}}^{t_{1}}B(s)ds & - & \frac{1}{2}\int_{t_{0}}^{t_{1}}D_{0}(s)ds.\label{eq:L1 when l(2)=00003D0}\end{aligned}$$
The total person-years that would be lived by $B(t_{0})$ during their first two years of life is
$$\begin{aligned}
\frac{3}{2}\int_{t_{0}}^{t_{1}}B(s)ds & - & \int_{t_{0}}^{t_{1}}D_{0}(s)ds.\label{eq:L1}\end{aligned}$$
The life expectancy of $B(t_{0})$, i.e. new born babies at **$[t_{0},t_{1})$** is,
$$\begin{aligned}
\frac{3}{2} & - & \frac{\int_{t_{0}}^{t_{1}}D_{0}(s)ds}{\int_{t_{0}}^{t_{1}}B(s)ds}.\label{LE-l(2)=00003D0}\end{aligned}$$
When $P(t_{0})=\int_{0}^{3}P_{i}(t_{0})di$, then $l(1)\neq0$ and $l(2)\neq0.$ The expression for $l(1)$ becomes $\int_{t_{0}}^{t_{1}}P_{1}(s)ds.$ We assume $l(2)\approx2P_{1}(t_{0})-l(1)$, which implies that the person-years lived by $B(t_{0})$ during their second year of life are approximately the same as the effective population at age $1$ during $[t_{0},t_{1})$, instead of the previously obtained quantity in (\[eq:L1 when l(2)=00003D0\]) (note that this effective population is computed from the observed population as explained previously). Now, the person-years lived by $B(t_{0})$ during their third year of life (after removing person-years lost due to deaths during the third year), assuming the deaths are uniformly distributed over the age interval $[2,3)$, are:
$$\begin{aligned}
\int_{t_{0}}^{t_{1}}P_{1}(s)ds & - & \frac{1}{2}\int_{t_{0}}^{t_{1}}B(s)ds+\frac{1}{2}\int_{t_{0}}^{t_{1}}D_{0}(s)ds,\label{L2}\end{aligned}$$
The total person-years that would be lived by $B(t_{0})$ during their first three years of life is
$$\begin{aligned}
\frac{1}{2}\int_{t_{0}}^{t_{1}}B(s)ds+2\int_{t_{0}}^{t_{1}}P_{1}(s)ds\label{T(0) when l(3)=00003D0}\end{aligned}$$
The life expectancy of $B(t_{0})$, when $l(3)=0$ is:
$$\begin{aligned}
\frac{1}{2}+2\frac{\int_{t_{0}}^{t_{1}}P_{1}(s)ds}{\int_{t_{0}}^{t_{1}}B(s)ds}\label{LE when l(3)=00003D0}\end{aligned}$$
Proceeding further with a similar approach, we can obtain $e(B(t_{0}))$, the life expectancy of $B(t_{0})$ when $l(\omega)=0$ as:
$$\begin{aligned}
e(B(t_{0})) & = & \left\{ \begin{array}{cc}
\frac{3}{2}-\frac{\int_{t_{0}}^{t_{1}}D_{0}(s)ds}{\int_{t_{0}}^{t_{1}}B(s)ds}+\frac{2}{\int_{t_{0}}^{t_{1}}B(s)ds}\Sigma_{n=1}^{\frac{\omega}{2}-1}\int_{t_{0}}^{t_{1}}P_{2n}(s)ds & \mbox{if }\omega\mbox{ is even}\\
\\
\frac{1}{2}+\frac{2}{\int_{t_{0}}^{t_{1}}B(s)ds}\Sigma_{n=0}^{\frac{\omega-3}{2}}\int_{t_{0}}^{t_{1}}P_{2n+1}(s)ds & \mbox{if }\omega\mbox{ is odd}
\end{array}\right.\label{general LE}\end{aligned}$$
![\[LE-limiteddata\] Life expectancy with limited data. Using only the information on births, the effective population by age, and infant deaths in a year, the proposed formula forecasts the life expectancy of babies born in that year. ](Figure-LE.eps)
**Numerical Examples** {#numerical-examples .unnumbered}
======================
We consider an example population of some arbitrary species, whose effective population age structures, births and infant deaths are observed during some interval $[t_{0},t_{1})$ (see Table \[Table1\]). We give the computed life expectancies in Table **\[Table1\]**.
(a)
  ---------------------------------------------------------------------------
  Age   Effective Population   Births   Infant Deaths   Life Expectancy
  ----- ----------------------- -------- --------------- -------------------
  0     10                      12       1               **4.5**
  1     12
  2     14
  3     12
  4     6
  5     0
  ---------------------------------------------------------------------------
: \[Table1\] Set of two hypothetically observed population age structures, births, infant deaths during $[t_{0},t_{1}),$ and computed life expectancies.
(b)
  ---------------------------------------------------------------------------
  Age   Effective Population   Births   Infant Deaths   Life Expectancy
  ----- ----------------------- -------- --------------- -------------------
  0     12                      9        3               **5.17**
  1     16
  2     18
  3     12
  4     0
  ---------------------------------------------------------------------------
: \[Table1\] Set of two hypothetically observed population age structures, births, infant deaths during $[t_{0},t_{1}),$ and computed life expectancies.
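The entries in Table \[Table1\] can be reproduced from the discrete, single-year form of (\[general LE\]); the following minimal Python sketch (included purely as an illustration, not part of the original analysis) implements it directly, assuming the population vector ends with $P_{\omega}=0$.

```python
def life_expectancy(births, infant_deaths, pop):
    """Discrete form of eq. (general LE); pop[i] is the effective population
    aged [i, i+1), with pop[omega] = 0 at the maximal age omega."""
    omega = len(pop) - 1
    if omega % 2 == 0:   # omega even
        return 1.5 - infant_deaths/births + 2*sum(pop[2*n] for n in range(1, omega//2))/births
    # omega odd
    return 0.5 + 2*sum(pop[2*n + 1] for n in range(0, (omega - 3)//2 + 1))/births

print(life_expectancy(12, 1, [10, 12, 14, 12, 6, 0]))  # Table 1(a): 4.5
print(life_expectancy(9, 3, [12, 16, 18, 12, 0]))      # Table 1(b): about 5.17
```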
We further simplify the life expectancy formula (\[general LE\]) based on a few assumptions and obtain (\[general LE-simple\]). For details, see the Appendix. We tested this formula (for $\omega$ even and odd) on global population data [@UN-Population]. The total population in 2010 was approximately 6916 million, and infant deaths were 4.801 million. We obtained $P_{\geq1}(t_{0})$, the total population size for individuals aged one and above, by removing the size of the population aged zero from the total population. The adjusted $P_{\geq1}(t_{0})$ is $6756$ million. Assuming that between 90 and 100 million live births occurred during 2010, we calculate that the life expectancy of cohorts born in 2010 will be between 69 and 76.5 years (when $\omega$ is even), and between 68.1 and 75.5 years (when $\omega$ is odd). In 2010, the actual global life expectancy was 70 years. We note that the formula (\[general LE-simple\]) and the assumption (\[assumption 2P=00003DP\]) may not hold for every population’s age structure. Interestingly, the results of (\[general LE-simple\]) are very close to the standard life-table-based estimate for the US population. However, it should be noted that the formula did not work as well for some other populations. The total population of the US in 2011 was approximately 313 million, and the total live births were approximately 4 million. This gives $e(B(t_{0}))=0.5+78.25=78.75$ years, whereas the actual life expectancy for the US population for 2011 is $78.64$ years. Similarly, the formula-based value for the UK is $78.23$ years, whereas the actual value is $80.75$ years.
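The quoted global ranges follow directly from (\[general LE-simple\]) with the rounded figures above; a minimal sketch of this check (an illustration only, using the numbers quoted in the text):

```python
def le_simple(pop_ge1, births, infant_deaths, omega_even=True):
    """Simplified formula (general LE-simple); all inputs in the same units."""
    base = 1.5 - infant_deaths/births if omega_even else 0.5
    return base + pop_ge1/births

# Global 2010 figures (millions); births assumed between 90 and 100 million
for B in (90.0, 100.0):
    print(B, le_simple(6756.0, B, 4.801, True), le_simple(6756.0, B, 4.801, False))
# roughly 76.5 / 75.6 years for B = 90 and 69.0 / 68.1 years for B = 100
```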
In this paper we suggest a formula for computing the life expectancy of a cohort of newborn babies when it is difficult to construct a life-table-based estimate. For the standard life table technique, one requires information on $\int_{t_{0}}^{t_{1}}\int_{0}^{\omega}D_{i}(s)dids$, the total deaths during $[t_{0},t_{1})$, where $D_{i}(s)$ denotes the number of deaths at age $i$ at time $s\in[t_{0},t_{1})$, and then one traditionally computes the age-specific death rate at age $i$ during $[t_{0},t_{1})$ using $$\begin{aligned}
\frac{\int_{t_{0}}^{t_{1}}D_{i}(s)ds}{P_{i}(t_{0})}.\label{eq:ASDR}\end{aligned}$$ It is possible to obtain probabilities of death from (\[eq:ASDR\]) under some assumptions on the pattern of deaths within the time interval. The various columns of the life table are then computed from these death probabilities, from which the life expectancy follows.
The proposed formula in (\[general LE\]) is very handy and can be computed by non-experts with minimal computing skills. It can be adopted by ecologists, experimental biologists, and biodemographers when the data on populations are limited. See Figure \[LE-limiteddata\] for the data needed to compute the life expectancy of newly born babies in a year. Some caution is required in applying the proposed formula when sufficient death data for all age groups are available. Our method depends heavily on the age structure of the population at the time of data collection. Our approach needs to be explored for populations experiencing the stable conditions given in [@Rao; @ASRS], and also to be tested for its accuracy at different stages of demographic transition. We still recommend using life table methods when age-specific data on deaths and populations are available, as indicated in Figure \[LT-DE Fig\].
**Acknowledgements** {#acknowledgements .unnumbered}
====================
Dr. Cynthia Harper (Oxford) and Ms. Claire Edward (Kent) have helped to correct and revise several sentences. Our sincere gratitude to all.
[10]{} Halley, E. 1693. An estimate of the degrees of the mortality of mankind. Philosophical Transactions 17: 596-610.
Preston, S. H., P. Heuveline, and M. Guillot (2001). *Demography: Measuring and Modeling Population Processes*. Blackwell Publishers, Malden, Massachusetts.
Smith, D. and N. Keyfitz, Eds. (1977). *Mathematical Demography*. Springer-Verlag, Berlin (p1).
Graunt, J. (1662). Natural and political observations mentioned in a following index, and made upon the bills of mortality. London. Republished with an introduction by B. Benjamin in the *Journal of the Institute of Actuaries* 90: 1-61 (1964)
Milne, J. (1815). A Treatise on the Valuation of Annuities and Assurances on Lives and Survivorships. London.
King, G. (1902). Institute of Actuaries Textbook. Part II. Second Edition. London: Charles and Edward Layton.
Deevey, E. S. Jr. (1947). Life tables for natural populations of animals. *Quarterly Review of Biology*, 22:283-314.
Arias E. United States life tables, 2010. National vital statistics reports; vol 63 no 7. Hyattsville, MD: National Center for Health Statistics. 2014.
Keyfitz, N and Caswell, H (2005). *Applied Mathematical Demography*, Springer (3/e).
Bongaarts, J. and G. Feeney (2003). Estimating mean lifetime. Proceedings of the National Academy of Sciences, 100, 23, 13127-13131.
Arni S.R. Srinivasa Rao and James R. Carey (2015). Carey’s Equality and a theorem on Stationary Population*, Journal of Mathematical Biology,* 71, 3, 583-594.
Age Tables Behind Bars, by Ben Pittman-Polletta (Math Digest section of the American Mathematical Society's Math in the Media), October 28, 2014 (http://www.ams.org/news/math-in-the-media/md-201410-toc\#201410-populations)
UN (2012). http://esa.un.org/unpd/wpp/index.htm.
Arni S.R. Srinivasa Rao (2014). Population Stability and Momentum. *Notices of the American Mathematical Society,* 61, 9, 1062-1065.
**Appendix: Analysis of the Life Expectancy function** {#appendix-analysis-of-the-life-expectancy-function .unnumbered}
======================================================
In general, $\int_{t_{0}}^{t_{1}}D_{0}(s)ds<\int_{t_{0}}^{t_{1}}B(s)ds$. When $\omega$ is even, the supremum and infimum of $\left(\frac{3}{2}-\frac{\int_{t_{0}}^{t_{1}}D_{0}(s)ds}{\int_{t_{0}}^{t_{1}}B(s)ds}\right)$ are $\frac{3}{2}$ and $\frac{1}{2}.$ The contribution of this term to the life expectancy is minimal in comparison with that of the term $\left(\frac{2}{\int_{t_{0}}^{t_{1}}B(s)ds}\Sigma_{n=1}^{\frac{\omega}{2}-1}\int_{t_{0}}^{t_{1}}P_{2n}(s)ds\right)$, hence $e(B(t_{0}))$ can be approximated by,
$$\begin{aligned}
e(B(t_{0})) & \approx & \frac{2}{\int_{t_{0}}^{t_{1}}B(s)ds}\Sigma_{n=1}^{\frac{\omega}{2}-1}\int_{t_{0}}^{t_{1}}P_{2n}(s)ds\end{aligned}$$
Similarly, when $\omega$ is odd, $e(B(t_{0}))$ can be approximated by,
$$\begin{aligned}
e(B(t_{0})) & \approx & \frac{2}{\int_{t_{0}}^{t_{1}}B(s)ds}\Sigma_{n=1}^{\frac{\omega-3}{2}}\int_{t_{0}}^{t_{1}}P_{2n+1}(s)ds\end{aligned}$$
Suppose $\left(P_{n}(t_{0})\right)_{0}^{\omega}$ is an increasing sequence; then we arrive at the two inequalities (\[Ineq1\]) and (\[eIneq2\]).
$$\begin{aligned}
\Sigma_{n=1}^{\frac{\omega}{2}-1}\int_{t_{0}}^{t_{1}}P_{2n}(s)ds & < & \frac{1}{2}\Sigma_{n=1}^{\omega}\int_{t_{0}}^{t_{1}}P_{n}(s)ds\;\mbox{ if }\omega\mbox{ }\mbox{is even,}\label{Ineq1}\\
\nonumber \\
\Sigma_{n=1}^{\frac{\omega-3}{2}}\int_{t_{0}}^{t_{1}}P_{2n+1}(s)ds & > & \frac{1}{2}\Sigma_{n=1}^{\omega}\int_{t_{0}}^{t_{1}}P_{n}(s)ds\;\mbox{if }\omega\mbox{ is odd.}\label{eIneq2}\end{aligned}$$
In general, when $\left(P_{n}(t_{0})\right)_{0}^{\omega}$ is an increasing sequence, without any condition on $\omega$, we can write the inequality (\[inequlaity(combined)\]), obtained by combining (\[Ineq1\]) and (\[eIneq2\]), as
$$\Sigma_{n=1}^{\frac{\omega}{2}-1}\int_{t_{0}}^{t_{1}}P_{2n}(s)ds<\frac{1}{2}\Sigma_{n=1}^{\omega}\int_{t_{0}}^{t_{1}}P_{n}(s)ds<\Sigma_{n=1}^{\frac{\omega-3}{2}}\int_{t_{0}}^{t_{1}}P_{2n+1}(s)ds\label{inequlaity(combined)}$$
Suppose $\int_{t_{0}}^{t_{1}}D_{0}(s)ds=\int_{t_{0}}^{t_{1}}B(s)ds$ in (\[general LE\]); then the life expectancy, irrespective of whether $\omega$ is even or odd, becomes $$\begin{aligned}
e(B(t_{0})) & = & \frac{1}{2}+\frac{2}{\int_{t_{0}}^{t_{1}}B(s)ds}\Sigma_{n=0}^{\frac{\omega-3}{2}}\int_{t_{0}}^{t_{1}}P_{2n+1}(s)ds\label{eqLE when D0=00003DB}\end{aligned}$$
When the total population aged one and above at $t_{0}$ is approximately the same as twice the sum of the populations at even single-year ages and also twice the sum of the populations at odd single-year ages, i.e. $$2\Sigma_{n=1}^{\frac{\omega}{2}-1}\int_{t_{0}}^{t_{1}}P_{2n}(s)ds\approx\int_{t_{0}}^{t_{1}}P(s)ds\approx2\Sigma_{n=0}^{\frac{\omega-3}{2}}\int_{t_{0}}^{t_{1}}P_{2n+1}(s)ds,\label{assumption 2P=00003DP}$$
then the life expectancy in (\[general LE\]) further reduces to
$$\begin{aligned}
e(B(t_{0})) & = & \left\{ \begin{array}{cc}
\frac{3}{2}-\frac{\int_{t_{0}}^{t_{1}}D_{0}(s)ds}{\int_{t_{0}}^{t_{1}}B(s)ds}+\frac{\int_{t_{0}}^{t_{1}}P_{\geq1}(s)ds}{\int_{t_{0}}^{t_{1}}B(s)ds} & \mbox{if }\omega\mbox{ is even}\\
\\
\frac{1}{2}+\frac{\int_{t_{0}}^{t_{1}}P_{\geq1}(s)ds}{\int_{t_{0}}^{t_{1}}B(s)ds} & \mbox{if }\omega\mbox{ is odd}
\end{array}\right.\label{general LE-simple}\end{aligned}$$
where $P_{\geq1}(s)$ is the effective population who are aged one and above at time $s\in[t_{0},t_{1}).$
[^1]: **Corresponding author**
|
---
abstract: 'In this paper we give global characterisations of Gevrey-Roumieu and Gevrey-Beurling spaces of ultradifferentiable functions on compact Lie groups in terms of the representation theory of the group and the spectrum of the Laplace-Beltrami operator. Furthermore, we characterise their duals, the spaces of corresponding ultradistributions. For the latter, the proof is based on first obtaining the characterisation of their $\alpha$-duals in the sense of Köthe and the theory of sequence spaces. We also give the corresponding characterisations on compact homogeneous spaces.'
address:
- ' Aparajita Dasgupta: Department of Mathematics Imperial College London 180 Queen’s Gate, London SW7 2AZ United Kingdom '
- ' Michael Ruzhansky: Department of Mathematics Imperial College London 180 Queen’s Gate, London SW7 2AZ United Kingdom '
author:
- Aparajita Dasgupta
- Michael Ruzhansky
title: Gevrey functions and ultradistributions on compact Lie groups and homogeneous spaces
---
[^1]
Introduction
============
The spaces of Gevrey ultradifferentiable functions are well-known on $\Rn$ and their characterisations exist on both the space side and the Fourier transform side, leading to numerous applications in different areas. The aim of this paper is to obtain global characterisations of the spaces of Gevrey ultradifferentiable functions and of the spaces of ultradistributions using the eigenvalues of the Laplace-Beltrami operator $\L_{G}$ (Casimir element) on the compact Lie group $G$. We treat both the cases of Gevrey-Roumieu and Gevrey-Beurling functions, and the corresponding spaces of ultradistributions, which are their topological duals with respect to their inductive and projective limit topologies, respectively.
If $M$ is a compact homogeneous space, let $G$ be its motion group and $H$ a stationary subgroup at some point, so that $M\simeq G/H.$ Our results on the motion group $G$ will yield the corresponding characterisations for Gevrey functions and ultradistributions on the homogeneous space $M$. Typical examples are the real spheres ${\mathbb S}^{n}={\rm SO}(n+1)/{\rm SO}(n)$, complex spheres (complex projective spaces) $\mathbb C\mathbb P^{n}={\rm SU}(n+1)/{\rm SU}(n)$, or quaternionic projective spaces $\mathbb H\mathbb P^{n}$.
Working in local coordinates and treating $G$ as a manifold, the Gevrey(-Roumieu) class $\gamma_{s}(G)$, $s\geq 1$, is the space of functions $\phi\in{C^{\infty}(G)}$ such that in every local coordinate chart the local representative, say $\psi\in C^{\infty}(\Rn)$, satisfies the following: there exist constants $A>0$ and $C>0$ such that for all multi-indices $\alpha,$ the estimate $${|\partial^{\alpha}\psi(x)|\leq C A^{|\alpha|}\left(\alpha !\right)^{s}}$$ holds for all $x\in\Rn$. By the chain rule one readily sees that this class is invariantly defined on (the analytic manifold) $G$ for $s\geq 1.$ For $s=1$ we obtain the class of analytic functions. On the Fourier transform side, this behaviour is equivalent to the condition that there exist $B>0$ and $K>0$ such that $$|\widehat{\psi}(\eta)|\leq K e^{-B\jp{\eta}^{1/s}}$$ holds for all $\eta\in\Rn.$ We refer to [@Kom] for the extensive analysis of these spaces and their duals in $\Rn$. However, such a local point of view does not tell us about the global properties of $\phi$, such as its relation to the geometric or spectral properties of the group $G$, and this is the aim of this paper. The characterisations that we give are global, i.e. they do not refer to the localisation of the spaces, but are expressed in terms of the behaviour of the global Fourier transform and the properties of the global Fourier coefficients.
Such global characterisations will be useful for applications. For example, the Cauchy problem for the wave equation $$\label{WE}
\partial_{t}^{2}u-a(t)\L_{G}u=0$$ is well-posed, in general, only in Gevrey spaces, if $a(t)$ becomes zero at some points. However, in local coordinates (\[WE\]) becomes a second order equation with space-dependent coefficients and lower order terms, a case in which well-posedness results are, in general[^2], not available even on $\Rn$. At the same time, in terms of the group Fourier transform the equation has essentially constant coefficients, and the global characterisation of Gevrey spaces together with an energy inequality for (\[WE\]) yields the well-posedness result. We will address this and other applications elsewhere, but we note that in these problems both types of Gevrey spaces appear naturally, see e.g. [@GR] for the Gevrey-Roumieu ultradifferentiable and Gevrey-Beurling ultradistributional well-posedness of weakly hyperbolic partial differential equations in the Euclidean space.
In Section \[SEC:Res\] we will fix the notation and formulate our results. We will also recall known (easy) characterisations for other spaces, such as spaces of smooth functions, distributions, or Sobolev spaces over $L^{2}.$ The proof for the characterisation of Gevrey spaces will rely on the harmonic analysis on the group, the family of spaces $\ell^{p}(\Gh)$ on the unitary dual introduced in [@RTb], and to some extent on the analysis of globally defined matrix-valued symbols of pseudo-differential operators developed in [@RTb; @RTi]. The analysis of ultradistributions will rely on the theory of sequence spaces (echelon and co-echelon spaces), see e.g. Köthe [@Koe], Ruckle [@WR]. Thus, we will first give characterisations of the so-called $\alpha$-duals of the Gevrey spaces and then show that $\alpha$-duals and topological duals coincide. We also prove that both Gevrey spaces are perfect spaces, i.e. the $\alpha$-dual of its $\alpha$-dual is the original space. This is done in Section \[SEC:alpha\], and the ultradistributions are treated in Section \[SEC:ultra\].
We note that the case of the periodic Gevrey spaces, which can be viewed as spaces on the torus $\mathbb T^{n}$, has been characterised by the Fourier coefficients in [@Ta]. However, that paper stopped short of characterising the topological duals (i.e. the corresponding ultradistributions), so already in this case our characterisation in Theorem \[THM:duals\] appears to be new.
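In this toroidal setting the Fourier-side condition can also be observed numerically; the following minimal sketch (purely illustrative, and not part of the analysis below) estimates the Fourier coefficients of the analytic function $e^{\cos x}$ on $\mathbb T^{1}$ and checks the exponential decay corresponding to $s=1$.

```python
import numpy as np

N = 256
x = 2*np.pi*np.arange(N)/N
f = np.exp(np.cos(x))                  # analytic, hence in every Gevrey class s >= 1

fhat = np.fft.fft(f)/N                 # approximate Fourier coefficients
n = np.fft.fftfreq(N, d=1.0/N)         # integer frequencies
mask = (np.abs(n) > 0) & (np.abs(fhat) > 1e-14)

# For s = 1 the characterisation requires |fhat(n)| <= K exp(-B <n>), so the
# ratio below should remain bounded away from zero.
B_est = -np.log(np.abs(fhat[mask]))/np.sqrt(1.0 + n[mask]**2)
print(B_est.min())                     # a positive lower bound for B (up to K)
```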
In the estimates throughout the paper the constants will be denoted by letter $C$ which may change value even in the same formula. If we want to emphasise the change of the constant, we may use letters like $C', A_{1}$, etc.
Results {#SEC:Res}
=======
We first fix the notation and recall known characterisations of several spaces. We refer to [@RTb] for details on the following constructions.
Let $G$ be a compact Lie group of dimension $n$. Let $\Gh$ denote the set of (equivalence classes of) continuous irreducible unitary representations of $G$. Since $G$ is compact, $\Gh$ is discrete. For $[\xi]\in\Gh$, by choosing a basis in the representation space of $\xi$, we can view $\xi$ as a matrix-valued function $\xi:G\to\C^{d_{\xi}\times d_{\xi}},$ where $d_{\xi}$ is the dimension of the representation space of $\xi.$ For $f\in L^{1}(G)$ we define its global Fourier transform at $\xi$ by $$\widehat{f}(\xi)=\int_{G} f(x) \xi(x)^{*} dx,$$ where $dx$ is the normalised Haar measure on $G$. The Peter-Weyl theorem implies the Fourier inversion formula $$\label{EQ:FS}
f(x)=\sumxi d_{\xi} \operatorname{Tr}\p{\xi(x)\widehat{f}(\xi)}.$$ For each $[\xi]\in\Gh$, the matrix elements of $\xi$ are the eigenfunctions for the Laplace-Beltrami operator $\L_{G}$ with the same eigenvalue which we denote by $-\lambda_{[\xi]}^{2}$, so that $-\L_{G}\xi_{ij}(x)=\lambda_{[\xi]}^{2}\xi_{ij}(x),$ for all $1\leq i,j\leq d_{\xi}.$
Different spaces on the Lie group $G$ can be characterised in terms of comparing the Fourier coefficients of functions with powers of the eigenvalues of the Laplace-Beltrami operator. We denote $\jp{\xi}=(1+\lambda_{[\xi]}^{2})^{1/2}$, the eigenvalues of the elliptic first-order pseudo-differential operator $(I-\L_{G})^{1/2}.$
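For instance, on $G={\rm SU}(2)$ (with the standard normalisation of $\L_{G}$ used in [@RTb]; we record this only as an illustration of the notation) the classes in $\Gh$ are parametrised by $\ell\in\frac12\mathbb{N}_{0}=\{0,\frac12,1,\frac32,\ldots\}$, with $d_{\xi_{\ell}}=2\ell+1$, $\lambda_{[\xi_{\ell}]}^{2}=\ell(\ell+1)$, and hence $\jp{\xi_{\ell}}=\left(1+\ell(\ell+1)\right)^{1/2}$.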
Then, it is easy to see that $f\in C^{\infty}(G)$ if and only if for every $M>0$ there exists $C>0$ such that $\|\widehat{f}(\xi)\|_{\HS}\leq C\jp{\xi}^{-M},$ and $u\in \D'(G)$ if and only if there exist $M>0$ and $C>0$ such that $\|\widehat{u}(\xi)\|_{\HS}\leq C\jp{\xi}^{M},$ where we define $\widehat{u}(\xi)_{ij}=u(\overline{\xi_{ji}})$, $1\leq i,j\leq d_{\xi}.$ For this and other occasions, we can write this as $\widehat{u}(\xi)=u(\xi^{*})$ in the matrix notation. The appearance of the Hilbert-Schmidt norm is natural in view of the Plancherel identity $$(f,g)_{L^{2}(G)}=\sumxi d_{\xi} \operatorname{Tr}\p{\widehat{f}(\xi)\widehat{g}(\xi)^{*}},$$ so that $$\|f\|_{L^{2}(G)}=\p{\sumxi d_{\xi} \|\widehat{f}(\xi)\|_{\HS}^{2}}^{1/2}=:
\|\widehat{f}\|_{\ell^{2}(\Gh)}$$ can be taken as the definition of the space $\ell^{2}(\Gh)$. Here, of course, $ \|A\|_{\HS}=\sqrt{\operatorname{Tr}(A A^{*})}.$ It is convenient to use the sequence space $$\Sigma=\{\sigma=(\sigma(\xi))_{[\xi]\in\Gh}: \sigma(\xi)\in\C^{d_{\xi}\times d_{\xi}} \}.$$ In [@RTb], the authors introduced a family of spaces $\ell^{p}(\Gh)$, $1\leq p<\infty$, by saying that $\sigma\in\Sigma$ belongs to $\ell^{p}(\Gh)$ if the norm $$\|\sigma\|_{\ell^{p}(\Gh)}:=\p{\sumxi d_{\xi}^{p\p{\frac2p-\frac12}} \|\sigma(\xi)\|_{\HS}^{p}}^{1/p}$$ is finite. There is also the space $\ell^{\infty}(\Gh)$ for which the norm $$\label{EQ:linf}
\|\sigma\|_{\ell^{\infty}(\Gh)}:=\sup_{[\xi]\in\Gh} d_{\xi}^{-\frac12} \|\sigma(\xi)\|_{\HS}$$ is finite. These are interpolation spaces for which the Hausdorff-Young inequality holds, in particular, we have $$\label{EQ:HY}
\|\widehat{f}\|_{\ell^{\infty}(\Gh)}\leq \|f\|_{L^{1}(G)} \; \textrm{ and } \;
\|\mathscr F^{-1}\sigma\|_{L^{\infty}(G)}\leq \|\sigma\|_{\ell^{1}(\Gh)},$$ with $(\mathscr F^{-1}\sigma)(x)=\sumxi d_{\xi} \operatorname{Tr}\p{\xi(x)\sigma(\xi)}.$ We refer to [@RTb Chapter 10] for further details on these spaces. Usual Sobolev spaces on $G$ as a manifold, defined by localisations, can be also characterised by the global condition $$\label{EQ:S1}
f\in H^{t}(G) \textrm{ if and only if } \jp{\xi}^t \widehat{f}(\xi)\in \ell^{2}(\Gh).$$ For a multi-index $\alpha=(\alpha_{1},\ldots,\alpha_{n})$, we define $|\alpha|=|\alpha_{1}|+\cdots+|\alpha_{n}|$ and $\alpha!=\alpha_{1}!\cdots\alpha_{n}!.$ We will adopt the convention that $0!=1$ and $0^{0}=1.$
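As a quick sanity check of the dimensional weights above (a remark only, not used later), note that $$d_{\xi}^{p\p{\frac2p-\frac12}}=d_{\xi}^{\,2-\frac{p}{2}},$$ so that for $p=2$ the weight is $d_{\xi}$ and the $\ell^{2}(\Gh)$-norm is the Plancherel norm, while for $p=1$ the weight is $d_{\xi}^{3/2}$, which is exactly the factor appearing in the $\ell^{1}(\Gh)$-norm controlling $\|\mathscr F^{-1}\sigma\|_{L^{\infty}(G)}$ above; as $p\to\infty$ the normalised exponent $\frac2p-\frac12$ tends to $-\frac12$, matching the definition of $\ell^{\infty}(\Gh)$. On the torus $\mathbb T^{n}$, where $d_{\xi}\equiv1$, all these spaces reduce to the classical $\ell^{p}(\mathbb Z^{n})$.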
Let $X_{1},\ldots,X_{n}$ be a basis of the Lie algebra of $G$, normalised in some way, e.g. with respect to the Killing form. For a multi-index $\alpha=(\alpha_{1},\ldots,\alpha_{n})$, we define the left-invariant differential operator of order $|\alpha|$, $\partial^{\alpha}:=Y_{1}\cdots Y_{|\alpha|},$ with $Y_{j}\in\{X_{1},\cdots,X_{n}\}$, $1\leq j\leq |\alpha|$, and $\sum_{j: Y_{j}=X_{k}} 1=\alpha_{k}$ for every $1\leq k\leq n.$ It means that $\paal$ is a composition of left-invariant derivatives with respect to vectors $X_{1},\cdots,X_{n}$, such that each $X_{k}$ enters $\paal$ exactly $\alpha_{k}$ times. There is a small abuse of notation here since we do not specify in the notation $\paal$ the order of vectors $X_{1},\cdots,X_{n}$ entering in $\paal$, but this will not be important for the arguments in the paper. The reason we define $\paal$ in this way is to take care of the non-commutativity of left-invariant differential operators corresponding to the vector fields $X_{k}.$
We will distinguish between two families of Sobolev spaces over $L^{2}$. The first one is defined by $H^{t}(G)=\left\{f\in L^2 (G) : (I-\L_G)^{t/2} f\in L^{2}(G) \right\}$ with the norm $$\label{EQ:S2}
\|f\|_{H^{t}(G)}:=\| (I-\L_G)^{t/2} f\|_{L^{2}(G)}=\|\jp{\xi}^{t}\widehat{f}(\xi)\|_{\ell^{2}(\Gh)}.$$ The second one is defined for $k\in \N_{0}\equiv \N\cup\{0\}$ by $$W^{k,2}=\left\{f\in L^2(G):
\|f\|_{W^{k,2}}:=\sum_{|\alpha|\leq k}||\partial^{\alpha}f||_{L^{2}(G)}<\infty\right\}.$$ Obviously, $H^{k}\simeq W^{k,2}$ for any $k\in\N_{0}$ but for us the relation between norms will be of importance, especially as $k$ will tend to infinity.\
Let $0<s<\infty.$ We first fix the notation for the Gevrey spaces and then formulate the results. In the definitions below we allow any $s>0$, and the characterisation of $\alpha$-duals in the sequel will still hold. However, when dealing with ultradistributions we will be restricting to $s\geq 1$.
\[DEF:GR\] The Gevrey-Roumieu (R) class $\gamma_{s}(G)$ is the space of functions $\phi\in{C^{\infty}(G)}$ for which there exist constants $A>0$ and $C>0$ such that for all multi-indices $\alpha,$ we have $$\label{EQ:GR}
||\partial^{\alpha}\phi||_{L^\infty}\equiv\sup_{x\in G}{|\partial^{\alpha}\phi(x)|\leq
C A^{|\alpha|}\left(\alpha !\right)^{s}}.$$ Functions $\phi\in{\gamma_{s}}(G)$ are called ultradifferentiable functions of Gevrey-Roumieu class of order $s$.
For $s=1$ we obtain the space of analytic functions, and for $s>1$ the space of Gevrey-Roumieu functions on $G$ considered as a manifold, by saying that the function is in the Gevrey-Roumieu class locally in every coordinate chart. The same is true for the other Gevrey space:
The Gevrey-Beurling (B) class $\gamma_{(s)}(G)$ is the space of functions $\phi\in{C^{\infty}(G)}$ such that for every $A>0$ there exists $C_A>0$ so that for all multi-indices $\alpha,$ we have $$||\partial^{\alpha}\phi||_{L^\infty}\equiv\sup_{x\in G}
{|\partial^{\alpha}\phi(x)|\leq C_A A^{|\alpha|}\left(\alpha !\right)^{s}}.$$ Functions $\phi\in{\gamma_{(s)}}(G)$ are called ultradifferentiable functions of Gevrey-Beurling class of order $s$.
\[THM:Gevrey\] Let $0<s<\infty$.\
[**(R)**]{} We have $\phi\in \gamma_{s}(G)$ if and only if there exist $B>0$ and $K>0$ such that $$\label{EQ:GR}
||\widehat{\phi}(\xi)||_{\HS}\leq K e^{-B\jp{\xi}^{1/s}}$$ holds for all $[\xi]\in \Gh.$\
[**(B)**]{} We have $\phi\in \gamma_{(s)}(G)$ if and only if for every $B>0$ there exists $K_B>0$ such that $$\label{EQ:GB}
||\widehat{\phi}(\xi)||_{\HS}\leq K_B e^{-B\jp{\xi}^{1/s}}$$ holds for all $[\xi]\in \Gh.$
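To illustrate Theorem \[THM:Gevrey\] in the simplest situation (an illustration only, not used in the proofs), take $G=\mathbb T^{1}$, so that $d_{\xi}=1$ and $\jp{\xi}=(1+k^{2})^{1/2}$ for $k\in\mathbb Z$. Then, by part (R), the function $$\phi(x)=\sum_{k\in\mathbb Z}e^{-|k|^{1/s}}e^{{\rm i}kx}$$ belongs to $\gamma_{s}(\mathbb T^{1})$, since $|\widehat{\phi}(k)|=e^{-|k|^{1/s}}\leq C e^{-B\jp{k}^{1/s}}$ with, for example, $B=2^{-1/(2s)}$. On the other hand, $\phi\not\in\gamma_{\sigma}(\mathbb T^{1})$ for any $0<\sigma<s$, since part (R) would then force the strictly faster decay $e^{-B\jp{k}^{1/\sigma}}$ of the Fourier coefficients.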
Expressions appearing in the definitions can be taken as seminorms, and the spaces are equipped with the inductive and projective topologies, respectively[^3]. We now turn to ultradistributions.
The space of continuous linear functionals on $\gamma_s(G)\left(\textrm{or}~\gamma_{(s)}(G)\right)$ is called the space of ultradistributions and is denoted by $\gamma_s'(G)\left(\textrm{or}~\gamma_{(s)}'(G)\right),$ respectively.
For any $v\in \gamma_{s}'(G)\left(\textrm{or}~\gamma_{(s)}'(G)\right)$, for $[\xi]\in\Gh$, we define the Fourier coefficients $\widehat{v}(\xi):=\jp{v,\xi^{\ast}}\equiv v(\xi^{*}).$ These are well-defined since $G$ is compact and hence $\xi(x)$ are actually analytic.
\[THM:duals\] Let $1\leq s<\infty$.\
[**(R)**]{} We have $v \in \gamma_s'(G)$ if and only if for every $B>0$ there exists $K_B>0$ such that $$\label{EQ:ade1}
\|\widehat{v}(\xi)\|_{\HS} \leq K_B e^{B\left\langle \xi \right\rangle ^{\frac{1} {s}} }$$ holds for all $ [\xi] \in \Gh$.\
[**(B)**]{} We have $v \in \gamma_{(s)}'(G)$ if and only if there exist $B>0$ and $K_{B}>0$ such that the estimate in (R) holds for all $ [\xi] \in \Gh$.
The proof of Theorem \[THM:duals\] follows from the characterisation of $\alpha$-duals of[^4] the Gevrey spaces in Theorem \[THM:aduals\] and the equivalence of the topological duals and $\alpha$-duals in Theorem \[THM:equiv\].
The result on groups implies the corresponding characterisation on compact homogeneous spaces $M$. First we fix the notation. Let $G$ be a compact motion group of $M$ and let $H$ be the stationary subgroup of some point. Alternatively, we can start with a compact Lie group $G$ with a closed subgroup $H$. The homogeneous space $M=G/H$ is an analytic manifold in a canonical way (see, for example, [@Br] or [@St] as textbooks on this subject). We normalise measures so that the measure on $H$ is a probability one. Typical examples are the spheres ${\mathbb S}^{n}={\rm SO}(n+1)/{\rm SO}(n)$ or the complex spheres ${\rm SU}(n+1)/{\rm SU}(n)$.
We denote by $\Gh_{0}$ the subset of $\Gh$ of representations that are class I with respect to the subgroup $H$. This means that $[\xi]\in\Gh_{0}$ if $\xi$ has at least one non-zero invariant vector $a$ with respect to $H$, i.e. that $\xi(h)a=a$ for all $h\in H.$ Let $\Hcal_{\xi}$ denote the representation space of $\xi(x):\Hcal_{\xi}\to\Hcal_{\xi}$ and let $\B_{\xi}$ be the space of these invariant vectors. Let $k_{\xi}=\dim\B_{\xi}.$ We fix an orthonormal basis of $\Hcal_{\xi}$ so that its first $k_{\xi}$ vectors are the basis of $B_{\xi}.$ The matrix elements $\xi_{ij}(x)$, $1\leq j\leq k_{\xi}$, are invariant under the right shifts by $H$. We refer to [@Vi] for the details of these constructions.
We can identify Gevrey functions on $M=G/H$ with Gevrey functions on $G$ which are constant on left cosets with respect to $H$. Here we will restrict to $s\geq 1$ to see the equivalence of spaces using their localisation. This identification gives rise to the corresponding identification of ultradistributions. Thus, for a function $f\in \gamma_{s}(M)$ we can recover it by the Fourier series of its canonical lifting $\wt{f}(g):=f(gH)$ to $G$, $\wt{f}\in \gamma_{s}(G)$, and the Fourier coefficients satisfy $\widehat{\wt{f}}(\xi)=0$ for all representations with $[\xi]\not\in\Gh_{0}$. Also, for class I representations $[\xi]\in\Gh_{0}$ we have $\widehat{\wt{f}}(\xi)_{{ij}}=0$ for $i>k_{\xi}$.
With this, we can write the Fourier series of $f$ (or of $\wt{f}$, but as we said, from now on we will identify these and denote both by $f$) in terms of the spherical functions $\xi_{ij}$ of the representations $\xi$, $[\xi]\in\Gh_{0}$, with respect to the subgroup $H$. Namely, the Fourier series becomes $$\label{EQ:FSh}
f(x)=\sum_{[\xi]\in\Gh_{0}} d_{\xi} \sum_{i=1}^{d_{\xi}}\sum_{j=1}^{k_{\xi}}
\widehat{f}(\xi)_{ji}\xi_{ij}(x).$$ In view of this, we will say that the collection of Fourier coefficients $\{\widehat{\phi}(\xi)_{ij}: [\xi]\in\Gh, 1\leq i,j\leq d_{\xi}\}$ is of class I with respect to $H$ if $\widehat{\phi}(\xi)_{ij}=0$ whenever $[\xi]\not\in\Gh_{0}$ or $i>k_{\xi}.$ By the above discussion, if the collection of Fourier coefficients is of class I with respect to $H$, then the expressions and coincide and yield a function $f$ such that $f(xh)=f(x)$ for all $h\in H$, so that this function becomes a function on the homogeneous space $G/H$. The same applies to (ultra)distributions with the standard distributional interpretation. With these identifications, Theorem \[THM:Gevrey\] immediately implies
\[THM:Gevreyh\] Let $1\leq s<\infty$.\
[**(R)**]{} We have $\phi\in \gamma_{s}(G/H)$ if and only if its Fourier coefficients are of class I with respect to $H$ and, moreover, there exist $B>0$ and $K>0$ such that $$\label{EQ:GRh}
||\widehat{\phi}(\xi)||_{\HS}\leq K e^{-B\jp{\xi}^{1/s}}$$ holds for all $[\xi]\in \Gh_{0}.$\
[**(B)**]{} We have $\phi\in \gamma_{(s)}(G/H)$ if and only if its Fourier coefficients are of class I with respect to $H$ and, moreover, for every $B>0$ there exists $K_B>0$ such that $$\label{EQ:GBh}
||\widehat{\phi}(\xi)||_{\HS}\leq K_B e^{-B\jp{\xi}^{1/s}}$$ holds for all $[\xi]\in \Gh_{0}.$
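As a concrete instance (stated loosely, since the constants depend on the chosen normalisations), take $M=\mathbb S^{2}={\rm SO}(3)/{\rm SO}(2)$ with $s\geq1$. The class I representations are the $(2\ell+1)$-dimensional representations of ${\rm SO}(3)$, $\ell\in\N_{0}$, each with $k_{\xi}=1$; the matrix elements $\xi_{i1}$, $1\leq i\leq 2\ell+1$, descend to $\mathbb S^{2}$ and span the spherical harmonics of degree $\ell$, on which the Laplacian acts with eigenvalue $-\ell(\ell+1)$, so that $$\jp{\xi}=\p{1+\ell(\ell+1)}^{1/2}\simeq\ell+1.$$ Theorem \[THM:Gevreyh\] then says, roughly, that $f\in\gamma_{s}(\mathbb S^{2})$ if and only if its spherical harmonic coefficients decay like $e^{-c\,\ell^{1/s}}$ for some (R), respectively every (B), constant $c>0$.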
It would be possible to extend Theorem \[THM:Gevreyh\] to the range $0<s<\infty$ by adopting Definition \[DEF:GR\] starting with a frame of vector fields on $M$, but instead of obtaining the result immediately from Theorem \[THM:Gevrey\] we would have to go again through arguments similar to those used to prove Theorem \[THM:Gevrey\]. Since we are interested in characterising the standard invariantly defined Gevrey spaces we decided not to lengthen the proof in this way. On the other hand, it is also possible to prove the characterisations on homogeneous spaces $G/H$ first and then obtain those on the group $G$ by taking $H$ to be trivial. However, some steps would become more technical since we would have to deal with frames of vector fields instead of the basis of left-invariant vector fields on $G$, and elements of the symbolic calculus used in the proof would become more complicated.
We also have the ultradistributional result following from Theorem \[THM:duals\].
\[THM:dualsh\] Let $1\leq s<\infty$.\
[**(R)**]{} We have $v \in \gamma_s'(G/H)$ if and only if its Fourier coefficients are of class I with respect to $H$ and, moreover, for every $B>0$ there exists $K_B>0$ such that $$\label{EQ:ade1h}
\|\widehat{v}(\xi)\|_{\HS} \leq K_B e^{B\left\langle \xi \right\rangle ^{\frac{1} {s}} }$$ holds for all $ [\xi] \in \Gh_{0}$.\
[**(B)**]{} We have $v \in \gamma_{(s)}'(G/H)$ if and only if its Fourier coefficients are of class I with respect to $H$ and, moreover, there exist $B>0$ and $K_{B}>0$ such that the estimate in (R) holds for all $[\xi] \in\Gh_{0}$.
Finally, we remark that in the harmonic analysis on compact Lie groups sometimes another version of $\ell^{p}(\Gh)$ spaces appears using Schatten $p$-norms. However, in the context of Gevrey spaces and ultradistributions eventual results hold for all such norms. Indeed, given our results with the Hilbert-Schmidt norm, by an argument similar to that of Lemma \[L:c\] below, we can put any Schatten norm $\|\cdot\|_{S_{p}}$, $1\leq p\leq\infty,$ instead of the Hilbert-Schmidt norm $\|\cdot\|_{\HS}$ in any of our characterisations and they still continue to hold.
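To indicate the argument (a standard estimate, recorded here only as a sketch), for a matrix $A\in\C^{d\times d}$ with singular values $s_{1}\geq\cdots\geq s_{d}\geq0$ one has $$\|A\|_{S_{\infty}}\leq\|A\|_{\HS}=\|A\|_{S_{2}}\leq\|A\|_{S_{1}}\leq\sqrt{d}\,\|A\|_{\HS}\leq d\,\|A\|_{S_{\infty}},$$ so that replacing $\|\cdot\|_{\HS}$ by any Schatten norm $\|\cdot\|_{S_{p}}$ costs at most a factor $d_{\xi}$, and such polynomial factors in $d_{\xi}$ are absorbed by the exponential factors $e^{\pm B\jp{\xi}^{1/s}}$ since $d_{\xi}$ grows at most polynomially in $\jp{\xi}$ (see the next section).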
Gevrey classes on compact Lie groups
====================================
We will need two relations between dimensions of representations and the eigenvalues of the Laplace-Beltrami operator. On one hand, it follows from the Weyl character formula that $$\label{EQ:dxi}
d_{\xi}\leq C\jp{\xi}^{\frac{n-{\rm rank}G}{2}}\leq C\jp{\xi}^{\frac{n}{2}},$$ with the latter[^5] also following directly from the Weyl asymptotic formula for the eigenvalue counting function for $\L_{G}$, see e.g. [@RTb Prop. 10.3.19]. This implies, in particular, that for any $0\leq p<\infty$ and any $s>0$ and $B>0$ we have $$\label{EQ:exp}
\sup_{[\xi]\in\Gh} d_{\xi}^p e^{-B\jp{\xi}^{1/s}}<\infty.$$ On the other hand, the following convergence for the series will be useful for us:
\[L:series\] We have $\sumxi \ d_{\xi}^{2}\ \jp{\xi}^{-2t}<\infty$ if and only if $t>\frac{n}{2}.$
We notice that for the $\delta$-distribution at the unit element of the group, $\widehat{\delta}(\xi)=I_{d_{\xi}}$ is the identity matrix of size $d_{\xi}\times d_{\xi}$. Hence, in view of the Plancherel identity and the definition of the Sobolev norm, we can write $$\sumxi d_{\xi}^{2} \jp{\xi}^{-2t}=
\sumxi d_{\xi} \jp{\xi}^{-2t}\|\widehat{\delta}(\xi)\|_{\HS}^{2}=
\|(I-\L_{G})^{-t/2}\delta\|_{L^{2}(G)}^{2}=\|\delta\|_{H^{-t}(G)}^{2}.$$ By using the localisation of $H^{-t}(G)$ this is finite if and only if $t>n/2.$
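For comparison (a consistency check only), on the torus $\mathbb T^{n}$, where $d_{\xi}\equiv1$ and $\jp{\xi}=(1+|k|^{2})^{1/2}$, Lemma \[L:series\] reduces to the classical statement that $$\sum_{k\in\mathbb Z^{n}}(1+|k|^{2})^{-t}<\infty \quad\textrm{ if and only if }\quad t>\frac{n}{2}.$$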
We denote by $\Ghs$ the set of representations from $\Gh$ excluding the trivial representation. For $[\xi]\in\Gh$, we denote $|\xi|:=\lambda_{\xi}\geq 0$, the eigenvalue of the operator $(-\L_{G})^{1/2}$ corresponding to the representation $\xi.$ For $[\xi]\in\Ghs$ we have $|\xi|>0$ (see e.g. [@F]), and for $[\xi]\in\Gh\backslash\Ghs$ we have $|\xi|=0.$ From the definition, we have $|\xi|\leq \jp{\xi}.$ On the other hand, let $\lambda_{1}^{2}>0$ be the smallest positive eigenvalue of $-\L_{G}.$ Then, for $[\xi]\in\Ghs$ we have $\lambda_{\xi}\geq \lambda_{1}$, implying $$1+\lambda_{\xi}^{2}\leq \p{\frac{1}{\lambda_{1}^{2}}+1}\lambda_{\xi}^{2},$$ so that altogether we record the inequality $$\label{EQ:ll}
|\xi|\leq \jp{\xi}\leq \p{1+\frac{1}{\lambda_{1}^{2}}}^{1/2}|\xi|,
\quad \textrm{ for all } [\xi]\in\Ghs.$$
We will need the following simple lemma which we prove for completeness. Let $a\in\C^{d\times d}$ be a matrix, and for $1\leq p<\infty$ we denote by $\ell^p(\C)$ the space of such matrices with the norm $$\|a\|_{\ell^p(\C)}=\p{\sum_{i,j=1}^d |a_{ij}|^p}^{1/p},$$ and for $p=\infty$, $ \|a\|_{\ell^\infty(\C)}=\sup_{1\leq i,j\leq d} |a_{ij}|.$ We note that $\|a\|_{\ell^2(\C)}=\|a\|_{\HS}.$ We adopt the usual convention $\frac{c}{\infty}=0$ for any $c\in\mathbb R.$
\[L:c\] Let $1\leq p< q\leq\infty$ and let $a\in\C^{d\times d}.$ Then we have $$\label{EQ:in}
\|a\|_{\ell^p(\C)}\leq d^{2\p{\frac1p-\frac1q}}\|a\|_{\ell^q(\C)}
\quad\textrm{ and } \quad
\|a\|_{\ell^q(\C)}\leq d^{\frac{2}{q}}\|a\|_{\ell^p(\C)}.$$
For $q<\infty$, we apply Hölder’s inequality with $r=\frac{q}{p}$ and $r'=\frac{q}{q-p}$ to get $$\|a\|_{\ell^p(\C)}^p=\sum_{i,j=1}^d |a_{ij}|^p\leq
\p{\sum_{i,j=1}^d |a_{ij}|^{pr}}^{1/r}\p{\sum_{i,j=1}^d 1}^{1/r'}=
\|a\|_{\ell^q(\C)}^{p} d^{2\frac{q-p}{q}},$$ implying for this range. Conversely, we have $$\|a\|_{\ell^q(\C)}^q=\sum_{i,j=1}^d |a_{ij}|^q\leq
\sum_{i,j=1}^d \|a\|_{\ell^p(\C)}^q= d^2\|a\|_{\ell^p(\C)}^q,$$ proving the other part of for this range. For $q=\infty$, we have $ \|a\|_{\ell^p(\C)}\leq \p{\sum_{i,j=1}^d \|a\|^{p}_{\ell^\infty(\C)}}^{1/p}\leq
\|a\|_{\ell^\infty(\C)} d^{2/p}.$ Conversely, we have trivially $\|a\|_{\ell^\infty(\C)}\leq \|a\|_{\ell^p(\C)},$ completing the proof.
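We note (a simple example, included only to show that the dimensional factors in Lemma \[L:c\] cannot be improved in general) that if $a\in\C^{d\times d}$ has all entries equal to $1$, then $\|a\|_{\ell^{p}(\C)}=d^{2/p}$ for $1\leq p<\infty$ and $\|a\|_{\ell^{\infty}(\C)}=1$, so that $$\|a\|_{\ell^{p}(\C)}=d^{2\p{\frac1p-\frac1q}}\|a\|_{\ell^{q}(\C)}$$ for all $1\leq p<q\leq\infty$, i.e. the first inequality in Lemma \[L:c\] becomes an equality.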
We observe that the Gevrey spaces can be described in terms of $L^{2}$-norms, and this will be useful to us in the sequel.
\[L:gl2\] We have $\phi\in \gamma_{s}(G)$ if and only if there exist constants $A>0$ and $C>0$ such that for all multi-indices $\alpha$ we have $$\label{EQ:gl2}
\|\partial^{\alpha}\phi\|_{L^2}\leq
C A^{|\alpha|}\left(\alpha !\right)^{s}.$$ We also have $\phi\in \gamma_{(s)}(G)$ if and only if for every $A>0$ there exists $C_{A}>0$ such that for all multi-indices $\alpha$ we have $$\|\partial^{\alpha}\phi\|_{L^2}\leq
C_{A} A^{|\alpha|}\left(\alpha !\right)^{s}.$$
We prove the Gevrey-Roumieu case (R), as the Gevrey-Beurling case (B) is similar. For $\phi\in \gamma_{s}(G)$ the $L^{2}$-estimate follows in view of the continuous embedding $L^{\infty}(G)\subset L^{2}(G)$ with $\|f\|_{L^{2}}\leq \|f\|_{L^{\infty}}$, since the measure is normalised.
Now suppose that $\phi\in C^{\infty}(G)$ satisfies the $L^{2}$-estimates above. Using the Fourier inversion formula and Lemma \[L:series\] with an integer $k>n/2$, we obtain[^6] $$\begin{aligned}
\|\phi\|_{L^{\infty}}&\leq&
\sumxi d_{\xi}^{3/2}\|\widehat{\phi}(\xi)\|_{\HS}\nonumber\\
&\leq&
\left(\sumxi d_{\xi} \|\widehat{\phi}(\xi)\|^{2}_{\HS}\jp{\xi}^{2k}\right)^{1/2}
\left(\sumxi d_{\xi}^{2}\jp{\xi}^{-2k}\right)^{1/2}\nonumber\\
&\le & C\|(I-\L_G)^{k/2}\phi\|_{L^2}\nonumber\\
&\leq& C_k\sum_{|\beta|\leq k}\|\partial^{\beta}\phi\|_{L^2},\nonumber \end{aligned}$$ with constant $C_{k}$ depending only on $G$. Consequently we also have $$\label{EQ:aux1}
\|\partial^{\alpha}\phi\|_{L^\infty} \leq C_{k}\sum_{|\beta|\leq k}\|{\partial^{\alpha+\beta}\phi}\|_{L^{2}}.$$ Using the inequalities $$\label{ineq}
\alpha !\leq |\alpha|!,\quad |\alpha|!\leq n^{|\alpha|}\alpha!
\quad\textrm{ and } \quad(|\alpha|+k)!\leq 2^{|\alpha|+k}k !|\alpha|!,$$ in view of and we get $$\begin{aligned}
||\partial^{\alpha}\phi||_{L^\infty}&\leq& C_{k}
A^{|\alpha|+k}\sum_{|\beta|\leq k}\left((\alpha +\beta)!\right)^{s}\nonumber\\
&\leq& C_{k} A^{|\alpha|+k}\sum_{|\beta|\leq k}\left((|\alpha| + k)!\right)^{s}\nonumber\\
&\leq& C_k' A^{|\alpha|+k} ( 2^{|\alpha|+k}k!)^s(|\alpha|!)^s \nonumber\\
&\leq& C_{k}'' A_{1}^{|\alpha|}(n^{|\alpha|} \alpha!)^s\nonumber\\
&\leq& C_{k}'' A_{2}^{|\alpha|}(\alpha!)^s,\nonumber
\end{aligned}$$ with constants $C_{k}''$ and $A_{2}$ independent of $\alpha$, implying that $\phi\in\gamma_{s}(G)$ and completing the proof.
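For the reader's convenience we indicate how the elementary factorial inequalities used above can be verified (they are standard): $\alpha!\leq|\alpha|!$ since the multinomial coefficient $|\alpha|!/\alpha!$ is a positive integer; $|\alpha|!\leq n^{|\alpha|}\alpha!$ since, by the multinomial theorem, $$n^{|\alpha|}=(1+\cdots+1)^{|\alpha|}=\sum_{|\beta|=|\alpha|}\frac{|\alpha|!}{\beta!}\geq\frac{|\alpha|!}{\alpha!};$$ and $(|\alpha|+k)!\leq2^{|\alpha|+k}k!\,|\alpha|!$ since the binomial coefficient $\frac{(|\alpha|+k)!}{k!\,|\alpha|!}$ is at most $2^{|\alpha|+k}$.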
The following proposition prepares the passage to conditions formulated on the Fourier transform side.
\[PROP:l\] We have $\phi\in{\gamma_{s}(G)}$ if and only if there exist constants $A>0$ and $C>0$ such that $$\label{EQ:clap}
||\left(-\L_G\right)^{k}\phi||_{L^\infty}\leq C A^{2k}\left((2k)!\right)^s$$ holds for all $k\in\N_{0}$. Also, $\phi\in\gamma_{(s)}(G)$ if and only if for every $A>0$ there exists $C_{A}>0$ such that for all $k\in\N_{0}$ we have $$||\left(-\L_G\right)^{k}\phi||_{L^\infty}\leq C_A A^{2k}\left((2k)!\right)^s.$$
We prove the Gevrey-Roumieu case and indicate small additions to the argument for $\gamma_{(s)}(G)$. Thus, let $\phi\in{\gamma_{s}(G)}$. Recall that by the definition there exist some $A>0,$ $C>0$ such that for all multi-indices $\alpha$ we have $$||\partial^{\alpha}\phi||_{L^\infty}=
\sup_{x\in G}{|\partial^{\alpha}\phi(x)|\leq C
A^{|\alpha|}\left(\alpha !\right)^{s}}.$$ We will use the fact that for the compact Lie group $G$ the Laplace-Beltrami operator $\L_{G}$ is given by $\L_G=X_1^2+X_2^2+...+X_n^2$, where $X_i$, $i=1,2,\ldots,n$, is a set of left-invariant vector fields corresponding to a normalised basis of the Lie algebra of $G$. Then by the multinomial theorem[^7] and using , with $Y_{j}\in\{X_{1},\ldots,X_{n}\}$, $1\leq j\leq |\alpha|$, we can estimate $$\begin{aligned}
\label{EQ:estLG}
|(-{\mathcal{L}}_G)^{k}\phi(x)|&\leq& C\sum_{|\alpha|= k} \frac{k!}{\alpha!}\left| Y_1^{2}\ldots
Y_{|\alpha|}^{2}\phi(x)\right|\nonumber\\
&\leq& C \sum_{|\alpha|= k}\frac{k!}{\alpha!}[(2|\alpha|)!]^{s}A^{2|\alpha|}\nonumber\\
&\leq& C A^{2k}[(2k)!]^{s}
\sum_{|\alpha|= k}\frac{k! n^{|\alpha|}}{|\alpha|!} \nonumber\\
&\leq& C_{1} A^{2k}[(2k)!]^{s} n^{k} k^{n-1} \nonumber\\
&\leq& C_2 A_{1}^{2k}[(2k)!]^{s},\end{aligned}$$ with $A_{1}=2nA$, implying . For the Gevrey-Beurling case $\gamma_{(s)}(G)$, we observe that we can obtain any $A_{1}>0$ in by using $A=\frac{A_{1}}{2n}$ in the Gevrey estimates for $\phi\in\gamma_{(s)}(G).$
Conversely, suppose $\phi\in C^{\infty}(G)$ is such that the inequalities hold. First we note that for $|\alpha|=0$ the desired estimate follows from the assumption with $k=0$, so that we can assume $|\alpha|>0.$
Following [@RTi], we define the symbol of $\paal$ to be $\sigma_{\paal}(\xi)=\xi(x)^{*}\paal\xi(x),$ and we have $\sigma_{\paal}(\xi)\in\C^{d_{\xi}\times d_{\xi}}$ is independent of $x$ since $\paal$ is left-invariant. For the in-depth analysis of symbols and symbolic calculus for general operators on $G$ we refer to [@RTb; @RTi] but we will use only basic things here. In particular, we have $$\paal\phi(x)=\sumxi d_{\xi} \operatorname{Tr}\p{\xi(x)\sigma_{\paal}(\xi)\widehat\phi(\xi)}.$$ First we calculate the operator norm $||\sigma_\paal(\xi)||_{op}$ of the matrix multiplication by $\sigma_\paal(\xi)$. Since $\partial^{\alpha}=Y_{1}\cdots Y_{|\alpha|}$ and $Y_{j}\in\{X_{1},\ldots,X_{n}\}$ are all left-invariant, we have $\sigma_{\paal}=\sigma_{Y_1}\cdots \sigma_{Y_{|\alpha|}}$, so that we get $$\|\sigma_{\paal}(\xi)\|_{op}\leq
\|\sigma_{X_1}(\xi)\|^{\alpha_1}_{op}\cdots \|\sigma_{X_n}(\xi)\|^{\alpha_n}_{op}.$$ Now, since $X_{j}$ are operators of the first order, one can show (see e.g. [@RTi Lemma 8.6], or [@RTb Section 10.9.1] for general arguments) that $||\sigma_{X_j}(\xi)||_{op}\leq C_{j}\jp{\xi}$ for some constants $C_{j},$ $j=1,\ldots,n$. Let $C_0=\sup_{j}C_j+1,$ then we have $$\label{EQ:paalnorm}
\|\sigma_\paal(\xi)\|_{op}\leq C_{0}^{|\alpha|}\jp{\xi}^{|\alpha|}.$$ Let us define $\sigma_{P_{\alpha}}\in\Sigma$ by setting $\sigma_{P_{\alpha}}(\xi):=|\xi|^{-2k}\sigma_{\paal}(\xi)$ for $[\xi]\in\Ghs$, and by $\sigma_{P_{\alpha}}(\xi):=0$ for $[\xi]\in\Gh\backslash\Ghs.$ This gives the corresponding operator $$\begin{aligned}
\label{EQ:Pal}
(P_{\alpha}\phi)(x) & = & \sumxi d_{\xi} \operatorname{Tr}\p{\xi(x)\sigma_{P_{\alpha}}(\xi)\widehat\phi(\xi)} .
\end{aligned}$$ From we obtain $$\label{EQ:paalnorm2}
\|\sigma_{P_{\alpha}}(\xi)\|_{op}\leq C_{0}^{|\alpha|} \jp{\xi}^{|\alpha|} |\xi|^{-2k}
\textrm{ for all } [\xi]\in\Ghs.$$ Now, for $[\xi]\in \Ghs$, from we have $$|\xi|^{-2k} \leq C_{1}^{2k}\jp{\xi}^{-2k},\quad
C_{1}=\p{1+\frac{1}{\lambda_{1}^{2}}}^{1/2}.$$ Together with , and the trivial estimate for $[\xi]\in\Gh\backslash\Ghs$, we obtain $$\label{EQ:paalnorm3}
\|\sigma_{P_{\alpha}}(\xi)\|_{op}\leq C_{0}^{|\alpha|} C_{1}^{2k} \jp{\xi}^{|\alpha|-2k}
\textrm{ for all } [\xi]\in\Gh.$$ Using and the Plancherel identity, we estimate $$\begin{aligned}
|P_{\alpha}\phi(x)| &\leq& \sumxi d_{\xi} \|\xi(x)\sigma_{P_{\alpha}}(\xi)\|_{\HS}
\|\widehat{\phi}(\xi)\|_{\HS}\nonumber\\
&\leq& \left(\sumxi d_{\xi}\|\widehat{\phi}(\xi)\|^{2}_{\HS}\right)^{1/2}
\left(\sumxi d_{\xi}
\|\sigma_{P_{\alpha}}(\xi)\|_{op}^{2}
\|\xi(x)\|_{\HS}^{2}
\right)^{1/2}\nonumber\\
&=& \|\phi\|_{L^{2}}
\left(\sumxi d_{\xi}^{2} \|\sigma_{P_{\alpha}}(\xi)\|_{op}^{2}\right)^{1/2}.\nonumber\end{aligned}$$ From this and we conclude that $$\begin{aligned}
|P_{\alpha}\phi(x)|
\leq \|\phi\|_{L^{2}}
C_{0}^{|\alpha|} C_{1}^{2k}
\left(\sumxi d_{\xi}^{2}\jp{\xi}^{-2(2k-|\alpha|)}\right)^{1/2}.\end{aligned}$$ Now, in view of Lemma \[L:series\] the series on the right hand side converges provided that $2k-|\alpha|>n/2.$ Therefore, for $2k-|\alpha|>n/2$ we obtain $$\label{EQ:pal2}
\|P_{\alpha}\phi\|_{L^2}\leq C C_{2}^{2k} \|\phi\|_{L^{2}},$$ with some $C$ and $C_{2}=C_{0}C_{1}$ independent of $k$ and $\alpha$. We note that here we used that $|\alpha|\leq 2k$ and that we can always have $C_{0}\geq 1$.
We now observe that from the definition of $\sigma_{P_{\alpha}}$ we have $$\label{EQ:sp}
\sigma_{\paal}(\xi)=\sigma_{P_{\alpha}}(\xi) |\xi|^{2k}$$ for all $[\xi]\in\Ghs$. On the other hand, since we assumed $|\alpha|\not=0$, for $[\xi]\in\Gh\backslash\Ghs$ we have $\sigma_{\paal}(\xi)=\xi(x)^{*}\paal\xi(x)=0$, so that holds true for all $[\xi]\in\Gh.$ This implies that in the operator sense, we have $\paal=P_{\alpha}\circ (-\L_{G})^{k}.$ Therefore, from this relation and , for $|\alpha|<2k-n/2$, we get $$\begin{aligned}
\|\partial^{\alpha}\phi\|_{L^2}^{2}&=& \|P_{\alpha}\circ(-\L_{G})^{k}\phi\|_{L^{2}}^{2}\nonumber\\
&\leq& C C_{2}^{4k} \int_{G} |(-\L_{G})^k \phi(x)|^2 dx\nonumber\\
&\leq& C' C_2^{4k} A^{4k} ((2k)!)^{2s}\nonumber\\
&\leq& C' A_1^{4k} ((2k)!)^{2s},\end{aligned}$$ where we have used the assumption , and with $C'$ and $A_{1}=C_{2}A$ independent of $k$ and $\alpha$. Hence we have $\|\partial^{\alpha}\phi\|_{L^2}\leq C A_1^{2k}((2k)!)^s$ for all $|\alpha|< 2k-n/2.$ Then, for every $\beta$, by the above argument, taking an integer $k$ such that $|\beta|+4n\geq 2k>|\beta|+n/2$, if $A_{1}\geq 1$, we obtain $$\begin{aligned}
\|\partial^{\beta}\phi\|_{L^2}\leq C A_1^{|\beta|+4n}\left((|\beta|+4n)!\right)^{s}
\leq C' A_1^{|\beta|} \left(2^{|\beta|+4n}(4n)!|\beta|!\right)^{s}
\leq C''A_{2}^{|\beta|} (\beta!)^{s},\end{aligned}$$ in view of the factorial inequalities above. By Lemma \[L:gl2\] it follows that $\phi\in\gamma_{s}(G).$
If $A_{1}<1$ (in the case of $\gamma_{(s)}(G)$), we estimate $$\|\partial^{\beta}\phi\|_{L^2}\leq C A_1^{|\beta|+n/2}\left((|\beta|+4n)!\right)^{s}
\leq C''A_{3}^{|\beta|} (\beta!)^{s}$$ by a similar argument. The relation between constants, namely $A_{1}=C_{2}A$ and $A_{3}=2nA_{1}$, implies that the case of $\gamma_{(s)}(G)$ also holds true.
We can now pass to the Fourier transform side.
\[L:FT\] For $\phi\in \gamma_{s}(G)$, there exist constants $C>0$ and $A>0$ such that $$\label{EQ:FT1}
||\widehat{\phi}(\xi)||_{\HS}\leq C d_{\xi}^{1/2} |\xi|^{-2m} A^{2m}\left((2m)!\right)^{s}$$ holds for all $m\in\N_{0}$ and $[\xi]\in\Ghs.$ Also, for $\phi\in \gamma_{(s)}(G)$, for every $A>0$ there exists $C_{A}>0$ such that $$||\widehat{\phi}(\xi)||_{\HS}\leq C_{A} d_{\xi}^{1/2} |\xi|^{-2m} A^{2m}\left((2m)!\right)^{s}$$ holds for all $m\in\N_{0}$ and $[\xi]\in\Ghs.$
We will treat the case $\gamma_{s}$ since $\gamma_{(s)}$ is analogous. Using the fact that the Fourier transform is a bounded linear operator from $L^{1}(G)$ to $\ell^{\infty}(\Gh)$, see , and using Proposition \[PROP:l\], we obtain $$\begin{aligned}
|||\xi|^{2m}\widehat{\phi}(\xi)||_{\ell^{\infty}(\Gh)}
&\leq&\int_{G}|\left(-{\mathcal{L}}_G\right)^m\phi(x)|dx~\nonumber\\~
&\leq& C A^{2m}\left((2m)!\right)^s\end{aligned}$$ for all $[\xi]\in \Gh$ and $m\in\N_{0}.$ Recalling the definition of $\ell^{\infty}(\Gh)$, we obtain the desired estimate.
We can now prove Theorem \[THM:Gevrey\].
**(R)** “Only if” part.\
Let $\phi\in \gamma_{s}(G).$ Using $k!\leq k^k$ and Lemma \[L:FT\] we get $$\label{EQ:ft1}
||\widehat{\phi}(\xi)||_{\HS}\leq C d_{\xi}^{1/2} \inf_{2m\geq 0}|\xi|^{-2m} A^{2m}\left(2m\right)^{2ms}$$ for all $[\xi]\in\Ghs.$ We will show that this implies the (sub-)exponential decay in . It is known that for $r>0,$ we have the identity $$\label{EQ:x0}
\inf_{x> 0}x^{sx}r^{-x}=e^{-(s/e)r^{1/s}}.$$ So for a given $r>0$ there exists some $x_0=x_{0}(r)>0$ such that $$\label{EQ:x00}
\inf_{x>0}x^{sx}\left(\frac{r}{8^s}\right)^{-x}=x_{0}^{sx_0}\left(\frac{r}{8^s}\right)^{-x_0}.$$ We will be interested in large $r$, in fact we will later set $r=\frac{|\xi|}{A}$, so we can assume that $r$ is large. Consequently, in and later, we can assume that $x_{0}$ is sufficiently large. Thus, we can take an even (sufficiently large) integer $m_{0}$ such that $m_0\leq x_0<m_0+2$. Using the trivial inequalities $$\left(m_0\right)^{sm_0}r^{-(m_0+2)}\leq x_0^{sx_0}r^{-x_0},\quad r\geq 1,$$ and $$\left(k+2\right)^{k+2}\leq 8^k k^k$$ for any $k\geq 2,$ we obtain $$\left(m_0+2\right)^{s(m_0+2)}r^{-(m_0+2)}\leq 8^{sm_0}m_0^{sm_0}r^{-(m_0+2)}
\leq x_{0}^{sx_0}\left(\frac{r}{8^s}\right)^{-x_0}.$$ It follows from this, and , that $$\label{EQ:ft3}
\inf_{2m\geq 0}{(2m)^{2sm}r^{-2m}}\leq x_0^{sx_0}\left(\frac{r}{8^s}\right)^{-x_0}
=e^{-(s/e)(\frac{r}{8^s})^{1/s}}.$$ Let now $r=\frac{|\xi|}{A}$. From and we obtain $$\begin{aligned}
\|\widehat{\phi}(\xi)\|_{\HS}&\leq&C d_{\xi}^{1/2}
\inf_{2m\geq 0}\frac{A^{2m}}{|\xi|^{2m}}\left(2m\right)^{2ms}~\nonumber\\~
&=&C d_{\xi}^{1/2} \inf_{2m\geq 0}r^{-2m}\left(2m\right)^{2ms}~\nonumber\\~
&\leq&C d_{\xi}^{1/2} e^{-(s/e)\left(\frac{r}{8^s}\right)^{1/s}}\nonumber\\~
&=&C d_{\xi}^{1/2} e^{-(s/e)\frac{|\xi|^{1/s}}{8A^{1/s}}}~\nonumber\\~
&\leq&C d_{\xi}^{1/2} e^{-2B|\xi|^{1/s}},
\label{aux5}\end{aligned}$$ with $2B=\frac{s}{8e}\frac{1}{A^{1/s}}.$ Since the dimensions $d_{\xi}$ grow at most polynomially in $\jp{\xi}$, it follows that $d_{\xi}^{1/2} e^{-B|\xi|^{1/s}}\leq C.$ Using this, together with the equivalence of $|\xi|$ and $\jp{\xi}$ on $\Ghs$, we obtain the desired estimate for all $[\xi]\in\Ghs.$ On the other hand, for the trivial representation $[\xi]\in\Gh\backslash\Ghs$ the estimate is just a boundedness condition. This completes the proof of the “only if” part.
Now we prove the “if” part. Suppose $\phi\in C^{\infty}(G)$ is such that the estimate in (R) holds, i.e. we have $$||\widehat{\phi}(\xi)||_{\HS}\leq K e^{-B\jp{\xi}^{1/s}}.$$ The $\ell^{1}(\Gh)-L^{\infty}(G)$ boundedness of the inverse Fourier transform implies $$\begin{aligned}
\|(-\L_G)^{k}\phi\|_{L^{\infty}(G)}&\leq& \| |\xi|^{2k}\widehat\phi\|_{\ell^{1}(\Gh)}\nonumber\\~
&=&\sumxi d_{\xi}^{3/2} |\xi|^{2k} ||\widehat{\phi}(\xi)||_{\HS}\nonumber\\~
&\leq&K\sumxi d_{\xi}^{3/2}\jp{\xi}^{2k}e^{-B\jp{\xi}^{1/s}}\nonumber\\~
&\leq&K\sumxi d_{\xi}^{3/2} e^{\frac{-B\jp{\xi}^{1/s}}{2}} \p{\jp{\xi}^{2k} e^{\frac{-B\jp{\xi}^{1/s}}{2}}}.
\label{aux2}\end{aligned}$$ Now we will use the following simple inequality, $\frac{t^N}{N!}\leq e^{t}$ for $t>0$. Setting later $m=2k$ and $a=\frac{B}{2}$, we estimate $$(m!)^{-s}\jp{\xi}^{m}=\p{\frac{(a\jp{\xi}^{1/s})^{m}}{m!}}^{s} a^{-sm} \leq
a^{-sm} e^{a\jp{\xi}^{1/s}},$$ which implies $e^{-\frac{B}{2}\jp{\xi}^{1/s}}\jp{\xi}^{2k}\leq A^{2k}((2k)!)^{s},$ with $A=a^{-s}=(2/B)^{s}.$ Using this inequality and we obtain $$\label{aux3}
\|(-\L_G)^{k}\phi\|_{L^{\infty}}\leq K\sumxi d_{\xi}^{3/2}
e^{\frac{-B\jp{\xi}^{1/s}}{2}}A^{2k}((2k)!)^s
\leq C A^{2k}((2k)!)^s$$ with $A= \frac{2^s}{B^s}$, where the convergence of the series in $[\xi]$ follows from Lemma \[L:series\]. Therefore, $\phi\in \gamma_{s}(G)$ by Proposition \[PROP:l\].
**(B)** “Only if” part. Suppose $\phi\in \gamma_{(s)}(G).$ For any given $B>0$ define $A$ by solving $2B=\left(\frac{s}{8e}\right)\frac{1}{A^{1/s}}.$ By Lemma \[L:FT\] there exists $K_{B}>0$ such that $$\label{EQ:ft2}
||\widehat{\phi}(\xi)||_{\HS}\leq K_{B} d_{\xi}^{1/2} \inf_{2m\geq 0} |\xi|^{-2m} A^{2m}\left(2m\right)^{2ms}.$$ Consequently, arguing as in case **(R)** we get , i.e. $$\label{aux4}
\|\widehat{\phi}(\xi)\|_{\HS}\leq
K_B d_{\xi}^{1/2}e^{-2B |\xi|^{1/s}}$$ for all $[\xi]\in \Gh.$ The same argument as in the case **(R)** now completes the proof.
“If” part. For a given $A>0$ define $B>0$ by solving $A=\frac{2^s}{B^s}$ and take $C_A$ big enough as in the case of **(R)**, so that we get $$\|(-\L_G)^{k}\phi\|_{L^{\infty}}\leq C_A A^{2k}((2k)!)^s.$$ Therefore, $\phi\in \gamma_{(s)}(G)$ by Proposition \[PROP:l\].
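For completeness we also recall why the elementary identity $\inf_{x>0}x^{sx}r^{-x}=e^{-(s/e)r^{1/s}}$ used in the proof holds (a calculus exercise). Writing $x^{sx}r^{-x}=e^{f(x)}$ with $f(x)=sx\log x-x\log r$, we have $f'(x)=s\log x+s-\log r$, which vanishes at $x_{*}=e^{-1}r^{1/s}$, and since $f''(x)=s/x>0$ this critical point is a minimum, with $$f(x_{*})=x_{*}\p{s\log x_{*}-\log r}=x_{*}\p{\log r-s-\log r}=-\frac{s}{e}\,r^{1/s},$$ giving the claimed value of the infimum.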
$\alpha$-duals $\gamma_s(G)^{\wedge}$ and $\gamma_{(s)}(G)^{\wedge}$, for any $s$, $0<s<\infty$. {#SEC:alpha}
================================================================================================
First we analyse $\alpha$-duals of Gevrey spaces regarded as sequence spaces through their Fourier coefficients.
We can embed $\gamma_{s}(G)\left(\textrm{or}~\gamma_{(s)}(G)\right)$ in the sequence space $\Sigma$ using the Fourier coefficients and Theorem \[THM:Gevrey\]. We denote the $\alpha$-dual of the resulting sequence space $\gamma_{s}(G)$ (or $\gamma_{(s)}(G)$) by $$\begin{gathered}
[\gamma_{s}(G)]^{\wedge}
=\left\{v=(v_{\xi})_{[\xi]\in\Gh}\in\Sigma:
\sumxi\sumij |(v_{\xi})_{ij}| |\widehat{\phi}(\xi)_{ij}|<\infty
\textrm{ for all } \phi\in\gamma_{s}(G)
\right\},\end{gathered}$$ with a similar definition for $\gamma_{(s)}(G).$
\[L:ser\] [**(R)**]{} We have $v \in \left[ {\gamma_s \left( G \right)} \right]^{\wedge}$ if and only if for every $B>0$ the inequality $$\label{EQ:ad}
\sumxi e^{ - B\jp{\xi}^{\frac{1} {s}} } \left\| {v_\xi}
\right\|_{\HS} < \infty$$ holds.\
[**(B)**]{} Also, we have $v \in \left[ {\gamma_{(s)} \left( G \right)} \right]^{\wedge}$ if and only if there exists $B>0$ such that the inequality in (R) holds.
The proof of this lemma in (R) and (B) cases will be different. For (R) we can show this directly, and for (B) we employ the theory of echelon spaces by Köthe [@Koe].
**(R)** “Only if” part. Let $v \in \left[ {\gamma_s \left( G \right)} \right]^{\wedge}$. For any $B>0$, define $\phi$ by setting its Fourier coefficients to be $\widehat{\phi}(\xi)_{ij}:=d_\xi e^{ -B\jp{\xi}^{\frac{1} {s}} },$ so that $\|\widehat{\phi}(\xi)\|_{\HS}=d_\xi^2 e^{ -B\jp{\xi}^{\frac{1} {s}} }\leq
Ce^{ -\frac{B}{2}\jp{\xi}^{\frac{1} {s}} }$ by , which implies that $\phi \in \gamma_s \left( G \right)$ by Theorem \[THM:Gevrey\]. Using Lemma \[L:c\], we obtain $$\sumxi e^{-B\jp{\xi}^{\frac{1}{s}}} \|{v_\xi}\|_{\HS} \leq
\sumxi d_{\xi} e^{-B\jp{\xi}^{\frac{1}{s}}} \|{v_\xi}\|_{\ell^1(\C)} =
\sumxi \sumij |(v_\xi)_{ij}| |\widehat{\phi}(\xi)_{ij}| <
\infty$$ by the assumption $v \in \left[ {\gamma_s \left( G \right)} \right]^{\wedge}$, proving the “only if” part.\
“If” part. Let $ \phi \in \gamma_s(G)$. Then by Theorem \[THM:Gevrey\] there exist some $B>0$ and $C>0$ such that $$\|\widehat{\phi}(\xi)\|_{\HS} \leq C
e^{-B\jp{\xi}^{\frac{1}{s}}},$$ which implies that $$\sumxi \sumij |(v_{\xi})_{ij}| |\widehat{\phi}(\xi)_{ij}|
\leq \sumxi \|v_\xi\|_{\HS} \|\widehat{\phi}(\xi)\|_{\HS}
\leq C \sumxi e^{-B\jp{\xi}^{\frac{1}{s}}} \|v_\xi\|_{\HS}
< \infty$$ is finite by the assumption . But this means that $v \in [\gamma_s(G)]^{\wedge}.$\
**(B)** For any $B>0$ we consider the so-called echelon space, $$E_B=\left\{v=(v_\xi)\in\Sigma: \sumxi\sumij
e^{- B\jp{\xi}^{\frac{1} {s}} }
|({v_\xi})_{ij}|<\infty\right\}.$$ Now, by diagonal transform we have $E_B \cong l^1$ and hence $\widehat{E_{B}}\cong l^{\infty}$, and it is easy to check that $\widehat{E_{B}}$ is given by $$\widehat{E_{B}}=
\left\{w=(w_{\xi})\in\Sigma\;|\; \exists K>0\; : \quad
|(w_\xi)_{ij}|\leq K e^{-B\jp{\xi}^{1/s}}
\textrm{ for all } 1\leq i,j\leq d_\xi
\right\}.$$ By Theorem \[THM:Gevrey\] we know that $\phi\in \gamma_{(s)}(G)$ if and only if $\left(\widehat{\phi}(\xi)\right)_{[\xi]\in \Gh}
\in \bigcap_{B>0}{\widehat{E_B}}.$ Using Köthe’s theory relating echelon and co-echelon spaces [@Koe Ch. 30.8], we have, consequently, that $v\in \gamma_{(s)}(G)^{\wedge}$ if and only if $(v_{\xi})_{[\xi]\in \Gh}\in \bigcup_{B>0}E_{B}$. But this means that for some $B>0$ we have $$\sumxi\sumij
e^{- B\jp{\xi}^{\frac{1} {s}} }
|({v_\xi})_{ij}| < \infty.$$ Finally, we observe that this is equivalent to if we use Lemma \[L:c\] and .
We now give the characterisation for $\alpha$-duals.
\[THM:aduals\] Let $0<s<\infty$.\
[**(R)**]{} We have $v \in \left[ {\gamma_s \left( G
\right)} \right]^\wedge$ if and only if for every $B>0$ there exists $K_B>0$ such that $$\label{EQ:ade}
|| { v_ \xi }||_{\HS} \leq K_B e^{B\left\langle \xi \right\rangle ^{\frac{1} {s}} }$$ holds for all $ [\xi] \in \Gh$.\
[**(B)**]{} We have $v \in \left[ {\gamma_{(s)} \left( G
\right)} \right]^\wedge$ if and only if there exist $B>0$ and $K_{B}>0$ such that the estimate in (R) holds for all $ [\xi] \in \Gh$.
We prove the case (R) only since the proof of (B) is similar. First we deal with “If” part. Let $v\in\Sigma$ be such that holds for every $B>0$. Let $\va\in\gamma_{s}(G)$. Then by Theorem \[THM:Gevrey\] there exist some constants $A>0$ and $C>0$ such that $\|\widehat{\phi}(\xi)\|_{\HS}\leq C e^{-A\jp{\xi}^{1/s}}.$ Taking $B=A/2$ in we get that $$\begin{aligned}
\sumxi \sumij |(v_{\xi})_{ij}| |\widehat{\phi}(\xi)_{ij}|
\leq
\sumxi \|v_{\xi}\|_{\HS} \|\widehat{\phi}(\xi)\|_{\HS}
\leq C K_B \sumxi e^{-\frac{A}{2}\jp{\xi}^{1/s}}<\infty,\end{aligned}$$ so that $v \in \left[ {\gamma_s \left( G
\right)} \right]^\wedge$.\
“Only if” part. Let $v\in \br{\gamma_{s}(G)}^{\wedge}$ and let $B>0$. Then by Lemma \[L:ser\] we have that $$\sumxi e^{-B\jp{\xi}^{1/s}}||v_\xi||_{\HS}<\infty.$$ This implies that there exists a constant $K_{B}>0$ such that $e^{-B\jp{\xi}^{1/s}}||v_{\xi}||_{\HS}\leq K_{B},$ yielding the desired estimate.
We now want to show that the Gevrey spaces are perfect in the sense of Köthe. We define the $\alpha-$dual of $[\gamma_{s}(G)]^{\wedge}$ as $$[\widehat{\gamma_{s}(G)}]^{\wedge}=
\left\{w=(w_{\xi})_{[\xi]\in\Gh}\in\Sigma:
\sumxi \sumij |(w_{\xi})_{ij}| |(v_\xi)_{ij}|<\infty
\textrm{ for all } v\in[\gamma_{s}(G)]^{\wedge}\right\},$$ and similarly for $[\gamma_{(s)}(G)]^{\wedge}$. First, we prove the following lemma.
\[L:perfect\] [**(R)**]{} We have $w \in \left[ {\widehat{\gamma_s \left( G
\right)}} \right]^\wedge$ if and only if there exists $B>0$ such that $$\label{EQ:pc}
\sumxi e^{B\jp{\xi}^{\frac{1} {s}}} \| {w_\xi}\|_{\HS} < \infty.$$ [**(B)**]{} We have $w \in \left[ {\widehat{\gamma_{(s)} \left( G
\right)}} \right]^\wedge$ if and only if for every $B>0$ the series converges.
We first show the Beurling case as it is more straightforward.\
**(B)** “Only if” part. We assume that $w \in \left[ {\widehat{\gamma_{(s)} \left( G
\right)}} \right]^\wedge$. Let $B>0$, and define $(v_{\xi})_{ij}:=d_{\xi} e^{B\jp{\xi}^{\frac{1} {s}}}.$ Then $\|v_{\xi}\|_{\HS}=d_{\xi}^{2} e^{B\jp{\xi}^{\frac{1} {s}}}\leq Ce^{2B\jp{\xi}^{\frac{1} {s}}}$ by , which implies $v\in{[\gamma_{(s)}(G)]^{\wedge}}$ by Theorem \[THM:aduals\]. Consequently, using Lemma \[L:c\] we can estimate $$\sumxi e^{B\jp{\xi}^{\frac{1}{s}}} \|{w_\xi}\|_{\HS} \leq
\sumxi d_{\xi} e^{B\jp{\xi}^{\frac{1}{s}}} \sumij |(w_\xi)_{ij}| =
\sumxi \sumij |(v_\xi)_{ij}| |(w_{\xi})_{ij}| <
\infty,$$ implying .\
“If” part. Here we are given $w\in\Sigma$ such that for every $B>0$ the series converges. Let us take any $v\in {[\gamma_{(s)}(G)]^{\wedge}}$. By Theorem \[THM:aduals\] there exist $B>0$ and $K>0$ such that $\|v_{\xi}\|_{\HS}\leq K e^{B\jp{\xi}^{\frac{1} {s}}}.$ Consequently, we can estimate $$\sumxi \sumij |(v_\xi)_{ij}| |(w_{\xi})_{ij}| \leq
\sumxi \|v_\xi\|_{\HS} \|w_{\xi}\|_{\HS} \leq
K \sumxi e^{B\jp{\xi}^{\frac{1}{s}}} \|{w_\xi}\|_{\HS}<
\infty$$ by the assumption , which shows that $w \in \left[ {\widehat{\gamma_{(s)} \left( G \right)}} \right]^\wedge$.
**(R)** For $B>0$ we consider the echelon space $$D_{B}=
\left\{v=(v_{\xi})\in\Sigma\;|\; \exists K>0\; : \quad
|(v_\xi)_{ij}|\leq K e^{B\jp{\xi}^{1/s}}
\textrm{ for all } 1\leq i,j\leq d_\xi
\right\}.$$ By diagonal transform we have $D_{B}\cong l^{\infty}$, and since $l^{\infty}$ is a perfect sequence space, we have $\widehat{D_{B}}\cong l^{1}$, and it is given by $$\widehat{D_{B}}=\left\{w=(w_\xi)\in\Sigma: \sumxi\sumij
e^{B\jp{\xi}^{\frac{1} {s}} }
|({w_\xi})_{ij}|<\infty\right\}.$$ By Theorem \[THM:aduals\] we know that $\gamma_{s}(G)^{\wedge}=\bigcap_{B>0} D_{B}$, and hence $\left[ {\widehat{\gamma_{s} \left( G \right)}} \right]^\wedge
=\bigcup_{B>0} \widehat{D_{B}}.$ This means that $w\in \left[ {\widehat{\gamma_{s} \left( G \right)}} \right]^\wedge$ if and only if there exists $B>0$ such that we have $\sumxi \sumij e^{2B\jp{\xi}^{\frac{1}{s}}} |({w_\xi})_{ij}|<\infty.$ Consequently, by Lemma \[L:c\] we get $$\sumxi e^{B\jp{\xi}^{\frac{1}{s}}} \|{w_\xi}\|_{\HS}\leq
\sumxi d_{\xi} e^{B\jp{\xi}^{\frac{1}{s}}} \|{w_\xi}\|_{\ell^{1}(\C)}\leq
C\sumxi \sumij e^{2B\jp{\xi}^{\frac{1}{s}}} |({w_\xi})_{ij}|<\infty,$$ completing the proof of the “only if” part. Conversely, given for some $2B>0$, we have $$\sumxi \sumij e^{B\jp{\xi}^{\frac{1}{s}}} |({w_\xi})_{ij}|\leq
\sumxi d_{\xi }e^{B\jp{\xi}^{\frac{1}{s}}} \|{w_\xi}\|_{\HS}\leq
C\sumxi e^{2B\jp{\xi}^{\frac{1}{s}}} \|{w_\xi}\|_{\HS}<\infty,$$ implying $w\in\left[ {\widehat{\gamma_{s} \left( G \right)}} \right]^\wedge$.
Now we can show that the Gevrey spaces are perfect spaces (sometimes called Köthe spaces).
$\gamma_{s}(G)$ and $\gamma_{(s)}(G)$ are perfect spaces, that is, $\gamma_{s}(G)=[\widehat{\gamma_{s}(G)}]^{\wedge}$ and $\gamma_{(s)}(G)=[\widehat{\gamma_{(s)}(G)}]^{\wedge}.$
We will show this for ${\gamma_{s}(G)}$ since the proof for ${\gamma_{(s)}(G)}$ is analogous. From the definition of $[\widehat{\gamma_{s}(G)}]^{\wedge}$ we have $\gamma_{s}(G)\subseteq [\widehat{\gamma_{s}(G)}]^{\wedge}.$ We will prove the other direction, i.e., $[\widehat{\gamma_{s}(G)}]^ {\wedge}\subseteq \gamma_{s}(G)$. Let $w={(w_{\xi})_{[\xi]\in{\Gh}}}\in [\widehat{\gamma_{s}(G)}]^ {\wedge}$ and define $$\phi(x):=\sumxi d_{\xi}\operatorname{Tr}\left(w_{\xi}\xi(x)\right).$$ The series makes sense due to Lemma \[L:perfect\], and we have $\|\widehat{\phi}(\xi)\|_{\HS}=\|w_{\xi}\|_{\HS}$. Now since $w\in [\widehat{\gamma_{s}(G)}]^{\wedge}$ by Lemma \[L:perfect\] there exists $B>0$ such that $\sumxi e^{ B\left\langle \xi
\right\rangle^{\frac{1} {s}} } || {w_\xi}||_{\HS} < \infty$, which implies that for some $C>0$ we have $$\begin{aligned}
e^{B\jp{\xi}^{1/s}}||w_{\xi}||_{\HS}< C
&\Rightarrow&
||\widehat{\phi}(\xi)||_{\HS}\leq C e^{-B\jp{\xi}^{1/s}}.\end{aligned}$$ By Theorem \[THM:Gevrey\] this implies $\phi\in \gamma_{s}(G).$ Hence $\gamma_{s}(G)=[\widehat{\gamma_{s}(G)}]^{\wedge},$ i.e. $\gamma_{s}(G)$ is a perfect space.
Ultradistributions $\gamma_s'(G)$ and $\gamma_{(s)}'(G)$ {#SEC:ultra}
========================================================
Here we investigate the Fourier coefficients criteria for spaces of ultradistributions. The space $\gamma_{s}'(G)$ (resp. $\gamma_{(s)}'(G)$) of the ultradistributions of order $s$ is defined as the dual of $\gamma_{s}(G)$ (resp. $\gamma_{(s)}(G)$) endowed with the standard inductive limit topology of $\gamma_{s}(G)$ (resp. the projective limit topology of $\gamma_{(s)}(G)$).
\[DEF:dual\] The space $\gamma_{s}'(G)\left(\textrm{resp. } \gamma_{(s)}'(G)\right)$ is the set of the linear forms $u$ on $\gamma_{s}(G)\left(\textrm{resp. } \gamma_{(s)}(G)\right)$ such that for every $\epsilon>0$ there exists $C_\epsilon$ (resp. for some $\epsilon>0$ and $C>0$) such that $$|u(\phi)|\leq C_{\epsilon}\sup_{\alpha}\epsilon^{|\alpha|}(\alpha!)^{-s}
\sup_{x\in G}|(-\mathcal{L}_{G})^{|\alpha|/2}\phi(x)|$$ holds for all $\phi\in \gamma_{s}(G)$ (resp. $\phi\in\gamma_{(s)}(G)$).
We can take the Laplace-Beltrami operator in Definition \[DEF:dual\] because of the equivalence of norms given by Proposition \[PROP:l\].
We recall that for any $v\in \gamma_{s}'(G)$, for $[\xi]\in\Gh$, we define the Fourier coefficients $\widehat{v}(\xi):=\jp{v,\xi^{\ast}}\equiv v(\xi^{*}).$
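As a basic example (consistent with Theorem \[THM:duals\] and not needed in the sequel), consider the Dirac delta at the unit element, $\delta(\phi)=\phi(e)$. It is continuous on $\gamma_{s}(G)$ and on $\gamma_{(s)}(G)$ since $|\delta(\phi)|\leq\|\phi\|_{L^{\infty}}$, and $\widehat{\delta}(\xi)=\delta(\xi^{*})=\xi(e)^{*}=I_{d_{\xi}}$, so that $$\|\widehat{\delta}(\xi)\|_{\HS}=d_{\xi}^{1/2}\leq K_{B}e^{B\jp{\xi}^{1/s}}\quad\textrm{ for every }B>0,$$ in agreement with both parts of Theorem \[THM:duals\], since $d_{\xi}$ grows at most polynomially in $\jp{\xi}$.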
We have the following theorem showing that topological and $\alpha$-duals of Gevrey spaces coincide.
\[THM:equiv\] Let $1\leq s<\infty.$ Then $v\in \gamma_{s}'(G)\left(\textrm{resp. } \gamma_{(s)}'(G)\right)$ if and only if $v\in \gamma_{s}(G)^{\wedge}\left(\textrm{resp. } \gamma_{(s)}(G)^{\wedge}\right).$
**(R)** “If” part. Let $v\in \gamma_{s}(G)^{\wedge}.$ For any $\phi\in \gamma_{s}(G)$ define $$\label{EQ:defv}
v(\phi):=\sumxi d_{\xi}\operatorname{Tr}\left(\widehat{\phi}(\xi)v_{\xi}\right).$$ Since by Theorem \[THM:Gevrey\] there exist some $B>0$ and $C>0$ such that $||\widehat{\phi}(\xi)||_{\HS}\leq C e^{-B\jp{\xi}^{1/s}}$, we can estimate $$\sumxi d_{\xi}\operatorname{Tr}\left(\widehat{\phi}(\xi)v_{\xi}\right) \leq
\sumxi d_{\xi} \|\widehat{\phi}(\xi)\|_{\HS} \|v_{\xi}\|_{\HS} \leq
C\sumxi d_{\xi} e^{-B\jp{\xi}^{1/s}} \|v_{\xi}\|_{\HS}<\infty$$ by Lemma \[L:ser\] and . Therefore, $v(\phi)$ in is a well-defined linear functional on $ \gamma_{s}(G)$. It remains to check that $v$ is continuous. Suppose $\phi_j\rightarrow \phi$ in $\gamma_{s}(G)$ as $j\to\infty$, that is, in view of Proposition \[PROP:l\], there is a constant $A>0$ such that $$\sup_{\alpha}A^{-|\alpha|}(\alpha!)^{-s}\sup_{x\in G}
|(-\mathcal{L}_{G})^{|\alpha|/2}(\phi_j(x)-\phi(x))|\rightarrow 0$$ as $j\rightarrow \infty.$ It follows that $$\|\left(-{\mathcal{L}}_{G}\right)^{|\alpha|/2}(\phi_j-\phi)\|_{\infty}\leq
C_j A^{|\alpha|}\left((|\alpha|)!\right)^s,$$ for a sequence $C_j\rightarrow 0$ as $j\rightarrow \infty.$ From the proof of Theorem \[THM:Gevrey\] it follows that we then have $$\|\widehat{\phi_j}(\xi)-\widehat \phi(\xi)\|_{\HS}\leq
K_{j} e^{-B\jp{\xi}^{1/s}},$$ where $B>0$ and $K_j\rightarrow 0$ as $j\rightarrow \infty.$ Hence we can estimate $$\begin{aligned}
|v(\phi_j-\phi)|&\leq&
\sumxi d_{\xi} \|\widehat{\phi_j}(\xi)-\widehat\phi(\xi)\|_{\HS}
\|v_{\xi}\|_{\HS}\nonumber\\
&\leq& K_{j} \sumxi d_{\xi} e^{-B\jp{\xi}^{1/s}} \|v_{\xi}\|_{\HS}
\rightarrow 0\nonumber \end{aligned}$$ as $j\rightarrow \infty$ since $K_j\rightarrow 0$ as $j\rightarrow \infty$ and $\sumxi d_{\xi} e^{-B\jp{\xi}^{1/s}} \|v_{\xi}\|_{\HS}<\infty$ by Lemma \[L:ser\] and . Therefore, we have $v\in \gamma_{s}'(G).$\
“Only if” part. Let us now take $v\in \gamma_{s}'(G).$ This means that for every $\epsilon>0$ there exists $C_\epsilon$ such that $$|v(\phi)|\leq C_{\epsilon}\sup_{\alpha}\epsilon^{|\alpha|}(\alpha!)^{-s}\sup_{x\in G}
|(-\mathcal{L}_{G})^{|\alpha|/2}\phi(x)|$$ holds for all $\phi\in \gamma_{s}(G).$ So then, in particular, we have $$\begin{aligned}
|v(\xi^{\ast}_{ij})| &\leq& C_{\epsilon}\sup_{\alpha}\epsilon^{|\alpha|}(\alpha!)^{-s}
\sup_{x\in G}|(-\L_{G})^{|\alpha|/2}\xi^{\ast}_{ij}(x)|\nonumber\\
&=& C_\epsilon \sup_{\alpha}\epsilon^{|\alpha|}(\alpha!)^{-s}|\xi|^{|\alpha|}\sup_{x\in G}|\xi^{\ast}_{ij}(x)| \\
& \leq & C_\epsilon \sup_{\alpha}\epsilon^{|\alpha|}(\alpha!)^{-s}\langle\xi\rangle^{|\alpha|}\sup_{x\in G}
\|\xi^{\ast}(x)\|_{\HS} \\
& = & C_\epsilon \sup_{\alpha}\epsilon^{|\alpha|}(\alpha!)^{-s}\langle\xi\rangle^{|\alpha|}
d_{\xi}^{1/2}.\end{aligned}$$ This implies $$\begin{aligned}
||v({\xi^{\ast}})||_{\HS} =
\sqrt{{\sum_{i,j=1}^{d_{\xi}}|v(\xi^{\ast}_{ij})|^2}}
\leq
C_\epsilon d_{\xi}^{3/2} \sup_{\alpha}\epsilon^{|\alpha|}(\alpha!)^{-s}\langle\xi\rangle^{|\alpha|}.\end{aligned}$$ Setting $r=\epsilon\langle\xi\rangle$ and using inequalities $$\alpha!\geq|\alpha|!n^{-|\alpha|}
\textrm{ and }
\p{\frac{(r^{1/s}n)^{|\alpha|}}{|\alpha|!}}^{s}\leq
\p{e^{r^{1/s}n}}^{s}=e^{n s r^{1/s}},$$ we obtain $$\begin{aligned}
\|v(\xi^{\ast})\|_{\HS} &\leq&
C_{\epsilon} d_{\xi}^{3/2} \sup_{\alpha}\left(rn^{s}\right)^{|\alpha|}\left(|\alpha|!\right)^{-s}\nonumber\\
&\leq & C_{\epsilon}d_{\xi}^{3/2} \sup_{\alpha}e^{ns r^{1/s}}\nonumber\\
&=&C_\epsilon d_{\xi}^{3/2} e^{ns \epsilon^{1/s}\jp{\xi}^{1/s}} \end{aligned}$$ for all $\epsilon>0.$ We now recall that $v(\xi^{*})=\widehat{v}(\xi)$ and, therefore, with $v_{\xi}:=\widehat{v}(\xi)$, we get $v\in \gamma_s(G)^{\wedge}$ by Theorem \[THM:aduals\] and .\
**(B)** This case is similar but we give the proof for completeness.\
“If” part. Let $v\in \gamma_{(s)}(G)^{\wedge}$ and for any $\phi\in \gamma_{(s)}(G)$ define $v(\phi)$ by . By a similar argument to the case (R), it is a well-defined linear functional on $\gamma_{(s)}(G)$. To check the continuity, suppose $\phi_j\rightarrow \phi$ in $\gamma_{(s)}(G)$, that is, for every $A>0$ we have $$\sup_{\alpha}A^{-|\alpha|}(\alpha!)^{-s}\sup_{x\in G}
|(-\mathcal{L}_{G})^{|\alpha|/2}(\phi_j(x)-\phi(x))|\rightarrow 0$$ as $j\rightarrow \infty.$ It follows that $$\|\left(-{\mathcal{L}}_{G}\right)^{|\alpha|/2}(\phi_j-\phi)\|_{\infty}\leq
C_j A^{|\alpha|}\left((|\alpha|)!\right)^s,$$ for a sequence $C_j\rightarrow 0$ as $j\rightarrow \infty,$ for every $A>0.$ From the proof of Theorem \[THM:Gevrey\] it follows that for every $B>0$ we have $$\|\widehat{\phi_j}(\xi)-\widehat \phi(\xi)\|_{\HS}\leq
K_{j} e^{-B\jp{\xi}^{1/s}},$$ where $K_j\rightarrow 0$ as $j\rightarrow \infty.$ Hence we can estimate $$\begin{aligned}
|v(\phi_j-\phi)|&\leq&
\sumxi d_{\xi} \|\widehat{\phi_j}(\xi)-\widehat\phi(\xi)\|_{\HS}
\|{v}_{\xi}\|_{\HS}\nonumber\\
&\leq& K_{j} \sumxi d_{\xi} e^{-B\jp{\xi}^{1/s}} \|v_{\xi}\|_{\HS}
\rightarrow 0\nonumber \end{aligned}$$ as $j\rightarrow \infty$ since $K_j\rightarrow 0$ as $j\rightarrow \infty$, and where we now take $B>0$ to be such that $\sumxi d_{\xi} e^{-B\jp{\xi}^{1/s}} \|v_{\xi}\|_{\HS}<\infty$ by Lemma \[L:ser\] and . Therefore, we have $v\in \gamma_{(s)}'(G).$\
“Only if” part. Let $v\in \gamma_{(s)}'(G).$ This means that there exists $\epsilon>0$ and $C>0$ such that $$|v(\phi)| \leq C\sup_{\alpha}\epsilon^{|\alpha|}(\alpha!)^{-s}
\sup_{x\in G}|(-\mathcal{L}_{G})^{|\alpha|/2}\phi(x)|$$ holds for all $\phi\in \gamma_{(s)}(G).$ Then, proceeding as in the case (R), we obtain $$\|v(\xi^{\ast})\|_{\HS}\leq Cd_{\xi}^{3/2} e^{ns\epsilon^{1/s}\jp{\xi}^{1/s}},$$ i.e. $\|\widehat{v}(\xi)\|_{\HS}\leq C e^{\delta\jp{\xi}^{1/s}},$ for some $\delta>0$. Hence $v\in \gamma_{(s)}(G)^{\wedge}$ by Theorem \[THM:aduals\].
[1]{}
M. D. Bronshtein, The Cauchy problem for hyperbolic operators with characteristics of variable multiplicity. (Russian) [*Trudy Moskov. Mat. Obshch.*]{} [**41**]{} (1980), 83–99; [*Trans. Moscow Math. Soc.*]{} [**1**]{} (1982), 87–103.
F. Bruhat, [*Lectures on Lie groups and representations of locally compact groups.*]{} Tata Institute of Fundamental Research, Bombay, 1968.
C. Garetto and M. Ruzhansky, On the well-posedness of weakly hyperbolic equations with time dependent coefficients, [*J. Differential Equations*]{}, [**253**]{} (2012), 1317–1340.
J. Faraut, [*Analysis on Lie groups. An introduction.*]{} Cambridge University Press, Cambridge, 2008.
H. Komatsu, Ultradistributions, I, II, III, [*J. Fac. Sci. Univ. of Tokyo*]{}, Sec. IA, [**20**]{} (1973), 25–105, [**24**]{} (1977), 607–628, [**29**]{} (1982), 653–718.
G. Köthe, [*Topological vector spaces. I.*]{} Springer, 1969.
W. H. Ruckle, [*Sequence Spaces*]{}, Pitman, 1981.
M. Ruzhansky and V. Turunen, [*Pseudo-differential operators and symmetries*]{}, Birkhäuser, Basel, 2010.
M. Ruzhansky and V. Turunen, Global quantization of pseudo-differential operators on compact Lie groups, SU(2) and 3-sphere, [*Int Math Res Notices IMRN*]{} (2012), 58 pages, doi: 10.1093/imrn/rns122.
E. M. Stein, [*Topics in harmonic analysis related to the Littlewood-Paley theory.*]{} Princeton University Press, Princeton, 1970.
Y. Taguchi, Fourier coefficients of periodic functions of Gevrey classes and ultradistributions. [*Yokohama Math. J.*]{} [**35**]{} (1987), 51–60.
N. Ja. Vilenkin and A. U. Klimyk, [*Representation of Lie groups and special functions. Vol. 1. Simplest Lie groups, special functions and integral transforms.*]{} Kluwer Academic Publishers Group, Dordrecht, 1991.
[^1]: The first author was supported by the Grace-Chisholm Young Fellowship from London Mathematical Society. The second author was supported by the EPSRC Leadership Fellowship EP/G007233/1.
[^2]: The result of Bronshtein [@B] holds but is, in general, not optimal for some types of equations or does not hold for low regularity $a(t).$
[^3]: See also Definition \[DEF:dual\] for an equivalent formulation.
[^4]: The characterisation of $\alpha$-duals is valid for all $0<s<\infty$.
[^5]: Namely, the inequality $d_{\xi}\leq C\jp{\xi}^{\frac{n}{2}}$.
[^6]: Note that this can be adopted to give a simple proof of the Sobolev embedding theorem.
[^7]: The form in which we use it is adapted to non-commutativity of vector fields. Namely, although the coefficients are all equal to one in the non-commutative form, the multinomial coefficient appears once we make a choice for $\alpha=(\alpha_{1},\cdots,\alpha_{n}).$
---
abstract: |
We present evidence for the existence of an IRAC excess in the spectral energy distribution (SED) of 5 galaxies at $0.6<z<0.9$ and 1 galaxy at $z=1.7$. These 6 galaxies, located in the Great Observatories Origins Deep Survey field (GOODS-N), are star forming since they present strong $6.2,\ 7.7$, and,$\,11.3\,\mu$m polycyclic aromatic hydrocarbon (PAH) lines in their *Spitzer* IRS mid-infrared spectra. We use a library of templates computed with PEGASE.2 to fit their multiwavelength photometry and derive their stellar continuum. Subtraction of the stellar continuum enables us to detect in 5 galaxies a significant excess in the IRAC band pass where the $3.3\,\mu$m PAH is expected (i.e IRAC $5.8\,\mu$m for the range of redshifts considered here). We then assess if the physical origin of the IRAC excess is due to an obscured active galactic nucleus (AGN) or warm dust emission. For one galaxy evidence of an obscured AGN is found, while the remaining four do not exhibit any significant AGN activity. Possible contamination by warm dust continuum of unknown origin as found in the Galactic diffuse emission is discussed. The properties of such a continuum would have to be different from the local Universe to explain the measured IRAC excess, but we cannot definitively rule out this possibility until its origin is understood. Assuming that the IRAC excess is dominated by the $3.3\,\mu$m PAH feature, we find good agreement with the observed $11.3\,\mu$m PAH line flux arising from the same C-H bending and stretching modes, consistent with model expectations. Finally, the IRAC excess appears to be correlated with the star-formation rate in the galaxies. Hence it could provide a powerful diagnostic for measuring dusty star formation in $z>3$ galaxies once the mid-infrared spectroscopic capabilities of the *James Webb Space Telescope* become available.\
\
author:
- 'B. Magnelli, R. R. Chary, A. Pope, D. Elbaz, G. Morrison & M. Dickinson'
title: 'IRAC Excess in Distant Star-Forming Galaxies: Tentative Evidence for the 3.3$\mu$m Polycyclic Aromatic Hydrocarbon Feature ?'
---
Introduction
============
Sample selection {#sec: samples}
================
Data Analysis {#sec: data analysis}
=============
Discussion on the origin of the IRAC excess\[sec:Origin\]
=========================================================
Obscured AGN\[sec:AGN\]
-----------------------
$$\label{eq:sfr lir}
SFR[\rm{M_{\odot}\,yr^{-1}}]=4.5\times10^{-44}L_{IR}[\rm{erg\,s^{-1}}]$$ $$SFR[\rm{M_{\odot}\,yr^{-1}}]=1.7\times10^{-43}L_{0.5-8\rm{keV}}^{1.07}[\rm{erg\,s^{-1}}]$$ $$\label{eq:kcorrec}
f_{soft/hard}^{observed}[\rm{ erg\,s^{-1}\,cm^{-2}}]=\frac{L_{\it{soft/hard}}^{\it{restframe}}[\rm{erg\,s^{-1}}]}{4\pi d_{l}^{2}\,(1+z)^{\Gamma -2}}$$
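As an order-of-magnitude illustration of the first relation above (assuming $L_{\odot}\simeq3.8\times10^{33}\,\rm{erg\,s^{-1}}$ for the conversion from solar units), a galaxy with $L_{IR}=10^{12}\,L_{\odot}\simeq3.8\times10^{45}\,\rm{erg\,s^{-1}}$ has $$SFR\simeq4.5\times10^{-44}\times3.8\times10^{45}\simeq170\,\rm{M_{\odot}\,yr^{-1}}.$$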
Free-free and gas lines\[sec:freefree\]
---------------------------------------
Warm dust continuum\[sec:cont\]
-------------------------------
$3.3 \,\mu$m PAH feature\[sec:PAH\]
-----------------------------------
Conclusion {#sec: conclusion}
==========
[cccccccccccc]{} MIPS4 & 0.638 &189.01355 & 62.18634 & $3.9\pm0.2$ & $11.9\pm0.5$ & $27.2\pm1.2$ & $35.8\pm1.6$ & $129.9\pm6.5$ & $89.1\pm4.4$ & $107.9\pm5.4$ & $98.7\pm4.9$\
MIPS5 & 0.641 & 189.39383 & 62.28978 & $1.0\pm0.06$ & $5.0\pm0.2$ & $15.4\pm0.7$ & $24.3\pm1.0$ & $167.9\pm8.4$ & $113.9\pm5.7$ & $124.9\pm6.2$ & $100.9\pm5.0$\
MIPS6 & 0.639 & 189.09367 & 62.26231 & $5.3\pm0.2$ & $12.1\pm0.5$ & $24.5\pm1.1$ & $30.4\pm1.3$ & $67.3\pm3.3$ & $46.5\pm2.3$ & $56.4\pm2.8$ & $65.8\pm3.3$\
MIPS7 & 0.792 & 189.23306 & 62.13559 & $0.3\pm0.06$ & $2.0\pm0.09$ & $8.5\pm0.3$ & $13.5\pm0.6$ & $112.9\pm5.6$ & $80.8\pm4.0$ & $77.9\pm3.9$ & $80.9\pm4.0$\
MIPS3419 & 1.70 & 189.17568 & 62.28963 & $0.02\pm0.01$ & $0.06\pm0.01$ & $0.35\pm0.03$ & $0.55\pm0.04$ & $33.5\pm1.6$ & $38.5\pm1.9$ & $34.3\pm1.7$ & $23.5\pm1.2$\
MIPS5581 & 0.839 & 189.28491 & 62.25418 & $2.4\pm0.1$ & $3.9\pm0.2$ & $7.9\pm0.3$ & $9.9\pm0.4$ & $27.3\pm1.3$ & $18.7\pm0.9$ & $20.0\pm1.0$ & $14.7\pm0.8$\
[cccccccc]{} MIPS 4 & $777\pm20$& $1221\pm12$ & $11.0\pm 0.66$ & $161\pm12$ & 126 & 2.4 & $11.90\pm0.05$\
MIPS 5 & $575\pm11$& $750\pm7$ &$5.6\pm0.67$ &$91\pm14$ & 79 & 1.4 &$11.75\pm0.04$\
MIPS 6 & $398\pm5$& $721\pm7$ &$11.0\pm0.66$ &$64\pm16$& 75 & 1.0 &$11.70\pm0.04$\
MIPS 7 & $582\pm10$& $832\pm8$ &$14.0\pm0.7$&$104\pm11$& 81& 2.7 &$11.89\pm0.05$\
MIPS 3419 &$54\pm7$& $113\pm6$ & $<3.0$ & $<25$ & 19 & $<4.0$ &$11.76\pm0.10$\
MIPS 5581 &$194\pm5$ & $201\pm6$ & $<3.0$& $16\pm5$& 17 & 0.6 &$11.22\pm0.09$\
[cccc]{} E & 100 & 100 & 3000\
S0 & 500 & 100 & 5000\
Sa & 1500 & 500 & …\
Sb & 2500 & 1000 & …\
Sbc & 5000 & 1000 & …\
Sc & 10000 & 2000 & …\
Sd & 20000 & 2000 & …\
Irr & 20000 & 5000 & …\
[cccccc]{} MIPS 4 & Sbc & 7 & $1.8\times10^{11}$ & 0.18 & 5.15\
MIPS 5 & Sa & 6 & $2.8\times10^{11}$ & 0.72 & 9.75\
MIPS 6 & Sbc & 5 & $7.3\times10^{10}$ & 0.03 & 8.68\
MIPS 7 & E & 3 & $2.3\times10^{11}$ & 0.88 & 15.75\
MIPS 3419 & Sd & 3 & $2.6\times10^{11}$ & 2.73 & 6.48\
MIPS 5581 & Sb & 3 & $3.4\times10^{10}$ & 0.03 & 22.26\
[ccccccc]{} MIPS 4 & $3.7\pm0.7$ & $4.92\pm1.45$&$4.67\pm0.58$& 227&137.9&$2.7\pm0.8$\
MIPS 5 & $3.2\pm0.8$ & $3.66\pm0.75$ &$5.87\pm0.68$& 58 &97.8&$2.81\pm0.8$\
MIPS 6 & $2.0\pm0.3$ & $2.75\pm0.30$&$3.69\pm0.69$& 119 &86.3 &$2.48\pm0.5$\
MIPS 7 & $1.4\pm0.5$ & $1.34\pm0.62$&$4.16\pm1.66$& 45 &136.0&$1.29\pm0.6$\
MIPS 3419 & $0.0\pm0.1$ & $0.00\pm0.11$&$<0.18$& $<11$ &99.5&$<1.9$\
MIPS 5581 & $ 0.6\pm0.1$ & $0.67\pm0.15$&$0.17\pm0.9$& 106 &28.5&$3.5\pm1.0$\
[crlrlc]{} 4 & 1.42 &(1.86)& $<3.21$ &(2.33)& $>1.45$\
5 & $<0.76$ &(1.31)& $<7.49$ &(1.68)& …\
6 & 0.65 &(1.22) & 2.54&(1.48) & 1.06\
7 & 1.06 & (1.07)&$<2.73$ &(1.32)& $>1.35$\
5581 & $<0.29$&(0.24) & $<1.55$&(0.33) & …\
|
---
abstract: 'Connected radio interferometers are sometimes used in the tied-array mode: signals from antenna elements are coherently added and the sum signal applied to a VLBI backend or pulsar processing machine. Usually there is no computer-controlled amplitude weighting in the existing radio interferometer facilities. Radio frequency interference (RFI) mitigation with phase-only adaptive beamforming is proposed for this mode of observation. Small phase perturbations are introduced into each antenna’s signal. The values of these perturbations are optimized in such a way that the signal from a radio source of interest is preserved and the RFI signals are suppressed. An evolutionary programming algorithm is used for this task. Computer simulations, made for both one-dimensional and two-dimensional array set-ups, show considerable suppression of RFI and acceptable changes to the main array beam in the radio source direction.'
---
[**RFI mitigation with phase-only adaptive beamforming**]{}\
P. A. Fridman\
ASTRON, Dwingeloo, P.O. Box 2, 7990 AA, The Netherlands\
e-mail: [email protected]
Introduction
============
Suppression of radio frequency interference (RFI) with adaptive beamforming is widely used in radio astronomy, radar and telecommunications. The main idea behind many algorithms proposed for use in radio astronomy consists of weighting the outputs of array elements in such a way as to create zero values in the synthesized array pattern in the direction of RFI and to keep the signal of interest (SOI), the radio source to be observed, in the maximum of the main lobe without significant loss of gain [@widrow1985; @gab1992; @krim1996]. During recent years there has been a growing interest in radio astronomy for applying these methods of RFI mitigation both to existing radio telescopes and to future generation projects. There are several specific features of the large connected radio interferometers (RI) used in radio astronomy such as Westerbork Synthesis Radio Telescope (WSRT), Very Large Array (VLA) and Giant Metrewave Radio Telescope (GMRT) which make the straightforward application of this adaptive beam-forming different and difficult when compared to classic phased arrays:
1\. Connected RI are highly sparse arrays.
2\. Their main mode of operation is correlation processing.
3\. Direction of arrival (DOA) of a signal of interest is a known and time-dependent vector.
4\. There is no computer-controlled amplitude weighting in the existing RI backend hardware.
5\. There is an auxiliary [*tied-array*]{} facility which is used during VLBI and pulsar observations. The mode of observation is similar to that of standard phased arrays: the signals from the antennas are added but without amplitude weighting, because the antennas of the RI are identical. There is a phase-only control allowing coherent adding. The RI works as a “single dish”.
6\. Noise-like radio source signals are usually much weaker than system noise (antenna + receiver) and RFI.
Phase-only adaptive nulling is proposed for RFI mitigation during tied-array observations. Small phase perturbations are introduced into the signals of every antenna. The values of these perturbations are optimized in such a way that the signal from the SOI is preserved and the RFI signals are suppressed. This technique has been widely discussed [@thompson1976; @leavitt1976; @stey1983; @guisto1983; @haupt1997; @davis1998; @smith1999] and is well suited to tied-array observations.
Narrow-band model of SOI and RFI
================================
There are two approaches to adaptive beam-forming: narrow-band (complex weighting of amplitudes and phases) and wide-band (digital filtering, delay-tap weighting). The narrow-band approach will be used in the following text.
Let us consider an equidistant M-element linear array. The $M$-dimensional array output vector $X(\theta )$, as a function of an angle, i. e., the complex amplitude of the temporal signal $x(t)=X(\theta )e^{j2\pi f_{0}t},$ consists of the following components: $$X(\theta )=S(\theta _{0})+\sum_{n=1}^{N}RFI_{n}(\theta _{n})+N_{sys}$$
where $S(\theta _{0})$ is the signal vector corresponding to the plane wave coming from the direction $\theta _{0},RFI_{n}(\theta _{n})$ is the $n$th RFI vector, coming from any direction $\theta _{n},N_{sys}$ is the system noise vector. These three components are uncorrelated. Vector $S(\theta _{0})$ depends on the incidence angle $\theta _{0}$ of the plane wave, measured with respect to the normal to linear array $$S(\theta _{0})=[1,e^{-i\varphi _{0}},...e^{-i(M-1)\varphi _{0}}]^{T}$$ where phase shift $\varphi _{0}=(2\pi d/\lambda )\sin (\theta _{0}),d$ is the spacing between array elements, $\lambda $ is the wavelength. The phase of the first antenna is chosen to be equal to 0.
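As an illustration of this model, the short Python sketch below builds the steering vector $S(\theta)$ of an equidistant $M$-element linear array; the spacing $d$, the wavelength $\lambda$, and the incidence angle are free parameters to be supplied by the user.

```python
import numpy as np

def steering_vector(M, d, wavelength, theta):
    """Plane-wave response S(theta) = [1, e^{-i*phi}, ..., e^{-i*(M-1)*phi}]
    of an M-element equidistant linear array, with
    phi = (2*pi*d/wavelength) * sin(theta)   (theta in radians)."""
    phi = 2.0 * np.pi * d / wavelength * np.sin(theta)
    return np.exp(-1j * phi * np.arange(M))

# Example: a 14-element half-wavelength array with the SOI at broadside.
S0 = steering_vector(M=14, d=0.5, wavelength=1.0, theta=0.0)
```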
The beamformer, in general, consists of the complex weights $w_{m}e^{i\phi _{m}},m=1...M,$ which form the beamformer vector $W$ $$W=[1,w_{2}e^{i\phi _{2}},...w_{M}e^{i\phi _{M}}]^{T}.$$ The output of the phased array is $$Y=W^{H}X.$$ The beamformer should satisfy both following requirements:
a\) [*steering capability*]{}: the SOI is protected ($W^{H}S=g)$, for a prescribed direction $\theta _{0},$ the response of the array is constant regardless of what values are assigned to the weights $W$;
b\) the effects of RFI should be minimized.
The [*minimum-variance distortionless response*]{} (MVDR) beamforming algorithm minimizes the variance of the beamformer output subject to this constraint with $g=1$ [@capon1969]. The solution for $W$ in this case is $$W_{MVDR}=R^{-1}S\left( \theta _{0}\right) [S\left( \theta _{0}\right) ^{H}R^{-1}S\left( \theta _{0}\right) ]^{-1},$$ where $R$ is the correlation matrix of $X$.
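For reference, the MVDR solution can be evaluated in a few lines; in this sketch the correlation matrix `R` is assumed to have been estimated elsewhere from the antenna voltages, and the distortionless constraint $W^{H}S=1$ can be checked numerically.

```python
import numpy as np

def mvdr_weights(R, S):
    """W = R^{-1} S / (S^H R^{-1} S): unit response towards S, minimum output variance."""
    Rinv_S = np.linalg.solve(R, S)          # R^{-1} S without forming the inverse
    return Rinv_S / (S.conj() @ Rinv_S)

# W = mvdr_weights(R, S0)
# assert np.isclose(W.conj() @ S0, 1.0)     # the constraint W^H S = 1
```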
As was mentioned in the introduction, WSRT and other large radio astronomy arrays cannot use this algorithm in real time with the existing equipment because there are no amplitude control facilities. That is to say that “RFI nulling” is limited to [*phase-only*]{} control.
Phase-only adaptive nulling
===========================
Phase-only weights can be found to be the solution to the following system of nonlinear equations:
$$\begin{aligned}
Real\left\{\sum_{m=1}^{M}e^{i\phi _{m}}S_{m}\right\} &=& M \\
Imag\left\{\sum_{m=1}^{M}e^{i\phi _{m}}S_{m}\right\} &=& 0 \\
Real\left\{\sum_{m=1}^{M}e^{i\phi _{m}}RFI_{m}\right\} &=& 0 \\
Imag\left\{\sum_{m=1}^{M}e^{i\phi _{m}}RFI_{m}\right\} &=& 0.\end{aligned}$$
$e^{i\phi _{m}}$ are the weights in our phase-only case. The vector $S$ is known and is determined by the SOI coordinates. The construction of the $RFI$ vector requires knowledge of the RFI’s DOA, which may be known beforehand or could be obtained from the observed correlation matrix $\widehat{R_{ij}}=<x_{i}x_{j}>,i,j=1...M$, because the SOI is always much weaker than the RFI. But a special correlator is necessary for this purpose, in order to follow the rapid scintillations of the RFI, which are usually averaged out by the main radio interferometer correlator. So, in principle, the system of equations (6)-(9) can be solved and the phase corrections $\phi _{m}$ introduced into the phase control system. The optimal solution (5) can be used as a zeroth-order approximation for $\phi _{m}$. The practical difficulties of implementing the solution of the system (6)-(9) are not discussed here.
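As a purely numerical illustration (not the approach ultimately advocated here), the system (6)-(9) can be tackled in a least-squares sense once the SOI and RFI steering vectors are available; in the sketch below `S0`, `RFI1`, and `RFI2` are hypothetical vectors built, for instance, with the `steering_vector` helper above.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(phi, S, rfi_vectors):
    """Real and imaginary parts of Eqs. (6)-(9) for phase-only weights e^{i*phi}."""
    w = np.exp(1j * phi)
    res = [np.real(w @ S) - len(phi), np.imag(w @ S)]
    for rfi in rfi_vectors:
        res += [np.real(w @ rfi), np.imag(w @ rfi)]
    return np.array(res)

# phi0 = np.zeros(M)   # or phases extracted from the MVDR solution (5)
# sol = least_squares(residuals, phi0, args=(S0, [RFI1, RFI2]))
# phi_opt = sol.x
```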
Total power detector output
===========================
A more practical method of calculating the phase corrections $\phi _{m}$ in the tied array case is proposed here. The tied array total power detector output ($TPD_{TA}$) is $$TPD_{TA}=<\int_{0}^{T}[x(t)]^{2}dt>=TPD_{sig}+TPD_{RFI}+TPD_{N},$$
where the total power components $TPD_{sig},TPD_{RFI},TPD_{N}$ correspond to the SOI, RFI and system noise, respectively, $<...>$ means the statistical expectation. The mean value of the signal component is $$TPD_{sig}(\phi _{m})=<\int_{0}^{T}\left\{ \sum_{m=1}^{M}\cos [2\pi f_{0}t+2\pi (m-1)d\sin (\theta _{0})/\lambda +\phi _{m}]dt\right\}^{2} >,$$ the mean value of the n-th RFI component is $$TPD_{RFI,n}(\phi _{m})=<\int_{0}^{T}\left\{ \sum_{m=1}^{M}A_{RFI,n}\cos [2\pi f_{0}t+2\pi (m-1)d\sin (\theta _{RFI,n})/\lambda +\phi _{m}]dt\right\}^{2} >,$$ the mean value of the system noise component is constant. We assume also that $TPD_{sig}\ll TPD_{RFI,n}$ and the different $RFI_{n}$ are uncorrelated.
Considering the TPD output as a function of the $M$ variables $\phi _{m}$, the following criterion for a “good” vector $\Phi =[\phi _{1}\ldots\phi _{M}]^{T}$ can be proposed: $$C(\Phi)=\frac{TPD_{sig}(\Phi )}{\sum_{n=1}^{N}TPD_{RFI,n}(\Phi )+TPD_{N}}\rightarrow \max .$$ The denominator $\sum_{n=1}^{N}TPD_{RFI,n}(\Phi )+TPD_{N}$ is the total TPD output under the assumption $TPD_{sig}\ll TPD_{RFI,n}$. The numerator $TPD_{sig}(\Phi)$ can be calculated for each given $\Phi$ and $\theta _{0}$ (the DOA of the SOI). Therefore, by maximizing $C(\Phi)$ with a proper choice of $\Phi$, a higher signal-to-RFI-plus-noise ratio at the tied-array output can be achieved.
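The criterion can be evaluated as sketched below: the numerator follows from the narrow-band model, where the time average of the squared cosine sum reduces to half the squared modulus of the corresponding phasor sum (unit amplitudes are assumed), and the denominator is the measured total power, represented here by a user-supplied number `tpd_total`.

```python
import numpy as np

def tpd_signal(phi, d, wavelength, theta0):
    """Mean SOI term of the total-power detector for phase perturbations phi:
    0.5 * |sum_m exp(i*psi_m)|^2 with
    psi_m = 2*pi*(m-1)*d*sin(theta0)/wavelength + phi_m."""
    m = np.arange(len(phi))
    psi = 2.0 * np.pi * m * d * np.sin(theta0) / wavelength + phi
    return 0.5 * np.abs(np.exp(1j * psi).sum()) ** 2

def fitness(phi, tpd_total, d, wavelength, theta0):
    """C(Phi): computable SOI power over the measured RFI-plus-noise power
    (valid under the assumption TPD_sig << TPD_RFI)."""
    return tpd_signal(phi, d, wavelength, theta0) / tpd_total
```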
This is a classic $M$-variable optimization problem which is difficult to solve by the common gradient methods because of the multimodality of $C({\bf \Phi })$: there are many local (secondary) maxima, and a search algorithm will “get stuck” at one of them without finding the global maximum.
Genetic algorithms (GA) search for the solution over the set of variables through the use of simulated evolution, i.e., a [*survival of the fittest*]{} strategy. In contrast to gradient algorithms, which are, in general, calculus-based, GAs, first introduced by [@holland1975], exploit [*guided random techniques*]{} during the optimization procedure [@gold1989; @mich1992; @haupt1995]. The multimodality problem is successfully overcome by this algorithm.
A simplified block diagram of a GA implementation in a radio interferometer is depicted in Figure 1. A phase control subsystem introduces a certain initial phase distribution $\Phi _{0}$ corresponding to the radio source coordinates and preliminary phase calibration corrections. The output of the TPD is then continuously measured and used to supply the GA program with the data (cost function samples) which monitor the performance of the tied array with respect to RFI. The GA uses these data to calculate new phases $\Phi _{m}$ with the aim of maximizing $C(\Phi)$. These new phases are introduced into the phase control subsystem after each iteration, and a new value of the TPD output signal is used for the next step. Thus the feedback loop, [*phase control subsystem - TPD - GA*]{}, maintains a low value of $TPD_{TA}(\Phi)$ and therefore a high value of the fitness function $C(\Phi)$, i.e., a high signal-to-RFI ratio.
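The feedback loop of Figure 1 can be sketched as below. This is a deliberately simplified, mutation-only evolutionary loop rather than a full GA with crossover; `measure_tpd` stands for the (hypothetical) call that applies the trial phases via the phase control subsystem and reads back the total-power detector, and `tpd_sig` is the computable SOI term, e.g. `tpd_signal` above.

```python
import numpy as np

def evolve_phases(measure_tpd, tpd_sig, M, pop=40, gens=200, sigma=0.1, seed=0):
    """Evolve phase perturbations that maximize C(Phi) = tpd_sig(phi) / measure_tpd(phi)."""
    rng = np.random.default_rng(seed)
    P = rng.normal(0.0, sigma, size=(pop, M))                       # initial population
    for _ in range(gens):
        fitness = np.array([tpd_sig(p) / measure_tpd(p) for p in P])
        parents = P[np.argsort(fitness)[::-1][: pop // 2]]          # keep the fittest half
        children = parents + rng.normal(0.0, sigma, parents.shape)  # mutate
        P = np.vstack([parents, children])
    return max(P, key=lambda p: tpd_sig(p) / measure_tpd(p))        # best phase vector
```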
Computer simulation
====================
Computer simulation was performed to illustrate the effectiveness of the phase-only nulling in RFI mitigation.
First, a 14-element half-wavelength linear array was modelled. The SOI direction is $0^{\circ }$, and there are two RFI signals: one at the angle $-20.1^{\circ }$, and the other at the angle $+10.015^{\circ }$. Figure 2 shows, in logarithmic scale, the quiescent (dashed line) and adapted (solid line) array patterns. The significant suppression of RFI with the adapted pattern is clearly visible, while the quiescent pattern has secondary lobe maxima at the RFI positions. Figure 3 shows the corresponding array phase distribution.
Sparse 14-element array patterns are presented in Figures 4, 5, 6 and 7. The distance between the elements is $144$ m and the central frequency is $1420$ MHz. The lobes are much narrower than for the half-wavelength array, so several figures are given to illustrate the result of phase-only nulling. Figure 4 (a linear-scale presentation of the pattern) illustrates the loss and distortions of the main lobe, which are more visible in the linear scale, whereas RFI suppression is better seen in the logarithmic scale. Figure 5 shows the quiescent and adapted patterns around the angle $0^{\circ }$ in logarithmic scale, and Figures 6 and 7 show the same patterns around the directions of RFI1 (DOA$=+10.015^{\circ })$ and RFI2 (DOA$=-20.1^{\circ }).$ The corresponding array phase distribution is shown in Fig. 8.
The subsequent figures illustrate this phase-only nulling for a two-dimensional planar array.
Adaptive nulling was simulated for a half-wavelength array with $10\times10$ elements; the central frequency is $1420$ MHz. Rectangular coordinates $a1$ and $a2$ are angles measured from the $x$ and $y$ axes, respectively, to the line from the array to the radio source; thus the SOI is at the zenith, with coordinates $(90^{\circ }, 90^{\circ }).$ Coordinates of the RFI were chosen so as to put them on the maxima of the secondary lobes; the values of RFI suppression are shown in the captions. The following sequence of figures is given:
Figure 9: normalized (A($90^{\circ },90^{\circ })$=1) quiescent array with indicated RFI positions;
Figure 10: array’s pattern after adaptation;
Figure 11: this array’s phase distribution after adaptation.
Conclusions
===========
1\. Existing large radio interferometers (WSRT, VLA, GMRT) have only phase control facilities, and [*real-time*]{} adaptive nulling in the RFI direction should take this constraint into account.
2\. The total power detector at the tied-array output can be used for phase-only RFI mitigation as an indicator of the level of RFI.
3\. The Genetic Algorithm is a convenient tool for cost function maximization during the search for the optimal array phase distribution.
4\. Computer simulations show significant RFI mitigation for the sparse linear array in the narrow-band approximation ($\Delta f/f_{0}<<1$).
5\. Phase-only nulling can also be used for real-time RFI mitigation at the station’s level in new projects such as ATA, LOFAR and SKA.
Capon, J. (1969), High-Resolution Frequency-Wavenumber Spectrum Analysis, [*Proc. of the IEEE*]{}, 57, 1408–1418.
Davis, R. M. (1998), Phase-Only LMS and Perturbation Adaptive Algorithms, [*IEEE Trans. on Aerosp. and Electr. Syst.*]{}, 34, 169–178.
Gabriel, W. F. (1992), Adaptive Processing Array Systems, [*Proc. IEEE*]{}, 80, 152–162.
Giusto, R., and de Vincenti, P. (1983), Phase-Only Optimization for Generation of Wide Deterministic Nulls in the Radiation Pattern of Phased Arrays, [*IEEE Trans. on Aerosp. and Electr. Syst.*]{}, 31, 814–817.
Goldberg, D. E. (1989), [*Genetic Algorithms in Search, Optimization and Machine Learning*]{}, Addison-Wesley Publishing Company, Inc., NY.
Haupt, R. L. (1995), An Introduction to Genetic Algorithms for Electromagnetics, [*IEEE Ant. and Propag. Mag.*]{}, 37, 7–15.
Haupt, R. L. (1997), Phase-Only Adaptive Nulling with a Genetic Algorithm, [*IEEE Trans. Ant. and Propag.*]{}, AP-45, 1009–1015.
Holland, J. H. (1975), [*Adaptation in Natural and Artificial Systems*]{}, 1st ed., University of Michigan Press, Ann Arbor; 2nd ed.: 1992, MIT Press, Cambridge.
Krim, H., and Viberg, M. (1996), Two Decades of Array Signal Processing Research, [*IEEE Sig. Proc. Mag.*]{}, 67–94, July.
Leavitt, M. K. (1976), A Phase Adaptation Algorithm, [*IEEE Trans. on Ant. and Propag.*]{}, AP-24, 754–756.
Michalewicz, Z. (1992), [*Genetic Algorithms + Data Structures = Evolution Programs*]{}, Springer.
Smith, S. T. (1999), Optimum Phase-Only Adaptive Nulling, [*IEEE Trans. on Sig. Proc.*]{}, 47, 1835–1843.
Steyskal, H. (1983), Simple Method for Pattern Nulling by Phase Perturbation, [*IEEE Trans. on Ant. and Propag.*]{}, 31, 163–166.
Thompson, P. A. (1976), Adaptation by Direct Phase-Shift Adjustment in Narrow-Band Adaptive Antenna Systems, [*IEEE Trans. on Ant. and Propag.*]{}, 24, 756–760.
Widrow, B., and Stearns, S. (1985), [*Adaptive Signal Processing*]{}, Prentice-Hall, Inc., Englewood Cliffs, N.J.
{height="9.0cm" width="12.0cm"}
{height="9.0cm" width="12.0cm"}
{height="9.0cm" width="12.0cm"}
{height="9.0cm" width="12.0cm"}
{height="9.0cm" width="12.0cm"}
{height="9.0cm" width="12.0cm"}
{height="9.0cm" width="12.0cm"}
{height="9.0cm" width="12.0cm"}
![Two-dimensional 10x10-element half-wavelength, [*quiescent*]{} array pattern, linear scale, central frequency=1420MHz, RFI-1 at $[45^{\circ},90^{\circ}]$, RFI-2 at $[90^{\circ},135^{\circ}]$, directions of RFI coincide with the maximums of the sidelobes.](fig9.eps){width="15cm" height="9cm"}
![Two-dimensional 10x10-element half-wavelength, [*adapted*]{} array pattern, central frequency=1420MHz, RFI-1 at $[45^{\circ},90^{\circ}]$, RFI-2 at $[90^{\circ},135^{\circ}]$, linear scale; RFI-1 suppression=106.2dB, RFI-2 suppression=103.1dB.](fig10.eps){width="15cm" height="9cm"}
{width="15cm" height="12cm"}
|
---
abstract: 'A permutation array (or code) of length $n$ and distance $d$, denoted by $(n,d)$ PA, is a set of permutations $C$ from some fixed set of $n$ elements such that the Hamming distance between distinct members $\mathbf{x},\mathbf{y}\in C$ is at least $d$. Let $P(n,d)$ denote the maximum size of an $(n,d)$ PA. New upper bounds on $P(n,d)$ are given. For constants $\alpha,\beta$ satisfying certain conditions, whenever $d=\beta n^{\alpha}$, the new upper bounds are asymptotically better than the previous ones.'
author:
- 'Lizhen Yang, Ling Dong, Kefei Chen [^1][^2] [^3] [^4]'
bibliography:
- 'IEEEabrv.bib'
- 'bib.bib'
title: New Upper Bounds on Sizes of Permutation Arrays
---
permutation arrays (PAs), permutation code, upper bound.
Introduction
============
Let $\Omega$ be an arbitrary nonempty finite set. Two distinct permutations $\mathbf{x},\mathbf{y}$ over $\Omega$ have distance $d$ if $\mathbf{x}\mathbf{y}^{-1}$ has exactly $d$ unfixed points. A permutation array (permutation code, PA) of length $n$ and distance $d$, denoted by $(n,d)$ PA, is a set of permutations $C$ from some fixed set of $n$ elements such that the distance between distinct members $\mathbf{x},\mathbf{y}\in C$ is at least $d$. An $(n,d)$ PA of size $M$ is called an $(n,M,d)$ PA. The maximum size of an $(n,d)$ PA is denoted as $P(n,d)$.
PAs were studied to some extent in the 1970s. A recent application by Vinck [@Ferreira00; @Vinck00Code; @Vinck00Coded; @Vinck00Coding] of PAs to a coding/modulation scheme for communication over power lines has created renewed interest in PAs. But there are still many unsolved problems concerning PAs; one of the essential problems is to compute the values of $P(n,d)$. It is known that determining the exact values of $P(n,d)$ is a difficult task; except for special cases, one can only establish lower and upper bounds on $P(n,d)$. In this correspondence, we give some new upper bounds on $P(n,d)$, which are asymptotically better than the previous ones.
Concepts and Notations
----------------------
We introduce concepts and notations that will be used throughout the correspondence.
Since for two sets $\Omega,\Omega'$ of the same size, the symmetric groups $Sym(\Omega)$ and $Sym(\Omega')$ formed by the permutations over $\Omega$ and $\Omega'$ respectively, under composition of mappings, are isomorphic, we need only consider PAs over $Z_n=\{0,1,\ldots,n-1\}$, and we write $S_n$ to denote the symmetric group $Sym(Z_n)$. In the rest of the correspondence, unless otherwise stated, we always assume that PAs are over $Z_n$. We also write a permutation $\mathbf{a}\in S_n$ as an $n-$tuple $(a_0,a_1,\ldots,a_{n-1})$, where $a_i$ is the image of $i$ under $\mathbf{a}$ for each $i$. In particular, we write the identity permutation $(0,1,\ldots,n-1)$ as $\mathbf{1}$ for convenience. The Hamming distance $d(\mathbf{a},\mathbf{b})$ between two $n-$tuples $\mathbf{a}$ and $\mathbf{b}$ is the number of positions where they differ. Then the distance between any two permutations $\mathbf{x},\mathbf{y}\in S_n$ coincides with their Hamming distance.
Let $C$ be an $(n,d)$ PA. For an arbitrary permutation $\mathbf{x}\in S_n$, $d(\mathbf{x},C)$ stands for the Hamming distance between $\mathbf{x}$ and $C$, i.e., $d(\mathbf{x},C)=\min_{\mathbf{c}\in C}d(\mathbf{x},\mathbf{c})$. A permutation in $C$ is also called a codeword of $C$. For convenience of discussion, without loss of generality, we always assume that $\mathbf{1}\in C$, and that the indices of an $n-$tuple (vector, array) start from $0$. The support of a binary vector $\mathbf{a}=(a_0,a_1,\ldots,a_{n-1})\in\{0,1\}^n$ is defined as the set $\{i:a_i=1,i\in Z_n\}$, and the weight of $\mathbf{a}$ is the size of its support, namely the number of ones in $\mathbf{a}$. The support of a permutation $\mathbf{x}=(x_0,x_1,\ldots,x_{n-1})\in S_n$ is defined as the set of the points not fixed by $\mathbf{x}$, namely $\{i\in Z_n: x_i\neq i\}=\{i\in Z_n: \mathbf{x}(i)\neq i\}$, and the weight of $\mathbf{x}$, denoted as $wt(\mathbf{x})$, is defined as the size of its support, namely the number of points in $Z_n$ not fixed by $\mathbf{x}$.
A derangement of order $k$ is an element of $S_k$ with no fixed points. Let $D_k$ be the number of derangements of order $k$, with the convention that $D_0=1$. Then $D_k=k!\sum_{i=0}^k\frac{(-1)^i}{i!}=\left[\frac{k!}{e}\right]$, where $[x]$ is the nearest integer function, and $e$ is the base of the natural logarithm. The ball in $S_n$ of radius $r$ with center $\mathbf{x}$ is the set of all permutations of distance $\leq r$ from $\mathbf{x}$. The volume of such a ball is $$\label{eq:vol-PA}
V(n,r)=\sum_{i=0}^{r}{n\choose i}D_i.$$
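For concreteness, $D_k$ and $V(n,r)$ can be computed exactly with a few lines of Python; the sketch below uses the standard recurrence $D_k=(k-1)(D_{k-1}+D_{k-2})$, which is equivalent to the alternating sum above.

```python
import math

def derangements(k):
    """D_k, computed exactly via D_0 = 1, D_1 = 0, D_k = (k-1)*(D_{k-1} + D_{k-2})."""
    d = [1, 0]
    for i in range(2, k + 1):
        d.append((i - 1) * (d[-1] + d[-2]))
    return d[k]

def ball_volume(n, r):
    """V(n, r) = sum_{i=0}^{r} C(n, i) * D_i: permutations within distance r of a fixed one."""
    return sum(math.comb(n, i) * derangements(i) for i in range(r + 1))

assert [derangements(k) for k in range(5)] == [1, 0, 1, 2, 9]
assert ball_volume(20, 3) == 1 + 190 + 2 * 1140   # = 2471
```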
An $(n,d,w)$ constant-weight binary code is a set of binary vectors of length $n$, such that each vector contains $w$ ones and $n-w$ zeros, and any two vectors differ in at least $d$ positions. The largest possible size of an $(n,d,w)$ constant-weight binary code is denoted as $A(n,d,w)$. Similarly, we define an $(n,d,w)$ constant-weight PA as an $(n,d)$ PA such that each permutation is of weight $w$, and denote the largest possible size of an $(n,d,w)$ constant-weight PA as $P(n,d,w)$.
The concept of $P(n,d)$ can be further generalized. Let $\Omega\subseteq S_n$, then $P_{\Omega}(n,d)$ denotes the maximum size of an $(n,d)$ PA $C$ such that $C\subseteq\Omega$. For trivial case $\Omega=S_n$, $P(n,d)=P_{\Omega}(n,d)$.
Previous Results
----------------
The most basic upper bound on $P(n,d)$ is given by Deza and Vanstone [@Deza78].
[@Deza78]. $$\label{upper:deza}
P(n,d)\leq \frac{n!}{(d-1)!}$$
We call the PAs that attain the Deza-Vanstone bound perfect PAs; the known perfect PAs are the following:
- $(n,n,n)$ PAs for each $n\geq 1$;
- $(n,n!,2)$ PAs for each $n\geq 1$;
- $(n,n!/2,3)$ PAs for each $n\geq 1$ [@wensong04];
- $(q,q(q-1),q-1)$ PAs for each prime power $q$ [@Blake74];
- $(q+1,(q+1)q(q-1),q-1)$ PAs for each prime power $q$ [@Blake74];
- $(11,11\cdot10\cdot9\cdot8,8)$ PA [@Blake74];
- $(12,12\cdot11\cdot10\cdot9\cdot8,8)$ PA [@Blake74].
The Deza-Vanstone bound can be derived by recursively applying the following inequality.
[@wensong04].\[prop:n,n-1:prop:elementay:conse:P(n,d)\] $$P(n,d)\leq
nP(n-1,d).\label{eq:n,n-1:prop:elementay:conse:P(n,d)}$$
Then for $d\leq m<n$, if we know $P(m,d)\leq M<\frac{m!}{(d-1)!}$, we can get a stronger upper bound on $P(n,d)$: $$P(n,d)\leq \frac{n!P(m,d)}{m!}\leq\frac{n!M}{m!}.$$ Another nontrivial upper bound on $P(n,d)$ is the sphere packing bound obtained by considering the balls of radius $\lfloor
(d-1)/2\rfloor$ [@wensong04].
$$\label{eq:sphere-packing-upper-bound}
P(n,d)\leq \frac{n!}{V(n,\lfloor (d-1)/2\rfloor)}.$$
For small values of $n$ and $d$, still stronger upper bounds are found in Tarnanen [@Tarnanen99] by the method of linear programming.
Organization and New Results
----------------------------
The correspondence is organized as follows. In Section II, we first prove a relation between $P(n,d)$ and $P_{\Omega}(n,d)$ that is the inequality $$P(n,d)\leq \frac{n!P_{\Omega}(n,d)}{|\Omega|}.$$ Next, we give some elementary properties of $P(n,d,w)$, and then use them to show a new upper bound on $P(n,d)$ for $d$ is even and a new upper bound on $P(n,d)$ for $d$ is odd. They are given by the following inequalities: $$P(n,2k)\leq \frac{n!}{V(n,k-1)+\frac{{n\choose k}D_k}{\lfloor
n/k\rfloor}}, \mbox{ for } 2\leq k\leq \lfloor n/2\rfloor;$$ $$P(n,2k+1)\leq \frac{n!}{V(n,k)+\frac{{n\choose
k+1}D_{k+1}-A(n-k,2k,k+1){n\choose k}D_k}{A(n,2k,k+1)}}, \mbox{
for } 2\leq k\leq \lfloor (n-k-1)/2\rfloor.$$ In Section III, we compare the upper bounds on $P(n,d)$ and show that for constants $\alpha,\beta$ satisfying certain conditions, whenever $d=\beta n^{\alpha}$, the new upper bounds are asymptotically better than the previous ones.
The New Upper Bounds
====================
\[thm:Snboundsubset\] Let $\Omega$ be a subset of $S_n$. Then $$P(n,d)\leq \frac{n!P_{\Omega}(n,d)}{|\Omega|}.$$
Suppose $C$ is an $(n,P(n,d),d)$ PA. For any $\mathbf{x}\in S_n$, let $\mathbf{x}C=\{\mathbf{x}\mathbf{c}:\mathbf{c}\in C\}$. Then $$\begin{aligned}
\sum_{\mathbf{x}\in S_n} |\mathbf{x}C\cap
\Omega|&=&\sum_{\mathbf{c}\in
C}\sum_{\mathbf{\omega\in\Omega}}|\{\mathbf{x}\in
S_n:\mathbf{x}\mathbf{c}=\omega\}|\\
&=&\sum_{\mathbf{c}\in C}\sum_{\mathbf{\omega\in\Omega}}|\{
\omega\mathbf{c}^{-1}\}|\\
&=&P(n,d)|\Omega|.\end{aligned}$$ On the other hand, there must exist $\mathbf{x}'\in S_n$ such that $n!|\mathbf{x}'C\cap \Omega|\geq \sum_{\mathbf{x}\in
S_n}|\mathbf{x}C\cap \Omega|$. Then $n!|\mathbf{x}'C\cap
\Omega|\geq P(n,d)|\Omega|$, in other words, $P(n,d)\leq
\frac{n!|\mathbf{x}'C\cap \Omega|}{|\Omega|}$. This, in conjunction with $|\mathbf{x}'C\cap \Omega|\leq P_{\Omega}(n,d)$, yields the theorem.
Since $S_{d}$ can be considered as a subset of $S_n$ for $d\leq n$ (and likewise $S_{n-1}\subseteq S_n$), Theorem \[prop:n,n-1:prop:elementay:conse:P(n,d)\] is also a direct consequence of the above theorem; in fact, taking $\Omega=S_d$ gives $$P(n,d)\leq
\frac{|S_n|P(d,d)}{|S_d|}=\frac{n!d}{d!}=\frac{n!}{(d-1)!},$$ which is the Deza-Vanstone bound. The following is also obtained immediately from Theorem \[thm:Snboundsubset\].
$$P(n,d)\leq \frac{n!P(n,d,w)}{{n\choose w}D_w}.$$
The following are well-known elementary properties of $A(n,d,w)$, which will be applied to the proof of the properties of $P(n,d,w)$.
\[lem:elem:proper:A(n,d,w)\] $$\begin{aligned}
A(n,d,w)&=&1, \mbox{ if } d>2w;\\
A(n,2w,w)&=&\left\lfloor\frac{n}{w}\right\rfloor;\\
A(n,2k,k+1)&\leq& \left\lfloor \frac{n}{k+1}\left\lfloor
\frac{n-1}{k}\right\rfloor\right\rfloor.\end{aligned}$$
\[thm:constant-weight-PA\] $$\begin{array}{lll}
(I) &P(n,d,w)\leq A(n,2d-2w,w),&\mbox{for }d>w;\\
(II) &P(n,d,w)=1, &\mbox{for }d>2w,w\neq 1, d\geq 1;\\
(III) &P(n,2k,k)=\lfloor\frac{n}{k}\rfloor,&\mbox{for }2\leq
k\leq \lfloor n/2\rfloor;\\
(IV) &P(n,2k+1,k+1)=A(n,2k,k+1),&\mbox{for }1\leq
k\leq\lfloor (n-1)/2\rfloor;\\
(VI) &P(n,4,3)\leq \frac{2{n\choose 2}}{3},&\mbox{for }n\geq 4.\\
\end{array}$$
Part $(I)$ Let $C$ be an $(n,d,w)$ constant-weight PA with maximal size $P(n,d,w)$, where $d>w$. Define $f:S_n\mapsto \{0,1\}^n$ such that for any $\mathbf{a}=(a_0,a_1,\ldots,a_{n-1})\in S_n$ with support $A$, $f(\mathbf{a})=\mathbf{a'}=(a'_0,a'_1,\ldots,a'_{n-1})\in
\{0,1\}^n$, where $$\label{prf:thm:conweigt:eq1}
a'_i=\left\{\begin{array}{cl}
1,&\mbox{for }i\in A,\\
0,&\mbox{for }i\not\in A.
\end{array}
\right.$$ Then $C'=\{f(\mathbf{a}): \mathbf{a}\in C\}$ is an $(n,2d-2w,w)$ constant-weight code with size $P(n,d,w)$, which means $P(n,d,w)\leq A(n,2d-2w,w)$. To prove this fact we need only show that distinct codewords of $C'$ have mutual distance $\geq 2d-2w$.
Let $\mathbf{a},\mathbf{b}\in C$, $\mathbf{a}\neq \mathbf{b}$, and let $A$ and $B$ be the supports of $\mathbf{a}$ and $\mathbf{b}$ respectively. Suppose $\mathbf{a}'=f(\mathbf{a})$, $\mathbf{b}'=f(\mathbf{b})$. (\[prf:thm:conweigt:eq1\]) implies $$\begin{aligned}
d(\mathbf{a}',\mathbf{b}')&=&|(A/B)\cup(B/A)|\nonumber\\
&=&|A|+|B|-2|A\cap B|\nonumber\\
&=&2w-2|A\cap B| \label{eq:prf:thm:weight2}\end{aligned}$$ On the other hand, we have $$\begin{array}{lcl}
d&\leq& d(\mathbf{a},\mathbf{b})\\
&\leq& |A\cup B|\\
&=&|A|+|B|-|A\cap B|\\
&=&2w-|A\cap B|,
\end{array}$$ namely $|A\cap B|\leq 2w-d$. Putting this into (\[eq:prf:thm:weight2\]) we obtain $$d(\mathbf{a}',\mathbf{b}')\geq 2d-2w.$$ Since $f$ is injective on $C$ (two distinct codewords of $C$ cannot have the same support when $d>w$), $|C'|=|C|$, and we complete the proof of Part $(I)$.
Part $(II)$ For $d>2w,w\neq 1$ and $d\geq 1$, since $$\begin{array}{lcl}
2d-2w&>& 2\cdot 2w-2w=2w,
\end{array}$$ $A(n,2d-2w,w)=1$ (by Lemma \[lem:elem:proper:A(n,d,w)\]) . This in conjunction with part $(I)$ yields $P(n,d,w)=1$.
Part $(III)$ For $2\leq k\leq \lfloor n/2\rfloor$, by part $(I)$ and Lemma \[lem:elem:proper:A(n,d,w)\] we have $$P(n,2k,k)\leq A(n,2k,k)=\lfloor n/k\rfloor.$$
On the other hand, we can construct an $(n,2k,k)$ constant-weight PA as follows: $$C=\{\mathbf{c}_i=(c_{i,0},c_{i,1},\ldots,c_{i,n-1})|i=0,1,\ldots,\lfloor
n/k\rfloor-1\},$$ where $$c_{i,j}=\left\{\begin{array}{cl} j+1,& \mbox{for }
j=ik,ik+1,\ldots,ik+k-2\\
ik,&\mbox{for }j=ik+k-1\\
j,&\mbox{others}.
\end{array}\right.$$ Then we conclude $P(n,2k,k)=\lfloor n/k\rfloor$.
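(As an aside, the construction just given is easy to verify by computer; the Python sketch below builds the $\lfloor n/k\rfloor$ codewords, each one a $k$-cycle on its own block of $Z_n$, and checks that they all have weight $k$ and pairwise Hamming distance $2k$.)

```python
def constant_weight_pa(n, k):
    """The (n, 2k, k) constant-weight PA of the proof: codeword i cycles the
    block {ik, ..., ik+k-1} and fixes every other point of Z_n."""
    code = []
    for i in range(n // k):
        c = list(range(n))                   # identity outside the block
        for j in range(i * k, i * k + k - 1):
            c[j] = j + 1
        c[i * k + k - 1] = i * k             # close the k-cycle
        code.append(tuple(c))
    return code

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

n, k = 20, 4
C = constant_weight_pa(n, k)
identity = tuple(range(n))
assert len(C) == n // k
assert all(hamming(c, identity) == k for c in C)                     # weight k
assert all(hamming(a, b) == 2 * k for a in C for b in C if a != b)   # distance 2k
```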
Part $(IV)$ For the case $1\leq k\leq \lfloor (n-1)/2\rfloor$, by part $(I)$ we have $$P(n,2k+1,k+1)\leq A(n,2k,k+1).$$ Let $C'$ be an $(n,2k,k+1)$ constant-weight binary code with maximal size $A(n,2k,k+1)$; then there exists $C\subseteq S_n$ such that for each member of $C'$ there is one and only one member of $C$ with the same support. We will prove that $C$ is an $(n,2k+1,k+1)$ constant-weight PA, which implies $P(n,2k+1,k+1)\geq A(n,2k,k+1)$ and hence $P(n,2k+1,k+1)=A(n,2k,k+1)$. Let $\mathbf{x},\mathbf{y}\in C,\mathbf{x}\neq\mathbf{y}$, with corresponding supports $X$ and $Y$. For the case $X\cap Y=\O$, $d(\mathbf{x},\mathbf{y})=|X|+|Y|=2k+2$. So we need only discuss the case $X\cap Y\neq \O$. Let $\mathbf{x}',\mathbf{y}'\in C'$ be the corresponding binary codewords with supports $X,Y$. Since $d(\mathbf{x}',\mathbf{y}')=|X|+|Y|-2|X\cap Y|=2(k+1)-2|X\cap Y|\geq 2k$, we have $|X\cap Y|\leq 1$. Therefore, if $X\cap Y\neq \O$, then $|X\cap Y|=1$. Suppose $X\cap Y=\{a\}$. Then $\mathbf{x}(a)\neq \mathbf{y}(a)$: otherwise $\mathbf{x}(a)=\mathbf{y}(a)\in X\cap Y=\{a\}$, so $\mathbf{x}(a)=a$, contradicting $a$ being in the support of $\mathbf{x}$. Hence for this case, $d(\mathbf{x},\mathbf{y})=|X/Y|+|Y/X|+|\{a\}|=2k+1$. Now we conclude that $C$ is an $(n,2k+1,k+1)$ constant-weight PA of size $A(n,2k,k+1)$, which completes the proof of Part $(IV)$.
Part $(VI)$ Suppose $C$ is an $(n,4,3)$ constant-weight PA. For any pair $\{i,j\}\in Z_n\times Z_n$ with $i\neq j$, let $C_{i,j}\subseteq C$ be the maximal set such that for each $\mathbf{x}\in C_{i,j}$ with support $X$, $\{i,j\}\subseteq X$. We are now ready to prove $|C_{i,j}|\leq 2$. Assume the contrary, i.e., that $|C_{i,j}|\geq 3$ and $\mathbf{x},\mathbf{y},\mathbf{z}$ are distinct elements of $C_{i,j}$. W.l.o.g, $(\mathbf{x}(i),\mathbf{x}(j))=(k,i)$, where $k\neq i,j$. Then $(\mathbf{y}(i),\mathbf{y}(j))=(j,k')$, where $k'\neq i,j,k$, otherwise $d(\mathbf{x},\mathbf{y})<4$, which is a contradiction. Similarly, $(\mathbf{z}(i),\mathbf{z}(j))=(j,k'')$, where $k''\neq i,j,k,k'$. Thus $d(\mathbf{y},\mathbf{z})<4$, which is a contradiction. Therefore $|C_{i,j}|\leq 2$.
Since there are ${n\choose 2}$ pairs of $(i,j)\in Z_n\times Z_n$ with $i\neq j$, $$\label{prf:thm:genboundweight5}
\sum_{i,j,i\neq j}|C_{i,j}|\leq 2{n\choose 2}.$$ On the other hand, for each member of $C$, there are exactly 3 $C_{i,j}$ containing it, hence $$\label{prf:thm:genboundweight6}
\sum_{i,j,i\neq j}|C_{i,j}|=3|C|.$$ Substituting (\[prf:thm:genboundweight6\]) into (\[prf:thm:genboundweight5\]) yields $|C|\leq \frac{2{n\choose
2}}{3}$, this means $P(n,4,3)\leq \frac{2{n\choose 2}}{3}$.
\[thm:upper\_bound\_P(n,2k)\] For $2\leq k\leq \lfloor n/2\rfloor$, $$P(n,2k)\leq \frac{n!}{V(n,k-1)+\frac{{n\choose k}D_k}{\lfloor
n/k\rfloor}}$$
Let there be $N_{k}$ permutations in $S_n$ which have distance $k$ to the $(n,M,2k)$ PA $C$. Then $$\label{eq:prf:thm:upper_bound_P(n,2k)}
MV(n,k-1)+N_k\leq n!$$ In order to estimate $N_k$ we consider an arbitrary codeword $\mathbf{c}$, which we can take to be $\mathbf{1}$ (w.l.o.g.). Then all permutations of weight $k$ have distance $k$ to $C$. Since there are ${n\choose k}D_k$ permutations of weight $k$, there are ${n\choose k}D_k$ permutations that have distance $k$ to $C$. By varying $\mathbf{c}$ we thus count $M{n\choose k}D_k$ permutations in $S_n$ that have distance $k$ to the PA. How often has each of these permutations been counted? Take one of them; again w.l.o.g. we call it $\mathbf{1}$. The codewords with distance $k$ to $\mathbf{1}$ form an $(n,2k,k)$ constant-weight PA, since they have mutual distances $\geq 2k$ and weight $k$. Hence there are at most $P(n,2k,k)=\lfloor n/k\rfloor$ (by part $(III)$ of Theorem \[thm:constant-weight-PA\]) such codewords. This gives $N_k\geq \frac{M{n\choose k}D_k}{\lfloor n/k\rfloor}$. Substituting this lower bound on $N_k$ into (\[eq:prf:thm:upper\_bound\_P(n,2k)\]) implies the theorem.
\[thm:upper\_bound\_P(n,2k+1)\] For $2\leq k\leq \lfloor (n-k-1)/2\rfloor$, $$P(n,2k+1)\leq \frac{n!}{V(n,k)+\frac{{n\choose
k+1}D_{k+1}-A(n-k,2k,k+1){n\choose k}D_k}{A(n,2k,k+1)}}$$
Let $C$ be an $(n,M,2k+1)$ PA. For any $\mathbf{x}\in S_n$, let $B_i(\mathbf{x})=|\{\mathbf{c}:\mathbf{c}\in
C,d(\mathbf{c},\mathbf{x})=i\}|$. The proof relies on the following lemma.
\[lem:relation\_for\_prf\_2k+1,upper\] $$\begin{array}{ll}
A(n,2k,k+1)\sum_{i<k}B_{i}(\mathbf{x})+(A(n,2k,k+1)-A(n-k,2k,k+1))B_{k}(\mathbf{x})+B_{k+1}(\mathbf{x})\\
\leq A(n,2k,k+1)
\end{array}$$
Without loss of generality, we can take $\mathbf{x}=\mathbf{1}$, then $B_i(\mathbf{x})$ is the number of codewords with weight $i$. Clearly, a permutation with weight $w_1$ has distance $\leq
w_1+w_2$ to that with weight $w_2$. Hence $\sum_{i<k}B_i(\mathbf{x})\leq 1$. If $B_i(\mathbf{x})>0$ for some $i<k$, then $B_{k}(\mathbf{x})=B_{k+1}(\mathbf{x})=0$ and all the other summands are zero, and there is nothing to prove. Assume, therefore, that $B_{i}(\mathbf{x})=0$ for all $i<k$. We know that $B_k(\mathbf{x})\leq P(n,2k+1,k)=1$ (by part $(II)$ of Theorem \[thm:constant-weight-PA\]), in other words $B_k(\mathbf{x})$ is either 0 or 1: if it is 0, then the claim becomes $B_{k+1}(\mathbf{x})\leq A(n,2k,k+1)=P(n,2k+1,k+1)$ (by part $(IV)$ of Theorem \[thm:constant-weight-PA\]), which is clear; if it is 1, then the claim becomes $B_{k+1}(\mathbf{x})\leq A(n-k,2k,k+1)=P(n-k,2k+1,k+1)$, which is correct because no point can be moved both by the codeword of weight $k$ and by a codeword of weight $k+1$.
We are now ready to complete the proof of the theorem. It follows from Lemma \[lem:relation\_for\_prf\_2k+1,upper\] that $$\label{eq:prf:2k+1:upper1}
\begin{array}{l}
\sum_{\mathbf{x}\in
S_n}\left(A(n,2k,k+1)\sum_{i<k}B_{i}(\mathbf{x})+(A(n,2k,k+1)-A(n-k,2k,k+1))B_{k}(\mathbf{x})\right.\\
\left.+B_{k+1}(\mathbf{x})\right)\leq n!A(n,2k,k+1)
\end{array}$$ The left side of the above inequality can be also written as $$\label{eq:prf:2k+1:upper2}
A(n,2k,k+1)\sum_{i<k}\sum_{\mathbf{x}\in
S_n}B_{i}(\mathbf{x})+(A(n,2k,k+1)-A(n-k,2k,k+1))\sum_{\mathbf{x}\in
S_n}B_{k}(\mathbf{x})+\sum_{\mathbf{x}\in S_n}B_{k+1}(\mathbf{x}).$$ Now we shall give an expression in term of $M$ for $\sum_{\mathbf{x}\in S_n}B_{i}(\mathbf{x})$. Since each codeword $\mathbf{x}\in C$ has exactly ${n\choose i}D_i$ permutations in $S_n$ which have distance $i$ to $\mathbf{x}$, each codeword is counted exactly ${n\choose i}D_i$ times by $\sum_{\mathbf{x}\in
S_n}B_{i}(\mathbf{x})$, which means $$\label{eq:prf:expression on M}
\sum_{\mathbf{x}\in S_n}B_{i}(\mathbf{x})=M{n\choose i}D_i.$$ Finally, the theorem is obtained by putting (\[eq:prf:expression on M\]) into (\[eq:prf:2k+1:upper2\]), and rewriting (\[eq:prf:2k+1:upper1\]) after replacing its left side with the new expression (\[eq:prf:2k+1:upper2\]).
Using the upper bound on $A(n,2k,k+1)$ in Lemma \[lem:elem:proper:A(n,d,w)\], we get a determined upper bound on $P(n,2k+1)$.
\[cor:upper\_bound\_MO\_appx\] For $2\leq k\leq \lfloor(n-k-1)/2\rfloor$, $$P(n,2k+1)\leq
\frac{n!}{V(n,k)+\frac{{n\choose k+1}D_{k+1}-\lfloor
\frac{n-k}{k+1}\lfloor\frac{n-k-1}{k}\rfloor\rfloor {n\choose
k}D_k}{\lfloor \frac{n}{k+1}\lfloor\frac{n-1}{k}\rfloor\rfloor}}$$
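For numerical comparisons such as those in the next section, all four bounds can be evaluated with elementary arithmetic. The sketch below repeats the derangement and ball-volume helpers so that it is self-contained; run for $n=20$, it should reproduce the example values quoted later in the text (floating point suffices for order-of-magnitude comparisons; `fractions.Fraction` can be used for exact values).

```python
import math

def derangements(k):
    d = [1, 0]
    for i in range(2, k + 1):
        d.append((i - 1) * (d[-1] + d[-2]))
    return d[k]

def V(n, r):
    return sum(math.comb(n, i) * derangements(i) for i in range(r + 1))

def DV(n, d):                    # Deza-Vanstone bound
    return math.factorial(n) / math.factorial(d - 1)

def SP(n, d):                    # sphere packing bound
    return math.factorial(n) / V(n, (d - 1) // 2)

def ME(n, k):                    # new bound on P(n, 2k)
    return math.factorial(n) / (V(n, k - 1) + math.comb(n, k) * derangements(k) / (n // k))

def MO(n, k):                    # explicit new bound on P(n, 2k+1) from the corollary
    A_n = (n * ((n - 1) // k)) // (k + 1)
    A_nk = ((n - k) * ((n - k - 1) // k)) // (k + 1)
    num = math.comb(n, k + 1) * derangements(k + 1) - A_nk * math.comb(n, k) * derangements(k)
    return math.factorial(n) / (V(n, k) + num / A_n)

print(ME(20, 4), DV(20, 8), SP(20, 8))   # d = 8 comparison
print(MO(20, 4), SP(20, 9), DV(20, 9))   # d = 9 comparison
```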
Comparison of Upper Bounds
==========================
In this section, we will prove that for constants $\alpha,\beta$ satisfying certain conditions, whenever $d=\beta n^{\alpha}$, the new upper bounds on $P(n,d)$ are stronger than the previous ones when $n$ is large enough.
For large $n$ and $d$, the previous upper bounds on $P(n,d)$ are the Deza-Vanstone bound and the sphere packing bound. Let $DV(n,d)$ denote the Deza-Vanstone upper bound on $P(n,d)$ and $SP(n,d)$ denote the sphere packing upper bound on $P(n,d)$, i.e. $$DV(n,d)=\frac{n!}{(d-1)!},$$ $$SP(n,d)=\frac{n!}{V(n,\lfloor (d-1)/2\rfloor)}.$$ Although we can get more upper bounds on $P(n,d)$ by recursively applying inequality (\[eq:n,n-1:prop:elementay:conse:P(n,d)\]) and using the sphere packing bound as the initial bound, namely, for $d\leq m<n$, $$\label{eq:weaker:DV:SP}
P(n,d)\leq \frac{n!SP(m,d)}{m!},$$ these bounds are not stronger than the best bounds given by $DV(n,d)$ and $SP(n,d)$, as shown below.
For $d\leq m<n$, $$\frac{n!SP(m,d)}{m!}\geq \min\{DV(n,d),SP(n,d)\}.$$
If $SP(m,d)\geq DV(m,d)$, $\frac{n!SP(m,d)}{m!}\geq\frac{n!}{m!}\cdot\frac{m!}{(d-1)!}=DV(n,d)$, and there is nothing to prove. Therefore, assume $SP(m,d)<DV(m,d)$. The claim is also correct since $$\begin{aligned}
SP(n,d) &=& \frac{SP(n,d)}{SP(m,d)}\cdot SP(m,d) \\
&=& \frac{n!}{m!}\cdot\frac{\sum_{i=0}^{\lfloor (d-1)/2\rfloor}{m\choose i}D_i}{\sum_{i=0}^{\lfloor (d-1)/2\rfloor}{n\choose i}D_i}\cdot SP(m,d) \\
&<&\frac{n!SP(m,d)}{m!}.\end{aligned}$$
Let $ME(n,k)$ denote the new upper bound on $P(n,2k)$ and $MO(n,k)$ denote the new upper bound on $P(n,2k+1)$, i.e. $$ME(n,k)=\frac{n!}{V(n,k-1)+\frac{{n\choose k}D_k}{\lfloor
n/k\rfloor}},$$ and $$MO(n,k)=\frac{n!}{V(n,k)+\frac{{n\choose
k+1}D_{k+1}-A(n-k,2k,k+1){n\choose k}D_k}{A(n,2k,k+1)}}.$$
\[lem:compare:DV:PS\] For constants $\alpha,\beta$ satisfying either $0<\alpha<1/2$, $\beta>0$ or $\alpha=1/2, 0<\beta<e$, whenever $d=\beta
n^{\alpha}$, $$\lim_{n\to\infty}\frac{DV(n,d)}{SP(n,d)}=\infty.$$
Let $k=\lfloor (d-1)/2\rfloor$. We have $$\begin{aligned}
\lim_{n\to\infty}\frac{DV(n,d)}{SP(n,d)}&=&\lim_{n\to\infty}\frac{V(n,k)}{(d-1)!}\nonumber\\
&\geq&\lim_{n\to\infty}\frac{{n\choose
k}D_k}{(d-1)!}\nonumber\\
&=&\lim_{n\to\infty}\frac{n!k!}{ek!(n-k)!(d-1)!}\nonumber\\
&=&\lim_{n\to\infty}\frac{\sqrt{2\pi
n}(n/e)^n}{e\sqrt{2\pi(n-k)}((n-k)/e)^{n-k}\sqrt{2\pi
(d-1)}((d-1)/e)^{d-1}}\label{eq:prf:compare:DV:SP:1}\end{aligned}$$ where the last equality follows from Stirling’s formula $\lim_{n\to\infty}\frac{n!}{\sqrt{2\pi n}(\frac{n}{e})^n}=1$. By (\[eq:prf:compare:DV:SP:1\]), $$\label{eq:prf:compare:DV:SP:2}
\lim_{n\to\infty}\frac{DV(n,d)}{SP(n,d)}\geq\frac{1}{\sqrt{2\pi}}\lim_{n\to\infty}e^{d-k-2}\frac{n^{n+1/2}}{(n-k)^{n-k+1/2}(d-1)^{d-1/2}}.$$ Let $c$ be a constant such that $c<1$. Since $$\begin{aligned}
\lim_{n\to\infty}\left(\frac{n}{n-k}\right)^{\frac{n-k}{k}}&=&\lim_{n\to\infty}\left(1+\frac{1}{n/k-1}\right)^{n/k-1}\\
&=&e,\end{aligned}$$ for $n$ large enough, $
\left(\frac{n}{n-k}\right)^{\frac{n-k}{k}}\geq e^c, $ i.e. $$\label{eq:forprove:DV:PS}
(n-k)^{n-k}\leq e^{-ck}n^{n-k}.$$ Putting (\[eq:forprove:DV:PS\]) into the right side of (\[eq:prf:compare:DV:SP:2\]), and multiplying the right side of (\[eq:prf:compare:DV:SP:2\]) by $\lim_{n\to\infty}\frac{(n-k)^{1/2}}{n^{1/2}}=1$ and $$\lim_{n\to\infty}\frac{(d-1)^{d-1/2}}{e^{-1}(\beta
n^{\alpha})^{d-1/2}}=\lim_{d\to\infty}e(\frac{d-1}{d})^{d-1/2}=\lim_{d\to\infty}e(1-1/d)^{d-1/2}=1,$$ we obtain $$\begin{aligned}
\lim_{n\to\infty}\frac{DV(n,d)}{SP(n,d)}&\geq&\frac{1}{\sqrt{2\pi}}\lim_{n\to\infty}e^{d-k-2}\frac{n^{n+1/2}}{e^{-ck}n^{n-k+1/2}e^{-1}(\beta n^{\alpha})^{(d-1/2)}}\nonumber\\
&=&\frac{1}{\sqrt{2\pi}}\lim_{n\to\infty}e^{d+(c-1)k-1}\beta^{-d+1/2}n^{k-\alpha
d+\alpha/2}\nonumber\\
&=&\frac{1}{\sqrt{2\pi}}\lim_{n\to\infty}e^{(1-\ln
\beta)d+(c-1)k-1+\ln\beta/2}n^{k-\alpha
d+\alpha/2}\nonumber\\
&\geq&\frac{1}{\sqrt{2\pi}}\lim_{n\to\infty}e^{d(\frac{1+c}{2}-\ln\beta)-1+\frac{\ln\beta}{2}
}n^{(1/2-\alpha)d-1+\alpha/2}\label{eq:prf:compare:DV:SP:3}\end{aligned}$$ where the last inequality follows from $(c-1)k\geq (c-1)d/2$ and $$k-\alpha
d+\alpha/2\geq (d/2-1)-\alpha
d+\alpha/2=(1/2-\alpha)d-1+\alpha/2.$$ To see the limit of right side of (\[eq:prf:compare:DV:SP:3\]), we discuss in two cases:\
Case I:) $0<\alpha<1/2$. Since the coefficient $1/2-\alpha>0$, the limit is determined by exponent $n^{(1/2-\alpha)d-1+\alpha/2}$, and then the statement holds for this case.\
Case II:) $\alpha=1/2,0<\beta<e$. The right side of (\[eq:prf:compare:DV:SP:3\]) is equal to $$\frac{1}{\sqrt{2\pi}}\lim_{n\to\infty}e^{d(\frac{1+c}{2}-\ln\beta)-1+\ln\beta/2}n^{-3/4}.$$ The statement holds also, since for $0<\beta<e$ we can take $c$ such that $2\ln\beta-1<c<1$, in other words, $$0<\frac{1+c}{2}-\ln\beta<1-\ln\beta.$$
\[lem:compare:PS:ME\] For $k\geq 5$, $$SP(n,2k)-ME(n,k)>\frac{2(n-k+1)!}{n(k-1)}.$$
Since $$V(n,k-1)+\frac{{n\choose k}D_k}{\lfloor n/k\rfloor}\leq
V(n,k-1)+{n\choose k}D_k=V(n,k),$$ $$\begin{aligned}
SP(n,2k)-ME(n,k) &=& \frac{n!}{V(n,k-1)}-\frac{n!}{V(n,k-1)+\frac{{n\choose k}D_k}{\lfloor n/k\rfloor}}\nonumber \\
&=&\frac{n!\cdot\frac{{n\choose k}D_k}{\lfloor n/k\rfloor}}{V(n,k-1)\left(V(n,k-1)+\frac{{n\choose k}D_k}{\lfloor
n/k\rfloor}\right)}\nonumber\\
&\geq&\frac{n!{n\choose k}D_k}{\lfloor n/k\rfloor V(n,k-1)V(n,k)}. \nonumber\label{eq:prf:compare:PS:ME:1}\end{aligned}$$ When $k\geq 5$, $V(n,k-1)\leq (k-1){n\choose k-1}D_{k-1}$ and $V(n,k)\leq k{n\choose k}D_k$, thereby $$\begin{aligned}
SP(n,2k)-ME(n,k) &\geq&\frac{n!{n\choose k}D_k}{\lfloor n/k\rfloor k{n\choose k}D_k(k-1){n\choose k-1}D_{k-1}} \nonumber\label{eq:prf:compare:PS:ME:2}\\
&\geq&\frac{n!}{n(k-1){n\choose k-1}D_{k-1}}\label{eq:prf:compare:PS:ME:3}\end{aligned}$$ When $k\geq 5$, $D_{k-1}=[(k-1)!/e]< \frac{(k-1)!}{2}$, putting this into (\[eq:prf:compare:PS:ME:3\]) we have $$\begin{aligned}
SP(n,2k)-ME(n,k) &>&\frac{n!}{n(k-1)\cdot\frac{n!}{(n-k+1)!(k-1)!}\cdot\frac{(k-1)!}{2}}\\
&=&\frac{2(n-k+1)!}{n(k-1)}.\end{aligned}$$
For constants $\alpha,\beta$ satisfying either $0<\alpha<1/2$, $\beta>0$ or $\alpha=1/2, 0<\beta<e$, whenever $2k=\beta
n^{\alpha}$, $$\lim_{n\to\infty}\left(\min\{DV(n,2k),SP(n,2k)\}-ME(n,k)\right)=\infty.$$
By Lemma \[lem:compare:PS:ME\], we have $$\begin{aligned}
\lim_{n\to\infty} SP(n,2k)-ME(n,k)&\geq&\lim_{n\to\infty}\frac{2(n-k+1)!}{n(k-1)} \\
&=&\lim_{n\to\infty}\frac{2(n-(\beta n^{\alpha})/2+1)!}{n((\beta n^{\alpha})/2-1)} \\
&=&\infty.\end{aligned}$$ By Lemma \[lem:compare:DV:PS\], we have $$\begin{aligned}
\lim_{n\to\infty}(DV(n,2k)-SP(n,2k)) &=&
\lim_{n\to\infty}SP(n,2k)\left(\frac{DV(n,2k)}{SP(n,2k)}-1\right)\\
&=& \infty,\end{aligned}$$ hence $\lim_{n\to\infty}(DV(n,2k)-ME(n,k))=\infty$, and the theorem follows.
As a simple example of the superiority of the new bound $ME(n,k)$ over $DV(n,2k)$ and $SP(n,2k)$ we can compare them for small values of $d$ and $n$.
$ME(20,4)<0.218\cdot 10^{15}$, $DV(20,8)>0.482\cdot 10^{15}$, $SP(20,8)>0.984\cdot 10^{15}$, then $ME(20,4)$ provides the best upper bound on $P(20,8)$.
\[lem:compare:PS:MO\] For $k\geq 4$, $$SP(n,2k+1)-MO(n,k)>
\frac{2(n-k)!}{(k+1)n(n-1)}\left(1+k-\frac{n-1}{k}\right).$$
We have $$\begin{aligned}
SP(n,2k+1)-MO(n,k) &=& \frac{n!}{V(n,k)}-\frac{n!}{V(n,k)+\frac{{n\choose
k+1}D_{k+1}-A(n-k,2k,k+1){n\choose k}D_k}{A(n,2k,k+1)}} \nonumber\\
&=&\frac{n!\left(\frac{{n\choose
k+1}D_{k+1}-A(n-k,2k,k+1){n\choose
k}D_k}{A(n,2k,k+1)}\right)}{V(n,k)\left(V(n,k)+\frac{{n\choose
k+1}D_{k+1}-A(n-k,2k,k+1){n\choose k}D_k}{A(n,2k,k+1)}\right)} \nonumber\\
&\geq&\frac{n!\left(\frac{{n\choose k+1}D_{k+1}-
\frac{(n-k)(n-k-1)}{(k+1)k} {n\choose
k}D_k}{\frac{n(n-1)}{(k+1)k}}\right)}{V(n,k)V(n,k+1)}\label{eq:prf:compare:PS:MO:1}\end{aligned}$$ where the last inequality follows from $A(n-k,2k,k+1)\leq
\frac{(n-k)(n-k-1)}{(k+1)k}$, $A(n,2k,k+1)\leq\frac{n(n-1)}{(k+1)k}$ (by Lemma \[lem:elem:proper:A(n,d,w)\]) and $$V(n,k)+\frac{{n\choose k+1}D_{k+1}-A(n-k,2k,k+1){n\choose
k}D_k}{A(n,2k,k+1)}\leq V(n,k)+{n\choose k+1}D_{k+1}=V(n,k+1).$$ When $k\geq 4$, $V(n,k)\leq k{n\choose k}D_k$ and $V(n,k+1)\leq
(k+1){n\choose k+1}D_{k+1}$, then $$\begin{aligned}
SP(n,2k+1)-MO(n,k) &\geq&\frac{n!\left(\frac{{n\choose
k+1}D_{k+1}-\frac{(n-k)(n-k-1)}{(k+1)k}{n\choose
k}D_k}{\frac{n(n-1)}{(k+1)k}}\right)}{k{n\choose
k}D_k(k+1){n\choose k+1}D_{k+1}}\\
&=&\frac{(n-2)!\left(\frac{D_{k+1}}{D_k}-\frac{n-k-1}{k}\right)}{{n\choose
k}D_{k+1}}\end{aligned}$$ Since for $k\geq 4$, $\frac{D_{k+1}}{D_k}\geq
\frac{(k+1)!/e-1}{k!/e+1}=k+1-\frac{k+2}{k!/e+1}> k$, and $D_{k+1}\leq \frac{(k+1)!}{e}+1<(k+1)!/2$, $$\begin{aligned}
SP(n,2k+1)-MO(n,k)&>&\frac{(n-2)!\left(k-\frac{n-k-1}{k}\right)}{{n\choose
k}(k+1)!/2}\\
&=&\frac{2(n-k)!}{(k+1)n(n-1)}\left(1+k-\frac{n-1}{k}\right).\end{aligned}$$
For constant $\beta$ such that $2<\beta<e$, whenever $2k+1=\beta
n^{1/2}$, $$\lim_{n\to\infty}\left(\min\{DV(n,2k+1),SP(n,2k+1)\}-MO(n,k)\right)=\infty.$$
Since $$\begin{aligned}
1+k-\frac{n-1}{k} &\geq&1+\frac{2\sqrt{n}-1}{2}-\frac{n-1}{\frac{2\sqrt{n}-1}{2}}\\
&=&1+\left(\sqrt{n}-\frac{1}{2}\right)-\left(\sqrt{n}-\frac{1}{2}+\frac{\sqrt{n}-\frac{5}{4}}{\sqrt{n}-\frac{1}{2}}\right) \\
&=&\frac{3}{4\sqrt{n}-2},\end{aligned}$$ by Lemma \[lem:compare:PS:MO\] we have $$\begin{aligned}
\lim_{n\to\infty}SP(n,2k+1)-MO(n,k)&\geq&\lim_{n\to\infty}
\frac{2(n-k)!}{(k+1)n(n-1)}\left(1+k-\frac{n-1}{k}\right)\nonumber\\
&\geq&\lim_{n\to\infty}\frac{2(n-k)!}{(k+1)n(n-1)}\cdot\frac{3}{4\sqrt{n}-2}\label{eq:prf:compare:MO}\\
&=&\infty.\nonumber\end{aligned}$$
By Lemma \[lem:compare:DV:PS\], we have $$\begin{aligned}
\lim_{n\to\infty}(DV(n,2k+1)-SP(n,2k+1)) &=&
\lim_{n\to\infty}SP(n,2k+1)\left(\frac{DV(n,2k+1)}{SP(n,2k+1)}-1\right)\\
&=&\infty,\end{aligned}$$ hence $\lim_{n\to\infty}(DV(n,2k+1)-MO(n,k))=\infty$, and the theorem follows.
As a simple example of the superiority of the new bound $MO(n,k)$ over $DV(n,2k+1)$ and $SP(n,2k+1)$ we can compare them for small values of $d$ and $n$.
$MO(20,4)<0.380\cdot 10^{14}$ by Corollary \[cor:upper\_bound\_MO\_appx\], $SP(20,9)>0.528\cdot 10^{14}$, $DV(20,9)>0.603\cdot 10^{14}$, then $MO(20,4)$ provides the best upper bound on $P(20,9)$.
[^1]: Manuscript received June 14, 2006. This work was supported by NSFC under grants 90104005 and 60573030.
[^2]: Lizhen Yang is with the department of computer science and engineering, Shanghai Jiaotong University, 800 DongChuan Road, Shanghai, 200420, R.P. China (fax: 86-021-34204221, email: lizhen\[email protected]).
[^3]: Ling Dong is with the department of computer science and engineering, Shanghai Jiaotong University, 800 DongChuan Road, Shanghai, 200420, R.P. China (fax: 86-021-34204221, email: [email protected]).
[^4]: Kefei Chen is with the department of computer science and engineering, Shanghai Jiaotong University, 800 DongChuan Road, Shanghai, 200420, R.P. China (fax: 86-021-34204221, email: [email protected]).
|
---
abstract: 'We present a weak lensing cluster search using the Hyper Suprime-Cam Subaru Strategic Program (HSC survey) first-year data. We pay special attention to the dilution effect of cluster member and foreground galaxies on weak lensing peak signal-to-noise ratios ($SN$s); we adopt the globally normalized weak lensing estimator, which is least affected by cluster member galaxies, and we select source galaxies by using photometric redshift information to mitigate the effect of foreground galaxies. We produce six samples of source galaxies with different low-$z$ galaxy cuts, construct weak lensing mass maps for each source sample, and search for high peaks in the mass maps, which cover an effective survey area of $\sim$120 deg$^2$. We compile the six catalogs of high peaks into a sample of cluster candidates, which contains 124 high peaks with $SN\ge 5$. We cross-match the peak sample with the public optical cluster catalog constructed from the same HSC survey data to identify cluster counterparts of the peaks. We find that 107 out of 124 peaks have matched clusters within 5 arcmin of the peak positions. Among them, we define a sub-sample of 64 secure clusters that we use to examine dilution effects on our weak lensing peak finding. We find that source samples with the low-$z$ galaxy cuts mitigate the dilution effect on peak $SN$s of high-$z$ clusters ($z \gtsim 0.3$), and thus combining multiple peak catalogs from different source samples improves the efficiency of weak lensing cluster searches.'
author:
- 'Takashi <span style="font-variant:small-caps;">Hamana</span>'
- 'Masato <span style="font-variant:small-caps;">Shirasaki</span>'
- 'Yen-Ting <span style="font-variant:small-caps;">Lin</span>'
title: 'Weak lensing clusters from HSC survey first-year data: Mitigating the dilution effect of foreground and cluster member galaxies'
---
Introduction {#sec:intro}
============
Clusters of galaxies have played important roles in modern cosmology: their abundance and evolution have been used to place constraints on cosmological parameters, and their baryonic components (galaxies and hot intra-cluster gas) have been used to study physical processes of hierarchical structure formation in the universe. In those studies, a large sample of clusters of galaxies is the fundamental dataset, which has been constructed by identifying cluster tracers such as optical galaxy concentrations, X-ray emission, the Sunyaev-Zel’dovich effect, and dark matter concentrations via the weak lensing technique [@2019SSRv..215...25P]. Since any cluster mass-observable relation has scatter, the sample completeness in terms of the cluster mass, which is the principal quantity linking an observation to a theory, varies from method to method. Weak lensing cluster finding is unique in that it uses the matter concentration as the tracer regardless of the physical state of the baryonic components, enabling one to locate under-luminous clusters.
Observationally, there are two competing requirements for constructing a sizable cluster sample with weak lensing on a practical time scale: a wide survey area to locate rare objects, and deep imaging to achieve a sufficient number density of source galaxies. Thanks to the development of wide-field optical cameras and dedicated wide-field surveys, weak lensing cluster finding has made rapid progress in the last two decades [see Table 1 of @2018PASJ...70S..27M and references therein]. Recently, @2018PASJ...70S..27M conducted weak lensing cluster finding in the $\sim$160 deg$^2$ area of the Hyper Suprime-Cam Subaru Strategic Program [hereafter, HSC survey, @2018PASJ...70S...4A] first-year data [@2018PASJ...70S...8A; @2018PASJ...70S..25M], and reported the detection of 65 peaks with signal-to-noise ratio ($SN$) greater than 4.7 in weak lensing mass maps. They cross-matched the peaks with optical cluster catalogs and found that 63 out of 65 peaks had optical counterparts, demonstrating that a wide-field survey with a sufficient depth (in their case $i= 24.5$ mag) is indeed able to yield a sizable and high-purity cluster sample.
In the near future, the size of weak lensing cluster samples will become much larger as much wider weak-lensing-oriented surveys arrive: the final survey area of the HSC survey is 1400 deg$^2$ (more than eight times that of the first-year data), and the Large Synoptic Survey Telescope [LSST, @2019ApJ...873..111I] and the Euclid survey [@2012SPIE.8442E..0TL; @2018cosp...42E2761R] will cover a large portion of the sky with a sufficient depth. It is thus worth improving methods of weak lensing cluster finding by making the best use of the multi-band datasets that ongoing and future surveys take. This is exactly the purpose of this paper.
In this paper, we focus on the dilution effect, which we briefly explain below: the weak lensing effect of clusters distorts the shapes of background galaxies in a coherent manner. If a galaxy sample used for weak lensing analysis contains not only background lensed galaxies but also foreground and/or cluster member galaxies, which are not affected by cluster lensing, the latter act as contaminants in the weak lensing analysis and [*dilute*]{} the lensing signals of clusters. In weak lensing cluster finding, redshifts of clusters are unknown in advance, and thus a galaxy sample is commonly selected by a simple magnitude cut on single-band photometry [for example, @2002ApJ...580L..97M; @2015PASJ...67...34H]. Such a galaxy sample inevitably contains foreground/cluster member galaxies and suffers from the dilution effect.
Weak lensing cluster finding is based on peak heights on mass maps. The detection threshold is set by the peak height $SN$ considering the trade-off between numbers of detected clusters and false detections (lowering the threshold $SN$ leads to a larger number of cluster detections at the cost of a higher false detection rate). However, the peak heights of cluster lensing are indeed affected by the dilution effect. Its direct impact is the decline in numbers of cluster detections. Another impact is on theoretical models of weak lensing mass map peaks; incorporating its effect into theoretical models requires a realistic modeling of the dilution effect which is most likely dependent on cluster mass, redshift, and galaxy selection criteria (for example, a detection band, magnitude-cut, and size-cut). Therefore it is fundamentally important to understand actual dilution effects on weak lensing mass maps on a case-by-case basis.
The purpose of this paper is two-fold. The first is to develop a weak lensing cluster finding method that mitigates the dilution effects by incorporating photometric redshift information of galaxies. We apply it to the HSC survey first-year data, for which both the weak lensing shape catalog and the photometric redshift data are publicly available [@2018PASJ...70S..25M; @2018PASJ...70S...9T]. We present a sample of weak lensing peaks located by our finding method, and identify their counterpart clusters by cross-matching with an optical cluster catalog [@2018PASJ...70S..20O]. Then, using the derived weak lensing cluster sample, we examine the dilution effects on actual weak lensing mass maps in an empirical manner, which is our second purpose.
The structure of this paper is as follows. In Section \[sec:data\], we briefly summarize the HSC survey first-year shear catalog and the photometric redshift data used in this study. In Section \[sec:kappa-peaks\], we describe the methods used to generate a sample of weak lensing peaks, including the selection of source galaxies, the reconstruction of weak lensing mass maps, and the peak finding algorithm. In Section \[sec:cross-matching\], the weak lensing peaks are cross-matched with a sample of optical clusters to identify their cluster counterparts, and we examine fundamental properties of the weak lensing clusters detected by our method. In Section \[sec:dilution\_effects\], we examine the dilution effects of foreground and cluster member galaxies on our weak lensing peaks in an empirical manner using actual source galaxy samples and empirical models. Finally, we summarize and discuss our results in Section \[sec:summary\]. In Appendix \[sec:cross\_matching\], we present the results of cross-matching our sample of weak lensing peaks with selected catalogs of known clusters. In Appendix \[sec:neighboring\_peaks\], we describe systems of neighboring peaks in our peak sample. In Appendix \[sec:cluster\_mass\], we present cluster mass estimates of the weak lensing peak sample based on model fitting to weak lensing shear profiles. In Appendix \[sec:local\_estimator\], we compare the globally normalized $SN$ estimator adopted in this study with the locally normalized $SN$ estimator adopted in some previous studies [for example, @2015PASJ...67...34H].
Throughout this paper we adopt the cosmological model with the cold dark matter (CDM) density $\Omega_{\rm cdm}=0.233$, the baryon density $\Omega_{\rm b}=0.046$, the matter density $\Omega_{\rm m}=\Omega_{\rm cdm} + \Omega_{\rm b} = 0.279$, the cosmological constant $\Omega_\Lambda=0.721$, the spectral index $n_s=0.97$, the normalization of the matter fluctuation $\sigma_8=0.82$, and the Hubble parameter $h=0.7$, which are the best-fit cosmological parameters in the Wilkinson Microwave Anisotropy Probe (WMAP) 9-year results [@2013ApJS..208...19H].
HSC survey data {#sec:data}
===============
In this section, we briefly describe those aspects of the HSC survey first-year products that are directly relevant to this study; see the following references for full details: @2018PASJ...70S...4A for an overview of the HSC survey and survey design, @2018PASJ...70S...8A for the first public data release, @2018PASJ...70S...1M [@2018PASJ...70S...2K; @2018PASJ...70...66K; @2018PASJ...70S...3F] for the performance of the HSC instrument itself, @2018PASJ...70S...5B for the optical imaging data processing pipeline used for the first-year data, @2018PASJ...70S..25M for the first-year shape catalog, @2018MNRAS.481.3170M for the calibration of galaxy shape measurements with image simulations, @2019PASJ..tmp..106A for the public data release of the first-year shape catalog, and @2018PASJ...70S...9T for the photometric redshifts derived for the first-year data.
HSC first-year shape catalog {#sec:shape-catalog}
----------------------------
We use the HSC first-year shape catalog [@2018PASJ...70S..25M], in which the shapes of galaxies are estimated on the $i$-band coadded images with the re-Gaussianization PSF correction method [@2003MNRAS.343..459H]. Only galaxies that pass a set of selection criteria are included in the catalog. The four major criteria relevant to the following analyses are:
1. [*full-color and full-depth cut*]{}: the object should be located in regions reaching approximately full survey depth in all five ($grizy$) broad bands,
2. [*magnitude cut*]{}: $i$-band cmodel magnitude (corrected for extinction) should be brighter than 24.5 AB mag,
3. [*resolution cut*]{}: the galaxy size normalized by the PSF size, as defined by the re-Gaussianization method (the PSF size varies from position to position on the coadded images depending on observing conditions), should be larger than the threshold of [ishape\_hsm\_regauss\_resolution $\ge$ 0.3]{},
4. [*bright object mask cut*]{}: the object should not be located within the bright object masks.
See Table 4 of @2018PASJ...70S..25M for the full description of the selection criteria.
The HSC shape catalog contains all the basic parameters needed to perform the weak lensing analyses in this study. The following five sets of parameters for each galaxy are directly relevant [see @2018PASJ...70S..25M for a detailed description of each item]: (1) the two-component distortion, $\bm{e}=(e_1,e_2)$, which represents the shape of each galaxy image; (2) the shape weight, $w$; (3) the intrinsic shape dispersion per component, $e_{\mbox{rms}}$; (4) the multiplicative bias, $m$; and (5) the additive bias, $(c_1, c_2)$.
Photometric redshifts {#sec:photo-z}
---------------------
Using the HSC five-band photometry, photometric redshifts (hereafter photo-$z$) were estimated with six independent codes, described in detail in @2018PASJ...70S...9T. In this study, we adopt the [Ephor AB]{} photo-$z$ data, which were derived from the PSF-matched aperture photometry (called the [afterburner]{} photometry) using the neural network code [Ephor]{}[^1]. The dataset contains not only a point estimate but also the probability distribution function of the redshift for each galaxy, which we use to select source galaxies (see Section \[sec:source-galaxy\]).
Weak lensing mass maps and high $SN$ peaks {#sec:kappa-peaks}
==========================================
In this section, we describe our procedure for constructing a sample of high $SN$ peaks located in weak lensing mass maps.
Source galaxy selection {#sec:source-galaxy}
-----------------------
![[*Bottom panel*]{}: Estimates of the redshift distribution of the source samples, computed by summing the redshift probability distribution, $P(z)$, over the selected source galaxies \[see equation (\[eq:ns\])\]. The normalization is taken so that $\int dz\, n_s(z)=1$ for the “no-cut” case (i.e. the full galaxy sample), shown as the black histogram. [*Top panel*]{}: Ratio of the redshift distribution of each source sample to that of the “no-cut” case. \[wsumpdf\_zmax3.0\_zmin\]](fig1.eps){width="82mm"}
--------------- ---------------- ------------------- ---------------------------------------------- ---------------- -------------------------------------------
$z_{\rm min}$ Area $\bar{n}_{g}$ $\langle \sigma_{\rm shape}^2 \rangle^{1/2}$ $N_{\rm peak}$ $N_{\rm peak}$\[merged\] at $z_{\rm opt}$
                  \[deg$^2$\]      \[arcmin$^{-2}$\]                                                   $SN\ge 5$        $SN_{\rm max}\ge 5$
0.0 120.01 19.3 0.0158 68 24 (-)
0.2 119.51 17.2 0.0167 71 14 (3)
0.3 118.90 15.2 0.0179 70 18 (9)
0.4 118.08 13.6 0.0190 75 22 (13)
0.5 117.50 12.7 0.0198 73 15 (7)
0.6 116.63 11.4 0.0209 69 31 (23)
--------------- ---------------- ------------------- ---------------------------------------------- ---------------- -------------------------------------------
  : Summary of the source galaxy samples defined by the $z_{\rm min}$ cut: the effective survey area of the data-region, the mean source galaxy number density, the mean shape noise, the number of peaks with $SN\ge 5$, and the number of merged peaks with $SN_{\rm max}\ge 5$ whose $z_{\rm opt}$ equals the given $z_{\rm min}$ (the numbers in parentheses are those not present in the $z_{\rm min}=0$ sample).[]{data-label="table:kappa_map"}
We use the photo-$z$ information to select the source galaxies used to construct weak lensing mass maps (detailed in the next subsection). We adopt the [*P-cut*]{} method proposed by @2014MNRAS.444..147O, which uses the full probability distribution function of redshift, $P(z)$, estimated for each galaxy by the [Ephor]{} code; we define samples of source galaxies that satisfy $$\label{eq:Pcut}
P_{\rm int}\equiv \int_{z_{\rm min}}^{z_{\rm max}} P(z)~dz > P_{th},$$ with a threshold integrated probability of $P_{th} = 0.95$. Our main aim here is to mitigate the dilution effects of foreground and cluster member galaxies, and thus the choice of $z_{\rm max}$ is not crucial as long as it does not significantly reduce the number density of source galaxies; we take $z_{\rm max}=3$. Since we do not know in advance the redshifts of the clusters to be located in the mass maps, we take multiple choices of $z_{\rm min}$, namely $z_{\rm min}=0$, 0.2, 0.3, 0.4, 0.5, and 0.6.
The summation of $P(z)$ over the selected galaxies gives a reasonably reliable estimate of the redshift distribution of the source sample[^2]. Taking the lensing weight ($w_i$) into account, we have $$\label{eq:ns}
n_s(z)=\sum_i w_i P_i(z).$$ The effective redshift distributions derived by this method are shown in Fig. \[wsumpdf\_zmax3.0\_zmin\] in comparison with the full galaxy sample. The figure shows that the [*P-cut*]{} method works well in suppressing the probability that the source samples include galaxies outside the given redshift ranges. The mean source galaxy number densities of each sample are summarized in Table \[table:kappa\_map\].
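For illustration, a minimal sketch of the [*P-cut*]{} selection of equation (\[eq:Pcut\]) and the stacked redshift distribution of equation (\[eq:ns\]) might look as follows, assuming that each galaxy's $P(z)$ is tabulated on a common redshift grid (the function and array names are ours, not those of the HSC pipeline):

``` python
import numpy as np

def p_cut_mask(pz, z_grid, z_min, z_max=3.0, p_th=0.95):
    """Boolean mask of galaxies whose integrated P(z) between z_min and
    z_max exceeds p_th, i.e. the P-cut of equation (eq:Pcut).
    pz has shape (N_gal, N_z); z_grid has shape (N_z,)."""
    in_range = (z_grid >= z_min) & (z_grid <= z_max)
    p_int = np.trapz(pz[:, in_range], z_grid[in_range], axis=1)
    return p_int > p_th

def stacked_nz(pz, z_grid, weights, mask):
    """Lensing-weighted sum of P(z) over the selected galaxies,
    n_s(z) = sum_i w_i P_i(z) [equation (eq:ns)], normalized to unity."""
    nz = np.sum(weights[mask, None] * pz[mask], axis=0)
    return nz / np.trapz(nz, z_grid)
```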
Field name Data-region area$^{a}$ \[deg$^2$\]
------------ ------------------------------------ -- --
XMM 26.30
GAMA09H 28.52
WIDE12H 11.45
GAMA15H 27.50
HECTOMAP 9.48
VVDS 16.77
total 120.01
: The effective survey area of each field for the source sample with $z_{\rm min}=0$. The total areas of the other source samples are summarized in Table \[table:kappa\_map\].[]{data-label="table:fields"}
\
$^{a}$ Area after removing regions affected by bright objects (masked-region) and the edge-region, in units of deg$^2$. See Section \[sec:mass-reconstruction\] for the definitions of those regions.
Weak lensing mass reconstruction {#sec:mass-reconstruction}
--------------------------------
The weak lensing mass map, which is the smoothed lensing convergence field ($\kappa$), is evaluated from the tangential shear data as [@1996MNRAS.283..837S] $$\label{eq:shear2kap}
{\cal K }(\bm{\theta})
=\int d^2\bm{\phi}~ \gamma_t(\bm{\phi}:\bm{\theta}) Q(|\bm{\phi}|),$$ where $\gamma_t(\bm{\phi}:\bm{\theta})$ is the tangential component of the shear at position $\bm{\phi}$ relative to the point $\bm{\theta}$, and $Q$ is the filter function for which we adopt the truncated Gaussian function (for $\kappa$ field) [@2012MNRAS.425.2287H], $$\label{eq:Q}
Q(\theta)
={1 \over {\pi \theta^2}}
\left[1-\left(1+{{\theta^2}\over {\theta_G^2}}\right)
\exp\left(-{{\theta^2}\over {\theta_G^2}}\right)\right],$$ for $\theta < \theta_o$ and $Q=0$ elsewhere. The filter parameters should be chosen so that signals (high peaks in weak lensing mass maps) from expected target clusters (i.e. $M>10^{14}h^{-1}M_\odot$ at $0.1<z<0.6$) become largest [see @2004MNRAS.350..893H]. We take $\theta_G=1.5$ arcmin and $\theta_o=15$ arcmin.
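For reference, the truncated Gaussian filter of equation (\[eq:Q\]) with these parameter values can be coded as the following short sketch (angles in arcmin; the function name is ours):

``` python
import numpy as np

def Q_filter(theta, theta_G=1.5, theta_o=15.0):
    """Truncated Gaussian filter of equation (eq:Q); returns zero at
    theta = 0 (where the bracket vanishes faster than theta^2) and
    beyond the truncation radius theta_o.  Angles are in arcmin."""
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    q = np.zeros_like(theta)
    inside = (theta > 0) & (theta < theta_o)
    x2 = (theta[inside] / theta_G) ** 2
    q[inside] = (1.0 - (1.0 + x2) * np.exp(-x2)) / (np.pi * theta[inside] ** 2)
    return q
```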
In our actual computation, ${\cal K}$ is evaluated on regular grid points with a grid spacing of 0.15 arcmin. Since galaxy positions are given in sky coordinates, we use the tangent plane projection to define the grid. On and around regions where no source galaxy is available because the imaging data are affected by bright stars or large nearby galaxies, ${\cal K}$ may not be accurately evaluated. We therefore define a “data-region”, a “masked-region” and an “edge-region” using the distribution of source galaxies as follows. First, for each grid point, we check whether there is a galaxy within 0.75 arcmin (about three times the mean galaxy separation) of the grid point; if not, the grid point is flagged as “no-galaxy”. After performing this procedure for all grid points, all the “no-galaxy” grid points, plus all grid points within 0.75 arcmin of any “no-galaxy” grid point, are defined as the “masked-region”. All masked-regions are excluded from our weak lensing analysis. All grid points located within 1.5 arcmin (a value we set equal to $\theta_G$) of any masked-region grid point are defined as the “edge-region”. All remaining grid points are defined as the “data-region”. Since the sky distribution of galaxies differs among the source samples, we carry out this procedure for every source sample. The total survey area (data-region) of each source sample is summarized in Table \[table:kappa\_map\], and the areas of the six fields for the $z_{\rm min}=0$ sample are summarized in Table \[table:fields\]. The difference in the total areas among the source samples is small, 3 percent at most. The total area of the edge-region is $\sim 30$ deg$^2$, accounting for $\sim 20$ percent of the data- plus edge-region.
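The region classification described above can be sketched with standard morphological operations on the pixel grid; the following simplified flat-sky version (function and array names are ours) assumes a boolean input map recording whether any source galaxy lies within 0.75 arcmin of each grid point:

``` python
import numpy as np
from scipy.ndimage import binary_dilation

def classify_regions(has_galaxy, grid_spacing=0.15, r_mask=0.75, r_edge=1.5):
    """Return boolean maps of the masked-, edge- and data-regions.
    `has_galaxy` is True where at least one source galaxy lies within
    r_mask arcmin of the grid point (the 'no-galaxy' test in the text)."""
    def disk(radius_arcmin):
        n = int(np.ceil(radius_arcmin / grid_spacing))
        y, x = np.mgrid[-n:n + 1, -n:n + 1]
        return (x * x + y * y) * grid_spacing**2 <= radius_arcmin**2
    no_galaxy = ~has_galaxy
    masked = binary_dilation(no_galaxy, structure=disk(r_mask))
    edge = binary_dilation(masked, structure=disk(r_edge)) & ~masked
    data = ~(masked | edge)
    return masked, edge, data
```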
On grid points, ${\cal K }$ is evaluated using equation (\[eq:shear2kap\]), but the integral in that equation is replaced with a summation over galaxies; $$\label{eq:shear2kap_sum}
{\cal K }(\bm{\theta})
={1\over {\bar{n}_g}} \sum_i \hat{\gamma}_{t,i} Q(|\bm{\phi_i}|),$$ where the summation is taken over galaxies within $\theta_o$ from a grid point at $\bm{\theta}$, $\hat{\gamma}_{t,i}$ is an estimate of tangential shear of $i$-th galaxy at the angular position $\bm{\phi_i}$ from the grid point, and $\bar{n}_g$ is the mean galaxy number density (see Section \[sec:global-kap\] and Appendix \[sec:local\_estimator\] for discussion on our choice of the [*global normalization*]{}, and see also [@2011ApJ...735..119S] for a related study). The noise on mass maps coming from intrinsic shapes of galaxies is evaluated on each grid point [@1996MNRAS.283..837S], $$\label{eq:sigma_shape}
\sigma_{\rm shape}^2(\bm{\theta})
={1\over {2 \bar{n}_g^2}} \sum_i \hat{\gamma}_{i}^2 Q^2(|\bm{\phi_i}|).$$ We define the signal-to-noise ratio ($SN$) of weak lensing mass map by $$\label{eq:sn}
SN(\bm{\theta})=
{{{\cal K }(\bm{\theta})}
\over {\langle \sigma_{\rm shape}^2 \rangle^{1/2}}},$$ where $\langle \sigma_{\rm shape}^2 \rangle$ is the mean value over all the grids in the data-region.
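Schematically, the estimators of equations (\[eq:shear2kap\_sum\])–(\[eq:sn\]) can be written as in the following sketch, which omits the shape weights and calibration factors introduced below (`q` denotes the precomputed filter values $Q(|\bm{\phi}_i|)$ of the galaxies within $\theta_o$ of the grid point):

``` python
import numpy as np

def kappa_and_noise(gamma_t, q, nbar_g):
    """Globally normalized convergence and shape-noise estimates at one
    grid point [equations (eq:shear2kap_sum) and (eq:sigma_shape)]."""
    kappa = np.sum(gamma_t * q) / nbar_g
    sigma2 = np.sum(gamma_t**2 * q**2) / (2.0 * nbar_g**2)
    return kappa, sigma2

def sn_map(kappa_map, sigma2_map, data_region):
    """Globally normalized SN of equation (eq:sn): the map is divided by
    the square root of the mean shape-noise variance over the data-region."""
    sigma2_mean = np.mean(sigma2_map[data_region])
    return kappa_map / np.sqrt(sigma2_mean)
```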
Taking into account the lensing weight, which we normalize so that the total weight equals the total number of galaxies (i.e., $\sum_i w_i = N_g$), as well as the shape measurement biases, equation (\[eq:shear2kap\_sum\]) is modified to [@2018PASJ...70S..25M] $$\label{eq:shear2kap_weight}
{\cal K }(\bm{\theta}) =
{1 \over {\bar{n}_g}}
{{\sum_i w_i ( e_{t,i}/2{\cal{R}} -\hat{c}_{t}) Q(|\bm{\phi_i}|)}
\over
{1+\hat{m}}},$$ where $e_{t}$ is the tangential component of the distortion taken from the HSC shape catalog. The sample-averaged multiplicative bias, responsivity factor, and additive bias are given by $$\label{eq:hatm}
\hat{m}=
{{\sum_i w_i m_i}
\over
{\sum_i w_i}},$$ $$\label{eq:resp}
{\cal{R}}=
1-{{\sum_i w_i e_{\mbox{rms},i}^2}
\over
{\sum_i w_i}},$$ and $$\label{eq:hatc}
\hat{c}_t=
{{\sum_i w_i c_{t,i}}
\over
{\sum_i w_i}},$$ where $c_{t,i}$ is the tangential component of the additive bias for each galaxy. Similarly, the expression for the shape noise, equation (\[eq:sigma\_shape\]), is modified to $$\label{eq:sigma_shape_weight}
\sigma_{\rm shape}^2(\bm{\theta}) =
{1\over {2 \bar{n}_g^2}}
{{\sum_i w_i^2 ( e_{t,i}/2{\cal{R}} -\hat{c}_{t})^2 Q^2(|\bm{\phi_i}|)}
\over
{(1+\hat{m})^2}}.$$
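The sample-averaged calibration factors of equations (\[eq:hatm\])–(\[eq:hatc\]) and the weighted estimator of equation (\[eq:shear2kap\_weight\]) amount to simple weighted sums; a minimal sketch (function names are ours):

``` python
import numpy as np

def calibration_terms(w, m, e_rms, c_t):
    """Sample-averaged multiplicative bias, responsivity factor and
    tangential additive bias [equations (eq:hatm), (eq:resp), (eq:hatc)]
    for the galaxies entering a given aperture."""
    wsum = np.sum(w)
    m_hat = np.sum(w * m) / wsum
    resp = 1.0 - np.sum(w * e_rms**2) / wsum
    c_hat_t = np.sum(w * c_t) / wsum
    return m_hat, resp, c_hat_t

def kappa_weighted(w, e_t, q, nbar_g, m_hat, resp, c_hat_t):
    """Weighted, bias-corrected convergence estimate of equation
    (eq:shear2kap_weight); the weights are assumed normalized so that
    sum(w) equals the number of galaxies."""
    return np.sum(w * (e_t / (2.0 * resp) - c_hat_t) * q) / (nbar_g * (1.0 + m_hat))
```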
Peak finding and merging multiple peak catalogs {#sec:peak-finding}
-----------------------------------------------
We apply the weak lensing mass reconstruction to each sample of source galaxies. We define a peak in the resulting mass maps as a grid point whose $SN$ value is higher than those of the surrounding eight grid points. We first select peaks with $SN\ge 4$ located in the data-region. If a pair of peaks has a separation smaller than $\sqrt{2}\times \theta_G \simeq 2.1$ arcmin, the lower-$SN$ peak of the pair is discarded to avoid multiple peaks from a single cluster (arising, for example, from cluster substructure).
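A minimal sketch of this peak finder on the pixel grid might look as follows, assuming a flat-sky $SN$ map with the 0.15 arcmin grid spacing (the greedy pair-removal implementation and the function names are ours):

``` python
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks(sn_map, data_region, grid_spacing=0.15, sn_min=4.0,
               min_sep=np.sqrt(2.0) * 1.5):
    """Local maxima of the SN map (not lower than the surrounding eight
    pixels) with SN >= sn_min in the data-region; for pairs closer than
    min_sep arcmin, the lower-SN peak is discarded."""
    local_max = (sn_map == maximum_filter(sn_map, size=3)) & (sn_map >= sn_min)
    iy, ix = np.nonzero(local_max & data_region)
    order = np.argsort(sn_map[iy, ix])[::-1]      # process highest SN first
    kept = []
    for k in order:
        if kept:
            sep = grid_spacing * np.hypot(iy[kept] - iy[k], ix[kept] - ix[k])
            if np.any(sep < min_sep):
                continue
        kept.append(k)
    kept = np.array(kept, dtype=int)
    return iy[kept], ix[kept], sn_map[iy[kept], ix[kept]]
```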
The numbers of peaks with $SN\ge 5$ for the six source samples are summarized in Table \[table:kappa\_map\]. Note that only peaks located in the data-region are included in the peak catalogs. The same Table lists the mean shape noise values measured from each sample, which scale with the galaxy number density approximately as $\langle \sigma_{\rm shape}^2 \rangle \propto \bar{n}_{g}^{-1}$, as expected [@1996MNRAS.283..837S]. Note that although the shape noise is larger for higher-$z_{\rm min}$ samples, the number of peak detections does not always decrease. This may indicate that our source sample selection with a low-$z$ cut indeed mitigates the dilution effects, which we examine in detail in Section \[sec:dilution\_effects\].
We combine the six catalogs of high peaks ($SN\ge 4$) from the different source samples by matching peak positions with a tolerance of $2\times \theta_G =3$ arcmin. Most peaks have multiple matches. Matched peaks from different source samples are merged and considered as peaks from the same cluster; the highest $SN$ among the matched peaks is taken as the peak $SN$, which we denote $SN_{\rm max}$, and the $z_{\rm min}$ of its source sample is defined as $z_{\rm opt}$. There are 124 merged peaks with $SN_{\rm max}\ge 5$, which we take as our primary sample of cluster candidates. Basic information on those 124 merged peaks is summarized in Table \[table:peaklist\].
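For illustration, the merging step can be sketched as a greedy positional match; the use of flat-sky positions in arcmin and the dictionary bookkeeping below are simplifying assumptions of ours:

``` python
import numpy as np

def merge_peak_catalogs(catalogs, tol=3.0):
    """Merge peak catalogs from different z_min samples.  Each catalog is
    a list of dicts {'x', 'y', 'sn', 'zmin'}; a peak within `tol` arcmin
    of an existing merged peak is grouped with it, and the merged peak
    keeps the highest SN (SN_max) and the corresponding z_min (z_opt)."""
    merged = []
    for cat in catalogs:
        for p in cat:
            for m in merged:
                if np.hypot(p['x'] - m['x'], p['y'] - m['y']) < tol:
                    if p['sn'] > m['sn']:
                        m.update(p)   # adopt the higher-SN position, SN and z_min
                    break
            else:
                merged.append(dict(p))
    return merged
```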
In the last column of Table \[table:kappa\_map\], we present the number of merged peaks for each $z_{\rm opt}=z_{\rm min}$, with the numbers in parentheses being those not present in the $z_{\rm min}=0$ sample. We see that $z_{\rm opt}$ is distributed rather broadly, with a noticeable number in the highest $z_{\rm min}$ sample. We find that 69 out of the 124 merged peaks have $SN(z_{\rm min}=0) < 5$ (that is, their $SN$s measured in the mass maps from the $z_{\rm min}=0$ source sample are smaller than 5). This may be an indication that the dilution effects indeed have a non-negligible influence on peak $SN$s in the $z_{\rm min}=0$ mass maps.
Cross-matching with CAMIRA-HSC clusters {#sec:cross-matching}
=======================================
We cross-match our merged peak catalog with the CAMIRA [Cluster-finding Algorithm based on Multi-band Identification of Red-sequence gAlaxies, @2014MNRAS.444..147O] HSC cluster sample to identify the clusters of galaxies from which the weak lensing peak signals originate. The CAMIRA-HSC cluster sample is based on the HSC S16A dataset [@2018PASJ...70S..20O], the same dataset on which our weak lensing analysis is based, and thus covers our survey fields uniformly except for regions affected by bright objects. We take this optically selected cluster catalog as our primary reference sample because it covers a sufficiently wide range of redshift ($0.1 < z <1.1$) and cluster mass (richness $N_{mem}>15$, where the richness is defined as the effective number of member galaxies with stellar mass greater than $10^{10.2}M_\odot$). For each cluster, the sky coordinates and the cluster redshift based on photo-$z$s are estimated [see @2014MNRAS.444..147O; @2018PASJ...70S..20O for details of the cluster finding algorithm and the definitions of those quantities], which we use in the following analysis. See Appendix \[sec:cross\_matching\] for the results of cross-matching with other selected cluster samples.
We cross-match our merged peak catalog with the CAMIRA-HSC clusters[^3] by their positions, with a tolerance of 5 arcmin. We summarize the results in Table \[table:peaklist\], in which the angular separation between a peak position and the matched CAMIRA-HSC cluster position ($\theta_{\rm sep}$) is given. Since the smoothing scale of the weak lensing mass maps is $\theta_G =1.5$ arcmin, this tolerance radius should be large enough to identify the clusters of galaxies from which the weak lensing peaks originate. However, 17 out of the 124 peaks have no CAMIRA-HSC cluster counterpart (see Appendix \[sec:no\_camira\] for details of those peaks). Among the remaining 107 peaks, 25 have multiple matches (mostly two CAMIRA-HSC clusters, but 3 of the 25 peaks have three matches). There are several possible explanations for such systems: some of these peaks could be due to physically interacting nearby cluster systems, while others could be generated not by a single system but by a line-of-sight projection of multiple clusters [@2004MNRAS.350..893H]. In this paper, we do not go into the details of such multiple-match peaks.
![Black histogram shows the redshift distribution of the weak lensing secure clusters (64 out of 124 merged peaks), whereas the red hatched histogram shows the same clusters but having $SN\ge 5$ in weak lensing mass maps from the source sample of $z_{\rm min}=0$ (36 out of 68 peaks located in $z_{\rm min}=0$ mass maps with $SN\ge 5$). \[npeak\_z0\_z\]](fig2.eps){width="82mm"}
Among the 82 peaks matched with a single CAMIRA-HSC cluster, 64 have CAMIRA-HSC cluster counterparts within 2 arcmin of the peak position. Although it is possible that some of these peaks are affected by line-of-sight projections of small clusters (below the richness threshold of the CAMIRA algorithm), it is highly likely that the major lensing contribution comes from the matched CAMIRA-HSC cluster. We have also visually inspected those systems with HSC $riz$-color images, and found good correlations between the weak lensing mass over-densities and the galaxy concentrations in all cases. We therefore define those 64 peaks as the sample of [*weak lensing secure clusters*]{}, with the redshift (denoted $z_{cl}$) taken from the matched CAMIRA-HSC cluster; we use this sample to investigate the dilution effects in the following section. The redshift distribution of the weak lensing secure clusters is shown in Figure \[npeak\_z0\_z\], together with the distribution of those secure clusters that have $SN\ge 5$ in the weak lensing mass maps from the $z_{\rm min}=0$ source sample. Comparing the two distributions, we see that a large fraction of the clusters at $z_{cl}>0.4$ have peak $SN$s below our threshold of $SN=5$ in the $z_{\rm min}=0$ mass maps, and pass the threshold in the mass maps with $z_{\rm min}\ge 0.2$.
![Distribution of weak lensing secure clusters on the $M_{200c}-z_{cl}$ plane. The cluster masses defined by the spherical overdensity mass $M_{200c}$ are derived by fitting the NFW model to measured weak lensing shear profiles based on the standard likelihood analysis (see Appendix \[sec:cluster\_mass\]), and filled squares and error bars show the peak and 68.3% confidence interval of the posterior distributions. Black (red) symbols are for clusters with the peak $SN\ge 5$ ($<5$) in weak lensing mass maps from the source sample of $z_{\rm min}=0$. \[z\_m\]](fig3.eps){width="82mm"}
We derive the cluster masses of the weak lensing secure clusters by fitting the NFW model to the measured weak lensing shear profiles based on a standard likelihood analysis (see Appendix \[sec:cluster\_mass\] for details). The derived cluster masses are plotted on the cluster mass–redshift plane in Figure \[z\_m\], where black (red) symbols denote clusters with peak $SN\ge 5$ ($<5$) in the weak lensing mass maps from the $z_{\rm min}=0$ source sample. From this Figure, we find that the clusters below the peak height threshold ($SN=5$) in the $z_{\rm min}=0$ mass maps are mostly relatively low-mass clusters at $z_{cl} \gtsim 0.4$.
Dilution effects on weak lensing peaks from clusters {#sec:dilution_effects}
====================================================
[Figure \[gdensprof\_zcl\]: stacked source galaxy number density profiles around the weak lensing secure clusters, shown for four cluster redshift ranges and for each source galaxy sample.]{width="82mm"} {width="82mm"}
The dilution effects on high weak lensing peaks originating from clusters are caused by foreground and cluster member galaxies. Let us first make a rough estimate of the proportions of those galaxies in our source galaxy samples. The estimated redshift distributions of the source samples shown in Figure \[wsumpdf\_zmax3.0\_zmin\] show that the proportion of foreground to background galaxies depends strongly on both the cluster redshift and the source sample, and can exceed 20 percent for high-$z$ clusters in low-$z_{\rm min}$ source samples. We estimate the proportion of cluster member galaxies by measuring the stacked galaxy number density profiles of sub-samples of the weak lensing secure clusters selected by cluster redshift. The measurement is done for every source galaxy sample, and the results are presented in Figure \[gdensprof\_zcl\] for four redshift ranges. We find that, in cluster central regions, a considerable number of cluster member galaxies are contained in the source samples with $z_{\rm min}<z_{cl}$, except for the lowest cluster redshift range. The excess mostly disappears in the source samples with $z_{\rm min}>z_{cl}$. However, we note that the degree of the excess and its suppression vary greatly from cluster to cluster.
We adopt two means to mitigate the dilution effects: one is to use the [*globally normalized*]{} $SN$ estimator, equation (\[eq:sn\]) with equations (\[eq:shear2kap\_sum\]) and (\[eq:sigma\_shape\]); the other is to combine multiple peak catalogs from weak lensing mass maps of source samples with different $z_{\rm min}$. In the following subsections, we first describe the former, and then examine the effectiveness of the latter using the actual source galaxy samples.
Another important point seen in Figure \[gdensprof\_zcl\] is the deficiency of source galaxies in cluster central regions for the lowest redshift cluster sample and for the other samples with $z_{\rm min}>z_{cl}$. There are two possible causes: one is the masking effect of bright cluster galaxies that screen the background galaxies behind them; the other is the lensing magnification effect, which enlarges the sky area behind clusters, resulting in a decrease of the local galaxy number density (for more details, see [@2001PhR...340..291B]; and see [@2019arXiv190902042C] for a measurement of the lensing magnification effect in the HSC data). We do not go into further detail on these two effects, as that is beyond the scope of this paper, but we examine their influence on the peak height using empirical models in Section \[sec:defficiency\_effect\].
The globally normalized $SN$ estimator {#sec:global-kap}
--------------------------------------
Here, we explain how the globally normalized $SN$ estimator defined by equation (\[eq:sn\]) mitigates the dilution effect of cluster member galaxies. We examine the actual advantage of this estimator over the locally normalized estimator in Appendix \[sec:local\_estimator\].
Let us assume the following simple model of the galaxy distribution, consisting of three populations: lensed background galaxies ($bg$), unlensed foreground galaxies ($fg$), and unlensed cluster member galaxies ($cl$), with number densities $n_{bg}$, $n_{fg}$, and $n_{cl}(\theta)$, respectively. Note that we have assumed that only $n_{cl}(\theta)$ has a non-uniform sky distribution, associated with clusters of galaxies. As seen in Figure \[gdensprof\_zcl\], $n_{cl}(\theta)$ can be comparable to $n_{bg}+n_{fg}$ in cluster central regions. However, since the cluster population is very rare on the sky, in what follows we assume that the globally averaged $n_{cl}(\theta)$ is much smaller than $n_{bg}+n_{fg}$, and we take $\bar{n}_g=n_{bg}+n_{fg}$. Then the globally normalized estimator, equation (\[eq:shear2kap\_sum\]), can be formally written as $$\begin{aligned}
\label{eq:shear2kap_sum_G}
{\cal K }_G(\bm{\theta})
&=&{1\over {\bar{n}_g}} \sum_i \hat{\gamma}_{t,i} Q_i \nonumber\\
&=&{1\over {\bar{n}_g}}
\left(
\sum_{i\in bg} \hat{\gamma}_{t,i} Q_i
+\sum_{i\in fg} \hat{\gamma}_{t,i} Q_i
+\sum_{i\in cl} \hat{\gamma}_{t,i} Q_i
\right)\nonumber\\
&=&{1\over {n_{fg}+n_{bg}}}
\sum_{i\in bg} \hat{\gamma}_{t,i} Q_i,\end{aligned}$$ where in going from the second to the third line we have used the fact that the foreground and cluster member galaxies carry no lensing signal. Denoting the galaxy intrinsic ellipticity by $e^{\rm int}$ and its shear-converted counterpart by $\hat{e}=e^{\rm int}/2\cal{R}$, the estimator of the shape noise, equation (\[eq:sigma\_shape\]), can similarly be written as $$\begin{aligned}
\label{eq:sigma_shape_G}
\sigma_{{\rm shape},G}^2(\bm{\theta})
&=&{1\over {2 (n_{fg}+n_{bg})^2}}\nonumber \\
&&\times \left(
\sum_{i\in bg} \hat{e}_i^2 Q_i^2
+\sum_{i\in fg} \hat{e}_i^2 Q_i^2
+\sum_{i\in cl} \hat{e}_i^2 Q_i^2
\right),\end{aligned}$$ where we have ignored the contribution from lensing shear. Taking an average over a survey field, we have, $$\begin{aligned}
\label{eq:sigma_shape_Gave}
\langle \sigma_{{\rm shape},G}^2\rangle
&\simeq& {1\over {2 (n_{fg}+n_{bg})^2}} \nonumber \\
&&\times \left(
\left\langle \sum_{i\in bg} \hat{e}_i^2 Q_i^2 \right\rangle
+\left\langle \sum_{i\in fg} \hat{e}_i^2 Q_i^2 \right\rangle
\right),\end{aligned}$$ where we have again assumed that, on global average, the contribution from the cluster member population is small and can be ignored. Using these expressions, the globally normalized $SN$ defined by equation (\[eq:sn\]) can be written as $$\label{eq:sn_G}
SN_G(\bm{\theta})\simeq
{\sqrt{2}{\sum_{bg} \hat{\gamma}_{t,i} Q_i}
\over
{\left( \bigl\langle \sum_{bg} \hat{e}_i^2 Q_i^2 \bigr\rangle
+\bigl\langle \sum_{fg} \hat{e}_i^2 Q_i^2 \bigr\rangle\right)^{1/2}}}.$$ Note that in the above expression there is no contribution from the cluster member population. Therefore, the globally normalized estimator is, to a good approximation, free from the dilution effect of the cluster member galaxies.
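The same point can be illustrated with a toy Monte Carlo (purely illustrative numbers, with the filter absorbed into a constant weight): adding unlensed members leaves the globally normalized $SN$ unbiased on average, while a locally normalized $SN$ is lowered by roughly $\sqrt{n_{bg}/(n_{bg}+n_{cl})}$.

``` python
import numpy as np

rng = np.random.default_rng(0)
n_bg, n_cl, gamma, sig_e = 2000, 400, 0.03, 0.25   # illustrative values only
n_trial = 2000

e_t = rng.normal(0.0, sig_e, size=(n_trial, n_bg + n_cl))
e_t[:, :n_bg] += gamma              # only the background galaxies are sheared
s = e_t.sum(axis=1)                 # aperture sum with Q set to a constant

sn_global = (s / n_bg) / np.sqrt(sig_e**2 / (2 * n_bg))
sn_local = (s / (n_bg + n_cl)) / np.sqrt(sig_e**2 / (2 * (n_bg + n_cl)))
print(sn_global.mean() / sn_local.mean())   # ~ sqrt((n_bg + n_cl) / n_bg) ~ 1.1
```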
Dilution effect of foreground galaxies {#sec:foreground}
--------------------------------------
![Dependence on $z_{\rm min}$ of $\langle D_{ls}/D_s\rangle_z$ (blue long-dashed lines), $\sigma_{\rm shape}^{-1}$ (black dashed lines), and $SN_{\rm peak}$ (red solid lines). All quantities (denoted $Y(z_{\rm min})$) are normalized by their values at $z_{\rm min}=0$. Different panels correspond to different cluster redshifts, $z_{cl}$, as labeled in each panel. \[fg\_contmi\]](fig5.eps){width="82mm"}
Foreground galaxies have two effects on the weak lensing peak $SN$s of clusters: one is to dilute the lensing signal, and the other is to lower the shape noise level on the mass maps. Below we first derive the relevant expressions for these two effects and evaluate the dilution effect of foreground galaxies using the actual redshift distributions of the source galaxies. We then compare the results with actual measurements using the weak lensing secure cluster sample.
Focusing on the contributions from foreground and background galaxies, from equation (\[eq:shear2kap\_sum\_G\]) the peak signal can be approximately written as $$\label{eq:kap_G_app}
{\cal K }_G(\bm{\theta})
={1\over {n_{fg}+n_{bg}}}
\sum_{i\in bg} \hat{\gamma}_{t,i} Q_i
\propto {{n_{bg} \langle \hat{\gamma}_t \rangle_z}\over {n_{fg}+n_{bg}}},$$ where $\langle \hat{\gamma}_t \rangle_z$ is the source redshift distribution weighted mean tangential shear. Since the source redshift dependence of the tangential shear enters only through the distance ratio, $D_{ls}/D_s$, we can re-write equation (\[eq:kap\_G\_app\]) by $$\label{eq:dratio}
{\cal K }_G(\bm{\theta}) \propto
\left\langle{{D_{ls}} \over {D_s}}\right\rangle_z =
{{\int_{z_{cl}}^\infty dz~n_s(z) D_{ls}(z_{cl},z)/D_{s}(z)}
\over
{\int_0^\infty dz~n_s(z)}
},$$ where $n_s(z)$ is the redshift distribution of source galaxies.
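The source-distribution-weighted distance ratio of equation (\[eq:dratio\]) can be evaluated directly from the $n_s(z)$ of equation (\[eq:ns\]); a sketch using the cosmological parameters adopted in this paper (the function name is ours):

``` python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.279)   # WMAP9-like parameters adopted here

def mean_distance_ratio(z_cl, z_grid, nz):
    """<D_ls/D_s> of equation (eq:dratio) for a cluster at z_cl, weighted
    by the source redshift distribution nz tabulated on z_grid."""
    ratio = np.zeros_like(z_grid, dtype=float)
    behind = z_grid > z_cl
    d_s = cosmo.angular_diameter_distance(z_grid[behind])
    d_ls = cosmo.angular_diameter_distance_z1z2(z_cl, z_grid[behind])
    ratio[behind] = (d_ls / d_s).value
    return np.trapz(nz * ratio, z_grid) / np.trapz(nz, z_grid)
```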
![The $SN(z_{\rm min})$ of the weak lensing secure clusters normalized by $SN(z_{\rm min}=0)$. The weak lensing secure clusters are divided into four sub-samples based on cluster redshift (denoted in the panels). The horizontal axis is the $z_{\rm min}$ of the source galaxy samples. For each sub-sample and each source galaxy sample, the mean and its 1-$\sigma$ error among the clusters (the number of clusters in each sub-sample is given in each panel) are plotted. For comparison, the red lines show the expected $SN$ ratios plotted in Figure \[fg\_contmi\] (red lines) for a cluster at the central redshift of the respective redshift range. \[sn\_zmin\_stats\]](fig6.eps){width="82mm"}
In the same manner, from equation (\[eq:sigma\_shape\_Gave\]) we have $$\label{eq:sigma_shape_Gapp}
\langle \sigma_{{\rm shape},G}^2\rangle
\propto
{{(n_{fg}+n_{bg})\langle\hat{e}^2\rangle}
\over {2 (n_{fg}+n_{bg})^2}}
\propto {{\langle\hat{e}^2\rangle} \over {\bar{n}_g}},$$ where we have ignored a possible redshift dependence of $\langle\hat{e}^2\rangle$. This is the well known scaling relation between the shape noise and galaxy number density [@1996MNRAS.283..837S].
The weak lensing peak $SN$ of a cluster is related to the source-redshift-weighted distance ratio and the shape noise by $$\label{eq:SN-peak}
SN \propto
{ {\langle{{D_{ls}}/ {D_s}}\rangle_z }
\over
{\langle \sigma_{{\rm shape},G}^2 \rangle ^{1/2}}}.$$ Since both the distance ratio and the shape noise depend on the source galaxy sample, so does the peak $SN$ of a cluster. In our case, the source galaxy selection is characterized by $z_{\rm min}$, and we thus evaluate the dependence of these quantities on $z_{\rm min}$ using the redshift distributions of our source samples defined by equation (\[eq:ns\]). Results are shown in Figure \[fg\_contmi\] for cluster redshifts of $z_{cl}=0.15$, 0.25, 0.35, and 0.45. The findings from that figure are as follows. The shape noise, which does not depend on $z_{cl}$, monotonically increases with $z_{\rm min}$, as expected. The source-redshift-weighted distance ratio also increases with $z_{\rm min}$; since the fraction of foreground galaxies is larger for higher redshift clusters, the higher the cluster redshift, the more the distance ratio increases. These two effects compete: for clusters with redshifts lower than 0.3, the weak lensing peak $SN$ decreases with $z_{\rm min}$, whereas for higher redshift clusters the $SN$ stays almost constant or slightly increases with $z_{\rm min}$.
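Combining the two ingredients, the curves in Figure \[fg\_contmi\] amount to the following simple prediction; this sketch reuses the `mean_distance_ratio` helper above and the approximation $\langle\sigma_{\rm shape}^2\rangle \propto 1/\bar{n}_g$, and the dictionary inputs are illustrative:

``` python
import numpy as np

def predicted_sn_ratio(z_cl, z_grid, nz_by_zmin, nbar_by_zmin, zmin_list):
    """SN(z_min)/SN(z_min=0) for a cluster at z_cl [equation (eq:SN-peak)],
    with the shape noise scaled as 1/sqrt(nbar_g).  nz_by_zmin and
    nbar_by_zmin map each z_min (including 0.0) to its n_s(z) and mean
    source number density."""
    def sn(zmin):
        return (mean_distance_ratio(z_cl, z_grid, nz_by_zmin[zmin])
                * np.sqrt(nbar_by_zmin[zmin]))
    return {zmin: sn(zmin) / sn(0.0) for zmin in zmin_list}
```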
We examine the actual dependence of the peak $SN$s on $z_{\rm min}$ using the weak lensing secure clusters. In doing so, we divide the secure clusters into four sub-samples based on cluster redshift (specifically, $0.1<z_{cl}<0.2$, $0.2<z_{cl}<0.3$, $0.3<z_{cl}<0.4$, and $0.4<z_{cl}<0.5$). For each sub-sample, we evaluate the mean of $SN(z_{\rm min})/SN(z_{\rm min}=0)$ and its standard error among the sample clusters. The results are shown in Figure \[sn\_zmin\_stats\]. We find that the measured ratios $SN(z_{\rm min})/SN(z_{\rm min}=0)$ are systematically larger than the expectations shown by the red lines (the same as those plotted in Figure \[fg\_contmi\]), especially for lower-$z$ clusters ($z_{cl}<0.3$). The reason for this is unclear; one possible cause is the intrinsic alignment of galaxies. Because the major axes of cluster neighbor galaxies tend to point toward the cluster center due to intrinsic alignment effects (originating from, e.g., gravitational tidal stretching; see [@2015SSRv..193....1J] for a review, and references therein), intrinsic alignments reduce the peak $SN$ value. Peak $SN$s in the $z_{\rm min}=0$ maps are likely affected by this effect and are therefore likely biased low. If this is the case, it accounts for the systematically larger measured $SN(z_{\rm min})/SN(z_{\rm min}=0)$, though this argument is rather phenomenological. Aside from this systematic difference, the measured ratios are in reasonable agreement with the expectations, both in amplitude and in the increasing trend toward higher-$z$ clusters. From the above findings, we conclude that combining multiple peak catalogs from source samples with different $z_{\rm min}$ can mitigate the dilution effect of foreground galaxies, especially for high-$z$ clusters.
Impact of a source galaxy deficiency on peak heights {#sec:defficiency_effect}
----------------------------------------------------
Here we examine the impact on a peak $SN$ of the source galaxy deficiency in cluster central regions seen in the stacked galaxy number density profiles shown in Figure \[gdensprof\_zcl\]. We note, however, that the deficiency profiles vary greatly from cluster to cluster, as shown in Figure \[gdensprof\_indv\].
![Thin colored lines show the azimuthally averaged galaxy number density, normalized by the mean number density, as a function of the angular separation from the peak position; each colored line corresponds to an individual cluster. Sub-samples of the weak lensing secure clusters are shown: the upper/lower panel is for clusters at $0.1<z_{cl}<0.2$/$0.3<z_{cl}<0.4$ with the source galaxy sample of $z_{\rm min}=0.3$/$0.4$. The thick black lines show the parametric model of equation (\[eq:fcl\_fit\]) (plotted as $[1+f_m(\theta)]$) for the four sets of parameters denoted in the upper panel. \[gdensprof\_indv\]](fig7.eps){width="82mm"}
![[*Top panel*]{}: Three models of the source galaxy deficiency profile; see equation (\[eq:fcl\_fit\]) for the functional form. [*Bottom panels*]{}: Ratios between the expected peak $SN$s of an NFW halo with and without the source galaxy deficiency taken into account, as a function of halo mass (the virial mass is used here). The left panel is for the case $z_{\rm halo}=0.15$ with the galaxy redshift distribution taken from the source sample with $z_{\rm min}=0.2$, whereas the right panel is for $z_{\rm halo}=0.35$ with $z_{\rm min}=0.4$. Different lines correspond to the different deficiency models shown in the top panel. \[fmask\_ratsn\]](fig8.eps){width="82mm"}
We use empirical models of the dark matter halo and a simple model of the galaxy deficiency profile, which we describe below. In the presence of the source galaxy deficiency, the theoretical expression for the lensing signal from clusters, equation (\[eq:shear2kap\]), is modified to $$\label{eq:shear2kap_m}
{\cal K }(\bm{\theta})
=
\int d^2\bm{\phi}~
[1+f_m(\phi)]
\gamma_t(\bm{\phi}:\bm{\theta}) Q(|\bm{\phi}|),$$ where $f_m(\phi)$ is the source galaxy deficiency profile, for which we adopt the parametric function $$\label{eq:fcl_fit}
f_m(\phi) = \max\left\{
a \left( {\pi \over 2} -\tan^{-1}\left({\phi \over {\phi_0}}\right) \right),-1
\right\},$$ where $a$ and $\phi_0$ are the amplitude and scale parameters, respectively. We fit the measured deficiency profiles shown in Figure \[gdensprof\_zcl\] with this function and derive typical parameter values, which we adopt in the following analysis. We consider the three models of $f_m(\phi)$ shown in the top panel of Figure \[fmask\_ratsn\]: the model with $a=-0.25$ mimics the stacked deficiency profile of the case $0.3<z_{cl}<0.4$ with $z_{\rm min} > z_{cl}$, the model with $a=-0.5$ mimics that of $0.1<z_{cl}<0.2$ (see Figure \[gdensprof\_zcl\]), and the model with $a=-0.75$ represents extreme cases of individual clusters shown in Figure \[gdensprof\_indv\]. Since the details of the models needed to compute equation (\[eq:shear2kap\_m\]) are described in @2012MNRAS.425.2287H, here we only summarize the main ingredients and the relevant references:
- Dark matter halos of clusters are modeled by the truncated NFW profile [@2009JCAP...01..015B] with the mass-concentration relations of @2008MNRAS.391.1940M and @2008MNRAS.390L..64D.
- The redshift distributions of source galaxies in our source samples are estimated by equation (\[eq:ns\]); they depend on $z_{\rm min}$ and are presented in Figure \[wsumpdf\_zmax3.0\_zmin\].
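A minimal numerical sketch of equations (\[eq:fcl\_fit\]) and (\[eq:shear2kap\_m\]) for the central value of an azimuthally symmetric profile is given below; the `Q_filter` helper is the filter sketch of Section \[sec:mass-reconstruction\], and the radial-grid integration is a simplifying assumption:

``` python
import numpy as np

def deficiency_profile(phi, a, phi0):
    """Parametric source-deficiency profile f_m(phi) of equation (eq:fcl_fit)."""
    return np.maximum(a * (np.pi / 2.0 - np.arctan(phi / phi0)), -1.0)

def peak_kappa_with_deficiency(phi, gamma_t, a, phi0):
    """Central value of equation (eq:shear2kap_m) for an azimuthally
    symmetric tangential shear profile gamma_t(phi) on the radial grid
    phi (arcmin): K = int 2*pi*phi dphi [1 + f_m(phi)] gamma_t(phi) Q(phi)."""
    integrand = (2.0 * np.pi * phi * (1.0 + deficiency_profile(phi, a, phi0))
                 * gamma_t * Q_filter(phi))
    return np.trapz(integrand, phi)
```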
The results are presented in Figure \[fmask\_ratsn\]. Note, however, that since the deficiency model is a crude approximation and the halo mass dependence of the source deficiency is not taken into account, the results should be regarded as rough estimates of the impact of the source deficiency on a peak $SN$. The bottom panels show the ratios between the expected peak $SN$s with and without the source deficiency taken into account, as a function of halo mass, for the three deficiency models presented in the top panel; the left panel is for $z_{\rm halo}=0.15$ and the right panel for $z_{\rm halo}=0.35$. We find that in both cases the suppression of the $SN$ due to the source deficiency is 2–5 percent for the model with $a=-0.25$, 8–10 percent for $a=-0.5$, and 16–20 percent for the extreme case of $a=-0.75$. We thus conclude that the typical impact of the source deficiency on a peak $SN$ is a suppression of a few to $\sim10$ percent, although it can reach $\sim20$ percent in individual cases. It is also seen in the Figure that, for a given source deficiency model, the suppression decreases with increasing halo mass. The reason is that the relative contribution to a peak $SN$ from galaxies within a fixed aperture is smaller for more massive halos.
Summary and discussions {#sec:summary}
=======================
We have presented a weak lensing cluster search using the HSC first-year data. We generated six samples of source galaxies with different $z_{\rm min}$ cuts ($z_{\rm min}=0$, 0.2, 0.3, 0.4, 0.5, and 0.6), and made weak lensing mass maps for each source sample, in which we searched for high peaks. From each source sample, we detected 68–75 weak lensing peaks with $SN\ge 5$. We compiled the six peak samples into a sample of merged peaks, obtaining 124 weak lensing merged peaks with $SN_{\rm max}\ge 5$, which are candidate clusters of galaxies.
We cross-matched our peak sample with the CAMIRA-HSC clusters [@2018PASJ...70S..20O] to identify the cluster counterparts of the peaks. We found that 107 out of the 124 merged peaks have matched CAMIRA-HSC clusters within 5 arcmin of the peak position. Among the 107 matched peaks, 25 have multiple matches, which might be generated by line-of-sight projections of multiple clusters. Among the remaining 82 peaks matched with a single CAMIRA-HSC cluster, 64 have CAMIRA-HSC cluster counterparts within 2 arcmin of the peak position. We confirmed by visual inspection of the HSC images that, for all 64 peaks, there are good correlations between the weak lensing mass over-densities and the galaxy concentrations. We thus defined those peaks as the sample of [*weak lensing secure clusters*]{}, and used them to examine the dilution effects on our weak lensing peak finding.
In this study, we have paid particular attention to the dilution effect of cluster member and foreground galaxies on weak lensing peak $SN$s, and have adopted two means to mitigate its impact: the globally normalized estimator \[equations (\[eq:shear2kap\_sum\]) and (\[eq:sigma\_shape\])\], and the source galaxy selection with different $z_{\rm min}$ cuts using the full probability distribution function of the galaxy photo-$z$s.
We have demonstrated, using the simple model of galaxy populations introduced in Section \[sec:global-kap\], that the peak $SN$ defined by the globally normalized estimators is, to a good approximation, not affected by the dilution effect of cluster member galaxies. This is in marked contrast to the locally normalized $SN$, which is indeed affected by cluster member galaxies, as demonstrated in Appendix \[sec:local\_estimator\]. We compared the peak heights of the globally normalized $SN_G$ with those of the locally normalized $SN_L$ using our weak lensing mass maps, and found that for the peak samples with $SN_G \ge 5$, the $SN_G$s are, on average, about 10 percent larger than the corresponding $SN_L$s.
In Section \[sec:foreground\], we examined the dilution effect of foreground galaxies and demonstrated the ability of our source galaxy selection to mitigate it. We used the probability distribution function of photo-$z$ and adopted the [*P-cut*]{} method [@2014MNRAS.444..147O] to remove galaxies at $z<z_{\rm min}$, which are foreground galaxies for clusters at $z_{cl} > z_{\rm min}$. This galaxy selection has two competing influences on the weak lensing peak $SN$s of clusters: it mitigates the dilution effect of foreground galaxies, but it also raises the shape noise level because the $z_{\rm min}$ cut reduces the number density of source galaxies. We examined the expected impact of these two factors on the peak $SN$ heights using the estimated redshift distributions of the source samples, and found that for high-$z$ clusters ($z \gtsim 0.3$) the former dominates, leading to a gain in peak $SN$ with increasing $z_{\rm min}$, whereas for low-$z$ clusters ($z \lesssim 0.2$) the latter dominates, leading to a decline.
We examined the actual dependence of the peak $SN$s on the source selection using the weak lensing secure clusters. We measured the ratios $SN(z_{\rm min})/SN(z_{\rm min}=0)$ for four sub-samples of secure clusters divided by cluster redshift (shown in Figure \[sn\_zmin\_stats\]). We found that the measured results were in reasonable agreement with the expectations, both in amplitude and in the increasing trend toward higher-$z$ clusters, except for a systematic offset of about $+5$ to $+10$ percent, which could be due to the intrinsic alignment of cluster neighbor galaxies. From these findings, along with the fact that the number of merged peaks (124 with $SN_{\rm max}\ge 5$) is nearly twice the number from any individual source sample (68–75 with $SN \ge 5$), we conclude that combining multiple peak samples from source samples with different $z_{\rm min}$ indeed improves the efficiency of weak lensing cluster search, especially for high-$z$ clusters.
We have also examined the effect of the source galaxy deficiency on weak lensing peak heights. The source deficiency was clearly observed in the stacked galaxy number density profiles of the secure clusters at cluster central regions, for the cluster sample at $0.1<z_{cl}<0.2$ and for the other samples with $z_{\rm min}>z_{cl}$ (Figure \[gdensprof\_zcl\]). This can be due to the masking effect of bright cluster galaxies and/or the lensing magnification effect. We made a simple model prediction of the source deficiency effect on the peak $SN$ using empirical models of the dark matter halo combined with simple models of the source deficiency profile. We found that for realistic models of the source deficiency, the peak $SN$ is suppressed by a few to $\sim10$ percent.
Since we have focused on the dilution effect, there are some important issues related to weak lensing cluster search that have not been examined in this paper. The three major ones are:
1. The purity of the sample of 124 weak lensing cluster candidates: For each of the 64 weak lensing secure clusters, we found a good correlation between the weak lensing mass over-density and the galaxy concentration, and concluded that those weak lensing signals have a physical relationship with the counterpart CAMIRA-HSC cluster. In 18 of the remaining 60 cases, a peak is matched with a single CAMIRA-HSC cluster but the separation is larger than 2 arcmin; the physical connection between those weak lensing mass over-densities and the clusters is unclear, and is a subject for future study. Of the remaining 42 peaks, 25 have multiple CAMIRA-HSC clusters within 5 arcmin; in 23 of those 25 cases, the matched clusters of the same peak are separated in the redshift direction by $\Delta z > 0.1$, so those peaks are likely affected by line-of-sight projections of physically unrelated clusters, though detailed investigations of each peak are required to reveal their real nature. Finally, the remaining 17 peaks have no matched CAMIRA-HSC cluster; for these we searched for possible counterpart clusters in a database of known clusters taken from a compilation by the [NASA/IPAC Extragalactic Database]{} (NED[^4]). The results are presented in Appendix \[sec:no\_camira\]. For 10 of the 17 peaks, possible counterpart clusters are found (see Table \[table:no\_camira\]). In Figure \[riz\_image\], we show HSC $riz$ composite images of the remaining 7 peaks (for which no counterpart cluster was found); good correlations between the weak lensing mass over-density and the galaxy concentration are seen in some of those systems. Clearly, this information is not sufficient to evaluate the purity of our sample; further follow-up studies combining information from other wavelengths (for example, X-ray and Sunyaev-Zel’dovich effect data) are required.
2. Weak lensing mass estimates of our cluster candidates: Although cluster masses derived from weak lensing analysis would add valuable information to our sample, accurate determination of the cluster redshift, as well as careful treatment of line-of-sight projections of uncorrelated objects, is required to estimate weak lensing masses accurately. We have derived weak lensing cluster masses only for the weak lensing secure clusters, which show good correlations between the weak lensing mass peak and the galaxy overdensity (see Appendix \[sec:cluster\_mass\]). Since the remaining weak lensing peaks have either multiple CAMIRA-HSC cluster counterparts or a poorly correlated/no CAMIRA-HSC cluster counterpart, further detailed studies of the individual systems are needed to derive their cluster masses, which we leave for a future study.
3. The masking effect of bright cluster galaxies and the lensing magnification effect on weak lensing peak finding: As discussed above, we have seen an observational indication of these effects as the deficiency of source galaxies in cluster central regions, and have examined their impact on the peak $SN$ in Section \[sec:defficiency\_effect\]. Since these effects are unavoidable in weak lensing cluster search, a further detailed study of them is important for cosmological applications of weak-lensing-selected clusters. It is, however, beyond the scope of this paper, and we leave it for a future study.
Weak lensing mass maps contain rich cosmological information beyond that obtained from analyses of the cosmic shear power spectrum or two-point correlation function [see, for example, @2010MNRAS.402.1049D; @2011PhRvD..84d3529Y; @2013PhRvD..88l3002P; @2017MNRAS.466.2402S]. However, in this study we have shown that if a source galaxy sample is selected by, for example, a simple magnitude cut, the dilution effects may alter the $SN$s of high peaks by a non-negligible amount, and thus may modify the statistical properties of weak lensing mass maps. Therefore, when using weak lensing mass maps for cosmological applications, the dilution effects as well as the source deficiency effect should be taken into account. We note that these effects depend on the source sample, and thus should be examined on a case-by-case basis. At the same time, developing source galaxy selection methods that can mitigate the dilution effects is another important subject in this research field.
We would like to thank Masamune Oguri for useful comments. We thank Nick Kaiser for making the software [imcat]{} publicly available, and the [ds9]{} developers for making [ds9]{} publicly available; we have made heavy use of these software packages in this study. We also thank the HSC data analysis software team for their effort in developing the data processing software suite, and the HSC data archive team for their effort in building and maintaining the HSC data archive system.
This work was supported in part by JSPS KAKENHI Grant Number JP17K05457. MS is supported by JSPS Overseas Research Fellowships.
Data analysis were in part carried out on PC cluster at Center for Computational Astrophysics, National Astronomical Observatory of Japan. Numerical computations were in part carried out on Cray XC30 and XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan, and also on Cray XC40 at YITP in Kyoto University.
The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University. This paper makes use of software developed for the Large Synoptic Survey Telescope. We thank the LSST Project for making their code available as free software at <http://dm.lsst.org>
The Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen’s University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE) and the Los Alamos National Laboratory.
Based in part on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by Subaru Telescope and Astronomy Data Center at National Astronomical Observatory of Japan.
, G. O., [Corwin]{}, Harold G., J., & [Olowin]{}, R. P. 1989, , 70, 1
, C., [Giles]{}, P., [Koulouridis]{}, E., [et al.]{} 2018, , 620, A5
, H., [Armstrong]{}, R., [Bickerton]{}, S., [et al.]{} 2018, , 70, S8
, H., [Arimoto]{}, N., [Armstrong]{}, R., [et al.]{} 2018, , 70, S4
, H., [AlSayyad]{}, Y., [Ando]{}, M., [et al.]{} 2019, , 106
, S. W., [Evrard]{}, A. E., & [Mantz]{}, A. B. 2011, , 49, 409
, E. A., [Marshall]{}, P., & [Oguri]{}, M. 2009, , 1, 15
, M., & [Schneider]{}, P. 2001, , 340, 291
, J., [Armstrong]{}, R., [Bickerton]{}, S., [et al.]{} 2018, , 70, S5
, I.-N., [Umetsu]{}, K., [Murata]{}, R., [Medezinski]{}, E., & [Oguri]{}, M. 2019, arXiv e-prints, arXiv:1909.02042
, N., [Adami]{}, C., [Lieu]{}, M., [et al.]{} 2014, , 444, 2723
, J. P., & [Hartlap]{}, J. 2010, , 402, 1049
, A. R., [Schaye]{}, J., [Kay]{}, S. T., & [Dalla Vecchia]{}, C. 2008, , 390, L64
, F., [Adami]{}, C., [Cappi]{}, A., [et al.]{} 2011, , 535, A65
, S., [Abdalla]{}, F. B., [Cypriano]{}, E. S., [Sabiu]{}, C., & [Blake]{}, C. 2011, , 417, 1402
, H., [Koike]{}, M., [Takata]{}, T., [et al.]{} 2018, , 70, S3
, T., [Sekiguchi]{}, M., [Nichol]{}, R. C., [et al.]{} 2002, , 123, 1807
, D., [Seitz]{}, S., [Becker]{}, M. R., [Friedrich]{}, O., & [Mana]{}, A. 2015, , 449, 4264
, T., [Oguri]{}, M., [Shirasaki]{}, M., & [Sato]{}, M. 2012, , 425, 2287
, T., [Sakurai]{}, J., [Koike]{}, M., & [Miller]{}, L. 2015, , 67, 34
, T., [Takada]{}, M., & [Yoshida]{}, N. 2004, , 350, 893
, J., [McKay]{}, T. A., [Koester]{}, B. P., [et al.]{} 2010, , 191, 254
, C., [Van Waerbeke]{}, L., [Miller]{}, L., [et al.]{} 2012, , 427, 146
, C., [Oguri]{}, M., [Hamana]{}, T., [et al.]{} 2019, , 71, 43
, G., [Larson]{}, D., [Komatsu]{}, E., [et al.]{} 2013, , 208, 19
, C., & [Seljak]{}, U. 2003, , 343, 459
, H. 2003, , 339, 1155
, [Ž]{}., [Kahn]{}, S. M., [Tyson]{}, J. A., [et al.]{} 2019, , 873, 111
, B., [Cacciato]{}, M., [Kitching]{}, T. D., [et al.]{} 2015, , 193, 1
, N. 1995, , 439, L1
, S., [Uraguchi]{}, F., [Komiyama]{}, Y., [et al.]{} 2018, , 70, 66
, B. P., [McKay]{}, T. A., [Annis]{}, J., [et al.]{} 2007, , 660, 239
, Y., [Obuchi]{}, Y., [Nakaya]{}, H., [et al.]{} 2018, , 70, S2
, A. V., & [Borgani]{}, S. 2012, , 50, 353
, R., [Gondoin]{}, P., [Duvet]{}, L., [et al.]{} 2012, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8442, [Euclid: ESA’s mission to map the geometry of the dark universe]{}, 84420T
, A. V., [Dutton]{}, A. A., & [van den Bosch]{}, F. C. 2008, , 391, 1940
, R., [Miyatake]{}, H., [Hamana]{}, T., [et al.]{} 2018, , 70, S25
, R., [Lanusse]{}, F., [Leauthaud]{}, A., [et al.]{} 2018, , 481, 3170
, E., [Oguri]{}, M., [Nishizawa]{}, A. J., [et al.]{} 2018, , 70, 30
, H., [Battaglia]{}, N., [Hilton]{}, M., [et al.]{} 2019, , 875, 63
, S., [Hamana]{}, T., [Shimasaku]{}, K., [et al.]{} 2002, , 580, L97
, S., [Oguri]{}, M., [Hamana]{}, T., [et al.]{} 2018, , 70, S27
, S., [Komiyama]{}, Y., [Kawanomoto]{}, S., [et al.]{} 2018, , 70, S1
, J. F., [Frenk]{}, C. S., & [White]{}, S. D. M. 1997, , 490, 493
, M. 2014, , 444, 147
, M., [Lin]{}, Y.-T., [Lin]{}, S.-C., [et al.]{} 2018, , 70, S20
, F., [Clerc]{}, N., [Giles]{}, P. A., [et al.]{} 2016, , 592, A2
, A., [Haiman]{}, Z., [Hui]{}, L., [May]{}, M., & [Kratochvil]{}, J. M. 2013, , 88, 123002
, M., [Pacaud]{}, F., [Adami]{}, C., [et al.]{} 2016, , 592, A1
, R., [Arnaud]{}, M., [Pratt]{}, G. W., [Pointecouteau]{}, E., & [Melin]{}, J. B. 2011, , 534, A109
, G. W., [Arnaud]{}, M., [Biviano]{}, A., [et al.]{} 2019, , 215, 25
, G., [Laureijs]{}, R., & [Mellier]{}, Y. 2018, in 42nd COSPAR Scientific Assembly, Vol. 42, E1.16–3–18
, F., & [Rozo]{}, E. 2011, , 735, 119
, P. 1996, , 283, 837
, M., [Nishimichi]{}, T., [Li]{}, B., & [Higuchi]{}, Y. 2017, , 466, 2402
, M., [Coupon]{}, J., [Hsieh]{}, B.-C., [et al.]{} 2018, , 70, S9
, K., [Sereno]{}, M., [Lieu]{}, M., [et al.]{} 2020, , 890, 148
, Z. L., & [Han]{}, J. L. 2011, , 734, 68
, Z. L., [Han]{}, J. L., & [Liu]{}, F. S. 2009, , 183, 197
, X., [Kratochvil]{}, J. M., [Wang]{}, S., [et al.]{} 2011, , 84, 043529
Cross-matching with selected cluster catalogs {#sec:cross_matching}
=============================================
------------ --------------------------------- ---------- ------------------------ ----------------------
ID Cluster name $z_{cl}$ $\theta_{\rm sep}{}^a$ Ref
\[arcmin\]
HWL16a-001 - - - -
HWL16a-011 - - - -
HWL16a-015 CFHTLS W1-2593 0.30 1.4
CFHTLS W1-2588 0.68 4.1
CFHT-W CL J022757.5$-$053537 0.32 4.9 @2011ApJ...734...68W
HWL16a-018 CFHT-W CL J023111.0$-$0536 0.67 1.9 @2011ApJ...734...68W
CFHTLS W1-2864 0.60 2.8
CFHTLS W1-2588 0.68 2.9
CFHTLS W1-2589 1.00 3.2
HWL16a-061 - - - -
HWL16a-065 SDSS CE J213.904556$-$00.069648 0.29 1.4 @2002AJ....123.1807G
WHL J141527.6$-$000319 0.15 1.5 @2009ApJS..183..197W
SDSS CE J213.843536$-$00.001681 0.29 4.1 @2002AJ....123.1807G
SDSS CE J213.919922$+$00.023597 0.33 4.9 @2002AJ....123.1807G
HWL16a-066 SDSS CE J214.633743$-$00.016635 0.23 3.3 @2002AJ....123.1807G
HWL16a-067 SDSS CE J214.788757$+$00.220532 0.44 1.5 @2002AJ....123.1807G
HWL16a-074 GMBCG J216.67104$-$00.08426 0.39 1.1 @2010ApJS..191..254H
SDSS CE J216.649841$-$00.110289 0.27 1.2 @2002AJ....123.1807G
GMBCG J216.63912$-$00.10900 0.25 1.4 @2010ApJS..191..254H
SDSS CE J216.635178$-$00.044207 0.42 3.0 @2002AJ....123.1807G
GMBCG J216.67010$-$00.03407 0.40 3.5 @2010ApJS..191..254H
HWL16a-079 SDSS CE J216.867157$-$00.209108 0.18 0.7 @2002AJ....123.1807G
SDSS CE J216.868240$-$00.171960 0.35 1.6 @2002AJ....123.1807G
SDSS CE J216.852905$-$00.249845 0.18 3.2 @2002AJ....123.1807G
HWL16a-089 SDSS CE J221.044815$+$00.172764 0.30 0.2 @2002AJ....123.1807G
GMBCG J221.00862$+$00.12188 0.29 3.6 @2010ApJS..191..254H
HWL16a-092 FAC2011 CL 0061 0.62 1.2 @2011MNRAS.417.1402F
GMBCG J221.15835$+$00.19581 0.43 3.1 @2010ApJS..191..254H
WHL J144437.5$+$001402 0.31 3.7 @2009ApJS..183..197W
SDSS CE J221.230865$+$00.138749 0.27 4.0 @2002AJ....123.1807G
MaxBCG J221.20075$+$00.12862 0.29 4.4 @2007ApJ...660..239K
SDSS CE J221.138031$+$00.233130 0.30 4.7 @2002AJ....123.1807G
HWL16a-096 - - - -
HWL16a-118 - - - -
HWL16a-121 WHL J223540.8$+$012906 0.058 0.3 @2009ApJS..183..197W
MCXC J2235.6$+$0128 0.060 0.8
ABELL 2457 0.059 1.5 @1989ApJS...70....1A
HWL16a-122 - - - -
HWL16a-123 - - - -
------------ --------------------------------- ---------- ------------------------ ----------------------
[$^a$ The angular separation between the weak lensing peak position and the cluster position.]{}
[Figure \[riz\_image\]: HSC $riz$-band composite images around the weak lensing peaks. Panels: (a) HWL16a-001; (b) HWL16a-011; (c) HWL16a-061; (d) HWL16a-096; (e) HWL16a-118; (f) HWL16a-122; (g) HWL16a-123; (h) HWL16a-007 (lower peak) and 008 (upper peak); (i) HWL16a-021 (lower peak) and 022 (upper peak); (j) HWL16a-044 (lower peak) and 045 (upper peak); (k) HWL16a-062 (right peak) and 063 (left peak); (l) HWL16a-068 (upper peak) and 069 (lower peak); (m) HWL16a-085 (lower peak) and 086 (upper-left peak); (n) HWL16a-105 (lower peak) and 106 (upper peak).]
Cluster counterparts of weak lensing peaks with no CAMIRA-HSC cluster counterpart {#sec:no_camira}
---------------------------------------------------------------------------------
Among the 124 weak lensing merged peaks, 17 peaks have no CAMIRA-HSC cluster counterpart within a 5 arcmin radius of the peak positions. Note, however, that one of them, HWL16a-121, is a known cluster (Abell 2457) at $z=0.059$, whose redshift lies outside the coverage of the CAMIRA algorithm [@2018PASJ...70S..20O].
We search for cluster counterparts of those peaks in a known cluster database taken from the compilation of the [NASA/IPAC Extragalactic Database]{} (NED). Clusters matched within a 5 arcmin radius of the peak positions are summarized in Table \[table:no\_camira\]. For 10 out of the 17 peaks, possible counterpart clusters are found. For the remaining 7 peaks, we present HSC $riz$-band composite images in Figure \[riz\_image\] \[panels (a)–(g)\]. In that figure, we find apparent galaxy concentrations near the weak lensing peaks HWL16a-001, 011, 096, 118, and 122, suggesting that those peaks are not necessarily false signals and that undiscovered counterpart clusters may exist.
In summary, combining the results of cross-matching with the CAMIRA-HSC cluster catalog and with the known cluster database, we have found possible counterpart clusters for 117 out of 124 weak lensing peaks. However, since our matching is based on a simple positional correlation, some of the matches may be chance alignments. Future follow-up studies of individual peaks on a case-by-case basis are required to establish physical connections between the weak lensing peaks and the matched clusters.
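This kind of simple positional matching is straightforward to reproduce. The Python sketch below is only a minimal illustration, not our actual matching code: the coordinate arrays, the toy catalog and the 5 arcmin tolerance are placeholders, and survey masks are ignored.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def match_peaks_to_clusters(peak_ra, peak_dec, cl_ra, cl_dec, radius_arcmin=5.0):
    """For each peak, return the indices and separations (arcmin) of all
    catalog clusters within `radius_arcmin`, sorted by separation.
    Inputs are arrays of RA/Dec in degrees."""
    peaks = SkyCoord(ra=peak_ra * u.deg, dec=peak_dec * u.deg)
    clusters = SkyCoord(ra=cl_ra * u.deg, dec=cl_dec * u.deg)
    matches = []
    for p in peaks:
        sep = p.separation(clusters).to(u.arcmin).value
        inside = np.where(sep < radius_arcmin)[0]
        order = inside[np.argsort(sep[inside])]   # nearest counterpart first
        matches.append(list(zip(order, sep[order])))
    return matches

# toy usage with made-up peak and catalog coordinates
peaks_ra, peaks_dec = np.array([213.90, 221.04]), np.array([-0.07, 0.17])
cat_ra, cat_dec = np.array([213.904556, 221.044815]), np.array([-0.069648, 0.172764])
print(match_peaks_to_clusters(peaks_ra, peaks_dec, cat_ra, cat_dec))
```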
Cross-matching with XXL clusters {#sec:xxl}
--------------------------------
have presented a sample of 365 clusters of galaxies detected in the XXL Survey, which is a wide-field and deep X-ray imaging survey conducted with [*XMM-Newton*]{}. The XXL survey consists of two survey fields, each covering a $\sim25$ deg$^2$ area, and its north field (XXL-N) largely overlaps with our XMM field [see Figure 1 of @2020ApJ...890..148U]. Since the selection function of XXL clusters with respect to the cluster mass and redshift covers that of our clusters well [see, e.g., Fig 12 of @2018PASJ...70S..27M], the XXL cluster sample provides good reference data to test the completeness of our weak lensing clusters.
Among the 23 weak lensing merged peaks in the XMM field (HWL16a-001–023), 14 peaks are located on the XXL survey footprints. 11 out of the 14 peaks have XXL cluster counterparts (see Table \[table:peaklist\]). The peaks with no XXL cluster counterpart are HWL16a-015, 018, and 019, for which we give brief descriptions below, though future detailed investigations of each peak are required to reveal their real nature:
- HWL16a-015 is matched with XLSSC 074 [@2014MNRAS.444.2723C], which is not contained in the XXL 365 cluster sample. There are three known clusters within 5 arcmin of the peak position; see Appendix \[sec:no\_camira\] for details.

- HWL16a-018 has no CAMIRA-HSC cluster counterpart. There are four known clusters within 5 arcmin of the peak position; see Appendix \[sec:no\_camira\] for details.

- HWL16a-019 has one CAMIRA-HSC cluster counterpart (CAMIRA-ID 388, $z_{cl}=1.011$), but the separation between them is 5.0 arcmin. Therefore, the physical connection between the weak lensing peak and CAMIRA-HSC 388 is uncertain. There is one known cluster, CFHTLS W1-2587 ($z_{cl}=0.30$), with an angular separation of 1.2 arcmin.
We note that HWL16a-021 and HWL16a-022 are both matched with the same XXL cluster, XLSSC 151 at $z=0.189$. They are also both matched with the same CAMIRA-HSC cluster (CAMIRA-ID 355, estimated redshift $z=0.276$). In fact, those two are a close pair of peaks with a separation of $\sim 3$ arcmin (see Figure \[riz\_image\] (i)), though they are identified as two individual peaks under our peak identification criteria (described in section \[sec:peak-finding\]). It is seen in Figure \[riz\_image\] (i) that the X-ray cluster XLSSC 151 is at the peak position of HWL16a-021, whereas CAMIRA-HSC cluster 355 matches better with HWL16a-022. Considering the difference in the redshifts of those two clusters, it is likely that the twin peaks of the weak lensing $SN$ map arise from a chance line-of-sight projection of two physically separated clusters. If this is the case, HWL16a-022 is another weak lensing peak having no XXL cluster counterpart.
Cross-matching with weak lensing peaks in @2018PASJ...70S..27M {#sec:M18}
--------------------------------------------------------------
@2018PASJ...70S..27M [M18 hereafter] presented a sample of weak lensing peaks detected in mass maps constructed from the HSC first-year shape catalog [@2018PASJ...70S..25M] that we also used in this study. Although their method of weak lensing mass map construction is very similar to ours, differences in the source galaxy selection and the peak identification criteria result in different peak samples. Their peak sample contains 65 peaks with $SN>4.7$. We cross-match their peaks with our extended-sample (peaks with $SN\ge 4$ are included) by their peak positions with a tolerance of $\theta_G=1.5$ arcmin. Among their 65 peaks, 51 peaks are matched with our final merged peaks ($SN\ge 5$, see Table \[table:peaklist\]). The remaining 14 peaks fall into the following five categories:
1. [\[M18 rank 51\]]{}: A corresponding peak exists in our final merged sample (HWL16a-054), but their separation is 2.6 arcmin which exceeds the tolerance length.
2. [\[M18 rank 49, 55, and 60\]]{}: A matched peak exists in our extended-sample with $5 > SN_{\rm max}\ge 4$, but its $SN_{\rm max}$ is below our threshold.
3. [\[M18 rank 7, 27, 31, 41 and 43\]]{}: A corresponding peak exists in our extended-sample with $SN_{\rm max}\ge 4$, but is located in the edge-region.
4. [\[M18 rank 37 and 38\]]{}: No corresponding peak exists in our extended-sample. Note that both the peaks are located in our edge-region.
5. [\[M18 rank 15, 45 and 46\]]{}: Those peaks are located in our masked-regions where we have not performed weak lensing analysis.
In summary, among the 65 peaks in the M18 sample, all 55 peaks located in our data-regions have counterpart peaks in our extended-sample (including M18 rank 51–HWL16a-054).
Systems of neighboring weak lensing peaks {#sec:neighboring_peaks}
=========================================
In weak lensing mass maps, there are systems of neighboring peaks; these correspond either to two isolated clusters, or to a single cluster with substructure or undergoing a merger. Distinguishing clusters' dynamical states from weak lensing information alone is practically impossible. Nevertheless, we have adopted the simple criterion that neighboring peaks separated by more than $\sqrt{2}\times \theta_G \simeq 2.1$ arcmin are regarded as two isolated peaks (see Section \[sec:peak-finding\]). Consequently, our final peak catalog (Table \[table:peaklist\]) contains systems of neighboring peaks whose dynamical states are ambiguous.
Here we describe those systems of neighboring peaks that have a common CAMIRA-HSC cluster within 5 arcmin of both peaks. There are seven such systems, whose HSC $riz$ composite images are shown in Figure \[riz\_image\] \[panels (h)–(n)\]. Below we give short descriptions of them:
- HWL16a-007 and 008 \[Figure \[riz\_image\] (h)\]: Although both peaks have a common CAMIRA-HSC cluster (ID-149, $z_{cl}=0.287$), they match with different XXL clusters, XLSSC 111 ($z=0.299$) and XLSSC 117 ($z=0.300$). Thus these are likely two isolated clusters at very close redshifts.
- HWL16a-021 and 022 \[Figure \[riz\_image\] (i)\]: See Appendix \[sec:xxl\].
- HWL16a-044 and 045 \[Figure \[riz\_image\] (j)\]: CAMIRA-HSC cluster (ID-870, $z_{cl}=0.260$) matches better with HWL16a-045. Not enough information is available to infer the physical connection between those two peaks.
- HWL16a-062 and 063 \[Figure \[riz\_image\] (k)\]: CAMIRA-HSC cluster (ID-1103, $z_{cl}=0.144$) matches better with HWL16a-062. Another cluster (ID-1105, $z_{cl}=1.105$) is probably an unrelated high-$z$ cluster, as no associated lensing signal appears. Besides these, no cluster counterpart of HWL16a-063 is found. However, a clear correlation between HWL16a-063 and an apparent concentration of bright galaxies is seen.
- HWL16a-068 and 069 \[Figure \[riz\_image\] (l)\]: Both peaks match with the CAMIRA-HSC clusters (ID-1155, $z_{cl}=0.322$) and (ID-1157, $z_{cl}=0.515$). Because of the difference in the cluster redshifts, the lensing signals likely originate from line-of-sight projections of the two clusters at different redshifts.
- HWL16a-085 and 086 \[Figure \[riz\_image\] (m)\]: HWL16a-085 matches better with CAMIRA-HSC cluster (ID-1339, $z_{cl}=0.536$), whereas HWL16a-086 matches better with CAMIRA-HSC cluster (ID-1344, $z_{cl}=0.149$). Another cluster (ID-1341, $z_{cl}=0.884$) is probably an unrelated high-$z$ cluster, as no associated lensing signal appears. Thus the two peaks originate from two isolated clusters.
- HWL16a-105 and 106 \[Figure \[riz\_image\] (n)\]: CAMIRA-HSC cluster (ID-1628, $z_{cl}=0.100$) matches better with HWL16a-106. No apparent lensing signal associated with another cluster (ID-1680, $z_{cl}=0.357$) appears. Not enough information is available to infer the physical connection between those two peaks.
Cluster mass estimate {#sec:cluster_mass}
=====================
[Figure \[fig:sufmass\_fit\]: Measured excess surface mass density profiles $\Delta \Sigma(R)$ of the weak lensing secure clusters, compared with the best-fit NFW models (see text).]
Here we present the results of cluster mass estimates for the weak lensing peaks that meet the following two conditions:
1. Those peaks should be classified as weak lensing secure clusters (see Section \[sec:cross-matching\]) to ensure the presence of a secure cluster counterpart, and to avoid systems with line-of-sight projection. The latter is required as our cluster model (described below) assumes a single dark matter halo.
2. The cluster redshift should be lower than 0.7 to ensure a sufficient number density of background galaxies for the measurement of the weak lensing shear profile (described below).
In total, 61 weak lensing secure clusters meet these conditions (see Table \[table:clustermass\]).
We derive cluster masses by fitting the NFW model to the measured weak lensing shear profiles using a standard likelihood analysis. We employ the weak lensing mass estimate procedure of @2020ApJ...890..148U, who used the same HSC first-year shear catalog as the one used in this study, allowing us to follow their procedure closely. Since details of the procedure are described in @2020ApJ...890..148U [and see the references therein], below we describe those aspects that are directly relevant to this study.
For each cluster, we select background galaxies using the $P$-cut method (see Section 3.4 of [@2020ApJ...890..148U], and see also [@2018PASJ...70...30M]) with the cluster redshift taken from the estimated redshift of matched CAMIRA-HSC clusters[^5], and we measure the azimuthally averaged tangential shear ($\gamma_t$) which relates to the excess surface mass density $\Delta \Sigma$ as [@1995ApJ...439L...1K] $$\label{eq:gamma2Sigma}
\gamma_t(R)={{\bar{\Sigma}(<R)-\Sigma(R)} \over
{\Sigma_{cr}(z_{cl},z_s)}}
\equiv {{\Delta \Sigma(R)} \over
{\Sigma_{cr}(z_{cl},z_s)}},$$ where $\Sigma(R)$ is the azimuthally averaged surface mass density at $R$, ${\bar{\Sigma}}(<R)$ denotes the average surface mass density interior to $R$, and $\Sigma_{cr}(z_{cl},z_s)$ is the critical surface mass density. We take the peak positions as the cluster centers, and we measure $\gamma_t(R)$ in 5 radial bins of equal logarithmic spacing of $\Delta \log R = 0.25$ with bin centers of $R_c(i)=0.3\times 10^{i\Delta \log R}[h^{-1}$Mpc\] where $i$ runs from 0 to 4. We use the photo-$z$ PDFs of background galaxies to evaluate $\Sigma_{cr}(z_{cl},z_s)$ following @2020ApJ...890..148U [Section 3.2]. Resulting $\Delta \Sigma(R)$ are shown in Figure \[fig:sufmass\_fit\].
We adopt the NFW model [@1997ApJ...490..493N] to make the model prediction of the weak lensing shear profile of a cluster. The spherical NFW density profile is specified by two parameters, the characteristic density parameter ($\rho_s$) and the scale radius ($r_s$), as $\rho_{\rm NFW}(r)=\rho_s/[r(r+r_s)^2]$. We define the halo mass by the overdensity mass ($M_\Delta$), which is given by integrating the halo density profile out to the corresponding overdensity radius ($r_\Delta$) at which the mean interior density is $\Delta \times \rho_{cr}(z)$. The corresponding concentration parameter is defined by $c_\Delta=r_\Delta/r_s$. For a given set of ($M_\Delta, c_\Delta$), which are our primary interest, the NFW parameters ($\rho_s, r_s$) are uniquely determined, and thus $\Delta \Sigma(R)$ is as well. Therefore we take ($M_\Delta, c_\Delta$) as the fitting parameters in the likelihood analysis. We consider two cases, $\Delta=200$ and $\Delta=500$.
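As an illustration of how the NFW prediction entering the fit can be evaluated, the sketch below computes $\Delta\Sigma(R)$ for a given $(M_\Delta, c_\Delta)$ by numerically projecting the NFW profile along the line of sight. It is only a sketch, not the actual fitting code: the flat-$\Lambda$CDM parameters and the units ($M_\odot$ and physical Mpc rather than $h^{-1}$ units) are illustrative assumptions, and the profile is written in the standard dimensionless form $\rho(r)=\rho_s/[(r/r_s)(1+r/r_s)^2]$ with $\rho_s$ defined accordingly.

```python
import numpy as np
from scipy.integrate import quad

def delta_sigma_nfw(R, M_delta, c_delta, z, Delta=200.0, h=0.7, Om=0.3):
    """Excess surface mass density Delta-Sigma(R) [M_sun/Mpc^2] of a spherical
    NFW halo with mass M_delta [M_sun] and concentration c_delta at redshift z.
    R is the projected radius in Mpc."""
    # critical density at z for an assumed flat LCDM cosmology
    rho_cr = 2.775e11 * h**2 * (Om * (1.0 + z)**3 + 1.0 - Om)   # M_sun/Mpc^3
    # (rho_s, r_s) from (M_delta, c_delta)
    r_delta = (3.0 * M_delta / (4.0 * np.pi * Delta * rho_cr))**(1.0 / 3.0)
    r_s = r_delta / c_delta
    rho_s = M_delta / (4.0 * np.pi * r_s**3
                       * (np.log(1.0 + c_delta) - c_delta / (1.0 + c_delta)))

    def rho(r):                       # 3D NFW density
        x = r / r_s
        return rho_s / (x * (1.0 + x)**2)

    def sigma(Rp):                    # surface density: line-of-sight projection
        return 2.0 * quad(lambda l: rho(np.hypot(Rp, l)), 0.0, 50.0 * r_delta)[0]

    def sigma_bar(Rp):                # mean surface density within Rp
        return 2.0 / Rp**2 * quad(lambda x: x * sigma(x), 1e-4 * r_s, Rp)[0]

    return sigma_bar(R) - sigma(R)

# example: a 2e14 M_sun, c=4 halo at z=0.3, evaluated at R = 0.3 Mpc
print(delta_sigma_nfw(0.3, 2e14, 4.0, 0.3))
```

A production code would typically use the known analytic expressions for the projected NFW profile instead of nested numerical quadrature; the direct projection is used here only to keep the sketch short.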
We employ the standard likelihood analysis for deriving constraints on the model parameters. The log-likelihood is given by, $$\label{eq:lnL}
-2\ln {\cal{L}}(\bm{p})=\sum_{i,j}[d_i-m_i(\bm{p})]
{\rm Cov}_{ij}^{-1}[d_j-m_j(\bm{p})],$$ where the data vector is $d_i=\Delta\Sigma(R_i)$, and $m_i(\bm{p})$ is the model prediction for the model parameters $\bm{p}=(M_\Delta, c_\Delta)$. The covariance matrix (${\rm Cov}$) is composed of three components [see @2020ApJ...890..148U and references therein for detailed descriptions]: the statistical uncertainty due to the galaxy shape noise (${\rm Cov}^{\rm shape}$), the cosmic shear covariance due to uncorrelated large-scale structures projected along the line of sight [@2003MNRAS.339.1155H] (${\rm Cov}^{\rm lss}$), and the intrinsic variation of the cluster lensing signals at fixed model parameters due to, e.g., cluster asphericity and the presence of correlated halos (${\rm Cov}^{\rm int}$) [@2015MNRAS.449.4264G; @2019ApJ...875...63M].
We compute the log-likelihood function over the two-parameter space in the ranges $0.01<M_\Delta[\times 10^{14}h^{-1}M_\odot]<30$ and $0.01<c_\Delta<30$, and marginalize it to derive one-parameter posterior distributions. The peaks and 68.3% confidence intervals of the marginalized posterior distributions of $c_{200c}$, $M_{200c}$, and $M_{500c}$ are summarized in Table \[table:clustermass\]. Note that “N.A.” in the results of $c_{200c}$ means that either the upper/lower bound of the 68.3% confidence interval or the minimum of the marginalized likelihood function is not enclosed within the parameter range of $c_{200c}$. This is due to the limited coverage in $R$ together with relatively large error bars. For a visual inspection of the goodness of fit, Figure \[fig:sufmass\_fit\] compares the measured excess surface mass density profiles with the best-fit NFW models in the $M_{200c}$–$c_{200c}$ space.
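A schematic version of this grid evaluation and marginalization might look as follows. It is a minimal sketch under simplifying assumptions: the toy model, the purely diagonal covariance and the grid resolution are placeholders, whereas the actual analysis uses the NFW prediction and the full three-component covariance described above.

```python
import numpy as np

def grid_posteriors(R_bins, data, cov, model, m_grid, c_grid):
    """Evaluate -2 ln L = (d - m)^T Cov^{-1} (d - m) on an (M, c) grid and
    return the two marginalized 1D posteriors (flat priors over the grid)."""
    cov_inv = np.linalg.inv(cov)
    chi2 = np.empty((len(m_grid), len(c_grid)))
    for i, M in enumerate(m_grid):
        for j, c in enumerate(c_grid):
            r = data - np.array([model(R, M, c) for R in R_bins])
            chi2[i, j] = r @ cov_inv @ r
    post = np.exp(-0.5 * (chi2 - chi2.min()))     # unnormalized 2D posterior
    post_M, post_c = post.sum(axis=1), post.sum(axis=0)
    return post_M / post_M.sum(), post_c / post_c.sum()

# the 5 logarithmically spaced radial bins used in the text, in h^-1 Mpc
R_bins = 0.3 * 10.0 ** (0.25 * np.arange(5))

# toy demonstration with a hypothetical power-law model and 20% errors
toy_model = lambda R, M, c: 100.0 * (M / 1e14) * (0.3 / R)
data = np.array([toy_model(R, 2e14, 4.0) for R in R_bins])
cov = np.diag((0.2 * data) ** 2)
m_grid, c_grid = np.linspace(0.5e14, 5e14, 60), np.linspace(0.5, 10.0, 50)
pM, pc = grid_posteriors(R_bins, data, cov, toy_model, m_grid, c_grid)
print(m_grid[np.argmax(pM)])   # recovers the input mass ~2e14
```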
The locally normalized $SN$ estimator {#sec:local_estimator}
=====================================
In this study, we have adopted the globally normalized $SN$ estimator defined by equation (\[eq:sn\]) with equations (\[eq:shear2kap\_sum\]) and (\[eq:sigma\_shape\]). In some studies [for example, @2015PASJ...67...34H], however, the peak $SN(\bm{\theta})$ is defined by the locally normalized estimators, for which $\cal{K}(\bm{\theta})$ and $\sigma_{\rm shape}^2(\bm{\theta})$ are normalized by the local galaxy number density, $n_g(\bm{\theta})$, instead of the mean density $\bar{n}_g$. Here we compare those two estimators using a simple model, and using the actual weak lensing data. See @2011ApJ...735..119S for a related study on those estimators.
Following the same approach as in Section \[sec:global-kap\], the local estimators are written as $$\label{eq:shear2kap_sum_L}
{\cal K }_L(\bm{\theta})
={1\over {n_{fg}+n_{bg}+n_{cl}(\bm{\theta})}}
\sum_{i\in bg} \hat{\gamma}_{t,i} Q_i,$$ and $$\begin{aligned}
\label{eq:sigma_shape_L}
\sigma_{{\rm shape},L}^2(\bm{\theta})
&=&{1\over {2 [n_{fg}+n_{bg}+n_{cl}(\bm{\theta})]^2}}\nonumber \\
&&\times \left(
\sum_{i\in bg} \hat{e}_i^2 Q_i^2
+\sum_{i\in fg} \hat{e}_i^2 Q_i^2
+\sum_{i\in cl} \hat{e}_i^2 Q_i^2
\right).\end{aligned}$$ Notice that contributions from the cluster member population cannot be ignored in the central cluster regions of interest. Thus we have $$\begin{aligned}
\label{eq:sn_L}
SN_L(\bm{\theta})=
{\sqrt{2}{\sum_{bg} \hat{\gamma}_{t,i} Q_i}
\over
{
\left(
\sum_{bg} \hat{e}_i^2 Q_i^2
+ \sum_{fg} \hat{e}_i^2 Q_i^2
+ \sum_{cl} \hat{e}_i^2 Q_i^2
\right)^{1/2}}}.\end{aligned}$$ Therefore, the locally defined $SN$ is affected by the cluster member population and can be smaller than the globally defined $SN$ \[see equation (\[eq:sn\_G\])\], though the size of the effect depends on the local proportion of cluster member galaxies relative to background and foreground galaxies.
![Peak $SN_G$ values in the globally normalized $SN$ maps are compared with $SN_L$ values at the same positions in the locally normalized $SN$ maps. Plus marks are for high peaks ($SN_G\ge 5$) located in the globally normalized $SN$ maps. Different panels are for different source samples with $z_{\rm min}$ being shown in each panel. \[gsn2lsn\]](fig11.eps){width="82mm"}
We examine the actual differences between the globally normalized and the locally normalized $SN$ values using our source galaxy samples. We have generated the locally normalized $SN$ maps for the six source samples used in this study, and we evaluate the locally normalized $SN_L$ values at the positions of high peaks ($SN_G\ge 5$) located in the globally normalized $SN$ maps. This $SN_G$–$SN_L$ comparison is done for the six sets of $SN$ maps. Results are shown in Figure \[gsn2lsn\], in which we find that $SN_L$ tends to be smaller than $SN_G$, and that this trend is more clearly seen for lower $z_{\rm min}$ cases, as expected. We find that $SN_L$ is about 10 percent smaller on average than $SN_G$ for the weak lensing maps used in this study.
![Shown is the local galaxy number density at cluster regions, which is defined by the mean number density within an angular radius of 15 arcmin from peak positions, normalized by the global mean galaxy number density. Weak lensing secure clusters are used, and are divided into four sub-samples based on the cluster redshifts (denoted in panels). The horizontal axis is $z_{\rm min}$ of source galaxy samples. For each sub-sample and each source galaxy sample, the mean and its 1-$\sigma$ error among the clusters (the number of clusters in each sub-sample is given in each panel) are plotted. \[avgdens\_zmim\_stats\]](fig12.eps){width="82mm"}
We note that one may take an averaged local shape noise (that is $\langle \sigma_{{\rm shape},L}^2 \rangle$) to define the $SN$, instead of the locally defined one. In this case, deriving its expression in the above manner is not straightforward, because it is necessary to take into account the covariance between the numerator and the denominator in equation (\[eq:sigma\_shape\_L\]) (see [@2011ApJ...735..119S] for an approximate approach to this). Instead, we evaluate $\langle \sigma_{{\rm shape},L}^2 \rangle$ with the actual weak lensing data used in this study and compare it with $\langle \sigma_{{\rm shape},G}^2 \rangle$. We find that the two are very close; $\langle \sigma_{{\rm shape},L}^2 \rangle^{1/2}$ is only slightly smaller than $\langle \sigma_{{\rm shape},G}^2 \rangle^{1/2}$ (to be specific, the fractional difference is smaller than 0.5 percent). Therefore, replacing $\sigma_{{\rm shape},L}^2(\bm{\theta})$ with $\langle \sigma_{{\rm shape},L}^2 \rangle$ does not mitigate the dilution effect, but the additional $n_{cl}(\bm{\theta})$ term in the normalization of ${\cal K }_L(\bm{\theta})$ suppresses the peak signal \[compare equations (\[eq:shear2kap\_sum\_G\]) with (\[eq:shear2kap\_sum\_L\])\]. We measure $n_{cl}$ from our data; in doing this, we have defined the local galaxy number density at cluster regions by the mean number density within a circular area of angular radius 15 arcmin around the peak positions.[^6] Results are shown in Figure \[avgdens\_zmim\_stats\] for four cluster redshift ranges and six source samples, in which we find that for $z_{\rm min}$ from 0 to 0.3, the galaxy density excess is 5–10 percent, whereas for higher $z_{\rm min}$ it is consistent with zero for the higher redshift clusters ($0.3<z_{cl}<0.5$) but is still 5–10 percent for the lower redshift clusters. It follows from these results that for low-$z_{\rm min}$ source samples, a peak $SN$ from the globally normalized estimator can be 5–10 percent larger than that from the locally normalized estimator.
We note that the decreasing trend of the number excess at higher $z_{\rm min}$ seen in the higher redshift clusters is expected, as the $z_{\rm min}$-cut may exclude member galaxies of clusters at $z_{cl} < z_{\rm min}$. However, the trend is not seen in the lower redshift clusters. The reason for this is not well understood; possible causes are the line-of-sight projection of undiscovered clusters at higher redshifts, and photo-$z$ errors (cluster member galaxies at low $z$ being mis-estimated as higher-$z$ galaxies). We do not pursue this issue here but leave it for a future study.
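The local number-density measurement used above can be sketched as follows. This is a flat-sky toy version with a uniform random catalog: the survey area, coordinates and the assumption of negligible masking are placeholders, whereas the real measurement must account for survey masks and edge effects.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def local_density_excess(peak, gal_ra, gal_dec, survey_area_deg2,
                         radius=15 * u.arcmin):
    """Ratio of the galaxy number density within `radius` of a peak to the
    global mean density over the survey (RA/Dec in degrees, area in deg^2)."""
    gals = SkyCoord(ra=gal_ra * u.deg, dec=gal_dec * u.deg)
    n_in = np.sum(peak.separation(gals) < radius)
    local = n_in / (np.pi * radius.to(u.deg).value ** 2)   # galaxies per deg^2
    global_mean = len(gal_ra) / survey_area_deg2
    return local / global_mean

# toy usage: a hypothetical peak inside a uniform 2 deg x 2 deg random catalog
rng = np.random.default_rng(1)
gal_ra = 216.0 + rng.uniform(0.0, 2.0, 20000)
gal_dec = rng.uniform(-1.0, 1.0, 20000)
peak = SkyCoord(ra=216.7 * u.deg, dec=-0.1 * u.deg)
print(local_density_excess(peak, gal_ra, gal_dec, survey_area_deg2=4.0))  # ~1
```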
[^1]: https://hsc-release.mtk.nao.ac.jp/doc/index.php/photometric-redshifts/.
[^2]: Notice that the stacking photo-$z$ $P(z)$ is not a mathematically sound way to infer the true redshift distribution [see Section 5.2 of @2019PASJ...71...43H].
[^3]: There are some different CAMIRA-HSC catalogs based on different HSC data sets. We use the HSC wide cluster catalog based on HSC S16A data with updated star mask called ’Arcturus’ [@2018PASJ...70S..25M]. The catalog is available from https://www.slac.stanford.edu/[\~]{}oguri/cluster/.
[^4]: http://ned.ipac.caltech.edu/
[^5]: For HWL16a-002, the redshift of the matched XXL cluster (XLSSC 114) is adopted instead, as it is based on spectroscopic redshifts.
[^6]: We note that the [ *local galaxy number density*]{} is not uniquely defined, because it is necessary to define a [*local scale*]{}, or an [*averaging scheme*]{}. Thus the estimated values given there are not general but are specific to our definition of the local galaxy number density.
---
abstract: 'Nanoporous materials provide high surface area per unit mass and are capable of fluid adsorption. While measurements of the overall amount of fluid adsorbed by a nanoporous sample are straightforward, probing the spatial distribution of the fluid is non-trivial. We consider published data on adsorption and desorption of fluids in nanoporous glasses reported along with measurements of ultrasonic wave propagation. We analyse these using Biot’s theory of dynamic poroelasticity, approximating the patches as spherical shells. Our calculations show that on adsorption the patch diameter is on the order of 10-20 pore diameters, while on desorption the patch size is comparable to the sample size. Our analysis suggests that one can employ ultrasound to probe the uniformity of the fluid spatial distribution in nanoporous materials.'
author:
- Boris Gurevich
- 'Michel M. Nzikou'
- 'Gennady Y. Gor'
title: Probing Patchy Saturation of Fluids in Nanoporous Media by Ultrasound
---
Many natural and synthetic materials of industrial relevance have a nanoporous structure, providing high surface area per unit mass and being capable of fluid adsorption. While the overall amount of fluid adsorbed by a nanoporous sample can be routinely measured [@Rouquerol2013], the spatial distribution of fluid inside a nanoporous sample is not easy to probe. Yet, the non-uniformity of the fluid distribution in a nanoporous sample affects many of its physical properties. Adsorption-induced stresses strongly depend on the saturation of pores [@Gor2017review], therefore the spatial distribution of fluid affects the strains in nanoporous materials. The strains in turn can affect the permeability of nanoporous media [@Pan2007]. Another example is the change of optical properties of porous glasses during fluid adsorption [@Barthelemy2007; @Varanakkottu2014]. Thus there is a clear demand to extract information about the fluid distribution from experimental measurements.
If a sample of a mesoporous material is placed in vapor at a pressure below the saturation pressure, some of the vapor is adsorbed on the pore walls. The amount of condensate adsorbed on the pore walls increases when the vapor pressure is increased (adsorption process) and decreases when the pressure is reduced (desorption process). However, it is believed that during adsorption and desorption processes, distributions of the confined fluid and vapor in the pores are very different. On adsorption, the thickness of the condensate film increases steadily and uniformly in all pores, whereas on desorption the fluids form macroscopic patches [@Page1995]. Indeed optical techniques show formation of macroscopic patches during desorption [@Page1995]. No such behavior has been shown for adsorption; yet the fluid distribution during the adsorption processes has not been studied in detail.
One method that can shed light on details of fluid distribution during sorption (adsorption and desorption) is the ultrasonic technique, and specifically dependence of ultrasonic velocity on saturation of the pore space. The theory of poroelasticity shows that the dependence of elastic modulus on liquid saturation is controlled by the spatial distribution of fluids [@Toms:etal:2007]. Indeed, ultrasonic data [@Page1995; @Schappert2014] show that the increase of the longitudinal modulus of the nano-porous glass near the capillary condensation point is sharp but not instant, see Fig. \[fig:hexane\_data\] and \[fig:argon\_data\]. This suggests that it could be possible to analyze the fluid distribution in sorption experiments using the dependence of elastic properties on liquid saturation obtained from ultrasonic data.
The analysis of patchy saturation of porous media is a topical issue in petroleum geophysics (see Refs. and references therein). However, application of dynamic saturation models to the data measured on rocks is often problematic, because the patchy saturation effects on the velocity and amplitude of ultrasonic waves in such complex porous media are often obscured by other phenomena, such as squirt flow [@Mavko:Nur:1975; @Jones:1986; @Murphy:etal:1986; @Mavko1991; @Gurevich2010; @Muller:Gurevich:Lebedev:2010]. Nanoporous Vycor glass, which has a narrow pore size distribution and uniform mechanical properties, provides an excellent medium for testing those models. Furthermore, adsorption processes in such uniform media result in extremely uniform saturation of the pore space, which is impossible to achieve in natural materials.
We consider two experimental works, reporting ultrasonic measurements during vapor adsorption on nanoporous Vycor glass: adsorption of n-hexane at room temperature [@Page1995] and of argon at cryogenic temperature [@Schappert2014]. The longitudinal moduli of the samples obtained from the velocity of ultrasonic waves as a function of vapor pressure are shown in Fig. \[fig:hexane\_data\] (for n-hexane) and \[fig:argon\_data\] (for argon) along with the saturation. Figure \[fig:hexane\_data\] also shows the data on wave attenuation from Ref. .
![Relative change in the mass $\Delta m/m$ (squares), in the longitudinal modulus $\Delta M/M$ (lines) and attenuation factor $1/Q$ (diamonds) during adsorption (blue) and desorption (red) of n-hexane in Vycor[@Page1995].[]{data-label="fig:hexane_data"}](pressure_mass_modulus_hexane){width="0.7\linewidth"}
![Mass fraction of liquid argon (squares) and relative change in the longitudinal modulus $\Delta M/M$ (lines) during adsorption (blue) and desorption (red) of argon in Vycor[@Schappert2014].[]{data-label="fig:argon_data"}](pressure_mass_modulus_argon){width="0.7\linewidth"}
The bulk modulus of a porous medium with porosity $\phi$ and empty-matrix bulk modulus $K_{0}$, made up of a solid with bulk modulus $K_{s}$ and saturated with a single fluid with bulk modulus $K_{f}$, is given by the Gassmann equation [@Gassmann1951; @Berryman1999], whose application to nanoporous media has recently been demonstrated [@Gor:Gurevich:2017]: $$\label{Gassmann}
K_{G}(K_{f})=K_{0}+\frac{\alpha ^{2}}{\frac{\alpha -\phi }{K_{s}}+\frac{\phi}{K_{f}}},$$ where $\alpha =1-K_{0}/K_{s}$ is the Biot-Willis coefficient. For weak fluids (relative to the solid matrix), $K_{f}\ll K_{s},$ equation \[Gassmann\] can be linearized in $K_{f}$ to give $$\label{Gassmann_linear}
K_{G}(K_{f}) \simeq K_{0}+\frac{\alpha^2}{\phi }K_{f}.$$
If the pores are instead filled with a mixture of two fluids 1 and 2, then the bulk modulus is defined not just by their bulk moduli $K_{f1}$ and $K_{f2},$ and volume fractions $S_{1}$ and $S_{2}=1-S_{1},$ but also by their geometrical distribution. If the two fluids are distributed uniformly within the pore space so that the pressure in the two fluids is equilibrated, then the medium can be considered as saturated with a single fluid, whose bulk modulus $K_{f}$ is given by the harmonic average of the fluid moduli [@Domenico:1976; @Dutta:Ode:1979a; @Dutta:Ode:1979b; @Johnson:2001] $$\frac{1}{K_{W}}=\frac{S_{1}}{K_{f1}}+\frac{S_{2}}{K_{f2}}. \label{Wood}$$ Equation \[Wood\] is known as the Wood equation, and the combination of equations \[Gassmann\] and \[Wood\] for the bulk modulus of a medium saturated with a uniform (fine-scale) mixture of the two fluids, is known as the Gassmann-Wood (GW) limit $K_{GW}=K_{G}(K_{W})$ [@Johnson:2001; @Toms:etal:2007]. However, unlike a mixture of free fluids, pressure equilibration between fluid in pores is not instant and is controlled by the permeability $\kappa$ of the porous matrix and the characteristic fluid viscosity $\eta$. According to Biot’s theory of poroelasticity [@Biot1956i], fluid pressure will have enough time to equilibrate within one period of the wave with frequency $\omega $ if the characteristic size $d$ of the patches of the medium saturated with different fluids is smaller than the hydraulic diffusion length $\delta =\left( \frac{\kappa K_{f}}{\eta \phi \omega }\right) ^{1/2}$ [@Johnson:2001; @Toms:etal:2007]. Conversely, if the patches saturated with two fluids are much larger than $\delta$, $d\gg \delta$, then the fluid pressure has no time to equilibrate between the two fluids, and hence fluid communication between these clusters can be neglected. In this case the bulk moduli $K_{G}^{(1)}=K_{G}(K_{f1})$ and $K_{G}^{(2)}=K_{G}(K_{f2})$ of the clusters saturated with fluids 1 and 2 are given by the Gassmann equation \[Gassmann\] with $K_{f}=K_{f1}$ and $K_{f}=K_{f2}$, respectively. Furthermore, Gassmann theory shows that the shear modulus of a porous medium is independent of the saturating fluids and equals the shear modulus $G_0$ of the empty porous matrix. Thus these clusters have the same shear modulus, and hence according to Hill’s [@HILL1963] theorem, the bulk modulus of their mixture is uniquely defined by their volume fractions $$\label{Hill}
\frac{1}{K_{GH}+\frac{4}{3}G_0}=\frac{S_{1}}{K_{G}^{(1)} +\frac{4}{3}G_0}+\frac{S_{2}}{K_{G}^{(2)}+\frac{4}{3}G_0}.$$ Equation \[Hill\] with $K_{G}^{(1)}$ and $K_{G}^{(2)}$ given by the Gassmann equation is known as the Gassmann-Hill (GH) limit [@Norris:1993; @Johnson:2001]. If the compressibilities of the two fluids are similar, then the GW and GH limits are close. However, if the compressibilities are very different (say $K_{f2}\ll K_{f1}$), then the GW and GH limits are also very different. Indeed, in the case $K_{f2}\ll K_{f1}\ll K_{s},$ the GH limit is nearly linear in the saturations $$K_{GH} \simeq K_{0}+\frac{\alpha^2 }{\phi }\left( S_{1}K_{f1}+S_{2}K_{f2}\right) .$$ Conversely, $K_{GW}$ is almost independent of saturation, $K_{GW}=K_{G}^{(2)}$, until $S_{1}$ becomes close to $1-K_{f2}/K_{f1},$ when it rises sharply to $K_{GW}=K_{G}^{(1)}$. Figure \[fig:hexane\_modulus\] shows the dependence of the relative deviation of the measured longitudinal modulus from its value at zero vapor pressure $\Delta M/M_0=(M-M_0)/M_0$ along with GW and GH limits for liquid and vapor adsorbates as fluids 1 and 2, respectively, calculated from the Gassmann equation at full saturation [@Gor:Gurevich:2017]. Since the vapor bulk modulus is negligibly small, $K_{GW}=K_{G}^{(2)}=K_{0}$ effectively for all measurable saturations below the capillary condensation. The modulus versus saturation data show that the saturation on adsorption is closer to the GW limit, indicating more uniform saturation than on desorption. Yet the data deviate from the GW limit close to full saturation; this shows that even on adsorption, the saturation is not perfectly uniform. Hence, it is potentially possible to estimate the spatial scale of the saturation heterogeneity using dynamic patchy saturation models, which quantify the transition from GW to GH limits as patch size (or frequency) increases.
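A minimal numerical sketch of the two limits, using equations \[Gassmann\], \[Wood\] and \[Hill\], is given below. The material parameters are placeholders chosen only to illustrate the qualitative difference between $K_{GW}$ and $K_{GH}$; they are not the actual Vycor or adsorbate values used later in the analysis.

```python
import numpy as np

def gassmann(K0, Ks, phi, Kf):
    """Gassmann bulk modulus of the saturated medium, Eq. (Gassmann)."""
    alpha = 1.0 - K0 / Ks
    return K0 + alpha**2 / ((alpha - phi) / Ks + phi / Kf)

def k_gw(K0, Ks, phi, S1, Kf1, Kf2):
    """Gassmann-Wood limit: fine-scale (uniform) fluid mixing."""
    Kw = 1.0 / (S1 / Kf1 + (1.0 - S1) / Kf2)       # Wood average, Eq. (Wood)
    return gassmann(K0, Ks, phi, Kw)

def k_gh(K0, Ks, phi, G0, S1, Kf1, Kf2):
    """Gassmann-Hill limit: patches much larger than the diffusion length."""
    K1 = gassmann(K0, Ks, phi, Kf1)
    K2 = gassmann(K0, Ks, phi, Kf2)
    inv = S1 / (K1 + 4.0 * G0 / 3.0) + (1.0 - S1) / (K2 + 4.0 * G0 / 3.0)
    return 1.0 / inv - 4.0 * G0 / 3.0              # Eq. (Hill)

# placeholder parameters (GPa) loosely representative of a stiff porous glass
K0, Ks, G0, phi = 7.0, 20.0, 6.0, 0.3
Kl, Kv = 1.0, 1e-4                                  # liquid and vapor moduli
for S in (0.2, 0.8, 0.95, 0.999):
    print(S, k_gw(K0, Ks, phi, S, Kl, Kv), k_gh(K0, Ks, phi, G0, S, Kl, Kv))
```

For a nearly incompressible liquid and a highly compressible vapor, this reproduces the behavior described above: $K_{GH}$ grows roughly linearly with liquid saturation, while $K_{GW}$ stays close to the dry-matrix value until very close to full saturation.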
![Relative change in the longitudinal modulus versus liquid mass fraction (saturation) during adsorption (blue) and desorption (red) of n-hexane [@Page1995] (a) and argon [@Schappert2014] (b): ultrasonic measurements (squares), best fit of the spherical shell model (solid lines), GW limit (black solid line), GH limit (black dashed line), modified GH limit (black dotted line), constant velocity (red dashed line), finite element simulations (red dotted line). []{data-label="fig:hexane_modulus"}](saturation_modulus_hexane){width="0.7\linewidth"}
![Relative change in the longitudinal modulus versus liquid mass fraction (saturation) during adsorption (blue) and desorption (red) of n-hexane [@Page1995] (a) and argon [@Schappert2014] (b): ultrasonic measurements (squares), best fit of the spherical shell model (solid lines), GW limit (black solid line), GH limit (black dashed line), modified GH limit (black dotted line), constant velocity (red dashed line), finite element simulations (red dotted line). []{data-label="fig:hexane_modulus"}](mod_vs_sat_LiquidArgon){width="0.7\linewidth"}
![Longitudinal attenuation factor $Q^{-1}$ versus liquid mass fraction (saturation) during adsorption (blue) and desorption (red) of n-hexane [@Page1995]: ultrasonic measurements (squares), best fit of the concentric sphere model (solid lines), finite element simulations (red dotted line).[]{data-label="fig:hexane_Qp"}](saturation_Qp_hexane){width="0.7\linewidth"}
![Relative change in the longitudinal velocity versus liquid mass fraction (saturation) during adsorption (blue) and desorption (red) of n-hexane [@Page1995] (a) and argon [@Schappert2014] (b). Notation the same as in Figure \[fig:hexane\_modulus\]. []{data-label="fig:hexane_Vp"}](saturation_Vp_hexane){width="0.7\linewidth"}
![Relative change in the longitudinal velocity versus liquid mass fraction (saturation) during adsorption (blue) and desorption (red) of n-hexane [@Page1995] (a) and argon [@Schappert2014] (b). Notation the same as in Figure \[fig:hexane\_modulus\]. []{data-label="fig:hexane_Vp"}](Vp_vs_sat_argon){width="0.7\linewidth"}
This transition depends not just on spatial scale but also on the geometry of the fluid distribution. The simplest of such models is the spherical shell model (SSM) [@White:1975; @Dutta:Ode:1979a; @Dutta:Ode:1979b; @Johnson:2001], in which the medium is assumed to consist of double spheres. Each inner sphere of radius $R_{i}$ is saturated with fluid 2 and is surrounded by an outer sphere with radius $R_{o}$, with the region between the two spheres saturated with fluid 1, so that $S_{2}=\left( R_{i}/R_{o}\right) ^{3}$ (or vice versa). A compact approximate analytical solution for the ultrasonic bulk modulus and attenuation corresponding to such a model is given in [@Johnson:2001 Eqs. 43-45, 40 and 34] and briefly summarized in Supplementary Material (SM). To describe the sorption data with this model it is necessary to make some modeling choices. In particular, at non-zero vapor pressures all parts of the sample will have some liquid film on the pore walls; hence it is reasonable to assume that the saturation is always uniform below a certain minimum value of liquid saturation $S_{l0}.$ The value $S_{l0}$ depends on the specific glass sample as well as the properties of the adsorbate. For the n-hexane data [@Page1995], one can choose the value $S_{l0}=0.5$ corresponding to $p/p_{0}=0.4,$ the value below which the adsorption and desorption isotherms overlap. Yet the ultrasonic moduli on adsorption and desorption only overlap below liquid saturation $S_{l}=0.46,$ indicating that ultrasonic data is more sensitive to details of fluid distribution than mass isotherms, and suggesting that the value $S_{l0}=0.4$ is more appropriate. The same approach suggests $S_{l0}=0.5$ for the argon data of [@Schappert2014].
To take into account minimum saturation of the liquid phase, we assume that fluid 1 is liquid with the modulus $K_{l}$ and saturation $S_{1}=(S_{l}-S_{l0})(1-S_{l0})$ while fluid 2 is a mixture of the liquid and vapor with the modulus $\frac{1}{K_{2}}=\frac{S_{l0}}{K_{l}}+\frac{1-S_{l0}}{K_{v}}$ and saturation $S_{2}=S_{v}/(1-S_{l0}).$ For these “new” fluids, the GW limit is the same as for liquid and vapor, while the new GH limit is shown in Fig. \[fig:hexane\_modulus\] by the black dotted line.
The main parameter controlling the predictions of the SSM is the radius of the outer spherical shell $R_{o}$. For *adsorption* we assume that $R_{o}$ is constant (independent of saturation), which implies that the volume of the vapor “pockets” scales with vapor saturation ($R_{i}=R_{o}S_{2}^{1/3}$) while the number density (and distance between centres) of such pockets is constant. This assumption seems reasonable for $S_{2}<0.1$ but may break down at larger vapor saturations. On adsorption, the model gives the best match with the saturation dependence of the modulus for about $R_{o}=175$ nm for n-hexane and $R_{o}=130$ nm for argon. At $S_2=0.05$ this corresponds to the size of “vapor” pockets of around $R_{i}=65$ nm for n-hexane and $R_{i}=48$ nm for argon. For n-hexane, [@Page1995] also report change of longitudinal attenuation versus saturation, and indeed the value $R_{o}=175$ nm yields a reasonable qualitative agreement with the position of the attenuation peak at $S_{l} \sim 0.95$ (Fig. \[fig:hexane\_Qp\]).
For *desorption* of n-hexane, the modulus shows nearly linear dependence on saturation from full saturation down to $S_{l}=0.6$. The SSM fits these data for $R_{o} \sim 700$ nm (red solid line in Fig. \[fig:hexane\_modulus\]) but predicts a large and broad attenuation peak between $S_{l}$ values of 0.6 to 0.9, which is not supported by the data (Fig. \[fig:hexane\_Qp\]). The linear saturation dependence might also result from saturation forming very large patches ($d\gg \delta $) from full saturation down to $S_{l}=0.6$, below which it becomes uniform. However such behavior does not explain the attenuation peak around $S_{l}=0.5$; it is also unreasonable to assume that saturation on desorption is more uniform than on adsorption.
A simpler and much more convincing explanation can be inferred from the behavior of compressional wave velocity as a function of saturation, as plotted in Fig. \[fig:hexane\_Vp\]. n-hexane desorption data show that the velocity remains nearly constant from full saturation down to $S_{l} \sim 0.5$. This is consistent with the optical observations [@Page1995] that “drying” of the cylindrical sample begins from the surface, while the middle core remains fully saturated. As relative pressure and liquid saturation are reduced, the wave travels through this fully saturated core with the same velocity and is not affected by the reduced overall saturation until this core becomes relatively thin, and then recorded arrival switches to the unsaturated (“dried”) outer shell of the cylinder. This explains the reduction of the modulus (Fig. \[fig:hexane\_modulus\]) below that in the dry sample: for $S_{l}>0.5$ the wave travels through the saturated region with the velocity $v_{l}=\sqrt{M_{G}(K_{l})/\rho _{sat}},$ where $M_{G}(K_{l})=K_{G}(K_{l})+(4/3)G_0$ is the longitudinal modulus and $\rho _{sat}=\rho _{0}+\phi \rho _{l}$ is the density of the fully saturated sample (where $\rho _{0}$ and $\rho_{l}$ are the densities of the dry porous glass and liquid adsorbate, respectively). However, the apparent modulus in Fig. \[fig:hexane\_modulus\] is computed as $M=\rho_{b}v_{l}^{2}$, where $\rho _{b}=\rho _{0}+\phi S_{l}\rho _{l}=\rho_{sat}-\phi (1-S_{l})\rho _{l}$ (where we neglected the term with vapor density). Hence $$\label{modulusD}
\frac{\Delta M}{M_{0}}=\frac{\rho _{b}M_{G}(K_{l})}{\rho _{sat}M_{0}}-1=\frac{1+\frac{\alpha K_{l}}{\phi K_{0}}}{1+\frac{\phi \rho _{l}}{\rho _{0}}} \simeq \frac{\alpha K_{l}}{\phi K_{0}}-\frac{\phi \rho _{l}}{\rho_{0}}(1-S_{l}).$$ Thus at full liquid saturation $\Delta M/M_{0}$ is positive but decreases linearly as $S_{l}$ decreases, becoming negative at $S_{l}=0.5$. We note that for macroscopically heterogeneous materials (with the heterogeneity scale $d$ larger than the wavelength), calculation of the modulus from velocities is somewhat unphysical, as the wave speeds are controlled by distribution of densities as well as moduli, whereas the true effective modulus (ratio of stress to strain) depends on the distribution of moduli only. Our interpretation also explains the apparent attenuation peak around $S_{l}=0.5$. Indeed at this saturation the ultrasonic energy is split between two arrivals (traveling through the saturated and dry portions of the sample) and hence their amplitudes are lower than in the fully saturated, dry or uniformly saturated sample. This interpretation of the ultrasonic moduli and *apparent* attenuation on desorption of n-Hexane is supported by the results of finite element simulations shown as red dotted lines in Figs \[fig:hexane\_modulus\]a, \[fig:hexane\_Qp\] and \[fig:hexane\_Vp\]a, and detailed in SM. Argon desorption data (Figs. \[fig:hexane\_modulus\]b, \[fig:hexane\_Vp\]b) show similar behavior, but without the anomalously low apparent modulus (Fig. \[fig:hexane\_modulus\]). This may be a result of higher sensitivity of ultrasonic transducers (capable of detecting weak early arrivals) or a different algorithm for picking arrival times.
In summary, we have performed a poroelastic analysis of the dependence of ultrasonic moduli of Vycor glass on vapor pressure as measured during sorption experiments. This analysis shows that both on adsorption and desorption of argon and n-hexane, the condensate in the pore space forms patches much larger than the typical pore radius. The patch sizes are much larger on desorption than on adsorption. On adsorption the patch diameter is on the order of 10-20 pore diameters, while on desorption the patch size is comparable to the sample size.
These results suggest that ultrasonic measurements are a promising method for studying fluid distributions during sorption. More ultrasonic measurements on different porous materials with different adsorbates are required to better understand the fluid distributions in these processes.
G.G. thanks Patrick Huber for pointing out some of the references discussed in this work. B.G. thanks the sponsors of the Curtin Reservoir Geophysics Consortium for financial support, and Julianna Toms and Eva Caspari for discussions of implementation of the methods in Refs. .
, , , , , ** (, ).
, , , ****, ().
, ****, ().
, , , , , , ****, ().
, , , , , , ****, ().
, , , , , , ****, ().
, , , ****, (), ISSN .
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, , , ****, ().
, ****, ().
, Ph.D. thesis, ().
, , , ****, ().
, ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, ****, ().
, , , , ****, ().
, , , ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, (), ISSN .
, ****, ().
, ****, ().
---
abstract: 'In this paper we consider a class of piecewise affine Hamiltonian vector fields whose orbits are piecewise straight lines. We give a first classification result of such systems and show that the orbit-structure of the flow of such a differential equation is surprisingly rich.'
address: 'Mathematics Department, University of Warwick'
author:
- Georg Ostrovski and Sebastian van Strien
bibliography:
- 'GameTheory.bib'
title: 'Piecewise linear Hamiltonian flows associated to zero-sum games: transition combinatorics and questions on ergodicity'
---
[^1]
Keywords: Hamiltonian systems, non-smooth dynamics, Filippov systems, piecewise affine, Arnol’d diffusion, fictitious play, best-response dynamics, learning process\
2000 MSC: 37J, 37N40, 37G, 34A36, 34A60, 91A20.
Introduction
============
Traditionally, the motivation for studying Hamiltonian systems comes either from classical mechanics or from partial differential equations. In this paper we study a particular class of Hamiltonian systems which are motivated in a completely different way: they arise from dynamical systems associated to game theory. This class of Hamiltonian systems was proposed in [@VanStrien2009a] and corresponds to Hamiltonians which are piecewise affine. In this paper we study several properties of such dynamical systems.
For general smooth functions $H\colon {{\mathbb R}}^{2n}\to {{\mathbb R}}$ for which $H^{-1}(1)$ is (topologically) a $2n-1$ sphere, very little is known about the global dynamics of the corresponding Hamiltonian vector field, even if we make convexity assumptions on the level sets. If $H$ is a quadratic function, the system is completely integrable and the flow is extremely simple. So instead, in this paper we consider Hamiltonians $H$ which are piecewise affine and investigate whether the following analogy holds:\
\
---------------------------------------- ---------------------------------------
circle diffeomorphism circle rotation
quadratic map $x\mapsto 1-ax^2$ tent map $x\mapsto 1-a|x|$
  Hénon map $(x,y)\mapsto (1-ax^2+by,x)$   Lozi map $(x,y)\mapsto (1-a|x|+by,x)$
Smooth Hamiltonian Piecewise affine Hamiltonian
Smooth area preserving maps Piecewise affine area preserving maps
---------------------------------------- ---------------------------------------
\
This analogy would suggest that one might gain insight about smooth Hamiltonian systems by looking at piecewise affine ones, in the same way as circle diffeomorphisms can be modelled by circle rotations. Much of the complexity of the dynamics may well persist in the piecewise affine case, and even though these systems are not smooth they still may be easier to study.
This paper consists of two parts. In the first part we prove a classification theorem for the case when $n=2$ which goes towards a description of the global dynamics in terms of coding, while in the second part we give numerical results which show that much richness of the dynamics in the smooth case persists.
Let us be more precise. We say that $H\colon {{\mathbb R}}^{2n} \to {{\mathbb R}}$ is a [*piecewise affine*]{} function if
\(i) $H$ is continuous and
\(ii) there exists a finite number of hyper-planes $Z_1,\dots,Z_k$ in ${{\mathbb R}}^{2n}$ so that $H$ is affine on each component of ${{\mathbb R}}^{2n}\setminus \cup_{i=1}^k Z_i$.
Since $H$ is piecewise affine, $\left(\dfrac{\partial H}{\partial q}, \dfrac{\partial H}{\partial p}\right)$ are piecewise constant outside the hyper-planes $Z_i$ and [*there*]{} the derivatives are multivalued. Consider the Hamiltonian system $$\dfrac{dp}{dt}\in \dfrac{\partial H}{\partial q}, \dfrac{dq}{dt}\in -
\dfrac{\partial H}{\partial p},\label{eq-1}$$ or more generally the Hamiltonian vector field $X_H$ associated to $H$ and the symplectic 2-form $\sum_{ij}\omega_{ij} dp_i\wedge dq_j$ (where $(\omega_{ij})$ are the (real) coefficients of some constant non-singular $n\times n$ matrix $\Omega$) and the corresponding differential inclusion $$(\dfrac{dp}{dt},\dfrac{dq}{dt})\in X_H(p,q).\label{eq0}$$ (Here $X_H$ is defined by requiring that $\omega(X_H,Y)=dH(Y)$ for each vector field $Y$.) In other words, $$\left(\begin{array}{c}\dfrac{dp}{dt} \\ \\ \dfrac{dq}{dt}\end{array}\right)\in \left( \begin{array}{cc} 0 & \Omega \\ -\Omega' & 0 \end{array}\right) \left(\begin{array}{c}\dfrac{\partial H}{\partial p} \\ \\ \dfrac{\partial H}{\partial q}\end{array}\right),\label{eq0'}$$ where $\Omega'$ stands for the transpose of the matrix $\Omega$. The reason we write $\in$ rather than $=$ in the above equations is because $\dfrac{\partial H}{\partial q}, \dfrac{\partial H}{\partial p}$ are multivalued. In this generality, there is no reason to assume that the flow of these differential inclusions is continuous.
Instead we will consider the special case of $H\colon \Sigma\times \Sigma\to {{\mathbb R}}$ defined by $$H(p,q)= \max_i \, ({\, {\mathcal M}\, }q)_i - \min_j \, (p' {\, {\mathcal M}\, })_j\,\, ,
\label{defH}$$ where $p'$ stands for the transpose of $p$, $({\, {\mathcal M}\, }q)_i$ and $(p' {\, {\mathcal M}\, })_j$ stand for the $i$-th and $j$-th component of the vectors ${\, {\mathcal M}\, }q$ respectively $p' {\, {\mathcal M}\, }$ and $\Sigma$ consists of the set of probability vectors in ${{\mathbb R}}^n$. Here we consider the corresponding Hamiltonian system[^2] defined on the space $\Sigma \times \Sigma$. To ensure that level sets are simple, we make the following [**assumption**]{} on ${\, {\mathcal M}\, }$: there exist ${{\bar p}},{{\bar q}}\in \Sigma$ so that all its components are strictly positive and so that ${{\bar p}}' {\, {\mathcal M}\, }=\lambda {\underline 1}'$ and ${\, {\mathcal M}\, }{{\bar q}}=\mu {\underline 1}$ for some $\lambda,\mu\in {{\mathbb R}}$ where ${\underline 1}=(1\, 1 \, \dots \, 1)\in {{\mathbb R}}^n$. A motivation for this assumption and an interpretation of ${{\bar p}},{{\bar q}}$ is given in the paragraph above equation (\[eq:pi\_proj\]). As was shown in [@VanStrien2009a], one then has the following properties:
1. For an open set of full Lebesgue measure of $n\times n$ matrices ${\, {\mathcal M}\, }$, [*each*]{} level set $H^{-1}({\varrho})$, ${\varrho}>0$ is topologically a $(2n-3)$-dimensional sphere and $H^{-1}(0)=\{({{\bar p}},{{\bar q}})\}$ (note that $\dim(\Sigma\times \Sigma)=2n-2$). In fact, $H^{-1}({\varrho})$ bounds a convex ball.
2. For an open set of full Lebesgue measure of $n\times n$ matrices $\Omega$ and ${\, {\mathcal M}\, }$, there exists a unique solution of (\[eq0\]) for all initial conditions and the corresponding flow $(p,q,t)\mapsto \phi_t(p,q)$ is continuous, which is not generally true for differential inclusions.
3. The flow $(p,q,t)\mapsto \phi_t(p,q)$ is piecewise a translation flow, and first return maps to hyperplanes in $H^{-1}({\varrho})$ are piecewise affine maps.
In the second part of this paper we will present some numerical studies of examples of such systems when $n=3$, i.e., when each energy level $H^{-1}({\varrho})$, ${\varrho}>0$, is topologically a three-sphere. Let us single out one of these examples: there exists an open set of $3\times 3$ matrices ${\, {\mathcal M}\, }$ and $\Omega$ with the following property. For each ${\varrho}>0$ there exists a topological disc $D\subset H^{-1}({\varrho})$ which is made up of 4 triangles in $\Sigma\times \Sigma$ so that
1. the first return map $R$ to $D$ is continuous and extends continuously to the closure of $D$;
2. $R|\partial D=id$ (in fact $\partial D$ corresponds to a periodic orbit of the flow, whose Floquet multipliers are undefined);
3. $R\colon D\to D$ is area-preserving and piecewise affine;
4. each orbit in $H^{-1}({\varrho})$ intersects $D$ infinitely often.
Moreover, $R\colon D\to D$ contains hyperbolic horseshoes and also an elliptic periodic orbit, surrounded by an elliptical disk consisting of quasi-periodic orbits on invariant ellipses, see [@VanStrien2009a; @VanStrien2009b]. Numerical simulations for these systems suggest that all orbits outside this elliptical disk in $D$ are dense in this set, see Example 2 in Part 2 of this paper. This first return map could be a piecewise affine model for general smooth area preserving maps of the disk.
As mentioned, one motivation for looking at Hamiltonians as in (\[defH\]) and the corresponding systems (\[eq0\]) comes from game theory. Indeed, these correspond to certain differential inclusions which are naturally associated to so-called *zero-sum games* in game theory. For this, let $A,B$ be $n \times n$ matrices and define ${{\mathcal B \! \mathcal R \!}}_A(q):=\operatorname*{arg\,max}_{p\in \Sigma} p'A q$ and ${{\mathcal B \! \mathcal R \!}}_B(p):=\operatorname*{arg\,min}_{q\in \Sigma} \, p' B q$ and consider the differential inclusion: $$\dfrac{dp}{dt}\in{{\mathcal B \! \mathcal R \!}}_A(q)-p, \dfrac{dq}{dt}\in {{\mathcal B \! \mathcal R \!}}_B(p)-q.
\label{eqBR}$$ The zeros of this equation, i.e. the set of points $({{\bar p}},{{\bar q}})$ for which ${{\bar p}}\in {{\mathcal B \! \mathcal R \!}}_A({{\bar q}}), {{\bar q}}\in {{\mathcal B \! \mathcal R \!}}_B({{\bar p}})$, are called [*Nash equilibria*]{}. The differential inclusion (\[eqBR\]) is called the [*Best Response Dynamics*]{} and the corresponding time-rescaled system $$\dfrac{dp}{dt}\in\dfrac{1}{t}({{\mathcal B \! \mathcal R \!}}_A(q)-p),
\dfrac{dq}{dt}\in \dfrac{1}{t}({{\mathcal B \! \mathcal R \!}}_B(p)-q),$$ is called the [*Fictitious Play Dynamics*]{} associated to the game with matrices $A$ and $B$. In game theory and economics these dynamics are often used to [*model learning*]{}, i.e. to describe how people learn to play a game. It turns out that in the case of zero-sum games, i.e. when $A=-B$, all orbits of (\[eqBR\]) converge to the set of Nash equilibria; see [@Robinson1951], and for a short proof see [@Hofbauer1995].
Note that if we define ${\, {\mathcal M}\, }:=A=-B$ then the existence of ${{\bar p}},{{\bar q}}\in \Sigma$ for which all coordinates are strictly positive and for which ${{\bar p}}' {\, {\mathcal M}\, }=\lambda {\underline 1}'$ and ${\, {\mathcal M}\, }{{\bar q}}=\mu {\underline 1}$ (as assumed just below (\[defH\])) implies that $0\in {{\mathcal B \! \mathcal R \!}}_A({{\bar q}}) - {{\bar p}}$ and $0\in {{\mathcal B \! \mathcal R \!}}_B({{\bar p}}) - {{\bar q}}$. So such a point $({{\bar p}},{{\bar q}})\in \Sigma\times \Sigma$ is a Nash equilibrium. To see this, notice that ${{\mathcal B \! \mathcal R \!}}_{{\, {\mathcal M}\, }}({{\bar q}})$ is the convex hull of all the unit vectors corresponding to the largest component(s) of ${\, {\mathcal M}\, }{{\bar q}}$. So if all components of ${{\bar p}},{{\bar q}}$ are strictly positive, then $0\in {{\mathcal B \! \mathcal R \!}}_A({{\bar q}})-{{\bar p}}$ holds iff all coordinates of ${\, {\mathcal M}\, }{{\bar q}}$ are equal.
We should note that $p$ and $q$ have a [*different connotation*]{} here from the usual one in classical mechanics: $p$ corresponds to the position (the probability vector describing past play) of the first player and $q$ is the corresponding object for the second player. Although the equation (\[eqBR\]) itself is not Hamiltonian at all, it is closely related to the Hamiltonian system (\[eq0\]). Indeed, for $(p,q)\in \Sigma\times \Sigma\setminus \{({{\bar p}},{{\bar q}})\}$, let $l(p,q)$ be the half-line from $({{\bar p}},{{\bar q}})$ containing $(p,q)$ and define $$\label{eq:pi_proj}
\pi\colon \Sigma\times \Sigma\setminus \{({{\bar p}},{{\bar q}})\} \rightarrow H^{-1}(1),$$ where $\pi(p,q)\in H^{-1}(1)$ is the intersection of $l(p,q)$ with the $(2n-3)$-dimensional sphere $H^{-1}(1)$. It turns out that the projection of the flow on $H^{-1}(1)$ corresponds to the solution of a Hamiltonian system as above. In other words, the Hamiltonian dynamics describes the spherical coordinates. So we will think of the dynamics of (\[eqBR\]) as [*inducing*]{} Hamiltonian dynamics. For more details see Section \[sec:ham\_br\_dynamics\].
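For numerical experiments the projection $\pi$ is easy to evaluate: $H$ vanishes at $({{\bar p}},{{\bar q}})$ and, by the explicit max-minus-min form of $H$ given in Section \[sec:ham\_br\_dynamics\] together with the assumption below (\[defH\]), $H$ is positively homogeneous along rays emanating from $({{\bar p}},{{\bar q}})$, so it suffices to rescale the displacement from the equilibrium by $1/H(p,q)$. The following is a minimal Python sketch of this (the function names and calling conventions are arbitrary choices):

```python
import numpy as np

def H(A, p, q):
    # explicit form of the Hamiltonian used in the next section:
    # H(p, q) = max_i (A q)_i - min_j (p A)_j
    return np.max(A @ q) - np.min(p @ A)

def pi_projection(A, p, q, p_bar, q_bar):
    # Radial projection onto H^{-1}(1): H is zero at (p_bar, q_bar) and
    # positively homogeneous along rays from it, so rescaling the
    # displacement by 1 / H(p, q) lands exactly on the level set H = 1.
    h = H(A, p, q)
    if h <= 0:
        raise ValueError("pi is undefined at the interior equilibrium")
    return p_bar + (p - p_bar) / h, q_bar + (q - q_bar) / h
```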
In this paper we will study Hamiltonian dynamics coming from Best Response dynamics as in (\[eqBR\]) in the case where $n=3$. The aim of this paper is the following:
- Because of the special nature of the Hamiltonian systems we consider in this paper, one can associate in a natural way itineraries to each orbit. In the Main Theorem of this paper we will show that not all itineraries are possible and we will give a full classification of all possible transition diagrams[^3].
- The Hamiltonian dynamics appearing in this paper is much simpler than usual: the first return maps to planes are piecewise translations. In the second part of this paper we will show numerical simulations concerning a number of examples of such systems. The four examples which we show here display (conjecturally):
1. fully ergodic behaviour;
2. elliptic behaviour of a very simple type (which we can prove rigorously, see [@VanStrien2009a]);
3. elliptic behaviour of a composite type;
4. Arnol’d diffusion, and intertwining of various elliptic regions.
- We believe that the elliptic behaviour occurring in our systems displays a great deal of regularity, and we will formulate several questions and conjectures formalizing this.
In the first part of the paper we introduce the Best Response and Fictitious Play Dynamics inducing a special case of the Hamiltonian Dynamics presented above. We introduce a combinatorial description of the BR dynamics and provide a combinatorial characterisation of the dynamics for zero-sum games for $n = 3$ (inducing Hamiltonian dynamics with two degrees of freedom).
Hamiltonian and Best Response Dynamics {#sec:ham_br_dynamics}
======================================
We define $\Sigma_A$ to be the simplex of probability row vectors in $\mathbb{R}^n$ and $\Sigma_B$ the simplex of probability column vectors in $\mathbb{R}^n$: $$\begin{aligned}
\Sigma_A = \left\{ p \in \mathbb{R}^{1 \times n} : p_i \geq 0, \sum p_i = 1 \right\},
\Sigma_B = \left\{ q \in \mathbb{R}^{n \times 1} : q_i \geq 0, \sum q_i = 1 \right\}.\end{aligned}$$ We denote $\Sigma = \Sigma_A \times \Sigma_B$.
We consider a bimatrix $(A,B)$, where $A=(a_{ij}),B=(b_{ij}) \in \mathbb{R}^{n \times n}$. With a slight abuse of notation we identify the standard unit vector $e_k$ with the integer $k$ and define the following correspondences[^4]:
$$\begin{aligned}
&{{\mathcal B \! \mathcal R \!}}_A(q) = \operatorname*{arg\,max}_{p\in \Sigma_A} p A q = \operatorname*{arg\,max}_i \left( A q\right)_i \text{ for } q \in \Sigma_B,\\
&{{\mathcal B \! \mathcal R \!}}_B(p) = \operatorname*{arg\,max}_{q\in \Sigma_B} p B q = \operatorname*{arg\,max}_j \left( p B\right)_j \text{ for } p \in \Sigma_A.\end{aligned}$$
These correspondences are single-valued almost everywhere, except on a finite number of hyperplanes. On these hyperplanes (also called *indifference planes*) at least one of the ${{\mathcal B \! \mathcal R \!}}$ correspondences has as its values the set of convex combinations of two (or more) unit vectors.
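For concreteness, the correspondences and the detection of indifference planes can be evaluated numerically as in the following minimal sketch (the tolerance used to detect ties is an arbitrary choice):

```python
import numpy as np

def best_responses(A, B, p, q, tol=1e-12):
    """Return the region label (i, j) with BR_A(q) = e_i and BR_B(p) = e_j,
    together with flags telling whether (p, q) lies on an indifference
    plane, i.e. whether one of the correspondences is multivalued."""
    Aq, pB = A @ q, p @ B
    i, j = int(np.argmax(Aq)), int(np.argmax(pB))
    q_indiff = np.sum(Aq >= Aq[i] - tol) > 1   # tie among the maximisers of (A q)_i
    p_indiff = np.sum(pB >= pB[j] - tol) > 1   # tie among the maximisers of (p B)_j
    return (i + 1, j + 1), q_indiff, p_indiff  # 1-based labels as in the text
```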
We can now define the *Fictitious Play Dynamics* as a continuous time dynamical system in $\Sigma$: $$\begin{aligned}
\tag{FP}\label{eq:fp_dynamics}
\begin{split}
&\frac{dp}{dt} \in \frac{1}{t}\left( {{\mathcal B \! \mathcal R \!}}_A(q(t)) - p(t) \right), \\
&\frac{dq}{dt} \in \frac{1}{t}\left( {{\mathcal B \! \mathcal R \!}}_B(p(t)) - q(t) \right),
\end{split}\end{aligned}$$ for $t > 1$ and some given initial value $(p(1), q(1)) \in \Sigma$. Note that the right-hand side is single-valued almost everywhere.
Although (\[eq:fp\_dynamics\]) is more common in game theory, where it serves as a model of *myopic learning*, we prefer to consider the following time reparametrisation, referred to as the *Best Response Dynamics*: $$\begin{aligned}
\tag{BR}\label{eq:br_dynamics}
\begin{split}
&\frac{dp}{ds} \in {{\mathcal B \! \mathcal R \!}}_A(q(s)) - p(s) , \\
&\frac{dq}{ds} \in {{\mathcal B \! \mathcal R \!}}_B(p(s)) - q(s) .
\end{split}\end{aligned}$$ The orbits of both systems coincide (differing only in time parametrisation) but (\[eq:br\_dynamics\]) has the advantage of being autonomous. The existence of solutions for any initial conditions follows from upper semicontinuity of ${{\mathcal B \! \mathcal R \!}}_A$ and ${{\mathcal B \! \mathcal R \!}}_B$, see [@Aubin1984 Chapter 2.1].
The classical learning processes (\[eq:fp\_dynamics\]) and (\[eq:br\_dynamics\]) are naturally related to Hamiltonian Dynamics via the following construction. Let $H \colon \Sigma_A \times \Sigma_B \rightarrow {{\mathbb R}}$ be defined as $$H(p,q) = \max_i \left(Aq\right)_i - \min_j\left(pA\right)_j.$$ Further let $\Omega = ( \omega_{ij} ) $ be some non-singular $n \times n$ matrix. Consider a Hamiltonian vector field $X_H$ associated to $H$ and the symplectic 2-form $\sum_{ij}\omega_{ij} dp_i\wedge dq_j$. The corresponding differential inclusion is $$(\dfrac{dp}{dt},\dfrac{dq}{dt})\in X_H(p,q).
\label{eq3}$$ Let us denote $T \Sigma_A = T \Sigma_B = \left\{ v \in \mathbb{R}^n : \sum v_i = 0 \right\}$, and let $\Omega', A'$ be the transposes of the matrices $\Omega$ and $A$. Further let $P_A\colon {{\mathbb R}}^n \rightarrow T \Sigma_A , P_B\colon {{\mathbb R}}^n \rightarrow T \Sigma_B$ be the parallel projections to $T \Sigma_A$ and $T \Sigma_B$ along the vectors $\Omega'^{-1} {\underline 1}$ and $\Omega^{-1} {\underline 1}$ respectively (where ${\underline 1}= (1\, 1 \, \ldots \, 1) \in {{\mathbb R}}^n$). Then a simple calculation (see [@VanStrien2009a]) shows that the above inclusion (\[eq3\]) takes the form $$\begin{aligned}
\label{eq:ham_eq}
\begin{split}
\dfrac{dp}{dt} &\in P_A \Omega'^{-1} A' {{\mathcal B \! \mathcal R \!}}_A(q),\\
\dfrac{dq}{dt} &\in P_B \Omega^{-1} A {{\mathcal B \! \mathcal R \!}}_B(p).
\end{split}\end{aligned}$$ The projections $P_A,P_B$ appear in these equations because $H$ is considered as a function on $\Sigma = \Sigma_A \times \Sigma_B$ and so the dynamics is constrained to this affine subspace. The Hamiltonian differential inclusion (\[eq:ham\_eq\]) is closely related to the BR dynamics if we take $\Omega=A$. In this case the dynamics defined on $H^{-1}(1)$ by (\[eq:ham\_eq\]) equals the BR-dynamics projected via $\pi$ (defined in equation (\[eq:pi\_proj\])) to this level set of $H$ (for details see [@VanStrien2009a]). In other words, if we compute an orbit under the BR-dynamics then the image under $\pi$ of this orbit is an orbit under the Hamiltonian dynamics (\[eq:ham\_eq\]).
For this reason, for the rest of this paper we will assume $\Omega = A$, where $A \in \mathbb{R}^{3\times 3}$ is non-singular.
For later use, let us make some simple observations. $\Sigma_B$ can be divided into $n$ convex regions $R^A_i = {{\mathcal B \! \mathcal R \!}}_A^{-1}(e_i)$, where $i \in \left\{ 1, \ldots, n\right\}$ and analogously $R^B_j = {{\mathcal B \! \mathcal R \!}}_B^{-1}(e_j) \subset \Sigma_A$, $j \in \left\{ 1, \ldots, n\right\}$.
![An example of $\Sigma_A, \Sigma_B \subset \mathbb{R}^3$ being partitioned into regions $R^B_j$ and $R^A_i$. The dashed lines indicate the hyperplanes at which one of the ${{\mathcal B \! \mathcal R \!}}$ correspondences is multivalued. Also shown is a piece of an orbit with initial conditions $(p_0,q_0)$. The orbit changes direction four times: when $p$ crosses a dashed line, $q$ changes direction, and vice versa. Note that this is an orbit for the BR dynamics on $\Sigma = \Sigma_A \times \Sigma_B$, not for the induced Hamiltonian dynamics on $H^{-1}(1)$.[]{data-label="fig:fp_example"}](fp_example)
Since ${{\mathcal B \! \mathcal R \!}}_A \times {{\mathcal B \! \mathcal R \!}}_B$ is constant on $R_{ij} = R^B_j \times R^A_i$, (\[eq:fp\_dynamics\]) and (\[eq:br\_dynamics\]) have continuous orbits which are piecewise straight lines heading for vertices $(e_k,e_l) \in \Sigma$ whenever $(p(t),q(t)) \in R_{kl}$. The orbits only change direction at a finite number of hyperplanes, namely whenever ${{\mathcal B \! \mathcal R \!}}_A$ or ${{\mathcal B \! \mathcal R \!}}_B$ (or both) become multivalued. More precisely, $p(t)$ changes direction whenever $q(t)$ passes from $R^A_i$ to $R^A_{i'}$ for some $i \neq i'$, and vice versa. See Fig.\[fig:fp\_example\] for an example with $n = 3$.
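Because the right-hand side of (\[eq:br\_dynamics\]) is constant in each region, the flow there has the closed form $p(s) = e_k + (p(0)-e_k)e^{-s}$, $q(s) = e_l + (q(0)-e_l)e^{-s}$, and the next direction change can be computed exactly by solving a linear equation in $x = e^{-s}$. The following sketch is one possible event-driven implementation of this observation (generic, non-degenerate matrices are assumed and exact ties are ignored); it is meant purely as an illustration:

```python
import numpy as np

def br_itinerary(A, B, p, q, n_switches=100):
    """Event-driven integration of the BR dynamics.  Inside R_{kl} the orbit
    is p(s) = e_k + (p0 - e_k) e^{-s}, q(s) = e_l + (q0 - e_l) e^{-s}; the
    next direction change is found exactly by solving, in x = e^{-s},
    (A q(s))_i = (A q(s))_k  (then p switches target)  or
    (p(s) B)_j = (p(s) B)_l  (then q switches target)."""
    n = len(p)
    k, l = int(np.argmax(A @ q)), int(np.argmax(p @ B))
    itinerary, durations = [(k + 1, l + 1)], []
    for _ in range(n_switches):
        Aq, Al = A @ q, A[:, l]          # (A q(s))_i = Al_i + x * (Aq_i - Al_i)
        pB, Bk = p @ B, B[k, :]          # (p(s) B)_j = Bk_j + x * (pB_j - Bk_j)
        best = None                      # (x, new_k, new_l) of the earliest switch
        for i in range(n):
            if i != k:
                den = (Aq[i] - Al[i]) - (Aq[k] - Al[k])
                if abs(den) > 1e-14:
                    x = (Al[k] - Al[i]) / den
                    if 1e-12 < x < 1 - 1e-12 and (best is None or x > best[0]):
                        best = (x, i, l)
        for j in range(n):
            if j != l:
                den = (pB[j] - Bk[j]) - (pB[l] - Bk[l])
                if abs(den) > 1e-14:
                    x = (Bk[l] - Bk[j]) / den
                    if 1e-12 < x < 1 - 1e-12 and (best is None or x > best[0]):
                        best = (x, k, j)
        if best is None:                 # orbit converges straight to a vertex
            break
        x, k_new, l_new = best           # largest x in (0,1) <=> first switch
        ek, el = np.eye(n)[k], np.eye(n)[l]
        p, q = ek + x * (p - ek), el + x * (q - el)   # advance to the switching point
        durations.append(-np.log(x))     # time spent in the region just left
        k, l = k_new, l_new
        itinerary.append((k + 1, l + 1))
    return itinerary, durations
```

The sojourn times $-\log x$ between switches will be used again when empirical frequencies are discussed in Part 2.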
Combinatorial Description for the Case $n = 3$
==============================================
In this section and later on we restrict our attention to the case of dimension $n = 3$. The partition of $\Sigma$ into the convex blocks $R_{ij}$ quite naturally gives rise to a coding of orbits of (\[eq:br\_dynamics\]) and (\[eq:fp\_dynamics\]). We encode an orbit $(p(t),q(t))$ by a (finite or infinite, one-sided) itinerary $(i_0,j_0)\rightarrow (i_1,j_1)\rightarrow \ldots \rightarrow (i_k,j_k)\rightarrow \ldots$ indicating that there exists a sequence of times $(t_k)$ such that $(p(t),q(t)) \in R_{i_k,j_k}$ for $t_k < t < t_{k+1}$. To simplify notation we will often write $(i,j)$ instead of $R_{ij}$.
Abstractly we have a graph of nine vertices $(i,j),~i,j=1,2,3$ with directed edges between them (see Fig.\[fig:markov\]).
It is not difficult to see that for an orbit of the BR dynamics for a fixed bimatrix $(A,B)$, such an itinerary can then contain $(i,j) \rightarrow (i',j')$, $i \neq i'$ (or $j \neq j'$) if and only if $a_{i'j} \geq a_{ij}$ ($b_{ij'} \geq b_{ij}$). Note that for almost all initial conditions, in the corresponding orbits $p$ and $q$ never switch directions simultaneously, i.e. the itinerary only contains transitions of the form $(i,j) \rightarrow (i',j)$ and $(i,j) \rightarrow (i,j')$, $i \neq i'$, $j \neq j'$ (see [@Sparrow2007]).
Further we want to make sure that for any $i \ne i'$ ($j \ne j'$) there is only one possible transition direction between $(i,j)$ and $(i',j)$ ($(i,j)$ and $(i,j')$). Therefore we introduce the following non-degeneracy assumption on the bimatrix $(A,B)$:
\[as:non\_deg\] $a_{ij} \neq a_{i'j}$ and $b_{ij} \neq b_{ij'}$ for all $i,i',j,j'$ with $i \neq i'$ and $j \neq j'$.
Clearly the set of bimatrices satisfying this assumption is open, dense and of full Lebesgue measure in the space of bimatrices.
The possible transitions can be expressed in a *transition diagram* as in Fig.\[fig:diag\_intro\](a). The three rows and three columns of the diagram represent the regions $R^A_i,~i=1,2,3$ and $R^B_j,~j=1,2,3$, respectively. The arrows indicate the possible transitions between the regions, which by Assumption \[as:non\_deg\] always have a unique direction. For example, the itinerary of an orbit of the BR dynamics for a given bimatrix can contain $(1,2) \rightarrow (1,3)$ if and only if in the first row of its transition diagram an arrow points from the second into the third column. Opposite sides of the diagram should be thought of as identified, so that possible transitions between the first and third rows and columns are indicated by arrows on the boundary of the diagram. It is important to note that this partition does not have the nice properties of a Markov partition: there is no claim that every itinerary that can be obtained from the transition diagram can actually be realised by an orbit of the BR dynamics.
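In practice the transition diagram of a concrete bimatrix is obtained mechanically from the rule above: under Assumption \[as:non\_deg\], $(i,j) \rightarrow (i',j)$ precisely when $a_{i'j} > a_{ij}$, and $(i,j) \rightarrow (i,j')$ precisely when $b_{ij'} > b_{ij}$. A small helper along the following lines (with 1-based labels matching the text) suffices:

```python
import numpy as np
from itertools import product

def transition_diagram(A, B):
    """Directed edges of the transition diagram of the bimatrix (A, B):
    (i,j) -> (i',j) iff a_{i'j} > a_{ij};  (i,j) -> (i,j') iff b_{ij'} > b_{ij}."""
    n = A.shape[0]
    edges = set()
    for i, j in product(range(n), repeat=2):
        for i2 in range(n):
            if i2 != i and A[i2, j] > A[i, j]:
                edges.add(((i + 1, j + 1), (i2 + 1, j + 1)))   # vertical arrow
        for j2 in range(n):
            if j2 != j and B[i, j2] > B[i, j]:
                edges.add(((i + 1, j + 1), (i + 1, j2 + 1)))   # horizontal arrow
    return edges
```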
![(a) example of a transition diagram (b) dominated row (c) dominated column (d),(e) examples of alternating cycles (which will be shown to be impossible in zero-sum games)[]{data-label="fig:diag_intro"}](diag_intro)
One may now ask whether a given transition diagram is realisable as the transition diagram of BR dynamics for a bimatrix game $(A,B)$, and how properties of a game relate to the combinatorial information given by its transition diagram.
A simple first observation is that no row of a transition diagram can have three horizontal arrows pointing in the same direction, as this would imply a cyclic chain of strict inequalities $b_{i,j} > b_{i,j'} > b_{i,j''} > b_{i,j}$. Analogously no column of such a diagram can have three vertical arrows pointing in the same direction, since this would give $a_{i,j} > a_{i',j} > a_{i'',j} > a_{i,j}$.
It is easy to see that, apart from this restriction, any transition diagram can be realised by an appropriate choice of $(A,B)$. However, our interest lies in bimatrix games whose BR dynamics displays ’non-trivial’ behaviour. For this we introduce the following assumption:
\[as:dom\_strat\] In the transition diagram no row (column) is dominated by another row (column), i.e. no three vertical (horizontal) arrows between two rows (columns) point in the same direction (Fig.\[fig:diag\_intro\](b)-(c)) [^5]. Formally, let $\{i,i',i''\} = \{j, j', j''\} = \{1,2,3\}$, then $$(i,j) \rightarrow (i',j) \text{ and } (i,j') \rightarrow (i',j') \Rightarrow (i',j'') \rightarrow (i,j'')$$ and $$(i,j) \rightarrow (i,j') \text{ and } (i',j) \rightarrow (i',j') \Rightarrow (i'',j') \rightarrow (i'',j).$$
Zero-Sum Games
==============
In game theory, an important class of (bimatrix) games are the zero-sum games, i.e. games $(A,B)$, such that $A+B = 0$. However, our analysis remains valid for a larger class, namely games that are linearly equivalent to a zero-sum game.
Two $3\times 3$ bimatrix games $(A,B)$ and $(C,D)$ are **linearly equivalent**, if there exist $e>0, f_j, j=1,2,3$ and $g>0, h_i, i = 1,2,3$ such that $$c_{ij} = e a_{ij} + f_j \text{ and } d_{ij} = g b_{ij} + h_i .$$
It can be checked that for linearly equivalent bimatrix games $(A,B)$ and $(C,D)$, the respective best-response correspondences coincide: ${{\mathcal B \! \mathcal R \!}}_A = {{\mathcal B \! \mathcal R \!}}_C$ and ${{\mathcal B \! \mathcal R \!}}_B = {{\mathcal B \! \mathcal R \!}}_D$. From the definitions it immediately follows that linearly equivalent bimatrix games induce the same dynamics (\[eq:fp\_dynamics\]) and (\[eq:br\_dynamics\]). Since our main focus lies on these dynamical processes, in the rest of this text we call a game $(A,B)$ zero-sum if there exists a linearly equivalent game $(C,D)$ such that $C+D = 0$.
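This enlarged notion of zero-sum can also be tested algorithmically: $(A,B)$ is zero-sum in this sense precisely when there are $e,g>0$ and vectors $f,h$ with $e\,a_{ij} + f_j + g\,b_{ij} + h_i = 0$ for all $i,j$, and after normalising $e=1$ this becomes a linear least-squares problem. The following sketch is one possible such check (the tolerance is an arbitrary choice):

```python
import numpy as np

def is_generalised_zero_sum(A, B, tol=1e-9):
    """Check whether (A, B) is linearly equivalent to a zero-sum game,
    i.e. whether there are e, g > 0, f_j, h_i with
    e*a_ij + f_j + g*b_ij + h_i = 0 for all i, j.  We normalise e = 1 and
    solve the remaining linear system in (g, f, h) by least squares."""
    n = A.shape[0]
    rows, rhs = [], []
    for i in range(n):
        for j in range(n):
            row = np.zeros(1 + 2 * n)       # unknowns: g, f_1..f_n, h_1..h_n
            row[0] = B[i, j]
            row[1 + j] = 1.0                # coefficient of f_j
            row[1 + n + i] = 1.0            # coefficient of h_i
            rows.append(row)
            rhs.append(-A[i, j])
    M, r = np.array(rows), np.array(rhs)
    sol, *_ = np.linalg.lstsq(M, r, rcond=None)
    residual = M @ sol - r
    return bool(sol[0] > 0 and np.max(np.abs(residual)) < tol)
```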
\[def:nash\_eq\] A point $({{\bar p}},{{\bar q}}) \in \Sigma$ is called a **Nash Equilibrium** if ${{\bar p}}\in {{\mathcal B \! \mathcal R \!}}_A({{\bar q}})$ and ${{\bar q}}\in {{\mathcal B \! \mathcal R \!}}_B({{\bar p}})$.
By definition, Nash Equilibria are precisely the fixed points of (\[eq:fp\_dynamics\]) and (\[eq:br\_dynamics\]). It has been proved by John Nash that every game with finitely many players and strategies has a Nash Equilibrium (see [@NashJohnForbes1951]). A very important classical result is the following:
\[thm:zs\_conv\] For a zero-sum game, every orbit of (\[eq:br\_dynamics\]) and (\[eq:fp\_dynamics\]) converges to the set of Nash Equilibria.
A short proof using an explicitly given Lyapunov function can be found in [@Hofbauer1995]. In the same paper, Hofbauer states the converse conjecture, which still remains open:
\[conj:hofbauer\_conj\] A bimatrix game with a unique Nash Equilibrium point in $\mathring \Sigma$ that is stable under the BR (or FP) dynamics must be a zero-sum game.
As the above indicates, zero-sum games are of great interest in the study of BR and FP dynamics. A natural question to ask now is which combinatorial configurations (transition diagrams) can be realised by zero-sum games. In this paper we will restrict our attention to zero-sum games with a unique Nash Equilibrium in the interior of $\Sigma$:
\[as:int\_ne\] The bimatrix game $(A,B)$ has a unique Nash Equilibrium point $(E^A,E^B)$, which lies in the interior of $\Sigma$. Equivalently (see for instance [@VanStrien2009a]), there exists precisely one point $E^B \in \mathring{\Sigma}_B$, such that $(A E^B)_i = (A E^B)_j ~\forall i,j$, and precisely one point $E^A \in \mathring{\Sigma}_A$, such that $(E^A B)_i = (E^A B)_j ~\forall i,j$.
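Assumption \[as:int\_ne\] can be verified by elementary linear algebra: $(A E^B)_i$ being constant in $i$ together with the simplex constraint forces $E^B$ to be proportional to $A^{-1}{\underline 1}$, and similarly $E^A$ is proportional to ${\underline 1}' B^{-1}$. A minimal sketch, assuming $A$ and $B$ are invertible, is the following:

```python
import numpy as np

def interior_equilibrium(A, B):
    """Candidate interior Nash Equilibrium (E^A, E^B) of the bimatrix (A, B):
    solve (A E^B)_i = const and (E^A B)_j = const under the simplex
    constraints.  Assumes A and B are invertible; returns None if the
    resulting point is not strictly positive (no interior equilibrium)."""
    ones = np.ones(A.shape[0])
    EB = np.linalg.solve(A, ones)          # A EB proportional to the all-ones vector
    EB = EB / EB.sum()
    EA = np.linalg.solve(B.T, ones)        # EA B proportional to the all-ones row
    EA = EA / EA.sum()
    if np.all(EA > 0) and np.all(EB > 0):
        return EA, EB
    return None
```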
For every bimatrix game $(A,B)$, Assumption \[as:int\_ne\] implies Assumption \[as:dom\_strat\].
If Assumption \[as:dom\_strat\] does not hold, there is a row (column) in the transition diagram for $(A,B)$ which is dominated by another row (column), say $i$ dominated by $i'$. It follows that $R_i^A$ ($R_i^B$) is empty.
On the other hand, Assumption \[as:int\_ne\] implies that for all $k,l$, $R_{kl} = R_l^B\times R_k^A$ has non-empty intersection with every neighbourhood of the Nash Equilibrium. In particular every $R_k^A$ and every $R_l^B$ is non-empty, which gives a contradiction.
For later use, we make the following definitions:
\[def:alt\_cycle\] A sequence $(i_0,j_0), (i_1,j_1), \ldots , (i_n,j_n) = (i_0,j_0)$ is called **alternating cycle** for $(A,B)$, if after reversing either all downward and upward pointing or all right and left pointing arrows in the transition diagram of $(A,B)$, it forms a directed loop. Examples of alternating cycles are shown in Fig.\[fig:diag\_intro\](d) and (e). More formally, either
- $(i_k,j_k) \rightarrow (i_{k+1},j_{k+1})$ whenever $i_k \neq i_{k+1}$ and $(i_{k+1},j_{k+1}) \rightarrow (i_k,j_k)$ whenever $j_k \neq j_{k+1}$, or
- $(i_{k+1},j_{k+1}) \rightarrow (i_k,j_k)$ whenever $i_k \neq i_{k+1}$ and $(i_k,j_k) \rightarrow (i_{k+1},j_{k+1})$ whenever $j_k \neq j_{k+1}$.
We call $(i,j)$ a **sink**, if it can be entered but not left by trajectories of the BR dynamics (i.e. $(i',j) \rightarrow (i,j)$ and $(i,j') \rightarrow (i,j)$ for $i' \neq i$ and $j' \neq j$). Conversely we call it a **source**, if it can be left but not entered.
We can now formulate several consequences for the transition diagram of a zero-sum game $(A,B)$:
\[lem:zs\_impl\] Let $(A,B)$ be zero-sum and satisfy Assumptions \[as:non\_deg\] and \[as:int\_ne\]. Then:
1. The transition diagram does not have alternating cycles.
2. The transition diagram does not have sinks.
3. The transition diagram does not have sources.
One can see that in (1), without loss of generality, we can restrict attention to cycles in which the $i$- and $j$-component change alternately, which justifies the notion of alternating cycle. In fact in $3\times 3$ games (1) reduces to saying that there are no alternating cycles of the two kinds depicted in Fig.\[fig:diag\_intro\](d) and (e).
![Proof of statement (3) of Lemma \[lem:zs\_impl\]: (a) General case for a diagram with a source (b) Case 1: sink in (2,1) contradicts zero-sum (c) Case 2: alternating cycle contradicts zero-sum (d) Case 3 (e),(f) Case 3: necessarily following configuration[]{data-label="fig:diag_source"}](diag_source)
![Proof of statement (3) of Lemma \[lem:zs\_impl\]: This configuration necessarily follows from the transition diagram in Case 3 of the proof. $(N,M)$ is then a Nash Equilibrium, contradicting Assumption \[as:int\_ne\].[]{data-label="fig:diag_source2"}](diag_source2)
For statement (1) assume that $A+B = 0$ (otherwise choose linearly equivalent matrices such that this holds and note that this does not change the arrows in the transition diagram). Now note first that $(i,j) \rightarrow (i',j)$ iff $a_{i'j} > a_{ij}$. Further, $(i,j) \rightarrow (i,j')$ iff $b_{ij'} > b_{ij}$, which is equivalent to $a_{ij} > a_{ij'}$. It follows that an alternating cycle leads to a chain of inequalities $a_{i_0j_0} > a_{i_1j_1} > \ldots > a_{i_nj_n} = a_{i_0j_0}$ and therefore cannot exist.
To prove statement (2), we use Theorem \[thm:zs\_conv\]. It follows from this theorem and Assumption \[as:int\_ne\] that orbits of (\[eq:br\_dynamics\]) converge to a single isolated point in the interior of $\Sigma$. A sink in the transition diagram however would imply that orbits of (\[eq:br\_dynamics\]) that start in $R_{ij}$ (for some $i,j$) stay in it for all times. Since orbits do not change direction while in $R_{ij}$, this can only be the case if they converge (in straight line segments) towards a vertex on $\partial \Sigma$, contradicting convergence to the interior Nash Equilibrium.
Finally, to show statement (3) of the lemma, let us assume for a contradiction that there exists a zero-sum game satisfying the assumptions whose transition diagram has a source. After possibly permuting rows and columns and swapping the roles of the two players we can assume that the source is $(2,2)$ and that we have the (incomplete) diagram shown in Fig.\[fig:diag\_source\](a). Let us now consider all four possible cases for the vertical arrows in $(2,3)$:
- Case 1: $(2,3) \rightarrow (i,3),~i=1,2$, i.e. both arrows pointing *out of* $(2,3)$: Either row 2 dominates row 1 or 3, or $(2,1)$ is a sink, see Fig.\[fig:diag\_source\](b).
- Case 2: $(i,3) \rightarrow (2,3),~i=1,2$, i.e. both arrows pointing *into* $(2,3)$: Either column 2 dominates column 3 or there is an alternating cycle, see Fig.\[fig:diag\_source\](c).
- Case 3: $(3,3) \rightarrow (2,3)$, $(2,3) \rightarrow (1,3)$, both arrows point *upward* (Fig.\[fig:diag\_source\](d)): In order to avoid an alternating cycle and a dominated column, one necessarily has $(3,2)\rightarrow(3,3)$ and $(1,3)\rightarrow (1,2)$. Further, since row 2 may not dominate row 1 and alternating cycles cannot happen in a zero-sum game, one gets $(1,1) \rightarrow (2,1)$ and $(1,2) \rightarrow (1,1)$; also $(2,1) \rightarrow (3,1)$ is necessary to avoid a source in $(2,1)$, see Fig.\[fig:diag\_source\](e). With some further deductions of the same kind one can show that the only possible transition diagram is the one shown in Fig.\[fig:diag\_source\](f). We can now deduce that $\Sigma_A$ and $\Sigma_B$ are partitioned into the regions $R_i^A$ and $R_j^B$ as shown in Fig.\[fig:diag\_source2\]. Consider the point on $\partial \Sigma$ denoted by $(N,M)$ and note that ${{\mathcal B \! \mathcal R \!}}_A(M)$ contains $e_2$ and $e_3$, hence all their convex combinations. Therefore $N \in {{\mathcal B \! \mathcal R \!}}_A(M)$. Analogously, $M \in {{\mathcal B \! \mathcal R \!}}_B(N)$. Therefore $(N,M)$ is a Nash Equilibrium, contradicting our assumption that the interior Nash Equilibrium is unique.
(In fact it also follows from this configuration that there exist initial conditions arbitrarily close to the interior Nash Equilibrium whose trajectories spiral off towards $(N,M)$ and therefore the interior Nash Equilibrium cannot be stable for the dynamics.)
- Case 4: $(1,3) \rightarrow (2,3)$ and $(2,3) \rightarrow (3,3)$, i.e. both arrows point *downward*: This case is analogous to the previous one.
To conclude, we have shown that a source in the diagram contradicts our assumption of a zero-sum game with unique interior Nash Equilibrium, which finishes the proof of statement (3).
Main Result
===========
The natural next aim is of course to obtain a full characterization of all combinatorial configurations that can be realised by zero-sum games. This can indeed be done after defining a suitable notion of combinatorially equivalent games.
\[def:comb\_equiv\] We call two bimatrix games $(A,B)$ and $(C,D)$ **combinatorially identical**, if they induce the same transition relation, i.e. $(i,j)\rightarrow (i',j')$ for $(A,B)$ iff $(i,j)\rightarrow (i',j')$ for $(C,D)$.
We call two bimatrix games $(A,B)$ and $(C,D)$ **combinatorially equivalent**, if there exist permutation matrices $P, Q$ such that $(A,B)$ and $(PCQ, PDQ)$ are combinatorially identical or $(B',A')$ and $(PCQ, PDQ)$ are combinatorially identical.
The definition expresses the idea that games are combinatorially equivalent if they have the same transition diagram up to permutation of rows and columns and transposition. The main result is the following:
\[thm:main\] The types of transition diagram (combinatorial equivalence classes) that can be realised by a zero-sum game satisfying Assumptions \[as:non\_deg\] and \[as:int\_ne\] are precisely all those that satisfy the following (combinatorial) conditions:
1. No row of the diagram has three horizontal arrows pointing in the same direction and no column has three vertical arrows pointing in the same direction.
2. No three horizontal arrows between two columns point in the same direction and no three vertical arrows between two rows point in the same direction.
3. The diagram has no sinks.
4. The diagram has no sources.
5. The diagram has no alternating cycles.
This gives precisely 23 different types of transition diagrams (up to combinatorial equivalence). These are listed in Appendix \[ap:types\]. [^6]
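Conditions (2)-(5) can be checked mechanically for a concrete bimatrix; condition (1) is automatic for a diagram derived from actual matrices, since three arrows pointing in the same direction would force a cyclic chain of strict inequalities. The following sketch is one possible such check; in particular it tests (5) by reversing all horizontal, respectively all vertical, arrows and searching for a directed cycle, in line with Definition \[def:alt\_cycle\]:

```python
import numpy as np
from itertools import product, combinations

def diagram_edges(A, B):
    # (i,j) -> (i',j) iff a_{i'j} > a_{ij};  (i,j) -> (i,j') iff b_{ij'} > b_{ij}
    n = A.shape[0]
    E = set()
    for i, j in product(range(n), repeat=2):
        E |= {((i, j), (i2, j)) for i2 in range(n) if i2 != i and A[i2, j] > A[i, j]}
        E |= {((i, j), (i, j2)) for j2 in range(n) if j2 != j and B[i, j2] > B[i, j]}
    return E

def has_directed_cycle(E, nodes):
    # simple DFS-based cycle detection in a directed graph
    succ = {v: [w for (u, w) in E if u == v] for v in nodes}
    colour = dict.fromkeys(nodes, 0)             # 0 unseen, 1 on stack, 2 done
    def dfs(v):
        colour[v] = 1
        for w in succ[v]:
            if colour[w] == 1 or (colour[w] == 0 and dfs(w)):
                return True
        colour[v] = 2
        return False
    return any(colour[v] == 0 and dfs(v) for v in nodes)

def satisfies_main_theorem_conditions(A, B):
    n = A.shape[0]
    nodes = list(product(range(n), repeat=2))
    E = diagram_edges(A, B)
    # condition (2): no dominated row or column
    no_dom = all(not (all(A[i2, j] > A[i, j] for j in range(n)) or
                      all(A[i, j] > A[i2, j] for j in range(n)))
                 for i, i2 in combinations(range(n), 2)) and \
             all(not (all(B[i, j2] > B[i, j] for i in range(n)) or
                      all(B[i, j] > B[i, j2] for i in range(n)))
                 for j, j2 in combinations(range(n), 2))
    # conditions (3) and (4): no sinks, no sources
    no_sink = all(any(e[0] == v for e in E) for v in nodes)
    no_source = all(any(e[1] == v for e in E) for v in nodes)
    # condition (5): reverse all horizontal (resp. vertical) arrows and
    # look for a directed loop
    rev_h = {((v, u) if u[0] == v[0] else (u, v)) for (u, v) in E}
    rev_v = {((v, u) if u[1] == v[1] else (u, v)) for (u, v) in E}
    no_alt = not has_directed_cycle(rev_h, nodes) and \
             not has_directed_cycle(rev_v, nodes)
    return no_dom and no_sink and no_source and no_alt
```

For the example bimatrices of Part 2, which are zero-sum and have an interior equilibrium, one expects this check to return `True`.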
Throughout the proof of the theorem we will make use of the following notion:
\[def:short\_loop\] We call an oriented loop of length four (formed by the arrows in the diagram) a **short loop**, see Fig.\[fig:main\_proof\](a). A short loop always has the form $(i,j) \rightarrow (i',j) \rightarrow (i',j') \rightarrow (i,j') \rightarrow (i,j)$ and we indicate the vertex in the diagram encircled by such loop by a $\bullet$.[^7]
By the above discussion we know that (1) is true for any transition diagram of a game and (2) corresponds to Assumption \[as:dom\_strat\] (which is implied by Assumption \[as:int\_ne\]), so the only conditions that are left to check are (3)-(5). By Lemma \[lem:zs\_impl\], we already know that (3)-(5) are necessary conditions for a diagram to be realisable by a zero-sum game.
To show that (1)-(5) are also sufficient, we will proceed in two steps: we will show that combinatorially these conditions give rise to precisely 23 types of diagrams (up to permutation of rows and columns and transposition) and then we will give examples of zero-sum games realising these types. Because of the initially large number of possible transition diagrams, we will group them by the number of short loops contained in them.
![Proof of Theorem \[thm:main\]: (a) a short loop (b) two rows coinciding at all positions (c) two rows coinciding at one and differing at two positions (d) Lemma \[lem:rc\_lem\](1): if two rows differ at two positions, then either there is a short loop or one row dominates the other (e) Lemma \[lem:num\_sloops\]: two short loops not between the same rows/columns (f) Lemma \[lem:rc\_lem\](2) applied to columns 1,2 and 2,3 in previous diagram[]{data-label="fig:main_proof"}](main_proof)
Let us introduce the notion of rows (or columns) *coinciding or differing at a position*. Two rows, say $i$ and $i'$, coincide at a position, say between columns $j$ and $j'$, if $(i,j) \rightarrow (i,j')$ if and only if $(i',j) \rightarrow (i',j')$, and they differ at this position otherwise. E.g. in Fig.\[fig:main\_proof\](b) rows 1 and 2 coincide at all positions, whereas in Fig.\[fig:main\_proof\](c) they coincide at one and differ at two positions.
We now introduce a few very helpful lemmas about the transition diagrams of zero-sum games satisfying our assumptions:
\[lem:comb0\] Two columns (or rows) can have at most two short loops between them.
\[lem:rc\_lem\]
1. If two rows (columns) differ at two positions, then there is a short loop between these rows (columns).
2. If two rows (columns) differ at all three positions, then there are precisely two short loops between them.
3. If two rows (columns) coincide at two positions, then there is a short loop between each of these rows (columns) and the third row (column). In particular the diagram has at least two short loops.
4. If two rows (columns) coincide at all three positions, then there are precisely two short loops between each of these rows (columns) and the third row (column). Then the diagram has precisely four short loops.
For (1), note that every $2 \times 2$ block obtained by deleting one row and one column from a transition diagram either has two arrows pointing in the same direction, or contains an alternating cycle or a short loop. Assume that two rows (columns) differ at two positions (Fig.\[fig:main\_proof\](c)). Since we don’t allow alternating cycles, the only way a short loop between the two rows (columns) can be avoided is by having all arrows between them pointing in the same direction (Fig.\[fig:main\_proof\](d)). But this case is ruled out by hypothesis (2) of the theorem. Hence there is a short loop between them.
Essentially the same argument shows that statement (2) of the lemma holds.
If two rows (columns) coincide at two positions, then since no column (row) is allowed to be dominated, each of these rows differs at two positions from the third row (column). Statement (3) then follows from statement (1).
The same argument proves statement (4). The fact that the diagram then has precisely four short loops follows from Lemma \[lem:comb0\] and the fact that there cannot be any short loops between the two rows (columns) that coincide at all positions.
We can now proceed to grouping all possible transition diagrams by the number of short loops contained in them:
\[lem:num\_sloops\] The transition diagram of a game as in Theorem \[thm:main\] can only have between three and six short loops.
Note first that for any pair of rows (or columns) of a transition diagram at least one of the cases of Lemma \[lem:rc\_lem\] applies and we can make the following list of cases for two rows (columns), say $i$ and $j$:
- $i$ and $j$ coincide at 0 positions, then they have precisely 2 short loops between them.
- $i$ and $j$ coincide at 1 position, then they have 1 or 2 short loops between them.
- $i$ and $j$ coincide at 2 positions, then there is at most 1 short loop between them, and there is at least 1 short loop between each of them and the third row (column).
- $i$ and $j$ coincide at 3 positions, then they have no short loops between them, and there are precisely 2 short loops between each of them and the third row (column).
It is clear from these that there cannot be a diagram without any short loops: pick any two rows and whichever of the above cases applies, it follows that the diagram has at least one short loop.
Similarly, the transition diagram cannot have precisely one short loop. Assume for contradiction that (without loss of generality) there is a single short loop between rows 1 and 2. Now consider rows 2 and 3, which do not have a short loop between them. Then they must have at least 2 coinciding positions. But having 2 or more coinciding positions implies that there is also a short loop between rows 3 and 1, which contradicts the assumption.
Now let us show that there is no transition diagram with precisely two short loops. Assume first that such a diagram exists and that both short loops are between the same two rows (or columns), say rows 1 and 2. Applying the above rules to rows 2 and 3 we see that either there have to be more short loops between rows 2 and 3 or between rows 3 and 1, in both cases a contradiction. So the only remaining possibility is that the two short loops are neither between the same two rows nor between the same two columns.
Here there are two cases to check: either both short loops run clockwise or one runs clockwise and one runs anti-clockwise (any other configuration leads to a combinatorially equivalent diagram). Assume the short loops have different orientations. Without loss of generality we have the configuration shown in Fig.\[fig:main\_proof\](e). By Lemma \[lem:rc\_lem\](2) applied to columns 1,2 and 2,3 we get that $(1,1) \rightarrow(2,1)$ and $(2,3) \rightarrow(3,3)$ (Fig.\[fig:main\_proof\](f)). But now Lemma \[lem:rc\_lem\](1) applied to columns 1,3 implies that there is a third short loop. A similar chain of deductions shows that the case with both short loops having the same orientation also cannot happen. Hence a transition diagram with two short loops is not possible.
Finally, the upper bound of six short loops follows directly from Lemma \[lem:comb0\], which finishes the proof of the lemma.
We can now state the final lemma of the proof of the main theorem. The proof of the lemma consists of easy (but somewhat tedious) deductions of the only possible combinatorial configurations for the transition diagrams and we do not provide complete details. Lemma \[lem:rc\_lem\] is very useful to reduce the number of diagrams that have to be checked.
Up to combinatorial equivalence, there are precisely
- two non-equivalent transition diagrams with precisely three short loops,
- fifteen non-equivalent transition diagrams with precisely four short loops,
- five non-equivalent transition diagrams with precisely five short loops,
- one transition diagram with precisely six short loops,
that satisfy conditions (1)-(5) of the main theorem.
![The possible (non-equivalent) configurations for a transition diagram containing 3 (a-c), 4 (d-g), 5 (h-i) or 6 (j) short loops[]{data-label="fig:sloop_pos"}](sloop_pos)
Up to combinatorial equivalence, there are three ways in which three short loops can be positioned, see Fig.\[fig:sloop\_pos\](a)-(c). It can be checked that only the first of these can give a transition diagram that satisfies (1)-(5), and there are two non-equivalent such diagrams.
Further, there are four ways to position four short loops (Fig.\[fig:sloop\_pos\](d)-(g)), the first three of which admit five non-equivalent transition diagrams each, whereas the last one contradicts (1)-(5).
The two ways to position five short loops (Fig.\[fig:sloop\_pos\](h)-(i)) admit two and three non-equivalent transition diagrams satisfying (1)-(5). Finally, applying Lemma \[lem:comb0\], it is clear that up to combinatorial equivalence the only way to position six short loops is the one shown in Fig.\[fig:sloop\_pos\](j) and it is straightforward to check that there is only one possible transition diagram of this type.
Together with Lemma \[lem:num\_sloops\], this shows that there are precisely 23 transition diagram types satisfying (1)-(5). A list of the 23 diagram types and zero-sum game bimatrices realising them can be found in Appendix \[ap:types\], which finishes the proof of Theorem \[thm:main\].
Quasi-Periodic Orbits
=====================
In this last section of the analytic part of the paper we introduce the game-theoretic notion of quasi-periodicity and investigate the relation to its usual mathematical definition. This notion was first introduced in [@Rosenmuller1971]:
We say that an orbit of BR dynamics is **quasi-periodic** (in the game-theoretic sense), if its itinerary is periodic.
Note that a priori this notion of quasi-periodicity of an orbit is different from the usual mathematical definition (of an orbit which is dense in an invariant torus). However, we will show that the two notions are closely related.
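Numerically, quasi-periodicity in the game-theoretic sense is straightforward to test for: one looks for a period after which the tail of the computed itinerary repeats. The following small heuristic (the cut-off parameters are arbitrary choices) does this:

```python
def detect_periodic_tail(itinerary, max_period=100, min_repeats=3):
    """Heuristic check for quasi-periodicity in the game-theoretic sense:
    look for a period P such that the tail of the itinerary repeats with
    period P at least `min_repeats` times; return the smallest such P."""
    for P in range(1, max_period + 1):
        tail = itinerary[-P * min_repeats:]
        if len(tail) == P * min_repeats and all(
                tail[k] == tail[k % P] for k in range(len(tail))):
            return P
    return None
```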
We consider the Hamiltonian dynamics on $S^3$ corresponding to a zero-sum game and its first return maps to the (two-dimensional) planes on which either $p(t)$ or $q(t)$ changes direction, i.e. where one of the ${{\mathcal B \! \mathcal R \!}}$-correspondences is multivalued.
Let $S$ be such a plane, let $x \in S$ have a quasi-periodic orbit with (infinite) periodic itinerary $I = I(x)$, and let $\hat T$ be the first return map to $S$ (defined on the non-empty subset of $S$ of points whose orbits return to $S$). Note that $\hat T$ acts as a shift by a finite number of symbols on the itinerary of $x$. In particular there exists $n \geq 1$, such that $\hat{T}^n(x)$ has the same itinerary as $x$. Let us denote $T = \hat{T}^n$ so that each point in the $T$-orbit of $x$ has the same periodic itinerary $I$.
Let $U = \left\{ z \in S \colon I(z) = I(x) = I \right\}$ be the set of points in $S$ with itinerary $I$. Then $U$ is convex and $T(U) = U$. Moreover the restriction of $T$ to $U$, $T \colon U \rightarrow U$, is affine.
Convexity follows from the convexity of the surfaces on which $p(t)$ or $q(t)$ change direction and the fact that the flow in each region $R_{ij}$ follows the ’rays’ of a central projection. By the definition of $T$ we have that $T(U) \subseteq U$. We know ([@VanStrien2009a]) that $T$ is area-preserving and piecewise affine on $S$, so that $T(U) = U$. Since all points in $U$ have the same itinerary, it follows that $T \colon U \rightarrow U$ is indeed affine.
Let $x\in S$ correspond to a quasi-periodic orbit (in the game theoretical sense) of the Hamiltonian dynamics of a $3\times 3$ zero-sum bimatrix game, where $S$ is an indifference plane and $T$ is the return map to $S$, such that $I(T(x)) = I(x)$. Then one of the following holds:
1. The orbit of $x$ is periodic and $T^n(x) = x$ for some $n \geq 1$.
2. The $T$-orbit of $x$ lies on a $T$-invariant circle (more generally, depending on coordinates: an ellipse) and $x$ corresponds to a quasi-periodic orbit of the Hamiltonian dynamics (in the usual sense), i.e. its orbit under the flow is dense in an invariant torus.
In the second case $U = \left\{ z \in S \colon I(z) = I(x) \right\}$ is a disk and $T\colon U \rightarrow U$ is a rotation by an irrational angle (in suitable linear coordinates). Therefore in this case [(2)]{.nodecor} holds for *every* $z\in U$.
Assume that $x$ is not periodic. We already know that $T \colon U \rightarrow U$ is a planar affine transformation of $U$. Since it is also an isometry and $T(U) = U$, the only possibility is that $T$ restricted to $U$ is a rotation by an irrational angle (any other kind of planar affine transformation satisfying these conditions would have $x$ as a periodic point). The result immediately follows.
The theorem shows that every quasi-periodic orbit (in the game-theoretic sense) is actually quasi-periodic in the usual sense. Conversely, every quasi-periodic orbit in the usual sense which lies on a torus that intersects the indifference surfaces only along whole circles (and never just partially along an arc) is clearly quasi-periodic in the game-theoretic sense. Throughout the rest of this paper, we will always refer to the game-theoretic definition when using the notion of quasi-periodicity.
From the argument above we can also immediately conclude:
If a Hamiltonian system induced by a $3 \times 3$ zero-sum bimatrix game has an orbit with periodic itinerary, then it also has an actual periodic orbit.
In the second part of this paper we investigate the Hamiltonian dynamics induced by the BR dynamics numerically. We explore which types of orbit occur in these systems and how the combinatorial description given in the first part relates to these observations.
Numerical Observations
======================
In this section we present some of our observations on the behaviour of BR dynamics for zero-sum games, mostly obtained from numerical experiments. The aspects we investigate are:
- The time fraction that different orbits spend in each of the regions $R_{ij}$.
- The frequencies with which different orbits visit the regions $R_{ij}$ and the transition probabilities for transitions between regions.
- The different types of orbits that can occur and their itineraries (periodic, quasi-periodic, space-filling).
The systems we consider are randomly generated examples of zero-sum games of different combinatorial types, for which we look at the induced Hamiltonian dynamics on level sets of $H$. For randomly chosen initial points we compute the orbits of the BR dynamics (more precisely, of its induced Hamiltonian analogue) and study the time fractions spent in each region $R_{ij}$ and the frequencies with which the orbits visit the regions. In particular, with respect to the presented types of orbit we do not claim to give an exhaustive account of all occurring types, but rather a list of examples illustrating a few key concepts.
Formally, for an orbit of the BR dynamics $(p(t),q(t)),~t > 0$ with itinerary $(i_0,j_0) \rightarrow (i_1,j_1) \rightarrow \ldots \rightarrow (i_k,j_k) \rightarrow \ldots$ and switching times $(t_n)$ we define $$P^{BR}_{ij}(n) = \frac{1}{t_n} \int_0^{t_n} \chi_{ij}(p(s),q(s)) ds~,$$ where $\chi_{ij}$ is the characteristic function of the region $R_{ij}$.
Alternatively we record the number of times that each region is visited by an orbit and compute the frequencies: $$Q_{ij}(n) = \frac{1}{n} \sum_{k=0}^{n-1} I_{ij}(i_k,j_k)~,~\text{ where }
I_{ij}(i_k,j_k)=\begin{cases}
1, & \text{if } (i_k,j_k) = (i,j)\\
0, & \text{otherwise.} \end{cases}$$ We write $P^{BR} = \left( P^{BR}_{ij} \right)_{i,j}$ and $Q = \left( Q_{ij} \right)_{i,j}$.
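Given an itinerary together with the sojourn times between switches (for instance as produced by an event-driven integration of (\[eq:br\_dynamics\])), both statistics are plain averages, as in the following minimal sketch (1-based region labels are assumed):

```python
import numpy as np

def empirical_frequencies(itinerary, durations, n=3):
    """P^BR: fraction of time spent in each region R_{ij};
    Q: fraction of visits to each region.  `itinerary` is a list of 1-based
    labels (i, j); `durations` are the sojourn times between switches and
    may be one element shorter than the itinerary (last region truncated)."""
    P = np.zeros((n, n))
    Q = np.zeros((n, n))
    for (i, j), dt in zip(itinerary, durations):
        P[i - 1, j - 1] += dt
    for (i, j) in itinerary:
        Q[i - 1, j - 1] += 1
    return P / P.sum(), Q / len(itinerary)
```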
Moreover throughout the following examples we look at orbits of the first return maps for the BR dynamics to certain surfaces of section. A convenient choice of such surface is a hyperplane on which either $p(t)$ or $q(t)$ changes direction, i.e. where one of the ${{\mathcal B \! \mathcal R \!}}$-correspondences is multivalued. We will mostly use the surfaces where $q(t)$ changes direction: $$S_{ij} = \left\{ (p,q) \in \Sigma : \{i,j\} \subset {{\mathcal B \! \mathcal R \!}}_B(p) \right\}.$$
Let us now look at some examples:
\[ex:erg\]
Let the zero-sum bimatrix game $(A,B)$ be given by $$A = \begin{pmatrix}
22 & 34 & -4 \\
7 & -32 & 16 \\
-53 & 96 & 23
\end{pmatrix}~,~ B = -A~.$$
We numerically calculate orbits with itineraries of $10^4$ transitions for several hundred randomly chosen initial conditions. For all of these orbits, the evolution of $P^{BR}(n)$ and $Q(n)$ indicates convergence to $$P^{BR} \approx 10^{-2} \times \begin{pmatrix}
13 & 5 & 27 \\
14 & 5 & 27 \\
3 & 1 & 5
\end{pmatrix}, \\
Q \approx 10^{-2} \times \begin{pmatrix}
12 & 9 & 19 \\
9 & 13 & 15 \\
10 & 5 & 8
\end{pmatrix}.$$
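A possible driver for such an experiment, assuming the helper functions `br_itinerary` and `empirical_frequencies` from the earlier sketches are in scope, is the following; the printed values depend on the initial condition and run length and are only meant to be compared qualitatively with the matrices above:

```python
import numpy as np

# bimatrix of this example; the driver itself is only an illustration
A = np.array([[22, 34, -4], [7, -32, 16], [-53, 96, 23]], dtype=float)
B = -A
rng = np.random.default_rng(0)
p, q = rng.dirichlet(np.ones(3)), rng.dirichlet(np.ones(3))
itinerary, durations = br_itinerary(A, B, p, q, n_switches=10_000)
P, Q = empirical_frequencies(itinerary, durations)
print("P_BR ~\n", np.round(P, 2))
print("Q    ~\n", np.round(Q, 2))
```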
In Fig.\[fig:ex1\_evolution\], the evolution of some of the $P^{BR}_{ij}(n)$ and $Q_{ij}(n)$ along an orbit is shown.
This or very similar statistical behaviour is observed for all sampled initial conditions. It seems to suggest that initial conditions with quasi-periodic orbits have zero or very small Lebesgue measure in the phase space of BR dynamics for this bimatrix game, as quasi-periodicity in all our experiments leads to very rapid convergence to certain frequencies. Most of the space seems to be filled with orbits that statistically resemble each other in the sense that they all visit certain portions of the space (the regions $R_{ij}$) with asymptotically equal (or very close) frequencies. The same seems to hold for the fraction of time spent in each region by the orbits.
Fig.\[fig:ex1\_section\] shows the intersections of an orbit of (\[eq:br\_dynamics\]) for this game with $S_{12}$, $S_{23}$ and $S_{31}$ (the hypersurfaces where $q(t)$ changes direction), i.e. the orbit of the first return map to these surfaces. Each $S_{ii'}$ consists of three triangular pieces, corresponding to the three pieces of hypersurface between regions $(i,j)$ and $(i',j)$ for $j=1,2,3$. Inside each of these triangles, the orbit seems to rather uniformly fill the space, suggesting ergodicity (of Lebesgue measure). If the BR dynamics had invariant tori, these would appear on all or some of these sections as invariant circles whose interior cannot be entered by orbits starting outside. Judging from the above observations, in this example they either don’t exist or have very small radius.
\[ex:qp\]
In this example we consider a bimatrix game, which is an element in a family of bimatrix games thoroughly studied in [@Sparrow2007] and [@VanStrien2009b]. Let the zero-sum bimatrix game $(A,B)$ be given by $$A = \begin{pmatrix}
1 & 0 & \sigma \\
\sigma & 1 & 0 \\
0 & \sigma & 1
\end{pmatrix}~,~ B = -A~,$$ where $\sigma = \frac{\sqrt{5}-1}{2} \approx 0.618$ is the golden mean.
Two types of orbit can be (numerically) observed for the BR dynamics of this game. The first type resembles the orbits in the previous example. The empirical frequencies $P^{BR}_{ij}(n)$ and $Q_{ij}(n)$ along such orbits initially behave erratically but seem to suggest convergence to certain values or narrow ranges of values, which are the same for all such orbits:
$$P^{BR} \approx 10^{-2} \times \begin{pmatrix}
9 & 11 & 13 \\
13 & 9 & 11 \\
11 & 13 & 9
\end{pmatrix}, \\
Q \approx 10^{-2} \times \begin{pmatrix}
11 & 13 & 9 \\
9 & 11 & 13 \\
13 & 9 & 11
\end{pmatrix}.$$
It can be observed that the values of $Q(n)$ are perhaps less erratic and in most of our experiments they seem to converge faster than those of $P^{BR}(n)$. As an example, the evolution of $P^{BR}_{32}(n)$ and $Q_{32}(n)$ along a typical orbit can be seen in Fig.\[fig:ex2\_p32\_erg\].
As in the previous example, in Fig.\[fig:ex2\_section\] we show the intersection of one such orbit with the surfaces $S_{ii'}$. Once again the orbit points have a certain seemingly uniform density inside each region, but here they leave out an elliptical region on each of the hypersurfaces. This invariant region consists of invariant circles, formed by quasi-periodic orbits of the system (the second type of observed orbits). The center of the circles corresponds to an actual periodic orbit, i.e. an elliptic fixed point of the return map to one of these surfaces. See [@VanStrien2009b] for an explicit analytic investigation of this (which is made possible by the high symmetry of this particular bimatrix game).
The invariant circles in the elliptical region correspond to invariant tori in the BR dynamics. Their itinerary is periodic with period 6: $$(1,1) \rightarrow (1,2) \rightarrow (2,2) \rightarrow (2,3) \rightarrow (3,3) \rightarrow (3,1) \rightarrow (1,1) \rightarrow \ldots.$$
Fig.\[fig:type6\_orbits\](a) shows the transition diagram for this bimatrix game. The periodic itinerary is indicated by a dashed line as a loop in the transition diagram. The empirical frequencies along such quasi-periodic orbits converge to $$P^{BR} = Q = \begin{pmatrix}
\frac{1}{6} & \frac{1}{6} & 0 \\
0 & \frac{1}{6} & \frac{1}{6} \\
\frac{1}{6} & 0 & \frac{1}{6}
\end{pmatrix}.$$
Fig.\[fig:ex2\_section\] suggests that there are no other invariant tori for this system, i.e. no open set of initial conditions outside of the visible elliptical regions, whose orbits are all quasi-periodic.
A question that arises naturally from the above example is the following: does an invariant torus of quasi-periodic orbits always have a ’simple’ periodic itinerary as the above? Are the periods of such elliptic islands necessarily equal to 6? As the next example shows, the situation can indeed be more complicated and less simple paths through the transition diagram are possible candidates for the periodic itinerary of quasi-periodic orbits in an invariant torus.
\[ex:qp\_loop\]
Consider the bimatrix game $(A,B)$ with $$A = \begin{pmatrix}
84 & -37 & 10 \\
24 & 33 & -14 \\
-26 & 9 & 20
\end{pmatrix}~,~ B = -A~.$$
Generally, the observations here coincide with Example \[ex:erg\]. However, one can detect a (quite thin) invariant torus. Fig.\[fig:erg\_qp\_orbits\] shows a typical orbit stochastically filling most of the space. In the bottom row of the same figure, the regions marked by rectangles are enlarged to reveal a thin invariant torus. These quasi-periodic orbits intersect $S_{12}$ once, $S_{23}$ and $S_{31}$ three times each. The orbits look essentially like the quasi-periodic orbits in the previous example, but with an extra loop added. The itineraries are periodic with period 13, where each period is of the form $$\begin{array}{llll}
(1,1)\rightarrow (1,2) & &\rightarrow (2,2)\rightarrow (2,3)\rightarrow (3,3)\rightarrow (3,1)\rightarrow \\
(1,1)\rightarrow (1,2) & \!\!\!\! \rightarrow (3,2) \!\!\! & \rightarrow (2,2)\rightarrow (2,3)\rightarrow (3,3)\rightarrow (3,1)\rightarrow (1,1).
\end{array}$$ In Fig.\[fig:type6\_orbits\](b), this itinerary is shown as a loop in the transition diagram. The example demonstrates that combinatorially more complicated quasi-periodic orbits are possible for open sets of initial conditions in the BR dynamics of zero-sum games.
The next example shows an even more complex quasi-periodic structure and gives numerical evidence for more subtle and involved effects than those observed above:
\[ex:arnold\_diff\]
Let us now consider the bimatrix game $(A,B)$ with $$A = \begin{pmatrix}
-92 & 18 & 52 \\
62 & -37 & -33 \\
-10 & 9 & -18\end{pmatrix}~,~ B = -A~.$$
As in all the previous examples, the largest part of the phase space of the BR dynamics seems to be filled with orbits which stochastically fill most of the space and along which the frequency distributions $P^{BR}(n)$ and $Q(n)$ seem to converge to certain (orbit-independent) values. Again, an invariant torus can be found. It is more complicated than those observed in the other examples (see Fig.\[fig:type12\_qp\_orbit\]). The orbits forming this invariant torus are quasi-periodic and have an itinerary of period 60. Its structure suggests a generalisation of the type of itinerary observed in Example \[ex:qp\_loop\]. It consists of a sequence of blocks of the following two forms: $$\begin{aligned}
a &= \left( (1,1) \rightarrow (3,1) \rightarrow (2,1) \rightarrow (2,2) \rightarrow (3,2) \rightarrow (3,3) \rightarrow (1,3) \rightarrow (1,1) \right) \text { and } \\
b &= \left( (1,1) \rightarrow (3,1) \rightarrow (2,1) \rightarrow (2,2) \rightarrow (3,2) \rightarrow (3,3) \rightarrow (1,3) \rightarrow (1,2) \rightarrow (1,1) \right).\end{aligned}$$
The two blocks are shown as paths in the transition diagram in Fig.\[fig:type12\_blocktypes\]. Each period of the itinerary of orbits in the invariant torus then has the form $$a \rightarrow a \rightarrow b \rightarrow a \rightarrow b \rightarrow b \rightarrow a \rightarrow b.$$ As in the previous example the two blocks are the same except for one element (in the previous example the itinerary consisted of two blocks concatenated alternately).
In Fig.\[fig:type12\_erg\_orbit\] the intersection of an orbit outside of the invariant torus with one of the hypersurfaces is shown together with a quasi-periodic orbit. The orbit seems to have essentially the same property of filling the space outside the invariant torus, as in the previous examples. However, a closer look at a neighbourhood of the invariant circles (see the right part of Fig.\[fig:type12\_erg\_orbit\]) reveals that the orbit avoids not only the invariant circles but also a certain ’heart-shaped’ region surrounding them. The investigation of orbits with initial conditions in this set shows a range of effects not observed in any of the previous examples.
Several different orbits with initial conditions in this region can be seen in Fig.\[fig:type12\_orbit\_examples\]. The orbit points show complicated structures, revealing a large number of ’stochastic’ regions as well as invariant regions of periodic orbits of high periods and corresponding quasi-periodic orbits (Fig.\[fig:type12\_per\_orbits\] shows some examples of such quasi-periodic orbits of different higher periods). Some of these orbits spend very long times (itineraries of length $10^6$ and more) in the heart-shaped region before diffusing into the much larger ’stochastic’ rest of the space. On the other hand we observe orbits that stochastically fill (heart-shaped) annuli leaving out islands of quasi-periodic orbits. These annuli seem to be invariant for the dynamics (see Fig.\[fig:type12\_restricted\_orbit\]).
Altogether the observations described above strongly indicate the occurrence of Arnol’d diffusion: the coexistence of a family of invariant annuli, which contain regions of stochastic (space-filling) motion and islands of further periodic orbits and invariant circles (quasi-periodic orbits).
Conclusion and Discussion
=========================
We would like to propose some open questions for further investigation.
1. Does the Hamiltonian system induced by a $3\times 3$ zero-sum bimatrix game always have quasi-periodic orbits / invariant tori? Example \[ex:erg\] suggests that it is possible to have topological mixing and that the Lebesgue measure is ergodic. However, this might be due to the limited resolution of our numerical simulations and images.
2. Are there orbits which are dense outside of the elliptic regions? Are almost all orbits outside of the elliptic regions dense?
3. Example \[ex:arnold\_diff\] suggests that the system has infinitely many elliptic islands corresponding to quasi-periodic orbits of different periods. The pictures of orbits (e.g. Fig.\[fig:type12\_orbit\_examples\] and Fig. \[fig:type12\_restricted\_orbit\]) show many regions that could potentially contain such elliptic islands of quasi-periodic orbits of different periods. All regions that we investigated for this property actually revealed quasi-periodic orbits.
4. Given a specific bimatrix example, are there a finite number of blocks, so that the itinerary of any orbit on an elliptic island is periodic with each period being a (finite) concatenation of these blocks? The examples we looked at suggest the answer to be positive.
In Part 1 of this paper we assigned to a Hamiltonian system a transition diagram, giving a necessary condition on the itinerary of orbits. If we were able to develop some kind of ’admissibility condition’ (i.e. a sufficient condition), we could perhaps obtain results such as ’density of periodic orbits’, in the same way as was done for quadratic maps of the interval.
In Part 2 we demonstrated that this class of dynamics is sufficiently rich to mimic many of the intricacies of smooth Hamiltonian systems. In spite of the many open questions that still remain, it appears that these piecewise affine Hamiltonian systems could provide a new way of gaining insight into global dynamics of Hamiltonian systems.
The 23 transition diagram types {#ap:types}
===============================
We list all 23 transition diagram types as in Theorem \[thm:main\] sorted by the number of short loops contained in them, together with the respective matrices $A$, such that the bimatrix games $(A,-A)$ realise the given diagram types.
[^1]: The authors would like to thank Abed Bounemoura and Vassili Gelfreich for several valuable comments.
[^2]: Differential equations with discontinuities along a hyperplane are often called ’Filippov systems’, and there is a large literature on such systems, see for example [@MR2368310], [@MR1789550] and [@MR2103797]. The special feature of the systems we consider here is that they have discontinuities along $n\cdot(n-1)$ intersecting hyperplanes in $\Sigma \times \Sigma$.
[^3]: A transition diagram shows all allowable transitions within orbits (but not all allowable transitions correspond to actual orbits).
[^4]: In a game theoretic context the matrices are *payoff matrices* of two players A and B, where each player has $n$ pure strategies, i.e. rows or columns, to choose from. $\Sigma_A$ and $\Sigma_B$ are the spaces of *mixed strategies*, i.e. probability distributions over the $n$ *pure strategies* that are represented as the standard unit vectors. ${{\mathcal B \! \mathcal R \!}}_A$ and ${{\mathcal B \! \mathcal R \!}}_B$ are the two players’ respective *best-response correspondences*, assigning a payoff-maximising pure strategy answer to the strategy played by the opponent.
[^5]: In game theoretic terms this means that none of the players has any ’strictly dominated pure strategy’. The existence of such strategy would mean that the dynamics essentially reduces to the FP/BR dynamics of a $2 \times 3$ game, which under mild genericity conditions is known to converge to the set of Nash Equilibria in a rather simple way, see [@Berger2005].
[^6]: Coincidentally (or not?) the number 23 is the most sacred number for the religion called ’Discordianism’. In this religion 23 is the number of the highest deity, Eris, who is the Greek goddess of Chaos.
[^7]: It can be checked that a short loop precisely corresponds to those $2\times 2$ subgames, which are linearly equivalent to a zero-sum game.
---
abstract: 'We consider a $d$-dimensional branching particle system in a random environment. Suppose that the initial measures converge weakly to a measure with bounded density. Under the Mytnik-Sturm branching mechanism, we prove that the corresponding empirical measure $X_t^n$ converges weakly in the Skorohod space $D([0,T];M_F({\mathbb{R}}^d))$ and the limit has a density $u_t(x)$, where $M_F({\mathbb{R}}^d)$ is the space of finite measures on ${\mathbb{R}}^d$. We also derive a stochastic partial differential equation $u_t(x)$ satisfies. By using the techniques of Malliavin calculus, we prove that $u_t(x)$ is jointly Hölder continuous in time with exponent $\frac{1}{2}-\epsilon$ and in space with exponent $1-\epsilon$ for any $\epsilon>0$.'
author:
- 'Yaozhong Hu [^1]'
- 'David Nualart [^2]'
- 'Panqiu Xia [^3]'
title: Hölder continuity of the solutions to a class of SPDEs arising from multidimensional superprocesses in random environment
---
**Keywords.** Superprocesses, random environment, stochastic partial differential equations, Malliavin calculus, Hölder continuity.
Introduction
============
Consider a $d$-dimensional branching particle system in a random environment. For any integer $n\geq 1$, the branching events happen at time $\frac{k}{n}$, $k=1,2,\dots$. The dynamics of each particle, labelled by a multi-index $\alpha$, is described by the stochastic differential equation (SDE): $$\begin{aligned}
\label{pmer}
dx^{\alpha,n}_t=dB^{\alpha}_t+\int_{\mathbb{R}^d} h(y-x^{\alpha,n}_t)W(dt,dy),\end{aligned}$$ where $h$ is a $d\times d$ matrix-valued function on $\mathbb{R}^d$, whose entries $h^{ij} \in L^2 (\mathbb{R}^d)$, $B^{\alpha}$ are $d$-dimensional independent Brownian motions, and $W$ is a $d$-dimensional space-time white Gaussian random field on ${\mathbb{R}}_+\times {\mathbb{R}}^d$ independent of the family $\{B^{\alpha}\}$. The random field $W$ can be regarded as the random environment for the particle system.
At any branching time each particle dies and it randomly generates offspring. The new particles are born at the death position of their parents, and inherit the branching-dynamics mechanism. The branching mechanism we use in this paper follows the one introduced by Mytnik [@ap-96-mytnik], and studied further by Sturm [@ejp-03-sturm]. Let $X^n=\{X^n_t, t\geq 0\}$ denote the empirical measure of the particle system. One of the main results of this work is to prove that the empirical measure-valued processes converge weakly to a process $X=\{X_t, t\geq 0\}$, such that for almost every $t\geq 0$, $X_t$ has a density $u_t(x)$ almost surely. By using the techniques of Malliavin calculus, we also establish the almost surely joint Hölder continuity of $u$ with exponent $\frac{1}{2}-\epsilon$ in time and $1-\epsilon$ in space for any $\epsilon>0$.
To compare our results with the classical ones, let us briefly recall some existing work in the literature. The one-dimensional model was initially introduced and studied by Wang ([@ptrf-97-wang; @saa-98-wang]). In these papers, he proved that, under the classical Dawson-Watanabe branching mechanism, the empirical measure $X^n$ converges weakly to a process $X=\{X_t, t\geq 0\}$, which is the unique solution to a martingale problem.
For the above one-dimensional model, Dawson et al. [@aihpps-00-dawson-vaillancourt-wang] proved that for almost every $t>0$, the limiting measure-valued process $X$ has a density $u_t(x)$ a.s. and $u$ is the weak solution to the following stochastic partial differential equation (SPDE): $$\begin{aligned}
\label{ldf1}
u_t(x)=&\mu(x)+\int_0^t \frac{1}{2}(1+\|h\|_2^2)\Delta u_s(x)ds-\int_0^t\int_{\mathbb{R}}\nabla_x[h(y-x)u_s(x)]W(ds,dy)\nonumber\\
&+\int_0^t\sqrt{u_s(x)}\frac{V(ds,dx)}{dx},\end{aligned}$$ where $\|h\|_2$ is the $L^2$-norm of $h$, and $V$ is a space-time white Gaussian random field on $\mathbb{R}_+\times \mathbb{R}$ independent of $W$.
Suppose further that $h$ is in the Sobolev space $H_2^2(\mathbb{R})$ and the initial measure has a density $\mu\in H_2^1({\mathbb{R}})$. Then Li et al. [@ptrf-12-li-wang-xiong-zhou] proved that $u_t(x)$ is almost surely jointly Hölder continuous. By using the techniques of Malliavin calculus, Hu et al. [@ptrf-13-hu-lu-nualart] improved their result to obtain the sharp Hölder continuity: they proved that the Hölder exponents are $\frac{1}{4}-\epsilon$ in time and $\frac{1}{2}-\epsilon$ in space, for any $\epsilon>0$.
Our paper is concerned with the higher dimensional case ($d>1$). In this case, however, the super Brownian motion (a special case when $h=0$) does not have a density (see e.g. Corollary 2.4 of Dawson and Hochberg [@ap-79-dawson-hochberg]). Thus, in the higher dimensional case, we have to abandon the classical Dawson-Watanabe branching mechanism and adopt the Mytnik-Sturm one. As a consequence, the difficult term $\sqrt{u_s(x)}$ in the SPDE becomes $u_s(x)$ (see equation in Section \[s.3\] for the exact form of the equation). In Section \[s.2\] we shall briefly describe the branching mechanism used in this paper.
In Section \[s.3\] we state the main results obtained in this paper. These include three theorems. The first one (Theorem \[unique\]) concerns the existence and uniqueness of the solution to a (linear) stochastic partial differential equation (equation ), which is proved (Theorem \[tmpd\]) to be satisfied by the density of the limiting empirical measure process $X^n$ of the particle system (see ). The core result of this paper is Theorem \[tjhc\], which gives the sharp Hölder continuity of the solution $u_t(x)$ to .
Section 4 presents the proofs of Theorems \[unique\] and \[tmpd\]. The proof of Theorem \[tjhc\] is the objective of the remaining sections. First, in Section 5, we focus on the one-particle motion with no branching. By using techniques from Malliavin calculus, we obtain a Gaussian-type estimate for the transition probability density of the particle motion conditional on $W$. This estimate plays a crucial role in the proof of Theorem \[tjhc\]. In Section 6, we derive a conditional convolution representation of the weak solution to the SPDE (\[dqmvp\]), which is used to establish the Hölder continuity. In Section 7, we show that the solution $u$ to is Hölder continuous.
Branching particle system {#s.2}
=========================
In this section, we briefly construct the branching particle system. For further study of this branching mechanism, we refer the readers to Mytnik’s and Sturm’s papers (see [@ap-96-mytnik; @ejp-03-sturm]).
We start this section by introducing some notation. For any integer $k\geq 0$, we denote by $C_b^k({\mathbb{R}}^d)$ the space of $k$ times continuously differentiable functions on ${\mathbb{R}}^d$ which are bounded together with their derivatives up to the order $k$. Also, $H_2^k({\mathbb{R}}^d)$ is the Sobolev space of square integrable functions on ${\mathbb{R}}^d$ which have square integrable derivatives up to the order $k$. For any differentiable function $\phi$ on ${\mathbb{R}}^d$, we make use of the notation $\partial_{i_1\cdots i_m} \phi(x)=\frac{\partial^m}{\partial x_{i_1} \cdots \partial x_{i_m}}\phi(x)$.
We write $M_F(\mathbb{R}^d)$ for the space of finite measures on $\mathbb{R}^d$. We denote by $D([0,T], M_F(\mathbb{R}^d))$ the Skorohod space of càdlàg functions on time interval $[0,T]$, taking values in $M_F(\mathbb{R}^d)$, and equipped with the weak topology. For any $\phi\in C_b({\mathbb{R}}^d)$ and $\mu\in M_F({\mathbb{R}}^d)$, we write $$\begin{aligned}
\label{itg}
\langle \mu, \phi \rangle=\mu(\phi):=\int_{{\mathbb{R}}^d}\phi(x)\mu(dx).\end{aligned}$$
Let $\mathcal{I}:=\{\alpha=(\alpha_0,\alpha_1,\dots,\alpha_N), \alpha_0\in\{1,2,3\dots\}, \alpha_i\in\{1,2\},\ \textrm{for}\ 1\leq i\leq N\}$ be a set of multi-indexes. In our model $\mathcal{I}$ is the index set of all possible particles. In other words, initially there are a finite number of particles and each particle generates at most $2$ offspring. For any particle $\alpha=(\alpha_0,\alpha_1,\dots, \alpha_N)\in\mathcal{I}$, let $\alpha -1=(\alpha_0\dots, \alpha_{N-1}), \alpha -2=(\alpha_0,\dots, \alpha_{N-2}), \dots, \alpha -N=(\alpha_0)$ be the ancestors of $\alpha$. Then, $|\alpha|=N$ is the number of the ancestors of the particle $\alpha$. It is easy to see that the ancestors of any particle $\alpha$ are uniquely determined.
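As an informal illustration (not part of the construction), the ancestor line of a particle can be computed by successively dropping the last entry of its multi-index; the following Python fragment, using a hypothetical index, is a minimal sketch of this bookkeeping.
\begin{verbatim}
# Minimal sketch (hypothetical index): the ancestors of a multi-index
# alpha = (alpha_0, alpha_1, ..., alpha_N) are obtained by dropping the
# last entries one at a time: alpha - 1, alpha - 2, ..., alpha - N.
alpha = (7, 1, 2, 2, 1)                                   # alpha_0 = 7, N = 4
ancestors = [alpha[:len(alpha) - k] for k in range(1, len(alpha))]
# ancestors == [(7, 1, 2, 2), (7, 1, 2), (7, 1), (7,)]
assert len(ancestors) == len(alpha) - 1                   # |alpha| = N ancestors
\end{verbatim}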
Fix a time interval $[0,T]$. Let $(\Omega, \mathcal{F}, P)$ be a complete probability space, on which $\{B^{\alpha}_t, t\in[0,T]\}_{\alpha\in \mathcal{I}}$ are independent $d$-dimensional standard Brownian motions, and $W$ is a $d$-dimensional space-time white Gaussian random field on $[0,T]\times \mathbb{R}^d$ independent of the family $\{B^{\alpha}\}$.
Let $x_t=x(x_0,B^{\alpha},r,t)$, $t\in[0,T]$, be the unique solution to the following SDE: $$\begin{aligned}
\label{pmern}
x_t=x_0+B^{\alpha}_t-B^{\alpha}_r+\int_r^t\int_{\mathbb{R}^d} h(y-x_s)W(ds,dy),\end{aligned}$$ where $x_0\in\mathbb{R}^d$, and $h$ is a $d\times d$ matrix-valued function, with entries $h^{ij}\in H_2^3(\mathbb{R}^d)$. We denote by $H^3_2({\mathbb{R}}^d;{\mathbb{R}}^d\otimes {\mathbb{R}}^d)$ the space of such functions $h$, and equip it with the Sobolev norm: $$\begin{aligned}
\|h\|_{3,2}^2:=\sum_{i,j=1}^d\|h^{ij}\|_{3,2}^2.\end{aligned}$$ Let $\rho:{\mathbb{R}}^d\to{\mathbb{R}}^d\otimes{\mathbb{R}}^d$ be given by $$\begin{aligned}
\label{rho}
\rho(x)=\int_{\mathbb{R}^d}h(z-x)h^*(z)dz.\end{aligned}$$ Then, for any $1\leq i,j\leq d$, and $x\in{\mathbb{R}}^d$, by Cauchy-Schwarz’s inequality, we have $$\begin{aligned}
|\rho^{ij}(x)|\leq \sum_{k=1}^d\|h^{ik}\|_2\|h^{kj}\|_2.\end{aligned}$$ We denote by $\|\cdot\|_2$ the Hilbert-Schmidt norm for matrices. Then, by Cauchy-Schwarz’s inequality again, we have $$\begin{aligned}
\|\rho\|_{\infty}:=&\sup_{x\in {\mathbb{R}}^d}\|\rho(x)\|_2=\sup_{x\in{\mathbb{R}}^d}\Big(\sum_{i,j=1}^d|\rho^{ij}(x)|^2\Big)^{\frac{1}{2}}\\
\leq &\Big(\sum_{i,j=1}^d\Big|\sum_{k=1}^d\|h^{ik}\|_2\|h^{kj}\|_2\Big|^2\Big)^{\frac{1}{2}}\\
\leq& \Big(\sum_{i,k=1}^d\|h^{ik}\|_2^2\sum_{j,k=1}^d\|h^{kj}\|_2^2\Big)^{\frac{1}{2}}\leq \|h\|_2^2\le \|h\|_{3,2}^2.\end{aligned}$$ Denote by $A$ the infinitesimal generator of $x_t$. That is, $A$ is a differential operator on $C_b^2(\mathbb{R}^d)$, with values in $C_b({\mathbb{R}}^d)$, given by $$\begin{aligned}
\label{fpgt}
A\phi(x)=\frac{1}{2}\sum_{i,j=1}^d \left(\rho^{ij}(0)\partial_{ij}\phi(x)\right)+\frac{1}{2}\Delta \phi(x),\ x\in {\mathbb{R}}^d.\end{aligned}$$
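The quantities $\rho$ and $\|h\|_2$ above can be approximated numerically. The following Python sketch (an illustration only, assuming $d=2$ and a hypothetical Gaussian-shaped $h$ that is not taken from this paper) evaluates $\rho(x)=\int_{\mathbb{R}^d}h(z-x)h^*(z)dz$ by quadrature and checks the bound $\|\rho\|_{\infty}\leq\|h\|_2^2$ for that example.
\begin{verbatim}
import numpy as np

# Hypothetical kernel h : R^2 -> R^{2x2} (not the h of the paper), chosen
# Gaussian-shaped so that every entry h^{ij} is in L^2(R^2).
d = 2
def h(z):
    return np.exp(-0.5 * np.sum(z ** 2)) * np.eye(d)

# quadrature grid for rho(x) = \int h(z - x) h(z)^T dz (h is real, so h^* = h^T)
grid = np.linspace(-6.0, 6.0, 121)
dz = (grid[1] - grid[0]) ** d
Z = np.array([[z1, z2] for z1 in grid for z2 in grid])

def rho(x):
    return sum(h(z - x) @ h(z).T for z in Z) * dz

# ||h||_2^2 = sum_{i,j} ||h^{ij}||_{L^2}^2, approximated on the same grid
h_l2_sq = sum(np.sum(h(z) ** 2) for z in Z) * dz
rho0 = rho(np.zeros(d))
print(np.linalg.norm(rho0), "<=", h_l2_sq)   # Hilbert-Schmidt norm of rho(0) vs ||h||_2^2
\end{verbatim}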
For any $t\in[0,T]$, let $t_n=\frac{{ \lfloor nt \rfloor}}{n}$ be the last branching time before $t$. For any $\alpha=(\alpha_0,\alpha_1,\dots, \alpha_N)$, if $nt_n={ \lfloor nt \rfloor}\leq N$, let $\alpha_t=(\alpha_0,\dots, \alpha_{{ \lfloor nt \rfloor}})$ be the ancestor of $\alpha$ at time $t$. Suppose that each particle, which starts from the death place of its parent, moves in ${\mathbb{R}}^d$ following the motion described by the SDE (\[pmern\]) during its lifetime. Then, the path of any particle $\alpha$ and all its ancestors, denoted by $x^{\alpha,n}_t$, is given by $$x_t^{\alpha,n}=x_t^{\alpha_t,n}=\begin{cases}
x\left(x^n_{\alpha_0}, B^{(\alpha_0)}, 0, t\right),\quad &0\leq t<\frac{1}{n},\\
x\big(x^{\alpha_t -1, n}_{t_n^-}, B^{\alpha_t}, t_n, t\big),\quad &\frac{1}{n}\leq t< \frac{N+1}{n},\\
\partial, & \mathrm{otherwise}.
\end{cases}$$ Here $x_{\alpha_0}^n\in{\mathbb{R}}^d$ is the initial position of particle $(\alpha_0)$, $x^{\alpha_t-1,n}_{t_n^-}:=\lim_{s\uparrow t_n}x^{\alpha_t-1,n}_s$, and $\partial$ denotes the “cemetery”-state, that can be understood as a point at infinity.
Let $\xi=\{\xi(x),x\in{\mathbb{R}}^d\}$ be a real-valued random field on ${\mathbb{R}}^d$ with covariance $$\begin{aligned}
\label{crlt}
{\mathbb{E}}\big(\xi(x)\xi(y)\big)=\kappa(x,y),\end{aligned}$$ for all $x,y\in{\mathbb{R}}^d$. Assume that $\xi$ satisfies the following conditions:
1. (i) $\xi$ is symmetric, that is $\mathbb{P}(\xi(x)> z)=\mathbb{P}(\xi(x)<-z)$ for all $x\in{\mathbb{R}}^d$ and $z\in{\mathbb{R}}$.
(ii) $\displaystyle\sup_{x\in {\mathbb{R}}^d}{\mathbb{E}}\big(|\xi(x)|^p\big)<\infty$ for some $p>2$.
(iii) $\kappa$ vanishes at infinity, that is $\displaystyle\lim_{|x|+|y|\to\infty}\kappa(x,y)=0$.
For any $n\geq 1$, the random field $\xi$ is used to define the offspring distribution after a scaling $\frac{1}{\sqrt{n}}$. In order to make the offspring distribution a probability measure, we introduce the truncation of the random field $\xi$, denoted by $\xi^n$, as follows: $$\xi^n(x)=
\begin{cases}
\sqrt{n},& \text{if}\ \xi(x)>\sqrt{n},\\
-\sqrt{n},& \text{if}\ \xi(x)<-\sqrt{n},\\
\xi(x),& \text{otherwise}.
\end{cases}$$ The correlation function of the truncated random field is then given by $$\kappa_n(x,y)={\mathbb{E}}\big(\xi^n(x)\xi^n(y)\big).$$
Let $(\xi^n_i)_{i\geq 0}$ be independent copies of $\xi^n$. Denote by $\xi_i^{n+}$ and $\xi_i^{n-}$ the positive and negative part of $\xi^n_i$ respectively. Let $N^{\alpha, n}\in\{0,1,2\}$ be the offspring number of the particle $\alpha$ at the branching time $\frac{|\alpha|+1}{n}$. Assume that $\{N^{\alpha,n}, |\alpha|=i\}$ are conditionally independent given $\xi^n_i$ and the position of $\alpha$ at its branching time, with a distribution given by $$\begin{aligned}
&P\Big(\left.N^{\alpha,n}=2\right|\xi^n_i, x^{\alpha,n}_{\frac{i+1}{n}^-}\Big)=\frac{1}{\sqrt{n}}\xi_i^{n+}\Big(x^{\alpha,n}_{\frac{i+1}{n}^-}\Big),\\
&P\Big(\left.N^{\alpha,n}=0\right|\xi^n_i, x^{\alpha,n}_{\frac{i+1}{n}^-}\Big)=\frac{1}{\sqrt{n}}\xi_i^{n-}\Big(x^{\alpha,n}_{\frac{i+1}{n}^-}\Big),\\
&P\Big(\left.N^{\alpha,n}=1\right|\xi^n_i, x^{\alpha,n}_{\frac{i+1}{n}^-}\Big)=1-\frac{1}{\sqrt{n}}|\xi_i^{n}|\Big(x^{\alpha,n}_{\frac{i+1}{n}^-}\Big).\end{aligned}$$
A particle $\alpha=(\alpha_0,\dots, \alpha_N)$ is said to be alive at time $t$, denoted by $\alpha\sim_n t$, if the following conditions are satisfied:
(i) There are exactly $N$ branching events before or at time $t$: ${ \lfloor nt \rfloor}=N$.
(ii) $\alpha$ has an unbroken line of ancestors: $\alpha_{N-i+1}\leq N^{\alpha-i,n}$, for all $i=1, 2, \dots, N$.
\[The introduction of $N^{\alpha, n}$ allows the particle $\alpha$ to produce one more generation, namely the new particle $(\alpha, N^{\alpha, n})$. However, $(\alpha, 0)$ is considered no longer alive and will not produce any further offspring.\] For any $n$, denote by $X^n=\{X^n_t, t\in[0,T]\}$ the empirical measure-valued process of the particle system. Then, $X^n$ is a discrete measure-valued process, given by $$\begin{aligned}
\label{napem}
X^n_t=\frac{1}{n}\sum_{\alpha\sim_n t}\delta_{x^{\alpha,n}_t},\end{aligned}$$ where $\delta_x$ is the Dirac measure at $x\in\mathbb{R}^d$, and the sum runs over all particles alive at time $t\in[0,T]$. Then, for any $\phi\in C^2_b({\mathbb{R}}^d)$, with the notation (\[itg\]), we have $$X^n_t(\phi)=\int_{{\mathbb{R}}^d}\phi(x)X^n_t(dx)=\frac{1}{n}\sum_{\alpha\sim_n t}\phi(x^{\alpha,n}_t).$$
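Purely as an informal illustration of the offspring law and of the empirical measure (\[napem\]) (it does not simulate the spatial dynamics (\[pmern\])), the following Python sketch performs one branching event for a hypothetical one-dimensional configuration: it truncates sampled environment values at $\pm\sqrt{n}$, draws the offspring numbers $N^{\alpha,n}\in\{0,1,2\}$ with the stated conditional probabilities, and evaluates $X^n_t(\phi)$ as the normalized sum over the resulting particles.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 100                                     # scaling parameter of the model
positions = rng.normal(size=20)             # hypothetical particle positions at a branching time

# hypothetical environment values xi at the particle positions, truncated at +- sqrt(n)
xi = rng.normal(scale=2.0, size=positions.shape)
xi_n = np.clip(xi, -np.sqrt(n), np.sqrt(n))

# offspring law: P(N=2) = xi_n^+ / sqrt(n), P(N=0) = xi_n^- / sqrt(n), P(N=1) = 1 - |xi_n| / sqrt(n)
p2 = np.maximum(xi_n, 0.0) / np.sqrt(n)
p0 = np.maximum(-xi_n, 0.0) / np.sqrt(n)
u = rng.uniform(size=positions.shape)
offspring = np.where(u < p2, 2, np.where(u < p2 + p0, 0, 1))

# children are born at the death position of their parent
children = np.repeat(positions, offspring)

# empirical measure paired with a test function: X^n(phi) = (1/n) * sum over alive particles
phi = lambda x: np.exp(-x ** 2)
print(len(children), phi(children).sum() / n)
\end{verbatim}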
Main results {#s.3}
============
Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\in[0,T]}, P)$ be a complete filtered probability space that satisfies the usual conditions. Suppose that $W$ is a $d$-dimensional space-time white Gaussian random field on $[0,T]\times {\mathbb{R}}^d$, and $V$ is a one-dimensional Gaussian random field on $[0,T]\times {\mathbb{R}}^d$, independent of $W$, which is white in time and colored in space with correlation $\kappa$ defined in (\[crlt\]): $$\begin{aligned}
{\mathbb{E}}\big(V(t,x)V(s,y)\big)=(t\wedge s)\kappa(x,y),\end{aligned}$$ for all $s,t\in[0,T]$ and $x,y\in{\mathbb{R}}^d$. Assume that $\{W(t,x), x\in {\mathbb{R}}^d\}$, $\{V(t,x), x\in{\mathbb{R}}^d\}$ are $\mathcal{F}_t$-measurable for all $t\in [0,T]$, and $\{W(t,x)-W(s,x), x\in {\mathbb{R}}^d\}$, $\{V(t,x)-V(s,x), x\in {\mathbb{R}}^d\}$ are independent of $\mathcal{F}_s$ for all $0\leq s< t\leq T$ .
Denote by $A^*$ the adjoint of $A$, where $A$ is the generator defined in (\[fpgt\]). Consider the following SPDE: $$\begin{aligned}
\label{dqmvp}
u_t(x)=&\mu(x)+\int_0^t A^* u_s(x)ds-\sum_{i,j=1}^d\int_0^t\int_{\mathbb{R}^d}\frac{\partial}{\partial x_i} \left[h^{ij}(y-x)u_s(x)\right]W^j(ds,dy)\nonumber\\
&+\int_0^t u_s(x)\frac{V(ds,dx)}{dx}.\end{aligned}$$
\[def\] Let $u=\{u_t(x),t\in[0,T],x\in {\mathbb{R}}^d\}$ be a random field. Then,
(i) $u$ is said to be a strong solution to the SPDE (\[dqmvp\]), if $u$ is jointly measurable on $[0,T]\times {\mathbb{R}}^d\times \Omega$, adapted to $\{\mathcal{F}_t\}_{t\in[0,T]}$ and for any $\phi\in C_b^2({\mathbb{R}}^d)$, the following equation holds for almost every $t\in[0,T]$: $$\begin{aligned}
\int_{{\mathbb{R}}^d}\phi(x)u_t(x)dx=&\int_{{\mathbb{R}}^d}\phi(x)\mu(x)dx+\int_0^t\int_{{\mathbb{R}}^d} A \phi(x)u_s(x)dxds\nonumber\\
&+\int_0^t\int_{{\mathbb{R}}^d}\Big[\int_{{\mathbb{R}}^d}\nabla \phi(x)^*h(y-x)u_s(x)dx\Big]W(ds,dy)\nonumber\\
&+\int_0^t \int_{{\mathbb{R}}^d}\phi(x)u_s(x)V(ds,dx),\end{aligned}$$ almost surely, where the last two stochastic integrals are Walsh’s integral (see e.g. Walsh [@springer-86-walsh]).
(ii) $u$ is said to be a weak solution to the SPDE (\[dqmvp\]), if there exists a filtered probability space, on which $W$ and $V$ are independent random fields that satisfy the above properties, such that $u$ is a strong solution on this probability space.
In Section 4, we prove the following two theorems.
\[unique\] The SPDE (\[dqmvp\]) has a unique strong solution in the sense of Definition \[def\].
Let $X^n=\{X^n_t,0\leq t\leq T\}$ be defined in (\[napem\]). In order to show the convergence of $X^n$ in $D([0,T];M_F({\mathbb{R}}^d))$, we make use of the following hypothesis on the initial measures $X^n_0$:
1. (i) $\displaystyle \sup_{n\geq 1}|X_0^{n}(1)|<\infty$.
(ii) $X_0^n\Rightarrow X_0$ in $M_F(\mathbb{R}^d)$ as $n\to\infty$.
(iii) $X_0$ has a bounded density $\mu$.
\[tmpd\] Let $X^n$ be defined in (\[napem\]). Then, under hypotheses **\[H1\]** and **\[H2\]**, we have the following results:
(i) $X^n \Rightarrow X$ in $D([0,T], M_F(\mathbb{R}^d))$ as $n\to\infty$.
(ii) For almost every $t\in[0,T]$, $X_t$ has a density $u_t(x)$ almost surely.
(iii) $u=\{u_t(x), t\in[0,T], x\in{\mathbb{R}}^d\}$ is a weak solution to the SPDE (\[dqmvp\]) in the sense of Definition \[def\].
The last main result in this paper is the following theorem concerning the Hölder continuity of the solution to the SPDE .
\[tjhc\] Let $u=\{u_t(x),t\in[0,T],x\in{\mathbb{R}}^d\}$ be the strong solution to the SPDE (\[dqmvp\]) in the sense of Definition \[def\]. Then, for any $\beta_1,\beta_2\in (0,1)$, $p>1$, $x,y\in\mathbb{R}^d$, and $0< s<t\leq T$, there exists a constant $C$ that depends on $T$, $d$, $h$, $p$, $\beta_1$, and $\beta_2$, such that $$\begin{aligned}
\left\|u_t(x)-u_s(y)\right\|_{2p}\leq Cs^{-\frac{1}{2}}\big(|x-y|^{\beta_1}+|t-s|^{\frac{1}{2}\beta_2}\big).\end{aligned}$$ Hence, by Kolmogorov’s continuity criterion, $u_t(x)$ is almost surely jointly Hölder continuous, with any exponent $\beta_1\in(0,1)$ in space and any exponent $\beta_2\in(0,\frac{1}{2})$ in time.
Proof of Theorems \[unique\] and \[tmpd\]
=========================================
We prove Theorems \[unique\] and \[tmpd\] in the following steps:
(i) In Section 4.1, we show that $\{X^n\}_{n\geq 1}$ is a tight sequence in $D([0,T];M_F({\mathbb{R}}^d))$, and the limit of any convergent subsequence in law solves a martingale problem.
(ii) In Section 4.2, we show that any solution to the martingale problem has a density almost surely.
(iii) In Section 4.3, we show the equivalence between martingale problem (see e.g. (\[mprf\]) - (\[qvmp\]) below) and the SPDE (\[dqmvp\]). Finally, we prove Theorems \[unique\] and \[tmpd\].
Tightness and martingale problem
--------------------------------
Recall the empirical measure-valued process $X^n=\{X^n_t, t\in[0,T]\}$ given by (\[napem\]). Let $\phi\in C_b^2(\mathbb{R}^d)$. Then, similarly to identity (49) of Sturm [@ejp-03-sturm], we can decompose $X^n_t$ as follows: $$\begin{aligned}
\label{dfx}
X_t^n(\phi)=X_0^n(\phi)+Z^n_t(\phi)+M^{b,n}_{t}(\phi)+B^n_t(\phi)+U^n_t(\phi),\end{aligned}$$ where $$\begin{aligned}
Z^n_t(\phi)=\int_0^t X_u^n(A\phi)du,\end{aligned}$$ $$\begin{aligned}
M_{t}^{b,n}(\phi)=M_{t_n}^{b,n}(\phi)=\frac{1}{n}\sum_{s_n<t_n}\sum_{\alpha\sim_n s_n}\phi \big(x^{\alpha, n}_{s_n+\frac{1}{n}}\big)(N^{\alpha,n}-1),\end{aligned}$$ $$\begin{aligned}
B^n_t(\phi)=\frac{1}{n}\Big(\sum_{s_n<t_n}\sum_{\alpha\sim_n s_n}\int_{s_n}^{s_n+\frac{1}{n}}\nabla\phi(x^{\alpha,n}_u)^* dB^{\alpha}_u+\sum_{\alpha\sim_n t}\int_{t_n}^t\nabla\phi(x^{\alpha,n}_u)^*dB^{\alpha}_u\Big),\end{aligned}$$ and $$\begin{aligned}
U_t^n(\phi)=&\frac{1}{n}\Big(\sum_{s_n<t_n}\sum_{\alpha\sim_n s_n}\int_{s_n}^{s_n+\frac{1}{n}}\int_{\mathbb{R}^d}\nabla\phi(x^{\alpha,n}_u)^* h(y-x^{\alpha,n}_u)W(du,dy)\\
&\qquad+\sum_{\alpha\sim_n t}\int_{t_n}^t\int_{\mathbb{R}^d}\nabla\phi(x^{\alpha,n}_u)^* h(y-x^{\alpha,n}_u)W(du,dy)\Big)\\
=&\int_0^t\int_{\mathbb{R}^d}\Big(\int_{\mathbb{R}^d}\nabla\phi(x)^* h(y-x)X^n_u(dx)\Big)W(du,dy).\end{aligned}$$ As in Sturm [@ejp-03-sturm], consider the natural filtration generated by the process $X^n$: $$\mathcal{F}^n_t=\sigma\left(\{x^{\alpha,n},N^{\alpha,n}\big||\alpha|<{ \lfloor nt \rfloor}\}\cup\{x^{\alpha,n}_s, s\leq t, |\alpha|={ \lfloor nt \rfloor}\}\right),$$ and a discrete filtration at branching times $$\widetilde{\mathcal{F}}^n_{t_n}=\sigma\big(\mathcal{F}^n_{t_n}\cup\{x^{\alpha,n}\big||\alpha|=nt_n\}\big)=\mathcal{F}^n_{(t_n+n^{-1})^-}.$$ Then, $B^n_t(\phi)$ and $U^n_t(\phi)$ are continuous $\mathcal{F}^n_t$-martingales, while $M^{b,n}_{t}(\phi)$ is a discrete $\widetilde{\mathcal{F}}^n_{t_n}$-martingale.
\[ublxmu\] Assume hypotheses **\[H1\]**, **\[H2\]** (i) and (ii). Let $p>2$ be given in hypothesis **\[H1\]**. Then, for all $\phi\in C_b^2({\mathbb{R}}^d)$,
(i) $\displaystyle{\mathbb{E}}\Big(\sup_{0\leq t\leq T}|X^n_t(\phi)|^p\Big)$, $\displaystyle{\mathbb{E}}\Big(\sup_{0\leq t\leq T}|M^{b,n}_{t}(\phi)|^p\Big)$, and $\displaystyle{\mathbb{E}}\Big(\sup_{0\leq t\leq T}|U^{n}_{t}(\phi)|^p\Big)$ are bounded uniformly in $n\geq 1$.
(ii) $\displaystyle{\mathbb{E}}\Big(\sup_{0\leq t\leq T}|B^n_t(\phi)|^p\Big)\to 0$, as $n\to\infty$.
By the same argument as that for Lemma 3.1 of Sturm [@ejp-03-sturm], we can show that $${\mathbb{E}}\Big(\sup_{0\leq t\leq T}|X^n_t(\phi)|^p\Big)+{\mathbb{E}}\Big(\sup_{0\leq t\leq T}|M^{b,n}_{t}(\phi)|^p\Big)\leq C {\mathbb{E}}\Big(\sup_{0\leq t\leq T}|X^{n}_{t}(1)|^p\Big),$$ where the constant $C>0$ does not depend on $n$. Notice that $U^n_t(1)\equiv 0$, therefore $X^n_t(1)$ here is not different from the variable in Sturm’s model. Thus we can simply refer to her result: $\displaystyle{\mathbb{E}}\Big(\sup_{0\leq t\leq T}|X^{n}_{t}(1)|^p\Big)$ is bounded uniformly in $n$. Therefore, $\displaystyle{\mathbb{E}}\Big(\sup_{0\leq t\leq T}|X^n_t(\phi)|^p\Big)$ and $\displaystyle{\mathbb{E}}\Big(\sup_{0\leq t\leq T}|M^{b,n}_t(\phi)|^p\Big)$ are also uniformly bounded in $n$.
For $U^{n}_t(\phi)$, by using the stochastic Fubini theorem, we have $$\begin{aligned}
\label{qvwnr}
\langle U^n(\phi)\rangle_t=&\Big\langle \int_0^{\cdot}\int_{\mathbb{R}^d}\Big(\int_{\mathbb{R}^d}\nabla\phi(x)^*h(y-x)X^n_u(dx)\Big)W(du,dy)\Big\rangle_t\nonumber\\
=&\sum_{j=1}^d \int_0^t\int_{\mathbb{R}^d}\Big(\sum_{i=1}^d\int_{\mathbb{R}^d}\partial_i\phi(x)h^{ij}(y-x)X^n_u(dx)\Big)^2dydu\nonumber\\
=&\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}\nabla \phi(x)^*\rho(x-z)\nabla\phi(z)X^n_u(dx)X^n_u(dz)du\nonumber\\
\leq &\|\rho\|_{\infty}\|\phi\|_{1,\infty}^2\int_0^T \left|X_u^n(1)\right|^2 du.\end{aligned}$$ Thus by (\[qvwnr\]), Burkholder-Davis-Gundy’s and Jensen’s inequalities, we have $$\begin{aligned}
\label{ubpu}
{\mathbb{E}}\Big(\sup_{0\leq t\leq T}\left|U^n_t(\phi)\right|^p\Big)\leq &c_p {\mathbb{E}}\big(\left\langle U^n(\phi)\right\rangle_T^\frac{p}{2}\big)\leq c_p\|\rho\|_{\infty}^{\frac{p}{2}}\|\phi\|_{1,\infty}^pT^{\frac{p}{2}-1}{\mathbb{E}}\Big(\int_0^T|X_u^n(1)|^p du\Big)\nonumber\\
\leq &c_p\|\rho\|_{\infty}^{\frac{p}{2}}\|\phi\|_{1,\infty}^pT^{\frac{p}{2}}{\mathbb{E}}\Big(\sup_{0\leq t\leq T}\left|X_t^n(1)\right|^p\Big),\end{aligned}$$ which is also uniformly bounded in $n$.
Note that $\{B^{\alpha}\}$ are independent Brownian motions. Then, by Burkholder-Davis-Gundy’s inequality, we have $$\begin{aligned}
&{\mathbb{E}}\Big(\sup_{0\leq t\leq T}|B^n_t(\phi)|^2\Big)\leq\frac{c_2}{n^2}\Big[\sum_{s_n<T_n}\sum_{\alpha\sim_n s_n}{\mathbb{E}}\Big(\int_{s_n}^{s_n+\frac{1}{n}}|\nabla\phi(x^{\alpha,n}_u) |^2du\Big)\\
&\hspace{35mm}+\sum_{\alpha\sim_n t}{\mathbb{E}}\Big(\int_{T_n}^T|\nabla\phi(x^{\alpha,n}_u)|^2du\Big)\Big]\\
&\hspace{10mm}=\frac{c_2}{n}{\mathbb{E}}\Big(\int_0^T\int_{{\mathbb{R}}^d}|\nabla\phi(x)|^2X_u(dx)du\Big)\leq \frac{c_2}{n}\|\phi\|_{1,\infty}^2T{\mathbb{E}}\Big(\sup_{0\leq t\leq T}|X^{n}_{t}(1)|^p\Big)\to 0,\end{aligned}$$ because $\displaystyle{\mathbb{E}}\Big(\sup_{0\leq t\leq T}|X^{n}_{t}(1)|^p\Big)$ is uniformly bounded in $n$.
As a consequence of Lemma \[ublxmu\], the collection $$\Big\{\sup_{0\leq t\leq T}|X_t^n(\phi)|^2, \sup_{0\leq t\leq T}|M_t^{b,n}(\phi)|^2,\sup_{0\leq t\leq T}|U_t^n(\phi)|^2\Big\}_{n\geq 1}$$ is uniformly integrable.
A family $\{X^{\alpha}\}$ of real-valued stochastic processes is said to be C-tight if it is tight and the limit of any weakly convergent subsequence is continuous.
\[ctwqv\] For all $\phi\in C^2_b(\mathbb{R}^d)$, $M^{b,n}(\phi)$, $Z^{n}(\phi)$, and $U^n(\phi)$ are C-tight sequences in $D([0, T],\mathbb{R})$. As a consequence, $X^{n}(\phi)$ is C-tight in $D([0, T],\mathbb{R})$.
Note that the branching martingale $M^{b,n}(\phi)$ and drift term $Z^n(\phi)$ are the same as in Sturm [@ejp-03-sturm]. Thus by the proof of Lemma 3.6 in [@ejp-03-sturm], we deduce the C-tightness of $M^{b,n}(\phi)$ and $Z^{n}(\phi)$.
We prove the tightness of $U^n_t(\phi)$ by checking Aldous’s conditions (see Theorem 4.5.4 of Dawson [@springer-92-dawson]). By Chebyshev’s inequality, for any fixed $t\in [0, T]$, and $N>0$, we have $$\begin{aligned}
\mathbb{P}\left(|U^n_t(\phi)|>N\right)\leq \frac{1}{N^p}{\mathbb{E}}\big(\left|U^n_t(\phi)\right|^p\big)\leq \frac{1}{N^p}{\mathbb{E}}\Big(\sup_{0\leq t\leq T}\left|U^n_t(\phi)\right|^p\Big)\to 0,\end{aligned}$$ uniformly in $n$ as $N\to\infty$ by Lemma \[ublxmu\] (i).
On the other hand, let $\{\tau_n\}_{n\geq 1}$ be any collection of stopping times bounded by $T$, and let $\{\delta_n\}_{n\geq 1}$ be any positive sequence that decreases to $0$. Then, due to (\[ubpu\]) and the strong Markov property of Itô’s diffusion $U^n(\phi)$, we have $$\begin{aligned}
\mathbb{P}\left(\big|U^n_{\tau_n+\delta_n}(\phi)-U^n_{\tau_n}(\phi)\big|>\epsilon\right)=&\mathbb{P}\left(\left|U^n_{\delta_n}(\phi)-U^n_0(\phi)\right|>\epsilon\right)\leq \frac{1}{\epsilon^p}{\mathbb{E}}\big(\left|U^n_{\delta_n}(\phi)\right|^p\big)\\
\leq &\frac{\delta_n^{\frac{p}{2}}}{\epsilon^p}c_p\|\rho\|_{\infty}^{\frac{p}{2}}\|\phi\|_{1,\infty}^p{\mathbb{E}}\Big(\sup_{0\leq t\leq T}\left|X_t^n(1)\right|^p\Big)\to 0,\end{aligned}$$ as $n\to \infty$. Thus both of Aldous’s conditions are satisfied, and it follows that $U^n_t(\phi)$ is tight in $D([0, T],\mathbb{R})$.
Notice that for any $n\geq 1$, $U^n_t(\phi)$ is a continuous martingale. Hence, by Proposition VI.3.26 of Jacod and Shiryaev [@springer-13-jacod-shiryaev], every limit of this tight sequence of continuous processes is also continuous, so $U^n(\phi)$ is C-tight.
Recall the decomposition formula (\[dfx\]): $$\begin{aligned}
X_t^n(\phi)=X_0^n(\phi)+Z^n_t(\phi)+M^{b,n}_{t}(\phi)+B^n_t(\phi)+U^n_t(\phi).\end{aligned}$$ Notice that the first term converges weakly by assumption. The second, third, and last terms are C-tight as proved just now. The fourth term tends to $0$ in $L^2(\Omega)$ uniformly in $t\in[0,T]$ by Lemma \[ublxmu\], (ii). As a consequence, $X^n(\phi)$ is C-tight in $D([0,T],{\mathbb{R}})$.
Let $\mathscr{S}=\mathscr{S}({\mathbb{R}}^d)$ be the Schwartz space on ${\mathbb{R}}^d$, and let $\mathscr{S}'$ be the Schwartz dual space. Then, we have the following lemma:
\[tight\] Assume hypotheses **\[H1\]** and **\[H2\]** (i), (ii). Then,
(i) $\{X^n\}_{n\geq 1}$ is a tight sequence in $D([0,T]; M_F({\mathbb{R}}^d))$.
(ii) $\{B^n\}_{n\geq 1}$, $\{M^{b,n}\}_{n\geq 1}$, and $\{U^n\}_{n\geq 1}$ are C-tight in $D([0,T]; \mathscr{S}')$.
Let $\widehat{{\mathbb{R}}}^d={\mathbb{R}}^d\cup \{\partial\}$ be the one point compactification of ${\mathbb{R}}^d$. Then, by Theorem 4.6.1 of Dawson [@springer-92-dawson] and Lemma \[ctwqv\], $\{X^n\}_{n\geq 1}$ is a tight sequence in $D([0,T]; M_F(\widehat{{\mathbb{R}}}^d))$.
On the other hand, by the same argument as in Lemma 3.9 of Sturm [@ejp-03-sturm], we can show that any limit of a weakly convergent subsequence $X^{n_k}$ in $D([0,T]; M_F(\widehat{{\mathbb{R}}}^d))$ belongs to $C([0,T]; M_F({\mathbb{R}}^d))$, the space of continuous $M_F({\mathbb{R}}^d)$-valued functions on $[0,T]$. Therefore, $\{X^n\}_{n\geq 1}$ is a tight sequence in $D([0,T]; M_F({\mathbb{R}}^d))$.
To show the property (ii), notice that $\mathscr{S}\subset C_b^2({\mathbb{R}}^d)$. Then, by Theorem 4.1 of Mitoma [@ap-83-mitoma], $\{B^n\}_{n\geq 1}$, $\{M^{b,n}\}_{n\geq 1}$, and $\{U^n\}_{n\geq 1}$ are C-tight in $D([0,T]; \mathscr{S}')$.
\[propmp\] Assume hypotheses **\[H1\]**, **\[H2\]** (i) and (ii). Let $X$ be the limit of a weakly convergent subsequence $\{X^{n_k}\}_{k\geq 1}$ in $D([0,T]; M_F({\mathbb{R}}^d))$. Then, $X$ is a solution to the following martingale problem: for any $\phi \in C^2_b(\mathbb{R}^d)$, the process $M(\phi)=\{M_t(\phi):0\leq t\leq T\}$, given by $$\begin{aligned}
\label{mprf}
M_t(\phi):=&X_t(\phi)-X_0(\phi)-\int_0^tX_s(A\phi)ds,\end{aligned}$$ is a continuous and square integrable $\mathcal{F}^X_t$-adapted martingale with quadratic variation: $$\begin{aligned}
\label{qvmp}
\langle M(\phi)\rangle_t=&\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}\nabla \phi(x)^*\rho(x-y)\nabla\phi(y)X_s(dx)X_s(dy)ds\nonumber\\
&+\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}\kappa(x,y)\phi(x)\phi(y)X_s(dx)X_s(dy)ds.\end{aligned}$$
Let $\{X^{n_k}\}_{k\geq 1}$ be a weakly convergent subsequence in $D([0,T]; M_F({\mathbb{R}}^d))$. By taking further subsequences, we can assume, in view of Lemma \[tight\] (ii), that $\{B^{n_k}\}_{k\geq 1}$, $\{M^{b, n_k}\}_{k\geq 1}$, and $\{U^{n_k}\}_{k\geq 1}$ are weakly convergent subsequences in $D([0,T]; \mathscr{S}')$.
Then, by Skorohod’s representation theorem, there exists a probability space $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb{P}})$, on which $(\widetilde{X}^{n_k}, \widetilde{M}^{b,n_k}, \widetilde{B}^{n_k}, \widetilde{U}^{n_k})$ has the same joint distribution as $(X^{n_k}, M^{b,n_k}, B^{n_k}, U^{n_k})$ for all $k\geq 1$, and converges a.s. to $(\widetilde{X}, \widetilde{M}^b, \widetilde{B}, \widetilde{U})$ in the product space $D([0,T],M_F(\widehat{{\mathbb{R}}}^d))\times D([0,T],\mathscr{S}')^3$.
Then, for any $\phi\in \mathscr{S}$, $(\widetilde{X}^{n_k}(\phi), \widetilde{M}^{b,n_k}(\phi), \widetilde{B}^{n_k}(\phi), \widetilde{U}^{n_k}(\phi))$ converges a.s. in $D([0,T],{\mathbb{R}})$. Since $\displaystyle\Big\{\sup_{0\leq t\leq T}|X_t^n(\phi)|^2, \sup_{0\leq t\leq T}|M_t^{b,n}(\phi)|^2, \sup_{0\leq t\leq T}|U_t^n(\phi)|^2\Big\}_{n\geq 1}$ is uniformly integrable, the convergence is also in $L^2([0,T]\times \Omega)$.
For any $t\in[0,T]$, let $$\widetilde{M}_t^{n_k}(\phi):=\widetilde{X}^{n_k}_t(\phi)-\widetilde{X}^{n_k}_0(\phi)-\int_0^t\widetilde{X}^{n_k}_s(A\phi)ds=\widetilde{M}^{b,n_k}_t(\phi)+\widetilde{B}^{n_k}_t(\phi)+\widetilde{U}^{n_k}_t(\phi).$$ Then, it converges to a continuous and square integrable martingale $\widetilde{M}(\phi)=\widetilde{M}^{b}(\phi)+\widetilde{U}(\phi)$ in $L^2(\widetilde{\Omega})$. It suffices to compute its quadratic variation.
Since $W$ and $\{B^{\alpha}\}$ are independent, $U^{n}$ and $B^{n}$ are orthogonal. As a consequence, $\widetilde{U}^{n_k}$ and $\widetilde{B}^{n_k}$ are also orthogonal. On the other hand, $\widetilde{M}^{b,n_k}(\phi)$ is a pure jump martingale, while $\widetilde{U}^{n_k}(\phi)$ and $\widetilde{B}^{n_k}(\phi)$ are continuous martingales. Due to Theorem 43 on page 353 of Dellacherie and Meyer [@northholland-82-dellacherie-meyer], $\widetilde{M}^{b,n_k}(\phi)$, $\widetilde{B}^{n_k}(\phi)$ and $\widetilde{U}^{n_k}(\phi)$ are mutually orthogonal. By the same argument as in Lemma \[ublxmu\], we can show that the quadratic variations $\langle\widetilde{M}^{b,n_k}(\phi)+\widetilde{B}^{n_k}(\phi)+\widetilde{U}^{n_k}(\phi)\rangle_t=\langle\widetilde{M}^{b,n_k}(\phi)\rangle_t+\langle\widetilde{B}^{n_k}(\phi)\rangle_t+\langle\widetilde{U}^{n_k}(\phi)\rangle_t$ are uniformly integrable. Then, by Theorem II.4.5 of Perkins [@springer-02-perkins], we have $$\begin{aligned}
\langle\widetilde{M}^{b,n_k}(\phi)+\widetilde{B}^{n_k}(\phi)+\widetilde{U}^{n_k}(\phi)\rangle_t&=\langle\widetilde{M}^{b,n_k}(\phi)\rangle_t+\langle\widetilde{B}^{n_k}(\phi)\rangle_t+\langle\widetilde{U}^{n_k}(\phi)\rangle_t \\
&\to\langle\widetilde{M}^{b}(\phi)\rangle_t+\langle\widetilde{U}(\phi)\rangle_t= \langle \widetilde{M}(\phi)\rangle_t\end{aligned}$$ as $k\to\infty$ in $D([0,T],{\mathbb{R}})$ in probability.
On the other hand, by the same argument as in Lemma 3.8 of Sturm [@ejp-03-sturm], we have $$\begin{aligned}
\langle \widetilde{M}^b(\phi)\rangle_t=\lim_{k\to\infty}\langle \widetilde{M}^{b,n_k}(\phi)\rangle_t=\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}\kappa(x,y)\phi(x)\phi(y)\widetilde{X}_s(dx)\widetilde{X}_s(dy)ds,\ a.s.\end{aligned}$$ For $\langle \widetilde{U}(\phi)\rangle_t$, by (\[qvwnr\]), since $\widetilde{X}^{n_k}(\phi)\to \widetilde{X}(\phi)$ in $L^2([0,T]\times \Omega)$, it follows that $$\begin{aligned}
\lim_{k\to\infty}\langle\widetilde{U}^{n_k}(\phi)\rangle_t=&\lim_{k\to\infty}\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}\nabla \phi(x)^*\rho(x-z)\nabla\phi(z)\widetilde{X}^n_u(dx)\widetilde{X}^n_u(dz)du\\
=&\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}\nabla \phi(x)^*\rho(x-z)\nabla\phi(z)\widetilde{X}_u(dx)\widetilde{X}_u(dz)du.\end{aligned}$$ As a consequence, $\widetilde{M}=\{\widetilde{M}_t,t\in[0,T]\}$, where $$\widetilde{M}_t(\phi)=\widetilde{X}_t(\phi)-\widetilde{X}_0(\phi)-\int_0^t\widetilde{X}_s(A\phi)ds=\widetilde{M}^b_t(\phi)+\widetilde{B}_t(\phi)+\widetilde{U}_t(\phi),$$ is a continuous and square integrable martingale with quadratic variation given by (\[qvmp\]).
Finally, by the same argument as in Theorem II.4.2 of Perkins [@springer-02-perkins], we can show $\widetilde{M}(\phi)$ is an $\mathcal{F}^{\widetilde{X}}$-adapted martingale.
Absolute continuity
-------------------
Let $X_t$ be a solution to the martingale problem (\[mprf\]) - (\[qvmp\]). In this section, we show that for almost every $t\in[0,T]$, as an $M_F({\mathbb{R}}^d)$-valued random variable, $X_t$ has a density almost surely.
For any $n \geq 1$, $f\in C_b^2({\mathbb{R}}^{nd})$, and $\mu\in M_F({\mathbb{R}}^d)$, we define $$\mu^{\otimes n}(f):=\int_{{\mathbb{R}}^{d}}\cdots\int_{{\mathbb{R}}^d}f(x_1,\dots,x_n)\mu(dx_1)\cdots \mu(dx_n).$$ We derive a formula for the moments ${\mathbb{E}}(X^{\otimes n}_t(f))$ of the process $X$. In the one-dimensional case, Skoulakis and Adler [@aap-01-skoulakis-adler] obtained the formula by computing the limit of particle approximations. An alternative approach, by Xiong [@ap-04-xiong], consists in differentiating a conditional stochastic log-Laplace equation. In the present paper we use the techniques of moment duality to derive the moment formula; it could also be obtained by computing the limit of particle approximations.
For any integers $n\geq 2$ and $k\leq n$, we make use of the notation $x_k=(x_k^1,\dots, x_k^d)\in {\mathbb{R}}^d$ and $x=(x_1,\dots, x_n)\in{\mathbb{R}}^{nd}$. Let $\Phi_{ij}^{(n)}: C_b^2({\mathbb{R}}^{nd}) \to C_b^2({\mathbb{R}}^{nd})$, and $F^{(n)}, G^{(n)}: C_b^{2}({\mathbb{R}}^{nd})\times M_F({\mathbb{R}}^d)\to {\mathbb{R}}$ be given by $$(\Phi_{ij}^{(n)}f)(x_1,\dots,x_n):=\kappa(x_i,x_j)f(x_1,\dots,x_n),\quad i,j\in\{1,2,\dots, n\},$$ $$F^{(n)}(f, \mu):=\mu^{\otimes n}(f),$$ and $$\begin{aligned}
G^{(n)}(f, \mu):=\mu^{\otimes n} (A^{(n)}f)+\frac{1}{2}\sum_{\mbox{\tiny$\begin{array}{c}
1\le i,j \le n\\
i\neq j\end{array}$}}\mu^{\otimes n} (\Phi^{(n)}_{ij}f),\end{aligned}$$ where $\kappa\in C_b^2({\mathbb{R}}^{2d})$ is the correlation of the random field $\xi$ given by (\[crlt\]), and $A^{(n)}$ is the generator of the $n$-particle-motion described by (\[pmern\]). More precisely, $$\begin{aligned}
A^{(n)}f(x_1,\dots, x_n)=\frac{1}{2}(\Delta +B^{(n)})f(x_1,\dots, x_n),\end{aligned}$$ where $\Delta$ is the Laplace operator in ${\mathbb{R}}^{nd}$ and $$\begin{aligned}
B^{(n)}f(x_1,\dots, x_n)=\sum_{k_1,k_2=1}^n\sum_{i,j=1}^d\rho^{ij}(x_{k_1}-x_{k_2})\frac{\partial^2 f}{\partial x_{k_1}^{i}\partial x_{k_2}^{j}}(x_1,\dots,x_n).\end{aligned}$$
\[mxf\] Let $X_t$ be a solution to the martingale problem (\[mprf\]) - (\[qvmp\]). Then, for any integer $n\geq 1$ and any $f\in C_b^2({\mathbb{R}}^{nd})$, the process $$F^{(n)}(f, X_t)-\int_0^t G^{(n)}(f, X_s)ds$$ is a martingale.
See Lemma 1.3.2 of Xiong [@ws-13-xiong].
Let $\{T_t^{(n)}\}_{t\geq 0}$ be the semigroup generated by $A^{(n)}$, that is, $T^{(n)}_t: C_b^2 ({\mathbb{R}}^{nd})\to C_b^2 ({\mathbb{R}}^{nd})$, given by $$T_t^{(n)}f(x_1,\dots, x_n)=\int_{{\mathbb{R}}^{nd}}p(t, (x_1,\dots, x_n), (y_1,\dots, y_n))f(y_1,\dots, y_n)dy_1\dots dy_n,$$ where $p$ is the transition density of $n$-particle-motion.
Let $\{S_k^{(n)}\}_{k\geq 1}$ be i.i.d. uniformly distributed random variables taking values in the set $\{\Phi_{ij}, 1\leq i,j\leq n, i\neq j\}$. Let $\{\tau_k\}_{k\geq 1}$ be i.i.d exponential random variables independent of $\{S_k^{(n)}\}_{k\geq 1}$, with rate $\lambda_n=\frac{1}{2}n(n-1)$. Let $\eta_0\equiv 0$, and $\eta_j=\sum_{i=1}^j\tau_i$ for all $j\geq 1$. For any $f\in C_b^2({\mathbb{R}}^{nd})$, we define a $C_b^2({\mathbb{R}}^{nd})$-valued random process $Y^{(n)}=\{Y_t^{(n)}, 0\leq t\leq T\}$ as follows: for any $j\geq 0$ and $t\in[\eta_j, \eta_{j+1})$, $$\begin{aligned}
\label{dual}
Y^{(n)}_t:=T^{(n)}_{t-\eta_j}S^{(n)}_jT^{(n)}_{\tau_j}\cdots S^{(n)}_2T^{(n)}_{\tau_2}S^{(n)}_1T^{(n)}_{\tau_1}f.\end{aligned}$$ Then, $Y^{(n)}$ is a Markov process with $Y^{(n)}_0=f$. It involves countably many i.i.d. jumps $S^{(n)}_k$, triggered by i.i.d. exponential clocks $\tau_k$. Between jumps, the process evolves deterministically through the continuous semigroup $T^{(n)}_t$. Notice that the exponential clock is memoryless and the semigroup $T^{(n)}_t$ is generated by a time-homogeneous Markov process. Therefore, $Y^{(n)}$ is also time-homogeneous.
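The following Python skeleton (a sketch only, with the semigroup $T^{(n)}_t$ and the kernel $\kappa$ left as hypothetical placeholders) makes the alternation in (\[dual\]) concrete: it draws the exponential clocks $\tau_k$ with rate $\lambda_n$ and composes the flow and jump operators in the stated order up to time $t$.
\begin{verbatim}
import numpy as np

def simulate_dual(f, T_semigroup, kappa, n, t, rng=np.random.default_rng(1)):
    # Compose Y_t = T_{t - eta_j} S_j T_{tau_j} ... S_1 T_{tau_1} f up to time t.
    # T_semigroup(s, g) and kappa(a, b) are hypothetical placeholders standing in
    # for T^{(n)}_s and the correlation kernel; a jump multiplies by kappa(x_i, x_j)
    # for a uniformly chosen pair i != j.
    lam = 0.5 * n * (n - 1)                 # rate lambda_n of the exponential clocks
    g, elapsed = f, 0.0
    while True:
        tau = rng.exponential(1.0 / lam)
        if elapsed + tau >= t:              # flow deterministically until time t, then stop
            return T_semigroup(t - elapsed, g)
        g = T_semigroup(tau, g)             # evolve between jumps
        i, j = rng.choice(n, size=2, replace=False)
        g = (lambda h, i=i, j=j: (lambda x: kappa(x[i], x[j]) * h(x)))(g)  # jump Phi_{ij}
        elapsed += tau

# smoke test with trivial placeholders: identity semigroup, constant kernel
Y = simulate_dual(lambda x: 1.0, lambda s, g: g, lambda a, b: 1.0, n=3, t=1.0)
print(Y(np.zeros((3, 2))))                  # evaluate at a dummy configuration of 3 points in R^2
\end{verbatim}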
\[ingbty\] For any $f\in C_b^2({\mathbb{R}}^{nd})$, let $Y^{(n)}_t$ be defined in (\[dual\]). Then $$\begin{aligned}
\label{ifyny}
{\mathbb{E}}\Big(\sup_{x\in{\mathbb{R}}^{nd}}\big|Y^{(n)}_t(x)\big|\Big) \leq \|f\|_{\infty}\exp\left(\|\kappa\|_{\infty}\lambda_nt\right).\end{aligned}$$
Since $T^{(n)}_t$ is the semigroup generated by a Markov process, for any $t>0$ and $f\in C_b^{2}({\mathbb{R}}^{nd})$, $\|T^{(n)}_tf\|_{\infty}\leq \|f\|_{\infty}$. By definition of the jump operators $\{S^{(n)}_j\}_{j\geq 1}$, it is easy to see that $\|S^{(n)}_jf\|_{\infty}\leq \|\kappa\|_{\infty}\|f\|_{\infty}$. Thus we have $$\begin{aligned}
\label{ifyny1}
{\mathbb{E}}\Big(\sup_{x\in{\mathbb{R}}^{nd}}\big|Y^{(n)}_t(x)\big|\Big) \leq\|f\|_{\infty} \sum_{j=0}^{\infty}\big[\|\kappa\|_{\infty}^j\mathbb{P}(\eta_j<t)\big].\end{aligned}$$ Notice that $\eta_j$ is the sum of i.i.d. exponential random variables. Then, we have $$\begin{aligned}
\label{ifyny2}
\mathbb{P}(\eta_j<t)=1-\exp\left(-\lambda_nt\right)\sum_{k=0}^{j-1}\frac{(\lambda_nt)^k}{k!}=\exp(\lambda_n(t'-t))\frac{(\lambda_nt)^j}{j!},\end{aligned}$$ for some $t'\in (0,t)$. Therefore, (\[ifyny\]) follows from (\[ifyny1\]) and (\[ifyny2\]).
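The identity (\[ifyny2\]) is simply the tail probability of a Poisson distribution written through the Lagrange remainder of the exponential series; the following Python lines (illustrative only, with hypothetical values of $\lambda_n t$ and $j$) compare the exact tail with the crude bound obtained by taking $t'=t$.
\begin{verbatim}
import math

lam_t = 2.5                                   # hypothetical value of lambda_n * t
j = 4
exact_tail = 1.0 - math.exp(-lam_t) * sum(lam_t ** k / math.factorial(k) for k in range(j))
crude_bound = lam_t ** j / math.factorial(j)  # right-hand side of (ifyny2) with t' = t
print(exact_tail, "<=", crude_bound)          # P(eta_j < t) <= (lambda_n t)^j / j!
\end{verbatim}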
Let $H^{(n)}:C_b^2({\mathbb{R}}^{nd})\times M_F({\mathbb{R}}^d)\to {\mathbb{R}}$ be given by $$H^{(n)}(f,\mu):=G^{(n)}(f,\mu)-\lambda_nF^{(n)}(f,\mu).$$
\[mxdual\] Let $\mu\in M_F({\mathbb{R}}^d)$. Then, the process $$\begin{aligned}
\label{mrgdual}
F^{(n)}(Y^{(n)}_t, \mu)-\int_0^t H^{(n)}(Y^{(n)}_s, \mu)ds\end{aligned}$$ is a martingale.
Let $\mu^{(n)}$ be any finite measure on ${\mathbb{R}}^{nd}$. Then, we have $$\begin{aligned}
\label{eexclc}
&{\mathbb{E}}\big(\mu^{(n)}(Y^{(n)}_t)\big)={\mathbb{E}}\big(\mu^{(n)}(Y^{(n)}_t){\mathbf{1}}_{\{\tau_1>t\}}\big)+{\mathbb{E}}\big(\mu^{(n)}(Y^{(n)}_t){\mathbf{1}}_{\{\eta_1\leq t<\eta_2\}}\big)+o(t).\end{aligned}$$ For the first term, we have $$\begin{aligned}
\label{eexclc0}
{\mathbb{E}}\big(\mu^{(n)}(Y^{(n)}_t){\mathbf{1}}_{\{\tau_1>t\}}\big)=&\mu^{(n)}(T^{(n)}_tf) \mathbb{P}(\tau_1>t)=\mu^{(n)}(T^{(n)}_tf) \exp(-\lambda_nt).\end{aligned}$$ For the second term, since $\tau_1$ and $\tau_2$ are independent, for any $0\leq s\leq t$ we have $$\begin{aligned}
\label{p2iide}
\mathbb{P}(\tau_1+\tau_2>t,\tau_1\leq s)=\int_0^s\int_{t-s_1}^{\infty}\lambda_n^2\exp(-\lambda_n(s_1+s_2))ds_2ds_1=\lambda_nse^{-\lambda_nt}.\end{aligned}$$ Note that by Lemma \[ingbty\], $|Y^{(n)}_{\cdot}|$ is integrable on $[0,T]\times {\mathbb{R}}^{nd}\times \Omega$ with respect to the product measure $dt \times \mu^{(n)}(dx)\times P(d\omega)$. Then, by (\[p2iide\]), Fubini’s theorem, and the mean value theorem, we have $$\begin{aligned}
\label{eexclc1}
&{\mathbb{E}}\big(\mu^{(n)}(Y^{(n)}_t){\mathbf{1}}_{\{\eta_1\leq t<\eta_2\}}\big)\nonumber\\
=&\frac{1}{2}\sum_{\mbox{\tiny$\begin{array}{c}
1\le i,j \le n\\
i\neq j\end{array}$}}\int_0^t\int_{{\mathbb{R}}^{nd}}\big(T^{(n)}_{t-s}\Phi^{(n)}_{ij}T^{(n)}_sf\big)(x)\exp\left(-\lambda_nt\right)\mu^{(n)}(dx)ds\nonumber\\
=&\frac{t}{2}\exp\left(-\lambda_nt\right)\sum_{\mbox{\tiny$\begin{array}{c}
1\le i,j \le n\\
i\neq j\end{array}$}}\int_{{\mathbb{R}}^{nd}}\big(T^{(n)}_{t-t'}\Phi_{ij}^{(n)}T^{(n)}_{t'}f\big)(x)\mu^{(n)}(dx),\end{aligned}$$ for some $t'\in(0,t)$. Combining (\[eexclc\]), (\[eexclc0\]), and (\[eexclc1\]), we have $$\lim_{t\downarrow 0}\frac{{\mathbb{E}}\big( \mu^{(n)}(Y^{(n)}_t)\big)-\mu^{(n)}(f)}{t}=\mu^{(n)}(A^{(n)}f)+\frac{1}{2}\sum_{\mbox{\tiny$\begin{array}{c}
1\le i,j \le n\\
i\neq j\end{array}$}}\mu^{(n)}\big(\Phi^{(n)}_{ij}f-f\big).$$ By Proposition 4.1.7 of Ethier and Kurtz [@wiley-86-ethier-kurtz], the following process: $$\begin{aligned}
\label{mtgyg}
\mu^{(n)}(Y^{(n)}_t)-\int_0^t \Big[\mu^{(n)}(A^{(n)} Y_s^{(n)})+\frac{1}{2}\sum_{\mbox{\tiny$\begin{array}{c}
1\le i,j \le n\\
i\neq j\end{array}$}}\mu^{(n)}(\Phi^{(n)}_{ij}Y_s^{(n)}-Y_s^{(n)})\Big]ds,\end{aligned}$$ is a martingale. Then, the lemma follows by choosing $\mu^{(n)}=\mu^{\otimes n}$.
By Lemmas \[mxf\] and \[mxdual\], and Corollary 3.2 of Dawson and Kurtz [@springer-82-dawson-kurtz], we have the following moment identity: $$\begin{aligned}
\label{mmtidt}
{\mathbb{E}}\big(X_t^{\otimes n}(f)\big)={\mathbb{E}}\bigg[X^{\otimes n}_0(Y^{(n)}_t)\exp\Big(\int_0^t\lambda_nds\Big)\bigg]=\exp\Big(\frac{1}{2}n(n-1)t\Big){\mathbb{E}}\big(X^{\otimes n}_0(Y^{(n)}_t)\big).\end{aligned}$$
\[lmfif\] Fix $f\in C_b^2({\mathbb{R}}^{nd})$.
(i) The following PDE $$\begin{aligned}
\label{dualpde}
\partial_tv_t(x)=A^{(n)}v_t(x)+\frac{1}{2}\sum_{\mbox{\tiny$\begin{array}{c}
1\le i,j \le n\\
i\neq j\end{array}$}}\kappa(x_i,x_j)v(t,x),\end{aligned}$$ with initial value $v_0(x)=f(x)$, has a unique solution.
(ii) Let $X=\{X_t,t\in[0,T]\}$ be a solution to the martingale problem (\[mprf\]) - (\[qvmp\]). Then, $$\begin{aligned}
\label{dualid}
{\mathbb{E}}\big(X_t^{\otimes n}(f)\big)=X_0^{\otimes n}(v_t).\end{aligned}$$
Firstly, we claim that the operator $A^{(n)}=\frac{1}{2}(\Delta+B^{(n)})$ is uniformly parabolic in the sense of Friedman (see Section 1.1 of [@courier-13-friedman]). Since $\Delta$ is uniformly parabolic, it suffices to analyse the properties of $B^{(n)}$. For any $k=1,\dots, n$, $i=1,\dots, d$, and $\xi_k^i\in{\mathbb{R}}$, let $\xi_k=(\xi_k^1,\dots,\xi_k^d)$. Then, we have $$\begin{aligned}
\sum_{k_1,k_2=1}^n\sum_{i,j=1}^d\rho^{ij}(x_{k_1}-x_{k_2})\xi_{k_1}^{i}\xi_{k_2}^{j}=\int_{{\mathbb{R}}^d}\bigg|\sum_{k=1}^nh^*(z-x_k)\xi_k\bigg|^2dz\geq 0.\end{aligned}$$ Thus $B^{(n)}$ is parabolic. On the other hand, by Jensen’s inequality, we have $$\sum_{k_1,k_2=1}^n\sum_{i,j=1}^d\rho^{ij}(x_{k_1}-x_{k_2})\xi_{k_1}^{i}\xi_{k_2}^{j}=\int_{{\mathbb{R}}^d}\bigg|\sum_{k=1}^nh^*(z-x_k)\xi_k\bigg|^2dz\leq n\|\rho\|_{\infty}\sum_{k=1}^n|\xi_{k}|^2.$$ It follows that $A^{(n)}=\frac{1}{2}(\Delta+B^{(n)})$ is uniformly parabolic.
Since $h\in H_2^3({\mathbb{R}}^d;{\mathbb{R}}^d\otimes {\mathbb{R}}^d)$, the function $\rho(x-y)=\int_{{\mathbb{R}}^d}h(z-x)h^*(z-y)dz$ has bounded derivatives up to order three. Hence, by Theorems 1.12 and 1.16 of Friedman [@courier-13-friedman], the PDE (\[dualpde\]) has a unique solution.
In order to show (ii), let $$\widetilde{v}_t(x)={\mathbb{E}}\big(Y^{(n)}_t(x)\big),$$ where $Y^{(n)}$ is defined by (\[dual\]). By the same argument as in the proof of Lemma \[ingbty\], we can show that for any $t\in [0,T]$, $${\mathbb{E}}\Big(\sup_{x\in{\mathbb{R}}^{nd}}\big|A^{(n)} Y_t^{(n)}(x)\big|\Big)< \infty.$$ Then, by the dominated convergence theorem, we have $${\mathbb{E}}\big( A^{(n)} Y_t^{(n)}(x)\big)=A^{(n)} {\mathbb{E}}(Y_t^{(n)}(x)).$$ Let $\mu^{(n)}$ be any finite measure on ${\mathbb{R}}^{nd}$. Recall that the process defined by (\[mtgyg\]) is a martingale. Then, the following equality follows from Fubini’s theorem: $$\begin{aligned}
\mu^{(n)}(\widetilde{v}_t)=&{\mathbb{E}}\big(\mu^{(n)}(Y^{(n)}_t)\big)=\mu^{(n)}(f)+\int_0^t \big\langle \mu^{(n)}, {\mathbb{E}}\big(A^{(n)} Y_s^{(n)}\big)\big\rangle ds\\
&\hspace{25mm}+\frac{1}{2}\sum_{\mbox{\tiny$\begin{array}{c}
1\le i,j \le n\\
i\neq j\end{array}$}}\int_0^t\big\langle\mu^{(n)}, [\kappa(\cdot_i,\cdot_j)-1]{\mathbb{E}}(Y_s^{(n)})\big\rangle ds\\
=&\mu^{(n)}(f)+\int_0^t \big\langle \mu^{(n)}, A^{(n)} \widetilde{v}_s\big\rangle ds+\frac{1}{2}\sum_{\mbox{\tiny$\begin{array}{c}
1\le i,j \le n\\
i\neq j\end{array}$}}\int_0^t\big\langle \mu^{(n)}, [\kappa(\cdot_i,\cdot_j)-1]\widetilde{v}_s\big\rangle ds.\end{aligned}$$ In other words, $$\bigg\langle \mu^{(n)}, \widetilde{v}_t-f-\int_0^t \Big[ A^{(n)} \widetilde{v}_s +\frac{1}{2}\sum_{\mbox{\tiny$\begin{array}{c}
1\le i,j \le n\\
i\neq j\end{array}$}} (\kappa(\cdot_i,\cdot_j)-1)\widetilde{v}_s \Big]ds\bigg\rangle=0,
\label{dualst}
\partial_t\widetilde{v}_t(x)=A^{(n)}\widetilde{v}_t(x)+\frac{1}{2}\sum_{\mbox{\tiny$\begin{array}{c}
1\le i,j \le n\\
i\neq j\end{array}$}}[\kappa(x_i,x_j)-1]\widetilde{v}_t(x),\end{aligned}$$ with the initial value $\widetilde{v}_0(x)=f(x)$. This solution is unique by the same argument as in part (i). Observe that $$\begin{aligned}
\label{pdead}
v_t(x)=\widetilde{v}_t(x)\exp\Big(\frac{1}{2}n(n-1)t\Big).\end{aligned}$$ Therefore, (\[dualid\]) follows from (\[pdead\]) and the moment duality (\[mmtidt\]).
In Lemma \[lmfif\], we derived the moment formula for ${\mathbb{E}}\big(X_t^{\otimes n}(f)\big)$ in the case when $n\geq 2$. If $n=1$, the dual process only involves a deterministic evolution driven by the semigroup of the one-particle motion, which makes things much simpler. We write the formula below and skip the proof. Let $p(t,x,y)$ be the transition density of the one-particle motion. Then, for any $\phi\in C_b^2({\mathbb{R}}^d)$, $${\mathbb{E}}(X_t(\phi))=X_0(T^{(1)}_t\phi)=\int_{{\mathbb{R}}^d}\int_{{\mathbb{R}}^d}p(t,x,y)\phi(y)dyX_0(dx).$$
The existence of the density of $X_t$ will be derived following Wang’s idea (see Theorem 2.1 of [@ptrf-97-wang]). For any $\epsilon>0$, $x\in {\mathbb{R}}^d$, let $p_{\epsilon}$ be the heat kernel on ${\mathbb{R}}^d$, that is $$p_{\epsilon}(x)=(2\pi \epsilon)^{-\frac{d}{2}}\exp\Big(-\frac{|x|^2}{2\epsilon}\Big).$$
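The proof below relies on the semigroup (Chapman-Kolmogorov) property of the heat kernel, $\int_{{\mathbb{R}}^d}p_{\epsilon_1}(x-z)p_{\epsilon_2}(z-y)dz=p_{\epsilon_1+\epsilon_2}(x-y)$. The following Python lines (illustrative only, with $d=1$ and arbitrary values of the parameters) check this identity by a simple Riemann sum.
\begin{verbatim}
import numpy as np

def p(eps, x):
    # heat kernel on R^d with d = 1
    return (2 * np.pi * eps) ** (-0.5) * np.exp(-x ** 2 / (2 * eps))

eps1, eps2, x, y = 0.3, 0.7, 0.4, -0.9       # arbitrary illustrative values
z = np.linspace(-20.0, 20.0, 40001)
dz = z[1] - z[0]
convolution = np.sum(p(eps1, x - z) * p(eps2, z - y)) * dz
print(convolution, p(eps1 + eps2, x - y))    # the two numbers agree up to quadrature error
\end{verbatim}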
\[lcsprt\] Let $X=\{X_t,t\in[0,T]\}$ be a solution to the martingale problem (\[mprf\]) - (\[qvmp\]). Assume that the initial measure $X_0\in M_F({\mathbb{R}}^d)$ has a bounded density $\mu$. Then, $$\begin{aligned}
\label{bdprt}
\int_0^T\int_{{\mathbb{R}}^d}{\mathbb{E}}\big(\big|X_t(p_{\epsilon}(x-\cdot))\big|^2\big)dxdt< \infty,\end{aligned}$$ and $$\begin{aligned}
\label{csprt}
\lim_{\epsilon_1,\epsilon_2\downarrow 0}\int_0^T\int_{{\mathbb{R}}^d}{\mathbb{E}}\big(\big|X_t(p_{\epsilon_1}(x-\cdot))-X_t(p_{\epsilon_2}(x-\cdot))\big|^2\big)dxdt=0.\end{aligned}$$
Let $\Gamma (t,(y_1,y_2); r, (z_1,z_2))$ be the fundamental solution to the PDE (\[dualpde\]) when $n=2$. We write $y=(y_1,y_2)$ and $ z=(z_1,z_2)\in{\mathbb{R}}^{2d}$. Then, for $f\in C_b^2({\mathbb{R}}^{2d})$, $$v(t,y)=\int_{{\mathbb{R}}^{2d}}\Gamma(t,y;0,z)f(z)dz,$$ is the unique solution to the PDE (\[dualpde\]) with initial condition $v_0=f$. Thus by Lemma \[lmfif\], we have $$\begin{aligned}
\label{bdps}
&{\mathbb{E}}\big[X_t(p_{\epsilon_1}(x-\cdot))X_t(p_{\epsilon_2}(x-\cdot))\big]\nonumber\\
&\qquad=\int_{{\mathbb{R}}^{2d}}\int_{{\mathbb{R}}^{2d}}\Gamma (t,y;0,z) p_{\epsilon_1}(x-z_1)p_{\epsilon_2}(x-z_2)dz X_0^{\otimes 2}(dy).\end{aligned}$$ By the inequality (6.12) of Friedman [@courier-13-friedman] on page 24, we know that there exists $C_{\Gamma},\lambda>0$, such that for any $0\leq r<t\leq T$, $$\begin{aligned}
\label{eubfs}
|\Gamma(t,y;r,z)|\leq C_{\Gamma} p_{\frac{t-r}{\lambda}}(y_1-z_1) p_{\frac{t-r}{\lambda}}(y_2-z_2).\end{aligned}$$ Therefore, by the semigroup property of heat kernels and Fubini’s theorem, we have $$\begin{aligned}
\label{410est1}
&\int_0^T\int_{{\mathbb{R}}^d} {\mathbb{E}}\big[X_t(p_{\epsilon_1}(x-\cdot))X_t(p_{\epsilon_2}(x-\cdot))\big]dxdt\nonumber\\
&\qquad=\int_0^T\int_{{\mathbb{R}}^{2d}}\int_{{\mathbb{R}}^{2d}}\Gamma (t,y;0,z) p_{\epsilon_1+\epsilon_2}(z_1-z_2)dz X_0^{\otimes 2}(dy)dt\end{aligned}$$ From (\[eubfs\]), (\[410est1\]) and the fact that $X_0\in M_F({\mathbb{R}}^d)$ has a bounded density $\mu$, it follows that (\[bdprt\]) is true.
Let $\mathcal{M}$ be the function on ${\mathbb{R}}^{2d}$, given by $$\mathcal{M}(z)= \int_0^T\int_{{\mathbb{R}}^{2d}}\Gamma(t,y;0,z)X_0^{\otimes 2}(dy)dt.$$ Notice that, for fixed $0\leq r<t\leq T$, $\Gamma(t,y;r,x)$ is uniformly continuous in the spatial arguments (see (6.13) of Friedman [@courier-13-friedman] on page 24). As a consequence, $\mathcal{M}$ is continuous. Therefore, by (\[eubfs\]) and the continuity of $\mathcal{M}$, the function $\mathcal{N}$ on ${\mathbb{R}}^d$ given by $$\mathcal{N}(x):=\int_{{\mathbb{R}}^d}\mathcal{M}(z_1, z_1-x)dz_1,$$ is integrable and continuous everywhere. It follows that $$\begin{aligned}
\label{410est2}
&\lim_{\epsilon_1,\epsilon_2\to 0}\int_0^t\int_{{\mathbb{R}}^d}{\mathbb{E}}\big[X_t(p_{\epsilon_1}(x-\cdot))X_t(p_{\epsilon_2}(x-\cdot))\big]dxdt\nonumber\\
&\qquad=\lim_{\epsilon_1,\epsilon_2\to 0}\int_{{\mathbb{R}}^{2d}}\mathcal{M}(z)p_{\epsilon_1+\epsilon_2}(z_1-z_2)dz\nonumber\\
&\qquad=\lim_{\epsilon_1,\epsilon_2\to 0}\int_{{\mathbb{R}}^d}\mathcal{N}(y)p_{\epsilon_1+\epsilon_2}(y)dy\nonumber\\
&\qquad=\mathcal{N}(0)= \int_0^T\int_{{\mathbb{R}}^d}\int_{{\mathbb{R}}^{2d}}\Gamma(t,y;0,(x,x))X_0^{\otimes 2}(dy)dxdt\end{aligned}$$ Therefore, (\[csprt\]) is a consequence of (\[410est2\]).
\[extdst\] Let $X=\{X_t,t\in[0,T]\}$ be a solution to the martingale problem (\[mprf\]) - (\[qvmp\]). Assume that the initial measure $X_0\in M_F({\mathbb{R}}^d)$ has a bounded density $\mu$. Then, for almost every $t\in (0,T]$, $X_t$ is absolutely continuous with respect to the Lebesgue measure almost surely.
As proved in Lemma \[lcsprt\], for any $x\in {\mathbb{R}}^d$ and any sequence $\epsilon_n\downarrow 0$, the sequence $\{X_t(p_{\epsilon_n}(x-\cdot))\}_{n\geq 1}$ is Cauchy in $L^2(\Omega \times {\mathbb{R}}^d \times [0,T])$. Hence it converges to some square integrable random field. By the same argument as in Theorem 2.1 of Wang [@ptrf-97-wang], we can show that the limit random field is the density of $X_t$ almost surely.
**Remark:** The assumption in Proposition \[extdst\], that the initial measure has a bounded density, cannot be removed. Actually, if we choose $X_0=\delta_0$, the Dirac delta mass at $0$, then $\int_0^T\int_{{\mathbb{R}}^d}\Gamma (t,0;0,(x,x)) dxdt$ behaves like $\int_0^Tt^{-\frac{d}{2}}dt$, which is finite only if $d=1$. This is another difference from the one-dimensional situation, in which $X_0(1)<\infty$ is enough to prove the existence of the density (see Theorem 2.1 of Wang [@ptrf-97-wang]).
Proof of Theorems \[unique\] and \[tmpd\]
-----------------------------------------
The proof of Theorems \[unique\] and \[tmpd\] is based on the equivalence of the martingale problem (\[mprf\]) - (\[qvmp\]) and the SPDE (\[dqmvp\]).
The equivalence between martingale problems and SDEs in finite dimensions was observed in the 1970s (see Stroock and Varadhan [@psbs-72-stroock-varadhan]). An alternative proof, given by Kurtz [@sa-11-kurtz], relies on the “Markov mapping theorem”. In a recent paper [@arxiv-18-biswas-etheridge-klimek], Biswas et al. generalized this result to the infinite dimensional case with one noise, following Kurtz’s idea. In the present paper, we establish a similar result with two noises by using the martingale representation theorem.
\[propspde\] Let $\mu\in C_b({\mathbb{R}}^d)\cap L^1({\mathbb{R}}^d)$ be a nonnegative function on ${\mathbb{R}}^d$. Then, $u=\{u_t, t\in[0,T]\}$ is the density of a solution of the martingale problem (\[mprf\]) - (\[qvmp\]) with initial density $\mu$, if and only if $u$ is a weak solution to the SPDE (\[dqmvp\]).
If $u$ is a weak solution to (\[dqmvp\]), then, as a consequence of Itô’s formula, $u$ is the density of a measure-valued process that solves the martingale problem (\[mprf\]) - (\[qvmp\]). It suffices to show the converse statement.
Let $X=\{X_t, t\in[0,T]\}$ be a solution to the martingale problem (\[mprf\]) - (\[qvmp\]) with initial density $\mu$. Then, by Proposition \[extdst\], for almost every $t\in[0,T]$, $X_t$ has a density almost surely. We denote by $u_t$ the density of $X_t$.
Consider $M=\{M_t, t\in[0,T]\}$ defined by (\[mprf\]) as an $\mathscr{S}'$-martingale (see Definition 2.1.2 of Kallianpur and Xiong [@ims-95-kallianpur-xiong]). Then, by Theorem 3.1.4 of [@ims-95-kallianpur-xiong], there exists a Hilbert space $\mathcal{H}^*\supset L^2({\mathbb{R}}^d)$, such that $M$ is an $\mathcal{H}^*$-valued martingale. Denote by $\mathcal{H}$ the dual space of $\mathcal{H}^*$.
Let $\mathfrak{H}_1=L^2({\mathbb{R}}^d;{\mathbb{R}}^d)$, and let $\mathfrak{H}_2$ be the completion of $\mathscr{S}$ with the inner product $$\langle \phi,\varphi\rangle_{\mathfrak{H}_2}:=\int_{{\mathbb{R}}^d\times {\mathbb{R}}^d}\kappa(x,y)\phi(x)\varphi(y)dxdy.$$ Consider the product space $\mathfrak{H}=\mathfrak{H}_1\times \mathfrak{H}_2$. Then, $\mathfrak{H}$ is a Hilbert space equipped with the inner product $$\big\langle (\phi_1,\phi_2),(\varphi_1,\varphi_2)\big\rangle_{\mathfrak{H}}:=\langle \phi_1, \varphi_1\rangle_{\mathfrak{H}_1}+\langle \phi_2,\varphi_2\rangle_{\mathfrak{H}_2}.$$ For any $t\in [0,T]$, let $\Psi_t:\mathcal{H}\to \mathfrak{H}$ be given by $\Psi_t(\phi)(x,y)=\big(\Psi^1_t(\phi)(x),\Psi^2_t(\phi)(y)\big)$, where $$\Psi^1_t(\phi)(x):=\int_{{\mathbb{R}}^d}\nabla \phi(y)^*h(x-y)u_t(y)dy,$$ and $$\Psi^2_t(\phi)(x):=\phi(x)u_t(x).$$ Then, for any $\phi,\varphi\in \mathcal{H}$, we have $$\begin{aligned}
\langle M(\phi), M(\varphi)\rangle_t=&\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}\nabla \phi(x)^*\rho(x-y)\nabla \varphi(y)X_s(dx)X_s(dy)ds\\
&+\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}\kappa(x,y)\phi(x)\varphi(y)X_s(dx)X_s(dy)ds\\
=&\int_0^t \langle \Psi_s (\phi), \Psi_s (\varphi)\rangle_{\mathfrak{H}} ds.\end{aligned}$$ Therefore, by the martingale representation theorem (see e.g. Theorem 3.3.5 of Kallianpur and Xiong [@ims-95-kallianpur-xiong]), there exists an $\mathfrak{H}$-cylindrical Brownian motion $\mathfrak{B}=\{\mathfrak{B}_t, 0\leq t\leq T\}$, such that $$\begin{aligned}
M_t(\phi)=&\int_0^t\big\langle\Psi_s(\phi), d\mathfrak{B}_s\big\rangle_{\mathfrak{H}}.\end{aligned}$$ Let $\mathfrak{B}^1=\{\mathfrak{B}^1_t(\phi),t\in [0,T],\phi\in \mathfrak{H}_1\}$ and $\mathfrak{B}^2=\{\mathfrak{B}^2_t(\varphi),t\in [0,T],\varphi\in\mathfrak{H}_2\}$ be given by $$\mathfrak{B}^1_t(\phi)=\mathfrak{B}_t(\phi,0)\ \mathrm{and}\ \mathfrak{B}^2_t(\varphi)=\mathfrak{B}_t(0, \varphi).$$ Then, $\mathfrak{B}^1$ and $\mathfrak{B}^2$ are $\mathfrak{H}_1$- and $\mathfrak{H}_2$-cylindrical Brownian motions respectively, and they are independent. As a consequence, we have $$\begin{aligned}
\label{ihs}
M_t(\phi)=\int_0^t\Big\langle\int_{{\mathbb{R}}^d}\nabla\phi(z)^*h(\cdot-z)X_s(dz), d\mathfrak{B}^1_s\Big\rangle_{\mathfrak{H}_1}
+\int_0^t\big\langle \phi(z)u_s(z), d\mathfrak{B}^2_s\big\rangle_{\mathfrak{H}_2}.\end{aligned}$$ Let $\{e_j\}_{j\geq 1}$ be a complete orthonormal basis of $\mathfrak{H}_2$. Then, by Theorem 3.2.5 of [@ims-95-kallianpur-xiong], $V=\{V_t,t\in[0,T]\}$, defined by $$V_t:=\sum_{j=1}^{\infty}\mathfrak{B}^2_t(e_j)e_j,$$ is a $\mathscr{S}'$-valued Wiener process with covariance $${\mathbb{E}}\big[V_s(\phi)V_t(\varphi)\big]=s\wedge t\int_{{\mathbb{R}}^d\times {\mathbb{R}}^d}\kappa(x,y)\phi(x)\varphi(y)dxdy,$$ for any $\phi,\varphi\in \mathscr{S}$. Therefore, by (\[ihs\]) and the equivalence of stochastic integrals with Hilbert space valued Brownian motion and Walsh’s integrals (see e.g. Proposition 2.6 of Dalang and Quer-Sardanyons [@em-11-dalang-sardanyons] for spatial homogeneous noises), $u$ is a weak solution to the SPDE (\[dqmvp\]).
By Propositions \[propmp\] and \[propspde\], the SPDE (\[dqmvp\]) has a weak solution that can be obtained by the branching particle approximation. Therefore, by the Yamada-Watanabe argument (see Yamada and Watanabe [@jmku-71-yamada-watanabe] and Kurtz [@ejp-07-kurtz]), it suffices to show pathwise uniqueness for the equation. Assume that $u$ and $\widetilde{u}$ are two strong solutions to the SPDE (\[dqmvp\]), and let $d=u-\widetilde{u}$. Then, $d=\{d_t(x), t\in[0,T], x\in {\mathbb{R}}^d\}$ is a solution to (\[dqmvp\]) with initial condition $\mu\equiv 0$. Thus $d$ is also the density of a solution to the martingale problem (\[mprf\]) - (\[qvmp\]) with initial measure $X_0\equiv 0$. By the moment duality (\[mmtidt\]), for any $\phi\in C_b^2({\mathbb{R}}^d)$, we have $$\begin{aligned}
{\mathbb{E}}\langle d_t, \phi\rangle^2=\exp(t){\mathbb{E}}\big(X_0(Y^{(2)}_t)\big)\equiv 0,\end{aligned}$$ where $Y^{(2)}$ is the dual process defined by (\[dual\]) in the case when $n=2$. It follows that $u=\widetilde{u}$ almost surely.
As a consequence of Yamada-Watanabe's argument, the weak solution to the SPDE (\[dqmvp\]) is unique in distribution. Assume hypotheses **\[H1\]** and **\[H2\]**. It follows that every weakly convergent subsequence of $\{X^n\}_{n\geq 1}$ converges to the same limit in $D([0,T]; M_F)$ in law. The limit has a density almost surely, which is a weak solution to the SPDE (\[dqmvp\]).
[**Remark:**]{} Assume that the initial measure has a bounded density. According to Theorems \[unique\] and \[tmpd\], Proposition \[extdst\], and (\[ihs\]), the martingale problem (\[mprf\]) - (\[qvmp\]) has a unique solution in distribution. If we allow the solution to the SPDE (\[dqmvp\]) to be a distribution-valued process, the existence and uniqueness of (\[dqmvp\]) still hold when the initial value is only a finite measure. This implies that the uniqueness of the martingale problem (\[mprf\]) - (\[qvmp\]) can be generalized to the case where the initial measure is only assumed to be finite.
Moment estimates for one-particle motion
========================================
In this section, we focus on the one-particle motion without branching. By using the techniques of Malliavin calculus, we will obtain moment estimates for the transition probability density of the particle motion conditional on the environment $W$. A brief introduction and several theorems on Malliavin calculus are stated in Appendix A. For a detailed account on this topic, we refer the readers to the book of Nualart [@springer-06-nualart].
Fix a time interval $[0, T]$. Let $B=\{B_t,0\leq t\leq T\}$ be a standard $d$-dimensional Brownian motion and let $W$ be a $d$-dimensional space-time white Gaussian random field on $[0,T]\times \mathbb{R}^d$ that is independent of $B$. Assume that $h\in H_2^3({\mathbb{R}}^d;{\mathbb{R}}^d\otimes {\mathbb{R}}^d)$. For any $0\leq r<t\leq T$, we denote by $\xi_t=\xi_t^{r,x}$ the path of the one-particle motion with initial position $\xi_r=x$. It satisfies the SDE: $$\begin{aligned}
\label{sde}
\xi_t=x+B_t-B_r+\int_r^t\int_{\mathbb{R}^d}h(y-\xi_u)W(du, dy).\end{aligned}$$
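To make the dynamics (\[sde\]) concrete, the following is a minimal Euler-Maruyama sketch in dimension $d=1$. The Gaussian-shaped kernel, the truncated spatial grid, and all numerical parameters are illustrative assumptions of the sketch and are not part of the model.

```python
import numpy as np

# Minimal Euler-Maruyama sketch of the one-particle SDE in d = 1:
# xi_{n+1} = xi_n + dB + sum_y h(y - xi_n) * W(dt, dy),
# where the space-time white noise W is approximated by independent
# N(0, dt * dy) increments over the cells of a truncated spatial grid.
rng = np.random.default_rng(0)

def h(z):
    # hypothetical smooth, square-integrable kernel (an assumption of the sketch)
    return np.exp(-z ** 2)

r, T, n_steps = 0.0, 1.0, 500
dt = (T - r) / n_steps
y = np.linspace(-8.0, 8.0, 1601)   # truncated spatial domain
dy = y[1] - y[0]

x = 0.0                            # initial position xi_r = x
xi = x
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt))                    # Brownian increment
    dW = rng.normal(0.0, np.sqrt(dt * dy), size=y.size)  # white-noise cell increments
    xi = xi + dB + np.sum(h(y - xi) * dW)                # Euler step

# The law of xi_T is Gaussian with mean x and variance (T - r)(1 + rho(0)),
# rho(0) = int h(y)^2 dy, as shown in the lemma on the law of xi below.
print("xi_T =", xi)
```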
We will apply the Malliavin calculus to $\xi_t$ with respect to the Brownian motion $B$. Let $H=L^2([0,T];{\mathbb{R}}^d)$ be the associated Hilbert space. By the Picard iteration scheme (see e.g. Theorem 2.2.1 of Nualart [@springer-06-nualart]), we can prove that for any $t\in(r, T]$, $\xi_t\in\cap_{p\geq 1}\mathbb{D}^{3,p}({\mathbb{R}}^d)$. In particular, $D\xi_t$ satisfies the following system of SDEs: $$\begin{aligned}
\label{sdexi1}
D^{(k)}_{\theta}\xi_t^i=\delta_{ik}-\sum_{j_1,j_2=1}^d\int_{\theta}^t\int_{\mathbb{R}^d} \partial_{j_1} h^{ij_2}(y-\xi_s) D^{(k)}_{\theta}\xi_s^{j_1} W^{j_2}(ds, dy),\quad 1\leq i,k\leq d,\end{aligned}$$ for any $\theta\in [r,t]$, and $D^{(k)}_{\theta}\xi_t^i=0$ for all $\theta>t$.
In order to simplify the expressions, we rewrite the stochastic integrals in (\[sdexi1\]) as integrals with respect to martingales. To this end, let $M=\{M_t, r\leq t\leq T\}$ be the $d\times d$ matrix-valued process given by $$\begin{aligned}
M_t=\sum_{k=1}^d\int_r^t\int_{\mathbb{R}^d}g_k(s,y) W^k(ds, dy),\end{aligned}$$ where $g_k:\Omega\times [r,T]\times {\mathbb{R}}^d\to {\mathbb{R}}^d\otimes {\mathbb{R}}^d$ is given by $$g_k^{ij}(t,y)=\partial_i h^{jk}(y-\xi_t),\quad 1\leq i,j,k\leq d.$$ Notice that $M_t$ is the sum of stochastic integrals, so it is a matrix-valued martingale. The quadratic covariations of $\{M^{ij}\}_{i,j=1}^d$ are bounded and deterministic: $$\begin{aligned}
\label{rfqvm}
&\left\langle M^{i_1 j_1}, M^{i_2 j_2}\right\rangle_t=\sum_{k=1}^d\int_r^t\int_{\mathbb{R}^d}\partial_{i_1} h^{j_1k}(y-\xi_s)\partial_{i_2} h^{j_2k}(y-\xi_s)dyds\\
&\qquad=(t-r)\sum_{k=1}^d\int_{\mathbb{R}^d}\partial_{i_1} h^{j_1k}(y)\partial_{i_2} h^{j_2k}(y)dy:=Q^{i_1,j_1}_{i_2,j_2}(t-r)\leq \|h\|_{3,2}(t-r).\nonumber\end{aligned}$$ Now the equation (\[sdexi1\]) can be rewritten as follows: $$\begin{aligned}
\label{sdexi1m}
D^{(k)}_{\theta}\xi_t^i=\delta_{ik}-\sum_{j=1}^d\int_{\theta}^t\int_{\mathbb{R}^d} D^{(k)}_{\theta}\xi_s^{j} dM^{ji}_s,\quad 1\leq i,k\leq d.\end{aligned}$$
For any $0\leq r<t\leq T$ and $x\in{\mathbb{R}}^d$, let $\gamma_t=\gamma_{\xi_t}$ be the Malliavin matrix of $\xi_t=\xi_t^{r,x}$. Then, $\gamma_t$ is nondegenerate almost surely.
We prove the lemma following Stroock’s idea (see Chapter 8 of Stroock [@lnm-83-stroock]). Let $\lambda_{\theta}(t)$ be the $d\times d$ symmetric random matrix given by $$\begin{aligned}
\lambda_{\theta}^{ij}(t)=\sum_{k=1}^d D_{\theta}^{(k)} \xi_t^i D_{\theta}^{(k)} \xi_t^j.\end{aligned}$$ Then, the Malliavin matrix of $\xi_t$ is the integral of $\lambda_{\theta}(t)$: $$\gamma_t=\int_r^t\lambda_{\theta}(t)d\theta.$$ By (\[sdexi1\]), (\[rfqvm\]) and Itô’s formula, we have $$\begin{aligned}
D^{(k)}_{\theta}\xi_t^iD^{(k)}_{\theta}\xi_t^j=&\delta_{ik}\delta_{kj}-\sum_{k_1=1}^d\int_{\theta}^t D^{(k)}_{\theta}\xi_s^i D^{(k)}_{\theta}\xi_s^{k_1}dM_s^{k_1j}-\sum_{k_2=1}^d\int_{\theta}^tD^{(k)}_{\theta}\xi_s^j D^{(k)}_{\theta}\xi_s^{k_2} dM_s^{k_2i}\nonumber\\
&+\sum_{k_1,k_2=1}^dQ^{k_1,j}_{k_2, i}\int_{\theta}^t D_{\theta}^{(k)}\xi_s^{k_1} D_{\theta}^{(k)}\xi_s^{k_2}ds.\end{aligned}$$ Therefore, $$\begin{aligned}
\label{lambda}
\lambda_{\theta}(t)=&I-\int_{\theta}^t \lambda_{\theta}(s) dM_s-\int_{\theta}^t dM_s^*\cdot\lambda_{\theta}(s)\nonumber\\
&+\sum_{k=1}^d \int_{\theta}^t\int_{\mathbb{R}^d} g_k^*(s,y)\lambda_{\theta}(s) g_k(s,y) dy d s.\end{aligned}$$ For any $\theta\in [r,t]$, we claim that $\lambda_{\theta}(t)$ is invertible almost surely, and its inverse $\beta_{\theta}(t)$ satisfies the following SDE: $$\begin{aligned}
\label{ivlambda}
\beta_{\theta}(t)=&I+\int_{\theta}^t \beta_{\theta}(s) dM_s^*+\int_{\theta}^t d M_s\cdot \beta_{\theta}(s)\\
&+\sum_{k=1}^d\int_{\theta}^t \int_{\mathbb{R}^d} \left(g_k(s,y)^2\beta_{\theta}(s)+g_k(s,y)\beta_{\theta}(s) g_k^*(s,y)+\beta_{\theta}(s)g_k^*(s,y)^2\right)dyds.\nonumber\end{aligned}$$ Indeed, by Itô's formula, we have $$\begin{aligned}
\label{sdeplambda}
d[\lambda_{\theta}(t) \beta_{\theta}(t)]&=-d M^*_t\cdot [\lambda_{\theta}(t) \beta_{\theta}(t)]+[\lambda_{\theta}(t) \beta_{\theta}(t)] d M^*_t\\
+\sum_{k=1}^d &\Big(\int_{\mathbb{R}^d}\big([\lambda_{\theta}(t) \beta_{\theta}(t)] g^*_k(t,y)^2-g_k^*(t,y)[\lambda_{\theta}(t) \beta_{\theta}(t)] g_k^*(t,y)\big)dy\Big) dt\nonumber.\end{aligned}$$ Note that $\lambda_{\theta}(t)\beta_{\theta}(t)\equiv I$ solves the SDE (\[sdeplambda\]) with initial value $\lambda_{\theta}(\theta)\beta_{\theta}(\theta)=I$. Therefore, the strong uniqueness of the linear SDE (\[sdeplambda\]) implies that $\lambda^{-1}_{\theta}(t)=\beta_{\theta}(t)$ almost surely.
Denote by $\|\cdot\|_2$ the Hilbert-Schmidt norm of matrices. By Jensen’s inequality (see Lemma 8.14 of Stroock [@lnm-83-stroock]), the following inequality holds almost surely $$\begin{aligned}
\label{hsniigamma}
\left\|\gamma^{-1}_t\right\|_2=\bigg\|\Big(\int_r^t \lambda_{\theta}(t)d\theta\Big)^{-1}\bigg\|_2\leq \frac{1}{(t-r)^2}\Big\|\int_r^t \beta_{\theta}(t)d \theta\Big\|_2.\end{aligned}$$ It is easy to show that $\displaystyle\sup_{\theta\in[r,t]}\big\|\|\beta_{\theta}(t)\|_2\big\|_{2p}<\infty$ for all $p\geq 1$. Therefore, the right-hand side of (\[hsniigamma\]) is finite a.s., and thus $\gamma_t$ is nondegenerate almost surely.
We denote by $\sigma_t=\gamma^{-1}_t$ the inverse of the Malliavin matrix of $\xi_t$. In the following lemma, we obtain some moment estimates for the derivatives of $\xi_t$ and $\sigma_t$. Before deriving these estimates, we introduce the following generalized Cauchy-Schwarz inequality.
\[ttpcsi\] Let $n_1,n_2$ be nonnegative integers, $u_1\in L^{2p}(\Omega; H^{\otimes n_1})$, and $u_2\in L^{2p}(\Omega; H^{\otimes n_2})$. Then, $u_1\otimes u_2 \in L^p(\Omega; H^{\otimes (n_1+n_2)})$, and $$\begin{aligned}
\label{tpcsi}
\big\|\|u_1\otimes u_2\|_{ H^{\otimes (n_1+n_2)}}\big\|_{p} \leq \big\|\|u_1\|_{H^{\otimes n_1}}\big\|_{2p}\big\|\|u_2\|_{H^{\otimes n_2}}\big\|_{2p}.\end{aligned}$$
The lemma can be obtained by the classical Cauchy-Schwarz inequality.
\[elgamma\] For any $p\geq 1$ and $0 \leq r<t\leq T$, there exists a constant $C>0$, depending on $T$, $d$, $\|h\|_{3,2}$, and $p$, such that $$\begin{aligned}
\max_{1\leq i\leq d}\left\|\|D \xi_t^i\|_{H}\right\|_{2p}\leq &C(t-r)^{\frac{1}{2}}. \label{medxih}\\
\max_{1\leq i,j\leq d}\left\|\sigma_t^{ij}\right\|_{2p}\leq &C(t-r)^{-1},\label{eigamma}\\
\max_{1\leq i,j\leq d}\left\|\|D\sigma_t^{ij}\|_{H}\right\|_{2p}\leq &C,\label{edgamma}\\
\max_{1\leq i\leq d}\left\|\|D^2\xi_t^i\|_{H^{\otimes 2}}\right\|_{2p}\leq &C(t-r)^{\frac{3}{2}}.\label{eddxi}\\
\max_{1\leq i,j\leq d}\left\|\|D^2\sigma_t^{ij}\|_{H^{\otimes 2}}\right\|_{2p}\leq &C(t-r)^{\frac{1}{2}},\label{ed2gamma}\\
\max_{1\leq i\leq d}\left\|\|D^3\xi_t^i\|_{H^{\otimes 3}}\right\|_{2p}\leq &C(t-r)^2.\label{edddxi}\end{aligned}$$
[**(i)**]{} By (\[rfqvm\]), (\[sdexi1m\]), Jensen’s, Burkholder-Davis-Gundy’s, and Minkowski’s inequalities, we have $$\begin{aligned}
\label{lambdai}
\sum_{i,k=1}^d\big\|D^{(k)}_{\theta}\xi_t^i\big\|_{2p}^2\leq &\sum_{i,k=1}^d\bigg(\delta_{ik}+\sum_{j=1}^d\Big\|\int_{\theta}^t\int_{\mathbb{R}^d} D^{(k)}_{\theta}\xi_s^j d M^{ji}_s\Big\|_{2p}\bigg)^2\nonumber\\
\leq &(d+1)\sum_{i,k=1}^d\bigg(\delta_{ik}+\sum_{j=1}^d\Big\|\int_{\theta}^t\int_{\mathbb{R}^d} D^{(k)}_{\theta}\xi_s^j d M^{ji}_s\Big\|_{2p}^2\bigg)\nonumber\\
\leq &d(d+1)+(d+1)c_p\sum_{i,j,k=1}^d Q_{ji}^{ji}\Big\|\int_{\theta}^t \big| D^{(k)}_{\theta}\xi_s^j\big|^2 d s\Big\|_{p}\nonumber\\
\leq &d(d+1)+2c_p d(d+1) \|h\|_{3,2}^2\sum_{j,k=1}^d \int_{\theta}^t \big\| D^{(k)}_{\theta}\xi_s^j\big\|_{2p}^2 ds.\end{aligned}$$ Thus by Grönwall’s lemma, we have $$\begin{aligned}
\label{elambda}
\sum_{i,k=1}^d \big\|D^{(k)}_{\theta}\xi_t^i\big\|_{2p}^2\leq d(d+1)\exp\left(2c_pd(d+1)\|h\|_{3,2}^2T\right):= C.\end{aligned}$$ Therefore, by (\[elambda\]) and Minkowski's inequality, we have $$\begin{aligned}
\left\|\|D \xi_t^i\|_{H}\right\|_{2p}^2=\bigg\|\sum_{k=1}^d\int_r^t |D^{(k)}_{\theta}\xi_t^{i}|^2d\theta\bigg\|_{p}\leq \sum_{k=1}^d\int_r^t \big\|D^{(k)}_{\theta}\xi_t^{i}\big\|^2_{2p}d\theta\leq C(t-r).\end{aligned}$$
[**(ii)**]{} In order to prove (\[eigamma\]), we rewrite the SDE (\[ivlambda\]) in the following way: $$\begin{aligned}
\label{ivlembdai}
\beta^{ij}_{\theta}(t)=&\delta_{ij}+\sum_{k_1=1}^d\int_{\theta}^t\beta^{ik_1}_{\theta}(s) dM^{jk_1}_s+\sum_{k_2=1}^d\int_{\theta}^t\beta^{k_2j}_{\theta}(s) dM^{ik_2}_s\nonumber\\
&+\sum_{k_1,k_2=1}^d\Big(Q^{i,k_1}_{k_1,k_2}\int_{\theta}^t\beta^{k_2j}_{\theta}(s)ds\Big)+\sum_{k_1,k_2=1}^d\Big(Q^{i,k_1}_{j,k_2}\int_{\theta}^t\beta^{k_1k_2}_{\theta}(s)ds\Big)\nonumber\\
&+\sum_{k_1,k_2=1}^d\Big(Q^{k_2,k_1}_{j,k_2}\int_{\theta}^t\beta^{ik_1}_{\theta}(s)ds\Big).\end{aligned}$$ Similarly as in step [**(i)**]{}, by Burkholder-Davis-Gundy's and Minkowski's inequalities, we can show that the martingale terms satisfy the following inequality $$\begin{aligned}
\label{tdiffusioni}
\Big\|\int_{\theta}^t\beta^{ik_1}_{\theta}(s)d M_s^{jk_1}\Big\|_{2p}^2\leq 2c_p \|h\|_{3,2}^2 \int_{\theta}^t\big\|\beta^{ik_1}_{\theta}(s)\big\|_{2p}^2ds.\end{aligned}$$ For the drift terms, by Minkowski’s and Jensen’s inequality, we have $$\begin{aligned}
\label{tdrifti}
\Big\|\int_{\theta}^t \beta^{k_1k_2}_{\theta}(s)ds\Big\|_{2p}^2\leq (t-\theta) \int_{\theta}^t \left\|\beta^{k_1k_2}_{\theta}(s)\right\|_{2p}^2 ds.\end{aligned}$$ Then, by (\[ivlembdai\]) - (\[tdrifti\]), and Grönwall’s lemma, we have $$\begin{aligned}
\sum_{i,j=1}^d \left\|\beta^{ij}_{\theta}(t)\right\|_{2p}^2\leq C.\end{aligned}$$ Thus by Minkowski’s and Jensen’s inequalities, we have $$\begin{aligned}
\label{ebbeta}
\bigg\|\Big\|\int_r^t \beta_{\theta}(t)d \theta\Big\|_2\bigg\|_{2p}\leq c_d\sum_{i,j=1}^d\int_r^t\left\|\beta^{ij}_{\theta}(t)\right\|_{2p} d\theta \leq C(t-r).\end{aligned}$$ Therefore, (\[eigamma\]) follows from (\[hsniigamma\]), (\[ebbeta\]), Minkowski’s and Jensen’s inequalities.
[**(iii)**]{} By integrating equation (\[lambda\]) on both sides with respect to $\theta$, and applying the stochastic Fubini theorem, we have $$\begin{aligned}
\label{gmsde}
\gamma_t=\int_r^t \lambda_{\theta}(t)d\theta=&I(t-r)-\int_r^t \gamma_s dM_s-\int_r^t d M^*_s\cdot \gamma_s\\
&+\sum_{m=1}^d\int_r^t \int_{\mathbb{R}^d} g_m^*(s, y)\gamma_s g_m(s, y) dy ds.\nonumber\end{aligned}$$ Taking the Malliavin derivative on both sides of (\[gmsde\]), we have the following SDE: $$\begin{aligned}
\label{dgme}
D^{(k)}_{\theta}\gamma_t^{ij}=&- \sum_{k_1=1}^d\int_{\theta}^tD^{(k)}_{\theta}\gamma_s^{ik_1} d M_s^{k_1j}-\sum_{k_1=1}^d\int_{\theta}^t \gamma_s^{ik_1} d \left(D^{(k)}_{\theta} M^{k_1j}_s\right)\nonumber\\
&-\sum_{k_2=1}^d\int_{\theta}^tD^{(k)}_{\theta}\gamma^{k_2j}_s d M^{k_2i}_s-\sum_{k_2=1}^d\int_{\theta}^t\gamma^{k_2j}_s d\left(D^{(k)}_{\theta} M^{k_2i}_s\right)\nonumber\\
&+\sum_{k_1,k_2=1}^d\left(Q^{k_1,i}_{k_2,j}\int_{\theta}^t D^{(k)}_{\theta}\gamma^{k_1k_2}_s ds\right),\end{aligned}$$ where $$\begin{aligned}
\label{drdm}
D^{(k)}_{\theta} M^{ij}_s=-\sum_{i_1,i_2=1}^d\int_{\theta}^s \int_{\mathbb{R}^d} \partial_{i,i_2}h^{ji_1}\left(y-\xi_u\right)D^{(k)}_{\theta}\xi^{i_2}_u W^{i_1}(du, dy).\end{aligned}$$ For the first and the third terms, by similar arguments as in (\[lambdai\]), we can show that $$\begin{aligned}
\label{diffusiondgamma1}
\Big\|\int_{\theta}^tD^{(k)}_{\theta}\gamma^{ik_1}_s d M_s^{k_1j}\Big\|_{2p}^2\leq c_{d,p} \|h\|_{3,2}^2 \int_{\theta}^t \big\|D^{(k)}_{\theta}\gamma^{ik_1}_s\big\|_{2p}^2ds.\end{aligned}$$ To estimate the second and the fourth term, notice that by (\[medxih\]), we have $$\begin{aligned}
\label{egamma}
\max_{1\leq i,j\leq d}\left\|\gamma_t^{ij}\right\|_{2p}=&\max_{1\leq i,j\leq d}\left\|\langle D\xi_t^i, D\xi_t^j\rangle_H\right\|_{2p}\nonumber\\
\leq &\max_{1\leq i\leq d}\left\|\| D\xi_t^i\|_H\right\|_{4p} \max_{1\leq j\leq d}\left\|\|D\xi_t^j\|_H\right\|_{4p}\leq C(t-r).\end{aligned}$$ Therefore, by (\[elambda\]), (\[drdm\]), (\[egamma\]), Jensen’s, Burkholder-Davis-Gundy’s, Minkowski’s, and Cauchy-Schwarz’s inequalities, we have $$\begin{aligned}
\label{diffusionddm1}
\Big\|\int_{\theta}^t\gamma_s^{ik_1} d \Big(D^{(k)}_{\theta} M^{k_1j}_s\Big)\Big\|_{2p}^2\leq &c_{d,p}\|h\|_{3,2}^2\sum_{k_2=1}^d\int_{\theta}^t\big\|\gamma_s^{ik_1}\|_{4p}^2\|D^{(k)}_{\theta}\xi^{k_2}_s \big\|_{4p}^2ds\nonumber\\
\leq& C(t-r)^3.\end{aligned}$$ For the last term, by Minkowski’s and Jensen’s inequalities, we have $$\begin{aligned}
\label{driftdgamma1}
\Big\|\int_{\theta}^t D^{(k)}_{\theta}\gamma^{k_1k_2}_sds\Big\|_{2p}^2\leq (t-\theta) \int_{\theta}^t \big\|D^{(k)}_{\theta}\gamma^{k_1k_2}_s\big\|_{2p}^2 ds\leq T \int_{\theta}^t \big\|D^{(k)}_{\theta}\gamma^{k_1k_2}_s\big\|_{2p}^2 ds.\end{aligned}$$ Combining (\[dgme\]) - (\[driftdgamma1\]), we obtain the following inequality $$\begin{aligned}
\sum_{i,j=1}^d \big\|D^{(k)}_{\theta} \gamma_t^{ij}\big\|_{2p}^2\leq c_1 \int_{\theta}^t\sum_{i,j=1}^d \big\|D^{(k)}_{\theta} \gamma_s^{ij}\big\|_{2p}^2 ds+c_2(t-r)^3,\end{aligned}$$ where $c_1,c_2$ depend on $T$, $d$, $\|h\|_{3,2}^2$, and $p$. Thus by Grönwall's lemma, we have $$\begin{aligned}
\label{idgamma}
\sum_{i,j=1}^d\big\|D^{(k)}_{\theta} \gamma_t^{ij}\big\|_{2p}^2\leq C(t-r)^3.\end{aligned}$$ It follows that $$\begin{aligned}
\label{idgammah}
\big\|\|D\gamma_t^{ij}\|_H\big\|_{2p}\leq C(t-r)^2.\end{aligned}$$ Notice that $\gamma_t\sigma_t=I$ a.s.; as a consequence, $D\left(\gamma_t\sigma_t\right)=DI\equiv 0$. That implies $$\begin{aligned}
\label{digmf}
D\sigma^{ij}_t=-\sum_{i_1,i_2=1}^d\sigma^{ii_1}_tD\gamma^{i_1i_2}_t\sigma^{i_2j}_t.\end{aligned}$$ Then, (\[edgamma\]) follows from (\[tpcsi\]), (\[eigamma\]), (\[idgammah\]) and (\[digmf\]).
[**(iv)**]{} Fix $0\leq r<t\leq T$. For any $\theta_1,\theta_2\in [r,t]$, let $\theta=\theta_1\vee \theta_2$. Taking the Malliavin derivative on both sides of (\[sdexi1m\]), we have the following SDE: $$\begin{aligned}
\label{d2xisde}
D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_t^i=&-\sum_{j_1=1}^d\int_{\theta}^t D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_s^{j_1} dM^{j_1i}_s\nonumber\\
&+\sum_{j_1,j_2,j_3=1}^d\int_{\theta}^t\int_{\mathbb{R}^d}\partial_{j_2,j_3} h^{ij_1}(y-\xi_s) D^{(k_1)}_{\theta_1}\xi_s^{j_2} D^{(k_2)}_{\theta_2}\xi_s^{j_3}W^{j_1}(ds, dy).\end{aligned}$$ Similarly as in (\[lambdai\]), we can show the following inequalities $$\begin{aligned}
\label{ed2xi1}
\Big\|\int_{\theta}^t D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_s^{j_1} dM^{j_1i}_s\Big\|_{2p}^2\leq c_{d,p}\|h\|_{3,2}^2\int_{\theta}^t \big\|D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_s^{j_1}\big\|_{2p}^2ds,\end{aligned}$$ and $$\begin{aligned}
\label{ed2xi2}
&\Big\|\int_{\theta}^t\int_{\mathbb{R}^d}\partial_{j_2,j_3} h^{ij_1}(y-\xi_s) D^{(k_1)}_{\theta_1}\xi_s^{j_2} D^{(k_2)}_{\theta_2}\xi_s^{j_3}W^{j_1}(ds, dy)\Big\|_{2p}^2\nonumber\\
&\qquad \leq c_p\|h\|_{3,2}^2\int_{\theta}^t\big\|D^{(k_1)}_{\theta_1}\xi_s^{j_2}\big\|_{4p}^2\big\| D^{(k_2)}_{\theta_2}\xi_s^{j_3}\big\|_{4p}^2ds\leq C (t-r).\end{aligned}$$ Thus combining (\[d2xisde\]) - (\[ed2xi2\]), we have $$\begin{aligned}
\sum_{i=1}^d\big\|D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_t^i\big\|_{2p}^2\leq &c_1\sum_{i=1}^d\int_{\theta}^t \big\|D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_s^i\big\|_{2p}^2ds+c_2(t-r).\end{aligned}$$ Then, it follows from Grönwall’s lemma that $$\begin{aligned}
\label{mdxi2}
\sum_{i=1}^d\big\|D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_t^i\big\|_{2p}^2\leq C(t-r).\end{aligned}$$ The inequality (\[eddxi\]) is a consequence of (\[mdxi2\]), Jensen’s and Minkowski’s inequalities.
[**(v)**]{} For any $\theta_1,\theta_2\in [r,t]$ and $\theta=\theta_1\vee\theta_2$, by taking the Malliavin derivative on both sides of (\[dgme\]), we have $$\begin{aligned}
\label{rfdgamma2}
&D^{(k_1,k_2)}_{\theta_1,\theta_2}\gamma_t^{ij}=- \sum_{i_1=1}^d\Big(\int_{\theta}^tD^{(k_1,k_2)}_{\theta_1,\theta_2}\gamma_s^{ii_1} d M_s^{i_1j}+\int_{\theta}^tD^{(k_1)}_{\theta_1}\gamma_s^{ii_1} d \left(D^{(k_2)}_{\theta_2}M_s^{i_1j}\right)\Big)\nonumber\\
&\quad -\sum_{i_1=1}^d\Big(\int_{\theta}^t D^{(k_2)}_{\theta_2}\gamma_s^{ii_1} d \left(D^{(k_1)}_{\theta_1} M^{i_1j}_s\right)+\int_{\theta}^t \gamma_s^{ii_1} d \left(D^{(k_1,k_2)}_{\theta_1,\theta_2} M^{i_1j}_s\right)\Big)\nonumber\\
&\quad -\sum_{i_2=1}^d\Big(\int_{\theta}^tD^{(k_1,k_2)}_{\theta_1,\theta_2}\gamma^{i_2j}_s d M^{i_2i}_s+\int_{\theta}^tD^{(k_1)}_{\theta_1}\gamma^{i_2j}_s d \left(D^{(k_2)}_{\theta_2}M^{i_2i}_s\right)\Big)\nonumber\\
&\quad -\sum_{i_2=1}^d\Big(\int_{\theta}^tD^{(k_2)}_{\theta_2}\gamma^{i_2j}_s d\left(D^{(k_1)}_{\theta_1} M^{i_2i}_s\right)+\int_{\theta}^t\gamma^{i_2j}_s d\left(D^{(k_1,k_2)}_{\theta_1,\theta_2} M^{i_2i}_s\right)\Big)\nonumber\\
&\quad +\sum_{i_1,i_2=1}^d\Big(Q^{i_1,i}_{i_2,j}\int_{\theta}^t D^{(k_1,k_2)}_{\theta_1,\theta_2}\gamma^{i_1i_2}_s ds\Big),\end{aligned}$$ where $$\begin{aligned}
D^{(k_1,k_2)}_{\theta_1,\theta_2} M^{ij}_s=&-\sum_{j_1,j_2,j_3=1}^d\int_{\theta}^s \int_{\mathbb{R}^d} \partial_{i,j_2,j_3}h^{jj_1}\left(y-\xi_u\right)D^{(k_1)}_{\theta_1}\xi^{j_2}_uD^{(k_2)}_{\theta_2}\xi^{j_3}_uW^{j_1}(du, dy)\\
&+\sum_{j_1,j_2=1}^d \int_{\theta}^s \int_{\mathbb{R}^d} \partial_{i,j_2}h^{jj_1}\left(y-\xi_u\right)D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi^{j_2}_uW^{j_1}(du, dy).\end{aligned}$$ By (\[elambda\]), (\[egamma\]), (\[idgamma\]), (\[mdxi2\]), Burkholder-Davis-Gundy's, Minkowski's and Hölder's inequalities, we have the following inequalities $$\begin{aligned}
\label{ddgdm}
\Big\|\int_{\theta}^tD^{(k_1,k_2)}_{\theta_1,\theta_2}\gamma_s^{ii_1} d M_s^{i_1j}\Big\|_{2p}^2\leq c_{d,p}\|h\|_{3,2}^2\int_{\theta}^t\big\|D^{(k_1,k_2)}_{\theta_1,\theta_2}\gamma_s^{ii_1}\big\|_{2p}^2ds,\end{aligned}$$ $$\begin{aligned}
\label{dgmddm}
&\Big\|\int_{\theta}^tD^{(k_1)}_{\theta_1}\gamma_s^{ii_1} d \left(D^{(k_2)}_{\theta_2}M_s^{i_1j}\right)\Big\|_{2p}^2\leq c_{d,p}\|h\|_{3,2}^2\sum_{i_2=1}^d \int_{\theta}^t\left\|D^{(k_1)}_{\theta_1}\gamma_s^{ii_1}D_{\theta_2}^{(k_2)}\xi_s^{i_2}\right\|_{2p}^2ds\nonumber\\
\leq &c_{d,p}\|h\|_{3,2}^2\sum_{i_2=1}^d \int_{\theta}^t\left\|D^{(k_1)}_{\theta_1}\gamma_s^{ii_1}\right\|_{4p}^2\left\|D_{\theta_2}^{(k_2)}\xi_s^{i_2}\right\|_{4p}^2ds\leq C(t-r)^4,\end{aligned}$$ and $$\begin{aligned}
&\Big\|\int_{\theta}^t\gamma_s^{ii_1}d\left(D^{(k_1,k_2)}_{\theta_1,\theta_2}M_s^{i_1j}\right)\Big\|_{2p}^2\\
\leq&c_d\bigg(\sum_{j_1,j_2,j_3=1}^d\Big\|\int_{\theta}^t \int_{\mathbb{R}^d} \gamma_s^{ii_1}\partial_{i_1,j_2,j_3}h^{jj_1}\left(y-\xi_s\right)D^{(k_1)}_{\theta_1}\xi^{j_2}_sD^{(k_2)}_{\theta_2}\xi^{j_3}_sW^{j_1}(ds, dy)\Big\|_{2p}^2 \nonumber\\
&+\sum_{j_1,j_2=1}^d \Big\|\int_{\theta}^t \int_{\mathbb{R}^d} \gamma_s^{ii_1}\partial_{i_1,j_2}h^{jj_1}\left(y-\xi_s\right)D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi^{j_2}_sW^{j_1}(ds, dy)\Big\|_{2p}^2\bigg):=c_d\left(I_1+I_2\right).\nonumber\end{aligned}$$ We estimate $I_1$, $I_2$ as follows: $$\begin{aligned}
I_1\leq& d\|h\|_{3,2}^2\sum_{j_2,j_3=1}^d\int_{\theta}^t\left\|\gamma_s^{ii_1}\right\|_{6p}^2\big\|D^{(k_1)}_{\theta_1}\xi^{j_2}_s\big\|_{6p}^2\big\|D^{(k_2)}_{\theta_2}\xi^{j_3}_s\big\|_{6p}^2 ds\leq C(t-r)^3,\end{aligned}$$ and $$\begin{aligned}
I_2\leq d\|h\|_{3,2}^2\sum_{j_2=1}^d\int_{\theta}^t\left\|\gamma_s^{ii_1}\right\|_{4p}^2\big\|D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi^{j_2}_s\big\|_{4p}^2ds\leq C(t-r)^4\leq CT(t-r)^3.\end{aligned}$$ Thus we have $$\begin{aligned}
\label{gmd3m0}
\Big\|\int_{\theta}^t\gamma_s^{ii_1}d\left(D^{(k_1,k_2)}_{\theta_1,\theta_2}M_s^{i_1j}\right)\Big\|_{2p}^2\leq C(t-r)^3.\end{aligned}$$ Therefore, combining (\[rfdgamma2\]) - (\[gmd3m0\]), we have $$\begin{aligned}
\sum_{i,j=1}^d\left\|D^{(k_1,k_2)}_{\theta_1,\theta_2}\gamma_t^{ij}\right\|_{2p}^2\leq c_1(t-r)^3+ c_2\sum_{i,j=1}^d\int_\theta^t\left\|D^{(k_1,k_2)}_{\theta_1,\theta_2}\gamma_s^{ij}\right\|_{2p}^2ds.\end{aligned}$$ By Grönwall's lemma, we have $$\begin{aligned}
\label{ddgm0}
\sum_{i,j=1}^d\big\|D^{(k_1,k_2)}_{\theta_1,\theta_2}\gamma_t^{ij}\big\|_{2p}^2\leq C(t-r)^3,\end{aligned}$$ which implies $$\begin{aligned}
\left\|\|D^2\gamma_t^{ij}\|_{H^{\otimes 2}}\right\|_{2p}\leq C(t-r)^\frac{5}{2}.\end{aligned}$$ By taking the second Malliavin derivative of $\gamma_t\sigma_t\equiv I$, we have $$\begin{aligned}
\label{ddigm}
D^2\sigma^{ij}_t=&-\sum_{i_1,i_2=1}^d\sigma^{ii_1}_t\big(D^2\gamma^{i_1i_2}_t\sigma^{i_2j}_t+D\gamma^{i_1i_2}_t\otimes D\sigma^{i_2j}_t+D\sigma^{i_2j}_t\otimes D\gamma^{i_1i_2}_t\big).\end{aligned}$$ Then, (\[ed2gamma\]) can be deduced by (\[tpcsi\]), (\[eigamma\]), (\[edgamma\]), (\[idgammah\]) and (\[ddigm\]).
[**(vi)**]{} For any $\theta_1,\theta_2,\theta_3\in [r,t]$, let $\theta=\theta_1\vee \theta_2\vee \theta_3$. Taking the Malliavin derivative on both sides of (\[d2xisde\]), we have $$\begin{aligned}
\label{d3xisde}
D^{(k_1,k_2,k_3)}_{\theta_1,\theta_2,\theta_3}&\xi_t^i=\sum_{j_1,j_2,j_3=1}^d\int_{\theta}^t\int_{\mathbb{R}^d} \partial_{j_2,j_3} h^{ij_1}(y-\xi_s) D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_s^{j_2}D^{(k_3)}_{\theta_3}\xi_s^{j_3} W^{j_1}(ds, dy)\nonumber\\
&-\sum_{j_1,j_2=1}^d\int_{\theta}^t\int_{\mathbb{R}^d} \partial_{j_2} h^{ij_1}(y-\xi_s) D^{(k_1,k_2,k_3)}_{\theta_1,\theta_2,\theta_3}\xi_s^{j_2} W^{j_1}(ds, dy)\nonumber\\
&-\sum_{j_1,j_2,j_3,j_4=1}^d\int_{\theta}^t\int_{\mathbb{R}^d}\partial_{j_2,j_3,j_4} h^{ij_1}(y-\xi_s) D^{(k_1)}_{\theta_1}\xi_s^{j_2} D^{(k_2)}_{\theta_2}\xi_s^{j_3}D^{(k_3)}_{\theta_3}\xi_s^{j_4}W^{j_1}(ds, dy)\nonumber\\
&+\sum_{j_1,j_2,j_3=1}^d\int_{\theta}^t\int_{\mathbb{R}^d}\partial_{j_2,j_3} h^{ij_1}(y-\xi_s) D^{(k_1,k_3)}_{\theta_1, \theta_3}\xi_s^{j_2} D^{(k_2)}_{\theta_2}\xi_s^{j_3}W^{j_1}(ds, dy)\nonumber\\
&+\sum_{j_1,j_2,j_3=1}^d\int_{\theta}^t\int_{\mathbb{R}^d}\partial_{j_2,j_3} h^{ij_1}(y-\xi_s) D^{(k_1)}_{\theta_1}\xi_s^{j_2} D^{(k_2,k_3)}_{\theta_2,\theta_3}\xi_s^{j_3}W^{j_1}(ds, dy).\end{aligned}$$ By (\[elambda\]), (\[mdxi2\]), Burkholder-Davis-Gundy’s, Minkowski’s, and Hölder’s inequalities, we have the following inequalities: $$\begin{aligned}
\label{d3xi1}
&\Big\|\int_{\theta}^t\int_{\mathbb{R}^d} \partial_{j_2,j_3} h^{ij_1}(y-\xi_s) D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_s^{j_2}D^{(k_3)}_{\theta_3}\xi_s^{j_3} W^{j_1}(ds, dy)\Big\|_{2p}^2\nonumber\\
\leq &c_p\|h\|_{3,2}^2\int_{\theta}^t \big\|D^{(k_1, k_2)}_{\theta_1,\theta_2}\xi_s^{j_2}\big\|_{4p}^2\big\|D^{(k_3)}_{\theta_3}\xi_s^{j_3}\big\|_{4p}^2ds\leq C(t-r)^2,\end{aligned}$$ $$\begin{aligned}
\label{d2xi2}
&\Big\|\int_{\theta}^t\int_{\mathbb{R}^d} \partial_{j_2} h^{ij_1}(y-\xi_s) D^{(k_1,k_2,k_3)}_{\theta_1,\theta_2,\theta_3}\xi_s^{j_2} W^{j_1}(ds, dy)\Big\|_{2p}^2\nonumber\\
\leq&c_p\|h\|_{3,2}^2\int_{\theta}^t\big\|D^{(k_1,k_2,k_3)}_{\theta_1,\theta_2,\theta_3}\xi_s^{j_2}\big\|_{2p}^2ds,\end{aligned}$$ and $$\begin{aligned}
\label{d3xi3}
&\Big\|\int_{\theta}^t\int_{\mathbb{R}^d}\partial_{j_2,j_3,j_4} h^{ij_1}(y-\xi_s) D^{(k_1)}_{\theta_1}\xi_s^{j_2} D^{(k_2)}_{\theta_2}\xi_s^{j_3}D^{(k_3)}_{\theta_3}\xi_s^{j_4}W^{j_1}(ds, dy)\Big\|_{2p}^2\nonumber\\
\leq&c_p\|h\|_{3,2}^2\int_{\theta}^t\big\|D^{(k_1)}_{\theta_1}\xi_s^{j_2}\big\|_{6p}^2\big\|D^{(k_2)}_{\theta_2}\xi_s^{j_3}\big\|_{6p}^2\big\|D^{(k_3)}_{\theta_3}\xi_s^{j_4}\big\|_{6p}^2ds\leq C(t-r).\end{aligned}$$ Thus combining (\[d3xisde\]) - (\[d3xi3\]), by Jensen's inequality, we have $$\begin{aligned}
\sum_{i=1}^d\big\|D^{(k_1,k_2,k_3)}_{\theta_1,\theta_2,\theta_3}\xi_t^i\big\|_{2p}^2\leq &c_1\sum_{i=1}^d\int_{\theta}^t \big\|D^{(k_1,k_2,k_3)}_{\theta_1,\theta_2,\theta_3}\xi_s^i\big\|_{2p}^2ds+c_2(t-r).\end{aligned}$$ Then, the following inequality follows from Grönwall's lemma $$\begin{aligned}
\label{d3xi0}
\sum_{i=1}^d\big\|D^{(k_1,k_2,k_3)}_{\theta_1,\theta_2,\theta_3}\xi_t^i\big\|_{2p}^2\leq C(t-r).\end{aligned}$$ Therefore, (\[edddxi\]) is a consequence of (\[d3xi0\]).
In the next lemma, we derive estimates for the moments of increments of the derivatives of $\xi_t$ and $\sigma_t$.
\[medxigm\] For any $p\geq 1$, $0 \leq r< s<t\leq T$, and $1\leq i,j\leq d$, there exists a constant $C>0$, depending on $T$, $d$, $p$, and $\|h\|_{3,2}$, such that $$\begin{aligned}
\max_{1\leq i\leq d}\left\|\|D\xi_t^i-D\xi_s^i\|_{H}\right\|_{2p}\leq &C(t-s)^{\frac{1}{2}},\label{ddxi}\\
\max_{1\leq i,j\leq d}\left\|\sigma_t^{ij}-\sigma_s^{ij}\right\|_{2p}\leq &C(t-r)^{-\frac{1}{2}}(s-r)^{-1}(t-s)^{\frac{1}{2}},\label{dgamma}\\
\max_{1\leq i,j\leq d}\left\|\|D\sigma_t^{ij}-D\sigma_s^{ij}\|_{H}\right\|_{2p}\leq &C(t-r)^{-\frac{1}{2}}(t-s)^{\frac{1}{2}},\label{ddgamma}\\
\max_{1\leq i\leq d}\left\|\|D^2\xi_t^i-D^2\xi_s^i\|_{H^{\otimes 2}}\right\|_{2p}\leq &C(t-r)(t-s)^{\frac{1}{2}}.\label{ddxi2}\end{aligned}$$
[**(i)**]{} By (\[sdexi1m\]), we have $$\begin{aligned}
D^{(k)}_{\theta}\xi_t^i-D^{(k)}_{\theta}\xi_s^i=\delta_{ik}{\mathbf{1}}_{[s,t]}(\theta)-\sum_{j=1}^d\int_{\theta\vee s}^tD^{(k)}_{\theta}\xi_u^j dM^{ji}_u.\end{aligned}$$ Thus by (\[elambda\]), Burkholder-Davis-Gundy’s, Jensen’s, and Minkowski’s inequalities, we have $$\begin{aligned}
\big\|D^{(k)}_{\theta}\xi_t^i-D^{(k)}_{\theta}\xi_s^i\big\|_{2p}^2\leq C\left[\delta_{ik}{\mathbf{1}}_{[s,t]}(\theta)+ (t-s)\right].\end{aligned}$$ Thus we can show (\[ddxi\]) by Minkowski’s inequality: $$\begin{aligned}
\left\|\|D\xi^i_t-D\xi^i_s\|_{H}\right\|_{2p}^2\leq &\sum_{k=1}^d\int_r^t\big\|D^{(k)}_{\theta}\xi_t^i-D^{(k)}_{\theta}\xi_s^i\big\|_{2p}^2 d\theta\nonumber\\
\leq&\sum_{k=1}^dC\Big(\int_s^t\delta_{ik}d\theta+\int_r^t(t-s) d\theta\Big)\leq C(t-s).\end{aligned}$$
[**(ii)**]{} Note that $\sigma_t-\sigma_s=\sigma_t\left(\gamma_s-\gamma_t\right)\sigma_s$. Then, by (\[eigamma\]) and Hölder’s inequality, it suffices to estimate the moment of $\gamma_t-\gamma_s$. By (\[gmsde\]), we have $$\begin{aligned}
\gamma_t^{ij}-\gamma_s^{ij}=&\delta_{ij}(t-s)-\sum_{k_1=1}^d\int_s^t\gamma_u^{ik_1}dM_u^{k_1j}-\sum_{k_2=1}^d\int_s^t\gamma_u^{jk_2}dM_u^{k_2i}\\
&+\sum_{k_1,k_2=1}^dQ^{i,k_1}_{k_2,j}\int_s^t \gamma_u^{k_1k_2}du.\end{aligned}$$ Then, by (\[egamma\]), Minkowski’s, Jensen’s, and Burkholder-Davis-Gundy’s inequalities, for all $1\leq i,j\leq d$, we have $$\begin{aligned}
\label{dgamma1}
\big\|\gamma_t^{ij}-\gamma_s^{ij}\big\|_{2p}^2\leq &C\left((t-s)^2+(t-r)^2(t-s)+(t-r)^2(t-s)^2\right)\nonumber\\
\leq &C(1+T)^2(t-r)(t-s).\end{aligned}$$ Then, (\[dgamma\]) is a consequence of (\[eigamma\]) and (\[dgamma1\]).
[**(iii)**]{} By (\[dgme\]), we have the following equation: $$\begin{aligned}
D^{(k)}_{\theta}\gamma_t^{ij}-D^{(k)}_{\theta}\gamma_s^{ij}=&- \sum_{k_1=1}^d\int_{\theta\vee s}^tD^{(k)}_{\theta}\gamma_u^{ik_1} d M_u^{k_1j}-\sum_{k_1=1}^d\int_{\theta \vee s}^t \gamma_u^{ik_1} d \left(D^{(k)}_{\theta} M^{k_1j}_u\right)\nonumber\\
&-\sum_{k_2=1}^d\int_{\theta \vee s}^tD^{(k)}_{\theta}\gamma^{k_2j}_u d M^{k_2i}_u-\sum_{k_2=1}^d\int_{\theta\vee s}^t\gamma^{k_2j}_u d\left(D^{(k)}_{\theta} M^{k_2i}_u\right)\\
&+\sum_{k_1,k_2=1}^d\Big(Q^{k_1,i}_{k_2,j}\int_{\theta \vee s}^t D^{(k)}_{\theta}\gamma^{k_1k_2}_u du\Big).\end{aligned}$$ Then, by (\[elambda\]), (\[egamma\]), and (\[idgamma\]), Burkholder-Davis-Gundy’s, Jensen’s, Minkowski’s, and Cauchy-Schwarz’s inequalities, we have $$\begin{aligned}
&\big\|D^{(k)}_{\theta}\gamma_t^{ij}-D^{(k)}_{\theta}\gamma_s^{ij}\big\|_{2p}^2\leq c_{d,p} \|h\|_{3,2}^2 \bigg[\sum_{k_1=1}^d\int_{\theta\vee s}^t \big\|D^{(k)}_{\theta}\gamma^{ik_1}_u\big\|_{2p}^2du\nonumber\\
&\hspace{20mm}+\sum_{k_2=1}^d\int_{\theta\vee s}^t\big\|\gamma_u^{ik_1}\big\|_{4p}^2\big\|D^{(k)}_{\theta}\xi^{k_2}_u \big\|_{4p}^2du+(t-s)\int_{\theta\vee s}^t \big\|D^{(k)}_{\theta}\gamma^{k_1k_2}_u\big\|_{2p}^2 du\bigg]\\
&\hspace{38mm} \leq C(t-r)^2(t-s).\end{aligned}$$ This implies $$\begin{aligned}
\label{ddgamma0}
\left\|\|D\gamma_t^{ij}-D\gamma_s^{ij}\|_{H}\right\|_{2p}\leq C(t-r)^{\frac{3}{2}}(t-s)^{\frac{1}{2}}.\end{aligned}$$ By (\[digmf\]), we have $$\begin{aligned}
D\sigma^{ij}_t-D\sigma^{ij}_s=&\sum_{i_1,i_2=1}^d\left(\sigma^{ii_1}_tD\gamma^{i_1i_2}_t\sigma^{i_2j}_t-\sigma^{ii_1}_sD\gamma^{i_1i_2}_s\sigma^{i_2j}_s\right)\\
=&\sum_{i_1,i_2=1}^d\sigma^{ii_1}_t\left(D\gamma^{i_1i_2}_t-D\gamma^{i_1i_2}_s\right)\sigma^{i_2j}_t+\sum_{i_1,i_2=1}^d\left(\sigma^{ii_1}_t-\sigma^{ii_1}_s\right)D\gamma^{i_1i_2}_s\sigma^{i_2j}_t\\
&+\sum_{i_1,i_2=1}^d\sigma^{ii_1}_sD\gamma^{i_1i_2}_s\left(\sigma^{i_2j}_t-\sigma^{i_2j}_s\right).\end{aligned}$$ Thus (\[ddgamma\]) follows from (\[tpcsi\]), (\[eigamma\]), (\[idgammah\]), (\[dgamma\]), and (\[ddgamma0\]).
[**(iv)**]{} Let $\theta=\theta_1\vee \theta_2$, by (\[d2xisde\]), we have the following equation: $$\begin{aligned}
D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_t^i-&D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_s^i=-\sum_{j_1,j_2=1}^d\int_{\theta\vee s}^t\int_{\mathbb{R}^d} \partial_{j_2} h^{ij_1}(y-\xi_u) D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_u^{j_2} W^{j_1}(du, dy)\nonumber\\
&+\sum_{j_1,j_2,j_3=1}^d\int_{\theta\vee s}^t\int_{\mathbb{R}^d}\partial_{j_2,j_3} h^{ij_1}(y-\xi_u) D^{(k_1)}_{\theta_1}\xi_u^{j_2} D^{(k_2)}_{\theta_2}\xi_u^{j_3}W^{j_1}(du, dy).\end{aligned}$$ As a consequence, by (\[elambda\]), (\[mdxi2\]), Burkholder-Davis-Gundy’s, Minkowski’s, and Cauchy-Schwarz’s inequalities, we have $$\begin{aligned}
\label{ddxi2bi}
\big\|D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_t^i-D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_s^i\big\|_{2p}^2\leq &c_p\bigg[\sum_{j_1=1}^d\|h\|_{3,2}^2\int_{\theta\vee s}^t\big\|D^{(k_1,k_2)}_{\theta_1,\theta_2}\xi_u^{j_1}\big\|_{2p}^2du\nonumber\\
&\quad +\sum_{j_1,j_2=1}^d\|h\|_{3,2}^2\int_{\theta\vee s}^t\big\|D^{(k_1)}_{\theta_1}\xi_u^{j_1}\big\|_{4p}^2\big\| D^{(k_2)}_{\theta_2}\xi_u^{j_2}\big\|_{4p}^2du\bigg]\nonumber\\
\leq &C(t-s).\end{aligned}$$ Therefore, we obtain (\[ddxi2\]) by integrating (\[ddxi2bi\]) and applying Minkowski's inequality.
In the next lemma, we establish moment estimates for the functionals $H_{(i)}(\xi_t,1)$ and $H_{(i,j)}(\xi_t,1)$ introduced in (\[hfphif\]) and (\[hhfphif\]). Notice that these functionals are well defined, because $\xi_t\in\cap_{p\geq 1}{\mathbb{D}}^{3,p}({\mathbb{R}}^d)$ and $\sigma^{ij}_t\in\cap_{p\geq 2}{\mathbb{D}}^{2,p}$ for all $1\leq i,j\leq d$.
\[ehn\] Suppose that $h\in H_2^3({\mathbb{R}}^d;{\mathbb{R}}^d\otimes {\mathbb{R}}^d)$. Then, for any $p\geq 1$ and $0\leq r<t\leq T$, there exists a constant $C>0$, depending on $T$, $d$, $\|h\|_{3,2}$, and $p$, such that $$\begin{aligned}
\max_{1\leq i\leq d}\left\|H_{(i)}(\xi_t,1)\right\|_{2p}\leq C(t-r)^{-\frac{1}{2}},\label{ehxi}\\
\max_{1\leq i,j\leq d}\left\|H_{(i,j)}(\xi_t,1)\right\|_{2p}\leq C(t-r)^{-1}.\label{ehxi2}\end{aligned}$$
Due to Meyer’s inequality (see e.g. Proposition 1.5.4 and 2.1.4 of Nualart [@springer-06-nualart]), it suffices to estimate $$\left\|\|\sigma^{ji}_t D\xi_t^j\|_{H}\right\|_{2p},\ \left\|\|D\left(\sigma^{ji}_t D\xi_t^j\right)\|_{H^{\otimes 2}}\right\|_{2p},\ \textrm{and}\ \left\|\|D^2\left(\sigma^{ji}_t D\xi_t^j\right)\|_{H^{\otimes 3}}\right\|_{2p}.$$ By (\[medxih\]) and Lemma \[ttpcsi\] - \[elgamma\], we have $$\begin{aligned}
\left\|\|\sigma^{ji}_t D\xi_t^j\|_{H}\right\|_{2p}\leq \left\|\sigma^{ji}_t\right\|_{4p}\left\|\|D\xi_t^j\|_H\right\|_{4p}\leq C(t-r)^{-\frac{1}{2}},\end{aligned}$$ $$\begin{aligned}
\left\|\|D\left(\sigma^{ji}_t D\xi_t^j\right)\|_{H^{\otimes 2}}\right\|_{2p}\leq&\left\|\|D\sigma^{ji}_t \otimes D\xi_t^j\|_{H^{\otimes 2}}\right\|_{2p}+\left\|\|\sigma^{ji}_t D^2\xi_t^j\|_{H^{\otimes 2}}\right\|_{2p}\\
\leq &\left\|\|D\sigma^{ji}_t\|_H \right\|_{4p}\left\|\|D\xi_t^j\|_H\right\|_{4p}+\left\|\sigma^{ji}_t \right\|_{4p}\left\|\|D^2\xi_t^j\|_{H^{\otimes 2}}\right\|_{4p}\\
\leq &C(t-r)^{\frac{1}{2}},\end{aligned}$$ and $$\begin{aligned}
\left\|\|D^2\left(\sigma^{ji}_t D\xi_t^j\right)\|_{H^{\otimes 3}}\right\|_{2p}\leq&\left\|\|D^2\sigma^{ji}_t \otimes D\xi_t^j\|_{H^{\otimes 3}}\right\|_{2p}\\
&+\left\|\|D\sigma^{ji}_t \otimes D^2\xi_t^j\|_{H^{\otimes 3}}\right\|_{2p}+\left\|\|\sigma^{ji}_t D^3\xi_t^j\|_{H^{\otimes 3}}\right\|_{2p}\\
\leq & C (t-r).\end{aligned}$$ The above inequalities hold for all $1\leq i,j\leq d$. Then, (\[ehxi\]) and (\[ehxi2\]) follow.
The next lemma provides the moment estimate for the increment of $H_{(i)}(\xi_t,1)$.
\[edh12\] Suppose that $h\in H_2^3({\mathbb{R}}^d;{\mathbb{R}}^d\otimes {\mathbb{R}}^d)$. Then, for any $p\geq 1$ and $0\leq r<s<t\leq T$, there exists a constant $C>0$, depending on $T$, $d$, $\|h\|_{3,2}$, and $p$, such that $$\begin{aligned}
\label{edh1}
\max_{1\leq i\leq d}\left\|H_{(i)}(\xi_t,1)-H_{(i)}(\xi_s,1)\right\|_{2p}\leq C(s-r)^{-\frac{1}{2}}(t-r)^{-\frac{1}{2}}(t-s)^{\frac{1}{2}}.\end{aligned}$$
Notice that, by definition, we have $$\begin{aligned}
H_{(i)}(\xi_t,1)-H_{(i)}(\xi_s,1)=&-\sum_{j=1}^d\delta\left(\sigma_t^{ji} D\xi_t^j\right)+\sum_{j=1}^d\delta\left(\sigma_s^{ji} D\xi_s^j\right)\\
=&-\sum_{j=1}^d\delta\left(\sigma_t^{ji} D\xi_t^j-\sigma_s^{ji} D\xi_s^j\right).\end{aligned}$$ Thus by Meyer’s inequality again, it suffices to estimate $$\begin{aligned}
I_1:=\left\|\|\sigma_t^{ji} D\xi_t^j-\sigma_s^{ji} D\xi_s^j\|_H\right\|_{2p}\ \textrm{and}\ I_2:=\left\|\|D\left(\sigma_t^{ji} D\xi_t^j-\sigma_s^{ji} D\xi_s^j\right)\|_{H^{\otimes 2}}\right\|_{2p}.\end{aligned}$$ For $I_1$, we have $$\begin{aligned}
I_1\leq \left\|\|\left(\sigma_t^{ji}-\sigma_s^{ji}\right) D\xi_s^j\|_H\right\|_{2p}+\left\|\|\sigma_t^{ji} \left(D\xi_t^j-D\xi_s^j\right)\|_H\right\|_{2p}.\end{aligned}$$ Notice that by Lemmas \[ttpcsi\] - \[medxigm\], we can write $$\begin{aligned}
&\left\|\|\left(\sigma_t^{ji}-\sigma_s^{ji}\right) D\xi_s^j\|_H\right\|_{2p}\leq \left\|\sigma_t^{ji}-\sigma_s^{ji}\right\|_{4p}\left\| \|D\xi_s^j\|_H\right\|_{4p}\\
&\hspace{20mm}\leq C(t-r)^{-\frac{1}{2}}(s-r)^{-\frac{1}{2}}(t-s)^{\frac{1}{2}}\end{aligned}$$ and $$\begin{aligned}
&\left\|\|\sigma_t^{ji} \left(D\xi_t^j-D\xi_s^j\right)\|_H\right\|_{2p}\leq \left\|\sigma_t^{ji}\right\|_{4p}\left\|\|D\xi_t^j-D\xi_s^j\|_H\right\|_{4p}\\
&\quad \leq C(t-r)^{-1}(t-s)^{\frac{1}{2}}\leq C(t-r)^{-\frac{1}{2}}(s-r)^{-\frac{1}{2}}(t-s)^{\frac{1}{2}}.\end{aligned}$$ Thus combining the above inequalities, we have the following estimate for $I_1$: $$\begin{aligned}
\label{edgdxi}
I_1\leq C(t-r)^{-\frac{1}{2}}(s-r)^{-\frac{1}{2}}(t-s)^{\frac{1}{2}}.\end{aligned}$$ By Lemmas \[ttpcsi\] - \[medxigm\], we have the following estimate for $I_2$: $$\begin{aligned}
\label{edgammadxi}
I_2\leq &\left\|\|D\sigma_t^{ji}\otimes D\xi_t^j-D\sigma_s^{ji}\otimes D\xi_s^j\|_{H^{\otimes 2}}\right\|_{2p}+\left\|\|\sigma_t^{ji} D^2\xi_t^j-\sigma_s^{ji}D^2\xi_s^j\|_{H^{\otimes 2}}\right\|_{2p}\nonumber\\
\leq &\left\|\|D\sigma_t^{ji}\|_H\right\|_{4p} \left\|\|\left(D\xi_t^j- D\xi_s^j\right)\|_H\right\|_{4p}+\left\|\|\left(D\sigma_t^{ji}-D\sigma_s^{ji}\right)\|_H\right\|_{4p}\left\| \|D\xi_s^j\|_H\right\|_{4p}\nonumber\\
&+\left\|\sigma_t^{ji} \right\|_{4p}\left\|\|D^2\xi_t^j- D^2\xi_s^j\|_{H^{\otimes 2}}\right\|_{4p}+\left\|\sigma_t^{ji} -\sigma_s^{ji} \right\|_{4p} \left\|\|D^2\xi_s^j\|_{H^{\otimes 2}}\right\|_{2p}\nonumber\\
\leq &C(t-s)^{\frac{1}{2}}.\end{aligned}$$ Therefore, (\[edh1\]) follows from (\[edgdxi\]), (\[edgammadxi\]) and Meyer’s inequality.
The next lemma shows that $\xi$ is a $d$-dimensional Gaussian process on the whole probability space. Notice, however, that conditional on $W$ the process $\xi$ is no longer Gaussian, because it is the solution to a nonlinear SDE.
\[lxigrv\] The process $\xi$ given by equation (\[sde\]) is a $d$-dimensional Gaussian process, with mean $x$ and covariance matrix $$\begin{aligned}
\label{cvmxi}
\Sigma_{s,t}= (t\wedge s-r) (I+\rho(0)),\end{aligned}$$ where $\rho(0)$ is defined in (\[rho\]). Moreover, the probability density of $\xi_t$, denoted by $p_{\xi_t}(y)$, is bounded by a Gaussian density: $$\begin{aligned}
\label{pdfid}
p_{\xi_t}(y)\leq \left(2\pi(t-r)\right)^{-\frac{d}{2}}\exp \Big(-\frac{k|x-y|^2}{t-r}\Big),\end{aligned}$$ where $$\begin{aligned}
\label{defk}
k=[2(d\|h\|_{3,2}^2+1)]^{-1}.\end{aligned}$$
Since $B$ is a $d$-dimensional Brownian motion and $W$ is a $d$-dimensional space-time white Gaussian random field independent of $B$, then $\xi=\{\xi_t,r\leq t\leq T\}$ is a square integrable $d$-dimensional martingale. The quadratic covariation of $\xi$ is given by $$\begin{aligned}
\langle \xi^i, \xi^j \rangle_t=&\delta_{ij}(t-r)+\sum_{k=1}^d \int_r^t\int_{\mathbb{R}^d}h^{ik}(\xi_s-y)h^{jk}(\xi_s-y)dyds\nonumber\\
=&\left(\delta_{ij}+ \rho^{ij}(0)\right)(t-r).\end{aligned}$$ Note that $\rho(0)$ is a symmetric nonnegative definite matrix. As a consequence, $I+\rho(0)$ is strictly positive definite, and thus nondegenerate. Therefore, we can find a nondegenerate matrix $M$, such that $M (I+\rho(0)) M^*=I$. Let $\eta=M \xi$. Then, $\eta=\{\eta_t, t\in[r,T]\}$ is a martingale with quadratic covariation $$\begin{aligned}
\langle\eta^i,\eta^j\rangle_t=\sum_{k_1,k_2=1}^d M^{ik_1} M^{jk_2}\langle \xi^{k_1}, \xi^{k_2} \rangle_t=\delta_{ij}(t-r).\end{aligned}$$ By Lévy's martingale characterization, $\eta$ is a $d$-dimensional Brownian motion. Then, $\xi=M^{-1}\eta$ is a Gaussian process, with covariance matrix (\[cvmxi\]).
Since for any $t>r$, $\Sigma_t:=\Sigma_{t,t}=(t-r)(I+\rho(0))$ is symmetric and positive definite, the probability density of the Gaussian random vector $\xi_t$ is given by $$\begin{aligned}
\label{pdrxi}
p_{\xi_t}(y)=\frac{1}{\sqrt{(2\pi)^d|\Sigma_t|}}\exp\Big(-\frac{1}{2}(y-x)^*\Sigma_t^{-1}(y-x)\Big).\end{aligned}$$ Recall that $\rho(0)$ is symmetric and nonnegative definite. Then it has eigenvalues $\lambda_1\geq \lambda_2\geq \cdots \ge \lambda_d\geq 0$. Let $\lambda$ be the diagonal matrix with diagonal elements $\lambda_1,\dots, \lambda_d$. There is an orthogonal matrix $U$, such that $\rho(0)=U^* \lambda U$. Let $k$ be defined in (\[defk\]). It follows that $$\lambda_1+1\leq \sum_{i,j=1}^d |\rho^{ij}(0)|+1\leq \|\rho\|_{\infty}+1\leq d\|h\|_{3,2}^2+1=\frac{1}{2k}.$$ Thus for any nonzero $x\in\mathbb{R}^d$, we have $$\begin{aligned}
\frac{1}{2}x^*\Sigma_t^{-1}x-\frac{k}{t-r}x^*x=&\frac{1}{2}x^*\Big(\Sigma_t^{-1}-\frac{2k}{t-r} I\Big)x\\
=&\frac{1}{2(t-r)}x^*U^*\left( \left(I+\lambda\right)^{-1} -2k I\right)Ux\geq 0,\end{aligned}$$ because $\left(I+\lambda\right)^{-1} -2k I$ is a nonnegative diagonal matrix. Thus for any $x,y\in\mathbb{R}^d$, $t>r$, we have $$\begin{aligned}
\label{expi}
\exp\Big(-\frac{1}{2}(y-x)^*\Sigma_t^{-1}(y-x)\Big)\leq \exp \Big(-\frac{k|x-y|^2}{t-r}\Big).\end{aligned}$$ On the other hand, we have $$\begin{aligned}
\label{dcvi}
|\Sigma_t|=\left|U^*\left(I+\lambda\right)U(t-r)\right|\geq (t-r)^d.\end{aligned}$$ Therefore, we obtain (\[pdfid\]) by plugging (\[expi\]) - (\[dcvi\]) into (\[pdrxi\]).
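As a numerical sanity check of the covariance formula (\[cvmxi\]), the following Monte Carlo sketch (in $d=1$, with an illustrative kernel and crude discretizations that are assumptions of the sketch, not of the lemma) compares the empirical variance of $\xi_T$ with $(T-r)(1+\rho(0))$, where $\rho(0)=\int h(y)^2dy$.

```python
import numpy as np

# Monte Carlo check (d = 1, illustrative only) of the law of xi_T:
# the empirical variance over Euler-Maruyama paths is compared with the
# predicted value (T - r) * (1 + rho(0)), rho(0) = int h(y)^2 dy.
rng = np.random.default_rng(1)

def h(z):
    return np.exp(-z ** 2)         # hypothetical kernel (assumption of the sketch)

y = np.linspace(-8.0, 8.0, 801)
dy = y[1] - y[0]
rho0 = np.sum(h(y) ** 2) * dy      # quadrature approximation of rho(0)

x, r, T, n_steps, n_paths = 0.0, 0.0, 1.0, 200, 1000
dt = (T - r) / n_steps

xi = np.full(n_paths, x)
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    dW = rng.normal(0.0, np.sqrt(dt * dy), size=(n_paths, y.size))
    xi = xi + dB + np.sum(h(y[None, :] - xi[:, None]) * dW, axis=1)

print("empirical variance of xi_T:", xi.var())
print("predicted (T - r)(1 + rho(0)):", (T - r) * (1.0 + rho0))
```

Each time step contributes conditional variance $dt\,(1+\rho(0))$ up to discretization error, which is why the two printed numbers should be close.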
Denote by $\mathbb{P}^W$, ${\mathbb{E}}^W$, and $\|\cdot\|_p^W$ the probability, expectation and $L^p$-norm conditional on $W$. The following two propositions are estimates for the conditional distribution of $\xi$.
\[cmijt\] For any $0\leq r<t\leq T$ and $c>0$, choose $\rho\in(0, c\sqrt{t-r}]$. Then, for any $p_1,p_2\geq 1$ and $y\in{\mathbb{R}}^d$, there exists a constant $C>0$, depending on $p_1$, $p_2$, $c$, $\|h\|_{2}$, and $d$, such that $$\begin{aligned}
\label{cmiji}
\left\|\mathbb{P}^W(|\xi_t-y|\leq \rho)^{\frac{1}{p_1}}\right\|_{p_2}\leq C\exp\Big(-\frac{k |x-y|^2}{p(t-r)}\Big),\end{aligned}$$ where $k$ is defined in (\[defk\]) and $p=p_1\vee p_2$.
Let $p=p_1\vee p_2$. Then, by Jensen’s inequality, we have $$\begin{aligned}
\left\|\mathbb{P}^W(|\xi_t-y|\leq \rho)^{\frac{1}{p_1}}\right\|_{p_2}=\left\|\left\|{\mathbf{1}}_{\{|\xi_t-y|\leq \rho\}}\right\|^W_{p_1}\right\|_{p_2}\leq \left\|{\mathbf{1}}_{\{|\xi_t-y|\leq \rho\}}\right\|_p.\end{aligned}$$ We consider two different cases.
[**(i)**]{} Suppose that $2\rho\leq |x-y|$. If $|\xi_t-y|\leq\rho\leq c\sqrt{t-r}$, then $$\begin{aligned}
|\xi_t-x|\geq |x-y|-|\xi_t-y|\geq |x-y|-\rho\geq \frac{|x-y|}{2},\end{aligned}$$ and hence $\{|\xi_t-y|\leq \rho\}\subset\{|\xi_t-x|\geq \frac{|x-y|}{2}\}$. Then, by Lemma \[lxigrv\], we have $$\begin{aligned}
\label{emd2}
\left\|\mathbb{P}^W(|\xi_t-y|\leq \rho)^{\frac{1}{p_1}}\right\|_{p_2}\leq &\left\|{\mathbf{1}}_{\{|\xi_t-x|\geq \frac{|x-y|}{2}\}\cap\{|\xi_t-y|\leq\rho\}}\right\|_p\leq C\bigg[V_d\rho^d\sup_{|z-x|\geq\frac{|x-y|}{2}}p_{\xi_t}(z)\bigg]^{\frac{1}{p}}\nonumber\\
\leq&C\left[V_d c^d(2\pi)^{-\frac{d}{2}}\exp\Big(-\frac{k|x-y|^2}{ t-r}\Big) \right]^{\frac{1}{p}},\end{aligned}$$ where $V_d=\frac{\pi^{\frac{d}{2}}}{\Gamma(1+\frac{d}{2})}$ is the volume of the unit ball in ${\mathbb{R}}^d$.
[**(ii)**]{} On the other hand, suppose that $2\rho>|x-y|$. Then $|x-y|\leq 2\rho\leq 2c\sqrt{t-r}$. Thus by Lemma \[lxigrv\] again, we have $$\begin{aligned}
\label{emd3}
\left\|\mathbb{P}^W(|\xi_t-y|\leq \rho)^{\frac{1}{p_1}}\right\|_{p_2}\leq& C\big(V_d \rho^d(2\pi (t-r))^{-\frac{d}{2}} \big)^{\frac{1}{p}}\nonumber\\
\leq &C\big(V_dc^d(2\pi)^{-\frac{d}{2}}\big)^{\frac{1}{p}}\exp\Big(\frac{4kc^2}{p}-\frac{4kc^2}{p}\Big)\nonumber\\
\leq &C\big(V_dc^d(2\pi)^{-\frac{d}{2}}\big)^{\frac{1}{p}}e^{\frac{4kc^2}{p}}\exp\Big(-\frac{k|x-y|^2}{p(t-r)}\Big).\end{aligned}$$ Therefore, (\[cmiji\]) follows from (\[emd2\]) - (\[emd3\]).
Denote by $p^W(r,x;t,y)$ the transition probability density of $\xi$ conditional on $W$. In other words, $p^W(r,x;t,y)$ is the conditional probability density of $\xi_t=\xi_t^{r,x}$.
\[mectpd\] For any $0\leq r<t\leq T$, $p\geq 1$, and $y\in{\mathbb{R}}^d$, there exists a constant $C>0$, depending on $T$, $d$, $\|h\|_{3,2}$, and $p$, such that $$\begin{aligned}
\label{edn}
\left\|p^W(r,x;t,y)\right\|_{2p}\leq C\exp\Big(-\frac{k|x-y|^2}{6pd(t-r)}\Big) (t-r)^{-\frac{d}{2}},\end{aligned}$$ where $k$ is defined in (\[defk\]).
In order to prove this proposition, we apply a density formula based on the Riesz transform (see Theorem \[bedf\]) and use the estimate stated in Theorem \[tdpdift\]. Choose $p_1\in(d,3pd]$, and let $p_2=2p_1$ and $p_3=\frac{p_1p_2}{p_2-p_1}=p_2$. Then, by (\[de\]) and Hölder's inequality, we have $$\begin{aligned}
\label{mcdxi1}
\left\|p^W(r,x;t,y)\right\|_{2p}\leq C\max_{1\leq i\leq d}\Big\{&\big\|\mathbb{P}^W\left(|\xi_t-y|<2\rho\right)^{\frac{1}{p_2}}\big\|_{6p} \Big\|\big|\|H_{(i)}(\xi_t, 1)\|^W_{p_1}\big|^{d-1}\Big\|_{6p}\nonumber\\
&\times\Big[\frac{1}{\rho}+\left\|\|H_{(i)}(\xi_t, 1)\|^W_{p_2}\right\|_{6p}\Big]\Big\}.\end{aligned}$$ By Jensen's inequality, we have, for any $1\leq i\leq d$, $$\begin{aligned}
\Big\|\big|\left\|H_{(i)}(\xi_t, 1)\right\|^W_{p_1}\big|^{d-1}\Big\|_{6p}\leq\left\|H_{(i)}(\xi_t,1)\right\|_{6p\vee p_1}^{d-1}\leq \left\|H_{(i)}(\xi_t,1)\right\|_{6pd}^{d-1},\end{aligned}$$ and $$\begin{aligned}
\label{mchkdq}
\left\|\left\|H_{(i)}(\xi_t, 1)\right\|^W_{p_2}\right\|_{6p}\leq\left\|H_{(i)}(\xi_t,1)\right\|_{6pd}.\end{aligned}$$ Let $\rho=\frac{\sqrt{t-r}}{4}$. Then, (\[edn\]) is a consequence of (\[mcdxi1\]) - (\[mchkdq\]), Lemma \[ehn\], and Proposition \[cmijt\].
A conditional convolution representation
========================================
In this section, we follow the idea of Li et al. (see Section 3 of [@ptrf-12-li-wang-xiong-zhou]) to obtain a conditional convolution formulation of the SPDE (\[dqmvp\]). Consider the following SPDE: $$\begin{aligned}
\label{crd}
u_t(x)=\int_{\mathbb{R}^d}\mu(z)p^W(0,z;t,x)dz+\int_0^t\int_{\mathbb{R}^d}p^W(r,z;t,x) u_r(z)V(dr,dz),\end{aligned}$$ where $W$ and $V$ are the same random fields as in (\[dqmvp\]), and $p^W$ is the transition density, conditional on $W$, of the process $\xi$ given by (\[sde\]).
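For orientation, in the degenerate case $h\equiv 0$ the kernel $p^W$ does not depend on $W$ and reduces to the Gaussian heat kernel, so the first term of (\[crd\]) is an ordinary Gaussian convolution of $\mu$. The following sketch evaluates that deterministic term in $d=1$; the initial density and the quadrature grid are illustrative assumptions.

```python
import numpy as np

# Deterministic term of the conditional convolution for h == 0 in d = 1:
# p^W(0, z; t, x) is then the heat kernel with variance t, and the term is a
# plain Gaussian convolution of the initial density mu.
def heat_kernel(t, z, x):
    return np.exp(-(x - z) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def mu(z):
    return 1.0 * (np.abs(z) <= 1.0)  # hypothetical bounded, integrable initial density

z = np.linspace(-10.0, 10.0, 4001)
dz = z[1] - z[0]
t, x = 0.5, 0.3
deterministic_term = np.sum(mu(z) * heat_kernel(t, z, x)) * dz
print("int mu(z) p(0, z; t, x) dz =", deterministic_term)
```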
\[strsol\] A random field $u=\{u_t(x),t\in[0,T],x\in{\mathbb{R}}^d\}$ that is jointly measurable and adapted to the filtration generated by $W$ and $V$, is said to be a strong solution to the SPDE (\[crd\]), if the stochastic integral in (\[crd\]) is defined as Walsh’s integral and the equality holds almost surely for almost every $t\in[0,T]$ and $x\in{\mathbb{R}}^d$.
\[eumcr\] Assume that $\kappa$ and $\mu$ are bounded. Then the SPDE (\[crd\]) has a unique strong solution (in the sense of Definition \[strsol\]), denoted by $u=\{u_t(x), 0\leq t\leq T,x\in{\mathbb{R}}^d\}$, such that $$\begin{aligned}
\label{unibd}
\sup_{0\leq t\leq T}\sup_{x\in\mathbb{R}^d}\|u_t(x)\|_{2p}<\infty,\end{aligned}$$ for any $p\geq 1$.
We prove the lemma by the Picard iteration. Let $u_0(t,x)\equiv \mu(x)$ and $$\begin{aligned}
u_{n}(t,x)=\int_{\mathbb{R}^d}\mu(z)p^W(0,z;t,x)dz+\int_0^t\int_{\mathbb{R}^d}p^W(r,z;t,x) u_{n-1}(r,z)V(dr,dz),\end{aligned}$$ for all $n\geq 1$ and $0\leq t\leq T$. Let $d_n$ be the difference of $u_n$, that is $$\begin{aligned}
d_n(t,x):=u_{n+1}(t,x)-u_n(t,x)=\int_0^t\int_{\mathbb{R}^d}p^W(r,z;t,x) \left(d_{n-1}(r,z)\right)V(dr,dz).\end{aligned}$$ We write $$d^*_n(t):=\sup_{x\in\mathbb{R}^d}\left\|d_n(t,x)\right\|_{2p}^2.$$ Then, by Burkholder-Davis-Gundy’s and Minkowski’s inequalities, we have $$\begin{aligned}
\label{medu}
d_n^*(t)\leq &c_p \|\kappa\|_{\infty}\sup_{x\in\mathbb{R}^d}\int_0^t\Big(\int_{\mathbb{R}^d}\left\|p^W(r,z;t,x)d_{n-1}(r,z)\right\|_{2p}dz\Big)^2dr.\end{aligned}$$ By the Markov property, $p^{W}(r,z;t,x)$ depends only on the increments of $W$ over the time interval $(r,t]$. On the other hand, $d_{n-1}(r,z)$ depends only on $V$ and on $W$ restricted to the time interval $[0,r]$. Thus, $p^{W}(r,z;t,x)$ and $d_{n-1}(r,z)$ are independent. That implies $$\begin{aligned}
\label{epp2d2}
{\mathbb{E}}\big(|p^W(r,z;t,x)d_{n-1}(r,z)|^{2p}\big)={\mathbb{E}}\big(|p^W(r,z;t,x)|^{2p}\big){\mathbb{E}}\big(|d_{n-1}(r,z)|^{2p}\big).\end{aligned}$$ Then, by (\[medu\]), (\[epp2d2\]), and Proposition \[mectpd\], we have $$\begin{aligned}
\label{medu0}
d^*_n(t)\leq &c_p \|\kappa\|_{\infty}\int_0^td^*_{n-1}(r)\sup_{x\in\mathbb{R}^d}\Big(\int_{\mathbb{R}^d}\left\|p^W(r,z;t,x)\right\|_{2p}dz\Big)^2dr\leq C\int_0^td^*_{n-1}(r)dr,\end{aligned}$$ where $C>0$ depends on $T$, $d$, $h$, $p$, and $\|\kappa\|_{\infty}$. Thus by iteration, we have $$\begin{aligned}
\label{sdspdef}
d^*_n(t)\leq C^n \int_0^t\int_0^{r_n}\cdots\int_0^{r_2}d_0^*(r_1) dr_1\cdots dr_n.\end{aligned}$$ To estimate $d_0^*$, we observe that $$\begin{aligned}
\label{duigdd}
d_0^*(t)=&\sup_{x\in\mathbb{R}^d}\Big\|\int_{\mathbb{R}^d}\left(\mu(z)-\mu(x)\right)p^W(0,z;t,x)dz+\int_0^t\int_{\mathbb{R}^d}p^W(r,z;t,x) \mu(z)V(dr,dz)\Big\|_{2p}^2\nonumber\\
\leq &\|\mu\|_{\infty}^2\sup_{x\in\mathbb{R}^d}\bigg(2\Big\|\int_{\mathbb{R}^d}p^W(0,z;t,x)dz\Big\|_{2p}+\Big\|\int_0^t\int_{\mathbb{R}^d}p^W(r,z;t,x) V(dr,dz)\Big\|_{2p}\bigg)^2\nonumber\\
:=&\|\mu\|_{\infty}^2\sup_{x\in\mathbb{R}^d}\left(2J_1+J_2\right)^2.\end{aligned}$$ For $J_1$, by Minkowski’s inequality and Proposition \[mectpd\], we have $$\begin{aligned}
\label{duigd1}
J_1\leq \sup_{x\in\mathbb{R}^d}\int_{\mathbb{R}^d}\left\|p^W(0,z;t,x)\right\|_{2p}dz\leq C.\end{aligned}$$ For $J_2$, we apply Burkholder-Davis-Gundy’s, Minkowski’s inequalities and Proposition \[mectpd\]. Then, $$\begin{aligned}
\label{duigd2}
J_2\leq&c_p\Big\|\int_0^t\int_{\mathbb{R}^d\times \mathbb{R}^d}\kappa(y,z) p^W(r,y;t,x) p^W(r,z;t,x) dydzdr\Big\|_p^{\frac{1}{2}}\\
\leq&c_p\|\kappa\|_{\infty}^{\frac{1}{2}}\bigg[\int_0^t\Big(\int_{\mathbb{R}^d}\left\|p^W(r,y;t,x) \right\|_{2p}dy\Big)^2dr\bigg]^{\frac{1}{2}}\leq C.\end{aligned}$$ Combining (\[duigdd\]) - (\[duigd2\]), it follows that $d^*_0(t)\leq C$. As a consequence, we have $$\begin{aligned}
\label{sdspde0}
d^*_n(t)\leq &C^{n+1} \int_0^t\int_0^{r_n}\dots\int_0^{r_2} dr_1\dots dr_n=C^{n+1}\frac{t^n}{n!},\end{aligned}$$ whose square root is summable in $n$. Therefore, for any fixed $t\in[0,T]$ and $x\in{\mathbb{R}}^d$, $\{u_n(t,x)\}_{n\geq 0}$ is a Cauchy sequence, and thus convergent, in $L^{2p}(\Omega)$. Denote by $u_t(x)$ the limit of this sequence.
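The convergence mechanism above is the factorial decay produced by iterating a Volterra-type bound. A minimal numerical illustration of the inequality $d_n^*(t)\leq C\int_0^t d_{n-1}^*(r)\,dr$ and of the resulting bound $C^{n+1}t^n/n!$ follows; the values of $C$ and $T$ are assumptions chosen only for the illustration.

```python
import math
import numpy as np

# Illustration of the iterated Volterra bound behind the Picard scheme:
# if d_0^* <= C and d_n^*(t) <= C * int_0^t d_{n-1}^*(r) dr, then
# d_n^*(t) <= C^(n+1) t^n / n!, which is summable in n.
C, T, n_grid = 2.0, 1.0, 2000
t = np.linspace(0.0, T, n_grid)
dt = t[1] - t[0]

d_star = np.full(n_grid, C)              # d_0^*(t) <= C
for n in range(1, 6):
    d_star = C * np.cumsum(d_star) * dt  # crude quadrature of C * int_0^t d_{n-1}^*(r) dr
    bound = C ** (n + 1) * T ** n / math.factorial(n)
    print(f"n={n}: computed d_n^*(T) ~ {d_star[-1]:.4e}, bound C^(n+1) T^n/n! = {bound:.4e}")
```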
We claim that $u=\{u_t(x),t\in[0,T], x\in{\mathbb{R}}^d\}$ is a strong solution to (\[crd\]). It suffices to show that as $n\to \infty$, $$\begin{aligned}
\label{l2cvg2}
\int_0^t\int_{\mathbb{R}^d}p^W(r,z;t,x) u_n(r,z)V(dr,dz)\to \int_0^t\int_{\mathbb{R}^d}p^W(r,z;t,x) u(r,z)V(dr,dz)\end{aligned}$$ in $L^{2p}(\Omega)$ for almost every $t\in[0,T]$ and $x\in{\mathbb{R}}^d$. Indeed, by Burkholder-Davis-Gundy's and Minkowski's inequalities, and the fact that $\{p^W(r,z;t,x),x,z\in{\mathbb{R}}^d\}$ and $\{u_n(r,z)-u(r,z),z\in{\mathbb{R}}^d\}$ are independent, we have $$\begin{aligned}
&\Big\|\int_0^t\int_{\mathbb{R}^d}p^W(r,z;t,x) \left(u_n(r,z)-u(r,z)\right)V(dr,dz)\Big\|_{2p}^2\\
\leq& \|\kappa\|_{\infty}\sup_{r\in[0,T]}\sup_{z\in\mathbb{R}^d}\left\|u_n(r,z)-u(r,z)\right\|_{2p}^2\int_0^t\Big(\int_{\mathbb{R}^d}\left\|p^W(r,z;t,x)\right\|_{2p}dz\Big)^2dr.\end{aligned}$$ The integral on the right-hand side of the above inequality is finite, and by (\[sdspde0\]), as $n\to \infty$, $$\begin{aligned}
\sup_{r\in[0,T]}\sup_{z\in\mathbb{R}^d}\left\|u_n(r,z)-u(r,z)\right\|_{2p}^2\leq \Big(\sum_{k=n}^{\infty}\sup_{r\in[0,T]}\sqrt{d_k^*(r)}\Big)^2\to 0.\end{aligned}$$ This implies that (\[l2cvg2\]) is true.
In order to show the uniqueness, we assume that $v=\{v_t(x), t\in [0,T], x\in{\mathbb{R}}^d\}$ is another strong solution to (\[crd\]). Let $d_t(x)=u_t(x)-v_t(x)$ for any $t\in[0,T]$ and $x\in{\mathbb{R}}^d$. Then, $$d_t(x)=\int_0^t\int_{{\mathbb{R}}^d}p^W(r,z;t,x)d_r(z)V(dr,dz).$$ By Burkholder-Davis-Gundy’s and Minkowski’s inequalities and the fact that the families $\{d_r(x), x\in{\mathbb{R}}^d\}$ and $\{p^W(r,z;t,x),x,z\in{\mathbb{R}}^d\}$ are independent, we have $$\begin{aligned}
\label{unq}
\sup_{x\in{\mathbb{R}}^d}\|d_t(x)\|_{2p}^2\leq &c_p\|\kappa\|_{\infty}\int_0^t\sup_{x\in{\mathbb{R}}^d}\|d_r(x)\|_{2p}^2\Big(\int_{{\mathbb{R}}^d}\left\|p^W(r,z;t,x)\right\|_{2p}dz\Big)^2dr\nonumber\\
\leq &C\int_0^t\sup_{x\in{\mathbb{R}}^d}\|d_r(x)\|_{2p}^2dr.\end{aligned}$$ Due to Grönwall's lemma and the fact that $d_0\equiv 0$, (\[unq\]) implies $d_t(x)\equiv 0$, a.s. It follows that the solution to (\[crd\]) is unique.
For the uniform boundedness (\[unibd\]), let $u^*(t)=\sup_{x\in\mathbb{R}^d}\left\|u_t(x)\right\|_{2p}^2$. We can show that $$\begin{aligned}
u^*(t)\leq &2\|\mu\|_{\infty}^2\Big(\sup_{x\in\mathbb{R}^d}\int_{\mathbb{R}^d}\left\|p^W(0,z;t,x)\right\|_{2p}dz\Big)^2\\
&+2\|\kappa\|_{\infty}\int_0^tu^*(r)\Big(\sup_{x\in\mathbb{R}^d}\int_{\mathbb{R}^d}\left\|p^W(r,z;t,x)\right\|_{2p}dz\Big)^2dr\\
\leq& c_1+c_2\int_0^tu^*(r)dr.\end{aligned}$$ Then, (\[unibd\]) is a consequence of Grönwall's lemma.
Assume that $\kappa$ and $\mu$ are bounded. Let $u=\{u_t(x),0< t\leq T, x\in\mathbb{R}^d\}$ be the unique strong solution to (\[crd\]) in the sense of Definition \[strsol\]. Then, $u$ is the strong solution to (\[dqmvp\]) in the sense of Definition \[def\].
Let $u=\{u_t(x),t\in[0,T],x\in{\mathbb{R}}^d\}$ be the unique solution to the SPDE (\[crd\]), and write $Z(dt, dx)=u_t(x)V(dt,dx)$ for all $t\in[0,T]$ and $x\in{\mathbb{R}}^d$. Then, it suffices to show that $u$ satisfies the following equation: $$\begin{aligned}
\label{futf0}
\langle u_t, \phi \rangle=&\langle \mu, \phi\rangle+\int_0^t \langle u_s, A\phi\rangle ds+\int_0^t \int_{\mathbb{R}^d}\langle u_s, \nabla \phi^* h(y-\cdot) \rangle W(ds, dy)\nonumber\\
&+\int_0^t\int_{\mathbb{R}^d}\phi(x)Z(ds,dx),\end{aligned}$$ for any $\phi\in C^2_b\left(\mathbb{R}^d\right)$.
Write $${\mathbb{E}}^W_{s,x}(\phi(\xi_t)):={\mathbb{E}}\big(\phi(\xi_t)|W, \xi_s=x\big)=\int_{{\mathbb{R}}^d}\phi(z)p^W(s,x;t,z)dz.$$ As $u$ is the strong solution to (\[crd\]), the following identities are satisfied: $$\begin{aligned}
\langle u_t, \phi\rangle=\left\langle \mu, {\mathbb{E}}^W_{0,\cdot}(\phi(\xi_t)) \right\rangle+\int_0^t\int_{\mathbb{R}^d}{\mathbb{E}}^W_{s,z}(\phi(\xi_t))Z(ds,dz),\end{aligned}$$ $$\begin{aligned}
\int_0^t \langle u_s, A\phi\rangle ds=\int_0^t\left\langle \mu, {\mathbb{E}}^W_{0,\cdot}(A\phi(\xi_s)) \right\rangle ds+\int_0^t\int_0^s\int_{\mathbb{R}^d}{\mathbb{E}}^W_{r,z}(A\phi(\xi_s))Z(dr,dz)ds,\end{aligned}$$ and $$\begin{aligned}
\int_0^t \int_{\mathbb{R}^d}&\left\langle u_s, \nabla \phi^* h(y-\cdot) \right\rangle W(ds, dy)=\int_0^t \int_{\mathbb{R}^d}\left\langle \mu, {\mathbb{E}}^W_{0,\cdot}\big(\nabla \phi(\xi_s)^*h(y-\xi_s)\big) \right\rangle W(ds, dy)\\
&\ +\int_0^t\int_{\mathbb{R}^d}\int_0^s\int_{\mathbb{R}^d}{\mathbb{E}}^W_{r,z}\big((\nabla \phi(\xi_s)^*h(y-\xi_s)\big)Z(dr,dz) W(ds, dy).\end{aligned}$$ Thus by stochastic Fubini’s theorem, we have $$\begin{aligned}
\label{futf}
&\langle u_t, \phi \rangle-\langle \mu, \phi\rangle-\int_0^t \langle u_s, A\phi\rangle ds-\int_0^t \int_{\mathbb{R}^d}\langle u_s, \nabla \phi^* h(y-\cdot) \rangle W(ds, dy)\\
=&\bigg\langle \mu, {\mathbb{E}}^W_{0,\cdot}\Big(\phi(\xi_t)-\phi(\xi_0)-\int_0^t A\phi(\xi_s)ds -\int_0^t \int_{\mathbb{R}^d} \nabla \phi(\xi_s)^*h(y-\xi_s)W(ds,dy)\Big)\bigg\rangle\nonumber\\
&+\int_0^t\int_{\mathbb{R}^d}{\mathbb{E}}^W_{s,z}\Big(\phi(\xi_t)-\int_s^tA\phi(\xi_r)dr-\int_s^t\int_{\mathbb{R}^d}\nabla \phi(\xi_r)^*h(y-\xi_r) W(dr, dy)\Big)Z(ds,dz).\nonumber\end{aligned}$$ Notice that by Itô’s formula, we have $$\begin{aligned}
\label{fxiito}
\phi(\xi_t^{s,x})=&\phi(x)+\int_s^t A \phi(\xi^{s,x}_r)dr+\int_s^t\nabla \phi(\xi^{s,x}_r)^*dB_r\nonumber\\
&+\int_s^t \int_{\mathbb{R}^d}\nabla \phi(\xi^{s,x}_r)^* h(y-\xi^{s,x}_r)W(dr,dy).\end{aligned}$$ Then, (\[futf0\]) follows from (\[futf\]) and (\[fxiito\]).
Proof of Theorem \[tjhc\]
=========================
In this section, we prove Theorem \[tjhc\] by showing the Hölder continuity of $u_t(x)$ in the spatial and time variables separately:
\[phcs\] Suppose that $h\in H_2^3\left(\mathbb{R}^d\right)$, $\|\kappa\|_{\infty}<\infty$, and $\mu\in L^1\left(\mathbb{R}^d\right)$ is bounded. Then, for any $0< s<t\leq T$, $x,y\in{\mathbb{R}}^d$, $\beta\in(0,1)$, and $p>1$, there exists a constant $C$ depending on $T$, $d$, $\|h\|_{3,2}$, $\|\mu\|_{\infty}$, $\|\kappa\|_{\infty}$, $p$, and $\beta$, such that the following inequalities are satisfied: $$\begin{aligned}
\left\|u_t(y)-u_t(x)\right\|_{2p}\leq &Ct^{-\frac{1}{2}}|y-x|^{\beta},\label{mpdhcsv}\\
\left\|u_t(x)-u_s(x)\right\|_{2p}\leq &Cs^{-\frac{1}{2}}(t-s)^{\frac{1}{2}\beta}.\label{mpdhctv}\end{aligned}$$
Then, Theorem \[tjhc\] is simply a corollary of Proposition \[phcs\]. In order to prove Proposition \[phcs\], we need the following Hölder continuity results for the conditional transition density $p^W(r,z;t,x)$:
\[lmdtpirl\] Suppose that $h\in H_2^3(\mathbb{R}^d)$, $0\leq r<s<t\leq T$, $x,y\in\mathbb{R}^d$, and $\beta\in(0,1)$. Then, there exists $C>0$, depending on $T$, $d$, $\|h\|_{3,2}$, $p$ and $\beta$, such that the following inequalities are satisfied: $$\begin{aligned}
\int_{\mathbb{R}^d}\left\|p^W(r,z;t,y)-p^W(r,z;t,x)\right\|_{2p}dz\leq &C(t-r)^{-\frac{1}{2}\beta}\left|y-x\right|^{\beta},\label{mdtpirl}\\
\int_{\mathbb{R}^d}\left\|p^W(r,z;t,x)-p^W(r,z;s,x)\right\|_{2p}dz\leq &C (s-r)^{-\frac{1}{2}\beta}(t-s)^{\frac{1}{2}\beta}.\label{idtmpd}\end{aligned}$$
Before the proof, let us firstly derive a variant of the density formula (\[dfrk\]). It will be used in the proof of (\[idtmpd\]). Choose $\phi\in C^2_b\left(\mathbb{R}^n\right)$, such that ${\mathbf{1}}_{B(0,1)}\leq \phi\leq {\mathbf{1}}_{B(0,4)}$, and its first and second partial derivatives are all bounded by $1$. For any $x\in{\mathbb{R}}^d$ and $\rho>0$, we set $\phi^x_{\rho}:=\phi(\frac{\cdot -x}{\rho})$. Assume that $F$ satisfies all the properties in Theorem \[bedf\]. Let $Q_n$ be the $n$-dimensional Poisson kernel (see (\[posnkn\])). Then, the density of $F$ can be represented as follows: $$\begin{aligned}
\label{dfrkip}
p_F(x)=&\sum_{i,j_1,j_2=1}^n {\mathbb{E}}\left[\partial_{j_1} Q_n(F-x)\left\langle DF^{j_1}, DF^{j_2}\right\rangle_H \sigma^{j_2i} H_{(i)}(F, \phi_{\rho}^x(F))\right]\nonumber\\
=& {\mathbb{E}}\bigg[\Big\langle DQ_n(F-x), \sum_{i,j_2=1}^nH_{(i)}(F, \phi_{\rho}^x(F))\sigma^{j_2i} DF^{j_2} \Big\rangle_H\bigg]\nonumber\\
=&\sum_{i=1}^n {\mathbb{E}}\Big[Q_n(F-x)\sum_{j_2=1}^n\delta\left[H_{(i)}(F, \phi_{\rho}^x(F))\sigma^{j_2i}DF^{j_2}\right]\Big]\nonumber\\
=&-\sum_{i=1}^n {\mathbb{E}}\big[Q_n(F-x)H_{(i,i)}(F, \phi_{\rho}^x(F))\big].\end{aligned}$$
Let $\xi_t=\xi_t^{r,z}$ be defined in (\[sde\]).\
[**(i)**]{} Choose $p_1\in(d,3pd]$, let $p_2=2p_1$, and $p_3=\frac{p_1p_2}{p_2-p_1}=p_2$. Then, by (\[dmvte\]) and Hölder’s inequality, for any fixed $z, x,y\in\mathbb{R}^d$ and $\rho>0$, we can show that $$\begin{aligned}
I(z):=&\|p^W(r,z;t,x)-p^W(r,z;t,y)\|_{2p}\\
\leq &C|y-x|\left\|\mathbb{P}^W\left(|\xi_t-\tau|\leq 4\rho\right)^{\frac{1}{p_2}}\right\|_{6p}\max_{1\leq i,j\leq d}\Big\{\Big\|\big|\|H_{(i)}(\xi_t,1)\|_{p_2}^W\big|^{d-1}\Big\|_{6p}\\
&\times\Big(\frac{1}{\rho^2}+\frac{2}{\rho}\left\|\|H_{(i)}(\xi_t,1)\|_{p_2}^{W}\right\|_{6p}+\left\|\|H_{(i,j)}(\xi_t,1)\|_{p_2}^W\right\|_{6p}\Big)\Big\},\end{aligned}$$ where $\tau=cx+(1-c)y$ for some $c\in(0,1)$ that depends on $z$, $x$ and $y$.
Let $\rho=\frac{\sqrt{t-r}}{8}$. Arguing as in the proof of Proposition \[mectpd\], we can show that $$\begin{aligned}
\label{dmitgd}
I(z)\leq& C |y-x|(t-r)^{-\frac{d+1}{2}}\exp\Big(-\frac{k|\tau-z|^2}{(6p\vee p_2)(t-r)}\Big)\nonumber\\
\leq &C |y-x|(t-r)^{-\frac{d+1}{2}}\exp\Big(-\frac{k|\tau-z|^2}{6pd(t-r)}\Big),\end{aligned}$$ where $k$ is defined in (\[defk\]) and $C>0$ depends on $T$, $d$, $p$, and $\|h\|_{3,2}$.
Notice that even if we fix $x,y\in {\mathbb{R}}^d$, $\tau$ is still a function of $z$ for which there is no explicit formula, so it is not easy to integrate $I$ directly. Without loss of generality, assume that $x=0$ and $y=(y_1, 0,\dots, 0)$, where $y_1\geq 0$. Then $\tau=((1-c)y_1, 0, \dots, 0)$, where $c=c(z)\in(0, 1)$. Let $\widehat{k}=\frac{k}{6pd}$. For any $z=(z_1,\dots, z_d)\in {\mathbb{R}}^d$, we consider the following cases.
\(a) If $z_1\leq 0$, then $$\begin{aligned}
\label{2exp1}
\exp\Big(-\frac{k|\tau-z|^2}{6pd(t-r)}\Big)\leq \exp\Big(-\frac{\widehat{k}|z|^2}{t-r}\Big).\end{aligned}$$ (b) If $z_1\geq y_1$, then $$\begin{aligned}
\label{2exp2}
\exp\Big(-\frac{k|\tau-z|^2}{6pd(t-r)}\Big)\leq \exp\Big(-\frac{\widehat{k}|y-z|^2}{t-r}\Big).\end{aligned}$$ (c) If $0< z_1< y_1$, then $$\begin{aligned}
\label{2exp3}
\exp\Big(-\frac{k|\tau-z|^2}{6pd(t-r)}\Big)\leq \exp\Big(-\frac{\widehat{k}|\tau_0-z|^2}{t-r}\Big),\end{aligned}$$ where $\tau_0=(z_1, 0, \dots, 0)$.
Therefore, combining (\[dmitgd\]) - (\[2exp3\]), we have $$\begin{aligned}
\label{imdsv}
\int_{\mathbb{R}^d}I(z)dz\leq C |y-x|(t-r)^{-\frac{d+1}{2}}\left(I_1+I_2+I_3\right),\end{aligned}$$ where $$\begin{aligned}
I_1=\int_{-\infty}^0dz_1\int_{\mathbb{R}^{d-1}}\exp\Big(-\frac{\widehat{k}|z|^2}{t-r}\Big)dz_d\dots dz_2,\\
I_2=\int_{|y|}^{\infty}dz_1\int_{\mathbb{R}^{d-1}}\exp\Big(-\frac{\widehat{k}|y-z|^2}{t-r}\Big)dz_d\dots dz_2,\\
I_3=\int_0^{|y|}dz_1 \int_{\mathbb{R}^{d-1}}\exp\Big(-\frac{\widehat{k}|\tau_0-z|^2}{t-r}\Big)dz_d\dots dz_2.\end{aligned}$$ By a change of variables, it is easy to show that $$\begin{aligned}
\label{imdsv12}
I_1+I_2=\int_{\mathbb{R}^d}\exp\Big(-\frac{\widehat{k}|z|^2}{t-r}\Big)dz=\big(\pi\widehat{k}^{-1}\big)^{\frac{d}{2}}(t-r)^{\frac{d}{2}}.\end{aligned}$$ For $I_3$, we compute the integral as follows: $$\begin{aligned}
\label{imdsv3}
I_3=&\int_0^{|y|}dz_1\int_{\mathbb{R}^{d-1}}\exp\Big(-\frac{\widehat{k}\left(z_2^2+\dots+z_d^2\right)}{t-r}\Big)dz_d\dots dz_2\nonumber\\
=&\big(\pi\widehat{k}^{-1}\big)^{\frac{d-1}{2}}(t-r)^{\frac{d-1}{2}}|y|.\end{aligned}$$ Thus combining (\[imdsv\]) - (\[imdsv3\]), we have $$\begin{aligned}
\label{mdtpirl1}
\int_{\mathbb{R}^d}I(z)dz\leq &C\big[(t-r)^{-\frac{1}{2}}|y|+(t-r)^{-1}|y|^2\big] \nonumber\\
=&C\big[(t-r)^{-\frac{1}{2}}|y-x|+(t-r)^{-1}|y-x|^2\big].\end{aligned}$$ It is easy to see that the inequality (\[mdtpirl1\]) holds for all $x,y\in{\mathbb{R}}^d$.
On the other hand, by Proposition \[mectpd\], we have $$\begin{aligned}
\label{mdtpirl2}
\int_{\mathbb{R}^d}I(z)dz\leq\int_{\mathbb{R}^d}\big(\|p^W(r,z;t,y)\|_{2p}+\|p^W(r, z;t,x)\|_{2p}\big)dz\leq C.\end{aligned}$$ Therefore, by (\[mdtpirl1\]) and (\[mdtpirl2\]), for any $\beta_1,\beta_2\in(0,1)$, we have $$\begin{aligned}
\int_{\mathbb{R}^d}I(z)dz\leq C\big[(t-r)^{-\frac{1}{2}\beta_1}\left|y-x\right|^{\beta_1}+(t-r)^{-\beta_2}\left|y-x\right|^{2\beta_2}\big].\end{aligned}$$ Then, (\[mdtpirl\]) follows by choosing $\beta=\beta_1=2\beta_2$.
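Explicitly, writing $a=(t-r)^{-\frac{1}{2}}|y-x|$ and $b=(t-r)^{-1}|y-x|^2$, the bounds (\[mdtpirl1\]) and (\[mdtpirl2\]) give $$\int_{\mathbb{R}^d}I(z)dz\leq C\min\{a+b,1\}\leq C\big(\min\{a,1\}+\min\{b,1\}\big)\leq C\big(a^{\beta_1}+b^{\beta_2}\big),$$ since $\min\{c,1\}\leq c^{\theta}$ for all $c>0$ and $\theta\in(0,1)$; with $\beta_1=\beta$ and $\beta_2=\beta/2$ both terms equal $(t-r)^{-\frac{1}{2}\beta}|y-x|^{\beta}$.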
[**(ii)**]{} Let $\rho_1=\sqrt{t-r}$ and $\rho_2=\sqrt{s-r}$. By density formula (\[dfrkip\]), we have $$\begin{aligned}
\label{dcdtv}
&\left|p^W(r,z;t,x)-p^W(r,z;s,x)\right|\nonumber\\
\leq &\sum_{i=1}^d\Big|{\mathbb{E}}^W\left\{\left[Q_d(\xi_t-x)-Q_d(\xi_s-x)\right]H_{(i,i)}(\xi_s,\phi_{\rho_2}^x(\xi_s))\right\}\Big|\nonumber\\
&+\sum_{i=1}^d\Big|{\mathbb{E}}^W\left\{Q_d(\xi_t-x)\left[H_{(i,i)}(\xi_t,\phi_{\rho_1}^x(\xi_t))-H_{(i,i)}(\xi_s,\phi_{\rho_2}^x(\xi_s))\right]\right\}\Big|\nonumber\\
=&I_1+I_2.\end{aligned}$$
Estimate for $I_1$: Note that by the local property of $\delta$ (see Proposition 1.3.15 of Nualart [@springer-06-nualart]), $H_{(i,i)}(\xi_s,\phi_{\rho_2}^x(\xi_s))$ vanishes unless $\xi_s\in B(x,4\rho_2)$. Choose $p_1\in (d, 2pd]$. Let $p_2=3p_1$ and $p_3=\frac{3p_1}{3p_1-2}$. Then, $\frac{2}{p_2}+\frac{1}{p_3}=1$. Thus, by Hölder’s inequality, we have $$\begin{aligned}
\label{medcdtv1p}
\left\|I_1\right\|_{2p}\leq &d\big\|\|{\mathbf{1}}_{B(x, 4\rho_2)}(\xi_s)\|^W_{p_2}\big\|_{6p}\big\|\|Q_d(\xi_t-x)-Q_d(\xi_s-x)\|^W_{p_3}\big\|_{6p}\nonumber\\
&\times \max_{1\leq i\leq d}\big\|\|H_{(i,i)}(\xi_s,\phi_{\rho_2}^x(\xi_s))\|^W_{p_2}\big\|_{6p}.\end{aligned}$$ By Proposition \[cmijt\], and the fact that $p_2=3p_1\leq 6pd$, the first factor satisfies the following inequality $$\begin{aligned}
\label{mpbxis}
\big\|\|{\mathbf{1}}_{B(x, 4\rho_2)}(\xi_s)\|^W_{p_2}\big\|_{6p}=\big\|\mathbb{P}^W(|\xi_s-x|<4\rho_2)^{\frac{1}{p_2}}\big\|_{6p}\leq C\exp\Big(-\frac{k|z-x|^2}{6pd(s-r)}\Big).\end{aligned}$$ By Lemmas \[ehn\] and \[ipfpd\], for all $1\leq i\leq d$, the last factor can be estimated as follows: $$\begin{aligned}
\label{h2xis}
\big\|\|H_{(i,i)}(\xi_s,\phi_{\rho_2}^x(\xi_s))\|^W_{p_2}\big\|_{6p}\leq &\frac{1}{\rho_2^2}+\frac{2}{\rho_2}\big\|\|H_{(i)}(\xi_s,1)\|^W_{p_2}\big\|_{6p}+\big\|\|H_{(i,i)}(\xi_s,1)\|^W_{p_2}\big\|_{6p}\nonumber\\
\leq & C(s-r)^{-1}.\end{aligned}$$ We estimate the second factor by the mean value theorem. Let $\eta_1=|\xi_t-x|$ and $\eta_2=|\xi_s-x|$. Then, we can write $$Q_d(\xi_t-x)-Q_d(\xi_s-x)=\begin{cases}
A_2^{-1}\left(\log \eta_1 -\log \eta_2\right), &\text{if}\ d=2, \\
-A_d^{-1}\big[\eta_1^{-(d-2)}-\eta_2^{-(d-2)}\big], &\text{if}\ d\geq 3.
\end{cases}$$ Thus, by the mean value theorem, it follows that $$\begin{aligned}
\left|Q_d(\xi_t-x)-Q_d(\xi_s-x)\right|=\frac{c_d|\eta_1-\eta_2|}{|\zeta\eta_1+(1-\zeta)\eta_2|^{d-1}},\end{aligned}$$ where $c_d$ is a constant coming from the Poisson kernel, and $\zeta\in(0,1)$ is a random number that depends on $\eta_1$ and $\eta_2$. Notice that $f(x)=x^{-(d-1)}$ is a convex function on $(0,\infty)$, and $\mathbb{P}(\eta_1>0)=\mathbb{P}(\eta_2>0)=1$, so we have $$\big(\zeta\eta_1+(1-\zeta)\eta_2\big)^{-(d-1)}\leq \zeta\eta_1^{-(d-1)}+(1-\zeta)\eta_2^{-(d-1)},\ a.s.$$ Let $q=\frac{p_1}{p_1-1}$, then $\frac{1}{q}+\frac{1}{p_2}=\frac{1}{p_3}$. As a consequence of Hölder’s inequality, we have $$\begin{aligned}
\label{mdqts0}
&\big\|\|Q_d(\xi_t-x)-Q_d(\xi_s-x)\|^W_{p_3}\big\|_{6p}\leq c_d\bigg\|\Big\|\frac{|\eta_1-\eta_2|}{|\zeta\eta_1+(1-\zeta)\eta_2|^{d-1}}\Big\|^W_{p_3}\bigg\|_{6p}\\
&\qquad\leq C \big\|\|\eta_1-\eta_2\|_{p_2}^W\big\|_{12p}\Big\|\left\|\big(\zeta\eta_1+(1-\zeta)\eta_2\big)^{-(d-1)}\right\|^W_q\Big\|_{12p}\nonumber\\
&\qquad\leq C \|\eta_1-\eta_2\|_{12pd}\Big[\Big\|\big\|\zeta\eta_1^{-(d-1)}\big\|^W_q\Big\|_{12p}+\Big\|\big\|(1-\zeta)\eta_2^{-(d-1)}\big\|^W_q\Big\|_{12p}\Big]\nonumber\\
&\qquad\leq C \big\||\xi_t-\xi_s|\big\|_{12pd}\Big[\Big\|\big\||\xi_t-x|^{-(d-1)}\big\|^W_q\Big\|_{12p}+\Big\|\big\||\xi_s-x|^{-(d-1)}\big\|^W_q\Big\|_{12p}\Big].\nonumber\end{aligned}$$ The negative moments of $\xi_t-x$ can be estimated by (\[ehxi\]), Jensen’s inequality, and Lemma \[blerk\]: $$\begin{aligned}
\label{mdqts1}
\Big\|\big\||\xi_t-x|^{-(d-1)}\big\|^W_{q}\Big\|_{12p}\leq &C\max_{1\leq i\leq d}\big\|\big|\|H_{(i)}(\xi_t,1)\|_{p_1}^W\big|^{d-1}\big\|_{12p}\nonumber\\
\leq &C \max_{1\leq i\leq d}\left\|H_{(i)}(\xi_t, 1)\right\|_{12pd}^{d-1}
\leq C(t-r)^{-\frac{d-1}{2}}.\end{aligned}$$ Then, by (\[mdqts0\]) - (\[mdqts1\]), we have $$\begin{aligned}
\label{mdqts}
\big\|\|Q_d(\xi_t-x)-Q_d(\xi_s-x)\|^W_{p_3}\big\|_{6p}\leq C(t-s)^{\frac{1}{2}}(s-r)^{-\frac{d-1}{2}}.\end{aligned}$$ Thus combining (\[medcdtv1p\]), (\[mpbxis\]), (\[h2xis\]) and (\[mdqts\]), we have $$\begin{aligned}
\|I_1\|_{2p}\leq C\exp\Big(-\frac{k|z-x|^2}{6pd(s-r)}\Big)(s-r)^{-\frac{d+1}{2}}(t-s)^{\frac{1}{2}}.\end{aligned}$$ This implies $$\begin{aligned}
\label{medcdtv1}
\int_{\mathbb{R}^d}\|I_1\|_{2p}dz\leq C(s-r)^{-\frac{1}{2}}(t-s)^{\frac{1}{2}}.\end{aligned}$$
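Explicitly, the integration in $z$ only uses the Gaussian integral $$\int_{\mathbb{R}^d}\exp\Big(-\frac{k|z-x|^2}{6pd(s-r)}\Big)dz=\Big(\frac{6\pi pd(s-r)}{k}\Big)^{\frac{d}{2}},$$ and the resulting factor $(s-r)^{\frac{d}{2}}$ combines with $(s-r)^{-\frac{d+1}{2}}$ to give the $(s-r)^{-\frac{1}{2}}$ in (\[medcdtv1\]).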
Estimate for $I_2$: Recall that $\gamma_t=(\langle D\xi_t^i, D\xi_t^j\rangle_H)_{i,j=1}^d=\sigma_t^{-1}$. By a computation analogous to (\[dfrkip\]), run in reverse, we can show that $$\begin{aligned}
\label{medcdtv2}
&{\mathbb{E}}^W\left[Q_d(\xi_t-x)\left(H_{(i,i)}(\xi_t,\phi_{\rho_1}^x(\xi_t))-H_{(i,i)}(\xi_s,\phi_{\rho_2}^x(\xi_s))\right)\right]\nonumber\\
=& -\sum_{j_1,j_2=1}^d{\mathbb{E}}^W\left[\partial_{j_2} Q_d(\xi_t-x)\langle D\xi_t^{j_2}, D\xi_t^{j_1}\rangle_H H_{(i)}\left(\xi_t,\phi_{\rho_1}^x(\xi_t)\right)\sigma_t^{j_1i}\right]\nonumber\\
&+\sum_{j_1,j_2=1}^d{\mathbb{E}}^W\left[\partial_{j_2}Q_d(\xi_t-x)\langle D\xi_t^{j_2}, D\xi_s^{j_1}\rangle_H H_{(i)}\left(\xi_s,\phi_{\rho_2}^x(\xi^{r,z}_s)\right)\sigma_s^{j_1i}\right]\nonumber\\
=&-{\mathbb{E}}^W\left[\partial_iQ_d(\xi_t-x) \left(H_{(i)}\left(\xi_t,\phi_{\rho_1}^x(\xi_t)\right)- H_{(i)}\left(\xi_s,\phi_{\rho_2}^x(\xi_s)\right)\right)\right]\nonumber\\
&+ \sum_{j_1,j_2=1}^d{\mathbb{E}}^W\left[\partial_{j_2} Q_d(\xi_t-x)\langle D\xi_t^{j_2}-D\xi_s^{j_2}, D\xi_s^{j_1}\rangle_H H_{(i)}\left(\xi_s,\phi^x_{\rho_2}(\xi_s)\right)\sigma_s^{j_1i}\right]\nonumber\\
:=&J_1+J_2.\end{aligned}$$ By Lemma \[ipfpd\], we have $$\begin{aligned}
\label{dhtv}
&\left|H_{(i)}\left(\xi_t,\phi_{\rho_1}^x(\xi_t)\right)-H_{(i)}\left(\xi_s,\phi_{\rho_2}^x(\xi_s)\right)\right|\leq \left|\partial_i\phi_{\rho_1}^x(\xi_t)-\partial_i\phi_{\rho_2}^x(\xi_s)\right|\\
&\hspace{20mm} +|\phi_{\rho_2}^x(\xi_s)|\left|H_{(i)}(\xi_t,1)-H_{(i)}(\xi_s,1)\right|+\left|H_{(i)}(\xi_t,1)\right|\left|\phi_{\rho_1}^x(\xi_t)-\phi_{\rho_2}^x(\xi_s)\right|.\nonumber\end{aligned}$$ By the mean value theorem, for some random numbers $c_1,c_2\in(0,1)$, we have $$\begin{aligned}
\label{dphitv}
\left|\phi_{\rho_1}^x(\xi_t)-\phi_{\rho_2}^x(\xi_s)\right|=&\big|{\mathbf{1}}_{B(x, 4\rho_1)}(\xi_t)\vee {\mathbf{1}}_{B(x, 4\rho_2)}(\xi_s)\big|\Big|\phi\Big(\frac{\xi_t-x}{\rho_1}\Big)-\phi\Big(\frac{\xi_s-x}{\rho_2}\Big)\Big|\nonumber\\
=&\big|{\mathbf{1}}_{B(x, 4\rho_1)}(\xi_t)\vee {\mathbf{1}}_{B(x, 4\rho_2)}(\xi_s)\big|\nonumber\\
&\times\Big|\nabla\phi \Big(c_1\frac{\xi_t-x}{\rho_1}+(1-c_1)\frac{\xi_s-x}{\rho_2}\Big)^*\cdot\Big(\frac{\xi_t-x}{\rho_1}-\frac{\xi_s-x}{\rho_2}\Big)\Big|\nonumber\\
\leq &\big|{\mathbf{1}}_{B(x, 4\rho_1)}(\xi_t)\vee {\mathbf{1}}_{B(x, 4\rho_2)}(\xi_s)\big|\Big|\frac{\xi_t-x}{\rho_1}-\frac{\xi_s-x}{\rho_2}\Big|,\end{aligned}$$ and $$\begin{aligned}
\label{ddphitv}
&\left|\partial_i\phi_{\rho_1}^x(\xi_t)-\partial_i\phi_{\rho_2}^x(\xi_s)\right|=\Big|\rho_1^{-1}\partial_i\phi\Big(\frac{\xi_t-x}{\rho_1}\Big)-\rho_2^{-1}\partial_i\phi\Big(\frac{\xi_s-x}{\rho_2}\Big)\Big|\\
&\quad\leq\frac{1}{\rho_1}\Big|\nabla\partial_i\phi \Big(c_2\frac{\xi_t-x}{\rho_1}+(1-c_2)\frac{\xi_s-x}{\rho_2}\Big)^*\cdot\Big(\frac{\xi_t-x}{\rho_1}-\frac{\xi_s-x}{\rho_2}\Big)\Big|\\
&\qquad+\Big|\partial_i\phi_{\rho_2}^x(\xi_s)\Big|\Big|\frac{1}{\rho_1}-\frac{1}{\rho_2}\Big|\nonumber\\
&\quad\leq \frac{1}{\rho_1}\big({\mathbf{1}}_{B(x, 4\rho_1)}(\xi_t)\vee {\mathbf{1}}_{B(x, 4\rho_2)}(\xi_s)\big)\Big|\frac{\xi_t-x}{\rho_1}-\frac{\xi_s-x}{\rho_2}\Big|+{\mathbf{1}}_{B(x, 4\rho_2)}(\xi_s)\Big|\frac{1}{\rho_1}-\frac{1}{\rho_2}\Big|.\nonumber\end{aligned}$$ Choose $q\in(d,3pd]$, let $p_1=\frac{q}{q-1}$, $p_2=2q$, $p_3=4q$. Then, $$\frac{1}{p_1}+\frac{2}{p_2}=\frac{1}{p_1}+\frac{1}{p_2}+\frac{2}{p_3}=1.$$ Then, by (\[dhtv\]) - (\[ddphitv\]), and Hölder’s inequality, we have $$\begin{aligned}
\label{medcdtv21}
\|J_1\|_{2p}\leq&\rho_1^{-1}\big\|\|\partial_iQ_d(\xi_t-x) \|_{p_1}^W\big\|_{6p}\Big\|\big\|{\mathbf{1}}_{B(x, 4\rho_1)}(\xi_t)\vee {\mathbf{1}}_{B(x, 4\rho_2)}(\xi_s)\big\|_{p_2}^W\Big\|_{6p}\nonumber\\
&\times \bigg\|\Big\|\Big|\frac{\xi_t-x}{\rho_1}-\frac{\xi_s-x}{\rho_2}\Big|\Big\|_{p_2}^W\bigg\|_{6p}\nonumber\\
&+\big\|\|\partial_iQ_d(\xi_t-x) \|_{p_1}^W\big\|_{6p}\Big\|\big\|{\mathbf{1}}_{B(x, 4\rho_2)}(\xi_s)\big\|_{p_2}^W\Big\|_{6p}\big\|\|\rho_1^{-1}-\rho_2^{-1}\|_{p_2}\big\|_{6p}\nonumber\\
&+\big\|\|\partial_iQ_d(\xi_t-x)\|_{p_1}^W\big\|_{6p}\Big\|\big\|{\mathbf{1}}_{B(x, 4\rho_2)}(\xi_s)\big\|_{p_2}^W\Big\|_{6p}\big\|\|H_{(i)}(\xi_t,1)-H_{(i)}(\xi_s,1)\|_{p_2}^W\big\|_{6p}\nonumber\\
&+\big\|\|\partial_iQ_d(\xi_t-x) \|_{p_1}^W\big\|_{6p}\Big\|\big\|{\mathbf{1}}_{B(x, 4\rho_1)}(\xi_t)\vee {\mathbf{1}}_{B(x, 4\rho_2)}(\xi_s)\big\|_{p_2}^W\Big\|_{6p}\nonumber\\
&\times \bigg\|\Big\|\Big|\frac{\xi_t-x}{\rho_1}-\frac{\xi_s-x}{\rho_2}\Big|\Big\|_{p_3}^W\bigg\|_{12p}\times \big\|\|H_{(i)}(\xi_t,1)\|_{p_3}^W\big\|_{12p}\nonumber\\
:=&L_1+L_2+L_3+L_4.\end{aligned}$$ In order to estimate the moments of $\frac{\xi_t-x}{\rho_1}-\frac{\xi_s-x}{\rho_2}$, we rewrite this random vector in the following way: $$\frac{\xi_t-x}{\rho_1}-\frac{\xi_s-x}{\rho_2}=\frac{\xi_t-\xi_s}{\rho_1}+\left(\xi_s-z\right)\Big(\frac{1}{\rho_1}-\frac{1}{\rho_2}\Big)+(z-x)\Big(\frac{1}{\rho_1}-\frac{1}{\rho_2}\Big).$$ It follows that $$\begin{aligned}
&\Big\|\Big|\frac{\xi_t-x}{\rho_1}-\frac{\xi_s-x}{\rho_2}\Big|\Big\|_{12p\vee p_3}\leq(t-r)^{-\frac{1}{2}}\big\||\xi_t-\xi_s|\big\|_{12pd}\\
&\hspace{15mm}+\frac{(t-r)^{\frac{1}{2}}-(s-r)^{\frac{1}{2}}}{(t-r)^{\frac{1}{2}}(s-r)^{\frac{1}{2}}}\big\||\xi_s-z|\big\|_{12pd}+|z-x|\frac{(t-r)^{\frac{1}{2}}-(s-r)^{\frac{1}{2}}}{(t-r)^{\frac{1}{2}}(s-r)^{\frac{1}{2}}}.\end{aligned}$$ According to Lemma \[lxigrv\], $\xi_t-\xi_s$ and $\xi_s-z$ are Gaussian random vectors with mean $0$, and covariance matrix $(t-s)(I+\rho(0))$ and $(s-r)(I+\rho(0))$ respectively. Therefore, we have $$\begin{aligned}
\label{mdgxi}
\Big\|\Big|\frac{\xi_t-x}{\rho_1}-\frac{\xi_s-x}{\rho_2}\Big|\Big\|_{12pd}\leq &c_{p,d}(t-r)^{-\frac{1}{2}}(t-s)^{\frac{1}{2}}+c_{p,d}\frac{(t-r)^{\frac{1}{2}}-(s-r)^{\frac{1}{2}}}{(t-r)^{\frac{1}{2}}(s-r)^{\frac{1}{2}}}(s-r)^{\frac{1}{2}}\nonumber\\
&+|z-x|\frac{(t-r)^{\frac{1}{2}}-(s-r)^{\frac{1}{2}}}{(t-r)^{\frac{1}{2}}(s-r)^{\frac{1}{2}}}\nonumber\\
\leq & C\big(|z-x|(s-r)^{-\frac{1}{2}}+1\big)(t-r)^{-\frac{1}{2}}(t-s)^{\frac{1}{2}}\end{aligned}$$ Therefore, by (\[mdgxi\]), Proposition \[cmijt\] and Lemma \[blerk\], we have $$\begin{aligned}
\label{medcdtv211}
L_1+L_4\leq &C(t-r)^{-\frac{d}{2}}\Big[\exp\Big(-\frac{k|z-x|^2}{6pd(t-r)}\Big)+\exp\Big(-\frac{k|z-x|^2}{6pd(s-r)}\Big)\Big]\nonumber\\
&\times\big(1+|z-x|(s-r)^{-\frac{1}{2}}\big)(t-s)^{\frac{1}{2}},\end{aligned}$$ and $$\begin{aligned}
\label{medcdtv212}
L_2+L_3\leq &C(t-r)^{-\frac{d}{2}}\exp\Big(-\frac{k|z-x|^2}{6pd(s-r)}\Big)(s-r)^{-\frac{1}{2}}(t-s)^{\frac{1}{2}}.\end{aligned}$$ Plugging (\[medcdtv211\]) and (\[medcdtv212\]) into (\[medcdtv21\]), we have $$\begin{aligned}
\label{medcdtv210}
\int_{\mathbb{R}^d}\left\|J_1\right\|_{2p}dz\leq C (s-r)^{-\frac{1}{2}}(t-s)^{\frac{1}{2}}.\end{aligned}$$ For $J_2$, notice that, by definition, $$\begin{aligned}
\langle D\xi_t^{j_2}-D\xi_s^{j_2}, D\xi_s^{j_1}\rangle_H=\sum_{k=1}^d\int_r^s \big(D^{(k)}_{\theta}\xi_t^{j_2}-D^{(k)}_{\theta}\xi_s^{j_2}\big)D^{(k)}_{\theta}\xi_s^{j_1}d\theta.\end{aligned}$$ By (\[sdexi1\]), we have $$\begin{aligned}
D^{(k)}_{\theta}\xi_t^{j_2}-D^{(k)}_{\theta}\xi_s^{j_2}={\mathbf{1}}_{[s,t]}(\theta)\delta_{j_2k}-\sum_{i=1}^d{\mathbf{1}}_{[r,t]}(\theta)\int_s^t D^{(k)}_{\theta}\xi_v^i dM^{ij_2}_v.\end{aligned}$$ By an argument similar to the one used in the proof of Lemma \[elgamma\], we can show that $$\begin{aligned}
\big\|{\mathbf{1}}_{[r,s]}(\theta)\big(D^{(k)}_{\theta}\xi_t^{j_2}-D^{(k)}_{\theta}\xi_s^{j_2}\big)\big\|_{2p}^2\leq C{\mathbf{1}}_{[r,s]}(\theta)(t-s).\end{aligned}$$ Therefore, by Hölder’s and Minkowski’s inequalities, we have $$\begin{aligned}
\label{mipdtv}
\left\|\langle D\xi_t^{j_2}-D\xi_s^{j_2}, D\xi_s^{j_1}\rangle_H\right\|_{2p}\leq &\sum_{k=1}^d\int_r^s \big\|{\mathbf{1}}_{[r,s]}(\theta)\big(D^{(k)}_{\theta}\xi_t^{j_2}-D^{(k)}_{\theta}\xi_s^{j_2}\big)\big\|_{4p}\big\|D^{(k)}_{\theta}\xi_s^{j_1}\big\|_{4p}d\theta\nonumber\\
\leq &C(s-r)(t-s)^{\frac{1}{2}}.\end{aligned}$$ Choose $q\in(d,3pd]$. Let $p_1=\frac{q}{q-1}$, $p_2=2q$ and $p_3=6q$. Then $\frac{1}{p_1}+\frac{1}{p_2}+\frac{3}{p_3}=1$. Thus, by (\[mipdtv\]), Hölder’s inequality, Lemmas \[elgamma\], \[ehn\], \[blerk\], and Proposition \[cmijt\], we have $$\begin{aligned}
\left\|J_2\right\|_{2p}\leq &\sum_{j_1,j_2=1}^d\big\|\|{\mathbf{1}}_{B(x, 4\rho_2)}(\xi_s)\|^W_{p_2}\big\|_{6p}\big\|\|\partial_{j_2} Q_d(\xi_t-x)\|^W_{p_1}\big\|_{6p}\nonumber\\
&\times\big\|\|\langle D\xi_t^{j_2}-D\xi_s^{j_2}, D\xi_s^{j_1}\rangle_H \|^W_{p_3}\big\|_{18p}\big\|\|H_{(i)}(\xi_s,\phi_{\rho_2}^x(\xi_s))\|^W_{p_3}\big\|_{18p}\big\|\|\sigma_s^{j_1i}\|^W_{p_3}\big\|_{18p}\nonumber\\
\leq &C\exp\Big(-\frac{k|z-x|^2}{6pd(s-r)}\Big)(t-r)^{-\frac{d-1}{2}}(t-s)^{\frac{1}{2}}(s-r)^{-\frac{1}{2}}.\end{aligned}$$ As a consequence, we have $$\begin{aligned}
\label{medcdtv220}
\int_{\mathbb{R}^d}\left\|J_2\right\|_{2p}dz\leq C (t-s)^{\frac{1}{2}}.\end{aligned}$$
Finally, combining (\[medcdtv1\]), (\[medcdtv210\]) and (\[medcdtv220\]), we have $$\begin{aligned}
\label{idtmpd1}
\int_{\mathbb{R}^d}\left\|p^W(r,z;t,x)-p^W(r,z;s,x)\right\|_{2p}dz\leq C (s-r)^{-\frac{1}{2}}(t-s)^{\frac{1}{2}}.\end{aligned}$$
On the other hand, by (\[edn\]), we have $$\begin{aligned}
\label{idtmpd2}
\int_{\mathbb{R}^d}\left\|p^W(r,z;t,x)-p^W(r,z;s,x)\right\|_{2p}dz\leq \int_{\mathbb{R}^d}\big(\|p^W(r,z;t,x)\|_{2p}+\|p^W(r,z;s,x)\|_{2p}\big)dz\leq C.\end{aligned}$$ Thus (\[idtmpd\]) follows by combining (\[idtmpd1\]) and (\[idtmpd2\]), raising the first bound to the power $\beta$ and the second to the power $1-\beta$.
By the convolution representation (\[crd\]), Burkholder-Davis-Gundy’s, and Minkowski’s inequalities, we have $$\begin{aligned}
\label{mpddm}
&\left\|u_t(y)-u_t(x)\right\|_{2p}\leq \Big\|\int_{\mathbb{R}^d}\mu(z)\left(p^W(0,z;t, y)-p^W(0,z;t,x)\right)dz\Big\|_{2p}\nonumber\\
&\qquad +\Big\|\int_0^t\int_{\mathbb{R}^d}u_r(z)\left(p^W(r,z;t,y)-p^W(r,z;t,x)\right)V(dz,dr)\Big\|_{2p}\nonumber\\
&\quad\leq\left\|\mu\right\|_{\infty}\int_{\mathbb{R}^d}\left\|p^W(0,z;t,y)-p^W(0,z;t,x)\right\|_{2p}dz\nonumber\\
&\qquad+\|\kappa\|_{\infty}^{\frac{1}{2}}\bigg(\int_0^t\Big(\int_{\mathbb{R}^d}\left\|u_r(z)\left(p^W(r,z;t,y)-p^W(r,z;t,x)\right)\right\|_{2p}dz\Big)^2dr\bigg)^{\frac{1}{2}}\nonumber\\
&\quad:=I_1+\|\kappa\|^{\frac{1}{2}}_{\infty}I_2.\end{aligned}$$ Note that $I_1$ can be estimated by Lemma \[lmdtpirl\]. For $I_2$, recall that $u(r,z)$ is independent of $p^W(r,z;t,y)$. Then, by Lemma \[eumcr\] and \[lmdtpirl\], we have $$\begin{aligned}
\label{mpddm2}
I_2\leq&\bigg(\int_0^t\sup_{z\in\mathbb{R}^d}\left\|u_r(z)\right\|_{2p}^2\Big(\int_{\mathbb{R}^d}\left\|p^W(r,z;t,y)-p^W(r,z;t,x)\right\|_{2p}dz\Big)^2dr\bigg)^{\frac{1}{2}}\nonumber\\
\leq&C|y-x|^{\beta}\Big(\int_0^t(t-r)^{-\beta}dr\Big)^{\frac{1}{2}}\leq \frac{Ct^{\frac{1-\beta}{2}}}{\sqrt{1-\beta}}|y-x|^{\beta}.\end{aligned}$$ Therefore (\[mpdhcsv\]) follows from (\[mdtpirl\]), (\[mpddm\]) and (\[mpddm2\]).
The proof of (\[mpdhctv\]) is quite similar. Arguing as in (\[mpddm\]), we can show that $$\begin{aligned}
&\left\|u_t(x)-u_s(x)\right\|_{2p}\leq \|\mu\|_{\infty}\int_{\mathbb{R}^d} \left\|p^W(0,z;t,x)-p^W(0,z;s,x)\right\|_{2p}dz\\
&\qquad+C\|\kappa\|_{\infty}^{\frac{1}{2}}\bigg[\int_s^t\sup_{z\in\mathbb{R}^d}\left\|u_r(z)\right\|_{2p}^2\Big(\int_{\mathbb{R}^d}\left\|p^W(r,z;t,x)\right\|_{2p}dz\Big)^2dr\bigg]^{\frac{1}{2}}\\
&\qquad+C\left\|\kappa\right\|_{\infty}^{\frac{1}{2}}\bigg[\int_0^s\sup_{z\in\mathbb{R}^d}\left\|u_r(z)\right\|_{2p}^2\Big(\int_{\mathbb{R}^d}\left\|\left(p^W(r,z;t,x)-p^W(r,z;s,x)\right)\right\|_{2p}dz\Big)^2dr\bigg]^{\frac{1}{2}}.\end{aligned}$$ Then, the estimate (\[mpdhctv\]) follows from (\[idtmpd\]), Proposition \[mectpd\] and Lemma \[eumcr\].
Basic introduction on Malliavin calculus
========================================
In this section, we present some preliminaries on the Malliavin calculus. We refer the reader to the book of Nualart [@springer-06-nualart] for a detailed account of this topic.
Fix a time interval $[0,T]$. Let $B=\{B_t^1,\dots, B_t^d, 0\leq t\leq T\}$ be a standard $d$-dimensional Brownian motion on $[0,T]$. Denote by $\mathcal{S}$ the class of smooth random variables of the form $$\begin{aligned}
\label{eq1}
G=g\left(B_{t_1}, \dots, B_{t_m}\right)=g\left(B_{t_1}^1,\dots,B_{t_1}^d, \dots,B_{t_m}^1,\dots,B_{t_m}^d\right),\end{aligned}$$ where $m$ is any positive integer, $0\leq t_1<\dots<t_m\leq T$, and $g: \mathbb{R}^{md}\to \mathbb{R}$ is a smooth function that has all partial derivatives with at most polynomial growth. We make use of the notation $x=\left(x_i^k\right)_{1\le i\le m, 1\le k\le d}$ for any element $x\in {\mathbb{R}}^{md}$. The basic Hilbert space associated with $B$ is $H=L^2 \left([0,T]; {\mathbb{R}}^d\right)$.
For any $G\in\mathcal{S}$ given by (\[eq1\]), the Malliavin derivative of $G$ is the $H$-valued random variable $DG$ given by $$\begin{aligned}
D_{\theta}^{(k)} G=\sum_{i=1}^m \frac{\partial g}{\partial x_i^k}\left(B_{t_1}, \dots, B_{t_m}\right) \mathbf{1}_{[0,t_i]}(\theta), \quad 1\le k \le d, \,\, \theta \in [0,T].\end{aligned}$$
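For example, $D^{(k)}_{\theta}\big(B_{t_1}^{j}\big)=\delta_{jk}{\mathbf{1}}_{[0,t_1]}(\theta)$ and, by the chain rule, $D^{(k)}_{\theta}f\big(B_{t_1}^{j}\big)=f'\big(B_{t_1}^{j}\big)\delta_{jk}{\mathbf{1}}_{[0,t_1]}(\theta)$ for any smooth $f$ whose derivatives have at most polynomial growth.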
In the same way, for any $n\geq 1$, the iterated derivative $D^n G$ of a random variable of the form (\[eq1\]) is a random variable with values in $H^{\otimes n}=L^2\left([0,T]^n; {\mathbb{R}}^{d^n}\right)$. For each $p\ge 1$, the iterated derivative $D^n$ is a closable and unbounded operator on $L^p(\Omega)$ taking values in $L^p(\Omega; H^{\otimes n})$. For any $n\ge 1$, $p\ge 1$ and any Hilbert space $V$, we can introduce the Sobolev space $\mathbb{D}^{n,p}(V) $ of $V$-valued random variables as the closure of $\mathcal{S}$ with respect to the norm $$\begin{aligned}
\| G\|_{n,p,V}^2 =& \|G\|_{L^p(\Omega; V)}^2 + \sum_{k=1}^n \|D^k G\|_{L^p(\Omega; H^{\otimes k} \otimes V)}^2\\
=&\big[{\mathbb{E}}\big(\|G\|_V^p\big)\big]^{\frac{2}{p}} + \sum_{k=1}^n \big[{\mathbb{E}}\big(\|D^k G\|_{H^{\otimes k}\otimes V}^p\big)\big]^{\frac{2}{p}}.\end{aligned}$$
By definition, the divergence operator $\delta$ is the adjoint operator of $D$ in $L^2(\Omega)$. More precisely, $\delta$ is an unbounded operator on $L^2\left(\Omega; H\right)$, taking values in $L^2(\Omega)$. We denote by $\mathrm{Dom}(\delta)$ the domain of $\delta$. Then, for any $u=(u^1,\dots,u^d)\in \mathrm{Dom}(\delta)$, $\delta(u)$ is characterized by the following duality relationship: for all $G\in \mathbb{D}^{1,2}={\mathbb{D}}^{1,2}({\mathbb{R}})$, $$\begin{aligned}
{\mathbb{E}}\left(\delta (u)G\right)= {\mathbb{E}}\left(\left\langle D G, u\right\rangle_H\right).\end{aligned}$$
Let $F$ be an $n$-dimensional random vector, with components $F^i\in \mathbb{D}^{1,1}, 1\leq i\leq n$. We associate to $F$ an $n\times n$ random symmetric nonnegative definite matrix, called the Malliavin matrix of $F$, denoted by $\gamma_F$. The entries of $\gamma_F$ are defined by $$\begin{aligned}
\gamma^{ij}_F= \left\langle D F^i, D F^j\right\rangle_H=\sum_{k=1}^d\int_0^T D^{(k)}_{\theta}F^iD^{(k)}_{\theta}F^j d \theta.\end{aligned}$$
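For instance, if $F=B_T$ (so that $n=d$), then $D^{(k)}_{\theta}F^i=\delta_{ik}{\mathbf{1}}_{[0,T]}(\theta)$ and $\gamma_F=T\,\mathbf{I}$, which is invertible with $\sigma_F=T^{-1}\mathbf{I}$; in general, the invertibility and integrability of $\gamma_F^{-1}$ quantify the nondegeneracy of $F$.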
Suppose that $F\in\cap_{p\geq 1}\mathbb{D}^{2,p}({\mathbb{R}}^n)$, and its Malliavin matrix $\gamma_F$ is invertible. Denote by $\sigma_F$ the inverse of $\gamma_F$. Assume that $\sigma_F^{ij}\in \cap_{p\geq 1}\mathbb{D}^{1,p}$ for all $1\leq i,j\leq n$. Let $G\in\cap_{p\geq 1}{\mathbb{D}}^{1,p}$. Then $G\sigma_F^{ij}DF^k\in \mathrm{Dom}(\delta)$ for all $1\leq i,j,k\leq n$. Under these hypotheses, we define $$\begin{aligned}
\label{hfphif}
H_{(i)}(F,G)=-\sum_{j=1}^n\delta\left(G\sigma_F^{ji}DF^j\right),\quad 1\leq i\leq n.\end{aligned}$$ If furthermore $H_{(i)}(F,G)\in\cap_{p\geq 1}{\mathbb{D}}^{1,p}$ for all $1\leq i\leq n$, then we define $$\begin{aligned}
\label{hhfphif}
H_{(i,j)}(F,G)=H_{(j)}\left(F,H_{(i)}(F, G)\right), \quad 1\leq i,j\leq n.\end{aligned}$$
The following lemma is a Wiener functional version of Lemma 9 of Bally and Caramellino [@spa-11-bally-caramellino].
\[ipfpd\] Suppose that $F\in\cap_{p\geq 1}\mathbb{D}^{2,p}({\mathbb{R}}^n)$, $(\gamma_F^{-1})^{ij}=\sigma_F^{ij}\in \cap_{p\geq 1}\mathbb{D}^{2,p}$ for all $1\leq i,j\leq n$, and $\phi\in C_b^1(\mathbb{R}^n)$. Then, for any $1\leq i\leq n$, we have $$\begin{aligned}
\label{ipfpd1}
H_{(i)}\left(F,\phi(F)\right)=&\partial_i\phi(F)+\phi(F)H_{(i)}(F,1).\end{aligned}$$ Suppose that $F\in\cap_{p\geq 1}\mathbb{D}^{3,p}({\mathbb{R}}^n)$ and $\phi\in C_b^2(\mathbb{R}^n)$. Then, for any $1\leq i,j\leq n$, we have $$\begin{aligned}
\label{ipfpd2}
H_{(i,j)}&\left(F,\phi(F)\right)=\partial_{ij}\phi(F)+\partial_i\phi(F)H_{(j)}(F,1)\nonumber\\
&\quad +\partial_{j}\phi(F) H_{(i)}(F,1)+\phi(F) H_{(i,j)}(F,1).\end{aligned}$$
For any $F\in\cap_{p\geq 1}\mathbb{D}^{2,p}({\mathbb{R}}^n)$ and $\phi\in C_b^1(\mathbb{R}^n)$, it is easy to check that $\phi(F)\in\cap_{p\geq 1}{\mathbb{D}}^{1,p}$. Then, $H_{(i)}(F,\phi (F))$ is well defined. For any $G\in\mathbb{D}^{1,2}$, by the duality of $D$ and $\delta$, we have $$\begin{aligned}
\label{edphigmdg}
{\mathbb{E}}\left(H_{(i)}\left(F,\phi(F)\right)G\right)=&-\sum_{j=1}^n{\mathbb{E}}\left(\delta\left(\phi(F)\sigma_F^{ji}DF^{j}\right)G\right)\nonumber\\
=&-\sum_{j=1}^n{\mathbb{E}}\left(\phi(F)\sigma_F^{ji}\left\langle DF^{j}, DG\right\rangle_H\right).\end{aligned}$$ On the other hand, by the product rule for the operator $D$, we have $$\begin{aligned}
&{\mathbb{E}}\left(\phi(F)H_{(i)}(F,1)G\right)=-\sum_{j=1}^n{\mathbb{E}}\left(\left\langle\sigma_F^{ji}DF^{j},D\left(\phi(F)G\right)\right\rangle_H\right)\nonumber\\
&\quad=-\sum_{j=1}^n{\mathbb{E}}\left(\phi(F)\sigma_F^{ji}\left\langle DF^{j},DG\right\rangle_H\right)\nonumber-\sum_{j_1,j_2=1}^n{\mathbb{E}}\left( G\partial_{j_2}\phi(F)\sigma_F^{j_1i}\left\langle DF^{j_1}, DF^{j_2}\right\rangle_H\right).\end{aligned}$$ Note that $\sigma_F$ is the inverse of $\gamma_F=\big(\langle DF^{i}, DF^{j}\rangle_H\big)_{i,j=1}^n$, then $$\begin{aligned}
\label{amie}
\sum_{j_1,j_2=1}^n{\mathbb{E}}\left(G\partial_{j_2}\phi(F)\sigma_F^{j_1i}\left\langle DF^{j_1}, DF^{j_2}\right\rangle_H\right)={\mathbb{E}}\left(G\partial_i\phi(F)\right).\end{aligned}$$ Then, (\[ipfpd1\]) follows from (\[edphigmdg\]) - (\[amie\]). The equality (\[ipfpd2\]) can be proved similarly.
The next theorem is a density formula based on the Riesz transform. The formula was first introduced by Malliavin and Thalmaier (see Theorem 4.23 of [@springer-06-malliavin-thalmaier]), then further studied by Bally and Caramellino [@spa-11-bally-caramellino].
For any integer $n\geq 2$, let $Q_n$ be the $n$-dimensional Poisson kernel. That is, $$\label{posnkn}
Q_n(x)=
\begin{cases}
A_2^{-1}\log |x|, &n=2,\\
-A_n^{-1}|x|^{2-n}, &n>2,
\end{cases}$$ where $A_n$ is the area of the unit sphere in $\mathbb{R}^n$. Then, $\partial_i Q_n(x)=c_nx_i\left|x\right|^{-n}$, where $c_2=A_2^{-1}$ and $c_n=(n-2)A_n^{-1}$ for $n>2$.
The theorem below is the density formula for a class of differentiable random variables.
\[bedf\](Proposition 10 of Bally and Caramellino [@spa-11-bally-caramellino]) Let $F\in\cap_{p\geq 1}\mathbb{D}^{2,p}({\mathbb{R}}^n)$. Assume that $(\gamma_F^{-1})^{ij}=\sigma_F^{ij}\in\cap_{p\geq 1}{\mathbb{D}}^{1,p}$ for all $1\leq i,j\leq n$. Then, the law of $F$ has a density $p_F$.
More precisely, for any $x\in {\mathbb{R}}^n$ and $r>0$, let $B(x,r)$ be the open ball in ${\mathbb{R}}^n$ centered at $x$ with radius $r$. Suppose that $\phi\in C^1_b({\mathbb{R}}^n)$ is such that ${\mathbf{1}}_{B(0,1)}\leq \phi\leq {\mathbf{1}}_{B(0,2)}$ and $|\nabla \phi|\leq 1$. Define $\phi_{\rho}^x:= \phi(\frac{\cdot-x}{\rho})$ for any $\rho>0$ and $x\in\mathbb{R}^n$. Then, $$\begin{aligned}
\label{dfrk}
p_F(x)=&\sum_{i=1}^n{\mathbb{E}}\big( \partial_i Q_n (F-x)H_{(i)}(F,1)\big)\nonumber\\
=&\sum_{i=1}^n{\mathbb{E}}\big( \partial_i Q_n (F-x)H_{(i)}(F,\phi_{\rho}^x(F))\big)\nonumber\\
=&\sum_{i=1}^n{\mathbb{E}}\big( {\mathbf{1}}_{B_{(x,2\rho)}}(F)\partial_i Q_n (F-x)H_{(i)}(F,\phi_{\rho}^x(F))\big).\end{aligned}$$
The next theorem provides the estimates for the density and its increment.
\[tdpdift\] Suppose that $F$ satisfies the conditions in Theorem \[bedf\]. Then, for any $p_2>p_1>n$, with $p_3=\frac{p_1p_2}{p_2-p_1}$, there exists a constant $C$ that depends on $p_1$, $p_2$ and $n$, such that $$\begin{aligned}
\label{de}
p_F(x)\leq &C \mathbb{P}(|F-x|<2\rho)^{\frac{1}{p_3}}\max_{1\leq i\leq n} \Big[\left\|H_{(i)}(F, 1)\right\|_{p_1}^{n-1}\Big(\frac{1}{\rho}+\left\|H_{(i)}(F, 1)\right\|_{p_2}\Big)\Big].\end{aligned}$$ If, furthermore, $F\in\cap_{p\geq 1}\mathbb{D}^{3,p}({\mathbb{R}}^n)$, then for any $x_1,x_2\in\mathbb{R}^n$ there exists $y=cx_1+(1-c)x_2$, for some $c\in (0,1)$ depending on $x_1$ and $x_2$, and a constant $C$ depending on $p_1$, $p_2$, and $n$, such that $$\begin{aligned}
\label{dmvte}
&\left|p_F(x_1)-p_F(x_2)\right|\leq C|x_1-x_2| \mathbb{P}(|F-y|<4\rho)^{\frac{1}{p_3}}\nonumber\\
&\quad \times\max_{1\leq i,j\leq n} \Big[\left\|H_{(i)}(F, 1)\right\|_{p_1}^{n-1}\Big(\frac{1}{\rho^2}+\frac{2}{\rho}\left\|H_{(i)}(F, 1)\right\|_{p_2}+\left\|H_{(i,j)}(F, 1)\right\|_{p_2}\Big)\Big].\end{aligned}$$
**Remark**: The inequalities stated in Theorem \[tdpdift\] are an improved version of the estimates of Bally and Caramellino (see Theorem 8 of [@spa-11-bally-caramellino]). We refer to Nualart and Nualart (see Lemma 7.3.2 of [@cambridge-18-nualart-nualart]) for a related result. For the sake of completeness, we present below a proof of Theorem \[tdpdift\]. The proof follows the same idea as in Theorem 8 of [@spa-11-bally-caramellino]. The only difference occurs when choosing the radius of the ball in the estimate for the Poisson kernel. If we optimize the radius, then the exponent of $\|H_{(i)}(F,1)\|_p$ is $n-1$, instead of $\frac{q_1(n-1)}{q_1-n}>n-1$ in [@spa-11-bally-caramellino]. In order to prove Theorem \[tdpdift\], we first give the estimate for the Poisson kernel:
\[blerk\] Suppose that $F$ satisfies the conditions in Theorem \[bedf\]. For any $p>n$, let $q=\frac{p}{p-1}$. Then, there exists a constant $C>0$, depending on $n$ and $p$, such that $$\begin{aligned}
\label{rke}
\sup_{x\in\mathbb{R}^n}\left\|\partial_iQ_n(F-x)\right\|_q\leq \sup_{x\in\mathbb{R}^n}\left\||F-x|^{-(n-1)}\right\|_q\leq C \max_{1\leq i\leq n}\left\|H_{(i)}(F, 1)\right\|_p^{n-1}.\end{aligned}$$
Assume first that $$\|p_F\|_{\infty}:=\sup_{x\in{\mathbb{R}}^n}p_F(x)<\infty.$$ Denote by $\displaystyle M=\max_{1\leq i\leq n}\|H_{(i)}(F,1)\|_p$. Then by Hölder’s inequality, for all $x\in{\mathbb{R}}^n$, we have $$\begin{aligned}
p_F(x)=&\sum_{i=1}^n{\mathbb{E}}\big( \partial_i Q_n (F-x)H_{(i)}(F,1)\big)\leq \sum_{i=1}^n\|\partial_i Q_n (F-x)\|_q\|H_{(i)}(F,1)\|_p\\
\leq & n\sup_{x\in\mathbb{R}^n}\left\||F-x|^{-(n-1)}\right\|_qM,\nonumber\end{aligned}$$ which implies $$\begin{aligned}
\label{rke1}
\|p_F\|_{\infty}\leq n\sup_{x\in\mathbb{R}^n}\left\||F-x|^{-(n-1)}\right\|_qM.\end{aligned}$$
In order to estimate $\||F-x|^{-(n-1)}\|_q$, choose any $\rho>0$. Then for all $x\in{\mathbb{R}}^n$, $$\begin{aligned}
\label{rke2}
{\mathbb{E}}(|F-x|^{-(n-1)q})=&\int_{{\mathbb{R}}^n}|y-x|^{-(n-1)q}p_F(y)dy\nonumber\\
=&\int_{|y-x|\leq \rho}|y-x|^{-(n-1)q}p_F(y)dy+\int_{|y-x|>\rho}|y-x|^{-(n-1)q}p_F(y)dy\nonumber\\
\leq &\|p_F\|_{\infty}\int_0^{\rho}r^{-(n-1)q}r^{n-1}dr+\rho^{-(n-1)q}\nonumber\\
=&k_{n,q}\|p_F\|_{\infty}\rho^{1-(n-1)(q-1)}+\rho^{-(n-1)q},\end{aligned}$$ where $k_{n,q}=[1-(n-1)(q-1)]^{-1}$. The integral in the last step is finite because $1-(n-1)(q-1)>0$.
Combining (\[rke1\]) and (\[rke2\]), we have $$\begin{aligned}
\label{rke3}
\|p_F\|_{\infty}\leq\Big[ nk_{n,q}^{\frac{1}{q}}\|p_F\|_{\infty}^{\frac{1}{q}}\rho^{\frac{1-(n-1)(q-1)}{q}}+\rho^{-(n-1)}\Big]M.\end{aligned}$$ By optimizing the right-hand side of (\[rke3\]), we choose $$\rho=\rho^*:=\Big[\frac{(n-1)q}{n}\Big]^{\frac{q}{n}}\|p_F\|_{\infty}^{-\frac{1}{n}}.$$ Plugging $\rho^*$ into (\[rke3\]), we obtain $$\begin{aligned}
\|p_F\|_{\infty}\leq \bigg(nk_{n,q}^{\frac{1}{q}}\Big[\frac{(n-1)q}{n}\Big]^{\frac{1-(n-1)(q-1)}{n}}+\Big[\frac{(n-1)q}{n}\Big]^{-\frac{q(n-1)}{n}}\bigg)M\|p_F\|_{\infty}^{\frac{n-1}{n}}.\end{aligned}$$ Then, it follows that $$\begin{aligned}
\label{dinm}
\|p_F\|_{\infty}\leq CM^{n}=C\max_{1\leq i\leq n}\|H_{(i)}(F,1)\|_p^n,\end{aligned}$$ where $C$ is a constant that depends on $p$ and $n$. Thus (\[rke\]) follows from (\[rke2\]) and (\[dinm\]).
The result can be generalized to the case without the assumption $\|p_F\|_{\infty}<\infty$ by the same argument as in Theorem 5 of [@spa-11-bally-caramellino].
Choose $p_2>p_1>n$, let $p_3=\frac{p_1p_2}{p_2-p_1}$ and $q=\frac{p_1}{p_1-1}$. Then $\frac{1}{q}+\frac{1}{p_2}+\frac{1}{p_3}=1$. Thus by density formula (\[dfrk\]) and Hölder’s inequality, we have $$\begin{aligned}
\label{de1}
p_F(x)\leq \sum_{i=1}^n\| {\mathbf{1}}_{B_{(x,2\rho)}}(F)\|_{p_3}\|\partial_i Q_n (F-x)\|_{q}\|H_{(i)}(F,\phi_{\rho}^x(F))\|_{p_2}.
\end{aligned}$$ Then, (\[de\]) is a consequence of (\[de1\]), Lemmas \[ipfpd\] and \[blerk\]. The inequality (\[dmvte\]) can be proved similarly.
[99]{}
Riesz transform and integration by parts formulas for random variables. , no. 6, (2011), 1332–1355.
The spatial lambda-fleming-viot process with fluctuating selection. (2018).
Stochastic integrals for spde’s: a comparison. , [**29**]{}, no. 1, (2011), 67–109.
Infinitely divisible random measures and superprocesses. In [*Stochastic analysis and related topics*]{}, vol. 31 of [ *Progress in Probability*]{}. Springer, 1992, pp. 1–129.
The carrying dimension of a stochastic measure diffusion. , [**7**]{}, no. 4, (1979), 693–703.
Applications of duality to measure-valued diffusion processes. In [*Advances in filtering and optimal stochastic control*]{}. Springer, 1982, pp. 91–105.
Stochastic partial differential equations for a class of interacting measure-valued diffusions. , [**36**]{}, no. 2, (2000), 167–180.
. North-Holland Publishing Co., Amsterdam, 1982.
. J. Wiley & Sons, 1986.
. Courier Corporation, 2013.
H[ö]{}lder continuity of the solutions for a class of nonlinear spde’s arising from one dimensional superprocesses. , [**156**]{}, no. 1-2, (2013), 27–49.
, vol. 288 of [ *Grundlehren der mathematischen Wissenschaften*]{}. Springer Science & Business Media, 2013.
, vol. 26 of [*IMS Lecture Notes-Monograph Series*]{}. Institute of Mathematical Statistics, 1995.
The yamada-watanabe-engelbert theorem for general stochastic equations and inequalities. , [**12**]{}, no. 33, (2007), 951–965.
Equivalence of stochastic equations and martingale problems. In [*Stochastic analysis 2010*]{}. Springer, 2011, pp. 113–130.
Joint continuity of the solutions to a class of nonlinear spdes. , [**153**]{}, no. 3-4, (2012), 441–469.
. Springer Science & Business Media, 2006.
Tightness of probabilities on ${C} ([0, 1]; \mathscr{S}')$ and ${D}
([0, 1]; \mathscr{S}')$. , [**11**]{}, no. 4, (1983), 989–999.
Superprocesses in random environments. , [**24**]{}, no. 4, (1996), 1953–1978.
. Springer Science & Business Media, 2006.
. Cambridge University Press, 2018.
Dawson-watanabe superprocesses and measure-valued diffusions. In [*Lectures on probability theory and statistics*]{}, vol. 1781 of [*Lecture notes in Mathematics*]{}. Springer, 2002, pp. 125–329.
Superprocesses over a stochastic flow. , [**11**]{}, no. 2, (2001), 488–543.
Some applications of stochastic calculus to partial differential equations. In [*Ecole d’Eté de Probabilités de Saint-Flour XI-1981*]{}, vol. 976 of [*Lecture notes in Mathematics*]{}. Springer, 1983, pp. 267–382.
On the support of diffusion processes with applications to the strong maximum principle. In [*Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971)*]{}, vol. 3. 1972, pp. 333–359.
On convergence of population processes in random environments to the stochastic heat equations with colored noise. , [**8**]{}, no. 6, (2003), 1–39.
An introduction to stochastic partial differential equations. In [*[É]{}cole d’[É]{}t[é]{} de Probabilit[é]{}s de Saint Flour XIV-1984*]{}. Springer, 1986, pp. 265–439.
State classification for a class of measure-valued branching diffusions in a brownian medium. , [**109**]{}, no. 1, (1997), 39–55.
A class of measure-valued branching diffusions in a random medium. , [**16**]{}, no. 4, (1998), 753–786.
A stochastic log-laplace equation. , [**32**]{}, no. 3B, (2004), 2362–2388.
. World Scientific, 2013.
On the uniqueness of solutions of stochastic differential equations. , [**11**]{}, no. 1, (1971), 155–167.
[^1]: Supported by an NSERC discovery grant.
[^2]: Supported by the NSF grant DMS 1811181.
---
abstract: 'Polar codes, introduced recently by Arikan, are the first family of codes known to achieve capacity of symmetric channels using a low complexity successive cancellation decoder. Although these codes, combined with successive cancellation, are optimal in this respect, their finite-length performance is not record breaking. We discuss several techniques through which their finite-length performance can be improved. We also study the performance of these codes in the context of source coding, both lossless and lossy, in the single-user context as well as for distributed applications.'
author:
-
-
bibliography:
- 'lth.bib'
- 'lthpub.bib'
title: Performance of Polar Codes for Channel and Source Coding
---
Introduction
============
Polar codes, recently introduced by Arikan in [@Ari08], are the first provably capacity achieving family of codes for arbitrary symmetric binary-input discrete memoryless channels (B-DMC) with low encoding and decoding complexity. The construction of polar codes is based on the following observation: Let $G_2 = \bigl[ \begin{smallmatrix} 1 &0 \\ 1& 1 \end{smallmatrix} \bigr]$. Apply the transform $G_2^{\otimes n}$ (where “$\phantom{}^{\otimes n}$” denotes the $n^{th}$ Kronecker power) to a block of $N = 2^n$ bits and transmit the output through independent copies of a symmetric B-DMC, call it $W$. As $n$ grows large, the channels seen by individual bits (suitably defined in [@Ari08]) start *polarizing*: they approach either a noiseless channel or a pure-noise channel, where the fraction of channels becoming noiseless is close to the capacity $I(W)$. In the following, let $\bar{u}$ denote the vector $(u_0,\dots,u_{N-1})$ and $\bar{x}$ denote the vector $(x_0,\dots,x_{N-1})$. Let $\pi:\{0,\dots,N-1\}\to \{0,\dots,N-1\}$ be the permutation such that if the $n$-bit binary representation of $i$ is $b_{n-1}\dots b_0$, then $\pi(i) = b_{0}\dots b_{n-1}$ (we call this a [*bit-reversal*]{}). Let $\wt(i)$ denote the number of ones in the binary expansion of $i$.
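To fix ideas, the following small numerical sketch (ours, not part of [@Ari08]; the function names are illustrative) builds $G_2^{\otimes n}$ and the bit-reversal permutation $\pi$, and checks that row $i$ of $G_2^{\otimes n}$ has Hamming weight $2^{\wt(i)}$, a fact used repeatedly below.

```python
import numpy as np

def kron_power(n):
    """n-fold Kronecker power of G_2 = [[1, 0], [1, 1]] over GF(2)."""
    G2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, G2)
    return G

def bit_reversal(i, n):
    """pi(i): reverse the n-bit binary representation of i."""
    return int(format(i, "0{}b".format(n))[::-1], 2)

def wt(i):
    """Number of ones in the binary expansion of i."""
    return bin(i).count("1")

n = 3
G = kron_power(n)
# row i of the Kronecker power has Hamming weight 2^{wt(i)}
assert all(int(G[i].sum()) == 2 ** wt(i) for i in range(2 ** n))
# the bit-reversal permutation is an involution
assert all(bit_reversal(bit_reversal(i, n), n) == i for i in range(2 ** n))
```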
Construction of Polar Codes
---------------------------
The channel polarization phenomenon suggests to use the noiseless channels for transmitting information while fixing the symbols transmitted through the noisy ones to a value known both to sender as well as receiver. For symmetric channels we can assume without loss of generality that the fixed positions are set to $0$. Since the fraction of channels becoming noiseless tends to $I(W)$, this scheme achieves the capacity of the channel.
In [@Ari08] the following alternative interpretation was mentioned; the above procedure can be seen as transmitting a codeword and decoding at the receiver with a successive cancellation (SC) decoding strategy. The specific code which is constructed can be seen as a generalization of the Reed-Muller (RM) codes. Let us briefly discuss the construction of RM codes. We follow the lead of [@For01] in which the Kronecker product is used. RM codes are specified by the two parameters $n$ and $r$ and the code is denoted by $\RM(n,r)$. An RM($n,r$) code has block length $2^n$ and rate $\frac{1}{2^n} \sum_{i=0}^r {n \choose i}$. The code is defined through its generator matrix as follows. Compute the Kronecker product $G_2^{\otimes n}$. This gives a $2^n \times 2^n$ matrix. Label the rows of this matrix as $0,\dots,2^n-1$. One can check that the weight of the $i$th row of this matrix is equal to $2^{\wt(i)}$. The generator matrix of the code $RM(n,r)$ consists of all the rows of $G_2^{\otimes n}$ which have weight at least $2^{n-r}$. There are exactly $\sum_{i=0}^r {n \choose i}$ such rows. An equivalent way of expressing this is to say that the codewords are of the form $\bar{x}
= \bar{u}G_2^{\otimes n}$, where the components $u_i$ of $\bar{u}$ corresponding to the rows of $G_2^{\otimes n}$ of weight less than $2^{n-r}$ are fixed to $0$ and the remaining components contain the “information." Polar codes differ from RM codes only in the choice of generator vectors $\bar{u}$, i.e., in the choice of which components of $\bar{u}$ are set to $0$. Unlike RM codes, these codes are defined for any dimension $1\leq k \leq 2^n$. The choice of the generator vectors, as explained in [@Ari08], is rather complicated; we therefore do not discuss it here. Following Arikan, call those components $u_i$ of $\bar{u}$ which are set to $0$ “frozen," and the remaining ones “information" bits. Let the set of frozen bits be denoted by $F$ and the set of information bits be denoted by $I$. A polar code is then defined as the set of codewords of the form $\bar{x} =
\bar{u}G_2^{\otimes n}$, where the bits $i\in F$ are fixed to $0$.
An $\RM(n,r)$ code is a polar code of length $2^n$ with $F=\{i: \wt(i) < n-r\}$.
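As a concrete illustration (our sketch, reusing the conventions above), the RM rule and the encoding map $\bar{x}=\bar{u}G_2^{\otimes n}$ can be written as follows; the recursive block structure of $G_2^{\otimes n}$ is what yields $O(N\log N)$ encoding.

```python
import numpy as np

def wt(i):
    return bin(i).count("1")

def rm_frozen_set(n, r):
    """Frozen indices of RM(n, r) viewed as a polar code: wt(i) < n - r."""
    return {i for i in range(2 ** n) if wt(i) < n - r}

def polar_encode(u):
    """x = u times the n-fold Kronecker power of G_2, over GF(2)."""
    u = np.asarray(u, dtype=np.uint8)
    if u.size == 1:
        return u
    h = u.size // 2
    return np.concatenate([polar_encode((u[:h] + u[h:]) % 2),
                           polar_encode(u[h:])])

n, r = 3, 1
F = rm_frozen_set(n, r)                  # {0, 1, 2, 4}: frozen positions
info = sorted(set(range(2 ** n)) - F)    # [3, 5, 6, 7]: information bits
u = np.zeros(2 ** n, dtype=np.uint8)
u[info] = np.random.randint(0, 2, size=len(info))
x = polar_encode(u)                      # a codeword of RM(3, 1), rate 1/2
```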
Performance under Successive Cancellation Decoding
--------------------------------------------------
In [@Ari08] Arikan considers a low complexity SC decoder. We briefly describe the decoding procedure here. The bits are decoded in the order $\pi(0),\dots,\pi({N-1})$. Let the estimates of the bits be denoted by $\hat{u}_0,\dots,\hat{u}_{N-1}$. If a bit $u_i$ is frozen then $\hat{u}_i = 0$. Otherwise the decoding rule is the following: $$\begin{aligned}
\hat{u}_{\pi(i)} =
\left\{
\begin{array}{cc}
0, & \text{if $\frac{\Pr(y_{0}^{N-1} \mid \hat{u}_{\pi(0)}^{\pi(i-1)}, U_{\pi(i)} =
0)}{\Pr(y_{0}^{N-1}|\hat{u}_{\pi(0)}^{\pi(i-1)}, U_{\pi(i)}= 1)} > 1$},\\
1, & \text{otherwise}.
\end{array}\right.\end{aligned}$$
Using the factor graph representation between $\bar{u}$ and $\bar{x}$ shown in Figure \[fig:differenttrellis\](a), Arikan showed that this decoder can be implemented with $O(N\log N)$ complexity.
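The recursion behind the SC decoder can be sketched as follows (our code, written for the equivalent variant in which $\bar{x}=\bar{u}G_2^{\otimes n}$ is decoded in the natural order $0,\dots,N-1$; the set of synthetic bit channels is the same, and the frozen mask simply has to be specified for this indexing, to which the weight-based RM rule is insensitive). The min-sum rule is used as the usual approximation of the exact combination $2\tanh^{-1}(\tanh(a/2)\tanh(b/2))$, and the plain recursion below re-encodes partial decisions explicitly, giving $O(N\log^2 N)$ operations; the textbook $O(N\log N)$ implementation additionally shares these partial sums.

```python
import numpy as np

def f(a, b):
    # upper-branch LLR combination (min-sum approximation)
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, x1):
    # lower-branch combination, given re-encoded partial decisions x1
    return b + (1 - 2 * x1.astype(np.int8)) * a

def polar_encode(u):
    if u.size == 1:
        return u
    h = u.size // 2
    return np.concatenate([polar_encode((u[:h] + u[h:]) % 2),
                           polar_encode(u[h:])])

def sc_decode(llr, frozen):
    """Successive cancellation; positive LLR favours the bit value 0.
    llr: channel LLRs for a block of length 2^n, frozen: boolean mask."""
    if len(llr) == 1:
        bit = 0 if (frozen[0] or llr[0] >= 0) else 1
        return np.array([bit], dtype=np.uint8)
    h = len(llr) // 2
    u1 = sc_decode(f(llr[:h], llr[h:]), frozen[:h])
    x1 = polar_encode(u1)                         # partial re-encoding
    u2 = sc_decode(g(llr[:h], llr[h:], x1), frozen[h:])
    return np.concatenate([u1, u2])
```

For BPSK transmission ($0\mapsto +1$, $1\mapsto -1$) over an AWGN channel with noise variance $\sigma^2$, the channel LLRs passed to `sc_decode` are $2y_i/\sigma^2$.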
[@ArT08]\[thm:ArT\] Let $W$ be any symmetric B-DMC with $I(W) > 0$. Let $R < I(W)$ and $\beta < \frac12$ be fixed. Then for $N=2^n$, $n\geq 0$, the probability of error for polar coding under SC decoding at block length $N$ and rate $R$ satisfies $P_e(N,R) = o(2^{-N^\beta}).$
Optimality of the Exponent
--------------------------
The following lemma characterizes the minimum distance of a polar code.
\[lem:dminpol\] Let $I$ be the set of information bits of a polar code $\code$. The minimum distance of the code is given by $d_{\min}(\code) = \min_{i\in I} 2^{{\wt}(i)}.$ Let $w_{\min} = \min_{i\in I}\wt(i)$. Clearly, $d_{\min}$ cannot be larger than the minimum weight of the rows of the generator matrix. Therefore, $d_{\min} \leq 2^{w_{\min}}$. On the other hand, by adding some extra rows to the generator matrix we cannot increase the minimum distance. In particular, add all the rows of $G_2^{\otimes n}$ with weight at least $2^{w_{\min}}$. The resulting code is $\RM(n,n-w_{\min})$. It is well known that $d_{\min}(\RM(n,r)) = 2^{n-r}$ [@For01]. Therefore, $d_{\min} \geq d_{\min}(\RM(n,n-w_{\min})) = 2^{w_{\min}}.$ We conclude that for any given rate $R$, if the information bits are picked according to their weight (RM rule), i.e., the $2^nR$ vectors of largest weight, the resulting code has the largest possible minimum distance. The following lemma gives a bound on the best possible minimum distance for any non-trivial rate.
\[lem:mindistbnd\] For any rate $R > 0$ and any choice of information bits, the minimum distance of a code of length $2^n$ is bounded as $d_{\text{min}} \leq 2^{\frac{n}{2} + c \sqrt n}$ for $n > n_0(R)$ and a constant $c=c(R)$.
Lemma \[lem:dminpol\] implies that $d_{\min}$ is maximized by choosing the frozen bits according to the RM rule. The matrix $G_2^{\otimes n}$ has ${n \choose i}$ rows of weight $2^i$. Therefore, $d_{\min} \leq 2^{k}$, where $k$ is defined by $\sum_{i=k+1}^{n}{{n}\choose{i}} < 2^nR \leq \sum_{i=k}^n{{n}\choose{i}}.$ For $R > \frac12$, more than half of the rows are in the generator matrix, so there is at least one row of weight less than or equal to $2^{\lceil{\frac{n}{2}}\rceil}$. Consider therefore an $R$ in the range $(0, 1/2]$. Using Stirling’s approximation, one can show that for any $R>0$ at least one row of the generator matrix has weight in the range $[2^{\lceil\frac{n}{2}\rceil-c\sqrt n},
2^{\lceil \frac{n}{2}\rceil+c\sqrt n}]$. Using Lemma \[lem:dminpol\] we conclude the result.
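One way to make the tail estimate quantitative without invoking Stirling's formula directly (a remark of ours, replacing Stirling by the Chernoff-Hoeffding bound for the symmetric binomial distribution) is to note that $$\sum_{i\geq \frac{n}{2}+c\sqrt n}{n\choose i}\leq 2^{n}e^{-2c^2},$$ so for $c^2>\frac12\ln\frac1R$ the $2^nR$ rows of largest weight cannot all satisfy $\wt(i)\geq \frac{n}{2}+c\sqrt n$; hence at least one selected row has weight at most $2^{\lceil\frac{n}{2}\rceil+c\sqrt n}$, which is what the lemma requires.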
Let $R>0$ and $\beta > \frac12$ be fixed. For any symmetric B-DMC $W$ and $N=2^n$ with $n\geq n(\beta, R, W)$, the probability of error for polar coding under MAP decoding at block length $N$ and rate $R$ satisfies $P_e(N,R) > 2^{-{N}^\beta}$.
For a code with minimum distance $d_{\min}$, the block error probability is lower bounded by $2^{-K d_{\min}}$ for some positive constant $K$, which only depends on the channel. This is easily seen by considering a genie decoder; the genie provides the correct value of all bits except those which differ between the actually transmitted codeword and its minimum distance cousin. Lemma \[lem:mindistbnd\] implies that for any $R>0$, for $n$ large enough, $d_{\min} < \frac{1}{K} N^{\beta}$ for any $\beta > \frac12$. Therefore $P_e(N,R) > 2^{-{N}^\beta}$ for any $\beta > \frac12$.
This, combined with Theorem \[thm:ArT\], implies that the SC decoder achieves a performance comparable to that of the MAP decoder in terms of the order of the error exponent.
Performance under Belief Propagation
====================================
Theorem \[thm:ArT\] does not state what lengths are needed in order to achieve the promised rapid decay in the error probability nor does it specify the involved constants. Indeed, for moderate lengths polar codes under SC decoding are not record breaking. In this section we show various ways to improve the performance of polar codes by considering belief propagation (BP) decoding. BP was already used in [@Ari08b] to compare the performance of polar codes based on Arikan’s rule and RM rule. For all the simulation points in the plots the $95\%$ confidence intervals are shown. In most cases these confidence intervals are smaller than the point size and are therefore not visible.
Successive Decoding as a Particular Instance of BP
--------------------------------------------------
For communication over a binary erasure channel (BEC) one can easily show the following.
Decoding the bit $U_i$ with the SC decoder is equivalent to applying BP with the knowledge of $U_0,\dots,U_{i-1}$ and all other bits unknown (and a uniform prior on them).
We conclude that if we use a standard BP algorithm (such a decoder has access also to the information provided by the frozen bits belonging to $U_{i+1},\dots,U_{N-1}$) then it is in general strictly better than a SC decoder. Indeed, it is not hard to construct explicit examples of codewords and erasure patterns to see that this inclusion is strict (BP decoder succeeds but the SC decoder does not). Figure \[fig:BEC\_BPcyclic\] shows the simulation results for the SC, BP and the MAP decoders when transmission takes place over the BEC. As we can see from these simulation results, the performance of the BP decoder lies roughly half way between that of the SC decoder and that of the MAP decoder. For the BEC the [*scheduling*]{} of the individual messages is irrelevant to the performance as long as each edge is updated until a fixed point has been reached. For general B-DMCs the performance relies heavily on the specific schedule. We found empirically that a good performance can be achieved by the following schedule. Update the messages of each of the $n$ sections of the trellis from right to left and then from left to right and so on. Each section consists of a collection of $Z$ shaped sub-graphs. We first update the lower horizontal edge, then the diagonal edge, and, finally, the upper horizontal edge of each of these $Z$ sections. In this schedule the information is spread from the variables belonging to one level to its neighboring level. Figure \[fig:AWGN\] shows the simulation results for the SC decoder and the BP decoder over the binary input additive white Gaussian noise channel (BAWGNC) of capacity $\frac12$. Again, we can see a marked improvement of the BP decoder over the SC decoder.
Overcomplete Representation: Redundant Trellises
------------------------------------------------
For the polar code of length $2^3$ one can check that all three trellises shown in Figure \[fig:differenttrellis\] are valid representations. In fact, for a code of block length $2^n$, there exist $n!$ different representations obtained by different permutations of the $n$ layers of connections. Therefore, we can connect the vectors $\bar{x}$ and $\bar{u}$ with any number of these representations and this results in an [*overcomplete*]{} representation (similar to the concept used when computing the stopping redundancy of a code [@ScV06]). For the BEC any such overcomplete representation only improves the performance of the BP decoder [@ScV06]. Further, the decoding complexity scales linearly with the number of different representations used. Keeping the complexity in mind, instead of considering all $n!$ trellises, we use only the $n$ trellises obtained by cyclic shifts (e.g., see Figure \[fig:differenttrellis\]). The complexity of this algorithm is $O(N (\log N)^2)$ as compared to $O(N\log N)$ of the SC decoder and BP over one trellis. The performance of the BP decoder is improved significantly by using this overcomplete representation as shown in Figure \[fig:BEC\_BPcyclic\].
We leave a systematic investigation of good schedules and choices of overcomplete representations for general symmetric channels as an interesting open problem.
Choice of Frozen Bits
---------------------
For the BP or MAP decoding algorithm the choice of frozen bits as given by Arikan is not necessarily optimal. In the case of MAP decoding we observe (see Figure \[fig:BEC\_MAP\]) that the performance is significantly improved by picking the frozen bits according to the RM rule. This is not a coincidence; $d_{\text{min}}$ is maximized for this choice. This suggests that there might be a rule which is optimized for BP. It is an interesting open question to find such a rule.
Source Coding
=============
In this section we show the performance of polar codes in the context of source coding. We consider both lossless and lossy cases and show (in most cases) empirically that they achieve the optimal performance in both cases. Let Ber$(p)$ denote a Bernoulli source with $\Pr(1) = p$. Let $h_2(\cdot)$ denote the binary entropy function and $h_2^{-1}(\cdot)$ its inverse.
Lossless Source Coding
----------------------
### Single User
The problem of lossless source coding of a Ber$(p)$ source can be mapped to the channel coding problem over a binary symmetric channel (BSC) as shown in [@AlB72; @Wei62]. Let $\bar{x} = (x_0,\dots,x_{N-1})^T$ be a sequence of $N$ i.i.d. realizations of the source. Consider a code of rate $R$ represented by the parity check matrix $\mathbf{H}$. The vector $\bar{x}$ is encoded by its syndrome $\bar{s} = \mathbf{H}\bar{x}$. The rate of the resulting source code is $1-R$. The decoding problem is to estimate $\bar{x}$ given the syndrome $\bar{s}$. This is equivalent to estimating a noise vector in the context of channel coding over BSC$(p)$. Therefore, if a sequence of codes achieve capacity over BSC($p$), then the corresponding source codes approach a rate $h_2(p)$ with vanishing error probability.
We conclude that polar codes achieve the Shannon bound for lossless compression of a binary memoryless source. Moreover, using the trellis of Figure \[fig:differenttrellis\](a), we can compute the syndrome with complexity $O(N\log N)$. The source coding problem has a considerable advantage compared to the channel coding problem. The encoder knows the information seen by the decoder (unlike channel coding there is no noise involved here). Therefore, the encoder can also decode and check whether the decoding is successful or not. In case of failure, the encoder can retry the compression procedure by using a permutation of the source vector. This permutation is fixed a priori and is known both to the encoder as well as the decoder. In order to completely specify the system, the encoder must inform the decoder which permutation was finally used. This results in a small loss of rate but it brings down the probability of decoding failure. Note that the extra number of bits that need to be transmitted grows only logarithmically with the number of permutations used, but that the error probability decays exponentially as long as the various permuted source vectors look like independent source samples. With this trick one can make the curves essentially arbitrarily steep with a very small loss in rate (see Figure \[fig:SOURCE\]).
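One concrete low-complexity choice of syndrome former (our sketch; the text only requires some parity-check matrix whose syndrome is computable in $O(N\log N)$) uses the fact, exploited again in the lossy-compression section below, that $G_2^{\otimes n}$ is its own inverse over GF(2): a word $\bar{x}$ is a codeword exactly when the frozen coordinates of $\bar{x}G_2^{\otimes n}$ vanish, so those coordinates serve as the syndrome.

```python
import numpy as np

def polar_transform(x):
    """Multiply x by the n-fold Kronecker power of G_2 over GF(2);
    the transform is its own inverse mod 2."""
    x = np.asarray(x, dtype=np.uint8)
    if x.size == 1:
        return x
    h = x.size // 2
    return np.concatenate([polar_transform((x[:h] + x[h:]) % 2),
                           polar_transform(x[h:])])

def syndrome(x, frozen_idx):
    """Compressed representation: the transform of x restricted to F."""
    return polar_transform(x)[frozen_idx]

# sanity check: every codeword (frozen u-bits equal to 0) has zero syndrome
n = 3
frozen_idx = [0, 1, 2, 4]                       # RM(3, 1) frozen set
u = np.zeros(2 ** n, dtype=np.uint8)
u[[3, 5, 6, 7]] = np.random.randint(0, 2, 4)
assert not syndrome(polar_transform(u), frozen_idx).any()
```

The decompressor is then the SC decoder of the corresponding BSC$(p)$ code, run with constant prior LLRs $\log\frac{1-p}{p}$ and with the frozen positions pinned to the received syndrome values instead of $0$ (our reading of the syndrome-decoding step above); the permutation retries described in this subsection wrap around that routine unchanged.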
### Slepian-Wolf
Consider two Ber$(\frac12)$ sources $X$ and $Y$. Assume that they are correlated as $X=Y\oplus Z$, where $Z\sim$ Ber$(p)$. Recall that the Slepian-Wolf rate region is the unbounded polytope described by $R_X > H(X) =1$, $R_Y>H(Y) =1$, $R_X + R_Y > H(X,Y) =
1+h_2(p)$. The points $(R_X,R_Y)=(1,h_2(p))$ and $(R_X,R_Y)=(h_2(p),1)$ are the so-called [*corner points*]{}. Because of symmetry it suffices to show how to achieve one such corner point (say $(1,h_2(p))$). Let $\bar{x}$ and $\bar{y}$ denote $N$ i.i.d. realizations of the two sources. The scheme using linear codes is the following: The encoder for $X$ transmits $\bar{x}$ as it is (since $H(X)=1$, and so no compression is necessary). Let $\mathbf{H}$ denote the parity-check matrix of a code designed for communication over the BSC$(p)$. The encoder for $Y$ computes the syndrome $\bar{s} = \mathbf{H}\bar{y}$ and transmits it to the receiver. At the receiver we know $\bar{s} = \mathbf{H}\bar{y}$ and $\bar{x}$, therefore we can compute $\bar{s}' = \mathbf{H}(\bar{x} \oplus \bar{y}) = \mathbf{H}\bar{z}$. The resulting problem of estimating $\bar{z}$ is equivalent to the lossless compression of a Ber$(p)$ discussed in the previous section. Therefore, once again polar codes provide an efficient solution. The error probability curves under SC decoding are equal to the curves shown in the Figure \[fig:SOURCE\] with $0$ bits for permutations.
Lossy Source Coding
-------------------
### Single User
We do not know of a mapping that converts the lossy source coding problem to a channel coding problem. However, for the binary erasure source considered in [@MaY03], it was shown how to construct a “good” source code from a “good” (over the BEC) channel code. We briefly describe their construction here and show that polar codes achieve the optimal rate for zero distortion.
The source is a sequence of i.i.d. realizations of a random variable $S$ taking values in $\{0,1,\ast\}$ with $\Pr(S = 0) = \Pr(S=1) = \frac12 (1-\epsilon),
\Pr(S=\ast) = \epsilon$. The reconstruction alphabet is $\{0,1\}$ and the distortion function is given by $
\disto(\ast,0) = \disto(\ast,1) = 0, \disto(0,1) = 1.
$ At zero distortion, the rate-distortion function gives $R(D=0) = 1-\epsilon$. In [@MaY03] it was shown that the duals of a sequence of channel codes that achieve the capacity of the BEC($1-\epsilon$) under BP decoding achieve the rate-distortion pair at zero distortion, using a message-passing algorithm that they refer to as the erasure quantization algorithm. Polar codes achieve capacity under SC decoding, and for communication over the BEC, the performance under BP is at least as good as under SC. Therefore, the duals of the polar codes designed for the BEC$(1-\epsilon)$ achieve the optimum rate at zero distortion using the erasure quantization algorithm. Here, we show that the dual polar codes achieve the optimum rate at zero distortion even under a suitably defined SC decoding algorithm.
The dual of a polar code is obtained by reversing the roles of the check and variable nodes of the trellis in Figure \[fig:differenttrellis\](a) and reversing the roles of the frozen and free bits. It is easy to see that $G_2^{\otimes n}
G_2^{\otimes n} = \mathbf{I}$. This implies that the dual of a polar code is also a polar code. The [*suitably*]{} defined algorithm is given by SC decoding in the order $\pi({N-1}),\dots,\pi(0)$, opposite of the original decoding order. We refer to this as the dual order.
In [@Ari08], the probability of erasure for bit $u_i$ under SC decoding is given by $Z_n^{(i)}(1-\epsilon)$, computed as follows. Let the $n$-bit binary expansion of $i$ be given by $b_{n-1},\dots,b_0$. Let $Z_0 = 1-\epsilon$. The sequence $Z_1,\dots,Z_n=Z_n^{(i)}(1-\epsilon)$ is recursively defined as follows: $$\begin{aligned}
\label{eqn:Z}
Z_k = \left\{ \begin{array}{lc}
Z_{k-1}^2, & \text{ if } b_{k-1} = 1,\\
1-(1-Z_{k-1})^2, & \text{ if } b_{k-1} = 0.
\end{array}\right.\end{aligned}$$
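For concreteness, the recursion can be evaluated as in the following sketch (our code; it assumes $b_{n-1}$ is the most significant bit of $i$, so step $k$ consumes $b_{k-1}$):

```python
def Z_polar_bec(i, n, z0):
    """Erasure probability Z_n^{(i)}(z0) of bit u_i under SC decoding on a BEC(z0)."""
    z = z0
    for k in range(1, n + 1):
        b = (i >> (k - 1)) & 1          # b_{k-1} in the binary expansion of i
        z = z * z if b == 1 else 1 - (1 - z) ** 2
    return z
```

For the source coding problem above one would call `Z_polar_bec(i, n, 1 - epsilon)`.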
From [@ArT08] we know that for any $R<\epsilon$, there exists an $n_0$ such that for $n \geq n_0$ we can find a set $I\subset \{0,\dots,2^{n}-1\}$ of size $2^nR$ satisfying $\sum_{i\in I}Z_n^{(i)}(1-\epsilon) \leq 2^{-N^\beta}$ for any $\beta
< \frac12$. The set $I$ is used as information bits. The complement of $I$, denoted by $F$, is the set of frozen bits.
Let $\Zd_n^{(i)}(\epsilon)$ denote the probability of erasure for bit $u_i$ of the dual code used for BEC$(\epsilon)$. One can check that for the dual code with the dual order, the bit $u_i$ is equivalent to the bit $u_{N-i}$ of the original code with the original decoding order. Let $\Zd_0 = \epsilon$. The recursive computation for $\Zd_n^{(i)}$ is given by $$\begin{aligned}
\label{eqn:dualZ}
\Zd_k = \left\{ \begin{array}{lc}
\Zd_{k-1}^2, & \text{ if } b_{k-1} = 0,\\
1-(1-\Zd_{k-1})^2, & \text{ if } b_{k-1} = 1.
\end{array}\right.\end{aligned}$$
The two recursions are related by $Z_n^{(i)}(1-\epsilon) + \Zd_n^{(i)}(\epsilon) = 1$.
The proof follows by induction on $n$, using equations $\eqref{eqn:Z}$ and $\eqref{eqn:dualZ}$. For the dual code, the set $I$ (the information set for the BEC$(1-\epsilon)$) is used as frozen bits and the set $F$ is used as information bits. Let $\bar{x}$ be a sequence of $2^n$ source realizations. The source vector $\bar{x}$ needs to be mapped to a vector $\bar{u}_{F}$ (information bits) such that $\disto(\bar{u}G_2^{\otimes n},\bar{x}) = 0$. The following lemma shows that such a vector $\bar{u}_F$ can be found using the SC decoder with vanishing error probability.
The probability of encoding failure for erasure quantization of the source $S$ using the dual of the polar code designed for the BEC$(1-\epsilon)$ and SC decoding with dual order is bounded as $P_e(N) = o(2^{-N^\beta})$ for any $\beta < \frac{1}{2}$.
Let $I$ and $F$ be as defined above. The bits belonging to $I$ are already fixed to $0$, whereas the bits belonging to $F$ are free to be set. Therefore an error can only occur if one of the bits belonging to $I$ is set to a wrong value. However, if SC decoding results in an erasure for such a bit, that bit is also free to be set to any value, and this results in no error. Therefore the probability of error can be upper bounded by the probability that at least one of the bits in $I$ is not an erasure, which in turn can be upper bounded by $\sum_{i\in
I}(1-\Zd_n^{(i)}(\epsilon)) = \sum_{i \in I}Z_n^{(i)}(1-\epsilon)
= o(2^{-N^\beta})$, where the last equality follows from Theorem \[thm:ArT\].
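The duality relation is easy to check numerically; the sketch below (ours) mirrors `Z_polar_bec` from the earlier sketch with the two branches swapped, and verifies that the two quantities sum to one:

```python
def Zdual_polar_bec(i, n, eps):
    """Erasure probability of bit u_i of the dual code, SC decoding in the dual order."""
    z = eps
    for k in range(1, n + 1):
        b = (i >> (k - 1)) & 1
        z = z * z if b == 0 else 1 - (1 - z) ** 2
    return z

# Check Z_n^{(i)}(1 - eps) + Zdual_n^{(i)}(eps) = 1 for all i at a small length.
n, eps = 6, 0.3
assert all(abs(Z_polar_bec(i, n, 1 - eps) + Zdual_polar_bec(i, n, eps) - 1) < 1e-12
           for i in range(2 ** n))
```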
The fact that the dual code can be successfully applied to the erasure source suggests extending this construction to more general sources. Let us try a similar construction to encode the Ber($\frac12$) source under Hamming distortion. To design a source code for distortion $D$, we first design a rate $1-h_2(p)$ polar code for the BSC$(p)$, where $p=h_2^{-1}(1-h_2(D))$. The design consists of choosing the generator vectors from the rows of $G_2^{\otimes n}$ as explained in [@Ari08]. The source code is then defined by the corresponding dual code with the dual decoding order. Since the rate of the original code is $1-h_2(p)$, the rate of the dual code is $h_2(p) = 1-h_2(D)$. Figure \[fig:LOSSY\_WYNER\] shows the rate-distortion performance of these codes for various lengths. As the length increases, we empirically observe that the performance approaches the rate-distortion curve.
It is an interesting and challenging open problem to prove this observation rigorously.
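The only non-trivial step in this design is computing $p=h_2^{-1}(1-h_2(D))$; a small sketch (ours) using bisection for the inverse binary entropy:

```python
import math

def h2(x):
    """Binary entropy function in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def h2_inv(y, tol=1e-12):
    """Inverse of h2 restricted to [0, 1/2], computed by bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h2(mid) < y else (lo, mid)
    return (lo + hi) / 2

D = 0.11                       # target Hamming distortion (example value)
p = h2_inv(1 - h2(D))          # design the BSC(p) channel code, then dualize
rate_dual = h2(p)              # equals 1 - h2(D) up to numerical error
```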
### Wyner-Ziv
Let $X$ denote a Ber$(\frac12)$ source which we want to compress. The source is reconstructed at a receiver which has access to side information $Y$ correlated with $X$ as $Y=X\oplus Z$ with $Z\sim$ Ber$(p)$. Let $\bar{x}$ and $\bar{y}$ denote a sequence of $N$ i.i.d. realizations of $X$ and $Y$. Wyner and Ziv have shown that the rate-distortion curve is given by the lower convex envelope of the curve $R_{WZ}(D) = h_2(D\ast p) - h_2(D)$ and the point $(0,p)$, where $D \ast p
= D(1-p) + (1-D)p$. As discussed in [@ZSE02], nested linear codes are required to tackle this problem. The idea is to partition the codewords of a code $\code_1$ into cosets of another code $\code_2$. The code $\code_1$ must be a good source code and the code $\code_2$ must be a good channel code.
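For reference, the target rate at distortion $D$ can be computed with the binary entropy helper from the previous sketch:

```python
def wyner_ziv_rate(D, p):
    """R_WZ(D) = h2(D*p) - h2(D), with D*p = D(1-p) + (1-D)p.

    The achievable region is the lower convex envelope of this curve and the
    point (R, D) = (0, p); h2 is the binary entropy defined earlier.
    """
    D_star_p = D * (1 - p) + (1 - D) * p
    return h2(D_star_p) - h2(D)
```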
Using polar codes, we can create this nested structure as follows. Let $\code_1$ be a source code for distortion $D$ (Bernoulli source, Hamming distortion) as described in the previous section. Let $F_s$ be the set of frozen (fixed to $0$) bits for this source code. Using this code we quantize the source vector $\bar{x}$. Let the resulting vector be $\hat{x}$. Note that the vector $\hat{x}$ is given by $\hat{u}G_2^{\otimes
n}$, where $\hat{u} = (\hat{u}_0,\dots,\hat{u}_{N-1})$ is such that $\hat{u}_i = 0$ for $i \in F_s$ and for $i\in F_s^c$, $\hat{u}_i$ is defined by the source quantization.
For a sequence of codes achieving the rate-distortion bound, it was shown in [@ErZ02] that the “noise” $\bar{x} \oplus \hat{x}$ added due to the quantization is comparable to a Bernoulli noise Ber$(D)$. Therefore, for all practical purposes, the side information at the receiver $\bar{y}$ is statistically equivalent to the output of $\hat{x}$ transmitted through a BSC$(D\ast p)$. Let $F_c$ be the set of frozen bits of a channel code for the BSC$(D\ast
p)$. Let the encoder transmit the bits $\hat{u}_{F_c\backslash
F_s}$ to the receiver. Since the bits belonging to $F_s$ ($\hat{u}_{F_s}$) are fixed to zero, the receiver now has access to $\hat{u}_{F_c}$ and $\bar{y}$. The definition of $F_c$ implies that the receiver is now able to decode $\hat{u}_{F_c^c}$ (and hence $\hat{x}$) with vanishing error probability. To see the nested structure, note that the code $\code_2$ and its cosets are the different channel codes defined by different values of $\bar{u}_{F_c\backslash F_s}$ and that these codes partition the source code $\code_1$. The capacity achieving property of polar codes implies that for $N$ sufficiently large, $|F_c| < N(h_2(D\ast p) + \delta)$ for any $\delta > 0$. In addition, if we assume that polar codes achieve the rate-distortion bound as conjectured in the previous section and $F_s \subseteq F_c $, then for $N$ sufficiently large, the rate required to achieve a distortion $D$ is $\frac{1}{N}|F_c \backslash F_s|
\leq h_2(D\ast p) - h_2(D) + \delta$ for any $\delta > 0$. This would show that polar codes can be used to efficiently realize the Wyner-Ziv scheme. The performance of the above scheme for various lengths is shown in Figure \[fig:LOSSY\_WYNER\].
|
---
author:
- 'Martin Asenov, Michael Burke, Daniel Angelov, Todor Davchev, Kartic Subr and Subramanian Ramamoorthy'
title: 'Vid2Param: Modelling of Dynamics Parameters from Video'
---
|
---
abstract: 'Live streaming services have gained extreme popularity in recent years. Due to the spiky traffic patterns of live videos, utilizing distributed edge servers to improve viewers’ quality of experience (QoE) has become a common practice nowadays. Nevertheless, the current client-driven content caching mechanism does not support caching beforehand from the cloud to the edge, resulting in considerable cache misses in live video delivery. State-of-the-art research generally sacrifices the liveness of delivered videos in order to deal with the above problem. In this paper, by jointly considering the features of live videos and edge servers, we propose *PLVER*, a proactive live video push scheme to resolve the cache miss problem in live video delivery. Specifically, *PLVER* first conducts a one-to-multiple stable allocation between edge clusters and user groups to balance the load of live traffic over the edge servers. It then adopts proactive video replication algorithms to speed up video replication among the edge servers. We conduct extensive trace-driven evaluations, covering $0.3$ million Twitch viewers and more than $300$ Twitch channels. The results demonstrate that with *PLVER*, edge servers can carry $28\%$ and $82\%$ more traffic than the auction-based replication method and the caching on requested time method, respectively.'
author:
- 'Huan Wang, Guoming Tang, Kui Wu, and Jianping Wang [^1] [^2][^3]'
bibliography:
- 'reference.bib'
title: 'PLVER: Joint Stable Allocation and Content Replication for Edge-assisted Live Video Delivery'
---
Introduction
============
The last few years have witnessed the dramatic proliferation of live video streams over streaming platforms (such as Twitch, Facebook Live, and Youtube Live), which have generated billions of dollars in revenue [@wang2019intelligent]. According to the statistics of Twitch, in 2019, over $660$ billion minutes of live streams were watched by customers and $3.64$ million streamers (monthly average) broadcast their channels via Twitch [@twitch2019statis].
Nevertheless, the delivery of live videos is quite different from the conventional video-on-demand (VoD) service. First, live video has quite spiky traffic, which means the viewer popularity of live streams usually grows and drops very rapidly [@dogga2019edge]. In particular, it often encounters the “thundering herd” problem [@federico2015hood; @2016Facebook]: a large number of users, sometimes in the scale of millions, may start to watch the same live video simultaneously when some popular event or online celebrity starts a live broadcast. Second, live video delivery nowadays has stringent latency requirements owing to the new breed of live video services that support interactive live video streaming. These services allow the broadcasters to interact with their stream viewers in real time during the streaming process. Supporting this high interactivity requires low-latency end-to-end delivery while maintaining the Quality of Experience (QoE) for live viewers [@wang2016anatomy; @yi2019acm; @pang2018optimizing].
A typical thundering herd event in live video can overload the system, causing lags and disconnections from the server. One efficient way to solve the thundering herd problem while maintaining low latency in the delivery of live videos is to utilize edge caches. For example, Facebook uses edge PoPs distributed around the globe for the delivery of their live traffic [@federico2015hood]. Delivering content via edge devices (e.g., edge servers co-located with mobile base stations) brings content much closer to the end users and alleviates the traffic burden of backbone networks to the cloud.
Nevertheless, when applying edge-assisted live video delivery, a new problem of cache misses arises: when a large number of end users request a newly generated video segment at the same time, this segment may not have had enough time to be cached in the edge caches due to the real-time property of live streaming [@rainer2016investigating; @ge2018qoe]. As shown in Fig. \[fig:motivation\_gap\], the edge server would return a cache miss for the first group of requests that arrive at the edge before the segment is fully cached. These cache-missed requests would pass the edge cache and go all the way to the origin cache or server. As a result, this leads to deteriorated QoE for the live viewers (e.g., increased startup latency and playback stall rates). According to the statistics of Facebook [@federico2015hood], around 1.8% of their Facebook Live requests encountered a cache miss at the edge layer and caused failures at the origin server level. Note that this is still a significant number considering the magnitude of the number of live viewers. To make matters worse, high-resolution videos (e.g., virtual reality (VR) streams), which need more time to be replicated to the edge, would create an even higher cache miss rate.
The above caching problem only exists for live video streaming, as people typically watch regular videos at different times. Therefore, there is sufficient time for regular video chunks to be cached before most content requests arrive. State-of-the-art research solves the above problem by holding back the availability of some newly encoded live segments from the playback clients so that the client requests arrive at the edge after the caching process is finished [@federico2015hood; @ge2018qoe]. While this strategy solves the cache miss problem, it poses extra latency to the live streams, which sacrifices the “liveness” of delivered videos.
![Client-driven content caching (replication) for live videos.[]{data-label="fig:motivation_gap"}](./fig/introductionFig.pdf){width="1.0\columnwidth"}
The root cause of the cache miss problem is that the current client-driven caching strategy was not designed for live videos in the first place. Since the caching process in current content delivery networks (CDNs) is normally triggered by client requests, video segment caching (replication) only commences when the cloud responds to the first request for a live video segment. While this strategy makes sense when delivering regular content, it slows down the caching process in the context of live videos: there exists a time *gap* (shown as $T_1$ in Fig. \[fig:motivation\_gap\]) between the time when a segment is generated in the cloud and when the caching process actually starts. This gap mainly consists of two parts: i) the time for the availability information of the newly encoded video segments to reach the playback clients, and ii) the time it takes for the clients to send their first segment request. However, in the current pull-based CDN architecture, both of these two parts are difficult to narrow down (refer to \[sec:motivation\_relatedWork\] for more details). This motivates us to rethink the caching design of live video delivery. Can the cloud CDN server adopt a video push model to proactively replicate the newly encoded video segments into the appropriate edge servers in real time?
Although desirable, it is challenging to achieve such a goal due to the massive number of video requests and edge servers, the QoE guarantee, and the strict real-time requirement. First, in order to adopt the proactive caching strategy, we must solve the allocation problem between edge servers and live viewers (i.e., assign the viewers to the proper edge servers). This is because: i) the video segments that need to be replicated in an edge server are determined by the live viewers served by this edge server, and ii) as the bandwidth capacity of edge servers is quite limited (much smaller than that of CDN servers), the workloads of many edge servers could be easily overwhelmed while others are under-utilized. Conventional load balancing solutions [@xu2013joint; @narayana2012coordinate], which assume that content replicas are stored over all CDN servers, would be unrealistic in our context considering the massive number of edge servers. Second, since the service capability of each individual edge server is limited, newly encoded video segments have to be replicated to many edge servers to alleviate the spiky live video traffic. For each live video segment that is encoded in real time in the cloud, we need to make a fast decision on the appropriate edge servers to cache the segment.
In this paper, we propose a Proactive Live Video Edge Replication scheme (*PLVER*) to resolve the cache miss problem in live video delivery. *PLVER* first conducts a *one-to-multiple* stable allocation between edge clusters and user groups, which balances the load of live requests over edge servers such that each user group is assigned to the most preferred edge cluster to which it can be matched. Then, based on the allocation result, *PLVER* uses an efficient and proactive live video edge replication (push) algorithm to speed up the edge replication process, using the real-time statistical viewership of the user groups allocated to each cluster.
In summary, this paper makes the following contributions:
- *PLVER* implements a stable *one-to-multiple* allocation between edge clusters and user groups (i.e., one user group is served by one edge cluster but one edge cluster can serve multiple user groups), under the constraint that the QoE of end users is guaranteed by their assigned edge clusters.
- Aiming at speeding up the edge replication process, *PLVER* identifies the unique traffic demand of live videos and develops a proactive video replication algorithm that periodically provides a fast and fine-grained replication schedule. To the best of our knowledge, this is the first research work to provide proactive video replication algorithms (with details disclosed to the public) tailored for edge-assisted live video delivery.
- We perform comprehensive experiments to evaluate the performance of *PLVER*. A trace-driven allocation between $641$ edge clusters and $1253$ user groups is conducted, covering $64$ ISP providers and $470$ cities. Then, based on the allocation results, we further evaluate the performance of the video replication algorithm using traces of $0.3$ million Twitch viewers and more than $300$ Twitch channels. The performance results demonstrate the superiority of *PLVER*.
Motivation and Related Work {#sec:motivation_relatedWork}
===========================
Live Video Delivery Background
------------------------------
A live stream is usually encoded into multiple pre-determined bitrates once it is generated and uploaded by the broadcasters. For each bitrate of a stream, it is further split into a sequence of small video segments with the same playback length, so that it can be fetched sequentially by playback clients (e.g., via `HTTP` `GET`), using a suitable bitrate matching their network conditions [@sodagar2011mpeg].
In HTTP-based live video delivery, every time a client joins a live channel, it first requests and accesses the stream’s playlist file (generated by the origin streaming server). This manifest contains the information on the currently available segments (i.e., segments that have been encoded in the cloud) and the bitrates in the stream. Based on the information from the manifest, the clients send HTTP requests to their local edge server. Afterwards, the playback client fetches the live video segments in sequence and periodically accesses the newest playlist file to check if any new segments have been produced. When live videos are delivered over edge servers (as shown in Fig. \[fig:motivation\_gap\]), these video segments are replicated (cached) to the edge caches when the edge HTTP proxy receives the response (segment) from the cloud.
Observation and Motivation
--------------------------
To better explain the cache miss problem, we use the Apple HLS (HTTP Live Streaming) protocol [@rfc8216roger] as an example to illustrate the live video delivery process. As shown in Fig. \[fig:liveVideo\_architec\], starting from a certain time after the $10^{th}$ second of a live stream, the first three video segments are generated in the cloud. By first accessing the playlist file, numerous clients (with geographical proximity) learn about the segment update and begin to request segment $001.ts$ via `HTTP` `GET` during seconds $10$ to $20$. These requests are first handled by one of the HTTP proxies in an edge cluster, which checks whether the requested segment is already in an edge cache. If the segment is in the edge cache, it can be readily fetched from there (step $2(b)$). If not, the proxy issues an HTTP request to the origin server in the cloud (step $2(a)$). (Note that there exists another layer of cache as well as proxy and encoding servers inside the data center. As our system design does not change the current structure within the data center, these components are omitted in Fig. \[fig:liveVideo\_architec\].)
As can be easily seen, an earlier fraction of requests (shown as step 2(a) in Fig. \[fig:liveVideo\_architec\]) that arrive before the segment is fully cached in the edge miss the edge cache. The current client-driven caching architecture creates a time gap before the caching process is started, which is critical for live video delivery with its real-time requirement. The time for the playback clients to request and access the playlist file occupies the first part of the gap, which is inevitable in client-driven content caching since the clients have to know the segment information (i.e., the URI) before sending the requests. Once a video segment’s availability information is obtained by the playback clients, it takes another period of time before the first request for the segment is sent out by the clients. This part of the gap exists because the current live streaming protocols (e.g., HLS or MPEG-DASH) generally start a live stream with a relatively “older” video segment instead of the newest one to avoid playback stalls [@rfc8216roger]. As shown in Fig. \[fig:liveVideo\_architec\], the playback clients start the live stream by first requesting segment $001.ts$ rather than segment $003.ts$, which further postpones the replication of segment $003.ts$ in the client-driven caching architecture.
Improving the QoE of Live Streaming {#sec:relatedlive}
-----------------------------------
In order to solve the cache miss problem as well as to improve the QoE of $4$K live videos, Ge et al. [@ge2018qoe] proposed ETHLE, which “holds back” the availability of some newly encoded video segments from the playback clients so that the playback clients send their requests for a segment only after it has been cached in the edge server. While this work has shown considerable QoE improvement, it may add extra undesirable latency to the live streams.
In industry, Facebook proposed two alternative methods to solve the cache miss problem for delivering live video over edge servers [@federico2015hood]. The first scheme uses a “holding back” idea similar to that in [@ge2018qoe]: the edge proxy returns a cache miss for the first request while holding the remaining requests in a queue. Once the segment is stored in the edge cache via the HTTP response of the first request, the requests in the queue can be served from the edge as cache hits. Similar to the work in [@ge2018qoe], this design incurs undesirable latency to the live stream. The other scheme adopts a video push model where the server continuously pushes newly generated video segments to the proxies and the playback clients. This is the only reported design that adopts proactive content push for live video over edge servers. Nevertheless, the exact details of their video replication algorithm are unknown.
In [@yan2017livejack], Yan et al. proposed LiveJack, a network service which allows CDN servers to leverage ISP edge cloud resources to handle dynamic live video traffic. Their work mainly focuses on the dynamic scheduling of Virtual Media Functions (VMFs) at the edge clouds to accommodate dynamic viewer populations. Wang et al. proposed an edge-assisted crowdcast framework which makes smart decisions on viewer scheduling and video transcoding to accommodate *personalized* QoE demands [@wang2019intelligent]. Mukerjee et al. [@mukerjee2015practical] performed end-to-end optimization of the live video delivery path, which coordinates the delivery paths for higher average bitrate and lower delivery cost. This work, however, mainly focuses on optimizing the routing of live video delivery. In [@zhang2018proactive], Zhang et al. provided a video push mechanism to lower the bandwidth consumption of the CDN by proactively sending videos to competent seeds in a hybrid CDN-P2P VoD system. This work uses proactive video push, but it does not target live videos. The optimization of regular, non-live video delivery was also investigated in [@joseph2014nova; @kim2016quality] and [@lu2018optimizing].
Generic Video Replication Techniques
------------------------------------
Different content replication strategies were developed in [@hu2016joint; @ma2017joint; @zhou2015video] and [@al2018edgecache]. In [@hu2016joint], Hu et al. considered both video replication and request routing for social videos. Their algorithm focuses on social videos and the watching interests of different communities. In [@ma2017joint], Ma et al. considered video replication strategies in edge servers. They proposed a content replication algorithm to jointly minimize the accumulated user latency and the content replication cost. In [@zhou2015video], Zhou et al. investigated how the popularity of a video changes over time and then designed video replication strategies based on the derived video popularity dynamics. Different from ours, the above works mainly focus on video-on-demand (VoD) services.
System Overview {#sec:back}
===============
Our system design of *PLVER* is shown in Fig. \[fig:sys\_architec\]. Once a live viewer sends an `HTTP` request to the *request manager* of the system, the request manager identifies the key information of the request, including the requested channel, bitrate, and the user group it belongs to, by resolving the `URL` and the `IP` address. This information is used by the request manager to redirect the request to an appropriate edge server. This procedure is denoted with blue dashed lines in Fig. \[fig:sys\_architec\]. The request manager also generates the viewership information (e.g., the number of viewers of each stream in each user group) and feeds the information to *PLVER* for edge server selection.
As shown in Fig. \[fig:sys\_architec\], there are three main components of *PLVER*: i) the *stable allocation* module, which assigns the global user groups to their desired edge server clusters and balances the load of live traffic, ii) the *proactive replication algorithm*, which periodically computes the edge replication schedule within each edge cluster for the near future (e.g., the next $5$ minutes), based on the viewership information from the request manager, and iii) the *replication table*, which contains the directly available information on the replication servers for each live video segment.
When new live video segments of a stream are encoded and generated, the system first checks the most up-to-date replication schedule in the replication table by identifying the key information of each segment. It then proactively replicates these video segments into the designated edge servers even though these videos are not currently requested by the users. In this way, the replication schedule can be obtained easily and quickly by using the stream id and version number of the target video segment as the search key. Note that the process of replicating the video segments into edge servers and delivering the video contents from edge servers to the end users are conducted concurrently, since the video segments are generated from the broadcasters sequentially.
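As a rough illustration (the data layout and function names below are our own, not taken from the paper), the replication table can be viewed as a lookup keyed by stream id, bitrate, and segment number that returns the edge servers to push to:

```python
from collections import defaultdict

# (stream_id, bitrate, segment_no) -> list of edge server ids
replication_table = defaultdict(list)

def install_schedule(schedule):
    """Install the schedule produced by the proactive replication algorithm."""
    for key, edge_servers in schedule.items():
        replication_table[key] = list(edge_servers)

def on_segment_encoded(stream_id, bitrate, segment_no, segment_bytes, push):
    """Proactively push a freshly encoded segment to its scheduled edge servers."""
    for server in replication_table.get((stream_id, bitrate, segment_no), []):
        push(server, segment_bytes)   # `push` is a placeholder transport call
```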
The core component of the system is *PLVER*, denoted by the grey box in Fig. \[fig:sys\_architec\]. Its main goal is to provide a replication schedule that can be readily used for live video replication over edge servers. More specifically, it considers the traffic demand from different areas as well as the resource capacity of edge servers so as to provide a replication schedule that maximizes the traffic served by the edges. Note that tracking each viewer’s requests and directing the requests to edge servers or the origin server belongs to real-time request redirection. It happens after the replication schedule is generated and needs to consider the dynamic content availability in edge servers, which is beyond the scope of this paper. Nevertheless, it will be utilized for the performance evaluation of our algorithm in \[sec:evaluation\].
In the following, we formally model the problem that needs to be solved by *PLVER*, and then present the two main components of our solution, namely *stable one-to-multiple allocation* and *proactive replication algorithm*, in \[sec:stable\_allocation\] and \[sec:per\], respectively.
Stable One-to-multiple Allocation {#sec:stable_allocation}
=================================
The Allocation Problem
----------------------
*Instead of making the request routing decisions individually for each client, we conduct the server allocation at the granularity of user groups*. The users in the same group generally have the same network features (*e.g., subnet, ISP, location*) and thus are likely to experience similar QoE when dispatched to the same server [@niereducing; @sun2016cs2p]. As in the conventional content delivery problem, it is generally necessary to first consider the load balance problem between edge server clusters (each consisting of a number of edge servers with the same network features) and the user groups.
We consider a target network of a number of edge server clusters and user groups. Each user group $i$ originates an associated live traffic demand $d_i$, and each edge cluster $j$ has a capacity $C_j$ to serve the demands. In order to satisfy the QoE of users, each user group has a list of candidate edge clusters in descending order of preference. A higher preference indicates a cluster that can provide better predicted performance for the viewers in the group (e.g., lower latency and packet loss). Likewise, each edge cluster $j$ also has preferences regarding which map units it would like to serve [@maggs2015algorithmic].
An allocation of edge clusters to user groups is said to be a *stable marriage* if there is no pair of participants (i.e., an edge cluster and a user group) that would both be individually better off matched with each other than with the elements to which they are currently matched [@gusfield1989stable]. By conducting stable allocation, each user group is assigned to the most preferred server cluster to which it could be assigned in any stable marriage. In other words, stable allocation implies the most desirable matching between user groups and server clusters. The goal of our allocation problem is to assign the user groups to the edge clusters, such that the capacity constraints are met and the bidirectional preferences are accounted for.
Stable Allocation Implementation Challenges
-------------------------------------------
However, in the context of live video delivered over edge servers, the stable allocation faces the practical implementation challenges listed below.
### Expensive many-to-many assignment
Conventional allocation used by CDN vendors normally generates a many-to-many assignment, i.e., the traffic demand of each user group could be served by multiple edge clusters. A many-to-many assignment makes sense when there are only a small number of server clusters globally. In our context, however, the number of edge clusters is much larger than that of conventional CDN clusters, so a many-to-many assignment becomes unnecessarily expensive.
### Partial preference lists
Considering the large number of edge clusters and user groups, it is unnecessary to measure and rank the preference of every edge cluster for each user group. Therefore, each user group keeps only a partial preference list of the edge clusters that are likely to provide the best performance for it. Similarly, the edge clusters also only need to express their preferences for the top user groups that are likely candidates for assignment.
### Integral demands and capacities
The canonical stable marriage problem considers unit demands and capacities, while in our case the demands of user groups as well as the capacities of server clusters can be arbitrary positive integers.
[Algorithm \[algo:ext\_galeShapley\] (ISOA, pseudocode listing): initializes all user groups as free and sets $G_j \leftarrow [\,]$ for each edge cluster $j \in E$, then runs the proposal rounds described below.]

[Algorithm \[algo:bSearch\] (binary search, pseudocode listing): given $G_j$ and a newly inserted group $i$, maintains $left \leftarrow$ position of $i$ in $G_j$, $right \leftarrow$ length of $G_j$, and $mid = \lfloor (left + right)/2 \rfloor$ to locate the first capacity-violating position.]
Solution Methodology
--------------------
To address the above challenges, we propose a new allocation algorithm, *Integral Stable One-to-multiple Allocation (ISOA)*, which extends the Gale-Shapley algorithm used for solving the canonical stable allocation problem. *ISOA* works in rounds: in each round, each free user group (all user groups are free initially) proposes to its most preferred edge cluster, and the edge cluster may (provisionally) accept the proposal. Let $G_j$ denote the list of user groups assigned to edge cluster $j$. In case the capacity of an edge cluster is violated, we perform a binary search on $G_j$ to identify the user groups that need to be evicted.
Algorithm \[algo:ext\_galeShapley\] shows the details of *ISOA*, where $uP$ and $cP$ are the preference lists of edge clusters (by user groups) and of user groups (by edge clusters), respectively. $C_j$ denotes the service capacity of edge cluster $j$ (note that in practice, $C_j$ could be a *resource tree* instead of a single value [@maggs2015algorithmic]). To find a stable one-to-multiple allocation, we first set all user groups as free and initialize the user group list of each edge cluster as empty (line 1). Then, in each round, we pick a free user group $i$ and get its most preferred edge cluster $j$ (lines 2-3). Based on the preference of the edge cluster, we insert $i$ into the temporarily assigned user group list of edge cluster $j$. After adding a new user group to $G_j$, the current traffic demand of $G_j$ may or may not violate the capacity $C_j$. If $C_j$ is not violated, we move on to the next free user group (lines 5-6).
![An example of the stable one-to-multiple allocation containing four user groups and two edge clusters, where the service capacity of each edge cluster is denoted by ’c’ and the traffic demand of each user group is denoted by ’D’[]{data-label="fig:ispa_example"}](./fig/ISPA_example.pdf){width="0.8\columnwidth"}
In case $C_j$ is violated, we conduct a binary search (refer to Algorithm \[algo:bSearch\]) to find the first user group ($start + 1$) in $G_j$ that causes the capacity violation. We then go through all user groups from $G_j^{start +1}$ to the end of $G_j$: if adding a user group ($G_j^k$) would cause a capacity violation, we remove this user group from $G_j$ (lines 8-10). If the removed user group is $i$ itself, $i$ cannot get its most preferred edge cluster; in that case, $i$ will go back and propose to its second most preferred edge cluster (lines 11-12). Otherwise, the evicted $G_j^k$ is labelled as a free user group, waiting for a second-chance proposal.
As a simple example, Fig. \[fig:ispa\_example\] illustrates the stable one-to-multiple allocation, where we have two edge clusters $c1$ and $c2$, with service capacities of $15$ and $10$, respectively. There are $4$ user groups ($g1$, $g2$, $g3$ and $g4$), which generate traffic demands of $3$, $5$, $6$ and $6$, respectively. The preferred edge cluster list of each user group as well as the preferred user group list of each edge cluster are shown in the figure as $uP$ and $cP$, respectively. We need to match each user group to the most preferred edge cluster to which it can be assigned.
Running *ISOA* on the simple example in Fig. \[fig:ispa\_example\], user groups $g1$, $g3$, and $g4$ propose to and are matched with their most preferred edge clusters ($c1$, $c2$ and $c1$, respectively). Group $g2$, however, triggers a capacity violation at $c2$ and thus can only be matched with its second most preferred cluster $c1$. The matching results are marked with solid lines in Fig. \[fig:ispa\_example\].
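A simplified sketch of the proposal/eviction loop is given below (our code; it replaces the binary search of Algorithm \[algo:bSearch\] with a linear scan in the cluster's preference order, and evicted groups simply become free and re-propose down their lists):

```python
def isoa(demand, capacity, uP, cP):
    """Simplified sketch of ISOA (stable one-to-multiple allocation).

    demand[g]: traffic demand of user group g; capacity[c]: capacity of cluster c;
    uP[g]: partial preference list of clusters for g (most preferred first);
    cP[c]: dict mapping group -> rank at cluster c (smaller = more preferred).
    Returns a dict mapping each allocated group to its cluster.
    """
    match = {}                                   # group -> cluster
    next_choice = {g: 0 for g in uP}
    free = list(uP)
    while free:
        g = free.pop()
        if next_choice[g] >= len(uP[g]):
            continue                             # g remains unallocated
        c = uP[g][next_choice[g]]
        next_choice[g] += 1
        # Provisionally accept g, then keep groups in c's preference order
        # while capacity allows; evict the rest so they can re-propose.
        assigned = [x for x, cl in match.items() if cl == c] + [g]
        assigned.sort(key=lambda x: cP[c].get(x, float("inf")))
        used = 0
        for x in assigned:
            if used + demand[x] <= capacity[c]:
                used += demand[x]
                match[x] = c
            else:
                if match.get(x) == c:
                    del match[x]
                free.append(x)
    return match
```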
| Notation | Description |
|:---|:---|
| $F$ | Target edge cluster within which the replication problem is to be solved |
| $E$ | Set of all edge servers over $F$ |
| $U$ | Online viewers from the user groups assigned to $F$ |
| $T$ | Target time window in the near future |
| $A_j$ | Set of viewers that are served by edge server $j$ |
| $B_j$ | Bandwidth capacity of edge server $j$ |
| $a_i$ | The edge server that serves viewer $i$ during $T$ |
| $b_i$ | Bandwidth consumed by viewer $i$ |
| $s_i$ | The live stream that viewer $i$ is watching |
| $c_j$ | Cache capacity of edge server $j$ |
| $\mathcal{D}_{i}^{T}$ | The video segments to be generated by $s_{i}$ in $T$ |
| $L_{A_j}$ | Non-redundant set of streams accessed by viewers in $A_j$ |
| $S(\cdot)$ | Size function that calculates the total data volume in a set of video segments |
| $\mathcal{V}_j^T$ | Live video segments that should be replicated into edge server $j$ during $T$ |
| $L(v_{i}^t)$ | The replication schedule: list of edge servers into which video segment $v_i^t$ should be replicated |

: Main notation used in the problem formulation.[]{data-label="tbl:notations"}
Proactive Replication over the Edge {#sec:per}
===================================
Notations and Assumptions
-------------------------
Once the allocation problem is solved, we only need to consider the replication problem within each single edge cluster and its assigned user groups (Note that the QoE of the assigned user groups is guaranteed with stable allocation). We next formulate the single cluster replication problem by considering a given edge cluster $F$ and the user groups assigned to $F$. The main notations used in our problem formulation are listed in Table \[tbl:notations\]. Without loss of generality, we make the following assumptions:
- We consider a target time window $T$ in the near future for which we need to generate the video replication schedule. During $T$, a number of live streams are watched by the live viewers $U$ distributed across the user groups allocated to $F$.
- Each end user is served by one edge server at a time, and clients that cannot be served by the edge servers will be directed to the cloud.
- The cache in an edge server can be shared by multiple viewers who are accessing the video, but each viewer consumes their exclusive bandwidth of the edge server.
Resource Constraints
--------------------
We divide time into a series of short, consecutive time windows, and generate a video replication schedule for each time window based on the most up-to-date viewership of live videos. The goal of the replication schedule is to maximize the traffic served by the edge servers so as to improve the QoE of end users.
Since clients generally access the video segments of a live stream sequentially, users’ demands for the video segments to be generated in the next short time window can be roughly estimated from the current viewership of this stream. Note that live viewers might change their video quality (bitrate) during the watching process, but the distributed design and a fine-grained time window allow the system to quickly respond to such demand changes. We use $a_i$ to represent the edge server that serves viewer $i$ in $T$, and use $A_j$ to denote the set of viewers served by server $j$, i.e., $$\label{eqt:def_aj}
A_j := \{i| a_i = j, \forall i \in U\}.$$ Since each viewer only needs to be served by one edge server, we have $$\label{eqt:single_server}
A_{j_1} \cap A_{j_2} = \varnothing, \forall j_1 \neq j_2.$$ For an arbitrary edge server $j$, it should have enough bandwidth to serve $A_j$. Thus, the following constraint should be posed: $$\label{eqt:resource_constr1}
\sum_{i\in A_j}b_i \leq B_j, \forall j \in E,$$ where $b_i$ is bandwidth consumed by viewer $i$, and $B_j$ is the total bandwidth of edge server $j$.
Let $s_i$ denote an arbitrary live stream of one live channel with a certain bitrate. We denote the video segments to be generated by $s_{i}$ in $T$ as $\mathcal{D}_{i}^{T}$ (i.e., $\mathcal{D}_{i}^T = \{v_{i}^{t_1}, v_{i}^{t_2},\ldots,v_{i}^{t_n}\}$, where $(t_1, t_2,\ldots, t_n)$ are the timestamps of the video segments $(v_{i}^{t_1}, v_{i}^{t_2},\ldots,v_{i}^{t_n})$ generated in $T$). If we use $L_{A_j}$ to denote the non-redundant set of streams accessed by viewers in $A_j$ (note that $|L_{A_j}| \le |A_j|$ as one stream is normally watched by more than one viewer), the following constraint on cache capacity should be posed: $$\label{eqt:resource_constr2}
\sum_{i\in L_{A_j}}S(\mathcal{D}_{i}^T) \leq c_j, \forall j \in E,$$ where $S(\cdot)$ is the function that calculates the total caching size of a given set of video segments, $c_j$ denotes the cache capacity of edge server $j$.
Cost of Content Replication {#subsec:replicationCost}
---------------------------
While edge servers benefit the live viewers, we may need to generate multiple replicas of a single video over the edge servers. More replicas on the edge servers generally mean a higher cost of cache resources at the edge as well as extra delivery cost from the cloud to the edge servers.
To reach a good balance, we pose the following constraint to limit the overall replication cost: $$\label{eqt:rep_cost}
\sum_{j \in E}\sum_{i\in L_{A_j}}S(\mathcal{D}_{i}^T) \leq \alpha \cdot \sum_{j \in E} c_j,$$ where $\sum_{j \in E}\sum_{i\in L_{A_j}}S(\mathcal{D}_{i}^T)$ is the total size of overall replicas cached in the edge servers, and $\sum_{j \in E}c_j$ represents the total size of videos that could be cached globally. We use $\alpha$, a percentage variable, to bound the total amount of videos that could be replicated, so as to limit the video replication cost.
Problem Formulation
-------------------
The problem that needs to be solved by *PLVER* can be formulated as:
\[optimization\] $$\begin{aligned}
& \label{opt_target}\underset{\{a_i, A_j\}}{\text{max}} && \sum_{j\in E}\sum_{i \in A_j}b_i\\
& \text{s.t.} && (\ref{eqt:single_server}), (\ref{eqt:resource_constr1}), (\ref{eqt:resource_constr2}), \textit{ and } (\ref{eqt:rep_cost}) \end{aligned}$$
Solving (\[optimization\]), we obtain $a_i$ and $A_j$, with which the video replication schedule could be easily derived. That is, the video segments that should be replicated into edge server $j$ during $T$ are given by the following: $$\mathcal{V}_j^T = \sum_{i \in L_{A_j}}\mathcal{D}^T_i.$$ Based on $\mathcal{V}_j^T$ of each edge server, we can do a simple reverse transformation to get the video replication schedule, i.e., for an arbitrary video segment from stream $i$ with timestamp $t$ ($v_i^t$), the list of edge servers to which it should be replicated in $T$ is given by: $$\label{eqt:reverse_transformation}
L(v_{i}^t) = \{j| v_{i}^t \in \mathcal{V}_j^T, \forall j \in E, \forall t \in T\}.$$ This video replication schedule is then inserted into the *Replication table* in Fig. \[fig:sys\_architec\], by identifying the channel and bitrate of stream $i$. Problem (\[optimization\]) is an integer linear program with a massive number of design variables. In the rest of this section, we present a two-step heuristic algorithm to solve this problem.
Solution Methodology
--------------------
Since live video traffic is network intensive [@maggs2015algorithmic], the non-sharable bandwidth constraint is a harder constraint compared with the sharable cache capacity constraint. Therefore, *PLVER* first considers the replication problem while temporarily ignoring the constraint on the cache capacity (**Step 1**). It then conducts adjustments by moving workloads from the edge servers where the cache capacity constraint is violated to the edge servers with available cache and bandwidth resources (**Step 2**).
**Step 1: Greedy Edge Replication for Maximum Traffic:** By temporarily ignoring the cache capacity constraint, the replication problem could be transformed into the *Multiple Knapsack Problem (MKP)* [@chekuri2005polynomial]. This problem is defined as a pair $\mathcal{(B, S)}$ where $\mathcal{B}$ is a set of $m$ bins and $\mathcal{S}$ is a set of $n$ items. Each bin $j \in \mathcal{B}$ has a capacity $c_j$, and each item $i$ has a weight $w_i$ and a profit $p_i$. The objective is to assign the items to the bins such that the total profit of the assigned items is maximized, and the total weight assigned to each bin does not exceed the corresponding capacity.
If we treat the profit of each viewer $i$ as the bandwidth consumption $b_i$ of that viewer, the *MKP* problem is equivalent to our replication problem with unlimited cache capacities, i.e.,
\[opt-2\] $$\begin{aligned}
& \underset{\{x_{ij}\}}{\text{max}} && \sum_{j\in \mathcal{B}}\sum_{i\in \mathcal{S}}b_i x_{ij},\\
& \text{s.t.} && \sum_{i \in \mathcal{S}} b_i x_{ij} \leq B_j,\forall j \in \mathcal{B}\label{opt-2:band_constrait}\\
&&& \sum_{j \in \mathcal{B}} x_{ij} \leq 1, \forall i \in \mathcal{S}\\
&&& x_{ij} \in \{0, 1\}, \forall i \in \mathcal{S}, j \in \mathcal{B},\quad\quad\end{aligned}$$
where $\mathcal{B}$ and $\mathcal{S}$ are defined as the set of edge servers in a given edge cluster and the set of viewers assigned by the stable one-to-multiple allocation, respectively, and $x_{ij}$ is defined as follows: $$\label{eqt:s_function}
x_{ij} := \begin{cases}
1\text{ , if viewer $i$ is served by edge server $j$,}\\
0\text{ , otherwise}.
\end{cases}$$
The *MKP* has been well researched and admits a polynomial-time approximation scheme (*PTAS*) [@chekuri2005polynomial]. After solving Problem (\[opt-2\]), we can further calculate the set of video segments that should be replicated into each edge server based on the values of $x_{ij}$. Let $\mathcal{P}_j$ denote the set of video segments that should be stored in edge server $j$ after Step 1.
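A full PTAS is beyond the scope of a sketch; as a rough stand-in for Step 1 (our simplification, not the algorithm of [@chekuri2005polynomial]), a greedy first-fit-decreasing assignment reads:

```python
def greedy_mkp(viewers, bandwidth, servers, capacity):
    """Greedy stand-in for Step 1 (MKP with profit = weight = bandwidth).

    viewers: iterable of viewer ids; bandwidth[i]: demand b_i of viewer i;
    servers: iterable of edge server ids; capacity[j]: bandwidth B_j of server j.
    Returns a_i as a dict viewer -> server (a missing key means the viewer
    is not served by the edge and falls back to the cloud).
    """
    residual = dict(capacity)
    assignment = {}
    # Larger demands first is a common heuristic for knapsack-type packing.
    for i in sorted(viewers, key=lambda v: bandwidth[v], reverse=True):
        # First fit over servers with enough residual bandwidth.
        for j in servers:
            if bandwidth[i] <= residual[j]:
                assignment[i] = j
                residual[j] -= bandwidth[i]
                break
    return assignment
```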
**Step 2: Workload Adjustment:** The solution $\mathcal{P}_j$ can maximize the total amount of traffic served by edge cluster under the assumption of unlimited cache capacities of edge servers. Posing the constraint of limited cache capacity, we need to further adjust the solutions by moving part of the replication workloads from the edge servers whose cache capacity is violated, to the edge servers that have spare cache and bandwidth.
[Algorithm \[algo:scer\] (proactive edge replication, pseudocode listing) — Phase 1: generate the initial replication schedule by solving the *MKP* ($x_{ij} \leftarrow$ solvingMKP($bd_j$, $b_i$); $\mathcal{S}_j \leftarrow \varnothing$ for all $j \in \mathcal{F}$; $a_i \leftarrow \varnothing$ for all viewers $i \in \mathcal{M}$); Phase 2: redirect viewers; Phase 3: offload replication tasks; return $\mathcal{S}_j$.]
The whole proactive replication algorithm is shown in Algorithm \[algo:scer\] and consists of three phases. Phase 1 corresponds to Step 1, and phases 2 and 3 correspond to Step 2 introduced above. Once phase 1 is finished, there might be some viewers whose demand cannot be satisfied (i.e., $a_i = \varnothing$) once we impose the cache capacity constraint. For each of these unassigned viewers, phase 2 tries to redirect it to an edge server that has available bandwidth capacity and already caches the required video segments. After phase 2, there may be no directly available edge servers to serve the remaining unassigned viewers. Thus, in phase 3, the algorithm offloads the incomplete video caching tasks to the edge servers with residual resources. The algorithm returns when all traffic demands are completely satisfied or all edge servers in the given edge cluster are fully loaded.
The time complexity of our algorithm is ${O}(n+m)$ ($n$ and $m$ are the number of viewers in $\mathcal{M}$ and the number of edge servers in $F$, respectively), without counting the first step of solving the *MKP*. Since there are various approximation schemes for solving the *MKP* in polynomial time, and *PLVER* is a decentralized algorithm operating on the viewers and edge servers of a single edge cluster (i.e., $n$ and $m$ are of small magnitude), the problem can be solved efficiently.
\[remark2\] Note that PLVER does not need to track the real-time information of each viewer (e.g., stream being watched, bandwidth consumption). Instead, it only needs the statistics on the number of viewers of each stream (viewership) in each user group. In this sense, a “viewer” in PLVER actually means the corresponding resource demand to each live stream.
We introduce the reward for caching a stream $s$ in a certain edge server $j$ (line \[reward\] in Algorithm \[algo:scer\]). It represents the traffic demand that could be served by caching the video segments of this stream in server $j$ (during $T$). Let $b$ denote the bitrate of stream $s$; then the reward of $s$ can be defined as follows: $$\label{eqt:reward_func}
\mathcal{R}(s,j) = b \cdot \min\Big\{\Big\lfloor \frac{\Bar{B}_j}{b} \Big\rfloor, N\Big\},$$ where $\Bar{B}_j$ is the current available bandwidth of edge server $j$, and $N$ is the number of viewers of stream $s$ that have not been assigned to a server (i.e., $a_i = \varnothing$). The reward is consistent with our objective (\[opt\_target\]), i.e., maximizing the amount of traffic served by edge servers.
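A direct transcription of the reward (our code, with hypothetical argument names):

```python
import math

def reward(bitrate, available_bandwidth, num_unassigned_viewers):
    """R(s, j) = b * min(floor(B_j_bar / b), N): traffic that caching stream s
    on server j could serve during the time window T."""
    return bitrate * min(math.floor(available_bandwidth / bitrate),
                         num_unassigned_viewers)
```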
Experimental Setup {#sec:exp_setup}
==================
Live Video Viewership Dataset {#sec:TwitchData}
-----------------------------
Twitch provides developers with a RESTful API to obtain the live video information. In our experiment, we use a public dataset [@live] that consists of the traces of thousands of live streaming sessions on Twitch [@pires2015youtube]. The dataset contains the information of all live channels in the Twitch system, with a sampling interval of $5$ minutes. Detailed information includes the number of viewers of each channel, bitrates of each channel, and the duration of live sessions. We select the live channels that have more than $100$ viewers and extract the required information of these channels.
Fig. \[fig:nViewers\] shows the total number of viewers in the system from Jan. 06 to Jan. 09. During a certain time period, a channel can be either *online*, which means that it is broadcasting a live video, or offline. When a channel is online, we say that it corresponds to a *session*. Fig. \[fig:viewers\_dist\] shows the distribution of sessions with different average number of viewers.
Fig. \[fig:bitrateCdf\] illustrates the distribution of bitrates of channels in the dataset. Based on the video encoding guidelines [@youtubencode], we assume that the video streams can be encoded with multiple standard resolutions (or bitrates): $240$p, $360$p, $480$p and $720$p (or $400, 750, 1000, 2500$ *Kbps*). Obviously, when a channel broadcasts at bitrate $b$, the viewers of this channel cannot select a video quality with a bitrate exceeding $b$.
Target Network & User Groups {#setup:ugroup}
----------------------------
geoISP [@geoISP] collected the detailed performance and region coverage information of $2,317$ Internet Service Providers (ISPs) in the US. Based on the information, we build a target network over two US states (Washington and Oregon). We further develop a web crawler to collect the ISP coverage information of $470$ cities over $70$ counties in the two states from the website of geoISP.
We divide all the viewers from these two states (about $0.3$ million on average) into $1253$ user groups (based on the combination of ISP and city) within our target network. Note that one ISP can cover multiple cities and one city can be covered by multiple ISPs. From the dataset, we know the percentage of users in a city that is supported by a particular ISP. For each live stream, we distribute its viewers among these user groups based on the population of each user group (calculated based on each city’s population and the ISP coverage percentage of the city).
Edge Server Clusters {#setup:edgeCluster}
--------------------
### Setup of edge clusters & servers
| **Preference Priority** | **Clusters Description** |
|:---|:---|
| *Lv. 1* | clusters that are within the same ISP and located in the same city. |
| *Lv. 2* | clusters that are within the same ISP and located in the same county. |
| *Lv. 3* | clusters that are located in the same city but with different ISPs. |
| *Lv. 4* | clusters that are within the same ISP and located in the same state. |
| *Lv. 5* | clusters that are located in the same county but with different ISPs. |
| *Lv. 6* | clusters that are located in the same state but with different ISPs. |

: Preference levels of edge clusters for a user group.[]{data-label="tbl:alloation_define"}
Among all user groups, we further extract $641$ city-ISP combinations as the targets for deploying the edge clusters. Each edge cluster in our experiments consists of five types of edge servers with $5$, $10$, $20$, $40$ and $80$ Mbps bandwidth capacity, respectively. The servers are randomly deployed at each edge cluster. The total bandwidth of all the deployed edge clusters is set to equal the total traffic demand of all viewers. Note that such a bandwidth setting may not always guarantee the full satisfaction of all viewers’ demand, because the bandwidth of each edge cluster may not be fully utilized and because the cache capacity and QoE of each edge cluster may differ. Nevertheless, our later experiments show that such a setting is appropriate for evaluating the performance of different edge caching strategies.
### Setting of cache capacity
For an edge server with bandwidth capacity $b$ Mbps, it should have at least $b * T$ Mb ($T$ is the considered time period) of cache capacity to ensure that it has enough resources in our edge replication strategy. For simplicity, we use $\hat{b}$ to denote $b*T$ hereafter. Since video traffic delivery is network intensive, the cache capacity of edge servers is normally larger than $\hat{b}$ Mb. In our experiments, we assume that the cache capacity (variable $X$) of all edge servers is uniformly distributed within the range $(0.5 * \hat{b}, 2 * \hat{b})$. In the following section, we further adjust the capacity that can be used in each edge server by setting different values of the replication cost constraint factor $\alpha$.
| Preference level | Greedy Allocation | ISOA | Changes |
|:---|---:|---:|---:|
| *Lv.1* | 390 | 451 | +61 |
| *Lv.2* | 496 | 457 | -39 |
| *Lv.3* | 120 | 106 | -14 |
| *Lv.4* | 136 | 127 | -9 |
| *Lv.5* | 56 | 57 | +1 |
| *Lv.6* | 36 | 37 | +1 |
| un-allocated | 19 | 18 | -1 |

: Number of user groups allocated at each preference level.[]{data-label="tbl:alloation_results"}
![Performance change with ISOA over *greedy allocation*.[]{data-label="fig:stableAllocation_map"}](./fig/stable_allocation_heatmap.pdf){width="0.55\columnwidth"}
Performance Evaluation {#sec:evaluation}
======================
Performance Evaluation of Stable One-to-multiple Allocation
-----------------------------------------------------------
### Evaluation Methodology
We compare the performance of *ISOA* with another edge cluster allocation strategy: *greedy allocation*. With the greedy allocation, each of the user groups selects its most preferred edge cluster iteratively, until all user groups get allocated or there is no available edge cluster.
### Preference List Generation
We define the rank of preferred edge clusters of user groups (introduced in \[sec:stable\_allocation\]), as listed in Table \[tbl:alloation\_define\]. Note that the preference list defined in Table \[tbl:alloation\_define\] is just an example paradigm to generate the input of our stable allocation algorithm, and it can be altered by the CDNs themselves, e.g., according to the contract terms under which the cluster is deployed, granularity of the user groups partition, and so on [@maggs2015algorithmic].
### Performance Evaluation of ISOA
We conduct the stable one-to-multiple allocation between user groups and edge clusters based on the data introduced in \[setup:ugroup\] and \[setup:edgeCluster\], using *ISOA* and the aforementioned *greedy allocation* method. The detailed allocation results are summarized in Table \[tbl:alloation\_results\]. Since there are $1253$ user groups in our experiment, the table shows the distribution of these user groups allocated to different levels of preferred edge clusters. For example, $390$ user groups are allocated to their first-ranked (most preferred) edge cluster with *greedy allocation*, while the number increases to $451$ with *ISOA*. Compared with the greedy allocation, *ISOA* can allocate more user groups to their higher-ranked (more preferred) edge clusters. The performance improvement of *ISOA* over the greedy allocation for every user group in our experiment is illustrated in Fig. \[fig:stableAllocation\_map\], where the performance of each user group is marked with a colored square.
Performance Evaluation of Proactive Edge Replication
----------------------------------------------------
### Evaluation Methodology
We evaluate *PLVER* by comparing it with the following replication strategies:
- *Auction Based Replication (ABR):* Each edge server conducts a simple “auction” to determine the cached videos: the live videos with the largest number of viewers via the edge server win the auction and are cached, and the auction repeats until the edge server uses up its cache capacity [@hung2018combinatorial]. In other words, this method replicates videos onto an edge server in decreasing order of their current number of viewers (a minimal sketch follows this list).
- *Caching On Requested Time (CORT):* *This strategy does NOT adopt pre-replication and instead uses a request-triggered caching strategy.* It caches videos onto edge servers in real time, when video segments are actually requested by end users. When content is requested, it first checks whether an edge server is available to serve the request; if not, it replicates the video segments of the stream onto a new edge server.
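The following Python sketch illustrates the per-server auction of *ABR* (names and units are illustrative; the strategy in [@hung2018combinatorial] is more elaborate than this):

```python
def abr_replication(cache_capacity, candidates):
    """Greedy auction on one edge server: cache live videos in decreasing
    order of their current viewer count (via this server) until the cache
    capacity is exhausted.

    candidates: dict video_id -> (num_viewers, segment_size)
    cache_capacity and segment_size are in the same unit (e.g., Mb).
    """
    cached, used = [], 0
    for video, (viewers, size) in sorted(candidates.items(),
                                         key=lambda kv: -kv[1][0]):
        if used + size <= cache_capacity:
            cached.append(video)
            used += size
    return cached
```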
To evaluate the performance of different strategies, we use the metric ***offloading ratio***, calculated as the amount of traffic served by the edge servers divided by the overall traffic in the time period of length $T$. The performance is evaluated under different values of the ***replication cost factor $\alpha$*** (refer to \[subsec:replicationCost\]), so that we can investigate the tradeoff between performance and replication overhead.
### Overall Performance of PLVER
Based on the Twitch viewership data from Jan. 06, 2014 to Jan. 09, 2014, we conduct experiments on an hourly basis. Setting the value of $\alpha$ to $20\%, 40\%, 60\%, 80\%$ and $100\%$, we compute the average offloading ratio of the three strategies in each case. The results are shown in Fig. \[fig:overall\_performance\], from which we can see that *PLVER* outperforms *ABR* and *CORT* in all five cases. The overall performance improvements of *PLVER* for the five cases are $9\%, 10\%, 15\%, 28\%$ and $10\%$ over *ABR*, respectively, and $82\%, 82\%, 79\%, 81\%$ and $44\%$ over *CORT*, respectively.
Furthermore, Fig. \[fig:overall\_performance\] shows that the overall performance improves considerably when the replication cost constraint ($\alpha$) is increased from $20\%$ to $60\%$, but the improvement fades as $\alpha$ increases beyond $60\%$. This holds for all three replication strategies. Therefore, in our experiments, a good tradeoff between performance and replication cost is reached when $\alpha$ is between $40\%$ and $60\%$ (as shown in Fig. \[fig:overall\_performance\]).
### Detailed Performance of PLVER
In terms of overall performance, *ABR* is more comparable to *PLVER* than *CORT* is. We thus investigate the detailed performance of *PLVER* and *ABR*. The traffic offloading ratios for i) each hour and ii) each user group are shown in Fig. \[fig:seer\_perform\_time\] and Fig. \[fig:p\_heatmap\], respectively.
Fig. \[fig:seer\_perform\_time\] shows the hourly traffic offloading ratio of *PLVER* and *ABR* with the replication cost constraint factor $\alpha$ equal to $100\%$ and $60\%$, respectively. Even during non-peak hours (when edge server resources are sufficient), it is hard for *ABR* to achieve satisfactory performance. Moreover, when $\alpha$ decreases from $100\%$ to $60\%$, the performance degradation of *PLVER* is much smaller than that of *ABR*.
Fig. \[fig:p\_heatmap\] shows a heat map indicating the performance of *PLVER* and *ABR* at each edge cluster (with $\alpha = 40\%$), where the traffic offloading ratios are represented by different colors. Since no edge cluster has performance below $30\%$ or above $90\%$, the color bar spans traffic offloading ratios from $30\%$ to $90\%$. Edge clusters with better performance are colored green, and those with worse performance red.
We also investigate the performance for requests of different video qualities ($240$p, $360$p, $480$p, $720$p). The satisfaction ratio of requests (i.e., the ratio of requests that are successfully directed to corresponding edge servers) under different replication strategies is shown in Fig. \[fig:request\_success\_bitrate\]. We can observe that *PLVER* outperforms *ABR* and *CORT* for all quality levels. Among the four quality levels, the high-quality $720$p requests have a relatively lower traffic offloading ratio than the other three. However, as high-quality requests generate more traffic than the others, they have a larger impact on the final performance. *PLVER* provides a satisfaction ratio of $36\%$ for the $720$p requests, which is higher than those of *ABR* ($19\%$) and *CORT* ($30\%$).
### Impact of Viewership Fluctuation
![The performance of *PLVER* and ABR in each edge cluster with $\alpha = 0.4$.[]{data-label="fig:p_heatmap"}](./fig/perform_heatmap.pdf){width="1\columnwidth"}
As *PLVER* uses the viewership information (i.e., the number of viewers) in the current time window to make decisions for the next time window, viewership fluctuations across consecutive time slots may impact the performance of the replication algorithms. To investigate this, we first generate the replication schedules of the different replication strategies from the viewership data in the peak traffic hours of \[sec:TwitchData\], and then synthesize new viewership data to test the performance of these replication schedules. The new viewership data is generated by introducing different levels of fluctuation into the original viewership data used to generate the replication schedules; specifically, the number of viewers of each channel is perturbed by a given percentage (e.g., randomly increased or decreased by $20\%$), as sketched below.
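The sketch below shows one simple way to realize this perturbation (our reading of the procedure; the exact generator used in the experiments may differ):

```python
import numpy as np

def perturb_viewership(viewers, level, seed=0):
    # Randomly increase or decrease each channel's viewer count by `level`
    # (e.g., level=0.2 for +/-20%), keeping counts as non-negative integers.
    rng = np.random.default_rng(seed)
    viewers = np.asarray(viewers, dtype=float)
    signs = rng.choice([-1.0, 1.0], size=viewers.shape)
    return np.maximum(0, np.round(viewers * (1.0 + signs * level))).astype(int)
```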
The performance of *PLVER* under different levels of viewership fluctuation is shown in Fig. \[fig:perform\_on\_fluct\]. The performance curve (representing the traffic offloading ratio) decreases gradually from $75\%$ to $64\%$ as the fluctuation level increases from $10\%$ to $70\%$. Nevertheless, according to the statistical analysis of our dataset, viewership fluctuations above $30\%$ are quite rare, so their impact on *PLVER* is quite small.
Conclusion
==========
Live video services have gained great popularity in recent years. The QoE of live videos, however, suffers from the cache miss problem occurring at the edge layer. Solutions in current live video products, as well as in state-of-the-art research, add extra latency to the live streams, which sacrifices the “liveness” of the delivered video. In this paper, we propose *PLVER*, an efficient edge-assisted live video delivery scheme aimed at improving the QoE of live videos. *PLVER* first conducts a *one-to-multiple* stable allocation between edge clusters and user groups. It then adopts proactive video replication algorithms to speed up video replication over the edge servers. Trace-driven experimental results demonstrate that our solution outperforms other edge replication methods.
[Huan Wang]{} received the Bachelor and Master degrees in computer science from Southwest Jiaotong University and the University of Electronic Science and Technology of China, in 2013 and 2016, respectively. He is currently pursuing the Ph.D. degree with the Department of Computer Science, University of Victoria, BC, Canada. His research interests include content/video delivery, edge caching and computing, and network traffic anomaly detection.
[Guoming Tang]{} (S’12-M’17) is currently a research fellow at the Peng Cheng Laboratory, Shenzhen, Guangdong, China. He received his Ph.D. degree in Computer Science from the University of Victoria, Canada, in 2017, and the Bachelor’s and Master’s degrees from the National University of Defense Technology, China, in 2010 and 2012, respectively. He was also a visiting research scholar of the University of Waterloo, Canada, in 2016. His research mainly focuses on cloud/edge computing, green computing, and intelligent transportation systems.
[Kui Wu]{} (S’98-M’02-SM’07) received the BSc and the MSc degrees in computer science from the Wuhan University, China, in 1990 and 1993, respectively, and the PhD degree in computing science from the University of Alberta, Canada, in 2002. He joined the Department of Computer Science, University of Victoria, Canada, in 2002, where he is currently a Full Professor. His research interests include network performance analysis, mobile and wireless networks, and network performance evaluation. He is a senior member of the IEEE.
[Jianping Wang]{} is a professor of the computer science department at City University of Hong Kong, Hong Kong. She received the B.E. and M.Sc. degrees in computer science from Nankai University, Tianjin, China, in 1996 and 1999, respectively, and the Ph.D. degree in computer science from the University of Texas at Dallas, USA, in 2003. Her research interests include cloud computing, service oriented networking, and data center networks.
[^1]: H. Wang and K. Wu are with the Department of Computer Science, University of Victoria, Victoria, BC V8W 3P6, Canada (e-mail: {huanwang, wkui}@uvic.ca).
[^2]: G. Tang is with Peng Cheng Laboratory, Shenzhen, Guangdong 518066, China (e-mail: [email protected]).
[^3]: J. Wang is with Department of Computer Science, City University of Hong Kong, Hong Kong (e-mail: [email protected]).
---
abstract: 'Understanding and evaluating the robustness of neural networks under adversarial settings is a subject of growing interest. Attacks proposed in the literature usually work with models trained to minimize cross-entropy loss and output softmax probabilities. In this work, we present interesting experimental results that suggest the importance of considering other loss functions and target representations, specifically, (1) training on mean-squared error and (2) representing targets as codewords generated from a random codebook. We evaluate the robustness of neural networks that implement these proposed modifications using existing attacks, showing an increase in accuracy against untargeted attacks of up to 98.7% and a decrease of targeted attack success rates of up to 99.8%. Our model demonstrates more robustness compared to its conventional counterpart even against attacks that are tailored to our modifications. Furthermore, we find that the parameters of our modified model have significantly smaller Lipschitz bounds, an important measure correlated with a model’s sensitivity to adversarial perturbations.'
author:
- |
Sean Saito\
SAP Asia, Singapore\
[[email protected]]{}
- |
Sujoy Roy\
SAP Asia, Singapore\
[[email protected]]{}
title: |
Effects of Loss Functions And\
Target Representations on Adversarial Robustness
---
Introduction
============
Neural networks produce state-of-the-art results across a large number of domains ([@krizhevsky2009learning], [@vaswani2017attention], [@van2016wavenet], [@hannun2014deep]). Despite increasing adoption of neural networks in commercial settings, recent work has shown that such algorithms are susceptible to inputs with imperceptible perturbations meant to cause misclassification ([@szegedy2013intriguing], [@goodfellow2014explaining]). It is thus important to investigate additional vulnerabilities as well as defenses against them.
![**Top.** Adversarial images generated by the **targeted white-box Madry et al. attack** using the CIFAR-10 test dataset and a DenseNet model with one-hot target representations and softmax outputs trained to minimize cross-entropy. **Bottom.** Adversarial images generated under the identical setting, but using a DenseNet model trained to minimize the mean-squared error between tanh outputs and codeword targets. We choose the smallest $\epsilon$ that achieves a success rate of 100% against each model (**0.03** and **0.35**, respectively).[]{data-label="fig:fixed_success_rate_small"}](combined_madry_sprites_eps_0_03_0_035_2.png){width="0.94\linewidth"}
In this paper we investigate the problem of adversarial attacks on image classification systems. Attacks so far have only considered the conventional neural network architecture, which outputs softmax predictions and is trained by minimizing the cross-entropy loss function. We thus propose the following modifications and evaluate the robustness of neural networks that implement them against adversarial attacks:
- Train the model to minimize mean-squared error (MSE), rather than cross-entropy.
- Replace traditional one-hot target representations with codewords generated from a random codebook.
We evaluate our proposed modifications from multiple angles. First, we measure the robustness of the modified model using attacks under multiple threat scenarios. Secondly, we introduce an attack which, without sacrificing its efficacy towards conventional architectures, is tailored to our proposed modifications. Finally, we conduct spectral analysis on the model’s parameters to compute their upper Lipschitz bounds, a measure that has been shown to be correlated with a model’s robustness. Our results in Section \[sec:experimental\_results\] demonstrate that, across all three evaluations, our proposed model displays increased robustness compared to its conventional counterpart.
Background
==========
Neural networks
---------------
A neural network is a non-linear function $F_\theta$ that maps data $x \in \mathbb{R}^n$ to targets $y \in \mathbb{R}^d$, where $n, d$ are the dimensions of the input and target spaces, respectively, and $\theta$ represents the parameters of the neural network. For conventional neural networks and classification tasks, $y$ is typically a one-hot representation of the class label and $d$ is the number of classes in the dataset. In this work, we use the DenseNet architecture [@huang2017densely] as the existing benchmark, which has recently produced state-of-the-art results on several image datasets.
Adversarial examples
--------------------
The goal of an adversarial attack is to cause some misclassification from the target neural network. In particular, [@szegedy2013intriguing] has shown that it is possible to construct some $\Tilde{x}$ by adding minimal perturbations to the original input $x$ such that the model misclassifies $\Tilde{x}$. Here, $\Tilde{x}$ is commonly referred to as an *adversarial example*, while the original data $x$ is referred to as a *clean example*. Apart from image classification, adversarial attacks have been proposed in both natural language and audio domains ([@carlini2018audio], [@alzantot2018generating], [@yakura2018robust]).
Attacks {#subsec:attacks}
-------
#### Settings.
We explore two adversarial settings, namely white-box and black-box scenarios. In the white-box setting, the attacker has access to and utilizes the model’s parameters, outputs, target representations, and loss function to generate adversarial examples. In the black-box scenario, the attacker has no access to the model’s parameters or specifications and only has the ability to query it for predictions. In this work, we employ transfer attacks, a type of black-box attack where adversarial examples are generated using a proxy model which the adversary has access to.
#### Types.
There are mainly two types of attacks. In a *targeted attack*, an adversary generates an adversarial example so that the target model returns some target class $t$. A targeted attack is evaluated by its *success rate*, which is the proportion of images for which the target class was successfully predicted (the lower the better from the perspective of the defense). On the other hand, in an *untargeted attack*, the attacker causes the model to simply return some prediction $y' \neq y$. It is evaluated by the *accuracy* of the target model, which denotes the proportion of images which failed to get misclassified (the higher the better from the perspective of the defense).
The following sections describe the attacks used in this work.
#### Fast Gradient Sign Method (FGSM).
The Fast Gradient Sign Method [@goodfellow2014explaining], one of the earliest gradient-based attacks, generates adversarial examples via:
$$\Tilde{x} = x + \epsilon \cdot sign(\nabla_x J(x, y_t))$$
where $J$ is the loss function of the neural network, $y_t$ is the target class, and $\epsilon$ is a parameter which controls the magnitude of the perturbations made to the original input $x$. The gradient, taken with respect to the input, determines the direction in which each pixel should be perturbed in order to maximize the loss function and cause a misclassification.
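A minimal TensorFlow sketch of this update is shown below (an illustration only; the experiments in this paper use the Cleverhans implementations):

```python
import tensorflow as tf

def fgsm_step(model, x, y, epsilon,
              loss_fn=tf.keras.losses.CategoricalCrossentropy()):
    # One FGSM step: perturb each pixel by epsilon in the direction of the
    # sign of the gradient of the loss J with respect to the input.
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + epsilon * tf.sign(grad), 0.0, 1.0)
```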
#### Basic Iterative Method (BIM).
The Basic Iterative Method, proposed by [@kurakin2016physicalworld], applies FGSM iteratively to find more effective adversarial examples.
#### Momentum Iterative Method (MIM).
The Momentum Iterative Method [@dong2018boosting] combines iterative gradient-based attacks with the accumulation of a velocity vector based on the gradient of the loss function.
#### L-BFGS Attack.
[@szegedy2013intriguing] proposed the L-BFGS attack, the first targeted white-box attack on convolutional neural networks, which solves the following constrained optimization problem:
$$\begin{aligned}
& {\text{minimize}}
& & c \cdot \left \| x - \Tilde{x} \right \|^2_2 + L_{F, t}(\Tilde{x})\\
& \text{s.t.}
& & \Tilde{x} \in [0, 1]^n
\end{aligned}$$
The above formulation aims to minimize two objectives; the left term measures the distance ($L_2$ norm) between the input and the adversarial example, while the right term represents the cross-entropy loss. It is used only as a targeted attack.
#### Deep Fool.
The Deep Fool attack, proposed by [@moosavi2016deepfool], is an attack which imagines the decision boundaries of neural networks to be linear hyperplanes and uses an iterative optimization algorithm similar to the Newton-Raphson method to find the smallest $L_2$ perturbation which causes a misclassification. It is used only as an untargeted attack.
#### Madry et al.
[@madry2017towards] proposed an attack based on projected gradient descent (PGD), which relies on local first order information of the target model. The method is similar to FGSM and BIM, except that it uses random starting positions for generating adversarial examples.
#### Carlini & Wagner L2 (CWL2).
The Carlini & Wagner L2 attack [@carlini2016towards] follows an optimization problem similar to that of L-BFGS but replaces cross-entropy with a cost function that depends on the pre-softmax logits of the network. In particular, the attack solves the following problem:
$$\begin{aligned}
& {\text{minimize}}
& & \left \| \delta \right \|_2 + c \cdot f(x + \delta) \\
& \text{s.t.}
& & x + \delta \in [0, 1]^n \\
\end{aligned}$$
where $\delta$ is the perturbation made to the input and $f$ is the objective function:
$$f(\Tilde{x}) = \max (\max_{i\neq t}(z(\Tilde{x})_i - z(\Tilde{x})_{t}), 0)$$
Here, $z(x)$ represents the pre-softmax logits of the network. In short, the attack aims to maximize the logit value of the target class while minimizing the $L_2$ norm of the input perturbations.
Improving adversarial robustness
================================
In this work we have two proposals. First, we propose changes to the conventional neural network architecture and target representations to defend against adversarial attacks described in Section \[subsec:attacks\]. Second, we propose a modified, more effective CWL2 attack that is specifically tailored to our proposed defense.
Training on mean-squared error {#subsec:mseloss}
------------------------------
Instead of the conventional cross-entropy loss, we propose to use MSE to compute the error between the output of the model $F_\theta$ and the target $y \in Y$, where $Y$ is the set of target representations for all classes. During inference, we select the class whose target representation $y$ has the smallest Euclidean distance to the network output ${\hat y}$.
Randomized target representations {#subsec:randomrep}
---------------------------------
Instead of using one-hot encoding as target representations, we represent each target class as a codeword from a random codebook. Specifically, the $n$ target representations corresponding to the $n$ classes are sampled once at the beginning of training from a uniform distribution $U(-1, 1)^d$ based on a secret key. To match the representation space of the network output and the targets, the conventional softmax layer is replaced with a tanh activation with $d$ outputs.
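The two modifications can be summarized by the following NumPy sketch (codeword sampling and the nearest-codeword decoding rule described above; illustrative only, not our training code):

```python
import numpy as np

def make_codebook(num_classes, d, secret_key):
    # One codeword per class, sampled once from U(-1, 1)^d, with the secret
    # key used as the PRNG seed.
    rng = np.random.default_rng(secret_key)
    return rng.uniform(-1.0, 1.0, size=(num_classes, d))

def decode(tanh_output, codebook):
    # Pick the class whose codeword is closest, in Euclidean distance, to the
    # network's tanh output.
    dists = np.linalg.norm(codebook - tanh_output[None, :], axis=1)
    return int(np.argmin(dists))
```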
Modified CWL2 attack {#subsec:modifiedcwl2}
--------------------
The Carlini & Wagner L2 attack makes several assumptions about the target network’s architecture through the cost function mentioned in Section \[subsec:attacks\], namely that the highest logit value corresponds to the most likely class. Applying our proposed modifications breaks these assumptions, since the output of the network consists of tanh activations and its length no longer corresponds to the number of classes in the dataset. We thus propose a simple modification to the CWL2 attack in which the cost function considers the distance $D$, in some metric space, between the logits and the targets:
$$f(\Tilde{x}) = \max (D(z(\Tilde{x})_{t}, t) - \min_{i\neq t} D(z(\Tilde{x})_i, i), 0)$$
Like with the Carlini & Wagner L2 attack, $f(\Tilde{x}) = 0$ if and only if the model predicts the target class. Using the change-of-variables formulation utilized in [@carlini2016towards] to enforce box constraints on the perturbations, our attack finds some $w$ which optimizes the following objective:
$$\min_w \left \| \frac{1}{2}(\tanh(w) + 1) - x \right \|^2_2 + c \cdot f\!\left(\frac{1}{2} (\tanh(w) + 1)\right)$$
where $c$ is a trade-off constant that controls the importance of the size of the perturbations (larger values of $c$ allow for larger distortions). In our experiments, we define $D(x, y)$ as the Euclidean distance.
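For concreteness, the cost function $f$ can be sketched as follows (with Euclidean $D$, as in our experiments; illustrative code, not the attack implementation itself):

```python
import numpy as np

def modified_cw_cost(output, codebook, target):
    # f = max( D(output, codeword_target) - min_{i != target} D(output, codeword_i), 0 )
    # f == 0 exactly when the target codeword is the nearest one, i.e. when
    # the model predicts the target class.
    dists = np.linalg.norm(codebook - output[None, :], axis=1)
    d_target = dists[target]
    d_best_other = np.min(np.delete(dists, target))
    return max(d_target - d_best_other, 0.0)
```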
Lipschitz bounds and robustness {#subsec:lipschitz}
-------------------------------
Earlier works have suggested that the sensitivity of neural networks towards adversarial perturbations can be measured with the upper Lipschitz bound of each network layer [@szegedy2013intriguing]. Parseval Networks [@cisse2017parseval], for example, have introduced a layer-wise regularization technique for improving robustness by enforcing smaller global Lipschitz bounds. More specifically, [@cisse2017parseval] have shown that:
$$\operatorname{\mathbb{E}}[J_{adv}(F(\Tilde{x},\theta), y, \epsilon)] \leq \operatorname{\mathbb{E}}[J(F(x, \theta), y)] + \lambda \Lambda \epsilon$$
where $J_{adv} = \max_{\Tilde{x}:\left \| \Tilde{x} - x \right \| \leq \epsilon} J(F(\Tilde{x}, \theta), y)$ , and $\lambda, \Lambda$ are the upper Lipschitz bounds of $J$ and $F$, respectively. In other words, the efficacy of an adversarial attack depends on the generalization error of the target model as well as the Lipschitz bounds of its layers. This suggests that smaller Lipschitz bounds indicate a more robust model. For both fully-connected and convolutional layers, this can be measured by calculating their operator norms. The operator norm $\left \| \theta^l \right \|$ of the $l$-th fully-connected layer is simply the largest singular value of the weight matrix. The Lipschitz constant of the $l$-th layer is then: $$\Lambda^l = \left \| \theta^l \right \| \Lambda^{l-1}$$
For convolutional kernels, we rely on the formulation in [@szegedy2013intriguing], which involves applying the two-dimensional discrete Fourier Transform to find the largest singular values.
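A minimal NumPy sketch of these computations is given below (the convolutional bound follows the transform-domain formulation cited above; strides and padding are ignored for simplicity, and all names are illustrative):

```python
import numpy as np

def dense_operator_norm(W):
    # Spectral norm (largest singular value) of a fully-connected weight matrix.
    return np.linalg.svd(W, compute_uv=False)[0]

def conv_operator_norm(kernel, input_hw):
    # kernel: (kh, kw, c_in, c_out); input_hw: spatial size (H, W) of the input.
    # Take the 2D DFT of the kernel over its spatial axes (zero-padded to the
    # input size) and return the largest singular value over all frequencies.
    kh, kw, c_in, c_out = kernel.shape
    transformed = np.fft.fft2(kernel, s=input_hw, axes=(0, 1))
    svals = np.linalg.svd(transformed.reshape(-1, c_in, c_out), compute_uv=False)
    return float(svals.max())
```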
Section \[subsec:lipschitz\_upper\_bounds\] presents empirical results which demonstrate that simply changing the loss function from cross-entropy to mean-squared error can yield model parameters with significantly smaller Lipschitz bounds.
Experimental setup
==================
In this section we describe the evaluation datasets, evaluation models and adversarial image generation process.
Datasets {#subsec:dataset}
--------
**CIFAR-10** [@krizhevsky2009learning] is a small image classification dataset with 10 classes. It contains 60,000 thumbnail-size images of dimensions 32x32x3, of which 10,000 images are withheld for testing.
**MNIST** [@lecun-mnisthandwrittendigit-2010] is another image classification dataset containing monochromatic thumbnails (28x28) of handwritten digits. It is comprised of 60,000 training images and 10,000 testing images.
**Fashion-MNIST** [@xiao2017/online] is a relatively new image classification dataset containing thumbnail images of 10 different types of clothing (shoes, shirts, etc.) which acts as a drop-in replacement to MNIST.
Models evaluated {#subsec:modelsevaluated}
----------------
We use three variants of the DenseNet model to generate adversarial examples:
- O:SOFTMAX:CE refers to a DenseNet model with softmax activations trained on cross-entropy loss and one-hot target representations.
- O:SOFTMAX:MSE refers to a DenseNet model with softmax activations trained on MSE and one-hot target representations.
- R:TANH:MSE refers to a DenseNet model with tanh activations trained on MSE using codeword target representations. We used a codeword length of $d=128$.
We have evaluated the robustness of the R:TANH:MSE model with different codeword lengths (64, 256, and 1024) but found no significant discrepancies in the results.
\[table:attack\_params\]
\[table:other\_params\]
\[results:untargeted\_attacks\]
Generating adversarial examples {#sec:adv_generation}
-------------------------------
For each dataset mentioned in Section \[subsec:dataset\], we train a model on the training set and generate adversarial examples using the test set. For targeted attacks, we randomly sample a target class for each image in the test set.
We evaluate each model’s (listed in Section \[subsec:modelsevaluated\]) robustness against attacks (listed in Table \[table:attack\_params\]) under the white-box setting. For the R:TANH:MSE model, the attacker has access to the codeword representations. We also evaluate model robustness against transfer attacks, a type of black-box attack where adversarial examples are generated using a proxy model which the adversary has access to. Finally, we further measure the robustness of our proposed model using the modified CWL2 attack.
All experiments are implemented using TensorFlow [@tensorflow2015-whitepaper], a popular framework for building deep learning algorithms.
### Attack parameters
For a given attack, we generate adversarial examples across a range of values for a particular parameter which controls the magnitude of the perturbations made. Table \[table:attack\_params\] lists the parameters which are modified for each attack, whereas Table \[table:other\_params\] lists the parameters held constant. We use the default values defined in Cleverhans for our constant parameters.
### Adapting attacks to our proposed techniques
The attacks described in Section \[subsec:attacks\] are implemented using the Cleverhans library [@papernot2017cleverhans]. By default, the attacks assume that the model outputs softmax predictions and that the targets are represented as one-hot vectors. Hence the internal loss function for some attacks (e.g. gradient-based iterative attacks) is predefined as cross-entropy. However, because the cross-entropy loss function is not compatible with the R:TANH:MSE model, we have adapted the library to use mean-squared error when the target model has also been trained on mean-squared error. These adaptations are important in preserving the white-box assumption of each attack.
Experimental observations {#sec:experimental_results}
=========================
In this section, we present and analyze the performance of the evaluation models under different attack scenarios: untargeted and targeted attacks (Section \[subsec:untargeted\_and\_targeted\_results\]), black-box attacks (Section \[subsec:black\_box\_results\]), and our modified CWL2 attack (Section \[results:modifiedcwl2\]). Benchmark performances on the original datasets are presented in Section \[subsec:clean\_test\_acc\].
Clean test performance {#subsec:clean_test_acc}
----------------------
Table \[clean-acc\] lists the accuracy of each model across each clean test dataset. We observe minimal differences in accuracies across the models, and hence our proposed modifications can maintain state-of-the-art classification performances.
\[clean-acc\]
\[results:targeted\_attacks\]
Untargeted and targeted attacks {#subsec:untargeted_and_targeted_results}
-------------------------------
Table \[results:untargeted\_attacks\] lists the accuracies of the models against untargeted white-box attacks. Both O:SOFTMAX:MSE and R:TANH:MSE models demonstrate higher accuracies on the adversarial examples compared to the O:SOFTMAX:CE model; we observe an increase in accuracies of up to 98.7%. Similar results can be observed in Table \[results:targeted\_attacks\], where the O:SOFTMAX:MSE and R:TANH:MSE models achieve a consistent decrease in attack success rates of up to 99.8%.
Black box attacks {#subsec:black_box_results}
-----------------
Table \[black-box\] shows the accuracies of transfer attacks against the O:SOFTMAX:MSE and R:TANH:MSE models. Our proposed models demonstrate more robustness towards black-box attacks compared to the white-box versions with the same configurations. Though this is expected behavior, it is imperative to evaluate a defense under multiple threat scenarios.
\[black-box\]
\[results:srl2\]
Modified CWL2 attack {#results:modifiedcwl2}
--------------------
Table \[results:srl2\] compares our proposed attack with the CWL2 attack. The results show that our attack maintains its efficacy against O:SOFTMAX:CE models while significantly increasing its success rate against the R:TANH:MSE model up to **70.9%**. We note that increasing the initial constant for our attack yields increased success rates, which is aligned with the intuition that the parameter controls the importance of the attack’s success as highlighted in Section \[subsec:modifiedcwl2\]. We also observe that, despite the increase in the attack’s efficacy, the R:TANH:MSE model displays more robustness compared to the O:SOFTMAX:CE model, with a decrease in success rates of up to **28.5%**.
Distortion vs. performance {#subsec:visualization}
--------------------------
On page 1, Figure \[fig:fixed\_success\_rate\_small\] displays adversarial images generated from targeted white-box Madry et al. attacks on the O:SOFTMAX:CE and R:TANH:MSE models respectively. We choose the lowest $\epsilon$ for which the attack achieves success rates of 100%. It is clear that the R:TANH:MSE model requires much larger perturbations for an attack to achieve the same success rates as against the O:SOFTMAX:CE model.
Figure \[fig:c10\_mim\_images\] displays adversarial images generated using the Momentum Iterative Method against both O:SOFTMAX:CE and R:TANH:MSE models where $\epsilon=0.1$. We observe that the R:TANH:MSE model is robust even against adversarial images where the perturbations are clearly perceptible to humans.
Finally, we visualize adversarial examples generated using our modified CWL2 attack and the R:TANH:MSE model in Figure \[fig:srl2\_images\], where the attack achieves higher success rates compared to the original attack. The perturbations made to the images are much less perceptible compared to the adversarial examples displayed in Figures \[fig:fixed\_success\_rate\_small\] and \[fig:c10\_mim\_images\].
![**Top.** Adversarial images generated for MNIST using the **targeted MIM attack** $(\epsilon=0.1)$ on the O:SOFTMAX:CE model. The attack achieves a success rate of **90.8%**. **Bottom.** Adversarial images generated under the identical setting for the R:TANH:MSE model. The attack achieves a success rate of **2.3%**.[]{data-label="fig:c10_mim_images"}](MNIST_combined_sprites.png){width="0.8\linewidth"}
![Adversarial examples generated by our proposed attack ($c=0.1$) on the R:TANH:MSE model for test images from the CIFAR-10, MNIST, and Fashion-MNIST datasets.[]{data-label="fig:srl2_images"}](c10_srl2.png "fig:"){width="0.85\linewidth"} ![Adversarial examples generated by our proposed attack ($c=0.1$) on the R:TANH:MSE model for test images from the CIFAR-10, MNIST, and Fashion-MNIST datasets.[]{data-label="fig:srl2_images"}](mnist_srl2.png "fig:"){width="0.85\linewidth"} ![Adversarial examples generated by our proposed attack ($c=0.1$) on the R:TANH:MSE model for test images from the CIFAR-10, MNIST, and Fashion-MNIST datasets.[]{data-label="fig:srl2_images"}](fmnist_srl2.png "fig:"){width="0.85\linewidth"}
Comparing upper Lipschitz bounds {#subsec:lipschitz_upper_bounds}
--------------------------------
Figure \[fig:upper\_lipschitz\_bounds\] compares the upper Lipschitz bounds of convolutional layers between the O:SOFTMAX:CE and O:SOFTMAX:MSE models. The upper bounds for the O:SOFTMAX:MSE model are consistently smaller than those of the O:SOFTMAX:CE model across each dataset up to a factor of three, supporting our hypothesis that models trained to minimize mean-squared error are more robust to small perturbations.
![Upper Lipschitz bounds of convolutional layers of the O:SOFTMAX:CE and O:SOFTMAX:MSE models for each dataset.[]{data-label="fig:upper_lipschitz_bounds"}](CIFAR-10_upper_lipschitz_bounds_comparison.png "fig:"){width="0.93\linewidth"} ![Upper Lipschitz bounds of convolutional layers of the O:SOFTMAX:CE and O:SOFTMAX:MSE models for each dataset.[]{data-label="fig:upper_lipschitz_bounds"}](MNIST_upper_lipschitz_bounds_comparison.png "fig:"){width="0.93\linewidth"} ![Upper Lipschitz bounds of convolutional layers of the O:SOFTMAX:CE and O:SOFTMAX:MSE models for each dataset.[]{data-label="fig:upper_lipschitz_bounds"}](F-MNIST_upper_lipschitz_bounds_comparison.png "fig:"){width="0.93\linewidth"}
Related work {#sec:otherelatedwork}
============
Several defenses have also been proposed. To date, the most effective defense technique is adversarial training ([@kurakin2016adversarial], [@wu2018reinforcing], [@sinha2018certifying], [@tramer2017ensemble]), where the model is trained on a mix of clean and adversarial data. This has been shown to provide a regularization effect that makes models more robust towards attacks.
[@papernot2015distillation] proposed defensive distillation, a mechanism whereby a model is trained based on soft labels generated by another ‘teacher’ network in order to prevent overfitting. Other methods include introducing randomness to or applying transformations on the input data and/or the layers of the network ([@guo2017countering], [@dhillon2018stochastic], [@samangouei2018defense], [@xie2017mitigating]). However, [@athalye2018obfuscated] have identified that the apparent robustness of several defenses can be attributed to the introduction of computation and transformations that mask the gradients and thus break existing attacks that rely on gradients to generate adversarial examples. Their work demonstrates that small, tailored modifications to the attacks can circumvent these defenses completely.
Conclusion {#sec:conclusion}
==========
We have reported interesting experimental results demonstrating the adversarial robustness of models that do not follow conventional specifications. We have observed that simply changing the loss function minimized during training can greatly affect the robustness of a neural network against adversarial attacks. Our evaluation strategy is manifold, consisting of existing attacks, new attacks adjusted to our proposed modifications, and a spectral analysis of the model’s parameters. The increase in robustness observed in our experiments suggests the importance of considering alternatives to conventional design choices when making neural networks more secure. Future work will involve further investigation into why such modifications improve the robustness of neural networks.
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. : Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. Generating natural language adversarial examples. , 2018.
Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. , 2018.
Arjun Nitin Bhagoji, Warren He, Bo Li, and Dawn Song. Exploring the space of black-box attacks on deep neural networks. , 2017.
Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. , 2016.
Nicholas Carlini and David Wagner. Audio adversarial examples: Targeted attacks on speech-to-text. , 2018.
Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, and Nicolas Usunier. Parseval networks: Improving robustness to adversarial examples. , 2017.
Guneet S Dhillon, Kamyar Azizzadenesheli, Zachary C Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, and Anima Anandkumar. Stochastic activation pruning for robust adversarial defense. , 2018.
Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In [*The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*]{}, 2018.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. , 2014.
Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. Countering adversarial images using input transformations. , 2017.
Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up end-to-end speech recognition. , 2014.
Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. In [*Proceedings of the IEEE conference on computer vision and pattern recognition*]{}, volume 1, page 3, 2017.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. Character-aware neural language models. In [*AAAI*]{}, pages 2741–2749, 2016.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. , 2014.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In [*Advances in neural information processing systems*]{}, pages 1097–1105, 2012.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. , 2016.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. , 2016.
Yann LeCun, L[é]{}on Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. , 86(11):2278–2324, 1998.
Yann LeCun and Corinna Cortes. MNIST handwritten digit database, 2010.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. , 2017.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. , 2013.
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 2574–2582, 2016.
Nicolas Papernot, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, Fartash Faghri, Alexander Matyasko, Karen Hambardzumyan, Yi-Lin Juang, Alexey Kurakin, Ryan Sheatsley, Abhibhav Garg, and Yen-Chen Lin. cleverhans v2.0.0: an adversarial machine learning library, 2017.
Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In [*Security and Privacy (EuroS&P), 2016 IEEE European Symposium on*]{}, pages 372–387. IEEE, 2016.
Nicolas Papernot, Patrick D McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. corr vol. abs/1511.04508 (2015), 2015.
Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-gan: Protecting classifiers against adversarial attacks using generative models. , 2018.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In [*International Conference on Machine Learning*]{}, pages 1889–1897, 2015.
Hanie Sedghi, Vineet Gupta, and Philip M. Long. The singular values of convolutional layers. , abs/1805.10408, 2018.
Aman Sinha, Hongseok Namkoong, and John Duchi. Certifying some distributional robustness with principled adversarial training. 2018.
Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. , 2017.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. , 2013.
Florian Tram[è]{}r, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. , 2017.
A[ä]{}ron Van Den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. In [*SSW*]{}, page 125, 2016.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, [Ł]{}ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In [*Advances in Neural Information Processing Systems*]{}, pages 5998–6008, 2017.
Xi Wu, Uyeong Jang, Jiefeng Chen, Lingjiao Chen, and Somesh Jha. Reinforcing adversarial robustness using model confidence induced by adversarial training. In [*International Conference on Machine Learning*]{}, pages 5330–5338, 2018.
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017.
Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. , 2017.
Hiromu Yakura and Jun Sakuma. Robust audio adversarial example for a physical attack. , 2018.
Zhuolin Yang, Bo Li, Pin-Yu Chen, and Dawn Song. Characterizing audio adversarial examples using temporal dependency. , 2018.
Xiaoyong Yuan, Pan He, Qile Zhu, Rajendra Rana Bhat, and Xiaolin Li. Adversarial examples: Attacks and defenses for deep learning. , 2017.
---
abstract: 'The Vainshtein mechanism is known as an efficient way of screening the fifth force around a matter source in modified gravity. This has been verified mainly in highly symmetric matter configurations. To study how the Vainshtein mechanism works in a less symmetric setup, we numerically solve the scalar field equation around a disk with a hole at its center in the cubic Galileon theory. We find, surprisingly, that the Galileon force is enhanced, rather than suppressed, in the vicinity of the hole. This anti-screening effect is larger for a thinner, less massive disk with a smaller hole. At this stage our setup is only of academic interest and its astrophysical consequences are unclear, but this result implies that the Vainshtein screening mechanism around less symmetric matter configurations is quite nontrivial.'
author:
- Hiromu Ogawa
- Takashi Hiramatsu
- Tsutomu Kobayashi
title: 'Anti-screening of the Galileon force around a disk center hole'
---
Introduction
============
The origin of the current accelerated expansion of the Universe [@Perlmutter:1998np; @Riess:1998cb] is one of the biggest problems in modern physics (see [@Spergel:2003cb; @Eisenstein:2005su; @Abazjian:2008wr; @Ade:2013zuv] for several independent observations). To explain this phenomenon, one basically has to introduce an unknown energy source, i.e., dark energy [@Copeland:2006wr]. The simplest possibility is the cosmological constant, and, with another unknown component, dark matter, the resulting $\Lambda$CDM model is well compatible with observations. This concordance model, however, suffers from the fine-tuning problem of the cosmological constant. The mystery of the accelerated expansion of the Universe thus motivates us to explore possible modifications of general relativity on cosmological scales [@clifton].
A number of theories have been proposed so far as alternatives to the cosmological constant and dark energy. Most of them can be described (effectively) by a scalar-tensor theory, which has a scalar degree of freedom in addition to the two tensor modes. This scalar mediates a new long-range force, i.e., a fifth force. Since any deviation from general relativity is strongly constrained in the solar system [@Bertotti:2003rm; @Will:2014kxa], scalar-tensor theories must possess a mechanism for screening the fifth force in the vicinity of matter sources such as in the solar system.
Several types of screening mechanisms are known so far. The first relies on the potential term of the scalar degree of freedom: the shape of the potential is designed so that the scalar becomes effectively massive in a high density region. This class of models includes the chameleon [@Khoury:2003rn], symmetron [@Hinterbichler:2010es], and dilaton [@Brax:2010gi] mechanisms. The second relies on nonlinear derivative interactions of the scalar field, by which the kinetic term of the scalar becomes effectively large near the source, where its gradient is large, so that the scalar is effectively weakly coupled to matter there. This class can be divided into two subclasses depending on whether first or second derivatives of the scalar field play a crucial role. The former includes the models of [@Burrage:2014uwa; @Brax:2012jr; @Babichev:2009ee] and the latter includes the Galileons [@Nicolis:2008in; @Deffayet:2009wt; @Deffayet:2009mn]. The screening mechanism in this last class of models is called the Vainshtein mechanism [@Vainshtein:1972sx] and has been studied extensively (see Ref. [@Babichev:2013usa] for a review). The Vainshtein mechanism has been investigated [@Kimura:2011dc; @Narikawa:2013pjr; @Koyama:2013paa; @Kase:2013uja] even in the context of the most general scalar-tensor theory with second-order field equations [@Horndeski:1974wa], because this theory can be obtained by generalizing the Galileons [@Deffayet:2011gz; @Kobayashi:2011nu] and the mechanism can thus be implemented naturally. See Refs. [@Kobayashi:2014ida; @Crisostomi:2017lbg; @Langlois:2017dyl; @Dima:2017pwp] for the Vainshtein mechanism (and its partial breaking) in more general scalar-tensor theories that have been developed recently.
Previous works mostly focused on the Vainshtein mechanism around spherical distributions of matter, as a star can be well approximated by a sphere. The authors of [@Bloomfield:2014zfa] have investigated analytically the systems with cylindrical and planar symmetries, and found that screening is weaker in the cylindrically symmetric case and does not occur in the system with planar symmetry. This implies that Vainshtein screening might be sensitive to the shape of the matter distribution. It is, however, difficult in general to study the shape dependence of the Vainshtein mechanism because one has to treat derivative nonlinearities in less symmetric systems. In Ref. [@Hiramatsu:2012xj], a two-body system was investigated numerically and it was shown that the equivalence principle can be violated apparently in such systems. Approximate solutions for slowly rotating stars in the cubic Galileon theory were obtained in Ref. [@Chagoya:2014fza]. As for a dynamical aspect of the Vainshtein mechanism, the emission of scalar modes from a binary system was evaluated in Ref. [@deRham:2012fg]. Very recently the shape dependence of screening in the chameleon theory was addressed numerically in Ref. [@Burrage:2017shh].
In this paper, we consider a disk with a hole at its center as a source and solve the Galileon field equation fully numerically in order to address the consequence of nonlinear derivative interactions in a less symmetric system. We only study the cubic Galileons for simplicity. A similar system in a different theory of modified gravity has been considered in Refs. [@Davis:2014tea; @Davis:2016avf], where scalar field profiles around a black hole accretion disk have been investigated in the context of the chameleon theory.
The outline of this paper is as follows. In the next section, we summarize briefly the cubic Galileon theory and describe our numerical setup. We then present our main results in Sec. III. Finally, we draw our conclusions in Sec. IV. In the main text we only consider the Galileon field living in a flat background. To see how our result depends on the background curvature, in Appendix \[appe1\] we give a numerical result in the fixed Schwarzschild background. Details of the numerical scheme are given in Appendix \[appe2\].
Basic equations
===============
The cubic Galileon
------------------
We consider the cubic Galileon theory [@Nicolis:2008in; @Deffayet:2009wt] as an example of the model endowed with the Vainshtein mechanism. In the Einstein frame, the cubic Galileon $\phi$ and its coupling to matter are described by the action $$\begin{aligned}
S=\int d^4x \left[-\frac{1}{2}(\partial\phi)^2-\frac{c_3}{M^3}(\partial\phi)^2\Box\phi
+\frac{\beta}{M_{\rm Pl}}\phi T_\mu^{\;\mu}\right],\label{eq:action}\end{aligned}$$ where $c_3$ and $\beta$ are dimensionless parameters, $M_{\rm Pl}$ is the Planck mass, $M$ is another mass scale, and $T_\mu^{\;\mu}$ is the trace of the matter energy-momentum tensor. We assume that matter is non-relativistic, so that $T_\mu^{\;\mu}\simeq -\rho$. Varying the action with respect to $\phi$, we obtain the field equation, $$\begin{aligned}
\Delta\phi +\frac{c_3}{M^3}\left[
(\Delta\phi)^2-\nabla_i\nabla_j\phi\nabla^i\nabla^j\phi
\right]=\frac{\beta}{M_{\rm Pl}}\rho,\label{eq:field}\end{aligned}$$ where we assumed that $\phi$ is static. For a given configuration of matter, one can integrate Eq. (\[eq:field\]) to obtain the profile of $\phi$.
The coupling between matter and the metric fluctuations around the Minkowski spacetime, $h_{\mu\nu}$, is expressed as $(1/2)h_{\mu\nu}T^{\mu\nu}$. This implies that the Jordan frame metric is given by $h^{\rm J}_{\mu\nu}=h_{\mu\nu}+(2\beta/M_{\rm Pl})\phi\eta_{\mu\nu}$, and thus a test particle of mass $m$ feels the fifth force $$\begin{aligned}
\Vec{F}_\phi=-\frac{\beta}{M_{\rm Pl}}m\Vec{\nabla}\phi,\end{aligned}$$ in addition to the usual gravitational force $\Vec{F}_{\rm grav}=(m/2)\Vec{\nabla}h_{00}$.
Let us consider the profile of $\phi$ around a spherical matter configuration. In this case, using the spherical coordinates it is easy to get $$\begin{aligned}
\Vec{\nabla}\phi = \frac{M^3}{4c_3}r\left(
-1+\sqrt{1+\frac{2c_3\beta}{\pi M^3M_{\rm Pl}}\frac{{\cal M}}{r^3}}
\right)\Vec{e}_r,\end{aligned}$$ where $\Vec{e}_r$ is the radial unit vector and ${\cal M}$ is the mass of the spherical body. For $r\gg r_{\rm V}:=(2c_3\beta{\cal M}/\pi M^3 M_{\rm Pl})^{1/3}$, we have $|\Vec{F}_\phi|={\cal O}(\beta^2|\Vec{F}_{\rm grav}|)$, implying that the fifth force is as large as the usual gravitational force if $\beta={\cal O}(1)$. However, for $r\ll r_{\rm V}$ we find that $|\Vec{F}_\phi|=(r/r_{\rm V})^{3/2}{\cal O}(\beta^2|\Vec{F}_{\rm grav}|)\ll {\cal O}(|\Vec{F}_{\rm grav}|)$, and thus the fifth force is screened in the vicinity of the body. This is the Vainshtein mechanism. For this to happen the non-linear term in Eq. (\[eq:field\]) plays a crucial role. In order to pass laboratory and solar-system tests of gravity, $M^3/c_3$ must be sufficiently small so that $r_{\rm V}$ is sufficiently large.
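For completeness, this radial profile follows from a standard first integral (a brief sketch, restating the known derivation rather than adding anything new): for spherically symmetric $\phi(r)$, Eq. (\[eq:field\]) can be written as a total derivative and integrated once over a ball enclosing the source, $$\begin{aligned}
\frac{1}{r^2}\frac{d}{dr}\left[r^2\phi' + \frac{2c_3}{M^3}\,r\,(\phi')^2\right]=\frac{\beta}{M_{\rm Pl}}\rho
\quad\Longrightarrow\quad
r^2\phi' + \frac{2c_3}{M^3}\,r\,(\phi')^2=\frac{\beta{\cal M}}{4\pi M_{\rm Pl}},\end{aligned}$$ and solving this quadratic equation for $\phi'$, keeping the root that remains finite as $c_3\to 0$, reproduces the expression above.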
So far, successful Vainshtein screening has been confirmed mainly for spherically symmetric configurations. The Vainshtein mechanism in systems with planar and cylindrical symmetry has been investigated in Ref. [@Bloomfield:2014zfa], and it was found that the screening of the fifth force is sensitive to the shape of the matter distribution. Only in such highly symmetric cases can the non-linear equation (\[eq:field\]) be integrated analytically, and one has to employ numerical methods in general. In Ref. [@Hiramatsu:2012xj] the Galileon field equation was integrated numerically for a two-body system. In the present paper, we examine the profile of the Galileon field around a matter distribution that has not been investigated previously, i.e., a disk with a hole.
Numerical setup
---------------
Specifically, we model the system by the following uniform density profile $$\begin{aligned}
\rho(r,\theta) & =\rho_{0} U(r-r_{1})U(r_{2}-r)U(\theta_{0}-\theta)U(\theta_0+\theta),
\\
\rho_0 & ={\rm const},\end{aligned}$$ where $U$ is the Heaviside function, with $r_1$, $r_2$, and $\theta_0$ being the inner radius, the outer radius, and the (half of the) opening angle of the disk, respectively (Fig. \[fig:fig1.eps\]). Note that here we are using the spherical coordinates whose definition is slightly different from the usual one, $x=r\cos\theta\cos\varphi$, $y=r\cos\theta\sin\varphi$, $z=r\sin\theta$, with $-\pi/2\le \theta\le \pi/2$.
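On a numerical grid, this profile can be evaluated simply as in the following sketch (Python; the production solver is described in Appendix \[appe2\]):

```python
import numpy as np

def density(r, theta, rho0, r1, r2, theta0):
    # Uniform disk with a central hole, in the modified spherical coordinates
    # where theta is measured from the equatorial plane (|theta| <= pi/2).
    inside = (r >= r1) & (r <= r2) & (np.abs(theta) <= theta0)
    return np.where(inside, rho0, 0.0)
```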
To implement numerical integration, we introduce the following dimensionless quantities: $$\begin{aligned}
\bar{\phi}:=\frac{\phi}{M^{3}r_{0}^{2}}, \quad \bar{r}:=\frac{r}{r_{0}},
\quad \mu:= \frac{\beta \rho_{0}}{M^{3}M_{{\rm Pl}}},\end{aligned}$$ where $r_0$ is some arbitrary length scale and $\mu$ is the parameter that corresponds to the coupling between matter and the Galileon for fixed $\rho_0$. At a sufficiently large distance from the disk object, it can be regarded as a point particle and hence we have $\bar\phi\sim \mu /\bar r^2$. Therefore, it can be said that $\mu$ controls the nonlinearity of the scalar field. We rewrite Eq. (\[eq:field\]) in terms of the above variables assuming that $\phi$ is axisymmetric.
The boundary conditions we impose are given by $$\begin{aligned}
& \left.\frac{{\partial}\bar{\phi}}{{\partial}\bar{r}}\right|_{\bar{r}=0}=0,\label{eq:regularity}
\\
& \left.\frac{{\partial}\bar{\phi}}{{\partial}\theta}\right|_{\theta=0}=\left.\frac{{\partial}\bar{\phi}}{{\partial}\theta}\right|_{\theta=\pi/2}=0,\label{eq:simmetry}
\\
& \;\bar{\phi}(\bar{r}_{{\rm max}},\theta)=0,\label{eq:galileon}\end{aligned}$$ where $\bar r_{\rm max}:=r_{\rm max}/r_0$ corresponds to the boundary of the computational domain. The boundary condition (\[eq:regularity\]) amounts to the regularity at the center, while the condition (\[eq:simmetry\]) reflects the symmetry of the system. Since the field equation is invariant under the constant shift of the scalar field, $\phi\to \phi + c$, we may impose the boundary condition (\[eq:galileon\]) without loss of generality.
One may naively expect that the derivative nonlinearity of the Galileon field is large for $r\lesssim (c_3 \beta \rho_0 V/M^3M_{\rm Pl})^{1/3}$, where $V$ is the volume of the massive object. If we roughly take $r_1\sim r_2$, we can estimate $V$ as $V\sim r_2^3\theta_0$. Thus, in terms of the dimensionless variables, we see that the nonlinear effect is large for $\bar{r}\lesssim (c_3\mu \theta_0)^{1/3}\bar{r}_2$.
![A disk object with a hole in spherical coordinates.[]{data-label="fig:fig1.eps"}](fig1.png){width="80mm"}
Numerical Results
=================
We now present our numerical solutions to Eq. (\[eq:field\]). We fix $\bar{r}_2= 30$ and $\bar{r}_{\rm max}=80$, and perform numerical calculations for different values of $r_1$, $\theta_0$, and $\mu$. The number of grid points is 200 in the $r$ direction and 100 in the $\theta$ direction. The details of the numerical computation are described in Appendix \[appe2\].
In Fig. \[fig:vector1\] we show a vector plot of the dimensionless force field $-(M^3r_0)^{-1}\Vec{\nabla}\phi$ for $c_3=1$, $\bar{r}_1=8$, $\theta_0=0.05$, and $\mu=36.8$. In order to clarify the effect of the nonlinear terms in Eq. (\[eq:field\]), we also calculated the force field with the same parameters, but with $c_3=0$. The result is also presented in Fig. \[fig:vector1\] for comparison. It can be seen that in the $c_3=1$ case the fifth force is suppressed compared to the $c_3=0$ case in almost every region, as expected. This is clear in particular for $\bar{r}\gtrsim 20$ around the disk. However, surprisingly enough, the nonlinear effect [*enhances*]{}, rather than suppresses, the fifth force in the vicinity of the hole.
To quantify this anti-screening effect, we introduce the following scalar quantity, $$\begin{aligned}
\mathcal{R}=\frac{|\Vec{\nabla} \phi|_{c_{3}=1}}{|\Vec{\nabla} \phi|_{c_{3}=0}}.
\label{eq:def_R}\end{aligned}$$ We may say that screening is successful if ${\cal R}<1$. Figure \[fig:main\_res\] shows ${\cal R}$ for the above case, which clearly indicates that the fifth force is enhanced near the hole.
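In practice, ${\cal R}$ can be evaluated directly from the two solved field configurations. The sketch below is our own illustration (not the analysis code of this work); it assumes axisymmetric fields `phi_c3` and `phi_lin` stored on the same $(\bar{r},\theta)$ grid.

```python
import numpy as np

def force_magnitude(phi, r_bar, theta):
    """|grad phi| for an axisymmetric field phi[r, theta] in the spherical-type coordinates used here."""
    dphi_dr = np.gradient(phi, r_bar, axis=0)
    dphi_dth = np.gradient(phi, theta, axis=1)
    r = np.where(r_bar[:, None] == 0.0, np.inf, r_bar[:, None])  # avoid dividing by zero at r = 0
    return np.sqrt(dphi_dr**2 + (dphi_dth / r)**2)

def screening_ratio(phi_c3, phi_lin, r_bar, theta):
    """R = |grad phi|_{c3=1} / |grad phi|_{c3=0}; screening corresponds to R < 1."""
    return force_magnitude(phi_c3, r_bar, theta) / force_magnitude(phi_lin, r_bar, theta)
```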
![Dimensionless force fields for $\bar{r}_1=8$, $\theta_0=0.05$, and $\mu=36.8$, with $c_3=1$ (left) and $c_3=0$ (right). The thin black region represents the disk. []{data-label="fig:vector1"}](m69a.png "fig:"){width="8cm"} ![Dimensionless force fields for $\bar{r}_1=8$, $\theta_0=0.05$, and $\mu=36.8$, with $c_3=1$ (left) and $c_3=0$ (right). The thin black region represents the disk. []{data-label="fig:vector1"}](m70a.png "fig:"){width="8cm"}
![2D plot of the degree of (anti-)screening ${\cal R}$ for the case shown in Fig. \[fig:vector1\] (left) and ${\cal R}$ along $\theta = 2\pi/5$ as a function of $\bar{r}$ (right). []{data-label="fig:main_res"}](plot69a70a.png "fig:"){width="9.5cm"} ![2D plot of the degree of (anti-)screening ${\cal R}$ for the case shown in Fig. \[fig:vector1\] (left) and ${\cal R}$ along $\theta = 2\pi/5$ as a function of $\bar{r}$ (right). []{data-label="fig:main_res"}](m69a70aline.png "fig:"){width="8cm"}
To see how the enhancement of the fifth force depends on the parameters, we provide numerical results for different values of $r_1$, $\theta_0$ and $\mu$ in Figs. \[rdepen\]–\[mudepen\]. Figure \[rdepen\] shows ${\cal R}$ for different sizes of the hole, $\bar{r}_1=4$ and $20$, with $c_3$, $\bar{r}_2$, and $\theta_0$ fixed to their previous values and $\mu$ chosen such that the total mass of the disk is unchanged from the previous case. It is found that the fifth force around the hole is stronger for a smaller hole size, as is most clearly seen in the bottom panel of Fig. \[rdepen\]. Figure \[thetadepen\] shows the dependence of the enhancement effect on the thickness of the disk. We see that for smaller $\theta_0$, i.e., for a thinner disk, the fifth force around the hole is stronger. Finally, we see from Fig. \[mudepen\] how increasing $\mu$ changes the result with the other parameters fixed. For larger $\mu$, the enhancement of the Galileon force is less evident. This is because larger $\mu$ implies that the disk is (effectively) denser or more massive, and thus the screening effect from the disk itself is more efficient. To sum up, the anti-screening effect is larger for a thinner, less massive disk with a smaller hole.
![$\mathcal{R}$ for $r_{1}/r_0=4$ (top left) and $r_{1}/r_0=20$ (top right). The other parameters are the same as in the previous plots. ${\cal R}$ along $\theta = 2\pi/5$ as a function of $\bar{r}$ is also shown (bottom). []{data-label="rdepen"}](plot83a84a.png "fig:"){width="8.5cm"} ![$\mathcal{R}$ for $r_{1}/r_0=4$ (top left) and $r_{1}/r_0=20$ (top right). The other parameters are the same as in the previous plots. ${\cal R}$ along $\theta = 2\pi/5$ as a function of $\bar{r}$ is also shown (bottom). []{data-label="rdepen"}](plot89a90a.png "fig:"){width="8.5cm"} ![$\mathcal{R}$ for $r_{1}/r_0=4$ (top left) and $r_{1}/r_0=20$ (top right). The other parameters are the same as in the previous plots. ${\cal R}$ along $\theta = 2\pi/5$ as a function of $\bar{r}$ is also shown (bottom). []{data-label="rdepen"}](r1ratio.png "fig:"){width="9cm"}
![$\mathcal{R}$ for $\theta_{0}=0.1$ (top left) and $\theta_{0}=0.2$ (top right). ${\cal R}$ along $\theta = 2\pi/5$ as a function of $\bar{r}$ is also shown (bottom).[]{data-label="thetadepen"}](plot97a98a.png "fig:"){width="8.5cm"} ![$\mathcal{R}$ for $\theta_{0}=0.1$ (top left) and $\theta_{0}=0.2$ (top right). ${\cal R}$ along $\theta = 2\pi/5$ as a function of $\bar{r}$ is also shown (bottom).[]{data-label="thetadepen"}](plot99a100a.png "fig:"){width="8.5cm"} ![$\mathcal{R}$ for $\theta_{0}=0.1$ (top left) and $\theta_{0}=0.2$ (top right). ${\cal R}$ along $\theta = 2\pi/5$ as a function of $\bar{r}$ is also shown (bottom).[]{data-label="thetadepen"}](thetaratio.png "fig:"){width="9cm"}
![$\mathcal{R}$ for $\mu=369$ (left) and $\mu=3690$ (right). ${\cal R}$ along $\theta = 2\pi/5$ as a function of $\bar{r}$ is also shown (bottom).[]{data-label="mudepen"}](mu360.png "fig:"){width="8.5cm"} ![$\mathcal{R}$ for $\mu=369$ (left) and $\mu=3690$ (right). ${\cal R}$ along $\theta = 2\pi/5$ as a function of $\bar{r}$ is also shown (bottom).[]{data-label="mudepen"}](mu3600.png "fig:"){width="8.5cm"} ![$\mathcal{R}$ for $\mu=369$ (left) and $\mu=3690$ (right). ${\cal R}$ along $\theta = 2\pi/5$ as a function of $\bar{r}$ is also shown (bottom).[]{data-label="mudepen"}](muratio.png "fig:"){width="9cm"}
Discussion
==========
In this paper, we have numerically studied the fifth force around a disk with a hole at its center in the cubic Galileon theory. It is known that Vainshtein screening does not work for infinite planar sources [@Bloomfield:2014zfa]. Since our source is thin but finite, screening still occurs in almost every region around the disk. However, we have found that the hole at the center has an unexpected consequence: the Galileon force is not suppressed but enhanced in the vicinity of the hole, i.e., anti-screening operates. The anti-screening found in this paper occurs in the region where the nonlinearity of the Galileon field is dominant and the matter configuration is less symmetric. Due to this complexity, we have not yet arrived at an analytic understanding of our result.
Some of the parameters we have used in our numerical calculations might not be realistic. In particular, we have seen that we need $\mu \lesssim 10^3$ in order for the force to be enhanced. For larger $\mu$, the effect of anti-screening is washed out by the screening effect from the disk in the present setup. If the Galileon field is responsible for the current cosmic acceleration, one would expect $M^3\sim M_{\rm Pl} H_0^2 \sim \bar\rho/M_{\rm Pl}$, where $H_0$ is the present Hubble parameter and $\bar\rho$ is the average energy density of the Universe. The energy density of our disk is thus given by $\rho_0\sim \mu\bar\rho$, assuming that $\beta={\cal O}(1)$. Our numerical calculations therefore correspond to a very low-density matter distribution. Hence, at this stage it is difficult to derive direct implications of our results for astrophysics and experiments. Nevertheless, we believe that it is interesting to further explore how the (anti-)screening mechanism operates for nontrivial matter configurations, and the present work provides a first step toward understanding this complicated problem.
We thank Christos Charmousis, A. Emir Gümrükçüoğlu, Tomohiro Harada, and Kazuya Koyama for useful comments. H.O. also thanks Kazuya Koyama for his kind hospitality at University of Portsmouth where part of this work was done. This work was supported in part by the Rikkyo University Special Fund for Research (H.O.), JSPS Overseas Challenge Program for Young Researchers (H.O.), the JSPS Grants-in-Aid for Scientific Research Nos. 16K17695 (T.H.), 16H01102, and 16K17707 (T.K), MEXT-Supported Program for the Strategic Research Foundation at Private Universities, 2014-2017 (S1411024) (T.H. and T.K.), and MEXT KAKENHI Grant Nos. 15H05888 (T.K.) and 17H06359 (T.K.).
Scalar-field profile in Schwarzschild geometry {#appe1}
==============================================
In the main text we have solved the Galileon field equation in the flat background. In order to see the scalar-field profile in the curved background, let us consider the covariant version of Eq. (\[eq:field\]) in a [*fixed*]{} background: $$\begin{aligned}
\Box\phi +\frac{c_3}{M^3}\left[
(\Box\phi)^2-\nabla_\mu\nabla_\nu\phi\nabla^\mu\nabla^\nu\phi
-R^{\mu\nu}\nabla_\mu\phi\nabla_\nu\phi
\right] = \frac{\beta}{M_{\rm Pl}}\rho,
\label{eq:curved}\end{aligned}$$ where $\nabla_\mu$ is the covariant derivative with respect to $g_{\mu\nu}$, $R_{\mu\nu}$ is the Ricci tensor, and we take $$\begin{aligned}
g_{\mu\nu}dx^\mu dx^\nu=-\left(1-\frac{r_g}{r}\right)dt^2 +
\left(1-\frac{r_g}{r}\right)^{-1}dr^2 +r^2d \Omega_2^2,\end{aligned}$$ with $r_g=r_0$. We solve Eq. (\[eq:curved\]) numerically for the matter configuration with $\bar{r}_1=8$, $\theta_0=0.05$, and $\mu=36.8$. The resulting ${\cal R}$, shown in Fig. \[fig:BH\], should be compared with the flat-background result in Fig. \[fig:main\_res\]. It can be seen that the two profiles are not very different. We thus conclude that the background curvature does not change the anti-screening result.
![$\mathcal{R}$ in the Schwarzschild background. This should be compared with the left panel of Fig. \[fig:main\_res\]. []{data-label="fig:BH"}](blackhole830005.png){width="9.5cm"}
Numerical scheme and convergence of results {#appe2}
===========================================
Throughout this paper, we employed the numerical scheme developed in Ref. [@Hiramatsu:2012xj] to solve the field equation (\[eq:field\]). Basically, we regard the non-linear terms of Eq. (\[eq:field\]), the terms proportional to $c_3$, as the extra source term such that $$\begin{aligned}
\triangle\phi = \frac{\beta}{M_{\rm Pl}}\rho - \frac{c_3}{M^3}N[\phi].\end{aligned}$$ In the first step, we solve the linear equation setting $N[\phi]=0$ and obtain the solution $\phi_*$. Then we update $\phi$ in the following manner: $$\begin{aligned}
\phi_{\rm new}(r,\theta) = (1-\omega)\phi_{\rm old}(r,\theta) + \omega\phi_*(r,\theta),\end{aligned}$$ with a mixing parameter $\omega=\mathcal{O}(0.01)$. In the next step, evaluating $N[\phi_{\rm new}]$, we solve the field equation again, and $\phi$ is further updated. This iteration procedure is terminated when the update of $\phi$ is well suppressed, namely, $$\begin{aligned}
\frac{||\phi_{\rm new}-\phi_{\rm old}||}{||\phi_{\rm new}||} < 10^{-8},\end{aligned}$$ where the norm $||\phi||$ is defined as $||\phi||:=\sqrt{\sum_{ij}\phi(r_i,\theta_j)^2}$. Note that this iteration scheme does not work unless the parameter $\omega$ is small, since the non-linear term $N[\phi]$ induces a large change in the field configuration. For details, see Ref. [@Hiramatsu:2012xj].
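A minimal sketch of this iteration loop is given below. It is not the solver of Ref. [@Hiramatsu:2012xj]; `solve_poisson` (a linear solver implementing the boundary conditions (\[eq:regularity\])–(\[eq:galileon\])) and `nonlinear_term` (the terms $N[\phi]$) are placeholders that would have to be supplied.

```python
import numpy as np

def relax(source, solve_poisson, nonlinear_term, c3=1.0,
          omega=0.01, tol=1e-8, max_iter=100000):
    """Under-relaxed iteration described above; solve_poisson and nonlinear_term
    are assumed placeholders, not provided here."""
    phi_old = solve_poisson(source)                       # first step: N[phi] = 0
    for _ in range(max_iter):
        phi_star = solve_poisson(source - c3 * nonlinear_term(phi_old))
        phi_new = (1.0 - omega) * phi_old + omega * phi_star
        # termination criterion quoted in the text
        if np.linalg.norm(phi_new - phi_old) / np.linalg.norm(phi_new) < tol:
            return phi_new
        phi_old = phi_new
    raise RuntimeError("iteration did not converge")
```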
The field equation solved in this paper, given in Eq. (\[eq:field\]), is highly non-linear, and thus we should confirm that our numerical results are reliable, i.e., well converged under the iteration scheme described above. To check this, we solve the field equation varying the numbers of grid points in the $(r,\theta)$ coordinates, $N_r$ and $N_\theta$, and the position of the outer radial boundary, $\bar{r}_{\rm max}$. The fiducial values in this paper are $N_r=200$, $N_\theta=100$ and $\bar{r}_{\rm max}=80$. We do not vary the other numerical parameters since they control only the convergence speed and the precision, and thus do not affect the final results.
Figure \[fig:convcheck\] shows $\mathcal{R}$ evaluated at $\theta = 2\pi/5$. From the left panel, we find that our result is insensitive to the size of the computational box, which means that artificial effects from the boundary at $r=r_{\rm max}$ do not affect the result. The remaining panels show the dependence of $\mathcal{R}$ on the numbers of grid points, $N_r$ (center panel) and $N_\theta$ (right panel). While the detailed structure of the peak around $\bar{r}\sim 1$ is sensitive to the spatial resolution, the fact that $\mathcal{R}$ can be larger than unity for $\bar{r}\lesssim 8$ is confirmed to be robust.
![The convergence check of numerical results; the dependence on $\bar{r}_{\rm max}$ (left), on $N_r$ (middle) and on $N_\theta$ (right).[]{data-label="fig:convcheck"}](plot_m69ad.png "fig:"){width="5.4cm"} ![The convergence check of numerical results; the dependence on $\bar{r}_{\rm max}$ (left), on $N_r$ (middle) and on $N_\theta$ (right).[]{data-label="fig:convcheck"}](plot_m69ach.png "fig:"){width="5cm"} ![The convergence check of numerical results; the dependence on $\bar{r}_{\rm max}$ (left), on $N_r$ (middle) and on $N_\theta$ (right).[]{data-label="fig:convcheck"}](plot_m69abef.png "fig:"){width="5cm"}
[99]{}
S. Perlmutter [*et al.*]{} \[Supernova Cosmology Project Collaboration\], “Measurements of Omega and Lambda from 42 high redshift supernovae,” Astrophys. J. [**517**]{}, 565 (1999) \[astro-ph/9812133\].
A. G. Riess [*et al.*]{} \[Supernova Search Team\], “Observational evidence from supernovae for an accelerating universe and a cosmological constant,” Astron. J. [**116**]{}, 1009 (1998) \[astro-ph/9805201\].
D. N. Spergel [*et al.*]{} \[WMAP Collaboration\], “First year Wilkinson Microwave Anisotropy Probe (WMAP) observations: Determination of cosmological parameters,” Astrophys. J. Suppl. [**148**]{}, 175 (2003) \[astro-ph/0302209\].
D. J. Eisenstein [*et al.*]{} \[SDSS Collaboration\], “Detection of the Baryon Acoustic Peak in the Large-Scale Correlation Function of SDSS Luminous Red Galaxies,” Astrophys. J. [**633**]{}, 560 (2005) \[astro-ph/0501171\].
K. N. Abazajian [*et al.*]{} \[SDSS Collaboration\], “The Seventh Data Release of the Sloan Digital Sky Survey,” Astrophys. J. Suppl. [**182**]{}, 543 (2009) \[arXiv:0812.0649 \[astro-ph\]\].
P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], “Planck 2013 results. XVI. Cosmological parameters,” Astron. Astrophys. [**571**]{}, A16 (2014) \[arXiv:1303.5076 \[astro-ph.CO\]\].
See, [*e.g.*]{}, E. J. Copeland, M. Sami and S. Tsujikawa, “Dynamics of dark energy,” Int. J. Mod. Phys. D [**15**]{}, 1753 (2006), \[hep-th/0603057\].
See, [*e.g.*]{}, T. Clifton, P. G. Ferreira, A. Padilla and C. Skordis, “Modified gravity and cosmology,” Phys. Rep., [**513**]{}, 1 (2012), \[arXiv:1106.2476 \[astro-ph.CO\]\].
C. M. Will, “The Confrontation between General Relativity and Experiment,” Living Rev. Rel. [**17**]{}, 4 (2014) \[arXiv:1403.7377 \[gr-qc\]\].
B. Bertotti, L. Iess and P. Tortora, “A test of general relativity using radio links with the Cassini spacecraft,” Nature [**425**]{}, 374 (2003).
J. Khoury and A. Weltman, “Chameleon cosmology,” Phys. Rev. D [**69**]{}, 044026 (2004) \[astro-ph/0309411\].
K. Hinterbichler and J. Khoury, “Symmetron Fields: Screening Long-Range Forces Through Local Symmetry Restoration,” Phys. Rev. Lett. [**104**]{}, 231301 (2010) \[arXiv:1001.4525 \[hep-th\]\].
P. Brax, C. van de Bruck, A. C. Davis and D. Shaw, “The Dilaton and Modified Gravity,” Phys. Rev. D [**82**]{}, 063519 (2010) \[arXiv:1005.3735 \[astro-ph.CO\]\]. C. Burrage and J. Khoury, “Screening of scalar fields in Dirac-Born-Infeld theory,” Phys. Rev. D [**90**]{}, no. 2, 024001 (2014) \[arXiv:1403.6120 \[hep-th\]\].
E. Babichev, C. Deffayet and R. Ziour, “k-Mouflage gravity,” Int. J. Mod. Phys. D [**18**]{}, 2147 (2009) \[arXiv:0905.2943 \[hep-th\]\].
P. Brax, C. Burrage and A. C. Davis, “Screening fifth forces in k-essence and DBI models,” JCAP [**1301**]{}, 020 (2013) \[arXiv:1209.1293 \[hep-th\]\].
A. Nicolis, R. Rattazzi and E. Trincherini, “The Galileon as a local modification of gravity,” Phys. Rev. D [**79**]{}, 064036 (2009) \[arXiv:0811.2197 \[hep-th\]\].
C. Deffayet, G. Esposito-Farese and A. Vikman, “Covariant Galileon,” Phys. Rev. D [**79**]{}, 084003 (2009) \[arXiv:0901.1314 \[hep-th\]\].
C. Deffayet, S. Deser and G. Esposito-Farese, “Generalized Galileons: All scalar models whose curved background extensions maintain second-order field equations and stress-tensors,” Phys. Rev. D [**80**]{}, 064015 (2009) \[arXiv:0906.1967 \[gr-qc\]\].
A. I. Vainshtein, “To the problem of nonvanishing gravitation mass,” Phys. Lett. [**39B**]{}, 393 (1972). E. Babichev and C. Deffayet, “An introduction to the Vainshtein mechanism,” Class. Quant. Grav. [**30**]{}, 184001 (2013) \[arXiv:1304.7240 \[gr-qc\]\].
R. Kimura, T. Kobayashi and K. Yamamoto, “Vainshtein screening in a cosmological background in the most general second-order scalar-tensor theory,” Phys. Rev. D [**85**]{}, 024023 (2012) \[arXiv:1111.6749 \[astro-ph.CO\]\].
T. Narikawa, T. Kobayashi, D. Yamauchi and R. Saito, “Testing general scalar-tensor gravity and massive gravity with cluster lensing,” Phys. Rev. D [**87**]{}, 124006 (2013) \[arXiv:1302.2311 \[astro-ph.CO\]\].
K. Koyama, G. Niz and G. Tasinato, “Effective theory for the Vainshtein mechanism from the Horndeski action,” Phys. Rev. D [**88**]{}, 021502 (2013) \[arXiv:1305.0279 \[hep-th\]\].
R. Kase and S. Tsujikawa, “Screening the fifth force in the Horndeski’s most general scalar-tensor theories,” JCAP [**1308**]{}, 054 (2013) \[arXiv:1306.6401 \[gr-qc\]\].
G. W. Horndeski, “Second-order scalar-tensor field equations in a four-dimensional space,” Int. J. Theor. Phys. [**10**]{}, 363 (1974). C. Deffayet, X. Gao, D. A. Steer and G. Zahariade, “From k-essence to generalised Galileons,” Phys. Rev. D [**84**]{}, 064039 (2011) \[arXiv:1103.3260 \[hep-th\]\].
T. Kobayashi, M. Yamaguchi and J. Yokoyama, “Generalized G-inflation: Inflation with the most general second-order field equations,” Prog. Theor. Phys. [**126**]{}, 511 (2011) \[arXiv:1105.5723 \[hep-th\]\].
T. Kobayashi, Y. Watanabe and D. Yamauchi, “Breaking of Vainshtein screening in scalar-tensor theories beyond Horndeski,” Phys. Rev. D [**91**]{}, no. 6, 064013 (2015) \[arXiv:1411.4130 \[gr-qc\]\].
M. Crisostomi and K. Koyama, “Vainshtein mechanism after GW170817,” Phys. Rev. D [**97**]{}, no. 2, 021301 (2018) \[arXiv:1711.06661 \[astro-ph.CO\]\].
D. Langlois, R. Saito, D. Yamauchi and K. Noui, “Scalar-tensor theories and modified gravity in the wake of GW170817,” arXiv:1711.07403 \[gr-qc\].
A. Dima and F. Vernizzi, “Vainshtein Screening in Scalar-Tensor Theories before and after GW170817: Constraints on Theories beyond Horndeski,” arXiv:1712.04731 \[gr-qc\].
J. K. Bloomfield, C. Burrage and A. C. Davis, “Shape dependence of Vainshtein screening,” Phys. Rev. D [**91**]{}, no. 8, 083510 (2015) \[arXiv:1408.4759 \[gr-qc\]\].
T. Hiramatsu, W. Hu, K. Koyama and F. Schmidt, “Equivalence Principle Violation in Vainshtein Screened Two-Body Systems,” Phys. Rev. D [**87**]{}, no. 6, 063525 (2013) \[arXiv:1209.3364 \[hep-th\]\].
J. Chagoya, K. Koyama, G. Niz and G. Tasinato, “Galileons and strong gravity,” JCAP [**1410**]{}, no. 10, 055 (2014) \[arXiv:1407.7744 \[hep-th\]\].
C. A. R. Herdeiro and E. Radu, “Asymptotically flat black holes with scalar hair: a review,” Int. J. Mod. Phys. D [**24**]{}, no. 09, 1542014 (2015) \[arXiv:1504.08209 \[gr-qc\]\].
C. Deffayet, O. Pujolas, I. Sawicki and A. Vikman, “Imperfect Dark Energy from Kinetic Gravity Braiding,” JCAP [**1010**]{}, 026 (2010) \[arXiv:1008.0048 \[hep-th\]\].
A. Barreira, B. Li, W. A. Hellwing, C. M. Baugh and S. Pascoli, “Nonlinear structure formation in the Cubic Galileon gravity model,” JCAP [**1310**]{}, 027 (2013) \[arXiv:1306.3219 \[astro-ph.CO\]\].
C. de Rham, A. Matas and A. J. Tolley, “Galileon Radiation from Binary Systems,” Phys. Rev. D [**87**]{}, no. 6, 064024 (2013) \[arXiv:1212.5212 \[hep-th\]\].
C. Burrage, E. J. Copeland, A. Moss and J. A. Stevenson, “The shape dependence of chameleon screening,” arXiv:1711.02065 \[astro-ph.CO\].
A. C. Davis, R. Gregory, R. Jha and J. Muir, “Astrophysical black holes in screened modified gravity,” JCAP [**1408**]{}, 033 (2014) \[arXiv:1402.4737 \[astro-ph.CO\]\].
A. C. Davis, R. Gregory and R. Jha, “Black hole accretion discs and screened scalar hair,” JCAP [**1610**]{}, no. 10, 024 (2016) \[arXiv:1607.08607 \[gr-qc\]\].
---
abstract: 'We examine the possibility of the rolling tachyon playing the dual role of inflaton at early epochs and dark matter at late times. We argue that enough inflation can be generated with the rolling tachyon either by invoking a large number of branes or by brane-world assisted inflation. However, reheating is problematic in this model.'
author:
- |
M. Sami\
Inter-University Centre for Astronomy and Astrophysics,\
Post Bag 4, Ganeshkhind, Pune-411 007, INDIA.[^1]\
Pravabati Chingangbam and Tabish Qureshi\
Department of Physics, Jamia Millia Islamia,\
New Delhi-110025, INDIA
title: COSMOLOGICAL ASPECTS OF ROLLING TACHYON
---
Introduction
============
Cosmological inflation has become an integral part of the standard model of the universe. Apart from being capable of removing the shortcomings of standard cosmology, it gives important clues for structure formation in the universe. The inflationary paradigm has gained a fair amount of support from recent observations of the microwave background radiation. On the other hand, there have been difficulties in obtaining accelerated expansion from fundamental theories such as M/string theory. Recently, Sen [@sen1; @sen2; @sen3] has shown that the decay of an unstable D-brane produces a pressureless gas with finite energy density that resembles classical dust. Gibbons has emphasized the cosmological implications of a tachyonic condensate rolling towards its ground state [@gibbons]; see Refs. [@g2; @kim] for further details. Rolling tachyon matter associated with unstable D-branes has an interesting equation of state which smoothly interpolates between $-1$ and $0$. The tachyonic matter, therefore, might provide an explanation for inflation at early epochs and could contribute to some new form of dark matter at late times [@shiu]; see also Refs. [@sen4; @related; @topics; @quvedo] on a related theme and Ref. [@matos] for an alternative approach to rolling tachyon cosmology. We shall review here the cosmological prospects of the rolling tachyon with an exponential potential.
COSMOLOGY WITH ROLLING TACHYON
==============================
It was recently shown by Sen that the dynamics of string tachyons in the background of an unstable D-brane can be described by an effective field theory with Born-Infeld type action[@sen3] $$S=\int{d^4x \sqrt{-g}\left({R \over {16\pi G}}- V(\phi) \sqrt{1+g^{\alpha \beta} \partial_\alpha \phi \partial_\beta \phi} \right)}
\label{action}$$ where $\phi$ is the tachyon field minimally coupled to gravity. In a spatially flat FRW cosmology the stress tensor acquires the diagonal form $T^{\alpha}_{\beta}={\rm diag}(-\rho,p,p,p)$, where the energy density and pressure are given by $$\rho={V(\phi) \over {\sqrt{1-\dot{\phi}^2}}}$$ $$p=-V(\phi)\sqrt{1-\dot{\phi}^2}$$ The Friedmann equation takes the form $$H^2={1 \over 3M_p^2} \rho \equiv {1 \over 3M_p^2}{V(\phi) \over {\sqrt{1-\dot{\phi}^2}}}
\label{friedman}$$ The equation of motion of the tachyon field which follows from (\[action\]) is $${\ddot{\phi} \over {1-\dot{\phi}^2}}+3H \dot{\phi}+{V_{,\phi} \over V({\phi})}=0
\label{evolution eq}$$ The conservation equation equivalent to (\[evolution eq\]) has the usual form $${\dot{\rho}_{\phi} \over \rho_{\phi}}+3H (1+\omega)=0$$ where $\omega \equiv {p_{\phi} \over \rho_{\phi}}= \dot{\phi}^2-1$ is the equation of state for the tachyon field. Thus a universe dominated by the tachyon field undergoes accelerated expansion as long as $\dot{\phi}^2\ <\ {2 \over 3}$, which is very different from the condition for inflation with a non-tachyonic field, $\dot{\phi}^2\ <\ V(\phi)$. This is related to the fact that the field potential drops out of the equation of state for the tachyon field. Note also that the evolution equation for the tachyon field contains the logarithmic derivative of the potential.
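The system above is straightforward to integrate numerically. The following sketch (ours, with $M_p=1$ and arbitrary illustrative parameters) evolves an exponential potential of the type considered in the next subsection and tracks the equation of state $\omega=\dot{\phi}^2-1$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative integration (ours) of the tachyon equations above, with M_p = 1 and an
# exponential potential of the type considered below; V0 and alpha are arbitrary here.
V0, alpha = 1.0, 0.5
V = lambda p: V0 * np.exp(-alpha * p)
dlnV = lambda p: -alpha                     # V_{,phi} / V

def rhs(t, y):
    phi, phidot, lna = y
    gamma = max(1.0 - phidot**2, 1e-12)     # guard against slight numerical overshoot of phidot -> 1
    H = np.sqrt(V(phi) / (3.0 * np.sqrt(gamma)))
    phiddot = -gamma * (3.0 * H * phidot + dlnV(phi))
    return [phidot, phiddot, H]

sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 1e-3, 0.0], rtol=1e-8, atol=1e-10)
w_eos = sol.y[1]**2 - 1.0                   # starts near -1 and grows towards 0 at late times
```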
DYNAMICS OF TACHYONIC INFLATION IN FRW COSMOLOGY
------------------------------------------------
The tachyon potential $V(\phi) \rightarrow 0$ as $\phi \rightarrow \infty$, but its exact form is not known at present [@moore]. Sen has argued that the qualitative dynamics of string theory tachyons can be described by (\[action\]) with an exponential potential [@sen3]. Padmanabhan went further to suggest that one can construct a phenomenological runaway potential with the tachyonic equation of state capable of leading to a desired cosmology [@paddy]; see also Ref. [@sami2] on a similar theme. In what follows we shall consider (\[action\]) with the exponential potential in a purely phenomenological context, without claiming any identification of $\phi$ with the string tachyon field. Indeed, there are problems with inflation if the origin of $\phi$ is traced to string theory [@klinde], and we will come back to this point later. The field equations (\[friedman\]) and (\[evolution eq\]) for tachyonic matter with the exponential potential $$V(\phi)=V_0e^{-\alpha \phi}$$ can be solved exactly in the slow roll limit. The integration of these equations leads to [@sami1]
$$\dot{\phi}^2_{end}={2 \over 3}\; , \qquad \phi_{end}=-{1 \over \alpha}\, \ln \left( {\alpha^2 \over 6\beta^2} \right)\; , \qquad V_{end}={{\alpha^2 M_p^2} \over 2}$$ where $\beta=\sqrt{ V_0/3M_p^2}$. Eq. (7) is consistent with the expression for the slow roll parameter $$\epsilon={M_p^2 \over 2} \left( {V_{,\phi} \over V} \right)^2 {1 \over V}$$ The COBE normalized value for the amplitude of scalar density perturbations $$\delta_H^2 \simeq 4\times 10^{-10}$$ can be used to estimate $ V_{end}$ as well as $ \alpha$. Here $ V_i$ refers to the value of the potential at the commencement of inflation and is related to $V_{end}$ as $$V_{end}={V_i \over {2N+1}}$$ Using Eqs. (9) and (10) with ${\cal N} =60$ we obtain $$V_{end}\simeq 4\times 10^{-11}M_p^4$$ At the end of inflation, apart from the field energy density, a small amount of radiation is also present due to particles being produced quantum mechanically during inflation [@ford] $$\rho_r = 0.01\, g_p H_{end}^4 \qquad (10 \lesssim g_p \lesssim 100)$$ which shows that the field energy density far exceeds the density in radiation, $${\rho_r \over \rho_\phi} \simeq 0.01\, g_p\, {V_{end} \over 9 M_p^4} \simeq 4\, g_p\times 10^{-14}$$ From (7) we find that $\alpha \simeq 10^{-5}M_p$ and there is no problem as long as we consider the tachyonic model of inflation in a phenomenological context. However, it would be problematic if we trace the origin of the field $\phi$ to string theory, as there is no free parameter there to tune. Indeed, $\alpha$ and $V_0$ can be expressed through the string length scale $l_s$ and the string coupling $g_s$ as $\alpha=\alpha_0/l_s$, $V_0=v_0/(2\pi)^3g_sl_s^4$, where $v_0$ and $\alpha_0$ are dimensionless constants, $V_0/v_0$ is the brane tension and $\alpha$ is the tachyon mass. Tuning $\alpha$ to $10^{-5}M_p$ leads to one of two unacceptable situations: a light tachyon mass or a large value of the string coupling $g_s$. This problem is quite independent of the form of the tachyonic potential; see the paper of Fairbairn and Tytgat in Ref. [@shiu]. The situation can be remedied by invoking a large number of D-branes separated by distances much larger than $l_s$[^2]. The number of such branes in our case turns out to be of the order of $10^{10}$. The other alternative could be brane-assisted inflation. Indeed, the prospects for inflation in the brane-world scenario improve due to the presence of an additional quadratic density term in the Friedmann equation. Enough inflation can be generated in this case without tuning $\alpha$; see the paper by Bento et al. in Ref. [@shiu] and Ref. [@sami1]. Non-brane-world alternatives to tackle this problem are discussed by Yun-Song Piao and collaborators [@shiu].
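As a quick arithmetic cross-check of the numbers quoted above (our own, in units $M_p=1$):

```python
import numpy as np

# Consistency check of the estimates above (our arithmetic, M_p = 1):
V_end = 4e-11                          # from the COBE normalisation as quoted above
alpha = np.sqrt(2.0 * V_end)           # inverting V_end = alpha^2 M_p^2 / 2
H_end = np.sqrt(V_end / 3.0)
rho_ratio = 0.01 * H_end**4 / V_end    # rho_r / rho_phi per unit g_p
print(alpha, rho_ratio)                # ~ 9e-6 (alpha ~ 1e-5 M_p), ~ 4e-14
```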
Regarding the late-time behaviour, the phase space analysis of the tachyon field with an exponential potential was carried out in Ref. [@sami1]. It was shown that the dust-like solution is a late-time attractor of the tachyonic system. Therefore the tachyon field, in principle, could become a candidate for dark matter.
In spite of the very attractive features of the rolling tachyon condensate, tachyonic inflation faces difficulties associated with reheating [@klinde; @sami1]. A homogeneous tachyon field evolves towards its ground state without oscillating about it and, therefore, the conventional reheating mechanism does not work in the tachyonic model. Quantum mechanical particle production during inflation provides an alternative mechanism by means of which the universe could reheat. Unfortunately, this mechanism also does not seem to work: the small energy density of radiation created in this process redshifts faster than the energy density of the tachyon field, and therefore radiation domination in the tachyonic model of inflation never commences. However, the tachyon field could play the role of dark matter if the problem associated with caustics can be overcome [@staro].
Acknowledgments
===============
We are thankful to Hang Bae Kim, Mohammad Reza Garousi and Piao Yu-song for useful comments.
[99]{} A. Sen, arXiv: hep-th/0203211. A. Sen, arXiv: hep-th/0203265. A. Sen, arXiv: hep-th/0204143. G. W. Gibbons, arXiv: hep-th/0204008. G.W. Gibbons, arXiv: hep-th/0301117. Chanju Kim, Hang Bae Kim, Yoonbai Kim and O-Kab Kwon, hep-th/0301142. M. Fairbairn and M.H.G. Tytgat, arXiv: hep-th/0204070;
A. Feinstein, arXiv: hep-th/0204140; D. Choudhury, D.Ghoshal, D. P. Jatkar and S. Panda, arXiv: hep-th/0204204; A. Frolov, L. Kofman and A. Starobinsky, arXiv: hep-th/0204187; H. S. Kim, arXiv: hep-th/0204191; G. Shiu and Ira Wasserman, arXiv: hep-th/0205003; T. Padmanabhan and T. Roy Choudhury, Phys.Rev. D66 (2002) 081301\[ hep-th/0205055\]; K. Hashimoto, arXiv: hep-th/0204203; S. Sugimoto, S. Terashima, hep-th/0205085; J. A. Minahan, arXiv: hep-th/0205098; L. Cornalba, M. S. Costa and C. Kounnas, arXiv: hep-th/0204261; H. B. Benaoum, arXiv: hep-th/0205140; xin-zhou Li, Jian-gang Hao and Dao-jun Liu, arXiv: hep-th/0204152; J. c. Hwang and H. Noh, arXiv: hep-th/0206100; Y.-S. Piao, R.-G. Cai, X. Zhang, and Y.-Z. Zhang, hep-ph/0207143; G. Shiu, S.-H. H. Tye, and I. Wasserman, hep-th/0207119; X.-z. Li, D.-j. Liu, and J.-g. Hao, hep-th/0207146, On the tachyon inflation; J.M. Cline, H. Firouzjahi, and P. Martineau,JHEP 0211, 041 (2002), hep-th/0207156; James M. Cline, Hassan Firouzjahi,hep-th/0301101; Bin Wang, Elcio Abdalla, Ru-Keng Su, hep-th/0208023; S. Mukohyama, hep-th/0208094; M.C. Bento, O. Bertolami and A.A. Sen, hep-th/0208124; J.-g. Hao and X.-z. Li, Phys. Rev. D66, 087301 (2002), hep-th/0209041; Chanju Kim , Hang Bae Kim and Yoonbai Kim, hep-th/021010. J.S. Bagla, H.K. Jassal, and T. Padmanabhan, astro-ph/0212198; Yun-Song Piao, Qing-Guo Huang, Xinmin Zhang, Yuan-Zhong Zhang, hep-ph/0212219.
Ashoke Sen, hep-th/0207105; Partha Mukhopadhyay and Ashoke Sen, hep-th/0208142; Ashoke Sen, hep-th/0209122. Akira Ishida, Shozo Uehara, hep-th/0206102 ;T. Mehen and Brian Wecht, hep-th/0206212; Kazutoshi Ohta, Takashi Yokono,hep-th/0207004; Nicolas Moeller, Barton Zwiebach, hep-th/0207107; T. Okuda and S. Sugimoto, hep-th/0208196; Gary Gibbons, Koji Hashimoto, Piljin Yi, hep-th/0209034; Mohammad R. Garousi, hep-th/0003122; Mohammad R. Garousi,hep-th/0209068; J. Kluson, hep-th/0209255; Sayan Kar, hep-th/0210108; Haewon Lee, W. S. l’Yi, hep-th/0210221; Kenji Hotta, hep-th/0212063; Soo-Jong Rey, Shigeki Sugimoto, hep-th/0301049; Chanju Kim, Hang Bae Kim, Yoonbai Kim, O-Kab Kwon, hep-th/0301076; Akira Ishida and Shozo Uehara, hep-th/0301179. M. Gasperini, G. Veneziano, The Pre-Big Bang Scenario in String Cosmology, hep-th/0207130; F. Quevedo, Lectures on Strings/Brane Cosmology, hep-th/0210292. G.A. Diamandis, B.C. Georgalas , N.E. Mavromatos, E. Papantonopoulos, hep-th/0203241; G.A. Diamandis, B.C. Georgalas , N.E. Mavromatos, E. Papantonopoulos, I. Pappa, hep-th/0107124.
A. A. Gerasimov, S. L. Shatashvili, JHEP 0010 034(2000) \[hep-th/0009103\]; A. Minahan and B. Zwiebach; JHEP 0103 038 (2001) \[hep-th/0009246\]; D. Kutasov, A. Tseytlin, J. Math Phys., 42 2854 (2001). T. Padmanabhan, Phys.Rev. D66 (2002) 021301\[hep-th/0204150\]. M. Sami, hep-th/0205146. L. Kofman and A. Linde, Phys.Lett. B545 (2002) 8-16\[hep-th/0205121\]. M. Sami, P. Chingangbam and T. Qureshi,Phys.Rev. D66 (2002) 043530\[hep-th/0205179\]. L.H. Ford, Phys. Rev. D, [**35**]{}, 2955 (1987), B. Spokoiny, Phys. Lett. [**B 315**]{}, 40 (1993).
E. Papantonopoulos and Papa, Mod. Phys. Lett. [**A15**]{}, 2145 (2000) \[hep-th/0001183\] ; Phys. Rev. [**D63**]{} , 103506 (2000) \[hep-th/0010014\] ; S. H. S. Alexander, Phys. Rev. [**D65**]{},023507 \[ hep-th/0105032\]; A. Mazumdar, S. Panda, A. Perez-Lorenzana, Nucl. Phys. [**B614**]{}, 101, (2001)\[hep-th-0107058\]; Shinji Mukohyama, arXiv: hep-th/0204084. Gary Felder, Lev Kofman and Alexei Starobinsky ,JHEP 0209 (2002) 026\[hep-th/0208019\].
[^1]: On leave from Jamia Millia Islamia, New Delhi. email:[email protected]
[^2]: We thank S. Panda for this clarification
---
abstract: 'We present a toy model for five-dimensional heterotic M-theory where bulk three-branes, originating in 11 dimensions from M five-branes, are modelled as kink solutions of a bulk scalar field theory. It is shown that the vacua of this defect model correspond to a class of topologically distinct M-theory compactifications. Topology change can then be analysed by studying the time evolution of the defect model. In the context of a four-dimensional effective theory, we study in detail the simplest such process, that is the time evolution of a kink and its collision with a boundary. We find that the kink is generically absorbed by the boundary thereby changing the boundary charge. This opens up the possibility of exploring the relation between more complicated defect configurations and the topology of brane-world models.'
author:
- |
[ Nuno D. Antunes[^1] , Edmund J. Copeland[^2] , Mark Hindmarsh[^3] and André Lukas[^4]]{}\
[Centre for Theoretical Physics, University of Sussex]{}\
[Falmer, Brighton BN1 9QJ, United Kingdom]{}
title: '\'
---
Introduction
============
The single most important problem in trying to make contact between string-/M-theory and low-energy physics is probably the large number of degenerate and topologically distinct vacua of the theory. It is usually stated that non-perturbative effects will eventually lift most of this degeneracy. However, despite the advances over recent years in understanding non-perturbative string- and M-theory there is very little indication of progress in this direction. In fact, with the advent of M-theory and concepts such as branes and brane-world theories new classes of vacua have been constructed and, as a consequence, the degeneracy problem has perhaps grown even more serious. It seems worthwhile, therefore, to ask whether the cosmological evolution rather than inherent non-perturbative effects of the theory may play a prominent role in selecting the vacuum state. Indeed, it is known that the degeneracy of some vacua (particularly among those with a large number of supersymmetries) will not be lifted non-perturbatively, suggesting cosmology will have some role to play.
The first task to tackle, in this context, is the formulation of a workable theory capable of describing a number of topologically different vacua and transitions among them. As a second step, one will have to analyse the cosmological evolution of this theory. It is precisely these two problems which will be the main topic of the present paper.
The class of vacua we will use in our approach is provided by compactification of heterotic M-theory [@Horava:1996ma] on Calabi-Yau threefolds [@Witten:1996mz; @Horava:1996vs; @Lukas:1997fg; @Lukas:1998hk] resulting in five-dimensional brane-world theories [@Lukas:1998yy; @Ellis:1998dh; @Lukas:1998tt]. These theories are defined on a space-time with two four-dimensional boundaries corresponding to the fixed planes of the orbifold $S^1/Z_2$ and, in addition, may contain bulk three-branes which originate from M five-branes wrapping two-cycles in the Calabi-Yau space [@Witten:1996mz; @Lukas:1998hk]. The associated effective actions are five-dimensional gauged $N=1$ supergravity theories in the bulk coupled to four-dimensional $N=1$ theories residing on the two boundaries and the three-branes. The prospects for particle-physics model building within this class of compactifications are quite promising and a number of models with attractive particle-physics properties on the “observable” boundary have been constructed [@Andreas:1999ei]–[@Donagi:2000zs]. The simplest way to characterise topologically different compactifications from the viewpoint of the five-dimensional effective theories is by using the charges $\a_1$ and $\a_2$ on the boundaries and the three-brane charge $\a_3$. These charges are not independent but must satisfy the cohomology constraint $\a_1+\a_2+\a_3=0$ which follows from anomaly cancellation. Two five-dimensional effective theories with different sets of charges $(\a_1,\a_2,\a_3)$ originate from topologically distinct compactifications. A transition between two such theories may occur through a small-instanton transition [@Witten:1996gx; @Ganor:1996mu] when a three-brane collides with one of the boundaries. The three-brane can then be “absorbed” by the boundary and, correspondingly, the boundary charge is changed by the amount carried by the incoming three-brane. This change in the boundary charge indicates a more dramatic transition in the boundary theory. For example, the gauge group and the amount of chiral matter [@Ovrut:2000qi] may be altered as a consequence of the internal topology change.
The goal of this paper is to find a five-dimensional (toy) model which provides a unified description for the above class of topologically distinct vacua, in the simplest setting, and allows for transitions between them. While, for simplicity, we will assume that the topology of space-time both in the internal Calabi-Yau space and in the orbifold direction remains unchanged we will allow for transitions corresponding to a topology change in the internal gauge-field instantons on the boundaries and a change in the number and charges of three-branes. Our basic method will be, starting with five-dimensional heterotic M-theory in its simplest form, to model the three-branes as topological defects [@DeWolfe:1999cp] (kinks) of a new bulk scalar field $\c$. We do not claim, of course, that this model provides the correct definition of M-theory in these backgrounds. However, we do show that the defect model in the background of its various vacuum states reproduces the five-dimensional M-theory effective actions with different charges $(\a_1,\a_2,\a_3)$, corresponding to topologically distinct M-theory compactifications.
Time-evolution of the defect model and the scalar $\c$ in particular then allows for a transition between these topologically distinct configurations. We will study in detail the simplest such transition, namely the collision of a three-brane kink with one of the boundaries. This will be done by calculating the four-dimensional effective action for the defect model in the background of such a kink. As we will see from this four-dimensional action, the collision process indeed generically leads to an absorption of the kink and a change in the boundary charge by the amount carried by the kink. Hence, we have established the existence of one of the elementary topology-changing processes in our defect model. This opens up the possibility, subject of ongoing research, that a study of more complicated configurations, such as brane-networks, will provide insight into topological properties of brane-world models.
The plan of the paper is as follows. In the next section, we will introduce the five-dimensional effective actions from heterotic M-theory, in their simplest form. For later reference, we will also review the associated four-dimensional effective theories. Section 3 then presents our defect model and explains how, precisely, it is related to the M-theory actions. In Section 4, we will compute the four-dimensional effective action for the defect model in the background of a kink and Section 5 presents the resulting evolution equations. Section 6 is devoted to a detailed study of the kink evolution and its collision with a boundary, based on these equations. A conclusion and outlook is presented in Section 7.
Effective actions from heterotic M-theory
=========================================
To set the scene, we will now describe the five-dimensional brane world theories for which we would like to find a smooth defect-model. These brane-world theories can be viewed as a minimal version of five-dimensional heterotic M-theory [@Lukas:1998yy]. For later purposes, it will also be useful to review the four-dimensional effective action associated to these brane-world theories.
Coordinates for the five-dimensional space $M_5$ are denoted by $x^\a$ where $\a ,\b,\dots = 0,1,2,3,5$. We also introduce four-dimensional indices $\m ,\n ,\dots = 0,1,2,3$. The coordinate $y\equiv x^5$ is compactified on an orbi-circle $S^1/Z_2$ in the usual way, that is, by first compactifying $y$ on a circle with radius $\r$ and then dividing by the $Z_2$ orbifold action $y\rightarrow -y$. Taking the $y$–coordinate in the range $y\in [-\p\r ,\p\r ]$ with the endpoints being identified the two resulting four-dimensional fixed planes (boundaries), denoted by $M_4^1$ and $M_4^2$, are located at $y=0$ and $y=\p\r$, respectively. Such a geometry is obtained by compactifying 11-dimensional heterotic M-theory on a Calabi-Yau space. If the five-branes present in the 11-dimensional theory are included in this compactification they lead, upon wrapping a two-cycle in the Calabi-Yau space, to bulk three-branes in the five-dimensional brane-world theories. For simplicity, we will consider a single such three-brane whose world-volume we denote by $M_4^3$. We also need to include the $Z_2$ mirror of this three-brane with world-volume $\tilde{M}_4^3$. Three-brane world-volume coordinates are denoted by $\s^\m$. In the minimal version of the model the bulk fields consist of the metric and the dilaton $\F$ while the three-brane world-volume fields are simply the embedding coordinates $X^\a =X^\a (\s^\m )$. The effective action for these fields is then given by [@Brandle:2001ts] $$\begin{aligned}
S_5 &=& -\frac{1}{2\k_5^2}\left\{\int_{M_5}\sqrt{-g}\left[
\frac{1}{2}R+\frac{1}{4}\partial_\a\F\partial^\a\F
+\frac{1}{3}\a^2 e^{-2\F}\right]\right. \nonumber \\
&&\qquad +\int_{M_4^1}\sqrt{-g}\; 2\a_1 e^{-\F}
+\int_{M_4^2}\sqrt{-g}\; 2\a_2 e^{-\F} \nonumber \\
&&\qquad \left. +\int_{M_4^3\cup\tilde{M}_4^3}\sqrt{-\g}\;
|\a_3| e^{-\F}\right\}\; . \label{S5}\end{aligned}$$ Note, that the dilaton $\F$ measures the size of the internal Calabi-Yau space which is, more precisely, given by $v e^\F$, where $v$ is a fixed reference volume. It relates the five-dimensional Newton constant $\k_5$ to its 11-dimensional counterpart $\k$ via $$\k_5^2 = \frac{\k^2}{v}\; .$$ Further, $\a_i$, where $i=1,2,3$ are the charges on the orbifold planes and the three-brane, respectively. They are quantised and can be written as integer multiples $$\a_i = \s\b_i\; ,\qquad \b_i\in{\bf Z}$$ of the unit charge $\s$ defined by $$\s = \frac{\e_0}{\p\r}\; ,\qquad
\e_0 = \left(\frac{\k}{4\p}\right)^{2/3}\frac{2\p^2\r}{v^{2/3}}\; .\label{e0}$$ These charges satisfy the important cohomology condition $$\sum_{i=1}^3\a_i = 0 \label{cohomology}$$ which follows from anomaly cancellation in the 11-dimensional theory. The quantity $\a$ which appears in the above bulk potential is a sum of step-functions given by $$\a = \a_1\theta (M_4^1) + \a_2\theta (M_4^2) + \a_3\left(\theta (M_4^3)+
\theta (\tilde{M}_4^3)\right) \; . \label{alpha}$$ Finally, the induced metric $\g_{\m\n}$ on the three-brane world-volume is defined as the pull-back $$\g_{\m\n} = \partial_\m X^\a\partial_\n X^\b g_{\a\b}$$ of the space-time metric.
For positive three-brane charge, $\a_3 > 0$, the above action can be embedded into a five-dimensional $N=1$ bulk supergravity theory coupled to four-dimensional $N=1$ theories on the boundaries and the branes. The details of this supergravity theory have been worked out in Ref. [@Brandle:2001ts]. In the case of an anti-three-brane, that is for $\a_3 < 0$, while bulk supersymmetry is preserved everywhere locally, it is broken globally. Technically, this happens because the chirality of the four-dimensional supersymmetry preserved on the three-brane is opposite to the one on the orbifold fixed planes. Such non-supersymmetric heterotic models containing anti-branes have not been studied in much detail so far. We have included this possibility here because it will naturally arise later in our discussion of the defect model. The generalisation to include more than one three-brane is straightforward. It simply amounts to replacing the Nambu-Goto type three-brane action in the third line of (\[S5\]) by a sum over such actions (with generally different three-brane charges) and modifying the cohomology condition (\[cohomology\]) and the definition of $\a$, Eq. (\[alpha\]), accordingly.
Note that two actions of the type (\[S5\]) but with different sets of charges $\a_i$ correspond to topologically different M-theory compactifications. Specifically, the charges $\a_1$ and $\a_2$ on the boundaries are related to gravitational and gauge instanton numbers. If we keep the topology of the Calabi-Yau space fixed, as discussed, different values of $\a_1$ and $\a_2$ indicate a different topology of the internal gauge bundles. As a consequence, the values of $\a_1$, $\a_2$ are also correlated with other properties of the boundary theories, such as the types of gauge groups and the amount of chiral matter. Different values of $\a_3$ imply different internal wrapping numbers for the five-branes and, hence, clearly indicate different topologies.
For the case of a three-brane (rather than an anti-three-brane), the action has a BPS domain-wall vacuum [@Lukas:1998yy; @Brandle:2001ts] given by $$\begin{aligned}
ds^2 &=& a_0^2hdx^\m dx^\n\eta_{\m\n}+b_0^2h^4dy^2 \label{BPS1}\\
e^\F &=& b_0h^3 \label{BPS2}\\
X^\m &=& \s^\m \\
X^5 &=& Y = {\rm const}\label{BPS4}\end{aligned}$$ Here the function $h=h(y)$ is defined by $$h(y)=-\frac{2}{3}\left\{ \begin{array}{lll}
\a_{1} |y|+c_0 &\mbox{for}& 0\leq |y|\leq Y \\
(\a_{1}+\a_{3})|y|-\a_{3}\, Y+c_0 &\mbox{for}&
Y\leq |y|\leq \p\r
\end{array}\right.\label{BPS5}$$ and $a_0$, $b_0$ and $c_0$ are constants. Note that this solution is not smooth across the three-brane, reflecting the fact that the three-brane as described by (\[S5\]) is infinitely thin. Such a static BPS solution does not exist for the anti-three-brane since the sum of the tensions $\a_1+\a_2+|\a_3|$ does not vanish for $\a_3<0$ by virtue of the cohomology condition (\[cohomology\]). In fact, solutions which couple to an anti-three-brane will, in general, be time-dependent.
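For orientation, the warp function $h(y)$ above is piecewise linear and easily evaluated; the snippet below (ours) uses arbitrary illustrative values for the charges and constants, chosen so that $h>0$ on the interval shown.

```python
import numpy as np

# Illustrative evaluation (ours) of the piecewise-linear warp function h(y) of the
# BPS solution; alpha1, alpha3, Y and c0 are arbitrary choices, not values from the text.
def h(y, alpha1=1.0, alpha3=1.0, Y=0.5, c0=-2.0):
    ay = np.abs(y)
    inner = alpha1 * ay + c0
    outer = (alpha1 + alpha3) * ay - alpha3 * Y + c0
    return -2.0 / 3.0 * np.where(ay <= Y, inner, outer)

y = np.linspace(-1.0, 1.0, 201)   # orbifold coordinate in units of pi*rho
profile = h(y)                    # kinked at |y| = Y, where the three-brane sits
```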
For later reference, it will be useful to discuss the four-dimensional effective action associated to the brane-world model and the above BPS vacuum. It is given by [@Derendinger:2001gy; @Brandle:2001ts] $$S_4 =
-\frac{1}{2\k_P^2}\int_{M_4}\sqrt{-g_4}\left[\frac{1}{2}R_4+\frac{1}{4}{\partial}_\m\f
{\partial}^\m\f +\frac{3}{4}{\partial}_\m\b{\partial}^\m\b+\frac{q_3}{2}e^{\b -\f}
{\partial}_\m z{\partial}^\m z\right]\; . \label{S4}$$ The three scalar fields $\f$, $\b$ and $z$ have straightforward interpretations in terms of the underlying higher-dimensional theories. The field $\f$, as the zero mode of the five-dimensional scalar $\F$, specifies the volume of the internal Calabi-Yau space averaged over the orbifold. More precisely, this average volume is given by $ve^\f$. The scalar $\b$, on the other hand, originates from the $(55)$-component of the five-dimensional metric and measures the size $\p\r e^\b$ of the orbifold. Finally, $z$ represents the position of the three-brane and is normalised to be in the range $z\in [0,1]$ with the endpoints corresponding to the two boundaries of five-dimensional space-time. The four-dimensional Newton constant $\k_P$ is related to its five-dimensional cousin by $$\k_P^2 = \frac{\k_5^2}{2\p\r}\; .$$ The three-brane charge $$q_3=\p\r\a_3=\e_0\b_3\; , \qquad \b_3\in{\bf Z} \label{q3}$$ is quantised in units of $\e_0$ as defined in Eq. (\[e0\]) and is positive for the case under discussion.
As expected, the action can be obtained from an $N=1$ supergravity theory by a suitable truncation. The Kähler potential for this supergravity theory was first given in Ref. [@Derendinger:2001gy]. An important quantity which governs the validity of the effective action is the strong-coupling expansion parameter $$\e = \e_0 e^{\b - \f}\; .$$ It measures the relative size of string loop corrections to the four-dimensional action or, equivalently, the strength of the warping in the orbifold direction from a five-dimensional viewpoint. The effective action is valid as long as $\e <1$ and can be expected to break down otherwise. Another reason for a breakdown of the four- as well as the five-dimensional effective theory is the five-brane approaching one of the boundaries, that is, $z\rightarrow 0$ or $z\rightarrow 1$. In this case, the underlying heterotic M-theory may undergo a small-instanton transition [@Witten:1996gx; @Ganor:1996mu] which leads to the M five-brane being converted into a gauge-field instanton (or so-called gauge five-brane [@Strominger:et]) on the boundary. In such a process, properties of the boundary theory, such as the gauge group and the amount of chiral matter, can change dramatically as a result of the internal topology change [@Ovrut:2000qi]. In our simple five-dimensional model such a modification of the boundary theory is indicated by a change in the boundary charge $\a_1$ or $\a_2$ by the amount of incoming five-brane charge. It is clear, however, that the actions (\[S5\]) or (\[S4\]) are not capable of describing such a jump in the boundary charge in a dynamical way. In fact, the four-dimensional action does not retain any memory of the presence of the boundaries as $z\rightarrow 0,1$. This can also be seen from the moving-brane solutions to (\[S4\]) found in Ref. [@Copeland:2001zp] and will be explained in more detail later. As we will see, our defect model, to be presented in the next section, will considerably improve on these points.
Modelling heterotic brane-world theories
========================================
We would now like to find a “smooth” model, replacing the five-dimensional action (\[S5\]), where the three-brane is not put in “by hand” but, rather, obtained as a defect solution of the theory. Such a model should have, as a solution, a smooth version of the BPS domain wall (\[BPS1\])–(\[BPS5\]). Note that we will not attempt to find a smooth description for the orbifold fixed planes. Their nature, as part of the space-time geometry, is entirely different from that of the three-branes. In particular, the fixed plane tensions $\a_1$, $\a_2$ can be negative whereas the three-brane tension $|\a_3|$ is always positive.
Modelling co-dimension one objects such as our three-branes is usually achieved using kink-solutions of scalar field theories [@DeWolfe:1999cp]. This is indeed what we will do here. We, therefore, supplement the bulk field content of the five-dimensional theory by a second scalar field $\c$. For this bulk scalar along with the dilaton $\F$ and the five-dimensional metric, we propose the following action $$\begin{aligned}
\tilde{S}_5 &=& -\frac{1}{2\k_5^2}\left\{\int_{M_5}\sqrt{-g}\left[
\frac{1}{2}R+\frac{1}{4}\partial_\a\F\partial^\a\F
+\frac{1}{2}e^{-\F}\partial_\a\c\partial^\a\c
+V(\F ,\c )\right]\right.\nonumber \\
&&\qquad\left. +\int_{M_4^1}\sqrt{-g}\; 2W
-\int_{M_4^2}\sqrt{-g}\; 2W\right\}\; .
\label{S5t}\end{aligned}$$ We require that the potential $V$ be obtained from a “superpotential” $W$ following the general formula [@Skenderis:1999mm] $$V = \frac{1}{2}G^{IJ}\partial_IW\partial_JW - \frac{2}{3}W^2\; ,
\label{Vgen}$$ where $G_{IJ}$ is the sigma-model metric and indices $I,J,\dots$ label the various scalar fields $\F^I$. For our specific action , we have two scalar fields $(\F^I)=(\F ,\c )$ and the sigma-model metric is explicitly given by $$G = {\rm diag}\left(\frac{1}{2},e^{-\F}\right)\; .$$ Further, we propose the following form for the superpotential $$W=e^{-\F}w(\c ) \label{W}$$ where $w$ is an, as yet, unspecified function of $\c$. Using the general expression this results in a potential $$V = \frac{1}{3}e^{-2\F}w^2+\frac{1}{2}e^{-\F}U\; ,\qquad
U = \left(\frac{dw}{d\c}\right)^2\; . \label{U}$$ Note that, in (\[S5t\]), we have omitted the Nambu-Goto type action for the three-brane corresponding to the third line of the M-theory effective action (\[S5\]). The reason is, of course, that we would like to recover the three-brane as a kink-solution of the new scalar field $\c$. For this to work out, the potential $U$ has to have a non-trivial vacuum structure. In fact, since the original three-brane charge is an (arbitrary) integer multiple of a certain unit, we need an infinite number of equally spaced minima. More precisely, we require that the potential $U$ satisfy the following properties:
- $U$ is periodic with period $v$, that is $U(\c +v)=U(\c)$
- $U$ has minima at $\c = \c_n = nv$ for all $n\in{\bf Z}$
- $U$ vanishes at the minima, that is $U(\c_n)=0$.
These requirements can be easily translated into conditions on the function $w$ which determines the superpotential. Clearly, from the second and third condition, the derivative of $w$ has to vanish at all minima $\c_n=nv$ of $U$. The definition of $w$ in terms of $U$ involves a sign ambiguity which allows one, using the first condition on $U$ above, to make $w$ periodic as well. However, the structure of the action makes it clear that the “vacuum values” $w(\c_n )$ of $w$ have to reproduce the charges on the orbifold planes. We, therefore, define $w$ as $$w(\c ) = \int_0^\c d\tilde{\c}\sqrt{U(\tilde{\c})}$$ which implies quasi-periodicity, that is, $$w(\c +v) = w(\c )+w(v)\; .$$ We have plotted the typical form of $U$ and $w$ in Fig. \[fig1\].
![*Shown is the typical shape of the superpotential $w$ and the potential $U$ (in units of $\sigma$) as a function of the scalar field $\c$ (in units of $v$).* []{data-label="fig1"}](Uw.eps){height="9cm" width="9cm"}
For much of our discussion the concrete form of the potential will be irrelevant as long as the above conditions are met. A specific example, however, is provided by the sine-Gordon potential $$U = m^2\left[1-\cos\left(\frac{2\p\c}{v}\right)\right]\; ,$$ where $m$ is a constant. The associated superpotential $w$ is easily obtained by integration.
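For this choice the integral defining $w$ is elementary; as a cross-check, the snippet below (ours, with the illustrative values $m=v=1$) performs the integration numerically and verifies the quasi-periodicity $w(\c +v)=w(\c )+w(v)$.

```python
import numpy as np

# Numerical construction (ours) of w(chi) for the sine-Gordon potential, with m = v = 1.
m, v = 1.0, 1.0
U = lambda chi: m**2 * (1.0 - np.cos(2.0 * np.pi * chi / v))

def w(chi, n_steps=4001):
    """w(chi) = int_0^chi sqrt(U) dchi', trapezoidal rule."""
    x = np.linspace(0.0, chi, n_steps)
    y = np.sqrt(np.abs(U(x)))                 # abs() guards tiny negative round-off
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

sigma = w(v)                                  # elementary charge unit sigma = w(v)
print(w(2.5 * v) - w(1.5 * v), sigma)         # both ~ 0.9003, i.e. w(chi + v) = w(chi) + w(v)
```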
This concludes the set-up of our model. Let us now discuss how, precisely, this model corresponds to the brane-world theory introduced earlier. The simplest solution for $\c$ is to be in one of its vacuum states, that is, $\c=\c_n=nv$ for some integer $n$, throughout space-time. In this case, the superpotential and potential reduce to $$W = e^{-\F}w(\c_n)\; ,\qquad V = \frac{1}{3}e^{-2\F}w(\c_n)^2\; .$$ Substituting this back into the action (\[S5t\]) and comparing with the M-theory result (\[S5\]) shows that this precisely corresponds to a situation without a bulk three-brane. In particular, one concludes that the boundary charge $\a_1$ has to be identified with the value $w(\c_n)=w(nv)=nw(v)$ of the superpotential at the respective minimum [^5]. This is, of course, the more precise reason why we have required the superpotential to be quasi-periodic rather than periodic. Furthermore, we learn that the elementary unit of charge $\s$ in the M-theory model (see Eq. (\[e0\])) corresponds to $w(v)$, that is, $$\s = w(v) = \int_0^vd\c\,\sqrt{U(\c )}\; .\label{ident}$$
The next more complicated solutions are kinks where the scalar field $\c$ interpolates between two of its minima as one moves along the orbifold direction. Due to the cross-couplings in the action also the dilaton $\F$ and the metric necessarily have a non-trivial profile in this case. To find such solutions, an appropriate Ansatz is provided by $$\begin{aligned}
ds^2 &=& e^{2A(y)}dx^\m dx^\n\eta_{\m\n}+e^{2B(y)}dy^2 \label{A1}\\
\F &=& \F (y) \\
\c &=& \c (y)\; . \label{A3}\end{aligned}$$ The four $y$-dependent functions $A$, $B$, $\F$, $\c$ are subject to the second order bulk equations of motion to be derived from the first line in (\[S5t\]) and the boundary conditions $$\begin{aligned}
e^{-B} A' &=& -\frac{1}{3}W = -\frac{1}{3}e^{-\F}w,\label{bc1}\\
e^{-B}\F ' &=& 2\frac{\partial W}{\partial\F} = -2e^{-\F}w, \label{bc2}\\
e^{-B}\c ' &=& e^{\F}\frac{\partial W}{\partial\c} = \frac{dw}{d\c}\; .
\label{bc3}\end{aligned}$$ Here, the prime denotes the derivative with respect to $y$ and the equations hold at both boundaries, that is, at $y=0$ and $y=\p\r$. The first equality in each equation is easily derived from (\[S5t\]), including the boundary terms, while the second one follows from inserting the explicit form of the superpotential (\[W\]).
Instead of dealing with the second order equations to obtain explicit solutions, it is much simpler to consider the first order BPS-type equations. Their existence is guaranteed by the special form of our scalar field potential $V$ as being obtained from a superpotential [@Skenderis:1999mm]. Concretely, inserting the Ansatz (\[A1\])–(\[A3\]) into the bulk part of the action (\[S5t\]) one obtains an energy functional $$\begin{aligned}
E &\sim& \int dy e^{4A}\left[-6e^{-2B}{A'}^2+\frac{1}{4}e^{-2B}{\F '}^2
+\frac{1}{2}e^{-\F-2B}{\c '}^2 + V\right], \nonumber\\
&=&\int dy e^{4A}\left[\frac{1}{4}\left(e^{-B}\F '\mp 2\frac{\partial W}
{\partial\F}\right)^2+\frac{1}{2}e^{-\F}\left(e^{-B}\c '\mp e^{\F}
\frac{\partial W}{\partial\c}\right)^2-\frac{2}{3}\left(3e^{-B}A'
\pm W\right)^2\right] \\
&&\qquad\qquad\quad \pm\left[ e^{4A}W\right]_{y=0}^{y=\p\r}\nonumber\end{aligned}$$ which can be written in Bogomol’nyi perfect square form. This leads to the following first order equations $$\begin{aligned}
e^{-B}A' &=& \mp\frac{1}{3}W = \mp\frac{1}{3}e^{-\F}w, \label{be1}\\
e^{-B}\F ' &=& \pm 2\frac{\partial W}{\partial\F} = \mp 2e^{-\F}w,
\label{be2}\\
e^{-B}\c ' &=& \pm e^{\F}\frac{\partial W}{\partial\c}=\pm\frac{dw}{d\c}\; .
\label{be3}\end{aligned}$$ Again, the second equality in each line follows from inserting the explicit superpotential . The scale factor $B$ is, of course, a gauge degree of freedom and can, for example, be set to a constant. It is clear then that the equation for $\c$ decouples from the other two. This $\c$ equation is, in fact, exactly the same first order equation one would derive for a single scalar field $\c$ with potential $U$ in a flat background. It is therefore clear, and can be seen by direct integration, that this equation admits kink solutions where $\c$ interpolates between a certain minimum $\c= \c_n = nv$ of $U$ at $y\rightarrow -\infty$ and one of its neighbouring minima at $y\rightarrow +\infty$. More precisely, for the choice of the upper (lower) sign in Eq. the minimum at $\c = (n+1)v$ ($\c = (n-1)v$) is approached for $y\rightarrow +\infty$. The corresponding solutions for $A$ and $\F$ can then be obtained by inserting this kink solution and integrating Eqs. and . In the next section, this will be carried out in a more precise way. In addition, the solutions obtained in this way have to satisfy the boundary conditions –. Clearly, this is automatically the case if the upper sign in the first order equations – has been chosen, that is, if the kink interpolates between the minima $\c=nv$ and $\c=(n+1)v$ for increasing $y$. For the lower sign, on the other hand, there is no chance to satisfy the boundary conditions and, hence, no solutions of the type considered here exist in this case. The interpretation of these results is straightforward. While both types of kinks are on the same footing as far as the bulk equations are concerned, the boundary conditions distinguish what should then be called an anti-kink, interpolating between $\c=nv$ and $\c =(n-1)v$, from a kink, interpolating between $\c =nv$ and $\c =(n+1)v$. While the latter represents a BPS solution of the theory, the former carries the wrong orientation to be compatible with the boundaries and, in fact, will only exist as a dynamical object. This is in direct analogy with the properties of three-branes and anti-three-branes in our original M-theory model .
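To make the decoupled $\c$ equation concrete, the following short sketch (an illustration, not the construction used later in the text) integrates the first order equation for $\c$ alone in the flat gauge $e^{-B}=1$ for the sine-Gordon example with $n=0$; the parameter values and the initial offset from the minimum are illustrative choices.

```python
# Sketch (sine-Gordon potential, gauge e^{-B} = 1, n = 0, illustrative parameters):
# integrate the first order equation d(chi)/dy = dw/dchi = sqrt(U(chi)) and check
# that chi interpolates between the minima chi = 0 and chi = v for increasing y.
import numpy as np
from scipy.integrate import solve_ivp

m, v = 1.0, 1.0

def dchidy(y, chi):
    U = m**2 * (1.0 - np.cos(2.0 * np.pi * chi / v))
    return np.sqrt(np.maximum(U, 0.0))

# start slightly displaced from chi = 0 (the exact minimum is a fixed point)
sol = solve_ivp(dchidy, (-10.0, 10.0), [1e-6 * v],
                dense_output=True, rtol=1e-8, atol=1e-12)

y = np.linspace(-10.0, 0.0, 11)
print(sol.sol(y)[0])   # rises monotonically from ~0 towards ~v across the kink core
```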
For the case of a kink, we would like to make this correspondence with the M-theory model more precise. Let us consider a kink solution to Eqs. – and – with the kink width being small (compared to the size of the orbifold) and the core of the kink sufficiently away from the boundaries. In this case, the profile for $\c$ and $w(\c )$ can be approximated by a step-function. Specifically, we have $w(\c )\simeq n w(v)$ to the left of the kink and $w(\c )\simeq (n+1)w(v)$ to the right. Inserting this into the equations , and the boundary conditions , for $A$ and $\F$ and solving the resulting system precisely leads to the BPS three-brane solution given by Eqs. , , . The charges $\a_i$ appearing in this solution are given by $$\a_1= n\s\; ,\qquad \a_2= -(n+1)\s\; ,
\qquad \a_3 = \s\; ,\label{ident1}$$ where we have used our earlier identification of the superpotential value $w(v)$ with the elementary charge unit $\s$. Hence, our model allows for a solution which can be interpreted as a smooth version of the M-theory domain wall coupled to a single-charged three-brane.
More generally, we would like to discuss the relation between the action in the background of a kink solution and the M-theory action . To do this, we should allow for fluctuations of the kink. It is well-known [@effdefect] that, for sufficiently small width, the hypersurface prescribed by the kink’s core is a minimal surface and is, therefore, adequately described by a Nambu-Goto action. Practically, this implies that the kinetic term for $\c$ and the $U$ potential term in the action can be effectively replaced by a Nambu-Goto action describing the dynamics of the core of the kink. Of course, this core has to be identified with the three-brane in the M-theory model. It is easy to show that, by virtue of Eq. , the tension in this effective Nambu-Goto action is given by $\s$ which is the correct value for a single-charged three-brane with $\b_3 =1$. Further, the superpotential $w$ in such a kink background can be effectively replaced by a step-function, as discussed above. Using the identification of charges, it is easy to see that the superpotential $w$ precisely equals the function $\a$ , defined in Eq. , in this limit. As a consequence, the second potential term in proportional to $e^{-2\F}w^2$ precisely reproduces the bulk potential in the M-theory action . Similarly, the boundary potentials in match the boundary potentials in using that $w(\c (y=0))\simeq n\s = \a_1$ and $w(\c (y=\p\r ))\simeq (n+1)\s = -\a_2$. Although there are no BPS anti-kink solutions, it is clear that a similar argument can be made for the action in the background of an anti-kink leading to the M-theory action with an anti-three-brane.
In summary, we have seen that the action in the background of various vacuum configurations of the field $\c$ reproduces different versions of the M-theory effective action . For a constant field $\c$ located in one of the minima of $U$, we have reproduced the M-theory action without three-branes. For a kink (anti-kink) background with sufficiently small width away from the boundaries we have obtained the M-theory action with a single-charged three-brane (anti-three-brane). Note that, while from the viewpoint of the smooth model these cases merely correspond to different configurations of the field $\c$, they represent different effective actions on the M-theory side. As we have discussed, these different effective actions arise from topologically distinct compactifications of the 11-dimensional M-theory. While these compactifications are known to be related by topology-changing transitions such as small-instanton transitions, these processes cannot be described by the action . What we have seen is that our smooth defect model incorporates a number of these topologically distinct configurations within a single theory and may describe transitions between them as the scalar field $\c$ evolves in time. In the subsequent sections, we will study the simplest example for such a transition, namely the collision of a kink with one of the boundaries.
A final comment concerns the question of multi-charged branes. Clearly, multi-charged BPS three-branes with $\b_3>1$ are allowed in the M-theory model . However, our defect model does not have exact BPS multi-kink solutions as long as the potential $U$ is smooth at its minima. The reason is that, for smooth $U$, a kink solution does not reach a minimum within a finite distance, as can be easily seen from Eq. with $U$ expanded around a minimum. As a consequence, single-kink solutions cannot be “stacked” to produce exact multi-kink solutions. There are a number of options available to remove this apparent discrepancy. Firstly, the model as it stands does have approximate multi-kink solutions (with exponential accuracy) which could be identified with multi-charged three-branes. Secondly, if the potential $U$ is continuous but non-smooth at its minima, a kink solution can reach a minimum within a finite distance. There is then no obstruction to building up exact multi-kinks by stacking single-kink solutions. Thirdly, some multi-scalar field models are known to admit multi-kink solutions [@multikink]. So, we may generalise the action by adding more than one scalar field. For the purpose of this paper, we will not implement any of these possibilities explicitly but, rather, focus on single-kink solutions in the following.
The four-dimensional effective action of a kink solution
========================================================
We would now like to study one of the simplest dynamical processes in the context of our defect model, namely the time-evolution of a kink solution and its collision with a boundary. For a sufficiently slow evolution this can be conveniently studied in the context of the four-dimensional effective theory associated to in the presence of a kink. The purpose of this section is to compute this effective four-dimensional theory. As we will see, this computation can be pushed a long way without specifying an explicit potential $U$. We will, therefore, keep $U$ general throughout this section. An explicit example for $U$ will be studied in the next section.
Our first step is to write the kink solution in a form which makes the dependence on the various integration constants (which will be promoted to four-dimensional moduli fields later on) as explicit as possible. We find that the kink solution to – and – interpolating between the minima $\c = \c_n = nv$ and $\c = \c_{n+1}=(n+1)v$ for increasing $y$ can be cast in the form $$\begin{aligned}
\c &=& C\left( e^\b\m^{-1}\left(y/\p\r -z\right)\right) ,
\label{sol1} \\
e^{\F} &=& e^{\f}\left( 1 + \e_0 e^{\b -\f}f(y,\b ,z)\right) ,\label{sol2}\\
A &=& A_0+\frac{1}{6}\F, \label{sol3}\\
B &=& \label{sol4}\b,\end{aligned}$$ where we recall that $A$ and $B$ are the scale factors in the five-dimensional metric as defined in Eq. –. The functions $C$ and $f$ in the above solution can be expressed in terms of the potential as follows $$\begin{aligned}
C^{-1}(\c ) &=& \frac{1}{\p\r\m}\int_{(n+\frac{1}{2})v}^\c\frac{d\tilde{\c}}
{\sqrt{U(\tilde{\c})}}\; , \qquad\label{C} \\
f(y,\b ,z) &=& -\frac{2}{\p\r \e_0}\int_{y_0}^yd\tilde{y}\; w\left(C\left(
e^\b\m^{-1}\left(\tilde{y}/\p\r -z\right)\right)
\right)\; . \label{f}\end{aligned}$$ Here $\f$, $\b$, $z$, $A_0$ and $y_0$ are integration constants, while $\m$ is a constant which measures the width of the kink in units of $\p\r$. It is clear from the form of the metric that the constant $A_0$ can be absorbed into the four-dimensional metric. As we will see, it is, however, convenient to keep this constant explicitly since it can be used to canonically normalise the four-dimensional Einstein-Hilbert term. For our subsequent discussion, let us define the average $\langle h \rangle$ of a function $h=h(y)$ over the orbifold by $$\langle h \rangle = \frac{1}{\p\r}\int_0^{\p\r}dy\; h(y)\; .$$ Since the constants $y_0$ and $\f$ really describe the same degree of freedom, we can fix $y_0$ by requiring that $\langle f\rangle=0$. With this convention, the integration constant $\f$ has a clear geometrical interpretation, namely $e^\f$ represents the orbifold average of the dilaton $e^\F$. Similarly, $e^\b$ measures the orbifold size in units of $\p\r$. The final integration constant $z$ specifies the position of the kink’s core (the position where $\c=(n+\frac{1}{2})v$) in the orbifold direction. Values $z\in [0,1]$ imply that the kink’s core is located within the boundaries of five-dimensional space and is, hence, physically present. Further, $z\rightarrow 0,1$ indicates collision of the kink with one of the boundaries. For $z\notin [0,1]$ the core is outside the physical region and we can merely think of $z$ as the virtual position of the core were space-time to continue beyond the boundaries. In this case, the physical part of the kink, located between the boundaries, is only its tail. In the limiting case $z\rightarrow\pm\infty$ the kink disappears completely and we approach one of the trivial vacuum states of the theory with either $\c = nv$ or $\c =(n+1)v$ throughout five-dimensional space-time depending on whether $z\rightarrow +\infty$ or $z\rightarrow -\infty$. Also note that the function $C$, defined in Eq. , is independent of all integration constants and can be computed for a given potential $U$.
We should now promote all integration constants in our kink solution to four-dimensional moduli fields. This leads to three scalar fields $(\f^I)=(\f ,\b ,z)$ and the four-dimensional effective metric $g_{4\m\n}$. Accordingly, the Ansatz – should then be modified to $$\begin{aligned}
ds^2 &=& e^{2A(y,\f ^I)}dx^\m dx^\n g_{4\m\n} + e^{2B(y,\f ^I)}dy^2,
\label{A1t} \\
\F &=& \F (y,\f^I), \\
\c &=& \c (y,\f^I)\; , \label{A3t}\end{aligned}$$ where $A$, $B$, $\F$ and $\c$ are as in Eqs. – but with $(\f^I)=(\f ,\b ,z)$ now viewed as functions of the external coordinates $x^\m$.
We are now ready to compute the four-dimensional effective action. Inserting the Ansatz – into the action and integrating over the orbifold direction we obtain the following result $$\tilde{S}_4 = -\frac{1}{2\k_P^2}\int_{M_4}\sqrt{-g_4}\left[
\frac{1}{2}R_4+\frac{1}{2}G_{IJ}\partial_\m\f^I
\partial^\m\f^J\right]\; . \label{S4t}$$ The sigma-model metric $G_{IJ}$ is given by $$G_{IJ} = 2\left< e^{2A+B}\left[-3\partial_IA\partial_JA-3\partial_{(I}A
\partial_{J)}B+\frac{1}{4}\partial_I\F\partial_J\F
+\frac{1}{2}e^{-\F}\partial_I\c\partial_J\c \right]\right>,
\label{G}$$ where $\partial_I=\frac{\partial}{\partial\f^I}$ and $(\f^I)=(\f ,\b
,z)$. Further, in order to obtain an Einstein-frame action we have required that $$\left< e^{2A+B}\right> = 1\; . \label{cannor}$$ This indeed fixes the constant $A_0$ in Eq. to be $$e^{2A_0} = e^{-\b}\left< e^{\F /3}\right>^{-1}\; . \label{A0}$$ The four-dimensional Planck scale $\k_P$ is defined by $$\k_P^2 = \frac{\k_5^2}{2\p\r}\; ,$$ as usual.
The remaining task is now to evaluate the expression for the moduli-space metric using the kink solution –. This leads to fairly complicated results, in general. There is, however, an approximation suggested by the original M-theory model which simplifies matters considerably. As discussed, the effective actions for heterotic M-theory in Section 2 are valid only if the strong-coupling expansion parameter $$\e =\e_0 e^{\b - \f}$$ is smaller than one. We are, therefore, led to compute the moduli-space metric in precisely this limit which corresponds to small warping in the orbifold direction. Concretely, we will keep terms up to ${\cal O}(\e )$ and neglect all terms of ${\cal O}(\e^2 )$ and higher in our computation. This implies a dramatic simplification since the function $f$, which enters the kink solution Eq. with an ${\cal O}(\e )$ suppression, drops out at this order. Inserting – and into , one then finds for the moduli-space metric $$G = \left(\begin{array}{ccc} \frac{1}{2}&0&0\\
0&\frac{3}{2}+e^{-\f}\left<\left(\partial_\b\c\right)^2\right>&
e^{-\f}\left<\partial_\b\c\partial_z\c\right>\\
0&e^{-\f}\left<\partial_\b\c\partial_z\c\right>&
e^{-\f}\left<\left(\partial_z\c\right)^2\right>
\end{array}\right) + {\cal O}(\e^2 )\; .$$ Using the solution for $\c$ we finally obtain $$\begin{aligned}
G_{\f\f} &=& \frac{1}{2}, \label{Gff}\\
G_{\b\b} &=& \frac{3}{2}+\left( e^{-\b}\m\right)^2\e_0 e^{\b-\f}
\left[ J_2\left( e^\b\m^{-1}(1-z)\right)-J_2\left(
-e^\b\m^{-1}z\right)\right], \label{Gbb}\\
G_{\b z} &=& -e^{-\b}\m\e_0 e^{\b -\f}\left[J_1\left( e^\b\m^{-1}(1-z)
\right)-J_1\left( -e^\b\m^{-1}z\right)\right], \label{Gbz}\\
G_{zz} &=& \e_0 e^{\b - \f}\left[ J_0\left( e^\b\m^{-1}(1-z)\right)
-J_0\left( -e^\b\m^{-1}z\right)\right], \label{Gzz}\end{aligned}$$ as the only non-vanishing components of $G$. Here, the functions $J_n$ are defined by $$J_n(x) = \frac{(\p\r )^2\m}{\e_0}\int_0^x d\tilde{x}\; \tilde{x}^n
U(C(\tilde{x}))
= \frac{1}{\s}\int_{C(0)}^{C(x)}d\c\left( C^{-1}(\c )\right)^n
w' (\c )\; , \label{Jn}$$ where we recall that the function $C$, defined in Eq. , can be computed for any given potential $U$ and is, by itself, independent of the moduli. The above result, good to ${\cal O}(\e )$, for the sigma model metric explicitly displays the complete moduli dependence of $G$ and its only implicit features are the dependence on the potential $U$ and a simple integral thereof. We find it quite remarkable that the calculation can be pushed this far without an explicit choice for the potential $U$.
The result – suggests the existence of another expansion parameter besides $\e$, namely the quantity $e^{-\b}\m$. It represents the ratio of the kink’s width to the size of the orbifold. Working in a thin-wall approximation, where this ratio is much smaller than one, our results simplify even further. Clearly, we then have to good accuracy $$G_{\b\b}=\frac{3}{2}\; ,\qquad G_{\b z} = 0\; .$$ For the remaining non-trivial component $G_{zz}$ we can explicitly carry out the integral and find by inserting into Eq. $$G_{zz} = \e_0 e^{\b -\f}F(\b ,z)$$ where $$\begin{aligned}
F(\b ,z) &=& \frac{1}{\s}\left[ w\left( C\left( e^\b\m^{-1}(1-z)\right)\right)
- w\left( C\left( -e^\b\m^{-1}z\right)\right)\right], \label{F1}\\
&=& \frac{1}{\s}\left[ w(y=\p\r )-w(y=0)\right] \; .\label{F2}\end{aligned}$$ Here, the notation $w(y=0)$ ($w(y=\p\r )$) indicates the value of the superpotential evaluated for the kink solution at the boundary $y=0$ ($y=\p\r$).
To summarise, in the limit of both the strong-coupling expansion parameter and the ratio of wall to orbifold size being smaller than one, that is, $$\e = \e_0 e^{\b -\f} < 1\; , \qquad \frac{\m}{e^\b} < 1\; ,$$ the moduli-space metric for the kink solution is well-approximated by $$G = {\rm diag}\left(\frac{1}{2},\frac{3}{2},\e_0 e^{\b -\f}F(\b ,z)\right)
\label{Gf}$$ with associated four-dimensional effective action $$\tilde{S}_4 = -\frac{1}{2\k_P^2}\int_{M_4}\sqrt{-g_4}\left[
\frac{1}{2}R_4+\frac{1}{4}\partial_\m\f\partial^\m\f
+\frac{3}{4}\partial_\m\b\partial^\m\b
+\frac{1}{2}\e_0 e^{\b -\f}F(\b ,z)\partial_\m z
\partial^\m z\right]\; . \label{S4f}$$ Here, the function $F$ is as defined in Eq. .
It is interesting to compare this four-dimensional effective action to its counterpart obtained in the M-theory case. Obviously, the only difference arises in the kinetic term for $z$ where the function $F$ appears in but not in the M-theory result . A detailed comparison requires computing this function from Eq. by inserting an explicit potential $U$. However, the qualitative features of $F$ can be easily read off from the alternative expression . It states that $F$ is the difference of the superpotential on the two boundaries in units of $\s$ and, hence, it is simply the “charge difference” between the two boundaries. Suppose that the kink’s core is well within the physical space and away from the boundaries, so that $z\in [0,1]$ and sufficiently different from the boundary values $0$, $1$. The field $\c$ will then be very close to the minimum $\c = nv$ at the $y=0$ boundary and very close to the minimum $\c = (n+1)v$ at the other boundary. The charge difference between the boundaries and, hence, the function $F$, is, therefore, very close to one. If, on the other hand, the virtual position of the kink’s core is at $z>1$ ($z<0$) and sufficiently far from the boundary, $\c$ will be close to the minimum $\c = nv$ ($\c = (n+1)v$) on [*both*]{} boundaries. Hence the function $F$ is approximately zero in this case. This obviously implies a non-trivial behaviour of $F$ close to the boundaries for $z\simeq 0$ and $z\simeq 1$. As a result, for the kink being inside the physical space and away from the boundaries by a distance large compared to its width, the effective action completely agrees [^6] with the M-theory result . Conversely, if the kink approaches one of the boundaries or collides with it, that is, $z\rightarrow 0,1$, the function $F$ becomes non-trivial and the effective theories and differ substantially. It is clear, then, that the effective theory carries some memory of the presence of the boundaries while the M-theory action does not. For this reason, studying the collision process in the context of is an interesting problem which we will address in Section 6.
An explicit example
===================
In this section, we consider the explicit example of the double-well potential $$U = m^2(v^2-\c^2)^2\; , \label{Uex}$$ where $m$ is a constant. As it stands, this potential does not, of course, satisfy our periodicity requirement for $U$. However, for our purposes this is largely irrelevant since the single-kink solution in which we are interested here probes the potential only between the two minima [^7]. The associated superpotential is given by $$w = m\c \left( v^2-\frac{1}{3}\c^2\right)\; .$$ Hence the elementary charge unit $\s$ and $\e_0$ take the form $$\s = w(v)-w(-v) = \frac{4}{3}mv^3\; ,\qquad
\e_0 = \p\r\s = \frac{4}{3}\p\r mv^3\; .$$
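The algebra behind these identifications is elementary and can also be checked symbolically. The following small sketch (an optional cross-check, not part of the derivation) verifies that $w'(\c)^2=U(\c)$ and that $w(v)-w(-v)=\frac{4}{3}mv^3$ for the double-well example.

```python
# Optional symbolic cross-check (not part of the derivation): verify that the
# double-well superpotential w = m*chi*(v^2 - chi^2/3) satisfies w'(chi)^2 = U(chi)
# and that sigma = w(v) - w(-v) = 4 m v^3 / 3.
import sympy as sp

m, v, chi = sp.symbols('m v chi', positive=True)
U = m**2 * (v**2 - chi**2)**2
w = m * chi * (v**2 - chi**2 / 3)

print(sp.simplify(sp.diff(w, chi)**2 - U))            # prints 0
print(sp.simplify(w.subs(chi, v) - w.subs(chi, -v)))  # prints 4*m*v**3/3
```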
The kink-solution for this potential is of the general form – with the functions $C$ and $f$ given by $$C(x) = v\tanh (x) \label{Cex}$$ and $$f = \frac{1}{\e_0e^\b}\left[ c-\frac{1}{3}v^2\tanh^2\x - \frac{4}{3}
\ln (\cosh \x )\right]\; ,\qquad \x = \frac{e^\b}{\m}\left(
\frac{y}{\p\r}-z\right)\; , \label{fex}$$ where the thickness $\m$ of the kink can be identified as $$\m = \frac{1}{mv\p\r}\; .$$ The constant $c$ in Eq. has to be fixed so that $\langle f\rangle=0$, as discussed before. This leads to an expression involving di-logarithms and we will not carry this out explicitly.
Instead, we consider the limit where the strong-coupling expansion parameter $\e$ remains small, so that $f$ becomes irrelevant and our general result – holds. The functions $J_n$ can now be explicitly computed inserting the potential and into their definition . This leads to $$J_n(x) = \frac{3}{4}\int_0^xd\tilde{x}\frac{\tilde{x}^n}
{\cosh^4\tilde{x}}\; .$$ This, together with Eqs. – completely determines the moduli-space metric for the double-well potential as long as $\e <1$. While the above integrals can be carried out for all relevant values $n=0,1,2$, the cases $n=1$ and $n=2$ lead to somewhat complicated expressions, the latter involving a di-logarithm. However, $J_0$ takes the relatively simple form $$J_0(x) = \frac{1}{2}\tanh x + \frac{\sinh x}{4\cosh^3 x}\; .
\label{J0}$$ As is clear from the general case discussed in the previous section, for a kink with small width, that is, $e^{-\b}\m < 1$, fortunately $J_0$ is the only relevant function. In this limit, the moduli-space metric is, therefore, given by the general form which we repeat for convenience $$G = {\rm diag}\left(\frac{1}{2},\frac{3}{2},\e_0 e^{\b -\f}F(\b ,z)
\right)\; .
\label{Gex}$$ The function $F$, defined in Eq. , now takes the explicit form $$F(\b ,z) = J_0\left( e^\b\m^{-1}(1-z)\right) - J_0\left( -e^\b\m^{-1}z
\right)\; , \label{Fex}$$ where $J_0$ is given in Eq. . Inserting this result into completely determines the four-dimensional kink effective theory for $\e < 1$ and $e^{-\b}\m
<1$. The function $F$ above indeed has the properties mentioned in the previous section, namely $F\simeq 1$ for $z$ well inside the interval $[0,1]$ and $F\rightarrow 0$ for $z\rightarrow\pm\infty$. The typical shape of $F$ as a function of $z$ is shown in Fig. \[fig2\].
![*The function $F$ which enters the effective four-dimensional action of the kink as a function of $z$ for $e^\b\m^{-1}=10$.*[]{data-label="fig2"}](F.eps){height="9cm" width="9cm"}
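As a numerical cross-check of the explicit example, the closed form for $J_0$ can be compared against its defining integral, and $F(\b ,z)$ evaluated directly. In the sketch below the ratio $e^\b\m^{-1}=10$ is taken as in Fig. \[fig2\]; the sample values of $z$ are illustrative choices.

```python
# Numerical cross-check (illustrative values): compare the closed form (J0) with
# the defining integral J_0(x) = (3/4) int_0^x dt / cosh^4(t), and evaluate
# F(beta, z) for e^beta / mu = 10, verifying F ~ 1 inside [0,1] and F -> 0 outside.
import numpy as np
from scipy.integrate import quad

def J0_closed(x):
    return 0.5 * np.tanh(x) + np.sinh(x) / (4.0 * np.cosh(x)**3)

def J0_quad(x):
    val, _ = quad(lambda t: 0.75 / np.cosh(t)**4, 0.0, x)
    return val

print(J0_closed(2.0), J0_quad(2.0))          # the two values should agree

ratio = 10.0                                 # e^beta / mu, as in Fig. 2
def F(z):
    return J0_closed(ratio * (1.0 - z)) - J0_closed(-ratio * z)

for z in (-2.0, 0.0, 0.5, 1.0, 3.0):
    print(z, F(z))                           # ~0 far outside [0,1], ~1 well inside
```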
Kink evolution equations
========================
We will now study the time-evolution of the kink based on the effective four-dimensional action derived in the previous section. The collision of the kink with one of the boundaries will, of course, be of particular interest.
We focus on simple time-dependent backgrounds and a metric of Friedmann-Robertson-Walker form with flat spatial sections, that is $$\begin{aligned}
ds_4^2 &=& -dt^2+e^{2\a (t)}d{\bf x}^2, \\
\f^I &=& \f^I (t),\end{aligned}$$ where $(\f^I)=(\f ,\b ,z)$. Let us first review the general structure of the evolution equations for backgrounds of this form. From the general sigma-model action one obtains the equations of motion $$\begin{aligned}
3\dot{\a}^2 &=& \frac{1}{2}G_{IJ}\dot{\f}^I\dot{\f}^J, \\
2\ddot{\a}+3\dot{\a}^2 &=& -\frac{1}{2}G_{IJ}\dot{\f}^I\dot{\f}^J, \\
\ddot{\f}^I+3\dot{\a}\dot{\f}^I+\G_{JK}^I\dot{\f}^J\dot{\f}^K &=& 0\; ,\end{aligned}$$ where $\G_{JK}^I$ is the Christoffel connection associated to the sigma-model metric $G_{IJ}$ and the dot denotes the derivative with respect to time. Adding the first two equations, we obtain an equation for the scale factor $\a$ alone which can be immediately integrated. Discarding trivial integration constants one finds $$\a = \frac{1}{3}\ln |t|\; .$$ This power-law evolution with power $1/3$ is as expected for a universe driven by kinetic energy only. We also remark that we have, as usual, a $(-)$ branch, $t<0$, with decreasing $\a$ and a future curvature singularity at $t=0$ and a $(+)$ branch, $t>0$, with increasing $\a$ and a past curvature singularity at $t=0$. Our subsequent results will apply to both branches although, for the concrete discussion, we will mostly focus on the positive-time branch, where the universe expands. We find it convenient to use the scale factor $\a$, rather than $t$, as the time parameter in the following. The remaining evolution equations can then be written in the form $$\begin{aligned}
{\f^I}^{\prime\prime}+\G_{JK}^I{\f^J}'{\f^K}' &=& 0, \label{geo} \\
\frac{1}{2}G_{IJ}{\f^I}'{\f^J}' &=& 3\; ,\label{cons}\end{aligned}$$ where the prime denotes the derivative with respect to $\a$. Hence, the scalar fields $\f^I$, viewed as functions of the scale factor $\a$, move along geodesics in moduli space, with initial conditions subject to the constraint .
Let us now apply these equations to the moduli space metric for the kink in a double-well potential, as computed in the previous section. To keep the formalism as simple as possible we will focus on the case of a small kink width, that is, $e^{-\b}\m\ll 1$. The moduli-space metric is then specified by Eqs. , and . Inserting this metric into Eq. we find $$\begin{aligned}
\f^{\prime\prime} +\e_0e^{\b -\f}F {z'}^2 &=& 0 ,\label{eom1}\\
\b^{\prime\prime}-\frac{1}{3}\e_0e^{\b -\f}F\left( 1+e^\b\m^{-1}K\right)
{z'}^2 &=& 0 ,\label{eom2}\\
z^{\prime\prime} +(\b ' -\f ')z'+e^\b\m^{-1}K\b 'z'-\frac{1}{2}e^\b
\m^{-1}L{z'}^2 &=& 0 ,\label{eom3}\end{aligned}$$ while the constraint turns into $$\frac{1}{4}{\f '}^2+\frac{3}{4}{\b '}^2+\frac{1}{2}\e_0 e^{\b -\f}F{z'}^2
= 3\; . \label{eom4}$$ The functions $K=K(\b ,z)$ and $L=L(\b ,z)$ are related to derivatives of $F=F(\b ,z)$ and can be defined in terms of $J_0$, Eq. , as follows $$\begin{aligned}
F(\b ,z) &=& J_0\left( e^\b\m^{-1}(1-z)\right) - J_0\left( -e^\b\m^{-1}z
\right) ,\\
K(\b, z) &=& \frac{(1-z)J_0'\left( e^\b\m^{-1}(1-z)\right)
+zJ_0'\left( -e^\b\m^{-1}z\right)}{F(\b ,z)}, \\
L(\b ,z) &=& \frac{J_0'\left( e^\b\m^{-1}(1-z)\right)
-J_0'\left( -e^\b\m^{-1}z\right)}{F(\b ,z)}\; . \end{aligned}$$ The typical shape of $F$ has been indicated in Fig. \[fig2\]. Fig. \[fig3\] shows the shape of $K$ and $L$ as a function of $z$.
![*The functions $K$ and $L$ which enter the effective equations of motion for the kink as functions of $z$ for $e^\b\m^{-1}=10$.*[]{data-label="fig3"}](KL.eps){height="9cm" width="12cm"}
The equations of motion are generally quite complicated due to these functions. However, as the figures show, $F$, $K$ and $L$ are non-trivial only in small regions around the boundaries with size set by $\m e^{-\b}$ (the width of the kink relative to the orbifold size), while they are relatively simple outside these critical regions. It is, therefore, useful to discuss the asymptotic form of the equations of motion away from the boundaries. First of all, for $z\in [0,1]$ and away from the boundaries we have $$F\simeq 1\; ,\qquad K\simeq 0\; ,\qquad L\simeq 0\; .$$ Hence, for the kink being well inside the physical space, the equations of motion – greatly simplify and become, in fact, identical to the analogous equations derived from the M-theory action .
On the other hand, for $z<0$ and away from the boundary we have $$F\simeq 0\; ,\qquad K\simeq 4z\; ,\qquad L \simeq -4\; ,$$ There are analogous results for $z>1$ but we will focus on the case $z<0$ for concreteness. Inserting these asymptotic expressions, we see that the equations , and for $\f$ and $\b$ decouple from the $z$ equation. They become, in fact, the equations for freely rolling radii and can be easily integrated to give $$\f = 3p_\f\a + \f_0\; ,\qquad \b = 3p_\b\a + \b_0 \label{rr}$$ where $\f_0$ and $\b_0$ are arbitrary constants and the expansion powers $p_\f$ and $p_\b$ satisfy the constraint $$p_\f^2+3p_\b^2 = \frac{4}{3}\label{crr}$$ which follows from . The evolution of the kink can now be studied in the background of these freely rolling radii. Inserting the above solutions for $\f$ and $\b$ into the equation for $z$, Eq. , we find $$z^{\prime\prime}+3\d z^\prime+2\m_0^{-1}e^{3p_\b\a}(6p_\b z+z')z' = 0\; ,
\label{zeq}$$ where $$\m_0 = \frac{\m}{e^{\b_0}}$$ is the width of the kink relative to the orbifold size initially at $\a =0$ and $$\d = p_\b - p_\f\; .$$ Hence, for $z<0$ and away from the boundary the evolution of the kink is described by the single differential equation .
Kink dynamics and kink-boundary collision
=========================================
We should now study the solutions to the system –. Given that our main interest is in the collision of the kink with a boundary, ideally, we would like to find solutions with $z\in [0,1]$ initially which evolve towards $z\rightarrow 0$. Given the complexity of the equations, we cannot possibly hope to achieve this analytically. Later, we will address this problem numerically. However, some progress can be made analytically as long as $z$ is away from the boundaries by using the approximate equations for $ z\in [0,1]$ or $z<0$ discussed in the previous section. One may hope that finding such analytical solutions for the evolution up to shortly before and after the collision will lead to a correct qualitative picture of the collision process, roughly by gluing together these two types of solutions across the critical boundary region. As we will see in our numerical analysis, this is indeed the case.
Let us start by looking at the case $z\in [0,1]$. As discussed above, as long as $z$ is not too close to one of the boundaries, the equations of motion reduce to the ones obtained from the M-theory effective action . Their solutions have been found in Ref. [@Copeland:2001zp] and are explicitly given by $$\begin{aligned}
\f &=& 3p_{\f ,i}\a +3(p_{\f ,f}-p_{\f ,i})\ln\left( 1+e^{-3\d_i\a}
\right)^{-\frac{1}{3\d_i}}+\f_0, \label{solf}\\
\b &=& 3p_{\b ,i}\a +3(p_{\b ,f}-p_{\b ,i})\ln\left( 1+e^{-3\d_i\a}
\right)^{-\frac{1}{3\d_i}}+\b_0, \label{solb}\\
z &=& \frac{d}{1+e^{3\d_i\a}}+z_0\; .\label{solz}\end{aligned}$$ Asymptotically, for $\a\rightarrow\pm\infty$, these solutions approach freely rolling radii solutions for $\f$ and $\b$ while $z$ becomes constant. The early (late) rolling radii solution is characterised by the expansion powers $p_{\f ,i}$ and $p_{\b ,i}$ ($p_{\f ,f}$ and $p_{\b ,f}$). Both sets of expansion powers are subject to the constraint $$p_{\f ,n}^2+3p_{\b ,n}^2 = \frac{4}{3}$$ where $n=i,f$ and are related by the linear map $$\left(\begin{array}{l}p_{\b ,f}\\p_{\f ,f}\end{array}\right)
= P\left(\begin{array}{l}p_{\b ,i}\\p_{\f ,i}\end{array}\right)\; ,
\qquad P = \frac{1}{3}\left(\begin{array}{rr}1&1\\3&-1\end{array}
\right)\; .\label{map}$$ Further, we have defined the quantity $$\d_i = p_{\b ,i}-p_{\f ,i}$$ which can be restricted, without loss of generality, to $$\begin{array}{lll}
\d_i > 0&\qquad&(-)\mbox{ branch},\\
\d_i < 0&\qquad&(+)\mbox{ branch},
\end{array}$$ We remark that $\d_f$, the analogous quantity at late times, is given by $$\d_f \equiv p_{\b ,f}-p_{\f ,f} = -\d_i$$ as follows from the map . The remaining integration constants $\f_0$, $\b_0$, $z_0$ and $d$ are subject to the restriction $$\f_0-\b_0 = \ln\left(\frac{2\e_0d^2}{3}\right)\; .$$ Note that $z_0$ specifies the initial position of $z$ which moves by a finite coordinate distance $d$ to its final position $z_0+d$.
What is the relevance of these solutions in our context? First, we remind the reader that the above solutions play a double role as exact solutions to the M-theory effective action and as approximate solutions to the kink effective theory if $z\in [0,1]$ and away from the boundaries. In their former role they present another indication that the effective M-theory action , as it stands, is not adequate to describe the collision process since the boundary values $z=0,1$ are in no way singled out. In other words, $z$, as described by these solutions, passes through the boundary without being affected at all. For this reason, they will also be very useful for comparison with solutions to the kink evolution equations, to explicitly see the boundary effect in the latter. In their role as approximate solutions to the kink evolution equations for $z\in [0,1]$ they tell us that the collision can be arranged or avoided depending on a choice of initial conditions. Indeed, the initial position $z_0$ of the kink and the coordinate distance $d$ by which it moves can be chosen arbitrarily. Hence, for the choice $z_0\in [0,1]$ and $z_0+d\in [0,1]$ (and both values away from the boundaries) the entire evolution of the kink is described by the solutions above and a collision with the boundary never occurs. There is, however, a caveat to this argument. While the kink becomes static asymptotically, the strong-coupling expansion parameter $\e$ also necessarily diverges [@Copeland:2001zp], as can be seen from the above solutions. Therefore, we eventually lose control of our approximation and the effective theory breaks down. Clearly, from the arguments so far, we cannot guarantee that the kink remains static when this happens. In this paper, we will not attempt to improve on this, for example by going back to the five-dimensional theory. Instead, we will be content with arranging a certain characteristic behaviour, such as the kink becoming static, to occur for some intermediate period of time before we lose control over the effective theory.
Let us now analyse the evolution of the kink for $z<0$ and away from the boundary (the case $z>1$ is similar, of course). In this case, the system is adequately described by the single approximate equation for $z$ while $\f$ and $\b$ are decoupled and evolve according to one of the rolling radii solutions . Unfortunately, we did not succeed in integrating the $z$ equation in general. However, we can find a number of partial solutions which, as we will see, provide a good indication of the various, qualitatively different types of $z$ evolution.
Let us consider the evolution of $z$ in the background of a special rolling radii solution with a static orbifold, that is, $$p_\b = 0\; ,\qquad p_\f = \pm\frac{2}{\sqrt{3}}$$ where the two possible values of $p_\f$ follow from Eq. . The equation for $z$ then simplifies to $$z^{\prime\prime}-3p_\f z'+2\m_0^{-1}{z'}^2 = 0\; .\label{zeq1}$$ The general solution to this equation can be readily found to be $$z = z_0+\frac{v_c}{3p_\f}\ln\left[ 1+\frac{v_0}{v_c}\left(e^{3p_\f\a}
-1\right)\right]\; , \label{zsol}$$ where $z_0$ and $v_0$ are integration constants specifying the initial position and velocity of $z$ at $\a =0$, that is, $z_0=z(\a =0)$ and $v_0=z'(\a =0)$. Here, we are interested in solutions where $z_0$ is negative and as close to the boundary as is compatible with the validity of . In addition, we need $v_0<0$ so $z$ evolves into the region well-approximated by . The parameter $v_c$ is defined as $$v_c=\frac{3}{2}p_\f\m_0\; .$$ Let us discuss the properties of this solution for an expanding universe starting with the case $p_\f = +2/\sqrt{3}$. It is easy to see from Eq. that, independent of the initial velocity $v_0$, $z$ always diverges to $-\infty$ at some finite value of the scale factor $\a$, in this case. For $p_\f = -2/\sqrt{3}$, however, the situation is somewhat more complicated and depends on the relation between $|v_0|$ and $|v_c|$. One has to distinguish the three cases
- $|v_0|<|v_c|$ : $z$ converges exponentially to a constant
- $|v_0|=|v_c|$ : $z$ diverges to $-\infty$ as $\a\rightarrow\infty$
- $|v_0|>|v_c|$ : $z$ diverges to $-\infty$ at a finite value of $\a$.
Hence, we see that $v_c$ plays the role of a critical velocity. As we will confirm later, these three cases already represent the three types of qualitatively different behaviour which can be observed for the full $z$-equation or even the complete system –.
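For illustration, the closed-form solution given above can be evaluated directly in the three cases. In the following sketch the values of $\m_0$, $z_0$ and the initial velocities are illustrative choices, and the background is the static-orbifold one with $p_\f=-2/\sqrt{3}$.

```python
# Sketch (illustrative values of mu0, z0 and v0): evaluate the closed-form
# solution (zsol) for the static-orbifold background p_beta = 0, p_phi = -2/sqrt(3)
# in the three cases |v0| <, =, > |v_c|.
import numpy as np

mu0, z0 = 0.2, -0.05
p_phi = -2.0 / np.sqrt(3.0)
v_c = 1.5 * p_phi * mu0                      # critical velocity (negative here)

def z(alpha, v0):
    arg = 1.0 + (v0 / v_c) * (np.exp(3.0 * p_phi * alpha) - 1.0)
    return z0 + (v_c / (3.0 * p_phi)) * np.log(arg)

alphas = np.array([0.0, 0.1, 1.0, 2.0, 5.0])
for v0 in (0.5 * v_c, v_c, 2.0 * v_c):       # |v0| below, at, above |v_c|
    with np.errstate(invalid="ignore"):      # log of a negative argument -> nan
        print(v0, z(alphas, v0))   # converges / diverges linearly / nan past the finite-alpha blow-up
```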
We should remark, though, that the second case $|v_0|=|v_c|$, while typical in that $z$ diverges as $\a\rightarrow\infty$, is not representative as far as the nature of the divergence is concerned. While its divergence is linear in $\a$, the more characteristic case is an exponential divergence in $\a$. The existence of such exponential divergences can be seen from the special solution $$z = \frac{\m_0p_\f}{2p_\b}e^{-3p_\b\a}$$ to Eqs. . While this represents an exact solution for all values of $p_\f$ and $p_\b$, we have to restrict signs to $p_\b <0$ and $p_\f >0$ so that $z$ is negative and moves towards $-\infty$. Within this range of $p_\f$ and $p_\b$, however, the above solution shows an exponential divergence of $z$ as $\a\rightarrow\infty$.
After having identified the qualitatively different types of $z$ evolution we can now ask more systematically, based on the $z$ equation , which type is realized for a given set of parameters and initial conditions. As can be seen from a rescaling of $z$ in Eq. the type of evolution cannot depend on the value of $\m_0$. The only possible dependence is, therefore, on $p_\b$ (recall that, for given $p_\b$, $p_\f$ is determined, up to a sign, from Eq. ) and the initial velocity $v_0=z'(\a =0)$. A relevant question in this context concerns the stability of the solution $z=$ const which can be viewed as the limit of the exponentially converging case 1. Writing $$z=z_0+\z (\a )\; ,$$ where $z_0<0$, the linearised evolution equation for $\z$ is, from Eq. , given by $$\z^{\prime\prime} = -3\left[\d +4\m_0^{-1}z_0p_\b e^{3p_\b\a}\right]\z '\; .$$ We conclude that the solution $z=$ const can only be stable if $$p_\b < 0\; \quad\mbox{and}\quad \d = p_\b -p_\f > 0\; .\label{stablecond}$$ It is only then that we expect the first case of convergent $z$ to be realized.
This can indeed be verified by a numerical integration of Eq. . Solutions with converging $z$ exist if and only if the conditions are satisfied and, in addition, if the initial velocity $|v_0|$ is smaller than a certain critical velocity $v_c$. A simple scaling argument shows that $$v_c = h(p_\b ,p_\f )\m_0$$ where $h$ is a function which, from the numerical results, turns out to be of ${\cal O}(1)$ and slowly varying. What happens outside the region ? If we leave this range by crossing $p_\b =0$, we find for small positive $p_\b$ and $|v_0|$ below the critical velocity that $z$ still converges at first but then, in accordance with our analytic argument, develops an instability, which drives it to $-\infty$ at finite $\a$. The intermediate stable phase gradually disappears as one increases $p_\b$. For $p_\b >0$ and $|v_0|$ above the critical velocity one always finds divergence to $-\infty$ at finite $\a$. Hence, for $p_\b >0$ we are always in the third case above. As we leave the region by crossing $\d =0$, we find that case 2 is realized below and case 3 above the critical velocity. However, as $\d$ becomes more negative, the critical velocity decreases rapidly until we are left with case 3 only.
In summary, the converging case 1 is only found in the range and for initial velocities smaller than a certain critical value, while otherwise $z$ always diverges to $-\infty$, typically according to case 3 at finite scale factor $\a$.
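A simple way to reproduce this behaviour numerically is sketched below (illustrative parameter values, not the authors' original code): the asymptotic $z$ equation is integrated at fixed $p_\b$, $p_\f$ satisfying the stability conditions, and the initial speed $|v_0|$ is bisected to locate the critical velocity, which indeed comes out of order $\m_0$.

```python
# Sketch (illustrative parameters, not the authors' code): integrate Eq. (zeq) for
# z < 0 in a background with p_beta < 0 and delta = p_beta - p_phi > 0, and bisect
# on the initial speed |v0| to estimate the critical velocity v_c ~ O(1) * mu0.
import math

mu0 = 0.2
p_beta = -0.3
p_phi = -math.sqrt(4.0 / 3.0 - 3.0 * p_beta**2)   # sign chosen so that delta > 0
delta = p_beta - p_phi

def diverges(v0, z0=-0.05, alpha_max=30.0, h=2e-3):
    """RK4 integration of z'' + 3 delta z' + (2/mu0) e^{3 p_beta a}(6 p_beta z + z') z' = 0."""
    def acc(a, z, dz):
        return -3.0 * delta * dz \
               - (2.0 / mu0) * math.exp(3.0 * p_beta * a) * (6.0 * p_beta * z + dz) * dz
    a, z, dz = 0.0, z0, v0
    while a < alpha_max:
        k1z, k1v = dz, acc(a, z, dz)
        k2z, k2v = dz + 0.5*h*k1v, acc(a + 0.5*h, z + 0.5*h*k1z, dz + 0.5*h*k1v)
        k3z, k3v = dz + 0.5*h*k2v, acc(a + 0.5*h, z + 0.5*h*k2z, dz + 0.5*h*k2v)
        k4z, k4v = dz + h*k3v,     acc(a + h,     z + h*k3z,     dz + h*k3v)
        z  += (h / 6.0) * (k1z + 2.0*k2z + 2.0*k3z + k4z)
        dz += (h / 6.0) * (k1v + 2.0*k2v + 2.0*k3v + k4v)
        a  += h
        if not math.isfinite(z) or abs(z) > 1e3 or abs(dz) > 1e6:
            return True
    return False

lo, hi = 0.0, 5.0 * mu0        # bracket between a converging and a diverging speed
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if diverges(-mid):
        hi = mid
    else:
        lo = mid
print("estimated |v_c| ~", 0.5 * (lo + hi), " (to be compared with mu0 =", mu0, ")")
```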
We can now try to combine the information we have gathered about the evolution of the system before and after the collision to set up criteria which will allow us to decide the outcome of a collision process. Let us consider a particular solution – for the evolution inside the interval $z\in [0,1]$. As we have already mentioned, the distance by which the kink moves is a free parameter so a collision may never occur. Then, this solution describes the full evolution of the system as far as it is accessible within the four-dimensional effective theory. On the other hand, if initial conditions are chosen so that a collision does occur, the particular solution – will determine the velocities $z'_{\rm col}$, $\f '_{col}$ and $\b '_{col}$ right before the collision. We can then, approximately, identify $v_0\simeq z'_{\rm col}$, $3 p_\f\simeq \f '_{col}$ and $3 p_\b\simeq \b '_{col}$ and apply the previous results for the evolution at $z<0$. One concludes that only for a very low-impact collision with small $z'_{\rm col}$ and an orbifold size which, at collision, decreases less rapidly than the dilaton, that is $\b '_{col}<0$ and $\f '_{col}-\b '_{col}>0$, does $z$ converge to a constant. Otherwise $z$ diverges to $-\infty$ and this can, in fact, be viewed as the generic case.
Of course, the criteria above may be somewhat inaccurate since we have ignored the complicated structure of the evolution equations near the boundary. We have, therefore, numerically integrated the full system – to test the above criteria for the outcome of a collision process. It turns out that, in broad terms, the picture remains qualitatively the same.
Starting with $z$ near zero inside the $[0,1]$ interval, we went around the ellipse $(\f '_{\rm col})^2+3 (\b '_{\rm col})^2\simeq 12$. Note that in this case the exact identity cannot be observed since the constraint equation Eq. includes an extra term proportional to $(z'_{\rm col})^2$. Nevertheless the correction is always small since we set $\f _{\rm col}-\b_{\rm col}$ to a large negative value. This makes the initial value for $\epsilon$ very small and allows us, for the cases where $\e$ grows, to follow the evolution for longer times until $\epsilon\simeq 1$ and the four-dimensional effective theory breaks down. We also chose a large initial $\b_{\rm
col}$ so that $e^{-\b}\mu$ remains as small as possible during the evolution, for the cases with $\b'_{\rm col}<0$. In all cases we set $\epsilon_0=1$ and $\mu=0.2$.
For each of these sets of initial conditions we then varied $z'_{\rm col}$ from zero upwards and looked for changes in the large time behaviour of $z$. The numerical results were obtained by evolving Eqs. – using a fourth-order fixed step Runge-Kutta method. The accuracy of the method was checked by confirming that the constraint equation Eq. was satisfied throughout the evolution. The individual terms on the left hand side of Eq. should sum to 3, and typically after 2000 time-steps of size 0.01 the deviation from this value was smaller than 0.01%. In the worst cases, where the equations of motion are no longer valid because one of the assumptions has broken down, the deviation never gets above 0.2%.
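A minimal sketch of this numerical set-up is given below. It is not the authors' original code, but it implements the fixed-step fourth-order Runge-Kutta evolution of the kink equations with the constraint monitored as an accuracy check, using the functions $F$, $K$, $L$ built from $J_0$ and the initial data quoted in the caption of Fig. \[fig4\] (taken here as the data at $\a=0$).

```python
# A sketch of the numerical set-up described above (not the authors' original code):
# fixed-step fourth-order Runge-Kutta evolution of Eqs. (eom1)-(eom3), with the
# constraint (eom4) monitored as an accuracy check.  The parameters eps0 = 1 and
# mu = 0.2 and the initial data are those quoted for Fig. 4.
import numpy as np

eps0, mu = 1.0, 0.2

def J0(x):
    return 0.5 * np.tanh(x) + np.sinh(x) / (4.0 * np.cosh(x)**3)

def dJ0(x):                                  # J_0'(x) = (3/4) / cosh^4(x)
    return 0.75 / np.cosh(x)**4

def rhs(state):
    phi, beta, z, dphi, dbeta, dz = state
    s = np.exp(beta) / mu                    # e^beta / mu
    xp, xm = s * (1.0 - z), -s * z
    F = J0(xp) - J0(xm)
    K = ((1.0 - z) * dJ0(xp) + z * dJ0(xm)) / F
    L = (dJ0(xp) - dJ0(xm)) / F
    pref = eps0 * np.exp(beta - phi) * F
    ddphi = -pref * dz**2
    ddbeta = pref * (1.0 + s * K) * dz**2 / 3.0
    ddz = -(dbeta - dphi) * dz - s * K * dbeta * dz + 0.5 * s * L * dz**2
    return np.array([dphi, dbeta, dz, ddphi, ddbeta, ddz])

def constraint(state):
    phi, beta, z, dphi, dbeta, dz = state
    s = np.exp(beta) / mu
    F = J0(s * (1.0 - z)) - J0(-s * z)
    return 0.25 * dphi**2 + 0.75 * dbeta**2 + 0.5 * eps0 * np.exp(beta - phi) * F * dz**2

# initial data quoted for Fig. 4, assumed here to be given at alpha = 0:
# (phi, beta, z, phi', beta', z')
y = np.array([16.14, 2.0, 0.027, -3.23, -0.72, -0.12])
h, nsteps = 0.01, 2000
for _ in range(nsteps):
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

print("z(alpha = 20) =", y[2], "  constraint (should stay close to 3):", constraint(y))
```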
In Fig. \[fig4\] we have an example of the first type of behaviour, for a small negative value of $\b'_{\rm col}$. After crossing the boundary at $z=0$ the kink relaxes to a stable constant solution. For early times this solution matches the one obtained from the M-theory effective action for the same initial conditions. Nevertheless, as soon as the kink approaches the boundary the two start differing, converging to different asymptotic values.
![*Position modulus $z$ for the kink (solid line) and M-theory three-brane (dashed line) as a function of the scale factor $\a$. The initial conditions have been chosen as $z_{\rm col}=0.027$, $z'_{\rm col}=-0.12$, $\b_{\rm col}=2.0$, $\b'_{\rm col}=-0.72$, $\f_{\rm col}=16.14$, $\f'_{\rm col}=-3.23$* []{data-label="fig4"}](sol1.eps){height="9cm" width="9cm"}
For a slightly higher value of the initial velocity the difference is even more striking, as shown in Fig. \[fig5\]. In this case $z$ diverges in finite time, indicating that we are above the critical velocity. This third case turns out to be the most common, as already observed in the simplified system. Only for $\b'_{\rm col}<0$ and $\b'_{\rm col}-\f'_{\rm col}>0$ and $z'_{\rm col}$ below the critical velocity does the system avoid singular behaviour.
![*Same as in Fig. \[fig4\] but with $z'_{\rm col}=-0.14$.*[]{data-label="fig5"}](sol2.eps){height="9cm" width="9cm"}
In Fig. \[fig6\] we have an example for a solution corresponding to case 2. Here both $\b'_{\rm col}$ and $\f'_{\rm col}$ are negative and we are below the critical velocity. As a consequence of $\d \simeq \b'_{\rm col}-\f'_{\rm col}<0$, $z$ does not relax to a constant but its magnitude increases exponentially instead. In this case the solution has to be taken with care, since $e^{-\b}\mu$ quickly becomes large in the exponential regime and the equations of motion stop providing a reliable approximation.
Finally, we have checked that once we go above the critical velocity, $z$ always diverges at finite $\alpha$. It is well known that in $\phi^4$ theory, when a kink-anti-kink collision takes place above a certain limit velocity, the pair reflects and bounces back [@Campbell] (for lower velocities they can either reflect or form a bound state). This behaviour is a consequence of a resonance effect between the kink pair and higher field modes, so we should not be surprised not to observe it in the context of our four-dimensional effective action. This does not yet exclude the possibility of a bounce in a high-velocity regime which is accessible only in the context of the full five dimensional theory, a question which is currently under investigation [@inprep].
![*Position modulus $z$ for the kink (solid line) and M-theory three-brane (dashed line) as a function of the scale factor $\a$. The initial conditions have been chosen as $z_{\rm col}=0.027$, $z'_{\rm col}=-0.060$, $\b_{\rm col}=2.0$, $\b'_{\rm col}=-1.77$, $\f_{\rm col}=14.75$, $\f'_{\rm col}=-1.61$*[]{data-label="fig6"}](sol3.eps){height="9cm" width="9cm"}
What do these results imply in terms of the five-dimensional defect model ? As we have seen, if $z$ starts its evolution within the interval $[0,1]$ and subsequently collides with a boundary (at $z=0$), it is generically driven to $-\infty$ very rapidly. It should be stressed that the $z$ kinetic energy remains finite at this singularity. Nevertheless, we do expect the effective four-dimensional theory to break down eventually, as $z\rightarrow -\infty$. This is because some of the higher-order terms we have neglected are likely to grow with $z$, in a way similar to the linear $z$ term in Eq. . However, at least for sufficiently small expansion parameters $\e$ and $\m e^{-\b}$ the four-dimensional theory will be valid some way into the singularity. Hence, we can conclude that a five-dimensional kink, interpolating between the vacua $\c = nv$ and $\c = (n+1)v$, which collides with the boundary at $z=0$, effectively disappears and leaves the field $\c$ in the vacuum state $\c = (n+1)v$ (and an analogous statement holds for collision with the boundary at $z=1$). From the M-theory perspective, such a process corresponds to a transition $$(\b_1,\b_2,\b_3) = (n,-(n+1),1)\longrightarrow
(\b_1,\b_2,\b_3) = (n+1,-(n+1),0)$$ between two different sets of charges and, hence, topologically different compactifications.
Conclusion and outlook
======================
In this paper, we have presented a toy defect model for five-dimensional heterotic brane-world theories, where three-branes are modelled by kink solutions of a bulk scalar field $\c$. We have shown that the vacuum states of this defect model correspond to a class of topologically distinct M-theory models characterised by the charges $\b_1$ and $\b_2$ on the boundaries and the three-brane charge $\b_3$. Specifically, we have seen that a state where $\c$ equals one of the minima $\c = \c_n=nv$ of the potential, where $n\in{\bf Z}$, corresponds to a state with charges $(\b_1,\b_2,\b_3)=(n,-n,0)$, that is, an M-theory model without three-branes. If, on the other hand, $\c$ represents a kink solution interpolating between the minima $\c =
nv$ and $\c = (n+1)v$ the associated M-theory charges are $(\b_1,\b_2,\b_3)=(n,-(n+1),1)$ corresponding to a model with a single-charged three-brane.
We have computed the effective four-dimensional action associated to a kink solution and have studied the time-evolution of a kink in this context. Our results show that, generically, a collision of the kink with a boundary will lead to a transition between the two types of vacua mentioned above. In other words, the kink will disappear after collision, which corresponds to a transition between a state with a single-charged three-brane and a state without a three-brane.
There are several interesting directions which may be pursued on the basis of these results. Clearly, our original M-theory model as well as the associated defect model are rather simple and a number of possible extensions and modifications come to mind. First of all, we may try to modify our defect model by including more than one additional bulk scalar field, in particular to allow for exact BPS multi-kink solutions. One may ask whether the defect model can be embedded into a five-dimensional $N=1$ supergravity theory as is the case for the original M-theory model. Further, there are a number of generalisations of five-dimensional heterotic M-theory, such as including a more general set of moduli fields [@Lukas:1998tt], which one may try to implement into the defect model. For example, including the general set of Kahler moduli would allow one to study topological transitions of the underlying Calabi-Yau space through flops, in addition to the types of topology change considered in this paper.
Perhaps the most interesting direction is to study the evolution of more complicated configurations of our defect model . For example, one could envisage evolving the field $\c$ from some initial (say thermal) distribution to see which type of brane-network develops at late time [@inprep]. In particular, one would like to answer the important question of whether the system can evolve from a brane-gas to a brane-world state. If this is indeed what happens, such an approach will lead to predictions for the late-time brane-world that has evolved, given a certain class of plausible initial states. Concretely, within the context of the simple model presented in this paper, we may expect predictions for the charges $\b_i$ in this case. As we have discussed, the values of these charges are correlated with important properties of the theory such as the type of gauge group. Optimistically, we may therefore hope that our approach leads to predictions for such important low-energy data, at least within a restricted class of associated M-theory compactifications.
[**Acknowledgements**]{}\
A. L. is supported by a PPARC Advanced Fellowship. N. D. A. is supported by a PPARC Post-Doctoral Fellowship.
[99]{}
P. Horava and E. Witten, “Eleven-Dimensional Supergravity on a Manifold with Boundary,” Nucl. Phys. B [**475**]{} (1996) 94 \[hep-th/9603142\]. E. Witten, “Strong Coupling Expansion Of Calabi-Yau Compactification,” Nucl. Phys. B [**471**]{} (1996) 135 \[hep-th/9602070\]. P. Horava, “Gluino condensation in strongly coupled heterotic string theory,” Phys. Rev. D [**54**]{} (1996) 7561 \[hep-th/9608019\]. A. Lukas, B. A. Ovrut and D. Waldram, “On the four-dimensional effective action of strongly coupled heterotic string theory,” Nucl. Phys. B [**532**]{} (1998) 43 \[hep-th/9710208\]. A. Lukas, B. A. Ovrut and D. Waldram, “Non-standard embedding and five-branes in heterotic M-theory,” Phys. Rev. D [**59**]{} (1999) 106005 \[hep-th/9808101\]. A. Lukas, B. A. Ovrut, K. S. Stelle and D. Waldram, “The universe as a domain wall,” Phys. Rev. D [**59**]{} (1999) 086001 \[hep-th/9803235\]. J. R. Ellis, Z. Lalak, S. Pokorski and W. Pokorski, “Five-dimensional aspects of M-theory dynamics and supersymmetry breaking,” Nucl. Phys. B [**540**]{} (1999) 149 \[hep-ph/9805377\]. A. Lukas, B. A. Ovrut, K. S. Stelle and D. Waldram, “Heterotic M-theory in five dimensions,” Nucl. Phys. B [**552**]{} (1999) 246 \[hep-th/9806051\]. B. Andreas, “On vector bundles and chiral matter in N = 1 heterotic compactifications,” JHEP [**9901**]{} (1999) 011 \[hep-th/9802202\]. G. Curio, “Chiral matter and transitions in heterotic string models,” Phys. Lett. B [**435**]{} (1998) 39 \[hep-th/9803224\]. R. Donagi, A. Lukas, B. A. Ovrut and D. Waldram, “Non-perturbative vacua and particle physics in M-theory,” JHEP [**9905**]{} (1999) 018 \[hep-th/9811168\]. R. Donagi, A. Lukas, B. A. Ovrut and D. Waldram, “Holomorphic vector bundles and non-perturbative vacua in M-theory,” JHEP [**9906**]{} (1999) 034 \[hep-th/9901009\]. R. Donagi, B. A. Ovrut, T. Pantev and D. Waldram, “Standard models from heterotic M-theory,” hep-th/9912208. R. Donagi, B. A. Ovrut, T. Pantev and D. Waldram, “Non-perturbative vacua in heterotic M-theory,” Class. Quant. Grav. [**17**]{} (2000) 1049. R. Donagi, B. A. Ovrut, T. Pantev and D. Waldram, “Standard-model bundles,” math.ag/0008010. E. Witten, “Small Instantons in String Theory,” Nucl. Phys. B [**460**]{} (1996) 541 \[hep-th/9511030\].
O. J. Ganor and A. Hanany, “Small $E_8$ Instantons and Tensionless Non-critical Strings,” Nucl. Phys. B [**474**]{} (1996) 122 \[hep-th/9602120\].
B. A. Ovrut, T. Pantev and J. Park, “Small instanton transitions in heterotic M-theory,” JHEP [**0005**]{} (2000) 045 \[hep-th/0001133\].
O. DeWolfe, D. Z. Freedman, S. S. Gubser and A. Karch, “Modeling the fifth dimension with scalars and gravity,” Phys. Rev. D [**62**]{} (2000) 046008 \[hep-th/9909134\]. M. Brandle and A. Lukas, “Five-branes in heterotic brane-world theories,” Phys. Rev. D [**65**]{} (2002) 064024 \[hep-th/0109173\]. J. Derendinger and R. Sauser, “A five-brane modulus in the effective N = 1 supergravity of M-theory,” Nucl. Phys. B [**598**]{} (2001) 87 \[hep-th/0009054\].
A. Strominger, “Heterotic Solitons,” Nucl. Phys. B [**343**]{} (1990) 167 \[Erratum-ibid. B [**353**]{} (1991) 565\]. E. J. Copeland, J. Gray and A. Lukas, “Moving five-branes in low-energy heterotic M-theory,” Phys. Rev. D [**64**]{} (2001) 126003 \[hep-th/0106285\]. K. Skenderis and P. K. Townsend, “Gravitational stability and renormalization-group flow,” Phys. Lett. B [**468**]{} (1999) 46 \[hep-th/9909070\]. F. Bonjour, C. Charmousis and R. Gregory, “The dynamics of curved gravitating walls,” Phys. Rev. D [**62**]{} (2000) 083504 \[gr-qc/0002063\]; B. Carter and R. Gregory, “Curvature corrections to dynamics of domain walls,” Phys. Rev. D [**51**]{} (1995) 5839 \[hep-th/9410095\]. M. A. Shifman, “Degeneracy and continuous deformations of supersymmetric domain walls,” Phys. Rev. D [**57**]{} (1998) 1258 \[hep-th/9708060\]; C. Bachas, J. Hoppe and B. Pioline, “Nahm equations, N = 1\* domain walls, and D-strings in AdS(5) x S(5),” JHEP [**0107**]{} (2001) 041 \[hep-th/0007067\]; J. P. Gauntlett, D. Tong and P. K. Townsend, “Multi-domain walls in massive supersymmetric sigma-models,” Phys. Rev. D [**64**]{} (2001) 025010 \[hep-th/0012178\]; A. A. Izquierdo, M. A. Leon and J. M. Guilarte, “The kink variety in systems of two coupled scalar fields in two space-time dimensions,” Phys. Rev. D [**65**]{} (2002) 085012 \[hep-th/0201200\]; D. Tong, “The moduli space of BPS domain walls,” Phys. Rev. D [**66**]{} (2002) 025013 \[hep-th/0202012\]. D. K. Campbell, J. F. Schonfeld and C. A. Wingate “Resonance structure in kink-antikink interactions in $\phi^4$ theory,” Physica [**9**]{} D (1983) 1.
N. D. Antunes, E. J. Copeland, M. Hindmarsh and A. Lukas, in preparation.
[^1]: email: [email protected]
[^2]: email: [email protected]
[^3]: email: [email protected]
[^4]: email: [email protected]
[^5]: Note that, in the absence of three-branes, we have $\a_2=-\a_1$ from the cohomology condition . Therefore, also the charge on the second boundary is correctly being taken care of by our model.
[^6]: We recall that our kink carries a single charge and we should, therefore, set $\b_3=1$ in Eq. to obtain perfect agreement.
[^7]: One way to satisfy all earlier requirements is to restrict the potential to the interval $[-v,v]$ and continue it periodically outside. The subsequent results do not depend on whether one works with this periodic version of the potential or simply with its original form .
|
---
abstract: 'In this note we present a reconstructive algorithm for solving the cross-sectional pipe area from boundary measurements in a tree network with one inaccessible end. This is equivalent to reconstructing the first order perturbation to a wave equation on a quantum graph from boundary measurements at all network ends except one. The method presented here is based on a time reversal boundary control method originally presented by Sondhi and Gopinath for one dimensional problems and later extended by Oksanen to higher dimensional manifolds. The algorithm is local, so it is applicable to complicated networks if we are interested only in a part isomorphic to a tree. Moreover the numerical implementation requires only one matrix inversion or least squares minimization per discretization point in the physical network. We present a theoretical solution existence proof, a step-by-step algorithm, and a numerical implementation applied to two numerical experiments.'
address:
- '$^1$*Department of Civil and Environmental Engineering, Hong Kong University of Science and Technology, Hong Kong*'
- '$^2$*Jockey Club Institute for Advanced Study, Hong Kong University of Science and Technology, Hong Kong*'
- '$^3$*Department of Mathematics and Statistics, University of Helsinki, Finland*'
author:
- 'Emilia Bl[å]{}sten$^{1,2,3}$'
- Fedi Zouari$^1$
- Moez Louati$^1$
- 'Mohamed S. Ghidaoui$^1$'
bibliography:
- './network\_area\_reconstruction.bib'
title: 'Blockage detection in networks: the area reconstruction method'
---
Introduction
============
We present a reconstruction algorithm and its numerical implementation for solving for the pipe cross-sectional area in a water supply network from boundary measurements. Mathematically this is modelled by the frictionless waterhammer equations on a quantum graph, with Kirchhoff’s law and the law of continuity on junctions. The waterhammer equations on a segment $P = {({0,\ell})}$ are given by [@Ghidaoui2005; @Wylie1993] $$\begin{aligned}
&\partial_t H(t,x) = - \frac{a^2(x)}{g A(x)} \partial_x Q(t,x), &&
t\in{\mathbb{R}}, x\in P,\label{introWH1}\\ &\partial_t Q(t,x) = - g A(x)
\partial_x H(t,x), && t\in{\mathbb{R}}, x\in P,\label{introWH2}\end{aligned}$$ where
- $H$ is the hydraulic pressure (piezometric head) inside the pipe $P$, with dimensions of length,
- $Q$ is the pipe discharge or flow rate in the direction of increasing $x$. Its dimension is length$^3$/time,
- $a$ is the wave speed in length/time,
- $g$ is the gravitational acceleration in length/time$^2$,
- $A$ is the pipe’s internal cross-sectional area in length$^2$.
We model the network by a set of segments whose ends have been joined together. The segments model pipes. In this model we first fix a positive direction of flow on each pipe $P_j$, and set the coordinates ${({0,\ell_j})}$ on it. Then we impose \[introWH1,introWH2\] on the segments. On the vertices, which model junctions, we require that $H(t,x)$ has a unique limit no matter from which direction the point $x$ tends to the vertex. Moreover on any vertex $V$ we require that $$\label{kirchhoff}
\sum_{P_j \text{ connected to } V} \nu_j Q_j = 0$$ where $\nu_j \in \{+1,-1\}$ gives the *direction of the internal normal vector in coordinates $x$* to pipe $P_j$ at $V$, and $Q_j$ is the boundary flow of pipe $P_j$ at vertex $V$. In other words $\nu_j
Q_j$ is the flow *into* the pipe $P_j$ at the vertex $V$. Hence the sum of the flows into the pipes at each vertex must be zero. Therefore there are no sinks or sources at the junctions, and also none in the network in general, except for the ones at the boundary of the network that are used to create our input flows for the measurements.
The inverse problem we set out to solve is the following. We assume that the wave speed $a(x)=a$ is constant; if not, see \[discussion\]. Given a tree network with $N+1$ ends $x_0,x_1,\ldots, x_N$, we can set the flow and measure the pressure at these points except for $x=x_0$. Using this information, can we deduce $A(x)$ for $x$ inside the network?
The problem of finding pipe areas arises in systems such as water supply networks and pressurized sewers, due to the formation of blockages during their lifetime [@area1; @Tolstoy2010]. In water supply pipes, blockages increase energy consumption and the potential for water contamination. Blockages in sewer pipes increase the risk of overflows in waste-water collection systems and pose a risk to public health and the environment. Detection of these anomalies improves the effectiveness of pipe replacement and maintenance, and hence improves environmental health.
An analogous and important problem is fault detection in electric cables and feed networks. This problem can be solved in the same way as blockage detection because of the analogy between the waterhammer equations and the telegrapher’s equations [@Jing2018].
Mathematically the simplest network is a segment joining $x_1$ to $x_0$. In this setting, the problem formulated above was solved by [@Sondhi--Gopinath]. Various other algorithms for the one-segment problem were developed both before and after; see [@Bruckstein1985] for a review. Later, some mathematicians turned their attention to the network setting and others to higher dimensional manifolds. Tree networks, which are also the setting of this manuscript, received continued attention. For tree networks, Belishev’s boundary control method [@Belishev-network-paper1; @Belishev-network-paper2] showed that a tree network can be completely reconstructed given access to all boundary points. These results were improved more recently by Avdonin and Kurasov [@Kurasov--tree]. A unified approach that uses Carleman estimates and is applicable to various equations on tree networks was introduced by Baudouin and Yamamoto [@Baudouin--Yamamoto]. Also, [@LeafPeeling] implies that it is possible to solve the problem in the presence of various different boundary conditions during the measurements. A network containing loops presents various challenges to solving for the cross-sectional area from boundary measurements [@Belishev--loops]. In [@Kurasov--tree] the authors furthermore show that access to all but one boundary vertex provides enough information, and also that if the network topology is known a priori, then the simpler backscattering data is enough to solve the inverse problem. This latter data is easier to measure: the pressure needs to be measured only at the same network end from which the flow pulse is sent.
Our paper has two goals: 1) to solve the inverse problem by a reconstruction algorithm, and 2) to provide a simple and concrete algorithm that is easy to implement on a computer. In other words, we focus on the reconstruction aspect of the problem, given the full necessary data, and show that it is possible to build an efficient numerical algorithm for reconstruction. From the point of view of implementing a reconstruction, the papers cited above are of varying levels of difficulty, and we admit that this is partly a question of experience and opinion. In our view, apart from [@Sondhi--Gopinath], they focus mostly on the important theoretical foundations but lack clarity of algorithmic presentation for non-experts. Therefore one of the major points of this text is a clearly defined algorithm whose implementation does not require understanding its theoretical foundations. See \[algoSect\].
Our method is based on the ideas of [@Sondhi--Gopinath; @Oksanen] and is a continuation of our investigation of the single pipe case [@area1]. In both of these the main point is to use *boundary control* to produce a wave that induces a constant pressure in a *domain of influence* at a given time. A simple integration by parts then gives the total volume covered by that domain of influence. *Time reversal* is the main tool which allows us to build that wave without knowing a priori what is inside the pipe network.
A numerical implementation and proof of a working regularization scheme for [@Oksanen] in one space dimension was shown in [@Oksanen--regularization]. Earlier, we studied [@Sondhi--Gopinath] and tested it both numerically and experimentally in the context of blockage detection in water supply pipes [@area1]. We also showed that the method can be further extended to detect other types of faults such as leaks [@area3]. The one space-dimensional inversion algorithm can be implemented more simply than in [@Oksanen; @Oksanen--regularization], even in the case of networks. This is the main contribution of this paper. Furthermore we provide a step-by-step numerical implementation and present numerical experiments. Our reconstruction algorithm is local and time-optimal. In other words, if we are interested in only part of the network, we can do measurements on only part of the boundary. Furthermore the algorithm uses time measurements just long enough to recover the part of interest. Intuitively, to reconstruct the area $A(x)$ at a location $x$, the wave must reach the location $x$ and reflect back to the measurement location. Any measurement done on a shorter time interval will not be enough to recover it.
Governing Equations
===================
Networks {#networks .unnumbered}
--------
Denote by $\mathbb G$ a tree network. Let $\mathbb J$ be the set of internal junction points and $\mathbb P$ the set of pipes, or segments. The boundary $\partial\mathbb G$ consists of all ends of pipes that are not junctions of two or more pipes. Each of them belongs to a single pipe, unlike junction points, which belong to three or more. There are no junction points connected to exactly two pipes: such two pipes are considered as one, and the point between them is just an ordinary internal point.
Each pipe $P\in\mathbb P$ is modelled by a segment $({0,\ell})$ where $\ell$ is the length of the pipe. This defines a direction of positive flow on the pipe, namely from $x=0$ towards $x=\ell$. The pipes are connected to each other by junction points. Write $(x,P) \sim v$ if $v\in\mathbb J$, $x$ is the beginning ($x=0$) or endpoint ($x=\ell$) of pipe $P\in\mathbb P$, and the latter is connected to $v$ at the beginning ($x=0$) or end ($x=\ell$), respectively.
\[ex1\]
![A simple network.[]{data-label="fig1"}](simpleNet)
An example network is depicted in \[fig1\]. Here $\mathbb J =
\{D\}$, $\partial\mathbb G = \{A,B,C\}$, $\mathbb P = \{AD, BD,
DC\}$. Moreover here is a coordinate representation $$\begin{aligned}
AD = ({0,\SI{400}{\m}})&, &BD = ({0,\SI{300}{\m}})&, &DC =
({0,\SI{1000}{\m}})&, \\ (0, AD) \sim A&, & (0, BD) \sim B&, & (0,
DC) \sim D&,\\ (\SI{400}{\m}, AD) \sim D&, & (\SI{300}{\m}, BD)
\sim D&, & (\SI{1000}{\m}, DC) \sim C&.
\end{aligned}$$
\[ex2\] \[ex1\] can be implemented numerically as follows. In total there are four vertices and three pipes. If vector indexing starts with $1$ we number points as $C=1$, $A=2$, $B=3$, $D=4$ and pipes as $AD=1$, $BD=2$, $DC=3$. The adjacency matrix `Adj` is defined so that pipe number `i` goes from vertex number `Adj(i,1)` to vertex number `Adj(i,2)`.
```
V = 4; % number of vertices
L = [400; 300; 1000]; % length of pipes
Adj = [2 4; 3 4; 4 1]; % adjacency matrix
```
Equations {#equations .unnumbered}
---------
Inside pipe $P$ the perturbed pressure head $H$ and cross-sectional discharge $Q$ satisfy $$\begin{aligned}
&\partial_t H(t,x) = - \frac{a^2}{g A(x)} \partial_x Q(t,x), &&
t\in{\mathbb{R}}, \quad x\in P,\label{WH1}\\ &\partial_t Q(t,x) = - g A(x)
\partial_x H(t,x), && t\in{\mathbb{R}}, \quad x\in P,\label{WH2}\end{aligned}$$ where $a,A,g$ denote the wave speed, cross-sectional area and gravitational acceleration. The sign of $Q$ is chosen so that $Q>0$ means the flow goes from $0$ to $\ell$. In \[ex1\] the positive flow goes from $A$ to $D$, from $B$ to $D$ and from $D$ to $C$. The wave speed is assumed constant.
The pressure is a scalar: if $v\in\mathbb J$ connects two or more pipes $(x_j,P_j) \sim v$, $j=1,2,\ldots$, then $H$ has a unique value at $v$ $$\label{scalarH}
\lim_{\substack{x\to x_j\\x\in P_j}} H(t,x) = \lim_{\substack{x\to
x_k\\x\in P_k}} H(t,x), \qquad t\in{\mathbb{R}}, \quad (x_j,P_j) \sim v
\sim (x_k,P_k).$$ The flow satisfies mass conservation, i.e. a condition analogous to Kirchhoff’s law. The total flow into a junction must be equal to the total flow out of the junction at any time. To state this as an equation we define the *internal normal vector* $\nu$ for any pipe end. If the pipe $P$ has coordinate representation ${(0,\ell)}$ then $\nu(0) = +1$ and $\nu(\ell) = -1$. Recall that $Q>0$ means a positive flow from the direction of $0$ to $\ell$. This means that $\nu(x)Q(t,x)$ is the flow *into* the pipe at point $x\in\{0,\ell\}$. If it is positive then there is a net flow of water into $P$ through $x$. If it is negative then there is a net flow out of $P$ through $x$. Mass conservation is then written as $$\label{Kirchhoff}
\sum_{(x,P)\sim v} \nu(x) Q(t,x) = 0, \qquad t\in{\mathbb{R}}, \quad
v\in\mathbb J.$$
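As a small illustration (the helper below is hypothetical and not part of the paper’s code), the sign $\nu$ at a pipe end can be read directly off an adjacency matrix stored in the convention of \[ex2\]:

```
Adj = [2 4; 3 4; 4 1];  % adjacency matrix in the convention of \[ex2\]
% nu_at(i,v) is +1 if pipe i starts (x=0) at vertex v, -1 if it ends (x=ell)
% there, and 0 if pipe i does not touch v at all.
nu_at = @(pipe, vertex) (Adj(pipe,1) == vertex) - (Adj(pipe,2) == vertex);
% At the junction D (vertex 4) the pipes AD and BD end and the pipe DC starts:
nus = arrayfun(@(p) nu_at(p, 4), 1:3)   % expected: [-1 -1 1]
```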
Initial conditions {#initial-conditions .unnumbered}
------------------
The model assumes unperturbed initial conditions $$\label{zeroInitial}
H(t,x)=Q(t,x)=0, \qquad t<0, \quad x\in P$$ for any pipe $P\in\mathbb P$.
Direct problem {#direct-problem .unnumbered}
--------------
In this section we define the behaviour of the pressure $H$ and pipe cross-sectional discharge $Q$ in the network given a boundary flow and network structure. This is the direct problem. Recall that we have one inaccessible end of the network, $x_0$, whose boundary condition must be an inactive one, i.e. must not create waves when there are no incident waves. To make the problem mathematically well defined we choose for example \[x0BC\]. Other options would work too, for example involving derivatives of $H$ or $Q$, but it is questionable how physically realistic such an arbitrary boundary condition is. The main point is that any inactive condition at the inaccessible end is allowed without risking the reconstruction. This is because the theoretical waves used in the calculations of our reconstruction algorithm never have to reach this final vertex, and so the boundary condition there never has a chance to modify reflected waves.
\[model\] We say that $H$ and $Q$ *satisfy the network wave model with boundary flow $F : {\mathbb{R}}\times\partial\mathbb G\setminus\{x_0\} \to
{\mathbb{R}}$* if $F(t,x)=0$ for $t<0$ and $H,Q$ satisfy \[WH1,WH2\], the junction conditions of \[scalarH,Kirchhoff\] and the initial conditions of \[zeroInitial\]. Furthermore $Q$ must satisfy the boundary conditions $$\begin{aligned}
&\nu(x)Q(t,x)=F(t,x), &&\qquad x\in\partial\mathbb G, x\neq x_0,
t\in{\mathbb{R}}\\ &A(t) Q(t,x) + B(t) H(t,x) = 0, &&\qquad x=x_0,
t\in{\mathbb{R}}\label{x0BC}
\end{aligned}$$ for some given functions or constants $A(t),B(t)$ that we do not need to know.
Note that $\nu Q = F$ implies that $F$ is the flow *into* the network. If $F>0$ fluid enters, and if $F<0$ fluid is coming out.
Let us say a few words about the unique solvability of the network wave model with a given boundary flow $F$. First of all, we have not specified precise function spaces to which the coefficients of the equation or the boundary flow belong. This means that it is not possible to point to an exact reference for the solvability. On the other hand, this is not a problem for linear hyperbolic problems in general.
The problem is a one-dimensional linear hyperbolic problem on various segments with co-joined boundary conditions. As the waves propagate locally in time and space, one can start with the solution to a wave equation on a single segment, as in e.g. Appendix 2 to Chapter V in [@Courant--Hilbert2]. Then when the wavefront approaches a junction, \[scalarH,Kirchhoff\] determine the transmitted and reflected waves to each segment joined there. Then the wave propagates again according to [@Courant--Hilbert2], and the boundary conditions are dealt with as in a one segment case. At no point is there any space to make any “choices”, and thus there is unique solvability. We will not comment on this further, but for more technical details we refer to [@Courant--Hilbert2] for the wave propagation in a segment and the boundary conditions, and to Section 3 of [@Belishev-network-paper1] for the wave propagation through junctions. For an efficient numerical algorithm to the direct problem, see [@Karney--McInnis].
Boundary measurements {#boundary-measurements .unnumbered}
---------------------
The area reconstruction method presented in this paper requires the knowledge of the *impulse-response matrix* (IRM) for all boundary points except one, which we denote by $x_0$.
We define the *impulse-response matrix*, or IRM, by $K =
(K_{ij})_{i,j=1}^N$. For a given $i$ and $j$ we assume that $H$ and $Q$ satisfy the network wave model with boundary flow[^1] $$\label{boundaryFlow}
\nu(x) Q(t,x) = \begin{cases}
V_0 \, \delta_0(t), & x=x_i\\ 0, &
x\neq x_i
\end{cases}$$ for $t\in{\mathbb{R}}$ and $x\in\partial\mathbb G, x\neq x_0$. Here $V_0$ is the volume of fluid injected at $t=0$. Then we set $$\label{impulseResponseMatrix}
K_{ij}(t) = H(t,x_j) / V_0$$ for any $t\in{\mathbb{R}}$.
The index $i$ represents the source and $j$ the receiver. Note also that the IRM gives complete boundary measurement information: if $\nu(x) Q(t,x_i) = F(t,x_i)$ were another set of injected flows at the boundary, then the corresponding boundary pressure would be given by $$\label{HfromIRM}
H(t,x_j) = \sum_{\substack{x_i\in\partial\mathbb G\\x_i\neq x_0}}
\int K_{ij}(t-s) F(s,x_i) ds$$ by the principle of superposition.
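In discrete time, \[HfromIRM\] is simply a sum of convolutions. The following minimal Octave sketch illustrates this; the function name and calling convention are ours and not part of the paper’s code. Here `K` is an $N\times N$ cell array of impulse responses sampled with time step `dt`, and `F` is a cell array of the injected flows $\nu Q$ on the same grid:

```
function Hj = pressure_from_irm(K, F, j, dt)
  % Hypothetical discrete version of \[HfromIRM\]: returns the pressure head at
  % the boundary point x_j produced by the injected flows F{i} = nu*Q(.,x_i).
  Hj = zeros(size(F{1}));
  for i = 1:numel(F)
    c = conv(K{i,j}, F{i}) * dt;   % discrete convolution approximates the time integral
    Hj = Hj + c(1:numel(Hj));      % keep the values on the original time grid
  end
end
```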
Area reconstruction algorithm
=============================
Strategy {#strategy .unnumbered}
--------
Once the impulse-response matrix from \[impulseResponseMatrix\] has been measured, we have everything needed to determine the cross-sectional area in a tree network. As in the one pipe case [@area1], we will calculate special “virtual” boundary conditions which, if applied to the pipe system, would make the pressure constant in a given region at a given time. Exploiting this, together with the knowledge of the total volume of water added to the network by these virtual boundary conditions, gives the total volume of the region. Slightly perturbing the given region then reveals the cross-sectional area.
Multiply \[WH1\] by $gA/a^2$ and integrate over a time-interval ${({0,\tau})}$, for a fixed $\tau>0$, and the whole network $\mathbb
G$. This gives $$\label{impedanceByBoundaryData}
\sum_{x_j\in\partial\mathbb G} \int_0^\tau \nu(x_j) Q(t,x_j) dt =
\int_{\mathbb G} H(\tau,x) \frac{g A(x)}{a^2(x)} dx$$ if $H(0,x)=0$ and mass conservation from \[Kirchhoff\] applies. Let $p\in\mathbb G$ be a point at which we would like to recover the cross-sectional area. To it we associate a set $D_p\subset\mathbb G$ which we shall define precisely later. Let us assume that there are boundary flows $Q_p(t,x_j)$ so that at time $t=\tau$ we have $$\label{Hcharacteristic}
H(\tau,x) = \begin{cases}
h_0, &x\in D_p,\\
0, &x\notin D_p,
\end{cases}$$ for some fixed pressure $h_0$. Then from \[impedanceByBoundaryData\] we have $$\sum_{x_j\in\partial\mathbb G} \int_0^\tau \nu(x_j) Q_p(t,x_j) dt =
\frac{h_0g}{a^2} \int_{D_p} A(x) dx.$$ Denote the left-hand side by $V(D_p)$. We can calculate its value once we know $Q_p$, hence we know the value of the integral on the right. By varying the shape of $D_p$ we can then find the area $A(p)$.
Admissible sets {#Dsection .unnumbered}
---------------
The only requirement for $D_p$ in the previous section was that \[Hcharacteristic\] holds. Boundary control, e.g. as in [@Belishev-network-paper2], implies that there are suitable boundary flows $Q_p$ such that the equation holds for any reasonable set $D_p$. However it is not easy to calculate the flows given an arbitrary $D_p$. In this section we define a class of such sets for which it is very simple to calculate the flows.
Let $p\in\mathbb G \setminus \mathbb J$ be a non-junction point in the network at which we wish to solve for the cross-sectional area $A(p)$. Since we have a tree network, the point $p$ splits $\mathbb G$ into two networks. Let $D_p$ be the part that is not connected to the inaccessible boundary point $x_0$. The boundary of $D_p$ consists of, let us say, $y_1,y_2,\ldots,y_k$ and $p$, where the points $y_j$ are also boundary points of the original network.
For each boundary point $y_j\in\partial D_p$, $y_j\neq p$ define the action time $$\label{actionTime}
f(y_j) = \operatorname{TT}(y_j,p)$$ where $\operatorname{TT}$ gives the travel-time of waves from $y_j$ to $p$ calculated along the shortest path in $D_p$. Then set $f(x_j)=0$ for $x_j\in\partial\mathbb G \setminus\partial D_p$.
\[admissibleSet\] We say that $D_p$ is an *admissible set* associated with $p\in\mathbb G \setminus \mathbb J$ and with *action time* $f$, if $D_p$ and $f$ are defined as above given $p$.
It turns out that with the choice of admissible sets made above we have $$\label{DpFromActionTimes}
D_p = \{ x \in \mathbb G \mid \operatorname{TT}(x_j,x) < f(x_j)
\text{ for some } x_j\in\partial\mathbb G \}.$$ This is because $D_p$ lies between $p$ and the boundary points $x_j$ at which $f(x_j)\neq0$. This gives a geometric interpretation to the set $D_p$, i.e. that it is the *domain of influence* of the *action times $f(x_j)$, $x_j\in\partial\mathbb G$*. If we had zero boundary flows at first, and then active boundary flows for $x_j\in\partial\mathbb G$, $\tau-f(x_j)< t\leq\tau$, then the transient wave produced would have propagated through the whole set $D_p$ at time $t=\tau$ but not at all into $\mathbb G\setminus D_p$.
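Because paths in a tree are unique, the travel time $\operatorname{TT}$ appearing in \[actionTime\] can be computed by a plain traversal of the network. The helper below is a hypothetical sketch, not taken from the paper, for vertex-to-vertex travel times in the adjacency-matrix convention of \[ex2\] with a constant wave speed:

```
function T = travel_time(Adj, L, a, vstart, vend)
  % Hypothetical helper: travel time between two vertices of a tree network,
  % given the adjacency matrix Adj (one row per pipe), the pipe lengths L and
  % a constant wave speed a. Paths in a tree are unique, so a plain traversal
  % already yields the correct distance.
  nV = max(Adj(:));
  dist = inf(nV, 1);
  dist(vstart) = 0;
  queue = vstart;
  while ~isempty(queue)
    v = queue(1); queue(1) = [];
    for p = 1:size(Adj, 1)
      if any(Adj(p,:) == v)
        w = Adj(p, Adj(p,:) ~= v);   % the other end of pipe p
        if isinf(dist(w))
          dist(w) = dist(v) + L(p);
          queue(end+1) = w;
        end
      end
    end
  end
  T = dist(vend) / a;
end
```

For the network of \[ex2\], for instance, `travel_time(Adj, L, 1000, 2, 1)` returns $1.4$, the travel time in seconds from $A$ to $C$.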
Reconstruction formula for the area {#areaSection .unnumbered}
-----------------------------------
Now that the form of the admissible sets has been fixed, we are ready to prove in detail what was introduced at the beginning of this section, namely a formula for solving for the unknown pipe cross-sectional area. For simplicity we assume that the wave speed is constant.
\[areaAtP\] Let $p\in \mathbb G \setminus \mathbb J$ be a non-junction point and $D_p$, $f$ the associated admissible set and action time. Let $\tau>\max f$. For a small time-interval $\Delta_t>0$ set $$(f+\Delta_t)(x_j) = \begin{cases} f(x_j)+\Delta_t, &x_j \in \partial
D_p \setminus \{p\},\\ 0, &x_j \in \partial\mathbb G \setminus
\partial D_p. \end{cases}$$ For $\phi=f$ or $\phi=f+\Delta_t$ denote $$\label{DpdDef}
D^\phi = \{ x\in\mathbb G \mid \operatorname{TT}(x,x_j) <
\phi(x_j) \text{ for some } x_j\in \partial D_p \setminus \{p\} \}$$ so $D^f = D_p$ and $D^{f+\Delta_t}$ is a slight expansion of the former.
Assume that $H_\phi,Q_\phi$ satisfy the network wave model with a boundary flow $F$ which is nonzero only during the action time $\phi$, namely $F(t,x_j)=0$ when $x_j\in\partial\mathbb G$, $x_j\neq
x_0$ and $0 \leq t \leq \tau - \phi(x_j)$. Finally, assume that $$H_\phi(\tau,x) = \begin{cases} h_0, &x\in D^\phi,\\ 0, &x\notin
D^\phi,\end{cases}$$ for some given pressure head $h_0>0$ at time $t=\tau$.
Denote by $$V(\phi,\tau) = \frac{a^2}{h_0g} \sum_{x_j\in \partial D_p
\setminus\{p\}} \nu(x_j) \int_0^\tau Q_\phi(t,x_j) dt$$ the total volume of fluid injected into the network from the boundary in the time-interval ${({0,\tau})}$ to create the waves $H_\phi,Q_\phi$. Then $$A(p) = \lim_{\Delta_t\to0} \frac{V(f+\Delta_t,\tau) -
V(f,\tau)}{a\Delta_t}$$ gives the cross-sectional area of the pipe at $p$.
The assumption about the boundary flow implies that $$H_\phi(t,x_j) = Q_\phi(t,x_j) = 0$$ in the same space-time set, i.e. when $x_j\in\partial\mathbb G$, $x_j\neq x_0$ and $0 \leq t \leq \tau - \phi(x_j)$. Consider \[WH1\] for $H_\phi,Q_\phi$ where $\phi=f$ or $\phi=f+\Delta_t$ is fixed. Multiply the equation by $gA/a^2$ and integrate $\int_0^\tau \int_{\mathbb G} \ldots dx dt$. This gives $$- \int_0^\tau \int_{\mathbb G} \partial_x Q_\phi(t,x) dx dt =
\int_0^\tau \int_{\mathbb G} \frac{g A(x)}{a^2} \partial_t
H_\phi(t,x) dx dt.$$ The right-hand side is equal to $$\frac{g}{a^2} \int_{\mathbb G} A(x) \big(H_\phi(\tau,x) -
H_\phi(0,x) \big) dx = \frac{g h_0}{a^2} \int_{D^\phi} A(x) dx$$ because $H_\phi(0,x)=0$ by \[zeroInitial\]. We will use the junction conditions of \[Kirchhoff\] to deal with the left-hand side. But before that let us use a fixed coordinate system of the network $\mathbb G$.
Let the pipes of the network be $P_1,\ldots, P_n$ and model them in coordinates by the segments ${({0,\ell_1})}, \ldots,
{({0,\ell_n})}$, respectively. On pipe $P_k$, denote by $H_{\phi,k}$ the scalar pressure head, and by $Q_{\phi,k}$ the pipe discharge into the positive direction. Then $$\int_{\mathbb G} \partial_x Q_\phi(t,x) dx = \sum_{k=1}^n
\int_0^{\ell_k} \partial_x Q_{\phi,k}(t,x) dx = \sum_{k=1}^n \big(
Q_{\phi,k}(t,\ell_k) - Q_{\phi,k}(t,0) \big).$$ Note that $Q_{\phi,k}(t,\ell_k)$ is simply the discharge *out* of the pipe $P_k$ at the latter’s endpoint represented by $x=\ell_k$ at time $t$. Similarly $-Q_{\phi,k}(t,0)$ is the discharge out from the other endpoint, the one represented by $x=0$. Now \[Kirchhoff\] implies after a few considerations that $$-\int_{\mathbb G} \partial_x Q_\phi(t,x) dx = \sum_{x_j\in \partial
D_p \setminus \{p\}} \nu(x_j) Q_\phi(t,x_j).$$ Namely, previously we saw that the integral is equal to the sum of the total discharge out of every single pipe. But the discharge out of one pipe must go *into* another pipe at junctions (there are no internal sinks or sources). Hence the discharges at the junctions cancel out, and only the ones at the boundary $\partial\mathbb G$ remain. The boundary values are zero on $\partial \mathbb G
\setminus (\partial D_p \cup \{x_0\})$ by the definition of $f$ and $f+\Delta_t$. We also have zero initial values and a non-active boundary condition at $x_0$, hence $Q_\phi(t,x_0)=0$ too. Thus we have shown that $$\int_{D^\phi} A(x) dx = \frac{a^2}{g h_0} \sum_{x_j\in \partial D_p
\setminus \{p\}} \nu(x_j) \int_0^\tau Q_\phi(t,x_j) dt = V(\phi,\tau).$$
Next, we will show that $$\label{dV}
V(f+\Delta_t,\tau) - V(f,\tau) = \int_{D^{f+\Delta_t}\setminus
D^f} A(x) dx.$$ Recall that $p$ is not a vertex. Hence it is on a unique pipe, let’s say ${({0,\ell})}$ and $p$ is represented by $x_p$ on this segment. Furthermore assume that (or change coordinates so that) the point represented by $0$ is in $D_p$, and the one by $\ell$ is not. This implies that the difference of sets $D^{f+\Delta_t}\setminus D^f$ is just a small segment on ${({0,\ell})}$.
Let us look at the effect of $\Delta_t$ on $D^{f+\Delta_t}$ which is defined in \[DpdDef\]. We have $x\in D^{f+\Delta_t} \setminus
D^f$ if and only if $\operatorname{TT}(x,x_j) < f(x_j) + \Delta_t$ for some boundary point $x_j\in\partial D_p$, $x_j\neq p$, but also $\operatorname{TT}(x,x_k) \geq f(x_k)$ for all boundary points $x_k\in\partial D_p$, $x_k\neq p$. The former implies that $x$ is at most travel-time $\Delta_t$ from $D_p$, and the latter says that it should not be in $D_p$. In other words $D^{f+\Delta_t}\setminus D^f
= \{x\in {({0,\ell})} \mid x_p \leq x < x_p+a\Delta_t \}$ and so $$\label{dVformula}
V(f+\Delta_t,\tau)-V(f,\tau) = \int_{x_p}^{x_p+a\Delta_t} A(x) dx,$$ where we abuse notation and denote the cross-sectional area of the pipe modelled by ${({0,\ell})}$ at the location $x$ also by $A(x)$, without emphasizing that it is the area on this particular model of this particular pipe.
\[dVformula\] gives the area at $p$, $A(x_p)$, by differentiation. Let $B(s) = \int_{x_p}^{x_p+s} A(x) dx$. Then $A(x_p) = \partial_s B(0)$ and the right-hand side of \[dVformula\] is equal to $B(a\Delta_t)$. The chain rule for differentiation gives $$A(x_p) = \partial_s B(s)_{|s=0} = \frac{1}{a} \partial_{\Delta_t}
(B(a\Delta_t))_{|\Delta_t=0} = \frac{1}{a} \lim_{\Delta_t\to0}
\frac{V(f+\Delta_t,\tau)-V(f,\tau)}{\Delta_t}$$ from which the claim follows.
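As a quick sanity check, if the area equals a constant $A_0$ on the whole segment between $x_p$ and $x_p+a\Delta_t$, then \[dVformula\] gives $$V(f+\Delta_t,\tau)-V(f,\tau) = A_0\, a\,\Delta_t, \qquad \text{so} \qquad \lim_{\Delta_t\to0} \frac{V(f+\Delta_t,\tau)-V(f,\tau)}{a\Delta_t} = A_0,$$ as expected.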
Solving for the area {#1pipeIdea .unnumbered}
--------------------
In this section we will show one way in which boundary values of $Q$ can be determined so that the assumptions of \[areaAtP\] are satisfied. It was previously known that there are boundary flows giving \[Hcharacteristic\], for example by [@Belishev-network-paper2], where the authors show *the exact $L^2$-controllability* of the network both locally and in a time-optimal way. However there was no simple way of calculating such boundary flows, and the numerical reconstruction algorithm of that paper does not seem computationally efficient, as it uses a Gram–Schmidt orthogonalization process on a number of vectors inversely proportional to the network’s discretization size. We show that if the flow satisfies a certain boundary integral equation then this is the right kind of flow. Moreover, in the appendix we give a proof scheme for showing that the equation has a solution.
More recently various layer-peeling type methods have appeared [@Kurasov--tree; @LeafPeeling]. The method we present here is based on another idea, one whose roots are in the physics of waves, namely time reversibility. This is in essence a combination of the one pipe case originally considered in [@Sondhi--Gopinath], and of a fundamentally similar idea for higher dimensional manifolds introduced in [@Oksanen]. In the latter the author considers domains of influence and action times on the boundary, and builds a boundary integral equation whose solution then reveals the unknown inside the manifold. This will be our guide.
Let us recall the unique continuation principle used for the area reconstruction method for one pipe in [@area1; @Sondhi--Gopinath]. Consider a pipe of length $\ell>0$, modelled by the interval ${({0,\ell})}$. Let $\mathcal H$ and $\mathcal Q$ satisfy the Waterhammer \[WH1,WH2\] without requiring any initial conditions. Then
\[uniqCont1pipe\] If $\mathcal H(t,0)=2h_0$ and $\mathcal Q(t,0)=0$ for $t_m<t<t_M$, then inside the pipe we would have $\mathcal H(t,x)=2h_0$ and $\mathcal Q(t,x)=0$ in the space-time triangle $x/a +
{\left\lvert t-(t_M+t_m)/2 \right\rvert} < (t_M-t_m)/2$, $0<x<\ell$. See \[fig2\].
![The region $\frac{x}{a} + {\left\lvert t-\frac{t_M+t_m}{2} \right\rvert} <
\frac{t_M-t_m}{2}$, $0<x<\ell$.[]{data-label="fig2"}](spacetime_triangle.pdf)
The lemma was then used to build a virtual solution $H,Q$ satisfying also the initial conditions, and which would have $H(\tau,x)=h_0$ for $x < a \tau$ and $H(\tau,x)=0$ for $x>a\tau$ for a given $\tau>0$. Without going into detailed proofs, the same unique continuation idea works for a tree network. The reason is that one can propagate $\mathcal H=2h_0$, $\mathcal Q=0$ from one end of a pipe to the other, keeping in mind that the time-interval where these hold shrinks as one goes further into the pipe. Do this first on all the pipes that touch the boundary. Then use the junction conditions from \[scalarH,Kirchhoff\] to see that $\mathcal H=2h_0$, $\mathcal
Q=0$ on the next junctions. Then repeat inductively. We have shown
\[treeUniqCont\] Let $p\in\mathbb G \setminus \mathbb J$ and let $D_p \subset \mathbb
G$ be admissible associated with $p$ and with action time $f$. Let $\mathcal H, \mathcal Q$ satisfy \[WH1,WH2\] and the junction conditions of \[scalarH,Kirchhoff\]. Let $\tau \geq \max f$ and assume that $$\label{inductionReq}
\mathcal H(t,x_j) = 2h_0, \qquad \mathcal Q(t,x_j) = 0, \qquad
x_j\in\partial\mathbb G, \quad {\left\lvert t-\tau \right\rvert} < f(x_j).$$ Then $\mathcal H(t,x) = 2h_0$ and $\mathcal Q(t,x) = 0$ whenever $x\in\mathbb G$, $0<t<2\tau$ and $$\label{inductionConc}
\operatorname{TT}(x,x_j) + {\left\lvert \tau-t \right\rvert} < f(x_j)$$ for some $x_j\in\partial\mathbb G$.
We can now write an integral equation whose solution gives waves with $H(\tau,x)=h_0$ for $x\in D_p$ and $H(\tau,x)=0$ for $x\notin D_p$ at time $t=\tau$.
\[treeEquationForH1\] Let $K_{ij}$ be the impulse-response matrix from \[impulseResponseMatrix\]. If $A$ is constant near each network boundary point we have $$K_{ij}(t) = \frac{a}{A(x_i) g} \delta_0(t) \delta_{ij} + k_{ij}(t)$$ for some function[^2] $k_{ij}=k_{ji}$ that vanishes near $t=0$.
Let $p\in\mathbb G \setminus \mathbb J$ and let $D_p \subset
\mathbb G$ be the admissible set associated with $p$, and with action time $f$. Take $\tau\geq\max f$ and let $Q_p(t,x_j)$ satisfy $$\begin{aligned}
h_0 &= \frac{a}{A(x_j) g} \nu(x_j) Q_p(t,x_j) \notag\\ &\quad +
\sum_{\substack{x_i\in\partial\mathbb G\\x_i\neq x_0}}
\frac{\nu(x_i)}{2} \int_0^\tau Q_p(s,x_i) \big( k_{ij}({\left\lvert t-s \right\rvert})
+ k_{ij}(2\tau-t-s) \big) ds \label{Qequation}
\end{aligned}$$ when $x_j\in\partial\mathbb G$, $x_j\neq x_0$, $\tau-f(x_j)<t\leq\tau$ and $$\label{Qsupport}
Q_p(t,x_j) = 0$$ when $x_j\in\partial\mathbb G$, $x_j\neq x_0$ and $0\leq t \leq
\tau-f(x_j)$.
Then if $H,Q$ satisfy the network wave model with boundary flow $\nu
Q_p$ we have $$\label{H1}
H(\tau,x) = \begin{cases} h_0, &x\in D_p,\\ 0, &x\in \mathbb G
\setminus D_p. \end{cases}$$
If one sets $Q_p(2\tau-t,x_j)=Q_p(t,x_j)$, one could instead have $$h_0 = \frac{a}{A(x_j) g} \nu(x_j) Q_p(t,x_j) +
\sum_{\substack{x_i\in\partial\mathbb G\\x_i\neq x_0}}
\frac{\nu(x_i)}{2} \int_0^{2\tau} Q_p(s,x_i) k_{ij}({\left\lvert t-s \right\rvert}) ds$$ with $x_j\in\partial\mathbb G$, ${\left\lvert t-\tau \right\rvert}<f(x_j)$ and $Q_p(t,x_j)=0$ for ${\left\lvert t-\tau \right\rvert}\geq f(x_j)$.
The claim for the impulse response matrix is standard. See for example Appendix 2 to Chapter V in [@Courant--Hilbert2] for the solution to the one segment setting, and then use mathematical induction and the junction conditions of \[scalarH,Kirchhoff\].
Extend $Q_p$ symmetrically past $\tau$, i.e. $Q_p(2\tau-t,x_j)=Q_p(t,x_j)$ for $0\leq t\leq\tau$ and $x_j\in\partial\mathbb G$, $x_j\neq x_0$. Continue $H$ and $Q$ to $0<t<2\tau$ while still having $Q=Q_p$ as the boundary condition at $x\neq x_0$, and \[x0BC\] when $x=x_0$. Define $$\mathcal H(t,x) = H(t,x) + H(2\tau-t,x), \qquad \mathcal Q(t,x) =
Q(t,x) - Q(2\tau-t,x)$$ for $x\in\mathbb G$ and $0<t<2\tau$. The symmetry of $Q_p$ implies that $\mathcal Q(t,x_j) = 0$ for $x_j\in\partial\mathbb G$, $x_j\neq
x_0$ and $0<t<2\tau$.
For the pressure, recall that the properties of the impulse response matrix from \[HfromIRM\] imply that $$H(t,x_j) = \sum_{\substack{x_i\in\partial\mathbb G\\x_i\neq x_0}}
\int_{-\infty}^\infty K_{ij}(t-s) \nu(x_i) Q_p(s,x_i) ds$$ for $x_j\in\partial\mathbb G$, $x_j\neq x_0$ and $0<t<2\tau$. It is then easy to calculate that $$H(t,x_j) = \frac{a}{A(x_j) g} \nu(x_j) Q_p(t,x_j) +
\sum_{\substack{x_i\in\partial\mathbb G\\x_i\neq x_0}} \int_0^t
k_{ij}(t-s) \nu(x_i) Q_p(s,x_i) ds$$ because $K_{ij}(t-s) = 0$ when $s>t$ and $Q_p(s,x_i)=0$ when $s<0$. For $H(2\tau-t,x_j)$ split the integral to get $$\begin{aligned}
&\int_0^{2\tau-t} k_{ij}(2\tau-t-s) Q_p(s,x_i) ds \\&\qquad =
\int_0^\tau k_{ij}(2\tau-t-s) Q_p(s,x_i) ds + \int_\tau^{2\tau-t}
k_{ij}(2\tau-t-s) Q_p(s,x_i) ds \\&\qquad = \int_0^\tau
k_{ij}(2\tau-t-s) Q_p(s,x_i) ds + \int_t^\tau k_{ij}(s-t)
Q_p(s,x_i) ds
\end{aligned}$$ where we again used the time-symmetry of $Q_p$. Summing all terms and using $Q_p(2\tau-t,x_j)=Q_p(t,x_j)$ we see that $$\begin{aligned}
&\mathcal H(t,x_j) = H(t,x_j) + H(2\tau-t,x_j) = 2
\frac{a}{A(x_j) g} \nu(x_j) Q_p(t,x_j) \\&\quad +
\sum_{\substack{x_i\in\partial\mathbb G\\x_i\neq x_0}} \nu(x_i)
\int_0^\tau Q_p(s,x_i) \big( k_{ij}({\left\lvert t-s \right\rvert}) + k_{ij}(2\tau-t-s)
\big) ds = 2h_0
\end{aligned}$$ when $x_j\in\partial\mathbb G$ and ${\left\lvert t-\tau \right\rvert}<f(x_j)$. The assumptions of \[treeUniqCont\] are now satisfied and so $\mathcal H(t,x) = 2h_0$ and $\mathcal Q(t,x) = 0$ when $0<t<2\tau$ and $\operatorname{TT}(x,x_j) + {\left\lvert \tau-t \right\rvert} < f(x_j)$ for some $x_j\in\partial\mathbb G$. \[DpFromActionTimes\] and the finite speed of wave propagation imply \[H1\].
Assuming an impulse-response matrix that is measured to infinite precision and without modelling errors, one can show that \[Qequation,Qsupport\] have a solution. The idea of the proof is shown in the appendix. The solution is not necessarily unique, but it gives a unique reconstructed cross-sectional area.
Step-by-step algorithm {#algoSect}
======================
Measuring the impulse-response matrix {#measuring-the-impulse-response-matrix .unnumbered}
-------------------------------------
Recall \[impulseResponseMatrix\]: the impulse-response matrix $K=(K_{ij})_{i,j=1}^N$ is defined by $K_{ij}(t) = H(t,x_j)/V_0$, where $H,Q$ solve the waterhammer equations \[WH1,WH2\], the junction conditions \[scalarH,Kirchhoff\], the zero initial conditions of \[zeroInitial\], and a flow impulse of volume $V_0$ at the boundary vertex $x_i$ with zero flow at the other accessible vertices, as in \[boundaryFlow\]. Here $x_i, x_j$ with $i,j\neq0$ are the accessible boundary points, and $x_0$ is the inaccessible one that does not produce surges.
Solving for the cross-sectional area {#solving-for-the-cross-sectional-area .unnumbered}
------------------------------------
In this second part of the step-by-step reconstruction algorithm we assume that the impulse-response matrix $K$ has been calculated. It can either be measured directly by closing all accessible ends, or be obtained by measuring the system response in a different setting (i.e. with different boundary conditions) and then pre-processing the measured signal to obtain the desired matrix $K$.
Once the impulse-response matrix has been measured as discussed in the previous paragraph, the following mathematical algorithm can be applied to recover the cross-sectional area inside a chosen pipe or pipe-segment in the network.
\[alg1\] This algorithm calculates the cross-sectional area using a discretization and \[alg2\].
1. \[step:responseMat\] Define $k_{ij}(t)$ for $i,j\neq0$ by $$k_{ij}(t) = K_{ij}(t) - \frac{a}{A(x_i) g} \delta_0(t) \delta_{ij}$$ where $\delta_{ij}$ is the Kronecker delta.
2. Choose a point $p_1$ in the network that is not a junction. The algorithm will reconstruct the cross-sectional area starting from $p_1$ and going towards $x_0$ until it hits the endpoint $p_2$ of the current pipe.
3. Split the interval between $p_1$ and $p_2$ into pieces of length $\Delta_x$.
4. \[step:chooseP\] Let $p$ be any point between two pieces above that has not been chosen yet. Calculate the internal volume $V(p)$ using \[alg2\].
5. Redo \[step:chooseP\] for all the points in the discretization of the interval between $p_1$ and $p_2$, and save the values of $V(p)$ associated with the point $p$.
6. Denote the discretization by $(p(0)=p_1, p(1), \ldots,
p(M)\approx p_2)$. Then the area at $p(k)$ is approximately the volume between the points $p(k)$ and $p(k+1)$ divided by $\Delta_x$. In other words $$A\big(p(k)\big) \approx \frac{V\big(p(k+1)\big) -
V\big(p(k)\big)}{\Delta_x}.$$ This would be an equality if $\Delta_x$ were infinitesimal, or if $A(p(k))$ denoted the average area over the interval $(p(k), p(k+1))$; a one-line discrete form of this step is sketched after the list.
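In discrete form (with the volumes stored in a vector, as in the experiments below), step 6 is a single forward difference; the variable names here are illustrative:

```
% Vol holds the volumes V(p(0)), ..., V(p(M)) and dx is the spacing Delta_x.
A_rec = (Vol(2:end) - Vol(1:end-1)) / dx;   % approximations of A(p(0)), ..., A(p(M-1))
```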
\[alg2\] This algorithm calculates the internal volume of the piece of network cut off by $p$: namely all points from which you have to pass through $p$ to get to $x_0$.
1. For any boundary point $x_j\neq x_0$ set $$f(x_j) = \begin{cases} \operatorname{TT}(x_j,p), & \text{if $p$ is
between $x_j$ and $x_0$}, \\ 0, & \text{if not}, \end{cases}$$ where $\operatorname{TT}$ gives the travel-time between points. It is just the distance in the network divided by the wave speed.
2. \[step:tau\] Take $\tau \geq \max f$ and fix a pressure head $h_0>0$.
3. \[step:solveQ\] For any boundary point $x_j\neq x_0$ and time $\tau-f(x_j) < t \leq \tau$, using regularization if necessary, let $Q_p$ solve $$\begin{aligned}
&h_0 = \frac{a}{A(x_j) g} \nu(x_j) Q_p(t,x_j) \label{eq:solveQ}
\\ & + \sum_{\substack{x_i\in\partial\mathbb G\\x_i\neq x_0}}
\frac{\nu(x_i)}{2} \int_0^\tau Q_p(s,x_i) \big(
k_{ij}({\left\lvert t-s \right\rvert}) + k_{ij}(2\tau-t-s) \big) ds \notag
\end{aligned}$$ and also simultaneously set $$Q_p(t,x_i) = 0$$ for boundary points $x_i\neq x_0$ and time $0\leq t \leq \tau -
f(x_j)$. Thus the integral above can be calculated on $\tau-f(x_j)
< s \leq \tau$.
4. \[step:volume\] Set $$\label{eq:volume}
V(p) = \frac{a^2}{h_0g} \sum_{x_j\in\partial\mathbb G}
\int_0^\tau \nu(x_j) Q_p(t,x_j) dt.$$ This is the internal volume of the part of the pipe network that lies on the other side of $p$ from the inaccessible end $x_0$; a discretized sketch of \[step:solveQ,step:volume\] is given below.
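For reference, the following is a minimal sketch of how \[step:solveQ,step:volume\] can be discretized. It is *not* the `makeHeq1` implementation from the appendix; the rectangle-rule quadrature, the `interp1`-based sampling of $k_{ij}$ and the choice $h_0=1$ are our own assumptions.

```
function Qtau = makeHeq1_sketch(t, k, tau, f, a0, g, A0, reguparam)
  % Minimal sketch of \[step:solveQ\], NOT the authors' makeHeq1 from the
  % appendix: discretize \[eq:solveQ\] with the rectangle rule, assemble one
  % block linear system for the unknown boundary flows nu*Q_p on the active
  % time windows, and solve it with Tikhonov regularization. h_0 = 1 assumed.
  N  = numel(f);                 % number of accessible ends
  dt = t(2) - t(1);
  idx = cell(N,1);               % active samples: tau - f(j) < t <= tau
  for j = 1:N
    idx{j} = find(t > tau - f(j) + dt/4 & t <= tau + dt/4);
  end
  nb  = cellfun(@numel, idx);    % block sizes
  off = [0; cumsum(nb(:))];      % block offsets
  M = zeros(off(end));
  b = ones(off(end), 1);         % right-hand side h_0 = 1
  for j = 1:N
    tj = t(idx{j});
    for i = 1:N
      si = t(idx{i});
      % sample k_{ij} at |t-s| and at 2*tau-t-s by interpolation on the grid t
      K1 = interp1(t, k{i,j}, abs(tj - si'), 'linear', 0);
      K2 = interp1(t, k{i,j}, 2*tau - tj - si', 'linear', 0);
      block = dt/2 * (K1 + K2);
      if i == j
        block = block + a0(j)/(A0(j)*g) * eye(nb(j));   % delta part of K_jj
      end
      M(off(j)+1:off(j+1), off(i)+1:off(i+1)) = block;
    end
  end
  u = (M'*M + reguparam*eye(size(M,2))) \ (M'*b);   % Tikhonov-regularized solve
  Qtau = cell(N,1);              % split back into one flow history per end
  for i = 1:N
    Qtau{i} = u(off(i)+1:off(i+1));
  end
end
```

With this convention the returned cell array contains $\nu(x_i)Q_p$ on the active window of each accessible end, so summing `sum(Qtau{i})*dt` over the ends and multiplying by $a^2/g$ reproduces \[eq:volume\] with $h_0=1$, exactly as in the experiment scripts below.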
Numerical experiments
=====================
All the programming here was done in GNU Octave [@octave].
\[exp1\] We will start by solving for the trivial area of \[ex1\]. Consider this a test for the implementation of the area reconstruction algorithm. We start by calculating the impulse-response matrix of vertices $A$ and $B$ while $C$ is inaccessible. $D$ is the junction.
Set $A(x)=\SI{1}{\m\squared}$ everywhere, the gravitational acceleration $g=\SI{9.81}{\m\per\s\squared}$ and a wave speed of $a=\SI{1000}{\m\per\s}$.
```
%% Physical parameters
maxc = 1000; % maximal wave speed in the network
g = 9.81; % standard gravity value (m/s^2)
%% Network a-priori information
V = 4; % total number of vertices
L = [400; 300; 1000]; % L(j) length of pipe j
Adj = [1 4; 2 4; 4 3]; % pipe j goes from vertex Adj(j,1) to vertex Adj(j,2)
A0 = [1,1]; % area at accessible pipe ends
a0 = [maxc, maxc]; % wave speed at accessible pipe ends
```
Let us calculate the IRM by hand. If a flow of $V_0 \delta_0(t)$ is induced at point $A$, then it creates a propagating solution $Q =
V_0 \delta_0(t - x/a)$, $H= a V_0 \delta_0(t - x/a) / (g A)$ along $AD$. If a pressure pulse of magnitude $M \delta_0$ is incident on $D$, then it transmits two pulses of magnitude $2M \delta_0 /3$ to $BD$ and $DC$, and reflects one pulse of magnitude $-M \delta_0 / 3$ back to $AD$. These follow from the junction conditions of \[scalarH,Kirchhoff\]. Also, if a similar pressure pulse is incident on a boundary point with boundary condition $Q=0$, then a pulse of the same magnitude (no sign change) is reflected. However at that boundary point the pressure is measured as $2M
\delta_0$. These considerations produce the following impulse-response matrix $$\begin{aligned}
K_{AA}(t) &= \frac{a}{g A} \Big( \delta_0\big(t\big) - \frac23
\delta_0\big(t-\SI{0.8}{\s}\big) + \frac89 \delta_0\big(t -
\SI{1.4}{\s}\big) + \frac29 \delta_0\big(t - \SI{1.6}{\s}\big) +
\ldots \Big) \\ K_{BB}(t) &= \frac{a}{g A} \Big(
\delta_0\big(t\big) - \frac23 \delta_0\big(t-\SI{0.6}{\s}\big) +
\frac29 \delta_0\big(t-\SI{1.2}{\s}\big) + \frac89
\delta_0\big(t-\SI{1.4}{\s}\big) + \ldots \Big) \\ K_{AB}(t) &=
\frac{a}{g A} \Big( \frac43 \delta_0\big(t-\SI{0.7}{\s}\big) -
\frac49 \delta_0\big(t-\SI{1.3}{\s}\big) - \frac49
\delta_0\big(t-\SI{1.5}{\s}\big) + \ldots \Big) \\ K_{BA}(t) &=
K_{AB}(t)
\end{aligned}$$ for time $0\leq t \leq \SI{1.6}{\s}$.
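The factors $-\tfrac13$ and $\tfrac23$ used above follow directly from the junction conditions. Write the incident, reflected and transmitted head pulses at $D$ as $M$, $rM$ and $tM$, and recall that a head pulse travelling in the positive (negative) $x$ direction of a pipe with constant area carries the flow $Q = +\tfrac{gA}{a}H$ (respectively $Q=-\tfrac{gA}{a}H$). Since all three pipes here have the same $A$ and $a$, continuity of the head \[scalarH\] gives $1+r=t$, while the flow balance \[Kirchhoff\] gives $1-r=2t$, so that $$1 + r = t, \qquad 1 - r = 2t \quad\Longrightarrow\quad r=-\frac13, \qquad t=\frac23.$$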
We must have $2\tau\leq\SI{1.6}{\s}$, and so the algorithm can solve for the area up to points of the network that are at most $a\tau =
\SI{800}{\m}$ from each accessible end. Hence we can solve for the area only up to $\SI{400}{\m}$ in pipe $DC$ from the junction $D$. The reflections $k_{ij}$ used in \[alg1\] are $$\begin{aligned}
k_{AA}(t) &= \frac{a}{g A} \Big( - \frac23
\delta_0\big(t-\SI{0.8}{\s}\big) + \frac89 \delta_0\big(t -
\SI{1.4}{\s}\big) + \frac29 \delta_0\big(t - \SI{1.6}{\s}\big) +
\ldots \Big) \\ k_{BB}(t) &= \frac{a}{g A} \Big( - \frac23
\delta_0\big(t-\SI{0.6}{\s}\big) + \frac29
\delta_0\big(t-\SI{1.2}{\s}\big) + \frac89
\delta_0\big(t-\SI{1.4}{\s}\big) + \ldots \Big) \\ k_{AB}(t) &=
\frac{a}{g A} \Big( \frac43 \delta_0\big(t-\SI{0.7}{\s}\big) -
\frac49 \delta_0\big(t-\SI{1.3}{\s}\big) - \frac49
\delta_0\big(t-\SI{1.5}{\s}\big) + \ldots \Big) \\ k_{BA}(t) &=
k_{AB}(t).
\end{aligned}$$
``` {startFrom="last"}
%% Measurements
dt = 10/maxc; % in one time-step the wave propagates 10m
experiment_duration = 1.61; % impulse-response matrix should go from time t=0 to experiment_duration.
t = (0:dt:experiment_duration)';
k = cell(2,2); % k is the response matrix
% The below is calculated by hand for a network as above.
k{1,1} = a0(1)/(A0(1)*g) *(...
-2/3*1/dt.*(t >= 0.8-dt/2).*(t < 0.8+dt/2) ...
+8/9*1/dt.*(t >= 1.4-dt/2).*(t < 1.4+dt/2) ...
+2/9*1/dt.*(t >= 1.6-dt/2).*(t < 1.6+dt/2) ...
);
k{2,2} = a0(2)/(A0(2)*g) *(...
-2/3*1/dt.*(t >= 0.6-dt/2).*(t < 0.6+dt/2)...
+2/9*1/dt.*(t >= 1.2-dt/2).*(t < 1.2+dt/2) ...
+8/9*1/dt.*(t >= 1.4-dt/2).*(t < 1.4+dt/2) ...
);
k{1,2} = a0(2)/(A0(2)*g) *(...
+4/3*1/dt.*(t >= 0.7-dt/2).*(t < 0.7+dt/2)...
-4/9*1/dt.*(t >= 1.3-dt/2).*(t < 1.3+dt/2) ...
-4/9*1/dt.*(t >= 1.5-dt/2).*(t < 1.5+dt/2) ...
);
k{2,1} = k{1,2};
```
Let us define the action time functions for various points $p$ in the network. If $p \in AD$ then the action time is $$f^{AD}_p(x) = \begin{cases}
TT(A,p), & x = A\\
0, & x = B\\
0, & x = C
\end{cases}$$ and for $p \in BD$ $$f^{BD}_p(x) = \begin{cases}
0, & x = A\\
TT(B,p), & x = B\\
0, & x = C
\end{cases}.$$
The travel-time from $A$ to $D$ is $\SI{0.4}{\s}$, and from $B$ to $D$ it is $\SI{0.3}{\s}$. Recall that the IRM has been measured only for time $t\leq \SI{1.6}{\s}$. Hence we can solve for the area up to $\SI{400}{\m}$ into pipe $DC$. Let the point $p\in DC$ be given by the action time function $$f^{DC}_p(x) = \begin{cases}
\SI{0.4}{\s} + t_p, & x = A\\
\SI{0.3}{\s} + t_p, & x = B\\
0, & x = C
\end{cases}$$ where $0\leq t_p\leq \SI{400}{\m}/a = \SI{0.4}{\s}$ is the travel-time from $D$ to $p$, recalling that we can solve for the area only up to $\SI{400}{\m}$ from $D$.
``` {startFrom="last"}
%% Set parameters
tau = 0.8; % area will be solved up to 2*tau*maxc from accessible point furthest to inaccessible point x3.
assert(2*tau <= experiment_duration, 'Can recover area only up to experiment_duration*maxc/2.');
reguparam = 1e-5; % Tikhonov regularization parameter
dx = dt*maxc; % into how big chunks we discretize the pipe
%% x-discretization
maxL = [L(1); L(2); tau*maxc - max(L(1),L(2))];
maxL = min(L,maxL); % maximum length of pipes that can be reached from all accessible vertices in time tau
M = floor(maxL/dx); % number of discretized segments of length dx in each pipe
%% Action times to various points in the network
f = cell(length(L),1); % a different action time formula for points on each pipe
% action times to pipe 1 (AD) points
f{1} = nan(M(1),2);
f{1}(:,1) = (1:M(1))'*dx./maxc;
f{1}(:,2) = zeros(size((1:M(1))'));
% action times to pipe 2 (BD) points
f{2} = nan(M(2),2);
f{2}(:,1) = zeros(size((1:M(2))'));
f{2}(:,2) = (1:M(2))'*dx./maxc;
% action times to pipe 3 (DC) points
f{3} = nan(M(3),2);
f{3}(:,1) = (L(1)+(1:M(3))'*dx)./maxc;
f{3}(:,2) = (L(2)+(1:M(3))'*dx)./maxc;
```
Let us apply \[alg1\] next. The numerical implementation of \[alg2\], `makeHeq1`, is in the appendix.
``` {startFrom="last"}
%% Solve for the cross-sectional area
% Initialize cells for saving various vectors
pipeVolume = cell(length(L),1); % volume of network cut by p
pipeArea = cell(length(L),1); % cross-sectional area at p
pipeX = cell(length(L),1); % x-coordinates of points p
for P=1:length(L) % P indexes the pipe number (AD, BD, DC)
assert(max(max(f{P})) - tau <= dt/4, 'Travel-times must be at most tau.');
V = nan(M(P),1); % volume of the network up to points p
for p=1:M(P) % p indexes the point $p$ inside pipe P
% use Algorithm 2:
Qtau = makeHeq1(t, k, tau, f{P}(p,:), a0, g, A0, reguparam);
V(p) = 0;
% add to V the volume of water from each accessible end that would have gone INTO the pipe to make H=1 at t=tau:
for ii = 1:length(Qtau)
V(p) = sum(Qtau{ii})*dt + V(p);
end
end
pipeVolume{P} = maxc^2/g*V;
pipeArea{P} = (pipeVolume{P}(2:end)-pipeVolume{P}(1:end-1))/dx;
pipeX{P} = (1:M(P)-1)'.*dx;
end
```
A plot of the cross-sectional areas is shown in \[fig\_ex1\].
![Solved cross-sectional areas of \[exp1\][]{data-label="fig_ex1"}](ex1){width="\textwidth"}
\[exp2\] In the second experiment we consider a star-shaped network with four leaves, ending in points $A,B,C,D$. The internal node is denoted $E$. Let $D$ be the inaccessible end. Take the following lengths $$\begin{aligned}
AE = ({0,\SI{300}{\m}})&, &BE = ({0,\SI{400}{\m}})&, &CE =
({0,\SI{400}{\m}})&, &ED = ({0,\SI{500}{\m}}),&
\end{aligned}$$ as presented in \[exp2\_fig\].
![A more complicated network[]{data-label="exp2_fig"}](complicatedNet)
We set a constant wave speed of $a=\SI{1000}{\m\per\s}$ everywhere and an area function that models blockages at certain locations.
```
%% physical parameters
maxc = 1000; % max wave speed
g = 9.81; % standard gravity value (m/s^2)
```
``` {startFrom="last"}
%% Network
V = 5; % total number of vertices
L = [300; 400; 400; 500]; % pipe lengths
Adj = [1 5; 2 5; 3 5; 5 4]; % to avoid display problems make arrows point towards inaccessible node (nbr 4)
Afunc1 = @(s)( ones(size(s)) );
Afunc2 = @(s)( 2*ones(size(s)) - (s>350).*(s<375).*0.6);
Afunc3 = @(s)( ones(size(s)) - (s>210).*(s<250).*0.2);
Afunc4 = @(s)( ones(size(s)) - (s>410).*(s<450).*0.4 - (s>150).*(s<250).*0.2);
afunc1 = @(s)( maxc*ones(size(s)) );
afunc2 = @(s)( maxc*ones(size(s)) );
afunc3 = @(s)( maxc*ones(size(s)) );
afunc4 = @(s)( maxc*ones(size(s)) );
Afunc = {Afunc1; Afunc2; Afunc3; Afunc4};
afunc = {afunc1; afunc2; afunc3; afunc4};
BVtype = [1; 1; 1; 1; NaN]; % Are inputs pressure (0) or flow (1)
```
We simulate the IRM by a finite-difference time domain (FDTD) algorithm with the following caveats: we use a Courant number smaller than one, simulate at a high resolution, and then interpolate the IRM to a lower time-resolution and use this as input for the inversion algorithm. This is to avoid the inverse crime [@KaipioSomersalo], which makes inversion algorithms give unrealistically good results when applied to data simulated with the same resolution or model as the inversion algorithm uses.
``` {startFrom="last"}
%% Measurements
% FDTD parameters
dx = 5;
courant = 0.95;
dt = courant*dx/maxc;
experiment_duration = 1.9;
doPlot = 0; % do we want to observe the FDTD simulation
% Boundary area and wave speed. Accessible ends are 1, 2 and 3
A0 = [Afunc1(0); Afunc2(0); Afunc3(0)];
a0 = [afunc1(0); afunc2(0); afunc3(0)];
```
For numerical reasons, instead of sending a unit impulse $Q = \nu \delta$ we send a unit step function and then differentiate the measurements with respect to time. We will not show the implementation of `FDTD` or of the self-explanatory functions `removeInitialPulse`, `medianSmooth` and `differentiate`, because they are not the focus of this already rather long article. The main point is the inversion algorithm.
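For orientation only, a single-pipe staggered-grid (leapfrog) update of the kind such a solver uses might look as follows. This sketch is ours and is not the authors’ `FDTD`: it ignores junctions and the `BVtype` options, injects a prescribed flow at $x=0$, assumes a closed end ($Q=0$) at $x=\ell$, and needs `a*dt/dx <= 1` for stability:

```
function Hbnd = fdtd_pipe_sketch(ell, dx, a, g, Afun, Qin, dt)
  % Single-pipe staggered-grid sketch of \[WH1,WH2\]; NOT the authors' network
  % FDTD. The flow Qin(n) is injected at x=0, a closed end (Q=0) is assumed at
  % x=ell, and the pressure head measured at x=0 is returned.
  x  = (0:dx:ell)';                  % H grid
  xm = (dx/2:dx:ell-dx/2)';          % Q grid, staggered by half a cell
  H  = zeros(size(x));
  Q  = zeros(size(xm));
  AH = Afun(x);  AQ = Afun(xm);      % cross-sectional areas on both grids
  Hbnd = zeros(size(Qin));
  for n = 1:numel(Qin)               % leapfrog: update Q, then H
    Q = Q - dt*g*AQ .* (H(2:end) - H(1:end-1)) / dx;
    H(1)       = H(1)       - dt*a^2/(g*AH(1))        * (Q(1) - Qin(n)) / (dx/2);
    H(2:end-1) = H(2:end-1) - dt*a^2./(g*AH(2:end-1)) .* (Q(2:end) - Q(1:end-1)) / dx;
    H(end)     = H(end)     - dt*a^2/(g*AH(end))      * (0 - Q(end)) / (dx/2);
    Hbnd(n) = H(1);
  end
end
```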
``` {startFrom="last"}
t = (0:dt:experiment_duration)';
F = bndrySourceOn(t);
k = cell(3,3);
for ii=1:3
% Create input flows: constant flow at ii, otherwise zero
Vdata = cell(V,1);
for vv = 1:V
Vdata{vv} = zeros(size(F));
end
Vdata{ii} = F;
% Simulate the measurements with the given F
[Hhist, Qhist, thist] = FDTD(L, V, Adj, BVtype, Vdata, t, ...
doPlot, maxc, g, dx, courant, Afunc, afunc);
% Remove the initial pulses and differentiate with respect to time
for jj=1:3
H = Hhist{jj};
if(ii==jj)
H = removeInitialPulse(H, thist, a0(ii), g, A0(ii));
end
% Smoothen slightly to make differentiation well behaved
H = medianSmooth(H, floor(0.02/(thist(2)-thist(1))));
H = differentiate(H, thist);
k{ii,jj} = H;
end
end
% Avoid the inverse crime by interpolation to a lower resolution
dx = 7;
dt = dx / maxc;
t = (thist(1): dt : experiment_duration)';
for ii=1:size(k,1)
for jj=1:size(k,2)
k{ii,jj} = interp1(thist, k{ii,jj}, t);
end
end
```
The simulation gives us the following response matrix, as shown in \[exp2\_input\_fig\].
![IRM for \[exp2\][]{data-label="exp2_input_fig"}](ex2_input){width="\textwidth"}
Now solving the inverse problem follows the same logic as in \[exp1\]. There are two differences, a large one and a small one. The former is that the action time functions are of course different; this is what encodes the network topology for the inversion algorithm. The latter is that we must use quite a lot of regularization when solving for the area of $ED$. This is because of the “numerical error” introduced on purpose by the Courant number smaller than one and by interpolating the measurements to avoid committing an inverse crime.
``` {startFrom="last"}
%% Set parameters
reguparam = [1e-5, 1e-5, 1e-5, 1e0]; % regularization parameter to Tikhonov regularization in various pipes
tau = 0.9;
%% x-discretization
maxL = [L(1); L(2); L(3); tau*maxc - max([L(1) L(2) L(3)])];
maxL = min(L,maxL); % maximum length of pipes that can be reached from all accessible vertices in time tau
M = floor(maxL/dx); % Number of discretized segments of length dx in each pipe
f = cell(length(L),1);
%% Action times to various points in the network
% action times to pipe 1 (AE) points
f{1} = nan(M(1),3);
f{1}(:,1) = (1:M(1))'*dx./maxc;
f{1}(:,2) = zeros(size((1:M(1))'));
f{1}(:,3) = zeros(size((1:M(1))'));
% action times to pipe 2 (BE) points
f{2} = nan(M(2),3);
f{2}(:,1) = zeros(size((1:M(2))'));
f{2}(:,2) = (1:M(2))'*dx./maxc;
f{2}(:,3) = zeros(size((1:M(2))'));
% action times to pipe 3 (CE) points
f{3} = nan(M(3),3);
f{3}(:,1) = zeros(size((1:M(3))'));
f{3}(:,2) = zeros(size((1:M(3))'));
f{3}(:,3) = (1:M(3))'*dx./maxc;
% action times to pipe 4 (ED) points
f{4} = nan(M(4),3);
f{4}(:,1) = (L(1)+(1:M(4))'*dx)./maxc;
f{4}(:,2) = (L(2)+(1:M(4))'*dx)./maxc;
f{4}(:,3) = (L(3)+(1:M(4))'*dx)./maxc;
```
Then applying \[alg1\] gives the solution as before.
``` {startFrom="last"}
%% Solve for the cross-sectional area
% Initialize cells for saving various vectors
pipeVolume = cell(length(L),1); % volume of network cut by p
pipeArea = cell(length(L),1); % cross-sectional area at p
pipeX = cell(length(L),1); % x-coordinate of points p
for P=1:length(L) % P indexes the pipe number (AE, BE, CE, ED)
assert(max(max(f{P})) - tau <= dt/4, 'Travel-times must be at most tau.');
V = nan(M(P),1); % volume of the network up to points p
for p=1:M(P) % p indexes the point $p$ inside pipe P
Qtau = makeHeq1(t, k, tau, f{P}(p,:), a0,g,A0, reguparam(P));
V(p) = 0;
% add to V the total volume of water from accessible end that would have gone INTO the pipe to make H=1 at t=tau.
for ii = 1:length(Qtau)
V(p) = sum(Qtau{ii})*dt + V(p);
end
end
pipeVolume{P} = maxc^2/g*V;
pipeArea{P} = (pipeVolume{P}(2:end)-pipeVolume{P}(1:end-1))/dx;
pipeX{P} = (1:M(P)-1)'.*dx;
end
```
The solution is displayed in \[fig\_ex2\]. The gray uniform line represents the original cross-sectional area. The dashed line is the solution to the inverse problem.
![Solved cross-sectional areas of \[exp2\][]{data-label="fig_ex2"}](ex2){width="\textwidth"}
Discussion and conclusions {#discussion}
==========================
We have developed and implemented an algorithm that reconstructs the internal cross-sectional area of pipes, filled with fluid, in a network arrangement. The region where the area is reconstructed must form a tree network, and the input to our algorithm consists of the impulse-response measurements at all of the tree’s ends except possibly one. The algorithm involves solving a boundary integral equation which is mathematically solvable in the case of perfect measurements and a perfect model (see the appendix). We wrote a step-by-step reconstruction algorithm and tested it on two numerical examples. The first one uses a perfectly discretized measurement, and the second one has discretization and numerical error introduced on purpose. Even with these errors, which are very typical of real-world measurements, Tikhonov regularization allows us to reconstruct the internal cross-sectional area with good precision. However this is only a first study of this algorithm, and a more in-depth investigation would be required for a more complete picture of its performance in the presence of noise and other errors in the data. The theory is based on our earlier work [@area1] on solving for the area of one pipe, and on an iterative time-reversal boundary control algorithm [@Oksanen] in the context of multidimensional manifolds.
We assumed in several places that the wave speed $a(x)$ is a known constant. What if it is not? We considered this situation for a single pipe in our previous article [@area1]. There, if the area is known and the speed is unknown, the algorithm can be slightly modified to determine the wave speed profile along the pipe. If both the wave speed and the area are unknown, it can determine the hydraulic impedance $Z = a/gA$ as a function of the travel-time coordinate. However this is not so straightforward in the network case because of the more complicated geometry. What one should do first is find an algorithm for the following problem: the wave speed is constant and known, the area is unknown, the topology of the network is known, but the pipe lengths are unknown. We leave this problem for future work as this article is already quite long. But it is an important question, because in some applications the anomaly is a loss in pipe wall thickness rather than a change in pipe area, and this is often revealed by finding the wave speed along the pipes while assuming that the area remains unchanged [@Gong2014].
In \[exp1\] we did not use regularization and the solution was perfect, as shown in \[fig\_ex1\]. However here we had the simplest tree network, the simplest area formula, and perfectly discretized measurements.
In \[exp2\] we simulated the measurements using a finite difference time domain (FDTD) method with Courant number smaller than one. Using Tikhonov regularization was essential, and produced the reconstruction shown in \[fig\_ex2\]. Without regularization one would get a large artifact, as in \[fig\_artifact\].
![Solving without regularization[]{data-label="fig_artifact"}](need_regu){width="\textwidth"}
The reason for using imperfect measurements is to demonstrate the algorithm’s stability. We could have calculated the impulse-response matrix using the more exact method of characteristics. However in reality, when dealing with data from actual measurement sensors, one would never get such perfect inputs to the inversion algorithm: first of all there are modelling errors, and secondly, measurement noise. By using FDTD with a Courant number smaller than one and by inputting measurements with a rougher discretization than in the direct model we purposefully seek to avoid inverse crimes, as described in Section 1.2 of [@KaipioSomersalo].
An inverse crime occurs when the measurement data are simulated with the same model that the inversion algorithm assumes. In such cases the numerical reconstruction typically looks unrealistically good, even with added measurement noise, and therefore does not reflect the method’s actual performance in practice, where models are always approximations.
Several directions of investigation remain open. A more in-depth numerical study should be conducted, with proper statistical analysis, for example to determine the signal-to-noise ratio that still gives meaningful reconstructions. There is also the issue of measuring the impulse-response matrix. It would be highly desirable to avoid closing almost all the valves in the network in order to perform the measurements. Hence one could investigate other types of boundary measurements and study how to process them to recover the impulse-response matrix used here. To make further savings in assessing the water supply network's condition, one could also try implementing the theory of [@Kurasov--tree] as an algorithm. Their theoretical result only requires the so-called “backscattering data” from the network: instead of measuring the full impulse-response matrix $(K_{ij})_{i,j=1}^N$ it would be enough to measure its diagonal $(K_{ii})_{i=1}^N$. In other words, the pressure needs to be measured only at the pipe end where the flow is injected, so the same type of measurement as for a single pipe in [@area1] would be performed at each accessible network end. For a network with $N+1$ ends this translates into $N$ measurements instead of the $N^2$ needed for the full IRM.
Lastly, we comment on our algorithm. An interesting feature is that there is no need to solve for the area in the whole network at once. To save computational cost one could, for example, reconstruct the cross-sectional area only in a small region of interest in a single pipe, even when the network is otherwise quite large. Alternatively, one could parallelize the process and solve for the whole network quickly using multiple computational cores. Neither option would likely be possible with only the backscattering data described above.
The steps in \[alg1,alg2\] describe the reconstruction process in detail. In essence, only a single matrix inversion (or least-squares or other minimization) is required to solve for the area at any given point in the network. The size of this matrix depends on how finely the impulse-response matrix has been discretized and on how far the point is from the boundary. The coarser the discretization, the faster this step; however, a coarse discretization leads to a larger numerical error, and one might then need a regularization scheme. All the numerical examples in this article were computed on an office laptop from 2008 and took a few minutes, including simulating the measurements. The algorithm is computationally light, simple to implement, and thus suitable for practical applications.
Acknowledgements
================
This research was partly funded by the following grants.
- T21-602/15R Smart Urban Water Supply Systems (Smart UWSS), and
- 16203417 Blockages in water pipes: theoretical and experimental study of wave-blockage interaction and detection.
Furthermore, all authors declare that there are no conflicts of interest regarding this paper.
Appendix
========
Existence of a solution
-----------------------
We list the steps involved in proving that the equation in \[treeEquationForH1\] has a solution when the impulse-response matrix has been measured exactly and there are no modelling errors. Since the ultimate goal of our work is assessing the quality of water supply network pipes, in practice there will be both modelling errors and measurement noise, so we do not give a complete formal proof. However, we give enough detail that a mathematician specialized in the wave equation can fill in the gaps after choosing suitable function space classes and assumptions for the various objects.
1. \[a\] Given $p\in\mathbb G \setminus \mathbb J$ which defines the admissible set $D_p$ and action time $f$ according to \[admissibleSet\], and with $\tau > \max f$ and $h_0>0$, we want to solve $$\begin{aligned}
h_0 &= \frac{a}{A(x_j)}\nu(x_j) Q_p(t,x_j) \\ &\qquad + \sum_{x_i
\in \partial\mathbb G \setminus \mathbb J} \frac{\nu(x_i)}{2}
\int_0^\tau Q_p(s,x_i) \big( k_{ij}({\left\lvert t-s \right\rvert}) +
k_{ij}(2\tau-t-s)\big) ds
\end{aligned}$$ when $\tau-f(x_j) < t \leq \tau$, $x_j \in \partial \mathbb G
\setminus \{x_0\}$, and $Q_p(t,x_j) = 0$ when $t \leq \tau - f(x_j)$, $x_j\in\partial\mathbb G \setminus \{x_0\}$.
2. \[b\] We extend boundary data from the interval $(0,\tau)$ to $(-\infty,+\infty)$ and also absorb the interior boundary normal, by writing $$\mathscr Q_p(t,x_j) = \begin{cases} \nu(x_j) Q_p(t,x_j), &0\leq
t\leq\tau,\\ \nu(x_j) Q_p(2\tau-t,x_j), & \tau < t \leq 2\tau,
\\ 0, &\text{otherwise}. \end{cases}$$
3. \[c\] Solving $$\begin{aligned}
\tilde h_0(t,x_j) &= \frac{a}{A(x_j)} \mathscr Q_p(t,x_j)
\\ &\qquad+ \sum_{x_i \in \partial\mathbb G \setminus \mathbb
J} \frac{\chi(t,x_j)}{2} \int_0^{2\tau} \mathscr Q_p(s,x_i)
k_{ij}({\left\lvert t-s \right\rvert}) ds
\end{aligned}$$ in $$\begin{aligned}
L^2_0 &= \Big\{ F \in L^2(\mathbb R \times \partial\mathbb G
\setminus \{x_0\}) \,\Big|\, \\&\qquad F(t,x_j) = 0 \text{ if }
{\left\lvert t-\tau \right\rvert}\geq f(x_j),\, F(2\tau-t,x_j) = F(t,x_j) \Big\}
\end{aligned}$$ where $$\chi(t,x_j) = \begin{cases} 1, & {\left\lvert t-\tau \right\rvert}<f(x_j),\\ 0, &
{\left\lvert t-\tau \right\rvert}\geq f(x_j), \end{cases} \qquad \tilde h_0(t,x_j) =
h_0 \chi(t,x_j)$$ is equivalent to solving the equation of \[a\] in the corresponding $L^2$-based space. We equip $L^2_0$ with the inner product $$\langle A, B \rangle = \sum_{x_j\in\partial\mathbb G \setminus
\mathbb J} \int_0^{2\tau} A(t,x_j) \overline{B(t,x_j)} dt$$ where the complex conjugation can be ignored because all of our numbers are real.
4. \[d\] The equation in \[c\] can be written as $\tilde h_0
= \mathscr K \mathscr Q_p$ in $L^2_0$. Here we define the operator $\mathscr K$ by $$\begin{aligned}
\mathscr K F (t,x_j) &= \frac{a}{A(x_j)} F(t,x_j) \\&\qquad +
\sum_{x_i \in \partial\mathbb G \setminus \mathbb J}
\frac{\chi(t,x_j)}{2} \int_0^{2\tau} F(s,x_i) k_{ij}({\left\lvert t-s \right\rvert}) ds
\end{aligned}$$ and it is a well-defined operator if $k_{ij}$ is a distribution of order $0$, which is the case, for example, if the cross-sectional area function $A$ is piecewise smooth. It indeed maps $L^2_0 \to L^2_0$, preserving the support and time-symmetry conditions.
5. \[e\] We will show that $\mathscr K$ is a Fredholm operator $L^2_0\to L^2_0$ that is positive semidefinite.
6. \[f\] Recall the travel-time function $TT$, the action time function $f$ and the admissible set $D_p$ from \[a\]. Define $$\begin{aligned}
\Omega &= \Big\{(t,x) \in \mathbb R \times \mathbb G \,\Big|\,
\\&\qquad TT(x,x_j) + {\left\lvert t-\tau \right\rvert} < f(x_j) \text{ for some }
x_j \in \partial\mathbb G \setminus \{x_0\} \Big\}, \\ t_\pm(x) &=
\tau \pm \max_{x_j\in\partial\mathbb G\setminus\{x_0\}} \big(
f(x_j) - TT(x,x_j) \big).
\end{aligned}$$ The set $\Omega$ coincides with the set where unique continuation from the boundary holds, as in \[treeUniqCont\]. One can show that $$\Omega = \{(t,x)\in \mathbb R \times \mathbb G \mid x\in D_p, t_-(x)
< t < t_+(x) \}.$$
7. \[g\] Let $F\in L^2_0$ be given and fixed. Let $H,Q:\mathbb R
\times \mathbb G \to \mathbb R$ satisfy the network wave model of \[model\] with boundary flow $F$.
8. \[h\] We see that $H(t,x) = Q(t,x) = 0$ when $x\in D_p$ and $t\leq t_-(x)$. This follows from the zero initial conditions, finite speed of wave propagation and the definition of $t_\pm$ and the action time function $f$.
9. \[i\] Define $$S = \frac{\sqrt{g A}}{a} H + \frac{\mu}{\sqrt{g A}} Q$$ where $\mu(x)=+1$ if the positive direction of the coordinates at $x$ points towards $p$, and $\mu(x)=-1$ otherwise. Then one sees that $$\big( \partial_x - \tfrac{\mu}{a} \partial_t \big)(QH) = -
\frac{1}{2} \partial_t (S^2)$$ in $t\in\mathbb R$, $x\in \mathbb G \setminus \mathbb J$.
10. \[j\] By \[f,h,i\] we see that $$\begin{aligned}
-\frac{1}{2} \int_\Omega \partial_t (S^2) dt dx &= -\frac{1}{2}
\int_{D_p} \big(S^2(t_+(x),x) - S^2(t_-(x),x)\big) dx \\&=
-\frac{1}{2} \int_{D_p} S^2(t_+(x),x) dx \leq 0.
\end{aligned}$$
11. \[k\] By \[i,j\] and the divergence theorem we have $$\begin{aligned}
0 &\geq \int_\Omega \big( \partial_x - \tfrac{\mu}{a} \partial_t
\big)(QH) dt dx = \int_\Omega \nabla_{t,x} \cdot \big(
-\tfrac{\mu}{a} QH, QH \big) dt dx \\&= \int_{\partial\Omega}
\nu_{t,x} \cdot \big(-\tfrac{\mu}{a},1\big) QH d\sigma(t,x)
\end{aligned}$$ where $\nu_{t,x}$ is the external unit normal vector to $\Omega$ at $(t,x) \in \partial\Omega$.
12. \[l\] Let us split $\partial\Omega$ next. By \[f\] we see that $$\begin{aligned}
\partial\Omega &= \{ (t,x_j) \mid x_j\in\partial\mathbb
G\setminus\{x_0\}, {\left\lvert t-\tau \right\rvert}\leq f(x_j) \} \\ &\qquad \cup
\{(t,x) \mid x\in D_p, t=t_+(x) \} \cup \{(t,x) \mid x\in D_p,
t=t_-(x) \} \\ & \qquad \cup \{(\tau,p)\}.
\end{aligned}$$
13. \[m\] On the first set in \[l\] the external unit normal $\nu_{t,x_j}$ is given by $\nu_{t,x_j} = (0,-\nu(x_j))$ because $\nu$ was defined as the internal normal at the pipe ends.
14. \[n\] Consider the second set in \[l\]. On each individual pipe segment $P \subset D_p$ the map $x \mapsto t_+(x)$ is affine. How does $t_+$ change when we move from $x$ to $x+\Delta x$? Recall that ${\left\lvert \Delta x \right\rvert} = a {\left\lvert \Delta t \right\rvert}$; since $(t_+(x),x)$ follows the characteristics, $t_+(x+\Delta x) = t_+(x) \pm \Delta x / a$. Its value decreases if $\Delta x > 0$ and the positive direction of the coordinate $x$ points towards $p$. Both of these follow from \[f\] and the definition of the action time function $f$. By the definition of $\mu$ in \[i\] we see that $$\frac{\Delta x}{t_+(x+\Delta x) - t_+(x)} = - \frac{a}{\mu}.$$ Thus the normal has slope $\mu/a$ and so $$\nu_{t,x} = \frac{(a,\mu)}{\sqrt{a^2+\mu^2}} =
\frac{(a,\mu)}{\sqrt{1+a^2}}.$$
15. \[o\] By \[g,h,j,k,l,m,n\] we get $$\begin{aligned}
0 &\geq -\frac{1}{2} \int_{D_p} S^2(t_+(x),x) dx =
\int_{\partial\Omega} \nu_{t,x} \cdot \big(-\tfrac{\mu}{a},1\big)
QH d\sigma(t,x) \\ &= \sum_{x_j\in\partial\mathbb G\setminus
\mathbb J} \int_{\tau - f(x_j)}^{\tau+f(x_j)} -F(t,x_j) H(t,x_j)
dt \\ &\qquad + \int_{D_p} \frac{(a,\mu)}{\sqrt{1+a^2}} \cdot
\big( -\tfrac{\mu}{a}, 1 \big) (QH)(t_+(x),x) dx \\&\qquad +
\int_{D_p} \nu_{t,x} \cdot \big(-\tfrac{\mu}{a}, 1\big)
(QH)(t_-(x),x) dx + 0
\end{aligned}$$ which gives $$\sum_{x_j\in\partial\mathbb G\setminus \mathbb J} \int_0^{2\tau}
F(t,x_j) H(t,x_j) dt = \frac{1}{2} \int_{D_p} S^2(t_+(x),x) dx \geq
0$$ by the support condition of $F\in L^2_0$ given in \[c\]. In detail, the $D_p$-integrals vanish for the following reasons. Firstly, $(a,\mu) \cdot (-\mu/a,1) = 0$, making the first integral over $D_p$ vanish. Secondly, by \[h\], $(QH)(t_-(x),x)=0$ for $x\in D_p$. The integral over the singleton $(t,x) = (\tau, p)$ is zero because $QH$ is a function (not a more singular distribution), since $F$ is too.
16. \[p\] Recall \[HfromIRM\] and the splitting of the IRM $K$ into the impulse and response matrices in \[treeEquationForH1\]. Thus $$H(t,x_j) = \frac{a}{A(x_j)g} F(t,x_j) + \sum_{x_i\in\partial\mathbb
G\setminus \{x_0\}} \int_0^t F(s,x_i) k_{ij}(t-s) ds.$$
17. \[q\] Combining \[o,p\] and changing the order of integration in double integrals, we arrive at $$\langle \mathscr K F, F \rangle = \frac{1}{2} \int_{D_p}
S^2(t_+(x),x) dx \geq 0$$ for arbitrary $F\in L^2_0$ and where $S,H,Q$ are defined by \[g,i\].
18. \[r\] We cannot show that $\mathscr K F = 0$ implies $F=0$ unless the network is a single pipe. For a counterexample, choose a three-pipe network with constant unit area and wave speed, where each pipe has unit length. Send a pulse from one end and the same pulse with opposite sign from another end. These pulses meet at the junction at exactly the same time and cancel the pressure there, so nothing propagates into the third pipe. They then continue along the original two pipes until they hit the boundaries. The boundary flow $F$ is then chosen so that these pulses are absorbed completely instead of reflecting back. This boundary flow $F$ gives $\langle \mathscr K F, F \rangle = 0$ by \[q\], but $F\neq0$.
19. \[s\] The existence of a solution follows from the facts that $\mathscr K$ is self-adjoint and Fredholm and that $\tilde h_0 \perp \ker \mathscr K$; then there is $F$ with $\mathscr K F = \tilde h_0$. We show these next.
20. \[t\] If we assume that $A(x)$ is smooth enough, then the components of the response matrix $(k_{ij})_{i,j=1}^N$ contain at most a finite number of delta-functions, arising from the junctions of $\mathbb G$ (in the time interval $(0,2\tau)$), and are otherwise functions. Hence $\mathscr K$ can be split into a multiplication by non-vanishing functions, a finite number of translations that are also multiplied by such functions, and a smoothing integral operator which is compact $L^2_0\to L^2_0$. The identity and the translations multiplied by non-vanishing functions are Fredholm operators, and the rest is compact; hence the whole of $\mathscr K$ is Fredholm. The space $L^2_0$ is a Hilbert space, and $\mathscr K$ is easily seen to be symmetric on it because $k_{ij} = k_{ji}$. It is also bounded, hence self-adjoint. Thus $\mathscr K$ is a self-adjoint Fredholm operator.
21. \[u\] Let $w(s) = \int_{\mathbb G} H(s,x) \frac{g A}{a^2} dx$ where $H,Q$ are given by \[g\]. By differentiating with respect to $s$, using \[WH1\], calculating the $x$-integral, and integrating with respect to $s$ we see that $$\int_{D_p} H(\tau,x) \frac{g A}{a^2} dx = w(\tau) =
\sum_{x_j\in\partial\mathbb G \setminus\{x_0\}} \int_0^\tau \nu(x_j)
Q(s,x_j) ds$$ because $Q=0$ near the boundary point $x_0$ up to time $\tau$. This is the same as in \[impedanceByBoundaryData\].
22. \[v\] Let $F\in L^2_0$ with $\mathscr K F = 0$ and $H,Q$ as in \[g\]. Set $h_0=0$ in \[treeEquationForH1\] to conclude that $H(\tau,x)=0$ for $x\in\mathbb G$. Recall that the equations in that theorem are formulated using $\mathscr K$ here (see \[c\]).
23. \[w\] The time-symmetry of $F\in L^2_0$ gives the first equality below. \[u,v\] give the second and third equality, respectively. So $$\langle \tilde h_0, F \rangle = \sum_{x_j\in\partial\mathbb G
\setminus\{x_0\}} 2 h_0 \int_0^\tau F(s,x_j) ds = 2 h_0 \int_{D_p}
H(\tau,x) \frac{g A}{a^2} dx = 0.$$ Hence $\tilde h_0 \perp \ker\mathscr K = \ker\mathscr K^\ast$, the latter because of self-adjointness, so $$\tilde h_0 \in (\ker\mathscr K^\ast)^\perp =
\overline{\operatorname{ran} \mathscr K} = \operatorname{ran}
\mathscr K$$ because the range of a Fredholm operator is closed. In other words there is $F\in L^2_0$ such that $\mathscr K F = \tilde h_0$, and thus there is a solution to the equations in \[treeEquationForH1\].
Numerical reconstruction algorithm
----------------------------------
We describe here the numerical implementation of \[alg2\]. Its inputs include `tHist` and `K`, the discretized time-variable vector and a cell containing the discretized components of the response matrix $k=(k_{ij})_{i,j=1}^N$, and `f`, a vector modelling the discretized action time function. Other inputs are `a0` and `A0`, vectors containing the wave speed and cross-sectional area at the boundary points of the network, while `tau`, `g`, and `reguparam` are scalars representing $\tau$, the acceleration of gravity $g$, and the Tikhonov regularization parameter used in the inversion of \[eq:solveQ\]. Its output, `Qtau`, is a cell whose components are discretizations of $t\mapsto Q_p(t,x_j)$ that solve \[eq:solveQ\] for the various boundary points $x_j$.
function Qtau = makeHeq1(tHist, K, tau, f, a0, g, A0, reguparam)
dt = tHist(2)-tHist(1);
nBndryPts = size(K,1);
We first write the $N$ (`=nBndryPts`) integral equations containing the $N$ response function components, indexed by `j` (the pipe end where the pressure is measured) and `i` (the pipe end from which the pulse is sent). Recall the equation $$h_0 = \frac{a}{A(x_j)g} q(t,x_j) + \sum_{i=1}^N \frac12 \int_0^\tau q(s,x_i) \big( k_{ij}({\left\lvert t-s \right\rvert}) + k_{ij}(2\tau-t-s) \big) ds$$ for $j=1,\ldots,N$ and $\tau-f(x_j) < t \leq \tau$. Recall also that we must still set $q(t,x_j) = 0$ when $t\leq \tau-f(x_j)$. Our numerical implementation assumes that $h_0=1$.
We will write the equation above as `H*q = RHS`, where $q$ is a block vector whose blocks are indexed by $i$, with $$q^i = [q(dt,x_i); q(2\,dt,x_i); ... ; q(M\,dt,x_i)]$$ and $M = \lfloor{\tau/dt}\rfloor$. Each component of `RHS` is either $h_0=1$ or $0$. We use a piecewise constant approximation of all of these functions and equations, and discretize `t=l*dt`, `s=k*dt` with `l,k=1,...,M`.
``` {startFrom="last"}
M = floor(tau/dt); % Number of time-steps of length dt in (0,tau).
% Discard unneeded measurements:
tHist = tHist(1:2*M);
for i = 1:nBndryPts
for j = 1:nBndryPts
K{i,j} = K{i,j}(1:2*M);
end
end
tVec = (1:M)*dt; lVec = 1:M;
sVec = (1:M)*dt; kVec = 1:M;
[t,s] = ndgrid(tVec,sVec);
[l,k] = ndgrid(lVec,kVec);
```
Next, set the tolerance for floating point comparisons: if two numbers are within the tolerance of each other, they are considered equal. Without this, numerical errors can occur when comparing two floating point numbers that are close to each other, which would cause errors of the order of $dx$ in the area reconstruction. We can now start discretizing the integral equation.
``` {startFrom="last"}
tol = dt/4;
Mij = nan(M,M); % Block (j,i) of matrix H
H = nan(nBndryPts*M, nBndryPts*M); % The linear operator
h0j = nan(M,1); % Block of RHS corresponding to j
RHS = nan(nBndryPts*M,1); % The RHS vector having 1's or 0's
for j = 1:nBndryPts
h0j = ones(M,1);
h0j((tVec - (tau - f(j)) <= tol)) = 0;
RHS( (j-1)*M + (1:M) ) = h0j;
for i = 1:nBndryPts
% Use 2*M-(l-1)-(k-1)=2*M+2-l-k for the time-reversal.
Mij = 0.5 * dt * ( K{i,j}(1+abs(l-k)) + K{i,j}(2*M+2-l-k) );
```
Recall that $q(s,x_i)$ must be zero when $s \leq \tau-f(x_i)$. Hence the columns of the matrix that hit the indices corresponding to $s\leq\tau-f(x_i)$ should be zero. Similarly, when $t \leq \tau -
f(x_j)$ we want the equation to give $q(t,x_j) = 0$, so the part coming from the integral must be zeroed. Then add the “identity”-type part of the integral equation, and save the block into the final matrix.
``` {startFrom="last"}
Mij((s - (tau - f(i)) <= tol)) = 0;
Mij((t - (tau - f(j)) <= tol)) = 0;
if i==j
Mij = Mij + a0(j)/(A0(j)*g) .* eye(M);
end
H( (j-1)*M + (1:M), (i-1)*M + (1:M) ) = Mij;
end
end
```
Next, we solve the integral equation for $q$ using Tikhonov regularization. The simplest way would be to set `q = [H; sqrt(reguparam)*eye(size(H))] \ [RHS; zeros(size(RHS))]`, but it is unnecessarily slow. For a faster solution we first remove all the rows and columns which would read $0=0$. The vector `nz` marks the indices that are not forced to be zero, i.e. those where $\tau - f(x_i) < s$. We then remove the corresponding rows and columns from `H` and solve the reduced equation.
``` {startFrom="last"}
nz = false(nBndryPts*M,1);
for i=1:nBndryPts
nz( (i-1)*M + (1:M) ) = ((sVec - (tau - f(i))) > tol);
end
H = H(nz, nz);
qnz = [H; sqrt(reguparam)*eye(size(H))] ...
\ [RHS(nz); zeros(size(RHS(nz)))];
q = zeros(nBndryPts*M,1);
q(nz) = qnz;
```
Finally split the vector `q` into a cell whose components correspond to the boundary flows at the different boundary points.
``` {startFrom="last"}
Qtau = cell(nBndryPts, 1);
for i=1:nBndryPts
Qtau{i} = q( (i-1)*M + (1:M) );
end
end
```
[^1]: $\delta_0(t)$ has dimensions of time$^{-1}$ because $\int \delta_0(t) dt = 1$ and $dt$ has dimensions of time.
[^2]: $k_{ij}$ might be a distribution if $A$ is not smooth enough.
---
author:
- 'K.D. Borne'
- 'H. Bushouse'
- 'L. Colina'
- 'R.A. Lucas'
- 'A. Baker'
- 'D. Clements'
- 'A. Lawrence'
- 'S. Oliver'
- 'M. Rowan-Robinson'
subtitle: Evidence for Multiple Mergers
title: A Morphological Classification Scheme for ULIRGs
---
Introduction
============
The Hubble Space Telescope (HST) has been used to study a large sample of ultraluminous IR galaxies (ULIRGs). With a rich legacy database of $\sim$150 high-resolution images, we are studying the fine-scale structure of this unique collection of violently starbursting systems (@Borne97a 1997a,b,c,d). We review here some of the latest results from our survey.
An HST Imaging Survey
=====================
Our combined data set includes $\sim$120 WFPC2 I-band (F814W) images and $\sim$30 NICMOS H-band (F160W) images. These are being used for multi-color analyses over a significant wavelength baseline. The NICMOS images are used specifically to probe through some of the dust obscuration that plagues the shorter wavelength images. The full set of images is being used to study the galaxies’ cores and starburst regions. Nearly all ULIRGs show evidence for a recent tidal interaction, and we are identifying the merger progenitors near the center of each galaxy, a task made significantly easier with the H-band images. These images are proving to be particularly useful in mapping the spatial distribution of starburst activity in these galaxies, in verifying the presence or absence of a bright active nucleus, in deriving the distribution (in both size and luminosity) of the multiple nuclei seen in these galaxies, and in determining if these multiple cores are the merger’s remnant nuclei or super star clusters formed in the merger/starburst event.
HST Results and Serendipity
===========================
Several new discoveries have been made through our HST imaging survey of the ULIRG sample (see Fig. \[kdb:figure1\] for some representative images). A few specific results are presented in the following discussion.
Mrk 273 = IR13428$+$5608
------------------------
A strong dust lane and a system of extended filaments have been discovered near the center of Mrk 273. The filaments are similar to those seen in M82 and are probably indicative of a strong outflow induced by a massive starburst. The central region of the galaxy contains several separate cores, which may be remnant cores from more than 2 merging galaxies.
The SuperAntennae = IR19254$-$7245
-----------------------------------
One of the most interesting ULIRGs is IR19254$-$7245 (the SuperAntennae; @Mirabel91 1991). With a morphology similar to the Antennae galaxies (N4038/39; @Whitmore95 1995), it is clearly the result of a collision between two spirals. In the case of the SuperAntennae, the tidal arms have a total end-to-end extent of 350 kpc — 10 times larger than the Antennae! We have resolved the two galaxies’ nuclei (8$''$ separation) and have discovered a small torus with diameter $\approx 2''$ around the center of each galaxy (Fig. \[kdb:figure1\]$c$). The southern component is known to have an active nucleus and the torus may be related to the AGN. It is possible that the northern galaxy also hosts an AGN, but the active nucleus is obscured from view by a large column of dust. Follow-up higher-resolution images with the HST PC have revealed a double-nucleus at the center of [*[each galaxy]{}*]{}, clearly suggesting a [*[multiple-merger origin]{}*]{} for the SuperAntennae.
Interaction / Merger Fraction
-----------------------------
Given the high angular resolution ($\sim$0.1–0.2$''$) of our HST images, a number of ULIRGs that were previously classified as “non-interacting” have now revealed secondary nuclei at their centers (remnant nuclei from a merger event?) and additional tidal features (tails, loops). An example of one such system is shown in Figure \[kdb:figure1\]$e$. It now appears that the fraction of ULIRGs that show evidence for interaction is very close to 100%. Observational estimates of this number have varied from 30% to 100% over the past 10 years, but it now seems to be converging on a value significantly above 90% (as indicated in the early work by @Sanders88 1988).
AGN Fraction
------------
The most significant question about the ULIRG phenomenon is the nature of the power source. That power source is generating the ultra-high IR luminosities ($L_{\tt{IR}} > 10^{12}L_\odot$) through dust heating and the corresponding conversion of UV/optical radiation into IR radiation. @Veilleux97 (1997) have shown that the frequency of AGN-powered ULIRGs increases sharply at $L_{\tt{IR}} \geq 10^{12.3}L_\odot$. It is very likely then that a combination of starburst power and AGN power is responsible for the ULIRG phenomenon among the various galaxies comprising the whole sample, and it is even possible that both power sources contribute energy in unique proportions within each individual ULIRG. In the latter scenario, the power source for the higher-luminosity ULIRGs is mainly the AGN and for the lower-luminosity ULIRGs (still quite luminous) it is the starburst. We have noted a particular morphological tendency in our HST images: objects whose nuclei appear most star-like (i.e., unresolved) also seem to be those that have been classified (from ground-based [*[spectroscopic]{}*]{} observations) to be AGN. About 15% of our total sample have unresolved nuclei (similar to the AGN fraction found by @Genzel98 1998, and others). This may represent the true fraction of ULIRGs that are dominated by an AGN power source. In the other cases, the observed near-IR flux (in HST images) is clearly spatially distributed among numerous bright star-forming (starbursting) knots, which therefore are very likely the primary energy sources for dust-heating.
Morphological Classification of ULIRGs
======================================
We have examined a complete subsample of ULIRG images and have identified 4 main morphological classes, plus 2 additional sub-classes (which are included in the main classes for statistical counting purposes). Figure 1 depicts 6 representative ULIRGs, one from each of these classes:
1. Strongly Disturbed Single Galaxy (Fig. 1$a$)
2. Dominant AGN/QSO Nucleus (Fig. 1$b$)
3. Strongly Interacting Multiple-Galaxy System (Fig. 1$c$)
4. Weakly Interacting Compact Groupings of Galaxies (Fig. 1$f$)
5. Collisional Ring Galaxy (Fig. 1$d$)
6. Previously Classified Non-Interacting Galaxy (Fig. 1$e$)
Class Number Fraction $<log L_{IR}/L_\odot>$ Notes
----------------------- --------- ------------ ------------------------ ----------------
Disturbed Singles 30 34% 11.85 morph. class 1
AGN/QSO Nucleus 13 15% 11.74 morph. class 2
Interacting Multiples 29 33% 11.81 morph. class 3
Compact Groupings 14 16% 11.94 morph. class 4
Collisional Rings 1-3 $\sim$1-3% ... re-classified
“Non-Interacting” $\sim$5 $\sim$5% ... re-classified
: ULIRG Morphological Classes
We show in Table 1 the distribution of ULIRGs among these morphological classes. It is seen here that there is little luminosity dependence among the classes and that there is a roughly equal likelihood that a ULIRG will appear either single (classes 1 and 2) or multiple (classes 3 and 4).
Evidence for Multiple Mergers
=============================
Many of the recent results on ULIRGs point to a complicated dynamical history. It is not obvious that there is a well-defined dynamical point during a merger at which the ULIRG phase develops, nor is it clear what the duration of the ultraluminous phase is. Our new HST imaging surveys indicate that the mergers are well developed (with full coalescence) for some ULIRGs, others show clear evidence for 2 (or more) nuclei, while still others ($\sim$5%) can best be described as wide binaries, still a long way from coalescence. One possible explanation for this [*[dynamical diversity]{}*]{} has been proposed recently by @Taniguchi98 (1998). They suggested a multiple-merger model for ULIRGs. In this scenario, the existence of double nuclei is taken as evidence of a second merger, following the creation of the current starburst nuclei from a prior set of mergers. In fact, this would indicate for some systems (with double AGN or double starburst nuclei) that the currently observed merger is the third (at least) in the evolutionary sequence for that galaxy. This may seem unrealistic, but it may not be so unreasonable if these particular ULIRGs are [*[remnants of previous compact groups of galaxies]{}*]{}. Compact groups are known to be strongly unstable to merging, and yet examples are seen in the local (aged) universe. These may be the tail of a distribution of dynamically evolving galaxy groups. Similarly, the ULIRGs are presumed to be historically at the tail end of a distribution of major gas-rich mergers. A connection between the two populations, if only in a few cases, is therefore not unreasonable. Figure \[kdb:figure2\] presents images of 12 ULIRGs from our HST sample that appear to have evolved from multiple mergers. The evidence for this includes: $>$2 remnant nuclei, or $>$2 galaxies, or an overly complex system of tidal tails, filaments, and loops.
Acknowledgments
===============
Support for this work was provided by NASA through grant numbers GO–6346.01–95A and GO–7896.01–96A from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5–26555.
Borne, K.D., [*[et al.]{}*]{} (1997a), in [*[IAU Symposium 179: New Horizons from Multi-Wavelength Sky Surveys]{}*]{}, Dordrecht: Kluwer, pp. 275–277.
Borne, K.D., [*[et al.]{}*]{} (1997b), in [*[Star Formation, Near & Far]{}*]{}, Woodbury: AIP, pp. 295–298.
Borne, K.D., [*[et al.]{}*]{} (1997c), in [*[Extragalactic Astronomy in the Infrared]{}*]{}, Paris: Editions Frontieres, pp. 277–282.
Borne, K.D., [*[et al.]{}*]{} (1997d), in [*[The Ultraviolet Universe at Low and High Redshift]{}*]{}, Woodbury: AIP, pp. 423–428.
Genzel, R., [*[et al.]{}*]{} (1998), [*[ApJ]{}*]{}, [**[498]{}**]{}, pp. 579–605.
Mirabel, I.F., Lutz, D., & Maza, J. (1991), [*[A&A]{}*]{}, [**[243]{}**]{}, pp. 367–372.
Sanders, D.B., [*[et al.]{}*]{} (1988), [*[ApJ]{}*]{}, [**[325]{}**]{}, pp. 74–91.
Taniguchi, Y., & Shioya, Y. (1998), [*[ApJ]{}*]{}, [**[501]{}**]{}, pp. L167–L170.
Veilleux, S., Sanders, D.B., & Kim, D.-C. (1997), [*[ApJ]{}*]{}, [**[484]{}**]{}, pp. 92–107.
Whitmore, B.C., & Schweizer, F. (1995), [*[AJ]{}*]{}, [**[109]{}**]{}, pp. 960–980.
---
abstract: 'This paper focuses on the design and implementation of a high-quality and high-throughput true-random number generator (TRNG) in an FPGA. Various practical issues which we encountered are highlighted, and the influence of the various parameters on the functioning of the TRNG is discussed. We also propose parameter values which use a minimal amount of resources while still passing common random number generator test batteries such as DieHard and TestU01.'
author:
- |
Cristian KLEIN$^1$\
Technical University of Cluj-Napoca\
E-mail: [email protected]
- |
Octavian CRET$^2$\
Technical University of Cluj-Napoca\
E-mail: [email protected]
- |
Alin SUCIU$^2$\
Technical University of Cluj-Napoca\
E-mail: [email protected]
bibliography:
- 'main.bib'
title: Design and Implementation of a High Quality and High Throughput TRNG in FPGA
---
Introduction
============
Random numbers are at the very core of cryptographic algorithms. They are used to generate either the public / private key pair in asymmetric algorithms, or the shared secret / initialisation vector in symmetric cyphers. The ability of an adversary to predict the random numbers used voids the security of the cypher. In fact, the only cypher whose security is proven to be perfect (the one-time pad) relies on the random number source being perfectly uniform and unpredictable.
Random number generators are of two types. The first, *pseudo-random number generators* (PRNGs), are those whose next output can be predicted by anyone who designed the system or has access to its internal state. Such a system is a deterministic finite state machine, whose evolution can usually be described by an arithmetic formula determining the transition from one internal state to the next, while a random number is output based on a portion of the state. PRNGs have the advantage of high speed, and some of them are cryptographically secure. However, all of them require an initial state (also called the *seed*), which determines the sequence of numbers that will be generated. The importance of properly seeding a pseudo-random number generator has recently been highlighted by a Debian security vulnerability [@De08].
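As a minimal illustration of this deterministic, seed-driven behaviour (our own toy example in Python, unrelated to any generator analysed in this paper), consider a linear congruential generator; the constants are the classic Numerical Recipes values, and anyone who knows the state can reproduce the entire output sequence:

```python
class LCG:
    """Toy pseudo-random number generator: a deterministic state machine."""

    def __init__(self, seed):
        self.state = seed  # the internal state, initialised from the seed

    def next_u32(self):
        # One state transition, given by a fixed arithmetic formula.
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state  # the output is derived from the new state


gen = LCG(seed=42)
print([gen.next_u32() for _ in range(3)])  # the same seed always yields this sequence
```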
The second type of random number generators are *true-random number generators* (TRNGs), whose output cannot be predicted, not even by the person who designed them. They are usually based on sampling some kind of physical phenomenon (such as noise) which contains a lot of randomness. Although one would be tempted to use only TRNGs in cryptography, their smaller throughput prohibits this, so they are commonly used to seed PRNGs.
FPGAs are becoming a popular choice for implementing cryptographic devices, since they represent a middle ground between the flexibility of a microprocessor and the speed of an ASIC. They allow creating high-throughput cryptographic devices while at the same time making it possible to change or improve the underlying algorithms, should a security flaw be discovered.
Many papers [@Ko04][@Su06][@Sc06][@Ts03][@Si05] have explored the possibility of implementing TRNGs in FPGAs, motivated by the avoidance of additional hardware and by the impossibility of intercepting the data stream between the TRNG and the actual cryptographic implementation. While all of them claim to obtain a good-quality TRNG, few explicitly mention the methods involved in transforming hardware which is supposed to work predictably into a source of entropy.
This paper elaborates on the design and implementation of the TRNG principle presented by Martin and Stinson in [@Su06] and highlights a few practical issues encountered while implementing a high-quality TRNG based on it. We identified a few generic parameters, whose influence on the TRNG will also be presented in this paper. The ultimate purpose is to enable the reader to easily implement this TRNG on a low-cost FPGA development board, such as one featuring a Xilinx Spartan 3E.
Principle
=========
Like many TRNGs implemented in FPGAs, this design is based on sampling jitter. Due to various noise sources, such as those induced by the power supply and by nearby components, the result of “demanding a 0 or 1” on the transition slope of an output is unpredictable. This is because each technology defines a low (L) threshold, the upper limit for voltages representing a logic 0, and a high (H) threshold, the lower limit for logic 1; the output behaviour between these two values is not well defined. This can be modelled as if the output of the component had a perfectly vertical slope, but with the time of the transition unknown, ranging from the beginning until the end of the real slope (figure \[fig:jitter\]).
![The Jitter Model[]{data-label="fig:jitter"}](jitter-crop.pdf){height="5cm"}
In order to produce jitter, TRNGs employ one or more ring oscillators (ROs) (figure \[fig:ro\]). These are composed of a ring of an odd number of inverting elements and an arbitrary number of delaying elements. The simplest RO consists of a single inverter and a buffer. The output of an RO is never stable; it transitions from 0 to 1 and back to 0 with a frequency determined by the propagation delay of the constituent elements. Due to the phenomenon described above, the period of an oscillation is not constant but varies by a small amount each cycle. This is the manifestation of jitter and the source of entropy which our TRNG collects.
![Ring Oscillator (RO)[]{data-label="fig:ro"}](ro-crop.pdf){width="5cm"}
Our first attempt was to create a TRNG based on [@Ko04], which uses one RO to sample the output of another RO. We appreciated this approach because the whole stream before the post-processing phase is random (although it might be slightly biased). We also favoured this design because, if there is some kind of predictable jitter (e.g. coming from the power source), both ROs are influenced in the same way, which should cancel out at the sampler. However, we found that putting this design into practice is very challenging. The two ROs have to be nearly identical, which requires manual placing and routing. Even after achieving that, the design proved to be very sensitive to other components in the FPGA. At the time of writing we have been able to create such a TRNG which outputs very good numbers over a serial interface, but have been unable to obtain good-quality random numbers at the TRNG’s highest speed.
Therefore, we chose to implement [@Su06], which uses multiple ROs whose outputs are XOR-ed. A flip-flop clocked at a fixed frequency samples the combined output of the ROs. The resulting stream will hit both jitter zones (our source of entropy) and flat zones (which are highly predictable). A post-processing phase is therefore required, consisting of a resilience function [@Resilience]. In essence, the function takes an $m$-bit input, of which $n$ bits are known to be random (although we cannot determine which ones), and outputs $n$ bits which are known to be random. For $n=1$, the simplest resilience function is to XOR all the input bits: suppose all but one bit are deterministic and the remaining bit is equally likely to be 0 or 1; then the output of the XOR is also equally likely to be 0 or 1.
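As a software-only illustration of this $n=1$ resilience function (not the hardware block itself), the Python sketch below XORs one uniformly random bit with several deterministic bits; the output is then itself uniformly random:

```python
import secrets

def xor_resilience(bits):
    """n = 1 resilience function: the XOR (parity) of all input bits."""
    out = 0
    for b in bits:
        out ^= b
    return out

deterministic_bits = [1, 0, 1, 1, 0, 1, 0]   # samples from "flat zones"
random_bit = secrets.randbits(1)             # one sample from a "jitter zone"
# As long as one input bit is uniform and independent, the XOR output is uniform.
print(xor_resilience(deterministic_bits + [random_bit]))
```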
Implementation Issues
=====================
Creating ROs in VHDL
--------------------
Our first goal was to create a VHDL component which would implement a ring oscillator with a parametrised length.
We first studied which resources are available in the FPGA for creating ring oscillators. The main building blocks of the FPGA, the CLBs, are the only elements that actually contain logic, and they are interconnected by a network of routing wires. A CLB contains a LUT, an inverter and a memory element which can be used either as a latch or as a flip-flop. The output of the latch / FF goes directly out of the CLB into the interconnection network. Two CLBs are grouped together in a slice; however, in order to connect the output of one latch / FF to the other CLB in the same slice, the wire has to exit the slice, go through the interconnection network and re-enter the CLB. From Xilinx’s reports we noticed that the main delays in the FPGA come from latches and routing, while the inverter induces a negligible delay. Another interesting observation is that during the mapping phase a [GLOBAL\_LOGIC1]{} signal is created which provides logic “1” to all the CLBs that require it.
Given the above, we chose to use a single inverter at the beginning of the chain and a variable number of latches as delay components (as in figure \[fig:ro\]). A single inverter allows us to create ROs with both even and odd numbers of latches. By default, Xilinx’s synthesis tool optimises out all but one latch, because they appear redundant from its point of view. To prevent this, we must set the “keep” attribute[@XiKeep] of the [d]{} bus which interconnects the latches:
attribute keep : string;
attribute keep of d : signal is "true";
This tells the synthesis and mapping tool that we want the individual [d]{} signals not to be absorbed into a CLB. Each of them must pass through the interconnection network, which forces the tools to map the redundant latches to CLBs.
To make sure that the inverter does not add more delay, we added the [not]{} keyword directly into the port map of the first latch, without assigning it a signal. This has the effect that the inverter and the first latch are mapped to the same CLB.
Sampler
-------
We chose to give the whole TRNG circuit the same interface as the one used by [@Ko04], to which we added an input clock signal (figure \[fig:trng\]). The [BitReady]{} output signal is high when the TRNG has a new random bit, which appears at the [RandomBit]{} pin. When the external circuit has stored the random bit, it acknowledges the TRNG by raising the [ReadAck]{} pin.
Although our particular TRNG is synchronous, all three signals are assumed to be asynchronous, both inside the TRNG and in the external circuitry that connects to it. We took this decision for two reasons: first, we wanted to be able to use an RO’s output as the sampling clock, which would make the TRNG truly asynchronous, and secondly, we wanted to use the very same design to test future TRNGs, which might be asynchronous in nature.
Resilience Function
-------------------
Contrary to the designs employed by others, we chose as the resilience function a simple XOR of $2^r$ bits (where $r$ is a generic parameter). We did this because we feared that a more complex resilience function might hide possible defects in our TRNG, which we obviously want to avoid. Moreover, some resilience functions (such as cyclic codes) are implemented using shift registers and XORs, which might act as a PRNG. We specifically want to test how well the TRNG works with minimal post-processing; using the TRNG to seed a PRNG (even a possibly weak one) would defeat the purpose of our paper.
High-throughput Measurements
----------------------------
It was very important for us to validate the TRNG at its maximum speed. We feared that the output interface from the FPGA to the computer (where the random bits are collected and analysed), whether RS232 or USB, would perform additional sampling of the (possibly only partially) random stream. This would give more optimistic results than if the TRNG were used only inside the FPGA.
In order to achieve this, we created a design which first fills a 16 Kbit BlockRAM with TRNG output and then transfers it to the output interface (figure \[fig:hbm\]). We think that this is very close to how a TRNG would be used in an FPGA cryptographic application: the cypher takes values from the entropy buffer, and while the algorithm proceeds, the TRNG refills the buffer.
The design is able to handle burst transfers from the TRNG. The data-in port of the RAM is directly connected to the TRNG’s output. The control signals of the address counter and the write-enable port of the RAM are driven by the [BitReady]{} port of the TRNG, provided the FSM is in the [FillRAM]{} state. A separate circuit drives the [ReadAck]{} port of the TRNG, setting it to $1$ at the very next rising clock edge, exactly when the RAM has stored the random bit.
The FSM which controls this circuit has 8 states (figure \[fig:fsm\]). The first, [Idle]{}, is the state the FSM enters immediately after reset. A transition is made immediately to the [PrepareFillRAM]{} state, which resets the address counter. Next, the [FillRAM]{} state allows the counter to increase and the RAM to store values whenever a new random bit is ready. The FSM stays in this state until the RAM is filled (i.e. the RAM address counter wraps around). The next three states ([ReadRAM]{}, [ShiftIn]{}, [CheckSR]{}) serialise the bits stored in the RAM into a byte to be transmitted to the UART module. The same counter is used to control the address of the RAM, but it is only incremented in the [ShiftIn]{} state. Finally, when a byte is complete (i.e. the [Serialiser]{} sets the [ready]{} port to 1), the FSM waits for the UART to complete the previous transmission ([WaitUART]{}) and then dispatches the data ([UARTSend]{}). If there is more data to transmit (i.e. the RAM address counter is non-zero), the FSM transitions to the [ReadRAM]{} state, serialising the next byte. If the whole contents of the RAM have been transmitted, the RAM is refilled with fresh random numbers by jumping to the [PrepareFillRAM]{} state.
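As a reading aid, the following Python snippet is a behavioural sketch of the transition logic just described (the real implementation is VHDL; the boolean flags stand in for the hardware conditions such as the RAM address counter wrapping around or the UART being idle):

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto(); PREPARE_FILL_RAM = auto(); FILL_RAM = auto(); READ_RAM = auto()
    SHIFT_IN = auto(); CHECK_SR = auto(); WAIT_UART = auto(); UART_SEND = auto()

def next_state(state, *, ram_full=False, byte_ready=False,
               uart_idle=False, ram_addr_zero=False):
    """One transition of the control FSM described above."""
    if state is State.IDLE:
        return State.PREPARE_FILL_RAM                    # leave reset state immediately
    if state is State.PREPARE_FILL_RAM:
        return State.FILL_RAM                            # address counter has been reset
    if state is State.FILL_RAM:
        return State.READ_RAM if ram_full else State.FILL_RAM
    if state is State.READ_RAM:
        return State.SHIFT_IN
    if state is State.SHIFT_IN:
        return State.CHECK_SR                            # address counter increments here
    if state is State.CHECK_SR:
        return State.WAIT_UART if byte_ready else State.READ_RAM
    if state is State.WAIT_UART:
        return State.UART_SEND if uart_idle else State.WAIT_UART
    if state is State.UART_SEND:
        # Refill the RAM once everything has been sent, otherwise serialise the next byte.
        return State.PREPARE_FILL_RAM if ram_addr_zero else State.READ_RAM
    raise ValueError(state)
```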
Tuning the TRNG
===============
A very important practical aspect of the TRNG is knowing the influence of its generic parameters on the quality of its output. We also wanted to test in practice the smallest amount of FPGA resources required for this TRNG. We used the DieHard [@DieHard] and TestU01 [@TestU01] (NIST, Rabbit and Alphabit battery) suites to test the quality of the TRNG output. We only considered parameters for which the output of the TRNG passed all tests, i.e. all DieHard p-values differ from 0 and 1, and TestU01 prints “All tests were passed”. All files which we downloaded were at least 10 MB in size, due to limitations in DieHard. Interestingly, the TestU01 library proved to be a lot more sensitive than DieHard.
The proposed TRNG has the following generic parameters: number of ring oscillators ($n$), length of ring oscillators ($l$), sampling frequency divisor ($2^d$) and resilience function input width ($2^r$).
The first two aspects in which we were interested are the throughput and the amount of resources this design uses. The throughput is easy to compute: the output rate of the TRNG is the input clock frequency divided first by the clock divider and then by the resilience function input width. The formula is:
$$b = \frac{f}{2^r * 2^d}$$
where $f$ is the input clock frequency of the TRNG and $b$ is the throughput in bps.
The amount of resources can also be easily estimated. Each RO uses $l$ CLBs. The XOR stage is synthesised as a tree of LUTs; because a slice of a Spartan 3E device contains 4-input LUTs, this stage needs $ \left\lceil \frac{n-1}{3} \right\rceil $ CLBs. The clock divider uses approximately $\frac{d}{4}$ CLBs. The counter uses about $\frac{r}{4}$ CLBs, while the AND stage at its output uses $ \left\lceil \frac{r-1}{3} \right\rceil $. The other components (sampler FF, resilience XOR and FF, acknowledge circuit) use 3 CLBs. Therefore the total number of CLBs used ($C$) is:
$$C = l + \left\lceil \frac{n-1}{3} \right\rceil + \frac{d}{4} + \frac{r}{4} + \left\lceil \frac{r-1}{3} \right\rceil + 3$$
During our experiments we found that the quality of the output bit stream increases with increasing $d$, $r$ and $n$. As the number of ring oscillators ($n$) increases, and because the ring oscillators do not have exactly the same frequency, the signal after XOR-ing them contains far more jitter zones than flat zones. This means that the sampler returns many more non-deterministic bits than deterministic ones. The more input bits the resilience function has, the more non-deterministic bits are XOR-ed with the deterministic bits, which in effect increases the chance of the TRNG outputting a truly random bit.
Regarding the clock divider, if $d$ is too small (so that the sampling frequency becomes comparable to the frequency of the ring oscillators), the sampler tends to hit the same flat zone or return the same non-deterministic bit several times. The resulting correlated bits can of course be eliminated in the resilience stage, provided that $r$ is large enough. We can clearly see that the well-known throughput vs. resources trade-off also holds for this TRNG.
We have not found any significant influence of $l$ on the quality of the random numbers. This might be because, while each delay element increases the output period of the ring oscillators, it also increases the amount of jitter, so the fraction of jitter after the sampling stage remains roughly the same. Although one is tempted to use ring oscillators of minimum length, we recommend $l\ge3$ to make sure that the system does not run out of jitter under extreme conditions such as sudden temperature variations.
In our experiments the parameter values presented in table \[tab:param\] produced a TRNG which passed all tests while minimizing the number of ring oscillators. Note that if one wants to be absolutely sure that the TRNG outputs high-quality random numbers, higher values should be used for $r$ or, if bandwidth is an issue, for $n$.
\[tab:param\]

   $d$   $r$   $n$   $l$   Throughput (kbps)
  ----- ----- ----- ----- -------------------
    0     2    20    3     12500
    0     3    10    3     6250
    2     2    10    3     3125
    5     3     5    3     195
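As a quick numerical sanity check of the two formulas above (a sketch only; the helper names are ours, and the 50 MHz input clock is an assumption corresponding to the nominal Spartan 3E starter-board oscillator), the script below tabulates the throughput and the CLB estimate for the parameter sets of table \[tab:param\] and reproduces the throughput column in kbps:

```python
from math import ceil

def throughput_bps(f_clk_hz, r, d):
    """Output bit rate b = f / (2**r * 2**d)."""
    return f_clk_hz / (2**r * 2**d)

def clb_estimate(n, l, r, d):
    """CLB count C exactly as given by the formula in the text."""
    return l + ceil((n - 1) / 3) + d / 4 + r / 4 + ceil((r - 1) / 3) + 3

f_clk = 50e6  # assumed 50 MHz input clock
for d, r, n, l in [(0, 2, 20, 3), (0, 3, 10, 3), (2, 2, 10, 3), (5, 3, 5, 3)]:
    kbps = throughput_bps(f_clk, r, d) / 1e3
    print(f"d={d} r={r} n={n} l={l}: {kbps:.0f} kbps, about {clb_estimate(n, l, r, d):g} CLBs")
```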
Speeding up the TRNG
====================
FPGAs are becoming large enough to allow massive pipelining of arithmetic operations and to compute one result per clock cycle. In some applications it might therefore be desirable to generate random numbers at the maximum frequency of the FPGA. In the above design, both the resilience function (characterised by $r$) and the sampling clock divider ($d$) lower the output rate of the TRNG. While we can set $d$ to zero, so that the sampling clock runs at maximum speed, we can never set $r$ to zero and at the same time obtain good-quality random numbers.
The first solution that comes to mind is to use multiple parallel TRNGs and multiplex their outputs. With the sampling clock divider equal to zero, each TRNG outputs one bit every $2^r$ cycles, which means that we would need $2^r$ TRNGs to generate one random bit on each FPGA clock cycle. While this solution would surely work (interleaving truly random streams yields another truly random stream), we wanted to find a design that minimises resource utilisation.
Our idea is that the resilience function is needed because not all of our bits are sampled from jitter; the same reasoning applies if we XOR bits coming from different samplers. This way, we save $2^r$ counters, FFs and AND gates and replace them with one big XOR.
Indeed, we have validated in practice that good-quality random numbers are generated using the above concept, for $8$ samplers and $20$ ROs per sampler. Interestingly, the number of samplers required is equal to the number of bits entering the resilience function in the design presented in figure \[fig:trng\].
Note, however, that for the mentioned values we used $160$ ROs, eight times more. An interesting question is whether this number of ROs could be used to generate a random bit stream without a resilience function ($d=0$ and $r=0$ in figure \[fig:trng\]). We have shown in practice that this is not possible, as explained in [@Sc06]. In essence, the probability of sampling a random bit increases with the number of ROs but never reaches 1, and the small percentage of correlated bits that results is enough to make the TRNG fail the quality tests.
Conclusion
==========
In this paper we have shown how a simple yet high-quality and high-throughput TRNG can be implemented on a low-end Xilinx Spartan 3E FPGA, and we have presented the main implementation issues one might encounter. We have also discussed the various parameters of the TRNG and their influence on the design. We believe that this paper paves the way to implementing secure cryptographic applications in low-end FPGAs without requiring any external components.
[10]{}
Debian Project. DSA-1571-1 openssl -- predictable random number generator, 2008. Available Online: <http://www.debian.org/security/2008/dsa-1571>.
Gopalakrishnan and Stinson. Applications of designs to cryptography. In [*Charles J. Colbourn and Jeffrey H. Dinitz (Eds.), The [CRC]{} Handbook of Combinatorial Designs, [CRC]{} Press*]{}. 1996. Available Online: <http://citeseer.ist.psu.edu/126555.html>.
P. Kohlbrenner and K. Gaj. An embedded true random number generator for [FPGAs]{}. In [*FPGA ’04: Proceedings of the 2004 ACM/SIGDA 12th international symposium on Field programmable gate arrays*]{}, pages 71–78, New York, NY, USA, 2004. ACM.
P. L’Ecuyer and R. Simard. TestU01: A C library for empirical testing of random number generators. ACM Transactions on Mathematical Software, 33(4):22, 2007.
G. Marsaglia. Diehard: Battery of tests of randomness. Available Online: <http://www.stat.fsu.edu/pub/diehard/>.
B. Sunar, W. J. Martin, and D. R. Stinson. A provably secure true random number generator with built-in tolerance to active attacks. IEEE Transactions on Computers, 56(1):109–119, 2007.
D. Schellekens, B. Preneel, and I. Verbauwhede. FPGA vendor agnostic true random number generator. In [*FPL*]{}, pages 1–6. IEEE, 2006.
K. H. Tsoi, K. H. Leung, and P. H. W. Leong. Compact fpga-based true and pseudo random number generators. In [*FCCM ’03: Proceedings of the 11th Annual IEEE Symposium on Field-Programmable Custom Computing Machines*]{}, page 51, Washington, DC, USA, 2003. IEEE Computer Society.
M. Šimka, M. Drutarovský, and V. Fischer. . In [*Workshop on Cryptographic Advances in Secure Hardware – CRASH 2005*]{}, Leuven, Belgium, Sept. 6–7, 2005.
Xilinx. Constraints Guide: KEEP. Available Online: <http://toolbox.xilinx.com/docsan/xilinx7/books/data/docs/cgd/cgd0109_70.html>.
---
abstract: 'We explore the possibility of achieving a significant nonlinear phase shift between photons propagating in nanoscale waveguides by exploiting interactions among photons that are mediated by vibrational modes and induced through Stimulated Brillouin Scattering (SBS). We introduce a configuration that allows slowing down the photons by several orders of magnitude via SBS involving sound waves and two pump fields. We extract the conditions for maintaining vanishing amplitude gain or loss for slowly propagating photons while keeping the influence of thermal phonons to a minimum. The nonlinear phase between two counter-propagating photons can be used to realize a deterministic phase gate.'
author:
- Hashem Zoubi
- Klemens Hammerer
date: 11 October 2016
title: Nonlinear Quantum Optics in Optomechanical Nanoscale Waveguides
---
The non-interacting nature of photons makes them efficient as carriers for quantum information [@Walmsley2015] but inefficient for information processing. Quantum nonlinear optics strives to induce controlled interactions at the few-photon level for fundamental physics and applications, e.g., for photonic switches, memory devices and transistors [@Chang2014; @Reiserer2015; @Firstenberg2016; @Murray2016]. The ultimate challenge is to achieve a nonlinear phase shift between two optical photons, realizing a quantum logic gate for photonic quantum information processing [@Imamoglu1997; @OBrien2007; @Kimble2008]. In recent decades several directions have been suggested for achieving effective photon-photon interactions. Among the first experiments was Cavity Quantum Electrodynamics (CQED) using atoms as a nonlinear medium [@Haroche2006; @Reiserer2015], which culminated in the recent demonstration of a deterministic quantum gate [@Hacker2016] along the lines suggested in [@Poyatos1997]. Avoiding the use of resonators, strong nonlinearities have been achieved for fields confined in waveguides, e.g., using a tapered nanofibre strongly coupled to an atomic chain [@Vetsch2010; @Goban2012]. The restrictions on bandwidth imposed by the cavity spectrum motivated the search for cavity-free environments [@Hammerer2010], for example, using Rydberg atoms in a dense medium [@Hau2008; @Gorshkov2011; @Peyronel2012; @Firstenberg2013] under the condition of Electromagnetically Induced Transparency (EIT) [@Harris1990; @Fleischhauer2005], and later by exploiting the blockade phenomenon [@Lukin2001; @Pritchard2010]. The significant enhancement of photon-photon interactions in the latter approach is mainly due to the achievement of slow light using EIT, which is subject to restrictions in bandwidth associated with the transparency window [@Petrosyan2011].
![(a) Schematic of the setup: Two signal fields at frequencies $\omega_{u(d)}$ propagate in a nanofibre of length $L$, and experience a cross-phase interaction mediated by SBS involving phonons of frequency $\Omega_v$. The effective group velocity $v_e$ is reduced due to EIT induced by counter-propagating pump fields. (b) Schematic dispersion of the lowest two, dispersion-less (solid) and acoustic (dashed), phonon branches. (c) Schematic dispersion of the fundamental photon mode. (d) Zoom in on the photon dispersion: signal fields (solid circle) interact via dispersion-less phonons (solid arrow), four pump fields (empty circle) induce EIT at the signal frequencies via acoustic phonons (dashed arrows), cf. Fig. \[Slow\]. (e) Level scheme with detuning $\Delta\Omega$ between signal fields and dispersion-less phonons.[]{data-label="NonPhase"}](Fig1){width="\linewidth"}
In parallel, optical fibres [@Zhu2007; @Thevenaz2008; @Douglas2015; @Goban2015] and photonic crystals [@Russell2006; @Baba2008; @Eichenfield2009] have received significant interest, as they can be easily integrated into all-optical on-chip platforms. In particular, optical fibres can realize tunable delays of optical signals, with the possibility of achieving fast and slow light in a comparatively wide bandwidth [@Okawachi2005; @Song2005; @Herraez2006]. The most efficient nonlinear process inside optical fibres is SBS, that is, the scattering of optical photons by long-lived acoustic phonons, commonly induced by electrostriction [@Kim2015]. Recent progress in the fabrication of nanoscale waveguides, in which the wavelength of light becomes larger than the waveguide dimension, has achieved a breakthrough in SBS [@Pant2011; @Shin2013; @Eggleton2013]. In this regime the coupling of photons and phonons is significantly enhanced due to radiation pressure dominating over electrostriction [@Rakish2012; @VanLaer2015a], with important implications for the field of Brillouin continuum optomechanics [@Hammerer2014; @Rakich2016].
In the present letter we introduce an efficient method for generating effective interactions among photons, induced through SBS involving vibrational modes in nanoscale waveguides. Our scheme crucially relies on achieving slow light by exploiting the significant scattering of photons from acoustic phonons. We study the correlations induced among slowly co- or counter-propagating photons, and show that a significant nonlinear phase shift can be accumulated along a cm-scale waveguide. We identify configurations where the slow group velocity of photons can be exploited without net gain or loss in photon number, which can be achieved using two pump fields. We also consider the effect of thermal fluctuations in the phonon modes and determine conditions under which their impact on the photon-photon interactions is negligible. Our treatment builds on the quantum mechanical Hamiltonian description of SBS in nanoscale waveguides recently developed in [@Sipe2016; @Zoubi2016]. Quantum nonlinear optics and photon phase gates have been discussed previously in cavity optomechanics [@Rabl2011; @Nunnenkamp2011; @Stannigel2012; @Ludwig2012; @Wang2016], generally assuming a large single-photon coupling (with the notable exception of [@Wang2016]). The results reported here relate to these previous schemes as the approach towards quantum nonlinear optics based on atomic ensembles relates to the one based on CQED.
We consider a cylindrical nanoscale waveguide of length $L$ on the cm scale, with four pump fields propagating from right to left and two signal fields containing a few photons propagating from left to right, which are coupled through SBS to vibrational modes of the fibre, as represented in Figure \[NonPhase\].a. The signal fields comprise wavenumbers centered around $k_u$ and $k_d$ at frequencies $\omega_u$ and $\omega_d$, respectively, as shown in Figures \[NonPhase\].c and \[NonPhase\].d. The fields are described by slowly varying amplitude operators $\psi_\alpha(x)$ where $\alpha=u,d$. For an effectively one-dimensional photon field the real space operator is expressed in terms of the momentum space one, $a_k$, by $\psi_{\alpha}(x)=\frac{1}{\sqrt{L}}\sum_{k\in B_{\alpha}}a_k e^{i(k-k_{\alpha})x}$. Here $B_{\alpha}$ denotes a suitable bandwidth of photon wavenumbers centered around the central wavenumber $k_{\alpha}$. The definition of $\psi_{\alpha}(x)$ implies $[\psi_{\alpha}(x),\psi_{\alpha}^\dagger(x')]=\delta(x-x')$, where the $\delta$-function is understood to be of width $\sim B^{-1}_{\alpha}$. Moreover, we consider (effectively) dispersion-less vibrational modes of frequency $\Omega_v$ and wavenumber $q_v$, which are represented by a slowly varying phonon field operator $Q(x)$, as shown in Fig. \[NonPhase\].b. The two photonic signal modes are detuned from the vibration by $\Delta\Omega=\omega_u-\omega_d-\Omega_v$, with a wavenumber mismatch $\Delta q=k_u-k_d-q_v$, cf. Fig. \[NonPhase\].e. The two signal fields are assumed to propagate at a slow group velocity $v_e$, which can be achieved by a proper choice of pump fields exploiting SBS involving acoustic phonons, as will be explained in detail further below. The Hamiltonian for the two slow signal fields and the vibrational modes reads [@Zoubi2016] $(\hbar=1)$ $$\begin{aligned}
H&=H_0-i v_e\sum_{\alpha}\int
dx\ \psi_{\alpha}^{\dagger}(x)\frac{\partial\psi_{\alpha}(x)}{\partial
x} \\
&\quad+\sqrt{L}\int dx\ \left(f_v\ Q^{\dagger}(x)\psi_d^{\dagger}(x)\psi_u(x)\ e^{i\Delta q x}+h.c.\right)\nonumber,\end{aligned}$$ where $H_0=\sum_{\alpha}\int
dx\omega_\alpha\psi_{\alpha}^{\dagger}(x)\psi_{\alpha}(x)+\Omega_v\int
dxQ^{\dagger}(x)Q(x)$. The frequency $f_v$ describes the strength of the SBS coupling between the two photonic signal fields and the vibrational field. In the local field approximation it is independent of the wavenumber. The corresponding equations of motion for the photon operators in an interaction picture with respect to $H_0$ are $$\begin{aligned}
\Big(\tfrac{\partial}{\partial t}+v_e&\tfrac{\partial}{\partial
x}\Big)\psi_u(x,t)=\nonumber \\
& -if_v\sqrt{L}\ Q(x,t)\psi_d(x,t)\ e^{i(\Delta\Omega t-\Delta qx)}, \nonumber \\
\Big(\tfrac{\partial}{\partial t}+v_e&\tfrac{\partial}{\partial x}\Big)\psi_d(x,t)=\nonumber \\
& -if_v\sqrt{L}\ Q^{\dagger}(x,t)\psi_u(x,t)\ e^{-i(\Delta\Omega t-\Delta qx)}.\label{EoM:amplitudes}\end{aligned}$$ The phonon operator evolves as $$\begin{aligned}
\Big(\tfrac{\partial}{\partial
t}+&\tfrac{\Gamma_v}{2}\Big)Q(x,t)=\\
&-if_v\sqrt{L}\ \psi_d^{\dagger}(x,t)\psi_u(x,t)\ e^{-i(\Delta\Omega
t-\Delta qx)}-{\cal F}(x,t),\nonumber\end{aligned}$$ where $\Gamma_{v}$ is the vibrational mode damping rate, and ${\cal F}(x,t)$ is the Langevin noise operator [@Boyd1990] fulfilling $[{\cal F}^\dagger(x,t),{\cal
F}(x',t')]=\Gamma_{v}\delta(x-x')\delta(t-t')$ and $\langle
{\cal F}(x,t){\cal
F}^\dagger(x',t')\rangle=\Gamma_{v}(\bar{n}_{v}+1)\delta(x-x')\delta(t-t')$, where $\bar{n}_{v}$ is the average number of thermal phonons. We assume that photon loss is negligible on the time scale $L/v_e$ of propagation of photons through the fibre; the dominant photon loss is expected to arise from the in- and out-coupling of photons to and from the nanofibre.
We now show that the two signal fields experience a significant cross-phase shift mediated through their off-resonant interaction with the vibrational field. For a sufficiently large detuning $\Delta\Omega>f_v$ the phonon field can be adiabatically eliminated from the equations of motion, giving rise to a closed set of equations for the photon fields, which can be integrated thanks to an (approximate) conservation of the number of photons in each mode. In order to demonstrate this we define the photon number density $\hat{N}_{\alpha}(x,t)=\psi_{\alpha}^{\dagger}(x,t)\psi_{\alpha}(x,t)$ for mode $\alpha=u,d$ and the total photon density $\hat{N}=\hat{N}_u+\hat{N}_d$. For the time being we drop the Langevin term and consider its influence in more detail later. Direct calculation using the change of variables $\xi=x-v_{e}t$ and $\eta=v_{e}t$, after adiabatic elimination of the phonons, yields $\frac{\partial}{\partial \eta}\hat{N}(\xi,\eta)=0$. Thus the total photon density is conserved during propagation through the fibre, $\hat{N}^\mathrm{out}(\xi)=\hat{N}^\mathrm{in}(\xi)$, where we use the definition of input and output operators $\hat{\mathcal{O}}^\mathrm{in[out]}(\xi)=\hat{\mathcal{O}}(\xi,0[L])$ for any observable $\hat{\mathcal{O}}(\xi,\eta)$. Moreover, one finds that the photon number densities $\hat{N}_{\alpha}(\xi,\eta)$ obey the Riccati equations [@SupMat] $$\begin{aligned}
\frac{\partial}{\partial
\eta}\hat{N}_u(\xi,\eta)&=-V\hat{N}(\xi)\ \hat{N}_u(\xi,\eta)+V\ \hat{N}_u^2(\xi,\eta),
\nonumber \\
\frac{\partial}{\partial \eta}\hat{N}_d(\xi,\eta)&=V\hat{N}(\xi)\ \hat{N}_d(\xi,\eta)-V\ \hat{N}_d^2(\xi,\eta).\end{aligned}$$ Here $V=\vartheta\frac{\Gamma_v/(\Delta\Omega)}{1+\Gamma_v^2/(4\Delta\Omega^2)}$, and $\vartheta=\frac{f_v^2L}{v_e\Delta\Omega}$ will turn out to be the nonlinear phase shift between the modes $u$ and $d$, see below. The input-output relations resulting from these equations are $\hat{N}_u^{\mathrm{out}}(\xi)=\hat{N}_u^{\mathrm{in}}(\xi)\hat{N}^\mathrm{in}(\xi)\big[\hat{N}_u^{\mathrm{in}}(\xi)+e^{VL\hat{N}^\mathrm{in}(\xi)}\hat{N}_d^{\mathrm{in}}(\xi)\big]^{-1}$ and $\hat{N}_d^{\mathrm{out}}(\xi)=\hat{N}_d^{\mathrm{in}}(\xi)\hat{N}^\mathrm{in}(\xi)\big[\hat{N}_d^{\mathrm{in}}(\xi)+e^{-VL\hat{N}^\mathrm{in}(\xi)}\hat{N}_u^{\mathrm{in}}(\xi)\big]^{-1}$, where we used that the input number density operators commute. For input states in the signal modes which fulfill $VL\langle\hat{N}^\mathrm{in}(\xi)\rangle\ll1$ the photon number in each mode is conserved, $\hat{N}^\mathrm{out}_\alpha(\xi)=\hat{N}^\mathrm{in}_\alpha(\xi)$, as we will assume in the following. It is interesting to note that in the opposite case the nonlinear interaction of photons acts as an incoherent adder in mode $d$, that is $\hat{N}_d^{\mathrm{out}}(\xi)=\hat{N}^{\mathrm{in}}(\xi)$, while $\hat{N}_u^{\mathrm{out}}(\xi)=0$.
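The structure of these input-output relations can be checked in the classical (mean-field) limit, where the densities are treated as c-numbers. The following Python sketch, which is only an illustration and not part of the derivation, integrates the Riccati equations numerically and compares the result with the closed-form relations quoted above; the values of $V$, $L$ and the input densities are arbitrary.

```python
# Minimal numerical check of the classical limit of the Riccati equations:
# dN_u/deta = -V*N*N_u + V*N_u**2 and dN_d/deta = V*N*N_d - V*N_d**2 are
# integrated over eta in [0, L] and compared with the closed-form
# input-output relations quoted in the text. All values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

V = 0.02              # effective interaction strength (illustrative)
L = 1.0               # propagation length in the co-moving coordinate eta
Nu0, Nd0 = 3.0, 7.0   # input photon number densities (illustrative)
N = Nu0 + Nd0         # conserved total density

def rhs(eta, y):
    Nu, Nd = y
    return [-V * N * Nu + V * Nu**2, V * N * Nd - V * Nd**2]

sol = solve_ivp(rhs, (0.0, L), [Nu0, Nd0], rtol=1e-10, atol=1e-12)
Nu_num, Nd_num = sol.y[0, -1], sol.y[1, -1]

# closed-form input-output relations from the text
Nu_out = Nu0 * N / (Nu0 + np.exp(V * L * N) * Nd0)
Nd_out = Nd0 * N / (Nd0 + np.exp(-V * L * N) * Nu0)

print(Nu_num, Nu_out)    # agree to numerical precision
print(Nd_num, Nd_out)    # agree to numerical precision
print(Nu_num + Nd_num)   # the total density N is conserved
```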
In the limit where both $\hat{N}_u$ and $\hat{N}_d$ are conserved during their propagation in the waveguide the input-output relations for the photon field operators are [@SupMat]
\[InOutAmpl\] $$\begin{aligned}
\psi^\mathrm{out}_u(\xi)&=\psi^\mathrm{in}_u(\xi)e^{-i\vartheta \hat{N}^\mathrm{in}_d(\xi)L}\\
&+\frac{i}{v_e}\int_0^{L}d\eta'\ U(\xi,\eta')\ \psi_d(\xi,\eta')e^{-i\vartheta \hat{N}^\mathrm{in}_d(\xi)\ (L-\eta')},
\nonumber \\
\psi^\mathrm{out}_d(\xi)&=\psi^\mathrm{in}_d(\xi)e^{-i\vartheta \hat{N}^\mathrm{in}_u(\xi)L} \\
&+\frac{i}{v_e}\int_0^{L}d\eta'\ U^\dagger(\xi,\eta')\ \psi_u(\xi,\eta')e^{-i\vartheta \hat{N}^\mathrm{in}_u(\xi)\ (L-\eta')},\nonumber\end{aligned}$$
with $U(x,t)=f_v\sqrt{L}\ e^{i(\Delta\Omega t-\Delta
qx)}\int_{0}^t dt'\ {\cal F}(x,t')e^{-\frac{\Gamma_v}{2}(t-t')}$. In the first line of both equations the nonlinear cross-phase shift $\vartheta$ appears in the exponent. The second line of each equation describes the contribution of thermal fluctuations of the phonon modes, which generate an incoherent mixing of the photon field amplitudes in modes $u$ and $d$. Using the properties of the Langevin force operators, the average numbers of photons at the waveguide output are given by $N^\mathrm{out}_u=N_u^\mathrm{in}+N_u^\mathrm{fluct}$ and $N^\mathrm{out}_d=N_d^\mathrm{in}+N_d^\mathrm{fluct}$, where $N_{\alpha}=\langle\hat{N}_{\alpha}\rangle$. The average numbers of incoherently added photons are [@SupMat] $N^\mathrm{fluct}_u\approx W\bar{n}_vN^\mathrm{in}_d$ and $N^\mathrm{fluct}_d\approx W(1+\bar{n}_v)N^\mathrm{in}_u$, where $W=\frac{L^3 \Gamma_vf_v^2}{v_e^3}$. For $W(1+\bar{n}_v)\ll 1$ the incoherently added photons make a small relative contribution.
As an example we consider a cylindrical silicon nanofibre of length $L=1$ cm and diameter $d=500$ nm, for which $f_v=3.3\times10^6$ Hz can be achieved, as we have shown in [@Zoubi2016]. At the same time one finds $\Omega_v=2\pi\times 10$ GHz for longitudinal modes, such that $\bar{n}_v\approx 0.1$ at $T=200$ mK. For a detuning of $\Delta\Omega=5\times 10^6~\mathrm{Hz}>f_v$ one obtains a significant nonlinear phase shift of $\vartheta\approx 1$ for an effective group velocity of $v_e\approx 2.2\times 10^4$ m/s, which is reachable in this system as discussed below. In order to guarantee a small number of incoherent excitations, $W\approx 0.1$, one has to require a mechanical quality factor $Q_v=6\times 10^5$ (that is, $\Gamma_v=10^5$ Hz). At the same time, this implies $V\simeq 0.02$, such that the number of photons in each mode is conserved as long as the input photon flux fulfills $v_e\langle\hat{N}^\mathrm{in}(\xi)\rangle \ll \frac{v_e}{VL}\simeq 10^8~\mathrm{sec}^{-1}$. The acceptable bandwidth of photons in the two modes $u$ and $d$ has to be small on the scale of the detuning, but may still be on the order of $500$ kHz. We emphasize that the nonlinear phase resulting from these parameters is of the same order as the one achieved using cold atoms exploiting the Rydberg blockade phenomenon.
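The numbers quoted in this paragraph follow directly from the definitions of $\vartheta$, $W$ and $V$ given above. The short Python sketch below simply reproduces this arithmetic; only the parameter values stated in the text enter, together with the standard constants $\hbar$ and $k_B$ for the thermal occupation.

```python
# Reproduces the example numbers quoted in the text from the definitions of
# theta, W and V given above.
import numpy as np

hbar, kB = 1.0545718e-34, 1.380649e-23
f_v     = 3.3e6            # Hz, photon-phonon coupling
L       = 1e-2             # m, waveguide length
v_e     = 2.2e4            # m/s, effective (slow) group velocity
dOmega  = 5e6              # Hz, detuning from the vibrational mode
Gamma_v = 1e5              # Hz, vibrational damping rate
Omega_v = 2*np.pi*10e9     # rad/s, vibrational mode frequency
T       = 0.2              # K

n_v   = 1.0/(np.exp(hbar*Omega_v/(kB*T)) - 1.0)                # ~0.1 thermal phonons
theta = f_v**2*L/(v_e*dOmega)                                  # ~1, nonlinear phase
W     = L**3*Gamma_v*f_v**2/v_e**3                             # ~0.1, thermal admixture
V     = theta*(Gamma_v/dOmega)/(1 + Gamma_v**2/(4*dOmega**2))  # ~0.02
Q_v   = Omega_v/Gamma_v                                        # ~6e5, quality factor
flux  = v_e/(V*L)                                              # ~1e8 photons per second

print(n_v, theta, W, V, Q_v, flux)
```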
The nonlinear phase shifts appearing in Eqs. (\[InOutAmpl\]) can be viewed formally as arising from a cross-phase interaction Hamiltonian $H_\mathrm{eff}=gL\int dx\ \psi_u^{\dagger}(x)\psi_d^{\dagger}(x)\psi_u(x)\psi_d(x)$ between the two photons, where $g=\frac{f_v^2}{\Delta\Omega}$. This interaction gives rise to a wide range of quantum nonlinear optical effects at the level of single photons, as well as many-body physics of photons [@Chang2014]. For two co-propagating single-photon pulses the nonlinear phase shift comes along with changes and correlations in the spatio-temporal profile of the pulses [@Shapiro2006], limiting the applicability of the nonlinear phase shift for the implementation of a two-qubit quantum logic gate [@Fleischhauer2005; @Gea-Banacloche2010]. This effect can be suppressed by using counter-propagating pulses, which still experience an identical nonlinear phase [@Gorshkov2011]. For counter-propagating modes $u$ and $d$ the treatment is essentially equivalent to the one given above and results in the same effective Hamiltonian $H_\mathrm{eff}$. Solving the Schrödinger equation for an initial state of two incoming counter-propagating photons in the waveguide, $|\tilde{\phi}\rangle_\mathrm{in}=\int dx_1dx_2\ \phi(x_1,x_2,t)\ \psi^{\dagger}_u(x_1)\psi^{\dagger}_d(x_2)|\mathrm{vac}\rangle$, where $\phi(x_1,x_2,t)$ is a given two-photon wave function, it is straightforward to derive the scattering relation $|\tilde{\phi}\rangle_\mathrm{out}=e^{i\vartheta}|\tilde{\phi}\rangle_\mathrm{in}$. A unique application of such a nonlinear phase shift between single photons is all-optical deterministic quantum logic.
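As an illustration of how such a scattering relation would be used for quantum logic, the following sketch (not part of the analysis above) writes the conditional phase accumulated by two dual-rail photonic qubits as a $4\times4$ gate matrix; assuming a phase of $\vartheta=\pi$ could be reached, e.g. with a longer waveguide or a smaller detuning than in the example above, the gate reduces to a controlled-Z operation.

```python
# Illustrative sketch: the scattering relation applies the phase theta only to
# the two-photon component in which one photon occupies mode u and one mode d.
# For dual-rail qubits (logical |1> = photon in the interacting rail) this is
# a conditional-phase gate, and theta = pi (assumed here) gives a CZ gate.
import numpy as np

def cross_phase_gate(theta):
    """4x4 gate on two dual-rail qubits; only |1>_u |1>_d picks up theta."""
    return np.diag([1.0, 1.0, 1.0, np.exp(1j*theta)])

CZ = np.diag([1, 1, 1, -1]).astype(complex)
print(np.allclose(cross_phase_gate(np.pi), CZ))   # True: a controlled-Z gate
```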
![Scheme for achieving gain- and lossless slow light: (a) A signal field at frequency $\omega_s$ is dressed by two pump fields $\omega_{1(2)}$ interacting through acoustic phonons at frequencies $\Omega_{1(2)}$. (b) Photon dispersion with signal field (solid circle), pump fields (empty circles) and resonant acoustic phonon modes (dashed arrows). (c) Schematic level scheme with detunings $\Delta\omega_1=\omega_s-\omega_1-\Omega_1$ and $\Delta\omega_2=\omega_2-\omega_s-\Omega_2$ among the fields.[]{data-label="Slow"}](Fig2){width="\linewidth"}
In order to observe sizable nonlinear phase shifts it is crucial to achieve a small effective group velocity $v_e$. Slow (or fast) light based on SBS in waveguides has been demonstrated in several experiments [@Herraez2005; @Chin2006; @Zhu2006], and similar results have been achieved in cavity optomechanics [@Weis2010; @Safavi-Naeini2011; @Kim2015]. The effect can be understood in analogy to EIT in atomic media, where acoustic phonons play the role of internal atomic states. Slowing (advancing) of light based on SBS is in general linked to a net Brillouin gain (loss) in the signal field [@Thevenaz2008], which, in a quantum mechanical treatment, is necessarily connected to additional noise affecting the signal field. Therefore, in order to exploit SBS-induced slowing of light for quantum nonlinear optics it is crucial to suppress Brillouin gain or loss while maintaining a slow group velocity. We now show how this can be achieved using two pump fields counter-propagating with respect to the signal field. We consider a signal field of frequency $\omega_s$ and wave number $k_s$ propagating to the right with group velocity $v_g$, described by the operator $\psi(x,t)$. Two additional strong (classical) fields of frequencies $\omega_1$ and $\omega_2$ with wavenumbers $k_1$ and $k_2$ propagate to the left with the same group velocity $v_g$, as shown in Figures \[Slow\].a and \[Slow\].b. The signal is detuned from the sum of field $(1)$ and a phonon of frequency $\Omega_1$ by $\Delta\omega_1=\omega_s-\omega_1-\Omega_1$. On the other hand, field $(2)$ is detuned from the sum of the signal and a phonon of frequency $\Omega_2$ by $\Delta\omega_2=\omega_2-\omega_s-\Omega_2$, cf. Fig. \[Slow\].c. The two acoustic phonons are described by the operators $Q_1(x,t)$ and $Q_2(x,t)$, with sound velocity $v_a$ and wavenumbers $q_1$ and $q_2$, respectively. The strong fields $(1)$ and $(2)$ are taken to be classical amplitudes ${\cal E}_1(x,t)$ and ${\cal E}_2(x,t)$, which are defined by ${\cal E}_{\alpha}(x,t)=\sqrt{L}\langle\psi_p^{\alpha}\rangle$. The configuration of fields is shown in Fig. \[Slow\].b, and for the case of two signals in Fig. \[NonPhase\].d. The system is described by $$\begin{aligned}
H&=-i v_g\int
dx\ \psi^{\dagger}(x)\frac{\partial\psi(x)}{\partial
x} \nonumber \\
&\quad-i v_a\int
dx\left\{ Q_1^{\dagger}(x)\frac{\partial Q_1(x)}{\partial
x}-Q_2^{\dagger}(x)\frac{\partial Q_2(x)}{\partial
x}\right\} \nonumber \\
&\quad+\int dx\ \left(f^{a}_{1}{\cal E}_{1}^{\ast}(x)\ Q_{1}^{\dagger}(x)\psi(x)+h.c.\right) \nonumber \\
&\quad+\int dx\ \left(f^{a}_{2}{\cal E}_{2}^{\ast}(x)\ Q_{2}(x)\psi(x)+h.c.\right).\end{aligned}$$ As before we assume the photon and phonon dispersions to be linear, with group velocities $v_g$ and $v_a$, respectively. The photon-phonon coupling parameters $f^a_1$ and $f^a_2$ are taken in the local field approximation. The acoustic phonons have a damping rate $\Gamma_a$, and the photons have negligible damping. Thermal fluctuations of the phonons are included by adding Langevin noise operators ${\cal F}_i(x,t)$ [@Boyd1990]. The equation of motion for the signal photons reads $$\begin{aligned}
\Big(\frac{\partial}{\partial t}+v_g\frac{\partial}{\partial
x}\Big)&\psi(x,t)=\\
&=-i f^a_1{\cal
E}_{1}(x,t)\ Q_1(x,t)\ e^{i(\Delta\omega_1t-\Delta k_1x)} \nonumber \\
&\quad-i f^a_2{\cal
E}_{2}(x,t)\ Q_2^{\dagger}(x,t)\ e^{-i(\Delta\omega_2t+\Delta
k_2x)},\nonumber\end{aligned}$$ and the ones for the phonon modes are $$\begin{aligned}
\Big(\frac{\partial}{\partial t}&+v_a\frac{\partial}{\partial
x}+\frac{\Gamma_a}{2}\Big)Q_{1}(x,t) \\
&=-i f_1^{a}{\cal E}_{1}^{\ast}(x,t)\ \psi(x,t)\ e^{-i(\Delta\omega_1t-\Delta k_1x)}-{\cal F}_{1}(x,t),
\nonumber \\
\Big(\frac{\partial}{\partial t}&-v_a\frac{\partial}{\partial
x}+\frac{\Gamma_a}{2}\Big)Q_{2}(x,t)\nonumber \\
&=-i f^a_2{\cal E}_{2}(x,t)\ \psi^{\dagger}(x,t)\ e^{-i(\Delta\omega_2t+\Delta k_2x)}-{\cal F}_{2}(x,t),\nonumber\end{aligned}$$ where $\Delta k_1=k_s+k_1-q_1$ and $\Delta k_2=k_2+k_s-q_2$.
Elimination of the acoustic phonons leads to the formal solution of the photon operator [@SupMat] $$\begin{aligned}
\label{Signal}
\psi&(x,t)=e^{-({G}+i\kappa)x}\psi_\mathrm{in}(x-v_gt)+i\frac{f_a}{v_g}\int_0^{x}dx'e^{({G}+i\kappa)(x'-x)}\nonumber \\
&\times\left\{{\cal E}_{1}e^{-i\Delta k_1x'}W_1(x',t)+{\cal E}_{2}e^{-i\Delta k_2x'}W_2^{\dagger}(x',t)\right\},\end{aligned}$$ where $\psi_\mathrm{in}(x-v_gt)$ is the incident signal operator. We defined the gain coefficient ${G}=\frac{f_a^2\Gamma_a}{2v_g}\left\{\frac{|{\cal
E}_{1}|^2}{\frac{\Gamma_a^2}{4}+\Delta\omega_1^2}-\frac{|{\cal
E}_{2}|^2}{\frac{\Gamma_a^2}{4}+\Delta\omega_2^2}\right\}$, the shift in wave number $\kappa=\frac{f_a^2}{v_g}\left\{\frac{|{\cal
E}_{1}|^2\Delta\omega_1}{\frac{\Gamma_a^2}{4}+\Delta\omega_1^2}+\frac{|{\cal
E}_{2}|^2\Delta\omega_2}{\frac{\Gamma_a^2}{4}+\Delta\omega_2^2}\right\}$, and the noise operators $W_i(x,t)=e^{i\Delta\omega_it}\int_{0}^t
dt'\ {\cal F}_{i}(x,t')e^{-\frac{\Gamma_a}{2}(t-t')}$. For simplicity we assume $f^a_1=f^a_2\equiv f_a$ and constant pump amplitudes.
At this point we estimate the contribution of the thermal fluctuations. We calculate the average number of signal photons using the properties of the Langevin noise operators given above [@Boyd1990]. We are interested in the limit of negligible gain, that is $GL\ll1$; the condition for achieving this limit is derived below. The thermal photons appear due to the scattering of photons from the upper pump field into the signal mode, induced by thermal phonons. In the limit considered here we get $\hat{N}_\mathrm{out}=\hat{N}_\mathrm{in}+\hat{N}_\mathrm{fluct}$, where the density of incident photons is $N_\mathrm{in}=\langle\psi_\mathrm{in}^{\dagger}(L-v_gt)\psi_\mathrm{in}(L-v_gt)\rangle$, and the average number of incoherently added photons is [@SupMat] $\langle\hat{N}_\mathrm{fluct}\rangle \approx\frac{f_a^2\Gamma_a L^2}{v_g^3}\left\{|{\cal
E}_{1}|^2\bar{n}_a^{(1)}+|{\cal E}_{2}|^2(\bar{n}_a^{(2)}+1)\right\}$, where $\bar{n}_a^{(i)}$ is the average number of thermal phonons in the reservoir at frequency $\Omega_i$. For phonons of frequency $\Omega_a=2\pi\times 15$ GHz, at temperature $T=200$ mK, the average number of thermal phonons is $\bar{n}_a\approx 0.03$. Using the numbers $f_a=2.6\times 10^5$ Hz, $L=1$ cm, $|{\cal
E}_1|^2=4\times10^8$, and $|{\cal
E}_2|^2=10^8$, which are equivalent to about $10$ mW, we get a number density of incoherent photons of $\langle\hat{N}_\mathrm{fluct}\rangle \approx 0.3$. This corresponds to a photon flux of about $10^3$ sec$^{-1}$. For the example of an incoming single photon, incoherent photons will make a relatively small contribution for photon pulses with a bandwidth larger than $10$ kHz.
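The estimate of the incoherently added photons can be reproduced from the expression for $\langle\hat{N}_\mathrm{fluct}\rangle$ given above. In the sketch below the bare group velocity $v_g$, which is not quoted explicitly in the text, is assumed to be $v_g\approx 6.4\times 10^7$ m/s (roughly $c$ divided by a group index of about $4.7$), a value consistent with $v_e\approx 2.2\times 10^4$ m/s and the reduction factor $v_e/v_g\approx 3.7\times10^{-4}$ obtained below.

```python
# Rough numerical check of the thermal-photon estimate in this paragraph.
# f_a, Gamma_a, L, |E_i|^2 and the phonon occupations are taken from the text;
# the bare group velocity v_g is an assumption (see the lead-in above).
f_a, Gamma_a, L = 2.6e5, 1e8, 1e-2
E1sq, E2sq = 4e8, 1e8
n1, n2 = 0.03, 0.03       # thermal occupations of the two acoustic modes
v_g = 6.4e7               # m/s, assumed

N_fluct = f_a**2 * Gamma_a * L**2 / v_g**3 * (E1sq*n1 + E2sq*(n2 + 1))
print(N_fluct)            # ~0.3, as quoted in the text
```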
![(a) The reduction in group velocity $v_e/v_g$ versus the detunings $b_1=\frac{2\Delta\omega_1}{\Gamma_a}$ and $b_2=\frac{2\Delta\omega_2}{\Gamma_a}$. (b) Zoom around the working point $\left(b_1=2,b_2=-\frac{1}{2}\right)$ marked in (a). (c) The gradient of the gain $G_R=\frac{\partial G}{\partial\omega_s}$. (d) Zoom around the point $\left(b_1=2,b_2=-\frac{1}{2}\right)$ marked in (c).[]{data-label="VelocityEff"}](Fig3a.jpg "fig:"){height="4.5cm"}![](Fig3b.jpg "fig:"){height="4.5cm"}![](Fig3c.jpg "fig:"){height="4.5cm"}![](Fig3d.jpg "fig:"){height="4.5cm"}
Now we have $\psi(x,t)=e^{i(Kx-\omega_st)}e^{-Gx}\psi_{in}(x-v_gt)$, where $K=k_s-\kappa$. The effective group velocity is defined by $\frac{1}{v_e}=\frac{dK}{d\omega_s}$. Our goal now is to identify parameter regimes exhibiting a small group velocity $v_e$ and, at the same time, vanishing gain $G$ in a sufficiently broad bandwidth, that is with a small gradient $G_R=\frac{\partial
G}{\partial\omega_s}$. The control parameters are the intensities $\mathcal{E}_i$ and detunings $\Delta\omega_i$ of the pump fields. It will be convenient to use dimensionless detunings $b_i=\Delta\omega_i/\frac{\Gamma_a}{2}$. For given detunings a vanishing gain $G=0$ is achieved for an intensity ratio of $|{\cal
E}_1|^2/|{\cal E}_2|^2=\frac{1+b_1^2}{1+b_2^2}$, which we assume to be fulfilled in the following. Using the same numbers as above and $\Gamma_a=10^8$ Hz, corresponding to a mechanical quality factor of $Q_a=10^3$, we show the reduction in group velocity $\frac{v_e}{v_g}$ in Fig. \[VelocityEff\].a and the gradient of the gain $G_R$ in Fig. \[VelocityEff\].c versus the detunings. A convenient working point is found at $\left(b_1=2,b_2=-\frac{1}{2}\right)$, which is shown in more detail in Figs. \[VelocityEff\].b and \[VelocityEff\].d. At this point we have $|{\cal E}_1|^2=4|{\cal E}_2|^2$, $\Delta\omega_1=\Gamma_a$, and $\Delta\omega_2=-\frac{\Gamma_a}{4}$. The reduction in group velocity is $\frac{v_e}{v_g}\approx\frac{\Gamma_a^2}{4f_a^2|{\cal E}_2|^2}$, and the above numbers yield $\frac{v_e}{v_g}\approx 3.7\times 10^{-4}$, which corresponds to what we have assumed above.
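The working-point analysis can be checked numerically from the expressions for $G$ and $\kappa$ given above. The sketch below is an illustration, not the calculation used for Fig. \[VelocityEff\]: it evaluates $G$ at the working point and obtains $v_e/v_g$ from a finite-difference derivative of $\kappa$ with respect to $\omega_s$; the bare group velocity $v_g$ is assumed as above.

```python
# Gain G and wavenumber shift kappa near the working point (b1, b2)=(2, -1/2)
# with |E1|^2 = 4 |E2|^2; parameters follow the text, v_g is assumed.
import numpy as np

f_a, Gamma_a = 2.6e5, 1e8
E1sq, E2sq = 4e8, 1e8
v_g = 6.4e7   # m/s, assumed bare group velocity

def G_kappa(delta):
    """delta = shift of omega_s from the working point, where
    Delta_omega_1 = Gamma_a and Delta_omega_2 = -Gamma_a/4."""
    dw1 = Gamma_a + delta          # Delta_omega_1 increases with omega_s
    dw2 = -Gamma_a/4 - delta       # Delta_omega_2 decreases with omega_s
    G = f_a**2*Gamma_a/(2*v_g)*(E1sq/(Gamma_a**2/4 + dw1**2)
                                - E2sq/(Gamma_a**2/4 + dw2**2))
    kappa = f_a**2/v_g*(E1sq*dw1/(Gamma_a**2/4 + dw1**2)
                        + E2sq*dw2/(Gamma_a**2/4 + dw2**2))
    return G, kappa

print(G_kappa(0.0)[0])                             # ~0: no net Brillouin gain
d = 1e3
dkappa = (G_kappa(d)[1] - G_kappa(-d)[1])/(2*d)    # d(kappa)/d(omega_s)
ve_over_vg = 1.0/(1.0 - v_g*dkappa)                # 1/v_e = 1/v_g - dkappa/domega_s
print(ve_over_vg)                                  # ~4e-4, close to the quoted value
print(Gamma_a**2/(4*f_a**2*E2sq))                  # approximate formula: 3.7e-4
```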
In conclusion, we predict that quantum nonlinear optics, slow light without gain or loss, and nonlinear phase shifts are possible in nanoscale waveguides exploiting SBS. Even though we considered here the simplest geometry of a cylindrical fibre, it is clear that coupling strengths and quality factors can be further optimized using different geometries of the nanostructure, see [@Russell2006] and [@Rakich2016] for examples. The present results provide encouraging evidence for the realization of many-body physics with strongly interacting photons and the implementation of deterministic quantum gates for photons in continuum optomechanics.
#### Acknowledgments {#acknowledgments .unnumbered}
This work was funded by the European Commission (FP7-Programme) through iQUOEMS (Grant Agreement No. 323924). We acknowledge support by DFG through QUEST. We thank Raphael Van Laer for fruitful discussions.
Supplemental Materials:\
Nonlinear Quantum Optics in Optomechanical Nanoscale Waveguides {#supplemental-materials-nonlinear-quantum-optics-in-optomechanical-nanoscale-waveguides .unnumbered}
===============================================================
Hashem Zoubi and Klemens Hammerer {#hashem-zoubi-and-klemens-hammerer .unnumbered}
=================================
Photon correlations mediated by vibrational modes
=================================================
Starting from the Hamiltonian, which was derived in [@Zoubi2016], $$\begin{aligned}
H&=\sum_{\alpha}\int
dx\ \omega_\alpha\psi_{\alpha}^{\dagger}(x)\psi_{\alpha}(x)+\Omega_v\int dx\ Q^{\dagger}(x)Q(x) \nonumber \\
&-i v_e\sum_{\alpha}\int
dx\ \psi_{\alpha}^{\dagger}(x)\frac{\partial\psi_{\alpha}(x)}{\partial
x}+\sqrt{L}f_v\int dx\ \left(Q^{\dagger}(x)\psi_d^{\dagger}(x)\psi_u(x)+h.c.\right)\nonumber, \nonumber\end{aligned}$$ using $\psi_{\alpha}(x,t)\rightarrow\psi_{\alpha}(x,t)\ e^{ik_{\alpha}x}$ and $Q_{v}(x,t)\rightarrow Q_{v}(x,t)\ e^{iq_{v}x}$, we obtain Hamiltonian (1) of the letter, which yields the equations of motion for the field operators $$\begin{aligned}
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\psi_u(x,t)&=-i\omega_u\ \psi_u(x,t)-if_v\sqrt{L}\ Q_v(x,t)\psi_d(x,t)e^{-i\Delta qx}, \nonumber \\
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\psi_d(x,t)&=-i\omega_d\ \psi_d(x,t)-if_v\sqrt{L}\ Q_v^{\dagger}(x,t)\psi_u(x,t)e^{i\Delta qx}, \nonumber \\
\left(\frac{\partial}{\partial
t}+\frac{\Gamma_v}{2}\right)Q_{v}(x,t)&=-i\Omega_v\ Q_v(x,t)-if_v\sqrt{L}\ \psi_d^{\dagger}(x,t)\psi_u(x,t)e^{i\Delta qx}-{\cal
F}(x,t), \nonumber\end{aligned}$$ where $\Delta
q=k_u-k_d-q_v$. Using now $\psi_{\alpha}(x,t)\rightarrow\psi_{\alpha}(x,t)\ e^{-i\omega_{\alpha}t}$, $Q_{v}(x,t)\rightarrow Q_{v}(x,t)\ e^{-i\Omega_{v}t}$, and ${\cal
F}(x,t)\rightarrow {\cal F}(x,t)\ e^{-i\Omega_{v}t}$, we obtain $$\begin{aligned}
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\psi_u(x,t)&=-if_v\sqrt{L}\ Q_v(x,t)\psi_d(x,t)\ e^{i(\Delta\Omega t-\Delta qx)}, \nonumber \\
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\psi_d(x,t)&=-if_v\sqrt{L}\ Q_v^{\dagger}(x,t)\psi_u(x,t)\ e^{-i(\Delta\Omega t-\Delta qx)}, \nonumber \\
\left(\frac{\partial}{\partial
t}+\frac{\Gamma_v}{2}\right)Q_{v}(x,t)&=-if_v\sqrt{L}\ \psi_d^{\dagger}(x,t)\psi_u(x,t)\ e^{-i(\Delta\Omega
t-\Delta qx)}-{\cal F}(x,t), \nonumber\end{aligned}$$ where $\Delta\Omega=\omega_u-\omega_d-\Omega_v$. The above system of equations corresponds to Eqs. (2-3) in the main text. We now apply the adiabatic elimination of the phonon operators, which is applicable in the off resonant limit where $\Delta\Omega>\Gamma_v,f_v$. Formal integration of the phonon equation gives $$\begin{aligned}
Q(x,t)&=-i\sqrt{L}f_v\int_{0}^{t}dt'\ \psi_d^{\dagger}(x,t')\psi_u(x,t')\ e^{-i(\Delta\Omega
t'-\Delta qx)}e^{-\Gamma_v(t-t')/2} \nonumber \\
&+Q(x,0)e^{-\Gamma_v t/2}-\int_{0}^t dt'\ {\cal F}(x,t')e^{-\frac{\Gamma_v}{2}(t-t')}. \nonumber\end{aligned}$$ We neglect the initial value term of the phonon operator. Substitution in the photon equations yields $$\begin{aligned}
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\psi_u(x,t)&=-f_v^2L\int_{0}^{t}dt'\ \psi_d^{\dagger}(x,t')\psi_u(x,t')\psi_d(x,t)\ e^{-i\Delta\Omega
(t'-t)}e^{-\Gamma_v(t-t')/2} \nonumber \\
&+if_v\sqrt{L}\ \psi_d(x,t)e^{i(\Delta\Omega t-\Delta
qx)}\int_{0}^t dt'\ {\cal F}(x,t')e^{-\frac{\Gamma_v}{2}(t-t')}, \nonumber \\
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\psi_d(x,t)&=f_v^2L\int_{0}^{t}dt'\ \psi_u^{\dagger}(x,t')\psi_d(x,t')\psi_u(x,t)\ e^{i\Delta\Omega (t'-t)}e^{-\Gamma_v(t-t')/2} \nonumber \\
&+if_v\sqrt{L}\ \psi_u(x,t)e^{-i(\Delta\Omega t-\Delta
qx)}\int_{0}^t dt'\ {\cal F}^{\dagger}(x,t')e^{-\frac{\Gamma_v}{2}(t-t')}. \nonumber\end{aligned}$$ Now we apply an approximation by taking the operators out of the integral, which is allowed in the limit $\Delta\Omega> f_v$, to get $$\begin{aligned}
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\psi_u(x,t)&\approx -Lf_v^2\ \hat{N}_d(x,t)\psi_u(x,t)\int_{0}^{t}dt'\ e^{-i\Delta\Omega
(t'-t)}e^{-(t-t')\Gamma_v/2}+iU(x,t)\ \psi_d(x,t), \nonumber \\
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\psi_d(x,t)&\approx Lf_v^2\ \hat{N}_u(x,t)\psi_d(x,t)\int_{0}^{t}dt'\ e^{i\Delta\Omega (t'-t)}e^{-(t-t')\Gamma_v/2}+iU^{\dagger}(x,t)\ \psi_u(x,t), \nonumber\end{aligned}$$ where we defined the density operator by $\hat{N}_{\alpha}(x,t)=\psi_{\alpha}^{\dagger}(x,t)\psi_{\alpha}(x,t)$. Moreover, we used $$\begin{aligned}
U(x,t)=f_v\sqrt{L}\ e^{i(\Delta\Omega t-\Delta
qx)}\int_{0}^t dt'\ {\cal F}(x,t')e^{-\frac{\Gamma_v}{2}(t-t')}. \nonumber\end{aligned}$$ The time integration yields $$\begin{aligned}
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\psi_u(x,t)&\approx-\frac{Lf_v^2}{\Gamma_v/2-i\Delta\Omega}\ \hat{N}_d(x,t)\psi_u(x,t)+iU(x,t)\ \psi_d(x,t), \nonumber \\
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\psi_d(x,t)&\approx\frac{Lf_v^2}{\Gamma_v/2+i\Delta\Omega}\ \hat{N}_u(x,t)\psi_d(x,t)+iU^{\dagger}(x,t)\ \psi_u(x,t). \nonumber\end{aligned}$$
Conserved Number of Photons
---------------------------
We show now that the total density of signal photons, that is $\hat{N}=\hat{N}_u+\hat{N}_d$, is conserved. We drop the Langevin term in this part and consider it later. Direct calculations give $$\begin{aligned}
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\hat{N}_u(x,t)&=-\frac{Lf_v^2\Gamma_v}{\Gamma_v^2/4+\Delta\Omega^2}\ \hat{N}_u(x,t)\hat{N}_d(x,t), \nonumber \\
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\hat{N}_d(x,t)&=\frac{Lf_v^2\Gamma_v}{\Gamma_v^2/4+\Delta\Omega^2}\ \hat{N}_u(x,t)\hat{N}_d(x,t), \nonumber\end{aligned}$$ which yields $\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\hat{N}(x,t)=0$. Using the change of variables $\xi=x-v_et$ and $\eta=v_et$, gives $\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}=v_e\frac{\partial}{\partial \eta}$ and $\frac{\partial}{\partial
\eta}\hat{N}(\xi,\eta)=0$, and hence $\hat{N}(\xi)$ is conserved. Here we obtain $$\begin{aligned}
\frac{\partial}{\partial
\eta}\hat{N}_u(\xi,\eta)=-V\ \hat{N}_d(\xi,\eta)\hat{N}_u(\xi,\eta),\ \ \ \frac{\partial}{\partial \eta}\hat{N}_d(\xi,\eta)=V\ \hat{N}_u(\xi,\eta)\hat{N}_d(\xi,\eta), \nonumber\end{aligned}$$ where $V=\frac{Lf_v^2\Gamma_v}{v_e(\Gamma_v^2/4+\Delta\Omega^2)}$. Using $\hat{N}(\xi)=\hat{N}_u(\xi,\eta)+\hat{N}_d(\xi,\eta)$ gives the two Riccati equations (4) of the letter.
Thermal Fluctuations
--------------------
In the letter it was shown that $\hat{N}_u$ and $\hat{N}_d$ are conserved in the limit $\Delta\Omega>\Gamma_v$. Hence, we get $$\begin{aligned}
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\psi_u(x,t)&\approx-i\frac{Lf_v^2}{\Delta\Omega}\ \hat{N}_d(x,t)\psi_u(x,t)+iU(x,t)\ \psi_d(x,t), \nonumber \\
\left(\frac{\partial}{\partial t}+v_e\frac{\partial}{\partial
x}\right)\psi_d(x,t)&\approx-i\frac{Lf_v^2}{\Delta\Omega}\ \hat{N}_u(x,t)\psi_d(x,t)+iU^{\dagger}(x,t)\ \psi_u(x,t). \nonumber\end{aligned}$$ We calculate here the contribution of the Langevin fluctuations. Applying the change of variables $\xi=x-v_et$ and $\eta=x$, where $\frac{\partial}{\partial t}=-v_e\frac{\partial}{\partial \xi}$ and $\frac{\partial}{\partial x}=\frac{\partial}{\partial
\xi}+\frac{\partial}{\partial \eta}$, and then $\frac{\partial}{\partial
t}+v_e\frac{\partial}{\partial x}=v_e\frac{\partial}{\partial \eta}$, we get $$\begin{aligned}
\frac{\partial}{\partial\eta}\psi_u(\xi,\eta)&\approx-i\frac{Lf_v^2}{v_e\Delta\Omega}\ \hat{N}_d(\xi)\psi_u(\xi,\eta)+\frac{i}{v_e}U(\xi,\eta)\ \psi_d(\xi,\eta), \nonumber \\
\frac{\partial}{\partial\eta}\psi_d(\xi,\eta)&\approx-i\frac{Lf_v^2}{v_e\Delta\Omega}\ \hat{N}_u(\xi)\psi_d(\xi,\eta)+\frac{i}{v_e}U^{\dagger}(\xi,\eta)\ \psi_u(\xi,\eta). \nonumber\end{aligned}$$ Formal integration gives $$\begin{aligned}
\psi_u(\xi,\eta)=\psi_u^\mathrm{in}(\xi)e^{-i\frac{Lf_v^2}{v_e\Delta\Omega}\ \hat{N}_d(\xi)\ \eta}+\frac{i}{v_e}\int_0^{\eta}d\eta'\ U(\xi,\eta')\ \psi_d(\xi,\eta')e^{-i\frac{Lf_v^2}{v_e\Delta\Omega}\ \hat{N}_d(\xi)(\eta-\eta')},
\nonumber \\
\psi_d(\xi,\eta)=\psi_d^\mathrm{in}(\xi)e^{-i\frac{Lf_v^2}{v_e\Delta\Omega}\ \hat{N}_u(\xi)\ \eta}+\frac{i}{v_e}\int_0^{\eta}d\eta'\ U^{\dagger}(\xi,\eta')\ \psi_u(\xi,\eta')e^{-i\frac{Lf_v^2}{v_e\Delta\Omega}\ \hat{N}_u(\xi)(\eta-\eta')}, \nonumber\end{aligned}$$ where $\psi_{\alpha}^{in}(\xi)=\psi_{\alpha}(\xi,\eta=0)$. Changing back into $(x,t)$ space we get Eq. (5) of the letter. The average numbers of photons are $$\begin{aligned}
&\langle\psi^{\dagger}_u(x,t)\psi_u(x,t)\rangle=\langle\psi^{\mathrm{in}\dagger}_u(x-v_et)\psi_u^\mathrm{in}(x-v_et)\rangle\nonumber
\\
&+\frac{1}{v_e^2}\int_0^{x}dx'dx''\ \langle\psi^{\dagger}_d(x',t)\psi_d(x'',t)\rangle\langle U^{\dagger}(x',t)U(x'',t)\rangle
e^{i\frac{Lf_v^2}{v_e\Delta\Omega}\ N_d(x-v_et)(x-x')}e^{-i\frac{Lf_v^2}{v_e\Delta\Omega}\ N_d(x-v_et)(x-x'')}\nonumber \\
&\langle\psi^{\dagger}_d(x,t)\psi_d(x,t)\rangle=\langle\psi^{\mathrm{in}\dagger}_d(x-v_et)\psi_d^\mathrm{in}(x-v_et)\rangle
\nonumber \\
&+\frac{1}{v_e^2}\int_0^{x}dx'dx''\ \langle\psi^{\dagger}_u(x',t)\psi_u(x'',t)\rangle\langle U(x',t)U^{\dagger}(x'',t)\rangle
e^{i\frac{Lf_v^2}{v_e\Delta\Omega}\ N_u(x-v_et)(x-x')}e^{-i\frac{Lf_v^2}{v_e\Delta\Omega}\ N_u(x-v_et)(x-x'')}, \nonumber\end{aligned}$$ where $N_{\alpha}=\langle\hat{N}_{\alpha}\rangle$. Using $$\begin{aligned}
\langle
{\cal F}^{\dagger}(x',t'){\cal F}(x'',t'')\rangle&=\Gamma_v\bar{n}_v\delta(t'-t'')\delta(x'-x''),
\nonumber \\
\langle
{\cal F}(x',t'){\cal F}^{\dagger}(x'',t'')\rangle&=\Gamma_v(\bar{n}_v+1)\delta(t'-t'')\delta(x'-x''), \nonumber\end{aligned}$$ where $\bar{n}_v$ is the average number of thermal phonons, we get $$\begin{aligned}
\langle U^{\dagger}(x',t)U(x'',t)\rangle&=Lf_v^2\bar{n}_v\delta(x'-x'')\left(1-e^{-\Gamma_vt}\right),
\nonumber \\
\langle U(x',t)U^{\dagger}(x'',t)\rangle&=Lf_v^2(1+\bar{n}_v)\delta(x'-x'')\left(1-e^{-\Gamma_vt}\right). \nonumber\end{aligned}$$ We obtain $$\begin{aligned}
\langle\psi^{\dagger}_u(x,t)\psi_u(x,t)\rangle&=\langle\psi^{\mathrm{in}\dagger}_u(x-v_et)\psi_u^\mathrm{in}(x-v_et)\rangle+\frac{Lf_v^2}{v_e^2}\bar{n}_v\left(1-e^{-\Gamma_vt}\right)\int_0^{x}dx'\ \langle\psi^{\dagger}_d(x',t)\psi_d(x',t)\rangle,
\nonumber \\
\langle\psi^{\dagger}_d(x,t)\psi_d(x,t)\rangle&=\langle\psi^{\mathrm{in}\dagger}_d(x-v_et)\psi_d^\mathrm{in}(x-v_et)\rangle+\frac{Lf_v^2}{v_e^2}(1+\bar{n}_v)\left(1-e^{-\Gamma_vt}\right)\int_0^{x}dx'\ \langle\psi^{\dagger}_u(x',t)\psi_u(x',t)\rangle, \nonumber\end{aligned}$$ which yields at the waveguide output $$\begin{aligned}
N_u=N_u^\mathrm{in}+\frac{L^2f_v^2}{v_e^2}\left(1-e^{-\Gamma_vL/v_e}\right)\bar{n}_vN_d,\ \ \ N_d=N_d^\mathrm{in}+\frac{L^2f_v^2}{v_e^2}\left(1-e^{-\Gamma_vL/v_e}\right)(1+\bar{n}_v)N_u. \nonumber\end{aligned}$$
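For the parameters of the example in the letter, the factor $\frac{L^2f_v^2}{v_e^2}\left(1-e^{-\Gamma_vL/v_e}\right)$ appearing here is well approximated by $W=\frac{L^3\Gamma_vf_v^2}{v_e^3}$, since $\Gamma_vL/v_e\ll1$. The following short numerical check, which is not part of the derivation, makes this explicit.

```python
# Compare the full thermal prefactor derived above with its linearized form W
# used in the letter, for the example parameters of the letter.
import numpy as np

f_v, L, v_e, Gamma_v = 3.3e6, 1e-2, 2.2e4, 1e5

full = L**2 * f_v**2 / v_e**2 * (1 - np.exp(-Gamma_v * L / v_e))
W    = L**3 * Gamma_v * f_v**2 / v_e**3

print(full, W)   # both ~0.10: the linearized form is accurate here
```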
Photon delay and gain via SBS involving acoustic phonons
========================================================
The real-space Hamiltonian is given in equation (6) of the letter. The photon dispersion has the form $\omega_k\approx\omega_0\pm v_gk$, and all photon fields are taken in a rotating frame of their respective central frequency $\omega_0$. We get the equations of motion for the field operators $$\begin{aligned}
\left(\frac{\partial}{\partial t}+v_g\frac{\partial}{\partial
x}\right)\psi(x,t)&=-if^a_1{\cal E}_{1}(x,t)\ Q^a_1(x,t)-i f^a_2{\cal E}_{2}(x,t)\ Q_2^{a\dagger}(x,t), \nonumber \\
\left(\frac{\partial}{\partial t}+v_a\frac{\partial}{\partial
x}+\frac{\Gamma_a}{2}\right)Q^a_{1}(x,t)&=-i f_1^{a}{\cal E}_{1}^{\ast}(x,t)\ \psi(x,t)-{\cal F}_1(x,t),
\nonumber \\
\left(\frac{\partial}{\partial t}-v_a\frac{\partial}{\partial
x}+\frac{\Gamma_a}{2}\right)Q^a_{2}(x,t)&=-i f^a_2{\cal E}_{2}(x,t)\ \psi^{\dagger}(x,t)-{\cal F}_2(x,t). \nonumber\end{aligned}$$ We use ${\cal E}_{\alpha}(x,t)\rightarrow{\cal
E}_{\alpha}(x,t)\ e^{-i(k_{\alpha}x+\omega_{\alpha}t)}$, $\psi(x,t)\rightarrow\psi(x,t)\ e^{i(k_sx-\omega_st)}$, $Q_{1}^a(x,t)\rightarrow Q_{1}^a(x,t)\ e^{i(q_{1}x-\Omega_{1}t)}$, and $Q_{2}^a(x,t)\rightarrow Q_{2}^a(x,t)\ e^{-i(q_{2}x+\Omega_{2}t)}$, with $(\alpha=1,2)$, where $\omega_s=v_gk_s$, $\omega_{\alpha}=v_gk_{\alpha}$, and $\Omega_{\alpha}=v_aq_{\alpha}$. Moreover we define ${\cal
F}_{1}(x,t)\rightarrow{\cal F}_{1}(x,t)\ e^{i(q_{1}x-\Omega_{1}t)}$, and ${\cal
F}_{2}(x,t)\rightarrow{\cal F}_{2}(x,t)\ e^{-i(q_{2}x+\Omega_{2}t)}$. We have now $$\begin{aligned}
\left(\frac{\partial}{\partial t}+v_g\frac{\partial}{\partial
x}\right)\psi(x,t)&=-i f^a_1{\cal
E}_{1}(x,t)\ Q_1^a(x,t)\ e^{i(\Delta\omega_1t-\Delta k_1x)}-i f^a_2{\cal
E}_{2}(x,t)\ Q_2^{a\dagger}(x,t)\ e^{-i(\Delta\omega_2t+\Delta k_2x)}, \nonumber \\
\left(\frac{\partial}{\partial t}+v_a\frac{\partial}{\partial
x}+\frac{\Gamma_a}{2}\right)Q_{1}^a(x,t)&=-i f_1^{a}{\cal E}_{1}^{\ast}(x,t)\ \psi(x,t)\ e^{-i(\Delta\omega_1t-\Delta k_1x)}-{\cal F}_{1}(x,t),
\nonumber \\
\left(\frac{\partial}{\partial t}-v_a\frac{\partial}{\partial
x}+\frac{\Gamma_a}{2}\right)Q_{2}^a(x,t)&=-i f^a_2{\cal E}_{2}(x,t)\ \psi^{\dagger}(x,t)\ e^{-i(\Delta\omega_2t+\Delta k_2x)}-{\cal F}_{2}(x,t), \nonumber\end{aligned}$$ where $\Delta\omega_1=\omega_s-\omega_1-\Omega_1$, $\Delta\omega_2=\omega_2-\omega_s-\Omega_2$, $\Delta k_1=k_s+k_1-q_1$ and $\Delta k_2=k_2+k_s-q_2$. The above system of equations corresponds to Eqs. (7-8) in the main text. For acoustic phonons it is a good approximation to neglect the $v_a\frac{\partial}{\partial
x}$ terms, as the sound velocity is much smaller than the light group velocity, then $$\begin{aligned}
\left(\frac{\partial}{\partial
t}+\frac{\Gamma_a}{2}\right)Q_{1}^a(x,t)&\approx-i f_1^{a}{\cal E}_{1}^{\ast}(x,t)\ \psi(x,t)\ e^{-i(\Delta\omega_1t-\Delta k_1x)}-{\cal F}_{1}(x,t),
\nonumber \\
\left(\frac{\partial}{\partial
t}+\frac{\Gamma_a}{2}\right)Q_{2}^a(x,t)&\approx-i f^a_2{\cal E}_{2}(x,t)\ \psi^{\dagger}(x,t)\ e^{-i(\Delta\omega_2t+\Delta k_2x)}-{\cal F}_{2}(x,t). \nonumber\end{aligned}$$
Formal integration of the phonon operators lead to $$\begin{aligned}
Q_1^a(x,t)&=-i f_1^{a}\int_{0}^t dt'\ {\cal
E}_{1}^{\ast}(x,t')\ \psi(x,t')\ e^{-i(\Delta\omega_1t'-\Delta
k_1x)}e^{-\frac{\Gamma_a}{2}(t-t')} \nonumber \\
&+Q_1(x,0)e^{-\Gamma_a t/2}-\int_{0}^t dt'\ {\cal F}_{1}(x,t')e^{-\frac{\Gamma_a}{2}(t-t')},
\nonumber \\
Q_2^a(x,t)&=-i f^a_2\int_{0}^t dt'\ {\cal
E}_{2}(x,t')\ \psi^{\dagger}(x,t')\ e^{-i(\Delta\omega_2t'+\Delta
k_2x)}e^{-\frac{\Gamma_a}{2}(t-t')} \nonumber \\
&+Q_2(x,0)e^{-\Gamma_a t/2}-\int_{0}^t dt'\ {\cal F}_{2}(x,t')e^{-\frac{\Gamma_a}{2}(t-t')}.
\nonumber\end{aligned}$$ In the following we neglect the initial value terms of the phonon operators. As an approximation we take the signal operator and the pump field out of the integral to get $$\begin{aligned}
Q_1^a(x,t)&\approx-i f_1^{a}\ {\cal E}_{1}^{\ast}(x,t)\ \psi(x,t)\int_{0}^t dt'\ e^{-i(\Delta\omega_1t'-\Delta k_1x)}e^{-\frac{\Gamma_a}{2}(t-t')}-\int_{0}^t dt'\ {\cal F}_{1}(x,t')e^{-\frac{\Gamma_a}{2}(t-t')},
\nonumber \\
Q_2^a(x,t)&\approx-i f^a_2\ {\cal E}_{2}(x,t)\ \psi^{\dagger}(x,t)\int_{0}^t dt'\ e^{-i(\Delta\omega_2t'+\Delta k_2x)}e^{-\frac{\Gamma_a}{2}(t-t')}-\int_{0}^t dt'\ {\cal F}_{2}(x,t')e^{-\frac{\Gamma_a}{2}(t-t')}. \nonumber\end{aligned}$$ This approximation is an iterative solution in terms of the small photon-phonon coupling parameter. Substitution in the signal operator equation yields $$\begin{aligned}
\left(\frac{\partial}{\partial t}+v_g\frac{\partial}{\partial
x}\right)\psi(x,t)&=-f^{a2}_1|{\cal
E}_{1}(x,t)|^2\int_{0}^t dt'\ e^{-i\Delta\omega_1(t'-t)}e^{-\frac{\Gamma_a}{2}(t-t')}\ \psi(x,t) \nonumber \\
&+f^{a2}_2|{\cal
E}_{2}(x,t)|^2\int_{0}^t
dt'\ e^{i\Delta\omega_2(t'-t)}e^{-\frac{\Gamma_a}{2}(t-t')}\ \psi(x,t) \nonumber \\
&+i f^a_1{\cal
E}_{1}(x,t)\ e^{i(\Delta\omega_1t-\Delta k_1x)}\int_{0}^t dt'\ {\cal F}_{1}(x,t')e^{-\frac{\Gamma_a}{2}(t-t')} \nonumber \\
&+i f^a_2{\cal
E}_{2}(x,t)\ e^{-i(\Delta\omega_2t+\Delta k_2x)}\int_{0}^t dt'\ {\cal F}_{2}^{\dagger}(x,t')e^{-\frac{\Gamma_a}{2}(t-t')}. \nonumber
\end{aligned}$$ Time integration gives $$\begin{aligned}
\left(\frac{\partial}{\partial t}+v_g\frac{\partial}{\partial
x}\right)\psi(x,t)&\approx f_a^2\left\{-\frac{|{\cal
E}_{1}(x,t)|^2}{\frac{\Gamma_a}{2}-i\Delta\omega_1}+\frac{|{\cal
E}_{2}(x,t)|^2}{\frac{\Gamma_a}{2}+i\Delta\omega_2}\right\}\psi(x,t)
\nonumber \\
&+if_a\left\{{\cal E}_{1}(x,t)e^{-i\Delta k_1x}W_1(x,t)+{\cal E}_{2}(x,t)e^{-i\Delta k_2x}W_2^{\dagger}(x,t)\right\}, \nonumber\end{aligned}$$ where we assume that $f^a_1=f^a_2\equiv f_a$. We defined $$\begin{aligned}
W_i(x,t)=e^{i\Delta\omega_it}\int_{0}^t dt'\ {\cal F}_{i}(x,t')e^{-\frac{\Gamma_a}{2}(t-t')}. \nonumber\end{aligned}$$ We can write $$\begin{aligned}
\left(\frac{\partial}{\partial t}+v_g\frac{\partial}{\partial
x}\right)\psi(x,t)=-v_g(G+i\kappa)\psi(x,t)+if_a\left\{{\cal E}_{1}e^{-i\Delta k_1x}W_1(x,t)+{\cal E}_{2}e^{-i\Delta k_2x}W_2^{\dagger}(x,t)\right\}, \nonumber\end{aligned}$$ where $G$ and $\kappa$ are defined in the letter. The pump fields are taken to be constants.
Thermal Fluctuations
--------------------
Applying the previous change of variables $\xi=x-v_gt$ and $\eta=x$, we get $$\begin{aligned}
\frac{\partial}{\partial\eta}\psi(\xi,\eta)=-(G+i\kappa)\psi(\xi,\eta)+i\frac{f_a}{v_g}\left\{{\cal E}_{1}e^{-i\Delta k_1\eta}W_1(\xi,\eta)+{\cal E}_{2}e^{-i\Delta k_2\eta}W_2^{\dagger}(\xi,\eta)\right\}, \nonumber\end{aligned}$$ with the solution $$\begin{aligned}
\psi(\xi,\eta)=e^{-(G+i\kappa)\eta}\psi_{in}(\xi)+i\frac{f_a}{v_g}\int_0^{\eta}d\eta'\left\{{\cal E}_{1}e^{-i\Delta k_1\eta'}W_1(\xi,\eta')+{\cal E}_{2}e^{-i\Delta k_2\eta'}W_2^{\dagger}(\xi,\eta')\right\}e^{(G+i\kappa)(\eta'-\eta)}, \nonumber\end{aligned}$$ where $\psi_{in}(\xi)=\psi(\xi,\eta=0)$. Back into $(x,t)$ variables we obtain equation (9) of the letter. The average density of photons is $$\begin{aligned}
\langle\psi^{\dagger}(x,t)\psi(x,t)\rangle&=\langle\psi_\mathrm{in}^{\dagger}(x-v_gt)\psi_\mathrm{in}(x-v_gt)\rangle
e^{-2Gx}+\frac{f_a^2}{v_g^2}\int_0^xdx'dx''e^{(G-i\kappa)(x'-x)}e^{(G+i\kappa)(x''-x)}
\nonumber \\
&\times\left\{|{\cal E}_{1}|^2e^{i\Delta k_1(x'-x'')}\langle
W_1^{\dagger}(x',t)W_1(x'',t)\rangle+|{\cal E}_{2}|^2e^{i\Delta k_2(x'-x'')}\langle
W_2(x',t)W_2^{\dagger}(x'',t)\rangle \right. \nonumber \\
&+\left. {\cal E}_{1}^{\ast}{\cal E}_{2}e^{i(\Delta k_1x'-\Delta k_2x'')}\langle
W_1^{\dagger}(x',t)W_2^{\dagger}(x'',t)\rangle+{\cal E}_{2}^{\ast}{\cal
E}_{1}e^{i(\Delta k_2x'-\Delta k_1x'')} \langle W_2(x',t)W_1(x'',t)\rangle \right\}, \nonumber\end{aligned}$$ where we neglect correlations between the light and the reservoir, of the type $\langle\psi_{in}^{\dagger}W_i\rangle,\cdots$. We use the properties $$\begin{aligned}
\langle {\cal F}_1^{\dagger}(x',t'){\cal
F}_2^{\dagger}(x'',t'')\rangle&=\langle {\cal F}_2(x',t'){\cal
F}_1(x'',t'')\rangle=0,\nonumber \\
\langle
{\cal F}_1^{\dagger}(x',t'){\cal F}_1(x'',t'')\rangle&=\Gamma_a\bar{n}_a^{(1)}\delta(t'-t'')\delta(x'-x''),
\nonumber \\
\langle
{\cal F}_2(x',t'){\cal F}_2^{\dagger}(x'',t'')\rangle&=\Gamma_a(\bar{n}_a^{(2)}+1)\delta(t'-t'')\delta(x'-x''), \nonumber\end{aligned}$$ where $\bar{n}_a^{(i)}$ is the average number of thermal phonons in the reservoir at frequency $\Omega_i$. The expectation values are $$\begin{aligned}
\langle W_1^{\dagger}(x',t)W_1(x'',t)\rangle&=\bar{n}_a^{(1)}\delta(x'-x'')\left(1-e^{-\Gamma_at}\right),\nonumber \\
\langle
W_2(x',t)W_2^{\dagger}(x'',t)\rangle&=(\bar{n}_a^{(2)}+1)\delta(x'-x'')\left(1-e^{-\Gamma_at}\right),\nonumber \\
\langle W_2(x',t)W_1(x'',t)\rangle&=\langle W_1^{\dagger}(x',t)W_2^{\dagger}(x'',t)\rangle=0, \nonumber\end{aligned}$$ which lead to $$\begin{aligned}
\langle\psi^{\dagger}(x,t)\psi(x,t)\rangle=\langle\psi_\mathrm{in}^{\dagger}(x-v_gt)\psi_\mathrm{in}(x-v_gt)\rangle
e^{-2Gx}+\frac{f_a^2}{2Gv_g^2}\left\{|{\cal E}_{1}|^2\bar{n}_a^{(1)}+|{\cal E}_{2}|^2(\bar{n}_a^{(2)}+1)\right\}\left(1-e^{-2Gx}\right)\left(1-e^{-\Gamma_at}\right). \nonumber\end{aligned}$$ We are interested in the limit $GL\ll1$, where $L$ is the waveguide length. Then the photon density, which is the number of photons per unit length, is $$\begin{aligned}
\langle\psi^{\dagger}(L,t)\psi(L,t)\rangle\approx\langle\psi_\mathrm{in}^{\dagger}(L-v_gt)\psi_\mathrm{in}(L-v_gt)\rangle+\frac{f_a^2L}{v_g^2}\left\{|{\cal
E}_{1}|^2\bar{n}_a^{(1)}+|{\cal E}_{2}|^2(\bar{n}_a^{(2)}+1)\right\}\left(1-e^{-\Gamma_at}\right). \nonumber\end{aligned}$$
|
---
author:
- 'Xu Chen, *Member, IEEE*, Lei Jiao, *Member, IEEE*, Wenzhong Li, *Member, IEEE*, and Xiaoming Fu, *Senior Member, IEEE*'
bibliography:
- 'MobileCloud.bib'
title: 'Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing'
---
|
---
abstract: 'The Groverian entanglement measure of pure quantum states of $n$ qubits is generalized to the case in which the qubits are divided into any $m \le n$ parties and the entanglement between these parties is evaluated. To demonstrate this measure we apply it to general states of three qubits and to symmetric states with any number of qubits such as the Greenberger-Horne-Zeilinger state and the W state.'
author:
- Yishai Shimoni and Ofer Biham
title: Groverian Entanglement Measure of Pure Quantum States with Arbitrary Partitions
---
Introduction {#sec:introduction}
============
The potential speedup offered by quantum computers is exemplified by Shor’s factoring algorithm [@Shor1994], Grover’s search algorithm [@Grover1996; @Grover1997a], and algorithms for quantum simulation [@Nielsen2000]. Although the origin of this speedup is not fully understood, there are indications that quantum entanglement plays a crucial role [@Jozsa2003; @Vidal2003]. In particular, it was shown that quantum algorithms that do not create entanglement can be simulated efficiently on a classical computer [@Aharonov1996]. It is therefore of interest to quantify the entanglement produced by quantum algorithms and examine its correlation with their efficiency. This requires the development of entanglement measures for the quantum states of multiple qubits that appear in quantum algorithms.
The special case of bipartite entanglement has been studied extensively in recent years. It was established as a resource for quantum teleportation procedures. The entanglement of pure bipartite states can be evaluated by the von Neumann entropy of the reduced density matrix, traced over one of the parties. For mixed bipartite states, several measures were proposed, namely entanglement of formation and entanglement of distillation [@Bennett1996a; @Bennett1996b]. In particular, for states of two qubits an exact formula for the entanglement of formation was obtained [@Hill1997; @Wootters1998]. Bipartite pure states of more than two qubits were also studied. It was shown that generic quantum states can be reconstructed from a fraction of the reduced density matrices, obtained by tracing over some of the qubits [@Linden2002a; @Linden2002b].
The more general case of multipartite entanglement is not as well understood. Recent work based on axiomatic considerations has provided a set of properties that entanglement measures should satisfy [@Vedral1997; @Vedral1998; @Vidal2000; @Horodecki2000]. These properties include the requirements that any entanglement measure should vanish for product (or separable) states, that it should be invariant under local unitary operations, and that it should not increase as a result of any sequence of local operations complemented by only classical communication between the parties. Quantities that satisfy these properties are called entanglement monotones. These properties, which should be satisfied for bipartite as well as multipartite entanglement, provide useful guidelines in the search for entanglement measures for multipartite quantum states. One class of entanglement measures, based on metric properties of the Hilbert space, was proposed and shown to satisfy these requirements [@Vedral1997; @Vedral1997a; @Vedral1998]. Another class of measures, based on polynomial invariants, has been studied in the context of multipartite entanglement [@Barnum2001; @Leifer2004]. However, the connection between such measures and the efficiency of quantum algorithms remains unclear.
The Groverian measure of entanglement for pure quantum states of multiple qubits provides an operational interpretation in terms of the success probability of certain quantum algorithms [@Biham2002]. More precisely, the Groverian measure of a state $| \psi \rangle$ is related to the success probability of Grover’s search algorithm when this state is used as the initial state. A pre-processing stage is allowed in which an arbitrary local unitary operator is applied to each qubit. These operators are optimized in order to obtain the maximal success probability of the algorithm, $P_{\rm max}$. The Groverian measure is given by $G(|\psi\rangle) = \sqrt{1 - P_{\rm max}}$ [@Biham2002]. For a state $| \psi \rangle$ of $n$ qubits, the entanglement evaluated by this measure is, in fact, the entanglement between $n$ parties, each of which holds a single qubit. The Groverian measure has been used to characterize quantum states of high symmetry such as the Greenberger-Horne-Zeilinger (GHZ) and the W states [@Shimoni2004]. It has also been used to evaluate the entanglement produced by quantum algorithms such as Grover’s algorithm [@Shimoni2004] and Shor’s algorithm [@Shimoni2005]. The Groverian measure was also generalized to the case of mixed states [@Shapira2006].
Consider a quantum state $| \psi \rangle$ of $n$ qubits. These qubits can be partitioned into any $m \le n$ parties, each of which holds one or more qubits. In this paper we present a generalized Groverian measure which quantifies the entanglement between the parties for any desired partition. This is done by allowing arbitrary unitary operators within each partition. This essentially changes the meaning of locality to encompass the whole party, enabling a more complete characterization of quantum states of multiple qubits.
The paper is organized as follows. In Sec. \[sec:algorithm\] we briefly describe Grover’s search algorithm. In Sec. \[sec:measure\] we review the Groverian entanglement measure. In Sec. \[sec:generalized\] we present the generalized Groverian measure that applies to any desired partition of the quantum state. In Sec. \[sec:numerical\] we present an efficient numerical procedure for the calculation of the generalized Groverian measure. We use this measure in Sec. \[sec:results\] to characterize certain pure quantum states of high symmetry. A brief discussion is presented in Sec. \[sec:discussion\]. The results are summarized in Sec. \[sec:summary\].
Grover’s Search Algorithm {#sec:algorithm}
=========================
Grover’s algorithm performs a search for a marked element $m$ in a search space $D$ containing $N$ elements. We assume, for convenience, that $N = 2^n$, where $n$ is an integer. This way, the elements of $D$ can be represented by an $n$-qubit register $| x \rangle = | x_1,x_2,\dots,x_n \rangle$, with the computational basis states $| i \rangle$, $i=0,\dots,N-1$. The meaning of marking the element $m$ is that there is a function $f: D \rightarrow \{0,1\}$ such that $f=1$ for the marked elements and $f=0$ for the rest. To solve this search problem on a classical computer one needs to evaluate $f$ for each element, one by one, until the marked element is found. Thus, on average, $N/2$ evaluations of $f$ are required, and $N$ in the worst case. On a quantum computer, where $f$ can be evaluated *coherently*, a sequence of unitary operations, called Grover’s algorithm and denoted by $U_G$, can locate a marked element using only $O(\sqrt{N})$ coherent queries of $f$ [@Grover1996; @Grover1997a]. The algorithm is based on a unitary operator, called a quantum oracle, with the ability to recognize the marked states. Starting with the equal superposition state,
$$|\eta\rangle=\frac{1}{\sqrt{N}}\sum_{i=0}^{N-1}|i\rangle,$$
and applying the operator $U_G$ one obtains the state
$$U_G |\eta\rangle = |m\rangle + O({1}/{N}),
\label{eq:<mU}$$
which is then measured. The success probability of the algorithm is almost unity. The adjoint equation takes the form $\langle\eta| = \langle m|U_G + O({1}/{N})$. If an arbitrary pure state, $|\psi\rangle$, is used as the initial state instead of the state $| \eta \rangle$, the success probability is reduced to
$$P_s =
|\langle m|U_G|\psi\rangle|^2 + O({1}/{N}).
\label{eq:Psmpsi}$$
Using Eq. (\[eq:<mU\]) we obtain
$$P_s=|\langle\eta|\psi\rangle|^2 + O({1}/{N}),
\label{eq:etaU}$$
namely, the success probability is determined by the overlap between $| \psi \rangle$ and the equal superposition $| \eta \rangle$ [@Biham2002; @Biham2003].
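Eq. (\[eq:etaU\]) is easily verified numerically. The following Python sketch, which is only an illustration and not the simulation used in the paper, runs Grover’s algorithm on a register of $n=10$ qubits starting from a state with a prescribed overlap with $| \eta \rangle$; the choices of $n$, of the marked element and of the initial state are arbitrary.

```python
# Check that the success probability of Grover's algorithm, tuned for the
# equal-superposition input, is close to |<eta|psi>|^2 for an arbitrary
# initial state |psi>. Here n = 10 qubits and the marked element is m = 3.
import numpy as np

n, m = 10, 3
N = 2**n
eta = np.ones(N) / np.sqrt(N)          # equal superposition state

def grover(state):
    """Apply the optimal number of Grover iterations for an N-element search."""
    state = state.astype(complex).copy()
    for _ in range(int(np.round(np.pi / 4 * np.sqrt(N)))):
        state[m] *= -1.0                            # oracle: phase flip on |m>
        state = 2.0 * eta * (eta @ state) - state   # inversion about |eta>
    return state

# initial state with a 60% overlap (in probability) with |eta>
rng = np.random.default_rng(1)
chi = rng.normal(size=N) + 1j * rng.normal(size=N)
chi -= eta * (eta @ chi)               # remove the |eta> component
chi /= np.linalg.norm(chi)
psi = np.sqrt(0.6) * eta + np.sqrt(0.4) * chi

print(abs(grover(psi)[m])**2)          # success probability, ~0.6
print(abs(eta @ psi)**2)               # |<eta|psi>|^2 = 0.6
```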
The Groverian Entanglement Measure {#sec:measure}
==================================
Consider Grover’s search algorithm, in which an arbitrary pure state $| \psi \rangle$ is used as the initial state. Before applying the operator $U_G$, there is a pre-processing stage in which arbitrary local unitary operators $U_1$, $U_2$, $\dots$, $U_n$ are applied to the $n$ qubits in the register (Fig. \[fig1\]). These operators are chosen such that the success probability of the algorithm is maximized. The maximal success probability is thus given by
$$P_{\rm max} =
\max_{U_1,U_2,\dots,U_n}
|\langle m|U_G(U_1\otimes\dots\otimes U_n)|\psi\rangle|^2.
\label{eq:Pmax}$$
Using Eq. (\[eq:<mU\]), this can be re-written as
$$P_{\rm max} =
\max_{U_1,U_2,\dots,U_n}|\langle\eta|U_1\otimes\dots\otimes U_n|\psi\rangle|^2,$$
or
$$P_{\rm max} =
\max_{|\phi\rangle \in T}|\langle\phi|\psi\rangle|^2,$$
where $T$ is the space of all tensor product states of the form
$$|\phi\rangle = |\phi_1\rangle\otimes\dots\otimes|\phi_n\rangle.
\label{eq:T}$$
The Groverian measure is given by
$$G(\psi) =
\sqrt{1 - P_{\rm max}}.
\label{eq:G(psi)}$$
For the case of pure states, for which $G(\psi)$ is defined, it is closely related to an entanglement measure introduced in Refs. [@Vedral1997; @Vedral1997a; @Vedral1998] for both pure and mixed states and was shown to be an entanglement monotone. This measure can be interpreted as the distance between the given state and the nearest separable state. It is expressed in terms of the fidelity between the two states. Based on these results, it was shown [@Biham2002] that $G(\psi)$ satisfies (a) $G(\psi) \geq 0$, with equality only when $|\psi\rangle$ is a product state; (b) $G(\psi)$ cannot be increased using local operations and classical communication (LOCC). Therefore, $G(\psi)$ is an entanglement monotone for pure states. A related result was obtained in Ref. [@Miyake2001], where it was shown that the evolution of the quantum state during the iteration of Grover’s algorithm corresponds to the shortest path in Hilbert space using a suitable metric.
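For small systems, $P_{\rm max}$ can be evaluated by direct numerical maximization of the overlap with product states, using the single-qubit parametrization introduced in Sec. \[sec:numerical\]. The sketch below is a minimal illustration of this procedure (not the optimized algorithm described later in the paper); for the three-qubit GHZ state it recovers the known values $P_{\rm max}=1/2$ and $G=1/\sqrt{2}$.

```python
# Minimal sketch: evaluate G(psi) by maximizing |<phi|psi>|^2 over product
# states phi = phi_1 x ... x phi_n, each qubit parametrized as
# exp(i*g0)*cos(t)|0> + exp(i*g1)*sin(t)|1>, with several random restarts.
import numpy as np
from scipy.optimize import minimize

def qubit(theta, gamma0, gamma1):
    return np.array([np.exp(1j*gamma0)*np.cos(theta),
                     np.exp(1j*gamma1)*np.sin(theta)])

def neg_overlap_sq(params, psi, n):
    phi = np.array([1.0 + 0j])
    for i in range(n):
        phi = np.kron(phi, qubit(*params[3*i:3*i+3]))
    return -abs(np.vdot(phi, psi))**2

def groverian(psi, n, trials=20, seed=0):
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):                      # several random starting points
        x0 = rng.uniform(0, 2*np.pi, size=3*n)
        res = minimize(neg_overlap_sq, x0, args=(psi, n))
        best = max(best, -res.fun)
    return np.sqrt(1.0 - best)

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1/np.sqrt(2)
print(groverian(ghz, 3))                         # ~0.7071 = 1/sqrt(2)
```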
![ The quantum circuit that exemplifies the operational meaning of the Groverian entanglement measure $G(\psi)$. A pure state $| \psi \rangle$ of $n$ qubits is inserted as the input state. In the pre-processing stage, a local unitary operator is applied to each qubit before the resulting state is fed into Grover’s algorithm. The local unitary operators $U_i$, $i=1,\dots,n$ are optimized in order to maximize the success probability of the search algorithm for the given initial state $| \psi \rangle$. []{data-label="fig1"}](fig1){width="\columnwidth"}
The Generalized Groverian Measure {#sec:generalized}
=================================
Consider a quantum state $|\psi \rangle$ of $n$ qubits. In the original Groverian measure each qubit belongs to a separate party. The measure quantifies the entanglement between all these parties. This is a natural partitioning scheme for states created by quantum algorithms. The resulting measure can be considered as an intrinsic property of the state itself. However, consider a situation in which $m \le n$ different parties share the quantum state, where each party holds one or more qubits. These parties wish to cooperate and perform Grover’s search algorithm on the whole state. In this situation, in order to maximize the success probability, the operators $U_i$, $i=1,\dots,m$, should no longer be limited to single qubits. Instead, the operator $U_i$ acts on all the qubits in partition $i$. This makes it possible to quantify the inter-party entanglement while removing the intra-party entanglement. The quantum circuit that demonstrates the evaluation of the generalized Groverian measure for the state $|\psi \rangle$ with any desired partition is shown in Fig. \[fig2\]. The generalized Groverian measure is given by Eq. (\[eq:G(psi)\]), where Eq. (\[eq:T\]) is replaced by $|\phi\rangle = |\phi_1\rangle\otimes\dots\otimes|\phi_m\rangle$, where $| \phi_i \rangle$ is a state of partition $i$. Clearly, the generalized Groverian measure is an entanglement monotone.
![ The quantum circuit that exemplifies the operational meaning of the generalized Groverian measure $G(\psi)$, for $n$ qubits divided in a certain way between $m$ parties. In this example, a pure state $| \psi \rangle$ of six qubits, which is divided between three parties, is inserted as the input state into Grover’s algorithm. In the pre-processing stage, a local unitary operator is applied on the qubits held by each party before the resulting state is fed into Grover’s algorithm. The local unitary operators $U_1, U_2, U_3$ are optimized in order to maximize the success probability of the search algorithm, for the given initial state $| \psi \rangle$. []{data-label="fig2"}](fig2){width="\columnwidth"}
Numerical Evaluation of the Generalized Groverian Measure {#sec:numerical}
=========================================================
For a given partition of $m$ parties, the generalized Groverian measure is expressed in terms of the maximal success probability
$$P_{\rm max} =
\max_{|\phi_i\rangle}|\langle\phi_1|\otimes\dots
\otimes\langle\phi_m|\psi\rangle|^2,$$
where the maximization is over all possible states $| \phi_i \rangle$ of each partition, $i$. This calls for a convenient parametrization of the state of each partition. Consider a partition $i$ that includes one qubit. The state of this partition can be expressed by
$$|\phi_i\rangle =
e^{i \gamma_0} \cos{\theta_0}|0\rangle
+ e^{i \gamma_1} \sin{\theta_0}|1\rangle.
\label{eq:singleq}$$
If the partition includes two qubits, its state can be expressed by
$$\begin{aligned}
|\phi_i\rangle &=&
e^{i \gamma_0} \cos{\theta_0} |0\rangle
+ \sin{\theta_0} [ e^{i \gamma_1} \cos{\theta_1} |1\rangle \nonumber \\
&+& \sin{\theta_1} ( e^{i \gamma_2} \cos{\theta_2} |2\rangle
+ e^{i \gamma_3} \sin{\theta_2} |3\rangle )].
\label{eq:twoqubits}\end{aligned}$$
This parametrization can be generalized to any number of qubits in partition $i$. Using this parametrization, one can express the overlap function
$$f = \langle\phi_1|\otimes\dots
\otimes\langle\phi_m|\psi\rangle,$$
in terms of the $\theta_k$’s and $\gamma_k$’s of all the partitions. In fact, $f$ is simply a sum of products of sine, cosine and exponential functions of the $\theta_k$’s and $\gamma_k$’s. At this point, the steepest descent algorithm can be applied to maximize $|f|$. However, a more efficient maximization procedure can be obtained as follows.
For a given partition, one can express $f$ as a function of $\theta_0$ and $\gamma_0$, fixing all the other parameters $\theta_k$ and $\gamma_k$ at this and all other partitions, in the form
$$f=e^{i\gamma_0}c_0\cos{\theta_0}+d_0\sin{\theta_0}.$$
The values of $c_0=|c_0|e^{i\alpha_0}$ and $d_0=|d_0|e^{i\beta_0}$ depend on all the fixed parameters. The maximization of $|f|^2$ vs. $\theta_0$ and $\gamma_0$ leads to
$$|f|^2 \rightarrow |c_0|^2+|d_0|^2.$$
The values of $\gamma_0$ and $\theta_0$ at which this maximization is obtained are
$$\begin{aligned}
\gamma_0 &\rightarrow& \beta_0 - \alpha_0 \nonumber \\
\cos{\theta_0} &\rightarrow&
\frac{|c_0|}{\sqrt{|c_0|^2+|d_0|^2}},
\label{eq:params}\end{aligned}$$
where the sign of $\theta_0$ is the same as the sign of $|d_0| - |c_0|$.
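A quick numerical spot-check of this closed-form update (an illustrative script, not part of the original procedure; $c_0$ and $d_0$ are drawn at random) compares the value of $|f|^2$ at the angles of Eq. (\[eq:params\]) with a brute-force maximization:

```python
# Spot-check of the single-coordinate update: for random complex c0, d0, the
# angles gamma_0 = beta_0 - alpha_0 and cos(theta_0) = |c0|/sqrt(|c0|^2+|d0|^2)
# should reach the maximum |f|^2 = |c0|^2 + |d0|^2.
import numpy as np

rng = np.random.default_rng(1)
c0 = rng.normal() + 1j * rng.normal()
d0 = rng.normal() + 1j * rng.normal()

f = lambda th, ga: np.exp(1j * ga) * c0 * np.cos(th) + d0 * np.sin(th)

gamma0 = np.angle(d0) - np.angle(c0)                          # beta_0 - alpha_0
theta0 = np.arccos(abs(c0) / np.sqrt(abs(c0) ** 2 + abs(d0) ** 2))
print(abs(f(theta0, gamma0)) ** 2, abs(c0) ** 2 + abs(d0) ** 2)   # should coincide

# brute-force maximization over a grid, as an independent cross-check
th, ga = np.meshgrid(np.linspace(0, np.pi, 1000), np.linspace(0, 2 * np.pi, 1000))
print(np.abs(f(th, ga)).max() ** 2)                           # same value, to grid accuracy
```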
Note that the ordering of the states within each partition is arbitrary. Therefore, in order to perform the same procedure for $\theta_1$ and $\gamma_1$, the parametrization of the two-qubit partition in Eq. (\[eq:twoqubits\]) can be changed to
$$\begin{aligned}
|\phi_i\rangle &=&
e^{i \gamma_1} \cos{\theta_1} |1\rangle
+ \sin{\theta_1} [ e^{i \gamma_2} \cos{\theta_2} |2\rangle \nonumber \\
&+& \sin{\theta_2} ( e^{i \gamma_3} \cos{\theta_3} |3\rangle
+ e^{i \gamma_0} \sin{\theta_3} |0\rangle )].
\label{eq:twoqubits1}\end{aligned}$$
In practice, the optimization procedure consists of iterations of the following steps: (a) Randomly choose a basis state $| p \rangle$ in one of the $m$ partitions; (b) Reparametrize the state of the chosen partition such that $| p \rangle$ becomes the left-most state in Eq. (\[eq:twoqubits\]); (c) Reset $\theta_p$ and $\gamma_p$ in the chosen partition according to Eq. (\[eq:params\]) to maximize $|f|^2$, while fixing all the other parameters.
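The following sketch implements this procedure in spirit (it is not the original code): rather than resetting a single $(\theta_p,\gamma_p)$ pair per step, each update replaces the entire state of one partition by its optimizer given the others, which likewise can only increase $|f|$. The partition sizes, sweep counts, and tolerances are illustrative choices.

```python
# Alternating maximization of |<phi_1| x ... x <phi_m | psi>|^2 over product
# states of the partitions.  Each update replaces the full state of one
# partition by its optimizer given the others (a variant of the
# (theta_p, gamma_p) resets described above).
import numpy as np

def p_max(psi, dims, sweeps=200, restarts=20, tol=1e-12, seed=0):
    """dims lists the Hilbert-space dimension of each partition, e.g.
    dims=(2, 4) splits a 3-qubit state as 1 qubit + 2 qubits."""
    rng = np.random.default_rng(seed)
    psi = np.asarray(psi, dtype=complex).reshape(dims)
    best = 0.0
    for _ in range(restarts):
        # random normalized starting product state, one factor per partition
        phis = [rng.normal(size=d) + 1j * rng.normal(size=d) for d in dims]
        phis = [v / np.linalg.norm(v) for v in phis]
        prev = -1.0
        for _ in range(sweeps):
            for i in range(len(dims)):
                # contract <phi_j| onto every slot j != i (highest slot first,
                # so that lower slot indices remain valid)
                w = psi
                for j in range(len(dims) - 1, -1, -1):
                    if j != i:
                        w = np.tensordot(w, phis[j].conj(), axes=([j], [0]))
                w = w.reshape(-1)
                phis[i] = w / np.linalg.norm(w)   # optimal |phi_i> given the rest
            val = np.linalg.norm(w) ** 2          # current |f|^2
            if abs(val - prev) < tol:
                break
            prev = val
        best = max(best, val)
    return best

# Example: the 3-qubit W state split as one qubit + two qubits.
w3 = np.zeros(8, dtype=complex)
w3[[1, 2, 4]] = 1 / np.sqrt(3)          # |001>, |010>, |100>
print(p_max(w3, (2, 4)))                 # expect 2/3 for this split (see the Results section)
```

For the bipartite example shown, the iteration converges to $2/3$, in agreement with the reduced-density-matrix value discussed in the Results section.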
Results {#sec:results}
=======
Using the numerical tools described above, it is possible to evaluate the generalized Groverian entanglement of any pure quantum state for any given partition. Here we demonstrate this approach for pure quantum states of high symmetry, namely the generalized GHZ state and the W state.
Consider the generalized GHZ state of three qubits
$$| \psi \rangle = a_0 | 000 \rangle + a_1 | 111 \rangle.
\label{eq:GHZ}$$
The three-party case, in which each party holds one qubit, was considered previously [@Shimoni2004]. It was found that
$$P_{\rm max} = \max(|a_0|^2,|a_1|^2).
\label{eq:GHZpmax}$$
We will now evaluate the generalized Groverian measure for the case in which one party holds two qubits and the second party holds a single qubit. A general pure state of the first party can be expressed by
$$\begin{aligned}
| \phi_1 \rangle &=&
e^{i \gamma_0} \cos{\theta_0} | 00 \rangle
+ e^{i \gamma_1} \sin{\theta_0} \cos{\theta_1} | 01 \rangle \nonumber \\
&+& e^{i \gamma_2} \sin{\theta_0} \sin{\theta_1} \cos{\theta_2} | 10 \rangle
\nonumber \\
&+& e^{i \gamma_3} \sin{\theta_0} \sin{\theta_1} \sin{\theta_2} | 11 \rangle,\end{aligned}$$
while a general pure state of the second party is given by
$$| \phi_2 \rangle =
e^{i \gamma_4} \cos{\theta_4} | 0 \rangle
+ e^{i \gamma_5} \sin{\theta_4} | 1 \rangle.$$
The overlap function will take the form
$$\begin{aligned}
f &=& e^{i \gamma_0} e^{i \gamma_4} \cos{\theta_0} \cos{\theta_4} a_0
\nonumber \\
&+& e^{i \gamma_3} e^{i \gamma_5} \sin{\theta_0} \sin{\theta_1}
\sin{\theta_2} \sin{\theta_4} a_1. \end{aligned}$$
The maximization of $|f|^2$ vs. all the $\theta_i$’s and $\gamma_i$’s will lead to Eq. (\[eq:GHZpmax\]). This means that for the generalized GHZ state, the generalized Groverian measure does not depend on the partition. It can be shown that this result applies to generalized GHZ states with any number of qubits and any partition. This can be interpreted as if generalized GHZ states carry only bipartite entanglement, in agreement with previous studies [@Dur2000].
Another family of highly symmetric pure states of multiple qubits is the class of W states. The W state of $n$ qubits is given by
$$| \psi \rangle = \frac{1}{\sqrt{n}} \sum_{i=0}^{n-1} | 2^i \rangle,$$
namely, it is the equal superposition of all basis states in which one qubit is 1 and all the rest are 0. For the original partition, in which each party holds a single qubit, this class of states was found to have $P_{\rm max} = (1-1/n)^{n-1}$ [@Shimoni2004].
We will now extend this analysis to more general partitions of the $n$-qubit W state. First, we consider the bipartite case. In this case, the maximal success probability $P_{\rm max}$ is equal to the largest eigenvalue of the reduced density matrix obtained by tracing out one of the two parties [@Biham2002]. Consider first the simple case in which one party holds a single qubit, while the other party holds all the other qubits; we find that $P_{\rm max}=(n-1)/n$. In the general two-party case, in which one party holds $k$ qubits and the other holds $n-k$ qubits, we find that $P_{\rm max} = \max(k/n,1-k/n)$.
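These bipartite values are easy to check directly (a small illustrative script, not part of the original analysis), since the eigenvalues of the reduced density matrix are the squared Schmidt coefficients of the $k\,|\,(n-k)$ split:

```python
# Bipartite check: for the n-qubit W state split into its first k qubits and
# the remaining n-k, the largest eigenvalue of the reduced density matrix
# (largest squared Schmidt coefficient) should equal max(k/n, 1-k/n).
import numpy as np

def w_state(n):
    psi = np.zeros(2 ** n)
    psi[[2 ** i for i in range(n)]] = 1.0 / np.sqrt(n)
    return psi

def largest_schmidt_sq(psi, n, k):
    M = psi.reshape(2 ** k, 2 ** (n - k))       # rows: first k qubits
    return np.linalg.svd(M, compute_uv=False)[0] ** 2

n = 6
for k in range(1, n):
    print(k, largest_schmidt_sq(w_state(n), n, k), max(k / n, 1 - k / n))
```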
For more than two parties, the analogy between the generalized Groverian measure and the largest eigenvalue of a reduced density matrix does not apply. Thus, the evaluation of the generalized Groverian measure can be performed analytically only in a few simple cases, and in general requires the computational procedure described above.
Consider the $n$-qubit W state. Here we focus on a simple set of partitions into $m$ parties, in which $m-1$ parties include one qubit each, and the last party includes all the remaining qubits. In Table \[table1\] we present $P_{\max}$ for W states of $n=1,\dots,7$ qubits divided between $m=1,\dots,n$ parties. The results in the first two rows, as well as on the main diagonal, were obtained both analytically and by the numerical procedure. The rest of the results were obtained numerically. The results that appear as exact fractions were identified as such from the numerical values. In the four remaining cases, we could not identify such exact fractions.
Partitions   1 qubit     2 qubits    3 qubits    4 qubits    5 qubits    6 qubits    7 qubits
------------ ----------- ----------- ----------- ----------- ----------- ----------- -----------
1            1           1           1           1           1           1           1
2                        1/2         2/3         3/4         4/5         5/6         6/7
3                                    $(2/3)^2$   2/4         3/5         4/6         5/7
4                                                $(3/4)^3$   0.4408      3/6         4/7
5                                                            $(4/5)^4$   0.4198      0.4494
6                                                                        $(5/6)^5$   0.4084
7                                                                                    $(6/7)^6$
: The success probability $P_{\max}$ obtained for states of the W class. Each column corresponds to the W states with a given number of qubits, $n = 1,\dots,7$. Each row corresponds to a given number of partitions, $m=1,\dots,n$. Since there can be many ways to partition $n$ qubits into $m$ parties, we focused on a specific class of partitions in which $m-1$ parties hold one qubit each and all the remaining qubits are in one party.
\[table1\]
Discussion {#sec:discussion}
==========
Consider a pure quantum state $| \psi \rangle$ of $n$ qubits. The number of ways to divide these $n$ qubits into $m$ non-empty parties is given by the Stirling number of the second kind, $S(n,m)$. For each of these partitions, one can evaluate the generalized Groverian measure $G(\psi)$, which quantifies the $m$-partite entanglement between these parties. In this analysis, locality is defined according to the partition, so that all the operations that are performed within a single partition are considered as local. Using this approach, one can identify the partition for which $G(\psi)$ is maximal among all the partitions that include $m$ parties, and denote its value as $G_m(\psi)$. This quantity satisfies a monotonicity relation of the form $G_m(\psi) \le G_{m+1}(\psi)$, where $G_1(\psi)=0$. This means that splitting of parties tends to increase this measure of multipartite entanglement while merging of parties tends to decrease it.
Furthermore, the question of state ordering may be addressed using this measure. It would be interesting to find pairs of states, $|\psi_1\rangle$ and $|\psi_2\rangle$, such that $G_{m_1}(\psi_1) < G_{m_1}(\psi_2)$ but $G_{m_2}(\psi_1) > G_{m_2}(\psi_2)$ for some integers $m_1$ and $m_2$.
Summary {#sec:summary}
=======
In summary, we have presented a generalization of the Groverian entanglement measure of multiple qubits to the case in which the qubits are divided into any desired partition. The generalized measure quantifies the multipartite entanglement between these partitions. To demonstrate this measure, we evaluated it for a variety of pure quantum states using a combination of analytical and numerical methods. In particular, we have studied the entanglement of highly symmetric states of multiple qubits, such as the generalized GHZ states and the W states.
[27]{} natexlab\#1[\#1]{}bibnamefont \#1[\#1]{}bibfnamefont \#1[\#1]{}citenamefont \#1[\#1]{}url \#1[`#1`]{}urlprefix\[2\][\#2]{} \[2\]\[\][[\#2](#2)]{}
, in **, edited by (, , ), p. .
, in ** (, , ), p. .
, ****, ().
, ** (, , ).
, ****, ().
, ****, ().
, in **, edited by (, , ), p. .
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
|
---
abstract: 'Particles moving along curved trajectories will diffuse if the curvature fluctuates sufficiently in either magnitude or orientation. We consider particles moving at a constant speed with either a fixed or with a Gaussian distributed curvature magnitude. At small speeds the diffusivity is independent of the speed. At larger particle speeds, the diffusivity depends on the speed through a novel exponent. We apply our results to intracellular transport of vesicles. In sharp contrast to thermal diffusion, the effective diffusivity [*increases*]{} with vesicle size and so may provide an effective means of intracellular transport.'
author:
- 'Andrew D. Rutenberg'
- 'Andrew J. Richardson'
- 'Claire J. Montgomery'
title: Diffusion of Asymmetric Swimmers
---
The thermal Stokes-Einstein diffusivity of a sphere decreases as the particle radius $R$ increases [@Berg93]. For this reason, while diffusive transport is used for individual molecules within living cells [@Alberts2002], larger objects such as vesicles and pathogens often use active means of transport. While many intracellular vesicles appear to be transported by molecular motors directed along existing cytoskeletal tracks [@motorvesicle; @Alberts2002], [*undirected*]{} actin-polymerization mediated vesicle transport has been reported in some endosomes, lysosomes, other endogenous vesicles, and phagosomes [@vesicles; @reviews]. Active transport is also observed in the actin-polymerization-ratchet motility of certain bacteria [@reviews; @rickettsiae] and virus particles [@vaccinia] within host cells. It is important to characterize the transport properties of vesicles that are not moving along pre-existing cytoskeletal tracks.
Existing discussions of the motion of actively propelled microscopic particles, or “swimmers”, assume that in the absence of thermal fluctuations particles would move in straight trajectories [@Lovely75; @Berg93]. Thermal rotational diffusion will then randomly re-orient the trajectory [@Berg93], so that over long times diffusive transport will be observed. However, in actin-polymerization based motility, particles appear to be attached to their long actin tails [@Gerbal2000], which in turn are embedded in the cytoskeleton [@Theriot92]. While thermal fluctuations will thereby be severely reduced, the actin-polymerization itself is a stochastic process with its own fluctuations [@Alberts2002; @Mogilner96]. These intrinsic fluctuations can explain the observed curved trajectories, as well as the variation of the curvature over time [@Rutenberg2001]. The diffusivity of such asymmetrically moving particles has not been previously explored.
In this letter, we study asymmetric swimmers that would move in perfect circles in the absence of fluctuations. We examine both a “broken swimmer” with a fixed curvature magnitude and an axis of curvature that is re-oriented by fluctuations (rotating curvature, RC), and a “microscopic swimmer” with a normally-distributed curvature that is spontaneously generated by fluctuations (Gaussian curvature, GC). In both of these systems, fluctuations lead to diffusion at long times. We use computer simulations to measure the diffusivity of these systems as a function of the root-mean-squared curvature $K_0$, the particle speed $v$, and the timescale characterizing the curvature dynamics $\tau$.
We obtain some exact results from polymer systems, where each polymer configuration represents a possible particle trajectory. Indeed, a broken swimmer with a fixed curvature magnitude in $d=3$ is exactly analogous to the hindered jointed chain discussed by Flory [@Flory89] and we thereby recover the entire scaling function exactly. In that case, the diffusivity is independent of particle speed $v$. For Gaussian curvatures and for systems in restricted geometries ($d=2$), the polymer analogy gives us the diffusivity only in the limit of slow speeds. At larger speeds, our simulations over $5$ decades of speed show that diffusivity depends on particle speed with a non-trivial exponent $\lambda$. The diffusivity appears to be dominated by the occasional long straight segments of trajectory that occur when the curvature is small. Scaling arguments based on this observation are consistent with the measured exponent $\lambda_{2d} =0.98 \pm 0.02$ in $d=2$, but do not recover our measured exponent $\lambda_{3d}=0.71 \pm 0.01$ in $d=3$.
A curved path has a curvature magnitude $K
\equiv 1/R$, where $R$ is the instantaneous radius of curvature. If we describe a particle trajectory by a position ${\bf r}(t)$, then the vector curvature is defined by the cross-product $ {\bf K} \equiv { {\bf v} \times \dot{\bf v} / v^3}$, where $v= |{\bf v}|$ is the speed and the dot $\dot{}$ indicates a time-derivative. For uniform motion around a circle, $R$ is the radius of the circle, and ${\bf K}$ is oriented perpendicular to the circle along the axis. We consider particles moving at a constant speed and with an instantaneous curvature ${\bf K}$, so that $\dot{\bf r} = {\bf v}$ and $ \dot{\bf v} = - v {\bf v} \times {\bf K}$. For “rotating curvature” dynamics (RC) we fix the curvature magnitude $|{\bf K}|=K_0$ but allow the curvature to randomly rotate around the direction of motion: $$\dot{\bf K}_{RC} = \xi \hat{\bf v} \times {\bf K}
\label{EQN:RCkdyn}$$ where the unit-vector $\hat{\bf v}= {\bf v}/v$, the Gaussian noise $\xi$ has zero mean, and $\left< \xi(t) \xi(t') \right> = 2
\delta(t-t')/ \tau$ with a characteristic timescale $\tau$. This represents the simplest description of a mesoscopic swimmer that has a “locked-in” curvature due to, e.g., an asymmetric shape. For “Gaussian curvature” dynamics (GC) the curvature magnitude changes as well: $$\dot{\bf K}_{GC} = -{\bf K}/\tau+\bxi
\label{EQN:KGC}$$ where the noise $\bxi$ is perpendicular to ${\bf v}$ with zero mean and $\left<\bxi(t) \cdot \bxi(t')\right>= \delta(t-t') K_0^2/\tau$, such that $\left<{\bf K}^2\right> = K_0^2$. This represents the simplest description of a microscopic swimmer “trying” to swim in a straight line subject to intrinsic fluctuations in the motion. The resulting curvatures are Gaussian distributed in each component. For particles restricted to two-dimensions with either RC or GC dynamics, we only use the normal ($\hat{z}$) component of the vector-curvature to update the velocity within the plane, i.e. $ \dot{\bf v} = - v {\bf v} \times \hat{\bf z} K_z$ in $d=2$.
There are two natural timescales. We explicitly introduce $\tau$, which controls the noise correlation and so sets the timescale over which the curvature changes. There is also the inverse of the angular rotation rate, $t_c \equiv 1/(v K_0)$. Diffusion will only be observed for elapsed times $t$ much greater than any other timescale in the system, i.e. $t \gg t_c$ [*and*]{} $t \gg \tau$. The diffusivity of a particle is given by $D \equiv \left<r^2\right>/(2dt)$ in the limit as the elapsed time $t \rightarrow \infty$, in spatial dimension $d$.
A polymer chain with fixed bond lengths ($\ell$) and angles ($\theta_f$), and with independent bond rotation potentials ($V(\phi_f)$) [@Flory89] is statistically identical to the continuous RC trajectory in $3d$ if for a discrete time-step $\Delta t$ we take $\ell = v \Delta t$. The end-to-end distance for a long $n$-bond polymer is $\left<r^2\right> = n \ell^2 C_n$. The correspondence is complete as the elapsed time $t = n \Delta t \rightarrow \infty$. The bond and dihedral angles determine $C_\infty = (1 + \cos\theta_f) (1+\left<\cos \phi_f\right>) / [(1- \cos \theta_f)
(1- \left<\cos \phi_f \right>)]$ [@Flory89]. Swimmers follow continuous paths, so we take the limit of small $\Delta t$ and fix the polymer rotation angle from the curvature in that limit by $\theta_f = K_0 v \Delta t$, and rotate the curvature by $\left< \phi_f^2 \right> = 2 \Delta t/\tau$ in agreement with Eqn. \[EQN:RCkdyn\]. In the limit $\Delta t \rightarrow 0$ we recover the [*exact*]{} result $D = 1/(3 K_0^2 \tau)$ in $d=3$. Remarkably, $D$ is independent of $v$.
For a GC trajectory in $d=3$, there is no obvious polymer analogy since the curvature magnitude evolves with time. In the limit of $\tau \rightarrow 0$ however, the curvature is independently Gaussian distributed at every point along the trajectory and the diffusivity can be extracted from the “worm-like chain” polymer model originally solved by Kratky and Porod [@Doi]. The result is $D = 1/(3 K_0^2 \tau)$ in the limit of small $\tau$. Note that the diffusivity diverges as $1/\tau$ so this is the leading asymptotic dependence for small $\tau$. We obtain the same diffusivity for both RC and GC dynamics for small $\tau$.
We use these exact results to define natural dimensionless scaling functions for the diffusivity of microscopic swimmers: $$\tilde{D}_{Rd, Gd}(\tilde{v}) \equiv D K_0^2 \tau
\label{EQN:RCGC}$$ where the index $Rd$ [*or*]{} $Gd$ indicates both the dynamics (RC or GC) and the spatial dimensionality $d$, and $$\tilde{v} \equiv v K_0 \tau$$ is a dimensionless speed. In terms of these scaling functions we have $\tilde{D}_{R3}(\tilde{v})=\tilde{D}_{G3}(0)= 1/3$. The same Kratky-Porod approach in $d=2$ gives $\tilde{D}_{R2}(0)=\tilde{D}_{G2}(0)= 1$.
We have simulated the trajectories of large numbers of independent particles with RC and with GC dynamics. For fixed $v$ and $K_0$, we varied $\tau$ to explore the scaled velocity $\tilde{v} \equiv v K_0 \tau$ over $5$ orders of magnitude. For each $\tilde{v}$, we averaged over the trajectories of at least $1000$ particles. We explicitly integrated the dynamical equations using a simple Euler update with a small timestep $\Delta t$. In all cases $t \gg \tau \gg \Delta t$ and $t \gg t_c \gg \Delta t$, with separation of timescales by factors of $10-100$. Systematic errors due to $\Delta t$ and $t$ are below our noise levels, and statistical errors (when not shown) are smaller than the size of our plotted points. Throughout, our numerical results agree with all of the exact results from the polymer analogy. We illustrate the trajectories that we observe in $d=2$ in Fig. \[FIG:traj\], with both small and large scaled speeds $\tilde{v}$. In both cases the curvature $K_0=1$, but particles only complete loops at large $\tilde{v}$. Qualitatively similar trajectories are seen in $d=3$ with GC curvature dynamics.
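A stripped-down sketch of such a simulation in $d=2$ with GC dynamics is given below (it is not the code used for the data reported here; the noise amplitude is chosen so that $\langle K^2\rangle = K_0^2$ and may differ from the convention of Eq. (\[EQN:KGC\]) by factors of order one, and the particle numbers, run times, and timescale separations are much smaller than those quoted above):

```python
# Euler-Maruyama sketch of GC dynamics in d=2: the heading angle phi rotates at
# rate v*K, and the signed curvature K relaxes on a timescale tau.
import numpy as np

def diffusivity_gc_2d(v=1.0, K0=1.0, tau=0.1, n_particles=200, seed=0):
    rng = np.random.default_rng(seed)
    t_c = 1.0 / (v * K0)                       # inverse angular rotation rate
    dt = 0.02 * min(tau, t_c)                  # dt << tau and dt << t_c
    t_total = 200.0 * max(tau, t_c)            # t >> tau and t >> t_c
    sigma = K0 * np.sqrt(2.0 / tau)            # stationary variance <K^2> = K0^2

    phi = rng.uniform(0.0, 2.0 * np.pi, n_particles)   # heading angles
    K = rng.normal(0.0, K0, n_particles)               # signed curvatures
    x = np.zeros(n_particles)
    y = np.zeros(n_particles)
    for _ in range(int(t_total / dt)):
        x += v * np.cos(phi) * dt
        y += v * np.sin(phi) * dt
        phi += v * K * dt                              # heading rotates at rate v K
        K += -K / tau * dt + sigma * np.sqrt(dt) * rng.normal(size=n_particles)
    return np.mean(x ** 2 + y ** 2) / (4.0 * t_total)  # D = <r^2>/(2 d t), d = 2

# Small scaled speed, v*K0*tau = 0.1: the scaled diffusivity D*K0^2*tau should
# approach a constant of order one (its precise value depends on the noise convention).
D = diffusivity_gc_2d(v=1.0, K0=1.0, tau=0.1)
print(D * 1.0 ** 2 * 0.1)
```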
In $d=2$, shown in Fig. \[FIG:2d\], both rotating curvature (open circles) and Gaussian curvature (filled circles) approach their asymptotic value of $\tilde{D}_{R2}(0)=\tilde{D}_{G2}(0) =1$ at small $\tilde{v}$. At $\tilde{v} \approx 1$ there is a sharp cross-over to a large-$\tilde{v}$ power-law regime, characterized by an exponent $\lambda_{2d}$ where $\tilde{D}_{R2} \sim \tilde{D}_{G2} \sim \tilde{v}^{\lambda_{2d}}$ for large $\tilde{v}$. We show the effective exponents $\lambda_{eff} \equiv \Delta \log{(D K_0^2 \tau)}/\Delta \log{\tilde{v}}$ between consecutive points in the inset of Fig. \[FIG:2d\], as well as the best fit exponent $\lambda_{2d} =0.98 \pm 0.02$. We fit $\lambda_{2d}$ from the large-$\tilde{v}$ GC data only, due to the systematic cross-over remaining in the RC data even at large $\tilde{v}$.
Simulations in $d=3$ with rotating curvature (RC) dynamics lead to a diffusivity in excellent agreement with the exact result from polymer physics, $\tilde{D}_{R3} = 1/3$, as shown by open circles in Fig. \[FIG:3d\]. Gaussian curvature dynamics ($\tilde{D}_{G3}$, filled circles) has the same behavior for small $\tilde{v}$, but exhibits a sharp crossover at $\tilde{v} \approx 1$ to a power-law regime $\tilde{D}_{G3} \sim \tilde{v}^{\lambda_{3d}}$ for large $\tilde{v}$. We find that the best-fit exponent is $\lambda_{3d}=0.71 \pm 0.01$, as shown by the solid line in the inset of Fig. \[FIG:3d\]. Because $\lambda_{3d}<1$, this scaling curve may be used to uniquely identify the dynamical timescale $\tau$ if $D$, $K_0$, and $v$ are measured experimentally.
Is there a simple way of understanding the asymptotic behavior of $\tilde{D}$? For RC dynamics in $d=3$ the instantaneous curvature does not change in magnitude even while the curvature axis wanders. The particle will go in a circular trajectory, not contributing to diffusivity, until the curvature axis wanders significantly. The result is a random walk with step size given by the radius of curvature $\Delta r \sim 1/K_0$ and an interval between steps of $\tau$, leading to $D \sim 1/(K_0^2 \tau)$. This qualitatively explains why the exact result $\tilde{D}_{R3}=1/3$ is independent of $\tilde{v}$.
It is more difficult to understand the $\tilde{D} \sim \tilde{v}^\lambda$ behavior for large $\tilde{v}$ in the other systems. We start with a simple scaling argument based on the assumption that the relatively straight segments shown in Fig. \[FIG:traj\] b) dominate the diffusivity. The interval between periods of small curvature should be of the order of the autocorrelation time $\tau$. The length $\Delta r$ of the straight segments is determined by how long the interval of small curvature lasts, $\Delta t$, since $\Delta r \approx v \Delta t$. For the segment to be straight, the curvature must be less than the inverse length, i.e. $K_{max} \lesssim 1/\Delta r$. The fraction of the time we have small curvature below $K_{max}$ in magnitude should be proportional to the probability of having curvature below $K_{max}$. In $d=2$ only the normal component of curvature affects the dynamics, so that $P(K) \approx {\rm const}$ for $K \ll K_0$. This applies to both GC and RC. We therefore expect $\Delta t \sim \tau K_{max}/K_0$. We maximize $K_{max}$ to maximize the contribution to $D \approx \Delta r^2/\tau$ and find $\tilde{D}_{G2} \sim \tilde{D}_{R2} \sim \tilde{v} $ as $\tilde{v} \rightarrow \infty$. This indicates that $\lambda_{2d}=1$, which is consistent with our best-fit value $\lambda_{2d}=0.98 \pm 0.02$. However, in $d=3$ for GC dynamics the same argument leads to $\lambda_{3d}=2/3$ since two Gaussian distributed components of the curvature give $P_<(K_{max}) = \int_0^{K_{max}} dK P(K) \sim K_{max}^2/K_0^2$ for $K_{max} \ll K_0$. This is inconsistent with our measured value of $\lambda_{3d}=0.71 \pm 0.01$, a significant $4 \sigma$ discrepancy.
At what radius $R_c$ does a small spherical particle achieve a higher diffusivity by actively swimming, as compared to passive thermal diffusion characterized by $D_T = k_B T/ (6 \pi \eta R)$ [@Berg93]? We can answer this question within the context of actin-polymerization based motility of small intracellular particles, since the size dependence of $K_0$, $v$, and $\tau$ is known, at least approximately. With the approximation that $n$ propulsive actin filaments are randomly distributed over a particle of size $R$, the curvature of the trajectory will be $K_0 \propto 1/(R\sqrt{n})$ [@Rutenberg2001]. With a size-independent surface-density of filaments we obtain $K_0 \approx A/R^2$, with a constant of proportionality $A$. By observations of [*Listeria monocytogenes*]{} we estimate $A \approx 0.1 \mu m$ [@Rutenberg2001]. We also [*conservatively*]{} assume size-independent values for cytoplasmic viscosity $\eta \approx 3 Pa \cdot s$ [@Berg93; @Rutenberg2001], speed $v \approx 0.1 \mu m/s$, and autocorrelation decay time $\tau \approx 100 s$ [@Sechi97]. We find that the micron-scale bacterium [*L. monocytogenes*]{} has $\tilde{v} \approx 1$, so that smaller particles will have $\tilde{v} >1$. Using the large $\tilde{v}$ asymptotic behavior of $\tilde{D}_{G3}$ shown in Fig. \[FIG:3d\], $D \approx 0.41 \tilde{v}^{\lambda_{3d}}/(K_0^2 \tau)$, and the size-dependence $K_0 \approx A/R^2$, we obtain $$D_{G3} \sim R^{4-2 \lambda_{3d}} v^{\lambda_{3d}}/(A^{2-\lambda_{3d}} \tau^{1-\lambda_{3d}}),
\label{EQN:G3size}$$ with a measured $\lambda_{3d}=0.71 \pm 0.01$. In dramatic contrast to thermal diffusion, $D$ [*increases*]{} with increasing particle size. Comparing with $D_T$ we find that for all sizes [*above*]{} $R_c \approx 80 nm$ a particle will have a higher diffusivity by actively swimming by the actin-polymerization mechanism than by passive thermal diffusion. Provocatively, this is in the middle of the vesicle-size distribution seen in neural systems [@vesiclesize].
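The crossover estimate can be reproduced with a short script (a back-of-the-envelope sketch; the prefactor $0.41$, the exponent $\lambda_{3d}$, and the parameter values $A$, $\eta$, $v$, and $\tau$ are those quoted above, while $T \approx 300\,$K is an additional assumption). With these inputs the crossing comes out near $0.1\,\mu m$, consistent with $R_c \approx 80\,nm$; the precise value is sensitive to the assumed temperature and prefactor.

```python
# Crossover radius R_c where the active diffusivity (large-vtilde asymptotics,
# D ~ 0.41 vtilde^lambda / (K0^2 tau), with K0 = A/R^2) overtakes the thermal
# Stokes-Einstein value.  Assumes T ~ 300 K; other numbers are the rough
# estimates quoted in the text.
import numpy as np
from scipy.optimize import brentq

kT  = 1.38e-23 * 300.0     # J
eta = 3.0                  # Pa s, cytoplasmic viscosity
v   = 0.1e-6               # m / s
tau = 100.0                # s
A   = 0.1e-6               # m, so that K0 = A / R^2
lam = 0.71                 # measured lambda_3d

def D_active(R):
    K0 = A / R ** 2
    vtilde = v * K0 * tau
    return 0.41 * vtilde ** lam / (K0 ** 2 * tau)

def D_thermal(R):
    return kT / (6.0 * np.pi * eta * R)

# R_c solves D_active(R) = D_thermal(R); bracket between 1 nm and 10 um
R_c = brentq(lambda R: D_active(R) - D_thermal(R), 1e-9, 1e-5)
print(R_c * 1e9, "nm")     # of order 100 nm, cf. R_c ~ 80 nm in the text
```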
Our treatment of microscopic swimmers has ignored thermal fluctuations. A “rocket” traveling straight at speed $v$ that is re-oriented only by thermal effects will have $D_u=4 \pi \eta R^3 v^2/(3 k_B T)$ [@Berg93]. In comparison with our results for $D$, we find that $D<D_u$ for particles larger than $R_u \approx 0.07 nm$. For actin-polymerization based motility, intrinsic fluctuations appear to dominate thermal fluctuations at the particle sizes where active transport is advantageous.
In summary, we find that diffusivities of asymmetric microscopic swimmers depend on whether the swimmers are restricted to $2d$ or $3d$, and whether they have fixed asymmetries (RC) or the asymmetries are spontaneously generated (GC). Diffusivities are independent of particle speed at low speeds, in agreement with analogous polymer systems. At higher speeds an anomalously large diffusivity is observed that depends on the particle speed as $\tilde{v}^\lambda$, where $\lambda_{2d}=0.98 \pm 0.02$, in agreement with a scaling argument for $\lambda_{2d}=1$. However, $\lambda_{3d}=0.71 \pm 0.01$, which significantly differs from our simple scaling result in $d=3$. We apply our results to intracellular bacteria, virus particles, and vesicles that move via actin-polymerization. We find that diffusivities due to asymmetric swimming exceed thermal diffusivities for particles [*larger*]{} than approximately $80 nm$. As a result, asymmetric swimming may provide a viable intracellular transport mechanism even for vesicle-sized particles. We find that for the relevant dynamics (GC in $d=3$), diffusivities should increase with particle size, speed, and filament turnover rate, and also with smaller curvatures for a given size. It is interesting that the bacterium [*Rickettsiae rickettsii*]{} exhibits actin-polymerization intracellular motility with smaller intra-cellular speeds but straighter trajectories [@rickettsiae; @Rutenberg2001], raising the question of whether maximal diffusivity is selected for in this or other biological systems.
This work was supported financially by an NSERC discovery grant. C. Montgomery would also like to acknowledge support from an NSERC USRA.
[99]{} H.C. Berg, “Random Walks in Biology”, 2nd ed. (Princeton, 1993). B. Alberts [*et al.*]{}, “Molecular Biology of the Cell”, 4th ed. (Garland, 2002). J. Taunton [*et al.*]{}, J. Cell. Biol. [**148**]{}, 519 (2000); C.J. Merrifield [*et al.*]{}, Nature Cell. Biol. [**1**]{}, 72 (1999); A.L. Rozelle [*et al.*]{}, Curr. Biol. [**10**]{}, 311 (2000); F.L. Zhang [*et al.*]{}, Cell. Motil. Cyto. [**53**]{}, 81 (2002). For reviews see L.A. Cameron [*et al.*]{}, Nature Rev. Mol. Cell Biol. [**1**]{}, 110 (2000); D. Pantaloni [*et al.*]{}, Science [**292**]{}, 1502 (2001); F. Frishnecht and M. Way, Trends Cell Biol. [**11**]{}, 30 (2001). N. Hirokawa, Science [**279**]{}, 519 (1998). S. Cudmore [*et al.*]{}, Nature [**378**]{}, 636 (1995). P.S. Lovely and F.W. Dahlquist, J. Theor. Biol. [**50**]{}, 477 (1975). F. Gerbal [*et al.*]{}, Eur. Biophys. J. [**29**]{}, 134 (2000). J.A. Theriot [*et al.*]{}, Nature [**357**]{}, 257 (1992). A. Mogilner and G. Oster, Biophys. J. [**71**]{}, 3030 (1996). P.J. Flory, “Statistical mechanics of chain molecules” (Hanser Gardner, 1989). See, e.g., M. Doi and S.F. Edwards, “The Theory of Polymer Dynamics” (Oxford, 1986). A.D. Rutenberg and M. Grant, Phys. Rev. E [**64**]{}, 21904 (2001). R.A. Heinzen [*et al.*]{}, Infect. Imm. [**67**]{}, 4201 (1999). A.S. Sechi [*et al.*]{}, J. Cell. Biol. [**137**]{}, 155 (1997). D. Bruns [*et al.*]{}, Neuron [**28**]{}, 205 (2000); N. Harata [*et al.*]{}, Proc. Nat. Acad. Sci., [**98**]{}, 12748 (2001); D. Schubert [*et al.*]{}, Brain. Res. [**190**]{}, 67 (1980).
|
---
abstract: 'We study critical orbits and bifurcations within the moduli space ${\mathrm{M}}_2$ of quadratic rational maps, $f: {{\mathbb P}}^1\to {{\mathbb P}}^1$. We focus on the family of curves, ${\mathrm{Per}}_1(\lambda) \subset {\mathrm{M}}_2$ for $\lambda\in{{\mathbb C}}$, defined by the condition that each $f\in {\mathrm{Per}}_1(\lambda)$ has a fixed point of multiplier $\lambda$. We prove that the curve ${\mathrm{Per}}_1(\lambda)$ contains infinitely many postcritically-finite maps if and only if $\lambda=0$, addressing a special case of [@BD:polyPCF Conjecture 1.4]. We also show that the two critical points of $f$ define distinct bifurcation measures along ${\mathrm{Per}}_1(\lambda)$.'
address:
- 'Department of Mathematics, Northwestern University, USA'
- 'Department of Mathematics, Zhejiang University, P.R.China'
- 'Department of Mathematics, University of British Columbia, Canada'
author:
- Laura De Marco
- Xiaoguang Wang
- Hexi Ye
title: Bifurcation measures and quadratic rational maps
---
[^1]
Introduction
============
In this article, we study the dynamics of holomorphic maps $f: {{\mathbb P}}^1_{{\mathbb C}}\to {{\mathbb P}}^1_{{\mathbb C}}$ of degree $2$. We concentrate our analysis on the lines ${\mathrm{Per}}_1(\lambda)$ within the moduli space ${\mathrm{M}}_2{\simeq}{{\mathbb C}}^2$ of quadratic rational maps, introduced by Milnor in [@Milnor:quad]. For each $\lambda\in{{\mathbb C}}$, ${\mathrm{Per}}_1(\lambda)$ is the set of all (conformal conjugacy classes of) maps $f$ with a fixed point $p$ at which $f'(p) = \lambda$; so ${\mathrm{Per}}_1(0)$ is the family of maps conjugate to a polynomial.
Our first main result addresses a special case of Conjecture 1.4 of [@BD:polyPCF]. (See also the corrected version in [@D:stableheight §6.1] and this case presented in [@Silverman:moduli §6.5].) The conjecture aims to classify the algebraic subvarieties of the moduli space ${\mathrm{M}}_d$ containing a Zariski-dense set of postcritically-finite maps, for each degree $d\geq 2$. By definition, a rational map of degree $d$ is postcritically finite if each of its $2d-2$ critical points has a finite forward orbit. It is known that the postcritically finite maps form a Zariski-dense subset of ${\mathrm{M}}_d$ in every degree $d\geq 2$, but the subvarieties intersecting many of them are expected to be quite special.
\[PCF maps\] The curve ${\mathrm{Per}}_1(\lambda)$ in ${\mathrm{M}}_2$ contains infinitely many postcritically-finite maps if and only if $\lambda = 0$.
This result is the exact analog of Theorem 1.1 in [@BD:polyPCF] that treated cubic polynomials. As in that setting, one implication is easy: if $\lambda = 0$, the curve ${\mathrm{Per}}_1(0)$ defines the family of quadratic polynomials, and it contains infinitely many postcritically-finite maps (by a standard application of Montel’s theorem on normal families). The converse direction is more delicate; its proof, though similar in spirit to that of [@BD:polyPCF Theorem 1.1], required different techniques, more in line with our work for the Lattès family of [@DWY:Lattes].
The second theme of this paper is a study of the bifurcation locus in the curves ${\mathrm{Per}}_1(\lambda)$; refer to §\[bifurcation\] for definitions. For each fixed $\lambda\not=0$, we work with an explicit parametrization of ${\mathrm{Per}}_1(\lambda)^{cm}$, a double cover of ${\mathrm{Per}}_1(\lambda)$ consisting of maps with marked critical points: $$f_t(z) = \frac{\lambda z}{z^2 + t z + 1}$$ with $t\in{{\mathbb C}}$. The map $f_t$ has a fixed point at $z=0$ with multiplier $\lambda$; the critical points of $f_t$ are $\{\pm 1\}$ for all $t$; note that $f_t$ is conjugate to $f_{-t}$ via the conjugacy $z\mapsto -z$ interchanging the two critical points. Each critical point determines a finite bifurcation measure on ${\mathrm{Per}}_1(\lambda)^{cm}$, which we denote by $\mu^+_\lambda$ and $\mu^-_\lambda$. (The symmetry of $f_t$ implies that $\mu^-_\lambda = A_* \mu^+_\lambda$ for $A(t) = -t$.) Our main result in this direction is:
\[distinct measures\] For every $\lambda\not=0$, we have $\mu^+_\lambda \not = \mu^-_\lambda$ in ${\mathrm{Per}}_1(\lambda)^{cm}$.
Theorem \[distinct measures\] is not unexpected. For any $\lambda$, the two critical points should behave independently. In fact, it is not difficult to show that the critical points [*cannot*]{} satisfy any dynamical relation of the form $f^n(+1) \equiv f^m(-1)$ along ${\mathrm{Per}}_1(\lambda)^{cm}$; see Corollary \[independence\]. However, computational experiments suggested some unexpected alignment of the two bifurcation loci, $\mathrm{Bif}^+ = {\operatorname{supp}}\mu^+_\lambda$ and $\mathrm{Bif}^- = {\operatorname{supp}}\mu^-_\lambda$, for certain values of $\lambda$. For example, for values of $\lambda$ near $-4$, the two bifurcation loci appear remarkably similar. See Figure \[L=-4\] and Question \[distinct supports\].
A key ingredient in our proof of Theorem \[PCF maps\] is an equidistribution statement, that parameters $t$ where the critical point $\pm 1$ has finite forward orbit for $f_t$ will be uniformly distributed with respect to the bifurcation measure $\mu^{\pm}_\lambda$. Post-critically finite maps have algebraic multipliers, so it suffices to study the case where $\lambda\in{\overline{{{\mathbb Q}}}}$; to prove the equidistribution result, we rely on the arithmetic methods introduced in [@Baker:Rumely:equidistribution] and [@FRL:equidistribution]. But there are two features of ${\mathrm{Per}}_1(\lambda)$ that distinguish it from a series of recent articles on this theme (see e.g. [@BD:preperiodic; @BD:polyPCF; @Ghioca:Hsia:Tucker; @GHT:preprint; @Favre:Gauthier]); in particular, we could [*not*]{} directly apply the existing arithmetic equidistribution theorems for points of small height on ${{\mathbb P}}^1$.
1. The bifurcation locus can be noncompact, and the proof that the potential functions for the bifurcation measures are continuous across $t=\infty$ is more delicate (we show this in Theorem \[convergence\], with the method we used in [@DWY:Lattes]); and
2. the canonical height function defined on ${\mathrm{Per}}_1(\lambda)$ (associated to each critical point) is only “quasi-adelic," meaning that it may have nontrivial contributions from infinitely many places of any number field containing $\lambda$.
Because of (2), we use a modification of the original equidistribution result (and of its proofs, following [@BRbook; @FRL:equidistribution]) that appears in [@Ye:quasi]. We deduce the following result. (The full statement of this theorem appears as Theorem \[equidistribution at all places\].)
\[equidistribution\] For every $\lambda \in {\overline{{{\mathbb Q}}}}\setminus\{0\}$ with $\lambda$ not a root of unity, or for $\lambda=1$, the set $${\mathrm{Preper}}^+_\lambda = \{t\in{\mathrm{Per}}_1(\lambda)^{cm}: + 1 \mbox{ has finite forward orbit for } f_t\}$$ is equidistributed with respect to $\mu^+_\lambda$; similarly for ${\mathrm{Preper}}^-_\lambda$ and $\mu^-_\lambda$. More precisely, for any non-repeating sequence of finite sets $S_n \subset {\mathrm{Preper}}^+_\lambda$, the discrete probability measures $$\mu_n \; = \; \frac{1}{|G \cdot S_n|} \; \sum_{t \,\in \,G\cdot S_n} \; \delta_t$$ converge weakly to the measure $\mu^+_\lambda$, where $G= {\operatorname{Gal}}(\overline{{{\mathbb Q}}(\lambda)}/{{\mathbb Q}}(\lambda))$.
Note that the sets ${\mathrm{Preper}}^{\pm}_\lambda$ are invariant under the action of the Galois group $G$: if $+1$ is preperiodic for $t_0$, then $+1$ is preperiodic for all $t$ in its Galois orbit, since these parameters are solutions of an equation of the form $f_t^n(+1) = f_t^m(+1)$, with coefficients in ${{\mathbb Q}}(\lambda)$. A “classical" setting of Theorem \[equidistribution\] would be to take $S_n$ as the full set of solutions to the equation $f_t^n(+1) = f_t^m(+1)$, with any sequence $0 \leq m = m(n) < n$ as $n\to\infty$.
The equidistribution of Theorem \[equidistribution\] for $\lambda = 0$ is well known. It was first shown by Levin (in the classical sense of equidistribution) [@Levin:iteration], and it was shown in the stronger (arithmetic) form by Baker and Hsia [@Baker:Hsia Theorem 8.15]. In fact the equidistribution of Theorem \[equidistribution\] holds at each place $v$ of the number field ${{\mathbb Q}}(\lambda)$, on an appropriately-defined Berkovich space ${{\mathbb P}}^{1,an}_{{{\mathbb C}}_v}$, for sets $S_n$ with canonical height tending to 0; see Theorem \[equidistribution at all places\].
[**Outline of the article.**]{} In Section \[bifurcation\], we introduce the families ${\mathrm{Per}}_1(\lambda)^{cm}$, the bifurcation loci within these curves, and the bifurcation measures $\mu^+_\lambda$ and $\mu^-_\lambda$. We prove the independence of the critical points (Corollary \[independence\]) and pose Question \[distinct supports\] about the bifurcation loci. In Section \[measures\], we give the proof of Theorem \[distinct measures\]. In Section \[homogeneous\], we prove that the measures $\mu^+_\lambda$ and $\mu^-_\lambda$ have continuous potentials on all of ${{\mathbb P}}^1$, assuming that $\lambda$ is not “too close" to a root of unity (Theorem \[convergence\]). In Section \[non-archimedean\], we prove a non-archimedean convergence statement, analogous to Theorem \[convergence\], for $\lambda\in{\overline{{{\mathbb Q}}}}$ that are not equal to roots of unity. In Section \[sets\], we introduce the homogeneous bifurcation sets and compute their homogeneous capacities. In Section \[equidistribution section\], we prove the needed equidistribution theorems, including Theorem \[equidistribution\]. In Section \[proof section\], we complete the proof of Theorem \[PCF maps\].
[**Acknowledgements.**]{} We would like to thank Ilia Binder, Dragos Ghioca, and Curt McMullen for helpful comments. We also thank Suzanne Boyd for help with her program Dynamics Explorer, used to generate all images in this article.
Bifurcation locus in the curve ${\mathrm{Per}}_1(\lambda)$ {#bifurcation}
==========================================================
The moduli space ${\mathrm{M}}_2$ is the space of conformal conjugacy classes of quadratic rational maps $f: {{\mathbb P}}^1_{{\mathbb C}}\to {{\mathbb P}}^1_{{\mathbb C}}$, where two maps are equivalent if they are conjugate by a Möbius transformation; see [@Milnor:quad; @Silverman]. In this section, we provide some basic results about the bifurcation locus within the curves ${\mathrm{Per}}_1(\lambda)$ in ${\mathrm{M}}_2$. By definition, ${\mathrm{Per}}_1(\lambda)$ is the set of conjugacy classes of quadratic rational maps with a fixed point of multiplier $\lambda$. In Milnor’s parameterization of ${\mathrm{M}}_2{\simeq}{{\mathbb C}}^2$, using the symmetric functions in the three fixed point multipliers, each ${\mathrm{Per}}_1(\lambda)$ is a line [@Milnor:quad Lemma 3.4]. For $\lambda =0$, ${\mathrm{Per}}_1(0)$ is the family of quadratic polynomials, usually parametrized by $f_t(z) = z^2 +t$ with $t\in{{\mathbb C}}$.
A number of results have appeared since [@Milnor:quad] that address features of the bifurcations within ${\mathrm{Per}}_1(\lambda)$. For example, when $|\lambda| < 1$, it is known that the bifurcation locus is homeomorphic to the boundary of the Mandelbrot set; this follows from the straightening theorem of [@Douady:Hubbard] (see the remark following Corollary 3.4 of [@Goldberg:Keen:shift]). See [@Petersen:elliptic; @Uhre:model; @Buff:Epstein:Ecalle] for more in the setting of $\lambda$ a root of unity. Berteloot and Gauthier have recently studied properties of the bifurcation current on ${\mathrm{M}}_2$ near infinity [@Berteloot:Gauthier].
Bifurcations. {#bifurcation definition}
-------------
Let $X$ be a complex manifold. A [*holomorphic family*]{} of rational maps parametrized by $X$ is a holomorphic map $$f: X\times{{\mathbb P}}^1\to {{\mathbb P}}^1.$$ We often write $f_t$ for the restriction $f(t, \cdot): {{\mathbb P}}^1\to {{\mathbb P}}^1$ for each $t\in X$. A holomorphic family $\{f_t, t\in X\}$ of rational functions of degree $d\geq 2$ is [*stable at $t_0\in X$*]{} if the Julia sets $J(f_t)$ are moving holomorphically in a neighborhood of $t_0$. In particular, $f_{t_0}|J(f_{t_0})$ is topologically conjugate to all nearby maps when restricted to their Julia sets (and the Julia sets are homeomorphic) [@Mane:Sad:Sullivan; @McMullen:CDR]. An equivalent characterization of stability, upon passing to a branched cover of $X$ where the critical points $c_1, c_2, \ldots, c_{2d-2}$ can be labelled holomorphically, is that the sequence of holomorphic maps $$\{t \mapsto f_t^n(c_i(t))\}$$ forms a normal family for each $i$ on some neighborhood of $t_0$. The failure of normality can be quantified with the construction of a positive $(1,1)$-current on the parameter space $X$, as follows.
Suppose that we can express $f_t$ in homogeneous coordinates, as a holomorphic family $$F_t: {{\mathbb C}}^2 \to {{\mathbb C}}^2$$ for $t\in X$. Assume we are given holomorphic functions $\tilde{c}_i: X\to {{\mathbb C}}^2\setminus\{(0,0)\}$ projecting to the critical points $c_i(t)\in{{\mathbb P}}^1$ of $f_t$. We define the [*bifurcation current*]{} of $c_i$ on $X$ by $$T_i := dd^c \left(\lim_{n\to\infty} \frac{1}{d^n} \log \| F_t^n(\tilde{c}_i(t)) \| \right) .$$ The current vanishes if and only if the family $\{t \mapsto f_t^n(c_i(t))\}$ is normal. In particular, $T_i=0$ for all $i$ if and only if the family is stable. In fact, the family $F_t$ and the functions $\tilde{c}_i$ can always be defined locally on $X$, after passing to a branched cover where the critical points can be labelled, and the current $T_i$ is independent of the choice of $\tilde{c}_i$ and the homogenization $F_t$. When the parameter space $X$ has dimension 1, note that the current $T_i$ is a measure (where $dd^c$ is simply the Laplacian), and we will refer to it as the [*bifurcation measure*]{}. (See [@D:current; @D:lyap; @Dujardin:Favre:critical].)
The [*bifurcation locus*]{} is the set of parameters in $X$ where $f_t$ fails to be stable. It coincides with the union of the supports of the bifurcation currents $T_1, \ldots, T_{2d-2}$. The following lemma is a straightforward application of Montel’s theorem; for a proof of Montel’s theorem, see [@Milnor:dynamics §3].
\[activity\] Let $f: X\times{{\mathbb P}}^1\to {{\mathbb P}}^1$ be a holomorphic family of rational functions of degree $>1$, with marked critical point $c: X\to{{\mathbb P}}^1$. Let $T$ be the bifurcation current of $c$, and assume that $T\not= 0$. Then there are infinitely many parameters $t\in X$ where $c(t)$ has finite orbit for $f_t$.
Fix $t_0\in {\operatorname{supp}}T$. Choose any repelling periodic cycle for $f_{t_0}$ of period $\geq 3$ that is not in the forward orbit of $c(t_0)$. By the Implicit Function Theorem, the repelling cycle persists in a neighborhood $U$ of $t_0$. If the orbit of the critical point $c(t)$ were disjoint from the cycle for all $t\in U$, then Montel’s Theorem would imply that $\{t\mapsto f^n_t(c(t))\}$ forms a normal family on $U$. This contradicts the fact that $t_0$ lies in the support of $T$. Consequently, there is a parameter $t_1\in U\setminus\{t_0\}$ where $c(t_1)$ is preperiodic for $f_{t_1}$. Shrinking the neighborhood $U$, we obtain an infinite sequence of such parameters converging to $t_0$.
\[polynomials\] For the family of quadratic polynomials, $f_t(z) = z^2 + t$, there is only one critical point (at $z=0$) inducing bifurcations. The associated bifurcation current defines a measure on the parameter space ($t\in{{\mathbb C}}$). It is equal, up to a normalization factor, to the harmonic measure supported on the boundary of the Mandelbrot set $\mathcal{M}$ [@D:current Example 6.1]. In this case, there is no need to pass to homogeneous coordinates; a potential function for the normalized bifurcation measure is given by $$\label{quadratic G}
G_{\mathcal{M}}(t) = \lim_{n\to\infty} \frac{1}{2^n} \log^+ |f_t^n(t)|$$ for $t\in{{\mathbb C}}$.
The bifurcation locus in the critically-marked curve. {#definitions}
-----------------------------------------------------
Fix a complex number $\lambda \not=0$, and set $$f_{\lambda,t}(z) = \frac{\lambda z}{z^2 + tz + 1}$$ for all $t\in{{\mathbb C}}$. (We will often write $f_t$ for $f_{\lambda,t}$ when the dependence on $\lambda$ is clear.) Then $f_t$ has critical points at $c_+(t) = +1$ and $c_-(t) = -1$ for all $t\in{{\mathbb C}}$. Since $f_t$ is conjugate to $f_{-t}$ by $z\mapsto -z$, the family $f_t$ parametrizes a degree-2 branched cover of the curve ${\mathrm{Per}}_1(\lambda) \subset {\mathrm{M}}_2$, which we denote by ${\mathrm{Per}}_1(\lambda)^{cm}$; the $cm$ in the superscript stands for “critically marked."
We define the bifurcation currents associated to the critical points $c_+$ and $c_-$ as in §\[bifurcation definition\]. As the parameter space is 1-dimensional, the currents are in fact measures; we denote these bifurcation measures by $\mu^+_\lambda$ and $\mu^-_\lambda$. The supports will be denoted by $${\mathrm{Bif}}^+ = {\operatorname{supp}}\mu^+_\lambda \qquad \mbox{and} \qquad {\mathrm{Bif}}^- = {\operatorname{supp}}\mu^-_\lambda.$$ These bifurcation measures have globally-defined potential functions. We set $$\label{F_t}
F_t(z_1, z_2) = (\lambda z_1 z_2, z_1^2 + t z_1 z_2 + z_2^2)$$ and $$\label{H}
H_\lambda^{\pm}(t) = \lim_{n\to\infty} \frac{1}{2^n} \log \| F_t^n(\pm 1, 1) \|,$$ so that $$\mu^+_\lambda = \frac{1}{2\pi} \Delta H^+_\lambda \qquad \mbox{and} \qquad \mu^-_\lambda = \frac{1}{2\pi} \Delta H^-_\lambda.$$ Since $f_{-t}(z) = -f_t(-z)$, we see that $$H_\lambda^-(t) = H_\lambda^+(-t)$$ and ${\mathrm{Bif}}^- = -{\mathrm{Bif}}^+$.
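The potentials $H^{\pm}_\lambda$ are straightforward to evaluate numerically. The sketch below (not the Dynamics Explorer program used to generate the images in this article) iterates $F_t$ with a sup-norm renormalization at each step, so that the limit in Eq. (\[H\]) is accumulated as a geometrically convergent sum; the value of $\lambda$, the grid, and the iteration depth are illustrative choices.

```python
# Escape-rate evaluation of H_lambda^+(t) = lim 2^{-n} log ||F_t^n(1, 1)||.
# F_t is homogeneous of degree 2, so renormalizing to sup-norm 1 at each step
# turns the limit into a geometrically convergent sum.
import numpy as np

def H_plus(lam, t, n_iter=60):
    z1, z2 = 1.0 + 0.0j, 1.0 + 0.0j         # lift (c_+, 1) = (1, 1); use (-1, 1) for H_lambda^-
    h = 0.0
    for n in range(1, n_iter + 1):
        z1, z2 = lam * z1 * z2, z1 * z1 + t * z1 * z2 + z2 * z2
        m = max(abs(z1), abs(z2))
        if m == 0.0:                         # orbit hit the point of indeterminacy
            return -np.inf
        h += np.log(m) / 2.0 ** n            # contribution of this step
        z1, z2 = z1 / m, z2 / m              # renormalize to sup-norm 1
    return h

# Sample H_lambda^+ on a grid for lambda = -4 (the region shown for that value);
# mu_lambda^+ is (1/2 pi) times the distributional Laplacian of this potential,
# and its support approximates Bif^+.  (Unoptimized: this double loop is slow.)
lam = -4.0
xs = np.linspace(-5.0, 5.0, 300)
ys = np.linspace(-5.0, 5.0, 300)
H = np.array([[H_plus(lam, x + 1j * y) for x in xs] for y in ys])
```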
The following proposition follows from the observations of Milnor in [@Milnor:quad].
\[compactness\] The bifurcation loci ${\mathrm{Bif}}^+$ and ${\mathrm{Bif}}^-$ are nonempty for all $\lambda \not=0$. They are compact in ${\mathrm{Per}}_1(\lambda)^{cm}$ if and only if $|\lambda| \not=1$ or $\lambda = 1$.
Moreover, when the bifurcation locus ${\mathrm{Bif}}= {\mathrm{Bif}}^+\cup {\mathrm{Bif}}^-$ is compact, the unbounded stable component consists of maps for which both critical points lie in the basin of an attracting (or parabolic, in the case of $\lambda=1$) fixed point.
The three fixed points of $f_t$ lie at $0$ and $$Z_{\pm}(t) = \frac{-t \pm \sqrt{t^2-4(1-\lambda)}}{2}.$$ The set of fixed point multipliers is $$\{ \lambda, (1- Z_\pm(t)^2)/\lambda \}.$$ For each fixed $\lambda$, there are well-defined branches of the square root for $|t| >>0$ so that $Z_\pm$ define analytic functions near infinity, with $Z_+(t) \to 0$ and $Z_-(t) \to \infty$ as $t\to \infty$. The set of fixed point multipliers converges to $\{\lambda, 1/\lambda, \infty\}$ as $t\to \infty$. (Compare [@Milnor:quad Lemma 4.1].)
We first observe that ${\mathrm{Bif}}^+$ and ${\mathrm{Bif}}^-$ are nonempty. Note that the two fixed points $Z_+(t)$ and $Z_-(t)$ must collide at $t = \pm 2 \sqrt{1-\lambda}$. Consequently, their multipliers are 1 at that point, while they cannot be persistently equal to 1. By the characterizations of stability [@McMullen:CDR Theorem 4.2], the parameters $t = \pm 2 \sqrt{1-\lambda}$ will lie in the bifurcation locus.
Suppose that $|\lambda|=1$ with $\lambda \not=1$. Then $$f_t'(Z_+(t)) = \frac{1}{\lambda} \left( 1 - \frac{(1-\lambda)^2}{t^2} + O\left(\frac{1}{t^4}\right) \right)$$ for $t$ large. Fixing any large value of $R>0$ and letting the argument of $t=Re^{i\theta}$ vary in $[0,2\pi]$, the absolute value of $(1 - (1-\lambda)^2/t^2)$ will fluctuate around 1. Consequently, the multiplier $f_t'(Z_+(t))$ will have absolute value 1 for some parameter $t$ with $|t| =R$, for all sufficiently large $R$. Again by the characterizations of stability, all such parameters will lie in the bifurcation locus. Consequently, ${\mathrm{Bif}}^+$ and ${\mathrm{Bif}}^-$ are unbounded.
For $|\lambda| \not=1$, $f_t$ has an attracting fixed point (of multiplier $\lambda$ or $\approx 1/\lambda$) for all $t$ large. Both critical points will lie in its basin of attraction for all $t$ large, demonstrating stability of the family $f_t$. To see this, we may place the three fixed points of $f_t$ at $\{0,1,\infty\}$ so that $f_t$ is conjugate to the rational function $$g_t(z) = z \frac{(1-\lambda)z + \lambda(1-\beta)}{\beta(1-\lambda)z + 1-\beta}.$$ where $\beta = \beta(t) = \beta(-t) \approx 1/\lambda$ is the multiplier of the fixed point at $\infty$. In this form, $g_t$ will converge (locally uniformly on $\mathbb{\widehat{C}}\setminus\{1\}$) to the linear map $z\mapsto \lambda z$ as $t\to \infty$. In particular, there is a neighborhood $U$ containing the attracting fixed point and the point $z = \lambda$ mapped compactly inside itself by $g_t$ for all $t$ large. On the other hand, we can explicitly compute the critical values $v_+(t), v_-(t)$ of $g_t$ and determine that $\lim_{t \to\infty} v_\pm(t) = \lambda$. Consequently, the critical points lie in the basin of attraction for all $t$ large enough, and the bifurcation locus must be bounded.
For $\lambda = 1$, it is convenient to conjugate $f_t$ by $1/(tz)$, to express it in the form $$g_t(z) = z + 1 + \frac{1}{t^2 z}$$ with a parabolic fixed point at $z=\infty$. In these coordinates, $g_t$ converges (locally uniformly on $\mathbb{\widehat{C}}\setminus\{0\}$) to the translation $z\mapsto z+1$ as $t\to \infty$. Again, we compute explicitly the critical values of $g_t$ and their limit as $t\to \infty$; in this case, they converge to the point $z=1$. As such, they both lie in the basin of the parabolic fixed point for $t$ large.
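As a quick sanity check of the fixed points and multipliers used in this proof, one can verify the formulas numerically at an arbitrary parameter pair (an illustrative script, not part of the argument):

```python
# Numerical check that z = 0, Z_+(t), Z_-(t) are the fixed points of
# f_t(z) = lambda z / (z^2 + t z + 1) and that the nonzero fixed points have
# multiplier (1 - Z^2)/lambda.  The values of lambda and t are arbitrary.
import numpy as np

lam, t = -4.0, 1.3 + 0.7j
f  = lambda z: lam * z / (z ** 2 + t * z + 1)
df = lambda z: lam * (1 - z ** 2) / (z ** 2 + t * z + 1) ** 2

disc = np.sqrt(t ** 2 - 4 * (1 - lam) + 0j)
for Z in ((-t + disc) / 2, (-t - disc) / 2):        # Z_+(t), Z_-(t)
    assert abs(f(Z) - Z) < 1e-12                    # fixed point
    assert abs(df(Z) - (1 - Z ** 2) / lam) < 1e-12  # multiplier (1 - Z^2)/lambda
assert abs(df(0.0) - lam) < 1e-12                   # multiplier lambda at z = 0
```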
Comparing the two bifurcation loci.
-----------------------------------
We begin with a simple observation.
\[bifset\] For ${\operatorname{Re}}\lambda>1$, the sets ${\mathrm{Bif}}^+$ and ${\mathrm{Bif}}^-$ intersect only at the two points $t = \pm 2\sqrt{1-\lambda}$. For $|\lambda|<1$, the sets ${\mathrm{Bif}}^+$ and ${\mathrm{Bif}}^-$ are disjoint.
\[b\] ![ The bifurcation loci in ${\mathrm{Per}}_1(2)^{cm}$. At left, an illustration of ${\mathrm{Bif}}^+$ where $|{\operatorname{Re}}t\, |, |{\operatorname{Im}}t\,| \leq 3$; the color shading records a rate of convergence of the critical point $c_+$ to an attracting cycle. In the middle, ${\mathrm{Bif}}^-$ in the same region. At right, the two images superimposed. By Lemma \[bifset\], ${\mathrm{Bif}}^+\cap{\mathrm{Bif}}^- = \{2i, -2i\}$.[]{data-label="L=2"}](QPer_2_plus1.png "fig:"){width="2.05in"} ![ The bifurcation loci in ${\mathrm{Per}}_1(2)^{cm}$. At left, an illustration of ${\mathrm{Bif}}^+$ where $|{\operatorname{Re}}t\, |, |{\operatorname{Im}}t\,| \leq 3$; the color shading records a rate of convergence of the critical point $c_+$ to an attracting cycle. In the middle, ${\mathrm{Bif}}^-$ in the same region. At right, the two images superimposed. By Lemma \[bifset\], ${\mathrm{Bif}}^+\cap{\mathrm{Bif}}^- = \{2i, -2i\}$.[]{data-label="L=2"}](QPer_2_minus1.png "fig:"){width="2.05in"} ![ The bifurcation loci in ${\mathrm{Per}}_1(2)^{cm}$. At left, an illustration of ${\mathrm{Bif}}^+$ where $|{\operatorname{Re}}t\, |, |{\operatorname{Im}}t\,| \leq 3$; the color shading records a rate of convergence of the critical point $c_+$ to an attracting cycle. In the middle, ${\mathrm{Bif}}^-$ in the same region. At right, the two images superimposed. By Lemma \[bifset\], ${\mathrm{Bif}}^+\cap{\mathrm{Bif}}^- = \{2i, -2i\}$.[]{data-label="L=2"}](QPer_2_both.png "fig:"){width="2.05in"}
For ${\operatorname{Re}}\lambda > 1$ and $t\not= \pm 2\sqrt{1-\lambda}$, $f_t$ has at least one attracting fixed point. The proof is immediate from the index formula for fixed point multipliers (see [@Milnor:dynamics]). For all such $t$ then, at least one of the critical points must be stable. This shows that ${\mathrm{Bif}}^+\cap {\mathrm{Bif}}^- \subset \{\pm 2\sqrt{1-\lambda}\}$. A straightforward calculation shows that the fixed point multipliers at $t=0$ are $\{\lambda, -1 + 2/\lambda, -1 + 2/\lambda\}$, so $f_0$ must have two distinct attracting fixed points whenever ${\operatorname{Re}}\lambda>1$. Consequently, $t=0$ cannot lie in the unbounded stable component (where there is a unique attracting fixed point by Proposition \[compactness\]). For topological reasons, then, and since ${\mathrm{Bif}}^+ = -{\mathrm{Bif}}^-$, the intersection of ${\mathrm{Bif}}^+$ and ${\mathrm{Bif}}^-$ must consist of at least two points, concluding the proof that ${\mathrm{Bif}}^+ \cap {\mathrm{Bif}}^- = \{\pm 2\sqrt{1-\lambda}\}$.
For $|\lambda|<1$, the point $0$ is an attracting fixed point for all $t$, and its immediate attracting basin must contain at least one critical point. Thus, for all $t$, at least one critical point remains in an attracting basin under perturbation, and so it is stable; this implies that ${\mathrm{Bif}}^+\cap {\mathrm{Bif}}^-=\emptyset$.
For certain values of $\lambda$, the bifurcation loci ${\mathrm{Bif}}^\pm$ are remarkably similar. Though ${\mathrm{Bif}}^+$ does not appear to be equal to ${\mathrm{Bif}}^-$ for any value of $\lambda$, the differences can be subtle. We include illustrations of ${\mathrm{Bif}}^\pm$ in ${\mathrm{Per}}_1(\lambda)^{cm}$ for three values of $\lambda$ in Figures \[L=2\]-\[L=-4\]. In Figure \[L=-4 measures\], we illustrate the distribution of the parameters where the critical points are periodic, with $\lambda = -4$; these parameters converge to the bifurcation measures by Theorem \[equidistribution\]. Theorem \[distinct measures\] states that the two measures $\mu^+_\lambda$ and $\mu^-_\lambda$ are distinct for all $\lambda$.
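For readers who wish to reproduce pictures in the spirit of Figures \[L=2\]-\[L=-4\], the following Python sketch shows one crude way such images can be generated: for each parameter $t$ on a grid it iterates the critical point $c_+=+1$ and records how long the orbit takes to settle (numerically) onto a short cycle, rendering the result as ASCII art rather than a grayscale image. The thresholds, iteration counts, and rendering are arbitrary choices and are not necessarily the procedure used for the figures.

```python
# Crude proxy for the shading in Figure [L=2]: darker characters mark parameters t
# where the orbit of c_+ = +1 under f_t(z) = LAM*z/(z^2 + t*z + 1) is slow to settle.
LAM = 2.0
MAX_ITER, MAX_PERIOD, TOL = 400, 8, 1e-9

def f(z, t):
    den = z * z + t * z + 1
    if abs(den) < 1e-15:
        raise ZeroDivisionError
    return LAM * z / den

def settle_time(t):
    """First iterate at which the orbit of +1 revisits a recent value (period <= MAX_PERIOD)."""
    z, history = 1.0 + 0j, []
    for n in range(MAX_ITER):
        try:
            z = f(z, t)
        except ZeroDivisionError:
            return MAX_ITER                  # orbit hit a pole; treat as unresolved
        if abs(z) > 1e8:
            z = complex(1e8)                 # crude guard against blow-up near poles
        for w in history[-MAX_PERIOD:]:
            if abs(z - w) < TOL:
                return n
        history.append(z)
    return MAX_ITER                          # no settling detected: near the bifurcation locus

chars = " .:-=+*#%@"                         # darker character = slower settling
for row in range(25):                        # the square |Re t|, |Im t| <= 3, as in Figure [L=2]
    im = 3 - 6 * row / 24
    line = ""
    for col in range(61):
        t = complex(-3 + 6 * col / 60, im)
        line += chars[min(len(chars) - 1, settle_time(t) // 40)]
    print(line)
```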
\[distinct supports\] Are the bifurcation loci ${\mathrm{Bif}}^+$ and ${\mathrm{Bif}}^-$ distinct in ${\mathrm{Per}}_1(\lambda)^{cm}$ for all $\lambda$? What explains their near-coincidence for parameters such as $\lambda = -4$?
\[h\] ![ The bifurcation loci in ${\mathrm{Per}}_1(1.1i)^{cm}$. At left, an illustration of ${\mathrm{Bif}}^+$ where $|{\operatorname{Re}}t\, |, |{\operatorname{Im}}t\,| \leq 6$; the color shading records a rate of convergence of the critical point $c_+$ to an attracting cycle. In the middle, ${\mathrm{Bif}}^-$ in the same region. At right, the two images superimposed. []{data-label="L=1.1i"}](QPer_1p1i_plus1.png "fig:"){width="2.05in"} ![](QPer_1p1i_minus1.png "fig:"){width="2.05in"} ![](QPer_1p1i_both.png "fig:"){width="2.05in"}
\[h\] ![ The bifurcation loci in ${\mathrm{Per}}_1(-4)^{cm}$. At left, an illustration of ${\mathrm{Bif}}^+$ where $|{\operatorname{Re}}t\, |, |{\operatorname{Im}}t\,| \leq 5$; the color shading records a rate of convergence of the critical point $c_+$ to an attracting cycle. In the middle, ${\mathrm{Bif}}^-$ in the same region. At right, the two images superimposed.[]{data-label="L=-4"}](QPer1_-4_plus1.png "fig:"){width="2.05in"} ![](QPer1_-4_minus1.png "fig:"){width="2.05in"} ![](QPer1_-4_both.png "fig:"){width="2.05in"}
\[h\] ![ At left, a plot of parameters $t$ such that $f_t^n(+1) = +1$ in ${\mathrm{Per}}_1(-4)^{cm}$, with $n\leq 5000$. By Theorem \[equidistribution\], these parameters are equidistributed with respect to $\mu^+_\lambda$ as $n\to \infty$. At right, the parameters where critical point $-1$ is periodic, equidistributed with respect to $\mu^-_\lambda$. []{data-label="L=-4 measures"}](QPer1_-4_bifmeasure_5000.png "fig:"){width="2.7in"} ![](QPer1_-4_bifmeasure_5000_rotate.png "fig:"){width="2.7in"}
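The periodic parameters plotted above can be computed as roots of polynomials: the coordinates of $F_t^n(1,1)$ are polynomials $P_n(t), Q_n(t)$ in $t$ (see §\[F\_n\]), and $f_t^n(+1)=+1$ exactly when $P_n(t)=Q_n(t)$. The sketch below (illustrative only; it reaches nothing like $n=5000$) solves this for small $n$ with floating-point root-finding.

```python
# Parameters t with f_t^n(+1) = +1, found as roots of P_n(t) - Q_n(t),
# where F_t^n(1,1) = (P_n(t), Q_n(t)) and F_t(z1,z2) = (lam*z1*z2, z1^2 + t*z1*z2 + z2^2).
import numpy as np

lam = -4.0
P, Q = np.array([lam]), np.array([1.0, 2.0])    # F_t(1,1) = (lam, t + 2), coefficients in t
for n in range(1, 6):
    sols = np.roots(np.polysub(P, Q))           # solutions of P_n(t) = Q_n(t)
    print("n =", n, ":", len(sols), "parameters; for example t =", sols[0])
    PQ = np.polymul(P, Q)                       # advance: (P, Q) <- (lam*P*Q, P^2 + t*P*Q + Q^2)
    P, Q = lam * PQ, np.polyadd(np.polymul(P, P),
                                np.polyadd(np.polymul([1.0, 0.0], PQ), np.polymul(Q, Q)))
```

For $n=1$ the single root is $t=\lambda-2$, the parameter at which $+1$ is fixed.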
Dynamical independence of the critical points.
----------------------------------------------
We conclude this section with the observation that the two critical points $c_+ = +1$ and $c_- = -1$ must be dynamically independent along ${\mathrm{Per}}_1(\lambda)^{cm}$. We define $${\mathrm{Preper}}^\pm_\lambda = \{t\in{\mathrm{Per}}_1(\lambda)^{cm}: \pm 1 \mbox{ has finite forward orbit for } f_t\}.$$
\[no synchrony\] For all $\lambda\in{{\mathbb C}}$, we have $${\mathrm{Preper}}^+_\lambda \not= {\mathrm{Preper}}^-_\lambda$$ in ${\mathrm{Per}}_1(\lambda)^{cm}$.
The case of $\lambda=0$ is easy. The curve ${\mathrm{Per}}_1(0)^{cm}$ has two irreducible components; each may be parameterized by $f_t(z) = z^2 + t$ with one critical point at $\infty$ and the other at 0. The critical point at $\infty$ is fixed for all $t$, while the orbit of $0$ is infinite for all but countably many parameters $t$.
For $0 < |\lambda|\leq 1$, a stronger statement is true: $${\mathrm{Preper}}^+_\lambda \cap {\mathrm{Preper}}^-_\lambda = \emptyset$$ in ${\mathrm{Per}}_1(\lambda)^{cm}$. Indeed, for every $f \in {\mathrm{Per}}_1(\lambda)^{cm}$, at least one critical point must have infinite forward orbit, as it is attracted to (or accumulates upon) the fixed point with multiplier $\lambda$ (or on the boundary of the Siegel disk in case the fixed point is of Siegel type). See, for example, [@Milnor:dynamics Corollaries 14.4 and 14.5].
For the remainder of this proof, assume that $|\lambda|>1$ and that ${\mathrm{Preper}}^+_\lambda = {\mathrm{Preper}}^-_\lambda$. Lemma \[activity\] shows that ${\mathrm{Bif}}^+$ is contained in the set of accumulation points of ${\mathrm{Preper}}^+_\lambda$. But, in fact, the characterizations of stability (as in [@McMullen:CDR Chapter 4]) imply that elements of ${\mathrm{Preper}}^+_\lambda$ cannot accumulate in a stable region. So we have ${\mathrm{Bif}}^+ = {\mathrm{Bif}}^-$.
As before, parameterize ${\mathrm{Per}}_1(\lambda)^{cm}$ as $$f_t(z) = \frac{\lambda z}{z^2 + tz + 1}.$$ Recall from Proposition \[compactness\] that $f_t$ has an attracting fixed point for all $t$ near $\infty$, with multiplier converging to $1/\lambda$ as $t\to \infty$. Moreover, both critical points lie in its basin of attraction for all $t$ in the unbounded stable component. In particular, each $f_t$ in the unbounded stable component has a unique attracting fixed point.
The multiplier of the unique attracting fixed point of $f_t$ defines a holomorphic function from the unbounded stable component to the unit disk. Recall that the fixed-point multiplier cannot be equal to $1/\lambda$ for any $f_t$, and it will converge to $1/\lambda$ if and only if $t\to\infty$ in ${\mathrm{Per}}_1(\lambda)^{cm}$ [@Milnor:quad]. Moreover, since ${\mathrm{Bif}}^+ = {\mathrm{Bif}}^-$, the multiplier must converge to 1 in absolute value as $t$ tends to the bifurcation locus. Consequently, the multiplier of the attracting fixed point determines a proper holomorphic map from the unbounded stable component to the unit disk punctured at $1/\lambda$. It follows that each preimage of the line segment $[0, 1/\lambda)$ defines a path from a parameter $t_0$ at which the attracting fixed point is superattracting (multiplier $0$) to infinity in this unbounded stable component. In fact, $t_0 = \lambda - 2$ is the unique parameter where the critical point $+ 1$ is fixed (and similarly, $2-\lambda$ is the unique parameter at which $-1$ is fixed), so $f_{t_0}$ is conjugate to a polynomial; we have just shown that $t_0$ lies in this unbounded stable component.
As the orbit of the critical point $-1$ for $f_{t_0}$ must converge towards the fixed critical point $+1$, we see that $-1$ has infinite orbit. This contradicts our assumption that ${\mathrm{Preper}}^+_\lambda = {\mathrm{Preper}}^-_\lambda$ and completes the proof.
As an immediate corollary of Proposition \[no synchrony\], we see that the two critical points are dynamically independent on ${\mathrm{Per}}_1^{cm}(\lambda)$ for any $\lambda\not=0$, in the sense of critical orbit relations as formulated in [@D:stableheight Question 6.4] (see also [@BD:polyPCF §1.4]).
\[independence\] Fix $\lambda\in{{\mathbb C}}^*$. The critical points $c_+ = +1$ and $c_- = -1$ cannot satisfy any dynamical relations along ${\mathrm{Per}}_1^{cm}(\lambda)$. In particular, for each pair of integers $n, m\geq 0$, there exists $f_t\in {\mathrm{Per}}_1(\lambda)^{cm}$ so that $f_t^n(c_+) \not= f_t^m(c_-)$.
Dynamical dependence of points $c_+$ and $c_-$, in the sense of [@D:stableheight Question 6.4], means that there exist rational functions $A_t$ and $B_t$, commuting with $f_t$ for all $t\in {\mathrm{Per}}_1^{cm}(\lambda)$, so that $A_t(c_+) = B_t(c_-)$ for all $t$. This includes orbit relations such as $f_t^n(c_+) = f_t^m(c_-)$ for all $t$. Dependence implies that $c_+$ is preperiodic for $f_t$ if and only if $c_-$ is preperiodic for $f_t$. This contradicts Proposition \[no synchrony\].
In contrast with Corollary \[independence\], conditions on the multipliers can and do impose relations between the critical points in other settings. For example, if we look at conjugacy classes $f\in M_2$ with two distinct period-3 cycles of the same multiplier, then we obtain the automorphism locus $\mathcal{A}_2$ [@Berker:Epstein:Pilgrim Theorem 3.1]. The family $\mathcal{A}_2$ is given by $f_{\lambda,0}(z) = \lambda z /(z^2 + 1)$, for parameter $\lambda\in{{\mathbb C}}^*$, with the automorphism $A(z) = -z$ for all $\lambda$; the critical points (at $\pm 1$) and their orbits are symmetric by $A$, and thus they define the same bifurcation locus and equal bifurcation measures.
The bifurcation measures are distinct {#measures}
=====================================
In this section, we provide a proof of Theorem \[distinct measures\]. Recall the definitions from §\[definitions\].
Potential functions for the bifurcation measures.
-------------------------------------------------
Recall the definitions of the measures $\mu^{\pm}_\lambda$ and their potential functions $H_\lambda^{\pm}$ from §\[definitions\]. We begin by showing that if the two bifurcation measures were to coincide, their potential functions would have to be equal. The first lemma controls the growth of $H_\lambda^{\pm}$. The lower bound will be used again in the proof of Theorem \[equidistribution\]. We will work with the norm $$\|(z_1, z_2)\| = \max\{|z_1|, |z_2|\}$$ on ${{\mathbb C}}^2$.
\[H bound\] For each $\lambda\neq 0$, there are constants $c, C >0$, such that $$c |t|^{-1} \leq \frac {\| F_t (z_1,z_2) \|}{\|(z_1,z_2)\|^2} \leq C |t|$$ for all $|t|\geq 1$ and all $(z_1, z_2)\neq (0,0)$. Consequently, $$H_\lambda^\pm(t) = O(\log |t|)$$ as $t\to \infty$.
The upper bound is immediate from the expression $F_t(z_1, z_2) = (\lambda z_1 z_2, z_1^2 + t z_1 z_2 + z_2^2)$. We may set $$C = \max\{|\lambda|, 3\}.$$ For the lower bound, by the symmetry and homogeneity of $F_t$, we may assume that $z_2=1$ and $|z_1|\leq 1$. Then $\|(z_1,z_2)\| = 1$, and we shall estimate the norm of $sF_{1/s}(z_1,1) = (s\lambda z_1, sz_1^2 + s + z_1)$ with $|s| \leq 1$. Let $$c = \min\{|\lambda|/2, 1/4\}.$$ For each $s$ with $|s| \leq 1$, either $|s\lambda z_1| \geq c|s|^2$, or $|z_1| < |s|/2$ in which case, $$|sz_1^2+s+z_1| \geq |s| - |s|/2 - |s|^3/4 \geq |s|/4 \geq c |s|^2.$$ Consequently, $\|F_{1/s}(z_1,z_2)\| \geq c|s|$ and the lower bound is proved.
By the identity $$H_\lambda^+(t)=\sum_{i=2}^{+\infty}\frac{1}{2^i}\log\Bigg(\frac{\|F_t^i(1,1)\|}{\|F_t^{i-1}(1,1)\|^2}\Bigg)
+\frac{1}{2}\log\|F_t(1,1)\|,$$ we have that $|H_\lambda^+(t)|<3\log|t|$ when $t$ is large. The same holds for $H_\lambda^-$, since $H_\lambda^-(t) = H_\lambda^+(-t)$.
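The limit defining $H_\lambda^\pm$ is easy to approximate numerically by accumulating the terms of the identity above while renormalizing the homogeneous orbit at each step. The Python sketch below does this for sample values of $\lambda$ and $t$ (all choices are ours and purely illustrative); the printed ratios illustrate the logarithmic growth in the statement of the lemma.

```python
# Approximate H_lambda^+(t) = lim 2^{-n} log ||F_t^n(1,1)|| by summing the terms of the
# telescoping identity, renormalizing the orbit so it never overflows.
from math import log

def H_plus(lam, t, n_iter=60):
    z1, z2 = 1 + 0j, 1 + 0j                 # the point (+1, 1); use (-1, 1) for H^-
    total = 0.0
    for k in range(1, n_iter + 1):
        z1, z2 = lam * z1 * z2, z1 * z1 + t * z1 * z2 + z2 * z2    # one application of F_t
        r = max(abs(z1), abs(z2))
        total += log(r) / 2.0 ** k          # the k-th term of the telescoping identity
        z1, z2 = z1 / r, z2 / r             # renormalize
    return total

lam = 2.0
for t in (10.0, 100.0, 1000.0, 10000.0):
    print(t, H_plus(lam, t), H_plus(lam, t) / log(t))   # the ratio stays bounded, as in the lemma
```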
\[H1=H2\] For any $\lambda\neq0$, we have $$\mu^+_\lambda=\mu^-_\lambda \Longrightarrow H_\lambda^+=H_\lambda^-.$$
Let $h(t)=H_\lambda^+(t) - H_\lambda^-(t)$. If $\mu^+_\lambda = \mu^-_\lambda$, then $\Delta h=0$ in $\mathbb{C}$. This implies that $h$ is harmonic. By Lemma \[H bound\], we have $h(t)=O(\log|t|)$ for $t$ near $\infty$. Therefore $h$ is constant. Moreover, the symmetry $H_\lambda^-(t) = H_\lambda^+(-t)$ gives $h(-t)=-h(t)$, so this constant must be $0$; that is, $H_\lambda^+ = H_\lambda^-$.
Showing that the potentials differ at a single point
----------------------------------------------------
As a consequence of Lemma \[H1=H2\], it suffices to show that $H^+$ and $H^-$ differ at a single point. For $t=\lambda-2$, we may compute the values. Observe that $f_{\lambda-2}$ is conjugate to a polynomial, as $f_{\lambda-2}(1) = 1$.
\[H12\] We have $$H_\lambda^+(\lambda-2)=\log|\lambda| \quad \mbox{ and } \quad H_\lambda^-(\lambda-2)=\frac{1}{2}G_{\mathcal{M}}(c(\lambda))+\log 2,$$ where $G_{\mathcal{M}}$ is defined in Example \[quadratic G\] and $c(\lambda)=\frac{\lambda}{2}-\frac{\lambda^2}{4}$.
The map $F_t$ defined in (\[F\_t\]) for $t=\lambda-2$ satisfies $F_{\lambda-2}(1/\lambda, 1/\lambda)=(1/\lambda,1/\lambda)$. Therefore, $$H_\lambda^+(\lambda-2)= \lim_{n\to\infty} \frac{1}{2^n} \log\|F_{\lambda-2}^n(1/\lambda, 1/\lambda)\| + \log|\lambda|=\log|\lambda|.$$
For $t=\lambda-2$, recall that $f_{\lambda-2}$ is conjugate to a polynomial $$\label{quadratic poly}
q_\lambda(z) = \lambda z(z+1).$$ Set $$A(z_1,z_2)=(z_1,z_2-z_1) \quad \mbox{ and } \quad \tilde{F}_{\lambda,t}=A\circ F_{\lambda,t}\circ A^{-1}.$$ When $t=\lambda-2$, we have $$\tilde{F}_{\lambda,\lambda-2}(z_1, z_2)=(\lambda z_1(z_1+z_2), z_2^2).$$ Note that $A(-1,1)=2(-1/2,1), \ \tilde{F}_{\lambda, \lambda-2}^n(A(-1,1))=2^{2^n}(q_\lambda^n(-1/2),1)$, so that $$\label{H2-}
\begin{array}{lll}
H_\lambda^-(\lambda-2)
&=\lim_{n\rightarrow\infty}2^{-n}\log\|F_{\lambda,\lambda-2}^n(-1,1)\|\\[6pt]
&=\lim_{n\rightarrow\infty}2^{-n}\log\|A^{-1}\circ \tilde{F}_{\lambda,\lambda-2}^n(A(-1,1))\|\\[6pt]
&=\lim_{n\rightarrow\infty}2^{-n}\log\|\tilde{F}_{\lambda,\lambda-2}^n(A(-1,1))\|\\[6pt]
&=\lim_{n\rightarrow\infty}2^{-n}\log^+|q_\lambda^n(-1/2)|+\log 2.
\end{array}$$ In this way, we express $H^-_\lambda(\lambda-2)$ in terms of the escape rate of the critical point $-1/2$ of the polynomial $q_\lambda$. We may conjugate $q_\lambda$ to the quadratic polynomial $$p_{c(\lambda)}(z) = z^2+c(\lambda), \qquad \mbox{with } c(\lambda)=\frac{\lambda}{2}-\frac{\lambda^2}{4}.$$ Then we get $$H^-_{\lambda}(\lambda-2)=\frac{1}{2}G_{\mathcal{M}}(c(\lambda))+\log 2.$$
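The identity just proved is easy to test numerically. The sketch below (illustrative only) compares the escape rate of the critical point $-1/2$ of $q_\lambda$ with $\frac{1}{2}G_{\mathcal{M}}(c(\lambda))$ for the sample value $\lambda=-4$; the helper `escape_rate`, the truncation radius, and the tail correction by the leading coefficient are our own choices.

```python
# Check: lim 2^{-n} log^+|q_lam^n(-1/2)| equals (1/2) G_M(c(lam)), with
# q_lam(z) = lam*z*(z+1) and c(lam) = lam/2 - lam^2/4.
from math import log

def escape_rate(step, z, lead, max_iter=200, R=1e100):
    """Approximate lim 2^{-n} log|z_n| for z_{k+1} = step(z_k), where step(w) ~ lead*w^2
    for large |w|; the `lead` factor supplies the tail of the limit once |z| > R."""
    n = 0
    while abs(z) <= R and n < max_iter:
        z, n = step(z), n + 1
    return (log(abs(z)) + log(abs(lead))) / 2.0 ** n if abs(z) > R else 0.0

lam = -4.0
c = lam / 2 - lam ** 2 / 4                                     # c(lambda) = -6
lhs = escape_rate(lambda z: lam * z * (z + 1), -0.5, lam)      # escape rate of -1/2 under q_lam
rhs = 0.5 * escape_rate(lambda z: z * z + c, c, 1.0)           # (1/2) G_M(c(lambda))
print(lhs, rhs)          # the two values agree; adding log 2 gives H^-_lambda(lambda - 2)
```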
Now we are ready to compare $H^+_\lambda(\lambda-2)$ and $H^-_\lambda(\lambda-2)$.
\[H1<=H2\] For any $\lambda\neq 0$ and ${\operatorname{Re}}\lambda\leq 1$, we have $$\label{H1 leq H2}
H_\lambda^+(\lambda-2)\leq H_\lambda^-(\lambda-2),$$ with equality if and only if $\lambda=-2$.
By Lemma \[H12\], we need to show that $$G_{\mathcal{M}}(c(\lambda))\geq 2\log|\lambda/2|,$$ for $\lambda\neq 0$ and ${\operatorname{Re}}\lambda\leq 1$. The proof relies on the following two claims:
[*Claim 1: If ${\operatorname{Re}}\lambda=1$, we have $$\label{hexi111}
G_{\mathcal{M}}(c(\lambda))> 2\log|\lambda/2|.$$*]{}
[*Claim 2: For $|\lambda|=2$, ${\operatorname{Re}}\lambda<1$, the parameter $c(\lambda)=\frac{\lambda}{2}-\frac{\lambda^2}{4}$ lies in the Mandelbrot set $\mathcal{M}$ if and only if $\lambda=-2$.*]{}
Lemma \[H1<=H2\] follows easily from Claims 1 and 2. Indeed, for $|\lambda|<2$, we have $G_{\mathcal{M}}(c(\lambda))\geq 0>2\log|\lambda/2|$. By Claim 2, for $|\lambda|=2$, ${\operatorname{Re}}\lambda<1$ and $\lambda\neq-2$, we have $G_{\mathcal{M}}(c(\lambda))>0= 2\log|\lambda/2|$, and for $\lambda=-2$, $G_{\mathcal{M}}(c(\lambda))=G_{\mathcal{M}}(-2)=0= 2\log|\lambda/2|$. By Claim 1, if ${\operatorname{Re}}\lambda=1$, we have $ G_{\mathcal{M}}(c(\lambda))> 2\log|\lambda/2|$. It follows from Claim 2 that $G_{\mathcal{M}}(c(\lambda))$ is harmonic in the region $\Omega=\{{\operatorname{Re}}\lambda<1, |\lambda|>2\}$. Observe that when $\lambda\rightarrow\infty$ in $\Omega$, $$G_{\mathcal{M}}(c(\lambda))-2\log|\lambda/2|\rightarrow 0.$$ By the minimum principle applied to the harmonic function $G_{\mathcal{M}}(c(\lambda))-2\log|\lambda/2|$ on $\Omega$, we have $ G_{\mathcal{M}}(c(\lambda))> 2\log|\lambda/2|$ in $\Omega$. The conclusion then follows.
[*Proof of Claim 1.*]{} For ${\operatorname{Re}}\lambda=1$, we have $2-\lambda=\overline{\lambda}$ and $c(\lambda)=|\lambda|^2/4$. It is equivalent to show that $G_\mathcal{M}(c)>\log c$ when $c>1/4$ and $c\in\mathbb{R}$. For $p_c(z)=z^2+c$ with $c>1/4$, we have $p_c(c)=c^2+c$ and $p_c^{n}(c)\geq (c^2+c)^{2^{n-1}}$. Consequently, $$G_{\mathcal{M}}(c)\geq \lim_{n\rightarrow\infty} 2^{-n}\log (c^2+c)^{2^{n-1}}=\frac{1}{2}\log(c^2+c)>\log c.$$
[*Proof of Claim 2.*]{} Let $p_c(z)=z^2+c$. Recall that the Mandelbrot set $\mathcal{M}$ can be defined by $$\mathcal{M} = \{c\in {{\mathbb C}}: |p_c^n(0)|\leq 2 \textup{ for any $n\geq 1$}\}.$$ In order to show $c(\lambda)\notin \mathcal{M}$ when $|\lambda|=2$, ${\operatorname{Re}}\lambda<1$, by the above definition, it suffices to show that $$\label{mandelbrot}
|p_{c(\lambda)}^2(0)|=|(c(\lambda))(c(\lambda)+1)|>2.$$ Write $\lambda=2(\cos\theta+i\sin\theta)$; the conditions $|\lambda|=2$ and ${\operatorname{Re}}\lambda<1$ mean $\cos\theta\in [-1, 1/2)$. A direct computation gives $$|(c(\lambda))(c(\lambda)+1)|=\sqrt{2(5-5\cos\theta-4\cos^2\theta+4\cos^3\theta)}.$$ Setting $u=\cos\theta\in [-1, 1/2]$, the function $g(u)=5-5u-4u^2+4u^3$ attains its minimum on $[-1,1/2]$ only at the endpoints, where $g(-1)=g(1/2)=2$. Excluding $\lambda=-2$ (that is, $u=-1$), we have $u\in(-1,1/2)$ and $g(u)>2$, so the inequality (\[mandelbrot\]) holds; i.e., $c(\lambda)$ is not in the Mandelbrot set.
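The computation in Claim 2 is elementary to check by machine; the following sketch (not part of the proof) scans $g$ on $[-1,1/2]$ and compares the two sides of the displayed identity at a sample angle $\theta$.

```python
# Check of Claim 2: |c(lam)(c(lam)+1)| = sqrt(2*g(cos(theta))) on |lam| = 2, and
# g >= 2 on [-1, 1/2] with equality only at the endpoints.
from math import cos, sin, sqrt

def g(u):
    return 5 - 5 * u - 4 * u ** 2 + 4 * u ** 3

worst = min(g(-1 + 1.5 * k / 10**5) for k in range(10**5 + 1))   # scan u in [-1, 1/2]
print("min of g on [-1, 1/2]:", worst)                           # = 2, at u = -1 and u = 1/2

theta = 2.2                                   # any angle with pi/3 < theta < 5*pi/3, so Re(lam) < 1
lam = 2 * complex(cos(theta), sin(theta))
c = lam / 2 - lam ** 2 / 4
print(abs(c * (c + 1)), sqrt(2 * g(cos(theta))))                 # the two expressions agree
```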
Finally, we treat the case of $\lambda = -2$.
\[QPer1(-2)\] For $\lambda=-2$, the bifurcation sets ${\mathrm{Bif}}^+$ and ${\mathrm{Bif}}^-$ are not equal.
For $\lambda=-2$ and $t=2\sqrt{3}$, the map $f_{\lambda,t}=f_{-2,2\sqrt{3}}$ has a fixed point at $z=-\sqrt{3}$ with multiplier $1$. For all $t> 2\sqrt{3}$, one of the fixed points is attracting. Using Proposition \[compactness\], to show the bifurcation sets are different, it suffices to show the existence of $t_0>2\sqrt{3}$ so that $f_{-2, t_0}$ has a second attracting cycle of period $>1$. In that case, the two critical points cannot both lie in the basin of the attracting fixed point.
For $t> 2\sqrt{3}$, consider the first three iterations of $1$ under $f_{-2,t}$ $$1\mapsto \frac{-2}{2+t}\mapsto\frac{8+4t}{8-t^2}\mapsto f_{-2,t}^3(1)=\frac{-8(2+t)(8-t^2)}{(8-t^2)^2+4t(t+2)(8-t^2)+16(2+t)^2}.$$ To see $f_{-2, t}^3(1)=1$ has a solution for $t>2\sqrt{3}$, define $$\begin{aligned}
\ell(t)&=&(8-t^2)^2+4t(t+2)(8-t^2)+16(2+t)^2+8(2+t)(8-t^2)\\
&=&16(2+t)^2+4(2+t)^2(8-t^2)+(8-t^2)^2.\end{aligned}$$ Note that $\ell(2\sqrt{3})=16>0$ while $\lim_{t\rightarrow+\infty}\ell(t)=-\infty$; by the intermediate value theorem, $\ell(t_0)=0$ for some $t_0>2\sqrt{3}$. Since $f_{-2, t_0}(1)\neq1$ and the period of $1$ must divide $3$, the critical point $1$ is periodic of exact period $3$ for $f_{-2,t_0}$, so its cycle is a superattracting cycle of period $3$.
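Numerically, the parameter $t_0$ is easy to locate. The sketch below (illustrative only; the bracketing interval and iteration counts are arbitrary) finds a sign change of $\ell$ by bisection and then displays the resulting superattracting $3$-cycle of the critical point $+1$.

```python
# Locate t_0 > 2*sqrt(3) with ell(t_0) = 0 and display the 3-cycle of +1 for f_{-2, t_0}.
from math import sqrt

def ell(t):
    return 16 * (2 + t) ** 2 + 4 * (2 + t) ** 2 * (8 - t * t) + (8 - t * t) ** 2

def f(z, t, lam=-2.0):
    return lam * z / (z * z + t * z + 1)

a, b = 2 * sqrt(3), 10.0          # ell(a) = 16 > 0 and ell(b) < 0, so a root lies between
for _ in range(200):
    m = 0.5 * (a + b)
    a, b = (m, b) if ell(m) > 0 else (a, m)
t0 = 0.5 * (a + b)
print("t0 =", t0)

orbit = [1.0]
for _ in range(3):
    orbit.append(f(orbit[-1], t0))
print(orbit)    # the orbit returns (numerically) to +1 after three steps; note f(1) != 1
```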
Proof of Theorem \[distinct measures\].
---------------------------------------
First, suppose $|\lambda|<1$ or ${\operatorname{Re}}\lambda>1$. By Lemma \[bifset\], the two bifurcation sets ${\mathrm{Bif}}^+$ and ${\mathrm{Bif}}^-$ are not equal. As the supports of $\mu^+_\lambda$ and $\mu^-_\lambda$ are exactly ${\mathrm{Bif}}^+$ and ${\mathrm{Bif}}^-$, it follows that $\mu^+_\lambda\neq \mu^-_\lambda$.
Second, if $\lambda=-2$, then by Proposition \[QPer1(-2)\] we know ${\mathrm{Bif}}^+ \neq {\mathrm{Bif}}^-$, which implies $\mu^+_\lambda \neq \mu^-_\lambda$.
Finally, assume ${\operatorname{Re}}\lambda\leq 1$ and $\lambda\neq 0, -2$ and suppose that $\mu^+_\lambda=\mu^-_\lambda$. Then by Lemma \[H1=H2\], we have $H_\lambda^+(\lambda-2)=H_\lambda^-(\lambda-2)$. However, this contradicts Lemma \[H1<=H2\].
Homogeneous potential functions {#homogeneous}
===============================
In this section, we study the potential functions $H_\lambda^+(t)$ and $H_\lambda^-(t)$ of (\[H\]) in more detail. From Lemma \[H bound\] we know that $H_\lambda^\pm(t) = O(\log|t|)$ as $t\to \infty$. Here, we refine this estimate and prove Theorem \[convergence\], the first step in our proof of Theorem \[equidistribution\].
The homogeneous potential functions on parameter space. {#F_n}
-------------------------------------------------------
Fix $\lambda\not=0$. Working in homogeneous coordinates, we write $$F_t(z_1,z_2) = (\lambda z_1 z_2, z_1^2 + t z_1 z_2 + z_2^2)$$ for a lift of $f_{\lambda,t}$ to ${{\mathbb C}}^2$, with $z = z_1/z_2$. We will also work in homogeneous coordinates over the parameter space. Consider the two sequences of maps, $$\label{F_n plus}
F_n^+(t_1,t_2) := t_2^{2^{n-1}} F^n_{t_1/t_2}(+ 1, 1),$$ and $$\label{F_n minus}
F_n^-(t_1,t_2) := t_2^{2^{n-1}} F^n_{t_1/t_2}(-1, 1),$$ for $(t_1, t_2) \in {{\mathbb C}}^2$.
\[convergence\] The maps $F_n^\pm$ are homogeneous polynomial maps in $(t_1,t_2)$ of degree $2^{n-1}$ with nonzero resultants. For each $\lambda\not=0$ such that $$\gamma(\lambda)=\frac{1}{2}\sum_{i=1}^{+\infty}\frac{1}{2^i}\log|1+\lambda+\cdots+\lambda^{i}|$$ converges, the limits $$\lim_{n\to\infty} \frac{1}{2^{n-1}} \log \| F_n^\pm(t_1,t_2) \|$$ converge locally uniformly on ${{\mathbb C}}^2\setminus\{(0,0)\}$ to continuous functions $G^\pm$ satisfying $$G^\pm(t_1,t_2) = \begin{cases}
2H^\pm_\lambda(t_1/t_2)+\log |t_2| &\text{ if } t_2\neq0,\\
\log|t_1|+ \gamma(\lambda) &\text{ if } t_2=0.
\end{cases}$$
It is easy to see that $\gamma(\lambda)$ is finite for all $\lambda\in{{\mathbb C}}$ with $|\lambda|\not=1$ and for $\lambda=1$. In the next section, we observe that it is finite for all algebraic numbers $\lambda$ that are not roots of unity.
For the proof of Theorem \[convergence\], it suffices to consider the maps $F_n^+$ and the function $G^+$; the results for $F_n^-$ and $G^-$ follow by symmetry. Define polynomials $P_n(t)$ and $Q_n(t)$ by $$F_{t}^n(1,1) = (P_n(t), Q_n(t)).$$ One may verify by induction that the degree of $P_n$ is $2^{n-1}-1$, and the degree of $Q_n$ is $2^{n-1}$. This shows that $F_n^+$ is polynomial in $(t_1,t_2)$. Also, since $F_t^{-1}\{(0,0)\} = \{(0,0)\}$ for all $t\in{{\mathbb C}}$, we see that $P_n$ and $Q_n$ have no common roots. Thus, $F_n^+$ has nonzero resultant in $(t_1,t_2)$.
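These degree and resultant claims are easy to confirm symbolically for small $n$; the following sympy sketch does so for the sample value $\lambda=-4$ (the choice of $\lambda$ and the range of $n$ are arbitrary, and this is of course no substitute for the induction).

```python
# Spot check: deg P_n = 2^{n-1} - 1, deg Q_n = 2^{n-1}, and P_n, Q_n have no common root.
import sympy as sp

t = sp.symbols('t')
lam = sp.Integer(-4)                             # sample value; Res(F_1) = -lam is nonzero by hand
P, Q = lam, t + 2                                # (P_1, Q_1), from F_t(1,1) = (lam, t + 2)
for n in range(2, 6):
    P, Q = sp.expand(lam * P * Q), sp.expand(P ** 2 + t * P * Q + Q ** 2)
    assert sp.degree(P, t) == 2 ** (n - 1) - 1
    assert sp.degree(Q, t) == 2 ** (n - 1)
    assert sp.resultant(P, Q, t) != 0            # P_n and Q_n share no root, so Res(F_n) != 0
print("degree and resultant checks pass for n = 2, ..., 5")
```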
For the convergence statement, note that standard arguments from complex dynamics imply that the convergence is uniform away from $t_2 = 0$. In fact, the escape-rate function for $F_t$ will be continuous in both the dynamical variable $(z_1,z_2)$ and the parameter $t$ for any holomorphic family [@Hubbard:Papadopol; @Fornaess:Sibony]. It follows immediately from the definitions that $$G^+(t_1,t_2) = 2 H^+_\lambda(t_1/t_2) + \log|t_2|$$ whenever $t_2\not=0$.
The remainder of this section is devoted to the proof of uniform convergence near $t_2=0$. The proof of Theorem \[convergence\] will be complete once we have proved Lemmas \[upper bound by epsilon\] and \[lower bound by epsilon\] below. We make an effort to include all details, especially the steps that will be repeated in the nonarchimedean setting in the following section.
Convergence near $t_2=0$.
-------------------------
Throughout this subsection, we set $t_1 =1$ and $t_2 = s$. We have $$F_{n+1}^+(1, s) = F_{1/s} (F_n^+(1, s)).$$
We begin by looking at the coefficients of $F_n^+(1,s)$. Write $$\label{coefficients}
F_n^+(1,s)=(sB_n(\lambda)+s^2A_n(\lambda)+O(s^3), C_n(\lambda)+sD_n(\lambda)+O(s^2)).$$ Note that $B_1(\lambda)=\lambda$, $C_1(\lambda)=1$, and $$B_{n+1}(\lambda)=\lambda B_n(\lambda)C_n(\lambda) \quad\mbox{ and }\quad C_{n+1}(\lambda)=C_n(\lambda)(B_n(\lambda)+C_n(\lambda))$$ for all $n\geq 1$. By induction, we obtain explicit expressions $$\begin{aligned}
\label{B coeff}
B_n(\lambda)&=& \lambda^n(1+\lambda)^{2^{n-3}}(1+\lambda+\lambda^2)^{2^{n-4}}\cdots(1+\lambda+\cdots+\lambda^{n-2})^{2^0} \\
\label{C coeff}
C_n(\lambda) &=& B_n(\lambda)(1+\lambda+\cdots+\lambda^{n-1})/\lambda^n\end{aligned}$$ for all $n\geq 3$.
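The closed-form expressions for $B_n$ and $C_n$ can be checked against the recursion symbolically; the sketch below does this for small $n$ (purely as a sanity check; the induction above is the actual proof).

```python
# Verify B_n = lam^n (1+lam)^{2^{n-3}} ... (1+...+lam^{n-2}) and C_n = B_n (1+...+lam^{n-1})/lam^n
# against B_{n+1} = lam*B_n*C_n, C_{n+1} = C_n*(B_n + C_n).
import sympy as sp
from functools import reduce

lam = sp.symbols('lam')
geo = lambda i: sum(lam ** j for j in range(i + 1))          # 1 + lam + ... + lam^i
B, C = lam, sp.Integer(1)                                    # B_1 = lam, C_1 = 1
for n in range(2, 8):
    B, C = sp.expand(lam * B * C), sp.expand(C * (B + C))    # the recursion above
    if n >= 3:
        B_formula = lam ** n * reduce(lambda a, b: a * b,
                                      [geo(i) ** (2 ** (n - 2 - i)) for i in range(1, n - 1)])
        assert sp.expand(B - B_formula) == 0
        assert sp.expand(C - B_formula * geo(n - 1) / lam ** n) == 0
print("closed forms agree with the recursion for n = 3, ..., 7")
```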
\[coefficients estimate\] The coefficients $A_n(\lambda)$, $B_n(\lambda)$, $C_n(\lambda)$, and $D_n(\lambda)$ of (\[coefficients\]) satisfy $$\begin{aligned}
e^{\gamma(\lambda)}
&=&\lim_{n\to \infty} |B_n(\lambda)|^\frac{1}{2^{n-1}}=\lim_{n\to \infty} |C_n(\lambda)|^\frac{1}{2^{n-1}} \\
&\geq& \limsup_{n\to \infty} |A_n(\lambda)|^\frac{1}{2^{n-1}}, \limsup_{n\to \infty} |D_n(\lambda)|^\frac{1}{2^{n-1}}\end{aligned}$$ where $\gamma(\lambda)$ is defined in Theorem \[convergence\].
With the explicit expressions for $B_n$ and $C_n$ given above, the limiting value is clearly $e^{\gamma}$; it suffices to show the bound for $A_n$ and $D_n$. By induction, we find $$A_{n+1}(\lambda)=\lambda \left(B_n(\lambda)D_n(\lambda)+C_n(\lambda)A_n(\lambda)\right),$$ and $$D_{n+1}(\lambda)=C_n(\lambda)(2D_n(\lambda)+A_n(\lambda))+D_n(\lambda)B_n(\lambda),$$ with $A_1(\lambda)=0$ and $D_1(\lambda)=2$.
The explicit expressions for $B_n(\lambda)$ and $C_n(\lambda)$ and these inductive formulas for $A_n(\lambda), D_n(\lambda)$, show that $$A_{n}(\lambda)=(1+\lambda)^{2^{n-3}-2}(1+\lambda+\lambda^2)^{2^{n-4}-2}\cdots (1+\lambda+\cdots+\lambda^{n-3})^{0}A_{n}^*(\lambda),$$ and $$D_{n}(\lambda)=(1+\lambda)^{2^{n-3}-2}(1+\lambda+\lambda^2)^{2^{n-4}-2}\cdots(1+\lambda+\cdots+\lambda^{n-3})^{0}D_{n}^*(\lambda),$$ for sequences $A_{n}^*(\lambda)$ and $D_{n}^*(\lambda)$ given inductively by $$A_{n+1}^*(\lambda)=\lambda^{n+1} D_n^*(\lambda)+(\lambda+\lambda^2 +\cdots +\lambda^{n})A_n^*(\lambda)$$ and $$D_{n+1}^*(\lambda)=(2(1+\lambda +\cdots +\lambda^{n-1})+\lambda^n)D_n^*(\lambda)+(1+\lambda +\cdots +\lambda^{n-1})A_n^*(\lambda)$$ with $A_1^*(\lambda)=0$ and $D_1^*(\lambda)=2$. Finally, the induction formulas for $A_{n}^*(\lambda)$ and $D_{n}^*(\lambda)$ imply that $$\begin{aligned}
\max (|A_n^*(\lambda)|, |D_{n}^*(\lambda)|)&\leq& (2+2|\lambda|)^n \max (|A_{n-1}^*(\lambda)|, |D_{n-1}^*(\lambda)|)\\
&\leq& (2+2|\lambda|)^{2n-1} \max (|A_{n-2}^*(\lambda)|, |D_{n-2}^*(\lambda)|)\\
&\leq& 2(2+2|\lambda|)^{1+2+\cdots+n}.\end{aligned}$$ Consequently, we have $\limsup_{n\to \infty} |A_n(\lambda)|^\frac{1}{2^{n-1}},\ \limsup_{n\to \infty} |D_n(\lambda)|^\frac{1}{2^{n-1}}\leq e^{\gamma(\lambda)}$.
The growth of the coefficients $B_n$ and $C_n$ in Lemma \[coefficients estimate\] provides a uniform upper bound on the size of $2^{1-n} \log\|F_n^+(1,s)\|$ for small $s$:
\[upper bound by epsilon\] For any given ${\varepsilon}>0$, there exists a $\delta>0$ and an integer $N>0$ so that $$\frac{1}{2^{n-1}} \log \| F_n^+(1,s) \| - \gamma(\lambda) < {\varepsilon}$$ for all $|s| < \delta$ and all $n\geq N$.
Define polynomials $p_n(s)$ and $q_n(s)$ by $$F_n^+(1,s) = (sp_n(s), q_n(s))$$ so that $p_n(0) = B_n(\lambda)$ and $q_n(0) = C_n(\lambda)$. By Lemma \[coefficients estimate\], there is a sufficiently large integer $N$ such that $$|B_N(\lambda)|, |C_{N}(\lambda)|<(1+{\varepsilon}/4)^{2^{N-1}}e^{\gamma(\lambda)2^{N-1}}$$ and $$\frac{\log (|\lambda|+3)}{2^{N-1}} \leq {\varepsilon}/2.$$ Set $$R:=(1+{\varepsilon}/4)^{2^{N-1}}e^{\gamma(\lambda)2^{N-1}}(|\lambda|+3).$$ Since $|B_{N}(\lambda)|, |C_{N}(\lambda)|< R/(|\lambda|+3)$, we can choose $\delta>0$ sufficiently small so that $$|p_{N}(s)|, |q_{N}(s)| < R/(|\lambda|+3),$$ for any $s$ with $|s|\leq \delta$.
Recall that $F_{n+1}^+(1,s) = F_{1/s}(F_n^+(1,s))$ for all $n$. Thus, $$|p_{N+1}(s)|=|\lambda p_{N}(s) q_{N}(s)|< R^2/(|\lambda|+3)$$ and $$|q_{N+1}(s)|=|s^2 p_N(s)^2+q_{N}(s)^2+p_{N}(s)q_{N}(s)|<R^2/(|\lambda|+3).$$ Inductively, we find $$|p_{N+i}(s)|<R^{2^i}/(|\lambda|+3)$$ $$|q_{N+i}(s)|<R^{2^i}/(|\lambda|+3)$$ for any $s$ with $|s|\leq \delta$ and any $i\geq 0$. Consequently, as the integer $N$ satisfies $\log (|\lambda|+3)/2^{N-1}\leq {\varepsilon}/2$ and $|s|\leq \delta$, $$\begin{aligned}
\frac{\log\|F_{N+i}^+(1,s)\|}{2^{N+i-1}}&<& \frac{\log R^{2^i}/(|\lambda|+3)}{2^{N+i-1}}\\
&<& \frac{ \log [(1+{\varepsilon}/4)^{2^{N-1+i}}e^{\gamma(\lambda)2^{N-1+i}}(|\lambda|+3)^{2^i}]}{2^{N+i-1}}\\
&<&\gamma(\lambda) +{\varepsilon}/4+\log (|\lambda|+3)/2^{N-1}\\
&<&\gamma(\lambda) +{\varepsilon}.\end{aligned}$$
The corresponding lower bound on the size of $2^{1-n} \log\|F_n^+(1,s)\|$ for small $s$ is more delicate, and we use Lemma \[H bound\] together with the estimates of Lemma \[coefficients estimate\].
\[lower bound by epsilon\] For any given ${\varepsilon}>0$, there exists a $\delta>0$ and an integer $N>0$ so that $$\frac{1}{2^{n-1}} \log \| F_n^+(1,s) \| - \gamma(\lambda) > -{\varepsilon}$$ for all $|s| < \delta$ and all $n\geq N$.
In contrast with the proof of Lemma \[upper bound by epsilon\], we define polynomials $p_n(s)$ and $q_n(s)$ by $$F_n^+(1,s) = (s(B_n(\lambda)+sp_n(s)), C_n(\lambda)+sq_n(s)).$$ By Lemma \[coefficients estimate\], there is a sufficiently large integer $N$ such that $$|A_N(\lambda)|, |D_{N}(\lambda)|<(1+{\varepsilon}/8)^{2^{N-1}}e^{\gamma(\lambda)2^{N-1}},$$ $$|B_{N+i}(\lambda)|<((1+{\varepsilon}/8)e^{\gamma(\lambda)})^{2^{N-1+i}},$$ and $$\label{Cn esitmate}((1-{\varepsilon}/8)e^{\gamma(\lambda)})^{2^{N-1+i}}< |C_{N+i}(\lambda)|<((1+{\varepsilon}/8)e^{\gamma(\lambda)})^{2^{N-1+i}},$$ for any $i\geq 0$. By increasing $N$ if necessary, we may also assume that $$\label{C term}
\frac{\log\left(\frac{c{\varepsilon}}{8(12|\lambda|+12)^{2}}\right)}{2^{N-1}}>-{\varepsilon}/10,$$ where the constant $c$ is defined in Lemma \[H bound\]. Set $$R:=((1+{\varepsilon}/8)e^{\gamma(\lambda)})^{2^{N-1}}(12|\lambda|+12).$$ Since $|A_{N}(\lambda)|, |D_{N}(\lambda)|<R/(12|\lambda|+12)$, we can choose $\delta>0$ sufficiently small so that $$|p_{N}(s)|, |q_{N}(s)|<R/(12|\lambda|+12),$$ for any $s$ with $|s|\leq \delta$. Recalling that $F_{n+1}^+(1,s) = F_{1/s}(F_n^+(1,s))$ for all $n$, the estimate (\[Cn esitmate\]) implies $$|p_{N+1}(s)|=|\lambda (C_{N}(\lambda) p_{N}(s)+(B_{N}(\lambda) +s p_{N}(s))q_{N}(s))|<\frac{R^2}{12|\lambda|+12}$$ and similarly, $|q_{N+1}(s)|<R^2/(12|\lambda|+12)$. Inductively, for any $i\geq 0$ and $s$ with $|s|\leq \delta$, $$\label{qn estimate}
|p_{N+i}(s)|, |q_{N+i}(s)| < R^{2^i}/(12|\lambda|+12).$$
Choose an integer $N'>N$, such that $$\label{delta 3}
\delta':=\frac{{\varepsilon}(1-{\varepsilon}/8)^{2^{N'-1}}}{8(1+{\varepsilon}/8)^{2^{N'-1}}(12|\lambda|+12)^{2^{N'-N}}}<\delta.$$ For any $j\geq N'$ and $s$ with $$|s|\leq \frac{{\varepsilon}(1-{\varepsilon}/8)^{2^{j-1}}}{8(1+{\varepsilon}/8)^{2^{j-1}}(12|\lambda|+12)^{2^{j-N}}}\leq\delta',$$ by (\[Cn esitmate\]), we have $$\begin{aligned}
\frac{\log\|F_j^+(1,s)\|}{2^{j-1}}&\geq&\frac{\log |C_{j}(\lambda)+sq_j(s)|}{2^{j-1}}\\
&\geq& \frac{\log( |C_{j}(\lambda)|-|sq_j(s)|)}{2^{j-1}}\\
&\geq&\frac{\log\left( |(1-{\varepsilon}/8)^{2^{j-1}}e^{\gamma(\lambda)2^{j-1}}|-\frac{|s|R^{2^{j-N}}}{12|\lambda|+12}\right)}{{2^{j-1}}}\textup{, by (\ref{qn estimate})}\\
&\geq& \gamma(\lambda)+2\log (1-{\varepsilon}/8)\\
&>&\gamma(\lambda) -{\varepsilon}\textup{, as ${\varepsilon}$ is small.}\end{aligned}$$ For any $n\geq N'$ and $s$ with $|s|<\delta'$, if $|s|\leq \frac{{\varepsilon}(1-{\varepsilon}/8)^{2^{n-1}}}{8(1+{\varepsilon}/8)^{2^{n-1}}(12|\lambda|+12)^{2^{n-N}}}$, then the above inequality guarantees $$\frac{\log\|F_n^+(1,s)\|}{2^{n-1}}> \gamma(\lambda) -{\varepsilon}.$$ Otherwise, by (\[delta 3\]), there is a $j$ with $N'\leq j<n$, such that $$\frac{{\varepsilon}(1-{\varepsilon}/8)^{2^{j}}}{8(1+{\varepsilon}/8)^{2^{j}}(12|\lambda|+12)^{2^{j+1-N}}}\leq |s|\leq \frac{{\varepsilon}(1-{\varepsilon}/8)^{2^{j-1}}}{8(1+{\varepsilon}/8)^{2^{j-1}}(12|\lambda|+12)^{2^{j-N}}}.$$
From Lemma \[H bound\], we have $$\label{apply H bound}
\frac{1}{2^{(n+i)-1}}\log\|F_{n+i}^+(1,s)\|- \frac{1}{2^{n-1}}\log\|F_n^+(1,s)\|\geq \frac{1}{2^{n-1}}\log (c|s|),$$ for all $s$ with $|s|\leq 1$ and all $n, i\geq 1$. Indeed, note that $F_{n+1}^+(1,s) = F_{1/s}(F_n^+(1,s))$ for all $n$. We see that $$\begin{aligned}
\frac{ \|F_{n+i}^+ (1,s) \|}{\|F_n^+(1,s)\|^{2^i} }
&=& \frac{ \|F_{n+i}^+ (1,s) \|}{\|F_{n+i-1}^+(1,s)\|^2 } \left(\frac{ \|F_{n+i-1}^+ (1,s) \|}{\|F_{n+i-2}^+(1,s)\|^2 }\right)^2 \cdots \left(\frac{ \|F_{n+1}^+ (1,s) \|}{\|F_n^+(1,s)\|^2 }\right)^{2^{i-1}} \\
&\geq& (c |s|)^{2^i-1}.\end{aligned}$$ Therefore, $$\begin{aligned}
\frac{\log\|F_n^+(1,s)\|}{2^{n-1}}&\geq& \frac{\log\|F_j^+(1,s)\|}{2^{j-1}}+2^{1-j}\log (c|s|)\\
&\geq& \gamma(\lambda)+2\log (1-{\varepsilon}/8)
+2^{1-j}\log \frac{c{\varepsilon}(1-{\varepsilon}/8)^{2^{j}}}{8(1+{\varepsilon}/8)^{2^{j}}(12|\lambda|+12)^{2^{j+1-N}} }\\
&\geq& \gamma(\lambda)+4\log (1-{\varepsilon}/8)-2\log (1+{\varepsilon}/8)-{\varepsilon}/10, \textup{ by (\ref{C term})}\\
&>&\gamma(\lambda)-{\varepsilon}, \textup{ as ${\varepsilon}$ is small.}\end{aligned}$$
Non-archimedean potential functions {#non-archimedean}
===================================
In this section, we prove a non-archimedean counterpart to Theorem \[convergence\]. If we assume $\lambda\in{\overline{{{\mathbb Q}}}}$, many of the computations of the previous section hold (and simplify) for the non-archimedean absolute values on the number field $k = {{\mathbb Q}}(\lambda)$. As such, we may conclude that the bifurcation measures $\mu^\pm_\lambda$ are the archimedean components of a pair of [*quasi-adelic measures*]{}, equipped with continuous potential functions. We use the term “quasi-adelic” because the measures might be nontrivial at infinitely many places, though the associated height functions (defined as a sum of all local potentials, over all places of $k$) converge.
Defining the potential functions at each place
----------------------------------------------
Let $k$ be a number field and let ${\overline{k}}$ denote a fixed algebraic closure of $k$. (In this article, we always take $k={{\mathbb Q}}(\lambda)$ for $\lambda\in {\overline{{{\mathbb Q}}}}$.) Any number field $k$ is equipped with a set ${\mathcal{M}}_k$ of pairwise inequivalent nontrivial absolute values, together with a positive integer $N_v$ for each $v \in {\mathcal{M}}_k$, such that
- for each $\alpha \in k^*$, we have $|\alpha|_v = 1$ for all but finitely many $v \in {\mathcal{M}}_k$; and
- every $\alpha \in k^*$ satisfies the [*product formula*]{} $$\label{product formula}
\prod_{v \in {\mathcal{M}}_k} |\alpha|_v^{N_v} \ = \ 1 \ .$$
For each $v \in {\mathcal{M}}_k$, let $k_v$ be the completion of $k$ at $v$, let ${\overline{k}_v}$ be an algebraic closure of $k_v$, and let ${{\mathbb C}}_v$ denote the completion of ${\overline{k}_v}$. We work with the norm $$\|(z_1, z_2)\|_v = \max\{|z_1|_v, |z_2|_v\}$$ on $({{\mathbb C}}_v)^2$. We let ${{\mathbb P}}^{1,an}_v$ denote the Berkovich projective line over ${{\mathbb C}}_v$, which is a canonically defined path-connected compact Hausdorff space containing ${{\mathbb P}}^1({{\mathbb C}}_v)$ as a dense subspace. If $v$ is archimedean, then ${{\mathbb C}}_v \cong {{\mathbb C}}$ and ${{\mathbb P}}^{1,an}_v = {{\mathbb P}}^1({{\mathbb C}})$. See [@BRbook] for more information.
With the $v$-adic norms and $t\in {{\mathbb C}}_v$, we can define $H_{\lambda,v}^\pm$ exactly as in the archimedean case, $$H_{\lambda,v}^{\pm}(t) = \lim_{n\to\infty} \frac{1}{2^n} \log \| F_t^n(\pm 1, 1) \|_v.$$ The definition extends naturally to the Berkovich affine line ${{\mathbb A}}^{1,an}_v$; see [@BRbook Chapter 10]. Recall the definition of $F_n$ from §\[F\_n\], now defined on $({{\mathbb C}}_v)^2$.
\[non-archimedean convergence\] Fix $\lambda \in {\overline{{{\mathbb Q}}}}$ with $\lambda$ nonzero and not a root of unity, or set $\lambda=1$. For each place $v$ of $k = {{\mathbb Q}}(\lambda)$, the limits $$\lim_{n\to\infty} \frac{1}{2^{n-1}} \log \| F_n^\pm (t_1, t_2) \|_v$$ converge locally uniformly on $({{\mathbb C}}_v)^2 \setminus\{(0,0)\}$ to continuous functions $G^\pm_v$ satisfying $$G_v^\pm(t_1,t_2) = \begin{cases}
2H^\pm_{\lambda,v}(t_1/t_2)+\log |t_2|_v &\text{ if } t_2\neq0,\\
\log|t_1|_v+ \gamma_v(\lambda) &\text{ if } t_2=0.
\end{cases}$$ The function $G_v^\pm$ extends uniquely to define a continuous potential function for a probability measure $\mu_v^\pm$ on ${{\mathbb P}}^{1,an}_v$.
Theorem \[non-archimedean convergence\] is nearly identical to Theorem \[convergence\], except there is no longer a condition on the finiteness of $\gamma(\lambda)$, and the convergence holds at all places $v$. This finiteness is guaranteed by the following lemma.
\[product formula convergence\] For every algebraic number $\lambda$ which is not a root of unity, or for $\lambda=1$, the sum $$\gamma_v(\lambda) = \frac{1}{2} \sum_{i=1}^\infty \frac{1}{2^i} \log|1 + \lambda + \cdots + \lambda^i| _v$$ converges for all places $v$ of $k = {{\mathbb Q}}(\lambda)$.
The statement follows from the product formula for the number field $k$. First assume that $\lambda\not=1$. Note that $(1+ \lambda + \cdots + \lambda^i)(1-\lambda) = 1-\lambda^{i+1}$, so it suffices to prove the convergence of the sum $$\sum_{i=0}^{\infty} \frac{1}{2^i} \log|1 -\lambda^i|_v$$ at all places $v$.
Let $v$ be a non-archimedean place of $k$. If $|\lambda|_v < 1$, then $|1-\lambda^i|_v = 1$ for all $i$, and $\gamma_v$ is 0. If $|\lambda|_v > 1$, then $|1-\lambda^i|_v = |\lambda|_v^i$ for all $i$, and again the sum converges. Similarly for archimedean places, as long as $|\lambda|_v \not=1$, it is easy to see that the sum defining $\gamma_v$ converges.
For any $\lambda\in{\overline{{{\mathbb Q}}}}$, there are only finitely many places of $k={{\mathbb Q}}(\lambda)$ for which $|\lambda|_v > 1$. For all such $v$, we have $|1-\lambda^i|_v$ growing as $|\lambda|_v^i$ as $i\to \infty$. For all other $v$, the absolute value $|1-\lambda^i|_v$ is uniformly bounded above (by 1 if non-archimedean, and by 2 if archimedean). Let $\ell = \sum_w N_w$, where the sum is over the archimedean places $w$ with $|\lambda|_w \leq 1$.
Suppose $v$ is a place such that $|\lambda|_v = 1$. It remains to show that $|1-\lambda^i|_v$ cannot get too small as $i\to\infty$. As $1-\lambda^i \not=0$ for all $i$, the product formula for $k$ states that $$\prod_{w\in {\mathcal{M}}_k} |1-\lambda^i|_w^{N_w} = 1.$$ Therefore, $$|1 - \lambda^i|^{N_v}_v = \frac{1}{ \prod_{w\not=v \in {\mathcal{M}}_k} |1-\lambda^i|_w^{N_w} } \geq \frac{1}{2^\ell \, \prod_{w: |\lambda|_w > 1} |1-\lambda^i|^{N_w}_w } \geq c^i$$ for some constant $c>0$ and all $i$. It follows that the expression for $\gamma_v$ converges at this place $v$.
Finally, assume $\lambda = 1$. Then the sum becomes $$\gamma_v(1) = \sum_{j=2}^\infty \frac{1}{2^j} \log|j|_v.$$ The expression clearly converges at the unique archimedean place $v=\infty$. Setting $v = p$ for any prime $p$, we have $|j|_p \geq 1/j$ for all $j\in{{\mathbb N}}$; therefore, the sum is easily seen to converge also in this case.
For a given $\lambda$, the value $\gamma_v(\lambda)$ may be nonzero at infinitely many places $v$. For example, $\gamma_v(1)$ is nonzero at [*all*]{} places $v$ of ${{\mathbb Q}}$. The conclusion of Lemma \[product formula convergence\] also appears in [@Herman:Yoccoz Lemma 4].
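For the reader's convenience, here is a small numerical illustration of this remark for $\lambda=1$: it approximates $\gamma_v(1)=\sum_{j\geq 2}2^{-j}\log|j|_v$ at the archimedean place and at a few primes, using $|j|_p=p^{-v_p(j)}$. The truncation point is arbitrary.

```python
# Approximate gamma_v(1) = sum_{j>=2} 2^{-j} log|j|_v at v = infinity and v = p.
from math import log

def gamma_p(p, terms=60):
    total = 0.0
    for j in range(2, terms):
        vp, m = 0, j
        while m % p == 0:                    # v_p(j), so that |j|_p = p^{-v_p(j)}
            vp, m = vp + 1, m // p
        total += -vp * log(p) / 2 ** j
    return total

print("v = infinity:", sum(log(j) / 2 ** j for j in range(2, 60)))
for p in (2, 3, 5, 7):
    print("v =", p, ":", gamma_p(p))         # each value is nonzero
```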
Proof of Theorem \[non-archimedean convergence\].
-------------------------------------------------
Fix $\lambda\in{\overline{{{\mathbb Q}}}}$, with $\lambda$ nonzero and not a root of unity; or let $\lambda=1$. If $v$ is an archimedean place of the number field $k = {{\mathbb Q}}(\lambda)$, then the Theorem follows immediately from Theorem \[convergence\] and Lemma \[product formula convergence\] (for the finiteness of $\gamma_v(\lambda)$).
Now suppose that $v$ is a non-archimedean place of $k$. The proof of convergence in the archimedean case shows [*mutatis mutandis*]{} that the convergence to $G^\pm_v$ is locally uniform for all places $v$. A line-by-line analysis of the proof of Theorem \[convergence\] shows that the proof uses nothing more than the triangle inequality and elementary algebra. As such, the estimates can only be improved when the usual triangle inequality is replaced by the ultrametric inequality in the case of a non-archimedean absolute value.
The extension of $G_v^\pm$ to Berkovich space and the construction of the measure $\mu_v^\pm$ as its Laplacian are carried out exactly as in [@BRbook §10.1].
There is one lemma (Lemma \[H bound\]) used in the proof of Theorem \[convergence\] that we will need again in the next section, in the non-archimedean setting. We state it explicitly here.
\[constant c\] For each $\lambda\in {\overline{{{\mathbb Q}}}}\setminus\{0\}$, there is a constant $c>0$ so that $$\frac {\| F_t (z_1,z_2) \|_v}{\|(z_1,z_2)\|_v^2} \geq c |t|_v^{-1}$$ for all $|t|_v\geq 1$, all $(z_1, z_2)\neq (0,0)$ in $({{\mathbb C}}_v)^2$, and all places $v$ of $k = {{\mathbb Q}}(\lambda)$.
Set $$c = \min\{\min\{|\lambda|_v/2: v \in {\mathcal{M}}_k\}, 1/4\}.$$ Note that $c > 0$ since there are only finitely many places $v$ for which $|\lambda|_v<1$. Now fix $v\in{\mathcal{M}}_k$. If $v$ is archimedean, then the proof is identical to that of Lemma \[H bound\]. If $v$ is non-archimedean, the estimate can be simplified a bit. Indeed, assume that $z_2=1$ and $|z_1|_v\leq 1$. Then $\|(z_1,z_2)\|_v = 1$, and we estimate the norm of $sF_{1/s}(z_1,1) = (s\lambda z_1, sz_1^2 + s + z_1)$ with $0 < |s|_v \leq 1$. For each such $s$, either $|s\lambda z_1|_v\geq c|s|_v^2$, or $|z_1|_v < |s|_v/2$ in which case, $$|sz_1^2+s+z_1|_v = |s|_v > c |s|_v^2.$$ In either case, $\|F_{1/s}(z_1,z_2)\|_v \geq c|s|_v$ and the lower bound is proved. The case of $|z_2|_v < |z_1|_v=1$ follows by symmetry, and the conclusion of the lemma is obtained from the homogeneity of $F_t$.
The homogeneous bifurcation sets {#sets}
================================
In this section, we study the escape-rate functions $G_v^\pm$ of Theorems \[convergence\] and \[non-archimedean convergence\] and compute the homogeneous capacity of the sets $$K^\pm_{\lambda,v} = \{z\in ({{\mathbb C}}_v)^2: G_v^\pm(z)\leq 0\}.$$ We also provide a bound on the diameter of the sets $K^\pm_{\lambda, v}$ that will be used in our proof of Theorem \[equidistribution at all places\] (and Theorem \[equidistribution\]).
The homogeneous capacity
------------------------
We will consider compact sets $K\subset {{\mathbb C}}^2$ that are circled and pseudoconvex: these are sets of the form $$K = \{(z,w)\in \mathbb{C}^2: G_K(z,w) \leq 0\}$$ for continuous, plurisubharmonic functions $G_K : {{\mathbb C}}^2\setminus\{(0,0)\} \to {{\mathbb R}}$ such that $$G_K(\alpha z, \alpha w) = G_K(z, w) + \log|\alpha|$$ for all $\alpha\in{{\mathbb C}}^*$; see [@D:lyap §3]. Such functions are (homogeneous) potential functions for probability measures on ${{\mathbb P}}^1$ [@Fornaess:Sibony Theorem 5.9].
Set $G_K^+=\max\{G_K,0\}$. The Levi measure of $K$ is defined by $$\mu_K=dd^cG_K^+\wedge dd^cG_K^+.$$ It is known that $\mu_K$ is a probability measure supported on $\partial K=\{G_K=0\}$. The homogeneous capacity of $K$ is defined by $${\operatorname{cap}}(K)=\exp\Big(\iint\log|\zeta\wedge \xi| d\mu_K(\zeta)d\mu_K(\xi)\Big).$$ This capacity was introduced in [@D:lyap] and shown in [@Baker:Rumely:equidistribution] to satisfy ${\operatorname{cap}}(K) = (d_\infty(K))^2$, where $d_\infty$ is the transfinite diameter in ${{\mathbb C}}^2$.
To compute the capacity, suppose that $$F_n : {{\mathbb C}}^2 \to {{\mathbb C}}^2$$ is a sequence of homogeneous polynomial maps such that the resultant ${\operatorname{Res}}(F_n) \not=0$ for all $n$ and that $$\lim_{n\to\infty} \frac{1}{\deg(F_n)} \log\|F_n\|$$ converges locally uniformly in ${{\mathbb C}}^2 \setminus \{(0,0)\}$ to the function $G_K$. The capacity ${\operatorname{cap}}(K)$ may be computed as $$\label{archimedean cap}
{\operatorname{cap}}(K)=\lim_{n\rightarrow\infty} |{{\operatorname{Res}}}(F_n)|^{-1/\deg(F_n)^2}.$$ If, in addition, the maps $F_n$ are defined over a number field $k$, and if the convergence holds for all absolute values $v$ in ${\mathcal{M}}_k$, with limiting function $G_v: ({{\mathbb C}}_v)^2\setminus \{(0,0)\} \to {{\mathbb R}}$, then the same computation works at all places $v$. That is, $$\label{cap at all places}
{\operatorname{cap}}(K_v)=\lim_{n\rightarrow\infty} |{{\operatorname{Res}}}(F_n)|_v^{-1/\deg(F_n)^2}.$$ See [@DWY:Lattes] for a proof; similar statements appear in [@DR:transfinite].
Homogeneous capacity of the bifurcation sets.
---------------------------------------------
Recall the definition of $\gamma(\lambda)$ from Theorem \[convergence\].
\[cap\] For all $\lambda\in{{\mathbb C}}\setminus\{0\}$ such that $|\gamma(\lambda)|<\infty$, the set $$K^\pm_\lambda = \{z\in {{\mathbb C}}^2: G_\lambda^\pm(z)\leq 0\}$$ is compact, circled, and pseudoconvex; its homogeneous capacity is $${\operatorname{cap}}(K^+_{\lambda})={\operatorname{cap}}(K^-_{\lambda})=\frac{1}{|\lambda|^2}\prod_{j=1}^{+\infty}|1+\lambda+\cdots+\lambda^{j}|^{-3\cdot 4^{-j-1}}.$$ For all $\lambda\in{\overline{{{\mathbb Q}}}}\setminus\{0\}$, not a root of unity, or for $\lambda=1$, we have $${\operatorname{cap}}(K^+_{\lambda,v})={\operatorname{cap}}(K^-_{\lambda,v})=\frac{1}{|\lambda|_v^2}\prod_{j=1}^{+\infty}|1+\lambda+\cdots+\lambda^{j}|_v^{-3\cdot 4^{-j-1}}$$ for all places $v$ of the number field $k = {{\mathbb Q}}(\lambda)$.
Fix $\lambda\in{{\mathbb C}}\setminus\{0\}$ so that $|\gamma(\lambda)| < \infty$. We will provide the proof for $K^+_\lambda$; the result for $K^-_\lambda$ follows by symmetry. Recall the definition of $F_n^+$ from (\[F\_n plus\]). That $K^+_\lambda$ is compact, circled and pseudoconvex follows immediately from the definition and continuity of $G^+_\lambda$, stated in Theorem \[convergence\].
To compute the capacity of $K^+_\lambda$, we give a recursive relation between ${\operatorname{Res}}(F_n)$ and ${\operatorname{Res}}(F_{n+1})$. Consider the transformation $A_\lambda(t_1,t_2)=(\lambda t_1 t_2, t_1^2+t_2^2)$. Then ${\rm Res}(A_\lambda \circ F_n)$ takes the form $$\left|
\begin{array} {cccccccc}
0 & a_1& \cdots & a_{d-1} & a_d &0 &\cdots & 0\\
0 & 0 & a_1& \cdots & a_{d-1} & a_d & \cdots &0\\
& & & \cdots & & & & \\
0 & 0& \cdots & 0 & a_1& \cdots& a_{d-1} & a_d\\
b_0 & b_1&\cdots & b_{d-1} & b_d &0 &\cdots & 0\\
0 & b_0 & b_1& \cdots & b_{d-1} & b_d & \cdots &0\\
& & & \cdots & & & & \\
0 & 0& \cdots & b_0 & b_1&\cdots& b_{d-1} & b_d
\end{array}
\right|$$ and ${\operatorname{Res}}(F_{n+1})$ takes the form $$\left|
\begin{array} {cccccccc}
0 & a_1& \cdots & a_{d-1} & a_d &0 &\cdots & 0\\
0 & 0 & a_1& \cdots & a_{d-1} & a_d & \cdots &0\\
& & & \cdots & & & & \\
0 & 0& \cdots & 0 & a_1& \cdots& a_{d-1} & a_d\\
b_0+\frac{a_1}{\lambda}& b_1+\frac{a_2}{\lambda}&\cdots & b_{d-1}+\frac{a_d}{\lambda} & b_d &0 &\cdots & 0\\
0 & b_0+\frac{a_1}{\lambda} & b_1+\frac{a_2}{\lambda}& \cdots & b_{d-1}+\frac{a_d}{\lambda} & b_d & \cdots &0\\
& & & \cdots & & & & \\
0 & 0& \cdots & b_0+\frac{a_1}{\lambda} & b_1+\frac{a_2}{\lambda}&\cdots& b_{d-1}+\frac{a_d}{\lambda}& b_d
\end{array}
\right|,$$ where $b_0=C_n^2(\lambda)$ and $b_0+a_1/\lambda=C_{n+1}(\lambda)$, the coefficients defined in (\[coefficients\]). Therefore the resultants ${\operatorname{Res}}(F_n)$ satisfy: $$\begin{aligned}
{\operatorname{Res}}(F_{n+1})&=&\frac{C_{n+1}(\lambda)}{C_n^2(\lambda)}{\operatorname{Res}}(A_\lambda \circ F_n)\\
&=&\frac{C_{n+1}(\lambda)}{C_n^2(\lambda)}{\operatorname{Res}}(A_\lambda)^{\deg(F_n)}{\operatorname{Res}}(F_n)^{2\deg(A_\lambda)} \\
&=&\frac{C_{n+1}(\lambda)}{C_n^2(\lambda)}{\operatorname{Res}}(A_\lambda)^{2^{n-1}}{\operatorname{Res}}(F_{n})^{4}\end{aligned}$$ where the second equality follows from the decomposition property of resultants (see [@D:lyap Proposition 6.1]).
We may compute from (\[C coeff\]) that $C_1(\lambda)=1,\ C_2(\lambda)=1+\lambda$, while for $n\geq3$, $$\label{C formula}
C_n(\lambda)=(1+\lambda)^{2^{n-3}}(1+\lambda+\lambda^2)^{2^{n-4}}\cdots (1+\lambda+\cdots+\lambda^{n-2})^{2^0}(1+\lambda+\cdots+\lambda^{n-1}).$$ From the definition of resultant, we have ${\operatorname{Res}}(A_\lambda) =\lambda^2$ and ${\operatorname{Res}}(F_1) =-\lambda$. Thus the recursive relation becomes $${\operatorname{Res}}(F_{n+1})=\frac{1+\lambda+\cdots+\lambda^n}{1+\lambda+\cdots+\lambda^{n-1}} \; \lambda^{2^n}{\operatorname{Res}}(F_n)^{4}.$$ By induction, ${\operatorname{Res}}(F_2)=\lambda^6(1+\lambda)$, and for $n\geq3$, we have $$\label{resultant formula}
{\operatorname{Res}}(F_n)=\lambda^{2\cdot 4^{n-1}-2^{n-1}}(1+\lambda+\cdots+\lambda^{n-1})\prod_{j=1}^{n-2}(1+\lambda+\cdots+\lambda^{j})^{3\cdot 4^{n-2-j}}.$$ From equation (\[archimedean cap\]), we conclude that $$\begin{aligned}
{\operatorname{cap}}(K^+_{\lambda})&=&\lim_{n\rightarrow\infty} |{\operatorname{Res}}(F_{n})|^{-1/\deg(F_{n})^2}\\
&=&\lim_{n\rightarrow\infty} \left(\frac{|\lambda|^{-2+2^{1-n}}}{|1+\lambda+\cdots+\lambda^{n-1}|^{4^{1-n}}}\prod_{j=1}^{n-2}|1+\lambda+\cdots+\lambda^{j}|^{-3\cdot 4^{-j-1}}\right)\\
&=&\frac{1}{|\lambda|^2}\prod_{j=1}^{+\infty}|1+\lambda+\cdots+\lambda^{j}|^{-3\cdot 4^{-j-1}}.\end{aligned}$$ The proof for $\lambda\in{\overline{{{\mathbb Q}}}}$ and non-archimedean absolute values is identical, using (\[cap at all places\]).
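As a consistency check (not part of the proof), the capacity can be approximated in two ways: through the recursion for ${\operatorname{Res}}(F_n)$, carried out in logarithmic form to avoid overflow, and through the infinite product in the statement. The sketch below does this for a sample real $\lambda$.

```python
# Compare lim |Res(F_n)|^{-1/deg(F_n)^2} (via the recursion) with the product formula
# for cap(K^+_lambda), both in logarithmic form.
from math import log

lam = 3.0                                        # sample value; gamma(lam) is finite

def log_geo(i):                                  # log|1 + lam + ... + lam^i|
    return log(abs(sum(lam ** j for j in range(i + 1))))

# log|Res(F_n)| via Res(F_{n+1}) = [(1+...+lam^n)/(1+...+lam^{n-1})] * lam^{2^n} * Res(F_n)^4
n_max, log_res = 25, log(abs(lam))               # Res(F_1) = -lam
for n in range(1, n_max):
    log_res = log_geo(n) - log_geo(n - 1) + 2 ** n * log(abs(lam)) + 4 * log_res

log_cap_from_res = -log_res / 4 ** (n_max - 1)   # log of |Res(F_{n_max})|^{-1/deg^2}
log_cap_from_product = -2 * log(abs(lam)) - 3 * sum(log_geo(j) / 4 ** (j + 1) for j in range(1, 60))
print(log_cap_from_res, log_cap_from_product)    # agree up to the truncation error
```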
Bounds for the homogeneous sets. {#subsection bounds}
--------------------------------
Fix $\lambda\in{\overline{{{\mathbb Q}}}}$, not a root of unity (except possibly 1) and nonzero. In our proof of Theorem \[equidistribution\], we will need control over the diameter of $K^\pm_{\lambda,v}$ at most places $v$ of the number field $k = {{\mathbb Q}}(\lambda)$. We define subsets of the set of places ${\mathcal{M}}_k$:
- for each $n\geq 1$, let ${\mathcal{M}}_{k, n}$ be the set of all non-archimedean places $v$ for which $|\lambda|_v=1$, $|1 + \lambda + \cdots + \lambda^i|_v = 1$ for all $i< n$, and $|1 + \lambda + \cdots + \lambda^n|_v<1$; and
- let ${\mathcal{M}}_{k, 0}$ be the set of all non-archimedean places $v$ for which $|\lambda|_v = 1$ and $|1 + \lambda + \cdots + \lambda^i|_v=1$ for all $i$.
Note that the set ${\mathcal{M}}_k \setminus \bigcup_{n\geq 0} {\mathcal{M}}_{k,n}$ is finite. (Indeed, every non-archimedean $v$ with $|\lambda|_v=1$ satisfies $|1+\lambda+\cdots +\lambda^i|_{{v}}\leq 1$ for all $i>0$, so it lies in ${\mathcal{M}}_{k,0}$ or in ${\mathcal{M}}_{k,n}$ for some $n\geq 1$. The remaining places are the archimedean places and the non-archimedean places with $|\lambda|_v\neq 1$, and there are only finitely many of each.) Also, the set ${\mathcal{M}}_{k,0}$ might be empty, as will be the case for $\lambda=1$.
\[trivial K\] For all $v$ in ${\mathcal{M}}_{k,0}$, the sets $K_v^+$ and $K_v^-$ are trivial; that is, $$G_v^\pm(t_1,t_2) = \log\|(t_1, t_2)\|_v$$ and $K_v^\pm = \bar{D}^2(0,1)$.
For each $n\geq 1$, the coefficients of $F_n$ will have absolute value $\leq 1$. From the formula for the resultant of $F_n$ in (\[resultant formula\]), we see that $|{\operatorname{Res}}(F_n)|_v = 1$ for all $n$. Applying [@BRbook Lemma 10.1] to $F_n$, setting $B_1=B_2=1$, we conclude that $$\|F_n(t_1, t_2)\| = \|(t_1, t_2)\|^{\deg F_n}$$ for all $n\geq 1$. The conclusion follows immediately.
\[the bounds for K\_v\^+\] Fix $\lambda\in{\overline{{{\mathbb Q}}}}\setminus\{0\}$ not a root of unity, or set $\lambda=1$. There exists a constant $c = c(\lambda) > 0$ so that $$\bar{D}^2(0,1) \subset K_v^+\subset \bar{D}^2(0, e^{-2\gamma_v (\lambda) -\frac {\log c}{2^{n-1}}})$$ for all $v\in {\mathcal{M}}_{k,n}$ and all $n \geq 1$.
Fix $n\geq 1$ and $v\in{\mathcal{M}}_{k,n}$. For each $m\geq 1$, the coefficients of $F_m$ lie in the valuation ring of $k$ (i.e. have absolute value $\leq 1$). It follows immediately that $\|F_m(t,s)\|_v \leq 1$ for all $m$ and all $\|(t,s)\|_v \leq 1$, and therefore, $G_v^+(t,s) \leq 0$ on $\bar{D}^2(0,1)$. Consequently, $\bar{D}^2(0,1) \subset K_v^+$.
Suppose $|s|_v = 1$ and $|t|_v \leq 1$. The resultant of the polynomial map $F_{\lambda,t}$ is $\lambda^2$, so it has $v$-adic absolute value $= 1$. The non-archimedean estimates of [@BRbook Lemma 10.1] imply that every iterate of $(z,w) = (1,1)$ under $F_{\lambda, t/s}$ will have norm 1. Therefore, the $v$-adic norm of $$F_m(t, s) = s^{2^{m-1}} F^m_{\lambda, t/s}(1,1)$$ is also equal to $1$ for all $m$. Consequently, $G^+_v(t,s) = 0$ whenever $|t|_v \leq 1$ and $|s|_v = 1$.
The coefficients $C_i$ of $F_i$, defined in (\[C coeff\]) with formulas in (\[C formula\]), satisfy $|C_i|_v=1$ for $i \leq n$ and $|C_i|_v<1$ for $i > n$. In fact, from the explicit expressions for $C_i$, we see that $|C_i|_v$ form a non-increasing sequence with $$\lim_{i\to\infty} |C_i|_v = 0$$ and the sequence of expressions $ \frac{\log |C_i|_v}{2^{i-1}}$ also form a non-increasing sequence with $$\lim_{i\to\infty} \frac{\log |C_i|_v}{2^{i-1}} = \gamma_v(\lambda).$$ Fix $s\in{{\mathbb C}}_v$ with $|s|_v<1$, and choose $j\geq n$ so that $$|C_{j+1}|_v\leq |s|_v <|C_j|_v.$$ Then by Lemma \[constant c\] (applied exactly as in (\[apply H bound\]) in the proof of Theorem \[convergence\]), we have $$\frac{\log \|F_m^+(1,s)\|_v}{2^{m-1}}\geq \frac{\log \|F^+_j(1,s)\|_v}{2^{j-1}}+\frac{\log c|s|_v}{2^{j-1}}$$ for all $m \geq j$. Since $|C_{j+1}|_v\leq|s|_v <|C_j|_v$ and all the coefficients of $F_j(1,s)$ are bounded by 1, the constant term $C_j$ dominates the norm of $F_j(1,s)$ (that is, $\|F_j(1,s)\|_v=|C_j|_v$); hence $$\frac{\log \|F_m^+(1,s)\|_v}{2^{m-1}}\geq \frac{\log |C_j|_v}{2^{j-1}}+\frac{\log (c |C_{j+1}|_v)}{2^{j-1}}.$$ Letting $m\to \infty$, and since $\frac{\log |C_j|_v}{2^{j-1}} \geq \gamma_v(\lambda)$ for all $j$, we have $$G_v^+(1,s)\geq 2\gamma_v (\lambda) +\frac {\log c}{2^{j-1}}.$$ It follows that $$G_v^+(1,s)\geq 2\gamma_v (\lambda) +\frac {\log c}{2^{n-1}}$$ for all $s$ with $|s|_v<1$, and we conclude that $$K_v^+\subset \bar{D}^2(0, e^{-2\gamma_v (\lambda) -\frac {\log c}{2^{n-1}}}).$$
The equidistribution theorem {#equidistribution section}
============================
Throughout this section, we fix $\lambda\not= 0$ in ${\overline{{{\mathbb Q}}}}$, and fix a number field $k$ containing $\lambda$. As in [@Call:Silverman], we define canonical height functions $\hat{h}^+$ and $\hat{h}^-$ on parameters $t\in {\mathrm{Per}}_1(\lambda)^{cm} ({\overline{{{\mathbb Q}}}})$, by $$\hat{h}^\pm(t) := \hat{h}_{f_t}(\pm1) = \lim_{n\to\infty} \frac{1}{2^n} h(f_t^n(\pm1)),$$ where $h$ is the logarithmic Weil height on ${{\mathbb P}}^1({\overline{{{\mathbb Q}}}})$ and $\hat{h}_{f_t}$ is the canonical height of the morphism $f_t$.
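For rational $\lambda$ and $t$, the limit defining $\hat{h}^\pm(t)$ can be estimated with exact arithmetic, since $h(p/q)=\log\max(|p|,|q|)$ for a fraction in lowest terms. The sketch below (illustrative only; the truncation at $n=12$ and the sample parameters are arbitrary) shows the value $0$ at $t=\lambda-2$, where $+1$ is fixed, and a positive estimate at a generic parameter.

```python
# Estimate hat{h}^+(t) = lim 2^{-n} h(f_t^n(+1)) with exact rational arithmetic.
from fractions import Fraction
from math import log

def weil_height(x: Fraction) -> float:
    return log(max(abs(x.numerator), abs(x.denominator)))    # h(p/q) in lowest terms

def hhat_plus(lam: Fraction, t: Fraction, n_iter: int = 12) -> float:
    z = Fraction(1)                                           # the critical point +1
    for _ in range(n_iter):
        z = lam * z / (z * z + t * z + 1)                     # f_t, computed exactly
    return weil_height(z) / 2 ** n_iter

lam = Fraction(-4)
print(hhat_plus(lam, lam - 2))      # t = lam - 2: +1 is fixed, so the estimate is 0
print(hhat_plus(lam, Fraction(1)))  # a sample parameter where the orbit's height grows
```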
\[equidistribution at all places\] Assume that $\lambda\in{\overline{{{\mathbb Q}}}}\setminus\{0\}$ is not a root of unity, or set $\lambda=1$. Let $\{S_n\}$ be any non-repeating sequence of ${\operatorname{Gal}}({\overline{k}}/k)$-invariant finite sets in ${\mathrm{Per}}_1(\lambda)^{cm}$ for which $$\hat{h}^+(S_n) \to 0$$ as $n\to\infty$. Then the sets $S_n$ are equidistributed with respect to the measure $\mu^+_\lambda$. In fact, for each place $v$ of $k$, the discrete measures $$\mu_n = \frac{1}{|S_n|} \sum_{t\in S_n} \delta_t$$ converge weakly to the measure $\mu_v^+$ on the Berkovich projective line ${{\mathbb P}}^{1,an}_v$. Similarly for $\hat{h}^-$ and the measures $\{\mu^-_v\}$.
The main idea of the proof is to show that $\hat{h}^+$ and $\hat{h}^-$ are canonically associated to the “quasi-adelic” measures $\{\mu^{\pm}_v\}$. We use Theorems \[convergence\] and \[non-archimedean convergence\]. Then we may apply the arithmetic equidistribution theorem (as appearing in [@Ye:quasi], modified from the original treatments in [@BRbook; @FRL:equidistribution]) to obtain the theorem.
Quasi-adelic measures and equidistribution.
-------------------------------------------
For each $v \in {\mathcal{M}}_k$ there is a distribution-valued Laplacian operator $\Delta$ on ${{\mathbb P}}^1_{{\operatorname{Berk}},v}$. For example, the function $\log^+|z|_v$ on ${{\mathbb P}}^1({{\mathbb C}}_v)$ extends naturally to a continuous real valued function ${{\mathbb P}}^1_{{\operatorname{Berk}},v} \backslash \{ \infty \} \to {{\mathbb R}}$ and $$\Delta \log^+|z|_v = \lambda_v - \delta_{\infty},$$ where $\lambda_v$ is the uniform probability measure on the complex unit circle $\{ |z| = 1 \}$ when $v$ is archimedean and $\lambda_v$ is a point mass at the Gauss point of ${{\mathbb P}}^1_{{\operatorname{Berk}},v}$ when $v$ is non-archimedean. (The sign of the Laplacian $\Delta$ is reversed from that of [@BRbook] or the presentation in [@BD:polyPCF], to match the sign convention from complex analysis.)
A probability measure $\mu_v$ on ${{\mathbb P}}^1_{{\operatorname{Berk}},v}$ is said to have a [*continuous potential*]{} if $\mu_v - \lambda_v = \Delta g$ with $g : {{\mathbb P}}^1_{{\operatorname{Berk}},v} \to {{\mathbb R}}$ continuous. If $\mu_v$ has a continuous potential then there is a corresponding [*Arakelov-Green function*]{} $g_{\mu_v} : {{\mathbb P}}^1_{{\operatorname{Berk}},v} \times {{\mathbb P}}^1_{{\operatorname{Berk}},v} \to {{\mathbb R}}\cup \{ +\infty \}$ which is characterized by the differential equation $\Delta_x g_{\mu_v}(x,y) = \mu_v - \delta_y$ and the normalization $$\label{normalization}
\iint g_{\mu_v}(x,y) d\mu(x) d\mu(y) = 0.$$ Working with homogeneous coordinates, $g_{\mu_v}$ may be computed in terms of a continuous potential function for $\mu_v$, $$G_v: ({{\mathbb C}}_v)^2\setminus \{(0,0)\} \to {{\mathbb R}}$$ satisfying $G_v(z_1, z_2) = g(z_1/z_2) + \log^+|z_1/z_2|_v + \log|z_2|_v$ for some continuous potential $g$ as described above. For $x, y\in {{\mathbb P}}^1({{\mathbb C}}_v)$, the Arakelov-Green function for $\mu_v$ is given by $$\label{explicit g}
g_{\mu_v}(x,y) = -\log|\tilde{x} \wedge \tilde{y}|_v + G_v(\tilde{x}) + G_v(\tilde{y}) + \log {\operatorname{cap}}(K_v),$$ for any choice of lifts $\tilde{x}$ of $x$ and $\tilde{y}$ of $y$ to $({{\mathbb C}}_v)^2$. Here $$K_v = \{(a,b)\in ({{\mathbb C}}_v)^2: G_v(a,b) \leq 0\}.$$ The homogeneous capacity ${\operatorname{cap}}(K_v)$ is exactly what is needed to normalize $g_{\mu_v}$ according to (\[normalization\]). See [@BRbook §10.2] for details, in the setting where $K_v$ is the filled Julia set of a homogeneous polynomial lift of a rational function defined over $k$.
A [*quasi-adelic measure*]{} on ${{\mathbb P}}^1$ (with respect to the field $k$) is a collection ${\mathbb \mu} = \{ \mu_v \}_{v \in M_k}$ of probability measures on ${{\mathbb P}}^{1,an}_v$ with continuous potentials for which the product $$\prod_{v\in M_k} ( r(K_v) / {\operatorname{cap}}(K_v)^{1/2} )^{N_v}$$ converges strongly to a positive real number, where $$r(K_v) = \inf\{r>0: K_v \subset \bar{D}^2_v(0, r)\}$$ is the outer radius of $K_v$. (Strong convergence of a product is, by definition, absolute convergence of the sum of logarithms of the entries.)
If $\rho,\rho'$ are measures on ${{\mathbb P}}^{1,an}_v$, we define the [*$\mu_v$-energy*]{} of $\rho$ and $\rho'$ by $$( \rho, \rho' )_{\mu_v} := \frac{1}{2} \iint_{{{\mathbb P}}^1_{{\operatorname{Berk}},v} \times {{\mathbb P}}^1_{{\operatorname{Berk}},v} \backslash {\rm Diag}} g_{\mu_v}(x,y) d\rho(x) d\rho'(y).$$ Let $S$ be a finite, ${\operatorname{Gal}}({\overline{k}}/k)$-invariant subset of ${{\mathbb P}}^1({\overline{k}})$ with $|S|>1$. For each $v \in {\mathcal{M}}_k$, we denote by $[S]_v$ the discrete probability measure on ${{\mathbb P}}^{1,an}_v$ supported equally on the elements of $S$. For a quasi-adelic measure $\mu =\{\mu_v\}$, the [*$\mu$-canonical height*]{} of $S$ is defined by $$\label{height definition}
{\hat{h}}_{{\mu}}(S) := \frac{|S|}{|S|-1} \sum_{v \in {\mathcal{M}}_k} N_v \cdot ([S]_v,[S]_v)_{\mu_v}.$$ The constants $N_v$ are the same as those appearing in the product formula (\[product formula\]).
The definition of ${\hat{h}}_{\mu}$ differs slightly from that given in [@BD:polyPCF] or [@FRL:equidistribution], but agrees with the definition in [@DWY:Lattes]; the factor of $|S|/(|S|-1)$ is included to match the usual definition of canonical height. See Proposition \[same heights\] and [@BRbook Lemma 10.27]. With this normalization, the function ${\hat{h}}_\mu$ will extend naturally to sets with $|S|=1$, to define a function on ${{\mathbb P}}^1({\overline{k}})$.
The following equidistribution theorem is a modification of the ones appearing in [@BRbook; @FRL:equidistribution]; the proof is given in [@Ye:quasi].
\[quasi-adelic equidistribution\] Let ${\hat{h}}_{{\mu}}$ be the canonical height associated to a quasi-adelic measure $\mu$. Let $\{S_n\}_{n\geq 0}$ be any non-repeating sequence of ${\operatorname{Gal}}({\overline{k}}/k)$-invariant finite subsets of ${{\mathbb P}}^1({\overline{k}})$ for which ${\hat{h}}_{{\mathbb \mu}}(S_n) \to 0$ as $n \to \infty$. Then $[S_n]_v$ converges weakly to $\mu_v$ on ${{\mathbb P}}^1_{{\operatorname{Berk}},v}$ as $n \to \infty$ for all $v \in {\mathcal{M}}_k$.
Bifurcation measures are quasi-adelic.
--------------------------------------
Now we prove that the escape-rate functions $G_v^\pm$ from Theorems \[convergence\] and \[non-archimedean convergence\] are potential functions for a quasi-adelic measure. For each $n\geq 1$, recall from §\[subsection bounds\] that ${\mathcal{M}}_{k, n}$ denotes the set of all non-archimedean places in ${\mathcal{M}}_k$ such that $|\lambda|_v=1$, $|1 + \lambda + \cdots + \lambda^i|_v=1$ for all $i< n$, and $|1+ \lambda + \cdots + \lambda^n|_v<1$.
\[places bound\] For any $\lambda\in {\overline{{{\mathbb Q}}}}\setminus\{0\}$ not a root of unity, or for $\lambda=1$, there exists a constant $C = C(\lambda, k)$ so that $$|{\mathcal{M}}_{k, n}| \leq \, C \, n$$ for all $n\geq 1$, where the places $v\in {\mathcal{M}}_{k, n}$ are counted with multiplicity $N_v$.
We begin with a basic observation from algebraic number theory. Let $m = [k:{{\mathbb Q}}]$, the degree of the field extension. Suppose that $v$ is a place of $k$ extending the $p$-adic absolute value on ${{\mathbb Q}}$ for a prime $p$. Then $$|x|_v<1 \; \implies \; |x|_v\leq \frac{1}{p^{1/m}}$$ for all $x\in k$. Indeed, the absolute value will be bounded by $p^{-1/e}$ where $e$ is the index of ramification of the field $k$ at the prime $p$; and $e\leq m$.
The proof of the lemma follows from the product formula and the above control on the absolute values. There are only finitely many places $v\in {\mathcal{M}}_k$ for which $|1+\lambda+ \cdots + \lambda^i|_v>1$ for some $i\geq 1$. Then there is an $M>1$ such that for any $n\geq 1$, $$\prod_{v\in {\mathcal{M}}_k, |1+\lambda+ \cdots + \lambda^n|_v>1} |1+\lambda+ \cdots + \lambda^n|_v^{N_v} \leq M^n.$$ For each $v\in {\mathcal{M}}_{k,n}$, we have $$|1+\lambda+ \cdots + \lambda^n|_v^{N_v}\leq \frac{1}{2^{\frac{N_v}{m}}}.$$ By the product formula, we see that $$\begin{aligned}
\prod_{v\in{\mathcal{M}}_{k,n}} |1+\lambda+ \cdots + \lambda^n|_v^{N_v} &\geq& \prod_{v, |1+\lambda+ \cdots + \lambda^n|_v<1} |1+\lambda+ \cdots + \lambda^n|_v^{N_v} \\
&=& \prod_{v, |1+\lambda+ \cdots + \lambda^n|_v>1} |1+\lambda+ \cdots + \lambda^n|_v^{-N_v} \\
&\geq& \frac{1}{M^n}.\end{aligned}$$ Therefore, $$\frac{M^n}{2^{\frac{|{\mathcal{M}}_{k,n}|}{m}}}\geq 1,$$ and we conclude that $$|{\mathcal{M}}_{k,n}|\leq n m \log_2 M$$ for any $n$. Set $C = m\log_2 M$.
\[quasi-adelic\] For each $\lambda\in{\overline{{{\mathbb Q}}}}\setminus\{0\}$ not a root of unity, or for $\lambda=1$, the bifurcation measures $\{\mu^+_v\}$ and $\{\mu^-_v\}$ are quasi-adelic.
As continuity of the potentials has already been established (Theorem \[non-archimedean convergence\]), we need only show that the product $$\prod_{v\in {\mathcal{M}}_k} ( r(K_v) / {\operatorname{cap}}(K_v)^{1/2} )^{N_v}$$ converges strongly to a positive real number. That is, we need to show the absolute convergence of the sum $$\sum_{v\in {\mathcal{M}}_k} N_v \log | r(K_v) / {\operatorname{cap}}(K_v)^{1/2} |.$$ Lemma \[global\] implies that $\prod {\operatorname{cap}}(K_v)^{N_v}$ converges strongly to 1. It remains to show the strong convergence of $\prod_v r(K_v)^{N_v}$. Recall the definitions of ${\mathcal{M}}_{k,n}$ and ${\mathcal{M}}_{k,0}$ from §\[subsection bounds\], and recall that the set of places not in ${\mathcal{M}}_{k,0}$ or ${\mathcal{M}}_{k,n}$ for any $n$ is finite. From Lemma \[trivial K\], we have $K_v = \bar{D}^2(0,1)$ for all $v\in{\mathcal{M}}_{k,0}$ so that $r(K_v) = 1$. For $v\in {\mathcal{M}}_{k,n}$, Proposition \[the bounds for K\_v\^+\] shows that $$1 \leq r(K_v) \leq e^{-2\gamma_v(\lambda) - (\log c)/2^{n-1}}.$$ Strong convergence of $\prod_v r(K_v)^{N_v}$ will then follow from convergence of $$\sum_{n=1}^\infty \sum_{v\in {\mathcal{M}}_{k,n}} N_v \left(-2 \gamma_v(\lambda) - \frac{\log c}{2^{n-1}} \right).$$ Lemma \[global\] implies that the sum of the $N_v \gamma_v(\lambda)$ terms will converge. Lemma \[places bound\] shows that $|{\mathcal{M}}_{k,n}| \leq C n$ when counted with multiplicities $N_v$, showing that the sum of the $(\log c)/2^{n-1}$ terms will also converge.
Equivalence of two canonical heights.
-------------------------------------
Now we show that the Call-Silverman heights $\hat{h}^\pm$, defined at the beginning of §\[equidistribution section\], coincide with the $\mu^\pm$-canonical heights associated to the quasi-adelic measures $\{\mu^\pm_v\}$, defined in (\[height definition\]). We begin with a lemma. In other settings (e.g. those in [@BRbook Chapter 10], [@BD:polyPCF] or [@DWY:Lattes]), the analogous conclusion of Lemma \[global\] would be immediate from the product formula, since all but finitely many terms would be 0. Recall that the definition of $\gamma_v(\lambda)$ appears in Lemma \[product formula convergence\] and the formula for the capacity is given in Theorem \[cap\].
\[global\] Fix $\lambda\in{\overline{{{\mathbb Q}}}}\setminus\{0\}$ not a root of unity, or set $\lambda=1$. The sums $$\sum_{v\in{\mathcal{M}}_k} N_v \gamma_v(\lambda) \qquad \mbox{ and } \qquad \sum_{v\in{\mathcal{M}}_k} N_v \log{\operatorname{cap}}(K_v^\pm)$$ converge absolutely, and each sum is equal to 0.
There are only finitely many places $v$ for which there is an integer $i > 0$ with $|1+\lambda+\cdots +\lambda^i|_{{v}}>1$. (For non-archimedean $v$, this condition is equivalent to $|\lambda|_v>1$; there are only finitely many such non-archimedean places.) Therefore, there exists $M>1$ such that $$\prod_{v\in {\mathcal{M}}_k, |1+\lambda+ \cdots + \lambda^n|_v>1} |1+\lambda+ \cdots + \lambda^n|_v^{N_v} \leq M^n$$ for all $n\geq 1$. From the product formula, we see that $$\prod_{v\in {\mathcal{M}}_k, |1+\lambda+ \cdots + \lambda^n|_v<1} |1+\lambda+ \cdots + \lambda^n|_v^{N_v} \geq \frac{1}{M^n},$$ though the number of such places grows with $n$. Consequently, $$\sum_{n=1}^{\infty} \frac{1}{r^n} \sum_{v\in {\mathcal{M}}_k} N_v \left| \log|1+\lambda+\cdots +\lambda^n|_v \right|<\infty$$ for any constant $r>1$. Therefore, we can interchange the order of summation and deduce that $$\sum_v N_v \sum_{n=1}^\infty \frac{1}{r^n} \log|1+\lambda+\cdots +\lambda^n|_v =
\sum_{n=1}^\infty \frac{1}{r^n} \sum_v N_v \log|1+\lambda+\cdots +\lambda^n|_v = 0.$$ For $r=2$, we obtain the desired conclusion for $\gamma_v(\lambda)$, and for $r=4$, for the sum of the capacities.
\[same heights\] For each $\lambda\in{\overline{{{\mathbb Q}}}}\setminus\{0\}$, not a root of unity, or for $\lambda=1$, the Call-Silverman canonical height $\hat{h}^+$ and the $\{\mu^+_v\}$-canonical height $\hat{h}_\mu$ are related by $$\hat{h}_\mu(S) = \frac{2\, [k:{{\mathbb Q}}]}{|S|} \sum_{t\in S} \hat{h}^+(t)$$ for any ${\operatorname{Gal}}({\overline{k}}/k)$-invariant, finite set $S$ with $|S|>1$. Similarly for $\hat{h}^-$ and the $\{\mu^-_v\}$-canonical height.
Fix a finite set $S\subset {\overline{k}}$ which is ${\operatorname{Gal}}({\overline{k}}/k)$-invariant and has at least two elements. We begin by computing the $\{\mu^+_v\}$-canonical height of $S$, from the definition given in (\[height definition\]). For each $t\in S$, we choose a lift $\tilde{t} \in {\overline{k}}^2$ of $t$.
$$\begin{aligned}
{\hat{h}}_{\mu}(S) &=& \frac{|S|}{|S|-1}\sum_{v\in {\mathcal{M}}_k} N_v \cdot ([S]_v, [S]_v)_{\mu_v} \\
&=& \frac{|S|}{|S|-1} \sum_{v\in {\mathcal{M}}_k} \frac{N_v }{2|S|^2}\sum_{x,y\in S, x\not=y} g_{\mu_v}(x,y) \\
&=& \frac{1}{2|S|(|S|-1)} \sum_{x,y\in S, x\not=y} \sum_{v\in {\mathcal{M}}_k} N_v \, ( -\log|\tilde{x} \wedge \tilde{y}|_v + G^+_{v}(\tilde{x}) + G^+_{v}(\tilde{y}) + \log {\operatorname{cap}}(K^+_{\lambda,v}) ) \\
&=& \frac{1}{2|S|(|S|-1)} \sum_{x,y\in S, x\not=y} \; \sum_{v\in {\mathcal{M}}_k} N_v \, (G^+_{v}(\tilde{x}) + G^+_{v}(\tilde{y}) ) \quad \mbox{ by (\ref{product formula}) and Lemma \ref{global}} \\
&=& \frac{1}{2|S|(|S|-1)} \sum_{v\in {\mathcal{M}}_k} N_v \cdot 2\, (|S|-1)\sum_{x\in S} G^+_{v}(\tilde{x}) \\
&=& \frac{1}{|S|} \sum_{v\in {\mathcal{M}}_k} N_v \cdot \sum_{x\in S} G^+_{v}(\tilde{x})\\
&=& \frac{1}{|S|} \cdot \sum_{x\in S}\sum_{v\in {\mathcal{M}}_k} N_v G^+_{v}(\tilde{x})
\end{aligned}$$
Note that the product formula (and homogeneity of $G_v^+$) implies that $\sum_v G_v^+(\tilde{x})$ depends on $x$ but is independent of the choice of $\tilde{x}$. This formula for ${\hat{h}}_\mu$ also shows that it extends to define a function on points of ${{\mathbb P}}^1(k)$, as mentioned in the remark after equation (\[height definition\]).
Recall that the Weil height of $\alpha\in {\overline{k}}$ may be computed as $$h(\alpha) = \frac{1}{[k(\alpha):{{\mathbb Q}}]} \sum_{v\in{\mathcal{M}}_{k(\alpha)}} N_v \log \|(\alpha_1, \alpha_2)\|_v = \frac{1}{[k(\alpha):{{\mathbb Q}}]} \sum_{v\in{\mathcal{M}}_{k(\alpha)}} N_v \log\max\{|\alpha_1|_v, |\alpha_2|_v\}$$ for any homogeneous presentation $(\alpha_1, \alpha_2)\in k(\alpha)^2$ of $\alpha$ (so $\alpha = \alpha_1/\alpha_2$). Further, if $A$ is a ${\operatorname{Gal}}({\overline{k}}/k)$-invariant and finite set, then $$h(A) := \frac{1}{|A|} \sum_{\alpha\in A} h(\alpha) = \frac{1}{[k:{{\mathbb Q}}]} \frac{1}{|A|} \sum_{\alpha\in A} \sum_{v\in{\mathcal{M}}_{k}} N_v \log\max\{|\alpha_1|_v, |\alpha_2|_v\},$$ where $|\cdot|_v$ is a fixed choice of extension to ${\overline{k}}$ of the absolute value $|\cdot|_v$ on $k$.
For each $t\in S$, from the definition in (\[F\_n plus\]), the point $F^+_n(\tilde{t})\in {\overline{k}}^2$ is a homogeneous presentation of $f_t^n(+1)\in {\overline{k}}$. The Galois invariance of $S$ implies that $$\label{limit}
\hat{h}^+(S) = \frac{1}{|S|} \sum_{t\in S} \hat{h}^+(t)
=\frac{1}{[k:{{\mathbb Q}}]} \frac{1}{|S|} \sum_{t\in S} \lim_{n\rightarrow\infty}\frac{1}{2^n}\sum_{v\in \mathcal{M}_k} N_v \log\|F_n(\tilde{t})\|_v.$$ We need to show that we can interchange the limit and the infinite sum over ${\mathcal{M}}_k$. Then Theorems \[convergence\] and \[non-archimedean convergence\] will imply that $$\hat{h}^+(S) = \frac{1}{2 |S| [k:{{\mathbb Q}}]} \sum_{t\in S} \sum_{v\in {\mathcal{M}}_k} N_v G_v^+(\tilde{t}) = \frac{1}{2 [k:{{\mathbb Q}}]}\, \hat{h}_\mu (S),$$ completing the proof of the theorem.
To see that we may interchange the limit and the sum in (\[limit\]), we can use Lemma \[trivial K\] and Proposition \[the bounds for K\_v\^+\]. Outside of a finite number of places, we have $\|\tilde{t}\|_v = 1$ for all $t\in S$. For $v\in {\mathcal{M}}_{k,0}$, Lemma \[trivial K\] states that $\|F_n(\tilde{t})\|_v = 1$ for all $n$ when $\|\tilde{t}\|_v=1$, and so these terms do not contribute to the sum of (\[limit\]). For $v\in {\mathcal{M}}_{k,m}$ with $m\geq 1$, Proposition \[the bounds for K\_v\^+\] implies that $$e^{2^{n-1}(2\gamma_v(\lambda) + (\log c)/2^{m-1})} \leq \|F_n(\tilde{t})\|_v \leq 1$$ for all $n\geq 1$, when $\|\tilde{t}\|_v = 1$. Since the sum of the $\gamma_v(\lambda)$ converges absolutely (Lemma \[global\]) and there aren’t too many elements in each ${\mathcal{M}}_{k,m}$ (Lemma \[places bound\]), we deduce that for any ${\varepsilon}>0$, there is a finite set ${\mathcal{M}}({\varepsilon})$ of places so that $$\frac{1}{2^n} \sum_{v\in {\mathcal{M}}_k \setminus {\mathcal{M}}({\varepsilon})} N_v \left| \log \|F_n(\tilde{t}) \|_v \right| < {\varepsilon}$$ for every $n\geq 1$. This is enough to allow the exchange of the limit with the sum in (\[limit\]).
The proof for $\hat{h}^-$ and the $\{\mu^-_v\}$-canonical height is identical.
Proofs of the equidistribution theorems.
----------------------------------------
We are ready to complete the proofs of the equidistribution theorems.
[*Proof of Theorem \[equidistribution at all places\].*]{} The proof is immediate from Theorem \[quasi-adelic equidistribution\], once we know that the canonical height functions $\hat{h}^+$ and $\hat{h}^-$ are associated to the quasi-adelic measures $\{\mu^\pm_v\}$. That is the content of Propositions \[quasi-adelic\] and \[same heights\].
[*Proof of Theorem \[equidistribution\].*]{} This theorem is an immediate corollary of Theorem \[equidistribution at all places\], because the parameters where the critical point $\pm 1$ has finite orbit coincide with the points of canonical height 0, by [@Call:Silverman Corollary 1.1.1].
Proof of Theorem \[PCF maps\] {#proof section}
=============================
In this section, we complete the proof of Theorem \[PCF maps\]. As discussed in the introduction, one implication is well known; namely, the family of quadratic polynomials ${\mathrm{Per}}_1(0)$ contains infinitely many postcritically-finite maps. Indeed, as explained in Example \[quadratic poly\], the bifurcation locus for the (non-fixed) critical point is nonempty. The conclusion then follows from Lemma \[activity\].
For dynamical reasons, there can be no postcritically-finite maps in ${\mathrm{Per}}_1(\lambda)$ for $0 < |\lambda|\leq 1$. Indeed, if $f$ has a fixed point of multiplier $\lambda$, at least one critical point must have infinite forward orbit, as it is attracted to (or accumulates upon) the fixed point (or on the boundary of the Siegel disk in case the fixed point is of Siegel type). See [@Milnor:dynamics Corollary 14.5]. (Actually, for our proof of Theorem \[PCF maps\], we only need the easier fact that parabolic cycles always attract a critical point with infinite forward orbit [@Milnor:dynamics §10], since all other cases follow from the arguments below.)
Now assume that $\lambda\in{{\mathbb C}}$, $|\lambda| >1$, is chosen so that ${\mathrm{Per}}_1(\lambda)$ contains infinitely many postcritically-finite maps. We shall derive a contradiction. We first observe that $\lambda\in{\overline{{{\mathbb Q}}}}$, as a consequence of Thurston Rigidity.
\[algebraic\] If $f_{\lambda, t}(z) = \lambda z / (z^2 + t z + 1)$ is postcritically finite, then $\lambda, t\in {\overline{{{\mathbb Q}}}}$.
The critical points of a postcritically-finite map satisfy two equations $$f_{\lambda, t}^n(+1) = f_{\lambda, t}^m(+1) \qquad \mbox{ and } \qquad f_{\lambda, t}^r(-1) = f_{\lambda, t}^s(-1)$$ for pairs of integers $n>m\geq0$ and $r>s\geq 0$. Note that these two equations define polynomials in $(\lambda, t)$ with coefficients in ${{\mathbb Q}}$. By Thurston Rigidity (see [@McMullen:families Theorem 2.2]), we know that the set of solutions must be finite (or empty), since there are no flexible Lattès maps in degree 2. Consequently, a solution $(\lambda,t)$ must have coordinates in ${\overline{{{\mathbb Q}}}}$.
Let $k = {{\mathbb Q}}(\lambda)$. Then $k$ is a number field, by Proposition \[algebraic\]. Construct the Call-Silverman canonical height functions $\hat{h}^+_\lambda$ and $\hat{h}^-_\lambda$ on ${\overline{k}}$ as in Section \[equidistribution section\]. Let $\{t_n\}_{n\in{{\mathbb N}}}$ denote a sequence of parameters in ${\mathrm{Per}}_1(\lambda)^{cm}$ for which both critical points have finite orbit. Proposition \[algebraic\] also shows that $t_n\in{\overline{k}}$ for all $n$. Let $S_n$ denote the ${\operatorname{Gal}}({\overline{k}}/k)$-orbit of $t_n$. From [@Call:Silverman Corollary 1.1.1], we see that $$\hat{h}^+_\lambda(S_n) = \hat{h}^-_\lambda(S_n) = 0$$ for all $n$. By Theorem \[equidistribution at all places\], the sequence $\{S_n\}$ is equidistributed with respect to the bifurcation measures $\mu^+_v$ and $\mu^-_v$ at all places $v$ of $k$. It follows that $\mu^+_v = \mu^-_v$ for all $v$, and therefore the quasi-adelic height functions (defined in (\[height definition\])) must coincide. From Proposition \[same heights\], we conclude that $$\hat{h}^+_\lambda = \hat{h}^-_\lambda$$ on ${\mathrm{Per}}_1(\lambda)^{cm}$. Again appealing to [@Call:Silverman Corollary 1.1.1], we find that the critical point $+1$ will have finite orbit for $f_t$ if and only if $-1$ has finite orbit for $f_t$. This conclusion contradicts Proposition \[no synchrony\]. The proof is complete.
In our proof, we have used the full strength of Theorem \[equidistribution at all places\], with the equidistribution at all places of the number field $k = {{\mathbb Q}}(\lambda)$ to deduce that $\hat{h}^+ = \hat{h}^-$. Alternatively, we could have used only Theorem \[equidistribution\], the equidistribution to the (complex) bifurcation measure $\mu^+_\lambda$, and deduced that $\mu^+_\lambda = \mu^-_\lambda$. This conclusion would contradict Theorem \[distinct measures\].
[GHT2]{}
M. Baker and L. DeMarco. . (2011), 1–29.
M. Baker and L. DeMarco. . (2013), 35 pp.
M. Baker and R. Rumely. . (2006), 625–688.
M. Baker and R. Rumely. , volume 159 of [*Mathematical Surveys and Monographs*]{}. American Mathematical Society, Providence, RI, 2010.
M. H. Baker and L.-C. Hsia. . (2005), 61–92.
S. Berker, A. L. Epstein, and K. M. Pilgrim. . (2003), 93–100.
F. Berteloot and T. Gauthier. . .
X. Buff, A. Epstein, and J. Ecalle. . (2013), 42–95.
G. S. Call and J. H. Silverman. . (1993), 163–205.
L. DeMarco. . (2001), 57–66.
L. DeMarco. . (2003), 43–73.
L. DeMarco . .
L. DeMarco and R. Rumely. . (2007), 145–161.
L. DeMarco, X. Wang, and H. Ye. . To appear, [*American Journal of Math.*]{}
A. Douady and J. H. Hubbard. . (1982), 123–126.
R. Dujardin and C. Favre. . (2008), 979–1032.
C. Favre and T. Gauthier. .
C. Favre and J. Rivera-Letelier. . (2006), 311–361.
J. E. Forn[æ]{}ss and N. Sibony. . In [*Complex Potential Theory (Montreal, PQ, 1993)*]{}, pages 131–186. Kluwer Acad. Publ., Dordrecht, 1994.
D. Ghioca, L.-C. Hsia, and T. Tucker. . , [**7**]{}(2013), 701–732.
D. Ghioca, L.-C. Hsia, and T. Tucker. . (2015), 395–427.
L. R. Goldberg and L. Keen. . (1990), 335–372.
M. Herman and J. C. Yoccoz. . (1983), 408–447.
J. Hubbard and P. Papadopol. . (1994), 321–365.
G. M. Levin. . (1989), 94–106.
R. Mañ[é]{}, P. Sad, and D. Sullivan. . (1983), 193–217.
C. McMullen. . (1987), 467–493.
C. McMullen. . Princeton University Press, Princeton, NJ, 1994.
J. Milnor. . (1993), 37–83. With an appendix by the author and Lei Tan.
J. Milnor. , volume 160 of [*Annals of Mathematics Studies*]{}. Princeton University Press, Princeton, NJ, [T]{}hird edition, 2006.
C. L. Petersen. . (1999), 127–141.
J. H. Silverman. . (1998), 41–77.
J. H. Silverman. , volume 30 of [*CRM Monograph Series*]{}. American Mathematical Society, Providence, RI, 2012.
E. Uhre. . (2010), 1327–1330.
H. Ye. . .
[^1]: The research was supported by the National Science Foundation.
---
abstract: 'Mapping in GPS-denied environments is an important and challenging task in the field of robotics. In large environments, mapping can be significantly accelerated by multiple robots exploring different parts of the environment. Accordingly, a key problem is how to integrate the local maps built by different robots into a single global map. In this paper, we propose an approach for the simultaneous merging of multiple grid maps by robust motion averaging. The main idea of this approach is to recover all global motions for map merging from a set of relative motions. Therefore, it first adopts a pair-wise map merging method to estimate relative motions for grid map pairs. To obtain as many reliable relative motions as possible, a graph-based sampling scheme is utilized to efficiently remove unreliable relative motions obtained from the pair-wise map merging. Subsequently, accurate global motions can be recovered from the set of reliable relative motions by motion averaging. Experimental results on real robot data sets demonstrate that the proposed approach can achieve the simultaneous merging of multiple grid maps with good performance.'
author:
- Zutao Jiang$^1$
- 'Jihua Zhu $^{1*}$'
- Yaochen Li$^1$
- Zhongyu Li$^2$
- Huimin Lu$^3$
date: 'Received: date / Accepted: date'
title: Simultaneous merging multiple grid maps using the robust motion averaging
---
Introduction {#intro}
============
Mapping is one of the most fundamental and difficult issues in robotics, and it has attracted more and more attention since the seminal work presented in [@Smith87]. In the past few decades, many effective approaches [@Thrun05] have been proposed to build several kinds of environment maps, such as the grid map [@Giorg07], feature map [@John11], topological map [@Lui12], and hybrid map [@Bibby10], etc. As a kind of probabilistic map, the occupancy grid map does not require extracting any special features from the environment, so it can easily model arbitrary types of environments. Therefore, the grid map is one of the most popular map representations in robot mapping. However, most robot mapping approaches can only build a single map for medium-scale environments. For a large-scale environment, multiple robots should cooperatively explore different parts of the same environment so as to build the grid map with good efficiency and accuracy. The key problem is how to integrate the local grid maps built by multiple robots into a single global map.
To merge a pair of grid maps, Carpin $et$ $al.$ viewed it as an optimization problem [@Carpin05], where the optimal transformation should be searched to align the two grid maps to be merged. Subsequently, two stochastic search approaches were proposed to solve this optimization problem [@Carpin05; @Carpin06]. Similarly, Li $et$ $al.$ proposed a grid map merging approach based on the genetic algorithm [@Li14]. Although these approaches may obtain the optimal rigid transformation, they are all time-consuming due to the nature of exhaustive search. Different from these passive merging approaches, some researchers proposed that when two robots meet randomly or search each other out during mapping, they can perform map merging by determining their relative pose [@Howard06; @Fox06]. Furthermore, Carpin $et$ $al.$ proposed a map merging approach based on the Hough transform [@Carpin08], which can merge grid maps containing line features. Although this approach can efficiently merge grid maps without explicit line feature extraction, its accuracy should be further improved due to the discretization error inherent in the Hough transform. Besides, the grid maps to be merged are required to share a significant overlapping percentage. To address the accuracy issue, Zhu $et$ $al.$ [@Zhu13] viewed grid map merging as a point set registration problem and accomplished it by the trimmed iterative closest point (TrICP) algorithm [@Chet05; @Phils07], where the initial parameters are provided by the map merging approach based on the Hough transform. Meanwhile, Blanco $et$ $al.$ proposed a multi-hypothesis method to provide the initial parameters for the point set registration algorithm so as to merge grid maps [@Blanco13]. By the confirmation of merging hypotheses, it can obtain robust merging results. To address the robustness issue, Saeedi $et$ $al.$ proposed an improved grid map merging approach based on the Hough transform, which can merge a grid map pair even with a low overlapping percentage [@Saee14]. To merge grid maps with different resolutions, Ma $et$ $al.$ put forward an image registration based approach [@Ma16], which can determine whether one of the two maps should be scaled down or up in order to be merged with the other. Although many existing approaches can merge a pair of grid maps with good accuracy and efficiency, few of them can really accomplish simultaneous merging of multiple grid maps.
Suppose there is a set of unordered grid maps, which are built by multiple robots exploring different parts of the same large environment. These grid maps are non-overlapping or partially overlapping with each other. Given the reference map, the goal of multiple grid map merging is to integrate these local grid maps into a global map by calculating the global motion of each grid map with respect to the reference map. To solve this problem, many authors claimed that their pair-wise merging approaches can be directly extended to merge multiple grid maps sequentially. More specifically, the pair-wise merging algorithm can repeatedly merge two grid maps and integrate them into one grid map until all the grid maps are integrated together. However, this kind of approach suffers from the problem of error accumulation. As mentioned in [@Zhu13; @Ma16], the problem of pair-wise grid map merging can be viewed as a pair-wise registration problem [@Besl92; @Zhu114]. Accordingly, the problem of multiple grid map merging can also be viewed as a multi-view registration problem [@Huber03; @Ajmal06; @Zhu16; @Evang14; @Govindu14; @Zhu14; @Fed16]. However, most multi-view registration approaches should be provided with good initial motions in advance [@Evang14; @Govindu14; @Zhu14; @Fed16]; otherwise, they are unable to accomplish the multi-view registration. Besides, although some existing approaches can achieve multi-view registration without initial motions, they are designed to deal with 3D range scans and are always time-consuming [@Huber03; @Ajmal06; @Zhu16]. Therefore, it is desirable to design an automatic multi-view registration approach which can efficiently deal with 2D grid maps. Recently, the motion averaging algorithm has been introduced as an effective means to solve the multi-view registration problem [@Govindu04]. Although this algorithm can effectively accomplish multi-view registration, it should be provided with good initial global motions and reliable pair-wise registration results [@Govindu14; @Govindu06].
Based on the original motion averaging algorithm, this paper proposes an effective grid map merging approach, which can simultaneously merge multiple grid maps without any prior information. As it is difficult to directly calculate the global motions for these grid maps, the proposed approach accomplishes the merging of multiple grid maps in three steps. Firstly, the pair-wise merging method is presented to estimate relative motions for grid map pairs that share a certain amount of overlapping percentage. As the pair-wise merging algorithm may be applied to grid map pairs with low overlapping percentages or even no overlap, some estimated relative motions may be unreliable. Therefore, all grid maps and the estimated relative motions are utilized to construct an undirected graph so as to sample the maximal connected subgraph (MCS). By confirming the sampled MCS with all relative motions, it is easy to calculate the initial global motions and eliminate all unreliable relative motions. Subsequently, the motion averaging algorithm can be adopted to refine the initial global motions so as to obtain accurate global motions for merging multiple grid maps. To illustrate its superiority, the proposed approach is tested on several real robot data sets.
This paper is organized as follows. In the next section, the grid map merging problem is stated and the TrICP algorithm is briefly reviewed. Section 3 proposes our approach for the simultaneous merging of multiple grid maps. In Section 4, the proposed approach is tested and evaluated on three real robot data sets. Finally, some conclusions are drawn in Section 5.
Problem Statement and the TrICP algorithm {#sec:1}
=========================================
This section firstly states the problem of grid map merging. As pair-wise map merging is the basis of multiple map merging, it then briefly reviews the 2D TrICP algorithm for the pair-wise map merging.
Problem Statement {#sec:2}
-----------------
To build a large grid map, mapping can be cooperatively implemented by multiple robots exploring different parts of the environment. Accordingly, the set of local grid maps built by the different robots should be integrated into one global grid map.
Suppose there are two local grid maps built by robots exploring two parts of the same environment. According to [@Carpin08], the goal of pair-wise map merging is to find a relative motion: $${\bf{M}} = \left[ {\begin{array}{*{20}{c}}
{\bf{R}} & {t} \\
0 & 1 \\
\end{array}} \right],$$ with which these two local maps can be properly integrated into a global map. More specifically, ${\bf{R}} \in \mathbb{R}^{2\times 2}$ denotes a rotation matrix determined by the angle $\theta$ and $\vec t \in {\mathbb{R}^2}$ is a translation vector: $${\bf{R}} = \left[ {\begin{array}{*{20}{c}}
{\cos \theta } & { - \sin \theta } \\
{\sin \theta } & {\cos \theta } \\
\end{array}} \right],\quad t = \left[ {\begin{array}{*{20}{c}}
{{ t_x}} \\
{{ t_y}} \\
\end{array}} \right].$$
Given a set of local grid maps, the goal of multiple grid map merging is to integrate these local maps into a single global map. Without loss of generality, the first grid map can be viewed as the reference map. As shown in Fig. \[fig:Multi\], this merging problem is equivalent to calculating a set of global motions ${{\bf{M}}_{global}} = \{ {{\bf{I}}},{{\bf{M}}_2},...,{{\bf{M}}_N}\}$, so that these local maps can be properly merged into a single global map.
The TrICP algorithm {#sec:2}
-------------------
Suppose there are two partially overlapping grid maps, the subject map $P$ and the model map $Q$, where $\xi$ represents their overlapping percentage. By applying the edge extraction algorithm, two edge point sets $P \buildrel \Delta \over = \{ p_i\} _{i = 1}^{{N_p}}$ and $Q \buildrel \Delta \over = \{ q_j\} _{j = 1}^{{N_q}}$ can be extracted from the two grid maps to be merged. Denote by ${P_\xi }$ the subset of $P$ that corresponds to the overlapping part of the subject map with the model map. For pair-wise map merging, the relative motion ${\bf{M}}$ can be estimated by minimizing the following objective function: $$\begin{array}{l}
\mathop {\arg \min }\limits_{\xi ,{\bf{R}}, t} \frac{{\sum\limits_{{p_i} \in {P_\xi }} {\left\| {{\bf{R}}{{p}_i} + t - { q_{c(i)}}} \right\|_2^2} }}{{\left| {{P_\xi }} \right|{\xi ^{1 + \lambda }}}} \\
{\rm{s}}{\rm{.t}}{\rm{.}}\quad \quad {{\bf{R}}^{\rm{T}}}{\bf{R}}{\rm{ = }}{{\rm{I}}_2},\det ({\bf{R}}) = 1 \\
\end{array}
\label{eq:TrICP}$$ where ${{\rm{I}}_2}$ denotes the $2\times 2$ identity matrix, $\lambda$ is a preset parameter and $\left| \cdot \right|$ indicates the cardinality of a set.
Actually, Eq. (\[eq:TrICP\]) can be solved by the TrICP algorithm [@Chet05; @Phils07], which can obtain the optimal relative motion by iterations. Given the initial relative motion ${{\bf{M}}_0}$, three steps are included in each iteration of this algorithm:
\(1) Based on the previous motion, establish the point correspondence for each edge point in the subject map: $${c_k}(i) = \mathop {\arg \min }\limits_{j \in \{ 1,2, \cdots ,{N_q}\} } {\left\| {{{\bf{R}}_{k - 1}}{p_i} + {t_{k - 1}} - {q_j}} \right\|_2}\quad \quad i = 1,2, \cdots {N_p}.$$
\(2) Update the $k$th overlapping percentage and its corresponding subset: $$({\xi _k},{P_{{\xi _k}}}) = \mathop {\arg \min }\limits_\xi \sum\limits_{{p_i} \in {P_\xi }} {\left\| {{{\bf{R}}_{k - 1}}{p_i} + {t_{k - 1}} - {q_{{c_k}(i)}}} \right\|_2^2} /(\left| {{P_\xi }} \right|{(\xi )^{1 + \lambda }})$$
\(3) Calculate the current relative motion: $${{\bf{M}}_k} \buildrel \Delta \over = ({{\bf{R}}_k},{t_k})\mathop { = \arg \min }\limits_{{\bf{R}},t} \sum\limits_{{p_i} \in {P_{{\xi _k}}}} {\left\| {{\bf{R}}{p_i} + t - { q_{c(i)}}} \right\|_2^2}
\label{eq:SVD}$$
Finally, the optimal relative motion can be obtained by repeating these three steps until some stop conditions are satisfied. It should be noted that the TrICP algorithm can only obtain reliable relative motions for grid map pairs that share a certain amount of overlapping percentage [@Zhu114].
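For concreteness, a minimal Python sketch of one possible 2D trimmed-ICP implementation is given below. It is our own illustration rather than the implementation used in this paper: it fixes the overlapping percentage $\xi$ instead of optimizing it together with the motion as in Eq. (\[eq:TrICP\]), and the function names, iteration count and trimming ratio are assumptions.

```python
# Hedged sketch of a 2D trimmed ICP loop (illustrative only).
# P: subject edge points (Np, 2); Q: model edge points (Nq, 2).
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares 2D rigid motion (R, t) mapping src onto dst, cf. Eq. (6)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])   # enforce det(R) = +1
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def tricp_2d(P, Q, R, t, xi=0.7, n_iter=50):
    """Refine an initial motion (R, t); xi is a fixed trimming ratio here."""
    tree = cKDTree(Q)
    n_keep = max(3, int(xi * len(P)))
    for _ in range(n_iter):
        d, idx = tree.query(P @ R.T + t)       # closest model point for each p_i
        keep = np.argsort(d)[:n_keep]          # trimmed subset P_xi
        R, t = rigid_fit(P[keep], Q[idx[keep]])
    return R, t
```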
Merging multiple grid maps
==========================
This section proposes the effective approach for simultaneous merging of multiple grid maps by the robust motion averaging.
Given a set of grid maps, the proposed approach accomplishes grid map merging in the three steps displayed in Fig. \[fig:Flow\]. Firstly, the pair-wise merging method is presented to estimate the relative motions of many grid map pairs. Subsequently, all grid maps and the estimated relative motions are viewed as an undirected graph, where each vertex denotes a grid map and each edge indicates an estimated relative motion between the two vertices it connects. Then, a randomized sampling scheme is utilized to find a maximal connected subgraph (MCS). As there may exist unreliable relative motions obtained from the pair-wise merging step, the sampled MCS should be confirmed by all relative motions. The process of MCS sampling and confirming is repeated for a preset number of iterations so as to search for the optimal MCS and eliminate unreliable relative motions. Finally, accurate global motions can be recovered by applying the 2D motion averaging algorithm to all reliable relative motions.
Pair-wise grid map merging
--------------------------
To estimate the relative motion ${{\bf{M}}_{ij}}$, the pair-wise grid map merging method should be carefully designed. As mentioned before, the TrICP algorithm can be utilized to estimate the relative motion of a map pair that shares a certain amount of overlapping percentage. However, owing to its local convergence property, a good initial relative motion should be provided to the TrICP algorithm; otherwise, it is easily trapped in a local minimum and yields an unreliable relative motion.
For pair-wise map merging, scale-invariant feature transform (SIFT) features [@Lowe04; @Brown07] can be extracted from the two grid maps, respectively. As the SIFT features are invariant to rotation and translation changes, it is easy to establish feature matches between these two grid maps. Due to sensor noise and the limited accuracy of the mapping algorithm, there might exist some false matches. As shown in Fig. \[fig:SIFT\], there are two grid maps $P$ and $Q$, which include overlapping areas. Suppose there is a set of SIFT feature matches $\{ {F_{i,P}},{F_{i,Q}}\} _{i = 1}^N$, which are extracted and matched from these two grid maps. Obviously, if the match $\{ {F_{i,P}},{F_{i,Q}}\}$ is true, the SIFT features ${F_{i,P}}$ and ${F_{i,Q}}$ must correspond to the same location of the environment, and they should satisfy the following equation: $$\left\| {{\bf{R}}{f_{i,P}} + t - {f_{i,Q}}} \right\|_2^2 \approx 0,
\label{eq:Cons}$$ where ${\bf{M}} \buildrel \Delta \over = ({\bf{R}}, t)$ denotes the relative motion of these two grid maps, and ${f_{i,P}}$ and ${f_{i,Q}}$ represent the locations of the SIFT features ${F_{i,P}}$ and ${F_{i,Q}}$, respectively. However, a false feature match does not satisfy this requirement.
According to Eq. (\[eq:SVD\]), two true feature matches are enough to estimate an initial relative motion for the TrICP algorithm. Therefore, the random sample consensus (RANSAC) algorithm can be used to find the true matches. More specifically, two feature matches are randomly selected from all feature matches so as to calculate a guess of the relative motion ${\bf{\tilde M}}$; then Eq. (\[eq:Cons\]) can be used to test all established feature matches and count the number of matches consistent with this guess. The best guess ${{\bf{\tilde M}}_{best}}$ corresponds to the one that receives the support of the most feature matches. To obtain the best guess, random guesses should be repeatedly generated and tested until the preset maximum number of iterations is reached. Finally, the best guess ${{\bf{\tilde M}}_{best}}$ can be viewed as the initial relative motion for the TrICP algorithm so as to refine the relative motion of the two grid maps to be merged.
Based on the above description, the proposed pair-wise map merging method can be summarized as Algorithm 1.
**Input**: Grid maps $P$ and $Q$
**Output**: Estimation of the relative motion ${\bf{\hat M}}$
Extract SIFT features for $P$ and $Q$, respectively;
Establish all the feature matches $\{ {F_{i,P}},{F_{i,Q}}\} _{i = 1}^N$ and set $k$ = 0;
**Do**
$k = k + 1$;
Randomly select two matches ${\{ {F_{i,P}},{F_{i,Q}}\} _{i = m,n}}$;
Calculate the motion guess ${{\bf{\tilde M}}_k}$ by Eq.(6);
Compute ${d_i}= \left\| {{\bf{R}}{{f}_{i,P}} + t - {{f}_{i,Q}}} \right\|_2$ for each feature match;
Count the number ${N_k}$ of feature matches with ${d_i} \le {d_{thr}}$;
**If** ${N_k} > {N_{best}}$
${N_{best}} = {N_k}$;
${{\bf{\tilde M}}_{best}} = {{\bf{\tilde M}}_k}$;
**End**
**While** ($k < 200$)
Extract the edge point sets $P \buildrel \Delta \over = \{ {p_i}\} _{i = 1}^{{N_p}}$ and $Q \buildrel \Delta \over = \{ {q_j}\} _{j = 1}^{{N_q}}$;
Obtain ${\bf{\hat M}}$ by refining ${{\bf{\tilde M}}_{best}}$ with the TrICP algorithm.
Theoretically, two true feature matches are enough to estimate the initial relative motion for the TrICP algorithm. However, if the number of true matches is less than three, there is no way to confirm whether the calculated initial motion is correct. To guarantee robustness, the TrICP algorithm is only applied to map pairs that satisfy ${N_{best}} \ge 4$; otherwise, there is no need to apply the TrICP algorithm. Suppose SIFT features have been extracted from grid maps $P$ and $Q$. To establish the feature matches, we can either search the nearest neighbor in the map $Q$ for each SIFT feature in the map $P$, or vice versa. In practice, these two strategies can obtain different numbers of consistent matches for the two grid maps to be merged. Therefore, during the establishment of feature matches, both strategies should be implemented so as to obtain as many consistent matches as possible.
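As a complement to Algorithm 1, a hedged Python sketch of the RANSAC-style initializer is given below (our own illustration, not the code used for the experiments; the threshold $d_{thr}$, the iteration count and the helper names are assumptions). The two-point motion estimate plays the role of Eq. (6).

```python
# Hedged sketch of the RANSAC initializer in Algorithm 1 (illustrative only).
# matches_P, matches_Q: (N, 2) arrays with the locations f_{i,P}, f_{i,Q}
# of the matched SIFT features in the two grid maps.
import numpy as np

def rigid_from_pairs(src, dst):
    """2D rigid motion estimated from two (or more) correspondences."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])   # enforce det(R) = +1
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def ransac_initial_motion(matches_P, matches_Q, d_thr=2.0, n_iter=200, seed=None):
    rng = np.random.default_rng(seed)
    best = (np.eye(2), np.zeros(2), 0)        # (R, t, number of supporting matches)
    for _ in range(n_iter):
        i, j = rng.choice(len(matches_P), size=2, replace=False)
        R, t = rigid_from_pairs(matches_P[[i, j]], matches_Q[[i, j]])
        d = np.linalg.norm(matches_P @ R.T + t - matches_Q, axis=1)
        n_in = int((d <= d_thr).sum())
        if n_in > best[2]:
            best = (R, t, n_in)
    return best                               # feed (R, t) to TrICP if n_in >= 4
```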
After the application of pair-wise map merging, a set of relative motions can be obtained for the construction of an undirected graph, on which the optimal MCS is sampled and confirmed.
MCS sampling and confirming
---------------------------
Among the estimated relative motions, there may exist some unreliable ones due to the application of the pair-wise merging method to grid map pairs with a low overlapping percentage or even no overlap. Therefore, the optimal MCS should be confirmed so as to calculate the initial global motions and eliminate unreliable relative motions before the motion averaging.
Given a set of relative motions ${\rm{\{ \hat M}}_{ij}^r{\rm{\} }}_{r = 1}^R$, it is easy to construct an undirected graph $G$, where each vertex denotes a grid map and each edge indicates the estimated relative motion of the two grid maps it connects. Accordingly, global motions can be estimated from the MCS, which is composed of $(N-1)$ edges and all $N$ vertices of the graph $G$. As displayed in Fig. \[fig:MCS\], based on the MCS, the global motion guess of the $i$th grid map can be directly set as ${{\bf{\tilde M}}_i} = {{\bf{\hat M}}_{1i}}$, where ${{\bf{\hat M}}_{1i}}$ has been estimated by the pair-wise map merging. Subsequently, the global motion of the $j$th grid map can be calculated as: $${{\bf{\tilde M}}_j} = {{\bf{\tilde M}}_i}{{\bf{\hat M}}_{ij}},
\label{eq:Mj}$$ where ${{\bf{\hat M}}_{ij}}$ has been estimated and included in the relative motion set ${\rm{\{ \hat M}}_{ij}^r{\rm{\} }}_{r = 1}^R$. As the MCS contains a path from the 1st vertex to every other vertex of $G$, Eq. (\[eq:Mj\]) can be used transitively to calculate all the other global motions. The main questions arising here are how to sample an MCS from the graph $G$ and how to confirm the optimal MCS.
To sample an MCS, we first set a null matrix ${\bf{L}}$ of size $N \times N$. As an MCS contains $(N-1)$ edges of the graph $G$, a subgraph ${G}'$ containing all vertices of $G$ can be generated by randomly selecting $(N-1)$ relative motions from the motion set ${\rm{\{ \hat M}}_{ij}^r{\rm{\} }}_{r = 1}^R$. Then we set ${\bf{L}}(i,j) = 1$ if the corresponding relative motion ${{\rm{\hat M}}_{ij}}$ is included in the subgraph ${G}'$. Subsequently, a matrix ${\bf{g}}$ can be calculated as follows: $${\bf{g}} = {({\bf{L}} + {\bf{L}'} + {{\bf{I}}_N})^N}
\label{eq:MCS}$$ where ${{\bf{I}}_N}$ denotes the identity matrix of size $N \times N$ and ${\bf{L}'}$ is the transpose of ${\bf{L}}$. If and only if all the elements of the matrix ${\bf{g}}$ are non-zero, the subgraph ${G}'$ can be viewed as an MCS of the graph $G$.
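This connectivity test is straightforward to implement; the following short Python sketch (our own illustration) checks a sampled subgraph against Eq. (\[eq:MCS\]).

```python
# Sketch (ours) of the connectivity test based on Eq. (eq:MCS): the sampled
# subgraph G' is an MCS candidate iff (L + L' + I_N)^N has no zero entry.
import numpy as np

def is_connected(edges, N):
    L = np.zeros((N, N))
    for i, j in edges:          # one (i, j) pair per sampled relative motion
        L[i, j] = 1.0
    g = np.linalg.matrix_power(L + L.T + np.eye(N), N)
    return bool(np.all(g > 0))
```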
As displayed in Fig. \[fig:MCS\], only $(N-1)$ relative motions are contained in the sampled MCS. Hence, all the other relative motions can be used to confirm the sampled MCS. Because each edge of the optimal MCS corresponds to a reliable relative motion, Eq. (\[eq:Mj\]) can be used transitively to calculate all global motions ${{\bf{\tilde M}}_{global}} = \left\{ {{\bf{I}},{{{\bf{\tilde M}}}_2},...,{{{\bf{\tilde M}}}_m},...,{{{\bf{\tilde M}}}_n},{{{\bf{\tilde M}}}_N}} \right\}$ with good accuracy. Suppose the graph $G$ includes a reliable relative motion ${{\bf{\hat M}}_{mn}}$ which is not contained in the optimal MCS. Since the relative motion ${{\bf{\hat M}}_{mn}}$ is estimated by the pair-wise merging algorithm, it inevitably contains some error; nevertheless, $${{\bf{\hat M}}_{mn}} \approx {{\bf{\tilde M}}_m}^{ - 1}{{\bf{\tilde M}}_n}.$$
However, this relationship no longer holds for unreliable relative motions. In practice, Eq. (10) can be replaced by the following constraint: $$d({{\bf{\hat M}}_{ij}},{{\bf{\tilde M}}_i}^{ - 1}{{\bf{\tilde M}}_j}) = {\left\| {{{{\bf{\hat M}}}_{ij}} - {{\bf{\tilde M}}_i}^{ - 1}{{{\bf{\tilde M}}}_j}} \right\|_F} \le {d_{thr}}$$

where ${d_{thr}}$ denotes a preset distance threshold. Based on this constraint, all the estimated relative motions can be used to confirm the optimal MCS, which is the one that receives the support of the most relative motions in the graph.
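A correspondingly small sketch of this consistency test is given below (our own illustration; the default threshold value is an assumption, not a value reported in this paper).

```python
# Sketch (ours) of the consistency test in Eq. (11).
import numpy as np

def supports(M_ij_hat, M_i, M_j, d_thr=0.5):
    """True if the edge (i, j) supports the global motions M_i, M_j."""
    diff = M_ij_hat - np.linalg.inv(M_i) @ M_j
    return np.linalg.norm(diff, ord='fro') <= d_thr
```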
The randomly sampled MCS is not necessarily optimal due to the existence of unreliable relative motions, so the sampling and confirming of the MCS should be repeated until the preset maximum number of iterations is reached. Accordingly, the proposed MCS sampling and confirming method can be summarized as Algorithm 2.
**Input**: All the relative motions $\{ {{\bf{\hat M}}_{ij}^r} \}_r^R$
**Output**: Global motions ${{\bf{\hat M}}_{global}}$ and reliable relative motions $\{ {{\bf{\hat M}}_{ij}^r} \}_r^{R'}$
${E_{best}} = 0$ and $k=0$;
Construct the graph G based on $\{ {{\bf{\hat M}}_{ij}^r} \}_r^{{R}}$;
**While** $(k \le 10{N^2})$
$k= k+ 1$;
**do**
Sample the subgraph ${{\bf{G}}'}$ from the graph $G$;
Compute the matrix ${\bf{g}}$ denoted by Eq. (\[eq:MCS\]);
**Until** (All elements of ${\bf{g}}$ are non-zeros)
Estimate ${\bf{\tilde M}}_{global}^r$ from the MCS by Eq. (\[eq:Mj\]);
Count the number ${E_r}$ of edges that satisfy $d({{\bf{\hat M}}_{ij}},{{\bf{\tilde M}}_i}^{ - 1}{{\bf{\tilde M}}_j}) \le {d_{thr}}$;
**If** (${E_r} \ge {E_{best}}$)
${E_{best}} = {E_r}$;
${{\bf{\hat M}}_{global}} = {\bf{\tilde M}}_{global}^r$;
Eliminate edges from $\{ {{\bf{\hat M}}_{ij}^r} \}_r^R$, which satisfy $d({{\bf{\hat M}}_{ij}},{{\bf{\tilde M}}_i}^{ - 1}{{\bf{\tilde M}}_j}) > {d_{thr}}$;
**end**
**end**
After the application of MCS sampling and confirming, the initial global motions and a set of reliable relative motions can be obtained for the motion averaging.
Motion Averaging
----------------
Although the global motions have been estimated from the optimal MCS by transitively using Eq. (\[eq:Mj\]), they are coarse due to the accumulated error. Since a set of reliable relative motions has been confirmed by the optimal MCS, these motions can be incorporated to optimize the coarse global motions. The key question arising here is how to use these 2D relative motions so as to refine the coarse global motions. In [@Govindu04], Govindu $et$ $al$. proposed the 3D motion averaging algorithm, which can refine coarse global motions by a set of relative motions. For 2D motions, the original motion averaging algorithm should be adapted.
In fact, the 2D motion ${\bf{M}} \in SE(2)$ belongs to a Lie group, and its matrix logarithm ${\bf{m}} = {\mathop{\rm logm}\nolimits}({\bf{M}})$ belongs to the corresponding Lie algebra $se(2)$: $${\bf{m}} = {\mathop{\rm logm}\nolimits} ({\bf{M}}) = \left[ {\begin{array}{*{20}{c}}
{\bf{\Omega}} & u \\
0 & 0 \\
\end{array}} \right],$$ where $ u = {[{u_1},{u_2}]^T}$ is a vector and ${\bf{\Omega}}$ is a skew-symmetric matrix: $${\bf{\Omega}} {\rm{ = }}\left[ {\begin{array}{*{20}{c}}
0 & {{{\bf{\Omega}} _{12}}} \\
{ - {{\bf{\Omega}} _{12}}} & 0 \\
\end{array}} \right].$$ Accordingly, the Lie algebra element ${\bf{m}} \in se(2)$ can be transformed into another form ${v} = vec({\bf{m}})$, where $vec(\cdot)$ denotes the function which arranges all free parameters of ${\bf{m}}$ into a compact 3D column vector. Conversely, $rvec(\cdot)$ denotes the inverse function of $vec(\cdot)$. By applying the first-order approximation to the Riemannian distance [@Govindu04], the following relationship holds for two nearby motions ${{\bf{M}}_i}$ and ${{\bf{M}}_j}$: $$\begin{array}{l}
{\mathop{\rm logm}\nolimits}({\bf{M}}_i^{ - 1}{{\bf{M}}_j}) \approx {\mathop{\rm logm}\nolimits}({{\bf{M}}_j}) - {\mathop{\rm logm}\nolimits}({{\bf{M}}_i}) \\
\Rightarrow {{\bf{m}}_{ij}} \simeq {{\bf{m}}_j} - {{\bf{m}}_i}, \\
\end{array}
\label{eq:Lie}$$ where the closer these two motions are to each other, the better ${{\bf{m}}_{ij}}$ is approximated by the difference $({{\bf{m}}_j} - {{\bf{m}}_i})$.
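To make the notation concrete, the following hedged Python sketch (ours; it relies on the generic matrix logarithm and exponential from SciPy rather than closed-form $SE(2)$ formulas) implements $vec(\cdot)$ and $rvec(\cdot)$ and numerically illustrates Eq. (\[eq:Lie\]) for two nearby motions.

```python
# Hedged sketch (ours) of the SE(2) Lie-algebra utilities used below.
import numpy as np
from scipy.linalg import expm, logm

def make_motion(theta, tx, ty):
    """Homogeneous 3x3 motion M = [[R, t], [0, 1]] in SE(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

def vec(m):
    """Arrange the free parameters of m = logm(M) into a 3-vector."""
    return np.array([m[0, 1], m[0, 2], m[1, 2]])

def rvec(v):
    """Inverse of vec: rebuild the 3x3 Lie-algebra matrix."""
    return np.array([[0.0, v[0], v[1]], [-v[0], 0.0, v[2]], [0.0, 0.0, 0.0]])

# Numerical check of the first-order relation above: the two printed
# vectors agree up to the approximation error for nearby motions.
Mi, Mj = make_motion(0.02, 0.10, 0.00), make_motion(0.03, 0.12, 0.02)
print(vec(logm(np.linalg.inv(Mi) @ Mj)))
print(vec(logm(Mj)) - vec(logm(Mi)))
```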
Suppose ${{\bf{M}}_i}$ (${{\bf{M}}_j}$) denotes the global motion of the $i$th ($j$th) grid map with respect to the reference map, and ${{\bf{M}}_{ij}}$ indicates the relative motion between the $i$th grid map and the $j$th grid map. They obey the constraint ${{\bf{M}}_{ij}} = {{\bf{M}}_i}^{ - 1}{{\bf{M}}_j}$. For the problem of multiple map merging, the motions ${{\bf{M}}_i}$ and ${{\bf{M}}_j}$ are the variables required to be estimated, while ${{\bf{M}}_{ij}}$ can be approximated by the motion ${{\bf{\hat M}}_{ij}}$ estimated from the pair-wise map merging. In other words, ${{\bf{M}}_{ij}}$ and ${{\bf{\hat M}}_{ij}}$ are very close to each other. Therefore: $$\Delta {{\bf{m}}_{ij}} = {\rm{logm}}({{\bf{M}}_i}{{\bf{\hat M}}_{ij}}{{\bf{M}}_j}^{ - 1}) \approx \Delta {{\bf{m}}_j} - \Delta {{\bf{m}}_i}.$$ As the column vector $v$ is another form of ${\bf{m}}$, the same relationship also holds for the column vectors, i.e. $\Delta {{v}_{ij}} = \Delta {{v}_j} - \Delta {{v}_i}$. All the increment vectors $\{ \Delta {{v}_i}\} _{i = 2}^N$ (the reference map is kept fixed) can be concatenated into one large vector $\Im {\rm{ = [}}\Delta{{v}_2};\Delta{{v}_3}; \cdots ;\Delta{{v}_N}]$. Subsequently, the equation $\Delta {{v}_{ij}} = \Delta {{v}_j} - \Delta {{v}_i}$ can be transformed into the following form: $$\Delta {v_{ij}} = {{\bf{D}}_{ij}}\Im = [ \cdots ,{{\bf{I}}_3}, \cdots , - {{\bf{I}}_3}]\Im
\label{eq:ma}$$ where ${{\bf{I}}_3}$ is the $3 \times 3$ identity matrix and ${{\bf{D}}_{ij}}$ can be viewed as an indicator matrix of size $3 \times (3N - 3)$ with the blocks ${{\bf{I}}_3}$ and $ - {{\bf{I}}_3}$ at the block positions corresponding to $j$ and $i$, respectively. As a set of reliable relative motions has been confirmed by the optimal MCS, it is convenient to concatenate all the increment vectors of the relative motions into one large vector ${\bf{V}} = \left[ {\begin{array}{*{20}{c}}
{\Delta {{v}_{i_1{j_1}}};} & {\Delta {{v}_{i_2 j_2}};} & {...} \\
\end{array}} \right]$. Similarly, all the indicator matrices can be concatenated into one large matrix ${\bf{D}} = \left[ {\begin{array}{*{20}{c}}
{{{\bf{D}}_{i_1{j_1}}};} & {{{\bf{D}}_{i_2 j_2}};} & {...} \\
\end{array}} \right]$. According to Eq. (\[eq:ma\]), the following relationships hold: $${\bf{V}} = {\bf{D}}\Im$$ and $$\Im = {{\bf{D}}^\dag }{\bf{V}},$$ where ${{\bf{D}}^\dag }$ denotes the pseudo-inverse of ${\bf{D}}$. Given the initial global motions $\{ {{\bf{\hat M}}_i}\} _{i = 1}^N$, the increment vectors $\{\Delta {{v}_i}\} _{i = 2}^N$ can be incorporated to refine the global motions as follows: $${{\bf{M}}_i} = {\rm{expm(rvec(}}\Delta {{{v}}_i}{\rm{))}}{{\bf{\hat M}}_i}\quad (i = 2,3,...,N)$$ where $expm(\cdot)$ denotes the matrix exponential. Since Eq. (\[eq:Lie\]) is only a first-order approximation, the motion averaging algorithm cannot obtain a closed-form solution for the global motions, so the refinement is repeated until some stop conditions are satisfied. The sketch of the global motion refining algorithm is shown in Algorithm 3.
**Input**: Initial global motions ${\bf\hat{M}}_{global} = \{ {\rm{I}},{{\bf{\hat M}}_2}, \cdots ,{{\bf{\hat M}}_N}\} $
reliable relative motions $\{ {{\bf{\hat M}}_{ij}^r}\} _{r = 1}^{R'}$
**Output**: Fine global motions ${\bf{M}}_{global} = \{ {\rm{I}},{{\bf{M}}_2}, \cdots ,{{\bf{M}}_N}\} $
**Do**
$\Delta {{\bf{M}}_{ij}} = {{\bf{\hat M}}_i}{{\bf{\hat M}}_{ij}}{{\bf{\hat M}}_j}^{ - 1}$;
$\Delta {{\bf{m}}_{ij}} = \log (\Delta {{\bf{M}}_{ij}})$;
$\Delta {{\rm{v}}_{ij}} = vec(\Delta {{\bf{m}}_{ij}})$;
$\Im = {{\bf{D}}^\dag }{\bf{V}}$;
**for** $i = 2,3, \cdots ,N$
$\Delta {{\bf{m}}_i} = rvec(\Delta {v_i})$;
${{\bf{M}}_i} = \exp (\Delta {{\bf{m}}_i}){{\bf{M}}_i}$;
${{\bf{\hat M}}_i} ={{\bf{M}}_i}$;
**end**
**Until** $\left\| \Im \right\| < \varepsilon $
After the application of motion averaging, accurate global motions can be obtained for the merging of multiple grid maps.
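For reference, here is a compact, hedged Python sketch of the refinement loop of Algorithm 3 (our own illustration; it assumes reasonably accurate initial global motions, indexes the reference map as 0 and represents the reliable relative motions as a dictionary from index pairs $(i,j)$ to $3\times 3$ matrices $\hat{\bf M}_{ij}$).

```python
# Hedged sketch (ours) of SE(2) motion averaging as in Algorithm 3.
import numpy as np
from scipy.linalg import expm, logm

def vec(m):  return np.array([m[0, 1], m[0, 2], m[1, 2]])
def rvec(v): return np.array([[0.0, v[0], v[1]], [-v[0], 0.0, v[2]], [0.0, 0.0, 0.0]])

def motion_averaging(M, rel, eps=1e-8, max_iter=100):
    """Refine global motions M (list of 3x3 matrices, M[0] = I) from rel."""
    N = len(M)
    for _ in range(max_iter):
        D = np.zeros((3 * len(rel), 3 * (N - 1)))
        V = np.zeros(3 * len(rel))
        for r, ((i, j), M_ij) in enumerate(rel.items()):
            V[3*r:3*r+3] = vec(logm(M[i] @ M_ij @ np.linalg.inv(M[j])))
            if j > 0: D[3*r:3*r+3, 3*(j-1):3*j] = np.eye(3)    # +I_3 at block j
            if i > 0: D[3*r:3*r+3, 3*(i-1):3*i] = -np.eye(3)   # -I_3 at block i
        inc = np.linalg.pinv(D) @ V                            # Im = D^+ V
        for i in range(1, N):
            M[i] = expm(rvec(inc[3*(i-1):3*i])) @ M[i]         # M_i <- expm(rvec(dv_i)) M_i
        if np.linalg.norm(inc) < eps:
            break
    return M
```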
Implementation
--------------
Given a set of unordered grid maps, the relative motions of grid map pairs can be estimated by the pair-wise map merging. As there may exist unreliable relative motions, an undirected graph can be constructed from all grid maps and their estimated relative motions. Accordingly, an MCS can be randomly sampled and then confirmed by all estimated relative motions. By repeating the process of MCS sampling and confirming, the optimal MCS can be found to calculate the initial global motions and select all reliable relative motions. Consequently, the initial global motions can be refined by applying the motion averaging algorithm to all reliable relative motions. Based on the refined global motions, the set of grid maps can be integrated into a single global map. The proposed approach is outlined in Algorithm 4.
**Input**: A set of unordered grid maps
**Output**: Merged map
Extract the SIFT features and edge point sets from all grid maps;
Estimate the relative motions for many map pairs by Algorithm 1;
Obtain the initial global motions and reliable relative motions by Algorithm 2;
Acquire the fine global motions by Algorithm 3;
Merge all grid maps based on the fine global motions.
Experimental Results
====================
To verify the performance of the proposed approach, a set of experiments was carried out on three public data sets: Tim.log [@Bailey], Intel.log [@Stachniss15] and Fr079.log [@Stachniss15], which were recorded by mobile robots equipped with a laser range finder and an odometer. All these data sets were recorded in indoor environments. To simulate multi-robot systems, the three data sets were separated into four, eight and eleven parts, respectively. By applying a simultaneous localization and mapping (SLAM) algorithm [@Giorg07; @Parr05], they were used to build grid map sets for testing the proposed approach. These grid map sets are displayed in Figs. \[fig:Tim\], \[fig:Intel\] and \[fig:Fr079\]. Experiments were implemented in MATLAB on a four-core 3.6GHz computer with 8GB of memory.
Validation
----------
To validate the proposed approach, it was first tested on the grid map set built from Fr079.log. As shown in Fig. \[fig:Fr079\], there are eleven unordered grid maps to be merged.
At the beginning, the pair-wise merging method is utilized to calculate the relative motions of grid map pairs. During pair-wise merging, true feature matches are detected for each grid map pair. Fig. \[fig:IntMid-a\] displays the number of detected true feature matches for all grid map pairs. As shown in Fig. \[fig:IntMid-a\], a portion of the map pairs lack enough true feature matches due to low overlapping percentages or even no overlap. For these map pairs, it is difficult to estimate their relative motions. For efficiency, the proposed approach only applies the pair-wise merging method to map pairs that contain at least four detected true feature matches. Given the true feature matches, initial relative motions can be provided to the TrICP algorithm so as to refine the relative motions of these grid map pairs. Fig. \[fig:IntMid-b\] indicates the map pairs for which estimated relative motions are obtained. For several reasons, the pair-wise merging method may still produce some unreliable relative motions.
Subsequently, the undirected graph is constructed from all grid maps and the estimated relative motions. On the constructed graph, it is easy to randomly sample an MCS, which contains $(N-1)$ estimated relative motions. As the number of estimated relative motions is larger than $(N-1)$, the remaining relative motions can be utilized to confirm whether the randomly sampled MCS is the optimal one or not. The process of MCS sampling and confirming is repeated until the preset iteration number is reached. As a result, the optimal MCS can be found together with all the reliable relative motions. Fig. \[fig:IntMid-c\] displays all reliable relative motions and the $(N-1)$ relative motions involved in the optimal MCS. As shown in Fig. \[fig:IntMid-c\], there are some map pairs whose estimated relative motions are unreliable. These unreliable relative motions may be caused by two reasons: (1) falsely detected feature matches can only provide invalid initial relative motions to the TrICP algorithm; (2) even given a moderate initial relative motion, the TrICP algorithm may be trapped in a local minimum due to its local convergence property. To view this in a more intuitive way, Fig. \[fig:Unrm\] displays the merging result of one map pair, which is denoted in gray in Fig. \[fig:IntMid-c\]. As shown in Fig. \[fig:Unrm\], the relative motion of this map pair is clearly incorrect, so it should be eliminated by the optimal MCS confirmation.
As the optimal MCS contains a minimum set of good relative motions, these motions can be employed to estimate initial global motions. Fig. \[fig:ComFr-a\] shows the merging result of multiple maps based on the initial global motions. As shown in Fig. \[fig:ComFr-a\], the initial global motions are not very satisfactory because of accumulated errors. Hence, they are further refined by the motion averaging algorithm. With all reliable relative motions as input, the motion averaging algorithm calculates accurate global motions for the merging of multiple grid maps. Fig. \[fig:ComFr-b\] illustrates the final merging result of the multiple grid maps. As shown in Fig. \[fig:ComFr\], applying the motion averaging algorithm is necessary to achieve a good merging result.
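The motion averaging step itself is not spelled out in this section. The sketch below illustrates one simple first-order Lie-algebraic averaging iteration for 2-D rigid motions, in the spirit of the Lie-algebraic motion averaging of Govindu cited in the references; it ignores higher-order (adjoint) terms, fixes the gauge by pinning the first map, and reuses the edge and global-motion conventions of the previous sketch. It is an assumed, simplified illustration (using numpy and scipy), not the paper's implementation.

```python
import numpy as np
from scipy.linalg import expm, logm

def hat(x):
    """3-vector (vx, vy, omega) -> se(2) matrix."""
    return np.array([[0.0, -x[2], x[0]],
                     [x[2],  0.0, x[1]],
                     [0.0,   0.0, 0.0]])

def vee(X):
    """se(2) matrix -> 3-vector (vx, vy, omega)."""
    return np.array([X[0, 2], X[1, 2], X[1, 0]])

def motion_averaging(G, edges, n_iter=20):
    """Refine global motions G[k] (3x3, frame k -> global) so that
    G[i]^{-1} G[j] agrees with every reliable measured relative motion T_ij."""
    n = len(G)
    for _ in range(n_iter):
        rows, rhs = [], []
        for (i, j, T_ij) in edges:
            pred = np.linalg.inv(G[i]) @ G[j]
            # residual of this edge mapped to the Lie algebra se(2)
            r = vee(np.real(logm(np.linalg.inv(pred) @ T_ij)))
            row = np.zeros((3, 3 * n))
            row[:, 3 * j:3 * j + 3] = np.eye(3)
            row[:, 3 * i:3 * i + 3] = -np.eye(3)
            rows.append(row)
            rhs.append(r)
        # gauge freedom: keep map 0 fixed at the identity
        row0 = np.zeros((3, 3 * n))
        row0[:, 0:3] = np.eye(3)
        rows.append(row0)
        rhs.append(np.zeros(3))
        delta, *_ = np.linalg.lstsq(np.vstack(rows),
                                    np.concatenate(rhs), rcond=None)
        for k in range(n):
            G[k] = G[k] @ expm(hat(delta[3 * k:3 * k + 3]))
    return G
```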
In short, the proposed approach can accomplish the simultaneous merging of multiple grid maps with good accuracy.
Comparison
----------
--------- -------------------------------- --------------------------------
          Sequential merging approach      Proposed approach
Dataset   Obj.       T(s)       Suc.       Obj.       T(s)       Suc.
Tim       1.4242     8.7890     Y          0.5546     5.0969     Y
Intel     16.2770    28.1093    Y          0.4509     20.4698    Y
Fr079     4.7405     27.3639    N          0.2940     19.6659    Y
--------- -------------------------------- --------------------------------
: Performance comparison for map merging of grid maps
\[tab:com\]
To illustrate its superiority, the proposed approach needs to be compared with other related grid map merging approaches. However, to the best of our knowledge, few approaches can really accomplish the simultaneous merging of multiple grid maps. Therefore, the proposed approach is only compared with the sequential merging approach based on the pair-wise merging algorithm presented in [@Blanco13]. Experiments were conducted on the three grid map sets displayed in Figs. \[fig:Tim\], \[fig:Fr079\] and \[fig:Intel\], respectively. As there is no ground truth for the global motions, the error criterion presented in [@Zhu16] is utilized to quantitatively analyze the accuracy of the compared merging approaches. During the experiments, the runtime, merging error and merging status were recorded in Table \[tab:com\]. To view the results in a more intuitive way, Fig. \[fig:ComFr\] shows the merging results of the three datasets for the two compared approaches. As shown in Table \[tab:com\] and Fig. \[fig:ComFr\], the proposed approach obtains more efficient and accurate merging results than the sequential merging approach.
To merge multiple grid maps, the sequential merging approach estimates the relative motion of two grid maps and integrates them into one grid map, which is then merged with another new grid map. The process of estimation and merging is repeated until all grid maps are integrated into one global grid map. Although this approach is straightforward, it suffers from the well-known problem that merging errors accumulate at each step. As the grid map grows, the accumulated errors may lead to the failure of map merging. Therefore, the sequential merging approach cannot always accomplish the merging of multiple grid maps. Besides, this approach requires repeatedly extracting SIFT features from the newly merged grid map, so it is less efficient.
In contrast, the proposed approach only utilizes the pair-wise merging method to estimate the relative motions of several map pairs. Among these estimated relative motions, there may exist unreliable ones. Subsequently, it randomly samples a minimum set of relative motions to estimate initial global motions, which are then checked against all relative motions. By repeating the process of sampling and confirming, it can find the optimal MCS for the estimation of initial global motions and confirm all reliable relative motions. Given the initial global motions, the motion averaging algorithm is applied to all reliable relative motions so as to calculate accurate global motions for the simultaneous merging of multiple grid maps. Hence, the proposed approach can always accomplish the merging of multiple grid maps with good efficiency and accuracy.
Robustness to grid map orders
-----------------------------
To verify its robustness, the proposed approach was tested on the three datasets with different groups of map orders, which were randomly generated. During the experiment, grid maps with different orders were taken as inputs, and four groups of map merging results for each dataset were recorded in Table \[tab:rob\]. To view the results in a more intuitive way, Fig. \[fig:Resu\] displays the merged maps for both Tim.log and Intel.log under one group of grid map orders. As shown in Table \[tab:rob\], the running time of the proposed approach varies with the size of the grid map set. Besides, for each dataset, the proposed approach obtains almost the same merging results for different map orders.
--------- -------- -------------- ------------ -------------- ------------ ------
Dataset   ID       Obj.(Coarse)   Obj.(Fine)   T(s)(Coarse)   T(s)(Fine)   Suc.
Tim       Order1   0.5713         0.5546       4.7392         0.3577       Y
          Order2   0.5715         0.5560       4.3052         0.3587       Y
          Order3   0.5713         0.5576       4.5370         0.3575       Y
          Order4   0.5874         0.5497       4.0893         0.3582       Y
Intel     Order1   0.4877         0.4509       19.1101        1.3597       Y
          Order2   0.4906         0.4470       18.8025        1.3739       Y
          Order3   0.4882         0.4390       19.663         1.3715       Y
          Order4   0.4944         0.4560       18.6518        1.3672       Y
Fr079     Order1   0.3083         0.2940       18.1678        1.4981       Y
          Order2   0.3110         0.2938       18.4086        1.5012       Y
          Order3   0.3084         0.2933       19.4578        1.5157       Y
          Order4   0.3042         0.2936       17.6649        1.4938       Y
--------- -------- -------------- ------------ -------------- ------------ ------
: Map merging results for the grid maps with different orders.
\[tab:rob\]
Before performing multi-map merging, an exhaustive search strategy is utilized to independently estimate the relative motions of map pairs, and the results are utilized to construct an undirected graph over all grid maps. On this constructed graph, a set of MCSs is randomly sampled and then confirmed by all other relative motions. Consequently, no matter what the order of the grid maps is, the proposed approach can always search for the optimal MCS and obtain all the reliable relative motions. Based on the optimal MCS, it is easy to estimate good initial global motions. As shown in Fig. \[fig:Resu\], the initial global motions are not very satisfactory, so they are further refined by the motion averaging algorithm with all the reliable relative motions. As shown in Table \[tab:rob\], the motion averaging only costs a small portion of the merging time but significantly reduces the merging error. Accordingly, the proposed approach always obtains grid map merging results that are independent of the order of the grid maps to be merged. Therefore, the proposed approach is robust to the order of the grid maps to be merged.
Conclusion
==========
This paper is, to the best of our knowledge, the first to propose an effective approach for simultaneously merging grid maps built by multiple robots. Given a set of grid maps to be merged, the approach accomplishes grid map merging in several steps. It first utilizes the pair-wise map merging method to estimate the relative motions of grid map pairs. Because of low overlap percentages, the estimated relative motions of some grid map pairs may be unreliable. Therefore, a minimum set of reliable relative motions is sampled and confirmed by the other relative motions so as to eliminate unreliable relative motions. Then, initial global motions are estimated from the minimum set of reliable relative motions. Since the unreliable relative motions have been discarded, the motion averaging algorithm can be applied to the remaining relative motions so as to obtain accurate global motions for grid map merging. The proposed approach has been implemented and tested on real robot datasets. Experimental results illustrate that the proposed approach can accomplish the simultaneous merging of multiple grid maps with good accuracy, efficiency and robustness.
The proposed approach has some limitations. If one grid map has low overlap percentages with all other grid maps, it is difficult to obtain good pair-wise merging results for this grid map. In this case, there is no way to integrate it into the global grid map. However, we note that most merging approaches proposed so far share this limitation as well. Besides, if the grid maps to be merged have different resolutions, the proposed approach cannot accomplish the merging of multiple grid maps. Our future work will focus on addressing the second limitation.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work is supported by the National Natural Science Foundation of China under Grant nos. 61573273, 61573280 and 61503300.
Smith R, Self M, Cheeseman P, A stochastic map for uncertain spatial relationships, International Symposium on Robotics Research, 467-474(1988)
Thrun S, Probabilistic robotics, 45(3):52-57. MIT Press(2005)
Grisetti G, Stachniss C, Burgard W, Improved Techniques for Grid Mapping With Rao-Blackwellized Particle Filters, IEEE Transactions on Robotics, 23(1), 34-46(2007)
Mullane J, Vo BN, Adams MD, Vo BT, A Random-Finite-Set Approach to Bayesian SLAM, IEEE Transactions on Robotics, 27(2), 268-282(2011)
Wen LDL, Jarvis R, A pure vision-based topological SLAM system, International Journal of Robotics Research, 31(4), 403-428 (2012)
Bibby C, Reid I, A hybrid SLAM representation for dynamic marine environments, ICRA, 58(8), 257-264(2010)
Carpin S, Birk A, Jucikas V, On map merging, Robotics and Autonomous Systems, 53(1), 1-14(2005)
Birk A, Carpin S, Merging occupancy grid maps from multiple robots, Proceedings of IEEE, 94(7), 1384-1397(2006)
Li H, Tsukada M, Nashashibi F, Parent M, Multivehicle Cooperative Local Mapping: A Methodology Based on Occupancy Grid Map Merging, IEEE Transactions on Intelligent Transportation Systems, 15(5), 2089-2100(2014)
Howard A, Parker LE, Sukhatme GS, Experiments with a large heterogeneous mobile robot team: exploration, mapping, deployment and detection, International Journal of Robotics Research, 25(5), 431-447(2005)
Fox D, Ko J, Konolige K, et al, Distributed multirobot exploration and mapping, Proceedings of IEEE, 94(7), 1325-1339(2006)
Carpin S, Fast and accurate map merging for multi-robot systems, Autonomous Robots, 25(3), 305-316(2008)
Zhu J, Du S, Ma L, et al, Merging grid maps via point set registration, International Journal of Robotics and Automation, 28(2), 180-191(2013).
Chetverikov D, Stepanov D, Krsek P, Robust Euclidean alignment of 3D point sets: the trimmed iterative closest point algorithm, Image and Vision Computing, 23(3), 299-309(2005)
Phillips JM, Liu R, Tomasi C, Outlier Robust ICP for Minimizing Fractional RMSD, International Conference on 3-d Digital Imaging and Modeling, 606098, 427-434(2007)
Blanco JL, Gonzálezjiménez J, Fernándezmadrigal JA, A robust, multi-hypothesis approach to matching occupancy grid maps, Robotica, 31(5), 687-701(2013)
Saeedi S, Paull L, Trentini M, et al, Map merging for multiple robots using Hough peak matching, Robotics and Autonomous Systems, 62(10), 1408-1424(2014)
Ma L, Zhu J, Zhu L, et al, Merging grid maps of different resolutions by scaling registration, Robotica, 34(11), 2516-2531(2016)
Besl PJ, McKay ND, A method for registration of 3-D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), 239-256(1992)
Zhu J, Meng D, Li Z, Robust registration of partially overlapping point sets via genetic algorithm with growth operator, IET Image Processing, 8(10), 582-590(2014)
Huber DF, Heber M, Fully automatic registration of multiple 3D data sets, Image and Vision Computing, 21(7), 637-650(2003)
Mian AS, Bennamoun M, Owens R, Three-Dimensional Model-Based Object Recognition and Segmentation in Cluttered Scenes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(10), 1584-1601(2006)
Zhu J, Zhu L, Li Z, Automatic multi-view registration of unordered range scans without feature extraction, Neurocomputing, 171(C), 1444-1453(2016)
Evangelidis GD, Kounades-Bastian D, Horaud R, Psarakis EZ. A generative model for the joint registration of multiple point sets, Proceedings of European Conference on Computer Vision (ECCV), 8695, 109-122(2014)
Govindu VM, Pooja A, On Averaging Multiview Relations for 3D Scan Registration, IEEE Transactions Image Processing, 23(3), 1289-1302(2014).
Zhu J, Surface reconstruction via efficient and accurate registration of multiview range scans, Optical Engineering, 53(10), 102104(2014)
Arrigoni F, Rossi B, Fusiello A, Global Registration of 3D Point Sets via LRS Decomposition. Proceedings of European Conference on Computer Vision (ECCV), 489-501(2016)
Govindu VM, Lie-Algebraic averaging for globally consistent motion estimation, Computer Vision and Pattern Recognition(CVPR), 1, I-684-I-691(2004)
Govindu VM, Robustness in Motion Averaging, Asian Conference on Computer Vision, 3852, 457-466(2006)
Lowe DG, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60(2), 91-110(2004)
Brown M, Lowe DG, Automatic panoramic image stitching using invariant features, International Journal of Computer Vision, 74(1), 59-73(2007).
Stachniss C, Robotics Datasets \[Online\],available:<http://www.ipb.uni-bonn.de/data/>, Jun. 15th, 2017
Bailey T, Robotics Datasets \[Online\],available:<http://www-personal.acfr.usyd.edu.au/tbailey/software/scan_matching.zip>, Jun. 15th, 2017
Eliazar AI, Parr R, Hierarchical Linear/Constant Time SLAM Using Particle Filters for Dense Maps, NIPS, 339-346(2005)
|
---
abstract: 'We have discovered an Ofpe/WN9 (WN11 following Smith et al.) star in the Sculptor spiral galaxy NGC 300, the first object of this class found outside the Local Group, during a recent spectroscopic survey of blue supergiant stars obtained at the ESO VLT. The light curve over a five-month period in late 1999 displays a variability at the 0.1 mag level. The intermediate resolution spectra (3800-7200 Å) show a very close resemblance to the Galactic LBV AG Car during minimum. We have performed a detailed non-LTE analysis of the stellar spectrum, and have derived a chemical abundance pattern which includes H, He, C, N, O, Al, Si and Fe, in addition to the stellar and wind parameters. The derived stellar properties and the He and N surface enrichments are consistent with those of other Local Group WN11 stars in the literature, suggesting a similar quiescent or post-LBV evolutionary status.'
author:
- Fabio Bresolin
- 'Rolf-Peter Kudritzki'
- Francisco Najarro
- Wolfgang Gieren
- 'Grzegorz Pietrzy[ń]{}ski'
title: 'Discovery and quantitative spectral analysis of an Ofpe/WN9 (WN11) star in the Sculptor spiral galaxy NGC 300'
---
Introduction
============
With the new telescopes of the 8-10 meter class stellar astronomy is branching out beyond the Local Group. Ideal targets for our understanding of young stellar populations in distant galaxies are hot massive stars. These objects have strong stellar winds producing broad and easily detectable spectral features distributed over the whole wavelength range from the UV to the IR, and providing unique information on chemical composition, galactic evolution and extragalactic distances. With these perspectives in mind we have recently begun a systematic spectroscopic study of luminous blue stars in galaxies beyond the Local Group, and presented spectral classification and first quantitative results for A supergiants in NGC 3621 (6.7 Mpc, @bresolin01) and NGC 300 (2.0 Mpc, @bresolin02). Here we report on the discovery and detailed quantitative analysis of the first example of an Ofpe/WN9 star outside of the Local Group. We will present a detailed chemical abundance pattern – the first in a galaxy beyond the Local Group – together with stellar parameters and a determination of the stellar wind properties.
The Ofpe/WN9 class was introduced to include objects which show in their spectra high excitation emission lines from He[[ii]{}]{} and N[[iii]{}]{}, typical of Of stars, together with low excitation lines from He[[i]{}]{} and N[[ii]{}]{}, seen in late WN stars (@walborn82, @bohannan89). Objects of this class have so far been identified in the Galaxy (possibly several stars in the Galactic center: @allen90, @najarro97b, @figer99), the LMC (ten stars: @bohannan89), M33 (seven stars: @massey96, @crowther97b) and M31 (one star: @massey98). The importance of Ofpe/WN9 stars as objects in a transitional stage of evolution between O and W-R stars has been recognized in the last decade, and a connection to the LBV class has been suggested, Ofpe/WN9 stars being observed during a quiescent or post-LBV phase (@crowther97, @pasquali97). Indeed, in at least a couple of instances Ofpe/WN9 stars have been observed to turn into LBVs (R127: @stahl83; HDE 269582: @bohannan89b). The LBV AG Car is also known to show an Ofpe/WN9-like spectrum during its hot phase at visual minimum (@stahl86). The discovery of ejected circumstellar nebulae, in some cases measured in a state of expansion, associated with some of the LMC Ofpe/WN9 stars by @nota96 and @pasquali99 brings forward strong evidence for the occurrence of violent episodes of mass loss in these stars, similar to the shell-producing, eruptive outbursts of LBVs.
@smith94 revised and extended the classification of late WN stars to include lower excitation objects. In their scheme, stars showing spectra like AG Car at minimum, where no N[[iii]{}]{} is detected, are reclassified as WN11. Given the spectral resemblance of the NGC 300 star we analyze here to AG Car at visual minimum (see § 4) we will adopt the WN11 classification for the remainder of this Letter. We describe the observational data in § 2, and the photometry in § 3. In § 4 we present our VLT spectra, together with the stellar and wind properties derived from a quantitative spectral analysis.
Observations
============
The data presented here are part of a spectroscopic survey of photometrically selected blue supergiants in the Sculptor spiral galaxy NGC 300 obtained at the VLT with the FORS multiobject spectrograph, and described in detail by @bresolin02. This latter paper presents spectra obtained in September 2000 in the blue spectral region ($\sim$4000-5000 Å) and used for the spectral classification of about 70 supergiants. The emission-line star analyzed here corresponds to star B-16 of the spectral catalog (see Table 2 and finding charts in the aforementioned paper), a rather isolated bright star apparently not associated with any prominent OB association, star cluster or nebulosity. The nearest OB association (from the catalog of @pietrzynski01) lies at a deprojected distance of $\sim$450 pc. Given the estimated age of 3.8 Myr (§4.1) a speed on the order of 100 km/s would be required for the star to reach its current position from this association. This is much larger than the typical stellar random velocities, and comparable to that of Galactic runaway stars. To the best of our knowledge the star is not included in published listings of emission-line or W-R stars in NGC 300 (@schild92, @breysacher97, and references therein), and we will refer to it as star B-16.
In a more recent observing run (September 2001), in order to measure the mass-loss rates of blue supergiants in NGC 300 from the H$\alpha$ line profile, we secured MOS spectra in the red with a 600 lines mm$^{-1}$ grating and 1$''$ slitlets, which, together with the older spectra, provide us with complete coverage at $R\simeq1,000$ resolution of the 3800-7200 Å wavelength range, except for a 50 Å-wide gap centered at 5025 Å. The total exposure time was 13,500s for both the blue and the red spectra, while better seeing conditions favored the blue observations. The mass-loss rates we derive for the whole blue supergiant sample will be discussed in a forthcoming paper, and we concentrate here on the single B-16 star.
Photometry
==========
As part of a multi-epoch photometric study of NGC 300 carried out for the discovery and monitoring of Cepheid variables (@pietrzynski01, see also @pietrzynski02), $BV$ photometry from ESO/MPI 2.2m telescope WFI images is available for B-16 covering nearly a one-half year period. The resulting $B$ and $V$ light curves are shown in Fig. \[curve\]. An apparently irregular variability, on the order of 0.1 mag (peak-to-peak), is detected in both bands, at a significant level above the typical photometric uncertainty of 0.015 mag. The color index remains approximately constant around $B-V=-0.07$. While [*bona fide*]{} LBV variables show large amplitude variability on timescales of years (@stahl01), partly characterizing the outburst phenomena in these objects, smaller-amplitude variability is observed in Ofpe/WN9 stars or LBVs during quiescence (e.g., @stahl84, @sterken97). Assuming the intrinsic color index from the stellar atmosphere models ($E_{B-V}=0.06$), $R=A_V/E_{B-V}=3.1$ and an average magnitude $V=19.00$, we obtain from the adopted distance modulus for NGC 300, $m-M=26.53$ (@freedman01), an absolute magnitude $M_V\simeq-7.72\pm0.13$.
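For clarity, the quoted absolute magnitude follows directly from the numbers given in this paragraph:
$$M_V = V - (m-M) - R\,E_{B-V} = 19.00 - 26.53 - 3.1\times0.06 \simeq -7.72 .$$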
Spectral analysis
=================
The blue and red portions of the rectified VLT optical spectrum are shown in Fig. \[spectrum\]. In the top panel we have also superimposed the spectrum of AG Car during minimum, taken from the @walborn00 atlas, in order to illustrate the remarkable similarity between the two spectra. Line identification is provided for most of the recognizable features, which characterize B-16 as a WN11 star. These include the hydrogen Balmer series lines (pure emission), He[[i]{}]{} lines (together with He[[ii]{}]{}[$\lambda$]{}4686) and mostly low-excitation metal lines such as N[[ii]{}]{}, Fe[[iii]{}]{} and Si[[ii]{}]{}. P-Cygni profiles are seen in most of the He[[i]{}]{} lines, as well as in Si[[iii]{}]{}[$\lambda\lambda$]{}4552-4575, N[[ii]{}]{}[$\lambda\lambda$]{}4601-4643 and Fe[[iii]{}]{}[$\lambda\lambda$]{}5127,5156. The absence of N[[iii]{}]{} is a discriminant factor against an earlier (WN9-10) classification. The WN11 classification is further confirmed by the observed equivalent widths of He[[i]{}]{}[$\lambda$]{}5876 (32Å) and He[[ii]{}]{}[$\lambda$]{}4686 (1Å), compared to those given by @crowther97.
Atmosphere models
-----------------
For the quantitative analysis of the stellar spectrum we have used the iterative, non-LTE line blanketing method presented by @hillier98. The code solves the radiative transfer equation in the co-moving frame for the expanding atmospheres of early-type stars in spherical geometry, subject to the constraints of statistical and radiative equilibrium. Steady state is assumed, and the density structure is set by the mass-loss rate and the velocity field via the equation of continuity. The velocity field (@hillier89) is characterized by an isothermal effective scale height in the inner atmosphere, and becomes a $\beta$ law in the wind (e.g., @lamers96). Better fits are obtained if instead of the standard $\beta$ law a combined 2-$\beta$ law is used, where each value of $\beta$ (given in Table \[observations\]) characterizes the shape of the velocity field in the inner and outer parts of the wind, respectively. We allow for the presence of clumping via a clumping law characterized by a volume filling factor $f(r)$, so that the ‘smooth’ mass-loss rate, $\dot{M}_S$, is related to the ‘clumped’ mass-loss rate, $\dot{M}_C$, through $\dot{M}_S =
\dot{M}_C/f^{1/2}$. The model is then prescribed by the stellar radius $R_*$, the stellar luminosity $L_*$, the mass-loss rate $\dot{M}$, the velocity field $v(r)$, the volume filling factor $f$, and the abundances of the elements considered. The reader is referred to @hillier98 [@hillier99] for a detailed discussion of the code.
The main stellar parameters of our best-fitting model are summarized in Table \[observations\], whereas fits to some of the most important line diagnostics are illustrated in Fig. \[model\]. As can be seen, the majority of the spectral features are well reproduced by the adopted model. The Fe[[iii]{}]{}[$\lambda\lambda$]{}5127,5156 lines, together with He[[ii]{}]{}[$\lambda$]{}4686, provide excellent constraints for the effective temperature. Using the strength of the electron scattering wings of the Balmer lines we derive a clumping factor $f=0.25$. Thus the “smooth” mass loss rate would be twice the value displayed in Table \[observations\]. The parameters describing the stellar wind, i.e., the terminal velocity $v_\infty=325$ kms$^{-1}$, the mass-loss rate $\dot{M_C}=4.6\times10^{-5}$ $M_\odot$yr$^{-1}$, and the wind performance number $\eta=\dot{M}v_\infty/(L_*/c)=0.84$, lie in the range found for Local Group WN11 stars (@crowther97b). We estimate the uncertainty in $L$ and $\dot{M_C}/f^{1/2}$ to be approximately 10% and 30%, respectively. As a further comparison, Fig. \[hr\] shows the location of star B-16 in the H-R diagram, together with additional WN9-11 stars in the Local Group (@crowther95; @smith95; @crowther97b), as well as LBVs at minimum in the Galaxy (P Cyg: @pauldrach90; AG Car: @smith94) and in the LMC (R71: @lennon94).
Table \[abundances\] summarizes the model fractional mass abundances. Uncertainties in the latter range between 0.2 dex (He, N, Si and Fe) and 0.3 dex (C, O and Al). The chemistry of B-16 resembles that of other known Local Group WN9-11 stars. The severe depletion of hydrogen inferred from our analysis (mass fraction $X=0.27$), and the correspondingly large helium surface abundance (H/He = 1.5 by number), are fairly typical for late WN stars (H/He = 0.8-3 by number), and are indicative of heavy mass-loss stripping of the stellar outer layers during the post-main sequence evolution of massive stars. We find a total CNO abundance close to that of Galactic main sequence B stars (@gies92, @kilian92), consistent with a half solar chemical composition in these elements. This agrees with the empirical O abundance derived for an H [ii]{} region located 30 away (region 5 observed by @pagel79). On the other hand, Fe has a roughly solar abundance, suggesting an $\alpha$/Fe ratio different than in the Galaxy. In comparison with the Galactic B star abundance pattern, N is overabundant (by mass) by a factor of 10, while C and O are reduced by factors of $\sim$2 and 6, respectively, although the latter abundances are highly uncertain.
The current evolutionary status of B-16 can be estimated by means of the recent Geneva stellar tracks including rotation (@meynet00) with an initial rotational velocity $v_{ini}=300$ kms$^{-1}$ and Z = 0.02 (Fig. \[hr\]). The star’s position in the H-R diagram approaches the 60 $M_\odot$ track, and an initial mass of $\sim$55 $M_\odot$ can also be derived from the relation between stellar luminosity and initial mass for WNL stars given by @schaerer92. From the 60 $M_\odot$ stellar model more closely matching the observed H and He surface abundances we infer an age of 3.8 Myr and a present mass of $\sim$36 $M_\odot$. At this stage, the predicted N overabundance with respect to the ZAMS abundance is 12 (mostly occurring during the MS phase, as a consequence of rotational mixing), in good agreement with our finding. According to its He- and N-enriched surface chemistry, B-16 might be in a dormant or post-LBV phase of evolution. This is also indicated by the position in the H-R diagram, well above the Humphreys-Davidson limit (@humphreys79), and intermediate between the LBVs AG Car and P Cygni, which have comparable mass-loss rates to B-16, as well as similar wind performance numbers. The NGC 300 star shows higher He enrichment at the surface (n$_{He}$/n$_H\simeq0.7$, versus 0.4-0.5 for the LBVs), and a somewhat faster wind (325 versus 185-250 kms$^{-1}$; @najarro97, @langer94), so that a more advanced (post-LBV) state seems more likely. This picture is in agreement with the conclusions reached by @crowther95 and @crowther97b regarding the close connection between WN9-11 stars and LBVs.
An unexpected reward out of our spectroscopic survey of blue supergiants in NGC 300, the WN11 star B-16 studied here is the first star in this galaxy (and beyond the Local Group) for which we have attempted a detailed quantitative analysis. In the near future the systematic study of the massive stellar population in NGC 300 and similar galaxies within a few Mpc, well within current observational capabilities, will certainly provide new insights into massive stellar evolution and stellar abundances.
We thank J. Hillier for providing his code, and G. Meynet for the Geneva stellar evolutionary tracks including rotation. WG gratefully acknowledges support for this research from the Chilean Center for Astrophysics FONDAP No. 15010003. FN acknowledges Spanish MCYT PANAYA2000-1784 and Ramon y Cajal grants.
Allen, D.A., Hyland, A.R., & Hillier, D.J. 1990, , 244, 706
Bohannan, B., & Walborn, N.R. 1989, , 101, 520
Bohannan, B. 1989, in Physics of Luminous Blue Variables, ed. K. Davidson, A.F.J. Moffat & H.J.G.L.M. Lamers (Dordrecht: Kluwer), p. 35
Bresolin, F., Gieren, W., Kudritzki, R.-P., Pietrzy[ń]{}ski, G., & Przybilla, N. 2002, , 567, 277
Bresolin, F., Kudritzki, R.-P., Mendez, R.H., & Przybilla, N. 2001, , 548, L159
Breysacher, J., Azzopardi, M., Testor, G., & Muratorio, G. 1997, , 326, 976
Crowther, P.A., & Smith, L.J. 1997, , 320, 500
Crowther, P.A., Szeifert, T., Stahl, O., & Zickgraf, F.-J. 1997, , 318, 543
Crowther, P.A., Hillier, D.J., & Smith, L.J. 1995, , 293, 172
Figer, D.F., McLean, I.S., & Morris, M. 1999, , 514, 202
Freedman, W.L., et al. 2001, , 553, 47
Gies, D.R., & Lambert, D.L. 1992, , 387, 673
Hillier, D.J., & Miller, D.L. 1999, , 519, 354
Hillier, D.J., & Miller, D.L. 1998, , 496, 407
Hillier, D.J. 1989, , 347, 392
Humphreys, R.M., & Davidson, K. 1979, , 232, 409
Kilian, J. 1992, , 262, 171
Lamers, H.J.G.L.M., Najarro, F., Kudritzki, R.P., Morris, P.W., Voors, R.H.M., van Gent, J.I., Waters, L.B.F.M., de Graauw, T., Beintema, D., Valentijn, E.A., & Hillier, D.J. 1996, , 315, 229
Langer, N., Hamann, W.-R., Lennon, M., Najarro, F., Pauldrach, A.W.A., & Puls, J. 1994, , 290, 819
Lennon, D.J., Wobig, D., Kudritzki, R.-P., & Stahl, O. 1994, in Evolution of Massive Stars: A Confrontation between Theory and Observation, eds. D. Vanbeveren, W. van Rensbergen & C. de Loore, Space Sci. Rev. 66, 207
Massey, P., Waterhouse, E., & DeGioia-Eastwood, K. 2001, , 119, 2214
Massey, P., & Johnson, O. 1998, , 505, 793
Massey, P., Bianchi, L., Hutchings, J.B., & Stecher, T.P. 1996, , 469, 629
Meynet, G., & Maeder, A. 2000, , 361, 101
Meynet, G., Maeder, A., Schaller, G., Schaerer, D., & Charbonnel, C. 1994, , 103, 97
Najarro, F., Hillier, D.J., & Stahl, O. 1997, , 326, 1117
Najarro. F., Krabbe, A., Genzel, R., Lutz, D., Kudritzki, R.P., & Hillier, D.J. 1997, , 325, 700
Nota, A., Pasquali, A., Drissen, L., Leitherer, C., Robert, C., Moffat, A.F.J., & Schmutz, W. 1996, , 102, 383
Pagel, B.E.J., Edmunds, M.G., Blackwell, D.E., Chun, M.S., & Smith, G. 1979, , 189, 95
Pasquali, A., Nota, A., & Clampin, M. 1999, , 343, 536
Pasquali, A., Langer, N., Schmutz, W., Leitherer, C., Nota, A., Hubeny, I., & Moffat, F.J. 1997, , 478, 340
Pauldrach, A.W.A., & Puls, J. 1990, , 237, 409
Pietrzyński, G., Gieren, W., Fouqué, P., & Pont, F. 2002, , 123, 789
Pietrzyński, G., Gieren, W., Fouqué, P., & Pont, F. 2001, , 371, 497
Schaerer, D., & Maeder, A. 1992, , 263, 129
Schild, H., & Testor, G. 1992, , 266, 145
Smith, L.J., Crowther, P.A., & Willis, A.J. 1995, , 302, 830
Smith, L.J., Crowther, P.A., & Prinja, R.K. 1994, , 281, 833
Stahl, O., Jankovics, I., Kovács, J., Wolf, B., Schmutz, W., Kaufer, A., Rivinius, T., & Szeifert, T. 2001, , 375, 54
Stahl, O. 1986, , 164, 321
Stahl, O., Wolf, B., Leitherer, C., Zickgraf, F.-J., Krautter, J., & de Groot, M. 1984, , 140, 459
Stahl, O., Wolf, B., Klare, G., Cassatella, A., Krautter, J., Persi, P., & Ferrari-Toniolo, M. 1983, , 127, 49
Sterken, C., van Genderen, A.M., & de Groot, M. 1997, in ASP Conf. Series 120, Luminous Blue Variables: Massive Stars in Transition, ed. A. Nota & H.J.G.L.M. Lamers (San Francisco: ASP), 35
Walborn, N.R., & Fitzpatrick, E.L. 2000, , 112, 50
Walborn, N.R. 1982, , 256, 45
|
---
abstract: 'In experiments that are aimed at detecting astrophysical sources such as neutrino telescopes, one usually performs a search over a continuous parameter space (e.g. the angular coordinates of the sky, and possibly time), looking for the most significant deviation from the background hypothesis. Such a procedure inherently involves a “look elsewhere effect”, namely, the possibility for a signal-like fluctuation to appear anywhere within the search range. Correctly estimating the $p$-value of a given observation thus requires repeated simulations of the entire search, a procedure that may be prohibitively expensive in terms of CPU resources. Recent results from the theory of random fields provide powerful tools which may be used to alleviate this difficulty, in a wide range of applications. We review those results and discuss their implementation, with a detailed example applied to neutrino point source analysis in the IceCube experiment.'
address: 'Weizmann Institute of Science, Rehovot 76100, Israel'
author:
- Ofer Vitells
- Eilam Gross
title: 'Estimating the significance of a signal in a multi-dimensional search'
---
look-elsewhere effect ,statistical significance ,neutrino telescope ,random fields
Introduction
============
The statistical significance associated with the detection of a signal source is most often reported in the form of a $p$-value, that is, the probability under the background-only hypothesis of observing a phenomenon as or even more ‘signal-like’ than the one observed by the experiment. In many simple situations, a $p$-value can be calculated using asymptotic results such as those given by Wilks’ theorem [@wilks], without the need of generating a large number of pseudo-experiments. This is not the case, however, when the procedure for detecting the source involves a search over some range, for example, when one is trying to observe a hypothetical signal from an astrophysical source that can be located at any direction in the sky. Wilks’ theorem does not apply in this situation since the signal model contains parameters (i.e. the signal location) which are not present under the null hypothesis. Estimation of the $p$-value could then be performed by repeated Monte Carlo simulations of the experiment’s outcome under the background-only hypothesis, but this approach could be highly time consuming since for each of those simulations the entire search procedure needs to be applied to the data, and to establish a discovery claim at the $5\sigma$ level ($p$-value=$2.87\times10^{-7}$) the simulation needs to be repeated at least $\mathcal{O}(10^7)$ times. Fortunately, recent advances in the theory of random fields provide analytical tools that can be used to address exactly such problems, in a wide range of experimental settings. Such methods could be highly valuable for experiments searching for signals over large parameter spaces, as the reduction in necessary computation time can be dramatic. Random field theoretic methods were first applied to the statistical hypothesis testing problem in [@davies], for some special case of a one dimensional problem. A practical implementation of this result, aimed at the high-energy physics community, was made in [@epjc]. Similar results for some cases of multi-dimensional problems [@adlerhasofer][@adler1] were applied to statistical tests in the context of brain imaging [@worsley]. More recently, a generalized result dealing with random fields over arbitrary Riemannian manifolds was obtained [@adler], opening the door for a plethora of new possible applications. Here we discuss the implementation of these results in the context of the search for astrophysical sources, taking IceCube [@icecube] as a specific example. In section \[sec1\] the general framework of a hypothesis test is briefly presented with connection to random fields. In section \[sec2\] the main theoretical result is presented, and an example is treated in detail in section \[sec3\].
Formalism of a search as a statistical test {#sec1}
===========================================
The signal search procedure can be formulated as a hypothesis testing problem in the following way. The null (background-only) hypothesis $H_0:\mu=0$, is tested against a signal hypothesis $H_1:\mu>0$, where $\mu$ represents the signal strength. Suppose that $\theta$ are some nuisance parameters describing other properties of the signal (such as location), which are therefore not present under the null. Additional nuisance parameters, denoted by $\theta'$, may be present under both hypotheses. Denote by $\mathscr{L}(\mu,\theta,\theta')$ the likelihood function. One may then construct the profile likelihood ratio test statistic [@asimov]
$$\label{eq:q}
q = -2\log \frac{\displaystyle\max_{\theta'}
\mathscr{L}(\mu=0,\theta')}{\displaystyle\max_{\mu,\theta,\theta'}
\mathscr{L}(\mu,\theta,\theta')}$$
and reject the null hypothesis if the test statistic is larger than some critical value. Note that when the signal strength is set to zero the likelihood by definition does not depend on $\theta$, and the test statistic (\[eq:q\]) can therefore be written as
$$\label{eq:qtheta}
q = \displaystyle\max_{\theta \in \mathscr{M}} q(\theta)$$
where $q(\theta)$ is the profile likelihood ratio with the signal nuisance parameters fixed to the point $\theta$, and we have explicitly denoted by $\mathscr{M}$ the $D$-dimensional manifold to which the parameters $\theta$ belong. Under the conditions of Wilks’ theorem [@wilks], for any fixed point $\theta$, $q(\theta)$ follows a $\chi^2$ distribution with one degree of freedom when the null hypothesis is true. When viewed as a function over the manifold $\mathscr{M}$, $q(\theta)$ is therefore a $\chi^2$ *random field*, namely a set of random variables that are continuously mapped to the manifold $\mathscr{M}$. To quantify the significance of a given observation in terms of a $p$-value, one is required to calculate the probability that the maximum of the field lies above some level, that is, the excursion probability of the field:
$$\label{eq:pval}
p\text{-value}=\mathbb{P}[ \displaystyle\max_{\theta \in
\mathscr{M}} q(\theta)
> u].$$
Estimation of excursion probabilities has been extensively studied in the framework of random fields. Despite the seemingly difficult nature of the problem, some surprisingly simple closed-form expressions have been derived under general conditions, which allow one to estimate the excursion probability (\[eq:pval\]) when the level $u$ is large. Such ‘high’ excursions are of course the main subject of interest, since one is interested in estimating the $p$-value for apparently significant (signal-like) fluctuations. We shall briefly describe the main theoretical results in the following section. For comprehensive and precise definitions, the reader is referred to Ref. [@adler].
The excursion sets of random fields {#sec2}
===================================
The excursion set of a field above a level $u$, denoted by $A_u$, is defined as the set of points $\theta$ for which the value of the field $q(\theta)$ is larger than $u$,
$$\label{eq:Au}
A_u = \{\theta \in \mathscr{M} : q(\theta) > u \}$$
and we will denote by $\phi(A_u)$ the *Euler characteristic* of the excursion set $A_u$. For a 2-dimensional field, the Euler characteristic can be regarded as the number of disconnected components minus the number of ‘holes’, as is illustrated in Fig.\[fig:eulerillus\]. A fundamental result of [@adler] states that the expectation of the Euler characteristic $\phi(A_u)$ is given by the following expression:
$$\label{eq:euler}
\mathbb{E}[\phi(A_u)] = \sum_{d=0}^D \mathscr{N}_d \rho_d(u).$$
The coefficients $\mathscr{N}_d$ are related to some geometrical properties of the manifold and the covariance structure of the field. For the purpose of the present analysis however they can be regarded simply as a set of unknown constants. The functions $\rho_d(u)$ are ‘universal’ in the sense that they are determined only by the distribution type of the field $q(\theta)$, and their analytic expressions are known for a large class of ‘Gaussian related’ fields, such as $\chi^2$ with arbitrary degrees of freedom. The zeroth order term of eq. (\[eq:euler\]) is a special case for which $\mathscr{N}_0$ and $\rho_0(u)$ are generally given by
$$\label{eq:zero}
\mathscr{N}_0 = \phi(\mathscr{M}), \hspace{0.5cm} \rho_0(u) =
\mathbb{P}[q(\theta) > u]$$
Namely, $\mathscr{N}_0$ is the Euler characteristic of the entire manifold and $\rho_0(u)$ is the tail probability of the distribution of the field. (Note that when the manifold is reduced to a point, this result becomes trivial).
![Illustration of the Euler characteristic of some 2-dimensional bodies. []{data-label="fig:eulerillus"}](figures/euler_illus.pdf){width="12cm"}
When the level $u$ is high enough, excursions above $u$ become rare and the excursion set consists of a few disconnected hyper-ellipses. In that case the Euler characteristic $\phi(A_u)$ simply counts the number of disconnected components that make up $A_u$. For even higher levels this number is mostly zero and rarely one, and its expectation therefore converges asymptotically to the excursion probability. We can thus use it as an approximation to the excursion probability for large enough $u$ [@jonatan]
$$\label{eq:limit}
\mathbb{E}[\phi(A_u)] \approx \mathbb{P}[ \displaystyle\max_{\theta
\in \mathscr{M}} q(\theta)
> u].$$
The practical importance of Eq. (\[eq:euler\]) now becomes clear, as it allows one to estimate the excursion probabilities above high levels. Furthermore, the problem is reduced to finding the constants $\mathscr{N}_d, d>0$. Since Eq. (\[eq:euler\]) holds for any level $u$, this could be achieved simply by calculating the average of $\phi(A_u)$ at some low levels, which can be done using a small set of Monte Carlo simulations. We shall now turn to a specific example where this procedure is demonstrated.
Application to neutrino source detection {#sec3}
========================================
The IceCube experiment [@icecube] is a neutrino telescope located at the South Pole and aimed at detecting astrophysical neutrino sources. The detector measures the energy and angular direction of incoming neutrinos, trying to distinguish an astrophysical point-like signal from a large background of atmospheric neutrinos spread across the sky. The nuisance parameters over which the search is performed are therefore the angular coordinates $(\theta,\varphi)$[^1]. We follow [@teresa] for the definitions of the signal and background distributions and the likelihood function. The signal is assumed to be spatially Gaussian distributed with a width corresponding to the instrumental resolution of $0.7^\circ$, and the background from atmospheric neutrinos is assumed to be uniform in azimuthal angle. We use a background simulation sample of 67000 events, representing roughly a year of data, provided to us by the authors of [@teresa]. We then calculate a profile likelihood ratio as described in the previous section. Figure \[fig:map\] shows a “significance map” of the sky, namely the values of the test statistic $q(\theta,\varphi)$ as well as the corresponding excursion set above $q=1$. To reduce computation time we restrict here the search space to the portion of the sky at declination angle 27$^{\circ}$ below the zenith; however, all the geometrical features of a full sky search are maintained. Note that the most significant point has a value of the test statistic above 16, which would correspond to a significance exceeding 4$\sigma$ had this point been analyzed alone, that is without the “look elsewhere” effect.
Computation of the Euler characteristic
---------------------------------------
In practice, the test statistic $q(\theta,\varphi)$ is calculated on a grid of points, or ‘pixels’, with spacing sufficiently smaller than the detector resolution. The computation of the Euler characteristic can then be done in a straightforward way, using Euler’s formula:
$$\label{eq:ef}
\phi = V - E + F$$
where $V$, $E$, and $F$ are respectively the numbers of *vertices* (pixels), *edges* and *faces* making up the excursion set. An edge is a line connecting two adjacent pixels and a face is the square made by connecting the edges of four adjacent pixels. An illustration is given in Fig.\[fig:illus2\]. (Although it is most convenient to use a simple square grid, other grid types can be used if necessary, in which case the faces would be of other polygonal shapes).
![Illustration of the computation of the Euler characteristic using formula (\[eq:ef\]). Each square represents a pixel. Here, the number of vertices is 18, the number of edges is 23 and the number of faces is 7, giving $\phi=18-23+7=2$. []{data-label="fig:illus2"}](figures/euler_illus_2.pdf){width="12cm"}
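For concreteness, Eq. (\[eq:ef\]) can be evaluated on a boolean pixel mask with a few array operations. The short Python sketch below (an illustrative implementation, not IceCube analysis code) counts the pixels, the horizontal and vertical adjacencies, and the fully occupied $2\times2$ blocks:

```python
import numpy as np

def euler_characteristic(mask):
    """Euler characteristic of an excursion set given as a boolean pixel mask,
    via V - E + F: vertices are pixels in the set, edges join two adjacent
    pixels (horizontally or vertically), faces are 2x2 blocks of set pixels."""
    m = mask.astype(bool)
    V = m.sum()
    E = (m[:, :-1] & m[:, 1:]).sum() + (m[:-1, :] & m[1:, :]).sum()
    F = (m[:-1, :-1] & m[:-1, 1:] & m[1:, :-1] & m[1:, 1:]).sum()
    return int(V) - int(E) + int(F)

# A ring of pixels (one component with one hole) has Euler characteristic 0;
# adding a separate isolated pixel raises it to 1.
ring = np.ones((3, 3), bool); ring[1, 1] = False
grid = np.zeros((6, 6), bool)
grid[0:3, 0:3] = ring
grid[5, 5] = True
print(euler_characteristic(grid))   # -> 1
```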
Once the Euler characteristic is calculated, the coefficients of Eq. (\[eq:euler\]) can be readily estimated. For a $\chi^2$ random field with one degree of freedom and for two search dimensions, the explicit form of Eq. (\[eq:euler\]) is given by [@adler]:
$$\label{eq:ex1}
\mathbb{E}[\phi(A_u)] = \mathbb{P}[\chi^2 > u] +
e^{-u/2}(\mathscr{N}_1 + \sqrt{u}\mathscr{N}_2).$$
To estimate the unknown coefficients $\mathscr{N}_1,\mathscr{N}_2$ we use a set of 20 background simulations, and calculate the average Euler characteristic of the excursion set corresponding to the levels $u=0,1$ (The number of required simulations depends on the desired accuracy level of the approximation. For most practical purposes, estimating the $p$-value with a relative uncertainty of about 10% should be satisfactory.). This gives the estimates $\mathbb{E}[\phi(A_0)]=33.5 \pm 2$ and $\mathbb{E}[\phi(A_1)]=94.6 \pm 1.3$. By solving for the unknown coefficients we obtain $\mathscr{N}_1 = 33 \pm 2$ and $\mathscr{N}_2
= 123 \pm 3$. The prediction of Eq. (\[eq:ex1\]) is then compared against a set of approx. 200,000 background simulations, where for each one the maximum of $q(\theta,\varphi)$ is found by scanning the entire grid. The results are shown in Figure \[fig:2\]. As expected, the approximation becomes better as the $p$-value becomes smaller. The agreement between Eq. (\[eq:ex1\]) and the observed $p$-value is maintained up to the smallest $p$-value that the available statistics allows us to estimate.
![The prediction of Eq. (\[eq:ex1\]) (dashed red) against the observed $p$-value (solid blue) from a set of 200,000 background simulations. The yellow band represents the statistical uncertainty due to the available number of background simulations. []{data-label="fig:2"}](figures/pval_180k.pdf){width="8cm" height="7cm"}
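The determination of the coefficients amounts to solving a small linear system, since Eq. (\[eq:ex1\]) is linear in $\mathscr{N}_1$ and $\mathscr{N}_2$. The snippet below merely re-derives the quoted values from the two measured averages; the numbers are those given in the text, and the code itself is only an illustration (using numpy and scipy):

```python
import numpy as np
from scipy.stats import chi2

# Average Euler characteristics measured at the two low thresholds u = 0, 1
# from the 20 background simulations quoted in the text.
u = np.array([0.0, 1.0])
phi_mean = np.array([33.5, 94.6])

# Eq. (ex1): E[phi(A_u)] = P[chi2_1 > u] + exp(-u/2) * (N1 + sqrt(u) * N2)
A = np.exp(-u / 2)[:, None] * np.column_stack([np.ones_like(u), np.sqrt(u)])
b = phi_mean - chi2.sf(u, df=1)
N1, N2 = np.linalg.solve(A, b)
print(N1, N2)   # approximately 32.5 and 123, consistent with 33 +/- 2 and 123 +/- 3
```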
Slicing the parameter space
---------------------------
A useful property of Eq. (\[eq:euler\]) that can be illustrated by this example, is the ability to consider only a small ‘slice’ of the parameter space from which the expected Euler characteristic (and hence $p$-value) of the entire space can be estimated, if a symmetry is present in the problem. This can be done using the ‘inclusion-exclusion’ property of the Euler characteristic:
$$\label{eq:slicing}
\phi(A \cup B) = \phi(A) + \phi(B) - \phi(A\cap B).$$
Since the neutrino background distribution is assumed to be uniform in azimuthal angle ($\varphi$), we can divide the sky into $N$ identical slices of azimuthal angle, as illustrated in Figure \[fig:slice\]. Applying (\[eq:slicing\]) to this case, the expected Euler characteristic is given by
$$\label{eq:euler_slice}
\mathbb{E}[\phi(A_u)] =
N\times(\mathbb{E}[\phi(slice)]-\mathbb{E}[\phi(edge)]) +
\mathbb{E}[\phi(0)]$$
where an ‘edge’ is the line common to two adjacent slices, and $\phi(0)$ is the Euler characteristic of the point at the origin (see Figure \[fig:slice\]).
![Illustration of the excursion set in a slice of a sky, showing also an edge (solid blue) and the origin as defined in Eq. (\[eq:euler\_slice\]). In this example $\phi(slice)=6$, $\phi(edge)=2$ and $\phi(0)=0$.[]{data-label="fig:slice"}](figures/slice_demo.pdf){width="8cm"}
We can now apply Eq. (\[eq:euler\]) to both $\phi(slice)$ and $\phi(edge)$ and estimate the corresponding coefficients as was done before, using only simulations of a single slice of the sky. Following this procedure for this example with $N=18$ slices, we obtain from 40 background simulations $\mathscr{N}_1^{slice} = 6\pm0.5$, $\mathscr{N}_2^{slice} =
6.7\pm0.8$ and $\mathscr{N}_1^{edge} = 4.4\pm0.2$. Using (\[eq:euler\_slice\]) this leads to the full sky coefficients $\mathscr{N}_1=28\pm9$ and $\mathscr{N}_2=120\pm14$, a result which is consistent with the full sky simulation procedure. This demonstrates that the $p$-value can be accurately estimated by only simulating a small portion of the search space.
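Using only the numbers quoted above, the combination in Eq. (\[eq:euler\_slice\]) can be checked directly; the one-dimensional edge contributes no $\mathscr{N}_2$ term and the origin enters only the zeroth-order term, so the full-sky coefficients follow from simple arithmetic (illustrative snippet):

```python
N_slices = 18
N1_slice, N2_slice = 6.0, 6.7
N1_edge = 4.4
N1_full = N_slices * (N1_slice - N1_edge)   # -> 28.8
N2_full = N_slices * N2_slice               # -> 120.6
print(N1_full, N2_full)   # consistent with 28 +/- 9 and 120 +/- 14 from the full-sky fit
```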
Summary
=======
The Euler characteristic formula, a fundamental result from the theory of random fields, provides a practical means of estimating a $p$-value while taking into account the “look elsewhere effect”. This result might be particularly useful for experiments that involve a search for signal over a large parameter space, such as high energy neutrino telescopes. While the example considered here deals with a search in a 2-dimensional space, the formalism is general and could in principle be applied to any number of search dimensions. For example, if one is trying to detect a ‘burst’ event then time would constitute an additional search dimension. In such a case the method of slicing could be useful as well, as one will not have to simulate the entire operating period of the detector but only a small ’slice’ of time (provided that the background does not vary in time). Thus, the computational burden of having to perform a very large number of Monte Carlo simulations in order to estimate a $p$-value could be greatly reduced.
Acknowledgments
===============
We thank Jim Braun and Teresa Montaruli for their help in providing us the background simulation data of IceCube which was used to perform this analysis. One of us (E. G.) is obliged to the Minerva Gesellschaft for supporting this work.
[99]{}
S.S. Wilks, *The large-sample distribution of the likelihood ratio for testing composite hypotheses*, Ann. Math. Statist. **9** (1938) 60-62.
R.B. Davies, *Hypothesis testing when a nuisance parameter is present only under the alternative.*, Biometrika **74** (1987), 33-43. E. Gross and O. Vitells, *Trial factors for the look elsewhere effect in high energy physics* , Eur. Phys. J. C, **70** (2010), 525-530. R.J. Adler and A.M. Hasofer, *Level Crossings for Random Fields*,Ann. Probab. **4**, Number 1 (1976), 1-12. R.J. Adler, *The Geometry of Random Fields*, New York (1981), Wiley, ISBN: 0471278440. K.J. Worsley, S. Marrett, P. Neelin, A.C. Vandal, K.J. Friston and A.C. Evans, *A Unified Statistical Approach for Determining Significant Signals in Location and Scale Space Images of Cerebral Activation*, Human Brain Mapping **4** (1996) 58-73. R.J. Adler and J.E. Taylor, *Random Fields and Geometry* , Springer Monographs in Mathematics (2007). ISBN: 978-0-387-48112-8. J. Ahrens et al. and The IceCube Collaboration, Astropart. Phys. **20** (2004), 507.
G. Cowan, K. Cranmer, E. Gross and O. Vitells, *Asymptotic formulae for likelihood-based tests of new physics*, Eur. Phys. J. C **71** (2011) 1544, \[arXiv:1007.1727\].
J. Taylor, A. Takemura and R.J. Adler, *Validity of the expected Euler characteristic heuristic*, Ann. Probab. 33 (2005) 1362-1396.
J. Braun, J. Dumma, F. De Palmaa, C. Finleya, A. Karlea and T. Montaruli, *Methods for point source analysis in high energy neutrino telescopes*, Astropart. Phys. **29** (2008) 299-305 \[arXiv:0801.1604\].
[^1]: The signal model may include additional parameters such as spectral index and time, which we do not consider here for simplicity.
|
---
abstract: |
I discuss a hypothetical historical context in which a Bohm-like deterministic interpretation of the Schrödinger equation could have been proposed before the Born probabilistic interpretation and argue that in such a context the Copenhagen (Bohr) interpretation would probably have never achieved great popularity among physicists.
> \
> \
> Freddie Mercury, “Boh(e)mian Rhapsody"
author:
- Hrvoje Nikolić
title: 'Would Bohr be born if Bohm were born before Born?'
---
Introduction
============
The Copenhagen interpretation of quantum mechanics (QM) was the first interpretation of QM that achieved a significant recognition among physicists. It was proposed very early by the fathers of QM, especially Bohr and Heisenberg. Later, many other interpretations of QM were proposed, such as statistical ensemble interpretation, Bohm (pilot wave) interpretation, Nelson (stochastic dynamics) interpretation, Ghirardi-Rimini-Weber (spontaneous collapse) interpretation, quantum logic interpretation, information theoretic interpretation, consistent histories interpretation, many-world (relative state) interpretation, relational interpretation, etc. All these interpretations seem to be consistent with experiments, as well as with the minimal pragmatic “shut-up-and-calculate interpretation". Nevertheless, apart from the minimal pragmatic interpretation, the Copenhagen interpretation still seems to be the dominating one. Is it because this interpretation is the simplest, the most viable, and the most natural one? Or is it just because of the inertia of pragmatic physicists who do not want to waste much time on (for them) irrelevant interpretational issues, so that it is the simplest for them to (uncritically) accept the interpretation to which they were exposed first? I believe that the second answer is closer to the truth. To provide an argument for that, in this essay I argue that if some historical circumstances had been only slightly different, then it would have been very likely that the Bohm deterministic interpretation would have been proposed and accepted first, and consequently, that this interpretation would have been dominating even today.[@foot0] (In fact, if the many-world interpretation taken literally is correct, then such an alternative history of QM is not hypothetical at all. Instead, it is explicitly realized in many branches of the whole multi-universe containing a huge number of parallel universes.) For the sake of easier reading, in the next section I no longer use the conditional, but present an alternative hypothetical history of QM as if it really happened, trying to argue that such an alternative history was actually quite natural.[@foot1] Although a prior knowledge on the Bohm deterministic interpretation is not required here, for readers unfamiliar with this interpretation I suggest to read also the original paper [@bohm] or a recent pedagogic review [@tumul].
An alternative history of quantum mechanics
===========================================
When Schrödinger discovered his wave equation, the task was to find an interpretation of it. The most obvious interpretation – that electrons are simply waves – was not consistent because it was known that electrons behave as pointlike particles in many experiments. Still, it was known that electrons also obey some wave properties. What was the most natural interpretation of that? Of course, the notion of “naturalness" is highly subjective and strongly depends on personal knowledge, prejudices, and current paradigms. At that time, classical deterministic physics was well understood and accepted, so it was the most natural to try first with an interpretation that maximally resembles the known principles of classical mechanics. In particular, classical mechanics contains only real quantities, so it was very strange that the Schrödinger equation describes a complex wave. Consequently, it was natural to rewrite the Schrödinger equation in terms of real quantities only. The simplest way to do this was to write the complex wave function $\psi$ in the polar form $\psi=R e^{i\phi}$ and then to write the complex Schrödinger equation as a set of two (coupled) real equations for $R({\bf x},t)$ and $\phi({\bf x},t)$. However, such a simple mathematical manipulation did not immediately reveal the physical interpretation of $R$ and $\phi$. Fortunately, a physical interpretation was revealed very soon, after an additional mathematical transformation $$\phi({\bf x},t)=\frac{S({\bf x},t)}{\hbar} ,$$ where $S$ is some new function. The Schrödinger equation for $\psi$ rewritten in terms of $R$ and $S$ turns out to look remarkably similar to something very familiar from classical mechanics. One equation looks similar to the classical Hamilton-Jacobi equation for the function $S({\bf x},t)$, differing from it only by a transformation $$V({\bf x},t) \rightarrow V({\bf x},t) + Q({\bf x},t) ,$$ where $V$ is the classical potential and $$\label{Q}
Q \equiv -\frac{\hbar^2}{2m} \frac{\nabla^2 R}{R} .$$ The other equation turns out to look exactly like the continuity equation $$\label{cont}
\frac{\partial\rho}{\partial t}+\nabla(\rho {\bf v})=0$$ for the density $\rho \equiv R^2$, with the Hamilton-Jacobi velocity $$\label{v}
{\bf v}=\frac{\nabla S}{m} .$$ Thus, at that moment, the most natural interpretation of the phase of the wave function seemed to be a quantum version of the Hamilton-Jacobi function that determines the velocity of a pointlike particle. But what was $\rho$? Since one of the equations looks just like the continuity equation, at the beginning it was proposed that $\rho$ was the density of particles. That meant that the Schrödinger equation described a fluid consisting of a huge number of particles. The forces on these particles depended not only on the classical potential $V$, but also on the density $\rho$ through the quantum potential (\[Q\]) in which $R=\sqrt{\rho}$.[@foot2]
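Written out explicitly (a standard reconstruction from the definitions above, added here for the reader's convenience), the two real equations are $$\frac{\partial S}{\partial t}+\frac{(\nabla S)^2}{2m}+V+Q=0 , \qquad
\frac{\partial (R^2)}{\partial t}+\nabla\left(R^2\,\frac{\nabla S}{m}\right)=0 ,$$ where the first is the quantum Hamilton-Jacobi equation with the quantum potential (\[Q\]) added to $V$, and the second is the continuity equation (\[cont\]) with $\rho=R^2$ and the velocity (\[v\]).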
Although the interpretation above seemed appealing theoretically, it was very soon realized that it was not consistent with experiments. It could not explain why, in experiments, only one localized particle at a single position was often observed. Thus, $\rho$ could not be the density of a fluid. It seemed that $\rho$ (or $R$) must be an independent continuous field, qualitatively similar to an electromagnetic or a gravitational field, that, like those fields, influences the motion of a particle. But why does $\rho$ satisfy the continuity equation, and what is the meaning of this? Physicists could not answer this question, but they were able to identify a physical consequence of the continuity equation. To see this, assume that one studies a statistical ensemble of particles with the probability distribution of particle positions equal to some function $p({\bf x},t)$. Assume also that, for some reason, the initial distribution at $t=0$ coincides with the function $\rho$ at $t=0$. Then the continuity equation implies that $$\label{p=rho}
p({\bf x},t)=\rho({\bf x},t)$$ at [*any*]{} $t$. But why should these two functions coincide initially? Although nobody was able to present an absolutely convincing explanation, at least some heuristic arguments were found, based on statistical reasoning.[@foot3] This suggested that, in typical experiments, $\rho$ could be equal to the measured probability density of particle positions. Indeed, it turned out that such a prediction agrees with experiments. Since this prediction was derived from the natural assumption that each particle has the velocity determined by (\[v\]), it was concluded that experiments confirm (\[v\]). Thus, this interpretation became widely accepted and received the status of an “orthodox" interpretation.[@foot4]
However, not everybody was satisfied with this interpretation. In particular, Born objected that there was no direct experimental evidence for the particle velocities as given by (\[v\]), and he therefore questioned this assumption. As an alternative, he proposed a different interpretation. In his interpretation, the equality (\[p=rho\]) was a fundamental postulate. Thus, he avoided the need for the particle velocities given by (\[v\]). However, his interpretation was not widely accepted among physicists. The arguments against the Born interpretation were the following: First, this [*ad hoc*]{} postulate could not explain [*why*]{} the probability density was given by $\rho$. Second, a theory in which the probabilistic interpretation was one of the fundamental postulates was completely against all current knowledge about the fundamental laws of physics. The classical deterministic laws were well established, so it was more natural to accept a deterministic interpretation of QM that differs from classical mechanics less radically. Third, it was observed that if one used the arguments of Born to argue that QM is to be interpreted probabilistically, then one could use analogous arguments to argue that even classical mechanics should be interpreted probabilistically,[@foot5] which seemed absurd.
Although the Born purely probabilistic interpretation was not considered very appealing, mainly owing to the overwhelming mechanistic view of physics of that time, it was appreciated by some positivists that such an interpretation should not be excluded. The Born interpretation was quite radical, but still acceptable as a possible alternative. Indeed, his interpretation seemed to fit well with a mathematically more abstract formulation of QM (which started with the Heisenberg matrix formulation of QM proposed even before the Schrödinger equation, and was further developed by Dirac, who formulated the transformation theory, and by von Neumann, who developed the Hilbert-space formulation), in which Eq. (\[v\]) did not seem very natural. However, one version of the Born interpretation was much more radical, in fact too radical to be taken seriously. This new interpretation was suggested by Bohr. In fact, Bohr was already known in the physics community for proposing the famous Bohr model of the hydrogen atom, in which electrons move circularly at discrete distances from the nucleus. Now a much better model of the hydrogen atom (the one based on the Schrödinger equation and particle trajectories that it predicts) was known, so the Bohr model was no longer considered that important, although it still enjoyed a certain respect. Since the model by which Bohr achieved respect among physicists was based on particle trajectories, it was really a surprise when Bohr in his new interpretation proposed that particle trajectories did not exist at all. But this was not the most radical part of his interpretation. The most radical part was the following: he proposed that it did not even make sense to talk about particle properties unless these properties were measured. An immediate argument against such a proposal was the well-established classical mechanics, in which particle properties clearly existed even without measurements. Bohr argued that there was a separation between the microscopic quantum world and the macroscopic classical world, so that the measurement-independent properties made sense only in the latter. However, Bohr never explained how and where this separation took place. In his interpretation, he introduced no new equation. His arguments were considered pure philosophy, not physics. Although his arguments were partially inspired by the widely accepted Heisenberg uncertainty relations, the orthodox interpretation of the uncertainty relations (expressing practical limitations on experiments, rather than properties of nature itself) seemed more viable. Thus, it is no surprise that his interpretation was never taken seriously; it was soon forgotten. (Much later it was found that the mechanism of decoherence through the interaction with the environment provides a sort of dynamical separation between “classical" and “quantum" worlds, but this separation was not exactly what Bohr suggested.[@foot6])
Another prominent physicist who criticized the orthodox interpretation of QM was Einstein. He liked the determinism of orthodox QM (despite the fact that he made contributions to the probabilistic descriptions of quantum processes such as spontaneous emission and the photoelectric effect), but there was something else that was bothering him. To see what, consider a system containing $n$ particles with positions ${\bf x}_1,\ldots,{\bf x}_n$ described by a single wave function $\psi({\bf x}_1,\ldots,{\bf x}_n,t)$. The $n$-particle analog of (\[Q\]) is a nonlocal function of the form $Q({\bf x}_1,\ldots,{\bf x}_n,t)$. In general it is a truly nonlocal function, i.e., not of the form $Q_1({\bf x}_1,t)+\ldots +Q_n({\bf x}_n,t)$, provided that the system exhibits entanglement, i.e., that the wave function is not of the form $\psi_1({\bf x}_1,t) \cdots \psi_n({\bf x}_n,t)$. In the orthodox interpretation, such a nonlocal $Q$ is interpreted as a nonlocal potential that determines forces on particles that depend on the [*instantaneous*]{} positions of all the other particles. This means that entangled spatially separated particles must communicate instantaneously. Einstein argued that this is in contradiction with his theory of relativity, because he derived that no signal can exceed the velocity of light. Orthodox quantum physicists admitted that this is a problem for their interpretation, but soon they found a solution. They observed that the geometric formulation of relativity does not really exclude superluminal velocities, unless some additional properties of matter are assumed. Thus, they introduced the notion of [*tachyons*]{},[@foot7] hypothetical particles that can move faster than light and still obey the geometrical principles of relativity. Einstein admitted that tachyons are consistent with relativity, but he objected that this is not sufficient to solve the problem of instantaneous communication. If the communication is instantaneous, then it can be so only in one reference frame. This means that there must be a preferred reference frame with respect to which the communication is instantaneous, which again contradicts the principle of relativity according to which all reference frames should enjoy the same rights. At that time orthodox quantum physicists understood relativity sufficiently well to appreciate that Einstein was right. On the other hand, the theory of relativity was also sufficiently young at that time, so that it did not seem too heretical to modify or reinterpret the theory of relativity itself. It was observed that with a preferred foliation of spacetime specified by a fixed timelike vector $n^{\mu}$ one can still write all quantum equations in a relativistically covariant form. It was also observed that, by using an analogy with nonrelativistic fluids, relativity may correspond only to a low-energy approximation of a theory with a fundamental preferred time.[@foot8] Thus, it was clear that the preferred foliation of spacetime does not necessarily contradict the theory of relativity (both special and general), provided that the theory of relativity is viewed as an effective theory. At the beginning, Einstein was not very happy with the idea that his theory of relativity might not be as fundamental as he thought. Nevertheless, he finally accepted that QM is irreducibly nonlocal when he was confronted with the rigorous mathematical proof that, in QM, the assumption of reality existing even without measurements is not compatible with locality.[@bell]
A new crisis for orthodox QM arose with the development of quantum field theory (QFT). At the classical level, fields are objects very different from particles. As QFT seemed to be a theory more fundamental than particle QM, it seemed natural to replace the quantum particle trajectories with the quantum field trajectories (or more precisely, time-dependent field configurations). However, there were two problems with this. First, from the trajectories of fields, it was not possible to reproduce the trajectories of particles. Second, the idea of field trajectories did not seem to work for fermionic (anticommuting) fields. Still, the agreement with experiments was not ruined, as all measurable predictions of QFT were actually predictions about the properties of particles. Therefore, it seemed natural to interpret QFT not as a theory of new more fundamental objects (the fields), but rather as a more accurate effective theory of particles, in which fields play only an auxiliary role. Indeed, the divergences typical of QFT reinforced the view that QFT cannot be the final theory, but only an effective one. As quantum physics made further progress, it became clear that many theories that were considered fundamental at the beginning turned out to be merely effective theories. This reinforced the dominating paradigm according to which relativity is also an effective, approximate theory. Nevertheless, some relativists still believed that the principle of relativity was a fundamental principle. Consequently, they were not satisfied with the orthodox interpretation of QM that requires a preferred foliation of spacetime. Instead, they tried to interpret QM in a completely local and relativistic manner. To do that, they were forced to introduce some rather radical views of nature. In one way or another, they were forced to assume that a single objective reality did not exist.[@foot9] However, such radical interpretations were not much appreciated by mainstream physicists. It did not seem reasonable to sacrifice one of the cornerstones not only of physics but of the whole of science (the existence of objective reality) just to save one relatively new theoretical principle (the principle of locality and relativity) for which there existed good evidence that it could be only an approximate principle.[@foot10] Therefore, the deterministic interpretation of QM survived as the dominating paradigm, while the probabilistic rules of QM, used widely in practical phenomenological calculations, were considered emergent, not fundamental. In fact, it has been found that, in some cases, the probabilistic rules cannot be derived in a simple way, so that one is forced to use the fundamental fully deterministic theory explicitly.[@foot11]
Conclusion
==========
In this paper, I have argued that, in the context of scientific paradigms that were widely accepted when the Schrödinger equation was discovered, it was much more natural to propose and accept the Bohmian deterministic interpretation than the Copenhagen interpretation. I have also argued that, if that had really happened, then the Bohmian interpretation (or a minor modification of it) would have been a dominating view even today. In other words, the answer to the allegoric tongue-twisting question posed in the title of this paper is – probably no! This, of course, does not prove that the Bohmian interpretation is more likely to be correct than some other interpretation. But the point is that it really seems surprising that the history of QM chose a path in which the Copenhagen interpretation became much more accepted than the Bohmian one. I leave it to the sociologists and historians of science to explain why the history of QM chose the path that it did.
Acknowledgments {#acknowledgments .unnumbered}
===============
The author is grateful to anonymous referees for numerous suggestions for improvements. This work was supported by the Ministry of Science of the Republic of Croatia under Contract No. 098-0982930-2864.
[99]{}
A similar thesis with somewhat different arguments has also been advocated in J. T. Cushing, Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony (University of Chicago Press, Chicago, 1994).
Remarks concerning the actual history of QM are given in references.
D. Bohm, “A suggested interpretation of the quantum theory in terms of ‘hidden variables’. I,” Phys. Rev. [**85**]{} (2), 166-179 (1952); D. Bohm, “A suggested interpretation of the quantum theory in terms of ‘hidden variables’. II,” Phys. Rev. [**85**]{} (2), 180-193 (1952).
R. Tumulka, “Understanding Bohmian mechanics: A dialogue," Am. J. Phys. [**72**]{} (9), 1220-1226 (2004).
Such an interpretation was really proposed already in 1926: E. Madelung, Z. Phys. [**40**]{}, 322-326 (1926).
These arguments might have looked similar to those in D. Dürr, S. Goldstein, and N. Zanghì, “Quantum equilibrium and the origin of absolute uncertainty," J. Stat. Phys. [**67**]{}, 843-907 (1992); A. Valentini, “Signal-locality, uncertainty, and the subquantum H-theorem," Phys. Lett. A [**156**]{}, 5-11 (1991).
In reality, this interpretation is known today as the Bohm interpretation, while the status of an “orthodox" interpretation is enjoyed by a significantly different interpretation. De Broglie had also proposed the same equation for particle trajectories much earlier than Bohm did, but de Broglie did not develop a theory of quantum measurements, so he could not reproduce the predictions of standard QM for observables other than particle positions, such as particle momenta. For more historical details see also G. Bacciagaluppi and A. Valentini, Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference (to be published by Cambridge University Press); quant-ph/0609184.
Such arguments might have looked similar to those in H. Nikolić, “Classical mechanics without determinism," Found. Phys. Lett. [**19**]{}, 553-566 (2006). In this paper, it is shown that classical statistical physics can be represented by a nonlinear modification of the Schrödinger equation, in which classical particle trajectories may be identified with special solitonic solutions. A Bohr-like interpretation of general (not solitonic) solutions suggests that even classical particles may not have trajectories when they are not measured, while a measurement of the previously unknown position may induce an indeterministic wave-function collapse to a solitonic state.
For a review of the theory of decoherence with emphasis on the interpretational issues, see M. Schlosshauer, “Decoherence, the measurement problem, and interpretations of quantum mechanics," Rev. Mod. Phys. [**76**]{}, 1267-1305 (2004).
In reality, tachyons were introduced in physics somewhat later; see O. M. P. Bilaniuk, V. K. Deshpande, and E. C. G. Sudarshan, “‘Meta’ relativity,” Am. J. Phys. [**30**]{} (10), 718-723 (1962); O. M. P. Bilaniuk and E. C. G. Sudarshan, “Particles beyond the light barrier," Physics Today [**22**]{} (5), 43-51 (1969).
It is well known that a wave equation describing the propagation of sound with the velocity $c_s$ in a fluid has the same mathematical form as a special-relativistic wave equation describing the propagation of light with the velocity $c$ in vacuum. Consequently, such a wave equation of sound is invariant with respect to Lorentz transformations in which the velocity $c$ is replaced by $c_s$. A fluid analogy of curved spacetime may also be constructed, by introducing an inhomogeneous fluid. For more details, see, e.g., M. Visser, “Acoustic black holes: horizons, ergospheres, and Hawking radiation," Class. Quant. Grav. [**15**]{}, 1767-1791 (1998).
This proof is now usually attributed to Bell, although other versions of this proof also exist. For a pedagogic review see F. Laloë, “Do we really understand quantum mechanics? Strange correlations, paradoxes, and theorems," Am. J. Phys. [**69**]{} (6), 655-701 (2001). Many of the current interpretations of QM mentioned in the introduction are of this form.
String theory also contains evidence against locality at the fundamental level. Although the theory is originally formulated as a local theory, nonlocal features arise in a rather surprising and counterintuitive manner. It turns out that string theories defined on different background spacetimes may be mathematically equivalent, which suggests that spacetime is not fundamental at all. Without a fundamental notion of spacetime, there is no fundamental notion of locality and relativity as well. It is believed that a more fundamental formulation of string theory should remove locality more explicitly, while known local laws of field theory should emerge as an approximation. See, e.g., G. T. Horowitz, “Spacetime in String Theory," New J. Phys. [**7**]{}, 201 (2005); N. Seiberg, “Emergent Spacetime," hep-th/0601234.
It is known that relativistic QM based on the Klein-Gordon equation, as well as QFT, do not contain a position operator. Therefore, the conventional interpretation of quantum theory does not have clear predictions on probabilities of particle positions in the relativistic regime. The fundamentally deterministic Bohmian interpretation may lead to clearer predictions, which means that it may be empirically richer than (and thus inequivalent to) the conventional formulation. For more details, see, e.g., H. Nikolić, “Relativistic quantum mechanics and the Bohmian interpretation," Found. Phys. Lett. [**18**]{}, 549-561 (2005); H. Nikolić, “Is quantum field theory a genuine quantum theory? Foundational insights on particles and strings," arXiv:0705.3542. Unfortunately, experiments that could confirm or reject such a formulation have not yet been performed. It is also fair to note that today such a version of the Bohmian interpretation not empirically equivalent to the conventional interpretation is considered controversial even among the proponents of the Bohmian interpretation. Nevertheless, in an alternative history of QM in which the conventional probabilistic interpretation never became widely accepted, such a fundamentally deterministic Bohmian interpretation might have seemed more natural.
---
abstract: 'A true quantum reason for why people fib on April first.'
author:
- 'George Svetlichny[^1]'
date: April first 2014
title: The April First Phenomenon
---
The truth…
==========
One of the least understandable of human phenomena is the propensity to fib on April first. How is it that an activity so reprehensible on other days of the year is so readily accepted on this one singular day? Many theories have been put forth about this [@svet:jbbt3.14; @tem:vov271.8281; @perr:apoc], usually of a sociological type. However, sociology alone cannot explain this. This is because even scientists, for whom the truth is the highest virtue, succumb to this failing. One can attest to this fact by the numerous texts published by such reputable venues as Scientific American or arXiv. Physicists, for whom truth and precise language are such an uncompromising commitment as to make them, in gentle terms, the least tolerable of the science workers[@bbt], and who would not jeopardise their professional standing and careers in such a manner, nevertheless engage in this practice. This observation shows that we have to seek deeper causes for the phenomenon, and obviously, only quantum physics can supply this.
So one comes to this paradox: physicists are perceived to sometimes lie in reputable scientific venues and yet they cannot lie. The obvious conclusion is that they are not lying but telling the truth, and it is the nature of the universal physical state on each April first that somehow makes us believe, after the fact, that lies have been perpetrated. As already mentioned, quantum physics has an obvious explanation as to how this can happen. In the Everett many-worlds interpretation of quantum mechanics[@slide] there are the so-called *Maverick Universes* in which the ordinary laws of physics can break down because quantum probabilities don’t follow the usual Born rule. It must surely be that on each April first we enter a maverick universe and so what appear to be fibs are in fact solid truths *in the current universe*. This settles that. April Fools’ Day proves the truth of the Everett picture.
But why is it that it is precisely on one day of the year and not on any other day that we *slide* from one universe to another? Cosmic alignment of this planet with whatever unknown thing is out there cannot be the cause. It could not be a close whatever, for our sun is speeding through our galaxy and so alignment days would not repeat. With a distant whatever *all* days would be like April Fools’ Day, and they’re not. With intermediate distances the singular day would slowly slide through the calendar but it’s been steady for centuries. No, one needs a local explanation, and, as expected, quantum physics provides the answer. As April first nears, many people on the planet perform a quantum measurement on their friends and colleagues by telling fibs to see if their friends and colleagues accept them or not. The latter, exercising their free will, go along so as to humor the fibbers. Free will does not have to follow the Born rule, so the universe slowly gets pushed into a maverick state. As fibbing continues one enters into a quantum Zeno process by which the state of the universe freezes as a maverick. Physicists are notoriously detached and distracted[@bbt], don’t notice the goings-on, and working in a maverick universe perform experiments and calculations that give what would otherwise be totally absurd and contradictory results but which on that day are truly true. Unwittingly they publish. This settles that.
That it is on April first that people in general put the universe through a quantum Zeno process is pure coincidence; the practice started somehow, caught on, and became a tradition.[^2] This is sociology.[^3] It could have been any other day of the year. This is spontaneous symmetry breaking.
…shall set you free
===================
There is a universally unnoticed side effect of the April First phenomenon. Quantum probabilities that do not follow the Born rule allow for superluminal communication and so also for all of its collateral advantages.
So, on April first, if you are very clever, you can instantly communicate with distant galaxies, solve any hard computation problem in polynomial time, send messages to the past, move things with your mind, generate any amount of energy, and maybe even become immortal. But only on that day.
Carpe Diem!
[xx]{} George Svetlichny,[^4] “Would Dexter do it? Sociopathy in April", *Journal of Barely Believable Theories*, Vol. **3**, March–April 2008, pp. 14–15.
[[M. Yu. Temnozadniĭ, “ ‘Vse skazala paren'ku Natasha’ i drugie yavleniya pervogo aprelya", *Vse o Vsem*, **271**, Aprel'–Maĭ 2010, str. 8281–8284.]{}]{} Listen [here.](http://ololo.fm/search/%D0%93%D1%80.+%D0%9C%D0%B0%D0%BB%D0%BE%D0%BB%D0%B5%D1%82%D0%BA%D0%B0/%D0%9F%D0%B5%D1%80%D0%B2%D0%BE%D0%B5+%D0%90%D0%BF%D1%80%D0%B5%D0%BB%D1%8F/)
Ruy Cabeza de Perro, “El pozo como campanario.” First part of a philosophical manuscript discovered in a bottle in a crocodile’s stomach in the Florida Everglades in April 2013. Truly confidential source.[^5]
Watching a few episodes of the TV series *The Big Bang Theory* convinces anyone of this.
The TV series *Sliders* offers an excellent explanation.
Hodža Nasrudin, “The Turkish Jester or The Pleasantries of Cogia Nasr Eddin Effendi." Translated from the Turkish by George Borrow. 1884. Available [here.](http://www.gutenberg.org/ebooks/16244)
[^1]: Departamento de Matemática, Pontifícia Universidade Católica, Rio de Janeiro, Brazil [email protected] <http://www.mat.puc-rio.br/~svetlich>
[^2]: Look up “April Fools’ Day" in Wikipedia, if you’re so inclined, but be sure to check the dates of writing.
[^3]: Yes, unfortunately some sociology had to creep in. Sorry.
[^4]: This author is *not I*, the author of the present manuscript, just someone with an identical (and very common) name. I personally don’t know this author.
[^5]: Probably apocryphal and an April Fools’ joke in itself. The text claims it was inspired by the stories of the great Sufi wise man and wise guy Hodža Nasrudin, of whom it could be said, as he says of one of his characters, “Some people say that, whilst uttering what seemed madness, he was, in reality, divinely inspired, and that it was not madness but wisdom that he uttered."[@borr], pretty much what all physicists aspire to and not only on April first. I found no parallels between the text and Nasrudin except maybe for the title. I did once read in an obscure humor magazine of the late 1940’s that Hodža’s instruction for erecting a minaret was to dig a well and then turn it inside out. This engineering feat is yet to be performed.
---
abstract: 'Recently, Convolutional Neural Networks (CNNs) have achieved tremendous performances on face recognition, and one popular perspective regarding CNNs’ success is that CNNs could learn discriminative face representations from face images with complex image feature encoding. However, the intrinsic mechanism of face representation in CNNs is still unclear. In this work, we investigate this problem by formulating face images as points in a shape-appearance parameter space, and our results demonstrate that: (i) The encoding and decoding of the neuron responses (representations) to face images in CNNs could be achieved under a linear model in the parameter space, in agreement with the recent discovery in primate IT face neurons, but different from the aforementioned perspective on CNNs’ face representation with complex image feature encoding; (ii) The linear model for face encoding and decoding in the parameter space could achieve performances on face recognition and verification close to or even better than those of state-of-the-art CNNs, which might shed new light on the design strategies for face recognition systems; (iii) The neuron responses to face images in CNNs could not be adequately modelled by the axis model, a model recently proposed for face modelling in primate IT cortex. All these results might shed some light on the often-criticized black-box nature behind CNNs’ tremendous performances on face recognition.'
bibliography:
- 'egbib.bib'
---
1
\
[Face representation by deep learning: a linear encoding in a parameter space?]{}
\
[**Qiulei Dong$^{\displaystyle 1, \displaystyle 2, \displaystyle 3}$, Jiayin Sun$^{\displaystyle 1, \displaystyle 2}$, Zhanyi Hu$^{\displaystyle 1, \displaystyle 2, \displaystyle 3, *}$**]{}\
[$^{\displaystyle 1}$National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.]{}\
[$^{\displaystyle 2}$University of Chinese Academy of Sciences, Beijing 100049, China.]{}\
[$^{\displaystyle 3}$CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing 100190, China.]{}\
[$^*$ Corresponding author: [email protected]]{}\
\
Introduction
============
Human face representation, which aims to represent the identity of a human face, is an important and challenging topic in both computer vision and neuroscience, and it has attracted increasing attention in recent years.
In the neuroscience field, visual object representation, including face representation, is generally believed to happen in primate inferotemporal (IT) cortex, and the population responses of IT neurons to an object image stimulus are considered as the representation of this object [@Freiwald2010; @Grimaldi2016; @Lehky2014; @Majaj; @Khaligh:PLOS; @Yamins2014Performance; @Yamins2016; @Chang2017; @DongWH18]. In the early years, many traditional works on face representation assumed an exemplar-based mechanism for representing face identity in primate IT cortex: face identification was mediated by units tuned to a set of exemplar faces. Such an exemplar-based representation mechanism is supported by the results in [@Freiwald2010] that some neurons in the anterior medial face patch are view-independent, responding to faces of only a few specific individuals regardless of view orientation. Recently, different from the results in [@Freiwald2010], Chang and Tsao [@Chang2017] found that by formulating face images as points in a multi-dimensional linear parameter space, face images could be linearly encoded in macaque IT cortex, and they could also be linearly decoded from IT neuron responses; accordingly, a new face representation model, called “the axis model”, was proposed. Their experimental results demonstrated that the proposed axis model could achieve satisfactory encoding and decoding performances of IT neuron responses.
In the computer vision field, the performances of face recognition systems depend heavily on face representation, which is naturally coupled with many adverse factors, such as pose variation, illumination change, expression, occlusion and so on. Face representation could either be manually designed or automatically learnt from face image datasets. In the early days, face representations were mainly constructed with manually designed features, such as Local Binary Patterns [@OjalaPAMI], Histogram of Oriented Gradients [@Dalal2005Histograms], etc. In recent years, Convolutional Neural Networks (CNNs), which are generally believed to be able to learn complex and effective representations from image stimuli, have achieved tremendous successes on object categorization and face recognition [@zhu2013deep; @taigman2014deepface; @sun2014deep; @sun2014deep1; @taigman2014web; @Parkhi15; @jung2015rotating; @zhang2016pursuing; @Zhang2017Two; @Zhang2018deep; @Liu_2017_CVPR; @wu2018light; @deng2018arcface]. For example, DeepFace [@taigman2014deepface] trained a deep CNN to classify faces using a dataset of $400,000$ exemplar images. DeepID [@sun2014deep] employed a CNN to learn face representations for identifying $10,000$ different faces. In [@sun2014deep1], a new CNN was introduced to learn face representations, which was trained with both face identification and verification signals. Hayat et al. [@Hayat2017Joint] proposed a data-driven method to jointly learn registration with face representation in a CNN. Liu et al. [@Liu2017SphereFace] proposed a deep hypersphere embedding approach for face recognition, where the angular Softmax loss for CNNs was introduced to learn discriminative face representations (called SphereFace) with angular margin. Zhang et al. [@Zhang2018deep] proposed a disentangling siamese network, which could automatically disentangle the face features into the identity representations, as well as the identity-orthogonal factors including poses and illuminations. Wu et al. [@wu2018light] proposed a light CNN framework to learn a compact face representation on the large-scale face data with noisy labels. Deng et al. [@deng2018arcface] proposed a geometrically interpretable loss function, called ArcFace, which is integrated with different CNN models (e.g. ResNet [@He7780459]) for face recognition and verification.
Why do CNNs perform so well on face recognition? One popular perspective is that CNNs could learn effective and discriminative face representations with complex image feature encoding, because of the repeatedly used nonlinear operators such as ReLU (Rectified Linear Unit) and max pooling in CNNs. However, **what is the intrinsic mechanism of face representation in CNNs?** It seems this is still largely an open question. In addition, CNNs’ successes in generic object categorization and recognition are often attributed by many researchers to their inherent hierarchical architectures, similar to the primate visual ventral pathway. It is also shown in [@Khaligh:PLOS] that if an object representation is monkey IT-like, it can give a good object recognition performance. Hence, a further question naturally comes up: **is the face representation mechanism in CNNs similar to that found recently in monkey IT cortex [@Chang2017]? Or, more specifically, could the responses of CNN neurons (units) to face stimuli be linearly modelled in a parameter space?** If so, it would mean that although CNNs generally concatenate multiple convolution layers and nonlinear operators, there essentially exists a linear mapping between the face vectors in a parameter space and the corresponding face representations in CNNs. This linear mapping is more explicit and largely different from the aforementioned perspective of complex image feature encoding in CNNs’ face representation.
Addressing the above questions, in this work we investigated the face representation problem at higher CNN layers by formulating face images as points in a parameter space, with six typical multi-layered CNNs for face recognition: VGG-Face [@Parkhi15], DeepID [@sun2014deep], ResNet-Face (derived from ResNet [@He7780459]), SphereFace [@Liu_2017_CVPR], Light-CNN [@wu2018light], and ArcFace [@deng2018arcface], and three commonly used face datasets: Multi-PIE [@gross2010multi], LFW [@LFWTech], and MegaFace [@kemelmacher2016megaface]. We found that there indeed exists a linear encoding/decoding model for face representation in these CNNs, i.e., face vectors in the parameter space could not only be effectively decoded from the neuron responses at the higher CNN layers, but also be encoded linearly for predicting the responses of CNN neurons, similar to the face representation of monkey IT neurons reported in [@Chang2017]. In addition, we found that the representations predicted by the linear model could achieve performances on face recognition and verification comparable to those of the above CNNs. However, we found that the neuron responses at the higher CNN layers could not be adequately modelled by the axis model in [@Chang2017]. These results partially reveal the linear face representation mechanism in CNNs, as well as the similarities and differences of face representation between CNNs and primate IT cortex. Additionally, the revealed linear face encoding might also serve as a reference for the future design of new face recognition systems.
Method
======
In this section, the method used for investigating the face representation mechanism of CNNs is described, and its flowchart is shown in Figure \[vggface\]. The face representation at a CNN layer is defined as:
\[def1\] The set of neuron responses at a given layer of a face recognition CNN is defined as the face representation of this layer.
As seen from Figure \[vggface\], we generate the parameterized face images from a given face image dataset using the AAM (Active Appearance Model) approach [@Cootes2001], and formulate these images as points in a $50$-D (dimensional) parameter space. Then, we analyze the encoding/decoding relationship between the face representations of higher CNN layers and the $50$-D face vectors in the parameter space using a linear model and the axis model proposed in [@Chang2017]. In addition, considering that face recognition and verification are strongly linked to face representation, we also perform comparative experiments on face recognition and verification with the predicted responses by both the linear model and the axis model. The details are elaborated in the following subsections.
Model CNNs and CNN layer selection for face representation
----------------------------------------------------------
In our work, the following six popular deep neural networks for face recognition and verification are used as our model CNNs:
**VGG-Face [@Parkhi15]:** It is a typical CNN model for face recognition, derived from the classical VGG model [@Simonyan15] for general object categorization. It consists of $13$ convolutional layers and $2$ fully connected layers (except the final classification layer for predicting identities).
**DeepID [@sun2014deep]:** It is a classical CNN model for face verification, aiming to learn so-called deep hidden identity features from face images. It consists of $4$ convolutional layers and $1$ fully connected layer.
**ResNet-Face:** It is used for face recognition and verification, derived from the popular ResNet model for general object categorization [@He7780459]. The code for this model has been released in the Dlib toolkit[^1].
**SphereFace [@Liu_2017_CVPR]:** It is a state-of-the-art model for face recognition and verification based on ResNet, where the angular softmax loss is utilized for learning discriminative face features with angular margin. In this work, the model used consists of $20$ convolutional layers and $1$ fully connected layer.
**Light-CNN [@wu2018light]:** The Light CNN framework [@wu2018light] is designed to learn a compact embedding from large-scale face data with massive noisy labels. Here, the Light CNN-29 model, which is a 29-layer convolutional network derived from the Light CNN framework, is utilized.
**ArcFace [@deng2018arcface]:** It is a state-of-the-art model for face recognition and verification based on ResNet, where a geometrically interpretable loss function is utilized. Here, the model used consists of $18$ convolutional layers and $1$ fully connected layer, and the corresponding code was obtained from GitHub [^2].
Considering that higher CNN layers could generally learn global object information from object stimuli, we investigate the face representations of the neuron responses at Layers $\{13, 14, 15\}$ in VGG-Face, and those at the last fully connected layer (rather than the final classification layer for predicting identities) in DeepID, ResNet-Face, SphereFace, Light-CNN, and ArcFace.
Face image synthesis in parameter space {#imgsyn}
---------------------------------------
Although a real face image usually has millions of dimensions or even more, it is generally believed that face data lies on an embedded low-dimensional manifold within the original high-dimensional space [@Roweis2000; @Tenenbaum2000]. In order to alleviate the disturbance of information unrelated to face identity (e.g. background, hair, neck) in the original high-dimensional face data, and simultaneously to reduce the possible information loss due to the transformation from the high-dimensional face space to a low-dimensional space, similarly to [@Chang2017], we utilize the AAM approach [@Cootes2001] to extract the low-dimensional shape and appearance features of faces from the original face images, and then generate the parameterized face images with these face features for investigating the face representation mechanism of CNNs, as shown in Figure \[vggface\].
In detail, given a face image dataset, a set of $68$ $2$-D landmark points is first extracted automatically from each face image using the Dlib toolkit. Then, the obtained sets of landmark points for all the images are aligned into a common co-ordinate frame and stored as a shape matrix, each column of which represents an aligned set of landmark points extracted from a face image. In addition, the original face images are warped such that their landmark points match the mean shape, and the gray information over the warped region covered by the mean shape is stored as an appearance matrix, each column of which represents the appearance of a warped face image. Then, Principal Components Analysis (PCA) is applied to the shape matrix for extracting a set of $25$-D feature vectors accounting for the face geometry, and to the appearance matrix for extracting a set of $25$-D feature vectors accounting for the face appearance. For a given face image, its $25$-D shape vector and $25$-D appearance vector are concatenated to form **its $50$-D face vector** in this work.
Accordingly, a $50$-D parameter space is spanned by the obtained face vectors, where a point represents a face. Finally, a parameterized face image is generated with its $50$-D face vector as well as the stored shape and appearance transformation matrices via PCA. In our experiments, the obtained face vectors are used for analyzing the face representation mechanism of CNNs. The parameterized face images are used for training and testing CNNs.
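As an illustration of this parameterization step, a minimal sketch is given below, assuming that the aligned landmark sets and the shape-normalized appearances have already been stacked into two data matrices; the array names, the row-per-image convention, and the use of NumPy/scikit-learn are our own illustrative choices, not part of the original pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_face_vectors(shape_mat, app_mat, n_comp=25):
    """shape_mat: (n_images, 136) aligned 68x2 landmark coordinates per row.
       app_mat:   (n_images, n_pixels) shape-normalized gray appearance per row.
       Returns the (n_images, 50) face vectors and the fitted PCA models, which
       are kept for rendering the parameterized face images later."""
    pca_shape = PCA(n_components=n_comp).fit(shape_mat)   # 25-D geometry basis
    pca_app = PCA(n_components=n_comp).fit(app_mat)       # 25-D appearance basis
    shape_feat = pca_shape.transform(shape_mat)
    app_feat = pca_app.transform(app_mat)
    face_vecs = np.concatenate([shape_feat, app_feat], axis=1)  # 50-D per face
    return face_vecs, pca_shape, pca_app
```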
**Remark:** (i) As described above, compared with the original face images, the parameterized face images contain much less identity-unrelated information (e.g. they do not have complex backgrounds, hair, or necks), and there is no information loss generated by the transformation from the high-dimensional parameterized face space to the low-dimensional parameter space. Hence, the parameterized face images seem more appropriate than the original ones for a controlled and strict experimental evaluation of the face representation mechanism of CNNs. (ii) Other than AAM, there exist many other face synthesis approaches in the literature. We utilize AAM in this work only for convenient comparison with the results in [@Chang2017] on face representation of IT neurons, where AAM was also utilized.
Linear model for face encoding/decoding
---------------------------------------
Let $n$ denote the number of face stimuli, and $m$ the number of neurons at a CNN layer. Let $R \in \mathcal{R}^{m\times n}$ denote the response matrix to all the face stimuli at a CNN layer, and $P\in \mathcal{R}^{50\times n}$ the matrix storing the corresponding face vectors defined in the $50$-D parameter space. A linear model for face encoding and decoding is defined as:
\[def2\] Under a linear model for face encoding and decoding, face encoding could be achieved by linearly transforming a face vector into the face representation (neuron responses) of a CNN layer, and face decoding could be achieved by linearly transforming the face representation of a CNN layer into the face vector.
If such a linear model holds true for a CNN on face recognition, the response matrix $R$ for a CNN layer can be roughly approximated by a linear combination of the elements of the $50$-D face vectors $P$ as follows: $$\begin{aligned}
R = TP + b\mathbf{1}_n^T \label{linearreg}\end{aligned}$$ where $T\in \mathcal{R}^{m\times 50}$ is the transformation matrix, $b\in \mathcal{R}^{m\times 1}$ is the bias vector, and $\mathbf{1}_n \in \mathcal{R}^{n\times 1}$ is the $n$-D all-one column vector.
Once both the transformation matrix $T$ and bias vector $b$ are obtained by solving Eq. (\[linearreg\]), the Pearson and Spearman correlation coefficients are computed respectively to measure the correlation between the neuron responses outputted from a CNN layer and those predicted by the linear model.
If the mean of the computed correlation coefficients is high, it suggests that the face representations of this layer could be adequately predicted by linearly encoding the face vectors, and the face vectors could be linearly decoded from the face representations by inverting (\[linearreg\]) accordingly. Otherwise, it suggests that the face representations of this layer could not be linearly encoded/decoded.
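As an illustration, Eq. (\[linearreg\]) can be fitted by ordinary least squares and evaluated per neuron as sketched below; the solver and the array conventions are illustrative assumptions, not necessarily the implementation used in the experiments.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def fit_linear_encoding(P_train, R_train):
    """P_train: (50, n) face vectors, R_train: (m, n) layer responses.
       Least-squares solution of R = T P + b 1^T."""
    n = P_train.shape[1]
    X = np.vstack([P_train, np.ones((1, n))])            # the extra row absorbs b
    W, *_ = np.linalg.lstsq(X.T, R_train.T, rcond=None)  # (51, m)
    return W[:-1].T, W[-1]                               # T: (m, 50), b: (m,)

def encoding_correlations(T, b, P_test, R_test):
    """Mean Pearson/Spearman correlation between predicted and actual responses."""
    R_pred = T @ P_test + b[:, None]
    pear = [pearsonr(R_pred[i], R_test[i])[0] for i in range(R_test.shape[0])]
    spear = [spearmanr(R_pred[i], R_test[i])[0] for i in range(R_test.shape[0])]
    return float(np.mean(pear)), float(np.mean(spear))
```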
The axis model for face encoding/decoding
-----------------------------------------
The axis model [@Chang2017] can be considered as a special linear model followed by a nonlinear rectification. The axis model consists of two steps: firstly, the dot product between a face image stimulus (described as a face vector in the parameter space) and the STA $P_{STA}$ (spike-triggered average) axis of a face cell is computed, and then the value is rectified by a $3$-order polynomial. Here, for a CNN neuron, like that in [@Chang2017], we firstly compute its STA $P_{STA}$ by: $$\begin{aligned}
P_{STA} = \frac{\sum_{i=1}^n r_iP_i}{\sum_{i=1}^n r_i}\end{aligned}$$ where $r_i (i=1,2,...,n)$ is the response of this neuron to the $i$-th face image stimulus, and $P_i$ is the $50$-D face vector of this stimulus. Then, we fit a $3$-order polynomial on the dot product between the face vector $P_i$ and the STA axis $P_{STA}$ of this neuron for modelling its response $r_i$ by: $$\begin{aligned}
r_i = a + b\langle P_i, P_{STA}\rangle + c\langle P_i, P_{STA} \rangle^2 + d\langle P_i, P_{STA} \rangle^3, \ \ i=1,2,...,n\end{aligned}$$ where $\{a,b,c,d\}$ are the polynomial parameters, and $\langle \cdot, \cdot \rangle$ is the dot product operator.
With the obtained fitted parameters for each CNN neuron, its response to an arbitrary face image could be predicted, and the Pearson and Spearman correlation coefficients are computed respectively to measure the correlation between the neuron responses outputted from a CNN layer and those predicted by the axis model. If the mean of the computed correlation coefficients is high, it suggests that the axis model could well model the neuron responses at this layer, and the face vectors could also be decoded from the neuron responses with the fitted parameters of the axis model accordingly. Otherwise, it suggests that the axis model is not appropriate for encoding and decoding the CNN neuron responses at this layer.
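A corresponding sketch of the axis model for a single CNN neuron is given below, under the same illustrative array conventions as before.

```python
import numpy as np

def fit_axis_model(P, r):
    """P: (50, n) face vectors, r: (n,) responses of a single CNN neuron
       (assumed not identically zero). Returns the STA axis and the
       coefficients of the 3rd-order rectifying polynomial."""
    sta = (P @ r) / r.sum()              # spike-triggered average axis
    proj = P.T @ sta                     # dot products <P_i, P_STA>
    coeffs = np.polyfit(proj, r, deg=3)  # fits a + b x + c x^2 + d x^3
    return sta, coeffs

def predict_axis_model(sta, coeffs, P_new):
    """Predicted responses of this neuron to new face vectors (50, n_new)."""
    return np.polyval(coeffs, P_new.T @ sta)
```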
Face recognition and verification {#frv}
---------------------------------
Face recognition is to determine the identity of the person in the input face image. It is a multi-class classification problem. For a new face dataset, the original model CNNs (at least their final classification layer) generally have to be fine-tuned with part of this dataset so that these CNNs would be able to recognize persons from this dataset.
Face verification is to determine whether the persons in the input pair of face images are the same or not. Unlike face recognition, face verification is typically a binary classification problem, and it does not require fine-tuning the used model CNNs, which could reflect the representation capability of CNNs more generally.
Face representation is the base for face verification and recognition. If the neuron responses of higher CNN layers (particularly the last layer) could be adequately predicted by a linear model in the parameter space, the predicted responses would achieve similar performances on face verification and recognition to those outputted from the original model CNNs. Hence, the verification and recognition results could be used indirectly to show the goodness of the predicted face representation. In this work, we also follow this path to assess the fitness of the linear encoding model, and the methods used for face recognition and verification are described next:
**Remark:** Other than face recognition and verification, we also carried out experiments on face identification, which is to determine which image in a set of face images depicts the same person as the input face image. Our results show that the predicted responses by the linear model achieve similar performances on face identification to those outputted from the original model CNNs, but the predicted responses by the axis model achieve much lower performances than those outputted from the model CNNs, which is in agreement with our results on face recognition and verification. We do not report these results in detail due to space limitations.
**Face recognition:**
For each of the three used datasets in this work, it is divided into two parts: training data and testing data. We fine-tune a CNN in the following two ways: (i) All the layers of the CNN are fine-tuned with the training data; (ii) Only the final classification layer is fine-tuned with the training data, while the other layers are fixed, in order to maintain the representation generality of the CNN. Then, the classification accuracies on the testing data are computed.
In addition, we train linear classifiers under two popular loss functions (Softmax Loss and Hinge Loss), with the predicted responses to the training data by the linear model and the axis model respectively, and then compare their performances on the testing data with those of the model CNNs.
The Softmax-Loss function used here combines the standard Softmax Loss with a regularizer and is defined as $$\begin{aligned}
\min_{\theta} \quad -\frac{1}{n}\left[\sum_{i=1}^n\sum_{j=1}^k \mathrm{1}\{y_i=j\} \log\frac{e^{\theta_{j}^Tx_i}}{\sum_{l=1}^k e^{\theta_{l}^Tx_i}} \right] + \frac{\lambda}{2}||\theta||_F^2 \end{aligned}$$ where $n$ is the number of stimuli, $k$ is the number of identities, $x_i \in \mathcal{R}^p$ is the $i$-th input stimulus, $\theta \in \mathcal{R}^{p\times k}$ is the model parameter matrix, $y_i\in \{1,2,...,k\}$ is the identity of the $i$-th face image, $\lambda$ is the weight of the regularizer, and $\mathrm{1}\{\cdot\}$ is the indicator function with $\mathrm{1}\{\mathrm{a \ true \ statement}\} = 1$ and $\mathrm{1}\{\mathrm{a \ false \ statement}\} = 0$.
The Hinge-Loss function used here also combines the standard Hinge Loss with a regularizer: $$\begin{aligned}
\min_{\theta} \ \ \frac{1}{n}\left[\sum_{i=1}^n \max(0, 1 - \theta_{y_i}^Tx_i + \max_{j\neq y_i}(\theta_j^Tx_i)) \right] + \lambda||\theta||_F^2 \end{aligned}$$
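As an illustration of this step, both regularized linear classifiers could be trained, for example, with scikit-learn, whose multinomial logistic regression and Crammer-Singer linear SVM optimize objectives of the same form as the Softmax-Loss and Hinge-Loss above, up to the parameterization of the regularization weight; the use of scikit-learn and the mapping between $C$ and $\lambda$ are our own assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

def train_linear_classifiers(X_train, y_train, lam=1e-3):
    """X_train: (n, p) predicted face representations, y_train: (n,) identity labels.
       Returns an L2-regularized softmax classifier and a multi-class hinge classifier."""
    n = X_train.shape[0]
    C = 1.0 / (n * lam)  # scikit-learn's C roughly plays the role of 1/(n*lambda)
    softmax_clf = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    hinge_clf = LinearSVC(C=C, multi_class='crammer_singer').fit(X_train, y_train)
    return softmax_clf, hinge_clf
```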
**Face verification:**
For the linear model (and likewise for the axis model and the model CNNs), verification on a given pair of images is carried out by testing whether the Euclidean distance between the predicted response vectors to the two images is smaller than a threshold $\tau$. Two common measures, Verification Accuracy $Acc$ and Equal Error Rate $EER$, are used for comparing the verification results of the linear/axis model with those of the model CNNs.
The Verification Accuracy $Acc$ is defined as follows, and the threshold $\tau$ is generally learned to maximize the verification accuracy on the training data:
\[def3\] $Acc$ is the proportion of true results (both true positives and true negatives) among the total number of cases examined.
The Equal Error Rate $EER$ is defined as:
\[def4\] $EER$ is the rate at the ROC (receiver operating characteristic curve) operating point where the false positive and false negative rates are equal.
Note that a smaller value of $EER$ corresponds to a better result, but for comparison convenience, we report the value of $100\% - EER$, as done in [@Parkhi15]. This measure is independent of the distance threshold $\tau$.
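A minimal sketch of how $Acc$ and $100\% - EER$ can be computed from the Euclidean distances between predicted response vectors is given below; sweeping the threshold over the observed distances is an illustrative choice, and in practice $\tau$ is learned on the training split as described above.

```python
import numpy as np

def verification_metrics(resp_a, resp_b, same):
    """resp_a, resp_b: (n_pairs, p) predicted responses to the two images of each pair.
       same: (n_pairs,) boolean ground-truth labels.
       Returns the verification accuracy Acc and 100% - EER, both in percent."""
    d = np.linalg.norm(resp_a - resp_b, axis=1)
    thresholds = np.unique(d)
    # Acc: best proportion of correct decisions over the candidate thresholds
    acc = max(np.mean((d < t) == same) for t in thresholds)
    # EER: operating point where false-positive and false-negative rates are equal
    fpr = np.array([np.mean(d[~same] < t) for t in thresholds])
    fnr = np.array([np.mean(d[same] >= t) for t in thresholds])
    i = int(np.argmin(np.abs(fpr - fnr)))
    eer = 100.0 * (fpr[i] + fnr[i]) / 2.0
    return 100.0 * acc, 100.0 - eer
```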
Results
=======
Data sets
---------
The following three widely-used face datasets are used in our experiments:
**Multi-PIE [@gross2010multi]:** It is a popular dataset for algorithmic evaluation on face recognition, containing images of $337$ people with different poses, illuminations, and expressions. In our experiments, a subset of Multi-PIE, consisting of the images of $249$ people under all the $7$ poses ($\{-45^{\circ},-30^{\circ},-15^{\circ},0^{\circ},+15^{\circ},+30^{\circ},+45^{\circ}\}$) and $10$ illuminations with the neutral expression in Session One of Multi-PIE (totally $249 \times 7\times 10 = 17430$ images), is used for testing the model CNNs.
**LFW [@LFWTech]:** It is a standard in-the-wild benchmark for automatic face verification, containing $13233$ images from $5749$ different identities, with large variations in pose, expression and illuminations. Following the standard evaluation protocol defined for the “unrestricted setting” [@Liu_2017_CVPR; @deng2018arcface], we test the model CNNs on $6000$ face pairs ($3000$ matched pairs and $3000$ mismatched pairs).
**MegaFace [@kemelmacher2016megaface]:** It is a standard in-the-wild benchmark for face verification, which contains in-the-wild face photos with unconstrained pose, expression, lighting, and exposure. It includes a probe set and a gallery set. The probe set consists of two existing datasets: Facescrub [@NgW14] and FGNet. The gallery set contains around $1$ million images from $690$K different individuals. Considering that our goal in this work is not to evaluate which model CNN performs best on face recognition and verification, but to evaluate (i) whether the encoding and decoding of the face representations of CNNs could be modelled by the linear/axis model and (ii) whether the linear/axis model could achieve performances on face verification close to those of the model CNNs, we choose a subset of MegaFace, consisting of $4000$ images from $80$ identities ($40$ males and $40$ females, $50$ images per identity). Then, we construct $6000$ face pairs ($3000$ matched pairs and $3000$ mismatched pairs) with the subset of images for our experiments.
As described in Section \[imgsyn\], the $50$-D face vectors are extracted from the original images in the three datasets using the AAM approach. Then, the parameterized face images are generated using these $50$-D face vectors.
Following the common practice, the three synthesized face datasets are each partitioned into two subsets: a training set and a testing set. The training set is used for estimating the fitted parameters in both the linear model and the axis model, fine-tuning VGG-Face, and training the linear classifiers for the face recognition experiments, while the testing set is used only to test the face representation performances of our linear model as well as the axis model. For Multi-PIE, five data partition schemes, listed in Table \[fnet\], are assessed in order to give a detailed analysis of the influences of viewing pose and illumination. For LFW and MegaFace, the aggregate performance of each CNN is evaluated over 10 separate experiments in a 10-fold cross-validation scheme. In each experiment, nine of the subsets are combined to form a training set, with the remaining subset used for testing.
Index Partition schemes (for each identity)
------- -----------------------------------------------------------------------------------------------------------------------------
1 Select samples with poses $\{-30^{\circ},-15^{\circ},0^{\circ},45^{\circ}\}$ for training, the rest for testing.
2 Select samples with poses $\{-30^{\circ},-15^{\circ},0^{\circ},30^{\circ},45^{\circ}\}$ for training, the rest for testing.
3 Select samples with 6 random illuminations for training, the rest for testing.
4 Select samples with poses $\{-45^{\circ},15^{\circ},30^{\circ}\}$ for training, the rest for testing.
5 Select samples with 3 random poses for training, the rest for testing.
: Partition schemes for constructing the training and testing sets in Multi-PIE.[]{data-label="fnet"}
Encoding/decoding under the linear model {#expLinear}
----------------------------------------
**Results on Multi-PIE:**
For this relatively simple dataset, only VGG-Face [@Parkhi15] is tested. As described in Section \[frv\], each of the five training sets (listed in Table \[fnet\]) is used to fine-tune VGG-Face in two different ways: we denote the model obtained by fine-tuning only its classification layer while fixing the remaining layers as VGG-Face1, and the model obtained by fine-tuning all the layers as VGG-Face2.
Following Eq. (\[linearreg\]), we fit the linear model between the $50$-D face vectors of the training data and the corresponding neuron responses at each of Layers $\{L13,L14,L15\}$ for VGG-Face1 and VGG-Face2 respectively, and the obtained model parameters are used for predicting the neuron responses of the selected three layers to the testing data. The Pearson and Spearman coefficients are then computed between the predicted single-neuron responses and those output by the three layers, and the mean values of the two coefficients over the five testing sets are shown in Figure \[corrLG\]. As seen from Figure \[corrLG\], the computed Pearson and Spearman coefficients for $L14$ of both VGG-Face1 and VGG-Face2 are around $0.6$, and those for $L15$ are close to $0.4$. This suggests that the representations predicted by the linear model are strongly correlated with those output by Layers $\{L14, L15\}$.
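As a rough illustration of this fitting step, a minimal sketch is given below. It is our own simplification, not the authors' code: we assume an ordinary least-squares fit with a bias term, and the function names, array shapes and the use of `scipy.stats` are illustrative choices.

```python
# Minimal sketch (ours): fit a linear map from 50-D face vectors to per-layer
# neuron responses on the training split, predict responses on the test split,
# and score each neuron with Pearson and Spearman correlation coefficients.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def fit_linear_model(F_train, R_train):
    """F_train: (n_samples, 50) face vectors; R_train: (n_samples, n_neurons) responses."""
    F_aug = np.hstack([F_train, np.ones((F_train.shape[0], 1))])   # append a bias column
    W, *_ = np.linalg.lstsq(F_aug, R_train, rcond=None)            # (51, n_neurons)
    return W

def predict_responses(W, F_test):
    F_aug = np.hstack([F_test, np.ones((F_test.shape[0], 1))])
    return F_aug @ W

def mean_correlations(R_pred, R_true):
    pears = [pearsonr(R_pred[:, j], R_true[:, j])[0] for j in range(R_true.shape[1])]
    spear = [spearmanr(R_pred[:, j], R_true[:, j])[0] for j in range(R_true.shape[1])]
    return float(np.nanmean(pears)), float(np.nanmean(spear))
```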
We also linearly decode the $50$-D face vectors from the neuron responses at Layers $\{L13,L14,L15\}$ respectively, and then reconstruct the synthesized face images according to the AAM approach. Figure \[reconPIE\] shows the reconstruction results for an exemplar image at Layers $\{L13,L14,L15\}$ of VGG-Face1 and VGG-Face2. The reconstructed images are similar to the original synthesized face image, indicating that the representations output by higher CNN layers can also be utilized for linearly decoding the face vectors in the $50$-D parameter space.
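The decoding direction admits an equally simple sketch, again under our own assumption of a plain least-squares regressor; the AAM synthesis step that turns the decoded $50$-D vectors into images is external and not shown.

```python
# Sketch (ours) of linear decoding: regress the 50-D face vectors from the
# layer responses; the decoded vectors would then be passed to the AAM
# reconstruction to synthesize face images.
import numpy as np

def fit_linear_decoder(R_train, F_train):
    R_aug = np.hstack([R_train, np.ones((R_train.shape[0], 1))])
    D, *_ = np.linalg.lstsq(R_aug, F_train, rcond=None)   # responses -> 50-D vectors
    return D

def decode_face_vectors(D, R_test):
    R_aug = np.hstack([R_test, np.ones((R_test.shape[0], 1))])
    return R_aug @ D
```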
In addition, using the representations predicted by the linear model for each of Layers $\{L13,L14,L15\}$ of both VGG-Face1 and VGG-Face2, we train linear classifiers for face recognition under the Softmax loss and the Hinge loss respectively. We then evaluate their performance on the five testing sets, and Figure \[acc\] reports the recognition accuracies of VGG-Face1/VGG-Face2 and the linear model. As seen from the dark blue bars of Figure \[acc\], the recognition accuracies of VGG-Face1 on the five testing sets are $\{72.70\%, 75.74\%, 99.54\%, 58.78\%, 61.71\%\}$, and those of VGG-Face2 are $\{83.73\%, 88.05\%, 99.01\%, 72.33\%, 81.90\%\}$. The learnt linear classifiers for Layers $\{L14, L15\}$ achieve performance close to or better than VGG-Face1/VGG-Face2, while the linear classifiers for Layer $L13$ achieve performance close to VGG-Face1/VGG-Face2 in most cases.
Note that VGG-Face2 achieves performance close to or slightly better than VGG-Face1, mainly because VGG-Face2 is obtained by fine-tuning all the layers. It is also noted that the accuracies on testing sets $\{1,2,4,5\}$ are much lower than those on the third set, mainly because (i) the images in testing sets $\{1,2,4\}$ have head orientations different from those in their corresponding training sets, and (ii) for each identity, its face images in the fifth testing set have head orientations different from those in the corresponding training set.
**Results on LFW and MegaFace:**
For the two in-the-wild datasets, all six CNNs are used without fine-tuning to conduct face verification experiments, in order to further investigate whether their face representations can be adequately modelled by linear encoding.
As described in Section \[frv\], each CNN is evaluated over 10 separate experiments in a 10-fold cross-validation scheme. In each experiment, we fit the linear model between the $50$-D face vectors of the training data and the corresponding neuron responses at the last layer of each referred CNN, and the obtained model parameters are used for predicting the neuron responses of this layer to the testing data. The Pearson and Spearman correlation coefficients are then computed between the predicted single-neuron responses and those output by this layer. The significance of the computed correlations is also tested: more than $94\%$ of the corresponding $p$-values for each CNN (close to $100\%$ for ResNet-Face, SphereFace, Light-CNN, ArcFace) are lower than the significance level of $0.01$. The mean values (and standard deviations) of the two correlation coefficients for all the referred CNNs are shown in Figure \[corrLFWMEGA\]. As seen from Figure \[corrLFWMEGA\], both the Pearson and Spearman coefficients for the six CNNs on the two datasets are larger than $0.4$ in most cases; in particular, the coefficients for the four more recent CNNs (ArcFace, Light-CNN, SphereFace, and ResNet-Face) are close to or larger than $0.6$, with relatively smaller standard deviations. This agrees with the previous results for VGG-Face1/VGG-Face2 on Multi-PIE and further suggests that the representations predicted by the linear model are strongly correlated with those output by the CNNs.
We also linearly decode the $50$-D face vectors from the neuron responses at the last layer of each CNN, and then reconstruct the synthesized face images according to the AAM approach. Figure \[reconMega\] shows the reconstruction results for an exemplar image from the in-the-wild dataset MegaFace; these reconstructed images are similar to the original synthesized face image (since Light-CNN takes a grey image as input, its reconstructed image is also grey). The results once again indicate that the representations output by higher CNN layers can be utilized for linearly decoding the face vectors in the $50$-D parameter space.
In addition, face verification experiments are conducted with the representation of each CNN and with the representation predicted by the linear model, and the corresponding $ACC$ and $EER$ (in fact, $100\% - EER$) on LFW and MegaFace are shown in Figure \[accLFWMEGA\]. The results show that all the predicted representations achieve performance close to that of the corresponding CNN representations, in agreement with the above results on face recognition.
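For reference, the following sketch shows one common way to compute $ACC$ and $EER$ from pairwise similarity scores; the cosine similarity metric and the exhaustive threshold search are our assumptions and need not match the exact protocol used here.

```python
# Hedged sketch (ours): verification accuracy and equal error rate from
# similarity scores of face pairs (labels: 1 = matched, 0 = mismatched).
import numpy as np

def cosine_scores(A, B):
    return np.sum(A * B, axis=1) / (np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1))

def acc_and_eer(scores, labels):
    best_acc, far_frr = 0.0, []
    for t in np.unique(scores):
        pred = scores >= t
        best_acc = max(best_acc, float(np.mean(pred == labels)))
        far = float(np.mean(pred[labels == 0]))      # false accept rate
        frr = float(np.mean(~pred[labels == 1]))     # false reject rate
        far_frr.append((abs(far - frr), (far + frr) / 2))
    eer = min(far_frr)[1]                            # threshold where FAR is closest to FRR
    return best_acc, eer
```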
In sum, all the above results indicate:
- The representations of higher CNN layers could be well predicted by the linear model, and notably, the linear model tends to give a better prediction for the representations of more recent CNNs.
- The face vectors in the parameter space could be well decoded from the CNN representations by the linear model.
**Remark:** Similar to the nonlinear rectification used in [@Chang2017], after obtaining the responses fitted by the linear model, we also tried to rectify these responses with a third-order polynomial, and found that this rectification step did not affect the encoding/decoding results.
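As a side note, the rectification test mentioned in the remark can be sketched as below; the per-neuron `numpy.polyfit` fit is our own minimal stand-in for the nonlinear rectification of [@Chang2017].

```python
# Sketch (ours): rectify a linearly predicted response with a third-order
# polynomial fitted against the observed response of the same neuron.
import numpy as np

def rectify_third_order(r_pred, r_true):
    coeffs = np.polyfit(r_pred, r_true, deg=3)
    return np.polyval(coeffs, r_pred)
```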
Encoding/decoding under the axis model
--------------------------------------
In this subsection, we investigate whether the axis model proposed in [@Chang2017] for primate IT cortex is suitable for modelling the face representations of higher CNN layers.
The same procedure as in Section \[expLinear\] is used here, except that the axis model replaces the linear model. The results are summarized as follows:
**Results on Multi-PIE:**
Figure \[corrLGAxis\] shows the mean values of the Pearson and Spearman correlation coefficients on the five testing sets from Multi-PIE for Layers $\{L13, L14, L15\}$ of VGG-Face1/VGG-Face2. These coefficients are lower than $0.25$ in most cases, indicating that the responses predicted by the axis model are not strongly correlated with those output by Layers $\{L13, L14, L15\}$ of VGG-Face1 and VGG-Face2.
Figure \[reconPIE\] shows the reconstruction results for an exemplar image at Layers $\{L13,L14$, $L15\}$ by the axis model. The reconstructed images are dramatically different from the original face image, indicating that the axis model cannot effectively decode the face features from the representations output by higher CNN layers, although it was successful for IT face-neuron decoding in [@Chang2017].
The face recognition accuracies of VGG-Face1/VGG-Face2 and of the linear classifiers learnt from the representations predicted by the axis model for Layers $\{L13,L14,L15\}$ are shown in Figure \[accAxis\]. As seen from Figure \[accAxis\], the learnt linear classifiers on the third training set perform better than those on the other four training sets, mainly because the third training set contains all the poses present in the corresponding testing set. However, the linear classifiers for the three layers on all the training sets are much less accurate than the corresponding CNN in most cases, which further demonstrates that the axis model is not suitable for modelling the face representations in CNNs.
**Results on LFW and MegaFace:**
The mean values (and standard deviations) of the Pearson and Spearman coefficients between the representations predicted by the axis model and those of all the referred CNNs are shown in Figure \[corrLFWMEGAaxis\]. The significance of the computed correlations is also tested, and more than $80\%$ of the corresponding $p$-values for each CNN are lower than the significance level of $0.01$. As seen from Figure \[corrLFWMEGAaxis\], both coefficients for the six CNNs on the two datasets are close to $0.25$ in most cases, in agreement with the above results for VGG-Face1/VGG-Face2 on Multi-PIE. This further suggests that the representations predicted by the axis model are not strongly correlated with those of the CNNs.
Figure \[reconMega\] shows the reconstruction results for an exemplar image from MegaFace, which are also dramatically different from the original face image.
The $ACC$ and $EER$ on the two datasets obtained with the axis model are shown as the green bars of Figure \[accLFWMEGA\]. All the representations predicted by the axis model give lower $ACC$ and $EER$ than the corresponding CNN representations. The verification results are similar to the face recognition results on the third Multi-PIE partition (as defined in Table \[fnet\]), mainly because in these experiments the training sets contain poses similar (or even identical) to those in the testing sets, although the images in LFW and MegaFace show a large amount of pose variation.
From all these results, we can see that the axis model is not as good as the linear model for modelling the neuron responses of DNNs.
DNN neurons versus IT neurons on face representation
----------------------------------------------------
In [@Chang2017], the following points on face representation in primate IT cortex are observed:
- By formulating faces as points in a $50$-D parameter space, human faces could be linearly decoded from IT neuron responses, and the responses of IT neurons could be linearly predicted with the face vectors.
- The response of each face cell is the projection (dot product) of an incoming face vector onto its STA axis, followed by a nonlinear rectification; this is called “the axis model”. This model could adequately decode face vectors from neural population responses and predict neural firing rates to new faces.
Compared with these observations in IT cortex, the following points are observed for CNNs:
- By formulating face images as points in a $50$-D parameter space, the face vectors can also be linearly decoded from the representations at higher CNN layers, and the representations at higher CNN layers can be linearly predicted from the face vectors. This indicates that, to a large degree, or at a “coarse-grained” level, CNNs have a linear encoding and decoding mechanism similar to that in primate IT cortex.
- The axis model fails to adequately model the face representations at higher CNN layers. This suggests that the face representation mechanism in CNNs has noticeable discrepancies with that in primate IT cortex at a “finer-grained” level, as similarly demonstrated for general object representations in [@Rishi2018].
Conclusions and discussions {#Consec}
===========================
In this work, we investigate the face representations of CNNs using six state-of-the-art CNNs as our model CNNs on three representative datasets, and our main findings are as follows:
- CNNs for face recognition can be viewed as a linear model in a $50$-D parameter space. Although the face representations at higher CNN layers are obtained by applying a cascade of nonlinear operators, these representations can in fact be encoded/decoded by the linear model in this parameter space, similarly to primate IT cortex. Since all six CNNs exhibit this linear encoding/decoding property despite having diverse architectures (e.g. VGG-Face vs ResNet-Face), we believe this property is not due to a specific CNN architecture, but is an inherent property of face recognition DNNs in general.
- The linear model is more effective for modelling the face representations of CNNs than the axis model in [@Chang2017], probably because the number of fitted parameters in the linear model is much larger than that in the axis model.
- The face recognition and verification accuracies of the linear classifiers with the linearly-predicted representations as inputs are close to or even higher than those of the model CNNs.
The above linear encoding of face representations by CNNs in a parameter space seems both interesting and surprising, considering that the parameter space is purely a mathematical construct and that modern CNNs for face recognition, composed of many layers with an enormous number of trainable parameters, in effect recover a few dozen shape and appearance model parameters. What could be the implications of such a linear encoding for both deep learning and neuroscience? Here are some points:
- **The inverse generative model of CNNs:** Currently, CNNs are largely of a “black-box” nature, in the sense that their exceptionally good object recognition performance still lacks a sufficient explanatory theory. One proposal is the inverse generative model [@Lin2017Why; @Kulkarni2015Picture; @Patel2015A], which holds that CNNs mainly recover the parameters of a generative model hierarchically. The inverse graphics in [@NIPS2015_5851] and the hypothesis-and-verification approach in [@Yildirim2015Efficient] are examples of this view. The linear encoding found in this work seems to support the inverse generative model, at least for face recognition. As linear encoding has a number of salient advantages, as shown in [@Chang2017], it seems worth exploring new, simpler networks that directly regress generative model parameters (one of our future research directions), rather than training a very deep network by the current heavily data-driven approach. Of course, how to establish an adequate parameterized model for a given class of objects remains a difficult research problem in both the computer vision and computer graphics communities.
- **The goal-driven approach to understanding sensory cortex:** Face recognition by CNNs is, in essence, purely data-driven under some recognition performance criteria. As shown in this work, CNNs have a linear face encoding mechanism similar to that of macaques. This seems to suggest that the macaque face processing system could be modelled by optimizing only the face recognition performance of CNNs, which supports the goal-driven paradigm for understanding sensory cortex advocated in [@Yamins2014Performance; @Yamins2016].
- **Validity of linear encoding for familiar faces and faces with expressions:** It is generally believed that face recognition and facial expression are processed in different cortical areas in primates: face recognition in IT, and facial expression in the superior temporal sulcus (STS) [@Rolls2017]. In [@Chang2017], the axis model is mainly for rapid face recognition, or core face recognition [@Tsao2008Patches]. In addition, as shown in [@Landi2017Two], two additional cortical areas, dedicated only to familiar face recognition, are detected in monkeys. Our results show that CNNs are able to handle both familiar and unfamiliar face images, as well as faces with different expressions. This seems to suggest either that monkeys also have a linear encoding mechanism for familiar faces and faces with expressions, which needs to be clarified in the future, or that face encoding by CNNs has substantial differences from that in primates.
Of course, our work also has some limitations, notably:
- This work focuses only on the face representations of CNNs, rather than on general object representations. Considering that different faces generally vary only slightly in topology and geometry, while general objects (such as tables, chairs, cars, etc.) bear no such resemblance to one another, it is doubtful whether this simple linear model for face representation extends to general object modelling. Besides, how to parameterize general objects also seems an insurmountable difficulty.
- There are various approaches for generating parameterized face images other than the AAM approach used here, and they could define different parameter spaces. Our results only reveal that there exists at least one such parameter space (determined by the AAM approach) in which the face representations of CNN layers can be predicted by linearly encoding the face vectors. Other parameter spaces will be explored in the future.
- In [@Szegedy2014], it is reported that a distinct difference between CNNs and the human visual system in object recognition is their sensitivity to adversarial images, that is, images corrupted with small, deliberately crafted perturbations. The human visual system is generally immune to adversarial images, while the object recognition performance of CNNs is quite sensitive to them. It remains unclear whether the linear face representation mechanism in CNNs still holds on adversarial face images, which is another line of future work.
In summary, to the best of our knowledge, this work is the first attempt to partially reveal the linear face representation mechanism in CNNs, in contrast to the commonly assumed complex feature encoding by CNNs. In addition, our results shed some light on the similarities and differences in face representation between CNNs and primate IT cortex. Finally, our results suggest that the linear face encoding by CNNs might be used for designing new CNNs for face recognition, which is also one of our future research directions.
Data availability {#data-availability .unnumbered}
=================
The CMU Multi-PIE dataset can be accessed at <http://www.cs.cmu.edu/afs/cs/project/PIE/MultiPie/Multi-Pie/Home.html>. The LFW dataset can be accessed at <http://vis-www.cs.umass.edu/lfw/index.html>. The MegaFace dataset can be accessed at <http://megaface.cs.washington.edu/>.
[^1]: Dlib toolkit could be downloaded at <http://dlib.net/>
[^2]: <https://github.com/ronghuaiyang/arcface-pytorch>
|
---
author:
- |
[Jisho Miyazaki]{}$^\ast$\
$^\ast$ Saihoji, Fukui, Japan
bibliography:
- 'bibliography.bib'
title: 'Strongly non-quantitative classical information in quantum carriers'
---
A quantum state from which one can guess little about its underlying physical system may hide knowledge of the system that is revealed when a copy of the quantum state is supplied. We give an example of two quantum states parameterized differently by the same random variable such that the first state alone offers a more accurate guess about the random variable in any figure of merit, while two copies of the second quantum state together do better, in some figure of merit, than two copies of the first. The amount of information contained in quantum carriers thus does not behave quantitatively with respect to the number of simultaneously available carriers. Hidden information activated by copies implies that the capability of quantum states to carry classical information cannot be specified from the single state alone.
When the complete description of a carrier of information is given by a probability distribution or a quantum density operator, a single measurement of the carrier may not suffice to perfectly recover the original information conveyed by the carrier. It is better to request multiple copies of the same carrier from the source if possible. If resources such as the number of copies are limited, we have to optimize the measurement and guessing strategy to obtain better information.
In this article, we show that the amount of information contained in quantum carriers may increase under copying, so that it behaves non-quantitatively with respect to the number of copies measured together. Suppose we have two carriers ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$, whose states are differently parameterized by a random variable of the underlying physical system. Carrier ${{\mathcal{E}_\rho}}$ alone is assumed to offer better knowledge about the system than carrier ${{\mathcal{E}_\tau}}$ does: that is, the reader can make a more accurate guess about the value of the random variable from measurement results on ${{\mathcal{E}_\rho}}$ than from those on ${{\mathcal{E}_\tau}}$, where the accuracy is measured by a certain figure of merit. The reader might then expect that multiple copies of ${{\mathcal{E}_\rho}}$ will give even better information than multiple copies of ${{\mathcal{E}_\tau}}$, and would prefer carrier ${{\mathcal{E}_\rho}}$ whether or not copying is possible. Behind this expectation is the intuition that information content is a quantity inherent to its carriers, and grows quantitatively with the number of identical carriers.
If the carriers are quantum entities, however, two copies of ${{\mathcal{E}_\tau}}$ may offer better knowledge about the system. A series of analyses on entangled measurements [@PeresWootters1991; @Massar1995; @Massar2000] leads to the existence of two carriers such that the first alone contains more information in a certain measure, while two copies of the second carrier benefit from entangled measurements and together offer more information, in the same measure, than two copies of the first. When the amount of information contained in these carriers is evaluated by a certain measure, it does not necessarily behave quantitatively with respect to the number of identically copied quantum carriers.
A question at this point is whether non-quantitative information (NQI) can be exhibited without employing particular measures of information. Even if carrier ${{\mathcal{E}_\rho}}$ contains more information than ${{\mathcal{E}_\tau}}$ does in a certain measure, it does not necessarily do so in another measure [@JoszaSchlienz2000; @Chefles2002]. If a certain measure behaves non-quantitatively on a pair of carriers under copying, and if another measure evaluates their information content without copies differently, we say the pair exhibits [*weakly*]{} non-quantitative information (wNQI). The choice of measure is essential for wNQI.
Pairs of quantum carriers, if carefully chosen, may exhibit NQI independently of the measure evaluating the information content of carriers without their copies. As we show in the following, there are carriers ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$ such that the former alone offers better information about the system in any measure, but with copies, the latter performs better in a certain measure. In contrast to wNQI, the measure only needs to be chosen on copied carriers, hence we say these carriers exhibit [*strongly*]{} non-quantitative information (sNQI). A quantum carrier that on its own is less informative about the underlying physical system than another carrier in any measure may still hide knowledge of the system and outperform the other when multiple copies of them are compared.
sNQI breaks the intuition that information content is a quantity inherent to its carriers. Besides its fundamental interest, further analysis of the pair of carriers exhibiting sNQI leads to observations on quantum information theory that wNQI does not provide. Among these observations, we address quantum non-Markovianity exhibited by multiple uses of the same channel sequences, the incompleteness of what we call “single-carrier” measures, and a relationship between quantum information and hidden classical information potentially activated by copying.
To explain NQI precisely, we employ the following abstract treatment of quantum carriers, their information content, and the strategies to obtain the information. A quantum carrier refers to any physical system whose state is described by a density operator on a Hilbert space ${\mathcal{H}}$. The density operator of the carrier is assumed to be parameterized by a random variable $x \in X$ and denoted by $\rho_x \in {\mathcal{B}}({\mathcal{H}})$ (${\mathcal{B}}({\mathcal{H}})$ denotes the space of linear operators on ${\mathcal{H}}$). Since the information-theoretic character of a carrier investigated in this article is completely characterized by the ensemble ${\mathcal{E}}_\rho = \{ \rho_x, p_x \}_{x \in X}$ of quantum states with probability $p_x$ of the random variable, we sometimes use the symbol ${\mathcal{E}}_\rho$ to refer also to the corresponding carrier.
When copies of the carrier are not available, the observer gets a supply of single carriers in state $\rho_x$ with probability $p_x$, on which they perform a measurement represented by positive operator-valued measure (POVM) elements $\{ E_y \in {\mathcal{B}}({\mathcal{H}}) \}_{y \in Y}$ on ${\mathcal{H}}$. They obtain result $y$ with probability $p(y|x) = {\mathrm{Tr}}[\rho_x E_y]$ and guess the value $x$ from $y$. The guessing process is represented by the function $g:Y \rightarrow X,~y \mapsto g_y$.
When copies of the carrier are available, the observer gets a supply of two carriers in the same state $\rho_x \otimes \rho_x$ with probability $p_x$, on which they perform a joint measurement represented by POVM elements $\{ E_y \in {\mathcal{B}}({\mathcal{H}}\otimes {\mathcal{H}}) \}_{y \in Y}$. They obtain result $y$ with probability $p(y|x) = {\mathrm{Tr}}[(\rho_x \otimes \rho_x) E_y]$ and guess the value $x$ from $y$. A strategy of the observer consists of the POVM measurement and the guessing function. The observer can optimize the strategy according to how the density operators are parameterized by the random variable, and to how the accuracy of guesses is estimated.
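For concreteness, the outcome statistics that such strategies act on can be tabulated numerically; the sketch below is ours and only spells out the Born rule for the single-carrier and two-copy cases.

```python
# Minimal sketch (ours): outcome probabilities p(y|x) = Tr[rho_x E_y] for single
# carriers, and p(y|x) = Tr[(rho_x ⊗ rho_x) E_y] when one extra copy is supplied.
import numpy as np

def outcome_probs(states, povm):
    """states: list of density matrices; povm: list of POVM elements (same space)."""
    return np.array([[float(np.real(np.trace(rho @ E))) for E in povm] for rho in states])

def outcome_probs_with_copy(states, joint_povm):
    doubled = [np.kron(rho, rho) for rho in states]
    return outcome_probs(doubled, joint_povm)
```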
Since we consider information content obtainable by measurement strategies, its measures are real-valued functions only of the measurement probabilities of single POVM measurements applied to carriers. For such a function ${{\mathcal{M}}}$ to be a measure of information content obtainable without copies, it must satisfy the following: Let ${\mathcal{E}}_\tau = \{ \tau_x \in {\mathcal{B}}({\mathcal{H}}_1), p_x \}_{x \in X}$ and ${\mathcal{E}}_\rho = \{ \rho_x \in {\mathcal{B}}({\mathcal{H}}_2), p_x \}_{x \in X}$ be ensembles with the same random variable. If for any set of POVM elements $\{ E_y \in {\mathcal{B}}({\mathcal{H}}_1) \}_{y \in Y}$ there is a set of POVM elements $\{ E'_y \in {\mathcal{B}}({\mathcal{H}}_2) \}_{y \in Y}$ such that ${\mathrm{Tr}}[ E_y \tau_x ] = {\mathrm{Tr}}[ E'_y \rho_x ]$ holds for any $y \in Y$ and $x \in X$, then ${{\mathcal{M}}}({\mathcal{E}}_\tau) \leq {{\mathcal{M}}}({\mathcal{E}}_\rho)$. In words, if measurement results for ensemble ${\mathcal{E}}_\tau$ can be reproduced by measurement results for ${\mathcal{E}}_\rho$, the information content of ${\mathcal{E}}_\tau$ must be estimated to be lower than or equal to that of ${\mathcal{E}}_\rho$. Conversely, any function of probabilities obtained by single POVM measurements satisfying the above condition is regarded as a measure of information content obtainable without copies, and we call such functions single-carrier (SC) measures.
The set of SC measures thus defined contains distinguishability measures such as the maximum probability of correct hypothesis testing [@Chefles1998hypothesis] and unambiguous state discrimination [@Helstrom1976]. These measures include a maximization or minimization over measurement probabilities on single carriers in their definition. If any measurement on ${{\mathcal{E}_\tau}}$ can be simulated by one on ${{\mathcal{E}_\rho}}$, the distinguishability of ${{\mathcal{E}_\rho}}$ should be evaluated higher, since ${{\mathcal{E}_\rho}}$ has a larger family of measurement probabilities over which the optimization is taken. There are also SC measures, such as accessible information [@Holevo1973], which are not considered distinguishability measures.
When an SC measure ${{\mathcal{M}}}$ is used to estimate the information content of ${\mathcal{E}}_\rho = \{ \rho_x ,p_x \}_{x \in X}$ without copies, the corresponding measure of information content obtainable with the aid of a single copy is ${{\mathcal{M}}}_{2}({\mathcal{E}}_\rho) := {{\mathcal{M}}}(\{ \rho_x \otimes \rho_x , p_x \}_{x \in X})$. We call ${{\mathcal{M}}}_{2}$ a double-carrier (DC) measure. Measurements for a DC measure may be performed jointly on the two copies of the same state from the ensemble.
NQI can be stated in a precise manner based on the presented setup. When a pair of quantum carriers, ${\mathcal{E}}_\rho = \{ \rho_x , p_x \}_{x \in X}$ and ${\mathcal{E}}_\tau = \{ \tau_x , p_x \}_{x \in X}$, satisfies the following two conditions: $$\begin{aligned}
\label{eq:cond1} {{\mathcal{M}}}({\mathcal{E}}_\rho) &>& {{\mathcal{M}}}({\mathcal{E}}_\tau),\\
\label{eq:cond2} {{\mathcal{M}}}_{2}({\mathcal{E}}_\rho) &<& {{\mathcal{M}}}_{2}({\mathcal{E}}_\tau),\end{aligned}$$ for some measure of information content ${{\mathcal{M}}}$, the pair is said to exhibit wNQI. If the pair further satisfies $$\label{eq:cond3} {{\mathcal{M}}}' ({\mathcal{E}}_\rho) \geq {{\mathcal{M}}}' ({\mathcal{E}}_\tau),$$ for any SC measure ${{\mathcal{M}}}'$, the pair is said to exhibit sNQI. In what follows we present a measure of information content and an example pair of carriers exhibiting sNQI.
The random variable in this article is a vector ${{\mathbf{n}}}$ uniformly distributed over the unit sphere $S_2$, called the “spin direction” for its relevance to particle physics. For this random variable, the averaged fidelity used in [@Massar1995; @GisinPopescu1999; @Massar2000] estimates the information content of a carrier. The averaged fidelity ${{\mathrm F}}({\mathcal{E}}_\rho)$ as an SC measure for a carrier ${\mathcal{E}}_\rho =\{ \rho_{{\mathbf{n}}}, {\mathrm{d}}{{\mathbf{n}}}\}_{{{\mathbf n} \in S_2}}$ (${\mathrm{d}}{{\mathbf{n}}}$ represents the probability density for the uniform distribution over the unit sphere) is $$\label{eq:maxfidelity} {{\mathrm F}}({\mathcal{E}}_\rho) := \max \int p(y|{{\mathbf{n}}}) \frac{1+{{\mathbf{n}}}\cdot \mathbf{g}_y}{2} {\mathrm{d}}{{\mathbf{n}}}{\mathrm{d}}y,$$ where the maximization is over strategies constituted of POVM elements $\{ E_y \}_{y \in Y}$ and guessing processes $g: y \mapsto \mathbf{g}_y \in S_2$. The averaged fidelity estimates how much, on average, the observer can learn about the direction ${{\mathbf{n}}}$ from a given carrier with state $\rho_{{\mathbf{n}}}$, where the score of learning is $\cos^2 (\alpha/2) = (1+{{\mathbf{n}}}\cdot \mathbf{g}_y)/2$ with $\alpha$ the angle between ${{\mathbf{n}}}$ and the guess $\mathbf{g}_y$.
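As a quick numerical sanity check (ours, not part of the derivations cited below), the value ${{\mathrm F}}({{\mathcal{E}_\rho}}) = 2/3$ quoted in TABLE \[table\] is reproduced by the elementary strategy of measuring $\sigma_3$ on the qubit and guessing ${{\mathbf{n}}}= \pm(0,0,1)$ according to the outcome:

```python
# Monte-Carlo sketch (ours): averaged fidelity of the measure-sigma_3-and-guess
# strategy on rho_n, with n uniform on the sphere; the estimate converges to 2/3.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
v = rng.normal(size=(N, 3))
n = v / np.linalg.norm(v, axis=1, keepdims=True)      # uniform directions on S^2
nz = n[:, 2]
p_plus = (1 + nz) / 2                                  # probability of outcome "+"
score = p_plus * (1 + nz) / 2 + (1 - p_plus) * (1 - nz) / 2
print(score.mean())                                    # ~ 0.667
```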
Dimensions of Hilbert spaces for our carriers ${{\mathcal{E}_\rho}}= \{ {\rho_{\mathbf{n}}}, {\mathrm{d}}{{\mathbf{n}}}\}_{{{\mathbf n} \in S_2}}$ and ${{\mathcal{E}_\tau}}= \{ {\tau_{\mathbf n, \delta}}, {\mathrm{d}}{{\mathbf{n}}}\}_{{{\mathbf n} \in S_2}}$ exhibiting sNQI are $2$ and $4$, respectively. For later convenience we denote the Hilbert space for ${\rho_{\mathbf{n}}}$ by ${\mathcal{H}}$ and that for ${\tau_{\mathbf n, \delta}}$ by ${\mathcal{H}}\otimes {\mathcal{H}}'$ where $\dim {\mathcal{H}}= \dim {\mathcal{H}}' =2$. The density operators ${\rho_{\mathbf{n}}}$ and ${\tau_{\mathbf n, \delta}}$ are given by $$\begin{aligned}
\label{eq:rhn} {\rho_{\mathbf{n}}}&:=& \frac{{\mathbb{I}}_{\mathcal{H}}+ \sum_{i=1}^{3} n_i \sigma_i}{2}, \\
\label{eq:hfe} {\tau_{\mathbf n, \delta}}&:=& {\rho_{\mathbf{n},\delta}}\otimes \frac{{| 0 \rangle \langle 0 |}}{2} + {\rho_{- \mathbf{n},\delta}}\otimes \frac{{| 1 \rangle \langle 1 |}}{2},\end{aligned}$$ where ${\mathbb{I}}_{\mathcal{H}}$ is the identity operator on ${\mathcal{H}}$, $\sigma_i$ ($i=1,2,3$) are unitary Pauli operators, ${| 0 \rangle}, {| 1 \rangle} \in {\mathcal{H}}'$ are orthonormal vectors, and state ${\rho_{\mathbf{n},\delta}}$ is defined by $$\label{eq:rhne} {\rho_{\mathbf{n},\delta}}= (1- \delta) {\rho_{\mathbf{n}}}+ \delta \frac{{\mathbb{I}}_{\mathcal{H}}}{2}$$ with a constant $\delta \in [0,1]$.
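The states of Eqs. (\[eq:rhn\]) and (\[eq:hfe\]) are easy to write down explicitly; the following sketch (ours, with arbitrary function names) builds them as plain matrices, which can be convenient for checking measurement strategies numerically.

```python
# Sketch (ours): explicit matrices for rho_n, rho_{n,delta} and tau_{n,delta}.
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho(n):
    return (I2 + n[0] * sx + n[1] * sy + n[2] * sz) / 2

def rho_delta(n, delta):
    return (1 - delta) * rho(n) + delta * I2 / 2

def tau(n, delta):
    n = np.asarray(n, dtype=float)
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # |0><0|, |1><1| on H'
    return np.kron(rho_delta(n, delta), P0 / 2) + np.kron(rho_delta(-n, delta), P1 / 2)
```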
To check that carriers ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$ exhibit wNQI, we list the averaged fidelity for both carriers in TABLE \[table\].
averaged fidelity ${{\mathcal{E}_\rho}}$ ${{\mathcal{E}_\tau}}$
---------------------------------------- ----------------------------- -----------------------------------------------------------------------
without copies ${{\mathrm F}}$ $\frac{2}{3}$ $\frac{2}{3} - \frac{\delta}{6}$
with a single copy ${{\mathrm F}}_{2}$ $\frac{3}{4}$ [@Massar1995] l.b.: $ \frac{2\sqrt{3} + 15}{24} - \frac{2 \sqrt{3} + 3}{24} \delta$
: \[table\]The averaged fidelity for carriers ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$ with and without their copies. ${{\mathrm F}}({{\mathcal{E}_\rho}})$ is well known and ${{\mathrm F}}_{2}({{\mathcal{E}_\rho}})$ is obtained in [@Massar1995]. Only a lower bound is derived for ${{\mathrm F}}_{2}({{\mathcal{E}_\tau}})$ (“l.b.” stands for lower bound). See the supplemental material [@suppl] for the derivation of ${{\mathrm F}}({{\mathcal{E}_\tau}})$ and of the lower bound of ${{\mathrm F}}_{2}({{\mathcal{E}_\tau}})$.
When $\delta =0$, ensembles ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$ have the same averaged fidelity, and ${{\mathrm F}}({{\mathcal{E}_\tau}})$ decreases as $\delta$ increases. In particular, condition (\[eq:cond1\]) is satisfied for non-zero $\delta$. While the exact value of ${{\mathrm F}}_{2}({{\mathcal{E}_\tau}})$ has not been obtained, we have constructed a strategy $(\{ E_y \}_{y\in Y}, g)$ giving the lower bound $(2\sqrt{3} + 15)/24 - (2 \sqrt{3} + 3)\delta/24$, which is greater than ${{\mathrm F}}_{2}({{\mathcal{E}_\rho}}) = 3/4$ when $\delta < 7-4\sqrt{3} \approx 0.0718$ [@suppl]. At least when $0 < \delta < 7-4\sqrt{3}$, carriers ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$ exhibit wNQI.
To show sNQI, it remains to prove condition (\[eq:cond3\]). In the supplemental material [@suppl] we construct a unital positive map ${{\mathcal{L}}}_\delta:{\mathcal{B}}({\mathcal{H}}\otimes {\mathcal{H}}') \rightarrow {\mathcal{B}}({\mathcal{H}})$ such that $$\label{eq:unitalmap} {\mathrm{Tr}}[ E {\tau_{\mathbf n, \delta}}] = {\mathrm{Tr}}[ {{\mathcal{L}}}_\delta (E) {\rho_{\mathbf{n}}}] ~ (\forall {{\mathbf{n}}}\in S_2),$$ for any operator $E \in {\mathcal{B}}({\mathcal{H}}\otimes {\mathcal{H}}')$. Existence of the map ${{\mathcal{L}}}_\delta$ satisfying Eq. (\[eq:unitalmap\]) is sufficient for condition (\[eq:cond3\]). In fact any POVM measurement with elements $\{ E_i \}_{i \in I}$ on ensemble ${{\mathcal{E}_\tau}}$ is simulated by that with elements $\{ {{\mathcal{L}}}_\delta (E_i) \}_{i \in I}$ on ${{\mathcal{E}_\rho}}$.
Remarkably, condition (\[eq:cond3\]) is satisfied with equality for any SC measure at $\delta = 0$. This can be seen from the inverse relation to Eq. (\[eq:unitalmap\]): namely, there is a unital positive map $\mathcal{J}:{\mathcal{B}}({\mathcal{H}}) \rightarrow {\mathcal{B}}({\mathcal{H}}\otimes {\mathcal{H}}')$ such that $$\label{eq:unitalmap2} {\mathrm{Tr}}[ \mathcal{J}(E) \tau_{{{\mathbf{n}}},0} ] = {\mathrm{Tr}}[ E {\rho_{\mathbf{n}}}] ~ (\forall {{\mathbf{n}}}\in S_2),$$ for any operator $E \in {\mathcal{B}}({\mathcal{H}})$ [@suppl]. Any SC measure is evaluated to be the same for ensembles ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$ at $\delta=0$, since any POVM measurement on carrier ${{\mathcal{E}_\rho}}$ can be simulated by one on ${{\mathcal{E}_\tau}}$ and vice versa.
In summary, the pair of carriers ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$, whose states are defined by Eqs. (\[eq:rhn\]) and (\[eq:hfe\]), satisfies conditions (\[eq:cond1\]), (\[eq:cond2\]) and (\[eq:cond3\]) when $0 < \delta < 7-4\sqrt{3}$. These carriers exhibit sNQI: the information content of these carriers reverses when copies are available. Without copies, the spin direction cannot be guessed more accurately by measurements on ${{\mathcal{E}_\tau}}$ than on ${{\mathcal{E}_\rho}}$ in any figure of merit. With copies, in other words when pairs of these carriers are compared, the averaged fidelity of ${{\mathcal{E}_\tau}}$ is higher than that of ${{\mathcal{E}_\rho}}$.
The averaged fidelities calculated above to show sNQI do not contradict the values of mutual information [@CoverThomas2006]. In FIG. \[fig:mutinfo\], we plot the mutual information $$\label{eq:mutinfo} H (S_2;Y) := \int p(y|{{\mathbf{n}}}) \log_2 \frac{p(y|{{\mathbf{n}}})}{p(y)} {\mathrm{d}}{{\mathbf{n}}}{\mathrm{d}}y,$$ between the spin direction $S_2$ of the underlying physical system and the observers’ register $Y$ created by the measurements giving the fidelities listed in TABLE \[table\].
![\[fig:mutinfo\] Mutual information (\[eq:mutinfo\]) between the spin direction and the observers’ register obtained by measuring carriers ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$ with and without their copies. The POVM elements $\{ E_y \}_{y \in Y}$ of the observers’ measurements are those used to obtain the values of fidelity listed in TABLE \[table\]. The mutual information for ${{\mathcal{E}_\tau}}$ with its single copy is higher than that for ${{\mathcal{E}_\rho}}$ when $0 \leq \delta \leq 0.0575 $. See the supplemental material [@suppl] for derivations and analytic forms of these mutual information curves.](mutinfo.eps){width="8.4cm"}
With a single copy, the mutual information of ${{\mathcal{E}_\tau}}$ is larger than that of ${{\mathcal{E}_\rho}}$ for small enough $\delta$. Averaged fidelity and mutual information thus share a region of $\delta$ in which the ordering of their values is reversed under copying.
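The mutual information of Eq. (\[eq:mutinfo\]) can be estimated by simple Monte-Carlo integration over the uniform sphere once the conditional outcome probabilities of a given strategy are available; the sketch below is our own illustration and is not the derivation used for FIG. \[fig:mutinfo\].

```python
# Sketch (ours): Monte-Carlo estimate of H(S_2;Y) for a finite-outcome POVM,
# given p(y|n) evaluated at directions n sampled uniformly on the sphere.
import numpy as np

def mutual_information(p_y_given_n):
    """p_y_given_n: array (n_samples, n_outcomes); samples uniform on S^2."""
    p_y = p_y_given_n.mean(axis=0)                     # marginal p(y)
    with np.errstate(divide='ignore', invalid='ignore'):
        terms = p_y_given_n * np.log2(p_y_given_n / p_y)
    return float(np.mean(np.nansum(terms, axis=1)))    # 0 log 0 treated as 0
```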
Currently we are not sure whether the carriers ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$ exhibit sNQI with accessible information, namely, the maximally attainable mutual information. Under the assumption that the optimal strategy consists of covariant measurements [@Holevo1982], the values of mutual information plotted in FIG. \[fig:mutinfo\] for ${\mathcal{E}}_\rho$ with and without its copies, and for ${\mathcal{E}}_\tau$ without its copies, are maximal [@suppl]. Accessible information demonstrates sNQI if this assumption is true.
Perhaps sNQI runs against one’s intuition from classical information theory, because it is never demonstrated by any pair of probabilistic carriers, whose states are described by random variables. In terms of the difference between quantum and probabilistic carriers, sNQI originates from the gap between positivity and complete positivity. The unital positive map ${{\mathcal{L}}}_\delta$ satisfying Eq. (\[eq:unitalmap\]) is not completely positive, and the parallel application of the two maps ${{\mathcal{L}}}_\delta \otimes {{\mathcal{L}}}_\delta$ is no longer positive. Hence a POVM measurement with element $E$ on ${\tau_{\mathbf n, \delta}}\otimes {\tau_{\mathbf n, \delta}}$ is not necessarily simulated by the element ${{\mathcal{L}}}_\delta \otimes {{\mathcal{L}}}_\delta (E)$ on ${\rho_{\mathbf{n}}}\otimes {\rho_{\mathbf{n}}}$. Our measurement strategy on ${\tau_{\mathbf n, \delta}}\otimes {\tau_{\mathbf n, \delta}}$ makes use of such entangled POVM measurements. For probabilistic carriers, the classical analogue of Eq. (\[eq:unitalmap\]) immediately implies its extension to copied carriers, since positive maps between random variables are automatically completely positive. Thus probabilistic carriers never exhibit sNQI.
The difference between Markov processes in classical [@CoverThomas2006] and quantum information theory [@BaeChruscinski2016; @ChruscinskiManiscalco2014; @BuscemiDatta2016; @RivasHuelgaPlenio2014:markov; @BreuerLainePiiloVacchini2016] is highlighted by sNQI. Let us consider a sequence of classical-input quantum-output channels $(\Gamma_\rho: S_2 \rightarrow {\mathcal{B}}({\mathcal{H}}),~\Gamma_\tau: S_2 \rightarrow {\mathcal{B}}({\mathcal{H}}\otimes {\mathcal{H}}'))$ defined by $\Gamma_\rho ({{\mathbf{n}}}) = {\rho_{\mathbf{n}}}$ and $\Gamma_\tau ({{\mathbf{n}}}) = {\tau_{\mathbf n, \delta}}$. The existence of the positive map ${{\mathcal{L}}}_\delta$ (regarded as a statistical morphism in [@Buscemi2012; @Buscemi2016]) implies Markovianity of the sequence $(\Gamma_\rho,~\Gamma_\tau)$ in any of its classical snapshots: for any POVM measurement $\{ E_j \}_{j \in J}$ on ${{\mathcal{E}_\tau}}$ there exist a POVM measurement $\{ F_i \}_{i \in I}$ on ${{\mathcal{E}_\rho}}$ and a conditional probability $P(j|i)$ such that ${\mathrm{Tr}}[ E_j {\tau_{\mathbf n, \delta}}] = \sum_i P(j|i) {\mathrm{Tr}}[ F_i {\rho_{\mathbf{n}}}]$ holds for any ${{{\mathbf n} \in S_2}}$. Nevertheless, the sequence $(\Gamma_\rho,~\Gamma_\tau)$ exhibits quantum non-Markovianity, since ${{\mathcal{L}}}_\delta$ is not completely positive [@Buscemi2016]. Here, sNQI tells us that the increase of information content is simply demonstrated by the doubled sequence $(\Gamma_\rho \otimes \Gamma_\rho,~\Gamma_\tau \otimes \Gamma_\tau)$. In this way, sNQI adds a new perspective: quantum non-Markovianity can already be observed when certain sequences are used in combination with themselves.
Comparison of the carriers ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$ in their information content leads to a consequence for quantum statistics which we call the [*incompleteness*]{} of SC measures. According to sNQI, there is hidden information in ensembles of quantum states which cannot be witnessed by any SC measure. Moreover, even if the values of all SC measures are available at the same time, one cannot recognize the information hidden in ${{\mathcal{E}_\tau}}$ that is potentially activated by copying. In fact, at $\delta =0 $, any SC measure is evaluated to the same value for ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$, while at least one DC measure is evaluated higher for ${{\mathcal{E}_\tau}}$. In this sense the set of all SC measures is incomplete among all measures, since they are not sufficient for recognizing the hidden information potentially activated by copying.
If SC measures do not witness the hidden information, which measure effectively detects it without the use of measurements on copied systems? The incompleteness of SC measures tells us that such a measure does not estimate [*classical*]{} information extracted by measurements. Therefore, it is worth comparing the DC measures with measures of [*quantum*]{} information to see if classical information is hidden in the form of quantum information, even though the notion of quantum information itself is ambiguous [@Josza2004].
Among calculable functions of quantum information, the optimal compression rate $R$ of the blind compression task, in which the message sender has to compress a sequence of unidentified quantum states supplied from a source, is evaluated higher for ensemble ${{\mathcal{E}_\tau}}$ than for ${{\mathcal{E}_\rho}}$. It is given by the von Neumann entropy for ensembles constituted only of pure states, such as ${{\mathcal{E}_\rho}}$ [@JoszaSchumacher1994; @Schumacher1995], and can be calculated according to the prescription of [@KoashiImoto2001:compressibility; @KoashiImoto2002:operations] for ensembles of general mixed states. We have $R({{\mathcal{E}_\rho}}) = 1$, while $R({{\mathcal{E}_\tau}})$ keeps the constant value $2$ for $0 \leq \delta <1$. Thus the optimal blind compression rate witnesses the hidden information contained in ${{\mathcal{E}_\tau}}$.
This result on the blind compression rate, together with the incompleteness of SC measures, extends a known discrepancy between von Neumann entropy and pairwise fidelity. We have ${{\mathrm F}}_p ({\rho_{\mathbf{n}}}, \rho_{{\mathbf{ m }}}) = {{\mathrm F}}_p (\tau_{{\mathbf{ n }},0}, \tau_{{\mathbf{ m }},0})$ for all pairs of unit vectors ${{\mathbf{n}}},{\mathbf{ m }} \in S_2$, where the pairwise fidelity is ${{\mathrm F}}_p(\rho_1,\rho_2) := {\mathrm{Tr}}(\rho_1^{1/2} \rho_2 \rho_1^{1/2})^{1/2}$. However, the blind compression rates of the ensembles ${{\mathcal{E}_\rho}}$ and ${{\mathcal{E}_\tau}}$ at $\delta=0$ differ. Thus, it is possible to change the blind compression rate while keeping the values of all pairwise fidelities and all SC measures of the ensemble. The same discrepancy was previously known between von Neumann entropy and pairwise fidelity for pure-state ensembles [@JoszaSchlienz2000]. Here we extend the discrepancy to mixed-state ensembles, where the von Neumann entropy is generalized to the blind compression rate, and under this generalized setting we answer a question left open in [@JoszaSchlienz2000]: SC measures such as accessible information and minimum error probability do not help in calculating the blind compression rate for mixed-state ensembles.
The presented sNQI reveals that the concept of “classical information” is independent of its “carrier” in quantum theory. When we say “classical information is conveyed by its carrier,” it is assumed that the carrier itself has an inherent ability to convey the information. It is already known that this inherent ability does not behave perfectly quantitatively when different carriers are combined (see e.g. [@GisinPopescu1999]). Still, we intuitively consider that good carriers remain good when identical copies of them can be used at the same time. This remaining intuition finally collapses with the discovery of sNQI. If carriers “contain” classical information, how can a carrier contain hidden information that is only potentially activated? Classical information requires a carrier when it is conveyed. However, the ability to convey classical information is not inherent in each carrier, but rather in the final configuration of carriers at the message receiver.
acknowledgments {#acknowledgments .unnumbered}
===============
We thank E. Wakakuwa for helpful comments, and the monks and nuns of Toshoji for their support at the stage of summarizing our results into an article.
|
---
abstract: 'The orientational order of nematic liquid crystals is traditionally studied by means of the second-rank ordering tensor ${\mathbb{S}}$. When this is calculated through experiments or simulations, the symmetry group of the phase is not known *a priori*, but needs to be deduced from the numerical realisation of ${\mathbb{S}}$, which is affected by numerical errors. There is no generally accepted procedure to perform this analysis. Here, we provide a new algorithm suited to identifying the symmetry group of the phase. As a by-product, we prove that there are only five phase-symmetry classes of the second-rank ordering tensor and give a canonical representation of ${\mathbb{S}}$ for each class. The nearest tensor of the assigned symmetry is determined by group-projection. In order to test our procedure, we generate uniaxial and biaxial phases in a system of interacting particles, endowed with $D_{\infty h}$ or $D_{2h}$ symmetry, which mimic the outcome of Monte-Carlo simulations. The actual symmetry of the phases is correctly identified, along with the optimal choice of laboratory frame.'
author:
- 'Stefano S.Turzi [^1]'
- Fulvio Bisi
title: |
**Determination of the symmetry classes\
of orientational ordering tensors**
---
Introduction
============
The orientational order of an ensemble of molecules is a key feature in complex fluids made of anisotropic molecules, e.g. liquid crystals. For example, the phase of a liquid crystal affects some rheological and optical properties of the material, such as the viscosity coefficients and the refractive index. From a mathematical viewpoint, the phase is a macroscopic manifestation of the point-group symmetry of the mesoscopic orientational order of the molecules. The precise quantification of the notion of order requires the introduction of the orientational probability density function of the molecules. It is impractical to study this function in its full generality from a mathematical perspective. Furthermore, only its very first moments are amenable to experimental investigation. For these reasons the orientational probability density is usually truncated at the second-rank level, and this defines the second-rank ordering tensor ${\mathbb{S}}$. The matrix entries of ${\mathbb{S}}$ are usually considered to capture correctly the most important features of the mesoscopic order.
The final output of a molecular dynamics or a Monte Carlo simulation of a liquid crystal compound is given in terms of the orientations of the molecular frames of reference, for all molecules. The ordering tensor ${\mathbb{S}}$ is then obtained by averaging over all molecular orientations. However, the computations have to be carried out with respect to an arbitrarily chosen laboratory frame. By contrast, key physical information such as phase symmetry, directors and order parameters is readily accessible only when the laboratory axes are chosen in agreement with the yet unknown underlying symmetry of the orientational distribution. Therefore, the experimental or simulation data need to be analysed and refined in order to capture the physical features of the system at the meso-scale, and there is no standard method to perform this analysis.
The main motivation of the present work is to provide such a systematic procedure to determine the symmetry class (the “phase” of the system), the symmetry axes (the “directors”), and the scalar order parameters of a liquid crystal compound whose second-rank ordering tensor is obtained through experiments or simulations.
Experimental or numerical errors are a further source of complications and may hinder the correct identification of the phase symmetry, even in the simplest cases. For instance, a uniaxial order could be described naively as a phase in which rod-like molecules are substantially aligned parallel to a fixed direction, identified by a well defined director; in such a phase most of the entries of ${\mathbb{S}}$ ought to vanish, but the presence of errors can make all the entries generally non-zero. Furthermore, an unwise choice of the laboratory axes may be the cause of several non-vanishing entries. When dealing with such an ordering tensor, it may not be immediately evident whether these non-vanishing entries reveal an intrinsic lack of uniaxial phase symmetry or are a consequence of one, or possibly both, of the issues described.
Our strategy takes inspiration from a similar problem in Elasticity where the main concern is the identification of the linear elastic tensor of a particular material symmetry. This problem has been intensively studied by a number of authors in the last decades. We refer to [@ForteVianello; @2011Slawinski] for a historical overview. In particular, we adopt similar mathematical techniques and in this respect we have found the following papers particularly illuminating [@ForteVianello; @2011Slawinski; @2004Slawinski; @1987Wadhawan; @1963Toupin; @1998Geymonat].
This paper is organised as follows. Sec. \[sec:background\] reviews the theoretical background on orientational order parameters; namely, the spherical and Cartesian definitions of orientational ordering tensors are discussed. The specific case of *second-rank* ordering tensors is developed in Sec. \[sec:second-rankOPs\]. The definitions we provide here best fit the group-theoretic analysis put forward in the rest of the paper, and allow us to take a non-standard view on this topic, by describing the ordering tensor in terms of a linear map in the space of symmetric, traceless second-rank tensors; an analogous approach is found in Refs. [@2011turzi; @2015chillb; @2015chill; @07Rosso]. In Secs. \[sec:second-rankOPs\_symmetry\] and \[sec:second-rankOPs\_symmetryclasses\] we define the notion of symmetry class of an ordering tensor and prove that it is only possible to distinguish five phase-symmetry classes at the second-rank level. A finer identification of the phase group of a non-polar liquid crystal requires higher-rank ordering tensors. The following Sec. \[sec:identification\] deals with the identification of the closest ordering tensor that belongs to a given symmetry class. To this end, we introduce the invariant projection onto a chosen symmetry group, define the distance of the raw ordering tensor from one of the five symmetry classes and provide a canonical representation of the ordering tensor in each symmetry class. After all these mathematical ingredients are established, we describe the algorithm for the determination of the effective phase in Sec. \[sec:identification\_effectivephase\]. Sec. \[sec:examples\] contains the discussion of two paradigmatic examples where the algorithm is put into practice, and Sec. \[sec:conclusions\] summarises the results.
Notations. {#sec:notations}
----------
For the reader’s convenience, let us give a brief description of some notational conventions used throughout the paper.
1. Vectors in the linear space $W$ isomorphic to $\real^3$ are denoted by boldface small letters (${\mathbf{a}}, {\mathbf{b}}, {\mathbf{c}}, \dots,{\mathbf{u}}, {\mathbf{v}},\dots$). After choosing an orthonormal basis $\{\ex,\ey,\ez\}$, the coordinates of a vector ${\mathbf{v}}$ are denoted by the same plain letter, with a subscript (in general $i=1,\dots,3$) distinguishing the coordinates: ${\mathbf{v}}=v_1 \ex+ v_2 \ey + v_3 \ez$. This avoids confusion between ${\mathbf{v}}_1$, which is the first vector in a list of vectors, and $v_1$, which is the first coordinate of the vector ${\mathbf{v}}$: $v_1 = {\mathbf{v}}\cdot \ex$. Unit vectors in the laboratory reference frame play a special role, therefore we will denote them by ${\bm{\ell}}_{\xi}$, with $\xi = x,y,z$.
2. Second-rank tensors, i.e. linear maps ${\mathbf{L}}\colon W \to W$ in the linear space $W$, are denoted by boldface capital letters (${\mathbf{A}}, {\mathbf{B}}, \dots, \allowbreak {\mathbf{R}}, {\mathbf{S}}, {\mathbf{T}}, \dots$); ${\mathbf{I}}$ is the identity tensor.
3. \[sec:notationsdiad\] The tensor (or dyadic) product in $W$ between vectors ${\mathbf{a}}$ and ${\mathbf{b}}$ is a second-rank tensor such that $({\mathbf{a}}\operatorname{\otimes}{\mathbf{b}}){\mathbf{v}}= ({\mathbf{b}}\cdot{\mathbf{v}}){\mathbf{a}}$ for every vector ${\mathbf{v}}$, where the dot $\cdot$ denotes the standard inner (scalar) product (i.e. the matrix representative of the dyadic product is $({\mathbf{a}}\operatorname{\otimes}{\mathbf{b}})_{ij} = a_i b_j$ $(i,j = 1,\ldots, 3)$ in an orthonormal basis).
4. \[sec:innerT\] The scalar product between two second-rank tensors ${\mathbf{T}}$, ${\mathbf{L}}$ is defined as $$\label{eq:dot_tensors}
{\mathbf{T}}\cdot{\mathbf{L}}= \operatorname{tr}({\mathbf{T}}\transp {\mathbf{L}}) .$$
5. Second-rank tensors can be endowed with the structure of a linear space, denoted by $L(W)$; linear maps ${\mathbb{T}}\colon L(W) \to L(W)$ in such space are denoted by “blackboard” capital letters (${\mathbb{S}}, {\mathbb{T}}, \dots$).
6. \[sec:tensordiad\] The square tensor product ${\boxtimes}$ between second-rank tensors is to be interpreted as a tensor dyadic product: $$({\mathbf{L}}{\boxtimes}{\mathbf{M}}) \,{\mathbf{T}}= ({\mathbf{M}}\cdot {\mathbf{T}})\,{\mathbf{L}}, \quad \text{ for every tensor } {\mathbf{T}}.$$ A short numerical sketch of these tensor operations (items 3, 4 and 6) is given at the end of this list.
7. $\Otre$ is the orthogonal group, i.e. the group of all isometries of $W$ ($\real^3$); ${\mathbf{A}}\in \Otre \, \Leftrightarrow \, {\mathbf{A}}^{-1} ={\mathbf{A}}\transp \, \Rightarrow \det {\mathbf{A}}= \pm 1$, where the superscript $()\transp$ denotes the transposition.
8. $\essotre$ is the special orthogonal group, i.e. the subgroup of $\Otre$ of elements ${\mathbf{R}}$ satisfying $\det({\mathbf{R}}) = 1$. In other words, the group of 3D rotations.
9. Similarly, $\Odue$ is the orthogonal group in two dimensions, and $\essodue$ is the special orthogonal group in two dimensions.
10. \[sec:schoenflies\] For point symmetry groups, subgroups of $\Otre$, we comply with the standard Schönflies notation [@PointGroups; @2001Michel; @McWeeny]. Here, we only give a brief description of these groups and refer the reader to the cited references for a more in-depth discussion. The complete list includes the seven infinite sequences of axial groups $C_n$, $C_{nh}$, $C_{nv}$, $S_{2n}$, $D_{n}$, $D_{nd}$, $D_{nh}$ and the seven exceptional groups $T$, $T_d$, $T_h$, $O$, $O_h$, $I$, $I_h$. The axial group $C_n$ contains the $n$-fold rotational symmetry about an axis, and $D_n$ contains the $n$-fold rotational symmetry about an axis together with a 2-fold rotation about a perpendicular axis.
The other axial groups are obtained by adding reflections across planes through the main rotation axis, and/or a reflection across the plane perpendicular to the axis. In particular, the sub-indexes $h$, $v$, and $i$ stand for “horizontal”, “vertical” and “inversion” and denote, respectively, the presence of a mirror reflection perpendicular to the rotation axis ($\sigma_h$), a mirror reflection parallel to the rotation axis ($\sigma_v$) and the inversion ($\iota = -{\mathbf{I}}$). We recall that $C_1$ is the trivial “no symmetry” group; $S_2$ is the group of order two that contains the inversion and is usually written as $C_i$; the group of order two with a single mirror reflection is denoted by $C_{1h}$, $C_{1v}$ or $C_s$. By contrast, the seven exceptional groups contain multiple 3-or-more-fold rotation axes: $T$ is the rotation group of a regular tetrahedron, $O$ is the rotation group of a cube or octahedron and $I$ is the rotation group of the icosahedron. Finally, taking $n \to \infty$ yields the additional continuous groups: $C_{\infty}$, $C_{\infty h}$, $C_{\infty v}$, $D_{\infty}$, $D_{\infty h}$. $C_{\infty}$ is another notation for $\essodue$, and $C_{\infty v}$ is $\Odue$, which can be generated by $C_{\infty}$ and a reflection through any vertical plane containing the vertical rotation axis.
11. \[sec:ensemble\] The ensemble average of a function $\chi$ with respect to the orientational probability distribution of the molecules is sometimes denoted by angle brackets: $\langle \chi \rangle$.
General background on orientational order parameters {#sec:background}
====================================================
In the first part of the present section we recall the basics of liquid crystal theory. The simplest mesogenic molecules have cylindrical symmetry ($D_{\infty h}$, see Sec. \[sec:notations\](\[sec:schoenflies\])); if they are arranged so that their main axis (identified by a unit vector ${\mathbf{m}}$ parallel to the cylindrical symmetry axis) lies, on average, along one preferred direction, the mesophase is *uniaxial*. Each molecule may deviate slightly from the alignment direction, but in a uniaxial phase this deviation occurs at random, with equal probability in every direction about the preferred axis.
Formally speaking, let $f({\mathbf{m}})$ be the distribution describing the probability that the direction of the main axis of the molecule is exactly ${\mathbf{m}}$. Since opposite directions cannot be distinguished, by the symmetry of the molecule $f({\mathbf{m}}) = f(-{\mathbf{m}})$; the first moment of $f$ therefore vanishes, and the lowest-order non-trivial descriptor of the orientational order is the second moment $${\mathbf{N}}= \int_{S^2} ({\mathbf{m}}\otimes {\mathbf{m}}) f({\mathbf{m}})\ \dd \Omega\,,$$ where the integration is performed over the unit sphere $S^2$, in other words over all directions (e.g. $\dd \Omega = \sin\beta\, \dd \beta\ \dd \alpha$, with $\beta$ the colatitude and $\alpha$ the azimuth in a spherical frame).
Since we are interested in describing *non*-isotropic phases, typically the order tensor ${\mathbf{Q}}= {\mathbf{N}}- \frac{1}{3} {\mathbf{I}}$ is used, which is identically zero in the isotropic phase. ${\mathbf{Q}}$ is a symmetric traceless tensor (since ${\mathbf{N}}$ is symmetric and $\operatorname{tr}{\mathbf{N}}=1$), and a uniaxial phase corresponds to (at least) two equal eigenvalues for ${\mathbf{Q}}$; if we choose a laboratory (orthonormal) reference frame $( \lx, \ly, \lz )$ for which $\lz$ is along the preferred direction of the molecule (known as the *director* ${\mathbf{n}}$), we can write $$\label{eq:Quniaxial}
{\mathbf{Q}}= S \left(\diadlzz - \frac{1}{3}{\mathbf{I}}\right) = S \left({\mathbf{n}}\otimes {\mathbf{n}}- \frac{1}{3}{\mathbf{I}}\right)\,;$$ the scalar $S \in [-\frac{1}{2}, 1]$ is the main uniaxial order parameter, and is actually the ensemble average $\langle \frac{3}{2} \cos^2\beta - \frac{1}{2} \rangle$; $S=1$ would describe “perfect” alignment along ${\mathbf{n}}$[^2].
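For illustration, the construction of ${\mathbf{N}}$, ${\mathbf{Q}}$ and $S$ from a sample of molecular axes can be sketched in a few lines of Python/NumPy. The fragment below is only an illustration added here: the function name and the synthetic, noisy sample of unit vectors are our own choices, not data from any specific experiment or simulation.

```python
import numpy as np

def uniaxial_order(m):
    """Estimate N = <m (x) m>, Q = N - I/3, the order parameter S and the
    director n from an array m of shape (N, 3) whose rows are unit vectors."""
    m = np.asarray(m, dtype=float)
    N = np.einsum('ai,aj->ij', m, m) / len(m)   # second moment <m (x) m>
    Q = N - np.eye(3) / 3.0                     # traceless order tensor
    evals, evecs = np.linalg.eigh(Q)            # eigenvalues in ascending order
    S = 1.5 * evals[-1]                         # for prolate order (S > 0) the largest
    n = evecs[:, -1]                            # eigenvalue of Q equals 2S/3
    return Q, S, n

# Synthetic sample: molecular axes scattered around the z direction.
rng = np.random.default_rng(0)
m = rng.normal([0.0, 0.0, 1.0], 0.15, size=(10_000, 3))
m /= np.linalg.norm(m, axis=1, keepdims=True)
Q, S, n = uniaxial_order(m)
print(S, n)   # a large S (close to 1) and n along the z-axis, up to sign
```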
We point out that the matrix representation of the tensor ${\mathbf{Q}}=S({\mathbf{n}}\otimes {\mathbf{n}}- \frac{1}{3}{\mathbf{I}})$, which intrinsically describes a uniaxial phase, can be far from diagonal if the laboratory reference frame is poorly chosen (recall that the elements of the matrix representative of the dyadic product ${\mathbf{n}}\otimes {\mathbf{n}}$ are $n_i n_j, \,\, i,j = 1,\dots,3$; cf. Sec. \[sec:notations\]).
In a generic uniaxial phase the order parameter does not attain its maximum value, as the molecules are not perfectly aligned along ${\mathbf{n}}$. The molecules may, however, deviate from the direction of ${\mathbf{n}}$ in a way that is not entirely random. To picture the situation: in the previous case the molecules are uniformly distributed within a circular cone whose axis is along ${\mathbf{n}}$ and whose aperture is small. Under different circumstances, e.g. a frustration induced by the boundary of the region in which the liquid crystal is confined, the molecules might be distributed in an elliptical cone, meaning that there are two directions orthogonal to ${\mathbf{n}}$ along which the molecules have maximum and minimum deviations. By properly choosing the laboratory frame, the order tensor (now having 3 different eigenvalues) can be written as $${\mathbf{Q}}= S \left(\diadlzz - \frac{1}{3}{\mathbf{I}}\right) + P (\diadlxx - \diadlyy)\,,$$ where the additional biaxial order parameter $P = \langle \sin^2\beta\, \cos 2\alpha \rangle$ ranges in $[-1,1]$ and vanishes for uniaxial phases. Whenever $P \neq 0$ the phase is not purely uniaxial, but has a *biaxial* (orthorhombic) phase symmetry: a symmetry lower than that of the molecule.
From a dual point of view, molecules endowed with a $D_{2h}$ symmetry are characterised by 3 main axes instead of one; whenever only the main axes $\bm$ are aligned, the phase has the higher $D_{\infty h}$ (uniaxial) symmetry; when the other two axes also tend to align along two orthogonal directions, we obtain a phase with the same $D_{2h}$ (biaxial) symmetry as the molecule. We omit the details of the description in the laboratory frame (see [@2003Virga; @universal; @bisi2011]); the pictures in Fig. \[fig:D2hmols\] illustrate the difference between the two cases.
[0.4]{} ![Schematic representations of (a) $D_{\infty h}$ and (b) $D_{2h}$ phase-symmetries made with molecules possessing a $D_{2h}$ symmetry.[]{data-label="fig:D2hmols"}](fig_D2hmol_uni.pdf "fig:"){width="95.00000%"}
[0.4]{} ![Schematic representations of (a) $D_{\infty h}$ and (b) $D_{2h}$ phase-symmetries made with molecules possessing a $D_{2h}$ symmetry.[]{data-label="fig:D2hmols"}](fig_D2hmol_bi.pdf "fig:"){width="95.00000%"}
In general, when no symmetry for the molecule or the phase can be assumed *a priori*, the orientational distribution of a collection of molecules is most conveniently described in terms of a space-dependent probability density function $f({\mathbf{x}},{\mathbf{R}})$. Here ${\mathbf{x}}$ is the space point of the considered molecule and ${\mathbf{R}}\in \essotre$ is the rotation of a right orthonormal frame set in the molecule, with $({\mathbf{m}}_1, {\mathbf{m}}_2, {\mathbf{m}}_3)$ unit vectors along the axes, with respect to the laboratory frame of reference, identified by the three mutually orthogonal unit vectors $({\bm{\ell}}_1, {\bm{\ell}}_2, {\bm{\ell}}_3)$. In the following we drop the explicit dependence of $f$ on the space point ${\mathbf{x}}$, since we are mainly interested in its orientational properties, and we assume a continuous dependence on ${\mathbf{R}}$. Hence, $f \colon \essotre \to {\mathbb{R}}_+$ is a continuous function from the group of proper rotations to the non-negative real numbers. Several equivalent descriptions can be given of the rotation matrix ${\mathbf{R}}$ that describes the orientation of the molecule with respect to the laboratory axes. The matrix ${\mathbf{R}}$ is defined as the rotation that brings each unit vector ${\bm{\ell}}_i$ into coincidence with the corresponding molecular unit vector ${\mathbf{m}}_i$: ${\mathbf{R}}{\bm{\ell}}_i = {\mathbf{m}}_i$, $i=1,\dots,3$, and the matrix entries of ${\mathbf{R}}$ are given by the direction cosines $R_{ij} = {\bm{\ell}}_i \cdot {\mathbf{R}}{\bm{\ell}}_j = {\bm{\ell}}_i \cdot {\mathbf{m}}_j$, $i,j = 1,\dots,3$. Equivalent but more intrinsic descriptions of the same matrix are obtained as follows $${\mathbf{R}}= \sum_{i,j=1}^{3} R_{ij} {\bm{\ell}}_i \operatorname{\otimes}{\bm{\ell}}_j, \qquad \text{ or } \qquad
{\mathbf{R}}= \sum_{k=1}^{3} {\mathbf{m}}_k \operatorname{\otimes}{\bm{\ell}}_k \, ,
\label{eq:rotation_matrix}$$ (cf. Sec. \[sec:notations\]).
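As a quick numerical check of the expressions above (a small sketch added for illustration, with an arbitrarily chosen molecular frame), the matrix ${\mathbf{R}}$ can be assembled directly from the two frames:

```python
import numpy as np

ell = np.eye(3)                      # laboratory frame: rows are l_1, l_2, l_3
t = np.deg2rad(30.0)                 # molecular frame: lab frame rotated by 30 deg about z
m = np.array([[ np.cos(t), np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [ 0.0,       0.0,       1.0]])

# R = sum_k m_k (x) l_k ; its entries R_ij = l_i . m_j are the direction cosines.
R = sum(np.outer(m[k], ell[k]) for k in range(3))

assert np.allclose(R @ ell.T, m.T)   # R l_i = m_i for each i
assert np.allclose(R.T @ R, np.eye(3))
```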
It is known from group representation theory (more precisely from the Peter–Weyl theorem, see for instance [@1986Barut; @Sternberg]) that the matrix entries of all the irreducible representations of the rotation group $\essotre$ form a complete orthonormal set for the continuous functions $f \colon \essotre \to {\mathbb{R}}$. Traditionally, this irreducible decomposition is based on the properties of the spherical harmonic functions, as we now recall. The space $L^2(S^2)$ of square integrable functions over the two-dimensional unit sphere $S^2$ can be decomposed into the infinite direct sum of suitable finite-dimensional vector spaces $V_j$: $L^2(S^2) = \bigoplus_j V_j$, where $j$ is a non-negative integer. Each $V_j$ is generated by the spherical harmonics $\{Y_{jk}\}$ of rank $j$ and has dimension $\dim(V_j) = 2j + 1$. The irreducible representation of $\essotre$, given by ${\mathcal{D}}^{(j)}$, is defined by assigning the linear maps ${\mathcal{D}}^{(j)}({\mathbf{R}}): V_j \to V_j$ such that for all $\psi \in V_j$, ${\mathbf{R}}\in \essotre$ $${\mathcal{D}}^{(j)}({\mathbf{R}})\psi({\mathbf{x}}) = \psi ({\mathbf{R}}\transp{\mathbf{x}})$$ (where a superscript ‘$\mathrm{T}$’ stands for transpose). The explicit expressions for these irreducible representations of $\essotre$ are usually known as Wigner rotation matrices [@Wigner]. Hence, $f$ is usually expanded in terms of Wigner rotation matrices [@Wigner; @Rose; @1979zannoni].
Cartesian definition {#sec:background_Cartesian}
--------------------
However, for many purposes it is more convenient to use an equivalent definition and identify $V_j$ with the space of *traceless symmetric tensors* of rank $j$. In fact, traceless symmetric tensors can also be used to form a basis for the irreducible representations of $\essotre$ [@2011turzi; @Wigner]. This result is known in other branches of mathematics and physics and is sometimes called *harmonic tensor decomposition* [@ForteVianello]. The irreducible representations are then identified with the non-singular linear maps $D^{(j)}(g):V_j \to V_j$, defined as follows. If $({\mathbf{v}}_1, {\mathbf{v}}_2, \ldots, {\mathbf{v}}_j)$ is a set of $j$ vectors belonging to the three-dimensional real vector space $W\simeq {\mathbb{R}}^3$, we define the action of $g \in \essotre$ on $V_j$ as the restriction of the diagonal action of $\essotre$ over the tensor product space $W^{\operatorname{\otimes}j}:=\underbrace{W \operatorname{\otimes}W \operatorname{\otimes}\ldots \operatorname{\otimes}W}_{j\text{ times}}$. Explicitly, we define $$D(g)({\mathbf{v}}_1 \operatorname{\otimes}{\mathbf{v}}_2 \operatorname{\otimes}\ldots \operatorname{\otimes}{\mathbf{v}}_j) = g{\mathbf{v}}_1 \operatorname{\otimes}g{\mathbf{v}}_2 \operatorname{\otimes}\ldots \operatorname{\otimes}g{\mathbf{v}}_j
\label{eq:diagonal_action}$$ and then extend by linearity to any tensor ${\mathbf{T}}\in W^{\operatorname{\otimes}j}$. $D^{(j)}(g)$ is then obtained as the restriction of $D(g)$ to $V_j \subset W^{\operatorname{\otimes}j}$. The Peter–Weyl theorem can now be used to show that the entries $\sqrt{2j+1}\,D^{(j)}_{pm}({\mathbf{R}})$ form a purely Cartesian complete orthonormal system for the continuous functions in $\essotre$, with respect to its *normalised invariant (or Haar) measure* (in terms of the common Euler angles $\alpha, \beta,\gamma$, this measure is explicitly given by $\dd \mu = \frac{1}{8\pi^2}\sin\beta \dd \beta \ \dd \alpha \ \dd \gamma$). The Fourier expansion of the probability distribution function is [^3]
$$f({\mathbf{R}}) = \sum_{j=0}^{+\infty}\,\, (2j+1) \!\!\!{{\color{black}\sum_{p,m=0}^{2j}}} f_{pm}^{(j)} D^{(j)}_{pm}({\mathbf{R}}) ,$$
where the coefficients are readily obtained via orthogonality from the integrals $$f_{pm}^{(j)} = \int_{\essotre} D^{(j)}_{pm}({\mathbf{R}}) f({\mathbf{R}})\ \dd\mu({\mathbf{R}}) \, .
\label{eq:exp_coefficients}$$
When studying phase transformations, it is often useful to define an order parameter, that is, a quantity which changes its value on going from one phase to the other and can therefore be used to monitor the transition. From a molecular point of view, however, we should describe the passage from one phase to another in terms of the modifications that this produces in the distribution function. Therefore, a standard assumption is to identify the order parameters with the expansion coefficients (see Refs. [@1973Boccara; @1979zannoni; @1985zannoni; @2001luckhurst; @LuckhurstBook; @bisirossoC2v]).
Second-rank order parameters {#sec:second-rankOPs}
============================
In nematic liquid crystals, the expansion of the probability distribution function is usually truncated at $j=2$. The $j=0$ term represents the isotropic distribution, while the $j=1$ terms vanish for symmetry reasons. Therefore, the first non-trivial information about the molecular order is provided by the $j=2$ terms, which thus play a particularly important part in the theory. Higher-rank terms are sometimes also studied, but usually only in particular cases (e.g. uniaxial molecules) where simplifying assumptions or the symmetry of the problem restrict the number of independent order parameters and the complexity of their calculation. Indeed, it is already difficult to gain insight into the physical meaning of the $j=2$ order parameters when no particular symmetry is imposed [@2011turziC2h; @2012turziD2h; @2013TurziSluckin].
Therefore, in the present paper we will only consider the *second-rank order parameters*, i.e. with $j=2$, although, at least formally, it is easy to extend our definitions to higher rank ordering tensors. The invariant space ${V_2}$ is described as the (five dimensional) space of symmetric and traceless [[second-rank]{}]{} tensors[^4] on the three dimensional real space $W \simeq {\mathbb{R}}^3$. [[ Let $L({V_2})$ be the space of the linear maps ${V_2}\to {V_2}$.]{}]{}
Given the general definition of Cartesian ordering tensor , we are led to consider traceless symmetric tensor spaces and define second-rank ordering tensor (or order parameter tensor) as the [[linear map ${\mathbb{S}}\in L({V_2})$]{}]{}, such that [@2015chillb; @2015chill]
$${\mathbb{S}}({\mathbf{T}}) = \int_{\essotre} D^{(2)}({\mathbf{R}}){\mathbf{T}}\,f({\mathbf{R}})\ \dd\mu({\mathbf{R}}) := \langle D^{(2)}({\mathbf{R}})\rangle {\mathbf{T}},$$
where the $(j=2)$-irreducible representation matrix $D^{(2)}({\mathbf{R}})$ acts explicitly by conjugation as follows $$D^{(2)}({\mathbf{R}}){\mathbf{T}}= {\mathbf{R}}{\mathbf{T}}{\mathbf{R}}\transp .
\label{eq:conjugation}$$
It is worth noticing that the 5 $\times$ 5 matrices $D^{(2)}({\mathbf{R}})$, defined by , yield an irreducible real *orthogonal* representation of $\Otre$ and as such they satisfy $$D^{(2)}({\mathbf{R}})\transp = D^{(2)}({\mathbf{R}}\transp) = D^{(2)}({\mathbf{R}}^{-1})= D^{(2)}({\mathbf{R}})^{-1}.
\label{eq:D(R)orthogonality}$$ [[Since $\Otre$ is the direct product of $\essotre$ and $C_i$, each representation of $\essotre$ splits into two representations of $\Otre$. However, in our application the natural generalisation of to the $j^{th}$-rank tensors induces us to choose the following representation for the inversion: $D^{(j)}(\iota)= (-1)^j D^{(j)}({\mathbf{I}})$. Hence, when $j=2$, we obtain $D^{(2)}(\iota) = D^{(2)}({\mathbf{I}})$.]{}]{}
It is convenient for the sake of the presentation to introduce an orthonormal basis that describes the orientation of the molecule in the 5-dimensional space ${V_2}$. It is natural to build this basis on top of the three dimensional orthonormal frame $({\mathbf{m}}_1,{\mathbf{m}}_2,{\mathbf{m}}_3)$. Therefore, in agreement with [@07Rosso; @47; @48][^5], we define
$$\begin{aligned}
{\mathbf{M}}_0 & = \sqrt{\frac{3}{2}} \left({\mathbf{m}}_3 \operatorname{\otimes}{\mathbf{m}}_3 - \frac{1}{3}{\mathbf{I}}\right) , &
{\mathbf{M}}_1 & = \frac{1}{\sqrt{2}} \left({\mathbf{m}}_1 \operatorname{\otimes}{\mathbf{m}}_1 - {\mathbf{m}}_2 \operatorname{\otimes}{\mathbf{m}}_2 \right), \\
{\mathbf{M}}_2 & = \frac{1}{\sqrt{2}} \left({\mathbf{m}}_1 \operatorname{\otimes}{\mathbf{m}}_2 + {\mathbf{m}}_2 \operatorname{\otimes}{\mathbf{m}}_1 \right), &
{\mathbf{M}}_3 & = \frac{1}{\sqrt{2}} \left({\mathbf{m}}_2 \operatorname{\otimes}{\mathbf{m}}_3 + {\mathbf{m}}_3 \operatorname{\otimes}{\mathbf{m}}_2 \right), \\
{\mathbf{M}}_4 & = \frac{1}{\sqrt{2}} \left({\mathbf{m}}_1 \operatorname{\otimes}{\mathbf{m}}_3 + {\mathbf{m}}_3 \operatorname{\otimes}{\mathbf{m}}_1 \right).\end{aligned}$$
\[eq:molorienttensors\]
The tensors $\{{\mathbf{M}}_0,{\mathbf{M}}_1,\ldots,{\mathbf{M}}_4\}$ are orthonormal with respect to the standard scalar product .
Similarly, we define the basis of five symmetric, traceless tensors $\{{\mathbf{L}}_0,\ldots, {\mathbf{L}}_4\}$ in terms of the unit vectors $\{{\bm{\ell}}_1, {\bm{\ell}}_2,{\bm{\ell}}_3\}$ of the laboratory frame of reference. The matrix $D^{(2)}({\mathbf{R}})$ is an irreducible representation of the rotation ${\mathbf{R}}$ in the 5-dimensional space of symmetric traceless tensors. Specifically, it describes how the “molecular axis” ${\mathbf{M}}_i$ of a given molecule is rotated with respect to the laboratory axis: $$D^{(2)}({\mathbf{R}}) {\mathbf{L}}_i = {\mathbf{M}}_i, \qquad i = 0,\dots,4,$$ to be compared with the similar expression ${\mathbf{R}}{\bm{\ell}}_i = {\mathbf{m}}_i \allowbreak \, (i=1,\dots,3)\,$ in the three-dimensional space. Likewise, the components of $D^{(2)}({\mathbf{R}})$ and the analogues of Eq. become $$D^{(2)}({\mathbf{R}})_{ij} = {\mathbf{L}}_i \cdot {\mathbf{M}}_j , \qquad
D^{(2)}({\mathbf{R}}) = \sum_{i,j=0}^{4} D^{(2)}({\mathbf{R}})_{ij} {\mathbf{L}}_i {\boxtimes}{\mathbf{L}}_j , \qquad
D^{(2)}({\mathbf{R}}) = \sum_{k=0}^{4} {\mathbf{M}}_k {\boxtimes}{\mathbf{L}}_k,$$ [[where $i,j = 0,\dots,4$]{}]{}, and [[${\boxtimes}$ stands for the tensor dyadic product (cf. Sec. \[sec:notations\], .)]{}]{} Hence, the Cartesian components of the ordering tensor ${\mathbb{S}}\in L({V_2})$ are [[defined as]{}]{} $$S_{ij} = {\mathbf{L}}_{i} \cdot {\mathbb{S}}({\mathbf{L}}_j) = {\mathbf{L}}_{i} \cdot \langle {\mathbf{M}}_j \rangle,\qquad i,j= 0,\dots,4,
\label{eq:Sij}$$ [[with the usual notation for the ensemble average (cf. Sec. \[sec:notations\], .)]{}]{} The components give the averaged molecular direction ${\mathbf{M}}_j$ with respect to the laboratory axis ${\mathbf{L}}_i$. In general there are 25 independent entries (as expected). We can alternatively write $${\mathbb{S}}({\mathbf{L}}_i) = \langle {\mathbf{M}}_i \rangle, \,(i = 0,1,\ldots,4)\qquad
{\mathbb{S}}= \sum_{i,j=0}^{4} S_{ij} {\mathbf{L}}_{i} {\boxtimes}{\mathbf{L}}_j, \qquad
{\mathbb{S}}= \sum_{k=0}^{4} \langle {\mathbf{M}}_k \rangle {\boxtimes}{\mathbf{L}}_k \, .$$ In simulations, the orientational probability density $f({\mathbf{R}})$ is reconstructed by keeping track of the orientations of a large number $N$ of sample molecules. Thus, the ensemble average in is approximated by the sample mean of ${\mathbf{M}}_j$ and the components $S_{ij}$ are calculated as $$S_{ij} = \frac{1}{N}\sum_{\alpha=1}^{N} {\mathbf{L}}_{i} \cdot {\mathbf{M}}^{(\alpha)}_j ,\qquad i,j= 0,\dots,4,$$ where the index $\alpha$ runs over all the molecules in the simulation.
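The basis tensors, the $5\times 5$ matrix $D^{(2)}({\mathbf{R}})$ obtained through the conjugation rule $D^{(2)}({\mathbf{R}}){\mathbf{T}}= {\mathbf{R}}{\mathbf{T}}{\mathbf{R}}\transp$, and the sample estimate of $S_{ij}$ are easily coded. The following self-contained Python/NumPy sketch is an illustration added here: the helper names and the synthetic sample of molecular orientations (molecules spinning freely about ${\mathbf{m}}_3$, with ${\mathbf{m}}_3$ wobbling slightly about the laboratory $z$-axis) are our own choices.

```python
import numpy as np

def sym_basis(frame):
    """Five orthonormal symmetric traceless tensors built on an orthonormal
    frame (the rows of `frame` are the unit vectors e_1, e_2, e_3)."""
    e1, e2, e3 = frame
    o = np.outer
    return np.array([
        np.sqrt(1.5) * (o(e3, e3) - np.eye(3) / 3.0),
        (o(e1, e1) - o(e2, e2)) / np.sqrt(2.0),
        (o(e1, e2) + o(e2, e1)) / np.sqrt(2.0),
        (o(e2, e3) + o(e3, e2)) / np.sqrt(2.0),
        (o(e1, e3) + o(e3, e1)) / np.sqrt(2.0),
    ])

L = sym_basis(np.eye(3))                       # laboratory tensors L_0 ... L_4

def d2(R):
    """5x5 matrix of D2(R): entries L_i . (R L_j R^T)."""
    M = np.einsum('ac,jcd,bd->jab', R, L, R)   # M_j = R L_j R^T
    return np.einsum('iab,jab->ij', L, M)

def order_tensor(rotations):
    """Sample estimate S_ij = (1/N) sum_a L_i . M_j^(a)."""
    return np.mean([d2(R) for R in rotations], axis=0)

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def axis_angle(axis, angle):
    """Rodrigues formula for the rotation by `angle` about `axis`."""
    a = np.asarray(axis, float)
    a /= np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]], [a[2], 0.0, -a[0]], [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

rng = np.random.default_rng(1)
sample = []
for _ in range(2000):
    spin = rot_z(rng.uniform(0.0, 2.0 * np.pi))            # free spin about m_3
    tilt = axis_angle(np.append(rng.normal(size=2), 0.0),  # small wobble of m_3
                      0.25 * rng.normal())
    sample.append(tilt @ spin)

S = order_tensor(sample)
print(np.round(S, 3))   # dominated by S_00, as for uniaxial order about the z-axis
```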
As a final example, let us consider a system of uniaxial molecules with long axis ${\mathbf{m}}_3$. For symmetry reasons, all the averages of the molecular orientational tensors ${\mathbf{M}}_j$ with $j\neq 0$ vanish. The average of ${\mathbf{M}}_0$ yields the five components ($i = 0,1,\ldots,4$)
$$S_{i, 0} = \sqrt{\frac{3}{2}} \, {\mathbf{L}}_{i} \cdot \langle {\mathbf{m}}_3 \operatorname{\otimes}{\mathbf{m}}_3 - \tfrac{1}{3}{\mathbf{I}}\rangle,
\label{eq:S00}$$
which provides a description of the molecular order equivalent to the standard de Gennes ${\mathbf{Q}}$ tensor [[or Saupe ordering matrix [@LuckhurstBook; @1995deGennes], as given in Eq. ]{}]{}.
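Explicitly, since $\langle {\mathbf{M}}_0 \rangle = \sum_{i=0}^{4} S_{i,0}\, {\mathbf{L}}_i$, the identification can be spelled out as $${\mathbf{Q}}= \left\langle {\mathbf{m}}_3 \operatorname{\otimes}{\mathbf{m}}_3 - \tfrac{1}{3}{\mathbf{I}}\right\rangle = \sqrt{\tfrac{2}{3}}\, \sum_{i=0}^{4} S_{i,0}\, {\mathbf{L}}_{i}\,,$$ i.e. the five components $S_{i,0}$ are, up to the factor $\sqrt{3/2}$, the components of the usual ${\mathbf{Q}}$ tensor in the laboratory basis $\{{\mathbf{L}}_i\}$.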
Change of basis and group action {#sec:second-rankOPs_basis}
--------------------------------
Let us investigate how the components $S_{ij}$ are affected by a change of the molecular or laboratory frames of reference. We first consider the components of the ordering tensor with respect to rotated *laboratory axes*. More precisely, let ${\bm{\ell}}'_i={\mathbf{A}}_{P} {\bm{\ell}}_i$ be the unit vectors along the primed axes, obtained from the old ones by a (proper or improper) rotation ${\mathbf{A}}_P$. A molecule, whose orientation was described by the rotation ${\mathbf{R}}$ with respect to $({\bm{\ell}}_1,{\bm{\ell}}_2,{\bm{\ell}}_3)$, now is oriented as ${\mathbf{R}}{\mathbf{A}}_{P}\transp$ with respect to the primed frame $$\begin{aligned}
{\mathbf{R}}{\bm{\ell}}_i = {\mathbf{m}}_i = {\mathbf{R}}' {\bm{\ell}}'_i = {\mathbf{R}}' {\mathbf{A}}_P {\bm{\ell}}_i \qquad \Rightarrow \qquad {\mathbf{R}}'={\mathbf{R}}{\mathbf{A}}_{P}\transp .\end{aligned}$$ Correspondingly, in the five-dimensional space ${V_2}$, the rotation that brings the new frame $\{{\mathbf{L}}'_{i}\}$ into coincidence with the molecular frame $\{{\mathbf{M}}_i\}$ is $$D^{(2)}({\mathbf{R}}{\mathbf{A}}_{P}\transp) = D^{(2)}({\mathbf{R}})D^{(2)}({\mathbf{A}}_{P})\transp, \qquad
D^{(2)}({\mathbf{R}}{\mathbf{A}}_{P}\transp){\mathbf{L}}'_i = {\mathbf{M}}_i,$$ where we have used the identities $D^{(2)}({\mathbf{R}}){\mathbf{L}}_i = {\mathbf{M}}_i$ and $D^{(2)}({\mathbf{A}}_{P}){\mathbf{L}}_i = {\mathbf{L}}'_i$. The components of the ordering tensor in the new basis are then calculated as follows $$\begin{aligned}
S'_{i j} & = {\mathbf{L}}'_{i} \cdot \langle{\mathbf{M}}_{j} \rangle
= {\mathbf{L}}'_{i} \cdot \langle D^{(2)}({\mathbf{R}}{\mathbf{A}}_{P}\transp) \rangle {\mathbf{L}}'_j
= D^{(2)}({\mathbf{A}}_{P}){\mathbf{L}}_{i} \cdot \langle D^{(2)}({\mathbf{R}}) \rangle {\mathbf{L}}_j \notag \\
& = \sum_{k=0}^{4} D^{(2)}({\mathbf{A}}_{P}){\mathbf{L}}_{i} \cdot ({\mathbf{L}}_{k} {\boxtimes}{\mathbf{L}}_{k})\langle D^{(2)}({\mathbf{R}}) \rangle {\mathbf{L}}_j
= \sum_{k=0}^{4} D^{(2)}_{ik}({\mathbf{A}}_{P}\transp) S_{kj},
\label{eq:changebasis}\end{aligned}$$ where $D^{(2)}_{ik}({\mathbf{A}}_{P}\transp) = {\mathbf{L}}_{k} \cdot D^{(2)}({\mathbf{A}}_{P}){\mathbf{L}}_{i}$. Since only the relative orientation of the molecular frame with respect to the laboratory frame is important, a rigid rotation of *all* the molecules is equivalent to an inverse rotation of the laboratory frame. It is easy to check that the ordering tensor in the two cases is the same. Let ${\mathbf{A}}_M$ be a rotation common to all the molecular frames, so that the new molecular axes are ${\mathbf{m}}'_i = {\mathbf{A}}_M {\mathbf{m}}_i$. The orientation of these axes with respect to the laboratory frame is given by ${\mathbf{A}}_{M}{\mathbf{R}}$: ${\mathbf{A}}_{M}{\mathbf{R}}{\bm{\ell}}_i = {\mathbf{A}}_{M}{\mathbf{m}}_i = {\mathbf{m}}'_i$. The components of the ordering tensor then become $$\begin{aligned}
S'_{i j} & = {\mathbf{L}}_{i} \cdot \langle{\mathbf{M}}'_{j} \rangle
= {\mathbf{L}}_{i} \cdot D^{(2)}({\mathbf{A}}_{M})\langle D^{(2)}({\mathbf{R}}) \rangle {\mathbf{L}}_j \notag \\
& = \sum_{k=0}^{4} {\mathbf{L}}_{i} \cdot D^{(2)}({\mathbf{A}}_{M})({\mathbf{L}}_{k} {\boxtimes}{\mathbf{L}}_{k})\langle D^{(2)}({\mathbf{R}}) \rangle {\mathbf{L}}_j
= \sum_{k=0}^{4} D^{(2)}_{ik}({\mathbf{A}}_{M}) S_{kj},
\label{eq:changebasis_mol}\end{aligned}$$ to be compared with . When combined together, and show that only the relative rotation ${\mathbf{A}}_{M}{\mathbf{A}}^{T}_{P}$ has a physical meaning.
However, in this context a rotation of the molecular frame has to be interpreted as an orthogonal transformation of the molecular axes *before* the orientational displacement of the molecule, ${\mathbf{R}}$, has taken place. In such a case the overall rotation that brings the laboratory axes into coincidence with the new molecular axes is described by the product ${\mathbf{R}}{\mathbf{A}}_M$. The new components of the ordering tensor are then given by $$\begin{aligned}
S'_{i j} & = {\mathbf{L}}_{i} \cdot \langle D^{(2)}({\mathbf{R}}{\mathbf{A}}_{M}) \rangle {\mathbf{L}}_j
= \sum_{k=0}^{4} {\mathbf{L}}_{i} \cdot \langle D^{(2)}({\mathbf{R}}) \rangle ({\mathbf{L}}_{k} {\boxtimes}{\mathbf{L}}_{k}) D^{(2)}({\mathbf{A}}_{M}) {\mathbf{L}}_j
= \sum_{k=0}^{4} S_{ik} D^{(2)}_{kj}({\mathbf{A}}_{M}) \, .
\label{eq:changebasis_mol2}\end{aligned}$$
When both laboratory and molecular transformations are allowed, the combination of and yields $$S'_{i j} = \sum_{h,k=0}^{4} D^{(2)}_{ih}({\mathbf{A}}_{P}\transp) S_{hk} D^{(2)}_{kj}({\mathbf{A}}_{M})
\label{eq:changebasis_tot}$$
Dually, we can study the action of two groups $G_P, G_M \subset \Otre$ on ${\mathbb{S}}\in {V_2}\otimes {V_2}^*$ by left and right multiplication respectively (i.e., $G_P$ acts on the “phase index” and $G_M$ on the “molecular index”). According to this *active* interpretation of the orthogonal transformations ${\mathbf{A}}_P\in G_P$ and ${\mathbf{A}}_M \in G_M$, the ordering tensor is transformed in such a way that the following diagram commutes

In formulas, we have $$\begin{aligned}
({\mathbf{A}}_{P} \times {\mathbf{A}}_{M})\,{\mathbb{S}}= D^{(2)}({\mathbf{A}}_{P})\, {\mathbb{S}}\, D^{(2)}({\mathbf{A}}_{M}\transp) .
\label{eq:activeaction}\end{aligned}$$
Molecular and phase symmetry {#sec:second-rankOPs_symmetry}
----------------------------
When dealing with liquid crystals we must distinguish between the symmetry of the molecule and the symmetry of the phase, shared by the aggregate of the molecules but not necessarily by the molecules themselves. A thorough description of the symmetries of a physical system is encoded in the symmetries of the corresponding orientational probability density $f({\mathbf{R}})$. However, when we analyse the order of the system only in terms of the descriptor ${\mathbb{S}}$, some degeneracy arises. The ordering tensor ${\mathbb{S}}$ is an averaged quantity obtained by computing the second moments of $f({\mathbf{R}})$. It is therefore possible that systems possessing different physical symmetries are described by the same ordering tensor, since in the averaging procedure some information may be lost. This means that, to distinguish the fine details of these degenerate cases, we would need to carry the expansion of $f({\mathbf{R}})$ to higher orders. Here we mainly focus on second-rank properties, and we first define what we mean by the symmetry group of ${\mathbb{S}}$.
According to the action , [[we define the *second-rank molecular symmetry group* as the set of all elements in $\Otre$ that fix ${\mathbb{S}}$ under right multiplication]{}]{} [^6] $$G_{M}({\mathbb{S}}) = \{{\mathbf{A}}_M \in \Otre \suchcol {\mathbb{S}}\, D^{(2)}({\mathbf{A}}_{M}\transp) = {\mathbb{S}}\},$$ and similarly for the [[second-rank phase symmetry group, where the multiplication appears on the left]{}]{} $$G_{P}({\mathbb{S}}) = \{{\mathbf{A}}_P \in \Otre \suchcol D^{(2)}({\mathbf{A}}_{P})\, {\mathbb{S}}= {\mathbb{S}}\}.$$ [[We also refer to these subgroups as the right and left stabiliser subgroup for ${\mathbb{S}}$. A *second-rank symmetry group* or *stabiliser subgroup* is then defined as the subgroup of $\Otre \times \Otre$ that collects all the orthogonal transformations, both in the phase and in the molecule, that leave ${\mathbb{S}}$ invariant ]{}]{}(see [@ForteVianello] for the corresponding definition in the context of Elasticity Theory). Mathematically, this is the direct product of $G_P$ and $G_M$: $G({\mathbb{S}}) = G_P({\mathbb{S}}) \times G_M({\mathbb{S}})$.
The definition of symmetry group explicitly contains the information about the symmetry axes of the molecule and the phase. However, our main interest in this Section lies in classifying second-rank ordering tensors with respect to their symmetry properties. This means that we wish to introduce, among such tensors, a relation based on the idea that different materials which can be rotated so that their symmetry groups become identical are ‘equivalent’. For instance, two uniaxially aligned liquid crystals are viewed as equivalent in this respect even if the direction of alignment of the molecules may be different in the two compounds. Therefore, it is quite natural to think of ordering tensors lying on the same $\essotre$–orbit as describing the same material albeit possibly with respect to rotated directions. As a consequence of the definition of the stabilisers $G_{M}({\mathbb{S}})$ and $G_{P}({\mathbb{S}})$, the symmetry groups with respect to a *rotated* frame of reference are simply obtained by conjugation. [[For example, if ${\mathbf{R}}_{P} \in \essotre$ is a rotation of the laboratory axes, the new ordering tensor is $D^{(2)}({\mathbf{R}}_{P})\,{\mathbb{S}}$ and the new symmetry group is conjugated through $D^{(2)}({\mathbf{R}}_{P})$ to $G_{P}({\mathbb{S}})$ $$\begin{aligned}
G_{P}\big(D^{(2)}({\mathbf{R}}_{P})\,{\mathbb{S}}\big)
& = \{{\mathbf{A}}_P \in \Otre \suchcol D^{(2)}({\mathbf{A}}_{P})D^{(2)}({\mathbf{R}}_{P})\, {\mathbb{S}}= D^{(2)}({\mathbf{R}}_{P})\,{\mathbb{S}}\} \notag \\
& = \{{\mathbf{A}}_P \in \Otre \suchcol D^{(2)}({\mathbf{R}}_{P})\transp D^{(2)}({\mathbf{A}}_{P}) D^{(2)}({\mathbf{R}}_{P})\, {\mathbb{S}}= {\mathbb{S}}\} \notag \\
& = D^{(2)}({\mathbf{R}}_{P}) G_{P}({\mathbb{S}}) D^{(2)}({\mathbf{R}}_{P})\transp.\end{aligned}$$ ]{}]{} We regard two ordering tensors ${\mathbb{S}}_1$ and ${\mathbb{S}}_2$ which are related by a rotation of the axes as representing the same material and hence equivalent. Thus, we speak about (second-rank) *symmetry classes* and say that the two ordering tensors belong to the same symmetry class (and are therefore equivalent) when their [[stabiliser subgroups]{}]{} are conjugate. More precisely, we write ${\mathbb{S}}_1 \sim {\mathbb{S}}_2$ if and only if there exist two rotations ${\mathbf{R}}_{P}, {\mathbf{R}}_{M} \in \essotre$ such that $$D^{(2)}({\mathbf{R}}_{P}) G_{P}({\mathbb{S}}_1) D^{(2)}({\mathbf{R}}_{P})\transp = G_{P}({\mathbb{S}}_2) \qquad \text{ and } \qquad
D^{(2)}({\mathbf{R}}_{M}) G_{M}({\mathbb{S}}_1) D^{(2)}({\mathbf{R}}_{M})\transp = G_{M}({\mathbb{S}}_2).$$
[[Finally, we say that the point groups $G_1$ and $G_2$ are (second-rank) *indistinguishable symmetries for the physical system* if, for any two probability densities $f_1$ and $f_2$ that are fixed by $G_1$ and $G_2$[^7], respectively, the corresponding ordering tensors ${\mathbb{S}}_1$ and ${\mathbb{S}}_2$ belong to the same symmetry class (${\mathbb{S}}_1 \sim {\mathbb{S}}_2$). ]{}]{}
Symmetry classes of second-rank ordering tensors {#sec:second-rankOPs_symmetryclasses}
------------------------------------------------
A preliminary problem which we need to address is counting and determining all symmetry classes for second-rank ordering tensors. An analogous problem in Elasticity, i.e., determining the symmetry classes of the linear elasticity tensor, is discussed in [@ForteVianello]. However, as we shall see, in our case this determination is simpler because we are dealing with *irreducible second-rank* tensors (instead of reducible fourth-rank) and the action of molecular and phase symmetry can be studied separately.
It is worth remarking that there is not a one to one correspondence between the [[stabiliser subgroups for ${\mathbb{S}}$ and the point groups in three dimensions: liquid crystal compounds possessing different (molecular or phase) physical symmetry may have the same stabiliser for ${\mathbb{S}}$ and thus may belong to the same second-rank symmetry class.]{}]{} This fact is related to the truncation of the probability density used to define the order parameters: at the second-rank level the ordering tensors may coincide even if the actual material symmetry is different as in the truncation process some information about the molecular distribution is lost. For example, at the second rank level, materials with $C_{3h}$ and $D_{\infty h}$ symmetry are effectively indistinguishable. This would not be true if we considered third-rank tensorial properties. However, for the sake of simplicity and also because it is most widely adopted in the literature, we will consider only second-rank order parameters.
A number of authors have made the same classification based on the number of non-vanishing independent order parameters for each group [@1985zannoni; @2001luckhurst; @LuckhurstBook; @2006Mettout]. In our view, however, this classification of the symmetry classes rests on two standard theorems, which we now state without proof. The first theorem is known as the Hermann–Herman theorem in Crystallography. The interested reader can consult the original references [@1934Hermann; @1945HermanB]; see Refs. [@2004Slawinski; @2001Wadhawan] for a proof and Refs. [@1987Wadhawan; @1982Sirotin; @1998Handbook] for a more accessible account of this result.
Let $\,{\mathbb{T}}\,$ be an $r$-rank ($r>0$) tensor in $W^{\operatorname{\otimes}r}$, where $W$ is a *3-dimensional real* vector space. If $\,{\mathbb{T}}\,$ is invariant with respect to the group $C_n$ of $n$-fold rotations about a fixed axis and $n>r$, then it is [[$C_{\infty}$]{}]{}-invariant relative to this axis (i.e., it is $C_{m}$-invariant for all $m \geq n$). \[thm:Herma\]
Quoting Herman, from [@2004Slawinski; @1945HermanB] “If the medium has a rotation axis of symmetry $C_n$ of order $n$, it is axially isotropic relative to this axis for all the physical properties defined by the tensors of the rank $r=0,1,2,\ldots,(n-1)$.”
To perform the classification of the second-rank symmetry classes, we also need to recall a standard classification theorem in Group Theory [@ForteVianello; @Sternberg; @1972Bredon].
Every closed subgroup of $\essotre$ is isomorphic to exactly one of the following groups ($n\geq 2$): $C_1$, $C_n$, $D_n$, $T$, $O$, $I$, $C_{\infty}$, $C_{\infty v}$, $\essotre$. \[thm:closedsubgroups\]
In view of these theorems, for $j=2$ we obtain a result that allows collecting all point groups in five classes, which greatly simplifies the classification of phase or molecular symmetries.
There are exactly five (phase or molecular) symmetry classes of the second-rank ordering tensor: *Isotropic, Uniaxial (Transverse Isotropic), Orthorhombic, Monoclinic and Triclinic*. The corresponding stabiliser subgroups for ${\mathbb{S}}$ are: $O(3)$, $D_{\infty h}$, $D_{2 h}$, $C_{2 h}$ and $C_i$. The last column in the table collects the second-rank indistinguishable symmetries, i.e., physical symmetries that yield equivalent second-rank ordering tensors.

Furthermore, since ${\mathbb{S}}$ vanishes in the isotropic class, there are only $4 \times 4 + 1 = 17$ possible different combinations of molecular and phase symmetries that can be distinguished at the level of second-rank order parameters.
The proof is a consequence of the following remarks.
1. Since the symmetry group $G$ is the direct product of $G_M$ and $G_P$, the action of a symmetry transformation can be studied independently for the molecules and the phase. This greatly simplifies the classification (by contrast, this is not the case in Elasticity).
2. The definition of $D^{(2)}({\mathbf{R}})$, as given in , involves the rotation matrix ${\mathbf{R}}$ exactly twice. By Herman’s theorem, this implies that all the groups with an $n$-fold rotation axis with $n>2$ are effectively indistinguishable from the $C_{\infty}$-symmetry about that axis.
3. Furthermore, [[Eq. shows immediately that $D^{(2)}(\iota{\mathbf{R}}) = D^{(2)}({\mathbf{R}})$. In particular, this yields $D^{(2)}(\iota) = D^{(2)}({\mathbf{I}})$, $D^{(2)}(\sigma_h) = D^{(2)}(C_{2z})$, and $D^{(2)}(\sigma_v) = D^{(2)}(C_{2x})$ so that the inversion and the identity are represented by the same matrix; a horizontal mirror reflection is equivalent to a 2-fold rotation about the main axis $z$, and a vertical mirror reflection is equivalent to a 2-fold rotation about an orthogonal axis $x$. Therefore, the classification can be first performed on the subgroups of $\essotre$ since each class will have at least one representative subgroup in $\essotre$. The indistinguishable subgroups of $\Otre$ are then classified by considering the trivial actions of $\iota$ or $\sigma_h$.]{}]{}
4. [[Whenever a point group has two independent rotation axes of order $n>2$, it must contain two distinct copies of $C_{\infty}$]{}]{}. By checking the list of the possible closed subgroups of $\essotre$ (theorem \[thm:closedsubgroups\]) we see that it must be the whole $\essotre$.
5. [[Finally, the stabiliser of ${\mathbb{S}}$ in each class is determined by the largest group in the class.]{}]{}
The proof is then completed by inspection of the various point groups. For example, point (4) immediately shows that the higher-order groups $T$, $O$ and $I$ are all indistinguishable from $\essotre$ and thus they all belong to the same class. Then, from point (3) we learn that adding an inversion or a mirror reflection has no effect on ${\mathbb{S}}$. Hence we can also classify $I_h$, $T_h$, $T_d$, $O_h$, and $\Otre$ as belonging to the same class (called *isotropic*). The stabiliser for ${\mathbb{S}}$ is the largest among such groups and it is $\Otre$.
Likewise, axial groups with a 3-fold or higher rotation axis are to be placed in the same class as $C_{\infty}=\essodue$, according to the remark in point (2).
We now choose a frame of reference with the rotation axis along the $z$ coordinate and use the basis $\{{\mathbf{L}}_i\}$, as given in Eq., to represent ${\mathbb{S}}$. The only basis tensor that is fixed by an arbitrary rotation about the $z$-axis is ${\mathbf{L}}_0$. Therefore, in this basis a $C_{\infty}$-invariant ${\mathbb{S}}$ must be written either as $${\mathbb{S}}= \sum_{i=0}^{4} S_{i,0} {\mathbf{L}}_{i} \operatorname{\boxtimes}{\mathbf{L}}_{0}, \qquad \text{ or } \qquad {\mathbb{S}}= \sum_{j=0}^{4} S_{0,j}
{\mathbf{L}}_{0} \operatorname{\boxtimes}{\mathbf{L}}_{j} ,$$ depending on whether $C_{\infty}$ is acting on the right or the left. The only non-vanishing entries are either the first column or the first row of the matrix representing ${\mathbb{S}}$ (see also Table \[tab:canonical\]). Since the tensor ${\mathbf{L}}_0$ is not affected by a mirror reflection across planes through the $z$-axis, or by a $C_2$ rotation about the $x$-axis, the ordering tensor ${\mathbb{S}}$ is (right- or left-) fixed also by $\sigma_v$ and $C_{2x}$. This brings $C_{\infty v}=\Odue$ and $D_{\infty h}$ into the class. From these it is then easy to classify all the other groups in the *uniaxial* class.
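For concreteness, this invariance argument can be made fully explicit. With the convention ${\mathbf{R}}_z(\theta)\,{\bm{\ell}}_1 = \cos\theta\,{\bm{\ell}}_1 + \sin\theta\,{\bm{\ell}}_2$, a direct application of the conjugation rule to the basis tensors gives $$D^{(2)}({\mathbf{R}}_z(\theta)) =
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & \cos 2\theta & -\sin 2\theta & 0 & 0 \\
0 & \sin 2\theta & \cos 2\theta & 0 & 0 \\
0 & 0 & 0 & \cos\theta & \sin\theta \\
0 & 0 & 0 & -\sin\theta & \cos\theta
\end{pmatrix},$$ so that ${\mathbf{L}}_0$ spans the subspace fixed by every rotation about $z$, while the pairs $({\mathbf{L}}_1,{\mathbf{L}}_2)$ and $({\mathbf{L}}_3,{\mathbf{L}}_4)$ are rotated by $2\theta$ and $\theta$, respectively. For $\theta = 2\pi/n$ with $n>2$ neither block has a non-zero fixed vector, so every $C_n$ with $n>2$ fixes exactly the same tensors as $C_{\infty}$, in agreement with Theorem \[thm:Herma\]; for $\theta = \pi$ one recovers $D^{(2)}(C_{2z}) = \operatorname{diag}(1,1,1,-1,-1) = D^{(2)}(\sigma_h)$, as noted in point (3) above.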
By contrast, the *orthorhombic* class and the *monoclinic* class are only composed by groups with 2-fold rotation axes, so that Theorem \[thm:Herma\] does not apply. The two classes are distinguished by the presence or absence of a second rotation axis orthogonal to $z$ and point (3) allows us to identify the indistinguishable subgroups in each class. Finally, the *triclinic* class collects the remaining trivial groups $C_1$ and $C_i$. All the symmetry classes are disjoint since it is possible to provide independent examples of an ordering tensor in each class.
Identification of the nearest symmetric ordering tensor {#sec:identification}
=======================================================
Invariant projection {#sec:identification_projection}
--------------------
If $G_M, G_P \subset \Otre$ represent the real symmetry of the material, the order parameter ${\mathbb{S}}$ must be an invariant tensor for the action of $G_P \times G_M$. However, in practice ${\mathbb{S}}$ will not be fixed exactly by any non-trivial group, due to measurement errors or to the imperfect symmetry of the real system. Therefore, our problem can be stated as follows: given a measured 5 $\times$ 5 ordering tensor ${\mathbb{S}}$ and assuming two specific stabiliser subgroups $G_M, G_P$, find the ordering tensor ${{\mathbb{S}}^{\text{sym}}}$ which is fixed by the action of $G_P \times G_M$ and is closest to ${\mathbb{S}}$. We now discuss two related sub-problems, namely, (1) how to find ${{\mathbb{S}}^{\text{sym}}}$ and (2) what is meant by “closest to ${\mathbb{S}}$”.
[[Let us define the fixed point subspace $L({V_2})^G$ as the space of the tensors ${\mathbb{S}}$ that are fixed under $G$: $$L({V_2})^G = \{{\mathbb{S}}\in L({V_2}): g {\mathbb{S}}={\mathbb{S}}, \forall g \in G\}.$$ ]{}]{} The invariant projection onto [[$L({V_2})^{G_P \times G_M}$]{}]{} can be easily obtained by averaging over the group $${{\mathbb{S}}^{\text{sym}}}= \frac{1}{|G_P| |G_M|}\sum_{{\mathbf{A}}_{P} \in G_P} \sum_{{\mathbf{A}}_{M} \in G_M} D^{(2)}({\mathbf{A}}_{P})\, {\mathbb{S}}\,D^{(2)}({\mathbf{A}}_{M}\transp) ,
\label{eq:Reynolds}$$ where $|G_P|$ and $|G_M|$ are the orders of the two (finite) groups. We observe that the transposition of ${\mathbf{A}}_M$ in Eq. is unnecessary since we are summing over the whole group, but it is maintained here for consistency with . This averaging procedure is standard in many contexts and takes different names accordingly. It is called “averaging over the group” in Physics, “projection on the identity representation” in Group Theory and “Reynolds operator” in Commutative Algebra. The expression is manifestly invariant by construction. It is also easy to show that it constitutes an *orthogonal projection*[[, with respect to the standard Frobenius inner product, i.e., the natural extension of Eq. to $L({V_2})$]{}]{}. Hence, the distance of ${{\mathbb{S}}^{\text{sym}}}$ from the original ${\mathbb{S}}$ is minimal and can be easily computed, as we now discuss.
The Reynolds operator as given in , with $G_M, G_P \subset \Otre$, is an orthogonal projector onto $L({V_2})^{G_P \times G_M}$.
Let us introduce the linear operator ${\mathcal{R}}$ such that ${\mathcal{R}}({\mathbb{S}}) = {{\mathbb{S}}^{\text{sym}}}$ as given in Eq. . For convenience, we will use the following more compact notation for the Reynolds operator: $${\mathcal{R}}({\mathbb{S}}) = \frac{1}{|G|} \sum_{g\in G} g{\mathbb{S}},$$ where $G$ is the direct product of groups $G=G_P \times G_M$ and $g= D^{(2)}({\mathbf{A}}_{P})\operatorname{\otimes}\,D^{(2)}({\mathbf{A}}_{M})$ is the tensor (Kronecker) product of the matrix representation. First, we observe that by construction ${\mathcal{R}}^2 = {\mathcal{R}}$ as we are summing over the whole group.
Next, we show that ${\mathcal{R}}\transp = {\mathcal{R}}$. Since $G_M, G_P \subset \Otre$, it follows from that $g^{-1} = g\transp$. Therefore, for any two ordering tensors ${\mathbb{S}}, {\mathbb{T}}\in L({V_2})$, we have $$\begin{aligned}
{\mathcal{R}}({\mathbb{T}}) \cdot {\mathbb{S}}= \frac{1}{|G|} \sum_{g\in G} g{\mathbb{T}}\cdot {\mathbb{S}}= \frac{1}{|G|} \sum_{g\in G} {\mathbb{T}}\cdot g\transp{\mathbb{S}}=\frac{1}{|G|} \sum_{g\in G} {\mathbb{T}}\cdot g^{-1}{\mathbb{S}}= \frac{1}{|G|} \sum_{g\in G} {\mathbb{T}}\cdot g{\mathbb{S}}= {\mathbb{T}}\cdot {\mathcal{R}}({\mathbb{S}}),\end{aligned}$$ where we have used the fact that summing over $g^{-1}$ is the same as summing over $g$, since $G$ is a group. Finally, we define ${\mathbb{S}}^{\perp}:= {\mathbb{S}}-{\mathcal{R}}({\mathbb{S}})$ and obtain $$\begin{aligned}
{\mathcal{R}}({\mathbb{S}}) \cdot {\mathbb{S}}^{\perp} = {\mathcal{R}}({\mathbb{S}}) \cdot \big({\mathbb{S}}-{\mathcal{R}}({\mathbb{S}})\big)
= {\mathcal{R}}({\mathbb{S}}) \cdot {\mathbb{S}}- {\mathcal{R}}({\mathbb{S}}) \cdot {\mathcal{R}}({\mathbb{S}})
= {\mathcal{R}}({\mathbb{S}}) \cdot {\mathbb{S}}- {\mathcal{R}}^2({\mathbb{S}}) \cdot {\mathbb{S}}=0 .\end{aligned}$$
The above lemma suggests that the [[Frobenius]{}]{} norm $\|{\mathbb{S}}^{\perp}\|$ is a suitable candidate for the “distance from the $G$-invariant subspace”. It is worth noticing explicitly that this distance is properly defined as its calculation does not depend on the particular chosen matrix representation of ${\mathbb{S}}$, i.e., on the molecular and laboratory axes. Furthermore, since by orthogonality we have $\|{\mathbb{S}}\|^2 = \|{{\mathbb{S}}^{\text{sym}}}\|^2 + \|{\mathbb{S}}^{\perp}\|^2$, we readily obtain the following expression for the distance $$\mathrm{d}({\mathbb{S}}, {{\color{black}L({V_2})^{G_P \times G_M}}}) = \|{\mathbb{S}}^{\perp}\| = \sqrt{\phantom{\big(}\|{\mathbb{S}}\|^2 - \|{{\mathbb{S}}^{\text{sym}}}\|^2} .
\label{eq:distance}$$
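The projection and the distance are immediate to evaluate numerically for finite groups. The following Python/NumPy sketch is an illustration added here: the choice of $C_{2h}$ about the laboratory $z$-axis as phase group and of the trivial molecular group, as well as the random matrix standing in for a measured ordering tensor, are ours, and the small helpers are repeated from the earlier sketch so that the fragment is self-contained.

```python
import numpy as np
from itertools import product

def sym_basis(frame):
    e1, e2, e3 = frame
    o = np.outer
    return np.array([
        np.sqrt(1.5) * (o(e3, e3) - np.eye(3) / 3.0),
        (o(e1, e1) - o(e2, e2)) / np.sqrt(2.0),
        (o(e1, e2) + o(e2, e1)) / np.sqrt(2.0),
        (o(e2, e3) + o(e3, e2)) / np.sqrt(2.0),
        (o(e1, e3) + o(e3, e1)) / np.sqrt(2.0),
    ])

L = sym_basis(np.eye(3))

def d2(A):
    """5x5 representation of A in O(3): entries L_i . (A L_j A^T)."""
    M = np.einsum('ac,jcd,bd->jab', A, L, A)
    return np.einsum('iab,jab->ij', L, M)

def reynolds(S, GP, GM):
    """Group average of D2(A_P) S D2(A_M)^T over two finite subgroups of O(3)."""
    return np.mean([d2(AP) @ S @ d2(AM).T for AP, AM in product(GP, GM)], axis=0)

def distance(S, Ssym):
    """Frobenius distance of S from the invariant subspace, sqrt(|S|^2 - |Ssym|^2)."""
    return np.sqrt(max(np.sum(S * S) - np.sum(Ssym * Ssym), 0.0))

# Phase group C_2h about z = {E, C_2z, sigma_h, inversion}; trivial molecular group.
I3 = np.eye(3)
GP = [I3, np.diag([-1.0, -1.0, 1.0]), np.diag([1.0, 1.0, -1.0]), -I3]
GM = [I3]

rng = np.random.default_rng(2)
S = rng.normal(size=(5, 5))    # generic noisy matrix standing in for a measured S
Ssym = reynolds(S, GP, GM)     # for this group, rows 3 and 4 are averaged to zero
print(distance(S, Ssym))
```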
Canonical matrix representation {#sec:identification_canonical}
-------------------------------
The matrix representation of the ordering tensor ${\mathbb{S}}$ assumes a particularly simple form when the molecule and the laboratory axes are chosen in accordance with the molecular and phase symmetry, respectively. When the $z$-axis is assumed to be the main axis of symmetry and the basis tensors ${\mathbf{M}}_i$ and ${\mathbf{L}}_{j}$ are chosen accordingly, the projection gives a “canonical” form for the ordering tensor, whose non-vanishing entries are collected in Table \[tab:canonical\]. The number of non-vanishing entries in each case corresponds to the number of independent (second-rank) order parameters necessary to describe the orientational order of molecules of the given symmetry in a given phase. The same results can be obtained by direct computation using the invariance of each matrix entry under symmetry transformations (see for example the calculations at the end of [@07Rosso] relative to a nematic biaxial liquid crystal).
------------------------------------ ----------------------------------------- ----------------------------------------- ----------------------------------------- -----------------------------------------
 Non-vanishing entries $S_{ij}$       Triclinic                                 Monoclinic                                Orthorhombic                              Uniaxial
 (all other entries vanish)

 Triclinic                            $S_{ij}$, $i,j = 0,\dots,4$ (25)          $S_{ij}$, $i \le 2$, $j \le 4$ (15)       $S_{ij}$, $i \le 1$, $j \le 4$ (10)       $S_{0j}$, $j \le 4$ (5)

 Monoclinic                           $S_{ij}$, $i \le 4$, $j \le 2$ (15)       $S_{ij}$, $i,j \le 2$ (9)                 $S_{ij}$, $i \le 1$, $j \le 2$ (6)        $S_{0j}$, $j \le 2$ (3)

 Orthorhombic                         $S_{ij}$, $i \le 4$, $j \le 1$ (10)       $S_{ij}$, $i \le 2$, $j \le 1$ (6)        $S_{ij}$, $i,j \le 1$ (4)                 $S_{0j}$, $j \le 1$ (2)

 Uniaxial                             $S_{i0}$, $i \le 4$ (5)                   $S_{i0}$, $i \le 2$ (3)                   $S_{i0}$, $i \le 1$ (2)                   $S_{00}$ (1)
------------------------------------ ----------------------------------------- ----------------------------------------- ----------------------------------------- -----------------------------------------

: []{data-label="tab:canonical"}
We immediately read from Eq. that $$S_{00} = {\mathbf{L}}_{0}\cdot \langle {\mathbf{M}}_{0} \rangle
= {\bm{\ell}}_{3}\cdot \left(\frac{3}{2}\left\langle {\mathbf{m}}_3 \operatorname{\otimes}{\mathbf{m}}_3 - \frac{1}{3}{\mathbf{I}}\right\rangle \right) {\bm{\ell}}_3
= \frac{1}{2}\left\langle 3({\bm{\ell}}_{3}\cdot{\mathbf{m}}_3)^2 - 1 \right\rangle = S,$$ where $S$ is the standard uniaxial ($D_{\infty h}$) order parameter (degree of orientation). This is due to the fact that ${\mathbf{m}}_3$ is the main rotation axis of the molecule and we have chosen the laboratory axis ${\bm{\ell}}_3$ along the uniaxial symmetry axis of the phase (see Sec. \[sec:background\]). Likewise, we can see that when the frames of reference are adapted to the molecular and phase symmetries of the system, $S_{10}$ corresponds to the degree of phase biaxiality $P$ while $S_{01}$ and $S_{11}$ are the two additional nematic biaxial ($D_{2h}$) order parameters, usually written as $D$ and $C$ (see [@07Rosso] for notations)[^8].
However, some ambiguity arises from this definition because in low-symmetry groups not all the axes are uniquely defined by the group operations. This, in a sense, suggests that, contrary to common understanding, the matrix entries of ${\mathbb{S}}$ are not suitable candidates for the order parameters; rather, the ordering tensor ${\mathbb{S}}$ as a whole should be considered the correct descriptor of the molecular order in a low-symmetry system. For example, while the $D_{2h}$ symmetry uniquely identifies three orthogonal directions that can be used as coordinate axes, the $C_{2h}$ symmetry only identifies the rotation axis $z$ and gives no indication of how to choose the coordinate axes $x$ and $y$. This choice is important, because some matrix entries change when different $x$ and $y$ axes are chosen. By contrast, in a system with $D_{\infty h}$ symmetry the choice of the $x$ and $y$ axes is immaterial, as all the associated order parameters vanish by symmetry. An extreme example is furnished by the triclinic case, where the molecule and the phase possess no symmetry. Here there is no reason to prefer one direction over any other, and any coordinate frame should be equivalent; nevertheless, the matrix entries of ${\mathbb{S}}$ do depend on the chosen axes.
Therefore, in general, a different choice of the $x$ and $y$ axes could lead to different values for the matrix entries and the canonical form of the matrix ${\mathbb{S}}$ is thus not uniquely defined. By contrast, the *structure* of the matrix as given in Table \[tab:canonical\], and in particular the position of the vanishing entries of ${\mathbb{S}}$, are not affected by such a change of basis. This suggests that the order parameters should not be [[identified with the matrix entries, but rather with the *invariants* of ${\mathbb{S}}$ (for example, its eigenvalues).]{}]{}
Finally, it is important to observe that the molecular and phase symmetries are uniquely identified once we recognise the structure of the canonical form, independently of the definition of the order parameters.
An ordering tensor $\,{\mathbb{S}}\,$ [[is fixed by any group in]{}]{} the symmetry class ${\mathcal{C}}_{(p,m)} =
{\mathcal{C}}_{p}\times {\mathcal{C}}_{m}$ if and only if there exist molecular and laboratory frames of reference such that the matrix representation of $\,\,{\mathbb{S}}\,$ has a canonical form given by the entry $(p,m)$ of Table \[tab:canonical\].
This can be checked by direct computation. The canonical forms in Table \[tab:canonical\] are obtained by projection of a full matrix ${\mathbb{S}}$ onto the [[fixed point subspace $L({V_2})^{G_P \times G_M}$ using the Reynolds operator , where $G_P$ and $G_M$ are any two of the indistinguishable subgroups in the given symmetry class.]{}]{} The molecular and laboratory $z$ axes are assumed to be the main axes of symmetry. Let ${\mathcal{R}}_{(p,m)}$ be the [[corresponding]{}]{} Reynolds operator.\
If ${\mathbb{S}}$ [[is fixed by $G_P \times G_M$]{}]{}, then ${\mathcal{R}}_{(p,m)}({\mathbb{S}})={\mathbb{S}}$. By construction, ${\mathbb{S}}$ must be equal to the canonical form, once we choose the molecular frame and the laboratory frame [[in accordance with the symmetry axes of $G_P$ and $G_M$.]{}]{}\
Vice versa, if ${\mathbb{S}}$ is in a canonical form $(p,m)$, then we can check directly that it is invariant for the application of ${\mathcal{R}}_{(p,m)}$ for a suitable choice of the symmetry axes.
Determining the effective phase {#sec:identification_effectivephase}
-------------------------------
In order to determine the phase of the system it is not necessary to identify the scalar order parameters suitable to describe the molecular order in each case. As discussed in the previous section, it is sufficient to study whether the ordering tensor ${\mathbb{S}}$ is close to one of the canonical forms of Table \[tab:canonical\], for a suitable choice of the molecular and laboratory axes.
To evaluate the distance of ${\mathbb{S}}$ from a given fixed point subspace, we choose any group in the corresponding class and describe its elements concretely by assigning the orthogonal transformations once the directions of the rotation axes have been fixed. However, the rotation axes are not known in general (and are indeed part of the sought solution). Therefore, we calculate the distance for all possible directions of the symmetry axes and define the distance from the symmetry class as the minimum among these distances.
It is useful to define the *coefficient of discrepancy* [@1998Geymonat] as the minimum relative distance of ${\mathbb{S}}$ from a symmetry class
$$c(p,m) = \min \left\{\frac{d({\mathbb{S}},{{\color{black}L({V_2})^{G}}})}{\| {\mathbb{S}}\|}\, : \, G = G_P \times G_M \in {\mathcal{C}}_{(p,m)} \right\}.
\label{eq:discrepancy_2}$$
When the molecular symmetry is known, say $G_M$, we perform the optimisation only with respect to the phase groups and the coefficient of discrepancy depends only on the phase-index $$c(p) = \min \left\{\frac{d({\mathbb{S}},{{\color{black}L({V_2})^{G\times G_M}}})}{\| {\mathbb{S}}\|}\, : \, G = G_P \in {\mathcal{C}}_{p} \right\}.
\label{eq:discrepancy}$$
The algorithm we propose is described schematically as follows (we specifically concentrate on the phase symmetry as the molecular symmetry can usually be assumed to be known *a-priori*).
1. If $\| {\mathbb{S}}\|=0$, then the phase is isotropic.
2. If $\| {\mathbb{S}}\| \neq 0$, then **loop over** the phase symmetry classes for $p = 0,\dots,3$ (there is no need to minimise in the trivial class ${\mathcal{C}}_4$).
3. **Choose the lowest-order group in ${\mathcal{C}}_{p}$**. Select an abstract group $G_P$ in the chosen class; the most convenient choice is the group of lowest order.
4. **Minimisation step.** The distance depends, via the Reynolds operator , on the concrete realisation of $G_P$, i.e., on the direction of the symmetry axes. Therefore we need to minimise the distance with respect to all possible directions of the rotation axes allowed by the specific abstract group.
5. **Selection step.** Clearly, the correct symmetry class is not that of the lowest distance. For instance, any ordering tensor ${\mathbb{S}}$ has a vanishing distance with respect to the triclinic class (absence of symmetry). As a second example, since the lattice of the stabiliser subgroups is composed by a single chain, an uniaxial ordering tensor has zero distance also from all the previous classes in the chain, i.e. orthorhombic, monoclinic and triclinic.\
In principle, when ${\mathbb{S}}$ is free from numerical errors, we should choose the class with the highest symmetry and vanishing coefficient of discrepancy. However, in practice we will choose the highest symmetry compatible with a coefficient of discrepancy not exceeding the experimental error (or simulation error).
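As referenced in the minimisation step, a minimal Monte-Carlo sketch of the loop could look as follows. It reuses `relative_distance` from the previous sketch; `class_generators(R)` is a hypothetical helper returning the elements of the chosen lowest-order group realised with its symmetry axes rotated by $R$, and the uniform sampling of rotations mirrors the naive strategy adopted in the examples of the next section.

```python
import numpy as np

def random_rotations(n, rng=np.random.default_rng(0)):
    """Uniformly distributed rotation matrices from normalised random quaternions."""
    q = rng.normal(size=(n, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    w, x, y, z = q.T
    return np.stack([
        np.stack([1 - 2*(y**2 + z**2), 2*(x*y - z*w),       2*(x*z + y*w)],       axis=-1),
        np.stack([2*(x*y + z*w),       1 - 2*(x**2 + z**2), 2*(y*z - x*w)],       axis=-1),
        np.stack([2*(x*z - y*w),       2*(y*z + x*w),       1 - 2*(x**2 + y**2)], axis=-1),
    ], axis=1)

def coefficient_of_discrepancy(S, class_generators, act, n_samples=10_000):
    """Naive estimate of c(p): minimise the relative distance over sampled axis orientations."""
    return min(relative_distance(S, class_generators(R), act)
               for R in random_rotations(n_samples))
```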
Examples {#sec:examples}
========
We now briefly describe how to apply our algorithm to determine the phase symmetry of two liquid crystal compounds, one composed of uniaxial ($D_{\infty h}$) molecules and the other made of biaxial ($D_{2h}$) molecules. These symmetry assumptions are quite standard and are shared by most theoretical studies in the field (see for example [@2011turziC2h; @2012turziD2h; @2013TurziSluckin; @2003Virga; @universal; @bisi2011; @bisi2013]). In the following we assume (quite reasonably) that the molecular symmetry and the molecular axes are known *a priori*, as is the case in Monte Carlo simulations or mean-field analysis. Therefore, no minimisation is required with respect to the molecular frame of reference. Rather, we concentrate on the determination of the phase symmetry and its principal axes.
In the examples that follow, we have produced possible outcomes of simulations by computing ${\mathbb{S}}$ for a system composed of a large number of molecules. We have randomly perturbed their initially perfect uniaxial or biaxial order to mimic more realistic, noise-affected results.
Uniaxial phase {#sec:examples_uniaxial}
--------------
First, let us consider a uniaxial phase with symmetry axis determined by the Euler angles $\alpha = 60^\circ$, $\beta = 30^\circ$, $\gamma = 0$. After introducing a random perturbation, the degree of order is $S=S_{00}\approx 0.69$, and the following ordering matrix is obtained: $${\mathbb{S}}= \begin{pmatrix}
0.404 & -0.016 & 0.005 & -0.013 & -0.005 \\
-0.090 & -0.008 & -0.005 & 0.016 & -0.009 \\
0.122 & 0.006 & -0.009 & -0.004 & -0.012 \\
0.476 & 0.009 & -0.005 & 0.000 & 0.020 \\
0.234 & -0.007 & -0.014 & -0.011 & 0.010
\end{pmatrix},
\label{eq:bS_example_uniaxial}$$ written with respect to an arbitrarily chosen laboratory frame, but with the molecular frame accurately selected according to the molecular symmetry. A quick look at Table \[tab:canonical\] correctly suggests that ${\mathbb{S}}$ refers to a system of uniaxial molecules, although affected by experimental or numerical errors (only the first column contains significantly non-vanishing entries). However, the symmetry class of the phase, the axes of symmetry and the relevant order parameter(s) are as yet unknown. To this end, we project ${\mathbb{S}}$ onto each symmetry class and compute the coefficient of discrepancy in each case. It is unnecessary to project onto the triclinic class since by definition it includes all possible ordering tensors and the distance is therefore always zero.
The optimal choice for the symmetry axes is given by minimising the distance of ${\mathbb{S}}$ from each symmetry class. We implement this minimisation procedure rather naively by uniformly sampling $\essotre$, i.e., we generate $N=10^4$ orientations of the laboratory axes uniformly. There are of course more refined optimisation algorithms that could yield far better results, but these fall outside the scope of the present paper; we intend to study this computational issue more deeply in a subsequent paper. The results of our analysis are summarised in the following table:
---------------- ----- ------- ------- ------- -----
$p$              $0$   $1$     $2$     $3$     $4$
$c(p)$           1     0.063   0.056   0.049   0
---------------- ----- ------- ------- ------- -----
The coefficients of discrepancy suggest that the phase is uniaxial and ${\mathbb{S}}\in {\mathcal{C}}_1$. The Euler angles of the phase axes are then found to be $\alpha \approx 63.9^\circ$, $\beta \approx 31.1^\circ$, $\gamma \approx 124^\circ$. Note that the value of the proper rotation angle, $\gamma$, is irrelevant in a uniaxial phase. Finally, we can write the ordering tensor in the symmetry adapted frame of reference according to . The new ordering matrix ${\mathbb{S}}'$ reads $${\mathbb{S}}'
= \begin{pmatrix}
0.684 & -0.003 & -0.006 & -0.014 & 0.013 \\
0.006 & 0.001 & -0.004 & 0.018 & -0.007 \\
-0.008 & 0.003 & 0.001 & -0.007 & -0.018 \\
-0.003 & -0.021 & 0.001 & -0.004 & -0.014 \\
-0.005 & -0.002 & 0.017 & 0.001 & -0.001
\end{pmatrix}$$ from which we obtain a degree of order $\approx 0.684$ in agreement with the expected value of $0.69$.
Biaxial phase {#sec:examples_biaxial}
-------------
We now present an analogous analysis for a less symmetric phase, namely $D_{2h}$. The ordering matrix we consider in this example is $${\mathbb{S}}= \begin{pmatrix}
0.301 & -0.115 & -0.003 & 0.004 & -0.001 \\
0.127 & -0.537 & 0.007 & 0.000 & 0.002 \\
0.131 & -0.354 & 0.006 & 0.000 & -0.003 \\
0.403 & -0.303 & 0.003 & 0.001 & 0.004 \\
0.118 & 0.255 & 0.000 & 0.004 & -0.001
\end{pmatrix}
\label{eq:bS_example_biaxial}$$ which is built to represent a biaxial phase with symmetry axes rotated by $\alpha=60^\circ$, $\beta=30^\circ$, $\gamma=45^\circ$ with respect to the laboratory axes. The order parameters, i.e. the ordering matrix entries when referred to its principal axes, are $S_{00} \approx 0.507$, $S_{01} \approx -0.179$, $S_{10} \approx -0.201$, $S_{11} \approx 0.743$. In more standard notation, these order parameters correspond respectively to $S$, $D$, $P$ and $C$, albeit with different normalisation coefficients.
We sample the orientations of the laboratory frame uniformly, and find the following discrepancy coefficients for the five symmetry classes
---------------- ----- ------ ------- ------- -----
$p$              $0$   $1$    $2$     $3$     $4$
$c(p)$           1     0.43   0.070   0.017   0
---------------- ----- ------ ------- ------- -----
The projection on the orthorhombic class yields $\alpha \approx 60.7^\circ$, $\beta \approx 29.7^\circ$ and $\gamma \approx 43.6^\circ$. The reconstruction of the ordering matrix with respect to its principal axes reads $${\mathbb{S}}'
= \begin{pmatrix}
0.505 & -0.181 & 0.000 & 0.005 & 0.001 \\
-0.204 & 0.742 & -0.009 & 0.001 & -0.002 \\
0.06 & 0.028 & 0.000 & 0.000 & 0.005 \\
-0.005 & -0.005 & -0.005 & -0.001 & 0.001 \\
0.000 & 0.000 & 0.001 & -0.003 & 0.002
\end{pmatrix}.$$
Conclusions {#sec:conclusions}
===========
We have proposed a method able to provide the canonical form of an ordering tensor ${\mathbb{S}}$. This canonical representation readily yields the order parameters, i.e., the scalar quantities that are usually adopted to describe the order in a liquid crystal compound. However, the physical meaning of the matrix entries (for low-symmetry molecules and phases) is still a matter of debate and falls outside the scope of our paper. The laboratory axes of the canonical form are interpreted as “directors” and provide the directions of the symmetry axes, if there are any. Finally, we have shown that there are only five possible phase symmetry classes when the orientational probability density function is truncated at the second-rank level. This is a standard approximation in many theoretical studies of uniaxial and biaxial liquid crystals.
Our proposed method is simple enough to be applicable to the analysis of real situations. For the examples considered in Sec. \[sec:examples\] the proposed algorithm is reliable and gives a fast analysis of the ordering tensor that leads to the correct identification of a uniaxial and a biaxial phase in a model system. For this purpose it has been sufficient to implement a very simple Monte-Carlo optimisation procedure. However, it may be appropriate to develop more efficient methods for more complex real systems.
Our strategy, based on the second-rank ordering tensor, is not able to distinguish amongst the phase groups belonging to the same class. In principle our approach could easily be extended to include higher-rank ordering tensors. The same mathematical ideas and tools we have put forward could be applied to this more general case, at the cost of more involved notation. However, we believe that doing so would seriously affect the presentation and readability of the paper. Furthermore, the second-rank case is the most relevant from a physical perspective, since most tensorial properties that can be measured in liquid crystals are second-rank. For these reasons we have only given the detailed presentation for the case of a second-rank ordering tensor.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors wish to thank the two anonymous referees for their valuable comments, which led to an improved paper. S.T. wishes to thank Maurizio Vianello and Antonio DiCarlo for instructive conversations concerning related problems in Elasticity. The authors are also grateful to the Isaac Newton Institute, Cambridge, where this work originated during the Programme on the Mathematics of Liquid Crystals in 2013.
Appendix: orientational probability densities in $\Otre$ {#apdx:otre}
========================================================
When dealing with the orientation of molecules in physical space, it is natural to choose the molecular and laboratory frames of reference with the same handedness. The orientation of a molecule is then assigned in terms of the rotation ${\mathbf{R}}\in \essotre$ that brings the laboratory axes into coincidence with the molecular axes. In this respect, an inversion or a mirror reflection does not represent a change of the orientation of the molecule, and the orientational probability density function is usually taken to be a function $f:\essotre \to {\mathbb{R}}_+$.
However, when we need to consider how the symmetry groups act on the orientational distribution, for instance because we need to exploit the mirror symmetry of a molecule, we need to consider probability densities $g:\Otre \to {\mathbb{R}}_+$. The two pictures can be reconciled as follows.
Since $\Otre = \essotre \times C_i$ is composed of two connected components, that is $\essotre$ and the other obtained from $\essotre$ by inversion, the integration over $\Otre$ separates into two integrals over $\essotre$. Namely, the ensemble average of a function $\chi:\Otre \to {\mathbb{R}}$ is $$\langle \chi \rangle_{\Otre} = \int_{\essotre} \chi({\mathbf{R}})g({\mathbf{R}}) \ \dd\mu({\mathbf{R}}) +
\int_{\essotre} \chi({\mathbf{R}}\iota) g({\mathbf{R}}\iota) \ \dd\mu({\mathbf{R}}).
\label{eq:averageO3}$$
For *apolar molecules*, which possess inversion symmetry, $g({\mathbf{R}})=g({\mathbf{R}}\iota)$. The compatibility of the distributions in $\essotre$ requires $$g({\mathbf{R}}) = \tfrac{1}{2}f({\mathbf{R}}), \qquad \text{ for all } {\mathbf{R}}\in \essotre,$$ where $f({\mathbf{R}})$ is the distribution function in $\essotre$ that we have defined in the text and the factor $1/2$ comes from the normalisation of the distributions. The ensemble average then becomes $$\langle \chi \rangle_{\Otre} = \int_{\essotre} \tfrac{1}{2}\left(\chi({\mathbf{R}}) + \chi({\mathbf{R}}\iota) \right) g({\mathbf{R}}) \ \dd\mu({\mathbf{R}}).
\label{eq:averageO3_b}$$ This equation shows that the order parameter tensors are again calculated as averages over $\essotre$. Explicitly, we have $$\langle D^{(j)} \rangle_{\Otre} = \int_{\essotre} \tfrac{1}{2}\left(D^{(j)}({\mathbf{R}}) + (-1)^j D^{(j)}({\mathbf{R}}) \right) g({\mathbf{R}}) \ \dd\mu({\mathbf{R}}).
\label{eq:opO3}$$ where we have used $$D^{(j)}({\mathbf{R}}\iota) = (-1)^j D^{(j)}({\mathbf{R}}),$$ an identity that follows from the generalisation of Eq. to $j^{\text{th}}$-rank tensors (see Eq.). In particular, this shows that the order parameters for apolar molecules vanish if $j$ is odd; the second-rank order parameter ${\mathbb{S}}$ considered in the text clearly does not vanish since $j$ is even.
By contrast, for *polar molecules*, which do not possess inversion symmetry, it is not possible (if the system is homogeneous and of a single chirality) to find an inverted molecule. Therefore, $g({\mathbf{R}}\iota) = 0$ for all ${\mathbf{R}}\in \essotre$, and the probability density $g({\mathbf{R}})$ coincides with $f({\mathbf{R}})$ in $\essotre$.
[^1]: `[email protected]`
[^2]: Actually, the phase we have described is a *calamitic* uniaxial phase. A uniaxial phase can also be *discotic* when ${\mathbf{m}}$ is randomly distributed in a plane orthogonal to ${\mathbf{n}}$, in which case “perfect” alignment would correspond to $S=-\frac{1}{2}$.

[^3]: Basically, this expansion is the analogue of the Fourier analysis for compact groups.

[^4]: In Elasticity theory, traceless tensors are sometimes called *deviatoric* tensors. Since the common second-rank tensors found in Elasticity are symmetric, $V_2$ is usually referred to as the ’space of deviatoric tensors’.

[^5]: In Ref. [@07Rosso] the definitions of ${\mathbf{M}}_3$ and ${\mathbf{M}}_4$ are swapped.

[^6]: Some authors, especially in the mathematical literature, use the term *isotropy group*. Here we prefer to adopt the term *symmetry group* or *stabiliser* to avoid confusion with the term “isotropy” as used, e.g., for the isotropic phase, which is a totally different thing.
[^7]: Since $G_1,G_2 \subset \Otre$, a correct definition would require $f_1,f_2:\Otre \to {\mathbb{R}}_+$. This point is discussed in the Appendix, to avoid diverting here from the main discourse.
[^8]: NB: In our notation $S_{ij}={\mathbf{L}}_i \cdot \langle {\mathbf{M}}_j \rangle$ (see ), whilst in [@07Rosso] $S_{ij}=\langle {\mathbf{M}}_i \rangle \cdot {\mathbf{L}}_j$.
|
Aurélie Muller-Gueudin $^{1}$
[*$^{1}$ Institut Elie Cartan Nancy, Nancy-Université\
Boulevard des Aiguillettes B.P. 239\
F-54506 Vandoeuvre lès Nancy\
`[email protected]`* ]{}
[**Résumé.**]{} To represent the evolution of molecular species in a gene network, the most classical model is the Markov jump process. This model has the drawback of being slow to simulate, because of the speed and the large number of chemical reactions. We propose approximate models, based on piecewise deterministic processes, which shorten the simulation times. In a recent article, we rigorously proved the convergence of the former models (Markov jump processes) to the latter (piecewise deterministic processes). In the talk we will not go into the details of this rigorous justification, but we will show examples of application to very simple gene networks (Cook's model and the Lambda-phage model).
[**Mots-clés.**]{} Gene networks, Piecewise deterministic processes, Markov jump processes, Simulation.
[**Abstract.**]{} The molecular evolution in a gene regulatory network is classically modeled by Markov jump processes. However, the direct simulation of such models is extremely time consuming. Indeed, even the simplest Markovian model, such as the production module of a single protein, involves tens of variables and biochemical reactions and an equivalent number of parameters. We study the asymptotic behavior of multiscale stochastic gene networks using weak limits of Markov jump processes. The results allow us to propose new models with reduced execution times. In a recent article, we have shown that, depending on the time and concentration scales of the system, the Markov jump processes can be approximated by piecewise deterministic processes. We give some applications of our results for simple gene networks (Cook's model and the Lambda-phage model).
[**Keywords.**]{} Gene networks, Piecewise deterministic processes, Markov jump processes, Simulation.
Presentation of the problem
=========================
In molecular biology, gene networks are defined by a set of chemical reactions between the molecular species present in a given cell. It is now established that the dynamics of these networks is stochastic: the systems of chemical reactions are modelled by homogeneous Markov jump processes.
Consider a set of chemical reactions denoted $R_r$, for $r\in \mathcal R$; the set $\mathcal R$ is assumed to be finite. These reactions modify the quantities of the molecular species present in the cell. Each molecular species is denoted $i$, for $i\in S=\{1,\dots,M\}$. The number of molecules of species $i$ is denoted $n_i$, and the state of the system is described by the vector $X=(n_1,\ldots, n_M)\in\mathbb N^{M}$. Each reaction $R_r$ changes the state of the system as follows: $X\mapsto X+\gamma_r$, with $\gamma_r\in\mathbb Z^M$. The vector $\gamma_r$ is the jump associated with the reaction $R_r$. The reaction $R_r$ occurs with a rate $\lambda_r(X)$ that depends on the state of the system.
This evolution is described by a Markov jump process. The jump times, denoted $(T_j)_{j\ge 1}$, satisfy $T_0=0$, $T_j= \tau_1+\dots +\tau_j$, where $(\tau_k)_{k\ge 1}$ is a sequence of independent random variables such that $$\mathbb P(\tau_i>t)= \exp\Big(-\displaystyle\sum_{r\in {\mathcal R}} \lambda_r(X(T_{i-1}))t\Big).$$ At time $T_i$, the reaction $r\in {\mathcal R}$ occurs with probability $\displaystyle\frac{\displaystyle\lambda_r\left(X(T_{i-1})\right)}{\displaystyle\sum_{r\in {\mathcal R}} \lambda_r\left(X(T_{i-1})\right)}$ and the state of the system changes according to $X\to X+\gamma_r$, that is: $$X(T_{i})= X(T_{i-1})+\gamma_{r}.$$ The generator of this Markov process is $A f(X) = \sum_{r\in \mathcal R} \left[ f(X+\gamma_r)-f(X)\right]\lambda_r(X),$ for functions $f$ in the domain of the generator.
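This jump dynamics can be simulated exactly with the standard stochastic simulation (Gillespie) algorithm, which draws the waiting times and the reaction indices exactly as above. The sketch below is a minimal illustration of that scheme, not code from the article; the birth–death example at the end, with its rate constants, is a placeholder of our own.

```python
import numpy as np

def gillespie(x0, gammas, rates, t_max, rng=np.random.default_rng(0)):
    """Exact simulation of the Markov jump process X -> X + gamma_r at rate lambda_r(X)."""
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        lam = np.array([rate(x) for rate in rates])
        total = lam.sum()
        if total == 0.0:                           # absorbing state: no reaction can fire
            break
        t += rng.exponential(1.0 / total)          # waiting time tau ~ Exp(sum_r lambda_r)
        r = rng.choice(len(rates), p=lam / total)  # reaction r chosen with prob. lambda_r / sum
        x = x + gammas[r]                          # jump X -> X + gamma_r
        times.append(t); states.append(x.copy())
    return np.array(times), np.array(states)

# Toy birth-death example (placeholder rates): X -> X+1 at rate 5, X -> X-1 at rate 0.1*X
times, states = gillespie([0], gammas=[np.array([1]), np.array([-1])],
                          rates=[lambda x: 5.0, lambda x: 0.1 * x[0]], t_max=100.0)
```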
These Markov jump models have very long execution times. For example, even a simple gene network involves tens of variables and parameters. We have shown (Crudu *et al.* (2012); Radulescu *et al.* (2007)) that these models can be approximated (via convergence in law (Billingsley (1999))) by piecewise deterministic Markov processes. These approximate models improve the execution times.
Our results
=============
In applications, the numbers of molecules are on different scales: some species are present in large numbers, and others in small numbers. Consequently, we have split the set of species into two subsets, denoted $C$ and $D$, of cardinalities $M_C$ and $M_D$. Likewise, the state of the system is written $X=(X_C,X_D)$, and the jumps $\gamma_r=(\gamma_r^C,\gamma_r^D)$. For $i\in D$, $n_i$ is of order $1$, whereas for $i\in C$, $n_i$ is proportional to $N$, where $N$ is large. Our asymptotic results correspond to $N\to +\infty$. We also write $\displaystyle x_C=\frac1N X_C$ and $x=(x_C,X_D)$. Similarly, the set of reactions splits according to the species involved: $\mathcal R=\mathcal R_D\cup \mathcal R_C\cup \mathcal R_{DC}$. A reaction in $\mathcal R_D$ (resp. $\mathcal R_C$) produces or consumes only rare species, i.e. species of $D$ (resp. abundant species, i.e. species of $C$). Likewise, the reaction rates in $\mathcal R_{D}$ (resp. $\mathcal R_{C}$) depend only on $X_{D}$ (resp. $x_{C}$). A reaction in $\mathcal R_{DC}$ has a rate that depends on both $x_C$ and $X_D$ and produces or consumes both rare ($D$) and abundant ($C$) species.
The rates of the reactions $r\in\mathcal R_{C}$ are also large, of order $N$, and we set $\tilde \lambda_{r}= \frac{\lambda_{r}}{N}$. This means that the abundant variable $x_C$ is frequently involved in chemical reactions. Suppose for now that the reactions in $\mathcal R_{D}$ or $\mathcal R_{DC}$ have rates of order 1.
These rescaled variables still obey a Markov jump process, whose generator is: $$\begin{array}{ll}
\tilde{\mathcal{A} }f(x_C,X_D) &=\displaystyle \sum_{r\in \mathcal R_C} \left[ f(x_C+\frac1N \gamma_r^C,X_D)-f(x_C,X_D)
\right]N\tilde\lambda_r(x_C)\\
\\
&\displaystyle +\sum_{r\in \mathcal R_{DC}} \left[ f(x_{C}+\frac1N\gamma_r^C,X_{D}+\gamma_{r}^D)-f(x_{C},X_{D})\right]\lambda_r(x_{C},X_{D})\\
\\
&\displaystyle +\sum_{r\in \mathcal R_{D}} \left[ f(x_{C}, X_{D}+\gamma_r^D)-f(x_{C},X_{D})\right]\lambda_r(X_{D}).
\end{array}$$ As $N\to+\infty$, Kurtz (1971, 1978) showed that the rare and the abundant species decouple, that is, each obeys its own process. Indeed, the reactions in $\mathcal R_{DC}$ are not frequent enough to change the behaviour of the fast variable $x_{C}$. In the limit, the variable $x_C$ obeys a differential system, which evolves without any influence from the discrete variable $X_D$. The discrete variable $X_D$, in turn, obeys a jump process that is independent of the continuous variable $x_C$.
Our work has been to consider more general systems containing other types of reactions, leading to other limit systems. Depending on the different reaction time scales and species concentration scales, we have rigorously established four types of limits for the Markov jump process: continuous piecewise deterministic processes (Davis (1993)), piecewise deterministic processes with jumps in the 'continuous' variable, averaged piecewise deterministic processes, and piecewise deterministic processes with singular jumps in the 'continuous' variable.
Illustration
============
The $\lambda$ phage
------------------
The $\lambda$ phage is a parasite of the bacterium E. coli. The state of the $\lambda$ phage is given by the vector $X = (C,C_2,D,D_1,D_2) $, where $C$ and $C_2$ represent a protein and its dimer, produced by the phage, and $D,D_1,D_2$ represent promoter sites on the phage DNA, respectively unoccupied, singly occupied, or doubly occupied. We have $D+D_1+D_2=$ constant. The phage develops inside the bacterium E. coli according to the following chemical reactions: $2C \underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}} C_2$, $D+C_2 \underset{k_{-2}}{\overset{k_2}{\rightleftharpoons}} D_1$, $D_1+C_2 \underset{k_{-3}}{\overset{k_3}{\rightleftharpoons}} D_2$, $D_1 \xrightarrow{k_4} D_1 + nC$, $C \xrightarrow{k_5}$. The last reaction means that a protein $C$ is degraded.
If the molecular species are present in large numbers and undergo fast jumps (which holds under certain conditions on the kinetic constants $k_i$), we can approximate the trajectory of $X_t$ by a deterministic trajectory. The fluctuations around the deterministic trajectories are illustrated in figure \[phage\] a).
![$\lambda$-phage model; the deterministic limit is shown as a solid line. a) molecules and sites in large numbers; b) a single site. The parameters are a) $k_1^{\pm}=k_2^{\pm}=k_3^{\pm}=0.1, k_4=0.006, k_5=0.01 $ b) $k_1^{\pm}=k_3^{\pm}=0.01, k_4=0.3, k_5=0.005 $. []{data-label="phage"}](fig1.pdf "fig:"){width="6cm"}a) ![$\lambda$-phage model; the deterministic limit is shown as a solid line. a) molecules and sites in large numbers; b) a single site. The parameters are a) $k_1^{\pm}=k_2^{\pm}=k_3^{\pm}=0.1, k_4=0.006, k_5=0.01 $ b) $k_1^{\pm}=k_3^{\pm}=0.01, k_4=0.3, k_5=0.005 $. []{data-label="phage"}](fig2.pdf "fig:"){width="6cm"}b)
In reality, not all molecules are present in large numbers, since $D,D_1,D_2$ take the values 0 or 1. In figure \[phage\] b) we have simulated trajectories under this assumption. The piecewise deterministic processes are good approximations of the trajectories of $C,C_2$. Indeed, the molecules $C,C_2$ are in large numbers, so the deterministic limit can be applied to them between two jumps of the discrete variables. The evolution of $(D,D_1)$ can be described by a Markov jump process on the set $\{0, 1 \}^2$. When the site is unoccupied or doubly occupied ($D=1,D_1=0$ or $D=0,D_1=0$), there is no production of $C$, which tends to equilibrate with its dimer. When the site is singly occupied ($D=0,D_1=1$), $C$ is produced. The piecewise deterministic character of the dynamics of the variables $C,C_2$ does not mean that fluctuations of these variables are absent. On the contrary, these variables are subject to substantial fluctuations (Fig. \[phage\] b). This leads to a biological conclusion: the source of the fluctuations is not necessarily the smallness of the mean number of observed molecules but could be the smallness of the number of sites.
Cook's model
--------------
Cook's model describes haploinsufficiency (the loss of function of half of the total number of copies of a gene). It can be described by the following reaction system: $G \underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}} G^*$, $G^*
\xrightarrow{k_2} G^* + P$, $P \xrightarrow{k_3}\cdot$
In Cook's model, $G$ and $G^*$ are the dormant and active versions of a gene, and $P$ is the translated protein.
This reaction system conserves the quantity $G+G^*=G_0$. The haploinsufficiency regime is defined by a small value of $G_0$. For simplicity, consider $G_0=1$, which means a single valid copy of the gene. In this situation $G,G^* \in \{0,1 \}$. More generally, Cook's model could describe other cases of intermittent activity in molecular biology. For example, in the functioning of signalling pathways, $G,G^*$ can be viewed as hidden two-valued variables parametrising the dynamics of a molecular system in two situations (presence and absence of a key molecule). Extensions to hidden variables with several discrete values are possible (the behaviour of the system could depend on the presence and absence of certain molecules).
If the kinetic parameters $k_i$ satisfy certain conditions, and if the number of molecules $P$ is large, the dynamics of $P$ can be considered deterministic for a fixed value of $G^*$. This leads to the following piecewise deterministic model: $$\frac{ dP}{dt} = -k_3 P + k_2 G^*(t)$$
where $G^*(t)$ is a Markov process with state space $E=\{0,1\}$ and intensity function: $$\lambda(G^*) = \left\{ \begin{array}{ll} k_1 & \text{if} \quad G^*=0
\\ k_{-1} & \text{if} \quad G^*=1 \end{array} \right.$$
A trajectory is simulated in figure \[Cook\].
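A minimal sketch of such a piecewise deterministic simulation is given below. It is only an illustration of the scheme, not code from the article; it uses the parameter values quoted in the figure caption and the exact solution of the linear ODE between switches of $G^*$.

```python
import numpy as np

# Parameter values taken from the caption of the Cook-model figure
k1, km1, k2, k3 = 20.0, 10.0, 4000.0, 1.0

def simulate_cook_pdmp(t_max=10.0, P0=0.0, G0=0, rng=np.random.default_rng(1)):
    """Piecewise deterministic simulation of dP/dt = -k3*P + k2*G*(t),
    with G*(t) a two-state chain switching 0 -> 1 at rate k1 and 1 -> 0 at rate k_{-1}."""
    t, P, G = 0.0, P0, G0
    times, Ps = [t], [P]
    while t < t_max:
        rate = k1 if G == 0 else km1          # switching intensity lambda(G*)
        tau = rng.exponential(1.0 / rate)     # waiting time until the next switch
        dt = min(tau, t_max - t)
        # exact flow of the linear ODE dP/dt = -k3*P + k2*G on [t, t+dt]
        P = P * np.exp(-k3 * dt) + (k2 * G / k3) * (1.0 - np.exp(-k3 * dt))
        t += dt
        times.append(t); Ps.append(P)
        if dt < tau:                          # reached t_max before the next switch
            break
        G = 1 - G                             # the gene switches state at the jump time
    return np.array(times), np.array(Ps)

times, Ps = simulate_cook_pdmp()
```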
![Cook's model. The parameters are $k_1=20,k_{-1}=10, k_2=4000, k_3=1$.[]{data-label="Cook"}](Cook4.pdf){width="8cm"}
Conclusion
==========
- The preceding simulations show that the execution times of the piecewise deterministic models are much shorter than those of the Markov jump models. Indeed, it is no longer necessary to simulate ALL the chemical reactions, and hence ALL the jumps of the Markov process. The piecewise deterministic approximation, theoretically justified in our article, is an economical alternative to the Markov jump models.
- The trajectories obtained with these approximate models (the piecewise deterministic models) are similar to the trajectories obtained by simulating all the reactions of the Markov jump models.
- We have generalised the results of Kurtz (1971, 1978). Indeed, Kurtz proposed deterministic models that are not suitable in situations where the molecular species retain a stochastic behaviour. Our piecewise deterministic models fill this gap. The $\lambda$-phage and Cook examples illustrate this point.
Bibliography {#bibliographie .unnumbered}
=============
\[1\] Billingsley, P. (1999), [*Convergence of Probability Measures*]{}, Wiley Series in Probability Statistics.
\[2\] Crudu, A., Debussche, A., Muller, A. and Radulescu, O. (2012), Convergence of stochastic gene networks to hybrid piecewise deterministic processes, [*To appear in Annals of Applied Probability*]{}.
\[3\] Davis, M. (1993), [*Markov Models and Optimization*]{}, Chapman and Hall.
\[4\] Ethier, S. and Kurtz, T. (1986), [*Markov processes. [C]{}haracterization and Convergence*]{}, Wiley Series in Probability Statistics.
\[5\] Kurtz, T. (1971), Limit theorems for sequences of jump Markov processes approximating ordinary differential processes, [*J. Appl. Prob.*]{}, 8, 344–356.
\[6\] Kurtz, T. (1978), Strong approximation theorems for density dependent Markov chains, [*Stoch. Proc. Appl.*]{}, 6, 223–240.
\[7\] Radulescu, O., Muller, A. and Crudu, A. (2007), Théorèmes limites pour les processus de Markov à sauts. Synthèse de résultats et applications en biologie moléculaire, [*Technique et Science Informatiques*]{}, 3-4, 441–467.
|
---
abstract: 'Using density functional theory calculations, the ground state structure of BaFeO$_3$ (BFO) is investigated within the local spin density approximation (LSDA). Cubic, tetragonal, orthorhombic, and rhombohedral BFO are considered to calculate the formation enthalpy. The formation enthalpies reveal that cubic is the most stable structure of BFO. The small energy difference between the cubic and tetragonal structures suggests a possible tetragonal BFO. Ferromagnetic (FM) and antiferromagnetic (AFM) couplings between the Fe atoms show that all the stoichiometric BFO phases are FM. The energy difference between FM and AFM indicates room-temperature ferromagnetism in cubic BFO, in agreement with experimental work. The LSDA calculated electronic structures are metallic in all studied crystallographic phases of BFO. Calculations including the Hubbard potential $U$, i.e. LSDA+$U$, show that all phases of BFO are half-metallic, consistent with the integer magnetic moments. The presence of half-metallicity is discussed in terms of the electronic band structures of BFO.'
address: 'Department of Physics, Quaid-i-Azam University, Islamabad 45320, Pakistan'
author:
- Gul Rahman
- Saad Sarwar
title: 'Ground state structure of BaFeO$_{3}$: Density Functional Theory Calculations'
---
Introduction
============
Perovskite materials are very important from both theoretical and experimental points of view due to ferroelectricity,[@1] spin-dependent transport, and magnetic properties.[@2] Magnetic oxides are helpful in understanding the magnetic coupling through nanostructured interfaces.[@10; @11; @12] In particular, those perovskite oxides that exhibit magnetic and ferroelectric characteristics simultaneously, known as multiferroics, can have practical device applications such as spin transistor memories, whose magnetic properties can be tuned by electric fields through the lattice strain effect.[@G1] Iron-based perovskite oxides have very interesting properties due to the different oxidation states of Fe, which give rise to different crystal structures and stoichiometries. A small number of oxides containing Fe in the high-valence state (Fe$^{4+}$) are known, where Fe is surrounded by six oxygen atoms.[@8] BaFeO$_{3}$ (BFO) is one example of a perovskite oxide with iron in the +4 valence state. In cubic BFO crystals, ferromagnetism has been observed in recent experiments.[@19] Ferromagnetism is found in pseudocubic BFO films on SrTiO${_3}$ (STO).[@20] There are also experimental reports on the successful growth of cubic BFO on STO.[@21; @25; @26] Callender *et al*.[@26] have epitaxially grown cubic BFO on STO and reported weak ferromagnetism with a transition temperature of 235 K. Fully oxidized single-crystal BFO thin films also show a large saturation magnetization, 3.2 $\mu_B$/formula unit (f.u.) in plane and 2.7 $\mu_B$/f.u. out of plane.[@27] The reported lattice constant and saturation magnetization of BFO thin films are quite close to those of bulk BFO ($a$ = 3.97 $\rm \AA$), with no helical magnetic structure observed.[@27] The absence of helical magnetic order in the thin films might be due to the small energy barrier between the A-type helical magnetic order and the ferromagnetic (FM) phase.[@28] Very recently, we also found a distortion-induced FM to antiferromagnetic (AFM) and ferrimagnetic transition in cubic BFO.[@p1]
Tetragonal BFO is also of great interest as it may be a multiferroic phase of BFO,[@24] and Taketani *et al*.[@21] have reported epitaxially grown BFO thin films on an STO (100) substrate with a tetragonal crystal structure. Hexagonal BaFeO$_{3-\delta}$ is expected to be the most stable phase, although various polymorphs have been observed in oxygen-deficient BFO.[@15; @15A; @15B; @15C] Bulk hexagonal BaFeO$_{3-\delta}$ also exhibits an AFM to FM transition at 160 K.[@G2] Using density functional theory (DFT), we found that strain and correlation also play a significant role in the magnetic and electronic properties of orthorhombic BFO.[@34] BFO can be alloyed with other magnetic perovskites such as BiFeO$_3$ to yield good multiferroic properties.
The well-studied ferroelectric BaTiO$_3$ has different crystal structures, and shows different crystallographic behavior at different temperatures and pressures.[@29] Similarly, the multiferroic BiFeO$_3$ also has different crystal structures.[@BiFeO3; @a; @b; @c; @d; @e] However, there are no comprehensive theoretical calculations on the less studied perovskite BaFeO$_3$ investigating the true ground state structure of stoichiometric BFO with the help of DFT. Hence, we use DFT to investigate the equilibrium structure of BaFeO$_3$.
Computational Method
======================
Calculations based on DFT are performed with the plane-wave pseudopotential method as implemented in the Quantum Espresso package.[@30] The exchange-correlation effects are treated within the local spin density approximation (LSDA). The on-site Coulomb potential $U$ (= 5.0 eV) [@31] has also been added to LSDA to perform LSDA+$U$ calculations, in order to correctly describe the electronic structure of BFO in the different crystallographic phases. Ultrasoft pseudopotentials are used to describe the core-valence interactions. The valence wave functions and the electron density are described by plane-wave basis sets with a kinetic energy cutoff of 30 Ry and a $12 \times 12 \times 12$ Monkhorst-Pack grid. All the computational parameters are fully converged. We studied BFO in four different crystal structures, i.e., cubic, tetragonal, orthorhombic and rhombohedral, with space groups *Pm3m, P4mm, Amm2, R3m*,[@29] respectively.
Results and Discussions
=======================
For different crystal structures (phases) of a material, it is essential to optimize the lattice constants (volume), either using DFT or molecular dynamics. We used DFT and studied four different crystal structures of BFO, i.e., cubic, tetragonal, orthorhombic, and rhombohedral, which are shown in Fig. \[crys\_str\]; their space groups are listed in Table \[tab1\]. To optimize the lattice volume, we carried out DFT calculations in the ferromagnetic (FM) and non-magnetic (NM) states and then fitted the data using the Birch-Murnaghan equation of state (EOS) [@3] (shown in Fig. \[crys\_vol\]), which enables us to estimate the equilibrium volume (lattice constant). These plots show that the FM state is more stable than the NM state in all crystallographic phases of BFO. The optimized volumes and lattice parameters in the FM states are summarized in Table \[tab1\]. For comparison, the combined energy-volume (EV) curves in the FM states of all the studied structures are also shown in Fig. \[vol\_fit\]. From Fig. \[vol\_fit\] and Table \[tab1\], it is inferred that FM cubic BFO is the equilibrium (stable) structure of BFO among all the studied structures. This result is also supported by recent work on cubic BFO.[@32] After confirming the stability of the FM state with respect to the NM state, further calculations were performed to check the stability of FM with respect to the AFM state, with AFM ordering along the (001) direction. We used the relation $\bigtriangleup E = E_{AFM} - E_{FM}$ \[where $E_{AFM}$ ($E_{FM}$) is the total energy in the AFM (FM) state\] to address the magnetic stability, and found that the FM state is more stable than the AFM state, as shown in Table \[tab1\]. We see that cubic BFO has the largest $\bigtriangleup E$, which indicates possible room-temperature ferromagnetism, in agreement with the recent experimental work on cubic BFO.[@21; @25; @26]
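For readers who wish to reproduce this kind of equilibrium-volume estimate, a minimal sketch of a third-order Birch-Murnaghan fit is given below. It is not the authors' code; the $E(V)$ data points are placeholders standing in for the DFT total energies plotted in Fig. \[crys\_vol\].

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, B0p):
    """Third-order Birch-Murnaghan equation of state E(V)."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

# Placeholder E(V) data (Ry vs a.u.^3); in practice these are the DFT total energies
V = np.linspace(350.0, 430.0, 9)
E = birch_murnaghan(V, -250.0, 389.0, 0.012, 4.0) \
    + 1e-4 * np.random.default_rng(0).normal(size=V.size)

p0 = [E.min(), V[np.argmin(E)], 0.01, 4.0]      # initial guess: E0, V0, B0, B0'
popt, _ = curve_fit(birch_murnaghan, V, E, p0=p0)
E0, V0, B0, B0p = popt
print(f"equilibrium volume V0 = {V0:.2f} a.u.^3, bulk modulus B0 = {B0:.4f} Ry/a.u.^3")
```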
Having confirmed that the FM state is more stable than the NM and AFM states of BFO, we calculated the formation energies of BFO in the different structural phases; the formation energy of each phase provides an independent check of the lowest ground-state energy. The formation energy is calculated using the following formula,
$$\bigtriangleup E_f = E(\rm BaFeO_3) - [E(Ba) + E(Fe) + 3(\frac{1}{2})E(O_2)],$$
where $E(\rm BaFeO_3)$ is the total energy in any studied crystallographic phase ($e.g.$ cubic) and $E$(Ba), $E$(Fe), and $E$(O$_2$) are the energies of BCC Ba, BCC Fe and the oxygen molecule O$_2$, respectively. The formation energy of each system is given in Table \[tab1\]. The formation energy of the cubic phase is the lowest (-3.85 eV), consistent with the EV curves (Fig. \[vol\_fit\]), which show that the cubic structure is the most stable structure of BFO. At the same volume (the equilibrium volume of cubic BFO), the second most stable structure is tetragonal BFO (Fig. \[vol\_fit\]). The small energy difference between cubic BFO and tetragonal BFO clearly indicates a possible phase transition to tetragonal BFO. Such a small energy difference can easily be recovered if cubic BFO is grown as a thin film on a suitable substrate, e.g., SrTiO$_{3}$.[@21] Note that such a possible tetragonal BFO is supported by the experimental work.[@21] Furthermore, when we expand the cubic unit cell, its EV curve crosses that of the rhombohedral phase at a volume slightly larger than the equilibrium volume of the cubic phase, so the third stable (metastable) phase is the rhombohedral phase, consistent with the formation energies (Table \[tab1\]). Note that a similar crystal stability was also observed in BiFeO$_{3}$.[@BiFeO3]
The calculated magnetic moments (MMs) per f.u. of BFO in the different crystal structures are shown in Fig. \[mag\], and Table \[tab2\] summarizes the total and local MMs of the Fe and O atoms in each phase of BFO at the equilibrium lattice volume. In Fig. \[mag\], for all the phases of BFO, the magnetic moment increases with increasing volume and, at some particular volume (near the equilibrium volume), it attains an almost constant value. The increase in MM with lattice constant usually occurs due to the decrease in the overlap between the Fe and O orbitals. The comparison of the total magnetic moments of all the structures (Table \[tab2\]) shows that orthorhombic BFO has the lowest MM value due to its smaller volume/f.u., as shown in Table \[tab1\]. The total magnetic moments of the cubic, tetragonal, and rhombohedral phases are approximately the same because their equilibrium volumes differ only slightly from each other. The local MMs show that the major contribution to the total moment comes from the Fe atoms. Small induced MMs at the O sites due to Fe-O hybridization can also be seen. In the cubic and rhombohedral systems, the Fe-O bond lengths are the same, so the local moments are also the same for all three oxygen atoms. In the remaining structures, due to the different Fe-O bond lengths, the oxygen atoms have different magnetic moments. These different local moments of oxygen reflect the different crystal symmetries of BFO. In BFO, Fe has four unpaired electrons, all in the high-spin (spin-up) state. This suggests that BFO should have a total magnetic moment of 4$\mu_B$, but this is not the case. The deviation from 4$\mu_B$ is due to the strong hybridization of Fe with the oxygen atoms. However, the LSDA+$U$ calculation gives 4$\mu_B$, which suggests that the strong Fe-O hybridization reduces the magnetic moment. Including the Coulomb-type repulsive interaction, i.e., $U$, decreases the Fe-O hybridization, which gives an integer magnetic moment in each phase of BFO. The non-integer (integer) values of the MMs in LSDA (LSDA$+U$) are also confirmed by the electronic structures, which are discussed in the following paragraphs.
The electronic properties are also investigated in each phase of BFO using both LSDA and LSDA$+U$ (see Fig. \[dos\]). We notice that the electronic properties of BFO in all studied phases show almost similar behaviour, and in each case there is a transformation from a metallic to a half-metallic phase upon including $U$. The total density of states (DOS) and projected density of states (PDOS) of cubic BFO in LSDA show that cubic BFO is a metal, consistent with the non-integer value of the magnetic moment. The separate contributions of the Fe and O atoms to the total DOS are also shown in the PDOS. The Fe $d$ orbital is further split into doubly degenerate $e_g$ and triply degenerate $t_{2g}$ states. From the PDOS it can be seen that there is a strong hybridization between the Fe-$e_g$ and O-$p$ states, while the Fe-$t_{2g}$ and O-$p$ states are weakly hybridized. The strong coupling between the Fe-$e_g$ and O-$p$ states is due to the direct overlap of these orbitals. The $t_{2g}$ states make minor contributions to the majority spin states at the Fermi level. Such hybridization of the Fe and O orbitals in cubic BFO generates metallicity. However, including $U$ has a quite different effect on the Fe-$d$ orbitals. The DOS and PDOS for LSDA+$U$ are also plotted in Figure \[dos\](b). Due to the introduction of the correlation energy through the Hubbard-$U$ term, a different behavior appears in the minority spin states, creating a band gap of approximately 0.95 eV at the Fermi level and giving a half-metallic character to cubic BFO. Such half-metallic behavior in perovskites is very important from an application point of view. The Hubbard-$U$ term has a small effect on the Fe-$e_g$ bands, as these states are strongly hybridized with the O-$p$ states, while it increases the localization of the $t_{2g}$ bands. This reduces the already small hybridization of the $t_{2g}$ electrons with the O-$p$ states and shifts both the minority and majority $t_{2g}$ states further away from the Fermi level. The shift is from -5 eV to -7 eV below the Fermi level in the spin-up $t_{2g}$ states, and from 1 to 2 eV above the Fermi level in the spin-down $t_{2g}$ states. Our results are in good agreement with previous *ab-initio* calculations.[@33]
The electronic structures of tetragonal, orthorhombic, and rhombohedral BFO are also analysed, and they are very similar to that of cubic BFO. The DOS and PDOS for tetragonal BFO are plotted in Fig. \[dos\], which shows the same behaviour as the cubic structure, but the spin-down band gap for LSDA+$U$ is 1.02 eV, which is larger than the cubic BFO band gap. Similarly, orthorhombic BFO, shown in Fig. \[dos\], gives the same results as the cubic and tetragonal phases, except for a prominent pseudogap in LSDA in the minority spin states just below the Fermi level, which can be shifted to the Fermi level by applying external strain.[@34] However, in the LSDA+$U$ calculations the band gap in the minority spin states is 1.03 eV. We also expect that rhombohedral BFO (Fig. \[dos\]) may show half-metallic behavior under uniaxial strain, similar to orthorhombic BFO.[@34] The LSDA+$U$ calculated half-metallic band gap of rhombohedral BFO is 1.3 eV, which is larger than in the other phases of BFO due to its larger volume.
Conclusion
==========
Density functional theory is used to predict the ground state crystal structure of BaFeO$_{3}$. The local spin density approximation (LSDA) was used for the exchange and correlation functional. Different crystal structures (cubic, tetragonal, orthorhombic, and rhombohedral) of BFO are considered. The LSDA calculations showed that cubic BFO has the lowest formation enthalpy among the studied crystal structures. It is also observed that the FM states of BFO are more stable than the NM and AFM states. The electronic structures within LSDA showed that all phases of BFO are metallic. To correctly describe the electronic band structures, further calculations were carried out using the LSDA+$U$ approach. The LSDA+$U$ calculated band structures of BFO are half-metallic. The LSDA+$U$ approach showed that adding a Hubbard-like potential $U$ decreased the hybridization between the Fe $d$ and O $p$ orbitals, and this reduced hybridization resulted in half-metallicity in BFO. The calculated magnetic moments showed integer (non-integer) values in the LSDA+$U$ (LSDA) calculations.
Acknowledgment
==============
We acknowledge National Centre for Physics (NCP) Islamabad, Pakistan for providing computing facilities.
R. E. Cohen, Nature [**358**]{}, 136 (1992). B. John, Reports on Progress in Physics [**67**]{}, 1915 (2004). H. Zheng, J. Wang, S. E. Lofland, Z. Ma, L. Mohaddes-Ardabili, T. Zhao, L. Salamanca-Riba, S. R. Shinde, S. B. Ogale, F. Bai, D. Viehland, Y. Jia, D. G. Schlom, M. Wuttig, A. Roytburd, R. Ramesh, Science [**303**]{}, 661 (2004). R. Ramesh, N. A. Spaldin, Nat. Mater. [**6**]{}, 21 (2007). M. Sepliarsky, S. R. Phillpot, M. G. Stachiotti and R. L. Migoni, J. Appl.Phys. [**91**]{}, 3165 (2002). Y. Ohno, D. K. Young, B. Beschoten, F. Matsukura, H. Ohno and D. D. Awschalom, Nature [**402**]{}, 790 (1999). T. Matsui, H. Tanaka, N. Fujimura, T. Ito, H. Mabuchi, and K. Morii, Appl. Phys. Lett [**81**]{}, 2764 (2002). N. Hayashi, T. Terashima, and M. Takamo, J. Matter. Chem. [**11**]{}, 2235 (2001). T. Matsui, H. Tanaka, N. Fujimura, T. Ito, H. Mabuchi, and K. Morii, Appl. Phys. Lett. [**81**]{} ,2764 (2002). T. Matsui, E. Taketani, N. Fujimura, T. Ito, and K. Morii, J. Appl. Phys. [**93**]{}, 6993 (2003). B. Ribeiro, R. P. Borges, R. C. da Silva, N. Franco, P. Ferreira, E. Alves, B. Berini, A. Fouchet, N. Keller, and M. Godinho, J. Appl. Phys. [**111**]{}, 113923 (2012). C. Callender, D. P. Norton, R. Das, A. F. Hebard, and J. D. Budai, Appl. Phys. Lett. [**92**]{}, 012514 (2008). S. Chakraverty, T. Matsuda, N. Ogawa, H. Wadati, E. Ikenaga, M. Kawasaki, Y. Tokura, and H. Y. Hwang, Appl. Phys. Lett. [**103**]{}, 142416 (2013). Z. Li, T. Iitaka, and T. Tohyama, Phys. Rev. B [**86**]{}, 094422 (2012). G. Rahman, J. M. Morbec, R. Ferradas, V. M. Garcia-Suarez, and N. J. English, J. Mag. Mag. Mat. **401**, 1097 (2016). F. Hong-Jian, and L. Fa-Min, Chin. Phys. B [**17**]{}, 1874 (2008). S. Mori, J. Am. Ceram. Soc. [**49**]{}, 600 (1966). J-C. Grenier, A. Wattiaux, M. Pouchard, P. Hagenmuller, M. Parras, M. Vallet, J. Calbet, and M. A. Alario-Franco, J. Solid State Chem. [**80**]{}, 6 (1989). H. J. Van Hook, J. Phys. Chem. [**68**]{}, 3786 (1964). J. M. Gonzalez-Calbet, M. Parras, M. Vallet-Regi, and J. C. Grenier. J. Solid State Chem. [**86**]{}, 149 (1990).
S. Mori, J. Phys. Soc. Jpn. [**28**]{}, 44 (1970). G. Rahman, and S. Sarwar, phys. status solidi B (2015). M. Uludogan, T. Cagin, and W. A. Goddard. MRS Proceedings. Vol. 718. Cambridge University Press, 2002.
J-H Lee, M-Ae Oak, H. J. Choi, J. Y. Sonc, and H. M. Jang, J. Mater. Chem., **22** 1667 (2012).
J. F. Scott, Adv. Mater. [**22**]{}, 2106 (2010). W. Siemons, M. D. Biegalski, J. H. Nam, and H. M. Christen, Appl. Phys. Express [**4**]{}, 095801 (2011). J. P. Zhou, R. L. Yang, R. J. Xiao, X. M. Chen, and C. Y. Deng, Mater. Res. Bull. [**47**]{}, 3630 (2012). I. Levin, M. G. Tucker, H. Wu, V. Provenzano, C. L. Dennis, S. Karimi, T. Comyn, T. Stevenson, R. I. Smith, and I. M. Reaney, Chem. Mater. [**23**]{}, 2166 (2011). V. Kothai, A. Senyshyn, and R. Ranjan, J. Appl. Phys. [**113**]{}, 084102 (2013). P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, D. L. Chiarotti, M. Cococcioni, I. Dabo, A. D. Corso, S. de Dironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, R. M. Wentzcovitch, J. Phys. Condens. Matter [**21**]{}, 39 (2009). Y. P. Liu, S. H. Chen, J. C. Tung, Y. K. Wang, Solid State Commun. [**152**]{}, 968 (2012) S. M. Yusuf, Pramana-j. phys. [**63**]{}, 133 (2004). M. Mizumaki, H. Fujii, K. Yoshii, N. Hayashi, T. Saito, Y. Shimakawa, T. Uozumi, and M. Takano, phys. status solidi C (2015). M. Godinho, C. Cardoso, R. P Borges, T. P. Gasche, J. Appl. Phys. [**113**]{}, 083906 (2013).
System Vol/f.u $a$ $b$ $c$ $\alpha$ $\beta$ $\gamma$ $\bigtriangleup E_f$ $\bigtriangleup E $ space group
-------------- --------- ------ ------- ------- ---------- --------- ------------ ---------------------- --------------------- -------------
Cubic 388.87 7.30 7.30 7.30 90$^o$ 90$^o$ 90$^o$ -3.85 0.25 Pm3m
Tetragonal 389.55 7.28 7.28 7.35 90$^o$ 90$^o$ 90$^o$ -3.84 0.07 P4mm
Orthorhombic 380.77 7.20 10.25 10.29 90$^o$ 90$^o$ 90$^o$ -3.80 0.13 Amm2
Rhombohedral 392.93 7.32 7.32 7.32 90$^o$ 90$^o$ $<$ 90$^o$ -3.83 0.03 R3m
: The LSDA calculated total volume per formula unit (vol/f.u.), lattice parameters $(a, b, c)$, angles $(\alpha, \beta, \gamma)$, formation enthalpy ($\bigtriangleup$E$_f$), energy difference between the AFM and FM states ($\bigtriangleup E = E_{AFM}-E_{FM}$), and space groups of the cubic, tetragonal, orthorhombic and rhombohedral crystal structures. The volume (lattice parameter) is in units of a.u.$^3$ (a.u.). The $\bigtriangleup E_f$ and $\bigtriangleup E$ are in units of eV.
\[tab1\]
System Total Moment of System/f.u Fe O$_1$ O$_2$ O$_3$
-------------- ---------------------------- ------ ------- ------- -------
Cubic 3.46 2.83 0.20 0.20 0.20
Tetragonal 3.44 2.81 0.23 0.19 0.19
Orthorhombic 3.34 2.74 0.19 0.20 0.20
Rhombohedral 3.42 2.80 0.21 0.21 0.21
: The LSDA calculated Total Magnetic Moment of the system per f.u. and the local magnetic moment of Fe, O$_1$, O$_2$ and O$_3$. Total (local) moment is in units of $\mu_B$.
\[tab2\]
![(Color online) The LSDA calculated total energy(Ry) vs volume(a.u.$^3$) curves for cubic(a), tetragonal(b), orthorhombic(c) and rhombohedral(d) BFO. Filled (empty) symbols show FM(NM) states of BFO.[]{data-label="crys_vol"}](cub-bfo_vol.eps "fig:"){width="40.00000%"} ![(Color online) The LSDA calculated total energy(Ry) vs volume(a.u.$^3$) curves for cubic(a), tetragonal(b), orthorhombic(c) and rhombohedral(d) BFO. Filled (empty) symbols show FM(NM) states of BFO.[]{data-label="crys_vol"}](tet-bfo_vol.eps "fig:"){width="40.00000%"} ![(Color online) The LSDA calculated total energy(Ry) vs volume(a.u.$^3$) curves for cubic(a), tetragonal(b), orthorhombic(c) and rhombohedral(d) BFO. Filled (empty) symbols show FM(NM) states of BFO.[]{data-label="crys_vol"}](orth-bfo_vol.eps "fig:"){width="40.00000%"} ![(Color online) The LSDA calculated total energy(Ry) vs volume(a.u.$^3$) curves for cubic(a), tetragonal(b), orthorhombic(c) and rhombohedral(d) BFO. Filled (empty) symbols show FM(NM) states of BFO.[]{data-label="crys_vol"}](rhom-bfo_vol.eps "fig:"){width="40.00000%"}
![(Color online) The LSDA calculated total energy(Ry) vs volume(a.u.$^3$) curves in FM states of cubic(fm$_c$), tetragonal(fm$_t$), orthorhombic(fm$_o$) and rhombohedral(fm$_r$) BFO.[]{data-label="vol_fit"}](vol_fit.eps){width="40.00000%"}
![(Color online) The LSDA calculated total magnetic moment/fu(in units of $\mu_B$) vs volume/fu (a.u.$^3$) for cubic(circle), tetragonal(square), orthorhombic(up triangle) and rhombohedral(down triangle) BFO.[]{data-label="mag"}](mag.eps){width="40.00000%"}
![(Color online) The calculated total density of states(DOS) and projected density of states(PDOS) in cubic(up left), tetragonal(up right), orthorhombic(down left) and rhombohedral(down right) BFO. The left(right) pannel shows the result of LSDA(LSDA+$U$). The Fermi energy is set at zero eV.[]{data-label="dos"}](cub_dos.eps "fig:"){width="40.00000%"} ![(Color online) The calculated total density of states(DOS) and projected density of states(PDOS) in cubic(up left), tetragonal(up right), orthorhombic(down left) and rhombohedral(down right) BFO. The left(right) pannel shows the result of LSDA(LSDA+$U$). The Fermi energy is set at zero eV.[]{data-label="dos"}](tet_dos.eps "fig:"){width="40.00000%"} ![(Color online) The calculated total density of states(DOS) and projected density of states(PDOS) in cubic(up left), tetragonal(up right), orthorhombic(down left) and rhombohedral(down right) BFO. The left(right) pannel shows the result of LSDA(LSDA+$U$). The Fermi energy is set at zero eV.[]{data-label="dos"}](orth_dos.eps "fig:"){width="40.00000%"} ![(Color online) The calculated total density of states(DOS) and projected density of states(PDOS) in cubic(up left), tetragonal(up right), orthorhombic(down left) and rhombohedral(down right) BFO. The left(right) pannel shows the result of LSDA(LSDA+$U$). The Fermi energy is set at zero eV.[]{data-label="dos"}](rhom_dos.eps "fig:"){width="40.00000%"}
|
---
abstract: '[Quantum statistics, quantum graphs, anyons]{} Quantum graphs are commonly used as models of complex quantum systems, for example molecules, networks of wires, and states of condensed matter. We consider quantum statistics for indistinguishable spinless particles on a graph, concentrating on the simplest case of abelian statistics for two particles. In spite of the fact that graphs are locally one-dimensional, anyon statistics emerge in a generalized form. A given graph may support a family of independent anyon phases associated with topologically inequivalent exchange processes. In addition, for sufficiently complex graphs, there appear new discrete-valued phases. Our analysis is simplified by considering combinatorial rather than metric graphs – equivalently, a many-particle tight-binding model. The results demonstrate that graphs provide an arena in which to study new manifestations of quantum statistics. Possible applications include topological quantum computing, topological insulators, the fractional quantum Hall effect, superconductivity and molecular physics.'
author:
- 'J.M. Harrison$^1$, J.P. Keating$^2$ & J.M. Robbins$^2$'
title: Quantum statistics on graphs
---
Introduction {#sec: intro}
============
The quantum mechanical properties of a many-particle system depend profoundly on whether the particles are distinguishable or not, even if, in the former case, the distinguishing properties have little effect on the underlying classical mechanics. For this reason, the description of indistinguishable particles in quantum mechanics – quantum statistics – continues to be revisited, both to gain insight into its foundations (Duck & Sudarshan 1997, Berry 2008) as well as to predict and understand new phenomena, including the fractional quantum Hall effect (Wilczek 1990, Jain 2007), superconductivity (Wilczek 1990), topological quantum computing (Nayak [*et al*]{}. 2008) and particle-like states of nonlinear field theories, where standard quantization procedures are not straightforward to apply (Finkelstein & Rubenstein 1968, Manton 2008).
The standard treatment of quantum statistics takes as its starting point the quantum mechanical description of a single particle. The $n$-particle Hilbert space, ${{\cal H}}_n$, is then taken to be the tensor product of $n$ copies of the one-particle Hilbert space, ${{\cal H}}$. The fact that the particles are indistinguishable means that physically observable operators commute with permutations of the particle labels. This implies that ${{\cal H}}_n$ may be decomposed into physically distinct components characterized by different permutation symmetries. The symmetrization postulate restricts the physically realizable spaces to either the symmetric component, which obeys Bose statistics, or the antisymmetric component, which obeys Fermi statistics; components characterized by more complicated behaviour under permutations – parastatistics – are thus excluded. Finally, the spin-statistics relation determines which of the two allowed alternatives applies – Bose statistics for integer spin, and Fermi statistics for half-odd-integer spin.
There is another approach to quantum statistics which is based on the topology of the classical configuration space. It takes as its starting point the configuration space, $M$, of a single particle. The $n$-particle configuration space, $C_n(M)$, is then taken to be the cartesian product of $n$ copies of the one-particle configuration space, with, crucially, coincident configurations excluded (no two particles can be in the same place) and permuted configurations identified. So defined, $C_n(M)$ may have nontrivial topology. In particular, curves along which the particles are exchanged (some particles returning to where they started, others ending up where another started) are regarded as closed curves (as permuted configurations are identified), and these exchange curves cannot be continuously contracted to a single point. If such curves exist, then $C_n(M)$ is multiply connected.
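Although the paper develops this construction for continuous configuration spaces, its combinatorial counterpart is easy to enumerate explicitly. The sketch below, for a small example graph of our own choosing (not one analysed in the paper), lists the two-particle configurations of a combinatorial graph (coincidences excluded, permutations identified) together with the allowed single-particle hops of the corresponding tight-binding model.

```python
from itertools import combinations

# A small combinatorial graph (a triangle with a pendant edge), given as an adjacency list.
# The choice of graph is an assumption for illustration; any finite graph works the same way.
adjacency = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}

# Two-particle configuration space C_2: unordered pairs of *distinct* vertices
# (coincident configurations excluded, permuted configurations identified).
configs = [frozenset(p) for p in combinations(adjacency, 2)]

def moves(config):
    """Allowed elementary moves: one particle hops along an edge while the other
    stays put, and the two particles never occupy the same vertex."""
    a, b = tuple(config)
    for x, fixed in ((a, b), (b, a)):
        for y in adjacency[x]:
            if y != fixed:
                yield frozenset({y, fixed})

edges_C2 = {frozenset({c, d}) for c in configs for d in moves(c)}
print(len(configs), "two-particle configurations,", len(edges_C2), "allowed hops")
```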
Quantum mechanics on a multiply connected configuration space ${{\cal C}}$, whether ${{\cal C}}$ describes $n$ particles or just one, allows for an additional freedom in the quantum description, namely the choice of a representation of the fundamental group, $\pi_1({{\cal C}})$, of ${{\cal C}}$ . The (single-particle) Aharonov-Bohm effect provides a familiar example: the configuration space ${{\cal C}}$ for a particle with charge $q$ in the presence of an impenetrable solenoid, say along the $z$-axis, may be taken to be ${{\mathbb R}}^3$ with the solenoid – an infinite cylinder – removed. ${{\cal C}}$ is multiply connected; closed curves may be classified according to the number of times they wind around the excluded cylinder. To the canonical momentum operator, ${{\bf p}}= -i\nabla$, we can add a vector potential, ${{\bf A}}$, whose curl vanishes away from the cylinder, so that the commutation relations are preserved, as are the classical equations of motion – there are no physically accessible magnetic fields. However, the line integral of ${{\bf A}}$ around the cylinder need not vanish, and may be realized by a magnetic flux $\phi$ inside the cylinder, measured in units of the flux quantum $qh/c$. The flux determines a representation of $\pi_1({{\cal C}})$ that assigns to a closed curve with $m$ windings the phase factor $\exp(im\phi)$. The value of the flux modulo $2\pi$ – equivalently, the choice of a representation of $\pi_1({{\cal C}})$ – affects the quantum mechanical properties, although observable consequences vanish in the classical limit.
More generally, on a multiply-connected configuration space ${{\cal C}}$, the momentum is described by a covariant derivative which, in the absence of external fields, has zero curvature (so that the classical mechanics is unaffected) but which may engender nontrivial holonomy. The holonomy corresponds to a representation of the fundamental group $\pi_1({{\cal C}})$. The representation could be one-dimensional and therefore abelian, assigning (commuting) phase factors to noncontractible closed curves, or higher-dimensional, assigning $d\times d$ unitary matrices (in general, non-commuting) to noncontractible closed curves. The unitary matrices act on the wavefunctions, which are taken to have $d$ components (more generally, wavefunctions are taken to be sections of a $d$-dimensional complex vector bundle).
Taking ${{\cal C}}$ to be the configuration space $C_n(M)$ of $n$ indistinguishable particles, Leinaas and Myrheim (1977) showed that representations of the fundamental group $\pi_1(C_n(M))$ determine the possible realizations of quantum statistics. Similar conclusions were reached by Laidlaw and de Witt (1971) using path integrals, an approach that has been applied in a variety of settings, including cases where canonical quantization is difficult to carry out (Finkelstein and Rubenstein 1968, Balachandran [*et al*]{}. 1993, Manton 2008).
If $\pi_1(C_n(M))$ coincides with the symmetric group, $S_n$, then the topological approach to quantum statistics yields the same possibilities as does the standard one, namely Bose or Fermi statistics, or, if one gives up the symmetrization postulate, parastatistics, which are associated with higher-dimensional representations of $S_n$. This is the case for $M = {{\mathbb R}}^d$ with $d > 2$.
As is well known, the situation is different for $M = {{\mathbb R}}^2$, i.e. for particles in the plane. The fundamental group $\pi_1(C_n({{\mathbb R}}^2))$ is the braid group $B_n({{\mathbb R}}^2)$, which has infinitely many elements. The characterization of unitary representations of the braid group is not complete (see, e.g., Birman & Brendle 2004), but its one-dimensional (abelian) representations include a new type of statistics – anyon statistics (Wilczek 1990). Closed curves on which two of the particles wind around each other $m/2$ times (odd values of $m$ correspond to an exchange of positions) are assigned a phase factor $\exp (im \alpha)$, independently of any phases associated with external gauge fields. The special values $\alpha = 0$ and $\alpha = \pi$ correspond to Bose and Fermi statistics respectively.
Anyon statistics have been found to provide deep insight into phenomena involving strongly interacting electrons, for example the fractional quantum Hall effect (Laughlin 1983). In the composite fermion theory (Jain 2007), for example, the collective excitations of the many-electron system are approximately described by an effective Hamiltonian in which they obey anyon statistics with phase $\alpha$ related to the fractional Hall conductance.
The situation is different again for $M = {{\mathbb R}}$, i.e. particles on the line. Here the fundamental group $\pi_1(C_n({{\mathbb R}}))$ is trivial; there are no noncontractible closed curves, as it is not possible to exchange particles without one passing through another. Thus, the topological approach predicts only Bose statistics (but see Leinaas and Myrheim (1977) for yet another approach to quantum statistics in one dimension).
One motivation for this paper is to study quantum statistics for particles on networks of one-dimensional wires, or metric graphs. A metric graph $\Gamma$ is a collection of nodes, or vertices, connected by edges, or one-dimensional intervals, of specified lengths. Single-particle quantum mechanics on metric graphs has been extensively studied. The model was originally introduced to describe free electrons on chemical bonds (Pauling 1936). Some current applications include superconductivity (de Gennes 1981), quantum chaos (Kottos and Smilansky 1997, Keating 2008), and Anderson localization (Aizenman [*et al.*]{} 2006). The particle is described by a set of wavefunctions, $\Psi_\epsilon(x_\epsilon)$, one for each edge of $\Gamma$. Each wavefunction is acted on by a Hamiltonian, $H_\epsilon$. For example, $H_\epsilon$ could be a Schrödinger operator with a gauge potential, i.e. of the form ${{\textstyle \frac{1}{2}}}(-id/dx_\epsilon - A_\epsilon(x_\epsilon))^2 + V_\epsilon(x_\epsilon)$.
The choice of boundary conditions at the vertices of the quantized graph is a subtle point in the theory. For free particles, i.e. $H_\epsilon
= -{{\textstyle \frac{1}{2}}}d^2/dx_\epsilon^2$, a simple choice motivated by physical considerations is to require that i) wavefunctions are continuous across vertices and ii) the sum of the outgoing derivatives at each vertex should vanish (Neumann-like boundary conditions). A complete characterization of boundary conditions that render the free-particle Hamiltonian self-adjoint has been given in terms of the Lagrangian planes of a certain complex symplectic boundary form (Kostrykin and Schrader 1999, Kuchment 2004). For more general operators, the classification of self-adjoint boundary conditions remains an open problem (see Bolte and Harrison (2003) for a corresponding classification scheme for the Dirac operator). For a given graph model, one can in principle derive physically appropriate boundary conditions from the limiting behaviour of a more realistic two- or three-dimensional model, but the analysis can be quite involved.
As a metric graph is locally one-dimensional, one might imagine that quantum statistics on a metric graph would be limited to Bose or Fermi statistics. This turns out not to be the case, as was previously demonstrated in particular examples by Balachandran & Ercolessi (1992) (discussed in Section \[sec: examples\]\[subsec: lasso and figure-of-eight\]). However, a systematic treatment of quantum statistics for general metric graphs has not been given. One difficulty is that the quantization of many-particle metric graphs poses analytical challenges beyond those encountered in the one-particle theory. Besides boundary conditions for a single particle at a vertex, one has also to specify boundary conditions for coincidences of two or more particles on edges and at vertices that render the many-particle Hamiltonian self-adjoint.
Here we circumvent these difficulties while preserving the essential topological features by considering the simpler problem of indistinguishable particles on a [*combinatorial graph*]{} $G$. A combinatorial graph consists of a set of vertices together with a specification of pairs of vertices that are connected by edges. The edges themselves do not contain any points, so the configuration space for the particle consists of the vertices alone, and hence is zero- rather than one-dimensional. Quantum mechanics on a combinatorial graph is a tight-binding model on the vertices. Thus, quantum statistics on combinatorial graphs is an interesting problem in its own right.
It turns out that the $n$-particle configuration space, $C_n(G)$, can also be regarded as a combinatorial graph, and hence, in contrast to a many-particle metric graph, is easily quantized. Following the topological approach, we can characterize the possible quantum statistics in terms of representations of the fundamental group of $C_n(G)$. As $C_n(G)$ is a discrete space, it is not immediately apparent what is meant by its fundamental group, but one can define a combinatorial version that coincides with its metric-graph counterpart. For simplicity, we consider here two particles on a combinatorial graph, and restrict our attention to abelian (one-dimensional) representations of the fundamental group. It turns out that a given graph may support a family of independent anyon phases associated with topologically inequivalent exchange processes. In addition, for sufficiently complex graphs, there appear new discrete-valued phases. (Mathematically, these discrete phases correspond to representations of the torsion component of the abelianized fundamental group, or, equivalently, the first homology group of $C_n(G)$.) The results extend straightforwardly to abelian statistics for $n > 2$ particles. The extension to nonabelian statistics will be discussed in a forthcoming paper.
The paper is organized as follows. In Section \[sec: one-particle graph\] we present quantum mechanics on a combinatorial graph as a tight-binding model. We introduce gauge potentials, the analogue of vector potentials, which assign phase factors to the edges of the graph. These phase factors are then incorporated into the off-diagonal matrix elements (transition amplitudes) of the Hamiltonian matrix. Gauge potentials are determined, up to a choice of gauge, by the accumulated phases along, or fluxes through, the cycles of the graph.
In Section \[sec: 2 particles\] we consider two indistinguishable particles on a combinatorial graph. The two-particle configuration space can itself be regarded as a combinatorial graph (usually larger than the original graph), which can be quantized following the general prescription of Section \[sec: one-particle graph\]. We characterize, up to a choice of gauge, [*topological gauge potentials*]{} on the two-particle graph, which determine the quantum statistics. Topological gauge potentials have the property that the only cycles with nonzero flux correspond to nontrivial closed curves in the metric setting. They are parameterized by a set of [*free statistics phases*]{}, which range between $0$ and $2\pi$, as well as, in some cases, a set of [*discrete statistics phases*]{} whose values are constrained to certain rational multiples of $2\pi$. A certain subset of the free statistics phases may be attributed to Aharonov-Bohm flux lines threading the one-particle graph; the remainder describe many-body effects (Section \[sec: 2 particles\]\[subsec: AB potentials\]). Bose and Fermi statistics, as understood in the conventional approach, may be recovered from particular choices of topological gauge potential (Section \[sec: 2 particles\]\[subsec: Bose and Fermi\]). Our results are illustrated by a number of examples in Section \[sec: examples\], and in the concluding remarks we consider perspectives for further investigations and possible applications.
Quantum mechanics on combinatorial graphs {#sec: one-particle graph}
=========================================
Combinatorial graphs {#subsec: 1-particle comb graph}
--------------------
A combinatorial graph $G$ consists of a set $V=\{1,\ldots,v\}$ of sites, or vertices, labeled $1$ through $v$, which may be connected by bonds, or edges. It is convenient to describe the connectivity of the graph, i.e. its edges, by a $v \times v$ [*adjacency matrix*]{}, $A$, whose $(j,k)^{th}$ entry gives the number of edges from vertex $j$ to vertex $k$. We assume that $G$ is [*undirected*]{}, i.e. $A_{jk} = A_{kj}$. We write $j \sim k$ to indicate that $j$ and $k$ are connected by an edge. The number of edges in the graph, $e$, is then given by $e = {{\textstyle \frac{1}{2}}}\sum_{jk} A_{jk}$. Edges will be labeled by Greek indices which take values between $1$ and $e$. Given an edge $\epsilon$, we let $\epsilon_<$ and $\epsilon_>$ denote its vertices of lower and higher index respectively. Sometimes it will be useful to assign an orientation to an edge; our convention will be that positive orientation corresponds to going from $\epsilon_<$ to $\epsilon_>$.
We also assume that $G$ is [*simple*]{}, so that there is at most one edge between any two vertices, and that no vertex is connected to itself, i.e. $A_{jk}$ is equal to zero or one, and $A_{jj}$ is equal to zero. Any graph can be made simple by introducing additional vertices on its multiply-connecting and self-connecting edges. The valency, or [*degree*]{}, $v_j$, of the vertex $j$ is the number of edges connected to $j$, i.e. $v_j = \sum_{k} A_{jk}$. Finally, we assume that $G$ is [*connected*]{}, i.e. every pair of vertices is connected by some sequence of edges. In terms of the adjacency matrix, this means that, for any given vertices $j$ and $k$, $(A^n)_{jk} \neq 0$ for some $n$.
While our focus is on combinatorial graphs, some aspects of our treatment may be motivated by considering [*metric graphs*]{} associated to a combinatorial graph. Given a combinatorial graph, $G$, we can associate to it a metric graph, $\Gamma$, by assigning a length $L_\epsilon>0$ to each edge $\epsilon$ of $G$. On $\Gamma$, $\epsilon$ is regarded as an interval $[0,L_\epsilon]$, with $0$ identified with $\epsilon_<$ and $L_\epsilon$ identified with $\epsilon_>$. One can define continuous curves on $\Gamma$, and a metric is obtained by taking the distance between two points to be the minimum length of continuous curves joining the points. One can consider closed curves based at a vertex $*$ and define the fundamental group $\pi_1(\Gamma,*)$ in the usual way.
Next we recall some basic facts about the topology of combinatorial graphs (see, e.g., Hatcher 2001), which will play a role in their quantization. A [*path*]{} $p = (j_0, j_1, \ldots, j_n)$ on a combinatorial graph $G$ is a sequence of vertices in which consecutive vertices are connected by edges, i.e. $j_r \sim j_{r+1}$ (thus, consecutive vertices must be distinct). The length of a path is the number of edges along it. A single vertex, $j$, may be regarded as a path of zero length, and an edge, $\epsilon$, regarded as the path $(\epsilon_<,\epsilon_>)$. Given a path $p = (j_0, j_1, \ldots, j_n)$, we define the inverse of $p$, denoted $p^{-1}$, to be the path $(j_n, j_{n-1}, \ldots, j_0)$. If $p = (j_0,\ldots,j_n)$ and $q = (k_0,\ldots,k_r)$ are two paths such that the last vertex of $p$ coincides with the first vertex of $q$, we define the concatenation, or product, of $p$ and $q$, denoted $pq$, to be the path $(j_0, \ldots, j_n,
k_1,\ldots, k_r)$.
The path $qpp^{-1}r$ describes the product of paths $q$ and $r$ with an intervening retracing of the path $p$. We want to regard paths that differ by retracings of intermediate components as being the same. Thus, we introduce the equivalence relation $ q p p^{-1} r \equiv qr$. Paths that are equivalent to a zero-length path, i.e. to their initial vertex, are called [*self-retracing*]{}.
A [*cycle*]{} is a path $c = (j_0, j_1, \ldots, j_n)$ that is closed, i.e. $j_n = j_0$. A cycle is [*primitive*]{} if its vertices, apart from the first and last, are all distinct. The set of cycles on a combinatorial graph can be characterized with the aid of a [*spanning tree*]{}, which we define next. A [*tree*]{} is a combinatorial graph whose only cycles are self-retracing. A [*subgraph*]{} of a combinatorial graph $G$ is a combinatorial graph whose vertices and edges are subsets of the vertices and edges of $G$. A spanning tree, $T$, of $G$ is a subgraph that is a connected tree containing all the vertices of $G$. Thus, given any two vertices of $G$, there is a path on $T$, unique up to retracings, that joins them. $G$ has at least one spanning tree, and, unless $G$ is itself a tree, more than one. (An iterative algorithm for constructing a spanning tree is to remove an edge from a primitive cycle of $G$ until no primitive cycles remain). Clearly, any spanning tree of $G$ has $v-1$ edges. We let $f$ denote the number of edges of $G$ not in a spanning tree, so $f$ is one minus the Euler characteristic of $G$, $$\label{eq: f for G}
f = e - (v-1).$$
Let $T$ denote a spanning tree of $G$ and $*$ a vertex of $G$. Let us label the edges of $G$ that are not in $T$ by an index $\phi$, $1 \le \phi \le f$. Let $c_\phi(*)$ denote the cycle obtained by proceeding along the (unique) path with no retracings from $*$ to $\phi_<$ on $T$, then from $\phi_<$ to $\phi_>$ along $\phi$, and finally from $\phi_>$ back to $*$ along the (unique) path with no retracings on $T$. We call $c_\phi(*)$ a [*fundamental cycle*]{}. The definition depends on the choice of $T$ and $*$, but the dependence on $*$ is easily accounted for, as $$\label{eq: c(**_}
c_\phi(**) \equiv p c_\phi(*) p^{-1},$$ where $p$ is the (unique) path on $T$ from $**$ to $*$. An arbitrary cycle $c$ beginning and ending at $*$ can be expressed, up to retracings, as a product of fundamental cycles and their inverses, i.e. $$\label{eq: arbitrary cycle}
c \equiv c^{s_1}_{\phi_1}(*) \cdots c^{s_t}_{\phi_t}(*),$$ where $s_j = \pm 1$. (More formally, the set of cycles based at $*$ modulo retracings forms a group, the combinatorial fundamental group $\pi_1^C(G,*)$. $\pi_1^C(G,*)$ is a free group on $f$ generators $c_1(*), \ldots, c_f(*)$ and is isomorphic to the fundamental group, $\pi_1(\Gamma,*)$, of a metric graph, $\Gamma$, associated to $G$.)
Quantization {#subsec: 1-particle quantization}
------------
We regard the set of vertices $V$ of a combinatorial graph $G$ as a configuration space. $V$ might be the configuration space of a single particle, but we shall not restrict ourselves to this point of view. Indeed, in Section \[sec: 2 particles\] we will consider combinatorial graphs whose vertices represent configurations of two particles. We take quantum mechanics on $G$ to be given by a tight-binding model on the set of vertices, in which short-time transitions are allowed only between vertices connected by edges. The Hilbert space is ${{\mathbb C}}^v$, with basis vectors ${|j\rangle}$, $1 \le j \le v$, describing states in which the system is localized at one of the vertices. Dynamics is given by the Schrödinger equation, $
i\dot{{|\psi\rangle}} = H{|\psi\rangle}$, where the Hamiltonian $H$ is a $v \times v$ hermitian matrix with nonzero off-diagonal entries only between connected vertices, i.e. $$\label{eq: constraint on H}
H_{jk} = 0 \text{ if } j \neq k \text{ and } j\nsim k.$$ One example of a Hamiltonian is the discrete kinetic energy, $H = D - A$, i.e. minus the [*combinatorial Laplacian*]{} $A - D$, where $D$ is given by $D_{jk} = v_j \delta_{jk}$.
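
As a concrete illustration (a minimal numerical sketch of our own, not part of the original treatment; the 4-vertex cycle graph below is a hypothetical example), the tight-binding prescription can be set up directly:

```python
import numpy as np

# Hypothetical example: the 4-vertex cycle graph, specified by its adjacency matrix.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix, D_jj = v_j
H = D - A                    # discrete kinetic energy on the graph

# H is hermitian, and its off-diagonal entries vanish between non-adjacent vertices,
# as required of a tight-binding Hamiltonian.
assert np.allclose(H, H.conj().T)
assert all(H[j, k] == 0 for j in range(4) for k in range(4) if j != k and A[j, k] == 0)

print(np.linalg.eigvalsh(H))   # approximately [0, 2, 2, 4] for the 4-cycle
```
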
Gauge potentials {#subsec: gauge potentials}
----------------
A [*gauge potential*]{} $\Omega$ on $G$ is a $v \times v$ real antisymmetric matrix such that $\Omega_{jk}$ vanishes if $j \nsim k$. Given a path $p = (j_0, \ldots, j_n)$ on $G$, we define $$\label{eq: flux along p}
\Omega(p) = \sum_{r = 0}^{n-1} \Omega_{j_r j_{r+1}}.$$ Clearly, if $p$ and $p'$ differ by retracings, then $$\label{eq: Omega and retracings}
\Omega(p) = \Omega(p'),$$ and if $p$ and $q$ can be concatenated, then $$\label{eq: Omega through product of paths}
\Omega(p q) = \Omega(p) + \Omega(q). $$ For a cycle $c$, we refer to $\Omega(c)$ as the [*flux*]{} of $\Omega$ through $c$. From and , it follows that the flux through a fundamental cycle $c_\phi(*)$ is independent of $*$; hence we write $\Omega(c_\phi)$, omitting $*$ from the notation. For an arbitrary cycle $c$, it follows from that $$\label{eq: Omega for general c}
\Omega(c) = \sum_{j = 1}^r (-1)^{s_j} \Omega(c_{\phi_j}),$$ where $\phi_j$ and $s_j$ are given by . Thus, all fluxes are determined by the fluxes through a set of fundamental cycles.
We incorporate a gauge potential $\Omega$ into a Hamiltonian $H$ by multiplying the transition amplitudes from $j$ to $k$ by the phase factors $\exp\left(i
\Omega_{jk}\right)$. The new Hamiltonian is given by $$\label{eq: incorporate gauge potential}
H^\Omega_{jk} = H_{jk} \exp\left(i \Omega_{jk}\right)$$ Clearly, $H^\Omega$ is hermitian and satisfies (\[eq: constraint on H\]) (see Oren [*et al.*]{} 2009 for a similar construction).
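
The prescription amounts to an elementwise phase twist of the hopping amplitudes. A minimal sketch (ours; the function name is an assumption, and `H` and `Omega` are assumed to be numpy arrays with the stated properties):

```python
import numpy as np

def incorporate_gauge(H, Omega):
    """Return H^Omega with entries H_jk * exp(i Omega_jk).

    H     : hermitian tight-binding Hamiltonian on the graph
    Omega : real antisymmetric gauge potential, supported on the edges
    """
    H_Omega = H * np.exp(1j * Omega)              # elementwise phase twist
    # Antisymmetry of Omega guarantees that H^Omega is again hermitian.
    assert np.allclose(H_Omega, H_Omega.conj().T)
    return H_Omega
```
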
We can motivate the prescription by making the following analogy with vector potentials on a metric graph. We note that the non-diagonal part of the Hamiltonian can be expressed as a real linear combination of the $2e$ transition Hamiltonians $$\label{eq: S,A defined}
S_{(j,k)} = {|j\rangle} {\langle k|} + {|k\rangle} {\langle j|}, \quad
A_{(j,k)} = i\left({|j\rangle} {\langle k|} - {|k\rangle} {\langle j|}\right),$$ where $j \sim k$, and, for definiteness, we take $j < k$. Let $\epsilon$ denote the edge between $j$ and $k$, and consider a metric graph where $\epsilon$ corresponds to the interval $[0,L_\epsilon]$. Let $\Psi_\epsilon(x_\epsilon)$ denote the wavefunction along $\epsilon$ on the quantized metric graph, and let us formally regard ${|j\rangle}$ and ${|k\rangle}$ as vertex states localized at $x_\epsilon = 0$ and $x_\epsilon = L_\epsilon$ respectively, so that ${\langle j| \Psi_\epsilon \rangle} =
\Psi_\epsilon(0)$ and ${\langle k| \Psi_\epsilon \rangle} =
\Psi_\epsilon(L_\epsilon)$. Then, formally, ${|k\rangle} = T_\epsilon{|j\rangle}$, where $T_\epsilon= \exp(-i p_\epsilon L_\epsilon)$ is the unitary translation by a distance $L_\epsilon$ along $\epsilon$ generated by the momentum operator $p_\epsilon = -i d/dx_\epsilon$. Thus we can rewrite as $$\label{eq: S_jk, A_jk in terms of projector}
S_{(j,k)} = P_j T^\dag_\epsilon + T_\epsilon P_j ,
\quad
A_{(j,k)} = i \left(P_j T^\dag_\epsilon - T_\epsilon P_j\right),$$ where $P_j$ denotes the projector ${|j\rangle}{\langle j|}$. Now suppose we introduce a vector potential on the edge $\epsilon$, replacing $p_\epsilon$ by $p_\epsilon - A_\epsilon(x_\epsilon)$, and make the substitution $$\label{eq: translation with vector potential}
T_\epsilon \rightarrow \exp \left(-i \int_0^{L_\epsilon} (p_\epsilon -
A_\epsilon(x_\epsilon))\, dx_\epsilon\right) = e^{i\Omega_{jk}} T_\epsilon, $$ where $\Omega_{jk}$ is the integral of $A_\epsilon$ along $\epsilon$. Making this same substitution in , we obtain expressions for the transformed transition Hamiltonians, $S_{(j,k)}^{\Omega}$ and $A_{(j,k)}^{\Omega}$, that are equivalent to the prescription .
A gauge potential is [*trivial*]{} if it is of the form $$\label{eq: trivial gauge potential}
\Omega_{jk} = \begin{cases}
\theta_k - \theta_j + 2\pi M_{jk}, & j \sim k,\\
0,& \text{otherwise},
\end{cases}$$ where ${{\boldsymbol \theta}}$ is a $v$-tuple of phases and $M$ an (antisymmetric) integer matrix. For a trivial gauge potential, $H^\Omega$ is given by $H^\Omega = U H U^\dag$, where $U$ is the diagonal unitary matrix $U_{jk} = \exp(i \theta_j)
\delta_{jk}$ that generates the gauge transformation ${|j\rangle} \mapsto \exp(i \theta_j)
{|j\rangle}$. The terminology is in analogy with quantum mechanics on ${{\mathbb R}}^3$, where a vector potential ${{\bf A}}({{\bf r}})$ is trivial if it is a gradient ${{\boldsymbol\nabla}}\theta$, in which case it is induced by the gauge transformation $\Psi({{\bf r}}) \mapsto e^{i\theta({{\bf r}})} \Psi({{\bf r}})$.
In analogy with the fact that a trivial vector potential has vanishing curl, we have the following characterization of trivial gauge potentials on $G$ (we omit the argument, which is not difficult): $$\label{eq: used to be prop}
\text{$\Omega$ is trivial if and only if $\Omega(c)$ is an integer
multiple of 2$\pi$ for every cycle $c$.}$$ It follows from and that a gauge potential $\Omega$ is determined up to a gauge transformation by its fluxes through a set of fundamental cycles. Indeed, given a set of fluxes $\omega_\phi$, we can construct a gauge potential $\Omega$ for which $\Omega(c_{\phi}) =
\omega_\phi$, as follows: if $j$ and $k$ are vertices of an edge in $T$, we take $$\label{eq: Omega_jk on tree}
\Omega_{jk} = \Omega_{kj} = 0.$$ For an edge $\phi$ not in $T$ with vertices $j = \phi_<$ and $k = \phi_>$, we take $$\label{eq: Omega given omega}
\Omega_{jk} = -\Omega_{kj} = \omega_{\phi}.$$
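
To illustrate this construction, the sketch below (our own; it assumes a connected simple graph and uses a breadth-first spanning tree, and the function names are ours) assigns zero phase to the tree edges and the prescribed flux $\omega_\phi$ to each non-tree edge.

```python
import numpy as np
from collections import deque

def bfs_spanning_tree(A):
    """Edges (j, k), j < k, of a breadth-first spanning tree of a connected graph."""
    v = A.shape[0]
    seen, tree, queue = {0}, set(), deque([0])
    while queue:
        j = queue.popleft()
        for k in range(v):
            if A[j, k] and k not in seen:
                seen.add(k)
                tree.add((min(j, k), max(j, k)))
                queue.append(k)
    return tree

def gauge_potential_from_fluxes(A, fluxes):
    """Gauge potential Omega whose flux through the fundamental cycle of each
    non-tree edge (j, k), j < k, is the value prescribed in `fluxes`."""
    v = A.shape[0]
    tree = bfs_spanning_tree(A)
    Omega = np.zeros((v, v))
    for (j, k), omega in fluxes.items():
        assert A[j, k] and (j, k) not in tree, "fluxes sit on non-tree edges"
        Omega[j, k], Omega[k, j] = omega, -omega    # tree edges keep zero phase
    return Omega
```

For the 4-cycle used in the earlier sketch, the breadth-first tree rooted at vertex 0 leaves the single non-tree edge (2, 3), so `gauge_potential_from_fluxes(A, {(2, 3): 0.7})` threads a flux of 0.7 through the one independent cycle.
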
Two particles {#sec: 2 particles}
==============
Quantization {#subsec: 2 particle quantization}
------------
Given a one-particle configuration space, $X$, there is a standard construction for the configuration space, $C_2(X)$, of two indistinguishable particles on $X$ (the construction generalizes straightforwardly to more than two particles). $C_2(X)$ consists of unordered pairs of distinct points of $X$, i.e. $$\label{eq: two-particle configuration space}
C_2(X) = \{X \times X - \Delta_2(X)\}/S_2,$$ where $\Delta_2(X) = \{(x,x)\}$ denotes the coincident two-particle configurations, which are excluded, and $S_2$ denotes the symmetric group, whose single nontrivial element – exchange – acts on $X\times X$ according to $(x,y) \mapsto (y,x)$.
Given a combinatorial graph $G$, we shall now regard the set of its vertices, $V
= \{1,\ldots, v\}$, as a one-particle configuration space. Hence, we take $G_2 := C_2(V)$ to be the configuration space for two indistinguishable particles on $G$. Depending on context, we denote configurations in $G_2$ either by unordered pairs of vertices $\{j,k\}$, or by ordered pairs $(j,k)$ with $j < k$.
We may regard $G_2$ as a new combinatorial graph. Nodes (or $G_2$-vertices) $(j,l)$ and $(k,m)$ are taken to be connected by an edge if they have one $G$-vertex in common while their other $G$-vertices are connected by an edge of $G$. Equivalently, moving along an edge of $G_2$ corresponds to keeping one particle fixed at a vertex of $G$ while moving the other along an edge of $G$. The adjacency matrix of $G_2$, denoted $A_{2}$, is given by $$\label{eq: A^(2)}
A_{2; (j,l),(k,m)} = \delta_{jk} A_{lm} + \delta_{jm} A_{lk} +
A_{jk} \delta_{lm} + A_{jm} \delta_{lk},$$ where $A$ is the adjacency matrix of $G$. (To avoid awkward notation, we will not relabel the vertices of $G_2$ by a single index.) The numbers of vertices and edges of $G_2$, denoted $v_2$ and $e_2$ respectively, are given by $$\label{eq: v_2 and e_2}
v_2 = v(v-1)/2, \quad e_2 = e(v-2).$$
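
The construction of $G_2$ is easily mechanized. The sketch below (ours; the function name and the 4-vertex sample graph are hypothetical choices) builds the adjacency matrix of $G_2$ directly from the rule just stated and checks the vertex and edge counts:

```python
import numpy as np
from itertools import combinations

def two_particle_graph(A):
    """Adjacency matrix of G_2 for a simple graph with adjacency matrix A,
    together with the list of unordered pairs labelling its vertices."""
    v = A.shape[0]
    pairs = list(combinations(range(v), 2))        # vertices of G_2: {j, l}, j < l
    idx = {p: n for n, p in enumerate(pairs)}
    A2 = np.zeros((len(pairs), len(pairs)), dtype=int)
    for P, Q in combinations(pairs, 2):
        common = set(P) & set(Q)
        if len(common) == 1:                        # one particle stays put...
            a, = set(P) - common
            b, = set(Q) - common
            if A[a, b]:                             # ...the other hops along an edge of G
                A2[idx[P], idx[Q]] = A2[idx[Q], idx[P]] = 1
    return A2, pairs

# Hypothetical example: a 4-vertex graph with 5 edges.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
A2, pairs = two_particle_graph(A)
v, e = A.shape[0], A.sum() // 2
assert len(pairs) == v * (v - 1) // 2               # v_2 = v(v-1)/2
assert A2.sum() // 2 == e * (v - 2)                 # e_2 = e(v-2)
```
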
We define quantum mechanics on $G_2$ following the prescription of Section \[sec: one-particle graph\]\[subsec: 1-particle quantization\]. The Hilbert space is ${{\mathbb C}}^{v_2}$ with basis vectors ${|jl\rangle}$, $j < l$, representing states where one particle is at site $j$ and the other at site $l$. Two-particle Hamiltonians, denoted $H_2$, are $v_2$-dimensional hermitian matrices $H_{2;jl,km}$, where $j < l$ and $k < m$, whose off-diagonal elements, i.e., elements for which $(j,l) \neq (k,m)$, vanish whenever $A_{2; (j,l),(k,m)}$ vanishes. Thus, short-time transitions generated by $H_2$ involve one particle making an allowed transition on $G$ while the other particle remains fixed.
Given a one-particle Hamiltonian $H$ on $G$, we can construct a two-particle Hamiltonian $H_2^\sigma$ on $G_2$ according to $$\label{eq: H^2}
{\langle jl|} H_2^\sigma{|km\rangle} = \delta_{jk} H_{lm}+
\sigma \delta_{jm} H_{lk} + \sigma \delta_{lk} H_{jm} +
\delta_{lm} H_{jk}, \quad j < l, k < m,$$ where $\sigma = \pm 1$ (one can check that $H_2^\sigma$ satisfies (\[eq: constraint on H\])). As discussed in Section \[sec: 2 particles\]\[subsec: Bose and Fermi\] below, $\sigma = -1$ describes noninteracting fermions, while $\sigma = +1$ describes hard-core (and therefore interacting) bosons which are prevented from occupying the same site. A simple illustration (two free particles on a linear graph) is given in Section \[sec: examples\]\[subsec: linear\].
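
The formula for $H_2^\sigma$ translates directly into code. The following sketch (ours; the function name is an assumption) builds the hard-core Bose ($\sigma = +1$) or Fermi ($\sigma = -1$) Hamiltonian from a given one-particle $H$, with the basis states ${|jl\rangle}$, $j < l$, ordered lexicographically:

```python
import numpy as np
from itertools import combinations

def two_particle_hamiltonian(H, sigma):
    """H_2^sigma for a one-particle tight-binding Hamiltonian H on a simple graph."""
    v = H.shape[0]
    pairs = list(combinations(range(v), 2))         # basis |jl>, j < l
    H2 = np.zeros((len(pairs), len(pairs)), dtype=complex)
    for r, (j, l) in enumerate(pairs):
        for c, (k, m) in enumerate(pairs):
            H2[r, c] = ((j == k) * H[l, m] + sigma * (j == m) * H[l, k]
                        + sigma * (l == k) * H[j, m] + (l == m) * H[j, k])
    return H2
```
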
Topological gauge potentials {#subsec: 2 particle topological gauge potentials}
----------------------------
Gauge potentials can be introduced on a two-particle combinatorial graph following the general prescription of Section \[sec: one-particle graph\]\[subsec: gauge potentials\]. Quantum statistics is described by a subset of these, which we call [*topological gauge potentials*]{}. As discussed below, topological gauge potentials correspond to gauge potentials on a two-particle metric graph whose fluxes through all contractible closed curves vanish (modulo $2\pi$). Thus, their effects are purely quantum mechanical, vanishing in the classical limit. Let $G$ be a combinatorial graph, and $G_2$ the corresponding two-particle combinatorial graph. Let $\epsilon$ and $\phi$ denote disjoint edges of $G$, i.e. $\epsilon$ and $\phi$ have no vertices in common. Let $c_{\epsilon,\phi} $ denote the cycle on $G_2$ given by $$\label{eq: c_jk,lm}
c_{\epsilon,\phi} = \left(\{\epsilon_<,\phi_<\}, \{\epsilon_>,\phi_<\}, \{\epsilon_>,\phi_>\},
\{\epsilon_<,\phi_>\}, \{\epsilon_<,\phi_<\}\right).$$ That is, along $c_{\epsilon,\phi}$, the particles move in alternating steps back and forth along $\epsilon$ and $\phi$. We say that $c_{\epsilon,\phi}$ is [*metrically contractible*]{}. The reason is that $c_{\epsilon,\phi}$ corresponds to a loop $\gamma_{\epsilon,\phi}$ on $C_2(\Gamma)$, the two-particle configuration space for a metric graph $\Gamma$ associated with $G$, which can be continuously contracted to a point – see Figure \[fig: contractible cycle\].
(These considerations may be made more formal. Let $**$ denote a vertex of the two-particle configuration space. One can show that $\pi_1(C_2(\Gamma),**)$ is isomorphic to the quotient of $\pi_1^C(G_2,**)$ by the subgroup $K(G_2,**)$ generated by cycles of the form $p c_{\epsilon,\phi} p^{-1}$, where $\epsilon$ and $\phi$ are disjoint and $p$ is a path from $**$ to $\{\epsilon_<,\phi_<\}$.)
Let $\Omega_2$ denote a gauge potential on $G_2$. We say that $\Omega_2$ is a [*topological gauge potential*]{} if its flux through every metrically contractible cycle vanishes modulo $2\pi$, i.e. $$\label{eq: topological gauge potential}
\Omega_2(c_{\epsilon,\phi}) \equiv 0 \mod 2\pi, \text{ for } \epsilon, \phi
\text{ disjoint.}$$ (More formally, topological gauge potentials may be identified with one-dimensional representations of $\pi_1(C_2(\Gamma),**)$.)
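
In code, the defining condition can be checked by enumerating the metrically contractible cycles $c_{\epsilon,\phi}$. A sketch (ours; the function name is an assumption, and $\Omega_2$ is assumed to be indexed by the unordered pairs of vertices of $G$ in lexicographic order):

```python
import numpy as np
from itertools import combinations

def is_topological(A, Omega2, tol=1e-9):
    """Check that the flux of Omega2 through every metrically contractible cycle
    c_{eps,phi} (eps, phi disjoint edges of G) vanishes modulo 2*pi."""
    v = A.shape[0]
    pairs = list(combinations(range(v), 2))
    idx = {p: n for n, p in enumerate(pairs)}
    key = lambda a, b: idx[(min(a, b), max(a, b))]
    edges = [p for p in pairs if A[p[0], p[1]]]
    for eps, phi in combinations(edges, 2):
        if set(eps) & set(phi):
            continue                                 # only disjoint edge pairs constrain Omega2
        cyc = [(eps[0], phi[0]), (eps[1], phi[0]), (eps[1], phi[1]),
               (eps[0], phi[1]), (eps[0], phi[0])]
        flux = sum(Omega2[key(*cyc[r]), key(*cyc[r + 1])] for r in range(4))
        if abs(flux / (2 * np.pi) - round(flux / (2 * np.pi))) > tol:
            return False
    return True
```
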
Informally, gauge potentials that are not topological may be understood to introduce additional forces into the underlying classical dynamics, as the following heuristic argument, based on the analogy with metric graphs, demonstrates: when one particle is on the edge $\epsilon$ and the other on $\phi$, the free-particle Hamiltonian for the two-particle metric graph is given by ${{\textstyle \frac{1}{2}}}(p^2_\epsilon + p^2_\phi)$. If we introduce a gauge potential, replacing $p_\epsilon$ by $p_\epsilon - A_\epsilon(x_\epsilon,y_\phi)$ and $p_\phi$ by $p_\phi - A_\phi(x_\epsilon,y_\phi)$, the equations of motion become $$\label{eq: equations of motion with vector potential}
\ddot x_\epsilon = B(x_\epsilon,y_\phi) \dot y_\phi, \quad \ddot y_\phi =
-B(x_\epsilon,y_\phi) \dot x_\epsilon,$$ where the gauge field, $B$, is given by $\partial A_\phi/\partial x_\epsilon -
\partial A_\epsilon/\partial y_\phi$. If $B \ne 0$ (analogous to $\Omega_2(c_{\epsilon,\phi}) \ne 0 \mod 2 \pi)$, the classical motion is no longer free. For combinatorial graphs, some numerical evidence for the effects of these gauge forces is given in Section \[sec: examples\]\[subsec: linear\].
Characterization of topological gauge potentials {#subsec: characterization}
------------------------------------------------
Given a combinatorial graph $G$, we obtain below an explicit parametrization of the topological gauge potentials on the two-particle graph, $G_2$. The parametrization is given in terms of a set of fluxes, or phases. Let $T_2$ denote a spanning tree of $G_2$. As shown in Section \[sec: one-particle graph\]\[subsec: gauge potentials\], given a gauge potential $\Omega_2$ on $G_2$ (not necessarily topological), we can choose a gauge so that $\Omega_2$ vanishes on the edges of $T_2$. Let us label the edges of $G_2$ that are not in $T_2$ by an index $\phi_2$, and let $c_{\phi_2}$ denote the corresponding fundamental cycles. The number of such cycles, which we denote by $f_2$, is given by (cf. ) $$\label{eq: g_2}
f_2 = e_2 - (v_2 - 1) = e(v-2) - v(v-1)/2 + 1.$$ Let $\omega_{\phi_2}$ denote the flux of $\Omega_2$ through $c_{\phi_2}$. Then $\Omega_2$ is determined by ${{\boldsymbol \omega}}=
(\omega_{1},\ldots,\omega_{f_2})$.
The conditions $\Omega_2(c_{\epsilon,\phi}) = 0 \mod 2\pi $ on topological gauge potentials (cf. ) can be expressed as linear relations on the components of ${{\boldsymbol \omega}}$. It will be convenient to label pairs of disjoint edges $(\epsilon, \phi)$ by a single index $a$. The number of such pairs, which we denote by $g_2$, is just the difference between the number of all pairs of edges of $G$ and the number of pairs of edges that share a vertex, so that $$\label{eq: g - number of disjoint edge pairs}
g_2 = {{\textstyle \frac{1}{2}}}e(e-1) - {{\textstyle \frac{1}{2}}}\sum_{j = 1}^v v_j (v_j - 1).$$ Therefore, the condition for $\Omega_2$ to be a topological gauge potential can be written as $$\label{eq: constraint on topological Omega_2}
R\cdot {{\boldsymbol \omega}}= 2\pi {{\bf n}},$$ where $R$ is a $g_2 \times f_2$ integer matrix and ${{\bf n}}$ an arbitrary $g_2$-dimensional integer vector. The specific form of $R$ depends on the choice of fundamental cycles. We note that the rows of $R$ may be linearly dependent (this turns out to be the case if, for example, $G$ contains two or more disjoint cycles).
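
As a sanity check (ours; the function name and the sample graph, a lasso-shaped graph with a triangle on vertices 0, 1, 2 and a lead to vertex 3, are hypothetical), the closed formula for $g_2$ can be compared with a direct enumeration:

```python
import numpy as np
from itertools import combinations

def disjoint_edge_pairs(A):
    """Directly count pairs of edges of a simple graph that share no vertex."""
    edges = [(j, k) for j, k in combinations(range(A.shape[0]), 2) if A[j, k]]
    return sum(1 for e1, e2 in combinations(edges, 2) if not set(e1) & set(e2))

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
e = int(A.sum()) // 2
deg = A.sum(axis=1)
g2 = e * (e - 1) // 2 - int((deg * (deg - 1)).sum()) // 2
assert disjoint_edge_pairs(A) == g2 == 1   # only the edges {0,1} and {2,3} are disjoint
```
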
The system can be solved by expressing $R$ in Smith normal form (see, e.g., Dummit & Foote 2003). We write $$\label{eq: Smith}
R = PDQ,$$ where $ P$ and $Q$ are integer matrices with integer inverses of dimensions $g_2$ and $f_2$ respectively, and $D$ is a $g_2 \times f_2$ nonnegative diagonal integer matrix. The number of nonzero diagonal elements of $D$ is given by the rank of $R$, denoted $r$, and $D$ may be chosen so that its first $r$ diagonal elements are nonzero with $D_{jj}$ divisible by $D_{j-1,j-1}$ for $1 < j \le r$. The nonzero diagonal elements of $D$ are called the [*divisors*]{} of $R$. They are uniquely determined by $R$, and do not depend on the choice of fundamental cycles (i.e., they are basis-independent).
Substituting the Smith normal form into , we can write the conditions on $\Omega_2$ as $$\label{eq: contraint on Omega 2 2}
D_{aa} \Phi_a = 2\pi m_a, \quad 1 \le a \le g_2,$$ where ${{\bf m}}= P^{-1} {{\bf n}}$ is integral and ${{\boldsymbol \Phi}}= Q {{\boldsymbol \omega}}$. $\Phi_a$ may be regarded as the flux of $\Omega_2$ through a cycle $C_a$ in which the fundamental cycle $c_{\phi_2}$ appears with multiplicity given by $Q_{a, \phi_2}$.
From , the allowed values of $\Phi_a$ depend on $D_{aa}$, and in particular on whether $D_{aa}$ is one, greater than one, or zero. Let $p$ denote the number of divisors of $R$ equal to one, $q = r - p$ the number of divisors greater than one, and $s = f_2 - r$ the number of remaining components of ${{\boldsymbol \Phi}}$, which are unconstrained. Then can be written as $$\begin{aligned}
\label{eq: Phi_p}
\Phi_j &= 0 \mod 2\pi, &1 \le j \le p,\nonumber\\
\Phi_{p+ k} &= 2 \pi m_k/d_k \mod 2\pi, \ \ m_k = 0,\ldots,d_k -1, &1 \le k \le
q,\nonumber\\
\Phi_{r + l} &= \alpha_l \mod 2\pi,\ \ 0 \le \alpha_l < 2 \pi, &1 \le l \le s,\end{aligned}$$ where $d_k = D_{p + k,p +k}$ is shortened notation for the divisors of $R$ greater than one. Once the $\Phi_a$’s are specified, ${{\boldsymbol \omega}}$, and hence $\Omega_2$, are determined by ${{\boldsymbol \omega}}= Q^{-1}{{\boldsymbol \Phi}}$.
Thus, topological gauge potentials are parameterized by $s$ phases $\alpha_1,\ldots, \alpha_{s}$ taking values between $0$ and $2\pi$, and $q$ phases $2\pi m_1/d_1, \ldots, 2\pi m_{q}/d_{q}$ taking values constrained to be rational multiples of $2\pi$. We shall refer to the $\alpha_l$’s as [*free statistics phases*]{} and the $2\pi m_k/d_k$’s as [*discrete statistics phases*]{}. Examples of both are given in Section \[sec: examples\]. (More formally, the set of topological vector potentials modulo trivial gauge potentials, regarded as a group under matrix addition modulo $2\pi$, is isomorphic to $U(1)^{s} \times {{\mathbb Z}}/d_1 \times
\cdots \times {{\mathbb Z}}/d_{q}$, and is the character group of $\pi_1(C_2(\Gamma))$.)
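
The counting can be automated once the constraint matrix $R$ has been assembled for a particular graph and choice of fundamental cycles (a step we do not reproduce here). A minimal sketch (ours), assuming SymPy's `smith_normal_form` is available in the installed version; $R$ is padded to a square matrix with zero rows and columns, which leaves its nonzero divisors unchanged:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def statistics_phase_count(R_rows, f2):
    """Given the rows of the g_2 x f_2 integer constraint matrix R, return
    (p, discrete_divisors, s): the number of unit divisors, the divisors d_k > 1
    (each contributing discrete phases 2*pi*m/d_k), and the number s = f_2 - r
    of free statistics phases."""
    g2, n = len(R_rows), max(len(R_rows), f2)
    R = Matrix(n, n, lambda i, j: R_rows[i][j] if i < g2 and j < f2 else 0)
    D = smith_normal_form(R, domain=ZZ)
    divisors = [abs(D[i, i]) for i in range(n) if D[i, i] != 0]
    return divisors.count(1), [d for d in divisors if d > 1], f2 - len(divisors)
```

For instance, a single constraint row with coprime entries (rank one, unit divisor), as arises for the lasso graph of Section \[sec: examples\]\[subsec: lasso and figure-of-eight\] with a suitable choice of fundamental cycles, gives `statistics_phase_count([[1, 0, 0]], 3) == (1, [], 2)`: two free phases and no discrete phases.
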
Aharonov-Bohm phases and two-body phases {#subsec: AB potentials}
----------------------------------------
Among the free statistics phases, there may be some that correspond to one of the particles going around a cycle on $G$ while the other remains fixed. Physically, such phases could be produced by solenoids threading the cycles of $G$ with magnetic flux (assuming the particles are charged). From the point of view of quantum statistics, we would like to distinguish between contributions to topological gauge potentials that may be attributed to individual particles interacting with an external gauge potential, on the one hand, and contributions involving many-body effects on the other.
Let $\Omega$ be a gauge potential on a combinatorial graph $G$. We can construct a corresponding gauge potential on $G_2$, denoted $\Omega^{AB}_2$, following the prescription for constructing a two-particle Hamiltonian from a single-particle Hamiltonian, i.e. $$\label{eq: Omega_2 from Omega}
\Omega^{AB}_{2; \{j,l\}\{k,m\}} = \Omega_{jk} \delta_{lm} +\Omega_{jm} \delta_{kl}
+ \Omega_{lk} \delta_{jm} + \Omega_{lm} \delta_{jk}.$$ $\Omega^{AB}_2$ is antisymmetric (since $\Omega$ is antisymmetric), and hence constitutes a gauge potential on $G_2$. We shall call gauge potentials of this form [*Aharonov-Bohm gauge potentials*]{}. They are parameterized (up to a choice of gauge) by fluxes, or [*Aharonov-Bohm phases*]{}, $\phi^{AB}_1,\ldots, \phi^{AB}_f$, through a set of fundamental cycles on $G$.
Given a cycle $c_2$ on $G_2$, the flux $\Omega^{AB}_2(c_2)$ can be evaluated as follows. Let $p(c_2)$ and $q(c_2)$ denote the paths on $G$ described by the individual particles as $c_2$ is traversed. If each particle returns to where it started, then $p(c_2)$ and $q(c_2)$ are themselves cycles, and we say that $c_2$ is a [*direct cycle*]{}. In this case, $\Omega^{AB}_2(c_2)$ is the sum of one-particle fluxes, i.e. $$\label{eq: Omega_2 for direct cycle}
\Omega^{AB}_2(c_2) = \Omega(p(c_2)) + \Omega(q(c_2)).$$ Note that implies that $\Omega^{AB}_2$ is in fact a topological gauge potential; $p(c_{\epsilon,\phi})$ and $q(c_{\epsilon,\phi})$ are self-retracing cycles on the edges $\epsilon$ and $\phi$ through which the flux of $\Omega$ obviously vanishes. If $c_2$ is not a direct cycle, then the particles exchange positions along $c_2$, and we say that $c_2$ is an [*exchange cycle*]{}. In this case, the last vertex of $p(c_2)$ is the first vertex of $q(c_2)$, and vice versa, and $$\label{eq: Omega_2 for exchange cycle}
\Omega^{AB}_2(c_2) = \Omega(p(c_2)q(c_2)).$$
We will say that two topological gauge potentials $\Omega_2$ and $\Omega_2'$ are [*AB-equivalent*]{} if their difference, $\Omega_2 - \Omega_2'$, is an Aharonov-Bohm potential; equivalently, $\Omega_2'$ can be produced from $\Omega_2$ by adjusting the Aharonov-Bohm fluxes $\phi^{AB}_j$. We would like to find parameters that determine topological gauge potentials up to AB equivalence. These parameters, together with the $\phi^{AB}_j$’s, then provide a complete parametrization of topological gauge potentials (up to a choice of gauge).
We proceed by fixing a representative $\Omega^*_2$ from each family of AB-equivalent topological gauge potentials, and then finding parameters that characterize $\Omega^*_2$. For the special case of circular graphs this is easily done. Circular graphs have their vertices arranged on a ring, and the only edges are between adjacent vertices. We may take $\Omega^*_2 = 0$, because every topological gauge potential $\Omega_2$ may be generated by an Aharonov-Bohm flux through the ring. Circular graphs are discussed further in Section \[sec: examples\]\[subsec: circular\].
Assuming that $G$ is not a circular graph, we can choose a set of fundamental cycles $c_j$ on $G$ such that none of the $c_j$’s contains every vertex of $G$. Let $k_j$ denote a vertex of $G$ not contained in $c_j$, and let $c_{2; j}$ denote the cycle on $G_2$ on which one particle traverses $c_{j}$ while the other remains fixed at $k_j$. We fix $\Omega_2^*$ (up to a choice of gauge) by requiring that $$\label{eq: fix AB gauge}
\Omega_2^*(c_{2; j}) = 0 \mod 2\pi$$ (a straightforward argument shows that every topological gauge potential is $AB$-equivalent to a unique topological gauge potential satisfying ).
The $f$ linear conditions on $\Omega^*_2$ can be combined with the $g_2$ linear conditions which characterize general topological vector potentials, and the two sets of conditions written collectively as (cf. ) $$\label{eq: augmented relations}
R^* \cdot {{\boldsymbol \omega}}^* = 2\pi {{\bf n}}^*,$$ where $R^*$ is an integer matrix of dimension $(f+g_2) \times f_2$, and ${{\bf n}}^*$ is an integer vector of dimension $(f+g_2)$. The solution of proceeds as in Section \[sec: 2 particles\]\[subsec: characterization\]. Solutions are parameterized by $s-f$ phases $\beta_1,\ldots,
\beta_{s-f}$, which we call [*two-body phases*]{}, and $q$ discrete statistics phases $2\pi m_1/d_1, \ldots, 2\pi
m_{q}/d_{q}$. It follows that a general topological gauge potential may be parameterized by $f$ Aharonov-Bohm phases $\phi^{AB}_1, \ldots, \phi^{AB}_f$, which determine its fluxes through the $c_{2;j}$’s, in addition to the two-body phases $\beta_l$ and discrete statistics phases $2\pi m_k/d_k$. The lasso and bowtie graphs (Section \[sec: examples\]\[subsec: lasso and figure-of-eight\]) provide simple illustrations of the distinction between Aharonov-Bohm and two-body phases.
Bose and Fermi statistics {#subsec: Bose and Fermi}
-------------------------
Our treatment of quantum statistics on combinatorial graphs – referred to in what follows as the [*identified scheme*]{} – follows the approach of Leinaas and Myrheim (1977), treating the particles as classically indistinguishable. If instead one followed the conventional approach – referred to in what follows as the [*distinguished scheme*]{}, in which the particles are labeled from the start, one would find Bose and Fermi statistics. Below we establish the relationship between the two schemes. Note that, in the identified scheme, it does not make sense to speak of the exchange symmetry of a quantum state, as exchange is not defined – the wavefunction assumes a single value for each configuration of the indistinguishable particles. It turns out that Bose and Fermi statistics may be regarded as particular cases of the more general quantum statistics we have obtained here. In particular, Bose statistics corresponds to a trivial (e.g., vanishing) topological gauge potential, while Fermi statistics corresponds to a topological gauge potential that assigns a phase of $\pi$ to exchange cycles and zero phase to direct cycles.
Following the distinguished scheme one proceeds as follows. Let $G$ denote a combinatorial graph with adjacency matrix $A$. We introduce the distinguished two-particle configuration space, denoted ${{\overline C}}_2(G)$, or ${{\overline G}}_2$ for short, that consists of ordered pairs $(j,l)$, $j \ne l$, of distinct vertices of $G$ (note that particles are still not allowed to occupy the same vertex). We regard ${{\overline G}}_2$ as a graph with $v(v-1)$ vertices and adjacency matrix $$\label{eq: A^2D}
{{\overline A}}_{2; jl,km} = A_{jk} \delta_{lm} + A_{lm} \delta_{jk}.$$ Following the general quantization prescription given in Section \[sec: one-particle graph\]\[subsec: 1-particle quantization\], we introduce the distinguished Hilbert space ${{\overline {\cal H}}}_2 = {{\mathbb C}}^{v(v-1)}$, with basis vectors ${|\overline{jl}\rangle}$, $j \ne
l$, which describe states where particle 1 is at vertex $j$ and particle 2 at vertex $l$. A quantum Hamiltonian on ${{\overline {\cal H}}}_2$, denoted ${{\overline H}}_2$, is a $v(v-1)$-dimensional matrix with matrix elements $$\label{eq: H^D}
{{\overline H}}_{2; jl,km} = {\langle \overline{jl}|} {{\overline H}}_2 {|\overline{km}\rangle}.$$ satisfying (so that short-time transitions are single-particle transitions).
Exchange is defined on ${{\overline {\cal H}}}_2$ by ${|\overline{jl}\rangle}
\rightarrow {|\overline{lj}\rangle}$. We decompose ${{\overline {\cal H}}}_2$ into two $v(v-1)/2$-dimensional subspaces, ${{\overline {\cal H}}}_{2}^{\sigma}$, consisting of states that are even ($\sigma = 1$) or odd ($\sigma = -1$) under exchange. Indistinguishability is incorporated by requiring the Hamiltonian (and, indeed, any hermitian matrix representing an observable) to be invariant under exchange, i.e. ${{\overline H}}_{2; jl, km} = {{\overline H}}_{2; lj, mk}$. Then ${{\overline H}}_2$ preserves exchange symmetry, i.e. it leaves the subspaces ${{\overline {\cal H}}}^\sigma$ invariant. For example, given a one-particle Hamiltonian $H$, we may construct the two-particle Hamiltonian $$\label{eq: H^D as sum of one-particle Hamiltonians}
{{\overline H}}_{2; jl, km} = H_{jk} \delta_{lm} + H_{lm} \delta_{jk}.$$ On the antisymmetric subspace ${{\overline {\cal H}}}_2^-$, the energy levels of ${{\overline H}}_2$ are just sums of distinct energy levels of $H$, and the two-particle eigenstates are antisymmetric products of one-particle eigenstates – this corresponds to noninteracting fermions. The situation is different on the symmetric subspace ${{\overline {\cal H}}}_2^+$. In general, the spectrum of ${{\overline H}}_2$ on ${{\overline {\cal H}}}_2^+$ is not simply related to the one-particle spectrum (although in some special cases it is). This is because the diagonal states ${|\overline{jj}\rangle}$ are excluded; bosons are prevented from occupying the same vertex, which, unlike fermions, they would otherwise do.
The equivalence between the identified and distinguished schemes may be established by means of a unitary map, $U^\sigma$, from the identified Hilbert space, ${{\cal H}}_2$, to the symmetric or antisymmetric subspace, ${{\overline {\cal H}}}_2^\sigma$, of the distinguished Hilbert space. $U^\sigma$ is given by $$\label{eq: U}
U^\sigma {|jl\rangle} = \frac{1}{\sqrt 2} \left( {|\overline{jl}\rangle} + \sigma
{|\overline{lj}\rangle}\right), \ \ j < l.$$ Given an exchange-invariant Hamiltonian ${{\overline H}}_2$ on ${{\overline {\cal H}}}_2^\sigma$, we can construct a unitarily equivalent Hamiltonian $H_2^\sigma$ on ${{\cal H}}_2$ by taking $H_2^{\sigma} = {U^\sigma}^\dag\, {{\overline H}}_2\, U^\sigma$. The matrix elements of $H_2^{\sigma}$ are given explicitly by $$\label{eq: translation of H^2}
H^{\sigma}_{2; jl,km} = {{\overline H}}_{2; jl,km} + \sigma {{\overline H}}_{2; lj,km}, \ \ j
< l, k < m.$$ Thus, in the identified scheme, the statistics sign $\sigma$ appears in the Hamiltonian $H_2^\sigma$, while the Hilbert space, ${{\cal H}}_2$, is independent of $\sigma$. In contrast, in the distinguished scheme, $\sigma$ determines the Hilbert space ${{\overline {\cal H}}}_2^\sigma$, while the Hamiltonian ${{\overline H}}_2$ is independent of $\sigma$. In particular, if ${{\overline H}}_2$ is given by the sum of one-particle Hamiltonians as in , then yields $$\label{eq: H^2 second}
{\langle jl|} H^\sigma_2 {|km\rangle} = \delta_{jk} H_{lm}
+ \sigma \delta_{jm} H_{lk} + \sigma \delta_{lk} H_{jm} +
\delta_{lm} H_{jk}, \quad j < l, k < m.$$ This is in accord with , and establishes that $\sigma$ corresponds to Bose or Fermi statistics.
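
This equivalence can be verified numerically. The sketch below (ours; the function names, the triangle graph and the kinetic-energy Hamiltonian are hypothetical choices) builds the distinguished Hamiltonian and the isometry $U^\sigma$, and checks that ${U^\sigma}^\dag\, {{\overline H}}_2\, U^\sigma$, the pullback of ${{\overline H}}_2$ to the identified space, reproduces the matrix elements above:

```python
import numpy as np
from itertools import combinations, permutations

def distinguished_H2(H):
    """Hbar_2 on ordered pairs (j, l), j != l, built from a one-particle H."""
    v = H.shape[0]
    conf = list(permutations(range(v), 2))
    idx = {p: n for n, p in enumerate(conf)}
    M = np.zeros((len(conf), len(conf)), dtype=complex)
    for (j, l) in conf:
        for (k, m) in conf:
            M[idx[(j, l)], idx[(k, m)]] = H[j, k] * (l == m) + H[l, m] * (j == k)
    return M, idx

def isometry(v, idx, sigma):
    """U^sigma : |jl> (j < l) -> (|jl> + sigma |lj>)/sqrt(2)."""
    pairs = list(combinations(range(v), 2))
    U = np.zeros((len(idx), len(pairs)))
    for c, (j, l) in enumerate(pairs):
        U[idx[(j, l)], c] = 1 / np.sqrt(2)
        U[idx[(l, j)], c] = sigma / np.sqrt(2)
    return U, pairs

# Hypothetical check on the triangle graph.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
H = np.diag(A.sum(axis=1)) - A
Hbar, idx = distinguished_H2(H)
for sigma in (+1, -1):
    U, pairs = isometry(3, idx, sigma)
    H2 = U.T.conj() @ Hbar @ U
    for r, (j, l) in enumerate(pairs):
        for c, (k, m) in enumerate(pairs):
            expected = ((j == k) * H[l, m] + sigma * (j == m) * H[l, k]
                        + sigma * (l == k) * H[j, m] + (l == m) * H[j, k])
            assert np.isclose(H2[r, c], expected)
```
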
From and the constraints , we can express the relation between the Bose and Fermi Hamiltonians $H^{-}_2$ and $H^{+}_2$ as $$\label{eq: H_+ and H_- second}
H^{-}_{2; jl,km} = e^{i\Omega^{F}_{2; jl,km}}H^{+}_{2; jl,km},$$ where $\Omega^{F}_2$ is given by $$\label{eq: Omega Bose Fermi}
\Omega^{F}_{2; jl,km} =
\begin{cases}
\pi,& j = m, l \sim k ,\\
-\pi,&j \sim m, l = k,\\
0,& \text{otherwise}.
\end{cases}$$ Thus, $H^+_2$ and $H^-_2$ are related by a gauge potential, as $\Omega^{F}_2$ is real antisymmetric and its ${(jl,km)}$th element vanishes unless $\{j,l\} \sim \{k,m\}$. In fact, $\Omega^{F}_2$ is a topological gauge potential. This can be verified by checking the conditions explicitly. Alternatively, we may argue as follows: given a path $p_2$ on $G_2$ of length $r$, let $p(p_2) =
(x_1,\ldots,x_r)$ and $q(p_2) = (y_1,\ldots,y_r)$ denote the single-particle paths along $p_2$. Nonzero contributions of $\pm
\pi$ to $\Omega^{F}_2(p_2)$ come from those edges of $p_2$ where the ordering of the single-particle positions switches over, either from $x_j < y_j$ to $x_{j+1} > y_{j+1}$ or from $x_j > y_j$ to $x_{j+1} < y_{j+1}$. However, on a direct cycle $c_2$, and, in particular, on $c_{\epsilon,\phi}$, the particles separately return to where they started, so the number of switchovers, and hence the number of $\pm \pi$ contributions, must be even – therefore $\Omega^{F}_2(c_2)$ is a multiple of $2\pi$. Indeed, by this reasoning we see that $$\label{eq: Omega Bose Fermi on cycles}
e^{i\Omega^{F}_2(c_2)} = \begin{cases}
1, & \text{if $c_2$ is a direct cycle},\\
-1,& \text{if $c_2$ is an exchange cycle}. \end{cases}$$
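
As a sketch of this bookkeeping (ours; the function names are assumptions and both sample graphs are hypothetical small examples), one can construct $\Omega^F_2$ from the case distinction above and evaluate its phase factor on a sample exchange cycle and a sample direct cycle:

```python
import numpy as np
from itertools import combinations

def fermi_gauge_potential(A):
    """Omega^F_2 on the two-particle graph of a simple graph with adjacency A
    (basis: ordered pairs (j, l), j < l): +pi if j = m and l ~ k,
    -pi if j ~ m and l = k, zero otherwise."""
    pairs = list(combinations(range(A.shape[0]), 2))
    idx = {p: n for n, p in enumerate(pairs)}
    Om = np.zeros((len(pairs), len(pairs)))
    for (j, l) in pairs:
        for (k, m) in pairs:
            if j == m and A[l, k]:
                Om[idx[(j, l)], idx[(k, m)]] = np.pi
            elif A[j, m] and l == k:
                Om[idx[(j, l)], idx[(k, m)]] = -np.pi
    return Om, idx

def flux(Om, idx, cycle):
    """Flux along a cycle on G_2 given as a list of ordered pairs."""
    return sum(Om[idx[cycle[r]], idx[cycle[r + 1]]] for r in range(len(cycle) - 1))

# Exchange cycle on the triangle C_3: the phase factor is -1.
A3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
Om3, idx3 = fermi_gauge_potential(A3)
print(np.exp(1j * flux(Om3, idx3, [(0, 1), (1, 2), (0, 2), (0, 1)])))

# Metrically contractible (direct) cycle on the path 0-1-2-3: the phase factor is +1.
A4 = np.diag([1, 1, 1], 1) + np.diag([1, 1, 1], -1)
Om4, idx4 = fermi_gauge_potential(A4)
print(np.exp(1j * flux(Om4, idx4, [(0, 2), (1, 2), (1, 3), (0, 3), (0, 2)])))
```
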
Examples {#sec: examples}
========
We investigate statistics phases for a number of graphs, starting with the simplest examples.
Linear graphs {#subsec: linear}
-------------
The linear graph $L_N$ consists of $N$ vertices on a line with adjacent vertices connected by edges, so that $A_{j,k} = \delta_{|j-k|,1}$. From the point of view of quantum statistics, linear graphs are trivial; it turns out that there are no nontrivial topological gauge potentials on two-particle linear graphs. However, linear graphs provide simple examples where the free-particle energy levels and eigenstates can be calculated explicitly, and they serve to illustrate some points of the preceding discussion, including the equivalence of Bose and Fermi statistics in cases where topological phases are absent, as well as the effect of non-topological gauge potentials.
Let $H$ be the one-particle kinetic energy on $L_N$ (cf. Section \[sec: one-particle graph\]\[subsec: 1-particle quantization\]). It is straightforward to show that the energy levels of $H$ are given by $E_a = 4\sin^2(\pi a/2N)$, $0 \le a < N$, while the eigenstates are given, up to normalization, by ${\langle j| \psi_a \rangle} = \cos(\pi a (j - 1/2)/N)$, with the vertices labelled $j = 1,\ldots,N$. We take the two-particle Hamiltonian, $H_2^\sigma$, to be the sum of one-particle Hamiltonians as given by . As discussed in Section \[sec: 2 particles\]\[subsec: Bose and Fermi\], $\sigma = 1$ corresponds to Bose statistics and $\sigma=-1$ to Fermi statistics. It is straightforward to show that the energy levels of $H_2^\sigma$ are independent of $\sigma$, and are given by sums $E_a + E_b$ with $a$ and $b$ distinct. The corresponding eigenstates are antisymmetric products of distinct eigenstates ${|\psi_a\rangle}$ and ${|\psi_b\rangle}$, restricted to configurations $j < l$; they are the same for both values of $\sigma$, since on a linear graph the $\sigma$-dependent terms of the two-particle Hamiltonian vanish identically, so that $H_2^+ = H_2^-$. The interaction between the (hard-core) bosons is reflected in the fact that two-particle states with $a = b$ are absent from the spectrum.
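
These statements are easy to confirm numerically. A minimal check (ours; $N = 6$ is an arbitrary choice):

```python
import numpy as np
from itertools import combinations

N = 6
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # adjacency of L_N
H = np.diag(A.sum(axis=1)) - A                                  # one-particle kinetic energy

pairs = list(combinations(range(N), 2))
def H2(sigma):
    M = np.zeros((len(pairs), len(pairs)))
    for r, (j, l) in enumerate(pairs):
        for c, (k, m) in enumerate(pairs):
            M[r, c] = ((j == k) * H[l, m] + sigma * (j == m) * H[l, k]
                       + sigma * (l == k) * H[j, m] + (l == m) * H[j, k])
    return M

one_body = np.linalg.eigvalsh(H)
print(np.allclose(one_body, 4 * np.sin(np.pi * np.arange(N) / (2 * N)) ** 2))  # E_a = 4 sin^2(pi a/2N)
bose, fermi = np.linalg.eigvalsh(H2(+1)), np.linalg.eigvalsh(H2(-1))
print(np.allclose(bose, fermi))                                 # Bose and Fermi spectra coincide
pair_sums = sorted(ea + eb for ea, eb in combinations(one_body, 2))
print(np.allclose(pair_sums, fermi))                            # levels are sums E_a + E_b, a != b
```
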
The fact that the Bose and Fermi spectra coincide for linear graphs can be understood from our topological treatment of quantum statistics. The only nontrivial cycles on the two-particle configuration space correspond to each particle moving back and forth in alternation along separated segments of the linear graph. Such cycles are metrically contractible, so that there are no nontrivial topological gauge potentials. It follows that the Fermi gauge potential $\Omega^{F}_2$ (cf. ) is just a gauge transformation.
There is an alternative, single-particle interpretation of $H^\sigma_2$, namely as a discrete approximation to the quantum Hamiltonian for a free particle in a two-dimensional right triangular domain with sides $1$, $1$ and $\sqrt 2$ (i.e., half the unit square below the diagonal), and with $\hbar = 1/N$. Neumann boundary conditions apply on the right sides of the triangle, while $\sigma$ determines the boundary condition on the diagonal (Neumann for $\sigma = 1$, Dirichlet for $\sigma = -1$).
Fixing the Hamiltonian to be $H_2^-$ for definiteness, it is instructive to observe the consequences of introducing a gauge potential, denoted ${{\hat \Omega}}_2$, that is [*not*]{} topological. Let us take ${{\hat \Omega}}_2$ to have flux $2\pi p/t$ through every cycle $c_{\epsilon,\phi}$ where $\epsilon$ is an edge between vertices in the range between $r$ and $r +t$, and $\phi$ an edge between vertices in the range between $s$ and $s+t$. For cycles $c_{\epsilon,\phi}$ outside this range, we take ${{\hat \Omega}}_2(c_{\epsilon,\phi}) = 0$. In terms of the single-particle interpretation, ${{\hat \Omega}}_2$ corresponds to a uniform magnetic field of strength $2\pi p /(t/N)$ (in units where $q/c$, $q$ being the charge of the particle, is equal to one) through the square of area $t^2/N^2$ with diagonal corners $(r/N,
s/N)$ and $((r+t)/N,(s+t)/N)$. Numerical calculations show that the shifts in energy levels produced by ${{\hat \Omega}}_2$ scale with $1/N$, in accord with the fact that a magnetic field does not change the mean density of states. However, with ${{\hat \Omega}}_2$ present, while typical eigenstates are delocalized (cf. Figure \[fig:states:a\]), one finds some eigenstates that are strongly localized in the flux square $10 \le j \le 15$ and $25 \le l \le 30$ (cf. Figure \[fig:states:b\]). In the single-particle interpretation these are Landau-like levels, and are indicative of Lorentz forces in the classical dynamics.
Circular graphs {#subsec: circular}
---------------
The circular graph, $C_N$, is obtained by connecting the first and last vertices of the linear graph $L_N$. For a circular graph, we naturally expect, and indeed find, anyon statistics, confirming that our model provides a reasonable description of quantum mechanics on a loop. For simplicity we consider a loop with three vertices, shown in Figure \[fig:triangle\] (a) (the conclusions are similar for $N>3$). Writing down the three allowed two-particle configurations in Figure \[fig:triangle\] (b), it is apparent that the two-particle graph $G_2$ is also a single loop. A single traversal of this loop is an exchange cycle, and the associated flux $\phi$ corresponds to anyon statistics. The flux $\phi$ can be generated by an Aharonov-Bohm potential, corresponding to a solenoid threading the one-particle graph $C_3$.
(Figure \[fig:triangle\]: (a) the one-particle graph $C_3$, with vertices $1$, $2$, $3$; (b) the two-particle graph for $C_3$, with vertices $(1,2)$, $(2,3)$, $(1,3)$.)
Star graphs {#subsec: star}
-----------
The star graph, $S_e$, shown in Figure \[fig:stargraph\] (c), consists of $e$ vertices each connected to a central vertex, and so has $e$ edges and $e+1$ vertices. We consider first the $e =
3$ star graph, or $Y$-graph, for which the two-particle graph is easily displayed (Figures \[fig:stargraph\] (a) and (b)). The two-particle graph consists of a single cycle which exchanges the particles through the arms of the ‘Y’. A flux through this cycle produces anyon statistics. For $e > 3$, the two-particle star graph consists of $v_2 = (e+1)e/2$ vertices and $e_2 = e(e-1)$ edges. The number of independent cycles, $f_2$, is given by $(e-1)(e-2)/2$. There are no Aharonov-Bohm phases (there are no nontrivial cycles on $S_e$) and no constraints on topological gauge potentials (there are no disjoint edges on $S_e$). Therefore, topological gauge potentials on $S_e$ are parameterized by $f_2 = (e-1)(e-2)/2$ two-body statistics phases. The number of statistics phases can also be obtained from the following simple argument: each phase corresponds to the choice of a pair of edges along which to exchange the particles, given that the particles start on the vertices of some given edge. $f_2$ is therefore the number of pairs chosen from $e-1$ objects.
Lasso and bowtie {#subsec: lasso and figure-of-eight}
----------------
The lasso graph consists of a three-vertex loop with a single external lead – see Figure \[fig:lasso+bowtie\] (a). It provides a simple example of quantum statistics which combines aspects of circular graphs and star graphs discussed above. The two-particle lasso, $G_2$, is shown in Figure \[fig:lasso+bowtie\] (b) along with a spanning tree, $T_2$ (in bold). We fix the gauge so that the edges of $T_2$ are assigned zero phase. The three edges of $G_2$ not in $T_2$ determine the fundamental cycles. The central square corresponds to the metrically contractible cycle $c_{\epsilon,\varphi}$ on which the particles move in alternation along the edges $\epsilon$ and $\varphi$ of the lasso. As the flux through this cycle must vanish, the edge $((1,3),(2,3))$ is assigned zero phase as well. The left triangle of $G_2$ corresponds to the cycle in which one of the particles goes around the loop of the lasso while the other remains on the external lead, and may be assigned an Aharonov-Bohm phase $\phi^{AB}$. The right triangle of $G_2$ corresponds to an exchange cycle in which the particles move around the loop of the lasso. The associated phase, denoted $\alpha$, is a two-body phase. The length-six cycle along the perimeter of $G_2$ coincides with the exchange cycle on the $Y$ graph, and has phase $\alpha + \phi^{AB}$. These results coincide with those of Balachandran & Ercolessi (1992), who considered the two-particle metric lasso graph.
Another example considered by Balachandran & Ercolessi (1992) is the bowtie, or figure-of-eight, which consists of two three-vertex loops sharing a common vertex – see Figure \[fig:lasso+bowtie\] (c). Calculations show that the two-particle bowtie has two Aharonov-Bohm phases (corresponding to the two loops) and two two-body phases.
Nonplanar graphs: $K_5$, $K_{3,3}$ and the $K_5$ molecule {#subsec: K_5 and K_3,3}
---------------------------------------------------------
$K_5$ is the fully connected graph with five vertices, and $K_{3,3}$ is the fully connected bi-partite graph with two sets of three vertices – see Figure \[fig: K5K33K5K5\] (a) and (b). A theorem of Kuratowski (1930) states that every non-planar graph (i.e., a graph that cannot be drawn in the plane without crossings) contains $K_5$ or $K_{3,3}$ as a subgraph (possibly after contracting some edges to points). From the point of view of quantum statistics, $K_5$ and $K_{3,3}$ are interesting because they are the smallest graphs that exhibit discrete statistics phases. Calculations following the procedure of Section \[sec: 2 particles\]\[subsec: characterization\] (details are omitted) show that $K_5$ has six Aharonov-Bohm phases and $K_{3,3}$ has four Aharonov-Bohm phases (corresponding to their respective number of fundamental cycles). In addition, both have a single discrete statistics phase that can be either $0$ or $\pi$ (mod $2\pi$). Cycles whose fluxes are given by this discrete phase are necessarily exchange cycles. An example for $K_5$ consists of the cycle in which one particle goes around a three-vertex loop (e.g., $(1,2,3,1)$) while the other remains fixed, followed by an exchange around the same three-vertex loop in the opposite direction (e.g., $((1,2),(2,3),(1,3),(1,2))$). An example for $K_{3,3}$ consists of the cycle in which one particle goes around a four-vertex bowtie loop (e.g., $(1,4,2,5,1)$) while the other remains fixed, followed by an exchange around the same four-vertex loop in the opposite direction (e.g., $((1,4),(4,5),(1,5),(1,2),(1,4))$).
The $K_5$ molecule is the graph consisting of two $K_5$’s joined by a single edge, as in Figure \[fig: K5K33K5K5\] (c). Calculations (again, details are omitted) show that two discrete statistics phases appear, both either $0$ or $\pi$, along with 12 Aharonov-Bohm phases and 6 two-body phases.
Discussion {#sec: discussion}
==========
The abelian statistics of two indistinguishable quantum particles on a combinatorial graph are characterized by a set of continuous and discrete-valued phases. The continuous phases may be separated into some that are produced by external Aharonov-Bohm fluxes and the rest describing two-body statistical interactions. The appearance of discrete phases may be related to whether the graph is planar or not, a connection that merits further investigation. While we have concentrated on the case of two particles, the abelian statistics for more than two particles may be characterized and calculated using the results presented here.
Nonabelian statistics requires new considerations. A full description is needed of the fundamental group of the $n$-particle configuration space, or, equivalently, the braid groups of the graph (not simply their abelianized versions). Recently, Farley & Sabalka (2005) have developed methods based on discrete Morse theory for obtaining efficient presentations of graph braid groups. We will discuss applications of these techniques to nonabelian graph statistics in a forthcoming publication (see Kitaev 2006 for an exact solution of a spin-lattice model where nonabelian statistics emerge).
From the point of view of physics, one of the principal attractions of quantum graphs is that they provide mathematically tractable models of complex physical systems. As applications have so far concentrated on independent-particle models, the scope for manifestations of quantum statistics on graphs is great. We suggest a few possibilities here. Graph statistics may play a role in many-electron network models of molecules, in analogy with the emergence of the Berry phase – the molecular Aharonov-Bohm effect (Mead & Truhlar 1979) – in molecular spectra. Topological signatures in single-particle transport on networks (Avron 1995), which provide models and variants of the quantum Hall effect, may have many-particle generalizations in which statistics plays a role. Many-particle graphs may provide new models for anyon superconductivity (Wilczek 1990). An intriguing potential application of nonabelian graph statistics is to topological quantum computing (Nayak [*et al*]{}. 2008). There one looks for systems with a degenerate ground state spanned by distinct quasiparticle configurations, in which the only (easily) realizable evolutions are, up to a phase, a discrete set of unitary transformations generated by the (adiabatic) exchange of quasiparticles. By introducing spin (Harrison 2008), quantum graphs might also provide models in which to investigate the role of quantum statistics in the quantum spin Hall effect and topological insulators (Hasan & Kane 2010, Qi & Zhang 2010).
Applications will depend on nontrivial graph statistics emerging in a particular model. A standard approach would be to look for novel many-particle ground states on graphs and to study their excitations. This is work for the future. However, even without a specific mechanism, we believe it is worthwhile to pursue a general investigation of quantum statistics on networks and its consequences. Quantum physics has provided many examples where topology underlies new and unexpected phenomena. Nature seems to exploit the opportunities available to it, and discoveries may follow from knowing where to look and what to look for.\
[*Acknowledgements.*]{} We thank Dan Farley for helpful discussions. JMH is supported by National Science Foundation grant DMS-0604859.
Aizenman, M., Sims, R. & Warzel, S. 2006 [*Commun. Math. Phys.*]{} [**264**]{}, 371–389.
Avron, J.E. 1995 Adiabatic quantum transport. In [*Mesoscopic Quantum Physics (Les Houches Summer School Proceedings)*]{} (eds. E. Akkermans, G. Montambaux & J.L. Pichard). Amsterdam: North-Holland.
Balachandran, A.P. & Ercolessi, E. 1992 [*Int. J. Mod. Phys. *]{} A[**7**]{}, 4633–4654.
Balachandran, A.P., Daughton, A., Gu, Z., Marmo, G., Sorkin, R.D., & Srivastava, A.M. 1993 [*Int. J. Mod. Phys. *]{} A[**8**]{}, 2993–3044.
Berry, M.V. 2008. [*Nonlinearity*]{} [**21**]{}, T19–T26.
Birman J.S. & Brendle, T.E. 2005 Braids: A survey. In [*Handbook of Knot Theory*]{} (eds. W. Menasco & M. Thistlethwaite). Amsterdam: Elsevier.
Bolte, J. & Harrison, J.M. 2003 [*J. Phys.*]{} A [**36**]{}, L433–L440.
Duck, I. & Sudarshan, E.C.G. 1997 [*Pauli and the spin-statistics theorem*]{}. Singapore: World Scientific.
Dummit, D.S. & Foote, R.M. 2003 [*Abstract algebra*]{}, 3rd edn. Wiley.
Farley, D. & Sabalka, L. 2005 [*Algebr. Geom. Topol.*]{} [**5**]{}, 1075–1109.
Finkelstein, D. & Rubenstein, J. 1968 [*J. Math. Phys. *]{} [**9**]{}, 1762–1779.
de Gennes, P.G. 1981 [*C. R. Acad. Sci. Paris*]{} [**292**]{}, 279–282.
Harrison, J.M. 2008 Quantum graphs with spin Hamiltonians. In [*Analysis on graphs and its applications*]{} (eds. P. Exner, J. P. Keating, P. Kuchment, T. Sunada, A. Teplyaev) Proceedings of Symposia in Pure Mathematics [**77**]{} AMS, 261–277.
Hasan, M.Z. & Kane, C.L. 2010, arXiv:1002.3895v1 \[cond-mat.mes-hall\].
Hatcher, A. 2001 [*Algebraic topology*]{}. Cambridge: Cambridge University Press.
Jain, J.K. 2007 [*Composite fermions*]{}. Cambridge: Cambridge University Press.
Keating, J.P. 2008 Quantum graphs and quantum chaos. In [*Analysis on graphs and its applications*]{} (eds. P. Exner, J. P. Keating, P. Kuchment, T. Sunada, A. Teplyaev) Proceedings of Symposia in Pure Mathematics [**77**]{} AMS, 261–277.
Kitaev, A. 2006 [*Ann. Phys.*]{} [**321**]{}, 2–111; arXiv:cond-mat/0506438v3.
Kostrykin, V. & Schrader, R. 1999 [*J. Phys. *]{}A [**32**]{}, 595–630.
Kottos, T. & Smilansky, U. 1997 [*Phys. Rev. Lett.*]{} [**79**]{}, 4794–4797.
Kuchment, P. 2004 [*Waves Random Media*]{} [**14**]{}, S107–128.
Kuratowski, K. 1930 [*Fund. Math.*]{} [**15**]{}, 271–283.
Laidlaw, M.G.G. & DeWitt, C.M. 1971 [*Phys. Rev. *]{}D [**3**]{}, 1375–1378.
Laughlin, R. B. 1983. [*Phys. Rev. Lett.*]{} [**50**]{}, 1395–1398.
Leinaas, J.M. & Myrheim, J. 1977 [*Nuovo Cim.*]{} [**37B**]{}, 1–23.
Manton, N.S. 2008 [*Nonlinearity*]{} [**21**]{}, T221–T232.
Mead, C.A. & Truhlar, D.G. 1979 [*J. Chem. Phys.*]{} [**70**]{}, 2284–2296.
Nayak, C., Simon, S.H., Stern, A., Freedman, M., & Das Sarma, S. 2008. [*Rev. Mod. Phys.*]{} [**80**]{}, 1083–1159.
Oren, O., Godel, A. & Smilansky, U. 2009 [*J. Phys.*]{} A [**42**]{}, 415101.
Pauling, L. 1936 [*J. Chem. Phys.*]{} [**4**]{}, 673–677.
Qi, X.L. & Zhang, S.C. 2010 [*Phys. Today*]{} [**63**]{}, 33–38.
Wilczek, F. (ed) 1990 [*Fractional statistics and anyon superconductivity*]{}. Singapore: World Scientific.
\[lastpage\]
|
---
abstract: 'The interplay among topology, disorder, and non-Hermiticity can induce some exotic topological and localization phenomena. Here we investigate this interplay in a two-dimensional non-Hermitian disordered [ Chern-insulator model with two typical kinds of non-Hermiticities]{}, the nonreciprocal hopping and on-site gain-and-loss effects. The topological phase diagrams are obtained by numerically calculating two topological invariants in the real space, which are the disorder-averaged open-bulk Chern number and the generalized Bott index, respectively. We reveal that the nonreciprocal hopping (the gain-and-loss effect) can enlarge (reduce) the topological regions and the topological Anderson insulators induced by disorders can exist under both kinds of non-Hermiticities. Furthermore, we study the localization properties of the system in the topologically nontrivial and trivial regions by using the inverse participation ratio and the expansion of single particle density distribution.'
author:
- 'Ling-Zhi Tang'
- 'Ling-Feng Zhang'
- 'Guo-Qing Zhang'
- 'Dan-Wei Zhang'
title: 'Topological Anderson insulators in two-dimensional non-Hermitian disordered systems'
---
[^1]
[^2]
Introduction
============
Topological insulators, as a new class of states of matter with nontrivial band structures, have witnessed fast development in condensed matter physics [@Hasan2010; @XLQi2011] and tunable engineered systems [@DWZhang2018; @Cooper2019; @Goldman2016; @Schroer2014; @Roushan2014; @XTan2018; @XTan2019b; @Lee2018b; @Huber2016; @LLu2014; @Ozawa2019] in recent years. Topological insulators are generally characterized by topological invariants of extended bulk states and gapless edge states under open boundary conditions (OBCs). For instance, two-dimensional (2D) Chern insulators are topologically characterized by the Chern number and chiral edge states [@XLQi2011]. These topological characters are robust against certain types of weak disorders as the topological band gap is preserved under these perturbations. For sufficiently strong disorders, the systems usually become trivial insulators as the band gaps close and all states are localized due to the Anderson localization [@Anderson1958]. In contrast to this common wisdom, a counter-intuitive behavior that the disorder can induce the non-trivial phase from trivial phase has been revealed and the disorder-induced topological phase is known as the topological Anderson insulator (TAI) [@JLi2009]. TAIs have been theoretically investigated in various models and systems [@JLi2009; @Groth2009; @HJiang2009; @HMGuo2010; @Altland2014; @Mondragon-Shem2014; @Titum2015; @BLWu2016; @Sriluckshmy2018; @ZQZhang2019; @JHZheng2019; @Kuno2019; @ZQZhang2019; @RChen2019], such as the disordered Su-Schrieffer-Heeger (SSH) chain [@WPSu1979] and Haldane model [@Haldane1988] of Chern insulators. Recently, the experimental observation of TAIs in two engineered systems with tunable disorder and topology was reported, i.e., one-dimensional (1D) cold atomic wires [@Meier2018] and 2D photonic waveguide arrays [@Stutzer2018].
Most studies of topological states focus on Hermitian systems; however, the recent theoretical and experimental advances of non-Hermitian physics have inspired the extension of topological systems to the non-Hermitian regime [@Bender1998; @El-Ganainy2018; @Miri2019]. A remarkable, fast-growing effort has been undertaken to explore novel topological states and phenomena in non-Hermitian systems [@Rudner2009; @Zeuner2015; @Esaki2011; @YHu2011; @BZhu2014; @Malzard2015; @Leykam2017; @YXu2017b; @XZhan2017; @Lee2016; @Fring2016; @QBZeng2017; @Jin2017; @SYao2018; @FSong2019; @Kunst2018; @YXiong2018; @Yuce2018; @HJiang2018; @LWZhou2018; @RWang2018; @LJin2019; @Borgnia2019; @ZGong2018; @SYao2018b; @HShen2018; @Takata2018; @YChen2018; @LJLang2018; @Harari2018; @Bandres2018; @HZhou2018; @TSDeng2019; @Ezawa2019; @Kawabata2019; @Ghatak2019; @Kawabata2018; @Yuce2019; @TLiu2019; @Lee2019; @Luitz2019; @Yamamoto2019; @Yuce2018; @CHYin2018; @Herviou2019; @DWZhang2020; @TLiu2020; @ZXu2020], where the non-Hermiticites include the non-Hermitian gain and loss [@LFeng2017; @El-Ganainy2018; @Miri2019], the nonreciprocal hopping [@Hatano1996; @Hatano1997], and the dissipation in open systems [@Diehl2008]. It has also been revealed that non-Hermitian disordered systems have unique localization properties [@Hatano1996; @Hatano1997; @Carmele2015; @Levi2016; @Hamazaki2019; @Mejia-Cortes2015; @QBZeng2017; @Alvarez2018; @PWang2019]. Moreover, the interplay of topology and disorder in 1D non-Hermitian systems has been studied [@DWZhang2019; @XWLuo2019; @Longhi2019; @HJiang2019; @QBZeng2020; @JHou2019]. The topological phase (transition) in 1D non-Hermitian quasicrystals has been explored [@Longhi2019; @HJiang2019; @QBZeng2020; @JHou2019]. Notably, non-Hermitian TAIs induced by a combination of disorder and non-Hermiticity in the SSH chains consisting of nonreciprocal disordered hoppings have been uncovered [@DWZhang2019; @XWLuo2019].
[ In this work, we explore the topological and localization properties in a 2D non-Hermitian disordered Chern-insulator model, which in the Hermitian limit is a square-lattice variation of the Haldane model [@Haldane1988].]{} Two typical kinds of non-Hermiticities from the nonreciprocal hopping and the Zeeman potential with gain-and-loss effects are considered in the constructed 2D tight-binding model. We obtain the topological phase diagrams by numerically calculating two topological invariants in real space, which are the disorder-averaged open-bulk Chern number and generalized Bott index [@Loring2010; @HHuang2018; @QBZeng2020]. In the Hermitian limit, we show that the disorder in the system can induce a Chern-insulator phase. We demonstrate that the nonreciprocal hopping (the gain-and-loss effect) can enlarge (reduce) the topological regions and find that topological Anderson insulators induced by disorders can exist under these two kinds of non-Hermiticities. Moreover, we study the localization properties of the system in the topologically nontrivial and trivial regions by using the inverse participation ratio and the expansion of the single particle density distribution.
The rest of this paper is organized as follows. We first propose the 2D non-Hermitian disordered Chern-insulator model with nonreciprocal hopping and on-site gain-and-loss terms, and then introduce two approaches for calculating the topological invariants for the non-Hermitian disordered systems in Sec. \[sec2\]. Section \[sec3\] is devoted to investigating topological phase diagrams and discussing the non-Hermitian TAI in the 2D system. We further study the localization properties of our model in Sec. \[sec4\]. A conclusion and discussion are finally presented in Sec. \[sec5\].
\[sec2\]Model and topological invariants
========================================
We begin by constructing a [ 2D non-Hermitian square-lattice Chern-insulator model]{}, where the non-Hermiticity arises from the nonreciprocal hopping and the Zeeman potential with on-site gain-and-loss term. As depicted in Fig. \[fig1\], the tight-binding model reads $$\label{H}
\begin{aligned}
H=\sum_\textbf{x}\sum_{j=x,y}[c_\textbf{x}^\dagger T_j c_{\textbf{x}+\textbf{e}_j}+c_{\textbf{x}+\textbf{e}_j}^\dagger T_j^\dagger e^\gamma c_\textbf{x}]+\sum_\textbf{x}c_\textbf{x}^\dagger M_\textbf{x} c_\textbf{x},
\end{aligned}$$ where $c_\textbf{x}^\dagger=(c_{\textbf{x},\uparrow}^\dagger ,c_{\textbf{x},\downarrow}^\dagger)$ is a two-component creation operator that creates a particle at lattice site $\textbf{x}=(x,y)$ with (pseudo-)spin $\{\uparrow,\downarrow\}$; $c_\textbf{x}$ is the corresponding annihilation operator; $T_j=-\frac{1}{2}t_j\sigma_z-\frac{i}{2}\nu_j\sigma_j$; $M_\textbf{x}=(m_\textbf{x}+i\Gamma)\sigma_z$, with $\gamma$ and $\Gamma$ being the non-Hermiticity strengths of the nonreciprocal intercell hopping and the gain-and-loss term; $\textbf{e}_j$ is the unit vector along the $j$ direction ($j=x,y$), and $\sigma_j$ are Pauli matrices on the spin basis. The hopping strengths are set to $t_x=t_y=t=1$ and $\nu_x=\nu_y=1$, with $t=1$ as the energy unit hereafter for convenience. In this model, we consider the diagonal disorder in the Zeeman potential $m_\textbf{x}=m+W\omega_\textbf{x}$, where $m$ is a constant Zeeman strength and $\omega_\textbf{x}$ denotes independent random numbers chosen uniformly in the range $[-1,1]$ with the disorder strength $W$.
![(Color online) Schematic 2D lattice of the non-Hermitian disordered Chern-insulator model with the site index $\mathbf{x}=(x,y)$. Here $T_{x,y}$ and $T_{x,y}e^\gamma$ denote the nonreciprocal intercell hopping matrices on the (pseudo-)spin basis $\{\uparrow,\downarrow\}$ along the $x,y$ axis, and $M_\textbf{x}$ denotes the on-site potential with a disordered Zeeman term $m_\textbf{x}\sigma_z$ and the gain-and-loss term $i\Gamma\sigma_z$ acting on the (pseudo-)spins. []{data-label="fig1"}](fig1.pdf){width="38.00000%"}
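To make the model concrete, the following minimal sketch (our own conventions: a dense matrix, two spin components per site, basis index $2(xL+y)+\text{spin}$, and open boundaries unless `pbc=True`) assembles the real-space Hamiltonian of Eq. (\[H\]); the helper `hamiltonian` introduced here is reused in the numerical sketches below.

```python
# Minimal sketch of the non-Hermitian disordered Chern-insulator Hamiltonian in real space.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(L, m, W, gamma, Gamma, t=1.0, nu=1.0, pbc=False, rng=None):
    rng = np.random.default_rng(rng)
    Tx = -0.5 * t * sz - 0.5j * nu * sx        # T_x = -(t/2) sigma_z - (i/2) nu sigma_x
    Ty = -0.5 * t * sz - 0.5j * nu * sy        # T_y = -(t/2) sigma_z - (i/2) nu sigma_y
    H = np.zeros((2 * L * L, 2 * L * L), dtype=complex)
    idx = lambda x, y: 2 * (x * L + y)
    for x in range(L):
        for y in range(L):
            i = idx(x, y)
            m_x = m + W * rng.uniform(-1, 1)   # disordered Zeeman strength m_x
            H[i:i + 2, i:i + 2] += (m_x + 1j * Gamma) * sz               # on-site M_x
            for T, (dx, dy) in ((Tx, (1, 0)), (Ty, (0, 1))):
                xn, yn = x + dx, y + dy
                if pbc:
                    xn, yn = xn % L, yn % L
                elif xn >= L or yn >= L:       # open boundary conditions
                    continue
                j = idx(xn, yn)
                H[i:i + 2, j:j + 2] += T                                 # c_x^+ T_j c_{x+e_j}
                H[j:j + 2, i:i + 2] += T.conj().T * np.exp(gamma)        # nonreciprocal back-hopping
    return H

H = hamiltonian(L=10, m=2.2, W=1.0, gamma=0.5, Gamma=0.0, rng=1)
print(H.shape, np.abs(np.linalg.eigvals(H).imag).max())   # spectrum is real under OBCs when Gamma = 0
```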
[ In the Hermitian and clean limit of $\gamma=\Gamma=W=0$, the Hamiltonian in Eq. (\[H\]) reduces to the square-lattice model of 2D Chern (quantum Hall) insulators [@XLQi2011], which is a variation of the Haldane model [@Haldane1988]. In this clean case, when $|m|<2$, the band topology can be characterized by a non-zero Chern number, and the system has chiral edge modes located at the boundary of the square lattice under OBCs. When $|m|>2$, the two energy bands become trivial with a zero Chern number and the edge modes disappear. Thus, a topological phase transition between Chern insulators and trivial insulators occurs at $|m|=t_x+t_y=2$.]{}
In the presence of disorder, the translation symmetry is broken and the wave vector is no longer a good quantum number. In this case, the conventional Chern number whose formula is defined in terms of Bloch vectors is inapplicable. In addition, due to the non-Hermitian skin effect when $\gamma\neq0$ [@SYao2018; @FSong2019; @Kunst2018], the conventional bulk-boundary correspondence based on the Hermitian model may fail to correctly describe the relation between the number of chiral edge states and the conventional Chern number. Here, we use the open-bulk Chern number developed in Ref. [@FSong2019] to characterize the topological properties of our non-Hermitian disordered model, which is calculated in real space and can recover to the Hermitian (clean) case [@Bianco2011]. For a disorder configuration denoted by $s$, the corresponding open-bulk Chern number is given by [@FSong2019] $$C_s=\frac{2\pi i}{L_x^\prime L_y^\prime}\text{Tr}^\prime(P_s[[X,P_s],[Y,P_s]]),$$ where $X$ and $Y$ denote the coordinate operators, $P_s$ is a valence-band projector operator, $\text{Tr}^\prime$ stands for the trace within the central region of size $L_x^\prime \times L_y^\prime$ with system size $L_x\times L_y$ and $L_j^\prime=L_j-2l_j$. Here $l_j$ should be sufficiently large to dissolve the finite-size effect. For convenience, we set $L_x=L_y=L$ in our numerical simulations. In the biorthonormal basis, $P_s$ is calculated under OBCs, $$P_s=\sum|nR\rangle_s {_s}\langle nL|,$$ where $\ket{nR}_s$ and $\ket{nL}_s$ represent the $n$-th right and left eigenstates obtained from the eigenfunctions $H_s\ket{nR}_s=E_{s,n}\ket{nR}_s$ and $H_s^\dagger\ket{nL}_s=E_{s,n}^\ast\ket{nL}_s$, respectively. Note that the eigenstates obey the biorthonormal condition ${_s}\langle nL|n^\prime R\rangle_s=\delta_{nn^\prime}$ and $C_s$ converges to an integer when $L\rightarrow\infty$. We define the disorder-averaged open-bulk Chern number as $$\label{C}
C=\frac{1}{N_s}\sum_{s=1}^{N_s}C_s,$$ where $N_s$ denotes the disorder configuration number and is set as $N_s\in[100,300]$ to make $C$ converge.
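A rough implementation of this real-space invariant for a single disorder configuration is sketched below; it reuses the `hamiltonian` helper above, and the identification of the occupied ("valence") states with those having $\mathrm{Re}\,E<0$ (half filling) is our own convention rather than something prescribed by the formula.

```python
# Sketch of the open-bulk Chern number C_s = (2 pi i / L'^2) Tr'( P [[X,P],[Y,P]] ) for a
# single disorder configuration, with the biorthogonal projector P = sum_occ |nR><nL|.
import numpy as np

def commutator(A, B):
    return A @ B - B @ A

def open_bulk_chern(H, L, l_cut=4):
    E, R = np.linalg.eig(H)                       # columns of R = right eigenvectors |nR>
    Linv = np.linalg.inv(R)                       # rows of R^{-1} = left eigenvectors <nL|
    occ = np.real(E) < 0                          # occupied band (our half-filling convention)
    P = R[:, occ] @ Linv[occ, :]
    xs = np.repeat(np.arange(L), L)               # site coordinates, site index = x*L + y
    ys = np.tile(np.arange(L), L)
    X = np.diag(np.repeat(xs, 2).astype(float))   # two spin components per site
    Y = np.diag(np.repeat(ys, 2).astype(float))
    M = P @ commutator(commutator(X, P), commutator(Y, P))
    keep = np.repeat((xs >= l_cut) & (xs < L - l_cut) & (ys >= l_cut) & (ys < L - l_cut), 2)
    Lp = L - 2 * l_cut                            # linear size of the central trace region
    return np.real(2j * np.pi * np.trace(M[np.ix_(keep, keep)]) / Lp**2)

H = hamiltonian(L=14, m=1.0, W=1.0, gamma=0.0, Gamma=0.0, rng=0)   # helper from the sketch above
print(open_bulk_chern(H, L=14))   # expected to be of unit magnitude deep in the topological region
```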
We also consider the Bott index as another topological invariant [@Loring2010; @HHuang2018; @QBZeng2020], which can also be calculated in real space and generalized to non-Hermitian models. The Bott index measures the commutativity of a pair of unitary matrices, discriminating pairs that can be approximated by commuting matrices from those that are far from any commuting pair. The Bott index equals the Chern number in the thermodynamic limit. For a disorder configuration $s$, the generalized Bott index is given by [@Loring2010; @HHuang2018; @QBZeng2020] $$B_s=\frac{1}{2\pi}\text{Im}\left[\text{Tr}\left(\text{ln}V_sU_sV_s^\dagger U_s^\dagger\right)\right],$$ where $U_s$ and $V_s$ are the projected position operators, whose matrix elements $$U_{s,mn}={_s}\langle mL|e^{2\pi iX/L}\ket{nR}_s$$ $$V_{s,mn}={_s}\langle mL|e^{2\pi iY/L}\ket{nR}_s$$ are defined in the biorthonormal basis for non-Hermitian systems. The disorder-averaged Bott index is given by $$\label{B}
B=\frac{1}{N_s}\sum_{s=1}^{N_s}B_s.$$ When $\gamma=\Gamma=0$, the Chern number in Eq. (\[C\]) and the Bott index in Eq. (\[B\]) reduce to those in Hermitian systems with $\ket{nR}_s=\ket{nL}_s$ involved in the calculations.
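A companion sketch of the generalized Bott index follows; here we evaluate it with periodic boundaries (our choice, which makes the exponentiated position operators natural) and again select the occupied states by $\mathrm{Re}\,E<0$.

```python
# Sketch of the generalized Bott index B_s = Im Tr ln(V U V^+ U^+) / (2 pi), with projected
# position operators U_mn = <mL| e^{2 pi i X/L} |nR> and V_mn = <mL| e^{2 pi i Y/L} |nR>.
import numpy as np
from scipy.linalg import logm

def bott_index(H, L):
    E, R = np.linalg.eig(H)
    Linv = np.linalg.inv(R)                           # rows = biorthogonal left eigenvectors
    occ = np.where(np.real(E) < 0)[0]                 # occupied band (our convention)
    xs = np.repeat(np.repeat(np.arange(L), L), 2)     # x coordinate of each basis state
    ys = np.repeat(np.tile(np.arange(L), L), 2)       # y coordinate of each basis state
    U = Linv[occ, :] @ (np.exp(2j * np.pi * xs / L)[:, None] * R[:, occ])
    V = Linv[occ, :] @ (np.exp(2j * np.pi * ys / L)[:, None] * R[:, occ])
    return np.imag(np.trace(logm(V @ U @ V.conj().T @ U.conj().T))) / (2 * np.pi)

H = hamiltonian(L=14, m=1.0, W=1.0, gamma=0.0, Gamma=0.0, pbc=True, rng=0)   # helper above
print(bott_index(H, L=14))   # expected to be of unit magnitude deep in the topological region
```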
\[sec3\]Topological phase diagrams
==================================
![(Color online) [Hermitian case.]{} (a) The Chern number $C$ and (c) Bott index $B$ as a function of the potential $m$ and the disorder strength $W$ for a system size $L=16$. (b) The Chern number $C$ and (d) Bott index $B$ as a function of $W$ for $m=2.2$ and various system sizes $L$. $\gamma=\Gamma=0$ for all panels. []{data-label="fig2"}](fig2.pdf){width="45.00000%"}
We first consider the disorder effect on the topological phase of the 2D Hermitian Chern-insulator model with $\gamma=\Gamma=0$ in Eq. (\[H\]). The obtained topological phase diagram in Fig. \[fig2\](a) shows the disorder-averaged Chern number $C$ as a function of the potential strength $m$ and the disorder strength $W$ for $L=16$. In the clean limit $W=0$, a topological transition between the non-trivial phase with $C=1$ and the trivial phase with $C=0$ occurs at $m=2$ as expected. As the disorder strength grows into a moderate range, the topological transition point moves toward larger $m$, and the region of Chern insulators is enlarged. Remarkably, in the region $m\in[2,3]$, the topological phase can be induced by moderate disorder from the trivial phase in the clean case $W=0$, which indicates the TAI in this 2D Hermitian system. A consistent topological phase diagram with the TAI is obtained from the calculated Bott index $B$, as shown in Fig. \[fig2\](c). Notably, $C$ and $B$ can approach 1 in the TAI region by increasing the lattice size, with two examples as a function of $W$ with $m=2.2$ shown in Figs. \[fig2\](b) and \[fig2\](d).
We now study the effect of the combined disorder and non-Hermiticity on an initially trivial phase in the clean and Hermitian cases. We consider the first kind of non-Hermiticity from the nonreciprocal hopping $\gamma$ with fixed $\Gamma=0$ and $m=3$. In Figs. \[fig3\](a) and \[fig3\](b), we present the numerically calculated Chern number $C$ and Bott index $B$ as a function of $\gamma$ and disorder strength $W$, respectively. As shown in the topological phase diagrams, the non-Hermiticity $\gamma$ can induce a topological phase transition from the trivial phase to the topological phase in the clean case of $W=0$ and at modest disorder $W\lesssim6$. Thus, the nonreciprocal hopping can enlarge the topological region in this 2D system. More interestingly, the 2D non-Hermitian TAI can be induced by the combination of disorder and non-Hermiticity [@DWZhang2019; @XWLuo2019], as an extension of the TAI in Hermitian systems [@JLi2009; @Groth2009; @HJiang2009; @HMGuo2010; @Altland2014; @Mondragon-Shem2014; @Titum2015; @BLWu2016; @Sriluckshmy2018; @ZQZhang2019; @JHZheng2019; @Kuno2019; @ZQZhang2019; @RChen2019].
![(Color online) [ Non-Hermitian case with nonreciprocal hopping.]{} (a) The Chern number $C$ and (b) Bott index $B$ as a function of the nonreciprocal parameter $\gamma$ and the disorder strength $W$ for $\Gamma=0$, $m=3$ and $L=16$. (c) The Chern number $C$ and (d) Bott index $B$ as a function of $W$ and $\tilde{t}$ after the similarity transformation of the non-Hermitian Hamiltonian $H$ with $\Gamma=0$ under OBCs. []{data-label="fig3"}](fig3.pdf){width="48.00000%"}
To further understand the topological phase diagrams in Figs. \[fig3\](a) and \[fig3\](b), we take a similarity transformation of the non-Hermitian Hamiltonian in Eq. (\[H\]) with $\Gamma=0$ for a given configuration $H_s$ under OBCs, $\tilde{H}_s=S^{-1}H_sS$, where the transformation matrix takes the diagonal form $S=\text{diag}(1,1,r,r,r,r,\cdots,r^{L^2/2-1},r^{L^2/2-1},r^{L^2/2-1},r^{L^2/2-1},r^{L^2/2},r^{L^2/2})$ with $r=\sqrt{e^\gamma}$. After the similarity transformation, one has the eigenfunction for a Hermitian model $\tilde{H}_s\ket{\tilde{n}}_s=E_{s,n}\ket{\tilde{n}}_s$, with the wave function $\ket{\tilde{n}}_s=S^{-1}\ket{nR}_s$ and the corresponding Hermitian Hamiltonian $\tilde{H}_s$, with parameters $\tilde{t}_j=\sqrt{e^\gamma}t_j$, $\tilde{\nu}_j=\sqrt{e^\gamma}\nu_j$, and $\tilde{m}_\textbf{x}=m_\textbf{x}$. In this case, $H_s$ and $\tilde{H}_s$ have the same real energy spectrum $E_{s,n}$ under OBCs, which is confirmed in our numerical simulations. By using the Hermitian Hamiltonian $\tilde{H}_s$, we calculate the corresponding disorder-averaged Chern number and Bott index as a function of $\tilde{t}$ ($\tilde{t}_x=\tilde{t}_y=\tilde{t}$) and $W$, as shown in Figs. \[fig3\](c) and \[fig3\](d). One can find that the topological phase boundaries of the Hermitian model are consistent with those of the original non-Hermitian model under the similarity transformation. In this non-Hermitian case, the TAIs can be topologically connected to those in the Hermitian case through the similarity transformation under OBCs. However, as is clearly seen from the transformation, the wave functions (the density distributions) of the bulk states accumulate at the boundary, which is the non-Hermitian skin effect [@SYao2018; @FSong2019; @Kunst2018].
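This similarity transformation is easy to verify numerically. In the sketch below we realize $S$ by the gauge factor $r^{x+y}$ on both spin components of site $(x,y)$ (a concrete choice consistent with the rescaled hoppings quoted above): the transformed matrix is Hermitian and its spectrum matches the real OBC spectrum of $H_s$.

```python
# Sketch: with the gauge factor r^(x+y) per site, S^{-1} H S is Hermitian with hoppings
# rescaled by r = sqrt(exp(gamma)), and shares the real OBC spectrum of H.
import numpy as np

L, gamma = 8, 0.6
H = hamiltonian(L=L, m=3.0, W=1.0, gamma=gamma, Gamma=0.0, rng=3)   # helper defined earlier
r = np.sqrt(np.exp(gamma))
xs = np.repeat(np.repeat(np.arange(L), L), 2)
ys = np.repeat(np.tile(np.arange(L), L), 2)
S = np.diag(r ** (xs + ys))
Ht = np.linalg.inv(S) @ H @ S
print(np.abs(Ht - Ht.conj().T).max())                               # ~ 0: transformed H is Hermitian
print(np.abs(np.sort(np.linalg.eigvals(H).real) - np.sort(np.linalg.eigvalsh(Ht))).max())
```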
![(Color online) [Non-Hermitian case with gain and loss.]{} (a) The Chern number $C$ and (b) Bott index $B$ as a function of the non-Hermitian gain-and-loss parameter $\Gamma$ and the disorder strength $W$ for $\gamma=0$, $m=2.2$ and $L=16$. []{data-label="fig4"}](fig4.pdf){width="48.00000%"}
We also consider the non-Hermitian gain and loss in the model with $\Gamma\neq0$ and $\gamma=0$. In this case, the energy spectrum is generally complex, and there is no similarity transformation to Hermitian models. We find that increasing the gain-and-loss strength does not enlarge, and generally reduces, the topological regions in the topological phase diagrams. However, the disorder-induced TAI can persist for small $\Gamma$. Figures \[fig4\](a) and \[fig4\](b) show the calculated $C$ and $B$ as a function of $\Gamma$ and $W$ for $m=2.2$ and $L=16$. In the Hermitian limit $\Gamma=\gamma=0$, the TAI exists for modest $W$. When $\Gamma$ increases, the TAI persists under finite non-Hermitian gain and loss and finally becomes trivial when $\Gamma\gtrsim0.5$. Since the nonreciprocal hopping enlarges the topological region, one can expect the TAI to exist in the general non-Hermitian cases with non-vanishing $\gamma$ and $\Gamma$.
\[sec4\]Localization properties
===============================
![(Color online) [ Disorder-averaged IPR of the right eigenstates for the system size $L=20$.]{} (a) and (b) The Hermitian limit of the model with $\gamma=0$, $\Gamma=0$ and $m=2.2$. (c) and (d) The nonreciprocal case of the non-Hermitian model with $\gamma=0.75$, $\Gamma=0$ and $m=3$. (e) and (f) The gain-and-loss case of the non-Hermitian model with $\gamma=0$, $\Gamma=0.25$ and $m=2.2$. (a), (c) and (e) Blue dashed lines and blue solid lines denote $\bar{I}$ as a function of the disorder strength $W$ under OBCs and PBCs, respectively. The red dotted line and red dot-dashed line in (a) denote $\bar{I}$ for $L=30$. (b),(d) and (f) $I_n$ in the whole energy spectrum as a function of $W$. []{data-label="fig5"}](fig5.pdf){width="48.00000%"}
In this section, we consider the localization properties of the non-Hermitian systems induced by the disorder.
First, we calculate the disorder-averaged inverse participation ratio (IPR) [@PhysRevLett.84.3690; @edelman2005random]. For non-Hermitian systems, the IPR can be directly defined for the right (left) eigenstates governed by the system Hamiltonian $H$ ($H^{\dag}$) [@Mejia-Cortes2015; @QBZeng2017; @Alvarez2018; @PWang2019; @DWZhang2019; @Longhi2019; @HJiang2019; @QBZeng2020; @JHou2019] or under the biorthogonal basis from both right and left eigenstates [@ZGong2018; @PWang2019; @XWLuo2019]. The disorder-averaged IPR of the $n$-th right eigenstate $\ket{nR}_s$ is defined as $$\label{IPR}
I_n=\overline{\frac{\sum_{\textbf{x}}|\phi_\textbf{x}^{(s,n)}|^2}{(\sum_{\textbf{x}}|\phi_\textbf{x}^{(s,n)}|)^2}},$$ where $\phi_\textbf{x}^{(s,n)}={_s}\bra{nR}\textbf{\^{x}}\ket{nR}_s$ with $\textbf{\^{x}}$ being the position operator of the lattice site $\textbf{x}$, and the overline indicates the average over many disorder configurations of index $s$. Then the mean IPR $\bar{I}=\frac{1}{N}\sum_{n=1}^NI_n$, with $N=2L^2$, is given by averaging over the whole energy spectrum. The value of IPR saturates to a finite value for a localized state in this 2D system but scales as $L^{-2}$ and approaches zero in the thermodynamic limit for a delocalized state.
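A compact sketch of this diagnostic, reusing the `hamiltonian` helper above, reads as follows; the site density $\phi_\textbf{x}$ is obtained by summing $|\psi|^2$ over the two spin components.

```python
# Sketch of the right-eigenstate IPR defined above: phi_x is the on-site density of a
# right eigenstate (summed over spin), and I_n = sum_x phi_x^2 / (sum_x phi_x)^2.
import numpy as np

def ipr_right(H, L):
    _, R = np.linalg.eig(H)
    dens = np.abs(R) ** 2
    phi = dens.reshape(L * L, 2, -1).sum(axis=1)            # site densities, one column per state
    return (phi ** 2).sum(axis=0) / phi.sum(axis=0) ** 2    # I_n for every eigenstate n

def mean_ipr(L, m, W, gamma, Gamma, n_dis=10):
    return np.mean([ipr_right(hamiltonian(L, m, W, gamma, Gamma, rng=s), L).mean()
                    for s in range(n_dis)])

print(mean_ipr(L=12, m=2.2, W=0.5, gamma=0.0, Gamma=0.0))   # weak disorder: small mean IPR
print(mean_ipr(L=12, m=2.2, W=6.0, gamma=0.0, Gamma=0.0))   # strong disorder: markedly larger
```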
Because $I_n$ in Eq. (\[IPR\]) measures the density distribution of a single eigenstate and does not involve the non-orthogonality of different right eigenstates, the IPR can be used to capture the localization properties of the right eigenstates of the non-Hermitian Hamiltonian $H$ in Eq. (\[H\]) (see examples for 1D non-Hermitian systems in Refs. [@Mejia-Cortes2015; @QBZeng2017; @Alvarez2018; @PWang2019; @DWZhang2019; @Longhi2019; @HJiang2019; @QBZeng2020; @JHou2019]). Furthermore, one can define the biorthogonal IPR \[see Eq. (\[BIPR\])\] from both right and left eigenstates to study the non-Hermitian effect on the localization properties of non-Hermitian disordered systems [@ZGong2018; @PWang2019; @XWLuo2019].
In the Hermitian limit of this model with $\gamma=0$, $\Gamma=0$ and $m=2.2$, Fig. \[fig5\](a) shows the corresponding $\bar{I}$ as a function of $W$ under both OBCs and periodic boundary conditions (PBCs). The value of $\bar{I}$ increases with the enhancement of the disorder strength $W$, which implies that the initial extended bulk states are gradually localized by the growing disorder in both boundary conditions and totally localized in the TAI region. Although $\bar{I}$s under PBCs are less sensitive to system size $L$ than those under OBCs, $\bar{I}$s under these two boundary conditions approach each other in the large-$L$ limit. $I_n$s as a function of $W$ in the whole energy spectrum under OBCs are plotted in Fig. \[fig5\](b). It is clear that the eigenstates in the middle of the spectrum show small values of the IPR and these topologically protected zero-mode eigenstates are robust to disorder.
For the nonreciprocal case in the non-Hermitian model with $\gamma=0.75$, $\Gamma=0$, and $m=3$, the result of $\bar{I}$ in Fig. \[fig5\](c) indicates that the localization of this non-Hermitian system is sensitive to the boundary condition. The value of $\bar{I}$ under OBCs is nonzero even in the clean limit $W=0$ due to the non-Hermitian skin effect [@SYao2018; @HJiang2019], where most bulk states prefer to localize in one corner of the 2D lattice \[see Fig. \[fig6\](d) for an example\]. Figure \[fig5\](d) shows the results of $I_n$ in the whole energy spectrum as a function of the disorder strength $W$. It is clear that $I_n$s of some eigenstates are non-zero even in the clean limit $W=0$ and are localized. For the non-Hermitian gain-and-loss case with $\gamma=0$, $\Gamma=0.25$, and $m=2.2$ in the model, the corresponding disorder-averaged IPR in Figs. \[fig5\](e) and \[fig5\](f) are similar to the Hermitian results in Figs. \[fig5\](a) and \[fig5\](b). This indicates that the non-Hermitian gain-and-loss just slightly affect the localization properties of the system under OBCs and PBCs.
![(Color online) [ (a) and (b) The nonreciprocal case of the non-Hermitian model with $\gamma=0.75$, $\Gamma=0$, and $m=3$. (c) and (d) The gain-and-loss case of the non-Hermitian model with $\gamma=0$, $\Gamma=0.25$, and $m=2.2$. (a) and (c) Blue dashed lines and blue solid lines denote $\bar{I}_B$ as a function of the disorder strength $W$ under OBCs and PBCs, respectively. (b) and (d) $I_{nB}$ in the whole energy spectrum as a function of $W$ under OBCs.]{} []{data-label="fig6"}](fig6.pdf){width="48.00000%"}
[In order to further understand the influence of the two kinds of non-Hermiticities on the localization properties, we calculate the disorder-averaged biorthogonal IPR defined under biorthogonal eigenstates [@ZGong2018; @PWang2019; @XWLuo2019] $$\label{BIPR}
I_{nB}=\overline{\frac{\sum_{\textbf{x}}|\tilde{\phi}_\textbf{x}^{(s,n)}|^2}{(\sum_{\textbf{x}}|\tilde{\phi}_\textbf{x}^{(s,n)}|)^2}},$$ where $ \tilde{\phi}_\textbf{x}^{(s,n)}={_s}\bra{nL}\textbf{\^{x}}\ket{nR}_s$. The mean biorthogonal IPR is then given by $\bar{I}_B=\frac{1}{N}\sum_{n=1}^NI_{nB}$ with $N=2L^2$. For the nonreciprocal and gain-and-loss cases, we calculate the (mean) biorthogonal IPR with the results shown in Fig. \[fig6\]. For the nonreciprocal case with $\gamma=0.75$, $\Gamma=0$, and $m=3$ shown in Figs. \[fig6\](a) and \[fig6\](b), the values of the (mean) biorthogonal IPR under OBCs are small for weak disorders, which is different from those of the right-eigenstate (mean) IPR in Figs. \[fig5\](c) and \[fig5\](d). This is due to the fact that the biorthogonal density distributions under OBCs do not suffer from the skin effect [@PWang2019; @XWLuo2019]. The right and left eigenstates under OBCs suffer the non-Hermitian skin effect, which can be understood from the similarity transformations $\ket{\tilde{n}}_s=S^{-1}\ket{nR}_s$ and $\ket{\tilde{n}}_s=S\ket{nL}_s$, with $\ket{\tilde{n}}_s$ being the eigenstates of the corresponding Hermitian system. The corresponding biorthogonal density distribution is $ \tilde{\phi}_\textbf{x}^{(s,n)}={_s}\bra{\tilde{n}}S^{-1}\textbf{\^{x}}S\ket{\tilde{n}}_s={_s}\bra{\tilde{n}}\textbf{\^{x}}\ket{\tilde{n}}_s$, which is free of the non-Hermitian skin effect. Under PBCs, due to the absence of the non-Hermitian skin effect in this case, the biorthogonal IPR in Fig. \[fig6\](a) is almost the same as the IPR of right eigenstates shown in Fig. \[fig5\](c). For the non-Hermitian gain-and-loss case, there is no non-Hermitian skin effect of right and left eigenstates under both OBCs and PBCs. The corresponding biorthogonal IPRs shown in Figs. \[fig6\](c) and \[fig6\](d) are similar to the IPRs of right eigenstates displayed in Figs. \[fig5\](e) and \[fig5\](f). From these results, one can find that the IPR of right eigenstates and the biorthogonal IPR describe similar localization properties whenever the non-Hermitian skin effect is absent, while only the biorthogonal IPR remains insensitive to the skin effect induced by the nonreciprocal hopping under OBCs.]{}
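The biorthogonal variant differs from the previous sketch only in that the on-site weight is built from both left and right eigenvectors; a minimal implementation (same conventions as above) is:

```python
# Sketch of the biorthogonal IPR: phi~_x^{(n)} = <nL| x |nR> summed over the two spin
# components, which is complex in general, hence the modulus in the IPR ratio.
import numpy as np

def ipr_biorthogonal(H, L):
    _, R = np.linalg.eig(H)
    Linv = np.linalg.inv(R)                            # row n = biorthogonal left eigenvector <nL|
    prod = Linv.T * R                                  # element (x,spin; n) = <nL|x,spin><x,spin|nR>
    phit = np.abs(prod.reshape(L * L, 2, -1).sum(axis=1))
    return (phit ** 2).sum(axis=0) / phit.sum(axis=0) ** 2
```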
![(Color online) The averaged density distribution of the time-evolved state $\ket{\phi(t=10)}$ with the initial state $\ket{\phi(t=0)}$ being a single particle placed at the center cell of the 2D lattice. The parameters are chosen to be the Hermitian limit with $\gamma=0$, $\Gamma=0$, and $m=2.2$ for (a) $W=2.5$ and (b) $W=0.5$; the nonreciprocal case of the non-Hermitian model with $\gamma=0.25$, $\Gamma=0$ and $m=3$ for (c) $W=3.2$ and (d) $W=1.25$, respectively; the gain-and-loss case of the non-Hermitian model with $\gamma=0$, $\Gamma=0.25$ and $m=2.2$ for (e) $W=2.5$ and (f) $W=0.5$, respectively. (a), (c), (e) and (b), (d), (f) correspond to topological nontrivial and trivial regions (see phase diagrams in Figs. \[fig2\], \[fig3\] and \[fig4\]). []{data-label="fig7"}](fig7.pdf){width="48.00000%"}
To further understand the localization properties in different topological regions, we also study the expansion of a single particle initialized at the center cell of the 2D lattice under OBCs [@Lahini2010]. The initial state is prepared as $\ket{\phi(t=0)}=\frac{1}{\sqrt{2}}(c_{\textbf{x}',\uparrow}^\dagger+c_{\textbf{x}',\downarrow}^\dagger)\ket{0}$ with the system size $L=21$ and $\textbf{x}'=((L+1)/2, (L+1)/2)$. The evolution of the initial state is governed by the Schrödinger equation $i\partial_t\ket{\phi(t)}=H\ket{\phi(t)}$, and the intensity of the time-dependent state $\ket{\phi(t)}$ is normalized explicitly for the non-Hermitian Hamiltonian $H$, for which the evolution is no longer unitary. Figure \[fig7\] shows the density distribution of $\ket{\phi(t=10)}$ for the Hermitian and two kinds of non-Hermitian cases, which is averaged over 100 disorder realizations. For strong disorders in the topologically nontrivial regions \[Figs. \[fig7\](a), \[fig7\](c), and \[fig7\](e)\] with localized bulk states, the propagation of the particle is highly impeded by the disorder, and the density distribution is localized around the center cell (i.e., the initial position). In contrast, the distribution probability of the particle spreads into the bulk under weak disorders in the topologically trivial cases in Figs. \[fig7\](b), \[fig7\](d), and \[fig7\](f). Notably, the distribution probability of the particle shown in Fig. \[fig7\](d) reveals the tendency of the expansion into the top right-hand corner of the 2D lattice due to the non-Hermitian skin effect [@SYao2018; @FSong2019; @Kunst2018]. The interplay between disorder and nonreciprocal non-Hermiticity can also be observed in the single-particle expansion dynamics. Note that when $\gamma$ is large enough, the density tends to congregate in one corner even under strong disorder, so that the density distribution reveals the skin effect rather than the localization (not shown here).
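A sketch of this expansion protocol is given below (our discretization choices: the non-unitary propagator is applied through the eigendecomposition of $H$, the state is renormalized at the observation time, and the parameters are those of Fig. \[fig7\](d)); the drift of the centre of mass away from the central cell is a simple signature of the skin effect discussed above.

```python
# Sketch of the single-particle expansion: place the particle at the centre cell in the
# equal-weight spin superposition, evolve with exp(-iHt) via the eigendecomposition of the
# non-Hermitian H, renormalize explicitly, and average the site density over disorder.
import numpy as np

def evolved_density(H, L, t=10.0):
    E, R = np.linalg.eig(H)
    psi0 = np.zeros(2 * L * L, dtype=complex)
    i0 = 2 * ((L // 2) * L + L // 2)                   # centre cell (0-indexed)
    psi0[i0] = psi0[i0 + 1] = 1 / np.sqrt(2)
    psi_t = R @ (np.exp(-1j * E * t) * np.linalg.solve(R, psi0))
    psi_t /= np.linalg.norm(psi_t)                     # explicit normalization (non-unitary evolution)
    return (np.abs(psi_t) ** 2).reshape(L * L, 2).sum(axis=1).reshape(L, L)

L = 21
rho = np.mean([evolved_density(hamiltonian(L, 3.0, 1.25, 0.25, 0.0, rng=s), L)
               for s in range(10)], axis=0)            # parameters of Fig. 7(d); helper above
X, Y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
print((rho * X).sum(), (rho * Y).sum())                # centre of mass drifts from (10, 10) toward a corner
```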
\[sec5\]Conclusion and discussion
=================================
In summary, we have investigated the interplay of topology and disorder in the 2D disordered Chern-insulator model with two types of non-Hermiticities from the nonreciprocal hopping and on-site gain-and-loss effects. We have calculated the topological phase diagrams with the open-bulk Chern number and Bott index as two topological invariants in real space. Based on the numerical results, we have revealed that the nonreciprocal hopping can enlarge the topological regions and the disorder-induced TAI can exist under both kinds of non-Hermiticities. Moreover, we have studied the localization properties of the 2D system by calculating the inverse participation ratio and the expansion evolution of a single particle.
The findings in this paper can be applied to other 2D topological models, such as the Hofstadter model and the Haldane model with additional disorder and non-Hermiticities. The three-dimensional TAI [@HMGuo2010] may also be extended to the non-Hermitian cases, where the interplay of the mobility edge and non-Hermitian topology may give rise to unknown physics. [ The interplay of disorder and topology could be further explored in non-Hermitian higher-order topological insulators [@Yuce2019; @TLiu2019; @Lee2019], such as in a non-Hermitian 2D SSH model [@Yuce2019].]{} In addition, it would be interesting to consider the cases when the disorder becomes non-Hermitian, which may lead to a TAI induced by purely non-Hermitian disorders [@XWLuo2019]. In future work, feasible schemes for experimental realization and detection of the proposed non-Hermitian TAI with some artificial systems will be studied.
We thank Prof. S.-L. Zhu for helpful discussions. This work was supported by the NKRDP of China (Grant No. 2016YFA0301800), the NSAF (Grant No. U1830111 and No. U1801661), the Key-Area Research and Development Program of Guangdong Province (Grant No. 2019B030330001), and the Key Program of Science and Technology of Guangzhou (Grant No. 201804020055).
[^1]: [email protected]
[^2]: [email protected]
|
---
abstract: 'We prove the existence of automorphisms of $\mathbb C^k$, $k\ge 2$, having an invariant, non-recurrent Fatou component biholomorphic to $\mathbb C \times (\mathbb C^\ast)^{k-1}$ which is attracting, in the sense that all the orbits converge to a fixed point on the boundary of the component. As a corollary, we obtain a Runge copy of $\mathbb C \times (\mathbb C^\ast)^{k-1}$ in $\mathbb C^k$. The constructed Fatou component also avoids $k$ analytic discs intersecting transversally at the fixed point.'
address:
- |
F. Bracci: Dipartimento di Matematica\
Università di Roma Tor Vergata \
Via Della Ricerca Scientifica 1, 00133\
Roma, Italy
- 'J. Raissy: Institut de Mathématiques de Toulouse; UMR5219, Université de Toulouse; CNRS, UPS IMT, F-31062 Toulouse Cedex 9, France.'
- 'B. Stensønes: Department of Mathematics, Norwegian University of Science and Technology, Alfred Getz vei 1, Sentralbygg II 950, Trondheim, Norway'
author:
- 'Filippo Bracci$^{\diamondsuit\star}$'
- 'Jasmin Raissy$^\spadesuit$'
- 'Berit Stensønes$^{\clubsuit\star}$'
title: 'Automorphisms of ${\mathbb C}^k$ with an invariant non-recurrent attracting Fatou component biholomorphic to ${\mathbb C}\times ({\mathbb C}^\ast)^{k-1}$'
---
[^1]
[^2]
[^3]
[^4]
Introduction {#introduction .unnumbered}
============
Let $F$ be a holomorphic endomorphism of ${\mathbb C}^k$, $k\ge 2$. In the study of the dynamics of $F$, that is of the behavior of its iterates, a natural dichotomy is given by the division of the space into the [*Fatou set*]{} and the [*Julia set*]{}. The Fatou set is the largest open set where the family of iterates is locally normal, that is the set formed by all points having an open neighborhood where the restriction of the iterates of the map forms a normal family. The Julia set is the complement of the Fatou set and is the part of the space where chaotic dynamics happens. A [*Fatou component*]{} is a connected component of the Fatou set.
A Fatou component $\Omega$ for a map $F$ is called [*invariant*]{} if $F(\Omega)=\Omega$.
We call an invariant Fatou component $\Omega$ for a map $F$ [*attracting*]{} if there exists a point $p\in \overline{\Omega}$ with $\lim_{n\to \infty}F^n(z)=p$ for all $z\in \Omega$. Note that, in particular, $p$ is a fixed point for $F$. If $p\in \Omega$ then $\Omega$ is called [*recurrent*]{}, and it is called [*non-recurrent*]{} if $p\in \partial \Omega$.
Every attracting recurrent Fatou component of a holomorphic automorphism $F$ of ${\mathbb C}^k$ is biholomorphic to ${\mathbb C}^k$. In fact it is the global basin of attraction of $F$ at $p$, which is an attracting fixed point, that is all eigenvalues of $dF_p$ have modulus strictly less than $1$ (see [@PVW] and [@RR]).
As a consequence of the results obtained by T. Ueda in [@U0] and of Theorem 6 in [@LP] by M. Lyubich and H. Peters, every non-recurrent invariant attracting Fatou component $\Omega$ of a [*polynomial*]{} automorphism of ${\mathbb C}^2$ is biholomorphic to ${\mathbb C}^2$. L. Vivas and the third named author in [@SV] produced examples of automorphisms of ${\mathbb C}^3$ having an attracting non-recurrent Fatou component biholomorphic to ${\mathbb C}^2\times{\mathbb C}^*$.
The main result of our paper is the following:
\[main\] Let $k\ge 2$. There exist holomorphic automorphisms of ${\mathbb C}^k$ having an invariant, non-recurrent, attracting Fatou component biholomorphic to ${\mathbb C}\times ({\mathbb C}^*)^{k-1}$.
In particular, this shows that there exist (non polynomial) automorphisms of ${\mathbb C}^2$ having a non-simply connected attracting non-recurrent Fatou component. Our construction also shows that the invariant non-recurrent attracting Fatou component biholomorphic to ${\mathbb C}\times({\mathbb C}^*)^{k-1}$ avoids $k$ analytic discs which intersect transversally at the fixed point. Moreover as a corollary of Theorem \[main\] and [@U0 Proposition 5.1], we obtain:
Let $k\ge 2$. There exists a biholomorphic image of ${\mathbb C}\times({\mathbb C}^*)^{k-1}$ in ${\mathbb C}^k$ which is Runge.
The existence of an embedding of ${\mathbb C}\times{\mathbb C}^*$ as a Runge domain in ${\mathbb C}^2$ was a long-standing open question, positively settled by our construction. While a preliminary version of this manuscript was circulating, F. Forstnerič and E. F. Wold constructed in [@FrancErlend] other examples of Runge embeddings of ${\mathbb C}\times {\mathbb C}^\ast$ in ${\mathbb C}^2$ (which do not arise from basins of attraction of automorphisms) using completely different techniques.
Notice that, thanks to the results obtained by J. P. Serre in [@Serre] (see also [@Horm Theorem 2.7.11]), every Runge domain $D\subset{\mathbb C}^k$ satisfies $H^q(D)=0$ for all $q\ge k$. Therefore the Fatou component in Theorem \[main\] has the highest possible admissible non-vanishing cohomological degree.
The proof of Theorem \[main\] is rather involved and we give an outline in the next section. In the rest of the paper, we will first go through the proof in the case $k=2$, and then show the modifications needed for all dimensions.
The proof relies on a mixture of known techniques and new tools. We first choose a suitable germ having a local basin of attraction with the proper connectivity and extend it to an automorphism $F$ of ${\mathbb C}^k$. Using more or less standard techniques we extend the local basin to a global basin of attraction $\Omega$ of $F$ and then we define a Fatou coordinate. Next, we exploit a new construction to prove that the Fatou coordinate is in fact a fiber bundle map, allowing us to show that $\Omega$ is biholomorphic to ${\mathbb C}\times ({\mathbb C}^*)^{k-1}$. The final rather subtle point is to show that $\Omega$ is indeed a Fatou component. We have to introduce a completely new argument, which is based on Pöschel’s results in [@Po] and detailed estimates for the Kobayashi metric on certain domains.
[**Acknowledgements.**]{} Part of this paper was written while the first and the third named authors were visiting the Center for Advanced Studies in Oslo for the 2016-17 CAS project Several Complex Variables and Complex Dynamics. They both thank CAS for the support and for the wonderful atmosphere experienced there.
The authors also thank Han Peters for some useful conversations, and the anonymous referee, whose comments and remarks improved the presentation of the original manuscript.
Outline of the proof in dimension 2
===================================
For the sake of simplicity, we give the outline of the proof for $k=2$. We start with a germ of biholomorphism at the origin of the form $$\label{form-intro}
F_N(z,w)=\left(\lambda z\left(1 - \frac{zw}{2}\right), \overline{\lambda}w \left(1 - \frac{zw}{2}\right)\right),$$ where $\lambda\in {\mathbb C}$, $|\lambda|=1$, is not a root of unity and satisfies the Brjuno condition . Thanks to a result of B. J. Weickert [@W1] and F. Forstnerič [@F], for any large $l\in {\mathbb N}$ there exists an automorphism $F$ of ${\mathbb C}^2$ such that $$\label{Eq-motiv_2}
F(z,w)-F_N(z,w)=O(\|(z,w)\|^l).$$ Maps of this kind are a particular case of the so-called [*one-resonant*]{} germs. Recall that a germ of biholomorphism $F$ of ${\mathbb C}^2$ at the origin is called [*one-resonant*]{} if, denoting by $\lambda_1, \lambda_2$ the eigenvalues of its linear part, there exists a fixed multi-index $P = (p_1, p_2)\in{\mathbb N}^2$ with $p_1+p_2\ge 2$ such that all the resonances $\lambda_j - \lambda_1^{m_1}\lambda_2^{m_2}=0$, for $j=1,2$, are precisely of the form $\lambda_j = \lambda_j\cdot\lambda_1^{kp_1}\lambda_2^{kp_2}$ for some $k\ge 1$.
The local dynamics of one-resonant germs has been studied by the first named author with D. Zaitsev in [@BZ] (see also [@BRZ]).
Let $$B:=\{(z,w)\in {\mathbb C}^2: zw\in S, |z|<|zw|^\beta, |w|<|zw|^\beta\},$$ where $\beta\in (0,\frac{1}{2})$ and $S$ is a small sector in ${\mathbb C}$ with vertex at $0$ around the positive real axis. In [@BZ] (see also Theorem \[Thm:BZ\]) it has been proved that for sufficiently large $l$ the domain $B$ is forward invariant under $F$, the origin is on the boundary of $B$ and $\lim_{n\to \infty}F^n(p)=0$ for all $p\in B$. Moreover, setting $x=zw,y=w$ (which are coordinates on $B$) the domain becomes $\{(x,y)\in {\mathbb C}\times{\mathbb C}^*: x\in S, |x|^{1-{\beta}}<|y|<|x|^{\beta}\}$. Hence $B$ is doubly connected.
Now let $$\Omega:=\cup_{n\in {\mathbb N}}F^{-n}(B).$$ The domain $\Omega$ is connected but not simply connected.
For a point $(z,w)\in {\mathbb C}^2$, let $(z_n,w_n):=F^n(z,w)$. In Theorem \[characterized Omega\] we show that $$\Omega=\{(z,w)\in {\mathbb C}^2\setminus\{(0,0)\}: \lim_{n\to \infty}\|(z_n,w_n)\|=0, \quad |z_n|\sim |w_n|\},$$ and moreover, if $(z,w)\in \Omega$ then $|z_n|\sim |w_n|\sim \frac{1}{\sqrt{n}}$.
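These asymptotics are easy to observe numerically. The sketch below is only illustrative and is not used in the proofs: it iterates the model map $F_N$ itself (rather than a global automorphism $F$), with the golden mean as a concrete Brjuno choice of rotation number, and prints $nu_n$, $\sqrt{n}\,|z_n|$ and $\sqrt{n}\,|w_n|$ along an orbit starting in $B$.

```python
# Sketch: iterate F_N(z,w) = (lam*z*(1-zw/2), conj(lam)*w*(1-zw/2)) and watch
# n*u_n -> 1 while sqrt(n)*|z_n| and sqrt(n)*|w_n| stay bounded away from 0 and infinity.
import cmath, math

theta = (math.sqrt(5) - 1) / 2                  # golden-mean rotation number (Brjuno)
lam = cmath.exp(2j * math.pi * theta)

def FN(z, w):
    f = 1 - z * w / 2
    return lam * z * f, lam.conjugate() * w * f

z, w = 0.1, 0.1                                 # u_0 = zw is small, real and positive
for n in range(1, 200001):
    z, w = FN(z, w)
    if n in (10, 100, 1000, 10000, 100000, 200000):
        u = z * w
        print(n, n * u, math.sqrt(n) * abs(z), math.sqrt(n) * abs(w))
```

The printed values of $nu_n$ approach $1$ while the rescaled moduli remain of order one, in agreement with the characterization above.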
Having a characterization of the behavior of the orbits of a map on a completely invariant domain is however in general not enough to state that such a domain is the whole Fatou component, as this trivial example illustrates: the automorphism $(z,w)\mapsto (\frac{z}{2}, \frac{w}{2})$ has the completely invariant domain ${\mathbb C}^\ast\times {\mathbb C}^\ast$ which is not a Fatou component but $|z_n|\sim |w_n|$.
In order to prove that $\Omega$ coincides with the Fatou component $V$ containing it, we exploit the condition that $\lambda$ is also Brjuno (see Section \[FGB\] for details). In this case there exist two $F$-invariant analytic discs, tangent to the axes, where $F$ acts as an irrational rotation. In particular, one can choose local coordinates at $(0,0)$, which we may assume to be defined on the unit ball ${\mathbb B}$ of ${\mathbb C}^2$ and $B\subset {\mathbb B}$, such that $\{z=0\}$ and $\{w=0\}$ are not contained in $V\cap {\mathbb B}$. Let ${\mathbb B}_\ast:={\mathbb B}\setminus\{zw=0\}$. Now, if $V\neq \Omega$, we can take $p_0\in\Omega$, $q_0\in V\setminus \Omega$, and $Z$ a connected open set containing $p_0$ and $q_0$ and such that $\overline Z\subset V$. Moreover, since $\{F^n\}$ converges uniformly to the origin on $\overline Z$, up to replacing $F$ by one of its iterates, we can assume that the forward $F$-invariant set $W :=\cup_{n\in{\mathbb N}}F^n(Z)$ satisfies $W\subset {\mathbb B}_\ast$. By construction, for every $\delta >0$ we can find $p\in Z\cap \Omega$ and $q\in Z\cap (V\setminus \Omega)$ such that $k_W(p,q)\le k_Z(p,q) <\delta$, where $k_W$ is the Kobayashi (pseudo)distance of $W$. By the properties of the Kobayashi distance, for every $n\in{\mathbb N}$ we have $$k_{{\mathbb B}_\ast}(F^n(p), F^n(q))\leq k_W(p, q)<\delta.$$ Also, if $(z_n,w_n):=F^n(p)$, $(x_n,y_n):=F^n(q)$, then $$k_{{\mathbb D}^\ast}(z_n,x_n)<\delta, \quad k_{{\mathbb D}^\ast}(w_n,y_n)<\delta,$$ where ${\mathbb D}^\ast$ is the punctured unit disc. Since $q\not\in\Omega$, $F^n(q)\not \in B$ for all $n\in{\mathbb N}$, and so (by the above mentioned characterization of orbits’ behavior of points in $\Omega$) we can ensure that, up to passing to a subsequence, we have $|x_n|\not\sim |y_n|$. By the triangle inequality and properties of the Kobayashi distance of ${\mathbb D}^\ast$, the shape of $B$ forces $k_{{\mathbb D}^\ast}(x_n,y_n)$ to be bounded from below by a constant depending only on $\beta$, leading to a contradiction (see Theorem \[Fatou-Omega\] for details).
Finally, in order to show that $\Omega$ is biholomorphic to ${\mathbb C}\times {\mathbb C}^\ast$ we construct a fibration from $\Omega$ to ${\mathbb C}$ in such a way that $\Omega$ is a line bundle minus the zero section over ${\mathbb C}$, hence, trivial. In fact, for this aim we do not need the Brjuno condition on $\lambda$.
We first prove in Section \[local-coordi\] the existence of a univalent map $Q$ on $B$ which intertwines $F$ on $B$ with a simple overshear. The first component $\psi$ of $Q$ is essentially the Fatou coordinate of the projection of $F$ onto the $zw$-plane and satisfies $$\psi \circ F=\psi+1.$$ The second component $\sigma$ is the local uniform limit on $B$ of the sequence $\{\sigma_n\}$ defined by $$\sigma_n(z,w):= \lambda^n \pi_2(F^n(z,w)) \exp\left(\frac{1}{2}{\sum_{j=0}^{n-1} \frac{1}{\psi(z,w)+j}}\right),$$ and satisfies the functional equation $$\sigma \circ F=\overline{\lambda}e^{-\frac{1}{2\psi}}\sigma.$$
Next, using dynamics, we extend such a map to a univalent map $G$ defined on a domain $\Omega_0\subset\Omega$, and we use it to prove that $\Omega$ is a line bundle minus the zero section over ${\mathbb C}$. Since all line bundles over ${\mathbb C}$ are globally holomorphically trivial, we obtain that $\Omega$ is biholomorphic to ${\mathbb C}\times{\mathbb C}^\ast$ (see Section \[topology\] for details).
We will now go through the proof in great detail in dimension $2$ and in the last section we will give the changes needed for the higher dimensional case.
Notations and conventions in ${\mathbb C}^2$ {#notations-and-conventions-in-mathbb-c2 .unnumbered}
============================================
We set up here some notations and conventions we shall use throughout the paper.
We let $\pi:{\mathbb C}^2\to {\mathbb C}$, $\pi_1:{\mathbb C}^2\to {\mathbb C}$, $\pi_2:{\mathbb C}^2\to {\mathbb C}$ be defined by $$\pi(z,w)=zw,\quad \pi_1(z,w)=z,\quad \pi_2(z,w)=w.$$ If $F:{\mathbb C}^2\to {\mathbb C}^2$ is a holomorphic map, we denote by $F^n$ the $n$-th iterate of $F$, $n\in {\mathbb N}$, defined by induction as $F^n=F\circ F^{n-1}$, $F^0={\sf id}$. Moreover, for $(z,w)\in {\mathbb C}^2$ and $n\in {\mathbb N}$, we let $$u_n:=\pi(F^n(z,w)), \quad U_n:=\frac{1}{u_n}, \quad z_n:=\pi_1(F^n(z,w)), \quad w_n:=\pi_2(F^n(z,w)).$$
If $f(n)$ and $g(n)$ are real positive functions of $n\in {\mathbb N}$, we write $f(n)\sim g(n)$, if there exist $0<c_1<c_2$ such that $c_1 f(n)<g(n)<c_2 f(n)$ for all $n\in {\mathbb N}$. Moreover, we use the Landau little/big “O” notations, namely, we write $f(n)=O(g(n))$, if there exists $C>0$ such that $f(n)\leq C g(n)$ for all $n\in {\mathbb N}$, while we write $f(n)=o(g(n))$, if $\lim_{n\to \infty}\frac{f(n)}{g(n)}=0$.
The local basin of attraction $B$ {#localB}
=================================
In this section we recall the construction of the local basin of attraction, and we provide the local characterization that we use in our construction.
Let $F_N$ be a germ of biholomorphism of ${\mathbb C}^2$, fixing the origin, of the form $$\label{Expression FN}
F_N(z,w)=\left(\lambda z\left(1 - \frac{zw}{2}\right), \overline{\lambda}w \left(1 - \frac{zw}{2}\right)\right),$$ where $\lambda\in {\mathbb C}$, $|\lambda|=1$, is not a root of unity.
For $\theta\in (0,\frac{\pi}{2})$ and $R>0$ we let $$S(R,\theta):=\left\{\zeta\in {\mathbb C}: \left|\zeta-\frac{1}{2R}\right|<\frac{1}{2R}, \ \ |{\sf Arg} (\zeta)|<\theta\right\}.$$ Also, we let $$H(R,\theta):=\{\zeta\in {\mathbb C}: \Re \zeta>R, |{\sf Arg}(\zeta)|<\theta\}.$$
D. Zaitsev and the first named author proved that any small variation of $F_N$ admits a local basin of attraction. In order to state the result in our case, let us introduce some sets:
For $\beta\in (0,\frac{1}{2})$ we let $$W(\beta):=\{(z,w)\in {\mathbb C}^2: |z|<|zw|^\beta,\ \ |w|<|zw|^\beta\}.$$ For every $R\geq 0$, $\beta\in (0,\frac{1}{2})$ and $\theta\in (0,\frac{\pi}{2})$, we let $$B(\beta, \theta, R):=\{(z,w)\in W(\beta): zw\in S(R, \theta)\}.$$
In [@BZ Theorem 1.1] it is proven:
\[Thm:BZ\] Let $F_N$ be a germ of biholomorphism at $(0,0)$ of the form (\[Expression FN\]). Let $\beta_0\in (0,1/2)$ and let $l\in {\mathbb N}$, $l\geq 4$ be such that $\beta_0(l+1)\geq 4$. Then for every $\theta_0\in(0, \pi/2)$ and for any germ of biholomorphism $F$ at $(0,0)$ of the form $$F(z,w)=F_N(z,w)+O(\|(z,w)\|^l)$$ there exists $R_0>0$ such that the (non-empty) open set $B_{R_0}:=B(\beta_0,\theta_0, R_0)$ is a uniform local basin of attraction for $F$, that is $F(B_{R_0})\subseteq B_{R_0}$, and $\lim_{n\to \infty}F^n(z,w)=(0,0)$ uniformly in $(z,w)\in B_{R_0}$.
Let $F(z,w)=F_N(z,w)+O(\|(z,w)\|^l)$ be as in Theorem \[Thm:BZ\] and fix $\theta_0\in(0, \pi/2)$. We set $$B:=B_{R_0}=B(\beta_0,\theta_0, R_0).$$
In the following, we shall use some properties of $B$, that we prove below. We start with a lemma, allowing us to characterize the pre-images of $B$.
\[go-good-down\] Let $F$ and $B$ be as in Theorem \[Thm:BZ\]. Let $\beta\in (0,\frac{1}{2})$ be such that $\beta(l+1)> 2$ and $(z,w)\in {\mathbb C}^2$ such that $(z_n,w_n)\to (0,0)$ as $n\to \infty$. If there exists $n_0\in {\mathbb N}$ such that $(z_n,w_n)\in W(\beta)$ for all $n\geq n_0$, then
1. $\lim_{n\to \infty}nu_n=1$ and $\lim_{n\to \infty}\frac{u_n}{|u_n|}=1$ (in particular, $|u_n|\sim \frac{1}{n}$),
2. $|z_n| \sim n^{-1/2}$ and $|w_n| \sim n^{-1/2}$,
3. for every $\gamma\in (0,1/2)$ there exists $n_\gamma\in {\mathbb N}$ such that $(z_n,w_n)\in W(\gamma)$ for all $n\geq n_\gamma$.
In particular, $(z_n,w_n)\in B$ eventually.
We can locally write $F$ in the form $$\label{automFp-local}
F(z,w)=\left(\lambda z\left (1-\frac{zw}{2} \right)+R_l^1(z,w), \overline{\lambda} w\left (1-\frac{zw}{2} \right)+ R_l^2(z,w)\right),$$ where $R_l^j(z,w)=O(\|(z,w)\|^l)$, $j=1,2$.
Since $(z_n,w_n)\to (0,0)$, we have $$U_{n+1}=U_n\left(1+\frac{1}{U_n}+O\left(\frac{1}{|U_n|^2}, |U_n|\|(z_n,w_n)\|^{l+1}\right)\right).$$ For $n\geq n_0$ we have $(z_n,w_n)\in W(\beta)$, hence $\|(z_n,w_n)\|^{l+1}$ is at most an $O(|u_n|^{\beta(l+1)})= O\left(\frac{1}{|U_n|^{\beta(l+1)}}\right)$, and by hypothesis $\beta(l+1)> 2$. Hence, $$\label{eq:U-n-n1}
U_{n+1}=U_n\left(1+\frac{1}{U_n}+O\left(\frac{1}{|U_n|^{\beta(l+1)-1}}, \frac{1}{|U_n|^2}\right)\right).$$ Fix $\epsilon>0$. Let $c:=1+\epsilon$. Notice that, by \eqref{eq:U-n-n1}, there exists $n_c\geq n_0$ such that for all $n\geq n_c$, $\left|U_{n+1}-U_n-1\right|<(c-1)/c$. Arguing by induction on $n$, it easily follows that for all $n\geq n_c$ we have $$\label{eq:calim1}
\Re U_n\geq \Re U_{n_c}+\frac{n-n_c}{c},$$ and $$\label{eq:calim2}
|U_n|\leq |U_{n_c}|+c(n-n_c).$$ Letting $\epsilon\to 0^+$ we obtain that $$\label{U-n-goes0}
\lim_{n\to \infty} \frac{\Re U_n}{n}=\lim_{n\to \infty} \frac{| U_n|}{n}=1.$$ In particular, this means that $\lim_{n\to \infty} n\Re u_n=\lim_{n\to \infty}n |u_n|=1$. Hence, $\lim_{n\to \infty}\frac{|u_n|}{\Re u_n}=1$, which implies at once that $$\label{Eq:go-right-up}
\lim_{n\to \infty} \frac{\Im u_n}{\Re u_n}=0.$$ Hence statement (1) follows.
Arguing by induction, we have $$\label{forma-wn}
\begin{split}
z_{n+1}&=z_0\lambda^{n+1} \prod_{j=0}^n \Big(1-\frac{u_j}{2}\Big)+\sum_{j=0}^n R_l^1(z_j,w_j)\prod_{k=j+1}^n \lambda \Big(1-\frac{u_k}{2}\Big), \\
w_{n+1}&=w_0 \overline{\lambda}^{n+1} \prod_{j=0}^n \Big(1-\frac{u_j}{2}\Big)+\sum_{j=0}^n R_l^2(z_j,w_j)\prod_{k=j+1}^n \overline{\lambda} \Big(1-\frac{u_k}{2}\Big),
\end{split}$$ Therefore, $$\label{how-z-n}
|z_{n+1}|\leq |z_0| \prod_{j=0}^n \Big|1-\frac{u_j}{2}\Big|+\sum_{j=0}^n |R_l^1(z_j,w_j)|\prod_{k=j+1}^n \Big|1-\frac{u_k}{2}\Big|.$$
Taking into account statement (1), we have $$\begin{split}
\lim_{j\to \infty}(-2j)\log \Big|1-\frac{u_j}{2}\Big|
&=\lim_{j\to \infty}(-2j)\left(\frac{1}{2}\log \Big|1-\frac{u_j}{2}\Big|^2\right)\\
&=\lim_{j\to \infty}(-2j)\left( \frac{1}{8}|u_j|^2-\frac{1}{2}\Re u_j\right)=1.
\end{split}$$ Therefore, $$\label{Eq:prod1a}
\prod_{j=0}^n \Big|1-\frac{u_j}{2}\Big|=\exp\left(\sum_{j=0}^n \log \Big|1-\frac{u_j}{2}\Big|\right)\sim \exp\left(\sum_{j=1}^n -\frac{1}{2j}\right)\sim \frac{1}{\sqrt{n}}.$$ Moreover, since $(z_n,w_n)\in W(\beta)$ eventually, and $|R_l^1(z_j,w_j)|=O(\|(z_j,w_j)\|^l)$, it follows that there exist some constants $0<c\leq C$ such that $$|R_l^1(z_j,w_j)|\leq c |u_j|^{\beta l}\leq C j^{-\beta l}.$$ Hence, by \eqref{Eq:prod1a} we have for $j>1$ sufficiently large $$\begin{split}
|R_l^1(z_j,w_j)|\prod_{k=j+1}^n \Big|1-\frac{u_k}{2}\Big|
&=|R_l^1(z_j,w_j)|\exp\left(\sum_{k=j+1}^n\log \Big|1-\frac{u_k}{2}\Big| \right)\\
&\sim |R_l^1(z_j,w_j)|\exp\left(-\frac{1}{2}\sum_{k=j+1}^n\frac{1}{k} \right)\\
&\sim |R_l^1(z_j,w_j)| \frac{\sqrt{j}}{\sqrt{n}}\leq C\frac{j^{\frac{1}{2}-\beta l}}{\sqrt{n}}.
\end{split}$$ Since $\beta l-\frac{1}{2}>1$, it follows that there exists a constant (still denoted by) $C>0$ such that $$\sum_{j=0}^n |R_l^1(z_j,w_j)|\prod_{k=j+1}^n \Big|1-\frac{u_k}{2}\Big|\leq C \frac{1}{\sqrt{n}}.$$ Hence, from \eqref{how-z-n}, there exists a constant $C>0$ such that $$\label{Eq-z-n-goes}
|z_n|\leq C \frac{1}{\sqrt{n}}.$$ A similar argument for $w_n$, shows that $$\label{Eq-w-n-goes}
|w_n|\leq C \frac{1}{\sqrt{n}}.$$ By statement (1), it holds $|z_n|\cdot |w_n|=|u_n| \sim \frac{1}{n}$. Since $|z_n|\leq C\frac{1}{\sqrt{n}}$ and $|w_n|\leq C\frac{1}{\sqrt{n}}$ by \eqref{Eq-z-n-goes} and \eqref{Eq-w-n-goes}, it follows that, in fact, $|z_n|\sim \frac{1}{\sqrt{n}}$ and $|w_n|\sim \frac{1}{\sqrt{n}}$, proving statement (2).
Finally, by statement (2), there exist constants $c, C>0$ such that $|z_n|\leq C\frac{1}{\sqrt{n}}$ for all $n\in {\mathbb N}$ and $|u_n|\geq c\frac{1}{n}$. Fix $\gamma\in (0,1/2)$. Then for every $n$ large enough $$|z_n|\leq C \frac{1}{\sqrt{n}}\leq \frac{C}{c^{1/2}}|u_n|^{1/2}<|u_n|^\gamma.$$ Similarly, one can prove that $|w_n|<|u_n|^{\gamma}$. As a consequence, eventually $(z_n,w_n)$ is contained in $W(\gamma)$ for every $\gamma\in (0,1/2)$.
\[uniform-convergence-onB\] From the uniform convergence of $\{F^n\}$ to $(0,0)$ in $B$, and from the proof of the previous lemma, it follows that (1) and (2) in Lemma \[go-good-down\] are uniform in $B$.
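The asymptotics of Lemma \[go-good-down\] are easy to observe numerically. The following minimal sketch (which plays no role in any proof) iterates the model map $F_N$, that is, it assumes $R_l^1=R_l^2=0$, with an illustrative choice of $\lambda$ and of the initial point; for this symmetric starting point one observes $n|u_n|\to 1$ and $\sqrt{n}\,|z_n|,\ \sqrt{n}\,|w_n|\to 1$, as predicted by statements (1) and (2).

```python
# Numerical illustration of Lemma [go-good-down] for the model map F_N
# (remainders R_l^1 = R_l^2 = 0); the choice of lambda and of the starting
# point is an illustrative assumption, not taken from the paper.
import cmath, math

lam = cmath.exp(2j * math.pi * (math.sqrt(5) - 1) / 2)  # unimodular, not a root of unity

def F_N(z, w):
    u = z * w
    return lam * z * (1 - u / 2), lam.conjugate() * w * (1 - u / 2)

z, w = 0.1, 0.1   # u_0 = zw = 0.01 is real and positive, hence in S(R, theta) for suitable R
for n in range(1, 20001):
    z, w = F_N(z, w)
    if n in (10, 100, 1000, 10000, 20000):
        u = z * w
        print(n, abs(n * u), math.sqrt(n) * abs(z), math.sqrt(n) * abs(w))
```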
We shall also need the following local result concerning the topology of $B$:
\[Omega\] Let $F$ and $B$ be as in Theorem \[Thm:BZ\]. Then $ B$ is a doubly connected domain ([*i.e.*]{}, $B$ is connected and its fundamental group is ${\mathbb Z}$).
Let $\Phi:{\mathbb C}^2\to {\mathbb C}^2$ be defined by $$\label{Phi}
\Phi(z,w)=(zw,w).$$ The claim then follows since $\Phi\colon B\to \Phi(B)$ is a biholomorphism and a straightforward computation shows that $$\label{imagePhi}
\Phi( B)=\{(x,y)\in {\mathbb C}\times{\mathbb C}^*: x\in S(R_0, \theta_0), |x|^{1-{\beta_0}}<|y|<|x|^{\beta_0}\}.$$ The latter set is homeomorphic to the product of the simply connected set $S(R_0,\theta_0)$ with an annulus, hence it is connected and its fundamental group is ${\mathbb Z}$.
Local Fatou coordinates on $B$ {#local-coordi}
==============================
In this section we introduce special coordinates on $B$, which will be used later on in our construction. The first coordinate was introduced in [@BRZ Prop. 4.3]. Here we shall need more precise information, that is the following result:
\[BRZ\] Let $F$ and $B$ be as in Theorem \[Thm:BZ\]. Then there exists a holomorphic function $\psi: B\to {\mathbb C}$ such that $$\label{func-psi}
\psi \circ F = \psi +1.$$ Moreover, $$\label{psi-form}
\psi(z,w)=\frac{1}{zw}+c\log \frac{1}{zw}+v(z,w),$$ where $c\in {\mathbb C}$ depends only on $F_N$, and $v:B\to {\mathbb C}$ is a holomorphic function such that for every $(z,w)\in B$, $$\label{v-gozero}
v(z,w)=zw\cdot g(z,w),$$ for a bounded holomorphic function $g:B\to {\mathbb C}$.
The strategy of the proof follows the one for the existence of Fatou coordinates in the Leau-Fatou flower theorem. Given a point $(z,w)\in B$, for all $n\in {\mathbb N}$ we have $$U_{n+1}=U_n+1+\frac{c}{U_n}+O(|U_n|^{-2})$$ where $c\in{\mathbb C}$ depends on $F_N$ and, as usual, $U_n:=\frac{1}{\pi(F^n(z,w))}$. The map $\psi$ is then obtained as the uniform limit in $B$ of the sequence of functions $\{\psi_m\}_{m\in {\mathbb N}}$, where $\psi_m\colon B\to {\mathbb C}$ is defined as $$\label{approxFatou}
\psi_m(z,w):=\frac{1}{\pi(F^m(z,w))} - m-c\log \pi(F^m(z,w)).$$ In fact, a direct computation as in [@BRZ Prop. 4.3] implies that there exists $A>0$ such that for all $n\in {\mathbb N}$ and all $(z,w)\in B$, $$\label{psi-meno-psi}
|\psi_{n+1}(z,w)-\psi_{n}(z,w)|\leq A|U_n|^{-2}.$$ Therefore, since $|U_n|=1/|u_n|\sim n$ uniformly in $B$ by Lemma \[go-good-down\] and Remark \[uniform-convergence-onB\], the sequence $\sum_{j=0}^n(\psi_{j+1}-\psi_j)$ is uniformly converging in $B$ to a bounded holomorphic function $v$, that is, $$v(z,w):=\sum_{j=0}^\infty (\psi_{j+1}(z,w)-\psi_j(z,w)).$$ Moreover, \eqref{psi-form} follows from $\psi_n-\psi_0=\sum_{j=0}^n (\psi_{j+1}-\psi_j)$, and $\psi_n \circ F=\psi_{n+1}+1$ yields the functional equation \eqref{func-psi}. Notice that \eqref{psi-meno-psi} implies $|\psi-\psi_m|=O\left(\frac{1}{m}\right)$. Finally, since $U_n\in H(R_0,\theta_0)$ for all $n\in {\mathbb N}$, there exists $K\in (0,1)$ such that $\Re U_0>K|U_0|$ for all $U_0\in H(R_0,\theta_0)$. Hence, by \eqref{psi-meno-psi}, and since $\Re U_j\geq \Re U_0+\frac{j}{2}$ for every $j\in{\mathbb N}$ (which holds up to taking $R_0$ larger, arguing as for \eqref{eq:calim1}), $$\begin{split}
|v(z,w)|
&
\leq A\sum_{j=0}^\infty \frac{1}{|U_j|^2}
\leq A\sum_{j=0}^\infty \frac{1}{(\Re U_j)^2}
\leq A\sum_{j=0}^\infty\frac{1}{(\Re U_0+\frac{j}{2})^2}\\&
\sim A\int_0^\infty\frac{dt}{(\Re U_0+\frac{t}{2})^2} =\frac{2A}{\Re U_0}\leq \frac{2A}{K |U_0|},
\end{split}$$ from which \eqref{v-gozero} follows at once.
The map $\psi: B \to {\mathbb C}$ is called a [*Fatou coordinate*]{} for $F$.
\[Lem-psi-quasi-inj\] Let $F$ be as in Theorem \[Thm:BZ\]. Let $\psi$ be the Fatou coordinate for $F$ given by Proposition \[BRZ\]. Then there exist $R_1\geq R_0$, $\beta_1\in (\beta_0, \frac{1}{2})$ and $0<\theta_1<\theta_0$ such that the holomorphic map $$B(\beta_1, \theta_1, R_1)\ni (z,w)\mapsto (\psi(z,w), w)$$ is injective.
First we search for $\beta_1$, $\theta_1$ and $R_1$ so that on $B(\beta_1, \theta_1, R_1)$ we have good estimates for the partial derivatives of $g$ and $v$ with respect to $U$.
Since the map $\chi\colon B\ni (z,w)\mapsto (U, w)$ is univalent, we can consider $v$ as a function of $(U,w)$ defined on $$\chi(B)=\{(U,w): U\in H(R_0,\theta_0), |U|^{\beta_0-1}<|w|<|U|^{-\beta_0}\}.$$
Denote by $(H(R_0,\theta_0)+1)$ the set of points $U=V+1$ with $V\in H(R_0, \theta_0)$. Let $\theta_1\in (0,\theta_0)$ be such that $H(R_0+1,\theta_1)\subset (H(R_0, \theta_0)+1)$. There exists $\delta_0>0$ such that for every $U\in (H(R_0,\theta_0)+1)$ the distance of $U$ from $\partial H(R_0, \theta_0)$ is greater than $2\delta_0$.
Let $\tilde\beta\in (\beta_0,\frac{1}{2})$. For $R\geq R_0$, we have $$\chi(B(\tilde\beta, \theta_1, R))
=
\{(U,w): U\in H(R,\theta_1), |U|^{\tilde\beta-1}<|w|<|U|^{-\tilde\beta}\},$$ and there exists $\tilde R\geq R_0$ such that for all $(U,w)\in \chi(B(\tilde\beta, \theta_1, \tilde R))$ and all $t\in{\mathbb R}$ it holds $$|U+\delta_0e^{it}|^{\beta_0-1}\le(|U|-\delta_0)^{\beta_0-1}<|U|^{\tilde\beta-1}<|w|<|U|^{-\tilde\beta}<(|U|+\delta_0)^{-\beta_0}\le|U+\delta_0e^{it}|^{-\beta_0},$$ which implies that $(U+\delta_0 e^{it}, w)\in \chi(B)$ for all $t\in {\mathbb R}$ and all $(U,w)\in \chi(B(\tilde\beta, \theta_1, \tilde R))$, since $U+\delta_0e^{it}\in H(R_0, \theta_0)$ for all $t\in {\mathbb R}$. Therefore, for every $(U_0, w_0)\in \chi(B(\tilde\beta, \theta_1, \tilde R))$, the Cauchy formula for derivatives yields $$\left|\frac{\partial g}{\partial U}(U_0,w_0) \right|=\frac{1}{2\pi}\left|\int_{|\zeta-U_0|=\delta_0}\frac{g(\zeta, w_0)}{(\zeta-U_0)^2}d\zeta\right|
\leq \frac{1}{\delta_0}\sup_{(U,w_0)\in \chi(B)}|g(U,w_0)|\leq \frac{C}{\delta_0}=:C_1.$$ Hence, setting $C_2:=C+C_1$, for every $R\geq \max\{\tilde R, 1\}$, we have $$\label{fuuuf}
\left| \frac{\partial v}{\partial U}(U_0,w_0)\right|
\leq \frac{C}{|U_0|^2}+\frac{1}{|U_0|}\left|\frac{\partial g}{\partial U}(U_0,w_0)\right|
\leq \frac{C}{|U_0|^2}+\frac{C_1}{|U_0|}
\leq \frac{C_2}{R}$$ for all $(U_0, w_0)\in \chi(B(\tilde\beta, \theta_1, R))$. Now, since there exists $K\in (0,1)$ such that $\Re U> K |U|$ for every $U\in H(R, \theta_1)$, we fix $\beta_1\in (\tilde\beta,\frac{1}{2})$ and let $R\geq \tilde R$ be such that $$\label{t-subj}
K^{1-\beta_1}r^{\beta_1-1}>r^{\tilde\beta-1} \quad \forall r\geq R.$$
To prove the injectivity on $B(\beta_1, \theta_1, R_1)$, we first prove that for each $(U_1, w_0), (U_2, w_0)\in \chi(B(\beta_1, \theta_1, R))$ we have $(\gamma(t), w_0)\in \chi(B(\tilde\beta, \theta_1, R))$ where $\gamma(t)= tU_1+(1-t)U_2$ with $t\in [0,1]$ is the real segment joining $U_1$ and $U_2$. In fact, we have $\gamma(t)\in H(R, \theta_1)$ for all $t\in[0,1]$ since $H(R, \theta_1)$ is convex. Moreover, since $|U_j|>|w_0|^{\frac{1}{\beta_1-1}}$ and $\Re U_j>K|U_j|$ for $j=1,2$, we have $$|t U_1+(1-t)U_2|
\geq t\Re U_1+(1-t)\Re U_2
>K\left(t|w_0|^{\frac{1}{\beta_1-1}}+(1-t)|w_0|^{\frac{1}{\beta_1-1}}\right)
=K|w_0|^{\frac{1}{\beta_1-1}},
$$ for all $t\in [0,1]$, and so, by \eqref{t-subj}, $$|w_0|>\left(\frac{1}{K} \right)^{\beta_1-1}|t U_1+(1-t)U_2|^{\beta_1-1}>|t U_1+(1-t)U_2|^{\tilde\beta-1}.$$ On the other hand, since $|U_j|<|w_0|^{-\frac{1}{\beta_1}}$, $j=1,2$, for all $t\in [0,1]$ we have $$|tU_1+(1-t)U_2|
<t|w_0|^{-\frac{1}{\beta_1}}+(1-t)|w_0|^{-\frac{1}{\beta_1}}
=|w_0|^{-\frac{1}{\beta_1}},$$ hence, $$|tU_1+(1-t)U_2|^{-\tilde\beta}>|tU_1+(1-t)U_2|^{-\beta_1}>|w_0|.$$ Therefore, using \eqref{fuuuf} we obtain $$\begin{split}
|\psi(U_1,w_0)-\psi(U_2,w_0)|
&=\left|\int_\gamma\frac{\partial \psi}{\partial U}(U,w_0)dU \right|=\left|\int_\gamma \left[1+\frac{c}{U}+\frac{\partial v}{\partial U}(U,w_0)\right]dU \right|\\
&\geq |U_1-U_2|-\frac{|c|}{R}|U_1-U_2|-\frac{C_2}{R}|U_1-U_2|\\
&=\left(1-\frac{|c|}{R}-\frac{C_2}{R}\right)|U_1-U_2|,
\end{split}$$ and we obtain the injectivity of $(U,w)\mapsto (\psi(U,w),w)$ on $\chi(B(\beta_1, \theta_1, R))$, and hence of $(z,w)\mapsto (\psi(z,w), w)$ on $B(\beta_1,\theta_1, R)$, for $R$ sufficiently large.
The next result shows the existence of another “coordinate” on $B$ defined using the Fatou coordinate.
\[Prop:second-local-coord\] Let $F$ and $B$ be as in Theorem \[Thm:BZ\] and $\psi$ the Fatou coordinate given by Proposition \[BRZ\]. Then there exists a holomorphic function $\sigma\colon B\to {\mathbb C}^\ast$ such that $$\label{func-sigma}
\sigma \circ F=\overline{\lambda}e^{-\frac{1}{2\psi}}\sigma.$$ Moreover, $\sigma(z,w) = w + \eta(z,w)$, where $\eta\colon B\to {\mathbb C}$ is a holomorphic function such that for every $(z,w)\in B$ $$\label{eta-gozero}
\eta(z,w)=(zw)^\alpha \cdot h(z,w),$$ for a holomorphic bounded function $h\colon B\to {\mathbb C}$, with $\alpha\in (1-\beta_0,1)\subset (1/2,1)$.
For $n\in {\mathbb N}$, consider the holomorphic function $\sigma_n \colon B \to {\mathbb C}^\ast$ defined by $$\label{sigma-n}
\sigma_n(z,w):= \lambda^n \pi_2(F^n(z,w)) \exp\left({\frac{1}{2}\sum_{j=0}^{n-1} \frac{1}{\psi(z,w)+j}}\right).$$ We will prove that the sequence $\{\sigma_n\}$ converges uniformly in $B$ to a holomorphic function $\sigma\colon B\to {\mathbb C}^\ast$ satisfying the assertions of the statement.
First, if $\{\sigma_n\}$ is uniformly convergent on compacta of $ B$, then follows from $$\begin{split}
\sigma_n \circ F
&=\lambda^n w_{n+1} \exp\left(\frac{1}{2}\sum_{j=0}^{n-1}\frac{1}{\psi\circ F+j} \right)
=\lambda^n w_{n+1} \exp\left(\frac{1}{2}\sum_{j=0}^{n-1}\frac{1}{\psi+j+1} \right)
\\
&=\overline{\lambda}\exp\left(-\frac{1}{2\psi} \right)\lambda^{n+1}w_{n+1}\exp\left(\frac{1}{2}\sum_{j=0}^{n}\frac{1}{\psi+j} \right)=\overline{\lambda}\exp\left(-\frac{1}{2\psi} \right)\sigma_{n+1}.
\end{split}$$
Now we show that $\{\sigma_n\}$ is equibounded in $B$. By the proof of Proposition \[BRZ\] we have $$\left|\psi-\frac{1}{u_j}+j+c\log u_j\right|= |\psi-\psi_j|=O\left(\frac{1}{j}\right).$$ By Lemma \[go-good-down\] and Remark \[uniform-convergence-onB\], $|u_j|\sim \frac{1}{j}$ uniformly in $B$, hence, $$\label{eq:psi-goes-j}
\frac{1}{\psi+j}= \frac{u_j}{1-cu_j\log u_j +O(u_j)}=u_j+O(u_j^2\log u_j).$$ Now, by statement (1) in Lemma \[go-good-down\], we have that $\lim_{j\to \infty}\frac{1}{2} j\Re u_j
=\frac{1}{2}$. Therefore, $$\exp\left(\frac{1}{2}\sum_{j=0}^{n-1} \Re u_j\right)\sim \exp\left(\sum_{j=1}^{n-1} \frac{1}{2j}\right)=O(n^{1/2}).$$ Moreover, again thanks to Lemma \[go-good-down\] and Remark \[uniform-convergence-onB\], there exists $C>0$ such that $\sum_{j=0}^\infty |u_j^2\log u_j|\leq C$. Hence, there exists $C'>0$ such that $$\label{Eq:zero}
\begin{split}
\left|\exp\left(\frac{1}{2}{\sum_{j=0}^{n-1} \frac{1}{\psi(z,w)+j}}\right)\right|
&=\left| \exp\left({\sum_{j=0}^{n-1} \left( \frac{u_j}{2}+O(u_j^2\log u_j)\right)}\right) \right|\\
&\leq C'\exp\left(\sum_{j=1}^{n-1} \frac{1}{2j}\right)=O(n^{1/2}).
\end{split}$$ Therefore, since $|w_n|\sim n^{-1/2}$, we have $$\label{Eq:1a}
|\sigma_n(z,w)|
=
|w_n| \left|\exp\left(\frac{1}{2}{\sum_{j=0}^{n-1} \frac{1}{\psi(z,w)+j}}\right)\right|=|w_n|O(n^{1/2})=O(1),$$ showing that the sequence $\{\sigma_n\}$ is equibounded on $B$.
To prove that $\{\sigma_n\}$ is in fact convergent, let us first notice that we have $$\begin{aligned}
\sigma_{n+1}(z,w)
&= \lambda^{n+1} w_{n+1} \exp\left({\frac{1}{2}\sum_{j=0}^{n} \frac{1}{\psi(z,w)+j}}\right)\\
&= \lambda^{n+1} \left[\bar\lambda w_{n}\left(1-\frac{u_n}{2}\right) + R^2_l(z_n, w_n)\right] \exp\left({\frac{1}{2}\sum_{j=0}^{n} \frac{1}{\psi(z,w)+j}}\right)\\
&=\sigma_n(z, w) \left(1-\frac{u_n}{2}\right) e^{\frac{1}{2(\psi(z,w)+n)}} + \lambda^{n+1} R^2_l(z_n, w_n)\exp\left(\frac{1}{2}{\sum_{j=0}^{n} \frac{1}{\psi(z,w)+j}}\right).
\end{aligned}$$ Therefore, $$\label{Eq:diff-sigman}
\begin{split}
\sigma_{n+1}(z,w) - \sigma_{n}(z,w)
&=
\sigma_n(z, w) \left[\left(1-\frac{u_n}{2}\right) e^{\frac{1}{2(\psi(z,w)+n)}} -1\right] \\
&\quad+ \lambda^{n+1} R^2_l(z_n, w_n)\exp\left({\frac{1}{2}\sum_{j=0}^{n} \frac{1}{\psi(z,w)+j}}\right).
\end{split}$$
Now we estimate the terms in the right hand side of \eqref{Eq:diff-sigman}. Fix $\alpha \in (1-\beta_0,1)$. Note that $\alpha>\frac{1}{2}$. By \eqref{eq:psi-goes-j}, recalling that $|u_n|\sim \frac{1}{n}$, we have $$\label{Eq:1b}
\begin{split}
\left(1-\frac{u_n}{2}\right) e^{\frac{1}{2(\psi(z,w)+n)}} -1
&=\left(1-\frac{u_n}{2}\right) e^{\frac{1}{2}u_n+O(u_n^2\log u_n)}-1\\
&=\left(1-\frac{u_n}{2}\right)\left(1+\frac{1}{2} u_n +O(u_n^2\log u_n)\right)-1\\
&=O(u_n^2\log u_n)
=|u_n|^\alpha O\left(\frac{\log n}{n^{2-\alpha}}\right).
\end{split}$$ Next, since $(z_n,w_n)\in B$, we have that $|R^2_l(z_n,w_n)|=O(|u_n|^{\beta_0 l})$, and by \eqref{Eq:zero}, we have $$\label{Eq:1c}
|R_l^2(z_n,w_n)| \left|\exp\left({\frac{1}{2}\sum_{j=0}^{n} \frac{1}{\psi(z,w)+j}}\right)\right|\leq C |u_n|^\alpha n^{\frac{1}{2}+\alpha-\beta_0 l},$$ for some $C>0$.
From \eqref{Eq:diff-sigman}, using \eqref{Eq:1a}, \eqref{Eq:1b} and \eqref{Eq:1c}, it follows that there exists a constant $C'>0$ such that for all $(z,w)\in B$, $$\label{Eq:sigma-good}
|\sigma_{n+1}(z,w) - \sigma_{n}(z,w)|
\le C_n|u_n|^\alpha,$$ with $C_n=C' \left(\frac{\log n}{n^{2-\alpha}}+n^{\frac{1}{2}+\alpha-\beta_0 l}\right)$. Therefore the sequence $\{\sigma_n\}$ converges uniformly on $B$ to a holomorphic function $\sigma$. Let $C:=\sum_{n=0}^\infty C_n<+\infty$. For all $n\in{\mathbb N}$, we have $|u_n|\leq 1/R_0$, hence \eqref{Eq:sigma-good} implies that $\sigma_n-\sigma_0=\sum_{j=0}^n (\sigma_{j+1}-\sigma_j)$ converges uniformly on $B$ to a holomorphic function $\eta$ such that $\eta(z,w)= \sigma(z,w)-\sigma_0(z,w)=\sigma(z,w)-w$.
Moreover, for all $(z,w)\in B$ we have $$|\eta(z,w)|\leq \sum_{n=0}^\infty |\sigma_{n+1}(z,w)-\sigma_n(z,w)|\leq \sum_{n=0}^\infty C_n |u_n|^\alpha<|u_0|^\alpha\sum_{n=0}^\infty C_n=C |zw|^\alpha.$$ Finally, since $\sigma_n(z,w)\neq 0$ for all $n\in {\mathbb N}$ and $(z,w)\in B$, it follows that either $\sigma\equiv 0$ or $\sigma(z,w)\neq 0$ for all $(z,w)\in B$. Since $(r,r)\in B$ for all $r>0$ sufficiently small, recalling that we just proved that $\sigma(z,w)=w+(zw)^{\alpha} h(z,w)$, with $|h|\leq C$ for all $(z,w)\in B$, and $2\alpha>1$, we have $$|\sigma(r,r)|=|r+r^{2\alpha}h(r,r)|\geq r-r^{2\alpha}C=r(1-Cr^{2\alpha-1})=r(1-o(1)),$$ proving that $\sigma\not\equiv 0$.
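Heuristically, Propositions \[BRZ\] and \[Prop:second-local-coord\] say that on $B$ $$Q(z,w):=(\psi(z,w),\sigma(z,w))=\left(\frac{1}{zw}-c\log (zw)+O(zw),\ w+O\left((zw)^{\alpha}\right)\right),$$ that is, the pair $(\psi,\sigma)$ is a small perturbation of the map $(z,w)\mapsto \left(\frac{1}{zw}-c\log (zw), w\right)$; this is the heuristic behind the injectivity result that follows.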
We shall now prove that the map $B\ni (z,w)\mapsto (\psi(z,w), \sigma(z,w))$ is injective on a suitable subset of $B$. Such a result is crucial to show that the global basin of attraction, which we shall introduce in the next section, is biholomorphic to ${\mathbb C}\times {\mathbb C}^\ast$.
\[local-inj\] Let $F$ and $B$ be as in Theorem \[Thm:BZ\], let $\psi:B\to {\mathbb C}$ be the Fatou coordinate given by Proposition \[BRZ\] and let $\sigma: B\to {\mathbb C}$ be the second local coordinate defined in Proposition \[Prop:second-local-coord\]. Then there exist $R_1\geq R_0$, $\beta_1\in (\beta_0, \frac{1}{2})$ and $\theta_1\in(0,\theta_0]$ such that the holomorphic map $$B(\beta_1, \theta_1, R_1)\ni (z,w)\mapsto Q(z,w):=(\psi(z,w), \sigma(z,w))$$ is injective.
Moreover, there exist $\tilde R>1$, $\tilde \theta\in (0,\frac{\pi}{2})$ and $\tilde \beta\in (0,\frac{1}{2})$ such that $$\label{inside-claim}
\left\{(U,w)\in{\mathbb C}^2: U\in H(\tilde R, \tilde\theta), |U|^{\tilde\beta-1}<|w|<|U|^{-\tilde\beta}\right\}
\subset Q(B).$$
Let $R_1\geq R_0$, $\beta_1\in (\beta_0, \frac{1}{2})$ and $0<\theta_1\leq\theta_0$ be given by Lemma \[Lem-psi-quasi-inj\]. Thanks to the injectivity of $B(\beta_1, \theta_1, R_1)\ni (z,w)\mapsto (\psi(z,w), w)$ shown in Lemma \[Lem-psi-quasi-inj\], it follows easily that the map $$B(\beta_1, \theta_1, R_1)\ni (z,w)\mapsto (\psi(z,w), \sigma_n(z,w))$$ is injective for all $n\in {\mathbb N}$, where $\sigma_n$ is the map defined in \eqref{sigma-n}. Since $\sigma$ is the uniform limit of the sequence $\{\sigma_n\}$, it follows that either the Jacobian of $Q=(\psi,\sigma)$ is identically zero on $B(\beta_1, \theta_1, R_1)$, or $Q$ is injective on $B(\beta_1, \theta_1, R_1)$.
We now compute the Jacobian of $Q$ at $(r,r)\in B(\beta_1, \theta_1, R_1)$, for $r>0$, $r$ sufficiently small. To simplify computation, we consider the holomorphic change of coordinates $\chi\colon B(\beta_1, \theta_1, R_1)\to {\mathbb C}^2$ given by $\chi(z,w)=(\frac{1}{zw},w)=(U,w)$ and we compute the Jacobian of $Q(U,w)$ at $(\frac{1}{r^2}, r)$.
By Proposition \[BRZ\] and Proposition \[Prop:second-local-coord\], we have $$\label{Q-expr}
Q(U,w)=(U+c\log U+v(U,w), w+\eta(U,w)),$$ where $v(U,w)=\frac{1}{U}g(U,w)$ and $\eta(U,w)=\frac{1}{U^\alpha}h(U,w)$, $\alpha\in (1-\beta_0,1)$, with $|g|, |h|\leq C$ for some $C>0$ on $B$. Hence, $$\begin{split}
{\sf Jac}_{\left(\frac{1}{r^2},r\right)}Q&=\det
\left(\begin{matrix}
1+cr^2+\frac{\partial v}{\partial U}\left(\frac{1}{r^2},r\right)& \frac{\partial v}{\partial w}\left(\frac{1}{r^2},r\right)\\
\frac{\partial \eta}{\partial U}\left(\frac{1}{r^2},r\right)& 1+\frac{\partial \eta}{\partial w}\left(\frac{1}{r^2},r\right)
\end{matrix}\right)\\
&=\left(1+cr^2+\frac{\partial v}{\partial U}\left(\frac{1}{r^2},r\right)\right)\left(1+\frac{\partial \eta}{\partial w}\left(\frac{1}{r^2},r\right) \right)-\frac{\partial v}{\partial w}\left(\frac{1}{r^2},r\right)\frac{\partial \eta}{\partial U}\left(\frac{1}{r^2},r\right).
\end{split}$$ First of all, note that for $\gamma\in (0,\frac{1}{2})$, $\tilde R>1$ and $\tilde\theta\in (0,\frac{\pi}{2})$ there exists $r_0>0$ such that $(\frac{1}{r^2}, r)\in \chi(B(\gamma, \tilde\theta, \tilde R))$ for all $r\in (0,r_0)$. Hence, by \eqref{fuuuf}, there exists $C_2>0$ such that for $r$ sufficiently small, $$\left|\frac{\partial v}{\partial U}\left(\frac{1}{r^2},r\right)\right|\leq r^2C_2.$$ A similar argument as in \eqref{fuuuf}, applied to $\eta$ instead of $v$, shows that for $r$ sufficiently small, $$\left|\frac{\partial \eta}{\partial U}\left(\frac{1}{r^2},r\right)\right|\leq r^{2\alpha} C_3,$$ for some $C_3>0$.
On the other hand, it is easy to check that, for every $t\in {\mathbb R}$, $(\frac{1}{r^2}, r(1+\frac{e^{it}}{2}))\in \chi(B)$ whenever $r$ is positive and small enough. Hence, by the Cauchy formula for derivatives $$\left|\frac{\partial v}{\partial w}\left(\frac{1}{r^2},r\right)\right|
=\frac{1}{2\pi}\left|\int_{|\zeta-r|=r/2}\frac{v(\frac{1}{r^2},\zeta)}{(\zeta-r)^2}d\zeta\right|\leq \frac{2r^2 \max_{|\zeta-r|=r/2}|g(\frac{1}{r^2},\zeta)|}{r}\leq 2Cr.
$$ Similarly, $$\left|\frac{\partial \eta}{\partial w}\left(\frac{1}{r^2},r\right)\right|\leq 2C r^{2\alpha-1}.$$ Therefore, $${\sf Jac}_{\left(\frac{1}{r^2},r\right)}Q=1+O(r^{2\alpha-1}),$$ showing that the Jacobian is not zero for $r$ sufficiently small since $\alpha>1/2$. Hence $Q$ is injective on $B(\beta_1, \theta_1, R_1)$.
Now we prove there exist $\tilde R>1$, $\tilde \theta\in (0,\frac{\pi}{2})$ and $\tilde \beta\in (0,\frac{1}{2})$ such that \eqref{inside-claim} holds. The rough idea is that $Q|_{B}$ is “very close” to the map $(z,w)\mapsto (\frac{1}{zw}-c\log (zw), w)$, for which the statement is true, and hence \eqref{inside-claim} follows by Rouché’s Theorem. Consider again the constants $R_1\geq R_0$, $\beta_1\in (\beta_0, \frac{1}{2})$ and $\theta_1\in(0,\theta_0]$ given by Lemma \[Lem-psi-quasi-inj\], set $\tilde B:=B(\beta_1,\theta_1, R_1)$, and consider the holomorphic change of coordinates on $\tilde B$ given by $\chi(z,w)=(\frac{1}{zw}, w)=(U,w)$. Then $\chi(\tilde B)=\{(U,w): U\in H(R_1,\theta_1), |U|^{\beta_1-1}<|w|<|U|^{-\beta_1}\}$.
The map $\chi(\tilde B)\ni (U,w)\mapsto Q(U,w)=(\psi(U,w), \sigma(U,w))$ is given by \eqref{Q-expr}. In particular $$\label{good-exp}
\psi(U,w)=U(1+\tau(U,w)),$$ where $|\tau|<C$ on $\chi(\tilde B)$ for some $C>0$, and $\lim_{|U|\to \infty}\tau(U,w)=0$. This implies immediately that there exist $\tilde R_1>0$ and $\tilde\theta\in (0,\frac{\theta_0}{2})$ such that $H(\tilde R_1,2\tilde\theta)\subset \psi(\tilde B)\subset \psi(B)$.
To prove \eqref{inside-claim} it suffices to show that there exist $\tilde R\geq \tilde R_1$ and $\tilde\beta\in (\beta_1, \frac{1}{2})$ such that for every $\zeta_0\in H(\tilde R,\tilde\theta)$, $$\label{last-claim}
\{\xi\in {\mathbb C}: |\zeta_0|^{\tilde\beta-1}<|\xi|<|\zeta_0|^{-\tilde\beta}\}\subset \sigma(\psi^{-1}(\zeta_0)).$$
In order to prove \eqref{last-claim}, we first show that there exist $\tilde R_2\geq \tilde R_1$ and $\tilde \beta_2\in (\beta_1, \frac{1}{2})$ such that for every $\zeta_0\in H(\tilde R_2,\tilde\theta)$ it holds $$\label{fiber-pi2}
\{\xi\in {\mathbb C}: |\zeta_0|^{\tilde\beta_2-1}<|\xi|<|\zeta_0|^{-\tilde\beta_2}\}\subset \pi_2(\psi^{-1}(\zeta_0)).$$ Indeed, by , $\zeta_0=\psi(U,w)=U(1+\tau(U,w))$ with $|\tau|<C$ and $\lim_{|U|\to \infty}\tau(U,w)=0$. Hence, if $\zeta_0\in H(\tilde R_2,\tilde\theta)$ for some $\tilde R_2\geq \tilde R_1$, $$|U|\geq \frac{|\zeta_0|}{1+|\tau(U,w)|}\geq \frac{\tilde R_2}{1+C}.$$ Therefore, given $c'\in (0,1)$, we can choose $\tilde R_2\geq \tilde R_1$ large enough so that for every $(U,w)\in \chi(\tilde B)$ such that $\psi(U,w)=\zeta_0$ and $\zeta_0\in H(\tilde R_2,\tilde\theta)$, the modulus $|U|$ is so large that $|\tau(U,w)|<c'$. This implies that $$\label{c-stima-U}
(1-c')|U|<|\zeta_0|<(1+c')|U|$$ for every $U\in {\mathbb C}$ such that there exists $w\in {\mathbb C}$ so that $(U,w)\in \chi(\tilde B)$ and $\psi(U,w)=\zeta_0\in H(\tilde R_2,\tilde\theta)$.
Let $\tilde \beta_2\in (\beta_1, \frac{1}{2})$. Let $r_0>0$ be such that $$\frac{1}{[(1+c')t]^{1-\beta_1}}<\frac{1}{t^{1-\tilde\beta_2}}<\frac{1}{[(1-c')t]^{\tilde\beta_2}}<\frac{1}{t^{\beta_1}}, \quad \forall t\geq r_0.$$ Up to choosing $\tilde R_2\geq r_0$, \eqref{c-stima-U} implies that $$\label{one-another}
|U_0|^{\beta_1-1}<|\zeta_0|^{\tilde\beta_2-1}<|\zeta_0|^{-\tilde\beta_2}<|U_0|^{-\beta_1}$$ for every $U_0\in {\mathbb C}$ such that there exists $w\in {\mathbb C}$ so that $(U_0,w)\in \chi( \tilde B)$ and $\psi(U_0,w)=\zeta_0\in H(\tilde R_2,\tilde\theta)$.
Fix $\zeta_0\in H(\tilde R_2,\tilde\theta)$ and fix $\xi_0\in {\mathbb C}$ such that $|\zeta_0|^{\tilde\beta_2-1}<|\xi_0|<|\zeta_0|^{-\tilde\beta_2}$. Since there exists $(U_0,w_0)\in \chi(\tilde B)$ such that $\psi(U_0,w_0)=\zeta_0$, it follows from that $(U_0,\xi_0)\in \chi(\tilde B)$. In particular, $\chi(\tilde B)\cap \{w=\xi_0\}\neq \emptyset$. Set $$A(\xi_0):=
\left\{U\in H(R_1,\theta_1): \frac{1}{|\xi_0|^{\frac{1}{1-\beta_1}}}<|U|<\frac{1}{|\xi_0|^{\frac{1}{\beta_1}}}\right\}
=\chi(\tilde B)\cap \{w=\xi_0\}.$$ Then, $$A(\xi_0)\ni U\mapsto \psi_{\xi_0}(U):=\psi(U,\xi_0)=U+c\log U+\frac{g(U,\xi_0)}{U}\in {\mathbb C}$$ is well defined and holomorphic. Moreover, up to taking $\tilde R_2$ larger and $\tilde \theta$ smaller, we can assume that the set $H(\tilde R_2, \tilde\theta)$ is contained in the image of the map $\chi(\tilde B) \ni (U,w)\mapsto U+c\log U$. Hence, there exists $(U_0, w_0)\in \chi(\tilde B)$ such that $U_0+c\log U_0=\zeta_0$. Since $\zeta_0=U_0(1+c\frac{\log U_0}{U_0})$, it follows that $|U_0|(1-\epsilon)\leq|\zeta_0|\leq |U_0|(1+\epsilon)$ for some $\epsilon\in (0,1)$, provided that $\tilde R_2$ is sufficiently large. Recalling that $|\zeta_0|^{\tilde\beta_2-1}<|\xi_0|<|\zeta_0|^{-\tilde\beta_2}$, we have $$|U_0|\geq \frac{|\zeta_0|}{1+\epsilon} >\frac{1}{(1+\epsilon)|\xi_0|^{1/(1-\tilde\beta_2)}}>\frac{1}{|\xi_0|^{\frac{1}{1-\beta_1}}},$$ where the last inequality holds provided $\tilde R_2$ is sufficiently large. Similarly, one can show that $|U_0|<\frac{1}{|\xi_0|^{\frac{1}{\beta_1}}}$, namely, $U_0 \in A(\xi_0)$.
Let $\delta\in (0,1)$ be such that $D(U_0,\delta):=\{U\in {\mathbb C}: |U-U_0|<\delta\}\subset A(\xi_0)$. Since $|g(U,\xi_0)|/|U|<c'$, up to choosing $\tilde R_2$ so large that $\displaystyle c'+|c|\max_{|U-U_0|=\delta}\left|\log{U}-\log{U_0}\right|<\delta$, it follows that for all $U\in \partial D(U_0,\delta)$, $$\begin{split}
|\psi_{\xi_0}(U)-U-c\log U|&<c'<\delta-|c|\left|\log\frac{U}{U_0}\right|\le|U+c\log U-\zeta_0|\\&\le|U+c\log U-\zeta_0|+|\psi_{\xi_0}(U)-\zeta_0|.
\end{split}$$ Hence, Rouché’s Theorem implies that there exists $U_1\in D(U_0,\delta)\subset A(\xi_0)$ such that $\psi(U_1,\xi_0)=\psi_{\xi_0}(U_1)=\zeta_0$, proving .
Let $K\colon\chi(\tilde B)\to {\mathbb C}^2$ be defined by $K(U,w):=(\psi(U,w), w)$. Then the map $K$ is injective and from \eqref{fiber-pi2}, we obtain that $$\label{XQR}
\chi(B(\tilde\beta_2, \tilde\theta, \tilde R_2))\subset K(\chi(\tilde B)).$$ Let $\tilde R\geq \tilde R_2$, and let $\zeta_0\in H(\tilde R, \tilde \theta)$. Thanks to \eqref{XQR}, we have $(\zeta_0,w)\in K(\chi(\tilde B))$ for every $w\in J(\zeta_0)$, where $$J(\zeta_0):=\{w \in {\mathbb C}: |\zeta_0|^{\tilde\beta_2-1}<|w|<|\zeta_0|^{-\tilde\beta_2}\}.$$
Let $\tilde\beta\in (\tilde\beta_2,\frac{1}{2})$, and let $\xi_0\in {\mathbb C}$ be such that $ |\zeta_0|^{\tilde\beta-1}<|\xi_0|<|\zeta_0|^{-\tilde\beta}$. In particular $\xi_0\in J(\zeta_0)$, and setting $r:=\min\{|\zeta_0|^{\tilde \beta-1}-|\zeta_0|^{\tilde\beta_2-1}, |\zeta_0|^{-\tilde\beta_2}-|\zeta_0|^{-\tilde\beta}\}>0$, the disc $D(\xi_0,r):=\{\xi\in {\mathbb C}: |\xi-\xi_0|<r\}$ is contained in $J(\zeta_0)$. Moreover, if $\tilde R$ is sufficiently large, $$\label{r-minmin}
r>\frac{1}{2}\min\{|\zeta_0|^{\tilde \beta-1}, |\zeta_0|^{-\tilde \beta_2}\}.$$
Set $(\tilde U,w):=K(U,w)$. For every $(\tilde U, w)\in K(\chi(\tilde B))$, we can write $$\tilde\sigma(\tilde U, w):=(\sigma \circ K^{-1})(\tilde U, w)=w+\eta(\tilde U,w),$$ where $\eta(\tilde U,w)=\frac{1}{\tilde U^\alpha}h(\tilde U,w)$, with $\alpha\in (1-\beta_0,1)$, and $|h|\leq C$ for some $C>0$. By , since $\alpha>1-\beta_0>1/2$, if $\tilde R$ is sufficiently large, then $|\eta(\zeta_0, w)|<r$, for every $w\in J(\zeta_0)$. Therefore, for all $w\in \partial D(\xi_0,r)$, $$|w-\tilde\sigma(\zeta_0, w)|=|\eta(\zeta_0,w)|< r=|w-\xi_0|\leq |w-\xi_0|+|\tilde\sigma(\zeta_0, w)-\xi_0|.$$ Hence, by Rouché’s Theorem, there exists $w_0\in D(\xi_0,r)$ such that $\tilde\sigma(\zeta_0, w_0)=\xi_0$. By the arbitrariness of $\xi_0$, this implies that for every $\zeta_0\in H(\tilde R, \tilde\theta)$ $$\{\xi\in {\mathbb C}: |\zeta_0|^{\tilde\beta-1}<|\xi|<|\zeta_0|^{-\tilde\beta}\}\subset \tilde\sigma(\zeta_0, \cdot)(J(\zeta_0))\subset \sigma(\psi^{-1}(\zeta_0)),$$ which finally proves .
The topology of the global basin $\Omega$ {#topology}
=========================================
Let $F_N$ be a germ of biholomorphism of ${\mathbb C}^2$ at $(0,0)$ of the form \eqref{Expression FN}. Thanks to a result of B. J. Weickert [@W1] and F. Forstnerič [@F] (see in particular [@F Corollary 2.2]), given any $l\geq 2$ there exists an automorphism $F$ of ${\mathbb C}^2$ such that $\|F(z,w)-F_N(z,w)\|=O(\|(z,w)\|^l)$. In particular, given a unimodular number $\lambda$ which is not a root of unity, we take $l\geq 4$ such that $\beta_0 (l+1)\geq 4$, where $0<\beta_0<1/2$ is as in Theorem \[Thm:BZ\], and we consider automorphisms of ${\mathbb C}^2$ of the form $$\label{automFp}
F(z,w)=\left(\lambda z\left(1-\frac{zw}{2} \right)+R_l^1(z,w), \overline{\lambda}w \left(1-\frac{zw}{2} \right)+ R_l^2(z,w)\right),$$ where $R_l^j(z,w)=O(\|(z,w)\|^l)$, $j=1,2$.
Let $F$ be an automorphism of ${\mathbb C}^2$ of the form \eqref{automFp}. Let $B$ be the local basin of attraction of $F$ given by Theorem \[Thm:BZ\]. The *global attracting basin of $F$* is $$\Omega:=\bigcup_{n\in {\mathbb N}} F^{-n}(B).$$
In this section we are going to prove that the global basin $\Omega$ is biholomorphic to ${\mathbb C}\times {\mathbb C}^\ast$. We start by proving that $\Omega$ is not simply connected:
\[prop2.1\] The open set $\Omega$ is connected but not simply connected.
Since $F(B)\subseteq B$, the sets $F^{-n}(B)$, $n\in{\mathbb N}$, form an increasing sequence of open sets, each biholomorphic to $B$, which is doubly connected by Lemma \[Omega\]; hence $\Omega$ is an increasing union of doubly connected open sets. Moreover, $F_*$ is the identity on $\pi_1(B)$ and on $H_1(B)$, therefore $\pi_1(\Omega) = H_1(\Omega) = {\mathbb Z}$.
In order to prove that $\Omega$ is biholomorphic to ${\mathbb C}\times{\mathbb C}^\ast$, let us consider the Fatou coordinate $\psi$ for $F$ given by Proposition \[BRZ\] and the holomorphic function $\sigma$ given by Proposition \[Prop:second-local-coord\]. We can use the functional equation \eqref{func-psi} to extend $\psi$ to all of $\Omega$. Indeed, let $p\in \Omega$. Then there exists $n\in {\mathbb N}$ such that $F^n(p)\in B$. We define $$g_1(p):=\psi(F^n(p))-n.$$ Set $H:=g_1(B)$, and consider $\Omega_0:=g_1^{-1}(H)=\bigcup_{\zeta\in H}g_1^{-1}(\zeta)$.
Using \eqref{func-sigma} we can extend $\sigma$ to $\Omega_0$ as follows. For any $p\in \Omega_0$, we set $$\begin{split}
g_2(p)&:=\lambda^n \exp\left(\frac{1}{2}\sum_{j=0}^{n-1}\frac{1}{g_1(p)+j} \right)\sigma(F^n(p))\\
&=\lambda^n \exp\left(\frac{1}{2}\sum_{j=0}^{n-1}\frac{1}{\psi(F^n(p))+j-n} \right)\sigma(F^n(p)),
\end{split}$$ where $n\in {\mathbb N}$ is such that $F^n(p)\in B$. Notice that, since $g_1(p)\in H$, we have $\Re g_1(p)>0$ and the previous formula is well defined.
The next lemma shows that the map $G:=(g_1, g_2)\colon \Omega_0\to {\mathbb C}^2$ is well defined and holomorphic:
\[lemma\_g\_1\] The map $G:=(g_1,g_2)\colon \Omega_0 \to {\mathbb C}^2$ is well-defined, holomorphic and injective.
The map $G$ is holomorphic by construction and since $\Re g_1(p)>0$ for all $p\in\Omega_0$.
The map $G$ is well defined. Indeed, if $n$ and $m$ are both integers so that $F^n(p)$ and $F^m(p)$ belong to $ B$, and $n<m$, then $F^m(p) = F^{m-n}(F^n(p))$. Therefore $\psi(F^m(p)) = \psi(F^{m-n}(F^n(p))) = \psi(F^n(p)) + m-n$, whence $\psi(F^m(p))-m = \psi(F^n(p))
-n$. Analogously, $\sigma(F^m(p))= \overline{\lambda}^{m-n}\exp(-(1/2)\sum_{j=0}^{m-n-1}1/(\psi(F^n(p))+j))\sigma(F^n(p))$, and so $$\begin{aligned}
&\lambda^m \exp\left(\frac{1}{2}\sum_{j=0}^{m-1}\frac{1}{\psi(F^m(p))+j-m} \right)\sigma(F^m(p))
\\
&=
\lambda^m \exp\left(\frac{1}{2}\sum_{j=0}^{m-1}\frac{1}{\psi(F^n(p))+j-n} \right)\overline{\lambda}^{m-n}\exp\left(-\frac{1}{2}\sum_{j=0}^{m-n-1}\frac{1}{\psi(F^n(p))+j}\right)\sigma(F^n(p))\\
&=\lambda^n \exp\left(\frac{1}{2}\sum_{j=0}^{n-1}\frac{1}{\psi(F^n(p))+j-n} \right)\sigma(F^n(p)),
\end{aligned}$$ and we are done.
Let us now prove the injectivity of $G$. Let $p, q\in \Omega_0$. By the very definition of $G$, $G(p)=G(q)$ if and only if $$(\psi(F^n(p)), \sigma(F^n(p)))=(\psi(F^n(q)), \sigma(F^n(q)))$$ for all $n\in {\mathbb N}$ such that $F^n(p)$ and $F^n(q)$ are contained in $B$. By Proposition \[local-inj\], there exist $R_1\geq R_0$, $\beta_1\in (\beta_0, \frac{1}{2})$ and $0<\theta_1\leq\theta_0$ such that $Q:=(\psi, \sigma)$ is injective on $B(\beta_1, \theta_1, R_1)$. Also, by Lemma \[go-good-down\], there exists $n\in {\mathbb N}$ such that $F^n(p), F^n(q)\in B(\beta_1, \theta_1, R_1)$. Therefore, $G(p)= G(q)$ if and only if $p=q$.
\[HxCstar\] $G(\Omega_0)=H\times {\mathbb C}^\ast$.
Let $T:{\mathbb C}^2\to{\mathbb C}^2$ be defined by $$T(\zeta, \xi):=(\zeta+1,\overline{\lambda}e^{-\frac{1}{2\zeta}}\xi).$$ Notice that $T$ is not defined at $\zeta=0$. However, since $g_1(\Omega_0)=H$, the map $T$ is well-defined and holomorphic on $G(\Omega_0)$ and satisfies $$G\circ F=T\circ G.$$
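For points of $B$, where one can take $n=0$ in the definitions of $g_1$ and $g_2$ so that $g_1=\psi$ and $g_2=\sigma$, the relation $G\circ F=T\circ G$ is just a restatement of the two functional equations: $$g_1\circ F=\psi\circ F=\psi+1=g_1+1,\qquad g_2\circ F=\sigma\circ F=\overline{\lambda}e^{-\frac{1}{2\psi}}\sigma=\overline{\lambda}e^{-\frac{1}{2g_1}}g_2;$$ for a general $p\in \Omega_0$ the same identities follow from a computation analogous to the one in the proof of Lemma \[lemma\_g\_1\].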
Let $(\zeta_0, \xi_0)\in H\times {\mathbb C}^\ast$. By induction, for $n\in {\mathbb N}$, we have $$(\zeta_n,\xi_n)
:=
T^n(\zeta_0,\xi_0)=\left(\zeta_0+n,\overline{\lambda}^n\exp\left(-\frac{1}{2}\sum_{j=0}^{n-1}\frac{1}{\zeta_0+j} \right) \xi_0\right).$$ Now, $$\begin{split}
|\xi_n|
&=\exp\left(-\frac{1}{2}\sum_{j=0}^{n-1}\Re\left(\frac{1}{\zeta_0+j} \right) \right)|\xi_0|\\
&=\exp\left(-\frac{1}{2}\sum_{j=1}^{n-1}\frac{1}{j}\left(\frac{1+j^{-1}\Re \zeta_0}{\left|j^{-1}\zeta_0+1\right|^2}\right)\right)\exp\left(-\Re\frac{\zeta_0}{2|\zeta_0|^2} \right)|\xi_0|,
\end{split}$$ which implies that $$|\zeta_n|\sim n, \quad |\xi_n|\sim \frac{1}{\sqrt{n}}.$$ Therefore, given $\tilde\beta\in (0,\frac{1}{2})$, for all $n$ sufficiently large, $$\label{xn}
|\zeta_n|^{\tilde\beta-1}<|\xi_n|<|\zeta_n|^{-\tilde\beta}.$$ Moreover, since $\zeta_n=\zeta_0+n$, it follows that, given $\tilde R>0$ and $\tilde \theta\in (0,\frac{\pi}{2})$, for all $n$ sufficiently large, $$\label{Hxn}
\zeta_n\in H(\tilde R, \tilde\theta).$$
Note that $G(z,w)=Q(z,w)=(\psi(z,w), \sigma(z,w))$ for all $(z,w)\in B$. Hence, by Proposition \[local-inj\], there exist $\tilde\beta\in (0,\frac{1}{2})$, $\tilde\theta\in (0,\pi/2)$ and $\tilde R>1$ such that $\{(U,w)\in{\mathbb C}^2: U\in H(\tilde R, \tilde\theta), |U|^{\tilde\beta-1}<|w|<|U|^{-\tilde\beta}\}\subset G(B)$. Therefore, from \eqref{xn} and \eqref{Hxn}, it follows at once that $H\times {\mathbb C}^\ast\subseteq G(\Omega_0)$, and, in fact, equality holds since $\Omega_0$ — and hence $G(\Omega_0)$ — is not simply connected.
We finally have all ingredients to prove the final result of this section.
\[CxCstar\] $\Omega\simeq {\mathbb C}\times{\mathbb C}^\ast$.
Consider again $H:= g_1(B)$ and set $H_n:=H-n$. Since $H=\psi(B)$ contains a sector of the form $H(\tilde R_1, 2\tilde\theta)$ (see the proof of Proposition \[local-inj\]), we clearly have $\bigcup_{n\in{\mathbb N}} H_n={\mathbb C}$. For each $n$, define $\varphi_n:g_1^{-1}(H_n)\to {\mathbb C}^2$ by $$\varphi_n(z,w):=G(F^n(z,w))-(n,0).$$ Note that $g_1(F^n(z,w))=g_1(z,w)+n$, hence $F^n$ is a fiber preserving biholomorphism from $g_1^{-1}(H_n)$ to $\Omega_0$. Therefore, by Proposition \[HxCstar\], $$\varphi_n: g_1^{-1}(H_n)\to H_n\times {\mathbb C}^\ast$$ is a fiber preserving biholomorphism. Moreover, for each $p\in \Omega$, if $F^n(p)\in \Omega_0$ we have $$G(F^{n+1}(p))=G(F(F^n(p)))=T(G(F^n(p))).$$ Now, take $\zeta\in H_n\cap H_{n+1}$ and let $w\in {\mathbb C}^\ast$. Note that $\zeta\mapsto \lambda e^{\frac{1}{2(\zeta+n)}}$ is a never vanishing holomorphic function on $H_n\cap H_{n+1}$. Hence, thanks to the previous equation, we have $$\varphi_n\circ \varphi_{n+1}^{-1}(\zeta, w)=\left(G\circ F^{-1}\circ G^{-1}\right)(\zeta+n+1,w)-(n,0)=T^{-1}(\zeta+n+1,w)-(n,0)=(\zeta, \lambda e^{\frac{1}{2(\zeta+n)} }w).$$ This proves that $g_1\colon\Omega\to{\mathbb C}$ is a holomorphic fiber bundle with fiber ${\mathbb C}^\ast$ and with transition functions $\zeta\mapsto \lambda e^{\frac{1}{2(\zeta+n)}}$ on $H_n\cap H_{n+1}$. In particular, $\Omega$ is a line bundle minus the zero section over ${\mathbb C}$. Since $H^1({\mathbb C}, \mathcal O_{\mathbb C}^\ast)=0$, that is, all line bundles over ${\mathbb C}$ are (globally) holomorphically trivial, we obtain that $\Omega$ is biholomorphic to ${\mathbb C}\times{\mathbb C}^\ast$.
The global basin $\Omega$ and the Fatou component containing $B$ {#FGB}
================================================================
Let $F$ be an automorphism of the form \eqref{automFp}, as in the previous section, let $B$ be the local basin of attraction given by Theorem \[Thm:BZ\] and $\Omega$ the associated global basin of attraction. Since $B$ is connected by Lemma \[Omega\], and $\{F^n\}$ converges to $(0,0)$ uniformly on $B$, there exists an invariant Fatou component, which we denote by $V$, containing $B$, and we clearly have $\Omega\subseteq V$.
The aim of this section is to characterize $\Omega$ in terms of the behavior of orbits, and to prove that $\Omega=V$ under a generic arithmetic condition on $\lambda$.
We use the same notations introduced in the previous sections. We start with the following corollary of Lemma \[go-good-down\].
\[Cor:how-Omega-is\] Let $F$ be an automorphism of ${\mathbb C}^2$ of the form \eqref{automFp}. Suppose that $\{(z_n,w_n):=F^n(z_0,w_0)\}$, the orbit under $F$ of a point $(z_0,w_0)$, converges to $(0,0)$. Then $(z_0,w_0)\in \Omega$ if and only if $(z_n,w_n)$ is eventually contained in $W(\beta)$ for some—and hence any—$\beta\in (0,1/2)$ such that $\beta(l+1)>2$.
If $(z_n,w_n)\in W(\beta)$ eventually for some $\beta\in (0,1/2)$ with $\beta(l+1)>2$ then, by Lemma \[go-good-down\], $(z_n,w_n)\in B$ eventually, and hence, $(z_0,w_0)\in \Omega$. Conversely, if $(z_0,w_0)\in \Omega$, then $(z_n,w_n)\in W(\beta_0)$ eventually and $\beta_0(l+1)\geq 4$, and hence Lemma \[go-good-down\] implies that $(z_n,w_n)\in W(\beta)$ eventually for any $\beta\in (0,1/2)$ such that $\beta(l+1)>2$.
We can now prove the following characterization of $\Omega$.
\[characterized Omega\] Let $F$ be an automorphism of ${\mathbb C}^2$ of the form \eqref{automFp}. Then, $$\Omega=\{(z,w)\in {\mathbb C}^2\setminus\{(0,0)\}: \lim_{n\to \infty}\|(z_n,w_n)\|=0, \quad |z_n|\sim |w_n|\},$$ where $(z_n,w_n)=F^n(z,w)$.
If $(z,w)\in \Omega$, then eventually $(z_n,w_n)\in W(\beta_0)$ and, hence, $|z_n|\sim |w_n|$ by Lemma \[go-good-down\]. On the other hand, if $(z_n,w_n)\to (0,0)$ and $|z_n|\sim |w_n|$, it follows that for every $\beta\in (0,1/2)$, $(z_n,w_n)\in W(\beta)$. Indeed, let $0<c_1<c_2$ be such that $c_1 |z_n|<|w_n|<c_2 |z_n|$ eventually. Let $\beta\in (0,1/2)$. Then for $n$ large, $$|z_n|^{\frac{1-\beta}{\beta}}<c_1 |z_n|<|w_n|,$$ that is, $|z_n|<|u_n|^\beta$, and similarly it can be proved that $|w_n|<|u_n|^\beta$. Hence, by Corollary \[Cor:how-Omega-is\], $(z,w)\in \Omega$.
In order to show that, under some generic arithmetic assumptions on $\lambda$, $\Omega$ coincides with the Fatou component which contains it, we need to prove some preliminary results.
\[Change-of-coordinates\] Let $\chi$ be a germ of biholomorphism of ${\mathbb C}^2$ at $(0,0)$ given by $$\chi(z,w)=(z+A(z,w), w+B(z,w)),$$ where $A$ and $B$ are germs of holomorphic functions at $(0,0)$ with $A(z,w)=O(\|(z,w)\|^h)$ and $B(z,w)=O(\|(z,w)\|^h)$ for some $h\geq 2$. Let $\beta\in (0,1/2)$. Assume that $\beta(h+1)>1$. Then for any $\beta'\in (0,\beta)$ there exists $\epsilon>0$ such that for every $(z,w)\in W(\beta)$ with $\|(z,w)\|<\epsilon$ it holds $\chi(z,w)\in W(\beta')$.
Let us write $(\tilde{z}, \tilde{w})=\chi(z,w)$. Then we have $\tilde{z}=z+A(z,w)$ and $\tilde{w}=w+B(z,w)$. Fix $\beta\in (0,1/2)$ such that $\beta(h+1)>1$, and $\beta'\in (0,\beta)$. Since $A(z,w)=O(\|(z,w)\|^h)$, $B(z,w)=O(\|(z,w)\|^h)$ and $|z|, |w|<|zw|^\beta$ on $W(\beta)$, there exist $r>0$ and a constant $C>0$ such that $|A(z,w)|\leq C |zw|^{\beta h}$ and $|B(z,w)|\leq C|zw|^{\beta h}$ for all $(z,w)\in W(\beta)$ with $\|(z,w)\|<r$. Hence, for all $(z,w)\in W(\beta)$ with $\|(z,w)\|<r$, $$|\tilde{z}|\leq |z|+|A(z,w)|<|zw|^\beta+C |zw|^{\beta h}=|zw|^\beta(1+C|zw|^{\beta(h-1)}),$$ and similarly, $|\tilde{w}|<|zw|^\beta(1+C|zw|^{\beta(h-1)})$. Therefore, since $\beta(h+1)>1$, $$\begin{split}
|\tilde z \tilde w|&\geq |zw|-|z||B|-|w||A|-|AB|\\&
\geq |zw|-2C|zw|^{\beta (h+1)}-C^2|zw|^{2 h\beta}\\
&= |zw|\left(1-2C|zw|^{\beta(h+1)-1}-C^2|zw|^{2h\beta-1}\right).
\end{split}$$ It thus follows that, for $(z,w)\in W(\beta)$ sufficiently close to $(0,0)$, we have $$|\tilde{z}|<|zw|^\beta(1+C|zw|^{\beta(h-1)})\leq |\tilde z \tilde w|^{\beta}\frac{1+C|zw|^{\beta(h-1)}}{\left(1-2C|zw|^{\beta(h+1)-1}-C^2|zw|^{2h\beta-1}\right)^{\beta}}= |\tilde z \tilde w|^{\beta}(1+o(1))<|\tilde z \tilde w|^{\beta'}.$$ A similar argument holding for $\tilde{w}$, the statement is proved.
Note that the previous lemma does not hold without the hypothesis $\beta(h+1)>1$. Consider for instance the holomorphic map $\chi(z,w)=(z+w^2, w)$. Then the points of the form $(-w^2,w)$ belong to $W(\beta)$ for all $\beta<1/3$ but $\chi(-w^2,w)=(0,w)\not\in W(\beta')$ for any $\beta'\in (0,1/2)$.
To state and prove Theorem \[Fatou-Omega\] we also need one more assumption, namely an arithmetic condition on the eigenvalue $\lambda$.
Let $\lambda\in {\mathbb C}$ be such that $|\lambda|=1$. Recall that $\lambda$ is called [*Siegel*]{} if there exist $c>0$ and $N\in {\mathbb N}$ such that $|\lambda^k-1|\geq c k^{-N}$ for all $k\in {\mathbb N}$, $k\geq 1$ (such a condition holds for $\theta$ in a full Lebesgue measure subset of the unit circle, see, [*e.g.*]{}, [@Po]). More generally, one says that a number $\lambda$ is [*Brjuno*]{} if $$\label{eq:brjuno}
\sum_{k=0}^{+\infty}{\frac{1}{2^k}}\log{\frac{1}{\omega(2^{k+1})}}<+\infty\;,$$ where $\omega(m) = \min_{2\le k\le m} |\lambda^k - \lambda|$ for any $m\ge 2$. Roughly speaking, the logarithm of a Brjuno number is badly approximated by rationals (see [@Brjuno] or [@Po] for more details). Siegel numbers are examples of Brjuno numbers.
\[Brunocoord\] Let $F$ be given by \eqref{automFp}. If $\lambda$ is Brjuno, then there exists a germ of biholomorphism $\chi$ of ${\mathbb C}^2$ at $(0,0)$ of the form $\chi(z,w)=(z,w)+O(\|(z,w)\|^l)$, such that $$\label{F-bruno}
\tilde F(\tilde z, \tilde w):=(\chi \circ F \circ \chi^{-1})(\tilde z,\tilde w)=(\lambda \tilde z + \tilde z\tilde w A(\tilde z, \tilde w), \overline{\lambda} \tilde w + \tilde z\tilde w B(\tilde z, \tilde w)),$$ where $A, B$ are germs of holomorphic functions at $(0,0)$.
Thanks to the fact that $\lambda$ is Brjuno, the divisors $\lambda^k-\lambda$ and $\lambda^k-\overline{\lambda}$ are “admissible” in the sense of Pöschel [@Po] for all $k\in {\mathbb N}$, $k\geq 2$. Hence, by [@Po Theorem 1], there exist $\delta>0$ and an injective holomorphic map $\varphi_1:{\mathbb D}_\delta \to {\mathbb C}^2$, where ${\mathbb D}_\delta:=\{\zeta\in {\mathbb C}: |\zeta|<\delta\}$, such that $\varphi_1(0)=(0,0)$, $\varphi_1'(0)=(1,0)$ and $$\label{poschel1}
F(\varphi_1(\zeta))=\varphi_1(\lambda\zeta),$$ for all $\zeta\in {\mathbb D}_\delta$. Since $F$ is tangent to $\{w=0\}$ up to order $l$, if follows from the proof of [@Po Theorem 1] that $\varphi_1$ can be chosen of the form $\varphi_1(\zeta)=(\zeta,0)+O(|\zeta|^l)$. In particular, up to shrinking $\delta$, we can write $\varphi_1({\mathbb D}_\delta)$ implicitly as $w=\psi_1(z)$ for some holomorphic function $\psi_1$ defined on ${\mathbb D}_\delta$ and such that $\psi_1(\zeta)=O(|\zeta|^l)$.
Similarly, $\overline{\lambda}^k-\lambda$ and $\overline{\lambda}^k-\overline{\lambda}$ are admissible divisors in the sense of Pöschel for all $k\in {\mathbb N}$, $k\geq 2$ and hence there exist $\delta'>0$ and a holomorphic function $\psi_2:{\mathbb D}_{\delta'}\to {\mathbb C}$ with $\psi_2(\zeta)=O(|\zeta|^l)$, such that $F$ leaves invariant the local curve $C:=\{(z,w): z=\psi_2(w)\}$ and the restriction of $F$ to $C$ is a $\overline{\lambda}$-rotation.
We can therefore define $(\tilde z, \tilde w):=\chi(z,w)=(z-\psi_2(w), w-\psi_1(z))$. By construction, $\chi$ is a germ of biholomorphism at $(0,0)$ and $\chi(z,w)=(z,w)+O(\|(z,w)\|^l)$. Moreover, the conjugate germ $\tilde F(\tilde z, \tilde w):=(\chi \circ F \circ \chi^{-1})(\tilde z,\tilde w)$ satisfies our thesis. Indeed, $\tilde{z}=0$ corresponds to $z-\psi_2(w)=0$, and since $F$ leaves such a curve invariant and it is a $\overline{\lambda}$-rotation on it, it follows that $\tilde F(0, \tilde w)=(0,\overline{\lambda}\tilde w)$. A similar argument proves that $\tilde F( \tilde z, 0)=(\lambda \tilde z,0)$.
The last ingredient in the proof of Theorem \[Fatou-Omega\] is the following fact which can be easily proved via standard estimates:
\[Lem:metric\] Let ${\mathbb D}^\ast=\{\zeta\in {\mathbb C}: 0<|\zeta|<1\}$. Let $k_{{\mathbb D}^\ast}$ denote the hyperbolic distance in ${\mathbb D}^\ast$. Let $$g(\zeta,\xi):=2\pi \max\left\{-\frac{1}{\log |\zeta|},-\frac{1}{\log |\xi|}\right\}.$$ Then for all $\zeta, \xi\in {\mathbb D}^\ast$ it holds $$\left|\log\frac{\log|\zeta|}{\log|\xi|} \right|- g(\zeta,\xi)\leq k_{{\mathbb D}^\ast}(\zeta, \xi)\leq \left|\log\frac{\log|\zeta|}{\log|\xi|} \right|+g(\zeta,\xi).$$
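For the reader's convenience, here is where the two terms in Lemma \[Lem:metric\] come from; this is only a sketch, stated for the hyperbolic metric of curvature $-1$ (other normalizations change the estimates by universal constant factors only). The hyperbolic density of ${\mathbb D}^\ast$ is $$\frac{|d\zeta|}{|\zeta|\log\frac{1}{|\zeta|}},$$ so integrating along a radius between the moduli $|\zeta|$ and $|\xi|$ gives exactly $\left|\log\frac{\log|\zeta|}{\log|\xi|}\right|$, while going once around the circle of modulus $|\zeta|$ costs $\frac{2\pi}{\log\frac{1}{|\zeta|}}$, which is the origin of the correction term $g(\zeta,\xi)$.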
Now we are in a good shape to state and prove the main result of this section:
\[Fatou-Omega\] Let $F$ be an automorphism of ${\mathbb C}^2$ of the form \eqref{automFp}. If $\lambda$ is Brjuno, then $\Omega=V$.
Assume by contradiction that the statement is not true. Hence, there exists $q_0\in V\setminus \Omega$. Let $p_0\in \Omega$, and let $Z$ be an open connected set containing $p_0$ and $q_0$ and such that $\overline{Z}\subset V$.
By Lemma \[Brunocoord\], since $\lambda$ is Brjuno, there exists an open neighborhood $U$ of $(0,0)$ and a biholomorphism $\chi:U \to \chi(U)$, such that \eqref{F-bruno} holds for all $(\tilde z,\tilde w)\in \chi(U)$. Up to rescaling, we can assume that $${\mathbb B}^2:=\{(\tilde z,\tilde w)\in {\mathbb C}^2: |\tilde z|^2+|\tilde w|^2<1\}\subset \chi(U).$$ Since $\{F^n\}$ converges uniformly to $(0,0)$ on $\overline{Z}$, up to replacing $F$ with $F^m$ for some fixed $m\in {\mathbb N}$, we may assume that $Q:=\cup_{n\in {\mathbb N}}F^n(Z)$ satisfies $\tilde Q:=\chi(Q)\subset {\mathbb B}^2$.
The axes $\tilde z$ and $\tilde w$ are $\tilde F$-invariant and $\tilde F$ is a rotation once restricted to the axes, therefore $$\tilde Q\subset {\mathbb B}^2_\ast:={\mathbb B}^2\setminus(\{\tilde z=0\}\cup \{\tilde w=0\}).$$
Given a complex manifold $M$, we denote by $k_M$ its Kobayashi distance. By construction, for every $\delta>0$, one can find $p\in Z\cap \Omega$ and $q\in Z\cap (V\setminus \Omega)$ such that $k_Q(p,q)\leq k_Z(p,q)<\delta$. Let $\tilde p:=\chi(p)$ and $\tilde q:=\chi(q)$. Hence, $k_{\tilde Q}(\tilde p, \tilde q)<\delta$. Thus, since $\tilde F(\tilde Q)\subset \tilde Q$ by construction, and $\tilde{Q}\subset {\mathbb B}^2_\ast$, it follows that for all $n\in {\mathbb N}$, $$\label{delta-sub}
k_{{\mathbb B}^2_\ast}(\tilde F^n(\tilde p), \tilde F^n(\tilde q))\leq k_{\tilde Q}(\tilde F^n(\tilde p), \tilde F^n(\tilde q))<\delta.$$ Now, since $q\not\in \Omega$, by Lemma \[go-good-down\], there is no $\beta\in (0,1/2)$ with $\beta(l+1)>2$ such that $\{F^n(q)\}\subset W(\beta)$ eventually. We claim that the same happens to $\{\tilde F^n (\tilde q)\}$. Indeed, if there existed $\beta\in (0,1/2)$ with $\beta(l+1)>2$ such that $\{\tilde F^n (\tilde q)\}\subset W(\beta)$ eventually, taking $\beta'\in (0,\beta)$ so that $\beta'(l+1)>2$, Lemma \[Change-of-coordinates\] applied to $\chi^{-1}(\tilde z, \tilde w)=(\tilde z, \tilde w)+O(\|(\tilde z, \tilde w)\|^l)$ would imply that $\{F^n(q)\}\subset W(\beta')$ eventually, contradicting our assumption.
Therefore, fixing $\beta\in (0,1/2)$ with $\beta(l+1)>2$, we can assume, without loss of generality, that there exists an increasing subsequence $\{n_k\}\subset {\mathbb N}$ tending to $\infty$ such that, setting $(\tilde z_n(\tilde q),\tilde w_n(\tilde q)):=\tilde F^n(\tilde q)$, for all $n_k$ it holds $|\tilde z_{n_k}(\tilde q)|\geq |\tilde z_{n_k}(\tilde q)\tilde w_{n_k}(\tilde q)|^{\beta}$, that is $$\label{Eq:bad-go}
|\tilde w_{n_k}(\tilde q)|\leq |\tilde z_{n_k}(\tilde q)|^{\frac{1-\beta}{\beta}}.$$ On the other hand, by Lemma \[go-good-down\], $\{F^n(p)\}\subset W(\beta)$ eventually for all $\beta\in (0,1/2)$ such that $\beta(l+1)>2$. Hence, by Lemma \[Change-of-coordinates\], it follows that $\{\tilde F^n(\tilde p)\}\subset W(\beta')$ eventually for all $\beta'\in (0,\beta)$. Since this holds for all $\beta\in (0,1/2)$ such that $\beta(l+1)>2$, we obtain that $\{\tilde F^n(\tilde p)\}\subset W(\beta)$ eventually. Therefore, again by Lemma \[go-good-down\], there exist $0<c<C$ and $\tilde n>0$ such that for all $n\ge \tilde n$ $$\label{Eq:good-go}
c|\tilde z_n(\tilde p)|\leq |\tilde w_n(\tilde p)|\leq C |\tilde z_n(\tilde p)|.$$
Consider the holomorphic projections $\pi_1:{\mathbb B}^2_\ast\to {\mathbb D}^\ast$ given by $\pi_1(\tilde z, \tilde w)=\tilde z$, and $\pi_2:{\mathbb B}^2_\ast\to {\mathbb D}^\ast$ given by $\pi_2(\tilde z, \tilde w)=\tilde w$. By the properties of the Kobayashi distance, $k_{{\mathbb D}^\ast}(\pi_j(A), \pi_j(B))\leq k_{{\mathbb B}^2_\ast}(A, B)$ for every $A, B\in {\mathbb B}^2_\ast$. Hence, by , for all $n_k$, $$\label{eq:stima delta}
k_{{\mathbb D}^\ast}(\tilde z_{n_k}(\tilde p), \tilde z_{n_k}(\tilde q))<\delta, \quad k_{{\mathbb D}^\ast}(\tilde w_{n_k}(\tilde p), \tilde w_{n_k}(\tilde q))<\delta.$$ Thanks to and Lemma \[Lem:metric\], since the orbit of $\tilde p$ converges to the origin, there exists $k_0\in {\mathbb N}$ such that for all $n_k\geq k_0$, $$k_{{\mathbb D}^\ast}(\tilde z_{n_k}(\tilde p), \tilde w_{n_k}(\tilde p))\leq \left|\log\frac{\log|\tilde z_{n_k}(\tilde p)|}{\log|\tilde w_{n_k}(\tilde p)|} \right|+g(\tilde z_{n_k}(\tilde p),\tilde w_{n_k}(\tilde p))<\delta.$$ Hence, by and the triangle inequality, for all $n_k\geq k_0$, $$\label{eq:stimona1}
k_{{\mathbb D}^\ast}(\tilde z_{n_k}(\tilde q), \tilde w_{n_k}(\tilde p))\leq k_{{\mathbb D}^\ast}(\tilde z_{n_k}(\tilde q), \tilde z_{n_k}(\tilde p))+k_{{\mathbb D}^\ast}(\tilde z_{n_k}(\tilde p), \tilde w_{n_k}(\tilde p))<2\delta.$$ On the other hand, let $k_1\in {\mathbb N}$ be such that, for all $n_k\geq k_1$, $$g(\tilde z_{n_k}(\tilde q),\tilde w_{n_k}(\tilde q))<\delta,$$ where $g$ is the function defined in Lemma \[Lem:metric\]. By the same lemma and $$\label{eq:stimona2}
\begin{split}
k_{{\mathbb D}^\ast}(\tilde z_{n_k}(\tilde q), \tilde w_{n_k}(\tilde q))
&\geq \left|\log\frac{\log|\tilde z_{n_k}(\tilde q)|}{\log|\tilde w_{n_k}(\tilde q)|} \right|-g(\tilde z_{n_k}(\tilde q),\tilde w_{n_k}(\tilde q))\\
&\geq \log\left(\frac{\log|\tilde z_{n_k}(\tilde q)|^{\frac{1-\beta}{\beta}}}{\log|\tilde z_{n_k}(\tilde q)|} \right)-\delta= \log \frac{1-\beta}{\beta} -\delta.
\end{split}$$ The triangle inequality, together with and yield that for $n_k\geq \max\{k_0, k_1\}$ $$\begin{split}
k_{{\mathbb D}^\ast}(\tilde w_{n_k}(\tilde p), \tilde w_{n_k}(\tilde q))&\geq k_{{\mathbb D}^\ast}(\tilde z_{n_k}(\tilde q), \tilde w_{n_k}(\tilde q))-k_{{\mathbb D}^\ast}(\tilde z_{n_k}(\tilde q), \tilde w_{n_k}(\tilde p))\\&\geq \log \frac{1-\beta}{\beta} -3\delta.
\end{split}$$ Therefore, by \eqref{eq:stima delta}, $$4\delta\geq \log \frac{1-\beta}{\beta},$$ giving a contradiction since $\frac{1-\beta}{\beta}>1$ is fixed and $\delta>0$ is arbitrary.
The proof of Theorem \[main\] for $k=2$
=======================================
Let $F$ be an automorphism of the form \eqref{automFp}, and assume that $\lambda$ is Brjuno. By Theorem \[Fatou-Omega\], $\Omega$ is an invariant attracting Fatou component at $(0,0)$ and $\Omega$ is biholomorphic to ${\mathbb C}\times {\mathbb C}^\ast$ by Proposition \[CxCstar\].
The case $k\ge 3$
=================
In the general case, $k\ge 3$, we start with a germ of biholomorphism of ${\mathbb C}^k$ at the origin of the form $$\label{form-intro-gen}
F_N(z_1,\dots,z_k)=\left(\lambda_1 z_1\left(1 - \frac{z_1\cdots z_k}{k}\right), \dots,
\lambda_k z_k\left(1 - \frac{z_1\cdots z_k}{k}\right)\right),$$ where
1. each $\lambda_j\in {\mathbb C}$, $|\lambda_j|=1$, is not a root of unity for $j=1,\dots, k$,
2. the $k$-tuple $(\lambda_1,\dots, \lambda_k)$ is [*one-resonant with index of resonance $(1,\dots,1)\in{\mathbb N}^k$*]{} in the sense of [@BZ Definition 2.3], that is all the resonances $\lambda_j - \lambda_1^{m_1}\cdots\lambda_k^{m_k}=0$, for $j=1,\dots, k$, are precisely of the form $\lambda_j = \lambda_j\cdot\left(\lambda_1\cdots\lambda_k\right)^{m}$ for some $m\ge 1$,
3. the $k$-tuple $(\lambda_1,\dots, \lambda_k)$ is [*admissible*]{} in the sense of Pöschel (see [@Po]), that is we have $$\sum_{n=0}^{+\infty}{\frac{1}{2^n}}\log{\frac{1}{\omega_j(2^{n+1})}}<+\infty\;,~\hbox{for}~j=1,
\dots, k$$ where $\omega_j(m) = \min_{2\le h\le m} \min_{1\le i\le k}|\lambda_j^h - \lambda_i|$ for any $m\ge 2$.
Thanks to a result of B. J. Weickert [@W1] and F. Forstnerič [@F], for any large $l\in {\mathbb N}$ there exists an automorphism $F$ of ${\mathbb C}^k$ such that $$\label{Eq-motiv}
F(z_1,\dots, z_k)-F_N(z_1,\dots, z_k)=O(\|(z_1,\dots, z_k)\|^l).$$ Moreover, thanks to [@BZ Theorem 1.1], given $\beta\in (0,\frac{1}{k})$ and $l\in{\mathbb N}$, $l\ge 4$ such that $\beta(l+1)\ge 4$, for every $\theta\in (0,\frac{\pi}{2})$, there is $R>0$ such that the open set $$B:=\{(z_1,\dots, z_k)\in {\mathbb C}^k: u:= z_1\cdots z_k\in S(R,\theta), |z_j|<|u|^\beta~\hbox{for}~j=1,\dots, k\},$$ is non-empty, forward invariant under $F$, the origin is on the boundary of $B$ and we have $\lim_{n\to \infty}F^n(p)=0$ for all $p\in B$, uniformly on compacta. Arguing as in Lemma \[go-good-down\] we obtain that for each $p\in B$, we have that $\lim_{n\to \infty} nu_n
=1$ and $|\pi_j(F^n(p))|\sim n^{-1/k}$, for $j=1,\dots, k$, where $\pi_j$ is the projection on the $j$th coordinate. Moreover, the analogue of the statement of Proposition \[BRZ\] holds for $k\ge 3$ (see also [@BRZ]) allowing to define a local Fatou coordinate $\psi \colon B\to {\mathbb C}$ such that $\psi\circ F = \psi+1$ with the required properties.
Now we need $k-1$ other local coordinates $\sigma_2, \dots, \sigma_{k}$. For $2\le j\le k$, $\sigma_j\colon B\to {\mathbb C}$ is defined as the uniform limit on compacta of the sequence $\{\sigma_{j,n}\}_n$ where $$\sigma_{j,n}(z_1,\dots, z_k):= (\lambda_j\dots\lambda_k)^{-n} \Pi_j(F^n(z_1,\dots, z_k)) \exp\left({\frac{k-j+1}{k}\sum_{m=0}^{n-1} \frac{1}{\psi(z_1,\dots, z_k)+m}}\right),$$ and $\Pi_j\colon {\mathbb C}^k\to{\mathbb C}$ is defined as $\Pi_j(z_1,\dots, z_k) := z_j\cdots z_k$. The map $\sigma_j$ satisfies the functional equation $$\sigma_j\circ F = {\lambda_j\cdots\lambda_k} e^{-\frac{k-j+1}{k\psi}}\sigma_j.$$
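A sketch of the verification of this functional equation, parallel to the computation in the proof of Proposition \[Prop:second-local-coord\]: using $\psi\circ F=\psi+1$ one gets $$\sigma_{j,n}\circ F=(\lambda_j\cdots\lambda_k)^{-n}\,\Pi_j(F^{n+1}(z_1,\dots,z_k))\,\exp\left(\frac{k-j+1}{k}\sum_{m=1}^{n}\frac{1}{\psi+m}\right)={\lambda_j\cdots\lambda_k}\, e^{-\frac{k-j+1}{k\psi}}\,\sigma_{j,n+1},$$ and the equation follows by letting $n\to \infty$.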
Let $\Omega:=\cup_{n\ge 0} F^{-n}(B)$. Arguing like in dimension $2$, one can prove that $H^{k-1}(\Omega,{\mathbb C})\ne 0$. Using the functional equation we can extend $\psi$ to a map $g_1\colon \Omega \to {\mathbb C}$. Moreover, set $H:=g_1(B)$ and $\Omega_0:=g_1^{-1}(H)$. For $j=2,\dots, k$, we can extend $\sigma_j$ to $\Omega_0$ by setting, for any $p\in \Omega_0$, $$g_j(p) = (\lambda_j\cdots\lambda_k)^{-n} \exp{\left(\frac{k-j+1}{k}\sum_{m=0}^{n-1}\frac{1}{g_1(p)+m}\right)}\sigma_j(F^n(p))$$ where $n\in{\mathbb N}$ is so that $F^n(p)\in B$. As in dimension $2$, the map $\Omega_0\ni p\mapsto G(p):=(g_1(p),\dots, g_k(p))\in H\times {\mathbb C}^{k-1}$ is univalent with image $H\times ({\mathbb C}^*)^{k-1}$. In fact, we can use coordinates $$(u, y_2, \dots, y_k) := (z_1\cdots z_k,z_2\cdots z_k, \dots, z_k),$$ in $B$ so that we have $$B = \{u\in S(R,\theta), |u|^{1-k\beta}<|y_k|<|u|^\beta,~|u|^{1-j\beta}<|y_j|<|u|^\beta |y_{j+1}|~\hbox{for}~j=2,\dots, k-1\}.$$ Following the proof of Proposition \[HxCstar\], since, for $p\in \Omega_0$, $\lim_{n\to \infty} nu_n=1$ and $|\Pi_j(F^n(p))|\sim n^{-(k-j+1)/k}$ for $j=2,\dots, k$ one can see that for any $a\in H$ and $b_k\in{\mathbb C}^*$ there is a point $p\in \Omega_0$ such that $g_1(p)=a$ and $g_k(p)=b_k$. Now fix $a\in H$ and $b_k\in{\mathbb C}^*$. Using $$|u|^{1-(k-2)\beta}<|y_{k-1}|<|u|^\beta |y_{k}|$$ one sees that ${\mathbb C}^*\subseteq g_{k-1}(g_1^{-1}(a)\cap g_k^{-1}(b_k))$, and so on for every $j=2,\dots, k-2$. Therefore $G(\Omega_0)=H\times ({\mathbb C}^*)^{k-1}$, and as in Proposition \[CxCstar\] we see that $g_1\colon \Omega\to{\mathbb C}$ is a holomorphic fiber bundle map with fiber $({\mathbb C}^*)^{k-1}$. Since the transition functions belong to ${\rm GL}_{k-1}({\mathbb C})$, by [@Franz Corollary 8.3.3] we obtain that $\Omega$ is biholomorphic to ${\mathbb C}\times({\mathbb C}^*)^{k-1}$.
Finally, assuming the $k$-tuple $(\lambda_1,\dots, \lambda_k)$ to be admissible in the sense of Pöschel [@Po], we can locally choose coordinates as in Lemma \[Brunocoord\] so that the Fatou component $V$ containing $\Omega$ cannot intersect the coordinate axes in a small neighborhood of the origin. Hence using the estimates for the Kobayashi distance as done in Theorem \[Fatou-Omega\], one can show that $V=\Omega$.
[10]{} M. Abate, [*Discrete holomorphic local dynamical systems*]{}. [**Holomorphic dynamical systems**]{}, Eds. G. Gentili, J. Guenot, G. Patrizio, Lect. Notes in Math. 1998, Springer, Berlin, 2010, pp. 1-55.
A.F. Beardon, D. Minda, [*The hyperbolic metric and geometric function theory*]{}. [**Quasiconformal mappings and their applications**]{}, 9–56, Narosa, New Delhi, (2007).
F. Bracci, D. Zaitsev, [*Dynamics of one-resonant biholomorphisms*]{}. J. Eur. Math. Soc., 15, 1, (2013), 179–200.
F. Bracci, J. Raissy, D. Zaitsev, [*Dynamics of multi-resonant biholomorphisms*]{}. Int. Math. Res. Not., 20 (2013), 4772–4797.
A.D. Brjuno, [*Analytic form of differential equations. I.*]{}, Trans. Mosc. Math. Soc. [**25**]{}, (1971), 131–288.
F. Forstnerič, [*Interpolation by holomorphic automorphisms and embeddings in ${\mathbb C}^n$*]{}. J. Geom. Anal. 9, 1, (1999), 93–117.
F. Forstnerič, [**Stein manifolds and holomorphic mappings. The homotopy principle in complex analysis**]{}. Second edition. Springer, Cham, 2017.
F. Forstnerič, E. F. Wold, [*Runge tubes in Stein manifolds with the density property*]{}, to appear in Proc. Amer. Math. Soc., https://doi.org/10.1090/proc/14309, arXiv:1801.07645.
M. Hakim, [*Analytic transformations of $({\mathbb C}^p, 0)$ tangent to the identity*]{}, Duke Math. J. 92 (1998), 403–428.
L. Hörmander, [**An introduction to complex analysis in several variables**]{}, Third edition. North-Holland Mathematical Library, 7. North-Holland Publishing Co., Amsterdam, 1990.
M. Lyubich, H. Peters, [*Classification of invariant Fatou components for dissipative Hénon maps*]{}. Geom. Funct. Anal., 24, (2014), 887–915.
H. Peters, L. Vivas, E. Fornæss Wold, [*Attracting basins of volume preserving automorphisms of ${\mathbb C}^k$*]{}. Internat. J. Math., Vol. 19 (2008), no. 7, 801–810.
J. Pöschel, [*On invariant manifolds of complex analytic mappings near fixed points*]{}. Expo. Math. 4 (1986), 97–109.
J.P. Rosay, W. Rudin, [*Holomorphic maps from ${\mathbb C}^n$ to ${\mathbb C}^n$*]{}. Trans. Amer. Math. Soc., 310, (1988), 47–86.
J.P. Serre, [*Une propriété topologique des domaines de Runge*]{}. Proc. Amer. Math. Soc. 6, (1955), 133–134.
B. Stensønes, L. Vivas, [*Basins of attraction of automorphisms in ${\mathbb C}^3$*]{}. Ergodic Theory Dynam. Systems. 34, (2014), 689–692.
T. Ueda, [*Local structure of analytic transformations of two complex variables I*]{}. J. Math. Kyoto Univ., 26, (1986), 233–261.
B. J. Weickert, [*Attracting basins for automorphisms of ${\mathbb C}^2$*]{}. Invent. Math., 132, (1998), 581–605.
[^1]: $^\diamondsuit$Partially supported by the ERC grant “HEVO - Holomorphic Evolution Equations” n. 277691 and the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C18000100006
[^2]: $^\spadesuit$Partially supported by the ANR project LAMBDA, ANR-13-BS01-0002 and by the FIRB2012 grant “Differential Geometry and Geometric Function Theory”, RBFR12W1AQ 002.
[^3]: $^\clubsuit$Partially supported by the FRIPRO Project n.10445200
[^4]: $^\star$Members of the 2016-17 CAS project [*Several Complex Variables and Complex Dynamics*]{}.
|
---
abstract: 'Given a bridgeless graph $G$, the Cycle Double Cover Conjecture posits that there is a list of cycles of $G$ such that every edge appears in exactly two cycles. The conjecture was posed independently by Szekeres in 1973 and by Seymour in 1979. We here present a proof of this conjecture by analyzing certain kinds of cycles in the line graph of $G$. Further, in the case that $G$ is 3-regular, we prove the stronger conjecture that, given a bridgeless graph $G$ and a cycle $C$ in $G$, there exists a cycle double cover of $G$ containing $C$.'
author:
- Mary Radcliffe
bibliography:
- 'bib\_items.bib'
title: A Proof of the Cycle Double Cover Conjecture
---
Introduction {#S:intro}
============
The Cycle Double Cover Conjecture (CDCC) was originally posed independently by Szekeres [@szekeres1973polyhedral] in 1973 and Seymour [@seymour1979sums] in 1979. The conjecture is as follows:
If $G$ is a bridgeless graph, then there is a list of cycles $\mathcal{C}$ in $G$ such that every edge appears in exactly two cycles in $\mathcal{C}$.
Such a list of cycles is typically referred to as a cycle double cover. Much effort has been spent on resolving this conjecture, and an excellent history of approaches to the problem can be found in several survey papers [@chan2009survey; @jaeger1985survey]. In [@jaeger1985survey], it is shown that it is sufficient to prove that every 3-regular graph has a cycle double cover, and it is this theorem that we prove in this work. Specifically, we show
If $G$ is a cubic, bridgeless graph, then there is a list of cycles $\mathcal{C}$ in $G$ such that every edge appears in exactly two cycles in $\mathcal{C}$.
In [@cai1992cycle], an approach to the CDCC is considered in which, rather than find cycle double covers in the graph $G$, one can instead find cycle double covers in the line graph $L(G)$. We here adapt this technique, and show that it is sufficient to produce a cycle decomposition in $L(G)$, rather than a double cover, where the cycles in the decomposition satisfy some particular properties. Using this adapted technique, we provide a full proof that every 3-regular, bridgeless graph has a cycle double cover.
Specifically, rather than consider the line graph alone, we color the edges of the line graph $L(G)$ with the vertices $V(G)$, so that for each $v\in V(G)$ there exists a monochromatic triangle in $L(G)$ of color $v$. We then note that cycles in $G$ correspond directly to rainbow cycles in $L(G)$, and hence a rainbow cycle decomposition in $L(G)$ corresponds to a cycle double cover in $G$. We note here that this is only true in the case of a 3-regular graph $G$, as for a 3-regular graph, any rainbow cycle decomposition in $L(G)$ will use each vertex of $L(G)$ exactly twice; that is, such a decomposition will use each edge of $G$ exactly twice.
The proof is obtained by induction on a larger class of edge-colored graphs, of which the line graph of any bridgeless cubic graph is a member. This class of graphs is formally defined in Section \[S:setup\] below. Fundamentally, the only barrier to producing a cycle decomposition in this larger class of graphs will be a vertex that behaves like a bridge: it is a cut vertex, and has all its edges of one color into one block, and all edges of another color into another block. It is in forbidding this type of vertex that the work of the proof is found.
As we will note in Section \[S:conclusions\], our proof technique in fact resolves a stronger conjecture, due to Goddyn, in the case of 3-regular graphs. Specifically, Goddyn’s conjecture is as follows.
\[C:Goddyn\] If $G$ is a bridgeless graph, and $C$ is a cycle in $G$, then there exists a cycle double cover of $G$ containing $C$.
Our technique will immediately yield the following partial resolution to Conjecture \[C:Goddyn\].
If $G$ is a cubic bridgeless graph, and $C$ is a cycle in $G$, then there exists a cycle double cover of $G$ containing $C$.
This paper is organized as follows. First, in Section \[S:defs\], we introduce the basic notations, concepts, and language we shall require. In Section \[S:outline\], we outline the proof of the CDCC, and reduce to an equivalent condition on a particular class of edge-colored graphs (Theorem \[T:mainthm\]). In Section \[S:proof\], we prove Theorem \[T:mainthm\]. Finally, in Section \[S:conclusions\], we conclude with some generalizations and conjectures that might be addressed using a similar technique.
Definitions and Notations {#S:defs}
=========================
We here outline the basic tools and notations we shall require for this proof. Any language or notation not defined in this section, as well as any basic facts and observations made, can be found in [@chartrand2010graphs].
As our primary object of study here will be cycles, we make a note on the notation used to describe cycles. Primarily, we shall write a path as $P=u_1, u_2, \dots, u_j$, and a cycle as $C=(v_1, v_2, \dots, v_k, v_1)$, where the $v_i$ are the vertices involved. At times, we shall use the notation $C=(v_1, v_2, \dots, v_i, u_1, P, u_j, \dots, v_k, v_1)$ to indicate the cycle $C=(v_1, v_2, \dots, v_i, u_1, u_2, \dots, u_j, \dots, v_k, v_1)$. That is, inserting $P$ into the cycle implies that we take all internal vertices of $P$ as members of the cycle. In these cases, we include the endpoints $u_1$ and $u_j$ in the cycle definition to indicate the direction along which the path is followed.
On some occasions, we may refer to a cycle by its edges, in the form $C=(e_1, e_2, \dots, e_n)$. We shall routinely abuse notation and write $e\in C$ to indicate that the edge $e$ appears on the cycle $C$, even if the cycle is presented in vertex notation. We shall also occasionally write $C\subset E(G)$ to describe the cycle, as $C$ can be uniquely defined by the set of edges that appear in $C$.
Given a graph $G$ and a subgraph $H\subset G$, we write $G{\backslash}H$ to indicate the subgraph of $G$ defined by $V(G{\backslash}H)=V(G)$ and $E(G{\backslash}H)=E(G){\backslash}E(H)$. Given a vertex $v$, we write $G{\backslash}\{v\}$ as the subgraph of $G$ obtained by deleting $v$ from $V(G)$, and deleting all edges from $E(G)$ that include the vertex $v$.
In an edge-colored graph $G$, given a color $c$, we define the [*color class*]{} of $G$ corresponding to $c$ to be the set of edges in $G$ having color $c$. In this way, the color classes partition the edges of $G$. We will at times refer to the color class for a given color as a subgraph of $G$, by taking the subgraph having exactly these edges. A subgraph of $G$ is called [*rainbow*]{} if no two edges in the subgraph belong to the same color class. A subgraph of $G$ is called [*monochromatic*]{} if every edge in the subgraph belongs to the same color class. Throughout, if $G$ is a colored graph, and $H$ is a subgraph of $G$, we will assume that $H$ is also colored by the induced coloring from $G$.
Given a graph $G$, we define the colored line graph of $G$ to be the edge-colored graph $L=L(G)$, having
- $V(L) = E(G)$
- $e\sim_{L} f$ if and only if $e$ and $f$ share a vertex in $G$
- The edges of $L$ are colored by $V(G)$, where $c(ef)=v$ whenever $e$ and $f$ share the vertex $v$ in $G$.
Note that in any line graph colored in this way, the color classes form cliques in $L$, and each vertex of $L$ is a member of exactly two such cliques. In particular, since $G$ is 3-regular, the color classes here will be copies of $K_3$. Moreover, each vertex in $L$ will be a member of exactly two distinct color classes, so each vertex in $L$ is a member of exactly two monochromatic triangles, and has degree 4. Given any edge-colored graph, we shall say that $v$ has [*color degree*]{} $k$ if the number of colors assigned to the incident edges of $v$ is exactly $k$. Hence, in a line graph, every vertex has color degree 2. As for our purposes, the line graph of $G$ will always be colored, we shall omit the adjective “colored” and simply write “line graph” to mean the edge-colored line graph. We first make the following simple observation regarding triangles in $G$.
\[monos\] If $L$ is the colored line graph of a cubic graph $G$, and $G$ has no triangles, then the only triangles in $L$ are the color classes.
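As a concrete illustration (not used in any of the arguments below), the following Python sketch builds the colored line graph of the triangle-free cubic graph $K_{3,3}$ and verifies the properties just described, including the observation above; all names in the sketch are our own.

```python
from itertools import combinations
from collections import defaultdict

def colored_line_graph(edges):
    """Colored line graph of G: its vertices are the edges of G, two of them
    are adjacent whenever they share a vertex v of G, and that L-edge is
    colored by v."""
    L = {}
    for e, f in combinations(edges, 2):
        shared = set(e) & set(f)
        if shared:
            L[frozenset((e, f))] = shared.pop()
    return L

# K_{3,3} is cubic and triangle-free.
G_edges = [(a, b) for a in (0, 1, 2) for b in (3, 4, 5)]
L = colored_line_graph(G_edges)

classes = defaultdict(list)    # color (a vertex of G) -> L-edges of that color
colors_at = defaultdict(set)   # L-vertex (an edge of G) -> incident colors
deg = defaultdict(int)
for le, c in L.items():
    classes[c].append(le)
    for e in le:
        colors_at[e].add(c)
        deg[e] += 1

assert all(len(cls) == 3 for cls in classes.values())   # color classes are triangles
assert all(len(cs) == 2 for cs in colors_at.values())   # every vertex has color degree 2
assert all(d == 4 for d in deg.values())                 # L is 4-regular

# Since K_{3,3} has no triangles, every triangle of L is monochromatic.
adj = defaultdict(set)
for le in L:
    e, f = tuple(le)
    adj[e].add(f)
    adj[f].add(e)
for e, f, g in combinations(G_edges, 3):
    if f in adj[e] and g in adj[e] and g in adj[f]:
        assert len({L[frozenset((e, f))], L[frozenset((e, g))], L[frozenset((f, g))]}) == 1
```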
The following lemma is a key ingredient for the proof of the CDCC.
\[ltog\] Let $C=(e_1, e_2, \dots, e_k)$. Then $C$ is a rainbow cycle in $L$ if and only if $C$ is a cycle in $G$.
For the forward direction, we need only show that in $G$, the cycle $C$ does not reuse any vertex. Let $v_i = c(e_ie_{i+1})$, where $i$ is taken modulo $k$. Then we may write $C$ in its edge form in $L$ as $C=(v_1, v_2, \dots, v_k)$. Moreover, as $C$ is rainbow, the set $\{v_1, \dots, v_k\}$ has $k$ distinct colors. But then it is clear that $(v_1, v_2, \dots, v_k)$ is a cycle in $G$, having edges $e_1, e_2, \dots, e_k$, and hence every rainbow cycle in $L$ is also a cycle in $G$.
The other direction is similar.
Combining Lemma \[ltog\] with Observation \[monos\], we have that if $G$ is a cubic graph, then the only triangles in $L$ are either monochromatic or rainbow. This piece of structure will be fundamental to our main proof.
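The correspondence of Lemma \[ltog\] is equally mechanical to write down; a small illustrative sketch, with a function name of our own choosing:

```python
def cycle_in_L_from_cycle_in_G(cycle):
    """Given a cycle in G as (v_1, ..., v_k, v_1), return the corresponding
    rainbow cycle of L as a list of triples (e, f, color), where e and f are
    consecutive edges of G on the cycle and the color is their shared vertex."""
    vs = list(cycle[:-1])
    k = len(vs)
    g_edges = [frozenset((vs[i], vs[(i + 1) % k])) for i in range(k)]
    return [(g_edges[i], g_edges[(i + 1) % k], vs[(i + 1) % k]) for i in range(k)]

# Example: the 4-cycle (0, 3, 1, 4, 0) of K_{3,3} yields a 4-cycle of L whose
# L-edges carry the four distinct colors 3, 1, 4, 0, i.e. a rainbow cycle.
print(cycle_in_L_from_cycle_in_G((0, 3, 1, 4, 0)))
```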
Contractions, subdivisions, and cuts
------------------------------------
Throughout the main proof, we shall frequently make use of ideas related to topological structure in graphs. Indeed, the proof will hinge on the analysis of a certain kind of cut-vertex. We here outline the basic concepts and observations that will feature in the proof.
Let $G$ be a connected graph. We say that $S\subset V(G)$ is a [*cut-set*]{} of $G$ if the graph $G{\backslash}S$ is disconnected. If $S$ consists of a single vertex $v$, we say that $v$ is a [*cut vertex*]{} of $G$. We recall the basic fact that a vertex $v$ is a cut-vertex of $G$ if and only if there exists a pair of vertices $u, w\in V(G)$, with $u, w\neq v$, such that every path from $u$ to $w$ includes the vertex $v$.
A graph $G$ is called [*2-connected*]{} if $G$ has no cut vertices. Equivalently, $G$ is 2-connected if for every pair of vertices $u, w\in V(G)$, there exist at least two paths from $u$ to $w$ that share no vertices other than the endpoints (such paths are called [*internally vertex disjoint*]{}).
A [*block*]{} in a graph $G$ is a maximal 2-connected subgraph of $G$. One can view a block as an induced 2-connected subgraph $H$, in which the addition of any vertex to $V(H)$ yields a graph that is not 2-connected. As a result, the edges of $G$ can be partitioned uniquely into subsets $E_1, E_2, \dots, E_k$, such that each subset $E_i$ induces a block, and the corresponding blocks are edge-disjoint. We note that any two blocks in $G$ can share at most one vertex, and this vertex must be a cut vertex. Let $G_1, G_2, \dots, G_k$ be the unique blocks of $G$. The [*block graph*]{} of $G$ is defined as the graph $B=B(G)$ having $V(B)=\{G_1, G_2, \dots, G_k\}$, and $G_i\sim_BG_j$ if and only if $V(G_i)\cap V(G_j)\neq\emptyset$. We recall the following basic fact.
Given a connected graph $G$, the block graph $B(G)$ is a tree.
We shall frequently wish to focus our analysis on a single cut vertex, even though the graph may have more than one cut vertex. In this case, we use the following language. Given a cut vertex $v$, define the [*pseudoblocks*]{} of $G$ corresponding to $v$ to be subgraphs $G_1$ and $G_2$, having $V(G_1)\cap V(G_2)=\{v\}$, $E(G_1)\cup E(G_2)=E(G)$, and $E(G_1)\cap E(G_2)=\emptyset$. That is, we divide $G$ into two subgraphs, such that these subgraphs share the vertex $v$, but share no edges. We note that this can be done if and only if $v$ is a cut vertex; if we consider the block graph $B$, we may obtain the subgraphs $G_1$ and $G_2$ by taking $G_1$ as the union of a chosen block containing $v$ with all of its descendants, and $G_2$ as the union of all remaining blocks. A pseudoblock decomposition is therefore not unique, but it always satisfies the property that there are no edges between the pseudoblocks.
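One simple way to produce a pseudoblock decomposition at a cut vertex $v$ is to take a single component of $G{\backslash}\{v\}$, together with its edges to $v$, as one side, and everything else as the other. The following sketch, purely illustrative and with names of our own choosing, does exactly this.

```python
def pseudoblocks(adj, v):
    """Return one pseudoblock decomposition (as two edge lists) of a connected
    graph at a cut vertex v: one component of G - v, together with its edges
    to v, versus everything else.  Illustrative sketch; integer vertices."""
    seen, comps = set(), []
    for s in adj:                       # components of G - v, by DFS
        if s == v or s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(w for w in adj[u] if w != v and w not in comp)
        seen |= comp
        comps.append(comp)
    assert len(comps) >= 2, "v is not a cut vertex"
    side1 = comps[0]
    G1, G2 = [], []
    for u in adj:
        for w in adj[u]:
            if u < w:                   # each undirected edge once
                (G1 if (u in side1 or w in side1) else G2).append((u, w))
    return G1, G2

# Example: a "bowtie" made of two triangles sharing the cut vertex 0.
bowtie = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3]}
print(pseudoblocks(bowtie, 0))
# -> ([(0, 1), (0, 2), (1, 2)], [(0, 3), (0, 4), (3, 4)])
```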
We also routinely use the following basic fact about even graphs, a standard exercise in graph theory.
If $G$ is an even graph, and $v$ has degree 2, then $v$ is not a cut vertex of $G$.
Similarly, an edge $e\in E(G)$ is called a [*bridge*]{} of $G$ if $G{\backslash}\{e\}$ is disconnected. As above, we have the following fact, another standard exercise.
If $G$ is an even graph, then $G$ has no bridge.
Let $G$ be a graph, and let $e=\{u, v\}\in E(G)$. The [*contraction of $G$ along $e$*]{} is the graph $H$ defined by $V(H)=V(G){\backslash}\{u, v\}\cup\{x\}$, where $x$ is a new vertex not found in $G$, and $$E(H)=E(G){\backslash}\{(y, z)\ |\ y=u\hbox{ or }v,\hbox{ or }z=u\hbox{ or }v\}\cup\{(x, y)\ | u\sim y\hbox{ or }v\sim y\}.$$ That is to say, we create $H$ by removing the vertices $u, v$, and replacing them with a new vertex $x$, having as its neighbors all the neighbors of $u$ or $v$. We note that if $u$ and $v$ have a common neighbor, this can create multiple edges in $H$; this will not come up in our proof.
Given a graph $G$ and a subgraph $R$, the contraction of $G$ along $R$ is the graph $H$ obtained by contracting (in any order) all the edges in $R$. More generally, we refer to $H$ as a contraction of $G$. We may view $H$ as obtained by partitioning the vertices of $G$ into subsets $S_1, S_2, \dots, S_k$, and placing an edge between $S_i$ and $S_j$ if and only if there are vertices $u\in S_i$ and $v\in S_j$ such that $u\sim_Gv$. We shall write $[u]$ to denote the subset of this partition containing $u$. Then we have the following observation.
\[contraction\] Suppose that $G$ is a connected graph, and $H$ is obtained from $G$ by the contraction of a triangle in $G$, such that this triangle is not a block of $G$. Suppose that $x$ is a cut vertex of $G$. Then $[x]$ is a cut vertex of $H$.
Let $v_1, v_2, v_3$ be the vertices of the triangle in $G$ that is contracted to form $H$. Let $G_1, G_2, \dots, G_k$ be the block decomposition of $G$. Then since the triangle $v_1, v_2, v_3$ is 2-connected, we have that there exists a block, say $G_1$, having $v_1, v_2, v_3\in V(G_1)$. Moreover, as this triangle is not itself a block of $G$, the block $G_1$ also contains at least one other vertex. Then $H$ has entirely the same structure as $G$, except that $G_1$ is replaced by the contraction of $G_1$ along the triangle $(v_1, v_2, v_3)$, and this contraction is not a single vertex. Hence, any cut vertex $x$ other than $v_1, v_2, v_3$ is still a cut vertex after the contraction. If one of $v_1, v_2, v_3$ is a cut vertex, say $v_1$, then for every vertex $x\neq v_1$ in $G_1$, and every vertex $y\neq v_1$ that is not in $G_1$, every path from $x$ to $y$ passes through $v_1$. Let $x\in V(G_1)$, with $x\neq v_1, v_2, v_3$. Then in $H$, every path from $x$ to $y$ passes through $[v_1]$, and hence $[v_1]$ is a cut vertex in $H$.
Let $G$ be a graph, and let $e=\{u,v\}\in E(G)$. The [*subdivision*]{} of $G$ at $e$ is the graph $H$ defined by $V(H)=V(G)\cup\{x\}$, where $x$ is a new vertex not found in $G$, and $E(H) = E(G){\backslash}\{u, v\}\cup\{u, x\}\cup\{v, x\}$. That is, $H$ is obtained from $G$ by removing the edge $e$ and replacing it with a length 2 path $uxv$. We have the following observation.
\[subdivide\] If $H$ and $G$ are both even graphs, and $H$ is obtained from $G$ by subdividing one edge, then $v$ is a cut vertex of $G$ if and only if $v$ is a cut vertex of $H$.
Note that this observation does not hold in general; if a graph is not even, then it could have a bridge, in which case subdividing the bridge produces a new cut vertex. However, in the case of an even graph, since no bridge may be present the observation holds.
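The contraction and subdivision operations just described are easy to make concrete; the following is a minimal sketch, with names of our own choosing, operating on edge lists of simple graphs with integer vertices.

```python
def contract_edge(edges, e, new_vertex):
    """Contract the edge e = (u, v): replace u and v by new_vertex and drop the
    contracted edge.  Any parallel edges that would arise are merged here; as
    noted above, that situation does not occur in our proof."""
    u, v = e
    out = set()
    for a, b in edges:
        if (a, b) in ((u, v), (v, u)):
            continue
        a = new_vertex if a in (u, v) else a
        b = new_vertex if b in (u, v) else b
        if a != b:
            out.add((min(a, b), max(a, b)))
    return sorted(out)

def subdivide_edge(edges, e, new_vertex):
    """Replace the edge e = (u, v) by the length-2 path u, new_vertex, v."""
    u, v = e
    rest = [f for f in edges if set(f) != {u, v}]
    return rest + [(u, new_vertex), (new_vertex, v)]

# e.g. contracting the edge (1, 2) of the triangle on {1, 2, 3} into vertex 9:
# contract_edge([(1, 2), (2, 3), (1, 3)], (1, 2), 9) == [(3, 9)]
```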
Proof outline and main ingredients {#S:outline}
==================================
In order to prove the main theorem, we first use Lemma \[ltog\] to obtain the following equivalent condition.
\[equiv\] A 3-regular graph $G$ has a cycle double cover if and only if its line graph $L$ has a decomposition of its edges into rainbow cycles.
If $L$ has a decomposition into rainbow cycles, using Lemma \[ltog\] gives the result immediately. As each edge of $L$ is used exactly once, and $L$ is 4-regular, we will thus have that each vertex of $L$ appears in precisely two cycles in the decomposition, i.e., each edge of $G$ appears in precisely two cycles in the decomposition.
For the other direction, suppose that $G$ has a cycle double cover $\mathcal{C}$. Consider a vertex $v$ with incident edges $vu$, $vw$, and $vx$. There will be precisely three cycles in $\mathcal{C}$ that include the vertex $v$, say $C_1, C_2, C_3$. Moreover, we have that each pairing $\{vu, vw\}$, $\{vw, vx\}$ and $\{vx, vu\}$ appears in exactly one of these cycles. Moreover, these pairings are precisely the edges of color $v$ in $L$, and thus, the corresponding rainbow cycles in $L$ use each edge with color $v$ exactly once, and no other cycle from $\mathcal{C}$ induces a cycle in $L$ including the color $v$. Hence, the cycles $\mathcal{C}$ are a rainbow cycle decomposition of the edges of $L$.
Hence, it suffices to show that every line graph of a 3-regular graph has a rainbow cycle decomposition. In fact, we shall prove something slightly more general. We shall define a class of graphs as [*good*]{} if they satisfy a collection of characteristics that will always be satisfied by line graphs of 3-regular graphs. In this way, we can inductively find rainbow cycles in these graphs, remove them, and provided that the resulting structure is good, find more. The basic structure of the proof will be as follows. First, we define good graphs by isolating the characteristics of line graphs that are important to the cycle decomposition. We then observe some of the basic properties of such graphs, and develop terminology to discuss them. We then show by induction that every good graph $G$ has a rainbow cycle decomposition. In order to do so, we shall focus on a piece of local structure in $G$, and show that there is always a way to define a strictly smaller good (or almost-good) graph by deleting or contracting edges, or removing vertices and rewiring their edges. The proof will be by cases, depending on which particular local structures are present in $G$, and the structures of graphs obtained by the manipulation of the local structure in $G$.
Fundamentally, the only obstruction we can have to finding a rainbow cycle in $G$ or a subgraph of $G$ is what we shall term a cut vertex of Type $X$: a cut vertex that has all of its edges of one color into one pseudoblock, and all of its edges of the other color into the other pseudoblock. Formally, we have the following definition.
Let $G$ be an even edge-colored graph with maximum degree at most 4. We define a vertex $v$ to be a [*cut vertex of type $X$*]{} if the following conditions are met:
- $v$ is a cut vertex.
- $v$ has degree 4.
- There exist pseudoblocks $G_1$ and $G_2$ at $v$ such that $v$ has two edges into $G_1$ and two edges into $G_2$, and the edges into each pseudoblock have the same color.
A cut vertex of type $X$ is shown in Figure \[F:cvtypeX\].
![A cut vertex of Type $X$. Note that there can be no rainbow cycles involving the central vertex, as necessarily a cycle must remain entirely in one of the pseudoblocks $G_1$ or $G_2$.[]{data-label="F:cvtypeX"}](typeX.pdf)
Notice that this is a clear obstacle to a rainbow cycle decomposition. If $v$ is a cut vertex of Type $X$, then the only cycles involving $v$ are those that remain entirely within the pseudoblocks $G_1$ and $G_2$, and hence must use two edges incident to $v$ in the same color class. Hence, $v$ and its incident edges cannot be members of any rainbow cycles. We note also that in the case that $v$ is a cut vertex of Type $X$ in an even graph $G$, we must have that there is a single component of $G_1{\backslash}\{v\}$ such that both edges from $v$ into $G_1$ have their other endpoint at a vertex in this component; if this is not true, then these edges would be bridges, and as mentioned above, no even graph may have a bridge.
Essentially, as our proof will show, this is the only obstacle to a rainbow cycle decomposition in $G$. We note that if $G$ is itself the line graph of a cubic graph, a cut vertex of Type $X$ in $G$ corresponds to a bridge in the original graph. Hence, we are isolating the “bridge-like” structure, and expressly forbidding it as we construct our decomposition. The difficulty of the proof lies not in finding a rainbow cycle in a good graph, but in showing that one can always find a rainbow cycle such that upon its removal, there is no cut vertex of Type $X$.
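To make the obstruction concrete, the following sketch (names of our own choosing, illustrative only) tests whether a degree-4 vertex is a cut vertex of Type $X$ by grouping the components of $G{\backslash}\{v\}$ according to the colors of the edges from $v$ into them; if the two groups are disjoint, they may be taken as the two pseudoblocks.

```python
def is_type_X_cut_vertex(adj, color, v):
    """Test whether v is a cut vertex of Type X: v has degree 4, its four edges
    carry exactly two colors (each twice), and no component of G - v receives
    edges of both colors from v, so that the components can be grouped into two
    pseudoblocks, each receiving two equally colored edges."""
    nbrs = list(adj[v])
    if len(nbrs) != 4:
        return False
    cols = [color[(v, w)] for w in nbrs]
    if len(set(cols)) != 2 or cols.count(cols[0]) != 2:
        return False
    comp_of, label = {}, 0              # label the components of G - v by DFS
    for s in adj:
        if s == v or s in comp_of:
            continue
        label += 1
        stack = [s]
        while stack:
            u = stack.pop()
            if u in comp_of:
                continue
            comp_of[u] = label
            stack.extend(w for w in adj[u] if w != v and w not in comp_of)
    comps_by_color = {}
    for w in nbrs:
        comps_by_color.setdefault(color[(v, w)], set()).add(comp_of[w])
    side1, side2 = comps_by_color.values()
    return side1.isdisjoint(side2)
```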
Main ingredients {#S:setup}
----------------
In this section, we define our fundamental structures, and prove the key lemmas that will allow us to prove the main theorem. We first begin with a full definition of a good colored graph. Throughout, given a colored graph $G$, we shall use $c:E(G)\to {\mathbb{R}}$ to denote the coloring on the edges, even if such function has not explicitly been defined.
Given an edge-colored graph $G$, we say $G$ is a [*good*]{} colored graph if the following conditions are met.
1. Every vertex of $G$ has even degree.\[inheritb\]
2. $G$ has maximum degree at most 4.
3. Every triangle in $G$ is either rainbow or monochromatic. \[monotri\]
4. Every nonisolated vertex of $G$ has color degree 2. \[bender\]
5. The subgraph induced by each color class has at most three vertices.\[inherite\]
6. $G$ has no cut-vertices of type $X$. \[nox\]
Note that these conditions force every vertex of $G$ to have at least two incident colors, and moreover, no color appears incident to a vertex $v$ more than two times (or else there would be more than three vertices incident to a given color). This implies that the subgraph induced by each color class is either $K_2$, $P_2$, or $K_3$; that is, either a single edge, a path of length two, or a 3-clique. Further, we classify the vertices of a good colored graph into two types: Type I, a vertex of degree 2, having its two edges of different colors, and Type II, a vertex of degree 4, having two edges each of two different colors. We observe the following.
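The first five conditions are straightforward to verify mechanically; a sketch, with names of our own choosing, for a graph given by an adjacency dictionary and a symmetric edge-coloring map:

```python
from collections import defaultdict

def satisfies_first_five_conditions(adj, color):
    """Check the first five conditions of a good colored graph; the last
    condition (no cut vertex of Type X) can be tested vertex by vertex with
    the earlier sketch.  Here adj maps each vertex to its list of neighbors
    and color[(u, w)] == color[(w, u)] gives the color of the edge uw."""
    degrees_ok = all(len(nb) % 2 == 0 and len(nb) <= 4 for nb in adj.values())
    color_degree_ok = all(len({color[(u, w)] for w in nb}) == 2
                          for u, nb in adj.items() if nb)
    span = defaultdict(set)                 # color -> vertices it touches
    for (u, w), c in color.items():
        span[c].update((u, w))
    classes_ok = all(len(s) <= 3 for s in span.values())
    triangles_ok = True                     # every triangle rainbow or monochromatic
    for u in adj:
        for w in adj[u]:
            for x in adj[w]:
                if x != u and x in adj[u]:
                    if len({color[(u, w)], color[(w, x)], color[(x, u)]}) == 2:
                        triangles_ok = False
    return degrees_ok and color_degree_ok and classes_ok and triangles_ok
```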
If $L$ is the line graph of a 3-regular, bridgeless graph, then $L$ is good.
Moreover, as we shall be primarily seeking rainbow cycles in good graphs, we note that removing a rainbow cycle from a good graph will automatically preserve almost every property of goodness.
\[heredity\] If $G$ is a good colored graph, and $R$ is a rainbow cycle in $G$, then $G{\backslash}R$ inherits properties \[inheritb\]-\[inherite\] from $G$.
Hence, if we remove a rainbow cycle from a good colored graph, in order to check if the resulting colored graph is good, we need only verify property \[nox\].
Let us consider some basic properties of good colored graphs. First, we shall examine good colored graphs for which every vertex is of Type II. We note that in this case, we must have that the subgraph induced by each color class is a triangle, as every vertex incident to that color class has exactly two edges of that color.
\[conn\] If $G$ is a connected good colored graph consisting entirely of vertices of Type II, and $R$ is a rainbow cycle in $G$, then $G{\backslash}R$ is connected.
Suppose not, so that $G{\backslash}R$ has components $G_1, G_2, \dots, G_k$ for some $k\geq 2$. Then wolog there exists an edge $e=\{x, y\}\in R$ such that $e$ has one endpoint $x$ in $G_1$, and the other endpoint $y$ in $G_2$. Let $\alpha=c(e)$. Note that there are two other edges in $G$ of color $\alpha$, since every vertex in $G$ is of Type II. As $R$ is a rainbow cycle, both of these edges must appear in $G{\backslash}R$. Note that one such edge must be incident to $x$, and one such edge must be incident to $y$, and hence we have one edge of color $\alpha$ in $G_1$ and one edge of color $\alpha$ in $G_2$. But these two edges then share no incident vertices, so the color class of $\alpha$ contains more than three vertices, a violation of condition \[inherite\] of good colored graphs. Thus, a contradiction has been reached, and so $G{\backslash}R$ is connected.
If $G$ is a good colored graph consisting entirely of vertices of Type II, then there exists a rainbow cycle $C$ in $G$. Moreover, for every rainbow cycle $C$ in $G$, $G{\backslash}C$ is also good.\[type2\]
If $G$ has a rainbow triangle, then clearly $G$ has a rainbow cycle. Let us assume, then, that $G$ has no rainbow triangle. Choose any vertex $v\in V(G)$. Build a rainbow path $v, v_1, v_2, v_3, \dots, v_k$ beginning at $v$ by arbitrarily choosing $v_{i+1}$ from among all neighbors of $v_i$ that do not use any of the colors $c(vv_1), c(v_1v_2), \dots, c(v_{i-1}v_i)$, until such a choice is impossible. Then as $v_k$ is a Type II vertex, it must have two incident edges of color $\alpha$, such that $\alpha = c(v_jv_{j+1})$ for some $j<k-1$, and moreover these three edges of color $\alpha$ form a triangle. Therefore, the edges $v_kv_j$ and $v_kv_{j+1}$ are both present, both with color $\alpha$, and no other edges in the rainbow path $v, v_1, \dots, v_k$ use color $\alpha$ (see Figure \[Lemma1a\]). Hence, we have the rainbow cycle $C=(v_k,v_{j+1},v_{j+2},\dots, v_{k-1},v_k)$.
![The rainbow path formed by starting at some $v$ and proceeding arbitrarily in Lemma \[type2\]. The color $\alpha$ is represented by the red edges, and we note that no black edges here can take color $\alpha$, and no two black edges may have the same color.[]{data-label="Lemma1a"}](Lemma1.pdf)
Hence, $G$ contains a rainbow cycle.
Now, let us take $C$ to be any rainbow cycle in $G$. Notice that removing this rainbow cycle preserves properties \[inheritb\]-\[inherite\] for a good colored graph, and we need only verify property \[nox\]. To that end, we shall suppose to the contrary that $G{\backslash}C$ has a cut-vertex of Type $X$, say $v$. Let $G_1$ and $G_2$ be the pseudoblocks of $G{\backslash}C$ at $v$. Note that as $G$ had no cut-vertices of Type $X$, there must have been an additional path between $G_1$ and $G_2$ along $C$ in $G$. Moreover, by Lemma \[conn\], as $G{\backslash}C$ is connected, this path must be a single edge between $G_1$ and $G_2$, as there are no vertices of degree 2 in $G$ and no other components (see Figure \[Lemma1b\]).
![An illustration of a potential cut vertex of Type $X$ in $G{\backslash}C$ obtained in Lemma \[type2\]. We note here that we do not necessarily assume that the vertices from $R$ incident to the edge colored $\alpha$ are all disjoint from the neighbors of $v$; but as noted above, they cannot both be neighbors of $v$.[]{data-label="Lemma1b"}](Lemma1b.pdf)
Let $\alpha$ be the color of this edge. By the same argument as in Lemma \[conn\], we must have that these two vertices have a common neighbor in $G$ via edges of color $\alpha$; in Figure \[Lemma1b\], this neighbor is shown wolog in $G_1$. But neither of these two edges of color $\alpha$ appears in $C$; hence, vertex $v$ is not a cut vertex of Type $X$, a contradiction.
Therefore, the removal of $C$ from $G$ yields a good colored graph, as desired.
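The greedy construction in this proof can be summarized in a short sketch (names of our own choosing), which returns the vertex sequence of a rainbow cycle under the hypotheses of the lemma.

```python
def greedy_rainbow_cycle(adj, color, start):
    """Sketch of the construction in the proof above: grow a rainbow path by
    arbitrary choices until no unused color is available at the endpoint, then
    close it into a rainbow cycle.  Assumes every vertex is of Type II (so
    every color class is a triangle) and that there is no rainbow triangle."""
    path, used = [start], []           # vertices visited; colors of the path edges
    while True:
        u = path[-1]
        ext = [w for w in adj[u] if color[(u, w)] not in used]
        if not ext:
            break
        w = ext[0]
        if w in path:                  # the new edge already closes a rainbow cycle
            return path[path.index(w):] + [w]
        used.append(color[(u, w)])
        path.append(w)
    # Stuck: the second color at the endpoint, alpha, already appears on the
    # path, say on the edge from path[j] to path[j+1]; since the alpha class is
    # a triangle, the endpoint is joined to path[j] and path[j+1] by
    # alpha-colored edges, and the cycle returned below is rainbow.
    u = path[-1]
    alpha = next(color[(u, w)] for w in adj[u] if color[(u, w)] != used[-1])
    j = used.index(alpha)
    return path[j + 1:] + [path[j + 1]]
```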
The following lemma, although somewhat trivial, will in fact be quite useful in the main proof.
\[setcycles\] Let $G$ be a good colored graph, and let $C_1, C_2, \dots, C_k$ be a collection of edge-disjoint rainbow cycles in $G$ such that $G{\backslash}\{C_1, C_2, \dots, C_k\}$ is a good colored graph. Then $G{\backslash}C_1$ is a good colored graph also.
Note that we need only prove that $G{\backslash}C_1$ has no cut vertices of Type $X$, by Observation \[heredity\]. To that end, let us suppose that $v$ is a cut vertex in $G{\backslash}C_1$ of Type $X$. Let $G_1$ and $G_2$ be the pseudoblocks of $G{\backslash}C_1$ at $v$.
Notice that no cycle in $\{C_2, C_3, \dots, C_k\}$ can include the vertex $v$, since as noted above, no cut vertex of Type $X$ can be present in any rainbow cycle. Hence, in $G{\backslash}\{C_1, \dots, C_k\}$, we must have that $v$ is still a cut vertex of Type $X$, as we may take pseudoblocks by removing the cycles $C_2, \dots, C_k$ from $G_1$ and $G_2$. Therefore, $G{\backslash}C_1$ contains no cut vertices of Type $X$.
Now, in order to proceed with the main proof, we shall first require a slightly modified version of the condition of goodness in a colored graph. This will be necessary, as in some of our cases, we shall have local structure that does not lend itself easily to a modification that forces goodness. As a result, we shall bend condition \[bender\] slightly to broaden the class of graphs we consider.
Given a connected edge-colored graph $G$, we say that $G$ is an [*almost-good*]{} colored graph if the following conditions are met.
1. Every vertex of $G$ has even degree.
2. $G$ has maximum degree 4.
3. Every triangle in $G$ is either rainbow or monochromatic.
4. There is exactly one nonisolated vertex $v$ in $V(G)$ with color degree 1, and moreover this vertex has degree 2. Every other nonisolated vertex has color degree 2.\[atmostone\]
5. The subgraph induced by each color class has at most three vertices. \[colorgraphs\]
6. $G$ has no cut-vertices of type $X$.
Hence, the difference between a good and an almost-good graph is that in an almost-good graph, we permit exactly one vertex to violate the condition that every vertex has color degree exactly 2. Moreover, any vertex violating this condition must have degree 2. We shall refer to this violating vertex as the bad vertex of $G$; the other vertices will still be called Type I and Type II, as with good colored graphs. We note that Observation \[heredity\] will extend naturally to almost good graphs.
\[rainbowtri\] Let $G$ be a good or almost-good colored graph, and let $C=(v_1, v_2, v_3, v_1)$ be a rainbow triangle in $G$. Then $G{\backslash}C$ is a good or almost-good colored graph, respectively.
By Observation \[heredity\], it is necessary only to check that the removal of the rainbow triangle $C$ does not result in any cut vertices of Type $X$. Clearly, if $G$ is almost good, we cannot have any of the $v_i$ as the bad vertex. Moreover, if every vertex of the triangle is Type I, this is immediate, and hence we may assume that there exists at least one vertex of the triangle that is Type II, say $v_1$.
Let us suppose that $z$ is a cut vertex of Type $X$ in $G{\backslash}C$. Let $G_1$ and $G_2$ be the pseudoblocks of $G{\backslash}C$ at $z$. Note that as $z$ was not a cut vertex of Type $X$ in $G$, we must have that at least one of the vertices of $C$ is in $G_1$, and at least one in $G_2$.
Wolog, suppose that $v_1\in V(G_1)$ and $v_2\in V(G_2)$. Let $\alpha = c(v_1v_2)$. Then as $v_1$ is of Type II, there must exist a vertex $w\neq v_2, v_3$ such that $v_1\sim_G w$ and $c(v_1w)=\alpha$. By the definition of $G_1$ and $G_2$, we must also have $w\in V(G_1)$ (see Figure \[Lemma4\]). But then $v_2\not\sim_G w$, and hence $v_2$ is of Type I in $G$. Hence, $v_2$ is isolated in $G{\backslash}C$, and thus $v_2$ can be reassigned to $V(G_1)$ to obtain a new pseudoblock decomposition of $G{\backslash}C$ at $z$. Similarly, if $v_3\in V(G_2)$, then $v_3$ is also of Type I in $G$ and can be reassigned to $V(G_1)$. But then $z$ is a cut vertex of Type $X$ in $G$, also, a contradiction.
![Structure of $G$ in Lemma \[rainbowtri\] in the case that $z$ is a cut vertex of Type $X$.[]{data-label="Lemma4"}](Lemma4.pdf)
Therefore, the removal of a rainbow triangle cannot produce any cut vertices of Type $X$, and hence $G{\backslash}C$ is good or almost-good, respectively.
Our primary goal is to show that given a good colored graph, we can always find a rainbow cycle decomposition. In order to do so, we actually prove a stronger result that holds for both good and almost-good colored graphs. This stronger version is required, as the proof will be by induction, and the inductive step will require reduction to a smaller graph. Such a reduction may, in some cases, yield an almost-good graph, instead of a good graph, and hence we shall include almost-good graphs in our key theorem.
In order to state the main theorem, we must first consider what a rainbow-like cycle decomposition should look like in an almost-good graph, since certainly there can be no actual rainbow cycle decomposition. However, since only one vertex fails to have color degree 2, we could (and will) decompose the edges into cycles so that only one cycle fails to be rainbow. Moreover, we shall ensure that this cycle is as close to rainbow as possible. Specifically, we shall use the following definition.
Let $G$ be an edge colored graph. We call a cycle $C=(v_0, v_1, v_2, \dots, v_k, v_0)$ an [*almost-rainbow*]{} cycle if $c(v_0v_1)=c(v_kv_0)$, but the path $v_0, v_1, \dots, v_k$ is rainbow.
That is to say, a cycle is almost-rainbow if it uses $k$ colors for $k+1$ edges, and the repeated color appears on two consecutive edges.
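In the notation of the earlier sketches, the definition can be checked directly:

```python
def is_almost_rainbow(cycle_vertices, color):
    """Check the definition above: cycle_vertices is (v_0, v_1, ..., v_k, v_0);
    the first and last edges must share a color, and the path v_0, ..., v_k
    must be rainbow."""
    vs = list(cycle_vertices)
    cols = [color[(vs[i], vs[i + 1])] for i in range(len(vs) - 1)]
    return cols[0] == cols[-1] and len(set(cols[:-1])) == len(cols) - 1
```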
\[T:mainthm\] Let $G$ be a graph that is either good or almost-good. Then there is a decomposition of the edges of $G$ into cycles $\{C_1, C_2, \dots, C_r\}$ such that one of the following is true:
1. If $G$ is good, then $C_i$ is rainbow for all $i$. \[firsttype\]
2. If $G$ is almost good, then $C_i$ is rainbow for all $i\geq 2$, and $C_1$ is almost-rainbow. \[secondtype\]
We note that as line graphs of bridgeless cubic graphs are always good, combining Theorem \[T:mainthm\] with Lemma \[equiv\] immediately yields a proof of the CDCC. Hence, in order to resolve the CDCC, it remains only to prove Theorem \[T:mainthm\].
Proof of Theorem \[T:mainthm\] {#S:proof}
==============================
Our proof shall be done in cases. Before we begin, we first define a singular path; the presence of such a path will be one of the cases on which the proof relies.
Let $G$ be a good or almost-good graph. We call a path $v_0, v_1, v_2, \dots, v_k$ in $G$ a [*singular path of length $k$*]{} if the vertices $v_1, v_2, \dots, v_{k-1}$ are all of Type I.
We note that the length of a singular path is the number of edges involved, not the number of vertices. Moreover, any cycle that includes one edge of the singular path must include all edges of the singular path.
Let $G$ be a good or almost-good colored graph with order $n$ and size $m$. We shall work by induction, first on $n$ and then on $m$. We shall assume throughout that $G$ is connected, as if not, we may simply choose one connected component of $G$ to work with.
Note that the minimum case will be when $n=4$, and $G$ is a $C_4$ having either three or four colors, such that if a color is repeated, the repetition appears at a single vertex. This is clearly a cycle satisfying one of the above conditions.
Indeed, this base case extends to any $n$ with the minimal number of edges $m=n$, as in these cases the graph is a single cycle, which automatically satisfies condition \[firsttype\] if $G$ is good, and condition \[secondtype\] if $G$ is almost-good.
Now, suppose that $G$ is either a good or almost-good colored graph, and suppose further that any good or almost-good colored graph on either fewer vertices or edges has a cycle decomposition satisfying one of the two conditions. Note that by induction, it is sufficient to show that $G$ contains either a rainbow cycle $C$, or, if $G$ is almost good, an almost-rainbow cycle $C$ using the bad vertex, such that $G{\backslash}C$ is also either good or almost-good (appropriately). We may then remove this cycle and use induction to decompose the remainder of the graph into cycles that satisfy the conditions. Note further that by Lemma \[rainbowtri\], we may assume that $G$ has no rainbow triangles, as if it does, we may immediately remove one such, and apply induction to the remainder of the graph. We shall consider two cases, according to whether $G$ is good or almost-good.
$G$ is almost-good.\[C:almost\]
Let $v$ be the bad vertex of $G$, let $\alpha$ be its incident color, and let $x_1, x_2$ be its neighbors. Note that by property \[colorgraphs\], there may be only one other edge in $G$ with color $\alpha$, namely $x_1x_2$. Moreover, by the restriction against triangles having exactly two edges with the same color, if $x_1x_2\in E$, it must take color $\alpha$. We shall split into two subcases according to whether this edge is present.
$x_1x_2\in E$. \[C:almost\_tri\]
Note that there are no edges of color $\alpha$ anywhere else in the graph $G$. Moreover, $x_1$ and $x_2$ are both Type II vertices. Let $x_1$ have neighbors $y_1, w_1$, with $c(x_1y_1)=c(x_1w_1)=\beta$, and let $x_2$ have neighbors $y_2, w_2$, with $c(x_2y_2)=c(x_2w_2)=\gamma$. Notice that we must have $\{y_1, w_1\}$ disjoint from $\{y_2, w_2\}$, as otherwise we have a rainbow triangle in $G$. Create a new graph $G'$ by contracting $G$ along the triangle $(x_1, v, x_2, x_1)$; note that as $\{y_1, w_1\}$ is disjoint from $\{y_2, w_2\}$, this cannot create any multiple edges. Let $x=[x_1]$ in the contraction; by abuse of notation, we shall refer to any other vertex by its label in $G$. This contraction is shown in Figure \[F:Case1.1\].
![An illustration of the construction of $G'$ from $G$ in Subcase \[C:almost\_tri\], by contracting the triangle $x_1vx_2$. Here, we use red for color $\alpha$, blue for color $\beta$, and green for color $\gamma$.[]{data-label="F:Case1.1"}](Case1_1G.pdf "fig:") ![An illustration of the construction of $G'$ from $G$ in Subcase \[C:almost\_tri\], by contracting the triangle $x_1vx_2$. Here, we use red for color $\alpha$, blue for color $\beta$, and green for color $\gamma$.[]{data-label="F:Case1.1"}](Case1_1Gprime.pdf "fig:")
Note that $G'$ has $n-2$ vertices and $m-3$ edges. Also, since $x_1$ and $x_2$ are both not cut vertices in $G$, clearly $x$ is not a cut vertex in $G'$, and hence $G'$ has no cut vertex of Type $X$. Moreover, provided we have not formed a nonmonochromatic, nonrainbow triangle, the resulting graph is good. Note that we can only form a nonmonochromatic triangle in the event that there is an edge between the sets $\{y_1, w_1\}$ and $\{y_2, w_2\}$. Moreover, if such an edge exists, it cannot take color $\alpha, \beta, $ or $\gamma$, as there would be too many vertices having incident edges in these color classes. Hence, we cannot form a triangle having exactly two edges of the same color. Thus, $G'$ is good, and by the inductive hypothesis, there exists a decomposition $\mathcal{C}$ of the edges of $G'$ into rainbow cycles. Let $C'\in \mathcal{C}$ be such that $C'$ includes the vertex $x$. Note that $G'{\backslash}C'$ is a good colored graph, by Lemma \[setcycles\].
Form a cycle $C$ in $G$ by replacing the vertex $x$ in $C'$ with the path $x_1vx_2$ or $x_2vx_1$, appropriately. Note then that $G{\backslash}C$ can be obtained from $G'{\backslash}C'$ by subdividing the edge $xy_1$ (wolog), and recoloring appropriately. Hence, since $G'{\backslash}C'$ is good by induction, we have that $G{\backslash}C$ is good by Observation \[subdivide\]. This construction is illustrated in Figure \[F:Case1.1C\].
![The construction of $C$ from $C'$ in Subcase \[C:almost\_tri\], in the case that the cycle $C'$ uses the vertex $x$. Here, we indicate the cycles $C$ and $C'$ using arrows on the associated edges. Between two vertices, we draw a dashed line to indicate a path (rather than an edge); here we see a dashed line indicating the portion of the cycles $C$ and $C'$ that does not involve the vertices pictured. []{data-label="F:Case1.1C"}](Case1_1Cprime.pdf "fig:") ![The construction of $C$ from $C'$ in Subcase \[C:almost\_tri\], in the case that the cycle $C'$ uses the vertex $x$. Here, we indicate the cycles $C$ and $C'$ using arrows on the associated edges. Between two vertices, we draw a dashed line to indicate a path (rather than an edge); here we see a dashed line indicating the portion of the cycles $C$ and $C'$ that does not involve the vertices pictured. []{data-label="F:Case1.1C"}](Case1_1C.pdf "fig:")
Therefore, $G{\backslash}C$ is almost-good if $C$ does not use the path $x_1vx_2$, or good if it does, as desired.
$x_1x_2\notin E$\[C:almost\_notri\]
In this case, we must have that both $x_1$ and $x_2$ are vertices of Type I. Let $y_1$ be the neighbor of $x_1$ other than $v$, and $y_2$ the neighbor of $x_2$ other than $v$. Let $\beta = c(y_1x_1)$ and $\gamma = c(y_2x_2)$. Then we have a singular path of length $4$ given by $y_1, x_1, v, x_2, y_2$. Let $G'$ be the graph obtained from $G$ by contracting the edge $vx_1$, and (by abuse of notation) labeling the resulting node $x_1$. This is illustrated in Figure \[F:Case1.2\].
The resulting graph $G'$ is clearly good, and moreover, $G'$ has $n-1$ vertices. Hence, by induction, there exists a decomposition $\mathcal{C}$ of the edges of $G'$ into rainbow cycles. Let $C'\in \mathcal{C}$ with the edge $x_1x_2$ appearing on $C'$, and note that $G'{\backslash}C'$ is good. Form a cycle $C$ in $G$ by subdividing this edge in $C'$ by $v$. Note that this is an almost-rainbow cycle, and $G{\backslash}C=G'{\backslash}C'$, a good colored graph.
![The graph $G$ and corresponding contraction to $G'$ for Subcase \[C:almost\_notri\]. As above, red edges indicate color $\alpha$, blue indicate color $\beta$, and green indicate color $\gamma$.[]{data-label="F:Case1.2"}](Case1_2G.pdf "fig:") ![The graph $G$ and corresponding contraction to $G'$ for Subcase \[C:almost\_notri\]. As above, red edges indicate color $\alpha$, blue indicate color $\beta$, and green indicate color $\gamma$.[]{data-label="F:Case1.2"}](Case1_2Gprime.pdf "fig:")
$G$ is good.
We note that by Lemma \[type2\], if $G$ consists entirely of vertices of Type II, then we are done. Hence, we may suppose that $G$ has at least one vertex of Type I. We consider two cases, according to the length of the longest singular path in $G$.
The longest singular path in $G$ has length at least 3.\[C:good\_singular\]
Let $P=v_0, v_1, v_2, v_3$ be a singular path in $G$, so that $v_1$ and $v_2$ are both Type I vertices. Let $\alpha = c(v_1v_2)$. Note that no other edge in $G$ may have color $\alpha$, as if so, it would be incident to one of $v_1$ or $v_2$, and thus one of these vertices would be of Type II.
Form a new graph $G'$ from $G$ by contracting the edge $v_1v_2$; label the new vertex formed (by abuse of notation) as $v_1$, and color the new edge $v_1v_3$ with $c(v_2v_3)$. This contraction is illustrated in Figure \[F:Case2.1\].
Clearly, $G'$ is a good colored graph, and moreover, $G'$ has order $n-1$. Hence, by the inductive hypothesis, there exists a decomposition $\mathcal{C}$ of the edges of $G'$ into rainbow cycles. Let $C'\in\mathcal{C}$ be a rainbow cycle that includes the edge $v_1v_3$, and note that $G'{\backslash}C'$ is good. Create a rainbow cycle $C$ in $G$ by subdividing this edge with $v_2$, and recoloring as in $G$. Note that $G{\backslash}C = G'{\backslash}C' \cup\{v_2\}$, where $v_2$ is an isolated vertex, so $G{\backslash}C$ is good. Moreover, no edge in $C'$ can be colored $\alpha$, and hence $C$ is also rainbow.
![The graph $G$ and corresponding transformation to $G'$ for Subcase \[C:good\_singular\]. As above, red edges indicate color $\alpha$.[]{data-label="F:Case2.1"}](Case2_1G.pdf "fig:") ![The graph $G$ and corresponding transformation to $G'$ for Subcase \[C:good\_singular\]. As above, red edges indicate color $\alpha$.[]{data-label="F:Case2.1"}](Case2_1Gprime.pdf "fig:")
The longest singular path in $G$ has length $2$.\[C:nonsingular\]
As we know that $G$ contains at least one Type I vertex, let $v$ be a Type I vertex in $G$, with neighbors $x_1$ and $x_2$, and $\alpha = c(x_1v)$, $\beta = c(vx_2)$. Note that as $G$ contains no singular path of length 3, we must have that $x_1$ and $x_2$ are both of Type II.
Let $y_1$, $w_1$, and $z_1$ be the neighbors of $x_1$ other than $v$, such that $c(x_1y_1)=\alpha$, and $c(x_1w_1)=c(x_1z_1)=\gamma$. Likewise, let $y_2, w_2,$ and $z_2$ be the neighbors of $x_2$ other than $v$, such that $c(x_2y_2)=\beta$, and $c(x_2w_2)=c(x_2z_2)=\delta$. This basic structure is shown in Figure \[F:2.2setup\].
![We here illustrate the basic structure for all Subcases under Subcase \[C:nonsingular\]. Throughout this subcase, we use color red for $\alpha$, blue for $\beta$, green for $\gamma$, and orange for $\delta$. We note that this particular drawing depicts vertices $\{y_1, w_1, z_1\}$ as disjoint from $\{y_2, w_2, z_2\}$; although that may not be the case, as in Subcase \[C:overlap\], it is sufficient for this illustration of the fundamental structure. Note that both $y_1$ and $y_2$ are vertices of Type I.[]{data-label="F:2.2setup"}](Case2_2setup.pdf)
Moreover, as the edges $y_1v$ and $y_2v$ are not present in $G$, we must have that $y_1$ and $y_2$ are both vertices of Type I. We shall consider several cases, depending on whether $\{y_1, w_1, z_1\}$ is disjoint from $\{y_2, w_2, z_2\}$. We first consider the case that the two sets are disjoint.
The sets $\{y_1, w_1, z_1\}$ and $\{y_2, w_2, z_2\}$ are disjoint.
Form a new graph $G'$ from $G$ as follows.
- Remove edges $y_1x_1$, $x_1v$, $vx_2$, and $x_2y_2$ from $G$.
- Add edges $y_1v$ and $vy_2$, colored $\alpha$ and $\beta$, respectively.
- Merge vertices $x_1$ and $x_2$ into a new vertex, $x$, having neighbors $w_1, z_1, w_2, z_2$.
This construction is illustrated in Figure \[F:2.2.1Gprime\]. Note that $G'$ has $n-1$ vertices. Moreover, $G'$ immediately satisfies all but properties \[monotri\] and \[nox\] of a good graph.
We first claim that $G'$ cannot contain any nonmonochromatic, nonrainbow triangles. Indeed, there are two ways to produce a triangle in $G'$ that was not already present in $G$. Either we have the edge $y_1y_2$, or we have at least one edge between $\{w_1, z_1\}$ and $\{w_2, z_2\}$.
Let us first consider the second case. Suppose, wolog, that $w_1w_2\in E(G)$, and let $\zeta = c(w_1w_2)$. Note that $\zeta \neq \alpha$, as the edge $w_1w_2$ is not incident to any of $x_1$, $y_1$, or $v$, and likewise, $\zeta\neq \beta$. On the other hand, as only vertices $x_1, w_1, z_1$ may be incident to edges of color $\gamma$, and the sets $\{y_1, w_1, z_1\}$, $\{y_2, w_2, z_2\}$ are disjoint, we also have $\zeta \neq \gamma$. Likewise, $\zeta \neq\delta$, and hence the triangle $(w_1,w_2,x)$ is rainbow.
On the other hand, let us suppose that we have the edge $y_1y_2$ in $G$. Recalling that both $y_1$ and $y_2$ are Type I vertices in $G$, we have that the edge $y_1y_2$ cannot take color $\alpha$ or $\beta$. Hence, the triangle $vy_1y_2$ in $G'$ must be rainbow.
![The transformation to $G'$ in the subcase \[C:nonsingular\].1.[]{data-label="F:2.2.1Gprime"}](Case2_2_1Gprime.pdf)
Therefore, any new triangle created in $G'$ must be rainbow. Hence, there are two possibilities: either $G'$ is good, or $G'$ contains a cut vertex of Type $X$. We consider each of these as subcases.
**Subcase 2.2.1(a).** $G'$ is a good colored graph.
Since $G'$ is good, there exists a decomposition of the edges of $G'$ into rainbow cycles, $\mathcal{C}=\{C_1, C_2, \dots, C_k\}.$
Suppose that we have a rainbow cycle $C\in\{C_1, \dots, C_k\}$ such that $C$ uses neither $v$ nor $x$. Note then that $C$ uses none of the edges shown in Figure \[F:2.2.1Gprime\]; hence, $C$ is also a rainbow cycle in $G$, using none of the edges shown in Figure \[F:2.2setup\]. Further, by Lemma \[setcycles\], $G'{\backslash}C$ is a good colored graph. Now, if $G{\backslash}C$ has a cut vertex of Type $X$, say $t$, then take $G_1, G_2$ to be pseudoblocks of $G{\backslash}C$ at $t$. Note that if $t$ is not one of $x_1, x_2, w_i$ or $z_i$, then as the subgraph induced on $\{v, x_1, y_1, w_1, z_1, x_2, y_2, w_2, z_2\}$ is connected in $G$, we must have that all of these vertices are in the same pseudoblock, say $G_1$. But then $G_2$ is an induced subgraph of $G'{\backslash}C$, and hence $t$ is also a cut vertex of Type $X$ in $G'{\backslash}C$, a contradiction. Hence, $t$ must be one of $x_1, x_2, w_i$ or $z_i$.
Suppose $x_1$ or $x_2$ is a cut vertex of Type $X$ in $G{\backslash}C$; wolog, say it is $x_1$. Then wolog, we have $w_1, z_1\in V(G_1)$ and $y_1, v, x_2, w_2, z_2\in V(G_2)$, as shown in Figure \[F:2.2.1axcut\]. But notice then that if $G_1'$ and $G_2'$ are subgraphs of $G'{\backslash}C$, induced on $V(G_1){\backslash}\{x_1\}\cup \{x\}$ and $V(G_2){\backslash}\{x_1\}\cup \{x\}$, then $G_1'$ and $G_2'$ cover all edges of $G'{\backslash}C$, and hence $x$ is a cut vertex of Type $X$ in $G'{\backslash}C$, a contradiction.
![The structure of $G{\backslash}C$ in the case that $x_1$ is a cut vertex in Subcase 2.2.1(a).[]{data-label="F:2.2.1axcut"}](Case2_2_1axcut.pdf)
The final possibility, then, is that one of $\{w_1, w_2, z_1, z_2\}$ is a cut vertex of Type $X$, suppose wolog that it is $w_1$. Note then that there must be an edge $w_1z_1\in E(G)$, with color $\gamma$, and moreover, vertices $z_1, x_1$ are in the same pseudoblock of $G{\backslash}C$, say $G_1$. But then as the subgraph induced on $x_1, v, x_2, w_2, z_2$ is connected, we must have that all of these vertices are in the same pseudoblock of $G{\backslash}C$, and hence as in the case that $x_1$ was a cut vertex of Type $X$, we must have $w_1$ is a cut vertex of Type $X$ in $G'{\backslash}C$, a contradiction.
Hence, if there is a cycle $C$ in $\{C_1, C_2, \dots C_k\}$ that does not use the vertices $v$ or $x$, then that same cycle can be lifted to $G$, and moreover, its removal yields a good colored graph $G{\backslash}C$.
On the other hand, if there are no rainbow cycles in $\{C_1, C_2, \dots, C_k\}$ that do not use the vertices $v$ or $x$, then we have two cases. Either $k=2$, and there is one cycle using both $v$ and $x$, and another using only $x$, or $k=3$, and there are two cycles using $x$ but not $v$, and one using $v$. The possible arrangements of these cycles are shown in Figure \[F:2.2.1fewcycles\].
In the case that $k=2$, let $C_1$ be the cycle using both $v$ and $x$. Then there are two possibilities. Either we have a path $P_1$ from $y_1$ to (wolog) $w_1$ and a path $P_2$ from $y_2$ to (wolog) $w_2$ (see Figure \[F:2.2.1fewcyclesa\]), or we have paths $P_1$ from $y_1$ to $w_2$ and $P_2$ from $y_2$ to $w_1$ (see Figure \[F:2.2.1fewcyclesb\]). In either case, neither of these paths uses the colors $\alpha, \beta, \gamma$, or $\delta$, and the color sets of $P_1$ and $P_2$ are disjoint. In addition, we have a third path $P_3$ from $z_1$ to $z_2$, also not using the colors $\alpha, \beta, \gamma,$ or $\delta$.
[0.3]{} ![The possible options for Subcase 2.2.1(a) in the event that there are no rainbow cycles in $\{C_1, C_2, \dots, C_k\}$. In these diagrams, we note that any dashed lines represent paths, rather than edges. Moreover, we do not claim that any of these paths are in fact disjoint; any specific disjoint paths are mentioned for that case.[]{data-label="F:2.2.1fewcycles"}](Case2_2_1few1.pdf "fig:"){width=".9\textwidth"}
[0.3]{} ![The possible options for Subcase 2.2.1(a) in the event that there are no rainbow cycles in $\{C_1, C_2, \dots, C_k\}$. In these diagrams, we note that any dashed lines represent paths, rather than edges. Moreover, we do not claim that any of these paths are in fact disjoint; any specific disjoint paths are mentioned for that case.[]{data-label="F:2.2.1fewcycles"}](Case2_2_1few2.pdf "fig:"){width=".9\textwidth"}
[0.3]{} ![The possible options for Subcase 2.2.1(a) in the event that there are no rainbow cycles in $\{C_1, C_2, \dots, C_k\}$. In these diagrams, we note that any dashed lines represent paths, rather than edges. Moreover, we do not claim that any of these paths are in fact disjoint; any specific disjoint paths are mentioned for that case.[]{data-label="F:2.2.1fewcycles"}](Case2_2_1few3.pdf "fig:"){width=".9\textwidth"}
For the first option, we take the decomposition in $G$ to be three cycles: $(x_1, w_1, P_1, y_1, x_1)$, $(x_2, w_2, P_2, y_2, x_2)$, and $(z_1, x_1, v, x_2, z_2, P_3, z_1)$ (see Figure \[F:2.2.1fewcyclesGa\]). All of these are rainbow cycles, and cover all edges of $G$. In the second option, we take the decomposition in $G$ to be two cycles: $(x_1, w_1, P_2, y_2, x_2, w_2, P_1, y_1, x_1)$ and $(x_1, v, x_2, z_2, P_3, z_1, x_1)$ (see Figure \[F:2.2.1fewcyclesGb\]). All of these are rainbow cycles, since $P_1$ and $P_2$ may not repeat colors, and cover all edges of $G$.
In the case that $k=3$, we have that $G'$ takes the following structure: One rainbow cycle of the form $(y_1, v, y_2, P_1, y_1)$, and two rainbow cycles involving vertex $x$; wolog, these take the form $(w_1, P_2, w_2, x, w_1)$ and $(z_1, P_3, z_2, x, z_1)$ (see Figure \[F:2.2.1fewcyclesc\]). We note that the path $P_1$ does not use colors $\alpha$ or $\beta$, and the paths $P_2, P_3$ do not use colors $\alpha, \beta, \gamma, $ or $\delta$, as the only edges colored $\alpha$ or $\beta$ are incident to $y_1$ and $y_2$, respectively. We further note that it may not be the case that any of these cycles are disjoint; that is, we may have shared colors in any of these paths.
We then consider the following cycle in $G$: $C=(x_1, w_1, P_2, z_1, x_2, v, x_1)$ (see Figure \[F:2.2.1fewcyclesGc\]). Clearly this cycle is rainbow, since $P_2$ cannot include any edges of color $\alpha, \beta, \gamma, $ or $\delta$. Moreover, we claim that the graph $G{\backslash}C$ is 2-connected (disregarding any isolated vertices), and hence has no cut vertices of Type $X$. Indeed, suppose that $a, b\in V(G{\backslash}C)$ are nonisolated. If $a, b$ both appear on the cycle $C_1=(x_1, y_1, P_1, y_2, x_2, v, x_1)$, then clearly there are two vertex disjoint paths between them. Likewise, if $a, b$ both appear on the cycle $C_2=(x_1, z_1, P_3, z_2, x_2, v, x_1)$, then again there are two vertex disjoint paths between them.
Hence, we may suppose that $a$ is on $C_1$, and $b$ is on $C_2$, but neither vertex is on both cycles. We form two vertex disjoint paths between $a$ and $b$ as follows. First, let $c$ be the first vertex of $C_1$ appearing before $a$ in the presentation $(y_1, x_1,v,x_2,y_2,P_1,y_1)$ with $c$ also a member of $C_2$. Note that as $a$ is not a member of $C_2$, $a\neq x_1$, and hence this is well defined, as $x_1$ is a member of $C_2$ and we thus will always choose a vertex between $x_1$ and $a$. Likewise, let $d$ be the first vertex of $C_1$ appearing after $a$ in the presentation $(y_1,x_1,v,x_2,y_2,P_1,y_1)$ with $d$ also a member of $C_2$. As above, $a\neq x_2$, and hence this vertex is well defined. Moreover, as $x_1$ and $x_2$ could both satisfy these conditions, we therefore will have that $c\neq d$. As $c$ and $d$ are both distinct vertices on $C_2$, we can construct two paths $P_c$ and $P_d$ from $c$ to $b$ and $d$ to $b$, respectively, along $C_2$, such that $P_c$ and $P_d$ are internally vertex disjoint. Hence, by concatenating $P_c$ and $P_d$ with the paths along $C_1$ from $a$ to $c$ and $a$ to $d$, we obtain two internally vertex disjoint paths between $a$ and $b$.
Therefore, in the case that $k=2$, we have found a cycle $C$ such that $G{\backslash}C$ is good, as desired.
[0.3]{} ![The transformations from $G'$ to $G$ corresponding to the cycle structures shown in Figure \[F:2.2.1fewcycles\]. We note as above that any dashed lines represent paths, rather than edges, and that we make no assumptions about the disjointness of these paths that are not mentioned in the captions in Figure \[F:2.2.1fewcycles\].[]{data-label="F:2.2.1fewcyclesG"}](Case2_2_1fewG1.pdf "fig:"){width=".9\textwidth"}
[0.3]{} ![The transformations from $G'$ to $G$ corresponding to the cycle structures shown in Figure \[F:2.2.1fewcycles\]. We note as above that any dashed lines represent paths, rather than edges, and that we make no assumptions about the disjointness of these paths that are not mentioned in the captions in Figure \[F:2.2.1fewcycles\].[]{data-label="F:2.2.1fewcyclesG"}](Case2_2_1fewG2.pdf "fig:"){width=".9\textwidth"}
[0.3]{} ![The transformations from $G'$ to $G$ corresponding to the cycle structures shown in Figure \[F:2.2.1fewcycles\]. We note as above that any dashed lines represent paths, rather than edges, and that we make no assumptions about the disjointness of these paths that are not mentioned in the captions in Figure \[F:2.2.1fewcycles\].[]{data-label="F:2.2.1fewcyclesG"}](Case2_2_1fewG3.pdf "fig:"){width=".9\textwidth"}
Therefore, if $G'$ is good, then there exists a rainbow cycle $C$ in $G$ such that $G{\backslash}C$ is good. Let us now turn to the subcase in which $G'$ is not good.
**Subcase 2.2.1(b).** $G'$ has a cut vertex of Type $X$.
Let $t$ be a cut vertex of Type $X$ in $G'$. First, suppose $t=x$. Let $G_1$ and $G_2$ be the pseudoblocks of $G'$ at $x$. Wolog, suppose $w_1, w_2\in V(G_1)$, and $z_1, z_2\in V(G_2)$, and $y_1, v, y_2\in V(G_1)$. Let $G_2'$ be the induced subgraph of $G$ having $V(G_2')=V(G_2){\backslash}\{x\}\cup\{x_2\}$, and let $G_1'$ be the induced subgraph of $G$, having $V(G_1')=V(G_1){\backslash}\{x\}\cup\{x_1\}$. Notice then that all edges of $G$ are present in either $G_1'$ or $G_2'$, and hence we have that $x_2$ is a cut vertex of Type $X$ in $G$. As this is impossible, since $G$ is good by hypothesis, $x$ cannot be a cut vertex of Type $X$. This structure is illustrated in Figure \[F:Case2.2.1cxcut\].
Hence, we must have that $t\neq x$. Let us form a decomposition of $G'$ into induced subgraphs $G_1, G_2, \dots, G_s$ as follows.

First, let $B_1, B_2, \dots, B_r$ be the blocks of $G'$, and let $B$ be the block graph of $G'$. Define an equivalence relation $R$ on $\{B_i\}$ as follows: for any $i, j$, if $B_i$ and $B_j$ are in the same component of $B$, let $P$ be the path $B_i=B_{i_0}, B_{i_1}, \dots, B_{i_r}, B_j=B_{i_{r+1}}$ from $B_i$ to $B_j$ in $B$. If, for all $0\leq k\leq r$, we have that the unique vertex in $V(B_{i_k})\cap V(B_{i_{k+1}})$ is not a cut vertex of Type $X$, then we take $B_i \sim_R B_j$.
Let $G_1, G_2, \dots, G_s$ be the graphs obtained by the unions of each equivalence class under $R$. Then although these graphs may not be 2-connected, they are induced subgraphs, and the intersection between any pair $G_i, G_j$ is either empty or is a cut vertex of Type $X$. Moreover, every cut vertex of Type $X$ in $G'$ will be found as the intersection of two such graphs. We shall refer to these graphs as $X$-blocks, and the decomposition as the $X$-block decomposition of $G'$; note that unlike pseudoblocks, this decomposition is unique. In a natural way, then, we may define a forest $T$ with $V(T)=\{G_1, \dots, G_s\}$ and $E(T) = \{ij\ | \ V(G_i)\cap V(G_j)\hbox{ is a cut vertex of Type $X$}\}$. Note that $T$ can also be viewed as a contraction of $B$, where we contract any edge corresponding to a cut vertex that is not of Type $X$. Wolog, let $x\in V(G_1)$, and note that as $x$ is not a cut vertex of Type $X$ in $G'$, we must have $w_1, z_1, w_2, z_2\in V(G_1)$ also. Moreover, since no cut vertices of Type $X$ exist in $G$, we must have that $v$ is in a distinct $X$-block of $G'$, say $G_s$. We note that $v$ cannot be a cut vertex, since it is of degree 2 and $G'$ is even, and so we must also have $y_1, y_2\in V(G_s)$.
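For readers who wish to experiment with this construction, the following sketch computes the $X$-block decomposition from the blocks of a graph using `networkx`. It is an illustration only, not part of the proof; the predicate `is_type_X_cut_vertex` is a hypothetical placeholder for the Type $X$ test, which depends on the edge coloring.

```python
# Illustration only: merge blocks that are joined through cut vertices
# which are NOT of Type X; components of the result are the X-blocks.
import networkx as nx

def x_block_decomposition(G, is_type_X_cut_vertex):
    blocks = [frozenset(b) for b in nx.biconnected_components(G)]
    # Block graph B: one node per block; blocks are adjacent when they
    # share a (single) cut vertex of G.
    B = nx.Graph()
    B.add_nodes_from(range(len(blocks)))
    for i in range(len(blocks)):
        for j in range(i + 1, len(blocks)):
            shared = blocks[i] & blocks[j]
            if shared:
                (t,) = tuple(shared)
                B.add_edge(i, j, cut=t)
    # Keep exactly those adjacencies whose cut vertex is not of Type X;
    # connected components of the result are the classes under R.
    H = nx.Graph()
    H.add_nodes_from(B.nodes)
    H.add_edges_from((i, j) for i, j, d in B.edges(data=True)
                     if not is_type_X_cut_vertex(G, d["cut"]))
    return [G.subgraph(set().union(*(blocks[i] for i in comp))).copy()
            for comp in nx.connected_components(H)]
```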
![The structure in $G'$ (left) and $G$ (right) in the case that $x$ is a cut vertex in $G'$ in Subcase 2.2.1(b).[]{data-label="F:Case2.2.1cxcut"}](Case2_2_1c_xcut.pdf "fig:"){height=".2\textheight"} ![The structure in $G'$ (left) and $G$ (right) in the case that $x$ is a cut vertex in $G'$ in Subcase 2.2.1(b).[]{data-label="F:Case2.2.1cxcut"}](Case2_2_1c_xcutG.pdf "fig:"){height=".18\textheight"}
$T$ is a path, and its endpoints are $G_1$ and $G_s$.
Notice that the $X$-block decomposition of $G$ can be obtained from the $X$-block decomposition of $G'$ by merging together $G_1$, $G_s$, and (if $G_1$ and $G_s$ are in the same component of $T$) any vertices on the path between $G_1$ and $G_s$, since in $G$ these two $X$-blocks are joined by the edges $x_1v$ and $x_2v$.
Moreover, since $G$ is either good or almost good, the $X$-block decomposition of $G$ is a single vertex. Hence, if $T$ is disconnected, then $T$ consists of exactly two $X$-blocks, namely $G_1$ and $G_2=G_s$. But then there is no cut vertex of Type $X$ in either of these $X$-blocks, a contradiction. Thus, $T$ is connected, and since merging $G_1$, $G_s$, and all other vertices on the path between them must yield a single vertex, every vertex of $T$ lies on the path between $G_1$ and $G_s$; that is, $T$ must consist of a single path, with endpoints $G_1$ and $G_s$.
Hence, we have a canonical labeling of the vertices $G_1, G_2, \dots, G_s$, by traveling along the path. Note moreover that both $G_1$ and $G_s$ are almost good colored graphs, as each contains no cut vertices of Type $X$, but will have a bad vertex at the intersection $V(G_1)\cap V(G_2)$ or $V(G_{s-1})\cap V(G_s)$, respectively.
Let $t=V(G_1)\cap V(G_2)$, so that $t$ is the bad vertex of $G_1$. Applying the induction hypothesis on $G_1$, there exists a decomposition of the edges of $G_1$ into cycles $\mathcal{C}=\{C_1, C_2, \dots, C_k\}$, such that $C_1$ is almost rainbow, and the remaining cycles are rainbow. Moreover, by Lemma \[setcycles\], $G_1{\backslash}\{C_i\}$ has no cut vertices of Type $X$ for any $i$.
Note that as $x$ is of Type II in $G_1$, we must have two cycles in $\mathcal{C}$ that use the vertex $x$. Moreover, at most one of these two cycles may use the vertex $t$, since $t$ is present in exactly one cycle in $\mathcal{C}$. Wolog, suppose that $x\in V(C_2)$, so that $t\notin V(C_2)$, since $t$ appears in the cycle $C_1$. Moreover, we may assume wolog that the adjacent vertices to $x$ in $C_2$ are $w_1$ and $w_2$. Write $C_2 = (x, w_1, u_2, \dots, u_\ell, w_2, x)$. This structure is shown in Figure \[F:2.2.1cXblock\]. We note that, as $u_i\neq y_j$ for any $j$ and $C_2$ is rainbow, the path $w_1, u_2, \dots, u_\ell, w_2$ does not use any of the colors $\alpha, \beta, \gamma$, or $\delta$.
![The structure of the $X$-block decomposition in $G'$. Here, the dashed line between $w_1$ and $w_2$ indicates the remaining vertices of the cycle $C_2$, that is, the vertices $u_2, u_3, \dots, u_\ell$. Note that no edges of this path may be colored $\alpha, \beta, \gamma$, or $\delta$, due to the constraints on the number of vertices incident to each color set in $G$.[]{data-label="F:2.2.1cXblock"}](Case2_2_1cXblock.pdf)
Let $C$ be the cycle in $G$ defined by $C=(v, x_1, w_1, u_2, \dots, u_\ell, w_2, x_2, v)$, see Figure \[F:2.2.1cXblockG\]. As noted above, since the path from $w_1$ to $w_2$ along the $u_i$ does not use any colors among $\alpha, \beta, \gamma, \delta$, the cycle $C$ is rainbow. By Observation \[heredity\], to show that $G{\backslash}C$ is good, we need only check the condition that $G{\backslash}C$ contains no cut vertices of Type $X$.
![The structure of $G$, taking into account the $X$-blocks from $G'$. As noted above, since no edges on the dashed path between $w_1$ or $w_2$ have color $\alpha, \beta, \gamma, $ or $\delta,$ the cycle $C=(v, x_1, w_1, u_2, \dots, u_\ell, w_2, x_2, v)$ is rainbow in $G$.[]{data-label="F:2.2.1cXblockG"}](Case2_2_1cXblockG.pdf)
To that end, let us suppose that $r$ is a cut vertex of Type $X$ in $G{\backslash}C$. We note that, as $x_1, x_2, y_1, y_2$ are all Type I vertices in $G{\backslash}C$, $r$ cannot be one of these vertices.
Suppose that $r\in V(G_1)$, and first suppose that $r\neq t$. We note that if $r$ were a cut vertex in $G_1$, it would not be a cut vertex of Type $X$; that is, if $G_1'$ and $G_2'$ are pseudoblocks of $G_1$ at $r$, then $r$ must have two incident edges in $G_1'$ of different colors, and two incident edges in $G_2'$ of different colors. Moreover, since $G_1$ is connected, both $G_1'$ and $G_2'$ are connected. The cycle $C$ may only intersect one of $G_1'$ and $G_2'$, say $G_1'$ wolog. But then $r$ is not a cut vertex of Type $X$ in $G{\backslash}C$, as removing $C$ still yields an induced connected subgraph $G_2'$ having two incident edges to $r$ with two different colors. Thus, $r$ is not a cut vertex in $G_1$.
Now, as $r$ is a cut vertex in $G{\backslash}C$ it must be that there are a pair of vertices, $a, b\in V(G_1)$ such that the only paths from $a$ to $b$ in $G{\backslash}C$ use the vertex $r$. Moreover, since $G_1{\backslash}C_2$ is almost good, it must be that $r$ is not a cut vertex in $G_1{\backslash}C_2$. Hence there is a path $P'$ in $G_1{\backslash}C_2$ from $a$ to $b$ not using the vertex $r$ that is not present in $G{\backslash}C$. We note that such a path must have used, as a subpath, the length two path $z_1xz_2$, as that is the only path that has been destroyed in the transformation from $G'{\backslash}C_2$ to $G{\backslash}C$.
Let $P$ be a path in $G_s$ between $y_1$ and $y_2$, not using the vertex $v$. Note that such a path must exist, as $v$ is of degree 2 in $G_s$, and hence cannot be a cut vertex. Then we may form a new path in $G{\backslash}C$ from $a$ to $b$, and not using the vertex $r$, by replacing the length two path $z_1xz_2$ in $P'$ with the path $z_1x_1y_1Py_2x_2z_2$. Hence, $r$ is not a cut vertex in $G{\backslash}C$. This situation is illustrated in Figure \[F:2.2.1cG\_1cut\].
![The structure of $G{\backslash}C$ in the case that $r\in V(G_1)$, with $r\neq t$, is a cut vertex of Type $X$ in $G{\backslash}C$. Note that the first two ovals indicate the pseudoblocks of $G_1$ at $r$, and the remaining ovals indicate the $X$-block structure in $G'$. []{data-label="F:2.2.1cG_1cut"}](Case2_2_1cG_1cut.pdf)
Therefore, any cut vertex of Type $X$ in $G{\backslash}C$ must be from $G_i$, with $i>1$, or equal to $t$. Note that as we have not removed any edges or vertices from any $X$-block other than $G_1$, in fact we must have that any cut vertex of Type $X$ in $G{\backslash}C$ must occur as the vertex $r=V(G_i)\cap V(G_{i-1})$ for some $i$. As we will add edges between $G_s$ and $G_1$ when we transform from $G'{\backslash}C_2$ to $G{\backslash}C$, we note that it must be the case that $t$ itself is a cut vertex of Type $X$. Moreover, as we will add the edges between $G_1$ and $G_s$ when we transform from $G'$ to $G$, the component of $G{\backslash}C$ containing $x$ will be connected to all of $G_2, G_3, \dots, G_s$. Since $G_2$ also contains $t$, we must have that the removal of the cycle $C_2$ from $G_1$ disconnects $G_1$, in such a way that the vertices $z_1, x, z_2$ are in one component and the bad vertex $t$ is in another component (see Figure \[F:2.2.1ctacutsetup\]).
![ The structure of $G$ in the case that $t$ is a cut vertex of Type $X$ in $G{\backslash}C$. Note that in order for the removal of $t$ to disconnect $G$, we must have that there is a collection of vertices (indicated here by the grey oval) in $G_1$ such that the removal of $t$ disconnects these vertices from the remainder of the graph. Since $x_1$ and $x_2$ are both connected via paths to $G_2$, we cannot have $x_1$ or $x_2$ in the component of $G_1{\backslash}C$ containing $t$. Moreover, although it will not be relevant, none of $w_1, w_2, z_1,$ or $z_2$ can be connected to $t$ in $G{\backslash}C$, as $w_i$ are both either isolated or adjacent to $z_i$, and $z_i$ are both adjacent to $x_i$.[]{data-label="F:2.2.1ctacutsetup"}](Case2_2_1c_tcut_setup.pdf)
Let $C_j\neq C_2$ be the other cycle in $\mathcal{C}$ using the vertex $x$. Note that since $C_j$ is edge disjoint from $C_2$, and $G_1{\backslash}C_2$ has $t$ and $x$ in distinct components, we must have that $t$ is not used in the cycle $C_j$.
Now, since $G_1$ is connected, there must be a path in $G_1$ from $x$ to $t$. Moreover, since removing $C_2$ disconnects $G_1$, we must have that there exists a vertex $a$ on $C_2$ such that there is a path from $x$ to $t$ that follows $C_2$ to the vertex $a$, then leaves $C_2$ and takes a path to $t$. Note that $a\neq z_1$ and $a\neq z_2$, as otherwise all edges on a path from $x$ to $t$ are present in $G_1{\backslash}C_2$, and as noted above these vertices must be in distinct components of $G_1{\backslash}C_2$. Note further that no edges on this path are used in $C_j$, as all edges are either members of $C_2$ or in a different component of $G_1{\backslash}C_2$ from $C_j$. This is illustrated in Figure \[F:2.2.1ctacut\].
Now, notice that $C_j$ must be entirely contained in the component of $G_1{\backslash}C_2$ containing $x$. Hence, we may replace $C_2$ with $C_j$ and repeat this argument; upon so doing, we must have that $x$ and $t$ are in the same component of $G_1{\backslash}C_j$, since we may obtain a path between the two by following the path indicated above. But then, using $C_j$ in place of $C_2$, we have that $t$ is not a cut vertex of Type $X$ in $G{\backslash}C$. Moreover, the previous analysis of cut vertices is unaffected, and hence with this new cycle $C$, we have that $G{\backslash}C$ is a good colored graph.
![ The structure of $G'$ in the case that $t$ is a cut vertex of Type $X$ in $G{\backslash}C$ in Subcase 2.2.1(b).[]{data-label="F:2.2.1ctacut"}](Case2_2_1c_tcut.pdf)
We now turn our attention to the final subcase, in which, in the basic structure necessary for Subcase \[C:nonsingular\] illustrated in Figure \[F:2.2setup\], the neighbor sets of $x_1$ and $x_2$ are not disjoint.
The sets $\{y_1, w_1, z_1\}$ and $\{y_2, w_2, z_2\}$ are not disjoint.\[C:overlap\]
We consider here several possibilities that, wolog, cover all possible overlaps between these two sets.
**Subcase 2.2.2(a).** $\{w_1, z_1\}$ is not disjoint from $\{w_2, z_2\}$.
Without loss of generality, let us suppose that $w_1=w_2$. Let $C=(x_1,v,x_2,w_1,x_1)$. Note that $C$ is a rainbow cycle in $G$. Hence, we need only verify that $G{\backslash}C$ contains no cut vertices of Type $X$.
First, note that upon removing this cycle, we have that $x_1$ and $x_2$ are both vertices of Type I, and hence cannot be vertices of Type $X$. Suppose that one of $z_1, z_2$ is a cut vertex of Type $X$, wolog, suppose it is $z_1$. Note that this immediately implies that $z_1$ is a vertex of Type II in $G$, and hence the edge $z_1w_1$ must also be present, with color $\gamma$. But then $w_1=w_2$ is also a vertex of Type II in $G$, and therefore, the edge $z_2w_1$ must also be present, with color $\delta$ (this also implies that we cannot have $z_1=z_2$). However, this immediately implies that in any pseudoblock decomposition of $G{\backslash}C$, the vertices $y_1, y_2, x_1, x_2, w_1, z_2$ are all in the same pseudoblock. Therefore, it must have been the case that $z_1$ was a cut vertex of Type $X$ in $G$, which is impossible (see Figure \[F:2.2.2az\_1cut\]).
![The structure of $G$ in Subcase 2.2.2(a), when $z_1$ is a cut vertex of Type $X$. Here, the dashed gray oval represents one $X$-block of $G{\backslash}C$, and the dotted lines indicate the rainbow cycle to be removed. Note that upon removing that cycle, we must have that the remaining labeled vertices are all in the same $X$-block of $G{\backslash}C$. We include in this case the possibility (not pictured) that either $y_1=z_2$ or $y_2=z_1$.[]{data-label="F:2.2.2az_1cut"}](Case2_2_2az_1cut.pdf)
Hence, there must be a cut vertex of Type $X$ that is not one of our heretofore labeled nodes. Suppose that $t$ is such a cut vertex, and let us take $G_1$ and $G_2$ to be pseudoblocks of $G$ at $t$. Note that we must have some vertices among $x_1, x_2, y_1, y_2, z_1, z_2, w_1, w_2$ in each of $G_1$ and $G_2$. Moreover, if $w_1=w_2$ is a Type II vertex in $G$, then the subgraph induced on $\{x_1, x_2, y_1, y_2, z_1, z_2, w_1, w_2\}$ is connected in $G{\backslash}C$, as we would require $z_1w_1, z_2w_1\in E(G)$, and hence this is impossible. Therefore, it must be that $w_1$ is a Type I vertex in $G$. Similarly, if the sets $\{y_1, z_1\}$ and $\{y_2, z_2\}$ are not disjoint, we would also have this subgraph connected in $G{\backslash}C$, and hence this is also impossible. Therefore, wolog, we have $x_1, z_1, y_1\in V(G_1)$ and $x_2, z_2, y_2\in V(G_2)$.
Note that in this situation, we must have that all of $y_1, y_2, z_1, z_2$ are vertices of Type I in $G$. Let their heretofore unlabeled neighbors be $\hat{y}_1, \hat{y}_2, \hat{z}_1, \hat{z}_2$, respectively. Note that every rainbow path between $G_1$ and $G_2$ either passes through $t$, or includes one of the following subpaths: $P_y=\hat{y}_1, y_1, x_1, w_1, x_2, y_2, \hat{y}_2$ or $P_z=\hat{z}_1, z_1, x_1, v, x_2, z_2, \hat{z}_2$. This structure is illustrated in Figure \[F:2.2.2aGcut\], left.
![The structure of $G$ (left) and $G'$ (right) in Subcase 2.2.2(a), in the case that there is a cut vertex in $G{\backslash}C$ of Type $X$ other than $z_1$. Note that the unlabeled vertices incident to $y_1, y_2, z_1,z_2$ are $\hat{y}_1, \hat{y}_2, \hat{z}_1, \hat{z}_2$, respectively. We further note that these vertices may not be distinct.[]{data-label="F:2.2.2aGcut"}](Case2_2_2acutG.pdf "fig:") ![The structure of $G$ (left) and $G'$ (right) in Subcase 2.2.2(a), in the case that there is a cut vertex in $G{\backslash}C$ of Type $X$ other than $z_1$. Note that the unlabeled vertices incident to $y_1, y_2, z_1,z_2$ are $\hat{y}_1, \hat{y}_2, \hat{z}_1, \hat{z}_2$, respectively. We further note that these vertices may not be distinct.[]{data-label="F:2.2.2aGcut"}](Case2_2_2acutGprime.pdf "fig:")
Create a new graph $G'$ from $G$ as follows:
- remove the edges $\{x_1y_1, x_1z_1, x_1v, x_1w_1, x_2y_2, x_2z_2, x_2v, x_2w_1\}$, that is, all edges colored $\alpha, \beta, \gamma, $ or $\delta$.
- remove the vertices $x_1, x_2$.
- add the edges $y_1w_1, y_2w_1, z_1v, z_2v$. Color these edges with $\alpha, \beta, \gamma, \delta$, respectively.
This structure $G'$ is illustrated in Figure \[F:2.2.2aGcut\], right. Note that clearly, we have created no triangles in $G'$, and moreover, there can be no additional cut vertices of Type $X$. As all other properties are clear from construction, we thus have that $G'$ is a good colored graph, having strictly fewer vertices than $G$. Hence, we may apply the induction hypothesis to obtain a rainbow cycle $C'$ in $G'$ that uses the vertex $y_1$; note that such a rainbow cycle must be present as we may decompose all edges of $G'$ into rainbow cycles.
As $y_1$ is of Type I in $G'$, we must have that $C'$ contains the entire path $\hat{y}_1, y_1, w_1, y_2, \hat{y}_2$, and we may thus replace this path by $P_y$ in $G$ to form a new rainbow cycle $C$. Moreover, $G{\backslash}C$ can be obtained from $G'{\backslash}C'$ by subdividing the edges $z_1v$ by $x_1$ and $vz_2$ by $x_2$. Hence, $G{\backslash}C$ can contain no cut vertices of Type $X$, and therefore, $G{\backslash}C$ is good.
Therefore, if $w_1=w_2$, then $G$ contains a rainbow cycle $C$ such that $G{\backslash}C$ is good.
We note that our analysis in this case did not rely on the fact that $\{y_1, z_1\}$ and $\{y_2, z_2\}$ are disjoint, although this was a consequence of the presence of any cut vertices of Type $X$ in $G{\backslash}C$. Hence, we may assume for all remaining cases that $\{w_1, z_1\}$ is disjoint from $\{w_2, z_2\}$.
**Subcase 2.2.2(b).** $y_1=y_2$.
Here, we have a rectangle $(x_1,y_1,x_2,v,x_1)$, and the only edges colored $\alpha$ or $\beta$ are in this rectangle. Contract the rectangle to form a new graph $G'$, having a single Type II vertex $x$, with neighbors $w_1, z_1, w_2$, and $z_2$, and recolor these edges as $\gamma,\gamma, \delta, \delta$, respectively; see Figure \[F:2.2.2b\].
![The structure of $G$ (left) and $G'$ (right) in Subcase 2.2.2(b). []{data-label="F:2.2.2b"}](Case2_2_2b.pdf "fig:") ![The structure of $G$ (left) and $G'$ (right) in Subcase 2.2.2(b). []{data-label="F:2.2.2b"}](Case2_2_2bGprime.pdf "fig:")
We note that $x$ cannot be a cut vertex of Type $X$ in $G'$, since if so, clearly $x_1$ and $x_2$ would also be cut vertices of Type $X$ in $G$. Moreover, if we have formed a triangle that was not present in the original graph $G$, it must be that this triangle uses the new vertex $x$, and (wolog) the vertices $w_1, w_2$. Note that in $G$, the edge $w_1w_2$ cannot use color $\delta$ or $\gamma$, as otherwise we would have more than three vertices incident to this color. Hence, the triangle created is rainbow. Therefore, $G'$ is good, so we may apply the inductive hypothesis to form a rainbow cycle $C'$ in $G'$, such that $G'{\backslash}C'$ is good.
If $C'$ does not use the vertex $x$, then $C'$ is a rainbow cycle in $G$, and clearly expanding $x$ back to a rectangle cannot introduce any cut vertices of Type $X$. If $C'$ does use the vertex $x$, then we may create a rainbow cycle $C$ in $G$ by replacing this vertex with the length two path $x_1y_1x_2$. As $C'$ does not use colors $\alpha$ or $\beta$, this is a rainbow cycle in $G$, and moreover, $G{\backslash}C$ can be obtained from $G'{\backslash}C'$ by subdividing the path through $x$ (which consists entirely of Type I vertices). Hence, $G{\backslash}C$ is good.
**Subcase 2.2.2(c).** $y_1=w_2$, but $\{w_1, z_1\}$ is disjoint from $\{y_2, z_2\}$.
Note that $y_1=w_2$ must be a Type I vertex in $G$, and hence we have a rectangle $(x_1,y_1,x_2,v,x_1)$. Moreover, since $w_2$ is a Type I vertex, we cannot have the edge $w_2z_2$, and hence $z_2$ is also a Type I vertex, and no edges other than $w_2x_2$ and $z_2x_2$ can take color $\delta$. Form a new graph $G'$ by contracting this rectangle to a single vertex $x$, having neighbors $w_1, z_1, y_2, z_2$, and recolor these edges as $\gamma, \gamma, \beta, \beta$, respectively; see Figure \[F:2.2.2c\].
![The structure of $G$ (left) and $G'$ (right) in Subcase 2.2.2(c). []{data-label="F:2.2.2c"}](Case2_2_2cG.pdf "fig:") ![The structure of $G$ (left) and $G'$ (right) in Subcase 2.2.2(c). []{data-label="F:2.2.2c"}](Case2_2_2cGprime.pdf "fig:")
We note that $x$ cannot be a cut vertex of Type $X$ in $G'$, since if so, we have $x_1$ is a cut vertex of Type $X$ in $G$. Moreover, if we have formed a new triangle that was not present in the original graph $G$, it must be that this triangle uses the vertex $x$, and hence one of its other vertices is $w_1$ (wolog). But note that neither edge $w_1y_2$ nor $w_1z_2$ can use colors $\gamma$ or $\beta$, as we already have three vertices in $G$ incident to these color classes, and hence any new triangle formed must be rainbow.
Therefore, $G'$ is good, and by induction there exists a rainbow cycle $C'$ in $G'$ such that $G'{\backslash}C'$ is good. If $C'$ does not use the vertex $x$, then $C'$ is a rainbow cycle in $G$, and clearly expanding $x$ back to a rectangle cannot introduce any cut vertices of Type $X$. If $C'$ does use the vertex $x$, then there are two possibilities. Either the edge $xy_2$ is used, or the edge $xz_2$ is used; note that one of these must be true as any rainbow cycle through $x$ must use color $\beta$. If the edge $xy_2$ is used, we shall form a rainbow cycle $C$ in $G$ by replacing this edge with the path $x_1y_1x_2y_2$, which replaces an edge of color $\beta$ with three edges, having colors $\alpha, \delta, \beta$, respectively. Moreover, we note that $G{\backslash}C$ can be obtained from $G'{\backslash}C'$ by subdividing the edge $xz_2$ with the vertex $v$, and recoloring appropriately. Hence, $G{\backslash}C$ cannot contain a cut vertex of Type $X$, and thus $G{\backslash}C$ is good.
On the other hand, if the edge $xz_2$ is used in $C'$, we similarly create a rainbow cycle $C$ in $G$ by replacing this edge with the path $x_1vx_2z_2$, having colors $\alpha, \beta, \delta$, respectively. As above, this will yield a good colored graph $G{\backslash}C$.
**Subcase 2.2.2(d).** $y_1=w_2$ and $y_2=w_1$.
Note that in this case, as $y_1$ and $y_2$ are both Type I vertices in $G$, we have that $\{z_1, z_2\}$ is a cutset of $G$, and both $z_1$ and $z_2$ are vertices of Type I; see Figure \[F:2.2.2d\]. Moreover, we have a rainbow cycle $(x_1, y_1, x_2, y_2, x_1)$. Clearly, we cannot create any cut vertices of Type $X$ by the removal of this rainbow cycle, as no cut vertices are introduced at all.
![The structure of $G$ in Subcase 2.2.2(d).[]{data-label="F:2.2.2d"}](Case2_2_2d.pdf)
Therefore, if $G$ is a good or almost good graph having $n$ vertices and $m$ edges, we can find a cycle $C$ in $G$, rainbow or almost rainbow respectively, such that $G{\backslash}C$ is good or almost good, respectively. By then applying the induction hypothesis, we therefore have a decomposition of $G$ into cycles, such that at most one such cycle is almost rainbow, and the remainder are rainbow.
Conclusions and Conjectures {#S:conclusions}
===========================
We note, as mentioned in Section \[S:intro\], that our proof technique also resolves Conjecture \[C:Goddyn\] in the case of 3-regular graphs. This conjecture is implied by our proof technique, as in Lemma \[type2\], the rainbow cycle chosen was entirely arbitrary. Hence, the first cycle we remove from $G$ is irrelevant, as removing any rainbow cycle from $L(G)$ will allow us to proceed with the remainder of the inductive proof.
However, it is unclear from this proof whether Goddyn's conjecture holds in the general case; for graphs that are not 3-regular, the conjecture remains unresolved.
In addition, we suspect that the technique used here could also be used to consider cycle $k$-covers for certain graphs, as follows. A cycle $k$-cover is a collection of cycles in $G$ for which every edge of $G$ is contained in exactly $k$ of the cycles.
Let $G$ be a $k$-regular, $(k-1)$-edge-connected graph. Then there exists a list of cycles $\mathcal{C}$ in $G$ such that every edge of $G$ appears in exactly $k-1$ cycles.
We note that in this case, the color classes in $L(G)$ are $k$-cliques, and every vertex in $L(G)$ is a member of two of these, and hence has $k-1$ neighbors in each of two incident color classes. Thus, any decomposition of the edges of $L(G)$ into rainbow cycles would produce a cycle $(k-1)$-cover, as suggested by the conjecture. The difficulty in generalizing to this case is likely to be found in how to generalize a cut vertex of Type $X$. The true condition here is that if $S$ is a cutset, and $G_1$ and $G_2$ are pseudoblocks corresponding to $S$, then no more than half the edges incident to vertices of $S$ in $G_1$ may have the same color. In the case of a 3-regular graph, this is automatically true provided that the cut set has at least two vertices, or the cutset has one vertex, but its two edges in $G_1$ are not of the same color. In the case of higher regularity, this condition becomes more difficult to state and verify.
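To make the line-graph setup concrete, the short sketch below (an illustration, not part of the argument) builds $L(G)$ for a 3-regular graph with `networkx` and colors each edge of $L(G)$ by the endpoint shared by the two corresponding edges of $G$; each color class is then a triangle, as described above.

```python
# Illustration only: color classes of L(G) for a 3-regular graph G are triangles.
from collections import defaultdict
import networkx as nx

G = nx.petersen_graph()                 # a 3-regular example graph
L = nx.line_graph(G)                    # vertices of L(G) are the edges of G
for e, f in L.edges():
    (shared,) = set(e) & set(f)         # the endpoint of G shared by edges e and f
    L.edges[e, f]["color"] = shared

classes = defaultdict(list)
for e, f, d in L.edges(data=True):
    classes[d["color"]].append((e, f))

# Each color class has C(3, 2) = 3 edges spanning the 3 edges of G incident to
# a single vertex, i.e. it induces a triangle (a 3-clique) in L(G).
assert all(len(edges) == 3 for edges in classes.values())
```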
There are many other standing conjectures related to cycle covers in graphs, and we do not doubt that similar techniques might be used to approach these conjectures. Many of these can be found in [@jaeger1985survey].
In addition, there are many questions here relating, rather than to cycle covers, to decompositions of edge colored graphs into rainbow cycles or subgraphs. Here, we show that if $G$ is an edge colored graph, such that every color class of $G$ is a triangle, no two color classes share more than one vertex, and $G$ has no cut vertices, then the edges of $G$ may be decomposed into a set of disjoint rainbow cycles. This raises the question: under what conditions can such a decomposition be guaranteed?
Let $G$ be an even, edge-colored graph having no cut vertices. Under what conditions on the color classes of $G$ can it be assured that $G$ has an edge decomposition into disjoint rainbow cycles?
More specifically, we restricted here to the case that each color class has at most 3 vertices, and no two color classes share more than one vertex. Is it possible that under the second condition, a similar proof could be found for a graph having more vertices in each color class?
\[decompd\] Let $G$ be an even, edge-colored graph having no cut vertices, and suppose that any two color classes share no more than one vertex. Moreover, suppose that the subgraph induced on each color class contains at most $d$ vertices. For what values of $d$ can we guarantee that the edges of $G$ can be decomposed into rainbow cycles?
This paper answers Question \[decompd\] in the case that $d=3$ and $n\geq 6$. The case that $d=2$ is trivial; every edge of $G$ has a unique color. However, a generalization of the proof in this article to the case that $d\geq 4$ is not apparent. Further, it may be that a precise value of $d$ will depend upon $n$; in the case that $d=3$ analyzed in this proof, we have at least $\frac{2}{3}n$ distinct colors available. It may be that there is a function $d=d(n)$ such that the decomposition will be possible in the case of any even edge-colored graph on $n$ vertices, having at most $d(n)$ vertices in each color class.
Acknowledgements
================
The author would like to extend sincere gratitude to Paul Horn for his thoughts in the development of this approach, and to SOMEBODY for proofreading this manuscript.
---
abstract: 'We explore novel approaches to the task of image generation from their respective captions, building on state-of-the-art GAN architectures. In particular, we baseline our models with the Attention-based GANs that learn attention mappings from words to image features. To better capture the features of the descriptions, we then build a novel cyclic design that learns an inverse function mapping the image back to the original caption. Additionally, we incorporate recently developed BERT pretrained word embeddings as our initial text featurizer and observe a noticeable improvement in qualitative and quantitative performance compared to the Attention GAN baseline. [^1]'
author:
- |
Trevor Tsue\
`ttsue`\
Computer Science Dept. Jason Li\
`jasonkli`\
Computer Science Dept. Samir Sen\
`samirsen`\
Computer Science Dept.
bibliography:
- 'acl2019.bib'
date: 9 June 2019
title: 'Cycle Text-to-Image GAN with BERT'
---
Introduction
============
[.5]{} {width=".75\linewidth"}
[.5]{} {width=".75\linewidth"}
The goal of the text-to-image task is to generate realistic images given a text description. This problem has many possible applications ranging from computer-aided design to art generation [@DBLP:journals/corr/abs-1711-10485]. Moreover, this multimodal problem is an interesting and important task in natural language understanding because it connects language to an understanding of the visual world.
The problem can be naturally decomposed into two parts: embedding the text into a feature representation that captures relevant visual information and using that representation to generate a realistic image that corresponds to the text. This problem is particularly challenging for several reasons. For one, there is the issue of a domain gap between the text and image feature representations. Furthermore, there are a myriad of legitimate images that correspond to a single text description, and part of the goal is to be able to capture the diversity in plausible images [@DBLP:journals/corr/ReedAYLSL16].
Recent advances in the field of deep learning have made significant strides in this challenging task. In particular, recurrent architectures, such as LSTMs, can be used to learn feature representations from text and generative adversarial networks (GANs) can be used to create images conditioned on information. Nonetheless, the aforementioned challenges leave the text-to-image task an open problem. Adding to those problems, GANs, despite having widespread success in generative learning, often produce lower-resolution images [@DBLP:journals/corr/ZhangXLZHWM16], lack diversity in image generation, and fail to capture intricate details.
The goal of our work is to explore and compare state-of-the-art methods for addressing some of these problems. These ideas include: stacking multiple GANs to sequentially learn higher resolution images; applying attention mechanisms to “focus” the generator on important parts of the text and to help bridge the domain gap; and unifying text-to-image and image-to-text in a single model. In addition to evaluating these ideas for diversity and quality of generated images, we also investigate the effect of using a pre-trained language model for contextualized word embeddings. Pretrained language models, such as BERT [@DBLP:journals/corr/abs-1810-04805] and ELMO [@DBLP:journals/corr/abs-1802-05365], have revolutionized NLP as an effective means of transfer learning, similar to the impact of ImageNet on the field of computer vision. As such, we sought to explore the potential benefit of these pretrained embeddings since most current approaches learn the word embeddings from scratch.
Related Work
============
Generative adversarial networks (GANs) are the most widely used model in generative learning. Originally proposed to generate realistic images, GANs consist of two neural networks, a generator and a discriminator, that effectively compete with one another in a zero-sum game. The discriminator attempts to distinguish real and fake images, while the generator tries to create images that fool the discriminator into classifying them as real [@NIPS2014_5423].
Numerous works have built off of the basic idea. Some of these ideas include Conditional GANs, which pass a class label to both the generator and discriminator, unlike the original GAN where the generator creates an image solely from noise [@DBLP:journals/corr/MirzaO14]. GANs have also been used for style transfer between image domains. In this formulation, the generator is passed an image from a source domain and tries to fool the discriminator into thinking it is from the target domain. An extension to this idea is the CycleGAN, which learns to transfer from source to target and target back to source to ensure consistency and to stabilize training. This setup also ensures that important latent features are captured so that the source image can be reconstructed from the generated one [@originalcyclegan]. As we will see next, these ideas have a natural extension to the text-to-image problem.
Reed et al. describe the first fully differentiable, end-to-end model that learns to construct images from text, building on conditional GANs [@DBLP:journals/corr/ReedAYLSL16]. They use a character-level convolutional-recurrent network to encode the input text. A fully-connected (FC) layer with a Leaky ReLU activation embeds the encoding into a lower dimension before concatenation with input noise drawn from a standard Gaussian, which is then fed into the generator. Instead of just training on pairs of either real images/matching text or fake images/matching text, they also train using pairs of real images with mismatched text. This encourages the generator not only to produce realistic images, but also to produce images that match the text.
Zhang et al. build upon this work in “StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks”. Inspired by other works that use multiple GANs for tasks such as scene generation, the authors used two stacked GANs for the text-to-image task [@DBLP:journals/corr/ZhangXLZHWM16]. The motivating intuition is that the Stage-I GAN produces a low-resolution outline of the desired image and the Stage-II GAN fills in the details of the sketch. In addition to the stacked architecture, they also propose a novel augmentation technique to address the aforementioned interpolation issues. Rather than using the text embedding directly (following a single FC layer) as done in Reed et al., they instead use an FC layer to produce a mean and variance before sampling from the normal distribution. This sampling augments the data and increases the robustness.
Another architecture developed by Xu et al. drew on the widespread success of attention-based models, particularly in NLP for tasks such as machine translation and image captioning. Their model, termed AttnGAN, introduces several novel ideas to the text-to-image task related to attention [@DBLP:journals/corr/abs-1711-10485]. Unlike previous approaches that focus on sentence-level encodings, AttnGAN extracts both sentence-level and word-level features from a bidirectional LSTM. In the stacked generator stages, multiplicative attention is performed over the encoded word vectors so the model can learn which words to attend to at each step. Finally, they add a Deep Attentional Multimodal Similarity Model, which is constructed to learn an attention-based matching score between the image-sentence pairs.
Finally, Gorti et al. incorporate the ideas of stacking, attention, and cycle consistency in their state-of-the-art model, MirrorGAN [@mirrorgan]. Influenced by the CycleGAN architecture, the model adds an image-to-text component which acts as a sanity check that the image generated is indeed semantically consistent with the input caption text. The results demonstrate MirrorGAN’s ability to train networks that can generate both higher quality images as well as image details which are semantically consistent with a provided caption (and in comparison with the true image for a given caption).
MirrorGAN is the culmination of the work on the text-to-image problem. Nonetheless, there is room for improvement. The works up until this point use word embeddings trained from scratch. With the advent of pretrained language models such as ELMO or BERT, a possible extension is to initialize the embeddings with deep, contextualized word vectors derived from BERT or ELMO. In this work, we explore the effect of using BERT-derived word vectors.
Data
====
We used the 2011 Caltech-UCSD Birds 200 dataset (CUB-200), which contains 11,788 images of 200 different types of birds and is a widely used benchmark for text-to-image generation [@WahCUB_200_2011]. Each image provides a bounding box, and the images vary in size. Additionally, we use 10 text descriptions for each image, downloaded from a github repository, that serve as the text descriptions of the generated images [^2].
Methods
=======
Data Preprocessing
------------------
We preprocess this data according to the precedent set by StackGAN++ [@stackganpp]. This includes cropping all images to ensure all bounding boxes have at least a 0.75 object-image size ratio, and then downsampling them to 64x64, 128x128, and 256x256. Then, the data is split into class-disjoint train and test sets.
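As a concrete reference, a minimal sketch of this preprocessing step is shown below; the function name, bounding-box format, and use of PIL are illustrative assumptions rather than the exact StackGAN++ code.

```python
# A hedged sketch of the crop-and-downsample preprocessing; not the exact
# StackGAN++ implementation.
from PIL import Image

def crop_and_resize(path, bbox, sizes=(64, 128, 256)):
    """bbox = (x, y, width, height), as provided with CUB-200 (assumed format)."""
    img = Image.open(path).convert("RGB")
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    r = int(max(w, h) * 0.75)                 # enforce the 0.75 object-image size ratio
    left, upper = int(max(cx - r, 0)), int(max(cy - r, 0))
    right, lower = int(min(cx + r, img.width)), int(min(cy + r, img.height))
    cropped = img.crop((left, upper, right, lower))
    return [cropped.resize((s, s)) for s in sizes]   # 64x64, 128x128, 256x256 versions
```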
Models
------
### AttnGAN
{width="\textwidth"}
This first model combines elements of both the StackGAN [@DBLP:journals/corr/ZhangXLZHWM16] and attention [@DBLP:journals/corr/abs-1711-10485]. This attention GAN first embeds the caption and runs it through an LSTM, generating both word and sentence vectors. Using the Conditioning Augmentation first proposed in the StackGAN [@DBLP:journals/corr/ZhangXLZHWM16], we create a mean and variance from the sentence embedding via a fully-connected layer. We use this mean and variance to parameterize a normal distribution from which a sentence embedding sample is generated to pass into the GAN. This is used for regularization and to promote manifold smoothness. Additionally, we concatenate Gaussian noise to this new sentence embedding sample and pass it into the generator.
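A minimal PyTorch sketch of this Conditioning Augmentation step is given below; the layer sizes (a 256-dimensional sentence embedding, a 100-dimensional conditioning vector and noise) are assumptions for illustration, not the exact values used in our implementation.

```python
# Sketch of Conditioning Augmentation: an FC layer produces a mean and
# log-variance, a sample is drawn by reparameterization, and noise is
# concatenated before the first generator stage.
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    def __init__(self, sent_dim=256, cond_dim=100):
        super().__init__()
        self.fc = nn.Linear(sent_dim, 2 * cond_dim)   # outputs [mu | log_var]

    def forward(self, sent_emb):
        mu, log_var = self.fc(sent_emb).chunk(2, dim=-1)
        std = torch.exp(0.5 * log_var)
        c_hat = mu + std * torch.randn_like(std)       # reparameterized sample
        return c_hat, mu, log_var

# Usage: concatenate the sample with Gaussian noise z for the generator input.
ca = ConditioningAugmentation()
sent_emb = torch.randn(4, 256)                          # batch of sentence embeddings
c_hat, mu, log_var = ca(sent_emb)
z = torch.randn(4, 100)
gen_input = torch.cat([c_hat, z], dim=-1)               # shape (4, 200)
```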
With the StackGAN architecture, we stack three generators together, generating 64x64, 128x128, and 256x256 images, respectively. Additionally, for the second and third generators, we pass the image and the word embeddings through an attention module before passing them into the next generator [@DBLP:journals/corr/abs-1711-10485]. Each of these generators has a corresponding discriminator that takes in both the original sentence embedding and the image. Finally, the 256x256 image is passed through an image encoder to generate local image features (a 17x17 feature map). These image features from the image encoder and word features from the text encoder combine to form the Deep Attentional Multimodal Similarity Model (DAMSM), which is trained with an attention loss [@DBLP:journals/corr/abs-1711-10485]. For stability, we pretrained this DAMSM model.
### CycleGAN
{width="\textwidth"}
Our CycleGAN combines the attention GAN and the original CycleGAN approach [@originalcyclegan]. By adding an RNN conditioned on the image features and the embedded captions, we attempt to return to text with the Semantic Text REgeneration and Alignment Module (STREAM) [@mirrorgan]. By learning this transition back to the original text domain, we allow our images to better represent our captions, as they must hold the latent information needed to recreate the original caption [@mirrorgan] [^3]. Additionally, we added a pretrained BERT encoder, which we use in place of the standard learned word embeddings.
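The sketch below outlines a STREAM-style caption regenerator of the kind described above; the backbone choice (a ResNet-18 encoder), the layer sizes, and the class name are simplifying assumptions, not the exact module used in our implementation.

```python
# Sketch of a caption regenerator: a CNN encodes the generated image and an
# LSTM decodes a caption token by token (teacher forcing with the real caption).
import torch
import torch.nn as nn
from torchvision import models

class CaptionRegenerator(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # global features
        self.img_fc = nn.Linear(512, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)          # (B, 512) image features
        img_emb = self.img_fc(feats).unsqueeze(1)        # (B, 1, embed_dim)
        word_emb = self.embed(captions)                  # (B, T, embed_dim)
        inputs = torch.cat([img_emb, word_emb], dim=1)   # image token first, then words
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                          # logits over the vocabulary
```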
### BERT
Pretrained word vectors are a common component of many NLP models. However, until recently, one primary limitation of these word vectors was that they only allowed for one context-independent embedding. One of the biggest game-changers in recent NLP research is the advent of deep, contextualized word vectors. These vectors are derived from the internal states of deep, pretrained language models trained on massive corpuses of text and use the entire sequence to embed each word, not just the word itself. A key premise to this idea was previous research showing that different layers of an LSTM language model capture different information, such as part of speech at the lower levels and context at the higher layers [@DBLP:journals/corr/abs-1802-05365].
Peters et al. were the first to introduce this idea with their language model, ELMO. Their model was a deep, bidirectional LSTM with character-level convolutions. This pretrained model could then be used for more specific tasks, where each word vector is computed as the (learned) weighted sum of the hidden states of the LSTM, using the entire input sequence as input. Then, tasks like sentiment classification could be done by simply adding a fully connected layer on top of ELMO [@DBLP:journals/corr/abs-1802-05365]. Devlin et al. expanded on this work with the BERT model that replaces the bidirectional LSTM with a bidirectional Transformer [@DBLP:journals/corr/abs-1810-04805]. The Transformer is another recent innovation in NLP that replaces the recurrent nature of LSTMs with positional encoding and blocks of self-attention, layer normalization, and fully connected layers [@DBLP:journals/corr/VaswaniSPUJGKP17]. This architecture has become the de facto model for NLP, replacing LSTMs and standard RNNs in many cases. The BERT model has been widely used to achieve state-of-the-art results in challenging tasks such as question answering (QA). In this work, we used a pretrained BERT model to obtain our embeddings and pass them through a fully connected layer before continuing in the CycleGAN architecture.
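For concreteness, the snippet below shows one way to obtain such embeddings with the HuggingFace `transformers` library and project them to the text-feature dimension; the model name, projection size, and example caption are illustrative assumptions rather than our exact configuration.

```python
# Sketch: contextual word and sentence features from a pretrained BERT,
# projected to the GAN's text-feature dimension.
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
proj = nn.Linear(768, 256)                      # map BERT hidden size to word-feature dim

captions = ["this bird has a red crown and a short yellow beak"]
batch = tokenizer(captions, return_tensors="pt", padding=True)
with torch.no_grad():
    out = bert(**batch)
word_feats = proj(out.last_hidden_state)        # (B, T, 256) word-level features
sent_feats = proj(out.pooler_output)            # (B, 256) sentence-level feature
```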
Loss
----
For a generator $G_i$ and the corresponding discriminator $D_i$, we have the following loss function which combines both a conditional and unconditional loss (conditioned on the sentence embedding): $$\begin{aligned}
\mathcal{L}_{G_i} = - \frac{1}{2} \mathbb{E}_{\hat x_i \sim p_{G_i}} [\log(D_i(\hat x_i))]\\
- \frac{1}{2} \mathbb{E}_{\hat x_i \sim p_{G_i}} [\log(D_i(\hat x_i, \bar e))]\end{aligned}$$ where $\hat x_i$ is the generated image and $\bar e$ is the sentence embedding.
Then, we have the following discriminator loss: $$\begin{aligned}
\mathcal{L}_{D_i} = - \frac{1}{2} \mathbb{E}_{x_i \sim p_{data_i}} [\log(D_i(x_i))]\\
- \frac{1}{2} \mathbb{E}_{\hat x_i \sim p_{G_i}} [\log(1 - D_i(\hat x_i))]\\
- \frac{1}{2} \mathbb{E}_{x_i \sim p_{data_i}} [\log(D_i(x_i, \bar e))]\\
- \frac{1}{2} \mathbb{E}_{\hat x_i \sim p_{G_i}} [\log(1 - D_i(\hat x_i, \bar e))]\end{aligned}$$
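A compact PyTorch sketch of these adversarial losses, written in binary cross-entropy form, is shown below; the argument names (pre-sigmoid discriminator outputs for real/fake, unconditional/conditional) are assumptions made for illustration.

```python
# Sketch of the conditional + unconditional adversarial losses defined above.
import torch
import torch.nn.functional as F

def generator_loss(d_fake_uncond, d_fake_cond):
    ones = torch.ones_like(d_fake_uncond)
    return 0.5 * (F.binary_cross_entropy_with_logits(d_fake_uncond, ones)
                  + F.binary_cross_entropy_with_logits(d_fake_cond, ones))

def discriminator_loss(d_real_uncond, d_real_cond, d_fake_uncond, d_fake_cond):
    ones, zeros = torch.ones_like(d_real_uncond), torch.zeros_like(d_real_uncond)
    uncond = (F.binary_cross_entropy_with_logits(d_real_uncond, ones)
              + F.binary_cross_entropy_with_logits(d_fake_uncond, zeros))
    cond = (F.binary_cross_entropy_with_logits(d_real_cond, ones)
            + F.binary_cross_entropy_with_logits(d_fake_cond, zeros))
    return 0.5 * (uncond + cond)
```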
### AttnGAN Loss
With the word embeddings matrix $e$ and the image embeddings $v$, we calculate a similarity score between the words of the sentence and the image regions: $$\begin{aligned}
s = e^\intercal v\end{aligned}$$ This yields $s \in \mathbb{R}^{T \times 289}$, with $T$ the number of words in the sentence and 289 referring to a flattened version of the 17x17 image feature map. We then normalize the similarity matrix $$\begin{aligned}
\bar s_{ij} = \frac{\exp(s_{ij})}{\sum_{k=1}^T \exp(s_{kj})}\end{aligned}$$ We build a context vector $c_i$ that represents the image regions related to the $i$th word in the sentence. $$\begin{aligned}
c_i = \sum_{j=1}^{289} \alpha_j v_j \text{, where } \alpha_j = \frac{\exp(\gamma \bar s_{ij})}{\sum_{k=1}^{289} \exp(\gamma \bar s_{ik})}\end{aligned}$$ where $\gamma$ is a hyperparameter that sharpens the attention paid to relevant regions. We then have an attention-driven image-text matching score matching the entire image $Q$ to the whole text description $D$ that utilizes the cosine similarity cosine$(c_i, e_i) = \frac{c_i^\intercal e_i}{\|c_i\| \|e_i\|}$ $$\begin{aligned}
R(Q,D) = \log \Big( \sum_{i=1}^T \exp(\gamma \text{cosine}(c_i, e_i))\Big)\end{aligned}$$ We have the DAMSM probability between the different image-sentence pairs in the batch $$\begin{aligned}
P(D_i|Q_i) = \frac{\exp(\gamma R(Q_i, D_i))}{\sum_{j=1}^M \exp(\gamma R(Q_i, D_j))}\\
P(Q_i|D_i) = \frac{\exp(\gamma R(Q_i, D_i))}{\sum_{j=1}^M \exp(\gamma R(Q_j, D_i))}\end{aligned}$$ Therefore, the DAMSM loss combines the following $$\begin{aligned}
\mathcal{L}_1^w = - \sum_{i=1}^M \log P(D_i | Q_i)\\
\mathcal{L}_2^w = - \sum_{i=1}^M \log P(Q_i | D_i)\end{aligned}$$ We also define $\mathcal{L}_1^s$ and $\mathcal{L}_2^s$ the same as above but instead substituting $\bar e$ for $e$. Combining everything, we have the final loss of the attention generator $$\begin{aligned}
\mathcal{L}_{DAMSM} = \mathcal{L}_1^w + \mathcal{L}_2^w + \mathcal{L}_1^s + \mathcal{L}_2^s\end{aligned}$$ $$\begin{aligned}
\mathcal{L} = \mathcal{L}_G + \lambda \mathcal{L}_{DAMSM} \text{, where } \mathcal{L}_G = \sum_{i=1}^3 \mathcal{L}_{G_i}\end{aligned}$$
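The sketch below implements the word-level attention, matching score, and batch DAMSM loss defined above, for illustration only; the text uses a single $\gamma$, whereas here separate values `gamma1`, `gamma2`, `gamma3` are named for clarity, and all shapes and values are assumptions.

```python
# Sketch of the DAMSM word-level matching score and loss. `e` is (T, D) word
# features for one sentence and `v` is (289, D) flattened 17x17 region features.
import torch
import torch.nn.functional as F

def matching_score(e, v, gamma1=5.0, gamma2=5.0):
    s = e @ v.t()                                   # (T, 289) word-region similarities
    s_bar = F.softmax(s, dim=0)                     # normalize over words, as above
    alpha = F.softmax(gamma1 * s_bar, dim=1)        # attention over regions per word
    c = alpha @ v                                   # (T, D) context vectors c_i
    cos = F.cosine_similarity(c, e, dim=-1)         # cosine(c_i, e_i)
    return torch.logsumexp(gamma2 * cos, dim=0)     # R(Q, D)

def damsm_word_loss(word_feats, region_feats, gamma3=10.0):
    """word_feats: (B, T, D); region_feats: (B, 289, D)."""
    B = word_feats.shape[0]
    # R[i][j] = R(Q_j, D_i) over all image-sentence pairs in the batch.
    R = torch.stack([torch.stack([matching_score(word_feats[i], region_feats[j])
                                  for j in range(B)]) for i in range(B)])
    labels = torch.arange(B)
    # Rows of R.t() are images (L_1^w); rows of R are descriptions (L_2^w).
    return (F.cross_entropy(gamma3 * R.t(), labels, reduction="sum")
            + F.cross_entropy(gamma3 * R, labels, reduction="sum"))
```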
### CycleGAN Loss
In addition to the loss of the AttnGAN, we add an additional cross entropy loss to correctly predict the output word in the caption recreation $$\begin{aligned}
\mathcal{L}_{CE} = - \frac{1}{M} \sum_{i=1}^M \sum_{c=1}^{|V|} y_c^{(i)} \log (\hat y_c^{(i)})\\
\mathcal{L} = \mathcal{L}_G + \lambda \mathcal{L}_{DAMSM} + \lambda \mathcal{L}_{CE}\end{aligned}$$ where $M$ represents the batch size, $|V|$ is the size of the vocab, $y_c^{(i)}$ is the binary label of the $c$-th class of the $i$-th example, and $\hat y_c^{(i)}$ is the model probability output of the $c$-th class of the $i$-th example.
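For reference, the combined objective can be written as a short helper like the following; the value of $\lambda$ and the tensor shapes are illustrative assumptions.

```python
# Sketch of the combined CycleGAN objective: adversarial + DAMSM + caption CE.
import torch.nn.functional as F

def cycle_total_loss(gen_loss, damsm_loss, caption_logits, caption_targets, lambda_=5.0):
    # caption_logits: (B, T, |V|) from the caption regenerator;
    # caption_targets: (B, T) ground-truth word indices.
    ce = F.cross_entropy(caption_logits.flatten(0, 1), caption_targets.flatten())
    return gen_loss + lambda_ * damsm_loss + lambda_ * ce
```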
Evaluation Metrics
------------------
### Inception Score
To assess our models, we use the Inception score, which is a widely used ad hoc metric for generative models [@DBLP:journals/corr/SalimansGZCRC16]. The Inception score uses a pretrained Inception model that is fine-tuned to the specific dataset being used. The Inception score is computed by exponentiating the expected KL-divergence between the conditional distribution p(y $|$ x) and the marginal distribution p(y), where y is the class label predicted by the Inception model and x is a generated sample. The intuition is that a good generative model should produce images with a conditional label distribution that has low entropy relative to the marginal distribution. In other words, we want each image to be easily classified into a category by the model, while the set of generated images spans many different classes. $$\begin{aligned}
D_{KL} (P \| Q) = - \sum_{x \in \mathcal{X}} P(x) \log \Big(\frac{Q(x)}{P(x)}\Big)\\
IS(G) = \exp \Big(\mathbb{E}_{\bold x \sim p_G} D_{KL} (p(y|\bold x) \| p(y)) \Big)\end{aligned}$$
The score rewards images that have greater variety and has been shown to be well-correlated with human evaluations of realistic quality. We randomly select 20 captions for each class and use our trained model to generate images, which are then fed into the Inception model to generate the distributions and to compute the score.
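Given class probabilities from the fine-tuned Inception model, the score can be computed as in the hedged sketch below; the array layout is an assumption.

```python
# Sketch of the Inception-score computation: `probs` is an (N, num_classes)
# array of p(y|x) for N generated images.
import numpy as np

def inception_score(probs, eps=1e-12):
    p_y = probs.mean(axis=0, keepdims=True)                       # marginal p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))                               # exponentiated mean KL
```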
### Mean Opinion Score (MOS)
Nonetheless, the Inception score cannot capture how well the generated images reflect accurate conditioning on the input text. Thus, we have humans examine the perceptual quality of images as well as their correspondence to the input text with the Mean Opinion Score [@DBLP:journals/corr/LedigTHCATTWS16]. Specifically, we asked $n=10$ subjects to rate the quality of images on a scale from 1 (poor quality) to 5 (high quality). We showed them 20 images from the ground truth, 20 from the AttnGAN, and 20 from the CycleGAN, along with corresponding captions, in random order, and averaged the ratings to report the MOS.
Results
=======
![Generated Images from the models[]{data-label="fig:generated"}](figures/ablation.png){width=".55\textwidth"}
We trained the AttnGAN model over 100 epochs using Adam optimization to train the generator and all the discriminators. CycleGAN was trained over 100 epochs using the same generator and discriminator optimizers with betas of 0.5 and 0.999. For the AttnGAN, we pretrained the DAMSM architecture for 200 epochs. For the CycleGAN, we pretrained the STREAM architecture for 100 epochs. We used pretrained BERT embeddings only for the CycleGAN implementation, while the AttnGAN used randomly initialized embeddings that were trained from scratch.\
We report both the Inception v3 scores of the generated test outputs for each of the models, computed as an average measure of divergence from the true distribution of bird images, and the qualitative MOS scores from peer judges, reported below. We see that the CycleGAN trained with BERT embeddings had the strongest performance overall across the proposed metrics, and we display generated samples from our model along with their representative ground truth image labels.\
------------------ --------- ---------- -----------
**Model** Epoch 0 Epoch 50 Epoch 100
Ground Truth 11.63 - -
AttnGAN 0.94 2.78 3.92
CycleGAN w/ BERT 1.05 5.48 5.92
------------------ --------- ---------- -----------
**Model** **MOS (n=10)**
------------------ ----------------
Ground Truth 4.7
AttnGAN 3.6
CycleGAN w/ BERT 3.9
We save model weights for the CycleGAN model with BERT text features every 25 epochs and compute Inception scores on the validation set during training. In Figure 7, we observe the CycleGAN Inception score leveling off, but still increasing, as we approach 100 epochs of training.
![CycleGAN Inception Scores[]{data-label="fig:inception_score"}](figures/inception.png){width=".50\textwidth"}
Discussion
==========
Examining several images output from the AttnGAN and CycleGAN (with BERT) in Figure 4, we can see some clear improvements from AttnGAN to CycleGAN. For one, the CycleGAN model generally produces clearer, more realistic looking images relative to AttnGAN. Furthermore, the CycleGAN model appears to be more precise with respect to details. We can see that for the AttnGAN model, the colors are occasionally incorrect (presence of red in the top image and brown instead of grey in the bottom). Additionally, AttnGAN images lack the level of detail in the beak present in the CycleGAN model.
We found the CycleGAN with pretrained BERT embeddings was able to outperform AttnGAN on the test set in both the Inception score (Figure 5) and in the Mean Opinion Score (Figure 6). In particular, with respect to Inception scores, we found that CycleGAN with BERT was able to reach a higher score and train significantly faster, as indicated by the scores at 50 epochs (5.48 vs 2.78) and at 100 epochs (5.92 vs 3.92). In general, a higher Inception score reflects greater variety as well as distinctly capturing unique features, but we note that an ideal quantitative metric remains elusive for this task, particularly in capturing the correspondence between the image and caption. The higher performance of the CycleGAN with BERT on qualitative, human evaluation indicates some level of improvement in the image-text correspondence. One limitation of our work is that we were not able to train until convergence of the Inception score for comparison in the limit, which we note as a possible avenue for future work.
Conclusion
==========
In this paper, we investigate the text-to-image generation task by experimenting with state-of-the-art architectures and incorporating the latest innovations in NLP, namely, the use of deep contextualized word vectors from pretrained language models, such as BERT. Our baseline model is the AttnGAN, which utilizes several key features, including stacking of GANs to progressively learn more detail at higher resolution and attention over word features, a technique that has found widespread success in a variety of NLP tasks. Our main model adds two additional features: a cyclic architecture that adds the image-to-text task in addition to the text-to-image task, and the use of contextualized word embeddings from a pretrained BERT. Through both qualitative and quantitative metrics, we found that the addition of these features improved the generation of images conditioned on the text and led to faster learning.
For future work, it would be useful to train the models longer until convergence in some metric (such as Inception score) is reached for complete analysis. In addition, we did not do any hyperparameter tuning due to time constraints and thus further improvement may be found through a hyperparameter search. Further, with more time, it would also be interesting to perform ablation studies on our full model to show the additional gain, if any, achieved from adding only BERT or only the cyclic architecture to AttnGAN.
Acknowledgements
================
We would like to thank the instructors and TAs for designing and running the course. In particular we would like to thank Ignacio Cases for guiding us on this project.
Authorship Statement
====================
Trevor Tsue: Coded the AttnGAN and CycleGAN, trained models, made architecture diagrams, wrote architecture and loss.\
\
Jason Li: Performed literature review, proposed and coded/integrated BERT extension with CycleGAN; wrote intro, related work, discussion, conclusion\
\
Samir Sen: AttnGAN implementation and training. Worked on incorporating BERT encodings within the attention GAN text featurization and built inception network for capturing key metrics across models. Data cleaning and visualization. Results, discussion, abstract.\
[^1]: [Cycle Image GAN Github](https://github.com/suetAndTie/cycle-image-gan)
[^2]: [taoxugit AttnGAN](https://github.com/taoxugit/AttnGAN) \[attngancode\]
[^3]: [komiya-m MirrorGAN](https://github.com/komiya-m/MirrorGAN)
---
abstract: 'In cavity optomechanics, radiation pressure and photothermal forces are widely utilized to cool and control micromechanical motion, with applications ranging from precision sensing and quantum information to fundamental science. Here, we realize an alternative approach to optical forcing based on superfluid flow and evaporation in response to optical heating. We demonstrate optical forcing of the motion of a cryogenic microtoroidal resonator at a level of 1.46 nN, roughly one order of magnitude larger than the radiation pressure force. We use this force to feedback cool the motion of a microtoroid mechanical mode to 137 mK. The photoconvective forces demonstrated here provide a new tool for high bandwidth control of mechanical motion in cryogenic conditions, and have the potential to allow efficient transfer of electromagnetic energy to motional kinetic energy.'
author:
- 'D. L. McAuslan'
- 'G. I. Harris'
- 'C. Baker'
- 'Y. Sachkou'
- 'X. He'
- 'E. Sheridan'
- 'W. P. Bowen'
title: Microphotonic Forces From Superfluid Flow
---
[^1]
[^2]
Optical forces are widely utilized in photonic circuits [@Li_Nat08; @Roels_NatNano09], micromanipulation [@Ashkin_Science87; @MacDonald_Nat03], and biophysics [@Burg_Nat07; @Taylor_NatPhot13]. In cavity optomechanics, in particular, optical forces enable cooling and control of microscale mechanical oscillators that can be used for ultrasensitive detection of forces, fields and mass [@Mamin_APL01; @Forstner_PRL12; @Chaste_NatNano12], quantum and classical information systems [@Beugnon_NatPhys07], and fundamental science [@Orzel_Science01; @Greiner_Nature02]. Recent progress has seen radiation pressure used for coherent state-swapping [@Verhagen12_Nat], ponderomotive squeezing [@Brooks12_Nat] and ground state cooling [@Chan11_Nat], while static gradient forces have enabled all-optical routing [@Rosenberg_NatPhot09] and non-volatile mechanical memories [@Bagheri11_NatNano]. Likewise, photothermal forces, where the mechanical element moves in response to mechanical stress from localized optical absorption and heating, have been used to demonstrate cavity cooling of a semiconductor membrane [@Usami_Nat12; @Barton_NanoLett12], single molecule force spectroscopy [@Stahl_RSI09] and rich chaotic dynamics in suspended mirrors [@Marino_PRE11]. Here we demonstrate an alternative photoconvective approach to optical forcing. In our implementation, this technique utilizes the convection in superfluids, whereby frictionless fluid flow is generated in response to a local heat source. This well-known superfluid fountain effect [@Allen_Nat38] is a direct manifestation of the phenomenological two-fluid model proposed by Landau and Tisza [@Landau_USSR41; @Tisza_Nat38]. The momentum carried by the helium-4 flow is then transferred to a mechanical element via collision and recoil of superfluid atoms. If the heat source is localized upon the mechanical element, the incident superfluid atoms are converted either to a normal fluid counter-flow or evaporated (see Fig. \[fig4\](a, b)). Alternatively, a distant heat source could be utilized with the mechanical element acting to reroute the superflow.
![\[fig4\] (a) In bulk, a local heat source generates flow of superfluid helium (blue arrows) and counter-flow of normal fluid (red arrow), imparting momentum onto the oscillator. (b) Representation of a microtoroid covered in a thin film of superfluid helium. Heat around the periphery caused by optical absorption generates fluid flow (blue arrows). At low pressures the superfluid then transitions directly into gas phase and leaves the subsystem (red arrows). (c) Superfluid mediated photothermal forcing may be readily extended to other optomechanical systems such as photonic crystal cavities, membranes and nanostrings.](Fig1){width="0.95\columnwidth"}
![\[Fig1\] (a) Experimental schematic. A microtoroid is nested inside an all-fiber interferometer and cooled by a He-3 refrigerator. BS: Beamsplitter, AM: Amplitude modulator, SA: Spectrum analyser, NA: Network analyser. (b) Optical microscope image of the microtoroid used in these experiments, showing the support beams, one either side, that are used to stabilize the tapered optical fiber. (c) Zoomed in microscope image of the microtoroid. Scale bar is $20~\mu$m long. (d) Zoomed in microscope image of a stabilization beam. The circular pads support a suspended beam which has been thinned to 200 nm thickness in order to minimize optical scattering loss. Scale bar is $100~\mu$m long. (e) Thermal motion of the flexural mode of a microtoroid at 3 K. Inset: FEM simulation of the mechanical displacement profile.](Fig2){width="0.95\columnwidth"}
The configuration used here to realize superfluid photoconvective forcing is represented in [Fig. \[fig4\]]{}(b). A microtoroidal resonator is covered in a several nanometre thick film of superfluid helium [@Harris15_arxiv] which forms naturally due to van der Waals forces. Absorption of the circulating laser field at the microtoroid periphery (red glow) causes localized heating. This increase in temperature generates superfluid helium flow up the pedestal towards the heat source via the fountain effect (blue arrows). At the periphery superfluid helium is evaporated (red arrows) resulting in a force on the microtoroid that, on average, is directed radially inwards. The magnitude of this radial force is given by $$\begin{aligned}
F_{\mathrm{radial}} &=& -\frac{\mathrm{d} (mv_{\mathrm{radial}})}{\mathrm{d}t} \\
&=& \frac{4}{\pi^2} \dot{m} v_{\mathrm{rms}}
\label{Eqradialforcephotothermal}\end{aligned}$$ where $\dot{m}$ is the mass flow rate of evaporated helium. The net radial velocity $v_{\rm radial}$ is calculated by integrating the contribution from isotropic evaporation in the outwards facing half-space with a root-mean-square (RMS) velocity of $$v_{\mathrm{rms}}=\sqrt{\frac{3 k_{\rm B} T_{\rm evap}}{m_{\mathrm{He}}}}
\label{v_rms}$$ where $T_{\rm evap}$ is the temperature of evaporated atoms and $m_{\rm He}$ is the mass of a helium atom (see Supplementary Information). In steady state, the mass flow rate of the superfluid is determined by balancing the optical heat load with the energy dissipated through normal fluid counter-flow or evaporation of the film (See Supplementary Information for further discussion). While in bulk superfluid systems the energy dissipation is typically dominated by counter-flow, for thin films the normal fluid fraction is viscously clamped to the surface [@Atkins_PR59], and evaporation dominates. To prevent the continuous accumulation of fluid at the heat source the rate of evaporation must equal the in-flux from superfluid flow. For an absorbed optical power $P_\text{abs}$ the superfluid mass flow rate is then $\dot{m}=P_\text{abs}/(L-\langle \mu_\text{VDW} \rangle)$ where $L$ is the latent heat of vaporization and $\langle \mu_\text{VDW} \rangle$ is the van der Waals potential of the superfluid film (see Supplementary Information) and the resulting inward radial force from helium evaporation is $$F_{\mathrm{radial}} = \frac{4}{\pi^2} \sqrt{\frac{3 k_{\rm B} T_{\rm evap}}{m_{\mathrm{He}}}} \frac{P_\text{abs}}{L-\langle \mu_\text{VDW} \rangle}.
\label{evapForce}$$ Note that, similar to photothermal forces [@Metzger_Nat04], this expression is independent of the cavity finesse, allowing photoconvective forces to be applied effectively where only a weak cavity, or no cavity, is present. By way of comparison, if the incident light is fully absorbed, the radiation pressure force is given by $F_\text{RP} = P_\text{abs}\mathcal{F}/c$, where $\mathcal{F}$ is the cavity finesse and $c$ is the speed of light. For a 1 K superfluid evaporation temperature, the ratio $F_\text{radial}/F_\text{RP} \sim 4 \times 10^5/\mathcal{F}$, indicating that, in our configuration, the superfluid photoconvective force is similar in magnitude to the radiation pressure force from a cavity with a finesse of around 400,000. For our experimental conditions, with a finesse of $\mathcal{F}=53,000$, the superfluid force is predicted to be approximately one order of magnitude larger than radiation pressure.
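To make the scaling of Eqs. \[v\_rms\] and \[evapForce\] concrete, the following is a minimal numerical sketch, not our analysis code: it assumes $T_\text{evap} = 1$ K, neglects $\langle \mu_\text{VDW} \rangle$, and takes a representative latent heat of $L \approx 21$ kJ kg$^{-1}$ for helium-4; the absorbed power is an arbitrary illustrative value.

```python
import numpy as np

# Physical constants (SI)
k_B = 1.380649e-23      # Boltzmann constant [J/K]
m_He = 6.6465e-27       # mass of a helium-4 atom [kg]
c = 2.998e8             # speed of light [m/s]

# Illustrative parameters (assumptions, not fitted values)
T_evap = 1.0            # evaporation temperature [K]
L = 2.1e4               # representative latent heat of vaporization of 4He [J/kg]
mu_vdw = 0.0            # van der Waals potential neglected in this sketch
P_abs = 1e-6            # absorbed optical power [W]
finesse = 53_000        # cavity finesse quoted in the text

v_rms = np.sqrt(3 * k_B * T_evap / m_He)      # RMS velocity of evaporated atoms
m_dot = P_abs / (L - mu_vdw)                  # superfluid mass flow rate [kg/s]
F_radial = (4 / np.pi**2) * v_rms * m_dot     # photoconvective force [N]
F_rp = P_abs * finesse / c                    # radiation pressure force [N]

print(f"v_rms = {v_rms:.0f} m/s")
print(f"F_radial = {F_radial:.2e} N per uW absorbed")
print(f"F_radial / F_RP = {F_radial / F_rp:.1f}")
```

With these representative numbers the ratio comes out close to ten, consistent with the order-of-magnitude estimate quoted above.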
To experimentally realize this prediction we use the setup shown in Fig. \[Fig1\](a). A microtoroidal whispering-gallery-mode resonator (major radius = 37.5 $\mu$m, minor radius = 3.5 $\mu$m, Fig. \[Fig1\](c)) is located inside the sample chamber of a helium-3 closed-cycle cryostat (Oxford Triton). Laser light at 1555.08 nm is coupled into a high-quality optical mode (linewidth $\kappa/2\pi = 23.5$ MHz) of the microtoroid via a tapered optical fiber. The tapered fiber rests on suspended stabilization beams fabricated near the microtoroid (Fig. \[Fig1\](b, d)), ensuring that cryostat vibrations do not affect the taper-toroid separation. The microtoroid supports a number of intrinsic mechanical modes ranging in frequency from 1 MHz to 50 MHz. The thermal motion of these modes is imprinted as phase fluctuations onto the optical field which are measured using homodyne detection. The radial forces applied by both radiation pressure and superfluid flow have optimal overlap with the radial breathing mode of the toroid, at 40 MHz. However, the superfluid forcing was observed to be ineffective above frequencies of approximately 2 MHz, possibly due to breakdown of superfluidity as the Landau critical velocity is reached [@Landau_USSR41]. Consequently, we perform experiments with the first order flexural mode at $\Omega_{\rm m}/2\pi = 1.35~\rm MHz$, which has a mechanical dissipation rate of $\Gamma_\text{m}/2\pi = 530$ Hz at base temperature (559 mK). The single-photon optomechanical coupling rate of this mode is measured as $g_0/2\pi = 12.3$ Hz and the Brownian fluctuations at $3~\rm K$ are shown in Fig. \[Fig1\](e) with the displacement profile from finite element modelling (FEM) shown in the inset.
![\[Fig2\] (a) Mode temperature of the microtoroid flexural mode as the cryostat is cooled from 10 K to 0.32 K. The microtoroid reaches a base temperature of 0.56 K with 100 nW of injected optical power. (b) Mode temperature of the flexural mode as the probe laser is increased from $10~\rm nW$ to $3.3~\rm \mu W$. Below $2.2~\rm \mu W$ the temperature increases slightly as the laser power is increased. Above $2.2~\rm \mu W$ the superfluid boils off causing a sharp rise in mode temperature.](Fig3){width="0.95\columnwidth"}
To produce the superfluid film, the sample chamber was filled with low-pressure helium-4 gas (19 mBar at 2.9 K) and cooled to base temperature. This gas pressure was specifically chosen to provide a superfluid film with a thickness such that the characteristic frequencies of third sound modes intrinsic to the superfluid film [@Harris15_arxiv] do not overlap with the microtoroid mode. At 850 mK the helium transitions directly from the gas phase to its superfluid state, forming a thin ($<$5 nm) superfluid layer over the chamber and its contents. To estimate the final temperature of the microtoroid the flexural mode was monitored as the cryostat temperature was decreased from 10 K to 320 mK. Spectral analysis of the homodyne photocurrent gave the mechanical mode temperature via the integrated power spectral density. From 10 K to $600~\rm mK$ the microtoroid is well thermalized to the cryostat, as shown by the linear fit in Fig. \[Fig2\](a); however, at lower cryostat temperatures the microtoroid mode temperature plateaus and is no longer in thermal equilibrium with the cryostat. We attribute this temperature deviation to the heat dissipated at the sample causing a thermal gradient between the microtoroid and cryostat cold plate.
![\[Fig3\] Driven response of the flexural mode as the cryostat is cooled (red points), showing a step increase in response at the superfluid transition temperature. Black line represents a theoretical fit to the data. Grey shaded area indicates a superfluid layer has formed on the microtoroid surface. Pink shaded area represents the theoretical force if $T_\text{evap}$ is up to 1 K higher than the mode temperature. Inset: Displacement spectrum of the flexural mode at $0.7~\rm K$ and $2~\rm K$ with a coherent drive applied via optical amplitude modulation. The response to coherent drive is shown to increase with the presence of superfluid helium. ](Fig4){width="0.95\columnwidth"}
To investigate the effects of optical absorption on the temperature of the microtoroid, we determined the integrated power spectral density of the mechanical mode as a function of laser power. The temperature was found to increase with increasing laser power, eventually causing a boil-off of the superfluid film, as shown in Fig. \[Fig2\](b). As the laser power is increased over two orders of magnitude from 10 nW to 2.1 $\mu$W the mode temperature increases only modestly from 510 mK to 730 mK. Above 2.2 $\mu$W the mode temperature jumps sharply to 3 K, indicated by the red shaded region in Fig. \[Fig2\](b). This threshold behaviour manifests due to Landau’s critical velocity, which sets an upper limit on the superfluid flow rate (see Supplementary Information). This results in a thermal run-away process, wherein the superfluid can no longer be replenished at the periphery of the microtoroid as fast as it evaporates and therefore boils off completely. The microtoroid is then no longer effectively thermally anchored to the cryostat, and the final mode temperature is dominated by laser heating.
To investigate the optical forces present in the system a constant optical amplitude modulation was applied at the frequency of the flexural mode as the cryostat temperature was varied (Fig. \[Fig3\]). This applies resonant forces on the mode, both through radiation pressure, and, below the superfluid transition temperature, superfluid flow. The mechanical response to this drive was measured via homodyne detection of the phase quadrature of the output field. At temperatures above the superfluid transition the optical force originates from radiation pressure alone and is essentially independent of temperature (see right inset to Fig. \[Fig3\]). However, upon formation of a superfluid layer, indicated by the blue shaded region in Fig. \[Fig3\], the response of the flexural mode to the laser drive abruptly increases by 21 dB to a maximum of 540 fN (left inset to Fig. \[Fig3\]). Taking into account the poor overlap between the flexural mode and the radial evaporative force (0.037% calculated from FEM modelling) gives a total superfluid photoconvective force of 1.46 nN. This increased response in the presence of superfluid is in good agreement with theoretical predictions, corresponding to a superfluid photoconvective force that is a factor of eleven larger than radiation pressure. The measured superfluid convective force decreases in magnitude as the temperature is reduced away from the transition temperature (see Fig. \[Fig3\]). This occurs because of the reduced RMS velocity of the evaporated atoms (see Eq. \[v\_rms\]), with colder atoms contributing less recoil to the microtoroid. This behaviour is accurately predicted by our model, where the superfluid evaporation temperature in Eq. \[evapForce\] is equated to the measured microtoroid mode temperature $T_m$. However, the observed superfluid forces are found consistently to be larger than predicted by the model, with a maximum deviation of approximately 60%. We attribute this discrepancy to a temperature differential existing between the evaporated atoms and the helium film. It has been shown that helium atoms evaporated from a superfluid thin film have a temperature that is up to 1 K hotter than the film temperature, dependent on the total heat applied to the liquid [@Hyman_PhysRev69; @Andres_PhysRevA73]. To account for this phenomenon we have included in Fig. \[Fig3\] a theoretical band (pink shading) showing the expected applied force for atoms that are evaporated with temperatures $T_\text{evap}$ ranging from the mode temperature $T_{\text{m}}$ to $T_\text{m}+1$ K.
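As a quick consistency check on the numbers quoted above (with the assumption that the 21 dB step is an amplitude, i.e. $20\log_{10}$, ratio):

```python
import numpy as np

step_db = 21.0                            # increase in driven response at the transition
amplitude_factor = 10**(step_db / 20)     # ~11x, matching the quoted factor of eleven

F_modal = 540e-15                         # resonant force coupled into the flexural mode [N]
overlap = 0.037e-2                        # FEM overlap of the radial force with the mode
F_total = F_modal / overlap               # total superfluid photoconvective force [N]

print(f"amplitude factor = {amplitude_factor:.1f}")
print(f"F_total = {F_total*1e9:.2f} nN")
```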
![\[Fig4\] Feedback cooling of the flexural mode from $715~\rm mK$ to $137~\rm mK$ using superfluid mediated photothermal forcing. With a fixed probe power of $1.9~\rm\mu W$ the feedback gain is varied over three orders of magnitude, showing good agreement with the estimated mode temperature from in-loop measurements (solid line). The out-of-loop mode temperature is then inferred by a transformation (dashed line) that is derived in the Supplementary Information. Inset: Displacement spectrum of the flexural mode with varying feedback gain. ](Fig5){width="0.95\columnwidth"}
As a specific example of an application that takes advantage of the enhanced optical force provided by the superfluid, we perform feedback cooling on the microtoroid mode. This is done by passing the homodyne photocurrent through various filter and amplification stages, then feeding it into an amplitude modulator placed before the microtoroid (see [Fig. \[Fig1\]]{}(a)). Provided the phase of the feedback loop is chosen correctly, so that the applied force opposes the velocity of the mode, the thermal motion of the 1.35 MHz flexural mode is reduced via cold damping [@Cohadon_PRL99]. Figure \[Fig4\] shows that as the feedback gain is increased the microtoroid mode temperature decreases, in excellent agreement with theory [@Lee_PRL10]. The flexural mode is thus cooled from 715 mK to 137 mK, with a final occupancy of $n=2110 \pm 40$ phonons, constrained primarily by the optomechanical coupling rate to the flexural mode.
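In the ideal cold-damping limit, where measurement noise fed back onto the oscillator is neglected, the feedback simply rescales the damping rate, $\Gamma_\text{eff} = (1+g)\Gamma_\text{m}$, so the mode temperature falls as $T_\text{eff} \approx T_0/(1+g)$. The sketch below uses this simplified relation with an assumed gain sweep; it is not the full in-loop/out-of-loop analysis described in the Supplementary Information.

```python
import numpy as np

T0 = 0.715                        # starting mode temperature [K]
gains = np.logspace(-1, 2, 7)     # dimensionless feedback gain sweep (assumed)

# Ideal cold damping: T_eff = T0 / (1 + g), valid when measurement noise is negligible.
T_eff = T0 / (1 + gains)
for g, T in zip(gains, T_eff):
    print(f"g = {g:7.2f}  ->  T_eff = {T*1e3:6.1f} mK")

# Gain required to reach the 137 mK reported in the text:
print(f"gain needed for 137 mK: {T0 / 0.137 - 1:.1f}")
```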
While we have already demonstrated an order-of-magnitude improvement in optical force over radiation pressure using evaporative recoil in thin superfluid films, for completeness we discuss in the Supplementary Information the magnitude of the forces arising from superflow and normal fluid counterflow during heat transport in bulk superfluid (as illustrated in Fig. \[fig4\](a)). By choosing the right experimental parameters, we show that the force could be increased by more than a further order of magnitude compared to the thin-film case, enabling optical forces more than two orders of magnitude larger than is achievable with radiation pressure even in a high-finesse cavity.
![\[SFforces\] By varying the geometry, superfluid flow could be used to apply a large range of different forces including: (a) & (b) linear force, (c) torque, (d–f) examples of geometries designed to efficiently leverage forces from superfluid flow. (d) Expansive force that is efficiently coupled to a microdisk/microtoroid, (e) compressive force, and (f) long lived centrifugal forces resulting from persistent current flow. ](Fig6){width="0.95\columnwidth"}
The strongest known optical actuation capabilities are provided by the photothermal interaction [@Restrepo_CRP11]. Indeed, the first optomechanical system to demonstrate cooling via dynamical backaction was based on this mechanism [@Metzger_Nat04]. Further, it has been shown that photothermal forces should enable optomechanical cooling to the ground state without requiring sideband resolution [@Restrepo_CRP11], thus enabling efficient cooling of low-frequency mechanical oscillators. However, accessing large photothermal forces requires strong optical absorption and a large thermal expansion coefficient, limiting devices to specific materials and geometries, and it also precludes cryogenic operation, since the thermal expansion coefficient of most materials decreases by several orders of magnitude when cooled to cryogenic temperatures. Furthermore, the characteristic bandwidth is set by the typically slow thermalization rate of the device material. Superfluid convective and evaporative forces could alleviate these constraints, allowing new regimes characterized by fast, strong actuation to be realized, with the potential to incorporate superfluid-enhanced optical forcing into existing cryogenic optomechanical systems.
Superfluid convective and evaporative forces may find applications where strong optical forces are required in the absence of an optical cavity; for example in photonic circuits [@Li_Nat08; @Li_NatPhot09; @Roels_NatNano09], or cryogenic MEMS [@Waldis_QuantElec07]. It should also be possible to design systems where the heat source is applied at a location spatially remote from the resultant force, and in a range of geometries to apply not only forces, but also torques, at the microscale (see Fig. \[SFforces\]). This could be advantageous in applications where the device is highly-sensitive to temperature fluctuations.
In conclusion, we have demonstrated photoconvective forcing of a mechanical oscillator based on superfluid flow and recoil. This enables large optical forces at cryogenic temperatures, in contrast to photothermal forcing mechanisms that are generally precluded from cryogenic operation. Furthermore, the exceptionally high thermal conductivity of superfluid helium provides good thermal anchoring to the environment while permitting fast optical forces to be realized. The self-assembling nature of superfluid helium means that this technique may be relatively straightforwardly incorporated into other cryogenic optomechanical systems.
[*Acknowledgments*]{}: This research was funded by the Australian Research Council Centre of Excellence CE110001013. WPB is supported by the Australian Research Council Future Fellowship FT140100650. Device fabrication was undertaken within the Queensland Node of the Australian Nanofabrication Facility.
[^1]: These authors contributed equally to this work.
[^2]: These authors contributed equally to this work.
|
---
abstract: 'We show how to perform renormalization in the framework of the 2PI effective action for abelian gauge theories. In addition to the usual renormalization conditions one needs to incorporate new ones in order to remove non-transverse UV divergences in the truncated two- and four-photon functions. The corresponding counterterms are allowed by gauge symmetry, in-medium independent and suppressed with respect to the accuracy of the truncation.'
address: |
Institute für Theoretische Physik, Universität Heidelberg,\
Philosophenweg 16, 69120 Heidelberg, Germany
author:
- 'U. Reinosa'
title: 2PI renormalized effective action for gauge theories
---
Introduction
============
Functional techniques based on the two-particle-irreducible (2PI) effective action [@Luttinger:1960ua] provide a powerful tool to devise systematic non-perturbative approximations, of particular interest in numerous physical situations [@Blaizot:2003tw] where standard expansion schemes are badly convergent [@Berges:2004hn]. However, the systematic implementation of 2PI techniques for gauge theories has been postponed for a long time due to formal intricacies [@Arrizabalaga:2002hn]. Here, we discuss the issue of renormalization[^1] and show how to remove UV divergences for any loop approximation of the 2PI effective action. For the sake of simplicity, we consider QED in the vacuum although our approach could in principle be used for non-abelian gauge theories at finite temperature or finite chemical potential. We choose a covariant gauge, for which the classical action reads $$S=\int_x \left\{\bar\psi\Big[i\slashchar{\partial}-e\slashchar{A}-m\Big]\psi+\frac{1}{2}A_\mu\Big[g^{\mu\nu}\partial^2-(1-\lambda)\partial^\mu\partial^{\nu}\Big]A_\nu\right\}\,,$$ with $\lambda$ the gauge fixing parameter. A detailed version of this work is found in [@Reinosa:2006cm].
Two- and four-point functions
=============================
The 2PI effective action for QED in the absence of mean fields is given by $$\label{eq:2PI}
\Gamma_{\rm 2PI}[D,G]=-i{\hbox{Tr}}\ln D^{-1}-i{\hbox{Tr}}\, D_0^{-1}D+\frac{i}{2}{\hbox{Tr}}\ln G^{-1}+\frac{i}{2}{\hbox{Tr}}\,G_0^{-1}G+\Gamma_{\rm int}[D,G]\,,$$ where the trace [Tr]{} includes integrals in configuration space and sums over Dirac and Lorentz indices. The free inverse propagators are given by $$\begin{aligned}
iD_{0,\alpha\beta}^{-1}(x,y) & = & \left[\,i\slashchar{\partial}_x-m\,\right]_{\alpha\beta}\delta^{(4)}(x-y)\,,\\
iG_{0,\mu\nu}^{-1}(x,y) & = & \left[\,g_{\mu\nu}\partial_x^2-(1-\lambda)\partial^x_\mu\partial^x_\nu\,\right]\delta^{(4)}(x-y)\,.\end{aligned}$$ The functional $\Gamma_{\rm int}[D,G]$ is the infinite series of closed two-particle-irreducible (2PI) diagrams with lines corresponding to arbitrary two-point functions $D$ and $G$ and with the usual QED vertex. The physical two-point functions $\bar D$ and $\bar G$ can be obtained from the condition that the 2PI functional be stationary, which can be written as $$\begin{aligned}
\bar D_{\,\alpha\beta}^{-1}(x,y)-D^{-1}_{0,\alpha\beta}(x,y) & = &
i\left.\frac{\delta\Gamma_{\rm int}[D,G]}{\delta D^{\beta\alpha}(y,x)}\right|_{\bar D,\bar G}\equiv
\bar\Sigma_{\alpha\beta}(x,y)\,,\\
\bar G_{\mu\nu}^{-1}(x,y)- G_{0,\mu\nu}^{-1}(x,y) & = &
-2i\left.\frac{\delta\Gamma_{\rm int}[D,G]}{\delta G^{\nu\mu}(y,x)}\right|_{\bar D,\bar G}\equiv \bar\Pi_{\mu\nu}(x,y)\,.\end{aligned}$$ In the next sections we explain how to renormalize $\bar D$ and $\bar G$. In this procedure, an important role is played by the four-point function with four photon legs which is constructed as follows (see also [@Reinosa:2006cm]). First, one defines the 2PI four-point kernels $\bar\Lambda_{GG}$, $\bar\Lambda_{GD}$, $\bar\Lambda_{DG}$ and $\bar\Lambda_{DD}$ by $$\begin{aligned}
\bar\Lambda_{GG}^{\mu\nu,\rho\sigma}(p,k) & \equiv &
\left.4\frac{\delta^2\Gamma_{\rm int}[D,G]}
{\delta G_{\nu\mu}(p)\,\delta G_{\rho\sigma}(k)}\right|_{\bar D,\bar G}\,,
\\
\bar\Lambda_{GD}^{\mu\nu;\alpha\beta}(p,k) & \equiv &
\left.-2\frac{\delta^2\Gamma_{\rm int}[D,G]}
{\delta G_{\nu\mu}(p)\,\delta D_{\alpha\beta}(k)}\right|_{\bar D,\bar G}\,,
\\
\bar\Lambda_{DG}^{\alpha\beta;\mu\nu}(p,k) & \equiv &
\left.-2\frac{\delta^2\Gamma_{\rm int}[D,G]}
{\delta D_{\beta\alpha}(p) \,\delta G_{\mu\nu}(k)}\right|_{\bar D,\bar G}\,,
\\
\bar\Lambda_{DD}^{\alpha\beta,\delta\gamma}(p,k) & \equiv &
\left.\frac{\delta^2\Gamma_{\rm int}[D,G]}
{\delta D_{\beta\alpha}(p)\,\delta D_{\delta\gamma}(k)}\right|_{\bar D,\bar G}\,.\end{aligned}$$ These are then combined in order to build the kernel $$\begin{aligned}
&&\bar K^{\mu\nu,\rho\sigma}(p,k) =
\bar \Lambda_{GG}^{\mu\nu,\rho\sigma}(p,k)
-\int_q \bar \Lambda_{GD}^{\mu\nu;\alpha\beta}(p,q)M^{DD}_{\alpha\beta,\gamma\delta}(q)
\bar \Lambda_{DG}^{\gamma\delta;\rho\sigma}(q,k)\nonumber\\
&&\qquad+
\int_q\int_r\bar\Lambda_{GD}^{\mu\nu;\alpha\beta}(p,q)M^{DD}_{\alpha\beta,\bar\alpha\bar\beta}(q)
\bar \Lambda^{\bar\alpha\bar\beta,\bar\gamma\bar\delta}(q,r)
M^{DD}_{\bar\gamma\bar\delta,\gamma\delta}(r)\bar \Lambda_{DG}^{\gamma\delta;\rho\sigma}(r,k)\,,\end{aligned}$$ where $M^{DD}_{\alpha\beta,\gamma\delta}(q)\equiv\bar D_{\alpha\gamma}(q)\bar D_{\delta\beta}(q)$ and $$\bar \Lambda^{\alpha\beta,\gamma\delta}(p,k)=
\bar \Lambda_{DD}^{\alpha\beta,\gamma\delta}(p,k)
-\int_q\bar\Lambda_{DD}^{\alpha\beta,\bar\alpha\bar\beta}(p,q)
M^{DD}_{\bar\alpha\bar\beta,\bar\gamma\bar\delta}(q)
\bar\Lambda^{\bar\gamma\bar\delta,\gamma\delta}(q,k)\,.$$ The kernel $\bar K$ is 2PI with respect to photon lines but 2PR (two-particle-reducible) with respect to fermion lines. Finally the four-photon function $\bar V$ is obtained from the equation $$\bar V^{\mu\nu,\rho\sigma}(p,k)=\bar K^{\mu\nu,\rho\sigma}(p,k)+\frac{1}{2}\int_q
\bar K^{\mu\nu,\bar\mu\bar\nu}(p,q)M^{GG}_{\bar\mu\bar\nu,\bar\rho\bar\sigma}(q)
\bar V^{\bar\rho\bar\sigma,\rho\sigma}(q,k)\,,$$ where $M^{GG}_{\mu\nu,\rho\sigma}(q)=\bar G_{\mu\rho}(q)\bar G_{\sigma\nu}(q)$. The four-photon function $\bar V$ is 2PR both with respect to fermion and photon lines.
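Although the equations above involve the full Dirac and Lorentz structure, the final relation for $\bar V$ is simply a Bethe-Salpeter-type integral equation, which becomes a linear system once the momentum integral is discretized. The following toy sketch (indices suppressed, arbitrary model kernel and propagators, uniform momentum grid) only illustrates how $\bar V = \bar K + \frac{1}{2}\int_q \bar K\, M^{GG}\, \bar V$ is solved by matrix inversion; it is not the QED computation.

```python
import numpy as np

# Toy discretization: n momentum points q_i with uniform weight dq.
n, dq = 64, 0.1
q = np.linspace(0.1, 0.1 + n * dq, n, endpoint=False)

# Model inputs (assumptions for illustration only):
K = 1.0 / (1.0 + np.add.outer(q**2, q**2))   # stand-in for the kernel K(p, k)
M = 1.0 / (q**2 + 1.0)**2                    # stand-in for the propagator pair M^{GG}(q)

# V(p,k) = K(p,k) + (1/2) * sum_q dq K(p,q) M(q) V(q,k)
# => (I - (1/2) dq K diag(M)) V = K
A = np.eye(n) - 0.5 * dq * K * M[np.newaxis, :]
V = np.linalg.solve(A, K)

print("max |V - K| =", np.abs(V - K).max())  # size of the resummed ladder contribution
```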
Gauge symmetry and counterterms
===============================
In the absence of mean fields, the 2PI effective action satisfies the Ward-Takahashi identity $\Gamma_{\rm int}[G^\alpha,D^\alpha]=\Gamma_{\rm int}[G,D]$ with $G^\alpha(x,y) \equiv G(x,y)$ and $D^\alpha(x,y) \equiv e^{i\alpha(x)}D(x,y) e^{-i\alpha(y)}$. In view of eliminating divergences in $\bar D$ and $\bar G$, one modifies the 2PI effective action by adding a shift $\delta \Gamma_{\rm int}[D,G]$ compatible with all the symmetries of the system, namely Lorentz and gauge symmetry. At two-loop order the shift reads for instance: $$\begin{aligned}
\delta\Gamma_{\rm int}[D,G]&=&\int_x\left\{
-{\hbox{tr}}\Big[(i\delta Z_2\slashchar{\partial}_x -\delta m)D(x,y)\Big]
+\frac{\delta Z_3}{2}\,\Big(g^{\mu\nu}\partial_x^2
-\partial_x^\mu\partial_x^\nu\Big)G_{\mu\nu}(x,y)\right.\nonumber\\
&&\qquad+\frac{\delta\lambda}{2}\,\partial_x^\mu\partial_x^\nu G_{\mu\nu}(x,y)
+\frac{\delta M^2}{2}\,{G^\mu}_\mu(x,x)\nonumber\\
&&\qquad\left.+\,\frac{\delta g_1}{8}\,{G^\mu}_\mu(x,x){G^\nu}_\nu(x,x)+
\frac{\delta g_2}{4}\,G^{\mu\nu}(x,x)G_{\mu\nu}(x,x)\right\}_{\!\!y=x}\,,\end{aligned}$$ where ${\rm tr}$ denotes the trace over Dirac indices. The counterterms $\delta Z_2$, $\delta Z_3$ and $\delta m$ are the analog of the corresponding ones in perturbation theory.[^2] The extra counterterms $\delta\lambda$, $\delta M^2$, $\delta g_1$ and $\delta g_2$ have no analog in perturbation theory but are allowed by the 2PI Ward-Takahashi identity. Their role is to absorb non-transverse divergences in the two- and four-photon functions. At higher loops, additional diagrams including counterterms need to be considered.
Renormalization conditions
==========================
If we were to work with the exact 2PI effective action (no truncations), it would be easy to check that the two-photon and four-photon functions $\bar \Pi$ and $\bar V$ are transverse. This would prevent the appearance of non-transverse divergences in $\bar \Pi$ and $\bar V$ and would lead to $\delta\lambda=\delta M^2=\delta g_1=\delta g_2=0$, as one is used to in perturbation theory. However, one has in general to approximate the 2PI effective action by truncating its diagrammatic expansion to a certain loop order. This leads to non-transverse contributions in the two- and four-photon functions. For instance, at two-loop order, one can check that divergent non-transverse contributions appear in $\bar V$ at order $e^4$: $$\bar V_{\rm 2-loop}^{\rm \mu\nu,\,\rho\sigma,\,
div}=\frac{e^4}{\pi^2}\frac{d-2}{d+2}\frac{g^{\mu\rho}g^{\nu\sigma}+g^{\mu\sigma}g^{\nu\rho}}{d-4}\,,$$ where $d$ represents the dimension of space in dimensional regularization. In general, at $L$-loop order, non-transverse divergences appear at order $e^{2L}$ and one has to devise a procedure to consistently remove them.
First one needs to remove four-photon sub-divergences in $\bar D$ and $\bar G$. A diagrammatic analysis reveals that these are nothing but the ones encoded in $\bar V$.[^3] Renormalization of $\bar V$ is then achieved by tuning $\delta g_1$ and $\delta g_2$ via the renormalization condition $$P_{L,\mu\nu}(k_*)\bar V^{\mu\nu,\rho\sigma}(k_*,k_*)=0\,,$$ where $P_L^{\mu\nu}(k_*)=n_*^\mu\,n_*^\nu$ is the longitudinal projector, with $n_*^\mu\equiv
k_*^\mu/\sqrt{k_*^2}$. Notice that, as the number of loops goes to infinity, the renormalization condition becomes an identity (1PI Ward-Takahashi identity), ensuring that no new parameter is introduced and that $\delta g_1 \rightarrow 0$ and $\delta
g_2 \rightarrow 0$, as they should.
Once sub-divergences have been eliminated, there only remain overall divergences in $\bar D$ and $\bar G$.[^4] Divergences in $\bar D$ are removed as usual via $\delta m$ and $\delta Z_2$ which are fixed by imposing the renormalization conditions $\bar{\Sigma}(p=p_\star)=0$ and $d\bar{\Sigma}(p)/d\slashchar{p}|_{p=p_\star}=0$. Similarly, for the photon self-energy, one can impose the following condition on the transverse part of the photon polarization tensor $d\bar{\Pi}_{\rm T}(k^2)/dk^2|_{k=k_*}=0$. Usually (in perturbation theory for instance), this is enough to fix the photon wave-function renormalization counterterm and to remove all divergences, which are purely transverse. In the 2PI framework, however, non-transverse divergences appear and one needs to fix three counterterms $\delta Z_3$, $\delta\lambda$ and $\delta M^2$. Two other independent conditions are thus needed. Again, we choose renormalization conditions which are automatically fulfilled in the exact theory in order to ensure that $\delta\lambda \rightarrow 0$ and $\delta M^2 \rightarrow 0$, in the limit of an infinite number of loops. A possible choice is to impose transversality of the photon polarization tensor: $$\bar{\Pi}_{\rm L}(k^2_*)=0\,.$$ Since in the exact theory the transversality condition has to hold for all momenta, one can further impose that $$\left.\frac{d\bar\Pi_{\rm L}(k^2)}{d k^2}\right|_{k=k_*}=0\,.$$
[9]{} J. M. Luttinger, J. C. Ward, PRD [**118**]{}, 1417 (1960); G. Baym, PR [**127**]{}, 1391 (1962); J. M. Cornwall, R. Jackiw, E. Tomboulis, PRD [**10**]{}, 2428 (1974). J. P. Blaizot, E. Iancu, A. Rebhan, in [*Quark Gluon Plasma 3*]{}, Eds. R.C. Hwa, X.N. Wang, World Scientific, Singapore, hep-ph/0303185; J. O. Andersen, M. Strickland, AP [**317**]{}, 281 (2005); J. Berges, J. Serreau, in [*SEWM04*]{}, Eds. K.J. Eskola, K. Kainulainen, K. Kajantie, K. Rummukainen, World Scientific, Singapore, hep-ph/0410330; J. Berges, [*AIP Conf. Proc. * ]{}[**739**]{}, 3 (2005); S. Borsányi, PoS JHW2005, 004 (2006). J. Berges, S. Bors[á]{}nyi, U. Reinosa, J. Serreau, PRD [**71**]{}, 105004 (2005); J. P. Blaizot, A. Ipp, A. Rebhan, U. Reinosa, PRD [**72**]{}, 125005 (2005). A. Arrizabalaga, J. Smit, PRD [**66**]{}, 065014 (2002); M.E. Carrington, G. Kunstatter, H. Zaraket, EPJC [**42**]{}, 253 (2005); E. Mottola, in [*SEWM02*]{}, Ed. M.G. Schmidt, World Scientific, Singapore, hep-ph/0304279. H. van Hees, J. Knoll, PRD [**65**]{}, 025010 (2002); PRD [**65**]{}, 105005 (2002); J.-P. Blaizot, E. Iancu, U. Reinosa, PLB [**568**]{}, 160 (2003); NPA [**736**]{}, 149 (2002); F. Cooper, B. Mihaila, J. F. Dawson, PRD [**70**]{}, 105008 (2004). J. Berges, S. Bors[á]{}nyi, U. Reinosa, J. Serreau, AP [**320**]{}, 344 (2005); U. Reinosa, NPA [**772**]{}, 138 (2006).
U. Reinosa and J. Serreau, JHEP [**07**]{}, 028 (2006).
[^1]: For a detailed analysis of this issue in scalar theories see [@vanHees:2001ik].
[^2]: Notice that the counterterm $\delta e$ only appears at higher orders in the 2PI loop-expansion.
[^3]: The connection between $\bar D$, $\bar G$ and $\bar V$ is also easily seen at finite temperature where requiring that $\bar D$ and $\bar G$ do not contain temperature dependent divergences is equivalent to requiring that $\bar V$ is finite.
[^4]: At finite temperature or finite chemical potential, these divergences are in-medium independent. This is the benefit of properly removing four-photon sub-divergences first.
|
---
abstract: |
While O is often seen in spectra of Type Ia supernovae (SNe Ia) as both unburned fuel and a product of C burning, C is only occasionally seen at the earliest times, and it represents the most direct way of investigating primordial white dwarf material and its relation to SN Ia explosion scenarios and mechanisms. In this paper, we search for C absorption features in 188 optical spectra of 144 low-redshift ($z <
0.1$) SNe Ia with ages $\la$3.6 d after maximum brightness. These data were obtained as part of the Berkeley SN Ia Program (BSNIP; Silverman [et al. ]{}2012) and represent the largest set of SNe Ia in which C has ever been searched. We find that [$\sim\!\!$ ]{}11 per cent of the SNe studied show definite C absorption features while [$\sim\!\!$ ]{}25 per cent show some evidence for [C$\;$]{} in their spectra. Also, if one obtains a spectrum at $t \la -5$ d, then there is a better than 30 per cent chance of detecting a distinct absorption feature from [C$\;$]{}. SNe Ia that show C are found to resemble those without C in many respects, but objects with C tend to have bluer optical colours than those without C. The typical expansion velocity of the [C$\;$]{} $\lambda$6580 feature is measured to be 12,000–13,000 [km s$^{-1}$]{}, and the ratio of the [C$\;$]{} $\lambda$6580 to [Si$\;$]{} $\lambda$6355 velocities is remarkably constant with time and among different objects with a median value of [$\sim\!\!$ ]{}1.05. While the pseudo-equivalent widths (pEWs) of the [C$\;$]{} $\lambda$6580 and [C$\;$]{} $\lambda$7234 features are found mostly to decrease with time, we see evidence of a significant increase in pEW between [$\sim\!\!$ ]{}12 and 11 d before maximum brightness, which is actually predicted by some theoretical models. The range of pEWs measured from the BSNIP data implies a range of C mass in SN Ia ejecta of about $\left(2\textrm{--}30\right) \times 10^{-3}$ [M$_\odot$]{}.
author:
- |
Jeffrey M. Silverman,$^{1}$[^1] Alexei V. Filippenko$^{1}$\
$^{1}$Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA\
bibliography:
- 'astro\_refs.bib'
date: 'Accepted . Received ; in original form '
title: 'Carbon Detection in Early-Time Optical Spectra of Type Ia Supernovae'
---
\[firstpage\]
[methods: data analysis – techniques: spectroscopic – supernovae: general]{}
Introduction {#s:intro}
============
It is thought that thermonuclear explosions of C/O white dwarfs (WDs) give rise to Type Ia supernovae (SNe Ia; e.g., @Hoyle60 [@Colgate69; @Nomoto84]; see @Hillebrandt00 for a review). However, after decades of observations and theoretical work, the details of SN Ia progenitors and explosion mechanisms are still missing. Despite this, SNe Ia have been used in the recent past to discover the accelerating expansion of the Universe [@Riess98:lambda; @Perlmutter99], as well as to measure cosmological parameters [e.g., @Astier06; @Riess07; @Wood-Vasey07; @Hicken09:cosmo; @Kessler09; @Amanullah10; @Suzuki12].
As the explosion proceeds, pristine C and O from the progenitor WD may get mixed throughout various layers of the ejecta. However, the amount and exact location of this unburned material varies widely among published models [e.g., @Hoflich02; @Gamezo03; @Ropke07; @Kasen09]. Thus, determinations of the quantity of these elements after explosion, their spatial distribution in the ejecta, and other measurements of this primordial material can help constrain possible explosion mechanisms of SNe Ia.
Oxygen is found in SNe Ia as both unburned fuel from the progenitor WD and burned ash as a product of C burning. Therefore, it is difficult to associate O detections, which are common in SNe Ia [e.g., @Filippenko97], with primordial material. This leaves us with C as the most direct link to matter from the pre-explosion WD. Observations during the first 2–3 weeks after explosion probe the outermost layers of the ejecta, which is where the unburned material is most likely to reside. At the temperatures and densities observed in SNe Ia at these epochs, singly ionised states of C are expected to be the dominant species [e.g., @Tanaka08]. Neutral C could appear (mostly in the near-infrared) at significantly lower temperatures, and doubly ionised C would require much higher temperatures [@Marion06]. Thus, the best chance of detecting unburned material is to look for [C$\;$]{} features before and near $B$-band maximum brightness.
Until recently, C detections in SNe Ia were only noticed in a small handful of objects. Most of the SNe Ia that show obvious [C$\;$]{} absorption features are extremely luminous objects with slowly evolving light curves and exceptionally low expansion velocities. These objects are thought to arise from super-Chandrasekhar-mass WDs [@Howell06; @Yamanaka09; @Scalzo10; @Silverman11; @Taubenberger11], and thus the detection of unburned C in their spectra is likely related to the relatively rare explosion mechanism that produces these objects. In addition, there are a few instances of relatively normal SNe Ia [i.e., ones that obey the relation between light-curve decline rate and luminosity at peak brightness, known as the “Phillips relation”; @Phillips93] that also show strong [C$\;$]{} absorption [e.g., @Patat96; @Garavini05]. On the other hand, it has usually been found that C does not appear at all in early-time optical spectra of SNe Ia, or it is weak or extremely blended [e.g., @Mazzali01; @Branch03; @Stanishev07; @Thomas07].
Recently, however, C detections in SNe Ia at early times have become more common thanks to the amassing of many more spectra of SNe Ia at very early epochs, higher signal-to-noise ratio (S/N) data, and astronomers being more meticulous in their search for this elusive unburned material. @Parrent11 present new spectra of 3 SNe Ia that show distinct C absorptions and analyze those alongside 65 other objects from the literature. They find that [C$\;$]{} features are detected more often than previously thought and estimate that 30 per cent of all SNe Ia may show evidence of unburned C in their pre-maximum spectra. This work is explained by @Thomas11, who discuss observations and analyses of 5 more objects that show [C$\;$]{} features. They conclude that the C is likely distributed spherically symmetrically and that $22^{+10}_{-6}$ per cent of SNe Ia show C absorption features at epochs near 5 d before maximum brightness. Finally, @Folatelli11 use data from the Carnegie Supernova Project to determine that at least 30 per cent of objects show [C$\;$]{} absorption and that the mass of C in the ejecta is consistent with $10^{-3}$–$10^{-2}$ [M$_\odot$]{}. They also find evidence that SNe Ia with C tend to have bluer colours and lower luminosities at maximum light.
In this work, we search for possible C signatures in low-redshift ($z
< 0.1$) optical spectra of SNe Ia obtained as part of the Berkeley SN Ia Program (BSNIP). The data are presented in BSNIP I [@Silverman12:BSNIPI], and we utilise the spectral feature measurement tools described in BSNIP II [@Silverman12:BSNIPII]. With such a large, self-consistent dataset, we are able to accurately explore the incidence rate of C in SN Ia spectra and how it varies as a function of observed epoch. Furthermore, we can quantify the amount and location in the ejecta of the unburned C by measuring pseudo-equivalent widths (pEWs) and expansion velocities, respectively.
The spectral and photometric data used herein are summarised in Section \[s:data\], and our methods for determining the presence or absence of C and (for SNe Ia with definite C detection) measuring [C$\;$]{} spectral features is described in Section \[s:procedure\]. Section \[s:analysis\] presents the rate of C detection, the differences and similarities between SNe Ia with and without C, and a discussion of the [C$\;$]{} spectral feature measurements. We present our conclusions in Section \[s:conclusions\].
Dataset {#s:data}
=======
The SN Ia spectral data investigated in the current study are a subset of those used in BSNIP II and originally published in BSNIP I. The majority of the spectra were obtained using the Shane 3 m telescope at Lick Observatory with the Kast double spectrograph [@Miller93], and the typical wavelength coverage is 3300–10,400 Å with resolutions of [$\sim\!\!$ ]{}11 and [$\sim\!\!$ ]{}6 Å on the red and blue sides (crossover wavelength [$\sim\!\!$ ]{}5500 Å), respectively. For more information regarding the observations and data reduction, see BSNIP I.
In BSNIP II, we ignored [*a priori*]{} the extremely peculiar SN 2000cx [e.g., @Li01:00cx], SN 2002cx [e.g., @Li03:02cx; @Jha06:02cx], SN 2005hk [e.g., @Chornock06; @Phillips07], and SN 2008ha [e.g., @Foley09:08ha; @Valenti09]. This was mainly due to the fact that they are so spectroscopically distinct from the bulk of the SN Ia population that their spectral features are difficult to measure in the same way as for the other objects. It should be noted that @Parrent11 find that all of these objects show evidence for unburned C. However, in this work we will only concentrate on SNe Ia that follow the Phillips relation, and thus can be used as cosmological distance indicators. This means that we also remove all super-Chandrasekhar-mass SNe Ia from our sample, even though they show strong absorption from [C$\;$]{} (as mentioned above).
BSNIP II contains 432 spectra of 261 SNe Ia with ages younger than 20 d (rest frame) past maximum brightness. We began the current study by inspecting all 206 spectra (of 156 objects) younger than 5 d past maximum for possible C signatures. The oldest spectrum to show evidence of a [C$\;$]{} feature was obtained [$\sim\!\!$ ]{}3.6 d after maximum. Hence, for the rest of this study, we only consider spectra younger than this epoch. This yields a sample of 188 spectra of 144 SNe Ia, which is the largest set of SNe Ia that has ever been inspected for C features. A summary of these objects, their “Carbon Classification” (see Section \[s:procedure\]), and their spectral classifications based on various classification schemes can be found in Table \[t:objects\]. For comparison, @Parrent11 investigated 58 objects[^2] younger than 1 d past maximum brightness, @Thomas11 used 124 objects at epochs before 2.5 d past maximum, and the study by @Folatelli11 utilised 51 SNe Ia with spectra before maximum.
The spectral ages of the BSNIP data referred to throughout this work are calculated using the redshift and Julian Date of $B$-band maximum brightness presented in Table 1 of BSNIP I. Furthermore, photometric parameters (such as light-curve width and colour information) used in the present study can be found in Ganeshalingam [et al. ]{}(in preparation).
Carbon Detection and Measurement {#s:procedure}
================================
As mentioned above, [C$\;$]{} is the dominant species of C for typical SN Ia temperatures before and near maximum brightness [[$\sim\!\!$ ]{}10,000 K; e.g., @Hatano99]. This species has four major absorption lines in the optical regime: $\lambda$4267, $\lambda$4745, $\lambda$6580, and $\lambda$7234 [e.g., @Mazzali01; @Branch03]. The bluest two lines are usually overwhelmed in SN Ia spectra by broad, blended absorption from iron-group elements (IGEs), so we do not attempt to search for either of them in our data. The $\lambda$7234 line is more promising, but still not perfect since it is relatively weak, usually quite broad, and often falls close to the telluric absorption feature at 6900 Å[@Folatelli11]. When this line is detected, however, it is often only observed as an “inflection” in the spectral continuum [@Thomas11], though it becomes more obvious when the $\lambda$6580 line is easily detected [@Parrent11].
The $\lambda$6580 line is the most obvious [C$\;$]{} absorption feature in the optical [e.g., @Hatano99] and represents the best chance of making a definitive detection of unburned C in pre-maximum optical spectra of SNe Ia. However, even though it is the deepest [C$\;$]{} absorption, it is still significantly weaker than the nearby [Si$\;$]{} $\lambda$6355 line. Furthermore, [C$\;$]{} $\lambda$6580 is usually blueshifted to [$\sim\!\!$ ]{}6300 Å, which often intersects the red wing (or emission component of a P-Cygni profile) of the [Si$\;$]{} $\lambda$6355 line. Therefore, even though we concentrate mainly on searching for [C$\;$]{} $\lambda$6580, unambiguously observing this feature is still a difficult task.
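As a quick illustration of this blending, the non-relativistic Doppler relation $\lambda_\text{obs} \approx \lambda_\text{rest}(1 - v/c)$ places [C$\;$]{} $\lambda$6580 near 6300 Å for the 12,000–13,000 [km s$^{-1}$]{} expansion velocities typical of our sample; the velocities in this two-line check are assumed for illustration.

```python
c = 299_792.458  # speed of light [km/s]

for v in (12_000.0, 13_000.0):            # typical C II expansion velocities [km/s]
    lam_c = 6580.0 * (1.0 - v / c)        # blueshifted C II 6580
    lam_si = 6355.0 * (1.0 - v / c)       # blueshifted Si II 6355, for comparison
    print(f"v = {v:.0f} km/s: C II -> {lam_c:.0f} A, Si II -> {lam_si:.0f} A")
```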
The Search for Carbon
---------------------
The first step in our search for C consists of visually inspecting each of the 188 spectra in our sample. We concentrate on the region 5600–7800 Å in order to cover the spectral range around [Si$\;$]{} $\lambda$6355 and [C$\;$]{} $\lambda$6580 as well as [C$\;$]{} $\lambda$7234 and the [O$\;$]{} triplet (centered near 7770 Å). Spectra in which there is an obvious, distinct absorption feature likely associated with [C$\;$]{} $\lambda$6580 are given an ‘A’ (“Absorption”) classification. ‘A’ spectra often also show depressions or distinct absorption features associated with [C$\;$]{} $\lambda$7234. Spectra that show no distinct absorption, but rather the possibility of a depression or flattening of the red wing of the [Si$\;$]{} $\lambda$6355 feature, are classified as ‘F’ (“Flattened”). These data represent tentative C detections, and no ‘F’ spectra show obvious evidence for [C$\;$]{} $\lambda$7234 absorption. We consider any spectrum with an ‘A’ or ‘F’ classification to “have C” or be “C positive.”
‘N’ (“No C”) classifications are given to spectra where there is no evidence of C absorption features and the red half of the [Si$\;$]{} $\lambda$6355 line appears to be unaffected by any other species. Finally, spectra where no definite classification can be made (often due to low S/N; i.e., the noise fluctuations were as large as possible C absorption features) are denoted as ‘?’ (“Inconclusive”). This classification scheme is similar to those used previously [@Parrent11; @Folatelli11].
To more quantitatively determine a spectrum’s C classification, we use the spectrum-synthesis code [SYNOW]{} [@Synow]. [SYNOW]{} is a parametrised resonance-scattering code which allows for the adjustment of chemical composition, optical depths, temperatures, and velocities in order to help identify spectral features seen in SNe. We fit all spectra initially classified as ‘A’ or ‘F’ using [SYNOW]{}, both with and without [C$\;$]{}, to investigate whether the addition of [C$\;$]{} significantly improves the match between the synthetic and observed spectra. Again, this is similar to previous work [@Parrent11; @Folatelli11], though @Thomas11 use a different spectral synthesis code, [SYNAPPS]{} [@Thomas11:synapps].
After using [SYNOW]{} to fit all spectra thought to have C, it was found that the addition of [C$\;$]{} did not improve the fit to 33 spectra that were initially classified as ‘F’ (and thus they were reclassified as ‘N’). This likely implies that the initial visual inspections were perhaps a bit too “optimistic” in detecting possible C absorptions. However, all spectra initially classified as ‘A’ were confirmed to contain C in their spectra from the [SYNOW]{} fits. Examples of observed and synthetic spectra of an ‘A’ spectrum, an ‘F’ spectrum, and an ‘N’ spectrum can be found in the top, middle, and bottom panels of Figure \[f:synow\_fits\], respectively. Furthermore, Table \[t:spectra\] lists each spectrum in our sample, along with its age and carbon classification, and a summary of the number of spectra in each class is presented in Table \[t:counts\]. As mentioned above, the oldest spectrum that shows evidence for [C$\;$]{} absorption (i.e., an ‘F’ classification) is [$\sim\!\!$ ]{}3.6 d after maximum brightness. The oldest spectrum with an ‘A’ classification was obtained about 4.4 d before maximum.
![\[f:synow\_fits\] Observed and synthetic ([SYNOW]{}) spectra for an example ‘A’ spectrum (top), ‘F’ spectrum (middle), and ‘N’ spectrum (bottom); image files sn2005iq, sn2008hs, and sn2008ar.](sn2005iq){width="3.4in"}
  C Class.$^\textrm{a}$   \# SNe            \# Spectra
  ----------------------- ----------------- ------------
  A                       16$^\textrm{b}$   19
  F                       20                29
  N                       95$^\textrm{c}$   117
  ?                       13                23
  Total                   144               188
There are 34 SNe Ia in the current sample for which we inspect multiple spectra, and in many of these cases the C classifications of different spectra of the same object disagree. However, this is unsurprising since C features tend to weaken with time [e.g., @Folatelli11]. In all cases using the BSNIP data, the temporal evolution of the C classification is ‘A’$\rightarrow$‘F’$\rightarrow$‘N.’ Therefore, we classify a SN by the C classification of its earliest spectrum. Also, since any spectrum classified as ‘?’ does not indicate whether C is present, we ignore any ‘?’ spectra when determining the C classification of a given SN.
When comparing the BSNIP data to previous studies, 33 SNe Ia have been classified in earlier work [@Parrent11; @Thomas11; @Folatelli11] and of these, our classifications agree for 23 objects. For most of the objects where the C classification differs, the BSNIP spectra are from earlier epochs or have higher S/N and show stronger evidence for C than previously thought; for these objects we retain the C classification determined from the BSNIP data. However, there are two objects we classify as ‘?’ that were previously classified as ‘N’ using higher S/N data from earlier epochs, as well as two objects that we classify as ‘N’ that were previously classified as ‘A’ using spectra from earlier epochs. In these four cases we adopt the classification from the literature in lieu of the one determined from our own data. These reclassifications are reflected throughout this work. We note here that if we had access to more high-quality data or more spectra at younger epochs, we might classify even more objects as C positive. Therefore, the incidence rates of C in SNe Ia that we calculate herein should be considered a lower limit.
The final C classification for each SN Ia in our sample (including the above reclassifications) can be found in Table \[t:objects\], and a summary of the number of objects in each class is presented in Table \[t:counts\]. In the BSNIP data, [$\sim\!\!$ ]{}11 per cent of the SNe Ia show definite C absorption features (‘A’), while an additional [$\sim\!\!$ ]{}14 per cent show some evidence for C in their spectra (‘F’), for a total of [$\sim\!\!$ ]{}25 per cent ‘A’ + ‘F.’ This is consistent with previous studies that find [$\sim\!\!$ ]{}20–33 per cent of SNe Ia show evidence for C [@Parrent11; @Thomas11; @Folatelli11].
Measuring the Carbon {#ss:measure}
--------------------
Figure \[f:carbons\] shows the region near [C$\;$]{} $\lambda$6580 and [Si$\;$]{} $\lambda$6355 of all 19 spectra in our sample that are classified as ‘A.’ The wavelengths corresponding to [C$\;$]{} $\lambda$6580 with expansion velocities of 10,500–14,000 [km s$^{-1}$]{} (i.e., the range of velocities observed in this work) are highlighted. For each of these spectra, we determine the pEW and expansion velocity of the [C$\;$]{} $\lambda$6580 feature, and we attempt to measure these parameters for the [C$\;$]{} $\lambda$7234 feature as well. The algorithm used to measure the [C$\;$]{} $\lambda$6580 absorption is described in detail in BSNIP II, but here we give a brief summary of the procedure.
![\[f:carbons\] The region near [C$\;$]{} $\lambda$6580 and [Si$\;$]{} $\lambda$6355 for all 19 spectra classified as ‘A’; image files all\_a0, all\_a1, and all\_a2.](all_a0){width="3.35in"}
Each spectrum first has its host-galaxy recession velocity removed and is corrected for Galactic reddening (according to the values presented in Table 1 of BSNIP I), and then is smoothed using a Savitzky-Golay smoothing filter [@Savitzky64]. We attempt to define a pseudo-continuum for each spectral feature. This is done by determining where the local slope changes sign on either side of the feature’s minimum. Quadratic functions are fit to each of these endpoints, and the peaks of the parabolas (assuming that they are both concave downward) are used as the endpoints of the feature; they are then connected with a line to define the pseudo-continuum. This defines the pEW [e.g., @Garavini07]. Once a pseudo-continuum is calculated, a cubic spline is fit to the smoothed data between the endpoints of the spectral feature. The expansion velocity is calculated from the wavelength at which the spline fit reaches its minimum. Every fit is visually inspected, and the fits to all of the 19 ‘A’ spectra are found to be acceptable.
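A minimal sketch of this measurement is given below, assuming a spectrum already corrected for redshift and Galactic reddening, feature endpoints supplied by hand rather than located automatically from the local slope, and a simple non-relativistic Doppler conversion to velocity; it is a simplified stand-in for, not a copy of, the BSNIP II code.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import CubicSpline

C_KMS = 299_792.458  # speed of light [km/s]

def measure_feature(wave, flux, blue_edge, red_edge, rest_wave,
                    window=21, polyorder=3):
    """Pseudo-EW [Angstrom] and expansion velocity [km/s] of an absorption
    feature between two hand-picked pseudo-continuum endpoints.
    Assumes `wave` is strictly increasing."""
    smooth = savgol_filter(flux, window, polyorder)      # Savitzky-Golay smoothing
    sel = (wave >= blue_edge) & (wave <= red_edge)
    w, f = wave[sel], smooth[sel]

    # Linear pseudo-continuum between the two endpoints.
    cont = np.interp(w, [w[0], w[-1]], [f[0], f[-1]])

    # pEW = integral of (1 - f/cont) across the feature (trapezoidal rule).
    depth = 1.0 - f / cont
    pew = np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(w))

    # Velocity from the minimum of a cubic-spline fit to the smoothed feature.
    spline = CubicSpline(w, f)
    fine = np.linspace(w[0], w[-1], 2000)
    w_min = fine[np.argmin(spline(fine))]
    velocity = C_KMS * (rest_wave - w_min) / rest_wave   # blueshift -> positive v

    return pew, velocity

# Hypothetical usage on arrays (wave, flux) around C II 6580:
# pew, v = measure_feature(wave, flux, 6200.0, 6400.0, 6580.0)
```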
We attempt to use the same procedure for the [C$\;$]{} $\lambda$7234 absorption feature, but due to its relative shallowness and the difficulty in automatically defining the endpoints, our fitting routine fails on nearly all of the ‘A’ spectra. Therefore, a manual version of the algorithm is used (i.e., the endpoints are defined by hand). Even so, the $\lambda$7234 absorption cannot be accurately measured in 7 of 19 ‘A’ spectra. Also, as a sanity check, this manual version of the fitting procedure is used to measure the $\lambda$6580 feature in all of the ‘A’ spectra and the results are compared to those of the more robust, automated fitting method described above. All values of pEW and velocity from these two methods are consistent within the measured uncertainties, adding credibility to the values measured for the $\lambda$7234 feature. Note that throughout the analysis presented here we use only pEW and velocity measurements for the $\lambda$6580 feature as determined by our automated fitting routine. The results of these measurements, for both [C$\;$]{} features inspected, can be found in Table \[t:vels\].
Analysis {#s:analysis}
========
When is Carbon Detectable? {#ss:time}
--------------------------
As stated above, the oldest ‘A’ spectrum in the BSNIP sample is from 4.4 d [*before*]{} maximum brightness and the oldest ‘F’ spectrum was obtained 3.6 d [*after*]{} maximum. This matches well with previous studies, which found that ‘A’ spectra are found at ages less than 3 d before maximum [@Parrent11; @Thomas11; @Folatelli11]. Furthermore, all three of the earlier investigations only use pre-maximum spectra (for SNe Ia that are not possible super-Chandrasekhar-mass or SN 2002cx-like objects), and like the current study they find ‘F’ spectra near maximum brightness.
The top panel of Figure \[f:fractions\] shows the fraction of spectra with C (just ‘A,’ and the sum of ‘A’ and ‘F’) as a function of time. The horizontal error bars represent the width of each bin (i.e., 2 d) and the vertical error bars represent the range of fractions if one SN with C is added to or subtracted from that bin. The fraction of ‘A’ spectra and ‘A’+‘F’ spectra both start at 60 per cent at 12 d before maximum brightness. The fraction of ‘A’ spectra decreases monotonically with time, while the fraction of ‘A’+‘F’ spectra generally (but not always) decreases with time as well.
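As a concrete illustration of how the binned fractions and their error bars in the top panel of Figure \[f:fractions\] can be constructed, the following sketch (with illustrative variable names, not the actual analysis code) bins the spectra in 2 d intervals and assigns vertical error bars spanning the fractions obtained if one C-positive SN is added to or removed from each bin.

```python
# Sketch of the binned carbon fractions shown in Figure [f:fractions].
# 'epochs' are spectrum ages relative to B maximum (days) and 'labels' are the
# carbon classifications ('A', 'F', 'N', '?'); both are illustrative inputs.
import numpy as np

def binned_fraction(epochs, labels, positive=('A',), bin_width=2.0):
    epochs = np.asarray(epochs, dtype=float)
    is_pos = np.isin(np.asarray(labels), positive)
    edges = np.arange(epochs.min(), epochs.max() + bin_width, bin_width)
    centers, fracs, err = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (epochs >= lo) & (epochs < hi)
        n = in_bin.sum()
        if n == 0:
            continue
        k = (in_bin & is_pos).sum()
        f = k / n
        centers.append(0.5 * (lo + hi))
        fracs.append(f)
        # vertical error bar: range of fractions if one SN with C is added/removed
        err.append((f - max(k - 1, 0) / n, min(k + 1, n) / n - f))
    return np.array(centers), np.array(fracs), np.array(err)

# e.g. binned_fraction(epochs, labels, positive=('A', 'F')) for the 'A'+'F' curve
```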
The small rise in the fraction of ‘A’+‘F’ spectra from $t = -12$ d to $t = -10$ d appears to be insignificant. However, the rise in the fraction of ‘A’+‘F’ spectra for $-8\textrm{ d} \leq t \leq -4$ d does not appear to be consistent with a monotonically decreasing trend. It is unclear what might cause this small spike in the fraction of spectra with C at these epochs. This behaviour is broadly consistent with what was seen by @Folatelli11 in their Figure 11, though not identical: their fractions of ‘A’ and ‘A’+‘F’ both monotonically decrease with time, and they detect no ‘F’ spectra older than 2 d before maximum. Using the BSNIP data, 10–20 per cent of the spectra with $-2\textrm{~d} < t < 5$ d are classified as ‘F.’
$
\begin{array}{c}
\includegraphics[width=3.4in]{fraction_t} \\
\includegraphics[width=3.4in]{fraction_t_cum} \\
\end{array}$
The [*cumulative*]{} fraction of spectra with C (both ‘A’ and ‘A’+‘F’) is shown as a function of time in the bottom panel of Figure \[f:fractions\]. Each point represents the fraction of spectra with C from epochs in that bin or younger. The symbols and error bars have the same meanings as in the top panel. Once again there is a monotonic decrease in the fraction of ‘A’ spectra with time, in addition to mostly decreasing fractions of C-positive spectra (i.e., ‘A’+‘F’) with time. By $t \approx 5$ d, which corresponds to the age bin that includes our oldest ‘F’ spectrum, [$\sim\!\!$ ]{}12 per cent of the spectra in the BSNIP sample show definitive C signatures (i.e., ‘A’) while [$\sim\!\!$ ]{}29 per cent of them show at least possible evidence for C (i.e., ‘A’+‘F’).
It seems that if one wants to detect C in an optical spectrum of a SN Ia that follows the Phillips relation, a relatively high-quality spectrum must be obtained at an epoch younger than [$\sim\!\!$ ]{}4 d past maximum brightness. However, observations at the end of this epoch range will yield only a possible C signature (i.e., an ‘F’ spectrum) and will occur $<$10 per cent of the time. At $t \approx -4$ d, the chance of detecting C goes up significantly. For the BSNIP data there is about a 50 per cent chance of obtaining an ‘A’ or ‘F’ spectrum at this epoch, though the probability of obtaining an ‘A’ spectrum is still only [$\sim\!\!$ ]{}8 per cent. Finally, it appears that obtaining spectra at $t \la
-5$ d yields a relatively good chance of showing some sign of C (just over 50 per cent) and a better than one-third chance of yielding an ‘A’ spectrum.
Carbon and Various Classification Schemes {#ss:classification}
-----------------------------------------
In Table \[t:objects\], the “Carbon Classification” for each object studied here is listed, along with its spectral classification based on various other classification methods. The “SNID type” of each SN is taken from BSNIP I. The SuperNova IDentification code [SNID; @Blondin07], as implemented in BSNIP I, was used to determine the spectroscopic subtype of each SN in the BSNIP sample. SNID compares an input spectrum to a library of spectral templates in order to determine the most likely spectroscopic subtype. Spectroscopically normal objects are objects classified as “Ia-norm” by SNID.
The spectroscopically peculiar SNID subtypes used here include the often underluminous SN 1991bg-like objects [“Ia-91bg,” e.g., @Filippenko92:91bg; @Leibundgut93], and the often overluminous SN 1991T-like objects [“Ia-91T,” e.g., @Filippenko92:91T; @Phillips92] and SN 1999aa-like objects [“Ia-99aa,” @Li01:pec; @Strolger02; @Garavini04]. See BSNIP I for more information regarding our implementation of SNID and the various spectroscopic subtype classifications. If an object has a SNID type of simply “Ia,” it means that no definitive subtype could be determined.
According to Table \[t:objects\], all ‘A’ objects are Ia-norm, with the exception of one “Ia.” All ‘F’ objects are also Ia-norm, except one each of Ia-99aa, Ia-91bg, and Ia. On the other hand, all SNID types are well represented (relative to their overall incidence rate) in the ‘N’ objects. However, due to the relative rarity of the spectroscopically peculiar subtypes, one would expect only 1–2 of each of the non-Ia-norm subtypes in a sample of 19 objects [@Ganeshalingam10:phot_paper; @Li11a]. Thus, it seems that the SNe Ia showing evidence for C in their spectra are spectroscopically normal objects when examining their entire optical spectrum (as SNID does). Note, though, that there [*could*]{} exist very rare cases of spectroscopically peculiar SNe Ia that follow the Phillips relation and that show C features.
The fourth column of Table \[t:objects\] presents the “Benetti type” of each object, which is based on the velocity gradient of the [Si$\;$]{} $\lambda$6355 feature [@Benetti05]. The high velocity gradient (HVG) group has the largest velocity gradients while the low velocity gradient (LVG) group has the smallest velocity gradients. The third subclass (FAINT) has the lowest expansion velocities, yet moderately large velocity gradients, and consists of subluminous SNe Ia with the narrowest light curves. As mentioned in BSNIP II, from which these classifications are taken, the BSNIP data are not well suited to velocity-gradient measurements since the average number of spectra per object is [$\sim\!\!$ ]{}2 (see BSNIP I). However, we are still able to calculate the velocity gradient for a subset of our data.
Of the SNe Ia with C and a known Benetti type, four are LVG (three of which are ‘A’) and four are HVG (two of which are ‘A’). We also find that the actual values of the velocity gradient itself are similar for objects with and without C. @Parrent11 find that LVG objects have a greater chance of showing C as compared to HVG objects and that no HVG SNe show a definitive C signature. They point out that this could be an observational bias since HVG objects tend to have higher [Si$\;$]{} velocities [e.g., @Hachinger06; @Wang09], increasing the amount of blending between [Si$\;$]{} $\lambda$6355 and [C$\;$]{} $\lambda$6580 and thus making it more difficult to detect C. In BSNIP II we show that the one-to-one association between HVG and high expansion velocities is not as clear as has been assumed previously, which could explain how we are able to detect C in some HVG objects. However, this possible connection between velocity gradient and incidence of C should be explored further in the future with datasets that are more suited than BSNIP to velocity-gradient calculations.
The “Branch type” referred to in Table \[t:objects\] uses pEWs of [Si$\;$]{} $\lambda$6355 and [Si$\;$]{} $\lambda$5972 measured near maximum light to classify SNe Ia [@Branch06]. The four groups they define based on these two pEW values are core normal (CN), broad line (BL), cool (CL), and shallow silicon (SS). However, they point out that SNe seem to have a continuous distribution of pEW values; hence, how the exact boundaries are defined is not critical. The Branch-type classifications used here can be found in BSNIP II. The majority (63 per cent) of CN objects show evidence for C, while only 16 per cent of BL objects have C in their spectra. This is consistent with the idea mentioned above that it is harder to distinguish C absorption in SNe Ia having high expansion velocities such as BL objects [@Parrent11; @Folatelli11].
Furthermore, only 25 per cent and 18 per cent of SS and CL objects show evidence for C, respectively, which has been noticed in earlier work [@Parrent11; @Folatelli11]. This has been interpreted as evidence that the presence or absence of C depends on the effective temperature [@Parrent11], since the effective temperature is directly related to the relative strengths of the [Si$\;$]{} $\lambda$6355 and [Si$\;$]{} $\lambda$5972 features [@Nugent95]. The prevalence of CN objects with C, as compared to other Branch types, is unsurprising given what was found above when discussing SNID types. In BSNIP II it was shown that SNID types are often equivalent to the more extreme objects in each of the non-CN Branch types. Therefore, since nearly all SNe Ia with C are Ia-norm, it stands to reason that most of them should also be CN.
The last column of Table \[t:objects\] lists the “Wang type” of each object, taken from BSNIP II, which is determined from the [Si$\;$]{} $\lambda$6355 velocity near maximum brightness [@Wang09]. SNe Ia which are classified as Ia-norm by SNID and have high velocities near maximum ($\ga$11,800 [km s$^{-1}$]{}) are considered to be high-velocity (HV) objects. Ia-norm with velocities less than this cutoff are classified as normal (N).
No HV objects are classified as ‘A’ and only two are classified as ‘F,’ while 21 HV objects show no evidence of C. Nearly one-third of normal-velocity objects, however, have C. This is consistent with the relative lack of BL objects that show C, since both BL and HV SNe have large expansion velocities and this makes C detection difficult [@Parrent11; @Folatelli11].
Similarities (and Differences) Between Objects With and Without Carbon {#ss:comparison}
----------------------------------------------------------------------
As mentioned above, the Phillips relation correlates the peak luminosity of a SN Ia to its light-curve decline rate [@Phillips93]. One way to parametrise this decline rate is to calculate the difference in magnitudes between maximum and fifteen days past maximum in the $B$ band, referred to as $\Delta m_{15}(B)$. The BSNIP sample has 196 objects for which we calculate $\Delta m_{15}(B)$ and of those, 28 show C and 67 show no C. A histogram of $\Delta m_{15}(B)$ values can be found in the top panel of Figure \[f:width\_hist\]. The average $\Delta m_{15}(B)$ values for each of the three samples shown in the figure (all of BSNIP, with C, and without C) are consistent with each other. This was hinted at by @Folatelli11, though they admit that they have too few objects to make any robust statistical statement. There also appears to be no difference in $\Delta m_{15}(B)$ values when the “with C” sample is subdivided into ‘A’ and ‘F’ objects.
$
\begin{array}{c}
\includegraphics[width=3.4in]{dm15_hist} \\
\includegraphics[width=3.4in]{x1_hist} \\
\end{array}$
We also use an alternative parametrisation of the light-curve width: the $x_1$ parameter from SALT2 [@Guy07]. The sense of $x_1$ is opposite to that of $\Delta m_{15}(B)$: underluminous, narrow, fast-evolving light curves have large values of $\Delta m_{15}(B)$ but small values of $x_1$. There are 335 SNe Ia in BSNIP that have a SALT2 fit, and 30 (83) of them exhibit (do not exhibit) C. A histogram of $x_1$ values is shown in the bottom panel of Figure \[f:width\_hist\].
Like $\Delta m_{15}(B)$, the average $x_1$ values for each sample are consistent with one another. This is at odds with the finding of @Thomas11 that SNe Ia with C have lower $x_1$ values when compared to SNe without C. Their C-positive objects are mostly clustered near $x_1 \approx -2$, while our SNe Ia with C have a wide range of $x_1$ values, with the average being $x_1 \approx -0.74$ and the peak of the distribution occurring at $x_1 \approx 0$. However, the [*overall*]{} distributions of $x_1$ values are different between @Thomas11 and the current study; there are a significant number of SNe Ia with $x_1 < -2.5$ in the BSNIP sample while there are none shown by @Thomas11.

The colours of SNe Ia that show or do not show C signatures can also be investigated. One way to quantify the colour of a SN Ia is to measure the difference between its $B$-band magnitude and $V$-band magnitude at the time of $B$-band maximum brightness (referred to in this work as $B_{\rm max}-V_{\rm max}$). The BSNIP data contain 190 objects for which $B_{\rm max}-V_{\rm max}$ is measured, 28 of which have C and 67 of which do not. While there is no significant difference in $B_{\rm max}-V_{\rm max}$ when the “with C” sample is subdivided into ‘A’ and ‘F’ objects, there [*is*]{} a difference between the “with C” objects and the “without C” objects: SNe Ia with C are bluer. A Kolmogorov-Smirnov (KS) test on these two samples indicates that they likely come from different parent populations ($p \approx 0.07$). The top panel of Figure \[f:bv\] shows a histogram of $B_{\rm max}-V_{\rm max}$ values for the entire BSNIP dataset, objects with C, and those without. The bottom panel shows the cumulative distribution function of these three samples.
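The colour comparison above reduces to a standard two-sample test; the following sketch shows how the quoted KS probability can be computed. The array names and the toy numbers are ours, not the BSNIP measurements.

```python
# Two-sample KS test between the B_max - V_max colours of SNe Ia with and
# without C, as described above. The demo arrays below are made-up numbers;
# the real samples contain 28 and 67 objects, respectively.
import numpy as np
from scipy.stats import ks_2samp

def compare_colours(bv_with_c, bv_without_c):
    stat, p_value = ks_2samp(bv_with_c, bv_without_c)
    return stat, p_value

rng = np.random.default_rng(0)
stat, p = compare_colours(rng.normal(-0.05, 0.08, 28),   # bluer, with C
                          rng.normal(0.02, 0.10, 67))    # redder, without C
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
```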
$
\begin{array}{c}
\includegraphics[width=3.4in]{bv_hist} \\
\includegraphics[width=3.4in]{bv_cdf} \\
\end{array}$
This trend can also be seen if one parametrises SN colour by the SALT2 $c$ parameter [@Guy07]. Of the 335 objects in BSNIP with good SALT2 fits, 30 have C and 83 do not. As with the $B_{\rm max}-V_{\rm max}$ values, C-positive objects appear to have bluer colours than C-negative ones (a KS test yields $p \approx 0.01$), while no significant colour difference is found between ‘A’ objects and ‘F’ objects. Figure \[f:c\] presents the histogram (top panel) and the cumulative distribution function (bottom panel) of the SALT2 $c$ values for each of the three samples.
$
\begin{array}{c}
\includegraphics[width=3.4in]{c_hist} \\
\includegraphics[width=3.4in]{c_cdf} \\
\end{array}$
Yet another way to quantify the colour of a SN Ia is to calculate synthetic photometric colours from a spectrum. In BSNIP I it was shown that the relative spectrophotometry of our data, when compared to the actual light curves, is accurate to $\le 0.07$ mag across the entire spectrum. Therefore, synthetic colours derived from BSNIP spectra should be photometrically accurate to about this level. In order to determine the synthetic colours of our spectra in this work, we follow the procedure from BSNIP I. Simply stated, we convolve each spectrum with the @Bessell90 filter functions, which have approximate wavelength ranges of 3400–4100, 3700–5500, 4800–6900, 5600–8500, and 7100–9100 Å for $U$, $B$, $V$, $R$, and $I$, respectively. We then calculate the $U-B$, $B-V$, $V-R$, and $R-I$ colours. Most of the spectra studied herein fully cover the $B$, $V$, $R$, and $I$ bands and about half cover the $U$ band as well. Using synthetic colours calculated from our spectra, objects with C signatures once again have significantly bluer $U-B$ and $B-V$ colours at all epochs. On the other hand, for all other colours calculated, the data are consistent with C-positive and C-negative objects having similar colours.
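For concreteness, the synthetic-colour calculation can be sketched as below. This is a simplified illustration rather than the BSNIP I procedure: it assumes the Bessell (1990) transmission curves are available as wavelength/throughput arrays, and the photometric zero point (which would normally be calibrated against a standard spectrum) is left as an input.

```python
# Simplified sketch of synthetic colours from a flux-calibrated spectrum.
# 'band' arguments are (filter_wavelength, filter_transmission) arrays, e.g.
# digitised Bessell (1990) curves; the zero point is a placeholder input that
# would normally be set with a standard (e.g., Vega) spectrum.
import numpy as np

def band_flux(wave, flux, filt_wave, filt_trans):
    """Filter-weighted mean flux through one bandpass (photon-counting weights)."""
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    return np.trapz(wave * trans * flux, wave) / np.trapz(wave * trans, wave)

def synthetic_colour(wave, flux, band_1, band_2, zero_point=0.0):
    """Synthetic m1 - m2 colour, e.g. B - V, up to the supplied zero point."""
    f1 = band_flux(wave, flux, *band_1)
    f2 = band_flux(wave, flux, *band_2)
    return -2.5 * np.log10(f1 / f2) + zero_point
```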
The result found here that SNe Ia with evidence for C tend to have bluer optical/near-ultraviolet (NUV) colours confirms the work of previous groups [@Thomas11; @Folatelli11]. A relationship between colour and light-curve width was shown by @Folatelli11 to be steeper for objects with C as compared to objects without C when only SNe Ia with little intrinsic reddening were considered. The BSNIP data show no significant evidence of different light-curve width versus colour relationships for objects with or without C, whether we use all objects or just those that are unreddened.
@Milne10, @Thomas11, and @Milne12 present [*Swift*]{}/UVOT [@Gehrels04; @Roming05] photometry of a handful of SNe Ia. In all of these works it is shown that the objects that are relatively bright in the NUV (“NUV blue”; i.e., those with the largest NUV excesses) also show strong evidence for C absorption. However, they note that SN 2005cf, an object which is clearly C positive, has completely normal colours in the [*Swift*]{}/UVOT data. All of the NUV-blue objects presented by @Thomas11 and @Milne12 that are also studied in this work are found to exhibit C features, and we find no evidence for C in three SNe Ia that are NUV-red (according to the [*Swift*]{}/UVOT data). However, two objects in BSNIP that are NUV-red appear to have C signatures in their spectra: SN 2005cf (as mentioned above) and SN 2007cq (the [*Swift*]{}/UVOT data indicate that it is NUV-red, even though it has quite blue optical colours: its $B_{\rm max}-V_{\rm max}$ and SALT2 $c$ values are low, 0.004 mag and 0.024, respectively).
In summary, SNe Ia which show evidence for C in their pre-maximum spectra have bluer colours in the optical bands at all pre-maximum epochs and are [*almost*]{} always found to be NUV-blue in space-based UV/optical photometry. On the other hand, all SNe Ia which are found to be NUV-blue in the [*Swift*]{}/UVOT data are C-positive objects. Finally, SNe Ia which show no evidence for C always have redder colours in both the optical and NUV.
[C$\;$]{} Velocities {#ss:vel}
--------------------
As described in Section \[ss:measure\], we measure the expansion velocity and pEW of the [C$\;$]{} $\lambda$6580 line in all 19 spectra classified as ‘A.’ We also measure the velocity and pEW of the [C$\;$]{} $\lambda$7234 line in 12 of these spectra. The temporal evolution of the expansion velocities for both features can be found in Figure \[f:c\_vel\]. The different shapes correspond to different SNe, and for the two objects which have multiple velocity measurements (SN 1994D and SN 2008s1[^3]), their velocities are connected with a solid line.
$
\begin{array}{c}
\includegraphics[width=3.4in]{v_c6580_t} \\
\includegraphics[width=3.4in]{v_c7234_t} \\
\end{array}$
The range of velocities spanned by both features is relatively small. The [C$\;$]{} $\lambda$6580 line mostly has velocities around 12,000–13,000 [km s$^{-1}$]{}, while the [C$\;$]{} $\lambda$7234 line is mainly found between 9500 [km s$^{-1}$]{} and 11,500 [km s$^{-1}$]{}. However, this is a somewhat larger range of $\lambda$6580 velocities than what has been seen in previous work [@Folatelli11]. The largest [C$\;$]{} velocity observed in the BSNIP data is [$\sim\!\!$ ]{}14,000 [km s$^{-1}$]{}, which may be caused more by an observational bias than a real, physical limit. Both @Parrent11 and @Folatelli11 discuss the difficulty of measuring [C$\;$]{} $\lambda$6580 with $v \ga 15,000$ [km s$^{-1}$]{} due to the fact that at these high velocities the feature becomes strongly blended with [Si$\;$]{} $\lambda$6355.
As seen in Figure \[f:c\_vel\], the typical [C$\;$]{} velocities of all objects at a given epoch decrease with time [as has been seen before; @Folatelli11], though there is a dramatic [*increase*]{} in velocity between the first and second epochs of SN 1994D. This has not been seen in previous work, likely due to the fact that our data were obtained at earlier epochs. Furthermore, @Thomas11 measure all [C$\;$]{} $\lambda$6580 velocities to be [$\sim\!\!$ ]{}12,000 [km s$^{-1}$]{}, and they see very little change with time.
The difference in the typical velocities of the two [C$\;$]{} features implies that the $\lambda$7234 line is [$\sim\!\!$ ]{}2000 [km s$^{-1}$]{} slower than the $\lambda$6580 line. There has been little attempt previously to determine the velocity of [C$\;$]{} $\lambda$7234 due to its relative weakness, but @Thomas11 do mention possible detections of this absorption at velocities somewhat lower than those of [C$\;$]{} $\lambda$6580 (consistent with what is found here).
The [Si$\;$]{} $\lambda$6355 velocities (as measured in BSNIP II) of the objects with and without C can also be compared and are shown in the top panel of Figure \[f:si\_vel\]. Red circles denote SNe Ia with C, while blue squares denote those without C. The grey shaded area represents the 1$\sigma$ region around the average [Si$\;$]{} $\lambda$6355 velocity from the entire BSNIP sample. Individual objects with multiple velocity measurements are connected with a solid line. The panel includes all 131 objects in this work with a definitive C classification (‘A,’ ‘F,’ or ‘N’).
$
\begin{array}{c}
\includegraphics[width=3.4in]{vsi_t} \\
\includegraphics[width=3.4in]{vc_vsi} \\
\end{array}$
According to the figure, SNe Ia with C tend to have lower [Si$\;$]{} $\lambda$6355 velocities. Nearly all of the C-positive spectra have [Si$\;$]{} velocities that are at or below average, while the objects without C span the entire range of [Si$\;$]{} velocities from below average to well above average. This lack of HV objects that show C was mentioned above, and is likely in part due to the difficulty in detecting and measuring [C$\;$]{} $\lambda$6580 at large expansion velocities. In fact, @Folatelli11 found that for all C-positive objects in their sample, the [Si$\;$]{} $\lambda$6355 velocities were $<$12,500 [km s$^{-1}$]{}. We find six spectra of C-positive SNe Ia that have [Si$\;$]{} velocities above this value, and three of them are from epochs earlier than the earliest ones studied by @Folatelli11, when one expects even larger expansion velocities for all elements. Thus, our findings appear to be consistent with those of previous work that the [Si$\;$]{} $\lambda$6355 velocities of C-positive SNe Ia are significantly [ *lower*]{} than average.
The typical [Si$\;$]{} $\lambda$6355 velocities for the C-positive objects are 10,000–12,000 [km s$^{-1}$]{}, very close to the typical [C$\;$]{} $\lambda$7234 velocities. However, this is 1000–2000 [km s$^{-1}$]{} lower than the typical [C$\;$]{} $\lambda$6580 velocities, as was also found previously [@Folatelli11]. The bottom panel of Figure \[f:si\_vel\] shows the ratio of the [C$\;$]{} $\lambda$6580 velocity to the [Si$\;$]{} $\lambda$6355 velocity for all 19 ‘A’ spectra. The plot symbols are the same as in Figure \[f:c\_vel\], and again the two objects having multiple velocity measurements are connected with a solid line. The dashed line is the median ratio ([$\sim\!\!$ ]{}1.05) and the dotted lines are the median $\pm 10$ per cent.
The ratio of these two velocities is remarkably constant, especially for $t > -10$ d. A similar trend was found by @Parrent11, though their average ratio was slightly larger than ours (1.1). For a given object, the ratio may increase somewhat with time, but with only two objects with multiple velocity measurements in our sample it is difficult to make any definitive statement about this. However, the data presented by @Parrent11 seem to support this conclusion as well. Furthermore, we note that the [C$\;$]{} velocities are [ *usually*]{} similar to or larger than the [Si$\;$]{} velocities, which supports the idea of the layered structure of SN Ia ejecta with some additional mixing. The standard layering picture includes unburned C in layers that are further out (i.e., faster expanding) than the layers containing newly synthesised Si. However, some degree of mixing between these layers is required in order to reproduce the observed overlap in velocity space of the [C$\;$]{} and [Si$\;$]{} features.
For the spectra with $t < -10$ d, the two outliers on the low end (the first epoch of SN 1994D and SN 2005cf) both have relatively low [C$\;$]{} $\lambda$6580 velocities [*and*]{} higher than average [Si$\;$]{} $\lambda$6355 velocities (leading to a small ratio). @Parrent11 found no normal SNe Ia with a ratio much less than 1, but their ratio for SN 1994D at $t \approx -11$ d is [$\sim\!\!$ ]{}1, which matches very well our second epoch of SN 1994D. Thus, these abnormally low velocity ratios at early epochs may be real and should be investigated further in the future with more early-time spectra. SN 1998dm appears to be somewhat of an outlier ($<$10 per cent above the median ratio) at the high end at early times; it has a higher than normal [C$\;$]{} $\lambda$6580 velocity with a relatively normal [Si$\;$]{} $\lambda$6355 velocity (yielding a larger ratio).
Comparing C features to O features (specifically, the [O$\;$]{} triplet centered near $\lambda$7773) may also be interesting since O is found in SN Ia ejecta as unburned fuel and a product of C burning. While the discussion of how to distinguish between O that is fuel and O that is ash is beyond the scope of this paper, we nonetheless compare our [C$\;$]{} measurements to [O$\;$]{} triplet measurements taken from BSNIP II. However, we note that the [O$\;$]{} triplet is notoriously difficult to measure accurately due to the fact that it is highly contaminated by telluric absorption features. Moreover, the [O$\;$]{} triplet is quite weak at the early epochs studied herein.
Those caveats notwithstanding, we find that objects with and without evidence for [C$\;$]{} have similar [O$\;$]{} triplet velocities. Furthermore, the [O$\;$]{} velocities of C-positive objects closely follow the average [O$\;$]{} velocities of the entire BSNIP sample. Finally, there are two spectra (of two objects) for which we measure velocities of [*both*]{} [C$\;$]{} $\lambda$6580 and the [O$\;$]{} triplet, and we find that the velocities of these two features are effectively equal to each other in a given spectrum.
We also investigated possible correlations between [C$\;$]{} velocities and photometric parameters. No significant correlations were found between [C$\;$]{} velocity and light-curve width (parametrised by $\Delta m_{15}(B)$ or SALT2 $x_1$) or SN colour (parametrised by $B_{\rm max}-V_{\rm max}$ or SALT2 $c$).
[C$\;$]{} pEWs {#ss:ew}
--------------
The temporal evolution of the pEWs for both features can be found in Figure \[f:c\_ew\]. The plot symbols are the same as in Figure \[f:c\_vel\], and the two objects which have multiple pEW measurements are connected with a solid line. The [C$\;$]{} $\lambda$6580 feature has pEWs which are all $<$3.5 Å, consistent with what has been seen previously [@Folatelli11]. The [C$\;$]{} $\lambda$7234 feature, on the other hand, has not been measured before, and we find a range of pEW values of [$\sim\!\!$ ]{}4–11 Å.
$
\begin{array}{c}
\includegraphics[width=3.4in]{ew_c6580_t} \\
\includegraphics[width=3.4in]{ew_c7234_t} \\
\end{array}$
As seen in Section \[ss:time\], the probability of detecting C decreases with time, mostly due to a weakening of the [C$\;$]{} absorption features. Figure \[f:c\_ew\] shows evidence to support the idea that, for the most part, the pEWs of the two [C$\;$]{} features decrease with time. However, we must point out that there are only two objects with multiple pEW measurements. Interestingly, there appears to be an increase in the pEW for one of these objects (SN 1994D) at the earliest epochs ($-13\textrm{ d} \la t \la -11$ d). While the increase is marginal and consistent with no change in pEW for the [C$\;$]{} $\lambda$7234 feature, the pEW increase is quite significant for the [C$\;$]{} $\lambda$6580 feature (this can be seen visually in the top-left panel of Figure \[f:carbons\]).
It was suggested by @Folatelli11 that one might observe such an increase in pEW between 13 and 11 d before maximum brightness, but their data did not extend to sufficiently early epochs to investigate this further. The expected increase in pEW was based on synthetic spectra created using a Monte Carlo code [@Mazzali93; @Lucy99; @Mazzali00] as implemented in the analysis of SN 2003du [@Tanaka11]. @Folatelli11 find that the synthetic spectra at these early epochs show a strong, red emission component of the [Si$\;$]{} $\lambda$6355 feature which tends to “fill in” some of the [C$\;$]{} $\lambda$6580 absorption, thus leading to a low pEW measurement for C. Furthermore, they point out that at these epochs some C is below the photosphere (leading to a lower measured pEW), and that the large expansion velocities at these times make [C$\;$]{} $\lambda$6580 become increasingly blended with [Si$\;$]{} $\lambda$6355 (again leading to difficulty in measuring pEWs of [C$\;$]{}).
The model used for SN 2003du by @Tanaka11 required $6.8 \times
10^{-3}$ [M$_\odot$]{} of C (mass fraction $X\left(C\right) = 0.002$) in the velocity range $10,500 < v < 15,000$ [km s$^{-1}$]{}, and the pEWs measured from these synthetic spectra are plotted as open red squares in Figure 9 of @Folatelli11. This velocity range is consistent with the velocities we find for the [C$\;$]{} $\lambda$6580 feature; more impressively, the theoretical pEW values shown by @Folatelli11 are excellent matches to the pEWs we measure for SN 1994D. Furthermore, @Folatelli11 discuss two other models where the amount of C is increased and decreased by a factor of four from the SN 2003du value. They state that this range of C mass ($1.7 \times 10^{-3}$ – $2.7 \times 10^{-2}$ [M$_\odot$]{}) includes objects where no C is detected as well as objects that have the largest [C$\;$]{} $\lambda$6580 pEWs (at [$\sim\!\!$ ]{}1 week before maximum light). The pEWs measured from the BSNIP data span a similar range of values as the data studied by @Folatelli11, and so we find that this mass range for C also encompasses all of our data.
We plot the temporal evolution of the [Si$\;$]{} $\lambda$6355 pEWs (as measured in BSNIP II) of SNe Ia with and without C in Figure \[f:si\_ew\]. As before, red circles are SNe Ia with C, blue squares do not have C, and the grey area is the 1$\sigma$ region around the average [Si$\;$]{} $\lambda$6355 pEW from the entire BSNIP sample. Again, individual objects with multiple measurements are connected with a solid line. Objects which show evidence for C tend to have slightly below average [Si$\;$]{} $\lambda$6355 pEWs, while objects without C follow the average pEW distribution quite well. However, the significance of this difference between C-positive objects and C-negative objects (or the entire BSNIP sample) is relatively weak. This may yet again be the observational bias that as the [Si$\;$]{} $\lambda$6355 pEW increases, it becomes more blended with the [C$\;$]{} $\lambda$6580 feature (making C detection more difficult).
![The temporal evolution of the [Si$\;$]{} $\lambda$6355 pEW for SNe Ia with C (red circles), without C (blue squares), and the 1$\sigma$ region around the average pEW as determined by the entire BSNIP sample (grey area). Objects with multiple pEW measurements are connected with a solid line.[]{data-label="f:si_ew"}](ewsi_t){width="3.4in"}
As with the [C$\;$]{} velocities, no significant correlations were found between [C$\;$]{} pEW and light-curve width (parametrised by $\Delta m_{15}(B)$ or SALT2 $x_1$) or SN colour (parametrised by $B_{\rm max}-V_{\rm max}$ or SALT2 $c$). Furthermore, no correlations were seen between pEW and synthetic photometric colours as derived from the spectra themselves (Section \[ss:comparison\]). Various spectroscopic luminosity and colour indicators (which are defined and discussed at length in BSNIP II and BSNIP III) were also found to be uncorrelated with [C$\;$]{} pEW. Moreover, no relationship was found between the pEW and velocity of either [C$\;$]{} feature.
Objects with and without C have similar [O$\;$]{} triplet pEWs and both samples are similar to the [O$\;$]{} triplet pEW distribution of the full BSNIP sample. The pEWs of most other spectral features seen in near-maximum spectra of SNe Ia show no significant correlation with [C$\;$]{} pEW, except for the so-called [Mg$\;$]{} complex (with a correlation coefficient of $-0.73$). Figure \[f:ewc\_ewmg\] shows the 10 objects which have measured pEW values for both [C$\;$]{} $\lambda$6580 and the [Mg$\;$]{} complex. The solid line is the best linear fit to the data and the dotted lines are the root-mean-square error. The plot symbols are the same as in Figure \[f:c\_vel\]. If the two pEWs are measured in more than one spectrum of a given object, we only plot the spectrum that is closest to maximum brightness (i.e., the oldest) in Figure \[f:ewc\_ewmg\].
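The comparison in Figure \[f:ewc\_ewmg\] amounts to a correlation coefficient plus a linear fit with its scatter. A minimal sketch follows; a Pearson coefficient is used here purely for illustration, and the array names are ours.

```python
# Sketch of the pEW-pEW comparison in Figure [f:ewc_ewmg]: correlation between
# the C II 6580 pEW and the Mg II complex pEW, a linear fit (solid line), and
# the root-mean-square scatter about it (dotted lines). Names are illustrative.
import numpy as np
from scipy.stats import pearsonr

def fit_pew_relation(pew_c6580, pew_mg):
    x = np.asarray(pew_c6580, dtype=float)
    y = np.asarray(pew_mg, dtype=float)
    r, p_value = pearsonr(x, y)              # the text quotes r ~ -0.73
    slope, intercept = np.polyfit(x, y, 1)   # best linear fit
    rms = np.sqrt(np.mean((y - (slope * x + intercept)) ** 2))
    return r, p_value, slope, intercept, rms
```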
![The pEW of the [C$\;$]{} $\lambda$6580 feature versus the pEW of the [Mg$\;$]{} complex. The data are highly correlated with a correlation coefficient of $-0.73$. The solid line is the best linear fit to the data and the dotted lines are the root-mean-square error. The plot symbols are the same as in Figure \[f:c\_vel\].[]{data-label="f:ewc_ewmg"}](ewc_ewmg){width="3.4in"}
The [Mg$\;$]{} complex was defined in BSNIP II and consists of a blend of many IGE spectral lines which encompasses a broad, complex absorption feature at 4100–4500 Å. In BSNIP III it was shown that both the pEWs of the [Mg$\;$]{} and [Fe$\;$]{} (another broad complex of IGE features at 4500–5200 Å) complexes were relatively good proxies for SALT2 colour [$c$, @Guy07], with increased pEW implying a redder colour. While the [Fe$\;$]{} complex is not very well correlated with pEW of [C$\;$]{} $\lambda$6580 (correlation coefficient of $-0.32$), the strong anti-correlation between the pEW of the [Mg$\;$]{} complex and the pEW of [C$\;$]{} indicates that [ *increased*]{} pEW of [C$\;$]{} implies a [*bluer*]{} colour.
It has already been shown in this work that C-positive SNe Ia tend to have bluer colours than objects without C (Section \[ss:comparison\]). In light of the relationship between the pEWs of [C$\;$]{} and the [Mg$\;$]{} complex, perhaps this relationship with colour extends further than a simple binary splitting of objects with C versus those without C. It seems possible that the strength of the C feature is directly related to the colour of the SN. Objects where no C is detectable have the reddest colours, while objects with some C (i.e., low pEWs) have moderate colours, and finally objects with the most C (i.e., high pEWs) have the bluest colours. Unfortunately, as mentioned above, the pEW of the [C$\;$]{} $\lambda$6580 feature is not significantly correlated with any direct measure of the SN colour (via light curves or synthetic photometry from the spectra). The fact that the [C$\;$]{} pEW appears well correlated with two pEW-based spectral indicators of colour is intriguing, though, and should most definitely be investigated further in future studies.
Conclusions {#s:conclusions}
===========
In this work we have searched for signatures of unburned C from the progenitor WD using a subset of the BSNIP spectroscopic sample. We classify 188 spectra of 144 SNe Ia with ages $\la$3.6 d after maximum brightness as either showing definite [C$\;$]{} absorption (‘A’), possibly showing evidence for C (‘F’), definitely not showing C (‘N’), or inconclusive (‘?’). The spectrum-synthesis code [SYNOW]{} was used to accurately classify all spectra that showed possible evidence for C. The primary evidence for C is a distinct absorption line associated with [C$\;$]{} $\lambda$6580, though absorption from [C$\;$]{} $\lambda$7234 is also sometimes detected.
We find that [$\sim\!\!$ ]{}11 per cent of the SNe studied show definite C absorption features, while a total of [$\sim\!\!$ ]{}25 per cent show at least some evidence for C in their spectra, consistent with previous work [@Parrent11; @Thomas11; @Folatelli11]. The detection rate of C decreases with time, though C can sometimes be seen at all ages younger than [$\sim\!\!$ ]{}4 d past maximum brightness. Near 4 d [*before*]{} maximum brightness, there is a 50 per cent probability of detecting C, according to the BSNIP data. If one obtains a spectrum at $t \la
-5$ d, then there is a better than 30 per cent chance of detecting a distinct absorption feature from [C$\;$]{}.
Nearly all objects that show C are spectroscopically normal (as defined by various classification schemes), while SNe Ia without C detections are from all spectroscopic subtypes. The velocity gradients of objects with and without C have a similar average and range, and C detections and velocity gradients do not seem to be related in any way. However, we again point out that the BSNIP dataset is not well suited to velocity-gradient measurements. The light curves of SNe Ia with and without C also appear to have the same distribution. On the other hand, confirming previous work [@Thomas11; @Folatelli11], objects with C tend to have bluer optical colours than those without, and some (but not all) also have strong NUV excesses. This is shown with the BSNIP data using a variety of optical colour measurements.
The typical expansion velocity of the [C$\;$]{} $\lambda$6580 feature is 12,000–13,000 [km s$^{-1}$]{}, which is somewhat faster than the usual velocity measured for the [C$\;$]{} $\lambda$7234 feature (and we are the first to carefully study the velocity of this feature). The [Si$\;$]{} $\lambda$6355 velocities, measured in BSNIP II, tend to be lower than average for C-positive objects, while SNe Ia without C have a wide range of [Si$\;$]{} velocities. The ratio of the [C$\;$]{} $\lambda$6580 to [Si$\;$]{} $\lambda$6355 velocities is remarkably constant with time and among different objects, with a median value of [$\sim\!\!$ ]{}1.05 [consistent with what was reported by @Parrent11].
The pEWs of the [C$\;$]{} $\lambda$6580 and [C$\;$]{} $\lambda$7234 features are found mostly to decrease with time, though there is a significant increase between [$\sim\!\!$ ]{}13 and 11 d before maximum light. This is consistent with the predictions made by @Folatelli11 from spectral models based on those presented by @Tanaka11. The range of pEWs measured from the BSNIP data is consistent with earlier work and implies a range of C mass in SN Ia ejecta of $2 \times 10^{-3}$ – $3 \times 10^{-2}$ [M$_\odot$]{} [@Folatelli11]. C-positive objects tend to have slightly lower than average [Si$\;$]{} $\lambda$6355 pEWs, but at a relatively low significance. The pEW of the [Mg$\;$]{} complex is found to be strongly anti-correlated with the pEW of [C$\;$]{} $\lambda$6580, implying that bluer objects should have larger [C$\;$]{} pEWs. This is consistent with our finding that objects with obvious C tend to have bluer optical colours than those without. However, we find no strong correlation when comparing the pEW of [C$\;$]{} to direct measures of SN colour.
Even though this is the largest set of SNe Ia in which C has ever been searched for, there are still only a handful of strong C detections and measurements of C absorption features. Other studies using independent datasets [@Thomas11; @Folatelli11] and literature searches [@Parrent11] have also been conducted, and we confirm most of their findings at higher significance. Still, many more moderate-to-high S/N SN Ia spectra at early epochs are needed to better investigate C and further probe WD progenitor models and SN Ia explosion mechanisms. New, large-scale transient searches such as Pan-STARRS [@Kaiser02] and the Palomar Transient Factory [PTF; @Rau09; @Law09] will be critical to moving this topic forward as they find progressively more young SNe of all types. One success story already is SN 2011fe (PTF11kly), discovered only 11 hr after explosion by PTF in M101, the Pinwheel Galaxy [@Nugent11; @Li11:ptf11kly]. The search for and measurement of C in the many spectra of SN 2011fe obtained at extremely early epochs will further our quest to better understand SNe Ia.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank R. C. Thomas for comments on earlier drafts of this work. We are also grateful to the referee for suggestions that improved the manuscript. Some of the data utilised herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration (NASA); the observatory was made possible by the generous financial support of the W. M. Keck Foundation. We wish to recognise and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community; we are most fortunate to have the opportunity to conduct observations from this mountain. We thank the staffs at the Lick and Keck Observatories for their support during the acquisition of data. Financial support was received through U.S. NSF grant AST-0908886, DOE grants DE-FC02-06ER41453 (SciDAC) and DE-FG02-08ER41563, and the TABASGO Foundation. A.V.F. is grateful for the hospitality of the W. M. Keck Observatory, where this paper was finalised.
\[lastpage\]
[^1]: E-mail: [email protected]
[^2]: The dataset only includes SNe Ia that follow the Phillips relation.
[^3]: Also known as SNF20080514-002.
---
abstract: 'The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains containing complementary information. Tools for automatically aligning these knowledge bases would make it possible to unify many sources of structured knowledge and answer complex queries. However, the efficient alignment of *large-scale* knowledge bases still poses a considerable challenge. Here, we present SiGMa (Simple Greedy Matching), a simple algorithm for aligning knowledge bases with millions of entities and facts. SiGMa is an iterative propagation algorithm which leverages both the structural information from the relationship graph as well as flexible similarity measures between entity properties in a greedy local search, thus making it scalable. Despite its greedy nature, our experiments indicate that SiGMa can efficiently match some of the world’s largest knowledge bases with high precision. We provide additional experiments on benchmark datasets which demonstrate that SiGMa can outperform state-of-the-art approaches both in accuracy and efficiency.'
author:
- |
Simon Lacoste-Julien\
\
\
Konstantina Palla\
\
\
Alex Davies\
\
\
- |
Gjergji Kasneci\
\
\
Thore Graepel\
\
\
Zoubin Ghahramani\
\
\
title: |
SiGMa: Simple Greedy Matching\
for Aligning Large Knowledge Bases
---
Introduction
============
In the last decade, a growing number of large-scale knowledge bases have been created online. Examples of domains include music, movies, publications and biological data[^1]. As these knowledge bases sometimes contain both overlapping and complementary information, there has been growing interest in attempting to merge them by *aligning* their common elements. This alignment could have important uses for information retrieval and question answering. For example, one could be interested in finding a scientist with expertise on certain related protein functions – information which could be obtained by aligning a biological database with a publication one. Unfortunately, this task is challenging to automate as different knowledge bases generally use different terms to represent their entities, and the space of possible matchings grows exponentially with the number of entities.
A significant amount of research has been done in this area – particularly under the umbrella term of *ontology matching* [@choi06survey; @kalfoglou03om-state-of-the-art; @euzenat07om-book]. An ontology is a formal collection of world knowledge and can take different structured representations. In this paper, we will use the term *knowledge base* to emphasize that we assume very little structure about the ontology (to be specified in Section \[sec:problem\]). Despite the large body of literature in this area, most of the work on ontology matching has been demonstrated only on fairly small datasets of the order of a few hundred entities. In particular, Shvaiko and Euzenat [@shvaiko08challenges] identified *large-scale evaluation* as one of the ten challenges for the field of ontology matching.
In this paper, we consider the problem of aligning the *instances* in *large* knowledge bases, of the order of millions of entities and facts, where *aligning* means automatically identifying corresponding entities and interlinking them. Our starting point was the challenging task of aligning the IMDb movie database to the Wikipedia-based YAGO [@suchanek2007WWW], as another step towards the Semantic Web vision of interlinking different sources of knowledge which is exemplified by the Linking Open Data Initiative[^2] [@lee08WWW]. Initial attempts to match entities between the two knowledge bases by naively exploiting string and neighborhood information failed, and so we designed SiGMa (Simple Greedy Matching), a scalable greedy iterative algorithm which is able to exploit previous matching decisions as well as the relationship graph information between entities.
The design decisions behind SiGMa were both to be able to take advantage of the combinatorial structure of the matching problem (by contrast with database record linkage approaches which make more independent decisions) as well as to focus on a simple approach which could be scalable. SiGMa works in two stages: it first starts with a small seed matching assumed to be of good quality. Then the algorithm incrementally augments the matching by using *both* structural information and properties of entities such as their string representation to define a modular score function. Some key aspects of the algorithm are that (1) it uses the current matching to obtain structural information, thereby harnessing information from previous decisions; (2) it proposes candidate matches in a *local* manner, from the structural information; and (3) it makes greedy decisions, enabling a scalable implementation. A surprising result is that we obtained accurate large-scale matchings in our experiments despite the greediness of the algorithm.
#### Contributions {#contributions .unnumbered}
The contributions of the present work are the following:
1. We present SiGMa, a knowledge base alignment algorithm which can handle millions of entities. The algorithm is easily extensible with tailored scoring functions to incorporate domain knowledge. It also provides a natural tradeoff between precision and recall, as well as between computation and recall.
2. In the context of testing the algorithm, we constructed two large-scale partially labeled knowledge base alignment datasets with hundreds of thousands of ground truth mappings. We expect these to be a useful resource for the research community to develop and evaluate new knowledge base alignment algorithms.
3. We provide a detailed experimental comparison illustrating how SiGMa improves over the state-of-the-art. SiGMa is able to align knowledge bases with millions of entities with over 95% precision in less than two hours (a 50x speed-up over PARIS [@suchanek12PARIS]). On standard benchmark datasets, SiGMa obtains solutions with higher F-measure than the best previously published results.
The remainder of the paper is organized as follows. Section \[sec:problem\] presents the knowledge base alignment problem with a real-world example as motivation for our assumptions. We describe the algorithm in Section \[sec:algorithm\]. We evaluate it on benchmark and on real-world datasets in Section \[sec:experiments\], and situate it in the context of related work in Section \[sec:related\].
Aligning Large-Scale Knowledge Bases {#sec:problem}
====================================
Motivating example: YAGO and IMDb
------------------------
Consider merging the information in the following two knowledge bases:
1. YAGO, a large semantic knowledge base derived from English Wikipedia [@suchanek2007WWW], WordNet [@wordnet] and GeoNames.[^3]
2. IMDb, a large popular online database that stores information about movies.[^4]
The information in YAGO is available as a long list of triples (called *facts*) that we formalize as: $$\label{eq:triplet}
\langle e, r, e'\rangle,$$ which means that the directed relationship $r$ holds from entity $e$ to entity $e'$, such as $\langle \textrm{John\_Travolta}, \textrm{ActedIn}, \textrm{Grease}\rangle$. The information from IMDb was originally available as several files which we merged into a similar list of triples. We call these two databases *knowledge bases* to emphasize that we are not assuming a richer representation, such as RDFS [@RDFs], which would distinguish between classes and instances for example. In the language of ontology matching, our setup is the less studied *instance matching* problem, as pointed out by Castano et al. [@castano08instanceMatching], for which the goal is to match concrete instantiations of concepts such as specific actors and specific movies rather than the general actor or movie class. YAGO comes with an RDFS representation, but IMDb does not; therefore we will focus on methods that do not assume or require a class structure or rich hierarchy in order to find a one-to-one matching of instances between YAGO and IMDb.
We note that in the full generality of the ontology matching problem, both the schema and the instances of one ontology are to be related with the ones of the other ontology. Moreover, in addition to the `isSameAs` (or “$\equiv$”) relationship that we consider, these matching relationships could be `isMoreGeneralThan` (“$\supseteq$”), [`isLessGeneralThan`]{} (“$\subseteq$”) or even `hasPartialOverlap`. In our example, because the number of relations in the knowledge bases is relatively small (108 in YAGO and 10 in IMDb), we could align the relations manually, discovering six equivalent ones as listed in Table \[tab:relations\]. As we will see in our experiments, focussing uniquely on the `isSameAs` type of relationship between instances of the two knowledge bases is sufficient in the YAGO-IMDb setup to cover most cases. The exceptions are rare enough for SiGMa to obtain useful results while making the simplifying assumption that the alignment between the instances is *injective* (1-1).
---------------------- -----------------------
actedIn actedIn
directed directed
produced produced
created composed
hasLabel$^*$ hasLabel$^*$
wasCreatedOnDate$^*$ hasProductionYear$^*$
---------------------- -----------------------
: Manually matched relations between YAGO (left column) and IMDb (right column). [The starred pairs are actually pairs of *properties*, as defined in the text.]{.nodecor}\[tab:relations\]
#### Relationships vs. properties {#relationships-vs.-properties .unnumbered}
Given our assumption that the alignment is 1-1, it is important to distinguish between two types of objects which could be present in the list of triples: *entities* vs. *literals*. By our definition, the *entities* will be the only objects that we will try to align – they will be objects like specific actors or specific movies which have a clear identity. The *literals*, on the other hand, will correspond to a value related to an entity through a special kind of relationship that we will call *property*. The defining characteristic of literals is that it would not make sense to try to align them between the two knowledge bases in a 1-1 fashion. For example, in the triple $\langle {\texttt{m1}}$, ${\texttt{wasCreatedOnDate}}$, ${\texttt{1999-12-11}}\rangle$, the object [`1999-12-11`]{} could be interpreted as a literal representing the value for the property [`wasCreatedOnDate`]{} for the entity [`m1`]{}. The corresponding property in our version of IMDb is [`hasProductionYear`]{} which has values only at the year granularity ([`1999`]{}). The 1-1 restriction would prevent us from aligning both [`1999-12-11`]{} and [`1999-12-10`]{} to [`1999`]{}. On the other hand, we can use these literals to define a similarity score between entities from the two knowledge bases (for example in this case, whether the year matches, or how close the dates are to each other). We will thus have two types of triples: entity-relationship-entity and entity-property-literal. We assume that the distinction between relationships and properties (which depends on the domain and the user’s goals) is easy to make; for example, in the dataset that we also used in our experiments, the entities would have unique identifiers but not the literals. Figure \[fig:example\] provides a concrete example of information present in the two knowledge bases that we will keep re-using in this paper.
We are now in a position to state more precisely the problem that we address.
**Definition:** A *knowledge base* ${K\!B}$ is a tuple\
$(\mathcal{E},\mathcal{L},\mathcal{R},\mathcal{P},\mathcal{F}_R,\mathcal{F}_P)$ where $\mathcal{E}$, $\mathcal{L}$, $\mathcal{R}$ and $\mathcal{P}$ are sets of entities, literals, relationships and properties respectively; $\mathcal{F}_R \subseteq \mathcal{E}\times\mathcal{R}\times\mathcal{E}$ is a set of relationship-facts whereas $\mathcal{F}_P \subseteq \mathcal{E}\times\mathcal{P}\times\mathcal{L}$ is a set of property-facts (both can be represented as a simple list of triples). To simplify the notation, we assume that all inverse relations are also present in $\mathcal{F}_R$ – that is, if $\langle e,r,e' \rangle$ is in $\mathcal{F}_R$, we also have $\langle e',r^{-1}, e \rangle$ in $\mathcal{F}_R$, effectively doubling the number of possible relations in the ${K\!B}$.[^5]
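To make the definition concrete, the following sketch stores such a knowledge base as plain Python sets of triples; the two example facts are taken from the text above, and everything beyond that is an illustrative placeholder rather than the representation used in the actual system.

```python
# Minimal sketch of the knowledge-base tuple defined above. The two example
# facts are taken from the text; all other structure is illustrative.
from collections import namedtuple

KB = namedtuple("KB", ["entities", "literals", "relationships",
                       "properties", "rel_facts", "prop_facts"])

def with_inverses(rel_facts):
    """Add <e', r^-1, e> for every <e, r, e'>, as assumed in the definition."""
    return rel_facts | {(e2, r + "^-1", e1) for (e1, r, e2) in rel_facts}

rel_facts = {("John_Travolta", "ActedIn", "Grease")}       # <e, r, e'>
prop_facts = {("m1", "wasCreatedOnDate", "1999-12-11")}    # <e, p, literal>

kb1 = KB(entities={"John_Travolta", "Grease", "m1"},
         literals={"1999-12-11"},
         relationships={"ActedIn", "ActedIn^-1"},
         properties={"wasCreatedOnDate"},
         rel_facts=with_inverses(rel_facts),
         prop_facts=prop_facts)
```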
**Problem: one-to-one alignment of instances between two knowledge bases.** Given two knowledge bases ${K\!B}_1$ and ${K\!B}_2$ as well as a partial mapping between their corresponding relationships and properties, we want to output a 1-1 partial mapping $m$ from $\mathcal{E}_1$ to $\mathcal{E}_2$ which represents the semantically equivalent entities in the two knowledge bases (by partial mapping, we mean that the domain of $m$ does not have to be the whole of $\mathcal{E}_1$).
Possible approaches {#ssec:approaches}
-------------------
Standard approaches for the ontology matching problem, such as RiMOM [@li09RiMOM], could be used to align small knowledge bases. However, they do not scale to millions of entities as needed for our task given that they usually consider all pairs of entities, suffering from a quadratic scaling cost. On the other hand, the related problem of identifying duplicate entities known as *record linkage* or *duplicate detection* in the database field, and *co-reference resolution* in the natural language processing field, do have scalable solutions [@arasu09deduplication; @gracia09largeScaleSenses], though these do not exploit the 1-1 matching combinatorial structure present in our task, which reduces their accuracy. More specifically, they usually make independent decisions for different entities using some kind of similarity function, rather than exploiting the competition between different assignments for entities. A notable exception is the work on *collective* entity resolution by Bhattacharya and Getoor [@getoor07relational], solved using a greedy agglomerative clustering algorithm. The algorithm that we present in Section \[sec:algorithm\] can actually be seen as an efficient specialization of their work to the task of knowledge base alignment.
Another approach to alignment arises from the word alignment problem in natural language processing [@och03comparison], which has been formulated as a maximum weighted bipartite matching problem [@taskar05matching] (thus exploiting the 1-1 matching structure). It also has been formulated as a quadratic assignment problem in [@lacoste06qap], which encourages neighbor entities in one graph to align to neighbor entities in the other graph, thus enabling alignment decisions to depend on each other — see the caption of Figure \[fig:example\] for an example of this in our setup. The quadratic assignment formulation [@lawler63qap], which can be solved as an integer linear program, is NP-hard in general though, and these approaches were only used to align at most one hundred entities. In the algorithm that we propose, we are interested in exploiting both the 1-1 matching constraint, as well as building on previous decisions, like these word alignment approaches, but in a scalable manner which would handle millions of entities. SiGMa does this by greedily optimizing the quadratic assignment objective, as we will describe in Section \[ssec:greedy\]. Finally, Suchanek et al. [@suchanek12PARIS] recently proposed an ontology matching approach called PARIS that they have successfully applied to the alignment of YAGO to IMDb as well, though the scalability of their approach is not as clear, as we will explain in Section \[sec:related\]. We will provide a detailed comparison with PARIS in the experiments section.
Design choices and assumptions
------------------------------
Our main design choices result from our need for a fast algorithm for knowledge base alignment which scales to millions of entities. To this end we made the following assumptions:
**1-1 matching and uniqueness.** We assume that the true alignment between the two ${K\!B}$s is a partial function which is mainly 1-1. If there are duplicate entities inside a ${K\!B}$, SiGMa will only align one of the duplicates to the corresponding entity in the other ${K\!B}$.
**Aligned relationships.** We assume that we are given a partial alignment between relationships and between properties of the ${K\!B}$s.
![**Example of neighborhood to match in YAGO and IMDb.** [Even though entities $i$ and $j$ have no words in common, the fact that several of their respective neighbors are matched together is a strong signal that $i$ and $j$ should be matched together. This is a real example from the dataset used in the experiments, and SiGMa was able to correctly match all these pairs ($i$ and $j$ are actually the same movie despite their different stored titles in each ${K\!B}$).]{.nodecor}[]{data-label="fig:example"}](figures/graph_example.pdf){width="\columnwidth"}
The SiGMa Algorithm {#sec:algorithm}
===================
Greedy optimization of a quadratic assignment objective {#ssec:greedy}
-------------------------------------------------------
The algorithm can be seen as the greedy optimization of an objective function which globally scores the suitability of a particular matching $m$ for a pair of given ${K\!B}$s. This objective function will use two sources of information useful to choose matches: a similarity function between pairs of entities defined from their properties; and a graph neighborhood contribution making use of neighbor pairs being matched (see Figure \[fig:example\] for a motivation). Let us encode the matching $m : \mathcal{E}_1 \rightarrow \mathcal{E}_2 $ by a matrix $y$ with entries indexed by the entities in each ${K\!B}$, with $y_{ij} = 1$ if $m(i) = j$, meaning that $i \in \mathcal{E}_1$ is matched to $j \in \mathcal{E}_2$, and $y_{ij} = 0$ otherwise. The space of possible 1-1 partial mappings is thus represented by the set of binary matrices: $\mathcal{M} \doteq \{y \in \{0,1\}^{\mathcal{E}_1 \times \mathcal{E}_2} : \sum_{l} y_{il} \leq 1 \; \forall i \in \mathcal{E}_1$ and $\sum_{k} y_{kj} \leq 1 \; \forall{j} \in \mathcal{E}_2\}$. We define the following quadratic objective function which globally scores the suitability of a matching $y$: $$\label{eq:obj}
\begin{split}
\texttt{obj}(y) & \doteq \sum_{(i,j) \in \mathcal{E}_1 \times \mathcal{E}_2} y_{ij} \left[ (1-\alpha) s_{ij} + \alpha g_{ij}(y) \right], \\
& \textrm{where} \qquad g_{ij}(y) \doteq \sum_{(k,l) \in \mathcal{N}_{ij}} y_{kl} \, w_{ij,kl}.
\end{split}$$ The objective contains linear coefficients $s_{ij}$ which encode a similarity between entity $i$ and $j$, as well as quadratic coefficients $w_{ij,kl}$ which control the algorithm’s tendency to match $i$ with $j$ given that $k$ was matched to $l$[^6]. $\mathcal{N}_{ij}$ is a local neighborhood around $(i,j)$ that we define later and which will depend on the graph information from the ${K\!B}$s – $g_{ij}(y)$ is basically counting (in a weighted fashion) the number of matched pairs $(k,l)$ which are in the neighborhood of $i$ and $j$. $\alpha \in [0,1]$ is a tradeoff parameter between the linear and quadratic contributions. Our approach is motivated by the maximization problem: $$\label{eq:opt}
\begin{aligned}
\max_y &\quad \texttt{obj}(y) \\
\textrm{s.t.} &\quad y \in \mathcal{M}, \quad \lVert y \rVert_1 \leq R,
\end{aligned}$$ where the norm $\lVert y \rVert_1 \doteq \sum_{ij} y_{ij}$ represents the number of elements matched and $R$ is an unknown upper bound which represents the size of the best partial mapping which can be made from ${K\!B}_1$ to ${K\!B}_2$. We note that if the coefficients are all positive (as will be the case in our formulation – we are only encoding similarities and not repulsions between entities), then the maximizer $y^*$ will have $\lVert y^* \rVert_1 = R$. Problem \[eq:opt\] is thus related to one of the variations of the quadratic assignment problem, a well-known NP-complete problem in operations research [@lawler63qap][^7]. Even though one could approximate the solution to the combinatorial optimization using a linear program relaxation (see Lacoste-Julien et al. [@lacoste06qap]), the number of variables is quadratic in the number of entities, and so this is obviously not scalable. Our approach is instead to *greedily optimize* \[eq:opt\] by adding at each iteration the match element $y_{ij}=1$ which increases the objective the most, selected amongst a small set of possibilities. In other words, the high-level operational definition of the algorithm is as follows:
1. Start with an initial good quality partial match $y_0$.
2. At each iteration $t$, augment the previous matching with a new matched pair by setting $y_{ij}=1$ for the $(i,j)$ which maximally increases $\texttt{obj}$, chosen amongst a small set $\mathcal{S}_t$ of reasonable candidates which preserve the feasibility of the new matching.
3. Stop when the bound $\lVert y \rVert_1 = R$ is reached (and never undo previous decisions).
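The three steps above can be written down as a short greedy loop. The following is a minimal Python sketch of this skeleton (Python being the language of our prototype); `seed`, `score(i, j, y)` and `compatible_neighbors(i, j)` are placeholder names for the quantities defined in the remainder of this section, and the priority-queue machinery discussed later is omitted here.

```python
def greedy_matching(seed, score, compatible_neighbors, threshold=0.25):
    """Sketch of the greedy skeleton: matches are added one by one and never undone."""
    matched_1, matched_2 = {}, {}                    # current partial 1-1 matching y
    for i, j in seed:                                # step 1: initial good-quality matches
        matched_1[i], matched_2[j] = j, i
    candidates = {pair for (i, j) in seed
                  for pair in compatible_neighbors(i, j)}
    while candidates:                                # step 2: augment greedily
        i, j = max(candidates, key=lambda p: score(p[0], p[1], matched_1))
        candidates.discard((i, j))
        if i in matched_1 or j in matched_2:
            continue                                 # keep the matching a partial 1-1 function
        if score(i, j, matched_1) < threshold:
            break                                    # step 3: stopping criterion
        matched_1[i], matched_2[j] = j, i
        candidates |= set(compatible_neighbors(i, j))
    return matched_1
```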
Having outlined the general framework, in the remainder of this section we will describe methods for choosing the similarity coefficients $s_{ij}$ and $w_{ij,kl}$ so that they guide the algorithm towards good matchings (Section \[sec:score\]), the choice of neighbors, $\mathcal{N}_{ij}$, the choice of a candidate set $\mathcal{S}_t$, and the stopping criterion, $R$. These choices influence both the speed and accuracy of the algorithm.
**Compatible-neighbors.** $\mathcal{N}_{ij}$ should be chosen so as to respect the graph structure defined by the ${K\!B}$ facts. Its contribution in the objective crucially encodes the fact that a neighbor $k$ of $i$ being matched to a ‘compatible’ neighbor $l$ of $j$ should encourage $i$ to be matched to $j$ — see the caption of Figure \[fig:example\] for an example. Here, compatibility means that they are related by the same relationship (they have the same color in Figure \[fig:example\]). Formally, we define: $$\begin{gathered}
\label{eq:compatible}
\mathcal{N}_{ij} = \textrm{{\texttt{compatible-neighbors}}}(i,j) \doteq \\
\textrm{\parbox{0.8\columnwidth}{\{ $(k,l)$ : $\langle i,r,k \rangle$ is in $\mathcal{F}_{R1}$ and $\langle j,s,l \rangle$ is in $\mathcal{F}_{R2}$ and relationship $r$ is matched to $s$\}.}}\end{gathered}$$ Note that a property of this neighborhood is that $(k,l) \in \mathcal{N}_{ij}$ iff $(i,j) \in \mathcal{N}_{kl}$, as we have that the relationship $r$ is matched to $s$ iff $r^{-1}$ is matched to $s^{-1}$ as well. This means that the increase in the objective obtained by adding $(i,j)$ to the current matching $y$ defines the following *context-dependent similarity score function* which is used to pick the next matched pair in step 2 of the algorithm: $$\begin{gathered}
\label{eq:score}
\texttt{score}(i,j; y) = (1-\alpha) s_{ij} + \alpha \, \delta g_{ij}(y) \\
\textrm{where } \delta g_{ij}(y) \doteq \sum_{(k,l) \in \mathcal{N}_{ij}} y_{kl} \, (w_{ij,kl} + w_{kl,ij}).
\end{gathered}$$
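To make these definitions concrete, here is one possible way to compute `compatible-neighbors` and the score above in Python; the containers `facts1` and `facts2` (mapping an entity to its list of (relationship, neighbor) pairs), the relationship alignment `rel_map`, the similarity `s` and the weights `w` are illustrative names for the quantities defined in this section, not the actual implementation.

```python
def compatible_neighbors(i, j, facts1, facts2, rel_map):
    """Pairs (k, l) with <i,r,k> in KB1, <j,s,l> in KB2 and relationship r matched to s."""
    out = set()
    for r, k in facts1.get(i, []):
        s = rel_map.get(r)
        if s is None:
            continue
        for s2, l in facts2.get(j, []):
            if s2 == s:
                out.add((k, l))
    return out

def score(i, j, y, s, w, neighbors, alpha=1.0 / 3):
    """Context-dependent score: (1 - alpha) * s_ij + alpha * delta_g_ij(y)."""
    delta_g = sum(w(i, j, k, l) + w(k, l, i, j)
                  for (k, l) in neighbors(i, j) if y.get(k) == l)
    return (1 - alpha) * s(i, j) + alpha * delta_g
```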
**Information propagation on the graph.** The [`compatible-neighbors`]{} concept that we just defined is one of the most crucial characteristics of SiGMa. It allows the information of a new matched pair to propagate amongst its neighbors. It also defines a powerful heuristic to suggest new candidate pairs to include in a small set $\mathcal{S}_t$ of matches to choose from: after matching $i$ to $j$, SiGMa adds all the pairs $(k,l)$ from [`compatible-neighbors`]{}$(i,j)$ as new candidates. This yields a fire propagation analogy for the algorithm: starting from an initial matching (the fire seed), it matches the neighbors of already matched pairs, letting the fire propagate through the graph. If the graph in each ${K\!B}$ is well-connected in a similar fashion, the algorithm can visit most nodes this way. This heuristic enables SiGMa to avoid the potentially quadratic number of pairs to consider by focusing its attention only on the neighborhoods of current matches.
**Stopping criterion.** SiGMa terminates when the variation in the objective value, `score`$(i,j; y)$, of the latest added match $(i,j)$ falls below a threshold (or when the queue becomes empty). The threshold in effect controls the precision / recall tradeoff of the algorithm. By ensuring that the $s_{ij}$ and $g_{ij}(y)$ terms are normalized between 0 and 1, we can standardize the scale of the threshold for different score functions. In our experiments, a threshold of 0.25 is observed to correlate well with the point at which the F-measure stops increasing and the precision starts to decrease significantly.
Algorithm and implementation {#ssec:implementation}
----------------------------
We present the pseudo-code for SiGMa in Table \[tab:alg\]. We now elaborate on the algorithm design as well as its implementation aspects. We note that the `score` defined in \[eq:score\] to greedily select the next matched pair is composed of a static term $s_{ij}$, which does not depend on the evolving matching $y$, and a dynamic term $\delta g_{ij} (y)$, which depends on $y$, though only through the local neighborhood $\mathcal{N}_{ij}$. We call the $\delta g_{ij}$ component of the score function the graph contribution – its local dependence means that it can be updated efficiently after a new match has been added. We explain in more detail the choice of similarity measures for these components in Section \[sec:score\].
#### Initial match structure $m_0$ {#initial-match-structure-m_0 .unnumbered}
The algorithm can take any initial matching seed assumed of good quality. In our current implementation, this is done by looking for entities with the same string representation (with minimal standardization such as removing capitalization and punctuation) with an *unambiguous 1-1 match* – that is, we do not include an exact matched pair when more than two entities have this same string representation, thereby increasing precision.
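A sketch of this seeding step is given below; `names1` and `names2` are assumed to map entity ids to their string representations, and the normalization shown is only indicative.

```python
from collections import defaultdict

def exact_match_seed(names1, names2):
    """Unambiguous 1-1 exact matches on lightly normalized strings (a sketch)."""
    def normalize(s):
        return "".join(c for c in s.lower() if c.isalnum() or c.isspace()).strip()
    by_name1, by_name2 = defaultdict(list), defaultdict(list)
    for e, s in names1.items():
        by_name1[normalize(s)].append(e)
    for e, s in names2.items():
        by_name2[normalize(s)].append(e)
    return [(by_name1[n][0], by_name2[n][0])
            for n in by_name1.keys() & by_name2.keys()
            if len(by_name1[n]) == 1 and len(by_name2[n]) == 1]  # unambiguous on both sides
```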
#### Increasing score function with local dependence {#increasing-score-function-with-local-dependence .unnumbered}
The score function has a component $s_{ij}$ which is static (fixed at the beginning of the algorithm), computed from the properties of entities such as their string representation, and a component $\delta g_{ij}(y)$ which is dynamic, looking at how many neighbors are correctly matched. The dynamic part can only increase as new neighbors are matched, and only the scores of neighbors can change when a new pair is matched.
#### Optional static list of candidates $\mathcal{S}_0$ {#optional-static-list-of-candidates-mathcals_0 .unnumbered}
Optionally, we can initialize $\mathcal{S}$ with a static list $\mathcal{S}_0$ which only needs to be scored once, as any score update will come from neighbors already covered by step 11 of the algorithm. The purpose of $\mathcal{S}_0$ is to increase the possible exploration of the graph when another strong source of information (which is not from the graph) can be used. In our implementation, we use an inverted index built on words to efficiently suggest as potential candidates pairs of entities which have at least two words in common in their string representation.[^8]
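For illustration, the inverted-index suggestions could be generated as follows; `words1`/`words2` map entities to the words of their string representation and `stop_words` is the automatically built stop-word list of footnote 8 (all names are placeholders, not the actual code).

```python
from collections import defaultdict

def static_candidates(words1, words2, stop_words=frozenset()):
    """Pairs of entities sharing at least two (non stop-) words (a sketch)."""
    index2 = defaultdict(set)                  # word -> entities of KB2 containing it
    for e2, ws in words2.items():
        for w in set(ws) - stop_words:
            index2[w].add(e2)
    counts = defaultdict(int)                  # (e1, e2) -> number of common words
    for e1, ws in words1.items():
        for w in set(ws) - stop_words:
            for e2 in index2[w]:
                counts[(e1, e2)] += 1
    return {pair for pair, c in counts.items() if c >= 2}
```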
#### Data-structures {#data-structures .unnumbered}
We use a binary heap for the priority queue implementation—insertions will thus be $O(\log n)$ where $n$ is the size of the queue. Because the score function can only increase as we add new matches, we do not need to keep track of stale nodes in the priority queue in order to update their scores, yielding a significant speed-up.
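A sketch of such a queue using Python's `heapq` module (a min-heap, hence the negated scores); the lazy handling of outdated entries relies precisely on the fact that scores only increase, so an update is handled by simply pushing a second entry.

```python
import heapq

class MaxQueue:
    """Max-priority queue over candidate pairs; outdated entries are skipped at pop time."""
    def __init__(self):
        self._heap = []

    def push(self, score, pair):
        heapq.heappush(self._heap, (-score, pair))    # O(log n) insertion

    def pop(self, is_matched, current_score):
        while self._heap:
            neg_score, (i, j) = heapq.heappop(self._heap)
            if is_matched(i, j):
                continue                              # pair already used, skip
            if current_score(i, j) > -neg_score:
                continue                              # stale entry: a fresher, larger one was pushed
            return -neg_score, (i, j)
        return None
```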
Score functions {#sec:score}
---------------
An important factor for any matching algorithm is the similarity function between pairs of elements to match. Designing good similarity functions has been the focus of much of the literature on record linkage, entity resolution, etc., and because SiGMa uses the score function in a modular fashion, it is free to use most of them for the term $s_{ij}$ as long as they can be computed efficiently. We provide in this section our implementation choices (which were motivated by simplicity), but we note that the algorithm can easily handle more powerful similarity measures. The generic score function used by SiGMa was given in \[eq:score\]. In the current implementation, the static part $s_{ij}$ is defined through the *properties* of entities only. The graph part $\delta g_{ij}(y)$ depends on the *relationships* between entities (as this is what determines the graph), as well as on the previous matching $y$. We also make sure that $s_{ij}$ and $g_{ij}$ stay normalized so that the scores of different pairs are on the same scale.
### Static similarity measure
The static property similarity measure is further decomposed in two parts: we single out a contribution coming from the string representation property of entities (as it is such a strong signal for our datasets), and we consider the other properties together in a second term: $$\label{eq:static}
s_{ij} = (1-\beta) \textrm{{\texttt{string}}}(i,j) + \beta \textrm{{\texttt{prop}}}(i,j),$$ where $\beta \in [0,1]$ is a tradeoff coefficient between the two contributions set to 0.25 during the experiments.
#### String similarity measure {#string-similarity-measure .unnumbered}
For the string similarity measure, we primarily consider the number of words which two strings have in common, albeit weighted by their information content. In order to handle the varying lengths of strings, we use the Jaccard similarity coefficient between the sets of words, a metric often used in information retrieval and other data mining fields [@hamers89Jaccard; @getoor07relational]. The Jaccard similarity between set $A$ and $B$ is defined as $\textrm{Jaccard}(A,B) \doteq |A \cap B| / |A \cup B|$, which is a number between 0 and 1 and so is normalized as required. We also add a smoothing term in the denominator in order to favor longer strings with many words in common over very short strings. Finally, we use a *weighted* Jaccard measure in order to capture the information that some words are more informative than others. In analogy to a commonly used feature in information retrieval, we use the IDF (inverse-document-frequency) weight for each word. The weight for word $v$ in ${K\!B}_o$ is $w^o_v \doteq \log_{10} \frac{|\mathcal{E}_o|}{|E^o_v|}$, where $E^o_v \doteq \{e \in \mathcal{E}_o :$ $e$ has word $v$ in its string representation$\}$. Combining these elements, we get the following string similarity measure: [ $$\label{eq:string}
\textrm{{\texttt{string}}}(i,j) = \frac{{\displaystyle}\sum_{v \in \left(\mathcal{W}_i \cap \mathcal{W}_j\right)} (w^1_v + w^2_v)}
{{\displaystyle}\texttt{smoothing } + \sum_{v \in \mathcal{W}_i} w^1_v + \sum_{v' \in \mathcal{W}_j} w^2_{v'}},$$ ]{}where $\mathcal{W}_e$ is the set of words in the string representation of entity $e$ and `smoothing` is the scalar smoothing constant (we try different values in the experiments). Using unit weights and removing the smoothing term would recover the standard Jaccard coefficient between the two sets. As it operates on sets of words, this measure is robust to word re-ordering, a frequently observed variation between strings representing the same entity in different knowledge bases. On the other hand, this measure is not robust to small typos or small changes of spelling of words. This problem could be addressed by using more involved string similarity measures such as *approximate string matching* [@chulman97wordMatching; @stoilos05stringMetric], which handles both word corruption as well as word reordering, though our current implementation only uses the measure in \[eq:string\] for simplicity. We will explore the effect of different scoring functions in our experiments in Section \[sec:parameters\].
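The measure above is straightforward to implement; the following sketch (with illustrative helper names) computes the IDF weights for one ${K\!B}$ and the smoothed weighted Jaccard similarity between two word sets.

```python
import math

def idf_weights(word_sets):
    """IDF weight of each word v in a KB: log10(|E| / |{e : v in words(e)}|)."""
    n = len(word_sets)
    df = {}
    for ws in word_sets.values():
        for v in set(ws):
            df[v] = df.get(v, 0) + 1
    return {v: math.log10(n / d) for v, d in df.items()}

def string_sim(Wi, Wj, w1, w2, smoothing):
    """Smoothed weighted Jaccard between the word sets Wi and Wj."""
    num = sum(w1.get(v, 0.0) + w2.get(v, 0.0) for v in Wi & Wj)
    den = smoothing + sum(w1.get(v, 0.0) for v in Wi) + sum(w2.get(v, 0.0) for v in Wj)
    return num / den
```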
#### Property similarity measure {#property-similarity-measure .unnumbered}
We recall that we assume that the user provided a partial matching between properties of both databases. This enables us to use them in a property similarity measure. In order to elegantly handle missing values of properties, a varying number of property values present, etc., we also use a smoothed weighted Jaccard similarity measure between the sets of properties. The detailed formulation is given in Appendix \[ap:property\] for completeness, but we note that it can make use of a similarity measure between literals such as a normalized distance on numbers (for dates, years, etc.) or a string-edit distance on strings.
### Dynamic graph similarity measure
We now introduce the part of the score function which enables SiGMa to build on previous decisions and exploit the relationship graph information. We need to determine $w_{ij,kl}$, the weight of the contribution of a neighboring matched pair $(k,l)$ to the score of the candidate pair $(i,j)$. The general idea of the graph score function is to count the number of compatible neighbors which are currently matched together for a pair of candidates (this is the $g_{ij}(y)$ contribution in \[eq:obj\]). Going back to the example in Figure \[fig:example\], there were three compatible matched pairs shown in the neighborhood of $i$ and $j$. We would like to normalize this count by dividing by the number of possible neighbors, and we would possibly want to weight each neighbor differently. We again use a smoothed weighted Jaccard measure to summarize this information, averaging the contribution from each ${K\!B}$. This can be obtained by defining $w_{ij,kl} = \gamma_{i} w_{ik} + \gamma_{j} w_{jl}$, where $\gamma_i$ and $\gamma_j$ are normalization factors specific to $i$ and $j$ in each database and $w_{ik}$ is the weight of the contribution of $k$ to $i$ in ${K\!B}_1$ (and similarly for $w_{jl}$ in ${K\!B}_2$). The graph contribution thus becomes: $$\label{eq:g_ij}
g_{ij}(y) = \sum_{(k,l) \in \mathcal{N}_{ij}} y_{kl} (\gamma_i w_{ik} + \gamma_j w_{jl}).$$ So let $\mathcal{N}_{i}$ be the set of neighbors of entity $i$ in ${K\!B}_1$, i.e. $\mathcal{N}_{i} \doteq \{ k : \exists r \textrm{ s.t. } (i,r,k) \in \mathcal{F}_{R1} \}$ (and similarly for $\mathcal{N}_j$). Then, remembering that $\sum_{k} y_{kl} \leq 1$ for a valid partial matching $y \in \mathcal{M}$, the following normalizations $\gamma_i$ and $\gamma_j$ will yield the average of two smoothed weighted Jaccard measures for $g_{ij}(y)$: [ $$\label{eq:gamma_i}
\gamma_i \doteq \frac{1}{2} \left(1 + \sum_{k \in \mathcal{N}_{i}} w_{ik} \right)^{-1} \qquad \gamma_j \doteq \frac{1}{2} \left(1 + \sum_{l \in \mathcal{N}_{j}} w_{jl} \right)^{-1}$$ ]{}We thus have $g_{ij}(y) \leq 1$ for $y \in \mathcal{M}$, keeping the contribution of each possible matched pair $(i,j)$ on the same scale in `obj` in \[eq:obj\].
The graph part of the score in \[eq:score\] then takes the form: $$\label{eq:graphscore}
\delta g_{ij}(y) = \sum_{(k,l) \in \mathcal{N}_{ij}} y_{kl} \, ( \gamma_i w_{ik} + \gamma_j w_{jl} + \gamma_k w_{ki} + \gamma_l w_{lj}).$$ The summation over the first two terms yields $g_{ij}(y)$ and so is bounded by $1$, but the summation over the last two terms could be greater than 1 in the case that $(i,j)$ is filling a ‘hole’ in the graph (thus increasing the contribution of many neighbors $(k,l)$ in `obj` in \[eq:obj\]). For example, suppose that $i$ has $n$ neighbors with degree 1 (i.e. they only have $i$ as neighbor), and similarly for $j$, and that they are all matched pairwise – Figure \[fig:example\] is an example of this with $n=3$ if we suppose that no other neighbors are present in the ${K\!B}$. Suppose moreover that we use unit weights for $w_{ik}$ and $w_{jl}$. Then the normalization is $\gamma_k = 1/4$ for each $k \in \mathcal{N}_i$ (as they have degree 1); and similarly for $\gamma_l$. The contribution of the sum over the last two terms in \[eq:graphscore\] is thus $n/2$ (whereas in this case $g_{ij}(y) = n/(n+1) \leq 1$).
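For concreteness, the graph contribution and its normalization could be computed as in the following sketch, where `neighbors1`/`neighbors2`, `w1`/`w2` and `compatible` stand for the neighborhood, weight and `compatible-neighbors` functions defined above (placeholder names, not the exact implementation).

```python
def gamma(e, neighbors, w):
    """Normalization factor: 1 / (2 * (1 + sum of the weights of e's neighbors))."""
    return 0.5 / (1.0 + sum(w(e, k) for k in neighbors(e)))

def delta_graph_score(i, j, y, compatible, neighbors1, neighbors2, w1, w2):
    """delta g_ij(y): contribution of the already matched compatible neighbors of (i, j)."""
    gi, gj = gamma(i, neighbors1, w1), gamma(j, neighbors2, w2)
    total = 0.0
    for k, l in compatible(i, j):
        if y.get(k) != l:
            continue                                  # only matched pairs contribute
        gk, gl = gamma(k, neighbors1, w1), gamma(l, neighbors2, w2)
        total += gi * w1(i, k) + gj * w2(j, l) + gk * w1(k, i) + gl * w2(l, j)
    return total
```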
**Neighbor weight $w_{ik}$.** We finally need to specify the weight $w_{ik}$, which determines the strength of the contribution of the neighbor $k$ being correctly matched to the score of a suggested pair containing $i$. In our experiments, we consider both the constant weight $w_{ik} = 1$ and a weight $w_{ik}$ that varies inversely with the number of neighbors that entity $k$ has through the same relationship type as the one linking it to entity $i$. The motivation for the latter is explained in Appendix \[ap:weights\].
Experiments {#sec:experiments}
===========
Setup {#ssec:setup}
-----
We made a prototype implementation of SiGMa in Python[^9] and compared its performance on benchmark datasets as well as on large-scale knowledge bases. All experiments were run on a hexa-core Intel Xeon E5650 2.66GHz cluster node with 46GB of RAM running Linux. Each knowledge base is represented as two text files containing a list of triples of relationship-facts and property-facts. The input to SiGMa is a pair of such ${K\!B}$s as well as a partial mapping between the relationships and properties of each ${K\!B}$, which is used in the computation of the score in \[eq:score\] and in the definition of `compatible-neighbors` in \[eq:compatible\]. The output of SiGMa is a list of matched pairs $(e_1,e_2)$ with their score information and the iteration number at which they were added to the solution. We evaluate the final alignment (after reaching the stopping threshold) by comparing it to ground truth using the standard metrics of precision, recall and F-measure on the number of *entities* correctly matched.[^10] The benchmark datasets are available together with corresponding ground truth data; for the large-scale knowledge bases, we built their ground truth using web URL information as described in Section \[sec:datasets\].
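For reference, this evaluation boils down to the following computation (a sketch: `predicted` and `ground_truth` both map entities of ${K\!B}_1$ to entities of ${K\!B}_2$, and precision is measured only on the predictions for which ground truth is available).

```python
def evaluate(predicted, ground_truth):
    """Precision, recall and F-measure of a predicted 1-1 alignment against ground truth."""
    with_gt = {e: j for e, j in predicted.items() if e in ground_truth}
    correct = sum(1 for e, j in with_gt.items() if ground_truth[e] == j)
    precision = correct / len(with_gt) if with_gt else 0.0
    recall = correct / len(ground_truth) if ground_truth else 0.0
    f_measure = 2 * precision * recall / (precision + recall) if correct else 0.0
    return precision, recall, f_measure
```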
We found reasonable values for the parameters of SiGMa by exploring its performance on the YAGO to IMDb pair (the methodology is described in Section \[sec:parameters\]), and then kept them fixed for all the other experimental comparisons (Sections \[sec:exp1\] and \[sec:exp2\]). This reflects the situation where one would like to apply SiGMa to a new dataset without ground truth, or wants to minimize parameter adaptation. The standard parameters that we used in these experiments are given in Appendix \[ap:params\].
Datasets {#sec:datasets}
--------
Our experiments were done both on several large-scale datasets and on some standard benchmark datasets from the ontology alignment evaluation initiative (OAEI) (Table \[tab:all\_stats\]). We describe these datasets below.
#### Large-scale datasets {#large-scale-datasets .unnumbered}
As mentioned throughout this paper so far, we used the dataset pair YAGO-IMDb as the main motivating example for developing and testing SiGMa. We also test on the pair Freebase-IMDb, for which we could obtain a sizable ground truth. We describe here their construction. Both YAGO and Freebase are available as lists of triples from their respective websites.[^11] IMDb, on the other hand, is given as a list of text files.[^12] There are different files for different categories, e.g. actors, producers, etc. We use these categories to construct a list of triples containing facts about movies and people. Because SiGMa ignores relationships and properties that are not matched between the ${K\!B}$s, we could reduce the size of YAGO and Freebase by keeping only those facts which had a 1-1 mapping with IMDb, as presented in Table \[tab:largeKB\], and the entities appearing in these facts. To facilitate the comparison of SiGMa with PARIS, the authors of PARIS kindly provided us their own version of IMDb, which actually has a richer structure in terms of properties. We also kept in it the relationships and properties which were aligned with those of the other ${K\!B}$s (Table \[tab:largeKB\]). Table \[tab:all\_stats\] presents the number of unique entities and relationship-facts included in the relevant reduced datasets. We constructed the ground truth for YAGO-IMDb by scraping the relevant Wikipedia pages of entities to extract their link to the corresponding IMDb page, which often appears in the ‘external links’ section. We then obtained the entity name by scraping the corresponding IMDb page and matched it to our constructed database by using string matching (and some manual cleaning). We obtained 54K ground truth pairs this way. We used a similar process for Freebase-IMDb by accessing the IMDb urls which were actually stored in the Freebase database. This yielded 293K pairs, probably one of the largest knowledge base alignment ground truth sets to date.
#### Benchmark datasets {#benchmark-datasets .unnumbered}
We also tested SiGMa on three benchmark dataset pairs provided by the ontology alignment evaluation initiative (OAEI), which allowed us to compare its performance to some previously published methods [@li09RiMOM; @hu11objectCoref]. From the OAEI 2009 edition,[^13] we use the Rexa-DBLP instance matching benchmark from the domain of scientific publications.[^14] Rexa contains publications and authors as entities extracted from the search results of a publication search server. The other ${K\!B}$ is a version of the DBLP dataset listing publications from the computer science domain. The pair has one matched relationship, `author`, as well as several matched properties such as [`year`]{}, [`volume`]{}, [`journal name`]{}, [`pages`]{}, etc. Our goal was to align publications and authors. The other two datasets come from the Person-Restaurants (PR) task of the OAEI 2010 edition,[^15] containing data about people and restaurants. In particular, there are Person pairs where the second entity is a copy of the first with one property field corrupted, and Restaurant pairs coming from two different online databases that were manually aligned. All datasets were downloaded from the corresponding OAEI webpages, with dataset sizes given in Table \[tab:all\_stats\].
Exp. 1: Large-scale alignment {#sec:exp1}
-----------------------------
In this experiment, we test the performance of SiGMa on the three pairs of large-scale ${K\!B}$s and compare it with PARIS [@suchanek12PARIS], which is described in more detail in the related work Section \[sec:related\]. We also compare SiGMa and PARIS with the simple baseline of doing only the unambiguous exact string matching step described in Section \[ssec:implementation\], which is used to obtain the initial match $m_0$. Table \[tab:res\_large\] presents the results.
Despite its simple greedy nature which never goes back to correct a mistake, SiGMa obtains an impressive F-measure above 90% for all datasets, significantly improving over the baseline. We tried running PARIS [@suchanek12PARIS] on a smaller subset of our data, using the code available from its authors’ website. It did not complete its first iteration after a week of computation and so we halted it (we did not have the SSD drive which seems crucial to reasonable running times). The results for PARIS in Table \[tab:res\_large\] are thus computed using the prediction files provided to us by its authors on the dataset pair used in their own evaluation. In order to better relate those results with ours, we also constructed a larger ground truth reference on this pair by using the same process as described in Section \[sec:datasets\]. On both ground truth evaluations, SiGMa obtains a similar F-measure as PARIS, but in 50x less time. On the other hand, we note that PARIS is solving the more general problem of instance and schema alignment, and was not provided any manual alignment between relationships. The large difference of recall between SiGMa and PARIS on the ground truth from [@suchanek12PARIS] can be explained by the fact that more than a third of its entities had no neighbor, whereas the process used to construct the new larger ground truth included only entities participating in movie facts and thus having at least one neighbor. The recall of SiGMa actually increases for entities with an increasing number of neighbors (going from 68% for entities in the ground truth from [@suchanek12PARIS] with no neighbors to 97% for entities with 5+ neighbors).
About 2% of the matched pairs predicted by SiGMa on this dataset have no word in common and thus zero string similarity – difficult pairs to match without any graph information. Examples of these pairs came from spelling variations of names, movie titles in different languages, foreign characters in names which are not handled uniformly, or multiple titles for movies (such as the ‘Blood In, Blood Out’ example of Figure \[fig:example\]).
**Error analysis.** Examining the few errors made by SiGMa, we observed the following types of matching errors: 1) errors in the ground truth (coming either from the scraping scheme used or from incorrect information on Wikipedia); 2) one ${K\!B}$ having multiple very similar entities (e.g. mistaking the ‘making of’ of a movie for the movie itself); 3) pairs of entities which shared exactly the same neighbors (e.g. two different movies with exactly the same actors) but without other discriminating information. Finally, we note that going through the predictions of SiGMa that had a low property score revealed a significant number of errors in the databases (e.g. wildly inconsistent birth dates for people), indicating that SiGMa could be used to highlight data inconsistencies between databases.
Exp. 2: Benchmark comparisons {#sec:exp2}
-----------------------------
In this experiment, we test the performance of SiGMa on the three benchmark datasets and compare it with the best published results so far that we are aware of: PARIS [@suchanek12PARIS] for the Person-Restaurants datasets (which compared favorably over ObjectCoref [@hu11objectCoref]); and RiMOM [@li09RiMOM] for Rexa-DBLP. Table \[tab:res\_bench\] presents the results. We also include the results for the exact string matching step as a simple baseline, as well as for a variant of SiGMa which does not use the graph information at all[^16], to give an idea of how important the graph information is in these cases.
Interestingly, SiGMa significantly improved the previous results without needing any parameter tweaking. The Person-Restaurants datasets did not have a rich relationship structure to exploit: each entity (a person or a restaurant) was linked to exactly one other entity in a 1-1 bipartite fashion (its address). This is perhaps why the variant of SiGMa without graph information is surprisingly able to *perfectly* match both the Person and Restaurants datasets. Analyzing the errors made by the full algorithm, we noticed that they were due to a violation of the assumption that each entity is unique in each ${K\!B}$: the same address is represented as different entities in one of the ${K\!B}$s, and SiGMa greedily matched the one which was not linked to another restaurant, thus reducing the graph score for the correct match. The variant without graph information could not suffer from this problem, and thus obtained a perfect matching. The Rexa-DBLP dataset has a more interesting relationship structure which is not just 1-1: papers have multiple authors and authors have written multiple papers, enabling the fire propagation algorithm to explore more possibilities. However, it appears that a purely string based approach can already do quite well on this dataset – the exact string matching baseline obtains an 89% F-measure, already significantly improving the previously best published results (RiMOM at 76% F-measure). The variant without graph information improves this to 91%, and finally using the graph structure helps SiGMa to improve this to 94%. This medium-sized benchmark also highlights the nice scalability of SiGMa: despite using the interpreted language Python, our implementation runs in less than 10 minutes on this dataset, which can be compared to the 36 hours on an 8-core server reported for RiMOM in 2009.
Parameter experiments {#sec:parameters}
---------------------
In this section, we explore the role of different configurations for SiGMa on the YAGO-IMDb pair, as well as determine which parameters to use for the other experiments. We recall that SiGMa with the final parameters (described in Appendix \[ap:params\]) yields a 95% F-measure on this dataset (second section of Table \[tab:res\_large\]). Experiments 5 and 6, which explore the optimal weighting schemes as well as the choice of stopping threshold, are described for completeness in Appendix \[ap:param\_exp\].
### Exp. 3: Score components
In this experiment, we explore the importance of each part of the score function by running SiGMa with some parts turned off (which can be done by setting the $\alpha$ and $\beta$ tradeoffs to 0 or 1). The resulting precision / recall curves are plotted in Figure \[fig:test\_scores\]a. We can observe that turning off the static part of the score (string and property) has the biggest effect, decreasing the maximum F-measure from 95% to about 80% (to be contrasted with the 72% F-measure for the baseline as shown in Table \[tab:res\_large\]). By comparing SiGMa with the configuration that does not use the graph score, we see that including the graph information moves the F-measure from a bit below 85% to over 95%, a significant gain, indicating that the graph structure is more important on this dataset than on the OAEI benchmark datasets.
![**Exp. 3: Precision/Recall curves for SiGMa on YAGO-IMDb with different scoring configurations.** [The filled circles indicate the maximum F-measure position on each curve, with the corresponding diamond giving the F-measure value at this recall point.]{.nodecor} \[fig:test\_scores\] ](figures/test_scores_newyago.pdf){width="\columnwidth"}
### Exp. 4: Matching seed
In this experiment, we tested how important the size of the matching seed $m_0$ is for the performance of SiGMa. We report the following notable results. We ran SiGMa with no exact seed matching at all: we initialized it with a random exact match pair and let it explore the graph greedily (with the inverted index still making suggestions). This obtained an even better score than the standard setup: 99% precision, 94% recall and 96% F-measure, demonstrating that *a good initial seed is actually not needed for this setup*. If we do not use the inverted index but initialize with the top 5% of the exact matches sorted by their score in the context of the whole exact match, the performance drops a little, but SiGMa is still able to explore a large part of the graph: it obtains 99% / 87% / 92% of precision/recall/F-measure, illustrating the power of the graph information for this dataset.
Related work {#sec:related}
============
We contrast SiGMa here with the work already mentioned in Section \[ssec:approaches\] and provide further links. In the ontology matching literature, the only approach which was applied to datasets of the size that we considered in this paper is the recently proposed PARIS [@suchanek12PARIS], which solves the more general problem of matching instances, relationships and classes. The PARIS framework defines a normalized score between pairs of instances representing how likely they are to match,[^17] and which depends on the matching scores of their compatible neighbors. The final scores are obtained by first initializing (and fixing) the scores on pairs of literals, and then propagating the updates through the relationship graph using a fixed point iteration, yielding a fire propagation of information analogous to SiGMa’s, though PARIS works with soft \[0,1\]-valued assignments whereas SiGMa works with hard {0,1}-valued ones. The authors handle the scalability issue of maintaining scores for all pairs by using a sparse representation with various pruning heuristics (in particular, keeping only the maximal assignment for each entity at each step, thus making the same 1-1 assumption that we did). An advantage of PARIS over SiGMa is that it is able to include property values in its neighborhood graph (it uses soft assignments between them), whereas SiGMa only uses relationships given that a 1-1 matching of property values is not appropriate. We conjecture that this could explain the higher recall that PARIS obtained on entities which had no relationship neighbors on its evaluation dataset. On the other hand, PARIS was limited to using a 0-1 similarity measure between property values for the large-scale experiments in [@suchanek12PARIS], as it is unclear how one could apply the same sparsity optimization in a scalable fashion with more involved similarity measures (such as the IDF one that SiGMa is using). The use of a 0-1 similarity measure on strings could explain the lower performance of PARIS on the Restaurants dataset in comparison to SiGMa. We stress that SiGMa, in contrast, is able to use sophisticated similarity measures in a scalable fashion, and had a 50x speed improvement over PARIS on the large-scale datasets.
The SiGMa algorithm is related to the collective entity resolution approach of Bhattacharya and Getoor [@getoor07relational], which proposed a greedy agglomerative clustering algorithm to cluster entities based on previous decisions. Their approach could in theory handle constraints on the clustering, including a $1-1$ matching constraint, though this was not implemented. A scalable solution for collective entity resolution was proposed recently in [@restogi11largeEM], by treating the sophisticated machine learning approaches to entity resolution as black boxes (see references therein), but running them on small neighborhoods and combining their output using a message-passing scheme. They do not consider exploiting a $1-1$ matching constraint though, as is the case for most entity resolution or record linkage work.
The idea of propagating information on a relationship graph has been used in several other approaches to ontology matching [@hu05GMO; @mao07network], though none were scalable to the size of knowledge bases that we considered. An analogous ‘fire propagation’ algorithm has been used to align social network graphs in [@narayanan11deanonymization], though with a very different objective function (they define weights in each graph and want to align edges which have similar weights). The heuristic of propagating information on a relationship graph is related to a well-known heuristic for solving Constraint Satisfaction Problems known as constraint propagation [@bessiere2006constraint]. Ehrig and Staab [@ehrig04QOM] mentioned several heuristics to reduce the number of candidates to consider in ontology alignment, including one similar to `compatible-neighbors`, though they tested their approach only on a few hundred instances. Finally, we mention that Peralta [@peralta07movieMatching] aligned the movie database MovieLens to IMDb through a combination of manual cleaning steps with some automation. SiGMa could be considered as an alternative which does not require manual intervention apart from specifying the score function to use.
Conclusion
==========
We have presented SiGMa, a simple and scalable algorithm for the alignment of large-scale knowledge bases. Despite making greedy decisions and never backtracking to correct them, SiGMa obtained a higher F-measure than the previously best published results on the OAEI benchmark datasets, and matched the performance of the more involved PARIS algorithm while being 50x faster on large-scale knowledge bases of millions of entities. Our experiments indicate that SiGMa can obtain good performance over a range of datasets with the same parameter setting. Moreover, SiGMa is easily extensible to more powerful scoring functions between entities, as long as they can be computed efficiently.
Some apparent limitations of SiGMa are a) that it cannot correct previous mistakes and b) that it cannot handle alignments other than 1-1. Addressing these in a scalable fashion which preserves high accuracy is an open question for future work. We note though that the non-corrective nature of the algorithm did not seem to be an issue in our experiments. Moreover, pre-processing each knowledge base with a de-duplication method can help make the 1-1 assumption more reasonable, which is a powerful feature to exploit in an alignment algorithm. Another interesting direction for future work would be to use machine learning methods to learn the parameters of more powerful scoring functions. In particular, the ‘learning to rank’ setting seems suitable for learning a score function which would rank the correctly labeled matched pairs above the other ones. The current level of performance of SiGMa already makes it suitable as a powerful generic alignment tool for knowledge bases, and hence takes us closer to the vision of Linked Open Data and the Semantic Web.
**Acknowledgments:** We thank Fabian Suchanek and Pierre Senellart for sharing their code and answering our questions about PARIS. We thank Guillaume Obozinski for helpful discussions. This research was supported by a grant from Microsoft Research Ltd. and a Research in Paris fellowship.
Property similarity measure {#ap:property}
===========================
We describe here the property similarity measure used in our implementation. We use a smoothed weighted Jaccard similarity measure between the sets of properties defined as follows. Suppose that $e_1$ has properties $p_1, p_2, \ldots, p_{n_1}$ with respective literal values $v_1, v_2, \ldots, v_{n_1}$, and that $e_2$ has properties $q_1, q_2, \ldots, q_{n_2}$ with respective literal values $l_1, l_2, \ldots, l_{n_2}$. In analogy to the string similarity measure, we will also associate IDF weights to the possible property values $w^o_{p,v} \doteq \log_{10} \frac{N^o_p}{|E^o_{p,v}|}$ where $E^o_{p,v} \doteq \{e \in \mathcal{E}_o :$ $e$ has literal $v$ for property $p\}$ and $N^o_p$ is the total number of entities in knowledge base $o$ which have a value for property $p$. We then define the following property similarity measure: $$\label{eq:property}
\mathtt{prop}(i,j) = \frac{{\displaystyle}\sum_{(a,b) \in M_{12} } (w^1_{p_a,v_a} + w^2_{q_b,l_b}) \,\mathrm{Sim}_{p_a,q_b}(v_a,l_b)}
{{\displaystyle}2 + \sum_{a=1}^{n_1} w^1_{p_a,v_a} + \sum_{b=1}^{n_2} w^2_{q_b,l_b}},$$ where $M_{12}$ represents the property alignment: $M_{12} \doteq \{(a,b) : p_a \textrm{ is matched to } q_b\}$. $\mathrm{Sim}_{p_a,q_b}(v_a,l_b)$ is a $[0,1]$-valued similarity measure between literals; it could be a normalized distance on numbers (for dates, years, etc.), a string-edit distance on strings, etc.
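A sketch of this measure is given below; `props_i`/`props_j` hold the (property, literal) pairs of each entity, `prop_map` is the given property alignment, `w1`/`w2` are the IDF-style weights and `literal_sim` is the $[0,1]$-valued similarity between literals (all names are illustrative).

```python
def prop_sim(props_i, props_j, prop_map, w1, w2, literal_sim, smoothing=2.0):
    """Smoothed weighted Jaccard between the aligned property values of two entities."""
    num = 0.0
    for p, v in props_i:
        q = prop_map.get(p)
        if q is None:
            continue                                  # property not aligned
        for q2, l in props_j:
            if q2 == q:
                num += (w1(p, v) + w2(q, l)) * literal_sim(p, q, v, l)
    den = (smoothing + sum(w1(p, v) for p, v in props_i)
           + sum(w2(q, l) for q, l in props_j))
    return num / den
```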
Graph neighbor weight {#ap:weights}
=====================
We recall that the graph weight $w_{ik}$ determines the strength of the contribution of the neighbor $k$ being correctly matched to the score of a suggested pair containing $i$. In our experiments, we consider both the constant weight $w_{ik} = 1$ and a weight $w_{ik}$ that varies inversely with the number of neighbors that entity $k$ has through the same relationship type as the one linking it to entity $i$. To motivate the latter, we go back again to our running example of Figure \[fig:example\], but switching the roles of $i$ and $k$ as we need to look at the neighbors of $k$ – this is illustrated in Figure \[fig:graph\_weight\] and explained in its caption. In case there are multiple different relationships linking the same pair $i$ to $k$, we take the maximum of the weights over these (i.e. we pick the most informative relationship to define the weight). Formally, we have: $$\label{eq:w_ik}
w_{ik} \doteq \max_{r \textrm{ s.t. } (i,r,k) \in \mathcal{F}_R} \left| \{i' : (i',r,k) \in \mathcal{F}_R \} \right| ^{-1} .$$
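In code, this weight could be computed as in the sketch below, where `facts_by_target` is assumed to map a pair (relationship, target entity) to the set of source entities of the corresponding facts.

```python
def neighbor_weight(i, k, facts_by_target):
    """Inverse count of entities related to k through r, maximized over the r linking i to k."""
    rels = [r for (r, kk) in facts_by_target if kk == k and i in facts_by_target[(r, kk)]]
    if not rels:
        return 0.0
    return max(1.0 / len(facts_by_target[(r, k)]) for r in rels)
```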
We also point out that the normalization of $g_{ij}(y)$ in \[eq:g\_ij\] is made over each ${K\!B}$ independently, in contrast with the `string` and `prop` similarity measures \[eq:string\] and \[eq:property\], which are normalized over both ${K\!B}$s jointly. The motivation for this is that the neighborhood sizes in the two ${K\!B}$s can be overly asymmetric (there is much more information about each movie in IMDb, for example). The separate normalization means that as long as most of a neighborhood in *one* ${K\!B}$ is correctly aligned, the graph score will be high. The information about strings and properties is more symmetric in the ${K\!B}$ pairs that we consider, so a joint normalization seems reasonable in this case.
Quadratic assignment problem {#ap:qap}
============================
The quadratic assignment problem is traditionally defined as finding a bijection between $R$ facilities and $R$ locations which *minimizes* the expected cost of transport between the facilities. Given that facilities $i$ and $k$ are assigned to locations $j$ and $l$ respectively, the cost of transport between facility $i$ and $k$ is $w_{ij,kl} = n_{ik} c_{jl}$, where $n_{ik}$ is the expected number of units to ship between facilities $i$ and $k$, and $c_{jl}$ is the expected cost of shipment between locations $j$ and $l$ (depending on their distance). In its more general form [@lawler63qap], the coefficients can be negative, and so there is no major difference between minimizing and maximizing, and we see that our optimization problem is a special case of this.
![ Graph weight illustration. []{data-label="fig:graph_weight"}](figures/graph_weight.pdf){width="0.5\columnwidth"}
Parameters used for SiGMa {#ap:params}
=========================
We use $\alpha=1/3$ as the graph score tradeoff[^18] in \[eq:obj\] and $\beta=0.25$ as the property score tradeoff in \[eq:static\]. We set the string score `smoothing` term in \[eq:string\] to the sum of the maximum possible word weights in each ${K\!B}$ ($\log |\mathcal{E}_o|$). We use 0.25 as the score threshold for the stopping criterion (step 6 in the algorithm), and stop considering suggestions from the inverted index on strings when their score is below 0.75. We use as initial matching the unambiguous exact string comparison test described in Section \[sec:algorithm\]. We use uniform weights $w_{ik} = 1$ for the matched neighbors contribution in the graph score \[eq:g\_ij\]. We use a `Sim` measure on property values as in \[eq:property\], which depends on the type of property literals: for dates and numbers, we simply use $0$-$1$ similarity (1 when they are equal) with some processing – e.g. for dates, we only consider the year; for secondary strings (i.e. strings for other properties than the main string representation of an entity), we use a weighted Jaccard measure on words as defined in \[eq:string\] but with the IDF weights derived from the strings appearing in this property only.
Additional parameter\
experiments {#ap:param_exp}
=====================
We provide here the additional parameter experiments which were skipped from the main text for brevity.
Exp. 5: Weighting schemes, smoothing and tradeoffs
--------------------------------------------------
In this experiment, we explored the effect of the weighting scheme for the three different score components (string, property and graph) by trying two options per component, with precision / recall curves given in Figure \[fig:test\_weights\]. For the string and property components, we compared uniform weights vs. IDF weights. For the graph component, we compared uniform weights (which surprisingly got the best result) with the inverse number of neighbors weight proposed in \[eq:w\_ik\]. Overall, the effect of these variations was much smaller than that of the score component experiment, with the biggest decrease, of less than 1% F-measure, obtained by using uniform string weights instead of the IDF scores. We also varied the 3 smoothing parameters (one for each score component) as well as the 2 tradeoff parameters linearly around their chosen values: the performance does not change much for changes of the order of 0.1-0.2 for the tradeoffs, and 1.5 for the smoothing parameters (staying within a 1% range of F-measure).
Exp. 6: Stopping threshold choice
---------------------------------
In this experiment, we studied whether the score information correlated with changes in the precision / recall information, in order to determine a possible stopping threshold. We overlay in Figure \[fig:detailed\_run\] the precision / recall at each iteration of the algorithm (blue / red) with the score (in green) of the matched pair chosen at this iteration (as given by \[eq:score\]). The vertical black dashed lines correspond to the iterations at which the score thresholds of 0.35 and 0.25 are reached, respectively, which correlated with a drop of precision for the current predictions (black line with diamonds) and a leveling of the F-measure (curved dashed black line), respectively. We note that this correlation was also observed on all the other datasets, indicating that this threshold is robust to dataset variations.
![**Exp. 5: Precision/Recall curves for SiGMa on YAGO-IMDb with different weighting configurations.** [The filled circles indicate the maximum F-measure position on each curve, with the corresponding diamond giving the F-measure value at this recall point. Each curve is one of the 8 possibilities of having the weight ‘off’ (set to unity) or ‘on’, for the graph / property / string part of the score function. The legend indicates the difference between the reference setup (graph off / property on / string on) and the given curve.]{.nodecor} \[fig:test\_weights\] ](figures/pr_rec_weights_newyago.pdf){width="\columnwidth"}
![**Exp. 6: Precision/recall and score evolution for SiGMa on the YAGO-IMDb dataset as a function of iterations (predictions).** [The magenta line indicates the proportion out of the last 1k predictions for which we had ground truth information; the black line with diamonds indicates the precision for these 1k predictions. The score of the matching pair chosen at each iteration is shown in green; notice how the precision starts to drop when the score goes below 0.35 (first vertical black dashed line) and the F-measure starts to level when the score goes below 0.25 (second vertical dashed line). We note that the periodic increase of the score is explained by the fact that if compatible neighbors are matched, the graph score part of their neighbors can increase sufficiently to exceed the previous maximum score in the priority queue.]{.nodecor} []{data-label="fig:detailed_run"}](figures/detailed_run_newyago.pdf){width="\columnwidth"}
[^1]: Such as [MusicBrainz](http://musicbrainz.org/), [IMDb](http://www.imdb.com/), [DBLP](http://www.informatik.uni-trier.de/~ley/db) and [UniProt](http://www.uniprot.org/).
[^2]: <http://linkeddata.org/>
[^3]: <http://www.geonames.org/>
[^4]: <http://www.imdb.com/>
[^5]: This allows us to look at only one standard direction of facts and cover all possibilities – see for example how it is used in the definition of [`compatible-neighbors`]{} in \[eq:compatible\].
[^6]: In the rest of this paper, we will use the convention that $i$ and $k$ are always entities in ${K\!B}_1$; whereas $j$ and $l$ are in ${K\!B}_2$. $e$ could be in either ${K\!B}$.
[^7]: See Appendix \[ap:qap\] for the traditional description of the quadratic assignment problem and its relationship to our problem.
[^8]: To keep the number of suggestions manageable, we exclude a list of stop words built automatically from the 1,000 most frequent words of each ${K\!B}$.
[^9]: The code and datasets will be made available at <http://mlg.eng.cam.ac.uk/slacoste/sigma>.
[^10]: Recall is defined in our setup as the number of correctly matched entities in ${K\!B}_1$ divided by the number of entities with ground truth information in ${K\!B}_1$. We note that recall is upper bounded by precision because our alignment is a 1-1 function.
[^11]: YAGO was downloaded from <http://www.mpi-inf.mpg.de/yago-naga/yago/downloads.html> and Freebase from <http://wiki.freebase.com/wiki/Data_dumps>.
[^12]: <http://www.imdb.com/interfaces#plain>
[^13]: <http://oaei.ontologymatching.org/2009/instances/>
[^14]: We note that the smaller dataset also present in the benchmark was not suitable for 1-1 matchings as its ground truth had a large number of many-to-one matches.
[^15]: <http://oaei.ontologymatching.org/2010/im/index.html>
[^16]: This variant does not use the graph score component ($\alpha$ is set to 0) and only uses the inverted index $\mathcal{S}_0$ to suggest candidates – not the neighbors in $\mathcal{N}_{ij}$.
[^17]: The authors call these ‘marginal probabilities’ as they were motivated from probabilistic arguments, but these do not sum to one.
[^18]: This value of $\alpha$ has the nice theoretical justification that it gives twice as much weight to the linear term as to the quadratic term, a standard weighting scheme given that the derivative of the quadratic term yields an extra factor of two to compensate.
---
abstract: |
We prove the following generalization of Severi’s Theorem:\
Let $X$ be a fixed complex variety. Then there exist, up to birational equivalence, only finitely many complex varieties $Y$ of general type of dimension at most three which admit a dominant rational map $f:X \r Y$.
---
[Iitaka-Severi’s Conjecture]{}\
[for Complex Threefolds]{}\
Introduction
============
Let $X$ and $Y$ be algebraic varieties, i.e. complete integral schemes over a field of characteristic zero, and denote by $R(X,Y)$ the set of all dominant rational maps $f:X \r Y$. Moreover denote by ${\cal F} = {\cal F}(X)$ the set $\{ f: X \r Y|$ $f$ is a dominant rational map onto an algebraic variety $Y$ of general type$\}$ and by ${\cal F}_m = {\cal F}_m(X)$ the set $\{ f: X \r Y|$ $f$ is a dominant rational map and $Y$ is birationally equivalent to a nonsingular algebraic variety for which the $m$-th pluricanonical mapping is birational onto its image $\}$. We introduce an equivalence relation $\sim$ on the sets ${\cal F}$ and ${\cal F}_m$ as follows: $(f:X \r Y) \sim
(f_1:X \r Y_1)$ iff there exists a birational map $b:Y \r Y_1$ such that $b \circ f = f_1$.\
The classical theorem of Severi can be stated as follows (cf.[@Sa]):
\[1.1\] For a fixed algebraic variety $X$ there exist only finitely many hyperbolic Riemann surfaces $Y$ such that $R(X,Y)$ is nonempty.
We may ask if a finiteness theorem of this kind also can be true in higher dimensions. This leads to the following:
\[1.2\] For a fixed variety $X$ there exist, up to birational equivalence, only finitely many varieties $Y$ of general type such that $R(X,Y)$ is nonempty.\
Moreover, the set ${\cal F}/ \sim$ is a finite set.
Maehara calls this conjecture Iitaka’s Conjecture based on Severi’s theorem (cf. [@Ma3]), and we abbreviate this as Iitaka-Severi’s Conjecture. In [@Ma3] Maehara states the Conjecture more generally for algebraic varieties (over any field) and separable dominant rational maps. He also mentioned that K. Ueno proposed that a variety of general type could be replaced by a polarized non uniruled variety in this Conjecture.\
Maehara proved in Proposition 6.5. in [@Ma2] that in characteristic zero the Conjecture is true if one restricts the image varieties $Y$ to such varieties that can be birationally embedded by the $m$-th pluricanonical map for any given $m$, i.e. ${\cal F}_m / \sim$ is finite for all $m$. This especially proves the Conjecture for surfaces $Y$ (take $m=5$). Furthermore Maehara shows that one can find a fixed $m$ such that for all [**smooth**]{} varieties $Y$ which have nef and big canonical bundle the m-th pluricanonical map is a birational embedding, which proves the Conjecture also in this case. Earlier Deschamps and Menegaux [@DM2], [@DM3] proved, in characteristic zero, the cases where the varieties $Y$ are surfaces which satisfy $q >0$ and $P_g \geq 2$, or where the maps $f:X \r Y$ are morphisms. In this direction Maehara [@Ma1] also showed finiteness of isomorphism classes of smooth varieties with ample canonical bundles which are dominated by surjective morphisms from a fixed variety.\
There is a related classical result due to de Franchis [@Fr] which states that for any Riemann surface $X$ and any fixed hyperbolic Riemann surface $Y$ the set $R(X,Y)$ is finite. At the same time he gives an upper bound for $\#R(X,Y)$ only in terms of $X$. The generalization of this theorem to higher dimensions is not a conjecture any more: Kobayashi and Ochiai [@KO] proved that if $X$ is a Moisheson space and $Y$ a compact complex space of general type, then the set of surjective meromorphic maps from $X$ to $Y$ is finite. Deschamps and Menegaux [@DM1] proved that if $X$ and $Y$ are smooth projective varieties over a field of arbitrary characteristic, and $Y$ is of general type, then $\#R(X,Y)$ is finite (where one has additionally to assume that the dominant rational maps $f:X \r Y$ are separable).\
From these results it follows that the second part of Conjecture \[1.2\] is a consequence of the first part, hence we only have to deal with the first part.\
Bandman [@Ba1], [@Ba2] and Bandman and Markushevich [@BM] also generalized the second part of de Franchis’ theorem, proving that for projective varieties $X$ and $Y$ with only canonical singularities and nef and big canonical line bundles $K_X$ and $K_Y$ the number $\#R(X,Y)$ can be bounded in terms of invariants of $X$ and the index of $Y$.\
Another generalization of the (first part of) de Franchis’ theorem was given by Noguchi [@No], who proved that there are only finitely many surjective meromorphic mappings from a Zariski open subset $X$ of an irreducible compact complex space onto an irreducible compact hyperbolic complex space $Y$. Suzuki [@Su] generalized this result to the case where $X$ and $Y$ are Zariski open subsets of irreducible compact complex spaces $\overline{X}$ and $\overline{Y}$ and $Y$ is hyperbolically embedded in $\overline{Y}$. These results can be generalized to finiteness results for nontrivial sections in hyperbolic fiber spaces. But since a more precise discussion would lead us too far from the proper theme of this paper, we refer the interested reader to Noguchi [@No] and Suzuki [@Su], or to the survey [@ZL] of Zaidenberg-Lin, where one can also find an overview of earlier results generalizing de Franchis’ theorem.\
It is a natural question if Conjecture \[1.2\] can also be stated in terms of complex spaces. In [@No] Noguchi proposed the following:
\[1.3\] Let $X$ be a Zariski open subset of an irreducible compact complex space. Then the set of compact irreducible hyperbolic complex spaces $Y$ which admit a dominant meromorphic map $f:X \r Y$ is finite.
Let us now return to Conjecture \[1.2\]. In this paper we are only interested in the case of complex varieties. Since we want to prove finiteness only up to birational equivalence, we may assume without loss of generality that $X$ and all $Y$ in the Conjecture are nonsingular projective complex varieties, by virtue of Hironaka’s resolution theorem [@Hi], cf. also [@Ue], p.73. Now fix a complex projective variety $X$. We define ${\cal G}_m:=\{$ a nonsingular complex projective variety $Y$ : the $m$-th pluricanonical map $\Phi_m: Y \r \Phi_m(Y)$ is birational onto its image and there exists a dominant rational map $f:X \r Y \}
$. In order to show Conjecture \[1.2\] it is sufficient, by Proposition 6.5. of Maehara [@Ma2], to show the following
\[1.4\] There exists a natural number $m$ only depending on $X$ such that all smooth complex projective varieties $Y$ of general type which admit a dominant rational map $f:X \r Y$ belong to ${\cal G}_m$.
We will prove that Conjecture \[1.4\] is true for varieties $Y$ which are of dimension three, thus we prove Iitaka-Severi’s Conjecture for complex 3-folds. Since for varieties $Y$ of dimension one resp. two we can take $m=3$ resp. $m=5$, our main theorem is:
\[1.5\] Let $X$ be a fixed complex variety. Then there exist, up to birational equivalence, only finitely many complex varieties $Y$ of general type of dimension at most three which admit a dominant rational map $f:X \r Y$.\
Moreover the set ${\cal F}/ \sim$ is finite if one restricts to complex varieties $Y$ of dimension at most three.
As Maehara [@Ma3], p.167 already pointed out, in order to prove Conjecture \[1.4\] it is enough to show that for all varieties $Y$ there exists a minimal model and that the index of these minimal models can be uniformly bounded from above by a constant only depending on $X$. Since in dimension three minimal models and even canonical models do exist, the problem reduces to the question of how to bound the index.\
But it turns out that one runs into problems if one tries to bound directly the indices of the canonical models $Y_c$ of threefolds $Y$, using only the fact that they are all dominated by dominant rational maps from a fixed variety $X$. So we will proceed in a different way:\
The first step of the proof is to show that the Euler characteristic $\chi (Y, {\cal O}_{Y})$ is uniformly bounded by an integer constant $C$ depending only on $X$ (Proposition \[3.2\]); this is where we use the fact that all threefolds $Y$ are dominated by a fixed variety $X$.\
In the second step of the proof, we show that we can choose another integer constant $R$, also only depending on $X$, such that for any threefold $Y$ of general type for which the Euler characteristic is bounded by $C$ the following holds (Proposition \[3.3\]): Either the index of the canonical model $Y_c$ of $Y$ divides $R$ (first case) or the pluricanonical sheaf ${\cal O}_{Y_c}((13C)K_{Y_c})$ has two linearly independent sections on $Y_c$ (second case). In order to prove this Proposition, we use the Plurigenus Formula due to Barlow, Fletcher and Reid and estimates of some terms in this formula due to Fletcher. In the first case the index is bounded, and we are done (Proposition \[2.8\]).\
The third step of the proof deals with the second case. Here we remark that the two linearly independent sections on $Y_c$ can be lifted to sections in $H^0 (Y,{\cal O}_{Y}(mK_{Y}))$, and then we can apply a theorem of Kollar [@Ko] which states that now the $(11m+5)$-th pluricanonical map gives a birational embedding (Proposition \[3.4\]), and we are also done in the second case.\
Hence we do not prove directly that under our assumptions the index is uniformly bounded; instead we prove that if it is not, then there is some other way to show that some fixed pluricanonical map gives a birational embedding. The fact that the index actually has such a uniform bound then follows as a consequence of Theorem \[1.5\].\
Finally, it might be worthwhile to point out that the second and the third step of our proof actually yield:
Let $C$ be a positive integer constant. Define $R={\rm lcm}(2,3,\ldots,26C-1)$ and $m={\rm lcm}(18R+1,143C+5)$. Then for all smooth projective 3-folds of general type for which the Euler characteristic is bounded above by $C$, the $m$-th pluricanonical map is birational onto its image.
Despite the fact that our $m=m(C)$ is explicit, it is so huge that it is only of theoretical interest. For example for $C=1$ it is known by Fletcher [@Fl] that one can choose $m=269$, but for $C=1$ our $m$ is already of the size $10^{12}$. Moreover J.P. Demailly recently told me that he conjectures that for 3-folds of general type any $m \geq 7$ should work, independently of the size of the Euler characteristic.\
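To get a feel for these constants, they can be evaluated directly from the definitions in the statement above; the following is a minimal Python sketch (nothing beyond the standard library is assumed):

```python
from functools import reduce
from math import gcd

def lcm(*nums):
    # least common multiple of an arbitrary list of positive integers
    return reduce(lambda a, b: a * b // gcd(a, b), nums, 1)

def bound_m(C):
    """The constants of the theorem above: R = lcm(2,3,...,26C-1), m = lcm(18R+1, 143C+5)."""
    R = lcm(*range(2, 26 * C))
    return lcm(18 * R + 1, 143 * C + 5)

print(bound_m(1))  # already astronomically larger than Fletcher's value m = 269 for C = 1
```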
The paper is organized as follows: In section 2 we collect, for the convenience of the reader and also to fix notation, the basic facts about canonical threefolds which we need. We try to give precise references for all these facts, but do not try to trace them back to the original papers. Where we could not find such references we give short proofs. However we expect that all these facts are standard to specialists on threefolds. In section 3 we give the proof of Theorem \[1.5\].\
The author would like to thank S.Kosarew (Grenoble) for pointing out Noguchi’s Conjecture \[1.3\] to him. This was his starting point for working on problems of this kind. He would also like to thank F.Catanese (Pisa) for pointing out Fletcher’s paper [@Fl] to him, since this paper later gave him the motivation for the key step in the proof of Theorem \[1.5\]. He finally would like to thank the Institut Fourier in Grenoble, the University of Pisa and the organizers of the conference Geometric Complex Analysis in Hayama for inviting him, since this gave him the possibility to discuss with many specialists.
Some Tools from the Theory of 3-folds
=====================================
Let $Y$ be a normal complex variety of dimension $n$, $Y_{reg}$ the subspace of regular points of $Y$ and $j: Y_{reg} \hookrightarrow Y$ the inclusion map. Then the sheaves $\h_Y(mK_Y)$ are defined as $$\h_Y(mK_Y) := j_*((\Omega^n_{Y_{reg}})^{\otimes m})$$ Equivalently $\h_Y(mK_Y)$ can be defined as the sheaf of $m$-fold tensor products of rational canonical differentials on $Y$ which are regular on $Y_{reg}$. The $mK_Y$ can be considered as Weil divisors. For this and the following definitions, cf. [@Re2] and [@Mo1].
\[2.1\] $Y$ has only [**canonical**]{} singularities if it satisfies the following two conditions:\
i) for some integer $r \ge 1$, the Weil divisor $rK_Y$ is a Cartier divisor.\
ii) if $f: \tilde{Y} \r Y $ is a resolution of $Y$ and $\{ E_i \}$ the family of all exceptional prime divisors of $f$, then $$rK_{\tilde{Y}} = f^*(rK_Y) + \sum a_iE_i$$ with $a_i \geq 0$.\
If $a_i >0$ for every exceptional divisor $E_i$, then $Y$ has only [**terminal**]{} singularities.\
The smallest integer $r$ for which the Weil divisor $rK_Y$ is Cartier is called the [**index**]{} of $Y$.
\[2.2\] A complex projective algebraic variety $Y$ with only canonical (resp. terminal) singularities is called a [**canonical**]{} (resp. [**minimal**]{}) model if $K_Y$ is an ample (resp. a nef) $\raz$-divisor.\
We say that a variety $Z$ has a [**canonical (resp. minimal) model**]{} if there exists a canonical (resp. minimal) model which is birational to $Z$.
Later on we will need the following theorem due to Elkik [@El] and Flenner [@Fle], 1.3 (cf. [@Re2], p.363):
\[2.3\] Canonical singularities are rational singularities.
The first part of the following theorem, which is of high importance for the theory of 3-folds, was proved by Mori [@Mo2], the second part follows from the first part by works of Fujita [@Fu], Benveniste [@Be] and Kawamata [@Ka]:
\[2.4\] Let $Y$ be a nonsingular projective 3-fold of general type.\
i) There exists a minimal model of $Y$.\
ii) There exists a unique canonical model of $Y$, the canonical ring $R(Y,K_Y)$ of $Y$ is finitely generated, and the canonical model is just ${\rm Proj} R(Y,K_Y)$.
We have the following Plurigenus Formula due to Barlow, Fletcher and Reid (cf. [@Fl], [@Re2], see also [@KM], p.666 for the last part):
\[2.5\] Let $Y$ be a projective 3-fold with only canonical singularities. Then we have $$\chi (Y, \h_Y(mK_Y)) = \frac{1}{12}(2m-1)m(m-1)K_Y^3 - (2m-1) \chi (Y, \h_Y)
+ \sum_Q l(Q,m)$$ with $$l(Q,m) = \sum_{k=1}^{m-1} \frac{\overline{bk}(r- \overline{bk})}{2r}$$ Here the summation takes place over a basket of singularities $Q$ of type $\frac{1}{r}(a,-a,1)$ (see below for these notations). $\overline{j}$ denotes the smallest nonnegative residue of $j$ modulo $r$, and $b$ is chosen such that $\overline{ab}=1$.\
Furthermore we have $${\rm index}(Y) = {\rm lcm}\{r=r(Q): \: Q\in {\rm basket} \}$$
A singularity of type $\frac{1}{r}(a,-a,1)$ is a cyclic quotient singularity $\cz^3 / \mu_r$, where $\mu_r$ denotes the cyclic group of $r$th roots of unity in $\cz$, and $\mu_r$ acts on $\cz^3$ via $$\mu_r \ni \epsilon: (z_1,z_2,z_3) \r (\epsilon^a z_1, \epsilon^{-a}z_2,
\epsilon z_3)$$ Reid introduced the term ‘basket of singularities’ in order to point out that the singularities $Q$ of the basket are not necessarily singularities of $Y$, but only ‘fictitious singularities’. However the singularities of $Y$ make the same contribution to $\chi (Y, \h_Y(mK_Y))$ as if they were those of the basket, hence we can also work with the singularities of the basket, which have the advantage that their contributions are usually easier to compute than those of the original singularities. More precisely, one can pass from $Y$ to a variety where the singularities of the basket actually occur by a crepant partial resolution of singularities and then by a flat deformation. For the details cf. [@Re2], p.404, 412.\
For estimating from below the terms $l(Q,m)$ in the Plurigenus Formula, we will need two Propositions due to Fletcher [@Fl]. In those Propositions $[s]$ denotes the integral part of $s \in \rz$.
\[2.6\] $$l( \frac{1}{r}(1,-1,1), m) = \frac{\overline{m} (\overline{m} -1)(
3r+1-2 \overline{m})}{12r} + \frac{r^2-1}{12} [\frac{m}{r}]$$
\[2.7\] For $\alpha, \beta \in \gz$ with $0 \leq \beta \leq
\alpha$ and for all $m \leq [(\alpha +1)/2]$, we have: $$l(\frac{1}{\alpha} (a,-a,1), m) \geq l(\frac{1}{\beta} (1,-1,1),m)$$
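As a small numerical illustration, the correction term $l(Q,m)$ of the Plurigenus Formula can be transcribed directly and checked against the closed form of Proposition \[2.6\]; the following sketch assumes Python 3.8+ (for the modular inverse `pow(a, -1, r)`):

```python
def l_correction(r, a, m):
    """l(Q, m) for a singularity of type (1/r)(a, -a, 1), as in the Plurigenus Formula."""
    b = pow(a, -1, r)  # b with a*b = 1 (mod r)
    return sum((b * k % r) * (r - (b * k % r)) for k in range(1, m)) / (2 * r)

# Sanity check against the closed form of Proposition [2.6] for r = 5, a = 1, m = 3:
# 3*2*(3*5 + 1 - 2*3) / (12*5) + 0 = 1, and indeed
assert l_correction(5, 1, 3) == 1.0
```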
Finally, we give a proof of Maehara’s remark [@Ma3], p.167 which we already mentioned, namely that it is enough to show that the index of the canonical models of the varieties $Y$ can be uniformly bounded from above by a constant only depending on $X$. More precisely, we prove:
\[2.8\] Let $Y$ be a smooth projective 3-fold of general type and $l$ a natural number such that $l$ is an integer multiple of the index $r$ of the canonical model $Y_c$ of $Y$. Then the $(18l+1)$-th pluricanonical map is birational onto its image.
[**Proof:**]{} We could pass from the canonical model $Y_c$ of $Y$ to a minimal model $Y_m$ and then apply Corollary 4.6 of the preprint [@EKL] of Ein, Küchle and Lazarsfeld. But since it might even be easier, we pass directly from $Y_c$ to $Y$ and then apply the corresponding result of Ein, Küchle and Lazarsfeld for smooth projective 3-folds, namely Corollary 3 of [@EKL].\
Since $l$ is a multiple of the index of $Y_c$, $lK_{Y_c}$ is an ample line bundle. Since we are only interested in $Y$ up to birational equivalence, we may assume that $\pi : Y \r Y_c$ is a desingularization. Since the bundle $lK_{Y_c}$ on $Y_c$ is ample, the pulled-back bundle $\pi^*(lK_{Y_c})$ on $Y$ is still nef and big. Hence we can apply Corollary 3 of [@EKL] to this bundle and get that the map obtained from the sections of the bundle $K_Y + 18\pi^*(lK_{Y_c})$ maps $Y$ birationally onto its image. But since by Definition \[2.1\] of canonical singularities every section of the bundle $K_Y + 18\pi^*(lK_{Y_c})$ is also a section of $K_Y + 18lK_Y = (1+18l)K_Y$, the claim follows.
[**Remark:**]{} Notice that for Proposition \[2.8\] we do not need the assumption that the smooth projective 3-folds $Y$ are dominated by a fixed complex variety $X$. This will only be needed to bound the indices of the canonical models $Y_c$ of the $Y$.
Bounding the Index of a Dominated Canonical 3-fold
==================================================
In order to prove Theorem \[1.5\], it is enough to prove the following
\[3.1\] Let $X$ be a fixed smooth complex variety. Then there exists a natural number $m$ depending only on $X$ such that all smooth complex projective 3-folds $Y$ of general type which admit a dominant rational map $f:X \r Y$ belong to ${\cal G}_m$.
Here, as in the introduction, ${\cal G}_m$ is defined as ${\cal G}_m:=\{$ a nonsingular complex projective variety $Y$ : the $m$-th pluricanonical map $\Phi_m: Y \r \Phi_m(Y)$ is birational onto its image and there exists a dominant rational map $f:X \r Y \}
$.\
The rest of this section is devoted to the proof of Theorem \[3.1\]. We denote by $Y_c$ the canonical model of $Y$. Furthermore we may assume without loss of generality that $\pi : Y \r Y_c$ is a desingularization (since we only need to look at the smooth projective varieties $Y$ up to birational equivalence).\
In the first step of the proof we show:
\[3.2\] Under the assumptions of Theorem \[3.1\] there exists an integer constant $C \geq 1$ only depending on $X$, such that for all $Y$ we have $ \chi (Y, \h_Y) = \chi (Y_c, \h_{Y_c}) \leq C$.
[**Proof:**]{} First we get by Hodge theory on compact Kähler manifolds (cf. [@GH], or [@Ii], p.199): $$h^i(Y, \h_Y) = h^0(Y, \Omega_Y^i), \:\: i=0,1,2,3$$ Now by Theorem 5.3. in Iitaka’s book [@Ii], p.198 we get that $$h^0(Y, \Omega_Y^i) \leq h^0(X, \Omega_X^i), \:\: i=0,1,2,3$$ Hence by the triangle inequality we get a constant $C$, only depending on $X$, such that $$| \chi (Y, \h_Y) | \leq C$$ Now by the theorem of Elkik and Flenner (Theorem \[2.3\]) $Y_c$ has only rational singularities, hence by degeneration of the Leray spectral sequence we get that $$\chi (Y, \h_Y) = \chi (Y_c, \h_{Y_c})$$ This finishes the proof of Proposition \[3.2\].
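Written out, the constant produced by this argument can simply be taken to be $$| \chi (Y, \h_Y) | = \Big| \sum_{i=0}^{3} (-1)^i h^0(Y, \Omega_Y^i) \Big| \leq \sum_{i=0}^{3} h^0(X, \Omega_X^i) =: C ,$$ which only repackages the two displayed inequalities above and is not an additional claim.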
In the second step of the proof of Theorem \[3.1\] we show:
\[3.3\] Let $C\geq 1$ be an integer constant, $R := {\rm lcm}(2,3,...,
26C-1)$ and $m_1=18R+1$. Then for all smooth projective complex 3-folds $Y$ of general type with $\chi (Y, \h_Y) \leq C$ we have\
either $Y \in {\cal G}_{m_1}$ or $h^0(Y_c, \h_{Y_c}((13C)K_{Y_c})) \geq 2$.
[**Proof:**]{} We distinguish between two cases: The first case is that the index of $Y_c$ divides $R$. Then applying Proposition \[2.8\] we get that $Y \in {\cal G}_{m_1}$ and we are done. The second case is that the index does not divide $R$. Then in the Plurigenus Formula Theorem \[2.5\] of Barlow, Fletcher and Reid we necessarily have at least one singularity $\tilde{Q}$ in the basket of singularities which is of the type $\frac{1}{r}(a,-a,1)$ with $r \geq 26C$. Now applying first a vanishing theorem for ample sheaves (cf. Theorem 4.1 in [@Fl]), the fact that $K^3_{Y_c} >0$ (since $K_{Y_c}$ is an ample $\raz$-divisor) and then the Propositions \[2.6\] and \[2.7\] due to Fletcher, we get: $$h^0(Y_c, \h_{Y_c}((13C)K_{Y_c}))$$ $$= \chi (Y_c, \h_{Y_c}((13C)K_{Y_c}))$$ $$\geq (1-26C)\chi (Y_c, \h_{Y_c}) + \sum_{ Q \in {\rm basket}} l(Q,13C)$$ $$\geq (1-26C)C + l(\tilde{Q},13C)$$ $$\geq (1-26C)C + l(\frac{1}{26C}(1,-1,1),13C)$$ $$= (1-26C)C + \frac{13C(13C-1)(78C+1-26C)}{312C}$$ $$= \frac{312C^2 - 8112C^3 + 8788C^3 - 507C^2 -13C}{312C}$$ $$= \frac{52C^2 - 15C - 1}{24}$$ $$\geq \frac{36}{24} = 1.5$$
The last inequality holds since $C\geq 1$. Since $ h^0(Y_c, \h_{Y_c}((13C)K_{Y_c}))$ is an integer, this finishes the proof of Proposition \[3.3\].
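The algebraic simplification in the chain of estimates above can also be double-checked symbolically; the following is a minimal sketch which assumes the sympy package is available:

```python
from sympy import symbols, cancel, Rational

C = symbols('C', positive=True)
# l((1/(26C))(1,-1,1), 13C) via Proposition [2.6] with r = 26C, m = 13C (so [m/r] = 0)
l_term = 13*C*(13*C - 1)*(3*(26*C) + 1 - 2*(13*C)) / (12*(26*C))
chain = (1 - 26*C)*C + l_term
target = (52*C**2 - 15*C - 1) / 24
assert cancel(chain - target) == 0            # the simplification carried out above
assert target.subs(C, 1) == Rational(3, 2)    # the final bound 36/24 = 1.5 at C = 1
```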
In the third step of the proof of Theorem \[3.1\] we show:
\[3.4\] Assume that for a smooth projective complex 3-fold $Y$ of general type we have $h^0(Y_c, \h_{Y_c}((13C)K_{Y_c})) \geq 2$. Then $Y \in {\cal G}_{m_2}$ with $m_2=143C+5$.
[**Proof:**]{} Kollar proved that if $h^0(Y, \h_Y(lK_Y
)) \geq 2$ then the $(11l+5)$-th pluricanonical map is birational onto its image (Corollary 4.8 in [@Ko]). So the only thing which remains to be proved is that $h^0(Y_c, \h_{Y_c}((13C)K_{Y_c})) \geq 2$ implies $h^0(Y, \h_{Y}((13C)K_{Y})) \geq 2$. This fact is standard for experts (cf. e.g. [@Re1], p.277, or [@Fl], p.225), but since we noticed in talks about this paper that it does not seem to be generally well known, we want to indicate how one can prove it:\
What we have to prove is that taking linearly independent sections $s_1$, $s_2$ from $H^0(Y_c, \h_{Y_c}(lK_{Y_c}))$ we can get from them linearly independent sections $t_1$, $t_2$ from $H^0(Y, \h_Y(lK_Y))$. We mentioned at the beginning of section 2 that $\h_{Y_c}(lK_{Y_c})$ can also be defined as the sheaf of $l$-fold tensor products of rational canonical differentials on $Y_c$ which are regular on $(Y_c)_{reg}$. But since $Y$ and $Y_c$ are birationally equivalent, from this definition it is immediate that any linearly independent sections $s_1$, $s_2$ from $H^0(Y_c, \h_{Y_c}(lK_{Y_c}))$ can be lifted, namely as pull-backs of (tensor products of rational) canonical differentials with the holomorphic map $\pi$, to linearly independent [**rational**]{} sections $t_1$, $t_2$ of the bundle $\h_Y(lK_Y)$. These lifted sections are regular outside the family of the exceptional prime divisors $\{ E_i \}$ of the resolution $\pi : Y \r Y_c$. We have to show that $t_1$ and $t_2$ are regular everywhere. Since $Y$ is a manifold, by the First Riemann Extension Theorem it is sufficient to show that these sections are bounded near points of the $\{ E_i \}$. In order to show this, choose a natural number $p$, which now may depend on $Y$, such that index($Y_c$) divides $pl$. Then by the definition of canonical singularities (Definition \[2.1\]) the sections $s_1^p$ and $s_2^p$ lift to [**regular**]{} sections $t_1^p$ and $t_2^p$. Hence $t_1$ and $t_2$ have to be bounded near points of the $\{ E_i
\}$, and we are done.
Now the proof of Theorem \[3.1\] is immediate: If we take $m_0 := {\rm
lcm}(m_1,m_2)$, then by Proposition \[3.3\] and Proposition \[3.4\] we have $Y \in {\cal
G}_{m_0}$ for all $Y$ which occur in Theorem \[3.1\].
[9999]{} T. Bandman, [*Surjective holomorphic mappings of projective manifolds*]{}, Siberian Math. J. [**22**]{} (1982), 204–210.
T. Bandman, [*Topological invariants of a variety and the number of its holomorphic mappings*]{}, in: J. Noguchi (Ed.) Proceedings of the International Symposium Holomorphic Mappings, Diophantine Geometry and Related Topics, 188-202. RIMS, Kyoto University, 1992.
T. Bandman and D. Markushevich, [*On the Number of Rational Maps Between Varieties of General Type*]{}, Preprint, 10 pages.
X. Benveniste, [*Sur l’anneau canonique de certaines variétés de dimension 3*]{}, Invent. Math. [**73**]{} (1983), 157-164.
M.M. Deschamps and R.L. Menegaux, [*Applications rationnelles séparables dominantes sur une variété de type général*]{}, Bull. Soc. Math. France [**106**]{} (1978), 279-287.
M.M. Deschamps and R.L. Menegaux, [*Surfaces de type général dominées par une variété fixe*]{}, C.R.Acad.Sc.Paris Ser.A [**288**]{} (1979), 765-767.
M.M. Deschamps and R.L. Menegaux, [*Surfaces de type général dominées par une variété fixe II*]{}, C.R.Acad.Sc.Paris Ser.A [**291**]{} (1980), 587-590.
R. Elkik, [*Rationalité des singularités canoniques*]{}, Invent. Math. [**64**]{} (1981), 1-6.
L. Ein, O. Küchle and R. Lazarsfeld, [*Local Positivity of Ample Line Bundles*]{}, Preprint, 23 pages.
H. Flenner, [*Rational singularities*]{}, Arch. Math. [**36**]{} (1981), 35-44.
M. de Franchis, [*Un teorema sulle involuzioni irrazionali*]{}, Rend. Circ. Mat. Palermo [**36**]{} (1913), 368.
A.R. Fletcher, [*Contributions to Riemann-Roch on projective 3-folds with only canonical singularities and applications*]{}, in: S.J. Bloch (Ed.) Algebraic Geometry, Bowdoin 1985, 221-231. Proc Symp. in Pure Math. [**46**]{}, 1987.
T. Fujita, [*Zariski decomposition and canonical rings of elliptic threefolds*]{}, J. Math. Japan [**38**]{} (1986), 20-37.
P. Griffiths and J. Harris, [*Principles of Algebraic Geometry*]{}. John Wiley and Sons, 1978.
H. Hironaka, [*Resolution of singularities of an algebraic variety over a field of characteristic zero I*]{}, Ann. Math. [**79**]{} (1964), 109-326.
S. Iitaka, [*Algebraic Geometry*]{}. Springer, 1982.
Y. Kawamata, [*On the finiteness of generators of pluricanonical ring for a 3-fold of general type*]{}, Amer. J. Math. [**106**]{} (1984), 1503-1512.
Y. Kawamata, K. Matsuda and K. Matsuki, [*Introduction to the Minimal Model Problem*]{}, in: T. Oda (Ed.) Algebraic Geometry, Sendai 1985, 283-360. Advanced Studies in Pure Math. [**10**]{}, 1987.
S. Kobayashi and T. Ochiai, [*Meromorphic Mappings onto Compact Complex Spaces of General Type*]{}, Invent. Math. [**31**]{} (1975), 7-16.
J. Kollar, [*Higher direct images of dualizing sheaves I*]{}, Annals of Math. [**123**]{} (1986), 11-42.
J. Kollar and S. Mori, [*Classification of three-dimensional flips*]{}, J. of the AMS [**5**]{} (1992), 533-703.
K. Maehara, [*Families of Varieties dominated by a variety*]{}, Proc. Japan Acad. Ser. A [**55**]{} (1979), 146-151.
K. Maehara [*A Finiteness Property of Varieties of General Type*]{}, Math. Ann. [**262**]{} (1983), 101-123.
K. Maehara, [*Diophantine problem of algebraic varieties and Hodge theory*]{}, in: J. Noguchi (Ed.) Proceedings of the International Symposium Holomorphic Mappings, Diophantine Geometry and Related Topics, 167-187. RIMS, Kyoto University, 1992.
S. Mori, [*Classification of Higher-Dimensional Varieties*]{}, in: S.J. Bloch (Ed.) Algebraic Geometry, Bowdoin 1985, 269-331. Proc Symp. in Pure Math. [**46**]{}, 1987.
S. Mori, [*Flip Theorem and the Existence of Minimal Models for 3-folds*]{}, J. AMS [**1**]{} (1988), 117-253.
J. Noguchi, [*Meromorphic Mappings into Compact Hyperbolic Complex Spaces and Geometric Diophantine Problems*]{}, Internat. J. Math. [**3**]{} (1992), 277-289, 677.
M. Reid, [*Canonical 3-folds*]{}, in A. Beauville (Ed.) Algebraic Geometry, Angers 1979, 273-310. Sijthoff and Noordhoff, 1980.
M. Reid, [*Young person’s guide to canonical singularities*]{}, in: S.J. Bloch (Ed.) Algebraic Geometry, Bowdoin 1985, 345-414. Proc Symp. in Pure Math. [**46**]{}, 1987.
P. Samuel, [*Compléments à un article de Hans Grauert sur la conjecture de Mordell*]{}, Publ. Math. IHES [**29**]{} (1966), 311-318.
M. Suzuki, [*Moduli spaces of holomorphic mappings into hyperbolically embedded complex spaces and hyperbolic fibre spaces*]{}, in: J. Noguchi (Ed.) Proceedings of the International Symposium Holomorphic Mappings, Diophantine Geometry and Related Topics, 157-166. RIMS, Kyoto University, 1992.
K. Ueno, [*Classification Theory of Algebraic Varieties and Compact Complex Spaces*]{}, LNM [**439**]{}. Springer, 1975.
M.G. Zaidenberg and V.Ya. Lin, [*Finiteness theorems for holomorphic maps*]{}, Several Complex Variables III, Encyclopaedia Math. Sciences [**9**]{}, 113-172. Springer, 1989.
Gerd Dethloff, Mathematisches Institut der Universität Göttingen, Bunsenstraße 3-5, 37073 Göttingen, Germany\
e-mail: [email protected]\
---
abstract: 'We report on detailed abundances of giants in the Galactic bulge, measured with the HIRES echelle spectrograph on the 10-m Keck telescope. We also review other work on the bulge field population and globular clusters using Keck/HIRES. Our new spectra have 3 times the resolution and higher S/N than previous spectra obtained with 4m telescopes. We are able to derive $\log g$ from Fe II lines and excitation temperature from Fe I lines, and do not rely on photometric estimates for these parameters. We confirm that the iron abundance range extends from $-1.6$ to $+0.55$ dex. The improved resolution and S/N of the Keck spectra give \[Fe/H\] typically 0.1 to 0.2 dex higher than previous studies,[@mr94] for bulge stars more metal rich than the Sun. Alpha elements are enhanced even for stars at the Solar metallicity (as is the case for bulge globular clusters). We confirm our earlier abundance analysis of bulge giants[@mr94] and find that Mg and Ti are enhanced relative to Ca and Si even up to \[Fe/H\]=+0.55. We also report the first reliable estimates of the bulge oxygen abundance. Our element ratios confirm that bulge giants have a clearly identifiable chemical signature, and suggest a rapid formation timescale for the bulge.'
author:
- |
R. Michael Rich and A. McWilliam University of California, Los Angeles, Division of\
Astronomy and Astrophysics, 405 Hilgard Avenue, Los Angeles, CA 90095-1562 Observatories of the Carnegie Institution of Washington, 813 Santa Barbara St., Pasadena, CA 91101
bibliography:
- 'spi.bib'
title: Abundances of Stars in the Galactic Bulge Obtained Using the Keck Telescope
---
Introduction
============
Because of their faintness, reddening, severe crowding, and high metallicity, the stars of the Galactic bulge remained among the last Galactic populations to be studied with high resolution spectroscopy. In the scientific cases for large telescopes, the goal of successfully defining the abundances and chemistry of bulge stars has often figured prominently. Of course, the real driver for studying these stars is not the technical challenge, rather it is their potential to yield insights into the formation of bulges and ellipticals.
Within the last five years, the combination of spectroscopy with the Keck telescopes and imaging with the Hubble Space Telescope has revolutionized the study of galaxies at high redshift. A population of plausible progenitors [@steidel96] to present-day $L^*$ galaxies has been discovered at $z>3$ and a proposed star formation history [@madau96] of the Universe has been sketched out. However, these observations cannot trace the evolution of the $z>3$ galaxies into their present-day counterparts. In many respects, such as luminosity and clustering, they strongly resemble the progenitors of present-day luminous galaxies. It is also possible to constrain the formation time of bulges from observations of galaxies at $z\leq 1$. Recent pixel-by-pixel analysis [@ellis2000] of resolved images of high redshift galaxies with clearly visible bulges apparently shows that at any given redshift, bulges are bluer than the reddest galaxies of elliptical morphology. Unfortunately, this imagery cannot easily distinguish between a late starburst on top of an old population versus a mostly intermediate-age population. So it is valuable to seek other available evidence, such as the ages and abundances of stars in the Galactic bulge.
The exact agreement between HST luminosity functions of old metal rich globular clusters and NTT luminosity functions of the Galactic bulge field[@sergio95] strongly suggests that the bulge formed early and rapidly. HST photometry in a number of different bulge fields also shows that the stars brighter than the oldest turnoff point are foreground stars associated with the disk, not the bulge[@feltz00]. Age constraints from luminosity functions or the luminosity of the main sequence turnoff point, while powerful, are only accurate to (at best) $\approx 1-2$ Gyr. The detailed composition of stars in the bulge does not constrain the absolute age of the bulge. However, it does constrain the timescale for chemical enrichment, and it helps to relate the bulge (or not) to elliptical galaxies. As a larger sample of stars is accumulated, more detailed theoretical inferences about the enrichment history will be possible.
The bulge of the Milky Way is a clearly distinct population, as defined by the classical characteristics of a stellar population: age, abundance, kinematics, and structure. The central 1000 pc of our galaxy is dominated by old, metal rich [@rich88]$^,$[@mr94] stars with very high phase space density. The stellar mass of the bulge is $2\times 10^{10}M_\odot$, roughly 1/3 that of the disk, but it still accounts for a large fraction of the baryonic mass of the Galaxy. The image of the bulge[@hauser90] obtained using the DIRBE instrument on board the COBE satellite dramatically illustrates its distinct nature and its similarity to more distant ellipticals. It is possible to develop a model[@zhao96] that fits the surface brightness in the COBE image, solves Poisson’s equation, and gives stellar orbits that reproduce the observed kinematics of the bulge.
Presently, there is no clear consensus on the ages and formation timescales of bulges in general. The colors of bulges imaged in detail in the optical and IR by HST are consistent with very large ages[@pel99], a result first found in 1969 for the bulge of M31[@sandage69]. On the other hand, the integrated Mg line strengths of bulges are less than those of ellipticals at the same iron line strengths[@proc00], which would argue that bulges might have experienced a less intense and more extended period of star formation than the ellipticals.
How Element Ratios May Constrain the Formation of the Bulge
-----------------------------------------------------------
The motivation for measuring abundance ratios in old stars is that they preserve the fossil record of the early star formation process. Potentially, the initial mass function, star formation rate, and importance of infall or extended star formation at late times can all be recovered from abundance ratios. The material treated briefly below is discussed in more detail elsewhere. [@andy97]$^,$[@mr99] Scenarios for forming the bulge predict a wide range of timescales, from $\sim 10^8$ yr for a violent starburst, to a few Gyr for a massive disk that thickens into a bar. The modeling of observed abundance trends can distinguish among these models.
[*Metallicity:*]{} The fundamental notion of chemical evolution is that other than those light elements produced in the Big Bang, metals are made in supernovae. Because SNe explode in $\sim 10^6$ yr and distribute their metals widely, it is possible to model the process as a simple differential equation (the Simple Model[@ss72] of chemical evolution). In the case of the bulge, the deep potential well and likely violence of the early starburst satisfy the model assumptions, and the abundance distribution fits the Simple Model[@rich90]. The yield is the ratio of the mass of metals produced to the total mass locked up in long-lived stars. In the Simple Model, the yield is the mean metal abundance of the population. The shallower the initial mass function slope (i.e. the more massive stars it forms), the higher the yield.
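For reference, in the closed-box Simple Model with instantaneous recycling (a standard textbook summary rather than anything derived here), the gas metallicity and the stellar metallicity distribution take the form $$Z_{\rm gas} = y \ln (1/\mu) , \qquad \frac{dN_*}{dZ} \propto e^{-Z/y} ,$$ where $y$ is the yield and $\mu$ the gas mass fraction; as the gas is exhausted the mean metallicity of the long-lived stars tends to $y$, which is the sense in which the yield equals the mean abundance of the population.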
[*Alpha Elements:*]{} When the first 200-inch echelle spectra of metal poor stars and globular clusters were obtained[@wall62] it was noted that some even-Z elements (O, Mg, Si, Ca, and Ti) were overabundant by $\sim +0.3$ dex relative to the Solar Neighborhood. These are the so-called $\alpha$-elements, although their actual synthesis is far more complicated than transmutation by successive capture of helium nuclei in massive stars. The widely accepted explanation for these overabundances[@tinsley79]$^,$[@wheeler89] is that massive stars (Type II supernovae) dominated the enrichment at early times; models[@ww95] indicate that the ejecta of these SNe are very rich in alpha elements. Although type I SNe produce the iron peak elements, their contribution to the iron abundance becomes important only after $\sim 1$ Gyr, as time is required for the formation of a prior generation of white dwarfs.
The diagnostic value of trends of $\rm [\alpha/Fe]$ vs \[Fe/H\] extends beyond their use as a crude clock, as has been suggested for the bulge[@matt90]. If the IMF is dominated by massive stars, the alpha elements can be enhanced by more than +0.3 dex[@ww95], while a high star formation rate will result in stars of Solar iron abundance having an alpha-enhanced composition, as appears to be the case for the bulge. Finally, although Ti is observed to be elevated with the alpha elements, the nucleosynthesis calculations[@ww95] predict a low yield of Ti in massive stars; this remains a problem.
[*Neutron-Capture Elements:*]{} The two dominant modes of neutron capture also offer the potential to serve as clocks, and as a fossil record of early star formation. Supernovae (probably Type II) are believed to be the site of the r-process[@wh92], while the helium burning shells of AGB stars are suspected as the site of s-process production, as was shown in early calculations[@iben75]. The Ba/Eu ratio is especially useful because it is sensitive to the r-process fraction of heavy elements. However, practical use of this diagnostic in the bulge is somewhat complicated by the lack of weak Ba lines, although La and Nd offer excellent possibilities as s-process indicators. Depending on whether \[Ba/Eu\] as a function of \[Fe/H\] approaches the s-process or the lower r-process value, one can infer either a disk-like or halo-like (Type II SN ejecta dominated composition) star formation history.[@andy98] In principle, evidence for r-process nucleosynthesis indicates the presence of enrichment due to Type II SNe which could be due either to a rapid burst of star formation or a shallow IMF. In the bulge, we hope to use \[Ba/Eu\] and other heavy element diagnostics to test the hypothesis[@wyse92] that the bulge formed from gas initially enriched by the astration of the halo. The heavy elements have tremendous potential to constrain the enrichment timescale (and stellar masses responsible) in great detail, especially in the difficult 1-5 Gyr regime[@busso99]. The production of stable isotopes of some s-process elements such as Rb turns out to be very sensitive to the temperature of the helium burning shells of AGB stars.
The derivation of abundances from the equivalent widths of the lines of heavy elements is done with caution. Each absorption line is split into multiple sub-components by nuclear hyperfine splitting; failure to account for this effect can lead to serious errors in the abundances.
Before turning to a discussion of our results, we point out that our program would not have been possible without the HIRES echelle spectrograph [@vogt94] as well as the Keck telescopes. Just now, in the year 2000, we are seeing the successful first light of UVES at the VLT, and HDS at Subaru. HIRES paved the way for these successful instruments, at a time when the operational success of such an instrument on a 10m telescope was far from guaranteed.
Prior Work With Keck
====================
The first major effort on high resolution spectroscopy of bulge giants was that of McWilliam & Rich (1994)[@mr94]\[MR94\], which represents the limit of what may be accomplished with 4m-class telescopes. This work showed that the mean iron abundance in the bulge is $-0.25$ dex, not the $+0.3$ dex found from the early low resolution spectroscopy[@rich88]. As we discuss later, the new Keck data may once again revise the abundance scale somewhat upwards. MR94 also found that the alpha elements behave in a peculiar manner. Mg and Ti are enhanced up to Solar metallicity, while Ca and Si follow a more disk-like enrichment trend. Also, MR94 found several stars with enhancements of the largely r-process element Eu, which is thought to be produced by type II SNe, consistent with the Mg over-abundances. One aim of the new 10m science is to test these findings.
MR94 found that the neutron-capture elements Ba, Y, Zr scale approximately with Fe in the solar ratios. Although these elements are also made by neutron-capture in massive stars, the bulk of the solar composition is from low-mass AGB star nucleosynthesis. Rapid bulge enrichment by massive stars would likely have excluded low-mass AGB stars from contributing s-process elements. A constraint on the bulge formation timescale is possible if the detailed heavy-element abundance patterns can be used to identify the role of low-mass AGB nucleosynthesis.
Bulge Field Stars
-----------------
The CTIO 4m data were S/N=40 and R=17,000, while the HIRES spectra range from R=45,000 to 60,000 for the most metal rich stars. The first goal after MR94 was to verify the surprising result that the most metal rich bulge giants have \[Fe/H\]=+0.44. As we mentioned earlier, Castro et al. (1996)[@castro96] analysed the spectrum of the bulge giant BW IV-167 using 3 different methods: spectrum synthesis, curve of growth, and classical equivalent width analysis using the spectrum synthesis code MOOG[@sneden73]. The analysis was hampered by the S/N of the spectrum available, but still gave \[Fe/H\]=+0.47, and showed that the most metal rich bulge giant has the same line strength as the canonical metal rich disk giant, $\mu$ Leonis. Castro et al. largely confirmed the high end of the bulge metallicity scale found from the 4m data. Even considering the low S/N of these early data, the result is important as it was the first Keck spectroscopy of a bulge giant.
Echelle spectroscopy of bulge main sequence stars at $V>20$ would appear to be beyond the grasp of the present generation of 8-10m telescopes. However, these stars are occasionally magnified by factors of 10 or more by microlensing events, and current surveys identify rising events with enough regularity that one can confidently schedule observing runs in the anticipation that amplified stars will be available to observe. Minniti has acquired Keck/HIRES spectroscopy of one such event, in which the microlensing boost enhanced the effective diameter of the Keck telescope to 15m [@min98]. The capability to measure a dwarf star gives the first Li abundance constraint in the bulge; the Li abundance of $A(Li)=2.25\pm0.25$ is slightly below that of the Hyades ridgeline. Although no conclusions can be drawn from this single measurement, the technique is important for two reasons. First, Li is of course destroyed in the course of stellar evolution. While Li rich giants are known (even in the bulge; our Keck spectra confirm the 180mA Li line found by MR94 in BW I-194), the source of Li in giants is widely thought to be nuclear reactions in the envelope and is not primordial. Second, red giants in globular clusters are established to undergo deep mixing, during which nuclear transmutation of certain alpha elements (O and Mg, but not Si, Ca, and Ti) may occur[@kraft94]. A detailed abundance analysis of stars in NGC 6528[@carretta00] finds evidence for deep mixing, even in this cluster of approximately Solar metallicity. While deep mixing does not appear to affect the abundances of halo field giants, it is important to obtain spectra of dwarfs in the bulge to be certain that deep mixing is not affecting the derived abundances.
With microlensing surveys continuing, one may anticipate that the use of the microlens boost technique will be of increasing importance. However, one must obtain excellent spectra for each case, as once the microlensing event concludes, there is no opportunity to repeat the high-dispersion spectroscopy until the 30m - 100m telescopes of the future become available.
Bulge Globular Clusters
-----------------------
As observations have improved, our view of the globular cluster system toward the Galactic Center has changed. Early studies of the kinematics supported association of these metal rich clusters with a disk-like system [@arm89]. As larger samples of these obscured globular clusters were studied, it became apparent that the kinematics of these clusters more closely resemble the bulge stars;[@min95] recent kinematic studies uphold this view[@cote99]. Considering a new enlarged sample of these clusters, with distances, [@barbuy98] find that their spatial distribution and abundance distribution both follow the light of the Galactic bulge.
In contrast to the wide abundance range in the field, the bulge globular clusters are simple stellar populations, that is, they have a very narrow range in the ages and abundances of their constituent stars. Therefore, abundance analysis of these clusters is especially valuable in understanding the more distant stellar populations in bulges and ellipticals which presently may only be studied in their integrated light. In these metal rich globular clusters, the stars at the tip of the red giant branch are so blanketed by TiO that their $V$ band magnitudes are fainter than those of the red clump. The giant branch thus has the form of an arc; even the $I$ band suffers some blanketing. The same giant branch morphology is seen in the field population of the bulge [@rich98]. Population synthesis models of stellar populations must include the correct giant branch. Because many stars on first ascent have strong TiO bands, the overall impact is to increase the TiO line strength in stellar populations. TiO bands are found throughout the spectrum, including overlapping the very important MgH feature at 5170A (the basis of the Faber $\rm Mg_2$ index). Excessively strong TiO in the giants potentially could contribute to a spurious measurement of enhanced Mg in elliptical galaxy populations. It is therefore very important to understand the underlying composition that gives rise to these descending giant branches, and the simple stellar populations of metal rich globular clusters are ideal for this.
The earliest abundance study[@barbuy99] of two giants in NGC 6553 relied on spectra from the ESO 3.6m telescope. NGC 6553 has the classic descending giant branch arc, indicating near-solar metallicity; it lies 6kpc from the Sun and has the same turnoff to HB magnitude difference as is found in the extreme halo[@sergio95]. The initial results[@barbuy99] are surprising: \[Fe/H\]=$-0.55$ and $\rm [\alpha/Fe]=+0.6$ for a total $Z$ of Solar or greater. If correct, these findings would require a revision in our view of nucleosynthesis in metal rich populations, including elliptical galaxies.
However, a different result[@cohen99]$^,$[@carretta00] has been found from spectra obtained using Keck/HIRES. The greater aperture of Keck permits spectroscopy of hotter stars on the red horizontal branch. A team led by Cohen and Gratton analyze spectra of 5 RHB stars in NGC 6553 and 4 giants in NGC 6528. They find \[Fe/H\]=$-0.16$ and \[Ca/Fe\]$\approx +0.3$ for NGC 6553[@cohen99], and \[Fe/H\]=$-0.13$ with a similar alpha enhancement in NGC 6528 [@carretta00]. In contrast to our new HIRES spectroscopy of bulge giants which we report below, the effective temperature and gravities are derived from photometry and the distance to the clusters. However, the basic checks such as trend of iron abundance with excitation potential show that this has been a reasonable approach. In NGC 6528, Carretta et al. find star-to-star variations in O and Na are reminiscent of the deep mixing effects noted in other clusters. In particular, O and Na are anti-correlated. These results show that the two bulge clusters have composition and abundance similar to that of the bulge field stars at the same iron abundance.
The formation of globular clusters is clearly observed at the present epoch in merging galaxies[@whitmore99]. A simple analytic model[@fall00] in which globular clusters are tidally limited and experience disk and bulge shocking can account for the truncated mass distribution seen amongst the old Galactic globular clusters at the present time. Very likely, NGC 6528 and 6553 are survivors of the bulge’s ancient globular cluster population left over from its formation.
As the S/N of data improve, the derived abundances of metal rich stars frequently increase because the continuum is defined more clearly. The Keck’s aperture also enabled Cohen et al. to use red horizontal branch stars some 700K hotter than the 4000K cool red giants observed by Barbuy et al. Nonetheless, more work is called for on the bulge clusters and Keck, VLT, and Subaru will contribute toward this effort in the next year.
New Results for the Bulge K Giants from Keck/HIRES
==================================================
We began our Keck/HIRES spectroscopy of bulge giants in August of 1998, with the aim of obtaining high S/N, high resolution spectra of 25 bulge giants with Keck/HIRES. Initially, we are reobserving a number of stars from our earlier study[@mr94]; these stars are located in Baade’s Window, a region of relatively low extinction in the Galactic bulge some 500 pc south of the nucleus, at $l=0^o,b=-4^o$. At a declination of $-30^o$, the field is accessible from Mauna Kea for about 4 hours per night. For all but the most metal-rich stars a 0.86 arcsec slit is used, giving $R\sim 45,000$. For the very metal rich stars I$-$039 and IV$-$167 we used a 0.57 arcsec slit to obtain $R\sim 60,000$. The data have been reduced using MAKEE, written by Tom Barlow at Caltech. This code has enabled us to speedily reduce these otherwise very complicated data. After the continuum has been defined, the program GETJOB[@andy95] is run to semi-automatically fit all measurable lines with gaussian profiles, to obtain equivalent widths, which are then input to the MOOG spectrum synthesis code[@sneden73] using the Kurucz [@kurucz92] 64 layer model atmospheres. Ultimately, we will synthesize small regions of spectra around each element of interest. Fig. \[fig:bw1039\] shows part of the spectrum near the forbidden O I lines, for one of our faintest, most metal rich stars, BW I-039. In Fig. \[fig:mgplot\], we illustrate how the wide abundance range present in the bulge affects the spectra. Spectrum synthesis of most features will be required for stars exceeding the Solar iron abundance.
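As an illustration of what the equivalent-width step amounts to (this is not the GETJOB code itself, just a minimal sketch, and the initial-guess values are placeholders):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_absorption(wl, depth, center, sigma):
    """Continuum-normalized profile of a single absorption line."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def equivalent_width(wl, flux, center_guess):
    """Fit a Gaussian to a normalized line and return its equivalent width
    in the wavelength units of `wl` (for a Gaussian, EW = depth * sigma * sqrt(2*pi))."""
    p0 = (0.2, center_guess, 0.1)  # placeholder guesses for depth, center, sigma
    (depth, center, sigma), _ = curve_fit(gaussian_absorption, wl, flux, p0=p0)
    return depth * abs(sigma) * np.sqrt(2.0 * np.pi)
```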
The higher resolution and greater wavelength coverage of the HIRES spectra offer many advantages over the MR94 study. In particular, the problems of line blending and the lack of continuum regions is greatly improved; this is especially important for the derivation of the oxygen abundance from the \[O I\] lines.
Even so, analysis of these spectra are complicated by the well-known problems of bulge stars. The alpha elements, especially Mg, are an important source of electrons in the atmospheres of K giants. Consequently, if \[Mg/Fe\]=+0.4, the $H^-$ continuous opacity increases relative to that for Solar composition. Therefore, use of simple model atmospheres with a scaled Solar composition will give element abundances that are spuriously low. If bulge giants contain excess CN (and this is very likely the case for the metal rich stars) the atmosphere boundary temperature may be reduced enough to cause serious deviations from the temperature structure of solar-neighborhood giants, and so affect the abundance derived from spectrum synthesis programs which adopt solar composition model atmospheres. One must use a grid of realistic model atmospheres, but also one must derive as many of the stellar parameters as possible from the spectra themselves.
The spectra have such good resolution and S/N that we are able to determine the gravity, microturbulence, effective temperature, and \[Fe/H\] in a self-consistent analysis from the Fe I and Fe II lines. This is arguably more reliable than relying upon photometric measures for $T_{eff}$ and $\log g$ (e.g. MR94) because of the classic problems that have plagued analysis of bulge stars: large and spatially variable reddening, uncertainty in distance, and at the metal rich end, blanketed broad-band colors. Temperatures and microturbulent velocities were obtained by forcing the iron abundance to be independent of excitation potential (Fig. \[fig:exab\]) and equivalent width, respectively; the atmosphere gravities were adopted by requiring agreement between Fe II and Fe I abundances.
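Schematically, the excitation-balance step looks like the following sketch; `abundances_for_teff` is a hypothetical callback standing in for a rerun of the line-by-line Fe I analysis (e.g. with MOOG) at a trial temperature, and the temperature bounds and sign convention are placeholders rather than the values used in our analysis:

```python
import numpy as np

def excitation_slope(excitation_potential, line_abundances):
    """Slope of derived Fe I abundance versus lower excitation potential (eV);
    a non-zero slope signals an incorrect effective temperature."""
    slope, _intercept = np.polyfit(excitation_potential, line_abundances, 1)
    return slope

def tune_teff(abundances_for_teff, teff_lo=3800.0, teff_hi=5200.0, tol=1.0):
    """Bisect on T_eff until the excitation slope vanishes."""
    while teff_hi - teff_lo > tol:
        teff = 0.5 * (teff_lo + teff_hi)
        chi, ab = abundances_for_teff(teff)
        # A positive slope here is taken to mean the trial temperature is too low;
        # the sign convention depends on the line list and may need flipping.
        if excitation_slope(chi, ab) > 0:
            teff_lo = teff
        else:
            teff_hi = teff
    return 0.5 * (teff_lo + teff_hi)
```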
We find that the iron abundances are typically 0.1 to 0.2 dex higher, an average of 0.11 dex greater, in a sample of 6 stars in common with the MR94[@mr94] sample. For example, I-194 was $-0.26$ dex, and we find \[Fe/H\]=$-0.03$. BW IV-072 was $-0.05$ dex in MR94, but the Keck spectra give \[Fe/H\]=+0.25. At the metal rich end, BW IV-167 was found in MR94 to be +0.44 dex, and this was confirmed in Castro et al. (1996). We find \[Fe/H\]=+0.54 dex for IV-167 and $+0.55$ dex for I-039, one of the most metal rich stars found in the Rich (1988) survey of 88 bulge K giants. It is noteworthy that Castro et al. [@castro96] took 3 approaches to the abundance analysis, and that the spectrum synthesis method gave \[Fe/H\]=+0.55 for BW IV-167. The analysis of our small sample suggests that the mean iron abundance of the bulge may increase from the $-0.25$ dex of MR94 to $-0.14$ dex. The upper limit of iron metallicity appears to be at \[Fe/H\]=+0.55, but obviously, we need a larger sample of stars at the metal rich end.
The increase in iron abundance derived from the higher quality spectra comes mainly from two sources: Most important is that our new continuum levels are higher, whereas MR94 could not detect the presence of weak line blanketing (mostly from CN). In the MR94 study, the CN blanketing had the effect of increasing derived abundances for the weak Fe I lines. This resulted in a higher microturbulent velocity for MR94, to force the stronger Fe I lines into agreement with the weak lines; the Keck spectra yield microturbulent velocities 0.57 km/s lower than MR94. The second factor is the adopted gravities: The photometric gravities of MR94 were lower than the present spectroscopic values by an average of $\sim 0.2$ dex, and in some cases, by as much as 0.6 dex. The error analysis of MR94 showed that a +0.30 dex increase in gravity gives \[Fe/H\] higher by +0.05 dex.
Is this the final answer on the iron abundance? We are beginning to feel more confident, but we do plan to measure a large number of weak iron lines, to confirm our findings. Even at $R=60,000$, the continuum is not always clearly found in the most metal-rich stars. We have not yet synthesized all 8,000 CN lines (as was done in MR94) but we will do so. Coincidences of some Fe I lines with the occasional CN line may bring the abundances of the most metal rich stars down slightly. However, we believe that we are converging on the correct answer, finally.
Relative Abundances of the Alpha Elements
-----------------------------------------
We now turn to the alpha elements, for which we report preliminary abundances. The final abundance analysis will employ spectrum synthesis for each line region. Returning to Figure 2, one can inspect the Mg lines to see why this is necessary, even for BW I-194, which has Solar abundance. Our abundances are based generally on 2-15 lines each of Ca, Si, and Ti, but we have only 1-3 usable Mg lines. The following results, which are based on just the equivalent width measurements, should be taken with caution. The oxygen abundances are from the 6300.3A forbidden line, but we consider these to be quite preliminary. We have not yet performed the requisite CNO equilibrium calculations, since we have not measured the carbon abundances in these stars.
However, interesting trends are beginning to emerge in Fig. \[fig:ratios\]. MR94 found a peculiar behavior among the alpha-elements, that Ca and Si follow trends somewhat characteristic of the Solar neighborhood, while Mg and Ti are enhanced as would be expected for a stellar population enriched in a short timescale starburst. Analysis of our first 8 stars appears to confirm MR94. In fact, the effect appears to be even more extreme at the metal rich end, with O joining Ca and Si. One interesting new result is that two stars have \[O/Fe\]=+0.3 at \[Fe/H\]$\approx 0$. This result was expected for the bulge[@matt90], and is now tentatively confirmed. Disk stars[@edvard93] have Solar oxygen abundance at \[Fe/H\]=0.
It is very premature to even speculate on the cause of the peculiar trends among the alpha elements. However, the source of enrichment is supernovae, and we can turn to models of supernova yields[@ww95] in search of an explanation. The production factors in Fig. \[fig:yields\] were calculated as follows. First, the total mass of each element produced in the various SN models is the sum of the mass of all the stable isotopes of that element. Dividing the mass of each element by the mass of the SN ejecta gives the mass fraction of that element. The production factor for an element is the mass fraction of that element divided by the mass fraction of that element in the Sun. The production factors approximately indicate the enrichment of the ejecta relative to Solar composition. Fig. \[fig:yields\] shows that O and Mg should be preferentially produced in the most massive SNe, while Si and Ca are produced more copiously in 15-25$M_\odot$ stars. All of the models produce about the same amount of Ti; presently, there are no SN nucleosynthesis calculations which are consistent with the observation that Ti is enhanced in the Galactic halo and bulge populations. Yet enhancement of Ti is certainly real and is seen, for example, in the metal rich globular cluster (\[Fe/H\]=$-0.79$) M71[@sneden94] at the level of +0.5 dex. The evident behavior of Ti as an alpha element remains a problem in the modeling of supernova yields.
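The recipe for the production factors reduces to a one-line calculation; in the sketch below the numbers in the usage comment are placeholders for illustration only, not values taken from the supernova models:

```python
def production_factor(element_ejecta_mass, total_ejecta_mass, solar_mass_fraction):
    """Enrichment of the SN ejecta in one element relative to Solar composition:
    (mass fraction of the element in the ejecta) / (its mass fraction in the Sun)."""
    return (element_ejecta_mass / total_ejecta_mass) / solar_mass_fraction

# Purely illustrative placeholder numbers (not from the Woosley & Weaver models):
# 0.1 M_sun of Mg in 13 M_sun of ejecta, with a Solar Mg mass fraction of ~7e-4,
# corresponds to a production factor of about 11.
print(production_factor(0.1, 13.0, 7e-4))
```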
The incorporation of SN yields and star formation rates into chemical evolution models gives increasingly detailed predictions of abundance trends; the latest of these efforts[@matt99] argues for a bulge enrichment timescale of $\sim 0.5$ Gyr. However, the physical constraint on the formation timescale is the point at which Type I SNe begin to contribute the bulk of iron production, which depends on the as yet unknown mechanism for Type I SNe.
Summary
=======
In contrast to the well known achievements in the high redshift universe, the impact of Keck on stellar abundances is less widely known, yet significant. Keck/HIRES spectroscopy has placed the abundance scale of the bulge on a secure footing. We have just begun to tap the potential information in these spectra. Prior efforts at measuring the oxygen abundance in the bulge from data obtained on 4m class telescopes were ineffective. For the first time, we are beginning to see emerging some clear trends in oxygen as a function of iron abundance. The abundance range, and puzzling element trends found by McWilliam & Rich (1994)[@mr94] are confirmed.
Two metal rich globular clusters toward the bulge have also been the subject of a major campaign with HIRES.[@cohen99]$^,$[@carretta00] NGC 6553 and 6528 have been found to have Solar metallicity with the alpha elements of O and Ca enhanced. The compositions of their stars are precisely those of bulge field giants at the same metallicity. The formation of the proto-bulge probably proceeded much as is observed in starburst galaxies today, with the production of numerous star clusters, a few of the more luminous of which are observed to survive to the present day.
As spectroscopy of fainter stars becomes feasible, enrichment trends are now available for new stellar populations, such as dwarf spheroidal galaxies. As more high resolution spectra from large telescopes are analyzed, these trends may become valuable in distinguishing the formation histories of stellar populations. The Sagittarius dwarf spheroidal galaxy (a tidally disrupted dwarf galaxy lying in the direction of the bulge) is the only dwarf companion of the Milky Way that contains stars as metal rich as the Sun. One might speculate that the bulge could have been built from the shards of a few such disrupted systems, and the presence of Solar metallicity stars in the Sgr dwarf strengthens this idea. However, the Galactic bulge and disk populations are dramatically different from the Sgr dwarf stars, which have subsolar Ca and Si abundances[@smecker99] at \[Fe/H\]=0. The trends of Mn with \[Fe/H\] and \[Ba/Y\] with \[Fe/H\] are even more different between the bulge and the Sgr dwarf[@msh00], and it is possible to explain these differences as being caused by early, rapid enrichment in the bulge. The origin of the metal rich population in the Sgr dwarf is an interesting problem in chemical evolution, given the low mass of that galaxy and its encounter with the Milky Way. We can pretty much rule out, however, that the metal rich population in the Sgr dwarf was somehow captured from the bulge, or that Sgr was once a much larger galaxy that enriched as quickly as the bulge did.
One may also compare the bulge composition to metal rich dwarf stars in the Solar neighborhood, which are $\approx 10$ Gyr old and reach the same high metallicities (\[Fe/H\]=+0.55). High resolution spectroscopy[@castro97]$^,$[@feltz98] of these stars shows them to clearly have disk-like compositions: Mg, Ti, and O abundances are at approximately Solar values with no clear trends. In contrast, the old open cluster NGC 6791 has \[Fe/H\]=+0.4 and enhanced Ca[@peterson98]. Chemical enrichment reaching high iron abundance evidently does not proceed the same way in all environments. Based on the compositions of stars, one clearly cannot produce the bulge out of the disintegrated remnants of systems like the Sagittarius dwarf spheroidal.
Qualitatively, the abundance pattern in the bulge strongly suggests rapid, early enrichment, consistent with the predictions of chemical evolution models[@matt99]. The notion of rapid enrichment agrees with other studies of the age of the stellar population[@sergio95] in the bulge. The distinct nature of the bulge composition gives us confidence that abundance ratios offer a powerful diagnostic tool that may help to decipher the fossil record of galaxy formation.
Many open questions remain. The bulge has a bar-like morphology, and the most successful scenario[@merritt94] for forming a bar-like bulge requires dynamical instabilities occurring in a pre-existing disk. However, N-body simulations of bars indicate that they are unlikely to survive for a Hubble time, yet the Galactic bulge is extremely old.[@sergio95] Further, the extreme stellar density near the nucleus is evidence for strong dissipation being a factor in the formation of the Galactic bulge.
If the bulge abundance ratios favor a top-heavy IMF and very rapid formation, one must infer that ellipticals enrich more rapidly (and perhaps with a heavier IMF) because their $\rm Mg_2$ indices at a given $<\rm Fe>$ line strength are so much higher than those of the bulges; in fact spiral bulges lie near the lower range in Mg index in these diagrams[@worthey92]$^,$[@proc00]. Before addressing these questions, and the challenge of relating the local data to high redshift observations, we plan to increase our sample size and explore the behavior of different atomic species. However, the study of the Milky Way bulge stars (and eventually, perhaps, individual stars in the bulge of M31) does have promise in illuminating the chemical evolution of ellipticals.
Looking Towards the Future
--------------------------
This year, two new powerful high dispersion spectrographs come on line. At the VLT, UVES has already passed science verification and has produced beautiful data. The HDS spectrograph at Subaru is just about to see first light. Fiber feeds to UVES will enable the acquisition of as many as 8 stars in a single exposure covering all orders, or spectroscopy of over 100 stars in a single echelle order. The latter capability will be enjoyed by the new echelle spectrograph that will be commissioned next year on the Magellan I (Baade) telescope.
On Keck, the NIRSPEC infrared spectrograph can reach $R=30,000$ in the near-IR, and places old giants in the Galactic center within reach. We plan to extend our abundance studies to the field and cluster stars of the Galactic center in the next few years.
The hard reality remains that analysis of the data will still be time consuming. For metal rich stars, it is clear that even at $R=60,000$ we require a full spectrum synthesis before we can feel completely secure in our results. It will be a challenge to keep up with the flood of new data in the coming years. This situation should be an inspiration to observers and theorists alike, as we enter these unprecedented times.
Acknowledgments
===============
The entire Keck/HIRES user community owes a great debt to Jerry Nelson, Gerry Smith, and Steve Vogt, and all of the many people who have contributed to make Keck and HIRES a reality. We are also grateful to the W.M. Keck foundation and to its late president, Howard Keck, for their visionary gift that made Keck the first of a great generation of 8-10m telescopes. We are also grateful to Tom Barlow for his MAKEE echelle extraction software, which made possible the rapid reduction of these data. The work of Andy McWilliam was partially supported by NSF grant AST-96-18623. RMR acknowledges partial support from grant GO-7891 from the Space Telescope Science Institute. RMR is also grateful to Mark Morris, Bob Kraft, Laura Ferrarese, and Pat Coté for critical reading of the final manuscript, and to J. Horn for advice in formatting the manuscript.
---
abstract: 'We review the inclusion of dark energy into the formalism of spherical collapse, and the virialization of a two-component system, made of matter and dark energy. We compare two approaches in previous studies. The first assumes that only the matter component virializes, e.g. as in the case of a classic cosmological constant. The second approach allows the full system to virialize as a whole. We show that the two approaches give fundamentally different results for the final state of the system. This might be a signature discriminating between the classic cosmological constant which cannot virialize and a dynamical dark energy mimicking a cosmological constant. This signature is [*[independent]{}*]{} of the measured value of the equation of state. An additional issue which we address is energy non-conservation of the system, which originates from the homogeneity assumption for the dark energy. We propose a way to take this energy loss into account.'
address:
- ' Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0WA, UK [*[[email protected]]{}*]{}'
- ' Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK [*[[email protected]]{}*]{}'
author:
- Irit Maor and Ofer Lahav
title: On virialization with dark energy
---
Introduction {#intro}
============
The top hat spherical collapse formalism dates back to Gunn and Gott [@gg]. In addition to its beautiful simplicity, it has proven to be a powerful tool for understanding and analysing the growth of inhomogeneities and bound systems in the Universe. It describes how a small spherical patch of homogeneous overdensity decouples from the expansion of the Universe, slows down, and eventually turns around and collapses. One assumes that the collapse is not completed into a singularity, but that the system eventually virializes and stabilizes at a finite size. The definition of the moment of virialization depends on energy considerations. The top hat spherical collapse is incorporated, for example, in the Press-Schechter [@ps] formalism. It is therefore widely used in present day interpretation of data sets.
For an Einstein-de Sitter Universe (EdS), i.e. a Universe with $\Omega_m=1$ and $\Omega_{\Lambda} = 0$ that is composed strictly of non relativistic dust, there is an analytical solution for the spherical collapse. The ratio of the final, virialized radius to the maximal size at turnaround of the bound object is $R_{vir}/R_{ta}=\frac{1}{2}$. The situation becomes more complicated and parameter dependent when one considers a component of dark energy. This was a subject of numerous studies [@llpr; @ws; @is; @wk; @bw; @zg; @mb]. Lahav [*et al*]{} (LLPR) [@llpr] generalized the formalism to a Universe composed of ordinary matter and a cosmological constant. In this case the cosmological constant is ‘passive’, and only the matter virializes. This leads to $R_{vir}/R_{ta}< \frac{1}{2}$. This scenario also corresponds to the dynamics implemented in $N$-body simulations for the concordance $\Lambda CDM$ model, i.e. the cosmological constant only affects the time evolution of the scale factor of the background Universe. Wang and Steinhardt (WS) [@ws] included quintessence with a constant or slowly varying equation of state. Battye and Weller (BW) [@bw] included quintessence in a different manner to WS, taking into account its pressure. Mota and Van de Bruck (MB) [@mb] considered spherical collapse for different potentials of the quintessence field, and checked what happens when one relaxes the common assumption that the quintessence field does not cluster on the relevant scales.\
In this work we wish to review the inclusion of a cosmological constant and quintessence into the formalism of spherical collapse. Adding dark energy creates a system with two components - the matter and the additional fluid. Most existing works [@llpr; @ws; @bw; @zg] look at the virialization of the matter component (luminous and dark), which feels an additional potential due to the presence of the dark energy. With this procedure, one implicitly assumes that the dark energy either does not virialize, or does so separately from the matter component. MB, on the other hand, included the additional fluid in the virialization equations, the assumption here being that all of the system’s components virialize together. However, they did not remark either on the difference in physical understanding of the system between their approach and the one common in the literature, or on the case of the cosmological constant. Our aim here is to critically contrast the two approaches - the assumption that the dark matter component virializes separately (as in LLPR, WS, and BW), and the assumption that the system virializes as a whole (as in MB). We wish to consider the meaning of including or not including the additional fluid into the virialization, and point out a few puzzles.
A second issue which we will address here is the use of energy conservation in order to find the condition of virialization. Assuming that the quintessence field does not collapse with the mass perturbation but stays homogeneous as the background means that the system must lose energy as it collapses. Yet, energy conservation between turnaround and virialization is assumed. This inconsistency arises for quintessence fields with equation of state $w\neq -1$. Energy is conserved with a cosmological constant ($w=-1$), for reasons that will be discussed later on. We will propose a way to take into account this energy loss for quintessence, and introduce a correction to the equation that defines the final virialized radius of the system.\
The inclusion of dark energy in the virialization process changes the results in a fundamental manner. As we will show, the ratio of final to maximal size of the spherical perturbation is [*[larger]{}*]{} if the dark energy is part of the virialization.
While the results we will show are of the cosmological constant or quintessence with a constant $w$, the methods we use are applicable to a time dependent equation of state, as well as to models in which quintessence is coupled to matter [@qc]. We limit the discussion here to $w\geq-1$. While an equation of state which is more negative than $-1$ is observationally interesting, the physical interpretation of it is unclear, and beyond the scope of this work. We assume throughout that the background is described by a flat, FRW metric, with two energy components - the matter and the dark energy.
The paper is organized as follows. In section \[1cs\] we give the general picture of how one calculates the point of virialization of a single component system, and define the relevant fundamental quantities. In section \[2cs\] we consider the case of a two component system. Section \[cq\] reviews the case of a clustered quintessence. In section \[g\] we examine the transition from clustered to homogeneous quintessence and in section \[cc\] we examine the transition toward $w=-1$. We summarize and conclude in section \[conclusions\].
Virialization of a single component system {#1cs}
==========================================
The spherical collapse provides a mathematical description of how an initial inhomogeneity decouples from the general evolution of the Universe, and expands in a slower fashion, until it reaches the point of turnaround and collapses on itself. The mathematical solution gives a point singularity as the final state. Physically though, we know that objects go through a virialization process, and stabilize towards a finite size.
Since virialization is not ‘built into’ the spherical collapse model (see however [@pad]), the common practice is to [*[define]{}*]{} the virialization radius as the radius at which the virial theorem holds, and the kinetic energy $T$ is related to the potential energy $U$ by $T_{vir}=\frac{1}{2}(R ~\partial
U/\partial R)_{vir}$. Using energy conservation between virialization and turnaround (where $T_{ta}=0$) gives $$\begin{aligned}
\left[ U+\frac{R}{2}\frac{\partial U}{\partial R}
\right]_{vir} & = & U_{ta} \label{ec} ~.\end{aligned}$$ Equation (\[ec\]) defines $R_{vir}$. Thus in order to calculate the final size of a bound object, we need to know how to calculate the potential energy of the spherical perturbation, and to use energy conservation between turnaround and the time of virialization. We discuss later the case where energy is not conserved, and how to account for it. For an EdS Universe, $U=-\frac{3}{5}GM^2/R$ ($M$ is the conserved mass within the spherical perturbation) and $T_{vir}=-\frac{1}{2}U$, so the ratio of final to maximal radii of the system is $x=R_{vir}/R_{ta}=\frac{1}{2}$.\
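For the EdS case the factor of $\frac{1}{2}$ follows in one line; as a worked step, using only the $U\propto R^{-1}$ scaling of the potential energy above, $$\begin{aligned}
\left[U+\frac{R}{2}\frac{\partial U}{\partial R}\right]_{vir}
= \frac{1}{2}U_{vir}
= -\frac{3}{10}\frac{GM^2}{R_{vir}}
&=& U_{ta}
= -\frac{3}{5}\frac{GM^2}{R_{ta}}
~~\Longrightarrow~~
x=\frac{R_{vir}}{R_{ta}}=\frac{1}{2} ~.\end{aligned}$$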
The virial equation, which at equilibrium gives the above results, is usually derived from the Euler Equation for particles. It is worth noting here that one can derive the virial equation for a cosmological fluid. The starting point is the continuity equation of a perfect fluid (derived from $T^{0\nu}~_{;\nu}=0$), with equation of state (the ratio of pressure to energy density) $w=p/\rho$: $$\begin{aligned}
\dot\rho+3\left( 1+w \right) \frac{\dot r}{r}\rho &=& 0 ~.
\label{vir1}\end{aligned}$$ Multiplying equation (\[vir1\]) by $r^2$, taking the time derivative and integrating over a sphere of radius $R$ gives $$\begin{aligned}
\int\dot G dV+\frac{1}{2}\left(1+3w \right)\left[
\int\rho\dot r^2 dV+\int r\frac{d}{dt}\left(\rho \dot r
\right)dV \right] &=& 0 ~,
\label{vir2}\end{aligned}$$ where $G=(d/dt)\left(\frac{1}{2} \rho r^2\right)$. In the classical analogy, $\dot G=\ddot I$ is the second derivative of the inertia tensor. In a state of equilibrium, $\dot G=0$. The quantity $\int\rho\dot r^2 dV$ is twice the kinetic energy, and $\int r(d/dt)\left(\rho \dot r \right)dV $ is $R~ \partial
U/\partial R$. As equation (\[vir2\]) shows, the value of $w$ factorizes out when one is looking for the equilibrium condition. In the case where the fluid does not conserve energy, the right hand side of equation (\[vir1\]) will be equal to some function $\Gamma$. In that case, the virial equation (\[vir2\]) will have an additional surface term. In equilibrium, the surface term and $\dot G$ should vanish.\
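The step from equation (\[vir1\]) to equation (\[vir2\]) is compact enough to spell out. Using the continuity equation in the definition of $G$, $$\begin{aligned}
G \equiv \frac{d}{dt}\left(\frac{1}{2}\rho r^2\right)
= \frac{1}{2}\dot\rho r^2+\rho r\dot r
&=& -\frac{1}{2}\left(1+3w\right)\rho r\dot r ~,\end{aligned}$$ and differentiating once more and integrating over the sphere reproduces equation (\[vir2\]).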
This is the non-relativistic version of the scalar virial theorem. Hence, it is not applicable for a fluid with a relativistic equation of state, $w\rightarrow \frac{1}{3}$. It can be shown that when writing the relativistic version, the energy of the radiation field drops out of the virial equation [@ll].
A two-component system {#2cs}
========================
When adding a component to the system, there are three questions to be asked - [**[(a)]{}**]{} how does the potential induced by the new component affect the system? [**[(b)]{}**]{} does the new component participate in the virialization? and [**[(c)]{}**]{} does the new component cluster, or does it stay homogeneous? These questions should be addressed in the framework of a fundamental theory for dark energy. Here we work out the consequences of virialization and clustering of dark energy, if they do take place. We shall try to address each of these questions separately.\
\
[**[(a)]{}**]{} In the case where the new component does not cluster or virialize, its sole effect is contributing to the potential energy of component 1. LLPR [@llpr] calculated this contribution to the potential energy using the Tolman-Bondi equation. Their result was generalized for quintessence by WS [@ws] and, in a different manner, by BW [@bw]. We follow LLPR and WS in our numerical calculations.\
[**[(b)]{}**]{} Any energy component with non vanishing kinetic energy is capable of virializing, but the question is: on what time scale? If one imagines that the full system virializes, then the virial theorem should relate the [*[full]{}*]{} kinetic and potential energies of the system, $$\begin{aligned}
U &=& U_{11}+U_{12}+U_{21}+U_{22}=
\frac{1}{2}\int\left(\rho_1+\rho_2\right)
\left(\Phi_1+\Phi_2 \right)dV \label{uf} ~,\end{aligned}$$ where the potential $\Phi_x$ induced by each energy component in a spherical homogeneous configuration is $$\begin{aligned}
\Phi_x(r) &=& -2\pi G (1+3w_x)\rho_x\left(R^2-\frac{r^2}{3}
\right)~.\end{aligned}$$ The kinetic energy at virialization is then $$\begin{aligned}
T_{tot} &=& \frac{1}{2}R\frac{\partial}{\partial R}
\left(U_{11}+U_{12}+U_{21}+U_{22}\right) ~.
\label{vf}\end{aligned}$$ The expression above is the full potential energy of the system. As we will show, the addition of these new terms to the virial theorem makes a fundamental difference in the final state of the system, so the question of whether dark energy participates in the virialization is crucial.\
\
[**[(c)]{}**]{} Every positive energy component other than the cosmological constant is capable of clustering. Even though Caldwell [*[et al]{}*]{} [@c] have shown that quintessence cannot be perfectly smooth, it is assumed that the clustering is negligible on scales less than $100~Mpc$. It is therefore common practice to keep the quintessence homogeneous during the evolution of the system. The effects of relaxing this assumption were explored in MB. We will consider both cases here. The continuity equation for a Q component which is kept homogeneous is $$\begin{aligned}
\dot\rho_{Qc}+3(1+w)\frac{\dot a}{a}\rho_{Qc} & = & 0 ~,\end{aligned}$$ and for clustering Q is $$\begin{aligned}
\dot\rho_{Qc}+3(1+w)\frac{\dot r}{r}\rho_{Qc} & = & 0\end{aligned}$$ ($a$ and $r$ are the global and local scale factors, respectively). One can ask what happens if one slowly ‘turns on’ and enables the possibility of clustering for the Q component. To enable a slow continuous ‘turn on’ of the clustering, one can write $$\begin{aligned}
&& \dot\rho_{Qc}+3(1+w)\left(\frac{\dot r}{r}\right)\rho_{Qc} = \gamma \Gamma
\label{13}\\
&& \Gamma = 3(1+w)\left(\frac{\dot r}{r}-
\frac{\dot a}{a}\right)\rho_{Qc} \\
&& 0\leq \gamma \leq 1 ~,\end{aligned}$$ where $\rho_{Qc}$ is the dark energy’s density within the cluster. The notation $\Gamma$ follows MB. $\gamma$ is the ‘clustering parameter’: $\gamma=0$ gives clustering behaviour and $\gamma=1$ gives homogeneous behaviour. In the case of $\gamma=1$, the dark energy inside the spherical region and the background dark energy $\rho_Q$ behave in the same way: $\rho_{Qc}=\rho_Q$. A point to bear in mind is that for the case of homogeneous quintessence, the system does not conserve energy as it collapses from turnaround to the virialized state.\
Putting all this together, the equations governing the dynamics of the spherical perturbation are $$\begin{aligned}
\left(\frac{\ddot r}{r} \right) & = &-\frac{4\pi G}{3}
\left(\frac{}{}\rho_{mc}+\left(1+3w \right)\rho_{Qc}\right) \label{qr2} \\
\dot\rho_{mc} & + & 3\left(\frac{\dot r}{r}\right)\rho_{mc} = 0\\
\dot \rho_{Qc} & + & 3\left(1+w \right)\left(\frac{\dot r}{r}\right)
\rho_{Qc} = \gamma \Gamma \label{qc} ~.\end{aligned}$$
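As an illustration of how equations (\[qr2\])-(\[qc\]) can be integrated in practice, the following is a minimal numerical sketch (ours, not the authors’ code). It evolves the local radius $r(t)$, the background scale factor and the two densities inside the patch for a chosen constant $w$ and clustering parameter $\gamma$; the units ($G=1$), the initial overdensity and the background parameters are illustrative assumptions only, and the background expansion rate $\dot a/a$ entering $\Gamma$ is taken from the flat two-component Friedmann equation.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0            # units with G = 1 (illustrative)
w = -0.8           # constant equation of state of the Q component
gamma_cl = 1.0     # 'clustering parameter': 0 = fully clustering, 1 = homogeneous
rho_m0, rho_Q0 = 0.3, 0.7   # illustrative flat background densities today

def hubble(a):
    """Background expansion rate for flat FRW with matter + constant-w dark energy."""
    rho_bg = rho_m0 * a**-3 + rho_Q0 * a**(-3.0 * (1.0 + w))
    return np.sqrt(8.0 * np.pi * G * rho_bg / 3.0)

def rhs(t, y):
    r, rdot, a, rho_mc, rho_Qc = y
    H = hubble(a)
    # local acceleration, equation (qr2)
    rddot = -(4.0 * np.pi * G / 3.0) * (rho_mc + (1.0 + 3.0 * w) * rho_Qc) * r
    # matter inside the patch always clusters
    drho_mc = -3.0 * (rdot / r) * rho_mc
    # Q component, equation (qc): Gamma interpolates between clustering and homogeneous
    Gamma = 3.0 * (1.0 + w) * (rdot / r - H) * rho_Qc
    drho_Qc = -3.0 * (1.0 + w) * (rdot / r) * rho_Qc + gamma_cl * Gamma
    return [rdot, rddot, a * H, drho_mc, drho_Qc]

def turnaround(t, y):          # rdot changes sign at turnaround
    return y[1]
turnaround.terminal = True
turnaround.direction = -1

# start in the matter era with a moderate overdensity, comoving with the Hubble flow
a_i, delta_i = 0.01, 0.1
y0 = [a_i, a_i * hubble(a_i), a_i,
      rho_m0 * a_i**-3 * (1.0 + delta_i),
      rho_Q0 * a_i**(-3.0 * (1.0 + w))]

sol = solve_ivp(rhs, (0.0, 2.0), y0, events=turnaround, rtol=1e-8, atol=1e-12)
r_ta, a_ta, rho_mc_ta, rho_Qc_ta = sol.y_events[0][0][[0, 2, 3, 4]]
print(f"turnaround at a = {a_ta:.3f}, q(z_ta) = {rho_Qc_ta / rho_mc_ta:.3f}")
```

Turnaround is detected here as the zero of $\dot r$, and the densities at that instant give the value of $q$ that enters the virialization conditions below.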
Our results are going to be presented as a function of $q$, which is defined as the ratio of the system’s dark energy density to its matter density at the time of turnaround, $q(z_{ta})\equiv\rho_{Qc}(z_{ta})/\rho_{mc}(z_{ta})$. The system’s matter density $\rho_{mc}$ at turnaround is $\rho_{mc}(z_{ta})=\zeta(z_{ta})\rho_m(z=0)(1+z_{ta})^3$, expressed in terms of the background matter density $\rho_m(z_{ta})$ and the density contrast at turnaround, $\zeta(z_{ta})$. In order to estimate which values of $q$ are of interest, we plotted, in figure \[qz\], the dependence of $q$ on the turnaround redshift $z_{ta}$, for various values of $\Omega_m$ and $\Omega_{\Lambda}$. As can be seen, $q$ takes typical values no larger than $0.3$.
A clustered quintessence {#cq}
========================
In the case of fully clustering quintessence, $\gamma=0$, the quintessence field responds to the infall in the same way as matter, the only difference being its equation of state which dictates a different energy conservation (in the general relativity sense). However, energy is conserved, and since the quintessence is active in the dynamics of the system, it is quite reasonable to imagine that it takes part in the virialization. We therefore present here the calculation assuming the whole system virializes, matter and dark energy.\
Following equation (\[uf\]), the potential energy of the full system is $$\begin{aligned}
U &=& -\frac{3}{5}\frac{GM^2}{R}-(2+3w)\frac{4\pi G}{5}M\rho_{Qc} R^2
-(1+3w)\frac{16\pi^2 G}{15}\rho_{Qc}^2R^5 ~. \label{qec}\end{aligned}$$
Once the potential energy has been calculated, virialization is found with the use of equation (\[ec\]). Expressing it in term of $q=\rho_{Qc}/\rho_{mc}$ at turnaround and $x=R_{vir}/R_{ta}$ gives $$\begin{aligned}
&&\left[1+(2+3w)q+(1+3w)q^2\right]x \nonumber \\
&&-\frac{1}{2}(2+3w)(1-3w)qx^{-3w}-
\frac{1}{2}(1+3w)(1-6w)q^2x^{-6w} = \frac{1}{2} ~.
\label{gen0}\end{aligned}$$ Equation (\[gen0\]) is valid under the assumption that the whole system virializes.\
If, on the other hand, only the matter virializes, then the equation defining $x$ is $$\begin{aligned}
&&\left(1+q \right)x-\frac{q}{2}\left(1-3w \right)x^{-3w}
=\frac{1}{2} ~.
\label{ws0}\end{aligned}$$
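Equations (\[ws0\]) and (\[gen0\]) are algebraic in $x$ and are easily solved numerically. The following is a small sketch of such a solution (our own illustration, using a standard bracketing root finder, not the authors’ code); note that setting $w=-1$ recovers the cosmological constant limits discussed later, equations (\[llpr\]) and (\[gencc\]).

```python
import numpy as np
from scipy.optimize import brentq

def x_matter_only(q, w):
    """Solve equation (ws0): (1+q)x - (q/2)(1-3w) x^{-3w} = 1/2."""
    f = lambda x: (1.0 + q) * x - 0.5 * q * (1.0 - 3.0 * w) * x**(-3.0 * w) - 0.5
    return brentq(f, 1e-4, 1.0)

def x_whole_system(q, w):
    """Solve equation (gen0): matter and clustered quintessence virialize together."""
    f = lambda x: ((1.0 + (2.0 + 3.0 * w) * q + (1.0 + 3.0 * w) * q**2) * x
                   - 0.5 * (2.0 + 3.0 * w) * (1.0 - 3.0 * w) * q * x**(-3.0 * w)
                   - 0.5 * (1.0 + 3.0 * w) * (1.0 - 6.0 * w) * q**2 * x**(-6.0 * w)
                   - 0.5)
    return brentq(f, 1e-4, 1.0)

w = -0.8
for q in (0.0, 0.1, 0.2, 0.3):
    print(f"q = {q:.1f}:  matter only x = {x_matter_only(q, w):.3f},"
          f"  whole system x = {x_whole_system(q, w):.3f}")
# q = 0 recovers the EdS value x = 1/2 in both cases; for q > 0 the matter-only
# solution falls below 1/2 while the whole-system solution rises above it.
```

For $w=-0.8$ this reproduces the qualitative behaviour shown in figure \[wc08\].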
In figure \[wc08\] we show $x$ as a function of $q$, for a fluid with $w=-0.8$ that fully clusters, $\gamma=0$. The dotted line is the ratio in the case where only matter virializes, equation (\[ws0\]). The solid line is the ratio when the whole system, including the dark energy component, has virialized, equation (\[gen0\]). As can be seen, there is a fundamental difference between the solutions: if only the matter virializes then the final ratio is smaller than the EdS $\frac{1}{2}$ value, while when the whole system virializes, the final ratio is larger than $\frac{1}{2}$.
Turning off the clustering {#g}
==========================
It is the usual practice to neglect spatial perturbations of the quintessence field, and to keep it homogeneous [@c]. With our generalized notation of the ‘clustering parameter’ $\gamma$, one can also allow a small but non-zero amount of clustering for the quintessence.\
For any $\gamma\neq 0$, the quintessence field within the system does not conserve energy. As equation (\[ec\]) assumes energy conservation, the problem with not allowing the quintessence to fully cluster is how to find the radius of virialization. We will now propose a correction to equation (\[ec\]), that will take into account the loss of energy.
We denote the potential energy at turnaround as $U_{ta}$, and at virialization as $U_{vir}$. We also define a function $\tilde U$ as the system’s potential energy [*had*]{} it conserved energy. Thus by construction the energy that the system lost is $$\begin{aligned}
\Delta U & \equiv & \tilde U-U ~.\end{aligned}$$ Equation (\[ec\]) which describes energy conservation between turnaround and virialization now needs to be corrected. Accounting for the lost energy gives $$\begin{aligned}
\left[ U+\frac{R}{2}\frac{\partial U}{\partial R}
\right]_{vir}+\Delta U_{vir} =
\left[\tilde U+\frac{R}{2}\frac{\partial U}{\partial
R}\right]_{vir}
= U_{ta} ~. \label{new}\end{aligned}$$ We are now set to calculate $\Delta U$.\
Looking at equation (\[qec\]), one can treat $U$ as $U(\rho_x,R)$ ($\rho_x$ being the various energy density components). In order to calculate $\tilde U(\rho_x,R)$, one needs to replace $\rho_x$ with $\tilde \rho_x$ in the expression for $U$, which has $\gamma=0$ and conserves energy. The continuity equation for $\tilde \rho_x$ is then $$\begin{aligned}
\dot{\tilde \rho_x}+3\left(\frac{\dot R}{R} \right)
\left(1+w_x\right)\tilde\rho_x=0 ~, \label{tilde}\end{aligned}$$ and we impose boundary conditions such that $\tilde\rho_x(a_{ta})=\rho_x(a_{ta})$.\
For a constant equation of state, this gives $$\begin{aligned}
\tilde\rho_x & = & \tilde \rho_x(a_{ta})
\left(\frac{R_{ta}}{R_{vir}} \right)^{3(1+w_x)}
=\rho_x(a_{ta})
\left(\frac{R_{ta}}{R_{vir}} \right)^{3(1+w_x)} ~.\end{aligned}$$ For a time dependent $w$ one needs to use the integral expression for $\tilde \rho$.\
We therefore have $$\begin{aligned}
\tilde U(\rho_x,R) & = & U(\tilde \rho_x, R) ~.\end{aligned}$$
Equation (\[new\]) is now a function of $R_{vir}$ and values determined at turnaround time (such as $U_{ta}$ and $\rho_x(a_{ta})$), and defines $R_{vir}$ in the same manner as equation (\[ec\]) did. With the definitions of $q$, $x$, $y=(a_{vir}/a_{ta})^{1+w}$ and $p=x/y$, the final form of equation (\[ec\]) for a quintessence with a general value of $\gamma$ is then $$\begin{aligned}
&&
\frac{q^2}{2}\left(1+3w\right)
\left[\frac{}{}1+6w-6\gamma\left(1+w\right)\right]
\left[\frac{}{}\left(1-\gamma \right)x^{-3w}+
\gamma p^3\right]^2
\nonumber \\
&+&
\frac{q}{2}\left(2+3w\right)
\left[\frac{}{}1+3w-3\gamma\left(1+w\right)\right]
\left[\frac{}{}\left(1-\gamma \right)x^{-3w}+\gamma p^3 \right]
\nonumber \\
&+&
\left[\frac{}{}1+\left(2+3w \right)q+\left(1+3w\right)q^2 \right]x
-\left(2+3w \right)qx^{-3w}- \left(1+3w\right)q^2x^{-6w}
\nonumber \\
&=& \frac{1}{2} ~.
\label{general}\end{aligned}$$ For the case (common in literature) of a completely homogeneous quintessence, $\gamma=1$, the virialization condition (\[general\]) is reduced to $$\begin{aligned}
&& \left[1+(2+3w)q+(1+3w)q^2\right]x- \nonumber \\
&& (2+3w)q\left(p^3+x^{-3w}\right)-
(1+3w)q^2\left(\frac{5}{2}p^6+x^{-6w}\right)= \frac{1}{2} ~.
\label{gen1}\end{aligned}$$\
The equation for $x$ when only the matter virializes does not need to be corrected for energy conservation, as it counts only the energy associated with the matter. Its general form is $$\begin{aligned}
&& \left(1+q\right)x-\frac{q}{2}
\left[\frac{}{}1-3w+3\gamma\left(1+w\right)\right]
\left[\left(1-\gamma \right) x^{-3w}+\gamma p^3 \right]
=\frac{1}{2} ~,
\label{wsg}\end{aligned}$$ and for $\gamma=1$ it reproduces the solution of WS, $$\begin{aligned}
&& \left(1+q\right)x-2qp^3 =\frac{1}{2} ~,
\label{ws1}\end{aligned}$$\
which is a generalization of LLPR’s results (to be discussed later on, see equation (\[llpr\])).
One should consider which of the solutions is more plausible. Ultimately the choice between the two solutions should be dictated by the theory with which the dark energy is modelled. As we are looking at an effective description of the dark energy as a perfect fluid and not with a fundamental theory, this information is lost.
For the case of $\gamma=1$, when one keeps the evolution of the dark energy in the system identical to that of the background, it is reasonable to assume that it does not participate in the local processes that lead to virialization. This gives credibility to the solution of equation (\[ws1\]), allowing only the matter to virialize. However, this raises a question of continuity, illustrated in figure \[gamma\]. The figure shows the solutions of $x$ as a function of $\gamma$, with fixed $w=-0.8$ and $q=0.2$. The circle on the right is the WS result when the quintessence is kept completely homogeneous. The square on the left is the result when both the matter and the quintessence virialize, for the fully clustering case. The ‘clustering parameter’ allows us to think of a continuous transition between the two cases. One would expect the transition in the behavior of the system along $\gamma$ to be smooth. Allowing the dark energy to virialize for the clustering case, $\gamma=0$, and keeping it out of the virialization process when $\gamma=1$, raises the question of how one should extrapolate smoothly between the two cases. As figure \[gamma\] suggests, there will be a discontinuity.\
In figure \[r08\] we compare the solution of equation (\[ws1\]) and (\[gen1\]) for $w=-0.8$ and $\gamma=1$, and show the effect of the energy correction. As can be seen, taking into account the loss of energy produces a small quantitative correction, but keeps the general feature of enlarging the final size of the system if the dark energy is allowed to virialize.
The limit of a cosmological constant {#cc}
====================================
Equations (\[general\]) and (\[wsg\]) are valid for any constant $w$. As $w$ approaches $-1$, we get that $p\rightarrow
x$, and the dependence on $\gamma$ vanishes. The reason that $\gamma$ plays no role in the limit of $w\rightarrow -1$ is that the question of whether such a fluid is allowed to cluster ($\gamma=0$) or not ($\gamma=1$) is rather abstract. It stays homogeneous in any case, because of its equation of state, $w_{\Lambda}=-1$ (which leads to $\Gamma=0$). Accordingly, energy is automatically conserved.\
In that limit, equation (\[general\]), which assumes that the whole system virializes, simplifies to $$\begin{aligned}
7 q^2 x^6 + 2 q x^3 + \left(1-q-2q^2\right)x & = & \frac{1}{2} ~.
\label{gencc}\end{aligned}$$
Taking the same limit for equation (\[wsg\]) yields the familiar result of LLPR: $$\begin{aligned}
\left(1+q\right)x-2qx^3 & = & \frac{1}{2} ~~~~~(LLPR) ~,
\label{llpr}\end{aligned}$$ which is valid under the assumption that the matter component alone virializes [^1] (notice that our definition of $q$ differs by a factor of $2$ from the definition of $\eta$ of LLPR, $q=\frac{1}{2}\eta$).\
Again, we wish to consider the plausibility of the two solutions. If one considers the cosmological constant as a true constant of Nature, $\rho_{\Lambda}=\Lambda/(8 \pi G)$, it is hard to imagine it participating in the dynamics that lead to virialization, as it is a true constant. In this case, one could categorically say that the right procedure is to look at the virialization of the matter fluid only, and follow LLPR’s solution, equation (\[llpr\]). The sole effect of the cosmological constant, then, is to modify the potential that the matter fluid feels.
If, on the other hand, one considers the origin of a perfect fluid with $w\approx -1$ as a special case of quintessence, which is indistinguishable from a cosmological constant, it is reasonable to expect continuity in the behaviour of the system as one slowly changes the value of $w$ toward $-1$. In other words, if the physical interpretation of the fluid with $w\approx -1$ is of a dynamical field that [*[mimics]{}*]{} a constant, the idea of including it in the dynamics of the system has a physical meaning.
The result, then, is that we possibly have a signature differentiating between a cosmological constant which is a true constant, and something else which [*[mimics]{}*]{} a constant. This point is shown in figure \[w0\]. The figure shows $x$ as a function of $w$, with $q=0.2$ and $\gamma=0$. The dotted line follows the solution of equation (\[wsg\]), with the matter alone virializing. The circle on the left is LLPR’s solution for the cosmological constant. The solid line follows the solution of equation (\[general\]). The square on the right is an example of a clustered quintessence, where we expect to take into account the whole system in the virialization. As with figure \[gamma\], there is a suggested discontinuity, but here one can associate the discontinuity with a clear physical meaning: a true cosmological constant is not on the continuum of perfect fluids with general $w$, as its physical behaviour is different.\
An observational detection of virialized objects with $R_{vir}>\frac{1}{2}R_{ta}$ would be strong evidence against a cosmological constant which is a true constant, regardless of the measured value of the equation of state.
Conclusions
===========
In this work, we have reconsidered the inclusion of a dark energy component into the formalism of spherical collapse. We compared existing results (such as those of LLPR and WS) which implicitly assume that only the dark matter virializes, to the case where the whole system’s energy is taken into account for virialization, implying that the dark energy component also participates in the process (MB). While previous studies allow the dark energy component either to fully cluster or to remain completely homogeneous, we generalized and allowed a smooth transition between the two cases. Additionally, we addressed the issue of energy non-conservation when the dark energy is kept homogeneous.
Our main conclusions are:
- In the case of a true cosmological constant only the matter component virializes and the LLPR solution is valid.
- If both components of the system virialize, two additional terms to the potential energy appear. These are the self-energy of the additional energy source, and its interaction with the matter.
- The inclusion of these terms results in a fundamentally different behaviour of the system. If only dark matter virializes, the final size of the system is [*[smaller]{}*]{} than half of its maximal size. When the whole system virializes, its final size is [*[bigger]{}*]{} than half of its maximal size.
- It is hard to understand the physical meaning of a cosmological constant ‘virializing’, if it is a true constant. Accordingly, observational evidence for $R_{vir}>\frac{1}{2}R_{ta}$ would be strong evidence in favour of a dynamical field for the dark energy, regardless of the measured value of the equation of state. On the other hand, $R_{vir}<\frac{1}{2}R_{ta}$ is compatible with both a true constant and a field mimicking the cosmological constant.
- Keeping the dark energy component homogeneous implies that the overdense region does not conserve energy. The exception here is the case of the cosmological constant, for which the non-clustering behaviour is exact and not an approximation. The equation defining virialization needs to be corrected, in order to account for the energy lost by the Q field between turnaround and virialization. It should read $$\begin{aligned}
\left[\tilde U+\frac{R}{2}\frac{\partial U}{\partial
R}\right]_{vir} & = & U_{ta} ~. \nonumber\end{aligned}$$ This introduces a small quantitative correction.
Table \[table\] gives a summary of the relevant solution for the different cases that we considered in this work.
[**[$\rho_{mc}$ virializes]{}**]{} [**[$\rho_{mc}$ and $\rho_{Qc}$ virialize]{}**]{}
----------------------------------- ------------------------------------ ---------------------------------------------------
[**[general case]{}**]{} (\[wsg\]) (\[general\])
[**[$\gamma=0$]{}**]{} (\[ws0\]) (\[gen0\])
[**[$\gamma=1$]{}**]{} (\[ws1\]) (\[gen1\])
[**[$w\rightarrow -1$]{}**]{} (\[llpr\]) (\[gencc\])
[**[Cosmological Constant]{}**]{} (\[llpr\]) –
: A summary of the relevant equations defining $x$ for the various cases that we considered. \[table\]
Our work has consequences for the linear theory as well. As we have not altered any of the equations governing the evolution of the system, the linear equation of growth will not be altered either. Nonetheless, reconsidering the energy budget changes the time at which we consider virialization to occur, and as a consequence the linear contrast at virialization ($1.686$ for the EdS case) will change. In practice though, the numerical change is rather small. We find that for the cosmological constant, the maximal deviation from the EdS value is a rise of about $3\%$.
A future work would be to incorporate the possible virialization of dark energy into numerical simulations and into analyses of observations. It would be particularly interesting to see how it affects cluster abundances, and which approach provides a better fit to the observations. Several works are pursuing such directions [@additional].
For various models of coupled quintessence [@qc], it is very likely that the dark energy component clusters and virializes. For models in which the dark energy does not cluster, one may ask how plausible the scenario of the dark energy participating in the virialization is. Of course, should one argue that the dark energy virializes and not just the dark matter component, a mechanism of how this physically happens would be needed. An additional direction to pursue is the actual mechanism of virialization, which at the moment is still rather obscure. Understanding the physical process by which the system virializes will hopefully give us a clue as to whether we should include the dark energy or not.
We would like to thank Jacob Bekenstein, Carsten van de Bruck, Ramy Brustein, Uri Keshet, Donald Lynden-Bell, David Mota, Amos Ori, Martin Rees and Jochen Weller for useful discussions. OL acknowledges a PPARC Senior Research Fellowship. IM acknowledges support from the Leverhulme Quantitative Cosmology grant. Part of this investigation was carried out while one of us (IM) was visiting the Weizmann Institute of Science. IM wishes to thank Micha Berkuz and Yosi Nir for their kind invitation and warm hospitality.
References {#references .unnumbered}
==========
[00]{}
J. E. Gunn and J. R. I. Gott, Astrophys. J. [**176**]{}, 1 (1972).
W. H. Press and P. Schechter, Astrophys. J. [**187**]{}, 425 (1974).
O. Lahav, P. B. Lilje, J. R. Primack and M. J. Rees, Mon. Not. Roy. Astron. Soc. [**251**]{}, 128 (1991).
L. M. Wang and P. J. Steinhardt, Astrophys. J. [**508**]{}, 483 (1998) \[arXiv:astro-ph/9804015\].
I. T. Iliev and P. R. Shapiro, Mon. Not. Roy. Astron. Soc. [**325**]{}, 468 (2001) \[arXiv:astro-ph/0101067\].
N. N. Weinberg and M. Kamionkowski, Mon. Not. Roy. Astron. Soc. [**341**]{}, 251 (2003) \[arXiv:astro-ph/0210134\].
R. A. Battye and J. Weller, Phys. Rev. D [**68**]{}, 083506 (2003) \[arXiv:astro-ph/0305568\].
D. F. Zeng and Y. H. Gao, arXiv:astro-ph/0412628.
D. F. Mota and C. van de Bruck, Astron. Astrophys. [**421**]{}, 71 (2004) \[arXiv:astro-ph/0401504\].
C. Wetterich, Astron. Astrophys. [**301**]{}, 321 (1995) \[arXiv:hep-th/9408025\].\
L. Amendola, Phys. Rev. D [**62**]{}, 043511 (2000) \[arXiv:astro-ph/9908023\].
S. Engineer, N. Kanekar and T. Padmanabhan, Mon. Not. Roy. Astron. Soc. [**314**]{}, 279 (2000) \[arXiv:astro-ph/9812452\].
L. D. Landau and E. M. Lifshitz, [*[The classical theory of fields]{}*]{}, Oxford: Pergamon Press, 1975, 4th edition.
R. R. Caldwell, R. Dave and P. J. Steinhardt, Phys. Rev. Lett. [**80**]{}, 1582 (1998) \[arXiv:astro-ph/9708069\]. N. J. Nunes and D. F. Mota, arXiv:astro-ph/0409481.\
S. Hannestad, arXiv:astro-ph/0504017.\
C. Horellou and J. Berge, arXiv:astro-ph/0504465.\
M. Manera and D. F. Mota, arXiv:astro-ph/0504519.
[^1]: One can look at a test particle feeling an inverse square force and an additional repulsive $\Lambda$ force. Consider two possible orbits of the particle: one circular, and one in which its kinetic energy can vanish. The ratio of the circular radius to the radius of zero kinetic energy (‘turnaround’) is described exactly by equation (\[llpr\]). This assumes that the test particle does not contribute to the forces of the system. We thank Martin Rees for pointing out this similarity.
---
abstract: 'Most old distant radio galaxies should be extended X–ray sources due to inverse Compton scattering of Cosmic Microwave Background (CMB) photons. Such sources can be an important component in X-ray surveys for high redshift clusters, due to the increase with redshift of both the CMB energy density and the radio source number density. We estimate a lower limit to the space density of such sources and show that inverse Compton scattered emission may dominate above redshifts of one and X-ray luminosities of $10^{44}$ erg s$^{-1}$, with a space density of radio galaxies $> 10^{-8}$ Mpc$^{-3}$. The X-ray sources may last longer than the radio emission and so need not be associated with what is seen to be a currently active radio galaxy.'
author:
- |
\
\
$^1$SISSA, via Beirut, 2-4, 34014 Trieste, Italy\
$^2$Institute of Astronomy, Madingley Road, Cambridge CB3 0HA\
title: 'Extended X-ray emission at high redshifts: radio galaxies versus clusters'
---
galaxies: active - galaxies: clusters - X-ray: galaxies - clusters
Introduction
============
[*Chandra*]{} has revealed extended X-ray emission from a wide range of radio sources out to high redshifts. Jets and the lobes and cocoons of radio quasars and galaxies have been imaged with unprecedented resolution (e.g. Chartas et al 2000, Schwarz et al 2001, Harris & Krawczynski 2002, Kataoka et al 2003, Comastri et al 2003, Wilson, Young & Shopbell 2001, Kraft et al 2002). At low redshifts, extended emission directly associated with the radio lobes is seen through its inverse Compton emission in some objects (e.g. Fornax A; Feigelson et al 1995, Kaneda et al 1995; 3C219, Comastri et al 2003). In the powerful radio source Cygnus A, diffuse X-ray emission is also detected from the radio cocoon, i.e. the reservoirs of shocked material associated with the radio expansion (Wilson, Young & Smith 2003). When however a modest radio source lies in a rich cluster, the surface brightness of the inverse Compton emission in soft X-rays can be so low that it cannot be separated from thermal emission and the lobes appear as holes in the X-ray emission due to displacement of the hot gas by the lobes (e.g. the Perseus cluster, Fabian et al 2000, Sanders et al 2004; Hydra A, McNamara et al 2000; A2052, Blanton et al 2001).
Extended X-ray emission is also associated with an increasing number of radio sources at cosmological distances, for example 3C 294 (Fabian et al 2003), 3C 9 (Fabian, Celotti & Johnstone 2003), PKS1138-262 (Carilli et al 2002), 4C 41.17 (Scharf et al 2003), GB 1508+5714 (Yuan et al 2003, Siemiginowska et al 2003), although often the low photon count rate makes it hard to disentangle the different X-ray components. In general however, much of the emission can be interpreted as due to inverse Compton scattering of non-thermal radio emitting electrons on CMB photons (Felten & Rees 1967, Cooke, Lawrence & Perola 1978, Harris & Grindlay 1979). Where directly associated with powerful jets relativistic bulk motion may be involved (Celotti, Ghisellini & Chiaberge 2001; Tavecchio et al 2000). The steep increase in the energy density of the CMB with redshift $z$ (as $(1+z)^4$) partially compensates for the large distance to such sources (Felten & Rees 1967, Schwartz 2002), thereby making them detectable.
Independently of the spatial distribution, the presence of extended radio synchrotron emission is a direct indication of the presence of a non-thermal population of relativistic particles. These particles, at least, must produce high energy emission via inverse Compton scattering of CMB photons (direct measurements and upper limits for such emission are used to estimate the intracluster magnetic field). Here, we consider the effective number density of extended X-ray emitting sources due to this process as a function of X-ray luminosity and show that they can be a serious contaminant to X–ray surveys searching for clusters and protoclusters at high redshift.
The outline of the paper is as follows: in Section 2 we estimate the ratio of (synchrotron) radio to (inverse Compton) X-ray emission, while in Section 3 we estimate the corresponding X-ray luminosity functions of radio sources as a function of $z$ and compare them with those of X-ray clusters. A discussion in Section 4 concludes this Letter. A cosmology with $\Omega_{\Lambda}=0.7, \Omega_{\rm M}=0.3$, $H_0=50$ km s$^{-1}$ Mpc$^{-1}$ has been assumed.
X-ray emission from scattering on the CMB
=========================================
We now attempt to estimate the X-ray luminosity associated with each extended radio source and the number density of X-ray emitting objects using radio emission as a direct tracer of the relativistic electron population. Although based on well known classical arguments (Felten & Morrison 1966), we explicitly re-derive the limits on the inverse Compton emission as these are the basis for the robustness of the inferred X–ray emission from radio sources. A simplified approach restricts us to lower limits, but bypasses the need for determining particle life-times (due to acceleration/injection history and radiative and adiabatic cooling, see Sarazin 1999) and consideration of the (uncertain) conversion of radio luminosity into source power. It should be noted that for typical gas densities associated with radio emitting regions, inverse Compton largely dominates over non-thermal bremsstrahlung emission (Sarazin 1999, Petrosian 2001).
In order to include only radio emission arising as synchrotron radiation from a non-thermal (power-law) distribution of electrons on extended scales we use low frequency radio luminosities, $L_{\rm R}$, at 151 MHz. The major uncertainty in the estimate of the X-ray luminosity is the magnetic field in the radio-emitting region. Depending on its intensity the same electrons can give both the observed radio emission and the X-ray emission, or some extrapolation of the electron spectrum is required.
Let us consider the monochromatic X-ray luminosity $L_{\rm x}$ at a reference frequency of 1 keV (in the observer frame), and define $\gamma_{\rm x}= \left(3 \nu_{\rm x} /4 \nu_{\rm CMB}\right)^{1/2}\simeq 10^3$ as the Lorentz factor of the electrons which could emit at (the observed) 1 keV photon frequency $\nu_{\rm x}$ via inverse Compton on the CMB (peaked at the frequency $\nu_{\rm CMB}$) and $B_{\rm x}\sim$ few $\times 10^{-5}$ G as the magnetic field intensity for which these same electrons radiate at $\nu_{\rm R}$ (151 MHz) via synchrotron (i.e. $B_{\rm x}\simeq 5.4\times 10^{-5}\, (\nu_{\rm R}/151\,{\rm MHz})\, (1+z)\, (\gamma_{\rm x}/10^3)^{-2}$ G, evaluated here for $z=0$). In other words, for any magnetic field $B < B_{\rm x}$, the Lorentz factor of the radio (151 MHz) emitting particles $\gamma_{\rm R} > \gamma_{\rm x}$, and vice-versa.
The relative luminosity in the synchrotron and inverse Compton components is given by the ratio of the magnetic $U_{\rm B}$ vs radiation $U_{\rm CMB}$ energy densities. More specifically, the relative luminosities at the two fixed observed frequencies in the radio and X-ray bands scale as
$$\frac{\nu_{\rm x} L_{\rm x}}{\nu_{\rm R} L_{\rm R}} = \frac{U_{\rm
CMB}}{U_{\rm B}} \left(\frac{\nu_{\rm X}\, \nu_{\rm B}} {\nu_{\rm R}\,
\nu_{\rm CMB}}\right)^{1-\alpha} (1+z)^{(3+\alpha)-k(1+\alpha)},$$
where $U_{\rm B}$ and $U_{\rm CMB}$ are the energy densities at redshift $z=0$ and $k$ accounts for a possible dependence of the magnetic energy density on $z$, parametrized as $B(z)=B(0) (1+z)^k$. The non-thermal particle distribution has been assumed to be a power-law whose slope $p$ is related to the luminosity spectral index $\alpha = (p-1)/2$ ($L(\nu)\propto \nu^{-\alpha}$).
Figs. 1a and 2a show the ratio of the expected X-ray (inverse Compton) and radio (synchrotron) luminosities at the (observed) frequencies of 1 keV and 151 MHz, at different redshifts and for two representative power-law particle distributions, with $p$=2.6 and 2, respectively. Reference values for the magnetic field ($B_{\rm eq}$, i.e. the field in equipartition with $U_{\rm CMB}$, and $B_{\rm x}$) are also reported as vertical lines. For $p=2.6$ and $0<z<2$, the X-ray/radio luminosity ratio ranges between $300$ and $3\times 10^4$ for a $0.1\mu$G field ($8\times 10^{-2}$ to $8$ for a $10\mu$G field). For $p=2$, the analogous ranges are $50$ to $3\times 10^3$ for $B\sim 0.1\mu$G ($8\times 10^{-2}$ to $3$ for $B\sim 10\mu$G). Alternatively, for fields in the interval 1-10$\mu$G and $z>1$, $L_{\rm x} \nu_{\rm x}/ L_{\rm R} \nu_{\rm R}>0.5$. These estimates assume a homogeneous region and $B=$ constant as a function of redshift (i.e. $k=0$).
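These numbers follow directly from the scaling relation above; the short sketch below (our own illustration, not code from the paper) evaluates $\nu_{\rm x}L_{\rm x}/\nu_{\rm R}L_{\rm R}$ under stated assumptions: $T_{\rm CMB}=2.725$ K so $U_{\rm CMB}\simeq 4.2\times10^{-13}$ erg cm$^{-3}$ today, $\nu_{\rm CMB}\simeq1.6\times10^{11}$ Hz for the peak frequency, $\nu_{\rm B}$ taken as the non-relativistic Larmor frequency $eB/2\pi m_{\rm e}c$, and $k=0$. Only the trends with $B$ and $z$ should be read off, since the exact prefactor depends on conventions not spelled out in the equation.

```python
import numpy as np

# Reference values (cgs, illustrative assumptions)
U_CMB0 = 4.2e-13        # CMB energy density today [erg cm^-3]
NU_CMB = 1.6e11         # CMB peak frequency [Hz]
NU_X   = 2.42e17        # 1 keV observing frequency [Hz]
NU_R   = 1.51e8         # 151 MHz observing frequency [Hz]

def ic_to_synch_ratio(B, z, p=2.6, k=0.0):
    """nu_x L_x / nu_R L_R following the scaling above; B in Gauss (z = 0 value)."""
    alpha = (p - 1.0) / 2.0
    U_B = B**2 / (8.0 * np.pi)
    nu_B = 2.8e6 * B                       # Larmor frequency [Hz]
    return (U_CMB0 / U_B) * ((NU_X * nu_B) / (NU_R * NU_CMB))**(1.0 - alpha) \
        * (1.0 + z)**((3.0 + alpha) - k * (1.0 + alpha))

for B in (1e-7, 1e-6, 1e-5):               # 0.1, 1 and 10 microGauss
    ratios = ", ".join(f"{ic_to_synch_ratio(B, z):.2g}" for z in (0.0, 1.0, 2.0))
    print(f"B = {B * 1e6:.1f} uG, z = 0, 1, 2:  ratio = {ratios}")
# This reproduces the quoted trend for p = 2.6: a few hundred to a few times 10^4
# for 0.1 uG between z = 0 and 2, dropping to of order 0.1-10 for a 10 uG field.
```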
### Caveats
There are two main caveats to be considered before adopting the above estimates.
Firstly, the actual value of the magnetic field. Estimates for nearby clusters based on equipartition (minimum energy) arguments from the non-thermal emitting component lead typically to $B\sim 0.1-1 \mu$G for unit volume filling factor and equal electron and proton energies. Similar estimates are derived for the few clusters where there is a detection of hard X-ray emission [^1], assumed to be inverse Compton scattering on CMB photons. Fields inferred from Faraday rotation measures are instead about one order of magnitude higher (including cooling flow clusters), in the range $1-10\mu$G (Carilli & Taylor 2002, Feretti 2003 for reviews). These values can be reconciled with the former ones by taking into account the dependence on radial distance, field substructures, different electron spectra, etc. (see e.g. Feretti 2003). In any case, typical fields appear to be in the range considered above, up to $\sim 10\mu$G, i.e. $\approxlt B_{\rm x}$, and thus not dynamically important in clusters. Although the origin of such fields is not known, it appears likely that seed fields (primordial or produced from galactic winds, AGN, shocks associated to large scale structure formation) could be amplified following cluster mergers up to $\sim \mu$G values (e.g. Roettiger, Stone & Burns 1999). Therefore, it seems plausible that, despite a higher (thermal) gas pressure at higher cluster luminosities/redshift, the actual field might be a decreasing function of $z$, leading to even higher $L_{\rm x} \nu_{\rm x}/ L_{\rm R}
\nu_{\rm R}$ values. We conclude that the above estimates for fields $B\sim 1$-$10\,\mu$G $< B_{\rm x}$ are a reasonably robust lower limit to this ratio.
The second critical assumption in the above estimates is the shape of the particle distribution, considered as a single power-law. As this might well not be the case, let us thus consider the uncertainties related to this assumption, which depend on which energy range of the particle distribution is actually responsible for the X-ray emission. Figs. 1b and 2b show the Lorentz factors of the emitting electrons as a function of $B$. The horizontal line indicates $\gamma_{\rm x}$ while the oblique lines (labeled $\gamma_{\rm R}$) show the Lorentz factors of electrons emitting at 151 MHz at different redshifts.
For any $B< B_{\rm x}$ (as discussed above), corresponding to $\gamma_{\rm R}> \gamma_{\rm x}$, it is therefore necessary to assess whether the extrapolation on the particle number density at energy $m_{\rm e} c^2 \gamma_{\rm x}$ from the corresponding intensity at 151 MHz provides a robust lower limit, as the particle distribution might flatten or cut off at a Lorentz factor $\gamma^*$, with $\gamma_{\rm
x}< \gamma^* <\gamma_{\rm R}$. Indeed such a flattening/cutoff could be expected if:
a\) relativistic particles have been injected with energies $>\gamma_{\rm x}$ and have not (yet) cooled down to $\gamma_{\rm x}$; the observed spectral slope would then correspond to the slope of a cooled particle distribution, i.e. above a cooling break $\gamma_{\rm
b}$ with $\gamma_{\rm R} > \gamma_{\rm b}>\gamma_{\rm x}$; the low energy particles lose energy non radiatively, namely via Coulomb losses – indeed for typical (cluster) densities this process dominates at $\gamma< 100$ (e.g. Petrosian 2001);
b\) self-absorption is effective in re-heating the low energy particles, causing the particle distribution to (quasi-) thermalize around a self-absorption Lorentz factor $\gamma_{\rm t}>\gamma_{\rm
x}$ (Ghisellini, Guilbert & Svensson 1988). The ‘flattest’ oblique lines in Figs. 1b, 2b represent $\gamma_{\rm t}$ for increasing values of the (Thomson) optical depth (for $\tau = 10^{-3}, 10^{-2}, 10^{-1},
1$), showing that indeed self-absorption might cause the particle spectrum to flatten at energies higher than $\gamma_{\rm x}$, although only for significantly large optical depths in relativistic particles.
The radiative cooling timescales of the X–ray emitting electrons $$t_{\rm rad} \simeq 2.4\times 10^9 \gamma^{-1}_{\rm x} (1+z)^{-4}
\left(1+\frac{U_{\rm B}}{U_{\rm CMB}} (1+z)^{k-4}\right) \quad {\rm yr}$$ would be typically much larger than the adiabatic one $$t_{\rm ad}\simeq 3.2\times 10^6
R_2 (R_2/\lambda_{\rm scatt,1})\qquad\qquad {\rm yr}$$ where $R_2 = R /100$ kpc is a typical size of extended radio emission, and $\lambda_{\rm scatt}=10$ kpc is considered as the typical ‘scattering’ length for field coherence cells of $\sim 10$ kpc (Carilli & Taylor 2002). Furthermore expansion – and thus cooling - would likely start from the injection/acceleration site, i.e. on presumably smaller scales. Thus it appears unlikely that particles with $\gamma>\gamma_{\rm x}$ have not cooled or that particle thermalisation could be effective (also inverse Compton cooling should prevent the quasi-thermalization of the non-thermal particles below $\sim \gamma_{\rm t}$). Note also that Coulomb losses start to dominate over radiative ones for $\gamma < 100$, not affecting the shape of the particle distribution above $\gamma_{\rm x}$.
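As a cross-check on these timescales, both can be evaluated from standard expressions; the sketch below (our own illustration) uses the textbook inverse Compton plus synchrotron cooling time $t_{\rm rad}=3m_{\rm e}c/[4\sigma_{\rm T}\gamma\,(U_{\rm CMB}(z)+U_{\rm B})]$ rather than the fitted form quoted above, and takes $t_{\rm ad}\simeq R^2/(\lambda_{\rm scatt}\,c)$ with the same illustrative $R=100$ kpc and $\lambda_{\rm scatt}=10$ kpc.

```python
import numpy as np

# cgs constants
M_E, C, SIGMA_T = 9.109e-28, 2.998e10, 6.652e-25
KPC, YR = 3.086e21, 3.156e7
U_CMB0 = 4.2e-13                        # CMB energy density at z = 0 [erg cm^-3]

def t_rad(gamma, z, B=0.0):
    """Inverse Compton + synchrotron cooling time [yr] for Lorentz factor gamma."""
    U = U_CMB0 * (1.0 + z)**4 + B**2 / (8.0 * np.pi)
    return 3.0 * M_E * C / (4.0 * SIGMA_T * gamma * U) / YR

def t_ad(R_kpc=100.0, lam_kpc=10.0):
    """Adiabatic (expansion) timescale [yr], taken as R^2 / (lambda_scatt c)."""
    return (R_kpc * KPC)**2 / (lam_kpc * KPC * C) / YR

for z in (0.0, 1.0, 2.0):
    print(f"z = {z:.0f}:  t_rad(gamma = 1e3) = {t_rad(1e3, z):.1e} yr,"
          f"  t_ad = {t_ad():.1e} yr")
# At z = 0 this gives roughly 2e9 yr against 3e6 yr, in line with the quoted scalings.
```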
In order to estimate the effect on the X-ray-to-radio luminosity ratio of a break in the particle distribution we considered an ‘extreme’ case where the particle spectrum flattens to $p=1$ (i.e. as expected from adiabatic cooling and flatter than for radiative cooling) just below $\sim \gamma_{\rm R}$. The corresponding luminosity ratio is shown in Fig. 1a: for $B < 10 \mu$G the ratio is $\approxgt 0.3$ at $z\approxgt 1$.
It is worth noticing that these considerations also account for an inhomogeneous radio emitting volume (equivalent to the added contributions from different regions), as the above results would refer to the X-ray luminosity expected from the region dominating the radiative output at 151 MHz. In this case however the observed spectrum would not be indicative of the shape of the emitting particle distribution: if the latter is steeper than $p=2.6$ the above estimates (for $p=2.6$) provide a lower limit to $\nu_{\rm x} L_{\rm
x}/\nu_{\rm R} L_{\rm R}$. The reported estimates for $p=1$ provide instead a lower limit for any flat distribution with $p\approxgt 1$.
We conclude that radio sources constitute a potentially significant population of extended X-ray emitting objects.
**High redshift extended X-ray emitters**
=========================================
In order to estimate the number density and thus the possible contribution of radio sources to X-ray surveys (see also Schwartz 2002), we consider the luminosity function of steep radio sources, as determined from surveys at 151 MHz. This has a twofold advantage: it avoids or at least reduces contamination from compact components (with respect to the extended lobe/cocoon emission) and it takes advantage of the most recent and deeper radio surveys.
In particular we adopt the luminosity function and evolution determined by Willott et al (2001) from the 3CRR, 6CE and 7CRS samples [^2]. They parametrized (see also Dunlop & Peacock 1990) the radio source luminosity function and its evolution as due to two (distinct) populations: FR I plus low excitation line (LEG) FR II sources and high excitation line (HEG) FR II, representing low and high radio power sources, respectively (Fanaroff & Riley 1974, see Laing et al 1994, Jackson & Wall 1999). We associate an extended X-ray luminosity with each 151 MHz radio source as determined above: Fig. 3 shows the corresponding number density of the two populations of X-ray sources at different redshifts assuming a conservative (and constant) radio-to-X-ray (151 MHz and 1 keV) conversion factor $L_{\rm
x} \nu_{\rm x}/L_{\rm R} \nu_{\rm R} = 1$.
For comparison, Fig. 3 also reports the luminosity function and evolution of X-ray clusters in the (0.5-2) keV band following the parametrization by Rosati et al (2000) – although the evolution at luminosities $<$ few$\times 10^{44}$ erg s$^{-1}$ is currently not robustly determined by the data. It is apparent that at redshift $z\approxgt 1$ the radio source population starts becoming comparable to or even exceeding the expected cluster number density at the high X-ray luminosity end, $L_{\rm x} > 10^{44}$ erg s$^{-1}$. Because of the weaker cosmological evolution and intrinsic lower luminosity of the FR I+FR II/LEG population, the major contribution is provided by the high radio luminosity, high excitation line, FR II radio galaxies.
**Discussion**
==============
The above estimates indicate that powerful radio galaxies are expected to be found in significant numbers at redshift $z\approxgt 1$ as extended X-ray sources. Indeed, we expect that most old distant radio galaxies are also extended X–ray sources.
It should be stressed that for any $B< B_{\rm x}$ the above estimates provide a [*lower*]{} limit on both the individual luminosity and the number density of extended X–ray emitting radio sources for the following reasons. Firstly, extended X-ray emission can also be produced by non-thermal particles which do not contribute to the 151 MHz emission, further increasing the actual X-ray luminosity with respect to the estimates given above: lobes, cocoons, relics and jets can emit not only as inverse Compton emission on the CMB, but also via other emission processes such as synchrotron self-Compton, bremsstrahlung and inverse Compton on other photon fields, such as far-infrared photons in the vicinity of the massive, extremely luminous galaxies detected in the sub-mm at high redshifts. Secondly, the radio-emitting electrons cool faster than the X-ray emitting ones (for $B< B_{\rm x}$). Thus the radio luminosity function might significantly underestimate, by a factor corresponding to the relative cooling times $t_{\rm cool} (\gamma_{\rm R})/t_{\rm cool} (\gamma_{\rm
x})$, the number density of sources and the volume pervaded by non-thermal electrons with energy $\sim \gamma_{\rm x}$ contributing to the X-ray emission. In other words the estimates above refer to ’prompt’ X-ray emission only over the cooling timescale due to radio emission. Taking into account this ratio effectively increases the normalization of the luminosity function of radio sources by factors $\sim 3-10$ (see Fig. 1b). In this respect it is interesting to notice that indeed in the case of 3C294 (Fabian et al 2001) the X-ray emission extends much further than the radio structure (visible at 5 GHz), indicating that $B<B_{\rm x}$ there. Consequently, extended non-thermal X-ray emission is not necessarily associated with a currently active radio source, instead providing information on the past radio behaviour of a galaxy.
We conclude that deep X–ray surveys should detect a significant population of extended X-ray sources associated with both ‘live and dead’ radio sources. At redshift $z\approxgt 1$ their space density should be at least comparable to or possibly larger than that of high X–ray luminosity, high redshift clusters. Caution in the interpretation of the origin of extended X–ray emission of course applies to that associated with high redshift radio sources, treated as beacons for clusters. While the presence of a radio source could still by itself be a cluster tracer (the radio-emitting plasma is probably confined by some intracluster or intragroup gas), the inferred luminosities could lead to misleading results on the cluster luminosity function and thus evolution. Particularly ambiguous situations for disentangling X–ray emission from cluster gas or a present/past AGN activity can arise when only inverse Compton emission can be detected at the location of a cluster, or when the surface brightness of inverse Compton emission corresponds to that of the surrounding intracluster medium[^3].
Spectral information, including evidence for a thermal component and/or an iron line feature, such as detected in the cluster RDCS 1252.9-2927 at $z=1.24$ (Rosati et al 2004) and RXJ1053.7+5735 at $z=1.14$ (Hashimoto et al 2004), will be key to disentangling the thermal and non-thermal contributions. The search for extended X–ray sources in deep $Chandra$ fields (Bauer et al 2002) indicates a surface density of about 150 deg$^{-2}$ at soft X–ray fluxes $> 3\times 10^{-16}$ erg cm$^{-2}$ s$^{-1}$. A further potential diagnostic is that non-thermal emission is also expected at energies $\gg 1$ keV. Although a study of the effective extension of the particle distribution requires a knowledge of the acceleration processes at work as well as of the source history (see Sarazin 1999), if a source is radio emitting at 151 MHz, then for any given $B$ the spectrum produced via the scattering of CMB photons extends at least up to $\sim 40\,(1+z)\,B^{-1}_{-6}$ keV, and possibly to a few hundred MeV (for $\gamma \sim 10^6$). Because of photon redshifting, detection of any emission above a few keV at high $z$ would be a clear signature of a non-thermal component, presumably due to inverse Compton emission. Interestingly, one out of the six extended X-ray sources detected by Bauer et al. (2002) has a high temperature with respect to the $L_{\rm x}-T$ cluster correlation.
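As a quick numerical illustration of the scaling just quoted, the short Python sketch below (ours; the function name and the sample values of $B$ and $z$ are purely illustrative) evaluates the minimum extension of the inverse Compton spectrum, $\sim 40\,(1+z)\,B^{-1}_{-6}$ keV, for a source that is radio emitting at 151 MHz.

```python
# Illustrative only: evaluates the relation quoted in the text for the minimum
# extension of the IC/CMB spectrum of a source radio emitting at 151 MHz.
def e_ic_cutoff_keV(z, B_muG):
    """Approximate upper photon energy (keV) of the IC/CMB emission,
    for a magnetic field B_muG in microgauss at redshift z."""
    return 40.0 * (1.0 + z) / B_muG

for z in (1.0, 2.0):
    for B in (1.0, 3.0, 10.0):
        print(f"z = {z:.0f}, B = {B:4.1f} muG  ->  E_IC extends to >~ "
              f"{e_ic_cutoff_keV(z, B):6.0f} keV")
```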
The inverse Compton emission is a direct result, and also a probe, of the major non-gravitational energy injection phase of the present-day intracluster medium. In fact, the radio luminosity of radio galaxies grossly underestimates the intrinsic power of their jets, which can be 1000 or more times greater. The radio-emitting plasma is likely to be confined by some intracluster or intragroup gas, which is displaced outward. The energy dumped into the immediate surroundings of such sources can be considerable and can thereby influence the gaseous properties of clusters and groups (Ensslin et al 1997; Valageas & Silk 1999; Wu, Fabian & Nulsen 2000). Integrating the radio galaxy luminosity function over time leads to a comoving energy input of about $10^{57}$ (Inoue & Sasaki 2001) which can (pre)heat – although the precise mechanism is not yet clear – the intracluster medium by 1–2 keV per particle, thus explaining much of the non-gravitational scaling behavior of groups and clusters (e.g. Lloyd-Davies et al 2000). The intracluster or intragroup medium will be highly disturbed during the energy injection phase and for up to a core crossing time (about a Gyr) after. This means that the detection and interpretation of Sunyaev-Zeldovich signals from high redshift clusters (Carlstrom, Holder & Reese 2002) may be complicated.
Alternatively, the lack of detection of a large population of non-thermal extended X-ray emitters would provide interesting information about the radio source/cluster magnetic field evolution. This would suggest that the radio emitting particles have lower energy than the X-ray emitting ones, indicating $B>B_{\rm x}$, and thus point to positive evolution in the magnetic field associated with the diffuse radio emission at higher redshifts, although it would be difficult to precisely quantify and interpret such a result.
**Acknowledgments**
===================
We thank the anonymous referee for helpful criticisms. The Italian MIUR and INAF (AC) and the Royal Society (ACF) are thanked for financial support.
Bauer F.E., et al. 2002, AJ, 123, 1163
Blanton E., Sarazin C.L., McNamara B.R., Wise M.W., 2001, ApJ, 558, L15
Carilli C.L., Harris D.E., Pentericci L., Röttgering H.J.A., Miley G.K., Kurk J.D., van Breugel W., 2002, ApJ, 567, 781
Carilli C.L., Taylor G.B., 2002, ARA&A, 40, 319
Carlstrom J.E., Holder G.P., Reese E.D., 2002, ARA&A, 40, 643
Celotti A., Ghisellini G., Chiaberge M., 2001, MNRAS, 321, L1
Chartas G., et al, 2000, ApJ, 542, 655
Cooke B.A., Lawrence A., Perola G.C., 1978, MNRAS, 182, 661
Comastri A., Brunetti G., Dallacasa D., Bondi M., Pedani M., Setti G., 2003, MNRAS, 340, L52
Crawford C.S., Fabian A.C., 2003, MNRAS, 339, 1163
Ensslin T.A., Biermann P.L., Kronberg P.P., Wu X.-P., 1997, ApJ, 477, 560
Fabian A.C., et al, 2000, MNRAS, 318, L65
Fabian A.C., Crawford C.S., Ettori S., Sanders J.S., 2001, MNRAS, 322, L11
Fabian A.C., Sanders J.S., Crawford C.S., Ettori S., 2003, MNRAS, 341, 729 (astro-ph/0301468)
Fabian A.C., Celotti A., Johnstone R.M., 2003, MNRAS, 338, L7
Felten J.E., Morrison P., 1966, ApJ, 146, 686
Felten J.E., Rees M.J., 1967, Nature, 221, 924
Feretti L., 2003, proc. XXI Texas Symposium on Relativistic Astrophysics, Florence Dec. 9-13 2002 (astro-ph/0309221)
Ghisellini G., Guilbert P.W., Svensson R., 1988, ApJ, 334, L5
Harris D.E., Krawczynski H., 2002, ApJ, 565, 244
Inoue S., Sasaki S., 2001, ApJ, 562, 618
Jackson C.A., Wall J.V., 1999, MNRAS, 304, 160
Kaneda H., et al., 1995, ApJ, 453, L13
Kataoka J., Leahy J.P., Edwards P.G., Kino M., Takahara F., Serino Y., Kawai N., Martel A.R., 2003, A&A, 410, 833
Kraft R.P., et al, 2000, ApJ, 531, L9
Laing R.A., Jenkins C.R., Wall J.V., Unger S.W., 1994, in The First Stromlo Symposium: The Physics of Active Galaxies, ASP Conference Series, Vol. 54, G.V. Bicknell, M.A. Dopita, and P.J. Quinn Eds., 201
M.J., Worrall D.M., 2002, ApJ, 569, 54
Lloyd-Davies E.J., Ponman T.J., Cannon D.B., 2000, MNRAS, 315, 689
McNamara B.R., et al, 2000, ApJ, 534, L135
Petrosian V., 2001, ApJ, 557, 560
Rosati P., Borgani S., Della Ceca R., Stanford A., Eisenhardt P., Lidman C., 2000, Large Scale Structure in the X-ray Universe, Atlantisciences, eds. M. Plionis, I. Georgantopoulos, 13
Rosati P., et al, 2004, AJ, in press (astro-ph/0309546)
Sarazin C.L., 1999, ApJ, 520, 529
Scharf C., Smail I., Ivison R., Bower R., van Breugel W., Reuland M., 2003, ApJ, 596, 105
Schwartz D.A., 2002, ApJ, 569, L23
Schwartz D.A., et al, 2000, ApJ, 540, L69
Siemiginowska A., Smith R.K., Aldcroft T., Schwartz D.A., Paerels F., Petric A.O., 2003, ApJ, 598, L15
Tavecchio F., Maraschi L., Sambruna R.M., Urry C.M., 2000, ApJ, 544, L23
Valageas P., Silk J., 1999, A&A, 350, 725
Willott C.J., Rawlings S., Blundell K.M., Lacy M., Eales S.A., 2001, MNRAS, 322, 536
Wilson A.S., Young A.J., Shopbell P.L., 2001, ApJ, 547, 740
Wilson A.S., Young A.J., Smith D.A., 2003, Active Galactic Nuclei: from Central Engine to Host Galaxy, ASP Conf. Series, eds S. Collin, F. Combes, I. Shlosman, in press (astro-ph/0211541)
Worrall D.M., 2002, New Astron. Rev., 46, 121
Wu K.K.S., Fabian A.C., Nulsen P.E.J., 2000, MNRAS, 318, 889
Yuan W., Fabian A.C., Celotti A., Jonker P.G., 2003, MNRAS, 346, L7
[^1]: see however the recent results by Rossetti & Molendi (2003) on the Coma cluster.
[^2]: Their results have been converted here for the different assumed cosmology.
[^3]: Powerful radio galaxies and quasars at $z\sim 0.5-1$ appear to be associated with only poor to moderately rich clusters with $L_{\rm x}\sim 10^{44}$ erg s$^{-1}$ (Crawford & Fabian 2003; Worrall 2002).
---
abstract: 'In this paper we study the so-called “warp drive” spacetimes within the $U_{4}$ Riemann-Cartan manifolds of [Einstein-Cartan theory]{}. Specifically, the role that spin may play with respect to energy condition violation is considered. It turns out that with the addition of spin, the torsion terms in [Einstein-Cartan gravity ]{}do allow for energy condition respecting warp drives. Limits are derived which minimize the amount of spin required in order to have a weak/null-energy condition respecting system. This is done both for the traditional Alcubierre warp drive and for the modified warp drive of Van Den Broeck, which minimizes the amount of matter required for the drive. The ship itself is in a region of effective vacuum and hence the torsion, which in [Einstein-Cartan theory ]{}is localized in matter, does not affect the geodesic nature of the ship’s trajectory. We also comment on the amount of spin and matter required in order for these conditions to hold.'
author:
- |
[Andrew DeBenedictis [^1]]{}\
*[The Pacific Institute for the Mathematical Sciences]{}\
*[and]{}\
*[Department of Physics, Simon Fraser University,]{}\
*[Burnaby, British Columbia, V5A 1S6, Canada ]{}\
\
\
*[Department of Applied Physics, Faculty of Electrical Engineering and Computing, University of Zagreb]{}\
*[HR-10000 Zagreb, Unska 3, Croatia ]{}******
date: '[September 7, 2018]{}'
title: '****'
---
------------------------------------------------------------------------
[PACS numbers: 04.40.-b]{}\
[Key words: Einstein-Cartan gravity, spin, energy conditions ]{}\
[Introduction]{}
================
There is no doubt that general relativity, with its description of a dynamical spacetime, is one of the most fascinating of physical theories. In its over one-hundred year history it has changed our understanding of the universe dramatically. For example, general relativity has provided an explanation for the residual perihelion precession of the planets [@ref:perihelion], and has predicted an expanding universe [@ref:expandingstart], [@ref:expandingend]. To date, general relativity, with the introduction of dark matter and dark energy, has passed experimental tests of very high precision [@ref:willbook]. With the more recent direct detection of gravitational waves from black hole and neutron star events, the tests of general relativity are no longer restricted merely to the weak-field regime. It can easily be argued, therefore, that general relativity remains a robust theory of gravity.
Because of these successes, any deviations of gravitational theory from general relativity are highly restricted. If the theory of classical gravity is not general relativity, then it must be very close to it, matching it almost exactly in the regimes in which gravity has been tested accurately. One rather interesting theory is that of [Einstein-Cartan gravity ]{}[@ref:einstcart], [@ref:einstcart2]. It is arguably the simplest extension of gravity which includes torsion and does not alter the dynamical spacetime picture of general relativity. In fact, the [Einstein-Cartan theory ]{}may be viewed simply as general relativity supplemented with torsion. In [Einstein-Cartan gravity ]{}the spacetime possesses torsion as well as curvature, and in the limit that torsion vanishes it coincides exactly with general relativity. One reason [Einstein-Cartan theory ]{}has not been ruled out is that the effects of torsion in this theory are rather difficult to measure. It is the spin of matter which couples to the antisymmetric part of the connection, and the spin is directly proportional to the modified torsion tensor. Therefore, outside of matter, there are no torsion effects, and gravitation is fully governed by general relativity, although the vacuum solution may now differ somewhat compared to that of pure general relativity, due to the source term having been modified by the spin. The small magnitude of the spin-torsion coupling, along with the fact that experiments validating or ruling out torsion effects must be done within matter possessing significant spin content, means that performing experiments which may invalidate [Einstein-Cartan gravity ]{}is very difficult.
Under extreme conditions, however, it may be that the spin density of matter becomes large enough to produce serious deviations from pure general relativity. For example, in cosmology [Einstein-Cartan theory ]{}has been shown to eliminate the big bang singularity [@ref:ectbigbang]. As well, [Einstein-Cartan gravity ]{}may naturally explain the flatness and horizon problems, due to the presence of small torsion densities [@ref:popinfl]. In the realm of black holes it has been shown that the torsion leads to a non-singular bounce in gravitational collapse that otherwise would lead to a singularity within general relativity, [@ref:ziaie],[@ref:hashemicollapse]. Regarding more exotic solutions, wormholes have been studied within [Einstein-Cartan theory ]{}[@ref:bronwh], [@ref:ectwhsols]. Other studies in spacetimes with torsion include Maxwell fields [@ref:katkarmax], Proca fields [@ref:seitzproca], and Dirac fields [@ref:ecdiracthesis]. (See also references therein.)
Many of the above studies indicate that torsion in [Einstein-Cartan theory ]{}acts as a moderating effect in gravitation. That is, torsion often softens the effects of gravity, eliminating seemingly unphysical effects in gravitational theory. It is with this in mind that we study here what are known in the literature as warp drive spacetimes within [Einstein-Cartan theory]{}. It is known that within general relativity (GR) warp drive spacetimes must violate energy conditions [@ref:alcub], [@ref:lobobook] and therefore it is of interest to study if the energy condition violation may be eliminated by the torsion effects of [Einstein-Cartan gravity ]{}. For this we utilize the [Weyssenhoff ]{}spin fluid description of matter [@ref:weys], supplemented with spinless auxiliary structure, whose rationale is described later. The [Weyssenhoff ]{}description of matter has been rather successful in various studies in [Einstein-Cartan theory ]{}[@ref:weysstudystart] - [@ref:weysstudyend] and it represents a fluid model whose spin content is manifest.
We choose to study the warp drive spacetimes for several reasons. Perhaps the primary reason is that it is generally interesting to study exactly what established theories may predict under extreme situations. There may be much that can be learned from studying the extreme limits of a physical theory. In this vein there is ample pedagogical value in studying exotic solutions to gravitational field theory. This, for example, was a primary motivation in the now classic paper of Morris and Thorne [@ref:morthorne] (see also [@ref:hiscock]). Also, exotic solutions are interesting to study in their own right, as they serve to illustrate the richness of the solution space of field theories. There is no doubt that the warp drive, though not practically feasible, is an interesting solution to the gravitational field equations in much the same way as, for example, the Gödel universe is [@ref:godel]-[@ref:godelreview].
A brief review of the [Einstein-Cartan theory ]{}of gravity {#sec:ect}
===========================================================
We give here a short review of [Einstein-Cartan gravity ]{}and of the [Weyssenhoff ]{}fluid. Units will be used such that $G=c=\hbar=1$, and these factors will be reinstated in the final analysis.
In the microscopic realm of relativistic matter, the rotational sector of the Lorentz group naturally classifies elementary particles within unitary representations of the group’s rotations. The spin notion of a particle is therefore just as elementary as its mass. It seems interesting then that matter’s spin content, unlike its mass counterpart (and by extension energy and momentum) does not play a role as a source of gravity in general relativity theory. The motivation behind [Einstein-Cartan theory ]{}is to eliminate this asymmetry between mass and spin and so also include spin as a source of gravitation. Interestingly, since spin is a fundamental quantum mechanical property of matter, it may be that a quantum theory of gravity must include a spin coupling to gravitation in order to be fully consistent.
The key ingredient of [Einstein-Cartan theory ]{}that causes deviations from Einstein gravity is the presence of non-zero torsion, $T_{\beta\gamma}^{\;\;\;\alpha}$, in the spacetime affine connection $\Gamma^{\alpha}_{\;\beta\gamma}$: $$T_{\beta\gamma}^{\;\;\;\alpha}:=\frac{1}{2}\left[\Gamma^{\alpha}_{\;\beta\gamma} - \Gamma^{\alpha}_{\;\gamma\beta} \right]\,. \label{eq:torsion}$$ Since in general relativity the symmetric Christoffel connection, $\Gamma^{\alpha}_{\;\beta\gamma}\rightarrow\left\{{\alpha \atop \beta\,\gamma}\right\}$, is utilized, torsion identically vanishes everywhere in Einstein gravity. It should be noted that although the connection is not a tensor, the torsion, being a difference of connections, is. Further, in order to make contact with the well-tested theory of special relativity, it is demanded that the metric tensor is covariantly constant, $$\nabla_{\mu}\,g_{\alpha\beta} = 0\, , \label{eq:covconstmet}$$ where the covariant derivative in (\[eq:covconstmet\]) is with respect to the full connection, $\Gamma^{\alpha}_{\;\beta\gamma}$. A manifold in which (\[eq:covconstmet\]) holds is called a $U_{4}$ manifold. Further, if one restricts the torsion to zero one has a Riemannian manifold, while if one instead restricts the curvature, but not the torsion, to zero, one has a Weitzenböck manifold. Here we are concerned with the Riemann-Cartan manifold, $U_{4}$, allowing for both curvature as well as torsion. Whereas upon transport around an infinitesimal closed loop curvature yields a holonomy in the angle of a vector (see figure \[fig:holonomy\]a), torsion preserves the orientation of the vector but instead induces a holonomy in its translation in the tangent space (see figure \[fig:holonomy\]b).
![[]{data-label="fig:holonomy"}](holonomy.pdf){width="\textwidth"}
In [Einstein-Cartan gravity ]{}the action is postulated to resemble that of general relativity, $$I=\int d^{4}x\,\sqrt{-g} \left[-\frac{1}{2\kappa} R + \mathscr{L}_{{\mbox{{\tiny{m}}}}}\right]\,, \label{eq:action}$$ with $\kappa=8\pi G/c^{4}$ (in our units $\kappa = 8 \pi$), $g$ the determinant of the metric tensor, and $\mathscr{L}_{{\mbox{{\tiny{m}}}}}$ the matter Lagrangian density. It should be noted that the curvature scalar $R$ is to be calculated with the full connection of the theory. Specifically, the general connection is given by $$\Gamma^{\alpha}_{\;\beta\gamma}={\tiny{{\ensuremath{\left\{\hspace{-0.15cm}\!
\begin{array}{l}
{\alpha} \\
{\beta}\,{\gamma}
\end{array}
\!\hspace{-0.15cm}\right\}}}}} - K_{\beta\gamma}^{\;\;\;\alpha}\,, \label{eq:connection}$$ where the last quantity is known as the contorsion (sometimes contortion) tensor $$K_{\beta\gamma}^{\;\;\;\,\alpha}:=T_{\gamma\;\;\beta}^{\;\,\alpha}-T_{\beta\gamma}^{\;\;\;\,\alpha} - T^{\alpha}_{\;\,\beta\gamma}\,. \label{eq:contortion}$$ Since the connection is not symmetric, the Ricci tensor in [Einstein-Cartan theory ]{}will also in general not be symmetric. One must of course assume here that the metric tensor and the connection are independent quantities.
In [Einstein-Cartan theory ]{}the equations of motion are derived via variation of (\[eq:action\]) with respect to the metric and also with respect to the contorsion tensor (\[eq:contortion\]), which represents the non-metric part of the independent connection. This results in two sets of equations:
[$$\begin{aligned}
G^{\mu\nu} - \left(\nabla_{\alpha} + 2T_{\alpha\lambda}^{\;\;\;\lambda}\right)\left[S^{\mu\nu\alpha} - S^{\nu\alpha\mu} +S^{\alpha\mu\nu}\right] = & \kappa \mathcal{T}^{\mu\nu}\,, \label{eq:eom1} \\[0.1cm]
S^{\alpha}_{\;\beta\gamma}= & \kappa \tau^{\alpha}_{\;\beta\gamma}\,, \label{eq:eom2}\end{aligned}$$]{}
with $G_{\mu\nu}$ the Einstein tensor created out of the full (non-symmetric) connection, and $\mathcal{T}^{\mu\nu}$ the usual (symmetric) stress-energy tensor. The covariant derivative in (\[eq:eom1\]) is also with respect to the full connection. The above two sets of equations constitute the Einstein-Cartan field equations. It should be mentioned here that the left-hand side of (\[eq:eom1\]) is actually symmetric overall, as it must be in order to equal its right-hand side. In (\[eq:eom2\]) the quantity $S^{\alpha}_{\;\beta\gamma}$ represents the modified torsion tensor, sometimes known as the superpotential: $$S_{\alpha\beta}^{\;\;\;\;\gamma}:=T_{\alpha\beta}^{\;\;\;\gamma}+\delta^{\gamma}_{\;\alpha}T_{\beta\;\;\,\lambda}^{\;\,\lambda}-\delta^{\gamma}_{\;\beta}T_{\alpha\;\;\,\lambda}^{\;\,\lambda}\,.$$ In (\[eq:eom2\]) there is also present the dynamical spin tensor, $\tau^{\alpha}_{\;\beta\gamma}$. This quantity is the spin analog of the stress-energy tensor. That is, it contains the physical spin content of the theory.
By substituting equation (\[eq:eom2\]) in lieu of the superpotential terms in (\[eq:eom1\]) one may re-write (\[eq:eom1\]) as $$G^{\mu\nu}=R^{\mu\nu}-\frac{1}{2}R\,g^{\mu\nu}= \kappa \Theta^{\mu\nu}\,, \label{eq:eomb}$$ where the Ricci tensor is the non-symmetric tensor constructed out of the full connection, and $R$ is its trace. $\Theta_{\mu\nu}$ is the non-symmetric *canonical* stress-energy tensor with spin content. The antisymmetric part of (\[eq:eomb\]) is automatically satisfied and therefore (\[eq:eomb\]) is equivalent to (\[eq:eom1\]) and (\[eq:eom2\]). Explicitly written, with all torsion and modified torsion terms replaced by spin tensors via (\[eq:eom2\]), the surviving terms in (\[eq:eom1\]) or (\[eq:eomb\]) yield the following equation: $${\scalebox{0.90}{$G^{\mu\nu}\left({\tiny{{\ensuremath{\left\{\hspace{-0.15cm}\!
\begin{array}{l}
{\alpha} \\
{\beta}\,{\gamma}
\end{array}
\!\hspace{-0.15cm}\right\}}}}}\right)-\kappa^{2}\left[\tau^{\lambda\sigma\mu}\tau_{\lambda\sigma}^{\;\;\;\nu} -2 \tau^{\mu\lambda\sigma}\tau^{\nu}_{\;\lambda\sigma} -4 \tau^{\mu\lambda}_{\;\;\;[\sigma}\tau^{\nu\sigma}_{\;\;\;\lambda]} +\frac{1}{2}g^{\mu\nu}\left( 4 \tau_{\rho\;\;\;[\sigma}^{\;\lambda} \tau^{\rho\sigma}_{\;\;\;\lambda]} + \tau^{\rho\lambda\sigma}\tau_{\rho\lambda\sigma}\right) \right] = \kappa \mathcal{T}^{\mu\nu}\,,$}} \label{eq:goodeom}$$ with $G^{\mu\nu}\left({\tiny{{\ensuremath{\left\{\hspace{-0.15cm}\!
\begin{array}{l}
{\alpha} \\
{\beta}\,{\gamma}
\end{array}
\!\hspace{-0.15cm}\right\}}}}}\right)$ the Einstein tensor created from the Christoffel connection[^2].
In [Einstein-Cartan theory ]{}the algebraic structure of equation (\[eq:eom2\]) dictates that the modified torsion tensor vanishes wherever the spin tensor vanishes. Therefore, any torsion effects are manifest only inside of matter. Outside of matter the theory is equivalent to general relativity. As well, outside of matter, test particles (meaning here particles whose stress-energy and spin may be neglected for gravitational purposes) will follow the usual geodesic equation of general relativity. These properties of [Einstein-Cartan gravity ]{}will be particularly desirable for the study here.
The physics of the spin is contained in the tensor $\tau^{\alpha}_{\;\beta\gamma}$, in much the same way as the physics of energy is contained in the stress-energy tensor. One may, for example, prescribe certain components of $\tau^{\alpha}_{\;\beta\gamma}$ from some reasonable physical demands, being careful not to prescribe more components than the number of independent equations allow. Alternatively, one may resort to other physical theories, such as the theory of Dirac particles in order to construct a spin tensor out of Dirac spinors. In this work we will utilize the [Weyssenhoff ]{}fluid description of matter along with a supplementary structure. As mentioned in the introduction, the [Weyssenhoff ]{}fluid has been used in a number of interesting studies in [Einstein-Cartan theory ]{}[@ref:weysstudystart] - [@ref:weysstudyend]. This model represents a fluid whose elements possess net (intrinsic) spin as well as stress-energy content.
It is useful to construct the spin tensor via a second-rank tensor, $\tau_{\alpha\beta}$, as: $$\tau_{\alpha\beta}^{\;\;\;\;\gamma}= \tau_{\alpha\beta}u^{\gamma}\,, \label{eq:secondspin}$$ with $u^{\gamma}$ the 4-velocity of the fluid. The tensor $\tau_{\alpha\beta}$ is often known as the spin density. It is antisymmetric and is often subject to the restriction $$\tau_{\alpha\beta}u^{\beta}=0\,. \label{eq:frenkel}$$ This last equation is often referred to as the Frenkel condition [@ref:frenkel]. It encodes a statement about the spacelike nature of spin. There is some debate on the suitability of enforcing the Frenkel condition when it comes to cosmological applications [@ref:frenkeldebate]; strictly speaking this is not relevant for the calculations below, as the present study is not within that realm.
In the simplest of scenarios the matter will be unpolarized. That is, the spins would be oriented randomly. This implies that the average of the spin density tensor would vanish; i.e. $\langle \tau_{\alpha\beta}\rangle=0$, as well as its gradients. However, there are spin contributions in (\[eq:goodeom\]) which are quadratic in the spin, and it is generally not true that $\langle \tau_{\alpha\beta}\tau^{\alpha\beta}\rangle=0$. Hence, the quadratic contributions from spin in a macroscopic average will still contribute to the Einstein-Cartan field equations [@ref:hehl]. This therefore allows one to write equation (\[eq:goodeom\]) for the [Weyssenhoff ]{}fluid as [@ref:hehl], [@ref:gasperiniprl] $$G^{\mu\nu}\left({\tiny{{\ensuremath{\left\{\hspace{-0.15cm}\!
\begin{array}{l}
{\alpha} \\
{\beta}\,{\gamma}
\end{array}
\!\hspace{-0.15cm}\right\}}}}}\right)-\kappa^{2}s^2\left(-2 u^{\mu}u^{\nu}-g^{\mu\nu}\right)=\kappa \mathcal{T}^{\mu\nu}\,, \label{eq:ssquaredeqn}$$ where here the notation $s^{2}:=\tau_{\alpha\beta}\tau^{\alpha\beta}$ has been employed. The vector $u^{\mu}$ represents the local 4-velocity of the fluid. We take it to have the same functional form as that of the ship.
A few comments are in order before proceeding. First, the spin contribution, $s^{2}$, is in principle prescribable. However, reasonable physics dictates that the larger the particle content, the larger the $s^{2}$ contribution. Therefore it may be desirable to make $s^{2}$ proportional to the fluid energy density. Second, the stress-energy tensor of a fluid is algebraically incompatible with the symmetries required for the warp drive metrics. Therefore, one cannot simply use a perfect fluid (or even an anisotropic fluid) stress-energy tensor on the right-hand side of (\[eq:ssquaredeqn\]). There must be some spinless auxiliary structure to the matter in order to bring the algebraic class of the right-hand side to compatibility with the left-hand side. In the analysis below we leave $\mathcal{T}^{\mu\nu}$ free. In other words, it is whatever is required in order to create the warp drive. It will in general be algebraically decomposable as Segre characteristic $$\left[1,(1,1,1)\right] + \mbox{aux}\,,$$ where “aux” represents the residual algebraic structure (non fluid structure) of the left-hand side of the equation (\[eq:ssquaredeqn\]). That is, the net stress-energy content is that of the spin fluid plus any auxiliary matter required for algebraic compatibility.
The warp drive in [Einstein-Cartan theory ]{} {#sec:warpdrive}
=============================================
The traditional warp drive {#sec:alcubierre}
--------------------------
We will first analyze the original warp drive of Alcubierre but within [Einstein-Cartan theory ]{}[@ref:alcub]. This is arguably the most studied of such metrics. Its line element takes the form $${\rm d}s^{2}=-{\rm d}t^{2} +\left[{\rm d}z -v_{\rm{s}}(t) f(x,y,z-z_{\rm{s}}(t))\,{\rm d}t\right]^{2} + {\rm d}x^{2} + {\rm d}y^{2}\,. \label{eq:alcumetric}$$ Here the quantity $v_{\rm{s}}(t)$ represents the coordinate velocity of the ship, $v_{\rm{s}}(t)={\rm d}z_{\rm{s}}(t)/{\rm d}t$, so that the ship is moving in the $z$ direction. Due to the complexity of many of the expressions required for calculation we will consider the ship velocity to be constant. The fact that energy conditions may be respected even in an accelerating scenario can be deduced from the lemma below. The Christoffel-Einstein tensor, $G_{\mu\nu}\left({\tiny{{\ensuremath{\left\{\hspace{-0.15cm}\!
\begin{array}{l}
{\alpha} \\
{\beta}\,{\gamma}
\end{array}
\!\hspace{-0.15cm}\right\}}}}}\right)$, constructed out of this metric, is presented in the Appendix. The function $f(x,y,z-z_{\rm{s}}(t))$ is required to be “top-hat”-like, with a value of $0$ outside the warp bubble and a value of $1$ inside. Specifically, Alcubierre chose $$f\left(r_{\rm{s}}(t)\right)=\frac{\tanh\left[\sigma\left(r_{\rm{s}}(t)+P\right)\right] - \tanh\left[\sigma\left(r_{\rm{s}}(t)-P\right)\right]}{2\tanh(\sigma P)}\,, \label{eq:alcuf}$$ where ${\scalebox{0.90}{$r_{\rm{s}}(t)=\left\{x^{2}+y^{2}+ \left[z-z_{\rm{s}}(t)\right]^{2}\right\}^{1/2}$}}$, $P$ is the “radius” of the warp bubble, and $\sigma$ is a parameter which controls how close $f\left(r_{\rm{s}}(t)\right)$ is to a true top-hat function. In this version of the warp drive there are contracting and expanding volume elements near the ship. However, it should be stressed that this is simply a by-product of the metric (\[eq:alcumetric\]). The contraction has little to do with the arbitrarily high velocity of the warp bubble. The ship does not reside in that region of the spacetime and, in fact, by a modification one may construct a similar warp drive without the contraction of the volume elements [@ref:noncontract].
Staying within the paradigm of general relativity for the moment, it is easy to see that the spacetime generated by the metric (\[eq:alcumetric\]) violates the weak energy condition (WEC). To see this let us consider observers in free-fall whose 4-velocity is given by $$[u^{\mu}]=\left[1,\,0,\,0, v_{s} f(r_{\rm{s}}(t))\right]\,. \label{eq:4vel}$$ In accordance with the literature we will refer to such observers as Eulerian. As long as the observer is a test particle (meaning his/her stress-energy and, in [Einstein-Cartan gravity ]{}also spin structure, may be neglected), this observer will correspond to one in free-fall. One may calculate the following quantity relevant to the WEC: $$G_{\mu\nu}u^{\mu}u^{\nu}=\kappa \mathcal{T}_{\mu\nu}u^{\mu}u^{\nu}=-\frac{v_{s}^{2}}{4} \left[ \left(\partial_{x}f\right)^{2} + \left(\partial_{y}f\right)^{2} \right]\,, \label{eq:alcuecond}$$ using (\[eq:4vel\]) and of course the Christoffel connection for $G_{\mu\nu}$, as we are currently working within general relativity. Note that the right-hand side of (\[eq:alcuecond\]) is non-positive, and hence a negative energy density will be measured by the free-fall observers. The distribution of this energy density, $\mathcal{T}_{\mu\nu}u^{\mu}u^{\nu}$, is depicted in figure \[fig:alcurho\]. One may minimize the volume of exotic matter required by selecting parameters in $f$ so that the function is as close to a top-hat as reasonably possible. However, then the derivatives in (\[eq:alcuecond\]) become large, so although the volume of the WEC violating region is minimized, the severity of the violation in the region is increased.
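The algebra behind (\[eq:alcuecond\]) can be checked symbolically. The SymPy sketch below is ours and is not part of the original analysis; the variable names are arbitrary, and the warp function is kept as a generic $f(t,x,y,z)$, since this particular contraction with the Eulerian 4-velocity (\[eq:4vel\]) turns out to involve no time derivatives of $f$.

```python
# A minimal SymPy sketch (ours) reproducing Eq. (alcuecond): for the Alcubierre
# line element, Eulerian observers with u^mu = (1, 0, 0, v_s f) measure
#   G_{mu nu} u^mu u^nu = -(v_s^2/4) [ (d_x f)^2 + (d_y f)^2 ].
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
vs = sp.Symbol('v_s', positive=True)          # constant ship speed (assumption)
f = sp.Function('f')(t, x, y, z)              # generic warp function
crd = (t, x, y, z)
n = 4

# ds^2 = -dt^2 + (dz - v_s f dt)^2 + dx^2 + dy^2
g = sp.Matrix([[-1 + vs**2*f**2, 0, 0, -vs*f],
               [0,               1, 0,  0   ],
               [0,               0, 1,  0   ],
               [-vs*f,           0, 0,  1   ]])
ginv = g.inv().applyfunc(sp.simplify)

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection
Gam = [[[sum(ginv[a, d]*(sp.diff(g[d, b], crd[c]) + sp.diff(g[d, c], crd[b])
             - sp.diff(g[b, c], crd[d])) for d in range(n))/2
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{ij} = d_a Gam^a_{ij} - d_j Gam^a_{ia}
#                      + Gam^a_{ae} Gam^e_{ij} - Gam^a_{je} Gam^e_{ia}
Ric = sp.zeros(n, n)
for i in range(n):
    for j in range(n):
        term = sum(sp.diff(Gam[a][i][j], crd[a]) - sp.diff(Gam[a][i][a], crd[j])
                   for a in range(n))
        term += sum(Gam[a][a][e]*Gam[e][i][j] - Gam[a][j][e]*Gam[e][i][a]
                    for a in range(n) for e in range(n))
        Ric[i, j] = sp.simplify(term)

Rscal = sp.simplify(sum(ginv[i, j]*Ric[i, j] for i in range(n) for j in range(n)))
G = Ric - Rscal*g/2                           # Einstein tensor, lower indices

u = sp.Matrix([1, 0, 0, vs*f])                # Eulerian (free-fall) 4-velocity
rho = sum(G[i, j]*u[i]*u[j] for i in range(n) for j in range(n))

expected = -sp.Rational(1, 4)*vs**2*(sp.diff(f, x)**2 + sp.diff(f, y)**2)
print(sp.simplify(rho - expected))            # -> 0
```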
![[]{data-label="fig:alcurho"}](alcubierreRhoGR.pdf){width="3in"}
Next we wish to analyze the original warp drive within the context of [Einstein-Cartan theory]{}. One potential issue is that, in general, the motion of free-falling particles in [Einstein-Cartan gravity ]{}does not coincide with the geodesic equation. The torsion in general will couple to any spin the test particle may possess, altering its trajectory [@ref:pereirabook]. (The trajectory will be neither geodesic nor an autoparallel of the $U_{4}$ spacetime generally.) Strictly speaking, in order for the test particle (ship) to move according to the geodesic equation, the spin of the test particle must be small (required so that locally created torsion can be ignored, as required for a test particle). Also, ideally the contorsion of the spacetime should vanish in the vicinity of the test particle [@ref:handwrittenequations]. This issue will be circumvented by demanding that the solution possesses no matter (or at least very little matter) in the vicinity of the ship, and therefore the test particle is located in a region where there is no torsion, and hence general relativity, and the geodesic equation of test particles, holds. The matter field which is responsible for the warp bubble will, of course, not be in vacuum, but since it is not a test particle but part of the solution of the field equations, its distribution will be whatever is required by the field equations in order to produce metric (\[eq:alcumetric\]) and eliminate energy condition violation. Its 4-velocity, although mimicking that of the ship in functional form, is not one of free-fall.
In [Einstein-Cartan gravity ]{}there is the extra degree of freedom introduced from the presence of torsion, via the spin. This extra degree of freedom is manifest in $s^{2}$ and may be utilized in order to attempt to eliminate energy condition violation throughout the spacetime. We will concentrate our analysis on the WEC/null energy condition (NEC) specifically, which we will refer to as the WEC for simplicity, as by simple extension to null vectors the WEC can include the NEC. The WEC/NEC stipulates that for any timelike/null vector $v^{{\mu}}$, the following weak inequality must hold: $$\mathcal{T}_{{\mu}{\nu}}v^{{\mu}}v^{{\nu}} \geq 0\qquad \forall\quad v^{{\mu}}v_{{\mu}}=-1,\, 0 \,, \label{eq:wec}$$ and this inequality will be rewritten using (\[eq:ssquaredeqn\]) as $$\mathcal{T}_{\mu\nu}v^{\mu}v^{\nu}=\left(\frac{1}{\kappa}G_{{\mu}{\nu}}\left({\tiny{{\ensuremath{\left\{\hspace{-0.15cm}\!
\begin{array}{l}
{\alpha} \\
{\beta}\,{\gamma}
\end{array}
\!\hspace{-0.15cm}\right\}}}}}\right) -\kappa s^2\left(-2 u_{{\mu}}u_{{\nu}}-g_{{\mu}{\nu}}\right) \right) {v}^{{\mu}}{v}^{{\nu}}\,. \label{eq:econd2a}$$ We know that where the WEC is violated in general relativity that the first term on the right-hand side will be negative. Therefore, we wish to prove that the remaining terms on the right-hand side can always be made sufficiently positive in order to negate the WEC violation in general relativity. The proof that the WEC can always be respected is facilitated by the following simple lemma:
Define $\chi_{\mu\nu}:=\left(2 u_{{\mu}}u_{{\nu}}+g_{{\mu}{\nu}}\right)$ with $u^{\mu}$ a normalized future-pointing timelike vector field. Then, for $v^{\mu}$ a normalized timelike vector isochronous with $u^{\mu}$, the quantity $\chi_{\mu\nu}v^{\mu}v^{\nu}$ is strictly positive.
We have the expression $$\chi_{\mu\nu}v^{\mu}v^{\nu}=2 u_{{\mu}}u_{{\nu}}v^{\mu}v^{\nu}+g_{{\mu}{\nu}}v^{\mu}v^{\nu}\,, \nonumber$$ which due to the normalized nature of the vectors may be written as $$2 \left(u_{{\mu}}v^{{\mu}}\right)^{2}-1\,. \label{eq:chimunu}$$ The Cauchy-Schwarz inequality for timelike vectors states that $$-g_{\mu\nu}u^{\mu}v^{\nu} \geq \sqrt{g_{\alpha\beta}u^{\alpha}u^{\beta}\: g_{\sigma\rho}v^{\sigma}v^{\rho}}\,, \nonumber$$ which, due to the normalization of the vectors may be re-written as $$-g_{\mu\nu}u^{\mu}v^{\nu} \geq 1\,. \nonumber$$ The left-hand side of the weak inequality above is obviously positive, due to the timelike nature of $u^{\mu}$ and $v^{\mu}$ and the fact that they are isochronous (and of course also the fact it obeys the inequality). Therefore we conclude that $(u_{\mu}v^{\mu})^{2} \geq 1$. This then renders (\[eq:chimunu\]) strictly positive and thus the assertion that the WEC (\[eq:econd2a\]) may be made positive for sufficiently large $s^{2}$ is proven.
The extension to show that $\chi_{\mu\nu}v^{\mu}v^{\nu}$ is positive for null $v^{\mu}$ is straightforward.
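A quick numerical sanity check of the lemma (ours; flat spacetime is used purely for convenience, since only the pointwise algebra $\chi_{\mu\nu}v^{\mu}v^{\nu}=2(u_{\mu}v^{\mu})^{2}-1$ is being exercised):

```python
# Numerical spot check (ours) of the lemma in Minkowski space, signature (-,+,+,+):
# for normalized, future-pointing timelike u and v one finds
# chi_{mu nu} v^mu v^nu = 2 (u_mu v^mu)^2 - 1 >= 1 > 0.
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def random_future_timelike():
    """Unit future-pointing timelike vector built from a random 3-velocity."""
    v3 = rng.uniform(-0.99, 0.99, size=3)
    while v3 @ v3 >= 1.0:
        v3 = rng.uniform(-0.99, 0.99, size=3)
    gamma = 1.0 / np.sqrt(1.0 - v3 @ v3)
    return gamma * np.array([1.0, *v3])

for _ in range(5):
    u, v = random_future_timelike(), random_future_timelike()
    u_low = eta @ u                             # u_mu
    chi = 2.0 * np.outer(u_low, u_low) + eta    # chi_{mu nu} = 2 u_mu u_nu + g_{mu nu}
    val = v @ chi @ v
    print(f"chi_mn v^m v^n = {val:8.3f}  (>= 1: {val >= 1.0})")
```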
Although we have shown that the WEC can be satisfied, from a physical perspective it is desirable to accomplish this with the minimum amount of spin. That is, with the smallest $s^{2}$ allowable. For this we consider as before the left-hand side (l.h.s) of equation (\[eq:ssquaredeqn\]) as: $$\mathcal{T}_{\hat{\mu}\hat{\nu}}=\frac{1}{\kappa}\left(\mbox{l.h.s.}\right)_{\alpha\beta} e_{\hat{\mu}}^{\;\,\alpha} e_{\hat{\nu}}^{\;\,\beta}\,, \label{eq:stressprojection}$$ where we will perform the calculation in the orthonormal frame (indicated by hatted indices). Here $e_{\hat{\mu}}^{\;\alpha}$ indicate the components of the locally orthonormal tetrad, which we pick adapted to the motion as $$\left[e^{\hat \mu}{}_{\alpha}\right] = \left[ \begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
- v_s f & 0 & 0 & 1
\end{array} \right]\;,
\qquad
\left[e_{\hat \mu}{}^{\alpha}\right] = \left[ \begin{array}{cccc}
1 & 0 & 0 & v_s f \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array} \right]\,,$$ although strictly speaking, since the analysis below will be in terms of invariants, any frame would be sufficient.
In order to make the analysis of the WEC more tractable, the following parameterization for the observer 4-velocity can be chosen, which respects the condition $v^{\hat{\mu}}v_{\hat{\mu}}=-1$ without loss of generality $$\label{eq:paramvel}
[\,v^{\hat\mu}\,] = [\, \cosh\beta,\;\sinh\beta\sin\theta\cos\phi,\;
\sinh\beta\sin\theta\sin\phi,\;\sinh\beta\cos\theta\,]\,,$$ where $\beta \in \mathbb{R}$, $\theta\in(0,\,\pi)$, and $\phi \in (-\pi,\,\pi)$.
To limit the amount of spin required in order to respect the WEC, first a specific trajectory is chosen in which to calculate the following quantity throughout the spacetime: $$\left(\frac{1}{\kappa}G_{\hat{\mu}\hat{\nu}}\left({\tiny{{\ensuremath{\left\{\hspace{-0.15cm}\!
\begin{array}{l}
{\alpha} \\
{\beta}\,{\gamma}
\end{array}
\!\hspace{-0.15cm}\right\}}}}}\right) -\kappa s^2\left(-2 u_{\hat{\mu}}u_{\hat{\nu}}-g_{\hat{\mu}\hat{\nu}}\right) \right) \tilde{v}^{\hat{\mu}}\tilde{v}^{\hat{\nu}}\,. \label{eq:econd2}$$ Here the tilde indicates that $\tilde{v}^{\hat{\alpha}}$ is a specific 4-velocity. Equation (\[eq:econd2\]) is, via (\[eq:ssquaredeqn\]), the quantity which is required when the energy condition is to be calculated. This expression must be non-negative for all timelike ${v}^{\hat{\alpha}}$ in order for the WEC to be respected. The first term in (\[eq:econd2\]) is completely determined from the metric (\[eq:alcumetric\]), save for the 4-velocities, and since it is what goes into the WEC for general relativity, this term will in general be negative for warp drive metrics. If it is negative, then $s^{2}$ is to be set such that (\[eq:econd2\]) vanishes everywhere. This needs to be done specifically for the vector $\tilde{v}^{\hat{\mu}}$ which produces the most severe negative result in general relativity at any given point in the spacetime. In other words, one chooses an $s^{2}$ so that energy condition inequality violation in general relativity, for the observer who measures this violation most severely at a certain point, is canceled by the spin terms which arise in [Einstein-Cartan gravity]{}. This is to be done at every point in the spacetime, and may generally involve a different $\tilde{v}^{\hat{\mu}}$ vector at each spacetime point. It should be noted that in regions of the spacetime where the WEC would be violated within general relativity, this procedure will yield zero for the energy density as measured by the observer with 4-velocity $\tilde{v}^{\hat{\mu}}$ in [Einstein-Cartan gravity]{}. Mathematically this is allowed. However, as mentioned previously, it is unphysical that spin should be present in the absence of matter. Hence, the procedure just described yields the absolute lower limit on $s^{2}$ which is capable of generating an energy condition respecting warp drive in [Einstein-Cartan theory]{}. From a more physical perspective, one must actually increase the value of $s^{2}$, at least slightly, from this minimum value in order for the observer to measure a non-zero (and positive) energy density.
In principle, finding the minimum $s^{2}$ required (generally corresponding to the greatest WEC violation within general relativity) can be done via the standard extremization technique. One first sets the function (\[eq:econd2\])$=0$, and solves for $s^{2}$ as a function of $\beta$, $\theta$ and $\phi$ (and, of course, the coordinates). Let us call this function $\tilde{S}:=s^{2}$; the value of $s^{2}$ which yields zero for (\[eq:econd2\]). One then evaluates the gradients $$\frac{\partial \tilde{S}}{\partial \beta} =0,\quad \frac{\partial \tilde{S}}{\partial \theta} =0, \quad \frac{\partial \tilde{S}}{\partial \phi} =0\,, \label{eq:zerograd}$$ and simultaneously solves these equations for $\beta$, $\theta$, and $\phi$. Now one will have critical values of $\beta(t,\,x,\,y,\,z)$, $\theta(t,\,x,\,y,\,z)$, and $\phi(t,\,x,\,y,\,z)$. At these values the function $\tilde{S}$ will be some sort of extremum. One wishes to find which ones are *maxima* (as this corresponds to the vector $v^{\hat{\mu}}$ which produces the most negative result in the GR WEC). This is done by forming the Hessian $$H=\begin{bmatrix}
\frac{\partial^{2} \tilde{S}}{\partial \beta^{2}} & \frac{\partial^{2} \tilde{S}}{\partial \beta\,\partial\theta}& \frac{\partial^{2} \tilde{S}}{\partial \beta \,\partial\phi}\\[0.1cm]
\frac{\partial^{2} \tilde{S}}{\partial \theta \, \partial \beta} & \frac{\partial^{2} \tilde{S}}{\partial\theta^{2}}& \frac{\partial^{2} \tilde{S}}{\partial \theta \,\partial\phi}\\[0.1cm]
\frac{\partial^{2} \tilde{S}}{\partial \phi \, \partial \beta} & \frac{\partial^{2} \tilde{S}}{\partial\phi\, \partial\theta}& \frac{\partial^{2} \tilde{S}}{\partial \phi^{2}}
\end{bmatrix}_{|\mbox{\small{cp}}}\;, \label{eq:hessian}$$ where “cp” indicates at the critical points, and by studying the determinants of the principal minors: $$h_{1}:= \frac{\partial^{2} \tilde{S}}{\partial \beta^{2}}_{|\mbox{\small{cp}}}, \quad h_{2}:=\begin{vmatrix}
\frac{\partial^{2} \tilde{S}}{\partial \beta^{2}} & \frac{\partial^{2} \tilde{S}}{\partial \beta \, \partial\theta}\\[0.1cm]
\frac{\partial^{2} \tilde{S}}{\partial \theta \, \partial \beta} & \frac{\partial^{2} \tilde{S}}{\partial\theta^{2}}
\end{vmatrix}_{|\mbox{\small{cp}}}, \quad h_{3}:=\left|H\right|_{|\mbox{\small{cp}}}\,.$$ If $h_{1}$, $h_{2}$ and $h_{3}$ alternate between positive and negative at a critical point, then that critical point is a maximum. If a boundary is present it remains to be checked separately.
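As a toy illustration of the principal-minor test just described (ours; the quadratic function below is chosen purely for simplicity and has nothing to do with the warp drive expressions), the Hessian minors of $\tilde{S}=-(\beta^{2}+\theta^{2}+\phi^{2})$ at its critical point $(0,0,0)$ alternate in sign starting from a negative $h_{1}$, identifying a maximum:

```python
# Toy SymPy example (ours) of the principal-minor criterion for a maximum.
import sympy as sp

beta, theta, phi = sp.symbols('beta theta phi', real=True)
S_tilde = -(beta**2 + theta**2 + phi**2)

H = sp.hessian(S_tilde, (beta, theta, phi)).subs({beta: 0, theta: 0, phi: 0})
minors = [H[:k, :k].det() for k in (1, 2, 3)]
print(minors)   # [-2, 4, -8]: alternating signs starting negative -> maximum
```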
The above procedure may be implemented in principle. However, in practice the warp drive spacetime is rather complicated, and the expression (\[eq:econd2\]) is rather unwieldy. The lengths of the resulting expressions obscure any chance of reasonable analysis. Instead we are forced to resort to a somewhat simplified scenario. In other words we choose a special class of observers and calculate the spin required to respect the WEC inequality for this class of observers. Since the spacetime is $x$, $y$ symmetric, we will choose the class of observers boosted in the $z$ direction. In this scenario the 4 velocities (\[eq:paramvel\]) simplify to $$[\,v^{\hat{\mu}}\,]
= [\,\cosh \beta,\, 0,\, 0,\, \sinh \beta\,]\,. \label{eq:zvel}$$ We will find the value of $\beta$ which produces the most serious WEC violation in general relativity, and construct $s^{2}$ such that it just cancels out this violation (keeping in mind the comment earlier that $s^{2}$ should be at least slightly larger than this). Solving for $\kappa^{2}s^{2}$ under this assumption yields $$\begin{aligned}
\kappa^{2}s^{2} = & - \frac{
G_{\hat\mu\hat\nu}\left({\tiny{{\ensuremath{\left\{\hspace{-0.15cm}\!
\begin{array}{l}
{\alpha} \\
{\beta}\,{\gamma}
\end{array}
\!\hspace{-0.15cm}\right\}}}}}\right)
v^{\hat{\mu}}v^{\hat{\nu}}
}{
2(u_{\hat{\alpha}}v^{\hat{\alpha}})^{2} - 1} \nonumber \\
= & \mbox{} \frac{v_{\rm{s}}^{2}}{4} \left( 2 - \operatorname{sech}(2\beta) \right)
\left((\partial_{x}f)^{2} + (\partial_{y}f)^{2}\right)
- \frac{v_{\rm{s}}}{2} \tanh(2\beta) \left(\partial^{2}_{x}f + \partial_{y}^{2}f\right)\,. \label{eq:spinexpr}\end{aligned}$$ Setting the derivative with respect to $\beta$ equal to zero to find the extrema yields $$\sinh(2\beta)_{|\mbox{\small{cp}}} =
\frac{2(\partial_{x}^{2}f +\partial_{y}^{2}f)}{v_{\rm{s}}
\left((\partial_{x}f)^{2} + (\partial_{y}f)^{2}\right)}\,$$ which gives the extrema of $\kappa^{2}s^{2}$ as $$\kappa^{2}s^{2} = \frac{v_{\rm{s}}^{2}}{2} \left(
(\partial_{x} f)^{2}+(\partial_{y} f)^{2}
- \sqrt{
\frac14 \big( (\partial_{x} f)^{2}+(\partial_{y} f)^{2} \big)^2
+ \frac1{v_s^2} \big( \partial^{2}_{x}f + \partial^{2}_{y}f \big)^2
} \right) \,.$$ The boundary points, $\beta\rightarrow \pm \infty$, need to be checked independently and yield $$\kappa^{2}s^{2}{}_{|\beta\rightarrow \pm \infty} =
\frac{v_{\rm{s}}^{2}}{2}\left((\partial_{x}f)^{2} +(\partial_{y}f)^{2}\right)
\mp \frac{v_{\rm{s}}}{2} \left(\partial^{2}_{x}f +\partial^{2}_{y}f\right)\,. \label{eq:spinboundary}$$ It turns out that the maxima occur on the boundary, $\beta\rightarrow \pm \infty$. Using the values of (\[eq:spinboundary\]) gives the *minimum* spin required in order to cancel out GR WEC violation for all observers boosted in the $z$ direction. Again we stress that in a physical situation, the spin should be at least slightly larger than this. We plot this spin in the vicinity of the ship in figure \[fig:alcussquared\] for $v_{\rm{s}}=1.2$ and $\sigma=5$ inverse length units. The resulting energy density, as measured by Eulerian observers using this value of $\kappa^{2}s^{2}$, is plotted in figure \[fig:energydens\]. Note that this is everywhere non-negative. (It will be so for all observers by construction.)
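The $\beta$-extremization above can be verified in a few lines of SymPy (ours; $P$ and $Q$ are shorthands we introduce for $(\partial_{x}f)^{2}+(\partial_{y}f)^{2}$ and $\partial_{x}^{2}f+\partial_{y}^{2}f$, treated as constants at a given spacetime point, and taken positive only so that the simplifications go through cleanly):

```python
# SymPy verification (ours) of the beta-extremization of Eq. (spinexpr), with
# P := (d_x f)^2 + (d_y f)^2 and Q := d_x^2 f + d_y^2 f held fixed.
import sympy as sp

b = sp.Symbol('beta', real=True)
v, P, Q = sp.symbols('v_s P Q', positive=True)

S = sp.Rational(1, 4)*v**2*(2 - 1/sp.cosh(2*b))*P - sp.Rational(1, 2)*v*sp.tanh(2*b)*Q

# Critical point: multiplying dS/dbeta by cosh^2(2 beta) isolates the condition
# sinh(2 beta) = 2 Q / (v_s P) quoted in the text.
cond = sp.simplify(sp.diff(S, b)*sp.cosh(2*b)**2)
print(sp.expand(cond))                 # P*v_s**2*sinh(2*beta)/2 - Q*v_s (up to ordering)

# Extremal value of kappa^2 s^2 versus the closed form quoted in the text.
sh = 2*Q/(v*P)                         # sinh(2 beta) at the critical point
ch = sp.sqrt(1 + sh**2)                # cosh(2 beta) there
S_cp = sp.Rational(1, 4)*v**2*(2 - 1/ch)*P - sp.Rational(1, 2)*v*(sh/ch)*Q
closed = sp.Rational(1, 2)*v**2*(P - sp.sqrt(P**2/4 + Q**2/v**2))
print(sp.simplify(S_cp - closed))                               # -> 0
print((S_cp - closed).subs({v: 1.2, P: 3.7, Q: 2.1}).evalf())   # numeric spot check, ~0

# Boundary values beta -> +/- infinity, cf. Eq. (spinboundary).
print(sp.limit(S, b, sp.oo), sp.limit(S, b, -sp.oo))
# -> v_s**2*P/2 - Q*v_s/2   and   v_s**2*P/2 + Q*v_s/2
```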
![[]{data-label="fig:alcussquared"}](kssq_ship.pdf){width="3in"}
![[]{data-label="fig:energydens"}](alcubierreRhoEC){width="3in"}
It might be interesting to consider the actual amount of spin required in order for this scheme to work. Let us consider the graph of figure \[fig:alcussquared\] at the region with the highest spin density. Here $\kappa^{2}s^{2}$ is approximately $5\,$m$^{-2}$ at $x/P\approx 1$. Converting the corresponding value of $s$ to S.I. units yields an angular momentum density of approximately $\mathcal{L}\approx 3.6\times 10^{34}\,$kg/(m$\cdot$s), or in units of $\hbar$, $\mathcal{L}\approx 3.5\times 10^{68}$ $\hbar/$m$^{3}$. For the sake of simplicity, and for a very crude approximation, let us assume that the spin density is sourced by particles whose spin is of the order $\hbar$. This spin density then corresponds to $\mathcal{N}=3.5\times 10^{68}$ such particles per cubic meter. As another approximation, the energy density, as measured by Eulerian observers, is given by $\mathcal{T}_{\mu\nu}u^{\mu}u^{\nu}$, where $u^{\mu}$ is of the form (\[eq:4vel\]). This energy density is plotted in figure \[fig:energydens\]. Near the maximum of this plot, also near $x/P =1$, this energy density is approximately $\rho_{0}=0.3\,$m$^{-2}$, which corresponds to $3.64\times 10^{43}$ J/m$^{3}$, or a mass density equivalent of $4\times 10^{26}$ kg/m$^{3}$. (As is often the case with exotic solutions in gravitational field theory, the devil is in the details.) If the field sourcing the spin is, for example, a monochromatic photon field[^3] (algebraic class of an anisotropic fluid) of frequency $\omega$, then this energy density corresponds to an electric field, $E$, of approximate magnitude $$E=\sqrt{\frac{2}{\epsilon_{0}} \hbar \omega \mathcal{N}} \approx 9\times 10^{22} \sqrt{\omega}\; \mbox{N/C}\,, \label{eq:elecfield}$$ where the free-space permittivity is $\epsilon_{0}=8.85\times 10^{-12}\,$Farad/m. Of course, this is assuming that all of the stress-energy present at this point is due to the photon field itself, which for this illustrative purpose neglects the presence of the auxiliary matter discussed previously. $3.5\times 10^{68}$ photons per cubic meter, having an energy density of $3.64\times 10^{43}$ Joules per cubic meter, would possess a wavelength of approximately $1.8$ meters, which is not unreasonable, although somewhat large given the dimensions of the warp bubble for this example. Looking to other possible field sources, composite particles of high spin are unlikely candidates as they are found only as unstable resonances. They also tend to carry too much energy due to their mass in order for them to be feasible in this scenario, as too much energy will tend to increase the amount of spin required in order to equate the left and right-hand sides of the gravitational field equations. Fundamental high spin states are problematic in that it is difficult to describe point-like interactions of high spin in a consistent manner within a field theoretic framework, even in the massless case, although it seems that within certain extensions of the standard theory it may be possible (see [@ref:klish] and [@ref:foto] for a summary of the issues and possible resolutions). One could perhaps utilize the arbitrarily high spin states allowed within the realm of string theory, although for low dimension these modes must be massive.
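The unit restorations quoted in the paragraph above can be reproduced with the short script below (ours; the placement of the $8\pi$ and $c$ factors when converting the geometrized quantities to S.I. values is our reading, chosen so as to reproduce the figures quoted in the text):

```python
# Reproducing the order-of-magnitude conversions quoted in the text (ours).
import numpy as np

G, c, hbar, h = 6.674e-11, 2.998e8, 1.055e-34, 6.626e-34   # SI
kappa = 8*np.pi*G/c**4

k2s2 = 5.0                             # kappa^2 s^2 in m^-2 near its peak
L = np.sqrt(k2s2)/kappa/c              # angular momentum density, kg/(m s)
print(f"L ~ {L:.1e} kg/(m s) ~ {L/hbar:.1e} hbar/m^3")         # cf. ~3.6e34 and ~3.5e68

rho_geom = 0.3                         # energy density in m^-2 near its peak
rho_SI = rho_geom*c**4/G               # J/m^3
print(f"rho ~ {rho_SI:.2e} J/m^3 ~ {rho_SI/c**2:.1e} kg/m^3")  # cf. ~3.6e43 and ~4e26

N = L/hbar                             # photons per m^3 if each carries ~hbar of spin
E_photon = rho_SI/N                    # energy per photon, J (neglects auxiliary matter)
E_field = np.sqrt(2*E_photon*N/8.85e-12)   # peak E field of a monochromatic wave, N/C
print(f"lambda ~ {h*c/E_photon:.1f} m,  E ~ {E_field:.1e} N/C")  # cf. ~1.8 m
```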
Even though the spin density is rather high, the overall amount of spin and energy required could be made much smaller. This was the motivation behind the modification to the warp drive by Van Den Broeck [@ref:vanden], which modifies the geometry in such a way as to minimize the amount of matter required for the warp drive.
The modified warp drive {#sec:vanden}
-----------------------
Here we will briefly discuss the Van Den Broeck warp drive [@ref:vanden] in light of [Einstein-Cartan theory]{}. The method here mimics the analysis of the traditional warp drive above and so we only present the metric and the results. The Van Den Broeck warp drive minimizes the amount of exotic matter required by modifying the spacetime so that the volume in which the ship is located is bounded by a small area. In other words, a small warp bubble surrounds a throat leading to an approximately flat region with large volume. The line element is given by [@ref:vanden] $${\mathrm d}s^{2}= -{\mathrm d}t^{2} +B^{2}(r_{\rm{s}})\left[\left({\mathrm d}z-v_{\rm{s}}(t) f(r_{\rm{s}}){\mathrm d}t\right)^{2} + {\mathrm d}x^{2} + {\mathrm{d}}y^{2} \right]\,. \label{eq:vandenmet}$$ The Christoffel-Einstein tensor for this metric is also presented in the Appendix and again we consider the constant velocity scenario. For the functions appearing in (\[eq:vandenmet\]) we will make the same assumptions as in [@ref:vanden]. That is $$\begin{aligned}
B(r_{\rm{s}})= 1+\alpha, & \quad \text{for } r_{\rm{s}} < \tilde{P}\,, \nonumber \\
1 < B(r_{\rm{s}}) \leq 1+ \alpha, & \quad \text{for } \tilde{P} \leq r_{\rm{s}} < \tilde{P} +\tilde{\Delta}\,, \\
B(r_{\rm{s}})=1, & \quad \text{for } \tilde{P}+\tilde{\Delta} \leq r_{\rm{s}} \,, \nonumber \end{aligned}$$ with $P > \tilde{P}+\tilde{\Delta}$. The quantity $\tilde{\Delta}$ represents the coordinate thickness of the transition domain between the large volume inner region and the region which mimics the traditional warp drive. We use a function similar to (\[eq:alcuf\]) ($B=1+f$), but with different parameters ($P\rightarrow \tilde{P}$), in order to model $B(r_{\rm{s}})$. Using the above metric we calculate the quantity (\[eq:econd2\]) and, again considering longitudinally boosted observers (\[eq:zvel\]), set the value of $s$ by requiring that WEC violation is canceled for the most severe scenario. In this case it turns out that at certain points in the spacetime the relevant extrema lie at the $\beta$ boundary, $\beta \rightarrow \pm\infty$, while in other regions they lie at intermediate values of $\beta$. We plot these values of $\kappa^{2}s^{2}$ in the vicinity of the warp bubble in figure \[fig:vdbspin\]. The values chosen are as follows: $$P=3\,\mbox{fm}, \quad \tilde{P}=1\,\mbox{fm}, \quad \alpha =5, \quad \sigma =8\,\mbox{fm}^{-1}\,. \label{vdbparams}$$ This actually corresponds to a tiny vessel, and the values are chosen only because they are useful for our purposes of analysis. The resulting warp bubble, whose inner volume is admittedly not practical, nevertheless yields an idea of the spin densities required for a non-extreme case (one whose area-to-volume ratio is rather mild). We use femtometers here since the idea behind the modified warp drive is to have as small a WEC violating region as possible.
![[]{data-label="fig:vdbspin"}](vdbspin.pdf){width="3in"}
From the plot in figure \[fig:vdbspin\] it can be noted that the maximum value of $\kappa^{2}s^{2}$ is approximately 150. This in turn corresponds to an angular momentum density of approximately $\mathcal{L}\approx 1.9 \times 10^{35}$ kg/(fm$\cdot$s) or $1.87 \times 10^{39}$$\hbar/$fm$^{3}$. This is of the order of $10^{84}$ spin-1 particles per cubic meter. Although the spin density is extremely large, one might minimize the net amount of spin required due to the design of this spacetime. However, the issue of exactly how to support this spin density is subject to similar comments as made for the traditional warp drive above.
Finally, we show the energy density as measured by Eulerian observers for this scenario in figure \[fig:vdbedense\]. The approximate value here, $\rho_{\mbox{\tiny{max}}}\approx 0.6$fm$^{-2}$, corresponds to approximately $8\times 10^{11}$ kg/fm$^{3}$ of mass density.
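The same conversions (with the same caveats as in the sketch of the previous subsection) reproduce the numbers quoted here for the modified warp drive:

```python
# Reproducing the modified warp drive estimates (ours; same unit conventions as above).
import numpy as np

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
kappa = 8*np.pi*G/c**4
fm = 1e-15                                   # m

L = np.sqrt(150.0/fm**2)/kappa/c             # from kappa^2 s^2 ~ 150 fm^-2
print(f"L ~ {L*fm:.1e} kg/(fm s) ~ {L/hbar*fm**3:.2e} hbar/fm^3")  # cf. ~1.9e35, ~1.87e39

rho_SI = (0.6/fm**2)*c**4/G                  # from rho_max ~ 0.6 fm^-2
print(f"rho ~ {rho_SI/c**2*fm**3:.0e} kg/fm^3")                    # cf. ~8e11
```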
![[]{data-label="fig:vdbedense"}](vdbedense.pdf){width="3in"}
In closing we should note that although in principle it is possible to create an energy condition respecting warp drive within [Einstein-Cartan theory]{}, there are other issues with warp drive spacetimes which we did not address. For example, it has been shown that warp drive spacetimes contain an effective horizon, preventing the ship from possessing causal contact with points of the warp bubble when the effective speed becomes superluminal [@ref:krasnik]. This issue may be alleviated somewhat as it has been shown that parts of the bubble are still causally connected to the control region [@ref:causcon]. As well, when semiclassical effects are taken into account it has been shown that radiation build up in the warp drive spacetime may become large enough to render it unstable [@ref:semiclass], at least at the semiclassical level. A full quantum analysis would require a theory of quantum gravity.
[Concluding remarks]{} {#sec:conc}
======================
It has been shown how, within the paradigm of [Einstein-Cartan theory]{}, an energy condition respecting warp drive may exist, where no such counterpart is present in curvature-only general relativity. The Weyssenhoff fluid was utilized to calculate the spin density contribution, along with auxiliary structure to the matter which allows the stress-energy tensor to be algebraically compatible with the warp drive spacetime. With the addition of spin as a source of gravity, the matter field supporting the warp bubble can indeed respect the weak energy condition inequality, and by extension the null energy condition. An attempt was made to minimize the amount of spin required for WEC non-violation. For reasonable values of the parameters, we find that rather large spin angular momentum densities are required per cubic meter. The Van Den Broeck warp drive requires a much higher density, although in this case the spin distribution is located within a much smaller region, and therefore the net overall spin required may be less. Admittedly, these are rather large values and it is difficult to imagine how such spin densities may be achieved. However, the study does illustrate that in principle WEC violation may be alleviated in warp drive spacetimes within [Einstein-Cartan theory]{}. It also serves to show that solutions which are considered exotic in general relativity may be less peculiar within theories where spacetime torsion exists as an extra degree of freedom. The effects of torsion may therefore be non-trivial in extreme scenarios.
[Acknowledgments]{} {#acknowledgments .unnumbered}
===================
We are grateful to M. Sossich for stimulating discussions. AD would like to acknowledge the kind hospitality of FER, University of Zagreb, where part of this work was carried out. This work was partially supported by the VIF program of the University of Zagreb.
Appendix - The Christoffel-Einstein tensor ${\bm{G^{\mu\nu}\left({\tiny{{\ensuremath{\left\{\hspace{-0.15cm}\!
\begin{array}{l}
{\alpha} \\
{\beta}\,{\gamma}
\end{array}
\!\hspace{-0.15cm}\right\}}}}}\right)}}$ {#appendix---the-christoffel-einstein-tensor-bmgmunulefttinyensuremathlefthspace-0.15cm-beginarraylalpha-betagammaendarrayhspace-0.15cmrightright .unnumbered}
===============================================================================================================
We present here all the unique components of the Christoffel-Einstein tensor $G^{\mu\nu}\left({\tiny{{\ensuremath{\left\{\hspace{-0.15cm}\!
\begin{array}{l}
{\alpha} \\
{\beta}\,{\gamma}
\end{array}
\!\hspace{-0.15cm}\right\}}}}}\right)$ which appear in (\[eq:goodeom\]), (\[eq:ssquaredeqn\]), and which is used to calculate the weak/null energy condition violation in the general relativity limit, $G_{\mu\nu}\left({\tiny{{\ensuremath{\left\{\hspace{-0.15cm}\!
\begin{array}{l}
{\alpha} \\
{\beta}\,{\gamma}
\end{array}
\!\hspace{-0.15cm}\right\}}}}}\right)v^{\mu}v^{\nu}/\kappa$. We omit the explicit Christoffel connection dependence here.
The traditional warp drive {#the-traditional-warp-drive .unnumbered}
--------------------------
The line element is $${\rm d}s^{2}=-{\rm d}t^{2} +\left[{\rm d}z -v_{\rm{s}}(t) f(x,y,z-z_{\rm{s}}(t))\,{\rm d}t\right]^{2} + {\rm d}x^{2} + {\rm d}y^{2}\,.$$
[$$\begin{aligned}
G^{tt}=&{\scalebox{0.90}{$-1/4\, \left( {v_{\rm{s}}}
\right) ^{2} \left( \left( {\frac {\partial }{\partial x}}f \right) ^{2}+ \left( {\frac {\partial }{\partial y}}f
\right) ^{2} \right) \,$}} \\[0.1cm]
G^{tz}=&{\scalebox{0.90}{$-1/4\, \left( {v_{\rm{s}}}
\right) \left( \left( {v_{\rm{s}}} \right) ^{2}f \left( {\frac {
\partial }{\partial x}}f \right) ^{2}+ \left(
{v_{\rm{s}}} \right) ^{2}f
\left( {\frac {\partial }{\partial y}}f
\right) ^{2}+2\,{\frac {\partial ^{2}}{
\partial {y}^{2}}}f +2\,{\frac {\partial ^{2}}{
\partial {x}^{2}}}f \right)\,$}} \\[0.1cm]
G^{ty}=&{\scalebox{0.90}{$1/2\, \left( {v_{\rm{s}}}
\right) {\frac {\partial ^{2}}{\partial z\partial y}}f$}} \, \\[0.1cm]
G^{tx}=&{\scalebox{0.90}{$1/2\, \left( {v_{\rm{s}}}
\right) {\frac {\partial ^{2}}{\partial z\partial x}}f \,$}} \\[0.1cm]
G^{zz}=&{\scalebox{0.90}{$-1/4\, \left( {v_{\rm{s}}}
\right) ^{2} \left( \left( {v_{\rm{s}}} \right) ^{2} f ^{2}
\left( {\frac {\partial }{\partial x}}f
\right) ^{2}+ \left( {v_{\rm{s}}} \right) ^{2} \left( f \right)^{2}
\left( {\frac {\partial }{\partial y}}f
\right) ^{2} \right.$}} \nonumber\\
&{\scalebox{0.90}{$+\left. 4\,f {\frac {\partial ^{2}}{
\partial {y}^{2}}}f +4\,f {\frac {\partial ^{2}}{\partial {x}^{2}}}f +3\, \left( {\frac {\partial }{\partial x}}f \right) ^{2}+3\, \left( {\frac {\partial }{\partial y}}f \right) ^{2} \right) \, ,$}} \\[0.1cm]
G^{zy}=&{\scalebox{0.90}{$1/2\, \left( {\frac {{\rm d}}{{\rm d}{t}}}{v_{\rm{s}}} \right) {\frac {\partial }{\partial y}}f +1/2\, \left( {v_{\rm{s}}} \right) {\frac {\partial ^{2}}{\partial y\partial t}}f + \left( {\frac {\partial }{\partial z}}f \right) v_{\rm{s}} ^{2}{\frac {\partial }{\partial y}}f +f \left( v_{\rm{s}}\right) ^{2}{\frac { \partial ^{2}}{\partial z\partial y}}f \,, $}} \\[0.1cm]
G^{zx}=&{\scalebox{0.90}{$1/2\, \left( {\frac {{\rm d}}{{\rm d}{t}}}{v_{\rm{s}}} \right) {\frac {\partial }{\partial x}}f +1/2\, \left( {v_{\rm{s}}} \right) {\frac {\partial ^{2}}{\partial x\partial t}}f
+ \left( {\frac {\partial }{\partial z}}f
\right) v_{\rm{s}} ^{2}{\frac {\partial }{\partial x}}f
+f \left( {v_{\rm{s}}} \right) ^{2}{\frac {
\partial ^{2}}{\partial z\partial x}}f \, ,$}} \\[0.1cm]
G^{yy}=&{\scalebox{0.90}{$1/4\, \left( {\frac {\partial }{\partial y}}f
\right) ^{2} \left( {v_{\rm{s}}} \right) ^{2}-f v_{\rm{s}} ^{2}{\frac {\partial ^{2
}}{\partial {z}^{2}}}f -1/4\, \left( {\frac {
\partial }{\partial x}}f \right) ^{2} \left( {
v_{\rm{s}}} \right) ^{2}$}} \nonumber \\
&{\scalebox{0.90}{$-\left( {\frac {{\rm d}}{{\rm d}{t}}}{v_{\rm{s}}} \right) {\frac {\partial }{\partial z}}f - \left( {v_{\rm{s}}}
\right) {\frac {\partial ^{2}}{\partial z\partial t}}f - \left( {\frac {\partial }{\partial z}}f \right) ^{2} \left( {v_{\rm{s}}}
\right) ^{2}\,,$}} \\[0.1cm]
G^{yx}=&{\scalebox{0.90}{$1/2\, \left( {\frac {\partial }{\partial y}}f
\right) \left( {v_{\rm{s}}}
\right) ^{2}{\frac {\partial }{\partial x}}f \, ,$}} \\[0.1cm]
G^{xx}=&{\scalebox{0.90}{$1/4\, \left( {\frac {\partial }{\partial x}}f
\right) ^{2} \left( {v_{\rm{s}}} \right) ^{2}-f v_{\rm{s}} ^{2}{\frac {\partial ^{2
}}{\partial {z}^{2}}}f -1/4\, \left( {\frac {
\partial }{\partial y}}f \right) ^{2} \left( {
v_{\rm{s}}} \right) ^{2} $}} \nonumber \\
&{\scalebox{0.90}{$- \left( {\frac {{\rm d}}{{\rm d}{t}}}{v_{\rm{s}}} \right) {\frac {\partial }{\partial z}}f - \left( {v_{\rm{s}}}
\right) {\frac {\partial ^{2}}{\partial z\partial t}}f - \left( {\frac {\partial }{\partial z}}f \right) ^{2} \left( {v_{\rm{s}}}
\right) ^{2}\, .$}}\end{aligned}$$]{}
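These components can be cross-checked symbolically. Below is a minimal sketch using Python's `sympy` (an illustration only, independent of whatever computer-algebra setup was originally used to derive these expressions); the metric is entered directly from the line element above, and the shape function is written generically as $f(x,y,z,t)$, so that the $z-z_{\rm{s}}(t)$ dependence is absorbed into the explicit time argument.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)

vs = sp.Function('v_s')(t)         # bubble velocity v_s(t)
f = sp.Function('f')(x, y, z, t)   # shape function, z - z_s(t) absorbed into the t argument

# Metric of the traditional warp drive line element,
# ds^2 = -dt^2 + [dz - v_s f dt]^2 + dx^2 + dy^2
g = sp.Matrix([
    [-1 + vs**2*f**2, 0, 0, -vs*f],
    [0,               1, 0, 0],
    [0,               0, 1, 0],
    [-vs*f,           0, 0, 1],
])
ginv = g.inv()

# Christoffel symbols of the second kind, Gamma[a][b][c] = {a over bc}
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                                       + sp.diff(g[d, c], coords[b])
                                       - sp.diff(g[b, c], coords[d]))/2
                           for d in range(4)))
           for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor built from the Christoffel connection only
def ricci(b, c):
    expr = 0
    for a in range(4):
        expr += sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][b][a], coords[c])
        for d in range(4):
            expr += Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
    return sp.simplify(expr)

Ric = sp.Matrix(4, 4, lambda b, c: ricci(b, c))
Rsc = sp.simplify(sum(ginv[a, b]*Ric[a, b] for a in range(4) for b in range(4)))

# Einstein tensor with both indices raised: G^{mu nu} = g^{mu a} g^{nu b} (R_{ab} - R g_{ab}/2)
G_upper = sp.simplify(ginv*(Ric - Rsc*g/2)*ginv)   # symbolic simplification may take a while

print(G_upper[0, 0])   # compare with the G^{tt} component listed above
```

In particular, the $tt$ entry should simplify to the first component listed above, $-\frac{1}{4}\,v_{\rm{s}}^{2}\left[\left(\partial_{x}f\right)^{2}+\left(\partial_{y}f\right)^{2}\right]$.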
The modified warp drive {#the-modified-warp-drive .unnumbered}
-----------------------
The line element is $${\mathrm d}s^{2}= -{\mathrm d}t^{2} +B^{2}(r_{\rm{s}})\left[\left({\mathrm d}z-v_{\rm{s}} f(r_{\rm{s}}){\mathrm d}t\right)^{2} + {\mathrm d}x^{2} + {\mathrm{d}}y^{2} \right]\,.$$ Due to the complexity of the resulting expressions, we set $v_{\rm{s}}=\mbox{const.}$ right away here.
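The symbolic cross-check sketched above carries over to this case by replacing the metric matrix; a possible form (again only an illustrative sketch, with $v_{\rm{s}}$ now a constant symbol and $B$ and $f$ kept generic) is:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
vs = sp.symbols('v_s')             # constant bubble velocity
f = sp.Function('f')(x, y, z, t)   # shape function f(r_s)
B = sp.Function('B')(x, y, z, t)   # expansion factor B(r_s)

# ds^2 = -dt^2 + B^2 [ (dz - v_s f dt)^2 + dx^2 + dy^2 ]
g_mod = sp.Matrix([
    [-1 + B**2*vs**2*f**2, 0,    0,    -B**2*vs*f],
    [0,                    B**2, 0,    0],
    [0,                    0,    B**2, 0],
    [-B**2*vs*f,           0,    0,    B**2],
])
```

Feeding `g_mod` through the same Christoffel/Ricci routine as above should reproduce the components below.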
[$$\begin{aligned}
G^{tt}=& {\scalebox{0.90}{$-\frac{1}{4 B^{4}}\,\left[-12\, B ^{2}
f ^{2} \left( {\frac {\partial }{\partial z}}B \right) ^{2}{v_{\rm{s}}}^{2}-8\, \left( B
\right) ^{3}f \left( {\frac {\partial }{\partial z}}B \right) \left( {\frac {\partial }{\partial z}}f \right) {v_{\rm{s}}}^{2}+ \left( {\frac {\partial }{
\partial y}}f \right) ^{2} B ^{4}{v_{\rm{s}}}^{2} \right.$}} \nonumber \\
&{\scalebox{0.90}{$\left. + B ^{4} \left( {\frac {\partial }{\partial x}}f \right) ^{2}{v_{\rm{s}}}^{2}-24\,v_{\rm{s}}\,f B ^{2} \left( {
\frac {\partial }{\partial z}}B \right) {\frac {\partial }{\partial t}}B -8\, \left( B
\right) ^{3} \left( {\frac {\partial }{\partial t}}B \right) \left( {\frac {
\partial }{\partial z}}f \right) v_{\rm{s}}-12\, B ^{2} \left( {\frac {
\partial }{\partial t}}B \right) ^{2} \right.$}} \nonumber \\
&{\scalebox{0.90}{$ \left. +8\,B {\frac {\partial ^{2}}{\partial {y}^{2}}}B
+8\,B {\frac {\partial ^{2}}{\partial {z}^{2}}}B -4\, \left( {\frac {
\partial }{\partial x}}B \right) ^{2}+8\,B {\frac {\partial ^{2}}{\partial {x}^{2}}}B
-4\, \left( {\frac {\partial }{\partial y}}B \right) ^{2}-4\, \left( {\frac {\partial }{
\partial z}}B \right) ^{2}\right]\, ,$}} \\[0.1cm]
G^{tz}=&{\scalebox{0.90}{$\frac{1}{4
B ^{4} }\,\left[12\, B ^{2}
f ^{3} \left( {\frac {\partial }{\partial z}}B \right) ^{2}{v_{\rm{s}}}^{3}+8\, \left( B
\right) ^{3} f ^{2} \left( {\frac {\partial }{
\partial z}}B \right) \left( {\frac {\partial }{\partial z}}f \right) {v_{\rm{s}}}^{
3}-f \left( {\frac {\partial }{\partial y}}f \right) ^{2} \left( B
\right) ^{4}{v_{\rm{s}}}^{3} \right.$}} \nonumber \\
&{\scalebox{0.90}{$\left. - B ^{4}f \left( {\frac {\partial }{
\partial x}}f \right) ^{2}{v_{\rm{s}}}^{3}+24\, B ^{2} \left( f
\right) ^{2} \left( {\frac {\partial }{\partial z}}B \right) \left( {\frac {\partial }{\partial t}}B
\right) {v_{\rm{s}}}^{2}+8\, B ^{3}f \left( {\frac {
\partial }{\partial t}}B \right) \left( {\frac {\partial }{\partial z}}f \right)
{v_{\rm{s}}}^{2} \right.$}} \nonumber \\
&{\scalebox{0.90}{$ \left. +12\, B ^{2}f \left( {\frac {\partial }{\partial t}}B
\right) ^{2}v_{\rm{s}}-2\, B ^{2}v_{\rm{s}}\,{\frac {\partial ^{2}}{\partial {y}^{2}}}f
-8\,B v_{\rm{s}}\,f {\frac {\partial ^{2}}{\partial {y}^{2}}}B
\right.$}} \nonumber \\
& {\scalebox{0.90}{$\left. +4\, \left( {\frac {\partial }{\partial x}}B \right) ^{2}v_{\rm{s}}\,f -8\,B v_{\rm{s}}\,f {\frac {\partial ^{2}}{\partial {x}^{2}}}B +4\, \left( {\frac {\partial }{\partial y}}B
\right) ^{2}v_{\rm{s}}\,f -4\, \left( {\frac {\partial }{\partial z}}B \right) ^{2}{
v_{\rm{s}}}\,f -6\,B v_{\rm{s}}\, \left( {\frac {\partial }{\partial x}}f
\right) {\frac {\partial }{\partial x}}B \right.$}} \nonumber \\
&{\scalebox{0.90}{$ \left. -6\,B v_{\rm{s}}\, \left( {\frac {\partial }{
\partial y}}f \right) {\frac {\partial }{\partial y}}B -2\, B ^{2}v_{\rm{s}}\,{\frac {\partial ^{2}}{\partial {x}^{2}}}f +8\,B {\frac {\partial ^{2}}{\partial z\partial t}}B -8\,
\left( {\frac {\partial }{\partial t}}B \right) {\frac {\partial }{\partial z}}B \right]\, ,$}} \\[0.1cm]
G^{ty}=&{\scalebox{0.90}{$\frac{1}{2 B ^{4}}\,\left[4\,B v_{\rm{s}}\,f {\frac {\partial ^{2}}{\partial z\partial y}}B + B ^{2}v_{\rm{s}}\,{\frac {\partial ^{2}}{\partial z\partial y}}f -
4\, \left( {\frac {\partial }{\partial z}}B \right) v_{\rm{s}}\,f {\frac {\partial }{
\partial y}}B +B v_{\rm{s}}\, \left( {\frac {\partial }{\partial y}}f
\right) {\frac {\partial }{\partial z}}B \right.$}} \nonumber \\
&{\scalebox{0.90}{$ \left. +2\,B v_{\rm{s}}\, \left( {\frac {\partial }{
\partial z}}f \right) {\frac {\partial }{\partial y}}B +4\,B {
\frac {\partial ^{2}}{\partial y\partial t}}B -4\, \left( {\frac {\partial }{\partial t}}B
\right) {\frac {\partial }{\partial y}}B \right]\,,$}} \\[0.1cm]
G^{tx}=& G^{ty} \quad x \leftrightarrow y \, , \\[0.1cm]
G^{zz}=&{\scalebox{0.90}{$-\frac{1}{4 B ^{6} }\,\left[-8\, f ^{2}
B ^{5} \left( {\frac {\partial }{\partial t}}B \right) \left( {
\frac {\partial }{\partial z}}f \right) {{v_{\rm{s}}}}^{3}-12\, f ^{2} B ^{4} \left( {\frac {\partial }{\partial t}}B \right) ^{2}{{v_{\rm{s}}}}^{2}+4\,
B ^{4}f {{v_{\rm{s}}}}^{2}{\frac {\partial ^{2}}{\partial {y}^{2}}}f \right.$}} \nonumber \\
&{\scalebox{0.90}{$\left. +8\, B ^{3} f ^{2}{{v_{\rm{s}}}}^{2}{
\frac {\partial ^{2}}{\partial {y}^{2}}}B -4\, B ^{2}{{v_{\rm{s}}}}^{2}
f ^{2} \left( {\frac {\partial }{\partial x}}B \right) ^{2}+8\,
B ^{3} f ^{2}{{v_{\rm{s}}}}^{2}{\frac {\partial ^{2}}{\partial {x}^{2}}}B
-4\, f ^{2} B ^{2} \left( {\frac {
\partial }{\partial y}}B \right) ^{2}{{v_{\rm{s}}}}^{2} \right.$}} \nonumber \\
& {\scalebox{0.90}{$\left. +4\, B ^{4}f
{{v_{\rm{s}}}}^{2}{\frac {\partial ^{2}}{\partial {x}^{2}}}f -12\, \left( f
\right) ^{4} B ^{4} \left( {\frac {\partial }{\partial z}}B \right) ^{2}{{
v_{\rm{s}}}}^{4}+ f ^{2} \left( {\frac {\partial }{\partial y}}f \right) ^{2}
B ^{6}{{v_{\rm{s}}}}^{4}+ f ^{2} \left( {\frac {\partial }{
\partial x}}f \right) ^{2} B ^{6}{{v_{\rm{s}}}}^{4} \right.$}} \nonumber \\
& {\scalebox{0.90}{$\left. -8\, f ^{3}
B ^{5} \left( {\frac {\partial }{\partial z}}B \right) \left( {\frac {\partial }{\partial z}}f
\right) {{v_{\rm{s}}}}^{4}-24\, f ^{3} B ^{4}
\left( {\frac {\partial }{\partial t}}B \right) \left( {\frac {\partial }{\partial z}}B \right) {{v_{\rm{s}}}}^{3}+12\, B ^{3}{{v_{\rm{s}}}}^{2}f \left( {\frac {\partial }{\partial x}}B \right) {
\frac {\partial }{\partial x}}f \right.$}} \nonumber \\
&{\scalebox{0.90}{$ \left. +12\,f \left( {\frac {\partial }{\partial y}}f \right)
B ^{3} \left( {\frac {\partial }{\partial y}}B
\right) {{v_{\rm{s}}}}^{2}+8\, B ^{3}{\frac {\partial ^{2}}{\partial {t}^{2}}}B
+4\, \left( {\frac {\partial }{\partial x}}B \right) ^{2}-4\,B {\frac {\partial ^{2}}{
\partial {x}^{2}}}B +4\, \left( {\frac {\partial }{\partial y}}B \right) ^{2} \right.$}} \nonumber \\
&{\scalebox{0.90}{$ \left. -4\,B
{\frac {\partial ^{2}}{\partial {y}^{2}}}B -4\, \left( {\frac {\partial }{\partial z}}B
\right) ^{2}+24\,{v_{\rm{s}}}\,f B ^{2} \left( {\frac
{\partial }{\partial z}}B \right) {\frac {\partial }{\partial t}}B +16\, B ^{2}
f ^{2} \left( {\frac {\partial }{\partial z}}B \right) ^{2}{{v_{\rm{s}}}}^{2} \right.$}} \nonumber \\
& {\scalebox{0.90}{$\left. +8\, \left( B
\right) ^{3} \left( {\frac {\partial }{\partial z}}B \right) \left( {\frac {\partial }{\partial t}}f
\right) {v_{\rm{s}}}+4\, B ^{2} \left( {\frac {\partial }{\partial t}}B \right) ^{2}+8\, B ^{3}f \left( {\frac {\partial }{
\partial z}}B \right) \left( {\frac {\partial }{\partial z}}f \right) {{v_{\rm{s}}}}^{
2}+3\, B ^{4} \left( {\frac {\partial }{\partial x}}f \right) ^{2}{{v_{\rm{s}}}}^{2} \right.$}} \nonumber \\
&{\scalebox{0.90}{$ \left. +3\, \left( {\frac {\partial }{\partial y}}f \right) ^{2} B ^{4}{{v_{\rm{s}}}}^{2}\right]\, ,$}}\\[0.1cm]
G^{zy}=&{\scalebox{0.90}{$\frac{1}{2 B ^{6} }\,\left[4\, B ^{3} f ^{2}{{v_{\rm{s}}}}^{2}{\frac {\partial
^{2}}{\partial z\partial y}}B +2\, B ^{4}f
{{v_{\rm{s}}}}^{2}{\frac {\partial ^{2}}{\partial z\partial y}}f -4\, f ^{2} B ^{2} \left( {\frac {\partial }{\partial y}}B \right) \left( {\frac {\partial }{\partial z}}B \right) {{v_{\rm{s}}}}^{
2} \right.$}} \nonumber \\
& {\scalebox{0.90}{$\left. +4\, B ^{3}{{v_{\rm{s}}}}^{2}f \left( {\frac {\partial }{\partial z}}B \right) {\frac {\partial }{\partial y}}f +2\,f B ^{3} \left( {\frac {\partial }{\partial y}}B
\right) \left( {\frac {\partial }{\partial z}}f \right) {{v_{\rm{s}}}}^{2}+2\, \left( {\frac
{\partial }{\partial y}}f \right) B ^{4} \left( {\frac {\partial }{
\partial z}}f \right) {{v_{\rm{s}}}}^{2} \right.$}} \nonumber \\
&{\scalebox{0.90}{$ \left. + B ^{4}{v_{\rm{s}}}\,{\frac {\partial ^{2}
}{\partial y\partial t}}f +4\, B ^{3}{v_{\rm{s}}}\,f {
\frac {\partial ^{2}}{\partial y\partial t}}B -4\,f B
^{2} \left( {\frac {\partial }{\partial y}}B \right) \left( {\frac {\partial }{\partial t}}B \right) {v_{\rm{s}}}+3\,
B ^{3} \left( {\frac {\partial }{\partial t}}B \right) {v_{\rm{s}}}\,{\frac {\partial }{\partial y}}f \right.$}} \nonumber \\
& {\scalebox{0.90}{$\left. -2\,B {\frac {\partial ^{2}}{\partial z\partial y}}B +4\, \left( {\frac {
\partial }{\partial z}}B \right) {\frac {\partial }{\partial y}}B \right]\,$}} \\[0.1cm]
G^{zx}=&G^{zy} \quad x \leftrightarrow y \, , \\[0.1cm]
G^{yy}=&{\scalebox{0.90}{$-\frac{1}{4 B^{6}}\,\left[4\, B ^{4}f {{v_{\rm{s}}}}^{2}{\frac {\partial ^{2}}{
\partial {z}^{2}}}f +8\, B ^{3} f ^{2}{
{v_{\rm{s}}}}^{2}{\frac {\partial ^{2}}{\partial {z}^{2}}}B +4\, B ^{2} \left( f
\right) ^{2} \left( {\frac {\partial }{\partial z}}B \right) ^{2}{{v_{\rm{s}}}}^{2}+20\,
B ^{3}f \left( {\frac {\partial }{\partial z}}B \right) \left( {\frac {\partial }{\partial z}}f \right) {{v_{\rm{s}}}}^{2} \right.$}} \nonumber \\
& {\scalebox{0.90}{$\left. - \left( {\frac {\partial }{
\partial y}}f \right) ^{2} B ^{4}{{v_{\rm{s}}}}^{2}+ B ^{4} \left( {\frac {\partial }{\partial x}}f \right) ^{2}{{v_{\rm{s}}}}^{2}+4\, B ^{4} \left( {\frac {\partial }{\partial z}}f
\right) ^{2}{{v_{\rm{s}}}}^{2}+4\, B ^{4}{v_{\rm{s}}}\,{\frac {\partial ^{2}}
{\partial z\partial t}}f +16\, B ^{3}{v_{\rm{s}}}\,f {
\frac {\partial ^{2}}{\partial z\partial t}}B \right.$}} \nonumber \\
& {\scalebox{0.90}{$\left. +8\,{v_{\rm{s}}}\,f B ^{2} \left( {\frac {\partial }{\partial z}}B \right) {\frac {\partial }{\partial t}}B +12\, B ^{3} \left(
{\frac {\partial }{\partial t}}B \right) \left( {\frac {\partial }{\partial z}}f
\right) {v_{\rm{s}}}+8\, B ^{3} \left( {\frac {\partial }{\partial z}}B
\right) \left( {\frac {\partial }{\partial t}}f \right) {v_{\rm{s}}} +8\, \left( B
\right) ^{3}{\frac {\partial ^{2}}{\partial {t}^{2}}}B\right.$}} \nonumber\\
&{\scalebox{0.88}{$\left. +4\, B ^{2} \left( {
\frac {\partial }{\partial t}}B \right) ^{2}-4\,B {\frac {\partial ^{2}}{\partial {z}^{2}}}B
+4\, \left( {\frac {\partial }{\partial x}}B \right) ^{2}-4\,B {
\frac {\partial ^{2}}{\partial {x}^{2}}}B -4\, \left( {\frac {\partial }{\partial y}}B
\right) ^{2}+4\, \left( {\frac {\partial }{\partial z}}B \right) ^{2}\right]\, ,$}} \\[0.1cm]
G^{yx}=&{\scalebox{0.90}{$\frac{1}{2 B ^{6}}\,\left[B ^{4}{{v_{\rm{s}}}}
^{2} \left( {\frac {\partial }{\partial x}}f
\right) {\frac {\partial }{\partial y}}f +4\,
\left( {\frac {\partial }{\partial y}}B
\right) {\frac {\partial }{\partial x}}B \right.$}} { \left. -2\,B
{\frac {\partial ^{2}}{\partial y\partial x}}B
\right]\, ,} \\[0.1cm]
G^{xx}=&G^{yy} \quad x \leftrightarrow y \, .\end{aligned}$$]{}
[^1]: [email protected]
[^2]: One might expect to find derivatives of $\tau^{\alpha}_{\;\beta\gamma}$ in the resulting equation due to the covariant derivative in (\[eq:eom1\]). However, these cancel with corresponding derivatives in the full $G^{\mu\nu}$ when the modified torsion terms in $G^{\mu\nu}$ are replaced with the spin tensor via (\[eq:eom2\]).
[^3]: Coupling photons to torsion will, however, come at the expense of losing $U(1)$ gauge invariance.
---
abstract: |
We discuss whether position measurements in quantum mechanics can be contradictory with Bohmian trajectories, leading to what has been called surrealistic trajectories in the literature. Previous work has considered that a single Bohmian position can be ascribed to the pointer. Nevertheless, a correct treatment of a macroscopic pointer requires that many particle positions should be included in the dynamics of the system, and that statistical averages should be made over their random initial values. Using numerical as well as analytical calculations, we show that these surrealistic trajectories exist only if the pointer contains a small number of particles; they completely disappear with macroscopic pointers.
With microscopic pointers, non-local effects of quantum entanglement can indeed take place and introduce unexpected trajectories, as in Bell experiments; moreover, the initial values of the Bohmian positions associated with the measurement apparatus may influence the trajectory of the test particle, and determine the result of measurement. Nevertheless, a detailed observation of the trajectories of the particles of the pointer reveals the nature of the trajectory of the test particle; nothing looks surrealistic if all trajectories are properly interpreted.
author:
- |
G. Tastevin[^1] and F. Laloë[^2]\
Laboratoire Kastler Brossel, ENS-Université PSL,\
CNRS, Sorbonne Université, Collège de France,\
24 rue Lhomond 75005 Paris, France
title: Surrealistic Bohmian trajectories do not occur with macroscopic pointers
---
\*\*\*\*\*\*\*\*\*
De Broglie and Bohm (dBB) have introduced an interpretation of quantum mechanics where material particles are described, not only by wave functions as in standard quantum mechanics, but also by point positions that are guided by the wave function [@De-Broglie-1927; @Bohm-1952]. Trajectories in the configuration space can then be associated with the time evolution of any system made of massive particles. These trajectories can be projected into ordinary 3D space, which provides the trajectory of each constituent particle. Such projections sometimes exhibit unexpected properties. They are interesting, since their study may reveal quantum features that could otherwise remain unnoticed inside the standard equations. General reviews of Bohmian mechanics and trajectories in various situations can be found for instance in [@Holland-1993], [@Oriols-Mompart] or [@Bricmont-2016].
In the context of standard theory, the study of these trajectories leads to nothing but a visualization of the motion of the usual probability current. Because the dBB language is convenient, in this article we will speak of positions and trajectories of the particles, instead of streamlines of the probability. Nevertheless, the reader who is allergic to the very idea of particle positions in quantum mechanics can easily translate every statement in terms of the trajectories of the elements of the probability fluid. Our purpose here is not to plead in favor of one interpretation or another, but just to provide a more precise analysis of the dBB trajectories in the presence of quantum non-local effects. In particular, we will not study the trajectories defined within the consistent history interpretation [@Griffiths-1999], which are different.
In 1992, Englert, Scully, Süssmann and Walther proposed an interesting thought experiment [@Englert-1992] where a test particle crosses a two slit interferometer, while another quantum system (the pointer) plays the role of a Welcher Weg (which way) detector, indicating through which slit the test particle went. These authors argue that, if one observes the position of the pointer after the test particle has crossed the interference region, in some cases the pointer seems to indicate that the particle crossed one slit, while a reconstruction of the past Bohmian trajectory of the particle shows that it went through the other. The reason behind this unexpected conclusion is that, when the two wave packets of the test particle cross in the interference region, the Bohmian position of the particle may jump from one wave packet to the other, leading to a curved Bohmian trajectory without any force from outside. The authors of [@Englert-1992] consider that, even under these conditions, the indication of the pointer still provides a correct measurement of which slit was really crossed by the test particle; since some Bohmian trajectories nevertheless cross the other slit, they express strong doubts about the real physical interest of these trajectories, and call them surrealistic.
Several authors have discussed the question and reached various conclusions. A first reaction by Dürr et al. [@Durr-et-al-1993] in 1993 was very general: these authors pointed out that this property of Bohmian trajectories is no more paradoxical than the orthodox point of view where the particle goes through two slits at the same time; in their opinion, the qualification of "surrealistic" could arise only from a naive version of operationalism. Also in 1993, Dewdney, Hardy and Squires published an article [@Dewdney-et-al-1993] where a Mach-Zehnder interferometer is studied; they argue that this simpler version of the thought experiment reveals nothing surrealistic about Bohmian trajectories, but only illustrates the well-known non-local influence of the quantum potential introduced by Bohm [@Bohm-1952]. Three years later, Aharonov and Vaidman discussed the relation between position measurements and Bohmian positions in general, but also in the context of weak and protective measurements [@Aharonov-Vaidman-1996]. In 1998, Scully stated again that, in his opinion, Bohmian trajectories do not always provide a trustworthy physical picture of particle motion [@Scully-1998]; he nevertheless concludes that he agrees that Bohmian mechanics offers an interesting line of thought. In 1999, Aharonov, Englert and Scully studied the relation between protective measurements and Bohmian trajectories, and concluded that the results challenge any realistic interpretation of the trajectories [@Aharonov-Englert-Scully-1999]. But, in 2006, Hiley expressed again the opinion that surreal trajectories arise only from an incorrect use of the Bohm approach [@Hiley-2006]. In 2007, Wiseman pointed out that the measurement of weak values can lead to an experimental determination of Bohmian trajectories [@Wiseman--2007]; see also Ref. [@Durr-Goldstein-Zanghi-2009]. Bohmian trajectories can also be reconstructed with a generalization of the technique of quantum state reconstruction [@Schleich-Freyberger-Zubairi-2013]. Gisin [@Gisin-2015] has pointed out that surrealistic trajectories occur only if the pointer of the measurement apparatus is slow, i.e. if it indicates which slit the particle crossed with a delay, only after the particle has left the interference region. We conclude this brief review by noting that related experiments have been performed. In 2011, Kocsis et al. have observed the average trajectories of single photons in a two slit interferometer [@Kocsis-et-al-2011]. Of course, photons are relativistic particles, which stricto sensu have no quantum position operator, and therefore neither a Bohmian position nor a trajectory. Nevertheless, Braverman and Simon [@Braverman-Simon-2013] have pointed out that, in the paraxial approximation, the propagation of an electromagnetic field obeys an equation that is identical to the Schrödinger equation for a 2D massive particle; the same authors have proposed to observe the non-locality of trajectories with entangled photons. In 2016, Mahler et al. have indeed observed non-local and surreal Bohmian trajectories, and conclude that the trajectories seem surreal only if one ignores their manifest non-locality [@Mahler-et-al-2016].
A common feature of these publications is that their theoretical analysis assigns a single Bohmian position to the pointer of the measurement apparatus; the corresponding object may for instance be the center of mass of the pointer, and therefore have a macroscopic mass. Nevertheless, in a real measurement apparatus, the pointer necessarily contains many particles, which are described by a large number of degrees of freedom. It then becomes necessary to assign a large number of Bohmian positions to the pointer; the purpose of this article is to discuss their role. As we will see, their presence radically changes the situation, since the fraction of surrealistic trajectories decreases when the number of positions increases. In other words, we will check that non-local effects vanish in the macroscopic limit for the pointer, as one could expect. Moreover, even with microscopic pointers, if quantum effects are properly taken into account, no actual contradiction appears between the trajectories and the indications given by the pointer. Actually, the pointer positions can be used to obtain correct information, not only on the slit crossed by the test particle, but also on the quantum non-local effects taking place in the interference region.
In § \[one-particle-pointer\], we introduce the model used to make the numerical computations in this article. In the original scheme of Ref. [@Englert-1992], the Welcher Weg quantum system was a micromaser enclosed in a cavity, possibly complemented by a second massive particle taking a trajectory that depends on the state of radiation inside the cavity. In order to avoid the introduction of Bohmian variables associated with the electromagnetic field, in this article we will assume that the Welcher Weg apparatus contains only (one or several) massive particles having Bohmian positions. In § \[fast-and-slow-pointers\], we assume that the pointer is made of a single particle and we discuss the characteristics of the trajectories in different situations: pointers providing Welcher Weg information almost immediately, or only delayed information, or intermediate cases. In § \[pointer-several-particles\], we assume that one or two pointers contain several particles, each of them associated with its Bohmian position. Numerical calculations then show that the trajectories strongly depend on the number of these positions; quantum non-local effects may still take place only if the number of Bohmian positions is not too high. In § \[macroscopic-pointer\], the pointer is assumed to be macroscopic; it contains an enormous number of particles (some fraction of the Avogadro number). An analytic argument shows that non-local effects then disappear: surrealistic Bohmian trajectories no longer exist when the pointer is macroscopic (this analytic argument is expanded in an Appendix, where we show that this disappearance is a consequence of the multiplication of the effective velocity of the pointer by a factor $\sqrt{N}$, where $N$ is the number of particles contained in the pointer). Finally, in § \[discussion\], we conclude that adequate observations of the pointer trajectories allow one to understand the detailed characteristics of the test particle trajectory in all cases: as soon as the non-local dynamics of the coupled quantum systems is well understood, the trajectories cease to look surrealistic. A preliminary account of this work can be found in [@FL-CNVLMQ].
Pointer with one degree of freedom, numerical model {#one-particle-pointer}
===================================================
We first study a pointer having one degree of freedom (it contains only one particle moving in a one-dimensional space), which allows us to introduce the model and the notation.
Wave functions
--------------
The test particle is assumed to move in a two-dimensional space, with a wave function depending on coordinates $x$ and $y$, as shown schematically in Figure \[Fig-1\]. The coordinate variable of the one-dimensional pointer is $z$. Just after the test particle has crossed the screen pierced with two slits, it is entangled with the variable of the pointer; the total wave function is the sum of two components: $$\Psi(x,y,z;t)=\Phi_{+}(x,y,z;t)+\Phi_{-}(x,y,z;t) \label{art-1}$$ where $t$ is the time. In (\[art-1\]), each component is the product of a wave function for the test particle (which is itself a product of functions $\varphi_{\pm}^{x}$ of $x$ and $\varphi^{y}$ of $y$), and of a wave function for the pointer $\chi_{\pm}$ containing the $z$ dependence:$$\Phi_{\pm}(x,y,z;t)=\varphi_{\pm}^{x}(x;t)~\varphi^{y}(y;t)~\chi_{\pm}(z;t)
\label{art-1-1}$$ Note that both components of (\[art-1\]) contain the same function $\varphi^{y}$: for the sake of simplicity, we assume that the motion of the test particle in direction $Oy$ (perpendicularly to the screen) is independent of the slit crossed by the particle.
Functions $\varphi$ and $\chi$ are assumed to be simple Gaussian wave packets (Gaussian slits):$$\varphi_{\pm}^{x}(x;t)\sim\left[ a^{4}+\frac{4\hslash^{2}t^{2}}{m^{2}}\right] ^{-1/4}\exp\left\{ \mp i\frac{mv_{x}x}{\hslash}-\frac{\left[ x\mp
d\pm v_{x}t\right] ^{2}}{a^{2}+\frac{2i\hslash t}{m}}\right\}
\label{art-1-2}$$ which at time $t=0$ is centered at a position $x=\pm d$, and has initial velocity $\mp v_{x}$ (the signs are chosen so that the wave packets of the test particle cross in the interference region if $v_{x}>0$); $m$ is the mass of the test particle. For the second coordinate $y$ of the test particle, we set:$$\varphi^{y}(y;t)\sim\left[ b^{4}+\frac{4\hslash^{2}t^{2}}{m^{2}}\right]
^{-1/4}\exp\left\{ i\frac{mv_{y}y}{\hslash}-\frac{\left[ y-v_{y}t\right]
^{2}}{b^{2}+\frac{2i\hslash t}{m}}\right\} \label{art-1-3}$$ which has an initial width $b$, is centered at $y=0$, and has an initial velocity $v_{y}$. Finally, for the wave function of the pointer, we choose:$$\chi_{\pm}(z;t)\sim\left[ c^{4}+\frac{4\hslash^{2}t^{2}}{M^{2}}\right]
^{-1/4}\exp\left\{ \pm i\frac{MVz}{\hbar}-\frac{\left[ z\mp Vt\right] ^{2}}{c^{2}+\frac{2i\hslash t}{M}}\right\} \label{art-1-4}$$ where $M$ is the mass of the pointer particle. This wave packet has an initial width $c$ and an initial velocity $+V$ if the test particle crosses the upper slit, $-V$ if it crosses the lower slit (with this convention, the signs of the velocities of the wave packets associated with the test and pointer particles are opposite if $v_x$ and $V$ are both positive). This corresponds to what Ref. [@Vaidman-2012] calls a Bohmian velocity detector.
Motion of the Bohmian positions
-------------------------------
We now introduce Bohmian positions $X$ and $Y$ for the test particle, and $Z$ for the pointer particle. The guiding equation of the Bohmian position of the test particle reads: $$\begin{aligned}
\frac{\text{d}X}{\text{d}t} & =\frac{\hslash}{2im\left\vert \Psi\left(
X,Y,Z\right) \right\vert ^{2}}\left[ \Psi^{\ast}\left( X,Y,Z\right)
\frac{\partial \Psi}{\partial x}\left( X,Y,Z\right) -\text{c.c.}\right]
\nonumber\\
\frac{\text{d}Y}{\text{d}t} & =\frac{\hslash}{2im\left\vert \Psi\left(
X,Y,Z\right) \right\vert ^{2}}\left[ \Psi^{\ast}\left( X,Y,Z\right)
\frac{\partial \Psi}{\partial y}\left( X,Y,Z\right) -\text{c.c.}\right]
\label{art-4}\end{aligned}$$ (c.c. means complex conjugate; the time dependence is not explicitly written for simplicity). Equivalently, this Bohmian velocity can also be written in terms of the gradient of the phase of the wave function. As for the motion of the pointer Bohmian position, it is given by:$$\frac{\text{d}Z}{\text{d}t}=\frac{\hslash}{2iM\left\vert \Psi\left(
X,Y,Z\right) \right\vert ^{2}}\left[ \Psi^{\ast}\left( X,Y,Z\right)
\frac{\partial \Psi}{\partial z}\left( X,Y,Z\right) -\text{c.c.}\right]
\label{art-5}$$
Since the wave function is factorized with respect to the $y$ variable, the Bohmian motion along axis $Oy$ is independent of the other variables: there is actually no $X$ or $Z$ dependence in the second equation (\[art-4\]). The motion of $Y(t)$ is therefore given by:$$\frac{\text{d}Y}{\text{d}t}=v_{y}+\frac{4\hbar^{2}t}{m^{2}}\frac{Y-v_{y}t}{b^{4}+\frac{4\hbar^{2}t^{2}}{m^{2}}} \label{art-11}$$ The solution of this equation is:$$Y(t)=v_{y}t+Y_{0}\sqrt{1+\frac{4\hbar^{2}t^{2}}{m^{2}b^{4}}} \label{art-12}$$ where $Y_{0}$ is the initial value of $Y$ at time zero. The motion of $Y(t)$ is therefore relatively simple: a uniform motion along $Oy$ plus a correction introduced by the diffraction of the wave packet.
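One can check directly that (\[art-12\]) solves (\[art-11\]): differentiating (\[art-12\]) and using $b^{4}+4\hbar^{2}t^{2}/m^{2}=b^{4}\left(1+\frac{4\hbar^{2}t^{2}}{m^{2}b^{4}}\right)$ gives $$\frac{\text{d}Y}{\text{d}t}=v_{y}+Y_{0}\,\frac{4\hbar^{2}t/(m^{2}b^{4})}{\sqrt{1+\frac{4\hbar^{2}t^{2}}{m^{2}b^{4}}}}=v_{y}+\frac{4\hbar^{2}t}{m^{2}}\,\frac{Y(t)-v_{y}t}{b^{4}+\frac{4\hbar^{2}t^{2}}{m^{2}}}\,.$$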
Dimensionless variables
-----------------------
We now introduce dimensionless variables by setting:$$\begin{aligned}
x^{\prime} & =\frac{x}{a}~~~~~~~~~y^{\prime}=\frac{y}{b}~~~~~~~~~~~z^{\prime
}=\frac{z}{c}\nonumber\\
X^{\prime} & =\frac{X}{a}~~~~~~~~Y^{\prime}=\frac{Y}{b}~~~~~~~~~Z^{\prime
}=\frac{Z}{c} \label{art-6}\end{aligned}$$ and:$$d^{\prime}=\frac{d}{a} \label{art-6-2}$$ All positions are then expressed in terms of the initial width of the corresponding wave packet. We also introduce a dimensionless time $t^{\prime
}$ by:$$t^{\prime}=\frac{v_{y}t}{b} \label{art-7}$$ The velocities are defined in terms of the variables:$$\xi_{x}=\frac{mv_{x}a}{\hslash}~~~~~~~~~~~~~~~~\xi_{y}=\frac{mv_{y}b}{\hslash
}~~~~~~~~~~~~~~~~\Xi=\frac{MVc}{\hbar} \label{art-8}$$ Equation (\[art-12\]) then simplifies into:$$Y^{\prime}(t^{\prime})=t^{\prime}+Y_{0}^{\prime}\sqrt{1+\frac{4(t^{\prime
})^{2}}{\xi_{y}^{2}}} \label{art-12-bis}$$
Finally, it is also convenient to introduce the dimensionless parameters:$$r=\frac{a}{b}~~~~~~~~~~~~~~~~R=\frac{a}{c}~~~~~~~~~~~~~~~~\mu=\frac{m}{M}
\label{art-13}$$ The equations of motion of the Bohmian positions of the test and pointer particles then become:$$\begin{aligned}
\frac{\text{d}X^{\prime}}{\text{d}t^{\prime}} & =\frac{1}{2ir^{2}\xi
_{y}\left\vert \Psi\right\vert ^{2}}\left[ \Psi^{\ast}\frac{\partial
}{\partial x^{\prime}}\Psi-\Psi\frac{\partial}{\partial x^{\prime}}\Psi^{\ast
}\right] \nonumber\\
\frac{\text{d}Z^{\prime}}{\text{d}t^{\prime}} & =\frac{\mu R^{2}}{2ir^{2}\xi_{y}\left\vert \Psi\right\vert ^{2}}\left[ \Psi^{\ast}\frac{\partial
}{\partial z^{\prime}}\Psi-\Psi\frac{\partial}{\partial z^{\prime}}\Psi^{\ast
}\right] \label{art-14}\end{aligned}$$
The initial values of the Bohmian positions are denoted as $X_{0}$, $Y_{0}$ and $Z_{0}$ respectively (with added primes for dimensionless variables). For a given realization of the experiment, these variables are random, with a distribution given by the squared modulus of the corresponding wave function. For instance, $X_{0}$ can fall at any place inside one of the two Gaussian slits; for all figures of this article, we show the trajectories corresponding to $9$ equidistant values inside each slit. Since $Y$ does not play an important role in the problem (it just increases smoothly in time), we set $Y_{0}=0$, so that $Y$ (or $Y^{\prime}$) provides a rough measure of the time. Finally, to study the impact of the values of $Z_{0}$ on the trajectories, we make a random choice of $Z_{0}$ inside its Gaussian distribution.
The time at which the wave packets of the test particle cross in the interference region is $t_{\text{cross}}\simeq d/v_{x}$, and the time at which the wave packets of the pointer become well separated is $t_{P}\simeq c/V$. The pointer thus gives a clear indication before the test particle reaches the interference region if:$$\frac{t_{\text{cross}}}{t_{P}}\simeq\frac{Vd}{v_{x}c}>1 \label{art-14-2}$$ In terms of the dimensionless variables introduced above, this condition becomes:$$E=\frac{Vd}{v_{x}c}=\frac{\Xi}{\xi_{x}}R^{2}d^{\prime}\mu>1 \label{art-14-3}$$ For short, we will call (\[art-14-3\]) the fast pointer condition.
Bohmian trajectories may be obtained by straightforward time integration of the appropriate set of coupled differential equations: Eq. (\[art-14\]) for a single-particle pointer, a set of $1+N$ similar equations for the Bohmian positions associated to the test particle, $X'$, and to the multi-particle pointer(s), $Z'_{1}$ to $Z'_{N}$, see Appendix. With conventional numerical tools or dedicated software, this is manageable on a personal computer for reasonably small values of $N$ (and, possibly, any type of wave packets).
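As a concrete illustration of this procedure (a sketch only, not the code used to produce the figures of this article), the single-particle-pointer case of Eqs. (\[art-1\])-(\[art-1-4\]) and (\[art-4\])-(\[art-5\]) can be integrated with a generic solver; the parameter values below are placeholders corresponding roughly to a slow pointer.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (units with hbar = 1), roughly a slow-pointer configuration.
hbar = 1.0
m, M = 1.0, 1.0              # masses of the test and pointer particles
a, b, c = 1.0, 1.0, 5.0      # initial widths of the wave packets
d = 3.0                      # half-distance between the two Gaussian slits
vx, vy, V = 10.0, 10.0, 2.0  # packet velocities

def packet(u, t, width, mass, u0, vel):
    """Free Gaussian packet centred at u0 + vel*t with initial width `width` (up to a constant)."""
    return ((width**4 + 4.0*hbar**2*t**2/mass**2)**(-0.25)
            * np.exp(1j*mass*vel*u/hbar
                     - (u - u0 - vel*t)**2/(width**2 + 2j*hbar*t/mass)))

def psi(x, y, z, t):
    """Entangled wave function Psi = Phi_+ + Phi_- for the single-particle pointer."""
    phi_plus = packet(x, t, a, m, +d, -vx)*packet(z, t, c, M, 0.0, +V)   # upper-slit branch
    phi_minus = packet(x, t, a, m, -d, +vx)*packet(z, t, c, M, 0.0, -V)  # lower-slit branch
    return (phi_plus + phi_minus)*packet(y, t, b, m, 0.0, vy)

def bohm_rhs(t, q, eps=1.0e-6):
    """Bohmian velocities (hbar/m) Im(d_x Psi / Psi), etc., via centred finite differences."""
    x, y, z = q
    p = psi(x, y, z, t)   # note: the integration may become stiff near nodes of Psi
    dpx = (psi(x + eps, y, z, t) - psi(x - eps, y, z, t))/(2.0*eps)
    dpy = (psi(x, y + eps, z, t) - psi(x, y - eps, z, t))/(2.0*eps)
    dpz = (psi(x, y, z + eps, t) - psi(x, y, z - eps, t))/(2.0*eps)
    return [hbar/m*np.imag(dpx/p), hbar/m*np.imag(dpy/p), hbar/M*np.imag(dpz/p)]

# One trajectory: test particle starting at the centre of the upper slit, pointer at Z0 = 0.
sol = solve_ivp(bohm_rhs, (0.0, 1.0), [d, 0.0, 0.0], rtol=1e-8, atol=1e-10, dense_output=True)
X, Y, Z = sol.y
```

Looping over initial values $X_{0}$ drawn inside each slit, and over random values of $Z_{0}$, produces bundles of trajectories of the kind displayed in the following sections.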
For our Gaussian two-slit model, we have used Mathematica [@Mathematica] for computation and plotting of all trajectories. Solving this type of ordinary differential equations with Mathematica’s generic tool and default options works well up to a few tens of pointer particles and time integration is fast (a fraction of a second up to $N=5$, typically). For larger $N$ values, we resorted to the (automatically) suggested optional method of simplification suited for the problem, which is more robust but slower. The computing time increases exponentially with the number of pointer particles, ranging from 5 sec. for $N=10$ to 3.5 hours for $N=200$ on a computer equipped with a 3.2 GHz Intel Core i3-550 CPU, for example, or 1.5 h for $N=200$ on a 4.0 GHz Intel Core i7-6700K CPU.
One way to substantially reduce the computational load is to use the explicit analytical expressions of the Bohmian velocities, which can be derived for Gaussian wave packets (see Appendix, Section 2). The computation of all trajectories is then roughly ten times faster.
Finally and more simply, as shown in Section 3 of the Appendix, the particle and pointer trajectories may actually be obtained from the time integration of only two differential equations. These two equations govern the evolution of $X^{\prime}$ and $\hat{\Sigma}$, the Bohmian positions of the test particle and of a single collective particle associated with the pointer, respectively. The trajectories of all pointer particles can subsequently be derived analytically from the evolution of this collective particle, which has an effective velocity parameter $\hat{\Xi} = \Xi\sqrt{N}$ when the pointer includes $N$ particles with (dimensionless) velocity $\Xi$, and therefore moves $\sqrt{N}$ times faster.
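The origin of this $\sqrt{N}$ factor can be sketched directly from the single-particle packets (\[art-1-4\]): when the $N$ pointer particles move together, the product $\prod_{n=1}^{N}\chi_{\pm}(z_{n};t)$ depends on the branch $\pm$ only through the collective coordinate $\hat{\Sigma}\equiv N^{-1/2}\sum_{n}z_{n}$, since $$\sum_{n=1}^{N}\left[ \pm i\frac{MVz_{n}}{\hbar}-\frac{\left( z_{n}\mp Vt\right) ^{2}}{c^{2}+\frac{2i\hbar t}{M}}\right] =\pm i\frac{M\left( \sqrt{N}V\right) \hat{\Sigma}}{\hbar}-\frac{\left( \hat{\Sigma}\mp\sqrt{N}Vt\right) ^{2}}{c^{2}+\frac{2i\hbar t}{M}}-\frac{\sum_{n}z_{n}^{2}-\hat{\Sigma}^{2}}{c^{2}+\frac{2i\hbar t}{M}}$$ and the last term is identical in both branches. The branch-dependent part is thus that of a single effective pointer particle of mass $M$, width $c$ and velocity $\sqrt{N}V$, i.e. $\hat{\Xi}=\Xi\sqrt{N}$ in dimensionless form.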
Single particle pointer {#fast-and-slow-pointers}
=======================
When the Welcher Weg pointer contains a single particle, the system under study is made of two entangled particles, exactly as in EPR/Bell non-locality experiments. Various situations can be considered, depending on the nature of the entanglement between the test particle and the pointer. For the sake of simplicity, we first assume that they are not coupled, a case in which no Welcher Weg information whatsoever is provided by the pointer; it has no effect on the trajectory of the test particle. Next we consider the opposite situation, where the pointer is strongly coupled to the test particle and provides Welcher Weg information before the test particle reaches the interference region (fast pointer); the trajectories are then completely different and look classical. We also study what happens if the pointer is slow and provides the information only after the test particle has crossed the interference region, and finally consider intermediate situations.
Immobile pointer providing no information
-----------------------------------------
![(color on line) Trajectories obtained when the pointer particle is not coupled to the test particle ($\Xi=V=0$). The two particles are not entangled and their positions move independently. The figure shows the trajectories of the test particle originating from $9$ possible initial values $X_{0}$ inside each of the slits. The trajectories crossing the upper slit are drawn with full lines (blue color on line), those crossing the lower slit in dashed lines (red color on line). All trajectories of the test particle bounce on the symmetry plane of the experiment (no-crossing rule).[]{data-label="Fig-2"}](Fig-2.eps){width="7cm"}
If we set $\Xi=E=0$, the wave function (\[art-1\]) factorizes into a component for the test particle and another for the pointer, so that the motions of their Bohmian positions become independent: the motion of the test particle occurs as if the apparatus did not exist. Figure \[Fig-2\] shows the trajectories obtained in this case. They exhibit the so-called no-crossing rule: trajectories cannot cross the symmetry plane of the experiment, and they bounce on this plane in the interference region. This is a simple consequence of the fact that, in the symmetry plane, the probability current is contained inside the plane. In dBB quantum mechanics, it is well known that interference effects can curve trajectories in free space, so that the trajectory of the particle observed after the interference region should not be extrapolated backwards in time as a straight line.
Pointer providing real-time information (fast pointer)
------------------------------------------------------
We now assume that the pointer is sufficiently coupled to the test particle to provide real-time information: it indicates the slit crossed by the particle before the interference region is reached. This situation is obtained when condition (\[art-14-3\]) for $E$ is met. As shown in Figure \[Fig-3\], the trajectories then cross the interference region as almost straight lines. This was expected since, in quantum mechanics, any Welcher Weg information stored in the pointer necessarily destroys the interference effects. One can then infer which slit was crossed by the test particle by a simple backwards extrapolation of its trajectory along a straight line; no surrealistic trajectory appears.
![(color on line) Trajectories obtained when the pointer particle is coupled to the test particle and plays the role of a fast pointer (i.e. a pointer providing Welcher Weg information before the test particle reaches the interference region). The left part of the figure shows the trajectories of the test particle, the right part the trajectories of the pointer particle. The values of the input parameters used for the calculation are $\Xi=\xi_{x}=10$, $r=R=1$, $d^{\prime}=3$, $\mu=1$, corresponding to $E=3$. The position of the pointer particle is assumed to be initially centered ($Z_0^{\prime} = 0$). It subsequently moves upwards if the test particle crosses the upper slit, downwards if it crosses the lower slit. Because this information may be obtained before the test particle reaches the interference region, no interference effect takes place, so that the trajectories of the test particle are (almost) straight lines. Similarly, the trajectories of the pointer particle also remain straight, and depend only on the slit crossed by the test particle, independently of its position inside the slit. No trajectory can then be interpreted as surrealistic.[]{data-label="Fig-3"}](Fig-3.eps){width="6cm"}
![(color on line) Trajectories obtained when the pointer particle is coupled to the test particle and plays the role of a fast pointer (i.e. a pointer providing Welcher Weg information before the test particle reaches the interference region). The left part of the figure shows the trajectories of the test particle, the right part the trajectories of the pointer particle. The values of the input parameters used for the calculation are $\Xi=\xi_{x}=10$, $r=R=1$, $d^{\prime}=3$, $\mu=1$, corresponding to $E=3$. The position of the pointer particle is assumed to be initially centered ($Z_0^{\prime} = 0$). It subsequently moves upwards if the test particle crosses the upper slit, downwards if it crosses the lower slit. Because this information may be obtained before the test particle reaches the interference region, no interference effect takes place, so that the trajectories of the test particle are (almost) straight lines. Similarly, the trajectories of the pointer particle also remain straight, and depend only on the slit crossed by the test particle, independently of its position inside the slit. No trajectory can then be interpreted as surrealistic.[]{data-label="Fig-3"}](Fig-3-pointeur.eps){width="6cm"}
Pointer providing only delayed information (slow pointer)
---------------------------------------------------------
![(color on line) Trajectories obtained when the pointer particle is slow: it provides only delayed Welcher Weg information after the test particle has crossed the interference region. Interference effects can then occur for the test particle. The only input parameter that has been changed with respect to Figure \[Fig-3\] is $R=0.2$ (this corresponds to $E=0.12$). Most trajectories now change direction during the crossing of the interference region by the test particle. These curved trajectories are sometimes called surrealistic trajectories because the position of the pointer, long after the crossing, has a behavior opposite to that of a fast pointer: it moves downwards if the trajectory of the test particle crosses the upper slit, upwards if it crosses the lower slit. A few trajectories remain normal and do not exhibit this apparent contradiction. In this figure, we have assumed that the initial Bohmian position of the pointer particle vanishes ($Z_{0}^{\prime} = 0$); the trajectories thus form a perfectly symmetric pattern.[]{data-label="Fig-4"}](Fig-4.eps){width="6cm"}
![(color on line) Trajectories obtained when the pointer particle is slow: it provides only delayed Welcher Weg information after the test particle has crossed the interference region. Interference effects can then occur for the test particle. The only input parameter that has been changed with respect to Figure \[Fig-3\] is $R=0.2$ (this corresponds to $E=0.12$). Most trajectories now change direction during the crossing of the interference region by the test particle. These curved trajectories are sometimes called surrealistic trajectories because the position of the pointer, long after the crossing, has a behavior opposite to that of a fast pointer: it moves downwards if the trajectory of the test particle crosses the upper slit, upwards if it crosses the lower slit. A few trajectories remain normal and do not exhibit this apparent contradiction. In this figure, we have assumed that the initial Bohmian position of the pointer particle vanishes ($Z_{0}^{\prime} = 0$); the trajectories thus form a perfectly symmetric pattern.[]{data-label="Fig-4"}](Fig-4-pointeur.eps){width="6cm"}
We now assume that the pointer is slow: its wave packets separate significantly only after the test particle has crossed the interference region. This situation corresponds to what Ref. [@Dewdney-et-al-1993] calls late measurements. Figure \[Fig-4\] is obtained by changing parameter $R$ from $R=1$ (Figure \[Fig-3\]) to $R=0.2$. At constant $\Xi$, this reduces the velocity of the pointer by a factor $5$; moreover, at constant $a$ (no change of the wave packet of the test particle), this multiplies the width of the wave packet of the pointer particle by a factor $5$; altogether, parameter $E$ is divided by $25$ and becomes equal to $E=0.12$.
We then obtain a completely different situation. Initially, the pointer starts to move in a direction that indicates the slit crossed by the Bohmian position of the test particle, as expected (and as in Figure \[Fig-3\]). But, when the test particle reaches the interference region, non-local interference effects take place: the Bohmian positions can jump[^3] from one wave packet to the other, so that both trajectories change directions. They remain consistent with each other, inasmuch as the motion of the pointer particle (its velocity) constantly indicates the beam in which the test particle propagates.
Nevertheless, if one extrapolates the behavior of a fast pointer to this case, one reaches a contradiction. If for instance the test particle crossed the upper slit, at long times the pointer is more likely to move downwards (as a fast pointer would do if the test particle had crossed the lower slit, see Fig. \[Fig-3\]). Ref. [@Englert-1992] considers that this motion of the pointer provides correct information on which slit was really crossed by the test particle; because the Bohmian trajectory crossed the other slit, it becomes contradictory with (this interpretation of) the measurement, and therefore physically meaningless and surrealistic.
Another interesting feature of this situation is that the initial value $Z_{0}$ of the Bohmian position of the pointer plays an important role. It is a random variable with a probability distribution that is determined by the squared modulus of the initial wave function $\chi_{\pm}(z;t=0)$. Figure \[Fig-4\] corresponds to the case where $Z_{0}=0$, when the initial position of the pointer is perfectly centered, so that no slit is favored. In Figure \[Fig-5\], the dimensionless value of $Z_{0}$ has been changed to $Z_{0}^{\prime}=0.5$; we see that the trajectories are significantly different. We actually observe an interesting non-local predestination effect: because the initial position of the pointer particle seems to already point to one of the slits, in the interference region the trajectory of the test particle is influenced by this indication. We have computed series of trajectories for numerous random initial values of the position of the pointer particle. Systematically, the most frequently selected direction seems to originate from the slit selected by the pointer from the beginning; the measured system follows the initial indications of the measurement apparatus, so to speak.
![(color on line) Same figure as Fig. \[Fig-4\], but $Z_{0}^{\prime}$ has been changed to $Z_{0}^{\prime}=0.3$, which introduces an asymmetry. The proportion of situations where the pointer position ends up moving upwards (and the test particle downwards) has increased. This illustrates how the initial value of an additional variable of the measurement apparatus can influence the future trajectory of the measured system, due to a quantum non-local effect. After the interference region, the trajectory of the test particle takes more often a direction that seems to originate from the slit corresponding to the initial indication of the pointer (predestination effect).[]{data-label="Fig-5"}](Fig-5.eps){width="6cm"}
![(color on line) Same figure as Fig. \[Fig-4\], but $Z_{0}^{\prime}$ has been changed to $Z_{0}^{\prime}=0.3$, which introduces an asymmetry. The proportion of situations where the pointer position ends up moving upwards (and the test particle downwards) has increased. This illustrates how the initial value of an additional variable of the measurement apparatus can influence the future trajectory of the measured system, due to a quantum non-local effect. After the interference region, the trajectory of the test particle takes more often a direction that seems to originate from the slit corresponding to the initial indication of the pointer (predestination effect).[]{data-label="Fig-5"}](Fig-5-pointeur.eps){width="6cm"}
![(color on line) Same figure as Fig. \[Fig-5\], but in a three-dimensional representation showing the simultaneous motion of the test and pointer particles. This illustrates how the trajectories avoid each other in the third dimension. 2D projections, showing the trajectories of the test particle and of the pointer, are also plotted in thin lines and gray color.[]{data-label="Fig-6"}](Fig-6.eps){width="8cm"}
It is known that Bohmian trajectories can never cross each other. The apparent crossings of Figure \[Fig-5\] are due to the fact that this figure is a projection over a 2D plane of a trajectory that actually takes place in a 3D space, the third dimension being the position of the pointer particle. Figure \[Fig-6\] illustrates how the trajectories avoid crossing in the third dimension.
This brief survey of the behavior of a microscopic pointer shows that the random initial position of the pointer plays an important role, and may even influence the trajectory of the test particle. The possible surrealistic character of the trajectory is interpretation-dependent; surrealism appears only if one interprets the indications of the pointer as providing information about the past positions of the test particle. Nevertheless, at any time the motion of the pointer particle gives information that is consistent with the present motion of the test particle: just after it crossed the slit, the position of the pointer moves in the corresponding direction; if the test particle later jumps from one wave packet to the other, the pointer also reverses its motion, providing an indication that remains consistent with the present trajectory of the test particle. Nothing seems surrealistic if the non-local dynamics of the measurement process is duly taken into account.
One or two pointers containing several particles {#pointer-several-particles}
================================================
Description of the model
------------------------
We now generalize (\[art-1-1\]) by assuming that each of the two components of the wave function contains the product of $N$ individual wave functions associated with particles contained in one or several pointers. With the same dimensionless variables as defined above, we write:$$\begin{aligned}
\Phi_{\pm}(x,y,z_{1},z_{2},\ldots,z_{N};t) & \propto\nonumber\\
& ~~\exp\left\{ -\frac{\left[ x^{\prime}\mp d^{\prime}\pm\frac{\xi_{x}}{r^{2}\xi_{y}}t^{\prime}\right] ^{2}}{1+\frac{4t^{\prime2}}{r^{4}\xi_{y}^{2}}}-\frac{\left[ y^{\prime}-t^{\prime}\right] ^{2}}{1+\frac{4t^{\prime2}}{\xi_{y}^{2}}}-\sum_{n=1}^{N}\frac{\left[ z_{n}^{\prime} - \frac{\mu\Xi_n^{\pm} \,
R^{2}}{r^{2}\xi_{y}}t^{\prime}\right] ^{2}}{1+4\frac{\mu^{2}R^{4}t^{\prime2}}{r^{4}\xi_{y}^{2}}}\right\} \nonumber\\
& \times\exp\left\{ i\left[ \left( \pm\xi_{x}x^{\prime}+\xi_{y}y^{\prime
} + \sum_{n=1}^{N} \Xi_n^{\pm} \, z_{n}^{\prime}~\right) +\frac{2t^{\prime}}{r^{2}\xi_{y}}\frac{\left[ x^{\prime}\mp d^{\prime}\pm\frac{\xi_{x}}{r^{2}\xi_{y}}t^{\prime}\right] ^{2}}{1+\frac{4t^{\prime2}}{r^{4}\xi_{y}^{2}}}\right.
\right. \nonumber\\
& ~~~~~~~~~~~~~~~~~\left. \left. +\frac{2t^{\prime}}{\xi_{y}}\frac{\left[ y^{\prime}-t^{\prime}\right] ^{2}}{1+\frac{4t^{\prime2}}{\xi_{y}^{2}}}+\sum_{n=1}^{N}\frac{2\mu R^{2}t^{\prime}}{r^{2}\xi_{y}}\frac{\left[ z_{n}^{\prime} - \frac{\mu\Xi_n^{\pm} \, R^{2}}{r^{2}\xi_{y}}t^{\prime
}\right] ^{2}}{1+4\frac{\mu^{2}R^{4}t^{\prime2}}{r^{4}\xi_{y}^{2}}}\right]
\right\} \label{art-100}\end{aligned}$$ Each $z_{n}$ is the 1D spatial coordinate of one pointer particle; a different origin may be chosen for each value of $n$, meaning that the wave packets do not necessarily coincide at time $t=0$. The parameters $\Xi_n^{\pm}$ define the initial velocities of the wave packets of the pointer particles: if the test particle crosses the upper slit, the (dimensionless) velocity is $\Xi_n^{+}$; if it crosses the lower slit, it is $\Xi_n^{-}$. Of course, if there is only one pointer, all the pointer particles belong to the same solid object, and we will assume that all these wave packets move at the same speed; we then simply choose: $$\Xi_n^{\pm}=\pm \, \Xi$$
But we can also assume that two independent pointers are used to detect the test particle: for instance, one pointer starts moving if this particle crosses the upper slit, but remains still otherwise; the other pointer operates in the same way for the lower slit. The simplest case is obtained if each pointer contains only one particle ($N=2$), and with the following choice of parameters: $$\begin{aligned}
\Xi_1^+ = \Xi \hspace{1cm} & \hspace{1cm} \Xi_1^- = 0 \notag \\ \Xi_2^+=0 \hspace{1cm} & \hspace{1cm} \Xi_2^- = \Xi
\label{article-21}\end{aligned}$$ We now study this latter case.
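In terms of the Python sketch given earlier (again only an illustration, reusing the hypothetical `packet` function and parameters defined there), the choice (\[article-21\]) amounts to giving each pointer packet its own branch-dependent velocity:

```python
# Two independent single-particle pointers: pointer 1 reacts only to the upper slit,
# pointer 2 only to the lower slit (the two-pointer choice above, with Xi mapped onto V).
V_plus = [V, 0.0]    # pointer packet velocities if the test particle crosses the upper slit
V_minus = [0.0, V]   # pointer packet velocities if it crosses the lower slit

def psi_two_pointers(x, y, zs, t):
    """Wave function for the test particle plus len(zs) pointer particles."""
    chi_plus = np.prod([packet(zn, t, c, M, 0.0, v) for zn, v in zip(zs, V_plus)])
    chi_minus = np.prod([packet(zn, t, c, M, 0.0, v) for zn, v in zip(zs, V_minus)])
    return (packet(x, t, a, m, +d, -vx)*chi_plus
            + packet(x, t, a, m, -d, +vx)*chi_minus)*packet(y, t, b, m, 0.0, vy)
```

The Bohmian right-hand side is generalized in the same way, with one finite-difference derivative per coordinate $z_{n}$.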
Two independent pointers {#two-pointers}
------------------------
![(color on line) Trajectories obtained with two slow pointers, with the same values of the input parameters as in Figure \[Fig-4\]. The initial positions of the pointer particles $Z_{1,0}^{\prime}$ and $Z_{2,0}^{\prime}$ are both equal to $0.01$. The upper part of the figure shows two trajectories of the test particle, one originating from the upper slit (full line, blue on line) and one originating from the lower slit (dashed line, red on line). The trajectories of the two pointers are shown in the lower part of the figure: on the left the pointer associated with the upper slit, on the right that associated with the lower slit. Initially, the motions of the positions are exactly those one could naively expect: the position of the pointer located near the slit crossed by the particle moves, while the other remains still. Later, when the test particle crosses the interference region, the Bohmian position in the configuration space jumps from one wave packet to the other, resulting in the simultaneous appearance of curved trajectories for each constituent particle. The test particle then changes direction, the first pointer particle stops, and the second pointer particle starts to move. This non-local effect, similar to that shown in Figure \[Fig-5\], is called a surrealistic trajectory in the literature.[]{data-label="Fig-7"}](Fig-7.eps){width="7cm"}
![(color on line) Trajectories obtained with two slow pointers, with the same values of the input parameters as in Figure \[Fig-4\]. The initial positions of the pointer particles $Z_{1,0}^{\prime}$ and $Z_{2,0}^{\prime}$ are both equal to $0.01$. The upper part of the figure shows two trajectories of the test particle, one originating from the upper slit (full line, blue on line) and one originating from the lower slit (dashed line, red on line). The trajectories of the two pointers are shown in the lower part of the figure: on the left the pointer associated with the upper slit, on the right that associated with the lower slit. Initially, the motions of the positions are exactly those one could naively expect: the position of the pointer located near the slit crossed by the particle moves, while the other remains still. Later, when the test particle crosses the interference region, the Bohmian position in the configuration space jumps from one wave packet to the other, resulting in the simultaneous appearance of curved trajectories for each constituent particle. The test particle then changes direction, the first pointer particle stops, and the second pointer particle starts to move. This non-local effect, similar to that shown in Figure \[Fig-5\], is called a surrealistic trajectory in the literature.[]{data-label="Fig-7"}](Fig-7-pointeur-1.eps){width="6cm"}
![(color on line) Trajectories obtained with two slow pointers, with the same values of the input parameters as in Figure \[Fig-4\]. The initial positions of the pointer particles $Z_{1,0}^{\prime}$ and $Z_{2,0}^{\prime}$ are both equal to $0.01$. The upper part of the figure shows two trajectories of the test particle, one originating from the upper slit (full line, blue on line) and one originating from the lower slit (dashed line, red on line). The trajectories of the two pointers are shown in the lower part of the figure: on the left the pointer associated with the upper slit, on the right that associated with the lower slit. Initially, the motions of the positions are exactly those one could naively expect: the position of the pointer located near the slit crossed by the particle moves, while the other remains still. Later, when the test particle crosses the interference region, the Bohmian position in the configuration space jumps from one wave packet to the other, resulting in the simultaneous appearance of curved trajectories for each constituent particle. The test particle then changes direction, the first pointer particle stops, and the second pointer particle starts to move. This non-local effect, similar to that shown in Figure \[Fig-5\], is called a surrealistic trajectory in the literature.[]{data-label="Fig-7"}](Fig-7-pointeur-2.eps){width="6cm"}
We consider two microscopic pointers, each made of a single particle with parameters given by (\[article-21\]). Figure \[Fig-7\] shows two trajectories that are obtained in this case, with the same set of input parameters as for Figures \[Fig-4\] and \[Fig-5\]. For clarity, only one trajectory is displayed for each slit, assuming that the test particle crosses it at the center. Initially, when the test particle crosses the upper slit, the position of the upper pointer moves upwards, while the position of the lower pointer remains still; this is exactly as expected. Later, when the test particle reaches the interference region, it can bounce and the velocity can change its direction; this introduces curvatures in the trajectories of all three particles. The position of the upper pointer stops, and that of the lower pointer starts moving up, according to the new velocity of the test particle – clearly a quantum non-local effect. Nevertheless, the successive positions of the pointers give perfectly correct real-time information on the actual trajectory of the test particle. No contradiction appears between the indications of the pointers and the trajectory of the test particle, provided the motions of the pointers are interpreted as measurements of the velocity of the test particle at the same time; this is an interesting illustration of how a quantum non-local measurement apparatus can operate.
The initial values of the Bohmian positions of the pointer particles are of course random. It may happen that they are, for instance, both positive and seem to indicate the presence of the test particle in one slit even before the measurement has begun. Figure \[Fig-8\] shows another example, with the same input parameters as in Figure \[Fig-5\], but different initial values of the positions of the pointer particles. These values are such that, initially, the two pointers have positions corresponding to a detection of the particle in the upper slit. We then observe the same predestination effect as in Figure \[Fig-5\]: in the interference region, the trajectory of the test particle more often takes a direction that is consistent with these initial values.
This analysis shows that the Bohmian trajectories of the pointer particles always remain consistent with that of the test particle: before the crossing of the interference region, the trajectory of the pointer particles indicates which slit has been crossed, and after the crossing it indicates in which beam the test particle propagates. Non-local effects are also visible on the pointer trajectories during the crossing time, and constantly reflect the behavior of the test particle. In the words of Ref. [@Vaidman-2012], this phenomenon is a dramatic demonstration of the consequences of non-locality on the interpretation of measurements.
![(color on line) Same figure as Fig. \[Fig-7\], but with different initial values for the positions of the pointers ($Z_{1,0}^{\prime} = 0.5 $ and $Z_{2,0}^{\prime}=0.9$) corresponding to a larger average value than in that figure. In this case, after crossing the interference region, the trajectory of the test particle takes a direction that is determined by the initial positions of the pointer particles, revealing a strong effect of the additional variables of the measurement apparatus.[]{data-label="Fig-8"}](Fig-8.eps){width="6cm"}
![(color on line) Same figure as Fig. \[Fig-7\], but with different initial values for the positions of the pointers ($Z_{1,0}^{\prime} = 0.5 $ and $Z_{2,0}^{\prime}=0.9$) corresponding to a larger average value than in that figure. In this case, after crossing the interference region, the trajectory of the test particle takes a direction that is determined by the initial positions of the pointer particles, revealing a strong effect of the additional variables of the measurement apparatus.[]{data-label="Fig-8"}](Fig-8-pointeur-1.eps){width="5cm"}
![(color on line) Same figure as Fig. \[Fig-7\], but with different initial values for the positions of the pointers ($Z_{1,0}^{\prime} = 0.5 $ and $Z_{2,0}^{\prime}=0.9$) corresponding to a larger average value than in that figure. In this case, after crossing the interference region, the trajectory of the test particle takes a direction that is determined by the initial positions of the pointer particles, revealing a strong effect of the additional variables of the measurement apparatus.[]{data-label="Fig-8"}](Fig-8-pointeur-2.eps){width="5cm"}
Single pointer with more particles
----------------------------------
Coming back to the case where the measurement apparatus consists of a single pointer, but assuming that this pointer contains $N$ particles, we have plotted a number of trajectories. The input parameters are the same as in § \[fast-and-slow-pointers\]; the only difference is that $N=10$, and that $10$ initial positions $Z_{n,0}^{\prime}$ of the pointer particles are randomly chosen from their common Gaussian distribution. Since it would be inconvenient to plot $10$ separate trajectories, in the figures we show the trajectory of the averaged variable (also used in the Appendix): $$\hat{\Sigma}^{\prime} = \frac{1}{\sqrt N} \sum_n Z_n^{\prime}
\label{defn-sigma-prime}$$
Figure \[Fig-9\] shows a case where the initial value of $\hat{\Sigma}^{\prime}$ is zero, so that the initial dBB state of the pointer is neutral (no preference for one result or the other). We then observe that the various trajectories of the test particle are symmetrical with respect to the symmetry plane of the interference device. Some of them still bounce on this plane, as in Figure \[Fig-4\], but a larger proportion of trajectories cross the interference region: the number of surrealistic trajectories is therefore reduced when the pointer contains more particles.
![(color on line) Trajectories obtained with $N=10$ pointer particles. The left part shows the trajectory of the test particle, the right part the trajectory associated with the averaged pointer variable $\hat{\Sigma}^{\prime}$ defined in (\[defn-sigma-prime\]). This variable is assumed to have an initial value $\hat{\Sigma}^{\prime} (0) =0 $. Apart from $N$, all parameters are the same as in Figure \[Fig-4\]. A larger proportion of trajectories than in that figure do not change direction, illustrating the decrease of the proportion of surrealistic trajectories when $N$ increases.[]{data-label="Fig-9"}](Fig-9.eps){width="6cm"}
![(color on line) Trajectories obtained with $N=10$ pointer particles. The left part shows the trajectory of the test particle, the right part the trajectory associated with the averaged pointer variable $\hat{\Sigma}^{\prime}$ defined in (\[defn-sigma-prime\]). This variable is assumed to have an initial value $\hat{\Sigma}^{\prime} (0) =0 $. Apart from $N$, all parameters are the same as in Figure \[Fig-4\]. A larger proportion of trajectories than in that figure do not change direction, illustrating the decrease of the proportion of surrealistic trajectories when $N$ increases.[]{data-label="Fig-9"}](Fig-9-pointeur.eps){width="6cm"}
Figure \[Fig-10\] shows another case where the initial value of $\hat{\Sigma}^{\prime}$ is $\hat{\Sigma}^{\prime} (0) =0.3$; this positive value favors one indication of the pointer and, therefore, one result of the measurement. In the interference region, the trajectories of the test particle tend to deviate so as to take a direction that agrees with this initial average value. The effect is even more pronounced in Fig. \[Fig-11\] with a still larger initial value $\hat{\Sigma}^{\prime} (0) =1$; now almost all trajectories of the test particle take a final direction that is determined by this initial average. Interestingly, we have a case where it is not the hidden variable associated with the measured particle that determines the result of the measurement, as often believed in the context of the dBB theory. What matters here are the initial values of the hidden variables of the measurement apparatus, which determine in advance which path will be taken by the test particle. This sort of predestination effect is also a generalization of the non-local steering of Bohmian trajectories observed with photons in Ref. [@Xiao-et-al-2017].
![Trajectories obtained with $N=10$. The initial value of $\hat{\Sigma}^{\prime} (0)$ is now $0.3$, but otherwise all parameters are the same as in Figure \[Fig-9\]. Because the average value of the positions of the pointer is positive, more trajectories of the test particle seem to originate from the upper than from the lower slit, and go downwards after crossing the interference region.[]{data-label="Fig-10"}](Fig-10.eps){width="6cm"}
![Trajectories obtained with $N=10$. The initial value of $\hat{\Sigma}^{\prime} (0)$ is now $0.3$, but otherwise all parameters are the same as in Figure \[Fig-9\]. Because the average value of the positions of the pointer is positive, more trajectories of the test particle seem to originate from the upper than from the lower slit, and go downwards after crossing the interference region.[]{data-label="Fig-10"}](Fig-10-pointeur.eps){width="6cm"}
![Same figure as Fig. \[Fig-10\], but with a still larger initial value $\hat{\Sigma}^{\prime} (0) = 1$. In this case, all trajectories of the test particle after the interference region follow a path that seems to come from the upper slit and goes downwards: the result of the measurement is determined by the initial values of the additional variables attached to the measurement apparatus (predestination effect).[]{data-label="Fig-11"}](Fig-11.eps){width="6cm"}
![Same figure as Fig. \[Fig-10\], but with a still larger initial value $\hat{\Sigma}^{\prime} (0) = 1$. In this case, all trajectories of the test particle after the interference region follow a path that seems to come from the upper slit and goes downwards: the result of the measurement is determined by the initial values of the additional variables attached to the measurement apparatus (predestination effect).[]{data-label="Fig-11"}](Fig-11-pointeur.eps){width="6cm"}
We have also performed computations with a larger number of pointer particles, $N=200$. Figure \[Fig-12\] shows the results obtained with input parameters identical to those of Figure \[Fig-11\], but for a pointer composed of $200$ particles. Most trajectories of the test particle are now almost straight lines.
![Trajectories obtained with the same input parameters as those of Fig \[Fig-10\], but 200 pointer particles instead of 10. Because the number of pointer particles has increased, the trajectories are now all close to straight lines. The quantum non-local effects have completely disappeared and no Bohmian trajectory can be called surrealistic.[]{data-label="Fig-12"}](Fig-12.eps){width="6cm"}
![Trajectories obtained with the same input parameters as those of Fig \[Fig-10\], but 200 pointer particles instead of 10. Because the number of pointer particles has increased, the trajectories are now all close to straight lines. The quantum non-local effects have completely disappeared and no Bohmian trajectory can be called surrealistic.[]{data-label="Fig-12"}](Fig-12-pointeur.eps){width="6cm"}
The conclusion of this study is that the larger the number of pointer particles, the weaker the interference effects of the test particle, and therefore also the smaller the proportion of apparently surrealistic trajectories. In the limit of very large values of $N$, they completely disappear, as we now show analytically.
Macroscopic pointer, relation to decoherence {#macroscopic-pointer}
============================================
We now give a brief analytic argument showing that, when the number of pointer particles tends to infinity, all trajectories of the test particle cross the interference region as straight lines, even if the pointer is slow. This is actually nothing but the extension of a brief argument given by Bell already in 1980 [@Bell-1980], assuming fast pointers; it is interesting to note that he had already discussed the essence of the phenomenon 12 years before the notion of surrealistic trajectories was introduced [@Englert-1992].
As we have seen, in order to determine the trajectory of the test particle in the interference region, the crucial element is the ratio between the two components of the wave function, evaluated at the Bohmian positions of all particles: if the Bohmian positions of the pointer are such that only one component remains active (the other takes negligible values), the Bohmian position of the test particle follows a straight line, and no surrealistic trajectory occurs. We therefore have to study the $Z_{n}$ dependence of the amplitudes associated with the two components $\Phi_{\pm}$ given by (\[art-1\]), which is contained in the last term of the first line of (\[art-100\]).
At time $t=0$, when the test particle crosses one of the slits, it is the Bohmian position of this particle that determines which of the two components $\Phi_{\pm}$ is effective; the other is inactive, what Bohm calls an empty wave. The Bohmian positions of the pointer particles have not yet changed, and they give the same values to the two components of the wave function. Nevertheless, all these positions have an initial velocity that depends on the component of the wave function: $+V$ for the component where the test particle goes through the upper slit, $-V$ in the other case. The pointer Bohmian positions actually move together with their wave packet in the effective wave, being insensitive to the empty wave that would induce an opposite motion.
Consider now a time $\delta t$ that is smaller than the time $t_{\text{cross}}$ at which the wave packets of the test particle reach the interference region. The two wave packets of the pointer are then at a mutual distance $\delta z=2V\delta t$, so that they may take different values at the Bohmian positions. Let us first consider just one particle of the pointer, labeled $n$. The amplitude of the effective wave function at the position $Z_{n}$ is proportional to:$$\left\vert \chi_{a}\left( Z_{n}\right) \right\vert \sim\text{e}^{-\left(
Z_{n}\right) ^{2}/c^{2}} \label{art-103}$$ (we shift the origin of the $Z_n$ coordinate to the center of this wave packet; $c$ is the width of the packet, as defined in § \[one-particle-pointer\]). The amplitude of the empty wave is proportional to:$$\left\vert \chi_{b}\left( Z_{n}\right) \right\vert \sim
\text{e}^{-\left( Z_{n}+\delta z\right) ^{2}/c^{2}} \label{art-104}$$ The ratio between these amplitudes is therefore:$$\left\vert \frac{\chi_{b}\left( Z_{n}\right) }{\chi_{a}\left(
Z_{n}\right) }\right\vert =\text{e}^{-2Z_{n}~\delta z/c^{2}}\text{e}^{-\left( \delta z/c\right) ^{2}} \label{art-105}$$
If one takes into account all particles in the pointer, the ratio $K$ of the empty-wave amplitude to the effective-wave amplitude becomes:$$\begin{aligned}
K & =\exp\left\{ -\sum_{n=1}^{N}\left[ \frac{2Z_{n}\delta z}{c^{2}}+\left( \frac{\delta z}{c}\right) ^{2}\right] \right\} \nonumber\\
& =\exp\left\{ -\frac{N}{c^{2}}\left[ 2\left\langle Z\right\rangle \delta
z+\left( \delta z\right) ^{2}\right] \right\} \label{art-106}\end{aligned}$$ where $\left\langle Z\right\rangle $ is the average of the Bohmian positions:$$\left\langle Z\right\rangle =\frac{1}{N}\sum_{n}Z_{n} \label{art-107}$$ The Bohmian positions take random values; $\left\langle Z\right\rangle $ fluctuates from one realization of the experiment to the next. Nevertheless, when $N$ is large, its typical magnitude is:$$\left\langle Z\right\rangle \simeq\frac{c}{\sqrt{N}} \label{art-108}$$ so that, if $N>c^{2}/\delta z^{2}$:$$K\simeq\exp\left\{ -N\frac{\left( \delta z\right) ^{2}}{c^{2}}\right\}
=\exp\left\{ -4N\frac{\left( V\delta t\right) ^{2}}{c^{2}}\right\}
\label{art-109}$$ A high value of $N$ can therefore reduce the values of the empty wave at the Bohmian positions of the pointer by a large factor: if the pointer contains $10^{20}$ particles, a factor $10^{20}$ enters the exponential!
More precisely, the empty wave takes negligible values as soon as:$$\delta t\geq\tau=\frac{c}{V\sqrt{N}} \label{art-110}$$ The time during which the wave packets of the $N$ pointer particles still overlap has significantly decreased: it is reduced by a factor $\sqrt{N}$, which can be $10^{10}$ or more. With a macroscopic pointer, in practice this time is always shorter than the time at which the test particle reaches the interference region. Then, even if the Bohmian position of a single pointer particle cannot by itself cancel the empty wave, the cumulative effect of all the Bohmian positions performs this task. The occurrence of surrealistic trajectories would require an extremely unlikely distribution of the positions, with average value $\left\langle Z\right\rangle \simeq-\delta z/2$; this becomes an impossible situation when $N$ tends to infinity.
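The orders of magnitude in (\[art-106\]) to (\[art-110\]) are easy to illustrate numerically. The short Python sketch below is only our own illustration (it is not the code used to produce the figures of this article); the values of $c$ and $V$, as well as the Gaussian width used to sample the $Z_{n}$, are arbitrary choices made for this purpose only.

```python
import numpy as np

rng = np.random.default_rng(0)
c, V = 1.0, 1.0          # arbitrary illustrative values of the packet width and pointer velocity

def empty_wave_ratio(N, dt):
    """Ratio K of the empty-wave to the effective-wave amplitude at the
    Bohmian positions of the pointer, Eq. (art-106), with dz = 2 V dt.
    The Z_n are sampled from a Gaussian of width c/2 (an assumption made
    only for this illustration)."""
    dz = 2.0 * V * dt
    Z = rng.normal(0.0, c / 2.0, size=N)
    return np.exp(-(N / c**2) * (2.0 * Z.mean() * dz + dz**2))

dt = 0.05                 # a fixed short time, dt << c/V
for N in (1, 10, 100, 1_000, 10_000):
    tau = c / (V * np.sqrt(N))                 # crossover time of Eq. (art-110)
    print(f"N = {N:6d}   tau = {tau:.4f}   K(dt) = {empty_wave_ratio(N, dt):.3e}")
```

At a fixed short time $\delta t$, the suppression of the empty wave becomes dramatic as soon as $N$ is large, while the crossover time $\tau$ of (\[art-110\]) shrinks as $1/\sqrt{N}$.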
The variables of the pointer play the role of an environment for the test particle, so that the disappearance of curved trajectories is reminiscent of a decoherence phenomenon. But, in the situations we have studied, decoherence has already fully taken place from the beginning. This is because, in the two components $\Phi_{\pm}$ of the total wave function, the two initial states of the test particle are quasi-orthogonal (because of their spatial separation); this is also the case for the states of the pointer particles (because they do not overlap in momentum space if $\Xi \gg 1$). Since we have chosen $\Xi=10$ for all figures (except for Fig. \[Fig-2\], where entanglement was canceled by setting $\Xi=0$), the off-diagonal elements of the reduced density matrix of the test particle have an initial value close to zero. What we have studied is therefore rather a post-decoherence effect, a change from a situation where the test particle is still influenced by two effective waves to another situation where only one wave plays a role. This can also be seen as a change of the conditional quantum potential acting on the test particle: a relaxation from a large value in the interference region to a practically zero value. The study of this relaxation effect requires, as we have seen, that all degrees of freedom of the environment, including their Bohmian positions (the pointer positions in our case), be taken into account. Related effects are discussed in Ref. [@Toros-2016], which shows how averaging over the Bohmian positions of the environment leads to a reduction of the effective wave functions.
The reasoning can be generalized to other symmetric (non-Gaussian) distributions by using a Taylor expansion of the logarithm of the distribution around its center ($1/c^{2}$ then becomes the second derivative of the distribution at the origin); higher moments of the $Z$ distribution may then appear, but the essence of the results remains similar. The evolution at long times can also be studied: in the Appendix, assuming Gaussian distributions, we show that the scaling of the characteristic evolution time as $1/\sqrt N$ remains valid at any time. Another generalization is to assume that several pointers detect the passage of the test particle, for instance one pointer per slit as in § \[two-pointers\]; this does not change the structure of the calculations, and the results remain basically the same.
Conclusion {#discussion}
==========
Our conclusion is that, strictly speaking, late measurements of quantum trajectories cannot really fool a detector [@Dewdney-et-al-1993], or at least not fool the physicist who makes careful observations of this detector: no problem or contradiction occurs if the coupled dynamics of the positions is properly understood. It is true that the dynamics of a microscopic pointer may be complex, so that the interpretation of the results of measurements requires a detailed analysis of how the test particle interacts with the measurement apparatus, and of how the pointer moves as a result of this interaction. In particular, since the motion of a pointer may change its direction, a simple extrapolation to the past may lead to incorrect results. Generally speaking, it is known that the mechanism of empty waves and conditional wave functions provides a dBB dynamics for the reduction of the wave function; the examples we have studied show that this dynamics may be rather rich.
If the pointer is macroscopic, the final result remains simple: it always behaves as a fast pointer, and neither curved Bohmian trajectories nor non-local effects take place. As we have seen, this simple behavior is not predicted if the pointer is treated as a single particle with a single Bohmian position. It remains of course perfectly legitimate to introduce a collective variable, for instance the position of the center of mass of the pointer, and to study its evolution. But, when a partial trace operation over all the variables of the pointer is required, it becomes indispensable to ascribe a Bohmian position to every particle of the pointer and to study its contribution. It is also necessary to perform statistics over the initial values of all Bohmian positions associated with the pointer; the average value of these positions and their dispersion play a significant role. The moral of the story is that, when a trace operation is necessary in quantum mechanics to evaluate the decoherence induced by the pointer on the test particle, in dBB theory all degrees of freedom of the pointer that are traced out must be taken into account, including every Bohmian position. We mention in passing that this is also important if one wishes to avoid apparent contradictions concerning the measurement of correlation functions within dBB theory [@Morchio; @Neumaier; @FL-CNVLMQ].
If the pointer is microscopic (or mesoscopic) and contains a relatively small number of particles, quantum non-local effects may indeed take place. The variety of possible phenomena is significantly richer than the simple bounce of trajectories on the symmetry plane generally discussed in the literature. Nevertheless, no contradiction appears between the whole trajectory of the test particle and the results of measurements (assumed to be contained in the successive positions of the pointer particles). As soon as the position of the test particle jumps from one wave packet to the other, the positions of the pointer particles do the same, so that their trajectories constantly reflect that of the test particle. In fact, all the information is contained in the pointer trajectories, ensuring the consistency of the dBB interpretation. An interesting feature is that, in some cases, the result of the measurement is not determined by the additional variable attached to the measured test particle, but by the initial values of the additional variables of the pointer. We then have an interesting predestination effect where the result is determined by the variables of the measurement apparatus rather than by those of the measured system.
Any impression of surrealism disappears if one understands that the velocity of the pointer particles provides information about the velocity of the test particle at that same time; this is what a detailed analysis of the coupled dynamics of the particle and the pointer shows. Long after the test particle has crossed the interference region, the motion of the pointer indicates in which beam the test particle propagates at this time, not which slit it went through in the past. Actually, one can even argue that the trajectories in question are more real than surreal, since their characteristics (including the changes of direction) should be experimentally accessible by observing the successive positions of the pointer particles with sufficient accuracy.
APPENDIX
========
The wave functions studied in this article have the general form:$$\Psi=R_{1}\text{e}^{iS_{1}}+R_{2}\text{e}^{iS_{2}} \label{52}$$ where $R_{1}$, $R_{2}$, $S_{1}$ and $S_{2}$ are real functions of the position variables of all particles; the square norms of $R_{1}$ and $R_{2}$ are both assumed to be $1/2$.
\(i) [**General expression of the velocity**]{}
The probability current associated with a particle of mass $m$ reads:$$\begin{aligned}
\mathbf{J} & =\frac{\hslash}{2im}\left\{ \left[ R_{1}\text{e}^{-iS_{1}}+R_{2}\text{e}^{-iS_{2}}\right] \left[ R_{1}\text{e}^{iS_{1}}\left(
i\bm{\nabla}S_{1}+\frac{\bm{\nabla}R_{1}}{R_{1}}\right) +R_{2}\text{e}^{iS_{2}}\left( i\bm{\nabla}S_{2}+\frac{\bm{\nabla}R_{2}}{R_{2}}\right)
\right] \right. \nonumber\\
& -\left. \left[ R_{1}\text{e}^{iS_{1}}+R_{2}\text{e}^{iS_{2}}\right]
\left[ R_{1}\text{e}^{-iS_{1}}\left( -i\bm{\nabla}S_{1}+\frac
{\bm{\nabla}R_{1}}{R_{1}}\right) +R_{2}\text{e}^{-iS_{2}}\left(
-i\bm{\nabla}S_{2}+\frac{\bm{\nabla}R_{2}}{R_{2}}\right) \right] \right\}
\label{53}\end{aligned}$$ where the gradients are taken with respect to the coordinates of this particular particle; for the pointer particles, $m$ is replaced by $M$. We then have:$$\begin{aligned}
\mathbf{J} & =\frac{\hslash}{m}\left\{ \left( R_{1}\right) ^{2}\bm{\bm{\nabla}}S_{1}+\left( R_{2}\right) ^{2}\bm{\bm{\nabla}}S_{2}\right\}
\nonumber\\
& +\frac{\hslash}{m}R_{1}R_{2}\left\{ \cos(S_{1}-S_{2})\left[
\bm{\nabla}S_{1}+\bm{\nabla}S_{2}\right] +\sin(S_{1}-S_{2})\left[
\frac{\bm{\nabla}R_{1}}{R_{1}}-\frac{\bm{\nabla}R_{2}}{R_{2}}\right]
\right\} \label{55}\end{aligned}$$ The velocity $\mathbf{V}$ of this particle is then given by:$$\begin{aligned}
\mathbf{V} & =\frac{\hslash}{m}\frac{\left( R_{1}\right) ^{2}}{\rho}\bm{\nabla}
S_{1}+\frac{\hslash}{m}\frac{\left( R_{2}\right) ^{2}}{\rho}\bm{\nabla}
S_{2}\nonumber\\
& +\frac{\hslash}{m}\frac{R_{1}R_{2}}{\rho}\left\{ \cos(S_{1}-S_{2})\left[
\bm{\nabla} S_{1}+\bm{\nabla} S_{2}\right] +\sin(S_{1}-S_{2})\left[ \frac{\bm{\nabla}
R_{1}}{R_{1}}-\frac{\bm{\nabla} R_{2}}{R_{2}}\right] \right\} \label{56}\end{aligned}$$ where $\rho$ is the local density:$$\rho= \left( R_{1}\right) ^{2}+\left( R_{2}\right) ^{2}+2R_{1}R_{2}\cos(S_{1}-S_{2}) \label{57}$$ The first line of (\[56\]) gives the average of the velocities associated with the two waves, with weights given by their intensities. Ref. [@Vaidman-2012] gives a discussion of the approximation where only this term is included. But other contributions also arise; they contain not only the gradients of the phases of the two waves, but also the gradients of their amplitudes. If we set: $$S_{1,2}=\bar{S}\pm\frac{\delta S}{2} \label{55-3}$$ we can rewrite the velocity in the more compact form:$$\mathbf{V}=\frac{\hslash}{m}\left\{ \bm{\nabla} \bar{S}+\frac{\left( R_{1}\right)
^{2}-\left( R_{2}\right) ^{2}}{2\rho}\bm{\nabla}\delta S+\frac{R_{1}R_{2}}{\rho
}\sin(\delta S)\left[ \frac{\bm{\nabla} R_{1}}{R_{1}}-\frac{\bm{\nabla} R_{2}}{R_{2}}\right] \right\} \label{56-bis}$$
The only parts of (\[56\]) or (\[56-bis\]) that are particle-dependent are the gradients, which are taken with respect to the coordinates of the particle under study, and the value of the mass, $m$ or $M$. Otherwise, $R_1$, $R_2$, $\rho$ and $\delta S$ are functions of the Bohmian positions $X$, $Y$ and $Z_n$ defined in the configuration space, which have the same values for all particles. We notice that, in these coefficients, the functions $R_{1}$ and $R_{2}$ appear only through their ratio $\Omega= R_{1} / R_{2}$.
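As a simple consistency check of (\[56\]) and (\[56-bis\]), the short Python sketch below (our own illustration, not the code used to produce the figures of this article) evaluates the velocity of a single 1D particle in a superposition of two Gaussian wave packets with opposite momenta, once from the compact form (\[56-bis\]) and once from the usual guidance law $\mathbf{V}=(\hslash/m)\,\mathrm{Im}(\partial_{x}\Psi/\Psi)$; the two evaluations coincide, as they should. The parameter values are arbitrary and we set $\hslash=m=1$.

```python
import numpy as np

# 1D toy model: superposition of two Gaussian packets R_j e^{i S_j} with
# opposite momenta; hbar = m = 1 and all parameter values are arbitrary.
hbar = m = 1.0
sigma, k0, xc1, xc2 = 1.0, 3.0, -2.0, 2.0

def R(x, xc):                 # real amplitude of one packet
    return np.exp(-(x - xc) ** 2 / (2 * sigma ** 2))

def dlogR(x, xc):             # logarithmic derivative R'/R
    return -(x - xc) / sigma ** 2

x = np.linspace(-4.0, 4.0, 9)
R1, R2 = R(x, xc1), R(x, xc2)
S1, S2 = k0 * x, -k0 * x
dS1, dS2 = k0, -k0            # gradients of the phases

# Velocity from Eq. (56-bis)
deltaS = S1 - S2
rho = R1**2 + R2**2 + 2 * R1 * R2 * np.cos(deltaS)
V_compact = (hbar / m) * ((dS1 + dS2) / 2
                          + (R1**2 - R2**2) / (2 * rho) * (dS1 - dS2)
                          + (R1 * R2 / rho) * np.sin(deltaS)
                            * (dlogR(x, xc1) - dlogR(x, xc2)))

# Velocity from the usual guidance law V = (hbar/m) Im(d_x Psi / Psi)
Psi = R1 * np.exp(1j * S1) + R2 * np.exp(1j * S2)
dPsi = ((dlogR(x, xc1) + 1j * dS1) * R1 * np.exp(1j * S1)
        + (dlogR(x, xc2) + 1j * dS2) * R2 * np.exp(1j * S2))
V_guidance = (hbar / m) * np.imag(dPsi / Psi)

print(np.allclose(V_compact, V_guidance))   # True
```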
\(ii) [**Gaussian wave functions, trajectory of the center of mass**]{}
For the calculation that follows, it is convenient to rewrite the wave function (\[art-1-2\]) in the form: $$\begin{aligned}
\varphi_{\pm}^{x}(x;t) & \sim\exp\left\{ -\frac{a^{2}}{a^{4}+\frac
{4\hslash^{2}t^{2}}{m^{2}}}\left[ x^{2}+\left( d-v_{x}t\right) ^{2}\mp2x\left( d-v_{x}t\right) \right] \right\} \nonumber\\
& \times\exp\left\{ \frac{i}{a^{4}+\frac{4\hslash^{2}t^{2}}{m^{2}}}\left[
\frac{2\hslash t}{m}\left[ x^{2}+\left( d-v_{x}t\right) ^{2}\right]
\mp\left[ \frac{mv_{x}x}{\hslash}a^{4}+\frac{4\hslash dxt}{m}\right]
\right] \right\} \label{nouveau-phi-x}\end{aligned}$$ We assume that there is only one pointer, and therefore that the parameter $V$ (or $\Xi$) is the same for all pointer particles. The wave function of each pointer particle has a very similar expression, obtained by substituting $z$ for $x$, $M$ for $m$, $c$ for $a$, $V$ for $-v_{x}$, and finally setting $d=0$ in (\[nouveau-phi-x\]).
The Bohmian velocities are obtained from the values of the wave functions at the Bohmian positions $x=X$, $y=Y$ and $z_n=Z_n$. We then obtain:$$\Omega=\frac{R_{1}}{R_{2}}=\exp\left\{ \frac{4\left( d-v_{x}t\right)
X}{a^{2}+4\frac{\hslash^{2}t^{2}}{m^{2}a^{2}}}\right\} \exp\left\{
\sum_{n=1}^{N}\frac{4Vt~Z_{n}}{c^{2}+4\frac{\hslash^{2}t^{2}}{M^{2}c^{2}}}\right\} \label{58}$$ as well as:$$\delta S =S_{1}-S_{2}= - 2 \left[ \frac{mv_{x}X}{\hslash}+\frac{4\hslash dXt}{ma^{4}} \right]
\frac{a^4}{a^{4}+4\frac{\hslash^{2}t^{2}}{m^{2}}} ~ X
+\sum_{n=1}^{N} \frac{2MV}{\hbar} \frac{c^{4}}{c^{4}+4\frac{\hslash^{2}t^{2}}{M^{2}}} ~ Z_{n}
\label{59}$$ A remarkable property is that, with the Gaussian wave packets we consider, both functions $\Omega$ and $S_{1}-S_{2}$ depend on the variables $Z_{n}$ only through their sum $\Sigma$:$$\Sigma=\sum_{n=1}^{N}Z_{n} \label{59-2}$$ which is nothing but the position of the center of mass of the pointer multiplied by $N$. Reference [@Oriols-Benseny] gives a general discussion of the motion of the center of mass in dBB theory.
For the test particle, the gradients appearing in the velocity are obtained by taking the derivative of the phase and modulus of (\[nouveau-phi-x\]):$$\bm{\nabla}_{x} S_{1,2}=\frac{1}{a^{4}+\frac{4\hslash^{2}t^{2}}{m^{2}}}\left[ 4 \frac{\hslash tX}{m} \mp\left(
\frac{mv_{x}}{\hslash}+\frac{4\hslash dt}{m}\right) \right] \label{60}$$ and:$$\frac{\bm{\nabla}_{x} R_{1,2}}{R_{1,2}}= - 2 ~\frac{X \mp ( d - v_{x}t)}{a^{2}+4\frac{\hslash^{2}t^{2}}{m^{2} a^{2}}} \label{61}$$ The velocity of the test particle is therefore a function of $X$, of the time, and of $\Sigma$ only through the $\Sigma$ dependence of $\delta S $ and $\Omega$.
For every pointer particle, the gradients are:$$\bm{\nabla}_{z_{n}}S_{1,2}=
\frac{1}{c^{4}+4\frac{\hslash^{2}t^{2}}{M^{2}}}
\left[ 4 \frac{\hslash tZ}{M} \mp
\frac{MV}{\hslash} \right]
\label{62}$$ and: $$\frac{\bm{\nabla}_{z_{n}}R_{1,2}}{R_{1,2}}= -2 ~\frac{\left[ Z_{n}\mp
Vt\right] }{c^{2}+4\frac{\hslash^{2}t^{2}}{M^{2} c^{2}}} \label{63}$$ To obtain the time evolution of $\Sigma$, we have to add the velocities of all positions $Z_{n}$, that is the values of these gradients for all values of $n$. Relation (\[56-bis\]) then leads to :$$\begin{aligned}
\frac{\text{d}}{\text{d}t}\Sigma =\mathbf{V}_{\Sigma}= & \frac{c^{4}}{c^{4}+4\frac{\hslash^{2}t^{2}}{M^{2}}} \frac{\hslash}{M}
\left[ \frac{4\hslash t}{M c^{4}} ~ \Sigma + \frac{\Omega ^2 -1}{1+ \Omega ^2 + 2 \Omega \cos ( \delta S )} ~ \frac{M N V }{\hslash} \right.
\nonumber \\
& \hspace{4cm} \left. + \frac{\Omega }{1+ \Omega ^2 + 2 \Omega \cos \delta S } ~ \sin
(\delta S) ~4 \frac{NVt}{c^{2}} \right]
\label{64}\end{aligned}$$ When the sum of the $Z_{n}$ is replaced by $\Sigma$ in (\[58\]) and (\[59\]), all the individual $Z_{n}$ have disappeared from the equations; a closed system of equations is obtained for the dynamics of $X$ and $\Sigma$.
\(iii) [**Change of variables, effective velocity**]{}
In (\[64\]), the parameters $N$ and $V$ appear only through their product $NV$, but this is not the case in (\[58\]) and (\[59\]). We therefore change variables and set: $$\Sigma=\hat{\Sigma}\sqrt{N} \label{65}$$ to obtain:$$\Omega=\frac{R_{1}}{R_{2}}=\exp\left\{ \frac{4\left( d-v_{x}t\right)X}
{a^{2}+4\frac{\hslash^{2}t^{2}}{m^{2}a^{2}}}\right\} \exp\left\{
\frac{4Vt\sqrt{N} ~\hat{\Sigma}}{c^{2}+4\frac{\hslash^{2}t^{2}}{M^{2}c^{2}}}\right\} \label{65-bis}$$ as well as:$$\delta S = - 2
\frac{a^4}{a^{4}+4\frac{\hslash^{2}t^{2}}{m^{2}}} \left[ \frac{mv_{x} X}{\hslash}+\frac{4\hslash dXt}{ma^{4}} \right] X
+ \frac{2MV \sqrt N}{\hbar} \frac{c^{4}}{c^{4}+4\frac{\hslash^{2}t^{2}}{M^{2}}} ~ \hat{\Sigma}
\label{66}$$ Similarly, (\[64\]) becomes:$$\begin{aligned}
\frac{\text{d}}{\text{d}t}\hat{\Sigma} = & \frac{c^{4}}{c^{4}+4\frac{\hslash^{2}t^{2}}{M^{2}}} \frac{\hslash}{M}
\left[ \frac{4\hslash t}{M c^{4}} ~ \hat{\Sigma} + \frac{\Omega ^2 -1}{1+ \Omega ^2 + 2 \Omega \cos ( \delta S )} ~ \frac{M V \sqrt N }{\hslash} \right.
\nonumber \\
& \hspace{4cm} \left. + \frac{\Omega }{1+ \Omega ^2 + 2 \Omega \cos \delta S } ~ \sin
(\delta S) ~4 \frac{V \sqrt N ~t}{c^{2}} \right]
\label{67}\end{aligned}$$ We now obtain a closed system of evolution equations for $X$ and $\hat{\Sigma}$ where the parameters $N$ and $V$ appear only through the product $V\sqrt{N}$. We therefore obtain the same evolution as for a single pointer particle of position $\Sigma/\sqrt{N}$, except that the velocity $V$ is multiplied by $\sqrt{N}$. If $N$ is of the order of $10^{20}$, the pointer always behaves as a fast pointer, so that no surrealistic trajectory can occur.
[99]{}
L. de Broglie, La mécanique ondulatoire et la structure atomique de la matière et du rayonnement, *J. Physique et le Radium*, série VI, tome VIII, 225–241 (1927); Interpretation of quantum mechanics by the double solution theory, *Ann. Fond. Louis de Broglie* **12**, Nr 4 (1987); *Tentative d’Interprétation Causale et Non-linéaire de la Mécanique Ondulatoire*, Gauthier-Villars, Paris (1956).
D. Bohm, A suggested interpretation of the quantum theory in terms of hidden variables, *Phys. Rev.* **85**, 166–179 and 180–193 (1952).
P.R. Holland, *The Quantum Theory of Motion*, Cambridge University Press (1993).
X. Oriols and J. Mompart, Overview of Bohmian mechanics, Chapter 1 of *Applied Bohmian mechanics: from nanoscale systems to cosmology*, 15-147, Editorial Pan Stanford Publishing Pte. Ltd. (2012) ; arXiv:1206.1084v2 \[quant-ph\].
J. Bricmont, *Making sense of quantum mechanics*, Springer (2016).
R.B. Griffiths, Bohmian mechanics and consistent histories, *Phys. Lett.* **A 261**, 227-234 (1999).
B.G. Englert, M.O. Scully, G. Süssmann, and H. Walther, Surrealistic Bohm trajectories, *Z. Naturforschung* **47a**, 1175–1186 (1992).
D. Dürr, W. Fusseder, S. Goldstein and N. Zanghi, Comments on surrealistic Bohm trajectories, *Z. Naturforschung* **48a**, 1261-62 (1993).
C. Dewdney, L. Hardy, and E.J. Squires, How late measurements of quantum trajectories can fool a detector, *Phys. Lett.* **A 184**, 6–11 (1993).
Y. Aharonov and L. Vaidman, About position measurements which do not show the Bohmian particle position, pp. 141-154 in J.T. Cushing et al (eds.), *Bohmian mechanics and quantum theory: an appraisal*, Kluwer (1996).
M.O. Scully, Do Bohm trajectories always provide a trustworthy physical picture of particle motion?, *Phys. Scripta* **T 76**, 41-46 (1998).
Y. Aharonov, B-G. Englert and M.O. Scully, Protective measurements and Bohm trajectories, *Phys. Lett.* **A 263**, 137-146 (1999).
B.J. Hiley, Welcher Weg experiments from the Bohm perspective, pp. 154-160 of *Quantum theory: reconsiderations of foundations; Växjö conference*, AIP conf. Proceedings **810** (2006).
H.M. Wiseman, Grounding Bohmian mechanics in weak values and bayesianism, *New J. Phys.*, **9**, 165 (2007).
D. Dürr, S. Goldstein and N. Zanghi, On the weak measurements of velocity in Bohmian mechanics, *J. Stat. Phys.* **134**, 1023-1032 (2009).
W.P. Schleich, M. Freyberger and M.S. Zubairy, Reconstruction of Bohm trajectories and wave functions from interferometric measurements, *Physical Review* **A 87**, 014102 (2013).
N. Gisin, Why Bohmian mechanics?, arXiv:1509.00767 \[quant-ph\] (2015).
S. Kocsis, B. Braverman, S. Ravets, M.J. Stevens, R.P. Mirin, L.K. Shalm and A.M Steinberg, Observing the average trajectories of single photons in a two slit interferometer, *Science*, **332**, 1170-1173 (2011).
B. Braverman and C. Simon, Proposal to observe the nonlocality of Bohmian trajectories with entangled photons, *Phys. Rev. Lett.* **110**, 060406 (2013).
D.H. Mahler, L. Rozema, K. Fisher, L. Vermeyden, K.J. Resch, H.W. Wiseman and A. Steinberg, Experimental nonlocal and surreal Bohmian trajectories, *Sci. Adv.* 2016;2:e1501466 (2016).
F. Laloë, *Comprenons-nous vraiment la mécanique quantique?*, 2nd edition, EDP Sciences (2018); see in particular Appendix I.
G. Naaman-Marom, N. Erez and L. Vaidman, Position measurements in the de Broglie-Bohm interpretation of quantum mechanics, *Ann. of Phys.* **327**, 2522-2542 (2012).
Wolfram Research, Inc., Mathematica, Version 11.1, Champaign, IL (2017).
Y. Xiao, Y. Kedem, J-S. Xu, C-F. Li and G-C. Guo, Experimental nonlocal steering of Bohmian trajectories, *Optics Express* **25**, 14643-14472 (2017).
M. Toros, S. Donadi and A. Bassi, Bohmian mechanics, collapse models and the emergence of classicality, *J. Physics A* **49**, 355302 (2016).
J.S. Bell, de Broglie-Bohm, delayed-choice double-slit experiment, and density matrix, Quantum Chemistry Symposium, *International Journal of Quantum Chemistry*, **14**, 155-159 (1980).
M. Correggi and G. Morchio, Quantum mechanics and stochastic mechanics for compatible observables at different times, *Ann. Physics* **296**, 371–389 (2002).
A. Neumaier, Bohmian mechanics contradicts quantum mechanics, arXiv:quant-ph/0001011 (2000).
X. Oriols and A. Benseny, Conditions for the classicality of the center of mass of many-particle quantum states, *New J. Phys.* **19**, 063031 (2017).
[^1]: [email protected]
[^2]: [email protected]
[^3]: We use the word jump as several other authors have done, because it is convenient. This does not mean that there is a real jump in space, since the Bohmian trajectory remains perfectly continuous; one could also say that the Bohmian positions change the wave on which they surf.
|
---
abstract: '[ We give an explanation for the Pieri coefficients for the stable and dual stable Grothendieck polynomials; their non-leading terms are obtained by taking an alternating sum of meets (or joins) of their leading terms. ]{}'
author:
- Motoki Takigiku
title: '[On the Pieri rules of stable and dual stable Grothendieck polynomials]{}'
---
Introduction
============
The stable Grothendieck polynomials $G_{\lambda}$ and the dual stable Grothendieck polynomials $g_{\lambda}$ are certain families of inhomogeneous symmetric functions parametrized by integer partitions ${\lambda}$. They are certain $K$-theoretic deformations of the Schur functions and are dual to each other via the Hall inner product.
Historically the stable Grothendieck polynomials (parametrized by permutations) were introduced by Fomin and Kirillov [@MR1394950] as a stable limit of the Grothendieck polynomials of Lascoux–Schützenberger [@MR686357]. In [@MR1946917] Buch gave a combinatorial formula for the stable Grothendieck polynomials $G_{\lambda}$ for partitions using so-called set-valued tableaux, and showed that their span $\bigoplus_{{\lambda}\in{\mathcal{P}}}{\mathbb{Z}}G_{\lambda}$ is a bialgebra and that a certain quotient ring of it is isomorphic to the $K$-theory of the Grassmannian $\mathrm{Gr}=\mathrm{Gr}(k,\mathbb{C}^{n})$.
The dual stable Grothendieck polynomials $g_{\lambda}$ were introduced by Lam and Pylyavskyy [@MR2377012] as generating functions of reverse plane partitions, and shown to be the dual basis for $G_{\lambda}$ via the Hall inner product. They also showed there that $g_{\lambda}$ represent the $K$-homology classes of ideal sheaves of the boundaries of Schubert varieties in the Grassmannians. The Pieri rule for $G_{\lambda}$ was given in [@MR1763950], and that for $g_{\lambda}$ was given in [@MR1946917] as a formula for coproduct structure constants of $G_{\lambda}$. Both formulas involve certain binomial coefficients, and we show in this paper that these coefficients are the values of the Möbius functions of certain posets of horizontal strips (Lemma \[theo:mobius\]), and hence that the Pieri formulas can be written as alternating sums of meets/joins of the leading terms (Propositions \[theo:Pieris\] and \[theo:G:Pieri\]). We also explain in Section \[sect:Ggsums\] that the linear map $g_{\lambda}\mapsto\sum_{\mu\subset{\lambda}}g_\mu$ ($=:{\widetilde{g}_{{\lambda}}}$) is a ring automorphism and the linear map $G_{\lambda}\mapsto\sum_{\mu\supset{\lambda}}G_\mu$ ($=:{\widetilde{G}_{{\lambda}}}$) is a multiplication map. With these bases, the Pieri rules are rewritten as certain multiplicity-free sums ((\[eq:gsum\]) and (\[eq:Gs:Pieri:G\]) in Propositions \[theo:Pieris\] and \[theo:G:Pieri\]).
Acknowledgment {#acknowledgment .unnumbered}
--------------
[ The author would like to thank Takeshi Ikeda for communicating to him the idea of considering the class of the structure sheaves of Schubert varieties in the $K$-homology of the affine Grassmannian when the author was studying ${g^{(k)}_{{\lambda}}}$, which is where the idea of taking the sum $\sum_{\mu\subset{\lambda}}g_{\mu}$ originally came from. The author is also grateful to Itaru Terada for many valuable discussions and comments. This work was supported by the Program for Leading Graduate Schools, MEXT, Japan. ]{}
Stable and dual stable Grothendieck polynomials {#sect:Prel::gla}
================================================
For basic definitions for symmetric functions, see for instance [@MR1354144 Chapter I].
Let ${\mathcal{P}}$ be the set of integer partitions. For partitions ${\lambda},\mu\in{\mathcal{P}}$, the inclusion ${\lambda}\subset\mu$ means ${\lambda}_i\le\mu_i$ for all $i$, and ${\lambda}\cap\mu$ and ${\lambda}\cup\mu$ ($\in{\mathcal{P}}$) are given by $({\lambda}\cap\mu)_i=\min({\lambda}_i,\mu_i)$ and $({\lambda}\cup\mu)_i=\max({\lambda}_i,\mu_i)$ for all $i$. In other words, $\cap$ and $\cup$ are the meet and join of the poset $({\mathcal{P}},\subset)$.
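For later use, the small Python sketch below (our own illustration; the encoding of a partition as a weakly decreasing tuple of positive integers and the function names are ad hoc) implements the inclusion, meet and join just defined.

```python
def contains(mu, la):
    """True if la is contained in mu, i.e. la_i <= mu_i for all i."""
    return len(la) <= len(mu) and all(l <= m for l, m in zip(la, mu))

def meet(la, mu):
    """la cap mu: componentwise minimum (trailing zeros dropped)."""
    return tuple(p for p in (min(l, m) for l, m in zip(la, mu)) if p > 0)

def join(la, mu):
    """la cup mu: componentwise maximum."""
    if len(la) < len(mu):
        la, mu = mu, la
    mu = mu + (0,) * (len(la) - len(mu))
    return tuple(max(l, m) for l, m in zip(la, mu))

print(meet((3, 2), (2, 2, 1)), join((3, 2), (2, 2, 1)))   # (2, 2) (3, 2, 1)
```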
Let ${\Lambda}$ be the ring of symmetric functions, namely consisting of all symmetric formal power series in variable $x=(x_1,x_2,\dots)$ with bounded degree. Let ${\widehat}{\Lambda}$ be its completion, consisting of all symmetric formal power series (with unbounded degree).
In [@MR1946917 Theorem 3.1] Buch gave a combinatorial description of the [*stable Grothendieck polynomial*]{} $G_{\lambda}$ as a (signed) generating function of so-called [*set-valued tableaux*]{}. We do not review the details here and just recall some of its properties: $G_{\lambda}\in{\widehat}{\Lambda}$ (although $G_{\lambda}\notin{\Lambda}$), $G_{\lambda}$ is an infinite linear combination of the Schur functions $\{s_\mu\}_{\mu\in{\mathcal{P}}}$ and its lowest degree component is $s_{\lambda}$ (hence ${\widehat}{\Lambda}=\prod_{{\lambda}\in{\mathcal{P}}}{\mathbb{Z}}G_{\lambda}$). Moreover the span $\bigoplus_{\lambda}{\mathbb{Z}}G_{\lambda}$ ($\subset{\widehat}{\Lambda}$) is a bialgebra; in particular the expansions of the product $G_\mu G_\nu = \sum_{{\lambda}} c^{{\lambda}}_{\mu\nu} G_{\lambda}$ and of the coproduct $\Delta(G_{\lambda}) = \sum_{\mu,\nu} d^{{\lambda}}_{\mu\nu} G_\mu\otimes G_\nu$ are finite.
The [*dual stable Grothendieck polynomial*]{} $g_{{\lambda}}$ (for ${\lambda}\in{\mathcal{P}}$) is defined in [@MR2377012] as the generating function of so-called *reverse plane partitions* of shape ${\lambda}$. It is also shown there that $g_{{\lambda}}\in{\Lambda}$, that the highest degree component of $g_{{\lambda}}$ is $s_{{\lambda}}$, and thus that $\{g_{{\lambda}}\}_{{\lambda}\in{\mathcal{P}}}$ forms a ${\mathbb{Z}}$-basis of ${\Lambda}$. Moreover $g_{\lambda}$ is dual to $G_{\lambda}$: we have $(G_{\lambda},g_\mu)=\delta_{{\lambda}\mu}$ where $(\,,)\colon{\widehat}{\Lambda}\times{\Lambda}{\longrightarrow}{\mathbb{Z}}$ is the Hall inner product. Hence the product (resp. coproduct) structure constants for $G_{\lambda}$ coincide with the coproduct (resp. product) structure constants for $g_{\lambda}$: we have $g_\mu g_\nu = \sum_{{\lambda}} d^{{\lambda}}_{\mu\nu} g_{\lambda}$ and $\Delta(g_{\lambda}) = \sum_{\mu,\nu} c^{{\lambda}}_{\mu\nu} g_\mu\otimes g_\nu$.
Pieri rules
-----------
The (row) Pieri formula for $G_{\lambda}$ was given by Lenart [@MR1763950 Theorem 3.2]: for any partition ${\lambda}\in{\mathcal{P}}$ and integer $a\ge 0$, $$\label{eq:G:Pieri}
G_{(a)} G_{\lambda}=
\sum_{\mu/{\lambda}\text{: horizontal strip}}
(-1)^{|\mu/{\lambda}|-a}
\binom{r(\mu/{\lambda})-1}{|\mu/{\lambda}|-a}
G_\mu,$$ where $r(\mu/{\lambda})$ denotes the number of rows in the skew shape $\mu/{\lambda}$. Subsequently, the (row) Pieri formula for $g_{\lambda}$ was given in [@MR1946917 Corollary 7.1] (as a formula for $d^{\mu}_{{\lambda},(a)}$, the coproduct structure constants for $G_{\lambda}$): $$g_{(a)} g_{\lambda}=
\sum_{\mu/{\lambda}\text{$:$ horizontal strip}}
(-1)^{a-|\mu/{\lambda}|}
\binom{r({\lambda}/\bar\mu)}{a-|\mu/{\lambda}|}
g_\mu,
\label{eq:g_Pieri_hs}$$ where $\bar\mu=(\mu_2,\mu_3,\dots)$.
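As an illustration of these two formulas, the following Python sketch (our own; the helper names are ad hoc and not taken from the references) enumerates the horizontal strips over ${\lambda}$ and evaluates the coefficients of (\[eq:G:Pieri\]) and (\[eq:g\_Pieri\_hs\]) for $a\ge 1$. For ${\lambda}=(2,1)$ and $a=1$ it gives $G_{(1)} G_{(2,1)} = G_{31}+G_{22}+G_{211}-G_{32}-G_{311}-G_{221}+G_{321}$, in which the non-leading terms are exactly the joins of the leading terms $(3,1)$, $(2,2)$, $(2,1,1)$; this is the pattern explained in Section \[sect:gs\_Pieri\].

```python
from math import comb
from itertools import product

def horizontal_strips(la, max_add):
    """Partitions mu containing la such that mu/la is a horizontal strip of size <= max_add."""
    la = tuple(la) + (0,)                     # allow one new row at the bottom
    ranges = [range(max_add + 1)] + [range(la[i - 1] - la[i] + 1) for i in range(1, len(la))]
    for add in product(*ranges):
        if sum(add) <= max_add:
            yield tuple(p for p in (x + b for x, b in zip(la, add)) if p)

def G_pieri(la, a):
    """Coefficients c^mu_{la,(a)} of G_(a) G_la in Eq. (eq:G:Pieri), for a >= 1.
    Strips of size up to a + (number of rows of la) suffice, since the binomial
    vanishes once |mu/la| - a exceeds r(mu/la) - 1."""
    la = tuple(la)
    out = {}
    for mu in horizontal_strips(la, a + len(la) + 1):
        k = sum(mu) - sum(la)
        if k < a:
            continue
        r = sum(1 for i in range(len(mu)) if mu[i] > (la[i] if i < len(la) else 0))
        coef = (-1) ** (k - a) * comb(r - 1, k - a)
        if coef:
            out[mu] = coef
    return out

def g_pieri(la, a):
    """Coefficients d^mu_{la,(a)} of g_(a) g_la in Eq. (eq:g_Pieri_hs), for a >= 1."""
    la = tuple(la)
    out = {}
    for mu in horizontal_strips(la, a):
        k = sum(mu) - sum(la)
        mubar = mu[1:]
        r = sum(1 for i in range(len(la)) if la[i] > (mubar[i] if i < len(mubar) else 0))
        coef = (-1) ** (a - k) * comb(r, a - k)
        if coef:
            out[mu] = coef
    return out

print(G_pieri((2, 1), 1))
print(g_pieri((2, 1), 2))
```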
Their sums {#sect:Ggsums}
----------
For ${\lambda}\in{\mathcal{P}}$ we let $${\widetilde{g}_{{\lambda}}} = \sum_{\mu\subset{\lambda}} g_\mu \ (\in{\Lambda}),
\qquad
{\widetilde{G}_{{\lambda}}} = \sum_{\mu\supset{\lambda}} G_\mu \ (\in{\widehat}{\Lambda}).$$ It is known (see [@MR1946917 Section 8]) that $(1-G_1)^{-1} = \sum_{{\lambda}\in{\mathcal{P}}}G_{\lambda}$ and $$\label{eq:H(1):G}
(1-G_1)^{-1} G_{\lambda}= \sum_{\mu\supset{\lambda}} G_\mu \ \big(= {\widetilde{G}_{{\lambda}}}\big).$$ It is also easy to see that $1-G_1 = \sum_{i\ge 0}(-1)^{i} e_i$ and hence $(1-G_1)^{-1}=\sum_{i\ge 0}h_i$ ($=:H(1)$), where $e_i$ and $h_i$ are the elementary and complete symmetric functions.
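The relation $(1-G_1)^{-1}=H(1)$ rests on the classical identity $\big(\sum_{i\ge 0}(-1)^{i}e_i\big)\big(\sum_{j\ge 0}h_j\big)=1$, which can be checked degree by degree; the following small sympy sketch (our own illustration) performs this check in finitely many variables.

```python
import sympy as sp
from itertools import combinations, combinations_with_replacement

x = sp.symbols('x1:5')                 # four variables suffice for this check

def e(k):                              # elementary symmetric polynomial e_k
    return sum(sp.Mul(*c) for c in combinations(x, k)) if k <= len(x) else sp.Integer(0)

def h(k):                              # complete homogeneous symmetric polynomial h_k
    return sum(sp.Mul(*c) for c in combinations_with_replacement(x, k))

# (sum_i (-1)^i e_i)(sum_j h_j) = 1, i.e. sum_{i+j=n} (-1)^i e_i h_j = 0 for n >= 1
for n in range(1, 7):
    assert sp.expand(sum((-1) ** i * e(i) * h(n - i) for i in range(n + 1))) == 0
print("checked through degree 6 in 4 variables")
```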
Recall the notation $F^\perp(f) = \sum (F,f_1) f_2$ for $F\in{\widehat}{\Lambda}$, $f\in{\Lambda}$ and $\Delta(f)=\sum f_1\otimes f_2$ in Sweedler notation, and that the multiplication map by $F$ is the dual map of $F^\perp$. From this and the fact that $G_{\lambda}$ and $g_{\lambda}$ are dual, we see that $H(1)^\perp(g_{\lambda})={\widetilde{g}_{{\lambda}}}$. Besides, it is known (see [@MR1354144 Chapter 1.5, Example 29]) that $H(1)^\perp(f(x_1,x_2,\cdots)) = f(1,x_1,x_2,\cdots)$ for any $f\in{\Lambda}$, and hence $H(1)^\perp$ is a ring morphism. Since $F^\perp G^\perp=(GF)^\perp$ in general, the invertibility of $H(1)$ implies that of $H(1)^\perp$. Hence we have
\[theo:H(1):perp\] Let $H(1) = \sum_{i\ge 0} h_i$. The map $H(1)^\perp\colon{\Lambda}{\longrightarrow}{\Lambda}$ is a ring automorphism and $$\begin{aligned}
{\widetilde{g}_{{\lambda}}}(x) &= H(1)^\perp (g_{\lambda}(x)) = g_{\lambda}(1,x).
\label{eq:H(1):g}
\end{aligned}$$ where we write $f(x)=f(x_1,x_2,\cdots)$ and $f(1,x)=f(1,x_1,x_2,\cdots)$.
Note that we can directly show ${\widetilde{g}_{{\lambda}}}(x_1,x_2,\cdots)=g_{\lambda}(1,x_1,x_2,\cdots)$ from the fact that $g_{\lambda}$ is a generating function of reverse plane partitions; see [@Takigiku_dualstable2] for more details. As seen in Section \[sect:geom\] below, ${\widetilde{g}_{{\lambda}}}$ correspond to the classes in $K$-homology of the structure sheaves of Schubert varieties in the Grassmannian.
$K$-(co)homology of Grassmannians {#sect:geom}
---------------------------------
We recall geometric interpretations of $G_{\lambda}$ and $g_{\lambda}$. Let ${\mathrm{Gr}}(k,n)$ be the Grassmannian of $k$-dimensional subspaces of $\mathbb{C}^n$, $R=(n-k)^k$ the rectangle of shape $(n-k)\times k$, and ${{\mathcal}{O}}_{\lambda}$ (for ${\lambda}\subset R$) the structure sheaves of Schubert varieties of ${\mathrm{Gr}}(k,n)$. The $K$-theory ${K^*({\mathrm{Gr}}(k,n))}$, the Grothendieck group of algebraic vector bundles on ${\mathrm{Gr}}(k,n)$, has a basis $\{[{{\mathcal}{O}}_{\lambda}]\}_{{\lambda}\subset R}$, and the surjection $\bigoplus_{{\lambda}\in{\mathcal{P}}} {\mathbb{Z}}G_{\lambda}{\longrightarrow}{K^*({\mathrm{Gr}}(k,n))}= \bigoplus_{{\lambda}\subset R} {\mathbb{Z}}[{{\mathcal}{O}}_{\lambda}]$ that maps $G_{\lambda}$ to $[{{\mathcal}{O}}_{\lambda}]$ (which is considered as $0$ if ${\lambda}\not\subset R$) is an algebra homomorphism [@MR1946917].
There is another basis of ${K^*({\mathrm{Gr}}(k,n))}$ consisting of the classes $[{{\mathcal}{I}}_{\lambda}]$ of ideal sheaves of boundaries of Schubert varieties. In [@MR1946917 Section 8] it is shown that the bases $\{[{{\mathcal}{O}}_{\lambda}]\}_{{\lambda}\subset R}$ and $\{[{{\mathcal}{I}}_{\lambda}]\}_{{\lambda}\subset R}$ are related to each other by $[{{\mathcal}{O}}_{\lambda}] = \sum_{{\lambda}\subset\mu\subset R} [{{\mathcal}{I}}_\mu]$ and that they are dual: more precisely $([{{\mathcal}{O}}_{\lambda}],[{{\mathcal}{I}}_{{\tilde}\mu}])={\delta}_{{\lambda}\mu}$ where ${\tilde}\mu=(n-k-\mu_k,\cdots,n-k-\mu_1)$ is the rotated complement of $\mu\subset R$ and the pairing $(\,,)$ is defined by $(\alpha,\beta) = \rho_*(\alpha\otimes\beta)$ where $\rho_*$ is the pushforward to a point.
The $K$-homology ${K_*({\mathrm{Gr}}(k,n))}$, the Grothendieck group of coherent sheaves, is naturally isomorphic to ${K^*({\mathrm{Gr}}(k,n))}$. Lam and Pylyavskyy proved in [@MR2377012 Theorem 9.16] that the surjection ${\Lambda}=\bigoplus_{{\lambda}\in{\mathcal{P}}} {\mathbb{Z}}g_{\lambda}{\longrightarrow}{K_*({\mathrm{Gr}}(k,n))}= \bigoplus_{\mu\subset R} {\mathbb{Z}}[{{\mathcal}{I}}_\mu]$ that maps $g_{\lambda}$ to $[{{\mathcal}{I}}_{{\tilde}{\lambda}}]$ (which is considered as $0$ if ${\lambda}\not\subset R$) identifies the coproduct and product on ${\Lambda}$ with the pushforwards of the diagonal embedding map and the direct sum map. Since $\mu\subset{\lambda}\iff\tilde\mu\supset\tilde{\lambda}$, under this identification we see that $\sum_{\mu\subset{\lambda}}g_\mu\in{\Lambda}$ corresponds to $[{{\mathcal}{O}}_{\tilde{\lambda}}]\in{K_*({\mathrm{Gr}}(k,n))}$.
Description for the Pieri coefficients {#sect:gs_Pieri}
======================================
In this section we give an explanation for the Pieri coefficients for $G_{\lambda}$ and $g_{\lambda}$; their non-leading terms (higher-degree terms for the case of $G_{\lambda}$; lower-degree terms for the case of $g_{\lambda}$) are obtained by taking an alternating sum of meets (or joins) of the leading terms ((\[eq:G:Pieri:altsum\]) and (\[eq:g:Pieri:altsum\])). Another equivalent description is that the product ${\widetilde{G}_{{\lambda}}} G_{(a)}$ (resp. ${\widetilde{g}_{{\lambda}}} {\widetilde{g}_{(a)}}$) is expanded into a certain multiplicity-free sum of $G_\mu$ (resp. $g_\mu$) ((\[eq:Gs:Pieri:G\]) and (\[eq:gsum\])).
The key fact is that the coefficients in the Pieri rules (\[eq:G:Pieri\]) and (\[eq:g\_Pieri\_hs\]) are the values of the Möbius functions of certain posets of horizontal strips over ${\lambda}$: for ${\lambda}\in{\mathcal{P}}$ and $a\in{\mathbb{Z}}_{>0}$, let $$\begin{gathered}
{\mathrm{HS}_{}({\lambda})}=\{\mu\in{\mathcal{P}}\mid\text{$\mu/{\lambda}$ is a horizontal strip}\}, \\
{\mathrm{HS}_{\le a}({\lambda})}=\{\mu\in{\mathrm{HS}_{}({\lambda})} \mid |\mu/{\lambda}|\le a\}, \qquad
{{\widehat}{\mathrm{HS}}_{\le a}({\lambda})}= {\mathrm{HS}_{\le a}({\lambda})} \sqcup \{\hat{1}\}, \\
{\mathrm{HS}_{\ge a}({\lambda})}=\{\mu\in{\mathrm{HS}_{}({\lambda})} \mid |\mu/{\lambda}|\ge a\}, \qquad
{{\widehat}{\mathrm{HS}}_{\ge a}({\lambda})}= {\mathrm{HS}_{\ge a}({\lambda})} \sqcup \{\hat{0}\}.\end{gathered}$$ Here $\hat{0}$ and $\hat{1}$ are formal minimum and maximum elements. For a poset $P$, let $\mu_P$ denote its Möbius function (see Appendix \[sect:Prel::Mobius\]). Then we have
\[theo:mobius\] $(1)$ For any $\mu\in{\mathrm{HS}_{\ge a}({\lambda})}$, we have $c^{\mu}_{{\lambda},(a)} = - \mu_{{{\widehat}{\mathrm{HS}}_{\ge a}({\lambda})}}(\hat{0}, \mu)$. That is, $$\begin{gathered}
\sum_{\substack{
\mu\supset\nu\in{\mathrm{HS}_{\ge a}({\lambda})}
}
}
c^{\nu}_{{\lambda},(a)}
= 1.
\label{eq:c_sum}
\end{gathered}$$
$(2)$ For any $\mu\in{\mathrm{HS}_{\le a}({\lambda})}$, we have $d^{\mu}_{{\lambda},(a)} = - \mu_{{{\widehat}{\mathrm{HS}}_{\le a}({\lambda})}}(\mu,\hat{1})$. That is, $$\begin{gathered}
\sum_{\substack{
\mu\subset\nu\in{\mathrm{HS}_{\le a}({\lambda})}
}}
d^{\nu}_{{\lambda},(a)}
= 1.
\label{eq:d_sum}
\end{gathered}$$
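Both identities are easy to verify by computer for small cases; the following Python sketch (our own illustration, with ad hoc helper names) checks (\[eq:c\_sum\]) and (\[eq:d\_sum\]) for ${\lambda}=(3,2,2)$ and $a=2$. For (\[eq:c\_sum\]) the infinite set ${\mathrm{HS}_{\ge a}({\lambda})}$ is truncated to strips of size at most $a$ plus the number of rows of ${\lambda}$; this is harmless, since for each tested $\mu$ every $\nu\subset\mu$ in ${\mathrm{HS}_{\ge a}({\lambda})}$ also lies in this truncation.

```python
from math import comb
from itertools import product

def horizontal_strips(la, max_add):
    la = tuple(la) + (0,)
    ranges = [range(max_add + 1)] + [range(la[i - 1] - la[i] + 1) for i in range(1, len(la))]
    for add in product(*ranges):
        if sum(add) <= max_add:
            yield tuple(p for p in (x + b for x, b in zip(la, add)) if p)

def contains(mu, la):                      # is la contained in mu ?
    return len(la) <= len(mu) and all(l <= m for l, m in zip(la, mu))

def c_coef(la, mu, a):                     # c^mu_{la,(a)}, Eq. (eq:G:Pieri)
    k = sum(mu) - sum(la)
    r = sum(1 for i in range(len(mu)) if mu[i] > (la[i] if i < len(la) else 0))
    return (-1) ** (k - a) * comb(r - 1, k - a)

def d_coef(la, mu, a):                     # d^mu_{la,(a)}, Eq. (eq:g_Pieri_hs)
    k, mubar = sum(mu) - sum(la), mu[1:]
    r = sum(1 for i in range(len(la)) if la[i] > (mubar[i] if i < len(mubar) else 0))
    return (-1) ** (a - k) * comb(r, a - k)

la, a = (3, 2, 2), 2
HS_le = list(horizontal_strips(la, a))                                               # HS_{<=a}(la)
HS_ge = [nu for nu in horizontal_strips(la, a + len(la)) if sum(nu) - sum(la) >= a]   # truncated HS_{>=a}(la)
assert all(sum(d_coef(la, nu, a) for nu in HS_le if contains(nu, mu)) == 1 for mu in HS_le)
assert all(sum(c_coef(la, nu, a) for nu in HS_ge if contains(mu, nu)) == 1 for mu in HS_ge)
print("Eqs. (eq:c_sum) and (eq:d_sum) hold for la = (3,2,2), a = 2")
```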
Before proving Lemma \[theo:mobius\] we show the following propositions. Let ${\lambda}^{(1)}, {\lambda}^{(2)}, \cdots$ be the list of all horizontal strips over ${\lambda}$ of size $a$. Then
\[theo:Pieris\] We have $$\begin{aligned}
{\widetilde{g}_{(a)}} {\widetilde{g}_{{\lambda}}}
&=
\sum_{\mu\subset{\lambda}^{(i)} \text{ for $\exists i$}}
g_{\mu}
\label{eq:gsum} \\
&=
\sum_{i} {\widetilde{g}_{{\lambda}^{(i)}}}
- \sum_{i<j} {\widetilde{g}_{{\lambda}^{(i)}\cap{\lambda}^{(j)}}}
+ \sum_{i<j<k} {\widetilde{g}_{{\lambda}^{(i)}\cap{\lambda}^{(j)}\cap{\lambda}^{(k)}}}
- \cdots,
\label{eq:gs:Pieri:altsum}
\end{aligned}$$ and $$\begin{aligned}
g_{(a)} g_{\lambda}&=
\sum_{i} g_{{\lambda}^{(i)}}
- \sum_{i<j} g_{{\lambda}^{(i)}\cap{\lambda}^{(j)}}
+ \sum_{i<j<k} g_{{\lambda}^{(i)}\cap{\lambda}^{(j)}\cap{\lambda}^{(k)}}
- \cdots.
\label{eq:g:Pieri:altsum}
\end{aligned}$$
\[theo:G:Pieri\] We have $$\begin{aligned}
G_{(a)} {\widetilde{G}_{{\lambda}}}
&= \sum_{\mu\supset{\lambda}^{(i)} \text{ for $\exists i$}}
G_\mu \label{eq:Gs:Pieri:G} \\
&= \sum_{i} {\widetilde{G}_{{\lambda}^{(i)}}}
- \sum_{i < j} {\widetilde{G}_{{\lambda}^{(i)} \cup {\lambda}^{(j)}}}
+ \sum_{i < j < k} {\widetilde{G}_{{\lambda}^{(i)} \cup {\lambda}^{(j)} \cup {\lambda}^{(k)}}}
- \cdots, \label{eq:Gs:Pieri:altsum}
\end{aligned}$$ and $$G_{(a)} G_{{\lambda}}
= \sum_{i} G_{{\lambda}^{(i)}}
- \sum_{i < j} G_{{\lambda}^{(i)} \cup {\lambda}^{(j)}}
+ \sum_{i < j < k} G_{{\lambda}^{(i)} \cup {\lambda}^{(j)} \cup {\lambda}^{(k)}}
- \cdots
\label{eq:G:Pieri:altsum}$$
Note that the left-hand side of (\[eq:Gs:Pieri:G\]) is not ${\widetilde{G}_{(a)}}{\widetilde{G}_{{\lambda}}}$ but $G_{(a)}{\widetilde{G}_{{\lambda}}}$, while that of (\[eq:gsum\]) is ${\widetilde{g}_{(a)}}{\widetilde{g}_{{\lambda}}}$, reflecting the fact that the map $G_{\lambda}\mapsto{\widetilde{G}_{{\lambda}}}$ is a module morphism while $g_{\lambda}\mapsto{\widetilde{g}_{{\lambda}}}$ is a ring morphism.
and are mere specializations of corresponding results for *affine dual stable Grothendieck polynomials* ${g^{(k)}_{{\lambda}}}$ shown in [@TakigikuKkSchurSum], but here we give another proof since it is easier and also applicable to $G_{\lambda}$. It is also notable that in the affine case (that is, for ${g^{(k)}_{{\lambda}}}$), equations of the form and hold but does not. In an earlier version of this paper [^1] there was an exposition of the proof of and that is adopted from [@TakigikuKkSchurSum] and optimized for the non-affine case, and by using this and the argument of Lemma \[theo:mobius\] the fact that $g_{\lambda}\mapsto{\widetilde{g}_{{\lambda}}}$ is a ring morphism was derived. Later, a simpler proof for this was found (as given in Section \[sect:Ggsums\]) and the exposition became unnecessary and therefore has been removed.
The right-hand sides of (\[eq:gsum\]) and (\[eq:gs:Pieri:altsum\]) are equal by the Inclusion-Exclusion Principle, and (\[eq:gs:Pieri:altsum\]) and (\[eq:g:Pieri:altsum\]) are equivalent by Proposition \[theo:H(1):perp\].
Let $P$ be the order ideal of ${\mathcal{P}}$ generated by $\{{\lambda}^{(1)}, {\lambda}^{(2)}, \cdots\}$ (i.e. the set of $\mu\in{\mathcal{P}}$ satisfying $\mu\subset{\lambda}^{(i)}$ for some $i$) and ${\widehat}{P}=P\sqcup\{{\hat{1}}\}$ where ${\hat{1}}$ is the maximum element. Note that $\{{\lambda}^{(1)}, {\lambda}^{(2)}, \cdots\}$ is the set of coatoms in ${\widehat}{P}$ and ${{\widehat}{\mathrm{HS}}_{\le a}({\lambda})}$ ($\subset{\widehat}{P}$) is closed under meet. Then $$\begin{aligned}
{\widetilde{g}_{{\lambda}}} {\widetilde{g}_{(a)}}
&= \sum_{\nu} d^{\nu}_{{\lambda},(a)} {\widetilde{g}_{\nu}}
& \qquad&\text{(\eqref{eq:g_Pieri_hs} and Proposition \ref{theo:H(1):perp})} \\
&= - \sum_{\nu} \mu_{{{\widehat}{\mathrm{HS}}_{\le a}({\lambda})}}(\nu,\hat{1}) {\widetilde{g}_{\nu}}
& &\text{(Lemma \ref{theo:mobius} (2))} \\
&= - \sum_{\nu} \mu_{{\widehat}{P}}(\nu,\hat{1}) {\widetilde{g}_{\nu}}
& &\text{(Lemma \ref{theo:MobiusLem} (3))} \\
&= \sum_{\mu\in P} g_\mu.
& &\text{(Lemma \ref{theo:MobiusLem} (1))}
\end{aligned}$$ Hence \eqref{eq:gsum} follows.
Similarly to Proposition \[theo:Pieris\], the equivalence of \eqref{eq:Gs:Pieri:G}, \eqref{eq:Gs:Pieri:altsum} and \eqref{eq:G:Pieri:altsum} follows, and by Lemma \[theo:mobius\] (1) and Lemma \[theo:MobiusLem\] (with all orderings reversed) we have $$\begin{aligned}
{\widetilde{G}_{{\lambda}}} G_{(a)}
&= \sum_{\nu} c^{\nu}_{{\lambda},(a)} {\widetilde{G}_{\nu}}
= \sum_{\mu\in Q} G_\mu,
\end{aligned}$$ where $Q$ is the order filter of ${\mathcal{P}}$ generated by $\{{\lambda}^{(1)}, {\lambda}^{(2)}, \cdots\}$, i.e. the set of $\mu\in{\mathcal{P}}$ satisfying $\mu\supset{\lambda}^{(i)}$ for some $i$. Hence \eqref{eq:Gs:Pieri:G} follows.
Fix ${\lambda}\in{\mathcal{P}}$. Let $r_0<r_1<\dots<r_t$ be the indices of the rows in which ${\lambda}$ has an addable corner, i.e. ${\lambda}_{r_i-1}>{\lambda}_{r_i}$ (we set ${\lambda}_0=\infty$, whence $r_0=1$). Let $n_i={\lambda}_{r_i-1}-{\lambda}_{r_i}$, i.e. the number of boxes that can be added to ${\lambda}$ in the $r_i$-th row (we set $n_0=\infty$). Then $${\mathrm{HS}_{}({\lambda})}
\simeq
\{(b_0,\dots,b_t)\in{\mathbb{Z}}^{t+1}\mid 0\le b_i\le n_i\ (\text{for } 0\le i\le t)\},$$ where $(b_0,\dots,b_t)$ in the right-hand side corresponds to the partition obtained by adding $b_i$ boxes to ${\lambda}$ in the $r_i$-th row. $${
\begin{tikzpicture}[scale=0.25]
\draw (0,0) -| (13,3) -| (8,5) -| (5,8) -| (0,0);
\draw (13,0) rectangle +(2,1);
\draw (8,3) rectangle +(3,1);
\draw (5,5) rectangle +(2,1);
\draw (0,8) rectangle +(3,1);
\draw (0,9) to [out=20, in=160] node (bt)[above]{$b_t$} +(3,0);
\draw (0,8) to [out=-20, in=-160] node [below]{$n_t$} +(5,0);
\draw (8,4) to [out=20, in=160] node (b1)[above]{$b_1$} +(3,0);
\draw (8,3) to [out=-20, in=-160] node[below]{$n_1$} +(5,0);
\draw [loosely dotted, thick] (6,8) -- (7.5,6.5);
\draw (13,1) to [out=20, in=160] node[above]{$b_0$} +(2,0);
\node at (4,2.5) {${\lambda}$};
\end{tikzpicture}
}$$
Under this correspondence $\mu\mapsto (b_0,\dots,b_t)$ and $\nu\mapsto (c_0,\dots,c_t)$, we have $\nu\subset\mu\iff c_i\le b_i$ (for all $i$) and $$\label{eq:stats}
|\nu/{\lambda}| = \sum_{i=0}^{t} c_i, \qquad
r(\nu/{\lambda}) = \sum_{i=0}^{t}{{\delta}\left[c_i>0\right]}, \qquad
r({\lambda}/\bar\nu) = \sum_{i=1}^{t} {{\delta}\left[c_i<n_i\right]},$$ where we use the notation ${{\delta}\left[P\right]}=1$ if $P$ is true and ${{\delta}\left[P\right]}=0$ if $P$ is false for a condition $P$.
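To make the correspondence concrete, the following small Python sketch (our own illustration; the test partition ${\lambda}=(5,3,3,1)$ and the finite cap used in place of $n_0=\infty$ are arbitrary choices) enumerates the tuples $(b_0,\dots,b_t)$, builds the corresponding partitions $\mu$, and confirms that they are horizontal strips over ${\lambda}$ with the first two statistics of \eqref{eq:stats}.
\begin{verbatim}
from itertools import product

LAM = (5, 3, 3, 1)   # an arbitrary test partition
CAP0 = 4             # finite stand-in for n_0 = infinity

# rows r_i (1-based) with lambda_{r_i - 1} > lambda_{r_i}, and capacities n_i
ext = [10**9] + list(LAM) + [0]
rows = [r for r in range(1, len(LAM) + 2) if ext[r - 1] > ext[r]]
caps = [CAP0] + [ext[r - 1] - ext[r] for r in rows[1:]]

count = 0
for bs in product(*[range(c + 1) for c in caps]):
    mu = list(LAM) + [0]
    for r, b in zip(rows, bs):
        mu[r - 1] += b
    # mu is again a partition and mu/lambda is a horizontal strip
    assert all(mu[i] >= mu[i + 1] for i in range(len(mu) - 1))
    assert all(mu[i + 1] <= LAM[i] for i in range(len(LAM)))
    # the statistics of (eq:stats), expressed through the tuple (b_0, ..., b_t)
    assert sum(bs) == sum(mu) - sum(LAM)
    assert sum(b > 0 for b in bs) == sum(m > l for m, l in zip(mu, list(LAM) + [0]))
    count += 1
print(count, "partitions obtained from LAM by adding a horizontal strip (n_0 capped)")
\end{verbatim}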
Now we prove \eqref{eq:c_sum}. For $\mu\in{\mathrm{HS}_{\ge a}({\lambda})}$, by \eqref{eq:stats} we have $$\begin{aligned}
\text{(LHS of \eqref{eq:c_sum})}
&=
\sum_{\substack{\nu\in{\mathrm{HS}_{\ge a}({\lambda})} \\ \nu\subset\mu}}
(-1)^{|\nu/{\lambda}|-a} \binom{r(\nu/{\lambda})-1}{|\nu/{\lambda}|-a} \\
&=
\sum_{0\le c_0\le b_0}
\sum_{0\le c_1\le b_1}
\dots
\sum_{0\le c_t\le b_t}
{{\delta}\left[\sum_{i=0}^{t} c_i\ge a\right]}
(-1)^{\sum_{i=0}^{t}c_i - a}
\binom{\sum_{i=0}^{t}{{\delta}\left[c_i>0\right]}-1}{\sum_{i=0}^{t}c_i-a}.
\label{eq:c_sum:totyu}
\intertext{
Applying Lemma \ref{theo:binom} below
to simplify the summation on $c_t$,
we have
}
&=
\sum_{0\le c_0\le b_0}
\dots
\sum_{0\le c_{t-1}\le b_{t-1}}
{{\delta}\left[b_t + \sum_{i=0}^{t-1} c_i\ge a\right]}
(-1)^{b_t+\sum_{i=0}^{t-1}c_i - a}
\binom{\sum_{i=0}^{t-1}{{\delta}\left[c_i>0\right]}-1}{b_t+\sum_{i=0}^{t-1}c_i - a}.
\intertext{
Repeating this to simplify the summations on $c_{0},\dots,c_{t-1}$,
we have
}
&= \dots \\
&= {{\delta}\left[\sum_{i=0}^{t} b_i\ge a\right]}
(-1)^{\sum_{i=0}^{t}b_i - a}
\binom{-1}{\sum_{i=0}^{t} b_i - a} \\
&= {{\delta}\left[\sum_{i=0}^{t} b_i\ge a\right]}
= {{\delta}\left[|\mu/{\lambda}|\ge a\right]}
= 1.\end{aligned}$$ Hence \eqref{eq:c_sum} is proved.
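The identity \eqref{eq:c_sum} is also easy to confirm by direct enumeration; the following Python sketch (our own check; the partition, the value of $a$ and the cap replacing $n_0=\infty$ are arbitrary test choices) recomputes the left-hand side in the tuple coordinates and verifies that it equals $1$ for every admissible $\mu$.
\begin{verbatim}
from math import comb
from itertools import product

LAM, A_PARAM, CAP0 = (4, 2, 1), 2, 3   # test partition, the value of a, cap for n_0

ext = [10**9] + list(LAM) + [0]
rows = [r for r in range(1, len(LAM) + 2) if ext[r - 1] > ext[r]]
caps = [CAP0] + [ext[r - 1] - ext[r] for r in rows[1:]]

for bs in product(*[range(c + 1) for c in caps]):   # mu, as a tuple (b_0, ..., b_t)
    if sum(bs) < A_PARAM:
        continue                                    # mu must lie in HS_{>= a}(lambda)
    total = 0
    for cs in product(*[range(b + 1) for b in bs]): # nu contained in mu
        size, nrows = sum(cs), sum(c > 0 for c in cs)
        if size >= A_PARAM:                         # nu in HS_{>= a}(lambda)
            total += (-1)**(size - A_PARAM)*comb(nrows - 1, size - A_PARAM)
    assert total == 1
print("left-hand side of (eq:c_sum) equals 1 for every admissible mu")
\end{verbatim}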
Next we prove \eqref{eq:d_sum}. By similar arguments we have $$\begin{aligned}
\text{(LHS of \eqref{eq:d_sum})}
&=
\sum_{b_0\le c_0\le n_0}
\sum_{b_1\le c_1\le n_1}
\dots
\sum_{b_t\le c_t\le n_t}
{{\delta}\left[\sum_{i=0}^{t} c_i\le a\right]}
(-1)^{a-\sum_{i=0}^{t}c_i}
\binom{\sum_{i=1}^{t}{{\delta}\left[c_i<n_i\right]}}{a-\sum_{i=0}^{t}c_i}.
\label{eq:d_sum:totyu}\end{aligned}$$ Note that this is actually a finite sum despite $n_0=\infty$, and we can replace $n_0$ with a sufficiently large positive integer without changing the value of \eqref{eq:d_sum:totyu}. Noticing ${{\delta}\left[c_0<n_0\right]}=1$ for any $c_0$ that contributes to the summation \eqref{eq:d_sum:totyu}, and letting $b'_i = n_i-b_i$, $c'_i=n_i-c_i$ and $a'=(\sum_{i=0}^{t}n_i)-a$, we have $$\begin{aligned}
\eqref{eq:d_sum:totyu}
&=
\sum_{0\le c'_0\le b'_0}
\sum_{0\le c'_1\le b'_1}
\dots
\sum_{0\le c'_t\le b'_t}
{{\delta}\left[\sum_{i=0}^{t} c'_i\ge a'\right]}
(-1)^{\sum_{i=0}^{t}c'_i - a'}
\binom{\sum_{i=0}^{t}{{\delta}\left[c'_i>0\right]}-1}{\sum_{i=0}^{t}c'_i-a'}.
\intertext{
Since this summation is of the same form as \eqref{eq:c_sum:totyu},
by the same arguments we have
}
&=
{{\delta}\left[\sum_{i=0}^{t}b'_i\ge a'\right]}
= {{\delta}\left[\sum_{i=0}^{t}b_i\le a\right]} = {{\delta}\left[|\mu/{\lambda}|\le a\right]} = 1.\end{aligned}$$ Hence \eqref{eq:d_sum} is proved.
\[theo:binom\] For $R,q,b, b'\in\mathbb{Z}$ with $b'\le b$, we have $$\sum_{b'\le x\le b}
{{\delta}\left[x\ge R\right]}
(-1)^{x-R}
\binom{q+{{\delta}\left[x>b'\right]}}{x-R}
= {{\delta}\left[b\ge R\right]} (-1)^{b-R} \binom{q}{b-R},$$ where we use the notation ${{\delta}\left[P\right]}=1$ if $P$ is true and ${{\delta}\left[P\right]}=0$ if $P$ is false for a condition $P$.
We proceed by induction on $b-b'$. The lemma is clear when $b'=b$. When $b'<b$, it is easy to check that $$- {{\delta}\left[b'\ge R\right]} \binom{q}{b'-R}
+ {{\delta}\left[b'+1\ge R\right]} \binom{q+1}{b'+1-R}
= {{\delta}\left[b'+1\ge R\right]} \binom{q}{b'+1-R}.$$ Hence we can replace $b'$ with $b'+1$, completing the proof.
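As a sanity check, Lemma \[theo:binom\] can be verified by brute force for small integer parameters. The Python sketch below is our own check (the search grid is arbitrary); it uses the generalized binomial coefficient $\binom{q}{m}=q(q-1)\cdots(q-m+1)/m!$ for $m\ge 0$ and $0$ for $m<0$, which appears to be the convention needed above (cf. the step involving $\binom{-1}{\,\cdot\,}$).
\begin{verbatim}
from math import factorial
from itertools import product

def binom(q, m):
    # generalized binomial: q(q-1)...(q-m+1)/m! for m >= 0, and 0 for m < 0;
    # valid for negative integer q as well (e.g. binom(-1, m) = (-1)**m)
    if m < 0:
        return 0
    num = 1
    for i in range(m):
        num *= q - i
    return num // factorial(m)

def lhs(R, q, b, bp):
    return sum((-1)**(x - R)*binom(q + (1 if x > bp else 0), x - R)
               for x in range(max(bp, R), b + 1))

def rhs(R, q, b):
    return (-1)**(b - R)*binom(q, b - R) if b >= R else 0

# brute-force check over a small grid of integer parameters with b' <= b
for R, q, bp, span in product(range(-3, 4), range(-3, 4), range(-3, 4), range(5)):
    assert lhs(R, q, bp + span, bp) == rhs(R, q, bp + span)
print("lemma verified on the sampled grid")
\end{verbatim}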
Möbius function of a poset {#sect:Prel::Mobius}
==========================
For basic definitions for posets we refer the reader to [@MR2868112 Chapter 3].
For a locally finite (i.e. every interval is finite) poset $P$, the *Möbius function* $\mu_{P}(x,y)$ (for $x,y\in P$ with $x\le y$) is characterized by $$\sum_{x\le z\le y} \mu_{P}(x,z) = \delta_{xy} \quad\text{for any $x\le y$},$$ or equivalently $$\label{eq:mobius}
\sum_{x\le z\le y} \mu_{P}(z,y) = \delta_{xy} \quad\text{for any $x\le y$}.$$
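The first of the two defining relations translates directly into code. The following Python sketch (our own illustration; the Boolean-lattice example and the function names are arbitrary choices) computes $\mu_P$ recursively and checks it against the well-known value $\mu(S,T)=(-1)^{|T\setminus S|}$ on the lattice of subsets ordered by inclusion.
\begin{verbatim}
from functools import lru_cache

def mobius_function(elements, leq):
    """Mobius function of a finite poset, from the defining recursion
    mu(x, x) = 1 and sum_{x <= z <= y} mu(x, z) = 0 for x < y."""
    @lru_cache(maxsize=None)
    def mu(x, y):
        if x == y:
            return 1
        return -sum(mu(x, z) for z in elements if leq(x, z) and leq(z, y) and z != y)
    return mu

# Example: subsets of {0,1,2} ordered by inclusion, where mu(S,T) = (-1)^{|T\S|}
P = tuple(frozenset(s) for s in ([], [0], [1], [2], [0, 1], [0, 2], [1, 2], [0, 1, 2]))
mu = mobius_function(P, lambda a, b: a <= b)
assert all(mu(a, b) == (-1)**(len(b - a)) for a in P for b in P if a <= b)
\end{verbatim}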
\[theo:MobiusLem\] Let ${\widehat}{P}$ be a locally finite poset with the maximum element ${\hat{1}}$. Let $P={\widehat}{P}{\setminus}\{{\hat{1}}\}$ and let $\{x_1,\cdots,x_n\}$ be the maximal elements in $P$, i.e. the coatoms in ${\widehat}{P}$. Consider formal variables $\{g(s)\mid s\in {\widehat}{P}\}$ and let $\widetilde{g}(t)=\sum_{s\le t} g(s)$ for $t\in {\widehat}{P}$.
$(1)$ We have $$\begin{aligned}
\sum_{s\in P} g(s)
= - \sum_{s\in P} \mu_{{\widehat}{P}}(s,{\hat{1}}) {\widetilde}{g}(s). \label{eq:mobius_gsum2}
\end{aligned}$$
$(2)$ Assume that $P$ admits the meet operation $\wedge$. Then $$\begin{aligned}
\sum_{s\in P} g(s) &=
\sum_{m\ge 1} (-1)^{m-1}
\sum_{i_1<\dots<i_m}
{\widetilde}{g}(x_{i_1}\wedge\cdots\wedge x_{i_m}) \\
\bigg(&=
\sum_{i} {\widetilde}{g}(x_i)
- \sum_{i<j} {\widetilde}{g}(x_i\wedge x_j)
+ \sum_{i<j<k} {\widetilde}{g}(x_i\wedge x_j\wedge x_k)
- \dots, \bigg)
\end{aligned}$$
$(3)$ In the same situation as $(2)$, $\mu_{{\widehat}{P}}(s,{\hat{1}}) = 0$ unless $s$ is of the form $s=x_{i_1}\wedge\dots\wedge x_{i_l}$, and $$\label{eq:mobius_PP}
\mu_{{\widehat}{P}}(s,{\hat{1}}) = \mu_{{\widehat}{P}'}(s,{\hat{1}})$$ for any subposet ${\widehat}{P}'$ of ${\widehat}{P}$ that contains all elements of the form $x_{i_1}\wedge\dots\wedge x_{i_l}$ (including ${\hat{1}}$ as the meet of an empty set).
It is known (see [@MR2868112 Proposition 3.7.1] for example) that $$\label{eq:mobius_inv}
g(t)=\sum_{s\le t} \mu_{{\widehat}{P}}(s,t) \widetilde{g}(s)
\quad\text{(for $\forall t\in {\widehat}{P}$)}.$$ Hence we have $$\begin{aligned}
\sum_{s\in P} g(s)
= \widetilde{g}({\hat{1}}) - g({\hat{1}}) = {\widetilde}{g}({\hat{1}}) - \sum_{s\in{\widehat}{P}} \mu_{{\widehat}{P}}(s,{\hat{1}}) {\widetilde}{g}(s)
= - \sum_{s\in P} \mu_{{\widehat}{P}}(s,{\hat{1}}) {\widetilde}{g}(s), \label{eq:mobius_gsum}
\end{aligned}$$ proving (1). (2) is by the Inclusion-Exclusion Principle. (3) follows from (1) and (2).
[^1]: arXiv:1806.06369v2
|
---
abstract: 'We establish a new algorithm that generates a new solution to the Einstein field equations, with an anisotropic matter distribution, from a seed isotropic solution. The new solution is expressed in terms of integrals of an isotropic gravitational potential; and the integration can be completed exactly for particular isotropic seed metrics. A good feature of our approach is that the anisotropic solutions necessarily have an isotropic limit. We find two examples of anisotropic solutions which generalise the isothermal sphere and the Schwarzschild interior sphere. Both examples are expressed in closed form involving elementary functions only.'
author:
- |
M. Chaisi[^1] and S. D. Maharaj[^2]\
Astrophysics and Cosmology Research Unit\
School of Mathematical Sciences\
University of KwaZulu-Natal\
Durban 4041, South Africa
---
Introduction \[sec:intro\]
==========================
Numerous models of static perfect fluid spheres, in the context of general relativity, have been constructed in the past because these are first approximations in building a realistic model for a star. Lists of exact solutions to the Einstein field equations modelling relativistic perfect fluid spheres are given in several treatments [@DelgatyLake; @FinchSkea; @stephaniEtAl]. In these works it is assumed that the matter distribution is isotropic so that the radial pressure is the same as the transverse pressure. A strong case can be made for studying anisotropic matter distributions, in which the radial component of the pressure is not the same as the transverse component. Anisotropy should not be neglected when analysing the critical mass and the redshift of highly compact bodies, and it is important in modelling boson stars and strange stars. Consequently anisotropy has been studied extensively in recent years by a number of researchers within the framework of general relativity [@DevGleiser; @DevGleiser2; @HerreraMartin; @HerreraTroconis; @Ivanov; @MakHarko2002; @MakHarko; @SharmaMukherjee2002].
Most solutions of the Einstein field equations with isotropic matter have been obtained in an ad hoc approach. Recent investigations have attempted to generate isotropic models using a systematic and algorithmic approach[@RahmanVisser; @Lake; @MartinVisser; @BoonsermEtAl]. However there exist very few analogous results for generating anisotropic models. In this regard Maharaj and Chaisi[@MaharajChaisi] have established an algorithm that produces a new anisotropic solution from a given seed isotropic line element. Here we present a different algorithm that generates a new anisotropic solution. The present algorithm has a simpler form and involves fewer integrations, and it is consequently easier to apply. A desirable physical feature of our approach is that these solutions have an isotropic limit; we expect that gravity acts to eventually isotropize matter in the absence of other external forces. Note that many of the exact solutions found previously remain anisotropic with no possibility of regaining an isotropic matter distribution in a suitable limit[@MaharajMaartens; @gokhroo; @ChaisiMaharajA; @ChaisiMaharajB].
The main objective of this paper is to demonstrate that it is possible to generate anisotropic solutions from a given isotropic solution. To achieve this we need to complete an integration which we show is possible for particular metrics. In §2 we provide the fundamental field equations for isotropic and anisotropic matter. The field equations are presented as a first order system of differential equations. The algorithm that produces a new anisotropic solution is described in §3. As a first example we use the isothermal model to produce a new anisotropic solution in §4. As a second example we use the Schwarzschild interior model to produce a new anisotropic solution in §5. In both examples the solution can be given explicitly in terms of elementary functions which makes it possible to study the physical features. We briefly study the behaviour of the anisotropy factor in both examples.
Field equations \[sec:fieldeqns\]
=================================
We utilise a form of the Einstein field equations in which only first order derivatives appear. This representation assists in simplifying the integration process as pointed out by Chaisi and Maharaj[@ChaisiMaharajA] whose notation and conventions we follow. The line element for static spherically symmetric spacetimes is given by $$\begin{aligned}
\mbox{d}s^2 & = &
-e^{\nu}\mbox{d}t^2+e^{\lambda}\mbox{d}r^2+r^2\left(\mbox{d}\theta^2
+\sin^2\theta\mbox{d}\phi^2\right) \label{metric2}\end{aligned}$$ where $\nu(r)$ and $\lambda(r)$ are arbitrary functions. The energy-momentum tensor, for nonradiating matter, for isotropic distributions has the form $$\begin{aligned}
T^{ab}=(\mu+p) u^au^b+pg^{ab} \label{Tiso}\end{aligned}$$ where $\mu$ is the energy density and $p$ is the isotropic pressure. These are measured relative to the comoving four-velocity $u^a=e^{-\nu/2}\delta^a_0 $. We define the mass function as $$\begin{aligned}
m(r) & = & \frac{1}{2}\int^r_0x^2\mu(x)\mbox{d}x \label{massFun}\end{aligned}$$ Consequently $M=m(R)$ is the total mass of a sphere of radius $R$. The Einstein field equations are equivalent to the system $$\begin{aligned}
e^{-\lambda} & = & 1-\frac{2m}{r} \label{EFEs:1} \\
r(r-2m)\nu^\prime & = & p r^3+2m \label{EFEs:2} \\
\left(\mu+p\right)\nu^\prime+2p^\prime & = & 0 \label{EFEs:3}\end{aligned}$$ where we have used (\[metric2\])-(\[massFun\]).
The energy-momentum tensor for anisotropic matter which is not radiating has the form $$\begin{aligned}
T^{ab}=(\mu+p) u^au^b+pg^{ab}+\pi^{ab} \label{Taniso}\end{aligned}$$ The quantity $\pi^{ab}=\sqrt{3}S(r)\left(c^ac^b-\frac{1}{3}h^{ab}\right) $ is the anisotropic stress tensor; the spacelike vector $c^a =
e^{-\lambda/2}\delta^a_1$ is orthogonal to the fluid four-velocity $u^a=e^{-\nu/2}\delta^a_0 $ and $|S(r)|$ is the magnitude of the stress tensor. The Einstein field equations, with the metric (\[metric2\]) and the matter content (\[Taniso\]), can be written in the form $$\begin{aligned}
e^{-\lambda} & = & 1-\frac{2m}{r} \label{EFEs2:1} \\
r(r-2m)\nu^\prime & = & p_r r^3+2m \label{EFEs2:2} \\
\left(\mu+p_r\right)\nu^\prime+2p^\prime_r & = &
-\frac{4}{r}\left(p_r-p_\perp\right) \label{EFEs2:3}\end{aligned}$$ for anisotropic matter distributions. The radial pressure $p_r$ is distinct from the tangential pressure $p_\perp$. It is convenient to write $p_r$ and $p_\perp$ in the form $$p_r=p+2S/\sqrt{3}, \quad\quad p_\perp=p-S/\sqrt{3}$$ where $S$ provides a measure of anisotropy. Note that for isotropic matter $p_r=p_\perp=p$ and we regain (\[EFEs:1\])-(\[EFEs:3\]).
The Algorithm \[sec:algorithm\]
===============================
In this section we establish a procedure for generating a new anisotropic solution of the Einstein field equations from a specified isotropic solution. We start by considering the Einstein field equations (\[EFEs:1\])-(\[EFEs:3\]) with isotropic matter distribution. We assume that an explicit solution to (\[EFEs:1\])-(\[EFEs:3\]) is known where $$\begin{aligned}
(\nu, \lambda, m, p) & = & (\nu_0, \lambda_0, m_0, p_0)
\label{eq:isosol}\end{aligned}$$ and functions $\nu_0,\; \lambda_0,\; m_0\;\;\mbox{and}\;\; p_0$ are explicitly given. Then the equations in (\[EFEs:1\])-(\[EFEs:3\]) are satisfied and we can write $$\begin{aligned}
e^{-\lambda_0} & = & 1-\frac{2m_0}{r} \label{eq:iso1}\\
r(r-2m_0)\nu_0^\prime & = & p_0r^3+2m_0 \label{eq:iso2}\\
\left(\frac{2m_0^\prime}{r^2}+p_0\right)\nu_0^\prime + 2p_0^\prime &
= & 0 \label{eq:iso3}\end{aligned}$$ We next consider the Einstein field equations (\[EFEs2:1\])-(\[EFEs2:3\]) with anisotropic matter distribution and seek an explicit solution. To this end we propose the possible solution $$\begin{aligned}
(\nu,\lambda,m,p_r,p_\perp) & = &
\left(\nu_0,\;\lambda_0+x(r),\;m_0+y(r),\;p_0+\alpha(r),\;p_0-
\frac{\alpha(r)}{2}\right)\label{eq:anisosol}\end{aligned}$$ where $(\nu_0,\lambda_0,m_0,p_0)$ are given by (\[eq:isosol\]) and $x,y\;\;\mbox{and}\;\;\alpha$ are arbitrary functions, and we have set $\alpha=2S/\sqrt{3}$ for convenience. Then the system (\[EFEs2:1\])-(\[EFEs2:3\]) becomes $$\begin{aligned}
e^{-\left(\lambda_0+x\right)} & = & 1-\frac{2m_0+2y}{r}\label{eq:aniso2:1}\\
r(r-2m_0-2y)\nu_0^\prime & = & p_0r^3 +\alpha
r^3+2m_0+2y \label{eq:aniso2:2}\\
\left(\frac{2m_0^\prime+2y^\prime}{r^2}+p_0+\alpha\right)\nu_0^\prime+
2p_0^\prime+2\alpha^\prime & = &
-\frac{6}{r}\alpha \label{eq:aniso2:3}\end{aligned}$$\[eq:aniso2\] The systems (\[eq:iso1\])-(\[eq:iso3\]) and (\[eq:aniso2:1\])-(\[eq:aniso2:3\]) lead to $$\begin{aligned}
x & = & -\ln \left\{1-\frac{2y}{r}e^{\lambda_0} \right\}
\label{eq:x}\\
y & = & -\frac{\alpha r^3}{2(1+r \nu_0^\prime)}
\label{eq:y}\\
\left(\frac{2y^\prime}{r^2}+\alpha\right)\nu_0^\prime +
2\alpha^\prime & = & -\frac{6\alpha}{r} \label{eq:alphayprime}\end{aligned}$$ We need to integrate (\[eq:alphayprime\]) to find the function $\alpha$. The remaining functions $x$ and $y$ are defined in terms of $\alpha$. Two cases arise: $\alpha=0$ and $\alpha\ne 0$. If $\alpha=0$ then (\[eq:x\])-(\[eq:alphayprime\]) has the solution $$\begin{aligned}
(\alpha,x,y) & = & (0, 0, 0)\label{eq:alphaxy0soln}\end{aligned}$$ The trivial solution (\[eq:alphaxy0soln\]) corresponds to the isotropic case. Thus this algorithm regains the isotropic solution in the appropriate limit. If $\alpha\ne 0$ then we can eliminate $y$ from (\[eq:alphayprime\]) to get $$\begin{aligned}
\frac{2\nu_0^\prime}{r^2}\left\{-\frac{\alpha^\prime r^3}{2(1+r
\nu_0^\prime)} -\frac{3\alpha r^2}{2(1+r \nu_0^\prime)}+\frac{\alpha
r^3}{2}\frac{\nu_0^\prime+r \nu_0^{\prime\prime}}{(1+r
\nu_0^\prime)^2} \right\} + \alpha \nu_0^\prime +2\alpha^\prime & =
& -\frac{6\alpha}{r}\end{aligned}$$ This differential equation can be written as $$\begin{aligned}
\frac{\alpha^\prime}{\alpha} - \frac{\nu_0^\prime}{2+r
\nu_0^\prime} \left\{3- r \frac{\nu_0^\prime+r
\nu_0^{\prime\prime}}{1+r \nu_0^\prime} \right\} & = &
-\left(\nu_0^\prime+\frac{6}{r}\right)\left(\frac{1+r\nu_0^\prime}{2+r
\nu_0^\prime}\right) \label{eq:alphaprime}\end{aligned}$$ after some simplification. On integration (\[eq:alphaprime\]) leads to $$\begin{aligned}
\ln\alpha & = & J_\alpha+\ln k\label{eq:Jint}\end{aligned}$$ where $\ln k$ is a constant of integration, $k\ne 0$, and we have set $$\begin{aligned}
J_\alpha & = & \int \left\{ \frac{\nu_0^\prime}{2+r
\nu_0^\prime}\left(\frac{3+2r\nu_0^\prime-r^2\nu_0^{\prime\prime}}{1+r
\nu_0^\prime} \right)
-\left(\nu_0^\prime+\frac{6}{r}\right)\left(\frac{1+r\nu_0^\prime}{2+r
\nu_0^\prime}\right) \right\}\mbox{d}r\end{aligned}$$ We can write (\[eq:Jint\]) in the compact form $$\begin{aligned}
\alpha & = & k e^{ J_\alpha}\label{eq:alphaJ}\end{aligned}$$ Equations (\[eq:x\]), (\[eq:y\]) and (\[eq:alphaJ\]) correspond to anisotropic matter.
Thus if given a known isotropic solution (\[eq:isosol\]) we can generate a new anisotropic solution (\[eq:anisosol\]) where $$\begin{aligned}
\alpha & = & k e^{J_\alpha} \label{eq:alphaxy:1} \\
x & = & -\ln \left\{1-\frac{2y}{r}e^{\lambda_0}\right\}
\label{eq:alphaxy:2}\\
y & = & -\frac{\alpha r^3}{2(1+r\nu_0^\prime)} \label{eq:alphaxy:3}\end{aligned}$$ and the integral $J_\alpha$ is given by $$\begin{aligned}
J_\alpha & = & \int \left\{ \frac{\nu_0^\prime}{2+r
\nu_0^\prime}\left(\frac{3+2r\nu_0^\prime-r^2\nu_0^{\prime\prime}}{1+r
\nu_0^\prime} \right)
-\left(\nu_0^\prime+\frac{6}{r}\right)\left(\frac{1+r\nu_0^\prime}{2+r
\nu_0^\prime}\right) \right\}\mbox{d}r\end{aligned}$$ The integration in $J_\alpha$ can be explicitly performed as $\nu_0$ is specified in the isotropic solution (\[eq:isosol\]). Note that (\[eq:alphaxy:1\])-(\[eq:alphaxy:3\]) applies to both cases $\alpha= 0$ and $\alpha\ne 0$. If $\alpha= 0$ we can set $k=0$ and regain the isotropic result (\[eq:alphaxy0soln\]). When $\alpha\ne 0$ then $k\ne 0$ and we regain the anisotropic equations (\[eq:x\]), (\[eq:y\]) and (\[eq:alphaJ\]).
It is remarkable that our simple ansatz leads to a new anisotropic solution of the Einstein field equations. This is subject to completing the integration in $J_\alpha$; clearly this is possible for particular choices of the isotropic function $\nu_0$. We demonstrate two examples of anisotropic solutions for familiar choices of $\nu_0$ in the next two sections. The algorithm that we have generated in this paper is easy to apply as there is only a single integration to be performed unlike the earlier algorithm of Maharaj and Chaisi[@MaharajChaisi] which is more complicated and involves further integrations. We believe that new anisotropic solutions that arise from our procedure are likely to produce realistic anisotropic stellar models. We emphasise that a desirable feature of our approach is that our models contain an isotropic limit which is often not the case in other approaches.
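Since only the single quadrature $J_\alpha$ has to be performed, the algorithm is easy to automate in a computer algebra system. The following SymPy sketch is our own illustration (the choice of SymPy, the symbol names, and the use of the isothermal seed potential of §\[sec:iso\] below as the test case are all ours, not part of the derivation); it carries out (\[eq:alphaxy:1\]) and (\[eq:alphaxy:3\]) and can be checked against the closed forms obtained in the next section.
\begin{verbatim}
from sympy import symbols, ln, exp, diff, integrate, simplify

r, c, k = symbols('r c k', positive=True)

# seed isotropic potential: here the isothermal choice nu_0 = (4c/(1+c)) ln r
nu0 = (4*c/(1 + c))*ln(r)
nu0p, nu0pp = diff(nu0, r), diff(nu0, r, 2)

# integrand of J_alpha, as written above
integrand = (nu0p/(2 + r*nu0p))*((3 + 2*r*nu0p - r**2*nu0pp)/(1 + r*nu0p)) \
            - (nu0p + 6/r)*((1 + r*nu0p)/(2 + r*nu0p))

J = integrate(simplify(integrand), r)
alpha = simplify(k*exp(J))                      # equation (eq:alphaxy:1), ln k absorbed into k
y = simplify(-alpha*r**3/(2*(1 + r*nu0p)))      # equation (eq:alphaxy:3)

# expected: alpha = k * r**(-(3+14c+19c^2)/(1+4c+3c^2)), up to how SymPy
# chooses to present the exponent
print(alpha)
print(y)
\end{verbatim}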
Example 1 \[sec:iso\]
=====================
As a first example we demonstrate the applicability of the algorithm in §3 by generating anisotropic isothermal spheres. The line element for the isothermal model[@SaslawMaharaj] has the form $$\begin{aligned}
\mbox{d}s^2 & = &
-r^{\frac{4c}{1+c}}\mbox{d}t^2+\left(1+\frac{4c}{\left(1+c\right)^2}\right)\mbox{d}r^2
+r^2\left(\mbox{d}\theta^2+\sin^2\theta\mbox{d}\phi^2\right)\label{eq:isothermalLE}\end{aligned}$$ where $c$ is a constant. The corresponding isotropic functions for (\[eq:isothermalLE\]) are given by $$\begin{aligned}
\left(\nu_0,\lambda_0,m_0,p_0\right) & = & \left(\frac{4c}{1+c}\ln
r,\;\ln\left\{1+\frac{4c}{(1+c)^2}\right\},
\frac{2cr}{4c+(1+c)^2},\;\frac{4c^2}{4c+(1+c)^2}\left(\frac{1}{r^2}\right)\right)
\label{eq:isothermalISOsol}\end{aligned}$$ The energy density function has the form $$\begin{aligned}
\mu_0 & = &
\frac{4c}{4c+(1+c)^2}\left(\frac{1}{r^2}\right)\label{eq:mu0iso}\end{aligned}$$ Hence (\[eq:isothermalISOsol\]) and (\[eq:mu0iso\]) imply that $$\begin{aligned}
p_0 & = & c\mu_0 \label{eq:p0mu0}\end{aligned}$$ which is a linear barotropic equation of state. Isothermal spheres with the density profile $\mu\propto r^{-2}$ and the equation of state (\[eq:p0mu0\]) appear in a variety of models for both Newtonian and relativistic stars[@ChaisiMaharajA; @ChaisiMaharajB]. They have been studied extensively in astrophysics as an equilibrium approximation to more complicated systems which are close to a dynamically relaxed state[@saslaw].
With the isotropic functions (\[eq:isothermalISOsol\]) we can evaluate the integral $J_\alpha$ in (\[eq:alphaxy:1\]) and we find $$J_\alpha =
-\int\left(\frac{3+14c+19c^2}{1+4c+3c^2}\right)\frac{\mbox{d}r}{r}
=-\frac{3+14c+19c^2}{1+4c+3c^2}\ln r +\ln k$$ which leads to the expressions $$\begin{aligned}
\alpha & = & kr^{-\frac{3+14c+19c^2}{1+4c+3c^2}}\\
x & = &-\ln
\left\{1+k\frac{4c+\left(1+c\right)^2}{(1+c)(1+5c)}r^{-\frac{1+6c+13c^2}{1+4c+3c^2}}
\right\}
\\
y & = &
-\frac{k}{2}\left(\frac{1+c}{1+5c}\right)r^{-\frac{2c+10c^2}{1+4c+3c^2}}\end{aligned}$$ Consequently we obtain the new line element in the form $$\begin{aligned}
\mbox{d}s^2 & = & -r^{\frac{4c}{1+c}}\mbox{d}t^2+
\left(1+\frac{4c}{(1+c)^2}\right)
\left(1+k\frac{4c+(1+c)^2}{(1+c)(1+5c)}r^{-\frac{1+6c+13c^2}{1+4c+3c^2}}
\right)^{-1}\mbox{d}r^2 \nonumber\\ & &
+r^2\left(\mbox{d}\theta^2+\sin^2\theta\mbox{d}\phi^2\right)
\label{eq:Lelement}\end{aligned}$$ and the matter variables have the analytic representation $$\begin{aligned}
m & = & \frac{2cr}{4c+(1+c)^2}-\frac{k}{2}\left(\frac{1+c}{1+5c}
\right)r^{-\frac{2c+10c^2}{1+4c+3c^2}} \\
p_r & = & \frac{1}{r^2}\frac{4c^2}{4c+(1+c)^2}+ kr^{-\frac{3+14c+19c^2}{1+4c+3c^2}} \\
p_\perp & = &
\frac{1}{r^2}\frac{4c^2}{4c+(1+c)^2}-\frac{k}{2}r^{-\frac{3+14c+19c^2}{1+4c+3c^2}}\end{aligned}$$ The isotropic isothermal sphere model (\[eq:isothermalLE\]) produces the anisotropic isothermal sphere model (\[eq:Lelement\]) utilizing our algorithm. With the parameter value $k=0$ we regain the conventional isothermal sphere.
The degree of anisotropy is $$\begin{aligned}
S & = &
\frac{k}{2}\sqrt{3}r^{-\frac{3+14c+19c^2}{1+4c+3c^2}}\label{eq:isothermalSA}\end{aligned}$$ Mathematica[@wolfram] was used to graph the anisotropy factor (\[eq:isothermalSA\]). The plots are shown in Figures \[fig:SisoB\] and \[fig:SisoBup\] for the particular parameter values indicated. The anisotropy factor $S$ is plotted against the radial distance on the interval $0< r\leq 1$. There is a singularity at $r=0$ that has been carried over from the other dynamical and metric functions. However, because the constants $k$ and $c$ can be picked arbitrarily, the pair can be chosen such that $S(r)$ is monotonically decreasing or increasing. The physical considerations of a problem may lead to the choice of one profile over the other; for example, the $S(r)$ profile in Figure \[fig:SisoB\] may be preferable when modelling a stellar body whose anisotropy vanishes as one moves from the center of the body to the boundary. The $S(r)$ profile in Figure \[fig:SisoBup\] could be chosen over the one in Figure \[fig:SisoB\] for boson star models as proposed by Dev and Gleiser[@DevGleiser]. The fairly simple behaviour of $S(r)$ in these plots shows that a more extensive physical analysis of the solutions is possible, which will be carried out in future work.
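For readers without access to Mathematica, an equivalent plot of (\[eq:isothermalSA\]) can be produced with the short Python/Matplotlib sketch below (our own; the parameter pairs $(k,c)$ are illustrative only and are not the values used for Figures \[fig:SisoB\] and \[fig:SisoBup\]); reversing the sign of $k$ reverses the monotonic trend of $S(r)$.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def S(r, k, c):
    # anisotropy factor of equation (eq:isothermalSA)
    q = (3 + 14*c + 19*c**2)/(1 + 4*c + 3*c**2)
    return 0.5*np.sqrt(3.0)*k*r**(-q)

r = np.linspace(0.05, 1.0, 200)          # avoid the singularity at r = 0
for k, c in [(0.1, 0.5), (-0.1, 0.5)]:   # illustrative parameter pairs only
    plt.plot(r, S(r, k, c), label=f"k={k}, c={c}")
plt.xlabel("r"); plt.ylabel("S(r)"); plt.legend(); plt.show()
\end{verbatim}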
Example 2\[sec:Schwarz\]
========================
As a second example we demonstrate the applicability of the algorithm in §\[sec:algorithm\] by generating anisotropic Schwarzschild spheres. The line element for the interior Schwarzschild model[@DelgatyLake] is $$\begin{aligned}
\mbox{d}s^2 & = &
-\left(A-B\Delta\right)^2\mbox{d}t^2+\Delta^{-2}\mbox{d}r^2
+r^2\left(\mbox{d}\theta^2+\sin^2\theta\mbox{d}\phi^2\right)\label{eq:schwarzsLE}\end{aligned}$$ where $\Delta=\sqrt{1-r^2/R^2}$, $A$ and $B$ are constants. The corresponding isotropic functions for (\[eq:schwarzsLE\]) are given by $$\begin{aligned}
\left(\nu_0,\lambda_0,m_0,p_0 \right) & = &
\left(2\ln\left\{A-B\Delta\right\},\;-\ln\Delta^2,\frac{r^3}{2R^2},\;
-\frac{1}{R^2}\left[\frac{A-3B\Delta}{A-B\Delta}\right]\right)
\label{eq:schwarzsISOsols}\end{aligned}$$ The energy density function has the form $\mu_0=3/R^2$. We therefore have $$\begin{aligned}
\mu_0 & = & \mbox{constant}\label{eq:schwarzsMU0}\end{aligned}$$ for incompressible matter. This is a reasonable approximation in particular situations, as the interiors of dense neutron stars and superdense relativistic stars are of nearly uniform density[@MaharajLeach; @RhoadesRuffini]. Consequently the assumption (\[eq:schwarzsMU0\]) of uniform energy density is often used to build prototypes of realistic stars in the modelling process[@DevGleiser; @MaharajMaartens; @BowersLiang].
The integral $J_\alpha$ in (\[eq:alphaxy:1\]) takes the form $$\begin{aligned}
J_\alpha & = &\int\left\{ \left[ 2Br\left( 3-r^2\left(
\frac{2Br^2}{R^4\Delta^3\left(A-B\Delta\right)}
-\frac{2B^2r^2}{R^4\Delta^2\left(A-B\Delta\right)^2}
+\frac{2B}{R^2\Delta\left(A-B\Delta\right)}
\right) \right.\right.\right.\nonumber\\
& & \left. \left.+\frac{4Br^2}{R^2\Delta\left(A-B\Delta\right)}
\right) \right] \nonumber\\ & & \times \left[
R^2\Delta\left(A-B\Delta\right)
\left(1+\frac{2Br^2}{R^2\Delta\left(A-B\Delta\right)}\right)\left(2
+\frac{2Br^2}{R^2\Delta\left(A-B\Delta\right)}
\right) \right]^{-1} \label{eq:JalphaSchw}\\
& & \left.- \left(\frac{6}{r}
+\frac{2Br}{R^2\Delta\left(A-B\Delta\right)}\right)\left(1+\frac{2Br^2}{R^2
\Delta\left(A-B\Delta\right)}\right)\left(2+\frac{2Br^2}{R^2\Delta\left(A
-B\Delta\right)}\right)^{-1} \right\}\mbox{d}r\nonumber\end{aligned}$$ With the substitution $u=\Delta=\sqrt{1-r^2/R^2}$, (\[eq:JalphaSchw\]) becomes $$\begin{aligned}
J_\alpha & = & \int\left(
\frac{\left(B+3Au-4Bu^2\right)\left(2B+Au-3Bu^2\right)}{\left(1-u^2\right)\left(A
-Bu\right)\left(B+Au-2Bu^2\right)}\right.
\\
& &
-\frac{3Bu^2\left(A-Bu\right)}{\left(2B+Au-3Bu^2\right)\left(B+Au-2Bu^2\right)} \\
& & \left.
+\frac{2B^2\left(1-u^2\right)\left(A-2Bu-2Au^2+3Bu^3\right)}{u\left(A
-Bu\right)\left(2B+Au-3Bu^2\right)\left(B+Au-2Bu^2\right)}
\right)\mbox{d}u\end{aligned}$$ The above integral can be simplified with the help of partial fractions. We obtain $$\begin{aligned}
J_\alpha & = & \int \left(\frac{1}{u}+\frac{3}{2\left(1-u\right)}-
\frac{3}{2\left(1+u\right)}+\frac{B}{A-Bu}
+\frac{A-6Bu}{2B+Au-3Bu^2}\right.\\ & &
\left.-\frac{2A-7Bu}{B+Au-2Bu^2}\right)\mbox{d}u \\
& = & \ln
u-\frac{3}{2}\ln\left\{1-u\right\}-\frac{3}{2}\ln\left\{1+u\right\}-
\ln\left\{A-Bu\right\}+\ln\left\{2B+Au-3Bu^2\right\}\\
& &
-\int\left(\frac{A/B}{\frac{A^2+8B^2}{16B^2}-\left(u-\frac{A}{4B}\right)^2}
-\frac{\left(7/2\right)\left(u-\frac{A}{4B}\right)}{\frac{A^2+8B^2}{16B^2}-
\left(u-\frac{A}{4B}\right)^2}\right)\mbox{d}u\\
& = & \ln\left\{
\frac{u\left(2B+Au-3Bu^2\right)}{\left(A-Bu\right)\left(1-
u^2\right)^{\frac{3}{2}}\left(B+Au-2Bu^2\right)^{\frac{7}{4}}}\right\}\\
& & +\frac{A}{4\sqrt{A^2+8B^2}}\ln\left\{\frac{1-\frac{4B}{\sqrt{A^2
+8B^2}}\left(u-\frac{A}{4B}
\right)}{1+\frac{4B}{\sqrt{A^2+8B^2}}\left(u-\frac{A}{4B}\right)}\right\}\end{aligned}$$ We have completed the integration and obtained $J_\alpha$ in terms of the intermediate variable $u$. In terms of the original variable $r$ (in $\Delta$) we can write $J_\alpha$ as $$\begin{aligned}
J_\alpha & = & \ln\left\{\frac{\Delta\left(2B+
A\Delta-3B\Delta^2\right)}{\left(A-B\Delta\right)\left(1-
\Delta^2\right)^{3/2}\left(B+A\Delta-2B\Delta^2\right)^{7/4}}\right\}\\
& & +\frac{A}{4\sqrt{A^2+8B^2}}\ln\left\{\frac{1-
\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}{1+\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}\right)}\right\}\end{aligned}$$
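Because the chain of substitutions leading to this closed form is lengthy, a quick numerical consistency check is reassuring. The Python sketch below is our own (the values $A=2$, $B=1/2$ and the sample points in $u$ are arbitrary): it differentiates the closed form in the intermediate variable $u$ by central differences and compares the result with the partial-fraction integrand quoted above; the two printed columns agree to numerical precision.
\begin{verbatim}
import numpy as np

A, B = 2.0, 0.5                      # sample constants for the check only
s = np.sqrt(A**2 + 8*B**2)

def J_closed(u):
    # closed form of J_alpha in the intermediate variable u
    first = np.log(u*(2*B + A*u - 3*B*u**2)
                   /((A - B*u)*(1 - u**2)**1.5*(B + A*u - 2*B*u**2)**1.75))
    w = (4*B/s)*(u - A/(4*B))
    return first + (A/(4*s))*np.log((1 - w)/(1 + w))

def integrand(u):
    # partial-fraction form of dJ_alpha/du
    return (1/u + 1.5/(1 - u) - 1.5/(1 + u) + B/(A - B*u)
            + (A - 6*B*u)/(2*B + A*u - 3*B*u**2)
            - (2*A - 7*B*u)/(B + A*u - 2*B*u**2))

h = 1e-6
for u in (0.3, 0.5, 0.7):
    dJ = (J_closed(u + h) - J_closed(u - h))/(2*h)   # central difference
    print(u, dJ, integrand(u))
\end{verbatim}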
Then the function $\alpha$ in (\[eq:alphaxy:1\]) becomes $$\begin{aligned}
\alpha & = &
\frac{kR^3\Delta\left(2B+A\Delta-3B\Delta^2\right)}{r^3\left(A
-B\Delta\right)\left(B+A\Delta-2B\Delta^2\right)^{7/4}}
\left(\frac{1-\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}{1+\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}\right)^{\frac{A}{4\sqrt{A^2+8B^2}}} \label{eq:alphaSchw2}\end{aligned}$$ and (\[eq:alphaxy:2\]) and (\[eq:alphaxy:3\]) respectively lead to $$\begin{aligned}
x & = & -\ln\left\{1+\frac{k R^3\left(2B+A\Delta-
3B\Delta^2\right)}{r\Delta\left(A-B\Delta\right)\left(B+
A\Delta-2B\Delta^2\right)^{7/4}}\left(1+
\frac{2Br^2}{R^2\Delta\left(A-B\Delta\right)}\right)^{-1} \right.\nonumber\\
& &\left.
\times\left(\frac{1-\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}{1+\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}\right)^{\frac{A}{4\sqrt{A^2+8B^2}}}
\right\}\label{eq:xySchw:1}\end{aligned}$$ $$\begin{aligned}
y & = & -\frac{k
R^3\Delta\left(2B+A\Delta-3B\Delta^2\right)}{2\left(A-
B\Delta\right)\left(B+A\Delta-2B\Delta^2\right)^{7/4}}\left(1+
\frac{2Br^2}{R^2\Delta\left(A-B\Delta\right)}\right)^{-1}\nonumber\\
& &
\times\left(\frac{1-\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}{1+\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}\right)^{\frac{A}{4\sqrt{A^2+8B^2}}} \label{eq:xySchw:2}\end{aligned}$$
Hence the new line element has the form $$\begin{aligned}
\mbox{d}s^2 & = &
-\left(A-B\Delta\right)^2\mbox{d}t^2\nonumber\\
& & + \frac{1}{\Delta^2}\left[1+\frac{k
R^3\left(2B
+A\Delta-3B\Delta^2\right)}{r\Delta\left(A-B\Delta\right)\left(B+
A\Delta-2B\Delta^2\right)^{7/4}}\left(1+\frac{2Br^2}{R^2\Delta\left(A-
B\Delta\right)}\right)^{-1} \right.\nonumber\\
& &\left.
\times\left(\frac{1-\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}{1+\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}\right)^{\frac{A}{4\sqrt{A^2+8B^2}}} \right]^{-1}
\mbox{d}r^2
+r^2\left(\mbox{d}\theta^2+\sin^2\theta\mbox{d}\phi^2\right)\label{eq:schwarzsLE2}\end{aligned}$$ and the matter variables have the form $$\begin{aligned}
m & = & \frac{r^3}{2R^2} -\frac{k
R^3\Delta\left(2B+
A\Delta-3B\Delta^2\right)}{2\left(A-B\Delta\right)\left(B+
A\Delta-2B\Delta^2\right)^{7/4}}\left(1+
\frac{2Br^2}{R^2\Delta\left(A-B\Delta\right)}\right)^{-1}\nonumber\\
& &
\times\left(\frac{1-\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}{1+\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}\right)^{\frac{A}{4\sqrt{A^2+8B^2}}}\end{aligned}$$ $$\begin{aligned}
p_r & = &
-\frac{A-3B\Delta}{R^2\left(A-B\Delta\right)}+ \frac{k
R^3\Delta\left(2B+A\Delta-3B\Delta^2\right)}{r^3\left(A
-B\Delta\right)\left(B+A\Delta-2B\Delta^2\right)^{7/4}}\nonumber\\
& & \times
\left(\frac{1-\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}{1+\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}\right)^{\frac{A}{4\sqrt{A^2+8B^2}}}\end{aligned}$$ $$\begin{aligned}
p_\perp & = &
-\frac{A-3B\Delta}{R^2\left(A-B\Delta\right)}- \frac{k
R^3\Delta\left(2B+A\Delta- 3B\Delta^2\right)}{2r^3\left(A-
B\Delta\right)\left(B+A\Delta-2B\Delta^2\right)^{7/4}}\nonumber\\
& &
\times\left(\frac{1-\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}{1+\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}\right)^{\frac{A}{4\sqrt{A^2+8B^2}}} \label{last}\end{aligned}$$ The isotropic Schwarzschild sphere model (\[eq:schwarzsLE\]) generates the anisotropic Schwarzschild sphere model (\[eq:schwarzsLE2\])-(\[last\]). With the parameter value $k=0$ we regain the original interior Schwarzschild sphere.
The degree of anisotropy has the form $$\begin{aligned}
S & = & \frac{\sqrt{3}k
R^3\Delta\left(2B+A\Delta-3B\Delta^2\right)}{2r^3\left(A-
B\Delta\right)\left(B+A\Delta-
2B\Delta^2\right)^{7/4}}\left(\frac{1-
\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}{1+\frac{4B}{\sqrt{A^2+8B^2}}\left(\Delta-\frac{A}{4B}
\right)}\right)^{\frac{A}{4\sqrt{A^2+8B^2}}}\label{eq:schwarzSB}\end{aligned}$$ Mathematica[@wolfram] was once again used to plot the anisotropy factor (\[eq:schwarzSB\]). The resulting plot is shown in Figure \[fig:SchwrzIsoBup\] for particular chosen values of the parameters $A$, $B$, $k$ and $R$. Other choices of these parameters may produce a different behaviour for $S$. The plot of $S$ against $r$ is over the interval $0<r\leq 1$. The fact that the anisotropy factor is in closed form, together with the profile shown in Figure \[fig:SchwrzIsoBup\], shows that a physical analysis of this model is possible; this will be pursued at a later stage.
Acknowledgements {#acknowledgements .unnumbered}
================
SDM and MC thank the National Research Foundation of South Africa for financial support. MC is grateful to the University of KwaZulu-Natal for a scholarship.
[0]{}
M S R Delgaty and K Lake, [*Comput. Phys. Commun.*]{} [**115**]{}, 395 (1998)
M R Finch and J F E Skea, Preprint available on the web:\
http://edradour.symbcomp.uerj.br/pubs.html (1998)
H Stephani, D Kramer, M A H MacCullum, C Hoenslaers and E Herlt, [*Exact solutions of Einstein’s field equations*]{} (Cambridge University Press, Cambridge, 2003)
K Dev and M Gleiser, [*Gen. Rel. Grav.*]{} [ **34**]{}, 1793 (2002)
K Dev and M Gleiser, [*Gen. Rel. Grav.*]{} [**35**]{}, 1435 (2003)
L Herrera, J Martin and J Ospino, [*J. Math. Phys.*]{} [**43**]{}, 4889 (2002)
L Herrera, A D Prisco, J Martin, J Ospino, N O Santos and O Troconis, [*Phys. Rev. D*]{} [**69**]{}, 084026 (2004)
B V Ivanov, [*Phys. Rev. D*]{} [**65**]{}, 10411 (2002)
M K Mak and T Harko, [*Chin. J. Astron. Astrophys.*]{} [**2**]{}, 248 (2002)
M Mak and T Harko, [*Proc. Roy. Soc. Lond. A*]{} [**459**]{}, 393 (2003)
R Sharma and S Mukherjee, [*Mod. Phys. Lett. A*]{} [**17**]{}, 2535 (2002)
S Rahman and M Visser, [*Class. Quantum Grav.*]{} [**19**]{}, 935 (2002)
K Lake [*Phys. Rev. D*]{} [**67**]{}, 104015 (2003)
D Martin and M Visser, [*Phys. Rev. D*]{} [**69**]{}, 104028 (2004)
P Boonserm, M Visser and S Weinfurtner, ArXiv:gr-qc/0503007 (2005)
S D Maharaj and M Chaisi, [*Math. Meth. Appl. Sci.*]{} [**29**]{} 67 (2006)
S D Maharaj and R Maartens, [*Gen. Rel. Grav.*]{} [**21**]{}, 899 (1989)
M K Gokhroo and A L Mehra, [*Gen. Rel. Grav.*]{} [**26**]{}, 75 (1994)
M Chaisi and S D Maharaj, [*Gen. Rel. Grav.*]{} [**37**]{} 1177 (2005)
M Chaisi and S D Maharaj, [*Pramana - J. Phys.*]{} Submitted (2006)
W C Saslaw, S D Maharaj, and N Dadhich, [*Astrophys. J.*]{} [**471**]{}, 571 (1996)
W C Saslaw, [*Gravitational physics of stellar and galactic systems*]{} (Cambridge University Press, Cambridge, 2003)
S Wolfram, [*Mathematica*]{} (Wolfram, Redwood City, 2003)
S D Maharaj and P G L Leach, [*J. Math. Phys.*]{} [**37**]{}, 430 (1996)
C E Rhoades and R Ruffini, [*Phys. Rev. Lett.*]{} [**32**]{}, 324 (1974)
R L Bowers and E P T Liang, [*Astrophys. J.*]{} [**188**]{} 657 (1974)
[^1]: Permanent address: Department of Mathematics & Computer Science, National University of Lesotho, Roma 180, Lesotho; eMail: `[email protected]`
[^2]: Author for correspondence; email: `[email protected]`; fax: +2731 260 2632
|
---
abstract: 'The most interesting current open question in the theory of GRB afterglow is the propagation of jetted afterglows during the sideway expansion phase. Recent numerical simulations show hydrodynamic behavior that differs from the one suggested by simple analytic models. Still, somewhat surprisingly, the calculated light curves show a ‘jet break’ at about the expected time. These results suggest that the expected rate of orphan optical afterglows should be smaller than previously estimated.'
author:
- 'T. Piran'
- 'J. Granot'
title: Theory of GRB Afterglow
---
Introduction {#intro}
============
Our understanding of GRBs has been revolutionized by the BeppoSAX discovery of GRB afterglow. While GRBs last seconds or minutes, the afterglow lasts days, weeks, months or even years. This makes afterglow observations much richer. These observations provide us with multi-wavelength and multi-timescale data. At the same time the afterglow, which is a blast wave propagating into the surrounding matter, is a much simpler phenomenon than the GRB, and it is possible to construct a simple theory that can be compared directly with the observations.
In this short review we describe the theory of GRB afterglow. We begin with the simplest idealized model and continue with various levels of complications. The final level is full numerical simulations. We present preliminary results of such simulations and compare them with analytic models. At present there is no simple analytic explanation for the features seen in the numerical results.
Spherical Hydrodynamics
=======================
The theory of relativistic blast waves has been worked out in a classical paper by Blandford & McKee (BM) already in 1976 [@BM76]. The BM model is a self-similar spherical solution describing an adiabatic ultra relativistic blast wave in the limit $\Gamma \gg 1$. The basic solution is a blast wave propagating into a constant density medium. However, Blandford and McKee also describe in the same paper a generalization for varying ambient mass density, $\rho =A R^{-k}$, $R$ being the distance from the center. The latter case would be particularly relevant for $k=2$, as expected in the case of wind from a progenitor, prior to the GRB explosion.
The BM solution describes a narrow shell of width $\sim
R/\Gamma^2$, in which the shocked material is concentrated, where $\Gamma$ is the typical Lorentz factor. The conditions in this shell can be approximated if we assume that the shell is homogeneous. Then the adiabatic energy conservation yields: $$E = {\Omega\over 3-k} A R^{3-k} \Gamma^2 c^2 \ , \label{ad}$$ where $E$ is the energy of the blast wave and $\Omega$ is the solid angle of the afterglow. For a full sphere $\Omega= 4\pi$, but it can be smaller if the expansion is conical with an opening angle $\theta$: $\Omega = 2 \pi \theta^2$ (assuming a double sided jet).
A natural length scale, $l=\left[(3-k)E/\Omega A c^2\right]^{1/(3-k)}$, appears in equation \[ad\]. For a spherical blast wave $\Omega$ does not change with time, and when the blast wave reaches $R=l$ it collects ambient rest mass that equals its initial energy, the Lorentz factor $\Gamma$ drops to 1 and the blast wave becomes Newtonian. The BM solution is self-similar and assumes $\Gamma
\gg 1$. Obviously, it breaks down when $R\sim l$. We therefore expect that a Relativistic-Newtonian transition should take place around $t_{\rm
NR}=l/c \approx 1.2 \, {\rm yr} (E_{\rm iso,52}/n_1)^{1/3}$, where the scaling is for $k=0$, $E_{\rm iso,52}$ is the isotropic equivalent energy, $E_{\rm iso}=4\pi E/\Omega$, in units of $10^{52} {\rm
ergs}$ and $n_1$ is the external density in ${\rm cm}^{-3}$. After this transition the solution will turn into the Newtonian Sedov-Taylor solution. Clearly this produces an achromatic break in the light curve.
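The numerical coefficient in the estimate of $t_{\rm NR}$ is easy to reproduce; the short Python sketch below (our own; it assumes $k=0$, $\Omega=4\pi$ and an ambient mass density $A=n m_p$) evaluates $l/c$ directly and returns $\approx 1.2\,$yr for the fiducial parameters.
\begin{verbatim}
import numpy as np

M_P = 1.6726e-24     # proton mass [g]
C   = 2.9979e10      # speed of light [cm/s]
YR  = 3.156e7        # one year [s]

def t_NR_years(E_iso=1e52, n=1.0):
    """Relativistic-to-Newtonian transition time l/c for k = 0.
    E_iso in erg, n in cm^-3; ambient mass density A = n*m_p, Omega = 4*pi."""
    l = (3.0*E_iso/(4.0*np.pi*n*M_P*C**2))**(1.0/3.0)   # the length scale l
    return l/C/YR

print(t_NR_years())                    # ~1.2 yr for E_iso = 1e52 erg, n = 1 cm^-3
print(t_NR_years(E_iso=1e53, n=0.1))   # scales as (E_iso/n)^(1/3)
\end{verbatim}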
The adiabatic approximation is valid for most of the duration of the fireball. However, during the first hour or so (or even for the first day, for $k = 2$), the system could be radiative (provided that $\epsilon_e \approx 1$). During a radiative phase the evolution can be approximated as: $$E = {\Omega\over 3-k} A R^{3-k} \Gamma \Gamma_0 c^2 \ , \label{rad}$$ where $\Gamma_0$ is the initial Lorentz factor. Cohen, Piran & Sari [@CPS98] derived an analytic self-similar solution describing this phase. Cohen & Piran [@CP99] describe a solution for the case when energy is continuously added to the blast wave by the central engine, even during the afterglow phase. A self-similar solution arises if the additional energy deposition behaves like a power law. This would arise naturally in some models, e.g. in the pulsar like model [@Usov94].
Spherical Afterglow Models
==========================
A good model for the observed emission from spherical blast waves can be obtained by adding synchrotron radiation to these hydrodynamic models. Sari, Piran & Narayan [@SPN98] used the simple adiabatic scaling (\[ad\]) together with a synchrotron radiation model and the relation between the observer time $t$ and $R$: $$t =R / C_1 c \Gamma^{2} \ , \label{tobs}$$ where $C_1$ is a constant that may vary from $2$ to $16$ [@Sari97].
Assuming a power-law energy distribution of the shocked relativistic electrons: $N(E_e)\propto E_e^{-p}$, and that the electrons and the magnetic field energy densities are $\epsilon_e$ and $\epsilon_B$ times the total energy density, Sari, Piran & Narayan [@SPN98] estimate the observed emission as a series of power law segments (PLSs), where $$F_\nu \propto t^{-\alpha} \nu^{-\beta} \ ,$$ that are separated by break frequencies, across which the exponents of these power laws change: the cooling frequency, $\nu_c$, the typical synchrotron frequency $\nu_m$ and the self-absorption frequency $\nu_{sa}$. The analytic calculations were done for a homogeneous shell and for emission from a single representative point. At a specific frequency one will observe a break in the light curve when one of these break frequencies passes the observed frequency. An intriguing feature of this model is that for a given PLS, say for emission above the cooling frequency, there is a unique relation between $\alpha$, $\beta$ and $p$. The power law index $p$ is expected to be a universal quantity as it depends on the, presumably common, acceleration processes and it is expected to be between 2 and 2.5 [@SP97]. The consistency of the observed $\alpha$ and $\beta$ with a common value of $p$ would thus provide a simple check of the theory.
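To illustrate the point, the sketch below (our own; it quotes the standard slow-cooling indices for an adiabatic blast wave in a constant-density medium from the afterglow literature of [@SPN98], rather than deriving them here) lists $(\alpha,\beta)$ for two of the PLSs and checks the resulting $\alpha$--$\beta$ relation above the cooling frequency.
\begin{verbatim}
def indices(p, segment):
    """(alpha, beta) with F_nu ~ t**(-alpha) * nu**(-beta) for an adiabatic
    blast wave in a constant-density medium, slow cooling (standard values
    assumed from the literature, not derived in the text)."""
    if segment == "nu_m_to_nu_c":          # nu_m < nu < nu_c
        return 3.0*(p - 1.0)/4.0, (p - 1.0)/2.0
    if segment == "above_nu_c":            # nu > nu_c
        return (3.0*p - 2.0)/4.0, p/2.0
    raise ValueError(segment)

for p in (2.0, 2.5):
    a, b = indices(p, "above_nu_c")
    # above the cooling frequency alpha = (3*beta - 1)/2, independent of p
    print(p, a, b, abs(a - (3*b - 1)/2) < 1e-12)
\end{verbatim}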
The simple solution, which is based on a homogeneous shell approximation, can be modified by using the full BM solution and integrating over the entire volume of shocked fluid [@GPS99]. Such an integration can be done only numerically. It yields a smoother spectrum and light curve near the break frequencies, but the asymptotic slopes, away from the break frequencies and the transition times, remain the same as in the simpler theory.
Chevalier & Lee [@CheLi99] estimated the emission from a blast wave propagating into a wind profile $n(R) \propto
R^{-2}$. They use equation (\[ad\]) and calculate the synchrotron emission from a single representative point. This leads to different temporal scalings $\alpha$ of the PLSs, while the spectral indices $\beta$ remain the same, since they are independent of the hydrodynamic solution. This results in different relations between $\alpha$, $\beta$ and $p$, providing in principle a way to distinguish between different neighborhoods of GRBs and between different progenitor models.
Another modification to the “standard” model arises from a variation of the emission process. Sari & Esin [@SariEsin01] considered the influence of Inverse Compton on the observed spectrum. They find that in some cases the additional cooling channel might have a significant effect on the observed spectrum and light curves.
Jets
====
The afterglow theory becomes much more complicated if the relativistic ejecta is not spherical. To model jetted afterglows we consider relativistic matter ejected into a cone of opening angle $\theta$. Initially, as long as $\Gamma \gg \theta^{-1}$ [@Pi94] the motion would be almost conical. There isn’t enough time, in the blast wave’s rest frame, for the matter to be affected by the non spherical geometry, and the blast wave will behave as if it was a part of a sphere. When $\Gamma = C_2
\theta^{-1}$, namely at[^1]: $$t_{\rm jet} = {1\over C_1}\left( l\over c \right )
\left({\theta\over C_2}\right)^{2(4-k)\over (3-k)} = {1 \, {\rm
day} \over C_1 C_2^{8/3}} \left({E_{\rm iso,52}\over
n_1}\right)^{1/3} \left({\theta\over 0.1}\right)^{8/3} \ ,
\label{tjet}$$ rapid sideway propagation begins. The last equality holds, of course for $k=0$.
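In practice equation (\[tjet\]) is used in the opposite direction, to infer the jet opening angle from an observed break time. The sketch below (our own; it assumes the $k=0$ form, with the uncertain order-unity constants $C_1$ and $C_2$ left as arguments and set to one only as placeholders) performs this inversion.
\begin{verbatim}
def theta_jet(t_jet_days, E_iso_52=1.0, n_1=1.0, C1=1.0, C2=1.0):
    """Jet opening angle [rad] from an observed break time, inverting the
    k = 0 form of equation (tjet); C1 and C2 are the uncertain constants
    discussed in the accompanying footnote, set to 1 only as placeholders."""
    return 0.1*(C1*C2**(8.0/3.0)*t_jet_days*(n_1/E_iso_52)**(1.0/3.0))**(3.0/8.0)

print(theta_jet(1.0))      # ~0.1 rad for a one-day break and fiducial parameters
print(theta_jet(10.0))     # theta grows as t_jet**(3/8)
\end{verbatim}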
The sideways expansion continues with $\theta \sim \Gamma^{-1}$. Plugging this relation into equation (\[ad\]) we find that $R
\approx {\rm const}.$ This is obviously impossible. A more detailed analysis [@Rhoads99; @Pi00; @KP00] reveals that according to the simple one dimensional analytic models $\Gamma$ decreases exponentially with $R$ on a very short length scale.[^2]
The sideways expansion causes a change in the hydrodynamic behavior and hence a break in the light curve. Additionally, when $\Gamma \sim \theta^{-1}$ relativistic beaming of light will become less effective. This would cause an extra spreading of the emission (that was previously focused into a narrow angle $\theta$ and is now focused into a larger cone of opening angle $\Gamma^{-1}$). If the sideways expansion is at the speed of light then both transitions would take place at the same time [@SPH99]. If the sideways expansion is at the sound speed then the beaming transition would take place first and only later the hydrodynamic transition would occur [@PM99]. This would cause a slower and wider transition with two distinct breaks, the first and steeper break when the edge of the jet becomes visible and later a shallower break when sideways expansion becomes important.
The analytic or semi-analytic calculations of synchrotron radiation from jetted afterglows [@Rhoads99; @SPH99; @PM99; @MSB00; @KP00] have led to different estimates of the jet break time $t_{\rm jet}$ and of the duration of the transition. Rhoads [@Rhoads99] calculated the light curves assuming emission from one representative point, and obtained a smooth ’jet break’, extending $\sim 3-4$ decades in time, after which $F_{\nu>\nu_m}\propto t^{-p}$. Sari Piran & Halpern [@SPH99] assume that the sideway expansion is at the speed of light, and not at the speed of sound ($c/\sqrt{3}$) as others assume, and find a smaller value for $t_{\rm jet}$. Panaitescu and Mészáros [@PM99] included the effects of geometrical curvature and finite width of the emitting shell, along with electron cooling, and obtained a relatively sharp break, extending $\sim 1-2$ decades in time, in the optical light curve. Moderski, Sikora and Bulik [@MSB00] used a slightly different dynamical model, and a different formalism for the evolution of the electron distribution, and obtained that the change in the temporal index $\alpha$ ($F_{\nu}\propto
t^{-\alpha}$) across the break is smaller than in analytic estimates ($\alpha=2$ after the break for $\nu>\nu_m$, $p=2.4$), while the break extends over two decades in time. Kumar and Panaitescu [@KP00] find that for a homogeneous (or stellar wind) environment there is a steepening of $\Delta\alpha\sim 0.7$ ($0.4$) when the edge of the jet becomes visible, while the steepening due to sideways expansion extends over 2 (4) decades in time. They conclude that a jet running into a stellar wind will not leave a prominent detectable signature in the light curve.
![A relativistic jet at the last time step of the simulation [@Granot01]. ([**left**]{}) A 3D view of the jet. The outer surface represents the shock front while the two inner faces show the proper number density ([*lower face*]{}) and proper emissivity ([*upper face*]{}) in a logarithmic color scale. ([**right**]{}) A 2D ’slice’ along the jet axis, showing the velocity field on top of a linear color-map of the lab frame density.[]{data-label="3Djet"}](figure1a.eps "fig:"){width="4.97cm"} ![A relativistic jet at the last time step of the simulation [@Granot01]. ([**left**]{}) A 3D view of the jet. The outer surface represents the shock front while the two inner faces show the proper number density ([*lower face*]{}) and proper emissivity ([*upper face*]{}) in a logarithmic color scale. ([**right**]{}) A 2D ’slice’ along the jet axis, showing the velocity field on top of a linear color-map of the lab frame density.[]{data-label="3Djet"}](R3v500.eps "fig:"){width="7.0cm"}
The different analytic or semi-analytic models have different predictions for the sharpness of the ’jet break’, the change in the temporal decay index $\alpha$ across the break and its asymptotic value after the break, or even the very existence of a ’jet break’ [@HDL00]. All these models rely on some common basic assumptions, which have a significant effect on the dynamics of the jet: (i) the shocked matter is homogeneous, (ii) the shock front is spherical (within a finite opening angle) even at $t>t_{\rm jet}$, and (iii) the velocity vector is almost radial even after the jet break.
However, recent 2D hydrodynamic simulations [@Granot01] show that these assumptions are not a good approximation of a realistic jet. Figure \[3Djet\] shows the jet at the last time step of the simulation. The matter at the sides of the jet is propagating sideways (rather than in the radial direction) and is slower and much less luminous compared to the front of the jet. The shock front is egg-shaped, and quite far from being spherical. Figure \[averages\] shows the radius $R$, Lorentz factor $\Gamma$, and opening angle $\theta$ of the jet, as a function of the lab frame time. The rate of increase of $\theta$ with $R\approx ct_{\rm lab}$, is much lower than the exponential behavior predicted by simple models [@Rhoads99]. The value of $\theta$ averaged over the emissivity is practically constant, and most of the radiation is emitted within the initial opening angle of the jet. The radius $R$ weighed over the emissivity is very close to the maximal value of $R$ within the jet, indicating that most of the emission originates at the front of the jet[^3], where the radius is largest, while $R$ averaged over the density is significantly lower, indicating that a large fraction of the shocked matter resides at the sides of the jet, where the radius is smaller. The Lorentz factor $\Gamma$ averaged over the emissivity is close to its maximal value, (again since most of the emission occurs near the jet axis where $\Gamma$ is the largest) while $\Gamma$ averaged over the density is significantly lower, since the matter at the sides of the jet has a much lower $\Gamma$ than at the front of the jet. The large differences between the assumptions of simple dynamical models of a jet and the results of 2D simulations, suggest that great care should be taken when using these models for predicting the light curves of jetted afterglows. Since the light curves depend strongly on the hydrodynamics of the jet, it is very important to use a realistic hydrodynamic model when calculating the light curves.
![The radius $R$ ([*left frame*]{}), Lorentz factor $\Gamma-1$ ([*middle frame*]{}) and opening angle $\theta$ of the jet ([ *right frame*]{}), as a function of the lab frame time in days [@Granot01].[]{data-label="averages"}](R.eps "fig:"){width="3.97cm"} ![The radius $R$ ([*left frame*]{}), Lorentz factor $\Gamma-1$ ([*middle frame*]{}) and opening angle $\theta$ of the jet ([ *right frame*]{}), as a function of the lab frame time in days [@Granot01].[]{data-label="averages"}](gamma-1.eps "fig:"){width="4.05cm"} ![The radius $R$ ([*left frame*]{}), Lorentz factor $\Gamma-1$ ([*middle frame*]{}) and opening angle $\theta$ of the jet ([ *right frame*]{}), as a function of the lab frame time in days [@Granot01].[]{data-label="averages"}](theta.eps "fig:"){width="3.97cm"}
Granot et al. [@Granot01] used 2D numerical simulations of a jet running into a constant density medium to calculate the resulting light curves, taking into account the emission from the volume of the shocked fluid with the appropriate time delay in the arrival of photons to different observers. They obtained an achromatic jet break for $\nu>\nu_m(t_{\rm jet})$ (which typically includes the optical and near IR), while at lower frequencies (which typically include the radio) there is a more moderate and gradual increase in the temporal index $\alpha$ at $t_{\rm jet}$, and a much more prominent steepening in the light curve at a later time when $\nu_m$ sweeps past the observed frequency. The jet break appears sharper and occurs at a slightly earlier time for an observer along the jet axis, compared to an observer off the jet axis (but within the initial opening angle of the jet). The value of $\alpha$ after the jet break, for $\nu>\nu_m$, is found to be slightly larger than $p$ ($\alpha=2.85$ for $p=2.5$).
Somewhat surprisingly we find that in spite of the different hydrodynamic behavior the numerical simulations show a jet break at roughly the same time as the analytic estimates. This encourages us to trust the current estimates of the jet opening angles. However, we should search for an intuitive explanation for the nature of the hydrodynamic behavior and for a simple analytic model that would predict it.
[8.]{}
R.D. Blandford, C.F. McKee: Phys. of Fluids, [**19**]{}, 1130 (1976).
E. Cohen, T. Piran, R. Sari: Ap. J., **509**, 717 (1998).
E. Cohen, T. Piran: Ap. J., **518**, 346 (1999).
V.V. Usov: MNRAS, [**267**]{},1035 (1994)
R. Sari, T. Piran, R. Narayan: Ap. J. Lett., **497**, L17 (1998).
R. Sari: Ap. J. Lett., **489**, L37 (1997).
R. Sari, T. Piran: MNRAS, **287**, 110 (1997).
J. Granot, T. Piran, R. Sari: Ap. J., **513**, 679 (1999).
R.A. Chevalier, Z.-Y. Li: Ap. J. Lett., **520**, L29 (1999).
R. Sari, A.A. Esin: Ap. J., **548**, 787 (2001).
T. Piran: in AIP Conference Proceedings [**307**]{}, [*Gamma-Ray Bursts, Second Workshop, Huntsville, Alabama, 1993*]{}, Fishman, G.J., Brainerd, J.J., & Hurley, K., Eds., (New York: AIP), p. 495. (1994)
J.E. Rhoads: Ap. J., **525**, 737 (1999)
T. Piran: Phys. Rep., **333**, 529 (2000)
P. Kumar & A. Panaitescu: Ap. J., **541**, L9 (2000)
R. Sari, T. Piran, T. Halpern: Ap. J., **519**, L17 (1999).
A. Panaitescu & P. Mészáros: Ap. J., **526**, 707 (1999)
R. Moderski, M. Sikora, T Bulik: Ap. J., **529**, 151 (2000)
Y. Huang, Z. Dai & T. Lu: A&A **355**, L43 (2000)
J. Granot, et al.: These proceedings (astro-ph/0103038) (2001)
[^1]: The exact values of the uncertain constants $C_2$ and $C_1$ are extremely important as they determine the jet opening angle (and hence the total energy of the GRB) from the observed breaks, interpreted as $t_{\rm jet}$, in the afterglow light curves.
[^2]: Note that the exponential behavior is obtained after converting equation \[ad\] to a differential equation and integrating over it. Different approximations used in deriving the differential equation lead to slightly different exponential behavior, see [@Pi00].
[^3]: This implies that the expected rate of orphan optical afterglows should be smaller than estimated assuming significant sideways expansion!
|
---
abstract: 'We study the interplay between the chiral and the deconfinement transitions, both at high temperature and high quark chemical potential, by a non local Nambu-Jona Lasinio model with the Polyakov loop in the mean field approximation and requiring neutrality of the ground state. We consider three forms of the effective potential of the Polyakov loop: two of them with a fixed deconfinement scale, cases I and II, and the third one with a $\mu$ dependent scale, case III. In the cases I and II, at high chemical potential $\mu$ and low temperature $T$ the main contribution to the free energy is due to the $Z(3)$-neutral three-quark states, mimicking the quarkyonic phase of the large $N_c$ phase diagram. On the other hand in the case III the quarkyonic window is shrunk to a small region. Finally we comment on the relations of these results to lattice studies and on possible common prospects. We also briefly comment on the coexistence of quarkyonic and color superconductive phases.'
author:
- 'H. Abuki'
- 'R. Anglani'
- 'R. Gatto'
- 'G. Nardulli'
- 'M. Ruggieri'
title: 'Chiral crossover, deconfinement and quarkyonic matter within a Nambu-Jona Lasinio model with the Polyakov loop'
---
Introduction
============
Color confinement and chiral symmetry breaking are among the most intriguing topics in modern theoretical physics. Quantum Chromodynamics (QCD) is believed to be the ultimate theory describing strong interactions. Nowadays it is accepted that the main ground-state properties of QCD can be described in terms of the non-perturbative spontaneous breaking and/or restoration of some of the global symmetries of the QCD Lagrangian.
Unfortunately, solving QCD in its non-perturbative regime is a hard task. At zero and small quark chemical potential $\mu$, lattice calculations are a good tool to derive the equation of state of QCD matter, the transition temperatures and so on from first principles; see for example [@Aoki:2006br; @Schmidt:2006us; @Philipsen:2005mj; @Heller:2006ub] and references therein. Several approximation methods are available to overcome the sign problem of the fermion determinant with three colors at finite $\mu$ (see Refs. [@Ejiri:2004yw; @Splittorff:2006vj; @Splittorff:2006fu] for reviews on the sign problem): small-$\mu$ expansion [@Allton:2003vx; @Allton:2002zi; @Allton:2005gk], reweighting techniques [@Fodor:2001pe; @Fodor:2002km], density of states methods [@Fodor:2007vv] and analytic continuation to imaginary chemical potential [@Laermann:2003cv; @de; @Forcrand:2003hx; @D'Elia:2007ke; @D'Elia:2004at; @D'Elia:2002gd].
Besides lattice calculations, there exist effective descriptions of QCD. Among them, Nambu-Jona-Lasinio (NJL in the following) models [@Nambu:1961tp] are very popular; see [@revNJL] for reviews. They are based on the observation that several properties of the QCD ground state are related to the spontaneous breaking of some of the global symmetries of the QCD Lagrangian. Therefore one hopes that a model with the same pattern of global symmetry breaking as QCD can capture the essential physics of QCD itself.
In recent years it has been argued that the NJL model, which does not contain gluons, can be improved by adding to the Lagrangian a non-linear term that describes the dynamics of the traced Polyakov loop [@Polyakovetal], together with an interaction term between the Polyakov loop and the quarks. The resulting model is called the PNJL model, introduced in Refs. [@Meisinger:1995ih; @Fukushima:2003fw] and extensively studied in [@Ratti:2005jh; @Roessner:2006xn; @Ghosh:2007wy; @Kashiwa:2007hw; @Schaefer:2007pw; @Ratti:2007jf; @Sasaki:2006ww; @Megias:2006bn; @Zhang:2006gu; @Fukushima:2008wg; @Sakai:2008py; @Sakai:2008um; @Kashiwa:2008ga; @Ciminale:2007ei; @Fu:2007xc; @Ciminale:2007sr; @Hansen:2006ee; @Abuki:2008tx; @Abuki:2008ht; @Contrera:2007wu; @Blaschke:2007np]. In the PNJL model one assumes that a homogeneous euclidean temporal background gluon field couples to the quarks via the QCD covariant derivative. This coupling gives rise to the interplay between the chiral condensate and the Polyakov loop. Although it is very simple, the PNJL model has turned out to be a powerful tool that allows one to compute several quantities that can be computed on the lattice as well. The agreement with existing lattice data is satisfactory [@Ratti:2005jh].
One of the exciting characteristics of the PNJL model is the [*statistical confinement*]{} of quarks at low temperature [^1]. In a few words, this means that at small temperature and small chemical potential the contributions to the free energy, $\Omega$, of states with one and two quarks are suppressed, and the leading contribution to $\Omega$ arises from the thermal excitations of colorless three-quark states. This property is related to the small expectation value of the Polyakov loop found in self-consistent calculations within the PNJL model under the aforementioned conditions of temperature and chemical potential. It has recently been argued by Fukushima that the statistical confinement property of the PNJL model persists even at high chemical potential [@Fukushima:2008wg]. This result is in agreement with the phase diagram of QCD obtained in the large number of colors ($N_c$) approximation [@McLerran:2007qj; @Hidaka:2008yy]; see also Refs. [@Glozman:2007tv; @Glozman:2008kn] for recent related studies. Inspired by Ref. [@McLerran:2007qj], Fukushima has suggested interpreting the statistically confined phase of the PNJL model at high quark chemical potential as the quarkyonic state found in [@McLerran:2007qj].
In this work we investigate the ground state of the electrically neutral two-flavor PNJL model, focusing on its possible quarkyonic structure at high $\mu$ and low $T$. We use a non local four fermion interaction instead of the local one [@Nambu:1961tp]. The local NJL model is usually regularized by means of a sharp ultraviolet cutoff, which amounts to artificially cutting off quark momenta larger than the cutoff itself. Thus extensions of the model to temperatures and/or chemical potentials of the order of the cutoff are quite dubious. However, if one introduces a non local interaction, which corresponds to multiplying the NJL coupling by a momentum-dependent form factor $f(p)$, and requires that the form factor satisfy the asymptotic freedom property of QCD, $f(p\rightarrow\infty)=0$, then all of the momentum integrals are convergent and the model is consistent at any value of temperature and chemical potential. In this paper we use one specific form of the form factor. Although the choice of a different functional form for $f(p)$ can lead to different quantitative results (mainly a shift of the critical points), we believe that our picture should not be modified qualitatively. We consider the logarithmic form of the Polyakov loop effective potential ${\cal U}$ suggested by Ratti, Roessner and Weise in Ref. [@Roessner:2006xn]; moreover, we investigate the effects of a dependence of ${\cal U}$ on the quark chemical potential, as well as on the number of flavors, as suggested in Ref. [@Schaefer:2007pw]. We compare the phase diagrams obtained in the cases in which we do not consider (cases I and II) and do consider (case III) the $\mu$-dependence of ${\cal U}$. Cases I and II differ in the value of the deconfinement scale in the Polyakov loop effective potential.
We find that the phase diagrams in the two scenarios (I and II on one side, III on the other) differ even qualitatively. In particular, in cases I and II we confirm the results of Fukushima [@Fukushima:2008wg] and strengthen his interpretation of the high-chemical-potential/low-temperature state of the PNJL model as the quarkyonic matter of the large-$N_c$ phase diagram. In case III, on the other hand, the quarkyonic-like window found in cases I and II is shrunk to a small region of the $\mu-T$ plane, leaving ample room for the deconfined quark matter of the pure NJL model.
The plan of the paper is as follows. In Section II we sketch the formalism. In Section III we discuss our results. Finally in Section IV we draw our conclusions.
Thermodynamic potential with a non local four fermion interaction
=================================================================
The Lagrangian density of the two flavor PNJL model is given by [@Fukushima:2003fw; @Abuki:2008tx] $${\cal L}^\prime= \bar{e}(i\gamma_\mu\partial^\mu)e + \bar\psi\left(i\gamma_\mu D^\mu + \mu\gamma_0 -m\right)\psi +
{\cal L}_{4} - {\cal U}[\Phi,\bar\Phi,T]~. \label{eq:LagrP}$$ In the above equation $e$ denotes the electron field; $\psi$ is the quark spinor with Dirac, color and flavor indices (implicitly summed). $m$ corresponds to the bare quark mass matrix; we assume from the very beginning $m_u = m_d$. The covariant derivative is defined as usual as $D_\mu =
\partial_\mu -i A_\mu$. The gluon background field $A_\mu=\delta_{0\mu}A_0$ is supposed to be homogeneous and static, with $A_0 = g A_0^a T_a$ and $T_a$, $a=1,\dots,8$ being the $SU(3)$ color generators with the normalization condition $\text{Tr}[T_a T_b]=\delta_{ab}$. Finally $\mu$ is the mean quark chemical potential, related to the conserved baryon number.
In Eq. $\Phi$, $\bar\Phi$ correspond to the normalized traced Polyakov loop and its Hermitian conjugate, respectively, $\Phi=\text{Tr}W/N_c$, $\bar\Phi=\text{Tr}W^\dagger/N_c$, with $$W={\cal P}\exp\left(i\int_0^\beta A_4 d\tau\right)=\exp\left(i \beta A_4\right)~,~~~~~A_4=iA_0~,$$ and $\beta=1/T$. $\Phi$ is a color singlet but it has a $Z(3)$ charge [@Polyakovetal], where $Z(3)$ is the center of the color group $SU(3)$; thus if $\Phi\neq0$ in the ground state then the $Z(3)$ symmetry is spontaneously broken. The term ${\cal U}[\Phi,\bar\Phi,T]$ is the effective potential for the traced Polyakov loop; in the absence of dynamical quarks it is built to reproduce the pure glue lattice data of QCD, namely thermodynamical quantities (pressure, entropy and energy density) and the deconfinement temperature of heavy (non-dynamical) quarks, $T= 270$ MeV. Several forms of this potential have been suggested in the literature, see for example [@Fukushima:2003fw; @Ratti:2005jh; @Roessner:2006xn; @Ghosh:2007wy; @Fukushima:2008wg]. In this paper we adopt the following logarithmic form [@Roessner:2006xn], $${\cal U}[\Phi,\bar\Phi,T] = T^4\left[-\frac{b_2(T)}{2}\bar\Phi\Phi + b(T)\log\left[1-6\bar\Phi\Phi + 4(\bar\Phi^3 +
\Phi^3) -3(\bar\Phi\Phi)^2\right]\right]~,\label{eq:Poly}$$ with $$b_2(T) = a_0 + a_1 \left(\frac{\bar T_0}{T}\right) + a_2 \left(\frac{\bar T_0}{T}\right)^2~,~~~~~b(T) =
b_3\left(\frac{\bar T_0}{T}\right)^3~.\label{eq:lp}$$ Numerical values of the coefficients are as follows [@Roessner:2006xn]: $$a_0=3.51~,~~~a_1 = -2.47~,~~~a_2 = 15.2~,~~~b_3=-1.75~.$$ If dynamical quarks were not present then one should choose $\bar T_0 = 270$ MeV in order to reproduce the deconfinement transition at $T = 270$ MeV of the pure gauge theory [@Fukushima:2003fw; @Ratti:2005jh; @Ratti:2007jf]. In the presence of quarks $\bar T_0$ might acquire a dependence on the number of active flavors as well as on the quark chemical potential [@Ratti:2005jh; @Schaefer:2007pw]. Inspired by Refs. [@Fukushima:2003fw; @Ratti:2005jh; @Schaefer:2007pw], in this paper we consider three cases: $$\begin{aligned}
\bar{T}_0 &=& 208~\text{MeV}~,~~~\text{Case I}~,\\
\bar{T}_0 &=& 270~\text{MeV}~,~~~\text{Case II}~,\\
\bar{T}_0(\mu) &=& T_\tau e^{-1/\alpha_0 c(\mu)}~,~~~\text{Case III}~.\label{eq:T0m}\end{aligned}$$ Case II corresponds to the deconfinement temperature in the pure glue theory; the parameters in the cases I and III have been evaluated in Ref. [@Schaefer:2007pw] on the basis of hard dense and hard thermal loop approximations to QCD. In the equation corresponding to Case III we have set $$\alpha_0 = 0.304~,~~~T_\tau = 1770~\text{MeV}~,$$ and $$c(\mu) = \frac{11 N_c - 2 N_f}{6\pi} - \frac{16 N_f}{\pi}\frac{\mu^2}{T_\tau^2}~,$$
with $N_f = 2$ and $N_c = 3$. At $\mu=0$ we have $\bar{T}_0(\mu=0) = 208$ MeV, as in case I; for comparison, at $\mu=500$ MeV the deconfinement scale is given by $\bar{T}_0(\mu=500~\text{MeV}) = 19$ MeV.
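To make the $\mu$-dependence of the deconfinement scale in case III concrete, the following short Python sketch (our own illustration, not part of the original calculation) evaluates Eq. \[eq:T0m\] with the parameter values quoted above; it reproduces the $\approx 208$ MeV and $\approx 19$ MeV values mentioned in the text. The function names are arbitrary.

```python
import numpy as np

ALPHA0 = 0.304      # alpha_0
T_TAU = 1770.0      # T_tau in MeV
N_C, N_F = 3, 2     # number of colors and flavors

def c_of_mu(mu):
    """c(mu) controlling the mu-dependence of the deconfinement scale (mu in MeV)."""
    return (11 * N_C - 2 * N_F) / (6 * np.pi) - (16 * N_F / np.pi) * mu**2 / T_TAU**2

def T0_bar(mu):
    """Case III deconfinement scale T0_bar(mu) in MeV, Eq. (T0m)."""
    return T_TAU * np.exp(-1.0 / (ALPHA0 * c_of_mu(mu)))

print(T0_bar(0.0))    # ~208 MeV, coinciding with case I
print(T0_bar(500.0))  # ~19 MeV: the deconfinement scale drops steeply with mu
```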
In Eq. ${\cal L}_4$ represents the lagrangian density for the four fermion interaction. If we define $S_4 = \int d^4 x {\cal L}_4$ as the interaction action then in the local version of the NJL model one has $$S_{4} = G \int d^4 x~\left[(\bar\psi \psi)^2 + (\bar\psi i\gamma_5\bm\tau\psi)^2\right]~. \label{eq:c1}$$ In the non local version of the NJL model the contact term Eq. is replaced by [@Sasaki:2006ww; @Schmidt:1994di; @Bowler:1994ir; @Blaschke:2000gd; @GomezDumm:2005hy; @Aguilera:2006cj; @Grigorian:2006qe] $$S_{4} = G \int d^4 x~\left[(\bar q(x) q(x))^2 + (\bar q(x) i\gamma_5\bm\tau q(x))^2\right]~,\label{eq:1}$$ where the dressed quark field is defined as $$q(x)= \int d^4 y~F(x-y) \psi(y)~,\label{eq:dress}$$ and $F(r)$ is a form factor whose Fourier transform $f(p)$ satisfies the constraint $f(p)\rightarrow 0$ for $p\rightarrow\infty$, $p$ being the 3-momentum. In this paper we follow Ref. [@Sasaki:2006ww] and use the Lorentzian form factor, $$f(p)=\frac{1}{\sqrt{1+(p/\Lambda)^{2\alpha}}}~.\label{eq:f1}$$ In the above equation $\Lambda = 684.2$ MeV and $\alpha=10$. Moreover we use $m=4.46$ MeV and $G=2.33/\Lambda^2$ [@Sasaki:2006ww]. By these numerical values we reproduce the pion decay constant $f_\pi = 92.3$ MeV and the pion mass $m_\pi = 135$ MeV, as well as the chiral condensate $\langle\bar u u\rangle =
-(256.2~\text{MeV})^3$. Although the choice of a different form factor will lead to different critical temperatures and/or chemical potentials, it is quite reasonable that the qualitative picture that we draw in this work is insensitive to the specific form of $f(p)$.
As explained in the Introduction, we are interested in the ground state of the model specified by the Lagrangian in Eq. , at each value of the temperature $T$ and the chemical potential $\mu$, corresponding to a vanishing total electric charge. In order to build the neutral ground state we use the standard grand canonical ensemble formalism, adding to Eq. the term $\mu_Q N_Q$, $\mu_Q$ being the chemical potential (i.e. the Lagrange multiplier) for the total charge $N_Q$, and requiring stationarity of the thermodynamic potential with respect to variations of $\mu_Q$, which is equivalent to the requirement $\langle N_Q\rangle=0$ in the ground state. This amounts to writing the Lagrangian ${\cal L}$ in the grand canonical ensemble, ${\cal L} = {\cal L}^\prime + \mu_Q N_Q$, as [@Abuki:2008tx] $${\cal L}=\bar{e}(i\gamma_\mu\partial^\mu + \mu_e \gamma_0)e + \bar\psi\left(i\gamma_\mu D^\mu + \hat\mu\gamma_0
-m\right)\psi + G\left[\left(\bar\psi \psi\right)^2 + \left(\bar\psi i \gamma_5 \vec\tau \psi\right)^2\right] - {\cal
U}[\Phi,\bar\Phi,T]~, \label{eq:Lagr}$$ where $\mu_e = - \mu_Q$ and the quark chemical potential matrix $\hat\mu$ is defined in flavor-color space as $$\hat\mu=\left(\begin{array}{cc}
\mu-\frac{2}{3}\mu_e & 0 \\
0 & \mu + \frac{1}{3}\mu_e \\
\end{array}\right)\otimes\bm{1}_c~,\label{eq:chemPot}$$ where $\bm{1}_c$ denotes the identity matrix in color space. At $\mu_e\neq0$ a difference of chemical potential between up and down quarks, $\delta\mu=\mu_e /2$, arises.
In this paper we work in the mean field approximation. Since $\delta\mu\neq 0$, pion condensation might occur in the ground state [@Ebert:2005wr]. In order to study simultaneously chiral symmetry breaking and pion condensation we assume that in the ground state the expectation values, real and independent of $x$, of the following operators may develop [@Zhang:2006gu; @Abuki:2008tx; @Ebert:2005wr; @Ebert:2000pb; @Ebert:2008tp], $$\sigma = G\langle \bar q(x)q(x)\rangle~,~~~ \pi = G\langle \bar q(x)i\gamma_5\tau_1 q(x)\rangle~.\label{eq:condensates}$$ In the above equation a summation over flavor and color is understood. We have assumed that the pion condensate aligns along the $\tau_1$ direction in flavor space. This choice is not restrictive. As a matter of fact we should allow for independent condensation both in $\pi^+$ and in $\pi^-$ channels [@Zhang:2006gu]: $$\pi^\pm\equiv G\langle\bar\psi i \gamma_5 \tau_\pm \psi\rangle = \frac{\pi}{\sqrt{2}}e^{\pm i\theta}~,$$ with $\tau_\pm = (\tau_1\pm\tau_2)/\sqrt{2}$; but the thermodynamical potential does not depend on the phase $\theta$; therefore we can set $\theta=0$, which leaves us with $\pi^+ = \pi^- = \pi/\sqrt{2}$, and introduce only one condensate, specified in Eq. .
In what follows we consider the system at finite temperature $T$ in the volume $V$. This implies that the space-time integral is $\int d^4x = \int_{0}^\beta d\tau \int d^3\bm x$ with $\beta=1/T$. In the mean field approximation the PNJL action reads $$\begin{aligned}
S&=&\int d^4 x\left[\bar{e}(i\gamma_\mu\partial^\mu + \mu_e \gamma_0)e + \bar\psi\left(i\gamma_\mu D^\mu +
\hat\mu\gamma_0 \right)\psi\right] \nonumber\\
&&+2 \sigma\int d^4 x~\bar q(x) q(x) + 2\pi\int d^4 x~\bar q(x)i\gamma_5 \tau_1 q(x)\nonumber\\
&&
~~~- \beta V \frac{\sigma^2 + \pi^2}{G} - \beta V {\cal U}[\Phi,\bar\Phi,T]~, \label{eq:LagrMF}\end{aligned}$$ where $V$ is the quantization volume and $\beta=1/T$. In momentum space one has $$\begin{aligned}
S&=&\int\frac{d^4p}{(2\pi)^4} \left[\bar{e}(\gamma_\mu p^\mu + \mu_e \gamma_0)e + \bar\psi\left(\gamma_\mu p^\mu
-\gamma_\mu A^\mu - \hat\mu\gamma_0 \right)\psi\right] \nonumber\\
&&+\int\frac{d^4p}{(2\pi)^4} f(p)^2\left[2\sigma~\bar\psi(p) \psi(p) + 2\pi~\bar\psi(p)i\gamma_5 \tau_1\psi(p)
\right] \nonumber\\
&&~~~- \beta V \frac{\sigma^2 + \pi^2}{G} - \beta V {\cal U}[\Phi,\bar\Phi,T]~, \label{eq:LagrMFms}\end{aligned}$$ with $A_\mu = g A_\mu^a T_a$. We introduce the mean field momentum dependent constituent quark mass $M(p)$ and renormalized pion condensate $N(p)$: $$M(p) \equiv m-2\sigma f^2(p)~,~~~N \equiv -2\pi f^2(p)~.\label{eq:mass}$$
The thermodynamical potential $\Omega$ per unit volume in the mean field approximation can be obtained by integration over the fermion fields in the partition function of the model, see for example Ref. [@Ebert:2000pb], $$\begin{aligned}
\Omega &=& -\left(\frac{\mu_e^4}{12\pi^2} + \frac{\mu_e^2 T^2}{6} + \frac{7\pi^2 T^4}{180}\right) + {\cal
U}[\Phi,\bar\Phi,T] + \frac{\sigma^2 + \pi^2}{G} \nonumber\\
&&~~~- T\sum_n\int \frac{d^3{\bm p}}{(2\pi)^3}~\text{Tr}~\text{log}\frac{S^{-1}(i\omega_n,{\bm p})}{T}~,\end{aligned}$$ where the sum is over fermion Matsubara frequencies $\omega_n = \pi T(2n+1)$, and the trace is over Dirac, flavor and color indices. The inverse quark propagator is defined as $$\begin{aligned}
&& S^{-1}(i\omega_n,{\bm p})=\nonumber\\
&& \left(\begin{array}{cc}
(i\omega_n+\mu-\frac{2}{3}\mu_e+iA_4)\gamma_0 -{\bm\gamma}\cdot{\bm p} -M(p) & -i\gamma_5 N(p) \\
-i\gamma_5 N(p) & (i\omega_n+\mu+\frac{1}{3}\mu_e+iA_4)\gamma_0 -{\bm\gamma}\cdot{\bm p} -M(p)\\
\end{array}\right)\otimes{\bm 1}_c~.\nonumber\\
&&\label{eq:po}\end{aligned}$$ Performing the trace and the sum over Matsubara frequencies we have the effective potential for $\Phi$, $\sigma$ and $\pi$, namely $$\begin{aligned}
\Omega &=& -\left(\frac{\mu_e^4}{12\pi^2} + \frac{\mu_e^2 T^2}{6} + \frac{7\pi^2 T^4}{180}\right) + {\cal
U}[\Phi,\bar\Phi,T] + \frac{\sigma^2 + \pi^2}{G} -2N_c \int\!\frac{d^3\bm p}{(2\pi)^3}\left[E_+ + E_- -2p\right]\nonumber \\
&& -2 T\int\! \frac{d^3{\bm p}}{(2\pi)^3}~\text{log}\left[1+3 \Phi e^{-\beta(E_+ - \mu)} + 3\bar\Phi
e^{-2\beta(E_+ - \mu)} + e^{-3\beta(E_+ - \mu)} \right]~\nonumber\\
&&-2 T\int\! \frac{d^3{\bm p}}{(2\pi)^3}~\text{log}\left[1+3 \Phi e^{-\beta(E_- - \mu)} + 3\bar\Phi e^{-2\beta(E_-
- \mu)} + e^{-3\beta(E_- - \mu)} \right]~\nonumber\\
&&-2 T\int\! \frac{d^3{\bm p}}{(2\pi)^3}~\text{log}\left[1+3 \bar\Phi e^{-\beta(E_+ + \mu)} + 3 \Phi e^{-2\beta(E_+
+ \mu)} + e^{-3\beta(E_+ + \mu)} \right]~\nonumber\\
&&-2 T\int\! \frac{d^3{\bm p}}{(2\pi)^3}~\text{log}\left[1+3 \bar\Phi e^{-\beta(E_-
+ \mu)} + 3 \Phi e^{-2\beta(E_- + \mu)} + e^{-3\beta(E_- + \mu)} \right]~,\nonumber\\
\label{eq:O1}\end{aligned}$$ where $$E_\pm = \sqrt{(E_p \mp \mu_e/2)^2 + N^2}~,\label{eq:Epm}$$ and $E_p = \sqrt{p^2 + M^2(p)}$. In Eq. the integral of $2p$ is an irrelevant constant that we subtract in order to make the thermodynamical potential finite at each value of temperature and chemical potential. The ground state of the model is defined by the values of $\sigma$, $\pi$, $\Phi$, $\bar\Phi$ that minimize $\Omega$ and that have a vanishing total charge; the latter condition is equivalent to the requirement $$\frac{\partial\Omega}{\partial\mu_e}=0~.$$
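The structure of Eq. \[eq:O1\] is straightforward to evaluate numerically: for fixed mean-field inputs the thermal part of $\Omega$ reduces to four one-dimensional momentum integrals, one per flavor and per particle/antiparticle branch. The sketch below (our own illustration, not the authors' code) does this with `scipy`, using constant $M$ and $N$ as assumed inputs in place of the full momentum-dependent $M(p)$, $N(p)$.

```python
import numpy as np
from scipy.integrate import quad

def thermal_omega(T, mu, mu_e, Phi, Phibar, M=335.0, N=0.0):
    """Thermal part of Omega, Eq. (O1), in MeV^4.

    M and N are taken momentum independent here for simplicity; in the full
    model they carry the form-factor dependence M(p), N(p)."""
    beta = 1.0 / T

    def log_term(E, s):
        # s = -1: quark branch (E - mu); s = +1: antiquark branch (E + mu).
        # Phi and Phibar swap roles between the two branches, as in Eq. (O1).
        x = np.exp(-beta * (E + s * mu))
        a, b = (Phi, Phibar) if s < 0 else (Phibar, Phi)
        return np.log(1.0 + 3.0 * a * x + 3.0 * b * x**2 + x**3)

    def integrand(p):
        Ep = np.sqrt(p**2 + M**2)
        total = 0.0
        for Epm in (np.sqrt((Ep - mu_e / 2)**2 + N**2),   # E_+
                    np.sqrt((Ep + mu_e / 2)**2 + N**2)):  # E_-
            total += log_term(Epm, -1) + log_term(Epm, +1)
        return p**2 * total

    val, _ = quad(integrand, 0.0, 3000.0, limit=200)
    return -2.0 * T * val / (2.0 * np.pi**2)   # d^3p/(2 pi)^3 = p^2 dp / (2 pi^2)

# A small Phi suppresses the one- and two-quark terms inside each logarithm:
print(thermal_omega(T=20.0, mu=400.0, mu_e=0.0, Phi=0.01, Phibar=0.01))
```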
In this paper we use the convenient Polyakov gauge, $$\Phi = \frac{1}{3}\text{Tr}\left[e^{i \beta (\lambda_3 \phi_3 + i \lambda_8 \phi_8)}\right]~,$$ with $\phi_3$, $\phi_8$ real parameters. It has been widely discussed in Ref. [@Roessner:2006xn] that in the mean field approximation, and with the choice of the effective potential ${\cal U}$ given by Eq. , one has $\langle\Phi\rangle = \langle\bar\Phi\rangle$ for any value of $T$ and $\mu$; a solution with $\langle\Phi\rangle\neq\langle\bar\Phi\rangle$ at finite $\mu$ arises only from quantum fluctuations. Since in this paper we consider only the mean field approximation, we choose $\Phi = \bar\Phi$ in the calculations. This choice implies $\phi_8 = 0$, and thus we are left with only one parameter, $\phi_3\equiv\phi$.
Before closing this section we write the dispersion laws of the quasi-particles in the Polyakov gauge, defined as the poles of the quark propagator given by Eq. : $$\begin{aligned}
E_{ur} = \mp\mu \pm i\phi + E_+~,&&E_{dr} = \mp\mu \pm i\phi + E_-~,\label{eq:Dr}\\
E_{ug} = \mp\mu \mp i\phi + E_+~,&&E_{dg} = \mp\mu \mp i\phi + E_-~,\label{eq:Dg}\\
E_{ub} = \mp\mu + E_+~,&& E_{db} = \mp\mu + E_-~.\label{eq:Db}\end{aligned}$$ In the previous equations $u$, $d$ correspond to up and down quarks, $r$, $g$ and $b$ to the colors red, green and blue; the upper (lower) sign multiplying $\mu$ and $\phi$ correspond to quarks (antiquarks).
Susceptibilities in the PNJL model
==================================
In order to study the landscape of the phases of the PNJL model we introduce the susceptibility matrix. Susceptibilities are useful to identify phase transitions since they are proportional to the fluctuations of the order parameters around their mean field values, which are usually enhanced near a phase transition. We closely follow Ref. [@Sasaki:2006ww] for the formalism. The first step is the definition of the dimensionless curvature matrix $C$ of the free energy around its global minimum [@Fukushima:2003fw; @Sasaki:2006ww], $$C\equiv\left(%
\begin{array}{ccc}
C_{MM} & C_{M\Phi} & C_{M\bar\Phi} \\
C_{M\Phi} & C_{\Phi\Phi} & C_{\Phi\bar\Phi} \\
C_{M\bar\Phi} & C_{\Phi\bar\Phi} & C_{\bar\Phi\bar\Phi} \\
\end{array}%
\right)~.$$ In the above equation the diagonal entries are defined as $$\begin{aligned}
C_{MM} &=& \frac{\beta}{\Lambda}\frac{\partial^2\Omega}{\partial M^2}~, \\
C_{\Phi\Phi} = \frac{\beta}{\Lambda^3}\frac{\partial^2\Omega}{\partial \Phi^2}~,&&~~~ C_{\bar\Phi\bar\Phi} =
\frac{\beta}{\Lambda^3}\frac{\partial^2\Omega}{\partial \bar\Phi^2}~;\end{aligned}$$ where $\beta=1/T$ and $\Lambda$ is the mass scale defining the form factor, Eq. . $\Omega$ is defined in Eq. . In what follows we denote by $M$ the constituent quark mass computed at $p=0$, which is a function of $\mu$ and $T$. The off-diagonal entries are given by $$\begin{aligned}
C_{M\Phi} = \frac{\beta}{\Lambda^2}\frac{\partial^2\Omega}{\partial \Phi \partial M}~,&&~~~
C_{M\bar\Phi} = \frac{\beta}{\Lambda^2}\frac{\partial^2\Omega}{\partial \bar\Phi\partial M}~,\\
C_{\Phi\bar\Phi} &=& \frac{\beta}{\Lambda^3}\frac{\partial^2\Omega}{\partial \Phi \partial\bar\Phi}~;\end{aligned}$$ the derivatives are computed at the global minimum of $\Omega$. Notice that the proper definition of the curvature matrix requires that we put $\Phi = \bar\Phi$, namely the mean field solution, only after differentiation.
The susceptibility matrix $\hat\chi$ is computed as the inverse of the curvature matrix $C$. We have $$\hat\chi=\left(%
\begin{array}{ccc}
\chi_{MM} & \chi_{M\Phi} & \chi_{M\bar\Phi} \label{eq:chiMM}\\
\chi_{M\Phi} & \chi_{\Phi\Phi} & \chi_{\Phi\bar\Phi} \\
\chi_{M\bar\Phi} & \chi_{\Phi\bar\Phi} & \chi_{\bar\Phi\bar\Phi} \\
\end{array}%
\right)~.$$ Here $\chi_{MM}$, $\chi_{\Phi\Phi}$ and $\chi_{\bar\Phi\bar\Phi}$ denote respectively the dimensionless susceptibilities of the constituent quark mass, of the Polyakov loop and of its complex conjugate. We also introduce the average susceptibility $$\bar\chi = \frac{1}{4}\left(\chi_{\Phi \Phi} + \chi_{\bar\Phi\bar\Phi} + 2\chi_{\Phi\bar\Phi}\right)~.\label{eq:chiAV}$$
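In practice the curvature matrix can be obtained from numerical second derivatives of $\Omega$ at its minimum and then inverted to give $\hat\chi$; the average susceptibility of Eq. \[eq:chiAV\] follows directly. The sketch below illustrates this with central finite differences; `omega(M, Phi, Phibar)` is a placeholder for the full thermodynamic potential at fixed $T$, $\mu$ and $\mu_e$, and the step sizes are arbitrary choices.

```python
import numpy as np

LAMBDA = 684.2  # MeV, the form-factor scale used to make C dimensionless

def curvature_matrix(omega, x0, T, h=(1.0, 1e-3, 1e-3)):
    """Dimensionless curvature matrix C of omega(M, Phi, Phibar) at the minimum x0.

    x0 = np.array([M, Phi, Phibar]) is the mean-field solution; omega returns MeV^4."""
    beta = 1.0 / T
    # powers of Lambda making each entry dimensionless: MM -> 1, M-Phi -> 2, Phi-Phi -> 3
    lam_pow = np.array([[1, 2, 2], [2, 3, 3], [2, 3, 3]])
    C = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            ei = np.eye(3)[i] * h[i]
            ej = np.eye(3)[j] * h[j]
            d2 = (omega(*(x0 + ei + ej)) - omega(*(x0 + ei - ej))
                  - omega(*(x0 - ei + ej)) + omega(*(x0 - ei - ej))) / (4 * h[i] * h[j])
            C[i, j] = beta / LAMBDA**lam_pow[i, j] * d2
    return C

def susceptibilities(C):
    """Susceptibility matrix (inverse curvature) and the average chi_bar of Eq. (chiAV)."""
    chi = np.linalg.inv(C)
    chi_bar = 0.25 * (chi[1, 1] + chi[2, 2] + 2.0 * chi[1, 2])
    return chi, chi_bar
```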
Results and discussion
======================
In this Section we sketch our results. First we discuss the parameter set of case I, which corresponds to $\bar{T}_0 = 208$ MeV. Case II is qualitatively similar to case I; therefore, after discussing the results obtained for case I, we briefly present those corresponding to case II. Finally we compare, both qualitatively and quantitatively, cases I and III. We find that the phase structures of the models corresponding to cases I and III are quite different.
Case I: masses, Polyakov loop and quarkyonic matter
---------------------------------------------------
In the upper panel of Fig. \[fig:m0\] we plot the constituent quark mass at $p=0$, the expectation value of the traced Polyakov loop and the electron chemical potential as functions of the temperature, computed at $\mu=0$ (left) and $\mu=300$ MeV (right). $M_0$ denotes the constituent quark mass at $p=0$, $\mu=0$, $\mu_e = 0$ and $T=0$, $M_0 = 335$ MeV. The pion condensate $N$ is not shown since we find $N=0$ once electrical neutrality has been imposed. The latter result is in agreement with what we found in our previous work, Ref. [@Abuki:2008tx], where we considered the local version of the neutral two flavor PNJL model. Although we show results only for two values of the quark chemical potential, we have explicitly verified that $N$ vanishes in the whole range of chemical potentials and temperatures considered in this work, namely $0\leq\mu\leq 500$ MeV and $0\leq T \leq 250$ MeV.
The expectation value of the Polyakov loop at $\mu=0$ is consistent with zero up to temperatures of the order of $100$ MeV. [^2] It rises as the temperature is increased, becoming of order $1$ for temperatures close to $250$ MeV. This behavior signals a crossover from a low temperature phase with an unbroken $Z(3)$ symmetry to a high temperature phase with the $Z(3)$ symmetry spontaneously broken. The same behavior of $\Phi$ as a function of the temperature is observed at higher values of $\mu$ as well, see for example the upper right panel of Fig. \[fig:m0\]. We refer to this crossover as the $Z(3)$ crossover throughout this paper.
In the lower panel of Fig. \[fig:m0\] we plot three of the susceptibilities defined in the previous Section, namely $\chi_{MM}$ (solid line), $\chi_{\Phi\bar\Phi}$ (dashed line) and $\bar\chi$ (dot-dashed line), as a function of temperature at $\mu=0$ (left) and $\mu=300$ MeV (right). In this work we identify the chiral crossover temperature with the temperature where $\chi_{MM}$ is maximum. In the same way and following Ref. [@Sasaki:2006ww] we define the $Z(3)$ crossover temperature as the one corresponding to the maximum of $\bar\chi$.
We wish to investigate the spontaneous breaking of the $Z(3)$ symmetry in the neutral PNJL model as the quark chemical potential is increased at a fixed low temperature. To this end we plot in Fig. \[fig:m300\] the constituent quark mass at $p=0$, the expectation value of the traced Polyakov loop and the electron chemical potential as functions of the quark chemical potential $\mu$, computed at $T=20$ MeV (left panel). $M_0$ denotes the constituent quark mass at $p=0$, $\mu=0$ and $T=0$, $M_0=335$ MeV. Again we do not show the pion condensate since it turns out to vanish in the neutral phase. At low temperatures we find a first order chiral transition at $\mu\approx 353$ MeV, in agreement with our previous analysis [@Abuki:2008tx]. At the chiral restoration the expectation value of the Polyakov loop has a sudden jump. Nevertheless its value remains much smaller than one even when $\mu$ is increased to $500$ MeV, where $\Phi\approx0.04$. For comparison we show the same quantities at $T=130$ MeV in the right panel.
We now focus on the low temperature regime and therefore refer to the left panel of Fig. \[fig:m300\]. In this case we cannot identify the jump of $\Phi$ with the $Z(3)$ crossover. Instead, the discontinuity of $\Phi$ is simply due to the coupling of the Polyakov loop to the chiral condensate. This is confirmed by the calculation of the Polyakov loop susceptibilities, see the lower panel of Fig. \[fig:m300\]. At $T=20$ MeV the chiral susceptibility has a pronounced peak at the jump of the constituent quark mass. On the other hand, the Polyakov loop susceptibilities are very smooth functions of $\mu$ with only a small cusp at the chiral transition, signaling the absence of a phase transition (as well as of a crossover). For comparison we show the same quantities at $T=130$ MeV in the right panel.
Our results can be interpreted by assuming that at low temperatures the $Z(3)$ symmetry is not spontaneously broken, both at low and at high chemical potentials. The non-zero value of $\Phi$ can be related to the existence of dynamical quarks in the system, which explicitly break the center symmetry. The fact that $\Phi\ll 1$ means that in the ground state colored quarks are suppressed (they carry a finite $Z(3)$-charge), and the main contribution to the free energy is due to the $Z(3)$-invariant multi-quark states, that is, states with zero $Z(3)$-charge. This point can be clarified by studying the thermal population of the quasi-quark excitations at low temperature. To this end we compute the quark number density $n_q$, $$n_q = - \frac{\partial\Omega}{\partial\mu}~,$$ as a function of the chemical potential at fixed temperature. The result is shown in Fig. \[fig:bd\]. Evaluation of the derivative of $\Omega$ defined in Eq. leads to the expression
$$n_q = \frac{3}{\pi^2}\int_0^\infty p^2 dp \left[\frac{g_{+-}}{f_{+-}} + \frac{g_{--}}{f_{--}} - \frac{g_{++}}{f_{++}}
- \frac{g_{-+}}{f_{-+}}\right]~, \label{eq:n1}$$
where we have introduced the functions $$\begin{aligned}
f_{\pm\pm} &=& 1 + 3\Phi e^{-\beta(E_\pm \pm \mu)} + 3\Phi e^{-2\beta(E_\pm \pm \mu)} + e^{-3\beta(E_\pm \pm \mu)}~,\\
g_{\pm\pm} &=& \Phi e^{-\beta(E_\pm \pm \mu)} + 2\Phi e^{-2\beta(E_\pm \pm \mu)} + e^{-3\beta(E_\pm \pm \mu)}~,\end{aligned}$$ and $E_\pm$ are defined in Eq. . The addenda in the r.h.s. of Eq. correspond respectively to up quarks, down quarks, up antiquarks and down antiquarks. If we put by hand $\Phi = 1$ in Eq. we recover the usual expression of the NJL model, $$n_{q,NJL} = \frac{3}{\pi^2}\int_0^\infty p^2 dp \left[\frac{1}{1 + e^{\beta(E_+ - \mu)}} + \frac{1}{1 + e^{\beta(E_- -
\mu)}} - \frac{1}{1 + e^{\beta(E_+ + \mu)}} - \frac{1}{1 + e^{\beta(E_- + \mu)}} \right]~, \label{eq:n2}$$ where the overall factor of $3$ counts the number of colors. Eq. is the number density of a free fermion gas; it shows that in the zero temperature limit and for $\mu > M$, $M$ denoting the constituent quark mass, the ground state of the NJL model is made of Fermi spheres of red, green and blue quarks. Moreover, at small but non-vanishing temperatures the thermal excitations above the Fermi spheres are still quarks.
Now we compare Eq. with the analogous result of the PNJL model. At low temperature we have $\Phi \ll 1$ therefore for a rough analysis we can put $\Phi=0$ in Eq. . We are left with the expression: $$n_{q,PNJL} = \frac{3}{\pi^2}\int_0^\infty p^2 dp \left[\frac{1}{1 + e^{3\beta(E_+ - \mu)}} + \frac{1}{1 +
e^{3\beta(E_- - \mu)}} - \frac{1}{1 + e^{3\beta(E_+ + \mu)}} - \frac{1}{1 + e^{3\beta(E_- + \mu)}} \right]~.
\label{eq:n3}$$
The above equation is valid for every value of $\mu$. In the limit $T\rightarrow0$ and for $\mu > M$, with $M$ the constituent quark mass, it gives the equation obtained in the NJL model, that is a ground state of Fermi spheres of red, green and blue quarks at the chemical potential $\mu$. If we introduce a small temperature then the thermal excitations are not quarks but the $Z(3)$ symmetric three quark states, that is states made of one red quark, one green quark and one blue quark. This is clear from the above Eq. by looking at the arguments of the exponentials in the four addenda. Each of the addenda corresponds to the occupation number of fermions with energy given by $3E_{\pm} - 3\mu$ which is exactly the energy of the lightest $Z(3)$ symmetric state, namely (see Eqs. -) $$E_{red} + E_{green} + E_{blue} = 3E_{\pm} - 3\mu~,\label{eq:E1}$$ the sign depending on the flavor we consider ($E_+$ corresponds to up quarks, $E_-$ to down quarks). The same result holds for antiquarks, simply by replacing $\mu\rightarrow-\mu$. The combination is exactly the argument of the exponentials in Eq. .
To summarize: for parametrization I, the ground state of PNJL quark matter in the regime of low temperature $T\ll M$ and $\mu> M$ consists of Fermi spheres of quarks, and the thermal excitations above these Fermi spheres are three-quark states, neutral with respect to $Z(3)$.
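The suppression of colored excitations can be made quantitative by comparing the effective occupation numbers implied by Eqs. \[eq:n2\] and \[eq:n3\]. The short sketch below (our own illustration, with arbitrary $T$, $\mu$ and energies) evaluates the NJL-like distribution $1/(1+e^{\beta(E-\mu)})$ and the $\Phi\to 0$ PNJL distribution $1/(1+e^{3\beta(E-\mu)})$: above the Fermi surface the PNJL occupation falls off with three times the exponent, i.e. with the Boltzmann factor of a $Z(3)$-neutral three-quark state.

```python
import numpy as np

def n_njl(E, mu, T):
    """Free-quark (NJL, Phi = 1) occupation number."""
    return 1.0 / (1.0 + np.exp((E - mu) / T))

def n_pnjl(E, mu, T):
    """PNJL occupation number in the statistically confined limit Phi -> 0.

    The factor 3 in the exponent is the Boltzmann weight of the lightest
    Z(3)-neutral excitation, a colorless three-quark state of energy 3(E - mu)."""
    return 1.0 / (1.0 + np.exp(3.0 * (E - mu) / T))

T, mu = 20.0, 400.0                       # MeV; low-T, high-mu regime (assumed values)
for E in (380.0, 400.0, 420.0, 440.0):    # energies around the Fermi surface
    print(E, n_njl(E, mu, T), n_pnjl(E, mu, T))
# Below the Fermi surface both occupations tend to 1 (filled Fermi spheres); above it
# the PNJL one is suppressed as exp(-3(E - mu)/T) rather than exp(-(E - mu)/T).
```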
For completeness, in the right panel of Fig. \[fig:bd\] we plot the dimensionless quark number susceptibility, $\chi_q$, defined as $$\chi_q = -\frac{1}{\Lambda^2}\frac{\partial^2\Omega}{\partial\mu^2}~,$$ where $\Lambda$ is the form factor momentum scale in Eq. , and $\Omega$ is the PNJL free energy given by Eq. .
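Numerically, $\chi_q$ follows from any routine returning $\Omega(\mu)$ at fixed temperature via a central second difference; the minimal sketch below assumes such a routine, here called `omega_of_mu`, with an arbitrary step size.

```python
LAMBDA = 684.2  # MeV

def chi_q(omega_of_mu, mu, dmu=1.0):
    """Dimensionless quark number susceptibility chi_q = -(1/Lambda^2) d^2 Omega / d mu^2.

    omega_of_mu(mu) must return the thermodynamic potential (MeV^4), already
    minimized over sigma, pi, Phi and mu_e at the given mu and fixed T."""
    d2 = (omega_of_mu(mu + dmu) - 2.0 * omega_of_mu(mu) + omega_of_mu(mu - dmu)) / dmu**2
    return -d2 / LAMBDA**2
```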
Case I: phase diagram in the $\mu-T$ plane
------------------------------------------
In Fig. \[fig:phased\] we summarize the phase diagram of the model in the $\mu-T$ plane with the parametrization I. The thin line corresponds to the chiral crossover; the thick line is the first order chiral transition. We identify the peaks (or the local maxima) in the susceptibilities with the phase transitions. In particular, the chiral crossover is related to the peak of $\chi_{MM}$ in Eq. ; on the other hand following Ref. [@Sasaki:2006ww] we identify the peak of the average susceptibility $\bar\chi$ defined in Eq. with the Polyakov loop crossover.
From the qualitative point of view the phase diagram does not differ from our previous result [@Abuki:2008tx], obtained in the sharp cutoff regularization scheme. The chiral crossover at $\mu = 0$ is located at $T_c = 215$ MeV, to be compared with $T_c = 206$ MeV of our previous work [@Abuki:2008tx]. The critical end point is only slightly shifted: in this work we find $$(\mu_E,T_E)\approx(350,55)~\text{MeV}~;$$ this result has to be compared with [@Abuki:2008tx] $$(\mu_E,T_E)\approx(340,80)~\text{MeV}~,~~~\text{sharp cutoff.}$$ Finally, at $T=0$ we find that the chiral transition occurs at $\mu = 370$ MeV, while in our previous work we found $\mu = 350$ MeV.
We now discuss the Polyakov loop crossover line, corresponding to the thin solid line in Fig. \[fig:phased\]. At small values of the quark chemical potential the peaks of the averaged susceptibility are well pronounced, see for example Fig. \[fig:m0\]. As $\mu$ is increased, the peaks of $\bar\chi$, as well as those of the diagonal $\chi_{\Phi\Phi}$, $\chi_{\bar\Phi \bar\Phi}$ and off-diagonal $\chi_{\bar\Phi \Phi}$ susceptibilities, are broadened and the crossover is spread over a wide interval of temperatures, see the right panel in Fig. \[fig:m0\]. In the window of chemical potential studied in this paper, $0\leq \mu\leq 500$ MeV, we are still able to observe maxima of $\bar\chi$ (as well as of the other susceptibilities) as a function of the temperature at a fixed value of $\mu$; the width of the maxima increases as $\mu$ is increased. Therefore we expect that at high values of $\mu$ and $T$ the peaks of $\bar\chi$ will be very broad, meaning that the crossover disappears in the model under consideration. This result changes if we consider $\mu$-dependent coefficients of the Polyakov loop effective potential, as we discuss later.
We finally notice that our result for the $Z(3)$ crossover is in qualitative agreement with the results obtained in Ref. [@Sasaki:2006ww], where the authors study the phase diagram and the susceptibilities of the PNJL model with quarks at the same chemical potential, and with a polynomial form of the Polyakov loop effective potential ${\cal U}$. This suggests that the $Z(3)$ crossover is not mainly governed by the specific form of ${\cal U}$ or by electrical neutrality, but by the assumption that the deconfinement scale $\bar{T}_0$ in ${\cal U}$ is kept independent of $\mu$ in this calculation.
Case II: critical points
------------------------
From the qualitative point of view the case with $\bar{T}_0 = 270$ MeV does not differ from the previously analyzed case I. Therefore we simply give the coordinates of the critical points obtained in this case. At $\mu=0$ we find the chiral crossover at $T=219$ MeV and the $Z(3)$ crossover at $T=211$ MeV. The critical end point coordinates are $$(\mu_E,T_E)\approx(336,103)~\text{MeV}~,~~~\bar{T}_0=270~\text{MeV}~.$$
Case III: critical points and phase structure
---------------------------------------------
We now discuss the results obtained in case III, in which we assume both a $\mu$- and an $N_f$-dependence of the parameter $\bar{T}_0$ of the Polyakov loop potential, see Eq. . Our main goal is to emphasize the differences between case III and case I. The main difference arises at low temperature and high chemical potential, so we focus on this regime. In the left panel of Fig. \[fig:caxx1\] we plot the constituent quark mass at $p=0$ and the expectation value of $\Phi$ as functions of $\mu$ at $T=20$ MeV, with the related susceptibilities, for case III, and compare these results with those obtained in case I at the same temperature (right panel). We have verified that the picture does not change qualitatively if we lower the temperature to the order of one MeV.
At $\mu=0$ the critical temperatures are equal to those computed in case I (simply because $\bar{T}_0(\mu=0) = 208$ MeV). Moreover the coordinates of the critical end point are $$(\mu_E,~T_E)=(339,53)~\text{MeV}~,~~~\bar{T}_0=\bar{T}_0(\mu)~.$$
The data on $\Phi$ corresponding to parametrization III show that the case $\bar{T}_0 = \bar{T}_0(\mu)$ is quite different from the case $\bar{T}_0=208$ MeV. In case III (left panel), at the chiral transition at $\mu\equiv\mu_c\approx 350$ MeV, the Polyakov loop has a net jump from $\Phi\ll 1$ at $\mu=\mu_c - 0^+$ to a definitely non-zero value $\Phi\approx 0.3$ at $\mu=\mu_c + 0^+$. Since the contribution of the one and two quark states ($Z(3)$ charges) to the free energy is multiplied by $3\Phi$, see Eq. , and in the present case $3\Phi$ is of the order of unity, the weight of the $Z(3)$ charges in the free energy is the same as the weight of the three quark states. This behavior is different from what we have found in the case $\bar{T}_0 = 208$ MeV. The similarity between the two cases is partially recovered if we consider temperatures of the order of one MeV; in this case we find a narrow window in $\mu$ where $3\Phi$ is of the order of $0.1$, revealing a ground state in which the leading contribution to the free energy comes from the thermal excitations of $Z(3)$ neutral states. We discuss this point in more detail in the following Section. Finally, the analysis of the peaks of the susceptibilities $\chi_{\bar\Phi\Phi}$ and $\bar\chi$ (lower left panel in Fig. \[fig:caxx1\]) reveals that the $Z(3)$ crossover occurs at $\mu\approx 460$ MeV.
Comparison between the two scenarios
====================================
In this Section we compare the qualitative picture that arises from the study of the phase diagram of the neutral PNJL model within two scenarios: the first corresponds to keeping $\bar{T}_0$ independent of $\mu$, case I; the second corresponds to a $\mu$-dependent $\bar{T}_0$, case III.
The results that we have discussed in the previous Sections show that the phase diagram of the PNJL model in case I at low temperatures is similar to the phase diagram obtained in the large $N_c$ approximation of QCD, see Refs. [@McLerran:2007qj; @Hidaka:2008yy]. At low temperatures the latter phase diagram consists of two regions: the first, at low values of $\mu$, defined as the [*confined phase*]{} and characterized by $\Phi=0$ and a vanishing baryon density; the second, at large values of $\mu$, called [*quarkyonic*]{}, in which $\Phi=0$ but the baryon density is not vanishing. Finally, at high temperature one finds the [*deconfined phase*]{} with $\Phi\neq 0$ and a non-vanishing baryon density. In the quarkyonic phase the free energy is that of free quarks, but the thermal excitations are those of baryons. Our previous discussion and Figs. \[fig:m300\] and \[fig:bd\] show that this happens even in the PNJL model in the low temperature regime. Therefore the PNJL model with parametrization I approximately reproduces the large $N_c$ phase diagram at low temperatures, if one identifies the state with $\Phi\ll 1$ at high $\mu$ with the quarkyonic phase of large $N_c$. This fact has already been noticed in a study of the three flavor model by Fukushima [@Fukushima:2008wg], where the author has suggested identifying the low temperature-high density ground state of the model as the quarkyonic phase of large $N_c$ QCD. Our results strengthen this idea and thus suggest that the quarkyonic-like ground state of the low temperature-high density PNJL model is not a peculiarity of the three flavor case, but rather a characteristic of the PNJL model itself, as long as we do not include an explicit $\mu$ dependence in the coefficients of ${\cal U}$ (we discuss this case below). The main difference between large $N_c$ and PNJL is that in the latter model one can excite one and two quark states (that is, $Z(3)$ charges) if the temperature is high enough. As a consequence, the deconfinement transition observed in the large $N_c$ model at high temperature and high chemical potential is replaced in the present model by a smooth $Z(3)$ crossover.
In Fig. \[fig:phase-im\] we show a cartoon phase diagram of the neutral two flavor PNJL model and a comparison with that obtained in the large $N_c$ approximation [@McLerran:2007qj]. The bold line denotes the chiral crossover as well as the chiral first order transition. The thin line corresponds to the deconfinement crossover. Both of these lines are the same as those shown in Fig. \[fig:phased\]. Since this is simply a cartoon we do not distinguish between the crossover (small $\mu$) and the first order transition (higher values of $\mu$). In the PNJL model the quark density does not vanish at any finite temperature, even though for small chemical potential $n_q$ is very small at low temperature (see Fig. \[fig:bd\]). To compare the phase diagram of the PNJL model with that of the large $N_c$ approximation we need a criterion to decide whether $n_q$ is effectively zero or not. Analogously to Ref. [@Fukushima:2008wg], we identify the $n_q$ crossover with the value of $\mu$ corresponding to the inflection point of the quark density. We find that the $n_q$ crossover defined in this way coincides with the chiral crossover, as in Ref. [@Fukushima:2008wg]. Therefore the chiral crossover line in Fig. \[fig:phase-im\] represents the density crossover as well. In the chirally broken phase and at low temperature $n_q\approx0$. On the other hand, $n_q\neq0$ in the chirally symmetric phase at low temperature. At high temperature $n_q\neq0$ both in the chirally broken and in the chirally restored phases.
At low temperature we have $\Phi\approx0$ both to the left and to the right of the dashed line, see Figs. \[fig:m0\] and \[fig:m300\]. At low temperature the region with broken chiral symmetry has the same characteristics as the hadronic phase found in Ref. [@McLerran:2007qj]; on the other hand, at low temperature the region to the right of the dashed line has the same characteristics as the quarkyonic phase found in Ref. [@McLerran:2007qj]. For these reasons we have called the two regions [*hadronic-like*]{} and [*quarkyonic-like*]{}, respectively. We stress that, strictly speaking, this analogy holds only at low temperature (for temperatures of the order of one hundred MeV $n_q\neq0$ even in the chirally broken phase, see Fig. \[fig:bd\]). Finally, at high temperature (above the $Z(3)$ transition line) we have both $\Phi$ of order unity and $n_q\neq 0$. In analogy with the terminology of Ref. [@McLerran:2007qj] we call this region of the phase diagram the [*deconfined-like*]{} phase.
We briefly compare the results discussed above in relation to case I with those obtained in the large $N_c$ approximation at $T=0$ in Ref. [@Glozman:2008kn], where the author discusses, within a model, a gap in the spectrum of quarkyonic matter. Such a gap is given by the pion mass, $M_\pi$, which becomes larger as $\mu$ is increased. Even if the values of $M_\pi$ as a function of $\mu$ computed in Ref. [@Glozman:2008kn] might differ from the non local PNJL ones, the calculations of $M_\pi$ carried out in Refs. [@Hansen:2006ee; @Abuki:2008tx] using the local NJL model show that the qualitative behavior of $M_\pi$ as a function of $\mu$ is the same in the two models. Thus in the PNJL model we expect a large pion mass at large $\mu$ as well. However, this mass does not correspond to the gap in the excitation spectrum of our model. As a matter of fact, in the quarkyonic-like region of the phase diagram in Fig. \[fig:phase-im\] the three quark states can be excited; each quark has a constituent mass $M(p)$ given by Eq. and plotted in Figs. \[fig:m0\] and \[fig:m300\], hence the three quark state has a mass $3M(p)$ which at small quark momenta and large $\mu$ is of the order of $10$ MeV. Therefore in our case a gap in the spectrum still exists, but it is given by the three quark state mass, which is much smaller than $M_\pi$.
We now turn to parametrization III. As discussed in the previous Section, the small chemical potential region of the phase diagram in case III is qualitatively similar to that obtained in case I; therefore we focus on the low temperature/large chemical potential region from now on. In Fig. \[fig:phasedCOMP\] we draw the low temperature phase diagram of the PNJL model with parametrization III. The bold line denotes the chiral transition; the thin line corresponds to the $Z(3)$ transition. As in the previous section, the transition lines are computed by looking at the peaks of the chiral and $\bar\chi$ susceptibilities. The diagram in Fig. \[fig:phasedCOMP\] should be compared with the analogous diagram obtained for parametrization I, which is shown in Fig. \[fig:phased\]. The main effect of choosing a $\mu$-dependent parameter $\bar{T}_0$ in the Polyakov loop potential is the lowering of the $Z(3)$ transition line. Moreover, the wide quarkyonic-like window in Fig. \[fig:phase-im\] is shrunk to a small region in Fig. \[fig:phasedCOMP\]. At low temperature it is enough to reach a chemical potential of the order of $500$ MeV to have $\Phi\approx 1$ and a net quark density; both these characteristics define the deconfined phase of Fig. \[fig:phase-im\] [@McLerran:2007qj]. Even if we have used the particular form of $\bar{T}_0(\mu)$ suggested in Ref. [@Schaefer:2007pw], we are confident that the aforementioned results are simply due to the lowering of the deconfinement scale $\bar{T}_0$ as $\mu$ is increased and not to the detailed analytical form of $\bar{T}_0(\mu)$. Thus our picture should be qualitatively robust.
Before closing this Section we briefly comment on the possible study of the scenarios discussed above on the lattice. Recently the density of states (DOS) method has been used to investigate the QCD phase transition at large $\mu$ [@Fodor:2007vv]. In this paper the QCD phase diagram is mapped by studying the plaquette expectation value in the $\mu-T$ plane. Although the lattice size implemented in [@Fodor:2007vv] is relatively small and a finite volume study is still missing, so that the results should be taken as preliminary, an interesting phase transition is observed as $\mu$ crosses a critical value $\mu_c$ at a fixed temperature. Moreover the quark number shows a sudden rise as $\mu$ reaches $\mu_c$. The qualitative behavior is similar in the PNJL model, see Figs. \[fig:m0\], \[fig:m300\] and \[fig:bd\]. In the PNJL calculation with the parametrizations of Cases I and II (fixed values of $\bar{T}_0$), at low temperature a small jump of the Polyakov loop occurs at the chiral crossover, the true $Z(3)$ crossover being shifted to larger values of $\mu$. On the other hand, in Case III with a $\mu$-dependent $\bar{T}_0$, a net rise of $\Phi$ occurs at the chiral crossover. It would be very interesting if by means of the DOS method one could compute the expectation value of the Polyakov loop, as well as the chiral and the Polyakov loop susceptibilities, in the low temperature regime as a function of $\mu$. This lattice calculation might improve the understanding of the new low temperature/large chemical potential state of matter claimed in [@Fodor:2007vv], and at the same time it would allow one to distinguish between the two PNJL scenarios discussed in this paper.
Conclusions
===========
In this paper we have investigated the landscape of the possible phases of the neutral two flavor PNJL model. We have considered the logarithmic effective potential of the Polyakov loop ${\cal U}$ [@Fukushima:2003fw; @Roessner:2006xn], see Eq. , and a non local interaction in the quark sector, see Eqs. -. Our main results are summarized in Figs. \[fig:phased\] and \[fig:phasedCOMP\]. Fig. \[fig:phased\] corresponds to a fixed value of $\bar{T}_0$ in the Polyakov loop effective potential. In this case the phase diagram is qualitatively similar to that obtained in the large $N_c$ approximation of QCD [@McLerran:2007qj].
In particular, at high chemical potential and low temperature we find a phase in which the main contribution to the thermal quark population is given by $Z(3)$ neutral states, that is, three quark states made of one red quark, one green quark and one blue quark. This characteristic resembles the quarkyonic phase of Ref. [@McLerran:2007qj]. The quarkyonic-like structure of the ground state of the PNJL model has already been noticed in Ref. [@Fukushima:2008wg] in a non-neutral, three flavor version of the model. Moreover, the $Z(3)$ transition line has already been studied in Ref. [@Sasaki:2006ww] with a different effective potential for the Polyakov loop and in a non-neutral state. The results of Ref. [@Sasaki:2006ww] are qualitatively similar to ours. Therefore we suggest that the quarkyonic-like state of matter is a feature of the PNJL model, independent of the number of flavors and of the difference of the chemical potentials between quarks, as long as a $\mu$ dependence of the coefficients of ${\cal U}$ is not considered.
In Fig. \[fig:phasedCOMP\] we show the phase diagram of the model when a $\mu$ dependence of the coefficients of the effective potential of the Polyakov loop is introduced, using the analytic form of $\bar{T}_0(\mu)$ suggested in Ref. [@Schaefer:2007pw]. The main results are the lowering of the $Z(3)$ transition line of Fig. \[fig:phased\] and the shrinking of the quarkyonic-like phase window of Fig. \[fig:phased\]. We believe that this result is rather robust, as it does not follow from the detailed form of $\bar{T}_0(\mu)$ but only from the lowering of the deconfinement scale as $\mu$ increases.
For simplicity, we have not considered in this work the possibility of color superconductivity at high $\mu$ [@Rapp:1997zu; @Alford:1998mk]. At first sight it could seem that the results found with parametrizations I and II, i.e. a quarkyonic-like phase at high chemical potential and low temperature, exclude the possibility of a superconductive gap in the spectrum. This reasoning could be supported by the observation that the quarkyonic-like phase is similar to a confined phase, differing from the latter only by a non-zero value of the quark density. Such a conclusion is not necessarily true. As a matter of fact, even if not noticed explicitly in Ref. [@Roessner:2006xn] for the two flavor model and in [@Ciminale:2007ei; @Abuki:2008ht] for the three flavor models, where color superconductivity has been taken into account, in the quarkyonic-like region (high $\mu$ and small $T$) the minimization of the thermodynamic potential leads to a phase where quarks have a color superconductive gap in the spectrum. It is the 2SC gap [@Rapp:1997zu] in the two flavor case, and the CFL gap [@Alford:1998mk] in the three flavor case. Therefore the realization of a color superconductive phase in the PNJL models at high $\mu$ and small $T$ is not forbidden in principle, even if the ground state has a quarkyonic structure.
An interesting further investigation is the computation of the spectra of the mesonic and baryonic thermal excitations in the quarkyonic-like phase of the PNJL model, and their comparison with those obtained in a different model [@Glozman:2008kn] that mimics QCD in the large $N_c$ approximation. We are now working on this topic and the results will be the subject of a forthcoming paper.
We acknowledge K. Fukushima, M. Hamada, M. Huang, O. Kiriyama, T. Kunihiro, V. A. Miransky, C. Sasaki, A. Schmitt, I. Shovkovy and W. Weise for discussions during the Workshop “New Frontiers in QCD 2008”. Moreover, we have benefited from a discussion with K. Redlich during the aforementioned Workshop, which stimulated the main part of the present work. We finally thank P. Cea and L. Cosmai for enlightening discussions, and D. Blaschke for useful correspondence.
[99]{}
[^1]: The term statistical confinement was introduced by K. Redlich during the Workshop “New Frontiers in QCD08”.
[^2]: $\Phi$ cannot be exactly zero because dynamical quarks break the $Z(3)$ symmetry explicitly; nevertheless $\Phi$ turns out to be very small, signaling that the center symmetry is broken only softly.
---
abstract: 'Certain geological features have been interpreted as evidence of channelized magma flow in the mantle, which is a compacting porous medium. @aharonov95 developed a simple model of reactive porous flow and numerically analysed its instability to channels. The instability relies on magma advection against a chemical solubility gradient and the porosity-dependent permeability of the porous host rock. We extend the previous analysis by systematically mapping out the parameter space. Crucially, we augment numerical solutions with asymptotic analysis to better understand the physical controls on the instability. We derive scalings for critical conditions of the instability and analyse the associated bifurcation structure. We also determine scalings for the wavelength and growth rate of the channel structures that emerge. We obtain quantitative theories for and a physical understanding of: first, how advection or diffusion over the reactive time scale sets the horizontal length scale of channels; second, the role of viscous compaction of the host rock, which also affects the vertical extent of channelized flow. These scalings allow us to derive estimates of the dimensions of emergent channels that are consistent with the geologic record.'
author:
- 'David W. Rees Jones'
- 'Richard F. Katz'
bibliography:
- 'main.bib'
title: 'Reaction-infiltration instability in a compacting porous medium'
---
Introduction {#sec:intro}
============
Melting of mantle rock fuels volcanism at Hawaii and Iceland, as well as along the plate-tectonic boundaries where oceanic plates spread apart. Typically this melt is understood to come from mantle decompression: as the solid rock slowly upwells, it experiences decreasing pressure, which lowers its solidus temperature and drives quasi-isentropic melting [@ramberg72; @asimow97]. The magma produced in this way segregates from its source and rises buoyantly through the interconnected pores of the polycrystalline mantle [@mckenzie84]. The equilibrium chemistry of magma is a function of pressure; rising magma, produced in equilibrium with the mantle, becomes undersaturated in a component of the mantle as it ascends [@OHara65; @Stolper80; @elthon84]. The magma reacts with adjacent solid mantle grains and the result is a net increase in liquid mass [@kelemen90]. This reactive melting (or, equivalently, reactive dissolution) augments decompression melting. The corrosivity of vertically segregating melt is thought to promote localisation into high-flux magmatic channels [@quick1982; @kelemen92; @kelemen95]; these probably correspond to zones observed in exhumed mantle rock where all soluble minerals have been replaced with olivine [@kelemen00; @braun02]. Such channelised transport has important consequences for magma chemistry [@spiegelman03a] and, in particular, may explain the observed chemical disequilibrium between erupted lavas and the shallowest mantle [@kelemen95; @braun02]. Laboratory experiments at high temperature and pressure confirm that magma–mantle interactions can lead to a channelisation instability [@pec15; @pec17]. Here we analyse a simplified model of this system to better understand the character of the instability.
The association of reactive flow with channelisation was established by early theoretical work that considered a corrosive, aqueous fluid propagating through a soluble porous medium [@Hoefner88; @ortoleva94 and refs. therein]. A general feature of porous media is that permeability increases with porosity. If an increase of fluid flux enhances the dissolution of the solid matrix, increasing the porosity, then a positive feedback ensues. This drives a channelisation instability, either in the presence or absence of a propagating reaction front [@szymczak2012; @Szymczak13; @Szymczak14]. [@aharonov95] adapted the previous theory to model reactive magmatic segregation. In their adaptation, two key differences from earlier work arise. The first is that reaction is not limited to a moving front [as in, for example, @hinch90], but rather occurs pervasively within the domain. The second is that mantle rocks are ductile and undergo creeping flow in response to stress. This includes isotropic compaction, whereby grains squeeze together and the interstitial melt is expelled (or vice versa). Equations governing the mechanics of partially molten rock were established by [@mckenzie84]. We will see that the compaction of the solid phase plays a crucial role in modifying and even stabilizing the instability, and so this is a key aspect of our study.
[@aharonov95] obtained numerical results showing the systematic dependence on reaction rate (Damköhler number) and diffusion rate (Péclet number), but did not consider the co-variation of these parameters. They obtained numerical results indicative of the effect of compaction when the stiffness parameter, defined in our §\[sec:eqs\_simplified\], is $O(1)$. However, they did not present scalings when the stiffness parameter is much smaller than 1, which is an interesting and geologically relevant regime. [@spiegelman01] performed two-dimensional numerical calculations of the instability and used a similar analysis to [@aharonov95] to interpret the results. [@hewitt10] considered the reaction-infiltration instability in the context of thermochemical modelling of mantle melting. The problem was again considered by [@hesse11], but their focus was mostly on an instability to compaction–dissolution waves, which were first studied by [@aharonov95]. While interesting theoretically, there is no geological evidence for these waves. [@Schiemenz2011] performed high-order numerical calculations of channelized flow in the presence of sustained perturbations at the bottom of the domain.
In the present paper, we describe the physical problem and its mathematical expression (§\[sec:problem\]), perform a linear stability analysis and give numerical solutions (§\[sec:LSA\]) and, by asymptotic analysis, elucidate the control of physical processes (§\[sec:asymptotics\]). The asymptotics provide scalings that are difficult to obtain numerically. They hence allow us to explore a broader parameter space, crucially including the regime in which compaction is significant. Finally, we discuss the geological implications of our analysis (§\[sec:geological\_discussion\]).
Governing equations {#sec:problem}
===================
Dimensional equations
---------------------
Figure \[fig:diagram\] shows a schematic diagram of the domain: a region of partially molten rock of height $H$ in the $z$ direction, composed of a solid phase ($s$, matrix, mantle rock) and a liquid phase ($l$, magma).
![Diagram of the problem. (*a*) shows a region of partially molten rock of depth $H$ with a flux of liquid phase from beneath and free-flux boundary condition above. (*b*) shows the gradient of the equilibrium composition of the liquid phase $c^{{\textrm{eq}}}_l $. When a parcel of liquid (dashed blue circle) is raised (full blue circle), it has a concentration below the equilibrium, leading to reactive melting along the horizontal blue arrow. The composition of reactively produced melts $c_\Gamma$ is greater than equilibrium.[]{data-label="fig:diagram"}](Figure1-eps-converted-to.pdf){width="0.8\linewidth"}
We account for conservation in both phases. Mass conservation in the solid and liquid is given by, respectively,
$$\begin{aligned}
\frac{\partial (1 -\phi)}{\partial t} + \nabla \cdot ((1-\phi) {\boldsymbol{v}_s}) &= -\Gamma, \label{eq:mass_s} \\
\frac{\partial \phi}{\partial t} + \nabla \cdot (\phi {\boldsymbol{v}_l}) &= \Gamma, \label{eq:mass_l}
\end{aligned}$$
where $t$ is time, $\phi$ is the volume fraction of liquid phase (termed porosity), $(1-\phi)$ is the fraction of solid phase, ${\boldsymbol{v}_l}$ is the liquid velocity, ${\boldsymbol{v}_s}$ is the solid velocity, and $\Gamma$ is the volumetric melting rate (the rate at which volume is transferred from solid to liquid phase).
We use conservation of momentum to determine the solid and liquid velocities [@mckenzie84]. In general, the solid phase (mantle) can deform viscously by both deviatoric shear and isotropic compaction. The latter is related to the pressure difference between the liquid and solid phases. We neglect deviatoric stresses on the solid phase and consider only the isotropic part of the stress and strain-rate tensors. The compaction rate $ \nabla \cdot {\boldsymbol{v}_s}$ is related to the compaction pressure $\mathcal{P}$ according to the linear constitutive law $$\label{eq:def-zeta}
\nabla \cdot {\boldsymbol{v}_s}= \mathcal{P} / \zeta,$$ where $\zeta$ is an effective compaction or bulk viscosity. The solid matrix behaves like a rigid porous matrix when the bulk viscosity is sufficiently large (an idea we will relate to a non-dimensional matrix stiffness later). $\zeta$ can be estimated using micromechanical models of partially molten rocks, and may depend on the porosity [@sleep88]. The most recent calculations show that the bulk viscosity depends only weakly on porosity [@rudge17agu]. Therefore, we make the simplifying assumption that $\zeta$ is constant. We discuss this issue further in appendix \[app:hewitt\].
Fluid flow is given by Darcy’s law: $$\label{eq:darcy_l}
\phi \left( {\boldsymbol{v}_l}- {\boldsymbol{v}_s}\right) =K \left[ (1-\phi)\Delta \rho g \hat{ \boldsymbol z} - \nabla \mathcal{P} \right].$$ A Darcy flux $\phi \left( {\boldsymbol{v}_l}- {\boldsymbol{v}_s}\right) $ is driven by gravity $g \hat{ \boldsymbol z}$ associated with the density difference between the phases $\Delta \rho $ and by compaction pressure gradients. Crucially, the mobility $K$ ($\equiv$ permeability divided by liquid viscosity) of the liquid depends on the porosity: $$K = K_0 (\phi/\phi_0)^n,$$ where $K_0$ is a reference mobility at a reference porosity $\phi_0$ (equal to the porosity at the base of the column $z=0$) and $n$ is a constant (we take $n=3$ in our numerical calculations). It is thought that $2 \lesssim n \lesssim 3$ for the geological systems of interest [@vonbargen86; @miller14; @rudge18].
Finally, we must determine the melting or reaction rate $\Gamma$. The focus of this paper is the mechanics of the instability, so we adopt a fairly simple treatment of its chemistry, largely following @aharonov95. The reaction associated with the reaction-infiltration instability is one of chemical dissolution. At a simple level, this can be described as follows. As magma rises its pressure decreases and it becomes undersaturated in silica. This, in turn, drives a reaction in which pyroxene is dissolved from the solid while olivine is precipitated [*cf*. figure 8 in @longhi02]. Schematically, the dissolution reaction can be written: $$\mathrm{Magma}_1(l) + \mathrm{Pyroxene}(s) \rightarrow \mathrm{Magma}_2(l) + \mathrm{Olivine}(s),$$ where ($l$) denotes a component in the liquid phase and ($s$) a component in the solid phase, and we use subscript $(1,2)$ to indicate magmas of slightly different composition. Crucially, this reaction involves a net transfer of mass from solid to liquid [@kelemen90] and hence it is typically called a melting reaction. Because the reaction replaces pyroxene with olivine, geological observations of tabular dunite bodies in exhumed mantle rock are interpreted as evidence for the reaction-infiltration instability (dunites are mantle rocks, residual after partial melting, that are nearly pure olivine) [@kelemen92].
We now formulate the reactive chemistry in terms of the simplest possible mathematics. We assume that $\Gamma$ is proportional to the undersaturation of a soluble component in the melt. The concentration of this component in the melt is denoted $c_l$; the equilibrium concentration is denoted $c^{{\textrm{eq}}}_l$. Hence the melting rate is written $$\label{eq:melting-rate-kinetic}
\Gamma = -R\left(c_l-c^{{\textrm{eq}}}_l\right),$$ where $R$ is a kinetic coefficient with units 1/time. We assume that $R$ is a constant, independent of the concentration of the soluble component in the solid phase. This is valid for the purposes of studying the onset of instability if the soluble component is abundant and homogeneously distributed, both reasonable assumptions [@liang10].
In this formulation, the chemical reaction rate depends on the composition of the liquid phase $c_l$. Chemical species conservation in the liquid phase is given by $$\frac{\partial}{\partial t} \left( \phi c_l \right) + \nabla \cdot \left( \phi {\boldsymbol{v}_l}c_l \right) = \nabla \cdot \left(\phi D \nabla c_l \right) + \Gamma c_\Gamma,$$ where the effective diffusivity of chemical species is $\phi D$ (diffusivity in the liquid phase is written $D$; diffusion through the solid phase is negligible) and $c_\Gamma$ is the concentration of reactively-produced melts. We then expand out the partial derivatives and simplify using equation to obtain $$\label{eq:cl}
\phi \frac{\partial c_l}{\partial t} + \phi {\boldsymbol{v}_l}\cdot \nabla c_l = \nabla \cdot \left(\phi D \nabla c_l \right) + (c_\Gamma -c_l) \Gamma.$$
To close the system, we suppose that the equilibrium concentration has a constant gradient $\beta\hat{\boldsymbol{{z}}}$, as shown in figure \[fig:diagram\]. If we define (without loss of generality) the equilibrium concentration at the base of the region ($z=0$) to be zero, then $$c^{{\textrm{eq}}}_l = \beta z.$$ We suppose further that the concentration $c_\Gamma$ of the reactively produced melts is offset from the equilibrium concentration by $\alpha$, a positive constant, so $$c_\Gamma = \beta z + \alpha.$$ A scaling argument clarifies the meaning of the compositional parameters: for a fast reaction ($R\to\infty$) and hence for a liquid that is close to equilibrium, a vertical liquid flux $f_0$ would cause reactive melting at a characteristic rate $\Gamma_0 \sim f_0\beta/\alpha$, so $\beta/\alpha$ is the rate of reactive melting per unit of liquid flux. Our formulation of $c_\Gamma$ is slightly different to that of @aharonov95, who take $c_\Gamma =1$. Their resulting, simplified equations are equivalent to ours when $\alpha = 1$ (following the non-dimensionalization in our §\[sec:eqs\_simplified\]).
At this point, we remark briefly on two simplifications inherent in the approach described above. First, we assume that the equilibrium chemistry of the liquid phase is a function of depth. A fuller treatment might consider the chemistry of the liquid as a function of pressure [@longhi02]. However, to an excellent approximation, the liquid pressure is equal to the lithostatic pressure $\rho_s g (H-z)$, in which case pressure and depth are linearly related. Indeed, the dimensionless error in making this approximation is $O(\mathcal{S}\Delta\rho/\rho_s)$, where $\mathcal{S}$ is the matrix stiffness parameter introduced below. Thus we neglect the difference relative to lithostatic pressure, consistent with a Boussinesq approximation $\Delta \rho / \rho_s \ll 1$, where $\rho_s$ is the density of the solid phase.
Second, we use a very simple treatment of melting that neglects, for example, latent heat and temperature variations. @hewitt10 developed a consistent thermodynamic model of melting and showed that latent heat may suppress instability because it reduces the melting rate. Such an effect can be represented within our simpler model by reducing the melting-rate factor $\beta/\alpha$ (see further discussion in appendix \[app:hewitt\]).
Simplified, non-dimensional equations {#sec:eqs_simplified}
-------------------------------------
The governing equations (\[eq:mass\_s\], \[eq:mass\_l\], \[eq:darcy\_l\], \[eq:cl\]) can be non-dimensionalized according to the characteristic scales $$\begin{aligned}
\label{eq:scales}
{\left[{x,z}\right]}=H, \qquad {\left[{\phi}\right]} &= \phi_0,\nonumber\\
{\left[{{\boldsymbol{v}_l}}\right]} = w_0 = K_0\Delta\rho g/\phi_0, \qquad {\left[{{\boldsymbol{v}_s}}\right]} &=
\phi_0w_0, \qquad {\left[{t}\right]}=\alpha/\left(w_0\beta\right), \\
{\left[{\mathcal{P}}\right]}=\zeta\phi_0w_0\beta/\alpha, \qquad {\left[{c_l}\right]} &= \beta H, \qquad {\left[{\Gamma}\right]} =
\phi_0w_0\beta/\alpha.\nonumber\end{aligned}$$ The dimensionless parameters of the system are as follows. First, $\mathcal{M} = \beta H / \alpha \ll 1$, which is the change in solubility across the domain height and characterises the reactivity of the system. Second, stiffness $\mathcal{S} = \mathcal{M} \delta^2 / H^2$, which characterises the rigidity of the medium, where $\delta = \sqrt{K_0\zeta}$ is the dimensional compaction length, an emergent lengthscale [e.g. @spiegelman93a]. Third, $\mathrm{Da} = \alpha R H / (\phi_0 w_0) \gg 1$, the Damköhler number, which characterises the importance of reaction relative to advection. Fourth, $\mathrm{Pe} = w_0 H/D \gg 1$ is the Péclet number, which characterises the importance of advection relative to diffusion.
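To make these definitions concrete, a minimal sketch in Python follows (illustrative only, not part of the original analysis; every numerical value is an assumed placeholder rather than an estimate used in this paper):

```python
# Illustrative only: evaluate the characteristic scales and dimensionless groups
# defined above.  Every numerical value here is an assumed placeholder.
K0    = 1e-12    # reference mobility = permeability / liquid viscosity, m^2 / (Pa s)  (assumed)
phi0  = 0.01     # reference porosity                                                  (assumed)
drho  = 500.0    # solid-liquid density difference, kg / m^3                           (assumed)
g     = 9.8      # gravitational acceleration, m / s^2
H     = 1e4      # domain height, m                                                    (assumed)
zeta  = 1e19     # compaction (bulk) viscosity, Pa s                                   (assumed)
R     = 1e-10    # kinetic coefficient, 1 / s                                          (assumed)
D     = 1e-8     # chemical diffusivity in the liquid, m^2 / s                         (assumed)
alpha = 1.0      # compositional offset of reactively produced melts                   (assumed)
beta  = 1e-5     # equilibrium solubility gradient, 1 / m                              (assumed)

w0    = K0 * drho * g / phi0          # characteristic liquid speed
delta = (K0 * zeta) ** 0.5            # compaction length
M     = beta * H / alpha              # solubility change across the domain
S     = M * delta**2 / H**2           # matrix stiffness
Da    = alpha * R * H / (phi0 * w0)   # Damkohler number
Pe    = w0 * H / D                    # Peclet number

print(f"w0 = {w0:.2e} m/s, delta = {delta:.2e} m")
print(f"M = {M:.2e}, S = {S:.2e}, Da = {Da:.2e}, Pe = {Pe:.2e}")
```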
Then the equations can be simplified by taking the limit of small porosity $\phi_0 \ll \mathcal{M} \ll 1$ and considering only horizontal diffusion (because we expect channelized features with a short horizontal wavelength compared to their vertical structure). We also assume that the reaction rate is fast, so we neglect terms of $O(\mathcal{M}/\textrm{Da})\ll 1$. We also expand out the divergence term in equation using equation . Thus the four governing equations (\[eq:mass\_s\], \[eq:mass\_l\], \[eq:darcy\_l\], \[eq:cl\]) become
\[eq:governing-non-dimensional\] $$\begin{aligned}
\frac{\partial \phi}{\partial t} &= \mathcal{P} + \chi, \\
\mathcal{M} \frac{\partial \phi}{\partial t} + \nabla \cdot (\phi
{\boldsymbol{v}_l}) &=\mathcal{M} \chi, \\
\label{eq:governing-non-dimensional-darcy}
\phi {\boldsymbol{v}_l}&=K \left[ \hat{ \boldsymbol z} -
\mathcal{S} \nabla \mathcal{P} \right], \\
\label{eq:governing-non-dimensional-concentration}
\phi {\boldsymbol{v}_l}\cdot \left[\frac{ \nabla \chi
}{\mathrm{Da}} - \hat{ \boldsymbol z} \right]
&= \frac{1}{\mathrm{Da}\mathrm{Pe}} \frac{\partial }
{\partial x}\left(\phi \frac{\partial \chi}{\partial x}\right)- \chi,
\end{aligned}$$
where, from this point forward, we use the same symbol to denote the dimensionless version of a variable. The dimensionless mobility is $K=\phi^n$ and we have introduced a scaled undersaturation $\chi$ of the chemical composition of the liquid phase $$\chi = \textrm{Da}(z-c_l).$$ The dimensionless reactive melting rate is equal to the scaled undersaturation $\chi$.
A set of appropriate boundary conditions is:
$$\begin{aligned}
&\phi =1, \quad \chi =1, \quad \frac{\partial \mathcal{P}}{\partial z} = 0, \qquad (z=0), \label{eq:nd-bcs-z0} \\
& \frac{\partial \mathcal{P}}{\partial z} = 0, \qquad (z=1).\label{eq:nd-bcs-z1} \end{aligned}$$
The boundary conditions at $z=0$ combine with equation to give an incoming vertical liquid velocity $w=1$. At the upper boundary there is no driving compaction pressure gradient (a ‘free-flux’ condition).
Linear stability analysis {#sec:LSA}
=========================
We expand the variables as the sum of a $z$-dependent, $O(1)$ term, and a $(x,z,t)$-dependent perturbation,
$$\begin{aligned}
\phi &= \phi_0(z) + \phi_1(x,z,t), \\
\mathcal{P} &= \mathcal{P}_0(z) + \mathcal{P}_1(x,z,t), \\
\chi &= \chi_0(z) + \chi_1(x,z,t), \\
{\boldsymbol{v}_l}&= w_0(z)\hat{\boldsymbol{z}}
+ \boldsymbol{v}_1 (x,z,t).
\end{aligned}$$
The perturbations are much smaller than the leading-order terms and hence we linearise the governing equations by discarding terms containing products of perturbations.
The base state
--------------
The leading-order flow is purely vertical. The conservation equations at this order are
\[eq:zeroth\_order\] $$\begin{aligned}
0 &= \mathcal{P}_0 + \chi_0, \\
\frac{d}{dz} (\phi_0 w_0) &=\mathcal{M} \chi_0, \\
\phi_0 w_0 &=K_0\left[ 1 - \mathcal{S}
\frac{ d \mathcal{P}_0}{dz }\right], \label{eq:darcy_0} \\
\phi_0 w_0 \left[\frac{1}{\mathrm{Da}}\frac{ d \chi_0 }{dz } -
1\right] &= - \chi_0, \label{conc_0}
\end{aligned}$$
where $K_0=\phi_0^n$. In the limit of large $\mathrm{Da}$, an exact solution is $\mathcal{P}_0=-\chi_0$, where $\chi_0=\exp(\mathcal{M}z)$. The prefactor is unity to satisfy equation . We can then rearrange for $\phi_0$ and $w_0$. Since $\mathcal{M} \ll 1 $, $\exp(\mathcal{M}z) \approx 1 $, and so we work in terms of a uniform base state, $$\label{eq:1}
-\mathcal{P}_0 = \chi_0 = \phi_0 = w_0 = 1.$$ The uniformity of the base state significantly simplifies the subsequent analysis.
Perturbation equations
----------------------
The equations governing the perturbations can be written
$$\begin{aligned}
\frac{\partial \phi_1 }{\partial t} &= \mathcal{P}_1 + \chi_1, \label{eq:solid_1} \\
\mathcal{M} \frac{\partial \phi_1 }{\partial t}+ \phi_0 \nabla
\cdot \boldsymbol v_1 + w_0 \frac{\partial \phi_1}{\partial z} &=\mathcal{M} \chi_1, \label{eq:liquid_1} \\
\phi_0 \boldsymbol v_1 &=-\mathcal{S} K_0 \nabla \mathcal{P}_1 +
(n-1) w_0 \phi_1 \hat{ \boldsymbol z},
\label{eq:darcy_1} \\
\left( \phi_0 w_1 + \phi_1 w_0 \right)
\left[\frac{1}{\mathrm{Da}}\frac{
d \chi_0 }{dz } - 1\right] + \frac{\phi_0 w_0}{\mathrm{Da}}
\frac{ \partial \chi_1 }{\partial z }
&= \frac{\phi_0}{\mathrm{Da}\mathrm{Pe}}
\frac{\partial^2 \chi_1 }{\partial x^2}- \chi_1. \label{eq:reaction_1_unmod}
\end{aligned}$$
The third of these expressions was obtained using the exact base state relation and the fact that $K_0 = \phi_0^n$ and hence that $K_0' = n K_0 /\phi_0$.
We eliminate $\chi_1$ using and $\boldsymbol v_1$ using . We also use to simplify the expressions and obtain
$$\begin{aligned}
-\mathcal{S} K_0 \nabla^2 \mathcal{P}_1 +
n w_0 \frac{\partial \phi_1 }{\partial z}
&= -\mathcal{M} \mathcal{P}_1, \label{eq:mass_1} \\
\left(-\mathcal{S} K_0 \frac{\partial
\mathcal{P}_1}{\partial z} + n w_0 \phi_1 \right)
\left[\frac{ - \chi_0 }{\phi_0 w_0 }\right]
&= -\left[\frac{\phi_0 w_0}{\mathrm{Da}}\frac{ \partial
}{\partial z } -\frac{\phi_0}{\mathrm{Da}\mathrm{Pe}}
\frac{\partial^2 }{\partial x^2}+
1\right] \left(\frac{\partial \phi_1}{\partial t}
-\mathcal{P}_1\right). \label{eq:reaction_1}
\end{aligned}$$
We now substitute in the constant base state expressions, self-consistently neglect the $O(\mathcal{M})$ term, and cross differentiate to eliminate $\phi_1$ $$\left[ \frac{1}{\mathrm{Da}}\partial_{tz} -
\frac{1 }{\mathrm{Da}\mathrm{Pe}} \partial_{txx} + \partial_t -
n \right] \nabla^2 \mathcal{P}_1 = \frac{n}{\mathcal{S}}
\left[ \left( \frac{1}{\mathrm{Da}} -\mathcal{S} \right)
\partial_z
- \frac{1 }{\mathrm{Da}\mathrm{Pe}} \partial_{xx} + 1\right]
\partial_z \mathcal{P}_1.$$ For brevity in this equation, subscripts are used to denote partial derivatives.
We seek normal-mode solutions $\mathcal{P}_1 \propto \exp(\sigma t + ikx +mz)$ of this linear equation, where $\sigma$ is the growth rate and $k$ is a horizontal wavenumber. Thus we obtain the characteristic polynomial (dispersion relationship) $$\label{eq:dispersion3} \frac{\sigma}{\mathrm{Da}} m^3
+ \left( \sigma \mathcal{K} - \frac{n}{\mathrm{Da} \mathcal{S}}
\right)m^2 - \left( \frac{n\mathcal{K}}{\mathcal{S}} +
\frac{\sigma}{\mathrm{Da}} k^2 \right) m + \left(n-\sigma
\mathcal{K} \right)k^2 = 0,$$ where $\mathcal{K}=1+k^2/\mathrm{Da}\mathrm{Pe}$. Equation has three roots $m_j$ $(j=1,2,3)$ and hence the compaction pressure perturbation will be given by $$\label{eq:cmppres_solution}
\mathcal{P}_1 = \sum_{j=1}^3 A_j\exp(\sigma t + ikx + m_jz).$$ The three unknown pre-factors $A_j$ are determined by the boundary conditions.
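As a consistency check on this algebra, a short symbolic sketch follows (illustrative, assuming sympy is available; it is not part of the original derivation). It substitutes the normal mode into the equation for $\mathcal{P}_1$ above and confirms that the cubic dispersion relationship is recovered:

```python
# Sketch: verify symbolically that the normal-mode substitution
# P1 ~ exp(sigma*t + i*k*x + m*z) in the linearised pressure equation
# reproduces the cubic dispersion relationship quoted above.
import sympy as sp

sigma, k, m, n, Da, Pe, S = sp.symbols('sigma k m n Da Pe S', positive=True)
K = 1 + k**2 / (Da * Pe)

# with d/dt -> sigma, d/dz -> m, d^2/dx^2 -> -k^2, nabla^2 -> m^2 - k^2
lhs = (sigma*m/Da + sigma*k**2/(Da*Pe) + sigma - n) * (m**2 - k**2)
rhs = (n/S) * ((1/Da - S)*m + k**2/(Da*Pe) + 1) * m

cubic = (sigma/Da)*m**3 + (sigma*K - n/(Da*S))*m**2 \
        - (n*K/S + sigma*k**2/Da)*m + (n - sigma*K)*k**2

print(sp.simplify(sp.expand(lhs - rhs - cubic)))   # expect 0
```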
Boundary conditions on the perturbation {#sec:bcs}
---------------------------------------
We previously eliminated $\chi_1$ and $\phi_1$ in favour of the compaction pressure $\mathcal{P}_1$. The corresponding boundary conditions on $\mathcal{P}_1$, derived from equations and are
\[eq:boundary\_conditions\] $$\begin{aligned}
\mathcal{P}_1 &= 0 \quad \mathrm{at } \, z=0, \label{eq:pert_bc_1} \\
\frac{\partial \mathcal{P}_1}{\partial z} &= 0 \quad \mathrm{at } \, z=0. \label{eq:pert_bc_2}
\end{aligned}$$ The upper boundary condition (equation \[eq:nd-bcs-z1\]) is $$\frac{\partial \mathcal{P}_1}{\partial z} = 0 \quad \mathrm{at } \, z=1. \label{eq:pert_bc_3}$$
The boundary conditions can be expressed in matrix form in terms of the coefficients of the normal-mode expansion as $$\label{eq:bc_matrix}
\left(\begin{array}{ccc}
1 & 1 & 1 \\
m_1 & m_2 & m_3 \\
m_1\textrm{e}^{m_1} & m_2\textrm{e}^{m_2} & m_3\textrm{e}^{m_3}
\end{array}\right)
\left(\begin{array}{c}
A_1 \\ A_2 \\ A_3
\end{array}\right) =
\left(\begin{array}{c}
0 \\ 0 \\ 0
\end{array}\right) .$$ A necessary (but not sufficient) condition for a non-trivial solution $A_j$ to exist is that the boundary-condition matrix $M$ has zero determinant.
Analysis of the dispersion relationship {#sec:analysis1}
---------------------------------------
We analyse the characteristic polynomial for the case of real growth rate $\sigma$ (that is, we look for channel modes rather than compaction-dissolution waves, as discussed in §\[sec:intro\]). The characteristic polynomial, a cubic, has three roots $m_j$ $(j=1,2,3)$. The character of these roots is controlled by the cubic discriminant. If the discriminant is strictly positive, the roots are distinct and real. If the discriminant is zero, the roots are real but at least one root is repeated (degenerate). If the discriminant is strictly negative, then there is one real root ($m_1$, say), and a pair of complex conjugate roots ($m_2,m_3$).
For the case of real and distinct roots, the columns of $M$ are linearly independent, the determinant of $M$ is non-zero, and the only solution has $A_j=0$. When the roots are real but degenerate, $\det M = 0$ but there is no set of coefficients $A_j$ that can satisfy the boundary condition at $z=1$ . Hence there are physically meaningful roots only when the cubic discriminant of is strictly negative.
In this latter case, with one real root and two complex conjugate roots, $\det M$ is purely imaginary. A proof of this follows. Consider a $2 \times 2$ matrix whose columns are complex conjugate, say $Y= \left( \boldsymbol{X}, \boldsymbol{X}^* \right)$ where $\boldsymbol{X}=[X_1,X_2]^T$. Then $\det Y = X_1 X_2^* - X_2 X_1^*$, so $\det Y + \det Y^* = 0$, i.e., $\det Y$ is pure imaginary. The boundary condition matrix $M$ is $3\times 3$, but $\det M$ can be written as the sum of purely imaginary determinants of $2 \times 2$ sub-matrices, multiplied by purely real numbers; hence $\det M$ is pure imaginary.
With $m_1$ real, and $m_2$ and $m_3=m_2^*$ complex, there are eigenvalues of $\sigma$ for which the imaginary part of $\det M$ vanishes. At these eigenvalues, $\det M = 0$ and there exists an eigenvector $A_j$ such that the boundary conditions are satisfied. We find these eigenvalues/vectors by numerically solving the coupled problem of the cubic polynomial and $\det M = 0$.
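A minimal numerical sketch of this coupled solve follows (an assumed numpy/scipy implementation, not the authors' code). For a fixed wavenumber it scans the growth rate, locates sign changes of the imaginary part of $\det M$, and keeps only solutions for which the cubic has a complex-conjugate pair of roots:

```python
# Sketch: eigenvalue search for the growth rate sigma at fixed wavenumber k,
# by requiring det M = 0 with roots m_j of the cubic dispersion relationship.
import numpy as np
from scipy.optimize import brentq

n, Da, Pe, S = 3.0, 1e2, 1e2, 1.0      # parameter values as in figure 2 (assumed)

def roots_m(sigma, k):
    """Roots m_j of the cubic dispersion polynomial."""
    K = 1.0 + k**2 / (Da * Pe)
    return np.roots([sigma / Da,
                     sigma * K - n / (Da * S),
                     -(n * K / S + sigma * k**2 / Da),
                     (n - sigma * K) * k**2])

def detM_imag(sigma, k):
    """Imaginary part of the boundary-condition determinant."""
    m = roots_m(sigma, k)
    M = np.array([np.ones(3, dtype=complex), m, m * np.exp(m)])
    return np.linalg.det(M).imag

k = 10.0
sig = np.linspace(0.05, n - 1e-3, 2000)
vals = np.array([detM_imag(s, k) for s in sig])
for i in np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:])):
    s0 = brentq(detM_imag, sig[i], sig[i + 1], args=(k,))
    if np.abs(roots_m(s0, k).imag).max() > 1e-8:   # keep complex-conjugate-pair cases only
        print(f"sigma = {s0:.4f}")
```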
Physical discussion of instability mechanism (part I: growth rate) {#sec:physical-discussion-1}
------------------------------------------------------------------
Figure \[fig:perturbation\_eg\](*a*) shows an example of the dispersion relationship $\sigma(k)$. The curves are a series of valid solutions. The solutions on the uppermost dispersion curve have the largest growth rate $\sigma$ and are monotonic in $z$. Curves below this fundamental mode are higher order, with increasing numbers of turning points in $z$ as $\sigma$ decreases at fixed $k$. In this example, the instability is only present at $k \gtrsim 1$, which roughly translates to channels that are narrower than the domain height. Hence we expect that the lateral wavelength is always smaller than the domain height.
![(*a*) Example dispersion relationship calculated at $\mathrm{Da}=10^2$, $\mathrm{Pe}=10^2$, $\mathcal{S}=1$, $n=3$. (*b*) Perturbation corresponding to most unstable wavenumber (indicated by a diamond symbol in panel (*a*)). The background colour scale shows the porosity perturbation $\phi_1$ (normalized to have a maximum value of 1). The black curves are contours of liquid undersaturation $\chi_1$, which is positively correlated with $\phi_1$ (solid = positive, dashed = negative). The magenta arrows show the perturbation liquid velocity $\boldsymbol v_1$. Note the flow into the proto-channels (regions of elevated porosity $\phi_1>0$). The compaction pressure $\mathcal{P}_1$ (not shown) is anti-correlated with $\phi_1$, consistent with flow direction from high to low pressure.[]{data-label="fig:perturbation_eg"}](Figure2-eps-converted-to.pdf){width="1.0\linewidth"}
We now explain the physical mechanism that gives rise to the instability. Figure \[fig:perturbation\_eg\](*b*) shows an example of the structure of the fastest-growing perturbation (most unstable mode). Regions of positive porosity perturbation $\phi_1$ (which we call proto-channels) create a positive perturbation of the vertical flux, according to equation . For didactic purposes, consider the case of no compaction pressure (which is directly applicable to a rigid porous medium). Then $$\label{eq:scaling_1}
\phi_0 w_1 + \phi_1 w_0 = n w_0 \phi_1.$$ Note that the positive vertical flux perturbation only occurs because the permeability increases with porosity ($n>0$); this is a crucial aspect of the instability.
Positive vertical advection against the background equilibrium concentration gradient leads to positive liquid undersaturation $\chi_1$, according to equation . In more physical terms, the enhanced vertical flux advects corrosive liquid from below. Thus the equilibrium concentration gradient is the other crucial aspect of the instability, alongside the porosity-dependent permeability. For didactic purposes, consider the case of very fast reaction ($\mathrm{Da} \gg 1$), in which the leading order balance in equation gives $$\label{eq:scaling_2}
\chi_1 = \phi_0 w_1 + \phi_1 w_0 = n w_0 \phi_1.$$ Positive liquid undersaturation in turn causes reactive melting and hence increasing porosity by equation , so the proto-channel emerges. Again, neglecting compaction pressure, replacing $\partial_t \to \sigma$, and substituting equation , we find $$\label{eq:scaling_3}
\sigma \phi_1 = n w_0 \phi_1 \quad \Rightarrow \quad \sigma = n,$$ where we used $w_0=1$. Note that the maximum growth rate in Figure \[fig:perturbation\_eg\](*a*) is about $n=3$. Recalling the non-dimensionalization of time in equation , we see that the timescale for channel growth is the timescale for reactive melting ($\alpha /\beta w_0$) multiplied by the sensitivity of melt flux to porosity ($n$).
Further consideration of equation reveals two stabilising mechanisms. The instability is weakened by diffusion, especially at high wavenumber, since diffusion acts to smooth out lateral gradients in the undersaturation. It is also weakened by advection of the liquid undersaturation, because the undersaturation in the proto-channel increases with height . The subsequent analysis shows that this latter mechanism is also more important at large wavenumber, so both advection and diffusion of liquid undersaturation play a role in wavelength selection (see §\[sec:advection-controlled\] and §\[sec:diffusion-controlled\], respectively). Indeed, figure \[fig:perturbation\_eg\](*a*) shows that the growth rate decreases at large $k$.
Finally, we consider the effect of compaction, which is a further stabilising mechanism at both large and small wavenumbers [@aharonov95] (see §\[sec:wavenumber\], §\[sec:compaction\] and appendix \[app:kminmax\]). The instability only occurs if the matrix stiffness exceeds some critical value (see §\[sec:Scrit\] and §\[sec:Scrit\_analysis\]). To leading order ($\mathcal{M} \ll 1$), if we consider equation governing liquid mass conservation, then $$\label{eq:scaling_4}
\phi_0 \nabla \cdot \boldsymbol v_1 = - w_0 \frac{\partial \phi_1}{\partial z}
\quad \Rightarrow \quad K_0 \nabla^2 \mathcal{P}_1 =
\frac{n w_0}{\mathcal{S}} \frac{\partial \phi_1}{\partial z},$$ where we substitute in equation to achieve the last expression (*cf*. equation \[eq:mass\_1\]). Proto-channels are regions of increasing porosity perturbation . Thus, by liquid mass conservation, they are regions of convergence of the perturbation velocity $\boldsymbol v_1$. Therefore, proto-channels are regions of negative compaction pressure perturbation, which reduces the porosity perturbation, according to the equation of solid mass conservation . Again, this stabilising mechanism is wavelength dependent through the Laplacian in equation . Note further that the perturbation to the compaction pressure decreases with increasing matrix stiffness $\mathcal{S}$, so we recover the rigid porous medium case as $\mathcal{S}\gg1$. We return to the physical discussion of the instability in §\[sec:physical-discussion-2\] to explain the wavelength selection and the critical matrix stiffness.
Asymptotic analysis of the large-$\boldsymbol{\mathrm{Da}}$ limit {#sec:asymptotics}
=================================================================
In this section, we use asymptotic analysis to estimate the maximum growth rate $\sigma^*$ and the wavenumber $k^*$ of the most unstable mode. The analysis allows us to understand the physical controls on the instability, particularly the wavelength selection.
The cubic dispersion relation has a structure that simplifies in the limit of large $\mathrm{Da}$. There is one real root of $O(\mathrm{Da})$ and a pair of complex conjugate roots. Take $m_1\sim O(\mathrm{Da})$ as ansatz and obtain: $$m_1 \sim - \mathcal{K} \mathrm{Da} .$$ Take $m_{2,3} \sim O(\mathrm{1})$ as ansatz and obtain: $$\label{eq:dispersion2} \left( \sigma \mathcal{K} -
\frac{n}{\mathrm{Da} \mathcal{S}} \right)m^2 - \left(
\frac{n\mathcal{K}}{\mathcal{S}} + \frac{\sigma}{\mathrm{Da}} k^2
\right) m + \left( n-\sigma \mathcal{K} \right)k^2 = 0.$$
The boundary condition is accommodated by a boundary layer of thickness $O(1/\mathrm{Da})$ associated with the root $m_1$. The remaining boundary conditions (\[eq:pert\_bc\_1\] & \[eq:pert\_bc\_3\]) can be written $$\label{eq:bc_matrix_quadratic}
\left(\begin{array}{cc}
1 & 1 \\
m_2\textrm{e}^{m_2} & m_3\textrm{e}^{m_3}
\end{array}\right)
\left(\begin{array}{c}
A_2 \\ A_3
\end{array}\right) =
\left(\begin{array}{c}
0 \\ 0
\end{array}\right) .$$ As before, we require the determinant of this boundary condition matrix to be zero. Noting that $m_3=m_2^*$, we find that $$0 = \mathrm{imag} \left[ m_2 \exp(m_2) \right] .$$ We write $m_2$ in terms of its real and imaginary parts $m_2=a+i b$, then $$\label{eq:critical_condition_2} 0 = \tan b + b/a.$$ This algebraic equation has an infinite family of solutions corresponding to the multiple roots shown in figure \[fig:perturbation\_eg\](*a*). The perturbation compaction pressure can be written $$\mathcal{P}_1 \propto \exp(az) \sin(bz).$$ Note that there is no part of the solution proportional to $\exp(az)\cos(bz)$ because of boundary condition . Equation is equivalent to boundary condition .
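A short sketch of how the roots $b$ can be bracketed and computed for a given $a$ follows (assuming a scipy root finder; this is an illustrative aid, not part of the original text):

```python
# Sketch: roots b of tan(b) + b/a = 0 for given a > 0.  For each j there is one
# root in the interval ((2j+1)*pi/2, (j+1)*pi), where tan(b) runs from -inf to 0.
import numpy as np
from scipy.optimize import brentq

def roots_b(a, nroots=4):
    f = lambda b: np.tan(b) + b / a
    return [brentq(f, (2*j + 1)*np.pi/2 + 1e-9, (j + 1)*np.pi - 1e-9)
            for j in range(nroots)]

print(roots_b(a=0.5))   # e.g. a = 1/(2S) with S = 1 and negligible k^2/(2 Da)
```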
The real and imaginary parts of $m_2$ can be found using a variant of the quadratic formula. $$\label{eq:quadratic_formula}
px^2+qx+r=0 \Rightarrow x= \frac{-q}{2p} \pm i \sqrt{\frac{r}{p}-\left(\frac{q}{2p} \right)^2},$$ where we assume that the quantity within the square root is real for the reasons discussed above (§\[sec:analysis1\]). We use equation to obtain the exact expressions
\[eq:ab\_full\_1+2\] $$\begin{aligned}
\label{eq:ab_full_1}
a &= \frac{\left( \frac{n\mathcal{K}}{\mathcal{S}} +
\frac{\sigma}{\mathrm{Da}} k^2 \right)}{{2 \left( \sigma
\mathcal{K} - \frac{n}{\mathrm{Da} \mathcal{S}} \right)}}, \\
b^2 + a^2 &=\frac{\left( n-\sigma \mathcal{K} \right) k^2}{{
\left( \sigma \mathcal{K} - \frac{n}{\mathrm{Da}
\mathcal{S}} \right)}} . \label{eq:ab_full_2}
\end{aligned}$$
It is possible to solve these algebraic equations for $\sigma$ numerically (*cf*. dashed blue curve in figure \[fig:dispersion\_high\_da\]), but it is instructive to make the additional ansatz $\sigma \sim n(1-\epsilon)$, where $\epsilon \ll 1$. This allows us to approximate the behaviour near the maximum growth rate, where $\sigma \sim n$. We also assume that $(\mathrm{Da} \mathcal{S})^{-1} \ll 1$ but retain terms $O(k^2 \mathrm{Da} ^{-1})$ since the latter is important at large wavenumber. In general $\mathcal{K} \sim 1$, except on the right-hand-side of equation , where we obtain a term proportional to $\left[(1-\epsilon)^{-1} - \mathcal{K}\right] \sim \epsilon - k^2/\mathrm{Da} \mathrm{Pe}$. We test all the results obtained using these approximations against full numerical solution of the cubic dispersion relation. Under the simplifying assumptions,
\[eq:ae\_full\_1+2\] $$\begin{aligned}
\label{eq:ae_full_1}
a & \sim \frac{1}{2 \mathcal{S}} + \frac{ k^2 }{2 \mathrm{Da}}, \\
\epsilon &\sim \frac{b^2 + a^2}{k^2} +
\frac{k^2}{\mathrm{Da} \mathrm{Pe} } . \label{eq:ae_full_2}
\end{aligned}$$
The terms that constitute $\epsilon$ represent a series of stabilizing mechanisms that reduce the growth rate $\sigma$, namely compaction (through the $1/2\mathcal{S}$ term in equation ), advection of undersaturation (through the $k^2/2\mathrm{Da}$ term in equation ), and diffusion (through the $k^2/\mathrm{Da} \mathrm{Pe}$ term in equation ). We show an example dispersion relationship at moderately high $\mathrm{Da}$ in figure \[fig:dispersion\_high\_da\].
![High $\mathrm{Da}$ dispersion relationship with scaling relationships overlaid. Solid black: full numerical calculation of cubic dispersion relationship , only showing the most unstable mode. Dashed blue: solution of equations (\[eq:ab\_full\_1\], \[eq:ab\_full\_2\]) from the simplified quadratic dispersion relationship. Dot-dashed red: solution of equation . The blue curve agrees well everywhere, the red curve is only valid when $n-\sigma$ is small, consistent with the asymptotic approximations. []{data-label="fig:dispersion_high_da"}](Figure3-eps-converted-to.pdf){width="3.4950in"}
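A curve of this type can be reproduced from the approximate expressions above with the following sketch (an illustrative implementation under the stated large-$\mathrm{Da}$ assumptions; the parameter values are assumed for the example):

```python
# Sketch: approximate dispersion curve sigma(k) ~ n*(1 - eps) built from the
# expressions for a and eps above, with b the first root of tan(b) + b/a = 0.
import numpy as np
from scipy.optimize import brentq

n, Da, Pe, S = 3.0, 1e4, 1e2, 1.0     # illustrative parameter values (assumed)

def first_root_b(a):
    return brentq(lambda b: np.tan(b) + b / a, np.pi/2 + 1e-9, np.pi - 1e-9)

def growth_rate(k):
    a = 1.0/(2.0*S) + k**2/(2.0*Da)
    b = first_root_b(a)
    eps = (b**2 + a**2)/k**2 + k**2/(Da*Pe)
    return n*(1.0 - eps)

k = np.logspace(0, 3, 400)
sigma = np.array([growth_rate(kk) for kk in k])
print(f"most unstable k ~ {k[np.argmax(sigma)]:.1f}, sigma* ~ {sigma.max():.3f}")
```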
Dependence on wavenumber $k$ {#sec:wavenumber}
----------------------------
Starting at small $k$, $ \epsilon $ initially decreases with $k$, reaches some minimum value $ \epsilon^* $ at $k=k^*$ \[corresponding to the most unstable mode with maximum growth rate $\sigma^* =n(1-\epsilon^*)$\], and then increases as $k \rightarrow \infty$.
Scaling arguments make these statements more precise. When $k \ll k^*$,
$$\begin{aligned}
a & \sim \frac{1}{2 \mathcal{S}} \label{eq:ae_smallk_1} \\
\epsilon &\sim \frac{b^2 +
a^2}{k^2}. \label{eq:ae_smallk_2}
\end{aligned}$$
It is convenient to define $\mathcal{B}(\mathcal{S}) = b^2 + a^2$, where $a = 1/2\mathcal{S}$ and $b$ satisfies equation . Then a small-wavenumber ‘cut-off’ occurs when $\epsilon = O(1)$ (which is outside the bounds of our previous assumption $\epsilon \ll 1$), i.e. when $k \sim \mathcal{B}^{1/2}$. We use ‘cut-off’ to refer to the wavenumber at which the growth rate departs significantly from its maximum value, not the strict minimum wavelength, which we discuss below.
Conversely, at large wavenumber $k \gg k^*$,
$$\begin{aligned}
a & \sim \frac{1}{2 \mathcal{S}} + \frac{k^2}{2 \mathrm{Da}} , \label{eq:ae_largek_1} \\
\epsilon &\sim k^2
\left({\frac{1}{(2\mathrm{Da})^2}}
+\frac{1}{\mathrm{Da}\mathrm{Pe}}\right)
. \label{eq:ae_largek_2}
\end{aligned}$$
If $\mathrm{Da} \gg \mathrm{Pe}$, then $ \epsilon \sim {k^2}/{\mathrm{Da} \mathrm{Pe} } $, so the large wavenumber ‘cut-off’ occurs when $k \sim (\mathrm{Da} \mathrm{Pe})^{1/2}$. Physically, the small scale of the instability is limited by the distance a chemical component can diffuse over the reaction timescale [@spiegelman01]. Conversely if $\mathrm{Pe} \gg \mathrm{Da}$, then $ \epsilon \sim {k^2}/{(2 \mathrm{Da} )^2 } $, so the large wavenumber ‘cut-off’ occurs when $k \sim 2 \mathrm{Da}$. Physically, the small scale of the instability is limited by the distance a chemical component is transported by the background liquid flow over the reaction timescale. These two limits also affect the maximum growth rate of the instability. In the next sections we consider each limit in turn.
It is also possible to determine strict minimum and maximum wavenumbers for instability, although this is more technical so we leave the details for appendix \[app:kminmax\]. In summary, we find $$\begin{aligned}
&k_\mathrm{min} \sim \frac{1.5171}{ \mathcal{S}^{1/2}} \quad (\mathcal{S} \gg 1),
\qquad k_\mathrm{min} \sim \frac{1}{ \mathcal{S}} \quad (\mathcal{S} \ll 1), \\
&k_\mathrm{max} \sim \mathcal{S} \mathrm{Da} \mathrm{Pe}.\end{aligned}$$ The dependence on matrix stiffness $\mathcal{S}$ means that compaction stabilizes the system at both large and small wavenumbers [@aharonov95]. Indeed, in a rigid medium there is no minimum or maximum wavenumber.
Advection controlled instability $\mathrm{Pe} \gg \mathrm{Da} \gg 1$ {#sec:advection-controlled}
--------------------------------------------------------------------
We first consider the case of negligible diffusion. In this case, it is natural to introduce a change of variables: $\tilde{k} = k \mathrm{Da}^{-1/2}$, $\tilde{\epsilon} = \epsilon \mathrm{Da}$. Then, to leading order,
$$\begin{aligned}
a &\sim \frac{1}{2\mathcal{S}} + \frac{\tilde{k}^2}{2}, \\
\tilde{\epsilon} &\sim \frac{a^2 +
b^2}{\tilde{k}^2}. \label{eq:epsilon-tilde}
\end{aligned}$$
Note that both $b$ and $a$, and hence $\tilde{\epsilon}$, are functions of $(\tilde{k},\mathcal{S})$ alone.
We find the maximum growth rate by differentiating equation and seeking the (unique) turning point, which satisfies $$\label{eq:k-tilde}
b^2 + \left(\tilde{k}^2-1 \right) b \cos(b)\sin(b) -\tilde{k}^2 \sin^2(b) =0.$$ We solve numerically to obtain the solution $\tilde{k}^*=\tilde{k}^*(\mathcal{S})$. The corresponding minimum of $\tilde{\epsilon}$ is $\tilde{\epsilon}^*(\mathcal{S})$. In summary, the most unstable wavenumber $k^* \sim \tilde{k}^* \mathrm{Da}^{1/2}$ [consistent with the numerical results of @aharonov95] and the corresponding growth rate $\sigma^* \sim n\left[1-\tilde{\epsilon}^*\mathrm{Da}^{-1}\right]$. These scaling results are shown in figure \[fig:Da\_scalings\] (panels *a, b*). The dependence on compaction through matrix stiffness $(\mathcal{S})$ is shown in figure \[fig:S\_scalings\] (panels *a, b*). The wavenumber is controlled by advection of liquid undersaturation (see §\[sec:physical-discussion-2\]).
![Numerical calculations of the full cubic dispersion relation (solid black curves) compared to power-law scalings (dashed red lines) for maximum growth rate and corresponding wavenumber as a function of $\mathrm{Da}$, at fixed $\mathrm{Pe}=10^{12}$ (panels *a, b*), and $\mathrm{Pe}=10^2$ (panels *c, d*). For all calculations $\mathcal{S}=1$.[]{data-label="fig:Da_scalings"}](Figure4-eps-converted-to.pdf){width="1.0\linewidth"}
Diffusion controlled instability $\mathrm{Da} \gg \mathrm{Pe}$ {#sec:diffusion-controlled}
--------------------------------------------------------------
The other limit occurs when diffusion is significant. For this case, it is natural to introduce a different change of variables: $\hat{k} = k (\mathrm{Da}\mathrm{Pe})^{-1/4}$, $\hat{\epsilon} = \epsilon (\mathrm{Da} \mathrm{Pe})^{1/2} $. Then
$$\begin{aligned}
a &\sim \frac{1}{2\mathcal{S}}, \\
\hat{\epsilon} & \sim
\frac{\mathcal{B}+\hat{k}^4}{
\hat{k}^2},
\end{aligned}$$
where $\mathcal{B}(\mathcal{S})$ was defined previously.
This dispersion relation is simple enough to analyse by hand. The minimum is $\hat{\epsilon}^*=2\mathcal{B}^{1/2}$ and occurs at $\hat{k}^*=\mathcal{B}^{1/4}$. Thus the maximum growth rate $\sigma^*$, which occurs at wavenumber $k^*$, satisfies
$$\begin{aligned}
k^* &\sim (\mathrm{Pe Da}\mathcal{B} )^{1/4},\\
\sigma^* &\sim n
\left[1-\frac{2
\sqrt\mathcal{B}}{\sqrt{\mathrm{Da
Pe}}}
\right].
\end{aligned}$$
That $ k^* \sim \mathrm{Pe }^{1/4}$ was observed numerically by @aharonov95, although they did not obtain the dependence on $\mathrm{Da}$ or $\mathcal{S}$. Thus the instability grows most rapidly at some wavelength controlled by diffusion. The analysis is consistent with numerical results (figure \[fig:Da\_scalings\]*c, d*). The dependence on compaction through the function $\mathcal{B}(\mathcal{S})$ is shown in figure \[fig:S\_scalings\] (panels *c, d*). Increasing matrix stiffness $\mathcal{S}$ increases the growth rate and reduces the wavenumber of the most unstable mode. The wavenumber is controlled by diffusion (see §\[sec:physical-discussion-2\]).
Effect of compaction (dependence on $\mathcal{S}$) {#sec:compaction}
--------------------------------------------------
Asymptotic estimates of the dependence on $\mathcal{S}$ are obtained by analysing the roots of equation : $\tan b + b/a = 0$. The first non-trivial root $b$ of this equation occurs for $b \in (\pi/2, \pi)$. At small $a$ (large $\mathcal{S}$), the root $b \rightarrow \pi/2^+ $. At large $a$ (small $\mathcal{S}$), the root $b \rightarrow \pi^-$.
Next we determine the maximum growth rate for large and small $\mathcal{S}$. First we consider the case of advection controlled growth ($\mathrm{Pe} \gg \mathrm{Da} \gg 1$). At large $\mathcal{S} \gg 1$, $a \sim \tilde{k}^2/2$ independent of $\mathcal{S}$. Thus $a,b,\tilde{k}^*,\tilde{\epsilon}^*$ approach some limit that is independent of $\mathcal{S}$. By solving equation numerically, we find that
$$\begin{aligned}
\tilde{k}^* &\rightarrow 1.898, \\
\tilde{\epsilon}^* &\rightarrow 2.302. \label{eq:highDa_infinitePe_highS_limit}
\end{aligned}$$
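These constants can be cross-checked with a minimal numerical sketch (an assumed scipy implementation) that solves $\tan b + b/a = 0$ jointly with the turning-point condition, taking $a = \tilde{k}^2/2$:

```python
# Sketch: large-S limit of the advection-controlled regime.  Solve
# tan(b) + b/a = 0 together with the turning-point condition, with a = ktilde^2/2.
import numpy as np
from scipy.optimize import fsolve

def eqs(v):
    b, kt = v
    a = kt**2 / 2.0                       # 1/(2S) negligible for S >> 1
    return [np.tan(b) + b / a,
            b**2 + (kt**2 - 1.0)*b*np.cos(b)*np.sin(b) - kt**2*np.sin(b)**2]

b, kt = fsolve(eqs, [2.2, 1.9])
print(f"ktilde* = {kt:.3f}, epstilde* = {((kt**2/2.0)**2 + b**2)/kt**2:.3f}")  # ~1.898, ~2.302
```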
At small $\mathcal{S} \ll 1$, we obtain the following leading order expressions
$$\begin{aligned}
a^* &\sim \mathcal{S}^{-1}, \\
b^* &\sim \pi(1- \mathcal{S} ), \\
\tilde{k}^* &\sim \mathcal{S}^{-1/2}, \\
\tilde{\epsilon}^* &\sim
\mathcal{S}^{-1}. \label{eq:highDa_infinitePe_S_scaling}
\end{aligned}$$
Second we consider the case of diffusion controlled growth ($\mathrm{Da} \gg \mathrm{Pe} $). As before, the growth rate approaches a constant as $\mathcal{S}$ increases, namely
\[eq:highDa\_highS\_limit\] $$\begin{aligned}
\hat{k}^* &\rightarrow (\pi/2)^{1/2} , \\
\hat{\epsilon}^* &\rightarrow \pi .
\end{aligned}$$
For $\mathcal{S}\ll 1$, as before, $a\sim \mathcal{S}^{-1}$ and we obtain
$$\begin{aligned}
\hat{k}^* &\sim (2 \mathcal{S})^{-1/2}, \\
\hat{\epsilon}^* &\sim\mathcal{S}^{-1}. \label{eq:highDa_S_scaling}
\end{aligned}$$
These asymptotic results are consistent with numerical results (figure \[fig:S\_scalings\]). Indeed, figure \[fig:S\_scalings\] shows that compaction reduces the growth rate and increases the wavenumber of the most unstable mode relative to the rigid-medium limit $\mathcal{S} \to \infty$. The numerical calculations show that the rigid-medium limit is approximately attained when $\mathcal{S} \gtrsim 1 $.
![Dependence on matrix compaction (stiffness $\mathcal{S}$) in the two regimes $\mathrm{Pe} \gg \mathrm{Da}$ (panels *a, b*) and $\mathrm{Da} \gg \mathrm{Pe}$ (panels *c, d*). Panels (*a, c*) show the maximum growth rate, and panels (*b, d*) show the corresponding wavenumber. Solid black curves are numerical calculations of the full cubic dispersion relation . Dashed red curves (which are almost indistinguishable) are asymptotic results in the limit of large $\mathrm{Da}$. Blue dot-dashed lines are asymptotic results in the limit of small $\mathcal{S}$. []{data-label="fig:S_scalings"}](Figure5-eps-converted-to.pdf){width="1.0\linewidth"}
The scalings for wavenumber and growth rate are the same in terms of the power-law dependence on $\mathcal{S}$. In either case, a compactible medium is less unstable than a rigid medium. That is, compaction stabilises the system. We can interpret equations and in terms of a critical stiffness such that the instability occurs when ${\mathcal{S}} \geq {\mathcal{S}}_\mathrm{crit}$ where
$$\begin{aligned}
{\mathcal{S}}_\mathrm{crit} &\propto \frac{1}{\mathrm{Da }}
\qquad (\mathrm{Pe} \gg \mathrm{Da}), \label{eq:Scrit_scaling1} \\
{\mathcal{S}}_\mathrm{crit} &\propto
\frac{1}{\sqrt{\mathrm{Da Pe}}}
\qquad(\mathrm{Da}\gg\mathrm{Pe}). \label{eq:Scrit_scaling2}
\end{aligned}$$
The critical stiffness occurs when the destabilising influence of reaction balances the stabilising influence of compaction (see §\[sec:physical-discussion-2\]).
We can also estimate the aspect ratio $\mathcal{A}$ of the instability for the case $\mathcal{S}\ll1$ by noting that $a \sim \mathcal{S}^{-1}$. The ratio of horizontal to vertical length scale is approximately $\mathcal{A} \sim a/ k^*$. Substituting in the wavenumber scalings, we find
$$\begin{aligned}
\mathcal{A} &\propto (\mathcal{S} \mathrm{Da })^{-1/2}
\qquad (\mathrm{Pe} \gg \mathrm{Da}), \label{eq:A_scaling1} \\
\mathcal{A} &\propto (\mathcal{S}^2 \mathrm{Da } \mathrm{Pe })^{-1/4}
\qquad(\mathrm{Da}\gg\mathrm{Pe}). \label{eq:A_scaling2}
\end{aligned}$$
The horizontal scale of the instability is generally small compared to the vertical scale, but the aspect ratio approaches unity near $ {\mathcal{S}}_\mathrm{crit}$. Thus our assumption that vertical diffusion is negligible compared to horizontal diffusion becomes less valid as we approach $ {\mathcal{S}}_\mathrm{crit}$. However, for the rigid medium case $ \mathcal{S} \gtrsim 1$, the aspect ratio is always small. Furthermore, for the geologically relevant parameters considered in §\[sec:geological\_discussion\], the aspect ratio is predicted to be small, *i.e.* the horizontal scale is much smaller than the vertical.
Numerical investigation of the critical stiffness {#sec:Scrit}
-------------------------------------------------
We next test these asymptotic predictions of a critical stiffness by numerically calculating the dispersion relationship at successive values of $\mathcal{S} \rightarrow {\mathcal{S}}_\mathrm{crit}^+$. Figure \[fig:Scrit\_explore\] shows that (*a*) the dispersion relationship forms closed loops whose size approaches zero; (*b*) the perturbation is localized in an $O(\mathcal{S})$ boundary layer near the upper boundary. The latter observation is consistent with the asymptotic result that the vertical decay length is $1/a \sim \mathcal{S}$ for small $\mathcal{S}$ (§\[sec:compaction\]). We estimate the critical value $ {\mathcal{S}}_\mathrm{crit}$ using the method described in appendix \[app:Scrit\], and map out the dependence on Damköhler number and Péclet number.
![Behaviour as $\mathcal{S} \rightarrow {\mathcal{S}}_\mathrm{crit}^+$ (example with $\mathrm{Da}=\mathrm{Pe}=10$). (*a*) Series of closed loops in $(k,\sigma)$-space as $\mathcal{S}$ decreases toward the critical value \[purple to light blue; black dot-dashed loop corresponds to the final iteration; method described in appendix \[app:Scrit\]\]. (*b*) The porosity perturbation $\phi_1$ corresponding to the most unstable mode indicated by the diamond symbol in panel (*a*). []{data-label="fig:Scrit_explore"}](Figure6-eps-converted-to.pdf){width="1.0\linewidth"}
Figure \[fig:Scrit\_results\](*a*) shows the dependence of $\mathcal{S}_\mathrm{crit}$ on $\mathrm{Da}$ at $\mathrm{Pe}=10,10^2,10^3,10^4$. The calculations with high $\mathrm{Pe}$ support the prediction of equation that ${\mathcal{S}}_\mathrm{crit} \propto {\mathrm{Da }}^{-1} $ when $\mathrm{Pe} \gg \mathrm{Da}$. The calculations with lower $\mathrm{Pe}$ support the prediction of equation that ${\mathcal{S}}_\mathrm{crit} \propto {\mathrm{Da }}^{-1/2} $ when $\mathrm{Da}\gg\mathrm{Pe}$; they are also consistent with the predicted $\mathrm{Pe}^{-1/2}$ dependence. By estimating the prefactors numerically, we obtain the following scalings:
$$\begin{aligned}
{\mathcal{S}}_\mathrm{crit} &\sim \frac{1}{\mathrm{Da }}
\qquad (\mathrm{Pe} \gg \mathrm{Da}), \label{eq:Scrit_scaling3} \\
{\mathcal{S}}_\mathrm{crit} & \sim
\frac{2}{\sqrt{\mathrm{Da Pe}}}
\qquad(\mathrm{Da}\gg\mathrm{Pe}). \label{eq:Scrit_scaling4}
\end{aligned}$$
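For reference, a trivial helper that evaluates these estimates follows (illustrative only; the sharp switch at $\mathrm{Pe} = \mathrm{Da}$ is a simplifying assumption, since the crossover between the two regimes is not abrupt):

```python
# Sketch: rough estimate of the critical stiffness from the fitted scalings above.
def S_crit(Da, Pe):
    return 1.0 / Da if Pe >= Da else 2.0 / (Da * Pe) ** 0.5

print(S_crit(Da=1e2, Pe=1e4), S_crit(Da=1e4, Pe=1e2))   # illustrative values (assumed)
```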
Note that equation is consistent with the numerical results of @aharonov95 when $\mathcal{S} = O(1)$, although they did not obtain the other limit, equation .
Figure \[fig:Scrit\_results\](*b*) shows that, across the range of parameters considered, the wavenumber at $\mathcal{S}_\mathrm{crit}$ obeys the scaling $$\label{eq:Scrit_k-scaling}
k(\mathcal{S}_\mathrm{crit}) \sim (\mathrm{Da} \mathrm{Pe})^{1/2}.$$ This is the same as the scaling for the large wavenumber cutoff when $\mathrm{Da} \gg \mathrm{Pe}$, identified in §\[sec:wavenumber\]. However, we see in §\[sec:Scrit\_analysis\] that a different scaling eventually emerges at higher $\mathrm{Pe}$.
Finally, figure \[fig:Scrit\_results\](*c*) shows that the growth rate at $\mathcal{S}_\mathrm{crit}$ appears to approach a (non-zero) constant at large $\mathrm{Da}$, which might be independent of $\mathrm{Pe}$, a hypothesis we confirm in §\[sec:Scrit\_analysis\].
![(*a*) The critical stiffness $\mathcal{S}_\mathrm{crit}$ as a function of $\mathrm{Da}$ at $\mathrm{Pe}=10,10^2,10^3,10^4$ (from dark purple to light blue). We also show power law scalings $\mathrm{Da}^{-1}$ (dash-dotted) and $\mathrm{Da}^{-1/2}$ (dotted). (*b*) The corresponding wavenumber, which obeys the scaling . (*c*) The corresponding growth rate, which appears to approach a constant at large $\mathrm{Da}$. []{data-label="fig:Scrit_results"}](Figure7-eps-converted-to.pdf){width="3.4950in"}
Analysis of behaviour near the critical stiffness {#sec:Scrit_analysis}
-------------------------------------------------
We now analyze the structure of the bifurcation at $\mathcal{S}_\mathrm{crit}$. Our goal is to complement the numerical results obtained previously by mapping out the bifurcation structure and obtaining asymptotic results at very large $\mathrm{Da}$ and $\mathrm{Pe}$, regimes that were hard to achieve numerically.
We proceed by rescaling equations (\[eq:ab\_full\_1+2\]*a*,*b*). As we have seen previously, there are two distinguished limits depending on the relative magnitude of $\mathrm{Da}$ to $\mathrm{Pe}$.
### Case: $\mathrm{Pe} \gg \mathrm{Da} \gg 1$
We first consider the case in which chemical diffusion is negligible. We use the rescaling
$$\begin{aligned}
\tilde{x} & =k^2 \mathrm{Da}^{-2}, \\
\tilde{a} & = a \mathrm{Da}^{-1}, \\
\tilde{\mathcal{S}} & = \mathcal{S} \mathrm{Da}, \\
\tilde{\sigma} & = \sigma n^{-1},
\end{aligned}$$
(extending the scaling first introduced in §\[sec:advection-controlled\]). Then we note that $\mathcal{K} = 1 + k^2/ \mathrm{Da}\mathrm{Pe} = 1+ \tilde{x}(\mathrm{Da}/ \mathrm{Pe})$, so $\mathcal{K} \sim 1$. We also note that $n/\mathrm{Da}\mathcal{S} = n\tilde{\mathcal{S}}^{-1}$, so this term is retained at leading order. Thus, to leading order, equations (\[eq:ab\_full\_1+2\]*a*,*b*) become, respectively,
$$\begin{aligned}
\label{eq:ab_ScritA_1}
\tilde{a} &= \frac{\left( \tilde{\mathcal{S}}^{-1} +
\tilde{\sigma} \tilde{x} \right)}{{2 \left( \tilde{\sigma}
- \tilde{\mathcal{S}}^{-1} \right)}}, \\
\tilde{a}^2 &=\frac{\left( 1- \tilde{\sigma} \right) \tilde{x}}{{
\left( \tilde{\sigma} - \tilde{\mathcal{S}}^{-1} \right)}} . \label{eq:ab_ScritA_2}
\end{aligned}$$
We can eliminate $\tilde{a}$ and rearrange into a quadratic for $\tilde{\sigma}$: $$\label{eq:quadratic_sigma}
\tilde{\mathcal{S}}^{2} \tilde{x} (4 + \tilde{x} ) \tilde{\sigma}^2 -2 \tilde{\mathcal{S}} \tilde{x} (1+2\tilde{\mathcal{S}} ) \tilde{\sigma} +(1+4\tilde{x} \tilde{\mathcal{S}}) = 0.$$ There are repeated roots when the discriminant of the quadratic is zero, corresponding to the left and right hand limits of the loops shown in figure \[fig:Scrit\_explore\](*a*). The discriminant is $$\tilde{\Delta} = -16 \tilde{x} \tilde{\mathcal{S}}^2 \left[ \tilde{x}^2 \tilde{\mathcal{S}} + \tilde{x} (3 \tilde{\mathcal{S}} -\tilde{\mathcal{S}}^2) +1 \right].$$ Given that $\tilde{\mathcal{S}} >0$ and $\tilde{x}\propto k^2 >0$, the roots $\tilde{\Delta} = 0$ must satisfy the quadratic equation $$\label{eq:Delta3}
\tilde{x}^2 \tilde{\mathcal{S}} + \tilde{x} (3 \tilde{\mathcal{S}} -\tilde{\mathcal{S}}^2) +1 = 0 .$$ The bifurcation (at the critical matrix stiffness) occurs when the discriminant of this quadratic is also zero, that is, when $$0 = (3 \tilde{\mathcal{S}} -\tilde{\mathcal{S}}^2)^2 - 4 \tilde{\mathcal{S}} = \tilde{\mathcal{S}} (\tilde{\mathcal{S}} -1)^2(\tilde{\mathcal{S}} -4) .$$ The root $ \tilde{\mathcal{S}} = 0$ is excluded because $ \tilde{\mathcal{S}} >0$. The double root $\tilde{\mathcal{S}} = 1$ is excluded because it corresponds to a repeated root $\tilde{x} = -1$ in equation , which is unphysical since $\tilde{x} \propto k^2 > 0$. The only physically meaningful root is $\tilde{\mathcal{S}} = 4$, which corresponds to a repeated root $\tilde{x} = 1/2$ in equation . We substitute back into equation and find that the corresponding $\tilde{\sigma} =1/2$.
This gives us the critical matrix stiffness and the corresponding properties of the solution (horizontal and vertical wavenumbers and growth rate). In summary, we find that $$\mathcal{S}_\mathrm{crit} = 4 \mathrm{Da}^{-1}, \quad
k_\mathrm{crit} \sim \mathrm{Da} /\sqrt{2} , \quad
a_\mathrm{crit} \sim \mathrm{Da}, \quad
\sigma_\mathrm{crit} \sim n /2, \quad ( \mathrm{Pe} \gg \mathrm{Da} \gg 1).$$ In the numerical results (§\[sec:Scrit\]), we found the same $\mathcal{S}_\mathrm{crit} \propto \mathrm{Da}^{-1}$ scaling, albeit with a different prefactor. However, we did not observe the $k_\mathrm{crit} \propto \mathrm{Da}$ scaling (independent of $\mathrm{Pe}$), which indicates that our numerical calculations were not performed at sufficiently high $\mathrm{Pe}$ to observe the asymptotic regime. Our analysis in this section allows access to that regime. Furthermore, the behaviour seen in figure \[fig:Scrit\_explore\], in which the dispersion loops emerge at a finite (non-zero) value of the growth rate $\sigma$, arises here as a generic feature of the bifurcation.
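The algebra above is short enough to verify symbolically. The following sketch (not part of the original analysis) reproduces the chain from the leading-order relations to $\tilde{\mathcal{S}}_\mathrm{crit} = 4$, $\tilde{x} = 1/2$ and $\tilde{\sigma} = 1/2$ using sympy; the symbol names are arbitrary.

``` python
import sympy as sp

x, s, S = sp.symbols('x_t sigma_t S_t', positive=True)

# leading-order relations (Pe >> Da): a~ from the first, substituted into the second
a = (1/S + s*x) / (2*(s - 1/S))
quad = sp.expand(sp.cancel((a**2 - (1 - s)*x/(s - 1/S)) * 4*(s - 1/S)**2 * S**2))
print(sp.collect(quad, s))                    # S~^2 x(4+x) s^2 - 2 S~ x(1+2S~) s + (1+4xS~)

# repeated roots in sigma~: the discriminant in sigma~ must vanish
print(sp.factor(sp.discriminant(quad, s)))    # -16 S~^2 x [S~ x^2 + (3S~ - S~^2) x + 1]

# the bracket is a quadratic in x~; the bifurcation is where *its* discriminant vanishes
bracket = S*x**2 + (3*S - S**2)*x + 1
print(sp.factor(sp.discriminant(bracket, x))) # S~ (S~ - 1)^2 (S~ - 4)
print(sp.solve(bracket.subs(S, 4), x))        # x~ = 1/2
print(sp.solve(quad.subs({S: 4, x: sp.Rational(1, 2)}), s))  # sigma~ = 1/2
```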
### Case: $\mathrm{Da} \gg \mathrm{Pe}$
We now consider the opposite case, in which advection of the liquid undersaturation is negligible relative to diffusion. We apply the same methodology as before. We use the rescaling
$$\begin{aligned}
\hat{x} & =k^2 ( \mathrm{Da} \mathrm{Pe})^{-1}, \\
\hat{a} & = a ( \mathrm{Da} \mathrm{Pe})^{-1/2}, \\
\hat{\mathcal{S}} & = \mathcal{S}( \mathrm{Da} \mathrm{Pe})^{1/2}, \\
\hat{\sigma} & = \sigma n^{-1},
\end{aligned}$$
(the scaling extends that introduced in §\[sec:diffusion-controlled\]). We note that $\mathcal{K} = 1 + k^2/ \mathrm{Da}\mathrm{Pe} = 1+ \hat{x} $. Again, $1/\mathrm{Da}\mathcal{S} \sim (\mathrm{Pe}/\mathrm{Da})^{1/2} \ll 1$, so this term can be neglected at leading order. Thus, to leading order, equations (\[eq:ab\_full\_1+2\]*a*,*b*) become, respectively,
$$\begin{aligned}
\label{eq:ab_ScritB_1}
\hat{a} &= \frac{1}{2 \hat{\sigma} \hat{\mathcal{S}} }, \\
\hat{a}^2 &=\frac{\left( 1- \hat{\sigma}(1+\hat{x}) \right) \hat{x}}{
\hat{\sigma} (1+\hat{x}) } . \label{eq:ab_ScritB_2}
\end{aligned}$$
We find a quadratic for $\hat{\sigma}$: $$\label{eq:quadratic_sigmaB}
4 \hat{\mathcal{S}}^{2} \hat{x} (1+ \hat{x} ) \hat{\sigma}^2 - 4 \hat{\mathcal{S}}^2 \hat{x} \hat{\sigma} +(1+\hat{x} ) = 0,$$ whose repeated roots are zeros of the discriminant $$\hat{\Delta} = -16 \hat{x} \hat{\mathcal{S}}^2 \left[ \hat{x}^2 + \hat{x} (2 -\hat{\mathcal{S}}^2) +1 \right].$$ Again, we have $\hat{\mathcal{S}} >0$ and $\hat{x}\propto k^2 >0$, so roots $\hat{\Delta} = 0$ satisfy $$\label{eq:Delta3B}
\hat{x}^2 + \hat{x} (2 -\hat{\mathcal{S}}^2) +1 = 0.$$ The bifurcation occurs when the discriminant of this quadratic is also zero, that is, when $$0 = (2 -\hat{\mathcal{S}}^2)^2 - 4 = \hat{\mathcal{S}}^2 (\hat{\mathcal{S}} +2) (\hat{\mathcal{S}} -2) .$$ The only physically meaningful root is $\hat{\mathcal{S}} = 2$, which corresponds to a repeated root $\hat{x} = 1$ in equation . We substitute back into equation and find that the corresponding $\hat{\sigma} =1/4$. In conclusion, $$\mathcal{S}_\mathrm{crit} = 2( \mathrm{Da} \mathrm{Pe})^{-1/2} , \quad
k_\mathrm{crit}, a_\mathrm{crit}
\sim ( \mathrm{Da} \mathrm{Pe})^{1/2} , \quad
\sigma_\mathrm{crit} \sim n /4, \qquad ( \mathrm{Da} \gg \mathrm{Pe}).$$ In the numerical results (§\[sec:Scrit\]), we found the same scaling relationships (with the same prefactors), so our numerics were able to access this regime adequately. The analysis in this section additionally obtained $\sigma_\mathrm{crit} \sim n /4$. Therefore, in both regimes we find that the bifurcation at $\mathcal{S}_\mathrm{crit}$ results in an instability with a finite growth rate.
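A companion symbolic check for this distinguished limit, under the same caveats as before, confirms $\hat{\mathcal{S}}_\mathrm{crit} = 2$, $\hat{x} = 1$ and $\hat{\sigma} = 1/4$:

``` python
import sympy as sp

x, s, S = sp.symbols('x_h sigma_h S_h', positive=True)
a = 1/(2*s*S)                                     # leading-order relation for a^
quad = sp.expand(sp.cancel((a**2 - (1 - s*(1 + x))*x/(s*(1 + x))) * 4*s**2*S**2*(1 + x)))
print(sp.collect(quad, s))                        # 4 S^2 x (1+x) s^2 - 4 S^2 x s + (1+x)

bracket = x**2 + (2 - S**2)*x + 1                 # factor of the sigma^-discriminant
print(sp.factor(sp.discriminant(bracket, x)))     # S^2 (S - 2) (S + 2)
print(sp.solve(bracket.subs(S, 2), x))            # x^ = 1
print(sp.solve(quad.subs({S: 2, x: 1}), s))       # sigma^ = 1/4
```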
Physical discussion of instability mechanism (part II: wavelength selection) {#sec:physical-discussion-2}
----------------------------------------------------------------------------
In §\[sec:physical-discussion-1\], we explained the basic structure of the physical instability mechanism. We found that there is an enhanced vertical flux in proto-channels (regions of positive porosity perturbation) caused by the porosity-dependent permeability. This vertical flux across a background equilibrium concentration gradient dissolves the solid matrix, increasing the porosity, and establishing an instability that grows at a rate $\sigma \sim n$. In this section, we use the insights gained from our asymptotic analysis to explain the physical controls on the vertical and horizontal length scales of the instability, and on the critical matrix stiffness. All of the following estimates are consistent with the results of our asymptotic analysis and numerical calculations.
We derive scalings focussing on the more interesting case of a compacting porous medium ($\mathcal{S} \ll 1$). Results for a rigid porous medium (up to an unknown prefactor) can be obtained by substituting $\mathcal{S} = 1$ into the subsequent scalings, consistent with our numerical results that the rigid medium limit applies when $\mathcal{S} \gtrsim 1$.
First, we consider the vertical length scale of the instability at fixed horizontal wavenumber. In a compacting porous medium, mass conservation implies that gradients in porosity are sources or sinks of compaction pressure, as expressed in equation , which we now rewrite by substituting expressions for the base state variables: $$\label{eq:scaling_5}
\nabla^2 \mathcal{P}_1 =
\frac{n }{\mathcal{S}} \frac{\partial \phi_1}{\partial z}.$$ We next substitute in the balance between compaction and porosity change from equation , namely $\sigma \phi_1 \sim \mathcal{P}_1$, use $\sigma \sim n$, and scale $\partial_z \sim m$, neglecting horizontal derivatives at fixed $k$, to obtain $$\label{eq:scaling_6}
m \sim \mathcal{S}^{-1}.$$ So the vertical structure is controlled by the matrix stiffness. In the rigid medium case, $m\sim 1$ and the instability extends through the full depth of the melting region.
Second, we consider the horizontal length scale of the most unstable mode $k^*$. We combine equations , & to obtain the estimate $$\label{eq:scaling_7}
-k^2 \mathcal{P}_1 \sim \chi_1 / \mathcal{S}^2.$$ Physically, reactive dissolution in the channels requires a convergent flow of liquid into the proto-channels, which must be down a gradient in the compaction pressure. Then, by substituting and into the liquid concentration equation , we find that $$\label{eq:scaling_8}
\mathcal{S} \frac{\partial \mathcal{P}_1 }{\partial z} - \mathcal{P}_1
\sim - \frac{1 }{\mathrm{Da}} \frac{ \partial \chi_1 }{\partial z }
+ \frac{1}{\mathrm{Da}\mathrm{Pe}}\frac{\partial^2 \chi_1 }{\partial x^2}.$$ Both terms on the left-hand-side are $O(\mathcal{P}_1)$. When $\mathrm{Pe} \gg \mathrm{Da}$, diffusion on the right-hand-side is negligible compared to advection of the liquid undersaturation. Then by substituting equation we find $$\label{eq:scaling_9}
\quad k^{*2} \sim \frac{\mathrm{Da} }{\mathcal{S}}, \qquad (\mathrm{Pe} \gg \mathrm{Da}).$$ Conversely, if $\mathrm{Da} \gg \mathrm{Pe}$, diffusion dominates the right-hand-side of equation and we find: $$\label{eq:scaling_10}
k^{*2} \sim \frac{\sqrt{\mathrm{Da} \mathrm{Pe} }}{\mathcal{S}}, \qquad (\mathrm{Da} \gg \mathrm{Pe}).$$ Physically, the perturbed compaction-driven advection against the equilibrium concentration gradient is balanced by either advection or diffusion of liquid undersaturation. These results mean that the most unstable horizontal wavelength is proportional to the compaction length (recall that $\mathcal{S} \propto \delta^2$). However, it is much smaller than the compaction length since $\mathrm{Da} \gg 1$, as seen in the 2D numerical calculations of @spiegelman01.
Third, if the matrix stiffness $\mathcal{S}$ is reduced below some critical value, then the stabilising influence of compaction is dominant over the destabilising influence of reactive melting such that the instability is suppressed. We can obtain an estimate of this critical value as follows. At the critical value, equation gives that compaction balances reaction, so $-\mathcal{P}_1 \sim \chi_1$. We next use equation , to obtain $-\mathcal{P}_1 \sim n w_0 \phi_1$. We substitute into equation to obtain $$\label{eq:scaling_11}
{\mathcal{S}}_\mathrm{crit} \sim m/k^{*2}.$$ We then substitute in our estimates of the vertical ($m$) and horizontal ($k^{*}$) wavenumbers to obtain:
$$\begin{aligned}
{\mathcal{S}}_\mathrm{crit} &\sim \frac{1}{\mathrm{Da }},
\qquad (\mathrm{Pe} \gg \mathrm{Da}), \\
{\mathcal{S}}_\mathrm{crit} &\sim
\frac{1}{\sqrt{\mathrm{Da Pe}}},
\qquad(\mathrm{Da}\gg\mathrm{Pe}).
\end{aligned}$$
Conversely, we could interpret equation in terms of a minimum wavenumber for growth $$\label{eq:scaling_12}
k^2_\mathrm{min} \sim m/\mathcal{S} \sim 1/\mathcal{S}^2 \quad \Rightarrow \quad k_\mathrm{min} \sim 1/\mathcal{S}.$$ Thus the maximum wavelength for the instability is proportional to $\beta \delta^2 / \alpha$: the product of the compaction length and the amount of reactive melting over a compaction length. In the rigid medium limit ($\mathcal{S} \gg 1$) the vertical wavelength is the full height of the domain ($m\sim 1$). Thus $$\label{eq:scaling_13}
k^2_\mathrm{min} \sim m/\mathcal{S} \sim 1/\mathcal{S} \quad \Rightarrow \quad k_\mathrm{min} \sim 1/\mathcal{S}^{1/2}.$$
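For convenience, the order-of-magnitude estimates of this subsection can be collected into a single helper. The function below is only a summary of the scalings derived above, with prefactors omitted and the regime selected by comparing $\mathrm{Pe}$ with $\mathrm{Da}$; it is illustrative rather than a quantitative prediction.

``` python
def compaction_limited_scalings(S, Da, Pe, n=3):
    """Rough scalings for a compacting medium (S << 1); prefactors omitted."""
    k_star_sq = Da/S if Pe > Da else (Da*Pe)**0.5/S      # most unstable wavenumber squared
    return {
        "m": 1.0/S,                                      # vertical wavenumber
        "k_star": k_star_sq**0.5,                        # horizontal wavenumber
        "k_min": 1.0/S,                                  # minimum wavenumber for growth
        "S_crit": 1.0/Da if Pe > Da else 1.0/(Da*Pe)**0.5,
        "sigma": float(n),                               # growth rate ~ n
    }

print(compaction_limited_scalings(S=1e-2, Da=1e4, Pe=1e6))
```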
Geological discussion {#sec:geological_discussion}
=====================
Geologically significant predictions of this model include the conditions under which the reaction-infiltration instability occurs and the size and spacing of the resulting channels. We found earlier that the length scale of the reaction-infiltration instability can be limited by either advection or diffusion. To cover both of these regimes, it is instructive to introduce a reactive length scale $L_\mathrm{eq}$ $$L_\mathrm{eq} = \left\{
\begin{array}{ll}
L_{w} \equiv \frac{\phi_0 w_0}{\alpha R} ,
& \mathrm{Pe} \gg \mathrm{Da}\text{ (advection controlled)}, \\[2pt]
L_{D}\equiv 2 \left(\frac{\phi_0 D}{\alpha R}\right)^{1/2},
& \mathrm{Da} \gg \mathrm{Pe} \text{ (diffusion controlled)}.
\end{array} \right.$$ $L_w$ is the distance a chemical component is transported by the background liquid flow over the reaction timescale. $L_D$ is the distance a chemical component can diffuse in the liquid over the reaction timescale. The factor of 2 is introduced to simplify the dimensional estimates given later in this section. $L_\mathrm{eq}$ is a generalization of the length scale introduced by @aharonov95. The condition for the advection-controlled instability (rather than the diffusion-controlled case) is $ \mathrm{Pe} \gg \mathrm{Da}$. This is equivalent to the statement $L_D \ll L_{w}$, and thus $L_\mathrm{eq} \sim \max \left( L_{w}, L_D\right)$. With this definition, the most unstable wavelength $\lambda^*$ for the instability can be written $$\label{eq:dim_wavelength}
\lambda^* = \left\{
\begin{array}{ll}
2\pi \lambda_c (L_\mathrm{eq} H)^{1/2}, & \mathcal{S} \gtrsim 1, \\[2pt]
2\pi \delta (L_\mathrm{eq} \beta / \alpha)^{1/2}, & \mathcal{S} \ll 1.
\end{array} \right.$$ In the former case, we introduced a prefactor $\lambda_c$ that, as $\mathcal{S}\rightarrow \infty$, satisfies $\lambda_c \rightarrow 0.5268$ in the case $\mathrm{Pe} \gg \mathrm{Da}$, and $\lambda_c \rightarrow \pi^{-1/2} \approx 0.5642$ in the case $\mathrm{Da} \gg \mathrm{Pe}$. Note that if the reaction rate were infinitely fast (*i.e.* if the liquid chemistry were at equilibrium), then the equilibrium length scale would be zero, and the channels would be arbitrarily small. This potentially explains why channels localize to the grid scale in some numerical calculations based on an equilibrium formulation [for example, @hewitt10].
The vertical length scale of the instability $\lambda_v$ is approximately $$\label{eq:dim_verticallength}
\lambda_v \sim \left\{
\begin{array}{ll}
H, & \mathcal{S} \gtrsim 1, \\[2pt]
\delta^2 \beta / \alpha, & \mathcal{S} \ll 1.
\end{array} \right.$$ Thus the channels occupy the full depth of the melting region in the case of a rigid medium ($\mathcal{S} \gtrsim 1$) and have a length proportional to the square of the compaction length when $\mathcal{S} \ll 1$. The condition $ \mathcal{S} \ll 1$ delineates the compaction-limited instability. In dimensional terms, $$\mathcal{S} \ll 1 \quad \Leftrightarrow \quad \delta \ll \left( \frac{\alpha H }{\beta} \right)^{1/2}.$$
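As an illustration of the expression for $\lambda^*$, the following sketch evaluates the most unstable wavelength in the two limits. The prefactors and the hard switch at $\mathcal{S}=1$ are simplifications of the $\mathcal{S} \gtrsim 1$ and $\mathcal{S} \ll 1$ regimes described above, and the example values are the central estimates of table \[tab:parameters\]; the function name is ours, not the paper's.

``` python
import numpy as np

def most_unstable_wavelength(S, delta, L_eq, H, beta, alpha, advection_controlled=True):
    if S >= 1:                                        # rigid-medium limit (S >~ 1)
        lambda_c = 0.5268 if advection_controlled else np.pi**-0.5
        return 2*np.pi*lambda_c*np.sqrt(L_eq*H)
    return 2*np.pi*delta*np.sqrt(L_eq*beta/alpha)     # compaction-limited limit (S << 1)

# central estimates: delta = 1 km, L_eq ~ L_w = phi0*w0/(alpha*R) ~ 1e-3 m
print(most_unstable_wavelength(S=2.5e-5, delta=1e3, L_eq=1e-3, H=8e4,
                               beta=2e-6, alpha=1.0))   # a few tens of centimetres
```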
We also identified the critical condition for the instability to occur. This condition can be written in terms of a critical compaction length $\delta_\text{crit}$ and the reaction length $L_\mathrm{eq}$: $$\label{eq:critical_compaction_length}
\delta > \delta_\text{crit} \propto \left( \frac{\alpha L_\mathrm{eq} }{\beta} \right)^{1/2}.$$ Note that @aharonov95 claim that the instability occurs when the compaction length is much larger than the reaction length. Equation shows that the relevant length scale is $\left( {\alpha L_\mathrm{eq} }/{\beta} \right)^{1/2}$, which depends on both the reaction length $L_\mathrm{eq}$ and also on the solubility gradient $\beta$. Indeed, the numerical results of @aharonov95 are consistent with equation .
We now seek to identify the region in parameter space relevant for the partially molten upper mantle. We provide geologically plausible parameter values in table \[tab:parameters\]. Using the central estimates of these parameters, the dimensionless parameters considered previously can be estimated as follows: the Damköhler and Péclet numbers are $\mathrm{Da} \approx \mathrm{Pe} \approx 8\times10^7$, the reactivity is $\mathcal{M} \approx 0.16$, and the matrix stiffness is $\mathcal{S} \approx 2.5\times10^{-5}$, consistent with our asymptotic approximations $\mathrm{Da},\mathrm{Pe}\gg1$ and $\mathcal{M}\ll 1$. The aspect ratio of the most unstable mode is $\mathcal{A} \approx 0.02$, consistent with our assumption that the channels are much narrower in the horizontal than in the vertical.
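Most of these estimates can be reproduced directly from table \[tab:parameters\]. The nondimensional combinations used in the snippet below ($\mathrm{Da} = \alpha R H/\phi_0 w_0$, $\mathrm{Pe} = \phi_0 w_0 H/\phi_0 D$, $\mathcal{S} = \delta^2\beta/\alpha H$) are our reading of the scalings quoted in this section (for example $\mathcal{S} \propto \delta^2$ and $\mathcal{S}\ll 1 \Leftrightarrow \delta \ll (\alpha H/\beta)^{1/2}$), so they should be treated as assumptions rather than restatements of the definitions given earlier in the paper.

``` python
# back-of-envelope check using the central estimates of table 1
alpha, beta = 1.0, 2e-6            # compositional offset, solubility gradient (1/m)
H, delta = 8e4, 1e3                # melting-region depth, compaction length (m)
phi0_w0, phi0_D, R = 3e-11, 3e-14, 3e-8   # melt flux (m/s), diffusivity (m^2/s), reaction rate (1/s)

Da = alpha*R*H/phi0_w0             # ~ 8e7
Pe = phi0_w0*H/phi0_D              # ~ 8e7
S = delta**2*beta/(alpha*H)        # ~ 2.5e-5
print(Da, Pe, S)
```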
Returning to dimensional units, if $H = 80$ km (the total depth of the primary melting region; melting may occur deeper in the presence of volatile species) and $\beta = 2 \times 10^{-6}$ m$^{-1}$ [@aharonov95] and $\alpha \approx 1$, then $\left({\alpha H }/ {\beta} \right)^{1/2} \approx 200$ km. The compaction length is typically smaller than this in the mantle, so the instability is likely to be limited by compaction, rather than the total height $H$. For the compaction-limited instability $(\mathcal{S} \ll 1)$, the most unstable wavelength is proportional to the compaction length and the square root of the amount of chemical disequilibrium that occurs over the height $L_\mathrm{eq}$. The case of the rigid medium is rather different. Here, the most unstable wavelength is the geometric mean of $L_\mathrm{eq}$ and the total height $H$, and is independent of the solubility gradient $\beta$.
| Variable (unit) | Symbol | Estimate (range) |
|---------------------------------|---------------|--------------------------------------------------------|
| Permeability exponent | $n$ | $3$ $(2-3)$ |
| Solubility gradient (m$^{-1}$) | $\beta$ | $2\times10^{-6}$ ($10^{-6}-4\times10^{-6}$) |
| Compositional offset | $\alpha$ | $1$ |
| Melting region depth (m) | $H$ | $8\times10^{4}$ |
| Compaction length (m) | $\delta$ | $10^{3}$ ($3\times10^2-10^4$) |
| Melt flux (ms$^{-1}$) | $\phi_0 w_0$ | $3\times10^{-11}$ ($5\times10^{-12}-2\times10^{-10}$) |
| Diffusivity (m$^2$s$^{-1}$) | $\phi_0 D$ | $3\times10^{-14}$ ($10^{-15}-10^{-12}$) |
| Reaction rate (s$^{-1}$) | $R$ | $3\times10^{-8}$ ($10^{-11}-10^{-4}$) |
: Estimates of parameter values with units specified (where relevant), following @aharonov95 as far as possible. For some variables, we consider a range of values to illustrate the range of possible behaviours. This reflects both uncertainty in the parameters themselves, and differences between geological settings. The extreme uncertainty in $R$ reflects uncertainty in the linear chemical dissolution rate and the internal surface area available for reaction. The estimate of $\beta$ is based on thermodynamic calculations [@kelemen95b].[]{data-label="tab:parameters"}
Based on the range of parameter values in table \[tab:parameters\], we suggest that $5\times10^{-8} \leq L_{w} \leq 20$ m, and $6\times10^{-6} \leq L_{D} \leq 0.6$ m. The overlap of these ranges suggests that both the advection-controlled and the diffusion-controlled instability are geologically relevant. In either case, $6\times10^{-6} \leq L_\mathrm{eq} \leq 20$ m. The corresponding range of critical compaction length is $2 \leq \delta_\text{crit} \leq 3 \times 10^{3}$ m. If the reaction is fast ($L_{w}$ is small, so $L_\mathrm{eq}$ and $\delta_\text{crit}$ are small), the critical compaction length is likely below the compaction length in the mantle (perhaps 300 m to 10 km), and the instability occurs. However, if the reaction is slow ($L_{w}$ and hence $L_\mathrm{eq}$ and $\delta_\text{crit}$ are large), the critical compaction length may exceed the compaction length in the mantle, which would suppress the instability. Assuming that the geological observations support channelisation allows us to estimate a lower bound on the reaction rate. We estimate a minimum reaction rate of $R_\mathrm{min}\approx \phi_0 w_0 / \beta \delta^2 \approx 1.5 \times 10^{-11}$ s$^{-1}$ based on the central parameter estimates in table \[tab:parameters\] (and a range $R_\mathrm{min}\approx 3\times10^{-14}-2\times10^{-9}$ s$^{-1}$).
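The ranges quoted in this paragraph follow from the formulas for $L_w$, $L_D$ and $R_\mathrm{min}$ given above together with the parameter ranges in table \[tab:parameters\]. A quick check (not the authors' script, and obtained simply by combining extreme parameter values) is:

``` python
import itertools

alpha = 1.0
beta, delta = 2e-6, 1e3                      # central estimates
flux = (5e-12, 2e-10)                        # phi0*w0 range (m/s)
diff = (1e-15, 1e-12)                        # phi0*D range (m^2/s)
R = (1e-11, 1e-4)                            # reaction rate range (1/s)

L_w = [f/(alpha*r) for f, r in itertools.product(flux, R)]
L_D = [2*(d/(alpha*r))**0.5 for d, r in itertools.product(diff, R)]
print(min(L_w), max(L_w))                    # ~5e-8 to 20 m
print(min(L_D), max(L_D))                    # ~6e-6 to 0.6 m

R_min = 3e-11/(beta*delta**2)                # phi0*w0 / (beta * delta^2), central values
print(R_min)                                 # ~1.5e-11 1/s
```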
We next estimate the dominant wavelength. If the instability does occur, it is most unstable at a wavelength that is smaller than the compaction length by a factor $2\pi (L_\mathrm{eq} \beta / \alpha)^{1/2}$, where $1.5 \times 10^{-5} \leq 2\pi(L_\mathrm{eq} \beta / \alpha)^{1/2} \leq 5.5 \times 10^{-2}$. Thus the wavelength of the most unstable mode is much smaller than a compaction length. For example, a compaction length of 1 km would have a preferred spacing of 1.5 cm to 55 m. However, the upper end of this estimate corresponding to high $L_\mathrm{eq}$ has a critical compaction length of about 2 km, so the instability would be suppressed. Taking this into account, the largest wavelength expected would be around 25 m. As another example, a compaction length of 10 km would have a preferred spacing of 15 cm to 550 m; the critical compaction length is exceeded throughout the range. The even larger estimates of @aharonov95 are associated with the limit of a rigid medium $\mathcal{S} \gtrsim 1$, which is probably less geologically relevant.
![Summary of dimensional predictions. (*a*) The physical compaction length $\delta$ (units m) is generally larger than the critical value $\delta_\text{crit}$ (solid black line), allowing an instability to occur (the darker shaded region). At slower reaction rate $R$ (units s$^{-1}$), the instability may not occur (the lighter shaded region). The diamond marker indicates the transition from advection to diffusion controlled instability, which occurs around $R=10^{-8}$ s$^{-1}$. (*b*) The most unstable wavelength shown across a physically plausible range of compaction length. The light grey shaded region indicates where the instability is not predicted to occur, as in (*a*). Unless varied, we use the central estimates of parameter values listed in table \[tab:parameters\]. []{data-label="fig:dimensional_predictions"}](Figure8-eps-converted-to.pdf){width="1.0\linewidth"}
Figure \[fig:dimensional\_predictions\] summarises the geological implications of our results. Figure \[fig:dimensional\_predictions\](*a*) shows that the reaction-infiltration instability occurs robustly across a large part of the plausible parameter space (the dark grey region in panel (*a*) covers most of the range of compaction length expected in the upper mantle). The instability is suppressed by small compaction length and slow reaction rate. It is also suppressed by a high background melt flux (not shown in figure \[fig:dimensional\_predictions\]), because the equilibrium length $L_\mathrm{eq}$ increases with melt flux. Figure \[fig:dimensional\_predictions\](*b*) shows the predicted horizontal spacing of reactively dissolved channels. Where the instability occurs, we expect it to result in channelized flow on a scale ranging from centimetres to hundreds of meters, a range that is consistent with field observations of reactively dissolved channels [@braun02].
There are additional physical mechanisms, excluded from the present model, that may affect the reaction-infiltration instability. First, a greater degree of complexity in the thermodynamic modelling might be important [@hewitt10]. For example, volatile chemical species are thought to promote channelized magma flow [@keller16] and magma flow can alter the temperature structure [@reesjones18a]. Second, variations in the background vertical magma flux and solubility gradient [@kelemen95b] with depth are very likely to be important, since these drive the instability and control its characteristics. Third, rheology also significantly affects the instability. Indeed, @hewitt10 used a variable compaction viscosity that suppressed instability, as observed numerically by @spiegelman01. We discuss this important issue in appendix \[app:hewitt\]. Furthermore, it is plausible that reactive channelization is modified by large-scale shear deformation through a viscous feedback [@stevenson89; @holtzman03a]. Fourth, the nonlinear development of the instability and other finite-amplitude effects in the form of chemical and lithological heterogeneity of the mantle may be significant [@weatherley12; @katz12]. Such heterogeneity may be important because the growth rate of the linear reaction-infiltration instability is relatively slow [@spiegelman01].
![ Speculations regarding the potential consequences of a linear variation in solubility and magma flux with depth, shown in (*a*). In (*b*) we show that the number of channels per unit width decreases at shallower depths. We use the central estimates of parameter values listed in table \[tab:parameters\], except in the curve marked ‘Fast reaction rate’ ($R = 10^{-10}$ s$^{-1}$). The deepest part of the domain is always stable, although this is only visible in the case of fast reaction rate (plotted as a dashed part of the curve). This figure is intended to be interpreted qualitatively, so we do not number the horizontal axis.[]{data-label="fig:dimensional_predictions_zdep"}](Figure9-eps-converted-to.pdf){width="1.0\linewidth"}
We believe that these mechanisms merit detailed study. But to speculate about the second of these, we consider a hypothetical situation where the background melt flux and solubility gradient both increase linearly in $z$, as shown in figure \[fig:dimensional\_predictions\_zdep\](*a*). Then, our prediction gives an estimate of the corresponding most unstable wavelength, shown in figure \[fig:dimensional\_predictions\_zdep\](*b*). We find that there are no channels in the deepest part of the domain; channels emerge at shallower depth and progressively coarsen, perhaps due to channel coalescence. Channel coalescence also occurs in two-dimensional numerical calculations [@spiegelman01], even with a constant solubility gradient, due to the nonlinear development of the instability. It seems worthwhile to investigate further numerically.
The authors thank M. Spiegelman, J. Rudge, D. Hewitt, I. Hewitt, A. Fowler and M. Hesse for helpful discussions. We thank P. Kelemen and an anonymous referee for constructive reviews. D.R.J. acknowledges research funding through the NERC Consortium grant NE/M000427/1. The research of R.F.K. leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007–2013)/ERC grant agreement number 279925. We thank the Isaac Newton Institute for Mathematical Sciences for its hospitality during the programme Melt in the Mantle, which was supported by EPSRC Grant Number EP/K032208/1. We also thank the Deep Carbon Observatory of the Sloan Foundation.
Minimum and maximum wavenumbers {#app:kminmax}
===============================
The minimum and maximum wavenumbers for instability can be analysed by considering equations & which we reproduce here $$\begin{aligned}
0 &= \tan b + b/a, \nonumber \\
a &= \frac{\left( \frac{n\mathcal{K}}{\mathcal{S}} +
\frac{\sigma}{\mathrm{Da}} k^2 \right)}{{2 \left( \sigma
\mathcal{K} - \frac{n}{\mathrm{Da} \mathcal{S}} \right)}},\nonumber \\
b^2 + a^2 &=\frac{\left( n-\sigma \mathcal{K} \right) k^2}{{
\left( \sigma \mathcal{K} - \frac{n}{\mathrm{Da}
\mathcal{S}} \right)}} . \nonumber
\end{aligned}$$
Previously we assumed that $\sigma \sim n$, but this assumption can break down near the minimum or maximum wavenumbers. Instead, we make the assumption $ \sigma \mathcal{K} \gg n /\mathrm{Da} \mathcal{S}$ such that $ n /\mathrm{Da} \mathcal{S}$ can be neglected in the denominators. We verify this assumption *post hoc*. Then
$$\begin{aligned}
a &= \left( \frac{n}{\sigma}\frac{1}{ 2\mathcal{S}} +
\frac{k^2}{2\mathrm{Da} \mathcal{K}} \right), \\
b^2 + a^2 &=\left( \frac{n}{\sigma \mathcal{K}} - 1 \right) k^2.
\end{aligned}$$
We can eliminate $n/\sigma$ between these equations $$\label{eq:general_k}
b^2 + a^2 =\left( \frac{2 a \mathcal{S} }{ \mathcal{K}} - \frac{\mathcal{S} k^2}{\mathrm{Da} \mathcal{K}^2} - 1 \right) k^2.$$
We now simplify these equations for the cases of small and large wavenumber $k$. First, when $k$ is small, we assume that $k^2 \ll \mathrm{Da} \mathrm{Pe} $ (so $\mathcal{K} \sim 1$) and $k^2 \ll \mathrm{Da} / \mathcal{S} $, which again we verify *post hoc*. Then equation becomes $$\label{eq:small_k}
b^2 + a^2 \sim \left( {2 a \mathcal{S} } - 1 \right) k^2 \quad \Rightarrow \quad k^2 = \frac{b^2 \mathrm{cosec} ^2 b}{ -2 \mathcal{S} b\cot b - 1}.$$ Note that $\pi/2< b<\pi$ so $\cot b<0$. The minimum wavenumber corresponds to the turning point $d k/db = 0$. With some algebra, it is possible to show that this occurs when $$\label{eq:small_k_bc}
1 - b \cot b + \mathcal{S} b\left[ \cot b + b(1-\cot^2 b) \right] = 0.$$ There is a unique solution $b_c$ to this algebraic equation in $\pi/2< b<\pi$.
When $\mathcal{S} \gg 1$ (rigid medium), $b_c$ satisfies $ \cot b + b(1-\cot^2 b) = 0$. We find $b_c \approx 2.2467$, the corresponding $a_c \approx 1.8017$, and $$\label{eq:results_small_k_large_S}
k_\mathrm{min} \sim \frac{1.5171}{ \mathcal{S}^{1/2}}, \quad \sigma(k_\mathrm{min}) \sim 0.2775 \frac{n}{\mathcal{S}}, \quad (\mathcal{S} \gg 1).$$ This means that the instability operates at increasingly long wavelength as the matrix rigidity increases. Conversely, compaction stabilizes the long wavelength limit [@aharonov95].
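The numerical constants quoted here are easy to reproduce. The following sketch (an independent check, not from the original calculations) solves the turning-point condition for $\mathcal{S} \gg 1$ with a standard root bracket on $(\pi/2, \pi)$:

``` python
import numpy as np
from scipy.optimize import brentq

# turning-point condition for S >> 1: cot(b) + b*(1 - cot(b)^2) = 0 on (pi/2, pi)
f = lambda b: 1/np.tan(b) + b*(1 - 1/np.tan(b)**2)
b_c = brentq(f, np.pi/2 + 1e-6, np.pi - 1e-6)
a_c = -b_c/np.tan(b_c)                            # a = -b cot(b)
print(b_c, a_c)                                   # ~2.2467, ~1.8017
print(np.sqrt((a_c**2 + b_c**2)/(2*a_c)))         # k_min * S^{1/2} ~ 1.5171
print(1/(2*a_c))                                  # sigma(k_min) * S / n ~ 0.2775
```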
When $\mathcal{S} \ll 1$ (compactible medium), $b_c$ satisfies $ a \equiv -b\cot(b) = 1/\mathcal{S} $, and we find $$\label{eq:results_small_k_small_S}
\quad k_\mathrm{min} \sim \frac{1}{ \mathcal{S}}, \quad \sigma(k_\mathrm{min}) = \frac{n}{ 2 }, \quad (\mathcal{S} \ll 1).$$ We substitute all the results back into the assumptions we made and can show that they hold for sufficiently large $\mathrm{Da}$ and $\mathrm{Pe} $. More precisely, when $\mathcal{S} \gg 1$ we need $\mathcal{S} \mathrm{Da} \mathrm{Pe} \gg 1$ and $\mathrm{Da} \gg 1$. When $\mathcal{S} \ll 1$ we need $\mathcal{S}^2 \mathrm{Da} \mathrm{Pe} \gg 1$ and $\mathcal{S} \mathrm{Da} \gg 1$.
Finally, we consider the large wavenumber limit. Here we assume $k^2 \gg \mathrm{Da} \mathrm{Pe} $ (so $\mathcal{K} \sim k^2 / \mathrm{Da} \mathrm{Pe}$), $a \gg b$ (since $b < \pi$), and $k^2 \gg \mathcal{S} \mathrm{Da} \mathrm{Pe}^2$. Then equation becomes $$\label{eq:large_k}
a^2 =\left( \frac{2 a \mathcal{S} \mathrm{Da} \mathrm{Pe} }{k^2} - 1 \right) k^2 \quad \Rightarrow \quad k^2 = 2 a \mathcal{S} \mathrm{Da} \mathrm{Pe} - a^2.$$ The maximum wavenumber corresponds to $d k/da = 0$, i.e. when $a = \mathcal{S} \mathrm{Da} \mathrm{Pe}$, and so $$\label{eq:results_small_k_small_S}
\quad k_\mathrm{max} \sim \mathcal{S} \mathrm{Da} \mathrm{Pe}, \quad \sigma(k_\mathrm{max}) = \frac{n}{ 2 } \frac{1}{ \mathcal{S}^2 \mathrm{Da} \mathrm{Pe}}.$$ This means that compaction stabilizes the system at large wavenumber [@aharonov95]. Indeed, there is no maximum wavenumber for a rigid medium, instantaneous reaction and/or zero diffusion (infinite $ \mathcal{S}$, $\mathrm{Da}$ and/or $\mathrm{Pe}$ respectively). However, the growth rate at large $k$ would be infinitesimal. We check all the assumptions we made and can show that they hold, provided $\mathcal{S} \mathrm{Da} \gg 1$.
Numerical determination of critical matrix stiffness {#app:Scrit}
====================================================
When ${\mathcal{S}}$ is sufficiently close to the critical value, the most unstable mode is part of a dispersion curve that forms a closed loop. The size of this loop approaches zero as $\mathcal{S} \rightarrow {\mathcal{S}}_\mathrm{crit}^+$. If we define $L(\mathcal{S})$ as the length of the loop, then we find numerically that $$\label{eq:length_Scrit}
L \propto (\mathcal{S} -{\mathcal{S}}_\mathrm{crit})^{1/2}.$$ This behaviour is shown in figures \[fig:Scrit\_explore\](*a*) and \[fig:Scrit\_numerics\]. In §\[sec:Scrit\_analysis\], this behaviour emerges as a generic feature of the bifurcation.
Our numerical strategy to determine ${\mathcal{S}}_\mathrm{crit}$ is as follows. Given an initial guess ${\mathcal{S}}_n$, we calculate $L({\mathcal{S}}_n)$. We also estimate $\partial L/\partial{\mathcal{S}}$ using a simple finite difference. We then update $$\label{eq:Scrit_iteration}
{\mathcal{S}}_{n+1} = {\mathcal{S}}_n - \frac{(1-\lambda) L({\mathcal{S}}_n)}{\partial L/\partial{\mathcal{S}}},$$ where $0<\lambda <1$. This is a stabilised Newton iteration, designed to estimate ${\mathcal{S}}_{n+1} $ such that $L({\mathcal{S}}_{n+1}) \approx \lambda L({\mathcal{S}}_n)$. The iteration is fastest when $\lambda$ is small but most reliable when $\lambda$ is near 1, so we use $\lambda=0.9$. Motivated by equation , we next fit a straight line to the square of the length $L^2({\mathcal{S}}_{n})$ (having calculated at least 8 iterates, we use a rolling window of width 8, such that earlier iterates at larger $ \mathcal{S}$ are successively discarded). The intersection of this line with $L=0$ gives an estimate for $\mathcal{S}_\mathrm{crit}$. We iterate until the estimate converges to some small prescribed tolerance ($10^{-8}$). We also calculate the centre of the loop in $(k,\sigma)$-space and extrapolate to $\mathcal{S}_\mathrm{crit}$. Parameter continuation is then used to map out $\mathcal{S}_\mathrm{crit}(\mathrm{Da},\mathrm{Pe})$. This method is robust provided the estimate for $\mathcal{S}$ is sufficiently close to the critical value. This can necessitate taking extremely small steps in parameter space, limiting the calculations that can be performed.
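A schematic implementation of this procedure is sketched below. It is not the authors' code: `loop_length` is a hypothetical placeholder for the numerical evaluation of the dispersion-loop length $L(\mathcal{S})$ at fixed $\mathrm{Da}$ and $\mathrm{Pe}$, and the rest follows the stabilised Newton iteration and the rolling-window fit of $L^2$ described above.

``` python
import numpy as np

def estimate_S_crit(loop_length, S0, lam=0.9, dS_frac=1e-4, tol=1e-8, max_iter=200):
    S_hist, L2_hist = [], []
    S, est_prev = S0, None
    for _ in range(max_iter):
        L = loop_length(S)
        dLdS = (loop_length(S*(1 + dS_frac)) - L) / (S*dS_frac)   # finite difference
        S = S - (1 - lam)*L/dLdS          # aim for L(S_{n+1}) ~ lam * L(S_n)
        S_hist.append(S)
        L2_hist.append(loop_length(S)**2)
        if len(S_hist) >= 8:
            # fit L^2 ~ c*(S - S_crit) over a rolling window of the last 8 iterates
            c, intercept = np.polyfit(S_hist[-8:], L2_hist[-8:], 1)
            est = -intercept/c            # intersection of the fitted line with L = 0
            if est_prev is not None and abs(est - est_prev) < tol:
                return est
            est_prev = est
    return est_prev
```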
![The loop length $L(\mathcal{S})$ approaches zero as $\mathcal{S} \rightarrow {\mathcal{S}}_\mathrm{crit}^+$, consistent with equation . Dots denote values of $\mathcal{S}_n$ according to the iteration . The estimate of ${\mathcal{S}}_\mathrm{crit}$ is shown using a red line (see inset). []{data-label="fig:Scrit_numerics"}](Figure10-eps-converted-to.pdf){width="0.5\linewidth"}
Technical note on the treatment of reaction rate and compaction viscosity in @hewitt10 {#app:hewitt}
======================================================================================
@hewitt10 (hereafter H10) argued that the reaction-infiltration instability is not likely to occur in the mantle. This was attributed to a more complex (perhaps more realistic) choice of thermochemical model of melting, leading to a ‘background’ melting rate. However H10 also used a different compaction viscosity compared to our study [and to @aharonov95]. In this appendix we argue that the choice of compaction viscosity was largely responsible for the different conclusion, rather than the model of melting.
The argument made by H10 revolves around the solid mass conservation equation, which (making the same simplifications given in §\[sec:eqs\_simplified\], which are also made in H10) can be written in dimensionless form as $$\frac{\partial \phi}{\partial t} = \nabla \cdot {\boldsymbol{v}_s}+ \Gamma, \label{eq:mass_s_hewitt} \\$$ where $ \nabla \cdot {\boldsymbol{v}_s}= \mathcal{P} / \zeta(\phi)$. In the current paper we take $\zeta(\phi)=1$ (non-dimensional version). However, H10 takes $\zeta(\phi)=\phi^{-1}$ (non-dimensional version), in which case equation becomes $$\frac{\partial \phi}{\partial t} = \phi \mathcal{P} + \Gamma. \label{eq:mass_s_hewitt_2} \\$$ Note that the compaction pressure $\mathcal{P}$ in our manuscript is equal to the negative of the effective pressure variable in H10. Accounting for this sign difference, equation is consistent with equation (28) in H10. Then the growth rate of the linear instability can be estimated as $$\label{eq:mass_s_hewitt_3}
\sigma \phi_1 = \phi_0 \mathcal{P}_1 + \phi_1 \mathcal{P}_0 + \Gamma_1.\\$$ H10 argues that the terms $\phi_0 \mathcal{P}_1$ and $\boldsymbol v_{s1} $ (the perturbation to the solid velocity) are small at high wavenumber. The thermochemical model of melting used by H10 states that $$\begin{aligned}
\Gamma &= \mathcal{G} \left[(1-\phi){\boldsymbol{v}_s}+ \phi {\boldsymbol{v}_l}\right]\cdot \hat{\boldsymbol z}
\approx \mathcal{G} \left[{\boldsymbol{v}_s}+ \phi^n \hat{\boldsymbol z} \right]\cdot \hat{\boldsymbol z} , \label{eq:melt_hewitt} \end{aligned}$$ where $ \mathcal{G}$ is a dimensionless melt rate (proportional to our $\beta/\alpha$). Thus perturbations to the melting rate are $$\begin{aligned}
\Gamma_1 & = \mathcal{G} \left[\boldsymbol v_{s1} + n\phi_0^{n-1}\phi_1 \hat{\boldsymbol z} \right]\cdot \hat{\boldsymbol z}
\approx n\mathcal{G} \phi_0^{n-1}\phi_1. \end{aligned}$$ Equation then becomes $$\sigma \approx \mathcal{P}_0 + n\mathcal{G} \phi_0^{n-1}, \label{eq:sigma_hewitt} \\$$ which is the same as equation (32) in H10. The steady compaction rate is equal to the steady melting rate: $$-\phi_0 \mathcal{P}_0 = \Gamma_0. \label{eq:melt_hewitt_steady} \\$$ H10 estimates that the stabilizing compaction term ($\mathcal{P}_0<0$) overcomes the destabilizing reaction term in equation . However, it is important to emphasize that the stabilizing term in equation is present only because a strongly porosity-weakening compaction viscosity was chosen. A similar effect was also observed numerically by @spiegelman01.
What then of the importance of the thermochemical modelling of the reaction rate? Clearly, a reaction rate parameter appears in equation . However, in deriving the approximated melt-rate perturbation $\Gamma_1$ above, H10 shows that perturbations to the liquid flux are dominant over those to the solid flux. In footnote 3, H10 notes that the previous melting model of @liang10 (which is the same as that of @aharonov95 and hence our own), can be derived from a more general thermochemical model. In our notation, this simple melting model has the form $$\Gamma = \phi {\boldsymbol{v}_l}\cdot \hat{\boldsymbol z} \beta/\alpha.$$ Thus the same form of growth rate estimate as equation (32) in H10 can be derived using our simplified melting model. At least for the linear perturbation equations governing the reaction-infiltration instability, the more complex thermochemical model of H10 is not of fundamental importance. In this particular context, such a model could be mapped onto our version simply by changing the value of the parameter $\mathcal{G}$. However, the steady compaction rate given by equation does depend on the melting model.
Therefore, with regard to the reaction-infiltration instability, it was the rheology chosen by @hewitt10 that had a decisive effect on the findings, rather than the more complex treatment of melting.
---
abstract: 'This paper explores a system of interacting ‘soft core’ bosons in the Gross-Pitaevskii mean-field approximation in a random Bernoulli potential. First, a condition for delocalization of the ground state wave function is proved which depends on the number of particles and interaction strength. Using this condition, asymptotics for ground state energy per particle are derived in the large system limit for small values of the coupling constant. Our methods directly describe the shape of the ground state in a given realization of the random potential.'
author:
- 'Michael Bishop, Jan Wehr'
bibliography:
- 'bibmaster.bib'
title: 'Ground State Energy of Mean-field Model of Interacting Bosons in Bernoulli Potential'
---
Introduction
============
In his seminal work [@Anderson58], Anderson discovered that quantum-mechanical systems behave very differently in disordered environments than in periodic ones. Since this discovery, there has been a great deal of effort, both in physics and in mathematics, to understand this behavior more completely. See [@Lewenstein12] for a recent physical survey of Anderson localization in a context relevant for the present paper and [@Kirsch07] for an excellent introduction to rigorous mathematical results.\
The experimental realization of cold atoms [@Anderson1Dexp2; @Anderson1Dexp] has added to the interest in the topic and led to new questions about the role of interactions in quantum systems. Physicists have researched the relation between disorder and interaction [@AAL80; @GiamarchiSchulz88; @Fisher89; @BAA06; @SCLBS07]. There is still much debate on the nature of phase transitions in both random [@ICFO12] and quasiperiodic potentials [@Roux08]. Systems similar to the one discussed in this paper have been experimentally realized [@Modugno11]. For infinite local ‘hard core’ interactions, see [@ALS]. Lieb, Seiringer, and Yngvason have rigorously shown convergence of the ground state of a bosonic system with nonrandom harmonic potential to the mean-field approximation [@LS1; @LS2; @LS3; @LS4]. The dynamics of the nonlinear Schrödinger equation with a random potential is studied in [@Soffer12]; see references therein.\
Recently, Seiringer, Yngvason, and Zagrebnov considered Bose-Einstein Condensation in systems with randomly placed point scatterers in the continuous setting; they demonstrated the existence of a condensate in such systems (under certain conditions on the interaction) as well as the presence of a phase transition [@SYZ12]. Their analysis is based on a detailed study of the statistics of random potential realizations, similar to what we do here. Our model can be viewed as a discrete version of the model considered in [@SYZ12], with the space variable rescaled by $L$ and the parameters in the Hamiltonian and the random potential scaling as follows: $\gamma = \gamma_0 L^2,\ \sigma = \sigma_0 L,\ \nu = \nu_0 L.$\
For interacting systems in random potentials, Anderson localization competes with delocalization caused by repulsive interaction. It is not obvious which of the two mechanisms dominates the system’s behavior, even in the ground state. One approach to this question is to start from a noninteracting system and treat interaction as a perturbation. A system of bosons with no interaction places each particle in the single particle ground state. When interactions are added to a finite-volume system, they can be controlled for small values of $g$.\
The goal of this paper is to study effects of disorder in a system of bosons which interact with a weak repulsive ‘soft core’ force as the number of particles and system size are taken to infinity, proving two results. Theorem 1 is a general statement about delocalization effects of such interaction, applying to both discrete and continuous versions of the model. It is then applied, together with detailed analysis of the energy functional, to the one-dimensional system with a Bernoulli-distributed potential—Theorems 2 and 3. We derive there an asymptotic formula for the ground state energy per particle as the product of the interaction strength and the particle density goes to zero. We directly study the way the geometry of each realization of the random potential determines the ground state wave function. The methods are inspired by an adaptation of the technique used in [@BishopWehr12].\
In the discrete setting, the state of $N+1$ bosons in the lattice cube $\Lambda = \{0, \dots, L+1 \}^d$ with length $L$ is described by a normalized wave function $|\Phi(x_1, \dots, x_{N+1})\rangle$, a function in the symmetric subspace of $\otimes^{N+1} L^2(\Lambda, \mu)$ with Dirichlet boundary conditions, where the measure $\mu$ is the counting measure and $x_i$ are positions of the particles in $\Lambda$. The Hamiltonian of the system is given by $$H = \sum_{i=1}^{N+1} H_i + \sum_{i<j} U(x_i , x_j)$$ where $H_i$ is the single particle Hamiltonian acting on the $i$-th particle and $U$ is the potential of the interaction between particles. The single particle Hamiltonian is the (random) Schrödinger operator $$H = -\Delta + V$$ where $\Delta$ is the discrete Laplacian and $V$ is a (random) multiplication operator, which in this paper will always be bounded below (without loss of generality by $0$). The interaction $U$ is a ‘soft core’ interaction of the form $$g\,\delta_{x_i, x_j}$$ where $\delta$ is the Kronecker delta and the coupling constant $g$ is positive, making the interaction repulsive. This repulsion makes it energetically unfavorable for bosons to occupy the same space, but does not exclude the possibility, unlike in the case of ‘hard core’ interactions where $g=\infty$. Because of the difficulty of working in a large tensor space, the multi-particle bosonic states are approximated by the Gross-Pitaevskii mean-field wave function [@Annett]. In this approximation, each boson is assumed to be in the same state $\phi$ which defines the state of the whole system: $|\Phi\rangle = |\phi(x_1)\rangle\dots|\phi(x_{N+1})\rangle$. The Hamiltonian becomes $$(N+1) \left[ H_i + \frac{gN}{2} |\phi|^2 \right],$$ a nonlinear random Schrödinger operator (the second term acts as a potential, which depends on the state itself—hence the nonlinearity). The approximation exchanges the difficulties arising from Bose statistics for the nonlinearity of the new problem. The associated per particle energy functional is $$E[\phi] = \sum_{x} \left[ \frac{1}{2} \sum_{|y| =1} |\phi(x+y) - \phi(x)|^2 + V(x)|\phi(x)|^2 + \frac{gN}{2}|\phi(x)|^4 \right]$$
[**Remark**]{}: In the infinite volume limit, if the particle number increases proportionally to the system size, the total energy diverges and the natural quantity is energy per particle. In this paper, any mention of energy refers to the per particle energy.
In the continuous setting, the state of $N+1$ bosons in the box $\Omega = [0, L]^d$ with linear size $L$ is described by the wave function $|\Phi(x_1, \dots, x_{N+1})\rangle$, a function in the symmetric subspace of $\otimes^{N+1} L^2(\Omega, \mu)$ with Dirichlet boundary conditions, where the measure $\mu$ is the Lebesgue measure, and $x_i$ are the positions of the particles in $\Omega$. The Hamiltonian of the system is given by $$H = \sum_{i=1}^{N+1} H_i + \sum_{i<j} U(x_i , x_j)$$ where $H_i$ is the single particle Hamiltonian acting on the $i$-th particle and $U$ is the interaction potential between particles. The single particle Hamiltonian is given by the (random) Schrödinger operator $$H = -\Delta + V$$ where $\Delta$ is the Laplacian and $V$ is a (random) multiplication operator. The potential is bounded below (without loss of generality by $0$). The interaction $U$ is a ‘soft core’ interaction of the form $$g\,\delta(x_i-x_j)$$ where $\delta$ is the Dirac delta and $g >0$ corresponds to a repulsive interaction. In the mean-field approximation, each boson is assumed to be in the same state $\phi$ which defines a state of the whole system: $|\Phi\rangle = |\phi(x_1)\rangle\dots|\phi(x_{N+1})\rangle$. The Hamiltonian becomes $$(N+1) \left[ H_i + \frac{gN}{2} |\phi|^2 \right],$$ a nonlinear Schrödinger operator. The associated per particle energy functional is $$E[\phi] = \int_{\Omega} \left[ |\nabla \phi(x)|^2 + V(x)|\phi(x)|^2 + \frac{gN}{2}|\phi(x)|^4 \right] dx$$ In section 2, we address the localization and delocalization of states in this system for both the discrete and continuous settings. For any state $\phi$, define the set $$X_{> \epsilon}(\phi) = \left\{x: |\phi(x)| > \frac{\epsilon}{L^{d/2}} \right\}$$ as the set of points where the absolute value of $\phi(x)$ is greater than its average magnitude per site, multiplied by a constant $\epsilon$. This is a natural set in the study of localization of low energy states.
[**Theorem 1**]{}: Assume that $V \geq 0$. Then, in both the discrete and continuous settings, for a state $\phi$ with energy $E' = E[\phi]$ and any $\epsilon \in (0,1)$, the measure of the set $X_{> \epsilon}(\phi)$ obeys the lower bound $$\mu(X_{> \epsilon}(\phi)) \geq \frac{gN(1-\epsilon^2)^2}{2E'}$$
This theorem implies that Anderson-type localization for low energy states, $E' \approx 0$, requires strong conditions on $gN$. For a large number of particles or strong interaction, the wave function must fill a significant amount of space, meaning the repulsion dominates the Anderson localization effects caused by the random potential $V$. The repulsive interaction, though local, forces overlap to become energetically expensive if too many bosons occupy the same place, causing the ground state to spread. If a low energy state is localized to some length $\ell$, $gN$ must be smaller than $\ell^d$. To recover an Anderson type localization, a necessary condition is $gN \leq O(E' \ell^{d})$. In physical experiments, the interaction constant $g$ is a controlled parameter rather than a variable dependent on the particle number, and the particle density is approximately constant ($N \approx \rho L^d$) to ensure the existence of thermodynamic limits. In these systems, a low-energy state must occupy a nonzero fraction of the volume rather than being localized with some lower order localization length $\ell$.
In section 3, the above theorem is used to describe the ground state of the one-dimensional lattice system where the potential $V$ is Bernoulli-distributed, the interaction constant $g$ is small, and the particle density is approximately constant: $N+1 \approx \rho L$. The ground state minimizes the energy (per particle) given by $$E(\phi) = \sum_{x=1}^L \left(|\phi'(x)|^2 + V(x)|\phi(x)|^2 + \frac{g\rho L}{2}|\phi(x)|^4 \right)$$ where $\phi \in \ell^2\{0, \dots, L+1\}$ with Dirichlet boundary conditions and $V$ is a multiplication operator by a function $V(x)$ (of a discrete argument), where $V(x)$ are IID Bernoulli random variables: $P[V(x) = 0] = p$, $P[V(x) = b] = 1 - p = q$, and $b>0$ is a constant.\
[**Definition**]{}: The [**finite volume ground state**]{} is defined as the normalized state $\phi^{(L)}_0$ which minimizes the energy functional $E[\phi]$ and the corresponding energy, $E^{(L)}_0$, is called the [**ground state energy**]{}.\
[**Theorem 2**]{}: For any $g$, $\rho$, $p$, and $b$, $$\lim_{L \to \infty} E_0^{(L)} = E_0$$ where $E_0$ is a nonrandom function of the above parameters.\
After taking the infinite volume limit, we want to understand the behavior of the ground state for small $g\rho$, when the nonlinear term is taken to zero.\
[**Theorem 3**]{}: For the one-dimensional lattice Gross-Pitaevskii model with Bernoulli disorder, the ground state energy $E_0$ satisfies the following condition with probability one.
$$0 < \liminf_{g\rho \to 0} E_0 \log_p^2(g\rho) \leq \limsup_{g\rho \to 0} E_0 \log_p^2(g\rho) < \infty$$
Theorem 3 is an illustration of Theorem 1. In Theorem 3, the ground state is approximated by sine waves on intervals of zero potential longer than some minimum interval length and by zero everywhere else. The ground state energy is bounded above by $\frac{C_+}{(\log_p(g\rho))^2}$. Using Theorem 1, $\mu(X_{>\epsilon}(\phi))$ is then bounded below by $\frac{gN(1-\epsilon^2)^2}{2E_0}$, which is proportional to the system size. The proof of Theorem 3 shows that for the ground state this lower bound is asymptotically accurate.\
[**Remark**]{}: We expect that the upper and lower limit in the above statement are equal. One should be able to close the gap between our lower and upper bounds using more accurate variational functions—the solution of the discrete nonlinear Schrödinger equation on an interval with Dirichlet boundary conditions, which can be thought of as a discrete version of the Jacobi elliptic sine function.\
Proof of Theorem 1
==================
[**Theorem 1**]{}: If the energy of a state is $E' = E[\phi]$ and the potential $V$ is nonnegative, then for $\epsilon \in (0,1)$
$$\mu(X_{> \epsilon}(\phi)) \geq \frac{gN(1-\epsilon^2)^2}{2E'}$$ [**Proof**]{}: The $\ell^2$-norm of the state $\phi$ restricted to $X_{\leq \epsilon}$ is bounded by $$\|\phi(x)|_{X_{\leq \epsilon}}\|^2 \leq \mu(X_{\leq \epsilon}) \frac{\epsilon^2}{L^{d}} \leq \epsilon^2$$ Because the state $\phi$ has $L^2$-norm equal to one, $$\|\phi(x)|_{X_{> \epsilon}}\|^2 \geq 1 - \epsilon^2$$ The energy of $\phi$ is bounded below by its interaction energy on $X_{> \epsilon}$. The interaction energy is bounded below using Schwarz’s Inequality: $$\left(\int_{X_{> \epsilon}} |\phi|^2 d\mu \right)^2 \leq \left(\int_{X_{> \epsilon}} d\mu\right) \left(\int_{X_{> \epsilon}} |\phi|^4 d\mu \right)$$ which bounds the $L^4$-norm below by $$\int_{X_{>\epsilon}(\phi)} |\phi|^4 d\mu \geq \frac{(1-\epsilon^2)^2}{\mu(X_{>\epsilon}(\phi))}$$ The interaction energy is thus bounded below by $$\frac{gN(1-\epsilon^2)^2}{2\mu(X_{>\epsilon}(\phi))}$$ and bounded above by $E'$. It follows that $$\frac{gN(1-\epsilon^2)^2}{2\mu(X_{> \epsilon})} \leq E'$$ which gives the desired lower bound $$\mu(X_{>\epsilon}(\phi)) \geq \frac{gN(1-\epsilon^2)^2}{2E'}$$ and completes the proof. $\Box$\
[**Remark**]{}: By shifting the energy, one can easily generalize the above theorem to arbitrary potentials bounded below. If $V \geq V_{min}$, $$\mu(X_{>\epsilon}(\phi)) \geq \frac{gN(1-\epsilon^2)^2}{2(E' - V_{min})}$$\
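Theorem 1 is straightforward to illustrate numerically. The sketch below (not part of the paper) minimises the one-dimensional discrete Gross-Pitaevskii energy of section 3 over normalised states by projected gradient descent in a sampled Bernoulli potential, and compares $\mu(X_{>\epsilon})$ with the bound $gN(1-\epsilon^2)^2/2E'$. The lattice size, couplings, optimiser and step size are arbitrary choices made for the illustration.

``` python
import numpy as np

rng = np.random.default_rng(0)
L, d = 200, 1
p, b = 0.7, 1.0
g, rho = 1e-3, 1.0
N = int(rho*L)
V = np.where(rng.random(L) < p, 0.0, b)        # Bernoulli potential on the interior sites

def energy(phi):
    kin = np.sum(np.diff(np.concatenate(([0.0], phi, [0.0])))**2)   # Dirichlet boundaries
    return kin + np.sum(V*phi**2) + 0.5*g*N*np.sum(phi**4)

phi = rng.random(L)
phi /= np.linalg.norm(phi)
for _ in range(20000):
    padded = np.concatenate(([0.0], phi, [0.0]))
    grad = -2*(padded[2:] - 2*phi + padded[:-2]) + 2*V*phi + 2*g*N*phi**3
    phi -= 1e-2*grad
    phi /= np.linalg.norm(phi)                 # project back onto the unit sphere

E = energy(phi)
eps = 0.5
X = np.sum(np.abs(phi) > eps/np.sqrt(L**d))    # counting measure of X_{>eps}
print(E, X, g*N*(1 - eps**2)**2/(2*E))         # Theorem 1: X is at least the last number
```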
Ground State Estimates for Weak Interaction in Bernoulli Potentials
===================================================================
Theorems 2 and 3 are specifically for the system on the one-dimensional lattice with Bernoulli potential.\
In the proof of Theorem 2, we will apply Kingman’s subadditive ergodic theorem, using the version from the standard probability reference [@Durrett]; Kingman’s original formulation [@Kingman68] would also be sufficient for our purposes.\
[**Kingman’s Subadditive Ergodic Theorem**]{}: Suppose $X_{m,n}$, $0\leq m < n$ satisfy:
\(i) $X_{0,m} + X_{m,n} \geq X_{0,n}$.
\(ii) $\{ X_{nk, (n+1)k},\ n\geq 1\}$ is a stationary sequence for each $k$.
\(iii) The distribution of $\{ X_{m, m + k},\ k\geq 1\}$ does not depend on $m$.
\(iv) $\mathbb{E}[X^+_{0,1}] < \infty$ and for each $n$, $\mathbb{E}[X_{0, n}] \geq \gamma_0 n$, where $\gamma_0 > - \infty$.\
Then
\(a) $\lim_{n\to\infty}\mathbb{E}[X_{0, n}]/n = \inf_m \mathbb{E}[X_{0, m}]/m \equiv \gamma$.
\(b) $X = \lim_{n\to\infty} X_0 /n $ exists almost surely and in $L^1$, so $\mathbb{E}[X] = \gamma$.
(c)If all the stationary sequences in (ii) are ergodic then $X = \gamma$ almost surely.\
[**Proof of Theorem 2**]{}: We have $$E_0^{(L)} = \min_{\|\psi\| = 1} \sum_{j=0}^{L-1} \left[ |\psi(j+1) - \psi(j)|^2 + V(j)|\psi(j)|^2 + \frac{g\rho L}{2}|\psi(j)|^4 \right]$$ where $\psi \in \ell^2\{0, \dots, L\}$ with Dirichlet boundary conditions. With $\phi = \sqrt{L}\psi$, we can rewrite $E_0^{(L)}$ as $$E_0^{(L)} = \min_{\|\phi\| = \sqrt{L}} \frac{1}{L} \sum_{j=0}^{L-1} \left[ |\phi(j+1) - \phi(j)|^2 + V(j)|\phi(j)|^2 + \frac{g\rho}{2}|\phi(j)|^4 \right]$$ which is equal to $\frac{1}{L} X_{0,L}$, where $$X_{0,L} = \min_{\|\phi\| = \sqrt{L}} \sum_{j=0}^{L-1} \left[ |\phi(j+1) - \phi(j)|^2 + V(j)|\phi(j)|^2 + \frac{g\rho}{2}|\phi(j)|^4 \right]$$ The process $X_{0,L}$ satisfies the assumptions of Kingman’s theorem. For (i), $X_{0,L}$ is subadditive since the only difference between $X_{0,M} + X_{M, L}$ and $ X_{0,L}$ is the restriction $\phi(M) = 0$ in the definition of the former. Properties (ii) and (iii) hold because the $V(j)$ are independent. For (iv), $X^+_{0,1} < 2 + b + \frac{g\rho}{2}$ and $X_{0, n} \geq 0$.\
Therefore, $$\lim_{L \to \infty} E_0^{(L)} = \lim_{L \to \infty} \frac{X_{0,L}}{L} = E_0$$ almost surely. $\Box$\
Theorem 3 says, in essence, that the limit of $E^{(L)}_0$ as $L \to \infty$ is, to leading order, $\frac{C}{(\log_p(g\rho))^2}$ for some constant $C$ when $g\rho$ is small. The proof of Theorem 3 establishes this relationship for small $g\rho$, but it does not provide the precise value of $C$; it does, however, provide a good approximation of the true ground state.\
[**Theorem 3**]{}: For the one-dimensional lattice Gross-Pitaevskii model with Bernoulli disorder, the ground state energy $E_0$ satisfies the following condition with probability one.
$$0 < \liminf_{g\rho \to 0} E_0 \log_p^2(g\rho) \leq \limsup_{g\rho \to 0} E_0 \log_p^2(g\rho) < \infty$$
The structure of the ground state depends on the distribution of the intervals of zero potential. These intervals are colloquially referred to as ‘lakes’ or ‘islands.’ A realization of the potential $V$ is determined by alternating intervals of zero potential and positive potential. In a sequence of Bernoulli random variables, the lengths of intervals of zero potential are independent and distributed geometrically: $P[L_i = x] = qp^{x}$. If the system size $L$ is fixed, the intervals of zero and positive potential are not independent; they are subject to the condition that the sum of their lengths is exactly $L$. Their number is not constant, but it is easy to show that it satisfies a law of large numbers. The considered system can be approximated by a system of variable length, in which the number of intervals is fixed at the value dictated by the law of large numbers. Such an approximation was carried out in detail in [@BishopWehr12] by standard probabilistic methods and will not be repeated here.\
Let us thus fix the number of independent intervals of zero and positive potential to be $2n$. This makes $L = \sum_{i=1}^n (L_i + \tilde{L_i})$ a random variable, the sum of $n$ geometrically distributed intervals of zero potential and $n$ geometrically distributed intervals of positive potential. The intervals of zero potential have lengths $L_i$ with the distribution $$P[L_i = x] = qp^{x-1}$$ for integer values of $x$ greater than or equal to one (an interval must have at least one site of zero potential). Likewise, the lengths of the intervals of positive potential are distributed according to $$P[\tilde{L}_i = x] = pq^{x-1}$$ The variable $i$ indexes the intervals and takes values $1, \dots, n$. The total system size $L$ is the sum of the random interval lengths. The system size $L$ has expected value $$\mathbb{E}[L] = n\left(\frac{1}{q} + \frac{1}{p}\right) = \frac{n}{pq}$$
[**Remark**]{}: By the Law of Large Numbers [@Durrett], with probability one in the limit $n\to\infty$, the difference between each of $L$ and $\sum_{L_i > x} L_i$ and its expectation has order less than $n$. These controls occur with probability one in the limit $n \to \infty$, further referred to as: $$|L - \mathbb{E}[L]| = o(n)$$ $$\left|\sum_{L_i > x} L_i - \mathbb{E}\left[\sum_{L_i > x} L_i\right]\right| = o(n)$$ where $$\mathbb{E}\left[\sum_{L_i > x} L_i\right] = n\sum_{y \geq \lfloor x\rfloor + 1} y P\left[L_i = y \right] = n\left( \left(\lfloor x \rfloor + 1\right) p^{\lfloor x \rfloor} + \frac{p^{\lfloor x \rfloor+1}}{q}\right)\
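The closed form above for the truncated mean can be checked directly against the defining sum. The following snippet (not from the paper) does so for one arbitrary choice of $p$ and $x$, per interval of zero potential, that is, with the overall factor of $n$ omitted:

``` python
import numpy as np

p, q, x = 0.8, 0.2, 5.7
m = int(np.floor(x)) + 1
ys = np.arange(1, 2000)                       # truncation of the infinite sum
mask = ys >= m
exact = np.sum(ys[mask] * q * p**(ys[mask] - 1.0))
formula = (np.floor(x) + 1)*p**np.floor(x) + p**(np.floor(x) + 1)/q
print(exact, formula)                         # the two numbers agree
```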
The result of Theorem 3 is obtained using the following strategy. An upper bound on the ground state energy can be generated by evaluating the energy functional on any test function. The method is to find a test function with asymptotics similar to a demonstrable lower bound. The process is iterative; first, a test function is evaluated to find an upper bound on the ground state energy. Second, this upper bound is used on the norm of the true ground state restricted to various sets, such as the set of sites of positive potential, in order to isolate the sites where the $\ell^2$-norm of the ground state is concentrated. Third, the energy of the ground state on these sets is minimized to find a lower bound. This lower bound may give intuition for a better choice of test function for the upper bound. The process repeats until the upper and lower bounds on the ground state energy are asymptotically similar. The proof of Theorem 3 is simply the final iteration of this process.\
[**Proof of Theorem 3, Upper Bound**]{}: The energy of any test function bounds the ground state energy from above. Consider the test function $\psi$ defined as follows: on an interval of zero potential with length $L_i$, the function is a sine wave $m_i\sqrt{\frac{2}{L_i+1}}\sin(\frac{\pi x}{L_i+1})$, where $m_i$ is the $L^2$-norm of the function restricted to the interval. On intervals of high potential, $\psi$ is zero. For intervals of zero potential with length $L_i > \log_p(g\rho) + \log_p\left(\log_p(g\rho)\right)$, we let $$m_i^2 = \frac{L_i}{\displaystyle\sum_{L_j > \log_p(g\rho) + \log_p\left(\log_p(g\rho)\right)} L_j}$$ and for shorter intervals, $m_i = 0$. This makes the kinetic energy term and interaction energy term have the same asymptotic order as $g\rho \to 0$. This test function also satisfies the normalization criterion $\sum_i m_i^2 =1$. The kinetic energy of the discrete sine wave on a specific interval of zero potential is bounded above by $\frac{m_i^2\pi^2}{(L_i+1)^2}$ [@BishopWehr12]. The interaction energy, $\frac{g\rho L}{2} \sum_x |\psi(x)|^4$, restricted to such an interval is equal to $\frac{3g\rho L m_i^4}{4L_i}$. Summing the upper bound on the kinetic energy and the interaction energy of the test function $\psi$ over the whole lattice, the total energy of $\psi$ is bounded above by $$\frac{3 g\rho L}{4\displaystyle\sum_{L_i > \log_p(g\rho) + \log_p\left(\log_p(g\rho)\right)} L_i} + \frac{\pi^2}{\left(\log_p(g\rho) + \log_p\left(\log_p(g\rho)\right)\right)^2},$$ where the second term is an overestimate of the kinetic energy, treating each interval as the shortest interval admitted. Both $L$ and $\sum_{L_i > \log_p(g\rho) + \log_p\left(\log_p(g\rho)\right)} L_i$ depend on the realization of the potential, but by equations (27-29), with probability one in the limit $n \to \infty$, $$L = \frac{n}{pq} + o(n)$$ and, using $\lfloor x \rfloor \geq x - 1$, $$\sum_{L_i > \log_p(g\rho) + \log_p\left(\log_p(g\rho)\right)} L_i \geq \frac{n}{pq}\, g\rho\,\log_p(g\rho)\left(p + q\log_p(g\rho) + q\log_p\left(\log_p(g\rho)\right)\right) + o(n)$$ by equation (31). The interaction energy is therefore bounded in the limit as $n \to \infty$ with probability one by $$\frac{3 }{ 4q \log_p(g\rho)\left[\log_p(g\rho) + \log_p\left(\log_p(g\rho)\right) + \frac{p}{q}\right]}.$$ In the limit as $g\rho \to 0$, the leading order term of the upper bound on the ground state energy is $$\frac{C'}{\log^2_p(g\rho)}$$ with $C' = \frac{3}{4q} + \pi^2$. Thus, $\limsup_{g\rho \to 0} E_0 \log_p^2(g\rho) < \infty$. $\Box$\
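To make the upper-bound construction concrete, the following Python sketch (added here; entirely illustrative and not part of the proof) evaluates the energy of the sine-wave test function on one sampled potential. It assumes the discrete energy functional $E[\phi]=\sum_x|\phi(x+1)-\phi(x)|^2+\sum_x V(x)|\phi(x)|^2+\frac{g\rho L}{2}\sum_x|\phi(x)|^4$, a barrier height $b$, and particular parameter values; all of these are assumptions of the illustration, chosen so that some intervals exceed the length threshold.

```python
import numpy as np

# Illustrative sketch of the test-function upper bound (assumed functional and parameters).
rng = np.random.default_rng(1)
p, q, b, grho = 0.5, 0.5, 1.0, 1e-4
L = 1_000_000
V = np.where(rng.random(L) < p, 0.0, b)          # 0 w.p. p, barrier b w.p. q

# Zero-potential intervals and the admissible ones (longer than the threshold).
pad = np.concatenate(([False], V == 0, [False])).astype(int)
d = np.diff(pad)
starts, ends = np.flatnonzero(d == 1), np.flatnonzero(d == -1)
lens = ends - starts
logp = lambda y: np.log(y) / np.log(p)           # log base p
thresh = logp(grho) + logp(logp(grho))
keep = lens > thresh

# Sine-wave test function with m_i^2 proportional to interval length.
phi = np.zeros(L)
m2 = lens[keep] / lens[keep].sum()
for s, ln, w2 in zip(starts[keep], lens[keep], m2):
    x = np.arange(1, ln + 1)
    phi[s:s + ln] = np.sqrt(w2 * 2.0 / (ln + 1)) * np.sin(np.pi * x / (ln + 1))

kinetic = np.sum(np.diff(phi) ** 2)
potential = np.sum(V * phi ** 2)                 # zero by construction
interaction = 0.5 * grho * L * np.sum(phi ** 4)
E_upper = kinetic + potential + interaction
print(E_upper, E_upper * logp(grho) ** 2)        # second value stays of order one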
[**Proof of Lower Bound**]{}:
For each $n$ and $g\rho$, the ground state $\phi_0$ is well-defined but not explicitly known. The lattice will be partitioned into four sets: the set of sites of high potential, denoted $M_b$; the set of sites on intervals of zero potential longer than $\log_p(g\rho)$, denoted $M_{long}$; the set of sites on intervals of zero potential shorter than $\log_p(g\rho)$ where the kinetic energy cannot be easily bounded below, denoted $M_{light}$; and the set of sites on intervals of zero potential shorter than $\log_p(g\rho)$ where the kinetic energy can be easily bounded below, denoted $M_{heavy}$.\
The lower bound is shown as follows. The ground state energy is bounded below by a lower bound of the ground state energy restricted to $M_{heavy}$. The kinetic energy for a given interval in $M_{heavy}$ is bounded below by Lemma 1. The interaction energy is bounded below by the bound of Theorem 1. Lemma 2 provides a lower bound on the norm of $\phi_0$ restricted to $M_{heavy}$, which converges to one in the limit $g\rho \to 0$. Using Lagrange multipliers, the lower bound on the kinetic and interaction energy on each interval is minimized over the $m_i$, the norms of the restrictions of the state to each interval. This minimization determines a minimal interval length for any interval in $M_{heavy}$. The number of sites in $M_{heavy}$ is estimated from above by the number of sites on intervals longer than this minimal interval length. Using this and the lower bound of the norm of $\phi_0$ restricted to $M_{heavy}$, we obtain the desired lower bound of the ground state energy.\
Without loss of generality, the ground state wave function can be assumed to be non-negative. By a standard argument: if the ground state does not have the same complex phase at each site, the kinetic energy can be reduced by making the phases equal. Since the potential and interaction energy only depend on the magnitude of the state at each site and not on the complex phase, the energy of the resulting state is strictly smaller. Therefore the ground state must have the same complex phase at every site and can be assumed to be positive.\
To define $M_{light}$ and $M_{heavy}$, the ground state energy on an interval of zero potential is approximated by that of the minimizer of the kinetic energy on the interval—the discrete sine wave. For a given interval with length $L_i$, the ground state determines boundary values $m_i\delta_i^L$ and $m_i\delta_i^R$ on the sites of high potential to the left and right of the interval, respectively, where $m_i$ is the norm of the ground state on the interval. The boundary values are positive by the above argument. The kinetic-energy-minimizing state is of the form $$\frac{c_i m_i}{\sqrt{L_i+1}}\sin\left(\frac{s_i\pi x}{L_i+1} + t_i\right),$$ where the sine wave is normalized to $m_i$ by $c_i \in [1, \sqrt{2}]$, stretched by $s_i \in (0,1)$, and shifted by $t_i$; all three are determined by the $\delta_i$’s and $m_i$ [@BishopWehr12]. Heavy intervals have relatively small $\delta_i$’s, which determine a lower bound on the kinetic energy. Light intervals have large $\delta_i$’s, which do not admit a good lower bound on the kinetic energy, but do allow an upper bound on the norm of the ground state on these intervals.\
[**Definition**]{}: For intervals of zero potential with length less than $\log_p(g\rho)$, an interval is in $M_{heavy}$ if for the ground state $\phi_0$, $$\max(\delta_i^L, \delta_i^R) \leq \frac{1}{2\sqrt{L_i}}.$$ An interval is in $M_{light}$ if $$\max(\delta_i^L, \delta_i^R) > \frac{1}{2\sqrt{L_i}}.$$ These definitions can be restated using $m_i$ and are thus directly determined by the values of the ground state wave function on the sites adjacent to the zero potential intervals. The definition is stated above without $m_i$ in order to separate the shape and curvature of the sine wave from its norm and to facilitate estimates. The norm of $\phi_0$ restricted to a heavy interval is bounded below by $$m_i \geq m_i\, 2\sqrt{L_i}\,\max(\delta_i^L, \delta_i^R) \geq m_i \sqrt{L_i}\,\max(\delta_i^L, \delta_i^R),$$ where the last term is the norm of the constant function $m_i\max(\delta_i^L, \delta_i^R)$ on the interval. This means that a sine wave with this norm achieves its maximum rather than being nearly flat.\
[**Lemma 1**]{}: The kinetic energy of the ground state restricted to an interval in $M_{heavy}$ is bounded below by $$m_i^2\left(1 - \frac{1}{\sqrt{2}}\right)^2\frac{\pi^2}{(L_i+1)^2}.$$
[**Proof of Lemma 1**]{}: The kinetic energy of a heavy interval is bounded below by the kinetic energy of the sine wave with norm $m_i$ and boundary conditions $m_i\delta_i^L$ and $m_i\delta_i^R$. The energy of the minimizer $\frac{c_im_i}{\sqrt{L_i+1}}\sin(\frac{s_i\pi x}{L_i+1} + t_i)$ is $m_i^2\sin^2(\frac{s_i\pi}{L_i+1})$. To solve for $s_i$, note that the function must satisfy the boundary conditions
$$\frac{c_i m_i}{\sqrt{L_i+1}}\sin(t_i) = m_i\delta_i^L, \qquad \frac{c_i m_i}{\sqrt{L_i+1}}\sin(s_i\pi + t_i) = m_i\delta_i^R.$$ The left boundary condition is solved using the inverse of sine on $[0, \frac{\pi}{2}]$, $\arcsin(x)$. The right boundary condition is solved using the inverse of sine on $[\frac{\pi}{2}, \frac{3\pi}{2}]$, $-\arcsin(x) + \pi$. We obtain as in [@BishopWehr12]:
$$t_i = \arcsin\left(\frac{\delta_i^L\sqrt{L_i+1}}{c_i}\right), \qquad s_i\pi + t_i = \pi - \arcsin\left(\frac{\delta_i^R\sqrt{L_i+1}}{c_i}\right).$$ Solving for $s_i$:
$$s_i = 1 - \frac{1}{\pi}\left(\arcsin\left(\frac{\delta_i^L\sqrt{L_i+1}}{c_i}\right) + \arcsin\left(\frac{\delta_i^R\sqrt{L_i+1}}{c_i}\right)\right).$$ Since $\arcsin(\theta) \leq \frac{\pi \theta}{2}$ for $\theta \leq 1$,
$$s_i \geq 1 - \frac{\max(\delta_i^L, \delta_i^R)\sqrt{L_i+1}}{c_i} \geq 1 - \frac{1}{\sqrt{2}},$$ where $c_i \geq 1$, $L_i \geq 1$, and the definition of a heavy interval is used. $\Box$\
[**Lemma 2**]{}: In the limit as $n \to \infty$, with probability one, $$\liminf_{n\to\infty} \| \phi_0 |_{M_{heavy}}\|^2 \geq 1 - O\left(\frac{1}{\sqrt{\log_p(g\rho)}}\right) - O\left(\frac{1}{ b\log_p(g\rho)}\right),$$ where $O(\cdot)$ is taken with respect to the limit $g\rho \to 0$.
[**Proof of Lemma 2**]{}: Because the potential energy of the ground state must be less than the upper bound on the ground state energy, the norm of the ground state restricted to high potential is bounded as follows:
$$\| \phi_0|_{M_{b}}\|^2 \leq \frac{C_+}{b\left(\log_p(g\rho)\right)^2}.$$ An upper bound on the norm of the ground state on intervals longer than $\log_p(g\rho)$ follows from the fact that the interaction energy must be bounded above by the upper bound on the ground state energy. The minimum of the interaction energy depends on the number of sites occupied and the norm restricted to the set of these sites. It was shown in Theorem 1 that for $\|\phi\| = m'$, the minimum of $\sum_{i=1}^L |\phi(i)|^4$ is $\frac{m'^4}{L}$; this result is also due to the inequality between arithmetic and quadratic means. If $m$ is the norm of $\phi_0|_{M_{long}}$, then the interaction energy for the sites in $M_{long}$ is bounded below, using Theorem 1, by $\frac{g\rho L}{2}\cdot\frac{m^4}{|M_{long}|}$.
Using the upper bound on the ground state energy, the left hand side of the above inequality is bounded above by $\frac{C_+}{\left(\log_p(g\rho)\right)^2}$. In the limit as $n\to \infty$,
$$\| \phi_0|_{M_{long}}\|^2 = O\left(\frac{1}{\sqrt{\log_p(g\rho)}}\right).$$ For light intervals, $$\frac{m_i^2}{4L_i} < m_i^2 \max(\delta_i^L,\delta_i^R)^2.$$ The upper bound on the norm of the ground state restricted to sites of high potential bounds the norm on the boundary points, $$\sum_i m_i^2 \max(\delta_i^L,\delta_i^R)^2 \leq 2\,\| \phi_0|_{M_b}\|^2,$$ where the extra factor of $2$ is included to cover the cases where two intervals of zero potential are separated by a single site of positive potential. The bound on the norm of the ground state restricted to light intervals, which by definition are shorter than $\log_p(g\rho)$, is
$$\frac{ 1}{\log_p(g\rho)}\sum_{ light\ intervals} m_i^2 \leq \sum_{light\ intervals}\frac{ m_i^2}{L_i} \leq 4\sum_{light\ intervals} m_i^2\max(\delta_i^L,\delta_i^R)^2.$$ This means that the norm on light intervals is bounded above by $$\| \phi_0|_{M_{light}}\|^2 \leq 8\log_p(g\rho)\, \| \phi_0|_{M_b}\|^2 = O\left(\frac{1}{b\log_p(g\rho)}\right).$$ The normalization condition requires $$\| \phi_0|_{M_b}\|^2 +\| \phi_0|_{M_{long}}\|^2+ \| \phi_0|_{M_{light}}\|^2+\| \phi_0|_{M_{heavy}}\|^2=1.$$ Using the upper bounds on the three other terms, the norm of the ground state restricted to $M_{heavy}$ gives the desired lower bound $$\| \phi_0 |_{M_{heavy}}\|^2 \geq 1 - O\left(\frac{1}{\sqrt{\log_p(g\rho)}}\right) - O\left(\frac{1}{ b\log_p(g\rho)}\right).$$ $\Box$\
The energy for a heavy interval is bounded below by $$\frac{g\rho L m_i^4}{2 L_i} + m_i^2 \left(1 - \frac{1}{\sqrt{2}} \right)^2\frac{\pi^2}{L_i^2},$$ where the first term is the minimum of the interaction energy and the second term is the minimum of the kinetic energy. Using the Lagrange multiplier method, the choice of the $m_i$, where $i$ labels the heavy intervals, which minimizes this lower bound $$\sum_i \frac{g\rho L m_i^4}{2 L_i} + m_i^2 \left(1 - \frac{1}{\sqrt{2}} \right)^2\frac{\pi^2}{L_i^2}$$ under the normalization constraint $$\sum_i m_i^2 = 1 - O\left(\frac{1}{\sqrt{\log_p(g\rho)}}\right) - O\left(\frac{1}{ b\log_p(g\rho)}\right)$$ must satisfy $$\frac{\partial}{\partial m_i} \left[\sum_i \frac{g\rho L m_i^4}{2 L_i} + m_i^2 \left(1 - \frac{1}{\sqrt{2}} \right)^2\frac{\pi^2}{L_i^2} \right] = \lambda\,\frac{\partial}{\partial m_i} \sum_i m_i^2 .$$ This equation implies that $$m_i = 0 \quad\text{or}\quad m_i^2 = \frac{L_i}{g\rho L}\left(\lambda - \left(1 - \frac{1}{\sqrt{2}}\right)^2\frac{\pi^2}{L_i^2}\right).$$ For an interval to contribute to the minimization of the lower bound, the kinetic energy of a heavy interval must be less than $\lambda$. For the kinetic energy to meet this bound, the heavy interval must have length $$L_i > \left(1 - \frac{1}{\sqrt{2}}\right)\frac{\pi}{\sqrt{\lambda}}.$$ It follows from Lemma 2 that the normalization condition requires $\lambda$ to satisfy $$\sum_{L_i > \left(1 - \frac{1}{\sqrt{2}}\right)\frac{\pi}{\sqrt{\lambda}}} \frac{L_i}{g\rho L}\left(\lambda - \left(1 - \frac{1}{\sqrt{2}}\right)^2\frac{\pi^2}{L_i^2}\right) = 1 - O\left(\frac{1}{\sqrt{\log_p(g\rho)}}\right) - O\left(\frac{1}{ b\log_p(g\rho)}\right).$$ Using equation (29) and approximating $\lfloor x \rfloor$, $$\sum_{L_i > x} L_i = \left(\frac{n}{pq} + o(n)\right)\left( x\, q\, p^{x+ 1} + p^{x+1}\right),$$ which requires $$\frac{\lambda}{g\rho L}\left(\frac{n}{pq} + o(n)\right) \left( \left(1 - \frac{1}{\sqrt{2}}\right)\frac{\pi}{\sqrt{\lambda}}\, q\, p^{\left(1 - \frac{1}{\sqrt{2}}\right)\frac{\pi}{\sqrt{\lambda}} } + p^{ \left(1 - \frac{1}{\sqrt{2}}\right)\frac{\pi}{\sqrt{\lambda}} }\right) \approx 1 .$$ The asymptotic behavior of the parameter $\lambda$ is determined by $g\rho$. If $\lambda$ is constant or taken to infinity, the normalization will not hold because the left side of (63) goes to infinity. If $\lambda$ converges to zero, the dominant term is the exponential. The correct asymptotic solution for $\lambda$ as $g\rho \to0$ is $$\left(1 - \frac{1}{\sqrt{2}}\right)\frac{\pi}{\sqrt{\lambda}} = \log_p(g\rho) + \log_p\left(\log_p(g\rho)\right) + O(1).$$ This substitution will make the left hand side of (63) converge to a constant. Therefore, the heavy intervals must be longer than $\log_p(g\rho) + \log_p\left(\log_p(g\rho)\right) +O(1)$. The size of $M_{heavy}$ is bounded above by the number of sites on intervals longer than this lower bound. This number is bounded above by $$n\, g\rho\,\log_p(g\rho)\left(\log_p(g\rho) + \log_p\left(\log_p(g\rho)\right)\right) + o(n).$$ The interaction energy of any state supported on this number of sites is bounded below by $$\frac{1 - o(1)}{2\,pq\,\log_p(g\rho)\left(\log_p(g\rho) + \log_p\left(\log_p(g\rho)\right)\right)}.$$ The lower bound follows. $\Box$\
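Taken together, the two halves of the argument can be summarized in a single display (added here for readability; the upper constant is the $C'$ obtained in the upper-bound proof, while $c_-$ is a label introduced here for the unoptimized lower constant): with probability one,
$$\frac{c_-}{\log_p^2(g\rho)} \;\leq\; E_0 \;\leq\; \frac{C'\left(1+o(1)\right)}{\log_p^2(g\rho)}, \qquad g\rho \to 0,$$
with $C' = \frac{3}{4q} + \pi^2$ and some constant $c_- > 0$, which is precisely the statement of Theorem 3.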
Acknowledgements
================
The authors would like to acknowledge M. Lewenstein, A. Sanpéra, P. Massignan, and J. Stasińska for useful discussions following [@ICFO12], and R. Sims and L. Friedlander for mathematical insight. The authors would also like to thank G. Modugno for discussions on experiments related to this work.\
The work was partially supported by the NSF grant DMS 0623941.\
|
**FLUCTUATIONS IN THE COMPOSITE REGIME**
**OF A DISORDERED GROWTH MODEL**
Janko Gravner
Department of Mathematics
University of California
Davis, CA 95616
email: gravner@math.ucdavis.edu
Craig A. Tracy
Department of Mathematics
Institute of Theoretical Dynamics
University of California
Davis, CA 95616
email: tracy@itd.ucdavis.edu
Harold Widom
Department of Mathematics
University of California
Santa Cruz, CA 95064
email: widom@math.ucsc.edu
(Version 2, March 22, 2002)
We continue to study a model of disordered interface growth in two dimensions. The interface is given by a height function on the sites of the one–dimensional integer lattice and grows in discrete time: (1) the height above the site $x$ adopts the height above the site to its left if the latter height is larger, (2) otherwise, the height above $x$ increases by 1 with probability $p_x$. We assume that $p_x$ are chosen independently at random with a common distribution $F$, and that the initial state is such that the origin is far above the other sites. Provided that the tails of the distribution $F$ at its right edge are sufficiently thin, there exists a nontrivial composite regime in which the fluctuations of this interface are governed by extremal statistics of $p_x$. In the quenched case, the said fluctuations are asymptotically normal, while in the annealed case they satisfy the appropriate extremal limit law.
2000 [*Mathematics Subject Classification*]{}. Primary 60K35. Secondary 05A16, 33E17, 60K37, 60G70, 82C44.
Key words and phrases: growth model, fluctuations, Fredholm determinant, phase transition, saddle point analysis, extremal order statistics.
This work was partially supported by National Science Foundation grants DMS–9703923, DMS–9802122, and DMS–9732687, as well as the Republic of Slovenia’s Ministry of Science Program Group 503. Special thanks go to Harry Kesten, who supplied the main idea for the proof of Lemma 6.1. The authors are also thankful to the referee for the careful reading of the manuscript and suggestions for its improvement.
Disordered systems, which are, especially in the context of magnetic materials, often referred to as [*spin glasses*]{}, have been the subject of much research since the pioneering work in the 1970s. The vast majority of this work is nonrigorous, based on simulations and techniques for which a proper mathematical foundation is yet to be developed. (See \[MPV\] for early developments and \[Tal\] for a nice overview of the mean field approach.) As a result, there is a large number of new and intriguing phenomena observed in these models which await rigorous treatment. Among the most fundamental issues are the existence and the nature of a phase transition into a [*glassy*]{} or [*composite*]{} phase: below a critical temperature, the dynamics of a strongly disordered system becomes extremely slow with strong correlations, aging and localization effects and possibly many local equilibria. We refer the reader to \[NSv\] and \[BCKM\] and other papers in the same volume for reviews and pointers to the voluminous literature and to \[NSt1\] and \[NSt2\] for some recent rigorous results. In view of the difficulties associated with a detailed understanding of realistic spin glass systems, other disordered models have been introduced, which are more amenable to existing probabilistic methods.
One of the most successful of such (deceptively) simple models is the one–dimensional random walk with random rates \[FIN1\]. In this model, the walker waits at a site $x\in \bZ$ for an exponential time with mean $\tau_x$ before jumping to either of its two neighbors with equal probability. The disorder variables $\tau_x$ are i.i.d. and quenched, that is, chosen at the beginning. Provided that the distribution of $\tau_x$ has sufficiently fat tails, namely, if $P(\tau_x\ge t)$ decays for large $t$ as $t^{-\a}$ with $\a<1$, the walk exhibits aging and localization effects (\[FIN1\], \[FIN2\]). Various one–dimensional voter models and stochastic Ising models at zero temperature can be explicitly represented with random walks. This connection has been explored to demonstrate glassy phenomena such as aging and chaotic time dependence (\[FIN1\], \[FINS\]). The positive temperature versions of such results remain open problems, even in one dimension.
In contrast with models which are exactly solvable in terms of random walks and are by now a classical subject in spatial processes (\[Gri1\], \[Lig\]), techniques based on the RSK algorithm and random matrix theory have entered into the study of growth processes only recently (\[BDJ\], \[Joh1\], \[Joh2\], \[BR\], \[PS\], \[GTW1\]). The purpose of this paper is to employ these new methods to prove the existence of a [*pure phase*]{} and a [*composite phase*]{} in a disordered growth model. It has been observed before in similar models \[SK\] that, for flat interfaces, the role of temperature is apparently played by their [*slope*]{}. In our case, the initial set is very far from flat and “temperature” is measured instead by the macroscopic direction (from the origin) of points on the boundary. We identify precisely the critical direction and demonstrate that the fluctuation asymptotics provide an order parameter that distinguishes the two phases. We emphasize that a hydrodynamic quantity, the asymptotic shape, has a discontinuity of the first derivative at the transition point, at which the shape changes from curved to flat. However, this does not signify the existence of a new phase since kinks are common in many random growth models \[GG\]; thus a finer resolution is necessary.
The particular model we investigate is [*Oriented Digital Boiling (ODB)*]{} (Feb. 12, 1996, Recipe at \[Gri2\], \[Gra\], \[GTW1\], \[GTW2\]), arguably the simplest interacting model for a growing interface in the two–dimensional lattice $\bZ^2$. The occupied set, which changes in discrete time $t=0,1,2,\dots$, is given by $\A_t=\{(x,y): x\in \bZ, y\le h_t(x)\}$. The initial state is a long stalk at the origin: $$h_0(x)=\cases
0, &\text{if }x=0,\\
-\infty, &\text{otherwise, }
\endcases$$ while the time evolution of the height function $h_t$ is determined thus: $$h_{t+1}(x)=\max\{h_{t}(x-1), h_{t}(x)+\e_{x,t}\}.$$ Here $\e_{x,t}$ are independent Bernoulli random variables, with $P(\e_{x,t}=1)=p_x$. Although this model is simplistic, note that it does involve the roughening noise (random increases) as well as the smoothing surface tension effect (neighbor interaction), the basic characteristics of many growth and deposition processes. (See Sections 5.1, 5.2 and 5.4 of \[Mea\] for an overview of simple models of ODB type as well as some other disordered growth processes.)
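To make the dynamics concrete, here is a minimal Python sketch (added for illustration; not from the paper) that runs ODB from the stalk initial state. The disorder $p_x$ is drawn from the particular family $F(s)=1-(1-2s)^\eta$ used later in the simulations; that choice, the lattice size, and the time horizon are assumptions of the illustration.

```python
import numpy as np

# Illustrative ODB simulation from the stalk initial state (assumed F, sizes).
rng = np.random.default_rng(2)
N, T, eta = 400, 200, 3.0
px = 0.5 * (1.0 - (1.0 - rng.random(N)) ** (1.0 / eta))   # quenched p_x ~ F via inverse transform
h = np.full(N, -10**9)                                     # proxy for -infinity
h[0] = 0                                                   # long stalk at the origin

for t in range(T):
    eps = (rng.random(N) < px).astype(int)                 # eps_{x,t} = 1 with prob p_x
    left = np.concatenate(([-10**9], h[:-1]))              # h_t(x-1)
    h = np.maximum(left, h + eps)                          # h_{t+1}(x) = max{h_t(x-1), h_t(x)+eps}

print(h[:10])                                              # heights near the origin at time T
```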
We will assume, throughout this paper, that the disorder variables $p_x$ are initially chosen at random, independently with a common distribution $F(s)=P(p_x\le s)$. We use $\<\,\cdot\,\>$ to denote integration with respect to $dF$ and label by $p$ a generic random variable with distribution $F$.
It quickly turns out (\[GTW1\]) that fluctuations in ODB can be studied via equivalent increasing path problems. Start by constructing a random $m\times n$ matrix $A=A(F)$, with independent Bernoulli entries $\e_{i,j}$ and such that $P(\e_{i,j}=1)=p_j$, where, again, $p_j\eqd p$ are i.i.d. Label columns as usual, but number the rows starting at the bottom. We call a sequence of 1’s in $A$ whose positions have column index nondecreasing and row index strictly increasing an [*increasing path*]{} in $A$, and denote by $H=H(m,n)$ the length of the longest increasing path. Then, under a simple coupling, $h_t(x)=H(t-x,x+1)$ (\[GTW1\]). Thus we will concentrate our attention on the random matrix $A$ rather than the associated growth model. From now on we will also replace $p_i$ with its [*ordered*]{} sample, so that $p_1\ge p_2\ge \dots \ge p_n$ (see section 2.2 of \[GTW1\]).
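The coupling $h_t(x)=H(t-x,x+1)$ suggests the dynamic-programming recursion $H(i,j)=\max\{H(i,j-1),\,H(i-1,j)+\e_{i,j}\}$ for the longest increasing path. The short Python sketch below (added here; the recursion is an assumption read off from the growth rule rather than quoted from \[GTW1\]) computes $H(m,n)$ on a sampled Bernoulli matrix.

```python
import numpy as np

# Sketch: longest increasing path length H(m, n) by dynamic programming
# (assumed recursion H(i,j) = max(H(i,j-1), H(i-1,j) + eps_{i,j})).
def longest_increasing_path(A):
    m, n = A.shape
    H = np.zeros((m + 1, n + 1), dtype=int)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            H[i, j] = max(H[i, j - 1], H[i - 1, j] + A[i - 1, j - 1])
    return H[m, n]

rng = np.random.default_rng(3)
m, alpha, eta = 300, 0.1, 3.0
n = int(alpha * m)
pj = np.sort(0.5 * (1.0 - rng.random(n) ** (1.0 / eta)))[::-1]   # ordered sample p_1 >= ... >= p_n
A = (rng.random((m, n)) < pj[None, :]).astype(int)               # column j uses p_j
print(longest_increasing_path(A))
```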
We initiated the study of ODB in a random environment in an earlier paper (\[GTW2\]), from which we now summarize the notation and the main results. Throughout, we denote by $b$ the right edge of the support of $dF$ and assume it is below 1, i.e., $$b=\min\{s:F(s)=1\}<1.$$ Moreover, we fix an $\a>0$ and assume that $n=\a m$. (Actually, $n=\lfloor \a m\rfloor$, but we omit the obvious integer parts.) As mentioned above, we can expect different behaviors for different slopes on the boundary of the asymptotic shape, which translates to different $\a$’s. To be more precise, we define the following critical values $$\aligned
&\a_c=\< \frac{p}{1-p}\>^{-1},\\
&\a_c'=\<\frac{p(1-p)}{(b-p)^2}\>^{-1}.
\endaligned$$ Note that the second critical value is nontrivial, i.e., $\a_c'>0$, iff $\<(b-p)^{-2}\><\infty$. Next, define $c=c(\a,F)$ to be the time constant, $\displaystyle
c=c(\a,F)=\lim_{m\to\infty}{H}/m,
$ which determines the limiting shape of $\A_t$, namely $\lim \A_t/t$, as $t\to\infty$. In Theorem 1 of \[GTW2\], it was found that $c$ exists a.s. and is given by $$c(\a,F)=
\cases
b+\a (1-b)\<p/(b-p)\>,&\text{ if }\a\le \a_c', \\
a+\a (1-a)\<p/(a-p)\>,&\text{ if }\a_c'\le \a\le \a_c,\\
1,&\text{ if }\a_c\le \a.
\endcases$$ Here $a=a(\a,F)\in [b,1]$ is the unique solution to $\a\<{p(1-p)}/{(a-p)^2}\>=1.$
In \[GTW2\], we also determined fluctuations in the [*pure*]{} regime $\a_c'<\a< \a_c$. (The [*deterministic*]{} regime $\a_c<\a$ has no fluctuations.) The [*annealed*]{} fluctuations (\[GTW2\], Theorem 2) about the deterministic shape $c$ grow as $\sqrt m$ and are asymptotically normal: $$\frac{H-cm}{\tau_0\sqrt\a \cdot m^{1/2}}\convd N(0,1)$$ as $m\to\infty$, where $
\tau_0^2=\Var({(1-a)p}/{(a-p)}).
$
By contrast, [*quenched*]{} fluctuations conditioned on the state of the environment grow more slowly, as $m^{1/3}$, and satisfy the $F_2$–distribution known from random matrices (\[TW1\], \[TW2\]). To formulate this result, we let $r_j=p_j/(1-p_j)$, define $u_n$ to be the solution of $${\al\ov n}\,\sum_{j=1}^n{r_j\ov
(1+r_ju)^2}={1\ov(u-1)^2}\tag{1.1}$$ which lies in the interval $(-r_1\inv,\,0)$. This solution exists provided that $\a
n^{-1}\sum_{j=1}^n r_j<1$ which holds a.s. for large $n$ as soon as $\a<\a_c$. Next, set $c_n=c(u_n)$ where $$c(u)={1\ov 1-u}-{\al\ov n}\sum_{j=1}^n{r_ju\ov 1+r_ju}.\tag{1.2}$$ Then (\[GTW2\], Theorem 3) there exists a constant $g_0\ne 0$ so that $$P\(\frac{H-c_n m}{g_0^{-1} m^{1/3}}\le s\,\mid\, p_1,\dots,p_n\)\to F_2(s),$$ as $m\to \infty$, almost surely, for any fixed $s$.
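Since $u_n$ and $c_n$ are defined only implicitly, a numerical illustration may help. The Python sketch below (added here, not from the paper) solves (1.1) by bisection on $(-r_1\inv,\,0)$ and then evaluates $c_n$ from (1.2) for a sampled environment; the distribution used for $p$ is an assumption of the example.

```python
import numpy as np

# Sketch: solve equation (1.1) for u_n on (-1/r_1, 0) and evaluate c_n from (1.2).
def quenched_centering(p, alpha):
    r = np.sort(p / (1.0 - p))[::-1]          # r_1 >= r_2 >= ... >= r_n
    n = len(r)
    f = lambda u: (alpha / n) * np.sum(r / (1.0 + r * u) ** 2) - 1.0 / (u - 1.0) ** 2
    lo, hi = -1.0 / r[0] + 1e-12, -1e-12      # f(lo) > 0 (pole at -1/r_1), f(hi) < 0 when alpha < alpha_c
    for _ in range(200):                       # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    u = 0.5 * (lo + hi)
    c = 1.0 / (1.0 - u) - (alpha / n) * np.sum(r * u / (1.0 + r * u))
    return u, c

rng = np.random.default_rng(4)
n, alpha, eta = 2000, 0.1, 3.0
p = 0.5 * (1.0 - rng.random(n) ** (1.0 / eta))   # assumed F with b = 1/2
print(quenched_centering(p, alpha))
```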
For fluctuation results in this paper we need to impose some additional assumptions on $F$, which are best expressed in terms of $G(x)=1-F((b-x)-)$, the distribution function for $b-p$. First we list our weaker conditions:
[(a)]{} If $x,y\to 0$ and $x\sim y$, then $G(x)\sim G(y)$.
[(b)]{} If $x,y\to 0$ and $x=O(y)$, then $G(x)= O(G(y))$.
[(c)]{} As $x\to 0$, $G(x)=o(x^2/\log x^{-1})$.
Our stronger assumptions on $F$ require that there exists a $\g>0$ so that:
[(a$'$)]{} The function $G(x)/x^{\g}$ is nonincreasing in a neighborhood of $x=0$.
[(b$'$)]{} $G(x)=O(x^{2}/\log^\nu x^{-1})$ as $x\to 0$ for some $\nu>2\g+4$.
If $\a_c'>0$, then automatically $G(x)=o(x^2)$ as $x\to 0$. The stronger assumptions thus do not require much more: for nicely behaved $G$ they amount to $G(x)=O(x^2/\log^\nu x^{-1})$ for some $\nu>8$. The quenched and annealed fluctuations are now determined by the next two theorems.
Theorem 1. Assume that $0<\a<\a_c'$, let $$\tau^2= {b(1-b)}\(\frac 1\a-\frac 1{\a_c'}\),$$ and let $\Phi$ be the standard normal distribution function. If (a)–(c) hold, then for any fixed $s$, as $m\to\infty$, $$P\(\frac {H-c_n m+2\tau \sqrt n}{\tau \sqrt n}\le s
\,\mid\, p_1,\dots,p_n\)\to \Phi(s).$$ Here, the convergence is in probability if (a)–(c) hold, and almost sure if (a$'$) and (b$'$) hold.
Theorem 2. Assume that $0<\a<\a_c'$, and that (a)–(c) hold. Then, for any fixed $s$, $$P\(H\le cm-(1-\a/\a_c')\,m\,G^{-1}(s/n)\,\mid\, p_1,\dots,p_n\)\to e^{-s}$$ in probability. In particular, $$P\(H\le cm-(1-\a/\a_c')\,m\,G^{-1}(s/n)\)\to e^{-s}.$$
Throughout, we follow the usual convention in defining $G^{-1}(x)=\sup\{y: G(y)<x\}$ to be the left continuous inverse of $G$, although any other inverse works as well.
Assume, for simplicity, that, as $x\to 0$, $G(x)$ behaves as $x^\eta$ for some $\eta>2$. Then, in contrast with the pure regime, the annealed fluctuations in the composite regime scale as $m^{1-1/\eta}$, while the quenched ones scale as $m^{1/2}$. In fact, this can be guessed from \[GTW2\]. Namely, as explained in Section 2 of that paper, the maximal increasing path has a nearly vertical segment of length asymptotic to $(1-\a/\a_c')m$ in (or near) the column of $A$ which uses the largest probability $p_1$. Therefore, this vertical part of the path dominates the fluctuations, as the rest presumably has $o(\sqrt m)$ fluctuations. (These are most likely [*not*]{} of the order exactly $m^{1/3}$ as they correspond to the critical case $\a=\a_c'$. The precise nature of the critical fluctuations is an interesting open problem.) The variables in the $p_1$–column are Bernoulli with variances about $b(1-b)$, thus the contribution of the vertical part to the standard deviation is about $(b(1-b)(1-\a/\a_c')m)^{1/2}=\tau\sqrt n$. The annealed case then simply picks up the variation in the extremal statistic $p_1$.
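To make the orders of magnitude explicit, here is a short check (added here, under the simplifying assumption that $G(x)=x^\eta$ exactly): then $G^{-1}(s/n)=(s/n)^{1/\eta}$, so the annealed centering correction in Theorem 2 is
$$(1-\a/\a_c')\,m\,G^{-1}(s/n)=(1-\a/\a_c')\,m\,(s/\a m)^{1/\eta}= \text{const}\cdot s^{1/\eta}\, m^{1-1/\eta},$$
while the quenched scale in Theorem 1 is $\tau\sqrt n=\tau\sqrt{\a m}=\text{const}\cdot m^{1/2}$; for $\eta>2$ the annealed scale $m^{1-1/\eta}$ indeed dominates $m^{1/2}$.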
Simple as the above intuition may be, Theorems 1 and 2 are not so easy to prove and require considerable additional technical details. We also note the mysterious correction $2\tau\sqrt n$ in Theorem 1 for which we have no intuitive explanation.
The fluctuations results in \[GTW2\] and the present paper thus sharply distinguish between two different phases of [*one*]{} particular growth model. Nevertheless, it seems natural to speculate that this phenomenon is universal in the sense that it occurs in other one–dimensional finite range dynamics of ODB type, started from a variety of initial states. Indeed, such universality has been established in other random matrix contexts \[Sos\]. Fluctuations of higher–dimensional versions seem much more elusive; it appears that a glassy transition should take place, but the fluctuation scalings could be completely different.
To elucidate, we present some simulation results. In all of them, we start from the flat substrate $h_0\equiv0$ and use $F(s)=1-(1-2s)^\eta$, so that $b=1/2$. It is expected that, as $\eta$ increases, the quenched fluctuation exponent experiences a sudden jump from $1/3$ to $1/2$. We simulate two dynamics, the ODB and the two–sided digital boiling (abbreviated simply as DB), given by $$h_{t+1}(x)=\max\{h_{t}(x-1), h_t(x+1), h_{t}(x)+\e_{x,t}\}.$$ The top of Figure 1 illustrates the ODB on 600 sites (with periodic boundary), run until time 600. The occupied sites are periodically colored so that the sites which become occupied at the same time are given the same color. On the left, $\eta=1$ (i.e., $p$ is uniform on $[0,1/2]$ and $\a_c'=0$), while $\eta=3$ (and $\a_c'>0$) on the right. The darkly colored sites thus give the height of the surface at different times and provide a glimpse of its evolution. In the pure regime ($\eta=1$), the boundary of the growing set reaches a local equilibrium (\[SK\], \[BFL\]), while in the composite regime ($\eta=3$) the boundary apparently divides into domains, which are populated by different equilibria and grow sublinearly. This is the mechanism that causes increasing fluctuations. The bottom of Figure 1 confirms this observation; it features a log–log plot of quenched standard deviation (estimated over 1000 independent trials) of $h_t(0)$ vs. $t$ up to $t=10\,000$. The $\eta=1$ case is drawn with $+$’s and the $\eta=3$ case with $\times$’s; the two least squares approximation lines (with slopes 0.339 and 0.517, respectively) are also drawn. We note that the asymptotic speed of this flat interface is known: $\lim_{t\to\infty} h_t(0)/t=\sup_{\a>0}
(\a+1)c(\a)$. Here is the reason: if ODB dynamics $h_t^i$, $h_t$ start from initial states $h_0^i$, $h_0=\sup_i h_0^i$, respectively, and are coupled by using the same coin flips $\e_{x,t}$, then $h_t=\sup_i h_t^i$ for every $t$.
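For the family of distributions used in these simulations, the location of the composite phase for ODB can be checked directly; the following short verification is added here for convenience. With $F(s)=1-(1-2s)^\eta$ on $[0,\tfrac12]$ we have $b=\tfrac12$ and
$$G(x)=1-F((\tfrac12-x)-)=(2x)^\eta,\qquad 0\le x\le \tfrac12,$$
so that $\<(b-p)^{-2}\>=\int_0^{1/2}x^{-2}\,dG(x)=\eta\,2^\eta\int_0^{1/2}x^{\eta-3}\,dx$ is finite precisely when $\eta>2$. Hence $\a_c'>0$, and the composite regime for ODB is nontrivial, exactly for $\eta>2$, consistent with the choices $\eta=1$ ($\a_c'=0$) and $\eta=3$ ($\a_c'>0$) in Figure 1.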
Figure 1. Evolution and quenched deviation in the two phases of disordered ODB.
Perhaps surprisingly, it appears that the phase transition in the DB does [*not*]{} occur at $\eta=2$, and in general the delineation is much murkier. At this point, we cannot even eliminate the possibility of continuous dependence of fluctuation exponent on $\eta$. In Figure 2, we present the results of simulations for $\eta=0.2$ (left) and $\eta=1$ (right). The top figures only show evolution near time $t=5000$, as no difference is readily apparent at earlier times. The plot of quenched deviations is analogous to the one in Figure 1, with the least squares slopes 0.395 ($\eta=0.2$) and 0.49 ($\eta=1$).
Figure 2. Evolution and quenched deviation in disordered DB.
The organization of the rest of the paper is as follows. Section 2 reviews the set-up from \[GTW1, GTW2\], in Section 3 we prove the relevant asymptotic properties of the order statistics and of the solutions of (1.1) and (1.2), and demonstrate how Theorem 2 follows from Theorem 1. Section 4 is a detailed analysis of the asymptotic behavior of steepest descent curves. The proof of convergence in probability in Theorem 1 is then concluded in Section 4. Finally, Section 5 strengthens the results of Section 3 (under the stronger conditions) so that almost sure convergence is implied.
We recall how we approached these problems in \[GTW1,GTW2\]. The starting point is the identity $$\Pr(H\le h)=\det\,(I-K_h),$$ where $K_h$ is the infinite matrix acting on $\ell^2({\bZ}^+)$ with $(j,k)$–entry $$K_h(j,k)=\sum_{\l=0}^{\iy}(\ph_-/\ph_+)_{h+j+\l+1}\;
(\ph_+/\ph_-)_{-h-k-\l-1}.$$ The subscripts denote Fourier coefficients and the functions $\ph_{\pm}$ are given by $$\ph_+(z)=\prod_{j=1}^n(1+r_jz),\ \ \ \ph_-(z)=(1-z\inv)^{-m}.$$ The matrix $K_h$ is the product of two matrices, with $(j,k)$–entries given by $$\align
&(\ph_+/\ph_-)_{-h-j-k-1}={1\ov 2\pi i}
\int\prod_{j=1}^n(1+r_jz)\;(z-1)^m\,z^{-m+h+j+k}\,dz,\\
&(\ph_-/\ph_+)_{h+j+k+1}={1\ov 2\pi i}
\int\prod_{j=1}^n(1+r_jz)\inv\;(z-1)^{-m}\,z^{m-h-j-k-2}\,dz.
\endalign$$ The contours for both integrals go around the origin once counterclockwise; in the second integral 1 is on the inside and all the $-r_j\inv$ are on the outside.
If $h=c_n\,m+h'$ we have $$\align
&(\ph_+/\ph_-)_{-h-j-k-1}={1\ov 2\pi i}
\int\ps(z)\,z^{h'+j+k}\,dz,\tag{2.1}\\
&(\ph_-/\ph_+)_{h+j+k+1}={1\ov 2\pi i}\int\ps(z)\inv\,z^{-h'-j-k-2}\,dz,
\tag{2.2}
\endalign$$ where $$\ps(z)=\prod_{j=1}^n(1+r_jz)\;(z-1)^m\,z^{-(1-c_n)\,m}.$$ The idea is to apply steepest descent to the above integrals. If $\si(z)=m\inv\log\,\ps(z)$ then $$\si'(z)={\al\ov n}\,\sum_{j=1}^n{r_j\ov 1+r_jz}+{1\ov z-1}+{c_n-1\ov
z}\tag{2.3}$$ and, with $u_n$ and $c_n$ as defined above, $\si'(u_n)=\si''(u_n)=0$. The steepest descent curves both pass through $u_n$. As $n\ra\iy$ the zeros/poles $-r_j\inv$ accumulate on the half-line $(-\iy,\,\xi]$ where $\xi=1-b\inv$. In the pure regime the points $u_n$ and the curves are bounded away from this half-line, behave regularly and have nice limits. However in the composite regime the points and curves come very close to $\xi$, their behavior is not so simple, and we apply steepest descent not quite as described.
Until Section 5, we assume that all limits are in probability, unless otherwise indicated. To prove the first part of Theorem 1 and Theorem 2, we thus assume that (a)–(c) hold.
We let $q_j=b-p_j$, so that $q_1,\,\cd,q_n$ are chosen independently according to the distribution function $G$, then ordered so that $q_1\le q_2\le\cd\le q_n$.
Let $t_1< t_2< \dots < t_n$ be an ordered sample of i.i.d. uniform $(0,1)$ random variables. Then we may construct the $G$–sample by setting $q_j=G^{-1}(t_j)$. We will also use the well-known fact that, given $t_j$, the conditional distribution of $t_1,\dots t_{j-1}$ is that of an ordered sample of $j-1$ uniforms on $[0,t_j]$.
Lemma 3.1. There exists a positive constant $c_1$ such that $x\le G(G^{-1}(x))\le x/c_1$ for $x\in (0,1)$. Moreover, $G(G^{-1}(x))\sim x$ as $x\to 0$.
Write the complement of the range of $G$ as $\cup_i I_i$, where $I_i$ are disjoint and either of the form $[a_i,b_i)$ or $(a_i, b_i)$. If $x\in (0,1)$ is in the range of $G$, then $G(G^{-1}(x))=x$, otherwise, if $x\in I_i$, $G(G^{-1}(x))=b_i$. By (a), $b_i\sim a_i$ if $a_i\to 0$. The last sentence in the statement is then proved, and the first follows. $\square$
Lemma 3.2. With $c_1$ as in Lemma 3.1, for $\eta<1$ and $j\ge 2$, $$\Pr\left(G(q_1)>\eta G(q_j)\right)\le
(1-c_1\eta)^{j-1}.$$
By Lemma 3.1 and remarks preceding it, $$\Pr\(G(q_1)>\eta G(q_j)\)
\le \Pr\(t_1>{c_1}\eta t_j\)=
\(1-{c_1}\eta\)^{j-1}.\qquad\square$$
Lemma 3.3. $\lim_{n\ra\iy}\Pr\left(q_1\le
G\inv(s/n)\right)=1-e^{-s}$.
Fix an $\e>0$. First, by monotonicity of $G^{-1}$, $t_1\le s/n$ implies $q_1\le G^{-1}(s/n)$. Second, by Lemma 3.1 and the monotonicity of $G$ we have that, for large enough $n$, $q_1\le G^{-1}(s/n)$ implies $t_1\le
G(G^{-1}(t_1))=G(q_1)\le G(G^{-1}(s/n))
\le (1+\e)s/n$. These give the inequalities $P(q_1\le G^{-1}(s/n))\ge 1-(1-s/n)^n$, and $P(q_1\le G^{-1}(s/n))\le 1-(1-(1+\e)s/n)^n$. The statement of the lemma now follows upon first letting $n\to\infty$ and then $\e\to 0$. $\square$
Let $t_1$ be as in the previous proof. Fix an $\e>0$. Find a $\d>0$ so that $x<\d$ implies $x\le G(G\inv(x))<(1+\e)x$. Then $P(G(G\inv(t_1))<s/n)\le
P(t_1< s/n)+P(t_1>\d)\to 1-\exp(-s)$. Together with a similarly proved lower bound this implies that $P(q_1<G\inv(s/n))=P(G(q_1)<s/n)\to 1-\exp(-s)$. Thus we only need to show that $P(G^{-1}(t_1)=G^{-1}(s/n))=
P(t_1$ and $s/n$ are in the same $I_i)\to 0$. (Here the $I_i$ are the same as in the proof of Lemma 3.1.) But the probability in question is either 0 if $s/n$ is in the range of $G$ or, if $s/n\in I_i$, is bounded above by $P(t_1>a_i\,|\,t_1<b_i)\le (b_i-a_i)/b_i$. However, $a_i\to 0$ as $n\to\infty$, thus $a_i\sim b_i$ and the proof is concluded. $\square$
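Lemma 3.3 is easy to test numerically. The Monte Carlo sketch below (added here) assumes the concrete family $G(x)=(2x)^\eta$, for which $G^{-1}(u)=u^{1/\eta}/2$, and estimates $\Pr(q_1\le G^{-1}(s/n))$.

```python
import numpy as np

# Monte Carlo sketch: check that P(q_1 <= G^{-1}(s/n)) is close to 1 - e^{-s}
# (assumption: G(x) = (2x)^eta, i.e. F(s) = 1 - (1-2s)^eta with b = 1/2).
rng = np.random.default_rng(5)
eta, n, trials, s = 3.0, 1000, 5000, 1.0
U = rng.random((trials, n))
q = 0.5 * U ** (1.0 / eta)                  # i.i.d. sample of b - p with distribution G
q1 = q.min(axis=1)                          # smallest gap in each trial
threshold = 0.5 * (s / n) ** (1.0 / eta)    # G^{-1}(s/n)
print(np.mean(q1 <= threshold), 1.0 - np.exp(-s))
```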
Remark. It follows from Lemma 3.3, and the fact that $G(x)=o(x^2)$ near $x=0$, that $n^{1/2}q_1\ra\iy$ as $n\ra\iy$.
Lemma 3.4. With high probability $q_1/q_2$ is bounded away from 1 as $n\ra\iy$. More precisely, for every $\eta>0$ there is a $\dl>0$ such that $\Pr(q_1\le(1-\dl)\,q_2)\ge 1-\eta$ for large enough $n$.
It follows from Lemma 3.1 that for every $\eta>0$ there exists a $\d_1>0$ so that the following implication holds for $t_2<\d_1$: if $G(q_1)>(1-\d_1)G(q_2))$ then $t_1>(1-\eta)t_2$. Furthermore, by the assumption (a), there exists a $\d\in (0,\d_1)$ so that, for $t_2<\d$, $q_1>(1-\d)q_2$ implies $G(q_1)>(1-\d_1)G(q_2)$. Therefore, $$P(q_1>(1-\d)q_2)\le P(t_1>(1-\eta)t_2)+P(t_2>\d)=\eta +P(t_2>\d),$$ and the proof is concluded since $t_2\to 0$ a.s. $\square$
Lemma 3.5. $n\inv\sum_1^nq_1/q_j^3\ra 0$ as $n\ra\iy$.
For any fixed $k$ we have $n\inv\sum_{j=1}^kq_1/q_j^3\le
k/nq_1^2\to 0$. Also, $n\inv\sum_{j=k+1}^n q_j^{-2}<\lan q^{-2}\ran+1$ a.s. for large $n$.
Let $\dl>0$ be given. By the above paragraph, it suffices to show that $$\limsup_{n\to\infty}\Pr\left({q_1\ov q_{k+1}}> \dl \right)$$ will be arbitrarily small for sufficiently large $k$. Now, from the assumption (b), it follows that for some $\eta>0$ we have $G(q_1)> \eta G(q_{k+1})$ whenever $q_1> \dl q_{k+1}$ and $q_1<\eta$. With this $\eta$ (which we may assume is less than 1) we have, from Lemma 3.2, $$\Pr\left({q_1\ov q_{k+1}}>\dl\right)
\le(1-c_1\eta)^{k}+P(q_1\ge \eta),$$ which is clearly enough. $\square$
From now on $\{\ph_n\}$ will denote a sequence of random variables satisfying $\ph_n=o(q_1)$. Since $q_1\gg n^{-1/2}$ we shall assume when convenient that also $\ph_n\gg
n^{-1/2}$. In the statement of the next lemma, the expression $O(\ph_n)$ could have been replaced by the less awkward $o(q_1)$. The reasons for the present statement are that the substitute for this lemma (Lemma 6.2) when we consider almost sure convergence will have this form, and that the same sequence $\{\ph_n\}$ will appear in later lemmas.
Lemma 3.6. Let $\{v_n\}$ be a sequence of points in a disc with diameter the real interval $[-r_1\inv-O(\ph_n),\,\xi]$. Then $$\lim_{n\ra\iy}{1\ov
n}\sum_{j=2}^n{r_j\ov(1+r_jv_n)^2}=\lan{r\ov(1+r\xi)^2}\ran.$$
Write $v_n=(b_n-1)/b_n$. Then if we recall that $\xi=(b-1)/b$ and $p_j=b-q_j$ we see that $b-b_n$ lies in a disc with diameter $[0,\,q_1+O(\ph_n)]$ and that $${1\ov
n}\sum_{j=2}^n{r_j\ov(1+r_jv_n)^2}={1\ov n}\sum_{j=2}^n{b_n^2(b-q_j)(1-b+q_j)\ov
(b_n-b+q_j)^2}.$$ If we subtract from this the same expression with $b_n$ replaced by $b$, that is, $${1\ov n}\sum_{j=2}^n{b^2(b-q_j)(1-b+q_j)\ov q_j^2}, \tag{3.1}$$ we obtain $${1\ov n}\sum_{j=2}^n(b-q_j)(1-b+q_j)
\left[{b_n^2\ov(b_n-b+q_j)^2}-{b^2\ov q_j^2}\right].\tag{3.2}$$ We shall show that this is $o(1)$. Assuming this for the moment, we can finish the proof by first noting that we may, with error o(1), start the sum in (3.1) at $j=1$ since $q_1\gg n^{-1/2}$, and then (3.1) has the a.s. limit $$\lan {b^2(b-q)(1-b+q)\ov q^2}\ran=\lan{r\ov(1+r\xi)^2}\ran.$$
It remains to show that (3.2) is $o(1)$. If we replace the numerator $b^2$ on the right by $b_n^2$, the error is $o(1)$, since $n\inv\sum q_j^{-2}$ is a.s. bounded. If we make this replacement then what we obtain is bounded by a constant times $${b\ov n}\sum_{j=2}^n\left|{(b_n-b)^2-2(b_n-b)q_j\ov
q_j^2(b_n-b+q_j)^2}\right|.$$ Since $|b-b_n|\le q_1+O(\ph_n)=q_1+o(q_1)$ it follows from Lemma 3.4 that $|b_n-b+q_j|$ is at least a constant times $q_j$ for large $n$ and so the above is at most a constant times $${1\ov n}\sum_{j=2}^n{|b_n-b|\ov q_j^3}\le
{1\ov n}\sum_{j=2}^n{q_1\ov q_j^3},$$ and by Lemma 3.5 this is $o(1)$. $\square$
We denote $$\theta=1-\a/\a_c', \qquad
\b=\left({(1-b)\,\al\ov b^3\,\theta}\right)^{1/2}.
\tag 3.3$$
Lemma 3.7. We have $u_n=-r_1\inv+\b n^{-1/2}+o(n^{-1/2})$ as $n\ra\iy$.
We show first that $u_n\ge\xi$ cannot occur for arbitrarily large $n$. If it did, then we would have, using equation (1.1) for $u_n$, $$b^2={1\ov (\xi-1)^2}\le{1\ov (u_n-1)^2}
\le {\al\ov n}\,{r_1\ov(1+r_1\xi)^2}+{\al\ov n}\,\sum_{j=2}^n{r_j\ov
(1+r_j\xi)^2}.$$ It follows from the remark following Lemma 3.3 that the first term on the right is $o(1)$ and from Lemma 3.6 that the second term on the right has limit $$\al\lan {r\ov(1+r\xi)^2}\ran=\al b^2\lan{p(1-p)\ov(b-p)^2}\ran<b^2$$ since we are in the composite regime. This contradiction shows that $u_n\le\xi$ for sufficiently large $n$, and so $u_n\in[-r_1\inv,\,\xi]$. By Lemma 3.6 again, $${\al\ov n}\,\sum_{j=2}^n{r_j\ov
(1+r_ju)^2}={1\ov(u-1)^2}
\to\al\lan {r\ov(1+r\xi)^2}\ran=b^2\al/\al_c'.$$ Therefore the equation (1.1) for $u_n$ becomes $${\al\ov n}\,{r_1\ov(1+r_1u_n)^2}={1\ov (\xi-1)^2}-\al\lan
{r\ov(1+r\xi)^2}\ran+o(1)
=b^2\theta+o(1).$$ Since $r_1=b/(1-b)+o(1)$ we find that the solution is as stated. $\square$
Next, we see how $c_n$ behaves.
Lemma 3.8. We have $c_n=c(\al,F)-\theta \,q_1+o(q_1)$ as $n\ra\iy$, where $\theta$ is given in (3.3).
Write $$c_n={1\ov 1-u_n}-{\al\ov n}\sum_{j=2}^n{r_ju_n\ov 1+r_ju_n}-
{\al\ov n}{r_1u_n\ov 1+r_1u_n}.\tag{3.4}$$ By Lemma 3.7, the last term above is $O(n^{-1/2})$. Equation (1.1) tells us that $${d\ov du}\left({1\ov 1-u}-{\al\ov n}\sum_{j=1}^n{r_ju\ov
1+r_ju}\right)\Big|_{u=u_n}=0,$$ and so $${d\ov du}\left({1\ov 1-u}-{\al\ov n}\sum_{j=2}^n{r_ju\ov
1+r_ju}\right)\Big|_{u=u_n}=
{\al\ov n}{r_1\ov (1+r_1u_n)^2}={\al\ov r_1\b^2}+o(1)={\al(1-b)\ov b\b^2}+o(1).$$
By Lemma 3.6 and its proof, with an error $o(1)$ the derivative of the expression in the parentheses above equals throughout $[u_n,\xi]$ what it equals at $u=\xi$, so the above holds with $u_n$ replaced by any point in this interval. From this and (3.4) we get $$c_n=c(u_n)=c(\xi)-{\al(1-b)\ov b\b^2}(\xi-u_n)+o(\xi-u_n).$$ We have $$\xi-u_n=1-b\inv+r_1\inv+O(n^{-1/2})=p_1\inv-b\inv+O(n^{-1/2})
={q_1\ov b^2}+o(q_1),$$ where we have used the fact that $q_1\gg n^{-1/2}$. Thus $$c_n=c(\xi)-{\al(1-b)\ov b^3\b^2}q_1+o(q_1).$$ Finally, as $\<(b-p)^2\><\infty$, we can use the central limit theorem to conclude that $c(\xi)=c(\al,F)+O(n^{-1/2})$, which completes the proof. $\square$
Proof of Theorem 2. Lemmas 3.3 and 3.8 show that Theorem 2 follows from the part of Theorem 1 on convergence in probability.
Now we go to our integrals (2.1) and (2.2). We are not going to apply steepest descent with $\ps$ as the main integrand, but rather with the function $\ps_1$ which is $\ps$ with the factor $1+r_1z$ removed. It is convenient to introduce the notation $$\ps_1(z,c)=\prod_{j=2}^n(1+r_jz)\;(z-1)^m\,z^{-(1-c)\,m},$$ where $c>0$. (This parameter is not to be confused with the time constant $c=c(\al,F)$ defined earlier.) Thus $\ps_1(z)=\ps_1(z,c_n)$ in this notation. We also define the integrals $$I^+(c)={1\ov 2\pi i}\int(1+r_1z)\,\ps_1(z,c)\,dz,
\ \ \ I^-(c)={1\ov 2\pi i}\int(1+r_1z)\inv\,\ps_1(z,c)\inv z^{-2}\,dz.$$ (Since $I^+(c)=0$ when $c\ge1$ we always assume that $c<1$.) Notice that these are exactly the integrals (2.1) and (2.2) when we set $$c=c_n+(h'+j+k)/m.$$ Since $j,k\ge0$ and we will eventually set $h'=sn^{1/2}$, we may also assume that $$c\ge c_n-O(n^{-1/2}).\tag{\AH}$$
To apply steepest descent to $I^{\pm}(c)$ we must locate the critical points and determine the critical values of $\ps_1(z,c)$. Thus we define $$\si_1(z,c)={1\ov m}\log\,\ps_1(z,c),$$ so that $$\si_1'(z,c)={\al\ov n}\,\sum_{j=2}^n{r_j\ov 1+r_jz}+{1\ov z-1}+{c-1\ov z}.$$ As before, if the parameter $c$ does not appear we take it to be $c_n$, e.g., $\si_1(z)=\si_1(z,c_n)$. So $$\si_1'(z)={1\ov m}\log\,\ps_1(z)=\si'(z)-{\al\ov n}{r_1\ov 1+r_1z}.$$ Using $\si'(u_n)=\si''(u_n)=0$ we get from the above and Lemma 3.7 that $$\si_1'(u_n)=-{\al\ov\b\sqrt n}(1+o(1)),\ \ \ \si_1''(u_n)={\al\ov\b^2}(1+o(1)).\tag{\siders}$$
To determine the critical values of $\si_1(z,c)$ let us first find the value of $c$ for which its derivative has a double zero. (This is the analogue of the quantity $c_n$ for $\si(z)$.) For this we use the analogue of (1.1) and (1.2) but where the terms corresponding to $j=1$ are dropped from the sums. If we call the solution of (1.1) $\ub$ and set $\cb=c(\ub)$ then $\si_1'(z,\cb)$ has a double zero at $\ub$. In analogy with $u_n$, we know that $\ub$ is to the right of and within $O(n^{-1/2})$ of $-r_2\inv$. As for $\cb$, we use Lemma 3.8, its analogue where the sums in (1.1) and (1.2) start with $j=2$, as well as Lemma 3.4, to see that to a first approximation $$\cb=c_n-\theta(q_2-q_1)$$ and that $q_2-q_1\gg n^{-1/2}$. From this and () we see that $c>\cb$.
Using subscripts for derivatives now, we have $$\si_{1z}(\ub,\cb)=\si_{1zz}(\ub,\cb)=0$$ and we want to see how the critical points $u_{c}^{\pm}$ of $\si_1(z,c)$ move away from $\ub$ as $c$ increases from $\cb$. (Here we take $u_{c}^-<u_{c}^+$.) The function $\si_{1z}(z,\cb)$ vanishes at $\ub$ and is otherwise positive in $(-r_2\inv,0)$. It follows that for $c$ close to but larger than $\cb$ we have $u_c^-<\ub<u_c^+$. Differentiating $\si_{1z}(u_{c}^{\pm},c)=0$ with respect to $c$ gives $$0=\si_{1zz}(u_{c}^{\pm},c)\,{du_{c}^{\pm}\ov dc}+\si_{1zc}(u_{c}^{\pm},c)
=\si_{1zz}(u_{c}^{\pm},c)\,{du_{c}^{\pm}\ov dc}+{1\ov
u_{c}^{\pm}}.\tag{\sonezz}$$ Since $u_{c}^{\pm}<0$ it follows that $du_{c}^+/dc\ne0$, and so each of $u_c^{\pm}$ is either a decreasing or increasing function of $c$ for $c>\cb$. From their behavior that we already know for $c$ close to $\cb$ we deduce that $u_{c}^+$ increases and $u_{c}^-$ decreases as $c$ increases. In particular, $u_{c}^-$ is even closer to $-r_2\inv$ than $\ub$.
We remark that from () and the signs of $du_{c}^+/dc$ we deduce $$\si_{1zz}(u_c^+,c)>0,\ \ \ \si_{1zz}(u_c^-,c)<0.\tag{\BH}$$
Next we shall determine the asymptotics of the critical values $\si(u_c^{\pm},c)$. The sequence $\{\ph_n\}$ is as described before Lemma 3.6.
Lemma 4.1. For $c-c_n=O(\ph_n)$ $$\si_1(u_{c}^+,c)=\si_1(-r_1\inv,c)-
{r_1 \b^2\ov2\al}\left(c-c_n+{2\al\ov
r_1\b}(1+o(1))n^{-1/2}\right)^2\tag{\sioneest}$$ and for all $c\ge c_n$ $$\si_1(u_{c}^+,c)< \si_1(-r_1\inv,c)-\eta
n^{-1/2}\,(c-c_n)+O(n^{-1}).\tag{\allcn}$$ for some $\eta>0$. Moreover for all $c$ $$\si_1(u_{c}^-,c)>\si_1(-r_1\inv,c)+\ph_n^2$$ when $n$ is sufficiently large.
Remark. In these and analogous inequalities below we think of $\si_1$ as actually meaning $\Re\si_1$.
Consider first the case $c=c_n$. We have $$\si_1(u_n+\z)=\si_1(u_n)+\si_1'(u_n)\,\z+\z^2\int_0^1(1-t)\,\si_1''(u_n+t\z)\,d
t.$$ If $\z=O(\ph_n)$ then it follows from Lemma 3.6 that $\si_1''(u_n+t\z)=\si_1''(u_n)+o(1).$ Hence, by (), we have for such $\z$ $$\si_1(u_n+\z)=\si_1(u_n)-{\al\ov\b\sqrt
n}\z+\left({\al\ov2\b^2}+o(1)\right)\z^2.
\tag{\siexp}$$ This has zero derivative for $$\z={\b\ov \sqrt n}(1+o(1))$$ and it follows that $$u_{c_n}^+=u_n+{\b\ov \sqrt n}(1+o(1))=-r_1\inv+{2\b\ov \sqrt
n}(1+o(1)).\tag{\uplus}$$ (This critical value must be $u_{c_n}^+$ rather than $u_{c_n}^-$ since the latter is within $O(n^{-1/2})$ of $-r_2\inv$.) From this and (), taking $\z=-r_1\inv-u_n
=-(\b+o(1))n^{-1/2}$ and $\z=u_{c_n}^+-u_n=(\b+o(1))n^{-1/2}$ and subtracting, it follows that $$\si_1(u_{c_n}^+)=\si_1(-r_1\inv)-2(\al+o(1))n^{-1}.\tag{\siuplus}$$
To determine the behavior of $u_{c}^+$ and $\si_1(u_{c}^+,c)$ for more general $c$ we assume first that $$c=c_n+o(1),\ \ \ u_c^+=u_n+O(\ph_n)=-r_1\inv+O(\ph_n).$$ Then $$\si_{1zz}(u_{c}^+,c)=\si_1''(u_n)-{c-c_n\ov {u_c^+}^2}={\al\ov\b^2}+o(1)$$ by (). Therefore () gives $${du_{c}^+\ov dc}=-(\b^2/\al+o(1))/u_{c}^+=r_1{\b^2\ov\al}(1+o(1)),$$ whence $$\aligned
u_{c}^+&=u_{c_n}^++r_1{\b^2\ov\al}(c-c_n)(1+o(1))\\
&=-r_1\inv+{2\b\ov \sqrt
n}(1+o(1))
+r_1{\b^2\ov\al}(c-c_n)(1+o(1)),
\endaligned
\tag{\uest}$$ by (). This holds if $c-c_n=O(\ph_n)$ since this assures that $u_{c}^+=u_n+O(\ph_n)$. The above gives $$\log (-u_{c}^+)=
\log(-r_1\inv)-2r_1\b(1+o(1))n^{-1/2}-r_1^2{\b^2\ov\al}(c-c_n)(1+o(1)).
\tag{\logu}$$ (Again, real parts are tacitly meant.)
To determine, $\si_1(u_{c}^+,c)$ we use $\si_{1z}(u_{c}^+,c)=0$ to deduce $${d\ov dc}\si_1(u_{c}^+,c)=\log u_{c}^+.\tag{\logeq}$$ We continue to assume that $c-c_n=O(\ph_n)$ so our estimates hold. Integrating () using the first part of () gives (since $u_{c_n}^+\ra-r_1\inv$) $$\align
\si_1(u_{c}^+,c)=&\,\,\si_1(u_{c_n}^+)+(c-c_n)\,\log u_{c_n}^+-
{1\ov2}r_1^2{\b^2\ov\al}\,(c-c_n)^2\,(1+o(1))\\
=&\,\,\si_1(-r_1\inv)-2(\al+o(1))\,n\inv+\log(-r_1\inv)(c-c_n)\\
&-2r_1\b (c-c_n)\,n^{-1/2}(1+o(1))
-{1\ov2}r_1^2{\b^2\ov\al}\,(c-c_n)^2\,(1+o(1)),
\endalign$$ by () and (). This gives ().
For all $c\ge c_n$ we use the fact that $\log(-u_{c}^+)$ is a decreasing function of $c$, since $u_{c}^+$ increases, and integrate () with respect to $c$ from $c_n$ to $c$, which gives $$\si_1(u_{c}^+,c)\le\si_1(u_{c_n}^+)+\log(-u_{c_n}^+)(c-c_n).$$ Using () and () give ().
For the lower bound for $\si_1(u_c^-,c)$, we assume first that $c\le c_n$. By () this implies in particular that $c-c_n=O(n^{-1/2})$. Now $\si_1(z)$ is decreasing on the interval $(u_c^-,\,u_c^+)$ and $u_c^+-u_c^-\gg\ph_n$. To see the last inequality, note that, from Lemma 3.6, $\si_{1zz}(u_n+\z,c)
\ne0$ for $\z=O(\ph_n)$ and $c-c_n=o(1)$. Therefore $\si_{1z}(u_n+\z,c)$ can vanish for at most one such $\z$ and, since $u_c^+-u_n=O(\ph_n)$, we must have $u_n-u_c^-\gg\ph_n$.
Take any sequence $\ph_n=o(q_1)$ and write $$\si_1(u_c^-,c)\ge \si_1(u_c^+-\ph_n,c)=\si_1(u_c^+-\ph_n)+
(c-c_n)\log(\ph_n-u_{c}^+).$$ (As usual, we imagine real parts having been taken.) If we apply () with $\z=u_c^+-u_n$ and with $\z=u_c^+-\ph_n-u_n$ and subtract, we obtain $$\si(u_c^+-\ph_n)-\si(u_c^+)
={\al\ov\b} n^{-1/2}\ph_n(1+o(1))+
{\al\ov 2\b^2}\left(-2\ph_n(u_c^+-u_n)+\ph_n^2\right)(1+o(1)).$$ By subtracting the first parts of () and () we see that this equals $$o(n^{-1/2}\ph_n)+{\al\ov 2\b^2}\ph_n^2.$$ Since $\ph_n\gg n^{-1/2}$, as we may assume, we obtain $$\si_1(u_c^+-\ph_n)>\si_1(u_c^+)+\eta \ph_n^2$$ for some $\eta>0$. Also, since $c-c_n>-\eta n^{-1/2}$ for some $\eta$ and $\log(1-\ph_n/u_c^+)$ is positive and $O(\ph_n)$ we have $$(c-c_n)\log (\ph_n-u_{c}^+)\ge (c-c_n)\log (-u_{c}^+)-\eta n^{-1/2}\ph_n.$$ Putting these together gives $$\si_1(u_c^-,c)>\si_1(u_c^+,c)+\eta \ph_n^2$$ for some $\eta>0$.
This was for $c\le c_n$. For $c>c_n$ we use what we get from () by replacing $^+$ with $^-$, subtracting the two, and integrating. Together with using the already proved inequality for $c=c_n$ this gives $$\si_1(u_c^-,c)-\si_1(u_c^+,c)>\eta \ph_n^2+\int_{c_n}^c\log(u_c^-/u_c^+)\,dc.$$ The logarithm is nonnegative. Hence $\si_1(u_c^-,c)-\si_1(u_c^+,c)>\eta \ph_n^2$ for all $c$.
If $c-c_n=O(\ph_n)$ then using this and () give $$\si_1(u_c^-,c)>\si_1(-r_1\inv)+\log(r_1\inv)(c-c_n)+\eta \ph_n^2.$$ with a different $\eta$. If $c\ge c_n$ we use $$\si_1(u_c^-,c)-\si_1(u_{c_n}^-)=\int_{c_n}^c\log(-u_c^-)\,dc.$$ Since $u_c^-$ is decreasing and is less than $-r_1\inv$ when $c=c_n$ this gives $$\align
\si_1(u_c^-,c)&\ge\si_1(u_{c_n}^-)+\log(r_1\inv)(c-c_n)\\
&\ge
\si_1(u_{c_n}^+)+\log(r_1\inv)(c-c_n)+\ph_n^2.
\endalign$$ Combining this with () for $c=c_n$ shows that $$\si_1(u_c^-,c)\ge\si_1(-r_1\inv)+\log(r_1\inv)(c-c_n)+\eta \ph_n^2$$ holds for these $c$ as well. Since $\{\ph_n\}$ was an arbitrary sequence satisfying $\ph_n=o(q_1)$ the last statement of the lemma follows. $\square$
Next we consider the steepest descent curves, which we denote by $C^{\pm}(c)$ corresponding to the integrals $I^{\pm}(c)$. It follows from () that $C^+(c)$ passes through $u_c^+$ because on the curve $|\ps_1(z,c)|$ has a maximum at that point; similarly, $C^-(c)$ passes through $u_c^-$. We have enough information to evaluate the portions of these integrals taken over the immediate neighborhoods of these points, but we also have to show that the integrals over the rest of the curves are negligible. This requires not only that the integrands are much smaller there, which they are, but also that the curves themselves are not too badly behaved.
To see what is needed, let $\Ga^{\pm}$ be arcs of steepest descent curves for a function $\rho$, curves on which $\Im\rho$ is constant. In analogy with our $C^{\pm}(c)$ we assume $\Re\rho$ is increasing on $\Ga^-$ as we move away from the critical point and decreasing on $\Ga^+$. If $s$ measures arc length on $\Ga^{\pm}$ we have for $z\in\Ga^{\pm}$ $${dz\ov ds}=\mp{|\rho'(z)|\ov\rho'(z)}.\tag{\CH}$$ If the arc goes from $a$ to $b$ then $$\int_{\Ga^{\pm}}|\rho'(z)|\,ds=\mp\int_{\Ga}\rho'(z)\,dz=\mp(\rho(b)-\rho(a)).$$ Hence the length of $\Ga^{\pm}$ is at most $${|\rho(b)-\rho(a)|\ov \min_{z\in\Ga^{\pm}}|\rho'(z)|}.\tag{\DH}$$ This is to be modified if $\rho'$ has a simple zero at $z=a$, for example. In this case we replace $\rho'(z)$ by $\rho'(z)/(z-a)$. (This is seen by making the variable change $z=a+\sqrt{\xi}$.)
Our goal is Lemma 4.5 below. In order to use the length estimate () to deduce the bounds of the lemma, we must first locate regions in which our curves are located, and then find lower bounds for $\si_1'(z,c)$ in these regions. (Upper bounds for $|\si_1(z,c)|$ will be easy.) These will be established in the next lemmas.
For $r>0$ define $n(r)=\#\{j:r_j\ge r\}$.
Lemma 4.2. The curves $C^{\pm}(c)$ lie in the regions $$\left\{z:|\arg(r\inv+z)|\le\pi{cn\ov\al n(r)+cn}\right\}$$ for all $r$ and in $|z+r_2\inv|\ge\dl n\inv$ if $\dl$ is small enough.
For a point $z$ on either of the curves, say in the upper half-plane, we have $$\align
c\pi&={\al\ov n}\sum_{j=2}^n\arg(r_j\inv+z)+\arg(z-1)+(c-1)\,\arg z\\
&\ge {\al n(r)\ov n}\arg(r\inv+z)+c\,\arg(r\inv+z),
\endalign$$ which gives the first statement of the lemma. For the second, observe that if $\z =O(\ph_n)$ then $\si_1'(r_2\inv+\z,c)=\al/n\z+O(1)$. This shows, first, that $u_c^-$ lies to the right of the circle $|\z|=\dl\,n\inv$ if $\dl$ is small enough and, second, that $1/\si_1'(z,c)$, thought of a vector, points outward from this circle if $\dl$ is small enough. Since a point of $C^-(c)$ moves in the direction of $1/\si_1'(z,c)$ as it moves away from $u_c^-$ (see (3.7) of \[GTW2\]), the curve can never pass inside the circle. Therefore the entire disc $|\z|\le\dl\,n\inv$ lies to the left of $C^-(c)$. This gives the second statement for $C^-(c)$ and it follows also for $C^+(c)$ since this is to the right of $C^-(c)$. $\square$
The next lemma, together with () and the length estimate (), will imply that for $z$ large the curves will move in the direction of $z$ and are well-behaved. If we take any $\bar r<b/(1-b)$ then a positive proportion of the $r_j$ are greater than $\bar r$ and so by Lemma 4.2 the curves lie in a region $$\left\{z:|\arg(\bar r\inv+z)|\le\pi
(1-\dl)\right\}\tag{\regionone}$$ for some $\dl>0$.
Lemma 4.3. We have $z\,\si_1'(z,c)\ra c+\al$ as $n\ra\iy$ and $z\ra\iy$ through region ().
We have $$z\,\si_1'(z,c)=c+\al+O(n\inv)+O(z\inv)+{\al\ov
n}\sum_{j=2}^n{1\ov1+r_jz},$$ and it suffices to show that the last term tends to 0 as $n\ra\iy$ and $z\ra\iy$ through region (). If $z$ is in this region and $r<\bar r/2$ then $|1+rz|\ge \dl(1+ r|z|)$ for another $\dl$. The same bound will hold for all $r\le b/(1-b)$ if $z$ is large enough. Choose $M$ large and break the sum on the right, with its factor $n\inv$, into two parts, the terms where $r_j|z|<M$ and the terms where $r_j|z|\ge M$. We find that its absolute value is at most $$n\inv(n-n(M/|z|))+{1\ov \dl M}.$$ The first term tends to 0 as $z\ra\iy$ while the second could have been arbitrarily small to begin with. $\square$
Remark. If $\Pr(p=0)$ is positive then the above has to be modified. We replace $c+\al$ by $c+\al\,\Pr(p>0)$.
Because of the above lemma we need only consider $z$ in a bounded set. We use the fact that by Lemma 4.2 with $r=r_2$ our curves lie a region $$\left\{z:|\arg(r_2\inv+z)|\le\pi(1-\dl n\inv),\ \ |r_2\inv+z|\ge\dl
n\inv\right\}. \tag{\regiontwo}$$
Lemma 4.4. For all $z$ in any bounded subset of the region () we have $$|\si_1'(z,c)|\ge \dl\,
n^{-6}\,\left|{(z-u_c^-)\,(z-u_c^+)\ov z(z-1)}\right|$$ for some $\dl>0$ independent of $c$. To obtain the lower bound we write $$\Ph(s;z)=\Ph(s_2,\,s_3,\cd,s_n;\,z)={\al\ov n}\sum_{j=2}^n{1\ov s_j+z}+{1\ov
z-1}+ {c-1\ov z}.$$ Of course $\si_1'(z,c)=\Ph(r_2\inv,r_3\inv,\cd,r_n\inv)$. Think of $s_2=r_2\inv$ and $z$ as fixed, and consider the problem of finding $\inf\,|\Ph(s;z)|$ where $s_3,\cd,s_n$ are subject to the conditions $$s_j\ge s_2,\ \ \Ph(s;u_c^{\pm})=0.$$ If we take sequences so that the inf is approached in the limit, then some $s_j$ may tend to infinity, others may tend to $s_2$, and the rest, if any, tend to values strictly greater than $s_2$. Thus our inf is equal to the minimum of $|\Ph(s;z)|$, where $\Ph$ now has the form $$\Ph(s_2,\,s_3,\cd,s_{n'};\,z)={\al\ov n}\sum_{j=2}^{n'}{n_j\ov
s_j+z}+{1\ov z-1}+ {c-1\ov z}$$ with $n'\le n,\ \sum n_j=n-1$, and the $s_j$ with $j>2$ satisfying $s_j>s_2$ and the constraints $\Ph(s;u_c^{\pm})=0.$
Notice that the minimum cannot be zero since $\Ph(s;\,z)$, thought of for the moment as a function of $z$, has $n'$ finite zeros. It has zeros at $u_c^{\pm}$ and one between each pair of consecutive $-s_j$ since all the coefficients of $1/(s_j+z)$ are positive. This accounts for all $n'$ zeros, so our $z$ cannot be one of them.
We apply Lagrange multipliers to find the minimum of $|\Ph(s;z)|^2$ over $s_3,\cd,s_{n'}$, achieved at interior points. There are two constraints, hence two multipliers $\lambda$ and $\mu$. If $p+iq$ is the value $\Ph(s;z)$ where its absolute value achieves its minimum, then the equations we get are $$\Re\,(p-iq)\,{1\ov(s_j+z)^2}={\lambda\ov (s_j+u_c^-)^2}+{\mu\ov
(s_j+u_c^+)^2},$$ where we have divided by the factor $n_j$ appearing in all terms. This is the same sixth degree polynomial equation for all the $s_j$. It follows that there are at most six different $s_j$. Assuming there are exactly six (if there are fewer the argument is the same and the final estimate is better) we change notation again and write these as $s_3,\cd,s_{8}$ so that the minimum is achieved for $$\Ph(s_2,\,s_3,\cd,s_8;\,z)={\al\ov n}\sum_{j=2}^{8}{n_j\ov
s_j+z}+{1\ov z-1}+{c-1\ov z}$$ with other $n_j$.
This has eight zeros. Two of them are $u_c^{\pm}$ and the other six, lying between consecutive $-s_j$, we denote by $u_1,\cd,u_6$. We have the factorization $$\Ph(s;\,z)={1-c\ov u_c^-\,u_c^+}{(z-u_c^-)\,(z-u_c^+)\ov z(z-1)}
{\prod_{i=1}^6 (1-z/u_i)\ov \prod_{j=2}^{8}(1-z/s_j)},$$ and it remains to find a lower bound for this. Near $z=0$ we have $\si_1'(z,c)= (1-c)z\inv-1+\al\lan
r\ran +o(1)$, so if $c$ is close to 1 then $(1-c)/u_c^+=1-\al\lan r\ran +o(1)$. In particular this is bounded away from zero. Thus the first factor above is bounded away from zero. As for the factors in the products, observe first that each factor $1-z/s_j$ is bounded since $z$ and all factors $1/s_j$ are. For the others, we use again the fact that the curves lie in a region (). In any bounded subset of this region each $|1-z/u_i|\ge \eta n\inv$ for some $\eta>0$. (If $z$ is in a neighborhood of 0 this is clear since each $u_i<0$. Otherwise write $1-z/u_i=z(z\inv-u_i\inv)$.) Therefore the product of these is bounded below by a constant times $n^{-6}$. This completes the proof. $\square$
Now we can show that the curves $C^{\pm}(c)$ are not too badly behaved.
For some constant $A>0$ the length of $C^+(c)$ is $O(n^{A})$ and $$\int_{C^-(c)}|z|^{-2}\,|dz|=O(n^{A}).$$
It follows from Lemma 4.3 that $C^+(c)$ lies in a bounded set. For, this lemma implies that the vectors $1/\si_1'(z,c)$ point outward from a large circle $|z|=R$, and since by () $C^+(c)$ goes in the direction opposite to $1/\si_1'(z,c)$, a point of the curve starting at $u_c^+$ can never pass outside the circle. Also, some disc $|z|\le\dl(1-c)$ is disjoint from $C^+(c)$ because $1/\si_1'(z,c)$ points outward from a small enough circle $|z|=\dl(1-c)$ and so $C^+(c)$ cannot cross into it. It follows that $\si_1'(z,c)$, and so also $\si_1(z,c)$, is bounded on any portion of $C^+(c)$ close to $z=0$. A similar argument shows that some disc $|z-1|\le\dl$ lies entirely inside $C^+(c)$. Finally, we know that $u_c^-$ is within $O(n^{-1/2})$ of $-r_2\inv$ and if $\z =o(q_1)$ then $\si_1'(r_2\inv+\z,c)=\al/n\z+O(1)$. In particular $u_c^-$ lies in a region $|\z|\ge\dl n\inv$ for some $\dl>0$. Since also $\si_1''=-\al/n\z^2+O(1)$, by Lemma 3.6, we deduce that $\si_1''(z,c)=O(n)$ when $|z-u_c^-|\le \dl n\inv/2$, thus for such $z$ we have $\si_1(z,c)=\si_1(u_c^-,c)+O(n|z-u_c^-|^2)$. But it follows from Lemma 4.1 that $\si_1(u_c^-,c)-\si_1(u_c^+,c)>\ph_n^2$, and then, since $n\inv=o(\ph_n^2)$, $\si_1(u_c^+,c)<\si_1(z,c)$ for $|z-u_c^-|\le \dl n\inv/2$. As the maximum of $\si_1(z,c)$ on $C^+(c)$ occurs at $u_c^+$, this shows that the distance from $C^+(c)$ to $u_c^-$ is at least $\dl n\inv/2$. With these facts established we use the lower bound of Lemma 4.4, the length estimate () (extended as in the remark following it), and the obvious upper bound for $|\si_1(z,c)|$ in the region () to deduce that the length of $C^+(c)$ is $O(n^{A})$ for some constant $A$.
As for the integral over $C^-(c)$, we observe that, since $c<1$ and $cm$ is an integer, $1-c$ is at least a constant times $n\inv$. Since $C^-(c)$ lies outside a disc $|z|\le\dl(1-c)$, we have $z\inv=O(n)$ on $C^-(c)$. A lower bound for the distance from $C^-(c)$ to $u_c^+$ is obtained using the fact that $\si_1(u_c^-,c)-\si_1(u_c^+,c)>\ph_n^2$. Since $\si_1'$ is bounded in a neighborhood of $u_c^+$, we have $\si_1(u_c^-,c)>\si_1(z,c)$ for $|z-u_c^+|$ less than $\ph_n^2$ times a sufficiently small constant. This shows that $C^-(c)$ is at least this far from $u_c^+$. We apply the other bounds as before; we think of the integral over the portion of $C^-(c)$ outside a large circle as the sum of integrals over the arcs from $a_k$ to $a_{k+1}$ where $a_k$ is the point of $C^-(c)$ where $|z|=k$. Lemma 4.3 and () are used again here. $\square$
We evaluate $I^+(c)$ first when $c-c_n=O(\ph_n)$. Then $\si_{1zz}(u_{c}^+,c)=\al/\b^2+o(1)$ and so if we set $z=u_{c}^++\z$ we have $$\si_1(z,c)=\si_1(u_{c}^+,c)+{\al\ov2\b^2}(1+o(1))\z^2$$ as long as $\z=O(\ph_n)$. If $|\z|=\ph_n$ then the real part of the second term above is less than a negative constant times $\ph_n^2$ and, since this real part decreases as we go out $C^+(c)$, it is at least this negative whenever $|\z|\ge
\ph_n$. If we recall that this gets multiplied by $m$ in the exponent and that $C^+(c)$ has length at most a power of $n$ (by Lemma 4.5), we see that the contribution of this part of the integral is $O\left(e^{m\si(u_{c}^+,c)-n\ph_n^2+O(\log n)}\right)$. It follows from Lemma 3.3 and assumption (c) that with high probability $q_1\gg \log n/n^{1/2}$, and we could have chosen $\ph_n$ to satisfy this also. Thus, with error $o(e^{m\si(u_{c}^+,c)})$ the integral $I^+(c)$ is equal to $${1\ov2\pi
i}\int_{|\z|<\ph_n}(1+r_1(u_{c}^++\z))\,e^{(n/2\b^2)(1+o(1))\z^2}\,d\z\,
e^{m\si_1(u_{c}^+,c)}$$ (since $\al m=n$). Since $\ph_n\gg n^{-1/2}$, in the limit after making the variable change $\z\ra n^{-1/2}\z$ the integration can be taken over $(-i\iy,i\iy)$ (downward really, but we can reverse the directions of integrations), the linear factor $\z$ contributes zero, and by () $$1+r_1u_c^+=r_1\left(2\b
n^{-1/2}+{r_1\b^2\ov\al}(c-c_n)+o(n^{-1/2}+|c-c_n|)\right).$$ Thus the integral is asymptotically equal to $\b\sqrt{2\pi}in^{-1/2}$ times the above and, by (), $$\align
&I^+(c)={r_1\b^2\ov\sqrt{2\pi}}n\inv \left(2 +{r_1\b\ov\al}n^{1/2}
(c-c_n)+o(1+n^{1/2}|c-c_n|)\right)\\
&\qquad\qquad
\times \ps_1(-r_1,c)\inv\,e^{-{r_1 \b^2\ov2\al}m\left(c-c_n+
{2\al\ov r_1\b}(1+o(1))n^{-1/2}\right)^2}.
\endalign$$
This assumed that $c-c_n=O(\ph_n)$. For all $c\ge c_n$ we use the second part of Lemma 4.1 and again the fact that $C^+(c)$ has length at most a power of $n$. We deduce $$I^+(c)=O\left(\ps_1(-r_1\inv,c)\,e^{-\eta n^{1/2}\,(c-c_n)+O(\log n)}\right)$$ for $c\ge c_n$.
For the integral over $C^-(c)$ we use the last part of Lemma 4.1 and the second part of Lemma 4.5. These imply that the integral over $C^-$ is $$O\left(\ps_1(-r_1\inv,c)\inv\,e^{-n\ph_n^2+O(\log
n)}\right)=o(\ps_1(-r_1\inv,c)).$$
But our integral for $I^-(c)$ is [*not*]{} taken over $C^-(c)$. Recall that the original contour must have all the $-r_j\inv$ on the outside whereas $-r_1\inv$ is inside (more precisely, on the other side of) $C^-(c)$. Therefore if we deform the contour to $C^-(c)$ we pass through the pole at $-r_1\inv$. Thus $$I^-(c)=r_1\,\psi_1(-r_1\inv,c)\inv+o(\ps_1(-r_1\inv,c)).$$
Now recall that in $I^+(c)$ we set $c-c_n=h'+j+\ell$, in $I^-(c)$ we set $c-c_n=h'+\ell+k$ and then we sum over $\ell$ to get the matrix product. Recall also that $\ps_1(-r_1\inv,c)=\ps_1(-r_1\inv)\,(-r_1)^{-m(c-c_n)}$. The factors $(-r_1)^{-m(c-c_n)}$ in $I^+(c)$ and $(-r_1)^{m(c-c_n)}$ in $I^-(c)$ will combine to give $(-r_1)^{m(k-j)}$ which can be eliminated without affecting the determinant. It follows that we can modify the expressions for $I^\pm(c)$ by removing these factors. We can also remove the factors $\ps_1(-r_1\inv)^{\pm1}$ since they cancel upon multiplying. Thus our replacements are $$I^+(c)\ra {r_1\b^2\ov\sqrt{2\pi}}n\inv \left(2 +{r_1\b\ov\al}n^{1/2}
(c-c_n)\right)e^{-{r_1 \b^2\ov2\al}m\left(c-c_n+
{2\al\ov r_1\b}(1+o(1))n^{-1/2}\right)^2},$$ if $c-c_n=O(\ph_n)$, and $$I^+(c)\ra O\left(e^{-\eta n^{1/2}(c-c_n)+O(\log n)}\right),$$ if $c>c_n.$ Furthermore, $I^-(c)\ra r_1+o(1).$
Recall next that we set $h'=sn^{1/2}$ and in $I^+(c),\
c=c_n+sn^{1/2}+\lfloor xn^{1/2}\rfloor+\lfloor zn^{1/2}\rfloor$, so that $$c-c_n=(s+x+z+o(1))n^{1/2}/m=\al(s+x+z+o(1))n^{-1/2},$$ and eventually we multiply by $n$ because of the scaling. Take first the case $c-c_n=O(\ph_n)$, that is, $x+z=O(n^{1/2}\ph_n)$. Since $m=n/\al$ and $r_1\,\b=\tau\inv\,(1+o(1))$ the modified $I^+(c)$ equals $${r_1^2\b^3\ov\sqrt{2\pi}}n\inv (2\tau +s+x+z+o(1+x+z))
e^{-(2\tau +s+x+z+o(1))^2/2\tau^2}.$$ On the other hand, $I^-(c)$ is equal to $r_1$ with error $o(1)$. The result of multiplying these together, multiplying by $n$, and integrating with respect to $z$ over $(0,\,\infty)$, is asymptotically equal to $${1\ov\sqrt{2\pi}\tau}e^{-(2\tau +s+x)^2/2\tau^2}.\tag{\kernel}$$ This holds for $c-c_n=O(\ph_n)$. If $c-c_n\ge \ph_n$ we have, for our modified $I^+(c)$, the estimate $$O\left(e^{-\eta n^{1/2}(c-c_n)+O(\log n)}\right)=O(n\inv).$$ Integrating the square of this over a region $x+z=O(n^{1/2})$ will give $o(1)$.
It follows that the matrix product scales to the operator on $(0,\,\iy)$ with kernel (). This is a rank one kernel so its Fredholm determinant equals one minus its trace, which equals $${1\ov\sqrt{2\pi}\tau}\int_{-\iy}^{2\tau+s}e^{-x^2/2\tau^2}\,dx.$$ This establishes the convergence in probability statement of Theorem 1.
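As a purely illustrative numerical check of this last step (not part of the argument), one can discretize the rank one kernel on a truncated interval and compare its Fredholm determinant with the Gaussian probability above; the parameter values, the cutoff and the quadrature rule in the sketch below are arbitrary choices.

```python
# Illustrative check: the Fredholm determinant of the rank one kernel
# K(x,y) = (2*pi)^{-1/2} tau^{-1} exp(-(2*tau+s+x)^2 / (2*tau^2)) on (0, inf)
# equals 1 - trace(K), i.e. the Gaussian probability P(Z <= 2*tau + s), Z ~ N(0, tau^2).
# tau, s, the cutoff L_cut and the grid size n are arbitrary.
import numpy as np
from math import erf, sqrt, pi

tau, s = 1.0, 0.3
L_cut, n = 20.0, 400                      # truncate (0, inf) to (0, L_cut)
x, w = np.linspace(0.0, L_cut, n, retstep=True)

f = np.exp(-(2*tau + s + x)**2 / (2*tau**2)) / (sqrt(2*pi)*tau)
K = np.outer(f, np.ones(n)) * w           # Nystrom discretization of the rank one kernel
det_numeric = np.linalg.det(np.eye(n) - K)

gauss = 0.5*(1.0 + erf((2*tau + s) / (tau*sqrt(2.0))))   # the Gaussian probability above
print(det_numeric, gauss)                 # the two values should agree closely
```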
One could rightly object that to scale a product to a trace class operator we should know that each factor scales in Hilbert-Schmidt norm. In our case the second limiting kernel is a constant and the product is not even Hilbert-Schmidt. But we could have multiplied the kernel of the first operator by $(1+x)\,(1+z)$ and the kernel of the second operator by $(1+z)\inv\,(1+y)\inv$. This would not have affected the determinant of the product, both operators would have scaled in Hilbert-Schmidt norm and the product would have scaled in trace norm to the rank one kernel $${1\ov\sqrt{2\pi}\tau}\,e^{-(2\tau +s+x)^2/2\tau^2}{1+x\ov 1+y}$$ which has the same Fredholm determinant.
What is needed, and all that is needed, is an “almost sure” substitute for Lemma 3.6 under assumptions (a$'$) and (b$'$). We begin with a lemma on extreme order statistics of uniform random variables, part or all of which may well be in the literature.
Let $a>1$ be arbitrary. Then, almost surely, $$t_1\ge {\eta\ov n\,\log^an},\ \ \ \ {t_1\ov t_2}\le 1-{1\ov \log^{a}n},$$ for sufficiently large $n$. Here, $\eta$ is a positive constant depending on $a$.
We use the notation $t_{n,j}$ for our $t_j$ to display their dependence on $n$. We have $$\Pr(t_{n,1}\le\dl)=1-(1-\dl)^n\sim n\dl\ \ {\text {if}}\ n\dl=o(1).$$ In particular $$\Pr\left(t_{2^k,1}\le {2^{-k}\ov k^a}\right)\sim {1\ov k^a}.$$ It follows that, a.s. for sufficiently large $k$ we have $$t_{2^k,1}>{2^{-k}\ov k^a}.$$ Take any $n$ and let $k$ be such that $2^{k-1}<n\le 2^k$. From the above we have, a.s. for sufficiently large $n$ $$t_{n,1}\ge t_{2^k,1}>{2^{-k}\ov k^a}\ge{\eta\ov n \log^an},$$ for some $\eta$.
For the ratio we use the fact that $$\Pr\left({t_{n,j}\ov t_{n,j+1}}>1-\dl\right)=
1-(1-\dl)^j\sim j\dl\ \ {\text {if}}\
j\dl=o(1).\tag{6.1}$$ Now suppose that $${t_{n,1}\ov t_{n,2}}> 1-{1\ov \log^{a}n}\tag{6.2}$$ and let $k$ be such that $2^{k-1}<n\le 2^k$. Take any $J$ (which will eventually be of order $\log k$). Then there are two possibilities:
[(1)]{} $t_{2^k,j}\le t_{n,1}$ for all $j\le J$;
[(2)]{} $t_{2^k,j}>t_{n,1}$ for some $j\le J$.
Consider possibility (1) first. Let $G_n$ be the event that $\ t_{n,1}\le a\log\log n/n$. By Ex. 4.3.2 of \[Gal\], $P(G_n$ eventually$)=1.$ Moreover, $$\align
&\Pr(\{t_{2^k,j}\le t_{n,1}\ {\text {for\ all}}\ j\le J\}\cap G_n)
\le \Pr(t_{2^k,j}\le 2\log\log n/n\ {\text {for\ all}}\ j\le J)\\
&\le{{2^k}\choose{J}}\left(2\,{\log\log n\ov n}\right)^J
\le e^{J\log\log k-J\log J+AJ},
\endalign$$ for some constant $A$. If $J=B\log k$ then the bound above equals $e^{-B(\log B-A)\log k}$, so if we choose $B$ large enough the sum over $k$ of these probabilities will be finite. With this $J$, (1) can therefore a.s. occur for only finitely many $k$.
Next consider possibility (2) and let $j$ be the smallest integer $\le J$ such that $t_{2^k,j}>t_{n,1}$. Then $t_{2^k,j}\le t_{n,2}$ and $t_{n,1}=t_{2^k,\ell}$ for some $\ell<j$. It follows that $t_{2^k,j-1}/t_{2^k,j}> t_{n,1}/t_{n,2}$ and by (6.2) this is at least $1-C/k^{a}$, for some constant $C$ (which will change from appearance to appearance). Therefore, by (6.1), $$\align
&P((6.2)\text{ and }(2)\text{ both happen})\\
&\le
P(t_{2^k,j-1}/t_{2^k,j}>1-C/k^a\text{ for some }j\le J)\le
C J^2/k^a\le C\log^2k/k^a.
\endalign$$ It follows that (2) and (6.2) can happen together only for finitely many $n$. The upshot is that a.s. the inequality (6.2) can occur for only finitely many $n$, which completes the proof. $\square$
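The two almost sure bounds of Lemma 6.1 are easy to observe empirically; the following small simulation (with arbitrary choices of $a$, $\eta$, sample sizes and trial counts) is only an illustration and plays no role in the proof.

```python
# Empirical illustration of Lemma 6.1 for uniform order statistics t_1 <= t_2 <= ...:
# the events  t_1 >= eta/(n log^a n)  and  t_1/t_2 <= 1 - 1/log^a n  should hold
# in the vast majority of samples once n is large.  All parameter choices are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
a, eta, trials = 1.5, 0.01, 200
for n in (10**3, 10**4, 10**5):
    lower_ok = ratio_ok = 0
    for _ in range(trials):
        t1, t2 = np.sort(rng.random(n))[:2]          # two smallest order statistics
        lower_ok += (t1 >= eta / (n * np.log(n)**a))
        ratio_ok += (t1 / t2 <= 1 - 1/np.log(n)**a)
    print(n, lower_ok / trials, ratio_ok / trials)   # fractions close to 1
```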
We are now ready to prove our substitute for Lemma 3.6. Recall that we can set $q_j=G^{-1}(t_j)$. The assumption (a$'$) implies that $G$ is continuous near 0, so that $G(G^{-1}(x))=x$ for small $x$.
Suppose (a$'$) and (b$'$) are satisfied. Then there exists a sequence $\ph_n\gg \log n/n^{1/2}$ such that a.s. for any sequence $\{v_n\}$ lying in the disc with diameter the real interval $[-r_1\inv-O(\ph_n),\,\xi]$ we have $$\lim_{n\ra\iy}{1\ov
n}\sum_{j=2}^n{r_j\ov(1+r_jv_n)^2}=\lan{r\ov(1+r\xi)^2}\ran.$$
From the proof of Lemma 3.6 we see that we want to show that, for some sequence $\ph_n$ as described, we have a.s. $$\lim_{n\ra\iy}{1\ov n}\sum_{j=2}^n{q_1\ov q_j\,(q_j-(q_1+O(\ph_n)))^2}=0.$$ Assumption (a$'$) implies that $$\frac xy\ge \(\frac{G^{-1}(x)}{G^{-1}(y)}\)^\ga,$$ when $x\le y$ are small enough. Therefore, it follows from the second part of Lemma 6.1, that a.s. for large $n$, $${q_1\ov q_2}\le 1-{\eta\ov \log^{a}n}\tag{6.3}$$ for another constant $\eta>0$. Set $$\ps_n={1\ov2}{\eta\ov \log^{a}n}q_2.$$
Let us show that $\ps_n\gg \log n/n^{1/2}$. Assumption (a$'$) implies that $G^{-1}(x)$ is at most a constant times $x^{1/\g}$, thus the fact that $t_1=O(\log\log n/n)$ shows that $q_1$ is at most a constant times $(\log\log n/n)^{1/\ga}$. Furthermore, assumption (b$'$) gives, with a slightly smaller $\nu$, $x^2\gg G(x)\log^{\nu} x\inv$. Applying this with $x=q_1=G^{-1}(t_1)$ and using the first part of Lemma 6.1 gives $$q_1^2\gg {1\ov n \log^a n}\log^{\nu}q_1\inv.$$ We therefore deduce that $$q_1^2\gg {1\ov n} \log^{\nu-a}n\tag{6.4}$$ for a slightly smaller $\nu$ than in (b$'$). By (6.3), the same holds for $q_2$ and so $$\ps_n^2\gg{1\ov n}\log^{\nu-3a}n$$ and $\ps_n\gg \log n/n^{1/2}$ as long as $\nu-3a>2$. Since $a>1$ is arbitrary the requirement becomes $\nu>5$. But from (a$'$) and (b$'$) we see that necessarily $\ga>2$, so that $\nu>8$.
If $j\ge 2$, then (6.3) and the inequality $q_2\le q_j$ imply that $$q_j-(q_1+\ps_n)\ge{1\ov2}{\eta\ov \log^{a}n}q_j.$$
We take for $\{\ph_n\}$ any sequence satisfying $${\log n\ov n^{1/2}}\ll\ph_n\ll\ps_n.$$ At this point we follow the proof of Lemma 3.6 to see that the expression $${\log^{2a}n\ov n}\sum_{j=2}^n{q_1\ov q_j^3}\tag{6.5}$$ needs to go to 0 a.s. to conclude the proof of this lemma. This is what we will demonstrate.
For any $k_n$, if we separate the sum in (6.5) over $j\le k_n$ from the sum over $j>k_n$, we see that (6.5) is at most $${\log^{2a}n\ov n\,q_1^2}\,k_n+\log^{2a}n\,{q_1\ov q_{k_n+1}}\,
{1\ov n}\sum_{j=1}^n{1\ov q_j^2}.\tag {6.6}$$
We first determine $k_n$ so that the second term in (6.6) goes a.s. to 0. By the strong law of large numbers, $n^{-1}\sum q_j^{-2}\to \lan q^{-2}\ran$ a.s., so $\log^{2a}n\,{q_1/q_{k_n+1}}$ needs to go to 0. We have, for each $\dl>0$, $$\align
&\Pr \left(\log^{2a}n{q_1\ov q_{k_n+1}}\ge\dl\right)=
\Pr \left({G^{-1}(t_1)\ov G^{-1}(t_{k_n+1})}\ge{\dl\ov \log^{2a}n}\right)\\
&\le \Pr \left({t_1\ov t_{k_n+1}}\ge\left({\dl\ov \log^{2a}n}\right)^\ga\right)
=\left(1-\left({\dl\ov \log^{2a}n}\right)^\ga\right)^{k_n}\\
&\le e^{-\left({\dl\ov \log^{2a}n}\right)^\ga k_n}.
\endalign$$ This is summable over $n$ if we choose $$k_n=\lfloor\log^a n\left({\log^{2a}n}\right)^\ga\rfloor+1.$$ With this choice, the second summand in (6.6) therefore goes to 0 a.s.
On the other hand, the first term in (6.6) is with the same choice of $k_n$ at most a constant times $${\log^{(2\ga +3)a}n\ov n\,q_1^2},$$ and from (6.4) this is $o(1)$ times $\log^{(2\ga+4)a-\nu}n$. Since $a>1$ was arbitrary and $\nu>2\ga+4$, we can make $(2\ga+4)a-\nu<0$ and then the first summand in (6.6) goes to 0 a.s. This completes the proof. $\square$
With this lemma in place of Lemma 3.6 the reader will find that all subsequent limits and estimates in Sections 4 and 5 will hold almost surely, thus giving the second statement of the theorem. The reason our sequence had to satisfy $\ph_n\gg \log n/n^{1/2}$ is that errors of the form $O\left(e^{-n\ph_n^2+O(\log n)}\right)$ appeared in the evaluation of $I^{\pm}(c)$ and these had to be $o(1)$.
REFERENCES
[**\[BCKM\]**]{} J.-P. Bouchaud, L. F. Cugliandolo, J. Kurchan, M. Mézard, [ *Out of equilibrium dynamics in spin–glasses and other glassy systems.*]{} In “Spin Glasses and Random Fields,” A. P. Young, editor, World Scientific, 1998.
[**\[BDJ\]**]{} J. Baik, P. Deift, K. Johansson, [ *On the distribution of the length of the longest increasing subsequence of random permutations.*]{} J. Amer. Math. Soc. 12 (1999), 1119–1178.
[**\[BFL\]**]{} I. Benjamini, P. A. Ferrari, C. Landim, [ *Asymptotic conservative processes with random rates.*]{} Stochastic Process. Appl. 61 (1996), 181–204.
[**\[BR\]**]{} J. Baik, E. M. Rains, [*Limiting distributions for a polynuclear growth model with external sources.*]{} J. Statist. Phys. 100 (2000), 523–541.
[**\[FIN1\]**]{} L. R. G. Fontes, M. Isopi, C. M. Newman, [ *Random walks with strongly inhomogeneous rates and singular diffusions: convergence, localization and aging in one dimension.*]{} Preprint (ArXiv: math.PR/0009098).
[**\[FIN2\]**]{} L. R. G. Fontes, M. Isopi, C. M. Newman, [ *Chaotic time dependence in a disordered spin system.*]{} Probab. Theory Relat. Fields 115 (1999), 417–443.
[**\[FINS\]**]{} L. R. G. Fontes, M. Isopi, C. M. Newman, D. L. Stein, [*Aging in 1D Discrete Spin Models and Equivalent Systems.*]{} Phys. Rev. Lett. 87 (2001), 110201.
[**\[Gal\]**]{} J. Galambos, “The Asymptotic Theory of Order Statistics.” Second edition. Krieger, 1987.
[**\[GG\]**]{} J. Gravner, D. Griffeath, [*Cellular automaton growth on $Z^2$: theorems, examples, and problems.*]{} Adv. in Appl. Math. 21 (1998), 241–304.
[**\[Gri1\]**]{} D. Griffeath, “Additive and Cancellative Particle Systems.” Lecture Notes in Mathematics 724, Springer, 1979.
[**\[Gri2\]**]{} D. Griffeath, [*Primordial Soup Kitchen.*]{} [psoup.math.wisc.edu]{}
[**\[Gra\]**]{} J. Gravner, [*Recurrent ring dynamics in two–dimensional excitable cellular automata.*]{} J. Appl. Prob. 36 (1999), 492–511.
[**\[GTW1\]**]{} J. Gravner, C. A. Tracy, H. Widom, [*Limit theorems for height fluctuations in a class of discrete space and time growth models.* ]{} J. Statist. Phys. 102 (2001), 1085–1132.
[**\[GTW2\]**]{} J. Gravner, C. A. Tracy, H. Widom, [*A growth model in a random environment.*]{} To appear in Ann. Probab. (ArXiv: math.PR/0011150).
[**\[Joh1\]**]{} K. Johansson, [*Shape fluctuations and random matrices.*]{} Commun. Math. Phys. 209 (2000), 437–476.
[**\[Joh2\]**]{} K. Johansson, [ *Discrete orthogonal polynomial ensembles and the Plancherel measure.*]{} Ann. Math. 153 (2001), 259–296.
[**\[Lig\]**]{} T. Liggett, “Interacting Particle Systems.” Springer–Verlag, 1985.
[**\[Mea\]**]{} P. Meakin, “Fractals, scaling and growth far from equilibrium.” Cambridge University Press, 1998.
[**\[MPV\]**]{} M. Mézard, G. Parisi, M. A. Virasoro, “Spin Glass Theory and Beyond.” World Scientific, 1987.
[**\[NSt1\]**]{} C. M. Newman, D. L. Stein, [*Equilibrium pure states and nonequilibrium chaos.*]{} J. Statist. Phys. 94 (1999), 709–722.
[**\[NSt2\]**]{} C. M. Newman, D. L. Stein, [ *Realistic spin glasses below eight dimensions: a highly disordered view.*]{} Phys. Rev. E (3) 63 (2001), no. 1, part 2, 016101, 9 pp.
[**\[NSv\]**]{} P. Norblad, P. Svendlindh, [*Experiments on spin glasses.*]{} In “Spin Glasses and Random Fields,” A. P. Young, editor, World Scientific, 1998.
[**\[NV\]**]{} C. M. Newman, S. B. Volchan, [ *Persistent survival of one-dimensional contact processes in random environments.* ]{} Ann. Probab. 24 (1996), 411–421.
[**\[PS\]**]{} M. Prähofer, H. Spohn, [*Universal distribution for growth processes in $1+1$ dimensions and random matrices*]{}. Phys. Rev. Lett. 84 (2000), 4882–4885.
[**\[PS2\]**]{} M. Prähofer, H. Spohn, [*Scale Invariance of the PNG Droplet and the Airy Process.*]{} Preprint (ArXiv: math.PR/0105240).
[**\[Rai\]**]{} E. M. Rains, [*A mean identity for longest increasing subsequence problems.*]{} Preprint (arXiv: math.CO/0004082).
[**\[Sep1\]**]{} T. Seppäläinen, [*Increasing sequences of independent points on the planar lattice.*]{} Ann. Appl. Probab. 7 (1997), 886–898.
[**\[Sep2\]**]{} T. Seppäläinen, [ *Exact limiting shape for a simplified model of first-passage percolation on the plane.*]{} Ann. Probab. 26 (1998), 1232–1250.
[**\[SK\]**]{} T. Seppäläinen, J. Krug, [*Hydrodynamics and platoon formation for a totally asymmetric exclusion model with particlewise disorder.*]{} J. Statist. Phys. 95 (1999), 525–567.
[**\[Sos\]**]{} A. Soshnikov, [ *Universality at the edge of the spectrum in Wigner random matrices.*]{} Commun. Math. Phys. 207 (1999), 697–733.
[**\[Tal\]**]{} M. Talagrand, [*Huge random structures and mean field models for spin glasses.*]{} Doc. Math., Extra Vol. I (1998), 507–536.
[**\[TW1\]**]{} C. A. Tracy, H. Widom, [*Level spacing distributions and the Airy kernel.*]{} Commun. Math. Phys. 159 (1994), 151–174.
[**\[TW2\]**]{} C. A. Tracy, H. Widom, [*Universality of the Distribution Functions of Random Matrix Theory. II.*]{} In “Integrable Systems: From Classical to Quantum,” J. Harnad, G. Sabidussi and P. Winternitz, editors, American Mathematical Society, Providence, 2000. Pages 251–264.
---
abstract: 'We prove analytical results showing that decoherence can be useful for mixing time in a continuous-time quantum walk on finite cycles. This complements the numerical observations by Kendon and Tregenna ([*Physical Review A*]{} [**67**]{} (2003), 042315) of a similar phenomenon for discrete-time quantum walks. Our analytical treatment of continuous-time quantum walks includes a continuous monitoring of all vertices that induces the decoherence process. We identify the dynamics of the probability distribution and observe how mixing times undergo the transition from quantum to classical behavior as our decoherence parameter grows from zero to infinity. Our results show that, for small rates of decoherence, the mixing time improves linearly with decoherence, whereas for large rates of decoherence, the mixing time deteriorates linearly towards the classical limit. In the middle region of decoherence rates, our numerical data confirms the existence of a unique optimal rate for which the mixing time is minimized.'
author:
- '[Leonid Fedichkin]{}[^1]'
- '[Dmitry Solenov]{}[^2]'
- '[Christino Tamon]{}[^3]'
title: ' Mixing and Decoherence in Continuous-Time Quantum Walks on Cycles '
---
Introduction
============
The study of quantum walks on graphs has gained considerable interest in quantum computation due to its potential as an algorithmic technique and as a more natural physical model for computation. As in the classical case, there are two important models of quantum walks, namely, the discrete-time walks [@adz93; @m96; @aakv01; @abnvw01], and the continuous-time walks [@fg98; @cfg02; @ccdfgs03; @cg03]. Excellent surveys of both models of quantum walks are given in [@kendon; @kempe]. In this work, our focus will be on continuous-time quantum walks on graphs and their dynamics under decoherence.
Some promising non-classical dynamics of continuous-time quantum walks were shown in [@mr02; @k03; @ccdfgs03]. In [@mr02], Moore and Russell proved that the continuous-time quantum walk on the $n$-cube achieves (instantaneous) uniform mixing in time $O(n)$, in contrast to the $\Omega(n \log n)$ time needed in the classical random walk. Kempe [@k03] showed that the hitting time between two diametrically opposite vertices on the $n$-cube is $n^{O(1)}$, as opposed to the well-known $\Omega(2^{n})$ classical bound (related to the Ehrenfest urn model). In [@ccdfgs03], an interesting algorithmic application of a continuous-time quantum walk on a specific blackbox search problem was given. This latter result relied on the exponentially fast hitting time of these quantum walks on path-collapsible graphs.
Further investigations on mixing times for continuous-time quantum walks were given in [@abtw03; @gw03; @aaht03]. These works prove non-uniform (average) mixing properties for complete multipartite graphs, group-theoretic circulant graphs, and the Cayley graph of the symmetric group. The latter graph was of considerable interest due to its potential connection to the Graph Isomorphism problem, although Gerhardt and Watrous’s result in [@gw03] strongly discouraged natural approaches based on quantum walks. All of these cited works have focused on unitary quantum walks, where we have a closed quantum system without any interaction with its environment.
A more realistic analysis of quantum walks that takes into account the effects of decoherence was initiated by Kendon and Tregenna [@kt03]. In that work, Kendon and Tregenna made a striking numerical observation that a small amount of decoherence can be useful to improve the mixing time of discrete quantum walks on cycles. In this paper, we provide an analytical counterpart to Kendon and Tregenna’s result for the continuous-time quantum walk on cycles. This shows that the Kendon-Tregenna phenomenon is not merely an artifact of the discrete-time model, but rather suggests a fundamental property of decoherence in quantum walks. A similarly realistic treatment of the hypercube was provided in recent work by Alagić and Russell [@ar05]. Developing algorithmic applications that exploit this [*positive*]{} effect of decoherence on quantum mixing time provides an interesting challenge for future research.
In this work, we prove that Kendon and Tregenna’s observation holds in the continuous-time quantum walk model. Our analytical results show that decoherence can improve the mixing time of continuous-time quantum walks on cycles. We consider an analytical model due to Gurvitz [@g97] that incorporates the continuous monitoring of all vertices that induces the decoherence process. We identify the dynamics of the probability distribution and observe how mixing times undergo a transition from quantum to classical behavior as the decoherence parameter grows from $0$ to $\infty$. For small rates of decoherence, we observe that mixing times improve linearly with decoherence, whereas for large rates, mixing times deteriorate linearly towards the classical limit. In the middle region of decoherence rates, we give numerical data that confirms the existence of a unique optimal rate for which the mixing time is minimal.
Preliminaries
=============
Continuous-time quantum walks are well-studied in the physics literature (see, e.g., [@fls65], Chapters 13 and 16), but mainly over constant-dimensional lattices. They were studied recently by Farhi, Gutmann, and Childs [@fg98; @cfg02] in the algorithmic context. Let $G = (V,E)$ be an undirected graph with adjacency matrix $A_{G}$. The Laplacian of $G$ is defined as $\mathcal{L} = A_{G}-D$, where $D$ is the diagonal matrix whose entry $D_{jj}$ is the degree of vertex $j$[^4]. If the time-dependent state of the quantum walk is ${| \psi(t) \rangle}$, then, by Schrödinger’s equation, we have $$i \hslash {\frac{\mathsf{d}}{\mathsf{d}t} {| \psi(t) \rangle}} = \mathcal{L} {| \psi(t) \rangle}.$$ The solution of the above equation is ${| \psi(t) \rangle} = e^{-it \mathcal{L}} {| \psi(0) \rangle}$ (assuming $\hslash = 1$).
We consider the $N$-vertex cycle graph $C_{N}$ whose adjacency matrix $A$ is a circulant matrix. The eigenvalues of $A$ are $\lambda_{j} = 2\cos(2\pi j/N)$ with corresponding eigenvectors ${| v_{j} \rangle}$, where ${\langle k | v_{j} \rangle} = \frac{1}{\sqrt{N}}\exp(-2\pi i jk/N)$, for $j = 0,1,\ldots,N-1$. So, if the initial state of the quantum walk is ${| \psi(0) \rangle} = {| 0 \rangle}$, then ${| \psi(t) \rangle} = e^{-it L}{| 0 \rangle}$. After decomposing ${| 0 \rangle}$ in terms of the eigenvectors ${| v_{j} \rangle}$, we get $${| \psi(t) \rangle} = e^{2it} \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} e^{-it\lambda_{j}}{| v_{j} \rangle}.$$ The scalar term $e^{2it}$ is an irrelevant phase factor which can be ignored.
If ${| \psi(t) \rangle}$ represents the state of the particle at time $t$, let $P_{j}(t) = |{\langle j | \psi(t) \rangle}|^{2}$ be the probability that the particle is at vertex $j$ at time $t$. Let $P(t)$ be the (instantaneous) probability distribution of the quantum walk on $G$. The [*average*]{} probability of vertex $j$ over the time interval $[0,T]$ is defined by $\overline{P}_{j}(T) = \frac{1}{T} \int_{0}^{T} P_{j}(t) \ {\mathsf{d}t}$. Let $\overline{P}(T)$ be the (average) probability distribution of the quantum walk on $G$ over the time interval $[0,T]$.
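The spectral formula above is easy to evaluate directly. The following minimal sketch (with $N$ and $t$ chosen arbitrarily) computes the instantaneous distribution $P(t)$ of the unitary walk on $C_{N}$ using the adjacency matrix, which differs from the Laplacian evolution only by the irrelevant global phase noted above.

```python
# Minimal sketch: instantaneous distribution P_j(t) of the unitary continuous-time
# quantum walk on the N-cycle started at vertex 0, evaluated in the circulant
# eigenbasis <k|v_j> = exp(-2*pi*i*j*k/N)/sqrt(N).  N and t are arbitrary choices.
import numpy as np

def cycle_walk_distribution(N, t):
    j = np.arange(N)
    lam = 2.0*np.cos(2.0*np.pi*j/N)                                  # eigenvalues of the adjacency matrix
    V = np.exp(-2j*np.pi*np.outer(np.arange(N), j)/N) / np.sqrt(N)   # V[k, j] = <k|v_j>
    amp = V @ (np.exp(-1j*t*lam) * np.conj(V[0, :]))                 # <k|psi(t)>, with <v_j|0> = conj(V[0, j])
    return np.abs(amp)**2                                            # P_j(t); sums to 1

P = cycle_walk_distribution(N=8, t=3.0)
print(P, P.sum())
```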
To define the notion of mixing times of continuous-time quantum walks, we use the [*total variation*]{} distance between distributions $P$ and $Q$ that is defined as $||P - Q|| = \sum_{s} |P(s) - Q(s)|$. For $\varepsilon \ge 0$, the $\varepsilon$-mixing time $T_{mix}(\varepsilon)$ of a continuous-time quantum walk is the minimum time $T$ so that $||P(T) - U_{G}|| \le \varepsilon$, where $U_{G}$ is the uniform distribution over $G$, or $$T_{mix}(\varepsilon) \ = \
\min\left\{T \ : \ \sum_{j=0}^{N-1} \left|P_{j}(T) - \frac{1}{N}\right| \le \varepsilon \right\}.$$
#### Gurvitz’s Model
To analyze the decoherent continuous-time quantum walk on $C_{N}$, we use an analytical model developed by Gurvitz [@g97; @gfmb03]. In this model, we consider the density matrix $\rho(t) = {| \psi(t) \rangle}{\langle \psi(t) |}$ and study its evolution under a continuous monitoring of all vertices of $C_{N}$. Note that in this case, the probability distribution $P(t)$ of the quantum walk is specified by the diagonal elements of $\rho(t)$, that is, $P_{j}(t) = \rho_{j,j}(t)$.
The time-dependent non-unitary evolution of $\rho(t)$ in the Gurvitz model is given by (see [@smallG]): $$\label{drdt}
{\frac{\mathsf{d}}{\mathsf{d}t} \rho_{j,k}(t)}
= i \ \left[\frac{\rho _{j,k+1} - \rho_{j+1,k} - \rho_{j-1,k} + \rho_{j,k-1}}{4}\right]
- \Gamma \left({1 - \delta_{j,k}}\right)\rho_{j,k}$$ Our subsequent analysis will focus on the variable $S_{j,k}$ defined as $$\label{6}
S_{j,k} = i^{k-j} \rho_{j,k}$$ The above substitution reduces the system differential equations with complex coefficients into the following system with only real coefficients: $$\label{dSdt}
{\frac{\mathsf{d}}{\mathsf{d}t} S_{j,k}} = \frac{1}{4}\left({S_{j,k+1} + S_{j+1,k} - S_{j-1,k} - S_{j,k-1}}\right) -
\Gamma \left({1 - \delta_{j,k}}\right) S_{j,k}.$$ Throughout the rest of this paper, we will focus on analyzing Equation (\[dSdt\]) for various values of the decoherence rate $\Gamma$. One can note that, if $\Gamma=0$, there is an exact mapping of the quantum walk on a cycle onto a classical random walk on a two-dimensional torus. If $\Gamma \ne 0$, there is still an exact mapping of the quantum walk on a cycle onto some classical dynamics on a directed toric graph. This observation may be useful in estimating quantum speedup in other systems.
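For readers who want to reproduce the transition numerically, the sketch below integrates Equation (\[dSdt\]) from the initial condition $S_{j,k}(0)=\delta_{j,0}\delta_{k,0}$ (here via a matrix exponential) and records the first time at which the total variation distance to the uniform distribution drops below $\varepsilon$; the values of $N$, $\varepsilon$, the time step and the set of $\Gamma$ values are arbitrary illustrative choices.

```python
# Sketch: evolve Eq. (dSdt) on the N-cycle and estimate the epsilon-mixing time
# as a function of the decoherence rate Gamma.  N, epsilon, the time step and the
# Gamma values are arbitrary illustrative choices.
import numpy as np
from scipy.linalg import expm

def generator(N, Gamma):
    """Matrix form of the right-hand side of Eq. (dSdt), acting on S flattened to length N^2."""
    off = 1.0 - np.eye(N)
    M = np.zeros((N*N, N*N))
    for col in range(N*N):
        E = np.zeros(N*N); E[col] = 1.0
        S = E.reshape(N, N)
        dS = 0.25*(np.roll(S, -1, axis=1) + np.roll(S, -1, axis=0)
                   - np.roll(S, 1, axis=0) - np.roll(S, 1, axis=1)) - Gamma*off*S
        M[:, col] = dS.ravel()
    return M

def mixing_time(N, Gamma, eps=0.05, dt=0.25, t_max=20000.0):
    P_step = expm(dt*generator(N, Gamma))            # propagator over one time step
    S = np.zeros(N*N); S[0] = 1.0                    # S_{j,k}(0) = delta_{j,0} delta_{k,0}
    t = 0.0
    while t < t_max:
        S = P_step @ S
        t += dt
        tv = np.abs(S.reshape(N, N).diagonal() - 1.0/N).sum()   # P_j(t) = S_{j,j}(t)
        if tv <= eps:
            return t
    return np.inf

N = 10
for Gamma in (0.02, 0.1, 0.5, 2.0, 10.0):
    print(Gamma, mixing_time(N, Gamma))
```

For the smallest and largest rates the measured times can be compared with the analytical bounds derived in the following sections.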
Small Decoherence
=================
We consider the decoherent continuous-time quantum walks when the decoherence rate $\Gamma$ is small. More specifically, we consider the case when $\Gamma N \ll 1$. First, we rewrite (\[dSdt\]) as the perturbed linear operator equation $$\label{eqn:operator}
{\frac{\mathsf{d}}{\mathsf{d}t} S(t)} = ({\mathbb{L}}+ {\mathbb{U}}) \ S(t),$$ where the linear operators ${\mathbb{L}}$ and ${\mathbb{U}}$ are defined as $$\begin{aligned}
\label{89}
{\mathbb{L}}_{(\alpha,\beta)}^{(\mu,\nu)}
& = &
\frac{1}{4}\left( {\delta_{\alpha,\mu} \delta_{\beta,\nu - 1} + \delta_{\alpha,\mu - 1} \delta_{\beta,\nu}
- \delta_{\alpha,\mu} \delta_{\beta,\nu + 1} - \delta_{\alpha,\mu + 1} \delta_{\beta,\nu}} \right) \\
{\mathbb{U}}_{(\alpha,\beta)}^{(\mu,\nu)}
& = &
- \Gamma \delta_{\alpha,\mu} \delta_{\beta,\nu} \left( {1 - \delta_{\alpha,\beta} } \right).\end{aligned}$$ Here, we consider ${\mathbb{L}}$ as a $N^2 \times N^2$ matrix where ${\mathbb{L}}_{(\alpha,\beta)}^{(\mu,\nu)}$ is the entry of ${\mathbb{L}}$ indexed by the row index $(\mu,\nu)$ and the column index $(\alpha,\beta)$. We view ${\mathbb{U}}$ in a similar manner. The solution of (\[eqn:operator\]) is given by $S(t) = e^{t({\mathbb{L}}+ {\mathbb{U}})}S(0)$, or $$\label{7}
{\frac{\mathsf{d}}{\mathsf{d}t} S_{\alpha,\beta}} =
\sum_{\mu,\nu = 0}^{N-1} \left({{\mathbb{L}}_{(\alpha,\beta)}^{(\mu,\nu)}
+ {\mathbb{U}}_{(\alpha,\beta)}^{(\mu,\nu)}} \right) S_{\mu,\nu},$$ where $0 \le \alpha,\beta,\mu,\nu \le N-1$. The initial conditions are $$\label{10}
\rho_{\alpha,\beta}(0) = S_{\alpha,\beta}(0) = \delta_{\alpha,0} \delta_{\beta,0}.$$
#### Perturbation Theory
We will use tools from the perturbation theory of linear operators (see [@kato; @horn-johnson]). To analyze Equation (\[eqn:operator\]), we find the eigenvalues and eigenvectors of ${\mathbb{L}}+ {\mathbb{U}}$. Suppose that $V$ is some eigenvector of ${\mathbb{L}}$ with eigenvalue $\lambda$, that is, ${\mathbb{L}}V = \lambda V$. Considering the perturbed eigenvalue equation $$({\mathbb{L}}+ {\mathbb{U}}) (V + \tilde{V}) = (\lambda + \tilde{\lambda}) \ (V + \tilde{V}),$$ we drop the second-order terms ${\mathbb{U}}\tilde{V}$ and $\tilde{\lambda} \tilde{V}$ to obtain the first-order approximation $$\label{eqn:perturbed}
{\mathbb{U}}\ V + {\mathbb{L}}\ \tilde{V} \ = \ \tilde{\lambda} V \ + \ \lambda \tilde{V}.$$ By taking the inner product of the above equation with $V^{\dagger}$, and since ${\mathbb{L}}$ is Hermitian, we see that the eigenvalue perturbation term $\tilde{\lambda}$ is defined as $$\tilde{\lambda} \ = \ V^{\dagger} {\mathbb{U}}V.$$
Let $\mathcal{E}_{\lambda}$ be an eigenspace corresponding to the eigenvalue $\lambda$ and let $\{V_{k} : \ k \in I\}$ be a set of eigenvectors of ${\mathbb{L}}$ that spans $\mathcal{E}_{\lambda}$. Let $V = \sum_{k \in I} c_{k} V_{k}$ be a unit vector in $\mathcal{E}_{\lambda}$. Using Equation (\[eqn:perturbed\]), we have $$\sum_{k \in I} c_{k} {\mathbb{U}}V_{k} = \tilde{\lambda} \sum_{k \in I} c_{k} V_{k},$$ and after taking the inner product with $V_{j}^{\dagger}$, we get $\sum_{k \in I} c_{k} V_{j}^{\dagger} {\mathbb{U}}V_{k} = \tilde{\lambda} c_{j}$. If the linear combination is uniform, that is $c_{j} = c$, for all $j$, then the eigenvalue perturbation $\tilde{\lambda}$ is simply given by $$\label{eqn:tilde-lambda}
\tilde{\lambda} \ = \
\sum_{k \in I} V_{j}^{\dagger} {\mathbb{U}}V_{k}.$$ In the case when $\mathcal{E}_{\lambda}$ is one-dimensional or the matrix ${\mathbb{U}}$ is diagonal under all similarity actions $V_{j}^{\dagger} {\mathbb{U}}V_{k}$, for $j, k \in I$, the correction to the eigenvalues is given by the diagonal term $\tilde{\lambda} = V^{\dagger} {\mathbb{U}}V$. Otherwise, we need to solve the system described by $det({\mathbb{U}}_{\lambda} - \tilde{\lambda} I) = 0$.
To analyze the equation $S'(t) = ({\mathbb{L}}+ {\mathbb{U}})S(t)$, for which the solution is $S(t) = \exp[t({\mathbb{L}}+ {\mathbb{U}})] S(0)$, we express $S(0)$ as a linear combination of the eigenvectors of ${\mathbb{L}}+ {\mathbb{U}}$, say $\{V_{j} + \tilde{V}_{j}\}$. In our case, the evolution of $S(t)$ can be described using the eigenvectors of ${\mathbb{L}}$, since the contribution of the terms $\tilde{V}_{j}$ are negligible. If $S(0) = \sum_{j} c_{j} V_{j}$, where $V_{j}$ are the eigenvectors of ${\mathbb{L}}$, then $$S(t) = \sum_{\lambda} e^{t(\lambda + \tilde{\lambda})} \sum_{j \in \mathcal{E}_{\lambda}} c_{j} \ V_{j}.$$
#### Spectral Analysis
The unperturbed linear operator ${\mathbb{L}}$ has eigenvalues $$\label{13}
\lambda_{(m,n)}
= i \ \sin\left(\frac{{\pi (m + n)}}{N}\right) \cos\left(\frac{{\pi (m - n)}}{N}\right)
$$ with corresponding eigenvectors $$\label{14}
V_{(\mu,\nu)}^{(m,n)} = \frac{1}{N} \exp\left(\frac{2\pi i}{N}(m\mu + n\nu)\right).$$ Thus, for $0 \le m,n \le N-1$, we have $$\label{12}
\sum_{\mu,\nu = 0}^{N-1} {{\mathbb{L}}_{(\alpha,\beta)}^{(\mu,\nu)} V_{(\mu,\nu)}^{(m,n)} }
= \lambda_{(m,n)} V_{(\alpha,\beta)}^{(m,n)}.$$ To analyze the effects of ${\mathbb{U}}$, we compute the similarity actions of the eigenvectors on ${\mathbb{U}}$: $$\begin{aligned}
{\mathbb{U}}_{(m,n),(m',n')}
& = & (V^{(m,n)})^{\dagger} {\mathbb{U}}V^{(m',n')} \\
& = & - \frac{\Gamma}{N^{2}} \sum_{(a,b)} (1 - \delta_{a,b}) \exp\left(\frac{2\pi i}{N}[(m' - m)a + (n' - n)b]\right) \\
\label{eqn:U-matrix}
& = & - \Gamma \ \delta_{m',m} \ \delta_{n',n}
+ \frac{\Gamma}{N} \ \delta_{[(m'-m)+(n'-n)] ~(\mbox{\scriptsize mod $N$}), 0}\end{aligned}$$ where $0 \le m,m',n,n' \le N-1$.\
The eigenvalues $\lambda_{(m,n)}$ of ${\mathbb{L}}$ have the following important [*degeneracies*]{}:
1. Diagonal ($m = n$): $\lambda_{(m,m)} = i \ \sin(2\pi m/N)$.\
Each of these eigenvalues has multiplicity $2$, by the symmetries of the sine function. This degeneracy is absent in our case, since ${\mathbb{U}}$ is diagonal over the corresponding eigenvectors. For example, ${\mathbb{U}}_{(m,n),(N/2-m,N/2-m)} = 0$, for $0 < m < N/2$.
2. Zero ($m + n \equiv 0\pmod{N}$): $\lambda_{(m,n)} = 0$.\
This degeneracy is absent in our case since the corresponding eigenvectors are not involved in the linear combination of the initial state $S(0)$.
3. Off-diagonal ($m \neq n$): $\lambda_{(m,n)} = \lambda_{(n,m)}$.\
Since $\lambda_{(m,n)} = \frac{i}{2} \ [\sin(2\pi m/N) + \sin(2\pi n/N)]$, each of these eigenvalues has multiplicity at least 4, due to the symmetries of the sine function. In our case, the effective degeneracy of these eigenvalues is 2, again by a similar argument.
By (\[eqn:U-matrix\]), the off-diagonal contribution is present if $m + n \equiv m' + n'\pmod{N}$. Thus, $\lambda_{(m,n)} = \lambda_{(m',n')}$ implies that $\cos(\pi(m-n)/N) = \pm \ \cos(\pi(m'-n')/N)$, since $\sin(\pi(m+n)/N) = \pm \sin(\pi(m'+n')/N)$. This implies that $m-n = -(m'-n')$ or $|(m-n)-(m'-n')| = N$, since $-(N-1) \le m-n, m'-n' \le N-1$. In either case, we get $m = n \pm N/2$ or $m' = n' \pm N/2$. But, upon inspection, we note that ${\mathbb{U}}$ is diagonal over these combinations, except for the case when $(m',n') = (n,m)$.
In what follows, we calculate the eigenvalue perturbation terms $\tilde{\lambda}$. For [*simple*]{} eigenvalues, these correction terms are given by the diagonal elements $$\tilde{\lambda}_{(m,n)}
\ = \ (V^{(m,n)})^{\dagger} \ {\mathbb{U}}\ V^{(m,n)}
\ = \ -\Gamma \frac{(N-1)}{N},$$ by Equation (\[eqn:U-matrix\]). For a [*degenerate*]{} eigenvalue $\lambda_{(m,n)}$ with multiplicity two, if $V = c (V^{(m,n)} + V^{(n,m)})$, for some constant $c$, then $\tilde{\lambda}_{(m,n)} = (V^{(m,n)})^{\dagger} {\mathbb{U}}V$, and similarly for $V^{(n,m)}$. Further calculations reveal that the eigenvalue perturbation $\tilde{\lambda}_{(m,n)}$ is $$\tilde{\lambda}_{(m,n)}
\ = \ (V^{(m,n)})^{\dagger} \ {\mathbb{U}}\ V^{(m,n)} + (V^{(m,n)})^{\dagger} \ {\mathbb{U}}\ V^{(n,m)}
\ = \ -\Gamma \frac{(N-2)}{N},$$ again by Equation (\[eqn:U-matrix\]).
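These first-order corrections are easy to verify numerically from Equation (\[eqn:U-matrix\]): the sketch below (with arbitrary small $N$, $\Gamma$, and indices $m,n$) evaluates the similarity actions $(V^{(m,n)})^{\dagger}\,{\mathbb{U}}\,V^{(m',n')}$ directly and compares them with $-\Gamma(N-1)/N$ and $\Gamma/N$.

```python
# Numerical check of Eq. (eqn:U-matrix): for the Fourier eigenvectors V^{(m,n)} of L,
# the diagonal similarity action gives -Gamma*(N-1)/N and the coupling to V^{(n,m)}
# gives Gamma/N, so a degenerate pair acquires the correction -Gamma*(N-2)/N.
# N, Gamma and the indices (m, n) are arbitrary illustrative choices.
import numpy as np

N, Gamma = 6, 0.7
mu, nu = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')

def V(m, n):                                   # V^{(m,n)}_{(mu,nu)} as a flat unit vector
    return (np.exp(2j*np.pi*(m*mu + n*nu)/N) / N).ravel()

U = -Gamma*np.diag((1.0 - np.eye(N)).ravel())  # U is diagonal in the site-pair basis

m, n = 1, 3                                    # any pair with m != n
print(np.vdot(V(m, n), U @ V(m, n)).real, -Gamma*(N-1)/N)   # diagonal term
print(np.vdot(V(m, n), U @ V(n, m)).real,  Gamma/N)         # off-diagonal term for (m', n') = (n, m)
```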
#### Dynamics
We are ready to describe the full solution to Equation (\[dSdt\]). First, note that there exists a [*trivial*]{} time-independent solution given by $S^{0}_{\alpha,\beta}(t) = \frac{\delta_{\alpha,\beta}}{N}$, that can be expressed as the following linear combination of the eigenvectors of ${\mathbb{L}}$: $$S^{0}(t) = \sum_{(m,n)} \frac{1}{N} \ (\delta_{m+n,0} + \delta_{m+n,N}) \ V^{(m,n)}.$$ The particular solution will depend on the initial condition $S(0)$, where $S_{\alpha,\beta}(0) = \delta_{\alpha,0}\delta_{\beta,0}$. Note that we have $$S(0) = \sum_{(m,n)} \frac{1}{N} \ V^{(m,n)}.$$ Thus, the solution is of the form
$$S_{\alpha,\beta}(t)
= \frac{\delta_{\alpha,\beta}}{N} +
\frac{1}{N^{2}} \sum_{(m,n)}
(1 - \delta_{[m+n] (\mbox{\scriptsize mod }N),0}) \
e^{t(\lambda_{(m,n)} + \tilde{\lambda}_{(m,n)})} \
\exp\left[\frac{2\pi i}{N}(m\alpha + n\beta)\right]$$
The probability distribution of the continuous-time quantum walk is given by the diagonal terms $P_{j}(t) = S_{j,j}(t)$, that is $$\begin{aligned}
P_{j}(t)
& = & \frac{1}{N} +
\frac{1}{N^{2}} \sum_{(m,n)} (1 - \delta_{m+n(\mbox{\scriptsize mod }N),0})
\times \left[\delta_{m,n} e^{-\Gamma \frac{N-1}{N}t} + (1-\delta_{m,n}) e^{-\Gamma \frac{N-2}{N}t}\right] \\
& & \times \exp\left[it\sin\left(\frac{\pi(m+n)}{N}\right)\cos\left(\frac{\pi(m-n)}{N}\right)\right]
\exp\left[\frac{2\pi i}{N}(m + n)j\right] \end{aligned}$$ We calculate an upper bound on the $\varepsilon$-uniform mixing time $T_{mix}(\varepsilon)$. For this, we define $$M_{j}(t) = \frac{1}{N} \sum_{m=0}^{N-1} e^{it\sin(2\pi m/N)} \omega_{N}^{mj},$$ where $\omega_{N} = \exp(2\pi i/N)$. Note that $$M_{j}^{2}(t/2) = \frac{1}{N^{2}} \sum_{m,n=0}^{N-1} e^{it\lambda_{(m,n)}} \omega_{N}^{(m+n)j}, \ \ \
M_{2j}(t) = \frac{1}{N} \sum_{m=0}^{N-1} e^{it\lambda_{(m,m)}} \omega_{N}^{2mj}$$ Using these expressions, we have $$\begin{aligned}
\left|P_{j}(t) - \frac{1}{N}\right|
& \le & e^{-\Gamma \frac{N-2}{N}t} \left| M_{j}^{2}(t/2) + \frac{e^{-t\Gamma/N} - 1}{N}
\left[M_{2j}(t) - \frac{2 - (N \mbox{ mod } 2)}{N}\right] \right| \\
& \le & e^{-\Gamma \frac{N-2}{N}t} \ \left|1 + \frac{e^{-t\Gamma/N} - 1}{N} (1 - 2/N)\right|.\end{aligned}$$ One can note that $|M_{j}(t)| \le 1$, and therefore, $$\sum_{j=0}^{N-1} \left|P_{j}(t) - \frac{1}{N}\right|
\ \le \
e^{-\Gamma \frac{N-2}{N}t} \ (N + e^{-t\Gamma/N} - 1).$$ Since $e^{-t\Gamma/N} \le 1$, it suffices to require that $N e^{-\Gamma \frac{N-2}{N}t} \le \varepsilon$. This gives the mixing time bound of $$T_{mix}(\varepsilon)
\ < \
\frac{1}{\Gamma}
\ln\left(\frac{N}{\varepsilon}\right)
\left[1 + \frac{2}{N-2}\right].
$$
Large Decoherence
=================
We analyze the decoherent continuous-time quantum walks when the decoherence rate $\Gamma$ is large, that is, when $\Gamma \gg 1$. In our analysis, we will focus on diagonal sums of the matrix $S(t)$ from (\[dSdt\]). For $k = 0,\ldots,N-1$, we define the diagonal sum $D_{k}$ as $$D_{k} = \sum_{j=0}^{N-1} S_{j, ~j+k \mbox{\scriptsize ~(mod $N$)}},$$ where the indices are treated as integers modulo $N$. We note that $$\label{dDdt}
{\frac{\mathsf{d}}{\mathsf{d}t} D_{k}} = - \Gamma \left( {1 - \delta _{k,0}} \right) D_{k}.$$ We refer to the diagonal $D_{0}$ as [*major*]{} and the other diagonals as [*minor*]{}. Equation (\[dDdt\]) suggests that the minor diagonal sums decay strongly, with a characteristic time of order $1/\Gamma$. By the initial conditions, the non-zero elements appear only along the major diagonal. From (\[dSdt\]), it follows that the system will evolve initially in the following way. The elements on the two minor diagonals [*nearest*]{} to the major diagonal will deviate slightly from zero because the classical probability distribution along the major diagonal is not uniform. This process, with a rate of order $1/4$, will compete with a self-decay of rate of order $\Gamma \gg 1/4$, thereby limiting the corresponding off-diagonal elements to small values of order $1/\Gamma$. A similar argument applies to the elements on the other minor diagonals, which are kept very small compared to their neighbors closer to the major diagonal and are of order $1/\Gamma^2$, etc. By retaining only matrix elements that are of order $1/\Gamma$, we derive a truncated set of differential equations for the elements along the major and the two adjacent minor diagonals: $$\begin{aligned}
\label{dSaa}
{S'_{j,j}} & = & \frac{1}{4}\left(S_{j,j+1} + S_{j+1,j} - S_{j-1,j} - S_{j,j-1}\right), \\
\label{dSaa1}
{S'_{j,j+1}} & = & \frac{1}{4}\left(S_{j+1,j+1} - S_{j,j}\right) - \Gamma S_{j,j+1}, \\
\label{dSa1a}
{S'_{j,j-1}} & = & \frac{1}{4}\left(S_{j,j} - S_{j-1,j-1}\right) - \Gamma S_{j,j-1}.\end{aligned}$$ To facilitate our subsequent analysis, we define $$a_j = S_{j,j}, \ \ \ \ \ d_j = S_{j,j+1} + S_{j+1,j}.$$ Then, we observe that $$a'_{j} = \frac{\left(d_j - d_{j-1} \right)}{4}, \ \ \ \ \
d'_{j} = \frac{\left(a_{j+1} - a_j\right)}{2} - \Gamma d_j.$$ The general solution of the above system of difference equations has the form $$\begin{aligned}
a_j & = & \frac{1}{N}\sum_{k=0}^{N-1} \
\left\{A_{k,0}\exp{\left(-\gamma_{k,0}t\right)} + A_{k,1}\exp{\left(-\gamma_{k,1}t\right)}\right\} \ \omega^{jk} \\
d_j & = & \frac{1}{N}\sum_{k=0}^{N-1} \
\left\{D_{k,0}\exp{\left(-\gamma_{k,0}t\right)} + D_{k,1}\exp{\left(-\gamma_{k,1}t\right)}\right\} \ \omega^{jk}\end{aligned}$$ where $\omega = e^{2\pi i/N}$, and the exponents $\gamma_{k,0}$ and $\gamma_{k,1}$ are the two roots of the quadratic equation $$x(\Gamma-x) = \frac{1}{2}\sin^2\left({\frac{\pi k}{N}}\right).$$ Taking $\gamma_{k,0} < \gamma_{k,1}$, we have $$\begin{aligned}
\gamma_{k,0} & = & \frac{1}{2\Gamma}\sin^2\left(\frac{\pi k}{N}\right) + o\left(\frac{1}{\Gamma}\right), \\
\gamma_{k,1} & = & \Gamma - \frac{1}{2\Gamma}\sin^2\left(\frac{\pi k}{N}\right) + o\left(\frac{1}{\Gamma}\right).\end{aligned}$$ The initial conditions are $a_j(0) = \delta_{j,0}$ and $d_{j}(0) = 0$, for $j = 0,\ldots,N-1$. Thus,
$$A_{k,0} \ \simeq \ 1, \ \ \
A_{k,1} \ \simeq \ - \ \frac{1}{\Gamma^2} \sin^2{\frac{\pi k}{N}}$$
and, for $b = 0,1$, we have $$D_{k,b} \ \simeq \ (-1)^{b} \ \frac{i}{\Gamma} \sin\left(\frac{\pi k}{N}\right) \exp{\left(\frac{i \pi k}{N}\right)},
$$ These equations show that the amplitudes of the elements along minor diagonals are reduced by an extra factor of $\Gamma$ compared to the elements along the major diagonal. Summarizing, the solution of differential equation at large $\Gamma$ has the form $$a_j = \frac{1}{N}\sum_{k=0}^{N-1} \
\exp{\left(-\frac{\sin^2{\frac{\pi k}{N}}}{2\Gamma} t\right)} \ \omega^{jk}.$$ Based on the above analysis, the full solution for $S(t)$ is given by $$\label{eqn:large-solution}
S_{j,k}(t) =
\left\{\begin{array}{ll}
a_{j} & \mbox{ if ~$j = k$ } \\
d_{j}/2 & \mbox{ if ~$|j - k| = 1$ } \\
0 & \mbox{ otherwise }
\end{array}\right.$$ It can be verified that $S(t)$ is a solution to Equation (\[dSdt\]) modulo terms of order $o(1/\Gamma)$.
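The quality of this truncation is easy to probe numerically: the sketch below evaluates Equation (\[dSdt\]) exactly (via a matrix exponential) and compares the resulting diagonal $S_{j,j}(t)$ with the closed form $a_j(t)$ above; $N$, $\Gamma$ and $t$ are arbitrary values with $\Gamma \gg 1$.

```python
# Compare the large-Gamma closed form a_j(t) with an exact evaluation of Eq. (dSdt).
# N, Gamma and t are arbitrary illustrative values with Gamma >> 1.
import numpy as np
from scipy.linalg import expm

N, Gamma, t = 8, 20.0, 200.0
off = 1.0 - np.eye(N)

M = np.zeros((N*N, N*N))
for col in range(N*N):                      # matrix form of the right-hand side of (dSdt)
    E = np.zeros(N*N); E[col] = 1.0
    S = E.reshape(N, N)
    dS = 0.25*(np.roll(S, -1, axis=1) + np.roll(S, -1, axis=0)
               - np.roll(S, 1, axis=0) - np.roll(S, 1, axis=1)) - Gamma*off*S
    M[:, col] = dS.ravel()

S0 = np.zeros(N*N); S0[0] = 1.0             # S_{j,k}(0) = delta_{j,0} delta_{k,0}
S_exact = (expm(t*M) @ S0).reshape(N, N)

k = np.arange(N)
a = (np.exp(-np.sin(np.pi*k/N)**2 * t/(2*Gamma))[None, :]
     * np.exp(2j*np.pi*np.outer(np.arange(N), k)/N)).sum(axis=1).real / N

print(np.diag(S_exact))                     # exact P_j(t)
print(a)                                    # large-Gamma approximation; agreement improves as Gamma grows
```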
The total variation distance between the uniform distribution and the probability distribution of the decoherent quantum walk on $C_{N}$ is given by $$\sum_{j=0}^{N-1} \left| {a_j(t) - \frac{1}{N}} \right| =
\sum_{j=0}^{N-1} \left|\frac{1}{N} \sum_{k=0}^{N-1}
\exp\left(-\frac{\sin^2{\frac{\pi k}{N}}}{2\Gamma} t\right)
\exp\left(\frac{2\pi ijk}{N}\right) - \frac{1}{N}\right|,$$ which simplifies to $$\sum_{j=0}^{N-1} \left| {a_j(t) - \frac{1}{N}} \right| =
\frac{1}{N} \sum_{j=0}^{N-1} \left| \sum_{k=1}^{N-1}
\exp\left(-\frac{\sin^2{\frac{\pi k}{N}}}{2\Gamma} t\right)
\cos{\left(\frac{2\pi k j}{N}\right)} \right|.$$
#### Lower bound
A lower bound on the mixing time for large decoherence rate $\Gamma$ can be derived as follows. Note that $$\begin{aligned}
\sum_{j=0}^{N-1} {\left| {a_j(t) - \frac{1}{N}} \right|}
& \ge & {\left| {a_0(t) - \frac{1}{N}} \right|}
= \frac{1}{N} \sum_{k=1}^{N-1} \exp{\left(-\frac{\sin^2{\frac{\pi k}{N}}}{2\Gamma} t\right)}, \\
& \ge & \frac{2}{N} \exp{\left(-\frac{\sin^2{\frac{\pi}{N}}}{2\Gamma} t\right)},\end{aligned}$$ where the first inequality uses the term $j = 0$ only and the second inequality uses the terms $k = 1,N-1$. This expression is monotone in $t$, and is a lower bound on the total variation distance. It reaches $\varepsilon$ at time $T_{lower}$, when $$T_{lower}
\ = \
\frac{2\Gamma}{\sin^{2}{\frac{\pi}{N}}} \ln\left( \frac{2}{N \varepsilon} \right)
\ \simeq \
\frac{2\Gamma N^2}{\pi^2} \ln\left(\frac{2}{N\varepsilon}\right),$$ for large $N \gg 1$.
#### Upper bound
An upper bound on the mixing time for large decoherence rate $\Gamma$ can be derived as follows. Consider the following derivation: $$\begin{aligned}
\sum_{j=0}^{N-1} {\left| {a_j(t) - \frac{1}{N}} \right|}
& = & \frac{1}{N} \sum_{j=0}^{N-1}
{\left| {\sum_{k=1}^{N-1} \exp{\left(-\frac{\sin^2{\frac{\pi k}{N}}}{2\Gamma} t\right)}
\cos{\left(\frac{2\pi k j}{N}\right)}} \right|} \\
& \le & \frac{1}{N} \sum_{j=0}^{N-1}
\sum_{k=1}^{N-1} \exp{\left(-\frac{\sin^2{\frac{\pi k}{N}}}{2\Gamma} t\right)},\end{aligned}$$ since $|\cos(x)| \le 1$. The last expression is equal to $$\begin{aligned}
\sum_{k=1}^{N-1} \exp{\left(-\frac{\sin^2{\frac{\pi k}{N}}}{2\Gamma} t\right)}
& = & 2 \sum_{k=1}^{\lfloor N/2 \rfloor}
\exp{\left(-\frac{\sin^2{\frac{\pi k}{N}}}{2\Gamma} t\right)}, \\
& \le & 2 \sum_{k=1}^{\lfloor N/2 \rfloor}
\exp{\left(-\frac{2 k^2 t}{\Gamma{N^2}} \right)},\end{aligned}$$ where the last inequality is due to $\sin(x) > 2x/\pi$, whenever $0 < x < \pi/2$ (see Eq. 4.3.79, [@as]). Since $k \ge 1$, we have $k^2 \ge k$. Thus, we have $$\sum_{j=0}^{N-1} {\left| {a_j(t) - \frac{1}{N}} \right|}
\ < \ 2 \sum_{k=1}^{\lfloor N/2 \rfloor} \exp{\left(-\frac{2 k t}{\Gamma{N^2}} \right)}
\ < \ 2 \sum_{k=1}^{\infty} \exp{\left(-\frac{2 kt}{\Gamma{N^2}} \right)}.
$$ The last expression is a geometric series that equals $2/[\exp(2t/(\Gamma{N^2})) - 1]$. This expression is monotone in $t$, and it is an upper bound for the total variation distance. It reaches the value $\varepsilon$ at time $T_{upper}$, where $$T_{upper} \ = \ \frac{\Gamma N^2}{2} \ln\left(\frac{2+\varepsilon}{\varepsilon}\right).$$
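As a quick consistency check, one can compute the total variation distance from the closed form $a_j(t)$ and verify that the time at which it first reaches $\varepsilon$ lies between $T_{lower}$ and $T_{upper}$; $N$, $\Gamma$ and $\varepsilon$ below are arbitrary values (with $N\varepsilon<2$ so that $T_{lower}$ is positive).

```python
# Consistency check of the large-Gamma bounds: the epsilon-mixing time computed from
# a_j(t) should lie between T_lower and T_upper.  N, Gamma and eps are arbitrary
# illustrative values (with N*eps < 2 so that T_lower is positive).
import numpy as np

N, Gamma, eps = 12, 25.0, 0.05
k = np.arange(1, N)
rates = np.sin(np.pi*k/N)**2 / (2*Gamma)

def tv(t):                                  # total variation distance of a_j(t) from uniform
    terms = np.exp(-rates*t)[None, :] * np.cos(2*np.pi*np.outer(np.arange(N), k)/N)
    return np.abs(terms.sum(axis=1)).sum() / N

T_lower = 2*Gamma/np.sin(np.pi/N)**2 * np.log(2/(N*eps))
T_upper = Gamma*N**2/2 * np.log((2+eps)/eps)

t_mix = next(t for t in np.arange(0.0, 2*T_upper, 1.0) if tv(t) <= eps)
print(T_lower, t_mix, T_upper)              # expect T_lower <= t_mix <= T_upper
```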
Conclusions
===========
In this work, we studied the average mixing times in a continuous-time quantum walk on the $N$-vertex cycle $C_{N}$ under decoherence. For this, we used an analytical model developed by S. Gurvitz [@g97]. We found two distinct dynamics of the quantum walk based on the rates of the decoherence parameter. For small decoherence rates, where $\Gamma N \ll 1$, the mixing time is bounded as $$T_{mix} \ < \ \frac{1}{\Gamma} \ln\left(\frac{N}{\varepsilon}\right) \left[1 + \frac{2}{N-2}\right].
$$ This bound shows that $T_{mix}$ is inversely proportional to the decoherence rate $\Gamma$. For large decoherence rates $\Gamma \gg 1$, the mixing times are bounded as $$\frac{\Gamma N^2}{\pi^2} \ln\left(\frac{2}{N \varepsilon}\right)
\ < \ T_{mix} \ < \
\frac{\Gamma N^2}{2} \ln \left(\frac{2+\varepsilon}{\varepsilon}\right).$$ These bounds show that $T_{mix}$ is linearly proportional to the decoherence rate $\Gamma$, but is quadratically dependent on $N$. Note that the dependence of the mixing times on $N$ exhibits the expected quantum-to-classical transition.
These analytical results already point to the existence of an [*optimal*]{} decoherence rate for which the mixing time is minimum. Our additional numerical experiments (see Figure (\[figure:dima\])) for $\Gamma \sim 1$ confirmed that there is a unique optimal decoherence rate for which the mixing time is minimum. This provides a continuous-time analogue of the Kendon and Tregenna results in [@kt03].
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Viv Kendon for her kind encouragements in our interests on decoherence in continuous-time quantum walks and Vladimir Privman for helpful discussion. This research was supported by the National Science Foundation grant DMR-0121146.
[40]{}
William Adamczak, Kevin Andrew, Peter Hernberg, and Christino Tamon, “A note on graphs resistant to quantum uniform mixing,” quant-ph/0308073.
Dorit Aharonov, Andris Ambainis, Julia Kempe, and Umesh Vazirani, “Quantum Walks on Graphs,” [*Proceedings of 33rd ACM Symposium on Theory of Computing*]{} (2001), 50-59.
Amir Ahmadi, Ryan Belk, Christino Tamon, and Carolyn Wendler, “On Mixing in Continuous-time Quantum Walks on Some Circulant Graphs,” [*Quantum Information and Computation*]{} [**3**]{} (2003), 611-618.
Andris Ambainis, Eric Bach, Ashwin Nayak, Ashvin Viswanath, and John Watrous, “One-dimensional Quantum Walks,” [*Proceedings of 33rd ACM Symposium on Theory of Computing*]{} (2001), 37-49.
Yakir Aharonov, Luiz Davidovich, and Nicim Zagury, “Quantum Random Walks,” [*Physical Review Letters*]{} [**48**]{} (1993), 1687-1690.
Gorjan Alagić and Alexander Russell, “Decoherence in Quantum Walks on the Hypercube,” quant-ph/0501169.
Milton Abramowitz and Irene A. Stegun, [*Handbook of Mathematical Functions*]{}, Dover (1972).
Andrew M. Childs, Enrico Deotto, Richard E. Cleve, Edward Farhi, Samuel Gutmann, and Daniel A. Spielman, “Exponential algorithmic speedup by quantum walk,” [*Proceedings of 35th ACM Symposium on Theory of Computing*]{} (2003), 59-68.
Andrew M. Childs, Edward Farhi, and Samuel Gutmann, “An example of the difference between quantum and classical random walks,” [*Quantum Information Processing*]{} [**1**]{} (2002), 35.
Andrew M. Childs and Jeffrey Goldstone, “Spatial search by quantum walk,” quant-ph/0306054.
Edward Farhi and Samuel Gutmann, “Quantum computation and decision trees,” [*Physical Review A*]{} [**58**]{} (1998), 915-928.
Richard P. Feynman, Robert B. Leighton, and Matthew Sands, [*The Feynman Lectures on Physics*]{}, volume III, Addison-Wesley (1965).
Shmuel A. Gurvitz, “Measurements with a noninvasive detector and dephasing mechanism,” [*Physical Review B*]{} [**56**]{} (1997), 15215.
Shmuel A. Gurvitz, Leonid Fedichkin, Dima Mozyrsky, and Gennady P. Berman, “Relaxation and Zeno effect in qubit measurements,” [*Physical Review Letters*]{} [**91**]{} (2003), 066801.
Heath Gerhardt and John Watrous, “Continuous-time quantum walks on the symmetric group,” [*Proceedings of the 7th Workshop on Randomization and Approximation Techniques in Computer Science*]{}, edited by Sanjeev Arora, Klaus Jansen, José D.P. Rolim, and Amit Sahai, Lecture Notes in Computer Science [**2764**]{}, Springer (2003), 290-301.
Roger A. Horn and Charles R. Johnson, [*Topics in Matrix Analysis*]{}, Cambridge University Press (1991).
Tosio Kato, [*Perturbation Theory for Linear Operators*]{}, Springer-Verlag (1966).
Viv Kendon, “Quantum Walks on General Graphs,” quant-ph/0306140.
Julia Kempe, “Quantum random walks – an introductory overview,” [*Contemporary Physics*]{} [**44**]{} (2003), 307-327.
Julia Kempe, “Quantum Random Walks Hit Exponentially Faster,” [*Proceedings of the 7th Workshop on Randomization and Approximation Techniques in Computer Science*]{}, edited by Sanjeev Arora, Klaus Jansen, José D.P. Rolim, and Amit Sahai, Lecture Notes in Computer Science [**2764**]{}, Springer (2003), 354-369.
Viv Kendon and Ben Tregenna, “Decoherence can be useful in quantum walks,” [*Physical Review A*]{} [**67**]{} (2003), 042315.
David A. Meyer, “From quantum cellular automata to quantum lattice gases,” [*Journal of Statistical Physics*]{} [**85**]{} (1996), 551-574.
Cristopher Moore and Alexander Russell, “Quantum Walks on the Hypercube,” [*Proceedings of the 6th Workshop on Randomization and Approximation Techniques in Computer Science*]{}, edited by José D.P. Rolim and Salil Vadhan, Lecture Notes in Computer Science [**2483**]{}, Springer (2002), 164-178.
Michael Nielsen and Isaac Chuang, [*Quantum Computation and Quantum Information*]{}, Cambridge University Press (2000).
Leonid Fedichkin, Arkady Fedorov, and Vladimir Privman, “Additivity of Decoherence Measures for Multiqubit Quantum Systems,” [*Physical Review A*]{} [**328**]{} (2004), 87.
Dmitry Solenov and Leonid Fedichkin, “Continuous-Time Quantum Walks on a Cycle Graph,” quant-ph/0506096.
[^1]: Center for Quantum Device Technology, Department of Physics, and Department of Electrical and Computer Engineering, Clarkson University, Potsdam, NY 13699–5721, USA. Email: [email protected]
[^2]: Center for Quantum Device Technology and Department of Physics, Clarkson University, Potsdam, NY 13699–5721, USA. Email: [email protected]
[^3]: Department of Mathematics and Computer Science and Center for Quantum Device Technology, Clarkson University, Potsdam, NY 13699–5815, USA. Email: [email protected]
[^4]: We have $D = kI$, if $G$ is $k$-regular.
---
author:
- |
Wolfgang Kappus\
[email protected]
date: 'v02: 2013-01-29'
title: 'Strain mediated adatom stripe morphologies on Cu$<$111$>$ simulated.'
---
Abstract {#abstract .unnumbered}
========
Substrate strain mediated adatom configurations on Cu$<$111$>$ surfaces have been simulated in a coverage range up to nearly 1 monolayer. Interacting adatoms occupy positions on a triangular lattice in two dimensions. The elastic interaction is taken from earlier calculations; short range effects are added for comparison. Depending on the coverage, different morphologies are observed: superlattices of single adatoms in the 0.04 ML region, ordered adatom clusters in the 0.1 ML region, elongated islands in the 0.3 ML region, and interwoven stripes in the 0.5 ML region. In the region above 0.5 ML the sequence is reversed, with occupied and empty positions interchanged. Stronger short range interactions increase the feature size of the clusters and reduce their lattice order. The influence of the substrate elastic anisotropy turns out to be significant. Results are compared with morphologies observed on Cu$<$111$>$ surfaces and the applicability of the model is discussed.
1.Introduction {#introduction .unnumbered}
==============
Regular self-assembled adatom structures, ranging from superlattices via nanodot arrays to strain relief patterns, are interesting for various general and technological reasons; reviews were given in \[1,2\]. While interactions of adatoms comprise various mechanisms \[3\], the focus on elastic interactions in this paper is driven by the question of their importance compared with other interactions. In recent calculations on the stability and dynamics of strain mediated superlattices it was shown that the role of elastic interactions was underestimated compared with surface state mediated interactions \[4\], so other surface phenomena seem worth discussing in the light of strain mediated interactions. Anisotropies of adatom patterns can act as a probe for strain mediated interactions via their correlation with anisotropies of the substrate elastic constants.
The calculations on the stability and dynamics of strain mediated superlattices \[4\] covered a low coverage region and left open the question of how adatoms arrange under equilibrium conditions when the coverage is increased. The experiments of Plass et al. on domain patterns \[5\] provide a challenge to prove the ability of an elastic continuum theory to build a bridge between superlattices and stress relief patterns. Such a bridge was built before with a Green function method for the non-equilibrium case \[6\].
There are also good reasons for the focus on Cu$<$111$>$: Cu is among the substrates with the highest elastic anisotropies and Cu$<$111$>$ seems to be a preferred surface for experiments. Unfortunately the crystal directions are often not published, which hinders a solid proof of elastic effects. The predictions of this work are intended to allow a verification of the theory by experiments.
The model used in this work to simulate adatom morphologies is an adaptation of the one used in \[4\]. The latter used a grid-less molecular dynamics algorithm suited for low coverages. For the higher coverages up to 1 ML and the $<$111$>$ surface discussed in this work it had to be converted to a grid-based scheme in which adatom positions reside on a triangular lattice representing identical threefold coordinated substrate sites. The interaction mechanism has been kept; it is based on the isotropic stress that individual adatoms on threefold coordinated sites exert on their neighborhood. The limitations of such interaction mechanisms and of other model assumptions are discussed below.
The model results will be presented as sample adatom configurations for increasing coverages and for three variants of the interaction. The variants stand for three different strengths of short range interactions and should give an idea of the interplay between short- and medium-range interactions. The model results will also be presented as pair distributions derived from averaging over sample configurations.
This work is organized as follows: In section 2 the details of the interaction model are recalled and the simulation model is detailed. Furthermore the calculation method for pair distributions is described. In section 3 the model results will be presented as sample adatom configurations and as pair distributions derived from averaging over sample configurations. For symmetry reasons the pair distributions will cover 30${}^{\circ}$ segments only. In section 4 the model assumptions are reviewed, the model results are summarized and compared with a few experiments and open questions are addressed. Section 5 closes with a summary of the results.
2.Model details {#model-details .unnumbered}
===============
In this section the elastic interactions used within the model are recalled, the grid-based algorithm for the Molecular Dynamics simulations is introduced and the method for deriving adatom pair distributions is explained. Scaling relations, intended for the interpretation of experiments, are also recalled.
2.1.Elastic interactions of adatoms {#elastic-interactions-of-adatoms .unnumbered}
------------------------------------
Following \[8\], the interaction of two adatoms, one located at the origin and one at $\overset{\rightharpoonup }{s}$, is given in polar coordinates (s,$\phi $), with distance s = $|$$\overset{\rightharpoonup }{s}$$|$ and pair direction angle $\phi $ measured with respect to the crystal axes, by
$$U(s,\phi ) =(2\pi )^{-1}\sum _p \omega _p \frac{\cos (p \phi )\cos \left(p \frac{\pi }{2}\right)\Gamma \left(\frac{p+3}{2}\right)s^p\, _1F_1\left(\frac{p+3}{2};p+1;\frac{-s^2}{4 \alpha ^2}\right)}{2^{p+1}\Gamma (p + 1)\alpha ^{p+3}},\qquad (2.1)$$
where $\, _1F_1$ denotes the confluent hypergeometric function, $\Gamma (p)$ the Gamma function, $\alpha $=$\sqrt{2}$/2 is a cutoff length defining height and location of the potential wall and the medium range potential, and the $\omega _p$ denote coefficients of a cosine series describing the solution of an elastic eigenvalue problem \[8\]. The dominating isotropic p=0 term of Eq. (2.1) is negative for small s, describing a potential well (i.e. an attractive potential), has a positive wall (i.e. a repulsive potential) at s=$s_{w }$ and decays to zero with an $s^{-3}$ law at large distances. For elastically anisotropic substrates like Cu the p$>$0 terms describe the anisotropic part of the interaction and influence the height of the positive wall depending on the pair direction angle $\phi $ with respect to the crystal axes. Tab. 1 shows the $\omega _p$ for the elastic adatom interaction on Cu$<$111$>$ and W$<$111$>$ (for comparison) calculated as outlined in \[8\]. We note the units of the $\omega _p$:\
- the numerator is $P^2$, the square of a scalar parameter P describing the lateral stress magnitude an adatom exerts to the surface\
- the denominator is the $c_{44}$ elastic constant of the substrate.\
For details of the parameter P see \[8\].\
$\pmb{
\begin{array}{|cccccccc|}
\hline
\text{Substrate} & c_{11} & c_{12} & c_{44} & \zeta & \omega _0 & \omega _6 & \omega _{12} \\
\text{Cu} & 169. & 122. & 75.3 & -1.376 & -1.01 & -0.007 & +0.0004 \\
W & 523. & 203. & 160. & 0. & -0.720 & 0. & 0. \\
\hline
\end{array}
}$
Table 1. Substrate elastic constants $c_{\text{ik}}$ (GPa) from \[9\], anisotropy $\zeta $=($c_{11}$-$c_{12}$-2$c_{44}$)/$c_{44}$ and coefficients $\omega _p$ (in $P^2$/$c_{44}$ units) on Cu$<$111$>$ and W$<$111$>$.
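For readers who want to evaluate Eq. (2.1) numerically, the following Python sketch (not the author's original code) sums the cosine series with the Cu$<$111$>$ coefficients of Table 1; the overall prefactor $P^2$/$c_{44}$ is simply set to 1, so U is returned in the same units as the $\omega _p$.

```python
import numpy as np
from scipy.special import gamma, hyp1f1

ALPHA = np.sqrt(2.0) / 2.0                       # cutoff length alpha of Eq. (2.1)
OMEGA_CU = {0: -1.01, 6: -0.007, 12: 0.0004}     # omega_p for Cu<111>, Table 1

def U(s, phi, omega=OMEGA_CU, alpha=ALPHA):
    """Elastic pair interaction of Eq. (2.1) in P^2/c_44 units (prefactor set to 1)."""
    total = 0.0
    for p, w in omega.items():
        num = (np.cos(p * phi) * np.cos(p * np.pi / 2.0)
               * gamma((p + 3) / 2.0) * s**p
               * hyp1f1((p + 3) / 2.0, p + 1.0, -s**2 / (4.0 * alpha**2)))
        total += w * num / (2.0**(p + 1) * gamma(p + 1.0) * alpha**(p + 3))
    return total / (2.0 * np.pi)

# attractive well at small s, repulsive wall (near s ~ 2.3 according to section 4.1),
# and the s^-3 decay at larger distances
for s in (1.0, 2.3, 5.0, 10.0):
    print(s, U(s, 0.0))
```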
In the present analysis the strong attractive interaction of Eq. (2.1) in the region s$<$$s_{w }$ is replaced by three variants to study the influence of short range interactions - in addition to the elastic interaction - on the medium range adatom morphology:\
- variant 1 as used and described in \[4\]
$$U_1(s,\phi ) = U_w+U_{\text{wp}}\cos (p\phi )\, \frac{s}{s_w}\quad \text{for } s<s_w ,\qquad (2.2)$$
where $U_w$ describes the wall height, $U_{\text{wp}}$ the wall anisotropy variance, and $s_w$ is the location of the wall maximum,\
- variant 2, describing additional attraction between next neighbors
$$U_2(s,\phi ) = 0\quad \text{for } s<s_0 ,\qquad (2.3)$$
where $s_0$ is defined by $U$($s_0$,$\phi $) = 0, covering the range s$\lesssim $1.75, significantly smaller than $s_w$,
- variant 3, describing stronger attraction between next neighbors
$$U_3(s,\phi ) = -5\, k_B T\quad \text{for } s\leq s_3 ,\qquad (2.4)$$
where $s_3$ is the next neighbor distance. The value -5 is chosen to get an equidistant series of U values.
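As an illustration only (this is not the code of \[4\]), the three variants can be written as caps applied to the elastic potential inside the short-range region. All numerical values below ($s_w$, $s_0$, $s_3$, $U_w$, $U_{\text{wp}}$, p, and the $k_B T$ conversion) are placeholders that have to be fixed as described in the text; the caller is responsible for keeping the units consistent via Eq. (2.5).

```python
import math

def U_variant(s, phi, variant, U_elastic, s_w=2.3, s_0=1.75, s_3=1.0,
              U_w=5.0, U_wp=0.5, p=6, kT=1.0):
    """Short-range caps of Eqs. (2.2)-(2.4) on top of the elastic U(s, phi); defaults are placeholders."""
    if variant == 1 and s < s_w:                 # Eq. (2.2): anisotropic repulsive wall
        return U_w + U_wp * math.cos(p * phi) * s / s_w
    if variant == 2 and s < s_0:                 # Eq. (2.3): cap at zero (weakly repulsive)
        return 0.0
    if variant == 3 and s <= s_3:                # Eq. (2.4): attractive cap of -5 k_B T
        return -5.0 * kT
    return U_elastic(s, phi)                     # elsewhere: elastic interaction, Eq. (2.1)
```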
2.2.Simulations {#simulations .unnumbered}
----------------
The grid-less Molecular Dynamics algorithm used in \[4\] turned out to be unstable and inefficient in the coverage range $\theta >$0.1. Therefore a triangular grid algorithm has been used instead. The triangular grid represents adatom positions on a $<$111$>$ surface with threefold symmetry, fulfilling the symmetry condition used for the adatom generated surface stress \[8\]. Periodic boundary conditions were applied to avoid the problem of adatom diffusion to the boundary. The hexagon diameter of 48 units was chosen to keep the computing time in the range of hours, while the interaction u(s=24) has already decreased to well below 0.01. Temperature effects are treated by the normalized interaction
$$u(s,\phi )=U(s,\phi )/(k_B T).\qquad (2.5)$$
Since the size of the stress parameter P is not known, the average wall height is assumed to be $u_W$=5 as in \[4\]; this choice determines all u(s,$\phi $).
In our grid algorithm an adatom configuration is described by a set of occupation numbers $\left\{\tau _i\right\}$, $\tau _i\in \{0,1\}$. Starting from a random k-member adatom configuration $\left\{\tau _{i,0}\right\}$, step n+1, $\left\{\tau _{i,n+1}\right\}$, evolves from step n, $\left\{\tau _{i,n}\right\}$, by comparing the total interaction of each adatom i
$$u_{\text{tot}}(i)=\sum _{j=1}^k u_{ij}\, \tau _j \qquad (2.6)$$
with that of its empty next neighbor positions. If a next neighbor position m has a lower total interaction, adatom i jumps to that position m. Adatoms thus move around in the force field of all neighbors until the total interaction is minimized.
The iterations are terminated either when no more jumps occur or when loops of identical configurations are detected.
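A minimal sketch of such a relaxation sweep is given below. It is not the program actually used: it assumes a rhombic periodic cell instead of the 48-unit hexagon, drops the angular dependence, and replaces the normalized interaction u(s,$\phi $) of Eqs. (2.1)-(2.5) by a placeholder `u_pair(r)` whose cap of 5 loosely mimics the assumed wall height $u_W$=5.

```python
import numpy as np

L = 24                        # linear size of the periodic rhombic simulation cell
NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]   # triangular lattice

def cartesian(i, j):
    """Axial lattice indices -> Cartesian coordinates (lattice constant 1)."""
    return np.array([i + 0.5 * j, j * np.sqrt(3.0) / 2.0])

def u_pair(r):
    """Placeholder pair interaction; replace by the normalized u(s, phi) of Eqs. (2.1)-(2.5)."""
    return 0.0 if r < 1e-9 else min(5.0, 1.0 / r**3)

def u_tot(occ, i, j):
    """Total interaction, Eq. (2.6), of a (possibly virtual) adatom at site (i, j)."""
    total = 0.0
    for a, b in zip(*np.nonzero(occ)):
        if (a, b) == (i, j):
            continue
        r = min(np.linalg.norm(cartesian(a - i + da * L, b - j + db * L))
                for da in (-1, 0, 1) for db in (-1, 0, 1))   # minimum image convention
        total += u_pair(r)
    return total

def sweep(occ):
    """One relaxation step: each adatom jumps to an empty neighbor with lower u_tot."""
    moved = False
    for i, j in zip(*np.nonzero(occ)):
        here = u_tot(occ, i, j)
        for di, dj in NEIGHBORS:
            m = ((i + di) % L, (j + dj) % L)
            if not occ[m]:
                occ[i, j] = 0                      # evaluate the move without the adatom
                if u_tot(occ, *m) < here:
                    occ[m] = 1                     # accept the jump
                    moved = True
                    break
                occ[i, j] = 1                      # reject: put the adatom back
    return moved

rng = np.random.default_rng(0)
occ = (rng.random((L, L)) < 0.1).astype(int)       # random start at coverage ~0.1
for _ in range(500):                               # the text additionally detects configuration loops
    if not sweep(occ):
        break
```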
2.3.Pair distribution {#pair-distribution .unnumbered}
----------------------
The adatom pair distribution $g_{\text{ik}}$ is calculated by averaging occupation pairs
$$g_{\text{ik}}=\langle \tau _i \tau _k\rangle /\theta ^2 ,\qquad (2.7)$$
where $\theta $ denotes the coverage. So $g_{\text{ik}}$=1 in a random configuration, $g_{\text{ik}}>$1 if the pair $\{$$\tau _i$$,\tau _k$$\}$ occurs more likely and $g_{\text{ik}}<$1 if the pair $\{$$\tau _i$$,\tau _k$$\}$ occurs less likely. It is the discrete variant of g(s,$\theta $) calculated in \[7\] with a 2-dimensional Born-Green-Yvon type integral equation.
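The averaging of Eq. (2.7) (and of its vacancy counterpart introduced later as Eq. (3.1)) can be sketched as follows; the function assumes periodic occupation arrays like those produced by the sweep sketch above and is not the code actually used.

```python
import numpy as np

def pair_distribution(samples, vacancies=False):
    """Average g over occupation arrays; Eq. (2.7), or Eq. (3.1) if vacancies=True."""
    g = None
    for occ in samples:
        field = 1 - occ if vacancies else occ              # Eq. (3.1) uses 1 - tau
        dens = field.mean()                                # theta, or (1 - theta)
        corr = np.zeros_like(occ, dtype=float)
        L1, L2 = occ.shape
        for di in range(L1):
            for dj in range(L2):
                shifted = np.roll(np.roll(field, di, axis=0), dj, axis=1)
                corr[di, dj] = (field * shifted).mean() / dens**2   # <tau_i tau_k> / theta^2
        g = corr if g is None else g + corr
    return g / len(samples)                                # g[di, dj] for separation (di, dj)
```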
2.4.Pair distribution scaling {#pair-distribution-scaling .unnumbered}
-----------------------------
For the discussion of experimental results in section 4.5 we will need to recall scaling properties of the continuous pair distribution g(s,$\theta
$) as outlined in \[7\]. In the long range isotropic limit the adatom-adatom interaction becomes
$$u(s) = u_0 s^{-3} + O\left( s^{-5}\right), \quad s\gg s_0 ,\qquad (2.8)$$
and the pair distribution scales
$$g\left(s,u_0,\theta \right) = g\left(\tau s,\tau ^3 u_0,\tau ^{-2}\theta \right) \qquad (2.9)$$
with a scaling factor $\tau $. In other words, the pair distribution has the same shape if simultaneously the length is doubled, the interaction is increased eightfold, and the coverage is reduced by a factor of four. We also note from Eq. (2.5) that an eightfold normalized interaction u results if the interaction U is kept constant and the temperature T is reduced by a factor of eight. We further note that doubling the stress parameter P increases the interaction U by a factor of 4.
3.Results {#results .unnumbered}
=========
The results are presented in pairs of figures, the first of which shows a sample adatom configuration in a hexagon area simulated according to section 2.2 and the second shows an equivalent pair distribution according to section 2.3 and averaged over configuration samples. The presentation comprises varying coverages $\theta $ and interactions $U_i$ (sections 3.3 to 3.5) to study the influence of short range interactions. We note the different notation of interactions $U_i$ and scaled interactions $u_i$ according to Eq. (2.5).
The adatom pair distributions are shown as dots at lattice positions in a 30 degree sector for symmetry reasons. A color code with different colors/ darkness levels is used to mark pair distribution ranges:\
- $g_{\text{ik}}\geq $1.5 black\
- 1.5$>$$g_{\text{ik}}\geq $1.0 blue/ dark gray\
- 1.0$>$$g_{\text{ik}}\geq $0.5 green/ medium gray\
- $g_{\text{ik}}>$0.5 yellow/ light gray\
- $g_{\text{ik}}\leq $0.5 white.
Since the algorithm used here differs from the one previously used \[4\], the results section starts with a reference configuration. Unfortunately, a fault in the code of \[4\] was detected only recently: the $\cos (p \pi /2)$ term in Eq. (2.1) was omitted there, and therefore the results in \[4\] are rotated by 30${}^{\circ}$ compared to the current, corrected version.
3.1.Reference configuration {#reference-configuration .unnumbered}
---------------------------
Fig. 1.a shows empty (yellow points) and occupied positions (red points) of a triangular lattice. The interaction used is $U_1$, described by Eqs. (2.1) and (2.2). Fig. 1.a acts as reference to \[4\] with a coverage $\theta $=0.045 to demonstrate that the new algorithm leads to the same sample results except for a 30${}^{\circ}$ rotation (as stated before). A substrate aligned superlattice of adatoms and a few dimers with a lattice parameter of 5 grid units shows up, as in the reference.
Fig. 1.b shows the adatom pair distribution in a 30 degree sector taken from a configuration average. Black points near $s_{<1-21>}$=3\*$\sqrt{3}$ and 6\*$\sqrt{3}$ lattice spacings and at $s_{<1-10>}$=9 reflect the (not quite perfectly) aligned monomer superlattice. We note a blue dot at $s_{<1-10>}$=1 reflecting a small population of next neighbor sites.
3.2.Influence of substrate elastic (an-)isotropy {#influence-of-substrate-elastic-an-isotropy .unnumbered}
------------------------------------------------
Previous investigations showed a strong influence of the substrate elastic constants on the adatom pair distribution \[7\]. A triangular grid algorithm could compromise such a delicate matter. To verify that the grid algorithm properly handles substrate isotropy, the elastic constants of tungsten were used as a reference (see Tab. 1). Fig. 2 shows the resulting pair distribution for a coverage $\theta $=0.045 in a 30 degree sector. It shows (apart from statistical variances) the characteristic rings at 5, 10, 15 substrate lattice spacings already discussed in \[7\].
Isotropy could also be compromised by adatom multiples. Though circular clusters have almost no angular moments at the relevant distance of 5 lattice constants, straight adatom tripoles would generate differences in the interaction of up to 18$\%$ (less repulsive orthogonal to their axes).
3.3.Adatom configurations for coverages between 0.1 and 1 monolayer {#adatom-configurations-for-coverages-between-0.1-and-1-monolayer .unnumbered}
-------------------------------------------------------------------
To stay consistent with \[4\] we will use in this section the short range interaction $U_1(s,\phi )$, Eq. (2.2), and thus the more repulsive variant 1. In steps of 0.2 the coverage is increased in Figs. 3.a to 3.h, showing the effects of the progressive population of the 2-dimensional triangular lattice up to 0.9 monolayers, together with the corresponding pair distributions according to Eq. (2.7).
Fig.3.a shows a sample configuration at coverage $\theta $=0.1 and Fig.3.b shows the equivalent pair distribution taken from a configuration average. The black dots in Fig.3.b near $s_{<1-21>}$=3\*$\sqrt{3}$ and at $s_{<1-10>}$=8 reflect a superlattice with a superlattice constant of nearly 5 substrate lattice spacings consisting of monomers, dimers, trimers and a few 4-mers. The dark dot at $s_{<1-10>}$=1 reflects the high amount of next neighbors.
Fig.3.c shows a sample configuration at coverage $\theta $=0.3 and Fig.3.d shows the equivalent pair distribution taken from a configuration average. The black dot in Fig.3.d near $s_{<1-21>}$=3\*$\sqrt{3}$ again reflects a superlattice with a superlattice constant of nearly 5 substrate lattice spacings consisting of circular and elongated n-mers. A few thin bridges between islands should be noted in Fig.3.c.
Fig.3.e shows a sample configuration at coverage $\theta $=0.5. Elongated islands have now merged into an interwoven stripe structure. Fig.3.f shows the equivalent pair distribution taken from a configuration average. The pair distribution indicates a characteristic distance of 4 to 5 substrate lattice spacings. $g_{\text{ik}}$ values of 1.0 at $s_{<1-10>}$=5 to 6 and of 1.2 at $s_{<1-21>}$=3\*$\sqrt{3}$ indicate a weak stripe alignment towards $<$1-21$>$. We note in Fig.3.e a similar vacancy stripe structure.
Fig.3.g shows a sample configuration at coverage $\theta $=0.9. The vacancies are forming aligned dimers, trimers, n-mers like the adatoms in Fig.3.a.
Intermediate results for coverages $\theta >$0.5 are omitted for a good reason: they show vacancy structures that are the inverse of the adatom structures at coverage (1-$\theta $). Therefore a vacancy pair distribution
$$g_{\text{ik}}^{\text{vac}}=\langle \left(1-\tau _i\right) \left(1-\tau _k\right)\rangle /(1-\theta )^2 ,\qquad (3.1)$$
is introduced; $g_{\text{ik}}^{\text{vac}}$ measures the likelihood of vacancy pairs $\{$(1-$\tau _i$),(1-$\tau _k$)$\}$.
Fig.3.h shows the vacancy pair distribution taken from a configuration average at coverage $\theta $=0.9. It shows almost the same structure as the adatom pair distribution at coverage $\theta $=0.1 in Fig.3.b, indicating a superlattice now of vacancy monomers, dimers, trimers and some 4-mers.
In summary, with increasing coverage the interaction $u_1$ leads to clusters on superlattice positions growing from monomers to n-mers. Subsequently elongated islands are formed and merge into stripes at 0.5 ML; above that coverage the sequence is reversed, with empty positions taking the place of occupied ones. These changes in adatom morphology are summarized in Tab. 2.
$\pmb{
\begin{array}{|ccccccc|}
\hline
\text{Coverage} & \text{Form} & \text{Superlattice} & \text{Inversion} & \text{Adatoms} & \text{Vacancies} & \text{Feature} \text{Size} \\
& & & & \text{avg}. & \text{avg}. & \text{avg}. \\
0.045 & \text{monomers} & Y & & 1 & & 5 \\
0.1 & \text{dimers} & Y & & 2 & & 4.8 \\
0.3 & \text{triangles}/\text{linear} & Y & & 7 & & 4.8 \\
0.5 & \text{coherent} \text{stripes} & & & & & 4.6 \\
0.9 & \text{dimers} & Y & Y & & 2 & \\
\hline
\end{array}
}$
Table 2. Changes of adatom morphology with increasing coverage, simulated with interaction $u_1$.
3.4.Influence of short range interactions, the $U_2$ example {#influence-of-short-range-interactions-the-u_2-example .unnumbered}
------------------------------------------------------------
Variant 1, $U_1(s,\phi )$, Eq. (2.2), of the short range interaction was used in \[4\] to enable convergence of the BGY type integral equation. Compared with Eq. (2.1) it describes an effectively repulsive interaction at short range. To show the influence of short range interactions, variant 2, $U_2(s,\phi )$, Eq. (2.3), is chosen to be less repulsive and therefore promotes nucleation of adatoms at next neighbor sites. In the pair distributions below we note black or dark dots at the next neighbor distance.
Fig.4.a shows a sample configuration with short range interaction $U_2(s,\phi )$, Eq. (2.3) at coverage $\theta $=0.1. The superlattice consists of many n-mers and some smaller aggregates. Fig.4.b shows the equivalent pair distribution taken from a configuration average. The black dots at $s_{<1-21>}$=3\*$\sqrt{3}$ indicate a superlattice with a lattice constant of slightly above 5. The blue dots at a distance of about 10.5 in all directions indicate a trend towards isotropy, i.e. a reduced superlattice order compared to Fig. 3.b.
Fig.4.c shows a sample configuration with short range interaction $U_2(s,\phi )$, Eq. (2.3), at coverage $\theta $=0.3. The superlattice consists of islands, some of which have merged to elongated islands. Small bridges between islands create dog-bone-like shapes. Fig.4.d shows the equivalent pair distribution taken from a configuration average. The blue dots indicate an isotropic ring structure with a characteristic distance of nearly 6; the $g_{\text{ik}}$ values of 1.2 at $s_{<1-10>}$=5 and of 1.35 at $s_{<1-21>}$=3\*$\sqrt{3}$, however, indicate a weak island alignment towards $<$1-21$>$.
Fig.4.e shows a sample configuration with short range interaction $U_2(s,\phi )$, Eq. (2.3), at coverage $\theta $=0.5. The islands of lower coverages have now merged into an interwoven but incoherent stripe structure with an average stripe width of nearly 3. Fig.4.f shows the equivalent pair distribution taken from a configuration average. The blue dots again suggest an isotropic ring structure with a characteristic distance of 6; the $g_{\text{ik}}$ values of 1.1 at $s_{<1-10>}$=5 to 6 and of 1.2 at $s_{<1-21>}$=3\*$\sqrt{3}$, however, indicate a weak stripe alignment towards $<$1-21$>$. Compared with Fig. 3.e - with short range interaction $U_1$ at $\theta $=0.5 - the stripes are a bit thicker, slightly less coherent, and their distance is about one lattice constant larger.
3.5.Influence of short range interactions, the $U_3$ example {#influence-of-short-range-interactions-the-u_3-example .unnumbered}
------------------------------------------------------------
Variant 3, $U_3(s,\phi )$, Eq. (2.4), is more attractive and therefore strongly promotes nucleation of adatoms at next neighbor sites. In the pair distributions we note black dots at 1 lattice spacing in the $<$1-10$>$ direction, reflecting a strong population of next neighbor sites.
Fig.5.a shows a sample configuration with short range interaction $U_3(s,\phi )$, Eq. (2.4) at coverage $\theta $=0.1. The cluster structure consists of a variety of sizes from monomers to n-mers. Fig.5.b shows the equivalent pair distribution taken from a configuration average. A reduced alignment of clusters to the substrate crystal directions is visible compared to Figs.3.b and 4.b.
Fig.5.c shows a sample configuration with short range interaction $U_3(s,\phi )$, Eq. (2.4), at coverage $\theta $=0.3. The cluster structure consists of larger islands, some of which have merged to elongated islands with dog-bone-like shapes. Fig.5.d shows the equivalent pair distribution taken from a configuration average. The blue dots suggest an isotropic ring with a characteristic distance of 6; the $g_{\text{ik}}$ values of 1.1 at $s_{<1-10>}$=5 to 6 and of 1.3 at $s_{<1-21>}$=3\*$\sqrt{3}$, however, indicate a weak island alignment towards $<$1-21$>$.
Fig.5.e shows a sample configuration with short range interaction $U_3(s,\phi )$, Eq. (2.4), at coverage $\theta $=0.5. The islands of lower coverages have merged into an interwoven stripe structure with an average stripe width of more than 3. Fig.5.f shows the equivalent pair distribution taken from a configuration average. Within the range of blue dots, $g_{\text{ik}}$ values of 1.05 at $s_{<1-10>}$=5 to 6 and of 1.1 at $s_{<1-21>}$=3\*$\sqrt{3}$ indicate a very weak alignment towards $<$1-21$>$. The characteristic distance is between 5 and 6. Compared with Fig. 4.e - with short range interaction $U_2$ at $\theta $=0.5 - the stripes look similar, but comparing Fig.5.f with Fig.4.f we note a tendency towards reduced order.
In summary, the variants with more attractive short range interactions lead to less ordered superlattices consisting of clusters with more adatoms and a greater superlattice constant, and to thicker, more coherent stripes.
4.Discussion {#discussion .unnumbered}
============
In this section the model assumptions are reviewed, the model results are summarized and compared with experiments. The section closes with a discussion of open points.
4.1.Model assumptions {#model-assumptions .unnumbered}
---------------------
Assumptions and approximations used for this model have been discussed in detail in \[4\]. The most relevant approximation is the use of an elastic continuum model for the substrate instead of a lattice model; the continuum description is known to be inadequate for short range effects. The elastic continuum model predicts an $s^{-3}$ repulsion at long range, a repulsive wall near 2.3 lattice distances and a strong attractive well at next neighbor distances.\
The assumption of a perfectly flat surface excludes the effects of steps, known for their active role in nucleation and growth, partly due to strain in their neighborhood.\
A further key assumption is thermal equilibrium for the adatom configurations, i.e. kinetic effects are neglected.\
Anisotropic stress generated by stretching adatom bonds is not covered.\
Further assumptions cover the short range interactions used. Replacement of the deep attractive potential well by either a cap of about 5 units (in fact describing a strong repulsion of next neighboring adatoms) or by a cap at potential zero (in fact describing a weak repulsion of next neighboring adatoms) or by a cap at potential -5 units (describing a stronger attraction of next neighbors) is a method to indicate the effects of short range interactions while preserving the merits of a theory with medium range focus.
4.2.Comparison with previous off-grid simulations {#comparison-with-previous-off-grid-simulations .unnumbered}
-------------------------------------------------
The adatom configurations with interaction $U_1$ at $\theta $=0.045 resulting from an off-grid algorithm in \[4\] could be reproduced with the on-grid algorithm. Both results and the derived pair distributions are consistent with the solution of a Born-Green-Yvon type integral equation describing the adatom pair distribution from Statistical Mechanics principles \[7\]. So the present simulations can be rated as a high coverage extension of previous results.
4.3.Adatom distribution on Cu $<$111$>$ {#adatom-distribution-on-cu-111 .unnumbered}
---------------------------------------
The results of sections 3.1 to 3.3 draw the picture of a substrate aligned, hexagonally packed superlattice of adatoms or clusters at coverages up to $\theta \approx $0.3 ML and stripes around coverages $\theta $=0.5 ML. The simulations predict a populated/unpopulated symmetry $\theta $/(1-$\theta $). This is explained by minimization of the total configuration energy by forming adatom clusters or stripes or vacancies around 5 lattice distances apart. Substrate strain generated by adatoms stressing the surface is allowed to relax within the unpopulated sites.
4.4.The role of short range interactions {#the-role-of-short-range-interactions .unnumbered}
----------------------------------------
The results of sections 3.4 and 3.5 indicate a strong role of short range interactions. The trends when increasing the short range attraction are\
- larger clusters\
- triangular instead of linear clusters (reflected by an increasing next neighbor pair distribution)\
- more connected clusters\
- less influence of the substrate on the superlattice and stripe directions, i.e. more isotropic configurations\
- slightly increased feature size.
4.5.Comparison with experiments {#comparison-with-experiments .unnumbered}
-------------------------------
Strain mediated superlattices on Cu$<$111$>$ in the coverage region up to $\theta $=0.045 ML were already discussed and compared with experiments \[10,11\] in \[4\]. The agreement was good enough to propose strain mediated interactions as an alternative to the mechanism discussed in \[10,11\]. The current investigation was triggered by an anonymous referee of \[4\], who raised the question of how other manifestations of elastic interactions on surfaces, especially stress domains \[12\], are related to adatom superstructures.
Stress domains are ordered patterns of less dense and more dense adatom areas, for example adatom gas areas and monolayer areas forming spontaneously. They minimize surface energy by balancing short-range attractive with long-range repulsive interaction. Substrate strain created by surface stress in the more dense areas is allowed to relax in the less dense areas. The domains reflect the elastic anisotropy of the substrate.
Observations at the Pb/Cu$<$111$>$ system \[5\] can be characterized by\
- ordered but mobile circular droplets (containing thousands of adatoms) at low coverages\
- stripes at medium coverage with a long range order improving when reducing temperature\
- ordered inverse droplets at high coverages approaching a monolayer (as predicted earlier, see references in \[5\]).\
The periodicity of the patterns is in the 100 nm range, decreasing with increasing temperature. The observed temperature range is 623 K to 673 K. The ordering of the droplets can, at first glance, be interpreted as a superlattice type.
The sequence of island superlattices, domain patterns and inverse droplets on Pb/Cu$<$111$>$ with increasing coverage observed in \[5\] would serve as striking experimental evidence for the theory and the simulation results presented in section 3.4 if the length scales and the temperature were the same. Unfortunately \[5\] describes a high temperature experiment with adatom clusters of thousands of adatoms, while the experiments showing superlattice effects on a scale of a few lattice constants have been performed in the 10 K region \[10,11\]. So the question arises whether the present Molecular Dynamics simulation could be extended to handle clusters and structures of hundreds or thousands of adatoms. Unfortunately this would be far beyond the resources of a PC, so we must rely on scaling arguments to argue that the same driver - elastic interactions - is behind both phenomena, adatom superlattices and stress domain patterns:
Following the cluster section of \[8\] we adopt a simple superposition ansatz for the elastic interactions: two clusters of $n_1$ and $n_2$ adatoms create $n_1$\*$n_2$ times the elastic inter-cluster energy of two single adatoms. This ansatz, of course, is a strong simplification that neglects e.g. short range adatom-adatom interactions and lattice mismatch effects.
Two n=$10^3$ clusters would create an interaction $U^{\text{Cluster}}$ $10^6$ times $U$. The temperature of the stress domain experiments at 650 K is a factor of $10^2$ higher than the regime of single adatom effects, so the scaled cluster interaction $u^{\text{Cluster}}$=$U^{\text{Cluster}}$/$k_B$T would be $10^4$ times higher. The typical length according to Eq. (2.9) would then be $10^{4/3}$$\approx $21 times larger than in the single adatom case. The coverage $\theta ^{\text{Cluster}}$ would be reduced by a factor of about 464 (noting that the coverage of adatoms and of clusters have different meanings).
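The arithmetic of this scaling argument can be checked with a few lines (values assumed as in the text: n = $10^3$ adatoms per cluster and a temperature ratio of $10^2$):

```python
# Quick numerical check of the scaling argument above (assumed values only).
n1 = n2 = 1_000
temperature_ratio = 100.0

u_ratio = n1 * n2 / temperature_ratio     # scaled interaction: 1e4 times larger
tau = u_ratio ** (1.0 / 3.0)              # Eq. (2.9): u scales with tau^3
length_ratio = tau                        # characteristic length ~ 21x larger
coverage_ratio = tau ** 2                 # coverage reduced by ~ tau^2 ~ 464

print(u_ratio, round(length_ratio, 1), round(coverage_ratio))   # 10000.0 21.5 464
```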
We summarize that length scales may differ from 5 substrate lattice constants to 100 lattice constants in the example above, but the elastic interaction mechanism is the same.
It would be a big surprise if such simple scaling arguments could explain the physics of stress domains more than qualitatively. In fact the measurement in \[15\] shows a decrease in domain feature size from 140 nm to 40 nm when the temperature rises from 590 K to 650 K, far beyond the above scaling effects. The authors explain this decrease in stripe periodicity with the change in domain-boundary free energy caused by thermally broken Pb-Pb bonds. Thus the effects of short range interactions override the effects of elastic medium range interactions under certain conditions. This is not in contradiction to the scaling arguments above, since Eq. (2.9) is valid only for an $s^{-3}$-type interaction, which does not include short range effects.
The influence of substrate anisotropy as observed in \[13\] is reflected in the present simulations and the pair distributions derived from them. The dominating stripe orientation at coverage $\theta $=0.5 ML reported there is $<$-1-12$>$; the model results show a weak stripe orientation in the same directions.
The assumed equilibrium conditions are supported by experiments with reversible shape transitions (droplets to elongated islands with dog-bone-like shapes) during heating cycles \[14\]. An increase in temperature changes the shapes in the same way as an increasing coverage does, again in line with the scaling arguments outlined above (the same shape of g results if u is reduced or $\theta $ is increased). Both experimental results can be seen as a hint for the validity of the assumptions and conclusions.
The lack of reported details makes the comparison of further experiments with the present model similarly difficult. Two further examples should show its abilities and its limitations:
N on Cu$<$111$>$ forms elongated islands, stable at room temperature, aligned in 3 equivalent crystal directions. The islands show a characteristic distance of about 10 nm and often collide \[16\]. Since coverage and crystal directions are not reported, a comparison with stripes as calculated from the model is incomplete, but similarities with Figs. 3.e, 4.e, 5.e should encourage further research.
Co on Cu$<$111$>$ acts as a nanoisland reference system with well documented strain mediated morphologies \[17, 18\]. These are different from the ones found in the present calculations due to the tendency of Co towards bi-layer growth even at moderate coverages. Triangular bi-layer islands show lateral displacements of the Co-Co bond lengths (measured by the surface state electron energy) dependent on their positions within the islands, associated with lateral strain.
The following picture for strain mediated morphologies is concluded:\
Stripe morphologies correlate with repulsive short range interactions, while attractive short range interactions destabilize stripes and - via multilayer growth - lead to islands. Islands create strain, interact via strain, and their shape minimizes the elastic energy.
4.6.Open questions {#open-questions .unnumbered}
------------------
Clusters arranged in superlattices and stress domain patterns on Cu$<$111$>$ in the temperature range of about 10 K with a characteristic length of about 5 to 6 lattice constants may hopefully be found in existing experimental material. More experimental material is needed to determine the size and nature of short range interactions and also the orientation of cluster/ stripe structures relative to the substrate crystal directions.
First principles methods (like density-functional-theory) need to be applied for estimating the stress parameters.
A further question is how the theory successfully describing mesoscopic stress patterns \[12\] can be utilized to better understand the microscopic effects discussed in the present analysis: if domain boundary effects play an important role in the mesoscopic range, the effects of short range interactions in the microscopic range should be similarly significant.
The morphology of adatoms on other surfaces is an equally interesting topic; the much stronger effects of elastic anisotropy on $<$001$>$ surfaces of many materials are expected to lead to further insight. Increasing the accuracy of the simulation by increasing the diameter of the simulation area on much more powerful computers may also lead to additional insights.
Bi-layer effects would extend the scope of the model but require some basic work. It should be noted that long range magnetic interactions should also be considered in the Co case.
The restrictions of the present isotropic stress model motivated an extension of the model \[19\]: Dimers are supposed to create anisotropic stress by stretching their bond. Such stress creates other types of elastic interactions including lattice mismatch.
5.Summary {#summary .unnumbered}
=========
Substrate strain mediated adatom configurations have been simulated on Cu$<$111$>$ surfaces for three short range interaction types. The adatom coverages range up to nearly a monolayer. Pair distributions have been derived to probe morphologies ranging from superlattices of single adatoms and clusters to ordered stress domain patterns. Coverages beyond 0.5 monolayers show the correspondingly inverted vacancy structures. The short range interaction shows a significant influence on the cluster size within the superlattices. Substrate elastic anisotropy influences the superlattice orientation with respect to the substrate crystal directions.
Experiments showing similar structures have been compared with the model. For low temperatures superlattices of single adatoms have been found while for increased temperatures ordered islands and stripes of adatoms have been reported. There is some evidence of elastic interactions being the common cause but a final conclusion on the validity of the theory remains open at this point in time.
Erratum {#erratum .unnumbered}
=======
In the course of recent calculations a code fault affecting previous results \[4,7\] was detected: the $\cos (p \pi /2)$ term in Eq. (2.1) was omitted in the code there. Therefore the p=6 interaction terms on $<$111$>$ surfaces had a sign error. As a consequence the results have to be rotated by 30${}^{\circ}$. In the current version of this paper the fault has been corrected. The author apologizes for any inconvenience.
References {#references .unnumbered}
==========
\[1\] H.Brune, Creating Metal Nanostructures at Metal Surfaces Using Growth Kinetics, in: Handbook of Surface Science Vol.3 (E.Hasselbrink and B.I.Lundqvist ed.), Elsevier, Amsterdam (2008)
\[2\] H.Ibach, Surf.Sci.Rep. 29, 195 (1997)
\[3\] T.L.Einstein, Interactions between Adsorbate Particles, in: Physical Structure of Solid Surfaces (W.N.Unertl ed.), Elsevier, Amsterdam (1996)
\[4\] W.Kappus, Surf. Sci. 609, 30 (2013)
\[5\] R.Plass, J.A.Last, N.C.Bartelt, G.L.Kellogg, Nature 412, 875 (2001)
\[6\] L.Proville, Phys. Rev. B 64, 165406 (2001)
\[7\] W.Kappus, Surf. Sci. 606, 1842 (2012)
\[8\] W.Kappus, Z.Physik B 29, 239 (1978)
\[9\] A.G.Every, A.K.McCurdy: Table 3. Cubic system. Elements. D.F.Nelson (ed.), SpringerMaterials - The Landolt-Börnstein Database
\[10\] J.Repp, F.Moresco, G.Meyer, K.H.Rieder, P.Hyldgaard and M.Persson, Phys. Rev. Lett. 85, 2981 (2000).
\[11\] F.Silly, M.Pivetta, M.Ternes, F.Patthey, J.P.Pelz, W.D.Schneider, New J. of Phys. 6, 16 (2004)
\[12\] O.L.Alerhand, D.Vanderbilt, R.D.Meade, J.D.Joannopoulos, Phys. Rev. Lett. 61, 1973 (1988)
\[13\] F.Leonard, N.C.Bartelt, G.L.Kellogg, Phys. Rev. B 71, 045416 (2005)
\[14\] R.van Gastel, N.C.Bartelt, G.L.Kellogg, Phys. Rev. Lett. 96, 036106 (2006)
\[15\] R.van Gastel, N.C.Bartelt, P.J.Feibelman, F.Léonard, and G.L.Kellogg, Phys. Rev. B 70, 245413 (2004)
\[16\] F.M.Leibsle, Surf. Sci. 514, 33 (2002)
\[17\] M.V.Rastei, B.Heinrich, L.Limot, P.A.Ignatiev, V.S.Stepanyuk, P. Bruno, J.P.Bucher, Phys. Rev. Lett. 99, 246102 (2007)
\[18\] N.N.Negulyaev, V.S.Stepanyuk, P. Bruno, L.Diekhöner, P.Wahl, K.Kern, Phys. Rev. B 77, 125437 (2008)
\[19\] W.Kappus, http://arxiv.org/abs/1301.3643
Acknowledgement {#acknowledgement .unnumbered}
===============
Many thanks to the anonymous referee of \[4\] for directing the author's interest to stripes.
Appendix {#appendix .unnumbered}
========

Fig. 1.a shows a sample configuration with reference short range interaction $U_1(s,\phi )$ (2.2) and a coverage $\theta $=0.045.

Fig. 1.b shows an average adatom pair distribution in a 30 degree sector with reference short range interaction $U_1(s,\phi )$ (2.2) and a coverage $\theta $=0.045. The differently colored dots represent different values of the pair distribution (darker colors represent higher values).

Fig.2 shows an average pair distribution for an isotropic substrate (tungsten) at coverage $\theta $=0.045 in a 30 degree sector.

Fig.3.a shows a sample configuration with short range interaction $U_1(s,\phi )$ (2.2) at coverage $\theta $=0.1.

Fig.3.b shows an average pair distribution with short range interaction $U_1(s,\phi )$ (2.2) at coverage $\theta $=0.1.

Fig.3.c shows a sample configuration with short range interaction $U_1(s,\phi )$ (2.2) at coverage $\theta $=0.3.

Fig.3.d shows an average pair distribution with short range interaction $U_1(s,\phi )$ (2.2) at coverage $\theta $=0.3.

Fig.3.e shows a sample configuration with short range interaction $U_1(s,\phi )$ (2.2) at coverage $\theta $=0.5.

Fig.3.f shows an average pair distribution with short range interaction $U_1(s,\phi )$ (2.2) at coverage $\theta $=0.5.

Fig.3.g shows a sample configuration with short range interaction $U_1(s,\phi )$ (2.2) at coverage $\theta $=0.9.

Fig.3.h shows an average vacancy pair distribution (3.1) with short range interaction $U_1(s,\phi )$ (2.2) at coverage $\theta $=0.9.

Fig.4.a shows a sample configuration with short range interaction $U_2(s,\phi )$ (2.3) at coverage $\theta $=0.1.

Fig.4.b shows an average pair distribution with short range interaction $U_2(s,\phi )$ (2.3) at coverage $\theta $=0.1.

Fig.4.c shows a sample configuration with short range interaction $U_2(s,\phi )$ (2.3) at coverage $\theta $=0.3.

Fig.4.d shows an average pair distribution with short range interaction $U_2(s,\phi )$ (2.3) at coverage $\theta $=0.3.

Fig.4.e shows a sample configuration with short range interaction $U_2(s,\phi )$ (2.3) at coverage $\theta $=0.5.

Fig.4.f shows an average pair distribution with short range interaction $U_2(s,\phi )$ (2.3) at coverage $\theta $=0.5.

Fig.5.a shows a sample configuration with short range interaction $U_3(s,\phi )$ (2.4) at coverage $\theta $=0.1.

Fig.5.b shows an average pair distribution with short range interaction $U_3(s,\phi )$ (2.4) at coverage $\theta $=0.1.

Fig.5.c shows a sample configuration with short range interaction $U_3(s,\phi )$ (2.4) at coverage $\theta $=0.3.

Fig.5.d shows an average pair distribution with short range interaction $U_3(s,\phi )$ (2.4) at coverage $\theta $=0.3.

Fig.5.e shows a sample configuration with short range interaction $U_3(s,\phi )$ (2.4) at coverage $\theta $=0.5.

Fig.5.f shows an average pair distribution with short range interaction $U_3(s,\phi )$ (2.4) at coverage $\theta $=0.5.
$\copyright $ Wolfgang Kappus 2012\
|
---
abstract: 'In this Letter, three physical predictions on the phase separation of binary systems are derived based on a dynamic transition theory developed recently by the authors. First, the order of phase transitions is precisely determined by the sign of a parameter $K_d$ (or a nondimensional parameter $K$) such that if $K_d>0$, the transition is first-order with latent heat and if $K_d <0$, the transition is second-order. Second, a theoretical transition diagram is derived, leading in particular to a prediction that there is only second-order transition for molar fraction near $1/2$. This is different from the prediction made by the classical transition diagram. Third, a critical length scale $L_d^c$ is derived such that no phase separation occurs at any temperature if the length of the container is smaller than the critical length scale.'
author:
- Tian Ma
- Shouhong Wang
title: Phase Separation of Binary Systems
---
[^1]
Materials compounded by two components $A$ and $B$, such as binary alloys, binary solutions and polymers, are called binary systems. Sufficient cooling of a binary system may lead to phase separations, i.e., at the critical temperature, the concentrations of both components $A$ and $B$ with homogeneous distribution undergo changes, leading to heterogeneous spatial distributions. The main objective of this Letter is to precisely describe the phase separation mechanism and to make a few physical predictions.
[**Cahn-Hilliard Equation.**]{} Let $u_A$ and $u_B$ be the concentrations of components $A$ and $B$ respectively, then $u_B=1-u_A$. In a homogeneous state, $u_B=\bar{u}_B$ is a constant. We take $u$ to be the concentration density deviation $u=u_B-\bar{u}_B.$ The Cahn-Hilliard free energy is given by $$F(u)=F_0+\int_{\Omega} \Big[\frac{\mu}{2}|\nabla u|^2+ f(u)\Big]dx,\label{8.49}$$ where $$f(u)= \alpha_1 u^2+\alpha_2 u^3+\alpha_3 u^4.$$ For a more general $f$, the same results can be derived in the same fashion; for simplicity, we take this form of $f$ as given here. Then the classical Cahn-Hilliard equation is as follows: $$\left.
\begin{aligned}
&\frac{\partial u}{\partial
t}=-k\Delta^2u+\Delta [ b_1 u^1+ b_2 u^2+ b_3 u^3 ],\\
&\int_{\Omega}u(x,t)dx=0,
\end{aligned}
\right.\label{8.57}$$ supplemented with the Neumann boundary condition: $$\frac{\partial u}{\partial n}=\frac{\partial\Delta u}{\partial
n}=0 \ \ \ \ \text{on}\
\partial\Omega ,\label{8.53}$$ where $\Omega=\Pi^3_{k=1}(0,L_k) \subset {\mathbb R}^3$ is a rectangular domain. We note that the more general domain case can be studied as well.
To derive the nondimensional form of equation, let $$\begin{aligned}
&x=lx^{\prime}, && t=\frac{l^4}{k}t^{\prime}, && u=u_0u^{\prime},\\
&\lambda =-\frac{l^2b_1}{k},&& \gamma_2=\frac{l^2b_2u_0}{k},
&& \gamma_3=\frac{l^2b_3u^2_0}{k},
\end{aligned}\label{nondim}$$ where $l$ is a given length, $u_0=\bar{u}_B$ is the constant concentration of $B$, and $\gamma_3>0$. Then the equation (\[8.57\]) can be rewritten as follows (omitting the primes) $$\left.
\begin{aligned}
&\frac{\partial u}{\partial
t}=-\Delta^2u-\lambda\Delta u+\Delta (\gamma_2u^2+\gamma_3u^3),\\
&\int_{\Omega}u(x,t)dx=0,\\
&u(x,0)=\varphi .
\end{aligned}
\right.\label{8.58}$$
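For illustration only (the results of this Letter rely on rigorous dynamic transition theory, not on simulation), a semi-implicit pseudo-spectral time stepper for the nondimensional equation above can be sketched as follows. It uses a periodic square box instead of the Neumann rectangle (\[8.53\]), and the values of $\lambda$, $\gamma_2$, $\gamma_3$ are arbitrary illustrative choices.

```python
import numpy as np

def cahn_hilliard_2d(n=128, steps=2000, dt=1e-3, lam=4.0, g2=0.0, g3=1.0, seed=0):
    rng = np.random.default_rng(seed)
    u = 0.01 * rng.standard_normal((n, n))
    u -= u.mean()                                   # enforce the zero-mean constraint
    k = 2.0 * np.pi * np.fft.fftfreq(n)             # wavenumbers for grid spacing 1
    k2 = k[:, None] ** 2 + k[None, :] ** 2          # |k|^2 on the grid
    denom = 1.0 + dt * (k2 ** 2 - lam * k2)         # implicit linear part of Eq. (8.58)
    for _ in range(steps):
        nl_hat = np.fft.fft2(g2 * u ** 2 + g3 * u ** 3)
        u_hat = (np.fft.fft2(u) - dt * k2 * nl_hat) / denom
        u_hat[0, 0] = 0.0                           # keep the spatial mean at zero
        u = np.real(np.fft.ifft2(u_hat))
    return u

u_final = cahn_hilliard_2d()   # spinodal pattern when lambda exceeds the instability threshold
```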
[**Criteria of separation order.**]{} Each 3D rectangular domain is one of the following two cases: $$\begin{aligned}
& \text{ Case I: } \quad L=L_1>L_j\qquad \text{for } j= 2, 3, \\
& \text{ Case II: }\quad L=L_1=L_2 > L_3 \text{ or } L_1 = L_2 = L_3.
\end{aligned}$$ We define a nondimensional parameter: $$K=\left\{
\begin{aligned}
& \frac{2L^2}{9\pi^2}\gamma^2_2 - \gamma_3
&& \text{ for Case I},\\
& \frac{26L^2}{27\pi^2}\gamma^2_2 - \gamma_3
&& \text{ for Case II}.
\end{aligned}
\right.$$ which, by (\[nondim\]), is equivalent to the following dimensional parameter: $$K_d=\left\{
\begin{aligned}
& \frac{2L^2_d}{9\pi^2}\frac{b_2^2}{k}- b_3 && \text{ for Case I}, \\
& \frac{26L_d^2}{27\pi^2}\frac{b_2^2}{k} - b_3 && \text{ for Case II}.
\end{aligned}
\right.$$ where $L_d=L \cdot l$ is the dimensional length scale.
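A short sketch of this criterion follows (the inputs $k$, $b_2$, $b_3$ and $L_d$ are placeholders to be supplied for the material at hand):

```python
import math

def K_d(L_d, k, b2, b3, case="I"):
    """Dimensional parameter K_d; 'I' if L1 alone is the largest side, 'II' otherwise."""
    factor = 2.0 / 9.0 if case == "I" else 26.0 / 27.0
    return factor * L_d**2 * b2**2 / (math.pi**2 * k) - b3

def separation_order(L_d, k, b2, b3, case="I"):
    return "first order (latent heat, hysteresis)" if K_d(L_d, k, b2, b3, case) > 0 else "second order"

print(separation_order(L_d=1.0, k=1.0, b2=0.5, b3=1.0))   # illustrative numbers only
```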
By theorems proved in [@MW08d], the order of transitions is determined by the sign of the parameter $K$ (or $K_d$), and we readily derive the following physical predictions:
[Physical Conclusion I:]{}
*The order of phase separation is completely determined by the sign of the nondimensional parameter $K_d$ as follows:*
- If $K_d<0$, the separation is second order and the dynamic behavior of the Cahn-Hilliard system is as shown in Figure \[f8.15-1\].
- If $K_d>0$, the separation is a first-order transition with latent heat. In particular, there are two critical temperatures $T^*> T_c$ such that if the temperature $T> T^*$, the system is in the homogeneous state; when $T^*> T > T_c$, the system is in a metastable state accompanied by hysteresis corresponding to a saddle-node bifurcation; and when $T< T_c$, the system is in the phase separation state. In addition, the critical temperatures are functions of $u_0$ and $L$: $T^*=T^*(u_0, L), T_c=T_c(u_0, L)$. See Figure \[f8.16-1\].
{width="40.00000%"}
{width="40.00000%"}
This is in agreement with [*part of*]{} the classical phase diagram from the classical thermodynamic theory given in Figure \[f8.14-2\]; see, among others, Reichl [@reichl], Novick-Cohen and Segal [@NS84] , and Langer [@langer71]. However, as we shall see below, our result shows that near $u_0=1/2$, there is no metastable region; see Figure \[fch-4\].
{width="40.00000%"}
[**Transition diagram.**]{} We now examine the order of separation in terms of the length scale $L_d$ and molar fraction $u_0$. For this purpose, according to the Hildebrand theory (see Reichl [@reichl]), $b_2$ and $b_3$ can be expressed by two explicit formulas. Disregarding the term $|\nabla u|^2$, the molar Gibbs free energy takes the following form $$\begin{aligned}
f=& \mu_A(1-u)+\mu_Bu+RT(1-u)\ln (1-u)\nonumber \\
& +RTu\ln
u+au(1-u),\label{8.122}\end{aligned}$$ where $\mu_A,\mu_B$ are the chemical potentials of $A$ and $B$ respectively, $R$ is the molar gas constant, and $a>0$ measures the repulsive interaction between $A$ and $B$. Therefore, the coefficients $b_2$ and $b_3$ are given by $$\label{ch-1}
\begin{aligned}
& b_2=\frac{D}{3 !} \frac{d^3 f(u_0)}{du^3}= \frac{2u_0-1}{6u^2_0(1-u_0)^2}D RT,\\
& b_3=\frac{D}{4 !} \frac{d^4 f(u_0)}{du^4}= \frac{1-3u_0 +3u^2_0}{12u^3_0(1-u_0)^3}DRT,
\end{aligned}$$ where $D$ is the diffusion coefficient. It is easy to see that $$\begin{aligned}
&b_2\left\{
\begin{aligned}
& =0&& \text{ if } u_0=\frac{1}{2},\\
& \neq 0&& \text{ if } u_0\neq\frac{1}{2},
\end{aligned}
\right.\\
&b_3> 0 \qquad \forall 0<u_0<1.\end{aligned}$$ It is clear that the above formulas for $b_2$ and $b_3$ based on the Hildebrand theory fail near $u_0=0, 1$. However, the physically relevant case is away from these two end points of $u_0$, and then we have: $$\begin{aligned}
& b_2 =\frac{16 DRT}{3} (u_0-\frac{1}{2}) + o(u_0-\frac12), \\
& b_3 =\frac{4DRT}{3} + o(1).
\end{aligned}\label{ch-2}$$ Then solving $K_d=0$ gives a critical (dimensional) length scale $L_d$: $$L_d = \left\{
\begin{aligned}
&\frac{3 \pi \sqrt{k}}{\sqrt{2}} \frac{\sqrt{b_3}}{|b_2|} && \text{ for Case I},\\
&\frac{3 \pi \sqrt{3k}}{\sqrt{26}} \frac{\sqrt{b_3}}{|b_2|} && \text{ for Case II}.
\end{aligned}\right. \label{ch-3}$$ By (\[ch-2\]) and (\[ch-3\]), we have $$L_d = \left\{
\begin{aligned}
&\frac{3\sqrt{3k}\pi }{8\sqrt{2DRT_c}|u_0-\frac12|} + O(1) && \text{for Case I},\\
&\frac{9 \sqrt{k}\pi}{8\sqrt{26DRT_c}|u_0-\frac12|} + O(1) && \text{for Case II},
\end{aligned}\right. \label{ch-4}$$ where $T_c$ is the critical temperature as given in Physical Conclusion I. From this formula, we derive the transition diagram given by Figure \[fch-2\], and consequently, we derive a theoretical phase diagram given in Figure \[fch-3\]. In particular, we have shown the following physical conclusions:
[Physical Conclusion II.]{}
**
- For a fixed length scale $L=L'$, there are numbers $x_1 <\frac12< x_2$ such that the transition is second-order if the molar fraction $x_1< u_0< x_2$, and the transition is first-order if $u_0 > x_2$ or $u_0 < x_1$.
- The phase diagram Figure \[fch-3\] is for this fixed length scale $L'$. The points $x_1$ and $x_2$ are the two molar concentrations where there is no metastable region and no hysteresis phenomena for $x_1<u_0< x_2$. In other words, $$T^*(u_0)=T_c(u_0) \qquad \text{ for } x_1 <u_0< x_2.$$
{width="40.00000%"}
{width="40.00000%"}
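The transition-diagram boundary of Case I can be sketched numerically as follows; only the leading term of (\[ch-4\]) is kept, and all parameter values in the example call are placeholders.

```python
import math

def second_order_window(L_prime, k, D, R, Tc):
    """(x1, x2) from the Case I boundary of Eq. (ch-4): second order for x1 < u0 < x2."""
    C = 3.0 * math.pi * math.sqrt(3.0 * k) / (8.0 * math.sqrt(2.0 * D * R * Tc))
    return 0.5 - C / L_prime, 0.5 + C / L_prime

x1, x2 = second_order_window(L_prime=10.0, k=1.0, D=1.0, R=8.314, Tc=300.0)   # placeholders
```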
[**$TL$-phase diagram.**]{} We now derive the length and temperature phase diagram. For this purpose, we consider the linear eigenvalue problem for the Cahn-Hilliard equation as follows: $$\begin{aligned}
& - \Delta^2 u - \lambda \Delta u = \beta u, \\
& \frac{\partial u}{\partial n}=\frac{\partial\Delta u}{\partial
n}=0 \ \ \ \ \text{on}\
\partial\Omega.
\end{aligned}$$ The first eigenvalue is given by $$\beta_1 = - \frac{\pi^2}{L^2}\left( \frac{\pi^2}{L^2} - \lambda\right) =
- \frac{\pi^2}{L^2}\left( \frac{\pi^2}{L^2} + \frac{l^2 b_1}{k}\right).$$ By (\[8.122\]), in analogy with (\[ch-1\]), we have $$b_1 = \frac{D}{2} \frac{d^2 f(u_0)}{du^2} = \frac{DRT}{2 u_0(1-u_0)} -\frac{a}{2}.$$ The critical parameter curve equation $\beta_1=0$ is given by $$\begin{aligned}
T_c= & \frac{u_0(1-u_0)}{RD}\left( a- \frac{k\pi^2}{2l^2 L^2}\right) \nonumber \\
=
& \frac{u_0(1-u_0)}{RD}\left( a- \frac{k\pi^2}{2L_d^2}\right).\label{critical-t}\end{aligned}$$ Using this formula and the theorems in [@MW08d], we derive the $TL$ phase diagram given by Figure \[fch-4\], and the following physical conclusions:
{width="40.00000%"}
[Physical Conclusion III.]{}
*For a given molar fraction $0<u_0 < 1$, there is a critical (dimensional) length $$L^c_d= \sqrt{\frac{k\pi^2}{2 a}}$$ such that the following hold true:*
- For $L_d < L^c_d$, there is no phase separation for any temperature.
- For $L_d > L^c_d$, phase separation occurs at the critical temperature $T=T_c$ given by (\[critical-t\]).
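These two statements can be turned into a small numerical sketch (the values of $k$, $a$ and $D$ below are placeholders; $R$ is the molar gas constant):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def critical_length(k, a):
    """L_d^c = sqrt(k pi^2 / (2a)); no separation for containers smaller than this."""
    return math.sqrt(k * math.pi**2 / (2.0 * a))

def critical_temperature(u0, L_d, k, a, D):
    """T_c of Eq. (critical-t); positive only when L_d exceeds the critical length."""
    return u0 * (1.0 - u0) / (R * D) * (a - k * math.pi**2 / (2.0 * L_d**2))

k, a, D = 1.0e-2, 2.0, 1.0                        # illustrative placeholder values
Lc = critical_length(k, a)
print(Lc, critical_temperature(0.4, 2.0 * Lc, k, a, D))
```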
[**Summary.**]{} Based on a dynamic transition theory developed recently by the authors [@b-book; @chinese-book; @MW08c; @MW08f], a systematic mathematical analysis is made for the Cahn-Hilliard equation modeling phase separation of binary systems [@MW08d]. Based on this rigorous analysis, we are able to make three physical predictions on the phase separation of binary systems:
[First]{}, the order of phase transitions is precisely determined by the sign of a parameter $K_d$ (or a nondimensional parameter $K$) such that if $K_d>0$, the transition is first-order with latent heat and if $K_d <0$, the transition is second-order. This parameter $K_d$ is explicitly given in terms of the system properties and the geometry of the container.
[Second]{}, a theoretical transition diagram is derived, leading in particular to a prediction that there is only second-order transition for molar fraction near $1/2$. This is different from the prediction made by the classical transition diagram.
[Third]{}, a critical length scale $L_d^c$ is derived such that no phase separation occurs at any temperature if the length scale of the container is smaller than the critical length scale. The transition temperature $T_c$ is given precisely as well when the length scale is larger than the critical scale.
[Finally]{}, our theory fully reveals the transition dynamics. This is the advantage of using the dynamic classification scheme as proposed in [@chinese-book; @MW08c; @MW08f], where the transitions are classified as Type-I, Type-II and Type-III. Also, we would like to mention that our results are derived for rectangular domains; the more general domain case can be studied using the dynamic transition theory as well, and other transition types such as the mixed transition may occur; see [@MW08d].
[1]{}
J. S. Langer, [*Theory of spinodal decomposition in alloys*]{}, Ann. of Physics, 65 (1971), pp. 53–86.
T. Ma and S. Wang, [*Bifurcation theory and applications*]{}, vol. 53 of World Scientific Series on Nonlinear Science. Series A: Monographs and Treatises, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2005.
T. Ma and S. Wang, [*Stability and Bifurcation of Nonlinear Evolution Equations*]{}, Science Press (in Chinese), Beijing, 2007.
T. Ma and S. Wang, [*Cahn-Hilliard equations and phase transition dynamics for binary systems*]{}, Dist. Cont. Dyn. Systs., Ser. B, (2008); see also arXiv:0806.1286.
T. Ma and S. Wang, [*Dynamic phase transition theory in [PVT]{} systems*]{}, Indiana University Mathematics Journal, to appear; see also arXiv:0712.3713, (2008).
T. Ma and S. Wang, [*Dynamic phase transitions for ferromagnetic systems*]{}, Journal of Mathematical Physics, 49:053506 (2008), pp. 1–18.
A. Novick-Cohen and L. A. Segel, [*Nonlinear aspects of the [C]{}ahn-[H]{}illiard equation*]{}, Phys. D, 10 (1984), pp. 277–298.
L. E. Reichl, [*A modern course in statistical physics*]{}, A Wiley-Interscience Publication, John Wiley & Sons Inc., New York, second ed., 1998.
[^1]: This work is supported in part by grants from ONR and NSF.
|
---
abstract: 'Terahertz and far-infrared electric and magnetic responses of hexagonal piezomagnetic YMnO$_{3}$ single crystals are investigated. Antiferromagnetic resonance is observed in the spectra of magnetic permeability $\mu_{a}$ \[**H**$\,(\omega)$ oriented within the hexagonal plane\] below the Néel temperature $T_{N}$. This excitation softens from 41 to 32[$\,\mbox{cm}^{-1}$]{} on heating and finally disappears above $T_{N}$. An additional weak and heavily-damped excitation is seen in the spectra of complex dielectric permittivity $\varepsilon_{c}$ within the same frequency range. This excitation contributes to the dielectric spectra in both antiferromagnetic and paramagnetic phases. Its oscillator strength significantly increases on heating towards room temperature, thus providing evidence of piezomagnetic or higher-order couplings to polar phonons. Other heavily-damped dielectric excitations are detected near 100[$\,\mbox{cm}^{-1}$]{} in the paramagnetic phase in both $\varepsilon_{c}$ and $\varepsilon_{a}$ spectra, and they exhibit similar temperature behavior. These excitations, appearing in the frequency range of magnon branches well below polar phonons, could be reminiscent of electromagnons; however, their temperature dependence is quite different. We have used density functional theory for calculating phonon dispersion branches in the whole Brillouin zone. A detailed analysis of these results and of previously published magnon dispersion branches brought us to the conclusion that the observed absorption bands stem from phonon-phonon and phonon-paramagnon differential absorption processes. The latter are enabled by strong short-range in-plane spin correlations in the paramagnetic phase.'
author:
- 'C. Kadlec'
- 'V. Goian'
- 'K.Z. Rushchanskii'
- 'P. Kužel'
- 'M. Ležaić'
- 'K. Kohn'
- 'R.V. Pisarev'
- 'S. Kamba'
title: |
Terahertz and infrared spectroscopic evidence of phonon-paramagnon coupling\
in hexagonal piezomagnetic YMnO$_{3}$
---
Introduction
============
Spin waves (magnons) in magnetically ordered materials can be excited by the magnetic component **H**$(\omega)$ of the electromagnetic radiation, giving rise to a resonant dispersion of magnetic permeability in the microwave or terahertz (THz) frequency region. Recently, new coupled spin–lattice excitations named electromagnons have been discovered in multiferroics, where the magnetic order coexists with the ferroelectric one.[@pimenov06; @sushkov07] Electromagnons are excited by the electric component **E**$(\omega)$ of the electromagnetic radiation, therefore they can be detected in the THz dielectric permittivity spectra. Though they were theoretically predicted in 1970,[@chupis70] the first experimental confirmation appeared as late as in 2006.[@pimenov06] These excitations were mainly investigated in the rare earth (*R*) orthorhombic manganites *R*MnO$_3$ and *R*Mn$_2$O$_5$ (for reviews see e.g. Refs. ), and in hexaferrites.[@kida09b]
Multiferroics can be roughly divided into two groups.[@khomskii06; @khomskii09; @lotter09] In the so-called type-I multiferroics the ferroelectric (FE) order takes place both above and below the magnetic ordering temperature and the spontaneous polarization is large. However, the coupling between magnetic and electric order parameters is weak.
A general feature of type-II multiferroic materials is that the ferroelectric phase is induced by magnetic ordering characterized by a particular type of incommensurate spiral magnetic structure. In this case the magnetically-induced polarization is by several orders of magnitude smaller than in type-I multiferroics. However the coupling between electric and magnetic subsystems is large and giant magnetoelectric effects are observed. The magnon dispersion branch in the incommensurate phase exhibits a minimum at the wave vector **q$_{m}$** corresponding to the modulation vector of the ordered spins. In contrast to the magnetic resonance (magnon at **q**$\approx$0) characterized by sharp spectral features, the electromagnons manifest themselves as very broad spectral bands because their activation in the dielectric spectra is closely related to the high density of states close to the extrema of the magnon dispersion branches. Since the probing THz radiation has a long wavelength (i.e. the wave vector **q**$\approx$0), the electromagnons cannot be excited by a resonant single-photon absorption due to the wave vector conservation law; in this sense polar phonons should be involved in the interaction process.
An experimentally observed low-frequency electromagnon in type-II multiferroics was found to be related to the spin waves near the magnetic Brillouin zone (BZ) center with **q**=**q$_{m}$** [@pimenov08; @lee09] or to those with **q**=**q**$_\textrm{BZE}$-2**q$_{m}$**; [@rovillain11] here **q**$_\textrm{BZE}$ stands for the wave vector at the BZ edge. In both cases the low-frequency electromagnon has a frequency similar to that of the magnon with **q**=0, which is expected because all these excitations are related to the same magnon branch. A high-frequency electromagnon corresponds to an excitation of the BZ-edge magnons (**q**=**q**$_\textrm{BZE}$) which can induce a quasi-uniform modulation (**q**$\approx$0) of the local electric dipole moment.[@lee09; @valdes09] As for the mechanisms of electromagnon excitation, some researchers claim that the low-frequency electromagnons are activated by the inverse Dzyaloshinskii-Moriya mechanism, while the high-frequency ones are activated by the Heisenberg exchange coupling.[@kida09; @shuvaev11] Other authors believe that both types of electromagnons can be explained by the Heisenberg exchange coupling.[@mochizuki10]
Formerly, it was assumed that electromagnons could be activated only in type-II multiferroics due to their large magnetoelectric coupling. Nevertheless, electromagnons were recently observed also in BiFeO$_{3}$, [@cazayous08; @komandin10; @talbayev11] which is the most prominent type-I multiferroic and has a rather weak magnetoelectric coupling. In this context, THz dielectric spectra of multiferroics may shed new light on the nature of magnetoelectric coupling.
Hexagonal manganites *R*MnO$_3$ belong to the type-I multiferroics. In particular, hexagonal YMnO$_{3}$ is ferroelectric below $\approx$1250[$\,\mbox{K}$]{}[@gibbs11] and the antiferromagnetic (AFM) ordering sets in only below T$_N\approx70$[$\,\mbox{K}$]{}.[@bertaut63; @chatterji07] The magnetic symmetry is P$_3$c [@fiebig00] and therefore the linear magnetoelectric coupling is forbidden. However, piezomagnetic, magnetoelastic, and higher-order magnetoelectric couplings are allowed.[@Birss; @fiebig02; @goltsev03] The piezomagnetic coupling is characterized by a bilinear interaction between the magnetic order parameter and strain, in contrast to the magnetoelastic coupling, which is proportional to the product of the squared order parameter and strain.[@Birss; @Landau] Using the method of optical second harmonic generation,[@Fiebig-JOSAB] the piezomagnetic coupling was observed in YMnO$_{3}$ owing to the interaction between AFM and FE domain walls.[@fiebig02; @goltsev03] Switching of the FE polarization triggers a reversal of the AFM order parameter.[@fiebig02; @goltsev03; @choi10] Higher-order magnetoelectric coupling in YMnO$_{3}$ has been observed in several works. Exceptionally large atomic displacements at $T_N$ were observed in structural studies and they demonstrate unusually strong magnetoelastic coupling.[@lee08] The large spin–polar-phonon coupling manifests itself by a decrease of the low-frequency permittivity[@aikawa05] near $T_{N}$, which is probably caused by the anomalous hardening of several infrared-active phonons.[@zaghrioui08] Similar phonon anomalies were observed near $T_{N}$ also in the Raman spectra.[@fukumura07] Ultrasound measurements on a single crystal of hexagonal YMnO$_{3}$ showed anomalous behavior of the elastic moduli $C_{11}$ and $C_{66}$ due to a strong coupling of the lattice with the in-plane exchange interaction.[@poirier07]
The AFM resonance in a hexagonal YMnO$_{3}$ crystal was first observed and briefly (without any figures) reported in Ref. . More detailed THz studies of YMnO$_{3}$ ceramics were recently published in Ref. . The AFM resonance lies near 43[$\,\mbox{cm}^{-1}$]{} at 4[$\,\mbox{K}$]{} and its frequency softens on heating towards $T_N$, where it disappears.[@penney69; @goian10] Three magnon branches were discovered below $T_N$ using inelastic neutron scattering (INS).[@sato03; @petit07; @chatterji07] Two of them are degenerate near the BZ center and their frequencies correspond to the above-mentioned AFM resonance. Moreover, the possible existence of magnons and of short-range correlations between spins at Mn sites in the paramagnetic phase was indicated by INS.[@park03; @roessli05; @demmel07] The magnetoelastic coupling manifests itself also by a strong mixing of magnons with acoustic phonons; this leads to a gap in the transverse acoustic (TA) phonon branch occurring at the frequencies and wave vectors where the uncoupled magnon and TA branches would intersect.[@petit07] Recent polarized INS measurements revealed that the excitation detected at liquid helium temperatures near 43[$\,\mbox{cm}^{-1}$]{} has a mixed character of magnetic spin wave and lattice vibration,[@pailhes09] i.e. its contribution to both the magnetic permeability and the dielectric permittivity is possible.
{width="17cm"}
The reported piezomagnetic, magnetoelastic, and higher-order magnetoelectric couplings in optical, acoustic and mainly INS data stimulated our spectroscopic study of hexagonal single crystals of YMnO$_{3}$. In this paper, we present results on far-infrared (FIR) and THz polarized spectra of this material, emphasizing the interaction between the magnetic, electric and phonon subsystems. We demonstrate that the strongly underdamped AFM resonance observed near $\approx$ 40[$\,\mbox{cm}^{-1}$]{} contributes only to the magnetic permeability spectra below T$_{N}$. An additional broad and weak absorption band was observed in the same frequency range in the dielectric spectra both below and above T$_{N}$. In contrast to electromagnons, which are typically observed only below 50[$\,\mbox{K}$]{}, the oscillator strength of this excitation significantly increases on heating towards room temperature. This indicates that the feature must be related to the occupation number of magnons and/or phonons. An additional absorption band with similar temperature behavior was also observed near 100[$\,\mbox{cm}^{-1}$]{}. We will show that both of these excitations can be explained by differential multiphonon and magnon-phonon processes.
Experimental details
====================
The experiments were performed using a Fourier-transform infrared (FTIR) spectrometer Bruker IFS113v and a custom-made THz time-domain spectrometer.[@kuzel10] In both experiments, Optistat CF cryostats (Oxford Instruments) with polyethylene (FIR) or Mylar (THz) windows were used for measurements between 10 and 300[$\,\mbox{K}$]{}. A helium-cooled bolometer operating at 1.6[$\,\mbox{K}$]{} was used as the detector in the FTIR spectrometer. Principles of THz time-domain spectroscopy are explained in Ref. . The output of a femtosecond Ti:sapphire laser oscillator (Coherent, Mira) excites an interdigitated photoconducting switch TeraSED (Giga-Optics) to generate linearly polarized broadband THz probing pulses. A gated detection scheme based on electro-optic sampling with a 1 mm-thick \[110\] ZnTe crystal allows us to measure the time profile of the electric field of the transmitted THz pulse (see Ref. for further details).
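In THz time-domain spectroscopy, the complex (amplitude and phase) transmittance is obtained from the Fourier transforms of the time-domain waveforms recorded with and without the sample; the material parameters then follow from inverting the transmission formula for a plane-parallel slab. The short NumPy sketch below only illustrates this generic step; the array names and pulse parameters are hypothetical, and this is not the data-processing code used in this work.

```python
import numpy as np

def complex_transmittance(t_ps, e_sample, e_reference):
    """Ratio of the Fourier spectra of sample and reference THz waveforms.

    t_ps        : equally spaced time axis in picoseconds
    e_sample    : electric field transmitted through the sample
    e_reference : electric field transmitted through an empty aperture
    Returns the frequency axis (THz) and the complex transmittance spectrum.
    """
    dt = t_ps[1] - t_ps[0]
    freqs_thz = np.fft.rfftfreq(len(t_ps), d=dt)   # 1/ps = THz
    return freqs_thz, np.fft.rfft(e_sample) / np.fft.rfft(e_reference)

# Illustration with synthetic waveforms: the sample pulse is delayed and attenuated.
t = np.linspace(0.0, 50.0, 1024)                   # ps
ref = np.exp(-((t - 10.0) / 0.5) ** 2)             # reference pulse
sam = 0.6 * np.exp(-((t - 13.0) / 0.5) ** 2)       # delayed, weaker sample pulse
f, tr = complex_transmittance(t, sam, ref)         # |tr| and arg(tr) encode the optical constants
```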
![(Color online) Temperature dependence of the complex permittivity $\varepsilon_c$ and permeability $\mu_a$ spectra calculated from data plotted in Fig. \[Fig1\]. The solid $\varepsilon_c$ curves at 100 and 140[$\,\mbox{K}$]{} result from the oscillator fit.[]{data-label="Fig2"}](Fig2n.eps){width="85mm"}
![(Color online) Temperature dependences of parameters of the resonances observed in magnetic $\mu_{a}$ and dielectric $\varepsilon_{c}$ spectra. Closed circles: frequency of the AFM resonance; Open squares and triangles: eigen-frequency $\omega_{diel1}$ and oscillator strength $\Delta\varepsilon\omega_{diel1}^{2}$, respectively, of the mode observed in the dielectric spectra in Fig. \[Fig2\]. The dotted line shows the population increase of an energy level at 66[$\,\mbox{cm}^{-1}$]{} following the Bose-Einstein statistics.[]{data-label="Fig3"}](Fig3n.eps){width="75mm"}
Hexagonal YMnO$_{3}$ single crystals were grown by the floating zone method.[@Kohn00] Two crystal plates with lateral dimensions of $\sim4.5\times 5$ mm$^2$, with the [*c*]{}-axis oriented either within the plate plane or along its normal, were cut and polished to obtain highly plane-parallel samples (within $\pm$ 1 $\mu$m) with thicknesses of 1100 and 348 $\mu$m for the two orientations, respectively. These crystal plates were probed using the THz and FIR beams in all possible geometries: **E**$\,(\omega)$$\perp$**c**, **H**$\,(\omega)$$\perp$**c**; **E**$\,(\omega)$$\perp$**c**, **H**$\,(\omega)$$\|$**c**; and **E**$\,(\omega)$$\|$**c**, **H**$\,(\omega)$$\perp$**c**. This enabled us to access the complex spectra of the products $\varepsilon_{a} \mu_{a}$, $\varepsilon_{a} \mu_{c}$, and $\varepsilon_{c} \mu_{a}$ as shown in Fig. \[Fig1\](a), (b), and (c), respectively.
Results
=======
At low temperatures, the peak around 40 cm$^{-1}$ seen in the spectra of $\varepsilon_{a}\mu_{a}$ and $\varepsilon_{c} \mu_{a}$ \[Fig. \[Fig1\](a, c)\], but not in those of $\varepsilon_{a} \mu_{c}$ \[Fig. \[Fig1\](b)\], is definitely due to the AFM resonance, as it contributes only to the magnetic permeability $\mu_{a}$. The AFM resonance vanishes above $T_{N}\sim70\,{\rm K}$. The data shown in Fig. \[Fig1\](b) allow us to assume that $\mu_{c}=1$ in the THz range. This is in agreement with the magnetic order of YMnO$_{3}$ in the AFM phase: the spins are ordered in adjacent layers within the hexagonal plane in such a way that magnetic resonances are not expected to be excited with **H**$\|$**c**. Based on this assumption, we are able to retrieve the complex values of the permeability $\mu_{a}$ and of the permittivity $\varepsilon_{c}$ (see Fig. \[Fig2\]).
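The retrieval logic is simple once $\mu_{c}=1$ is adopted: the three measured products then determine $\varepsilon_{a}$, $\mu_{a}$ and $\varepsilon_{c}$ uniquely. The following NumPy sketch only illustrates this algebra with placeholder spectra; it is not the retrieval code used for Figs. \[Fig1\] and \[Fig2\].

```python
import numpy as np

# Placeholder complex product spectra on a common frequency grid; in practice
# these come from the three measurement geometries of Fig. 1.
n_freq = 256
eps_a_mu_a = np.full(n_freq, 16.0 + 0.5j)   # E perp c, H perp c
eps_a_mu_c = np.full(n_freq, 15.5 + 0.3j)   # E perp c, H parallel c
eps_c_mu_a = np.full(n_freq, 19.0 + 0.8j)   # E parallel c, H perp c

mu_c = 1.0                     # assumption supported by Fig. 1(b)
eps_a = eps_a_mu_c / mu_c      # in-plane permittivity
mu_a = eps_a_mu_a / eps_a      # in-plane permeability (hosts the AFM resonance)
eps_c = eps_c_mu_a / mu_a      # c-axis permittivity (hosts the broad dielectric band)
```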
The spectra of $\mu_{a}$ were fitted by a damped harmonic oscillator and the resulting AFM resonance frequency is plotted in Fig. \[Fig3\]; a strong softening is observed upon heating towards $T_{N}$. A similar temperature dependence was briefly reported earlier,[@penney69; @goian10] with the magnon frequency higher by approximately 2[$\,\mbox{cm}^{-1}$]{}.
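A damped harmonic oscillator model for the complex permeability, $\mu_a(\omega)=1+\Delta\mu\,\omega_{0}^{2}/(\omega_{0}^{2}-\omega^{2}-i\gamma\omega)$, can be fitted to the retrieved spectra along the following lines. This is only a generic sketch (SciPy least squares on stacked real and imaginary parts, with arbitrary starting values); the actual fitting routine and parameter values used here are not specified in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def mu_oscillator(omega, d_mu, omega0, gamma):
    """Single damped-harmonic-oscillator model of the complex permeability."""
    return 1.0 + d_mu * omega0**2 / (omega0**2 - omega**2 - 1j * gamma * omega)

def fit_afm_resonance(omega, mu_measured, p0=(0.01, 40.0, 1.0)):
    """Fit (strength, eigenfrequency, damping); real and imaginary parts are stacked."""
    def stacked(w, d_mu, omega0, gamma):
        m = mu_oscillator(w, d_mu, omega0, gamma)
        return np.concatenate([m.real, m.imag])
    data = np.concatenate([mu_measured.real, mu_measured.imag])
    popt, _ = curve_fit(stacked, omega, data, p0=p0)
    return popt

# Consistency check on a noiseless synthetic resonance at 41 cm^-1.
w = np.linspace(20.0, 60.0, 200)                 # wavenumber grid (cm^-1)
print(fit_afm_resonance(w, mu_oscillator(w, 0.02, 41.0, 1.5)))
```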
Besides the sharp AFM resonance line in the low-temperature $\mu_{a}$ spectra, one can observe a broad dielectric absorption band around 40[$\,\mbox{cm}^{-1}$]{} in the $\varepsilon_{c}$ spectra. This feature is detected even above $T_N$, and its strength increases markedly with temperature. The presence of such a resonance in $\varepsilon_{c}$ is qualitatively expected from a simple comparison of the raw data in Figs. \[Fig1\](a) and (c). The accessible spectral range of the THz measurements for our sample is limited to $\sim$ 60[$\,\mbox{cm}^{-1}$]{}; we have therefore also performed FTIR transmission (up to 100[$\,\mbox{cm}^{-1}$]{}) and reflectivity (up to 650[$\,\mbox{cm}^{-1}$]{}) measurements for all polarizations.
![(Color online) Example of the experimental FTIR transmittance and reflectivity spectra of 348$\mu$m thick YMnO$_3$ crystal with polarization **E**$\|$**c** obtained at 120K. Dashed-dotted blue line: theoretical transmittance spectrum obtained from parameters of the FTIR reflectivity fit (without considering modes observed by THz spectroscopy); solid red line: simultaneous fit of the FTIR transmission and THz spectra. Dashed green line is the result of a fit of the reflectivity using the parameters obtained from the fit of FIR and THz transmittance. One can see that the reflectivity spectrum is not sensitive enough to detect the weak broad modes near 40 and 100[$\,\mbox{cm}^{-1}$]{}. Oscillations in the experimental reflectivity spectrum observed below 80[$\,\mbox{cm}^{-1}$]{} are caused by the diffraction of FIR beam on a small sample.[]{data-label="Fig4"}](Fig4.eps){width="86mm"}
An example of the experimental FTIR transmittance and reflectivity spectra obtained at 120[$\,\mbox{K}$]{} and of their various fits is shown in Fig. \[Fig4\]. Regular oscillations observed in the transmittance spectrum are due to Fabry-Pérot interferences in the plane-parallel sample; a weak minimum near 40[$\,\mbox{cm}^{-1}$]{} corresponds to the broad absorption band detected in the THz dielectric spectra (see Fig. \[Fig2\]). According to Ref. as well as according to our FTIR reflectivity (see e.g. Fig. \[Fig4\]), the lowest-frequency polar phonons lie above 150[$\,\mbox{cm}^{-1}$]{} in both the **E**$\|$**c** and **E**$\perp$**c** polarized spectra. Nevertheless, our simultaneous fits of the THz complex permittivity and FTIR transmittance and reflectivity data reveal several additional modes below these phonon frequencies. The relevant spectra are plotted in Fig. \[Fig5\]. Besides the sharp magnon line at 40[$\,\mbox{cm}^{-1}$]{}, three other broad modes at roughly 10, 40 and 100[$\,\mbox{cm}^{-1}$]{} were used in the fitting procedure in order to account for the measured shape of the **E**$\|$**c** spectra at 10[$\,\mbox{K}$]{} (see Fig. \[Fig5\]a). The additional modes remain in the spectra up to room temperature and their strength increases on heating. Also in the **E**$\perp$**c** polarized spectra, two broad modes observed near 10 and 90[$\,\mbox{cm}^{-1}$]{} were used for the fits above 50[$\,\mbox{K}$]{}.
The feature observed near 10[$\,\mbox{cm}^{-1}$]{} in both polarized spectra could be related to low-frequency magnons[@sato03] (cf. the low-frequency magnon branches shown in Fig. \[fig:dispersion\]). However, the sensitivity and accuracy of our THz spectra below 20[$\,\mbox{cm}^{-1}$]{} are limited; therefore we cannot exclude that it is only an artifact. For this reason we will not speculate about the origin of this excitation. All other modes appearing below 150[$\,\mbox{cm}^{-1}$]{} are clearly observed in the THz and/or FTIR transmittance spectra, while the FTIR reflectivity measurements are not sensitive enough to detect and resolve these weak and broad spectral features (see Fig. \[Fig4\]). Their origin will be discussed in the next section.
![(Color online) The measured THz loss spectra of YMnO$_3$ (symbols) and those obtained from the fits of FTIR transmittance and reflectivity spectra. Below T$_{N}$=70[$\,\mbox{K}$]{}, the spectra correspond to imaginary parts of the permittivity-permeability product. Above [T$_{N}$]{}, the spectra correspond to the dielectric losses. Polarizations of electric and magnetic components of IR or THz beams are indicated. Dashed lines are the fits of room-temperature FTIR reflectivity spectra without taking into account the IR and THz transmittance spectra. The marked peaks above 150[$\,\mbox{cm}^{-1}$]{} are due to phonons; the origin of lower frequency absorption bands is discussed in the text.[]{data-label="Fig5"}](ztraty_IR-THz.eps){width="85mm"}
![(Color online) Temperature dependence of the (a) permittivity and (b) dielectric loss measured at 20[$\,\mbox{cm}^{-1}$]{} with polarization **E**$\perp$**c** (red solid lines) and **E**$\parallel$**c** (black dashed lines).[]{data-label="fig:eps-T"}](eps-T.eps){width="7cm"}
The temperature dependence of the sub-THz complex dielectric permittivity $\varepsilon_{a}$ plotted in Fig. \[fig:eps-T\] for 20[$\,\mbox{cm}^{-1}$]{} exhibits a pronounced drop below $T_{N}$. Such an anomaly is a typical feature of large spin-phonon coupling, which occurs only in the hexagonal planes of YMnO$_{3}$, where the spins are ordered. For that reason the anomaly is not observed in $\varepsilon_{c}$(*T*). The AFM phase transition is accompanied by unusually large atomic displacements, which were detected by neutron diffraction;[@lee08] for this reason the phonon frequencies change below $T_{N}$. The decrease in $\varepsilon'_{a}$ and $\varepsilon''_{a}$ is mainly caused by hardening of the $E_{1}$ symmetry polar mode seen near 250[$\,\mbox{cm}^{-1}$]{} in the IR reflectivity spectra with polarization **E**$\perp$**c**.[@zaghrioui08] Fits of our IR reflectivity spectra show that this mode hardens from 246[$\,\mbox{cm}^{-1}$]{} (at 300 K) to 256[$\,\mbox{cm}^{-1}$]{} (at 10 K) and therefore its dielectric contribution $\Delta\varepsilon$ is reduced from 9.1 (300 K) to 7.6 (10 K). This decrease of $\Delta\varepsilon$ is mainly responsible for the change of the permittivity $\varepsilon'_{a}$(T) seen in Fig. \[fig:eps-T\]. Hardening of other modes brings a minor contribution to the decrease of $\varepsilon'_{a}$(T) on cooling. A similar temperature dependence of $\varepsilon'_{a}$ was also observed in the radio-frequency region,[@aikawa05] providing evidence of the absence of dielectric dispersion below 100 GHz. The gradual decrease of $\varepsilon'_{a}$ and $\varepsilon'_{c}$ on cooling from 300 to 100[$\,\mbox{K}$]{} is the usual behavior caused by a small phonon stiffening as a consequence of thermal contraction.
![(Color online) Dispersion branches of phonons (theoretical; black solid lines) and magnons (experimental[@sato03] at 7[$\,\mbox{K}$]{}; red dashed lines). The red-dotted line indicates the presumable dispersion of the paramagnon near the M-point. The symbols shown at the BZ edges indicate the polarization of the phonons at the BZ boundary: *a* and *c* stand for phonons polarized within the hexagonal plane and in the perpendicular direction, respectively. In the $\Gamma$-point, the *E$_{1}$*- and *A$_{1}$*-phonons observed experimentally[@zaghrioui08] are marked by green and blue points, respectively; other modes are silent. Blue arrows with assignment $\omega_{diel1}$ and $\omega_{diel2}$ indicate phonon-paramagnon excitations observed in the dielectric loss spectra of $\varepsilon''_c$. Green arrow marked as $\omega_{diel3}$ indicates a broad multiphonon absorption observed in the $\varepsilon''_a$ loss spectra (see Fig. \[Fig5\]).[]{data-label="fig:dispersion"}](disp-branches4.eps){width="8.3cm"}
Discussion
==========
The question arises about the origin of the absorption bands appearing below the phonon resonances in Fig. \[Fig5\]. They are much weaker and significantly broader than the polar phonon resonances, and their strength increases when the temperature is increased, i.e., the strength is high in the paramagnetic phase. Their frequencies, lying in the range of 40–100[$\,\mbox{cm}^{-1}$]{}, coincide with those of the magnon branch observed by INS at 7 K throughout the BZ[@sato03] (see Fig. \[fig:dispersion\]). In the following text we discuss whether these features can be related to the magnon dispersion branches.
Could a spin wave still exist in hexagonal YMnO$_{3}$ at room temperature? It is well established that Mn spins exhibit strong short-range correlations in hexagonal YMnO$_{3}$ far above T$_{N}$. This was evidenced by the anomalous behavior of the thermal conductivity,[@sharma04] of the elastic moduli,[@poirier07] as well as by neutron scattering experiments.[@park03; @roessli05; @demmel07] Nevertheless, due to the short-range character of the spin correlations in the hexagonal plane of YMnO$_{3}$, one can expect the existence of only short-wavelength paramagnons, i.e. magnons with large wave vectors ***q$_{x}$*** near the M-point of the BZ. A part of such a paramagnon branch is schematically plotted in Fig. \[fig:dispersion\]. Note that its frequency is lower than that of the magnon branch at 7[$\,\mbox{K}$]{}, as the magnon frequency decreases by almost 10[$\,\mbox{cm}^{-1}$]{} on heating towards T$_{N}$ (see Fig. \[Fig2\]).
Electromagnons are excitations with frequencies close to those of spin waves, which, due to specific couplings, are activated in the dielectric spectra. In perovskite manganites, the parts of magnon branches exhibiting a high density of states are mainly involved in these interactions (at BZ edge or close to the spin modulation wave vector).[@valdes09] However, these electromagnons were observed only at very low temperatures (typically less than 50[$\,\mbox{K}$]{}). Their strength dramatically decreases on heating and they usually disappear from the spectra at T$_{N}$ or close above T$_{N}$.[@pimenov08; @kida09; @shuvaev11] This is in contradiction with our observations in YMnO$_{3}$.
We came to the conclusion that the broad absorption bands we observe in the dielectric spectra reflect excitations which must be coupled to phonons. Let us discuss briefly which types of interaction between the magnetic subsystem and other degrees of freedom might be expected on the basis of the crystallographic point group symmetry 6$mm$ and the magnetic symmetry $\underline{6}m\underline{m}$.[@fiebig00] The magnetic order parameter of YMnO$_3$ was analyzed in several publications and was shown to transform according to the B$_1$ ($\Gamma_4$) irreducible representation of the 6$mm$ group.[@Nedlin; @Pashkevich; @Sa; @Koster] The $\underline{6}m\underline{m}$ symmetry strictly forbids the linear magnetoelectric effect, i.e. bilinear terms $\alpha_{ij}H_iE_j$ (where $H_i$ and $E_i$ are components of the magnetic and electric field, respectively) are not allowed in the thermodynamic potential.[@Birss] However, a higher-order magnetoelectric effect (sometimes called the magnetodielectric effect), accounted for by the $\beta_{ijk}H_iH_jE_k$ terms in the thermodynamic potential, is allowed. This effect manifests itself in our measurements as a kink near $T_N$ in the temperature dependence of $\varepsilon'_a$ (see Fig. \[fig:eps-T\]).
The magnetic symmetry of YMnO$_3$ allows a piezomagnetic contribution to the thermodynamic potential described by the terms $p_{ijk}H_{i}\sigma_{jk}$, where $\sigma_{jk}$ is a stress component and $p_{ijk}$ denotes the components of the piezomagnetic tensor.[@Birss; @goltsev03] We believe that this type of bilinear coupling must play an important role in the interaction between the magnetic subsystem and the lattice. Usually, the piezomagnetic effect is allowed thanks to the relativistic part of the spin-lattice and spin-spin interactions, provided the symmetry restrictions are met.[@Landau] However, in YMnO$_3$, which is a noncollinear antiferromagnet, the exchange (Coulomb) interactions may be several orders of magnitude stronger than the relativistic ones and, therefore, they can be the origin of the piezomagnetism.[@Vitebsk] For example, extraordinary spin-phonon interactions were shown to contribute to the thermal conductivity of YMnO$_3$ below $T_N$.[@sharma04] Higher-order effects such as $p_{ijkl}H_iH_j\sigma_{kl}$ are naturally also allowed in YMnO$_3$.
In order to provide a more quantitative explanation of the interaction between the magnetic subsystem and phonons, we calculated the phonon spectrum from first principles within the spin-polarized local density approximation [@PhysRevB.23.5048]. We used projector augmented-wave potentials as implemented in the Vienna *Ab Initio* Simulation Package (VASP) [@VASP_Kresse:1993; @VASP_Kresse:1996; @Bloechl:1994; @VASP_Kresse:1999]. The following valence-electron configurations were considered: $4s^{2}4p^{6}5s^{2}4d^{1}$ for Y, $3p^{6}4s^{2}3d^{5}$ for Mn, and $2s^{2}2p^{4}$ for oxygen. To account for the strong electron correlation effects on the [*d*]{}-shells of the Mn atoms, we used the LDA+U approach [@Anisimov_et_al:1997; @Dudarev] with an on-site Coulomb parameter $U=8.0$ eV and Hund’s exchange $J_H=0.88$ eV as calculated in Ref. . The spin-orbit interaction was not taken into account. We used an A-type antiferromagnetic structure, where spins on the Mn honeycomb layers are aligned ferromagnetically and layers with opposite spin direction alternate along the $c$-axis. [@Spaldin_NMat_2004; @fennie05] A kinetic energy cutoff of 500 eV and a $4 \times 4 \times 2$ $\Gamma$-centered $k$-point mesh were used in the structural relaxation of the unit cell, where the Hellmann-Feynman forces were minimized to values smaller than 0.5 meV/$\mathrm{\AA}$. Phonon calculations were performed on a $2 \times 2 \times 1$ $\Gamma$-centered $k$-point mesh, with a $2 \times 2 \times 2$ supercell, within the force-constant method. [@Kunc:1982; @Alfe:2009] The Hellmann-Feynman forces were calculated for displacements of atoms of up to 0.04 $\mathrm{\AA}$. The dynamical matrix for each $q$-point in the BZ was constructed by a Fourier transformation of the force constants, calculated for the $\Gamma$-point and for the BZ boundaries. Phonon-mode frequencies and atomic displacement patterns for each $q$-point were obtained as eigenvalues and eigenvectors of the dynamical matrices. The results for the A-$\Gamma$-M directions and wavenumbers up to 200 cm$^{-1}$ are presented in Fig. \[fig:dispersion\].
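The last post-processing step of the force-constant method is generic: the real-space force-constant blocks are Fourier-transformed into a mass-weighted dynamical matrix whose eigenvalues give the squared phonon frequencies. The sketch below is a minimal illustration of that step on a toy one-atom spring model; the function names and the toy force constants are assumptions made for illustration and bear no relation to the actual YMnO$_3$ calculation.

```python
import numpy as np

def dynamical_matrix(q, force_constants, lattice_vectors, masses):
    """Fourier transform of real-space force constants, mass-weighted.

    force_constants : dict mapping a lattice translation (l, m, n) to a
                      (3N x 3N) force-constant block, N = atoms per cell
    lattice_vectors : (3, 3) array with primitive lattice vectors as rows
    masses          : length-N array of atomic masses
    """
    n_atoms = len(masses)
    D = np.zeros((3 * n_atoms, 3 * n_atoms), dtype=complex)
    for (l, m, n), phi in force_constants.items():
        R = l * lattice_vectors[0] + m * lattice_vectors[1] + n * lattice_vectors[2]
        D += phi * np.exp(1j * np.dot(q, R))
    inv_sqrt_m = np.repeat(1.0 / np.sqrt(masses), 3)   # one weight per Cartesian component
    return D * np.outer(inv_sqrt_m, inv_sqrt_m)

def phonon_modes(q, force_constants, lattice_vectors, masses):
    """Eigenvalues give squared frequencies, eigenvectors give displacement patterns."""
    w2, vecs = np.linalg.eigh(dynamical_matrix(q, force_constants, lattice_vectors, masses))
    return np.sign(w2) * np.sqrt(np.abs(w2)), vecs

# Toy check: one atom with nearest-neighbor springs on a simple cubic lattice.
a = np.eye(3)
fc = {(0, 0, 0): 6.0 * np.eye(3)}
for d in [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
    fc[d] = -1.0 * np.eye(3)
freqs, _ = phonon_modes(2 * np.pi * np.array([0.5, 0.0, 0.0]), fc, a, np.array([1.0]))
print(freqs)   # zero at the zone center (acoustic), maximal at the zone boundary
```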
As we have already pointed out, the absorption strength significantly increases on heating. This is typical for difference-frequency absorption. Such a process involves the annihilation of one quasi-particle (phonon or magnon) with frequency $\omega_{1}$ and the creation of another quasi-particle with a higher frequency $\omega_{2}$. The dielectric resonance then occurs at the frequency $\omega_{diel}=\omega_{2}-\omega_{1}$. This process can involve excitations from the whole BZ provided that the total wave vector is conserved. The contribution of the parts of the dispersion branches with the highest density of states is expected to dominate. A high number of available states is found mainly at the flat parts of the bands close to the BZ boundaries, as was observed, for example, in MgO.[@komandin09] Obviously, such a process is strongly temperature dependent, as it is related to the population of the excitations with frequency $\omega_{1}$, which follows Bose-Einstein statistics. At low temperatures the population of the levels which we study is close to zero and the differential absorption practically vanishes. It becomes more probable when the energy level is thermally populated at higher temperatures. This is in qualitative agreement with our observations.
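The temperature factor invoked here is simply the Bose-Einstein occupation of the annihilated quasi-particle. A few lines of NumPy reproduce the trend plotted as the dotted line in Fig. \[Fig3\] for the 66[$\,\mbox{cm}^{-1}$]{} level quoted in the text; this is a numerical illustration only.

```python
import numpy as np

def bose_einstein(wavenumber_cm, temperature_K):
    """Thermal occupation n(omega, T) of a level specified in cm^-1."""
    k_B = 0.695  # Boltzmann constant in cm^-1 per kelvin
    x = wavenumber_cm / (k_B * np.asarray(temperature_K, dtype=float))
    return 1.0 / np.expm1(x)

# Occupation of the 66 cm^-1 level: negligible at low T, grows steeply on heating,
# mirroring the increase of the differential-absorption strength.
for T in (10, 70, 150, 300):
    print(T, bose_einstein(66.0, T))
```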
The differential transitions at the BZ boundary are possible only between phonons with the same symmetry and if the total wave vector is conserved (i.e. the transition must be vertical in the wave-vector space). The broad absorption around 90[$\,\mbox{cm}^{-1}$]{} seen in $\varepsilon_{a}$ spectra (Fig. \[Fig5\]b) can be explained by differential multiphonon absorption. Phonons near 60 and 150[$\,\mbox{cm}^{-1}$]{} at the A-point of BZ are polarized in the hexagonal plane (marked as *a* in Fig. \[fig:dispersion\]) and their difference gives the frequency $\omega_{diel3}$ = 90[$\,\mbox{cm}^{-1}$]{}, as observed.
However, the two bands seen in the $\varepsilon_{c}$ spectra around $\omega_{diel1}$ = 40[$\,\mbox{cm}^{-1}$]{} and $\omega_{diel2}$ = 100[$\,\mbox{cm}^{-1}$]{} cannot be explained by multiphonon absorption. The frequency of the *c*-polarized phonons at the BZ edge is higher than 100[$\,\mbox{cm}^{-1}$]{}. This means that the population of such phonons should be much lower than that of the *a*-polarized phonon at 60[$\,\mbox{cm}^{-1}$]{}. For this reason the strength of the differential multiphonon absorption in the $\varepsilon_{c}$ spectra should be weaker than in the $\varepsilon_{a}$ spectra. Moreover, within such a hypothesis, a continuous absorption band would be expected in the spectra due to the large number of *c*-polarized phonons at the M-point (see the scheme in Fig. \[fig:dispersion\]). This is in contradiction with the experimental results presented in Fig. \[Fig5\].
We assume the existence of paramagnons near the M-point, and in this case a differential paramagnon-phonon absorption with several maxima can be obtained. Moreover, because of the similar Bose-Einstein factor for the paramagnon close to 70[$\,\mbox{cm}^{-1}$]{} and phonon near 60[$\,\mbox{cm}^{-1}$]{} at the A-point, the absorptions observed in both $\varepsilon_{a}$ and $\varepsilon_{c}$ should have comparable strengths. This fits well with the experiment. The frequency $\omega_{diel1}$ increases on heating (Fig. \[Fig3\]) presumably due to the softening of the paramagnon branch with increasing temperature. The increase of the oscillator strength $\Delta\varepsilon\omega_{diel1}^{2}$ of the mode observed in Fig. 3 is compatible with the temperature increase of the Bose-Einstein factor: this is demonstrated by the dotted line which shows the expected population increase of an energy level at 66[$\,\mbox{cm}^{-1}$]{} (i.e. the frequency of paramagnon at **q**$_\textrm{BZE}$).
Conclusions
===========
The THz and FTIR transmission spectra of hexagonal YMnO$_{3}$ clearly revealed two kinds of excitations of different nature, which exist below the polar phonon frequencies. The sharp AFM resonance band observed near 40[$\,\mbox{cm}^{-1}$]{} at low temperatures broadens upon heating and disappears close to $T_{N}$. This resonance is the main contributor to the magnetic permeability $\mu_a$. Additional broad excitations were observed in the frequency range 40–100[$\,\mbox{cm}^{-1}$]{} in the dielectric permittivity spectra in both the AFM and paramagnetic phases. Our theoretical explanation of the activation of these excitations in the THz dielectric spectra is based on two-particle differential processes, schematically shown in Fig. \[fig:dispersion\]. The resonance observed in the $\varepsilon_{a}$ spectra is caused by differential phonon absorption at the A-point of the BZ. The two broad absorption bands in the $\varepsilon_{c}$ spectra were described as differential phonon-paramagnon processes. The absorption strength of these excitations in the THz spectra increases on heating due to the growing population of paramagnons and phonons with temperature. This is possible in the paramagnetic phase owing to strong short-range spin correlations within the hexagonal planes of YMnO$_{3}$. The processes we observe in YMnO$_{3}$, where the linear magnetoelectric coupling is forbidden, are clearly different from the one responsible for the appearance of electromagnons in multiferroics with spin-induced ferroelectricity.[@pimenov06; @sushkov07; @pimenov08; @kida09] Multiphonon absorption is allowed by symmetry in all dielectric systems, while paramagnon-phonon absorption can be expected only in paramagnetic systems with a strong short-range magnetic order (e.g. in hexagonal manganites). Magnon-phonon absorption should also be detectable in all magnetically ordered systems (FM, AFM, ferrimagnets, etc.) with relatively high critical temperatures. In such conditions the magnons at the Brillouin zone edge may become sufficiently populated to allow multiparticle effects in the spectra. This may stimulate further THz and FIR studies of other magnetically polarizable systems.
[**Acknowledgements**]{}
The authors thank M. Mostovoy for valuable discussions. This work was supported by the Czech Science Foundation (Project No. 202/09/0682), by AVOZ10100520, and by the Young Investigators Group Program of the Helmholtz Association (Contract VH-NG-409). The contribution of Ph.D. student V.G. has been supported by projects 202/09/H041 and SVV-2011-263303. R.V.P. acknowledges the support by the RFBR (Project No. 09-02-00070). The support of the Jülich Supercomputing Center is gratefully acknowledged.
[10]{}
A. Pimenov, A.A. Mukhin, V.Yu. Ivanov, V.D. Travkin, A.M. Balbashov, and A. Loidl, Nature Phys. **2**, 97-100 (2006).
A.B. Sushkov, R. V. Aguilar, S. Park, S.-W. Cheong, and H.D. Drew, Phys. Rev. Lett. **98**, 027202 (2007).
V.G. Baryakhtar and I.E. Chupis, Sov. Phys.-Solid State **11**, 2628 (1970).
A. Pimenov, A.M. Shuvaev, A.A. Mukhin, and A. Loidl, J. Phys.: Condens. Matter **20**, 434209 (2008).
N. Kida, Y. Takahashi, J.S. Lee, R. Shimano, Y. Yamasaki, Y. Kaneko, S. Miyahara, N. Furukawa, T. Arima, and Y. Tokura, J. Opt. Soc. Amer. B **26**, A35-A51 (2009).
A.M. Shuvaev, A.A. Mukhin and A. Pimenov, J. Phys.: Condens. Matter **23**, 113201 (2011).
N. Kida, D. Okuyama, S. Ishiwata, Y. Taguchi, R. Shimano, K. Iwasa, T. Arima, and Y. Tokura, Phys. Rev. B **80**, 220406(R) (2009).
D.I. Khomskii, J. Magn. Magn. Mater. **306**, 1 (2006).
D. Khomskii, Physics **2**, 20 (2009).
Th. Lottermoser, D. Meier, R.V. Pisarev, and M. Fiebig, Phys. Rev. B **80**, 100101 (2009).
J.S. Lee, N. Kida, S. Miyahara, Y. Takahashi, Y. Yamasaki, R. Shimano, N. Furukawa, and Y. Tokura, Phys. Rev. B **79**, 180403(R) (2009).
P. Rovillain, M. Cazayous, Y. Gallais, M-A. Measson, A. Sacuto, H. Sakata, and M. Mochizuki, Phys. Rev. Lett. **107**, 027202 (2011).
R. Valdés Aguilar, M. Mostovoy, A.B. Sushkov, C.L. Zhang, Y.J. Choi, S.-W. Cheong, and H.D. Drew, Phys. Rev. Lett. **102**, 047203 (2009).
M. Mochizuki, N. Furukawa, and N. Nagaosa, Phys. Rev. Lett. **104**, 177206 (2010).
M. Cazayous, Y. Gallais, A. Sacuto, R. de Sousa, D. Lebeugle and D. Colson, Phys. Rev. Lett. **101**, 037601 (2008).
G. Komandin, V. Torgashev, A. Volkov, O. Porodinkov, I. Spektor, and A. Bush, Phys. Sol. State **52**, 734 (2010).
D. Talbayev, S.A. Trugman, S. Lee, H.T. Yi, S.-W. Cheong, and A.J. Taylor, Phys. Rev. B **83**, 094403 (2011).
A.S. Gibbs, K.S. Knight, and P. Lightfoot, Phys. Rev. B **83**, 094111 (2011).
E. Bertaut and M. Mercier, Phys. Lett. **5**, 27 (1963).
T. Chatterji, S. Ghosh, A. Singh, L.P. Regnault, and M. Rheinstädter, Phys. Rev. B **76**, 144406 (2007).
M. Fiebig, D. Fröhlich, K. Kohn, St. Leute, Th. Lottermoser, V.V. Pavlov, and R.V. Pisarev, Phys. Rev. Lett. **84**, 5620-5623 (2000).
R.R. Birss, *Symmetry and Magnetism*, North-Holland, 1967.
M. Fiebig, Th. Lottermoser, D. Fröhlich, A.V. Goltsev, and R.V. Pisarev, Nature **419**, 818-820 (2002).
A.V. Goltsev, R. V. Pisarev, Th. Lottermoser, and M. Fiebig, Phys. Rev. Lett. **90**, 177204 (2003).
L.D. Landau and E.M. Lifshitz, *Electrodynamics of Continuous Media,* 2nd ed., Pergamon, 1984.
M. Fiebig, V.V. Pavlov, and R.V. Pisarev, J. Opt. Soc. Amer. B **22**, 96 (2005).
T. Choi, Y. Horibe, H.T. Yi, Y.J. Choi, Wu Weida, and S.-W. Cheong, Nature Mat. **9**, 253-258 (2010).
S. Lee, A. Pirogov, M. Kang, K.-H. Jang, M. Yonemura, T. Kamiyama, S.-W. Cheong, F. Gozzo, N. Shin, H. Kimura, Y. Noda, and J.-G. Park, Nature **451**, 805-809 (2008).
Y. Aikawa, T. Katsufuji, T. Arima, and K. Kato, Phys. Rev. B **71**, 184418 (2005).
M. Zaghrioui, V. Ta Phuoc, R.A. Souza, and M. Gervais, Phys. Rev. B **78**, 184305 (2008).
H. Fukumura, S. Matsui, H. Harima, K. Kisoda, T. Takahashi, T. Yoshimura, and N. Fujimura, J. Phys.: Condens. Matter **19**, 365239 (2007).
M. Poirier, F. Laliberté, L. Pinsard, and A. Revcolevschi Phys. Rev. B **76**, 174426 (2007).
T. Penney, P. Berger, and K. Kritiyakirana, J. Appl. Phys. **40**, 1234-1235 (1969).
V. Goian, S. Kamba, C. Kadlec, D. Nuzhnyy, P. Kužel, J. Agostino Moreira, A. Almeida and P.B. Tavares, Phase Transitions **83**, 931 (2010).
T.J. Sato, S.-H. Lee, T. Katsufuji, M. Masaki, S. Park, J.R.D. Copley, and H. Takagi, Phys. Rev. B **68**, 014432 (2003).
S. Petit, F. Moussa, M. Hennion, S. Pailhès, L. Pinsard-Gaudart, and A. Ivanov, Phys. Rev. Lett. **99**, 266604 (2007).
J. Park, J.-G. Park, G.S. Jeon, H.-Y. Choi, Ch. Lee, W. Jo, R. Bewley, K.A. McEwen, and T.G. Perring, Phys. Rev. B **68**, 104426 (2003).
B. Roessli, S.N. Gvasaliya, E. Pomjakushina, and K. Conder, JETP Letters **51**, 287 (2005).
F. Demmel, T. Chatterji, Phys. Rev. B **76**, 212402 (2007).
S. Pailhès, X. Fabrèges, L.P. Régnault, L. Pinsard-Godart, I. Mirebeau, F. Moussa, M. Hennion, and S. Petit, Phys. Rev. B **79**, 134409 (2009).
P. Kužel, H. Němec, F. Kadlec, and C. Kadlec, Opt. Express **18**, 15338 (2010).
S.L. Dexheimer, *THz Spectroscopy: Principles and Applications* (Boca Raton, FL: CRC Press).
H. Yamagichi, T. Fujita, T. Shinozaki, H. Sigie, and K. Kohn, *Ferrites: Proceedings of the Eighth International Conference on Ferrites (ICF 8)*, Kyoto and Tokyo, Japan 2000.
P.A. Sharma, J.S. Ahn, N. Hur, S. Park, S.B. Kim, S. Lee, J.-G. Park, S. Guha, and S.-W. Cheong, Phys. Rev. Lett. **93**, 177202 (2004).
G.M. Nedlin, Sov. Phys. Solid St. **6**, 2165 (1965).
Yu.G. Pashkevich, V.L. Sobolev, S.A. Fedorov, and A.V. Eremenko, Phys. Rev. B **51**, 15898 (1995).
D. Sa, R. Valentí, and C. Gros, Eur. Phys. Journ. B **14**, 301 (2000).
G.F. Koster, J.O. Dimmock, R.G. Wheeler, and H. Statz, *Properties of the Thirty Two Point Groups*, MIT, 1963.
I.M. Vitebskii, N.M. Lavrinenko, and V.L. Sobolev, J. Magn. Magn. Mater. **97**, 263 (1991).
C.J. Fennie and K.M. Rabe, Phys. Rev. B **72**, 100103(R) (2005).
J.P. Perdew, and A. Zunger, Phys. Rev. B **23**, 5048 (1981).
G. Kresse, and J. Hafner, Phys. Rev. B **47**, 558 (1993).
G. Kresse, and J. Furthmüller, Phys. Rev. B **54**, 11169 (1996).
P.E. Blöchl, Phys. Rev. B **50**, 17953 (1994).
G. Kresse, and D. Joubert, Phys. Rev. B **59**, 1758 (1999).
V.I. Anisimov, F. Aryasetiawan, and A.I. Lichtenstein, J. Phys.: Condens. Matter **9**, 767 (1997).
S.L. Dudarev, G.A. Botton, S.Y. Savrasov, C.J. Humphreys, and A.P. Sutton, Phys. Rev. B **57**, 1505 (1998).
J.E. Medvedeva, V.I. Anisimov, M.A. Korotin, O.N. Mryasov, A.J. Freeman, J. Phys.: Condens. Matter **12**, 4947 (2000).
B.B. van Aken, T.T.M. Palstra, A. Filippetti, and N.A. Spaldin, Nat. Mater. **3**, 164 (2004).
D. Alfè, Comp. Phys. Commun. **180**, 2622 (2009).
K. Kunc and R.M. Martin, Phys. Rev. Lett. **48**, 406 (1982).
G.A. Komandin, O.E. Porodinkov, I.E. Spector, and A.A. Volkov, Phys. Sol. State **51**, 2045 (2009).
---
abstract: 'We propose a new estimator for the change point parameter in a dynamic high dimensional graphical model setting. We show that the proposed estimator retains sufficient adaptivity against plug-in estimates of the edge structure of the underlying graphical models, in order to yield an $O(\psi^{-2})$ rate of convergence of the change point estimator in the integer scale. This rate is preserved while allowing high dimensionality as well as a diminishing jump size $\psi,$ provided $s\log^{3/2}(p\vee T)=o\big(\surd(Tl_T)\big).$ Here $s,p,T$ and $l_T$ represent a sparsity parameter, model dimension, sampling period and the separation of the change point from its parametric boundary, respectively. Moreover, since the rate of convergence is free of $s,p$ and logarithmic terms of $T,$ it allows the existence of a limiting distribution valid in the high dimensional setting, which is then derived. The method does not assume an underlying Gaussian distribution. Theoretical results are supported numerically with Monte Carlo simulations.'
bibliography:
- 'meanchange.bib'
---
\
Abhishek Kaul[^1], Hongjin Zhang, Konstantinos Tsampourakis\
Department of Mathematics and Statistics,\
Washington State University, Pullman, WA 99164, USA.
Introduction {#sec:intro}
============
A large body of literature has been developed on the recovery of large network structures such as high dimensional graphical models. Such networks play a vital role in a variety of problems, e.g., as representations of interactions between a set of nodes, as aids in classification problems, and as a basis for classical dimension reduction techniques such as factor analysis, among several other uses. Owing to their versatility, such models have been adopted in a variety of scientific fields, for instance in neuroimaging, e.g., [@cribben2012dynamic], where graphical models obtained from fMRI data are utilized to understand neurological network structures, and in microbiome studies, e.g., [@kaul2017structural], where such models have been used for geographical classification of persons based on their gut microbiome observations.
An undirected graphical model is a network where an edge between the $i^{th}$ and $j^{th}$ nodes represents a non-zero $(i,j)^{th}$ entry of the underlying precision matrix, which is defined as the inverse of the covariance matrix. The reasoning for this network representation arises from the well-known classical multivariate theory result that, under a Gaussian distribution, a zero valued $(i,j)^{th}$ entry of the precision matrix characterizes independence between the $i^{th}$ and $j^{th}$ nodes, conditioned on all remaining nodes. Accordingly, the statistical problem in the recovery of a graphical model is equivalent to the estimation of the underlying precision matrix. In the high dimensional setting, where the dimension $(p)$ of this matrix might diverge faster than the number of observations $(T)$, it is now well understood that, given sparsity assumptions on the parameters (edge structure), these matrices can be consistently estimated by several different methods in the literature, e.g., neighborhood selection and its variants \[[@meinshausen2006high], [@friedman2008sparse] and [@yuan2010high] among others\], where edges are recovered locally for each node, or by direct global approaches that estimate the precision or covariance matrix via $\ell_1$ regularization or $\ell_1$ minimization \[[@banerjee2008model], and [@cai2011constrained] among others\].
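For concreteness, the following is a minimal sketch of the neighborhood selection idea cited above: each node is lasso-regressed on the remaining nodes and nonzero coefficients are read off as candidate edges. The fixed regularization level and the OR-symmetrization rule are illustrative choices made here, not prescriptions taken from the cited papers.

```python
import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_selection(X, lam=0.1):
    """Edge recovery by node-wise lasso regressions (neighborhood selection).

    X   : (T, p) data matrix with one observation per row
    lam : lasso penalty (in practice chosen by cross-validation or theory)
    Returns a boolean (p, p) adjacency matrix, symmetrized with the OR rule.
    """
    T, p = X.shape
    adjacency = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        fit = Lasso(alpha=lam, fit_intercept=False).fit(X[:, others], X[:, j])
        adjacency[j, others] = fit.coef_ != 0
    return adjacency | adjacency.T

# Small synthetic run: independent coordinates should yield (mostly) no edges.
rng = np.random.default_rng(0)
print(neighborhood_selection(rng.standard_normal((200, 10))).sum())
```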
In many problems of current scientific interest the assumption of stationarity of a network over an extended sampling period could be unrealistic and may lead to flawed inference. In the recent past, dynamic graphical models capable of fitting evolving networks in a piecewise manner, characterized via one or more change points, have received attention in the literature as a tractable relaxation of the fairly rigid assumption of stationarity. In a Gaussian graphical model (GGM) setting, [@kolar2012estimating] consider a fused lasso regularization together with a neighborhood selection approach. [@angelosante2011sparse] propose dynamic programming in conjunction with neighborhood selection. Likelihood based approaches together with suitable regularization have been proposed in [@kolar2010estimating], [@gibberd2017multiple], and [@keshavarz2018sequential], where the latter take a detection perspective. In a similar setup, [@londschien2019change] provide a correction method to alleviate bias due to missingness. [@atchade2017scalable] provide a majorize-minimize algorithm allowing efficient implementation of the likelihood approach in comparison to a brute force search. Studies on other types of dynamic network structures are also available in the literature. [@roy2017change] provide a likelihood based approach for Markov random fields with a single change point. [@bhattacharjee2018change] and [@wang2018optimal] consider stochastic block models and provide least squares and cusum based methodologies, respectively.
In this article we consider the following change point model, where the change is in the covariance matrix, or equivalently the precision matrix, of a $p$-dimensional distribution,
$$\label{model:dggm}
z_t=\begin{cases}
w_t, & t=1,\dots,\lfloor T\tau^0\rfloor,\\
x_t, & t=\lfloor T\tau^0\rfloor+1,\dots,T,
\end{cases}$$
where the observed variable is $z_t\in\R^p,$ $t=1,...,T.$ The variables $w_t, x_t\in\R^p$ are independent and zero mean subgaussian random variables (r.v.’s), with unknown covariance matrices $\Si$ and $\D,$ respectively. These variables are not directly observed, in the sense that the change point parameter $\tau^0\in (0,1)$ is unknown; thus it is a priori unknown from which of the two subgaussian distributions a realization $z_t$ arises. We allow the dimension $p$ to diverge potentially at an exponential rate, i.e., $\log p=o(T^{\delta}),$ while making a sparsity assumption to be specified in the following section. The parameters of interest here are the change point $\tau^0$ and the unknown matrices $\Si$ and $\D,$ with the former being of main interest in this article. The homogeneous case of ‘no change,’ where model (\[model:dggm\]) reduces to $T$ i.i.d. observations of a subgaussian distribution with covariance $\Si,$ occurring at $\tau^0=1,$ is disallowed, i.e., our objective throughout the article is that of estimation and inference on $\tau^0$ when it exists.
To aid further discussion of the main objectives of this article, we require additional notation. For any $p\times p$ matrix $W,$ define a $(p-1)$-dimensional vector $W_{-i,j}$ as the $j^{th}$ column of $W$ with the $i^{th}$ entry removed, and similarly define $W_{i,-j}.$ Also define a $(p-1)\times (p-1)$ matrix $W_{-i,-j}$ as the sub-matrix of $W$ with the $i^{th}$ row and the $j^{th}$ column removed. Now define the following parameter vectors in $\R^{p-1},$
$$\label{def:muga}
\mu^0_{(j)}= \Si_{-j,-j}^{-1}\Si_{-j,j},\qquad \g^0_{(j)}= \D_{-j,-j}^{-1}\D_{-j,j},\qquad j=1,\dots,p.$$
The parameter vectors $\mu^0_{(j)}$ and $\g^0_{(j)}$ play a fundamental role in neighborhood selection and in the underlying graphical model structure: $\mu^{0}_{(j)k}=0$ (the $k^{th}$ component of $\mu^0_{(j)}$) if and only if the $(j,k)^{th}$ entry of the corresponding precision matrix is zero, which indicates the absence of an edge between these nodes in the corresponding graph. From a technical perspective, these coefficient vectors also play an important role since they can be used to orthogonalize the $j^{th}$ and the remaining components of the underlying distribution, i.e., if $w_t\in\R^p$ is a realization from a zero mean distribution with covariance $\Si,$ then it is straightforward to see that the vector $w_{t,-j}$ and the residual $(w_{tj}-w_{t,-j}^T\mu^0_{(j)})$ are uncorrelated (or independent under Gaussianity). Given their fundamental role in characterizing the network structure, we use these coefficients to characterize the magnitude of the jump across the two networks. Specifically, we let,
$$\label{def:jumpsize}
\eta^0_{(j)}=\mu^0_{(j)}-\g^0_{(j)},\quad j=1,\dots,p,\qquad \xi_{2,2}=\Big(\sum_{j=1}^p\big\|\eta^0_{(j)}\big\|_2^2\Big)^{1/2},$$
and let $\psi$ denote a normalized version of $\xi_{2,2}.$ The quantities $\xi_{2,2}$ and $\psi$ are representative of the jump size, with the latter playing a central role in our analysis. For $\xi_{2,2}$ or $\psi$ to be non-zero, it is necessary that there are either edge connectivity changes in the underlying graphs or changes in the magnitude of conditional dependence between nodes. An example of a structural change to which the measure $\xi_{2,2}$ is insensitive is the following. Let the two underlying covariance matrices be $\Si$ and $\D=c\Si,$ for any constant $0<c<\iny;$ in other words, the covariance structures are different but the correlation structure is identical, and thus the underlying graph structures are identical. It is straightforward to observe that in this case $\mu^0_{(j)}=\g^0_{(j)},$ $j=1,...,p,$ and that the jump size $\xi_{2,2}=\psi=0.$ This characterization is somewhat similar to that of [@kolar2012estimating], who define the jump size as $\min_{j}\|\eta^0_{(j)}\|_2.$ The advantage of using the measure $\xi_{2,2}$ or $\psi$ over $\min_{j}\|\eta^0_{(j)}\|_2$ is that the latter requires changes in each and every row and column of the precision matrix, whereas the former allows for sub-block changes of the precision matrix pre and post the change point. Other metrics of the jump size have also been utilized in the literature, e.g. $\|\Si-\D\|_F$ in [@gibberd2017multiple], which is comparable to the $\xi_{2,2}$ defined above.
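A few lines of NumPy make the definitions in (\[def:muga\]) and (\[def:jumpsize\]) concrete; the example also illustrates that $\xi_{2,2}$ picks up a change confined to a sub-block of the precision structure. Only $\xi_{2,2}$ is computed here (the normalization leading to $\psi$ is not reproduced), and the matrices are arbitrary illustrative choices.

```python
import numpy as np

def node_coefficients(S):
    """Row j of the returned (p, p-1) array is S_{-j,-j}^{-1} S_{-j,j}."""
    p = S.shape[0]
    coefs = np.zeros((p, p - 1))
    for j in range(p):
        idx = np.delete(np.arange(p), j)
        coefs[j] = np.linalg.solve(S[np.ix_(idx, idx)], S[idx, j])
    return coefs

def jump_size_xi22(Sigma, Delta):
    """xi_{2,2}: root of the summed squared l2 norms of the eta^0_(j)."""
    eta = node_coefficients(Sigma) - node_coefficients(Delta)
    return np.sqrt(np.sum(eta ** 2))

# A change confined to one off-diagonal pair of the covariance/precision structure
# still produces a strictly positive jump size.
p = 5
Sigma = np.eye(p)
Delta = np.eye(p)
Delta[0, 1] = Delta[1, 0] = 0.4
print(jump_size_xi22(Sigma, Delta))
```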
The proposed estimator is described in the following. Consider any $z_t\in\R^p,$ $t=1,...,T,$ let $z=(z_1,z_2,...,z_T)^T\in\R^{T\times p},$ and consider any $\mu_{(j)},\g_{(j)}\in\R^{p-1},$ $j=1,...,p,$ any $\tau\in(0,1),$ $T\ge2,$ $p\ge 3.$[^2] Let $\mu$ and $\g$ be the concatenations of the $\mu_{(j)}$'s and $\g_{(j)}$'s, respectively. Then define the least squares loss,
$$\label{eq:Q}
Q(z,\tau,\mu,\g)=\sum_{t=1}^{\lfloor T\tau\rfloor}\sum_{j=1}^p\big(z_{tj}-z_{t,-j}^T\mu_{(j)}\big)^2 + \sum_{t=\lfloor T\tau\rfloor+1}^{T}\sum_{j=1}^p\big(z_{tj}-z_{t,-j}^T\g_{(j)}\big)^2.$$
Now suppose the availability of nuisance estimates $\h\mu_{(j)},$ $\h\g_{(j)}\in\R^{p-1},$ $j=1,...,p,$ of the coefficient vectors defined in (\[def:muga\]), such that the following bound is satisfied,
$$\label{eq:optimalmeans}
\max_{1\le j\le p}\Big(\big\|\h\mu_{(j)}-\mu^0_{(j)}\big\|_2\vee\big\|\h\g_{(j)}-\g^0_{(j)}\big\|_2\Big)\le c_u(1+\nu^2)\Big\{\frac{s\log (p\vee T)}{T}\Big\}^{1/2},$$
with probability at least $1-o(1).$ Here $l_T$ is a sequence separating the change point parameter from the boundary of its parametric space, i.e. $\lfloor T\tau^0\rfloor\wedge(T-\lfloor T\tau^0\rfloor)\ge Tl_T$ (see Condition A). The quantities $\si,\ka,\nu$ are other parameters defined in Section \[sec:mainresults\] (see Condition B). Under this setup, define the plug-in estimator $\tilde\tau$ as,
$$\label{est:optimal}
\tilde\tau:=\tilde\tau(\h\mu,\h\g)=\arg\min_{\tau\in(0,1)} Q(z,\tau,\h\mu,\h\g).$$
Clearly, the estimator $\tilde\tau$ utilizes estimates of the potentially high dimensional edge parameters $\mu^0_{(j)}$ and $\g^0_{(j)},$ $j=1,...,p.$ The inference results of Section \[sec:mainresults\] are agnostic about the choice of the estimator used to obtain these nuisance estimates, as long as they satisfy sufficient estimation properties, mainly the $\ell_2$ error bound (\[eq:optimalmeans\]). In Section 3 we present an estimator for these nuisance parameters which possesses the required theoretical guarantees, thereby making the methodology feasible in practice.
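A schematic NumPy implementation of the plug-in criterion (\[eq:Q\])–(\[est:optimal\]) on the integer grid is given below. The boundary trimming fraction stands in for the separation sequence $l_T$, and the synthetic check at the end plugs in crude nuisance coefficients; both are illustrative assumptions rather than part of the paper's procedure.

```python
import numpy as np

def plugin_change_point(z, mu_hat, gamma_hat, trim=0.05):
    """Plug-in least-squares change point estimate on the integer scale.

    z         : (T, p) data matrix
    mu_hat    : (p, p-1) pre-change nuisance estimates, row j ~ hat mu_(j)
    gamma_hat : (p, p-1) post-change nuisance estimates, row j ~ hat gamma_(j)
    trim      : boundary fraction excluded from the search
    """
    T, p = z.shape
    res_pre = np.zeros(T)    # node-wise squared residuals under the pre-change fit
    res_post = np.zeros(T)   # ... and under the post-change fit
    for j in range(p):
        others = np.delete(np.arange(p), j)
        res_pre += (z[:, j] - z[:, others] @ mu_hat[j]) ** 2
        res_post += (z[:, j] - z[:, others] @ gamma_hat[j]) ** 2
    cum_pre = np.cumsum(res_pre)                        # first-segment residual sums
    cum_post_tail = np.cumsum(res_post[::-1])[::-1]     # second-segment residual sums
    candidates = np.arange(int(T * trim), int(T * (1 - trim)))
    Q = np.array([cum_pre[k - 1] + cum_post_tail[k] for k in candidates])
    return candidates[np.argmin(Q)]                     # estimate of floor(T * tau^0)

# Synthetic check: node 0 acquires a conditional dependence on node 1 after t = 260.
rng = np.random.default_rng(1)
T, p, k0 = 400, 4, 260
z = rng.standard_normal((T, p))
z[k0:, 0] += 0.9 * z[k0:, 1]
mu_hat = np.zeros((p, p - 1))
gamma_hat = np.zeros((p, p - 1))
gamma_hat[0, 0] = 0.9
print(plugin_change_point(z, mu_hat, gamma_hat))
```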
Section \[sec:mainresults\] studies the behavior of $\tilde\tau$ in a high dimensional setup $p>>T.$ Our first main result obtains the rate of convergence of the proposed estimator $\tilde\tau,$ where we show that $\big(\lfloor T\tilde\tau\rfloor-\lfloor T\tau^0\rfloor\big)=O_p(\psi^{-2}),$ under weak conditions. It is important to note that this rate is free of the dimension parameters $s,p$ and other logarithmic terms of the sampling period $T.$ In a change point model with a mean change of a random vector, a rate of convergence of this form is known to be optimal. Since the same is not known in a graphical model setting, we do not refer to the presented rate as optimal, although it is natural to suspect that this is indeed the case. Furthermore, the given rate is preserved while allowing a diminishing jump size $\psi,$ under the sufficient condition that the model dimensions satisfy $s\log^{3/2}(p\vee T)=o\big(\surd(Tl_T)\big).$ Here $s$ is a sparsity parameter to be defined in Section \[sec:mainresults\]. To the best of our knowledge, the rate of convergence described above is sharper, and the minimum jump size assumption significantly weaker, than those available for existing estimators in the literature on dynamic high dimensional network models. For example, [@kolar2012estimating] provide a rate of convergence $O_p(\psi^{-2}p\log T)$ under a minimum jump size assumption of order $O(p\log T\big/ T)^{1/2},$ [@gibberd2017multiple] provide a rate $O_p(\psi^{-2}p^2\log p)$ under a jump assumption of order $O\{p \surd(\log p^{\beta/2}/T)\},$ and [@roy2017change] provide a rate of $O_p\big(\psi^{-2}\log(pT)\big)$ under a jump assumption of order $O(\log pT)^{1/4}.$ The former two articles are in a Gaussian graphical model setting, whereas the latter is in a Markov random field setting.
The sharper rate of convergence of the proposed estimator in the high dimensional setting allows for the existence of a limiting distribution. Our second main result derives this limiting distribution as the distribution of the minimizer of an asymmetric and off-center Brownian motion. This enables inference on $\tau^0,$ when it exists, i.e., the construction of an asymptotically valid confidence interval for $\tau^0$ when $\tau^0<1.$ Here the asymptotics are in the high dimensional sense, where the sample size $T$ diverges and the dimension $p$ can be fixed or diverging, at a potentially exponential rate in $T.$ This limiting distribution has been studied in [@bai1997estimation], where it was obtained as the limiting distribution of the change point estimate in the classical fixed $p$ linear regression setting. The density function of this distribution is also available in the same article, thus enabling straightforward computation of quantiles, which can in turn be utilized to construct asymptotically valid confidence intervals.
An indirect but informative comparison here is with recent results of inference on regression coefficients in high dimensional regression models. For estimation of a component of the regression vector, it is known that the least squares estimator itself is not sufficiently adaptive against nuisance parameter estimates (estimates of remaining regression vector components) to allow for an optimal rate of convergence. Instead, certain corrections to the least squares loss or its first order moment equations, such as debiasing ([@van2014asymptotically]) or orthogonalization ([@belloni2011inference], [@chernozhukov2015valid],[@belloni2017confidence] and [@ning2017general]) induce sufficient adaptivity against nuisance estimates and thereby allow optimal estimation of the target parameter. The results of this article show that in the context of change point estimation, the plugin least squares estimator (\[est:optimal\]) itself possesses the required adaptivity against potentially high dimensional nuisance estimates, in order to allow for $O(\psi^{-2})$ estimation of the change point $\lfloor T\tau^0\rfloor,$ provided the nuisance parameters are estimated with sufficient precision.
We conclude this section with a short note on the notations used in this article. The following section provides a description of the statistical behavior of $\tilde\tau$ defined above.
***Notation***: Throughout the paper, $\R$ represents the real line. For any vector $\delta,$ the norms $\|\delta\|_1,$ $\|\delta\|_2,$ $\|\delta\|_{\iny}$ represent the usual 1-norm, Euclidean norm, and sup-norm respectively. For any set of indices $U\subseteq\{1,2,...,p\},$ let $\delta_U=(\delta_j)_{j\in U}$ represent the subvector of $\delta$ containing the components corresponding to the indices in $U.$ Let $|U|$ and $U^c$ represent the cardinality and complement of $U.$ We denote by $a\wedge b=\min\{a,b\},$ and $a\vee b=\max\{a,b\},$ for any $a,b\in\R.$ The notation $\lfloor \cdotp \rfloor$ is the usual greatest integer function. We use a generic notation $c_u>0$ to represent universal constants that do not depend on $T$ or any other model parameter. In the following this constant $c_u$ may be different from one term to the next. All limits in this article are with respect to the sample size $T\to\iny.$ We use $\Rightarrow$ to represent convergence in distribution.
Assumptions and Main Results {#sec:mainresults}
============================
In this section we state all sufficient conditions assumed to obtain our main theoretical results regarding the plugin least squares estimator $\tilde\tau$ of (\[est:optimal\]). Specifically, an $O(\psi^{-2})$ rate of convergence of $\lfloor T\tilde\tau\rfloor,$ and its limiting distribution.
Condition A provides control on the rate of divergence of $s,p,$ and on the rate of convergence of $\psi$ and $l_T,$ which are model parameters that can vary with $T.$ Condition A(iii) can be viewed from two perspectives. First, it allows a vanishing jump size, $\psi\to 0,$ when $s\log^{3/2}(p\vee T)=o\big(\surd (Tl_T)\big).$ Alternatively, $s,p$ can be allowed to diverge at an arbitrary rate provided Condition A(iii) is preserved, i.e., provided the jump size is large enough to compensate for the increasing dimensions $s,p,$ so as to preserve Condition A(iii) (also see Remark \[rem:dimension.restriction\]). To the best of our knowledge, this is the weakest condition assumed on the jump size in the dynamic networks literature, where the comparable counterpart of $\psi$ is typically assumed to be diverging. The assumption of sparsity on the coefficient vectors $\mu^0_{(j)}$ and $\g^0_{(j)}$ is equivalent to assuming that both the pre- and post-change network structures of $\Si^{-1}$ and $\D^{-1}$ are such that each node has at most $s$ connecting edges out of a total of $(p-1)$ possible edges. This is a direct extension of the same assumption in the static setting, see, e.g., [@yuan2010high]. In the case where $s,p,l_T$ are fixed, the rate required of the minimum jump size $\psi$ in Part (iii) can be replaced with $ T^{\big(\frac{1}{2}-b\big)}\psi\to \iny,$ for some $0<b<(1/2).$
A subgaussian assumption is a significant relaxation of the assumption of a Gaussian distribution; e.g., it allows asymmetric distributions such as a centered mixture of two Gaussian distributions. While this assumption is fairly standard in the high dimensional regression literature, it is much rarer in the graphical model literature, where a Gaussian distribution has often been assumed. Our methodology allows this more general setup since $\tilde\tau$ is based on least squares, as opposed to the likelihood based approaches which are more common in the graphical models setting. More specifically, this condition serves three purposes. Firstly, it allows the residual process in the estimation of $\tau^0$ to converge weakly to the distribution (\[def:Zr\]). Secondly, under a suitable choice of regularization parameters, it allows estimation of the nuisance parameters at the rates of convergence presented in (\[eq:optimalmeans\]). Finally, in addition to other technical uses, part (ii) of this condition also provides an upper bound on the components of $\mu^0_{(j)}$ and $\g^0_{(j)},$ $j=1,...,p,$ which is necessary for our analysis (Lemma \[lem:condnumberbound\]). For the presentation of this section we are agnostic about the choice of the estimator of the nuisance parameters and instead require the following condition.
This condition is a mild requirement on the nuisance estimates. It allows the nuisance estimates $\h\mu_{(j)}$ and $\h\g_{(j)}$ to be irregular in the sense that they are only required to be in a $\{s\log (p\vee T)/T\}^{1/2}$ order neighborhood of the corresponding unknown vectors $\mu^0_{(j)}$ and $\g^0_{(j)},$ $j=1,...,p,$ in the $\ell_2$ norm. These nuisance estimates are not required to possess any oracle properties, i.e., selection mistakes in the identification of the sign of the coefficient vectors do not influence the rate of convergence of the eventual change point estimate $\tilde\tau.$ Accordingly, we do not need to assume irrepresentable conditions on the covariance matrices $\Si$ and $\D,$ such as those assumed in [@kolar2012estimating], nor minimum magnitude conditions on the coefficient vectors $\mu^0_{(j)},$ $\g^0_{(j)},$ which are assumptions that typically guarantee perfect selection of the components of the vectors $\mu^0_{(j)}$ and $\g^0_{(j)},$ $j=1,...,p.$
A little more notation is necessary to proceed further. For any $\mu,\g\in\R^{p(p-1)},$ and any $\tau\in (0,1),$ define, \[def:cU\] $${{\cal U}}(z,\tau,\mu,\g)=Q(z,\tau,\mu,\g)-Q(z,\tau^{0},\mu,\g),$$ where $\tau^{0}\in(0,1)$ is the unknown change point parameter and $Q(z,\tau,\mu,\g)$ is the least squares loss defined in (\[eq:Q\]). Also, for any non-negative sequences $0\le v_T\le u_T\le 1,$ define the collection, \[def:setG\] $${{\cal G}}(u_T,v_T)=\big\{\tau\in(0,1);\;\; Tv_T\le \big|\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor\big|\le Tu_T\big\}.$$ We begin with a lemma that provides a uniform lower bound on the expression ${{\cal U}}(z, \tau,\h\mu,\h\g)$ over the collection ${{\cal G}}(u_T,v_T).$ This lower bound forms the basis of the argument used to obtain the desired rate of convergence for the proposed estimator.
\[lem:mainlowerb\] Suppose Condition A, B and C hold and let $0\le v_T\le u_T$ be any non-negative sequences. For any $0<a<1,$ let $c_{a1}=4\cdotp 48c_{a2},$ with $c_{a2}\ge \surd(1/a),$ and c\_[a3]{}=c\_u{}. Additionally, let $u_T\ge c_{a1}^2\si^4\big/(T\phi^2),$ then for $T\ge 2,$ we have, \[eq:12\] \_[[U]{}]{}(z,,,)\^[2]{}\_[2,2]{} with probability at least $1-3a-o(1).$
Lemma \[lem:mainlowerb\] is a tool that allows us to obtain the rate of convergence of the change point estimator $\tilde\tau.$ An observation that provides some insight into this connection, and into the adaptivity property of the proposed plug-in least squares estimator, is as follows. Although ${{\cal U}}(z,\tau,\h\mu,\h\g)$ involves the r.v.’s $z_t,$ which are $p$-dimensional, and the estimates $\h\mu_{(j)},$ and $\h\g_{(j)},$ which approximate the $(p-1)$-dimensional unknown parameters $\mu^0_{(j)}$ and $\g^0_{(j)},$ $j=1,...,p,$ only up to the rate $O\big(\surd(s\log p/T)\big),$ the eventual lower bound of Lemma \[lem:mainlowerb\] is free of the dimensions $s,p$ under the assumed conditions. In a heuristic sense, this suggests that the plug-in least squares estimator of the change point behaves as if the nuisance parameters $\mu^0_{(j)}$ and $\g^0_{(j)}$ were known. This is indeed the property that allows the rate of convergence presented in the following theorem to hold. Further insight on the inner workings of this result is provided in Remark \[rem:kolmogorov\], stated after the following result.
\[thm:optimalapprox\] Suppose Conditions A, B and C hold, and for any $0<a<1,$ let $c_{a1},c_{a2}$ and $c_{a3}$ be as defined in Lemma \[lem:mainlowerb\]. Then, for $T$ sufficiently large, we have the following.\
(i) When $\psi\to 0$ we have, $(1+\nu^2)^{-1}(\si^2\vee\phi)^{-2}\ka^2\psi^2\big|\lfloor T\tilde\tau\rfloor-\lfloor T\tau^0\rfloor
\big|\le c_u^2c_{a1}^2,$ with probability at least $1-3a-o(1).$ Equivalently, in this case we have, $\psi^2\big(\lfloor T\tilde\tau\rfloor-\lfloor T\tau^0\rfloor
\big)=O_p(1).$\
(ii) When $\psi\not\to 0,$ we have, $\big|\lfloor T\tilde\tau\rfloor-\lfloor T\tau^0\rfloor
\big|\le c_{a3}^2,$ with probability at least $1-3a-o(1).$ Equivalently, in this case we have, $\big(\lfloor T\tilde\tau\rfloor-\lfloor T\tau^0\rfloor\big)=O_p(1).$
Theorem \[thm:optimalapprox\] provides the rate of convergence of the proposed estimator $\tilde\tau.$ The main idea of the proof of this result is a contradiction argument, as follows. Using Lemma \[lem:mainlowerb\] recursively, we show that any value of $\lfloor T\tau\rfloor$ lying outside an $O(c_{a3}^2)$ neighborhood of $\lfloor T\tau^0\rfloor$ satisfies ${{\cal U}}(z,\tau,\h\mu,\h\g)>0,$ with probability at least $1-3a-o(1).$ Noting that, by definition of $\tilde\tau,$ we have ${{\cal U}}(z,\tilde\tau,\h\mu,\h\g)\le 0,$ then yields the desired result. The complete argument is provided in Appendix \[sec:appA\]. Two important remarks regarding this result follow.
\[rem:optapprox\] [Theorem \[thm:optimalapprox\] provides the rate of convergence for $\lfloor T\tilde\tau\rfloor$ on the integer time scale. The analogous result on a continuous time scale can be obtained as follows. Note that for any $\tau\ge \tau^0,$ we have the deterministic inequality, $T(\tau-\tau^0)-1\le \big(\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor\big)\le T(\tau-\tau^0)+1.$ In the case where $\psi\to 0,$ an application of this inequality together with Part (i) of Theorem \[thm:optimalapprox\] leads to $T\psi^{2}(\tilde\tau-\tau^0)=O_p(1).$ In the case where $\psi\not\to 0,$ the same inequality used with Part (ii) of Theorem \[thm:optimalapprox\] yields $T(\tilde\tau-\tau^0)=O_p(1).$]{}
\[rem:dimension.restriction\] [It may be observed that Theorem \[thm:optimalapprox\] is obtained without any explicit restriction on the rate of divergence of the sparsity $s$ and the dimension $p$ with respect to the sampling period $T.$ The result holds even for $s,p$ diverging at an arbitrary rate with respect to $T,$ as long as the jump size $\psi$ is large enough to compensate, so that Condition A(iii) is preserved. This is however not the complete picture. Effectively, this result has passed the burden of an additional assumption controlling the divergence of $s,p$ on to Condition C on the nuisance estimates. In order to obtain feasible estimates of the nuisance parameters we shall later require an additional assumption of the form $s\log p=o(Tl_T)$ (see, Condition A$'$(i) and Theorem \[thm:alg1.nearoptimal\] of Section \[sec:nuisance\]). ]{}
\[rem:kolmogorov\][Here we provide some partial technical insight as to how Lemma \[lem:mainlowerb\] and Theorem \[thm:optimalapprox\] are able to eliminate the dimensional parameters $s,p$ and other logarithmic terms of $T$ to obtain the stated rate of convergence. The behavior of the estimator $\tilde\tau$ is in part controlled by a stochastic noise term of the form, $$\sup_{\tau\in{{\cal G}}(u_T,v_T);\, \tau\ge\tau^0}\ \xi_{2,2}^{-1}\Big|\sum_{t=\lfloor T\tau^0\rfloor+1}^{\lfloor T\tau\rfloor} \sum_{j=1}^p \vep_{tj}z_{t,-j}^T\h\eta_{(j)}\Big|,\qquad \h\eta_{(j)}=\h\mu_{(j)}-\h\g_{(j)},$$ and its mirroring counterpart. Here $\vep_{tj}$ is as defined in (\[def:epsilons\]). Note here the need for uniformity over $\tau$ of this stochastic term, since this forms a critical part of the analysis. A large proportion of the literature upper bounds such uniform stochastic terms using the usual subexponential tail bounds (or similar), supplying uniformity over $\tau$ by means of union bounds over the at most $T$ distinct values of $\lfloor T\tau\rfloor.$ This approach forces logarithmic terms of $T$ to necessarily be present in the upper bound for this stochastic term, which then passes over to the eventual bound for the change point estimate. Additionally, the dimensional parameters $s,p$ also often show up, depending upon how one chooses to control the nuisance estimates $\h\eta_{(j)}.$ Instead of following this approach, we use a novel application of Kolmogorov’s inequality (Theorem \[thm:kolmogorov\]) on partial sums in order to control such stochastic terms with sharper upper bounds. This is done by first using the triangle inequality, $$\begin{aligned} \sup_{\tau\in{{\cal G}}(u_T,v_T);\, \tau\ge\tau^0} \xi_{2,2}^{-1}\Big|\sum_{t=\lfloor T\tau^0\rfloor+1}^{\lfloor T\tau\rfloor} \sum_{j=1}^p \vep_{tj}z_{t,-j}^T\h\eta_{(j)}\Big|&\le& \sup_{\tau\in{{\cal G}}(u_T,v_T);\, \tau\ge\tau^0} \xi_{2,2}^{-1}\Big|\sum_{t=\lfloor T\tau^0\rfloor+1}^{\lfloor T\tau\rfloor} \sum_{j=1}^p \vep_{tj}z_{t,-j}^T\eta_{(j)}^0\Big|\\ &&+\sup_{\tau\in{{\cal G}}(u_T,v_T);\, \tau\ge\tau^0} \xi_{2,2}^{-1}\Big|\sum_{t=\lfloor T\tau^0\rfloor+1}^{\lfloor T\tau\rfloor} \sum_{j=1}^p \vep_{tj}z_{t,-j}^T(\h\eta_{(j)}-\eta_{(j)}^0)\Big|.\end{aligned}$$ The first term on the rhs can now be controlled at an optimal rate $O(\surd{T})$ (see, Lemma \[lem:optimalcross\] and Lemma \[lem:optimalsqterm\]), without any additional logarithmic terms of $T,$ using Kolmogorov’s inequality. Moreover, under Conditions A and C, the second term on the rhs of the above inequality can also be controlled with the same upper bound, despite the high dimensionality and without the dimensional parameters $s,p$ being involved in the upper bound (see, Lemma \[lem:nearoptimalcross\], Lemma \[lem:term123\] and the proof of Lemma \[lem:mainlowerb\]). This provides the desired sharper control on the stochastic noise terms and consequently allows for the rate of convergence presented in Theorem \[thm:optimalapprox\]. This is of course a simplified explanation, meant only to provide intuition.]{}
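For reference, the classical form of Kolmogorov’s maximal inequality being alluded to is the following (stated here for independent, mean zero, square integrable summands $X_1,...,X_n;$ the precise variant invoked as Theorem \[thm:kolmogorov\] may differ in its exact formulation): for any $\la>0,$ $$\Pr\Big(\max_{1\le k\le n}\Big|\sum_{t=1}^{k}X_t\Big|\ge \la\Big)\ \le\ \frac{1}{\la^2}\sum_{t=1}^{n}{\rm E}\,X_t^2.$$ The maximum over the partial sum index is controlled directly, which is what supplies uniformity over $\lfloor T\tau\rfloor$ without a union bound, and hence without incurring additional logarithmic factors in $T.$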
The $O(\psi^{-2})$ rate of the proposed change point estimator $\lfloor T\tilde\tau\rfloor$ from Theorem \[thm:optimalapprox\] makes a limiting distribution available, and thus we now shift our focus to performing inference on the change point $\tau^0.$ For this purpose, let $W_1(r),$ and $W_2(r)$ be two independent Brownian motions defined on $[0,\iny).$ For any constants $0<\si_1,\si_2,\si_{1}^*,\si_{2}^*<\iny,$ define, \[def:Zr\] $$Z(r)=\begin{cases}
\dfrac{\si_2^{2}}{\si_1^{2}}\,|r|-2\dfrac{\si_2^{*}}{\si_1^{*}}\,W_1(r) & {\rm if}\ r>0,\\
0 & {\rm if}\ r=0,\\
|r|-2W_2(r) & {\rm if}\ r<0.
\end{cases}$$
This process is presented in [@bai1997estimation], where it is defined in a linear regression context. It is an asymmetric variant of the well-known process $\{|r|-2W(r)\},$ which often arises in the context of change point estimators, see, e.g., [@yao1987approximating], [@bai1994], among others. In the following, for any $t=1,...,T,$ we define, \[def:epsilons\] $$\vep_{tj}=\begin{cases}
z_{tj}-z_{t,-j}^T\mu^0_{(j)}, & t=1,...,\lfloor T\tau^0\rfloor,\\
z_{tj}-z_{t,-j}^T\g^0_{(j)}, & t=\lfloor T\tau^0\rfloor+1,...,T.
\end{cases}$$
We shall also require the following additional conditions to proceed further.
The additional assumption (\[eq:addasm\]) is made to ensure convergence of the variance of the proposed estimator $\tilde\tau,$ which in turn ensures the stability of the limiting distribution. This is only a mild additional requirement, since the assumed convergence is on a positive sequence that is already guaranteed to be bounded, i.e., the sequence $\sum_{j=1}^p \eta^{0T}_{(j)}\Si_{-j,-j}\eta^{0}_{(j)}$ is bounded above and below by constant multiples of $\xi_{2,2}^2,$ where the bounds follow from the bounded eigenvalues of the covariance matrix $\Si$ (Condition B(ii)), and similarly for the post-change covariance matrix $\D.$
The convergence in Condition E(i) is closely related to Condition D(ii). The need to assume this condition despite the availability of Condition D(ii) is due to the block dependence in the double array $\{\z_{tj}\}.$ Although for any $t\ne t'$ the variables $\vep_{tj}$ and $\vep_{t'k}$ are independent, within a fixed block $t$ the variables $\z_{tj}$ and $\z_{tk}$ may be correlated. It is due to the same underlying dependence that Condition E(ii) needs to be assumed. If all $\z_{tj}$'s are pairwise independent, then Condition E becomes redundant: in this case, the first requirement follows from Condition D(ii) and the second requirement follows directly from the classical functional central limit theorem. In the case where $p$ is fixed, Condition E(ii) again becomes redundant, since we have independence over the index $t$ and thus the assumption follows from the functional central limit theorem.
\[thm:limitingdist\] Suppose Conditions A, B, C, D and E hold. Additionally assume the rate condition \[eq:rateextra\], i.e., the second rate restriction in Part (iii) of Condition A with its right hand side strengthened to $o(1).$ Then the estimator $\tilde\tau$ of (\[est:optimal\]) obeys the following limiting distribution, $$T(\si_1^{*})^{-2}\si_1^{4}\psi^2(\tilde\tau-\tau^0)\Rightarrow \operatorname*{arg\,min}_r Z(r),$$ where $Z(r)$ is as defined in (\[def:Zr\]).
Theorem \[thm:limitingdist\] establishes the second main result of this article. It provides the limiting distribution of $\tilde\tau,$ whose density function is readily available in [@bai1997estimation], thereby allowing straightforward computation of quantiles of this distribution. The only difference between the assumption (\[eq:rateextra\]) and the second rate restriction in Part (iii) of Condition A is that the rhs has been tightened to $o(1)$ from $O(1).$ This slightly stronger requirement for the existence of the limiting distribution is consistent with classical results such as those in [@bai1994] and [@bai1997estimation]. Feasible computation of a confidence interval requires the parameters $\si_1,\si_1^*,\si_2,$ and $\si_2^*;$ these can be computed on the binary partition of the data induced by $\tilde\tau,$ and the details of these estimates are provided in Section \[sec:numerical\].
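As an illustration of how such quantiles can be approximated in practice, the following is a minimal Monte Carlo sketch (not part of the software associated with this article) that simulates $\operatorname*{arg\,min}_r Z(r)$ over a truncated, discretized grid; the function name, grid width `M`, step `h` and replication count are illustrative choices, and the arguments `ratio_var` and `ratio_sd` stand for the ratios $\si_2^2/\si_1^2$ and $\si_2^*/\si_1^*$ entering (\[def:Zr\]).

``` r
# Monte Carlo sketch: approximate draws of argmin_r Z(r), with Z(r) as in
# (def:Zr), by discretizing r on [-M, M] with step h.  All names and tuning
# values here are illustrative, not taken from the paper's software.
simulate_argmin_Z <- function(nrep = 2000, M = 50, h = 0.05,
                              ratio_var = 1, ratio_sd = 1) {
  r <- seq(h, M, by = h)                              # positive half of the grid
  replicate(nrep, {
    W1 <- cumsum(rnorm(length(r), sd = sqrt(h)))      # Brownian motion for r > 0
    W2 <- cumsum(rnorm(length(r), sd = sqrt(h)))      # independent BM for r < 0
    Zpos <- ratio_var * r - 2 * ratio_sd * W1         # Z(r), r > 0 branch
    Zneg <- r - 2 * W2                                # Z evaluated at -r (r < 0 branch)
    grid <- c(-rev(r), 0, r)
    vals <- c(rev(Zneg), 0, Zpos)
    grid[which.min(vals)]                             # location of the minimum
  })
}

## e.g., a symmetric 95% quantile for a confidence interval:
## draws   <- simulate_argmin_Z(ratio_var = sig2sq / sig1sq,
##                              ratio_sd  = sig2star / sig1star)
## c_alpha <- quantile(abs(draws), 0.95)
```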
[The results of Theorem \[thm:optimalapprox\] and Theorem \[thm:limitingdist\] above can be viewed as stating that $\tilde\tau,$ while utilizing $2p$ estimated vectors $\h\mu_{(j)}$ and $\h\g_{(j)},$ $j=1,...,p,$ each of dimension $p-1,$ still behaves as if these nuisance parameters were known. This is despite allowing high dimensionality in keeping with Condition A(iii), and a potentially diminishing jump size $\psi.$ This is effectively the adaptation property as described in [@bickel1982adaptive], in a high dimensional setting and within a change point parameter context.]{}
The results of this subsection allow $\lfloor T\tilde\tau\rfloor$ to provide an $O(\psi^{-2})$ approximation of $\lfloor T\tau^0\rfloor,$ and in turn provide a limiting distribution with which to perform inference on the unknown change point. However, all results rely on the a priori availability of nuisance estimates $\h\mu_{(j)},$ and $\h\g_{(j)},$ $j=1,...,p,$ satisfying Condition C. Without this availability, these results remain infeasible to implement in practice. In the following section we develop an algorithmic estimator that obtains these nuisance estimates and theoretically guarantees Condition C for the same, consequently making the methodology of this section viable in practice.
Construction of a feasible $O(\psi^{-2})$ estimator of $\lfloor T\tau^0\rfloor$ {#sec:nuisance}
===============================================================================
To discuss the methods and results of this section we require more notation. For any $\tau\in(0,1),$ such that $\lfloor T\tau\rfloor\ge 1,$ consider ordinary lasso estimates of the regression of each column of the observed variable $z$ on the rest, for each of the two binary partitions induced by $\tau.$ Specifically, for each $j=1,...,p,$ define \[est:lasso\] $$\begin{aligned}
\h\mu_{(j)}(\tau) &=& \operatorname*{arg\,min}_{\mu_{(j)}\in\R^{p-1}} \Big\{\sum_{t=1}^{\lfloor T\tau\rfloor} \big(z_{tj}- z_{t, -j}^T\mu_{(j)}\big)^2 + \la_j\big\|\mu_{(j)}\big\|_1 \Big\},\\
\h\g_{(j)}(\tau) &=& \operatorname*{arg\,min}_{\g_{(j)}\in\R^{p-1}} \Big\{\sum_{t=\lfloor T\tau\rfloor+1}^{T} \big(z_{tj}- z_{t, -j}^T\g_{(j)}\big)^2 + \la_j\big\|\g_{(j)}\big\|_1 \Big\}.\end{aligned}$$
To develop a feasible estimator of $\tau^0,$ recall two aspects from Section \[sec:mainresults\]. (a) The missing links required to implement the estimator $\tilde\tau(\h\mu,\h\g)$ of Section \[sec:mainresults\] are the edge parameter vector estimates $\h\mu_{(j)}$ and $\h\g_{(j)},$ $j=1,....,p.$ (b) These edge estimates must satisfy the sufficient Condition C in order for the results of Section \[sec:mainresults\] to apply. We shall fulfill these nuisance estimate requirements using the estimators in (\[est:lasso\]), implemented in a twice iterated manner, where the iterations alternate between the change point parameter $\tau$ and the edge parameters $\mu_{(j)}$ and $\g_{(j)}.$
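The following is a minimal sketch (assuming the data are held in a $T\times p$ matrix `z`; function and variable names are ours, not taken from the accompanying software) of the node-wise lasso estimates (\[est:lasso\]) on the two partitions induced by a given $\tau,$ using the R-package *glmnet*. Note that *glmnet* internally rescales its penalized loss, so its `lambda` argument corresponds to the regularizer $\la_j$ only up to a sample-size dependent factor.

``` r
library(glmnet)

# Node-wise lasso estimates (est:lasso) on the two binary partitions induced
# by tau.  Returns p x (p-1) coefficient matrices: mu_hat (pre) and g_hat (post).
nodewise_lasso <- function(z, tau, lambda) {
  T_ <- nrow(z); p <- ncol(z); Ttau <- floor(T_ * tau)
  mu_hat <- matrix(0, p, p - 1)
  g_hat  <- matrix(0, p, p - 1)
  for (j in 1:p) {
    fit_pre  <- glmnet(z[1:Ttau, -j, drop = FALSE], z[1:Ttau, j],
                       lambda = lambda, intercept = FALSE, standardize = FALSE)
    fit_post <- glmnet(z[(Ttau + 1):T_, -j, drop = FALSE], z[(Ttau + 1):T_, j],
                       lambda = lambda, intercept = FALSE, standardize = FALSE)
    mu_hat[j, ] <- as.numeric(as.matrix(coef(fit_pre)))[-1]    # drop intercept row
    g_hat[j, ]  <- as.numeric(as.matrix(coef(fit_post)))[-1]
  }
  list(mu = mu_hat, g = g_hat)
}
```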
The twice iterative approach of the estimator to be considered is as follows. Very rough edge estimates $\check\mu_{(j)}=\h\mu_{(j)}(\check\tau),$ and $\check\g_{(j)}=\h\g_{(j)}(\check\tau),$ $j=1,...,p,$ computed using a nearly arbitrarily chosen $\check\tau\in(0,1)$ (see, the initializing condition of Algorithm 1 below), possess sufficient information so that an update of the change point parameter, i.e., $\h\tau=\tilde\tau(\check\mu,\check\g),$ moves the nearly arbitrary starting point $\check\tau$ into a near optimal neighborhood, $O_p\big(\psi^{-2}T^{-1}\log(p\vee T)\big),$ in this single step. With this near optimal estimate $\h\tau$ available, we shall show that another update $\h\mu_{(j)}=\h\mu_{(j)}(\h\tau),$ and $\h\g_{(j)}=\h\g_{(j)}(\h\tau)$ satisfies all requirements of Condition C. This provides sufficient ingredients to implement the estimator of Section \[sec:mainresults\], i.e., to perform a second update of the change point $\tilde\tau=\tilde\tau(\h\mu,\h\g),$ which moves $\h\tau$ from the near optimal neighborhood $O_p\big(\psi^{-2}T^{-1}\log(p\vee T)\big)$ into an $O_p\big(\psi^{-2}T^{-1}\big)$ neighborhood of $\tau^0.$ This is a direct consequence of Theorem \[thm:optimalapprox\]. Additionally, Theorem \[thm:limitingdist\] also provides the limiting distribution of this second update $\tilde\tau,$ thereby allowing inference on $\tau^0.$ Thus, in performing these updates (two each of the change point and the edge parameters) we have taken a $\check\tau$ from a nearly arbitrary neighborhood of $\tau^0,$ and deposited it in an $O_p\big(\psi^{-2}T^{-1}\big)$ neighborhood of $\tau^0,$ with an intermediate $\h\tau$ that lies in a near optimal neighborhood. This process is stated as Algorithm 1 below and is described visually in Figure \[fig:schematic\]. Theorem \[thm:alg1.nearoptimal\] and the subsequent corollaries provide the precise description of the statistical performance of the proposed Algorithm 1 and the required sufficient conditions.
[Figure \[fig:schematic\]: schematic of Algorithm 1, illustrating the progression $\lfloor T\check\tau\rfloor \to (\check\mu,\check\g) \to \lfloor T\h\tau\rfloor \to (\h\mu,\h\g) \to \lfloor T\tilde\tau\rfloor,$ from an arbitrary initial neighborhood of $\lfloor T\tau^0\rfloor,$ to a near optimal neighborhood, to an optimal neighborhood.]
------------------------------------------------------------------------

[**Algorithm 1:**]{} $O(\psi^{-2})$ estimation of $\lfloor T\tau^0\rfloor:$

------------------------------------------------------------------------

(Initialize) Choose any $\check\tau\in (0,1)$ satisfying Condition F.

(Step 1) Obtain $\check\mu_{(j)}=\h\mu_{(j)}(\check\tau),$ and $\check\gamma_{(j)}=\h\gamma_{(j)}(\check\tau),$ $j=1,...,p,$ and update the change point as,

$$\h\tau=\operatorname*{arg\,min}_{\tau\in(0,1)}Q(z,\tau,\check\mu,\check\g).$$

(Step 2) Obtain $\h\mu_{(j)}=\h\mu_{(j)}(\h\tau),$ and $\h\gamma_{(j)}=\h\gamma_{(j)}(\h\tau),$ $j=1,...,p,$ and perform another update,

$$\tilde\tau=\operatorname*{arg\,min}_{\tau\in(0,1)}Q(z,\tau,\h\mu,\h\g).$$

(Output) $\tilde\tau$

------------------------------------------------------------------------
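To make the two updates concrete, the following is a minimal sketch of Algorithm 1 under the same assumptions as the node-wise lasso sketch above (a $T\times p$ data matrix `z`, illustrative function names); the change point update minimizes the least squares loss $Q(z,\tau,\mu,\g)$ over a grid of candidate values of $\tau.$

``` r
# Least squares criterion Q(z, tau, mu, g) for fixed edge estimates mu, g
# (each a p x (p-1) matrix of node-wise coefficients).
Q_loss <- function(z, tau, mu, g) {
  T_ <- nrow(z); p <- ncol(z); Ttau <- floor(T_ * tau)
  loss <- 0
  for (j in 1:p) {
    res_pre  <- z[1:Ttau, j] -
                z[1:Ttau, -j, drop = FALSE] %*% mu[j, ]
    res_post <- z[(Ttau + 1):T_, j] -
                z[(Ttau + 1):T_, -j, drop = FALSE] %*% g[j, ]
    loss <- loss + sum(res_pre^2) + sum(res_post^2)
  }
  loss
}

# Change point update: grid minimizer of Q over candidate values of tau.
update_tau <- function(z, mu, g, grid = seq(0.05, 0.95, by = 0.01)) {
  grid[which.min(sapply(grid, function(tau) Q_loss(z, tau, mu, g)))]
}

# Algorithm 1: two alternating updates starting from check_tau (e.g. 0.5),
# reusing the nodewise_lasso helper sketched earlier.
algorithm1 <- function(z, lambda, check_tau = 0.5) {
  step1     <- nodewise_lasso(z, check_tau, lambda)   # rough edge estimates
  hat_tau   <- update_tau(z, step1$mu, step1$g)       # near optimal change point
  step2     <- nodewise_lasso(z, hat_tau, lambda)     # refined edge estimates
  tilde_tau <- update_tau(z, step2$mu, step2$g)       # O(psi^{-2}) update
  list(hat_tau = hat_tau, tilde_tau = tilde_tau)
}
```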
To complete the description of Algorithm 1, we provide the weak sufficient condition required from the initializing choice $\check\tau.$
The first requirement of Condition F is clearly innocuous: all it requires is a marginal separation of the chosen initializer from the boundaries of the parametric space of the change point. It is satisfied by any $\check\tau\in [c_{u1},c_{u2}]\subset(0,1).$ While at face value the second requirement may seem stringent, this is not the case, and it is satisfied by nearly any arbitrarily chosen $\check\tau\in(0,1)$ when $T$ is large. To see this, first consider a simplistic case where $l_T\ge c_u>0,$ i.e., the true change point $\tau^0$ lies in some bounded subset of $(0,1),$ and $s\le c_u,$ i.e., the sparsity parameter is bounded above by a constant. Then requirement (ii) of Condition F reduces to $|\lfloor T\check\tau\rfloor-\lfloor T\tau^0\rfloor|=O(T^{1-k}).$ Recall here that the constant $k$ in Condition F can be any arbitrary value (not depending on $T$) close to zero. Thus the $O(T^{(1-k)})$ neighborhood of $\tau^0$ covers a larger and larger proportion of the entire parametric space $(0,1)$ as $T$ increases. This can be further illustrated by noting that the disallowed case $k=0$ covers the entire parametric space of $\tau^0.$ Here we also refer to [@Kaul2019], where a similar initializer condition has been discussed in detail. This makes it a very weak requirement, and when $T$ is large it will be satisfied by nearly any arbitrarily chosen value. More generally, the power of Algorithm 1 is that it starts with any $\lfloor T\check\tau\rfloor$ in a very wide neighborhood of $\lfloor T\tau^0\rfloor,$ $\h\tau$ of Step 1 then moves it into a near optimal neighborhood, and finally $\tilde\tau$ of Step 2 moves it into a sharper neighborhood, i.e., $O(s^{-1}T^{(1-k)})$-nbd. $\longrightarrow^{\rm Step 1}$ near optimal $O_p(\psi^{-2}\log p)$-nbd. $\longrightarrow^{\rm Step 2}$ $O_p(\psi^{-2})$-nbd.
While the working mechanism of Algorithm 1, namely its ability to provide an $O(\psi^{-2})$ rate of convergence from a nearly arbitrary neighborhood in two iterations, is clear, in the following we provide further arguments regarding the initializer, in case the reader remains unconvinced about its choice from a practical perspective. In order to find a suitable initializer in an $O(s^{-1}T^{(1-k)})$ neighborhood, at a given sampling period $T,$ one may consider using a preliminary, equally spaced, coarse grid of values in $(0,1),$ and choosing the value that best fits the data. If the number of points in this preliminary grid is larger than $O(sT^k),$ then one will arrive at a theoretically valid initializer. A similar preliminary coarse grid search has also been heuristically utilized in [@roy2017change] in a different model setting. However, based on extensive numerical experiments, we have observed that even this preliminary coarse grid search is redundant. In Section \[sec:numerical\] we present results with this initializer fixed at $\check\tau=0.5,$ irrespective of the underlying true change point $\tau^0,$ which is allowed to vary across the entire parametric space $(0,1).$ Note here that in the absence of any information on $\tau^0,$ the choice $\check\tau=0.5$ forms the worst, or farthest, initializer in a mean sense, and all other values of $\check\tau$ only serve to make the estimation easier. Despite this worst possible choice, the numerical results remain indistinguishable from those obtained when $\check\tau$ is chosen with a preliminary coarse grid search. The reader may numerically confirm these observations using the simple software associated with this article. While this observation may at first appear counterintuitive, it is exactly the behavior described in the previous paragraph, i.e., Condition F is weak enough to be satisfied by nearly any arbitrarily chosen $\check\tau.$
The results to follow provide a precise description of the above discussion. We begin with an additional condition that is largely a weaker version of Condition A, and which is sufficient to obtain near optimality of $\h\tau$ of Step 1 of Algorithm 1. Note that the near optimal rate of convergence to follow matches the best available in the current literature in the assumed setting. However, we shall obtain this rate under a much weaker assumption on the jump size $\psi,$ which is in fact even weaker than that assumed on $\psi$ earlier in Condition A of Section \[sec:mainresults\].
The following theorem shows that $\lfloor T\h\tau\rfloor$ of Step 1 of Algorithm 1 lies in an $O\big(\psi^{-2}\log (p\vee T)\big)$ neighborhood of $\tau^0.$ The validity of $\tilde\tau$ of Step 2 in terms of properties provided in Section \[sec:mainresults\] shall rely critically on this result.
\[thm:alg1.nearoptimal\] Suppose Conditions A$'$, B and F hold. Let $\h\tau$ be the change point estimate of Step 1 of Algorithm 1. Then for $T$ sufficiently large, we have, \[eq:32\] $$(1\wedge\psi^{2})(1+\nu^2)^{-1}(\si^{2}\vee\phi)^{-2}\ka^2\big|\lfloor T\h\tau\rfloor-\lfloor T\tau^0\rfloor\big|\le c_u\log(p\vee T)$$ with probability $1-o(1).$ In other words, $(1\wedge \psi^2)\big(\lfloor T\h\tau\rfloor-\lfloor T\tau^0\rfloor\big)=O\big(\log(p\vee T)\big),$ with probability converging to one.
Theorem \[thm:alg1.nearoptimal\] shows that $\lfloor T\h\tau\rfloor$ of Step 1 of Algorithm 1 satisfies a near optimal bound $O_p\big(\psi^{-2}\log(p\vee T)\big),$ despite the algorithm initializing with any $\lfloor T\check\tau\rfloor$ in an $O(s^{-1}T^{(1-k)})$ neighborhood of $\tau^0.$ This result now allows us to study the behavior of the estimates of the edge parameters and the change point parameter obtained from the second iteration, Step 2 of Algorithm 1. We note here that the properties of these second iteration estimates rely solely on the bound (\[eq:32\]) for $\lfloor T\h\tau\rfloor;$ once this bound is available, the initial edge estimates $\check\mu_{(j)}$ and $\check\g_{(j)},$ $j=1,...,p,$ play no further role. This feature makes Algorithm 1 modular in its construction, in the sense that for Step 2 to yield an estimate $\lfloor T\tilde\tau\rfloor$ that is $O(\psi^{-2})$ in its rate of convergence, it does not require the estimator of Step 1 to be specifically the one currently chosen in Algorithm 1. Alternatively, Step 1 of Algorithm 1 can readily be replaced with any other near optimal estimator available in the literature, i.e., one satisfying a bound $O\big(\psi^{-2}\log (p\vee T)\big)$ with probability $1-o(1).$ This is described below as Algorithm 2.
------------------------------------------------------------------------

[**Algorithm 2:**]{} $O(\psi^{-2})$ estimation of $\lfloor T\tau^0\rfloor:$

------------------------------------------------------------------------

(Step 1) Implement any $\h\tau$ from the literature that satisfies (\[eq:32\]) with probability $1-o(1).$

(Step 2) Obtain $\h\mu_{(j)}=\h\mu_{(j)}(\h\tau),$ and $\h\gamma_{(j)}=\h\gamma_{(j)}(\h\tau),$ $j=1,...,p,$ and perform the update,

$$\tilde\tau=\operatorname*{arg\,min}_{\tau\in(0,1)}Q(z,\tau,\h\mu,\h\g).$$

(Output) $\tilde\tau$

------------------------------------------------------------------------
An estimator from the literature that can be used in Step 1 of Algorithm 2 is that of [@atchade2017scalable], which obeys the near optimal bound of Theorem \[thm:alg1.nearoptimal\]. However, their method is based on a likelihood approach and would thus limit the algorithm to the Gaussian setting; moreover, it requires stronger sufficient conditions on the minimum jump size and the separation sequence $l_T$ for its analytical validity. To the best of our knowledge, there is no available estimator in the current literature that would serve as a replacement for Step 1 of Algorithm 1 under the assumptions of Condition A$'$ (or Condition A) and Condition B.
The following results present the statistical behavior of $\h\mu_{(j)}$ and $\h\g_{(j)},$ $j=1,...,p,$ and $\tilde\tau$ obtained from Step 2 of Algorithm 1 or Algorithm 2. These results show that the edge parameter updates $\h\mu_{(j)}$ and $\h\g_{(j)},$ $j=1,...,p,$ obtained using the near optimal $\h\tau,$ are of a much tighter precision than those of Step 1. In particular, they satisfy all requirements of Condition C. This in turn allows the results of Section \[sec:mainresults\] to apply, yielding a higher precision for $\tilde\tau$ in comparison to that of the change point estimate of Step 1 of Algorithm 1 or Algorithm 2.
\[cor:optimalmeans.step2\] Suppose Conditions A$'$, B and F hold. Let $\h\mu_{(j)},$ and $\h\g_{(j)},$ $j=1,...,p,$ be the edge estimates of Step 2 of Algorithm 1 or Algorithm 2. Then the following two properties hold with probability at least $1-o(1).$\
(i) We have $\h\mu_{(j)}-\mu^0_{(j)}\in{{\cal A}}_{1j},$ and $\h\g_{(j)}-\g^0_{(j)}\in{{\cal A}}_{2j},$ $j=1,...,p$ where ${{\cal A}}_{ij}$ are sets as defined in Condition C.\
(ii) The following bound is satisfied, $$\max_{1\le j\le p}\Big(\big\|\h\mu_{(j)}-\mu^0_{(j)}\big\|_2\vee\big\|\h\g_{(j)}-\g^0_{(j)}\big\|_2\Big)\le c_u(1+\nu^2)\Big\{\frac{s\log(p\vee T)}{T}\Big\}^{\frac{1}{2}}.$$ Consequently, these second iteration edge estimates satisfy all requirements of Condition C.
Corollary \[cor:optimalmeans.step2\] provides the feasibility of Condition C. We are now in a position to appeal to the results of Section \[sec:mainresults\] in order to obtain the statistical performance of $\tilde\tau$ of Step 2 of Algorithm 1 or Algorithm 2. The following corollary is a direct consequence of Theorem \[thm:optimalapprox\] and Theorem \[thm:limitingdist\].
\[cor:final\] Suppose Conditions A, B and F hold, and additionally assume that the model dimensions are sufficiently restricted to satisfy $c_u\ka T^{(1-b)}l_T \ge (\si^2\vee\phi)s\log (p\vee T).$ Then $\tilde\tau$ of Algorithm 1 or Algorithm 2 satisfies the error bounds of Theorem \[thm:optimalapprox\]. Assuming additionally that Conditions D and E and (\[eq:rateextra\]) hold, $\tilde\tau$ obeys the limiting distribution of Theorem \[thm:limitingdist\].
Corollary \[cor:final\] completes the description of the behavior of the proposed Algorithm 1 and Algorithm 2, which are both feasible estimators implementable in practice. This result allows for $O(\psi^{-2})$ estimation of $\lfloor T\tau^0\rfloor,$ and the ability to perform inference. It may be of interest to note that Corollary \[cor:final\] assumes tighter restrictions on the model parameters $\psi,s,p$ (see, Condition A(iii)) in comparison to those required for the near optimality described in Theorem \[thm:alg1.nearoptimal\] (see, Condition A$'$(iii)). This is the only price we have paid to go from an $O\big(\psi^{-2}\log (p\vee T)\big)$ to an $O(\psi^{-2})$ neighborhood of the unknown change point $\lfloor T\tau^0\rfloor.$ Similar additional sufficient restrictions allowing for an optimal rate of convergence, and in turn for inference, in a high dimensional setting have a precedent in the recent literature in the context of inference on a regression coefficient parameter in the presence of high dimensionality, see, e.g., the debiased lasso [@van2014asymptotically], and the orthogonalized score estimators of [@belloni2011inference], [@chernozhukov2015valid], [@belloni2017confidence] and [@ning2017general]. In this context, it has been shown that while the condition $s\log p=o(T)$ is sufficient for near optimal estimation, the tighter constraint $s\log p=o(\surd T)$ is sufficient for an optimal rate and in turn for inference. The distinction between Condition A and Condition A$'$ of Section \[sec:mainresults\] and Section \[sec:nuisance\] can be viewed through a similar lens.
Numerical Results {#sec:numerical}
=================
This section provides an empirical validation of the proposed $\h\tau$ and $\tilde\tau$ of Step 1 and Step 2 of Algorithm 1, respectively. We begin with the description of the simulation design. In all cases considered, the unobserved variables $w_t,x_t$ in model (\[model:dggm\]) are generated as independent, $p$-dimensional, mean zero Gaussian r.v.’s with distinct covariance structures. More precisely, we set $w_t\sim {{\cal N}}(0,\Si),$ $t=1,...,\lfloor T\tau^0 \rfloor$ and $x_t\sim {{\cal N}}(0,\Delta),$ $t=\lfloor T\tau^0\rfloor+1,...,T.$ The covariances $\Si$ and $\D$ are chosen so as to preserve the sparsity of each of these matrices, and the jump size between these two matrices, as the dimension $p$ grows larger. The covariances $\Si$ and $\D$ are constructed as follows. We begin with a Toeplitz type covariance matrix $\G$ whose $(l,m)^{th}$ component is chosen as $\G_{(l,m)}=\rho^{|l-m|^a},$ $l,m=1,...,p.$ We set $\rho=0.5$ and $a=1/4.$[^3] Then we set $\Si=c_1\cdotp A_1\cdotp \G$ and $\D=c_2\cdotp A_2\cdotp \G,$ where $\cdotp$ represents a componentwise product. Here $c_1=2$ and $c_2=1$ are constants that allow the data generating process to have differing variances pre and post the change point. Recall from the discussion in Section \[sec:intro\] that the constants $c_1$ and $c_2$ have no impact on the jump size $\psi$ between the matrices $\Si$ and $\D.$ The matrices $A_1$ and $A_2$ are chosen as $p\times p$ matrices of signs $\{-1,0,1\}.$ These serve to induce sparsity and a varying edge structure between $\Si$ and $\D$ as follows. $A_1$ and $A_2$ are constructed as block diagonal matrices of ones consisting of $(s+1)\times (s+1)$ blocks, and $(2s+1)\times (2s+1)$ blocks, respectively. Furthermore, alternating components of $A_1$ and $A_2$ are switched to a negative sign and to zero, respectively. This yields distinct positive definite matrices $\Si$ and $\D$ with differing variance and edge structures. Moreover, by this construction the sparsity of each row and column of $\Si$ and $\D$ is fixed at $s=5,$ i.e., $|S_{ij}|=5,$ $i=1,2$ and $j=1,...,p,$ where $S_{ij}$ are sets of indices defined in Condition A(i). The normalized jump size $\psi\approx 0.46$[^4] remains fixed as $p$ changes. Examples of the adjacency matrices corresponding to $\Si$ and $\D$ obtained from this construction are illustrated in Figure \[fig:covariance.picture\].
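The following is a minimal sketch of this construction (our reading of the design; the exact alternation pattern and block handling in the accompanying software may differ, and positive definiteness of the resulting matrices should be verified numerically):

``` r
# Sketch of the simulation design: pre- and post-change covariances Sigma and
# Delta built from a Toeplitz-type base matrix and sign/zero block patterns.
make_covariances <- function(p, s = 5, rho = 0.5, a = 1/4, c1 = 2, c2 = 1) {
  G <- rho^(abs(outer(1:p, 1:p, "-"))^a)            # Gamma_(l,m) = rho^{|l-m|^a}
  block_ones <- function(bs) {                      # block diagonal of 1-blocks
    nb <- ceiling(p / bs)
    kronecker(diag(nb), matrix(1, bs, bs))[1:p, 1:p]
  }
  A1 <- block_ones(s + 1)
  A2 <- block_ones(2 * s + 1)
  alt <- (outer(1:p, 1:p, "+") %% 2) == 1           # alternating off-diagonal cells
  A1[alt & A1 == 1] <- -1                           # alternate entries of A1 -> -1
  A2[alt & A2 == 1] <- 0                            # alternate entries of A2 -> 0
  list(Sigma = c1 * A1 * G, Delta = c2 * A2 * G)    # componentwise products
}

## e.g.:  covs <- make_covariances(p = 100); min(eigen(covs$Sigma)$values)
```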
We perform simulations for all combinations of the parameters described in the following. The sampling period $T$ is chosen from the grid $\{200,275,350,425\},$ and the dimension $p$ from $\{100,200,300,400\}.$ The change point $\tau^0$ is chosen from an equally spaced grid of values $\{0.15,...,0.85\}_{10\times 1}.$ Note that this grid contains values from nearly the entire parametric space $(0,1)$ of the change point $\tau^0.$ All computations are carried out in the software [*R*]{} \[[@RCT2017]\], and lasso optimizations of the form (\[est:lasso\]) are carried out via the R-package [*glmnet*]{} \[[@glmnet]\]. In all simulations the initializer for Algorithm 1 is chosen as $\check\tau=0.5,$ which assumes no prior knowledge of the underlying change point parameter.
The regularizers $\la_{j},$ $j=1,...,p,$ for the lasso optimizations of Step 1 and Step 2 of Algorithm 1 are chosen via a five-fold cross-validation over a fixed grid of values, carried out internally in [*glmnet*]{}. This grid is chosen as $\Big\{c\{\log (p\vee T)/T\}^{1/2},...,2\Big\},$ with $c=1.75$ and fifty equally spaced grid points. The separation of this grid from zero is done to avoid overfitting and is consistent with the choice of regularizer prescribed in Theorem \[thm:est.nuisance.para\].
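A minimal sketch of this selection for a single node-wise regression is as follows (variable names are illustrative; recall that *glmnet*'s internal loss scaling means its `lambda` values match $\la_j$ only up to a sample-size dependent factor):

``` r
library(glmnet)

# Five-fold cross validated choice of the regularizer over a fifty-point grid
# from c*{log(p v T)/T}^{1/2} to 2, for one response column y_j and design x_j.
select_lambda <- function(x_j, y_j, p, T_, c = 1.75, npts = 50) {
  lo   <- c * sqrt(log(max(p, T_)) / T_)     # lower end, bounded away from zero
  grid <- seq(lo, 2, length.out = npts)
  cvfit <- cv.glmnet(x_j, y_j, lambda = rev(grid), nfolds = 5,
                     intercept = FALSE, standardize = FALSE)
  cvfit$lambda.min                           # selected regularizer
}
```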
For the purposes of calculating the standard error and quantiles, which in turn require $\si_1,\si_2,\si_1^*,$ and $\si_2^*$ defined in Condition D and Condition E of Section \[sec:mainresults\], we assume the covariances $\Si$ and $\D$ to be known. In practice these can be replaced with estimates of $\Si$ and $\D$ reconstructed from the edge estimates $\h\mu_{(j)},$ and $\h\g_{(j)}$ of Step 2 using, e.g., neighborhood selection ([@meinshausen2006high], [@yuan2010high]) on each binary partition yielded by $\tilde\tau.$ The parameters $\si_1,\si_2$ are obtained as their respective finite sample approximations from their defining expressions in Condition D. A further approximation is made in order to calculate $\si_1^*,$ and $\si_2^*.$ These are obtained as, $$\begin{aligned}
&&\si_1^{*2}\approx \xi_{2,2}^{-2}\sum_{j=1}^{p}\si_{(j)}^2\,\eta^{0T}_{(j)}\Si_{-j,-j}\eta^{0}_{(j)},\quad{\rm where,}\\
&&\si_{(j)}^2= {\rm var}(\vep_{tj})=\Si_{j,j}-\mu^{0T}_{(j)}\Si_{-j,j},\quad t\le \lfloor T\tau^0\rfloor,\;j=1,...,p,\end{aligned}$$ and similarly for the calculation of $\si_2^{*2},$ with $\Si$ replaced by $\D,$ and $\mu^0_{(j)}$ replaced by $\g^0_{(j)},$ $j=1,...,p.$ This calculation ignores the blockwise dependence structure between $\vep_{tj}z_{t,-j}$ and $\vep_{tk}z_{t,-k},$ $j\ne k,$ i.e., fourth order interactions between the components of $z$ are not taken into account for this approximation.
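For illustration, a minimal sketch of this plug-in approximation, given a pre-change covariance `Sig`, the edge vectors `mu0` and the differences `eta0` $=\mu^0-\g^0$ stored as $p\times(p-1)$ matrices (names are ours):

``` r
# Plug-in approximation of sigma_(j)^2 and sigma_1^{*2} as displayed above,
# ignoring cross-j (fourth order) interaction terms.
sigma1_star_sq <- function(Sig, mu0, eta0) {
  p <- nrow(Sig)
  xi22sq <- sum(eta0^2)                     # xi_{2,2}^2 = sum_j ||eta0_(j)||_2^2
  total <- 0
  for (j in 1:p) {
    sig_j_sq <- Sig[j, j] - sum(mu0[j, ] * Sig[-j, j])           # var(eps_tj)
    quad     <- drop(t(eta0[j, ]) %*% Sig[-j, -j] %*% eta0[j, ]) # eta' Sig_{-j,-j} eta
    total    <- total + sig_j_sq * quad
  }
  total / xi22sq
}
```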
To report our results we provide the following metrics, approximated over 200 Monte Carlo replications: the bias, $|E(\h\tau-\tau^0)|,$ and the root mean squared error (rmse), $E^{\frac{1}{2}}(\h\tau-\tau^0)^2,$ together with the corresponding metrics for $\tilde\tau.$ We also report the standard error associated with the estimator $T\tilde\tau$ and the quantile of the limiting distribution of Theorem \[thm:limitingdist\]. In particular, the former is given as ${\rm SE}(T\tilde\tau)=\si_1^{*2}\big/(\psi^2\si_1^4).$ The latter, $c_{\alpha},$ is obtained as a symmetric quantile at a $(1-\al)=0.95$ confidence level. This computation in turn requires the ratios $\si_2^2/\si_1^2$ and $\si_2^{*2}/\si_1^{*2},$ and the cumulative distribution function of the limiting distribution, which is provided in [@bai1997estimation]. Since the standard error and quantiles are computed based on parameters which are assumed to be known, they are not influenced by the sampling period $T,$ the change point parameter $\tau^0,$ or the Monte Carlo replications. Furthermore, by the construction of $\Si$ and $\D,$ these quantities stay roughly the same across the dimension $p.$
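For completeness, these reported quantities amount to the following elementary computations over the replications (a sketch; `tau_hat` denotes the vector of Monte Carlo estimates):

``` r
# Reported metrics over Monte Carlo replications, and the displayed standard
# error formula SE(T * tilde_tau) = sigma_1^{*2} / (psi^2 * sigma_1^4).
bias    <- function(tau_hat, tau0) abs(mean(tau_hat - tau0))
rmse    <- function(tau_hat, tau0) sqrt(mean((tau_hat - tau0)^2))
se_Ttau <- function(sig1_star_sq, sig1_sq, psi) sig1_star_sq / (psi^2 * sig1_sq^2)
```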
Partial results of the numerical experiments are provided in Table \[tab015\], Table \[tab:t023\], and Figure \[fig:bias.tau\]. The most notable observation here is the uniform improvement in bias and rmse provided by the Step 2 estimate $\tilde\tau$ over the Step 1 estimate $\h\tau.$ Moreover, this improvement is more pronounced when the sampling period is larger. This is indicative of the sharper rate of convergence of $\tilde\tau$ (Theorem \[thm:optimalapprox\]) over the near optimal rate of $\h\tau$ (Theorem \[thm:alg1.nearoptimal\]). A clear consistency trend is observed, with improved estimation at larger sampling periods and an expected deterioration of estimation precision with increased dimensionality $p.$ The numerical results of all other cases of $\tau^0$ mimic these trends. The results for $\tau^0=0.69$ and $\tau^0=0.77$ are provided in Table \[tab069\] and Table \[tab077\] in Appendix \[sec:add.numerical\] of the supplementary materials; the remainder are omitted. Coverage of confidence intervals constructed over replications using the corresponding standard error and quantile was also evaluated. This was found to be conservative and near one in all cases with $T\ge 275,$ and $0.21\le\tau^0\le 0.77.$ This is likely due to the nature of the result of Theorem \[thm:limitingdist\], which is obtained under the regime $\psi\to 0.$ Thus, in any finite $\psi$ setup these intervals are expected to be conservative when $T$ is large. This conservative behavior of confidence intervals has also been mentioned in the seminal work of [@bai1994], where a similar limiting distribution was first presented in the context of estimation of a change point in the mean of a random variable.
The U-shaped trends observed in the left and right panels of Figure \[fig:bias.tau\] are indicative of one of two underlying effects. The first is a boundary effect: as the change point moves closer to the boundary of $(0,1),$ the effective sample size on one of the induced binary segments is reduced, thereby diminishing precision. The second is an initializer effect: these U-shaped trends are potentially indicative of the reach of the initializing choice $\check\tau$ of Algorithm 1, which is set to $0.5.$ Upon comparing the left and right panels of Figure \[fig:bias.tau\], these effects are observed to be further compounded (as expected) by an increase in dimensionality. These concerns are however alleviated by their diminishing effect as the sampling period $T$ increases, thereby providing numerical evidence for the statistical validity of the proposed method.
Supplementary materials {#SM .unnumbered}
=======================
Proofs of results of Section \[sec:mainresults\] {#sec:appA}
================================================
The following notation is required for readability of this section. In addition to $\xi_{2,2},$ defined via the $\ell_{2,2}$ norm in Condition A, we also define $\xi_{2,1}=\sum_{j=1}^p\|\eta_{(j)}^0\|_2$ via the $\ell_{2,1}$ norm. Also, in all that follows we denote $\h\eta_{(j)}=\h\mu_{(j)}-\h\g_{(j)},$ $j=1,...,p.$
For any fixed $\tau\ge \tau^0$ consider, \[eq:10\] [[U]{}]{}(z,,,)&=&Q(z,,,)-Q(z,\^0,,)\
&=&\_[t=1]{}\^[T]{}\_[j=1]{}\^p(z\_[tj]{}-z\_[t,-j]{}\^T\_[(j)]{})\^2 + \_[t=T+1]{}\^[T]{}\_[j=1]{}\^p(z\_[tj]{}-z\_[t,-j]{}\^T\_[(j)]{})\^2\
&&- \_[t=1]{}\^[T\^0]{}\_[j=1]{}\^p(z\_[tj]{}-z\_[t,-j]{}\^T\_[(j)]{})\^2 - \_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(z\_[tj]{}-z\_[t,-j]{}\^T\_[(j)]{})\^2\
&=&\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(z\_[tj]{}-z\_[t,-j]{}\^T\_[(j)]{})\^2 - \_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(z\_[tj]{}-z\_[t,-j]{}\^T\_[(j)]{})\^2\
&=&\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(z\_[t,-j]{}\^T\_[(j)]{})\^2-\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T\_[(j)]{}\
&&+\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(\_[(j)]{}-\^0\_[(j)]{})\^Tz\_[t,-j]{}z\_[t,-j]{}\^T\_[(j)]{}. The expansion in (\[eq:10\]) provides the following relation, \[eq:11\] \_[[U]{}]{}(z,,,)&& \_ \_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(z\_[t,-j]{}\^T\_[(j)]{})\^2\
&&-2\_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T\_[(j)]{}|\
&&-2\_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(\_[(j)]{}-\^0\_[(j)]{})\^Tz\_[t,-j]{}z\_[t,-j]{}\^T\_[(j)]{}|\
&=&R1-R2-R3 Bounds for terms $R1,R2$ and $R3$ have been provided in Lemma \[lem:term123\] and Lemma \[lem:assumptionbounds\]. In particular, R1&& \_[2,2]{}\^2\
&& \_[2,2]{}\^2, with probability at least $1-a-o(1).$ Here the first inequality follows from Lemma \[lem:term123\] and the final inequality follows by using the bounds of Lemma \[lem:assumptionbounds\]. Next we obtain upper bounds for the terms $R2\big/\ka\xi_{2,2}^2$ and $R3\big/\ka\xi_{2,2}^2.$ For this purpose, first note that $(\xi_{2,1}\big/\xi_{2,2})\le \surd{p},$ consequently $(\xi_{2,1}\big/\xi_{2,2}^2)\le 1\big/\psi.$ Now consider, && c\_[a1]{}(1+\^2)()\^+ c\_[u]{}(1+\^2)()\^(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_1\
&& c\_[a1]{}(1+\^2)()\^+ {(1+\^2)()\^}{c\_u(1+\^2)}\
&& c\_uc\_[a1]{}(1+\^2)()\^ with probability at least $1-a-o(1).$ As before, the first inequality follows from Lemma \[lem:term123\] and the final inequality follows by using the bounds of Lemma \[lem:assumptionbounds\]. Similarly we can also obtain, && c\_u(\^2){s(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2}\^\
&& c\_[u1]{}(\^2) with probability at least $1-a-o(1).$ Substituting these bounds in (\[eq:11\]) and applying a union bound over these three events yields the bound (\[eq:12\]) uniformly over the set $\{{{\cal G}}(u_T,v_T);\,\tau\ge\tau^0\}.$ The mirroring case of $\tau\le\tau^0$ can be obtained by similar arguments.
$\rule{5.5in}{0.1mm}$
To prove this result, we show that for any $0<a<1,$ the bound \[eq:13\] $$\big|\lfloor T\tilde\tau\rfloor-\lfloor T\tau^0\rfloor\big|\le c_{a3}^2,$$ holds with probability at least $1-3a-o(1).$ Note that Part (i) of this theorem is a direct consequence of the bound (\[eq:13\]) in the case where $\psi\to 0.$ The proof of the bound (\[eq:13\]) relies on a recursive application of Lemma \[lem:mainlowerb\], where the optimal rate of convergence $O_p(1)$ is obtained by a series of recursions, with the rate of convergence being sharpened at each step.
We begin by considering any $v_T>0,$ and applying Lemma \[lem:mainlowerb\] on the set ${{\cal G}}(1,v_T)$ to obtain, \_[[[G]{}]{}(1,v\_T)]{} [[U]{}]{}(z,,,)\^2\_[2,2]{} with probability at least $1-3a-o(1).$ Recall by assumption $b<(1/2),$ and choose any $v_T>v_T^*=c_{a3}/T^{b}.$ Then we have $\inf_{\tau\in{{\cal G}}(1,v_T)}{{\cal U}}(z,\tau,\h\mu,\h\g)>0,$ thus implying that $\tilde\tau\notin{{\cal G}}(1,v_T),$ i.e., $\big|\lfloor T\tilde\tau\rfloor-\lfloor T\tau^0\rfloor\big|\le Tv_T^*,$ with probability at least $1-3a-o(1)$[^5]. Now reset $u_T=v_T^*$ and reapply Lemma \[lem:mainlowerb\] for any $v_T>0$ to obtain, \_[[[G]{}]{}(u\_T,v\_T)]{} [[U]{}]{}(z,,,)\^2\_[2,2]{} Again choosing any, v\_T>v\_T\^\*={,}, where, g\_2=1+, u\_2=+,[and]{} v\_2=b+v\_12b,[with]{}u\_1=v\_1=b, we obtain $\inf_{{{\cal G}}(u_T,v_T)}{{\cal U}}(z,\tau,\h\mu,\h\g)>0,$ with probability at least $1-3a-o(1).$ Consequently $\tilde\tau\notin{{\cal G}}(u_T,v_T),$ i.e., $\big|\lfloor T\tilde\tau\rfloor-\lfloor T\tau^0\rfloor\big|\le Tv_T^*.$ Note that rate of convergence of $\tilde\tau$ has been sharpened at the second recursion in comparison to the first. Continuing these recursions by resetting $u_T$ to the bound of the previous recursion, and applying Lemma \[lem:mainlowerb\], we obtain for the $m^{th}$ recursion, |T-T\^0|T{,}:=T{R\_[1m]{},R\_[2m]{}},\
g\_m=\_[k=0]{}\^[m-1]{}, u\_m=+=+\_[k=1]{}\^[m]{},[and]{} v\_m=b+v\_[m-1]{}mb,[with]{}u\_1=v\_1=b Next, we observe that for $m$ large enough, $R_{2m}\le R_{1m}.$ This follows since $R_{2m}$ is faster than any polynomial rate of $1/T.$[^6] Consequently for $m$ large enough we have $\big|\lfloor T\tilde\tau\rfloor-\lfloor T\tau^0\rfloor\big|\le T R_{1m},$ with probability at least $1-3a-o(1).$ Finally, we continue these recursions an infinite number of times to obtain, $g_{\iny}=\sum_{k=0}^{\iny}1/2^{k},$ $u_\iny=\sum_{k=1}^{\iny}(1/2^k),$ thus yielding, |T-T\^0|T=c\_[a3]{}\^2 with probability at least $1-3a-o(1).$ This proves the bound (\[eq:13\]). To finish the proof, note that despite the recursions in the argument, the probability bound after every step is maintained at $1-3a-o(1).$ This follows since the probability statement of Lemma \[lem:mainlowerb\] arises from stochastic upper bounds of Lemma \[lem:optimalcross\], Lemma \[lem:nearoptimalcross\], Lemma \[lem:optimalsqterm\] and Lemma \[lem:WUURE\], applied recursively, with a tighter bound at each recursion. This yields a sequence of events such that each event is a proper subset of the event at the previous recursion.
$\rule{5.5in}{0.1mm}$
For a clear presentation of the proof of Theorem \[thm:limitingdist\] we use the following additional notation. Denote by, [[U]{}]{}()=[[U]{}]{}(z,,,),[[U]{}]{}()=[[U]{}]{}(z,,\^0,\^0),where ${{\cal U}}(z,\tau,\mu,\g)$ is defined in (\[def:cU\]). The proof of this theorem shall also rely on the ‘Argmax’ theorem, see, Theorem 3.2.2 of [@vaart1996weak] (reproduced as Theorem \[thm:argmax\]).
Under the assumed regime of $\psi\to 0,$ recall from Remark \[rem:optapprox\] that we have $T\psi^2(\tilde\tau-\tau^0)=O_p(1).$ It is thus sufficient to examine the behavior of $\tilde\tau,$ such that $\tilde\tau=\tau^0+rT^{-1}\psi^{-2},$ with $r\in[-c_1,c_1],$ for a given constant $c_1>0.$ Now in view of ‘Argmax’ theorem (Theorem \[thm:argmax\]), in order to prove the statement of this theorem it is sufficient to establish the following results, \[eq:steps\] &(i)& \_[[[G]{}]{}((c\_1T\^[-1]{}\^[-2]{}),0)]{} Tp\^[-1]{}|[[U]{}]{}()-[[U]{}]{}()|=o\_p(1),\
&(ii)& Tp\^[-1]{}[[U]{}]{}(\^0+r\^[-2]{}T\^[-1]{})G(r)=
\^2\_2|r| - 2 \_2\^[\*]{}W\_1(r), & [if]{} r>0,\
0, & [if]{}r=0,\
\_1\^2|r| - 2\_1\^[\*]{}W\_2(r), & [if]{}r<0.
Then it is straightforward to show that $\operatorname*{arg\,min}_r G(r)=^d\big(\si_1^{*2}\big/\si_1^4\big)\operatorname*{arg\,min}_r Z(r),$ where $Z(r)$ is as defined in (\[def:Zr\]) and $=^d$ represents equality in distribution, see, e.g., the proof of Proposition 3 of [@bai1997estimation], thereby yielding the statement of this theorem. [**Step 1**]{} and [**Step 2**]{} below provide the results $(i)$ and $(ii)$ of (\[eq:steps\]), respectively.
For any $\tau\ge \tau^0,$ first define the following, R\_1&=&p\^[-1]{}\_[T\^0+1]{}\^[T]{}\_[j=1]{}\^pz\_[t,-j]{}\^T\_[(j)]{}\_2\^2-2p\^[-1]{}\_[T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T\_[(j)]{}\
&&+2p\^[-1]{}\_[T\^0+1]{}\^[T]{}\_[j=1]{}\^p(\_[(j)]{}-\^0\_[(j)]{})\^Tz\_[t,-j]{}z\_[t,-j]{}\^T\_[(j)]{}=R\_[11]{}-2R\_[12]{}+2R\_[13]{},\
R\_2&=&p\^[-1]{}\_[T\^0+1]{}\^[T]{}\_[j=1]{}\^pz\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2-2p\^[-1]{}\_[T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T\^0\_[(j)]{}=R\_[21]{}-2R\_[22]{}. Then we have the following algebraic expansion, \[eq:algebra\] Tp\^[-1]{}([[U]{}]{}()-[[U]{}]{}())&=&Tp\^[-1]{}(Q(z,,,)-Q(z,\^0,,))\
&&-Tp\^[-1]{}(Q(z,,\^0,\^0)-Q(z,\^0,\^0,\^0))\
&=&(R\_[1]{}-R\_[2]{})={(R\_[11]{}-2R\_[12]{}+2R\_[13]{})-(R\_[21]{}-2R\_[22]{})}. Lemma \[lem:limiting.dist.residual.terms\] shows that the expressions $\big|R_{11}-R_{21}\big|,$ $\big|R_{12}-R_{22}\big|,$ and $|R_{13}|$ are $o_p(1)$ uniformly over the set $\{{{\cal G}}\big(c_1T^{-1}\psi^{-2},0\big)\}\cap\{\tau\ge \tau^0\}.$ The same result can be obtained symmetrically on the set $\{{{\cal G}}\big(c_1T^{-1}\psi^{-2},0\big)\}\cap\{\tau\le \tau^0\},$ thereby yielding $o_p(1)$ bounds for these terms uniformly over ${{\cal G}}\big(c_1T^{-1}\psi^{-2},0\big)$ Consequently, \_[[[G]{}]{}((c\_1T\^[-1]{}\^[-2]{}),0)]{} Tp\^[-1]{}|[[U]{}]{}()-[[U]{}]{}()| \_[[[G]{}]{}((c\_1T\^[-1]{}\^[-2]{}),0)]{}|R\_[11]{}-R\_[21]{}|\
+\_[[[G]{}]{}((c\_1T\^[-1]{}\^[-2]{}),0)]{}2|R\_[12]{}-R\_[22]{}| +\_[[[G]{}]{}((c\_1T\^[-1]{}\^[-2]{}),0)]{}2|R\_[13]{}|=o\_p(1) This completes the proof of [**Step 1**]{}.
Consider $\tau^*=\tau^0+rT^{-1}\psi^{-2},$ with $r\in(0,c_1].$ Then using Lemma \[lem:design.convergence.for.limiting.dist\] we have, \[eq:17\] p\^[-1]{}\_[T\^0+1]{}\^[T\^\*]{}\_[j=1]{}\^p\^[0T]{}\_[(j)]{}z\_[t,-j]{}z\_[t,-j]{}\^T\^0\_[(j)]{}\_p r\_2\^2. Next, let $\z_t=\sum_{j=1}^p\z_{tj}=\sum_{j=1}^p\vep_{tj}z_{t,-j}^T\eta^0_{(j)},$ and note that ${\rm var}\big(p^{-1/2}\psi^{-1}\z_t\big)={\rm var}\big(\xi_{2,2}^{-1}\z_t\big).$ Then using Condition E we obtain, \[eq:20\] p\^[-1]{}\_[t=T\^0+1]{}\^[T\^\*]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^0\_[(j)]{}&=&p\^[-1/2]{}\_[t=T\^0+1]{}\^[T\^\*]{} p\^[-1/2]{}\^[-1]{}\_t\_2\^\*W\_1(r), where $W_1(r)$ is a Brownian motion on $[0,\iny).$ Now consider $Tp^{-1}{{\cal U}}(\tau^*),$ and observe that an algebraic simplification yields, Tp\^[-1]{}[[U]{}]{}(\^\*)&=&p\^[-1]{}\_[t=T\^0+1]{}\^[T\^\*]{}\_[j=1]{}\^p(z\_[tj]{}-z\_[t,-j]{}\^T\^0\_[(j)]{})\^2-p\^[-1]{}\_[t=T\^0+1]{}\^[T\^\*]{}\_[j=1]{}\^p(z\_[tj]{}-z\_[t,-j]{}\^T\^0\_[(j)]{})\^2\
&=&p\^[-1]{}\_[t=T\^0+1]{}\^[T\^\*]{}\_[j=1]{}\^p\^[0T]{}\_[(j)]{}z\_[t,-j]{}z\_[t,-j]{}\^T\^0\_[(j)]{}- 2p\^[-1]{}\_[t=T\^0+1]{}\^[T\^\*]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T\^0\_[(j)]{}\
&& {\^2\_2 r - 2 \_2\^\*W\_1(r)}, where the convergence in distribution follows from (\[eq:17\]) and (\[eq:20\]). Similarly for $\tau^*=\tau^0+rT^{-1}\psi^{-2},$ with $r\in[-c_1,0),$ it can be shown that, T[[U]{}]{}(\^\*) {\_1\^2(-r) - 2\_1\^\*W\_2(r)}, where $W_2(r)$ is another Brownian motion on $[0,\iny)$ independent of $W_1(r).$ This completes the proof of [**Step 2**]{} and the proof of this theorem.
$\rule{5.5in}{0.1mm}$
Proofs of results of Section \[sec:nuisance\]
=============================================
The main result of Section \[sec:nuisance\] is Theorem \[thm:alg1.nearoptimal\], which forms the basis of the subsequent corollaries. The proof of Theorem \[thm:alg1.nearoptimal\] requires some preliminary work in the form of Theorem \[thm:est.nuisance.para\], Lemma \[lem:check.mu.g\] and Lemma \[lem:lower.b.near.optimal\] below. We begin with Theorem \[thm:est.nuisance.para\] that provides uniform bounds (over $\tau$) of the $\ell_2$ error in the lasso estimates (\[est:lasso\]) obtained from a regression of each column of $z$ on the rest.
\[thm:est.nuisance.para\] Suppose Conditions A$'$ and B hold. Let $u_T\ge 0$ and $\la_j=2(\la_{1j}+\la_{2j}),$ where \_[1j]{}=c\_u\^2(1+\^2){}\^, \_[2j]{}=c\_u(\^2)\^0\_[(j)]{}\_2{,} Then, uniformly over all $j=1,...,p,$ the following two properties hold with probability at least $1-c_{u2}\exp \big\{-c_{u3}\log(p\vee T) \big\},$ for some $c_{u2},c_{u3}>0.$\
(i) The vectors $\h\mu_{(j)}(\tau)-\mu_{(j)}^0\in{{\cal A}}_{1j},$ and $\h\g_{(j)}(\tau)-\g_{(j)}^0\in{{\cal A}}_{2j},$ where the sets ${{\cal A}}_{ij},$ $i=1,2,$ and $j=1,...,p$ are as defined in Condition C.\
(ii) For any constant $c_{u1}>0,$ we have, \_\_[(j)]{}()-\^0\_[(j)]{}\_2c\_u\_j. The same upper bounds also hold for $\h\g_{(j)}(\tau)-\g^0_{(j)},$ uniformly over $j$ and $\tau.$
Consider any $\tau\in{{\cal G}}(u_T,0),$ and w.l.o.g. assume that $\tau\ge\tau^0.$ Then for any $j=1,..,p,$ by construction of the estimator $\h\mu_{(j)}(\tau),$ we have the basic inequality, \_[t=1]{}\^[T]{} (z\_[tj]{}- z\_[t, -j]{}\^T\_[(j)]{}())\^2 + \_j\_[(j)]{}()\_1 \_[t=1]{}\^[T]{} (z\_[tj]{}- z\_[t, -j]{}\^T\_[(j)]{}\^0)\^2 + \_j\_[(j)]{}\^0\_1. An algebraic rearrangement of this inequality yields, \_[t=1]{}\^[T]{}(z\_[t,-j]{}\^T(\_[(j)]{}-\_[(j)]{}\^0))\^2+\_j\_[(j)]{}()\_1\_j\_[(j)]{}\^0\_1+ \_[t=1]{}\^[T]{} \_[tj]{}z\_[t,-j]{}\^T(\_[(j)]{}-\_[(j)]{}\^0), where $\tilde\vep_{tj}=\vep_{tj}=z_{tj}-z_{t,-j}^T\mu_{(j)}^0,$ for $t\le \lfloor T\tau^0\rfloor,$ and $\tilde\vep_{tj}=z_{tj}-z_{t,-j}^T\mu_{(j)}^0=\vep_{tj}-z_{t,-j}^T(\mu_{(j)}^0-\g_{(j)}^0),$ for $t>\lfloor T\tau^0\rfloor.$ A further simplification using these relations yields, \[eq:23\] \_[t=1]{}\^[T]{}(z\_[t,-j]{}\^T(\_[(j)]{}-\_[(j)]{}\^0))\^2+\_j\_[(j)]{}()\_1\_j\_[(j)]{}\^0\_1+ \_[t=1]{}\^[T]{} \_[tj]{}z\_[t,-j]{}\^T(\_[(j)]{}-\_[(j)]{}\^0)\
-\_[t=T\^0+1]{}\^[T]{} (\_[(j)]{}\^0-\_[(j)]{}\^0)z\_[t,-j]{}z\_[t,-j]{}\^T(\_[(j)]{}-\_[(j)]{}\^0)\
\_[(j)]{}\^0\_1+ \_[t=1]{}\^[T]{} \_[tj]{}z\_[t,-j]{}\^T\_\_[(j)]{}-\_[(j)]{}\^0\_1\
+\_[t=T\^0+1]{}\^[T]{} (\_[(j)]{}\^0-\_[(j)]{}\^0)z\_[t,-j]{}z\_[t,-j]{}\^T\_\_[(j)]{}-\_[(j)]{}\^0\_1 Now using the bounds of Lemma \[lem:bounds.for.nuis.thm\] we have that, &&\_[t=1]{}\^[T]{}\_[tj]{}z\_[t,-j]{}\_c\_u\^2(1+\^2){}\^=\_[1j]{}\
&& \_[t=T\^0+1]{}\^[T]{}\^[0T]{}\_[(j)]{}z\_[t,-j]{}z\_[t,-j]{}\^T\_c\_[u]{}(\^2)\^0\_[(j)]{}\_2{,}=\_[2j]{}, with probability at least $1-c_{u2}\exp\{-c_{u3}\log(p\vee T)\}.$ Applying these bounds in (\[eq:23\]) yields, \_[t=1]{}\^[T]{}(z\_[t,-j]{}\^T(\_[(j)]{}-\_[(j)]{}\^0))\^2+\_j\_[(j)]{}()\_1\_j\_[(j)]{}\^0\_1 + (\_[1j]{}+ \_[2j]{})\_[(j)]{}()-\_[(j)]{}\^0\_1, with probability at least $1-c_{u2}\exp\{-c_{u3}\log(p\vee T)\}.$ Choosing $\la_j\ge 2(\la_{1j}+ \la_{2j}),$ leads to $\big\|\big(\h\mu_{(j)}(\tau)\big)_{S_{1j}^c}\big\|_1\le 3\big\|\big(\h\mu_{(j)}(\tau)-\mu_j^0\big)_{S_{1j}}\big\|_1,$ and thus by definition $\h\mu_{(j)}-\mu^0_{(j)}\in{{\cal A}}_{1j},$ with the same probability. This proves the first assertion of this theorem. Next applying the restricted eigenvalue condition of (\[lem:lower.RE.ordinary\]) to the l.h.s. of the inequality (\[eq:23\]), we also have that, \_[(j)]{}()-\^0\_[(j)]{}\_2\^23\_[(j)]{}()-\^0\_[(j)]{}\_13\_j\_[(j)]{}()-\^0\_[(j)]{}\_2. This directly implies that $\|\h\mu_{(j)}(\tau)-\mu^0_{(j)}\|_2\le 3\surd{s}(\la_j/\ka),$ which yields the desired $\ell_2$ bound. To finish the proof recall that the stochastic bounds used here hold uniformly over ${{\cal G}}(u_T,0),$ and $j,$ consequently the statements of this theorem also hold uniformly over the same collections. The case of $\tau\le\tau^0,$ and the corresponding results for $\h\g_{(j)}(\tau)-\g^0_{(j)}$ can be obtained by symmetrical arguments.
$\rule{5.5in}{0.1mm}$
The following lemma obtains $\ell_2$ error bounds for the Step 1 edge estimates by utilizing the initializing Condition F and Theorem \[thm:est.nuisance.para\].
\[lem:check.mu.g\] Suppose Condition A$',$ B and F hold. Choose regularizers $\la_j,$ $j=1,...,p,$ as prescribed in Theorem \[thm:est.nuisance.para\], with $u_T=\big(c_ul_T\ka\big)\big/\big(sT^k(\si^2\vee\phi)\big).$ Then edge estimates $\check\mu_{(j)},$ $j=1,...,p$ of Step 1 of Algorithm 1 satisfy the following bound. (i)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2,[and]{} (ii)(s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^ with probability $1-o(1).$ Corresponding bounds also holds for $\check\g_{(j)},$ $j=1,...,p.$
We begin by noting that Part (ii) of the initializing Condition F of Algorithm 1 guarantees that $\check\tau$ satisfies, |T-T\^0|T\^[(1-k)]{} In other words, $\check\tau\in{{\cal G}}(u_T,0),$ where $u_T=\big(c_ul_T\ka\big)\big/\big(sT^k(\si^2\vee\phi)\big),$ where $k<b.$ This choice of $u_T$ provides the following relations, &&=.\[eq:27\]\
&&c\_u(\^2)s= c\_u\^2(1+\^2){}\^\[eq:28\] Here the inequality of (\[eq:27\]) follows from the assumption $c_u\ka T^{(1-k)}l_T\ge (\si^2\vee\phi)s\log (p\vee T)$ of Condition F. The equality of (\[eq:28\]) follows directly upon substituting the choice of $u_T,$ and the inequality follows from assumption A$'$(iii) and since w.l.o.g we have $k<b.$ Now using this choice of $u_T$ in $\la_j$ of Part (ii) of Theorem \[thm:est.nuisance.para\] we obtain, \_[j=1]{}\^p(\_[1j]{}+\_[2j]{})&& c\_u\^2(1+\^2){}\^+c\_u(\^2)\_[2,1]{}{,}\
&& c\_u\^2(1+\^2){}\^+c\_u(\^2){}c\_u. The second inequality follows from (\[eq:27\]) and the final inequality follows from (\[eq:28\]). The bound of Part (i) is now a direct consequence of Theorem \[thm:est.nuisance.para\]. We proceed similarly to prove Part (ii), note that, \_[j=1]{}\^p(\_[1j]{}+\_[2j]{})\^2&& c\_u\^4(1+\^2){}+c\_u(\^4\^2)\_[2,2]{}\^2{,}\^2\
&& c\_u\^4(1+\^2){}+c\_u(\^4\^2){}\^2\
&&c\_u\^4(1+\^2){} +. The final inequality follows from Condition A$'$(iii). Part (ii) is now a direct consequence.
$\rule{5.5in}{0.1mm}$
\[lem:lower.b.near.optimal\] Suppose Condition A$'$, B and F hold and let $\check\mu_{(j)}$ and $\check\g_{(j)},$ $j=1,...,p$ be edge estimates of Step 1 of Algorithm 1. Additionally, let $\log(p\vee T)\le Tv_T\le Tu_T$ be non-negative sequences. Then, \[eq:31\] \_[[[G]{}]{}(u\_T,v\_T)]{}[[U]{}]{}(z,,,)\^[2]{}\_[2,2]{} with probability at least $1-o(1).$ Here $c_m=\{c_u(\si^2\vee\phi)\surd(1+\nu^2)\big\}\big/\big\{\ka(1\wedge \phi)\big\}.$
The structure of this proof is similar to that of Lemma \[lem:mainlowerb\], the distinction being the use of weaker available error bounds of the edge estimates $\check\mu_{(j)},$ $\check\g_{(j)},$ and sharper bounds for other stochastic terms made possible by the additional assumption $\log(p\vee T)\le Tv_T\le Tu_T.$ Proceeding as in (\[eq:11\]) we have that, \[eq:29\] \_[[U]{}]{}(z,,,)&& R1-R2-R3 Where $R1,R2$ and $R_3$ are as defined in (\[eq:11\]) with $\h\mu_{(j)},$ $\h\g_{(j)}$ and $\h\eta_{(j)}$ replaced with $\check\mu_{(j)},$ $\check\g_{(j)}$ and $\check\eta_{(j)}=\check\mu_{(j)}-\check\g_{(j)},$ $j=1,...,p.$ Now applying the bounds of Lemma \[lem:term123.check\] we obtain, R1&& \_[2,2]{}\^2\
&&\_[2,2]{}\^2 with probability $1-o(1).$ Where the final inequality follows from Lemma \[lem:check.mu.g\]. Next we obtain upper bounds for the terms $R2\big/\ka\xi_{2,2}^2$ and $R3\big/\ka\xi_{2,2}^2.$ Consider, && c\_u(1+\^2)()\^+ c\_[u]{}(1+\^2)()\^\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_1\
&& c\_u(1+\^2)()\^+ c\_[u]{}(1+\^2)()\^\
&& c\_u(1+\^2)()\^ with probability $1-o(1).$ Here the first and second inequalities follow from Lemma \[lem:term123.check\] and Lemma \[lem:check.mu.g\], respectively. Similarly we can also obtain, && c\_u(\^2){s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2}\^\
&& c\_[u1]{}(\^2) with probability $1-o(1).$ Substituting these bounds in (\[eq:29\]) and applying a union bound over these three events yields the bound (\[eq:31\]) uniformly over the set $\{{{\cal G}}(u_T,v_T);\,\tau\ge\tau^0\}.$ The mirroring case of $\tau\le\tau^0$ can be obtained by similar arguments.
$\rule{5.5in}{0.1mm}$
The following is the proof of the main result of Section \[sec:nuisance\].
This proof relies on the same recursive argument as that of Theorem \[thm:optimalapprox\], the distinction being that recursions are made on the bound of Lemma \[lem:lower.b.near.optimal\] instead of Lemma \[lem:mainlowerb\]. Consider any $Tv_T>\log(p\vee T),$ and apply Lemma \[lem:lower.b.near.optimal\] on the set ${{\cal G}}(u_T,v_T)$ to obtain, \_[[[G]{}]{}(1,v\_T)]{} [[U]{}]{}(z,,,)&& \^2\_[2,2]{}\
&&\^2\_[2,2]{} with probability at least $1-o(1).$ Substituting $u_T=1,$ yields, \_[[[G]{}]{}(1,v\_T)]{} [[U]{}]{}(z,,,)\^2\_[2,2]{} with probability at least $1-o(1).$ Recall that w.l.og $k<b<(1/2),$ and now choose any $v_T>v_T^*=c_{m}\big(\log(p\vee T)/T\big)^k.$ Then we have $\inf_{\tau\in{{\cal G}}(1,v_T)}{{\cal U}}(z,\tau,\check\mu,\check\g)>0,$ thus implying that $\h\tau\notin{{\cal G}}(1,v_T),$ i.e., $\big|\lfloor T\check\tau\rfloor-\lfloor T\tau^0\rfloor\big|\le Tv_T^*,$ with probability at least $1-o(1).$ Now reset $u_T=v_T^*$ and reapply Lemma \[lem:mainlowerb\] for any $v_T>0$ to obtain, \_[[[G]{}]{}(u\_T,v\_T)]{} [[U]{}]{}(z,,,)\^2\_[2,2]{} Again choosing any, v\_T>v\_T\^\*={c\_[m]{}\^[g\_2]{}()\^[u\_2]{},c\_[m]{}\^[2]{}()\^[v\_2]{}}, where, g\_2=1+, u\_2=+,[and]{} v\_2=k+v\_12k,[with]{}u\_1=v\_1=k, we obtain $\inf_{{{\cal G}}(u_T,v_T)}{{\cal U}}(z,\tau,\check\mu,\check\g)>0,$ with probability at least $1-o(1).$ Consequently $\h\tau\notin{{\cal G}}(u_T,v_T),$ i.e., $\big|\lfloor T\h\tau\rfloor-\lfloor T\tau^0\rfloor\big|\le Tv_T^*.$ Continuing these recursions by resetting $u_T$ to the bound of the previous recursion, and applying Lemma \[lem:mainlowerb\], we obtain for the $l^{th}$ recursion, |T-T\^0|T{c\_[m]{}\^[g\_l]{}()\^[u\_l]{},c\_[m]{}\^[l]{}()\^[v\_l]{}}:=T{R\_[1l]{},R\_[2l]{}},[where,]{}\
g\_l=\_[j=0]{}\^[l-1]{}, u\_l=+=+\_[j=1]{}\^[l]{},[and]{} v\_l=k+v\_[l-1]{}lk,[with]{}u\_1=v\_1=k Next, it is straightforward to observe that for $l$ large enough, $R_{2l}\le R_{1l},$ for $T$ sufficiently large. Consequently for $l$ large enough we have $\big|\lfloor T\tilde\tau\rfloor-\lfloor T\tau^0\rfloor\big|\le T R_{1m},$ with probability at least $1-o(1).$ Finally, we continue these recursions an infinite number of times to obtain, $g_{\iny}=\sum_{j=0}^{\iny}1/2^{j},$ $u_\iny=\sum_{j=1}^{\iny}(1/2^j),$ thus yielding, |T-T\^0|T=c\_[m]{}\^2(pT) with probability at least $1-o(1).$ This completes the proof of this result.
$\rule{5.5in}{0.1mm}$
Under the assumed conditions, we have from Theorem \[thm:alg1.nearoptimal\] that $\h\tau\in{{\cal G}}(u_T,0),$ with probability at least $1-o(1),$ where $u_T=c_m^2T^{-1}\log(p\vee T)$ and $c_m$ is as defined in Lemma \[lem:lower.b.near.optimal\]. The relation of Part (i) follows directly from Theorem \[thm:est.nuisance.para\]. To obtain Part (ii), substitute this choice of $u_T$ in $\la_{2j},$ $j=1,...,p,$ of Theorem \[thm:est.nuisance.para\] to obtain, \_[2j]{}=c\_u(\^2)\^0\_[(j)]{}\_2{,c\_m\^2} o(1){}\^ Here the final inequality follows since by Condition A$'$(i) we have $\log(p\vee T)=o(Tl_T);$ furthermore, from Lemma \[lem:condnumberbound\] we have $\|\eta^0_{(j)}\|_2\le 2\nu,$ $j=1,...,p.$ Consequently $\la_{2j}\le \la_{1j},$ $j=1,...,p,$ and thus applying Theorem \[thm:est.nuisance.para\] we obtain, \_[(j)]{}-\^0\_[(j)]{}\_2c\_u\_[j]{}c\_u(1+\^2){}\^ for all $j=1,...,p,$ with probability at least $1-o(1).$ Corresponding bounds for $\h\g_{(j)}-\g^0_{(j)},$ $j=1,...,p,$ can be obtained using symmetric arguments. This completes the proof of this corollary.
$\rule{5.5in}{0.1mm}$
Note that Corollary \[cor:optimalmeans.step2\] has established that the edge estimates $\h\mu_{(j)}$ and $\h\g_{(j)},$ $j=1,...,p,$ satisfy the requirements of Condition C of Section \[sec:mainresults\]. Thus, this result is now a direct consequence of Theorem \[thm:optimalapprox\] and Theorem \[thm:limitingdist\].
$\rule{5.5in}{0.1mm}$
Deviation bounds used for proofs of Section \[sec:mainresults\]
===============================================================
\[lem:subez\] Suppose Condition B holds and let $\vep_{tj}$ be as defined in (\[def:epsilons\]). Then, (i) the r.v. $\vep_{tj}z_{t,-j,k}$ is sub-exponential with parameter $\la_1=48\si^2\surd{(1+\nu^2)},$ for each $j=1,...,p,$ $k=1,...,p-1$ and $t=1,...,T.$ (ii) The r.v. $\z_t=\sum_{j=1}^p\vep_{tj}z_{t,-j}^T\eta^0_{(j)}$ is sub-exponential with parameter $\la_2=48\si^2\xi_{2,1}\surd{(1+\nu^2)},$ for each $t=1,...,T.$ (iii) $E\big[|\z_t|^k\big]\le 4\la_2^k k^k,$ for any $k> 0.$
Here we only prove Part (ii) of this lemma; Part (i) follows using similar arguments, and Part (iii) follows from properties of sub-exponential random variables (see Lemma \[lem:momentprop\]). We begin by noting that the following r.v.'s are mean zero: $E(\vep_{tj})=0,$ $E(z_{t,-j}^T\eta^0_{(j)})=0$ and $E\big(\vep_{tj}z_{t,-j}^T\eta^0_{(j)}\big)=0.$ Also note that for $t\le \lfloor T\tau^0\rfloor,$ we have, $$\vep_{tj}=z_{tj}-z_{t,-j}^T\mu^0_{(j)}=(1,\, -\mu^{0T}_{(j)})(z_{tj},z_{t,-j}^T)^T.$$ Using Lemma \[lem:condnumberbound\] and properties of sub-gaussian distributions we have $\vep_{tj}\sim {\rm subG}(\si_1),$ $1\le j\le p,$ with $\si_1=\si\surd(1+\nu^2).$ The same also holds for $\vep_{tj}$ for $t>\lfloor T\tau^0\rfloor.$ Similarly, $z_{t,-j}^T\eta^0_{(j)}\sim {\rm subG}(\si_2)$ with $\si_2=\si\|\eta_{(j)}^0\|_2.$ Recall that if $Z\sim {\rm subG}(\si),$ then the rescaled variable $Z/\si\sim{\rm subG}(1).$ Next observe that, $$\frac{\vep_{tj}}{\si_1}\cdot\frac{z_{t,-j}^T\eta^0_{(j)}}{\si_2}=\frac{1}{2}\Big\{\Phi\Big(\frac{\vep_{tj}}{\si_1}+\frac{z_{t,-j}^T\eta^0_{(j)}}{\si_2}\Big)-\Phi\Big(\frac{\vep_{tj}}{\si_1}\Big)-\Phi\Big(\frac{z_{t,-j}^T\eta^0_{(j)}}{\si_2}\Big)\Big\}=\frac{1}{2}\big[T1-T2-T3\big],$$ where $\Phi(v)=\|v\|_2^2-E\big(\|v\|_2^2\big).$ Using Lemma \[lem:lcsubG\] and Lemma \[lem:sqsubGsubE\] we have that $T1\sim{\rm subE}(64),$ $T2\sim{\rm subE}(16),$ and $T3\sim{\rm subE}(16).$ Applying Lemma \[lem:lcsubE\] and rescaling with $\si_1$ and $\si_2$ we obtain that $\vep_{tj}z_{t,-j}^T\eta^0_{(j)}\sim {\rm subE}(48\si_1\si_2).$ Another application of Lemma \[lem:lcsubE\] yields $\z_t=\sum_{j=1}^p \vep_{tj}z_{t,-j}^T\eta^0_{(j)}\sim {\rm subE}(\la_2),$ where $$\la_2=48 \si^2\surd(1+\nu^2)\sum_{j=1}^p \|\eta_{(j)}^0\|_2=48 \si^2\xi_{2,1}\surd(1+\nu^2).$$ This completes the proof of Part (ii).
$\rule{5.5in}{0.1mm}$
\[lem:optimalcross\] Suppose Condition B holds and let $\vep_{tj}$ be as defined in (\[def:epsilons\]). Additionally, let $u_T,v_T$ be any non-negative sequences satisfying $0\le v_T\le u_T.$ Then for any $0<a<1,$ choosing $c_{a1}=4\cdotp 48c_{a2},$ with $c_{a2}\ge \surd{(1/a)},$ we have for $T\ge 2,$ \[eq:26\] \_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^0\_[(j)]{}|c\_[a1]{} \^2\_[2,1]{}()\^, with probability at least $1-a.$
First note that without loss of generality we can assume $u_T\ge (1/T).$ This is because when $u_T<(1/T),$ the set ${{\cal G}}(u_T,0)$ contains only the singleton $\tau^0$ with a distinct value $\lfloor T\tau^0\rfloor.$ Consequently, the sum of interest in (\[eq:26\]) is over indices $t$ in an empty set, and is thus trivially zero. Now, let $\z_t=\sum_{j=1}^p\vep_{tj}z_{t,-j}^T\eta^0_{(j)};$ then, using Lemma \[lem:subez\] we have that $\z_t\sim {\rm subE(\la)},$ where $\la=48\xi_{2,1}\surd{(1+\nu^2)}\si^2.$ Additionally, from part (iii) of Lemma \[lem:subez\], we have, ${\rm var(\z_t)}=E(\z_t^2)\le 16\la^2.$ Consider the set ${{\cal G}}(u_T,v_T) \cap \{\tau\ge\tau^0\}$ and note that in this set, there are at most $Tu_T$ distinct values of $\lfloor T\tau\rfloor.$ Applying Kolmogorov’s inequality (Theorem \[thm:kolmogorov\]) with any $d>0$ yields, pr(\_|\_[t=T\^0+1]{}\^[T]{} \_[t]{}|> d) \_ [var (z\_t)]{} Choosing $d=4c_{a2}\la\surd{(Tu_T)},$ with $c_{a2}\ge \surd{(1/a)}$ yields the statement of the lemma.
$\rule{5.5in}{0.1mm}$
\[lem:nearoptimalcross\] Suppose Condition B holds and let $\vep_{tj}$ be as defined in (\[def:epsilons\]) and let $0\le v_T\le u_T$ be any non-negative sequences. Then for any $c_{u2}>3$ and $c_{u1}\ge 96c_{u2},$ we have for $T\ge 2,$ &(i)& \_\_[t=T\^0+1]{}\^[T]{}\_[tj]{}z\_[t,-j]{}\^T\_c\_[u1]{}\^2 ()\^(pT),\
&(ii)&\_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T(\_[(j)]{}-\^0\_[(j)]{})|c\_[u1]{}\^2 ()\^(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_1, each with probability at least $1-2\exp\big\{-(c_{u2}-3)\log(p\vee T)\big\}.$[^7]
Part (ii) is a direct consequence of Part (i); thus we only prove Part (i). Without loss of generality we can assume $v_T\ge (1/T).$ This follows since the only additional distinct element $\lfloor T\tau\rfloor$ in the set ${{\cal G}}(u_T,0)$ in comparison to ${{\cal G}}(u_T,(1/T))$ is $\lfloor T\tau^0\rfloor,$ and at this value, the sum of interest is over indices $t$ in an empty set and is thus trivially zero.
Let $z_{t,-j}=(z_{t,-j,1},....,z_{t,-j,p-1})^T,$ then from Lemma \[lem:subez\] we have $\vep_{tj}z_{t,-j,k}\sim {\rm subE}(\la_1),$ with $\la_1= 48\surd{(1+\nu^2)}\si^2.$ Now applying Bernstein’s inequality (Lemma \[lem:subetail\]) for any fixed $\tau\in{{\cal G}}(u_T,v_T)$ satisfying $\tau\ge\tau^0,$ we have for any $d>0,$ \[eq:3\] pr(|\_[t=T\^0+1]{}\^[T]{}\_[tj]{}z\_[t,-j,k]{}|>d(T-T\^0))2{-()} Choose $d=2c_{u2}\la_1\log (p\vee T)\big/\surd\big(\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor\big),$ then, (T-T\^0)&=&2c\_[u2]{}\^2\^2 (pT),,\
(T-T\^0)&=&c\_[u2]{}(pT), where we have used $\big(\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor\big)\ge Tv_T\ge 1.$ A substitution back in the probability bound yields, |\_[t=T\^0+1]{}\^[T]{}\_[tj]{}z\_[t,-j,k]{}|2c\_[u2]{}\_1(pT)(T-T\^0)\^[1/2]{}2c\_[u2]{}\_1(pT) (Tu\_T)\^, with probability at least $1-2\exp\{-c_{u2} \log (p\vee T)\}.$ Finally applying a union bound over $j=1,...,p,$ $k=1,...,p-1$ and over the at most $T$ distinct values of $\lfloor T\tau\rfloor$ for $\tau\in{{\cal G}}(u_T,v_T),$ we obtain, \_\_[t=T\^0+1]{}\^[T]{}\_[tj]{}z\_[t,-j,k]{}\_2c\_[u2]{}\_1 (pT)()\^, with probability at least $1-2\exp\{-(c_{u2}-3)\log (p\vee T)\}.$ This completes the proof of Part (i).
$\rule{5.5in}{0.1mm}$
\[lem:optimalsqterm\] Suppose Condition B holds and let $u_T,$ $v_T$ be any non-negative sequences satisfying $0\le v_T\le u_T.$ Then for any $0<a<1,$ choosing $c_{a1}=64c_{a2},$ with $c_{a2}\ge \surd{(1/a)},$ we have for $T\ge 2,$ (i)\_\_[t=T\^0+1]{}\^[T]{} \_[j=1]{}\^p \^[0T]{}\_[(j)]{}z\_[t,-j]{}z\_[t,-j]{}\^T\^[0]{}\_[(j)]{}v\_T\_[2,2]{}\^2- c\_[a1]{} \^2\_[2,2]{}\^2()\^,\
(ii)\_\_[t=T\^0+1]{}\^[T]{} \_[j=1]{}\^p \^[0T]{}\_[(j)]{}z\_[t,-j]{}z\_[t,-j]{}\^T\^[0]{}\_[(j)]{}u\_T\_[2,2]{}\^2+ c\_[a1]{} \^2\_[2,2]{}\^2()\^ with probability at least $1-a.$
As in Lemma \[lem:optimalcross\], w.l.o.g. we assume $u_T\ge (1/T).$ Now, we have $\eta^{0T}_{(j)}z_{t,-j}\sim {\rm subG}\big(\si\|\eta^0_{(j)}\|_2\big);$ consequently, using Lemma \[lem:sqsubGsubE\] and Lemma \[lem:lcsubE\] we have \_[j=1]{}\^p(\^[0T]{}\_[(j)]{}z\_[t,-j]{}\_2\^2-E\^[0T]{}\_[(j)]{}z\_[t,-j]{}\_2\^2)\~[subE]{}(),=16\^2\_[2,2]{}\^2. Using moment properties of sub-exponential distributions (Part (iii) of Lemma \[lem:subez\]) we also have that {\_[j=1]{}\^p (\^[0T]{}\_[(j)]{}z\_[t,-j]{}\_2\^2-E\^[0T]{}\_[(j)]{}z\_[t,-j]{}\_2\^2)}16\^2. Now applying Kolmogorov’s inequality (Theorem \[thm:kolmogorov\]) we obtain, pr{\_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(\^[0T]{}\_[(j)]{}z\_[t,-j]{}\_2\^2-E\^[0T]{}\_[(j)]{}z\_[t,-j]{}\_2\^2)|>d }. Choosing $d=4c_{a2}\la\surd{(Tu_T)},$ with $c_{a2}\ge \surd(1/a)$ yields, \_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(\^[0T]{}\_[(j)]{}z\_[t,-j]{}\_2\^2-E\^[0T]{}\_[(j)]{}z\_[t,-j]{}\_2\^2)|4c\_[a2]{}()\^ with probability at least $1-a.$ The statement of this lemma is now a direct consequence.
$\rule{5.5in}{0.1mm}$
We require additional notation for the following results. Consider any sequence of $\alpha_{(j)},\psi_{(j)}\in\R^{p-1},$ $j=1,...,p,$ and let $\al,$ $\psi$ represent the concatenation of all $\al_{(j)}$’s and $\psi_{(j)}$’s. Then define \[def:Phi\] $$\Phi(\al,\psi)=\frac{1}{T}\sum_{t=\lfloor T\tau^0\rfloor+1}^{\lfloor T\tau\rfloor}\sum_{j=1}^p\al_{(j)}^Tz_{t,-j}z_{t,-j}^T\psi_{(j)}.$$
\[lem:etabounds\] Let $\Phi(\cdotp,\cdotp)$ be as defined in (\[def:Phi\]) and suppose Condition B and C(ii) hold. Let $u_T, v_T$ be any non-negative sequences satisfying $0\le v_T\le u_T.$ Then for any $0<a<1,$ choosing $c_{a1}=64c_{a2},$ with $c_{a2}\ge\surd{(1/a)},$ we have for $T\ge 2,$ &(i)&\_(\^0,\^0)v\_T\_[2,2]{}\^2-c\_[a1]{}\^2\_[2,2]{}\^2()\^\
&(ii)&\_(-\^0,-\^0)c\_u(\^2)s(pT)u\_T\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2 with probability at least $1-a,$ and $1-o(1),$ respectively. Moreover, when $u_T\ge c_{a1}^2\si^4\big/T\phi^2,$ we have, &(iii)&\_(\^0,\^0)2u\_T\_[2,2]{}\^2,\
&(iv)&\_|(-\^0,\^0)|c\_u(\^2) u\_T \_[2,2]{}{s(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2}\^, with probability at least $1-a,$ and $1-a-o(1),$ respectively.
Part (i) and Part (iii) of this lemma are a direct consequence of Lemma \[lem:optimalsqterm\]. To prove Part (ii), first note that, \[eq:9\] \_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_1\^2&& 2\_[j=1]{}\^p(\_[(j)]{}-\^0\_[(j)]{}\_1\^2+\_[(j)]{}-\^0\_[(j)]{}\_1\^2)\
&& 32s\_[j=1]{}\^p(\_[(j)]{}-\^0\_[(j)]{}\_2\^2+\_[(j)]{}-\^0\_[(j)]{}\_2\^2)32s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2, with probability at least $1-\pi_T=1-o(1).$ Here the second inequality follows since by Condition C(ii) we have, $\h\mu_{(j)}-\mu^0_{(j)}\in{{\cal A}}_{1j},$ and $\h\g_{(j)}-\g^0_{(j)}\in{{\cal A}}_{2j},$ $j=1,...,p.$ Now applying Lemma \[lem:WUURE\], we have, \_[[[G]{}]{}(u\_T,v\_T)]{} (-\^0,-\^0)&& c\_u(\^2)(pT)u\_T(\_[j=1]{}\^p\_[(j)]{}-\^0\_2\^2+\_[j=1]{}\^[p]{}\_[(j)]{}-\^0\_1\^2)\
&& c\_u(\^2)s(pT)u\_T\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2 with probability at least $1-o(1).$ Here the final inequality follows by using (\[eq:9\]). The proof of Part (iv) is an application of the Cauchy-Schwarz inequality together with the bounds of Part (ii) and Part (iii), \[eq:5\] \_|(-\^0,\^0)|&& \_ {(-\^0,-\^0)}\^\_ {(\^0,\^0)}\^. This completes the proof of this lemma.
$\rule{5.5in}{0.1mm}$
\[lem:term123\] Suppose Condition B and C(ii) hold. Let $u_T, v_T$ be any non-negative sequences satisfying $0\le v_T\le u_T.$ Then for any $0<a<1,$ choosing $c_{a1}=4\cdotp 48c_{a2},$ with $c_{a2}\ge \surd{(1/a)},$ and for $u_T\ge c_{a1}^2\si^4\big/(T\phi^2),$ we have for $T\ge 2,$ (i)\_\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p (\_[(j)]{}\^Tz\_[t,-j]{})\^2\
\_[2,2]{}\^2\
(ii)\_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(\_[(j)]{}-\^0\_[(j)]{})\^Tz\_[t,-j]{}z\_[t,-j]{}\^T\_[(j)]{}|\
c\_u(\^2)\_[2,2]{}u\_T{s(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2}\^\
(iii)\_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T\_[(j)]{}|\
c\_[a1]{}(1+\^2)\^2\_[2,1]{}()\^+ c\_[u]{}(1+\^2)\^2()\^(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_1, each with probability at least $1-a-o(1).$
Let $\Phi(\cdotp,\cdotp)$ be as defined in (\[def:Phi\]). Then note that $\Phi(\h\eta,\h\eta)=\Phi(\eta^0,\eta^0)+2\Phi(\h\eta-\eta^0,\eta^0)+\Phi(\h\eta-\eta^0,\h\eta-\eta^0).$ Using this relation together with the bounds of Part (i) and Part (iv) of Lemma \[lem:etabounds\] we obtain, \_(,)&& \_(\^0,\^0)-2\_|(-\^0,\^0)|\
&& v\_T\_[2,2]{}\^2- c\_[a1]{}\^2\_[2,2]{}\^2()\^- c\_u(\^2)u\_T \_[2,2]{}(s(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^ with probability at least $1-a-o(1).$ To prove Part (ii), note that using identical arguments as in the proof of Lemma \[lem:etabounds\] it can be shown that, &&\_(-\^0,-\^0)c\_u(\^2) s(pT)u\_T\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2,\
&&\_|(-\^0,\^0)|c\_u(\^2) u\_T \_[2,2]{}{s(T)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2}\^, with probability at least $1-a-o(1).$ The above inequalities and the relation $\Phi\big(\h\g-\g^0,\h\eta\big)\le \big|\Phi(\h\g-\g^0,\h\eta-\eta^0)\big|+\big|\Phi(\h\g-\g^0,\eta^0)\big|,$ together with applications of the Cauchy-Schwarz inequality yields, \_ |(-\^0,)| c\_u(\^2) s(pT)u\_T(\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^(\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^\
+ c\_u(\^2) u\_T \_[2,2]{}{s(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2}\^\
c\_u(\^2)\_[2,2]{}u\_T{s(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2}\^ with probability at least $1-a-o(1).$ To prove Part (iii), note that, \_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T\_[(j)]{}|&&\_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T\^0\_[(j)]{}|\
&&+\_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T(\_[(j)]{}-\^0\_[(j)]{})|\
&:=& R1+R2. Now using Lemma \[lem:optimalcross\] we have for any $0<a<1,$ $R1\le c_{a1}\surd(1+\nu^2)\si^2\xi_{2,1}\big(u_T\big/T\big)^{1/2},$ with probability at least $1-a.$ Also, using Lemma \[lem:nearoptimalcross\] we have, R2c\_[u]{}(1+\^2)\^2()\^(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_1 with probability at least $1-o(1).$ Part (iii) now follows by combining bounds for terms $R1$ and $R2.$
$\rule{5.5in}{0.1mm}$
\[lem:assumptionbounds\] Suppose Condition A and C hold. Then we have, &(i)&\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2c\_u(1+\^2){},\
&(ii)&\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_1c\_u(1+\^2){}\^\
&(iii)& (s(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^=o(1),\
&(iv)&\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_1{}\^ with probability at least $1-o(1).$
Part (i) can be obtained as, \_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^22\_[j=1]{}\^p(\_[(j)]{}-\^0\_[(j)]{}\_2\^2+\_[(j)]{}-\^0\_[(j)]{}\_2\^2)c\_u(1+\^2){}, with probability at least $1-o(1).$ Here the final inequality follows from (\[eq:optimalmeans\]). Part (ii) can be obtained quite analogously. To prove Part (iii) note that from Condition A we have $(1\big/\xi_{2,2})= (1\big/\psi\surd{p})$ and consider, (s(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^&& (sp\^[-1]{}(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^\
&& c\_u(1+\^2){}, with probability at least $1-o(1).$ Here the second inequality follows by using the bound of Part (i) and the first follows from Condition A. To prove Part (iv) consider, \_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_1&& c\_u(1+\^2){}\^\
&& {}c\_u(1+\^2){}\
&& {}\^ with probability at least $1-o(1).$ Here the first inequality follows by the assumption $(1\big/\xi_{2,2})=(1\big/\psi\surd{p})$ together with the bound in Part (ii). The final inequality follows from Condition A.
$\rule{5.5in}{0.1mm}$
\[lem:limiting.dist.residual.terms\] Suppose the conditions of Theorem \[thm:limitingdist\] hold and let $R_{11},R_{12},R_{13},$ and $R_{21},R_{22}$ be as defined in its proof. Let $0<c_1<\iny$ be any constant; then we have the following bounds. &(i)&\_|R\_[11]{}-R\_[21]{}|=o(1),(ii)\_|R\_[12]{}-R\_[22]{}|=o(1)\
&(iii)&\_|R\_[13]{}|=o(1) each with probability at least $1-o(1).$
Let $\Phi(\cdotp,\cdotp)$ be as defined in (\[def:Phi\]) and consider, \[eq:unifpart1\] \_|R\_[11]{}-R\_[21]{}|= \_p\^[-1]{}|\_[T\^0+1]{}\^[T]{}\_[j=1]{}\^p(z\_[t,-j]{}\^T\_[(j)]{}\_2\^2-z\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2)|\
=\_ p\^[-1]{}|\_[T\^0+1]{}\^[T]{}\_[j=1]{}\^p(\_[(j)]{}-\^0\_[(j)]{})\^Tz\_[t,-j]{}z\_[t,-j]{}\^T(\_[(j)]{}+\^0\_[(j)]{})|\
=\_| Tp\^[-1]{}(-\^0,-\^0)+2Tp\^[-1]{}(-\^0,\^0)|. Now from Part (ii) of Lemma \[lem:etabounds\] we have \[eq:15\] \_ Tp\^[-1]{}(-\^0,-\^0)&& c\_uc\_1(\^2)\^[-2]{}p\^[-1]{}s(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2\
&=& O()=o(1), with probability at least $1-o(1).$ Also, from Part (iv) of Lemma \[lem:etabounds\], we have for $u_T\ge c_{a1}^2\si_x^4\big/(T\phi^2),$ \[eq:18\] \_ 2Tp\^[-1]{}|(-\^0,\^0)|c\_u(\^2) Tu\_Tp\^[-1]{}\_[2,2]{}(s(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^ with probability at least $1-a-o(1).$ Upon choosing $a=\big(64^2 \psi^2\si^4\big)\big/(c_1\phi^2)\to 0,$ we have $c_1T^{-1}\psi^{-2}= c_{a1}^2\si_x^4\big/(T\phi^2),$ consequently from (\[eq:18\]) we have, \[eq:19\] \_ 2T|(-\^0,\^0)|&& c\_uc\_1(\^2) (s(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^\
&=&c\_uc\_1(\^2)(s(pT)\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^\
&&O()=o(1) with probability at least $1-a-o(1)=1-o(1).$ Substituting this uniform bound together with (\[eq:15\]) back in (\[eq:unifpart1\]) yields Part (i) of this lemma. To prove Part (ii), note that \_|R\_[12]{}-R\_[22]{}|&=& \_p\^[-1]{}|\_[T\^0+1]{}\^[T]{}\_[j=1]{}\^p \_[tj]{}z\_[t,-j]{}\^T(\_[(j)]{}-\^0\_[(j)]{})|\
&=& O(p\^[-1]{}\^[-1]{}(pT) \_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_1)\
&& O()=o(1), with probability at least $1-o(1).$ Here the second equality follows from Part (ii) of Lemma \[lem:nearoptimalcross\]. To prove Part (iii) we first note that the expressions $\Phi\big(\h\g-\g^0,\h\eta-\eta^0\big),$ and $\Phi(\h\g-\g^0,\eta^0)$ can be bounded above with probability at least $1-o(1),$ by the same bounds as in (\[eq:15\]) and (\[eq:19\]), respectively. Now applications of the Cauchy-Schwarz inequality yields the following bound for the term $|R_{13}|.$ \_|R\_[13]{}|&=&\_|\_[T\^0+1]{}\^[T]{}\_[j=1]{}\^p (\_[(j)]{}-\^0\_[(j)]{})\^Tz\_[t,-j]{}z\_[t,-j]{}\^T\_[(j)]{}|\
&& \_T{|(-\^0,-\^0)|+|(-\^0,\^0)|}=o(1), with probability at least $1-o(1),$ thus completing the proof of this lemma.
$\rule{5.5in}{0.1mm}$
\[lem:design.convergence.for.limiting.dist\] Suppose Condition B holds and that $\psi\to 0.$ Then for any constant $r>0,$ we have, p\^[-1]{}|\_[T\^0+1]{}\^[T\^0+r\^[-2]{}]{}\_[j=1]{}\^p(z\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2-Ez\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2)|=o\_p(1) Additionally, if $\xi^{-2}_{2,2}\sum_{j=1}^pE\|z_{t,-j}^T\eta^0_{(j)}\|_2^2\to \si^*,$ then, p\^[-1]{}\_[T\^0+1]{}\^[T\^0+r\^[-2]{}]{}\_[j=1]{}\^pz\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2 \_p r\^\*.
We begin with the following observation. For any $\tau\ge \tau^0,$ we have the deterministic inequality $T(\tau-\tau^0)-1\le \big(\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor \big)\le T(\tau-\tau^0)+1.$ It is straightforward to verify that under the assumption $\psi\to 0,$ this inequality directly yields $c_{u1}r\psi^{-2}\le\big(\lfloor T\tau^0+ r\xi^{-2}\rfloor-\lfloor T\tau^0\rfloor \big)\le c_{u2}r\psi^{-2}.$ Also, note that from Lemma \[lem:lcsubE\] and Lemma \[lem:sqsubGsubE\] we have, p\^[-1]{}\^[-2]{}\_[j=1]{}\^p(z\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2-Ez\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2)\~[subE]{}(),=16\^2. Now upon applying Bernstein’s inequality (Lemma \[lem:subetail\]) together with the above observations, we obtain for any $d>0,$ pr{p\^[-1]{}|\_[T\^0+1]{}\^[T\^0+r\^[-2]{}]{}\_[j=1]{}\^p(z\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2-Ez\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2)|> c\_[u2]{}dr}2{-()}. Choosing $d$ as any sequence converging to zero slower than $\psi,$ say $d=\psi^{1-b},$ for any $0<b<1,$ and noting that in this case $(d\big/\la) \le 1$ for $T$ large, we obtain, p\^[-1]{}|\_[T\^0+1]{}\^[T\^0+r\^[-2]{}]{}\_[j=1]{}\^p(z\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2-Ez\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2)|=o\_p(1), This completes the proof of the first part of this lemma, the second part can be obtained as a direct consequence of Part (i).
$\rule{5.5in}{0.1mm}$
Deviation bounds used for proofs of Section \[sec:nuisance\]
============================================================
\[lem:bounds.for.nuis.thm\] Suppose Conditions A$'$(i), A$'$(ii) and B hold, and let $c_{u1}>0$ be any constant. Then uniformly over $j=1,...,p,$ we have, \_\_[t=1]{}\^[T]{}\_[tj]{}z\_[t,-j]{}\_48\^2(c\_u/c\_[u1]{})(1+\^2){}\^ with probability at least $1-2\exp\big[-\{(c_u^2/2)-3\}\log(p\vee T)\big].$ Additionally, let $u_T\ge 0$ be any sequence and $c_u>0$ any constant; then uniformly over $j=1,...,p,$ we have, \_ \_[t=T\^0+1]{}\^[T]{}\^[0T]{}\_[(j)]{}z\_[t,-j]{}z\_[t,-j]{}\^T\_&& c\_[u2]{}(\^2)\^0\_[(j)]{}\_2{,}, with probability $1-2\exp\big\{-c_{u3}\log (p\vee T)\big\},$ with $c_{u2}=(1+48c_u)/c_{u1},$ $c_{u3}=\{(c_u\wedge c_u^2)/2\}-3.$
We begin with proving Part (i). Using Lemma \[lem:subez\] we have that $\vep_{tj}z_{t,-j,k}\sim {\rm subE}(\la_1),$ with $\la_1=48\si^2\surd (1+\nu^2).$ For any $\tau\in(0,1)$ satisfying $\lfloor T\tau\rfloor\ge c_{u1}Tl_T,$ applying Lemma \[lem:subetail\] we have for any $d>0,$ pr(|\_[t=1]{}\^[T]{}\_[tj]{}z\_[t,-j,k]{}|>dT)2{-()}. Choose $d=c_{u}\la_1\surd\{\log (p\vee T)\big/\lfloor T\tau\rfloor\},$ and recall that by choice we have $\lfloor T\tau\rfloor\ge c_{u1}Tl_T,$ and from Condition A$'$(i) we have $\log(p\vee T)\le c_{u1}Tl_T.$ Thus, $d/\la_1\le 1,$ and consequently $(d^2/\la_1^2)\le (d/\la_1).$ Using these relations the above probability bound yields, |\_[t=1]{}\^[T]{}\_[tj]{}z\_[t,-j,k]{}|(c\_u/c\_[u1]{})\_1 {}\^ with probability at least $1-2\exp\big\{-(c_u^2/2)\log(p\vee T)\big\}.$ Part (i) now follows by applying a union bound over $k=1,...,(p-1),$ $j=1,...,p$ and over the at most $T$ distinct values of $\lfloor T\tau\rfloor.$
To prove Part (ii), first note that using similar arguments as in Lemma \[lem:subez\] we have that $\eta^{0T}_{(j)}z_{t,-j}z_{t,-j,k}-E\big(\eta^{0T}_{(j)}z_{t,-j}z_{t,-j,k}\big)\sim {\rm subE}(\la_1),$ with $\la_1= 48\si^2\|\eta^0_{(j)}\|_2.$ For any $\tau\in{{\cal G}}(u_T,0)$ satisfying $\lfloor T\tau\rfloor\ge c_{u1}Tl_T,$ applying a union bound over $k=1,...,p-1,$ to Bernstein’s inequality (Lemma \[lem:subetail\]) yields the following probability bound, \[eq:22\] pr{\_[t=T\^0+1]{}\^[T]{}(\^[0T]{}\_[(j)]{}z\_[t,-j]{}z\_[t,-j]{}-\^[0T]{}\_[(j)]{}\_[-j,-j]{})\_>d(T-T\^0)}\
2p{-()} Now upon choosing, d=c\_u\_1, it can be verified that [^8], \[eq:21\] &&d\_1{,},,\
&&()=(pT) Substituting the relations of (\[eq:21\]) in the probability bound (\[eq:22\]) we obtain, \_[t=T\^0+1]{}\^[T]{}(\^[0T]{}\_[(j)]{}z\_[t,-j]{}z\_[t,-j]{}-\^[0T]{}\_[(j)]{}\_[-j,-j]{})\_\_1{,} with probability at least $1-2p\exp\big[-\{(c_u\wedge c_u^2)/2\}\log (p\vee T)\big].$ Next, using the bounded eigenvalue assumption of Condition B we have that, \_[t=T\^0+1]{}\^[T]{}\^[0T]{}\_[(j)]{}\_[-j,-j]{}\^0\_[(j)]{}\_2 Using this relation in the probability bound now yields, \_[t=T\^0+1]{}\^[T]{}\^[0T]{}\_[(j)]{}z\_[t,-j]{}z\_[t,-j]{}\_c\_[u2]{}\^0\_[(j)]{}\_2+c\_[u3]{}\^2\^0\_[(j)]{}\_2{,}, with probability at least $1-2p\exp\big[-\{(c_u\wedge c_u^2)/2\}\log (p\vee T)\big],$ where $c_{u2}=1/c_{u1},$ and $c_{u3}=48c_u/c_{u1}.$ Uniformity over $\tau$ can be obtained by using a union bound over the at most $T$ distinct values of $\lfloor T\tau\rfloor,$ and similarly over $j=1,...,p,$ by using another union bound. This completes the proof of the lemma.
$\rule{5.5in}{0.1mm}$
\[rem:calculation\] Consider, \[eq:24\] d=c\_u\_1, observe that when $\log (p\vee T)\big/(\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor)\ge 1,$ then the maximum of the two terms in the expression (\[eq:24\]) is $\log (p\vee T)\big/(\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor).$ In this case, \[eq:25\] () = (c\_u\^2c\_u) . In the case where $\log (p\vee T)\big/(\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor)< 1,$ the maximum in the expression $(\ref{eq:24})$ becomes $\surd\{\log (p\vee T)\big/(\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor)\},$ however the minimum in the expression (\[eq:25\]) remains the same.
$\rule{5.5in}{0.1mm}$
\[lem:nearoptimalcross.check\] Suppose Condition B holds and let $\vep_{tj}$ be as defined in (\[def:epsilons\]). Let $T\ge\log(p\vee T)$ and $\log(p\vee T)\le Tv_T\le Tu_T$ be any non-negative sequences. Then for any $c_{u}>0,$ we have, \_\_[t=T\^0+1]{}\^[T]{}\_[tj]{}z\_[t,-j]{}\^T\_48(2c\_u)\^2 ()\^, with probability at least $1-2\exp\big\{-(c_{u1}-3)\log(p\vee T)\big\},$ with $c_{u1}=c_u\wedge\surd(c_u/2).$
The proof of this result is very similar to that of Lemma \[lem:nearoptimalcross\], the difference being utilization of the additional assumption $Tv_T\ge \log(p\vee T),$ in order to obtain this sharper bound. Proceeding as in (\[eq:3\]) we have, pr(|\_[t=T\^0+1]{}\^[T]{}\_[tj]{}z\_[t,-j,k]{}|>d(T-T\^0))2{-()}, where $\la_1=48\si^2\surd(1+\nu^2).$ Choose $d=\la_1\{2c_{u}\log (p\vee T)\big/(\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor)\}^{1/2},$ then, (T-T\^0)&=&c\_[u]{}(pT),,\
(T-T\^0)&=&(c\_[u]{}/2){(pT)(T-T\^0)}\^[1/2]{}(c\_[u]{}/2) (pT) , where we used $\big(\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor\big)\ge Tv_T\ge \log(p\vee T).$ Substituting back in the probability bound yields, |\_[t=T\^0+1]{}\^[T]{}\_[tj]{}z\_[t,-j,k]{}|\_1{}\^[1/2]{}, with probability at least $1-2\exp\{-c_{u1} \log (p\vee T)\},$ with $c_{u1}=c_u\wedge \surd(c_u/2).$ Finally applying a union bound over $j=1,...,p,$ $k=1,...,p-1$ and over the at most $T$ distinct values of $\lfloor T\tau\rfloor$ for $\tau\in{{\cal G}}(u_T,v_T),$ yields the statement of this lemma.
$\rule{5.5in}{0.1mm}$
\[lem:nearoptimal.sqterm\] Let $\Phi(\cdotp,\cdotp)$ be as defined in (\[def:Phi\]) and suppose Condition B holds and $T\ge \log(p\vee T).$ Additionally, let $u_T,$ $v_T$ be non-negative sequences satisfying $\log (p\vee T)\le Tv_T\le Tu_T.$ Then for any constant $c_u>0,$ we have, &(i)&\_(\^0,\^0)v\_T\_[2,2]{}\^2-16(2c\_u)\^2\_[2,2]{}\^2()\^,\
&(ii)&\_(\^0,\^0)u\_T\_[2,2]{}\^2+16(2c\_u)\^2\_[2,2]{}\^2()\^ with probability at least $1-2\exp\{-(c_{u1}-1)\log(p\vee T)\},$ where $c_{u1}=c_u\wedge \surd(c_u/2).$
Note that $\sum_{j=1}^p\big(\|z_{t,-j}^T\eta^0_{(j)}\|_2^2-E\|z_{t,-j}^T\eta^0_{(j)}\|_2^2\big)\sim {\rm subE}(\la),$ where $\la=16\si^2\xi_{2,2}^2.$ For any fixed $\tau \in{{\cal G}}(u_T,v_T),$ applying the Bernstein’s inequality (Lemma \[lem:subetail\]) we obtain, pr{|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(z\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2-Ez\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2)|d(T-T\^0)}\
2{-()} Choose $d=\la\{2c_u\log(p\vee T)\big/(\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor)\}^{1/2}$ and observe that, (T-T\^0)&=&c\_u(pT)\
(T-T\^0)&=& (c\_u/2){Tv\_T(pT)}\^[1/2]{}(c\_u/2)(pT) where the inequality follows from the assumption $Tv_T\ge \log(p\vee T).$ A substitution back in the above probability bound yields, \[eq:30\] |\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(z\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2-Ez\_[t,-j]{}\^T\^0\_[(j)]{}\_2\^2)|(2c\_u){}\^ with probability at least $1-2\exp\big(-c_{u1}\log(p\vee T)\big),$ $c_{u1}=c_u\wedge \surd(c_u/2).$ Applying a union bound over at most $T$ distinct values of $\lfloor T\tau\rfloor,$ yields the bound (\[eq:30\]) uniformly over $\tau.$ The statements of this lemma are now a direct consequence.
$\rule{5.5in}{0.1mm}$
\[lem:etabounds.check\] Let $\Phi(\cdotp,\cdotp)$ be as defined in (\[def:Phi\]) and suppose Condition B holds and $T\ge \log(p\vee T).$ Let $\check\mu_{(j)}$ and $\check\g_{(j)},$ $j=1,...,p$ be Step 1 edge estimates of Algorithm 1, and $u_T, v_T$ be any non-negative sequences satisfying $\log (p\vee T)\le Tv_T\le Tu_T.$ Then, &(i)&\_(\^0,\^0)v\_T\_[2,2]{}\^2-c\_u\^2\_[2,2]{}\^2()\^,\
&(ii)&\_ (-\^0,-\^0)c\_u(\^2) u\_T(s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2) with probability $1-o(1).$ Furthermore, when $u_T\ge c_u\si^4\log (p\vee T)\big/T\phi^2,$ we have, &(iii)&\_(\^0,\^0)2u\_T\_[2,2]{}\^2,\
&(iv)&\_|(-\^0,\^0)|c\_u(\^2) u\_T \_[2,2]{}{s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2}\^, with probability at least $1-o(1).$
Part (i) and Part (iii) are a direct consequence of Lemma \[lem:nearoptimal.sqterm\]. To prove Part (ii), first note that from Theorem \[thm:est.nuisance.para\] we have that $\check\mu_{(j)}-\mu^0_{(j)}\in{{\cal A}}_{1j},$ and $\check\g_{(j)}-\g^0_{(j)}\in{{\cal A}}_{2j},$ $j=1,...,p,$ with probability at least $1-o(1).$ It can be verified that this property yields $\|\check\eta_{(j)}-\eta^0_{(j)}\|_1\le c_u \surd s\|\check\eta_{(j)}-\eta^0_{(j)}\|_2$ (see, e.g., (\[eq:9\])). Now applying Part (ii) of Lemma \[lem:WUURE\] yields, \_ (-\^0,-\^0)c\_u(\^2) u\_T(s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2) with probability at least $1-o(1).$ Part (iv) follows by an application of the Cauchy-Schwarz inequality together with the bounds of Part (ii) and Part (iii) (see (\[eq:5\])). This completes the proof of this lemma.
$\rule{5.5in}{0.1mm}$
\[lem:term123.check\] Suppose Condition B holds and $T\ge \log(p\vee T).$ Let $\check\mu_{(j)},$ $\check\g_{(j)},$ $j=1,...,p$ be Step 1 estimates of Algorithm 1, and assume $u_T, v_T$ satisfy $\log (p\vee T)\le Tv_T\le Tu_T.$ Then, (i)\_\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p \_[(j)]{}\^Tz\_[t,-j]{}\_2\^2\
\_[2,2]{}\^2\
(ii)\_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p(\_[(j)]{}-\^0\_[(j)]{})\^Tz\_[t,-j]{}z\_[t,-j]{}\^T\_[(j)]{}|\
c\_u(\^2)\_[2,2]{}u\_T{s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2}\^\
(iii)\_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T\_[(j)]{}|\
c\_u(1+\^2)\^2\_[2,1]{}()\^+ c\_[u]{}(1+\^2)\^2()\^\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_1, each with probability at least $1-o(1).$
Let $\Phi(\cdotp,\cdotp)$ be as defined in (\[def:Phi\]). Then note that $\Phi(\check\eta,\check\eta)=\Phi(\eta^0,\eta^0)+2\Phi(\check\eta-\eta^0,\eta^0)+\Phi(\check\eta-\eta^0,\check\eta-\eta^0).$ Using this relation together with the bounds of Part (i) and Part (iv) of Lemma \[lem:etabounds.check\] we obtain, \_(,)&& \_(\^0,\^0)-2\_|(-\^0,\^0)|\
&& v\_T\_[2,2]{}\^2- c\_u\^2\_[2,2]{}\^2{}\^- c\_u(\^2)u\_T \_[2,2]{}(s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^ with probability at least $1-o(1).$ To prove Part (ii), note that using identical arguments as in the proof of Lemma \[lem:etabounds.check\] it can be shown that, &&\_(-\^0,-\^0)c\_u(\^2)u\_Ts\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2,\
&&\_|(-\^0,\^0)|c\_u(\^2) u\_T \_[2,2]{}{s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2}\^, with probability at least $1-o(1).$ The above inequalities and the relation $\Phi\big(\check\g-\g^0,\check\eta\big)\le \big|\Phi(\check\g-\g^0,\check\eta-\eta^0)\big|+\big|\Phi(\check\g-\g^0,\eta^0)\big|,$ together with applications of the Cauchy-Schwarz inequality yields, \_ |(-\^0,)| c\_u(\^2) u\_T(s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^(s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2)\^\
+ c\_u(\^2) u\_T \_[2,2]{}{s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2}\^\
c\_u(\^2)\_[2,2]{}u\_T{s\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_2\^2}\^ with probability at least $1-o(1).$ To prove Part (iii), note that, \_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T\_[(j)]{}|&&\_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T\^0\_[(j)]{}|\
&&+\_|\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p\_[tj]{}z\_[t,-j]{}\^T(\_[(j)]{}-\^0\_[(j)]{})|\
&:=& R1+R2. Now using Lemma \[lem:nearoptimalcross.check\], we have &R1&c\_[u]{}(1+\^2)\^2\_[2,1]{}()\^,[and]{}\
&R2&c\_[u]{}(1+\^2)\^2()\^\_[j=1]{}\^p\_[(j)]{}-\^0\_[(j)]{}\_1 with probability at least $1-o(1).$ Part (iii) now follows by combining bounds for terms $R1$ and $R2.$
$\rule{5.5in}{0.1mm}$
Uniform (over $\tau$) Restricted Eigenvalue Condition
=====================================================
\[lem:UURElem1\] Let $z_t\in\R^p,$ $t=1,...,T,$ be independent ${\rm subG}(\si)$ r.v.'s and $\la=16\si^2.$ Additionally, for any $s\ge 1,$ let ${{\cal K}}_p(s)=\{\delta\in\R^{p};\,\,\|\delta\|_2\le 1,\,\|\delta\|_0\le s\}.$ Then for any non-negative sequences $0\le v_T\le u_T,$ and any $d_1>0,$ we have for $T\ge 2,$ pr\
2{-()+3s(pT)}
Consider any fixed $\delta\in\R^p,$ with $\|\delta\|_2\le 1,$ then from Lemma \[lem:sqsubGsubE\] we have $\|z_t^T\delta\|_2^2-E\|z_t^T\delta\|_2^2\sim{\rm subE}(\la),$ with $\la=16\si^2.$ Now, for any fixed $\tau\in{{\cal G}}(u_T,v_T),$ $\tau\ge\tau^0$ applying Lemma \[lem:subetail\] (Bernstein’s inequality) we have, pr(|\_[t=T\^0+1]{}\^[T]{}z\_t\^T\_2\^2-Ez\_t\^T\_2\^2|>d(T-T\^0))2{-()} Choose $d=d_1Tu_T/(\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor)$ and observe that by definition of the set ${{\cal G}}(u_T,v_T),$ we have $Tv_T\le (\lfloor T\tau\rfloor-\lfloor T\tau^0\rfloor)\le Tu_T,$ this in turn yields $d_1\le d,$ and consequently, \[eq:6\] pr(|\_[t=T\^0+1]{}\^[T]{}z\_t\^T\_2\^2-Ez\_t\^T\_2\^2|d\_1u\_T)2{-()} Using the inequality (\[eq:6\]) and a covering number argument, it can be shown that (see, Lemma 15 of the supplementary materials of [@loh2012]) for any $s\ge 1,$ pr(\_[[[K]{}]{}\_p(2s)]{}|\_[t=T\^0+1]{}\^[T]{}z\_t\^T\_2\^2-Ez\_t\^T\_2\^2|d\_1u\_T)2{-()+2s(pT)}. Finally, uniformity over the set ${{\cal G}}(u_T,v_T)$ can be obtained by applying a union bound over the at most $T$ distinct values of $\lfloor T\tau\rfloor$ for $\tau\in{{\cal G}}(u_T,v_T),$ thus yielding the statement of this lemma.
$\rule{5.5in}{0.1mm}$
\[lem:WUURE\] Suppose Condition B holds and let $0\le v_T\le u_T$ be any non-negative sequences. Then for all $\delta_{(j)}\in\R^{p-1},$ $j=1,...,p,$ and $T\ge 2,$ we have, (i)\_\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p \_[(j)]{}\^T z\_[t,-j]{}z\_[t,-j]{}\^T\_[(j)]{}c\_u (\^2) u\_T(pT)(\_[j=1]{}\^p\_[(j)]{}\_2\^2+\_[j=1]{}\^p\_[(j)]{}\_1\^2) with probability at least $1-2\exp\big\{-\log (p\vee T)\big\}.$ Additionally assuming that $T\ge \log(p\vee T)$ and $v_T$ satisfies $Tv_T\ge \log(p\vee T),$ then for all $\delta_{(j)}\in\R^{p-1},$ $j=1,...,p,$ (ii) \_\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^p \_[(j)]{}\^T z\_[t,-j]{}z\_[t,-j]{}\^T\_[(j)]{}c\_u (\^2) u\_T(\_[j=1]{}\^p\_[(j)]{}\_2\^2+\_[j=1]{}\^p\_[(j)]{}\_1\^2) with probability at least $1-2\exp\big\{-\log (p\vee T)\big\}.$
w.l.o.g. assume $v_T\ge (1/T)$ (see, Lemma \[lem:nearoptimalcross\]). Now for any $s\ge 1,$ consider any non-negative $u_T,$ any $\delta_{(j)}\in{{\cal K}}_{p-1}(2s),$ $j=1,...,p.$ Then for any $d_1>0,$ applying a union bound to the result of Lemma \[lem:UURElem1\] over the components $j=1,...,p$ we obtain, \[eq:7\] \_\_|\_[t=T\^0+1]{}\^[T]{}z\_[t,-j]{}\^T\_[(j)]{}\_2\^2-Ez\_[t,-j]{}\^T\_[(j)]{}\_2\^2|d\_1u\_T with probability at least $1-2\exp\Big\{-\frac{Tv_T}{2}\Big(\frac{d_1^2}{\la^2}\wedge \frac{d_1}{\la}\Big)+4s\log (p\vee T)\Big\}.$ It can be shown that the bound (\[eq:7\]) in turn implies that (see, Lemma 12 of supplement of [@loh2012]), for all $\tau\in {{\cal G}}(u_T,v_T),$ and for all $\delta_{(j)}\in\R^{p-1},$ $j=1,...,p,$ |\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^pz\_[t,-j]{}\^T\_[(j)]{}\_2\^2-Ez\_[t,-j]{}\^T\_[(j)]{}\_2\^2|27d\_1u\_T(\_[j=1]{}\^p\_[(j)]{}\_2\^2+(1/s)\_[j=1]{}\^p\_[(j)]{}\_1\^2) with probability at least $1-2\exp\Big\{-\frac{Tv_T}{2}\Big(\frac{d_1^2}{\la^2}\wedge \frac{d_1}{\la}\Big)+4s\log (p\vee T)\Big\}.$ Now choose $d_1=10\la\log(p\vee T),$ and note that $\frac{Tv_T}{2}\Big(\frac{d_1^2}{\la^2}\wedge \frac{d_1}{\la}\Big)\ge 5\log(p\vee T).$ This follows since $Tv_T\ge 1,$ and that $d_1/\la\ge 1.$ A substitution back in the probability bound yields, |\_[t=T\^0+1]{}\^[T]{}\_[j=1]{}\^pz\_[t,-j]{}\^T\_[(j)]{}\_2\^2-Ez\_[t,-j]{}\^T\_[(j)]{}\_2\^2|270 u\_T(pT){\_[j=1]{}\^p\_[(j)]{}\_2\^2+\_[j=1]{}\^p\_[(j)]{}\_1\^2}, with probability at least $1-2\exp\big\{-5\log(p\vee T)+4s\log (p\vee T)\big\}.$ The statement of Part (i) follows upon setting $s= 1.$ The proof of Part (ii) is quite analogous. This can be obtained by proceeding as earlier with (\[eq:7\]) above, and additionally utilizing $Tv_T\ge \log(p\vee T),$ and setting $d_1=10\la,$ instead of the choice made for Part (i). This completes the proof of this result.
$\rule{5.5in}{0.1mm}$
\[lem:lower.RE.ordinary\] Suppose Condition A$'$ and B hold, then for $i=1,2,$ we have, \_[j=1,...,p;]{}\_\_ \_[t=1]{}\^[T]{}\^Tz\_[t,-j]{}z\_[t,-j]{}\^T. with probability at least $1-2\exp\{-c_u\log(p\vee T)\},$ for some $c_u>0$ and for $T$ sufficiently large.
Lemma \[lem:lower.RE.ordinary\] is a nearly direct extension of the usual restricted eigenvalue condition. Its proof is analogous to those available in the literature, see, e.g., Corollary 1 of [@loh2012]. In comparison to the typical restricted eigenvalue condition, Lemma \[lem:lower.RE.ordinary\] has additional uniformity over $\tau,$ $i$ and $j,$ which can be obtained by simply using additional union bounds. $\rule{5.5in}{0.1mm}$
Auxiliary results
=================
In the following Definitions \[def:subg\] and \[def:sube\], and Lemmas \[lem:tailb\]-\[lem:subetail\], we provide basic properties of sub-gaussian and sub-exponential distributions. These are largely reproduced from [@vershynin2019high] and [@rigollet201518]. Theorems \[thm:kolmogorov\] and \[thm:argmax\] below reproduce Kolmogorov’s inequality and the argmax theorem. Lemma \[lem:condnumberbound\] provides an upper bound for the $\ell_2$ norms of the parameter vectors defined in Section \[sec:intro\].
\[def:subg\] [**Sub-gaussian r.v.**]{}: A random variable $X\in\R$ is said to be sub-gaussian with parameter $\si>0$ (denoted by $X\sim{\rm subG(\si)}$) if $E(X)=0$ and its moment generating function satisfies $$E\big(\e^{tX}\big)\le\e^{t^2\si^2/2},\quad \mbox{for all}\,\, t\in\R.$$ Furthermore, a random vector $X\in\R^p$ is said to be sub-gaussian with parameter $\si,$ if the inner products $\langle X, v\rangle\sim {\rm subG}(\si)$ for any $v\in\R^p$ with $\|v\|_2 = 1.$
\[def:sube\] [**Sub-exponential r.v.**]{}: A random variable $X\in\R$ is said to be sub-exponential with parameter $\si>0$ (denoted by $X\sim{\rm subE(\si)}$) if $E(X)=0$ and its moment generating function satisfies $$E\big(\e^{tX}\big)\le\e^{t^2\si^2/2},\quad \mbox{for all}\,\, |t|\le 1\big/\si.$$
\[lem:tailb\]\[Tail bounds\] (i) If $X\sim {\rm subG}(\si),$ then for any $\la>0,$ $$pr(|X|\ge\la)\le 2\exp\big(-\la^2\big/2\si^2\big).$$ (ii) If $X\sim {\rm subE}(\si),$ then for any $\la>0,$ $$pr(|X|\ge\la)\le 2\exp\Big\{-\frac{1}{2}\Big(\frac{\la^2}{\si^2}\wedge\frac{\la}{\si}\Big)\Big\}.$$
This proof is a simple application of the Markov inequality. For any $t>0,$ $$pr(X\ge\la)=pr\big(\e^{tX}\ge\e^{t\la}\big)\le\e^{-t\la}E\big(\e^{tX}\big)\le\e^{-t\la+t^2\si^2/2}.$$ Minimizing over $t>0$ yields the choice $t^*=\la/\si^2,$ and substituting in the above bound, we obtain, $$pr(X\ge\la)\le\inf_{t>0}\e^{-t\la+t^2\si^2/2}=\e^{-\la^2/2\si^2}.$$ Repeating the same for $P(X\le -\la)$ yields part (i) of the lemma. To prove Part (ii), repeat the above argument with $t\in(0,1/\si],$ to obtain, $$pr(X\ge\la)=pr\big(\e^{tX}\ge\e^{t\la}\big)\le\e^{-t\la+t^2\si^2/2}.$$ As in the subgaussian case, to obtain the tightest bound one needs to find $t^*$ that minimizes $-t\la+t^2\si^2/2,$ with the additional constraint for this subexponential case that $t\in(0,1/\si].$ We know that the unconstrained minimum occurs at $t^*=\la/\si^2>0.$ Now consider two cases:
1. If $t^*\in(0,1/\si] \Leftrightarrow\la\le \si,$ then the unconstrained minimum coincides with the constrained minimum, and substituting this value yields the same tail behavior as in the subgaussian case.
2. If $t^*>(1/\si)\Leftrightarrow\la>\si,$ then note that $-t\la+t^2\si^2/2$ is decreasing in $t$ in the interval $(0,(1/\si)],$ thus the minimum occurs at the boundary $t=1/\si.$ Substituting in the tail bound we obtain for this case, $$pr(X\ge\la)\le\e^{-t\la+t^2\si^2/2}\Big|_{t=1/\si}=\exp\big\{-(\la/\si)+(1/2)\big\}\le\exp\big\{-\la\big/(2\si)\big\},$$ where the final inequality follows since $\la>\si.$
Part (ii) of the lemma is obtained by combining the results of the above two cases.
$\rule{5.5in}{0.1mm}$
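The tail bounds of Lemma \[lem:tailb\] can also be checked empirically. The following Python snippet is a minimal Monte Carlo sanity check and not part of the formal argument; the standard normal (which is ${\rm subG}(1)$), the threshold $\la=2.5,$ the sample size, and the use of $X^2-E(X^2)\sim{\rm subE}(16)$ from Lemma \[lem:sqsubGsubE\] are choices made purely for illustration.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, lam = 10**6, 2.5

# Part (i): a standard normal is subG(1).
x = rng.standard_normal(n)
emp_g = np.mean(np.abs(x) > lam)
bound_g = 2 * np.exp(-lam**2 / 2.0)

# Part (ii): X^2 - E[X^2] with X ~ subG(1) is subE(16) (Lemma lem:sqsubGsubE).
z = x**2 - 1.0
sig_e = 16.0
emp_e = np.mean(np.abs(z) > lam)
bound_e = 2 * np.exp(-0.5 * min(lam**2 / sig_e**2, lam / sig_e))

print(f"subG: empirical {emp_g:.4f} <= bound {bound_g:.4f}")
print(f"subE: empirical {emp_e:.4f} <= bound {bound_e:.4f}")
\end{verbatim}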
\[lem:momentprop\] (i) If $X\sim {\rm subG}(\si),$ then $$E|X|^k\le 3k\,\si^{k} k^{k/2},\quad \mbox{for all}\,\, k\ge 1.$$ (ii) If $X\sim {\rm subE}(\si),$ then $$E|X|^k\le 4\si^k k^k,\quad \mbox{for all}\,\, k> 0.$$
Consider $X\sim {\rm subG}(\si),$ and w.l.o.g. assume that $\si=1$ (else define $X^*=X\big/\si$). Using the integrated tail probability expectation formula, we have for any $k> 0,$ $$\begin{aligned}
E|X|^k&=&\int_{0}^{\iny}pr(|X|^k>t)\,dt=\int_{0}^{\iny}pr(|X|>t^{1/k})\,dt\\
&\le& 2\int_0^{\iny}\exp\big(-t^{2/k}\big/2\big)\,dt\\
&=& 2^{k/2}k\int_0^{\iny}\e^{-u}u^{k/2-1}\,du,\qquad u=t^{2/k}\big/2\\
&=& 2^{k/2}k\, \G(k/2).\end{aligned}$$ Here the first inequality follows from the tail bound of Lemma \[lem:tailb\]. Now, for $x\ge 1/2,$ we have the inequality $\G(x)\le 3x^x,$ thus for $k\ge 1$ we have $\G(k/2)\le 3(k/2)^{(k/2)}.$ A substitution back in the moment bound yields the desired bound of Part (i).
To prove the moment bound of Part (ii), as before assume w.l.o.g. that $\si=1.$ Consider the inequality $$|x|^k\le k^k\big(e^{x}+e^{-x}\big),$$ which is valid for all $x\in\R$ and $k>0.$ Substitute $x=X$ and take expectations to get, $$E|X|^k\le k^k\big(E\e^{X}+E\e^{-X}\big).$$ Since in this case $\si=1,$ from the mgf condition at $t=\pm 1$ we have $E\e^X\le\e^{1/2}\le 2,$ and $E\e^{-X}\le 2.$ Thus for any $k>0,$ $$E|X|^k\le 4k^k.$$ This yields the desired moment bound of Part (ii).
$\rule{5.5in}{0.1mm}$
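A case of Part (ii) worth recording, since it is invoked repeatedly above (for instance to bound ${\rm var}(\z_t)$ in the proof of Lemma \[lem:optimalcross\]), is $k=2$: for $X\sim{\rm subE}(\si),$ $$E(X^2)\le 4\si^2\, 2^2=16\si^2,$$ so that, in particular, ${\rm var}(\z_t)=E(\z_t^2)\le 16\la_2^2$ for the variable $\z_t\sim{\rm subE}(\la_2)$ of Lemma \[lem:subez\].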
\[lem:lcsubG\] Assume that $X\sim{\rm subG}(\si),$ and that $\al\in\R,$ then $\alpha X\sim{\rm subG}(|\alpha|\si).$ Moreover if $X_1\sim{\rm subG}(\si_1)$ and $X_2\sim{\rm subG}(\si_2),$ then $X_1+X_2\sim {\rm subG}(\si_1+\si_2).$
The first part follows directly from the inequality $E(\e^{t\al X})\le \exp(t^2\al^2\si^2/2).$ To prove the second part, use Hölder’s inequality to obtain, $$\begin{aligned}
E\big(\e^{t(X_1+X_2)}\big)&=&E\big(\e^{tX_1}\e^{tX_2}\big)\le\big\{E\big(\e^{tX_1p}\big)\big\}^{1/p}\big\{E\big(\e^{tX_2q}\big)\big\}^{1/q}\\
&\le& \big\{\e^{\frac{t^2}{2}\si_1^2p^2}\big\}^{1/p}\big\{\e^{\frac{t^2}{2}\si_2^2q^2}\big\}^{1/q}=\e^{\frac{t^2}{2}(p\si_1^2+q\si_2^2)},\end{aligned}$$ where $p,q\in[1,\iny],$ with $1/p+1/q=1.$ Choose $p^{*}=(\si_2/\si_1)+1,$ $q^*=(\si_1/\si_2)+1$ to obtain $E(\e^{t(X_1+X_2)})\le \exp\big\{\frac{t^2}{2}(\si_1+\si_2)^2\big\}.$ This completes the proof of this lemma.
$\rule{5.5in}{0.1mm}$
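For concreteness, the algebra behind the choice of $p^*$ and $q^*$ in the preceding proof is $$p^*\si_1^2+q^*\si_2^2=\Big(\frac{\si_2}{\si_1}+1\Big)\si_1^2+\Big(\frac{\si_1}{\si_2}+1\Big)\si_2^2=\si_1^2+2\si_1\si_2+\si_2^2=(\si_1+\si_2)^2,$$ while $1/p^*+1/q^*=\si_1/(\si_1+\si_2)+\si_2/(\si_1+\si_2)=1,$ so $(p^*,q^*)$ is a valid Hölder pair.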
\[lem:lcsubE\] Assume that $X\sim{\rm subE(\si)},$ and that $\al\in\R,$ then $\alpha X\sim{\rm subE}(|\alpha|\si).$ Moreover, assume that $X_1\sim{\rm subE(\si_1)}$ and $X_2\sim{\rm subE(\si_2)},$ then $X_1+X_2\sim{\rm subE(\si_1+\si_2)}.$
The proof of Lemma \[lem:lcsubE\] is analogous to that of Lemma \[lem:lcsubG\] and is thus omitted.
$\rule{5.5in}{0.1mm}$
\[lem:sqsubGsubE\] Let $X\sim {\rm subG}(\si)$ then the random variable $Z=X^2-E[X^2]$ is sub-exponential: $Z\sim {\rm subE(16\si^2)}.$
$\rule{5.5in}{0.1mm}$
The next result is Bernstein’s inequality, reproduced from Lemma 1.13 of [@rigollet201518].
\[lem:subetail\] Let $X_1,X_2,...,X_T$ be independent random variables such that $X_t\sim {\rm subE}(\si).$ Then for any $d>0$ we have, $$pr\Big(\Big|\frac{1}{T}\sum_{t=1}^{T}X_t\Big|>d\Big)\le 2\exp\Big\{-\frac{T}{2}\Big(\frac{d^2}{\si^2}\wedge\frac{d}{\si}\Big)\Big\}.$$
$\rule{5.5in}{0.1mm}$
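Throughout the preceding sections Bernstein's inequality is used by choosing the deviation level $d$ so that the resulting exponent is a prescribed multiple of $\log(p\vee T)$ (see, e.g., Lemma \[lem:nearoptimalcross\] and Remark \[rem:calculation\]). The short Python sketch below reproduces this calculation numerically; the specific values of $n,$ $p,$ $T,$ the sub-exponential parameter and the constant $c$ are placeholders chosen only for illustration.

\begin{verbatim}
import numpy as np

def bernstein_tail(d, n, sig):
    # right-hand side of Lemma lem:subetail for the mean of n subE(sig) variables
    return 2.0 * np.exp(-(n / 2.0) * min(d**2 / sig**2, d / sig))

def deviation_level(n, p, T, sig, c=2.0):
    # d = c * sig * max{ log(p v T)/n, sqrt(log(p v T)/n) }, as in Remark rem:calculation
    r = np.log(max(p, T)) / n
    return c * sig * max(r, np.sqrt(r))

# illustrative placeholder values (not taken from the paper)
n, p, T, sig, c = 200, 500, 1000, 48.0, 2.0
d = deviation_level(n, p, T, sig, c)
# guaranteed level 2 (p v T)^{-(c ^ c^2)/2}, cf. Remark rem:calculation
guaranteed = 2.0 * max(p, T) ** (-min(c, c**2) / 2.0)
print(f"d = {d:.2f}, tail = {bernstein_tail(d, n, sig):.2e}, guaranteed <= {guaranteed:.2e}")
\end{verbatim}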
The next result is Kolmogorov’s inequality, reproduced from [@hajek1955generalization].
\[thm:kolmogorov\] If $\xi_1,\xi_2,...$ is a sequence of mutually independent random variables with mean values $E(\xi_k)=0$ and finite variance ${\rm var}(\xi_k)=D_k^2$ $(k=1,2,...),$ we have, for any $\vep>0,$ $$pr\Big(\max_{1\le k\le m}\big|\xi_1+\xi_2+...+\xi_k\big|>\vep\Big)\le \frac{1}{\vep^2}\sum_{k=1}^mD_k^2.$$
$\rule{5.5in}{0.1mm}$
The following theorem is the well known ‘Argmax’ theorem reproduced from Theorem 3.2.2 of [@vaart1996weak]
\[thm:argmax\] Let ${{\cal M}}_n,{{\cal M}}$ be stochastic processes indexed by a metric space $H$ such that ${{\cal M}}_n\Rightarrow{{\cal M}}$ in $\ell^{\iny}(K)$ for every compact set $K\subseteq H$[^9]. Suppose that almost all sample paths $h\to {{\cal M}}(h)$ are upper semicontinuous and possess a unique maximum at a (random) point $\h h,$ which as a random map in $H$ is tight. If the sequence $\h h_n$ is uniformly tight and satisfies ${{\cal M}}_n(\h h_n)\ge \sup_h {{\cal M}}_n(h)-o_p(1),$ then $\h h_n\Rightarrow \h h$ in $H.$
$\rule{5.5in}{0.1mm}$
\[lem:condnumberbound\] Suppose Condition B holds, and let $\mu^0_{(j)}$ and $\g^0_{(j)}$ be as defined in (\[def:muga\]). Then we have, $$\max_{1\le j\le p}\big(\|\mu^0_{(j)}\|_2\,\vee\,\|\g^0_{(j)}\|_2\big)\le\nu.$$
Let $\Omega=\Sigma^{-1}$ be the precision matrix corresponding to $\Si.$ Then we can write $\Om_{jj}=(\Si_{jj}-\Si_{j,-j}\mu^0_{(j)})^{-1},$ and $\Om_{-j,j}=-\Om_{jj}\mu^0_{(j)},$ for each $j=1,...,p$ (see, e.g., [@yuan2010high]). We also have that $1\big/\phi\le |\Om_{jj}|\le 1\big/\ka,$ for each $j.$ Now note that the $\ell_2$ norms of the rows (or columns) of $\Om$ are bounded above, i.e., $\|\Om_{j\cdot}\|_2=\|\Om e_j\|_2\le 1/\ka.$ This finally implies that \[eq:2\] $$\|\mu^0_{(j)}\|_2=\big\|-\Om_{-j,j}\big/\Om_{jj}\big\|_2\le \|\Om_{j\cdot}\|_2\big/ |\Om_{jj}|\le \phi\big/\ka=\nu.$$ Since the r.h.s. in (\[eq:2\]) is free of $j,$ this implies that $\max_{j}\|\mu^0_{(j)}\|_2\le \nu.$ Identical arguments can be used to show that $\max_{j}\|\g^0_{(j)}\|_2\le \nu.$ These two statements together imply the statement of the lemma.
$\rule{5.5in}{0.1mm}$
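The two precision-matrix identities used in this proof are easy to verify numerically. The Python snippet below is a sanity check only; the covariance matrix it builds is an arbitrary positive definite example (an assumption made for illustration), not one appearing in the paper, and $\ka,$ $\phi$ are taken to be its exact extreme eigenvalues.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
p = 6
a = rng.standard_normal((p, p))
Sigma = a @ a.T + p * np.eye(p)        # an arbitrary positive definite covariance
Omega = np.linalg.inv(Sigma)           # precision matrix
eigs = np.linalg.eigvalsh(Sigma)
kappa, phi = eigs.min(), eigs.max()    # play the roles of the eigenvalue bounds

for j in range(p):
    idx = [k for k in range(p) if k != j]
    # population regression coefficient of z_j on z_{-j}
    mu_reg = np.linalg.solve(Sigma[np.ix_(idx, idx)], Sigma[idx, j])
    # the expression -Omega_{-j,j}/Omega_{jj} used in the proof
    mu_prec = -Omega[idx, j] / Omega[j, j]
    assert np.allclose(mu_reg, mu_prec)
    assert np.linalg.norm(mu_reg) <= phi / kappa + 1e-10

print("nodewise coefficients equal -Omega_{-j,j}/Omega_{jj}; all l2 norms <= phi/kappa")
\end{verbatim}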
Additional numerical results {#sec:add.numerical}
============================
[^1]: Email: [email protected]
[^2]: We assume $p\ge 3$ throughout the article. This is done so that $\log (p)\ge 1.$ This is not a necessary condition and is only assumed for clarity of exposition.
[^3]: We choose the fourth root of $|l-m|$ so as to somewhat preserve the magnitude of correlations
[^4]: The matrices $\Si$ and $\D$ are constructed so as to preserve the jump $\psi,$ irrespective of the dimension $p.$ However, slight numerical fluctuations of order $\approx 10^{-3}$ are observed, potentially due to the numerical inversion of large matrices undertaken to calculate $\psi.$
[^5]: Since by construction of $\tilde\tau$ we have, ${{\cal U}}(\tilde\tau,\h\g,\h\g)\le 0.$
[^6]: Consider $c_1^m/T^{mb}\le (c_1\big/\log T)^m(\log T\big/T)^{mb}\le (1/T^{mb_1}),$ for any $0<b_1<b,$ for $T$ sufficiently large.
[^7]: Here $\Big\|\sum\vep_{tj}z_{t,-j}^T\Big\|_{\iny}= \max_{j,k}\Big|\sum\vep_{tj}z_{t,-j,k}\Big|.$
[^8]: See, Remark \[rem:calculation\]
[^9]: i.e., $\sup_{h\in K}\big|{{\cal M}}_n(h)-{{\cal M}}(h)\big|\to^p 0.$
---
abstract: 'If the favored hierarchical cosmological model is correct, then the Milky Way system should have accreted $\sim 100-200$ luminous satellite galaxies in the past $\sim 12$ Gyr. We model this process using a hybrid semi-analytic plus N-body approach which distinguishes explicitly between the evolution of light and dark matter in accreted satellites. This distinction is essential to our ability to produce a realistic stellar halo, with mass and density profile much like that of our own Galaxy, and a surviving satellite population that matches the observed number counts and structural parameter distributions of the satellite galaxies of the Milky Way. Our model stellar halos have density profiles which typically drop off with radius faster than those of the dark matter. They are assembled from the inside out, with the majority of mass ($\sim 80\%$) coming from the $\sim 15$ most massive accretion events. The satellites that contribute to the stellar halo have median accretion times of $\sim 9$ Gyr in the past, while surviving satellite systems have median accretion times of $\sim 5$ Gyr in the past. This implies that stars associated with the inner halo should be quite different chemically from stars in surviving satellites and also from stars in the outer halo or those liberated in recent disruption events. We briefly discuss the expected spatial structure and phase space structure for halos formed in this manner. Searches for this type of structure offer a direct test of whether cosmology is indeed hierarchical on small scales.'
author:
- 'James S. Bullock & Kathryn V. Johnston'
title: 'Tracing Galaxy Formation with Stellar Halos I: Methods '
---
Introduction
============
There has been a long tradition of searching in the stellar halo of our Galaxy for signatures of its formation. Stars in the halo provide an important avenue for testing theories of galaxy formation because they have long orbital time periods, have likely suffered little from dissipation effects, and tend to inhabit the outer regions of the Galaxy where the potential is relatively smooth and slowly evolving. The currently favored Dark Energy + Cold Dark Matter ($\Lambda$CDM) model of structure formation makes the specific prediction that galaxies like the Milky Way form hierarchically, from a series of accretion events involving lower-mass systems. This leads naturally to the expectation that the stellar halo should be formed primarily from disrupted, accreted systems. In this work, we develop an explicit, cosmologically-motivated model for stellar halo formation using a hybrid N-body plus semi-analytic approach. Set within the context of $\Lambda$CDM, we use this model to test the general consistency of the hierarchical formation scenario for the stellar halo and to provide predictions for upcoming surveys aimed at probing the accretion history of the Milky Way and nearby galaxies.
In a classic study, @els62 used proper motions and radial velocities of 221 dwarfs to show that those with lower metallicity (i.e. halo stars) tended to move on more highly eccentric orbits. They interpreted this trend as a signature of formation of the lower metallicity stars during a rapid radial collapse. In contrast, @sz78 suggested that the wide range of metallicities found in a sample of 19 globular clusters at a variety of Galactocentric radii instead indicated that the Galaxy formed from the gradual agglomeration of many sub-galactic sized pieces. A recent analysis of 1203 metal-poor Solar neighborhood stars, selected without kinematic bias [@chiba00], points to the truth being some combination of these two pictures: this sample contained a small concentration of very low metallicity stars on highly eccentric orbits (reminiscent of the 1962 work of Eggen, Lynden-Bell & Sandage) but otherwise showed no correlation of increasing orbital eccentricity with decreasing metallicity.
In the last decade, much more direct evidence for the lumpy build-up of the Galaxy has emerged in the form of clumps of stars in phase-space (and, in some cases, metallicity) both relatively nearby [@majewski96; @helmi99b] and at much larger distances. The most striking example in the latter category is the discovery of the Sagittarius dwarf galaxy [@ibata94; @ibata95] — hereafter Sgr — and its associated trails of debris [see @majewski03 for an overview of the many detections] which have now been traced entirely around the Galaxy [@ibata01b; @majewski03]. Large scale surveys of the stellar halo are now underway [@majewski00; @morrison00; @yanny00; @ivezic00; @newberg02], and have uncovered additional structures, not associated with Sgr [@newberg03; @martin04; @rocha-pinto04]. Moreover, recent advances in instrumentation are now permitting searches for and discoveries of analogous structures around other galaxies in the form of overdensities in integrated light [@shang98; @zheng99; @forbes03; @pohlen04] or, in the case of M31, star counts [@ibata01a; @ferguson02; @zucker04]. Given this plethora of discoveries, there can be little doubt that the accretion of satellites has been an important contributor to the formation of our and other stellar halos. In addition, both theoretical [@abadi03; @brook05a; @robertson05] and observational [@gilmore02; @yanny03; @ibata03; @crane03; @rocha-pinto03; @frinchaboy04; @helmi05] work is beginning to suggest that some significant fraction of the Galactic disk could also have been formed this way.
All of the above discoveries are in qualitative agreement with the expectations of hierarchical structure formation [@peebles:65; @ps74; @bfpr:84]. As the prevailing variant of this picture, $\Lambda$CDM is remarkably successful at reproducing a wide range of observations, especially on large scales [e.g. @eisenstein:05; @maller:05; @tegmark:04; @spergel:03; @percival:02]. On sub-galactic scales, however, the agreement between theory and observation is not as obvious [e.g. @simon05; @kazantzidis04b; @DB:04]. Indeed, the problems explaining galaxy rotation curve data, dwarf galaxy counts, and galaxy disk sizes have led some to suggest modifications to the standard paradigm, including an allowance for warm dark matter [e.g. @SL04], early-decaying dark matter [@kaplinghat:05], or non-standard inflation [@zb:02; @zb:03]. These modifications generally suppress fluctuation amplitudes on small scales, driving sub-galactic structure formation towards a more monolithic, non-hierarchical collapse. These issues bring to sharper focus a fundamental question in cosmology today: is structure formation truly hierarchical on small scales? Stellar halo surveys offer powerful data sets for directly answering this question.
Numerical simulations of individual satellites disrupting about parent galaxies can in many cases provide convincing similarities to the observed phase-space lumps. These models allow the observations to be interpreted in terms of the mass and orbit of the progenitor satellite [e.g., @velazquez95; @johnston95; @johnston99b; @johnston99c; @helmi01; @helmi03b; @law05], and even the potential of the galaxy in which it is orbiting [@johnston99a; @murali99; @ibata01b; @ibata04; @johnston05]. Nevertheless, a true test of hierarchical galaxy formation will require robust predictions for the frequency and character of the expected phase space structure of the halo.
Going beyond qualitative statements to model the full stellar halo (including substructure) within a cosmological context is non-trivial. The largest contributor of substructure to our own halo is Sgr, estimated to have a currently-bound mass of order $3\times
10^8 M_\odot$ [@law05]. Even the highest resolution cosmological N-body simulations would not resolve such an object with more than a few hundred particles, which would permit only a poor representation of the phase-space structure of its debris [see @helmi03a for an example of what can currently be done in this field]. Such simulations are computationally intensive, so the cost of examining more than a handful of halos is prohibitive and it is difficult to make statements about the variance of properties of halos that might be seen in a large sample of galaxies. Moreover, such simulations in general only follow the dark matter component of each galaxy not the stellar component. In their studies of thick disk and inner halo formation, Brook and collaborators [@brook03; @brook04a; @brook04b; @brook05a; @brook05b] have modeled the stellar components directly by simulating the evolution of individual galaxies as isolated spheres of dark matter and gas with small-scale density fluctuations superimposed to account for the large-scale cosmology. However, their sample size remains small and, though they are able to make general statements about the properties of their stellar halos, their resolution would prohibit a detailed phase-space analysis.
An alternative is to take an analytic or semi-analytic approach to halo building [e.g. @bullock01; @johnston01; @taylor04]. This allows the production of many halos, and the potential of including prescriptions to follow the stars separately from the dark matter. However, such techniques use only approximate descriptions of the dynamics and are unable to follow the fine details of the phase-space structure accurately.
In this study we develop a hybrid scheme, which draws on the strengths of each of the former techniques to build high resolution, full phase-space models of a statistical sample of [*stellar*]{} halos. Our approach is to vastly decrease the computational cost of a full cosmological simulation by modeling only those accretion events that contribute directly to the stellar halo in detail with N-body simulations, and to represent the rest of the galaxy with smoothly-evolving analytic functions. The baryonic component of each contributing event is followed using semi-analytic prescriptions.
The purpose of this paper is to describe our method, its strengths and limitations (§2), present the results of tests of the consistency of our models with general properties of galaxies and their satellite systems (§3) and outline some implications (§4). We summarize the conclusions in §5. In further work we will go on to compare the full phase-space structure of our halos in detail to observations and to examine the evolution of dark and light matter in satellite galaxies after their accretion.
Methods
=======
Our methods can be broadly separated into: (I) a [*simulation*]{} phase, which follows the phase-space evolution of the dark matter; and (II) a [*prescription*]{} phase, which associates a stellar mass with each dark matter particle. Specifically:
Phase I: Simulations
1. We generate merger trees for our parent galaxies using the method outlined in @somerville99 based on the Extended Press-Schechter (EPS) formalism [@lc93; see §2.1].
2. For each event in step IA, we run a high-resolution N-body simulation that tracks the evolution of the dark matter component of a satellite disrupting within an analytic, time-dependent, parent galaxy + host halo potential (see §2.2).
Phase II: Prescriptions
1. We follow the gas accretion history of each satellite prior to falling into the parent and track its star-formation rate using cosmologically-motivated, semi-analytic prescriptions (see §2.3).
2. We embed the stellar components generated in step IIA within each dark matter satellite by assigning a variable mass-to-light ratio to every particle that is tracked in the (Phase I) N-body simulations (see §2.4).
We consider the two-phase approach a necessary and acceptable simplification since it allows us to separate well-understood and justified approximations in Phase I from prescriptions that can be adjusted and refined during Phase II. In addition, this separation allows us to save computational time and use just one set of dark matter simulations to explore the effect of varying the details of how baryons are assigned to each satellite. A more complete discussion of the strengths and limitations of our scheme is given in §2.5.
Cosmological Framework
----------------------
Throughout this work we assume a $\Lambda$CDM cosmology with $\Omega_{\rm m}=0.3$, $\Omega_{\rm \Lambda}=0.7$, $\Omega_{\rm
b}h^2=0.024$, $h=0.7$, and $\sigma_8 = 0.9$. The implied baryon fraction is $\Omega_b/\Omega_{\rm m} = 0.16$. We focus on the formation of stellar halos for “Milky-Way” type galaxies. In all cases our $z=0$ host dark matter halos have virial masses $M_{\rm vir,0}=1.4 \times 10^{12} M_\odot$, corresponding virial radii $R_{\rm vir,0}=282$ kpc, and virial velocities $\Vvir = 144 \kms$. The quantities $\Mvir$ and $\Rvir$ are related by $$\Mvir = \frac{4\pi}{3}\, \rho_{\rm M}(z)\, \Delta_{\rm vir}(z)\, \Rvir^3, \label{eq:Rvir}$$ where $\rho_{\rm M}$ is the average matter density of the Universe and $\Delta_{\rm vir}$ is the “virial overdensity”. In the cosmology considered here, $\dvir(z=0) \simeq 337$, and $\dvir \rightarrow 178$ at $z \gsim 1$ [@bn:88]. The virial velocity is defined as $\Vvir \equiv \sqrt{G \Mvir/\Rvir}$.
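For concreteness, the short Python sketch below evaluates this virial relation for the adopted cosmology. It is an illustration only: $G$ is expressed in kpc (km/s)$^2$/M$_\odot$, $\Delta_{\rm vir}(z=0)=337$ is taken from the text, and the function names are ours rather than part of any production pipeline. Small differences from the quoted 282 kpc reflect rounding of $\Delta_{\rm vir}$ and the mean density.

```python
import numpy as np

G = 4.300917e-6       # gravitational constant in kpc (km/s)^2 / Msun
H0 = 70.0             # Hubble constant in km/s/Mpc (h = 0.7)
OMEGA_M = 0.3

# mean matter density today in Msun / kpc^3
RHO_CRIT0 = 3.0 * (H0 / 1000.0) ** 2 / (8.0 * np.pi * G)   # H0 converted to km/s/kpc
RHO_M0 = OMEGA_M * RHO_CRIT0

def virial_radius(m_vir, delta_vir=337.0, rho_m=RHO_M0):
    """Invert M_vir = (4 pi / 3) rho_M(z) Delta_vir(z) R_vir^3 for R_vir [kpc]."""
    return (3.0 * m_vir / (4.0 * np.pi * rho_m * delta_vir)) ** (1.0 / 3.0)

def virial_velocity(m_vir, r_vir):
    """V_vir = sqrt(G M_vir / R_vir), in km/s."""
    return np.sqrt(G * m_vir / r_vir)

m_vir = 1.4e12                       # Msun, the Milky-Way-like host at z = 0
r_vir = virial_radius(m_vir)
print(r_vir, virial_velocity(m_vir, r_vir))   # roughly 290 kpc and 144 km/s
```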
We generate a total of eleven random realizations of stellar halos. General properties of all eleven are summarized in Table \[halo\_tab\]. Any variations in our results for stellar halos among these are determined by differences in their accretion histories. In all subsequent figures we present results for four stellar halos (1,2,6, and 9) chosen to span the range of properties seen in our full sample.
### Semi-analytic accretion histories
We track the mass accretion and satellite acquisition of each parent galaxy by constructing merger trees using the statistical Monte Carlo method of [@somerville99] based on the EPS formalism [@lc93]. This method gives us a record of the masses and accretion times of all satellite halos and hence allows us to follow the mass accretion history of each parent as a function of lookback time. We explicitly note all satellites more massive than $M_{\rm min} = 5 \times 10^6 \Msun$ and treat all smaller accretion events as diffuse mass accretion. Column 2 of Table \[halo\_tab\] lists the total number of such events for each simulated halo. For further details see @lc93 [@somerville99; @zb:03]. Four examples of the cumulative mass accretion histories of parent galaxies generated in this manner are shown by the (jagged) solid lines in Figure \[buildup\_fig\].
### Satellite orbits
Upon accretion onto the host, each satellite is assigned an initial orbital energy based on the range of binding energies observed in cosmological simulations [@klypin99]. This is done by placing each satellite on an initial orbit of energy equal to the energy of a circular orbit of radius $R_{\rm circ} = \eta \Rvir$, with $\eta$ drawn randomly from a uniform distribution on the interval $[0.4,0.8]$. Here $\Rvir$ is the virial radius of the host halo at the time of accretion. We assign each subhalo an initial specific angular momentum $J = \epsilon
J_{\rm circ}$, where $J_{\rm circ}$ is the specific angular momentum of the aforementioned circular orbit and $\epsilon$ is the [*orbital circularity*]{}, which takes a value between $0$ and $1$. We choose $\epsilon$ from the binned distribution shown in Figure 2 of [@zb:03], which was designed to match the cosmological N-body simulation results of [@ghigna98], and is similar to the circularity distributions found in more recent N-body analyses [@zentner04; @benson05]. Finally, the plane of the orbit is drawn from a uniform distribution covering the halo sphere.
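A minimal sketch of this orbit-assignment recipe is given below (Python, illustrative only). Two simplifications are assumptions of the sketch rather than of the actual pipeline: the circular velocity is computed in a point-mass approximation instead of the full host potential, and the circularity is drawn from a uniform placeholder rather than the binned distribution of [@zb:03].

```python
import numpy as np

G = 4.300917e-6   # kpc (km/s)^2 / Msun
rng = np.random.default_rng(0)

def draw_orbit(m_host, r_vir):
    """Assign an initial satellite orbit following the recipe in the text."""
    eta = rng.uniform(0.4, 0.8)               # R_circ = eta * R_vir
    r_circ = eta * r_vir
    v_circ = np.sqrt(G * m_host / r_circ)     # point-mass stand-in for the host potential
    j_circ = r_circ * v_circ                  # specific angular momentum of that circular orbit
    eps = rng.uniform(0.3, 1.0)               # placeholder for the measured circularity distribution
    j_orbit = eps * j_circ
    # isotropic orientation of the orbital plane (unit normal vector)
    cos_t = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    normal = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return r_circ, j_orbit, normal

print(draw_orbit(1.4e12, 282.0))
```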
### Dark matter density distributions {#sec:dm}
We model all satellite and parent halos with the spherically averaged density profile of @nfw96 (NFW): $$\rho_{_{\rm NFW}}(r) = \rho_{\rm s} \left(\frac{r}{r_{\rm halo}}\right)^{-1} \left(1 + \frac{r}{r_{\rm halo}}\right)^{-2}, \label{eq:nfw_profile}$$ where $r_{\rm halo}$ ($\equiv r_{s}$ in NFW) is the characteristic inner scale radius of the halo. The normalization, $\rho_{\mathrm{s}}$, is set by the requirement that the mass interior to $\Rvir$ be equal to $\Mvir$. The value of $r_{\rm halo}$ is usually characterized in terms of the halo “concentration” parameter: $\cvir \equiv \Rvir/r_{\rm halo}$. The implied maximum circular velocity for this profile occurs at a radius $r_{\rm max}
\simeq 2.15 r_{\rm halo}$ and takes the value $\Vmax \simeq 0.466 \Vvir F(c)$, where $F(c) = \sqrt{c/[\ln(1+c)-c/(1+c)]}$.
For satellites, we set the value of $\cvir$ using the simulation results of @b01 and the corresponding relationship between halo mass, redshift, and concentration summarized by their analytic model. The median $\cvir$ relation for halos of mass $\Mvir$ at redshift $z$ is given [*approximately*]{} by $$\cvir \simeq 9.6 \left(\frac{\Mvir}{10^{13}\,h^{-1} \Msun}\right)^{-0.13} (1+z)^{-1},$$ although in practice we use the full analytic model discussed in [@b01].
For parent halos, we allow their concentrations to evolve self-consistently as their virial masses increase, as has been seen in the N-body simulations of @wechsler02. Rather than represent the halo growth as a series of discrete accretion events, we smooth over the Monte Carlo EPS merger tree by fitting the following functional form to the Monte Carlo mass accretion history for each halo: $$\begin{aligned}
M_{\rm vir}(a) & = & M_{\rm vir}(a_0) \exp \left[-2 a_c \left(\frac{a_0}{a} - 1 \right) \right].
\label{growth}\end{aligned}$$ Here $a \equiv (1+z)^{-1}$ is the expansion factor, and $a_c$ is the fitting parameter, corresponding to the value of the expansion factor at a characteristic “epoch of collapse”. @wechsler02 demonstrated that the value of $a_c$ connects in a one-to-one fashion with the halo concentration parameter: $$c(a) = 5.1\, \frac{a}{a_c}. \label{cofz}$$ Halos that form earlier (smaller $a_c$’s) are more concentrated.
Example fits to four of our halo mass accretion histories are shown by the smooth solid lines in Figure 1. The $a_c$ values for each of the halos in this analysis are listed in the third column of Table 1. Typical host halos in our sample have $\cvir \simeq 14$ at $z=0$, scale radii $r_{\rm halo}
\simeq 20$kpc, and maximum circular velocities $\Vmax \simeq 190 \kms$.
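The sketch below illustrates this step: a smooth history of the form of Equation \[growth\] is fit to a (mock) Monte Carlo EPS accretion history to recover $a_c$, which then fixes the concentration through Equation \[cofz\]. The mock data, function names, and fit settings are illustrative assumptions; in the actual pipeline the fitted array comes from the merger tree of each host.

```python
import numpy as np
from scipy.optimize import curve_fit

def mvir_of_a(a, a_c, m0, a0=1.0):
    """Smooth accretion history: M_vir(a) = M_vir(a0) * exp[-2 a_c (a0/a - 1)]."""
    return m0 * np.exp(-2.0 * a_c * (a0 / a - 1.0))

def concentration_of_a(a, a_c):
    """Concentration tied to the collapse epoch: c(a) = 5.1 a / a_c."""
    return 5.1 * a / a_c

# Mock "Monte Carlo" history: the smooth form plus small log-normal scatter.
rng = np.random.default_rng(1)
a_grid = np.linspace(0.2, 1.0, 40)
m_mc = mvir_of_a(a_grid, 0.36, 1.4e12) * np.exp(0.05 * rng.normal(size=a_grid.size))

popt, _ = curve_fit(mvir_of_a, a_grid, m_mc, p0=[0.3, 1.0e12])
a_c_fit, m0_fit = popt
print(a_c_fit, concentration_of_a(1.0, a_c_fit))   # a_c near 0.36, c(z=0) near 14
```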
N-body simulations of dark matter evolution
-------------------------------------------
Having determined the mass, accretion time and orbit of each satellite (§2.1.1 and §2.1.2), and the evolution of the potential into which it is falling (§2.1.3), we next run individual N-body simulations to track the dynamical evolution of each satellite halo separately. We follow only those that contain a significant stellar component (see §2.3 below). In practice, this restricts our analysis to satellite halos more massive than $\Mvir \gsim 10^8 \Msun$ — the number of such satellites infalling into each parent is listed in column 5 of Table \[halo\_tab\]. Based on our star-formation prescription discussed in §2.3, systems smaller than this never contain an appreciable number of stars and thus do not contribute significantly to the stellar halo.
### The parent galaxy potential
The parent galaxy is represented by a three-component bulge/disk/dark halo potential which we allow to evolve with time as the halo accretes mass. The (spherically-symmetric) dark halo potential at each epoch $a$ is given by the NFW potential generated by the dark matter distribution in equation (\[eq:nfw\_profile\]): $$\Phi_{\rm halo}(r) = -\frac{G M_{\rm halo}}{r_{\rm halo}}\, \frac{\ln\left(1 + r/r_{\rm halo}\right)}{r/r_{\rm halo}}, \label{haloeqn}$$ where $M_{\rm halo}= M_{\rm halo}(a)$ and $r_{\rm halo} = r_{\rm halo}(a)$ are the instantaneous mass and length scales of the halo respectively. The halo mass scale is related to the virial mass via $$\begin{aligned}
M_{\rm halo}&=&{M_{\rm vir} \over \ln (c+1) -c/(c+1)}. \end{aligned}$$
The disk and bulge are assumed to grow in mass and scale with the halo virial mass and radius: $$\Phi_{\rm disk}(R,Z)=-{GM_{\rm disk} \over
\sqrt{R^{2}+\left(R_{\rm disk}+\sqrt{Z^{2}+Z_{\rm disk}^{2}}\right)^{2}}},
\label{diskeqn}$$ $$\Phi_{\rm sphere}(r)=-{GM_{\rm sphere} \over r+r_{\rm sphere}},
\label{bulgeqn}$$ where $M_{\rm disk}(a)=1.0 \times 10^{11} (M_{\rm vir}/M_{\rm vir,0})
M_{\odot}$, $M_{\rm sphere}(a)=3.4 \times 10^{10} (M_{\rm vir}/M_{\rm
vir,0}) M_{\odot}$, $R_{\rm disk}=6.5 (r_{\rm vir}/r_{\rm vir,0})$ kpc, $Z_{\rm disk}=0.26
(r_{\rm vir}/r_{\rm vir,0})$ kpc and $r_{\rm sphere}=0.7(r_{\rm vir}/r_{\rm
vir,0})$ kpc.
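A sketch of the combined parent potential is given below. The disk term has the Miyamoto-Nagai form and the spherical term the Hernquist form quoted above; the illustrative values of $M_{\rm halo}$ and $r_{\rm halo}$ correspond roughly to the typical $z=0$ host quoted in §2.1.3 and, like the function names, are assumptions of the sketch.

```python
import numpy as np

G = 4.300917e-6   # kpc (km/s)^2 / Msun

def phi_halo(r, m_halo, r_halo):
    """NFW halo potential: -(G M_halo / r_halo) ln(1 + r/r_halo) / (r/r_halo)."""
    x = r / r_halo
    return -(G * m_halo / r_halo) * np.log(1.0 + x) / x

def phi_disk(R, Z, m_disk, r_disk, z_disk):
    """Miyamoto-Nagai-type disk potential, as in equation (diskeqn)."""
    return -G * m_disk / np.sqrt(R**2 + (r_disk + np.sqrt(Z**2 + z_disk**2))**2)

def phi_sphere(r, m_sph, r_sph):
    """Hernquist-type bulge potential, as in equation (bulgeqn)."""
    return -G * m_sph / (r + r_sph)

def parent_potential(R, Z, mvir_ratio=1.0, rvir_ratio=1.0,
                     m_halo=7.9e11, r_halo=20.0):
    """Total potential at cylindrical (R, Z) in kpc, returned in (km/s)^2.

    Disk and bulge masses and scales grow with M_vir(a)/M_vir,0 and
    R_vir(a)/R_vir,0 as quoted in the text; m_halo and r_halo are
    illustrative instantaneous halo parameters (roughly the z = 0 host).
    """
    r = np.sqrt(R**2 + Z**2)
    m_disk = 1.0e11 * mvir_ratio
    m_sph = 3.4e10 * mvir_ratio
    r_disk, z_disk, r_sph = 6.5 * rvir_ratio, 0.26 * rvir_ratio, 0.7 * rvir_ratio
    return (phi_halo(r, m_halo, r_halo)
            + phi_disk(R, Z, m_disk, r_disk, z_disk)
            + phi_sphere(r, m_sph, r_sph))

print(parent_potential(8.0, 0.0))   # potential near the solar circle
```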
### Satellite initial conditions
We use $10^5$ particles to represent the dark matter in each accreted satellite. Particles are initially distributed as an isotropic NFW model, with mass and scale chosen as described in §2.1.2. The phase-space distribution function is derived by integrating over the density and potential distributions $$f(\epsilon)= {1 \over \sqrt{8}\,\pi^2} \left[
\int_0^\epsilon {d^2\rho \over d\Psi^2} {d\Psi \over \sqrt{\epsilon -\Psi}} + \frac{1}{\sqrt{\epsilon}} \left(\frac{d\rho}{d \Psi}\right)_{\Psi=0} \right].
\label{fofe_eq}$$ with $\rho = \rho_{_{\rm NFW}}$ and where $\Psi = -\Phi_{_{\rm NFW}} + \Phi_0$ is the relative potential (such that $\Psi \rightarrow 0$ as $r \rightarrow \infty$) and $\epsilon = \Psi - v^2/2$ is the relative energy [see @bt87 for discussion]. This distribution function is used (in tabulated form) to generate a random realization. This ensures a stable satellite configuration — initial conditions generated by instead assuming a local Maxwellian velocity distribution have been shown to evolve [@kazantzidis04a]. Given $f(\epsilon)$, the differential energy distribution follows in a straightforward manner from the density of states, $g(\epsilon)$: $$\frac{dM}{d\epsilon} = f(\epsilon)\, g(\epsilon), \qquad g(\epsilon) \equiv 16\pi^2 \int_0^{r_\epsilon} \sqrt{2\left[\Psi(r)-\epsilon\right]}\; r^2\, dr,$$ where $r_\epsilon$ is the largest radius that can be reached by a star of relative energy $\epsilon$. The differential energy distribution for our initial halo is shown by the solid histogram in Figure \[embed\_fig\]. We see that the majority of the (dark matter) material in an infalling satellite is quite loosely bound.
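As an illustration of the density-of-states integral, the sketch below evaluates $g(\epsilon)$ for the NFW relative potential in the natural code units of the single rescaled initial-conditions file described in the next paragraph ($G = M_{\rm halo} = r_{\rm halo} = 1$). It is a stand-alone numerical check, not the production code, and the function names are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Units: G = M_halo = r_halo = 1, so Psi(0) -> 1 and relative energies lie in (0, 1).
def psi_nfw(r):
    """Relative NFW potential, Psi(r) = ln(1 + r) / r, with Psi -> 0 as r -> infinity."""
    return np.log(1.0 + r) / r

def density_of_states(eps, r_min=1.0e-8):
    """g(eps) = 16 pi^2 * integral_0^{r_eps} sqrt(2 [Psi(r) - eps]) r^2 dr,
    where r_eps (Psi(r_eps) = eps) is the largest radius reachable at energy eps."""
    r_eps = brentq(lambda r: psi_nfw(r) - eps, r_min, 1.0e8)
    integrand = lambda r: np.sqrt(max(2.0 * (psi_nfw(r) - eps), 0.0)) * r**2
    value, _ = quad(integrand, r_min, r_eps, limit=200)
    return 16.0 * np.pi**2 * value

# Loosely bound (small eps) orbits dominate the available phase-space volume.
for eps in (0.1, 0.3, 0.6):
    print(eps, density_of_states(eps))
```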
Rather than generating a unique $f(\epsilon)$ and particle distribution for each satellite in each accretion history, a single initial conditions file with unit mass and scale, and outer radius $R_{\rm out} = 35 r_{\rm halo}$ ($ = 35$ in our units) is used for all simulations with masses and scales appropriately rescaled for each run. Since all of our accreted satellites have concentrations $c < 35$, our set up effectively allows each accreted satellite’s mass profile to extend beyond its virial radius for several scale lengths. We do not expect this simplification to significantly affect our results because the light matter is always embedded in the very central regions of the halo ($r_\star \lsim r_{\rm halo}$) and the loosely bound outer material is quickly stripped away upon accretion.
In §2.4 we discuss our method of “embedding” star particles within the cores of the accreted satellite dark halos.
### Satellite evolution
The mutual interactions of the satellite particles are calculated using a basis function expansion code [@hernquist92]. The initial conditions file for the satellite is allowed to relax in isolation for ten dynamical times using this code to confirm stability. For each accretion event a single simulation is run, following the evolution of the relaxed satellite under the influence of its own and the parent galaxy’s potential, for the time since it was accreted (as generated by methods in §2.1.1) along the orbit chosen at random from the distribution discussed in §2.1.2.
Using this approach, the satellites are not influenced by each other, other than through the smooth growth of the parent galaxy potential. Nor does the parent galaxy react to the satellite directly. In order to mimic the expected decay of the satellite orbits due to dynamical friction (i.e. the interaction with the parent), we include a drag term on all particles within two tidal radii $r_{\rm
tide}$ of the satellite’s center, of the form proposed by @hashimoto03 and modified for NFW hosts by @zb:03. This approach includes a slight modification to the standard Chandrasekhar dynamical friction formula [e.g. @bt87]. The tidal radius $r_{\rm tide}$ is calculated from the instantaneous bound mass of the satellite $m_{\rm sat}$, the distance $r$ of the satellite to the center of the parent galaxy and the mass of the parent galaxy within that radius $M_r$ as $r_{\rm tide}=r (m_{\rm sat}/M_r)^{1/3}$.
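The sketch below illustrates the two ingredients of this step: the tidal radius as defined in the text, and a standard Chandrasekhar-type drag acceleration. The specific Coulomb-logarithm modification of @hashimoto03 and @zb:03 is not reproduced here; $\ln\Lambda$ is simply treated as an input, and the function names and example numbers are illustrative assumptions.

```python
import numpy as np
from math import erf

G = 4.300917e-6   # kpc (km/s)^2 / Msun

def tidal_radius(r, m_sat, m_enclosed):
    """r_tide = r * (m_sat / M_r)^(1/3), with M_r the parent mass within radius r."""
    return r * (m_sat / m_enclosed) ** (1.0 / 3.0)

def chandrasekhar_drag(v_vec, m_sat, rho_local, sigma_local, ln_lambda):
    """Standard Chandrasekhar dynamical-friction deceleration, in (km/s)^2 / kpc.

    rho_local in Msun/kpc^3, sigma_local and v_vec in km/s; the production runs
    use a modified Coulomb logarithm, which is not reproduced in this sketch.
    """
    v = np.linalg.norm(v_vec)
    x = v / (np.sqrt(2.0) * sigma_local)
    coeff = -4.0 * np.pi * G**2 * m_sat * rho_local * ln_lambda / v**3
    return coeff * (erf(x) - 2.0 * x / np.sqrt(np.pi) * np.exp(-x**2)) * v_vec

print(tidal_radius(50.0, 1.0e10, 5.0e11))                        # about 13.6 kpc
print(chandrasekhar_drag(np.array([0.0, 150.0, 0.0]), 1.0e10, 1.0e7, 120.0, 2.0))
```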
### Increasing phase-space resolution with test particles {#sec:test}
In this study, we are most interested in following the phase-space evolution of the stellar material associated with each satellite. This is assumed to be embedded deep within each dark matter halo (see §2.4) — typically only of order $10^4$ of the N-body particles in each satellite have any light associated with them at all. In order to increase the statistical accuracy of our analysis we sample the inner 12% of the energy distribution with an additional $1.2\times 10^5$ [*test*]{} particles. This does not increase the dynamic range of our simulation, but does allow us to more finely resolve the low surface brightness features we are interested in with only a modest increase in computational cost: we gain a factor of 10 in particle resolution with an increase of $\sim$25% in computing time. In this paper, we have used test particles only in generating the images shown in Figures \[haloviz1\_fig\] - \[phase\_diagram2\_fig\].
Following the satellites’ baryonic component
--------------------------------------------
We follow each satellite’s baryonic component using the expected mass accretion history of each satellite halo (prior to falling into the parent galaxy) in order to track the inflow of gas. The gas mass is then used to determine the instantaneous star formation rate and to track the buildup of stars within each halo. The physics of galaxy formation is poorly understood, and any attempt to model star formation and gas inflow into galaxies (whether semi-analytic or hydrodynamic) necessarily requires free parameters. Our own prescription requires three “free” parameters: $z_{\rm re}$, the redshift of reionization (see §\[sec:reion\]); $f_{\rm gas}$, the fraction of baryonic material in the form of cold gas (i.e. capable of forming stars) that remains bound to each satellite at accretion (see §\[sec:gas\]); and $t_\star$, the globally-averaged star formation timescale (see §\[sec:stars\]).
In the following subsections we describe how these parameters enter into our prescriptions, and choose a value of $f_{\rm gas}$ consistent with observations. In §3 we go on to demonstrate that the observed characteristics of the stellar halo (e.g. its mass, and radial profile) and the Milky Way’s satellite system (e.g. their number and distribution in structural parameters) provide strong constraints on the remaining free parameters and hence the efficiency of star formation in low-mass dark matter halos in general.
### Reionization {#sec:reion}
Any attempt to model stellar halo buildup within the context of $\Lambda$CDM must first confront the so-called “missing satellite problem” — the apparent over-prediction of low-mass halos compared to the abundance of satellite galaxies around the Milky Way and M31. For example, there are eleven known satellites of the Milky Way — nine classified as dwarf spheroidals and two as dwarf irregulars — yet numerical work predicts several hundred dark matter satellite halos in a similar mass range [@klypin99; @moore99]. It is quite likely that our inventory of stellar satellites is not complete given the luminosity and surface brightness limits of prior searches [as the recent discovery of the Ursa Major dwarf spheroidal demonstrates, see @willman05], but incompleteness is not seen as a viable solution for a problem of this scale [see @willman04 for a discussion].
The simplest solution to this problem is to postulate that only a small fraction of the satellite halos orbiting the Milky Way host an observable galaxy. In this work, we solve the missing satellite problem using the suggestion of @bullock00, which maintains that only the $\sim
10\%$ of low-mass galaxies ($ \Vmax < 30 \kms$) that had accreted a substantial fraction of their gas before the epoch of reionization host observable galaxies [see also @chiu01; @somerville02; @benson02; @kravtsov04]. The key assumption is that after the redshift of hydrogen reionization, $z_{\rm
re}$, gas accretion is suppressed in halos with $\Vmax < 50 \kms$, and completely stopped in halos with $\Vmax < 30 \kms$. These thresholds follow from the results of @thoul96 and @gnedin00 who used hydrodynamic simulations to show that gas accretion in low-mass halos is indeed suppressed in the presence of an ionizing background.
We also impose a low-mass cutoff for tracking galaxy formation in satellite halos with $\Vmax < 15 \kms$. Two processes and one practical consideration motivate us to ignore galaxy formation in these tiny halos: first, photo-evaporation acts to eliminate any gas that was accreted before reionization in halos with $\Vmax \lsim 15 \kms$ [@barkana99; @shaviv03]; second, the cooling barrier below virial temperatures of $\sim 10^{4}$K (corresponding to $\Vmax \sim 16 \kms$) prevents any gas that could remain bound to these halos from cooling and forming stars [@kepner97; @dw03]; finally, even if we were to allow star formation in these systems, their contribution to the stellar halo mass would be negligible. Once we are more confident of our inventory of the lowest luminosity and lowest surface brightness satellites of the Milky Way [@willman04], we should be able to confirm these physical arguments with observational constraints.
The epoch of reionization $z_{\rm re}$ determines the numbers of galaxies that have collapsed in each of the above $\Vmax$ limits, and hence the number of luminous satellites that will be accreted, whether they disrupt to form the stellar halo or survive to form the Galaxy’s satellite system. We discuss limits on this parameter in §3.1.1.
### Gas accretion following reionization {#sec:gas}
The virial mass of each satellite, $M_{\rm vir}^{\rm sat}$, at the time of its accretion, $a_{\rm ac}$, is set by our merger tree initial conditions (§2.1.1). We assume that each satellite halo has had a mass accumulation history set by Equation \[growth\] up to the time of its merger into the “Milky Way” host, with $a_0 = a_{\rm ac}$. After accretion, all mass accumulation onto the satellite is truncated (see §2.3.3). For massive satellites, $\Vmax > 50 \kms$, we set $a_c$ in Equation \[growth\] using the satellite’s mass-defined concentration parameter via Equation \[cofz\] [see @b01; @wechsler02]. This provides a “typical” formation history for each satellite. For low-mass satellites, we are necessarily interested in where $a_c$ falls in the distribution of halo formation epochs because this determines the fraction of mass in place at reionization. Therefore, if $\Vmax < 50 \kms$, we use the methods of @lc93 in order to derive the fraction of the satellite’s mass that was in place at the epoch of reionization, $z_{\rm re}$, and use this to set the value of $a_c$. Given $a_c$ for each satellite, we determine the instantaneous accretion rate of dark matter $h(t)$ into this system as a function of cosmic time via $$h(t) = \frac{d M_{\rm vir}^{\rm sat}}{dt}.$$
In the absence of radiative feedback effects, cooling is extremely efficient in pre-merged satellites of the size we consider [see, e.g. @mb04]. Therefore we expect the cold gas inflow rate to track the dark matter accretion rate, $h(t)$ — at least in the absence of the effects of reionization — and take it to be $C\, f_{\rm gas}\, h(t- t_{\rm in})$. The time lag within $h(t)$ accounts for the finite time it takes for gas to settle into the center of the satellite after being accreted. We assume this occurs in roughly a halo orbital time at the virial radius: $t_{\rm in} = \pi R_{\rm h}/\Vvir \simeq 6 \, {\rm Gyr} \,
\, (1+z)^{-3/2}$. We have introduced the constant $C$ in order to account for the suppression of gas accretion in low-mass halos (as alluded to in §\[sec:reion\]). Before the epoch of reionization, we set $C=1$ for all galaxies. For systems with $\Vmax > 50 \kms$, $C=1$ at all times. After reionization, $C=0$ in systems with $\Vmax < 30 \kms$, and $C$ varies linearly in $\Vmax$ between $0$ and $1$ if $\Vmax$ falls between $30$ and $50 \kms$ [see @thoul96].
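This suppression can be written as a simple factor multiplying the inflow rate; the short sketch below encodes the thresholds quoted above (the function name and the example call are ours).

```python
def suppression_factor(v_max, z, z_re=10.0):
    """Gas-accretion suppression factor C described in the text.

    C = 1 before reionization (z > z_re) and for halos with V_max >= 50 km/s;
    C = 0 after reionization for V_max <= 30 km/s; C varies linearly in V_max
    between 30 and 50 km/s.
    """
    if z > z_re or v_max >= 50.0:
        return 1.0
    if v_max <= 30.0:
        return 0.0
    return (v_max - 30.0) / 20.0

# Example: after reionization, a 40 km/s halo accretes gas at half the full rate.
print(suppression_factor(40.0, z=2.0))   # 0.5
```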
The fraction of mass in each satellite in the form of cold, accreting baryons, $f_{\rm gas}$, determines the total stellar mass plus cold gas mass associated with each dark matter halo. In what follows, we adopt $f_{\rm gas} = 0.02$, which is an upper limit on the range of cold baryonic mass fraction in observed galaxies (Bell et al. 2003).
### Star Formation {#sec:stars}
If we assume that cold gas forms stars over a timescale $t_\star$, then the evolution of stellar mass $M_\star$ and cold gas mass $M_{\rm gas}$ follows a simple set of equations: $$\begin{aligned}
\frac{dM_\star}{dt} & = & \frac{M_{\rm gas}}{t_\star}, \label{sfreqa}\\
\frac{dM_{\rm gas}}{dt} & = & -\frac{M_{\rm gas}}{t_\star} + C\, f_{\rm gas}\, h(t- t_{\rm in}). \label{sfreqb}\end{aligned}$$ For simplicity, the star formation is truncated soon after each satellite halo is accreted onto the Milky Way host. Physically, this could result from gas loss via ram-pressure stripping from the background hot gas halo [@lin_faber83; @moore94; @blitz_robishaw00; @mb04; @Mayer05]. This model is broadly consistent with observations that demonstrate that the gas fraction in satellites of the Milky Way and Andromeda is typically far less than that in field dwarfs in the Local Group, as illustrated by the separation of the open (satellites) and filled (field dwarfs) symbols in Figure \[gasfrac\_fig\] [plotting data taken from @grebel03]. Of course, this assumption is over-simplified, but it allows us to capture in general both the expectations of the hierarchical picture and the observational constraints. We note that this is likely a bad approximation for massive satellites, whose deep potential wells will tend to resist the effects of ram-pressure stripping. However, we expect that this will have little impact on our stellar halo predictions, since most of the stellar halo is formed from satellites that are accreted early and destroyed soon after. The star formation timescale $t_\star$ determines the star to cold gas fraction in each satellite upon accretion and, for a given value of $f_{\rm gas}$, the total stellar luminosity associated with each surviving satellite and the stellar halo. We discuss limits on this parameter in §3.1.2.
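A minimal sketch of how Equations \[sfreqa\] and \[sfreqb\] can be integrated is given below; the forward-Euler stepping, the toy accretion rate, and the function name are illustrative choices, not the production implementation.

```python
import numpy as np

def evolve_baryons(t_grid, hdot, t_star=15.0, f_gas=0.02, C=1.0, t_in=0.0):
    """Integrate dM_star/dt = M_gas/t_star and
    dM_gas/dt = -M_gas/t_star + C f_gas hdot(t - t_in); times in Gyr, masses in Msun."""
    m_star, m_gas = 0.0, 0.0
    history = []
    for i in range(1, len(t_grid)):
        dt = t_grid[i] - t_grid[i - 1]
        t_lag = t_grid[i - 1] - t_in
        inflow = C * f_gas * (hdot(t_lag) if t_lag > 0.0 else 0.0)
        dm_star = (m_gas / t_star) * dt
        dm_gas = (-m_gas / t_star + inflow) * dt
        m_star += dm_star
        m_gas += dm_gas
        history.append((t_grid[i], m_star, m_gas))
    return history

# Toy accretion rate: 1e9 Msun/Gyr for 8 Gyr, then truncated at accretion.
hdot = lambda t: 1.0e9 if t < 8.0 else 0.0
final_t, m_star, m_gas = evolve_baryons(np.linspace(0.0, 10.0, 2001), hdot)[-1]
print(m_star, m_gas, m_gas / m_star)   # a long t_star leaves a gas-to-star ratio above unity
```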
Embedding baryons within the dark matter satellites {#sec:king}
---------------------------------------------------
We model the evolution of a two-component population of stellar matter and dark matter in each satellite by associating stellar matter with the more tightly bound material in the halo. As discussed in §\[sec:dm\], the mass profile of the satellite is assumed to take the NFW form. Mass-to-light ratios for each particle are picked based on the particle energy in order to produce a realistic stellar profile for a dwarf galaxy.
A phenomenologically-motivated approximation for the stellar distribution in dwarf galaxies is the spherically symmetric King profile (King 1962): $$\rho_{\star}(r) = \frac{K}{x^2}\left[\frac{\cos^{-1}x}{x} - \sqrt{1-x^2}\,\right], \qquad x \equiv \left[\frac{1+(r/r_{\rm c})^2}{1+(r_{\rm t}/r_{\rm c})^2}\right]^{1/2}. \label{eq:king}$$ The core radius is $r_{\rm c}$ and $r_{\rm t}$ is the tidal radius, where $\rho_{\star}(r>r_{\rm t}) = 0$. The normalization, $K$, is set by the average density of the satellite, determined by its mass (\[sec:stars\]) and size scales (discussed below).
For each satellite, we assume a stellar mass to light ratio of $M_\star/L_{\rm V} = 2$, and use the stellar mass calculated in §\[sec:stars\] in order to assign a median King core radius $$r_{\rm c} = 160~{\rm pc} \left(\frac{L_\star}{10^{5} L_\odot}\right)^{0.19},$$ where throughout $L_{\star}$ is assumed to be the V-band stellar luminosity. We allow scatter about the relation using a uniform logarithmic deviate between $-0.3 \le \Delta \log_{10} L \le 0.3$. This slope and normalization were determined by a least-squares fit to the luminosity and core size correlation for the dwarf satellite data presented in Mateo (1998), and the scatter was determined by a “by-eye” comparison to the scatter in the data about the relation. Our adopted relation between $r_c$ and $L_\star$ is also consistent with the relevant projection of the fundamental plane for dwarf galaxies (e.g. Kormendy 1985). For all satellites we adopt $r_{\rm t}/r_{\rm c} = 10.$
Assuming isotropic orbits for the stars and that the gravitational potential is completely dominated by the dark matter, the stellar energy distribution function corresponding to the King profile, $f_\star(\epsilon)$, is determined by setting $\rho=\rho_{\star}$ and $\Psi = -\Phi_{_{\rm NFW}} + \Phi_0$ in equation (\[fofe\_eq\]). The stellar mass fraction assigned to a particle of energy $\epsilon$ is then simply $f_\star(\epsilon)/f(\epsilon)= (dM/d\epsilon)_\star/(dM/d\epsilon)$. Three examples are given in Figure \[embed\_fig\].
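The sketch below shows the embedding step in schematic form: given tabulated $f(\epsilon)$ for the dark matter and $f_\star(\epsilon)$ for the stars in the same potential, each particle is assigned a luminous fraction $f_\star(\epsilon)/f(\epsilon)$ and hence a light weight through the adopted $M_\star/L_{\rm V}=2$. The toy power-law tabulations and the function name are placeholders; the real grids come from the Eddington inversions described above.

```python
import numpy as np

def light_per_unit_mass(eps_particles, eps_grid, f_dm_grid, f_star_grid,
                        m_star_per_lv=2.0):
    """Luminous weight per unit particle mass: [f_star(eps)/f_dm(eps)] / (M_star/L_V)."""
    stellar_fraction = np.interp(eps_particles, eps_grid, f_star_grid / f_dm_grid)
    return stellar_fraction / m_star_per_lv

# Toy tabulation: stars are far more concentrated toward large (tightly bound) eps.
eps_grid = np.linspace(0.01, 0.99, 200)
f_dm = eps_grid ** 1.5
f_star = eps_grid ** 8.0
print(light_per_unit_mass(np.array([0.3, 0.7, 0.95]), eps_grid, f_dm, f_star))
```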
Limitations of our method
-------------------------
The main limitation of our method is that it only follows the smooth growth of the parent potential analytically — the satellite/satellite interactions and reaction of the parent to the satellite are not modeled self-consistently. Hence we do not anticipate following the evolution of the field or satellite particles during a major or even minor merger event with great accuracy. Given this limitation, we only simulate the accretion histories of halos generated from the Monte Carlo merger tree code that have not suffered a significant merger ($>$10 % of the parent halo mass) in the recent past ($<$7 Gyr) — 11 of the 20 accretion histories generated met this criterion. In addition, we consider results from simulations of accretion events that have occurred prior to the last significant merger to be less reliable. We label the halos used in this work 1-11. The five left-hand columns of Table \[halo\_tab\] summarize the properties of the simulations run for each halo.
Even with these restrictions, we consider our study to be a useful approach for exploring substructure in galaxy halos because: (i) the highest surface brightness features in halos are likely to have come from recent events, whose debris has had a shorter time to phase-mix and/or be dispersed by oscillations in the parent galaxy potential; and (ii) substructure should be more readily detectable around spiral (rather than elliptical) galaxies because their stellar distributions are less extended — the existence of disks in spirals suggests that these are the ones with the more quiescent accretion histories.
Results I: Tests of the Model
=============================
As outlined in §2.5, our method most accurately follows the phase-space evolution of debris from accretion events that occur during relatively quiescent times in a galaxy’s history (which we define as being after the last $>$10% merger event). In future work we will concentrate on those events. In this paper, we analyze the results from simulations of the full accretion histories of our halos. Although we do not accurately follow the phase-space properties of debris material from events occurring before the epoch of major merging, the fact that these systems [*are*]{} disrupted is predicted robustly, and we are able to record the time of disruption and the cumulative mass in those disrupted events as well.
In what follows, we first constrain the remaining free parameters $z_{\rm re}$ (§3.1.1) and $t_\star$ (§3.1.2) (with $f_{\rm gas}=0.02$), by requiring that the general properties of our surviving satellite populations are consistent with those of the Milky Way’s own satellites. We then go on to demonstrate that these parameter choices naturally produce the observed distributions in and correlations of the structural parameters of surviving satellites (see §3.2.1 and 3.2.2), as well as stellar halos with total luminosity and radial profiles consistent with the Milky Way (§3.2.3).
Primary constraints on parameters
---------------------------------
### Satellite number counts
As described in §2.3.1, we have chosen to solve the missing satellite problem by suppressing gas accretion in small halos after the epoch of reionization, $z_{\rm re}$, and suppressing gas accumulation altogether in satellites smaller than $15 \kms$. The number of satellites that host stars is then set by choosing $z_{\rm re}$. In the work presented in this paper, we assume that reionization occurred at a redshift $z_{\rm re} = 10$ or at a lookback time of $13$Gyr. The fifth column of Table \[halo\_tab\] gives the number of luminous satellites accreted over the lifetime of each halo and the sixth column gives the number of luminous satellites that survive disruption in each. (The numbers in brackets are for those events since the last $>$10% merger.) We see that our reionization prescription leads to agreement within a factor of $\sim 2$ with the number of satellites observed orbiting the Milky Way. Our results are roughly insensitive to this choice as long as $8 \lsim z_{\rm re}
\lsim 15$.
### Infalling satellite gas content
When reviewing the properties of Local Group dwarf galaxies it is striking that — with the notable exceptions of the Large and Small Magellanic Clouds (hereafter LMC and SMC) — satellites of the Milky Way and Andromeda galaxies are exceedingly gas-poor compared to their field counterparts [@mateo98; @grebel03]. Figure \[gasfrac\_fig\] emphasizes this point by plotting the V-band luminosity [*vs*]{} gas fraction from the compilation by @grebel03 for satellites (open squares) and field dwarfs (filled squares). We see that field dwarfs tend to have $M_{HI}/L_V \simeq 0.3 - 3$, whereas satellite dwarfs have gas fractions $\sim 0.001 - 0.1$.
While our star formation model assumes that most of the gas in accreted dwarfs is lost shortly after a dwarf becomes a satellite galaxy, consistency with the field dwarf population requires that the most recent events in our simulations have gas-to-star ratios of order unity immediately prior to their accretion. This requirement forces us to choose a long star formation timescale, $t_\star=15$ Gyr, comparable to the Hubble time. Figure \[gas\*\_fig\] shows the ratio $M_{\rm gas}/L_{\rm V}$ for each satellite at the time it was accreted for our four example halos. The clear trend with accretion time follows because early accreted systems have not had time to turn their gas into stars. Solid points indicate satellites that survive until the present day. We see that the most recently accreted systems ($t_{\rm accr} \sim 1-2$ Gyr, those that should correspond most closely with true “field” dwarfs today) have $M_{\rm gas}/L_\star \sim 1 - 2$, which is in reasonable agreement with the gas content of field dwarfs. The points along the lower edge of the trend have lower gas fractions at a fixed accretion time because they stopped accreting gas at reionization (see §2.3.1).
Our choice of $t_\star = 15$Gyr is much longer than is typical for semi-analytic prescriptions of galaxy formation set within the CDM context [e.g., @somerville_primack99], but those prescriptions usually focus on much larger galaxies than the dwarfs considered here, in which star formation is likely to have proceeded more efficiently. Observations suggest that the dwarf spheroidal satellites of the Milky Way have rather bursty, sporadic star formation histories, with recent star formation in some cases [@grebel00; @smecker99; @gallart99]. This effectively demands that the star formation timescales be long in these systems: our model can be viewed as smoothing over these histories with an average low level of star formation.
Note that we do not explicitly include supernova feedback in our star formation histories, but it is implicitly included by requiring a very low level of efficiency in our model (i.e. a large value of $t_\star$). In two companion papers we do include the effects of feedback (accounting for both gas gained due to mass loss from stars during normal stellar evolutionary phases and gas lost via winds driven by supernovae) in order to accurately model chemical enrichment in our accreted satellites [@robertson05; @font05]. With feedback included, a choice of $t_\star = 6.75$Gyr provides nearly identical distributions of gas and stellar mass in satellite galaxies as does our non-feedback choice of $t_\star = 15$Gyr.
Verification of Model’s Validity
--------------------------------
### Distributions in satellite structural parameters
Figure \[sat\_fig\] shows histograms of the fractional number of satellites as a function of central surface brightness $\mu_0$, total luminosity $L_\star$ and central line-of-sight velocity dispersion $\sigma_\star$ for the Milky Way dwarf spheroidal satellites (solid lines). The dashed lines represent our simulated distribution of surviving satellite properties, derived by combining the structural properties of the 156 surviving satellites from all eleven halos. The histograms are visually similar. (Note that the LMC and SMC are not included in the observational data set since they are rotationally supported and our models are restricted to hot systems. They would be equivalent to the most luminous, highest velocity dispersion systems in our model data set that appear to be missing from the Milky Way distribution.)
To quantify the level of similarity of the simulated and observed data sets we use the 3-dimensional KS-statistic [@fasano87] $$Z_{n, 3\rm D}=d_{\rm max}\sqrt{n},$$ where $n$ is the number in the sample tested against our model parent distribution of all 156 surviving satellites. In this method $d_{\rm max}$ is defined as the maximum difference between the observed and predicted normalized integral distributions, accumulated within the eight volumes of the three-dimensional space defined for each data point $(X_i,Y_i,Z_i)=(\mu_{0,i},L_{\star,i},\sigma_{\star,i})$ by $$(x<X_i,y<Y_i,z<Z_i), ..., (x>X_i,y>Y_i,z>Z_i).$$ @fasano87 present assessments of the significance level of values obtained for $Z_{n, 3\rm D}$ as a function of $n$ and of the degree of correlation of the data. Since we already have eleven similarly-sized samples drawn from the same parent distribution, we instead quantify the significance level of $Z_{n, 3\rm D}$ found for the Milky Way satellites by comparing it to the distribution of $Z_{n, 3\rm D}$ for our simulated samples. Figure \[kstest\_fig\] shows a histogram of the results for our simulated halos, with the dotted line indicating where the Milky Way satellite distribution falls. According to this test only one of the eleven simulated populations is more similar to the simulated parent population than the observed satellites. (Note that $\sim$80% of our simulated samples have $Z_{n, 3\rm D}<1.2$. This significance level is similar to those derived by @fasano87 for 3-dimensional samples with $n=10$ and a moderate degree of correlation in the distribution — see their Figure 7 — as might be anticipated given the expected relation between $\sigma_0$ and $L_{\rm tot}$, see §3.2.2.)
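A sketch of this statistic as described above is given below; the octant counting follows @fasano87, the random arrays stand in for the real satellite properties, and the function names are ours. The calibration of the significance level against the eleven simulated samples is not reproduced here.

```python
import numpy as np

def d_max_3d(sample, parent):
    """Maximum difference between the cumulative fractions of `sample` and `parent`
    over the eight octants defined around each point of the tested sample."""
    d_max = 0.0
    for x0, y0, z0 in sample:
        for cx in (np.less, np.greater):
            for cy in (np.less, np.greater):
                for cz in (np.less, np.greater):
                    f_s = np.mean(cx(sample[:, 0], x0) & cy(sample[:, 1], y0)
                                  & cz(sample[:, 2], z0))
                    f_p = np.mean(cx(parent[:, 0], x0) & cy(parent[:, 1], y0)
                                  & cz(parent[:, 2], z0))
                    d_max = max(d_max, abs(f_s - f_p))
    return d_max

def z_n_3d(sample, parent):
    """Z_{n,3D} = d_max * sqrt(n), with n the size of the tested sample."""
    return d_max_3d(sample, parent) * np.sqrt(len(sample))

rng = np.random.default_rng(2)
parent = rng.normal(size=(156, 3))   # stand-in for (mu_0, L_star, sigma_star) of the model satellites
sample = rng.normal(size=(9, 3))     # stand-in for the observed dwarf spheroidals
print(z_n_3d(sample, parent))
```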
### Correlations in satellite structural parameters
Figure \[dw\_fig\] shows the relationship between the central ($< r_c$), 1-D light-weighted velocity dispersion and satellite stellar mass, $M_\star$, for model galaxies and observed galaxies in the Local Group. Crosses show surviving model satellites for all halos and open circles show the relationship for the same set of satellites [*before they were accreted into the host dark matter halo*]{}. Solid triangles show the relationship for Local Group satellites as compiled by @dw03. The two nearly identical solid lines show the best-fit regressions for the initial and final model populations. The dashed line shows the best-fit line for the data. Our model galaxies reproduce a trend quite similar to that seen in the data. The relative agreement is significant for two reasons. First, the stellar velocity dispersion of our initialized satellites is set by the underlying potential well of their dark matter halos convolved with their associated King profile parameters. While in §\[sec:king\] we set King profile parameters using a phenomenological relation based on the stellar luminosity ($L_\star \Rightarrow r_c$), there was no guarantee that the dark matter potential associated with a given luminosity would provide a consistent stellar velocity dispersion ($r_c + \rho_{_{DM}} \Rightarrow \sigma_\star$). In this sense, the general agreement between model satellites and the data is a success of our star formation prescription, which varies based on the mass accretion histories of halos of a given size (and therefore density structure).
A second interesting feature shown in Figure \[dw\_fig\] is that [*final*]{} surviving satellites obey the same relation as the initial satellites. Most of these systems have experienced significant [*dark matter*]{} mass loss, but since the star particles are more tightly bound, their velocity dispersion does not significantly evolve. This point is emphasized in Figure \[vdisp\_fig\], where we plot the central ($< r_c$), 1-D velocity dispersion for the [*dark matter*]{} in halos, again as a function of the satellite galaxy’s stellar mass, $M_\star$. As in Figure \[dw\_fig\], open circles show the relationship for the final, surviving satellites, and crosses show the relationship for those same satellites before they were accreted. Unlike in the case of light-weighted dispersions, the dark matter dispersion velocities in the surviving systems are systematically lower than in the initial halos owing to the loss of the most energetic particles. They also exhibit a broader scatter at fixed stellar mass, reflecting variations in their mass-loss histories. Comparing again to Figure \[dw\_fig\] we see that most of the particles associated with light in these systems remain bound to the satellites and their velocity dispersions do not evolve significantly. This result may have important implications for interpreting the nature of the dark matter halos of dwarf galaxies in the Local Group and for understanding the regularity in observed dwarf properties irrespective of their environments. In future work we will return to a more detailed structural and evolutionary analysis of the light matter and the dark matter halos in which the stars are embedded.
The results presented in this and the previous sub-section clearly indicate that our star formation scenario, coupled with setting the King parameters of our infalling dwarfs to match Local Group observations, leads to surviving satellite populations consistent both in number and in structural properties with the Milky Way’s.
### The stellar halo’s mass and density profile
Estimates for the size, shape and extent of the Milky Way’s stellar halo come either from star count surveys [@morrison00; @chiba00; @yanny00; @siegel02] or studies where distances could be estimated using RR Lyraes [@wetterer96; @ivezic00]. These studies agree on a total luminosity of order $L_{\rm V} \sim 10^9 L_\odot$ (or mass $\sim 2\times 10^9 \Msun$), which is in good agreement with the unbound stellar luminosity for all eleven of our model stellar halos, listed in Column 6 of Table 1 (numbers in brackets again refer to stars from accretion events since the last $>$10% merger). The match between predicted and observed total halo mass is non-trivial and depends sensitively on the mass accretion history of the dark matter halo along with the value of the star-formation timescale, $t_\star$. Specifically, we show in §4.1.1 that the majority of dwarf galaxies that make up the stellar halo were accreted early, more than $\sim 8$Gyr ago. The total stellar halo mass ($\sim 10^{9} \Msun$) is relatively small compared to the total cold baryonic mass in accreted satellites ($\sim 10^{10} \Msun$), because the star formation timescale is long compared to the age of the Universe at typical accretion times, and the stellar mass fractions are correspondingly low (see Figure \[gas\*\_fig\]). Had we chosen a star formation timescale short compared to the typical accretion time of a destroyed system (e.g. $\sim 5$Gyr), the resulting halo of stripped stellar material would have been much more massive than that observed for the Milky Way. This is in agreement with the results of @brook04b, who found that a strong feedback model (effectively slowing the star formation rate in dwarf galaxies) in their smoothed particle hydrodynamical simulations of galaxy formation was necessary in order to build relatively small halo components in their models.
The observational studies find density profiles falling more steeply than the dark matter halo (a power law index in the range -2.5 to -3.5, compared to $\sim -2$ for the dark matter at relevant scales). Some of the variance between results from different groups can be attributed to substructure in the halo since these studies have commonly been limited in sky-coverage with surveys covering significant portions of the sky only now becoming feasible. Figure \[halo\_fig\] plots the density profiles generated (arbitrarily normalized) from our four representative stellar halo models (light solid curves), which transition between slopes of -2 within $\sim 10$kpc to -4 around 50kpc and fall off even more steeply beyond this. To illustrate the general agreement with observations, the dotted line is a power law with exponent of -3. Note that there is some variation in the total luminosity (about a factor of 2) and slopes of our model halos, as might be expected given their different accretion histories. There is also a clear roll-over below the power law in the outer parts of the stellar halo, sometimes at radii as small as 30kpc.
For contrast with the light, the density profiles of the dark matter in our models are plotted as bold lines in Figure \[halo\_fig\] (also with arbitrary normalization). The dark matter profiles are all close to an NFW profile with $m_{\rm halo}=1.4\times 10^{12} M_\odot$ and $r_{\rm
halo}=10$kpc. Within $\sim 30$kpc of the Galactic center it appears that our stellar halos roughly track the dark matter, but beyond this they tend to fall more steeply. The difference in profile shapes — and the steep roll-over in the light matter at moderate to large radii — is a natural consequence of embedding the light matter deep within the dark matter satellites: The satellites’ orbits can decay significantly before any of the more tightly bound material is lost. Hence we anticipate that more/less extended stellar satellites would result in a more/less extended stellar halo. Studies of the distant Milky Way halo are still sufficiently limited that it is not possible to say whether the location of the roll-over in our model stellar halos is in agreement with observations and this could be an interesting test of our models in the near future [see, e.g. @ivezic03].
Results II: Model Predictions
=============================
We have now fixed our free parameters to be $z_{\rm re}=10$, $t_\star=15$Gyr and $f_{\rm gas}$=0.02. By limiting our description of the evolution of the baryons associated with each dark matter satellite to depend on only these parameters, we find we have little freedom in how we choose them. For example, if we were to choose a shorter star formation timescale $t_\star$, we would over-produce the mass of the stellar halo, form dwarf galaxies that were over-luminous at fixed velocity dispersions, and form dwarfs with low gas fractions compared to isolated dwarfs observed in the Local Group. The first two problems could be adjusted by adapting $f_{\rm gas}$, but the last problem is independent of this.
Despite its simplicity, our model reproduces observations of the Milky Way in some detail. In particular, we recover the full distribution of satellites in structural properties. This suggests both that we have assigned the right fraction of dark matter halos to be luminous [*and*]{} that our luminous satellites are sitting inside dark matter halos of the right mass.
We can now go on with some confidence to discuss the implications of our model for the mass accretion history of the halo and satellite systems (§4.1) and the level of substructure in the stellar halo (§4.2).
Building up the stellar halo and satellite systems
--------------------------------------------------
### Accretion times and mass contributions of infalling satellites
The stellar halo in our model is formed from stars originally born in accreted satellites. Once accreted, satellites lose mass with time until the satellite is destroyed. Once a particle becomes unbound from a satellite, we associate its [*stellar*]{} mass with the stellar halo. Figure \[cumfrac\_fig\] shows the cumulative luminosity fraction of the stellar halo (solid lines) coming from accreted satellites as a function of the accretion time of the satellite for halos 1,2, 6, and 9. Clearly most of the mass in the stellar halos originates in satellites that were accreted more than $8$ Gyr ago. The dotted lines show the contribution to the stellar halo from satellite halos more massive than $\Mvir > 2 \times 10^{10}
\Msun$ at the time of their accretion. While only $10-20$ of the $\sim 150$ accreted satellites meet this mass requirement, we see that $\sim 75-90\%$ of the mass associated with each stellar halo originated within massive satellites of this type.
Compare these to the dashed lines, which show the cumulative number fraction of surviving satellite galaxies as a function of the time they were accreted for the entire population (long-dashed lines) and restricted to satellite halos that were more massive than $\Mvir \gsim
5 \times 10^{9} \Msun$ at the time of their accretion (short dashed lines). We see that surviving satellites are accreted much later ($\sim 3-5$ Gyr lookback) than their destroyed counterparts and that the most massive satellites that survive tend to be accreted even later because the destructive effects of dynamical friction are more important for massive satellites.
### Spatial growth
Studies of dark matter halos in N-body simulations show that they are built from the inside out [e.g. @helmi03a]. The top panel of Figure \[rsats\_fig\] confirms that this idea holds for our model [*stellar*]{} halos: it shows the average over all our halos of the fraction of material in each spherical shell from all accretion events (solid line) and from those that have occurred since the last major ($>10$%) merger (dotted line — the time when this occurred is given in column 4 of Table \[halo\_tab\]). Although the recent events represent only a fraction of the total halo luminosity ($\sim$5-50%, see Table \[halo\_tab\]), they become the dominant contributor at radii of 30-60kpc and beyond. There is some suggestion of this being the case for the Milky Way’s halo globular cluster population, which can fairly clearly be separated into an ‘old’, inner population (which exhibits some rotation, is slightly flattened and has a metallicity gradient) and a ‘young’ outer one [which is more extended and has a higher velocity dispersion — see @zinn93].
One implication of the inside-out growth of stellar halos, combined with the late accretion time of surviving satellites, is that the two should follow different radial distributions. The dashed lines in the bottom panel of Figure \[rsats\_fig\] show the number fraction of all surviving satellites in our models as a function of radius — the distribution is much flatter than the one shown for the halo in the upper panel. In fact, all satellites of our own Milky Way (except Sgr) lie at or beyond 50kpc from its center, with most 50-150kpc away, as shown by the solid line in the lower panel. Hence, the radial distribution of the observed satellites is consistent with our models and suggests that they do indeed represent recent accretion events.
### Implications for the abundance distributions of the stellar halo and satellites
Studies which contrast abundance patterns and stellar populations in the stellar halo with those in dwarf galaxies seem to be at odds with models (such as ours) that build the stellar halo from satellite accretion [@unavane96]. For example, both field and satellite populations have similar metallicity ranges, but the former typically have higher alpha-element abundances than the latter [@tolstoy03; @shetrone03; @venn04]. Clearly, it is not possible to build the halo from present-day satellites.
We have already shown (in Figure \[cumfrac\_fig\]) that we would expect a random sample of halo stars to have been accreted 8-10 Gyr ago from satellites with masses $\Mvir \gsim 10^{10}\Msun$, while surviving satellites are accreted much more recently. (Note that Figure \[cumfrac\_fig\] deliberately compares the cumulative [*luminosity*]{} fraction of the stellar halo to the [*number*]{} fraction of satellites. This is the most relevant comparison to make when interpreting observations because any sample of halo stars will be weighted by the luminosity of the contributing satellites, while samples of satellite stars are often composed of a few stars from each satellite.) Figure \[lsats\_fig\] explores the number and luminosity contribution of different luminosity satellites to each population in more detail. It shows the number fraction of satellites in different luminosity ranges contributing to the stellar halo (dotted lines) and satellite system (dashed lines): the peak of the dotted/dashed lines at lower/higher luminosities is a reflection of the much later accretion time — and hence longer time available for growth of the individual contributors — of the satellite system relative to the stellar halo. However, as discussed above, it is more meaningful to compare the number fraction of surviving satellites to the luminosity fraction of the halo (solid lines) contributed by satellites of a given luminosity range. The solid line emphasizes (as noted above) that most of the stellar halo comes from the few most massive (and hence most luminous) satellites, with luminosities in the range $10^7-10^9 L_\odot$. In contrast, Galactic satellite stellar samples would likely be dominated by stars born in $10^5-10^7L_\odot$ systems.
Overall our results provide a simple explanation of the difference between halo and satellite stellar populations and abundance patterns. The bulk of the stellar halo comes from massive satellites that were accreted early, and hence had star formation histories that must be short (because of their early disruption) and intense (in order to build a significant luminosity in the time before disruption). In contrast, surviving satellites are lower mass and accreted much later, and hence have more extended, lower level star formation histories. Stars formed in these latter environments represent a negligible fraction of the stellar halo in all our models. This is confirmed by the last column of Table \[halo\_tab\], which lists the percentage contribution of surviving satellites to the total halo (less than 10% in every case). Note that the contributions of surviving satellites to the [*local*]{} halo (i.e. within 10-20kpc of the Sun), which is the only region of the halo where detailed abundance studies have been performed, are even lower (less than 1% in every case).
A more quantitative investigation of the consequences of the difference between the “accretion age” of stars and satellites in the halo is underway [@robertson05; @font05].
Substructure
------------
Abundant substructure is one of the most basic expectations for a hierarchically-formed stellar halo. Here we give a short description of the substructure we see in our simulations, and reserve more detailed and quantitative explorations for future work. Recall that our study (by design) follows the more recent accretion events in our halos more accurately than the earlier ones — we showed in §4.1.2 that these are the dominant contributors to the halo at radii of 30-60kpc and beyond. Hence we can expect our study to make fairly accurate predictions of the level of substructure in the outer parts of galactic halos — precisely the region where substructure should be more dominant and easier to detect.
Figures \[haloviz1\_fig\] and \[haloviz2\_fig\] show external galactic views for halo realizations 1, 2, 6, and 9. The color code reflects surface brightness per pixel: white, 24 magnitudes per square arcsecond, to light blue at 30 magnitudes per square arcsecond, to black, which is (fainter than) 38 magnitudes per square arcsecond. The darkest blue features are of course too faint to be seen (except by star counts). We have simply set the scale in order to reveal all the spatial features that are there in principle. Note that our test particles (§\[sec:test\]) were used in making these images.
In addition to spatial structure, we also expect significant structure in phase space. A two-dimensional slice of the full six dimensional phase space is illustrated in Figure \[phase\_diagram1\_fig\], where we plot radial velocity $V_r$ versus radius $r$ for all of halo 1 (left) and halo 9 (right). Each point represents 1000 solar luminosities. [^1] The color code reflects the time the particle became unbound from its original satellite: dark blue for particles that became unbound more than 12 Gyr ago and white for particles that either remain bound or became unbound less than 1.5 Gyr ago. The radial gradient in color reflects the “inside out” formation of the stellar halo discussed in previous sections.
Note that significant coherent structure is visible in Figure \[phase\_diagram1\_fig\] even without any spatial slicing of the halos. Except for particles belonging to bound satellites (white streaks), the structure strongly resembles a nested series of orbit diagrams. This is not surprising since the halo was formed by particles brought in on satellite orbits. A direct test of this prediction should be possible with SDSS data and other similar surveys. Indeed, if the phase space structure of stellar halo stars does reveal this kind of orbit-type structure it will be a direct indication that the stellar halo formed hierarchically. Figure \[phase\_diagram2\_fig\] shows the same diagram for halo 1, now subdivided into two distinct quarters of the sky.
Summary and Conclusions
=======================
We have presented a cosmologically self-consistent model for the formation of the stellar halo in Milky-Way type galaxies. Our approach is hybrid. We use a semi-analytic formalism to calculate a statistical ensemble of accretion histories for Milky Way size halos and to model star formation in each accreted system. We use a self-consistent N-body approach to follow the dynamical evolution of the accreted satellite galaxies. A crucial ingredient in our model is the explicit distinction between the evolution of light and dark matter in accreted galaxies. Stellar material is much more tightly bound than the majority of the dark matter in accreted halos and this plays an important role in the final density distribution of stripped stellar material as well as the evolution in the observable quantities of satellite galaxies.
A primary goal of this first paper on stellar halo formation was to normalize our model to, and demonstrate consistency with, the gross properties of the Milky Way stellar halo and its satellite galaxy population. We constrained our two main star formation parameters, the redshift of reionization $z_{\rm re}$ and the star formation timescale of cold gas $t_*$, using the observed number counts of Milky Way satellites and the gas mass fractions of isolated dwarf galaxies. With these parameters fixed, the model reproduces many of the observed structural properties of the (surviving) Milky Way satellites: the luminosity function, the luminosity-velocity dispersion relation, and the surface brightness distribution. The satellite galaxies that are accreted and destroyed in the model produce stellar halos with total luminosities in line with estimates for the stellar halo of the Milky Way ($\sim 10^9 L_{\odot}$).
The success of our model lends support to the hierarchical stellar halo formation scenario, in which the stellar halos of large galaxies form mainly via the accretion and subsequent disruption of smaller galaxies. More specifically, it allows us to make more confident predictions about the precise nature of stellar halos and their associated satellite systems in Milky Way-type galaxies. These include:
- The density profile of the stellar halo should follow a running power law, with the radial logarithmic slope steepening from $\sim -2$ within $\sim 20$ kpc to $\sim -4$ beyond 50 kpc (a sketch of how this slope can be estimated from simulation particles follows this list). The distribution is expected to be much more centrally concentrated than that of the dark matter, because the stars that build the stellar halo were much more tightly bound to their host systems than the dark material responsible for building up the dark matter halo.
- Stellar halos (like dark matter halos) are expected to form from the inside out, with the majority of the mass deposited by the $\sim 15$ most massive accretion events, typically dwarf-irregular-sized halos with masses $\sim 10^{10} \Msun$ and luminosities of order $10^7-10^9 L_\odot$.
- Destroyed satellites that contribute mass to the stellar halo tend to have been accreted earlier than satellites that survive as present-day dwarf satellites ($\sim 9$ Gyr ago compared to $\sim 5$ Gyr ago).
- Substructure, visible both spatially and in phase-space diagrams, should be abundant in the outer parts of galaxies. Quantitative counts of this substructure, both in our Galaxy and in external systems, should provide important constraints on the late-time accretion histories of galaxies and a test of hierarchical structure formation.
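As referenced in the first item above, the running slope of the stellar halo density profile can be measured directly from luminosity-weighted particles. The routine below is a minimal sketch with hypothetical input arrays and an arbitrary choice of radial binning; it simply differences the logarithmic luminosity density between logarithmically spaced shells.

```python
import numpy as np

def running_density_slope(r_kpc, lum_lsun, nbins=20, rmin=5.0, rmax=200.0):
    """Estimate the local logarithmic slope d ln(rho_L) / d ln(r) of the
    luminosity density profile from particle radii and luminosity weights."""
    edges = np.logspace(np.log10(rmin), np.log10(rmax), nbins + 1)
    lum, _ = np.histogram(r_kpc, bins=edges, weights=lum_lsun)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)  # kpc^3
    rho = lum / shell_vol                    # Lsun / kpc^3
    r_mid = np.sqrt(edges[1:] * edges[:-1])  # geometric bin centers
    good = rho > 0
    slope = np.gradient(np.log(rho[good]), np.log(r_mid[good]))
    return r_mid[good], slope  # expect ~ -2 near 20 kpc, ~ -4 beyond ~50 kpc
```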
Together, the second and third points imply that most of the stars in the inner halo are associated with massive satellites that were accreted $\gsim 9$ Gyr ago. Surviving dwarf satellites, on the other hand, tend to be lower in mass and are associated with later accretion events. This suggests that classic “stellar halo” stars should be chemically quite distinct from stars in surviving dwarf satellites. We explore this point further in two companion papers (Robertson et al. 2005; Font et al. 2005).
KVJ’s contribution was supported through NASA grant NAG5-9064 and NSF CAREER award AST-0133617.
[99]{}
Abadi, M. G., Navarro, J. F., Steinmetz, M., & Eke, V. R. 2003, , 597, 21
Barkana, R., & Loeb, A. 1999, , 523, 54
Benson, A. J., Lacey, C. G., Baugh, C. M., Cole, S., & Frenk, C. S. 2002, , 333, 156
Benson, A. J. 2005, , 358, 551
Binney, J. & Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton University Press)
Blitz, L., & Robishaw, T. 2000, , 541, 675
Blumenthal, G. R., Faber, S. M., Primack, J. R., & Rees, M. J. 1984, , 311, 517
Bryan, G. L., & Norman, M. L. 1998, , 495, 80
Bullock, J.S., Kravtsov, A.K., Weinberg, D.H., 2000, ApJ, 539, 517
Bullock, J. S., Kolatt, T. S., Sigad, Y., Somerville, R. S., Kravtsov, A. V., Klypin, A. A., Primack, J. R., & Dekel, A. 2001, , 321, 559
Bullock, J. S., Kravtsov, A. V., & Weinberg, D. H. 2001, , 548, 33
Bullock, J.S. & Johnston, K. V. 2005, in prep
Brook, C. B., Kawata, D., Gibson, B. K., & Flynn, C. 2003, , 585, L125
Brook, C. B., Kawata, D., Gibson, B. K., & Flynn, C. 2004, , 349, 52
Brook, C. B., Kawata, D., Gibson, B. K., & Freeman, K. C. 2004, , 612, 894
Brook, C. B., Gibson, B. K., Martel, H., & Kawata, D. 2005a, ArXiv Astrophysics e-prints, arXiv:astro-ph/0503273
Brook, C. B., Martel, H., Gibson, B. K., & Kawata, D. 2005b, ArXiv Astrophysics e-prints, arXiv:astro-ph/0503323
Chiba, M. & Beers, T. C. 2000, , 119, 2843
Chiu, W. A., Gnedin, N. Y., & Ostriker, J. P. 2001, , 563, 21
Dekel, A., & Woo, J. 2003, , 344, 1131
D’Onghia, E., & Burkert, A. 2004, , 612, L13
Crane, J. D., Majewski, S. R., Rocha-Pinto, H. J., Frinchaboy, P. M., Skrutskie, M. F., & Law, D. R. 2003, , 594, L119
Eggen, O. J., Lynden-Bell, D., & Sandage, A. R. 1962, , 136, 748
Fasano, G. & Franceschini, A. 1987, , 225, 155
Ferguson, A. M. N., Irwin, M. J., Ibata, R. A., Lewis, G. F., & Tanvir, N. R. 2002, , 124, 1452
Eisenstein, D. J., et al. 2005, ArXiv Astrophysics e-prints, astro-ph/0501171
Font, Bullock, Johnston & Robertson 2005, in prep
Forbes, D. A., Beasley, M. A., Bekki, K., Brodie, J. P., & Strader, J. 2003, Science, 301, 1217
Frinchaboy, P. M., Majewski, S. R., Crane, J. D., Reid, I. N., Rocha-Pinto, H. J., Phelps, R. L., Patterson, R. J., & Muñoz, R. R. 2004, , 602, L21
Gallart, C., Freedman, W. L., Aparicio, A., Bertelli, G., & Chiosi, C. 1999, , 118, 2245
Ghigna, S., Moore, B., Governato, F., Lake, G., Quinn, T., & Stadel, J. 1998, , 300, 146
Gilmore, G., Wyse, R. F. G., & Norris, J. E. 2002, , 574, L39
Gnedin, N. Y. 2000, , 542, 535
Grebel, E. K. 2000, Bulletin of the American Astronomical Society, 32, 698
Grebel, E. K., Gallagher, J. S., & Harbeck, D. 2003, , 125, 1926
Harris, J., & Zaritsky, D. 2004, , 127, 1531
Hashimoto, Y., Funato, Y., & Makino, J. 2003, , 582, 196
Hayashi, E., Navarro, J. F., Taylor, J. E., Stadel, J., & Quinn, T. 2003, , 584, 541
Helmi, A., Navarro, J. F., Nordstrom, B., Holmberg, J., Abadi, M.G. & Steinmetz, M. 2005, astro-ph/0505401
Helmi, A., Navarro, J. F., Meza, A., Steinmetz, M., & Eke, V. R. 2003, , 592, L25
Helmi, A., White, S. D. M., & Springel, V. 2003, , 339, 834
Helmi, A. & White, S. D. M. 2001, , 323, 529
Helmi, A., White, S. D. M., de Zeeuw, P. T., & Zhao, H. 1999, , 402, 53
Helmi, A. & White, S. D. M. 1999, , 307, 495
Hernquist, L. & Ostriker, J. P. 1992, , 386, 375
Ibata, R. A., Chapman, S., Ferguson, A. M. N., Irwin, M. J., Lewis, G. F., & McConnachie, A. W. 2004, astro-ph/0401092
Ibata, R. A., Irwin, M. J., Lewis, G. F., Ferguson, A. M. N., & Tanvir, N. 2003, , 340, L21
Ibata, R., Irwin, M., Lewis, G., Ferguson, A. M. N., & Tanvir, N. 2001, , 412, 49
Ibata, R., Lewis, G. F., Irwin, M., Totten, E., & Quinn, T. 2001, , 551, 294
Ibata, R. A., Gilmore, G. & Irwin, M. J. 1995, , 277, 781
Ibata, R. A., Gilmore, G., & Irwin, M. J. 1994, , 370, 194
Ivezić, Ž. et al. 2000, , 120, 963
Ivezić, Ž. et al. 2003, astro-ph/0309074
Johnston, K. V., Law, D. R. & Majewski, S. R. 2005, , in press
Johnston, K. V., Spergel, D. N., & Haydn, C. 2002, , 570, 656
Johnston, K. V., Sackett, P. D., & Bullock, J. S. 2001, , 557, 137
Johnston, K. V., Majewski, S. R., Siegel, M. H., Reid, I. N., & Kunkel, W. E. 1999, , 118, 1719
Johnston, K. V., Sigurdsson, S., & Hernquist, L. 1999, , 302, 771
Johnston, K. V., Zhao, H., Spergel, D. N., & Hernquist, L. 1999, , 512, L109
Johnston, K. V. 1998, , 495, 297
Johnston, K. V., Hernquist, L., & Bolte, M. 1996, , 465, 278
Johnston, K. V., Spergel, D. N., & Hernquist, L. 1995, , 451, 598
Kaplinghat, M. 2005, in preparation.
Kazantzidis, S., Magorrian, J., & Moore, B. 2004, , 601, 37
Kazantzidis, S., Mayer, L., Mastropietro, C., Diemand, J., Stadel, J., & Moore, B. 2004, , 608, 663
Kepner, J. V., Babul, A., & Spergel, D. N. 1997, , 487, 61
Klypin, A. A., Kravtsov, A. V., Valenzuela, O. & Prada, F. 1999, , 522, 82
Kravtsov, A. V., Gnedin, O. Y., & Klypin, A. A. 2004, , 609, 482
Lacey, C., & Cole, S. 1993, , 262, 627
Law, D. R. , Johnston, K. V. & Majewski, S. R. 2005, in preparation
Lin, D. N. C., & Faber, S. M. 1983, , 266, L21
Majewski, S. R., Skrutskie, M.F., Weinberg, M.D. & Ostheimer, J.C. 2003a, in preparation. (“Paper I”)
Majewski, S. R., Ostheimer, J. C., Kunkel, W. E., & Patterson, R. J. 2000, , 120, 2550
Majewski, S. R., Siegel, M. H., Kunkel, W. E., Reid, I. N., Johnston, K. V., Thompson, I. B., Landolt, A. U., & Palma, C. 1999, , 118, 1709
Majewski, S. R., Munn, J. A., & Hawley, S. L. 1996, , 459, L73
Maller, A. H., McIntosh, D. H., Katz, N., & Weinberg, M. D. 2005, , 619, 147
Maller, A. H., & Bullock, J. S. 2004, , 355, 694
Martin, N. F., Ibata, R. A., Bellazzini, M., Irwin, M. J., Lewis, G. F., & Dehnen, W. 2004, , 348, 12
Mateo, M. L. 1998, , 36, 435
Mayer, L., Mastropietro, C., Wadsley, J., Stadel, J., & Moore, B. 2005, ArXiv Astrophysics e-prints, astro-ph/0504277
McConnachie, A. W., Irwin, M. J., Ibata, R. A., Ferguson, A. M. N., Lewis, G. F., & Tanvir, N. 2003, , 343, 1335
McWilliam, A. & Rich, R. M. 1994, , 91, 749
Miyamoto, M. & Nagai, R. 1975, , 27, 533
Moore, B., & Davis, M. 1994, , 270, 209
Moore, B., Ghigna, S., Governato, F., Lake, G., Quinn, T., Stadel, J. & Tozzi, P. 1999, , 524, L19
Morrison, H. L., Mateo, M., Olszewski, E. W., Harding, P., Dohm-Palmer, R. C., Freeman, K. C., Norris, J. E., & Morita, M. 2000, , 119, 2254
Murali, C. & Dubinski, J. 1999, , 118, 911
Navarro, J. F., Frenk, C. S. & White, S. D. M. 1996, , 462, 563
Navarro, J. F., Frenk, C.S. & White, S.D.M. 1997, , 490, 493
Peebles, P. J. E. 1965, , 142, 1317
Peebles, P. J. E. 1982, , 263, L1
Percival, W. J., et al. 2002, , 337, 1068
Press, W. H. & Schechter, P. 1974, , 187, 425
Pohlen, M., Martinez-Delgado, D., Majewski, S., Palma, C., Prada, F. & Balcells, M. 2004, to be published in the ASP proceedings of the “Satellites and Tidal Streams” conference, La Palma, Canary Islands, 26-30 May 2003, eds, F. Prada, D. Martinez-Delgado, T. Mahoney
Newberg, H. J. et al. 2003
Newberg, H. J. et al. 2002, , 569, 245
Plummer, H.C. 1911, , 71, 460
Reitzel, D. B., Guhathakurta, P., & Gould, A. 1998, , 116, 707
Rich, R. M. & McWilliam, A. 2000, , 4005, 150
Robertson, B., Hernquist, L., Bullock, J. S., Cox, T. J., Di Matteo, T., Springel, V., & Yoshida, N. 2005, ArXiv Astrophysics e-prints, astro-ph/0503369
Rocha-Pinto, H. J., Majewski, S. R., Skrutskie, M. F., & Crane, J. D. 2003, , 594, L115
Rocha-Pinto, H. J., Majewski, S. R., Skrutskie, M. F., Crane, J. D., & Patterson, R. J. 2004, , 615, 732
Sackett, P. D., Morrison, H. L., Harding, P., & Boroson, T. A. 1994, , 370, 441
Searle, L. & Zinn, R. 1978, , 225, 357
Shetrone, M., Venn, K. A., Tolstoy, E., Primas, F., Hill, V., & Kaufer, A. 2003, , 125, 684
Shang, Z. et al. 1998, , 504, L23
Shaviv, N. J., & Dekel, A. 2003, ArXiv Astrophysics e-prints, astro-ph/0305527
Siegel, M. H., Majewski, S. R., Reid, I. N., & Thompson, I. B. 2002, , 578, 151
Sigurdson, K., & Kamionkowski, M. 2004, Physical Review Letters, 92, 171302
Simon, J. D., Bolatto, A. D., Leroy, A., Blitz, L., & Gates, E. L. 2005, , 621, 757
Smecker-Hane, T., & McWilliam, A. 1999, ASP Conf. Ser. 192: Spectrophotometric Dating of Stars and Galaxies, 192, 150
Sommer-Larsen, J., Naselsky, P., Novikov, I., Gotz, M. 2004, , 352, 299
Somerville, R. S. & Kolatt, T. S. 1999, , 305, 1
Somerville, R. S., & Primack, J. R. 1999, , 310, 1087
Somerville, R. S. 2002, , 572, L23
Spergel, D. N. & Steinhardt, P. J. 2000, Physical Review Letters, 84, 3760
Spergel, D. N., et al. 2003, , 148, 175
Taylor, J. E. 2004, ArXiv Astrophysics e-prints, astro-ph/0411549
Tegmark, M., et al. 2004, , 69, 103501
Thoul, A. A., & Weinberg, D. H. 1996, , 465, 608
Tolstoy, E., Venn, K. A., Shetrone, M., Primas, F., Hill, V., Kaufer, A., & Szeifert, T. 2003, , 125, 707
Totten, E.J. & Irwin, M.J. 1998, , 294, 1
Unavane, M., Wyse, R. F. G., & Gilmore, G. 1996, , 278, 727
Velazquez, H. & White, S. D. M. 1995, , 275, L23
Venn, K. A., Irwin, M., Shetrone, M. D., Tout, C. A., Hill, V., & Tolstoy, E. 2004, , 128, 1177
Wechsler, R. H., Bullock, J. S., Primack, J. R., Kravtsov, A. V., & Dekel, A. 2002, , 568, 52
Wetterer, C. J. & McGraw, J. T. 1996, , 112, 1046
Willman, B., et al. 2005, ArXiv Astrophysics e-prints, arXiv:astro-ph/0503552
Willman, B., Governato, F., Dalcanton, J. J., Reed, D., & Quinn, T. 2004, , 353, 639
Willman, B., Dalcanton, J., Ivezić, Ž., Jackson, T., Lupton, R., Brinkmann, J., Hennessy, G., & Hindsley, R. 2002, , 123, 848
Yanny, B., Newberg, H. J., Grebel, E. K., Kent, S., Odenkirchen, M., Rockosi, C. M., Schlegel, D., Subbarao, M., Brinkmann, J., Fukugita, M., Ivezić, Ž., Lamb, D. Q., Schneider, D. P., & York, D. G. 2003, , 588, 824
Yanny, B. et al. 2000, , 540, 825
Zentner, A. R., & Bullock, J. S. 2003, , 598, 49
Zentner, A. R., & Bullock, J. S. 2002, , 66, 043003
Zentner, A. R., Berlind, A. A., Bullock, J. S., Kravtsov, A. V., & Wechsler, R. H. 2004, ArXiv Astrophysics e-prints, astro-ph/0411586
Zheng, Z. et al. 1999, , 117, 2757
Zinn, R. 1993, ASP Conf. Ser. 48: The Globular Cluster-Galaxy Connection, 38
Zucker, D. B., et al. 2004, , 612, L117
| Halo | \# satellites in merger tree | $a_c$ | Time of last $>$10% merger (Gyr) | \# satellites simulated | \# surviving satellites | Stellar halo luminosity ($10^9 L_\odot$) | % of halo from 15 largest satellites | 80% halo accretion time (Gyr) | 80% halo accumulation time (Gyr) | % of halo from surviving satellites |
|------|------|-------|------|-----------|----------|--------------|------|------|------|-------|
| Milky Way | — | — | 8-10? | — | 11 | $\sim 1$ | — | — | — | — |
| 1 | 391 | 0.375 | 8.3 | 115 (57) | 18 (18) | 1.2 (0.29) | 87% | 8.4 | 5.3 | 0.96% |
| 2 | 373 | 0.287 | 9.2 | 102 (45) | 6 (6) | 1.1 (0.35) | 87% | 8.6 | 7.0 | 0.03% |
| 3 | 322 | 0.388 | 8.9 | 106 (47) | 16 (15) | 0.95 (0.05) | 79% | 9.0 | 7.4 | 0.12% |
| 4 | 347 | 0.393 | 8.3 | 97 (32) | 8 (7) | 1.33 (0.14) | 91% | 8.3 | 6.3 | 0.40% |
| 5 | 512 | 0.214 | 10.8 | 160 (115) | 18 (18) | 0.68 (0.44) | 78% | 7.0 | 2.1 | 0.25% |
| 6 | 513 | 0.232 | 10.5 | 169 (68) | 16 (15) | 0.60 (0.24) | 77% | 8.6 | 6.2 | 0.01% |
| 7 | 361 | 0.385 | 7.4 | 102 (48) | 20 (18) | 0.70 (0.20) | 82% | 7.2 | 4.4 | 8.42% |
| 8 | 550 | 0.205 | 9.3 | 213 (62) | 13 (13) | 0.64 (0.201) | 80% | 8.8 | 7.1 | 2.55% |
| 9 | 535 | 0.187 | 10.0 | 182 (63) | 15 (15) | 0.85 (0.36) | 87% | 4.7 | 1.5 | 0.01% |
| 10 | 484 | 0.229 | 9.7 | 156 (76) | 13 (13) | 1.02 (0.65) | 80% | 6.7 | 2.9 | 0.04% |
| 11 | 512 | 0.230 | 9.0 | 153 (63) | 10 (10) | 0.84 (0.22) | 89% | 9.1 | 7.2 | 0.02% |
: Properties of our simulated stellar halos.
\[halo\_tab\]
[^1]: In most cases, we subsample our luminous particles so as to plot a single point for every $1000 \Lsun$. However, some particles in our simulations have luminosity weights greater than $1000 \Lsun$; in these cases we plot $L_{\rm particle}/1000$ points with the same $V_r$ and $r$ as the relevant particle, using small random offsets to give the effect of a “bigger” point on the plot.